I was already inclined to see excessive wealth accumulation as a symptom of sociopathy, and the apparently high rate of eugenicist ideation and sex trafficking among billionaires certainly hasn't changed my mind.
When I see discussion about "AI for the people" or "community-led AI" or "public benefit AI," the focus is almost always on tooling and infrastructure.
But I don't think you can have any of those things without seriously rethinking the purposes and design of AI.
It's quite possible that LLMs and image generators — which were designed to serve profit-led industries — are simply at odds with community and public benefit. So anyone looking to build ethical and communal AI needs to start not with questions like "How do we democratize existing services?" but rather "What sort of AI would serve the needs of actual communities and societies?"
The same applies to decentralized social media, by the way. "How do we democratize existing services?" is the wrong question, because the existing services were tailored to the task of extracting value from people's communications with one another. Decentralization and democratization alone will not remedy those harms. To make genuinely beneficial social media services, we have to start by asking what would benefit communities and society at large, then consciously build toward that.
Vibe coding is flooding the world with bespoke programs riddled with security flaws, memory leaks, and sleeper bugs that no one will catch until the damage is already done.
Oh, and small business owners, trying to vibe-code their way around clunky or expensive software while also navigating the rules and regulations that govern their business? There's going to be a lot of flat-out illegal software out there, solely operated by the people who conjured it without understanding how it works or even what standards apply.
DDoS'ing good sources is a bonus for AI companies using bots to scrape those sources. It creates an information scarcity that increases their value as a vendor of the information they're preventing you from accessing directly.
You ever see the Looney Tunes bit where Marvin the Martian shoots Bugs with an evolution ray, but Bugs devolves because he "had the silly thing in reverse"?
That's my prediction for agentic AI. To deliver on the hype, companies are going to put out products that resemble AI, but actually operate more like traditional programs, because the "devolved" version is the one that will (mostly) work.
A still of Marvin the Martian holding the evolution gun, facing a hulking devolved version of Bugs Bunny with a Neanderthal-like brow.
Here's some background on that Rob Pike email: https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/ and if anything, Willison underplays how much is wrong with this. The directive to "raise as much money for charity as you can" is effectively indistinguishable from "scam as much money out of people as you can," but the association with charity deflects criticism. Note that the AI is using exploits to achieve its goals, unredacting email addresses and deploying psychophancy against its targets.
So far, the project has raised just over $1,500 for Helen Keller International, which sounds laudable until you consider that AI Village has been running four agents for hours a day every day since the first week of April. Taking whatever they spent on nine months of tokens, electricity and compute, and giving that money directly to HKI, would almost certainly have been more effective.
Willison: "My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment."
Agreed! And I'll add: Pretty much anything you do with AI in the real world is going to run afoul of that principle. When I have to parse a work email that someone delegated to Copilot, they're wasting my time by offloading their part in the work of communication not just onto an LLM, but also onto me. "Don't waste the time of people who didn't consent to AI interaction" is a sound principle, but "AI moderates" aren't applying it consistently.
"Crediting the emails to 'Claude Opus 4.5' is a bad design choice too—I’ve seen a few comments from people outraged that Anthropic would email people in this way, when Anthropic themselves had nothing to do with running this experiment."
Nah, dude. If Anthropic provided the service that generated this psy-op trash, then they absolutely deserve the credit. They may not have planned the "experiment," but they've built their service to facilitate nearly every wrong thing this project does, and people have every right to give them a portion of the blame. The problem with the attribution is not that it identifies Claude's involvement, but that it masks the identities of the other parties involved.
idk, maybe don't flip the switch that gives access to your ebook reading to companies that train on books without permission and provide surveillance services to authoritarian governments. https://calibre-ebook.com/whats-new
Over time, the public library system has developed a rigorous set of ethics and practices concerning patron privacy and confidentiality, because librarians have seen first hand what can be done with information about what a person reads and what information they seek out, and here's the digital reading community plugging their personal libraries directly into surveillance dragnet services.
There was no real market rationale for including AI in Calibre. It's not like people were going to switch to a competitor if it didn't have AI integration, because Calibre effectively has no competitors. It's the one name that always crops up when you ask how to do something even moderately complicated with ebooks. And now people are actively looking for alternatives.
The thing to understand about Garfield is that he is a cat, and thus is not subject to the strictures of the 9-5 work week, so his hatred of Mondays represents not the reality of capitalist labor, but its internalization as an ideology, even in the absence of the external pressures that condition human labor. Likewise, cats lack the physiology that would make regularly digesting lasagna tolerable; it is consumption as a mode, rather than lasagna as a substance, that compels him.
Raising the cost of computer components only looks like an unintended consequence of the AI boom if you don't assume the goal of the industry is to sell you all of your computation as a service.
You don't need those parts if you're just going to ask ChatGPT to do everything for you. And if you can't afford a capable computer, then you have no choice but to ask ChatGPT.
My position on using LLMs for alt text is that alt text is, in large part, an expression of care, both about visually impaired people and the image you're posting, and outsourcing that task to an LLM is a diminution of care.
Writing your own alt text is also an exercise in contemplation and self-understanding. It requires you to think about what appeals to you or interests you in the image you're posting and why you're posting it. Sometimes, the answers will take you by surprise or add nuance to what you intended.
Interesting-looking book coming out next week, focusing on smaller but still substantial empires that form on the periphery of hegemonic empires, and how they can become hegemonic themselves later.
In other words, a book for those who are at the PLEASE NO MORE ABOUT ROME OR CHINA stage of their self-education. (Though the Qing's evolution is apparently one of the examples.)
Something about "AI-enabled RSS reader" irks me more than AI-enabled versions of most software. You've taken a thoroughly open, efficient, specification-based process for social communication and plugged it into the torment nexus.
It helps to understand that LLMs are basically machine learning programs for finding statistical relationships in language use. It's an advanced form of the sort of ML that digital humanities researchers have used to, say, chart the incidence of sexist language in English literature over time. So there are logical uses for LLMs, but a) they're mostly limited to the study of language, and b) they don't necessarily justify the costs piling up around commercial startups like OpenAI and Anthropic.
In terms of cost, consider CERN's Large Hadron Collider, which had a budget of €7.5 billion (about $8.7 billion) and was built to explore some of physics' most fundamental questions. OpenAI's Stargate project alone is expected to cost $500 billion, and it's only one of dozens, if not hundreds, of planned AI megaprojects. Of course, not all of that is devoted solely to LLM processing, but still, the investment that we, as a society, are pouring into LLMs is a more than fiftyfold increase on what we put into what was previously the largest and most expensive research tool ever made, and mostly in order to misuse them.
If you want to know how the word "impactful" spread through written English over time, LLMs can help. They can help with that because it's a specifically linguistic question, but it's a pretty big leap from there to other types of knowledge. And LLM-based AI is being pitched to businesses and the general public not as a tool for conducting linguistic research, but as a universal knowledge machine. And that's where a great many of the technology's problems begin.
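To make that concrete, here's a minimal sketch of the kind of word-frequency-over-time question this class of statistical tooling is actually suited to. The corpus, years, and numbers below are invented placeholders; a real study would run over a dated text collection.

```python
# A minimal, hypothetical sketch: charting how often a word appears in a dated
# corpus, per decade, normalized per 1,000 tokens. The corpus is a placeholder.
from collections import defaultdict
import re

corpus = [
    (1990, "The report was useful and well received."),
    (2005, "An impactful campaign, by most accounts."),
    (2020, "Impactful messaging drives impactful outcomes."),
]

def frequency_by_decade(docs, word):
    """Return occurrences of `word` per 1,000 tokens, grouped by decade."""
    counts, totals = defaultdict(int), defaultdict(int)
    for year, text in docs:
        decade = (year // 10) * 10
        tokens = re.findall(r"[a-z']+", text.lower())
        totals[decade] += len(tokens)
        counts[decade] += sum(1 for t in tokens if t == word)
    return {d: 1000 * counts[d] / totals[d] for d in sorted(totals)}

print(frequency_by_decade(corpus, "impactful"))
# roughly {1990: 0.0, 2000: 166.7, 2020: 400.0} on this toy corpus
```

That's the whole trick: counting how language is used. It tells you about the word, not about the world.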
The positioning of LLM-based AI as a universal knowledge machine implies some pretty dubious epistemic premises, e.g. that the components of new knowledge are already encoded in language, and that the essential method for uncovering that knowledge is statistical.
Maybe no one in the field would explicitly claim those premises, but they're built into how the technology is being pitched to consumers.
One question that's worth considering is: In what context would Bluesky-style Starter Packs work best?
They work in Bluesky in part because that network is structured to give every new account global reach by default.
But imagine that you join a Mastodon server and subscribe to a Starter Pack, only to find that 95% of the accounts it collects are hosted on a defederated server. Is that helpful?
My suspicion is that the single context where fediverse Starter Packs would work best is the orbit around mastodon.social.
@hallvors But only if those active posters are accessible from the server you're on. And in the context you're describing, federation becomes an even bigger wedge for determining who is and is not accessible.
That's one of my big cautions about Bluesky-style starter packs: They don't really account for federation. I'm not sure there's any way they could be made to do so.
@mastodonmigration Anyone who loads the Z pack now follows all of the people… on servers that are federated with theirs.
And on some packs, that will be fine. If you're on the same server as user A, that Venn diagram might well be a circle. If I follow a pack from a Merveilles user, then I shouldn't see any broken connections. But the devs are insisting that Starter Packs must be federated so that people on different servers can share lists. And I just don't see any way that can work as smoothly here as it does on a mediated network like Bluesky.
Which is part of why I say that the context Starter Packs serve best is mastodon.social — a hundred thousand or so people who barely see the effects of federation because they're federated to all the same servers as most of the people that they already follow… because they're all on the same server.
@mastodonmigration I think that could be true, depending on how they're implemented. (And assuming I understand your meaning.) Right now, it looks like the Mastodon devs are planning to have packs federate the same way that messages do, and I expect that the only way to keep that from being an exercise in frustration will be to only display accounts that are visible to both the pack creator and the user installing the pack. Presumably, that would also curb some potential safety issues, but I suppose it could open others. I'm still wrapping my head around how this could work in a genuinely federated context.
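For what it's worth, here's a hypothetical sketch of that visibility filter, just to illustrate the idea; the data structures are invented and don't reflect Mastodon's actual implementation or the devs' plans.

```python
# Hypothetical sketch: show only the pack accounts that both the pack creator's
# server and the installer's server can actually reach. Invented structures,
# not Mastodon's real data model.
from dataclasses import dataclass, field

@dataclass
class Server:
    domain: str
    blocked_domains: set = field(default_factory=set)  # defederated servers

    def can_see(self, account: str) -> bool:
        return account.split("@")[-1] not in self.blocked_domains

def visible_pack(pack_accounts, creator_server, installer_server):
    """Keep only accounts reachable from both ends of the share."""
    return [a for a in pack_accounts
            if creator_server.can_see(a) and installer_server.can_see(a)]

creator = Server("merveilles.town")
installer = Server("example.social", blocked_domains={"spamhub.example"})
pack = ["alice@merveilles.town", "bob@spamhub.example", "cleo@cool.example"]
print(visible_pack(pack, creator, installer))
# ['alice@merveilles.town', 'cleo@cool.example']
```

The catch, of course, is that the more aggressively two servers' block lists diverge, the less of the pack survives the filter.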
Bluesky is struggling to navigate demands for better moderation while remaining true to the project's initial vision of a social network where moderation doesn't cut into profits.
Jack Dorsey initiated the Bluesky project during a period when he and other social media operators were constantly getting called in to testify to Congress about moderation practices on their platforms. His embrace of decentralization is explicable in part as a way of foisting liability off on third parties. Early plans saw Twitter eventually moving onto the protocol Bluesky was building, making a central question of the project: How do you turn a profit on a decentralized network?
Much of the way that ATproto structures the network can be understood as an answer to that question. Running all activity on the network through a Relay facilitates data harvesting at scale, replicating Twitter's enclosure of all data on its network. Separating labeling and algorithmic feeds into separate services offloads moderation tasks to third parties. "Composable moderation" replaces actual moderation with a safety paradigm that shifts responsibility onto the end user.
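As an illustration of where that shift lands, here's a hypothetical sketch of composable moderation as a purely client-side decision; the function names and structures are invented for illustration and are not atproto's actual API.

```python
# Illustrative sketch of "composable moderation" as a client-side concern:
# the post flows through the network regardless; whether it's shown depends
# on which third-party labelers the user subscribes to and how their client
# is configured. Invented names, not atproto's real API.

def labels_for(post_uri, labelers):
    """Collect labels applied to a post by the user's chosen labeler services."""
    return {label for labeler in labelers for label in labeler.get(post_uri, set())}

def visible(post_uri, labelers, hidden_labels):
    """The hide/show decision happens on the end user's client, not the platform."""
    return not (labels_for(post_uri, labelers) & hidden_labels)

# Two hypothetical third-party labeler services, keyed by post URI.
labeler_a = {"at://example/post/1": {"harassment"}}
labeler_b = {"at://example/post/2": {"spam"}}

user_settings = {"harassment"}  # what this particular user opts to hide
print(visible("at://example/post/1", [labeler_a, labeler_b], user_settings))  # False
print(visible("at://example/post/3", [labeler_a, labeler_b], user_settings))  # True
```

Nothing in that arrangement obliges the operator to remove anything; the work and the judgment sit with whoever runs the labelers and whoever tunes their own client.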
The value proposition of Dorsey's project was that Twitter would give up its monopoly on the data generated by Twitter users, but spend less on keeping the network safe. Early on, this was expressed as "separating the speech and reach layers," which you could translate as "decentralizing moderation, centralizing profit." As long as the money lost to operating on a more open network was less than the money saved by not moderating, partial decentralization would be a net win for the company.
The form taken by most of Bluesky's concessions to moderation can be explained by their adherence to that initial aim. For example, they police CSAM and DMCA violations at a very high level because those are content types where legal liability would be difficult to avoid. The tendency with other forms of abuse has been decentralization of responsibility, providing tools for third parties to do the work. Confronted with the need for first-party moderation — e.g. preventing users from registering names with racial slurs to the Bluesky-operated PLC — they've generally resisted unless forced, either by bad press or investor intervention. This is all consistent with a mission to externalize the costs of safety.
In some ways, embracing high-profile bad actors is a proof-of-concept for this system, demonstrating the extent to which they've successfully separated speech from reach. In theory, user concerns should be allayed by third-party moderation services, without diminishing the company's potential to harvest data from those accounts and their followers, which can then be converted into profit. Their incentive for heeding calls to moderate those accounts more directly is pretty low.
The Big Rule is not arbitrary. It's not just good vibes. The fediverse exists because of the Big Rule. That there's a fediverse at all is a consequence of people helping others be on the fediverse. The success of a decentralized, volunteer-operated social media network depends on everyone approaching it as a mutual aid project. Our biggest failures are almost all examples of not having helped others be here, too. We're trying harder this time.