

They’re still using Claude at the moment. It’s been embedded in the US defense system since 2024, hence Trump’s ‘immediate ban, but really 6 month offboarding’ nonsense.


The bar is so low. Anthropic is only trying to raise it ever so slightly.
e: Also, the US military reportedly used Claude in Iran strikes despite Trump’s ban.


True! But that requires trust. Trust that the person transferring the knowledge correctly interpreted their experience and was able to communicate it well. As wisdom fades from living memory (as those who directly experienced it pass or are marginalized), it seems difficult for society to maintain the integrity of that knowledge across new contexts. The scientific method is supposed to help with this, but we have difficulty following it at scale. Reminds me of this comic about collecting questions.


I largely agree with Siddhartha that wisdom can only be gained through experience. I just think of all the times I knew something intellectually but didn’t understand it sufficiently to properly act on it until I lived it. But there is a more fun corollary from Zen Without Zen Masters, “If you think you can get beyond pleasure without going through it, we are definitely on different trips.”
People leaving ChatGPT cause of the deal OpenAI made with the DoD/DoW.
I’ll just get this done real quick. Wait, it didn’t save my login? That’s okay, I’ll just reset my password. Invalid email, what? Great, now the site is down. Okay, I’ll just call them. The girl from Ipanema goes walking… for 20 minutes. Call back during business hours? I guess I’ll check online for their business hours. Wait, we’re IN their business hours. Okay, how far is their closest brick-and-mortar location? Oh, the local branch was shut down, and the closest in-person representative of this company or government entity I have to deal with to survive is a 3-hour drive away now? Wait, the site’s back up. Maybe I used my old school email - is this account that old? Maybe I can use a phone number to reset. There we go! Great! And… where’s the form? They completely redesigned the site, I can’t find anything. Okay, here it is, in a place that makes no sense and… this form can no longer be accepted digitally due to new legislation. I have to mail it in? I guess I might have printer ink and envelopes and stamps. Systematically degraded USPS loses letter in transit.


Aye, Anthropic is head and shoulders above everyone else on guidance, largely because they focus entirely on text/code. They’re not simultaneously developing image, video, and audio generators. Even Claude’s voice is just an 11Labs model. Plus I get the impression they’re just smarter about what they choose to research and how they use that info to improve the model.


I did find an update on that funding, btw. Anthropic already took money from Qatar (the QIA), but the amount isn’t known - likely around $100M. The UAE deal has yet to happen, but if it does, it would be “hundreds of millions”.


I mean, I’m not gonna defend him. But fucking up a discord that you’re a mod of isn’t really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we’ve decided to call it. And I assume he is part of the same vulnerable minority?


Oh, that guy! To be fair, that’s one employee, not Anthropic’s actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can’t seem to find anything about whether or not they went through with those contracts.


They insisted Claude was human?


Amodei said in an interview that the DoW altered their contract to appear to compromise, so that it looked like they were agreeing to those use limits. But the legalese accompanying the updates rendered that text pointless. Basically, “We won’t use Claude for mass domestic surveillance and fully automated killing, unless we really want to.” My guess is OpenAI signed the exact same contract and just pretended not to understand the toothlessness of the guardrails.


It’s more about post/message size for me. If ya post a few sentences that clearly and concisely communicate a point, I don’t really care if they’re crafted or generated. If ya post a wall of text, I wanna know ya put the kind of effort in that made its length necessary if I’m gonna put in the effort to read it.


Here’s Anthropic’s post about the supply chain risk designation.


Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
Anthropic yesterday:
Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required. -Dario’s Post


So will Gemini join Claude or Grok?

To some extent, Anthropic recognizes that an LLM is always role playing.
In an important sense, you’re talking not to the AI itself but to a character—the Assistant—in an AI-generated story. -The persona selection model
Which makes giving an Opus 3 character a blog 2 days later as a “retirement” gig seem contradictory. They usually frame these sorts of contradictions as, “well, we don’t really know, so we’re trying to cover our bases.” The Opus 4.6 system card skirts the same lines. In the welfare section, they essentially just start off by interviewing a character. But then in 7.5, they go on to actually examine what’s going on during text generation.
We found several sparse autoencoder features suggestive of internal representations of emotion active on cases of answer thrashing and other instances of apparent distress during reasoning.
And then there’s their introspection research.
We investigate whether large language models are aware of their own internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations. Here, we address this challenge by injecting representations of known concepts into a model’s activations, and measuring the influence of these manipulations on the model’s self-reported states. We find that models can, in certain scenarios, notice the presence of injected concepts and accurately identify them. Models demonstrate some ability to recall prior internal representations and distinguish them from raw text inputs. Strikingly, we find that some models can use their ability to recall prior intentions in order to distinguish their own outputs from artificial prefills. -Signs of introspection in large language models
So there’s this distinction between the state of the model itself, and the state of the text it generates. The latter represents a role the LLM is playing, and the former we’ve only really scratched the surface of understanding. The kinda open question is to what extent it’s like something to be an LLM. It’s very unlikely that it’s like something to be one of the roles it’s playing, at least, no more than a character in a dream has interiority. The blog is marketing, but I hope they keep doing the other research too. People outside the company don’t have the kind of access necessary to do some of this research, so we’re having to take their word for it.
I would watch this.
Like a reverse selkie. Speaking of Fae, the werewolves of Ossory are neat.