Hegseth to Anthropic: Nice company you got there…
-
So they’re not insisting that the company provide the capability, they’re insisting that they not NOT provide the capability.
Ok glad we’ve cleared that up.
Shame on me for the tortured framing.
@jon-nyc said in Hegseth to Anthropic: Nice company you got there…:
So they’re not insisting that the company provide the capability, they’re insisting that they not NOT provide it.
Ok glad we’ve cleared that up.
Shame on me for the tortured framing.
It's incoherent to say that the difference between the version of AI Anthropic would like to provide and the version the DoD wants is that one has the ability to be used illegally and the other does not. No such categorical separation exists. The DoD is insisting on the absence of imperfect guardrails, literally. That is not the same as "insisting on the ability to break the law." If all they wanted was the ability, they could use the version Anthropic suggests. They are insisting that Anthropic not be in the loop regarding whether something is legal or illegal. They have pledged to follow the law, such as it is.
-
Ok, so let’s leave the legality to the future lawyers since it depends on actual use.
For tomorrow’s deadline, they’re insisting that Anthropic NOT NOT provide a certain capability. Which is different from insisting they DO provide that same capability. In fact, the latter is dishonest tribal rhetoric.
Ok, I’m learning. Don’t give up on me yet.
-
No, you're not really learning. But I'm patient. You may not have understood my previous post; feel free to read it again. The difference between the version Anthropic wants to provide and the version the DoD wants them to provide is not one of categorical "can or cannot be used for illegal surveillance." This is an important point. The DoD only wants imperfect guardrails removed.
It is simply and objectively false to think that the version Anthropic would like to provide would perfectly prevent its own use in "illegal" surveillance while allowing itself to be used in legal circumstances. The DoD is demanding that the imperfect guardrails not be a potential impediment to its legal uses. That is explicitly their demand. Yes, your framing is tribal and tortured.
-
Didn't we have a Claude member here once? Maybe he can weigh in. If not, @klaus is as close as we get.
@89th said in Hegseth to Anthropic: Nice company you got there…:
Didn't we have a Claude member here once? Maybe he can weigh in. If not, @klaus is as close as we get.
That’s going back a while. Yeah, I think his complete handle was Claude Balls.
I just assumed it was another one of the late Larry’s numerous fun sock puppets.
-
@89th said in Hegseth to Anthropic: Nice company you got there…:
cause Xai is Gai
And Gai means "chicken."
https://www.gainyc.com/
-
I don't think there is a meaningful distinction between different "versions" of a large AI model. It costs way too much for a company to perform parallel training runs to develop two significantly different "versions."
If the underlying AI model is the same, then putting different "guardrails" around the model gives little confidence that a tech-savvy user (like the Pentagon) won't have the capability to get around those guardrails.
It sounds to me like Anthropic wants the Pentagon to promise (as a matter of contract) not to use its product for certain purposes, rather than trying not to provide a certain "version" of its product.
-
https://www.washingtonpost.com/opinions/2026/02/26/hegseth-anthropic-ai-model-claude/
Hegseth wants Anthropic to modify its contract to allow “any lawful use” of the technology. Anthropic is willing to rewrite its current terms of use but not to include mass surveillance of Americans or accommodate weapons that operate without a person in the loop to make the final decision.
It seems WaPo's Editorial Board reads the situation as I do: it's a contractual "terms of use" issue, not a "product version" issue.
-
OpenAI is in negotiations with the DoD to take over for Anthropic. I hope the "supply chain risk" threat is just negotiation hot air. I suspect it will prove to be, but who knows.
In the letter, Anthropic acknowledges that the law is currently ambiguous to non-existent. Their concerns are ethical rather than (currently) legal, and I think the DoD is well within its duties to find a vendor who will not impose its own ethical constraints.