Hegseth to Anthropic: Nice company you got there…
-
No, you're not really learning. But I'm patient. You may not have understood my previous post; feel free to read it again. The difference between the version Anthropic wants to provide and the version the DoD wants it to provide is not a categorical "can or cannot be used for illegal surveillance." This is an important point. The DoD only wants imperfect guardrails removed.
It is simply and objectively false to think that the version Anthropic would like to provide will perfectly prevent itself from being used in "illegal" surveillance while allowing itself to be used in legal circumstances. The DoD is demanding that the imperfect guardrails not be a potential impediment to its legal uses. That is explicitly its demand. Yes, your framing is tribal and tortured.
-
Didn't we have a Claude member here once? Maybe he can weigh in. If not, @klaus is as close as we get.
@89th said in Hegseth to Anthropic: Nice company you got there…:
Didn't we have a Claude member here once? Maybe he can weigh in. If not, @klaus is as close as we get.
That’s going back a while. Yeah, I think his complete handle was Claude Balls.
I just assumed it was another one of the late Larry’s numerous fun sock puppets.
-
@89th said in Hegseth to Anthropic: Nice company you got there…:
cause Xai is Gai
And Gai means "chicken."
https://www.gainyc.com/
-
I don't think there is a meaningful distinction between different "versions" of a large AI model. It costs far too much for a company to run parallel trainings to develop two significantly different "versions."
If the underlying AI model is the same, then putting different "guardrails" around the model gives little confidence that a tech-savvy user (like the Pentagon) won't have the capability to get around those guardrails.
It sounds to me like Anthropic wants the Pentagon to promise (as a matter of contract) not to use its product for certain purposes, rather than trying to withhold a certain "version" of its product.
-
https://www.washingtonpost.com/opinions/2026/02/26/hegseth-anthropic-ai-model-claude/
Hegseth wants Anthropic to modify its contract to allow “any lawful use” of the technology. Anthropic is willing to rewrite its current terms of use but not to include mass surveillance of Americans or accommodate weapons that operate without a person in the loop to make the final decision.
It seems WaPo's Editorial Board reads the situation as I do: it's a contractual "terms of use" issue, not a "product version" issue.
-
-
OpenAI is in negotiations with the DoD to take over for Anthropic. I hope the "supply chain risk" threat is just negotiation hot air. I suspect it will prove to be, but who knows.
In the letter, Anthropic acknowledges that the law here is currently ambiguous to non-existent. Its concerns are ethical rather than (currently) legal, and I think the DoD is well within its duties to find a vendor who will not impose its own ethical constraints.