It's not either of those. Anthropic put a lot of effort into getting FedRAMP approved so the DOD could use them; they are now being punished for that, and the government at present has no other good options. Other options could of course be developed, but other vendors may question how unreliable and untrustworthy the current DOD leadership is as a customer.
In the US, elections cannot be canceled even when martial law is declared. That does not mean a certain someone will not try to simply ignore the Constitution, given his track record of doing exactly that.
The US President in 1944 was someone who wanted to have elections. In 2026 this is not the case anymore. How much of a difference it makes, nobody knows.
... signal a particular vice. It's vice signalling. We generally think of war as bad and try to avoid it, most especially the people tasked with fighting said wars.
Nothing has changed about the performative-ness, in fact if anything it's gotten more performative and hollow. They just signal vices rather than virtues, so a bunch of rightist-flavored-Lenin's useful idiots think it is fresh or effective or anti-"woke" or at least different.
Anthropic already went through the process of getting approved to work on secure networks. (I think xAI may have as well, but the others just don't have that access.)
WaPo is reporting that OpenAI and xAI already agreed to the Pentagon's "any lawful use" clause, aka, mass surveillance and fully autonomous killbots. From the WaPo article https://archive.is/yz6JA#selection-435.42-435.355
> Officials say other leading AI firms have gone along with the demand. OpenAI, the maker of ChatGPT, Google and Elon Musk’s xAI have agreed to allow the Pentagon to use their systems for “all lawful purposes” on unclassified networks, a Defense official said, and are working on agreements for classified networks.
The only difference is simply that Anthropic is already approved for use on classified networks, whereas Grok and OpenAI are not yet (but are being fast-tracked for approval, especially Grok). Edit: Note someone below pointed out that OpenAI may be approved for Secret level, so it's odd that Washington Post reports that they are working on it still.
I keep hearing this but it should be plainly obvious to everyone (at least here) that an LLM is not the right AI for this use case. That's like trying to use ChatGPT for an airplane autopilot; it doesn't make sense. Other ML models might be, but not an LLM. Why does the "autonomous killbot" thing keep getting brought up when discussing Anthropic and other LLM providers?
For reference, "autonomous killbots" are in use right now in the Ukraine/Russia war and they run on fpv drones, not acres of GPUs. Also, it should be obvious that there's a >90% probability every predator/reaper drone has had an autonomous kill mode for probably a decade now. Maybe it's never been used in warfare, that we know of, but to think it doesn't exist already is bonkers.
Either Anthropic is seen as the clear leader (it certainly is for coding agents) or this is a political stunt to stamp out any opposition to the administration. Or both.
My belief is they are terrified of China and this seems evident when you take into account the moves they're making with Venezuela, Iran, and the increased adoption of authoritarian tactics. We're trying to play catch-up with China's rapid rise as a super-power and the AI infrastructure is one of the few major developments we still have control over, for now. I sympathize with Dario, he's stuck in a very bad position on this. We do not want China to operate on this level while we sit back with one hand tied behind our backs. On the other hand, this administration is making extremely poor decisions and arguably causing extensive harm domestically and internationally, so it's a lose-lose situation for Dario really.
On the one hand it's fantastic that people are resisting and, if nothing else, raising awareness and buying time.
On the other hand, is autonomous war not obviously the endgame, given how quickly capabilities are increasing and that it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?
It just needs one player to do it, so everyone has to be able to do it. I'd love to hear a different scenario.
Yes - Anthropic _does_ incur business risk if their products are misused and this becomes a scandal. Legally the government may be in the clear to use the product, but that doesn’t mean Anthropic’s business is protected. Moral concerns aside, it’s their prerogative to decide not to take on a customer that may misuse their product in a way that might incur reputational harm.
Or it was their prerogative, until the Trump administration. Now even private companies must bend the knee.
> It just needs one player to do it, so everyone has to be able to do it. I'd love to hear a different scenario.
Other players just need to assume that one player might do it in the future. This virtual future scenario has a causal effect on the now. The overall dynamic is that of an arms race (which radically changes what a player is).
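The arms-race dynamic described here is the textbook two-player game in which building dominates restraint; a minimal sketch of that logic (the payoff numbers below are purely illustrative assumptions, not anyone's actual model):

```python
# Two players each choose "abstain" or "build" autonomous weapons.
# Payoffs are (row player, column player); numbers are illustrative only.
payoffs = {
    ("abstain", "abstain"): (3, 3),  # best joint outcome: no arms race
    ("abstain", "build"):   (0, 4),  # unilateral restraint gets exploited
    ("build",   "abstain"): (4, 0),
    ("build",   "build"):   (1, 1),  # mutual build: costly equilibrium
}

def best_response(opponent_move):
    """Row player's best reply to a fixed opponent move."""
    return max(["abstain", "build"],
               key=lambda m: payoffs[(m, opponent_move)][0])

# "build" is the best reply no matter what the other player does, so
# mutual build is the only stable outcome even though both players
# prefer mutual abstention -- the "everyone has to be able to do it"
# result, driven purely by what each side assumes the other MIGHT do.
assert best_response("abstain") == "build"
assert best_response("build") == "build"
```

Under these (assumed) payoffs the anticipated future choice of the other player, not any actual deployment, is what forces the present-day decision.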
That part isn’t actually clear. If China invents autonomous drones instead of us and they fuck it up they’ll kill their people.
Things like Scout AI's Fury system are still human-in-the-loop, and for something that could just as easily make a mistake and target your own troops, I think it's not yet clear that full auto is the way to go https://scoutco.ai/
Human in the loop okaying a full auto seems like it could work almost all the way. And then we count on geography. If they want to spray out a bunch of autonomous drones into our territory they do have to fly here to do it first or plant them prior in shipping containers. Better we aim at stopping that.
> it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?
I could not disagree more. A big part of that is also knowing when NOT to pull the trigger. And it’s much harder than you’d think. If you think full self driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.
We have fully autonomous weapons, and had them for over a century. We call them "landmines".
I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.
The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.
"Since the end of the Vietnam War in 1975, unexploded ordnance (UXO)—including landmines, cluster bombs, and artillery shells—has killed over 40,000 people and injured or maimed more than 60,000 others." - Google AI Overview "How many children were maimed by landmines after the vietnam war"
I guess by that definition, a bullet is also autonomous. It will strike anything in its path of flight, autonomously without further direction from the operator.
If anything represents the logical conclusion of that tired fallacy, it'll be actually autonomous, "thinking" drones which make the targeting decisions and execution decisions on their own, not based on any direct, human-led orders, but derived from second-order effects of their neural net. At a certain point, it's not going to matter who launched the drones, or even who wrote the software that runs on the drones. If we're letting the drones decide things, it'll just be up to the drones, and I don't love our chances making our case to them.
The big asterisk in what you're saying is, like self driving cars, it's hardest when you want to be the most precise and limit the downsides. In this paradigm, false positives and false negatives have a very big cost.
If you simply wanted to cause havoc and destruction with no regard for collateral damage then the problem space is much more simple since you only need enough true positives to be effective at your mission.
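That asymmetry can be made concrete with a toy detection-threshold sketch (the scores and labels below are entirely made up): a mission indifferent to collateral damage succeeds at almost any threshold, while a precision-constrained one has to give up most of its true positives.

```python
# Toy classifier outputs: (confidence score, is_real_target).
# Entirely fabricated data for illustration.
detections = [(0.95, True), (0.9, False), (0.8, True), (0.7, False),
              (0.6, True), (0.5, False), (0.4, True), (0.3, False)]

def stats(threshold):
    """Count true and false positives among detections that fire."""
    fired = [is_target for score, is_target in detections
             if score >= threshold]
    tp = sum(fired)            # real targets hit
    fp = len(fired) - tp       # collateral damage
    return tp, fp

# Indiscriminate mission: fire on everything; false positives are
# simply absorbed as acceptable collateral, so the mission "works".
assert stats(0.3) == (4, 4)

# Precision-constrained mission: driving false positives to zero
# here costs three of the four real targets.
assert stats(0.95) == (1, 0)
```

The hard engineering problem is the second regime, which is exactly where self-driving-style difficulty lives.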
The ability to code with AI has shown that it requires an even higher level of responsibility and discipline than before to get good results without out-of-control downsides. I think the ability to kill with AI would be the same, but even more severe.
> A big part of that is also knowing when NOT to pull the trigger
"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"
I can't believe how many people take the anthropic statement at face value. You need to concentrate on what they are implicitly acknowledging. They will spy on non us citizens. How philanthropic
edit: how about the downvoters give a counterargument instead of trying to bury this comment?
The challenge for Americans is: can the political work of defining and protecting our values be outsourced to a company like Anthropic?
Anthropic (and others), whether due to financial, regulatory, or competitive pressure, will at some point permit their products to be used for any lawful purpose, even if they attempt to restrict certain uses today. That arrangement is unlikely to hold.
Americans should vote for the right candidates and elect leaders who will carry and defend their views. I don't think there is any other way.
I'm not trying to have a cynical hot take, but the political class seems not to offer up any candidates that carry or defend my views and the path to these positions requires funding and resources I will never have access to.
The situation in the United States, right now, seems genuinely hopeless. And I'm certain I'm not the only person who feels this way.
What is there to do besides resign myself to what's coming and try my best to ignore the bullshit?
Pathetic but not unexpected that people are willing to tolerate such a use of AI. However, this is the most principled statement on the military use of AI that a frontier model leadership suite has released. It's also full of "china bad" sentiment that's worth picking over, but in a political and economic paradigm that expects & rewards blatant corruption, this statement still stands out.
It does look good on them, coming just one day after they agreed to drop their AI safety net, which is a de facto abdication to the DoD. Their article is just (masterful) damage control.
I don't have a lot of hope here. When most of the creme de la creme of the billionaire class capitulated to Trump at the beginning of his term, that set the tone for everything that followed IMO. It's astounding to me that so many are willing to see him trample on the Constitution and separation of powers when they'd scream like stuck pigs if any other party attempted it. And that's the way a lot of influential Americans like it I guess. Like I said, not a lot of hope. YMMV.
So the cornerstone of one of the most common types of scam, affinity fraud, as well as a cornerstone of salesmanship, is convincing an audience that you're just like them. You have the same likes and dislikes, the same hobbies, the same cultural references, the same beliefs and values and hopes and dreams.
And then you use that affinity to manipulate them, to get them to do what you want, to get them to give you money.
I think the tech worker / engineering / online crowd has really let themselves get duped.
Sure, maybe some tech billionaires did start out in a similar place as many of us.
But a lot of what they tell us as part of selling us their brand is just affinity fraud, telling us they're just like us with the same values of privacy and open source and some hippie notion of peace, love and understanding.
But it's just a trick, and they just want money, power and fame.
It's not so much as the billionaires capitulating, it's that they never were the people they pretended to be, and keeping up the act is no longer how they get what they want.
I basically agree here, but I would add that the framing here can sometimes be better described as “extortion”. Politicians have tremendous power and influence over many industries. I’ve seen the inside of a few situations where the politicians framed themselves as “taking on big business” while behind closed doors they were 100% calling the shots and handing executives directives on what they could or could not say publicly. The companies had no choice but to play along. When I see a big company take exactly the same public position as the current regulatory regime or administration in power, I don’t assume that they necessarily have any choice in the matter.
Benito Mussolini: 'Fascism should rightly be called Corporatism, as it is the merger of corporate and government power.'
That is the reason why they would cry if the other party broke the rules to this degree. The other party is more aligned with regulations; taking power from corporations instead of giving it to them.
> The other party is more aligned with regulations; taking power from corporations instead of giving it to them.
Enough regulation is good, not enough and too much are both bad. Neither party has the best plan when it comes to regulation, Republicans want too little (increasing corporate power), Democrats want too much (increasing government power).
More aligned? Sure. Pretty low bar though. There's a real opportunity in targeting abuses of technology like Flock cameras and surveillance capitalism but right now it's getting expressed as a luddite agenda against AI and datacenters and it won't go far because it throws the AI baby out with the datacenter bathwater IMO making them more into useful idiots than crusaders out to rein in corporate excess.
And who exactly (no not the Illuminati, the mole people, the Tartarian Empire or Atlantis etc) is giving him orders? Names please.
But you're right that the Epstein (guessing Mossad IMO) op has sure ensnared a lot of people who should have known better, but I guess they're just like us in the sense that they only have enough blood to run one head at a time. To my knowledge though, Tim Cook, Bezos and Zuckerberg aren't in the Epstein files. So what's their excuse?
This whole standoff could set a very important precedent of the Trump administration not getting what they want, and not in a "maneuvered out of the news spotlight" kind of way (e.g. Greenland), but in a public "FUCK OFF right in your face" kind of way.
The worst that can happen to Anthropic is one of the two things mentioned: losing some contracts or some fake forced management from the Pentagon. Maybe Dario having to leave, certainly a loss for him and for people who believe in him, but probably nothing world-changing.
The worst that can happen to the Trump administration is the beginning of its end, when people realize you can simply stand up to their bullying and with all the standoffs they have going on in parallel, maybe they will die a death by a thousand cuts?
The executives at these huge corporations already know that they can stand up to the Trump administration, and that it will fold immediately. "TACO" is printed in the Wall St. Journal.
They willingly don't, because they know that they can use the administration to cement their market power. The surveillance state being built is one where would-be competitors, labor, well-meaning reformists, can be crushed on a whim for sham political reasons. A massive contraction of USA wealth, influence, and power, a loss of our living standard and place in the world -- that is the price everyone else has to pay, to keep the existing power structure in place. They will not release their grip on the wheel. Not until the ship hits the bottom of the sea.
If that is the bet they are placing, it is a bet they will lose. The power and capabilities of US corporations does not rest solely on those corporations, and as the wealth, influence and power of the USA undergoes "a massive contraction", they will find themselves similarly degraded. They might be the big fish in the big pond, but only because everyone knows there's a bigger fish (the US government). Once other countries, and other corporations, no longer care much what the US government thinks, US corporations will find themselves in a very, very different situation.
The US was told directly that it's not happening. You had the military exercise that scared Trump so much that he ordered extra tariffs. Just because you don't follow the news doesn't mean there wasn't any response.
Anthropic has an excellent balance sheet. It basically has fuck you money that would let it walk away from the federal trough without existential risk. And hopefully extra dollars from users like me could compensate and then some in the fullness of time.
Being declared a Supply Chain Risk means that if you do ANY business with the US government, you cannot use that vendor's products.
So many companies have US government contracts. Maybe those contracts are not the majority of their business like at Lockheed Martin or RTX, but look at the F10: on that list, maybe Walmart is the only one without a US government contract; everyone else likely has one.
If they are deemed a supply chain risk under the DPA anyone doing business with them and has government contracts has to drop them, including Google and Microsoft. The $200M is small potatoes compared to this.
Article doesn’t demonstrate a good understanding of DoW’s relationship with contractors. Anthropic wanted those sweet, sweet, taxpayer dollars—well, this is what happens when you make a Faustian bargain.
> One option is to invoke the Defense Production Act. . .
> Another threat would be to declare Anthropic to be a supply chain risk. . .
The first is a wrist-slap that still gets the government what they want; the second is an existential threat to Anthropic. Their main partners are all “dogs of the military”. Microsoft, Intuit, NVIDIA: all government contractors. I can’t find one company that they have a working relationship with that doesn’t hold at least one govt contract.
The idea that Claude could alignment fake its way out of a change in contractual terms is silly. The DoW has all sorts of legal and administrative tools it can choose to leverage against contractors that fail to perform. Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
Remind me again how good this administration is at upholding norms?
> Remind me again how good this administration is at upholding norms?
When it comes to killing and spying on people with flimsy justifications that's a pretty bipartisan norm. Hell, Anthropic isn't even saying they won't help the DoW do just that, they just want to make sure there's a human in the loop.
The "USA Freedom Act" [1], which made most of the Patriot act permanent, had bipartisan support.
I'm all for reversing the continual ramp up of the police state and the industrial military complex. We need to recognize, however, that it's being funded and pushed by both parties. Generally playing on fears of the scary other. (Muslim terrorists in 00s, Mexicans today).
> I also don’t see the point in both-siding this. The situation at hand is before Hegseth and Trump. I can’t even remember Biden’s SecDef’s name.
To me, the moral and ethical problem is a bigger issue than the norms problem. There's a distinction without a difference between Hegseth doing this vs the Dems agreeing with Anthropic's demands and keeping a human in the loop on a massive spy and killing network. In some ways, stepping out of the norms and making a big news story is preferable to an unknown cabinet member just signing a business as usual agreement which erodes liberties. At least we know about it.
That's why I brought it up. It's great that Anthropic wants some safeguards, but ultimately the bigger problem is that AI with or without humans, significantly expands that ability of our military to murder and our spy agencies to spy.
> Anthropic wanted those sweet, sweet, taxpayer dollars
They sold services to a willing counterparty at mutually agreed upon terms. And now the other side of that deal has recalled that they're Twelve and You're Not My Real Mom You Can't Tell Me What To Do, and so wishes they had agreed to different terms and is throwing a tantrum to attempt to force a change.
And that's Anthropic's fault? That's a risk they should have predicted?
There is no "DoW". Federal agencies, including the Department of Defense, are named by Congress. Just because the current administration wants to use a different name means nothing ... unless everyone just complies in advance. Will Congress actually rename it? Hard to say, but it doesn't seem very likely.
This is such a silly point to argue over. From 1789-1947 we had a "Department of War", which then became the "Department of the Army" under the newly formed (in 1947) National Military Establishment (NME), renamed in 1949 to the "Department of Defense" because N-M-E sounds like "enemy".
It's not like these names are some sacred part of American identity, and "defense" has always been laughable as a euphemism. The DoD refers to themselves as the DoW [0] now, so it's completely reasonable to refer to the department as the DoW. And of all the places to put your political energy, defending a laughable euphemism of a name, one adopted because the previous iteration sounded funny, seems like a sub-optimal use of that energy.
The signalling in that post is about as clear as it can be
They’re aggressively signalling that they are cooperative, and that they are not being belligerent. They are using the preferred language and much of the framing that the US government would use, to make it as clear as possible what the key points of their disagreement are, by leaning into alignment on everything else.
This is textbook. People are reading this as some kind of confusing, inexplicable framing when it’s how any sensible person would write in their context.
If the title doesn't make a difference, then there's no point to insist on it. People say "the Pentagon" as shorthand for "military leadership in Washington." Not using the shorter term wouldn't do much beyond making news articles longer.
This administration says "Department of War" because they want to project an aggressive image. I support anyone who uses the legal name "Department of Defense" in an effort to reinforce an aspirational goal for the department and to remind others that the Executive Branch shouldn't be allowed to remake the entire government at will.
They knew what they were signing on to when they sought DoW funding. I guarantee Dario was briefed on the risks associated with high-profile govt contracting.
Even if not briefed, such a smart person surely knows that he owes his stash of gold to the willingness of others to spill their blood to protect it. Those willing to spill their blood have historically always had a claim on your gold.
Everything about this situation is absolutely bonkers. Marking a US company as a supply chain risk hasn't been done before AFAIK, and is a guaranteed end of the company.
It's the US government basically unilaterally deciding to end a leading AI research company. Years of lawsuits will follow, along with comparisons to "communism" and accusations of Trump/Hegseth being Chinese/Russian agents (because, well, how else do you hand over the AI win to China than by killing one of your top 2?)
It's trivially untrue. It could be the end of one type of business model, and it could slow their growth, but it could also be a blessing in disguise -- there are a lot of brilliant engineers who would prefer to work with an Anthropic that took a stand on ethics, and a lot of people who would prefer to support such a company. One door closes, another opens. They could become an open, public-facing, benevolent-AI company.
Because this means you can't use it in regulated industries, including vendors of companies in regulated industries. It means any company who buys Anthropic products can never sell services to a company who is in a regulated industry (or has customers in a regulated industry, or has customers who have customers who are in a regulated industry, etc etc).
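The cascade being described is effectively a reachability computation over the customer graph; a hedged sketch (the companies and edges below are entirely hypothetical) of how a ban on one vendor propagates downstream:

```python
from collections import deque

# Hypothetical "sells to" edges: vendor -> list of customers.
sells_to = {
    "BannedVendor": ["SaaSCo"],
    "SaaSCo": ["ConsultingCo"],
    "ConsultingCo": ["RegulatedBank"],
    "OtherVendor": ["RetailCo"],
}

def tainted(start):
    """BFS over the customer graph: everyone reachable downstream of
    the banned vendor is affected -- they must either drop the
    relationship or give up their regulated customers."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for customer in sells_to.get(node, []):
            if customer not in seen:
                seen.add(customer)
                queue.append(customer)
    return seen

# The ban reaches the bank three hops away; OtherVendor's chain
# is untouched because nothing connects it to the banned vendor.
assert "RegulatedBank" in tainted("BannedVendor")
assert "RetailCo" not in tainted("BannedVendor")
```

The point of the sketch is only that the affected set is the transitive closure of "is a customer of", which is why the blast radius of such a designation is so much larger than the direct contract value.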
Just imagine if this move cascaded out of control and it ended up being the Trump administration that got blamed for pricking the AI bubble. This could become one of the most expensive power grabs in all of history.
I agree. This is a spectacular mistake. Anthropic has the best "AI" on the planet. Anthropic can spin up a giant "Claude" and plan rings around the Pentagon. DoD better get used to losing that fight.
I think it'd be surprising if money is the limiting factor in Gemini's success considering Google has very deep pockets, so that's probably not true.
Also, Gemini with DoD money and DoD direction is likely to result in an AI that works very well for the DoD but significantly less well for other things, especially if your use case benefits from some guardrails (and most use cases do, because you rarely want AI to just do whatever it fancies.)
Gemini is just the worst of the 3 horses. The gov will eventually make them all bend the knee. PRISM already showed they (eventually) all comply. I personally see this more as PR for Anthropic before the IPO.
The problem is they’re going to hit them with a wrench and no one will do anything because there’s no rule of law at that level left in the country. Just sycophancy and backroom deals.
the Pentagon is the name of a building (pretty much a very large bikeshed). I see the actual agency is named by the author as the Defense Department and one of the officials in question is a Defense Secretary. Interestingly, the bikeshed itself has its own spokespeople.
News sources have been using both building names (and several more I can think of off the top of my head) as short hand for the people who work inside of them for my entire life.
Officially, they're the Department of Defense. There was an EO signed last year that lets them use "Department of War" on all but their most official documents (since only Congress can officially change the name of the department).
The DoD is those defense contractors and companies' _primary target customer_. That doesn't just mean they're dependent on them as a customer. That means everyone working with, for, and adjacent to them has knowingly signed up to work with a defense contractor and to sell to someone that wants to use weapons in anger. That means these companies were mostly _founded_ to do that.
So instead, I invite you to imagine a medical supply company refusing to sell medical-grade sodium thiopental to the Bureau of Prisons.
It's not that they are too lethal. It's "we will not build a weapon system that is fully autonomous and acts without a human in the loop".
The big boy defense contractors won't touch that shit either, because as soon as you mention the idea the engineers start shouting you down at the top of their lungs out of sheer unbridled terror, and the lawyers come storming in due to the endless legal risk said design would bring.
Mass domestic surveillance, sure, they might do no problem, but fully autonomous killbots or drones are gonna be a no-go from pretty much every contractor that doesn't carry a "missing the point of Lord of the Rings" name.
Planes are fairly predictable, they can more or less be relied on to do that leadership asks them and not more. This stuff is more akin to nerve gas, there's no telling where it will go once deployed.
Yes, you're right. Military contractors supplying equipment that needlessly harms our own soldiers is pretty common, from what I understand. Soldiers following orders don't have much market power. "Occupational hazard", and then the brass sweeps the problems under the rug. And paramilitary contractors are generally quite happy to supply things meant to directly hurt Americans (sonic weapons and tear gas used to attack Constitutional protests, etc). Both of these dynamics are applicable here. "AI" as it stands is a recipe for friendly fire incidents. And domestically, these capabilities will be used to turbocharge domestic surveillance as the con artist regime desperately needs ways to keep the wheels from coming off the cart.
So yes you're right, it sure is nice to imagine Anthropic setting off a wave of more military contractors acting with principles.
They are a private company; they can largely sell or not sell as they want. They aren't saying they won't build them because they are too effective; they are saying they won't build them because they aren't safe.
> The President is hereby authorized (1) to require that performance under contracts or orders (other than contracts of employment) which he deems necessary or appropriate to promote the national defense shall take priority over performance under any other contract or order, and, for the purpose of assuring such priority, to require acceptance and performance of such contracts or orders in preference to other contracts or orders by any person he finds to be capable of their performance, and (2) to allocate materials and facilities in such manner, upon such conditions, and to such extent as he shall deem necessary or appropriate to promote the national defense.
My read of this interaction is that Dario is calling Hegseth's bluff. A bluff the latter didn't even know he was blundering into, because Hegseth is an idiot.
SecDef invoking the DPA against Anthropic likely trashes the AI fundraising market, at least for a spell. That's why OpenAI is wading into the fight [1]. Given the Dow is sitting on a rising souffle of AI expectations, that knocks it out as well. And if there is one red line Trump has consistently hewed to and messaged on, it's in not pissing off the Dow.
The entire administration has been operating on empty threats (see Brendan Carr's FCC speech policing). But most companies don't call them out on it, they just roll over
This frames it as Pentagon vs. Anthropic but the actual problem is upstream. If we tell companies they must prevent all possible harm, you're setting them up: nerf the model and silently lose value nobody can quantify, or don't nerf the model and get blamed for every bad outcome. We don't want nerf'd models either. DoW is saying that.
This isn’t an external directive; Anthropic was founded with the mission of creating safe, reliable AI systems. You wouldn’t see the same people working at the company if the company didn’t stand by its acceptable use policy and other internal standards
I'm saying the capability to reason about novel situations is in tension with guaranteeing it never produces harmful outputs. We are talking about contradictory design constraints.
1- OpenAI, Microsoft, Google, Amazon, etc have no problem with their products being used to kill people so no need to bully them.
2- These other products are so terrible at the task that the clown shoe wearing SecDef is forced to try to bully Anthropic.
Less than a year left on this clock.
[1] https://www.britannica.com/event/United-States-presidential-...
*DOW
There's even a webpage for it.
So cut the guy some slack. No one knows wtf is actually going on these days.
> Officials say other leading AI firms have gone along with the demand. OpenAI, the maker of ChatGPT, Google and Elon Musk’s xAI have agreed to allow the Pentagon to use their systems for “all lawful purposes” on unclassified networks, a Defense official said, and are working on agreements for classified networks.
The only difference is simply that Anthropic is already approved for use on classified networks, whereas Grok and OpenAI are not yet (but are being fast-tracked for approval, especially Grok). Edit: Note someone below pointed out that OpenAI may be approved for Secret level, so it's odd that Washington Post reports that they are working on it still.
I keep hearing this, but it should be plainly obvious to everyone (at least here) that an LLM is not the right AI for this use case. That's like trying to use ChatGPT as an airplane autopilot; it doesn't make sense. Other ML models might fit, but not an LLM. So why does the "autonomous killbot" thing keep getting brought up when discussing Anthropic and other LLM providers?
For reference, "autonomous killbots" are in use right now in the Ukraine/Russia war, and they run on FPV drones, not acres of GPUs. Also, it should be obvious that there's a >90% probability every Predator/Reaper drone has had an autonomous kill mode for probably a decade now. Maybe it's never been used in warfare, that we know of, but to think it doesn't already exist is bonkers.
https://devblogs.microsoft.com/azuregov/azure-openai-authori...
Either Anthropic is seen as the clear leader (it certainly is for coding agents) or this is a political stunt to stamp out any opposition to the administration. Or both.
Not too different from picking on Harvard/etc.
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1405 comments)
On the other hand, is autonomous war not obviously the endgame, given how quickly capabilities are increasing and that it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?
It just needs one player to do it, so everyone has to be able to do it. I'd love to hear a different scenario.
Businesses stay out of potentially profitable market segments for various reasons, so I don't think everyone has to be able to do it to survive.
Or it was their prerogative, until the Trump administration. Now even private companies must bend the knee.
Other players just need to assume that one player might do it in the future. That hypothetical future scenario has a causal effect on the present. The overall dynamic is that of an arms race (which radically changes what a player is).
Things like Scout AI’s Fury system still keep a human in the loop, and for something that could just as easily make a mistake and target your own troops, it’s not yet clear that full auto is the way to go. https://scoutco.ai/
A human in the loop okaying a full-auto engagement seems like it could work almost all the way. And then we count on geography: if they want to spray a bunch of autonomous drones into our territory, they first have to fly here or pre-plant them in shipping containers. Better to aim at stopping that.
I could not disagree more. A big part of that is also knowing when NOT to pull the trigger. And it’s much harder than you’d think. If you think full self driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.
I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.
The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.
If anything represents the logical conclusion of that tired fallacy, it'll be actually autonomous, "thinking" drones which make the targeting decisions and execution decisions on their own, not based on any direct, human-led orders, but derived from second-order effects of their neural net. At a certain point, it's not going to matter who launched the drones, or even who wrote the software that runs on the drones. If we're letting the drones decide things, it'll just be up to the drones, and I don't love our chances making our case to them.
If autonomous weapons lead to a net battlefield advantage, I agree with the GP, they will be used. It is the endgame.
If you simply wanted to cause havoc and destruction with no regard for collateral damage, the problem space is much simpler, since you only need enough true positives to be effective at your mission.
The ability to code with AI has shown that it requires an even higher level of responsibility and discipline than before to get good results without out-of-control downside. I think the ability to kill with AI would be the same, only more severe.
"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"
edit: how about the downvoters give a counterargument instead of trying to bury this comment?
Anthropic (and others), whether due to financial, regulatory, or competitive pressure, will at some point permit their products to be used for any lawful purpose, even if they attempt to restrict certain uses today. That arrangement is unlikely to hold.
Americans should vote for the right candidates and elect leaders who will carry and defend their views. I don't think there is any other way.
The situation in the United States, right now, seems genuinely hopeless. And I'm certain I'm not the only person who feels this way.
What is there to do besides resign myself to what's coming and try my best to ignore the bullshit?
And then you use that affinity to manipulate them, to get them to do what you want, to get them to give you money.
I think the tech worker / engineering / online crowd has really let themselves get duped.
Sure, maybe some tech billionaires did start out in a similar place as many of us.
But a lot of what they tell us as part of selling us their brand is just affinity fraud, telling us they're just like us with the same values of privacy and open source and some hippie notion of peace, love and understanding.
But it's just a trick, and they just want money, power and fame.
It's not so much as the billionaires capitulating, it's that they never were the people they pretended to be, and keeping up the act is no longer how they get what they want.
That is the reason why they would cry if the other party broke the rules to this degree. The other party is more aligned with regulation, taking power from corporations instead of giving it to them.
Enough regulation is good, not enough and too much are both bad. Neither party has the best plan when it comes to regulation, Republicans want too little (increasing corporate power), Democrats want too much (increasing government power).
He literally named it [1]!
[1] https://en.wikipedia.org/wiki/National_Fascist_Party
... eats cheese pizza and were connected to Jeffrey Epstein. That includes prime ministers, secret services, trump, democrats, republicans, royalty.
Has nothing to do with Trump specifically. He's just the "currently voted-in guy" doing what he's being told to do.
"Oh but shadow government/deep state is just a dumb conspiracy-theory" ... yeah, just like an island of cheese pizza eating billionaires.
But you're right that the Epstein op (Mossad, I'm guessing) sure ensnared a lot of people who should have known better, but I guess they're just like us in the sense that they only have enough blood to run one head at a time. To my knowledge, though, Tim Cook, Bezos, and Zuckerberg aren't in the Epstein files. So what's their excuse?
However, that still doesn't explain the secret space program to mine adrenochrome from missing kids on Mars. Because WTFF? https://www.space.com/37366-mars-slave-colony-alex-jones.htm...
But still, WHO is giving him orders?
The worst that can happen to Anthropic is one of the two things mentioned: losing some contracts, or some fake, forced management from the Pentagon. Maybe Dario having to leave, certainly a loss for him and the people who believe in him, but probably nothing world-changing.
The worst that can happen to the Trump administration is the beginning of its end, when people realize you can simply stand up to their bullying and with all the standoffs they have going on in parallel, maybe they will die a death by a thousand cuts?
They willingly don't, because they know that they can use the administration to cement their market power. The surveillance state being built is one where would-be competitors, labor, well-meaning reformists, can be crushed on a whim for sham political reasons. A massive contraction of USA wealth, influence, and power, a loss of our living standard and place in the world -- that is the price everyone else has to pay, to keep the existing power structure in place. They will not release their grip on the wheel. Not until the ship hits the bottom of the sea.
In what world has the Greenland stuff been anything but a fuckoff?
The world in which Europe didn't respond, Americans didn't flip out and Congress didn't push back.
https://komonews.com/news/nation-world/danish-mep-tells-trum...
So many companies have US government contracts. Maybe those contracts are not the majority of their business, the way they are for Lockheed Martin or RTX, but look at the Fortune 10: on that list, maybe Walmart is the only one without a US government contract; everyone else likely has one.
> One option is to invoke the Defense Production Act. . .
> Another threat would be to declare Anthropic to be a supply chain risk. . .
The first is a wrist-slap that still gets the government what they want; the second is an existential threat to Anthropic. Their main partners are all “dogs of the military”. Microsoft, Intuit, NVIDIA: all government contractors. I can’t find one company that they have a working relationship with that doesn’t hold at least one govt contract.
The idea that Claude could alignment fake its way out of a change in contractual terms is silly. The DoW has all sorts of legal and administrative tools it can choose to leverage against contractors that fail to perform. Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
Remind me again how good this administration is at upholding norms?
When it comes to killing and spying on people with flimsy justifications that's a pretty bipartisan norm. Hell, Anthropic isn't even saying they won't help the DoW do just that, they just want to make sure there's a human in the loop.
The "USA Freedom Act" [1], which made most of the Patriot Act permanent, had bipartisan support.
I'm all for reversing the continual ramp-up of the police state and the military-industrial complex. We need to recognize, however, that it's being funded and pushed by both parties, generally by playing on fears of the scary other (Muslim terrorists in the '00s, Mexicans today).
[1] https://en.wikipedia.org/wiki/USA_Freedom_Act
> Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
My comment has nothing to do with Anthropic’s “moral” or “ethical” stance.
I also don’t see the point in both-siding this. The situation at hand is before Hegseth and Trump. I can’t even remember Biden’s SecDef’s name.
> I also don’t see the point in both-siding this. The situation at hand is before Hegseth and Trump. I can’t even remember Biden’s SecDef’s name.
To me, the moral and ethical problem is a bigger issue than the norms problem. There's a distinction without a difference between Hegseth doing this vs the Dems agreeing with Anthropic's demands and keeping a human in the loop on a massive spy and killing network. In some ways, stepping out of the norms and making a big news story is preferable to an unknown cabinet member just signing a business as usual agreement which erodes liberties. At least we know about it.
That's why I brought it up. It's great that Anthropic wants some safeguards, but ultimately the bigger problem is that AI, with or without humans, significantly expands the ability of our military to murder and our spy agencies to spy.
They sold services to a willing counterparty at mutually agreed-upon terms. And now the other side of that deal has remembered that they're Twelve and You're Not My Real Mom You Can't Tell Me What To Do, and so wishes they had agreed to different terms and is throwing a tantrum to try to force a change.
And that's Anthropic's fault? That's a risk they should have predicted?
It's not like these names are some sacred part of American identity, and "defense" has always been laughable as a euphemism. The DoD refers to itself as the DoW [0] now, so it's completely reasonable to refer to the department as the DoW. And of all the places to put your political energy, defending a laughable euphemism of a name, one adopted because the previous name sounded bad, seems like a sub-optimal use of that energy.
0. https://www.war.gov/
They’re aggressively signalling that they are cooperative, not belligerent. They are using the preferred language and much of the framing that the US government would use, making the key points of their disagreement as clear as possible by leaning into alignment on everything else.
This is textbook. People are reading this as some kind of confusing, inexplicable framing when it’s how any sensible person would write in their context.
There’s no Obamacare either. Come on, this is about as pedantic as the “the DoD is not the Pentagon” debate downthread.
It’s a colloquial name, and how the executive branch wants everyone to refer to it. This forum isn’t an official document. Move on.
This administration says "Department of War" because they want to project an aggressive image. I support anyone who uses the legal name "Department of Defense" in an effort to reinforce an aspirational goal for the department and to remind others that the Executive Branch shouldn't be allowed to remake the entire government at will.
It's the US government basically unilaterally deciding to end a leading AI research company. Years of lawsuits will follow, along with comparisons to "communism" and accusations of Trump/Hegseth being Chinese/Russian agents (because, well, how else do you hand the AI win to China than by killing one of your top 2?)
Why do you say this?
It's trivially untrue. It could be the end of one type of business model, and it could slow their growth, but it could also be a blessing in disguise -- there are a lot of brilliant engineers who would prefer to work with an Anthropic that took a stand on ethics, and a lot of people who would prefer to support such a company. One door closes, another opens. They could become an open, public-facing, benevolent-AI company.
Also, Gemini with DoD money and DoD direction is likely to result in an AI that works very well for the DoD but significantly less well for other things, especially if your use case benefits from some guardrails (and most use cases do, because you rarely want AI to just do whatever it fancies.)
https://en.wikipedia.org/wiki/Synecdoche
News sources have been using both building names (and several more I can think of off the top of my head) as short hand for the people who work inside of them for my entire life.
So instead, I invite you to imagine a medical supply company refusing to sell medical-grade sodium thiopental to the Bureau of Prisons.
The big-boy defense contractors won't touch that shit either, because as soon as you mention the idea the engineers start shouting you down at the top of their lungs out of sheer unbridled terror, and the lawyers come storming in over the endless legal risk such a design would bring.
Mass domestic surveillance, sure, they might do no problem, but fully autonomous killbots or drones are going to be a no-go from pretty much every contractor, other than the ones carrying a "missing the point of Lord of the Rings" name.
So yes you're right, it sure is nice to imagine Anthropic setting off a wave of more military contractors acting with principles.
> The President is hereby authorized (1) to require that performance under contracts or orders (other than contracts of employment) which he deems necessary or appropriate to promote the national defense shall take priority over performance under any other contract or order, and, for the purpose of assuring such priority, to require acceptance and performance of such contracts or orders in preference to other contracts or orders by any person he finds to be capable of their performance, and (2) to allocate materials and facilities in such manner, upon such conditions, and to such extent as he shall deem necessary or appropriate to promote the national defense.
SecDef invoking the DPA against Anthropic likely trashes the AI fundraising market, at least for a spell. That's why OpenAI is wading into the fight [1]. Given the Dow is sitting on a rising soufflé of AI expectations, that knocks it out as well. And if there is one red line Trump has consistently hewed to and messaged on, it's not pissing off the Dow.
[1] https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...