It is incredible how far the Overton window has moved on this issue.
When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
> it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
(spoiler alert)
Wasn't this one of the plot points of the Val Kilmer movie Real Genius? They had to trick the students into creating a weapon by siloing them off from each other and having them build individual but related components? How far we've fallen! Nobody has to take ethics during undergrad anymore I guess...
The late 90s were full of media that questioned reality and authority - like X-Files, The Matrix, Dark City, all sorts of websites about conspiracy theories and UFOs, etc. The zeitgeist was full of speculation about hidden truths. The cultural mood was defiant and sardonic. There was rap, rap-rock, Beavis and Butthead, Fight Club, Office Space... One of the most popular pro wrestlers in the world played a character who beat up his boss and gave him the middle finger. Then after 9/11 it kinda seemed like suddenly the TV shows were all about cops and soldiers. Admittedly, my memories might be somewhat deceiving me. But I do feel that the mood suddenly shifted, much more than the actual damage done to America by the attack should have justified.
If you are waiting until undergraduate level to take ethics, it's far too late to matter anyways.
Doubly so for "business ethics" classes, which became à la mode in the post-Enron era. They attempt to teach fundamental ethics, when at most they should be a very thin layer on top of a well-founded internal moral framework and well-accepted ethical standards inculcated from day 1 of kindergarten.
Morals are taught from ages 0-9 [0]; ethics perhaps slightly later, as it requires more complex thought processes.
Ender's Game the novel, but I would say that it's not actually super relevant. First, the original short story was 1977, and then Card expanded it into a novel which was published mid-1980s. The point in the story is that kids are sensitive, and supergenius kids more so, and that they don't want to interrupt performance with concerns about guilt. But Real Genius wasn't about that! It was about an anti-war stance born of the Vietnam War and creative-class hatred for Ronald Reagan's presidency.
Reminds me of the story of someone's wife working for a research lab to improve the computer-controlled automatic emergency landings of planes with total power failure.
... or so she was told.
She was unknowingly designing glide-bomb avionics.
I feel like these stories are apocryphal. I mean, I can't say for certain that no US DoD research program used subterfuge to trick the performers into working on The Most Racist Bomb. But I can say that in 20 years I've never seen a dearth of people ready, willing, able, and actively participating with full knowledge that they are creating The Fastest Bomb and The Sneakiest Bomb and The Biggest Bomb Without Actually Going Nuclear.
IDK, maybe it's different outside the National Capital Region. But here, you could probably shout "For The Empire" as a toast in the right bars and people wouldn't think you were joking.
What? I'm not questioning whether the weapons research actually happened. I'm questioning the sincerity of people claiming they didn't know what they were doing. I've seen plenty of weapons programs. They aren't a secret to the people working on them. My point is, the government doesn't need to lie to researchers or even pay them very well to get them to develop weapons because there are plenty of intelligent-enough people willing to do it almost for free.
I've worked as a contractor for a safety system that turned out to be for a foreign military. I was given a signal, and told to write software to fit it. The signal could plausibly be collected for a wide variety of civilian purposes.
What I realized later was that none of the civilian markets could possibly justify the cost of the project.
The particular type of signal fitting I was doing was only achievable by a few thousand expensive domain experts in the world, so, I think that addresses your other point.
If "This doesn't fit into my mental model, so everyone else must be lying" is how you deal with things you didn't personally experience, do what you have to.
I once saw the word nickel autocorrected incorrectly into something far worse. It was funny given the context (metals, not coins) but I wondered why someone would even have that word in their autocorrect dictionary.
Attitude towards war depends on context. In 2007 "war" meant "Iraq" which was extremely unpopular, pointless, and had an imperialist flavor. Today "war" means Gaza, Iran, and Venezuela, but it also means Ukraine and Chinese aggression, possibly ramping up to an invasion of Taiwan. I suspect Amodei and many Anthropic employees are thinking of the latter.
And coincidentally, Ukraine and Taiwan enable the US to establish grips literally right on the doorstep of our two top geopolitical adversaries. Sure is amazing how freedom, democracy, and butterflies happen to always so randomly coincide perfectly with the geopolitical ambitions of the empire.
> Chinese aggression, possibly ramping up to an invasion of Taiwan.
It's amusing, amidst the US bombing Iran, incarcerating the president of Venezuela and his wife after slaughtering everyone who was in the room with him, seizing oil tankers off Cuba, continuing the siege of Gaza, and on and on, to start getting sanctimonious about China.
Taiwan includes Kinmen island in Xiamen harbor, so a mainland invasion of Taiwan would be mainland China "invading" an island in its own harbor.
Also, mainland China does not recognize Taiwan and mainland China to be separate countries. The US does not recognize Taiwan and mainland China to be separate countries. Taiwan does not consider Taiwan and mainland China to be separate countries. I'm not sure what the invasion would be, a country invading itself? It would be like if the US president sent armed agents to Minnesota who started killing people willy-nilly - oh wait, that just happened.
The most satisfying thing is that if mainland China did choose to reassert its rightful authority in Taiwan against the colonial powers, there's absolutely nothing those Western powers could do about it. Just like Russia's response to the West trying to move its NATO armies up to Russia's western borders in Ukraine. It's amusing to see the US flailing about, hitting a Venezuelan here, a Cuban there, to try to look tough. I guess Nicaragua is next on the list. The changes coming in the 21st century are welcome. A bozo like Trump as president is a sign of a fading West.
By this logic, America not recognising the sovereignty of Venezuela, Iran, and Cuba (and Israel that of Palestine, as well as vice versa) makes everyone an a-okay actor!
Actually, dinosaurs existed in China before there were people. And their descendants, the birds, are still around. We should all consider it our moral duty to continue what was begun in Tiananmen Square in 1989 and overthrow the CCP and replace them with the true historical rulers, the chicken.
And probably some of the same companies where you could get fired for publicly expressing some mildly controversial sociological theories like James Damore did are also companies that would not hesitate to work with the CIA or the Pentagon on mass surveillance or weapons systems.
Yes, and of their two exceptions, only one is on moral grounds. They don't want to provide tools for autonomous killing machines because the technology isn't good enough, yet. Once that 'yet' is passed they will be fine supplying that capability. Anthropic is clearly the better company over OpenAI, but that doesn't mean they are good. 'Lesser evil' is the correct term here for sure.
Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
Obviously anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
> Fisher [...] suggested implanting the nuclear launch codes in a volunteer. If the President of the United States wanted to activate nuclear weapons, he would be required to kill the volunteer to retrieve the codes.
>> [...] The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being. [...]
>> When I suggested this to friends in the Pentagon they said, "My God, that's terrible. Having to kill someone would distort the President's judgment. He might never push the button."
> — Roger Fisher, Bulletin of the Atomic Scientists, March 1981[10]
The danger is that we won't be sending these fully-autonomous drones to 'war', but anytime a person in power feels like assassinating a leader or taking out a dissident, without having to make a big deal out of it. The reality is that AI will be used, not merely as a weapon, but as an accountability sink.
Isn't this the moral hazard of war as it becomes more of a distance sport? That powerful governments can order the razing of cities and assassinate leaders with ease?
We need to do it because our enemies are doing it, in any case.
I do not think that anyone but the US and Israel has assassinated leaders in the last 30 years. I also question their autonomous drone advancement. Russia and China did not have the means to help Venezuela, and they do not have the means to help Iran.
It came later than I anticipated, but it did come after all. There is a reason companies like 9mother are working like crazy on various ways to mitigate those risks.
War is not moral. It may be necessary, but it is never moral. The best we can do is fight, at every turn, anything that makes war easy. Our adversaries will go the autonomous route, or likely already have. We should be doing everything we can to put major blockers on this, similar to efforts to block chemical, biological, and nuclear weapons. The logical end of autonomous targeting and weapons is near-instant mass killing decisions. So at a minimum we should think of autonomous weapons in a similar class as those, since autonomy is a weapon of mass destruction. But we currently don't think that way, and that is the problem.
Eventually, unfortunately, we will build these systems, but it is weak to argue that the technology isn't ready right now and that is why we won't build them. No matter when these systems come online there will be collateral damage, so there will be no right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing the technology isn't ready; no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead Anthropic should be arguing that we aren't ready as a society, and that is why they aren't going to support them.
I think it's the opposite. The human cost of war is part of what keeps the USA from getting into wars more than it already is - no politician wants a second Vietnam.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
Your post reads as if you would rather those aggressors who threaten America to not be disposed of. How is the world a better place with the aggressors than without?
> Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
I guess let the record state that I am deeply morally opposed to automated killing of any kind.
I am sick to my stomach when I really try to put myself in the shoes of the indigenous peoples of Africa who were the first victims of highly automatic weapons, “machine guns” or “Gatling guns”. The asymmetry was barbaric. I do hope that there is a hell, simply so that those who made the decision to execute those peoples en masse have a place to rot in eternal hellfire.
To even think of modernizing that scene of inhumane depravity with AI is despicable. No, I am deeply opposed to automated killing of any kind.
The “machine gun” has a more complicated history, and the first practical example may have been Gatling’s, or an earlier example used in Europe https://en.wikipedia.org/wiki/Machine_gun
What do you mean, "hallucinates and kills people"? Killing people is the thing the military is using them for; it's not some accidental side effect. It's the "moral choice" the same way a cruise missile is — some person half a world away can lean back in their chair, take a sip of coffee, click a few buttons and end human lives, without ever fully appreciating or caring about what they've done.
The people that actually target and launch these things do think about what they have done. It is the people ordering them to do it that don't. There is a difference, I hope.
I'm sure it was meant as "kills the wrong people."
People are always worried about getting rid of humans in decision-making. Not because humans are perfect, but because we worry that buggy software will be worse.
The flip side is that it's very likely AI won't become that good any time soon, so it'll always remain a means to hold out. Especially since nobody has explicitly defined what "good enough" entails.
2007 was 19 years ago. If you step back another 19 years, you'll find that the major tech companies of the era had huge defense contracts: IBM, HP, Oracle, SGI, Texas Instruments, etc. Not only that, the development of many technologies that we take for granted today -- like integrated circuits, the Internet, even Postgres -- were directly funded by the DoD. And looking back at the history of Silicon Valley, much of its growth in the early days was a direct consequence of military spending.
Things come and go. Attitudes change. Who knows where things will be in another 19 years.
Values relating to mistrust of the military (as per the context of the post I responded to) as well as values relating to ownership of the tech you bought and of personal privacy.
Get off your high horse and stop talking down to a person you don't know. Take your anger out on someone else.
Yeah, it wasn’t some kind of ethical utopia, but it sure as hell has gotten a lot less ethical in real terms. When you start making things operate in ways that people dislike or are deceived by, it’s a very slippery slope, because everything from there all the way through eating babies is just a matter of degree.
Trite as it may seem, “don’t be evil” is actually a very, very strong statement, as is “do no harm”. 70 percent of tech market cap these days is a million tiny harms, a warm pool of diluted evil.
Well, aren't you just the sweetest little emotional manipulator? Ethical, for sure. Perhaps, you are angry due to ignorance and react poorly to someone shattering your illusions.
Hard to say for sure. In that instance I can only reasonably speak for myself. So far at least, the evidence suggests the more I have, the more distracted I get by new projects.
If LLMs are indeed a game changer professionally, you kind of need to pick one.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
Yeah, and they still happen even today (there were some recent ones with ICE and Israel), but the companies themselves have still worked in war businesses.
But tell me, what would you like your country to do when conflicts arise due to want of natural resources? Would you want your country to just give up that resource your people depend on, like may be 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
Literally yes. If you justify harming others out of nowhere by ‘sabotaging your own existence’ then yes.
‘Sabotaging your own existence’ is a magic sentence that can justify anything. Israel kills more children than any other nation in the world and justifies it by ‘not sabotaging their own existence’.
Anyone can do anything with this perspective. This is the exact point here. Pull yourself back if you are about to ‘not sabotage your own existence’ by simply killing innocent civilians because you believe a computer algorithm told you that in about 15 years they or their children might do something harmful.
I am sympathetic to the argument that I’d rather elected officials that have a path to be removed have the control of use more so than unelected executives.
Like we have solar now. People talk about how it saves environment. But I think another similar win would be reduction in dependency on oil, and countries won't have to go to war over oil. But it takes time...
But it seems what technology gives, technology takes away. Because new technologies comes with its own resource requirements. And the cycle looks like it will go on...
If the country wages wars for bad reasons, that is another problem that should probably be fixed elsewhere, or you should leave that country and go somewhere whose government you can fully get behind.
> defending your country
I am afraid that this does not always have to be an incoming attack. What if some country has a resource that your country badly needs, without which your people will suffer badly, and imagine the same is true for the other country. How much of a hit to your economy and quality of life are you willing to sustain before you ask your government to go out there and get the required resource by force?
I totally get that war is profitable, and most of the wars cannot be justified. But ideas like this sounds like sabotaging your own country and thus your own existence.
What if your family didn't like bread? What if they liked cigarettes? And instead of giving it away, you just sold it at a price that was practically giving it away?
It's easy to say "I will never let the Department of Defense use my search engine for evil!" Or "the more money they spend on me, the less they have for weapons!" ( https://en.wikiquote.org/wiki/Theo_de_Raadt ) when you aren't really expecting money. But when somebody shows up with a check, it becomes much harder to stick to your principles. Especially after watching Palantir (and "don't be evil" Google) rake in plenty of dough.
If you graduated in 2007, your classmates were born around 1985. Their parents were mostly born in the mid 50s to the mid 60s and came to political consciousness either during the Vietnam War or immediately thereafter. No war since has been even close to as unpopular or frankly as salient. It’s the passing out of cultural relevance of that war that you are noticing.
> No war since has been even close to as unpopular or frankly as salient.
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam war was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will be similar to observers of later wars.
Correct that there was no Iraq generation because there was no draft and numbers were way smaller. Vietnam had over half a million troops at the height of that war. Iraq had under 170k.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elected its first black president.
The economic toll will be greater with these wars than Vietnam.
Sure, but it's not reasonable to call it as unpopular domestically as the Vietnam War, which had more than 12 times the casualties, spread over a group that on the whole was unwilling to fight and had to be drafted into the conflict, thereby spreading the pain of lost loved ones throughout society rather than concentrating it heavily into the poorer and less politically powerful social and economic classes. As unpopular as the Iraq war was, the American people's distaste didn't really do much to end it.
That’s reasonable. In the context of the larger discussion here a post up thread’s implication that a graduate in 2007 would be anti-war because of Vietnam is kind of dubious. Public opinion of the war shifted quite a lot in the four years after “Mission Accomplished” and Freedom Fries.
I'm a decade older so maybe I missed the memo but I think you'll have a hard time naming tech companies that actually refused to work with the military, which were large enough and important enough to be in danger of selling something to the military (i.e. not Be Inc. or Beenz.com)
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
Well, they made a big deal about saying that while they sold their software to the Defense Department, it wasn't actually being used to kill people. Except for well-known military contractors (e.g., Raytheon), who have sold plenty of software specifically to kill people.
I guess there's a reason we saw plenty of articles about software used somewhat defensively -- such as distinguishing whether a particular "bang" was a gunshot, and where it likely came from -- instead of offensively -- such as improvements to targeting software.
> It is incredible how far the overton window has moved on this issue.
> When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war,
In 2007 the US was the sole world hegemon. It could afford to let the smartest people work on ad delivery systems.
In 2026, in certain fields, China has a stronger economy and military. Russia is taking over Europe. India and Brazil are going their own way. China is economically colonizing Africa.
The US can't afford to let its enemies develop strong AI weapons first, out of the naive thinking that Russia/China/others will also have naive thinkers who will demand the same restraint.
---
People were just as naive with respect to Ukraine. They were saying that mines and depleted uranium shells are evil. But when Russia attacked, many changed their minds because they realized you can't kill Russians with grandstanding on noble principle. You kill them with mines and depleted uranium shells.
Hopefully people here will change their minds before a hot war. As the saying goes, America always picks the right solution after trying all the wrong ones.
> When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
I don't think it was very common really.
I think for the most part it was tech companies whose systems were not being used for war who liked to boast that they refused to let their systems be used for war. Or they creatively interpreted "for war": since they were not actually manufacturing explosives, they could claim it was not for war.
> they have to couch it in language clarifying that they would love to support war, actually,
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
For almost all of history, including recent history, tech and military went together. Whether compound bows, or spears or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
When people (myself included FWIW) warn about the dangers of American imperialism, it's because:
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon, and strategy used against colonial subjects is eventually used against the imperial core, e.g. [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
Thank you for mentioning the term 'imperial boomerang'. You really saw it in the militarization of the police after the Iraq War. Gone are the donut munchers.
> I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
>1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
This humanist view unfortunately doesn’t hold anymore in the modern world. Boomers will be happy as long as not a single dollar is spent on housing, so that their own homes can appreciate in value. Republicans would rather burn money than spend it on houses, hospitals, or bridges that might benefit immigrants or “other people” more than themselves.
I used an American political party only as a reference, but the same phenomenon can be seen in many countries around the world. Society has become incredibly cynical and has regressed a lot in terms of humanity.
>"Boomers will be happy as long as not a single dollar is spent on housing"
Not sure what boomers you are talking about. I for one am disgusted at what is happening with things in general and with housing in particular. I do not want my house to appreciate ad infinitum. I do not want an ever-growing class of have-nots so that a few jerks can own the governments and half of the world.
Just so we're on the same page, the GP was referring to "baby boomers", as in people born 1945-1965. Maybe you know that and that's when you were born. I don't know. But "boomer" has taken on a slang meaning in the last few years for someone who's simply not tech-savvy or is otherwise out of touch.
Generational politics has definite limits and isn't absolute, but it's also true that the Baby Boomer generation as a whole enjoyed the greatest wealth-generation opportunities in history. They fled to the suburbs, subsidized by the government every step of the way, and then basically pulled up the ladder behind them. They also refuse to quit.
And then when crime receded (and there are multiple theories for why this happened), they moved back into the city, bought up all the real estate and then blocked building affordable housing there too.
I personally have a theory that the parting gift of the Baby Boomer generation will be to get rid of Social Security and Medicare since they don't need it anymore.
> Dean Ball: What Secretary Pete Hegseth announced is a desire to kill Anthropic. It is true that the government has abridged private-property rights before. But it is radical and different to say, brazenly: If you don’t do business on our terms, we will kill you; we will kill your company. I can’t imagine sending a worse signal to the business community. It cuts right at the heart of everything that makes us different from China, which is rooted in this idea that the government can’t just kill you if you say you don’t want to do business with it, literally or figuratively. Though in this case, I’m speaking figuratively.
The Overton window has not shifted, at least not among rank-and-file tech workers. There was very loud and vocal internal opposition to building and selling weapons[0]. They all lost the argument in the boardrooms because the US government writes very big checks. But I am told they are very much still around.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
Complaining loudly about working with the government to build weapons and then continuing to build them isn't the same as people refusing to work for companies that handle weapons contracts. The window has indeed shifted, with tech workers now merely virtue signaling on social media.
CBC News (Canadian outlet) released an investigation on this yesterday, and found:
> While the facility was functioning as a school, CBC News has confirmed a previous New York Times report stating the building was once part of an Islamic Revolutionary Guard Corps (IRGC) base.
Around 10 years ago, in college, I had a very ambitious classmate in calculus class who wanted to go to DARPA and work on robotics. I asked if he was thinking it through solely from a technical perspective or considering the ethics side as well. Clearly, he didn't understand the question, so I asked directly: what if the code you write, or the autonomous machine you contribute to, is used for killing? His response: that's not my problem.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans. Even universities don't push students towards critical thinking and ethics; it has all turned into vocational training, turning humans into crafting tools.
At the same time, at Harvard, I attended a VR innovation week, and the last panel discussion of the day was Ethics and Law, discussed by a law professor, a journalist, and a moderator, and attended by a handful of people. I asked why founders, CEOs, or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any qualified enough to take part. The discussion was basically: how do the products companies build affect society? Laws aren't the founders' problem, that's what lawyers are for, and ethics? Who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the level of individual thinking; or rather not thinking, just wanting and needing.
See, in your case with the military you can directly say: hey, my code will possibly be used to bomb other people. But today (and I'm sure back then too) it isn't so cut and dried. I worked in the AdTech industry (like 60% of Bay Area techies). So the ad tech I write gets shown to millions or billions of people. What about ads influencing elections and then politicians waging wars? Anti-vax ads which influence people and then kill them? Scam ads? Insurance ads, and then people not getting cancer meds from the same insurance? Am I responsible for those deaths? I would say yes.
But what is the option? I feel each of us wants to draw a line based off of our morality but the circumstances don't allow us to stick to it (still gotta pay rent)
We are all on the Titanic, the way I see it. It's just that the DARPA guy is gonna sink first. The rest of us are just pretending to be Jack, trying to be the last ones to go.
> "The fact is that a mere training in one or more of the exact sciences, even combined with very high gifts, is no guarantee of a humane or sceptical outlook."
> "I inquired why founders, CEOs or developers weren't in part of the discussion or in attendance? Moderator responded that they couldn't find them qualified enough to take part in the discussion."
This seems more like credentialist arrogance than a well-reasoned judgment.
> what if the code you write or autonomous machine you contribute to is used for killing?
This line of thinking, that creating machines that kill is unethical, will destroy the West. If the US wasn't so good at producing killing machines in WW2, you wouldn't be here to complain about DARPA ethics.
Instead of having engineers develop the most advanced machines for killing (i.e. protecting the West) such people go into producing the most addictive content delivery systems, destroying the brains of minors.
> Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving openAI when this is Anthropic's stance?
Are their two narrow requirements enough to draw the ethical boundary people are comfortable with?
What’s a “warfighter?” Do they come from the “Gulf of America?” We used to call them servicemen or service members. Emphasizing they served the people. I guess that’s too effeminate for our roided up and ironically hyper-insecure Secretary of Defense.
A new term was needed some decades ago. "man" titles have not been politically correct for a while, "member" sounds awkward and bureaucratic. In some other languages, "soldier" can be used for all military personnel, while English ended up with a more narrow meaning.
"Awkward and bureaucratic" is literally the point of naming conventions commonly adopted by democracies. Titles like "president" or "prime minister", departments like "Department of Defense", referring to government employees as "civil servants", etc. are all intentional measures meant to strip away the prestige and egotism associated with positions of authority in an effort to avoid it going to people's heads, and to remind them that they are meant to serve the good of the public that pays for their existence rather than ruling over them.
"Service member" is awkward because it has too many syllables; people won't use it when shorter alternatives are available. And it's bureaucratic because it's unspecific: it says nothing about which service those people are members of, and nothing about what kind of work they do.
I am not greatly relieved by this post of Anthropic's. That said, they seem to have lines and are willing to stand by them; I don't see where OpenAI has done that. So, for now and from my point of view, the point goes to Anthropic.
Moving my subscription is not terribly consequential, but since the products are so similar and easy to substitute with one another for my uses, it seems best to participate in what in aggregate is a signal that is being noticed and commented on and interpreted to mean that a significant number of people who buy AI access do care about this.
There are plenty of inference providers not working for the Department of War. Even Alibaba (sure, China has lots of issues, but they are not bombing anyone right now, if that's your first priority). Or smaller US/European/Asian companies with a purely civilian focus. The SOTA open-weights models they serve are perfectly suitable for coding and chat. I run a local Qwen3.5-122B-A10B-NVFP4 instance and it writes entire Android apps from scratch, and that's a mid-sized model.
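For anyone curious about the "run it locally" route: local serving stacks such as vLLM and llama.cpp expose an OpenAI-compatible HTTP API, so switching a coding workflow off a hosted provider is mostly a matter of pointing requests at localhost. A minimal sketch of building such a chat request; the port, endpoint path, and model name here are assumptions for illustration, not anything the commenter specified:

```python
# Sketch: building a request body for the OpenAI-compatible
# /v1/chat/completions endpoint that local servers like vLLM
# and llama.cpp expose. Host, port, and model name are assumed.
import json


def build_chat_request(model: str, prompt: str) -> dict:
    """Return the JSON body for a chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature suits coding tasks
    }


body = build_chat_request(
    "qwen3.5-122b-a10b-nvfp4",
    "Write a hello-world Android activity in Kotlin.",
)

# With a server running locally you would POST this body, e.g.:
#   curl http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d "$(python this_script.py)"
print(json.dumps(body, indent=2))
```

Because the wire format matches the hosted APIs, most client libraries and coding tools only need their base URL changed to use a local model.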
Sorry for the off-topic but what hardware are you running Qwen3.5-122B-A10B-NVFP4 on? Is it physically local or just self-administered? Thanks in advance.
Can you give a list of high-quality alternatives? Morally speaking I would put China on par with the US, if not worse (due to their ongoing Uyghur genocide). I will check out Qwen3 but would be interested in others.
Frankly it’s a shitshow all around.
The truth is that nobody gives a fuck about this. They have no moral qualms, just practical.
And these are the people that should bring us the future.
Man what a depressing scenario.
To state the obvious, I think when corruption and power in government go unchecked, companies eventually end up facing situations like this. It’s almost like making a deal with the devil.
At the beginning, they’re usually doing it for the money — and maybe some level of patriotism. Eventually they find themselves involved in things so ugly that they can’t really stomach it anymore. At the same time, they can’t easily back out either.
Then a new CEO comes in and thinks the previous guy was too soft, "He couldn’t handle it, but I can."
The Department of Defense was named as such after the detonations of the atomic bombs over Hiroshima and Nagasaki.
We, as humanity, collectively recognized the weight of our creation and decided to walk it back.
Discussing “AI alignment” in the same breath as aligning with a “Department of War” (in any country) is simply not an intellectually sound position.
None of the countries we’ve attacked this year poses an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical “AI alignment” concerns.
Why aren’t there more dissidents among the researcher ranks?
Among those who would resist, half would've done so outwardly by now and been fired, the other half would be hiding their activity. In both cases we wouldn't be hearing about them now.
> Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
Well I will say that if there's a word that describes what the Department has been up to in Venezuela and Iran, "Defense" does seem to be the least Orwellian option.
As someone looking at this from outside the US, the whole sequence of events is frankly terrifying.
I fear that frontier AI is going to be nationalised for military purposes, not just in the US but across the globe.
At the same time, I really don’t know what Anthropic were expecting when they described their technology as potentially more dangerous than an atom bomb while agreeing to integrate purpose-built models with Palantir to be deployed in high-security networks for classified military tasks.
Would love to enumerate those commonalities. Run by a psychopath? Commitment to violent lethality? Burning billions of dollars for uncertain goals? (ok there's one)
After hearing Palmer Luckey's argument for the name change[0], I tend to think it's a good change.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
Department of War is a more honest name; Department of Defense is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense".
If you ask someone, "Do you want to spend money on education or war?", you will get a different answer than if you ask, "Do you want to spend money on education or defense?"
The problem with this argument is that the _original_ Department of War is now called the Department of the Army, which existed alongside the Department of the Navy. Besides, it’s a moot point unless Congress actually changes the name.
Regarding Luckey's other statements, I can almost assure you that the administration did not think as much about it as Luckey has. Insecure Pete just thought the title "Secretary of Defense" was too wussy so he wanted to be Secretary of War.
Also, I think people mainly have issue with the fact that Trump is just randomly and unilaterally renaming things and demolishing buildings without congressional approval. If he had gone through the proper channels then maybe people could ignore it. Maybe. We'd probably still have qualms about it, but at least we'd know that our representatives had a say.
I went to a military high school up until 2011 and never remember hearing it. My dad and grandpa were military for 20 years each and I've never heard either say it. It definitely hasn't been used broadly in the US for very long (maybe in very specific circles). Even my friends who work as engineers for defense contractors now have never called people "war fighters" around me.
That may be true but changing the department's name can only be done with an act of congress, which has not been done yet. Thus, the name is still officially and legally Dept of Defense.
Just because a name is more accurate doesn't mean that it's the new name. Otherwise we wouldn't be the United States of America (we are literally not united because Hawaii and Alaska are not contiguous, and we are figuratively not united because... well, you know)
No, they didn't. The name of the department at issue is “the Department of Defense” and of its head the “Secretary of Defense.” These are set in statute (the latter for a slightly longer time than the former), and the relevant statutes have not been changed since the office of the Secretary of Defense was created in 1947 and the Department of Defense was created in 1949. The executive branch has just decided to use a nickname for a government department (which is the historical name of a prior department that was split to form two of what are now the three main direct subordinate elements within that department).
It'll be very interesting to see how this case gets resolved - in court and in the court of public opinion. I believe it's incredibly important and I hope they prevail.
As much as Trump and Hegseth would like it to be called the Department of War, it still takes an act of Congress to change the name of the Department of Defense. No reason to call it by anything else until that happens.
This is such a foot stomping childish thing to get caught up on. It does not at all matter what a dept is called. Try to get over the extremely superficial.
I think this is one of the weaknesses of rationalism and effective altruism, is that it tries to make a clean break from the common law legal reasoning that the government, and thus corporations, operate on. While I find rationalism to be a useful lens, the fact is that the common law legal framework is totally dominant, and so these deontological arguments made rationally collapse very quickly when translated to the dominant framework.
It's incredibly simple: they want to get off the supply-chain risk list.
It's very evident in his statement; he's trying very hard to clarify what that list means for corporations and downstream business with large commercial and strategic companies.
Imagine if Microsoft, Amazon, Google, etc. decided that they don't want ANY sort of minuscule risk (real or perceived) to their massive public-sector business lines (via the DoD, DoJ, NHS and other three-letter agencies, state agencies, city and local municipalities, etc.) and decided to cancel their enterprise Anthropic licenses, which is a VERY possible scenario.
And these are the big players; there's a whole slew of medium and small players, all with existing government contracts, that need to tread carefully.
Not everything has to be a conspiracy or some 4D chess business move. Dario is a morally motivated person and regretted the tone that was being conveyed in that memo, so he apologized.
Yeah, that's completely unbelievable. You don't just accidentally call Trump a "dictator" or go on an extended tirade about Sam Altman. Clearly, he was speaking how he truly felt, and now he's doing damage control.
It has become a moral imperative to not work on this technology that is meant to replace us and the one thing that has separated us from machine and beast.
Slow it down as much as possible to give us more time.
I'm sorry but it does not very much still exist. Otherwise, Congress would be doing something other than praying for the Anointed One and his holy war.
I'm not obeying in advance, but I'm not giving lip service to normality, either.
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
The OpenAI astroturfers jumped on this one. Their only interest is in trying to spin Anthropic as not meaningfully better to dissuade people from switching, not to get people to drop both companies altogether.
...is Anthropic "meaningfully better" though? They're still fine being a defense contractor, and they lack the tools to enforce the ethics they want to uphold. They seemingly contribute even less to FOSS than OpenAI does (low bar) and split hairs over IP ownership when open models distill their results. Am I supposed to root for them because of their manufactured internet drama?
It's very reminiscent of the half-assed security theater that Google and Apple fought over. Neither one of them resisted government coercion in the end, they just took different routes to end up as federal asskissers.
A long time ago I worked for a company whose software, I learned, was being sold to help target people during the Iraq war. I quit because I could not support building software that kills people.
This is a message to people working for that line of business at Anthropic. You don't have to do it, you can quit. If you are helping this insane administration to conduct war on Iran quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
I've also quit a job where the products I was working on were meant to be deployed to CBP to hunt down immigrants. It's a nice gesture, but it won't stop these companies. They just hired someone else without an ethical backbone and continued the project like nothing happened.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
I've quit jobs and been laid off from jobs, and I will admit that when I do, I always kind of hope the company goes bankrupt the day after I leave because I was so important. Companies I've quit or been laid off from have gone bankrupt, but it took years, and sadly I don't think there's any way for me to draw a logical connection of "no tombert -> company fails".
I've never quit a company on purely ethical grounds, but I have turned down interviews and offers because of them. They're probably not going to go bankrupt just by not hiring me, but I like to think that making it incrementally harder to find talent slows down their progress of doing evil things, if only a little.
That's probably still a delusion of grandeur on my end, but we all should have an ethical line that we won't cross; most of us end up working for monsters and/or assholes, especially at BigCos, so your options generally boil down to "work for an asshole who's doing evil that you can live with" or "go live in a Unabomber shed". I guess it's important to make sure that "the evil thing you can live with here" isn't just any act of evil.
At a technical level, I don't believe they're specifically working on targeting anyone. They're providing a general-purpose API that Palantir is presumably using to build the target-finding software.
I imagine that's why the implementation got so far along before this blew up. Someone at Anthropic talked with someone at Palantir and they had a "you did what? Did you read the contract terms" moment, and that was after it went into production.
The DoD still has not meaningfully moved to the DoW moniker. To me this represents the most fascist tendency: making announcements and presuming that's enough to change the truth on the ground. The legal entity one contracts with is the DoD. Going along with "DoW" is a signal to me that a party has capitulated to the most absurd form of governance.
Pragmatically, it's for the best to use its preferred name instead of legal name when sucking up to the department and Trump to try to get back in good graces.
I don't think we'd fail to get AGI if Anthropic were to implode, and frankly, right now, I'd rather have someone say clearly: "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children, they can't hear the former and are terrified by the latter, because it might expose them to the condemnation they deserve."
You got me wondering, so I checked to see how much Anthropic's bribed Trump so far. According to Dario, Trump has been soliciting bribes, but they refused to pay, and the contract "renegotiation" is retribution:
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
"As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
It's disgusting honestly. There are likely at least 136 directly reported civilian and child deaths linked to the operations where their services were used. And they are very proud.
The internal memo did read as fairly unhinged and political, which is not the message Dario likes to present. I'm glad he addressed this. It was unprofessional and unhelpful - even if Sam Altman is, in fact, a disgusting lunatic.
The one where he accuses Trump of retaliating against Anthropic after failing to solicit a bribe?
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
I find it frustrating that apparently we just gave up on Trump giving up his tax returns, or putting his businesses into a blind trust. This was a big deal in 2016~2019, but I guess the entire world just decided it wasn't worth it.
Now we have a president who doesn't even hide his bribes, and instead starts multiple cryptocurrencies and has a publicly traded company in order to optimize the bribery. Maybe this is this "Department of Government Efficiency" thing I keep hearing about; it's never been more efficient to bribe public officials.
> I find it frustrating that apparently we just gave up on Trump giving up his tax returns, or putting his businesses into a blind trust. This was a big deal in 2016~2019, but I guess the entire world just decided it wasn't worth it.
When you give a guy who started a coup the keys to the kingdom, instead of a life-long prison sentence, arguing over what his taxes were a decade ago is... Splitting hairs.
When I was living in SF, we had lived in the same apartment for 5 years and then our landlord sold the building. The new owner was doing a condo-conversion and so we got 'evicted' (in reality he paid us a small sum of money to move out since evictions are complex there).
My partner and I were both employed, we were going to be fine (although paying much higher rent) but there was this visceral, "The place that we thought was home is being taken and there's nothing we can do about it" unease in the pit of my stomach that stuck with me for months and months.
This really feels the same as that really unpleasant time.
"fully autonomous weapons and mass domestic surveillance"
I still don't buy this discussion. How exactly do they want to use an llm for autonomous weapons, given it's not even possible to reliably have a piece of code written without having to review it?
And how is a model with a 1M-token window supposed to be useful for mass surveillance?
Honest questions, I am sure I am missing some details. Because so far it looks like a very sophisticated marketing strategy.
When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
My, the world has changed.
(spoiler alert)
Wasn't this one of the plot points of the Val Kilmer movie Real Genius? They had to trick the students into creating a weapon by siloing them off from each other and having them build individual but related components? How far we've fallen! Nobody has to take ethics during undergrad anymore I guess...
>But don’t get distracted by that; I didn’t know at the time.
Caleb Hearth: "Don't Get Distracted" https://calebhearth.com/dont-get-distracted
1997. The War on Terror has a lot to answer for.
https://youtu.be/tH0bTpwQL7U
Doubly so for "business ethics" classes which became à la mode in the post-Enron era. They attempt to teach fundamental ethics, when at most it should be a very thin layer on top of a well founded internal moral framework and well-accepted ethical standards inculcated from day 1 of kindergarten.
Morals are taught 0-9 [0], Ethics perhaps slightly later as it requires more complex thought processes.
[0] https://familiesforlife.sg/pages/fflparticle/Young-Children-...
"A laser is a beam of coherent light." "Does that mean it talks?"
"Your stutter has improved." "I've been giving myself shock treatment." "Up the voltage."
"In the immortal words of Socrates, who said, 'I drank what?'"
"Is there anything I can do for you? Or...more to the point... to you?"
"Can you drive a six-inch spike through a board with your penis?" "...not right now." "A girl's got to have her standards."
"What are you looking at? You're laborers, you're supposed to be laboring! That's what you get for not having an education!"
-- I'm sure I could remember more if I thought about it for a bit. That movie made quite an impression on young me.
Professor Hathaway: "I want to start seeing more of you around in the lab."
Chris Knight: "Fine. I'll gain weight."
... or so she was told.
She was unknowingly designing glide-bomb avionics.
IDK, maybe it's different outside the National Capital Region. But here, you could probably shout "For The Empire" as a toast in the right bars and people wouldn't think you were joking.
They're not. But if it makes you feel better to believe that, everyone has their own coping mechanism.
What I realized later was that none of the civilian markets could possibly justify the cost of the project.
The particular type of signal fitting I was doing was only achievable by a few thousand expensive domain experts in the world, so, I think that addresses your other point.
It's amusing, amidst the US bombing Iran, incarcerating the president of Venezuela and his wife after slaughtering everyone who was in the room with him, seizing oil tankers off Cuba, continuing the siege of Gaza, and on and on, to start getting sanctimonious about China.
Taiwan includes Kinmen island in Xiamen harbor, so a mainland invasion of Taiwan would be mainland China "invading" an island in its own harbor.
Also mainland China does not recognize Taiwan and mainland China to be separate countries. The US does not recognize Taiwan and mainland China to be separate countries. Taiwan does not consider Taiwan and mainland China to be separate countries. I'm not sure what the invasion would be, a country invading itself? It would be like if the US president sent armed agents to Minnesota who started killing people willy nilly - oh yaa, that just happened.
The most satisfying thing is, if mainland China did choose to reassert its rightful authority in Taiwan against the colonial powers, there's absolutely nothing those Western powers could do about it. Just like Russia's pushback against the West trying to move its NATO armies to Russia's western borders in Ukraine. It's amusing to see the US flailing about, hitting a Venezuelan here, a Cuban there, to try to look tough. I guess Nicaragua is next on the list. The changes coming in the 21st century are welcome. A bozo like Trump as president is a sign of a fading West.
By this logic, America not recognising the sovereignty of Venezuela, Iran and Cuba (and Israel of Palestine, as well as vice versa) makes everyone an a-okay actor!
Obviously anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
https://en.wikipedia.org/wiki/Roger_Fisher_(academic)#Preven...
> Fisher [...] suggested implanting the nuclear launch codes in a volunteer. If the President of the United States wanted to activate nuclear weapons, he would be required to kill the volunteer to retrieve the codes.
>> [...] The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being. [...]
>> When I suggested this to friends in the Pentagon they said, "My God, that's terrible. Having to kill someone would distort the President's judgment. He might never push the button."
> — Roger Fisher, Bulletin of the Atomic Scientists, March 1981[10]
Counsel: "How do you explain the nanny cam footage of you planting a weapon?"
Robot: "I have encountered an exception and must power off. Shutting down."
"If we develop <terrible weapon> we can save so many lives of our soldiers". It always ends up being used to murder civilians.
We need to do it because our enemies are doing it, in any case.
These people know the best way to farm karma on social media is to bash Western society, they expect the same on HN.
Of course they have the means. Nothing technical prohibits them from blowing up a couple of carriers. But the price they would have to pay is way too high.
Eventually, unfortunately, we will build these systems, but it is weak to argue that the technology isn't ready right now and that is why we won't build them. No matter when these systems come online there will be collateral damage, so there will be no right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing that the technology isn't ready; no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead, Anthropic should be arguing that we aren't ready as a society, and that is why they aren't going to support them.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
I guess let the record state that I am deeply morally opposed to automated killing of any kind.
I am sick to my stomach when I really try to put myself in the shoes of the indigenous peoples of Africa who were the first victims of highly automatic weapons, “machine guns” or “Gatling guns”. The asymmetry was barbaric. I do hope that there is a hell, simply so that those who made the decision to execute those peoples en masse have a place to rot in eternal hellfire.
To even think of modernizing that scene of inhumane depravity with AI is despicable. No, I am deeply opposed to automated killing of any kind.
The “machine gun” has a more complicated history, and the first practical example may have been Gatling’s, or an earlier example used in Europe https://en.wikipedia.org/wiki/Machine_gun
People are always worried about getting rid of humans in decision-making. Not that humans are perfect, but because we worry that buggy software will be worse.
Things come and go. Attitudes change. Who knows where things will be in another 19 years.
Aside from that - there are a lot more people in tech now. It grew too fast to maintain all the values it had back in the 00's and earlier.
He's referring to places like Google or Microsoft having to back out of deals regularly with countries and US agencies after employee backlash.
It seems like nowadays the backlash is indeed smaller and the heads of said companies are willing to move forward anyway.
That is a significant change from the past.
Get off your high horse and stop talking down to a person you don't know. Take your anger out on someone else.
Trite as it may seem, "don't be evil" is actually a very, very strong statement, as is "do no harm". 70 percent of tech market cap these days is a million tiny harms, a warm pool of diluted evil.
Well, aren't you just the sweetest little emotional manipulator? Ethical, for sure. Perhaps, you are angry due to ignorance and react poorly to someone shattering your illusions.
there's a corrupting force we're not coming to terms with here
This is what baffles me when I see people flocking to them for subscriptions based on these events.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
There was the 3 laws of robotics, where a robot/software was not to do any harm.
There were concerns over privacy and refusal to share your name and info on the internet. After all, it's full of strangers and there was danger.
Don't get into cars with strangers
I don't want wars.
But tell me, what would you like your country to do when conflicts arise over natural resources? Would you want your country to just give up that resource your people depend on, or maybe split it 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
I am not. Every country is corrupt, and war makes a lot of money for powerful people, but does it justify sabotaging your own existence?
‘Sabotaging your own existence’ is a magic sentence that can justify everything. Israel can kill children more than any other nation in the world, and justify it by ‘not sabotaging their own existence’
Anyone can do anything with this perspective. This is the exact point here. Pull yourself back if you are about to ‘not sabotage your own existence’ by simply killing innocent civilians because you believe a computer algorithm told you that in about 15 years they or their children might do something harmful.
Like we have solar now. People talk about how it saves environment. But I think another similar win would be reduction in dependency on oil, and countries won't have to go to war over oil. But it takes time...
But it seems what technology gives, technology takes away. Because new technologies come with their own resource requirements. And the cycle looks like it will go on...
> defending your country
I am afraid that this does not always have to be an incoming attack. What if some country has a resource that your country badly needs, without which your people will suffer badly, and imagine the same is true for the other country. How much of a hit to your economy and QoL are you willing to sustain before you ask your government to go out there and get the required resource by force?
I totally get that war is profitable, and most wars cannot be justified. But ideas like this sound like sabotaging your own country and thus your own existence.
Also: https://gist.github.com/kemitchell/fdc179d60dc88f0c9b76e5d38... .
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam war was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will be similar to observers of later wars.
Not in same ballpark. There’s no Iraq generation the way there’s a Vietnam one.
> Spoiler alert, a bunch of the current ones are going to be seen similarly too.
No they won’t. The lack of a draft and mass domestic casualties dramatically changes the picture. Especially on the saliency axis.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elect its first black president.
The economic toll will be greater with these wars than Vietnam.
https://en.wikipedia.org/wiki/15_February_2003_Iraq_War_prot...
https://en.wikipedia.org/wiki/United_States_military_casualt...
An enormous amount of political support, because of the negative perception of AI in society.
It's the effect of a cult of personality. People don't feel like they want or need this. But they're on board with the cult.
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
I guess there's a reason we saw plenty of articles about software used somewhat defensively -- such as distinguishing whether a particular "bang" was a gunshot, and where it likely came from -- instead of offensively -- such as improvements to targeting software.
For every company that stands on values, there is another that will do some shady shit for a dollar.
https://www.google.com/maps/@37.6735255,-122.389804,3a,31.2y...
There wouldn’t be a Silicon Valley without World War 2 and US gov. funding of Stanford to develop radar basically.
The initial investment from then gave critical capital mass for Stanford, the VCs, and the tech companies of today.
https://youtu.be/ZTC_RxWN_xo?si=gGza5eIv485xEKLS
> When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war,
In 2007 the US was the sole world hegemon. It could afford to let the smartest people work on ad delivery systems.
In 2026, in certain fields, China has a stronger economy and military. Russia is taking over Europe. India and Brazil are going their own way. China is economically colonizing Africa.
The US can't afford to let its enemies develop strong AI weapons first, on the naive assumption that Russia/China/others will also have naive thinkers who will demand the same.
---
People were just as naive with respect to Ukraine. They were saying that mines and depleted uranium shells are evil. But when Russia attacked, many changed their minds because they realized you can't kill Russians with grandstanding on noble principle. You kill them with mines and depleted uranium shells.
Hopefully people here will change their minds before a hot war. As the saying goes, America always picks the right solution after trying all the wrong ones.
I don't think it was very common really.
I think for the most part it was tech companies whose systems were not being used for war who like to boast that they refused to let their systems be used for war. Or that they creatively interpreted "for war" that since they were not actually manufacturing explosives, they could claim it was not for war.
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
Watch as the same people pushing for war today will pretend they were always against it 10 years from now.
I guess we're just doomed to repeat the same cycles.
No. Your tech experience was an aberration.
For almost all of history, including recent history, tech and the military went together. Whether compound bows, spears, or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon and strategy used against colonial subjects is eventually used against the imperial core, eg [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
[1]: https://www.archives.gov/milestone-documents/president-dwigh...
[2]: https://en.wikipedia.org/wiki/Imperial_boomerang
[3]: https://www.amnestyusa.org/blog/with-whom-are-many-u-s-polic...
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
This humanist view unfortunately doesn’t hold anymore in the modern world. Boomers will be happy as long as not a single dollar is spent on housing, so that their own homes can appreciate in value. Republicans would rather burn money than spend it on houses, hospitals, or bridges that might benefit immigrants or “other people” more than themselves.
I used an American political party only as a reference, but the same phenomenon can be seen in many countries around the world. Society has become incredibly cynical and has regressed a lot in terms of humanity.
Not sure what boomers you are talking about. I for one am disgusted at what is happening with things in general and with housing in particular. I do not want my house to appreciate ad infinitum. I do not want an ever-growing class of have-nots so that a few jerks can own the governments and half of the world.
Generational politics has definite limits and isn't absolute, but it's also true that the Baby Boomer generation as a whole enjoyed the greatest wealth-generation opportunities in history. They fled to the suburbs, subsidized by the government every step of the way, and then basically pulled up the ladder behind them. They also refuse to quit.
And then when crime receded (and there are multiple theories for why this happened), they moved back into the city, bought up all the real estate and then blocked building affordable housing there too.
I personally have a theory that the parting gift of the Baby Boomer generation will be to get rid of Social Security and Medicare since they don't need it anymore.
See https://news.ycombinator.com/item?id=47270470
> Dean Ball: What Secretary Pete Hegseth announced is a desire to kill Anthropic. It is true that the government has abridged private-property rights before. But it is radical and different to say, brazenly: If you don’t do business on our terms, we will kill you; we will kill your company. I can’t imagine sending a worse signal to the business community. It cuts right at heart at everything that makes us different from China, which roots in this idea that the government can’t just kill you if you say you don’t want to do business with it, literally or figuratively. Though in this case, I’m speaking figuratively.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
> While the facility was functioning as a school, CBC News has confirmed a previous New York Times report stating the building was once part of an Islamic Revolutionary Guard Corps (IRGC) base.
https://www.cbc.ca/news/world/iran-school-bombing-investigat...
Assuming AI was used for finding targets, perhaps the training data was out of date?
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans. Even universities don't push students towards critical thinking and ethics; it has all turned into vocational training, turning humans into crafting tools.
At the same time, at Harvard, I attended VR innovation week, and the last panel discussion of the day was Ethics and Law, which was led by a law professor, a journalist, and a moderator, and attended by a handful of people. I inquired why founders, CEOs, or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any qualified enough to take part. The discussion was basically: how does the way companies build products affect society? Laws aren't a founder's problem, that's what lawyers are for, and ethics? Who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the individual thinking level; or more like not thinking, just wanting and needing.
But what is the option? I feel each of us wants to draw a line based off of our morality but the circumstances don't allow us to stick to it (still gotta pay rent)
We are all on a Titanic the way I see it. It's just the DARPA guy is gonna sink first. Rest of us are just pretending to be Jack trying to be the last ones to go.
Orwell wrote about this: https://orwell.ru/library/articles/science/english/e_scien
> "The fact is that a mere training in one or more of the exact sciences, even combined with very high gifts, is no guarantee of a humane or sceptical outlook."
This seems more like credentialist arrogance than a well-reasoned judgment.
This line of thinking, that creating machines that kill is unethical, will destroy the West. If the US wasn't so good at producing killing machines in WW2, you wouldn't be here to complain about DARPA ethics.
Instead of having engineers develop the most advanced machines for killing (i.e. protecting the West) such people go into producing the most addictive content delivery systems, destroying the brains of minors.
"Once the rockets are up, who cares where they come down? That's not my department!" says Wernher von Braun.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving OpenAI when this is Anthropic's stance? Are their two narrow exceptions enough to draw the ethical boundary people are comfortable with?
https://www.reddit.com/r/changemyview/comments/4ta3hh/cmv_th...
There are many reasons to detest the current political landscape. Don't get distracted.
Why wouldn’t you move your dollars to someplace incrementally better?
Their statement doesn't make it sound like they are incrementally better; they are trying to bend over backwards to keep working for war.
Moving my subscription is not terribly consequential, but since the products are so similar and easy to substitute with one another for my uses, it seems best to participate in what in aggregate is a signal that is being noticed and commented on and interpreted to mean that a significant number of people who buy AI access do care about this.
At the beginning, they’re usually doing it for the money — and maybe some level of patriotism. Eventually they find themselves involved in things so ugly that they can’t really stomach it anymore. At the same time, they can’t easily back out either.
Then a new CEO comes in and thinks the previous guy was too soft, "He couldn’t handle it, but I can."
And the cycle continues.
We, as humanity, collectively recognized the weight of our creation, and decided to walk it back.
Discussing “AI alignment” in the same breath as aligning with a “Department of War” (in any country) is simply not an intellectually sound position.
None of the countries we’ve attacked this year pose an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical “AI alignment” concerns
Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
This is not the forbidden love story I would've asked for.
I fear that frontier AI is going to be nationalised for military purposes, not just in the US but across the globe.
At the same time, I really don’t know what Anthropic were expecting when they described their technology as potentially more dangerous than an atom bomb while agreeing to integrate purpose-built models with Palantir to be deployed in high-security networks for classified military tasks.
To me the most Orwellian thing is everyone using the newspeak name for the DoD.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
Department of war is a more honest name, department of defense is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense". If you ask someone, "Do you want to spend money on education or war?", you will get a different answer than if you ask, "Do you want to spend money on education or defense?".
[0] Palmer Luckey talking to Mike Rowe about the name change: https://youtu.be/dejWbn_-gUQ?t=1007
I'm confused. This seems like a bad change.
Regarding Luckey's other statements, I can almost assure you that the administration did not think as much about it as Luckey has. Insecure Pete just thought the title "Secretary of Defense" was too wussy so he wanted to be Secretary of War.
Also, I think people mainly have issue with the fact that Trump is just randomly and unilaterally renaming random stuff and demolishing buildings without congressional approval. If he had gone through the proper channels then maybe people could ignore it. Maybe. We'd probably still have qualms about it, but at least we'd know that our representatives had a say.
Just because a name is more accurate doesn't mean that it's the new name. Otherwise we wouldn't be the United States of America (we are literally not united because Hawaii and Alaska are not contiguous, and we are figuratively not united because... well, you know).
They changed the name and it matches the intention. It is not a newspeak name anymore.
No, they didn't. The name of the department at issue is “the Department of Defense” and of its head the “Secretary of Defense” — these are set in statute (the latter for a slightly longer time than the former) and the relevant statutes have not been changed since the office of the Secretary of Defense was created in 1947 and the Department of Defense was created in 1949. The executive branch has just decided to use a nickname for a government department (which is the historical name of a prior department that was split to form two of what are now the three main direct subordinate elements within that department.)
However I suppose Amodei in this context can be included in the former group.
"Palantir's Maven uses Anthropic's Claude code, sources say."
https://www.reuters.com/technology/palantir-faces-challenge-...
It is always astonishing that the reviled mainstream press is more critical than hackers these days.
Source: https://www.theguardian.com/us-news/2026/feb/26/anthropic-pe...
What, I ask, is the point of having laws and rules if you can just ignore the ones you don't like?
It's just a name, who cares?
Not me.
…but if you break the law, you break the law. Not "maybe, who cares, it's not me being waterboarded, I don't care"…
If you break the law. You break the law.
Otherwise, who gives a duck what congress says?
Just fire them all and crown Trump King of America.
I’m being facetious. …but maybe it's more of a big deal than you superficially pretend it is.
It’s just another case of the administration blatantly breaking the rules.
…so, you know. If you're ok with no laws or rules, I guess it's fine.
Seems a bit chaotic to me. I prefer my governing body to be… marginally bound by some kind of responsibility to something or someone.
It's very evident in his statement: he's trying very hard to clarify what that list means for corporations and downstream business with large commercial and strategic companies.
Imagine if Microsoft, Amazon, Google, etc. decided that they don't want ANY sort of minuscule risk (real or perceived) to their massive public-sector business lines (via all their DoD, DoJ, NHS and other three-letter agencies, state agencies, city and local municipalities, etc.) and decide to cancel their enterprise Anthropic licenses - which is a VERY possible scenario.
And these are the big players; there's a whole slew of medium and small players, all with existing government contracts, that need to tread carefully.
Slow it down as much as possible to give us more time.
Or more importantly - say something that says nothing.
When you say nothing to politicians like this then eventually the story moves elsewhere.
But these guys had to put a stake in the ground and yell it out loud.
In politics you must know when to speak and what to speak and how to speak without speaking.
The Silicon Valley tech jobs we have now have a history rooted in World War 2 and US government funding of it.
https://youtu.be/ZTC_RxWN_xo?si=gGza5eIv485xEKLS
I’m not saying war is good or anything, but also don't ride a high horse cause none of it would be here w/o WW2.
Calling it the Department of Defense implies a system of laws, checks and balances which no longer exists.
I'm not obeying in advance, but I'm not giving lip service to normality, either.
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
It's very reminiscent of the half-assed security theater that Google and Apple fought over. Neither one of them resisted government coercion in the end, they just took different routes to end up as federal asskissers.
Posted here: https://news.ycombinator.com/item?id=47195085
This is a message to the people working on that line of business at Anthropic. You don't have to do it; you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
Or who simply had a different point of view than you.
I've never quit a company on purely ethical grounds, but I have turned down interviews and offers because of them. They're probably not going to go bankrupt just by not hiring me, but I like to think that making it incrementally harder to find talent slows down their progress of doing evil things, if only a little.
That's probably still a delusion of grandeur on my end, but we all should have an ethical line that we won't cross; most of us end up working for monsters and/or assholes, especially at BigCos, so your options generally boil down to "work for an asshole who's doing evil that you can live with" or "go live in a Unabomber shed". I guess it's important to make sure that "the evil thing you can live with here" isn't just any act of evil.
I imagine that's why the implementation got so far along before this blew up. Someone at Anthropic talked with someone at Palantir and they had a "you did what? Did you read the contract terms" moment, and that was after it went into production.
https://news.ycombinator.com/item?id=47269649
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
Now we have a president who doesn't even hide his bribes, and instead starts multiple cryptocurrencies and has a publicly traded company in order to optimize the bribery. Maybe this is this "Department of Government Efficiency" thing I keep hearing about; it's never been more efficient to bribe public officials.
When you give a guy who started a coup the keys to the kingdom, instead of a life-long prison sentence, arguing over what his taxes were a decade ago is... Splitting hairs.
May we all see better times.
When I was living in SF, we had lived in the same apartment for 5 years and then our landlord sold the building. The new owner was doing a condo-conversion and so we got 'evicted' (in reality he paid us a small sum of money to move out since evictions are complex there).
My partner and I were both employed, we were going to be fine (although paying much higher rent) but there was this visceral, "The place that we thought was home is being taken and there's nothing we can do about it" unease in the pit of my stomach that stuck with me for months and months.
This really feels the same as that really unpleasant time.
Be proud of that!
I still don't buy this discussion. How exactly do they want to use an llm for autonomous weapons, given it's not even possible to reliably have a piece of code written without having to review it?
And how is a 1M token window model supposed to be useful for mass surveillance?
Honest questions, I am sure I am missing some details. Because so far it looks like a very sophisticated marketing strategy.
Probably the same way Claude can play Pokemon: give it a bunch of information and let it make a decision by itself to achieve a specified goal.
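The pattern described above is just an observe-decide-act loop around a model call. A minimal sketch, with `call_model` as a hypothetical stand-in stub (not any real API) that maps observations to actions:

```python
def call_model(observation: str) -> str:
    """Hypothetical stand-in for an LLM call: pick an action for the goal.

    A real system would send the observation plus a goal prompt to a model;
    here we just return canned choices to illustrate the loop's shape.
    """
    if "wild POKEMON appeared" in observation:
        return "press A"
    return "walk north"

def agent_loop(observations: list[str]) -> list[str]:
    """Feed each observation to the model and collect the chosen actions."""
    actions = []
    for obs in observations:
        actions.append(call_model(obs))
    return actions

print(agent_loop(["tall grass ahead", "wild POKEMON appeared"]))
# → ['walk north', 'press A']
```

The open question in the parent comment still applies: nothing in this loop verifies the decision, so reliability rests entirely on the model call it wraps.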