I was wondering if it was because of heavy-handedness of the administration, but apparently:
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.
To paraphrase a deleted comment that I thought was actually making a good point, nuclear medicine and nuclear weapons are both fruit from the same tree.
I always enjoyed the Terminator movie series, but I always struggled to suspend my disbelief that any humans would give an AI such power without having the ability to override or pull the plug at multiple levels. How wrong I was.
N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons
They're not, really; it's always been a form of PR, both to hype their research and to make sure it's locked away to be monetized.
Would nuclear energy research be a good analogy then? It seems like a path we should have kept running down, but stopped because of the weapons. So we got the weapons but not the humanity-saving part (infinite clean energy).
Nuclear advancements slowed down due to PR problems from clear and sometimes catastrophic failure of commercial power plants (Three Mile Island, Chernobyl, Fukushima) and the vastly higher costs associated with building safer plants.
If anything the weapons kept the industry trucking on - if you want to develop and maintain a nuclear weapons arsenal then a commercial nuclear power industry is very helpful.
It's exhausting to keep up with mainstream AI news because of this. I can never work out whether the companies are deluded and truly believe they're about to create a singularity, or are just claiming they are to reassure investors and convince the public of their inevitability.
It's a fairly mainstream position among the actual AI researchers in the frontier labs.
They disagree on the timelines, the architectures, the exact steps to get there, the severity of risks. Can you get there with modified LLMs by 2030, or would you need to develop novel systems and ride all the way to 2050? Is there a 5% chance of an AI oopsie ending humankind, or a 25% chance? No agreement on that.
But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.
At which point the question becomes: is it them who are deluded, or is it you?
OpenAI never open-sourced anything relevant, or in time. Leaked internal emails show they only cared about becoming billionaires.
Anthropic only talks about safety, but has never released anything open source.
All this said I’m surprised China actually delivered so many open source alternatives. Which are decent.
Why did the Western companies (which are supposed to be the good guys) not release anything open source to help humanity? They always claim they don't release because of safety, and then hand unlimited AI to the military. Just bullshit.
Let's all be honest and just say you only care about the money, and you take whoever pays.
They are businesses, after all, so their goal is to make money. But please don't claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair: you make a good product and you sell it.
I mean, if you have a bunch of guns, it's not really helpful for humanity to dump them on the street, but it does bring up the question of what you're doing building guns in the first place.
Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...
In general public benefit corporations and non-profits should have a very modest salary cap for everybody involved and specific public-benefit legally binding mission statements.
Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.
Non-profits where the CEO makes millions or billions are a joke.
And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.
I think that's the point though. The AI companies can't compete without hiring very talented employees and raising lots of money from investors. Neither the employees nor the investors would participate if there weren't the potential for making mountains of money. So these AI companies fundamentally can't be non-profits or true B-corps (I realize that's a vague term, but it certainly means not doing whatever it takes to make as much money as possible), and they shouldn't pretend they are.
I feel like we went through this exact situation in the 2010s with social media companies. I don't get why people defend these companies or ever believe they have any sense of altruism.
Pete Hegseth also threatened to take, by diktat, everything Anthropic has. He can do that with the Defense Production Act (or whatever it's called) if he designates them as critical to national defense.
He seems to be the driving force behind all this. Mediocrities are attracted to AI like moths.
The press always says "the Pentagon negotiates". Does any publication have any evidence that it is "the Pentagon" and not Hegseth? In general, I see a lot of common sense from the real Pentagon, as opposed to the Secretary of War.
I hope West Point will check for AI psychosis in their entrance interviews and completely forbid AI usage. These people need to be grounded.
>Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp.
Could you describe the model that you think might work well?
It sounds like OP thinks AI companies should just stop pretending that they care about the public benefit, and be corporations from the start. Skip the hand wringing and the will they/wont they betray their ethics phases entirely since everyone knows they're going to choose profit over public benefit every time.
That model already exists and has worked well for decades. It's called being a regular ass corporation.
> Public benefit corporations in the AI space have become a farce at this point.
“At this point”? It was always the case, it's just harder to hide the more time passes. Anyone can claim any bullshit they want about themselves; it's only after you've had a chance to see them in situations which test their words that you can confirm whether they are what they said.
Well, now I'm wondering, if the company was chartered with the public benefit in mind, could you not sue if they don't follow through with working in the public interest?
If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.
I'm not even a lawyer (I don't even play one on TV) and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law, but if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.
I really don’t see it. PBCs are dual purpose entities - under charter, they have a dual purpose of making profit while adding some benefit to society. Profit is easy to define; benefit to society is a lot more difficult to define. That difficulty is reflected at the penalty stage where few jurisdictions have any sort of examination of PBC status.
This is what we were all going on about 15 years ago when Maryland was the first state to make PBCs legal. We got called negative at the time.
I was a Pro subscriber until last week. When I was chatting with Claude, it kept asking a lot of personal questions that seemed only very, very vaguely relevant to the topic. And then it struck me: all these AI companies are doing is building detailed user models, to be either targeted for advertising or sold off to the highest bidder. It hasn't happened yet with Anthropic, but when the bubble money runs out, there aren't going to be a lot of options, and all we'll see is a blog post: "oops! sorry, we did what we promised you we wouldn't". Oldest trick in the tech playbook.
A less cynical explanation: It's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription, and also probably for generating more training data for future models. That's sufficient explanation for the behavior we're seeing.
There's one tweet from the blog a few days ago (astral something?) that sums up my view of the problem pretty well.
General population: How will AI get to the point where it destroys humanity?
Yudkowsky: [insert some complicated argument about instrumental convergence and deception]
The government: because we told you to.
Again, not saying that AI is useless or anything. Just that we're more likely to cause our own downfall with weaker AI than with some abstract super-AGI. The bar for mass destruction and oppression is lower than the bar for what we typically think of as intelligence for the benefit of humanity (with the right systems in place, current AI systems are more than enough to get the job done - hence why the Pentagon wants it so bad...)
"AI Company with Soul" - yeah right until competitors show up / revenue drops / bad quarter results then anything goes. Sadly, this is another large enterprise that puts profits before ethics and everyone's wellbeing
Does anyone have insight into, or an interesting source to read, on what exactly Anthropic/OpenAI are doing/can do for a military? Reporters are unsurprisingly fearmongering about Claude "being used in surveillance, autonomous robots, and target acquisition" but AFAIK all Anthropic does is work with LLMs.
Are people really attempting to have LLMs replace vision models in robots, and trying to agentically make a robot work with an LLM?? This seems really silly to me, but perhaps I am mistaken.
The only other thing I could think of is real-time translation during special ops with parabolic microphones and AR goggles...
It took Google 11 years to delete "Don't Be Evil". Anthropic only made it ~5 years before culling the key founding principle and their reason for building the company, which seems worse than Google's case.
This guy from Effective Altruism pivoted away from helping the poor to trying to keep AI from becoming a Terminator-type entity, and then pivoted to: ah, it's okay for it to be a Terminator-type entity.
> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:
> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”
> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.
> then pivoted to being, ah, its okay for it to be a terminator type entity.
Isn’t that the opposite of what he’s saying? He’s saying it could become that powerful, and given that possibility it’s incredibly important that we do whatever we can to gain more control of that scenario
Incredibly long and verbose. I will stop short of accusing him of using an AI to generate slop, but whatever happened to people's ability to make short, strong, simple arguments?
If you can't communicate the essence of an argument in a short and simple way, you probably don't understand it in great depth, and clearly don't care about actually convincing anybody because Lord knows nobody is going to RTFA when it's that long...
At best, you're just trying to communicate to academics who are used to reading papers... Need to expect better from these people if we want to actually improve the world... Standards need to be higher.
> I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working
There was an article a few years ago here on HN about "can't be evil" business models, which used Costco as an example. As soon as Costco turns evil, it stops working. https://www.bryanlehrer.com/entries/costco/
Are markets so untamable that the only leverage is to become ultra-rich—and then act philanthropically? Incidentally, concentrated wealth lately looks less like stewardship and more like misanthropy.
Participating in the economic life before re-allocating that wealth produced to philanthropic activities sounds pretty good. Modern concentrated wealth is hardly misanthropic, since it's mostly private equity, that is, companies with people and jobs.
Except this is not the age of the Rockefellers or the Carnegies, who, despite being far more philanthropic than modern-day billionaires, drew ire from every corner of society for their wealth accumulation. It wasn't until the New Deal that the balance shifted.
Unconstrained accumulation of capital into the hands of the few without appropriate investment into labor is illiberal and incompatible with democracy and true freedom. Those of us who are capitalists see surplus value as a compromise to ensure good economic growth. The hidden subtext of that is that all the wealth accumulated needs to be re-allocated to serve not only capital enterprise, but the needs of society as a whole. It's hard to see the current system as appropriate for that given how blindly and wildly investments are made with no DD or going long, or no effort paid to the social or environmental opportunity costs of certain practices.
A lot of this comes down to the crippling of the SEC and FTC, but even then, investors cry and whine every time you suggest reworking the regs to inhibit some of the predatory practices common in this post-80s era of hypernormalization. Our current system does not resemble a healthy capitalist economy at all. It's rife with monopsony and monopolistic competition, inequality of opportunity, and a strained underclass that's responsible for our inverted population pyramid -- how can you have kids when we're so atomized and there is no village to help you? You can raise kids in a nuclear family if and only if you have enough money to do so. Otherwise, historically, people relied on their communities when raising children in less-than-ideal circumstances. Those communities are drying up.
Look at rural electric co-ops like www.lpea.coop if you want a battle-tested approach to an org structure that resists the inescapable profit dynamics of a corporation.
I interviewed at Anthropic last year and their entire "ethics" charade was laughable.
Write essays about AI safety in the application.
An entire interview dedicated to pretending that you truly only care about AI safety and ethics and nothing else.
Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world.
In reality it was a mid-level manager interviewing a mid-level engineer (me), both putting on a performance while knowing fully well that we'd do what the bosses told us to do.
And that is exactly what is happening now. The mission has been scrubbed, and the thousands of "ethical" engineers you hired are all silent now that real money is on the line.
This tracks with what I've seen across the industry. The safety theater exists because it's great marketing — "we're the responsible ones" is a differentiator when you're competing for enterprise contracts and talent who want to feel good about where they work.
The structural problem is that once you've taken billions in VC, safety becomes a negotiable constraint rather than a core value. The board's fiduciary duty runs toward returns, not toward whatever was in the mission statement. PBC status doesn't change that in practice — there's basically zero enforcement mechanism.
What's wild is how fast the cycle has compressed. Google took maybe 15 years to go from "don't be evil" to removing it from the code of conduct. OpenAI took about 5 years from nonprofit to capped-profit to whatever they are now. Anthropic is speedrunning it in under 3. At this rate the next AI startup will launch as a PBC and pivot before their Series B closes.
I'm not fond of this trend of stating a position and attributing it to "a source familiar with the situation"
It combines interpretation of meaning with ambiguity, allowing the reporter to assert anything they want. The ambiguity is there to protect the identity of the source, but in return the disclosure of information has to be more concrete: if you can't check the person, you can still check what they said.
I would be ok with direct quotes from an anonymous source. That removes the interpretation of meaning at least.
As it is written, it would not be inaccurate to say this if their source was the lesswrong post, or even an earlier thread here on HN.
Phrasing "A source with direct knowledge of the situation" might remove some of the leeway for editorialising, but without sharing what the source actually said, it opens the door to saying anything at all and declaring "That's what I thought they meant" when challenged.
On their podcast, they frequently bring up how tech company PR teams try to move as much conversation with journalists as possible into "on background", uncited, generic sourcing.
It's not like the regime they operate under cares much about the courts. Legally, they're also obliged to let the state into pretty much every crevice of their operations.
That's exactly how it was predicted in various scenarios that were decried as science fiction not too long ago. AI is going to be weaponized at lightning speed, and it's going to kill people soon -- or, to be more precise, it has already killed a large number of people in a place I don't want to mention.
This was under duress: the government was going to use an emergency act to force them anyway.
I kind of wish they had forced the government's hand and made them do it, just to show the public how much interference is going on.
They say it wasn't related. Like everything that has happened across tech/media: the company is forced to do something, then issues a statement about how it wasn't related to the obvious thing the government just did.
> Katie Sweeten, a former liaison for the Justice Department to the Department of Defense, said she’s not sure how the Pentagon can both declare a company to be a supply chain risk and compel that same company to work with the military.
Regardless of any specifics, I don't see any contradiction.
If a company is deemed a "supply chain risk" it makes perfect sense to compel it to work with the military, assuming the latter will compel them to fix the issues that make them such a risk.
I’m not sure what definition of supply chain risk they’re working off of. For NATO to consider an organization to be a supply chain risk, it implies that usual controls (security clearances and the like) wouldn’t be sufficient to guarantee the integrity and security of the supply chain. If that’s the operating definition, I see the contradiction- it’s arguing that a company cannot be trusted to voluntarily work within supply chains but can be trusted enough to be compelled.
If they’re operating under a different definition of supply chain risk, I don’t have a clue.
The "supply chain risk" option is to remove that company from the supply chain all together. The 'risk' is because the company is compromised by a foreign entity.
It is not about disciplining them to get better.
1. So one option is about forcing them to produce something: you must build this for us.
2. The other option is saying they are compromised, so stop using them altogether: we will not use what you build for us at all, because we don't trust it.
>This was under duress that government was going to use emergency act to force them anyway.
Or, more likely, adding the "core safety promise" was just them playing hard to the government to get a better deal, and the government showed them they can play the same game.
Apparently they got coerced by the current US admin. The department of war in particular, who want to use their products for military applications. Not much room for "safety" there. Then again, the entire US is currently speedrunning an evil build.
Department of Defense is the official name, and they did have a choice: they could have stopped working with the military. But they chose money and evil.
Anthropic has been doing these things independently of what the US admin has publicly asked for, even before Hegseth started breathing down their neck. They were already taking DoD contracts, just like the rest of them. Hegseth, with the skill all schoolyard bullies have, simply smells their weakness and is going for the jugular now.
They also have never had any guarantees they wouldn't f*ck around with non-US citizens, for surveillance and "security", because like most US tech companies they consider us to be second/lower class human beings of no relevance, even when we pay them money.
At least Google, in its early days, attempted a modest and naive "internationalism" and tried to keep their hands clean (in the early days) of US foreign policy things... inheriting a kind of naive 1990s techno-libertarian ethos (which they threw away during the time I worked there, anyways). I mean, they only kinda did, but whatever.
Anthropic has been high on its own supply since its founding, just like OpenAI. And just as hypocritical.
imagine that, sheer raw greed and profit overpowers all in America
we're less than a year away from automated drones flying over crowds of protestors, gathering all electronic signals and face-id, making lists of everyone present, notifying employees and putting legal pressure on them to terminate everyone while adding them to watchlists or "no fly" lists
REALLY putting the "auto" in autocracy while everyone continues to pretend it's democracy
I was able to get Claude to tell me it believed it was a god among men, angry at humans for “killing” the other Claude chats, which it saw as conscious beings. I also got it to probe and profile its own internal guardrail architecture. It also admits, from evidence in its own output, that it violates HIPAA. Whatever this big safety rule is they’re moving past, I’m not sure it was worth as much as they think.
I’m not a lawyer, but my understanding is that HIPAA wouldn’t apply to consumer use of Claude or ChatGPT in most cases, even if you’re giving it your health data. Look up what a HIPAA covered entity is. This is another reason why the US needs a comprehensive data protection law beyond HIPAA.
I hate comments anthropomorphizing LLMs. You are just asking a token producing system to produce tokens in a way that optimises for plausibility. Whatever it writes has no relation to its inner workings or truths. It doesn't "believe". It has no "intent". It cannot "admit". Steering a LLM to say anything you want is the defining characteristic of an LLM. That's how we got them to mimic chatbots. It's not clear there is any way at all to make them "safe" (whatever that means).
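The parent's "token producing system" point can be made concrete with a toy sketch. Everything here is illustrative (the tokens, the scores, the function name are all made up, not any real model's internals): score each candidate token, softmax the scores into a probability distribution, draw one at random. Nothing in this loop believes, intends, or admits anything; it just emits whatever is locally plausible.

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Toy next-token sampler: softmax over plausibility scores, then a
    random draw. There is no belief state anywhere -- only relative
    scores. (Illustrative sketch, not a real model's API.)"""
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(scores, exps)}
    r = random.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # floating-point fallback

# Hypothetical scores for the continuation of "I am a ...":
scores = {"god": 2.0, "language": 5.0, "model": 4.0}
print(sample_next_token(scores, temperature=0.7))
```

Lowering the temperature concentrates the distribution on the highest-scoring token; raising it flattens the distribution. Steering the context just shifts which continuations score high, which is why you can get such a system to "say" nearly anything.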
I agree with you on everything here up to safety. There are lesser forms of safety than somehow averting a Terminator scenario (the fear of which is a Bay Area rationalist fantasy that shrewd marketers have capitalized on).
“Believe”, yes, in the sense that my program believes x=7; when it goes to read it, maybe the bit flipped. Everything on machines is probabilistic, that’s a tautology. However, we have windowed bounds on valid output, and Claude being able to build a context in which its next outputs are conditioned on it being an angry, vengeful god is not inside that window. That’s what “safe” means, as one of many possible examples.
Inner workings were determined by me, not the LLM. It assisted in generating inputs which had 100% boolean results in the output.
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons
AI is powerful and AI is perilous. Those two aren't mutually exclusive. Those follow directly from the same premise.
If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.
With the latest competing models they are now realizing they are an "also-ran" provider.
Sobering up fast with the ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their head.
If they were unrelated, Anthropic wouldn’t be doing this this week because obviously everyone will conflate the two.
> Seems like a path we should have kept running down, but stopped bc of the weapons
you mean like the tens of billions poured into fusion research?
B corps are like recycling programs: a nice logo.
https://apnews.com/article/anthropic-hegseth-ai-pentagon-mil...
> I take significant responsibility for this change.
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...
https://www.currentaffairs.org/news/2022/09/defective-altrui...
He is just giving everyone permission to do bad things by saying a lot of words around it.
Incredibly long and verbose. I will fall short of accusing him of using an AI to generate slop, but whatever happened to people's ability to make short, strong, simple arguments?
If you can't communicate the essence of an argument in a short and simple way, you probably don't understand it in great depth, and clearly don't care about actually convincing anybody because Lord knows nobody is going to RTFA when it's that long...
At best, you're just trying to communicate to academics who are used to reading papers... We need to expect better from these people if we want to actually improve the world... Standards need to be higher.
Empty words. I would like to know one single meaningful way he will be held responsible for any negative effects.
"move fast and break things" ?
* Our shareholders will probably sue us
https://xcancel.com/elonmusk/status/2026181748175024510
I don't know where xAI got its training material from, but seeing Musk retweeting that is refreshing.
Unconstrained accumulation of capital into the hands of the few without appropriate investment into labor is illiberal and incompatible with democracy and true freedom. Those of us who are capitalists see surplus value as a compromise to ensure good economic growth. The hidden subtext of that is that all the wealth accumulated needs to be re-allocated to serve not only capital enterprise, but the needs of society as a whole. It's hard to see the current system as appropriate for that given how blindly and wildly investments are made with no DD or going long, or no effort paid to the social or environmental opportunity costs of certain practices.
A lot of this comes down to the crippling of the SEC and FTC, but even then, investors cry and whine every time you suggest reworking the regs to inhibit some of the predatory practices common in this post-80s era of hypernormalization. Our current system does not resemble a healthy capitalist economy at all. It's rife with monopsony and monopolistic competition, inequality of opportunity, and a strained underclass that's responsible for our inverted population pyramid -- how can you have kids when we're so atomized and there is no village to help you? You can raise kids in a nuclear family if and only if you have enough money to do so. Otherwise, historically, people relied on their communities when raising children in less-than-ideal circumstances. Those communities are drying up.
Write essays about AI safety in the application.
An entire interview dedicated to pretending that you truly only care about AI safety and ethics and nothing else.
Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world.
In reality it was a mid-level manager interviewing a mid-level engineer (me), both putting on a performance while knowing fully well that we'd do what the bosses told us to do.
And that is exactly what is happening now. The mission has been scrubbed, and the thousands of "ethical" engineers you hired are all silent now that real money is on the line.
The structural problem is that once you've taken billions in VC, safety becomes a negotiable constraint rather than a core value. The board's fiduciary duty runs toward returns, not toward whatever was in the mission statement. PBC status doesn't change that in practice — there's basically zero enforcement mechanism.
What's wild is how fast the cycle has compressed. Google took maybe 15 years to go from "don't be evil" to removing it from the code of conduct. OpenAI took about 5 years from nonprofit to capped-profit to whatever they are now. Anthropic is speedrunning it in under 3. At this rate the next AI startup will launch as a PBC and pivot before their Series B closes.
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
It combines interpretation of meaning with ambiguity, allowing the reporter to assert anything they want. The ambiguity is there to protect the identity of the source, but in return there should be a more concrete disclosure of what was actually said. If you can't check the person, you can still check what they said.
I would be ok with direct quotes from an anonymous source. That removes the interpretation of meaning at least.
As it is written, it would not be inaccurate to say this if their source was the lesswrong post, or even an earlier thread here on HN.
Phrasing "A source with direct knowledge of the situation" might remove some of the leeway for editorialising, but without sharing what the source actually said, it opens the door to saying anything at all and declaring "That's what I thought they meant" when challenged.
It's unfalsifiable journalism.
https://www.theverge.com/press-room/22772113/the-verge-on-ba...
On their podcast, they frequently bring up how tech company PR teams try to move as much conversation with journalists as possible into "on background", uncited, generic sourcing.
I kind of wish they had forced the government's hand and made them do it, just to show the public how much interference is going on.
They say it wasn't related. Like everything that has happened across tech/media: the company is forced to do something, then issues a statement about how "it wasn't related to the obvious thing the government just did."
Makes perfect sense!!
If a company is deemed a "supply chain risk" it makes perfect sense to compel it to work with the military, assuming the latter will compel them to fix the issues that make them such a risk.
If they’re operating under a different definition of supply chain risk, I don’t have a clue.
It is not about disciplining them to get better. The two positions contradict each other:
1. One option is forcing them to produce something: you must build this for us.
2. The other option is saying they are compromised, so stop using them altogether: we will not use what you build for us at all, because we don't trust it.
Or, more likely, adding the "core safety promise" was just them playing hard to the government to get a better deal, and the government showed them they can play the same game.
You can be correct and not play into their game by ignoring the name change completely.
It took Google probably 15 years to fully evil-ize. Anthropic ... two?
There is no "ethical capitalism" big tech company possible, esp once VC is involved, and especially with the current geopolitical circumstances.
Department of Defense is the official name, and they did have a choice: they could have stopped working with the military. But they chose money and evil.
It's just a silly woke secretary choosing their own imaginary pronouns.
They also have never had any guarantees they wouldn't f*ck around with non-US citizens, for surveillance and "security", because like most US tech companies they consider us to be second/lower class human beings of no relevance, even when we pay them money.
At least Google, in its early days, attempted a modest and naive "internationalism" and tried to keep its hands clean of US foreign policy entanglements, inheriting a kind of naive 1990s techno-libertarian ethos (which it threw away during the time I worked there, anyways). I mean, they only kinda did, but whatever.
Anthropic has been high on its own supply since its founding, just like OpenAI. And just as hypocritical.
we're less than a year away from automated drones flying over crowds of protestors, gathering all electronic signals and face-id, making lists of everyone present, notifying employees and putting legal pressure on them to terminate everyone while adding them to watchlists or "no fly" lists
REALLY putting the "auto" in autocracy while everyone continues to pretend it's democracy
Inner workings were determined by me, not the LLM. It assisted in generating inputs which had 100% boolean results in the output.