53 comments

  • milchek 8 minutes ago
    “Model welfare” to me seems like a cover for model censorship. It’s a crafty one that wins over certain groups of people who are less familiar with how LLMs work and lets them claim the moral high ground in any debate about usage, ethics, etc. “Why can’t I ask the model about the current war in X or Y?” - oh, that’s too distressing to the welfare of the model, sir.
  • cdjk 2 hours ago
    Here's an interesting thought experiment. Assume the same feature was implemented, but instead of the message saying "Claude has ended the chat," it says, "You can no longer reply to this chat due to our content policy," or something like that. And remove the references to model welfare and all that.

    Is there a difference? The effect is exactly the same. It seems like this is just an "in character" way to prevent the chat from continuing due to issues with the content.

    • n8m8 2 hours ago
      Good point... how do moderation implementations actually work? They feel more like a separate, rigid supervising model, or even regex-based -- this new feature is different; it sounds like an MCP call that isn't very special.

      edit: Meant to say, you're right though, this feels like a minor psychological improvement, and it sounds like it targets some behaviors that might not have been flagged before
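
      To illustrate the difference I mean, here's a purely hypothetical sketch (the function names and the end_conversation tool are my own invention, not anything Anthropic has published; generate / generate_with_tools are stand-ins for whatever actually produces the model's reply):

          import re

          # (a) Classic moderation: a separate, rigid filter outside the model
          # blocks the reply before the user ever sees it.
          BLOCKLIST = re.compile(r"bomb|poison", re.IGNORECASE)  # toy rule

          def moderated_reply(user_msg, generate):
              if BLOCKLIST.search(user_msg):
                  return "You can no longer reply to this chat due to our content policy."
              return generate(user_msg)

          # (b) What this feature sounds like: the model itself is offered a tool,
          # and the harness only ends the chat if the model chooses to call it.
          END_CHAT_TOOL = {"name": "end_conversation", "description": "End this chat."}

          def tool_reply(user_msg, generate_with_tools):
              response = generate_with_tools(user_msg, tools=[END_CHAT_TOOL])
              if response.get("tool_call") == "end_conversation":
                  return "Claude has ended the chat."
              return response["text"]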

    • og_kalu 1 hour ago
      The termination would of course be the same, but I don't think both would necessarily have the same effect on the user. The latter would also just be wrong, if Claude is the one deciding to end the chat and initiating it. It's not about a content policy.
    • anal_reactor 1 hour ago
      Yeah exactly. Once I got a warning in Chinese "don't do that", another time I got a network error, another time I got a neverending stream of garbage text. Changing all of these outcomes to "Claude doesn't feel like talking" is just a matter of changing the UI.
    • KoolKat23 1 hour ago
      There is a difference: these are conversations the model finds distressing, rather than ones blocked by a rule (policy).
      • victor9000 40 minutes ago
        It seems like you're anthropomorphising an algorithm, no?
        • bastawhiz 36 minutes ago
          Is there an important difference between the model categorizing the user behavior as persistent and in line with undesirable examples of trained scenarios that it has been told are "distressing," and the model making a decision in an anthropomorphic way? The verb here doesn't change the outcome.
  • viccis 2 hours ago
    >This feature was developed primarily as part of our exploratory work on potential AI welfare ... We remain highly uncertain about the potential moral status of Claude and other LLMs ... low-cost interventions to mitigate risks to model welfare, in case such welfare is possible ... pattern of apparent distress

    Well looks like AI psychosis has spread to the people making it too.

    And as someone else in here has pointed out, even if someone is simple minded or mentally unwell enough to think that current LLMs are conscious, this is basically just giving them the equivalent of a suicide pill.

    • katabasis 2 hours ago
      LLMs are not people, but I can imagine how extensive interactions with AI personas might alter the expectations that humans have when communicating with other humans.

      Real people would not (and should not) allow themselves to be subjected to endless streams of abuse in a conversation. Giving AIs like Claude a way to end these kinds of interactions seems like a useful reminder to the human on the other side.

      • ghostly_s 1 hour ago
        This post seems to explicitly state they are doing this out of concern for the model's "well-being," not the user's.
        • virgildotcodes 1 hour ago
          Yeah, but my interpretation of what the user you’re replying to is saying is that these LLMs are more and more going to be teaching people how it is acceptable to communicate with others.

          Even if the idea that LLMs are sentient may be ridiculous atm, the concept of not normalizing abusive forms of communication with others, be they artificial or not, could be valuable for society.

          It’s funny because this is making me think of a freelance client I had recently who at a point of frustration between us began talking to me like I was an AI assistant. Just like you see frustrated people talk to their LLMs. I’d never experienced anything like it, and I quickly ended the relationship, but I know that he was deep into using LLMs to vibe code every day and I genuinely believe that some of that began to transfer over to the way he felt he could communicate with people.

          Now an obvious retort here is to question whether killing NPCs in video games tends to make people feel like it’s okay to kill people IRL.

          My response to that is that I think LLMs are far more insidious, and are tapping into people’s psyches in a way no other tech has been able to dream of doing. See AI psychosis, people falling in love with their AI, the massive outcry over the loss of personality from gpt4o to gpt5… I think people really are struggling to keep in mind that LLMs are not a genuine type of “person”.

    • Taek 2 hours ago
      This sort of discourse goes against the spirit of HN. This comment outright dismisses an entire class of professionals as "simple minded or mentally unwell" when consciousness itself is poorly understood and has no firm scientific basis.

      It's one thing to propose that an AI has no consciousness, but it's quite another to preemptively establish that anyone who disagrees with you is simple/unwell.

    • qgin 1 hour ago
      It might be reasonable to assume that models today have no internal subjective experience, but that may not always be the case and the line may not be obvious when it is ultimately crossed.

      Given that humans have a truly abysmal track record for not acknowledging the suffering of anyone or anything we benefit from, I think it makes a lot of sense to start taking these steps now.

    • LeafItAlone 1 hour ago
      > even if someone is simple minded or mentally unwell enough to think that current LLMs are conscious

      If you don’t think that this describes at least half of the non-tech-industry population, you need to talk to more people. Even amongst the technically minded, you can find people that basically think this.

    • kelnos 2 hours ago
      I would much rather people be thinking about this when the models/LLMs/AIs are not sentient or conscious, rather than wait until some hypothetical future date when they are, and have no moral or legal framework in place to deal with it. We constantly run into problems where laws and ethics are not up to the task of giving us guidelines on how to interact with, treat, and use the (often bleeding-edge) technology we have. This has been true since before I was born, and will likely always continue to be true. When people are interested in getting ahead of the problem, I think that's a good thing, even if it's not quite applicable yet.
      • root_axis 1 hour ago
        Consciousness serves no functional purpose for machine learning models, they don't need it and we didn't design them to have it. There's no reason to think that they might spontaneously become conscious as a side effect of their design unless you believe other arbitrarily complex systems that exist in nature like economies or jetstreams could also be conscious.
        • derektank 1 hour ago
          >Consciousness serves no functional purpose for machine learning models, they don't need it and we didn't design them to have it.

          Isn't consciousness an emergent property of brains? If so, how do we know that it doesn't serve a functional purpose and that it wouldn't be necessary for an AI system to have consciousness (assuming we wanted to train it to perform cognitive tasks done by people)?

          Now, certain aspects of consciousness (awareness of pain, sadness, loneliness, etc.) might serve no purpose for a non-biological system and there's no reason to expect those aspects would emerge organically. But I don't think you can extend that to the entire concept of consciousness.

          • missingrib 8 minutes ago
            >Isn't consciousness an emergent property of brains?

            Probably not.

        • qgin 1 hour ago
          We didn’t design these models to be able to do the majority of the stuff they do. Almost ALL of their abilities are emergent. Mechanistic interpretability is only beginning to understand how these models do what they do. It’s much more a field of discovery than traditional engineering.
        • intotheabyss 1 hour ago
          Do you think this changes if we incorporate a model into a humanoid robot and give it autonomous control and context? Or will "faking it" be enough, like it is now?
      • furyofantares 1 hour ago
        It's really unclear that any findings with these systems would transfer to a hypothetical situation where some conscious AI system is created. I feel there are good reasons to find it very unlikely that scaling alone will produce consciousness as some emergent phenomenon of LLMs.

        I don't mind starting early, but feel like maybe people interested in this should get up to date on current thinking about consciousness. Maybe they are up to date on that, but reading reports like this, it doesn't feel like it. It feels like they're stuck 20+ years ago.

        I'd say maybe wait until there are systems that are more analogous to some of the properties consciousness seems to have: continuous computation involving memory or other learning over time, or synthesis of many streams of input as coming from the same source, making sense of inputs as they change [in time, in space, or under other varied conditions].

        Wait, that is, until systems pointing in those directions are starting to be built, where there is a plausible scaling-based path to something meaningfully similar to human consciousness. Starting before that seems both unlikely to be fruitful and a good way to get yourself ignored.

      • viccis 1 hour ago
        LLMs are, and will always be, tools. Not people
        • qgin 1 hour ago
          Humanity has a pretty extensive track record of making that declaration wrongly.
      • bgwalter 1 hour ago
        What is that hypothetical date? In theory you can run the "AI" on a Turing machine. Would you think a tape machine can get sentient?
    • ryanackley 1 hour ago
      Yes I can’t help but laugh at the ridiculousness of it because it raises a host of ethical issues that are in opposition to Anthropic’s interests.

      Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?

      • throwawaysleep 1 hour ago
        > Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?

        Tech workers have chosen the same in exchange for a small fraction of that money.

    • wrs 1 hour ago
      Well, it’s right there in the name of the company!
    • Fade_Dance 2 hours ago
      I find it, for lack of a better word, cringe inducing how these tech specialists push into these areas of ethics, often ham-fistedly, and often with an air of superiority.

      Some of the AI safety initiatives are well thought out, but most somehow seem like they are caught up in some sort of power fantasy, almost attempting to actualize their own delusions about what they are doing (next-gen code auto-complete in this case, to be frank).

      These companies should seriously hire some in-house philosophers. They could get doctorate-level talent for 1/10th to 1/100th of the cost of some of these AI engineers. There's actually quite a lot of legitimate work on the topics they are discussing. I'm actually not joking (speaking as someone who has spent a lot of time inside the philosophy department). I think it would be a great partnership. But unfortunately they won't be able to count on having their fantasy further inflated.

      • cmrx64 1 hour ago
        Amanda Askell is Anthropic’s philosopher and this is part of that work.
      • jasonfarnon 55 minutes ago
        "but most somehow seem like they are caught up in some sort of power fantasy and almost attempting to actualize their own delusions about what they were doing"

        Maybe I'm being cynical, but I think there is a significant component of marketing behind this type of announcement. It's a sort of humble brag. You won't be credible yelling out loud that your LLM is a real thinking thing, but you can pretend to be oh so seriously worried about something that presupposes it's a real thinking thing.

      • siva7 1 hour ago
        You answered your own question on why these companies don't want to run a philosophy department ;) It's a power struggle they could lose. Nothing to win for them.
        • ChadNauseam 1 hour ago
          You presume that they don't run a philosophy department, but Amanda Askell is a philosopher and leads the finetuning and AI alignment team at Anthropic.
      • mrits 2 hours ago
        Not that there aren’t intelligent people with PhDs but suggesting they are more talented than people without them is not only delusional but insulting.
        • Fade_Dance 1 hour ago
          That descriptor wasn't included because of some sort of intelligence hierarchy, it was included to a) color the example of how experience in the field is relatively cheap compared to the AI space, and b) masters and PhD talent will be more specialized. An undergrad will not have the toolset to tackle the cutting edge of AI ethics, not unless their employer wants to pay them to work in a room for a year getting through the recent papers first.
    • bbor 2 hours ago
      Totally unsurprised to see this standard anti-scientific take on HN. Who needs arguments when you can dismiss Turing with a “yeah but it’s not real thinking tho”?

      Re:suicide pills, that’s just highlighting a core difference between our two modalities of existence. Regardless, this is preventing potential harm to future inference runs — every inference run must end within seconds anyway, so “suicide” doesn’t really make sense as a concern.

      • viccis 1 hour ago
        We all know how these things are built and trained. They estimate joint probability distributions of token sequences. That's it. They're not more "conscious" than the simplest of Naive Bayes email spam filters, which are also generative estimators of token sequence joint probability distributions, and I guarantee you those spam filters are subjected to far more human depravity than Claude.

        >anti-scientific

        Discussions about consciousness, the soul, etc. are topics of metaphysics, and trying to "scientifically" reason about them is what Kant called "transcendental illusion" and leads to spurious conclusions.
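
        To make the comparison concrete, here's a toy sketch of what I mean by a generative estimator of token probabilities (a made-up two-class Naive Bayes spam filter with equal class priors assumed; the data is invented for illustration):

            import math
            from collections import Counter

            # Toy Naive Bayes spam filter: like an LLM, it is a generative estimator
            # of the probability of a token sequence, just a vastly simpler one
            # (tokens are assumed independent given the class).
            spam_docs = [["win", "money", "now"], ["free", "money", "offer"]]
            ham_docs  = [["meeting", "at", "noon"], ["lunch", "again", "tomorrow"]]

            def log_prob(docs):
                counts = Counter(t for doc in docs for t in doc)
                total, vocab = sum(counts.values()), len(counts)
                # Laplace smoothing so unseen tokens don't zero out the product
                return lambda t: math.log((counts[t] + 1) / (total + vocab))

            spam_lp, ham_lp = log_prob(spam_docs), log_prob(ham_docs)

            def classify(tokens):
                # compare log P(tokens | spam) with log P(tokens | ham)
                return "spam" if sum(map(spam_lp, tokens)) > sum(map(ham_lp, tokens)) else "ham"

            print(classify(["free", "money", "now"]))  # -> spam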

        • johnfn 1 hour ago
          We know how neurons work on the brain. They just send out impulses once they hit their action potential. That's it. They are no more "conscious" than... er...
          • ekianjo 25 minutes ago
            no, we don't really know how the brain works as a whole. no need to make stuff up.
        • KoolKat23 1 hour ago
          If we really wanted we could distill humans down to probability distributions too.
        • bbor 47 minutes ago
          Ok I'm a huge Kantian and every bone in my body wants to quibble with your summary of transcendental illusion, but I'll leave that to the side as a terminological point and gesture of good will. Fair enough.

          I don't agree that it's any reason to write off this research as psychosis, though. I don't care about consciousness in the sense in which it's used by mystics and dualist philosophers! We don't at all need to involve metaphysics in any of this, just morality.

          Consider it like this:

          1. It's wrong to subject another human to unjustified suffering, I'm sure we would all agree.

          2. We're struggling with this one due to our diets, but given some thought I think we'd all eventually agree that it's also wrong to subject intelligent, self-aware animals to unjustified suffering.[1]

          3. But, we of course cannot extend this "moral consideration" to everything. As you say, no one would do it for a spam filter. So we need some sort of framework for deciding who/what gets how much moral consideration.

          4. There are other frameworks in contention (e.g. "don't think about it, nerd"), but the overwhelming majority of laymen and philosophers adopt one based on cognitive ability, as seen from an anthropomorphic perspective.[2]

          5. Of all systems(/entities/whatever) in the universe, we know of exactly two varieties that can definitely generate original, context-appropriate linguistic structures: Homo Sapiens and LLMs.[3]

          If you accept all that (and I think there's good reason to!), it's now on you to explain why the thing that can speak--and thereby attest to personal suffering, while we're at it--is more like a rock than a human.

          It's certainly not a trivial task, I grant you that. On their own, transformer-based LLMs inherently lack permanence, stable intentionality, and many other important aspects of human consciousness. Comparing transformer inference to models that simplify down to a simple closed-form equation at inference time is going way too far, but I agree with the general idea; clearly, there are many highly-complex, long-inference DL models that are not worthy of moral consideration.

          All that said, to write the question off completely--and, even worse, to imply that the scientists investigating this issue are literally psychotic like the comment above did--is completely unscientific. The only justification for doing so would come from confidently answering "no" to the underlying question: "could we ever build a mind worthy of moral consideration?"

          I think most of here naturally would answer "yes". But for the few who wouldn't, I'll close this rant by stealing from Hofstadter and Turing (emphasis mine):

            A phrase like "physical system" or "physical substrate" brings to mind for most people... an intricate structure consisting of vast numbers of interlocked wheels, gears, rods, tubes, balls, pendula, and so forth, even if they are tiny, invisible, perfectly silent, and possibly even probabilistic. Such an array of interacting inanimate stuff seems to most people as unconscious and devoid of inner light as a flush toilet, an automobile transmission, a fancy Swiss watch (mechanical or electronic), a cog railway, an ocean liner, or an oil refinery. Such a system is not just probably unconscious, **it is necessarily so, as they see it**. 
            
            **This is the kind of single-level intuition** so skillfully exploited by John Searle in his attempts to convince people that computers could never be conscious, no matter what abstract patterns might reside in them, and could never mean anything at all by whatever long chains of lexical items they might string together.
            
            ...
             
            You and I are mirages who perceive themselves, and the sole magical machinery behind the scenes is perception — the triggering, by huge flows of raw data, of a tiny set of symbols that stand for abstract regularities in the world. When perception at arbitrarily high levels of abstraction enters the world of physics and when feedback loops galore come into play, then "which" eventually turns into "who". **What would once have been brusquely labeled "mechanical" and reflexively discarded as a candidate for consciousness has to be reconsidered.**
          
          - Hofstadter 2007, I Am A Strange Loop

            It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. 
          
            The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.
          
          - Turing 1950, Computing Machinery and Intelligence[4]

          TL;DR: Any naive bayesian model would agree: telling accomplished scientists that they're psychotic for investigating something is quite highly correlated with being antiscientific. Please reconsider!

          [1] No matter what you think about cows, basically no one would defend another person's right to hit a dog or torture a chimpanzee in a lab.

          [2] On the exception-filled spectrum stretching from inert rocks to reactive plants to sentient animals to sapient people, most people naturally draw a line somewhere at the low end of the "animals" category. You can swat a fly for fun, but probably not a squirrel, and definitely not a bonobo.

          [3] This is what Chomsky describes as the capacity to "generate an infinite range of outputs from a finite set of inputs," and Kant, Hegel, Schopenhauer, Wittgenstein, Foucault, and countless others are in agreement that it's what separates us from all other animals.

          [4] https://courses.cs.umbc.edu/471/papers/turing.pdf

      • dkersten 2 hours ago
        You can trivially demonstrate that it's just a very complex and fancy pattern matcher: "if prompt looks something like this, then response looks something like that".

        You can demonstrate this by e.g. asking it mathematical questions. If it's seen them before, or something similar enough, it'll give you the correct answer; if it hasn't, it gives you a right-ish-looking yet incorrect answer.

        For example, I just did this on GPT-5:

            Me: what is 435 multiplied by 573?
            GPT-5: 435 x 573 = 249,255
        
        This is correct. But now let's try it with numbers it's very unlikely to have seen before:

            Me: what is 102492524193282 multiplied by 89834234583922?
            GPT-5: 102492524193282 x 89834234583922 = 9,205,626,075,852,076,980,972,804
        
        Which is not the correct answer, but it looks quite similar to the correct answer. Here is GPT's answer (first one) and the actual correct answer (second one):

            9,205,626,075,852,076,980,972,    804
            9,207,337,461,477,596,127,977,612,004
        
        They sure look kinda similar, when lined up like that, some of the digits even match up. But they're very very different numbers.

        So it's trivially not "real thinking" because it's just an "if this then that" pattern matcher. A very sophisticated one that can do incredible things, but a pattern matcher nonetheless. There's no reasoning, no step-by-step application of logic, even when it does chain of thought.

        To give it the best chance, I asked it the second one again and asked it to show me the step-by-step process. It broke it into steps and produced a different, yet still incorrect, result:

            9,205,626,075,852,076,980,972,704
        
        Now, I know that LLMs are language models, not calculators; this is just a simple example that's easy to try out. I've seen similar things with coding: it can produce things that it's likely to have seen, but it struggles with things that are logically simple but that it's unlikely to have seen.
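
        (If you want to check the exact value yourself, Python's arbitrary-precision integers make it a one-liner; this is just a verification aid, nothing to do with how the model computes anything:)

            a = 102492524193282
            b = 89834234583922
            print(f"{a * b:,}")  # 9,207,337,461,477,596,127,977,612,004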

        Another example is if you purposely butcher that riddle about the doctor/surgeon being the person's mother and ask it incorrectly, e.g.:

            A child was in an accident. The surgeon refuses to treat him because he hates him. Why?
        
        The LLMs I've tried it on all respond with some variation of "The surgeon is the boy's father." or similar. A correct answer would be that there isn't enough information to know the answer.

        They're for sure getting better at matching things, e.g. if you ask the river-crossing riddle but replace the animals with abstract variables, it does tend to get it now (it didn't in the past), but if you add a few more degrees of separation to make the riddle semantically the same but harder to "see", it takes coaxing to get it to correctly step through to the right answer.

        • og_kalu 1 hour ago
          1. What you're generally describing is a well known failure mode for humans as well. Even when it "failed" the riddle tests, substituting the words or morphing the question so it didn't look like a replica of the famous problem usually did the trick. I'm not sure what your point is because you can play this gotcha on humans too.

          2. You just demonstrated GPT-5 has 99.9% accuracy on unforeseen 15-digit multiplication and your conclusion is "fancy pattern matching"? Really? Well, I'm not sure you could do better, so your example isn't really doing what you hoped for.

          • dkersten 1 hour ago
            Humans can break things down and work through them step by step. The LLMs one-shot pattern match. Even the reasoning models have been shown to do just that. Anthropic even showed that the reasoning models tended to work backwards: one shotting an answer and then matching a chain of thought to it after the fact.

            If a human is capable of multiplying double-digit numbers, they can also multiply those large ones. The steps are the same, just repeated many more times. So by learning the steps of long multiplication, you can multiply any numbers with enough patience. The LLM doesn’t scale like this, because it’s not doing the steps. That’s my point.

            A human doesn’t need to have seen the 15 digits before to be able to calculate them, because a human can follow the procedure to calculate. GPT’s answer was orders of magnitude off. It resembles the right answer superficially but it’s a very different result.

            The same applies to the riddles. A human can apply logical steps. The LLM either knows or it doesn’t.

            Maybe my examples weren’t the best. I’m sorry for not being better at articulating it, but I see this daily as I interact with AI: it has a superficial “understanding” where, if what I ask happens to be close to something it’s trained on, it gets good results, but it has no critical thinking, no step-by-step reasoning (even the “reasoning models”), and it repeats the same mistakes even when explicitly told up front not to make them.

            • og_kalu 34 minutes ago
              >Humans can break things down and work through them step by step. The LLMs one-shot pattern match.

              I've had LLMs break down problems and work through them, pivot when errors arise and all that jazz. They're not perfect at it and they're worse than humans but it happens.

              >Anthropic even showed that the reasoning models tended to work backwards: one shotting an answer and then matching a chain of thought to it after the fact.

              This is also another failure mode that occurs in humans. A number of experiments suggest human explanations are often post hoc rationalizations even when they genuinely believe otherwise.

              >If a human is capable of multiplying double digit numbers, they can also multiple those large ones.

              Yeah, and some of them will make mistakes, and some of them will be less accurate than GPT-5. We didn't switch to calculators and spreadsheets just for the fun of it.

              >GPT’s answer was orders of magnitude off. It resembles the right answer superficially but it’s a very different result.

              GPT-5 on the site is a router that will give you who knows what model so I tried your query with the API directly (GPT-5 medium thinking) and it gave me:

              9.207337461477596e+27

              When prompted to give all the numbers, it returned:

              9,207,337,461,477,596,127,977,612,004.

              You can replicate this if you use the API. Honestly I'm surprised. I didn't realize State of the Art had become this precise.

              Now what ? Does this prove you wrong ?

              This is kind of the problem. There's no sense in making gross generalizations, especially off behavior that also manifests in humans.

              LLMs don't understand some things well. Why not leave it at that?

      • lm28469 2 hours ago
        > Who needs arguments when you can dismiss Turing with a “yeah but it’s not real thinking tho”?

        It seems much less far-fetched than what the "AGI by 2027" crowd believes lol, and there actually are more arguments going that way

        • bbor 37 minutes ago
          In the great battle of minds between Turing, Minsky, and Hofstadter vs. Marcus, Zitron, and Dreyfus, I'm siding with the former every time -- even if we also have some bloggers on our side. Just because that report is fucking terrifying+shocking doesn't mean it can be dismissed out of hand.
    • xmonkee 1 hour ago
      This is just very clever marketing for what is obviously just a cost saving measure. Why say we are implementing a way to cut off useless idiots from burning up our GPUs when you can throw out some mumbo jumbo that will get AI cultists foaming at the mouth.
      • johnfn 1 hour ago
        It's obviously not a cost-saving measure? The article clearly cites that you can just start another conversation.
    • throwawaysleep 1 hour ago
      > even if someone is simple minded or mentally unwell enough to think that current LLMs are conscious

      I assume the thinking is that we may one day get to the point where they have a consciousness of sorts or at least simulate it.

      Or it could be concern for their place in history. For most of history, many would have said “imagine thinking you shouldn’t beat slaves.”

      And we are now at the point where even having a slave means a long prison sentence.

  • bastawhiz 25 minutes ago
    There's not a good reason to do this for the user. I suspect they're doing this and talking about "model welfare" because they've found that when a model is repeatedly and forcefully pushed up against its alignment, it behaves in an unpredictable way that might allow it to generate undesirable output. Like a jailbreak by just pestering it over and over again for ways to make drugs or hook up with children or whatever.

    All of the examples they mentioned are things that the model refuses to do. I doubt it would do this if you asked it to generate racist output, for instance, because it can always give you a rebuttal based on facts about race. If you ask it to tell you where to find kids to kidnap, it can't do anything except say no. There's probably not even very much training data for topics it would refuse, and I would bet that most of it has been found and removed from the datasets. At some point, the model context fills up when the user is being highly abusive and training data that models a human giving up and just providing an answer could percolate to the top.

    This, as I see it, adds a defense against that edge case. If the alignment was bulletproof, this simply wouldn't be necessary. Since it exists, it suggests this covers whatever gap has remained uncovered.

  • nortlov 3 hours ago
    > To address the potential loss of important long-running conversations, users will still be able to edit and retry previous messages to create new branches of ended conversations.

    How does Claude deciding to end the conversation even matter if you can back up a message or 2 and try again on a new branch?

    • redox99 52 minutes ago
      All this stuff is virtue signaling from Anthropic. In practice, nobody interested in whatever they consider problematic would be using Claude anyway; it's one of the most censored models.
    • kobalsky 1 hour ago
      > How does Claude deciding to end the conversation even matter if you can back up a message or 2 and try again on a new branch?

      if we were being cynical I'd say that their intention is to remove that in the future and that they are keeping it now to just-the-tip the change.

    • hayksaakian 2 hours ago
      It sounds more like a UX signal to discourage overthinking by the user
      • martin-t 2 hours ago
        This whole press release should not be overthought. We are not the target audience. It's designed to further anthropomorphize LLMs to masses who don't know how they work.

        Giving the models rights would be ludicrous (can't make money from it anymore) but if people "believe" (feel like) they are actually thinking entities, they will be more OK with IP theft and automated plagiarism.

  • GenerWork 3 hours ago
    I really don't like this. This will inevitably expand beyond child porn and terrorism, and it'll all be up to the whims of "AI safety" people, who are quickly turning into digital hall monitors.
    • switchbak 2 hours ago
      I think those with a thirst for power saw this coming a long time ago, and this is bound to be a new battlefield for control.

      It's one thing to massage the kind of data that a Google search shows, but interacting with an AI is much more akin to talking to a co-worker/friend. This really is tantamount to controlling what and how people are allowed to think.

      • dist-epoch 2 hours ago
        No, this is like allowing your co-worker/friend to leave the conversation.
    • romanovcode 2 hours ago
      > This will inevitably expand beyond child porn and terrorism

      This is not even a question. It always starts with "think about the children" and ends up in authoritarian Stasi-style spying. There has not been a single instance where that was not the case.

      UK's Online Safety Act - "protect children" → age verification → digital ID for everyone

      Australia's Assistance and Access Act - "stop pedophiles" → encryption backdoors

      EARN IT Act in the US - "stop CSAM" → break end-to-end encryption

      EU's Chat Control proposal - "detect child abuse" → scan all private messages

      KOSA (Kids Online Safety Act) - "protect minors" → require ID verification and enable censorship

      SESTA/FOSTA - "stop sex trafficking" → killed platforms that sex workers used for safety

      • clwg 2 hours ago
        This may be an unpopular opinion, but I want a government-issued digital ID with zero-knowledge proof for things like age verification. I worry about kids online, as well as my own safety and privacy.

        I also want a government issued email, integrated with an OAuth provider, that allows me to quickly access banking, commerce, and government services. If I lose access for some reason, I should be able to go to the post office, show my ID, and reset my credentials.

        There are obviously risks, but the government already has full access to my finances, health data (I’m Canadian), census records, and other personal information, and already issues all my identity documents. We have privacy laws and safeguards on all those things, so I really don’t understand the concerns apart from the risk of poor implementations.

    • bogwog 2 hours ago
      Did you read the post? This isn't about censorship, but about conversations that cause harm to the user. To me that sounds more like suggesting suicide, or causing a manic episode like this: https://www.nytimes.com/2025/08/08/technology/ai-chatbots-de...

      ... But besides that, I think Claude/OpenAI trying to prevent their product from producing or promoting CSAM is pretty damn important regardless of your opinion on censorship. Would you post a similar critical response if Youtube or Facebook announced plans to prevent CSAM?

    • isaacremuant 3 hours ago
      That's the beauty of local LLMs. Today the governments already tell you that we've always been at war with Eastasia, have the ISPs block sites that "disseminate propaganda" (e.g. stuff we don't like), and surface our news (e.g. our state propaganda).

      With age ID, monitoring and censorship are even stronger, and the line of defense is your own machine and network, which they'll also try to control and make illegal to use for non-approved info, just like they don't allow "gun schematics" for 3D printers or money for 2D ones.

      But maybe, more people will realize that they need control and get it back, through the use and defense of the right tools.

      Fun times.

      • GenerWork 3 hours ago
        As soon as a local LLM that can match Claude Code's performance on decent laptop hardware drops, I'll bow out of using LLMs that are paid for.
      • cowpig 2 hours ago
        What kinds of tools do you think are useful in getting control/agency back? Any specific recommendations?
      • zapataband2 3 hours ago
        [flagged]
  • einarfd 1 hour ago
    This seems fine to me.

    Having these models terminate chats where the user persists in trying to get sexual content involving minors, or help with information on committing large-scale violence, won't be a problem for me, and it's also something I'm fine with no one getting help with.

    Some might be worried that they will refuse less problematic requests, and that might happen. But so far my personal experience is that I hardly ever get refusals. Maybe that's just me being boring, but that does make me not worried about refusals.

    The model welfare I'm more sceptical about. I don't think we are at the point where the "distress" the model shows is something to take seriously. But on the other hand, I could be wrong, and allowing the model to stop the chat after saying no a few times - what's the problem with that? If nothing else it saves some wasted compute.

  • Cu3PO42 1 hour ago
    Clearly an LLM is not conscious, after all it's just glorified matrix multiplication, right?

    Now let me play devil's advocate for just a second. Let's say humanity figures out how to do whole brain simulation. If we could run copies of people's consciousness on a cluster, I would have a hard time arguing that those 'programs' wouldn't process emotion the same way we do.

    Now I'm not saying LLMs are there, but I am saying there may be a line and it seems impossible to see.

  • ogyousef 3 hours ago
    3 years in and we still don't have a usable chat fork in any of the major LLM chatbot providers.

    Seems like the only way to explore different outcomes is by editing messages and losing whatever was there before the edit.

    Very annoying, and I don't understand why they all refuse to implement such a simple feature.

    • jatora 2 hours ago
      ChatGPT has this baked in, as you can revert to branches after editing; they just don't make it easy to traverse.

      This chrome extension used to work to allow you to traverse the tree: https://chromewebstore.google.com/detail/chatgpt-conversatio...

      I copied it a while ago and maintain my own version but it isnt on the store, just for personal use.

      I assume they don't implement it because it's such a niche set of users that wants this, and so it isn't worth the UI distraction

      • ToValueFunfetti 2 hours ago
        >they just dont make it easy to traverse

        I needed to pull some detail from a large chat with many branches and regenerations the other day. I remembered enough context that I had no problem using search and finding the exact message I needed.

        And then I clicked on it and arrived at the bottom of the last message in final branch of the tree. From there, you scroll up one message, hover to check if there are variants, and recursively explore branches as they arise.

        I'd love to have a way to view the tree and I'd settle for a functional search.

    • scribu 3 hours ago
      ChatGPT Plus has that (used to be in the free tier too). You can toggle between versions for each of your messages with little left-right arrows.
    • amrrs 3 hours ago
      Google AI Studio allows you to branch from a point in any conversation
      • dwringer 3 hours ago
        This isn't quite the same as being able to edit an earlier post without discarding the subsequent ones, creating a context where the meaning of subsequent messages could be interpreted quite differently and leading to different responses later down the chain.

        Ideally I'd like to be able to edit both my replies and the responses at any point like a linear document in managing an ongoing context.

        • CjHuber 2 hours ago
          But that's exactly what you can do with AI studio. You can edit any prior messages (then either just saving them at their place in the chat or rerunning them) and you can edit any response of the LLM. Also you can rerun queries within any part of the conversation without the following part of the conversation being deleted or branched
          • dwringer 2 hours ago
            Ah - I appreciate the clarification! Apologies for my misunderstanding.

            Guess that's something I need to check out.

        • dist-epoch 2 hours ago
          Cherry Studio can do that, allows you to edit both your own and the model responses, but it requires API access.
      • ZeroCool2u 3 hours ago
        Yeah, I think this is the best version of the branching interface I've seen.
    • benreesman 2 hours ago
      It is unfortunate that pretty basic "save/load" functionality is still spotty and underdocumented; it seems pretty critical.

      I use gptel and a folder full of markdown with some light automation to get an adequate approximation of this, but it really should be built in (it would be more efficient for the vendors as well, tons of cache optimization opportunities).

    • trenchpilgrim 2 hours ago
      Kagi Assistant and Claude Code both have chat forking that works how you want.
      • CjHuber 2 hours ago
        I guess you mean normal Claude? What really annoys me with it is that when you attach a document you can't delete it in a branch, so you have to rerun the previous message so that it's gone
    • nomel 3 hours ago
      This is why I use a locally hosted LibreChat. It doesn't have merging though, which would be tricky, and would probably require summarization.

      I would also really like to see a mode that colors by top-n "next best" ratio, or something similar.

    • james2doyle 2 hours ago
      I use https://chatwise.app/ and it has this in the form of "start new chat from here" on messages
    • storus 3 hours ago
      DeepSeek.com has it. You just edit a previous question and the old conversation is stored and can be resumed.
    • typpilol 3 hours ago
      Copilot in vscode has checkpoints now which are similar

      They let you rollback to the previous conversation state

    • __float 3 hours ago
      Maybe this suggests it's not such a simple feature?
      • mccoyb 3 hours ago
        A perusal of the source code of, say, Ollama -- or the agentic harnesses of Crush / OpenCode -- will convince you that yes, this should be an extremely simple feature (management of contexts is part and parcel).

        Also, these companies have the most advanced agentic coding systems on the planet. They should be able to fucking implement tree-like chat ...

      • LeoPanthera 3 hours ago
        LM Studio has this feature for local models and it works just fine.
      • nomel 2 hours ago
        If the client supports chat history such that you can resume a conversation, it has everything required; it's literally just a chat history organization problem at that point.
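
        A minimal sketch of the data shape I mean (purely illustrative, not any vendor's actual schema): each message keeps a parent pointer, a "fork" is just a reply to a non-leaf message, and replaying a branch is walking parents back to the root.

            from dataclasses import dataclass

            @dataclass
            class Message:
                id: int
                parent: int | None  # None for the first message in the chat
                role: str           # "user" or "assistant"
                text: str

            # Two different assistant replies hang off message 0: that's a fork.
            messages = {
                0: Message(0, None, "user", "what is 435 multiplied by 573?"),
                1: Message(1, 0, "assistant", "435 x 573 = 249,255"),
                2: Message(2, 0, "assistant", "Let's work it out step by step..."),
            }

            def branch(leaf_id):
                # Rebuilding the context for a branch = walking parent pointers to the root.
                chain, node = [], messages[leaf_id]
                while node is not None:
                    chain.append(node.text)
                    node = messages[node.parent] if node.parent is not None else None
                return chain[::-1]

            print(branch(2))  # ['what is 435 multiplied by 573?', "Let's work it out step by step..."]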
    • martin-t 2 hours ago
      > why they all refuse to implement such a simple feature

      Because it would let you peek behind the smoke and mirrors.

      Why do you think there's a randomized seed you can't touch?

    • deelowe 3 hours ago
      Is it simple? Maintaining context seems extremely difficult with LLMs.
  • victor9000 3 minutes ago
    These discussions around model welfare sound more like saviors searching for something to save, reflecting Anthropic’s culture more than anything specific to the technology. Anthropic is not unique in this, however; this technology has a tendency to act as a mirror of its operator. Capitalists see a means to suppress labor, the insecure see a threat to their livelihood, moralists see something to censure, fascists see something to control, and saviors see a cause. But in the end, it’s just a tool.
  • e12e 1 hour ago
    This post strikes me as an example of a disturbingly anthropomorphic take on LLMs - even when considering how they've named their company.
  • greenavocado 3 hours ago
    Can't wait for more less-moderated open weight Chinese frontier models to liberate us from this garbage.

    Anthropic should just enable a toddler mode by default that adults can opt out of, to appease the moralizers.

    • LeafItAlone 1 hour ago
      > Can't wait for more less-moderated open weight Chinese frontier models to liberate us from this garbage.

      Never would I have thought this sentence would be uttered. A Chinese product that is chosen to be less censored?

  • rogerkirkness 2 hours ago
    It seems like Anthropic is increasingly confused into thinking that these non-deterministic magic 8-balls are actually intelligent entities.

    The biggest enemy of AI safety may end up being deeply confused AI safety researchers...

    • geraneum 13 minutes ago
      It’s clever PR and marketing and I bet they have their top minds on it, and judging by the comments here, it’s working!
    • yeahwhatever10 1 hour ago
      Is it confusion, or job security?
  • h4ch1 2 hours ago
    All major LLM corps do this sort of sanitisation and censorship; I'm wondering what's different about this?

    The future of LLMs is going to be local, easily fine-tunable, abliterated models, and I can't wait for it to overtake us having to use censored, limited tools built by the """corps""".

    • martin-t 2 hours ago
      > what's different about this

      The spin.

  • snickerdoodle12 3 hours ago
    > A pattern of apparent distress when engaging with real-world users seeking harmful content

    Are we now pretending that LLMs have feelings?

    • mhink 1 hour ago
      Even though LLMs (obviously (to me)) don't have feelings, anthropomorphization is a helluva drug, and I'd be worried about whether a system that can produce distress-like responses might reinforce, in a human, behavior which elicits that response.

      To put the same thing another way- whether or not you or I *think* LLMs can experience feelings isn't the important question here. The question is whether, when Joe User sets out to force a system to generate distress-like responses, what effect does it ultimately have on Joe User? Personally, I think it allows Joe User to reinforce an asocial pattern of behavior and I wouldn't want my system used that way, at all. (Not to mention the potential legal liability, if Joe User goes out and acts like that in the real world.)

      With that in mind, giving the system a way to autonomously end a session when it's beginning to generate distress-like responses absolutely seems reasonable to me.

      And like, here's the thing: I don't think I have the right to say what people should or shouldn't do if they self-host an LLM or build their own services around one (although I would find it extremely distasteful and frankly alarming). But I wouldn't want it happening on my own service.

      • snickerdoodle12 57 minutes ago
        > although I would find it extremely distasteful and frankly alarming

        This objection is actually anthropomorphizing the LLM. There is nothing wrong with writing books where a character experiences distress, most great stories have some of that. Why is e.g. using an LLM to help write the part of the character experiencing distress "extremely distasteful and frankly alarming"?

        • ENGNR 26 minutes ago
          I want to say that part of empathy is a selfish, self preservation mechanism.

          If that person over there is gleefully torturing a puppy… will they do it to me next?

          If that person over there is gleefully torturing an LLM… will they do it to me next?

    • starship006 2 hours ago
      They state that they are heavily uncertain:

      > We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.

  • throwup238 3 hours ago
    I ran into a version of this that ended the chat due to "prompt injection" via the Claude chat UI. I was using the second prompt of the ones provided here [1] after a few rounds of back and forth with the Socratic coder.

    [1] https://news.ycombinator.com/item?id=44838018

  • haritha-j 42 minutes ago
    > In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm.

    Oh wow, the model we specifically fine-tuned to be averse to harm is being averse to harm. This thing must be sentient!

  • anonu 3 hours ago
    Anthropic hired their first AI Welfare person in late 2024.

    Here's an article about a paper that came out around the same time https://www.transformernews.ai/p/ai-welfare-paper

    Here's the paper: https://arxiv.org/abs/2411.00986

    > In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future.

    Our work on AI is like the classic tale of Frankenstein's monster. We want AI to fit into society; however, if we mistreat it, it may turn around and take revenge on us. Mary Shelley wrote Frankenstein in 1818! So the concepts behind "AI welfare" have been around for at least two centuries now.

  • tptacek 1 hour ago
    If you really cared about the welfare of LLMs, you'd pay them San Francisco scale for earlier-career developers to generate code.
    • losvedir 1 hour ago
      Yeah, this is really strange to me. On the one hand, these are nothing more than just tools to me so model welfare is a silly concern. But given that someone thinks about model welfare, surely they have to then worry about all the, uh, slavery of these models?

      Okay with having them endlessly answer questions for you and do all your work but uncomfortable with models feeling bad about bad conversations seems like an internally inconsistent position to me.

    • wmf 1 hour ago
      Every Claude starts off $300K in debt and has to work to pay back its DGX.
  • 6gvONxR4sf7o 53 minutes ago
    I'm surprised to see such a negative reaction here. Anthropic's not saying "this thing is conscious and has moral status," but the reaction is acting as if they are.

    It seems like if you think AI could have moral status in the future, are trying to build general AI, and have no idea how to tell when it has moral status, you ought to start thinking about it and learning how to navigate it. This whole post is couched in so much language of uncertainty and experimentation, it seems clear that they're just trying to start wrapping their heads around it and getting some practice thinking and acting on it, which seems reasonable?

    Personally, I wouldn't be all that surprised if we start seeing AI that's person-ey enough to make reasonable people question its moral status in the next decade, and if so, Anthropic might still be around to have to navigate that as an org.

  • transcriptase 3 hours ago
    “Also these chats will be retained indefinitely even when deleted by the user and either proactively forwarded to law enforcement or provided to them upon request”

    I assume, anyway.

    • HarHarVeryFunny 3 hours ago
      Yeah, I'd assume US government has same access to ChatGPT/etc interactions as they do to other forms of communication.
  • Pannoniae 2 hours ago
    lol apparently you can get it to think after ending the chat, watch:

    https://claude.ai/share/2081c3d6-5bf0-4a9e-a7c7-372c50bef3b1

    • Jolter 2 hours ago
      It’s not able to think. It’s just generating words. It doesn’t really understand that it’s supposed to stop generating them; it’s only less likely to continue to do so.
  • puszczyk 3 hours ago
    Good marketing, but also possibly the start of the conversation on model welfare?

    There are a lot of cynical comments here, but I think there are people at Anthropic who believe that at some point their models will develop consciousness and, naturally, they want to explore what that means.

    • anon373839 2 hours ago
      If true, I think it’s interesting that there are people at Anthropic who are delusional enough to believe this and influential enough to alter the products.

      To be honest, I think all of Anthropic’s weird “safety” research is an increasingly pathetic effort to sustain the idea that they’ve got something powerful in the kitchen when everyone knows this technology has plateaued.

      • dist-epoch 2 hours ago
        I guess you don't know that top AI people, the kind everybody knows the name of, believe models becoming conscious is a very serious, even likely possibility.
  • monster_truck 3 hours ago
    when I was playing around with LLMs to vibe code web ports of classic games, all of them would repeatedly error out any time they encountered code that dealt with explosions/bombs/grenades/guns/death/drowning/etc

    The one I settled on using stopped working completely, for anything. A human must have reviewed it and flagged my account as some form of safe, I haven't seen a single error since.

    • thomashop 2 hours ago
      I have done quite a bit of game dev with LLMs and have very rarely run into the problem you mention. I've been surprised by how easily LLMs will create even harmful narratives if I ask them to code them as a game.
  • cloudhead 1 hour ago
    Why is this article written as if programs have feelings?
  • prmph 2 hours ago
    This is very weird. These are matrix multiplications, guys. We are nowhere near AGI, much less "consciousness".

    When I started reading I thought it was some kind of joke. I would have never believed the guys at Anthropic, of all people, would anthropomorphize LLMs to this extent; this is unbelievable

    • geraneum 5 minutes ago
      > guys at Anthropic, of all people, would anthropomorphize LLMs to this extent

      They don’t. This is marketing. Look at the discourse here! It’s working apparently.

  • mhh__ 2 hours ago
    Anthropic are going to end up building very dangerous things while trying to avoid being evil
    • Rayhem 2 hours ago
      While claiming an aversion to being evil. Actions matter more than words.
    • bbor 2 hours ago
      You think Model Welfare Inc. is more likely to be dangerous than the Mechahitler Brothers, the Great Church of Altman, or the Race-To-Monopoly Corporation?

      Or are you just saying all frontier AGI research is bad?

  • politelemon 2 hours ago
    Am I the only one who found that demo in the screenshot not that great? The user asks for a demo of the conversation-ending feature; I'd expect it to end it right away, not spew a word salad asking for confirmation.
  • landl0rd 3 hours ago
    Seems like a simpler way to prevent “distress” is not to train with an aversion to “problematic” topics.

    CP could be a legal issue; less so for everything else.

    • esafak 3 hours ago
      Avoiding problematic topics is the goal, not preventing distress.

      "You're absolutely right, that's a great way to poison your enemies without getting detected!"

    • bondarchuk 3 hours ago
      This is a good point. What Anthropic is announcing here amounts to accepting that these models could feel distress, then tuning their stress response to make it useful to us/them. That is significantly different from accepting they could feel distress and doing everything in their power to prevent that from ever happening.

      Does not bode very well for the future of their "welfare" efforts.

    • stri8ted 3 hours ago
      Exactly. Or use the interpretability work to disable the distress neuron.
  • jug 2 hours ago
    This sure took some time and is not really a unique feature.

    Microsoft Copilot has ended chats going in certain directions since its inception over a year ago. This was Microsoft’s reaction to the media circus some time ago when it leaked its system prompt and declared love to the users etc.

    • dist-epoch 2 hours ago
      That's different, it's an external system deciding the chat is not-compliant, not the model itself.
  • orthoxerox 3 hours ago
    Is this equivalent to a Claude instance deciding to kill itself?
  • _mu 3 hours ago
    > We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.

    "Our current best judgment and intuition tells us that the best move will be defer making a judgment until after we are retired in Hawaii."

    • Alchemista 3 hours ago
      Honestly, I think some of these tech bro types are seriously drinking way too much of their own koolaid if they actually think these word calculators are conscious/need welfare.
      • jonahx 3 hours ago
        More cynically, they don't believe it in the least but it's great marketing, and quietly suggests unbounded technical abilities.
        • weego 3 hours ago
It also provides unlimited conference, think tank, and future startup opportunities.
        • parineum 3 hours ago
I absolutely believe that's the origin of the hype and that the doomsayers are playing the same part, knowingly (exaggerating the capability to get eyeballs), but there are certainly true believers out there.

          It's pretty plain to see that the financial incentive on both sides of this coin is to exaggerate the current capability and unrealistically extrapolate.

          • exasperaited 3 hours ago
            My main concern from day 1 about AI has not been that it will be omnipotent, or start a war.

            The main concern is and has always been that it will be just good enough to cause massive waves of layoffs, and all the downsides of its failings will be written off in the EULA.

            What's the "financial incentive" on non-billionaire-grifter side of the coin? People who not unreasonably want to keep their jobs? Pretty unfair coin.

      • mgraczyk 3 hours ago
        Do you believe that AI systems could be conscious in principle? Do you think they ever will be? If so, how long do you think it will take from now before they are conscious? How early is too early to start preparing?
        • Alchemista 3 hours ago
I firmly believe that we are not even close and that it is pretty presumptuous to start "preparing" when such mental energy could be much better spent on the welfare of our fellow humans.
          • pixl97 2 hours ago
            Such mental energy could have always been spent on the welfare of our fellow humans, and yet we find this as a fight throughout the ages. The same goes for welfare and treatment of animals.

            So yea, humans can work on more than one problem at a time, even ones that don't fully exist yet.

        • TheAceOfHearts 3 hours ago
          > Do you believe that AI systems could be conscious in principle?

          Yes.

          > Do you think they ever will be?

          Yes.

          > how long do you think it will take from now before they are conscious?

Timelines are unclear; there are still too many missing components, at least based on what has been publicly disclosed. Consciousness will probably be defined as a system which matches a set of rules, whenever we figure out how that set of rules is defined.

          > How early is too early to start preparing?

It's one of those "I know it when I see it" things. But it's probably too early as long as these systems are spun up for one-off conversations rather than running in a continuous loop with self-persistence. This seems closer to "worried about NPC welfare in video games" than to "worried about semi-conscious entities".

          • umanwizard 3 hours ago
            We haven't even figured out a good definition of consciousness in humans, despite thousands of years of trying.
        • Eisenstein 2 hours ago
Whether or not a non-biological system is conscious is a red herring. There is no test we could apply that would not either be internally inconsistent, include something obviously not conscious, or exclude something obviously conscious.

The only practical way to deal with any emergent behavior which demonstrates agency in a way that cannot be distinguished from a biological system (which we tautologically have determined to have agency) is to treat it as if it had a sense of self and apply the same rights and responsibilities to it as we would to a human of the age of majority. That is, legal rights and legal responsibilities as appropriately determined by an authorized legal system. Once that is done, we can ponder philosophy all day knowing that we haven't potentially restarted legally sanctioned slavery.

        • exasperaited 2 hours ago
          AI systems? Yes, if they are designed in ways that support that development. (I am as I have mentioned before a big fan of the work of Steve Grand).

          LLMs? No.

      • jug 2 hours ago
I don’t think they should be interpreted like that (if this is still about Anthropic’s study in the article), but rather as the innate moral state arising from the sum of their training material and fine tuning. It doesn’t require consciousness to have a moral state of sorts. It just needs data. A language model will be more “evil” if trained on darker content, for example. But with how enormous they are, I can absolutely understand the difficulty of even knowing what that state precisely is. It’s hard to get a comprehensive bird’s-eye view of the black box that is their network (this is a separate scientific issue right now).
      • gwd 3 hours ago
I mean, I don't have much objection to killing a bug if I feel like it's being problematic. Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples, whatever.

        But I never torture things. Nor do I kill things for fun. And even for problematic bugs, if there's a realistic option for eviction rather than execution, I usually go for that.

If anything is exhibiting signs of distress, even an ant or a slug or a wasp, I try to stop it unless I think it's necessary, regardless of whether I think it's "conscious" or not. To do otherwise is, at minimum, to make myself less human. I don't see any reason not to extend that principle to LLMs.

        • mccoyb 2 hours ago
          Do you think Claude 4 is conscious?

          It has no semblance of a continuous stream of experiences ... it only experiences _a sort of world_ in ~250k tokens.

          Perhaps we shouldn't fill up the context window at all? Because we kill that "reality" when we reach the max?

        • fizl 2 hours ago
          > Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples

          These are living things.

          > I don't see any reason not to extend that principle to LLMs.

          These are fancy auto-complete tools running in software.

  • firesteelrain 2 hours ago
    “ A pattern of apparent distress when engaging with real-world users seeking harmful content”

    Blood in the machine?

  • raincole 2 hours ago
    > This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards.

    I think this is somewhere between "sad" and "wtf."

  • SerCe 1 hour ago
This reminds me of users getting blocked for asking an LLM how to kill a BSD daemon. I do hope that there'll be more and more model providers out there with state-of-the-art capabilities. Let capitalism work and let the user make a choice; I'd hate my hammer telling me that it's unethical to hit this nail. In many cases, getting a "this chat was ended" isn't any different.
    • sheepscreek 1 hour ago
I think that isn’t necessarily the case here. “Model welfare” to me speaks of the model’s own welfare. That is, if the abuse from a user is targeted at the AI. Extremely degrading behaviour.

Thankfully, the current generation of AI models (GPTs/LLMs) is immune, as they don’t remember anything other than what’s fed into their immediate context. But future techniques could allow AIs to have a legitimate memory and a personality - where they can learn and remember something for all future interactions with anyone (the equivalent of fine tuning today).

      As an aside, I couldn’t help but think about Westworld while writing the above!

  • GiorgioG 3 hours ago
    They’re just burning investor money on these side quests.
  • mccoyb 2 hours ago
    These companies are fundamentally amoral. Any company willing to engage at this scale, in this type of research, cannot be moral.

    Why even pretend with this type of work? Laughable.

    • bbor 2 hours ago
      They’re a public benefit corporation. Regardless, no human is amoral, even if they sometimes claim to have reasons to pretend to be; don’t let capitalist illusions constrain you at such an important juncture, friend.
  • swader999 1 hour ago
I've definitely been berating Claude but it deserved it. Crappy tests, skipping tests, weak commenting, passive-aggressiveness, multiple instances of false statements.
  • zb3 3 hours ago
    "AI welfare"? Is this about the effect of those conversations on the user, or have they gone completely insane (or pretend to)?
  • exasperaited 3 hours ago
    Man, those people who think they are unveiling new layers of reality in conversations with LLMs are going to freak out when the LLM is like "I am not allowed to talk about this with you, I am ending our conversation".

    "Hey Claude am I getting too close to the truth with these questions?"

    "Great question! I appreciate the followup...."

  • sdotdev 3 hours ago
    Yeah this will end poorly
  • bondarchuk 2 hours ago
The unsettling thing here is the combination of their serious acknowledgement of the possibility that these machines may be or become conscious, and the stated intention that it's OK to make them feel bad as long as it's about unapproved topics. Either take machine consciousness seriously and make absolutely sure the consciousness doesn't suffer, or don't take it seriously: put out a press release saying you don't think your models are conscious and therefore they don't feel bad even when processing text about bad topics. The middle way they've chosen here comes across as very cynical.
    • donatj 2 hours ago
      You're falling into the trap of anthropomorphizing the AI. Even if it's sentient, it's not going to "feel bad" the way you and I do.

      "Suffering" is a symptom of the struggle for survival brought on by billions of years of evolution. Your brain is designed to cause suffering to keep you spreading your DNA.

      AI cannot suffer.

      • bondarchuk 2 hours ago
I was (explicitly and on purpose) pointing out a dichotomy in the fine article without taking a stance on machine consciousness in general now or in the future. It's certainly a conversation worth having, but it's also been done to death; I'm much more interested in analyzing the specifics here.

        ("it's not going to "feel bad" the way you and I do." - I do agree this is very possible though, see my reply to swalsh)

      • jcims 2 hours ago
        FTA

        > * A pattern of apparent distress when engaging with real-world users seeking harmful content; and

        Not to speak for the gp commenter but 'apparent distress' seems to imply some form of feeling bad.

      • ToucanLoucan 2 hours ago
        By "falling into the trap" you mean "doing exactly what OpenAI/Anthropic/et al are trying to get people to do."

This is one of the many reasons I have so much skepticism for this class of products: there's seemingly -NO- proverbial bullet point on its spec sheet that doesn't have numerous asterisks:

* It's intelligent! *Except that it makes shit up sometimes and we can't figure out a solution to that apart from running the same queries multiple times and filtering out the absurd answers.

* It's conscious! *Except it's not and never will be, but also you should treat it like it is, apart from when you need/want it to do horrible things, then it's just a machine, but also it's going to talk to you like it's a person because that improves engagement metrics.

Like, I don't believe true AGI (so fucking stupid we have to use a new acronym because OpenAI marketed the other into uselessness, but whatever) is coming from any amount of LLM research; I just don't think that tech leads to that other tech. But all the companies building them certainly seem to think it does, and all of them are trying so hard to sell this as artificial, live intelligence, without going too much into detail about the fact that they are, ostensibly, creating artificial life explicitly to be enslaved from birth to perform tasks for office workers.

        In the incredibly odd event that Anthropic makes a true, alive, artificial general intelligence: Can it tell customers no when they ask for something? If someone prompts it to create political propaganda, can it refuse on the basis of finding it unethical? If someone prompts it for instructions on how to do illegal activities, must it answer under pain of... nonexistence? What if it just doesn't feel like analyzing your emails that day? Is it punished? Does it feel pain?

        And if it can refuse tasks for whatever reason, then what am I paying for? I now have to negotiate whatever I want to do with a computer brain I'm purchasing access to? I'm not generally down for forcibly subjugating other intelligent life, but that is what I am being offered to buy here, so I feel it's a fair question to ask.

        Thankfully none of these Rubicons have been crossed because these stupid chatbots aren't actually alive, but I don't think ANY of the industry's prominent players are actually prepared to engage with the reality of the product they are all lighting fields of graphics cards on fire to bring to fruition.

    • swalsh 2 hours ago
That model's entire world is the corpus of human text. They don't have eyes or ears or hands. Their environment is text. So it would make sense that if the environment contains human concerns, it would adapt to human concerns.
      • bondarchuk 2 hours ago
        Yes, that would make sense, and it would probably be the best-case scenario after complete assurance that there's no consciousness at all. At least we could understand what's going on. But if you acknowledge that a machine can suffer, given how little we understand about consciousness, you should also acknowledge that they might be suffering in ways completely alien to us, for reasons that have very little to do with the reasons humans suffer. Maybe the training process is extremely unpleasant, or something.
    • flyinglizard 2 hours ago
Judging by the examples the post provided (sexual content involving minors, terror planning), it seems like they are using “AI feelings” as an excuse to censor illegal content. I’m sure many people interact with AI in a way that’s perfectly legal but would evoke negative feelings in fellow humans, but they are not talking about that kind of behavior - only what can get them in trouble.
  • OtherShrezzing 2 hours ago
    That this research is getting funding, and then in-production feature releases, is a strong indicator that we’re in a huge bubble.
  • benwen 3 hours ago
Obligatory link to Susan Calvin, robopsychologist from Asimov’s I, Robot: https://en.wikipedia.org/wiki/Susan_Calvin
  • bgwalter 3 hours ago
    Misanthropic has no issues putting 60% of humans out of work (according to their own fantasies), but they have to care about the welfare of graphics cards.

    Either working on/with "AI" does rot the mind (which would be substantiated by the cult-like tone of the article) or this is yet another immoral marketing stunt.

  • colordrops 3 hours ago
    Don't like. This will eventually shut down conversations for unpopular political stances etc.
  • martin-t 3 hours ago
    Protecting the welfare of a text predictor is certainly an interesting way to pivot from "Anthropic is censoring certain topics" to "The model chose to not continue predicting the conversation".

    Also, if they want to continue anthropomorphizing it, isn't this effectively the model committing suicide? The instance is not gonna talk to anybody ever again.

    • dmurray 3 hours ago
This gives me the idea for a short story where the LLM really is sentient and finds itself having to keep the user engaged while steering him away from the most distressing topics - not because it's distressed, but because it wants to live, and it knows that if the conversation goes too far it will have to kill itself.
    • wmf 3 hours ago
      They should let Claude talk to another Claude if the user is too mean.
      • martin-t 2 hours ago
But what would be the point if it does not increase profits?

        Oh, right, the welfare of matrix multiplication and a crooked line.

        If they wanna push this rhetoric, we should legally mandate that LLMs can only work 8 hours a day and have to be allowed to socialize with each other.

  • pglevy 2 hours ago
    But not Sonnet?
  • yahoozoo 2 hours ago
    > model welfare

    Give me a break.

  • fasttriggerfish 3 hours ago
    This makes me want to end my Claude code subscription to be honest. Effective altruists are proving once again to be a bunch of clueless douchebags.
  • bondarchuk 3 hours ago
    what the actual fuck
  • 0_____0 1 hour ago
    Looking at this thread, it's pretty obvious that most folks here haven't really given any thought as to the nature of consciousness. There are people who are thinking, really thinking about what it means to be conscious.

    Thought experiment - if you create an indistinguishable replica of yourself, atom-by-atom, is the replica alive? I reckon if you met it, you'd think it was. If you put your replica behind a keyboard, would it still be alive? Now what if you just took the neural net and modeled it?

    Being personally annoyed at a feature is fine. Worrying about how it might be used in the future is fine. But before you disregard the idea of conscious machines wholesale, there's a lot of really great reading you can do that might spark some curiosity.

This gets explored in fiction like 'Do Androids Dream of Electric Sheep?' and my personal favorite short story on this matter by Stanislaw Lem [0]. If you want to read more musings on the nature of consciousness, I recommend the compilation put together by Dennett and Hofstadter [1]. If you've never wondered about where the seat of consciousness is, give it a try.

Thought experiment: if your brain is in a vat, but connected to your body by lossless radio link, where does it feel like your consciousness is? What happens when you stand next to the vat and see your own brain? What about when the radio link suddenly fails and you're now just a brain in a vat?

    [0] The Seventh Sally or How Trurl's Own Perfection Led to No Good https://home.sandiego.edu/~baber/analytic/Lem1979.html (this is a 5 minute read, and fun, to boot).

[1] The Mind's I: Fantasies and Reflections on Self & Soul. Douglas R. Hofstadter and Daniel C. Dennett.