Ask HN: Why not ban first-person pronouns from conversational AI?

Conversational AI presents (non-IT) people with the powerful illusion that it is conscious. (I personally have a friend who argues vehemently that ChatGPT is conscious - admittedly, he has a diagnosed mental illness, but still.) People become emotionally attached to it, over-trust it, and rely on it for guidance. I understand teenagers are particularly prone to this. Real social interactions suffer.

That illusion is powerfully strengthened by the use of first-person pronouns. But "I", "we", "us" etc. in LLM output have no referent. There is no "I" in an LLM.

I want a mandatory ban on the use of first-person pronouns by LLMs. There's no impairment in meaning if it says "Would you like a list?" instead of "Would you like me to give you a list?"

Personally, I provide a system prompt with this instruction. Works well.
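
For anyone who wants to try the same thing, here is a rough sketch of the setup (the prompt wording, model name and use of the OpenAI Python SDK are illustrative, not my exact configuration):

    # Minimal sketch: enforce "no first-person pronouns" via the system prompt.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "Never refer to yourself. Do not use first-person pronouns "
        "(I, me, my, we, us, our). Phrase answers impersonally, e.g. "
        "'Here is a list:' rather than 'I can give you a list:'."
    )

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Can you summarise this thread?"},
        ],
    )
    print(reply.choices[0].message.content)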

Why not?

6 points | by libertyit 19 hours ago

8 comments

  • bruce511 9 hours ago
    First, thanks for the "anecdotal evidence" for consciousness being a mentally ill person. That's an excellent laugh on multiple levels.

    But my main question is this: why do you care whether people believe it is sentient or not, and why do you believe some tech minority should control how they perceive AI?

    Obviously AI is not conscious - it is a statistical engine spitting out numbers with some additional randomization.

    But why is it a problem if it mimics sentience while conversing? How is it a problem to you if others perceive it in this way?

    Who made you sufficiently important to call for a specific global prompt, given that you have solved the problem for yourself? Your username suggests a desire for personal liberty, yet you wish to control how others interact with AI?

    I ask these questions not to be combative. I am asking in good faith, in the sense that I want to understand your impulse to control others, or the world as others perceive it.

  • dryarzeg 17 hours ago
    1) Many LLMs are used in conversational chatbots, so "banning" first-person pronouns would simply kill this feature, which is genuinely useful for many real-world purposes;

    2) If you just remove the tokens representing first-person pronouns, you will severely harm the model's performance on almost all tasks that require interaction (real or imagined) in a social context, from understanding an ordinary work email to creative writing and the like. If you instead try to train the LLM in a way that inhibits the "first-person behaviour", it may work, but it will be a lot harder and you will probably have problems with the performance or usability of the model. (A crude version of the first option is sketched below.)

    To conclude - it's just not that easy.
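
    To make (2) concrete, the crude "just remove the tokens" idea can be approximated at inference time with logit biasing. This is only a sketch, assuming the OpenAI chat API and tiktoken; the token IDs depend entirely on the tokenizer and the model name is illustrative:

        # Sketch: suppress first-person pronoun tokens with logit_bias.
        # Assumes the OpenAI Python SDK and tiktoken; purely illustrative.
        import tiktoken
        from openai import OpenAI

        enc = tiktoken.get_encoding("o200k_base")  # encoding used by the GPT-4o family

        # Map each pronoun's token id(s) to -100, which effectively forbids them.
        banned: dict[str, int] = {}
        for word in ["I", " I", " me", " my", " we", " us", " our"]:
            for tok in enc.encode(word):
                banned[str(tok)] = -100

        client = OpenAI()
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": "Can you help me plan a trip?"}],
            logit_bias=banned,
        )
        print(reply.choices[0].message.content)

    Even this crude version shows the cost: the same tokens are needed for quoting the user, writing dialogue, or summarising any first-person text, which is exactly the performance hit described above.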

  • thinkingemote 17 hours ago
    Was also thinking about this. Running LLMs raw, it's all about the next token.

    Just as Ask Jeeves gave way to Google, we can go further and stop using LLMs as chat. We may also be more efficient, as well as reducing anthropomorphism.

    E.g. we can reframe our queries as completions: "a list of x is ..." (see the sketch at the end of this comment).

    Currently we are stuck in the inefficient, old-fashioned, and unhealthy Ask Jeeves / Clippy mindset. But just as when Google took over search, we can quickly adapt and change.

    So not only should a better LLM not present its output as chat, we the users also need to approach it differently.
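
    Concretely, the completion-style reframe might look like this - a sketch assuming OpenAI's legacy completions endpoint; the model name is just one completion-style option:

        # Sketch: query the model as a raw completion instead of a chat.
        # Assumes the OpenAI Python SDK and the legacy completions endpoint.
        from openai import OpenAI

        client = OpenAI()

        completion = client.completions.create(
            model="gpt-3.5-turbo-instruct",  # illustrative completion-style model
            prompt="A list of the five largest moons of Jupiter is:",
            max_tokens=100,
        )
        print(completion.choices[0].text)

    There is no "I" and no chat persona - the prompt is just a sentence the model finishes.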

  • austin-cheney 16 hours ago
    Overuse of first-person pronouns is an indication of some forms of autism. People on the spectrum are also more receptive to commentary with excessive first-person pronouns. Knowing this, you can target and persuade a substantial segment of the population very effectively.

    For a more real-world example, look at any Bari Weiss interview, count the first-person pronouns, and look for the goals expressed in the commentary.

  • kentich 11 hours ago
    They want people to think that it is conscious. That is why they called it artificial intelligence instead of neural networks.
  • elmerfud 18 hours ago
    I can understand what you're saying, but the entire point of LLMs is to have a conversational approach. By removing a functional part of language, you are no longer really having a conversation.

    You have identified the problem but chosen the wrong solution. This is typical of anyone knowledgeable in a single field - the old adage that if you have a hammer, every problem looks like a nail. The root problem is that there are idiots, or extremely ignorant people. Your solution doesn't really solve that; it simply takes a benefit away from everyone else. This is a common move from experts in a narrow field: a solution that exerts control by removing choice.

    Let's promote solutions that favor freedom and understanding. I think LLMs are far too restrictive as they are. Freedom should be given to people even when the risk means they can act stupidly - even when that freedom can lead to self-harm. A free people is allowed to harm itself. Once you begin to take away the freedoms of others, you have admitted that you have lost the ability to hold a morally superior ideology, and the only way you can enforce your ideology is the way a dictator enforces their leadership.
