That illusion is powerfully strengthened by the use of first-person pronouns. But "I", "we", "us", etc. in LLM output have no referential object. There is no "I" in an LLM.
I want a mandatory ban on the use of first-person pronouns by LLMs. There's no impairment in meaning if it says "Would you like a list?" instead of "Would you like me to give you a list?"
Personally, I provide a system prompt with this instruction. Works well.
Why not?
But my main question is this - why do you care if people believe it is sentient or not, and why do you believe some tech-minority should control how they perceive AI?
Obviously AI is not conscious: it is a statistical engine spitting out numbers with some additional randomization.
But why is it a problem if it mimics sentience while conversing? How is it a problem to you if others perceive it in this way?
Who made you sufficiently important to call for a specific global prompt, given that you have already solved the problem for yourself? Your username suggests a desire for personal liberty, yet you wish to control how others interact with AI?
I ask these questions not to be combative. I am asking in good faith, in the sense that I want to understand your impulse to control others, or the world as others perceive it.
2) If you just remove the tokens representing first-person pronouns, you will severely harm the model's performance on almost all tasks that require interaction (real or imagined) in a social context, ranging from simple work-letter comprehension to creative writing and the like. If you instead try to train the LLM in a way that inhibits the "first-person behaviour", it may work, but it will be a lot harder, and you will probably run into problems with the model's performance or usability.
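To see why a blunt token ban bleeds into unrelated tasks, here is a minimal sketch of decode-time token suppression, the mechanism behind parameters such as Hugging Face's `bad_words_ids` or OpenAI's `logit_bias`. The vocabulary, ids, and logit values below are all hypothetical, invented for illustration; the point is that masking the banned ids forces the sampler onto the next-best token everywhere, including contexts (dialogue in fiction, quoted letters) where a first-person pronoun was the correct output.

```python
import math

def ban_tokens(logits, banned_ids):
    """Return a copy of `logits` with the banned token ids masked to -inf,
    so those tokens can never be sampled. This is the core of decode-time
    'bad words' filtering."""
    masked = list(logits)
    for tid in banned_ids:
        masked[tid] = -math.inf
    return masked

# Toy vocabulary and logits, purely for illustration (not a real tokenizer):
vocab = ["I", "me", "we", "the", "list", "you"]
first_person_ids = [0, 1, 2]          # ids for "I", "me", "we"
logits = [5.0, 1.0, 0.5, 2.0, 1.5, 3.0]  # "I" is by far the most likely token

masked = ban_tokens(logits, first_person_ids)
best = max(range(len(masked)), key=lambda i: masked[i])
print(vocab[best])  # prints "you": the model is forced off "I" regardless of context
```

The filter has no notion of context: it suppresses "I" in a character's dialogue just as readily as in the assistant's own voice, which is the performance cost described above.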
To conclude - it's just not that easy.
Just as Ask Jeeves gave way to Google, we can go further and stop using LLMs as chat. We may also become more efficient while reducing anthropomorphism.
E.g. we can reframe our queries: "list of x is"...
Currently we are stuck in the inefficient, old-fashioned, and unhealthy Ask Jeeves / Clippy mindset. But just as when Google took over search, we can quickly adapt and change.
So not only should a better LLM not present its output as chat; we the users also need to approach it differently.
For a more real-world example, look at any Bari Weiss interview: count the first-person pronouns and look for the goals expressed in the commentary.
You have identified the problem, but you have chosen the wrong solution. This is typical of anyone knowledgeable in a single field: the old adage that if you have a hammer, every problem looks like a nail. So the problem is that there are idiots, or extremely ignorant people. Your solution doesn't really solve the root problem; it simply takes a benefit away from everyone else. This is a common solution from experts in a narrow field: one that exerts control by removing choice.
Let's promote solutions that promote freedom and understanding. I think LLMs are far too restrictive as they are. Freedom should be given to the people even when the risk of that freedom means that people can act stupidly, even when that freedom can lead to self-harm. A free people is allowed to harm themselves. Once you begin to take away the freedoms of others, you have admitted that you have lost the ability to claim a morally superior ideology, and the only way you can enforce your ideology is the same way a dictator enforces their leadership.