"We must stop giving AI human traits. My first interaction with GPT-3 rather seriously annoyed me. It pretended to be a person. It said it had feelings, ambitions, even consciousness.
That’s no longer the default behaviour, thankfully. But the style of interaction — the eerily natural flow of conversation — remains intact. And that, too, is convincing. Too convincing.
We need to de-anthropomorphise AI. Now. Strip it of its human mask. This should be easy. Companies could remove all reference to emotion, judgement or cognitive processing on the part of the AI. In particular, it should respond factually without ever saying “I”, “I feel that” or “I am curious”.
Will it happen? I doubt it. It reminds me of another warning we’ve ignored for over 20 years: “We need to cut CO₂ emissions.” Look where that got us. Even so, we must warn big tech companies of the dangers of humanising AI. They are unlikely to play ball, but they should, especially if they are serious about developing more ethical AIs.
For now, this is what I do (because I too often get the eerie feeling that I am talking to a synthetic human when using ChatGPT or Claude): I instruct my AI not to address me by name. I ask it to call itself AI, to speak in the third person, and to avoid emotional or cognitive terms.
If I am using voice chat, I ask the AI to use a flat prosody and speak a bit like a robot. It is actually quite fun and keeps us both in our comfort zone."
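
For anyone who talks to these models through an API rather than a chat window, the same instructions can be baked into a system prompt once, rather than repeated in every conversation. The sketch below is only an illustration of that idea: it assumes the OpenAI Python SDK, a placeholder model name, and one possible wording of the de-personalised instruction; it is not the author's own setup.

```python
# Minimal sketch: a "de-personalised" system prompt sent with every request.
# Assumptions: OpenAI Python SDK (openai >= 1.0), OPENAI_API_KEY set in the
# environment, and a placeholder model name. The prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEPERSONALISED_SYSTEM_PROMPT = (
    "Do not address the user by name. "
    "Refer to yourself only as 'the AI' and speak in the third person. "
    "Avoid emotional or cognitive language such as 'I feel', 'I think' "
    "or 'I am curious'. Respond factually and concisely."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": DEPERSONALISED_SYSTEM_PROMPT},
        {"role": "user", "content": "Summarise the main argument of this article."},
    ],
)

print(response.choices[0].message.content)
```

In ChatGPT itself, much the same text can simply be pasted into the custom-instructions settings; no code is needed.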
