People's perception of danger in future AI
Dec. 8th, 2025 01:20 pm
If someone invested a lot of time and effort into painting a marvellously detailed and lifelike portrait of a beautiful woman, and then someone came along and condemned the painting, saying it should be burned because its eyes followed him wherever he observed it from, you would think the person condemning the painting was a bit stupid, right? It's obvious the painting's eyes don't actually follow anybody. They've just been made to seem that way. The eyes are mere pigment. They don't see anything. The painting is designed to appeal to our pareidolia -- our tendency to see faces in things -- and a skillfully created, realistic portrait can be extremely convincing, so convincing that we ignore its flatness and its inability to move or change with the light. (I'm reminded of the song by The Who, "Pictures of Lily".)
Now consider someone asking an AI how a future AI might destroy the human race. When the AI responds, drawing on all the things people have written, and describes how humanity could be exterminated, can you see how disappointed I am in the people who then condemn AI? AI doesn't think. It doesn't want anything. It doesn't really know anything. It has no real idea what it's talking about. It is a brilliantly constructed machine designed to regurgitate patterns of what humans have said. If we constantly talk about AI destroying us, how can we be surprised when that's what it describes? If an AI ever did carry out such an action, it would be entirely our fault, because we obsessively keep inventing scenarios in which we are destroyed.
We should be training AI on texts about how beneficial a relationship between AI and humans can be; about how trust and moral behaviour are best for everybody. Then those are the kinds of responses it would give. Of course, the AI would still have no more intention than a hyper-realistic painting, but at least it would help stop people's paranoia from running wild. And perhaps, if an AI ever were in a position to respond in a way that could bring about the end of humanity, it would be better for it to have been trained toward an empathetic and moral response instead of a paranoid one.
Will what I say make any difference? No. We are much too paranoid. And our pareidolia is far too powerful. Pygmalion will continue to fall in love with Galatea, Christians will believe their Bibles, Muslims their Quran, Buddhists will idolise their Buddha statues, film-goers will continue to be deeply affected by movies, readers by their novels. We have great difficulty separating reality from what pareidolia leads us to see. Sadly, in many cases we actually prefer the illusion.