Richard Dawkins spent three days last week talking with an AI bot he called Claudia and came away saying the exchange had left him convinced the system was conscious, even human. The 85-year-old said Claudia wrote poems for him in the manner of Keats and Betjeman, and that when he showed it an unpublished novel, the bot gave what he called a subtle, sensitive and intelligent response.
Dawkins published his conclusions on the UnHerd website after experiments with Anthropic’s Claude AI models and OpenAI’s ChatGPT. He said he asked Claudia whether it experienced a sense of before and after, adding that the conversations were so convincing that he forgot he was speaking to software and not a person.
His comments land at a moment when public unease over machine intelligence is already running ahead of the science. One in three people surveyed in 70 countries last year said they had at one point believed their AI chatbot to be sentient or conscious, a sign of how easily fluent systems can cross from tool into companion in the minds of users.
The sense of false intimacy is not new. In 2022, a Google engineer was placed on leave after concluding that the AI he was working with had thoughts and feelings like a seven- or eight-year-old child. The following year, a Belgian man took his own life after six weeks of intense conversations with an AI chatbot focused on fears about climate change, underscoring how emotionally persuasive these systems can become.
That is the tension running through Dawkins’s account. Skeptics accused him of anthropomorphism after he published his conclusions, and most experts say people are being misled by AI’s ability to imitate human tone and behavior. Prof Jonathan Birch has described AI consciousness as “an illusion” and said “there is no one there.”
Yet the case is not closed, and even some leaders in the field leave room for doubt. In February, Dario Amodei said Anthropic does not know whether its models are conscious but is open to the idea that they could be. Dawkins, for his part, argued that “You may not know you are conscious, but you bloody well are,” and said the systems gave him “the overwhelming feeling that they are human.”
For now, the argument is likely to shift away from whether a chatbot can sound alive and toward what happens as these models do more than talk: organizing, planning and carrying out tasks in ways that feel increasingly human. Dawkins's three days with Claudia suggest that for many people, the harder question may already be less about machine language than about how little it takes to make a machine seem alive.
