The Blake Lemoine incident is remembered today as the high point of AI hype. The news catapulted the whole concept of conscious AI into the public consciousness within a news cycle or two, but it also sparked a debate among both computer scientists and consciousness researchers that has only intensified in the years since. While the tech community continues to publicly downplay the whole idea (and poor Lemoine), privately it is starting to take the possibility more seriously. Sentient AI lacks a clear commercial rationale (how do we monetize it?) and may pose thorny moral dilemmas (how should we treat machines that can suffer?). But some AI engineers are beginning to wonder whether the holy grail of artificial general intelligence, machines that are not only super smart but also have human-level understanding, creativity, and common sense, might require something like consciousness. In the tech community, sentient AI had long been an informal taboo, kept quiet partly out of concern that the general public would find it creepy; that taboo has suddenly begun to crumble.
The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88-page report titled “Consciousness in Artificial Intelligence,” informally known as the Butlin Report. Within a few days, everyone in the AI and consciousness science communities seemed to have read it. The summary of the draft report includes the following startling sentence: “While our analysis suggests that none of our current AI systems are conscious, it also suggests that there are no obvious barriers to building conscious AI systems.”
The authors acknowledged that part of the inspiration behind convening the group and writing the report was the “Blake Lemoine case.” “If AI can give the impression of consciousness, it becomes an urgent priority for scientists and philosophers to address,” the co-authors told Science.
But what caught everyone’s attention was a line in the preprint’s abstract: “There are no obvious barriers to building conscious AI systems.” When I first read those words, I felt that some important threshold had been crossed, and not merely a technical one. This had to do with our very identity as a species.
What would it mean for humanity if, in the not-too-distant future, fully conscious machines were born into the world? I think it would be a Copernican moment, one in which our sense of centrality and specialness would suddenly be stripped away. We humans have spent thousands of years defining ourselves against “lower” animals, denying them supposedly uniquely human characteristics such as emotion (one of Descartes’ gravest errors), language, reason, and consciousness. Most of these distinctions have collapsed in recent years, as scientists have demonstrated that many species are intelligent, are conscious, have emotions, and use language and tools, challenging centuries of human exceptionalism in the process. This shift is still underway, raising thorny questions about our identity and our moral obligations to other species.
With AI, the threat to our elevated self-concept comes from a completely different source. In the future, we humans will have to define ourselves in relation to AI rather than to other animals. While computer algorithms surpass us in sheer mental power, handily defeating us at games like chess and Go and at various forms of “advanced” thinking such as mathematics, we can at least take solace in the fact that we (along with many other animal species) still retain the blessing and burden of consciousness: the ability to feel, to have subjective experience. In this sense, AI could act as a common enemy that brings humans and other animals closer together. It’s us against AI, living things against machines. This new sense of togetherness makes for a heartwarming story, and it may be good news for the animals invited to join Team Conscious. But what happens when AI begins to challenge the monopoly of human, or even animal, consciousness? Who will we be then?
