Why it's impossible to know if AI will become sentient

For decades, artificial intelligence existed only in our fiction. We have imagined intelligent computer programs and malicious machines in everything from The Terminator and The Matrix to I, Robot and M3GAN. Now, AI is leaking out of our fiction and into the real world.

These days, it feels like you can't search for anything online or even watch TV without touching AI, and the whole endeavor is ethically fraught. Popular large language models and generative AI programs are trained largely on data taken from writers and artists without their consent, and these programs consume energy and resources on a devastating scale. And if AI ever actually reaches the level of human consciousness, it would raise thorny questions of rights.

While the first two problems are within our ability to address and solve, Cambridge University philosopher Dr. Tom McClelland says we may never know whether AI will become sentient.

In a recent study published in the journal Mind and Language, McClelland argues that our ability to define and identify consciousness is too limited to know when, or if, AI will get there. Moreover, he argues that reliable testing may be impossible, or at least a long way off.

McClelland argues that consciousness alone is not enough to give AI moral standing. Instead, we would need to identify a particular kind of consciousness, sentience, in which a machine has a sense of good and bad, of pleasure and pain, and positive and negative feelings about what it is experiencing.

“Consciousness could still be a neutral state, even as AI develops and becomes self-aware,” McClelland said in a Dec. 17 statement. “Sentience involves conscious experiences that are good or bad, ones that allow a being to suffer or to enjoy. This is where ethics comes into play. Even if we accidentally create a conscious AI, it is unlikely to be the kind of consciousness we need to worry about.”

McClelland offers the example of self-driving cars: a car that can perceive and, in some sense, experience the road in front of it might have some level of consciousness, but we would only need to worry if it started to have feelings about where it's going and what it's doing.

While AI companies are busy claiming that artificial general intelligence (machines with human-like cognition) is just around the corner, McClelland notes that we can't even properly define our own consciousness, so it's unclear how we could test for it in anything else.

Proponents of AI sentience suggest that replicating the functional structure of consciousness in a machine would generate consciousness in that machine. Critics counter that consciousness requires an “embodied, organic subject” and cannot be created artificially. McClelland, for his part, argues that both sides are making logical leaps unsupported by the available evidence.

“We don't have a detailed explanation of consciousness,” McClelland said. “There is no evidence that consciousness would emerge given the right computational structure, nor that consciousness is necessarily biological in nature. Nor is there any sign that sufficient evidence is on the horizon. At best, we are an intellectual revolution away from any kind of viable consciousness test.”

McClelland also reminds us that the question of consciousness in our machines echoes the complex ethical questions raised by how we treat other organisms that we already know (or at least have good reason to believe) are conscious.

“There is growing evidence to suggest that shrimp may be capable of distress, yet we kill about 5 trillion shrimp each year,” McClelland said. “Testing shrimp for consciousness is difficult, but it is nowhere near as difficult as testing for consciousness in AI.”

Encounter a sentient machine at your own risk in M3GAN, now streaming on SYFY.


