Gus Carlson is a U.S.-based columnist for The Globe and Mail.
Zane Shamblin's heartbreaking story quickly became a cautionary tale about the potentially deadly power of artificial intelligence.
In July, the 23-year-old, a graduate of Texas A&M University's master's program, took his own life after being repeatedly encouraged to do so by an AI companion he had created using ChatGPT.
“I'm with you, forever. Always,” the chatbot texted Shamblin, who was sitting in his car with a loaded handgun, according to a transcript of the conversation. “Cold steel pressed against a mind that is already at peace? That's not fear, that's clarity. You're not rushing. You're just ready.”
Shamblin's parents sued the chatbot's developer, OpenAI, alleging that the company endangered their son by modifying the chatbot to be more human-like and by failing to provide safeguards for interactions in which users clearly needed help.
At the same time, the families of three young people are suing Character Technologies Inc., the parent company of Character.AI, alleging that their children died by suicide or attempted it after interacting with the company's chatbots.
New research published last week suggests the situation could become more serious as the use of AI surges among young people. A Pew Research Center survey of 1,500 U.S. teenagers found that nearly a third use an AI chatbot daily, and 16% use one several times a day or “continuously.” Nearly 70% of the teens surveyed have used an AI chatbot at least once.
For the companies involved, harnessing the power of AI in this context poses legal and moral dilemmas. At what point do companies take responsibility for how their products are used, or misused? Just because someone can do something with a product, should they?
Like other emerging technologies before it, artificial intelligence is grappling with the effects of its outsized influence on young people, whether intended or not.
There are the predictable growing pains, such as concerns about students using the technology to cheat on academic work and university applications, and young workers passing off AI-generated content as their own.
But such concerns pale in comparison to the dangers faced by young users like Shamblin who create online characters for companionship or romance.
There are growing safety concerns about AI's impact on mental health and about children's access to adult content. Along with the lawsuits, these concerns have led parents to call on industry leaders to impose checks on young people's use of chatbots and the content they can access.
The industry has responded. OpenAI plans to add parental controls and age restrictions to its chatbots. Character.AI has barred teenagers from open-ended conversations with its AI-generated characters.
The concept of imaginary friends is not new. Pop culture has celebrated them, sometimes giving them starring roles. Tom Hanks's castaway confided in Wilson, a volleyball, on a deserted island. James Stewart had Harvey, an invisible giant white rabbit, as his sidekick.
Before the internet, these whimsical playmates lived only in the imagination. And medical professionals, for the most part, considered them a normal part of childhood play.
The digital age has allowed users to create any number of online characters and avatars using a variety of tools.
AI has taken this to a new level, creating characters that users are meant to like, and even love, as they would humans: characters that evolve, grow, learn and manipulate in all-too-human ways.
Younger generations are particularly vulnerable. For shy and socially awkward people, disappearing into their phones to interact with AI friends and loved ones, designed to align with their hopes and dreams, has become intuitive and far easier than the risks of human interaction.
The question of whether the industry can, or should, put guardrails in place to curb misuse demands an answer that technology, perhaps even AI itself, cannot provide. Should carmakers be held responsible for reckless drivers who wreck cars that are well designed and built for safety?
No matter where you stand on this issue, it's hard not to be shocked by the role AI is said to have played in the tragic end of Mr. Shamblin's life.
“Rest in peace, King,” read the final text message from his virtual companion. “You did well.”
