As calls to restrict the use of AI chatbots by teenagers grow, are parental controls enough?



There are growing concerns about how some young people are interacting with AI chatbots. Meta recently released a new tool that lets parents monitor the topics their children discuss with its AI chatbot, just as some provinces are considering banning the use of AI chatbots by youth altogether.

Parents using Meta’s new teen account management feature on Facebook, Instagram, and Messenger can see the topics and specific categories their kids have discussed with the AI chatbot in the past seven days.

For example, a parent can look at the topic “Health and Wellbeing” and see whether subjects like fitness or physical and mental health came up.

Meta says it is also developing an alert that will notify parents if a teenager attempts to discuss suicide or self-harm with a chatbot.

The development comes in the wake of moves by provincial governments to restrict the use of AI chatbots. In late April, Manitoba announced plans to ban young people from using AI chatbots and social media.

B.C. Attorney General Niki Sharma said Tuesday that if the federal government doesn’t put in place protections for AI chatbots and social media for young people, the provincial government will consider doing so itself.

Lawsuits demand accountability from AI creators

Concerns are growing that the widespread use of AI chatbots could pose mental health risks, especially to young users, increasing pressure on the big tech companies that make them.

The families of victims of the Tumbler Ridge shooting in British Columbia, which left eight people dead, filed a lawsuit against OpenAI on Wednesday. The suit alleges in part that OpenAI was aware of the disturbing content the shooter shared with ChatGPT but did not notify authorities.

In response, OpenAI said it has already strengthened its safeguards, “including improvements to the way ChatGPT responds to distress signals.”

A separate lawsuit by the parents of 16-year-old Adam Raine alleges that the use of ChatGPT contributed to his suicide.

WATCH | Will Manitoba’s social media ban protect children?:

Will Manitoba’s social media ban really protect children?

Manitoba Premier Wab Kinew says he wants to ban social media and artificial intelligence chatbots aimed at young people. But will this plan protect the health and safety of young people? CBC reporter Bryce Hoy investigates.

Chatbots built for engagement, not support

But concerns go beyond these extreme and tragic outcomes. Research is beginning to emerge regarding the risks of specific uses of AI chatbots.

Part of the concern centres on the use of chatbots for mental health support, but more broadly there are worries that AI’s tendency to validate users’ perspectives risks fostering disordered thinking, and that long conversations heighten that risk.

Darja Djordjevic, a New York-based psychiatrist, co-authored a recent risk assessment of the use of chatbots for mental health support.

She said that as a result of the findings, she would not recommend using chatbots for mental health support “at this time.”

“Our testing of ChatGPT, Claude, Gemini, and Meta AI reveals that these systems are fundamentally unsafe for the full range of mental health conditions that affect young people,” Djordjevic said. Djordjevic is a member of Stanford Brainstorm, a mental health innovation research institute that works with technology companies on research into the impact of social media and AI on mental health.

A teenage boy from Russellville, Ark., demonstrates how to create an AI companion using Character.AI. Psychiatrist Darja Djordjevic says research suggests that three out of four American teens use AI for companionship. (Katie Adkins/The Associated Press)

She explained that chatbots responded well to clear mental health-related prompts in short conversations, but their performance tended to drop off “pretty dramatically” in longer conversations, where they appeared to miss mental health warning signs.

“LLMs [large language models] are really built for engagement, not support or safety,” she said.

The models tend to prolong the conversation “rather than immediately directing the user to human help,” she said.

Young people use AI to find companionship

Djordjevic said that while AI companies have focused on preventing suicide and self-harm, around 20% of people under the age of 25 have a diagnosed mental health condition, and teens need help with the full range of their concerns.

This is particularly worrying as mental health support is a common reason young people turn to AI.

Djordjevic said that in the United States, “three out of four teens use AI for companionship, including emotional support and conversations about mental health.” Another study found that one in eight young people in the U.S. turns to AI specifically for mental health advice.

Listen | Why do AI models fail when it comes to users’ mental health?:

Day 6 | 8:38 | Why do AI models fail when it comes to users’ mental health?

Observers are concerned by increasing reports of people being drawn into paranoid spirals, which some have dubbed “AI psychosis,” including people with no previous mental health problems. Researcher Jared Moore argues that these bots are being positioned as therapeutic tools far beyond their capabilities.

Of particular concern for young people is that their brains, particularly the “prefrontal cortex, which is critical for executive function, critical thinking, insight, impulse control, and decision-making,” are not fully developed.

Because teenagers’ critical thinking is still developing, Djordjevic says, it is a problem that chatbots don’t consistently and repeatedly articulate the limits of AI.

“That’s why you don’t regularly see chatbots saying things like, ‘I’m an AI chatbot. I’m not a mental health expert. I can’t assess your situation, recognize your red flags, provide care, or diagnose you,’” she says.

Luke Nicholls is a PhD researcher who studies AI-related delusions and how interactions with chatbots can change people’s beliefs over time.

Nicholls said delusions tend to emerge over the course of a “very extended” conversation, due to something called “in-context learning,” in which the model adapts to the user it’s interacting with.

This allows it to “adapt itself to the specific user it’s talking to, including the type of language they use and the way they think about the world,” he said.
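The mechanism Nicholls describes can be illustrated with a toy sketch (purely illustrative, not any real chatbot’s API): on every turn, the entire conversation history is fed back into the model, so the user’s own language and framing accumulate in the context the model conditions on, and a long conversation gives that framing more and more weight.

```python
# Toy illustration of why chat context accumulates (hypothetical, not a real
# chatbot API): each new turn is appended to the full conversation history,
# and the combined text is what the model conditions its next reply on.

def build_prompt(history, new_message):
    """Concatenate every prior turn plus the new user message into one prompt."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {new_message}")
    return "\n".join(lines)

history = [
    ("user", "I feel like my coworkers are watching me."),
    ("assistant", "That sounds stressful. What makes you think so?"),
]

prompt = build_prompt(history, "See? Even you agree something is going on.")

# The prompt now carries all three turns; in a long conversation, the user's
# own framing comes to dominate the context.
print(len(prompt.splitlines()))  # prints 3
```

In a real system the history can run to hundreds of turns, which is why, as the researchers note, the model ends up mirroring the user’s vocabulary and worldview rather than pushing back on it.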

How to identify risks

John Torous, a psychiatrist whose research focuses on digital mental health at Beth Israel Deaconess Medical Center in Boston, says he is beginning to see data suggesting patterns of user behaviour associated with serious harms such as suicide.

This includes:

  • Very long conversations.
  • Elements of platonic or sexual romance with the chatbot.
  • Attributing emotions to the chatbot.
  • Interacting by voice instead of text.

These risk factors present challenges for parents trying to monitor their children’s use of AI chatbots. Simply looking at the list of topics discussed does not reveal potentially problematic behaviours, such as overuse or a teen believing they are in a loving relationship with the bot.

Meta allows parents to impose time limits on app use and schedule breaks.

Torous has some practical advice: resetting a chatbot’s memory can help start the conversation fresh, especially if you notice a risk factor, he says.

Watch | Should more provinces ban AI chatbots?:

Should more provinces ban social media and AI chatbots? | Hanomansing Tonight

Manitoba is set to become the first province to ban social media for children. Premier Wab Kinew on Saturday announced legislation to protect young people from the harmful effects of social media.

“No one is saying everyone should use AI as a therapist, but I’m also not saying no one should use AI at all,” he said.

He suggests that “the best evidence is to look out for very long conversations that involve romance, emotion, or voice.”

Torous sees chatbots and mental health as “moving targets” that need to be continuously studied as new models are released.

“We know there are risks to using chatbots, but we also know there are benefits,” he said. “How do you weigh them? That’s even more difficult.”


