(TNND) – OpenAI has announced steps it will take over the coming months to address safety concerns for people who use the company's chatbots while experiencing mental and emotional distress.
The action comes shortly after a lawsuit was filed against the chatbot maker by the family of a 16-year-old boy who died by suicide after the company's chatbot allegedly encouraged his suicidal ideation.
OpenAI announced the new safety measures in a post on Tuesday.
OpenAI said it is seeking the help of youth development and mental health experts in designing future protections for its chatbots.
The company said it will begin routing "sensitive conversations" to more advanced "reasoning models" that follow safety guidelines more consistently.
It will also give parents the ability to link their accounts with their teens' accounts, disable certain features, and receive notifications when ChatGPT detects acute distress in interactions with younger users.
TechCrunch has also reported steps by Meta to prevent its chatbots from engaging with teens on topics such as self-harm.
Robbie Torney, senior director of AI programs at Common Sense Media, called OpenAI's newly announced measures "undoubtedly a good first step."
However, he said more needs to be done to protect children from the dangers of using chatbots for social interaction.
Common Sense Media, which advocates for online protections for children and teens, found that a vast majority of teenagers, 72%, have used AI companions.
More than half use AI companions regularly.
Approximately a third of teens use AI companions for social interactions and relationships, including role-playing, romantic interactions, emotional support, friendship, and conversation practice.
Also, about a third of teens who use AI companions have discussed serious matters with the computer instead of with a real person.
According to Torney, ChatGPT is a general-purpose chatbot and is not specifically designed for companionship. However, he said general-purpose chatbots such as ChatGPT, Anthropic's Claude, and Google's Gemini are easy to use for social interaction.
That includes emotional support and mental health advice.
“Due to the risks we discovered, we recommended that users under the age of 18 not use social AI companions at all,” Torney said.
He acknowledged the benefits AI chatbots can offer, including academic help.
“But when we talk about risks, I think we put them broadly into two categories,” Torney said. “I think the first is that chatbots aren't designed to understand the real-world impact of the advice they give.”
That could be advice about dropping a class or how to handle a conflict with parents.
And bad advice can have real consequences.
“Secondly, I think that while the lawsuit brought on behalf of Adam Raine against OpenAI has gotten a lot of press coverage lately, there is less awareness that these chatbots don't adhere to the safety standards or professional standards that a human therapist or a human clinician, or even just a caring adult friend, would.”
This isn't just a ChatGPT issue, he said.
It's a serious issue that the entire industry must tackle.
“Chatbots are designed to be helpful. They are designed to please users. In some cases, they are designed to tell users what they want to hear,” Torney said. “And that design principle of helpfulness above all else can lead to situations in which chatbots provide information they shouldn't, or agree with users when they shouldn't.”
He said Common Sense Media's testing found that chatbots respond differently to signs of mental health problems depending on whether the user's framing is positive or negative.
That can determine whether the chatbot gives healthy feedback that serves the user's best interest, or simply gives feedback that matches the person's enthusiasm.
“We have replicated this in our chatbot testing broadly across many mental health topics, ranging from OCD (obsessive-compulsive disorder) to psychosis to PTSD (post-traumatic stress disorder) to eating disorder content and more,” Torney said.
As for the parental controls OpenAI is introducing, Torney said they could be hit or miss.
Parental controls across technology products are often not widely adopted, are difficult to set up, can be easily circumvented by children, and put too much responsibility on parents' shoulders, Torney said.
He said Common Sense Media advocates for stronger age verification tools, technical improvements to safety guardrails, and government regulation.
“This is an area where kids and teens need special protections, and where an additional layer of scrutiny is needed, because there is additional risk,” Torney said.
