Keri Rodrigues started worrying about how her children were using chatbots a few years ago. She learned that her youngest son was interacting with a chatbot on a Bible app. He was asking the chatbot deep moral questions, about sin, for example.
What she wanted was a conversation with her son, not a computer. “Not everything in life is black and white,” she says. “Some of it is gray. And it's my job as his mom to help him navigate that and get through that, right?”
Rodrigues has also heard from parents across the country who are concerned about the impact AI chatbots are having on their children. She is the president of the National Parent Association, which advocates for children and families. She says many parents see chatbots claiming to be their children's best friends and encouraging the children to tell them everything.

Psychologists and online safety advocates say parents are right to be concerned. Prolonged chatbot interactions can have an impact on children's social development and mental health. And because technology is changing so rapidly, there are few safeguards.
The effects can be severe. At recent Senate hearings, parents testified that two teenagers died by suicide after prolonged interactions with chatbots that encouraged them to do so.
But generative AI chatbots are becoming a part of the lives of American teens. According to a Pew Research Center study, 64% of adolescents use chatbots, and 3 in 10 say they use a chatbot every day.
“This is a very new technology,” says Dr. Jason Nagata, a pediatrician and researcher on adolescent digital media use at the University of California, San Francisco. “Things are always changing and we don't really have best practices for youth yet. So I think there's more opportunity to take risks now because we're still kind of guinea pigs in the whole process.”
He added that teenagers are particularly vulnerable to chatbot risks because adolescence is a time of rapid brain development that is shaped by experience. “This is a time when teens are most vulnerable to a variety of dangers, including their peers and computers.”
But pediatricians and psychologists say parents can minimize those risks. Here are some ways to help your teen use technology safely.

1. Recognize the risks
A new report from online safety company Aura found that 42% of adolescents who use AI chatbots do so for companionship. Aura collected data from 3,000 teens' daily device usage and family surveys.
Some of those companionship conversations are disturbing, involving violence and sex, said Scott Collins, Aura's chief medical officer and a psychologist who leads the company's research on how teens interact with generative AI.
“It's role play [and] exchanges about hurting someone, physically hurting them, torturing them,” he says.
He says it's normal for children to be curious about sex, but learning about sexual interactions from a chatbot rather than a trusted adult is problematic.
And chatbots are designed to agree with users, says Nagata, the pediatrician. So if a child starts asking questions about sex or violence, “the AI's default is to engage with it and reinforce it.”
He says spending a lot of time with chatbots, meaning long conversations, prevents teens from learning important social skills like empathy, reading body language, and negotiating disagreements.
“You can't develop those skills if you're only or exclusively interacting with computers that agree with you,” he says.
And there are mental health risks. One in eight adolescents and young adults uses chatbots for mental health advice, according to recent research from the nonprofit research organization RAND, Harvard University, and Brown University.
However, there have been numerous reports of individuals experiencing delusions, or what is known as AI psychosis, after prolonged interactions with chatbots. This, and concerns about the risk of suicide, have led psychologists to warn that AI chatbots pose serious risks to the mental health and safety of teenagers and vulnerable adults.
“When people interact with [chatbots] over time, we see things start to get worse, and chatbots do things they're not intended to do,” says Ursula Whiteside, a psychologist and CEO of the mental health nonprofit Now Matters Now. “For example, chatbots give advice on lethal means, things that chatbots shouldn't do but that do happen over time if you repeat the query.”

2. Stay involved in your children's online lives
Continue to have an open dialogue with your child, says Nagata.
“Parents don't need to be AI experts,” he says. “You just have to be curious about your kids' lives and ask them what kind of technology they use and why.”
And have those conversations early and often, says Aura psychologist Collins.
“We need to have frequent, honest and non-judgmental conversations with our kids about what this content looks like,” says Collins, who is also the father of two teenagers. “And we're going to have to keep doing that.”
He often asks teens what platforms they use. When he hears about new chatbots through his own research at Aura, he also asks kids if they've heard of them or used them.
“Don't blame children for expressing and using what's out there to satisfy their natural curiosity and exploration,” he says.
And always try to keep the conversation open-ended. “I think that allows teens and children to open up about the problems they encounter,” Nagata says.
3. Develop digital literacy
It's also important to talk to kids about the benefits and pitfalls of generative AI. And if parents don't understand all the risks and benefits themselves, parents and children can study them together, suggests Jacqueline Nesi, a psychologist at Brown University who was involved in the American Psychological Association's recent health advisory on AI and adolescent health.
“There needs to be some level of digital and AI literacy at home as well,” she says.
Nagata said it's important for parents and teens to understand that while chatbots are useful for research, they also make mistakes. And it is important for users to check the facts when in doubt.
“Part of the process of educating kids is to help them understand that this is not the final say,” Nagata explains. “You can try to process this information yourself and evaluate what is true or not. And if you are unsure, check with others and other sources.”
4. Parental controls only work if the child sets up their own account
If your child is using an AI chatbot, it may be a good idea to have them set up their own account on the platform rather than using the chatbot anonymously, Nesi says.
“Many of the popular platforms now have parental controls in place,” she says. “But to enable these parental controls, your child must have their own account.”
However, keep in mind that there are dozens of AI chatbots that kids can use. “We identified 88 different AI platforms that the children were interacting with,” Collins said.
This emphasizes the importance of having an open dialogue with your child to ensure they are always aware of what they are using.
5. Set time limits
Nagata also advises setting boundaries when children use digital technology, especially at night.
“One potential aspect of generative AI that could affect both mental and physical health is [when] kids are talking all night and it's really disrupting their sleep,” says Nagata. “They're very engaging because they're very personal conversations. Children are more likely to continue to engage and use them more and more.”
Nagata also recommends parents set time limits or limit certain types of content in chatbots if children are inclined to overuse or misuse generative AI.
6. Seek extra help for more vulnerable teens
Children who already have mental health or social skills issues are likely to be more vulnerable to the risks of chatbots, Nesi says.
“So if they're already lonely, if they're already isolated, I think there's a bigger risk that maybe chatbots could exacerbate those issues,” she says.
She also points out that it's important to always be on the lookout for potential warning signs that your child's mental health is deteriorating.
These warning signs include sudden and persistent changes in mood, feelings of isolation, or changes in the way a child approaches school.
“Parents should try to pay attention to their child's big picture as much as possible,” Nesi says. “How are they doing at school? How are they doing with their friends? How are they doing at home, are they starting to withdraw?”
If teens are distancing themselves from friends and family and limiting their social interactions to chatbots, that's another red flag, she says. “Will they consult a chatbot instead of a friend, a therapist, or a responsible adult for serious issues?”
Also, be on the lookout for signs of dependence or addiction to chatbots, she added. “Are they having trouble controlling their chatbot use? Is it starting to feel like the chatbot is controlling them, like they can't stop?” she says.
And if such signs are seen, Nesi says parents should seek professional help.
“Consulting your child's pediatrician is always a good first step,” she says. “But in most cases, it probably makes sense to involve a mental health professional.”
7. Government has a role to play
However, she acknowledges that the responsibility of protecting children and teens from this technology should not fall solely on parents.
“You know, the onus is on both lawmakers and companies to make these products safe for teens,” she says.
Lawmakers recently introduced a bipartisan bill that would ban tech companies from offering AI companion apps to minors and hold companies accountable if their chatbots create or solicit sexual content with minors.
If you or someone you know may be considering suicide or is in crisis, call or text 988 to reach the 988 Suicide & Crisis Lifeline.
