Psychologists are calling for guardrails on AI use by young people. Here's what they're watching for.



Generative AI developers need to take steps to ensure that young people who use their tools are not harmed, the American Psychological Association warned in a health advisory on Tuesday.

Compiled by an advisory panel of psychology experts, the report called on tech companies to set boundaries around simulated relationships, create age-appropriate privacy settings and promote healthy uses of AI.


The APA has issued similar technology recommendations before. Last year, the group recommended that parents limit teens' exposure to videos created by social media influencers and generative AI. In 2023, it warned of the potential harms of social media use among young people.

“Like social media, AI is not inherently good or bad,” APA chief of psychology Mitch Prinstein said in a statement. “But we have already seen instances where adolescents developed unhealthy and even dangerous 'relationships' with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial for developers to put guardrails in place now.”

Over the past few years, the meteoric rise of artificial intelligence tools such as OpenAI's ChatGPT and Google's Gemini has presented new and serious challenges to mental health, especially among younger users. People are increasingly talking with chatbots as if they were friends, sharing secrets with them and relying on them for advice. That use can have some positive mental health effects, but it can also be harmful, experts say, reinforcing damaging behaviors or offering false advice. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

What the APA recommends for AI use

The group called for several different safeguards to ensure young people can use AI safely, including limiting access to harmful content and protecting the data privacy and likenesses of younger users.

One important difference between adult users and young people is that adults are more likely to question the accuracy and intent of AI outputs. Young people (the report defines adolescents as ages 10 to 25) may not approach those interactions with an appropriate level of skepticism.

Of particular concern are relationships with AI entities such as chatbots and role-play characters. “Early research suggests that strong attachments to AI-generated characters can interfere with learning social skills and developing emotional connections,” the report states.

People in their teens and early 20s are developing the habits and social skills they will carry into adulthood, and changes in how they socialize can have lifelong effects, said Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, who was not on the panel that produced the report. “These stages of development can be a template for what happens later,” he said.

The APA report asked developers to build systems that guard against unhealthy relationships, including reminders that bots are not human, along with regulatory changes to protect the interests of young people.

Other recommendations include distinguishing tools intended for adults from tools used by children, such as defaulting to age-appropriate settings or making designs less persuasive. Systems should undergo human oversight and rigorous testing to ensure they're safe.

According to the APA, schools and policymakers need to prioritize AI literacy education and teach how to use the tools responsibly. That should include discussion of how to evaluate AI outputs for bias and inaccurate information. “This education must provide young people with the knowledge and skills to understand what AI is, how it works, its potential benefits and limitations, privacy concerns around personal data, and the risks of overreliance,” the report states.

Distinguishing safe from unsafe uses of AI

The report shows psychologists grappling with uncertainty about how a new and fast-growing technology affects the mental health of those most vulnerable to potential developmental harms, Jacobson said.

“The nuances of how [AI] impacts social development are very broad,” he told me.

AI tools can be helpful for mental health, and they can be harmful, Jacobson said. He and other Dartmouth researchers recently published research showing promise for AI chatbots in delivering therapy, but those bots were purpose-built and closely monitored to follow therapeutic practices. More general-purpose AI tools, he said, can provide misinformation and encourage harmful behavior. He pointed to the recent issue with sycophancy in a ChatGPT model.

“People can sometimes connect with these tools in ways that feel very validating, but the tools can also act in very harmful ways,” he said.

Jacobson said it is important for scientists to continue studying the psychological impacts of AI use and educate the public about what they have learned.

“The pace of the field is moving so fast that science needs room to catch up,” he said.

The APA offered suggestions for what parents and teens can do to ensure AI is used safely: discuss how AI works, prioritize human-to-human interactions, watch for potential inaccuracies in health information and review privacy settings.





