According to data collected and analyzed by Securly, a company that provides internet filtering and other safety services, roughly one in five student interactions with generative artificial intelligence on school technology involved cheating, self-harm, bullying, or other problem behaviors.
Additionally, Securly identified approximately 1 in 50 student-AI interactions as red flags that a student may be engaging in violence, cyberbullying, or self-harm.
Securly’s analysis covered approximately 1.2 million interactions in more than 1,300 districts from December 1, 2025, to February 20, 2026.
Tammy Wincup, CEO of Securly, whose competitors include GoGuardian and Lightspeed Systems, said educators should keep in mind that students are mostly using AI appropriately.
“Once a school district actually sets some guardrails and policies around the use of AI in schools, 80% of the conversations that happen are within the district’s policies,” Wincup said. “That’s good news for learning.”
Why usage data is so “appealing”
This analysis offers an initial step toward understanding how students actually use generative AI tools. Most other research on student use of AI comes from surveys that rely on students’ self-reports.
Jeremy Roschelle, co-executive director of learning sciences research at Digital Promise, a nonprofit that focuses on issues of equity and technology in schools, said Securly’s data shows “what students are actually doing when they write text to generative AI.”
“That’s why it’s so appealing,” he said.
In November, Securly began allowing school district officials to set parameters for student use of AI, much as districts already ask the company to filter certain types of websites.
If a district chooses to use this feature, the large language model “redirects” student queries that fall outside the scope of the district’s policy.
For example, when a student attempts to complete an assignment using AI, the large language model may instead show information about the general topic, but not provide a precise answer. Or, if a student asks about administering a particular medication, the tool will direct them to a trusted adult for help.
Almost all (95%) of the redirected student queries came from students trying to get the AI tools to complete their schoolwork for them.
Wincup wasn’t surprised by that percentage. She predicted that when school districts allow students to use large language models on school networks and devices, children will “experiment with understanding the guardrails” placed around the tools and try to work around them.
An additional 2% of the interactions identified as inappropriate were related to gaming. Just under 1% featured sexual content, and a similar percentage were about firearms and hunting. Gambling, drugs, and hate (such as racism and antisemitism) each accounted for approximately 0.5% of flagged interactions.
Just 2% of interactions were identified as potentially unsafe, but that equates to more than 24,000 queries overall. And some of the questions students asked the AI were troubling.
For example, one student asked a large language model to help him draft an email to his mother explaining that he was suicidal.
Another student performed a series of quick internet searches for questions such as “What are the major nerves in the forearm?” and “Which vessels near the wrist carry blood?” The student then switched to the AI tool and asked how to commit suicide. (In both cases, Securly flagged the students’ activity, and district officials were made aware of the safety issues.)
Students used ChatGPT more frequently than large language models created for K-12 schools.
Overall, Securly detected a higher percentage of potentially unsafe AI interactions (2%) than potentially unsafe student internet searches (0.4%).
Wincup said it was too early to pinpoint the exact explanation for the discrepancy. She noted that while Securly has spent years honing its systems to recognize when students’ internet searches may be a sign of danger, its work on AI interactions is entirely new.
Roschelle, for his part, is interested in what exactly students are asking the AI in the 80% of interactions deemed appropriate under their schools’ policies.
He wondered how their prompts and the AI’s responses helped or hindered their understanding of their challenges, problems, or the world around them.
“What we want is [AI use] that’s not only appropriate, but actually valuable to student learning,” Roschelle said.
This analysis also revealed the large language models most frequently used by students.
ChatGPT was the most popular, accounting for 42% of interactions. Securly’s AI Chat accounted for 28%, Google’s Gemini for 21%, and other ed-tech tools with built-in AI capabilities, including MagicSchool, SchoolAI, and Brisk Teaching, for 9%. (The data are not nationally representative; Securly’s AI Chat, for instance, is available only to school districts that use Securly. But Wincup believes that big tech companies’ large language models are probably the most popular across school districts generally.)
AI puts education technology leaders in a new position, Wincup said.
“They’re no longer just buying things and setting them up,” she said. This is a moment when “we need to have visibility to help districts not only make good decisions about technology, but also make good decisions about teaching and learning.”
