The Osceola County School District is taking steps to adapt to the ever-changing world of artificial intelligence. At a workshop last week, the school board discussed the use and misuse of AI by students and the consequences of such actions.
Generative AI tools such as ChatGPT and Copilot have become affordable and widely available over the past few years and are now frequently integrated into everyday software. AI can be used to solve problems, draft documents, answer questions, generate images, and more. That’s just the tip of the iceberg.
While AI itself may not be a problem, its use in school settings certainly can be. During the workshop, Dr. Gabriel Berio, Assistant Dean of Student Services, shared new language added to the 2026-27 Student Code of Conduct Handbook regarding the use of AI.
“We have added language explaining that AI tools and applications cannot be used to generate answers on district, state, or national standardized assessments,” Berio said. “Artificial intelligence and electronics are becoming so accessible that we wanted to put even more emphasis on that.”
The board also addressed the creation of “deepfakes” using AI.
“Taking someone’s voice, inputting it into some kind of AI, and having it say something that’s not accurate would absolutely be a violation of [the district’s harassment] policy,” said Superintendent Dr. Mark Shanoff.
This includes creating images, or using existing images, to harass someone. Berio added that even if a photo is fake, it still represents a real individual.
And it’s not just the creators of the content who can be in violation. School board attorney Sarah Collen said it’s not [just] about creating content; sharing it electronically counts, too. “If you receive an image from someone else and show it to a friend on your phone…that can also trigger this Level 4 consequence (representing the highest level of violation of the rules). If a student receives an inappropriate image, the only safe way to avoid this situation is to not share it with anyone and to report it to the administration immediately.”
Berio agreed, saying there have been instances where students have reported AI-generated photos or texts of a harassing nature to school administrators. Students who do so are eligible for safe harbor protection as long as they do not share or show the material to others.
“At the end of the day, these tools are no joke,” Shanoff said. “And all it takes is one bad decision, one bad joke, and you can no longer attend our school.”
Addressing AI issues in the student code of conduct is just one step the board is taking. To be proactive, additional training will be provided to administrators and deans, “Know the Law” posters will be created, and a social media campaign to increase student awareness is being considered.
“We hope that the more different ways we get the message out, the more that message will reach our students,” Berio said.
These local efforts are part of a broader push across Florida. The Florida Department of Education helped establish the Florida K-12 AI Education Task Force, which brings together educators, school leaders, and experts from across the state. The group draws from dozens of school districts, universities, and industry partners, with roughly 250 members in all.
The task force is working to develop practical guidance and resources for schools, including an AI toolkit for districts that addresses topics such as how to teach students about AI, how to protect student privacy, how to use AI ethically in the classroom, and how to train teachers.
The goal is not to restrict all use of AI in the classroom, but to help schools make the most of it while keeping students safe.
Thomas Kennedy, a member of both the Citrus County Board of Education and the Florida K-12 AI Education Task Force, summed up the challenge facing schools:
“Are we educating our students for the world they came from?” he asked. “Or for the world in which they will have to succeed?”
