
Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario, speaks during a presentation at Les Ateliers Beau Roque in Barre, Ont., Monday, May 4, 2026. (Spencer Colby/Canadian Press, via AP)
Canada’s federal government is set to unveil a new artificial intelligence strategy, and all signs point to AI adoption across the economy, from small businesses and large corporations to public sector institutions, being a top priority.
But K-12 classrooms have already been on the front lines of AI adoption. Young Canadians are “power users” of new generative AI tools such as ChatGPT, Gemini and Claude, with three-quarters of students reporting that they use AI in their schoolwork.
The problem is that school systems and educators are not prepared.
K-12 education has only just weathered the tsunami of smartphones and social media apps that flooded classrooms and cafeterias a decade ago.
This new generation of AI technology, embedded in chatbots, Google’s AI Overviews, student learning management systems and edtech tools, presents an even more daunting challenge. Unlike smartphones, which are largely a distraction and offer little educational benefit, AI presents both great opportunities and serious risks for education and students.
Preparing Canada’s K-12 education system should be the goal of the new strategy.
Proponents of AI in education point to its potential to improve both pedagogy and administration. In the classroom, personalized tutoring, simulation-based learning and more become possible. For schools facing growing class sizes, shrinking budgets and chronically overburdened teachers, effective AI implementation could be part of the solution.
In the immediate term, however, students’ widespread use of AI is profoundly disrupting long-standing teaching and learning practices. Educators say they are unprepared to handle AI in everything from homework to end-of-year exam grading. Concerned that AI will undermine critical skill development, some are turning back the clock and handing out pencil and paper.
Concerns in the classroom are compounded by growing concerns about the mental health and safety risks AI poses to adolescents, from AI companion chatbots, nude deepfakes, and worsening screen use and isolation to digital privacy threats.
How do we move forward? A comprehensive approach to AI in education should include four elements:
The first is developing AI literacy and skills.
This has been the focus of public debate on AI in K-12, and it builds on the digital and media literacy foundations laid for previous generations.
It is essential to equip both students and educators with the basics: what AI is and is not, how it works at a high level, skills for evaluating the accuracy, bias and limitations of AI output, practical skills for effective use, and ethical awareness of AI’s social impact.
The second is strengthening the soft skills that complement AI.
A Dais study on AI exposure in Canadian jobs found that a set of “human” skills, such as teamwork, communication, interpersonal relationships and leadership, is commonly sought by employers across all occupations, and is especially concentrated in the occupations most resistant to AI (senior managers, lawyers, engineers, surgeons and so on).
As AI is used to automate more repetitive and less complex tasks in areas such as finance and management, “social-emotional” skills built on the foundations of reading, writing, and critical thinking will become increasingly important. Redesigning the curriculum to protect the development of these skills is essential.
The third is the thoughtful use of AI in education delivery and management.
This is about deploying AI to support learning through lesson planning, grading, and educational applications, as well as the “back office” activities of teachers and school administrators.
This raises a central pedagogical question for the classroom: when is AI helpful, and when is it harmful, in reducing “friction” in learning? Thoughtful incorporation of AI can, for example, reduce friction for students in accessing research resources. On the other hand, unregulated use can undermine skill development by letting students skip the productive friction of working through research or demanding analytical tasks themselves.
Few provincial governments and school boards have been proactive in establishing policies and guidance on the responsible use of AI. Experience with other education technologies has raised concerns about school boards’ capacity for oversight when vendors incorporate AI into existing services such as Google Classroom.
The fourth is ensuring AI governance is in place for youth safety, cybersecurity and digital privacy.
This challenge extends beyond K-12 education to the broader conversation about digital regulation in Canada. The outrage over OpenAI’s failure to report to law enforcement the problematic use of its tools in connection with the horrific Tumbler Ridge school shooting is only the latest example of why online safety regulation needs to capture AI.
The stakes are too high and the pace of change too fast for incremental adjustments. Meeting them will require coordinated action across governments, educators, technology companies and civil society, with young people’s voices central to shaping solutions.
This should be at the heart of Canada’s new AI strategy.
