Artificial intelligence (AI) is reshaping the way students write essays, practise languages and complete assignments. Teachers are also experimenting with AI for lesson planning, grading and feedback. The pace of change is so fast that schools, universities and policy-makers struggle to keep up.
A basic question is often overlooked in this rush: how are students and teachers actually learning to use AI?
Read more: School AI – This is what we need to consider
Most of this learning is currently happening informally. Students swap advice on TikTok and Discord, and ask ChatGPT itself for instructions. Teachers trade tips in staff rooms and glean ideas from LinkedIn discussions.
These networks spread knowledge quickly but unevenly and rarely encourage reflection on deeper issues such as bias, surveillance, and equity. That is where formal teacher education can make a difference.
https://www.youtube.com/watch?v=bej0_tvxh-i
Beyond curiosity
Research shows that educators are not ready for AI. Recent studies have found that many lack the skills to assess the reliability and ethics of AI tools. Professional development often stops at technical training and ignores the broader implications. Meanwhile, uncritical use of AI risks amplifying bias and inequality.
In response, we designed professional development modules within graduate-level courses at Mount St. Vincent University. Working teacher candidates engaged in:

- Hands-on exploration of AI for feedback and plagiarism detection.
- Co-design of assessments that integrate AI tools.
- Case analysis of ethical dilemmas in multilingual classrooms.
The goal was not only to learn how to use AI, but to move from casual experimentation to critical engagement.
Critical thinking for future teachers
Patterns emerged quickly during the sessions. Teacher candidates had been curious about AI from the start, but their use of it had remained superficial. By the end, participants reported stronger capabilities to assess tools, recognize bias and apply AI considerately.
I also noticed the language around AI shifting. Initially, teacher candidates were unsure where to start, but by the end of the sessions they were confidently using terms such as “algorithmic bias” and “informed consent.”
Teacher candidates increasingly framed AI literacy as a matter of professional judgment tied to pedagogy, cultural responsiveness and their professional identity. They came to view literacy not only as understanding algorithms, but as making ethical classroom decisions.
The pilot suggests that enthusiasm is not the missing ingredient; structure is. Structured education gave teacher candidates the tools and vocabulary to think critically about AI.
Inconsistent approach
Findings from these classrooms reflect broader institutional challenges. Universities around the world have adopted fragmented policies: some ban AI, some cautiously endorse it, and many remain vague. This inconsistency breeds confusion and distrust.
Alongside my colleague Emily Ballantine, I examined how AI policy frameworks could be adapted for higher education in Canada. Instructors were aware of AI's possibilities, but expressed concerns about equity, academic integrity and workload.
We proposed a model that introduces a “relational and emotional” dimension, highlighting that AI influences not only efficiency, but also relationships and trust. In practice, this means AI will not only change the way assignments are completed, but also reshape how students and instructors relate to each other in class and beyond.
Put another way, integrating AI into the classroom changes how students and teachers relate to one another, and how educators perceive their professional roles.
When institutions avoid setting clear policies, individual instructors are left to act as ad hoc ethicists without institutional support.
Embed AI literacy
Clear policy alone is not enough. For AI to truly support teaching and learning, institutions must also invest in building the knowledge and habits that sustain critical use. Policy frameworks provide direction, but their value depends on how they shape daily classroom practice.
- Teacher education must lead on AI literacy. If AI is shaping how students read, write and are assessed, AI literacy cannot remain an optional workshop. Programs need to integrate it into their curricula and learning outcomes.

- Policies must be clear and practical. Teacher candidates repeatedly asked: “What do universities expect?” As recent research recommends, institutions should distinguish between misuse (such as ghostwriting) and effective uses (such as feedback support).

- Learning communities matter. AI literacy is never acquired once and for all; it evolves as tools and norms change. Teacher circles, curated repositories and interdisciplinary hubs help educators share strategies and discuss ethical dilemmas.

- Equity must be central. AI tools embed biases from their training data, often putting multilingual learners at a disadvantage. Institutions must conduct equity audits and align procurement with accessibility standards.
Supporting students and teachers
Public discussions about AI in the classroom often swing between two extremes: excitement about innovation and fear of misconduct. Neither captures the complexity of how students and teachers are actually learning to use AI.
Informal learning networks are powerful but incomplete. They spread quick tips, but rarely cultivate ethical reasoning. Formal teacher education can step in to guide, deepen and equalize access to these skills.
When teachers have structured opportunities to explore AI, they move from passive adopters to active shapers of the technology. This shift matters because it ensures educators not only respond to technological change, but actively direct how AI is used to support equity, pedagogy and student learning.
That is the kind of capacity education systems must develop if AI is to serve learning, rather than undermine it.
