Cybersecurity in the age of AI: Challenges to education in India

Niraj Dubey
singer2671@gmail.com
In this era of global technological transformation, the integration of artificial intelligence (AI) into schools and higher education has turned cybersecurity from a narrow IT problem into a critical strategic governance challenge. AI-enabled applications are now widely used in education, communication, governance, data processing, and content generation. While these technologies bring greater efficiency, accessibility, and innovation, they also allow cybercriminals to automate, personalize, and accelerate attacks, and therefore demand informed understanding and responsible engagement.

The education sector is currently among the most heavily targeted, with ransomware and AI-powered social engineering posing significant risks to sensitive student data and institutional continuity. Educators, administrators, and learners increasingly rely on AI-enabled digital platforms for teaching, assessment, communication, and academic support. Yet limited awareness of how AI systems work, how data is processed, and how cyber risks arise can expose users to threats such as identity misuse, misinformation, and breaches of personal and organizational data.

Because AI systems depend on large-scale data collection, automated decision-making, and algorithmic processing, they are susceptible to unauthorized data access, privacy violations, algorithmic bias, automated cyberattacks, deepfake operations, and AI-powered fraud. The growing use of generative AI tools and intelligent platforms has expanded the cyber threat landscape, making cyber safety a critical concern for individuals and educational institutions alike.
Understanding AI-related cyber risks has therefore become an important aspect of modern digital literacy. Cyber safety in the context of AI extends beyond technical safeguards: it includes the ethical use of AI tools, recognition of AI-enabled cyber threats, protection of data and digital identities, and adherence to legal and regulatory frameworks. Educational institutions play a critical role in guiding teachers and learners toward responsible engagement with AI technologies and in fostering a culture of critical awareness, responsibility, and safe digital practice. The online training programme 'Cyber Safety in the Age of AI' aims to strengthen the knowledge and practical understanding of educators and institutional stakeholders on the safe and responsible use of AI technologies, and to equip participants to guide learners to engage with AI-powered digital tools thoughtfully and ethically. The initiative is in line with the vision of the National Education Policy (NEP) 2020, which emphasizes digital literacy, critical thinking, ethical use of technology, and cyber safety as integral components of education. At the same time, rapid advances in AI tools such as ChatGPT have given students unprecedented means to produce high-quality academic content with minimal effort or learning.
New cybersecurity challenges in an AI-driven era
* Hyper-personalized phishing and social engineering: Attackers are using generative AI to create more convincing phishing emails that mimic the tone and style of school administrators, significantly increasing success rates.
* Deepfake impersonation: AI-powered tools can generate realistic audio and video clones, enabling fraud in which a fake "director" instructs staff to make an urgent payment or diverts funds to a malicious account.
* Data poisoning and model manipulation: Educational AI tools such as chatbots and admissions-screening systems can be manipulated by feeding them malicious training data, leading to biased decisions, leaked personal data, or unfair scoring.
* Data privacy and “shadow AI”: Misuse of free, unvetted AI tools (“shadow AI”) by students and faculty can result in sensitive research and student data being used to train public AI models.
Risks specific to schools and higher education
* High-value target data: The increased use of online tools creates a risk of unauthorized access to student records and minors' personal data. Because universities handle important research, they are targets for state actors interested in intellectual property theft; they also face risks from "ghost students", false identities used in financial aid fraud. Compared with private businesses, schools often lack the dedicated IT security staff and financial resources needed to defend against advanced AI-powered threats, leaving them reliant on outdated systems.
* IoT and legacy system vulnerabilities: The rapid expansion of smart campus devices (IoT) and reliance on vulnerability-prone legacy IT systems has created an “open door” for ransomware, with malware attacks against smart devices in education recently increasing by 146%.
* Business interruption: Ransomware attacks can cause significant downtime, disrupt learning, and in extreme cases lead to permanent closure of educational institutions.
Corrective actions and strategies to strengthen cybersecurity
To combat these threats, educational institutions must move from a reactive security posture to a proactive, AI-driven “zero trust” model.
* Introducing AI-driven prevention tools: Use AI to monitor network traffic anomalies in real-time, block malicious phishing attempts, and automatically quarantine infected devices.
* Implement zero-trust security: Assume that users and devices are untrusted by default, and require multi-factor authentication (MFA) on all systems, especially for access to sensitive data.
* Invest in training and awareness: Regularly train staff and students to recognize AI-generated phishing and social engineering attempts.
* Increase vendor scrutiny: Vet the security practices of third-party EdTech providers to ensure they comply with data protection regulations.
* Establish an incident response plan: Create and regularly update procedures to quickly detect, report, and recover from cybersecurity breaches, minimizing disruption to operations.
* Adopt secure-by-design principles: As institutions deploy new AI technologies, they should prioritize transparency and security in their selection and configuration.
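The anomaly monitoring recommended above can be illustrated with a minimal sketch. The example below flags unusual spikes in per-minute login attempts using a simple z-score test; the data, threshold, and function name are illustrative assumptions, not the author's method or a production detector (real deployments use far richer features and models).

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Return indices of time windows whose count deviates more than
    `threshold` standard deviations from the mean (z-score test)."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical per-minute login attempts; the spike at index 5 could
# indicate a credential-stuffing attempt worth quarantining.
logins = [12, 15, 11, 14, 13, 480, 12, 16, 14, 13]
print(flag_anomalies(logins))  # → [5]
```

A flagged window would then feed the institution's incident response procedure, e.g. by temporarily locking the affected accounts pending MFA re-verification.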
The author argues that without major reforms in assessment practices to validate foundational learning and the academic development expected at the tertiary level, the value of degrees will be undermined. Students who rely on AI without engaging in authentic learning may succeed academically yet remain unprepared for the demands of their respective industries. This risks eroding confidence in graduates' abilities, calling their suitability for employment into question, and ultimately damaging the credibility of higher education qualifications.
(The author is a senior faculty member at GCET Jammu)