2023 was the year of AI hype. 2024 was the year of AI experimentation. 2025 was the year of the AI hype correction. So what will happen in 2026? Will the bubble burst or will it deflate a bit? Will the ROI of AI be realized?
In the field of cybersecurity, one of the big questions is how adversaries will use AI in attacks. It is well known that AI enables threat actors to launch larger and more realistic phishing campaigns than ever before, create deepfakes that impersonate legitimate employees, and generate malware variants that evade detection. AI systems also have vulnerabilities of their own that attackers can exploit, such as prompt injection attacks.
Here is what some experts predict for offensive AI in 2026:
- “Deploying agent-based AI will cause public breaches and lead to employee terminations,” said Paddy Harrington, Forrester analyst.
- “Offensive autonomous and agentic AI is emerging as a mainstream threat, allowing attackers to unleash fully automated phishing, lateral movement, and exploit chain engines that require little or no human operator intervention,” said Marcus Sachs, senior vice president and chief engineer at the Center for Internet Security (CIS).
- “As attackers continue to use AI and move toward agent-based attacks, the prevalence of these attacks will continue to grow,” said John Grady, analyst at Omdia, a division of Informa TechTarget.
- “AI continues to dominate news headlines and the security space,” said Sean Atkinson, CISO at CIS.
Atkinson's predictions are already coming true just nine days into the year, as this week's headlines demonstrate.
Moody’s 2026 Outlook: AI Threats and Regulatory Challenges
Moody's 2026 Cyber Outlook report warned that AI-powered cyberattacks, including adaptive malware and autonomous threats, are on the rise, and that companies are increasingly deploying AI without adequate safeguards.
AI is already enabling more personalized phishing and deepfake attacks, and future risks include model poisoning and faster AI-assisted hacking. Moody's cautioned that, while AI-based defenses are essential, AI itself introduces new risks, such as unpredictable behavior, and requires strong governance.
The report also highlighted the contrasting regulatory approaches of the EU, the US and Asia-Pacific countries. The Trump administration has scaled back or delayed regulatory efforts, while the EU pursues cooperative frameworks such as the Network and Information Security Directive. Although 2026 could bring greater regional harmonization, Moody's forecasts that global coordination will remain difficult due to conflicting domestic priorities.
Read Eric Geller's full article on Cybersecurity Dive.
AI-powered cyberattacks force CIOs to strengthen security measures
As AI accelerates innovation, it also poses significant cyber risks. According to a study by cybersecurity vendor Trellix, nearly 90% of CISOs see AI-powered attacks as a major threat.
Healthcare systems are particularly vulnerable, with 275 million patient records exposed in 2024 alone. CIOs, such as the CIO at UC San Diego Health, are increasing investments in AI-powered cybersecurity tools while still budgeting for innovation.
AI is also powering more sophisticated phishing attacks, with 40% of business email compromise (BEC) emails now generated by AI. Experts emphasized the importance of basic security practices, such as zero trust, security awareness training and MFA, as key defenses against evolving AI threats.
Read Jen A. Miller's full article on Cybersecurity Dive.
NIST seeks public input on managing AI security risks
NIST is seeking public feedback on its approach to managing security risks associated with AI agents. Through the Center for AI Standards and Innovation (CAISI), NIST aims to gather insights on best practices, methodologies, and case studies to improve the secure development and deployment of AI systems.
The agency highlighted growing concerns that AI agents are insufficiently secure, exposing critical infrastructure to cyberattacks and potentially putting public safety at risk. Public input will help CAISI develop technical guidelines and voluntary security standards to address vulnerabilities, assess risks and strengthen AI security measures. Submissions are due within 60 days.
Read Eric Geller's full article on Cybersecurity Dive.
Identity fraud using AI will rapidly increase in 2026
A report from identity vendor Nametag predicts that the increasing availability of deepfake technology will lead to a surge in AI-powered identity fraud targeting businesses. Fraudsters are increasingly using AI to mimic audio, images, and videos, enabling attacks such as employment fraud and social engineering schemes.
High-profile cases, such as the $25 million fraud involving British company Arup, highlight the risks. IT, HR and finance departments are the main targets, with deepfake impersonation becoming a standard tactic. Nametag warned that agentic AI could amplify the threat and called on organizations to rethink employee identity checks to ensure the right human being is behind every action.
Read the full story by Alexei Alexis on Cybersecurity Dive.
Editor's note: The editor used AI tools to help write this news brief. Our expert editors always review and edit content before publishing.
Sharon Shea is the executive editor of Informa TechTarget's SearchSecurity site.
