The recent wave of high-profile ransomware attacks targeting brands like M&S has rekindled fears that AI is driving a surge in cybercrime. AI is undoubtedly reshaping the threat landscape, enabling more persuasive phishing emails and automated attack workflows, but its role in ransomware remains largely exaggerated.
The reality is that AI is evolving existing threats, not reinventing them. Most ransomware operators rely on simple, established techniques that deliver speed, scale and profitability. As long as phishing, insider threats, and ransomware continue to pay off, there is little incentive for bad actors to adopt complex AI tools.
Understanding how ransomware groups actually operate is key to building better defenses. Breach and Attack Simulation (BAS) tools are essential for detecting and closing gaps before attackers gain an advantage.
What is driving the surge in ransomware?
The 2025 data breach investigations paint a clear picture. According to the research, ransomware attacks doubled between 2024 and 2025. This had little to do with AI innovation and far more to do with deep-rooted economic, operational and structural changes within the cybercrime ecosystem.
At the heart of this growth is the rising popularity of the Ransomware-as-a-Service (RaaS) business model. Groups like DragonForce and RansomHub sell off-the-shelf ransomware toolkits to affiliates in exchange for a cut of the profits, allowing even less-skilled attackers to run destructive campaigns.
The most vulnerable point remains the people behind the systems. Take the recent M&S breach: the incident was not caused by advanced technology but by social engineering targeting a third-party supplier, a reminder that cybercriminals still exploit the weakest links in the chain. So while AI is a growing area of concern, today's ransomware groups are sticking with what works.
Ransomware defense
Building effective ransomware defenses means recognizing where traditional approaches fall short. Penetration testing and red teaming are important for detecting complex threats, such as advanced persistent threats (APTs) and insider compromises. However, ransomware operators typically do not depend on stealth or novel tactics; they exploit scale, predictability and speed.
In many cases, breaches result from common, preventable issues such as poor credential hygiene and misconfigured systems. If assessments are made only once or twice a year, new gaps can go unnoticed for months, giving attackers ample opportunity. To keep up, organizations need to validate their defenses faster and more continuously.
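As a toy illustration of the kind of continuous check that catches credential-hygiene drift between annual assessments, the sketch below flags accounts whose passwords have not been rotated within a policy window. The account names, data, and 90-day threshold are hypothetical, not any vendor's API:

```python
from datetime import date, timedelta

# Hypothetical policy: passwords older than 90 days count as stale.
MAX_PASSWORD_AGE = timedelta(days=90)

def stale_accounts(accounts, today):
    """Return account names whose last password change exceeds the policy window."""
    return [
        name
        for name, last_change in accounts.items()
        if today - last_change > MAX_PASSWORD_AGE
    ]

# Illustrative inventory: a service account last rotated in January
# would be flagged on a July run; a recently rotated user would not.
inventory = {
    "svc-backup": date(2025, 1, 10),
    "jsmith": date(2025, 6, 1),
}
print(stale_accounts(inventory, today=date(2025, 7, 1)))  # ['svc-backup']
```

Run daily or weekly, even a trivial check like this surfaces drift months earlier than a point-in-time audit would.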
BAS for ransomware defense
Breach and Attack Simulation (BAS) addresses this blind spot by enabling frequent simulations that mimic real attacker tactics in a controlled, repeatable way. These build resilience, not just assurance. BAS is not designed to replace human-driven exercises like red teaming; it complements them by running regularly, providing timely insights between manual assessments and helping teams maintain a high state of readiness.
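The core BAS loop can be sketched in a few lines: execute a benign emulation of a known technique, then verify the defensive layer noticed. The technique names and the detection stub below are hypothetical placeholders, not a real BAS product's API:

```python
# Toy BAS loop: run harmless emulations of attacker techniques and
# report which ones executed without triggering a detection (the gaps).

def simulate(technique):
    """Stand-in for safely executing a benign emulation of a technique."""
    return {"technique": technique, "executed": True}

def detected(result, detection_rules):
    """Stand-in for querying the SIEM/EDR for an alert on this simulation."""
    return result["technique"] in detection_rules

def run_campaign(techniques, detection_rules):
    """Return the techniques that ran undetected, i.e. the coverage gaps."""
    gaps = []
    for technique in techniques:
        result = simulate(technique)
        if not detected(result, detection_rules):
            gaps.append(technique)
    return gaps

# Illustrative run: two common ransomware precursors, one undetected.
playbook = ["credential_dumping", "lateral_movement_smb"]
rules = {"credential_dumping"}
print(run_campaign(playbook, rules))  # ['lateral_movement_smb']
```

In a real platform, the simulation and detection steps are drawn from curated technique libraries and the loop runs on a schedule, so coverage gaps surface within days rather than at the next annual test.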
Tuning and prioritization are essential to getting the most out of BAS. A well-configured platform helps teams focus on what matters most, reducing noise and enabling faster remediation of impactful findings.
Most ransomware actors follow a well-worn playbook, hitting corporate networks frequently rather than with sophistication. Effective ransomware prevention is therefore not about deploying cutting-edge technology at every turn, but about getting the basics consistently right. That means having a robust backup and recovery process, but also training staff, maintaining visibility, and monitoring for common ransomware techniques. True resilience comes from anticipating attacks, not merely responding to them.
AI Gold Rush
The focus is on how cybercriminals weaponize AI, but many organizations are missing a more pressing risk: the vulnerabilities introduced through their own employees' use of AI tools.
Shadow AI, the unsanctioned use of ChatGPT and similar tools, bypasses security protocols and risks leaking sensitive company data. Nearly 40% of IT workers admit to secretly using unauthorized generative AI tools.
Unregulated AI adoption, poor data governance, and poorly understood AI services expand the attack surface and increase the likelihood of exposure. The result is a lack of visibility into how internal AI tools handle data, which complicates incident response.
To manage this risk effectively, organizations must apply the same security scrutiny to internal AI as to any other new technology. That includes governance frameworks, clear visibility into data flows, and regular testing of how AI tools are used across the business. Staff also need clear cybersecurity training on the risks of shadow AI.
Focus on the basics
The narrative of AI-powered ransomware can distract from the real risks: internal failures and old, reliable methods of attack. For security leaders, the response must be clear: the real threat is the one that is already working for attackers.
Security leaders must resist the temptation to chase hypothetical threats. That means strengthening the basics, simulating attacks continuously, and grounding defenses in real-world tactics. In the fight against ransomware, AI is not the biggest danger; complacency and misplaced priorities, compounded by unchecked shadow AI use, are.
Ben Lister is Head of Threat Research at NetSPI
