As AI evolves with more advanced models and tools, cybersecurity is moving from human speed to machine autonomy, with implications for financial professionals as well.
Juan Matthews Rebelo Santos, founder of BNVD, Brazil’s national security vulnerability database, said Claude Mythos Preview is “more capable than most humans at identifying and exploiting vulnerabilities.”
Project Glasswing’s call for “urgent collective action” is aimed at ensuring that advances in AI coding serve defensive purposes before offensive capabilities advance further, Santos explained.
AI is both a cyber threat and a solution
If AI can find and exploit vulnerabilities faster than humans, Santos explains, defenders need the same capabilities before attackers have a chance to obtain them. “This significantly changes the traditional timeline for vulnerability discovery and patching.”
Jamie Bykov Brett, founder of independent consulting firm Bykov Brett, says Mythos’ abilities are real, but the framing is backwards. “Everyone is focused on what Mythos can find. The real problem hasn’t changed: organizations already can’t patch [the vulnerabilities] they know about.”
Mythos is essentially a more powerful microscope for examining a system’s weak points, but that doesn’t make those problems any easier for companies to fix, he says. “The bottleneck has always been the immune system, not the diagnosis,” Bykov Brett continues. “If this effort is not accompanied by a similar investment in remediation infrastructure, it will be a very expensive way to document how at risk we are.”
AI could have an even bigger impact on accounting teams
Santos said Project Glasswing highlights a deeper problem with AI when it comes to cybersecurity: “Cybersecurity is entering a phase where it is no longer limited by human speed. AI systems can analyze large codebases, identify weaknesses, and even help build exploits at a pace that compresses what used to take weeks into hours, [creating] structural imbalances if access is not carefully controlled.”
Until now, cyber threats have centered on tactics such as phishing. AI tools like Mythos could dramatically shift that focus by allowing criminals to find and exploit vulnerabilities at scale.
Santos said human-led attacks are already being replaced by AI-assisted ones, and in some cases attacks are carried out by AI entirely. Reconnaissance and exploitation happen faster and more efficiently, and social engineering becomes more convincing.
In particular, the acceleration of the reconnaissance phase can have far-reaching effects. Large-scale attacks, like last year’s attack on M&S, typically require weeks or months of reconnaissance to understand what the attackers can exploit. Now, with the right AI tools, this can be accomplished in a matter of hours, allowing attackers to infiltrate and disrupt multiple organizations in the same amount of time.
Misuse of AI tools can also put your business at risk
Shwetha Babu Prasad, an independent information security expert, said while criminals are increasingly using AI in finance and accounting, the use of AI tools within organizations can create vulnerabilities if not implemented with security in mind.
“In accounting workflows where AI is used for document analysis and reporting, even small context leaks can surface client data across sessions and outputs,” Prasad explains. “This moves the risk from traditional breaches to more subtle and systematic exposures.”
Principles of good cyber hygiene still apply
Avoiding these risks requires little more than what businesses should already be doing to protect themselves from cyberattacks, but it means focusing on those fundamentals more intently. Bykov Brett says the most valuable thing accountants can do is “reframe cyber security as a business continuity risk rather than an IT issue,” one that boards should take seriously.
“Most small business clients understand cash flow risk and insurance, but they treat successful AI-powered fraud as something that happens to someone else. Accountants, however, are one of the few trusted advisors in a position to have that conversation,” says Bykov Brett. “The practical advice is simple: instructions that involve changes to money or access should be verified through another, pre-agreed channel. Always. This used to be good practice; with AI, it is no longer negotiable.”
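The verification rule Bykov Brett describes can be expressed as a simple policy check. The sketch below is purely illustrative: the action names, the `Request` structure, and the callback flag are assumptions, not part of any real firm’s controls, but it shows the core idea that a sensitive request is held until it has been confirmed on a separate, pre-agreed channel.

```python
# Illustrative sketch of "verify money/access changes out of band".
# Action names, fields, and thresholds here are hypothetical examples.

from dataclasses import dataclass

# Request types that change money or access and therefore always
# require out-of-band confirmation, regardless of who asked.
SENSITIVE_ACTIONS = {"change_bank_details", "new_payee", "grant_admin_access"}

@dataclass
class Request:
    action: str
    requester: str
    verified_out_of_band: bool  # confirmed via a pre-agreed channel?

def may_proceed(req: Request) -> bool:
    """Allow a request only if it is non-sensitive, or has been
    confirmed through a separate, pre-agreed channel (e.g. a phone
    number already on file, never one supplied in the request itself)."""
    if req.action not in SENSITIVE_ACTIONS:
        return True
    return req.verified_out_of_band

# A bank-detail change arriving by email is held until the callback happens.
pending = Request("change_bank_details", "supplier@example.com", False)
print(may_proceed(pending))  # False
```

The key design choice is that the confirmation channel is agreed in advance, so an attacker who controls the original message (or an AI-generated voice on it) cannot also supply the verification route.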
When it comes to protecting systems from AI-assisted attacks, regularly patching systems internally and across the supply chain, staying alert to emerging threats, and training staff regularly remain the essential steps.
Stay vigilant and skeptical when using AI
Accountants can also take steps to use AI tools more safely. Prasad said companies should “use AI solutions with defined data governance and auditability, avoid inputting customer-identifiable or sensitive financial data into public AI models, understand data flows and retention, including whether inputs are stored or used for training, and align the use of AI with existing financial management, compliance, and risk frameworks.”
“The goal is not to delay adoption, but to ensure that it happens in a controlled and auditable manner,” Prasad said.
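One concrete way to follow Prasad’s advice about not feeding customer-identifiable data into public AI models is to redact obvious identifiers before a prompt leaves the organization. The sketch below is a minimal example of that idea; the regex patterns are deliberately simple illustrations, not a complete or authoritative PII filter, and a real deployment would pair this with review and logging.

```python
# Minimal sketch: strip identifier-like substrings from text before
# sending it to an external AI model. Patterns are illustrative only.

import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[CARD_OR_ACCOUNT]"),   # long digit runs
    (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"), "[IBAN]"),    # IBAN-like strings
]

def redact(prompt: str) -> str:
    """Replace identifier-like substrings with labels so the text can
    be sent to an external model without exposing client data."""
    for pattern, label in REDACTIONS:
        prompt = pattern.sub(label, prompt)
    return prompt

print(redact("Refund client jane.doe@example.com, card 4111 1111 1111 1111"))
# "Refund client [EMAIL], card [CARD_OR_ACCOUNT]"
```

Keeping the redaction step in one auditable function also supports Prasad’s broader point about data flows: it gives the organization a single place to see, and log, exactly what leaves its boundary.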
Over-reliance on systems is a major risk factor that companies need to be aware of, says Bykov Brett, and this is especially true of AI tools. “People no longer scrutinize what AI produces. That creates a new attack surface that didn’t exist before: if the AI tool or its input is compromised, every decision that comes from it is compromised. We’re building dependencies faster than we’re building validation habits.”
