“We observed prominent cybercrime threat actors collaborating to plan large-scale vulnerability exploitation operations,” GTIG researchers said in a new report on the abuse of AI by malicious actors. “Analysis of exploits associated with this campaign identified a zero-day vulnerability, exploited via a Python script, that allows users to bypass two-factor authentication (2FA) in a popular open-source web-based systems management tool.”
GTIG did not name the affected tools, but the team said its disclosure of the vulnerability to the vendors may have prevented large-scale exploitation. Such incidents are likely to become more common, however, as AI models’ reasoning capabilities advance to the point where they can find high-level logic flaws, not just basic memory-corruption or improper-input-sanitization bugs.
That was the case with the Python 2FA bypass exploit: it required valid credentials to use, but the underlying flaw stemmed from the tool’s developer hard-coding an invalid trust assumption into the authentication logic.
“Although frontier LLMs struggle to manipulate complex enterprise authorization logic, they have become better at contextual inference and can effectively read developer intent to correlate inconsistencies between 2FA enforcement logic and its hard-coded exceptions,” GTIG researchers concluded. “This capability allows the model to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective.”
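For illustration only, a minimal Python sketch of this class of flaw might look like the following; the header name, token, and function are hypothetical and are not drawn from the undisclosed tool or the actual exploit:

```python
import hmac

# Hypothetical constant: a hard-coded "trusted client" marker left in by the
# developer. Anything presenting it is assumed to be already authenticated.
LEGACY_AUTOMATION_TOKEN = "internal-batch-job"


def verify_second_factor(headers: dict, submitted_otp: str, expected_otp: str) -> bool:
    """Return True if the request passes the second authentication factor."""
    # The invalid trust assumption: requests claiming to come from a legacy
    # automation client are exempted from OTP verification. To a scanner the
    # logic looks functionally correct; an attacker with stolen credentials
    # only needs to add one header to skip 2FA entirely.
    if headers.get("X-Automation-Client") == LEGACY_AUTOMATION_TOKEN:
        return True

    # The intended path: constant-time comparison of the one-time code.
    return hmac.compare_digest(submitted_otp, expected_otp)


if __name__ == "__main__":
    # Wrong OTP, but the hard-coded header bypasses the check entirely.
    print(verify_second_factor({"X-Automation-Client": "internal-batch-job"},
                               "000000", "483921"))  # True
    # The same wrong OTP without the header is rejected as expected.
    print(verify_second_factor({}, "000000", "483921"))  # False
```

In a sketch like this, every code path works as written, which is why signature-based scanners see nothing wrong; the bug only emerges when the hard-coded exception is weighed against the developer’s evident intent to enforce 2FA everywhere.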
