In July 2025, McDonald’s ran into a problem that wasn’t on its menu. The issue involved McHire, an AI-powered platform used to recruit and screen job candidates. The system, developed by Paradox.ai, contained rookie-level security flaws: its administrative backend accepted “123456” as both username and password, and it lacked multi-factor authentication. As a result, the personal data of approximately 64 million applicants was left exposed. Fortunately, the flaw was discovered and responsibly disclosed to the company by security researchers Ian Carroll and Sam Curry.
Incidents like this are increasingly common as organizations rush to deploy AI tools without fully auditing them. According to an IBM report, AI adoption is outpacing AI security and governance: last year, 13% of organizations reported a breach involving an AI model or application, and a further 8% said they didn’t even know whether their systems had been compromised.
Insurance companies have taken notice. Many are tightening policy language, raising premiums, and writing explicit exclusions for certain AI-related incidents, all in an effort to limit exposure to risks that are not yet well understood. According to research from Delinea, 42% of respondents said their current cyber insurance policy includes disclaimers related to AI misuse and liability.
