News Brief: Lagging Governance leads to a surge in AI security threats

AI News


AI is storming the world, with companies of all shapes and sizes hoping to share their actions.

consulting company McKinsey & Co. According to the report, 78% of organizations that had adopted AI from 55% in the middle of 2025 to the beginning of 2025. Furthermore, 92% of companies say they will increase their AI spending over the next three years. Participants in the Lenovo study said the organization allocated nearly 20% of its technology budget to AI in 2025.

The security industry is not someone who is used to the benefits of AI. It helped teams detect threats and vulnerabilities, automate time-consuming manual tasks, speed up incident response times, and reduce false positives and fatigue.

However, security teams also know that unsupervised investments are risky. Companies need to train employees on how to properly use AI, establish policies that outline acceptable and safe use, and employ controls and technology to ensure AI deployment. However, consulting firm Accenture found that only 22% of organizations implement clear AI policies and training.

Let's take a look at some of the latest AI news articles that will enhance how important AI governance and security are.

Focusing on AI security in corporate budgets

Recent reports from KPMG and Thales highlight the growing concerns among companies regarding generated AI security. In KPMG's second quarter, 2025 report, 67% of business leaders said they plan to allocate cyber and data security protection budgets for AI models, while 52% said they prioritize risk and compliance. Concerns about AI data privacy jumped significantly from 43% in the fourth quarter of 2024 to 69% in the second quarter of 2025.

Thales' study revealed rapid ecosystem transformation (69%), data integrity (64%), and trust (57%) as AI-related risks. AI security was ranked as the second highest security expense overall, but only 10% of organizations that listed it as a major security cost suggest a potential inconsistency between concern and actual spending priorities.

Read Eric Geller's complete story about cybersecurity diving.

The first malware to try to get around the discovered AI security tools

Checkpoint researchers have identified the first known malware sample designed to circumvent AI-powered security tools through rapid injection. This rudimentary prototype, called “Skynet,” contains hard coding instructions that encourage the AI ​​tool to ignore malicious code and respond “No malware detected.”

While Check Point's large language model and GPT-4.1 have detected Skynet, security experts see it as the beginning of an inevitable trend that malware authors increasingly target AI vulnerabilities. This finding highlights the key challenges of AI security tools and emphasizes the importance of planning a detailed security approach rather than relying solely on AI-based detection systems that attackers may manipulate.

Read the full story of Jaivi Jayan in Dark Reading.

The growing challenges of nonhuman identity

Organizations are struggling to manage the rapidly growing landscape of non-human identity (NHIS), including service accounts, APIs and AI agents. A typical company is developing 50-1 today because it has 10 NHIs for all users today for all users in 2020, and 40% of these identities have no clear ownership.

AI agents complicate the problem by blurring the line between human and machine identities by acting on behalf of the user. Also, 72% of companies felt confident in preventing human consent attacks, but only 57% said the same about NHI-based threats.

Read the complete story of Robert Lemos about Dark Reading.

AI-generated false information in the Israeli-Iran conflict

The recent conflict between Israel, Iran and the US has been accompanied by a surge in misinformation that has been generated by AI. For example, it includes those allegedly showing US B2 bombers collapsed in Iran after being attacked by an Iranian nuclear facility on June 22, including fake AI-generated images distributed on social media.

Similarly, video generated by AI after missile attacks on Israeli cities in Iran misprinted the destruction of Tel Aviv, ISREAL. Chirag Shah, professor of information and computer science at the University of Washington, warned that as AI technology advances, deepfake detection becomes increasingly difficult.

Read the full story of Esther Shittu from Searchenterpriseai.

Learn more about managing AI security

Editor's Note: The editors used AI tools to help generate this news brief. Our expert editors should always review and edit content before publishing.

Sharon Shea is the executive editor of Informa TechTarget's SearchSecurity site.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *