Microsoft warns of dangerous ‘shadow AI’

Ahead of the upcoming Munich Security Conference, Microsoft has issued a strong warning against the unchecked use of autonomous software assistants powered by artificial intelligence (AI).

In a report released Tuesday, researchers at the software company said AI assistants are already being used for programming by more than 80% of Fortune 500 companies.

However, Microsoft argued that most companies lack clear rules governing AI use, and that its rapid adoption poses untold risks. This lack of oversight, known as “shadow AI,” opens the door to new attack methods, the report added.

Upper management knows nothing about the use of AI

“Shadow AI” refers to the use of AI applications by employees without the knowledge or formal approval of a company’s IT or security department.

Employees independently use AI tools and agents found online, including programs that operate autonomously, to complete tasks faster, without informing anyone in the company hierarchy.

Microsoft’s report warns of the growing gap between innovation and security.

While the use of AI is exploding, fewer than half of companies (47%) have specific security controls in place for the AI they use, and 29% of employees are already using unauthorized AI agents in their work. This creates a blind spot in enterprise security.

Rapid deployment is not safe

According to Microsoft experts, the risks increase if companies don’t take enough time to deploy AI applications.

Rapidly deploying AI agents can bypass security and compliance controls and increase the risk of shadow AI, the report said.
