Shadow AI is widespread and most commonly used by executives

Applications of AI



Dive Brief:

  • More than 80% of employees, including nearly 90% of security professionals, use unapproved AI tools at work, according to a new report by cyber risk monitoring vendor UpGuard.
  • This unauthorized use of AI, which can create security vulnerabilities, is not just widespread but routine: half of employees said they regularly use unauthorized AI tools, and fewer than 20% said they use only AI tools approved by their company.
  • Security leaders were more likely than the average employee to report using unauthorized tools, and much more likely to say they use them regularly, according to the report.

Dive Insight:

The use of unauthorized AI platforms, known as shadow AI, is a significant issue facing companies across sectors, according to the Nov. 10 report from UpGuard.

In a surprising finding, UpGuard reported that nearly a quarter of employees consider AI tools their “most trusted source of information,” roughly on par with managers and ahead of colleagues and search engines. Employees in manufacturing, finance, and healthcare reported the highest levels of trust in AI tools.

That trust has consequences. “Employees who view AI tools as their most trusted source of information are much more likely to use shadow AI tools as part of their regular workflow,” UpGuard said.

Companies across a wide range of industries are grappling with shadow AI, with consistently high percentages of employees reporting regular or occasional unauthorized AI use in sectors such as finance, IT, manufacturing, and healthcare. Overall shadow AI usage was highest among mid-level managers and lower-level employees, while regular usage was highest among executives.

Every department uses shadow AI heavily, the report found, but marketing and sales teams reported more extensive use than operations and finance personnel.

For security teams looking to reduce the prevalence of shadow AI, one of UpGuard’s findings is particularly noteworthy: employees use unapproved tools because they believe they know enough to manage the risks themselves.

“We found a positive correlation between users who report understanding AI security requirements and those who report regularly using unapproved AI tools,” UpGuard said. “This data shows that as employees become more knowledgeable about AI risks, they become more confident in making decisions about those risks, even at the expense of following company policy.”

This correlation suggests that security awareness training alone is not enough to protect against the threat, according to the report. “Programs like this require new approaches to be successful.”

In fact, fewer than half of workers said they knew and understood their company’s policies on AI use. Meanwhile, 70% said they were aware of employees at their company inappropriately sharing sensitive data with AI tools, a proportion that was even higher among security leaders, according to the report.

UpGuard’s report is based on two 2024 surveys of 1,500 security leaders and rank-and-file employees in the United States, United Kingdom, Canada, Australia, New Zealand, Singapore, and India.
