ISACA finds that two-thirds of organizations fail to address AI risks

Despite the rapid increase in the use of AI in the workplace, only one-third of organizations are adequately addressing the security, privacy, and ethical risks of these technologies, according to a new study from ISACA.

A survey of 3,270 digital trust professionals found that only 34% believe their organizations pay sufficient attention to ethical standards for AI, and less than a third (32%) say their organizations adequately address AI deployment concerns such as data privacy and bias.

This is despite 60% of respondents saying that employees at their organization use generative AI tools in their work, and 70% saying that staff use some form of AI.

Furthermore, according to ISACA, 42% of organizations now formally allow the use of generative AI in the workplace, up from 28% six months ago.

Respondents said the three most common uses of AI today are improving productivity (35%), automating repetitive tasks (33%), and creating written content (33%).

Lack of AI knowledge and training

The survey, published on May 7, revealed a lack of AI knowledge among digital trust professionals, with only a quarter (25%) declaring themselves extremely or very familiar with AI.

When it comes to AI, almost half (46%) classify themselves as beginners.

Digital trust professionals overwhelmingly recognize the need to improve their AI knowledge, with 85% acknowledging that they will need to boost their skills in this area within two years to advance in or retain their roles.

However, most organizations are not taking steps to address this lack of AI knowledge among IT professionals and the general workforce. Two in five (40%) do not offer any AI training at all, and 32% of respondents said the training provided is limited to staff in technology-related roles.

Additionally, only 15% of organizations have a formal, comprehensive policy governing the use of AI technology.

Speaking to Infosecurity, Rob Clyde, former ISACA board chair and Cybral director, said this was directly related to a lack of AI expertise and training.

“Cybersecurity governance experts are the people who create the policies. If they are not very familiar with AI, they may feel uncomfortable creating AI policy,” he noted.

Clyde advised organizations to leverage available AI frameworks, such as the National Institute of Standards and Technology's (NIST) AI Risk Management Framework, to help build their AI governance policies.

In the meantime, Clyde added, organizations should put at least some clear rules in place around the use of AI, such as not inputting sensitive information into publicly available large language models (LLMs).

“We don't have long to figure this out. The time is now,” he warned.

ISACA also revealed that it has released three new online AI training courses, covering areas including auditing and managing these technologies.

How AI will impact cybersecurity jobs

The IT professionals surveyed also emphasized that they expect AI to have a significant impact on their work in general. Almost half (45%) believe that many jobs will be lost to AI in the next five years, and 80% believe that many jobs will change as a result of these technologies.

However, 78% believe AI will have a neutral or positive impact on their career.

Clyde told Infosecurity that he expects AI to essentially replace certain cybersecurity roles over time. These include SOC analysts, as AI is far better at pattern recognition than humans, and roles centered on creating policies and reports, where human involvement will be significantly reduced.

However, Clyde agreed with the majority of respondents that AI will have a net positive impact on cybersecurity jobs, creating many new roles related to the safe and secure use of AI in the workplace.

These include, for example, experts who vet AI models to ensure they are free of bias, or who ensure that AI-based disinformation is not infiltrating the environment.

“If you think about it, we have a whole new opportunity,” Clyde said.

Tackling AI-based threats

Respondents also expressed significant concern about malicious actors using AI tools to target their organizations.

More than four in five (81%) highlighted misinformation/disinformation as the biggest threat. Alarmingly, only 20% of IT professionals said they are confident in their own ability to detect AI-powered misinformation, and just 23% said they are confident in their company's ability to do so.

Furthermore, 60% said they were extremely or very concerned that generative AI could be exploited by malicious actors, for example to create more believable phishing messages.

Despite this, only 35% believe AI risks should be addressed as an immediate priority for their organization.


