According to a new report from ISC2, most cybersecurity professionals believe that AI will have a positive impact on their jobs and help alleviate pressures caused by the cyber skills gap.
More than four in five respondents (82%) agreed that AI makes cyber professionals more efficient at their jobs, with 42% strongly agreeing with this statement.
An even higher proportion (88%) expect AI to have a significant impact on their work in the coming years, and 35% say it has already had an impact.
More than half (56%) of those surveyed believe that AI will make some parts of their job obsolete. This is not necessarily a negative, however, given the growing cybersecurity talent shortage, which ISC2 estimates at 4 million people globally.
The cybersecurity tasks most affected by AI and machine learning (ML) are those that are the most time-consuming and repetitive in nature, the report found, such as analyzing user behavior patterns and automating routine work.
“Although AI is unlikely to make significant inroads in bridging the gap between supply and demand, it will play a key role in enabling the 5.5 million people [in the global cybersecurity workforce]. This will likely relieve some of the pressure on employees, allowing them to focus on more complex, high-value tasks,” the report states.
AI increases cyber threats
More than half (54%) of respondents reported a significant increase in cyber threats in the past six months. Of those, 13% directly tied this increase to AI-generated threats, and 41% were unable to establish a definitive link.
Alarmingly, 37% disagree that AI and ML will benefit cybersecurity professionals more than criminals, while only 28% agree and 32% are unsure.
The top AI-based threats cited by respondents centered on misinformation attacks:
- Deepfakes (76%)
- Disinformation campaigns (70%)
- Social engineering (64%)
- Adversarial attacks (47%)
- IP theft (41%)
- Unauthorized access (35%)
Other significant AI-driven concerns revolved around regulation and data practices.
- Lack of regulation (59%)
- Ethical concerns (57%)
- Privacy violation (55%)
- Data poisoning – intentional or accidental (52%)
Four out of five respondents believe there is a clear need for comprehensive and specific regulations governing the safe and ethical use of AI.
How organizations can secure AI tools in the workplace
Only 27% of cybersecurity professionals say their organizations have formal policies in place to govern the safe and ethical use of AI, and just 15% say the same for policies covering how AI technology is secured and deployed.
However, a significant proportion of organizations are currently discussing formal policies on the safe and ethical use of AI (39%) and on how AI technology should be secured and deployed (38%).
Around one in five (18%) have no plans to create a formal policy on AI in the near future.
The report also found that there is no standard approach for managing employee use of generative AI tools across organizations.
More than one in ten (12%) have blocked employee access to all generative AI tools, and 32% have blocked access to some of them.
Almost half (46%) have given their employees access to all generative AI tools or have not yet considered this issue.
Encouragingly, 60% of cybersecurity professionals say they are confident in leading the deployment of AI within their organization, although a quarter (26%) do not feel ready to address AI-driven security issues.
More than four in ten (41%) admit to having little or no experience with AI or ML, while 21% don't know enough about AI to alleviate their concerns.
ISC2 CEO Clare Rosso said the survey results show that cybersecurity professionals recognize both the opportunities and the challenges AI presents, and that they are concerned their organizations lack the expertise and awareness to implement AI safely into their operations.
“This creates a huge opportunity for cybersecurity professionals to take the lead, leverage their expertise in secure technology, and ensure its safe and ethical use,” Rosso commented.