The continued development of AI poses both challenges and opportunities in the cybersecurity field, a series of experts told EURACTIV, highlighting growing risks but also increasing means of defense.
“AI is one of the key technologies in cybersecurity, as it not only opens the door to new cybercrime modus operandi and new cyber risks, but also to new forms of cooperation between entities,” Luis Búrdalo Rapa, a cyber expert at the Spanish security firm S2 Grupo, told EURACTIV.
To keep up with this rapidly developing technology, cybersecurity companies hire big data and AI experts to understand how the algorithms work, as well as cybersecurity analysts, who need to be trained to prevent attacks against or using AI technology, Búrdalo Rapa explained.
Technical tug of war
While AI has the potential to reduce demand for certain entry-level jobs, AI solutions can also assist human workforces and make processes more efficient.
“So the expert will ‘collaborate’ with [Machine Learning]-supported cybersecurity solutions; in other words, there will be a build-up of ML-supported tools and of professionals skilled in working with them,” Sven Herpig, director of cybersecurity policy at the think tank Stiftung Neue Verantwortung (SNV), told EURACTIV.
Large language models such as ChatGPT and Google Bard make it easier for potential applicants to get to grips with highly complex theoretical subjects.
AI also supports further training and preparation for entry-level jobs, but using and developing AI-based tools requires specific knowledge of, and expertise with, these algorithms.
AI can be part of the solution, for example by creating new defensive capabilities, but it can also be part of the problem by increasing the number and pace of attacks. As such, it is a constant factor in cyber risk.
“We don’t know the balance yet, i.e. whether the attacker will outperform the defender. It is like a tug of war: if the forces of the two sides grow equally, the equilibrium point may not change,” Giuseppe D’Acquisto, senior technical adviser to the Italian Data Protection Authority, told EURACTIV.
In May, EU law enforcement agency Europol released a preliminary report on the potential abuse of generative AI models like ChatGPT, flagging advanced phishing, impersonation, fake news and disinformation campaigns, new social engineering attacks, and malware development.
Balance automation and human review
Machines are already more accurate and consistent than humans at repetitive tasks and mechanical work, such as systematic pattern recognition.
In cybersecurity, such tasks include anomaly detection, detection of malicious behavior patterns, and automation of cybersecurity alert handling.
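The principle behind such anomaly detection — learn what is normal, flag what deviates — can be sketched in a few lines. The snippet below is an illustrative minimal baseline using a z-score over invented login counts; production systems use far richer models, but the idea is the same:

```python
# Minimal sketch of statistical anomaly detection: flag values that
# deviate strongly from the historical average. Data is invented for
# illustration.
from statistics import mean, stdev

def find_anomalies(hourly_logins: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of hours whose login count deviates from the
    average by more than `threshold` standard deviations."""
    mu, sigma = mean(hourly_logins), stdev(hourly_logins)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, n in enumerate(hourly_logins)
            if abs(n - mu) / sigma > threshold]

# A spike at hour 7 (e.g. a credential-stuffing burst) stands out:
counts = [12, 15, 11, 14, 13, 12, 16, 300, 14, 13, 12, 15]
print(find_anomalies(counts))  # -> [7]
```

Automated alert handling then builds on exactly this kind of output: only flagged hours generate alerts, and only alerts reach a human analyst.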
“If your goal is to solve simple, repetitive problems, all the tasks involved in coding for those problems can be easily automated,” said D’Acquisto.
On the other hand, algorithmic decision-making often fails to grasp context or explain the rationale behind a decision. This lack of explainability is one of the key reasons cybersecurity professionals are reluctant to adopt AI-based solutions in areas such as critical infrastructure.
“We use AI to automate low-complexity tasks, such as assessing whether a URL is potentially malicious, so that analysts can focus on more complex work,” added Búrdalo Rapa of S2 Grupo.
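To give a flavour of this kind of low-complexity triage, the sketch below scores a URL with simple lexical heuristics so that only suspicious ones reach an analyst. The features and weights are invented for illustration; real deployments use models trained on large labelled datasets:

```python
# Illustrative heuristic URL triage. The features and weights below are
# invented for illustration, not a production rule set.
from urllib.parse import urlparse

def url_risk_score(url: str) -> int:
    """Crude lexical risk score: higher means more suspicious."""
    host = urlparse(url).hostname or ""
    score = 0
    if "@" in url:                          # credentials embedded in the URL
        score += 2
    if host.replace(".", "").isdigit():     # raw IP address instead of a domain
        score += 2
    if len(url) > 75:                       # unusually long URL
        score += 1
    if host.count("-") >= 2:                # hyphen-heavy hostname
        score += 1
    if sum(c.isdigit() for c in host) > 4:  # digit-heavy hostname
        score += 1
    return score

print(url_risk_score("https://example.com/login"))          # low score
print(url_risk_score("http://192.168.10.20/secure-update")) # higher score
```

An analyst would then only review URLs above some score cut-off, which is the focusing effect described in the quote above.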
Detecting low-value malicious activity, such as common cybercrime, is more likely to be partially automated, while experts will increasingly need the skills to detect higher-value activity, investigate new security vulnerabilities, and assess risk.
“Determining whether a newly discovered vulnerability is relevant or security-critical can be strongly supported by AI technology. New language models also have the potential to generate code snippets and scripts,” Jonas Kernebeck, a data engineer at software company Alpas AI, told EURACTIV.
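The relevance question Kernebeck describes can be partly automated even without machine learning: cross-referencing new advisories against an asset inventory and sorting by severity puts the critical, relevant findings first. The sketch below uses invented data structures and made-up CVE identifiers purely for illustration:

```python
# Illustrative vulnerability triage: keep advisories that affect deployed
# software and meet a severity threshold, most severe first. All data
# below (CVE IDs, packages, scores) is invented for illustration.
from dataclasses import dataclass

@dataclass
class Advisory:
    cve_id: str
    package: str
    cvss: float  # severity score, 0.0-10.0

def triage(advisories: list[Advisory], deployed: set[str],
           min_cvss: float = 7.0) -> list[Advisory]:
    """Filter to advisories affecting deployed packages, sorted by severity."""
    relevant = [a for a in advisories
                if a.package in deployed and a.cvss >= min_cvss]
    return sorted(relevant, key=lambda a: a.cvss, reverse=True)

feed = [
    Advisory("CVE-2024-0001", "openssl", 9.8),
    Advisory("CVE-2024-0002", "left-pad", 5.1),   # below threshold
    Advisory("CVE-2024-0003", "nginx", 7.5),
    Advisory("CVE-2024-0004", "exoticlib", 9.9),  # not deployed here
]
inventory = {"openssl", "nginx", "python3"}
for adv in triage(feed, inventory):
    print(adv.cve_id, adv.cvss)
```

Language-model support would sit on top of a pipeline like this, for example by summarising the advisory text or suggesting a remediation script for each surviving entry.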
Future prospects
Looking to the future of the IT industry, experts believe AI will strengthen and accelerate the dynamics between cybersecurity practitioners and hackers.
“Cybersecurity is always a race against time, with new attack vectors and new defense strategies,” Kernebeck said.
AI can help make IT systems more secure, for example by training employees and running attack simulations, but it could also polarize cybersecurity workers into two classes, depending on their approach to this disruptive technology.
“There will be many more ‘working-class’ IT workers than now, dealing with the repetitive tasks of ‘fixing’ and ‘versioning’, but this is also the beginning of the creation of ‘strategic’, high-income workers who can manage the complexity of these stages,” D’Acquisto said.
[Edited by Luca Bertuzzi/Nathalie Weatherald]




