Understanding AI’s Role in Cybersecurity Beyond the Hype

AI offers significant potential benefits for cybersecurity, such as early identification of threats in networks and systems, prevention of phishing attacks, and proactive security applications. It is also hoped that these technologies will help close the cyber skills gap by reducing the workload of security teams.

However, the term “AI” has become a buzzword in recent years, with many product vendors and organizations misinterpreting or misrepresenting how the technology is used.

On the first day of the RSA Conference 2023, Cyberize CSO Diana Kelley said it is important to assess the role of these technologies accurately, because misjudging them can create unrealistic expectations with "serious consequences" for cybersecurity.

“The reason we have to separate the hype from reality is because we trust these systems,” she pointed out.

Kelley observed that AI capabilities are generally overrated. For example, developing fully self-driving cars has proven a much harder task than previously anticipated. Concerns about potentially dystopian uses of AI describe scenarios that are "technically possible," but Kelley said that is certainly not where things stand for the foreseeable future.

To illustrate, Kelley recounted asking ChatGPT about cybersecurity books she had authored. It answered with five books, none of which she had written or contributed to.

Nonetheless, AI technology is playing an increasingly important role in cybersecurity. So far, its main use has been analyzing activity data and logs to look for anomalies.

Understanding AI

For organizations to use AI effectively, they need to understand the various forms of AI and how they are used. They can then ask vendors the right questions to determine whether they actually need the "AI" technology on offer.

AI covers a wide range of technologies, and it is important to understand the differences between them. Machine learning, for example, is a subset of AI with very different roles and capabilities from generative AI systems such as ChatGPT.

Kelley said it is important to recognize that the responses of generative AI systems like ChatGPT are probabilistic, based on the data they are trained on. This is why ChatGPT got the question about her books wrong: it judged that she had "most likely" written those books, she explained.

ChatGPT, trained on information from across the internet, makes many mistakes because "there are a lot of problems on the internet."
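Kelley's point about probabilistic generation can be made concrete with a toy sketch (my illustration, vastly simpler than how ChatGPT actually works): a model that has only seen word-sequence counts samples a "likely" continuation, whether or not that continuation is factually right.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for "information from all over the internet".
corpus = "kelley wrote the book kelley wrote the report the book was good".split()

# Count which word follows which: a minimal bigram "language model".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*bigrams[prev].items())
    return rng.choices(words, weights=weights)[0]

rng = random.Random(0)
# After "the", the model may emit "book" or "report" -- both are statistically
# plausible, and neither answer reflects what was actually written where.
print([next_word("the", rng) for _ in range(5)])
```

The model never stores facts, only co-occurrence statistics, which is why a fluent-sounding answer can still be wrong.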

There are also significant differences in how different generative AI models work and what they are used for.

There are unsupervised learning models where algorithms discover patterns and anomalies without human intervention. These models are responsible for discovering “human invisible” patterns. In cybersecurity, this includes finding associations between malware forms and specific threat actors and who are most likely to click on phishing links (such as those who reuse passwords).

However, unsupervised AI models have drawbacks because their outputs are based on probabilities. Problems arise "when being wrong has a very big impact," such as overreacting to a malware detection and shutting down an entire system.
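As a rough illustration of the unsupervised idea (a hypothetical sketch, far simpler than production systems): a detector learns what "normal" looks like from unlabeled data and flags statistical outliers. The threshold embodies the trade-off Kelley describes, since setting it too low triggers costly false alarms.

```python
import math

def zscore_anomalies(counts, threshold=3.0):
    """Flag values whose z-score exceeds the threshold.

    A toy stand-in for unsupervised anomaly detection: no labels are
    provided; "normal" is inferred from the data's own distribution.
    """
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    std = math.sqrt(var) or 1.0  # avoid division by zero on constant data
    return [i for i, c in enumerate(counts) if abs(c - mean) / std > threshold]

# Hypothetical hourly failed-login counts; hour 5 is an obvious spike.
logins = [3, 4, 2, 5, 3, 90, 4, 3, 2, 4, 3, 5]
print(zscore_anomalies(logins))  # -> [5]
```

Because the output is a statistical score rather than ground truth, an automated response (like shutting systems down) should be gated on confidence and impact, not triggered by every flag.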

The goal of supervised learning is to train an AI model on a labeled dataset so that it can accurately predict outcomes. This enables predictions and classifications based on known information, such as whether an email is legitimate or phishing. However, supervised learning requires significant resources and constant updates to keep the model's accuracy high.
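A minimal sketch of the supervised approach (hypothetical training examples; real phishing classifiers use far richer features and much more data): a naive Bayes classifier learns word statistics from emails labeled "phish" or "legit" and classifies new text accordingly.

```python
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs, label in {"phish", "legit"}."""
    word_counts = {"phish": Counter(), "legit": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest (log) posterior, with add-one smoothing."""
    vocab = set(word_counts["phish"]) | set(word_counts["legit"])
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical labeled dataset -- the "known information" supervision provides.
train_data = [
    ("verify your account password urgently", "phish"),
    ("click here to claim your prize", "phish"),
    ("meeting notes attached for review", "legit"),
    ("quarterly report draft for review", "legit"),
]
wc, lc = train(train_data)
print(classify("urgently verify your password", wc, lc))  # -> phish
```

The labels are exactly where the "lots of resources" cost lives: someone must curate them, and they must be refreshed as attacker wording changes, or accuracy decays.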

Kelley also highlighted the many intentional and unintentional cyber risks associated with AI. Intentional risks include using AI to create malware; unintentional risks include bias inherited from the training data.

Therefore, it is critical that organizations understand these issues and ask the right questions of cybersecurity vendors offering AI-based solutions.

These include how the AI is trained, such as "what datasets are used" and "why they are or are not supervised".

Organizations should also ensure that their vendors have built resilience into their systems to prevent both intentional and unintentional problems. For example, is a secure software development lifecycle (SSDLC) in place?

Finally, it is imperative to scrutinize whether the benefits of AI offer a true return on investment. "You are in the best position to evaluate this," Kelley said.

She added that data scientists and platforms such as MLCommons can help with this assessment.


