Tenable opens the playground for generative AI cyber tools

Applications of AI


The security community has been invited to explore the potential of generative artificial intelligence (AI) as a useful tool in research efforts, with Tenable releasing a number of prototype tools it has developed on GitHub.

In an accompanying report titled How Generative AI is Changing Security Research, the company’s research team shares how it has been experimenting with generative AI applications to streamline reverse engineering, code debugging, web application security, and visibility into cloud-based tools.

Tenable, which describes itself as an “exposure management” company, says tools built on OpenAI’s latest generative pre-trained transformer model, GPT-4, could perform on par with “mid-level security researchers”.

But even OpenAI acknowledges that GPT-4 has limitations similar to those of previous GPT models, as Ray Carney, director of security response and zero-day research at Tenable, explained in the report’s foreword. These concern reliability in particular, and the biases a model inherits from how it was trained: incomplete training data, the cognitive biases of the model’s developers, and so on.

In addition, he said, we need to consider the cognitive biases of the people querying the model: asking the right question is the “most important factor” in the chances of receiving a right answer.

According to Carney, this is relevant to security researchers because their role is to provide timely and accurate data to decision makers.

“To pursue this goal, analysts must process and interpret collections of incomplete and ambiguous data in order to make sound and well-founded analytical judgments,” he wrote. “After many failures over the years, the analytical community has developed a set of tools commonly referred to as ‘structured analytic techniques’. These tools help reduce and minimise the risk of being wrong and avoid making uninformed decisions.

“The caveats OpenAI put forward in its GPT-4 announcement strongly advocate the application of these techniques,” continued Carney. “In fact, it is only by applying these types of techniques that we can finally generate sufficiently sophisticated datasets for training future models in the cybersecurity domain.

“These types of techniques also help ensure that researchers are tailoring their prompts to those models, that is, asking the right questions,” he said. “In the meantime, we can continue to explore how to leverage generative AI capabilities for mundane tasks, freeing security researchers and analysts to invest their time in the more difficult questions that require subject matter expertise to unlock important context.”

The first tool is called G-3PO. It is built on the Ghidra reverse engineering framework developed by the NSA, which has been popular with researchers since it was declassified and made widely available in 2019. Ghidra performs many important functions, such as disassembling binaries into assembly language listings, reconstructing control flow graphs, and decompiling assembly listings into something that at least resembles source code.

However, using Ghidra requires meticulous analysis: comparing the decompiled code with the original assembly listing, adding comments, and assigning descriptive names to variables and functions.

This is where G-3PO takes the baton, running the decompiled code through a large language model (LLM) to produce an explanation of what a function does and suggestions for friendlier variable names.
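To make the workflow concrete, here is a minimal sketch of that idea: build a prompt from decompiled output, then apply the model’s suggested variable renames. The function names, the prompt wording, and the JSON reply format are all assumptions for illustration; a real plugin such as G-3PO calls an actual LLM API and hooks into Ghidra’s scripting interface rather than using a canned reply.

```python
import json

def build_prompt(decompiled_c: str) -> str:
    """Assemble a prompt asking the model to explain a decompiled
    function and suggest descriptive variable names as JSON."""
    return (
        "Explain what the following decompiled C function does, then "
        "suggest descriptive names for its variables as a JSON object "
        "mapping old names to new names.\n\n" + decompiled_c
    )

def apply_renames(decompiled_c: str, rename_json: str) -> str:
    """Apply the model's suggested {old: new} renames.
    Naive string replacement; a real tool would rename via the
    decompiler's own API to avoid clobbering substrings."""
    for old, new in json.loads(rename_json).items():
        decompiled_c = decompiled_c.replace(old, new)
    return decompiled_c

# Stand-in for the model's reply; a real plugin would query an LLM here.
fake_reply = '{"iVar1": "byte_count", "pcVar2": "dest_buffer"}'

code = "undefined4 FUN_00101234(char *pcVar2) { int iVar1; /* ... */ }"
renamed = apply_renames(code, fake_reply)
print(renamed)
```

The value of this step is that the renamed listing reads far more like source code, which is exactly the leg-up Tenable describes.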

Tenable says the feature enables engineers “to quickly and at a high level understand what the code does without having to decipher every line of it first”, so they can zero in on the areas of code that concern them most for deeper analysis.

Two other tools, AI for Pwndbg and AI for GEF, are code debugging assistants that work as plugins for two popular GNU Debugger (GDB) extension frameworks, Pwndbg and GEF. These interactive tools take a variety of data points to help researchers explore the debugging context, including registers, stack values, backtraces, assembly, and decompiled code. All a researcher has to do is ask questions like, “What’s going on here?” or “Does this function look vulnerable?”
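The core of such an assistant is folding the debugger state into a single natural-language prompt alongside the researcher’s question. The sketch below shows that assembly step only; the function name and prompt layout are assumptions for illustration, and the real plugins collect this data live from GDB and send the prompt to an LLM.

```python
def summarise_context(registers: dict, backtrace: list, question: str) -> str:
    """Fold debugger state (registers, stack backtrace) into one
    prompt, ending with the researcher's natural-language question."""
    reg_dump = "\n".join(f"{name} = {value:#x}" for name, value in registers.items())
    frames = "\n".join(f"#{i} {fn}" for i, fn in enumerate(backtrace))
    return (f"Registers:\n{reg_dump}\n\n"
            f"Backtrace:\n{frames}\n\n"
            f"Question: {question}")

# Hypothetical crash state for illustration
prompt = summarise_context(
    {"rip": 0x401136, "rsp": 0x7ffdeadbeef0},
    ["vulnerable_copy", "handle_request", "main"],
    "Does this function look vulnerable?",
)
print(prompt)
```

Everything after this point is ordinary LLM question answering, which is why the plugins can feel like a conversational layer over GDB.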

Tenable says these tools help flatten the steep learning curve associated with debugging, essentially turning GDB into a more conversational interface in which researchers can discuss what is going on without first having to decipher the raw debug data. While by no means perfect, the tools have shown promising results in reducing complexity and time, and Tenable hopes they can also serve as an educational resource.

Other tools available include BurpGPT, a Burp Suite extension that allows researchers to analyse HTTP requests and responses using GPT, and EscalateGPT, an AI-powered tool that investigates Identity and Access Management (IAM) policy misconfigurations in cloud environments, a common and often overlooked concern among enterprises, using GPT to identify privilege escalation opportunities and possible mitigations.
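For a sense of what an IAM misconfiguration looks like, the sketch below flags policy statements that allow wildcard actions or resources, one classic over-permissive pattern. This is a simplified hand-written heuristic for illustration only, not how EscalateGPT works internally (EscalateGPT hands the policy to GPT for analysis).

```python
import json

def find_wildcard_statements(policy_json: str) -> list:
    """Return Allow statements whose Action or Resource is '*',
    a basic over-permissive IAM pattern."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    findings = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

# Hypothetical policy: any action allowed on one bucket
policy = '''{"Version": "2012-10-17",
             "Statement": [{"Effect": "Allow",
                            "Action": "*",
                            "Resource": "arn:aws:s3:::example-bucket"}]}'''
flagged = find_wildcard_statements(policy)
print(len(flagged))  # prints 1
```

An LLM-driven tool goes further than a fixed rule like this, reasoning about how combinations of permissions can chain into an escalation path.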

Silver lining

Tenable said that while threat actors were expected to make use of generative AI themselves, and it was probably only a matter of time before credible AI-crafted malware threats materialised, there was still “ample opportunity” for defenders to leverage generative AI.

In fact, it could even give defenders an edge in areas such as log parsing, anomaly detection, triage, and incident response.
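Anomaly detection in logs does not require anything exotic to get started. As a baseline, a sketch like the following flags hourly event counts that sit well above the mean; the data, function name, and threshold are illustrative assumptions, and generative AI tools would layer natural-language explanation and triage on top of signals like this.

```python
from statistics import mean, stdev

def flag_anomalies(counts: list, threshold: float = 2.0) -> list:
    """Return indices of counts more than `threshold` standard
    deviations above the mean: a simple z-score baseline."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and (c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; hour 6 spikes sharply
failed_logins = [3, 5, 4, 6, 2, 4, 98, 5]
print(flag_anomalies(failed_logins))  # prints [6]
```

Once an hour is flagged, an LLM could summarise the matching log lines and suggest next steps, which is where the triage and incident-response gains come in.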

“Although the journey of implementing AI into tools for security research has only just begun, it is clear that the unique capabilities offered by these LLMs will continue to have a significant impact on both attackers and defenders,” the researchers wrote.


