Unless you’ve intentionally avoided social media and the internet entirely, you’ve probably heard of a new AI model called ChatGPT. It is now publicly available for testing. This allows cybersecurity professionals like me to see how it can help the industry.
The widespread availability of machine learning/artificial intelligence (ML/AI) for cybersecurity practitioners is relatively new. One of the most common use cases is endpoint detection and response (EDR), where ML/AI uses behavioral analytics to pinpoint anomalous activity. A baseline of known-good behavior is used to flag outliers, which can then kill processes, lock accounts or trigger alerts.
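To make the idea concrete, here is a minimal sketch of that kind of outlier detection. It assumes scikit-learn and uses made-up per-process features (events per minute, distinct hosts contacted, bytes written); it is an illustration of the general technique, not any specific EDR product's implementation:

```python
# Minimal sketch of behavioral anomaly detection, assuming scikit-learn.
# The features are hypothetical stand-ins for whatever telemetry an EDR agent collects.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of "known good" process behavior (one row per observation:
# events/min, distinct hosts contacted, bytes written).
baseline = np.array([
    [12, 1, 4_000],
    [15, 2, 5_500],
    [11, 1, 3_800],
    [14, 1, 4_200],
])

# New observations to score; the last row is deliberately unusual.
new_events = np.array([
    [13, 1, 4_100],
    [220, 45, 9_000_000],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)
for row, label in zip(new_events, model.predict(new_events)):
    if label == -1:  # -1 means the model flagged this observation as an outlier
        print(f"Anomalous process behavior: {row} -> alert, kill process or lock account")
```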
Whether used to automate tasks or to help build and fine-tune new ideas, ML/AI can certainly help you enhance your security efforts and strengthen a healthy cybersecurity posture. Let’s look at some possibilities.
AI and its potential in cybersecurity
When I started working in the cybersecurity space as a junior analyst, I was responsible for detecting fraud and security events using Splunk, a security information and event management (SIEM) tool. Splunk has its own language, the Search Processing Language (SPL), which can become more complex as queries become more sophisticated.
This context helps you understand the power of ChatGPT. ChatGPT already knows SPL and can turn a junior analyst’s prompt into a query in just seconds, significantly lowering the barrier to entry. Ask ChatGPT to create an alert for a brute-force attack against Active Directory, and it will create the alert and explain the logic behind the query. Since this is a standard SOC-type alert rather than an advanced Splunk search, it’s a perfect guide for a rookie SOC analyst.
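For context, the query behind that kind of alert might look something like the sketch below. This is a hand-written illustration rather than actual ChatGPT output, and it assumes Windows security logs are indexed in Splunk; the index name, the 10-failure threshold and the use of the splunk-sdk (splunklib) client are all placeholders:

```python
# Illustrative sketch: running a brute-force detection search against Splunk
# with the splunk-sdk for Python. Host, credentials, index name and threshold
# are placeholders; adjust them for your environment.
import splunklib.client as client

# Failed-logon events (Windows EventCode 4625) grouped by source and account;
# more than 10 failures in 15 minutes is treated as a possible brute force.
BRUTE_FORCE_SPL = """
search index=wineventlog sourcetype=WinEventLog:Security EventCode=4625 earliest=-15m
| stats count BY src_ip, Account_Name
| where count > 10
"""

service = client.connect(
    host="splunk.example.local", port=8089,
    username="svc_soc", password="REPLACE_ME",
)

# One-shot search: blocks until the search finishes and returns a results stream.
results_stream = service.jobs.oneshot(BRUTE_FORCE_SPL, output_mode="json")
print(results_stream.read().decode("utf-8"))
```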
Another compelling use case for ChatGPT is automating routine tasks for overstretched IT teams. In almost any environment, the number of stale Active Directory accounts can range from dozens to hundreds. These accounts often carry privileged permissions, and while a full privileged access management (PAM) strategy is the recommended remedy, enterprises don’t always prioritize implementing one.
This creates a situation where IT teams resort to the old-fashioned DIY approach of disabling old accounts using scheduled scripts written by system administrators themselves.
Writing those scripts can now be handed off to ChatGPT, which can build the logic to identify and disable accounts that have been inactive for the last 90 days. If junior engineers can create and schedule such a script while also learning how its logic works, ChatGPT helps free up senior engineers and administrators for more advanced work.
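A minimal sketch of what such a script could look like in Python with the ldap3 library follows. The domain controller, bind account, base DN and 90-day threshold are placeholders, and a real version should stay in dry-run mode until the matches have been reviewed and service accounts excluded:

```python
# Sketch: find AD accounts with no logon in the last 90 days and disable them.
# Assumes the ldap3 library; server, bind account and base DN are placeholders.
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, MODIFY_REPLACE

DRY_RUN = True  # flip to False only after reviewing the matches
THRESHOLD = datetime.now(timezone.utc) - timedelta(days=90)

# lastLogonTimestamp is a Windows FILETIME: 100-ns intervals since 1601-01-01 (UTC).
def filetime_to_datetime(filetime: int) -> datetime:
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime / 10)

server = Server("dc01.corp.example.com")
conn = Connection(server, user="CORP\\svc_cleanup", password="REPLACE_ME", auto_bind=True)

# Enabled user accounts only (ACCOUNTDISABLE bit of userAccountControl not set).
conn.search(
    "DC=corp,DC=example,DC=com",
    "(&(objectCategory=person)(objectClass=user)"
    "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))",
    attributes=["lastLogonTimestamp", "userAccountControl"],
)

for entry in conn.entries:
    raw = entry["lastLogonTimestamp"].value
    last_logon = raw if isinstance(raw, datetime) else filetime_to_datetime(int(raw or 0))
    if last_logon < THRESHOLD:
        dn = entry.entry_dn
        if DRY_RUN:
            print(f"Would disable {dn} (last logon {last_logon:%Y-%m-%d})")
        else:
            # Set the ACCOUNTDISABLE bit (0x2) on userAccountControl.
            new_uac = int(entry["userAccountControl"].value) | 0x2
            conn.modify(dn, {"userAccountControl": [(MODIFY_REPLACE, [new_uac])]})
```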
If you’re looking for a more dynamic exercise, ChatGPT can be used in purple teaming, where red and blue teams collaborate to test and improve an organization’s security posture. It can build simple example scripts a penetration tester might use, or debug scripts that aren’t working as expected.
One of the nearly universal MITRE ATT&CK techniques seen in cyber incidents is persistence. For example, a standard persistence tactic that analysts and threat hunters should look for is an attacker adding a script or command to run at startup on a Windows machine. With a simple request, ChatGPT can create a rudimentary but functional script that lets a red teamer add this kind of persistence to a target host (see the sketch below). While the red team uses such tooling to assist with penetration testing, the blue team can study what it looks like in order to build better alerting mechanisms.
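As an illustration of that technique (registry run keys, ATT&CK T1547.001), the sketch below uses Python’s standard winreg module. The value name and payload path are made-up placeholders, and this should only ever be run against lab hosts you own:

```python
# Illustrative persistence sketch for a purple-team lab ONLY (ATT&CK T1547.001:
# Registry Run Keys). Uses Python's built-in winreg module on Windows; the
# value name and payload path are hypothetical placeholders.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
VALUE_NAME = "LabPersistenceDemo"      # name the blue team should learn to spot
PAYLOAD = r"C:\labs\demo\beacon.exe"   # binary to launch at logon

# HKCU\...\Run executes its values at every interactive logon for this user.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, PAYLOAD)

print(f"Added Run key value '{VALUE_NAME}' -> {PAYLOAD}")
```

On the blue-team side, the corresponding detection is an alert on new values appearing under that Run key, for example via Sysmon registry value-set events.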
There are many advantages, but there are also limitations.
Of course, when a situation or research scenario requires analysis, AI is also a very useful aid in facilitating that analysis or surfacing alternative paths. In cybersecurity in particular, whether automating tasks or generating new ideas, AI reduces the effort required to maintain a healthy cybersecurity posture.
However, there are limits to this usefulness, and what I’m talking about here is the complex human cognition, grounded in real-world experience, that decision-making often involves. Unfortunately, AI tools cannot be programmed to function like humans; they can only help you analyze data and generate output based on the facts you feed in. AI has come a long way in a short amount of time, but it still produces false positives that a human must identify.
Still, one of the biggest benefits of AI is that it automates mundane tasks, freeing up humans to focus on more creative or time-consuming work. AI can be used, for example, to write scripts that make cybersecurity engineers and system administrators more efficient. Recently, I used ChatGPT to rewrite a dark web scraping tool I had built, cutting its completion time from days to hours.
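The kind of restructuring that produces that sort of speedup is sketched below: replacing a serial fetch loop with a bounded thread pool. This is a simplified, hypothetical reconstruction (the URLs and the Tor SOCKS proxy settings are placeholders), not the actual tool:

```python
# Hypothetical sketch of rewriting a serial scraper as a concurrent one.
# Assumes requests with SOCKS support (pip install requests[socks]) and a
# local Tor SOCKS proxy; URLs and proxy settings are placeholders.
from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Placeholder list of pages; the real tool would load these from a queue.
URLS = [f"http://example{i}.onion/index.html" for i in range(100)]

def fetch(url: str) -> tuple[str, int]:
    resp = requests.get(url, proxies=TOR_PROXIES, timeout=30)
    return url, len(resp.text)

# Old version: `for url in URLS: fetch(url)`, one page at a time.
# New version: a bounded thread pool fetches many pages concurrently.
with ThreadPoolExecutor(max_workers=16) as pool:
    futures = [pool.submit(fetch, url) for url in URLS]
    for future in as_completed(futures):
        try:
            url, size = future.result()
            print(f"{url}: {size} bytes")
        except requests.RequestException as exc:
            print(f"fetch failed: {exc}")
```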
AI is undoubtedly an important tool that security professionals can use to ease repetitive and mundane tasks, and it can also provide educational assistance to less experienced security professionals.
If there is a drawback to AI informing human decision-making, it is the palpable fear that surfaces whenever we use the word “automation”: isn’t technology evolving to make human work unnecessary? In the security space there is also a more specific concern about the potential for misuse of AI. Unfortunately, the latter concern has already proven true, as attackers use these tools to create more compelling and effective phishing emails.
In terms of decision-making, I think we are still in the early stages of relying on AI to make final calls in everyday practical situations. The human ability to apply subjective judgment is central to the decision-making process, and so far AI lacks the ability to emulate that skill.
So while various versions of ChatGPT have generated quite a bit of buzz since the preview launched last year, like any new technology, the anxiety it has created needs to be addressed. I do not believe AI will eliminate information technology and cybersecurity jobs. On the contrary, AI is an important tool that security practitioners can use to offload repetitive and mundane tasks.
We are witnessing the dawn of AI technology, and even its creators seem to have a limited understanding of its power. We have barely scratched the surface of how ChatGPT and other ML/AI models can improve cybersecurity practices, and I can’t wait to see what innovations come next.
Thomas Aneiro is Senior Director of Technology Advisory Services at Moxfive.
