We have been building machine learning-based cybersecurity systems for many years. In 2005 we started automating analytics in our lab, and those early automation projects have since evolved into full-fledged machine learning frameworks. Ever since, we have been waiting for our adversaries to make the same move, and 18 years later, the wait is over. Malware that uses artificial intelligence has arrived.
Defenders have been able to automate their work for some time, operating hands-free at machine speed for superior detection, analysis, and reaction times. Attackers, by contrast, have had to build and deploy their attacks manually: when an attack was blocked, a human had to modify it by hand, which is far slower.
Automated Malware Campaigns Dramatically Change Malware Gang Reaction Speed
Techniques for running malware campaigns that automatically adapt to bypass new defenses are technically viable today, but so far we have not seen them in the wild. When they do appear, it will be worth noting, because it will mean the adversary’s reaction speed has shifted from human speed to machine speed.
When discussing criminal or abusive uses of AI, deepfakes are probably the first thing that comes to mind. They are already being used in fraud, such as romance scams, because it is easy to create realistic but fictional people. Deepfakes of real people are something else entirely, and although the abuse of deepfaked images, audio, and video has been relatively limited so far, it will undoubtedly get worse.
Large language models (LLMs) such as GPT, LaMDA, and LLaMA can generate content not only in human languages but also in any programming language. We have now seen the first example of self-replicating code that uses a large language model to create endless variations of itself.
How do we know about this? Because the malware’s author, known as SPTH, emailed us.
This individual is what we would call an old-school virus enthusiast, one who prefers writing viruses that break new ground. SPTH has created a long list of malware over the years, including the first DNA-infecting malware, “Mycoplasma Mycoides SPTH-syn1.0”. However, SPTH does not appear to be interested in using malware to cause damage or to steal money; it should be emphasized that this work seems to be done purely as research into what is possible.
SPTH’s self-replicating code is called LLMorpher. SPTH recently wrote: “Here we go a step further and show how to encode the self-replicating code entirely in natural language. We then use GPT from OpenAI, one of the most powerful artificial intelligence systems published. GPT can create different code for the same behavior, which is a new form of metamorphism.”
The code can infect programs written in Python. When run, it searches the computer for .py files and copies its own functionality into them. The functions are not copied directly, however. Instead, the functionality is described in English, and GPT is asked to generate the actual code that gets copied. The result is an infected Python file that goes on to replicate the malware into new files, with every function rewritten by GPT each time. This is unprecedented.
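To make the mechanism concrete, here is a minimal, deliberately defanged sketch of the metamorphic step only, not LLMorpher’s actual code: the behavior exists solely as an English description, and an LLM is asked to turn it into fresh source code on every call. The OpenAI Python SDK, the model name, and the prompt are our illustrative assumptions, and the self-replication logic is intentionally left out.

```python
# A minimal, defanged sketch (not LLMorpher's actual code) of the
# metamorphic step: the behavior exists only as an English description,
# and an LLM turns it into fresh source code on every call.
# Assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY;
# the model name and prompt below are illustrative choices.
from openai import OpenAI

client = OpenAI()

# The "payload" is natural language, not code.
DESCRIPTION = "Write a Python function named greet that prints the word hello."

def generate_variant() -> str:
    """Ask the model to implement the described behavior as code."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any GPT chat model works
        messages=[{"role": "user", "content": DESCRIPTION}],
        temperature=1.0,      # sampling randomness drives the variation
    )
    return response.choices[0].message.content

# Two calls with an identical prompt typically return textually
# different code for the same behavior.
print(generate_variant())
print(generate_variant())
```

Because sampling is stochastic, each generation is a new implementation of the same behavior, which is exactly why fixed signatures or file hashes cannot match every copy.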
Simply creating malware is not illegal; using it to infect or damage other people’s systems is. SPTH does not appear to do that, but a third party could easily misuse SPTH’s research. That remains a real problem, because LLMorpher can easily be downloaded from GitHub.
How can we block malware like LLMorpher and new strains based on it?
LLMorpher does not work without GPT, and since GPT cannot be downloaded, the malware cannot carry its own copy. This means OpenAI, the creator of GPT, can simply block anyone who uses GPT for malicious purposes. However, some similar models (such as LLaMA) are available for download, and we will eventually see them embedded directly in malware.
Detecting malicious behavior is the best defense against malware that uses large language models. Security products that themselves use machine learning are well suited for this.
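As a toy illustration of that idea, the sketch below flags Python files that combine the traits described in this article: talking to an LLM API, reading their own source, and writing code into other .py files. Everything in it, from the indicator strings to the threshold, is our own illustrative assumption; real security products observe behavior at runtime rather than scanning for static strings.

```python
# A toy static-indicator scan approximating behavioral detection;
# real security products observe runtime behavior instead. The
# indicator strings and threshold are illustrative, not vendor rules.
import pathlib

INDICATORS = (
    "openai",    # talks to an LLM API
    "__file__",  # reads its own source
    ".py",       # enumerates Python files
    "write",     # writes generated code out
)

def suspicion_score(path: pathlib.Path) -> int:
    """Count how many indicators appear in a file's source text."""
    try:
        source = path.read_text(errors="ignore").lower()
    except OSError:
        return 0
    return sum(token in source for token in INDICATORS)

for py_file in pathlib.Path(".").rglob("*.py"):
    score = suspicion_score(py_file)
    if score >= 3:  # arbitrary cut-off for this sketch
        print(f"suspicious ({score}/{len(INDICATORS)} indicators): {py_file}")
```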
Only good AI can stop bad AI.