How generative AI is creating new kinds of security threats

AI For Business


Join C-suite executives in San Francisco July 11-12 to hear how leaders are integrating and optimizing their AI investments for success.. learn more


The promised AI revolution has arrived. His ChatGPT for OpenAI set new records for its rapidly growing user base, and the wave of generative AI spread to other platforms, revolutionizing the world of technology.

The threat landscape has also changed dramatically, and some of these risks are beginning to materialize.

Attackers are using AI to improve phishing and fraud. Meta’s 65 billion parameter language model leakedThis will no doubt lead to new and improved phishing attacks. New prompt injection attacks occur daily.

Users often put sensitive business data into AI/ML-based services, and security teams are busy supporting and controlling the use of these services. For example, a Samsung engineer injected proprietary code into his ChatGPT to aid debugging, leaking sensitive data. A Fishbowl survey found that 68% of those who use ChatGPT at work don’t tell their boss about it.

event

transform 2023

Join us July 11-12 in San Francisco. There, he shares how management integrated and optimized his AI investments to drive success and avoid common pitfalls.

Register now

Misuse of AI is becoming an increasing concern for consumers, businesses and even governments. The White House announced new investments in AI research and upcoming public assessments and policies. The AI ​​revolution is progressing rapidly, giving rise to four major classes of problems.

Asymmetry in the relationship between attackers and defenders

Attackers are likely to adopt and design AI earlier than defenders, giving them a distinct advantage. Advanced AI/ML-powered attacks can now be launched at low cost and at incredible scale.

Social engineering attacks primarily benefit from synthetic text, voice, and images. Many of the manually-intensive attacks will be automated, such as phishing attacks that impersonate the IRS or realtors and encourage victims to transfer money.

Attackers can use these technologies to create better malicious code and launch new, more effective attacks at scale. For example, they will be able to quickly generate polymorphic code for malware that evades detection from signature-based systems.

One of the pioneers of AI, Jeffrey Hinton, told The New York Times that he regretted what he helped build because “I don’t know how we can prevent the bad guys from exploiting AI.” It was in the news recently.

Security and AI: A further decline in social trust

We have seen how quickly misinformation spreads thanks to social media. A University of Chicago Pearson Institute/AP-NORC poll found that 91% of adults across politics believe misinformation is a problem, and nearly half say they spread misinformation. I am concerned that With a machine behind it, social trust can be lost cheaper and faster.

Current AI/ML systems based on Large Language Models (LLMs) are inherently knowledgeable and will invent something if they don’t know how to answer. This is often referred to as “hallucinations” and is an unintended consequence of this emerging technology. Lack of accuracy is a big problem when looking for a legitimate answer.

This betrays human trust and leads to serious mistakes with serious consequences. For example, an Australian mayor was accused of defaming OpenAI after ChatGPT mistakenly found him imprisoned on bribery charges when he was actually a whistleblower in the incident. He said he could sue.

new attack

The next decade will see a new generation of attacks against AI/ML systems.

Attackers influence the classifiers that the system uses to bias the model and control the output. It creates a malicious model that is indistinguishable from the real model and can cause real harm depending on how it is used. Prompt injection attacks will also become more common. Just one day after Microsoft introduced Bing Chat, a Stanford University student persuaded the model to reveal internal mandates.

Attackers launch an arms race with hostile ML tools that use a variety of methods to trick AI systems, pollute the data they use, or extract sensitive data from models.

Because much of our software code is generated by AI systems, attackers can take advantage of inherent vulnerabilities that these systems inadvertently introduce to compromise applications at scale.

scale externality

The cost of building and operating large-scale models creates monopolies and barriers to entry that lead to externalities that are not yet predictable.

Ultimately, this will have a negative impact on the public and consumers. While misinformation is rife, large-scale social engineering attacks will impact consumers and leave them with no means to protect themselves.

The federal government’s announcement that governance is coming soon serves as a good start, but there is a lot of groundwork that needs to be laid to win this AI race.

AI and Security: What Happens Next

The nonprofit Future of Life Institute has released an open letter calling for a moratorium on AI innovation. Mr. Elon Musk joined the people concerned and was covered by many mass media, but it is not realistic to just press the pause button. Even Mr. Musk knows this. He seems to have changed his course and set up his own AI company to compete.

It has always been dishonest to argue that innovation should be stifled. The attacker certainly doesn’t honor that request. More innovation and more action are needed to ensure AI is used responsibly and ethically.

Fortunately, this also creates opportunities for innovative approaches to security using AI. Threat hunting and behavioral analysis will improve, but these innovations will take time and require investment. New technologies cause paradigm shifts and things always get worse before they get better. We’ve seen the potential for dystopia when AI is used by the wrong people, but it’s time for security experts to develop a strategy and respond when big problems arise. You have to act.

At this point, we are terribly unprepared for the future of AI.

Aakash Shah is CTO and co-founder of Oak9.

data decision maker

Welcome to the VentureBeat Community!

DataDecisionMakers is a place where experts, including technologists who work with data, can share data-related insights and innovations.

Join DataDecisionMakers for cutting-edge ideas, updates, best practices, and the future of data and data technology.

You might consider contributing your own article too.

Read more about DataDecisionMakers





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *