ChatGPT Security Risks | Spiceworks


  • ChatGPT can be used to create plausible phishing emails and malware, spread misinformation, and compromise data and financial security.
  • Organizations should keep employees from uploading sensitive information to ChatGPT and exercise caution when discussing intellectual property and trade secrets around generative AI tools.
  • Experts recommend timely cybersecurity training and dedicated two-way communication channels that are always available to employees.

Last week, Italy temporarily banned the use of ChatGPT in the country, citing privacy concerns. The concerns of the Garante per la protezione dei dati personali, the Italian data protection authority, stem from an OpenAI breach in March 2023 that exposed email addresses, user conversations, and payment information.

Additionally, the Garante believes that OpenAI has not taken adequate steps to verify the age of users (who are required to be 13 or older) signing up for its AI-based services.

The decision sparked controversy across Europe, with the European Consumer Organisation (BEUC) calling for an investigation into all major chatbots. BEUC Deputy Director Ursula Pachl told Euronews that consumers “don’t realize how manipulative and deceptive it is.”

“They don’t realize that the information they get can be wrong,” she continued. “I think this case with ChatGPT is very important. It’s like a wake-up call for the European Union, because the regulation won’t apply for another four years, and we’ve seen how rapidly this kind of system is developing.”

As such, data privacy regulators in Ireland and the U.K. may follow suit. Meanwhile, French regulators have asked the Garante for details, and Germany’s data protection commissioner told the Handelsblatt newspaper that the country might also ban ChatGPT.

And that’s just the privacy issues. ChatGPT is also being used for malicious purposes.

When asked about the worst possible outcome in an interview with ABC News, OpenAI CEO Sam Altman said one of his particular concerns is that “these models could be used for large-scale disinformation,” adding that as the systems get better at writing computer code, they “could be used for offensive cyberattacks.”

According to a 2023 Check Point report, the AI chatbot has been used to create malware, such as infostealers targeting Microsoft Office documents, PDFs, and images, as well as Python scripts that perform cryptographic operations and can serve as encryption tools. It has also been used to develop dark web marketplaces and promote fraudulent schemes.

However, the efficacy of malware built from ChatGPT-generated code is debatable. For the moment, then, ChatGPT’s cybersecurity risk falls into two intertwined areas: privacy on one side, and disinformation and phishing on the other.

See More: Using ChatGPT: Where to Start

Creating a Phishing Campaign With Company Information Using ChatGPT

With ChatGPT’s success, Bank of America analysts have declared AI to be on the brink of an “iPhone moment.” The bank estimates AI’s economic impact could reach $15.7 trillion by 2030.

Millions of people are using ChatGPT today. Over a dozen organizations have also implemented generative AI techniques in their products and services.

SnapDragon Monitoring CEO Rachel Jones told Spiceworks: “While it offers great benefits to legitimate businesses, in the wrong hands it is a cyberweapon of serious destruction that can wreak havoc on Internet users.”

One of the ways amateurs and even seasoned cybercriminals take advantage of ChatGPT for malicious operations is by crafting phishing scams tailored to an organization’s structure and inner workings.

“ChatGPT users can teach the tool how an organization communicates with customers and generate realistic phishing emails that encourage victims to click on links leading to malicious sites,” Jones added.

“Unlike traditional phishing scams, these fake emails contain far fewer linguistic and cultural errors, which in turn leads to more people falling victim to these threats.”

For a successful phishing campaign, attackers need certain organizational information, such as a dataset of company-generated emails, events employees have attended, and projects they may be working on.

Some of this information is available on the web, so it is important to limit the dissemination of unnecessary information and keep your cards close to your chest.

Julia O’Toole, CEO of MyCena Security Solutions, attests to this. “When a criminal uses ChatGPT, there are no language or cultural barriers. They can tell the application to gather information about your organization, the events you attend, and the companies you work with at breakneck speed,” she told Spiceworks. “They can then prompt ChatGPT to use this information to create credible fraudulent emails.”

Phishing emails can act as carriers for dangerous malware such as ransomware, worms, and Trojan horses. In addition, threat actors can use disinformation tactics to instill a sense of urgency that manipulates targets into clicking email links or downloading malicious files.

“If a target receives an email from an ‘apparent’ bank, CEO, or supplier, there is no clear indication that the email is fake. The tone, context, and stated reason for the bank transfer provide no evidence to suggest that the email is fraudulent. This makes phishing emails generated by ChatGPT very difficult to spot, and very dangerous.”

That’s why it’s important to keep employees from uploading confidential information to ChatGPT and to exercise caution when discussing intellectual property and trade secrets around generative AI tools.

How to neutralize threats from ChatGPT-driven attacks?

1. Call for policy change

In an open letter titled “Pause Giant AI Experiments: An Open Letter,” the nonprofit Future of Life Institute called for a pause on the development of all AI systems more powerful than the recently released GPT-4, the successor to ChatGPT’s GPT-3.5.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects,” the letter said.

“We call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

Protocols, regulations, and ethical scrutiny of AI-related issues are a good idea, but it is doubtful whether governments can intervene to halt the development of the technology, unless, of course, severe consequences occur first.

See More: 5 Tasks ChatGPT Does Best and 5 It Doesn’t

2. Enterprise-level protection against ChatGPT-based threats

Jones and O’Toole opined that every employee who interacts with the Internet should practice good cybersecurity hygiene. There is no way around it: employees must be well trained and familiar with cybersecurity basics.

“As for Internet users, we advise treating all emails requesting personal or financial information with skepticism,” Jones said. “If you receive an email requesting information, call the organization instead. Security-conscious companies will not view this as a nuisance, and it may ultimately save you from significant financial loss.”

O’Toole added: “If you receive an email with a link, never click the link right away. Make a habit of checking its authenticity first. For example, if your bank asks for your personal information, contact the bank directly using the phone number on its official website.”
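Part of that link-checking habit can be automated. The sketch below, using only Python’s standard library and a hypothetical allow-list (`example-bank.com` is a placeholder, not a real institution), tests whether a link’s actual hostname belongs to a trusted domain rather than merely containing a trusted-looking name:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organization actually uses.
TRUSTED_DOMAINS = {"example-bank.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the link's hostname is a trusted domain
    or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A look-alike host fails the check even though it *contains* the bank's name.
print(is_trusted_link("https://mail.example-bank.com/login"))         # True
print(is_trusted_link("https://example-bank.com.attacker.io/login"))  # False
```

The key detail is parsing the hostname instead of searching the URL string: phishing links routinely embed the legitimate brand as a subdomain of an attacker-controlled domain.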

Organizations are responsible for creating dedicated channels and ensuring that two-way communication is always possible. In addition, Jones suggested using other AI-based tools for continuous monitoring.

“Companies should do more to communicate with their customers about the threats ChatGPT poses and take steps to proactively monitor for fake versions of their domains. AI tools can help spot these fake domains and remove them before they cause harm,” said Jones.
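As a minimal illustration of the kind of domain monitoring Jones describes, look-alike (typosquatted) domains can be flagged with a simple string-similarity check. The brand name, candidate list, and threshold below are illustrative assumptions, not a production detector:

```python
from difflib import SequenceMatcher

# Hypothetical brand domain to protect.
BRAND = "spiceworks.com"

def looks_like_typosquat(candidate: str, brand: str = BRAND,
                         threshold: float = 0.85) -> bool:
    """Flag domains suspiciously similar to, but not identical to, the brand."""
    if candidate == brand:
        return False
    return SequenceMatcher(None, candidate, brand).ratio() >= threshold

# Imagine these came from a feed of newly registered domains.
newly_registered = ["spiceworks.com", "spicew0rks.com",
                    "sp1ceworks.com", "weather.com"]
suspects = [d for d in newly_registered if looks_like_typosquat(d)]
print(suspects)  # → ['spicew0rks.com', 'sp1ceworks.com']
```

Real monitoring services combine checks like this with homoglyph tables, WHOIS data, and certificate-transparency logs, but the core idea is the same: near-matches to your domain deserve scrutiny.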

O’Toole, on the other hand, stressed the importance of maintaining perfect password hygiene. “To protect all your online accounts, never use just one root password with variations like JohnSmith1, John$mith1!, Johnsmith2, and so on. If one password is phished, criminals can find its variations and access them all,” O’Toole continued.

“The same threat applies when using a password manager. All your passwords are stored behind one master password, so the risk of losing everything is even higher: a criminal who phishes that one password can open all of your accounts at once.”

“Instead, users should think of their passwords as they would their home, office, or car keys: a different key for each lock. It’s just a matter of using the right key, or password, for each account.”

“The easiest way is to use a tool that generates strong, unique passwords like ‘7D£bShX*#Wbqj-2-CiQS’ or ‘kkQO_5*Qy*h89D@h’ without centralizing them behind a master key or identity. That way, you can generate passwords without the risk of a single point of failure: they can’t be cracked, they can be changed at will, and even if one password is phished through a ChatGPT-generated email, only one online account is affected,” concluded O’Toole.
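Generating the kind of strong, unique, per-account passwords O’Toole recommends is straightforward with Python’s `secrets` module; the alphabet, length, and account names below are illustrative assumptions, not her product’s method:

```python
import secrets
import string

# Character set for generated passwords (letters, digits, some symbols).
ALPHABET = string.ascii_letters + string.digits + "!@#$%*_-"

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically random password from the alphabet above."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per account, so a single phished credential
# compromises only that one account.
passwords = {site: generate_password() for site in ("bank", "email", "forum")}
for site, pw in passwords.items():
    print(site, pw)
```

Unlike `random`, the `secrets` module draws from the operating system’s cryptographically secure randomness source, which is what makes the output suitable for credentials.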

How can an attacker use ChatGPT to compromise an organization’s cybersecurity? Comment below or let us know on LinkedIn.