WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks

SlashNext has published an investigative report detailing WormGPT, a ChatGPT-style module created by cybercriminals with the express intention of leveraging generative AI for malicious purposes.

The SlashNext team worked with Daniel Kelley, a reformed black hat hacker who studies the latest threats and tactics employed by cybercriminals. Kelley and the SlashNext team took a closer look at cybercrime forums and found malicious actors running discussion threads in which they:

  • Shared tips with each other on how ChatGPT can be leveraged to improve emails used in phishing or BEC attacks.
  • Facilitated "jailbreaking" of ChatGPT-like interfaces: specialized prompts and inputs designed to manipulate these interfaces into disclosing sensitive information, generating inappropriate content, or producing harmful code.
  • Promoted custom modules similar to ChatGPT, presented as black-hat alternatives without ethical boundaries or restrictions.

These findings demonstrate that malicious actors are not only manipulating generative AI platforms like ChatGPT for malicious purposes, but are also building outright malicious tools on the same technology, designed specifically to conduct fraudulent attacks. This has far-reaching implications for the security community's understanding of how such platforms are being created.
