In 1999, an incredible movie about a dystopia run by intelligent machines captured our imagination (and it remains one of my favorites to this day). Twenty-four years later, the lines between fact and fiction have all but disappeared, and blockbuster hits look wildly different. Enter the Matrix? Are we already in it? Can anyone be sure?
Robot overlords are not (yet) a reality, but modern life is inseparable from artificial intelligence (AI) and machine learning (ML). Whether you’re searching on Google, unlocking your phone with your face, buying “recommended items” online, or avoiding traffic jams with your trusted travel app, advanced technology is at work behind the scenes. AI/ML’s role in personal and professional life has expanded rapidly in recent years, but it wasn’t until November 2022 that it reached a tipping point, with the release of ChatGPT.
Describing the impact of AI chatbots as “Promethean,” Thomas L. Friedman of The New York Times wrote that ChatGPT represents “a departure and an advancement from what was before, because you cannot change one thing, you must change everything.” For better or worse.
AI/ML Benefits Both Sides in Cyberspace, the Fifth Domain
My own AI “wow” moment happened at DEF CON 24 in 2016, when autonomous cyber reasoning systems (CRS) faced off against each other, discovering hidden vulnerabilities in code and deploying patches to fix them without human assistance. It was clear then that AI/ML would fundamentally change the way organizations do cybersecurity. Since then, we have seen groundbreaking innovations that enable teams to analyze massive amounts of data and reduce response times.
Most importantly, AI/ML-powered scalability, speed, and continuous self-learning benefit under-resourced cybersecurity teams. With 3.4 million industry jobs remaining vacant worldwide, many security leaders welcome new ways to fill gaps and extend their efforts. For example, many companies are turning to AI-powered tools to simplify tedious authentication processes: adaptive multi-factor authentication (MFA) and single sign-on methods use behavioral analytics to validate identities based on access, privilege, and risk level, without slowing down users’ work. And as hybrid and multicloud environments grow more complex, teams are relying on AI to automatically manage permissions for thousands (or millions) of identities across their cloud estates.
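To make the idea of risk-based adaptive MFA concrete, here is a minimal sketch of how a system might weigh login signals and decide when to challenge a user. The signals, weights, and thresholds below are hypothetical illustrations, not any vendor’s actual algorithm:

```python
# Hypothetical sketch of risk-based step-up authentication.
# Signal names, weights, and thresholds are illustrative only.

def risk_score(new_device: bool, unusual_location: bool,
               privileged_access: bool, off_hours: bool) -> int:
    """Sum weighted risk signals for a single login attempt."""
    score = 0
    if new_device:
        score += 30
    if unusual_location:
        score += 30
    if privileged_access:
        score += 25
    if off_hours:
        score += 15
    return score

def auth_decision(score: int) -> str:
    """Map a risk score to an authentication requirement."""
    if score >= 60:
        return "deny"            # too risky: block and alert
    if score >= 30:
        return "step-up MFA"     # challenge with an extra factor
    return "allow"               # low risk: single sign-on proceeds

# A familiar device from a usual location sails through,
# while a privileged login from a new device triggers MFA.
print(auth_decision(risk_score(False, False, False, False)))  # allow
print(auth_decision(risk_score(True, False, True, False)))    # step-up MFA
```

The point is that low-risk logins pass frictionlessly while only the risky minority get an extra challenge, which is how such tools avoid slowing down everyday work.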
ChatGPT is another valuable tool in a defender’s toolbox. According to reporting by The Wall Street Journal, some security teams turn to ChatGPT to create easy-to-understand communication materials that resonate with business stakeholders and help build program support. Others use it to create customizable policy templates. But most early ChatGPT cybersecurity use cases focus on automating tasks, from analyzing log files and mapping threat trends to detecting vulnerabilities and supporting secure coding for developers.
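As a sketch of how log-analysis automation with a chatbot might work in practice, a defender would typically pre-filter raw logs down to the interesting lines and wrap them in a prompt before sending them to the model. The log format, keyword list, and prompt wording below are assumptions for illustration, not any specific product’s approach:

```python
# Hypothetical pre-processing step for LLM-assisted log triage.
# The log format and keyword list are illustrative assumptions.

SUSPICIOUS_KEYWORDS = ("failed", "denied", "unauthorized", "error")

def filter_suspicious(log_lines):
    """Keep only lines containing a suspicious keyword (case-insensitive)."""
    return [line for line in log_lines
            if any(kw in line.lower() for kw in SUSPICIOUS_KEYWORDS)]

def build_prompt(suspicious_lines):
    """Wrap the filtered lines in an analysis prompt for a chatbot."""
    joined = "\n".join(suspicious_lines)
    return ("Summarize the likely threat activity in these log entries "
            "and suggest next investigative steps:\n" + joined)

logs = [
    "2023-04-01 09:12:01 login succeeded user=alice",
    "2023-04-01 09:12:07 login FAILED user=admin src=203.0.113.5",
    "2023-04-01 09:12:09 access DENIED path=/etc/shadow user=admin",
]
prompt = build_prompt(filter_suspicious(logs))
print(prompt)
```

The resulting prompt string would then go to the chatbot’s API; the design point is that the model sees a focused excerpt rather than gigabytes of raw logs, keeping responses fast and relevant.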
AI continues to evolve, but it has limitations: it cannot deliver the cognitive reasoning, nuance, and critical first-hand experience that human subject matter experts can. For example, a neuroscientist at the University of California, Los Angeles recently asked GPT-4, the latest version of ChatGPT, “What is the third word in this sentence?” The bot’s answer was “third.” Another example: SC Magazine covered a study of 53,000 email users in over 100 countries that found a 4.2% click-through rate for phishing emails crafted by a professional red team, compared with only 2.9% for campaigns created with ChatGPT.
In a recent ABC News interview, Sam Altman, CEO of OpenAI (the company that created ChatGPT), urged people to see chatbots as ancillary tools rather than replacements for human experts amid a major technology shift.
Unfortunately, adversaries are adapting and leveraging AI/ML for the same reasons cybersecurity teams are.
Threat researchers have already uncovered many ways ChatGPT could be used for malicious purposes. Our CyberArk Labs team demonstrated how easy it is to create polymorphic malware (advanced malware that evades security protections and makes mitigation difficult) with ChatGPT. By experimenting with creative prompts, CyberArk researchers found ways to circumvent its built-in content filters, the checks designed to prevent abuse and malicious activity. They tricked ChatGPT into generating (and continually mutating) code for injection, creating the file search and encryption modules needed to spread ransomware and other malicious payloads. They also discovered that by using ChatGPT’s API with certain prompts, they could bypass the content filters entirely.
Fellow researchers at Check Point Research analyzed several underground communities and discovered threat actors using ChatGPT to create infostealer malware, design multi-layered encryption tools (with no prior experience, according to one threat actor’s own description), and launch an automated dark web marketplace for illicit goods.
Altman acknowledged the risks posed by rapidly changing AI/ML technologies in the aforementioned interview. “I am particularly concerned that these models will be used for large-scale disinformation,” he said. “Now that they are getting better at writing computer code, [they] could be used for offensive cyberattacks.”
IT decision makers share Altman’s concerns. According to a 2023 BlackBerry Global Research survey, 51% believe a successful cyberattack will be credited to ChatGPT within the year. Respondents are most concerned about chatbots’ ability to help threat actors craft more credible and legitimate-sounding phishing emails (53%), which highlights the need for robust security measures, from strong endpoint privilege controls to regular cybersecurity awareness training that helps end users spot common phishing and social engineering techniques. Respondents also expressed concern that inexperienced attackers could use AI to improve their knowledge and skills (49%) and that AI could be used to spread disinformation (49%).
Concerns about AI continue to grow. In late March, an open letter with more than 1,100 high-profile signatories called on all AI labs to immediately suspend, for at least six months, the training of AI systems more powerful than GPT-4 until regulators catch up. Just two days after the letter was published, Italy temporarily banned ChatGPT and is now investigating possible violations of both the EU’s General Data Protection Regulation and Italy’s Data Protection Code. Legislators in many other countries are sounding the alarm about emerging security and privacy issues. According to NPR, the Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission in late March, arguing that GPT-4 has the ability to perform mass surveillance at scale.
A Key Human Element of Identity Security
As public debate and regulatory scrutiny around AI/ML intensify, enterprise cybersecurity teams must stay vigilant and keep the big picture in mind: cyberattacks are inevitable, whatever their method, target, or motive. Damage is not.
Organizations can protect what matters most by securing every identity throughout its cycle of accessing critical resources across the infrastructure. This requires a holistic approach that pairs visionary technology with human expertise. The right identity security platform must protect critical data and systems from myriad threats to confidentiality, integrity, and availability. The right identity security partner should be a trusted advisor, improving your security team and strategy in ways technology can’t. Vision, experience, diverse thinking, technical acumen, empathetic support, ethical rigor, strong relationships, and proven results: humanity matters in cybersecurity.
As AI/ML capabilities expand rapidly, the cybersecurity community must continue to test and push the boundaries of AI, share information, and advocate for critical guardrails. Only by working together, in the words of Friedman, can we “define how to get the best out of AI and mitigate the worst.”
Learn more about CyberArk’s identity security platform.
Copyright © 2023 IDG Communications, Inc.
