According to Darktrace, social engineering, particularly malicious cyber campaigns delivered via email, remains the leading source of organizational vulnerability to attack.
Popularized in the 1990s, email-borne attacks have plagued cyber defenders for nearly three decades. The goal is simple: lure victims into divulging sensitive information through communications that abuse trust, extort, or promise rewards, so that attackers can reach the heart of critical systems.
Social engineering is a profitable business for hackers: an estimated 3.4 billion phishing emails are delivered every day.
As organizations continue to rely on email as their primary collaboration and communication tool, email security tools that depend on knowledge of past threats cannot be guaranteed to future-proof organizations and their people against evolving email threats.
Widespread access to generative AI tools such as ChatGPT, combined with the increasing sophistication of nation-state actors, means email fraud is more convincing than ever.
Humans can no longer rely on intuition to stop hackers; it’s time to arm organizations with an AI that knows them better than their attackers do. In new data published this week, Darktrace finds that email security solutions, including native, cloud, and “static AI” tools, take an average of 13 days from the time an attack is launched against a victim until that attack is detected, leaving defenders vulnerable for almost two weeks if they rely solely on these tools.
In March 2023, Darktrace commissioned Censuswide to survey 6,711 employees across the UK, US, France, Germany, Australia, and the Netherlands, gathering third-party insights into human behavior around email. The goal of the global survey was to better understand how employees around the world react to potential security threats, their understanding of email security, and the modern technologies being used as a tool to transform the threats against them.
The survey found that 82% of global employees are concerned that hackers can use generative AI to craft fraudulent emails that are indistinguishable from genuine communications.
The top three characteristics that lead employees to consider an email a phishing attack are: being asked to click a link or open an attachment (68%), an unknown sender or unexpected content (61%), and poor spelling and grammar (61%).
Nearly one in three (30%) global employees have fallen for a fraudulent email or text in the past, while 70% have noticed an increase in the frequency of scam emails and texts in the last six months.
The report also found that 87% of employees worldwide are concerned about the amount of personal information available online that could be used for phishing and other email scams.
Additionally, almost four-fifths (79%) of respondents say their company’s spam filters incorrectly block important, legitimate emails from reaching their inboxes, and over a third (35%) have tried ChatGPT or other generative AI chatbots.
Email threat landscape
Darktrace researchers observed a 135% increase in “novel social engineering attacks” across thousands of active Darktrace/Email customers from January to February 2023, corresponding with the widespread adoption of ChatGPT. These novel attacks use sophisticated linguistic techniques, including increased text volume, more punctuation, and longer sentences, with no links or attachments. The trend suggests that generative AI tools such as ChatGPT give attackers the means to craft sophisticated, targeted attacks at speed and scale.
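To make those signals concrete, here is a minimal, hypothetical Python sketch of how such linguistic features might be extracted from an email body. The function and the feature set are illustrative assumptions for this article, not Darktrace’s actual model:

```python
import re

def linguistic_features(body: str) -> dict:
    """Extract the simple linguistic signals discussed above from an email body.

    Illustrative only: a real classifier would learn per-sender baselines and
    use far richer features than these four.
    """
    words = body.split()
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    punctuation = re.findall(r"[,.;:!?'\"()-]", body)
    return {
        "text_volume": len(words),                        # increased text volume
        "punctuation_density": len(punctuation) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "has_link": bool(re.search(r"https?://", body)),  # novel attacks often carry none
    }

print(linguistic_features(
    "Dear colleague, following this morning's discussion, could you kindly "
    "review the quarterly figures; I believe there are several discrepancies, "
    "and time, as ever, is of the essence!"
))
```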
Additionally, attackers are quick to exploit the news cycle to profit from employee fear, urgency, or excitement. The latest iteration is the collapse of Silicon Valley Bank (SVB) and the resulting banking crisis, which gave attackers an opportunity to disguise highly sensitive communications. 73% of employees who work for financial services organizations have noticed an increase in the frequency of fraudulent emails and text messages in the past six months.
Innocent human error and insider threats remain a problem. Many of us (almost 2 in 5) have sent an important email to the wrong recipient with a similar-looking alias, whether by mistake or due to autocomplete. This rises to more than half (51%) in the financial services industry and 41% in the legal industry, adding another, non-malicious layer of security risk. A self-learning system can catch this error before sensitive information is accidentally shared.
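As an illustration of how a system might catch a near-miss recipient before a message is sent, consider the hypothetical sketch below. The `flag_near_miss` helper, the contact list, and the similarity threshold are assumptions for the example, not a real product API; a real self-learning system would also model who a user normally emails:

```python
import difflib
from typing import Optional

def flag_near_miss(recipient: str, known_contacts: list[str],
                   threshold: float = 0.85) -> Optional[str]:
    """Return a known contact that closely resembles, but doesn't match, the recipient.

    A close-but-not-exact match suggests a mistyped or autocompleted address.
    """
    for contact in known_contacts:
        ratio = difflib.SequenceMatcher(None, recipient.lower(), contact.lower()).ratio()
        if recipient.lower() != contact.lower() and ratio >= threshold:
            return contact
    return None

contacts = ["j.smith@acme.com", "finance@acme.com"]
suspect = flag_near_miss("j.smith@acrne.com", contacts)  # 'rn' mimics 'm'
if suspect:
    print(f"Warning: did you mean {suspect}?")
```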
What does the generative AI arms race mean for email security?
Imagine your CEO asks you for information via email. It is written in the exact language and tone of voice they normally use. They even mention personal anecdotes and jokes. Darktrace research shows that 61% of people look to bad spelling and grammar as a sign that an email is fraudulent, but this email is flawless: the spelling and grammar are perfect, it contains personal information, and it is utterly convincing. But your CEO didn’t write it. It was created by generative AI, using basic information that cybercriminals pulled from social media profiles.
With the advent of ChatGPT, AI has burst into mainstream consciousness. With 35% of people having already tried ChatGPT or other generative AI chatbots for themselves, real concerns have emerged about their impact on cyber defenses: 82% of employees worldwide are concerned that hackers can use generative AI to create fraudulent emails that are indistinguishable from genuine communications.
Emails from CEOs and other senior business leaders were the third most common type of email that employees were most likely to engage with, cited by more than a quarter of respondents (26%). Meanwhile, defenders face generative AI attacks: linguistically complex, entirely new scams that use techniques and reference topics never seen before.
“In a world of increasing AI-assisted attacks, humans can no longer be held responsible for determining the authenticity of communications,” said Darktrace.
“This is now the job of artificial intelligence.”
Unlike every other email security tool, self-learning AI for email isn’t trained on what “bad” looks like; instead, it learns the normal patterns of life of each user and their unique organization.
By understanding what’s normal, it can determine what doesn’t belong in a particular person’s inbox. Conventional email security systems often get this wrong: 79% of respondents said their company’s spam/security filters incorrectly block important, legitimate emails from reaching their inboxes.
With a deep understanding of your organization and of how individuals within it interact with their inboxes, the AI can determine, for every email, whether it is suspicious and should be actioned, or whether it is legitimate and should be left alone.
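In spirit, this resembles anomaly detection against a learned per-user baseline. The following is a deliberately simplified, hypothetical Python sketch of that idea, not Darktrace’s implementation, which models many signals (senders, domains, timing, tone, links) rather than a single sender-frequency table:

```python
from collections import Counter

class InboxBaseline:
    """Toy model of 'normal' for one user's inbox: who usually emails them."""

    def __init__(self) -> None:
        self.sender_counts: Counter[str] = Counter()
        self.total = 0

    def observe(self, sender: str) -> None:
        """Update the baseline with a legitimate email that was received."""
        self.sender_counts[sender.lower()] += 1
        self.total += 1

    def anomaly_score(self, sender: str) -> float:
        """Score from 0.0 (completely normal) to 1.0 (never seen before)."""
        if self.total == 0:
            return 1.0
        return 1.0 - self.sender_counts[sender.lower()] / self.total

baseline = InboxBaseline()
for s in ["boss@corp.com", "boss@corp.com", "team@corp.com"]:
    baseline.observe(s)

print(baseline.anomaly_score("boss@corp.com"))     # low: a frequent sender
print(baseline.anomaly_score("ceo@corp-pay.com"))  # 1.0: never seen before
```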
This approach can stop threats such as:
- Phishing
- CEO fraud
- Business email compromise (BEC)
- Invoice fraud
- Data theft
- Social engineering
- Ransomware and malware
- Supply chain attacks
- URL-based spear phishing
- Account takeover
- Human error
- Insider threats