The cost of cybercrime is expected to reach $10 trillion this year, more than the GDP of every country in the world except the United States and China. Moreover, this figure is projected to grow to nearly $24 trillion over the next four years.
While sophisticated hackers and AI-powered cyberattacks tend to hijack the news headlines, one thing is clear: human error is the number one threat, accounting for over 80% of incidents. This is despite an exponential increase in organizational cyber training over the past decade, which has raised awareness and reduced risk across businesses and industries.
Will AI come to the rescue? That is, can artificial intelligence become a tool that helps companies curb human error? And if so, what are the pros and cons of relying on machine intelligence to mitigate the risks of human behavior?
Not surprisingly, there is a great deal of interest in AI-driven cybersecurity right now, with the market for AI cybersecurity tools projected to grow from just $4 billion in 2017 to nearly $35 billion by 2025. These tools typically rely on machine learning, deep learning, and natural language processing to reduce malicious activity and to detect cyber anomalies, fraud, or intrusions. Most of them focus on uncovering changing patterns in data ecosystems, such as enterprise clouds, platforms, and data warehouse assets, with a level of sensitivity and granularity that typically escapes human observers.
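To make this concrete, here is a minimal sketch of the kind of pattern-based anomaly detection such tools perform, using scikit-learn's IsolationForest on invented network-session features. The feature choices, traffic distributions, and contamination rate are illustrative assumptions, not any vendor's actual method.

```python
# A minimal sketch of unsupervised anomaly detection on network-session
# features. The features (bytes transferred, session duration) and their
# distributions are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" traffic sessions: (bytes in KB, duration in seconds).
normal = rng.normal(loc=[500, 30], scale=[100, 10], size=(1000, 2))

# Simulate a few anomalous sessions: unusually large, long transfers.
anomalies = rng.normal(loc=[5000, 300], scale=[500, 50], size=(10, 2))

X = np.vstack([normal, anomalies])

# contamination is the expected share of anomalies in the data.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(X)} sessions as anomalous")
```

The point of the sketch is the division of labor: the model learns what "normal" looks like from volume alone and flags deviations at a granularity no human reviewer scanning logs could sustain.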
For example, supervised machine learning algorithms can classify malicious email attacks with 98% accuracy, identifying "look-alike" features based on human classification or encoding, while deep learning recognition of network intrusions achieves 99.9% accuracy. As for natural language processing, it delivers high levels of reliability and accuracy in detecting phishing activity and malware through email-domain and message-keyword extraction, where human intuition typically fails.
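As an illustration of the supervised approach, the sketch below trains a keyword-based phishing classifier. The toy corpus, labels, and model choice (TF-IDF features with logistic regression) are assumptions for demonstration; the accuracy figures cited above come from systems trained on far larger labeled datasets.

```python
# A minimal sketch of supervised phishing classification from message
# keywords. The six-email corpus and its labels are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Verify your account now or it will be suspended",
    "Urgent: confirm your password at this link",
    "You have won a prize, click here to claim",
    "Meeting moved to 3pm, see agenda attached",
    "Quarterly report draft ready for your review",
    "Lunch on Thursday? Let me know what works",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns keyword usage into numeric features; logistic regression
# learns which terms ("verify", "urgent", "prize") signal phishing.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

print(clf.predict(["Urgent: verify your password immediately"]))  # likely [1]
```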
However, as academics point out, relying on AI to protect your business from cyberattacks is a "double-edged sword." Most notably, studies show that injecting just 8% of "poisonous" or erroneous training data can cause an AI's accuracy to drop by a whopping 75%, which is similar to how users corrupt conversational user interfaces and large language models by injecting sexist preferences and racist language into their training data. As ChatGPT likes to say, "As a language model, I am only as accurate as the information I am given," and a model trained on corrupted data becomes a weak predictor of attacks.
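A minimal sketch of this failure mode is shown below, using random label flipping on synthetic data. The dataset, model, and 8% poisoning rate are illustrative assumptions, and crude random flipping understates what a targeted poisoning attack, like the ones in the studies cited above, can achieve.

```python
# A minimal sketch of training-data poisoning via label flipping.
# Synthetic data and an untargeted attack, for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Baseline: train on clean labels.
clean_acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)

# Poison: flip the labels of 8% of the training examples.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.08 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_acc = LogisticRegression().fit(X_train, poisoned).score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```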
Moreover, trust in AI tends to result in delegating undesirable tasks to it without understanding or supervision, especially when the AI is not explainable (a property that, paradoxically, often coexists with the highest levels of accuracy). Over-reliance on AI is well-documented, especially when people are under time pressure, and it often leads to a diffusion of human responsibility that increases careless and reckless behavior. Instead of improving the much-needed collaboration between human and machine intelligence, the unintended consequence is that the latter dilutes the former.
As I discuss in my recent book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, there seems to be a general tendency to hail advances in AI as an excuse for our own intellectual stagnation. Cybersecurity is no exception: we happily welcome technological advances that promise to protect us from our own careless or reckless actions and relieve us of being "responsible," because blame can be shifted from human error to AI error. Admittedly, this is not a happy outcome for businesses, so the need to educate, alert, train, and manage human behavior remains as important as ever, if not more so.
Importantly, organizations must continue their efforts to raise employee awareness of the ever-changing risk landscape, which will only grow in complexity and uncertainty as AI adoption spreads among both attackers and defenders. It may never be possible to completely eliminate risk or neutralize every threat, and the most important aspect of trust is not whether you trust AI or humans, but whether a given business, brand, or platform is trusted more than the others. This is not an either/or choice between relying on humans and relying on artificial intelligence to protect your business from attack; what is needed is a culture that makes good use of both technological innovation and human expertise, in the hope of being no more vulnerable than everyone else.
Ultimately, this is a question of leadership: having the right safety profile at the top of the organization, and especially on the board, alongside the right technical expertise and competence. Decades of research show that organizations led by conscientious, risk-aware, and ethical leaders are much more likely to provide a safe culture and climate for their employees, in which risks are still possible but less likely. Such companies can certainly be expected to leverage AI to keep their organizations safe, but it is their ability to educate employees and improve human habits that makes them less vulnerable to attacks and negligence. As Samuel Johnson rightly noted, long before cybersecurity was a concern, "The chains of habit are too weak to be felt until they are too strong to be broken."