Today, warnings about artificial intelligence (AI) are everywhere. One recent warning recalled images from the movie Terminator, carrying a terrifying message about the potential of AI to cause human extinction. British Prime Minister Rishi Sunak has even set up a summit to discuss AI safety. But we have been using AI tools for a long time, from the algorithms that recommend related products on shopping websites to cars with technology that recognizes traffic signs and keeps us in our lane.
AI is a tool that increases efficiency, processes and classifies large amounts of data, and reduces the burden of decision-making. But these tools are available to everyone, including criminals, and we are already seeing the early stages of AI adoption by criminals.
Deepfake technology has been used to generate revenge porn, for example. Technology makes criminal activity more efficient, allowing lawbreakers to target more people and increasing the scale of their operations. Observing how criminals have adapted to and adopted technological advances in the past may give us hints as to how they might use AI.
Better phishing hooks
AI tools like ChatGPT and Google’s Bard provide writing support, helping even inexperienced writers create effective marketing messages. But the same technology can help criminals sound more believable when contacting potential victims. Think of the spam phishing emails and texts that are poorly written and easily detected: plausibility is key to extracting information from victims. Phishing is a numbers game. An estimated 3.4 billion spam emails are sent every day. By my own calculations, if criminals could improve their messaging so that just an extra five messages in every million (0.0005%) convinced someone to release their information, that would mean roughly 6.2 million additional phishing victims per year.
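The arithmetic behind that estimate can be checked directly (the 3.4 billion and five-in-a-million figures are the assumptions stated above, not measured data):

```python
# Back-of-the-envelope check of the phishing estimate above.
SPAM_PER_DAY = 3.4e9       # estimated spam emails sent daily
EXTRA_SUCCESS_RATE = 5e-6  # five per million messages, i.e. 0.0005%

extra_victims_per_day = SPAM_PER_DAY * EXTRA_SUCCESS_RATE
extra_victims_per_year = extra_victims_per_day * 365

print(f"{extra_victims_per_day:,.0f} extra victims per day")    # 17,000
print(f"{extra_victims_per_year:,.0f} extra victims per year")  # 6,205,000
```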
Automated dialogue with victims
One of the early uses of AI tools was to automate interactions between customers and services via text, chat messages, and phone calls, enabling organizations to respond quickly to customers and optimize operational efficiency. Often your first contact with an organization will be with an AI system before you can speak to a human. Criminals can use the same tools to create automated interactions with huge numbers of potential victims, at a scale not possible by humans alone, spoofing legitimate services such as banks by phone or email to extract the information needed to steal money.
Deepfakes
AI is very good at generating mathematical models that can be “trained” on large amounts of real-world data, making those models better at specific tasks. Video and audio deepfake technology is one example. The deepfake act Metaphysic recently demonstrated the technology’s potential when it presented a video of Simon Cowell singing opera on the TV show “America’s Got Talent.” While this level of technology is beyond the reach of most criminals, AI could be used to mimic the way a person responds to text messages, writes emails, leaves voice notes, or makes phone calls. The training data can be gleaned, for example, from videos on social media. Social media has always been a rich source of information for criminals researching potential targets, and AI may now be used to create a deepfake version of you.
Criminals could use this deepfake to interact with your friends and family and trick them into handing over information about you. The more insight criminals have into your life, the easier it is to guess your passwords or PINs.
Brute force
Another technique used by criminals, called “brute forcing,” could also benefit from AI. Here, many combinations of characters and symbols are tried in turn to see whether they match a password, which is why longer, more complex passwords are more secure: they are harder to guess this way. Brute forcing is resource intensive, but it becomes easier with some knowledge of the target. For example, it allows lists of potential passwords to be prioritized, making the process more efficient, starting with combinations related to the names of family members and pets.
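A quick sketch of why length and character variety matter here (my own illustrative numbers, not figures from the article): the number of candidate passwords an attacker must try grows exponentially with length.

```python
# Size of the search space a brute-force attack faces: every extra character
# multiplies the number of candidates by the size of the character set.
def search_space(charset_size: int, length: int) -> int:
    """Total number of candidate passwords of a given length."""
    return charset_size ** length

print(search_space(26, 6))   # lowercase only, 6 chars: 26**6 = 308,915,776
print(search_space(26, 10))  # lowercase only, 10 chars: roughly 1.4e14
print(search_space(94, 10))  # printable ASCII, 10 chars: roughly 5.4e19
```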
Algorithms trained on data could create these prioritized lists more accurately and target more people at once, requiring fewer resources. Specialized AI tools could be developed to harvest your online data and analyze it all to build a profile of you. If, for example, you frequently post about Taylor Swift on social media, manually going through your posts for password clues would be hard work.
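As a toy illustration of what such automation could look like (the tokens and padding patterns below are entirely made up for this example, not part of any real tool), harvested personal details can be combined mechanically into a prioritized guess list:

```python
from itertools import product

# Hypothetical tokens harvested from someone's public posts
tokens = ["rex", "taylor", "swift"]  # e.g. pet name, favourite artist
suffixes = ["", "1", "123", "1989"]  # common padding patterns

# Every token/suffix pairing becomes a high-priority password guess
candidates = [t + s for t, s in product(tokens, suffixes)]
print(candidates[:4])  # ['rex', 'rex1', 'rex123', 'rex1989']
```

Checking a handful of such targeted guesses is far cheaper than exhaustively searching every combination, which is why publicly posted personal details weaken otherwise reasonable passwords.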
Automated tools do this quickly and efficiently, and all of this information could be used to build a profile and guess your passwords or PINs.
Healthy skepticism
There is no need to fear AI, because it has the potential to bring real benefits to society. But as with any new technology, society needs to adapt to and understand it. We take smartphones for granted now, but society had to adjust to their presence in our lives. They have been largely beneficial, though uncertainties remain, such as the appropriate amount of screen time for children.
As individuals, we need to be proactive in understanding AI rather than complacent. We should develop our own approaches to it while maintaining a healthy dose of skepticism, thinking about how to validate what we read, hear, and see. These simple acts will help society reap the benefits of AI while protecting ourselves from potential harm.
(The author is Professor of Cyber Security at Lancaster University)
