AI is here to change the world, whether we like it or not. While many have praised the breakthrough technology as a “revolution of work,” thanks to its potential to streamline operations and improve efficiency and productivity, others have expressed concern over the influence AI has already exerted on our society. Ultimately, whether the AI revolution turns out to be a net positive or negative depends on who is using it, and for what purpose.
Although critics are correct in pointing out that artificial intelligence poses certain threats, we must recognize that these threats are not inherent to the technology but rather stem from the people who abuse it. In that respect, AI is like any other innovation in history: if people can find a way to exploit it for their own nefarious purposes, rest assured they will.
Nevertheless, we cannot let this caveat prevent us from using AI in legitimate, beneficial ways. Instead, we must identify and mitigate these harmful uses of the technology to create a landscape where AI can be used responsibly.
How scammers are abusing generative AI
Many of the most popular uses of AI today fall into the category of generative AI, or gen-AI. People have used gen-AI models, like ChatGPT, to improve their productivity and efficiency, from drafting emails to powering customer service chatbots or, in manufacturing operations, to produce easier-to-understand technical manuals and anticipate equipment downtime. Unfortunately, the highly customizable nature of these models has also given wrongdoers ways to turn them to malicious ends.
One of the most dangerous abuses of generative AI has come from scammers who use it to improve their phishing schemes, which are designed to convince victims to reveal their personal information by impersonating a trusted source, be it a boss, co-worker, friend, or loved one. In the past, these scams were easier to spot because of telltale mistakes such as grammatical errors or inconsistencies in voice.
However, today’s scammers can train a model on a library of writing by the person they hope to imitate and create a convincing impersonation. The result is a message that is far more difficult to judge: is it genuinely from the person it claims to be, or is it a scam?
But written content is not the only thing gen-AI has become dangerously, frighteningly good at creating. “Deepfakes,” AI-generated images, video, and audio, have become a widespread threat. Scammers can use deepfakes for any number of nefarious purposes, ranging from blackmail and reputational damage to something as broadly impactful as manipulating elections and stock markets.
How hackers are exploiting AI to automate cyberattacks
The other widely touted aspect of artificial intelligence is its advanced data analytics capability. In several industries, AI has significantly improved efficiency and accuracy because it can process extensive data sets far more quickly than a human can, but that same capability becomes extremely dangerous in the wrong hands.
Hackers have leveraged AI’s data analysis prowess by training models to constantly probe networks for vulnerabilities. In doing so, they can increase not only the volume of their attacks but also their severity by making them much more difficult to identify and respond to.
However, these automated attacks become even more troubling when they target critical infrastructure and supply chains. Because so many industries and institutions in our society now operate with or depend upon computers, hackers have several high-value targets to exploit.
Should a hacker find a way to exploit key vulnerabilities in the networks running traffic lights, shipping routes, air traffic control, power grids, telecommunications systems, or financial institutions, the financial damage and loss of life could be tremendous.
How we can stop these abuses of AI technology
The silver lining here is that network operators can take a “fight-fire-with-fire” approach by applying the same tools that have been used against them to beef up their cybersecurity.
For example, the same models hackers use to probe networks for vulnerabilities can help network operators identify weaknesses that need to be patched. In the fight against scams, AI models are being introduced that analyze writing and audiovisual material to determine whether it is authentic or AI-generated.
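To make the defensive side of that idea concrete, here is a minimal sketch of one common approach: training an unsupervised anomaly detector on network activity assumed to be normal, then flagging connections that deviate from it. The features, values, and contamination rate below are hypothetical illustrations, not drawn from any specific product; a production system would use far richer telemetry.

```python
# Minimal sketch: flagging anomalous network connections with an
# unsupervised model. Feature names and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: [bytes sent, duration (s), failed logins]
normal_traffic = rng.normal(loc=[5000, 30, 0], scale=[1500, 10, 0.3], size=(500, 3))

# Train on traffic assumed to be normal, then score new connections.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

new_connections = np.array([
    [5200, 28, 0],    # looks routine
    [90000, 2, 14],   # huge transfer, short duration, many failed logins
])
for features, label in zip(new_connections, model.predict(new_connections)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"{features} -> {status}")
```

The same pattern works in both directions, which is the point of the “fight-fire-with-fire” framing: what a hacker uses to hunt for soft spots, a defender can use to find and patch them first.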
Still, the best way to fight back against these negative use cases of AI is to stay informed. Knowing how to spot suspicious emails and judge whether they are fraudulent or legitimate can keep people from falling victim to AI-powered phishing scams, and implementing strong cybersecurity practices, including strong passwords and access control measures, can make our networks less vulnerable to automated attacks.
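Even simple, rule-of-thumb checks go a long way here. The sketch below encodes a few of the manual red flags described above as code; the keywords and rules are hypothetical examples for illustration, not a complete detector.

```python
# Minimal sketch: rule-of-thumb red flags for a suspicious email.
# Keywords and rules are illustrative, mirroring the manual checks above.
import re

URGENCY_PHRASES = {"urgent", "immediately", "wire transfer", "gift card",
                   "verify your account"}

def phishing_indicators(sender: str, reply_to: str, body: str) -> list[str]:
    """Return a list of human-readable red flags found in the message."""
    flags = []
    # A Reply-To domain that differs from the sender domain is a classic tell.
    if sender.split("@")[-1] != reply_to.split("@")[-1]:
        flags.append("Reply-To domain differs from sender domain")
    lowered = body.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            flags.append(f"Urgency/pressure phrase: '{phrase}'")
    # Links pointing at raw IP addresses rarely come from legitimate senders.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        flags.append("Link points to a raw IP address")
    return flags

print(phishing_indicators(
    sender="ceo@example.com",
    reply_to="ceo@examp1e-mail.com",
    body="Please wire transfer the funds immediately: http://192.0.2.1/pay",
))
```

No checklist like this is foolproof against a well-trained impersonation model, which is why these habits work best alongside the AI-powered detection tools described earlier.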
AI is an incredibly powerful tool, and while it has the potential to help the world in many ways, wrongdoers can exploit its capabilities to create new cyber threats. To address those threats, we must understand how the technology is being abused and what can be done to stop it.