When people think of the threat posed by artificial intelligence (AI) and the potential for AI-generated attacks, science fiction concepts such as a robot apocalypse may come to mind. Speakers at the recent IBM Security Summit 2023 helped put the real threat into perspective by pointing out that AI is just another technology, albeit one newer and more powerful than any we have seen in the past.
“AI is just another tool that needs to be understood,” said Yonesy Núñez, DTCC Managing Director and Chief Information Security Officer, speaking as part of the panel discussion “In the Eyes of the Threat Actor: AI-Generated Attacks, Fact or Fiction.” “And, just like any technology tool, it’s a double-edged sword in that it can be used for beneficial or nefarious purposes. We just need to remember that the bad actors need to study and learn how to use it, too.”
Combatting AI with AI
Panelists noted that perception can make AI-generated threats seem superhuman, but they emphasized the importance of treating AI like any other threat, albeit one more powerful and sophisticated than we are accustomed to. More importantly, they explained that AI can also be used to defend against attempts to attack a network.
For example, criminals could conceivably use AI to find exploitable vulnerabilities in code more quickly and launch attacks. At the same time, security teams can leverage AI to find and fix those same vulnerabilities before new code is deployed.
“The attackers are dealing with the same issues we are,” added Núñez. “They also have to sift through huge amounts of information to find some way to make AI useful for them in carrying out malicious attacks. But the threat actors will learn, they will get better at using this new tool to launch attacks. We need to do the same, we just need to do it faster than the criminals.”
Because AI is a far more capable tool than those that came before it, security experts must adapt to stay ahead. Criminals could conceivably use AI to generate advanced phishing emails, so many of the traditional ways to spot fakes, such as looking for typos and grammar mistakes, may be outdated. AI can eliminate many of these more obvious tell-tale signs, making phishing emails more difficult for the average person to spot.
“I am concerned about people being more easily duped by better phishing emails, but in the end, a phish is still a phish. We just need to watch how they evolve and continually update training to spot and stop AI-generated phishing attacks,” said Núñez, noting that some 22% of attacks still come from phishing emails.
While it may sound far-fetched, Núñez said it’s conceivable that AI could even be used to create fake videos, so the person on the other side of the screen on a Zoom call may actually be a deepfake. “We haven’t seen anything like that yet, but it is plausible, so we need to start thinking about and preparing for these kinds of things, even if they seem fanciful right now,” he said. “I’m more concerned about the bad actors using AI to imitate someone’s voice.”
While the prospect of AI-generated criminal activity may strike fear in the hearts of some, cybersecurity professionals see AI as just another threat they need to manage. Indeed, the same technology can also serve as a powerful tool to defend against attacks.