Technology, like any inanimate tool, lacks a conscience. The same innovations that save lives, increase efficiency, or otherwise improve our quality of life can also be directed toward malevolent ends.
Artificial intelligence (AI) is a prime example. Cybersecurity vendors often employ AI techniques to improve cyber defenses by, for example, detecting malicious software in corporate networks[1] or flagging spam e-mails before they can pollute your inbox.[2]
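To make the defensive use concrete, here is a minimal sketch of the kind of machine-learned spam filter such vendors deploy. The tiny hand-written dataset and the TF-IDF-plus-naive-Bayes pipeline are illustrative assumptions, not any vendor's actual model.

```python
# Minimal sketch of an AI-based spam filter (illustrative only).
# Assumption: a toy dataset stands in for the millions of labeled
# e-mails a production system would learn from.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",
    "Your invoice for last month is attached",
    "Urgent: verify your account password immediately",
    "Meeting moved to 3pm tomorrow",
]
labels = ["spam", "ham", "spam", "ham"]

# TF-IDF features + naive Bayes: a classic baseline text classifier
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Click here to claim your free prize"]))
# -> ['spam'] on this toy data
```

Real systems add many more signals (sender reputation, URL analysis, user feedback), but the core idea is the same: the model learns to separate malicious from benign messages from labeled examples.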
As AI technologies have matured, however, there's a growing concern that they may be harnessed toward malicious ends.[3]
AI as a superpowered cyberattacker
As a recent report noted,[4] AI systems pose at least two potential threats to cybersecurity:
- They may make current cyberattacks cheaper and more scalable.
- They may create entirely novel forms of cyberattack.
Some researchers worry that AI systems will automate otherwise laborious tasks, such as devising and deploying spear phishing attacks.[5] In this style of social engineering attack, the target receives a seemingly authentic e-mail, text message, or phone call laced with personalized information designed to trick the target into downloading malicious software or divulging sensitive information. With AI technologies, phishing-style attacks could become both more automated (and thus easier to deploy at scale) and more personalized, making them more effective at luring their prey.
Consider this example: Security researchers have already demonstrated that neural networks (computational models loosely patterned on networks of biological neurons) can mine Twitter users' activity on the platform to send them personalized messages.[6] These malicious tweets duped their targets more often than other automated phishing methods did,[7] and they even outperformed a human attempting to craft and send spear phishing tweets.[8]
The virtual pools of data that individuals and businesses leave on social media platforms could become a fertile spawning ground for AI-powered phishing attacks.
Data poisoning, on the other hand, is an example of an entirely novel risk posed by AI systems.[9] Data poisoning is the act of sabotaging the training data from which AI models learn. Because machine learning systems are only as good as the data they're trained on, these training data sets are a burgeoning source of cyber vulnerability for the companies that rely on them.[10] Actors capable of injecting bad data into training sets can wreak havoc with machine learning algorithms. The U.S. Army, for instance, is working on cyber defenses to mitigate the threat of adversaries using backdoors to poison the training data behind AI systems such as facial recognition.[11]
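A brief sketch shows why poisoned training data is so damaging. This toy example, which assumes a synthetic dataset and a simple logistic regression model standing in for a real production pipeline, flips a fraction of training labels, one common form of poisoning, and compares the results:

```python
# Minimal sketch of a label-flipping data poisoning attack
# (illustrative assumptions: synthetic data, logistic regression).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean, unmodified data
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker flips the labels of 30% of the training examples
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
# Same algorithm, same test set: the poisoned model's accuracy
# typically drops because it learned from sabotaged labels.
```

Real-world poisoning can be far subtler than random label flips, targeting specific inputs through backdoors, but the principle is the same: corrupt the data and the model inherits the corruption.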
Managing AI risks
The looming risk of AI-powered cyber threats underscores the importance of a multilayered approach to cyber risk management. While cyber defenses, including those that use AI tools, will continue to be important, cyber risk transfer solutions are a critical part of the mix as well. Verisk’s cyber insurance program offers a full suite of tools that allow insurers to stay responsive in this highly dynamic market.
1. Simant Dube, "Deep Learning for Malware Classification," AI/ML at Symantec, March 28, 2019, <https://medium.com/ai-ml-at-symantec/deep-learning-for-malware-classification-dc9d7712528f>, accessed on March 23, 2020.
2. Cade Metz, "Google Says Its AI Catches 99.9 Percent of Gmail Spam," WIRED, July 9, 2015, <https://www.wired.com/2015/07/google-says-ai-catches-99-9-percent-gmail-spam/>, accessed on March 23, 2020.
3. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Future of Humanity Institute, et al., February 2018, <https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/MaliciousUseofAI.pdf?ver=1553030594217>, accessed on March 23, 2020.
4. Ibid., pp. 16–17.
5. Ibid.
6. John Seymour, et al., Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter, ZeroFOX, 2016, <https://www.blackhat.com/docs/us-16/materials/us-16-Seymour-Tully-Weaponizing-Data-Science-For-Social-Engineering-Automated-E2E-Spear-Phishing-On-Twitter.pdf>, accessed on March 23, 2020.
7. Ibid.
8. Ibid.
9. Osonde Osoba, et al., The Risks of Artificial Intelligence to Security and the Future of Work, RAND Corporation, p. 6, <https://www.rand.org/pubs/perspectives/PE237.html>, accessed on March 23, 2020.
10. Ibid., p. 6.
11. Jackson Barnett, "Army looks to block data 'poisoning' in facial recognition, AI," FedScoop, February 11, 2020, <https://www.fedscoop.com/army-looks-block-data-poisoning-facial-recognition/>, accessed on March 23, 2020.