The AI Attack Surface Risks Spiralling Out of Control. Here's how to manage it
By Chris Newton-Smith Edited by Patricia Cullen
It sometimes feels like the entire world has gone AI crazy. The UN predicts the market will grow 25-fold between 2023 and 2033 to hit $4.8 trillion. No company wants to be left behind. But in the rush to carve out an advantage, many may unwittingly be exposing themselves to new risks.
When AI expands a company's "attack surface" in this way, the consequences can include theft of sensitive data, sabotage or manipulation of critical AI models, or more conventional ransomware attacks. Complicating matters, AI is often the aggressor too, enhancing threat actors' ability to strike. A coordinated response is needed: AI-powered defences to fight fire with fire, backed by best-practice security frameworks to ensure good governance.
AI is on a tear
Our latest research reveals that 79% of UK and US organisations have adopted new technologies like AI and machine learning (ML) over the past 12 months, with a fifth planning to do so in the coming year. Many have been wowed by the breakout success of ChatGPT and similar chatbots from the likes of Google and Anthropic. These tools offer potentially major productivity gains, cost savings and business process efficiencies. Use cases as diverse as automated lead generation for marketers and coding assistants for developer teams are helping to persuade business owners to invest.
Growing the AI attack surface
However, as more organisations deploy AI, they risk creating security gaps for hackers to exploit. One threat causing sleepless nights for many security leaders is data poisoning: attempts by threat actors to manipulate the data on which AI models are trained in order to influence their output.
These attacks could be designed to sabotage the models powering key business services, or to introduce backdoors into sensitive corporate systems. A targeted attack might try to reduce the effectiveness of AI-powered malware filters, for example, giving threat actors a clearer run at their victims. While data poisoning is often thought to be more theoretical than widespread, a quarter of respondents to our study claimed they had experienced such an attack over the past year.
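To make the mechanism concrete, the sketch below shows how an attacker who can quietly relabel a slice of training data might blunt a simple AI-powered malware filter. It is a minimal illustration built on synthetic data; the classifier, the 40% flip rate and the numbers it prints are assumptions for demonstration, not findings from the research above.

```python
# A minimal, illustrative sketch of data poisoning via label flipping
# (synthetic data and scikit-learn; all numbers are assumptions, not findings).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a "malicious (1) vs. benign (0)" training set.
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def detection_rate(train_labels):
    """Train a simple filter and report how much malicious traffic it catches."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return recall_score(y_test, model.predict(X_test))

# The attacker quietly relabels 40% of malicious training samples as benign.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
malicious_idx = np.flatnonzero(poisoned == 1)
flip = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)), replace=False)
poisoned[flip] = 0

print(f"Detection rate, clean labels:    {detection_rate(y_train):.0%}")
print(f"Detection rate, poisoned labels: {detection_rate(poisoned):.0%}")
```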
But that's not all. Arguably a bigger threat comes from the AI that business and IT leaders don't even know exists in their organisation. We found that 37% of British and American firms have employees using generative AI (GenAI) without permission. This presents a security risk on several levels. Employees may unwittingly share sensitive customer information, code or corporate IP with such tools; that data may then be used to "train" the underlying large language model (LLM) and could theoretically be regurgitated to other users, a significant compliance and security risk. The tools themselves may also contain vulnerabilities, or the developer could suffer a data breach or leakage incident. IBM claims shadow AI-related incidents accounted for at least 20% of data breaches last year, and it calculates that organisations with high levels of unsanctioned AI use suffer an extra $670,000 in breach costs versus those with minimal or no such usage.
AI on the attack
AI not only represents a business risk in these ways, it's also directly helping threat actors to launch attacks, sometimes against corporate AI systems themselves. The UK's National Cyber Security Centre (NCSC) warns that the technology "will almost certainly continue to make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats."
Among other things, it will make it easier for bad actors to scope out vulnerable systems at scale, exploit vulnerabilities and craft highly convincing phishing messages in multiple languages. This is borne out in our study, which finds that AI-powered phishing is among the threats cybersecurity leaders are most concerned about (cited by 38%). A fifth also say they experienced a deepfake-based attack in the past year, putting deepfakes high on the list of concerns for the coming 12 months.
Deepfakes enable convincing social media scams that might tarnish a brand, and they can help fraudsters bypass biometric authentication checks at login and account creation. On rarer occasions, the technology has even enabled sanction-busting North Korean IT workers to gain employment in Western firms.
Fighting back with AI and standards
The good news is that there are powerful ways for businesses to tackle these risks. Threat actors may have access to innovative AI-powered tools, but so do network defenders. Threat detection tools can use AI algorithms to spot and flag patterns of suspicious behaviour in huge datasets, including phishing emails written by AI and even deepfakes.
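As a rough illustration of that kind of pattern-spotting, the sketch below runs an off-the-shelf anomaly-detection model over synthetic "login event" data to flag unusual behaviour. It is not a depiction of any particular vendor's product; the features, numbers and thresholds are assumptions chosen purely for the example.

```python
# A hypothetical illustration of AI-assisted anomaly detection on synthetic
# "login event" data; not a depiction of any specific security product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Typical behaviour: logins around office hours, modest data transfers (MB).
normal = np.column_stack([rng.normal(10, 2, 500),    # hour of login
                          rng.normal(50, 10, 500)])  # MB downloaded
# A few suspicious events: small-hours logins moving far more data.
suspicious = np.array([[3.0, 900.0], [2.0, 1200.0], [4.0, 750.0]])
events = np.vstack([normal, suspicious])

# The unsupervised model learns what "normal" looks like and scores outliers.
detector = IsolationForest(contamination=0.01, random_state=1).fit(events)
flags = detector.predict(events)  # -1 marks an event worth investigating

print(f"Flagged {(flags == -1).sum()} of {len(events)} events for review")
```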
The technology can also automate manual processes for stretched security teams, and even act as an assistant to help them work faster and more efficiently. Over 90% of the security leaders we spoke to say they plan to invest in GenAI-powered threat detection and deepfake detection tools.
Another area of focus for almost all respondents is AI governance and policy enforcement, which is key to managing unsanctioned AI use in the business. Security teams should educate users about the risks of shadow AI, offer them secure alternatives, and set clear policies governing AI use that can be enforced with the right tooling.
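By way of illustration only, tooling of this kind often boils down to checking requests against an approved list, as in the hypothetical sketch below; the domain names and actions are invented for the example rather than drawn from any specific product or policy.

```python
# A minimal sketch of gating outbound calls to AI services against an
# approved list; domains and actions here are hypothetical examples.
SANCTIONED_AI_SERVICES = {
    "approved-ai.example.com",
    "internal-llm.example.com",
}

def check_ai_request(destination: str, user: str) -> str:
    """Allow sanctioned AI tools; block and log everything else (shadow AI)."""
    if destination in SANCTIONED_AI_SERVICES:
        return "allow"
    print(f"Shadow AI alert: {user} attempted to reach {destination}")
    return "block"

# Example: an employee tries an unapproved chatbot from the corporate network.
print(check_ai_request("chat.unapproved-ai.example.org", "a.user"))
```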
Putting structured governance in place is also a key tenet of standards like ISO 27001 (for information security management) and ISO 42001 (for AI management). They offer a systematic approach to identifying security gaps and potential threats, then provide a framework to address them.
ISO 27001 is foundational for good information security, helping address areas that may affect your AI attack surface, like access controls, data protection and supplier security. And ISO 42001 has been designed specifically to help identify, assess and mitigate risk across the AI lifecycle – including data poisoning and theft, and the use of third-party services like ChatGPT. Crucially, they're based on a "Plan-Do-Check-Act" (PDCA) model, which forces organisations to adopt a mindset of continuous improvement.
Even if you've rushed into AI adoption and are now concerned about an increase in cybersecurity risk, it's not too late. Take stock. Understand your risk appetite, and start thinking strategically about securing AI. The journey has only just begun.