What New Security Threats Arise from the Boom in AI and LLMs?
Generative AI and large language models (LLMs) seem to have burst onto the scene like a supernova. LLMs are machine learning models that are trained on enormous amounts of data to understand and generate human language. LLM-based tools like ChatGPT and Bard have made a far wider audience aware of generative AI technology.
Understandably, organizations that want to sharpen their competitive edge are keen to get on the bandwagon and harness the power of AI and LLMs. That’s why Research and Markets, in a recent study, predicts that the global generative AI market will grow to USD 109.37 billion by 2030.
However, the rapid growth of this new trend comes with an old caveat: progress brings challenges. That’s particularly true when considering the security implications of generative AI and LLMs.
New threats and challenges arising from generative AI and LLMs
As is often the case, innovation outstrips security, which must catch up to assure users that the technology is viable and reliable. In particular, security teams should be aware of the following considerations:
- Data privacy and leakage. Because LLMs are trained on vast amounts of data, they can inadvertently generate outputs that contain sensitive or private information drawn from their training data. Always be mindful that LLMs are probabilistic engines that don’t understand the meaning or the context of the information they use to generate output. Unless you intervene with instructions, guardrails, or prompts that reflect what information should be made available, they have no idea whether data is sensitive or whether it should be exposed (a minimal guardrail sketch follows this list). If you train LLMs on badly anonymized data, for example, you may end up exposing information that’s inappropriate or risky. Fine-tuning is needed to address this, and you would need to track all the data and training paths used in order to justify and check the outcome. That’s a huge task.
- Misinformation and propaganda. Bad actors can use LLMs to generate fake news, manipulate public opinion, or create believable misinformation. If you’re not already knowledgeable about a given subject, the answers that you get from LLMs may seem plausible, but it’s often difficult to establish how authoritative the information provided really is, and whether its sources are legitimate or correct. The potential for spreading damaging information is significant.
- Exploitability. Skilled users can potentially “trick” the model into producing harmful, inappropriate, or undesirable content. In line with the above, LLMs can be tuned to produce a distribution of comments and sentiments that looks plausible but skews content so that opinion is presented as fact. Unsuspecting users take this content at face value when it may really serve underhanded purposes.
- Dependency on external resources. Some LLMs rely on external data sources that can be targets for attacks or manipulation. Prompts and sources can be either manual or machine-generated. Manual prompts can be influenced by human error or malign intentions; machine-generated prompts can result from inaccurate or malicious information and then be distributed through newly created content and data. Can you be sure that either is reliable? Both must be tested and verified.
- Resource exhaustion attacks. Due to the resource-intensive nature of LLMs, they can be targets for DDoS attacks that aim to drain computational resources by overloading systems. For instance, an attacker could set up a farm of bots that rapidly generates queries at a volume that poses operational and efficiency problems (see the rate-limiting sketch after this list).
- Proprietary knowledge leakage. Skilled users can potentially “trick” models into exposing the valuable operational prompts behind them. Usually, when you build functionality around AI, you have some initial prompts that you test and validate. For example, you could prompt an LLM to recognize copyrights, identify the primary owner of a piece of source code, and then extract knowledge about those copyrights, which could mean a copyright owner loses their advantage over competitors. As I wrote earlier, LLMs don’t understand the information they generate, so it’s possible that they inadvertently expose proprietary knowledge like this.
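To make the guardrail idea from the data-leakage point more concrete, here is a minimal sketch, assuming a plain Python wrapper around whatever model call you use: it redacts common PII patterns from the model’s output before that output reaches the user. The pattern list, function names, and the stand-in `fake_llm` callable are all hypothetical; a real deployment would add policy checks, allow-lists, and audit logging.

```python
import re

# Hypothetical output guardrail: redact common PII patterns (emails, phone
# numbers) from an LLM response before it reaches the user.

PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def guarded_completion(generate, prompt: str) -> str:
    """Wrap any LLM call (`generate`) so its output passes through the filter."""
    return redact_pii(generate(prompt))

# Stand-in for a real model call, just to show the wrapper in use.
if __name__ == "__main__":
    fake_llm = lambda p: "Reach Jane at jane.doe@example.com or +1 555-867-5309."
    print(guarded_completion(fake_llm, "Summarize the support ticket"))
```

Filtering the output rather than the prompt is a deliberately simple choice here; in practice you would typically do both, since a prompt-side filter alone cannot catch sensitive data the model memorized during training.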
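On the resource-exhaustion point, the sketch below assumes a simple per-client sliding window placed in front of an LLM endpoint, so that a flood of queries from a bot farm is rejected before it ever reaches the model. The window size, quota, and client identifier are illustrative assumptions, not recommended values.

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Hypothetical sliding-window rate limiter for an LLM endpoint: each client
# may make at most MAX_REQUESTS_PER_WINDOW calls within any WINDOW_SECONDS span.

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 20

_request_log = defaultdict(deque)  # client_id -> timestamps of recent requests

def allow_request(client_id: str, now: Optional[float] = None) -> bool:
    """Return True if this client is still under its per-window quota."""
    now = time.monotonic() if now is None else now
    window = _request_log[client_id]

    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()

    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False  # over quota: reject (or queue) instead of hitting the model

    window.append(now)
    return True

# Gate every inference call behind the limiter.
if __name__ == "__main__":
    for i in range(25):
        print(i, allow_request("client-123"))
```

Running the demo accepts the first 20 calls and rejects the remaining 5 until the window slides forward, which is the behavior you want when a bot farm hammers the endpoint.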
These are not the only security concerns that arise from generative AI and LLMs. There are other, pre-existing issues that are amplified by the advent of this technology. In my next blog post, we’ll examine these issues and take a look at how we might address them to safeguard users’ cybersecurity.