Shining a Light on Shadow AI: What It Is and How to Find It

After speaking to a wide spectrum of customers ranging from SMBs to enterprises, three things have become clear:

  1. Virtually everyone has moved on from using AI solely as an internal tool and are already deploying AI models.
  2. Many are experimenting with AI agents.
  3. Developers don’t ask the application security team what to do when it comes to AI use.

Add that together, and we get Shadow AI.  This refers to AI usage that is not known or visible to an organization’s IT and security teams. Shadow AI comes in many forms, but in this blog we’ll stick to a discussion of Shadow AI as it pertains to applications. 

Application security teams are well aware that AI models come with additional risk. What they’re less aware of is how much AI is already in the applications they’re tasked to protect. To get a sense of the scope of shadow AI, one of our customers uncovered more than 10,000 previously unknown AI projects in the organization’s code base. 

And realistically speaking, this will only accelerate, because developers are moving fast when it comes to implementing this technology. In big organizations, there may be 100 new repos added every day, many including AI, that all need to be checked for compliance and protected.

Moreover, developers rarely (if ever) ask permission to use AI, and they often fail to routinely disclose AI projects to application security teams. But outside of areas under heavy government regulation, developers have the strongest voice when it comes to technical decisions. Any attempt to completely ban AI is a lost cause that will only lead to more shadow AI.

Before any kind of response to the risk AI poses can be forged, shadow AI must first be brought out into the sun.

The risks of the unknown

The cost of building an extremely secure AI model from scratch is, if not actually impossible, prohibitively expensive. Developers seeking to benefit from the latest AI technologies instead either build upon models found on Hugging Face or interface  via APIs with existing large language models (LLMs) like ChatGPT or Claude. In either case, the attack surface is large and unwieldy. The non-deterministic, plain language nature of AI makes it easy to attack and difficult to secure.

Here are just a few of the risks that come with AI use in applications:

  • Major data breach. Generative AI models trained on company data or connected to Retrieval-Augmented Generation (RAG) components with access to company data can leak large amounts of sensitive information.
  • Unauthorized actions. AI agents, which are built to leverage AI output and take autonomous actions, that have access to important assets like databases or the ability to execute code may be manipulated to make unauthorized changes to data or execute malicious code.
  • Intellectual property (IP) disputes. Open source AI models may have noncompliant licenses that put the company’s IP in dangerous territory.
  • Bad decisions. While less of a traditional concern for AppSec practitioners, overreliance on AI, especially LLMs with a tendency to hallucinate information and sources, can lead to poor outcomes.

Prompt injection is the biggest vulnerability inherent to generative AI— and the most difficult to close. Prompt injection plus autonomous AI agents is an especially risky combination. LLMs are trained at a deep level to respond to commands. When you tell an LLM to do something it shouldn’t, it’s not shocking that it often complies. Prompt injection is not fought by changing an LLM’s code, but by building unseen-by-the-end-user prompts that command and remind the LLM not to do certain things throughout every user interaction. Unfortunately, these system prompts are not impossible to circumnavigate and are often leaked themselves.

That said, many AI risks can be minimized with a shift-left approach to development. That said, none can be addressed if the models and agents remain in the shadows.

Fighting AI with AI

The good news is that we can build tools that detect AI in applications, which is exactly what Mend.io is doing right now. The files and code of AI models and agents have certain characteristics that can be discovered by other AI models trained for that task. Likewise, these models can also detect the licenses of open source AI models. 

With information on where and what AI models are used, AppSec teams can take steps to ensure the following:

  • Open source license adherence to company policies
  • Proper sanitization of inputs and outputs 
  • Utilization of SOC 2-compliant services 

Closing thoughts

Many governments are looking to tame the Wild West of AI development. The EU AI Act, which prohibits some types of AI applications and restricts many, went into effect on August 1, 2024, and imposes heavy compliance obligations onto the developers of AI products. To stay both in compliance and secure, organizations must take the first step of bringing all shadow AI out into the open.ains but I think we’ll move the needle more than we have in multiple previous years combined.

Increase visibility and control over AI models used in your applications

Recent resources

Application Security — The Complete Guide

Explore our application security complete guide and find key trends, testing methods, best practices, and tools to safeguard your software.

Read more

Introducing the Mend AppSec Platform

The Mend AppSec platform offers customers everything needed to build proactive application security through one solution, at one price.

Read more

ASPM and Modern Application Security

Gartner’s 2024 Hype Cycle for Application Security: ASPM moves from peak to trough.

Read more