How Do I Protect My AI Model?
Building, pre-training, and fine-tuning an AI model is no small task, and you’ll naturally want to protect your efforts from theft, denial-of-service attacks, malicious use, and other threats. In this blog we’ll go over the basics of keeping AI models safe from both legal trouble and bad actors.
Can I copyright an AI model?
So long as the code and techniques that make up an AI model are human-written, they can be copyrighted, patented, and protected by intellectual property law just like any other software. If your model is built on top of existing models, make sure you comply with the open source licenses that govern those models; whether you can copyright your derivative model will depend on their terms.
Generative AI output, that is, the text, images, or sounds your AI model produces, has so far been held by US courts to lack human authorship and therefore to be ineligible for copyright protection. This means that if you use one AI model to train another, or to generate code or functionality for it, the derivative model will probably not be fully protected by copyright law. When a work combines human and AI effort, only the human-authored portions are eligible for copyright protection. We’re not lawyers, but because AI prompts themselves require a degree of human intelligence and creativity, there is a legal argument that prompt-driven AI outputs should be copyrightable; it’s still early days, though, and that argument hasn’t been tested in the courts.
What are the risks of AI models?
AI models bring risk to an organization from many angles. From an attacker’s point of view, there is much to be gained from infiltrating or attacking AI models. An attacker can hold the model ransom; siphon off underlying compute resources to mine cryptocurrency; or steal trade secrets, sensitive personal data, or the model itself. The many components that make up an AI system widen the attack surface, and some models are insecure by design, making them difficult to protect sufficiently.
AI models also introduce legal risks. If your models are biased against certain groups, leak sensitive information, or perform poorly, your organization may find itself in court. As previously mentioned, the underlying components of your AI model – including pre-trained models, open source libraries, and training data sets – can also land your organization in hot water if you aren’t carefully considering the licenses and copyright restrictions of each ingredient.
You can read about the top threats to large language models (LLMs) here, many of which are applicable to AI models broadly.
How do I make my AI more secure?
AI security is a broad topic and each kind of model has its own risks, but many of the secure design concepts that apply to all applications apply to AI models just the same. Here are some things you can do to make your AI more secure:
- Sanitize and validate inputs and outputs; this is especially important for LLMs
- Set limits on the number of steps that can be taken or amount of resources that can be used in processing a request
- Monitor your AI’s behavior as well as the network behavior around it
- Limit access and permissions so that the people and components interacting with your AI model have only what they need to do their jobs
- Regularly engage in threat modeling and penetration tests to harden your AI model’s security
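A couple of the checklist items above, sanitizing inputs and capping the work done per request, can be sketched in code. This is a minimal illustration, not a complete defense: the character limit, step budget, and blocklist pattern are all hypothetical values you would tune for your own model, and `model_step` stands in for whatever inference or agent loop you actually run.

```python
import re

MAX_PROMPT_CHARS = 4000  # hypothetical limit; tune to your model's context window
MAX_STEPS = 25           # cap on iterations (e.g. agent/tool-call loops) per request
BLOCKED_PATTERN = re.compile(r"(?i)ignore (all )?previous instructions")

def validate_prompt(prompt: str) -> str:
    """Reject oversized or obviously malicious prompts before they reach the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt too long")
    if BLOCKED_PATTERN.search(prompt):
        raise ValueError("prompt contains a known injection phrase")
    return prompt.strip()

def run_with_step_limit(model_step, prompt: str):
    """Drive a caller-supplied model_step(state) -> (state, done) loop,
    aborting once the per-request step budget is exhausted."""
    state = validate_prompt(prompt)
    for _ in range(MAX_STEPS):
        state, done = model_step(state)
        if done:
            return state
    raise RuntimeError("request exceeded step budget")
```

Pattern-matching on prompts only catches known-bad phrasings; it should complement, not replace, output filtering and permission limits.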
How do you store an AI model securely?
AI models are usually very large and often span multiple servers. If you use cloud storage, choose a provider that takes security seriously. Here are some other considerations for secure AI storage:
- Encryption – Encrypt your AI models and training data so that stolen copies are unreadable and tampering can be detected.
- Access Controls – Limit access to the model to only those who need that access to do their job.
- Hardening hardware – Secure the physical location that holds your AI models, along with the hardware and firmware of the servers themselves.
- Protecting AI data in transit – Use secure communication protocols such as HTTPS/TLS and VPNs when AI models must be accessed over a network.
- Staying up-to-date – Diligently manage any software or systems used to access or store the AI so they always have the latest security patches.
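One concrete piece of the tamper-protection story above is verifying a model file’s integrity before loading it. Here is a minimal sketch using Python’s standard library: it pins a SHA-256 digest for the weights file and refuses to load anything that doesn’t match. The function names are illustrative, and a real deployment would also store the pinned digest somewhere an attacker who can modify the weights cannot also modify.

```python
import hashlib
import hmac

def sha256_of_file(path: str) -> str:
    """Stream the file in chunks so multi-gigabyte weights needn't fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_hex: str) -> None:
    """Raise if the file's hash doesn't match the pinned value.
    compare_digest avoids leaking information through timing."""
    actual = sha256_of_file(path)
    if not hmac.compare_digest(actual, expected_hex):
        raise RuntimeError(f"model file {path} failed integrity check")
```

Hashing proves integrity, not confidentiality; it pairs with, rather than replaces, the encryption-at-rest item above.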
Like any valuable asset, your AI models need protecting. Security must be considered early, when designing and building, and throughout the development process for the best chance at success.