AI Poisoning: A Novel Cyber Security Option



 

New Jersey Law Journal                         November 26, 2024

 

By Jonathan Bick. Bick is counsel at Brach Eichler in Roseland and chairman of the firm's Patent, Intellectual Property, and Information Technology group. He is also an adjunct professor at Pace Law School and Rutgers Law School.

 

A novel legal self-help technique to secure artificial intelligence data and programs is known as AI poisoning.  This technique involves modifying the AI algorithm to intentionally produce specific erroneous results.  AI poisoning may be used both to stop third parties from using AI via the Internet and to identify cyber security difficulties. To ameliorate the legal difficulties associated with this technique, appropriate notice content should be included in user terms of use agreements.

 

Self-help is action taken to enforce legal rights without resorting to the legal system. Such action has long been accepted as lawful (see, for example, Right of Conditional Seller To Retake Property Without Judicial Aid, 55 A.L.R. 184 (1991), for a collection of self-help cases).  Self-help is also recognized as an appropriate course of action by the Uniform Commercial Code (see, for example, NJ UCC § 9-609 and Section 2A-525 of the UCC).

 

Both custom and "off the shelf" software have been deemed goods for UCC self-help purposes. In Revlon Group, Inc. v. Logisticon, Inc. (No. 70533, Cal. Super. Ct., Santa Clara Cnty., complaint filed Oct. 22, 1990), for example, a software vendor accessed a client's system to repossess the disputed software. Many states have adopted Article 2A of the UCC, and Section 2A-525 allows software lessors to disable or remove software upon default, on terms similar to UCC Article 9 self-repossession actions.

 

AI is software.  AI software differs from traditional software because a computer, rather than a programmer, writes the algorithm (the element of the software that tells the computer what to do). AI programs can instruct the computer to add unnecessary content to AI algorithms. Such unnecessary content may poison the AI software.

 

AI poisoning allows AI owners to change their Internet content or their AI software so that, if the content is copied without consent, the unauthorized user's computer will generate suboptimal output.  For example, an artist may add pixels to their art before uploading it online so that, if it is scraped into an AI training set, it can cause the resulting model to malfunction. Similarly, an AI software seller may use "poison data" that could damage future iterations of image-generating AI models by rendering some of the outputs valueless.
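By way of illustration, the following sketch shows how a pixel-level change might be applied to an image before posting. It assumes the Pillow and NumPy libraries are available; the perturbation pattern and strength are illustrative placeholders and do not reproduce the method of any particular poisoning tool.

```python
# A minimal sketch, assuming Pillow and NumPy; the noise pattern below is an
# illustrative assumption, not the algorithm of any specific poisoning tool.
import numpy as np
from PIL import Image

def perturb_image(in_path: str, out_path: str, strength: int = 2, seed: int = 42) -> None:
    """Add faint pseudo-random pixel offsets that are hard to see but alter the data."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    # Small +/- offsets per pixel; clipping keeps values in the valid 0-255 range.
    noise = rng.integers(-strength, strength + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)

# Usage (hypothetical file names):
# perturb_image("artwork.png", "artwork_for_posting.png")
```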

 

AI poisoning may take several forms.  The most common form is to change the data used to train an AI, thereby changing the algorithm. This type of poisoning exploits a security vulnerability in generative AI models.  More specifically, generative AI algorithms usually must be trained on vast amounts of data—in this case, images that have been posted on the Internet.

Another example is changing the pixels of a photo or other image in ways that are invisible to the human eye but force a machine-learning AI to misinterpret the image.  This type of poisoning is usually used to help artists protect their Internet postings from unconsented use.

 

The next most common form of poisoning is to require an existing AI algorithm to execute an additional step.  For example, before sending an output, the AI algorithm requires the computer to query for a formula or key. Such formulas or keys are available only to computers that are authorized to use the AI software.
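The following sketch illustrates this gating idea. It assumes the key is supplied through an environment variable and compared against a stored digest; the variable name, the placeholder digest, the degraded-output behavior, and the generic model interface are all illustrative assumptions, not any vendor's actual scheme.

```python
# A minimal sketch of gating AI output behind a key check; the key name, digest,
# and model interface are illustrative assumptions.
import hashlib
import os

# Placeholder digest of the key issued to authorized machines (illustrative value).
AUTHORIZED_KEY_DIGEST = "0" * 64

def licensed_predict(model, inputs):
    """Return real output only if this machine holds the expected key; otherwise degrade."""
    key = os.environ.get("AI_LICENSE_KEY", "")
    if hashlib.sha256(key.encode("utf-8")).hexdigest() != AUTHORIZED_KEY_DIGEST:
        # Unauthorized machine: produce deliberately unusable output.
        return [None for _ in inputs]
    return model.predict(inputs)
```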

 

AI software is usually trained on billions of images and typically contains millions of lines of code, so detection of a modified AI algorithm is extremely difficult.  Additionally, the more poisoned images that are scraped into the model, and the more modified algorithm elements that are in use, the more damage the poisoning technique will cause.

 

Another way to poison an AI and render AI software valueless is to use data to corrupt the AI algorithm.  For example, when as few as fifty (50) poisoned images of dogs were input into a widely used AI program trained on millions of photos of dogs, and the program was then prompted to create images of dogs, the output included dog images with too many legs.  With three hundred (300) poisoned samples, the AI software was manipulated into generating images of dogs that looked like cats.
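A simple version of this data-corruption idea is label flipping, sketched below. It assumes training records are simple (image path, label) pairs; the field names and the count of fifty mirror the example above for illustration only and do not reproduce the cited study's method.

```python
# A minimal label-flip poisoning sketch, assuming (image_path, label) training records.
import random

def flip_labels(records, source="dog", target="cat", count=50, seed=0):
    """Relabel up to `count` source-class records so they train toward the target class."""
    rng = random.Random(seed)
    source_idx = [i for i, (_, label) in enumerate(records) if label == source]
    poisoned = list(records)
    for i in rng.sample(source_idx, min(count, len(source_idx))):
        path, _ = poisoned[i]
        poisoned[i] = (path, target)
    return poisoned

# Usage (hypothetical data):
# training_set = [("dog_001.jpg", "dog"), ("cat_001.jpg", "cat")]
# poisoned_set = flip_labels(training_set, count=50)
```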

 

AI poisoning is a form of the nontraditional legal remedy of self-help.  The potential benefits of AI poisoning self-help, when used without challenge, include low cost and speed: it does not depend on formal invocation of the judicial system and thus provides the rapid resolution that legal difficulties sometimes require.

 

In addition to disabling AI software, poisoning has been used to identify users who are using the AI software without consent.  Such identification may be used as evidence in non-self-help legal actions.
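One common variant of this identification approach is to embed a unique "canary" marker in protected content and later check whether a suspect output reproduces it. The sketch below illustrates the idea; the marker format and the simple substring check are illustrative assumptions, not a forensic standard.

```python
# A minimal canary-marker sketch for detecting unconsented use; the marker format
# and check are illustrative assumptions.
import uuid

def make_canary() -> str:
    """Generate a unique marker to embed in protected content or its metadata."""
    return f"canary-{uuid.uuid4().hex}"

def canary_appears(canary: str, suspect_output: str) -> bool:
    """True if a suspect AI output reproduces the embedded marker, suggesting copying."""
    return canary in suspect_output
```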

 

Self-help can result in legal difficulties due to unintended results.  Considering the extent and rapidity of the damage that AI poisoning can cause, contractual self-help remedies should be considered. This would remove the element of surprise from software repossession and/or disabling and limit consequential damages.

 

While neither the UCC nor the traditional self-help doctrine allows a debtor to recover the consequential damages of the creditor's repossession (see 18 U.S.C.A. § 1030(a)(4) (West Supp. 1991)), appropriate notice may ameliorate or eliminate legal difficulties that might arise from AI poisoning. Such notices are recommended.

 

Additionally, since the legal validity of contractual self-help remedies is being expanded by statute and the courts, when content that will be protected by AI poisoning techniques is posted on the Internet, legal notices are recommended to eliminate or ameliorate legal difficulties. Such legal notices may be implemented by adding an AI poisoning clause to an Internet site's terms of use agreement.

 

Contractual self-help is designed by contracting parties to control or prevent contract-breach transactions. For example, a clause in the AI software agreement or in a terms of use agreement might state:  "In the event of unauthorized use of the AI software, the distributor of said software may resort to self-help to recover or disable said software."

 

Courts would rather enforce judicial remedies, but to avoid unfair results they are becoming more amenable to self-help remedies. As a result, those who exercise self-help remedies and end up in litigation are increasingly likely to find favorable court treatment.