Software Law as a Guide to Artificial Intelligence Law

Published on April 16, 2025 at 11:00 AM

By Jonathan Bick

Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers law schools.

Artificial intelligence (AI) is computer software. AI computer software differs from traditional computer software in a single significant respect, namely how the algorithm is created (i.e., how the series of steps that a computer executes to produce a desired output is prepared). More precisely, traditional computer software algorithms are written by programmers, while AI computer software algorithms are written by computers. Because of this similarity between traditional computer software and AI computer software, existing computer software law may be a guide to future AI law.

Computer software law considers legal principles associated with the creation, distribution and use of computer software. Computer software law has addressed the nature of software, associated liability for harm and related intellectual property.

Consequently, computer software law may act as a guide for resolving AI computer software legal difficulties. For example, determining the nature of AI software is required for proper application of the Uniform Commercial Code (UCC) to AI software contracts. Liability options are needed because AI software has been known to do harm. Intellectual property options must be considered because an AI is not a legal entity and thus may not secure copyrights or patents.

First, consider using traditional computer software law to determine if AI computer software is a good or a service for contract purposes. Computer software law has found that computer software may be treated as either a good or a service.

Initially, courts treated software as a good because of the tangible nature of the software’s transfer media (disk or CD-ROM) (Graham Packaging v. Commonwealth of Pennsylvania, 882 A.2d 1076 (2005)). Subsequently, some courts have treated software as a service. For example, the court in Wharton Management Group v. Sigma Consultants, 582 A.2d 936 (1990), found that the tangible end product (of software) is only “involved incidentally in the transaction,” rather the software is the knowledge and skill of the programmer.

Accordingly, existing precedent may be used as a basis for arguing that AI software is either a good or a service. In turn, characterizing AI software as a good or a service will allow or deny the application of the UCC.

UCC Section 2-102 indicates that the UCC applies only to transactions involving goods. “Goods” are defined by UCC Section 2-105(1) as “all things (including specially manufactured goods) which are movable at the time of identification to the contract for sale.”

Being governed by the UCC can be advantageous if a party desires implied warranties running to the buyer (Beck Steel v. American Stair, 62 F.3d 396 (1995)). Alternatively, a party that does not want a transaction to include an implied warranty would argue that AI software is a service governed by the common law of the state rather than by the UCC.

The status of AI software also impacts the applicability of regulations to government contractors under the Federal Acquisition Regulation (FAR). For example, the Service Contract Act, 41 U.S.C. Section 6701, imposes additional obligations on government service contractors, and 41 U.S.C. Section 6703 specifies required contract terms addressing minimum wage, fringe benefits, working conditions, and pay rates. Thus, whether AI software is considered a good or a service has substantial implications for government contractors.

Next, consider using traditional computer software law to determine AI computer software tort and criminal liability. As noted above, programmers write the algorithms for traditional software and computers write the algorithms for AI software. If traditional software causes damage, the programmer, who is a legal person, is likely to be a party to the action because the programmer wrote the algorithm that most likely caused the traditional software to do harm (Amana v. SEC, 269 F. App’x 217 (2008)).

Since computers write the algorithms for AI software, and no legal person (or entity) is involved, even with abundant evidence of direct damage by AI software it is unlikely that any party directly involved in the AI software will be a party to a damage recovery action.

However, as with traditional software, other parties who were not directly liable may be parties to a damage recovery action. In the case of AI software, the parties indirectly involved in the harm caused by AI software include AI programmers, AI developers and AI distributors. The causes of action against these indirectly responsible parties may be based on intent, recklessness, indirect perpetration and abetting.

AI software programmers or developers could be liable for AI software related damages in the same ways liability may arise for traditional software related damage. First, liability may arise on the basis of individual accountability in the event that the software was programmed intentionally or recklessly in such a way that it would violate a statute or cause harm to another.

Second, a software developer (either traditional or AI) could be liable through the doctrine of indirect perpetration. This would bridge the gap in cases where software developers, acting like puppet masters, perpetrate violations of law or harm others through third-party actions.

Third, either a traditional software developer or an AI software developer could be held liable if he or she “aids, abets or otherwise assists” in the commission of a statutory violation or of harm to another—including providing the means for its commission.

Ordinary negligence applies when a software developer does not use the degree of care that a reasonably prudent person would have used when developing software. As found by the court in United States v. Carroll Towing, 159 F.2d 169 (1947), the reasonableness of the defendant’s conduct is frequently understood as comparing or balancing the costs and benefits of a defendant’s actions.

If it can be determined that there is something a software developer should have done, and that it would reasonably have been anticipated by any party with ordinary knowledge of the product who was involved in the use and distribution of the software, then the developer can be found liable for negligence and required to pay damages to the plaintiff.

Negligence claims, for instance, may be available in situations where product liability claims are not. Consider the court’s finding in Griggs v. BIC, 981 F.2d 1429 (1992).

In addition to civil and criminal violations, AI software, like traditional software, may indirectly cause injury to another, giving rise to liability. Such injury may include a security breach or accidental disclosure of protected health information (PHI) protected by federal laws like HIPAA, personally identifiable information (PII), or non-public information (NPI) protected by the Federal Information Security Modernization Act, as well as by state laws. Furthermore, AI software, like traditional software, may corrupt computer control systems, resulting in lost data and/or malfunctioning machinery. Incidental and consequential damages may include jury awards, litigation costs, penalties, lost revenue, and loss of business and reputation.

Finally, consider using traditional computer software law to determine AI computer software intellectual property ownership. Software law includes decades of statutes and case law addressing intellectual property rights of traditional software developers through patent and copyright laws.

Existing computer software law has addressed most of the intellectual property issues related to AI computer software. More specifically, computer software law has resolved legal difficulties related to intellectual property ownership, infringement, rights of use, and prosecution of registrations. AI computer software intellectual property questions—such as the use of copyrighted content in training data and whether users should be able to prompt these tools with direct references to other creators’ copyrighted and trademarked works by name without their permission—may be properly addressed using existing statutes and case law (Andersen v. Stability AI, 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024)).