Who's Responsible for an Artificial Intelligence's Unlawful Acts?
June 01, 2023 New Jersey Law Journal
By Jonathan Bick | Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers Law Schools.
As artificial intelligence (AI) applications have proliferated on the internet, so has AI-related damage. AIs are not legal persons in many jurisdictions, so despite abundant evidence of direct damage caused by an AI, the AI itself may not be a proper party to a damage recovery action. Consequently, other parties, such as AI programmers, AI developers, and AI distributors, as well as causes of action based on intent, recklessness, indirect perpetration, and abetting, must be considered.
In addition to civil and criminal violations, AI programs may indirectly cause injury to another. Such injury may include a security breach or accidental disclosure of protected health information (PHI) safeguarded by federal laws such as HIPAA, or of personally identifiable information (PII) or non-public information (NPI) protected by the Federal Information Security Act as well as state laws. Furthermore, AI programs may corrupt computer control systems, resulting in lost data and/or malfunctioning machinery. Incidental and consequential damages may include jury awards, litigation costs, penalties, lost revenue, and loss of business and reputation.
AI is normally embodied in a program. Because computer programmers write, modify, and test the code and scripts that allow computer software and applications to function properly, programmers have been found liable for harm caused by programs.
Both traditional software and AI software contain algorithms. An algorithm is a procedure employed for solving a problem or performing a computation: a step-by-step list of instructions specifying actions to be performed by hardware or software routines. The fundamental difference between an AI algorithm and a traditional algorithm is that an AI can change its outputs based on new inputs, while a traditional algorithm will always generate the same output for a given input.
For example, a traditional algorithm for guessing a number within a range in the fewest guesses will typically start with the number in the middle of the range (five, for example, if the range is one to 10). If the guess is too low, the traditional algorithm's second guess will be the middle of the new range (eight, for example, if the new range is six to 10), and so on.
However, an AI algorithm for the same task will typically start with a random number (three, for example, if the range is one to 10). If the guess is too low, the AI algorithm's second guess will be the middle of the new range (seven, for example, if the new range is four to 10), and so on. The AI algorithm will repeat the game many times and eventually learn to start with the middle number of the range every time.
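The distinction can be made concrete with a short sketch. The Python fragment below is illustrative only; the range, the function names, and the trial-and-error "learning" loop are assumptions made for this example rather than a description of any particular AI system. It contrasts a traditional algorithm, whose midpoint-first strategy is fixed in the code before it ever runs, with a toy learner that discovers the same strategy only by replaying the game many times.

```python
import random

LOW, HIGH = 1, 10  # illustrative range from the article's example


def guesses_needed(secret, first_guess):
    """Count how many guesses it takes to find `secret`, starting with
    `first_guess` and bisecting the remaining range thereafter."""
    low, high = LOW, HIGH
    guess, count = first_guess, 1
    while guess != secret:
        if guess < secret:
            low = guess + 1
        else:
            high = guess - 1
        guess = (low + high) // 2
        count += 1
    return count


def traditional_strategy():
    """Traditional algorithm: always open with the midpoint of the range."""
    return (LOW + HIGH) // 2


def learned_strategy(trials=2000):
    """Toy 'learning' approach (an assumption for this sketch): play many
    simulated games with each possible opening guess and keep the opening
    with the lowest average guess count. Over enough repetitions it settles
    on the midpoint, as described above."""
    best_opening, best_avg = None, float("inf")
    for opening in range(LOW, HIGH + 1):
        avg = sum(guesses_needed(random.randint(LOW, HIGH), opening)
                  for _ in range(trials)) / trials
        if avg < best_avg:
            best_opening, best_avg = opening, avg
    return best_opening


if __name__ == "__main__":
    print("Traditional opening guess:", traditional_strategy())  # always 5
    print("Learned opening guess:", learned_strategy())          # typically 5 or 6
```

The legally significant point is visible in the sketch: the traditional strategy is known from the code before execution, while the learned strategy emerges only after repeated runs, which bears directly on what the programmer can be said to have known or intended.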
Traditional algorithm programming provides ready evidence of the programmer's intent. Such intent may be gleaned from the code and the paper trail left behind by the algorithm. In such cases, courts can identify the programmer's manipulative intent and hold the programmer liable for the algorithm's misconduct (see Amanat v. SEC, 269 F. App'x 217 (3d Cir. 2008)). While traditional and AI programming may produce the same algorithm and achieve the same result, the traditional programmer knows the algorithm before the program is executed, while the AI programmer may not. This is an essential factor in determining a programmer's or developer's liability.
An AI software programmer or developer could be liable for AI-related damage in three separate ways. First, on the basis of individual accountability, if the AI was programmed intentionally or recklessly in such a way that it would violate a statute or cause harm to another. Second, through the doctrine of indirect perpetration, which bridges the gap in cases where software developers, acting like puppet masters, violate the law or harm others through third-party actions. Third, an AI software developer could be held liable if he or she "aids, abets or otherwise assists" in the commission of a statutory violation or of harm to another, including by providing the means for its commission.
Ordinary negligence applies when a software developer does not use the degree of care that a reasonably prudent person would have used when developing software. The reasonableness of the defendant's conduct is frequently assessed by comparing or balancing the costs and benefits of the defendant's actions (see United States v. Carroll Towing, 159 F.2d 169 (2d Cir. 1947)).
If it can be determined that there is something a software developer should have done, and that would reasonably have been expected of him by all others involved in the use and distribution of the software, then he can be found liable for negligence and required to pay damages to the plaintiff.
Negligence claims, for example, may be available in situations in which product liability claims are not. See, e.g., Griggs v. BIC, 981 F.2d 1429 (3d Cir. 1992), in which the U.S. Court of Appeals for the Third Circuit found that a design was not defective under product liability law but that a finding of negligence was possible. Similarly, in Invacare v. Sperry, 612 F. Supp. 448 (1984), the court refused to dismiss a negligence claim alleging that a computer seller was negligent for recommending its program and services to the buyer when "it knew, or in the exercise of ordinary care, it should have known, that … the programs and related data processing products were inadequate."
A computer malpractice cause of action is another option. Malpractice is a failure to employ the higher standard of care that a member of a profession should employ. Data Processing Services v. L.H. Smith Oil, 492 N.E.2d 314 (1986), applying Indiana law, is one of the few cases in which a court imposed professional liability on computer programmers. However, attempts to impose a professional malpractice standard on the IT industry and thereby create a higher duty of care have usually been unsuccessful (see, for example, F&M Schaefer v. Electronic Data Systems, 430 F. Supp. 988 (1977)).
Generally, courts have declined to find computer malpractice, reasoning that the fact that an activity is more technically complex does not mean that greater potential liability must attach. There are, however, cases in which a verdict was issued in favor of a plaintiff suing under "computer malpractice." In Diversified Graphics v. Groves, 868 F.2d 293 (8th Cir. 1989), for example, the court found that a computer consulting professional failed to act reasonably in light of its superior knowledge and expertise in the area of computer systems.
The third type of liability is strict liability. Restatement (Second) of Torts § 402A (1965) imposes liability on the seller of any product that is deemed unreasonably dangerous: "Manufacturers and sellers of defective products are held strictly liable (that is, liable without fault) in tort (that is, independent of duties imposed by contract) for physical harms to person or property caused by [a] defect."
Strict liability is usually applied only in extreme cases, where a product defect is obvious. AI designers or AI programmers who may be considered as rendering professional services have a duty limited to exercising the skill and knowledge normally possessed by members of their profession or trade. To hold such parties to the same type of strict products liability, they would have to expressly warrant that there are no defects in their services.
In short, none of the above options is likely to secure the accountability of AI software distributors for statutory violations or for harm to others involving AI. Alternatively, entities that distribute programs have been found liable for harm caused by those programs. These parties may be liable for AI-related legal difficulties.
Software development is similar to other professional services when it comes to errors and omissions coverage. If a claimant can prove that a developer's work caused financial harm, the claimant will likely be compensated for its damages in court.
Normally, to prevail against a software developer, a plaintiff must prove that: the software vendor had a duty to provide functioning software to the user; the software did not perform as promised; the user suffered harm; and the software caused that harm. These criteria are usually applied in matters where there is no contract between the developer and the client.
Once a court has accepted jurisdiction, statutes in that jurisdiction will determine whether the AI is a legal entity for the purpose of litigation and hence liable for a bad act. If the AI is found not to be a legal entity, and thus not sui juris, at least one additional party must be identified in order to allow litigation to proceed. Alternatively, if such parties are not identifiable in a timely manner, a "John and Jane Doe" action might be considered in an effort to identify the unknown bad actors.