

 

IMPROVING SOLUTIONS TO AI-RELATED DIFFICULTIES

 

By Jonathan Bick, Adjunct Professor of Law, Rutgers School of Law, and Chairman of the Intellectual Property Department, Brach Eichler LLC. The author would like to thank Nicole Ryu for excellent research and editing assistance.

 

 

 

 


 

I. INTRODUCTION

Artificial Intelligence1 (AI) permeates every aspect of people's lives. The Internet has created a plethora of data.2 Since 2022, the widespread use of ChatGPT3 and other artificial intelligence applications requiring large amounts of data4 has proliferated

""The capacity of computers or other machines to exhibit or simulate intelligent behavior; the field of study concerned with this. In later use also: software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data.... Abbreviated AI." Artificial Intelligence, OXFORD ENGLISH DICTIONARY, https-J/www.oed.corn/dictionary/artificial-intelligence_n?tl—true (last visited Feb. 21, 2024).

2 An Internet data source may be a database, a flat file, live measurements from physical devices, scraped web data, or any of the myriad static and streaming data services which abound across the Internet. Consequently, the Internet has enabled a multitude of data sources. For example, Internet data has been created by Internet of Things (IoT) devices, social media, health services, cybersecurity systems, businesses, and smartphones, among other sources.

3 ChatGPT (Chat Generative Pre-trained Transformer), released in 2022, made artificial intelligence (AI) mainstream. This software allowed the public to use AI to write documents, answer questions, debug code, translate languages, and summarize text, resulting in the integration of AI into a wide range of tasks. See Introducing ChatGPT, OPENAI (Nov. 30, 2022), https://openai.com/blog/chatgpt.

4 See Joey Li et al., Methods and Applications for Artificial Intelligence, Big Data, Internet of Things, and Blockchain in Smart Energy Management, 11 ENERGY & AI, Jan. 2023, at 1 ("Training artificial intelligence models requires immense volumes of data.").

 

using Internet data,5 and AI-related difficulties6 have thrived

5 For example, consider the UC Irvine Machine Learning Repository, which currently maintains 664 datasets as a service to the machine learning community, including, but not limited to, the Dry Bean Dataset, which contains high-resolution images of 13,611 grains of 7 different registered dry bean varieties; the National Poll on Healthy Aging (NPHA), a subset of the NPHA dataset filtered down to various formats; and the Infrared Thermography Temperature database, which contains infrared thermography temperature information. See Ibrahim Yazici, Ibraheem Shayea & Jafri Din, A Survey of Applications of Artificial Intelligence and Machine Learning in Future Mobile Networks-Enabled Systems, 44 ENG'G SCI. & TECH., Aug. 2023,

https://www.sciencedirect.com/science/article/pii/S2215098623001337?via%3Dihub.

6 AI-related difficulties are multi-faceted. From a technical perspective, for instance, an AI system can "hallucinate" and generate convincing, but false, information, resulting in sub-optimal outcomes. Examples include: Northpointe's COMPAS AI system, adapted to predicting criminal reoffending, which incorrectly assigned black defendants a higher risk of reoffending and white defendants a lower risk of reoffending, creating an unfair systemic bias in criminal justice decisions; Amazon's AI system for screening resumes, which reduced the desirability score of resumes from female applicants that included the word "women's," a result of females being underrepresented in the training data; and Knight Capital's 2012 AI trading system, whose flawed algorithm, combined with a lack of oversight, resulted in the purchase of 150 stocks at a cost of around $7 billion within the first hour of trading, costing the company more than $400 million.

From a legal perspective, for instance, an AI system might be used in a manner which results in statutory violations due to data leaks, misrepresented information, or the use of AI tools without permission or for unintended purposes. Examples include copyright infringement, data privacy, and attribution rights matters. For instance, Getty Images is suing Stability AI (Getty Images (US), Inc. v. Stability AI, Inc., 1:23-cv-00135-JLH) for allegedly using its images without permission to train an image generation tool. Additionally, a class-action lawsuit has been filed against OpenAI and Microsoft (Julian Sancton et al. v. OpenAI, Inc. et al., 1:2023cv10211), alleging that their AI products were developed using scraped personal data.

From a professional responsibilities perspective, for instance, an AI system may generate content whose use results in the preparation and submission to courts of documents containing references to fake case law; the failure to check such content before or after submission may violate the New Jersey Rules of Professional Conduct (RPC). For example, lawyers have a duty to be accurate and truthful, RPC 3.1, and more specifically may not "assert or controvert an issue . . . unless the lawyer knows or reasonably believes that there is a basis in law and fact for doing so that is not frivolous";

 


 

simultaneously.7 AI-related difficulties have legal, technological, and business solutions, but those solutions would likely benefit from the application of existing Internet protocols,8 which may be adapted to automatically identify AI applications.

While AI-related difficulties may be ameliorated by

consequently, a lawyer has failed an ethical duty unless AI-generated content is checked and verified.

7 Among the most common: AI facial recognition technology, when used by government agencies to identify suspects and monitor crowds, has been biased against certain groups of people, such as women and people of color, and has resulted in false arrests; AI autonomous vehicles have caused deaths (for example, in 2018 an Uber self-driving car struck and killed a pedestrian in Arizona); AI predictive policing has led to over-policing of marginalized communities; and AI hiring and recruiting tools can be biased against certain groups of people, such as women and people of color, as evidenced by litigation against firms for using AI systems that discriminate against job applicants. See also Paulius Cerka et al., Liability for Damages Caused by Artificial Intelligence, 31 COMPUT. L. & SEC. REV. 376, 380 (2015), https://is.muni.cz/el/law/podzim2017/MV735K/um/ai/Cerka_Grigiene_Sirbikyte_Liability_for_Damages_caused_by_AI.pdf (last visited Jan. 24, 2024).

8 Internet protocols are a set of rules governing the format of data sent over the Internet or another network.

 

applying existing legal,9 technological,10 and business solutions,11 they can rarely be prevented by them.12 This shortcoming of existing solutions to AI-related difficulties may be surmounted by incorporating Internet protocol elements.13 Internet protocol

9 For example, consider the use of litigation to address copyright infringement, as discussed in Dan Walsh, The Legal Issues Presented by Generative AI, MIT SLOAN (Aug. 28, 2023), https://mitsloan.mit.edu/ideas-made-to-matter/legal-issues-presented-generative-ai ("[S]everal visual artists filed a class-action lawsuit [for alleged AI copyright infringement] . . . . In the U.S., the two main legal systems for regulating the type of work generated by AI are copyrights and patents.").

10 For example, one technical solution is the use of an AI software detector. Existing AI software detection methods suffer from several limitations, hindering their effectiveness at identifying AI software in a timely and accurate manner. These shortcomings include: lack of adaptability, because AI content changes too quickly, resulting in false negatives or missed detections; high false positive rates, usually due to requirements for identifying nearly all AI software; reliance on manual intervention, because existing AI detection systems cannot properly interpret new words or slang; the use of sampling, because analyzing every item of content is inefficient; and dependency on limited contextualization, because existing AI detection approaches do not consider the context surrounding a detection event. See generally Ahmed M. Elkhatat, Khaled Elsaid & Saeed Almeer, Evaluating the Efficacy of AI Content Detection Tools in Differentiating Between Human and AI-Generated Text, 19 INT'L J. EDUC. INTEGRITY, no. 17, 2023.

11 For example, consider that "[c]ompanies in the insurance industry are adopting various approaches to manage and mitigate AI risks. Some rely on third-party providers, trusting that these vendors have properly vetted their AI solutions." Jen Dalton, Managing AI Risks in the Insurance Industry, ALM PROP. CAS. 360 (Nov. 21, 2023), https://www.propertycasualty360.com/2023/11/21/managing-ai-risks-in-the-insurance-industry/.

12 AI software detectors currently rely on the fact that AI content generators like ChatGPT are trained on massive datasets of text and code. Consequently, AI content generators produce specific word patterns (such as 100% proper English) and exhibit low variance in sentence length and word choice. Existing AI software detectors work by copying content generated by a source and comparing it with attributes indicative of AI authorship, thereby assessing the probability that the content is AI-generated. However, this method requires interaction with a potential AI source, and hence cannot prevent AI-related difficulties.
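The attribute-comparison approach described in note 12 can be illustrated with a minimal sketch. The Python fragment below is hypothetical: it scores text by the variance of its sentence lengths, one of the weak signals mentioned above, and the scaling constant of 50 is an assumption chosen for illustration, not a value drawn from any actual detector.

    import statistics

    def ai_likelihood(text: str) -> float:
        """Toy heuristic: treat low variance in sentence length as one weak
        indicator of AI authorship. All thresholds are illustrative only."""
        # Split on sentence-ending punctuation in a deliberately naive way.
        for mark in ("!", "?"):
            text = text.replace(mark, ".")
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        if len(sentences) < 2:
            return 0.0  # too little text to judge
        lengths = [len(s.split()) for s in sentences]
        variance = statistics.pvariance(lengths)
        # Map low variance to a high score on a 0-to-1 scale.
        return max(0.0, min(1.0, 1.0 - variance / 50.0))

Real detectors combine many such attributes; even then, as the note observes, they can only assess content after interacting with a potential AI source, which is why they cannot prevent AI-related difficulties.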

13 Internet protocols such as SSH, SMB, TCP/IP, HTTP, HTTPS, and others are typically developed by standards organizations, such as the Internet Engineering Task Force (IETF) or the International Organization for Standardization (ISO). See generally Henning Schulzrinne & Jonathan Rosenberg, Internet Telephony: Architecture and Protocols -- An IETF Perspective, 31 COMPUT. NETWORKS 237 (1999).

 


 

elements can help automatically identify AI applications, allowing Internet users the option to deny access14 to an AI system or to refrain from using AI systems on a timely basis, thereby preventing unauthorized data transfer to an AI system or inadvertent use of an AI system, respectively.

Most existing legal solutions adapted to resolving AI-related difficulties focus on identifying a party (other than the AI) to be held liable for an AI's actions, because an AI is not a legal entity15 and hence is not subject to litigation. Existing technological solutions are primarily designed to resolve AI-related difficulties by identifying actionable software in order to limit AI software's access to data and Internet sites. More specifically, when AI software causes a difficulty, such as using Internet content without consent, the software (technological) solution is simply to program an Internet site to prohibit access by the objectionable AI software. Existing business solutions are generally created to resolve AI-related difficulties by identifying adverse outcomes and providing for compensation.16

Consequently, each of the existing solutions17 to AI-related difficulties is likely to benefit from knowing information related to the AI causing the difficulty. More specifically, knowing which AI

14 Blocking an IP address, for example, is a method of limiting access to certain websites or services, thus protecting Internet sites from unwanted traffic and thereby preventing ML systems from harvesting data. Blocking IP addresses can also be used to limit access to AI systems, thereby ensuring an AI system will not be used. Several methods for blocking an IP address exist; both hardware and software may be adapted to prevent access from certain IP addresses. Perhaps the most common way is for web hosting providers to provide options for their users to block specific IPs in their control panel. Alternatively, Internet content delivery networks and bot management software allow users to manage IP blocklists and set rules based on specific criteria.
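A minimal sketch of the blocking technique described in this note follows, in Python. The network ranges are reserved documentation addresses used as placeholders, not the addresses of any actual AI system.

    from ipaddress import ip_address, ip_network

    # Hypothetical blocklist of networks believed to host AI crawlers.
    # 203.0.113.0/24 and 198.51.100.0/24 are reserved documentation ranges.
    BLOCKED_NETWORKS = [
        ip_network("203.0.113.0/24"),
        ip_network("198.51.100.0/24"),
    ]

    def allow_request(client_ip: str) -> bool:
        """Return False when the client address falls in a blocked network."""
        addr = ip_address(client_ip)
        return not any(addr in net for net in BLOCKED_NETWORKS)

    # Example: an Internet site refusing service to a blocked address.
    if not allow_request("203.0.113.7"):
        print("403 Forbidden: automated AI access is not permitted")

In practice, the same check is usually applied by a web server, content delivery network, or bot management layer rather than by application code.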

15 A legal entity is "[a] lawful or legally standing association, corporation, partnership, proprietorship, trust, or individual. Has legal capacity to (1) enter into agreements or contracts, (2) assume obligations, (3) incur and pay debts, (4) sue and be sued in its own right, and (5) to be accountable for illegal activities." Legal Entity, BLACK'S LAW DICTIONARY (2d ed. 1910).

16 Such as insurance, which pays harmed parties from assets collected from third parties.

17 Legal, technological, and business.

 

caused the difficulty will likely assist persons in three instances: first, those who are pursuing legal solutions, such as damage claims against AI-related developers or users; second, those implementing technological solutions, such as software amendments to Internet sites to prevent AI software access; and third, those who are employing business solutions, such as securing insurance policies.

Among the most promising options for incorporating Internet protocol elements to provide AI identification is requiring AI and Machine Learning (ML) systems to be readily identifiable. For example, both AI and ML systems might be made readily identifiable by requiring them either to register or to use specified domain names.18 These options may be implemented via voluntarily

18 Domain names are currently used to identify and prevent Internet access. See Kunsan Zhang et al., Detection of Malicious Domain Name Based on DNS Data Analysis, J. PHYSICS: CONF. SERIES, 2020 (finding that malicious domain name detection based on DNS data analysis may proceed either through active domain name data analysis or through passive domain name data analysis; currently the most common Internet site access denial system is the use of a blacklist, because DNS servers are well-suited to blocking domain names); see generally Simon Bell & Peter Komisarczuk, An Analysis of Phishing Blacklists: Google Safe Browsing, OpenPhish, and PhishTank, AUSTL. COMPUT. SCI. WK. (Feb. 4, 2020), https://dl.acm.org/doi/10.1145/3373017.3373020.
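To illustrate how the domain-name option described in the text could provide automatic notice of an AI system, consider the following Python sketch. The ".ai-agent.example" suffix and the blacklist entries are hypothetical assumptions; no such registration convention currently exists.

    # Hypothetical convention: registered AI systems use a dedicated suffix.
    AI_DOMAIN_SUFFIX = ".ai-agent.example"

    # Illustrative blacklist of the kind a DNS server might enforce.
    DOMAIN_BLACKLIST = {"known-scraper.example", "malicious-bot.example"}

    def classify_domain(domain: str) -> str:
        """Classify a requesting domain as blocked, AI-identified, or ordinary."""
        domain = domain.lower().rstrip(".")
        if domain in DOMAIN_BLACKLIST:
            return "deny"           # blacklisted outright
        if domain.endswith(AI_DOMAIN_SUFFIX):
            return "ai-identified"  # automatic notice that the caller is an AI
        return "allow"

    print(classify_domain("translator.ai-agent.example"))  # prints: ai-identified

Because the suffix check runs before any content is exchanged, a site (or its DNS resolver) could decline or flag AI traffic automatically, which is the preventive effect the text attributes to protocol-level identification.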

 


self-imposed industry standards,19 legislation,20 or regulations.21

The implementation of Internet protocol AI identification would enhance technological, business, and legal solutions to AI difficulties, and possibly help prevent those difficulties from occurring in the first place.

II. BACKGROUND

Many AI and ML related difficulties arise from the data used to create them. The output of an AI is directly related to the quality

19 Also referred to as self-regulation, self-imposed industry standards "refers to the steps companies[, industries and/or business associations] take to preempt or supplement governmental rules and guidelines. For an individual company, self-regulation ranges from self-monitoring for regulatory violations to proactive corporate social responsibility (CSR) initiatives." Several industries have seen success with self-imposed industry standards. See Michael A. Cusumano, Annabelle Gawer & David B. Yoffie, Social Media Companies Should Self-Regulate. Now., HARV. BUS. REV. (Jan. 15, 2021), https://hbr.org/2021/01/social-media-companies-should-self-regulate-now. See generally Why Big Alcohol Can't Police Itself, MARIN INST. (Sept. 2008), https://alcoholjustice.org/images/reports/08mi1219_discus_10.pdf.

20 This option integrates Internet protocol. More particularly, by requiring AI systems either to register or to use specified domain names, and thereby embedding the AI nature of AI systems into the Internet protocol, all Internet systems and users will have access to automatic notice of an AI system. Existing proposed legislation offered to identify AI relies on techniques other than Internet protocol options; for example, consider the AI Labeling Act of 2023, which implements AI identification by requiring "clear and conspicuous notice, as appropriate for the medium of the content, that identifies the content as AI-generated content." H.R. 6466, 118th Cong. (2023).

21 Regulations are policies and programs formulated by governmental agencies to impose controls and restrictions on certain specific activities or behavior in order to regulate societal risks. See generally Helen Stout & Martin de Jong, Exploring the Impact of Government Regulation on Technological Transitions; a Historical Perspective on Innovation in the Dutch Network-Based Industries, 9 MDPI L., 2020, at 1.

 


and quantity of data used22 by the ML23 system, which provides the AI with information.24

Generally, ML must access collections of various types of data stored in a digital format to create algorithms.25 Using datasets to create algorithms is the key component of any ML project. Datasets primarily consist of images, text, audio, video, numerical data points, and the like, for solving various AI software challenges, such as identifying objects.

Since ML generally does not discriminate as to which data it ingests,26 this data likely includes sensitive personal information. Said personal information may comprise data obtained through unauthorized access, data leakage, or erroneous data. The access and use of this sensitive data can result in privacy breaches, leading to potential legal liabilities and reputational damage. Additionally, the indiscriminate collection of data by an ML may result in an AI system's use of copyrighted or patented material in its output, leading to infringement claims.27

22 Artificial intelligence capabilities often rely on machine learning. Machine learning describes the capacity of systems to learn from problem-specific training data to automate the process of analytical model building and solve associated tasks. Christian Janiesch, Patrick Zschech & Kai Heinrich, Machine Learning and Deep Learning, 31 ELECTRON. MKTS. 685 (2021).

23 Machine Learning means that a computer program's performance improves with experience with respect to some class of tasks and performance measures. M. I. Jordan & T. M. Mitchell, Machine Learning: Trends, Perspectives, and Prospects, 349 SCIENCE 255 (2015).

24 Iqbal Sarker, Machine Learning: Algorithms, Real-World Applications and Research Directions, 2 SN COMPUT. SCI., 2021, at 1 ("In general, the effectiveness and the efficiency of a machine learning solution depend on the nature and characteristics of data and the performance of the learning algorithms.").

25 Algorithms are step-by-step instructions telling a computer what to do. More precisely, "[i]n computer science terms, an algorithm is an abstract, formalized description of a computational procedure." Paul Dourish, Algorithms and Their Others: Algorithmic Culture in Context, 3 BIG DATA & SOC'Y, July-Dec. 2016, at 3.

26 If low-quality data is used to build an ML model, then the AI using said data will deliver a similar result. See Garbage in, Garbage out (GIGO), MERRIAM-WEBSTER DICTIONARY, https://www.merriam-webster.com/dictionary/GIGO (last visited Jan. 31, 2024).

27 "The primary legal difficulty associated with AI training is the acquisition and use of training data without the consent of the owner of said training data. The

 


AI may also cause difficulties due to bad data.28 AI algorithms can inadvertently perpetuate and even amplify existing biases present in their training data.29 Discrimination based on race, gender, age, or other protected characteristics can occur, leading to potential lawsuits, regulatory fines, and reputational harm.

There are also AI-ML malfunctions to consider. AI-ML systems malfunction for any number of reasons, such as lack of proper maintenance, design flaws, or human error. Most concerning

case of Getty Images v. Stability AI (U.S. District Court, District of Delaware, Case 1:23-cv-00135-UNA, filed 02/03/23) exemplifies the legal difficulties associated with AI training. Getty claims that Stability AI copied more than 12 million photographs from Getty Images' collection, along with the associated captions and metadata, without permission from Getty Images and used the copied material in part to train its AI. More specifically, Getty Images makes hundreds of millions of visual assets available to customers via internet sites, such as gettyimages.com and istock.com, and Stability AI used the copied images to train its AI. Copying images without consent has resulted in several types of legal difficulties. These legal difficulties include unlawful acts pursuant to the Copyright Act of 1976, 17 U.S.C. Section 101 et seq., the Lanham Act, 15 U.S.C. Section 1051 et seq., as well as state trademark and unfair competition laws." Jonathan Bick, Copyrighted Content and the Legal Difficulties of Training AI, N.J. L.J. (July 11, 2023), https://www.law.com/njlawjournal/2023/07/11/copyrighted-content-and-the-legal-difficulties-of-training-ai/.

28 A prominent issue in artificial intelligence (AI) and machine learning is bad data. "Data can also be bad due to manipulation (for example, hacked data) or when it is used for bad intent (for example, to monitor or control)." See Tammy McCausland, The Bad Data Problem, 64 RSCH.-TECH. MGMT. 68 (2021).

29 "AI training starts with data and processes that data as follows: First, an AI model is given a set of training data and asked to make decisions based on that information. The data allows the AI to make correct and incorrect output. Each time the AI makes and delivers an output, it is told if the output is correct or not. The AI then repeats the process making adjustments to the data processing steps that help the AI become more accurate by making increasingly better algorithms (ordered data processing steps resulting in correct output). Once the initial training is completed, the second step of AI training is to validate the algorithm. In this phase, the AI will validate the assumption that the algorithm created by the Al yielding acceptably correct output when using a new set of test data. If the output is accepted, then the AI is finally tested using live data from real world sources. In the event that the output from either the new set of test data or the real world data yields unacceptable output then the training begins again with the first step." See Bick, supra note 28.

 


are malfunctions due to human error. One such example is when an AI is used to serve a purpose beyond that envisaged by the original designers, as when an AI is applied in an improper context or after the AI-ML is improperly updated.30 Such AI system malfunctions, errors, or failures may result in financial losses, property damage, or bodily injury.

AI privacy concerns and data or algorithm errors may also have detrimental effects on AI-based content recommendation and product recommendation. Basically, if AI is interpreting bad data, it is likely to curate a bad, improperly tailored experience for the Internet user. Additionally, AI has the ability to gain insight into consumers' personal information, which may exacerbate privacy concerns.31 AI-generated content, such as advertisements or recommendations, may not only provide unwanted or poorly targeted advertising but may also provide completely false or misrepresentative advertising if the information provided by the ML to the AI is erroneous.

In sum, alleged AI-related difficulties may cause privacy violations,32 intellectual property infringement,33 personal injuries,34

30 Sasanka Sekhar Chanda & Debarag Narayan Banerjee, Omission and Commission Errors Underlying AI Failures, AI & SOC'Y (2022).

31 Dhruv Grewal et al., Artificial Intelligence: The Light and the Darkness, 136 J. BUS. RSCH. 229 (2021).

32 See, e.g., In re Clearview AI, Inc. Consumer Privacy Litig., 2022 U.S. Dist. LEXIS 131389 (N.D. Ill. July 25, 2022) (wherein Clearview's AI collected billions of photographs of facial images without consent by scraping social media sites for law enforcement entities); see also Lopez v. Apple, Inc., 519 F. Supp. 3d 672 (N.D. Cal. 2021) (finding that the defendant intercepted private discussions).

33 See, e.g., Doe v. Github, Inc., 672 F. Supp. 3d 837 (N.D. Cal. 2023) (holding that an AI's use of copyrighted content without consent constitutes infringement).

34 See, e.g., In re Marriott Int'l, Inc., 440 F. Supp. 3d 447 (D. Md. 2020) (holding that AI can use a data breach to cause personal injury).

 


contract breaches,35 as well as crimes.36 AI algorithmic bias37 results in discriminatory consumer outcomes,38 poses professional responsibility and ethical challenges,39 and possibly constitutes unauthorized practice of law.40

In addition to civil and criminal violations, AI programs may indirectly cause injury to others. Such injury may include a security breach or accidental disclosure of protected health

35 "Contract violations may also result from copying images without consent. For example, the method noted in the Getty complaint by Stability AI to assess the Getty content violated the terms of use agreement for both the gettyimages.coin and istock.com internet sites. Allegedly, Stability AI accessed Getty content via Getty Images' public-facing websites. The Getty Images websites from which Stability AI copied images without permission is subject to express terms and conditions of use which, among other things, expressly prohibited (i) downloading, copying, or re-transmitting any or all of the website or its contents without a license; and (ii) using any data mining, robots or similar data gathering or extraction methods. As a result, a contract breach has allegedly occurred." Bick, supra note 28.

36 See, e.g., Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 U.S. Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023) (wherein defendants were accused of forgery under 18 U.S.C.S. § 505 related to AI use but not convicted).

37 Christian Sandvig et al., When the Algorithm Itself Is a Racist: Diagnosing Ethical Harm in the Basic Components of Software, 10 INT'L J. COMM. 4972, 4973 (2016).

38 Bias in AI occurs when two data sets are not treated equally, possibly due to biased assumptions in the AI algorithm development process or built-in prejudices in the training data, which, for example, resulted in anti-woman bias. See Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, REUTERS (Oct. 10, 2018, 8:50 PM), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

39 MODEL RULES OF PRO. CONDUCT r. 8.4(g) (AM. BAR ASS'N 1983) (stating that it is professional misconduct for a lawyer to "engage in conduct that the lawyer knows or reasonably should know is harassment or discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law.").

40 See, e.g., MillerKing, LLC v. DoNotPay, Inc., No. 3:23-CV-863-NJR, 2023 U.S. Dist. LEXIS 209825 (S.D. Ill. Nov. 17, 2023) (wherein AI use without disclosure was identified as practicing law without a license); Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 U.S. Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023).

 


information (PHI) protected by federal laws like HIPAA, or personally identifiable information (PII) or non-public information (NPI) protected by the Federal Information Security Act, as well as state laws. Furthermore, AI programs may corrupt computer control systems, resulting in lost data and/or malfunctioning machinery. Incidental and consequential damages may include jury awards, litigation costs, penalties, lost revenue, and loss of business and reputation.

AI-related difficulties also include legal ethics challenges and professional responsibility complications. Specifically, attorney professional conduct may become an issue due to AI use. For example, since the ABA's Model RPC 8.4(c) states that it is misconduct for a lawyer to "engage in conduct involving dishonesty, fraud, deceit, or misrepresentation," and because an AI can generate false information, an attorney has an ethical duty to verify AI-generated content used by said attorney.

More specifically, consider Comment 8 to ABA Model Rule 1.1 (adopted by the ABA in 2012), which indicates that attorneys must knowledgeably evaluate AI use, as a form of new technology, just as the comment requires a working knowledge of Adobe, Word, and Excel.41 The Comment provides that "[t]o maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." While New Jersey (unlike New York and Pennsylvania) did not graft Comment 8 into existing New Jersey Rule 1.1, it is generally understood that the competence element in New Jersey Rule 1.1 requires a working knowledge of relevant technologies, including AI.

Consequently, existing legal ethics rules allow (and perhaps require) lawyers to use computer and other technology (such as AI) to increase efficiency. Attorney computer use has primarily been for information use and access, as evidenced by the New Jersey Advisory Committee on Professional Ethics Opinion 701-

41 MODEL RULES OF PRO. CONDUCT r. 1.1 cmt. 8 (AM. BAR ASS'N 2012) (stating that competency includes remaining aware of the benefits and risks associated with relevant technology).

 


 

Electronic Storage and Access of Client Files.42 Generally, attorneys must use computers and other technology for efficient document preparation and distribution. ABA Model Rule 1.1 states that "A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation."43

Opinion 701 recommends that attorneys may use computers so long as they adhere to three tenets. First, the computer use must result in an enforceable obligation to preserve confidentiality and security. Second, the attorney must use available technology to guard against foreseeable attempts to infiltrate data. Third, if the lawyer uses a computer vendor, then there is an enforceable obligation to preserve confidentiality and security, and the vendor is obligated to notify the lawyer if served with a demand for client data.

An ethics opinion by the New York State Bar Association Committee on Professional Ethics (Ethics Opinion 842, Sept. 10, 2010)44 says much the same as Opinion 701. However, it adds a breach investigation element. More specifically, an attorney using a computer vendor must investigate any potential security breaches or lapses by the vendor to ensure client data was not compromised.

AI use by attorneys for legal writing has resulted in legal ethics difficulties as well. Consider the New York attorney who was sanctioned for using fake ChatGPT cases in a legal brief.45 The court ordered the law firm representing the plaintiff to pay a $5,000 fine for "acts of conscious avoidance and false and misleading statements to the court" when it was discovered that AI generated "bogus judicial decisions with bogus quotes and bogus internal citations,"

42 N.J. Advisory Comm. on Pro. Ethics, Op. 701 (2006) (discussing the electronic storage and access of client files) [hereinafter Op. 701].

43 MODEL RULES OF PRO. CONDUCT r. 1.1 (AM. BAR ASS'N 1983).

44 NYSBA Comm. on Pro. Ethics, Ethics Op. 842 (2010) (regarding ethics around lawyers using an online storage provider for confidential client information).

45 See Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 U.S. Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023).

 

the result of an AI hallucination (content made up by the AI rather than found by the AI).46

While the use of AI for writing may enhance creative analysis and identification of persuasive precedents, such use may also violate legal ethics rules, including the duty of competence, the duty of confidentiality, and assisting in the unauthorized practice of law. Depending upon how the AI writing was used and billed, other legal ethics rules may be broken as well.

For example, if the fees associated with the AI writing were not reasonable (i.e., the billing was based on the average time required to write a brief rather than the time required with AI assistance), the AI use may violate ethical rules around billing. Another example is if the AI writing was presented as an attorney's writing, which could be considered unethical dishonesty, fraud, deceit or misrepresentation.

It should be noted that the use of AI may also result in a crime (misdemeanor) related to the unlawful practice of law.47 More specifically, using AI may result in difficulties associated with copying from other sources while drafting litigation filings.

Attorneys using AI programs to help with drafting documents may also be copying AI-generated material that originates from another uncited source. Copying materials without acknowledgement may violate several other legal ethics rules, including those requiring competence and diligence and forbidding frivolous filings. Because no intent is required, even unknowingly copying material without attribution to the source may result in a violation.

Another difficulty that arises with AI is its training. The training information may be generated by an AI programmer, but is usually drawn from Internet connected databases. These Internet accessible databases (including storage of client data on third party servers) are sometimes known as "the cloud."

46 Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023), JUSTIA, https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/ (last visited Jan. 31, 2024).

47 See N.J. REV. STAT. § 2C:21-22 (2022).

 


 

As noted earlier, New Jersey has issued an ethics opinion regarding the storage of client data on "the cloud."48 This opinion permitted the use of an outside service provider to store client files digitally, provided the attorney exercises reasonable care. This ethics opinion suggested that to meet the standard of reasonable care, attorneys must be knowledgeable about how the provider will handle data entrusted to it, and they must include terms in any agreement with the provider requiring the provider to preserve the confidentiality and security of the data.

AI uses ML to formulate algorithms, which in turn produce and deliver output. ML requires access to data. Giving ML data is similar to giving data to a third party. Traditionally, attorneys have allowed third parties to have access to confidential client information, including process servers, court personnel, building cleaning companies, summer interns, document processing firms, external copy centers, and document delivery services. Existing legal ethics codes require attorneys to ensure the same security obligations for AI-accessed data as for any other third party to whom an attorney entrusts confidential client files. This means that if an attorney is giving client data to ML used to train AI, the attorney must follow the same security guidelines as with any third party. The attorney must have an agreement of confidentiality with the ML and an assurance of security from the ML system.

However, non-AI entities, like human employees of a third party, have access to the confidential data, either to synthesize the material as an AI would or to feed information into an ML system. Consequently, assurances by third parties of reasonable efforts to protect sensitive client data are, in most cases, insufficient (for legal ethics purposes) without additional AI-related specificity.

Generative AI49 has accelerated the discussion around

48 See Op. 701, supra note 42.

49 Mohammadali M. Shoja et al., The Emerging Role of Generative Artificial Intelligence in Medical Education, Research, and Practice, CUREUS, June 24, 2023 (generative AI is a form of artificial intelligence which uses machine learning to generate data, including images and music, using self-supervised learning, which relies solely on raw data without human labeling).

 


how the implementation of Internet protocol AI identification could prevent AI-related difficulties in addition to improving the implementation of existing solutions to AI-related difficulties.

AI is a form of computer use. Both traditional computer use and AI computer use require software controlled by algorithms. Algorithms are problem-solving processes memorialized in a set of step-by-step instructions telling a computer what to do. A fundamental difference between AI-directed computers and traditionally directed computers is that an AI can change its algorithms (and hence its outputs) based on new inputs, while traditional, algorithm-driven computers cannot.

AI is embodied in computer software. As noted, AI and traditional software differ due to the source of the algorithm (i.e., the set of rules to be followed by a computer to problem-solve). More specifically, algorithms for traditional computer software are written by programmers, while algorithms for AI computer software are written by computers.

Consequently, traditional computer programmers typically write the entire program, whereas AI programmers normally write a small amount of the AI software.54 As a result, it is unlikely that an AI programmer will be able to foresee suboptimal outcomes associated with AI software which he or she has written. The unlikelihood of an AI programmer's foreseeing suboptimal AI program outcomes is exacerbated by the fact that AI algorithms may produce counterintuitive outcomes which would not be foreseen.55
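The contrast can be made concrete with a toy sketch. In the Python below, the loan-approval rule and its numbers are invented for illustration: the first function's decision rule is fixed by the programmer, while the second derives its effective rule (a threshold) from whatever examples it is given, so the programmer cannot know the rule in advance.

    # Traditional software: the programmer writes the entire decision rule.
    def approve_fixed(income: float) -> bool:
        return income > 50_000  # rule is known before the program ever runs

    # AI-style software: the programmer writes only a small learning procedure;
    # the effective rule emerges from the data and shifts as new data arrives.
    def learn_threshold(examples: list[tuple[float, bool]]) -> float:
        approved = [income for income, ok in examples if ok]
        denied = [income for income, ok in examples if not ok]
        return (min(approved) + max(denied)) / 2  # midpoint between the groups

    examples = [(30_000, False), (45_000, False), (60_000, True), (80_000, True)]
    threshold = learn_threshold(examples)
    print(approve_fixed(55_000), 55_000 > threshold)  # both True for this data

With different training examples, the learned threshold, and hence the program's behavior, changes even though the programmer's code does not.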

When a traditional algorithm causes harm, the programmer may be liable and legal action may be taken against the programmer. However, when an AI algorithm causes harm, although the AI may be at fault, no legal action may be taken against the AI because it is not a legal

54 Kartik Hosanagar & Vivian Jair, We Need Transparency in Algorithms, but Too Much Can Backfire, HARV. BUS. REV. (July 23, 2018) ("[M]achine learning algorithms--and deep learning algorithms in particular--are usually built on just a few hundred lines of code. The algorithm['s] logic is mostly learned from training data and is rarely reflected in its source code.").

55 Anthony J. Casey & Anthony Niblett, A Framework for the New Personalization of Law, 86 U. CHI. L. REV. 333, 354 (2019).

 


entity.

Significant legal complications are associated with the fact that an AI is not a legal entity. An AI need not fulfill legal responsibilities imposed by local, state, and federal governing authorities. Consequently, an AI cannot own property, sign contracts, sue or be sued, or be held accountable for its actions.

III. EXISTING LEGAL SOLUTIONS TO AI-RELATED DIFFICULTIES

 

 


 


against a party which is not sui juris (such as an AI, because it is not a legal person and hence is not able to make contracts, sue others, or be sued), third parties may be required to compensate the victim for financial and non-financial damages. Tort law permits damaged parties to seek indirect liability claims depending on the facts and circumstances of the claim.

Programmers who write the original, traditional software algorithms used by AI to develop AI software algorithms might be liable for the bad acts of the software the AI develops from the programmer's original algorithm. The reasoning behind this is that AI is designed to accomplish goals specified by, and receive tasks from, a human being. Thus, it has been suggested that either direct or vicarious liability may be applied to hold the human programmer who wrote the traditional software algorithms that write AI software algorithms liable for the damages caused by the AI agent.

Regrettably, programmers have limited capacity to predict an AI's conduct, especially if the AI functions as a neural network that can learn patterns of behavior independently. Certainly, AI programmers and the entities for which they work can be held responsible if they have intentionally created AIs that commit crimes. If they have done so unintentionally, they might in fact benefit from the lack of mens rea.

Future AI criminal statutory legislation or common law related to AI responsibility arrangements might depend upon stare decisis and cite precedent drawn from matters associated with the criminal liability of slave owners in the antebellum U.S. South. Following such precedent, AI may be held responsible for crimes, which would shield AI programmers from criminal charges.58

Courts allow indirect liability, thereby transferring responsibility from the entity that causes the harm to a third party. Also known as secondary liability, this type of liability is often applied in intellectual property cases in which one party facilitates the

58 See Daniel J. Flanigan, Criminal Procedure in Slave Trials in the Antebellum South, 40 J. S. HIST. 537 (1974).

 

infringement of another party.59 Secondary liability can also arise when a supervisory party is responsible for and has control over the actions of its subordinates or associates.

As discussed, in traditional computer programming, programmers develop a fixed algorithm for a program to run, whereas an AI programmer develops an algorithm that the program itself can change over time, in sometimes unknown or uncertain ways. Because of this, traditional algorithm programming provides ready evidence of the intent of the programmer. Such intent may be gleaned from the code and the paper trail left behind by the algorithm. In such cases, courts can identify the programmer's manipulative intent and hold the programmer liable for the algorithm's misconduct.60

While both traditional and AI programming may result in the same algorithm and achieve the same result, the traditional programmer will know the algorithm before the program is executed, while the AI programmer may not. This is an essential factor in determining a programmer's or developer's liability.

An AI software programmer or developer could be liable for AI-related damage in three separate ways. First, on the basis of individual accountability, in the event that the AI was programmed intentionally or recklessly in such a way that it would violate a statute or cause harm to another. Second, an AI software developer could be liable through the doctrine of indirect perpetration. This would bridge the gap in cases where software developers, acting like puppet masters, perpetrate violations of law or harm others through third-party actions. Third, an AI software developer could be held liable if he or she "aids, abets or otherwise assists" in the commission of a statutory violation or of harm to another, including providing the means for its commission.

59 For example, when a party knowingly induces, causes, or materially contributes to copyright infringement, said party may be found liable as a contributory infringer if he or she knew or had reason to know of the infringement. Normally, courts will determine whether a party is vicariously liable by determining whether said party profited from the infringement of the primary or direct infringer and had supervisory authority (such as an employer) over the direct infringer.

60 See Amanat v. SEC, 269 F. App'x 217 (3d Cir. 2008).

 


 


Ordinary negligence applies when a software developer does not use the degree of care that a reasonably prudent person would have used when developing software. The reasonableness of the defendant's conduct is frequently understood as comparing or balancing the costs and benefits of a defendant's actions.61

If it can be determined that there is something a software developer should have done and that would reasonably have been expected of him by all others involved in the use and distribution of the software, then he can be found liable for negligence and required to pay damages to the plaintiff.62

Negligence claims may be available in situations in which product liability claims may not be available.63 The U.S. Court of Appeals for the Third Circuit found that a design was not defective under product liability law, but that a finding of negligence was possible. More specifically, the court refused to dismiss a negligence claim alleging that a computer seller was negligent for recommending its program and services to the buyer when "it knew, or in the exercise of ordinary care, it should have known, that . . . the programs and related data processing products were inadequate."64

A computer malpractice cause of action is another option. Malpractice is a failure to employ the higher standard of care that a member of a profession should employ. For example, Data Processing Services v. L.H. Smith Oil is one of the few cases in which a court imposed malpractice liability on computer programmers.65 However, most attempts to impose a professional malpractice standard on the Information Technology (IT) industry, which includes computer programmers and software developers, and

61 See United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947).

62 Tim Tompkins, Hardware and Software Liability, RENSSELAER POLYTECHNIC INST. (Dec. 6, 2000), http://www.cs.rpi.edu/courses/fall00/ethics/papers/tompkt.html.

63 See, e.g., Griggs v. BIC Corp., 981 F.2d 1429 (3d Cir. 1992).

" See Invacare Corp. v. Sperry Corp., 612 F. Supp. 448 (N.D. Ohio 1984).

65 See Data Processing Servs. v. L.H. Smith Oil Corp., 492 N.E.2d 314 (Ind. Ct. App. 1986).

 

to create a higher duty of care for these professionals, have been unsuccessful.66

Generally, courts are reluctant to find computer malpractice because they do not want to impose greater potential liability on an industry simply because the activity is more technically complex. However, there are examples of cases in which a verdict was issued in favor of a plaintiff suing under computer malpractice when it was determined that a consulting firm did not act reasonably in light of its superior knowledge and expertise in the area of computer systems.67

The third type of liability is strict liability. Restatement (Second) of Torts § 402A (1965) imposes liability on the seller of any product that is deemed unreasonably dangerous. "Manufacturers and sellers of defective products are held strictly liable (that is, liable without fault) in tort (that is, independent of duties imposed by contract) for physical harms to person or property caused by [a] defect."

Strict liability is usually only applied in extreme cases, where a product defect is obvious. In the case of AI designers or AI programmers who may be considered as rendering professional services, their duty is limited to exercising the skill and knowledge normally possessed by members of that profession or trade. To hold them liable for the same type of strict products liability described above, such parties have to expressly warrant that there are no defects in such services.

In short, none of the above options is likely to secure the accountability of AI software developers for violations of statutes or for harm to others involving AI. Alternatively, entities which distribute programs have been found liable for harm caused by those programs. These parties may be liable for AI-related legal difficulties.

In addition to action against third parties for facilitating AI torts, properly constructed Internet terms of use agreements may allow successful legal actions against third parties for facilitating

66 See, e.g., F&M Schaefer Corp. v. Elec. Data Sys., 430 F. Supp. 988 (S.D.N.Y. 1977).

67 See Diversified Graphics, Ltd. v. Groves, 868 F.2d 293 (8th Cir. 1989).

 


 

 

contract breaches. Terms of use agreements are the same thing as Terms and Conditions agreements or Terms of Service agreements. Each agreement defines rules for the use of a website. Since AI programs are only computer programs, normally a third party must assist an AI program to gain access to an Internet site.

A third party's action in facilitating an AI's access to a website may result in said third party being bound by a terms of use agreement on that website. Alternatively, the third party may be liable for the AI's breach of the terms of use agreement due to its legal relationship to the AI (such as a guarantor) or the third party might have induced the AI to breach the contract. Note that while the AI's breach of the terms of use agreement is a contract breach, inducing a contract breach through AI is a tort, making the inducing party liable in damages to the non-breaching party.

In addition to general causes of action related to intent and negligence, some intellectual property holders' causes of action arise from technological actions (such as copying) without any intent or negligence. For example, copyright owners may sue a user of generative AI software when that software has been trained using the copyright owner's copyrighted data. This litigation risk is highest for users of generative AI software who use AI-generated images that are substantially similar to the copyrighted works of a particular visual artist, especially if the output inserts a watermark or other insignia indicating that the model was trained using copyrighted data of the visual artist or image source.

B. Existing Legal Solutions to AI-related Deepfake Difficulties68

Some Artificial Intelligence transactions do not have settled liability. AI transactions occur when a user of a generative AI program inputs a question and the generative AI program outputs a result. As discussed, the types of AI output include images and videos, sometimes copied directly from other sources, and

68 See Jonathan Bick, Fact, Fiction or Privacy Infringement: Artificial Intelligence and Deepfake Liability, N.J. L.J. (June 20, 2023, 10:00 AM).

 

sometimes entirely fictionalized by combining information from a variety of sources. Some AI-generated pictures, videos and voices distributed via the internet are called deepfakes.69 Internet deepfake (deep learning + fake) content is widespread and may be used to manipulate the public, attack personal rights, infringe intellectual property and cause personal data difficulties. However, little agreement exists as to who is legally liable for internet AI deepfake content.

Since 2017, software has been available to combine AI deep learning capabilities and internet content to create hyper-realistic content, which is completely fake, using algorithms which require as little as a single photo of a source or a sound bite.70 While some use of such AI deepfake software is relatively harmless, such as fake images of a person posing with a celebrity, other AI deepfakes, involving pornography, for example, may be defamatory or criminal. This matter is exacerbated by the speed and low cost of internet distribution, because AI deepfake generators can be easily accessed and the images generated can be easily distributed.

More specifically, five separate types of internet AI deepfake liability exist for distributors of said deepfakes. The first is intellectual property infringement. Intellectual property rights are generally owned by the people who create or use the property. An internet AI deepfake may be used to pose as the person owning said intellectual property, resulting in infringement liability.

The second is the use of an internet AI deepfake generator to exploit people. In many cases, users of generative AI will create fictitious pornographic images of a person, which can be used to extort things from that person. This results in websites affecting

69 See Enes Altuncu, Virginia N.L. Franqueira & Shujun Li, Deepfake: Definitions, Performance Metrics and Standards, Datasets and Benchmarks, and a Meta-Review, ARXIV (Aug. 21, 2022), https://arxiv.org/abs/2208.10913 (manipulation of existing media, which has led to the creation of the new term "deepfake").

70 See Nickie Louise, Samsung's New AI Algorithms Make It Easy to Create Moving Faces From Just a Single Photo, TECHSTARTUPS (Jan. 8, 2020), https://techstartups.com/2020/01/08/samsungs-new-ai-algorithms-make-easy-create-moving-faces-just-single-photo/.

 

 

 


people who are unaware of the situation. Internet AI deepfakes in this case result in liability associated with invasion of privacy by appropriation, the unauthorized use of another person's likeness for commercial purposes. In addition to pornography, such liability can arise when an internet AI deepfake is used without consent to sell a product. For example, if a seller of a product uses AI-generated images of a celebrity endorsing the product, without the celebrity's consent, it can lead to liability on the part of the seller.

A third liability resulting from internet AI deepfakes is damage to a person's reputation. Deepfakes are regularly used to spread misinformation, resulting in defamation liability.71

Using internet AI deepfakes to compromise data protection and privacy results in liability for damages resulting from unauthorized disclosure, modification, substitution, or use of sensitive data; this is a fourth type of liability. Deepfakes are used to gain access to personal data collected and stored by online businesses, employers, and the government. Having one's identity virtually stolen via an internet AI deepfake increases all the liabilities associated with data breaches.

A fifth type of liability associated with internet AI deepfakes is deceptive trade practices and unfair competition. This occurs when people who purchase or use a product or service fall victim to harmful and incorrect information. Usually, misinformation and promotional marketing materials are circulated with a deepfake-generated spokesperson.

There is no federal law specifically addressing either deepfakes generally or deepfake porn specifically. Consequently, the ability to bring criminal or civil charges against an individual for the harms described above differs between states. Certain conduct that is illegal in one state may not be illegal in another.

More specifically, only Virginia, Texas and California have enacted deepfake-related legislation. Virginia's and most of California's legislation refer directly to pornographic deepfakes, and

71 See, e.g., Kate Conger & John Yoon, Explicit Deepfake Images of Taylor Swift Elude Safeguards and Swamp Social Media, N.Y. TIMES (Jan. 26, 2024), https://www.nytimes.com/2024/01/26/arts/music/taylor-swift-ai-fake-images.html.

 


Texas' and some of California's legislation refer to a specific subset of informational deepfakes. However, even where states lack deepfake related legislation, internet AI deepfake liability may arise due to damage caused by false information.

For example, an internet user may rely upon an internet AI deepfake for medical, financial or legal advice from a seemingly credible source, such as a well-known person who appears to be promoting a product. Damage to that user is actionable.72

As a threshold matter, people harmed by an internet AI deepfake would have to identify the source of the deepfake. Since legal action may only be taken against legal persons, and AI is not a legal person, an action naming an AI as a party will not prevail.

Action might be taken against the person depicted in the internet AI deepfake if that person was involved in the production of the deepfake. However, if the person depicted in the deepfake has nothing to do with the content of the deepfake, then an action against the depicted party is unlikely to succeed.

In common law jurisdictions, internet AI deepfake victims may initiate an action against the deepfake's creator under one of the privacy torts, the most applicable of which is the "false light" theory.73 Such an action is generally premised on precedent wherein programmers are liable for their programs. However, AI programs, unlike traditional programs, are often morphed or changed versions of the programmer's original algorithm.

More specifically, while both traditional software and AI software contain algorithms (i.e., procedures employed for solving a problem or performing a computation), AI algorithms differ from traditional software algorithms. The fundamental difference between AI and traditional algorithms is that an AI can change its outputs based on new inputs, while a traditional algorithm will always

72 See Charles Toutant, An AI Took Her Clothes Off. Now a New Lawsuit Will Test Rules for Deepfake Porn, N.J. L.J. (Feb. 5, 2024, 6:04 PM), https://www.law.com/njlawjournal/2024/02/05/an-ai-took-her-clothes-off-now-a-new-lawsuit-will-test-rules-for-deepfake-porn/?slreturn=20240204084003.

73 See Sara H. Jodka, Manipulating Reality: The Intersection of Deepfakes and the Law, REUTERS (Feb. 1, 2024, 12:01 PM), https://www.reuters.com/legal/legalindustry/manipulating-reality-intersection-deepfakes-law-2024-02-01/.

 


generate the same output for a given input. Consequently, AI programmers, unlike traditional programmers, can accurately argue that they are not responsible for some AI-related difficulties.
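To make this difference concrete, the following minimal Python sketch is purely illustrative; the names and the running-mean "model" are invented for this example and are drawn from no system discussed in this article. It contrasts a fixed rule, which always maps the same input to the same output, with a trivial learning routine whose answers drift as new data arrive.

# Illustrative only: a fixed rule versus a routine that "learns."

def traditional_rule(amount: float) -> float:
    """A traditional algorithm: the same input always yields the same output."""
    return round(amount * 0.07, 2)  # a fixed 7% computation

class RunningMeanPredictor:
    """A trivial stand-in for a learning model: each new observation
    changes the parameter it uses, so the same query can return
    different answers at different times."""

    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def update(self, observation: float) -> None:
        self.count += 1
        self.mean += (observation - self.mean) / self.count

    def predict(self) -> float:
        return self.mean

model = RunningMeanPredictor()
model.update(10.0)
print(model.predict())                 # 10.0
model.update(20.0)
print(model.predict())                 # 15.0 -- same query, new answer
print(traditional_rule(100.0) == traditional_rule(100.0))  # always True

The point for liability purposes appears in the last lines: the traditional rule's behavior was fully fixed by its programmer, while the predictor's behavior depends on data the programmer never saw.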

The victim may initiate an action against the internet AI deepfake's publisher, or the person who communicates the deepfake to others, under one of the privacy torts, again such as the "false light" theory. This is particularly appropriate when the publisher is the same person as the creator. In this instance, the deepfake must be published (communicated or shared with at least one other person) to be actionable, and the plaintiff needs to prove that the deepfake incorrectly represents the plaintiff in a way that would be embarrassing or offensive to the average person and involves an "actual malice" requirement. AI output may satisfy the "actual malice" requirement if the AI developers or distributors allowed the disclosure of false content with knowledge that it was false or with reckless disregard of whether it was false or not.

Internet AI deepfake actions in New Jersey may arise from four distinct outcomes associated with the tort of invasion of privacy. More specifically, the four categories include: (a) unreasonable intrusion upon the seclusion of another; (b) appropriation of another's name or likeness; (c) unreasonable publicity given to one's private life; and (d) publicity that unreasonably places another in a false light before the public.74

If a deepfake is being used to promote a product or service, the victim whose image is fraudulently used without authorization may invoke the privacy tort of misappropriation or right of publicity (depending upon the jurisdiction). Under this theory, a successful plaintiff can recover any profits made from the commercial use of their image in addition to other statutory and punitive damages. Misappropriation can be combined with false light where relevant.

While there is no New Jersey statute that recognizes a right of publicity, for more than 100 years New Jersey has recognized a common law right to prevent the unauthorized, commercial

74 See Bisbee v. John C. Conover Agency, Inc., 452 A.2d 689 (N.J. Super. Ct. App. Div. 1982).

 


appropriation of their name or likeness.75 New Jersey courts have confirmed this right of publicity as a property right in McFarland v. Miller in 1994 and in Canessa v. Kislak in 1967.76

Additionally, if the internet AI deepfake discloses untrue assertions about a person and those statements demonstrably harm the subject's reputation, a traditional defamation or libel suit may also prevail. Unlike actions which place people in a false light by associating them with content that is misleading or insinuates falsity, defamation and libel actions are associated with content that is technically false.

C. Proposed Legislative Legal Solutions to AI-related Difficulties

While existing legal solutions are available for resolving AI-related legal difficulties, the promulgation of new and additional legislation has historical precedent.77 A combination of existing laws and new legislation was used to regulate the Internet, for example.78

Congress is considering many proposed statutes to resolve AI-related difficulties.79 While they address various AI-related difficulties, most of the proposed bills have several features in common.

First, they must differentiate AI activity and output from traditional activities and output. Second, they must identify a party to be legally liable (since the AI which caused the harm is not). Third, they must identify who can bring a lawsuit.

Consider, for example, a House draft bill with the goal of regulating AI's use of people's voices and likenesses, titled the "No

75 See Edison v. Edison Polyform & Mfg. Co., 67 A. 392 (N.J. Ch. 1907).

76 McFarland v. Miller, 14 F.3d 912 (3d Cir. 1994); Canessa v. J.I. Kislak, Inc., 235 A.2d 62 (N.J. Super. Ct. Law Div. 1967).

77 E.g., the special issues raised by railroad technology. See Moses, supra note 58.

78 See Jonathan Bick, Why Should the Internet Be Any Different?, 19 PACE L. REV. 41 (1998); Jonathan Bick, Americans with Disabilities Act and the Internet, 10 ALB. L.J. SCI. & TECH. 205 (2000). See also Jonathan Bick, E-Commerce Tax Policy, 13 HARV. J.L. & TECH. 597 (2000).

79 The most common relate to protecting data privacy, combatting discriminatory use of AI and AI misuse during elections.

 


Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024"80 ("No AI FRAUD Act"). The No AI FRAUD Act was inspired by the unauthorized creation of songs and by a dental plan advertisement that used an unauthorized image and seeming performance of a famous actor. The No AI FRAUD Act would address the unauthorized use of an AI's replication of a person's voice and/or image by creating a new federal "property" right in a person's likeness and voice, regardless of whether the person is dead or alive.

In order to address the fact that an AI's rendering of a person's voice and/or likeness is not the same as said person's voice and/or likeness, the terms "person's voice" and "person's likeness" are broadly defined.81 "Person's voice" is defined as including a person's "actual voice" or a "simulation" that is "readily identifiable from the sound of the voice or simulation of the voice, or from other information displayed in connection therewith."82 The term "person's likeness" is defined as an "actual or simulated image or likeness of an individual . . . that is readily identifiable as the individual by virtue of the individual's face, likeness, or other distinguishing characteristic, or from other information displayed in connection with the likeness."83

Since AIs are not legal persons and hence would not be liable under this proposed statute, the law would hold liable "any person or entity who, in any manner affecting interstate or foreign commerce . . . and without consent of the individual holding the voice or likeness rights affected thereby" does one of the following things:84

1. "distributes, transmits, or makes available to the public a personalized cloning service" where such a service is defined as an

80 No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024, H.R. 6943, 118th Cong. (2024) [hereinafter No AI FRAUD Act].

81 Id.

82 Id.

83 Id.

84 Id.

 


algorithm, software, tool, or other technology, service, or device whose primary purpose is to produce one or more digital voice replicas or digital depictions of particular individuals.85

2. "publishes, performs, transmits, or otherwise makes available to the public a digital voice replica or digital depiction with knowledge that . . . it was not authorized" by the party who holds those rights.86

3. "materially contributes to, directs, or otherwise facilitates" any of the above conduct.87

The parties allowed to bring suit are not necessarily the parties who were harmed. The proposed bill allows third-party owners of a person's voice or likeness88 and anyone who has an "exclusive personal services" contract with a "recording artist or an exclusive license to distribute sound recordings that capture the individual's audio performances" to bring suit. Thus, a person who was misled by an AI-generated dental plan advertisement that used an unauthorized image and seeming performance of a famous actor would not have standing to litigate.

As an aside, the bill provides that "any digital depiction or digital voice replica which includes child sexual abuse material, is sexually explicit, or includes intimate images" is per se harmful. Regrettably, the term "intimate images" is not defined in the bill.

Subpart C(2)(E) demonstrates this with a special provision providing that anyone who has an "exclusive personal services" contract with a "recording artist or an exclusive license to distribute sound recordings that capture the individual's audio performances" can also bring suit.

85 Id.

86 Id.

87 Id.

88 For example, the proposed bill gives rights to "executors, heirs, transferees, or devisees" of any dead person ("regardless of whether the individual has died before [the bill's] effective date") for a minimum period of ten years after death.

 


 

1. Legislative Proposals to Hold AI Programmers and Owners Liable

The growth of AI has increased the potential for software-induced harms. For traditional computer programs, post-sale software updates have long been a standard approach to fixing product flaws. In many cases, there are no associated harms. For instance, it is common for software security vulnerabilities to be identified and then fixed through updates before being exploited by malicious actors. When harm occurs which was known or should have been known by the programmer, liability is readily assignable.

However, as previously discussed, AI systems, unlike traditional computer systems, create their own algorithms, sometimes by revising algorithms originally designed by humans and sometimes completely from scratch. This complicates the basis for products liability, namely, attributing responsibility for products that cause harm.

Regarding future legal actions, just as recent legislation has shifted liability for certain Internet transactions,89 legislative and judicial opportunities exist to overcome existing AI legal difficulties by making an AI a legal person, by designating AI programming an ultra-hazardous activity, or by having the courts apply the doctrine of strict liability90 for abnormally dangerous activity to AI programming.

Changing technology has resulted in changed liability statutes. Alternatively, courts may do the same by relying on

89 See, e.g., Digital Millennium Copyright Act, Pub. L. No. 105-304, 112 Stat. 2860 (1998) (addressing a number of copyright issues created by the use of Internet technology, whose protocols require all content to be copied, thus exposing Internet users to infringement because mere copying, with or without intent, results in copyright infringement. The DMCA created new copyright rights in Internet assets and offered new methods for stopping infringement; it gave rise to new rights and privileges (safe harbors) that protect Internet Service . . . extend the reach of existing copyright law, while limiting the liability of the providers of online services for copyright infringement by their users).

90 RESTATEMENT (SECOND) OF TORTS §§ 519-520 (AM. L. INST. 1977).

 


systems is to make AI a legal person.95 As mentioned, a legal person,96 as described by United States statutes, is a designation that requires the law to treat the entity as a person, with limited application. Legal personhood variants have already been granted to corporations.97

Some call for an extension of similar legally binding rights to AI systems, tools, and platforms.98 Others warn against such allowances for AI.99

Making an AI a legal entity simply requires the same legislation as making an organization a legal person. Just as courts have been willing to grant expansive rights to corporations, AI may be granted similar rights and responsibilities if legislation designates AIs as separate entities.

Virtually all proposed legal solutions to AI-related difficulties have the same shortcoming, namely, they are adapted to addressing AI-related difficulties after harm has been done. In short, they are not designed to prevent AI-related difficulties, and they require court intervention to perfect cures.

However, the AI Labeling Act, introduced by Congressman Tom Kean, Jr. of New Jersey, seeks to establish a framework to label

95 See Nadia Banteka, Artificially Intelligent Persons, 58 HOUS. L. REV. 537 (2021) (discussing issues associated with making an AI a legal person).

96 Legal personhood, or legal personality, is a foundational concept of Western law. Legal personhood pertains to how one is viewed or treated by the law. Legal persons are most often understood as those beings that hold rights and/or duties, or at least have the capacity to hold rights, under some legal system. VISA A.J. KURKI, LEGAL PERSONHOOD (2023).

97 Santa Clara County v. Southern Pacific Railroad Co., 118 U.S. 394 (1886) (holding that a corporation has the same rights as an individual under the 14th Amendment). Also, for example, the 2010 case Citizens United v. FEC ruled that political speech by corporations is a form of free speech that is also covered under the First Amendment. 558 U.S. 310 (2010). Burwell v. Hobby Lobby Stores, Inc. granted the right of closely-held companies, which aren't traded on the stock market, to file for exemptions to federal laws on religious grounds. 573 U.S. 682 (2014).

" Rafael Dean Brown, Property Ownership and the Legal Personhood of Artificial Intelligence, 30 INFo. & COIVLMC'N TECH. L. (2021).

99 Brandeis Marshall, No Legal Personhood for AI, 4 PATTERNS (2023).

 


AI and AI-generated content.100 This Act focuses on disclosure of AI-generated materials and seeks transparency for consumers. Internet users would be made aware if they were looking at AI-generated content or "interacting with an AI chatbot[.]"101 While this Act would not prevent the creation of unsavory AI-generated material, it would prevent unwanted AI interactions and the dissemination of deepfakes.
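By way of illustration only, the disclosure the Act contemplates could take the form of a machine-readable label attached to generated output. The Act prescribes no particular format; the Python sketch below, including its field names, is an assumption invented for this example.

import json

def label_output(text: str, generator_name: str) -> str:
    """Attach a hypothetical machine-readable disclosure to AI output."""
    return json.dumps({
        "content": text,
        "ai_generated": True,          # the consumer-facing disclosure
        "generator": generator_name,   # which system produced the content
    })

print(label_output("A summary of the filing . . .", "example-model"))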

IV. TECHNOLOGICAL SOLUTIONS TO AI-RELATED DIFFICULTIES

A. Technological Solution to Address AI-related Difficulties - Domain Name Requirement for AI and ML

Technological solutions to ameliorate or resolve AI-related difficulties depend upon AI's use of the Internet combined with the fact that the Internet is protocol driven.102 For example, the technological solution that this article seeks to advance is forcing AIs to use a specific domain name extension (such as an IP address using ".RealAI")103 for all activities, which would allow users and Internet hosts (computers that facilitate internet use) to limit AI access and thereby ameliorate infringement difficulties.
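A minimal sketch of how a host might enforce such a rule, assuming a hypothetical ".realai" extension existed (no such extension does today): the host resolves the visitor's IP address back to a hostname and admits or refuses the request based on the label.

import socket

def is_labeled_ai(ip_address: str) -> bool:
    """True if the visitor's reverse-DNS hostname carries the
    hypothetical .realai extension."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip_address)
    except OSError:
        return False  # no reverse record; treat as unlabeled
    return hostname.lower().endswith(".realai")

def admit(ip_address: str, allow_ai: bool) -> bool:
    """A site that chooses not to serve AI agents can turn them
    away before any content is exchanged."""
    if is_labeled_ai(ip_address) and not allow_ai:
        return False
    return True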

The Internet is regulated by protocol. All Internet users must abide by the same set of rules for formatting and processing data. Failure to do so will bar the user from using elements of the Internet.

100 AI Labeling Act, H.R. 6466, 118th Cong. (2023).

101 Press Release, Congressman Tom Kean, Jr., Kean Acts in Response to Westfield High School Deepfake with New Bill (Nov. 27, 2023), https://kean.house.gov/media/press-releases/kean-introduces-bill-provide-more-transparency-ai-generated-content.

102 AI may be used and created without the Internet. The technological solutions in this paper focus only on Internet related technical solutions and therefore, only affect AI that is trained by and accessed through the Internet.

103 Please note that the top-level domain name .AI can't be used because .ai is the Internet country code top-level domain (ccTLD) for Anguilla, a British Overseas Territory in the Caribbean. It is administered by the government of Anguilla. See .ai, WIKIPEDIA, https://en.wikipedia.org/wiki/.ai (last visited Feb. 26, 2024).

 

 

 


For example, if access to a World Wide Web resource requires typing "www", then a user who types "ww" or "wwww" will not be able to access that resource.104

According to the Internet Protocol (IP), which is a set of standards for addressing and routing data on the Internet, IP address blocking is allowed.105 IP banning is a configuration of a network service that blocks requests from hosts with certain IP addresses. More specifically, every device connected to the Internet is assigned a unique IP address, which is needed to enable devices to communicate with each other. With appropriate software, a host website can log the IP addresses of visitors to the site and thereby determine an Internet user's IP address.

Knowing an Internet user's IP address would allow a website to proactively block that user. IP address blocking has been used, for example, by Internet sites to prevent the repetition of inappropriate behavior.
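A minimal sketch of IP logging and banning as described above; the function names are illustrative and tied to no particular web framework.

import logging

logging.basicConfig(level=logging.INFO)
blocklist: set[str] = set()

def ban(visitor_ip: str) -> None:
    """Add an address to the blocklist after inappropriate behavior."""
    blocklist.add(visitor_ip)

def handle_request(visitor_ip: str) -> str:
    logging.info("visit from %s", visitor_ip)  # log every visitor's address
    if visitor_ip in blocklist:
        return "403 Forbidden"                 # banned host receives no content
    return "200 OK"

ban("203.0.113.7")                     # documentation-range example address
print(handle_request("203.0.113.7"))   # 403 Forbidden
print(handle_request("198.51.100.2"))  # 200 OK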

While circumvention of IP address blocking is possible, such circumvention can be unlawful. Consider, for example, the District Court for the Northern District of California, which found that circumventing an address block to access a website is a violation of the Computer Fraud and Abuse Act as "unauthorized access," and is thus punishable by civil damages.106

104 Henry J. Lowe, The World Wide Web: A Review of an Emerging Internet-based Technology for the Distribution of Biomedical Information, 3 J. AM. MED. INFORMATICS ASS'N 1 (1996).

105 See generally Jinfang Jiang et al., How AI-enabled SDN Technologies Improve the Security and Functionality of Industrial IoT Network: Architectures, Enabling Technologies, and Opportunities, 9 DIGIT. COMMC'N & NETWORKS 1351 (2023) (stating that Software-Defined Networking (SDN) allows the decoupling of network control from the data stream; more specifically, the SDN controller in the control layer functions as the network operator, which in turn manages the switching in the data layer through dedicated control standards; consequently, domain names may be monitored in real time and optimal network policies can be determined and deployed).

106 See Craigslist v. 3Taps, 942 F. Supp. 2d 962 (N.D. Cal. 2013).

 


B. Alternative Technological Solution to AI-related Difficulties - Monitoring of AI

Alternatively, or in addition to IP blocking, IP monitoring of specific IP address activity could be used to limit or eliminate AI access to Internet sites. IP monitoring can, for example, determine whether a user has visited the site before, as well as track the user's viewing pattern and how long it has been since the user last performed any activity on the site (and set a time-out limit), among other things. Since AI use is usually distinguishable from non-AI use (for example, it is generally much faster), IP monitoring may be used to identify and terminate an AI's access to an Internet site, thus providing solutions to some AI-related difficulties.
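A minimal sketch of such monitoring, assuming (as the text suggests) that request rate is the distinguishing signal; the ten-second window and twenty-request ceiling are invented thresholds, not empirical figures.

import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

WINDOW_SECONDS = 10.0      # assumed observation window
MAX_HUMAN_REQUESTS = 20    # assumed ceiling for human browsing

history: Dict[str, Deque[float]] = defaultdict(deque)

def looks_automated(visitor_ip: str, now: Optional[float] = None) -> bool:
    """Record one request and report whether this address's recent
    rate exceeds what a human visitor would plausibly generate."""
    now = time.monotonic() if now is None else now
    requests = history[visitor_ip]
    requests.append(now)
    while requests and now - requests[0] > WINDOW_SECONDS:
        requests.popleft()  # discard requests outside the window
    return len(requests) > MAX_HUMAN_REQUESTS

A host could throttle or terminate any address for which looks_automated returns True, which is the "identify and terminate" step the text describes.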

In addition to monitoring, individuals and entities who adopt generative AI solutions should employ risk amelioration policies that will help mitigate such risks, as well as take advantage of existing statutes which mitigate the adverse effects of copyright infringement claims. For example, various generative AI software requires AI software users to indemnify the generative AI software developers and distributors as part of the AI software license or appropriate terms of use agreement. In addition to indemnification agreements, errors and omissions insurance coverage should be considered.

V. TECHNOLOGICAL, LEGAL, BUSINESS HYBRID SOLUTIONS TO AI-RELATED DIFFICULTIES - INTELLECTUAL PROPERTY

Intellectual property allows entities who benefit from said intellectual property to employ technological solutions to AI-related difficulties. Technological-legal-business hybrid solutions normally involve securing legal rights to AI systems, employing technological means to govern the deployment of said AI systems, and executing agreements, such as licenses, to limit bad acts by said AI systems.

Since AI computer software is a set of protocols (universally agreed-upon actions) that takes a known set of input data and known responses to the data (as output) and prepares a model to generate reasonable predictions for the response to new data, the AI computer software may be incorporated into a patent. While AI may not be

 

 

 


 

VI. BUSINESS SOLUTIONS TO AI-RELATED DIFFICULTIES

A. Existing Business Solutions to AI-related Difficulties - Insurance

Business solutions to resolve AI difficulties eliminate or ameliorate adverse outcomes by providing for compensation from third parties. One such solution is insurance.

Insurance provides financial compensation for the effects of misfortune, wherein the payments made come from an insurance policy.111 AI liability insurance is normally part of a cyber liability policy and covers losses resulting from data breaches, cyberattacks, and other digital threats.112 This coverage can help pay for expenses related to incident response, regulatory fines, legal defense, and customer notifications.

However, other insurance policies may be more applicable for addressing specific AI-related difficulties.

For example, errors and omissions insurance (E&O),113 also known as professional liability insurance, covers claims arising from negligence, errors, or omissions in the provision of professional services. This type of insurance policy is particularly helpful for AI software developers and AI service providers, because it may protect against allegations of algorithmic bias, intellectual property infringement, and system failures.

Directors and officers are eligible for Directors and Officers

"' Patrick M. Liedtke, What's Insurance to a Modern Economy?, 32 GENEVA PAPERS RISK INS. 211 (2007).

112 See generally Josianne El Antoury, How Insurance Policies Can Cover Generative AI Risks, LAW360 (Oct. 4, 2023, 12:14 PM), https://www.cov.com/-/media/files/corporate/publications/2023/10/how-insurance-policies-can-cover-generative-ai-risks.pdf.

113 Errors and omissions insurance (E&O) is a mechanism to transfer financial risk resulting from honest mistakes or negligence. See James E. Larsen & Joseph E. Coleman, Errors & Omissions Insurance: The Experience of States with Mandatory Programs for Real Estate Licensees, WRIGHT STATE UNIVERSITY (Dec. 2004), https://corescholar.libraries.wright.edu/finance/6.

 

(D & 0)114 liability insurance to protect their personal assets. Such policies may protect corporate directors and officers in the event of claims alleging mismanagement, breach of fiduciary duty, or other wrongful acts. As AI becomes more prevalent in corporate decision-making, D&O insurance can help shield executives from potential

AI-related liability.

D&O as with any other type of liability insurance, will not cover events arising from AI-related difficulties if they arise as a result of fraud, willful negligence, or criminal activity-.115 Coverage may also be limited by explicitly in the insurance policy.

Other specialty insurance may also be useful for ameliorating AI difficulties. For example, commercial general liability insurance, which covers claims of bodily injury, property damage, and personal injury resulting from a business's operations, products, or services, might protect businesses against claims arising from AI-enabled products or services that cause harm to customers or third parties.

Additionally, intellectual property insurance, which covers the costs associated with defending or enforcing intellectual property rights, would likely be used for securing AI technology rights and infringement matters. Product liability insurance might address claims of bodily injury or property damage caused by a company's products which have integrated AI elements. Employment practices liability insurance provides coverage for claims related to employment practices, such as AI-related discrimination or wrongful termination. Media liability insurance covers claims related to content creation, distribution, and publication for businesses that use AI to generate content, which may result in claims of defamation, invasion of privacy, or intellectual property infringement.

114 Directors and Officers (D&O) liability insurance is a type of professional liability or errors and omissions (E&O) insurance that protects company executives and board members when they are sued for mismanagement, misrepresentation, or other breaches of duty or regulations. See Mark Cussen, Directors and Officers (D&O) Liability Insurance, U.S. NEWS & WORLD REPORT, https://www.usnews.com/insurance/glossary/directors-officers-liability-insurance (last updated Feb. 15, 2024).

115 See generally David J. Seno, The Doctrine of Reasonable Expectations in Insurance Law: What to Expect in Wisconsin, 85 MARQ. L. REV. 859 (2002).

 

 

 


prohibits harassment and discrimination by lawyers against certain identified protected classes.121 Failure to monitor and test AI output may result in a professional misconduct violation of Rule 8.4(g).

VII.        BUSINESS-LEGAL HYBRID SOLUTIONS TO AI-RELATED DIFFICULTIES

Other business solutions to address AI-related difficulties might include sanctions tied to an AI's legal person status122 (assuming AIs are authorized to be legal entities). Economic sanctions would be relevant to addressing AI obligations associated with bad acts. For example, legislation which brings AI programs into existence as legal entities might require an AI to own property to satisfy legal obligations. Alternatively, machine learning databases assembled to train AI could be treated as foreign investments subject to international investment law (and taxed) if they are shared with certain competitors.123

VIII.       CONCLUSION

AI-related difficulties may be ameliorated by existing legal, technological and business means. However, each of said means has its shortcomings, primarily due to their inability to prevent AI-related

identity, marital status or socioeconomic status in conduct related to the practice of law.").

121 In New Jersey, this applies to the use of AI applications; thus a lawyer must not engage in misconduct, including "conduct involving dishonesty, fraud, deceit or misrepresentation;" "conduct that is prejudicial to the administration of justice;" and "conduct involving discrimination . . . ." N.J. Ct. R. app. 3, R. 8.4(c), 8.4(d), 8.4(g). Those duties are addressed in part by the ongoing requirements to ensure the accuracy (and avoid falsification) of communications containing AI application generated content, which may otherwise result in misconduct, including discrimination.

122 Siina Raskulla, Hybrid Theory of Corporate Legal Personhood and Its Application to Artificial Intelligence, 3 SN SOC. SCI. (2023) ("Artificial intelligence (AI) is often compared to corporations in legal studies when discussing AI legal personhood").

123 Anupam Chander & Noelle Wurst, Applying International Economic Law to Artificial Intelligence, 24 J. INT'L ECON. L. 804 (2021).

 


difficulties. Opportunities exist for integrating legal, technological and business solutions to transcend those failings. Among the most promising is incorporation of Internet protocol elements.

Requiring AIs to use a specific new domain name extension (such as an IP address using ".RealAI") is one such element. This change may be implemented via legislation, voluntary industry standards or regulation. Such a change would take advantage of AI's and ML's dependence on the Internet for developing and implementing their systems, simultaneously taking advantage of the Internet's protocols and thereby making AI systems identifiable prior to engaging in Internet transactions.

Future legal, technological, and business solutions to AI-related difficulties are likely to benefit from making AI and ML systems readily identifiable by requiring them either to register or to use specified domain names. Doing so will likely enable technological solutions to easily restrict AI access to Internet databases, thus preempting AI difficulties; business solutions to initiate insurance to distribute the risk of AI harm, thus ameliorating AI difficulties for individuals; and legal solutions to administer proper AI use by clearly delineating bad actors.