Artificial Intelligence Use and Ethical Compliance



New York Law Journal  June 20, 2024


By Jonathan Bick  Bick is counsel at Brach Eichler and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace Law School and Rutgers Law School.


Since it may be argued that a lawyer's decision not to use Artificial Intelligence (AI) is unethical, and the improper use of AI may also be unethical, it is important that attorneys become cognizant of how legal ethics rules apply to AI. The first step toward ethical compliance in AI use is the timely disclosure of that use, particularly with respect to generative AI tools. Such a step is likely to ameliorate or eliminate many of the legal ethics difficulties that AI use presents.


An attorney's failure to use AI could implicate ABA Model Rule 1.5, which requires a lawyer's fees to be reasonable, since failing to use AI may materially and unnecessarily increase the cost of providing legal services. Alternatively, failure to use AI may arguably violate ABA Model Rule 1.3, which requires a lawyer to act "with reasonable diligence and promptness in representing a client." Such would be the case if an AI solution could have avoided delaying a deal; the delay would then run afoul of the promptness requirement under Rule 1.3.


When using AI, an attorney must consider ABA Model Rule 1.6, which specifically prohibits lawyers from using or disclosing client confidential information without the client's informed consent. Similarly, attorneys using AI must conform to ABA Model Rule 1.8 regarding the client-lawyer relationship, which may require disclosure of AI use.


The brisk development of AI tools has raised significant issues for lawyers, law practices, and clients. Among the most important is integrating AI tool use with the Model Rules of Professional Conduct. More specifically, AI tool use must be considered in light of ethical issues including confidentiality, competence, practice management, and honesty. Failure to properly address these considerations increases the risk of the unethical and unauthorized practice of law when using AI tools, due in part to the possibility of inadvertently waiving the attorney-client and attorney work product privileges in the course of using them.


AI has two primary uses, namely as a content generation tool and a learning tool. Generative AI produces output in response to instructions from a user. Machine Learning AI (ML) facilitates the automation of legal work by recognizing patterns within predefined data sets and refining the algorithms that perform legal tasks, thereby improving performance of specific legal tasks, such as conducting legal research.


Whether an attorney uses generative AI or ML AI, to avoid ethical difficulties the attorney must not input any client confidential information into any AI solution that lacks adequate confidentiality and security protections.

Lawyers must increasingly be aware of the risks associated with using generative AI tools. Such tools may inadvertently cause an attorney to violate privacy rights, issue inaccurate documents, or infringe copyrights.


To limit these difficulties, consultation with information technology professionals and Internet experts is advisable. Additionally, the documentation and timely disclosure of such consultations to clients is generally suitable for addressing ABA Model Rule 1.6 ethical difficulties.


A commonly used generative AI tool is ChatGPT, a software program that generates content in response to human prompts. To build generative AI tools, programmers use ML tools to "train" computers to write algorithms based on data generally collected via the Internet.


Since the data used to train the ML tools, which in turn are used to create generative AI tools, may be Internet sourced, generative AI responses (output/content) are not necessarily correct. More specifically, erroneous generative AI responses may reflect inaccurate Internet data at best and "hallucinated" answers at worst. One New York lawyer faced sanctions for citing fictional cases invented by ChatGPT in a legal brief (Park v. Kim, No. 22-2057 (2d Cir. 2024), No. 20CV02636).


Disclosure of AI use and proper documentation can ameliorate or eliminate the ethical difficulties of AI use. More specifically, AI use must be documented in the document in which the AI content appears; that is, the documentation must be incorporated into any submission.


Disclosing AI use in this manner is not problem-free. For example, consider the use of ChatGPT to prepare the document title, spell-check the document, and generate a legal argument addressing a particular point. No standard exists detailing what disclosure and documentation must be presented; however, it is generally agreed that at a minimum the document should disclose in a footnote which AI tool was used, what content the AI tool generated, and the date the tool was used. Such an AI use disclosure standard is consistent with most legal writing submission guidelines, which usually require that any ideas that already exist in the literature be properly referenced.


When generative AI is used in the writing process simply to improve the readability and language of the work, no AI use disclosure is likely required, just as the use of spell checkers, grammar checkers, and format management software need not be disclosed.

In any case, whenever AI technology is used, document authors should carefully review and edit the results. AI tools can generate authoritative-sounding content which is both incorrect and incomplete.


In sum, disclosure is required when an AI generates an idea or substantive argument that is incorporated into a document. No AI use disclosure is required when an AI is used only to improve readability and language.


Beyond the timely disclosure of generative AI content, a number of ethical difficulties may be limited by a lawyer's timely disclosure of AI use, including those arising under the duties of competence, communication, and confidentiality. Consider Rule 1.1 of the ABA Model Rules, for example. This rule requires an attorney to provide competent representation, namely the legal knowledge, skill, and thoroughness necessary for the representation, which in turn has been found to require using current technology.


While using AI in representations is not yet a necessary standard of care in any area of legal practice, discussing the use of AI with the client is recommended. The Code of Professional Responsibility requires lawyers to generally understand the technology available to improve the legal services they provide to clients. Consequently, lawyers have a duty to identify the technology that is needed to effectively represent the client, as well as to determine the nature of such technology. Discussing AI use with the client is likely to fulfill this obligation.


Disclosure is also useful for complying with ABA Model Rule 1.4. This rule governs a lawyer's duty to reasonably consult with the client about the means by which the client's objectives are to be accomplished. Presumably, this duty of communication under Rule 1.4 includes discussing with the client the decision to use AI in providing legal services.


It is recommended that an attorney document the client's approval before using AI, and this consent must be informed. The discussion should cover the risks and limitations of the AI tool. In some circumstances, a lawyer's decision not to use AI may also need to be communicated to the client, particularly if using AI would benefit the client.


Arguably, an attorney's failure to use AI could implicate ABA Model Rule 1.5, which requires a lawyer's fees to be reasonable. As noted above, failure to use AI technology that materially reduces the cost of providing legal services arguably could result in a lawyer charging an unreasonable fee to a client. This ethical difficulty may also be ameliorated by timely disclosure to the client.