A lawyer’s duties when using artificial intelligence.

As we approach 2024, I’ll address one of the “hot topics” of 2023: legal ethics and the use of Artificial Intelligence (AI).

I’ve been straightforward on the issue.

Rule 1.1 requires a lawyer to provide competent representation. Comment [8], captioned “Maintaining Competence,” states that “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”  AI is relevant to the practice of law.  Ergo, a lawyer should be aware of the risks and benefits associated with its use.

This year, several incidents highlighted one of the risks: legal memos created by generative AI that include citations to cases that do not exist.[1] The most notable example involved the Avianca matter that I blogged about here.[2]

In my opinion, the lesson to be drawn from the “hallucination” examples is NOT that a lawyer violates the rules by using generative AI to draft memos or motions. Rather, it’s that a lawyer who does so remains responsible for the work and should check the cites.[3]

Of course, having described it as “straightforward” above, I recognize that my guidance verges on the simplistic. The bar deserves more than “your duty is to understand AI’s risks and benefits.” So, today, I’m sharing three resources that I hope will help.[4]

Earlier this year, the California State Bar’s Standing Committee on Professional Responsibility and Conduct published Practical Guidance For The Use Of Generative Artificial Intelligence In The Practice Of Law. It includes “guiding principles” that address various duties owed by lawyers. Without delving into each, they are confidentiality, competence, diligence, communication, fees, meritorious claims & contentions, candor to a tribunal, supervising staff, acting at the direction of a supervising lawyer, complying with court rules, abiding by the law, and avoiding conduct that involves prohibited bias.

More recently, the Florida Bar published for comment Proposed Advisory Opinion 2024-1 — Regarding Lawyers’ Use of Generative Artificial Intelligence. The opinion discusses the duties of confidentiality, oversight, fees & costs, and lawyer advertising.

Finally, in November, JDSupra posted Ethical AI Guideposts for Lawyers Using Generative AI. Besides good tips, it includes an interesting (to me) comment on the judicial response to hallucinations:

“Judge Xavier Rodriguez, a learned U.S. District Judge in the Western District of Texas, eloquently encapsulated the problem of judicial over-regulation in response to generative AI missteps:

‘Some judges (primarily federal) have entered orders requiring attorneys to disclose whether they have used AI tools in any motions or briefs that have been filed. This development first occurred because an attorney in New York submitted a ChatGPT-generated brief to the court without first ensuring its correctness [Mata case referenced above]. The ChatGPT brief contained several hallucinations and generated citations to nonexistent cases. In response, some judges have required the disclosure of any AI that the attorney has used. As noted above, that is very problematic considering how ubiquitous AI tools have become. Likely these judges meant to address whether any generative AI tool had been used in preparing a motion or brief. That said, if any order or directive is given by a court, it should merely state that attorneys are responsible for the accuracy of their filings. Otherwise, judges may inadvertently be requiring lawyers to disclose that they used a Westlaw or Lexis platform, Grammarly for editing, or an AI translation tool.’ 24 The Sedona Conference Journal at 822.”

Over the next few months, I’ll try to do a detailed post about each of the duties implicated by the use of AI. Not today.  Today’s goal was to share resources.

I’ll end with this.

I’ve long urged lawyers not to fear technology.  For almost as long, I’ve argued that it’s usually not technology that gets a lawyer into hot water. It’s something that would get the lawyer into hot water even if done in a non-digital world.  For example, failing to check cites before submitting a memorandum. Yes, the Avianca case resulted in the court sanctioning the lawyers. However, in doing so, the judge specifically noted that “[t]echnological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.”[5]

As always, let’s be careful out there. 


[1] Known as a “hallucination,” this risk is not limited to the legal profession. Rather, at least in its early stages, generative AI sometimes presents as fact something that is not.

[2] See also Mata v. Avianca, 22-cv-1461, 2023 WL 4114965 (S.D.N.Y. June 22, 2023).

[3] As I’ve stated at seminars, if a lawyer asked an associate or paralegal to prepare a memo and submitted it to the court without checking the work, we wouldn’t be calling to ban the use of associates or paralegals if, in this instance, the associate or paralegal intentionally included fake citations that the lawyer failed to notice. We’d be reminding lawyers that they are responsible for their work.

[4] I’m struck by how much generative AI drove 2023’s discussion of the legal ethics issues associated with AI.  AI isn’t new and isn’t limited to generative AI. Indeed, in 2019, Squire Patton Boggs published Legal Ethics in the Use of Artificial Intelligence. The post’s tips remain relevant today.

[5] Mata v. Avianca, 22-cv-1461, 2023 WL 4114965, at *1 (S.D.N.Y. June 22, 2023).