NYSBA Task Force issues guidelines on a lawyer’s use of artificial intelligence.

More and more guidance on a lawyer’s use of artificial intelligence is emerging.  Last December, I blogged here about advisory opinions issued by the Florida and California bars. Today, I write to share the recently released Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence.

The report is thorough. It includes the following sections:

  • Evolution of AI & Generative AI
  • Benefits and Risks of AI and Generative AI Use
  • Legal Profession Impact
  • Legislative Overview and Recommendations
  • AI & Generative AI Guidelines

Legal ethics and professional responsibility figure prominently in two sections.

The section titled “Legal Profession Impact” includes a sub-section devoted to “Ethical Impact.” In turn, the sub-section addresses seven areas:

  • Duty of Competence
  • Duty of Confidentiality & Privacy
  • Duty of Supervision
  • Unauthorized Practice of Law
  • Attorney-Client Privilege and Attorney-Work Product
  • Candor to the Court
  • Judges’ Ethical Obligations

I don’t want to use block quotes or regurgitate the report. Rather, if you’re interested, I suggest reading it in full. That said, I want to draw attention to two aspects of the section on “Ethical Impact.”

The first is the quote used to open the discussion of the Duty of Competence.  The quote serves as an important reminder to any lawyer who thinks they can ignore developments in technology:

  • “A refusal to use technology that makes legal work more accurate and efficient may be considered a refusal to provide competent legal representation to clients.”[1]

Next, any lawyer or legal professional who uses generative AI would be well served by reviewing the examples of how “attorney-client privileged information or attorney-work product [could] be revealed when directly and indirectly using generative AI tools such as ChatGPT or GPT-4.”[2]

Now I’ll move on to the next section in which legal ethics figures prominently.

The “AI and Generative AI Guidelines” appear on pages 57-60. Each guideline cites to a specific conduct rule – 14 in total – and then shares a tip on how to ensure compliance with the rule when using AI. Again, I’m not going to regurgitate the rules or guidelines here. Read them.  However, as a former chair of the VBA’s Pro Bono Committee, I’ll happily reshare this. 

New York’s pro bono rule states that “[l]awyers are strongly encouraged to provide pro bono legal services to benefit poor persons” and goes on to suggest that lawyers aspire to provide 50 hours of pro bono legal services per year.[3] The Task Force’s guideline related to Rule 6.1 states that artificial intelligence

  • “may enable you to substantially increase the amount and scope of the pro bono legal services that you can offer. Considering Rule 6.1, you are encouraged to use [AI or generative AI] to enhance your pro bono work.”

Finally, with AI and generative AI so entwined with a lawyer’s duty of competence and the responsibility to stay abreast of the benefits and risks of relevant technology,[4] I’m struck by how incompetent I am to blog about the topic. If anyone should be authoring this post, it’s The First Brother. PK works for Amazon Web Services. His title is “Generative AI Lead Engineer.” In a nutshell, he writes AI that allows AWS clients to automate their workflows. 

I guarantee you this: The First Brother is far better equipped to wax intelligently on legal ethics & professional responsibility than I am on generative AI.[5] Who knows what will happen as both technology and our understanding of who should be authorized to provide legal services evolve? Maybe the legal profession will be so disrupted that the First Brother replaces me as bar counsel.[6]

You heard it here first!

As always, let’s be careful out there.


[1] Footnote 123 attributes the quote to Nicole Yamane, Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands, Sept. 24, 2020, Georgetown Univ. Law Center, https://www.law.georgetown.edu/legal-ethics-journal/wp-content/uploads/sites/24/2020/09/GT-GJLE200038.pdf

[2] Citation omitted. The examples appear on pages 34 and 35 of the Task Force’s Report & Recommendations.

[3] Vermont’s rule, which is similar, is here.

[4] V.R.Pr.C. 1.1, Cmt. [8]

[5] The Judiciary recently swapped out my old laptop for a new HP ProBook. It worked great in the office after the tech person set it up. At home? Different story. Took me a few hours to find the power button. Turns out, it’s a button in between “prt scr” and “delete.”

[6] More likely, AI will replace me. As I blogged here, it’s already pretty darn good at providing legal ethics guidance.

Related Posts

A lawyer’s duties when using artificial intelligence

Artificial intelligence & fabricated case law: a lesson in tech competence

2 thoughts on “NYSBA Task Force issues guidelines on a lawyer’s use of artificial intelligence.”

  1. Dear Mike,

    I am LOLing as I just hit send on our latest article on AI legal ethics….and then I read your email. Cindy and I knew it would be immediately out of date, but that timing was too comical! Thanks for the great summary.

    As I’m sure you are already aware, we presented today on mental health/mindfulness on behalf of VPO. We had tremendous audience participation and someone even shared afterwards that they had a “lightbulb” moment. Love it when that happens. 🙂

    Hope you are doing well my earth/soul friend!

    Becky

