In January, the Duke Center on Law & Technology launched Responsible AI in Legal Services (RAILS). The initiative’s mission is to:
- “Bring together a cross-industry group of leaders (judiciary, corporations, law firms, tech providers, access to justice orgs, etc.) to support the responsible, ethical, and safe use of AI to advance the practice of law and delivery of legal services to all.”
Although I only learned about RAILS today, I’m already appreciative of its Resource: AI Orders tab, which includes a link to the RAILS AI Use in Courts Tracker. The Tracker “contains court orders, local rules, and guidelines from the U.S. and other countries [and] allows for search and filtering capabilities based on factors such as jurisdiction, date, and other key terms.”[1]
I’m not aware of any court orders, local rules, or guidelines having been issued in Vermont. So, why is the tracker important?
The Honorable Paul Grimm is a retired federal judge and current law professor at Duke Law School. The RAILS resource page includes commentary from Judge Grimm.
First, Judge Grimm noted:
- “the headlines in the last year in particular have included many stories about litigants and attorneys who faced or were subjected to sanctions for having filed court papers prepared by GenAI applications that contained citations to fictitious legal authority, or cited actual cases, but which did not actually support the argument for which they were cited.”
He added:
- “These were entirely self-inflicted injuries because no lawyer or party should file any court paper without independently confirming the accuracy of the facts and legal authority cited.”[2]
Finally:
- “In reaction to these lapses, an increasing number of judges and courts have issued a profusion of standing orders, pretrial orders, court rules, general orders, and case management orders that imposed various obligations on litigants and counsel to certify the use of AI technology and the accuracy of their filings. While well intentioned, the sheer number of these orders and lack of uniformity their provisions can cause considerable confusion to litigants and practitioners who may have to appear in many different courts. In this dynamic environment, what is needed is a ‘one-stop’ source for finding all of these orders that will allow litigants and attorneys to make sure they are aware of, and comply with, these court requirements.” (emphasis in the original).
While most readers practice here in Vermont, some likely practice elsewhere. So, to the extent the tracker might help them, I’m highlighting it today.
I appreciate two other aspects of the RAILS resource page.
The first is that it calls attention to the difference between “AI” and “Generative AI.”
Tech competence indeed.
The second is that it acknowledges that “few” of the court orders and local rules identified by the tracker “govern behaviors that are not already addressed by existing rules of professional responsibility.” I appreciate this acknowledgement because, while I agree with RAILS that “the lack of consistency in terminology and scope of these orders may create confusion and compliance challenges for attorneys navigating the AI landscape,” I also believe that the current Rules of Professional Conduct encompass the scope of misconduct that might result from the use of AI, as well as from whatever new “thing” technology brings us next, and next, and after that.
As always, let’s be careful out there.
[1] Notably, RAILS takes no pride of ownership. Its resource page also links to the Ropes & Gray Court Order Tracker, another tool that tracks “standing orders and local rules on the use of AI.”
[2] As I’ve repeatedly argued, it’s often not technology that’s the problem. Imagine that a partner asks an associate to draft a memorandum of law. If the partner submits the memo without checking the cases cited by the associate, and if those cases are fictitious, we wouldn’t focus on whether to adopt orders regarding the use of associates. We’d focus on the partner’s failure to check the cites!