Seyfarth Shaw is a law firm with an impressive list of accolades. And, now, the firm appears poised to be the first major law firm to use robots to handle tasks presently performed by lawyers.
In a joint press release with a company called Blue Prism, Seyfarth Shaw announced:
- “We’re excited about the opportunity this creates to free our lawyers from some of the more mundane legal tasks so they can focus on helping our clients solve their most complex business issues,” explained Seyfarth’s chair emeritus Stephen Poor. “In testing various use cases, we’ve already seen how Blue Prism’s RPA software can help us create exponential gains in productivity, and we’ve only begun to scratch the surface of possibilities.”
The ABA Journal has the full story here.
A phrase stood out: “[w]e’re excited about the opportunity this creates to free our lawyers from some of the more mundane legal tasks . . . .”
So, it looks to me as if robots will be performing “mundane legal tasks.”
I’m not the least bit surprised. But, from a regulatory perspective, what if the robot gets it wrong?
In Vermont, Rules 5.3(a) & (b) impose responsibilities regarding nonlawyer assistants. Rule 5.3(c) holds a lawyer ethically liable for the conduct of a nonlawyer assistant if the lawyer orders or ratifies it, or if the lawyer has knowledge of a nonlawyer assistant’s conduct and fails to take reasonable remedial action at a time when the consequences can be avoided or mitigated.
As I’ve often said, Rule 1.1’s duty of competence includes tech competence. Read together, do Rules 1.1 and 5.3 require lawyers who use robots to have some sort of understanding of the coder’s qualifications? Perhaps we will eventually treat the purchase of robots as we do the selection of a cloud vendor and hold that “a lawyer must take reasonable precautions in choosing a robot that will perform mundane legal tasks.”
Even beyond choosing the robot, is there a duty to “trust but verify” the robot’s work? I have no idea what “mundane legal tasks” the robots will be doing. However, absent random quality assurance checks, it’s conceivable that the robots could get a task wrong for quite some time before anyone realizes it. Not only that, I’d assume that a mistake would result from a programming error and, therefore, could be repeated over & over & over again. Or, will this have been addressed in the testing phase?
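For what it’s worth, the “trust but verify” idea above has a simple technical shape. Here is a minimal, purely hypothetical sketch (the task, function names, and numbers are my own assumptions, not anything from the press release) showing why even a small random sample of a robot’s output catches a systematic programming error: because the bug is in the code, it affects every output, so any spot-check hits it.

```python
import random

def robot_compute_deadline(days_to_respond: int) -> int:
    """Hypothetical 'mundane legal task' with a systematic bug:
    an off-by-one error makes every computed deadline one day late."""
    return days_to_respond + 1  # bug: should be days_to_respond

def lawyer_verifies(task_input: int, robot_output: int) -> bool:
    """A human checks one sampled output against the correct answer."""
    return robot_output == task_input

def spot_check(inputs, sample_size=5, seed=0):
    """Randomly sample completed tasks and verify each one."""
    random.seed(seed)
    sample = random.sample(inputs, min(sample_size, len(inputs)))
    return [i for i in sample
            if not lawyer_verifies(i, robot_compute_deadline(i))]

# The robot has handled 100 tasks; a lawyer spot-checks only 5.
failures = spot_check(list(range(100)), sample_size=5)
print(len(failures))  # every sampled output fails, revealing the bug
```

The flip side is equally important: if no one ever samples the output, the same wrong answer goes out a hundred times, which is exactly the “repeated over & over & over again” risk described above.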
The profession’s eventual replacement of humans with machines intrigues me, even if only from an ethics perspective. Are machines burdened by notions of loyalty? If not, will the conflict of interest rules apply to robots?
In any event, this is only the beginning. As the press release goes on to state:
- “Blue Prism provides an anchor around which we can refine and test the types of robotics that immediately make our lawyers better and faster,” said Byong Kim, director of technology innovations, SeyfarthLean Consulting. “At its core, this is about arming lawyers with the best technology, and software robots are the latest evolution.”