Will AI Replace Lawyers? Assessing The Potential Of Artificial Intelligence In Legal Services
By: Tim Watkins | Coffin Mew
In May 1997, in a high-profile chess match held under tournament conditions, the reigning world champion, Garry Kasparov, took on Deep Blue, an IBM-developed computer, and lost. It was the first time that a computer had defeated a reigning world champion over a full match. The result received much coverage at the time and represented a triumph of late 20th century technology.
The question of whether the practice of law exhibits an equivalent level of tactical dexterity to that of a chess match is not one to be answered here, and certainly not by a practising lawyer. But advances in AI, across many facets of life since the turn of the century, are undeniable. And the legal profession, despite often being regarded as resistant to change, is no exception.
In what might be considered a similar Deep Blue moment, a US study conducted in 2018 pitted 20 well-respected corporate lawyers against an AI in an error-spotting test across a suite of non-disclosure agreements (NDAs). Responses were measured by time and accuracy.
The human lawyers achieved an average accuracy of 85%, in an average time of 92 minutes. By comparison, the AI’s success rate was measured at 92% – an impressive score, particularly given that it was achieved in just 26 seconds.
To suggest that this symbolises the imminent end of human lawyers is perhaps leaping to a hasty conclusion. But it does raise a number of interesting questions. Are lawyers – or indeed any professional advisers and service providers – ultimately replaceable? And if so, how, where, and to what extent?
The rise of “New Law” – groups of often international freelance lawyers, all operating under a common “firm” brand, but without expensive overheads such as office rent – has been a 21st century innovation that is proving successful in many countries.
But in the same way that modern accountancy firms do far more than company audits, successful modern law firms offer a multitude of practice areas – corporate, commercial, banking, employment, real estate and litigation, to name but a few – increasingly tailored to specific business sectors.
Issue-spotting in an NDA is wholly different from the tactical strategising involved in a complex litigation case, or from the cross-table negotiation of any contract. Such expertise derives from skills honed through practice, as much as knowledge learned.
Where might AI assist?
Across all disciplines, however, it is easy to see where generic advances in AI could help the entire profession. For example, all lawyers are required, before onboarding a new client, to verify the client’s identity. So, for non-corporate clients at least, advances in facial recognition technology could in due course remove the need to collect potentially forgeable identity documents at the outset of any new instruction.
The courts and justice system are an obvious area where sophisticated AI could assist lawyers and judges in mining the wealth of historic precedent far more quickly and efficiently than human lawyers and researchers can. For smaller matters, fully online courts may not be far away.
And in corporate transactions, it is often a rite of passage for junior lawyers to spend hours trawling online data rooms, reviewing company documents, contracts and other information as part of due diligence. Speed-reading material contracts to spot issues flagged by particular words and phrases should, as the abovementioned US study showed, deliver two advantages: greater speed, and mitigation of human error. An AI reviewing dozens of commercial contracts would be less likely to experience boredom or fatigue, if nothing else.
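At its simplest, the kind of phrase-based issue-spotting described above can be illustrated with a short script. This is only a minimal sketch – the risk phrases and sample clauses below are invented for illustration, and real contract-review tools are far more sophisticated (using natural-language models rather than literal string matching):

```python
# Illustrative sketch: flag contract clauses containing known "risk" phrases.
# The phrase list and sample clauses are hypothetical, for demonstration only.

RISK_PHRASES = [
    "unlimited liability",
    "perpetual",
    "automatic renewal",
    "exclusive",
    "non-compete",
]

def spot_issues(clauses):
    """Return (clause_number, phrase) pairs wherever a risk phrase appears."""
    hits = []
    for number, text in enumerate(clauses, start=1):
        lowered = text.lower()  # case-insensitive matching
        for phrase in RISK_PHRASES:
            if phrase in lowered:
                hits.append((number, phrase))
    return hits

contract = [
    "The term of this Agreement is two years, subject to automatic renewal.",
    "Each party's liability is capped at the fees paid in the prior 12 months.",
    "The Supplier grants the Customer an exclusive licence in the Territory.",
]

for clause_no, phrase in spot_issues(contract):
    print(f"Clause {clause_no}: flagged '{phrase}'")
```

A script like this scans hundreds of clauses in milliseconds and never tires – which is precisely the speed and consistency advantage the study highlighted – but it has no judgment: deciding whether a flagged clause actually matters remains the lawyer's job.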
All these examples should speed up processes and save time and costs for clients. But the output of any AI system is, of course, largely dependent on its algorithms and data input – for which human interaction is still required. Should legal AI be developed by IT experts, rather than legal professionals? Who should bear ultimate accountability if an AI makes an obvious or even a not so obvious error?
Propensity for bias and lack of accountability are two obvious areas of concern, particularly if AI use involves decision-making rather than administrative processing. If an AI were to assist judges in reaching a judgment or an appropriate sentence founded on principles, it is perhaps inevitable that the data feeds underpinning the AI’s operational algorithms would bear some subjectivity.
To minimise or eliminate this entirely – after all, what use is an AI system that errs more often and more obviously than its human equivalent? – significant time and expense will be required. Who would pay for this?
Even within non-contentious work – enabling clients to draft their own wills, say, or claim online refunds – online programs already exist that allow individuals to largely bypass lawyers altogether.
But where the AI is aimed at providing administrative efficiencies in a larger, perhaps multi-jurisdictional transaction – where the scope for cost savings would be felt more keenly – an AI program would increasingly need to interpret and work with different industries, and perhaps different jurisdictions or languages.
One can foresee greater assistance to the in-house lawyer, through growth in AI programs and software designed to offer knowledge, resources and business efficiencies at the expense of external legal counsel. But for the larger, more bespoke and complex transactions, where multiple parties require individual representation, it is hard to see AI replacing lawyers entirely.
The future of legal advice
At the CES Expo 2019, much coverage was given to the ongoing competition between the different smart speakers now on the market. Asking Alexa, for example, to recommend a local lawyer may be no more than a hands-free Google search. But if legal questions and advice are searchable online, it is presumably no great stretch to imagine such devices being asked to provide a 24/7 legal advisory service.
AI in the context of legal tech may take a number of different forms – depending on whether its purpose is to assist and benefit practitioners or clients. In theory, and at the easier end of the spectrum, any procedure or activity that usually follows a reasonable degree of uniformity or sequence – for example, a due diligence review of company documents to issue-spot, or by reference to numerical materiality thresholds – is ripe for AI efficiencies that could save time, costs and relieve junior associates from what can often be fairly mundane work.
However, anything more expansive, that seeks to replace rather than support the earnest lawyer, would need to be virtually immune to bias, and yet still have demonstrable accountability if it is ever to compete in the real world.