Why law firms need to think differently – and smarter – about AI

Australasian Law Management Journal, January 20, 2019

A new conversation is required within law firms about artificial intelligence to ensure lawyers understand the tasks it can truly tackle, in turn allowing firms and their clients to benefit from this disruptive force, writes Jordan Furlong.

We need a new way to talk about artificial intelligence in the law.

I am seeing a lot of frustration and cross-talk lately in the legal innovation community around what does and does not constitute legal AI, and whether it will or will not deliver any real value. The term ‘legal AI’ is not helping much: it is vague and woolly enough to mean almost anything, and its sci-fi connotations raise expectations beyond what any technology can realistically deliver at this point. I think we need to go back to basics and deconstruct what we are trying to achieve with this technology, and why.

While I was thinking about this subject, the American Bar Association’s Law Practice Today arrived in my inbox, featuring a remarkably on-point article by Michael Mills of Neota Logic. Michael answers the question ‘What is AI?’ with this retort: “First, AI isn’t really ‘artificial’ — it’s all created by humans through very, very hard work — and it isn’t really ‘intelligent’ either — the software doesn’t know what it’s doing or why. Second, AI is not a ‘what.’ We can’t point to anything and say, ‘Yup, that’s an AI, right over there by the door’.”

So, what are we really talking about when we talk about legal AI? “A large and growing collection of mathematical methods embodied in software for doing narrowly defined but very useful [legal] tasks,” is Michael’s apt description. This suggests to me that we ought to look more closely at the tasks in question, in order to find out why we are applying these methods to accomplish them.

Michael classifies these tasks into five categories:

1. Electronic discovery

2. Legal research

3. Analytics and prediction

4. Expertise automation

5. Contract analysis.

Why use legal AI to carry out these tasks? Michael suggests that these applications of AI can enable lawyers to:

  • serve more clients more effectively at lower cost;
  • create new revenue streams not dependent on hours worked;
  • focus time and expertise on work that requires the uniquely human and professional skills of empathy, judgment, creativity and advocacy; and
  • increase access to justice by meeting the legal needs of the poor and middle class.

This all seems accurate to me. I would like to take the inquiry a step further and ask: What are the benefits to clients of applying these methods to these tasks? I have argued before that lawyers should evaluate any potential AI investment in terms of whether, and to what extent, clients will benefit. In that spirit, I would like to suggest a client-centred framework for viewing legal AI. It seems to me that the five sets of tasks Michael has enumerated can themselves be broken down into two general categories.

1. Volume and costs

One category contains all those tasks that legal AI can accomplish in less time and at lower cost than human lawyers can. Put differently, these are the tasks that human lawyers could carry out — indeed, in the not-too-distant past, routinely carried out — but that would exact an enormous cost if they were left to humans today. You might think of these as ‘volume’ tasks: If you put a million lawyers to work on these tasks, and gave them plenty of time, they could do a fine job.

Electronic discovery is the ideal example here. Theoretically, sure, lawyers alone could identify all the relevant documents hidden in terabytes of e-data, and the results they achieved likely would not differ substantially from what e-discovery software could accomplish. But of course, the costs of this approach are mind-boggling: no judge would authorise it and no client would pay for it if there were any other way to carry it out.
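To make the comparison concrete, here is a deliberately tiny sketch of the supervised ‘technology-assisted review’ idea that underpins much e-discovery software: reviewers label a small seed set of documents, a simple model learns from those labels, and the remaining documents are scored so the likeliest-relevant ones are read first. Every document, label and score below is invented for illustration; real platforms are vastly more sophisticated.

```python
# Illustrative sketch only: a tiny "technology-assisted review" loop.
# Lawyers code a small seed set as relevant (1) or not (0); a simple
# classifier then scores the unreviewed documents so humans can
# prioritise the likeliest-relevant ones. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "Email discussing the disputed supply agreement and late delivery",
    "Lunch menu for the office holiday party",
    "Draft amendment to the supply agreement pricing schedule",
    "Newsletter about the company softball league",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the dispute, 0 = not

unreviewed = [
    "Internal memo on penalties for late delivery under the agreement",
    "Invitation to the quarterly all-hands meeting",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score the unreviewed documents and list the likeliest-relevant first.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```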

Or take contract analysis. Multinational A buys Multinational B, and the resulting Himalayan pile of contracts needs to be identified, reviewed, analysed and rationalised for various purposes. Give me a million lawyers with a million hours each, and I will render just as good a result as a cognitive-reasoning software program – so long as the merger can wait 40 years and the value of the new corporate behemoth can somehow afford the lawyers’ fees.

For the most part, I think you could also add legal research to this category. A million lawyers, each armed with a million-hour quota, could review every case in existence and gradually work their way down to the most salient decisions for a judge’s consideration. Mathematical-model methods can do the job at a tiny fraction of the cost, and while that is not the entirety of legal research by any means, this aspect of it fits the bill here.
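As a crude illustration of those ‘mathematical-model methods’, the sketch below ranks a few invented case summaries against a research question by TF-IDF cosine similarity – a toy stand-in for the much richer signals (citation networks, judicial treatment and so on) that commercial research tools actually use.

```python
# Toy illustration: rank invented case summaries against a research
# question by TF-IDF cosine similarity. Real research tools use far
# richer signals; the cases and summaries below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = {
    "Smith v Jones": "Negligent misstatement by an auditor caused investor losses.",
    "Re Acme Ltd": "Directors' duties on insolvency and creditor interests.",
    "Brown v State": "Admissibility of improperly obtained evidence at trial.",
}
question = "When is an auditor liable for negligent misstatement to investors?"

vec = TfidfVectorizer()
case_matrix = vec.fit_transform(list(cases.values()))
query = vec.transform([question])
similarities = cosine_similarity(query, case_matrix).ravel()

# Print the cases most similar to the research question first.
for name, score in sorted(zip(cases, similarities), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {name}")
```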

These three types of legal tasks are susceptible to the application of legal AI, not necessarily because the AI produces better results – though if you want to argue that machines are less error-prone and more consistent than overworked lawyers with aching eyes and mental fatigue, go right ahead – but because it produces substantially similar results at an enormously lower cost of effort, money and time. That is what matters to clients. So the client-centred rationale for using these methods is simple: ‘Reduce costs’.

2. Expertise and scarcity

But then there is the second category of tasks, and this one is much more interesting. This category contains all those tasks that legal AI can accomplish by imitating or replicating lawyers’ logical, analytical and advisory skills. If the first category of tasks was about ‘volume’, this one is about ‘expertise’, and the value propositions here are very different.

Take the group of tasks that Michael refers to as ‘expertise automation’. This is a fascinating area in which Michael’s own company, Neota Logic, has been a pioneer: he describes it as “the automation of substantive legal guidance and processes … [that] combines expert systems and other artificial intelligence techniques, including on-demand machine learning, to deliver answers to legal, compliance, and policy questions.”

Note that the tasks given to expert systems are not ‘volume tasks’. You could assign a million lawyers to answer a client’s question, but you will not necessarily get the right answer, because none of those lawyers may have the expertise required to give it. What you need is an expert lawyer who knows this area of law and will ask the right questions, tap into the appropriate set of facts and experience, follow the correct reasoning path, and render an accurate response.

Here is the client’s problem: this kind of expertise is scarce. Only a relative handful of lawyers possess the knowledge, experience and skill to answer specific kinds of client questions, and the narrower the field of expertise, the smaller the number. The scarcity of this resource raises its price, restricts its accessibility, and renders it prone to charging by the hour.

But suppose the client could distil this expertise into a complex database of probabilistic reasoning and computational decision trees that could provide substantially the same answers that the lawyer would give. This program would be invaluable to the client because it would increase the supply – and reduce the scarcity – of legal expertise.
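What might that look like in code? The fragment below is a toy sketch only – the area of law, the questions and the thresholds are all invented, and a genuine expert system encodes vastly more nuance – but it shows the basic move: an expert’s reasoning path is rewritten as explicit questions and branches that anyone, anywhere, can run.

```python
# Toy expert-system sketch: an expert lawyer's reasoning path encoded
# as explicit questions and branches. The rules below are invented for
# illustration and are not legal advice in any jurisdiction.
from dataclasses import dataclass

@dataclass
class EmploymentFacts:
    months_employed: int
    dismissed_for_misconduct: bool
    notice_weeks_given: int

def minimum_notice_weeks(facts: EmploymentFacts) -> int:
    """Follow the (invented) reasoning path an expert might apply."""
    if facts.dismissed_for_misconduct:
        return 0                      # branch 1: no notice owed
    if facts.months_employed < 12:
        return 1                      # branch 2: short service
    if facts.months_employed < 36:
        return 2                      # branch 3: medium service
    return 4                          # branch 4: long service

facts = EmploymentFacts(months_employed=40,
                        dismissed_for_misconduct=False,
                        notice_weeks_given=2)
owed = minimum_notice_weeks(facts)
print(f"Notice owed: {owed} weeks; given: {facts.notice_weeks_given} weeks")
print("Potentially short-changed" if facts.notice_weeks_given < owed else "Compliant")
```

Trivial as it is, a script like this can be copied and run endlessly at near-zero marginal cost, which is precisely what makes distilled expertise so much less scarce than the expert.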

This is, in fact, exactly the situation that professors Richard and Daniel Susskind describe in their recent book The Future of the Professions. In the Susskinds’ view, the historical influence and dominance of the professions is substantially grounded in the exclusivity that professionals maintain over access to expert knowledge and the ability to dispense it. Richard has stated that the book really seeks to answer the question: “How do we produce and distribute practical expertise as a society?” This is not a new topic: as far back as 2000, he defined an expert system as “the use of computer technology to make scarce … expertise and knowledge more widely available and more easily accessible.”

We might even be standing on that threshold today. Here is what Ben Hancock of The Recorder reported from the LegalWeek 2018 legal technology conference:

“Brian Kuhn, the global leader and co-founder of IBM Watson Legal, envisions – and it sounds like IBM is implementing – the creation of ‘cartridges’ of specialised legal information that can be deployed for various legal tasks. That is a mouthful, I know.”

But imagine this: “A firm that specialises in antitrust law ‘trains’ an AI algorithm to interpret documents relevant to that practice area. Then, the firm sells that piece of trained software, allowing a firm weak in antitrust to gain capacity (and removing the need, perhaps, to bring on a bunch of antitrust partners).”

Now IBM, it is true, perpetually seems to be ‘a few years away’ from releasing a game-changing legal technology breakthrough. But an ‘expertise cartridge’ is exactly what we are talking about here: distilled legal know-how, transferrable from user to user, distributed widely – the ‘democratisation’ of legal expertise, if you want to get political about it. And the primary buyer for a product like that would not be “a law firm weak in antitrust,” but the general counsel of a large corporation, who would be very interested in a 24/7, mobile and scalable source of antitrust expertise.
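In the crudest technical terms, a hypothetical ‘expertise cartridge’ might amount to something like the sketch below: one firm trains a document classifier on its own labelled examples, serialises the trained pipeline to a file, and another organisation loads and runs it without ever seeing the underlying training data. The documents, labels and file name are invented, and a production offering would involve far more than a saved model.

```python
# Toy "expertise cartridge": firm A trains a classifier on its own
# labelled documents, saves it to a file, and firm B (or a general
# counsel) loads and uses it. All data and labels are invented.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# --- Firm A: train and export the "cartridge" ---
docs = [
    "Agreement fixing resale prices among competing distributors",
    "Standard employment contract with confidentiality clause",
    "Memo on coordinating tender bids with a competitor",
    "Office lease renewal for the Melbourne premises",
]
labels = ["antitrust risk", "routine", "antitrust risk", "routine"]

cartridge = make_pipeline(TfidfVectorizer(), LogisticRegression())
cartridge.fit(docs, labels)
joblib.dump(cartridge, "antitrust_cartridge.joblib")

# --- Firm B / in-house team: load and use it ---
loaded = joblib.load("antitrust_cartridge.joblib")
print(loaded.predict(["Draft pricing discussion with rival supplier"]))
```

The file, not the firm, becomes the unit in which the expertise travels.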

The same analysis applies to the remaining set of legal AI tasks, ‘analytics and prediction’. I am coming around to the view, recently enunciated by Sarah Sutherland and Sam Witherspoon, among others, that we will not achieve effective litigation prediction from the distillation of court decisions alone – the data points are too few and insufficiently robust. But in broad terms, ‘outcome prediction’ is the archetypal lawyer function: answering the recurring client question, “What’s going to happen in my situation?” Again, from the client’s perspective, this is not a problem of volume but of the scarcity of the resources available to answer a legal question.
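To see what ‘outcome prediction’ means computationally – and bearing in mind the data-scarcity caveat just noted – here is a caricature: a model fitted to a handful of invented past matters, each described by a few made-up structured features, asked to estimate the chance of a favourable result in a new one. Nothing about it should be taken as a claim that this works at scale.

```python
# Caricature of outcome prediction: fit a model on a handful of
# invented past matters described by structured features, then ask
# "what's going to happen in my situation?" for a new matter.
# Real litigation data is far scarcer and messier than this.
from sklearn.linear_model import LogisticRegression

# features: [claim amount ($000s), documentary evidence strength 0-1, prior similar wins]
past_matters = [
    [500, 0.9, 3],
    [120, 0.2, 0],
    [800, 0.7, 2],
    [ 60, 0.1, 1],
    [300, 0.8, 4],
    [900, 0.3, 0],
]
outcomes = [1, 0, 1, 0, 1, 0]  # 1 = favourable result, 0 = not

model = LogisticRegression(max_iter=1000).fit(past_matters, outcomes)

new_matter = [[400, 0.6, 2]]
print(f"Estimated chance of a favourable outcome: "
      f"{model.predict_proba(new_matter)[0, 1]:.0%}")
```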

Now, let us pull back for a moment and write ourselves a reality check. We do not have the means today to build programs that can render detailed legal analyses of complex problems or advise with statistical confidence where clients’ current legal circumstances are likely to lead them. Nor are we remotely on track to get there. But if you want to know what the Holy Grail of Automated Legal Services looks like, that is it. And considering the potential payoff for anyone who finds it, you know that a lot of smart people are trying to find the right combination of jurisprudential data, trial lawyer experience, arbitration outcomes, negotiation principles, tribunal decisions, and human game theory that will unlock this prize.

A new framework

So, let us return to the goal I set for myself at the start of this article: Figuring out a better framework for talking about AI in legal services. I would suggest we classify legal AI offerings according to the type of market problem they aim to solve. I have proposed two categories of these problems.

1. The volume costs of lawyer effort.

2. The scarcity costs of lawyer expertise.

There are probably others, and you could break these categories down into finer classifications, but these should do for a start. Here is a basic matrix of these problems, their proposed AI solutions, and the likely impact of those solutions on lawyers and law firms.

[Matrix: market problem | proposed AI solution | likely impact on lawyers and law firms]

Finally, because everyone loves fighting over nomenclature, here is a potential naming protocol.

1. I would suggest the term ‘Volume AI’ for those applications that accomplish high-volume legal tasks far more quickly and efficiently than human lawyers do, to generate great cost and time savings for clients.

2. I would suggest the term ‘Expertise AI’ for those applications that make scarce legal expertise widely available in computerised form, to generate greater accessibility for clients to the legal answers they need.

Volume AI would refer to any technology that reduces the time and effort lawyers must spend on high-volume tasks; Expertise AI would refer to any technology that increases clients’ access to valuable but scarce legal expertise.

Maybe we can eventually do away with the term ‘AI’ altogether in this area, but let us start small.

Jordan Furlong is a speaker, author, and legal market analyst who forecasts the impact of changing market conditions on lawyers and law firms. He has given presentations to audiences in the United States, Canada, Europe and Australia to law firms, state bars, courts and legal associations. Jordan is a Fellow of the College of Law Practice Management and a member of the Advisory Board of the American Bar Association’s Center for Innovation. He writes regularly about the changing legal market at his website, www.law21.ca.