New LegalTech Applications: From Algorithms versus People to Algorithms and People

This blog post brings us to another interesting point: friends and foes alike now agree that in many areas, artificial intelligence is not only faster and cheaper than humans but also better and more consistent. When the quality of certain human actions is measured [1], we see that it varies considerably: different people make different decisions, even when they receive the same instructions in advance. This is, of course, the result of our personal interpretations. Moreover, the same person often makes different decisions at different times. This too is normal, because humans are adaptive beings who learn from their actions. However, these differences can also be the result of our mood at the time or of an (unnoticed) bias.

That we are even inconsistent in our inconsistency makes daily practice difficult. Especially with simple, repetitive (boring) tasks, we see major differences in the outcomes of human decisions [2]. Error rates can even reach 70 percent! Computers are not perfect either, but their mistakes are far more consistent than ours, which makes them much easier to correct afterwards.

In daily legal work, there are many such simple actions: answering public records requests, redacting personal information before disclosure, searching within two million legal judgments, reading through long contracts, etc. Scientific research consistently concludes that, for these kinds of legal applications, computers beat people not only in speed and cost but certainly also in quality (Blair et al., 1985; Grossman et al., 2011). As a result, for eDiscovery the US Federal Courts not only allow “search with machine learning,” but in many cases also recommend it or even make it mandatory [3] [4].

Active learning, the machine learning algorithm on which these systems are based, can be seen as a form of “human-in-the-loop” machine learning, in which a legal review specialist teaches a computer program, in many small steps, what one is looking for. See Scholtes et al. (2021) for a full overview of how machine learning is used in a legally defensible manner in eDiscovery.
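As a rough illustration of the idea (not the algorithm of any particular eDiscovery product; the documents, labels, and word-overlap “model” below are invented for the sketch), an uncertainty-sampling loop might look like this:

```python
# A minimal active-learning sketch with uncertainty sampling.
# Everything here (documents, labels, the scoring function) is a toy
# invented for illustration; real eDiscovery systems use far richer models.

# Toy review pool: (document text, relevance label known only to the reviewer).
POOL = [
    ("contract breach damages", 1),
    ("contract liability clause", 1),
    ("team lunch schedule", 0),
    ("office party schedule", 0),
    ("contract settlement damages", 1),
    ("holiday party reminder", 0),
]

def score(doc, relevant_words):
    """Fraction of the document's words the model currently deems relevant."""
    words = doc.split()
    return sum(w in relevant_words for w in words) / len(words)

def active_learning(pool, rounds):
    """Each round, ask the human to label the document the model is least sure about."""
    relevant_words = set()
    unlabeled = list(pool)
    labeled = []
    for _ in range(rounds):
        # Uncertainty sampling: a score near 0.5 means the model is unsure.
        unlabeled.sort(key=lambda d: abs(score(d[0], relevant_words) - 0.5))
        doc, label = unlabeled.pop(0)   # the human reviewer supplies this label
        labeled.append((doc, label))
        if label == 1:                  # learn vocabulary from relevant documents
            relevant_words |= set(doc.split())
    return relevant_words, labeled
```

After a few rounds, the learned vocabulary scores unseen legal-sounding documents higher than office chatter, which is exactly the ranking a reviewer then uses to prioritize the remaining pile.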

Technology adoption is often gradual, but it continues. Kevin Kelly, one of the founders of Wired Magazine, states in his book, The Inevitable: Understanding the 12 Technologies That Will Shape Our Future, that there are seven steps to people’s adoption of technology (Kelly, 2016):

  1. A computer cannot possibly do the work I do.
  2. Later: OK, the computer can do a lot of my work, but it can’t do everything I do.
  3. Later: OK, the computer can do all the work I can do, except if the computer doesn’t work or crashes (which often happens), then I’m needed again.
  4. Later: OK, the computer works perfectly with no problems for routine things, but I still have to teach the computer how to do a new task by itself.
  5. Later: OK, OK. The computer can have my old boring job, because it’s clear that humans are not made for this kind of work.
  6. Later: Wow! Now that computers are doing my old job, my new job is a lot more interesting and pays better too.
  7. Later: I’m so glad the computer can’t possibly do the work I’m doing now, go back to #1.

Well… recognizable, right? Will computers ultimately be superior to humans? In practice, it is a bit more nuanced. Research in the medical field has shown that the highest quality can be achieved when computers and humans work together (Daugherty et al., 2018):

  • The computer then does the simple and boring work, for example, searching everything and presenting the best solutions to the people.
  • On the basis of this pre-selection, people then make the final decision, taking into account uncertainties, real-world knowledge, and experience.
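This division of labor can be sketched as a simple triage workflow (a hypothetical illustration: the keyword weights and the `accept` rule below stand in for a real model and a real reviewer):

```python
# Hypothetical machine-plus-human triage: the computer does the exhaustive
# ranking, the human applies judgment only to the short list it produces.

def machine_rank(documents, keyword_weights):
    """Score every document (the simple, boring, exhaustive part)."""
    def score(doc):
        return sum(keyword_weights.get(word, 0) for word in doc.split())
    return sorted(documents, key=score, reverse=True)

def human_review(shortlist, accept):
    """The reviewer's final call, modeled here as a simple callback."""
    return [doc for doc in shortlist if accept(doc)]

documents = [
    "draft merger agreement",
    "signed merger agreement",
    "cafeteria menu",
    "liability settlement memo",
    "weekly status update",
]
weights = {"merger": 2, "agreement": 1, "liability": 2, "settlement": 2}

shortlist = machine_rank(documents, weights)[:3]            # computer pre-selects
final = human_review(shortlist, accept=lambda d: "draft" not in d)
```

The point of the sketch is the order of operations: the machine reads everything and surfaces the most promising candidates, while the human only ever judges the short list.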

The reason for this success is that computers have a “faultless” memory, whereas people often “forget” things they do not encounter on a daily basis. Computers can also analyze information in greater detail than humans and do not overlook something “by mistake.” Some examples:

  • Failing to notice a brief but crucial textual comment in a fist-thick medical record.
  • Failing to recognize a medical condition that the specialist last encountered during training.
  • Being unaware of new insights that have recently been published and that the specialist has not yet read.

We also see these developments within the legal domain: more and more lawyers are being supported by technology. Just as we have replaced the typewriter with the word processor, and just as a judge lets himself be supported by a computer program when calculating alimony awards, we also see this in applications such as searching case law and reviewing email in fraud or competition investigations.

Moreover, this publication, which was originally written in Dutch, was translated completely using Google Translate. Only a few edits were required (mostly layout-related!). In fact, Google Translate used (better) words than the author (a non-native English speaker) would have come up with himself. This is another excellent example of recent progress in artificial intelligence.

[1] It is interesting to note that, in this context, lawyers do not really have a tradition of (quantitatively) measuring the quality of their work. This also makes comparing the performance of human actions with an algorithm difficult, if not impossible. See also Dolin, 2017.

[2] In fact, we are already satisfied when people agree 80 percent of the time; in the remaining 20 percent of cases they differ.

[3] In 2012, Federal Magistrate Judge Andrew J. Peck (SDNY) made a landmark decision in Da Silva Moore v. Publicis Groupe & MSL Group, 11 Civ. 1279 (February 24, 2012). In this case, Judge Peck ruled that computer-assisted document review should be “seriously considered for use” in major cases and that lawyers no longer “have to worry about being the 'first' or 'guinea pig' for judicial acceptance of computer-assisted review.” In 2018, Prof. van den Herik and the author gave a one-day course at a number of courts in which these developments were central and the judges also became acquainted with machine learning through hands-on exercises. See also: https://ssr.nl/2018/training-big-data-de-moeizame-dans-tussen-rechter-en-machine/

[4] See also the contribution of Scholtes and van den Herik in Scholtes et al. (2019) to the Moderate Lustrum Congress for a comprehensive overview of the successful use of legal technology in eDiscovery and legal review in particular.