
Where Do We Come From?: LegalTech in the 1980s

“I propose to consider the question, ‘Can machines think?’” Alan Turing asked this question in his groundbreaking 1950 article “Computing Machinery and Intelligence,” long before computers showed any kind of intelligence [1].

Long before computers played any role in legal proceedings, Jaap van den Herik, professor of Law and Technology at the Law Department of Leiden University in the Netherlands, asked the question “Can computers judge?” in his inaugural address in 1991 (van den Herik, 1991). His answer at the time was: “Yes, computers can rule on assigned areas of law” (p. 33) [2].

Richard Susskind, professor at Oxford University in the United Kingdom, was another visionary who, early on, developed a rule-based expert system that could “legally reason in sub-areas and support lawyers on subjects outside their expertise” (Susskind, 1987). Today we indeed see that computers can rule on assigned sub-areas (Susskind, 2019).

Van den Herik also stated in 1991 that “Anyone who sees the function of humane justice in our world as regulating the interaction between people will notice that the computer displaces many a controller. I cannot take away from you your possible mourning over this, but the law suffers no loss.”

In this light, it is important to recognize that different judges reach different decisions based on the same principles. The same applies to official decisions. This leads to a sense of injustice among citizens, something we do not want [3].

A recent study, “Is justice blind or myopic? An examination of the effects of meta-cognitive myopia and truth bias on mock jurors and judges,” takes another interesting look at people who administer justice (Myrto et al., 2020). It investigated the phenomenon in which American judges and juries who are exposed to untruths, which they know to be untruths or which are part of evidence that is not legally admissible, nevertheless (unknowingly) include them in their final deliberations.

People are not really able to distance themselves from their biases. Wikipedia lists more than 200 forms of human bias [4] (Kanaan, 2020). We might as well ask ourselves: “Can people judge that well?”

The breakthrough of judicial computers was slower than expected. Lawyers and judges are conservative and risk-averse; nobody wants to be the first to apply a new technology in practice. It is not for nothing that the only professionals who still use WordPerfect [5] are lawyers. However, the slow adoption was also related to a sharp decline in the popularity of the aforementioned rule-based expert systems in the late 1990s. It turned out to be much more difficult than expected to extract the knowledge of experts. Michael Polanyi had already described this in the 1960s in his publications on the so-called tacit dimension: “We know more than we can tell” (Polanyi, 1967). A direct consequence is that experts cannot specify exactly how they make their decisions. Knowledge engineering, the technique for extracting and formalizing the rules of an expert system, is therefore not really possible.

Additional problems were that creating and maintaining the rules took too much time and was never really finished. It was impossible to describe all situations; there were always exceptional cases that had been overlooked. Another factor was that once a rule set exceeded a certain size, people could no longer handle its complexity: new rules had all kinds of unexpected side effects and led to unwanted lines of reasoning and results. We therefore had to wait for a better form of artificial intelligence: self-learning algorithms.
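To make the brittleness concrete, here is a minimal sketch of the kind of forward-chaining rule engine that underpinned 1980s expert systems. The legal domain, rule names, and facts below are invented purely for illustration and are not taken from Susskind's or any other real system; the point is only to show how conclusions follow mechanically from hand-written rules, and why every overlooked exception requires yet another rule.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived.

    Each rule is a pair (conditions, conclusion): if all conditions
    are present in the fact set, the conclusion is added.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


# Toy rules for a fictitious dismissal case (illustrative only).
RULES = [
    (("employee", "contract_terminated"), "dismissal"),
    (("dismissal", "no_valid_reason"), "unfair_dismissal"),
    (("unfair_dismissal",), "compensation_due"),
]

facts = forward_chain(
    {"employee", "contract_terminated", "no_valid_reason"}, RULES
)
print("compensation_due" in facts)  # True
```

Note how the engine happily concludes "compensation_due" even in cases a human expert would treat differently, say, during a probation period. Handling that requires an extra exception rule, which may in turn interact with other rules in unforeseen ways; scaled to thousands of rules, this is exactly the maintenance problem described above.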

More on this in subsequent blog posts!

 

[1] In the same article, Turing also listed nine reasons why he thought we humans would not accept intelligent machines. A large number of them are still raised today in discussions on the adoption of artificial intelligence.

[2] Do we want computers to replace judges? Honestly speaking, the author is of the opinion that this is probably not a good idea. Do we want computers to assist and advise judges in their rulings? Probably yes. This is already the case with specialized computer programs advising judges on alimony payments as part of divorce proceedings. From a philosophical point of view, however, an interesting question is: how far do we wish to go in supporting legal professionals? If computers start doing better than humans, are we then willing to replace humans, or do we prefer them to work in tandem, with the human making the final decision, the so-called human-in-the-loop approach?

[3] See, among others, the article of 11 September 2019 in NRC Handelsblad: https://www.nrc.nl/nieuws/2019/09/11/ontslag-na-ruzie-beste-in-den-haag-a3973039. It describes that employers who want to dismiss their staff as cheaply as possible are best off going to the court in The Hague.

[4] See: https://en.wikipedia.org/wiki/List_of_cognitive_biases

[5] WordPerfect (WP) is an old word processor that was especially popular from 1982 to the early 1990s. See also: https://en.wikipedia.org/wiki/WordPerfect. WordPerfect is hardly used anywhere outside the legal profession.