The Future of Generative Large Language Models and Potential Applications in LegalTech
Introduction
By Johannes (Jan) Scholtes, Chief Data Scientist IPRO and Geoffrey Vance, Partner, Perkins Coie LLP
[Foreword by Geoffrey Vance: Although this article is technically co-authored by Jan and me, the vast majority of the technical discussion is Jan’s work. And that’s the point. Lawyers aren’t supposed to be the Captains of the application of generative AI in the legal industry. A lawyer’s role is more like that of a First Officer, whose responsibility is to assist the primary pilot in getting to the ultimate destination. This article and all the exciting parts within it demonstrate how important data scientists and advanced technology are to the legal profession. The law firms and lawyers who don’t reach that understanding fast will be left behind. Those who do will lead the future of the legal industry.]
This blog was also posted on iPRO's website on April 14, 2023
Contents
Why is Human Language so Hard to Understand for Computer Programs?
What are Large Language Models?
The Zoo of Transformer Models: BERT and GPT
ChatGPT’s Limitations
How to Improve Large Language Models
Integrating LLMs with existing Legal Technology
Understand the Decisions: XAI
Why is Human Language so Hard to Understand for Computer Programs?
Human language is difficult for computer programs to understand because it is inherently complex, ambiguous, and context-dependent. Unlike computer languages, which are based on strict rules and syntax, human language is nuanced and can vary greatly based on the speaker, the situation, and the cultural context. As a result, building computer programs that can accurately understand and interpret human language is exceptionally complex and has been an ongoing challenge for artificial intelligence researchers since AI was first introduced. This is exactly why it has taken so long, within many of our lifetimes, to create reliable computer programs that can deal with human language.
In addition, for many different reasons, early language models took shortcuts, and none of them addressed all linguistic challenges. It was not until Google introduced the Transformer model in 2017, in the ground-breaking paper “Attention is all you need”, that a full encoder-decoder model, using multiple layers of self-attention, proved capable of addressing almost all of these linguistic challenges. The model soon outperformed all other models on linguistic tasks such as translation, question answering, classification, and text analytics.
What are Large Language Models?
Before we dive into the specifics of large language models, let’s first look at the basic definition. Large Language Models are artificial intelligence models that can generate human-like language based on a large amount of data they have been trained on. They use deep learning algorithms to analyze vast amounts of text, learning patterns and relationships between words, phrases, and concepts.
Some of the most well-known LLMs are the GPT series of models developed by OpenAI, BERT developed by Google, and T5 developed by Google Brain.
The Zoo of Transformer Models: BERT and GPT
As encoder-decoder models such as T5 are very large and hard to train due to a lack of aligned training data, a variety of cut-down models (together often called a zoo of transformer models) have been created. The two best-known models are BERT and GPT.
- BERT is a pre-trained (encoder-only) transformer-based neural network model designed for solving various NLP tasks such as Part-of-Speech tagging, Named Entity Recognition, or sentiment analysis. BERT is commonly used for classification tasks.
- GPT, on the other hand, is a language model that is specifically designed for text generation tasks. It uses a decoder-only transformer architecture. GPT is trained on large amounts of text data and can generate coherent, human-like text in response to a prompt. GPT is commonly used for tasks such as text completion and text generation.
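To make the contrast concrete, here is a minimal sketch of how the two flavors are typically used, assuming the Hugging Face transformers library and its default checkpoints (the models and example sentences are illustrative only, not taken from any legal product):

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Encoder-only (BERT-style): classify text, e.g. sentiment analysis.
classifier = pipeline("sentiment-analysis")  # downloads a default BERT-style fine-tuned model
print(classifier("The indemnification clause is unacceptable to our client."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}]

# Decoder-only (GPT-style): continue a prompt with generated text.
generator = pipeline("text-generation", model="gpt2")
print(generator("This Agreement is entered into by and between",
                max_length=40, num_return_sequences=1))
```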
ChatGPT is an extension of GPT. It is based on the latest version of GPT (3.5) and has been fine-tuned for human-computer dialog using reinforcement learning. In addition, several extra mechanisms help it stick to human ethical values. These two capabilities are major achievements!
The core reason ChatGPT is so good is that transformers are the first computational models that take almost all linguistic phenomena seriously. Building on Google’s transformers, OpenAI (with the help of Microsoft) has shaken up the world by introducing a model that generates text that can no longer be distinguished from human-written language.
ChatGPT’s Limitations
Much to our chagrin, ChatGPT is not the all-knowing General Artificial Intelligence most would like it to be. This is mainly due to the decoder-only architecture. ChatGPT is great for “chatting”, but one cannot control the factuality of what it produces. This is due to the lack of an encoder mechanism. The longer the chats, the higher the odds that ChatGPT will get off track or start “hallucinating”. Because text generation is a statistical process, this is a logical consequence: longer sequences are harder to control or predict than shorter ones.
Using ChatGPT on its own for anything other than casual chit-chat is not wise. Using it for legal or medical advice without human validation of the factuality of such advice is just dangerous.
How to Improve Large Language Models
The AI research community is aware of this, and there are a number of ongoing approaches to improve today’s models:
- Larger models: so far, larger models have always been better. However, there are drawbacks: energy consumption grows exponentially, and larger models are harder to understand and more vulnerable to adversarial attacks[1].
- Build models optimized for certain vertical applications such as legal and medical, or co-pilots for specific tasks such as searching, programming, document drafting, eDiscovery, and information governance. Currently, ChatGPT is trained using general data from the internet (Wikipedia, various blogs, websites, etc.). By training it with legal or medical data, quality will improve dramatically.
- More reinforcement learning. Sometimes this is also referred to as active learning. In both the AlphaGo success and ChatGPT’s, reinforcement learning methods made a big difference. For AlphaGo, the computer program learned to outperform humans by playing millions of games against itself. For ChatGPT, the models learned how to stick to human values and hold human-like dialogue by chatting for months with humans. This is a true human-in-the-loop form of machine learning. It can be done with humans, but if annotated data sets are available, one can also do this automatically. Companies such as Snorkel provide advanced methods to create such high-quality annotated datasets with a minimal amount of human effort.
- More controlled dialogues and prompt generation: We need encoders to drive the decoders. Instead of just taking some random section of a text as prompt (as is done with the initial BING integration) without really understanding the meaning of that text, we can analyze the text so that we understand the semantic roles and relations between the words, and then use that to generate better prompts and control the text generation. There are many solutions in the world of Artificial Intelligence, such as knowledge graphs or semantic networks. But even simple named-entity recognition in combination with relation extraction can already make a big difference, as the sketch after this list illustrates.
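As a minimal illustration of the named-entity half of that idea (relation extraction is left out), the sketch below assumes the spaCy library and its small English model; the way the entities are folded into the prompt is our own simplification, not how the BING integration actually works:

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def build_prompt(passage: str) -> str:
    """Extract named entities from a passage and state them explicitly in the prompt,
    instead of feeding the raw passage to the generator unanalyzed."""
    doc = nlp(passage)
    entity_list = "; ".join(f"{ent.text} ({ent.label_})" for ent in doc.ents)
    return (
        "Answer using only the facts below.\n"
        f"Known entities: {entity_list}\n"
        f"Source text: {passage}\n"
        "Question: "
    )

print(build_prompt("Perkins Coie LLP advised Acme Corp. on its 2021 merger in Seattle."))
```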
Currently, the Artificial Intelligence industry is working on all of the above improvements. In addition, one can also expect integrations with other forms of human perception: vision and speech. You may not know that OpenAI is also the creator of Whisper, a state-of-the-art speech-recognition model covering nearly 100 languages, and DALL-E 2, the well-known image generator, so adding speech to the mix is only a matter of time.
Integrating LLMs with existing Legal Technology
If you made it this far, you should by now understand that ChatGPT is not by itself a search engine, nor an eDiscovery data reviewer, a translator, knowledge base, or tool for legal analytics. But it can contribute to these functionalities.
1. Search
Full-text search is one of the most important tools for legal professionals. It is an integral part of every piece of legal software, assisting lawyers in case-law search, legal fact finding, and document template search, among other tasks.
Today’s typical workflow involves formulating a (Boolean) query, ranking results on some form of relevancy (jurisdiction, date, relevance, source, etc.), reviewing the results, and selecting the ones that matter. As the average query length on Google is only 1.2 words, we expect our search engine to find the most relevant hits with very little information. Defining the query can be hard and will always include human bias (the results one gets depend on the keywords used). What is more, reviewing the results of the search query can be time-consuming, and one never knows what one misses. This is where chatbots can help: by changing the search process into an AI-driven dialogue, we can change the whole search experience.
This is exactly what Microsoft does with the BING – ChatGPT integration, but with a few risks in the current implementation:
- As you can see from the examples used in the Microsoft demos, the queries are exceptionally long and highly detailed, contrary to today’s search behavior (remember, “1.2 words to find them all”).
- These queries are used to find relevant content using the BING search engine.
- A selection of text from the top results (which ones and which text is unclear) is then used to generate prompts for ChatGPT.
- Longer conversations led to BING going wild, resulting in Microsoft limiting the length of a conversation to 5 prompts.
- It is not completely clear where the information used by ChatGPT comes from. If there exists another person with the same name as the person I am looking for, ChatGPT will not know the difference and may make up facts by combining information about both.
As explained earlier, more focus on explaining where the results come from, the ability to exclude irrelevant information, and a better understanding of the meaning of the text used to drive the dialogue are probably needed to get better results. Especially when we plan to use this for legal search, we need more transparency and a better understanding of where the results come from.
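The retrieve-then-generate pattern described above can be sketched in a few lines. The search function, sources, and prompt wording below are placeholders invented for illustration; Microsoft’s actual pipeline is not public:

```python
# Sketch of a retrieve-then-generate search dialogue with traceable sources.

def search_index(query: str, top_k: int = 3) -> list[dict]:
    """Placeholder for a full-text search call returning ranked passages with their sources."""
    return [
        {"source": "case_law/smith_v_jones.txt",
         "passage": "The agreement provides for a 30-day written notice period."},
    ][:top_k]

def build_grounded_prompt(question: str, hits: list[dict]) -> str:
    """Label each retrieved passage with its source so every claim in the answer can be traced back."""
    context = "\n".join(f"[{hit['source']}] {hit['passage']}" for hit in hits)
    return (
        "Answer the question using only the sources below and cite the source identifier "
        "for every claim. If the sources do not answer it, say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

hits = search_index("Smith v. Jones notice period")
prompt = build_grounded_prompt("What is the notice period in Smith v. Jones?", hits)
# The prompt is then sent to the generative model; the "encoder-side" work is in how
# the passages are selected, labeled, and constrained before generation starts.
print(prompt)
```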
2. Contract Drafting
Contract drafting is likely one of the most promising applications of textual generative artificial intelligence (AI) because contracts are typically highly structured documents that contain specific legal language, terms, and conditions. These documents are often lengthy, complex, and require a high degree of precision, making them time-consuming and expensive to produce.
Textual generative AI models can assist in the drafting of contracts by generating language that conforms to legal standards and meets specific requirements. By analyzing vast amounts of legal data and identifying patterns in legal language, these models can produce contract clauses and provisions that are consistent with legal norms and best practices.
Furthermore, AI-generated contract language can help ensure consistency and accuracy across multiple documents, reduce the risk of errors and omissions, and streamline the contract drafting process. This can save time and money for lawyers and businesses alike, while also reducing the potential for disputes and litigation.
But, here too, we need to do more vertical training, and probably more controlled text generation by understanding and incorporating the structure of legal documents in the text-generation process.
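As a hedged illustration of such controlled generation, the sketch below imposes an explicit clause structure on the prompt. It assumes the (pre-1.0) openai Python package and the gpt-3.5-turbo model; the clause template and jurisdiction are made-up examples, and the output is only a draft for lawyer review:

```python
# Requires: pip install "openai<1.0"; assumes an OPENAI_API_KEY environment variable.
import openai

CLAUSE_TEMPLATE = """Draft a confidentiality clause with exactly this structure:
1. Definition of Confidential Information
2. Obligations of the Receiving Party
3. Exclusions
4. Term ({term_years} years)
Governing law: {jurisdiction}. Use plain, precise legal English."""

def draft_clause(jurisdiction: str, term_years: int) -> str:
    """Constrain generation with an explicit clause structure instead of a free-form prompt."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.2,  # low temperature: favor predictable, conventional wording
        messages=[
            {"role": "system", "content": "You draft contract language for review by a qualified lawyer."},
            {"role": "user", "content": CLAUSE_TEMPLATE.format(jurisdiction=jurisdiction,
                                                               term_years=term_years)},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(draft_clause("State of Washington", 3))
```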
In all cases, it is important to note that AI-generated contract language should be reviewed by a qualified lawyer to ensure that it complies with applicable laws and regulations, and accurately reflects the parties’ intentions. While AI can assist in the drafting process, it cannot replace the expertise and judgment of a human lawyer.
3. Providing Legal Advice
We have serious doubts whether generative Artificial Intelligence, as it stands, can provide meaningful help with legal advice. AI models lack the ability to provide personalized advice based on a client’s specific circumstances, or to consider the ethical and moral dimensions of a legal issue. Legal advice requires a deep understanding of the law and the ability to apply legal principles to a particular situation. Text generation models do not have this knowledge. So, without additional frameworks capable of storing and understanding such knowledge, using models such as ChatGPT is a random walk in the court.
4. eDiscovery
Analyzing ESI
E-discovery is a process that involves the identification, collection, preservation, review, and production of electronically stored information (ESI) in the context of legal proceedings. While e-discovery often involves searching for specific information or documents, it is more accurately described as a sorting and classification process, rather than a search process.
The reason for this is that e-discovery involves the review and analysis of large volumes of data, often from a variety of sources and in different formats. ChatGPT is unable to handle the native formats this data is in.
The sorting and classification process in e-discovery is critical because it allows legal teams to identify and review relevant documents efficiently and accurately, while also complying with legal requirements for the preservation and production of ESI. Without this process, legal teams would be forced to manually review large volumes of data, which would be time-consuming, costly, and prone to error.
In summary, e-discovery is a sorting and classification process because it involves the review and analysis of large volumes of data, and the classification and organization of that data in a way that is relevant to the legal matter at hand. While searching for specific information is a part of eDiscovery, it is only one aspect of a larger process.
ChatGPT is neither a sorting tool, nor a text-analytics or search tool. Models such as BERT, or text-classification models based on word embeddings or TF-IDF in combination with Support Vector Machines, are better, faster, and better understood for Assisted Review and Active Learning.
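For readers who want to see what such a classical pipeline looks like, here is a minimal assisted-review sketch with scikit-learn; the documents and responsiveness labels are toy examples, not real review data:

```python
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training set: documents already coded by reviewers (1 = responsive, 0 = not responsive).
documents = [
    "Board approved the asset purchase agreement with Acme.",
    "Lunch menu for the cafeteria next week.",
    "Due diligence findings on Acme's outstanding litigation.",
    "Reminder: parking garage closed on Friday.",
]
labels = [1, 0, 1, 0]

# TF-IDF features + linear SVM: fast to train, and the learned term weights can be inspected.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(documents, labels)

# Score a new, uncoded document; with real data this drives the review queue.
print(model.predict(["Draft schedule for the Acme purchase agreement signing."]))
```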
Query Expansion
Where generative AI can help is in the expansion of search queries. As we all know, humans are always biased. When humans define (Boolean) search queries, the search keywords chosen by human operators are subject to this bias. Generative AI can be very beneficial in assisting users to define a search query, coming up with keywords an end user would not have thought of. This increases recall and limits human bias.
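A minimal sketch of LLM-assisted query expansion, again assuming the pre-1.0 openai package and an illustrative model name; the suggested terms are proposals for the review team to vet, not keywords to run unchecked:

```python
# Requires: pip install "openai<1.0"; assumes an OPENAI_API_KEY environment variable.
import openai

def expand_query(seed_terms: list[str], n_terms: int = 10) -> list[str]:
    """Ask the model for synonyms, abbreviations, and jargon a custodian might have used."""
    prompt = (
        "List additional search keywords (synonyms, abbreviations, slang, common misspellings) "
        f"for an eDiscovery search about: {', '.join(seed_terms)}. "
        f"Return at most {n_terms} terms, one per line, with no commentary."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    text = response["choices"][0]["message"]["content"]
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

print(expand_query(["kickback", "bribe"]))
```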
Summarization
Legal documents can be lengthy and often contain boilerplate text. Summarization can provide a quick overview of the most important aspects of such a document. GPT is very good at summarization tasks. This can help reviewers or project managers gain a faster understanding of documents in eDiscovery.
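A quick local illustration of the idea, using the Hugging Face summarization pipeline; note that its default checkpoint is an encoder-decoder summarizer (BART-style) rather than GPT itself, so this is only a stand-in for the GPT-based summarization described above:

```python
# Requires: pip install transformers torch
from transformers import pipeline

summarizer = pipeline("summarization")  # default checkpoint is a BART-style encoder-decoder

long_clause = (
    "The Receiving Party shall hold and maintain the Confidential Information in strictest "
    "confidence for the sole and exclusive benefit of the Disclosing Party, shall carefully "
    "restrict access to Confidential Information to employees, contractors and third parties "
    "as is reasonably required, and shall require those persons to sign nondisclosure "
    "restrictions at least as protective as those in this Agreement."
)
print(summarizer(long_clause, max_length=40, min_length=10)[0]["summary_text"])
```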
E-discovery Response Letters
As an AI language model, ChatGPT could be used to draft written responses to eDiscovery requests or provide suggested language for meet and confer sessions. However, it cannot provide personalized legal advice or make strategic decisions based on the specific circumstances of a case.
Reporting in Natural Language
eDiscovery platforms enrich, filter, order, and sort ESI into understandable structures. Such structures are used to generate reports. Reports can be in structured formats (tables and graphs), or in the form of descriptions in natural language. The latter can easily be generated from the ESI database by using generative AI to create a more “human” form of communication.
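A minimal sketch of the first half of that step: turning structured review metrics into a prompt for a generative model. The matter name and figures are invented for illustration, and the completion call itself would follow the same pattern as the contract-drafting sketch above:

```python
# Structured review metrics as they might come out of an eDiscovery platform (toy values).
review_stats = {
    "matter": "Acme v. Initech",
    "documents_collected": 182_400,
    "documents_after_deduplication": 121_350,
    "documents_reviewed": 48_200,
    "responsive": 6_830,
    "privileged": 412,
}

def reporting_prompt(stats: dict) -> str:
    """Turn the structured figures into a prompt; the model must use each number exactly as given."""
    facts = "\n".join(f"- {key.replace('_', ' ')}: {value}" for key, value in stats.items())
    return (
        "Write a short status report in plain English for the case team, "
        "using only the figures below and stating each number exactly as given:\n"
        f"{facts}"
    )

print(reporting_prompt(review_stats))
```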
5. Information Governance
Here too, we can state that ChatGPT is not a text-analytics or search tool. Straightforward search engines (using keyword, fuzzy, and regular-expression search), or advanced text-classification models such as BERT, are better, faster, and better understood for compliance monitoring and information governance purposes.
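As a small illustration of the pattern-based side of this, here is a sketch of a regular-expression compliance scan; the patterns are simplified examples, not production-grade detectors:

```python
import re

# Simplified patterns for data that should not sit unprotected in a document repository.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def scan(text: str) -> dict[str, list[str]]:
    """Return every match per pattern so flagged documents can be routed for review."""
    return {name: pattern.findall(text) for name, pattern in PATTERNS.items()}

print(scan("Employee SSN 123-45-6789 was emailed together with card 4111 1111 1111 1111."))
```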
Understand the Decisions: XAI
Nobody is more interested in explainable Artificial Intelligence (XAI) than DARPA, the Defense Advanced Research Projects Agency. Already in 2016, DARPA started an XAI program.
Ever since, DARPA has sponsored various research projects related to XAI, including the development of algorithms and models that can generate explanations for their decisions, the creation of benchmark datasets for testing XAI systems, and the exploration of new methods for evaluating the explainability and transparency of AI systems.
XAI is one of the hottest areas of research in the AI community. Without XAI, the application of artificial intelligence is unthinkable in areas such as finance, legal, medical or military.
XAI refers to the development of AI systems that can provide clear and transparent explanations for their decision-making processes. Unlike traditional black-box AI systems, which are difficult or impossible to interpret, XAI systems aim to provide human-understandable explanations for their behavior.
XAI is not a single technology or approach, but rather a broad research area that includes various techniques and methods for achieving explainability in AI systems. Some approaches to XAI include rule-based systems, which use explicit rules to generate decisions that can be easily understood by humans; model-based systems, which use machine learning models that are designed to be interpretable and explainable; and hybrid systems, which combine multiple techniques to achieve a balance between accuracy and explainability.
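To make the “model-based” flavor concrete, here is a toy sketch with scikit-learn: for a linear classifier over TF-IDF features, each term’s weight multiplied by its feature value shows which words pushed the decision. This is our own minimal illustration, not a full XAI framework:

```python
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy review set: 1 = responsive, 0 = not responsive.
documents = [
    "Board approved the asset purchase agreement with Acme.",
    "Lunch menu for the cafeteria next week.",
    "Due diligence findings on Acme's outstanding litigation.",
    "Reminder: parking garage closed on Friday.",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)
clf = LinearSVC().fit(X, labels)

def explain(text: str, top_n: int = 5) -> list[tuple[str, float]]:
    """Rank the terms of one document by their contribution to the classifier's decision score."""
    x = vectorizer.transform([text]).toarray()[0]
    contributions = x * clf.coef_[0]          # per-term weight * feature value
    terms = vectorizer.get_feature_names_out()
    order = np.argsort(np.abs(contributions))[::-1][:top_n]
    return [(terms[i], round(float(contributions[i]), 3)) for i in order if x[i] != 0]

print(explain("Draft schedule for the Acme purchase agreement signing."))
```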
The development of XAI is an active area of research, with many academic and industry researchers working to develop new techniques and tools for achieving transparency and explainability in AI systems. Ultimately, the goal of XAI is to promote the development of AI systems that are not only accurate and efficient, but also transparent and trustworthy, allowing humans to understand and control the decision-making processes of the AI system.
For legal applications, a full XAI framework is essential. Without XAI, there can be no legal defensibility or trust.
Selected References
Vaswani, Ashish, et al. “Attention is all you need.” Advances in neural information processing systems 30 (2017).
Devlin, Jacob, et al. “Bert: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805 (2018).
Tenney, Ian, Dipanjan Das, and Ellie Pavlick. “BERT rediscovers the classical NLP pipeline.” arXiv preprint arXiv:1905.05950 (2019).
Radford, Alec, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. “Language Models are Unsupervised Multitask Learners.” (2019). GPT-2.
Brown, Tom B., et al. “Language models are few-shot learners.” arXiv preprint arXiv:2005.14165 (2020). GPT-3.
Ouyang, Long, et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).
Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence (First ed.). New York: Knopf.
Russell, Stuart (2017-08-31). “Artificial intelligence: The future is superintelligent”. Nature. 548 (7669): 520–521. Bibcode:2017Natur.548..520R. doi:10.1038/548520a. ISSN 0028-0836.
Russell, Stuart, Human Compatible. 2019.
[1] Textual adversarial attacks are a type of cyber-attack that involves modifying or manipulating textual data in order to deceive or mislead machine learning models.