The Technology Behind Everlaw AI

Everlaw’s AI Assistant brings the power of generative artificial intelligence (GenAI) to core litigation and investigation workflows. It takes the first pass of arduous tasks — such as reading, reviewing, and summarizing vast amounts of documents — off your plate, so you can save time and augment your work. It also cites its source text or documents where possible, allowing you to check the work efficiently. Ultimately, its output can guide you in determining how best to spend your time. In essence, it’s a smart intern. 

This article aims to introduce you at a high level to how GenAI can, and should, be applied to the legal field. We’ll:

  • Discuss what GenAI even is, touching upon machine learning, Large Language Models (LLMs), and how these models are trained
  • Identify how we at Everlaw think about the core principles of GenAI when applied to the legal field, and explain how that mindset has shaped our implementation of Everlaw’s AI Assistant
  • Address some common concerns, such as data security and AI reliability
  • Introduce you to your new Everlaw AI Assistant

Beta: Everlaw AI Assistant is currently in open beta and will not be visible in your project unless an administrator has opted into enabling it for your matter. Reach out to beta@everlaw.com to have a conversation about enabling Everlaw AI Assistant for your matters.

Understanding how GenAI works

Before getting started with Everlaw AI Assistant (or any form of GenAI, for that matter), it’s helpful to have a basic understanding of how GenAI works and what you can expect from it. In this section, we’ll define Machine Learning, GenAI, and GenAI’s branch most relevant to Everlaw: Large Language Models (LLMs). We’ll then discuss how LLMs are trained and the core competencies of LLMs.

Note: If you’re already familiar with how GenAI works, or if you prefer to dive right into learning about Everlaw’s implementation, feel free to skip to this article's section on Everlaw’s implementation of GenAI.

What is machine learning?

Machine learning is a branch of AI that is built to make predictions by adapting to and learning from varying input over an extended period of time. It is the most widely used form of AI. 

Unlike traditional computer science algorithms, which are designed to perform specific, well-defined tasks, machine learning code defines statistical models that learn and adapt based on the information fed to them. An algorithm can be expressed directly in code, whereas a model is essentially a giant spreadsheet of interconnected numbers. Those numbers are updated during training, which gives the model a huge amount of flexibility and learning ability.

Comparing traditional algorithms to machine learning is like comparing a calculator to a sports team's playbook: 2 + 2 will always equal 4, but a team’s playbook will be updated over the course of the season based on the team’s experiences and reflections. 

Humans write the code necessary for this tech to be possible, but the generated models learn from input and expand their knowledge. There isn’t any code to examine to understand what the model is actually doing. When you think about machine learning, think about models, not code. 

What is GenAI?

Generative artificial intelligence (GenAI) is a branch of AI that focuses on the creation of new content based on learned data patterns. These data patterns are determined when models are trained using machine learning fed by massive amounts of data.

GenAI models can serve a variety of purposes, but the ones we care about in regards to Everlaw AI are called Large Language Models (LLMs).

What are Large Language Models?

Large Language Models (LLMs) are GenAI models that specialize in reading and writing human-readable, natural language text. Think ChatGPT — you pass it a prompt (e.g. a question)  and it returns a response. 

The LLM neural network

Similar to our own human brains, LLMs are neural networks. Instead of having neurons and axons, they have nodes and edges. Information flows along the model’s edges from node to node, and calculations are made along the way based on the model’s existing statistical data patterns.
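To make the node-and-edge idea concrete, here is a minimal Python sketch of a single node, with invented weights. Values flow in along edges, each edge multiplies its value by a learned weight, and the node sums the weighted inputs and applies a simple activation function. Real LLMs chain billions of these together.

```python
# One node of a neural network, sketched with invented numbers.
# Each incoming edge carries a value and a learned weight; the node
# sums the weighted inputs plus a bias, then applies an activation.
def node(inputs, weights, bias):
    total = bias + sum(value * weight for value, weight in zip(inputs, weights))
    return max(0.0, total)  # ReLU: pass positive signals, block negative ones

# Two edges feeding one node:
output = node([0.5, 0.8], weights=[0.9, -0.3], bias=0.1)
print(round(output, 2))  # → 0.31
```

During training, it is these weight and bias numbers, repeated across billions of nodes, that get adjusted.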

In training, models are updated to better reflect the information in incoming data. There’s an important distinction to be made here though: models do not store the information that they are fed verbatim. Instead, they infer statistical data based on that information.

Important: Your data and the responses you receive via Everlaw AI Assistant are never used to fine-tune or improve the LLM they are sent to.

Next word prediction

LLMs are expected to take in some input (e.g. a question) and spit out a coherent answer. Just like us, models have to consider the meaning of words in relation to other words. This is accomplished by making next word predictions using word embeddings and transformers. 

Embeddings are a technique that uses numerical mappings to capture the meaning of words. Using embeddings, LLMs can perform mathematical computations to identify relationships between words.

For example, if we take the numerical embedding for “King”, subtract the embedding for “Man”, and add the embedding for “Woman”, we get “Queen”.
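That arithmetic can be sketched in a few lines of Python. The two-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of dimensions.

```python
# Toy word embeddings in a tiny 2-D space (invented numbers; real
# embeddings are learned and far higher-dimensional).
embeddings = {
    "king":  [0.9, 0.8],   # royalty-ish, male-ish
    "man":   [0.1, 0.8],
    "woman": [0.1, 0.2],
    "queen": [0.9, 0.2],
}

def analogy(a, b, c):
    """Compute a - b + c in embedding space, return the nearest word."""
    target = [embeddings[a][i] - embeddings[b][i] + embeddings[c][i]
              for i in range(2)]
    def dist(word):
        return sum((embeddings[word][i] - target[i]) ** 2 for i in range(2))
    return min(embeddings, key=dist)

print(analogy("king", "man", "woman"))  # → queen
```

Because meaning is encoded as numbers, "relationships between words" becomes ordinary vector arithmetic.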

Transformers are a component of the LLM neural network that allow models to understand the meaning of words in context. 

To get a sense of this, let’s compare two sentences:

  • The mouse nibbled on the carrot because it was hungry.
  • The mouse nibbled on the carrot because it was yummy.

In the first sentence, “it” refers to the mouse. In the second, “it” refers to the carrot. Like us, an LLM can distinguish the first reference from the second using the rest of the words in the sentence. 

How LLMs are trained

LLMs learn by example. As was mentioned in this article’s section about the LLM neural network, LLMs are fed billions to trillions of sentences of textual information as examples and use those examples to infer statistical data patterns. This happens over and over with massive loads of data. Combined with human feedback and prior learning, this training allows models to learn by guessing and being corrected, with each correction updating the numbers in the model so that it is more likely to predict correctly next time.

Fun fact: LLMs don’t need to be explicitly fed information on word definitions, grammar etc. They infer all of that from the examples they are fed.

Though it may seem simple, predicting the next word in a sentence is actually remarkably sophisticated. As LLMs get better at predicting during training, they develop what we at Everlaw consider to be four core competencies:

  • Fluency: LLMs can read and write in English and other languages, often with better grammar than most of us.
  • Creativity: These tools can create truly novel connections and ideas, whether analogies, poetry, or entirely new concepts. 
  • Knowledge: By training on billions to trillions of sentences of textual information, LLMs internalize much of the knowledge contained in that text.
  • Logical reasoning: In what is probably the most surprising emergent component, these tools can make inferences and connect the dots in ways few anticipated.

Inferring answers to our questions

LLMs output responses to our input using next word prediction. They’re models: they don’t have a database that explicitly stores the answer to any particular question. Instead, they use a combination of the question and their neural network to deduce a reasonable prediction based on statistical context. Essentially, they use a form of intuition to infer one word after another.

As an example, let’s think about the following question: “What color is the sky?”  An LLM can look at this question and (A) infer that their answer should begin with “The sky is” and (B) identify (via a transformer) that “color” is a critical word in the question. When it checks its neural network for a word that comes after “The sky is,” it will most likely guess “blue.” This results in the answer: “The sky is blue.”
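That prediction step can be sketched with a toy lookup table. A real LLM computes these distributions on the fly from its neural network; the probabilities below are invented for illustration.

```python
# A toy "language model": for each context phrase, a probability
# distribution over possible next words (invented numbers standing in
# for patterns inferred from training text).
next_word_probs = {
    "the sky is": {"blue": 0.72, "clear": 0.14, "grey": 0.08, "falling": 0.06},
    "the grass is": {"green": 0.81, "wet": 0.12, "tall": 0.07},
}

def predict_next(context):
    """Pick the most probable next word for a given context."""
    probs = next_word_probs[context.lower()]
    return max(probs, key=probs.get)

def answer(stem):
    # Greedily extend the stem one word at a time (just one step here).
    return f"{stem.capitalize()} {predict_next(stem)}."

print(answer("the sky is"))  # → The sky is blue.
```

Real models repeat this one-word-at-a-time step until the whole response is generated.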

Everlaw’s implementation of GenAI

Everlaw’s goal with Everlaw AI Assistant is to help you increase the speed and effectiveness of your document review and story building, while allowing you to focus on the aspects of litigation that are top priorities for you.

This section aims to provide you with a high level understanding of both how Everlaw AI Assistant is built to achieve these goals and how we expect it to be used. We’ll address common concerns regarding data security and AI reliability. We’ll also give you a brief introduction to your Everlaw AI Assistant feature set. 

Note: Everlaw AI Assistant uses OpenAI’s LLMs.

Prompting the LLM

Providing your LLM with the right information is key to getting accurate, appropriate responses. This is accomplished through a method called prompting. Prompting is the act of priming the LLM with input (such as questions, instructions, and relevant resources) to factor into its response.

You may be familiar with this concept if you’ve used applications like ChatGPT. The difference with using Everlaw is that — using OpenAI’s API — we are able to prompt the LLM with more information than you can pass to ChatGPT.

Depending on the task you ask your Everlaw AI Assistant to do, we prompt the LLM with task-specific instructions, questions, criteria, and your relevant data (more on data security in LLMs and your data). Some Everlaw AI tasks require you to specify several pieces of information, while others take just the click of a button. Additional criteria defined by Everlaw may include data points such as the expected length of the LLM’s response and the level of creativity the LLM applies.

The level of detail that Everlaw uses is essential for high stakes, high precision use cases, because it grounds the LLM on the facts at hand, rather than relying on embedded, possibly irrelevant, knowledge. 
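As a rough illustration, a grounding prompt for a summarization task might be assembled like the hypothetical sketch below. The instruction wording, parameters, and structure are invented stand-ins, not Everlaw’s actual prompts.

```python
# Hypothetical sketch of assembling a grounded prompt: task
# instructions, output criteria, and the relevant document text are
# combined into a single input for the LLM.
def build_summarization_prompt(document_text, max_words=150):
    return (
        "You are assisting with legal document review.\n"
        f"Summarize the document below in at most {max_words} words.\n"
        "Base your answer ONLY on the text between the markers; "
        "if something is not in the text, say so. "
        "Cite the passages you relied on.\n"
        "--- BEGIN DOCUMENT ---\n"
        f"{document_text}\n"
        "--- END DOCUMENT ---"
    )

prompt = build_summarization_prompt("Email from A. Smith re: contract renewal.")
print(prompt.splitlines()[0])  # → You are assisting with legal document review.
```

Grounding the model on supplied text in this way is what lets it answer from your documents rather than from its embedded training knowledge.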

LLMs and your data

When you use Everlaw AI Assistant, your data is temporarily sent to OpenAI’s LLMs for processing. While your data does momentarily leave Everlaw, we do have an agreement in place with OpenAI to address your data’s security.

Each data request is sent to OpenAI individually, over an SSL encrypted service, to process and send back to Everlaw. As of November 3, 2023, OpenAI only has servers in the United States. 

The data you submit and the responses you receive via Everlaw AI Assistant are not used to fine-tune or improve OpenAI’s models or service. Both are deleted from OpenAI’s servers after the request is completed.

The data you submit and the responses you receive are also not used to train models across customers or shared between customers. However, as is typical during any beta program, Everlaw uses the data you submit and the responses you receive to provide, evaluate, and improve Everlaw AI Assistant.

Building for AI reliability

As was discussed in this article’s section about how LLMs infer answers to our questions, LLMs generate content based on predictions. While these predictions are based on the statistical aggregation of massive amounts of data and tend to be reliable, LLMs do make mistakes (commonly referred to as “hallucinations”).

To reduce these hallucinations, we:

  • Integrate GenAI only into feature areas where we believe it will have the best, most reliable impact
  • Focus on knowledge from the four corners of the documents in your case, rather than relying on the model’s embedded knowledge of the law. Having direct access to the facts of your documents (instead of relying on statistical predictions) provides more reliably accurate results. This is essential for high stakes, high precision use cases. 
  • Adjust the level of creativity the model uses based on the task at hand (e.g. low creativity for tasks like summarization and higher creativity for tasks like connecting the dots to draft a statement of fact)
  • Ensure that GenAI responses cite source text or documents where possible. These citations allow you to check the work efficiently, just as you might with a smart intern. 
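The “level of creativity” mentioned above generally corresponds to a sampling parameter commonly called temperature. The sketch below uses an invented next-word distribution: low temperature makes the model nearly deterministic, while higher temperature spreads probability onto less likely words.

```python
import math
import random

def sample_with_temperature(probs, temperature, rng):
    """Rescale a next-word distribution by temperature, then sample.
    Low temperature sharpens the distribution (predictable output);
    high temperature flattens it (more varied, "creative" output)."""
    scaled = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for word, weight in scaled.items():
        r -= weight
        if r <= 0:
            return word
    return word  # float-rounding fallback

probs = {"blue": 0.7, "grey": 0.2, "falling": 0.1}
# Near-zero temperature: almost always the most likely word.
print(sample_with_temperature(probs, 0.05, random.Random(0)))  # → blue
```

For a task like summarization, a low temperature keeps the output anchored to the most likely (and most faithful) wording; a higher temperature allows the wider-ranging connections useful when drafting.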

Your AI Assistant’s Focus Areas

Everlaw AI focuses on two main feature areas in the platform. 

Review Assistant, which aims to assist you in document review, can help you by providing information such as:

  • Batch document summaries
  • Summaries by topic within a single document
  • Coding suggestions
  • Sentiment analysis
  • People and organization extraction
  • Dates and numbers extraction
  • Page and line references

Writing Assistant, which lives in Storybuilder, can help you by providing information such as:

  • Draft summaries
  • Document references in drafts

Both AI Assistants cite sources for you to reference when checking their work.

To learn more about the functionality of Everlaw AI, visit Getting started with Everlaw AI.

To browse recommended Everlaw AI workflows, visit Leveraging Everlaw AI Assistant in common workflows.
