Everlaw AI Assistant Features Guide (Beta)

 

Everlaw AI Assistant features are only available in closed beta for select customers. For more information, please see this webpage.

 


 

Limitations of Large Language Models

Large language models (LLMs) are systems trained on huge corpora of text. Based on this training, these models are adept at predicting the next word given a sequence of preceding words. This is the core capability of LLMs.

While some compelling functionality can be built on top of this core capability, it is important to remember that LLMs are probabilistic language machines with no notion of things like truthfulness, accuracy, or intent. LLMs are trained to produce fluent text, not accurate responses. When using any feature backed by an LLM, it is important to remember that these systems can and do produce inaccurate, false, and/or misleading statements, even when additional guardrails are in place.

When using Everlaw AI Assistant features, we encourage you to exercise your own judgment and expertise to validate responses. Human validation is particularly important when you are using the system to make factual claims about people or events, or to create work product you intend to share with others.

 

Important note on documents with graphical elements and tabular data

Because the output for the Review Assistant and automated summaries is generated from the available document text, the tools will not perform as well on documents with a lot of graphical elements or tabular data. Tabular data is particularly challenging because, while the text is extracted, the formatting is often lost, which can lead to misleading output. If you are using the Review Assistant or automated summaries on such documents, please exercise extra caution and verify the generated information.

 

Project-level settings

When your database is enrolled in the beta, all beta features will be turned on by default. Project admins, however, have access to additional controls and settings for these features via the Project settings > General > LLM Tools page. 


On this page, admins can turn off all beta features for the project by toggling “Large Language Model usage” off. In addition, there are some feature-specific settings, including:

  • Storybuilder summaries:
    • Automatically summarizing new documents added to Storybuilder, which will populate the description field for that document
    • Batch generating summaries for all entries in Storybuilder lacking a description
    • Batch deleting all AI-generated summaries, which are the description fields with AI summaries that were not subsequently modified by users on the project
  • Writing Assistant:
    • Turning the Writing Assistant on or off in your project

 

🤖   Review Assistant

Background

Our goal with the Review Assistant is to help you and your team understand documents faster and more comprehensively. We are starting with some initial general-purpose “tasks” that reviewers can run against the text of the document they are currently viewing. These tasks can be accessed via the new AI context in the context panel.

Before diving into the specifics of each task, here are a few things to keep in mind:

  • Document Text: If the PDF image has extractable text, Everlaw uses that as the basis of analysis. If not, Everlaw falls back on the OCR text file. 
  • Variability: Language models are, at core, probabilistic language machines. This means there can be variability in the output, even if you are running the same task or question against the same document text.
  • Limits: For performance reasons, Everlaw currently limits the length of supportable document text to approximately 200 “pages” (a page being defined as ~500 words). If you run a task against a document that exceeds this threshold, a banner will let you know that the response is based on only a subset of the available content (for a rough sense of what this limit means in raw word count, see the sketch after this list).


  • Saved responses: Responses (except for “ask your own question”) are saved per task, per document, per database. If you run a task on a document that has a saved response, the saved response will be displayed instead of sending a new request to the LLM. Any user can discard a saved response using the trashcan icon in the lower right of a task section. This means the next time the task is run, a new generation request will be sent to the LLM. The substance of the output for a given task on a given document should rarely change, so you generally do not need to regenerate output, unless there are clear errors in the current output. 
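To give a rough sense of what the Limits bullet above means in raw text, the sketch below estimates whether a document’s extracted text exceeds the ~200-page threshold. The whitespace-based word counting and the helper name are assumptions for illustration only; Everlaw’s internal accounting may differ.

```python
# Illustrative sketch only: estimate whether extracted text exceeds the
# Review Assistant's ~200-"page" limit, where a "page" is ~500 words.
# How Everlaw actually counts words is an assumption here.

WORDS_PER_PAGE = 500
MAX_PAGES = 200
MAX_WORDS = WORDS_PER_PAGE * MAX_PAGES  # roughly 100,000 words

def exceeds_limit(extracted_text: str) -> bool:
    # Count whitespace-separated tokens as "words"
    return len(extracted_text.split()) > MAX_WORDS

# A ~250-"page" document (~125,000 words) would trigger the banner noting
# that the response is based on only a subset of the content.
print(exceeds_limit("word " * 125_000))  # True
```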

 

 

Document summary

This task generates a fairly detailed, narrative summary of the document text. If a document is on the lengthier side, Everlaw will provide this summary in chunks. The number of document pages covered by a chunk is dependent on the overall length of the document, with larger chunks for lengthier documents. 

You can click on the page numbers associated with a chunk to jump to the first page in the range. You can also click on the note icon associated with a chunk to automatically populate a note with the summary for that chunk. From there, you can make edits, if desired. 

 

Topic summary


This task extracts topics from the document text, along with a description of the topic based on the text. Everlaw also identifies a potentially relevant area in the document for a topic. You can click on the appropriate arrow button to jump to this relevant area for further review and verification. 

If Everlaw cannot identify a relevant area, the arrow will not appear. This does not mean that the LLM fabricated this topic or that the generated response is not justified by the text, though that is a possibility. Other common reasons why the relevant area cannot be identified include small typos, formatting errors, or paraphrasing of the anchor text snippet Everlaw extracts to enable this behavior.

You can use the search bar at the top of the table to filter the rows based on a content search. 


 

People and orgs extraction

This task extracts people found in the text and generates a summary about a given person based on the text. Keep in mind that this task is not guaranteed to be exhaustive in its extraction. 

Clicking on the highlight icon next to a person will add the name as a hit highlight. This allows you to page through all references to that person in the text. The content search underlying this is just a simple OR search of all the separate words comprising the name. This means the search can be overinclusive. 

The extraction is based on the exact text reference to an entity. Because of this, and because of our deduplication behavior, Everlaw cannot distinguish entities that have the exact same text string as their names. In addition, the same entity referred to by different text strings will be split into separate entries in the table.

If the same entity (by text string) is extracted multiple times, the result is deduplicated and added as a secondary row under the first entry for an entity. 
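To make the highlight and deduplication behavior described above concrete, here is a minimal sketch. The function names and data shapes are hypothetical (Everlaw’s implementation is not exposed); the sketch only illustrates why the OR-based search can be overinclusive and why entities are merged or split strictly by their exact text string.

```python
# Hypothetical illustration of the behavior described above.

def or_search_terms(name: str) -> str:
    # "Jane Doe" -> "Jane OR Doe": the highlight matches every "Jane" and
    # every "Doe" in the text, which is why it can be overinclusive.
    return " OR ".join(name.split())

def dedupe_entities(extractions: list[dict]) -> dict[str, list[str]]:
    # Group strictly by the exact extracted text string: two different people
    # both written as "J. Smith" collapse into one entry, while "Jane Doe"
    # and "J. Doe" remain separate entries even if they are the same person.
    grouped: dict[str, list[str]] = {}
    for item in extractions:
        grouped.setdefault(item["name"], []).append(item["summary"])
    return grouped

print(or_search_terms("Jane Doe"))  # Jane OR Doe
print(dedupe_entities([
    {"name": "J. Smith", "summary": "Sender of the memo"},
    {"name": "J. Smith", "summary": "Listed as an approver"},
]))
```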


Finally, the search bar at the top allows you to filter rows by content. 


 

Dates and numbers extraction


This task extracts references to dates and times, numbers, percentages, etc., in the text (excluding, in theory, numbers used only for enumeration). It also extracts a relevant snippet from the text. Keep in mind that it is unlikely this task will be exhaustive in its extraction, particularly for documents with a lot of number patterns and dates.

You can click on a snippet to navigate to it in the document, similar to a hit highlight. The snippet itself is not added as a search on the hit highlight tab. If the exact snippet cannot be found in the text when clicked, the snippet text will no longer be styled as a blue link.


This does not necessarily mean that the LLM fabricated a given number/date pattern or snippet, though that is possible. Other common reasons why an extracted snippet is not matched with document text include: the LLM paraphrasing, the LLM adding punctuation where none exists, the LLM adding or removing words, and the LLM not accounting for formatting that exists in the actual text.  
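The sketch below illustrates why a snippet that the LLM has even slightly paraphrased or re-punctuated will not link back to the document. The plain substring check is an assumption used purely for illustration; it is not Everlaw’s actual matching logic.

```python
# Hypothetical illustration: a snippet is rendered as a link only if it can
# be located in the document text. Small LLM edits break the match.

document_text = "Revenue grew 14 percent in Q3 2022 despite supply issues"

snippets = [
    "grew 14 percent in Q3 2022",     # exact match -> rendered as a link
    "grew 14 percent in Q3, 2022",    # LLM added a comma -> plain text
    "grew roughly 14 percent in Q3",  # LLM added a word -> plain text
]

for snippet in snippets:
    print(f"{snippet!r}: {'link' if snippet in document_text else 'plain text'}")
```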

The search functionality for this task only searches against the “Date or number” and “Description” fields; it does not search the snippet field. 


 

Sentiment


This task extracts instances of positive, negative, or harmful and abusive sentiment from the document text. It also generates an analysis of an identified instance of sentiment and identifies a potentially relevant area of the document for a given instance of sentiment. Clicking on the associated arrow will jump you to the potentially relevant area of the document. In addition, you can filter the rows using the search bar at the top. 

 

Ask your own question


This task allows you to ask your own question/create your own task. If the document is sufficiently long, the answer will be returned by document chunk. You can click on a page range for a chunk to navigate to the first page in the range. You can also click on the note icon to populate a note field with the content of a generated response. 

Unlike other review window tasks, the response to this task is (1) private to you, (2) not saved, and (3) cleared immediately when you close the window or move on to a new document. Thus, if you wish to preserve the response, you will have to use the notes tool or copy and paste the response to a file.

 

📑  Storybuilder summaries

This feature uses Generative AI to populate the “description” field for a Storybuilder entry. Summaries are generally in the range of 100-200 words. A maximum of 5000 summaries can be generated over the lifetime of a project.  

Once generated, you can freely edit the output, if desired. Any generated description that has not been modified is displayed in italicized, rather than normal, font. Note that simply clicking into the input box and immediately saving counts as a modification, even if no changes were actually made to the generated text.

You can generate these summaries:

  • In the Storybuilder tab of the full screen review window

  • In areas within Storybuilder where the user can edit the Storybuilder entry for a document or testimony snippet

 

In project settings, project admins have the ability to set:

  • The availability of the feature within the project
  • Whether all new documents added to Storybuilder should be auto-summarized without any additional user action
  • Whether to generate summaries for all Storybuilder document entries currently lacking a description field
  • Whether to delete all AI summaries, which we’ve scoped to only the ones that are unmodified at the time this request is initiated


 

✒️   Writing Assistant

Background

Our goal with the Writing Assistant is to help you jumpstart writing tasks that draw heavily on information in your documents. We hope to provide tools that can make the process of writing – from planning and research to putting words on the page – more efficient. These tools can be particularly helpful in:

  • Exploring and understanding narrative strands across multiple documents or different angles into your accumulated evidence
  • Leveraging, or catching up on, work that has already been done by the review team in terms of document curation and annotation
  • Getting over the hump of the blank page and creating early first drafts of evidence-centered work product, like fact memos and statements of facts

The flexibility of the tool makes it useful at various points in the writing process. But writing is a complicated, nuanced, and personal process where human intentionality, judgment, and style really matter. We don’t envision end-to-end automated writing; instead, we hope to build systems that you can leverage as you find the best way to articulate your case.

 

What does the Writing Assistant draw from when generating output?

The Writing Assistant does not use the text of the documents themselves. Instead, Everlaw packages up available information from the Storybuilder entry for a document, namely:

  • Bates
  • Title
  • Date
  • Description
  • Relevance

When creating a draft, you can scope this packaging to either (1) only the documents in the draft you are currently in, or (2) all documents in the story timeline. At minimum, there must be one eligible Storybuilder entry to enable the drafting tool. An “eligible” entry is one that has a Bates value and a description or relevance field with at least 50 characters. At maximum, the Writing Assistant supports 100 document entries (i.e., if there are more than 100 documents on your timeline or draft, you cannot use the Writing Assistant).
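As a rough illustration of the eligibility rules above, the sketch below encodes them as a simple check. The field names and data structure are assumptions for illustration, not Everlaw’s actual data model.

```python
# Hypothetical illustration of the stated eligibility rules.

MIN_FIELD_LENGTH = 50
MAX_ENTRIES = 100

def is_eligible(entry: dict) -> bool:
    # Eligible = has a Bates value AND a description or relevance field of
    # at least 50 characters (field names are hypothetical).
    has_bates = bool(entry.get("bates"))
    long_enough = any(
        len(entry.get(field) or "") >= MIN_FIELD_LENGTH
        for field in ("description", "relevance")
    )
    return has_bates and long_enough

def drafting_enabled(entries: list[dict]) -> bool:
    # More than 100 documents on the timeline/draft disables the tool;
    # otherwise at least one eligible entry is required.
    if len(entries) > MAX_ENTRIES:
        return False
    return any(is_eligible(entry) for entry in entries)
```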

The better the inputs (the Storybuilder entry information), the better the drafting output.

 

Potential errors and pitfalls

Careful thought and design went into the Writing Assistant to reduce the possibility of errors and increase the reliability of the generated output. However, large language models, even with guardrails implemented, are not infallible. Here are some issues that you could encounter while using the Writing Assistant:

  • Wrong, misleading, or missing citations
    • This can occur even if the sentence or statement itself can otherwise be justified by other information in the context
  • Incorrect or contextually irrelevant information
  • Subtle mischaracterization of document details and relevance
  • Entirely fabricated statements, or statements that are a mixture of accurate and inaccurate information

Because of these inherent characteristics of LLMs, it is important to verify generated content using your human judgment, experience, and expertise. Everlaw provides easy pathways for verification by putting the citations upfront and making the underlying document text and Storybuilder entry information easily accessible. We hope these integrations provide the transparency and the tooling for users to dive in and verify the substantive correctness of generated output.

Note: Some of the issues described in this section can be more likely if a lot of documents are used as part of a drafting request. You may want to use more targeted sets of documents when using the Writing Assistant, where possible. That being said, this is far from a consistent trend, and there are many legitimate and good use cases for drafting against a large set of documents. We encourage you to explore!

 

Wait times

Once you hit generate on a writing task, you can expect to wait between 15 and 30 seconds before the generated content first begins to appear in your document. This delay is due to some behind-the-scenes work the system does to increase the quality and reliability of the output.

 

Statement of facts

This task generates a factual narrative based on the provided Storybuilder document entries. There are some optional fields that you can fill out to help steer the generated output in particular directions based on your expressed intent or interest. This commonly manifests in the tone and substance of the generation, how the facts are framed and narrated, the details that the system picks out from the document information, and the types of general knowledge the system can draw on. The optional fields are:

  • Area of law: Here you can input a brief description of the main area of law or legal issue.
  • Representing: Here you can input the party you are representing or the entity that you want to take the perspective of. 
  • Argument: Here you can provide a 1-2 sentence description of the overall theme or argument of the draft.
  • Point of emphasis: Here you can specify particular points or details that you want to include or highlight, if possible given the available document details.

Keep in mind that LLMs are sensitive to wording choice in their prompts, which can affect (sometimes drastically) the content, quality, tone, style, and structure of the output. If you are not getting quite what you expected given what you know about the document details, experimenting with the wording of the additional directions may help. 

 

Custom

This is an extremely open-ended option: you can experiment with anything you can think of when it comes to instructing the system to draft based on document details.

Some examples might be comparing and contrasting evidence you have, critiquing the sufficiency of evidence, getting a sense of how the evidence can be narrated on either side of an issue, etc. 
