Everlaw AI Assistant Features Guide (Beta)

 

Everlaw AI Assistant features are available only in a closed beta for select customers. For more information, please see this webpage.

 


 

Limitations of Large Language Models

Large language models (LLMs) are trained to perform next-word prediction (i.e., given some initial words, predicting the word that comes next). Because modern LLMs are trained on huge corpora of text, they can produce fluent text and accomplish seemingly sophisticated tasks based on this rather simple mechanism.   

While compelling functionality can be built on this core capability, it is important to remember that LLMs are statistical language machines with no notion of factuality, truthfulness, accuracy, or intent. LLMs are trained to produce fluent text, not accurate responses. You may already be mindful of this when using applications powered by LLMs, such as GPT or ChatGPT.

When using Everlaw AI Assistant, it is important to remember that LLMs can and do produce inaccurate, false, or misleading statements, even when additional guardrails are in place. We encourage you to exercise your own judgment and expertise to validate responses produced by Everlaw AI Assistant. Human validation is particularly important when you are using the system to make factual claims about people or events, or to create work product you intend to share with others.   

 

Important note on documents with graphical elements and tabular data

Because the output of Everlaw AI Assistant is generated based on available document text, the tools may not perform as well on documents with a lot of graphical elements or tabular data.

  • With graphical elements, there is no text describing the content or form of the image, so the LLM is essentially "blind" to that information.
    • Examples: slide decks with diagrams, documents with images or charts
  • With tabular (or other formatted) data, the formatting can be lost or "misunderstood," which can lead to inaccurate output.
    • Example: the LLM could report a numerical figure as belonging to one column of data when it is, in fact, associated with another column 

If you are using Everlaw AI Assistant on such documents, please exercise extra caution and ensure you are verifying the generated information. 

 

Project-level settings

Most beta features will be turned on by default for enrolled databases. One major exception is Coding Suggestions, which must be configured before use. For more information about Coding Suggestions, please see this section of the article.

Project admins can control and administer beta features, including turning them off and toggling feature-specific settings, from the Project settings > General > LLM Tools page. 

On this page, admins can turn off all beta features for the project by toggling “Large Language Model usage” off. In addition, there are some feature-specific settings, including:

  • Case description:

    • Provide a description of the case, which will be used as context for Everlaw AI features, including coding suggestions. For more information on how to best utilize the description field, please see the Coding Suggestions section later in this article or our Prompting Guide.

  • Review Window:

    • Enable/disable the Review Assistant in your project

    • Choose the desired length of document summaries 

    • Enable coding suggestions and configure categories and codes to allow users to generate suggestions for a document in the review window.

  • Storybuilder summaries:

    • Automatically generate summaries and descriptions for all new documents added to Storybuilder

    • Batch generate summaries and descriptions for all entries in Storybuilder lacking a description

    • Batch delete all AI generated summaries and descriptions

  • Writing Assistant:

    • Turn the Writing Assistant on or off in your project

 

🤖   Review Assistant

Background

The Review Assistant is made up of a number of "tasks" designed to help speed up review. Review Assistant tasks can be accessed from the AI context on the left side of the Review Window.


A subset of the tasks can also be run in batch from the results table. 


Before diving into the specifics of each task, here are a few things to keep in mind:

  • Document Text: If the PDF image has extractable text, Everlaw uses that as the basis of analysis. If not, Everlaw falls back on the OCR text file. For batch tasks, Everlaw will always use the OCR text file. 
  • Variability: Language models are, at their core, probabilistic language machines. This means there can be variability in the output, even if you run the same task or question against the same document text.
  • Limits: Everlaw currently limits the length of supportable document text to approximately 200 “pages” (a page being defined as ~500 words, or roughly 100,000 words in total). If you run a task against a document that exceeds this threshold, a banner will let you know that the response is based on only a subset of the available content.


  • Saved responses: Responses are saved per task, per document, per database, with the exception of the Custom Q&A ("Ask your own question") task, whose responses are saved per user. Any user can discard a saved response using the trashcan icon in the lower right of a task section; the next time the task is run, a new generation request will be sent to the LLM. The substance of the output for a given task on a given document should rarely change, so you generally do not need to regenerate output unless there are clear errors in the current output. 

 

 

Document summary and description

This task generates both (1) a detailed summary and (2) a short description of a document. If the document is on the shorter side, only a description will be generated. 


Once generated, detailed summaries can be viewed from within the Review Window. Descriptions can be viewed in the Review Window, on the Results Table, and in the Storybuilder Timeline. 

In the Review Window

To view or generate a summary/description, open the AI Context and navigate to the "Summary" tab. If a saved result exists for the document, the "Generate" button for the "Document summary" task will be replaced with an arrow. Clicking on the arrow will expand the summary container so that you can view the description and summary. If a saved result does not exist, click "Generate" to create the summary and description for the document. 


By default, only the description will be shown for the document. The description is a short summary covering the document in its entirety (up to the page limit, currently set at around 200 pages). It is generally no more than a paragraph in length.


If a document is on the longer side (generally 10+ pages), Everlaw also generates a more detailed summary, which summarizes the document in page chunks. The number of pages covered by each summary chunk scales with the size of the document, so longer documents have chunks that each cover more pages. To view the detailed summary, click the "Show detailed answer by section" link.


The detailed summary provides a more fine-grained summary of the document. You can also click on the page numbers associated with a chunk to jump to the first page in the range. This provides a way to navigate the document by the summary. 

Finally, a number of actions can be taken on the summary as a whole, or by individual chunks:

  • Copy to clipboard: copy the contents of the summary or summary chunk to your clipboard
  • Copy to note: copy the contents of the summary or summary chunk to the notes field in the Review Window
  • Delete: delete the existing summary/description for the document

In batch from the Results Table

Summaries and descriptions can also be generated in batch from the results table. For the beta, we've restricted this action such that no more than 1000 documents can be batch summarized at a time. 

To initiate a batch summarization, open a results table. Ensure that either (1) there are no more than 1000 docs in the table or (2) no more than 1000 docs are selected. Then, click the batch tool in the toolbar and select "Generate descriptions and summaries". 


A confirmation dialog will appear, where you can optionally add the "Description" column to your results table if it is not already part of your view. 


Once confirmed, the request will be submitted to Everlaw's summary queue. Depending on the size of the batch request and the number of other documents in the queue, a batch summarization can take anywhere from a few seconds to many hours to complete. As individual documents are completed, the Description column will populate with the description for that document. 


If the Description column is not visible on your results table, you can add it from the "Add or remove columns" action under "View" on the toolbar. 


You can also configure CSV exports to include the Description column. 


Topic summary


This task extracts topics from the document text, along with a description of each topic based on the text. Everlaw also identifies a potentially relevant area in the document for each topic. You can click the associated arrow button to jump to this relevant area for further review and verification. 

If Everlaw cannot identify a relevant area, the arrow will not appear. This does not necessarily mean that the LLM fabricated the topic or that the generated response is not justified by the text, though that is a possibility. Other common reasons a relevant area cannot be identified include small typos, formatting errors, or paraphrasing of the anchor text snippet we extract to enable this behavior. 

You can use the search bar at the top of the table to filter the rows based on a content search. Clicking the note icon will populate a note with the contents of the topic table. Clicking the "copy to clipboard" icon will copy the contents of the table in TSV format, which can be easily pasted into any spreadsheet application. 

 

People and orgs extraction

This task extracts people and organizations found in the text and generates a summary about each based on the text. Keep in mind that this task is not guaranteed to be exhaustive in its extraction. 

Clicking on the highlight icon next to a person will add the name as a hit highlight, allowing you to page through all references to that person in the text. The underlying content search is a simple OR search of the separate words comprising the name, so the search can be overinclusive (for example, a highlight for "John Smith" would also match standalone occurrences of "John" or "Smith"). 

The extraction is based on the exact text reference to an entity. Because of this, and because of our deduplication behavior, Everlaw cannot distinguish between entities that have the exact same text string as their names. Conversely, the same entity referred to by different text strings will be split into separate entries in the table. 

If the same entity (by text string) is extracted multiple times, the result is deduplicated and added as a secondary row under the first entry for that entity. 


The search bar at the top allows you to filter rows by content. 


Clicking the note icon will populate a note with the contents of the people table. Clicking the "copy to clipboard" icon will copy the contents of the table in TSV format, which can be easily pasted into any spreadsheet application.


Dates and numbers extraction


This task extracts references to dates, times, numbers, percentages, and similar values in the text (excluding, in theory, numbers used only for enumeration). It also extracts a relevant snippet from the text for each. Keep in mind that this task is unlikely to be exhaustive in its extraction, particularly for documents with many number patterns and dates. 

You can click on a snippet to navigate to it in the document, similar to a hit highlight; the snippet itself is not added as a search on the hit highlights tab. If the exact snippet cannot be found in the text when clicked, the snippet text will no longer be styled as a blue link. 


This does not necessarily mean that the LLM fabricated a given number/date pattern or snippet, though that is possible. Other common reasons why an extracted snippet is not matched with document text include: the LLM paraphrasing, the LLM adding punctuation where none exists, the LLM adding or removing words, and the LLM not accounting for formatting that exists in the actual text.  

The search functionality for this task only searches against the “Date or number” and “Description” fields; it does not search the snippet field. 


Sentiment


This task extracts instances of positive, negative, or harmful and abusive sentiment from the document text. It also generates an analysis of each identified instance of sentiment and identifies a potentially relevant area of the document for it. Clicking the associated arrow will jump you to that area of the document. In addition, you can filter the rows using the search bar at the top. 

Ask your own question


This task allows you to ask your own question of a document, effectively creating your own task. It can be accessed from the "Ask questions about this document" button in the footer of the AI context.


If the document is sufficiently long, the answer will first be returned by document chunk ("detailed" answer), with a final synthesized answer generated at the end ("condensed" answer).


When reviewing the detailed answer, you can click on a page range for a chunk to navigate to the first page in the range. You can also click on the note icon to populate a note field with the content of a generated response, or the clipboard icon to add the generated answer to your clipboard. 

Unlike other review window tasks, the response to this task is private to you. Everlaw will preserve the last 20 questions asked for a given document. 

💡 Coding suggestions

Coding Suggestions allow you to evaluate documents in accordance with your coding sheet. For each code configured for use in Coding Suggestions, Everlaw will suggest whether the code should be applied to the document and provide a rationale for its suggestion based on analysis of the document text. 

Coding suggestions can be used to QC existing review work or help speed up review of new documents.    

Configuring coding suggestions

Suggestions will only be generated for codes that have been configured with coding criteria on the Project Settings > General > LLM Tools page. Coding criteria are the background, context, and guidance about what a code is meant to capture and how to evaluate it against a document. 

Selecting codes to configure

Everlaw offers maximum flexibility in choosing which codes you want to configure for use in Coding Suggestions: you can configure your entire coding sheet or just a single code. To configure codes:

  • First, enable Coding Suggestions for the project by switching the appropriate toggle on. 
  • A table will be shown displaying all code categories in your project and a summary of configured codes. Use the toggles to the left of each category to enable or disable it for use in Coding suggestions.
  • For each category that is enabled, a pop-up will appear where you can add a description for the category and configure individual codes within the category. Only codes that have been enabled and configured with coding criteria will be included in Coding Suggestions. You can also access these configuration settings by clicking the “Edit Configuration” link for a given category. 

Any changes made to the configuration will only affect coding suggestions generated from that point forward; existing coding suggestions will not be affected. The exception is that existing suggestions from disabled categories will be hidden. 

Writing coding criteria

It is important to note that the quality and accuracy of the Coding Suggestions hinges on the quality of your coding criteria. Because of this, we recommend that you evaluate any new or changed coding criteria on a small sample of documents to see if any adjustments should be made. 

Here are some general tips and tricks to writing effective coding criteria:

  • Provide sufficient background and context: Just as you would with a reviewer who is completely new to your case, you must provide sufficient background context about the matter to facilitate effective coding suggestions. This context can include the history of the underlying dispute, the legal claims at issue, jargon or technical terms, and entities involved (including alternative ways an entity may be referred to in the text because of name changes or abbreviations). In general, the idea is to provide extra-textual information (information outside of the document text) that is important for understanding and analyzing textual information found in the documents. 
    • You can also think about the right scope of where to include this background and context. If the information is relevant for the case as a whole and across all categories of codes, then it should be included in the case description. If the information is relevant only to a particular category, then it should be included in the category description. If the information is only relevant to a particular code, then it should be included in that code’s criteria. 
  • Adjust your code criteria based on your goals for the code
    • If the code you are configuring is more extractive in nature (i.e., you can clearly point to or “extract” a piece of textual evidence that supports the code’s criteria with no additional context or explanation needed), then you should specify exactly what features or information in the text the system should look for in order to decide whether the code should be applied. 
      • For example, if you have a code called “Meetings with FDA regulators” meant to capture documents that evidence such meetings, your code criteria can be:
        • “Any direct evidence of interactions between employees of Company A and Food and Drug Administration (FDA) regulators, including, but not limited to, evidence or mentions of email communication, phone communication, in-person meetings, etc. Use any title, contact information (like email domains), or contextual information in the document to determine if the individuals involved are employees of Company A or the FDA.”
    • If the code you are configuring is more analytical in nature (i.e., additional context is required to explain how or why a piece of text relates to the concept you’re trying to capture with a code), then you may need to include more information and guidance on how the system should apply aspects of the text to that concept.
      • For example, if you have a “Breach of contract” code meant to capture documents relevant to analyzing the extent to which a breach has occurred, you may want to describe the clause at issue, specify the parties relevant to the analysis, and give examples of things that would evidence a breach. Your coding criteria could resemble the following:
        • “Any direct or circumstantial evidence relevant to analyzing whether Company A failed to meet its contractual obligations to Company B to deliver a functional software program that also has ‘an intuitive and high quality UI/UX’. Of particular issue is whether the delivered product met the ‘intuitive and high quality UI/UX’ standard set out in the contract. Relevant evidence can include, but is not limited to, discussions or instructions about the UI/UX between employees of Company A and B, exchange of intermediate prototypes or wireframes, and representations made about the state and status of development. In general, look for anything that may shed light on whether and how the parties discussed the UI/UX of the product, and implicit or explicit evidence of the understanding that the parties held around the concept of intuitiveness or quality, even if not described in those exact terms.” 

For more guidance and tips on creating effective coding criteria, please refer to the Coding Suggestions Prompting Guide, found here.

Generating coding suggestions

To generate coding suggestions for a document, open the AI context window while in the Review Window. Then navigate to the “Coding Suggestions” tab and click “generate”. 

For each of the configured codes, Everlaw will return a suggestion of whether the code should be applied to the document (YES or NO) and a rationale for the suggestion. If Everlaw identifies a potentially relevant area in the document, a link to it will also be shown as part of the rationale. Coding suggestions are grouped by category for easy identification. 

Suggestions within categories are further organized into actionable suggestions and “Other” suggestions. Actionable suggestions are:

  • Codes that Everlaw thinks should be applied, but are not currently applied
  • Codes that Everlaw does not think should be applied, but are currently applied

“Other” suggestions are suggestions that match the current coding of the document. You must expand the “Other” section to see information for these suggestions. 

You can easily apply, remove, or replace a code directly in the Review Assistant based on the suggestion. Note that there will not be any indication that the action was taken from coding suggestions, so be sure to verify that you agree with the suggestion before taking an action.  

To see the underlying coding criteria, click “View configuration” in the upper right of the Coding Suggestions tab. Coding criteria can only be edited by project admins on the LLM Tools page of Project Settings. Click here for a printable reviewer guide for coding suggestions.

📑  Storybuilder descriptions

On the Storybuilder Timeline view, a document's description field will automatically be populated with an AI-generated description if (1) it is empty and (2) a description is available. The displayed description is the same description generated from the Review Assistant task. If the underlying document for a Storybuilder entry does not have a generated summary/description, a "Summarize" button will appear in that document entry. If clicked, both a detailed summary and a description will be generated for the document. 


AI-generated descriptions will be displayed in italics and with the sparkle icon. This helps distinguish AI-generated descriptions from ones created or edited by humans (which are displayed in normal font). 


Once generated, you can freely edit the output, if desired. Any edited output will be saved as a human-generated description. Note that simply clicking into the input box and immediately saving counts as a modification, even if no changes were actually made to the generated text. 

In project settings, project admins have the ability to set:

  • The availability of the feature within the project
  • Whether all new documents added to Storybuilder should be auto-summarized without any additional user action
  • Whether to generate summaries for all Storybuilder document entries currently lacking a description
  • Whether to delete all AI-generated summaries; this is scoped to only those that are unmodified at the time the request is initiated


 

✒️   Writing Assistant

Background

Our goal with the Writing Assistant is to help you jumpstart writing tasks that draw heavily on information in your documents. We hope to provide tools that make the process of writing, from planning and research to putting words to paper, more efficient. These tools can be particularly helpful in:

  • Exploring and understanding narrative strands across multiple documents or different angles into your accumulated evidence
  • Leveraging, or catching up on, work that has already been done by the review team in terms of document curation and annotation
  • Getting over the hump of the blank page and creating early first drafts of evidence-centered work product, like fact memos and statements of facts

The flexibility of the tool makes it useful at various points in the writing process. But writing is a complicated, nuanced, and personal process where human intentionality, judgment, and style really matter. We don’t envision end-to-end automated writing; instead, we hope to build systems that you can leverage as you find the best way to articulate your case. 

 

What the Writing Assistant draws from when generating output

The Writing Assistant does not use the text of the documents themselves. Instead, Everlaw packages up the available information from the Storybuilder entry for each document, namely:

  • Bates
  • Title
  • Date
  • Description
  • Relevance

When creating a draft, you can scope this packaging to either (1) only the documents in the draft you are currently in, or (2) all documents in the story timeline. At minimum, there must be at least one eligible Storybuilder entry to enable the drafting tool. An “eligible” entry is one that has a Bates value and a description or relevance field with at least 50 characters. At maximum, the Writing Assistant supports 100 document entries (i.e., if there are more than 100 documents in your timeline or draft, you cannot use the Writing Assistant).

The better the inputs (the Storybuilder entry information), the better the drafting output.

 

Potential errors and pitfalls

Careful thought and design went into the Writing Assistant to reduce the possibility of errors and increase the reliability of the generated output. However, large language models, even with guardrails in place, are not infallible. Here are some issues you could encounter while using the Writing Assistant: 

  • Wrong, misleading, or missing citations
    • This can occur even if the sentence or statement itself can otherwise be justified by other information in the context
  • Incorrect or contextually irrelevant information
  • Subtle mischaracterization of document details and relevance
  • Entirely fabricated statements, or statements that are a mixture of accurate and inaccurate information 

Because of the inherent nature of LLMs, it is important to verify generated content using your own judgment, experience, and expertise. Everlaw provides easy pathways for verification by putting citations up front and making the underlying document text and Storybuilder entry information easily accessible. We hope these integrations provide the transparency and tooling for users to dive in and verify the substantive correctness of generated output. 

Note: Some of the issues described in this section can become more likely when a large number of documents are used as part of a drafting request. Where possible, you may want to use more targeted sets of documents with the Writing Assistant. That said, this is far from a consistent trend, and there are many legitimate and good use cases for drafting against a large set of documents. We encourage you to explore!

 

Wait times

Once you hit generate on a writing task, you can expect to wait 15 to 30 seconds before the generated content first begins to appear in your document. This delay is due to behind-the-scenes work the system does to increase the quality and reliability of the output. 

 

Statement of facts

This task generates a factual narrative based on the provided Storybuilder document entries. There are some optional fields you can fill out to help steer the generated output in particular directions based on your expressed intent or interest. This commonly manifests in the tone and substance of the generation, how the facts are framed and narrated, the details the system picks out from the document information, and the types of general knowledge the system can draw on. The optional fields are:

  • Area of law: Here you can input a brief description of the main area of law or legal issue.
  • Representing: Here you can input the party you are representing or the entity that you want to take the perspective of. 
  • Argument: Here you can provide a 1-2 sentence description of the overall theme or argument of the draft.
  • Point of emphasis: Here you can specify particular points or details that you want to include or highlight, if possible given the available document details.

Keep in mind that LLMs are sensitive to wording choice in their prompts, which can affect (sometimes drastically) the content, quality, tone, style, and structure of the output. If you are not getting quite what you expected given what you know about the document details, experimenting with the wording of the additional directions may help. 

 

Custom

This is an extremely open-ended option: you can experiment with anything you can think of when instructing the system to draft based on document details. 

Some examples might be comparing and contrasting evidence you have, critiquing the sufficiency of evidence, getting a sense of how the evidence can be narrated on either side of an issue, etc. 
