This article is a deep dive on Coding Suggestions, which is a feature available as part of Everlaw AI Assistant.
- If you're looking for a starter guide for all Everlaw AI Assistant features, see this article.
- If you're looking for a general overview of Everlaw AI Assistant and information about our privacy and security standards, please see our AI Assistant FAQ.
- If you have feedback or questions, feel free to email us at beta@everlaw.com. We love hearing from our users.
Table of Contents
- Configuring Coding Suggestions
- Generating Coding Suggestions
- Sampling Coding Suggestions to improve your results
- Generating Coding Suggestions in batch
- Searching and exporting Coding Suggestions
Coding Suggestions automatically categorizes documents based on codes on your coding sheet. For each code configured for use in Coding Suggestions, Everlaw will suggest whether the code should be applied to the document and provide rationale for its suggestion based on analysis of the document text.
Coding suggestions can be used to speed up review of new documents, QC existing review work, and enable useful search/filtering options.
Configuring coding suggestions
Before users can generate coding suggestions in your project, a project admin must first enable the feature and write coding criteria on the Project Settings > General > AI Assistant page. Coding criteria provide background, context, and guidance about what codes are meant to capture and how to evaluate them against a document. The criteria consist of three description fields:
- Your description of the category
- Your description of the code
- Your description of the case
Adding a case description
Just as you would with a reviewer who is completely new to your case, you must provide sufficient background context about the matter to facilitate effective coding suggestions. To add a case description, navigate to the top of the AI Assistant tab and click "Edit case description."
Similar to how you would describe the case to a colleague with no familiarity with the matter, provide a concise summary describing the nature of the dispute. Include context and information that is generally relevant across all codes in your matter. This can include the legal claims at issue, jargon or technical terms, and key events and entities involved (including alternative ways an entity may be referred to in the text because of name changes or abbreviations).
Information that is relevant primarily to specific categories or codes does not need to be included in the case description; instead, include it in the category and code descriptions as described below.
Selecting codes to configure
Everlaw offers maximum flexibility in choosing which codes you want to configure for use in Coding Suggestions: you can configure your entire coding sheet or just a single code. To configure codes:
- First, enable Coding Suggestions for the project by switching the appropriate toggle on.
- A table displays all code categories in your project and a summary of configured codes. Use the toggles to the left of each category to enable or disable it for use in Coding Suggestions.
- For each category that is enabled, a pop-up will appear where you can add a description for the category and configure individual codes within the category. Unselect the checkbox next to the code's name to exclude it from your configuration. Everlaw will provide a Yes/No evaluation for each configured code. When two codes are exact opposites, we recommend only configuring one and using the No evaluations to identify the opposite (e.g. for two codes “responsive” and “non-responsive”, configure only “responsive,” and identify the “non-responsive” suggestions by searching/filtering for the No suggestions).
You can also access these configuration settings by clicking the “Edit Configuration” link for a given category.
Any changes made to the configuration will only affect coding suggestions generated from that point forward; existing coding suggestions will not be affected. You must click regenerate in the Review Window to update existing suggestions to reflect the new coding criteria.
Writing coding criteria
The quality and accuracy of the Coding Suggestions hinges on the quality of your coding criteria. Because of this, we recommend that you evaluate any new or changed coding criteria on a small sample of documents to see if any adjustments should be made.
Here are some general tips and tricks to writing effective coding criteria:
- Provide sufficient background and context: Include extra-textual information (information outside of the document text) that is important for understanding and analyzing the text found in the documents. This can include relevant acronyms, key events, jargon or technical terms, entities involved (including alternative names and email domains), etc.
  - Think about the right scope for this background and context. If the information is relevant to the case as a whole and across all categories of codes, include it in the case description. If the information is relevant only to a particular category, include it in the category description. If the information is only relevant to a particular code, include it in that code’s criteria.
- Adjust your code criteria based on your goals for the code:
  - If the code you are configuring is more extractive in nature (i.e., you can clearly point to or “extract” a piece of textual evidence that supports the code’s criteria with no additional context or explanation needed), then specify exactly what features or information in the text the system should look for in order to decide whether the code should be applied.
    - For example, if you have a code called “Meetings with FDA regulators” meant to capture documents that evidence such meetings, your code criteria could be:
      - “Any direct evidence of interactions between employees of Company A and Food and Drug Administration (FDA) regulators, including, but not limited to, evidence or mentions of email communication, phone communication, in-person meetings, etc. Use any title, contact information (like email domains), or contextual information in the document to determine whether the individuals involved are employees of Company A or the FDA.”
  - If the code you are configuring is more analytical in nature (i.e., additional context is required to explain how or why a piece of text relates to the concept you’re trying to capture with the code), then you may need to include more information and guidance on how the system should relate aspects of the text to that concept.
    - For example, if you have a “Breach of contract” code meant to capture documents relevant to analyzing the extent to which a breach has occurred, you may want to describe the clause at issue, specify the parties relevant to the analysis, and give examples of things that would evidence a breach. Your coding criteria could resemble the following:
      - “Any direct or circumstantial evidence relevant to analyzing whether Company A failed to meet its contractual obligations to Company B to deliver a functional software program that also has ‘an intuitive and high quality UI/UX’. Of particular issue is whether the delivered product met the “intuitive and high quality UI/UX” standard set out in the contract. Relevant evidence can include, but is not limited to, discussions or instructions about the UI/UX between employees of Company A and B, exchange of intermediate prototypes or wireframes, and representations made about the state and status of development. In general, look for anything that may shed light on whether and how the parties discussed the UI/UX of the product, and implicit or explicit evidence of the understanding that the parties held around the concept of intuitiveness or quality, even if not described in those exact terms.”
For more guidance and tips on creating effective coding criteria, please refer to the Coding Suggestions Prompting Guide, found here.
Generating coding suggestions
Coding suggestions can be generated from the Review Window or in batch from the Results Table. To generate coding suggestions for a document in the Review Window, open the AI context window. Then navigate to the “Coding Suggestions” tab and click “Generate”.
For each of the configured codes, Everlaw will return a suggestion of whether the code should be applied to the document (YES or NO) and a rationale for the suggestion. If Everlaw identifies a potentially relevant area in the document, a link to it will also be shown as part of the rationale. Coding suggestions are grouped by category for easy identification.
Suggestions within categories are further organized into actionable suggestions and “Other” suggestions. Actionable suggestions are:
- Codes that Everlaw thinks should be applied, but currently are not applied
- Codes that Everlaw does not think should be applied, but currently are applied
“Other” suggestions are suggestions that match the current coding of the document. You must expand the “Other” section to see information for these suggestions.
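Whether a suggestion is actionable is simply a comparison between the suggestion and the document's current coding. If it helps to see that logic spelled out, here is a minimal illustrative sketch (not Everlaw's implementation; the function and field names are hypothetical):

```python
def is_actionable(suggestion: str, code_applied: bool) -> bool:
    """Hypothetical sketch: a suggestion is actionable when it disagrees
    with the document's current coding."""
    if suggestion == "Yes" and not code_applied:
        return True   # suggests applying a code that is not currently applied
    if suggestion == "No" and code_applied:
        return True   # suggests removing a code that is currently applied
    return False      # matches the current coding, so it falls under "Other"

print(is_actionable("Yes", code_applied=False))  # True: apply suggested
print(is_actionable("No", code_applied=False))   # False: grouped under "Other"
```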
You can easily apply, remove, or replace a code directly in the Review Assistant based on the suggestion. Note that there will not be any indication that the action was taken from coding suggestions, so be sure to verify that you agree with the suggestion before taking an action.
To see the underlying coding criteria, click “View configuration” in the upper right of the Coding Suggestions tab. Project admins can edit existing category and code descriptions directly from this configuration dialog and can add new codes or update the case description from the AI Assistant tab of Project Settings.
Click here for a printable reviewer guide for coding suggestions.
Sampling coding suggestions to improve your results
To get the best results from coding suggestions, we recommend that you test your configuration on a small sample of documents after initial setup and after each time you update any description. We recommend the following workflow to strengthen your descriptions and optimize your suggestions:
1. Initial setup: first, toggle on coding suggestions and your desired categories and codes in Project Settings, and write category, code, and case descriptions following the guidelines in this article and in the Prompting Guide.
2. Select a sample set: the sample set should include at least a handful of documents responsive to each configured code and at least a handful of documents not responsive to each code.
   - If available, use documents that you or a trusted team have already reviewed and tagged for each code.
   - Otherwise, select documents to review to create a sample set.
3. Generate and read: from the review window or results table, generate coding suggestions for your sample documents and read the rationale accompanying each suggestion. Click the relevant area link, if available, and read the relevant document text.
4. Analyze and compare: do you agree with the suggestions and rationales? Do the codes applied to the document (if any) agree with the suggestions? If not, why? (A rough way to tally agreement on your sample is sketched after this list.)
   - Is there context or information not included in the coding configuration that is necessary to evaluate the document? (e.g., an acronym not defined in the configuration)
   - Does the rationale point to something more specific or broader than what you have in mind?
5. Share feedback with Everlaw: optionally share your feedback by clicking the thumbs up or thumbs down and specifying the code.
6. Edit your descriptions: if needed, update your descriptions by including additional details or adjusting your language. Project admins can update existing category/code descriptions directly from the review window or from Project Settings.
7. Regenerate and repeat: after editing your descriptions, regenerate coding suggestions on the same sample set and repeat steps 3 to 7 until you are comfortable with the level of accuracy.
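For the “Analyze and compare” step, it can help to tally how often the suggestions agree with your sample's existing coding. The sketch below shows one rough way to do that from a CSV export of the sample; the file name and column headers are placeholders you would adjust to match your actual export:

```python
import csv

# Placeholder file name and column headers; adjust to match your export.
agree = disagree = 0
with open("sample_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        suggested = row.get("Responsive suggestion", "").strip().lower() == "yes"
        applied = row.get("Responsive applied", "").strip().lower() == "yes"
        if suggested == applied:
            agree += 1
        else:
            disagree += 1

total = agree + disagree
if total:
    print(f"Agreement on sample: {agree}/{total} ({agree / total:.0%})")
```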
Generating Coding Suggestions in batch
Coding suggestions can be generated in batch from the results table. Before batch generating coding suggestions for large sets of documents, be sure to evaluate any new or changed coding criteria on a small sample of documents as described in the workflow above.
For the beta, we've restricted the number of documents per batch action: you may generate batch coding suggestions for up to 2,000 documents at a time.
To initiate a batch action for coding suggestions:
- Open a results table.
- Ensure that there are no more than 2,000 documents in, or selected on, the table.
- Then, click the batch tool in the toolbar and select the option to “Generate coding suggestions.”
After you select this option, a confirmation dialog appears, where you can optionally add the coding suggestions column to your results table.
From this dialog, you can also view and verify the coding configuration. Note that for very lengthy coding configurations containing several dozen codes, suggestions may only be generated for a subset of the codes. To ensure that desired codes are evaluated for a given batch, you can temporarily disable select categories and codes in Project Settings.
Once confirmed, the task is initialized and the request will be submitted to Everlaw's AI task queue. Depending on the size of the batch request and the number of other documents in the queue, a batch summarization or coding suggestion can take anywhere from a few seconds to many hours to complete.
Viewing the results
As individual documents are completed, the Coding Suggestions column will populate with the results.
The column contains all codes that the document was evaluated against (both the YES and NO codes), with the actionable suggestions listed first followed by the non-actionable suggestions in a lower opacity. The rationale for each suggestion is viewable only from the Review Window. You can filter the column for specific codes and specify “Yes”, “No”, or “Any” for the Suggested value. You can also select the “Actionable” checkbox to include only documents with at least one actionable suggestion.
If the columns for Description or Coding Suggestions are not visible on your results table, you can add them from the "Add or remove columns" action under "View" on the toolbar.
Searching and exporting Coding Suggestions
You can search for documents by coding suggestions using the “Suggested code” search term. This term allows you to search by:
- The suggested code
- What the suggestion is (Yes or No)
- Whether the suggestion is actionable based on the current coding status of the document
By default, the Actionable checkbox is selected, meaning that the search will return only documents for which an Apply or Remove action is recommended for the selected code.
Finally, you can configure CSV exports to include coding suggestions. Including coding suggestions in the CSV export adds a column for the “Yes” suggestions and a column for the “No” suggestions.
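If you process the export downstream, those suggestion columns can be read back out with standard CSV tooling. The sketch below assumes, purely for illustration, that the “Yes” and “No” columns contain semicolon-separated code names and that the headers look like the placeholders shown; check your actual export for the exact column names and delimiter:

```python
import csv

# Placeholder file name, column headers, and delimiter; verify against your export.
with open("coding_suggestions_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        yes_codes = [c.strip() for c in row.get("Suggested (Yes)", "").split(";") if c.strip()]
        no_codes = [c.strip() for c in row.get("Suggested (No)", "").split(";") if c.strip()]
        doc_id = row.get("Begin Bates", "<document>")  # placeholder identifier column
        print(doc_id, "| Yes:", yes_codes, "| No:", no_codes)
```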