Predictive Coding Intro and Creating a New Model

Table of Contents

  • What is predictive coding?
  • How does Everlaw’s predictive coding system work?
  • How should I interpret the predicted relevance rating?
  • It seems like there’s already a prediction model in my case, but I don’t remember setting one up. What’s going on?
  • When do predictions start generating?
  • When are predictions updated?
  • What information am I provided about any given prediction model?
  • How can I create a new prediction model?
  • What is a training set?
  • Can I search along predicted values?

What is predictive coding?

Predictive coding is a form of technology-assisted review. Though its exact implementation differs across platforms, the basic idea is that the system will generate predictions about the relevance of documents based on past review decisions.

Return to table of contents

How does Everlaw’s predictive coding system work?

Everlaw’s predictive coding system revolves around ‘models’. Each model is defined by criteria for relevance and criteria for irrelevance, built from the ratings and codes available in a case. Documents whose ratings and codes match either set of criteria are then analyzed to generate predictions. For example, you might create a model around privilege. Suppose you have four privilege-related codes on your coding sheet:

  • “Privilege: Attorney-Client”
  • “Privilege: Trade Secret”
  • “Privilege: Work Product”
  • “Privilege: Not Privileged”

For your prediction model, documents with any one of the first three codes meet the criteria for relevance, and documents with the last code meet the criteria for irrelevance. As reviewers work through documents, the platform analyzes their review decisions and generates a predicted relevance value for every document in the case, including documents that have not been human-reviewed. For this model, documents with high predicted scores are those the platform thinks are likely to be privileged, whereas documents with low predicted scores are those the platform thinks are likely not privileged.

The model will continuously learn from ongoing review work, and the accuracy of its predictions will increase as the number of human-reviewed documents increases. You can have multiple models running in a single case. To learn more about the implementation of the predictive coding engine, click here.
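
To make the idea concrete, here is a toy sketch of the general technique: train a text classifier on the documents reviewers have already coded, then score every document on a 0-100 scale. This is not Everlaw’s actual engine; the documents, codes, and choice of classifier below are invented purely for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Documents a reviewer has already coded: 1 = matches the relevance
    # criteria (any "Privilege: ..." code), 0 = matches the irrelevance
    # criteria ("Privilege: Not Privileged").
    reviewed_docs = [
        "attorney client memo regarding litigation strategy",
        "counsel opinion letter on the pending settlement",
        "weekly cafeteria menu and parking announcement",
        "routine shipping confirmation for office supplies",
    ]
    labels = [1, 1, 0, 0]

    # Documents that have not been human-reviewed still receive predictions.
    unreviewed_docs = [
        "draft response prepared at the request of outside counsel",
        "company picnic rsvp reminder",
    ]

    vectorizer = TfidfVectorizer()
    classifier = LogisticRegression().fit(vectorizer.fit_transform(reviewed_docs), labels)

    # Probability of relevance, rescaled to the 0-100 rating shown in the platform.
    scores = classifier.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1] * 100
    for doc, score in zip(unreviewed_docs, scores):
        print(f"{score:5.1f}  {doc}")

As more documents are coded, the training data grows and the scores are recomputed, which is why predictions improve with ongoing review.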

Return to table of contents

How should I interpret the predicted relevance rating?

Once a model starts running, each document in your case will be given a rating on a 0-100 scale. If you have multiple models in your case, a document will have a separate rating for each model.

Documents with a prediction rating closer to 100 are more likely to be relevant, as defined by the model’s criteria for relevance. Documents with a prediction rating closer to 0 are more likely to be irrelevant, as defined by the model’s criteria for irrelevance. A document’s predicted rating might change with ongoing review work as the model’s precision and accuracy improve. For example, if the two models in a case are “Rating” and “Coded: Andrew Fastow”, each document has a separate prediction rating for each of the two models.
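
If it helps to see the interpretation spelled out, here is a small illustration with made-up ratings; the cutoff of 70 is arbitrary, since the platform does not impose one:

    # Hypothetical prediction ratings for a single document, one per model.
    predictions = {
        "Rating": 87,                # high: likely relevant under the rating model
        "Coded: Andrew Fastow": 12,  # low: likely irrelevant under the Fastow model
    }

    # Where you draw the line is a review decision, not a platform constant.
    RELEVANCE_CUTOFF = 70

    for model_name, rating in predictions.items():
        verdict = "likely relevant" if rating >= RELEVANCE_CUTOFF else "likely irrelevant"
        print(f"{model_name}: {rating}/100 -> {verdict}")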

Return to table of contents

It seems like there’s already a prediction model in my case, but I don’t remember setting one up. What’s going on?

A default model based on the “hot”, “warm”, “cold” rating system is included with all cases on Everlaw. For this model, documents rated “hot” will be considered relevant, documents rated “cold” will be considered irrelevant, and documents rated “warm” will be considered of intermediate relevance. Using this information, the system generates a predicted relevance value for all documents within the case.

Because the default rating model is created automatically at the start of each case and requires no additional setup or maintenance, you can leverage rating predictions in your review even if you never create additional models.
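
In other words, the default model treats the ratings themselves as its training signal. A short sketch of that mapping (illustration only, not Everlaw’s internal representation):

    # How the default rating model interprets review ratings (conceptually).
    RATING_TO_SIGNAL = {
        "hot": "relevant",
        "warm": "intermediate relevance",
        "cold": "irrelevant",
    }

    print(RATING_TO_SIGNAL["hot"])   # relevant
    print(RATING_TO_SIGNAL["cold"])  # irrelevant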

Return to table of contents

When do predictions start generating?

A model will start generating predictions once your team has reviewed a couple hundred documents (~200 or 5% of your case, whichever is smaller), with at least 50 documents in the positive, or relevant, set and at least 50 in the negative, or not relevant, set.

For example, let’s say you have two models in your case, the “rating” and “related to Andrew Fastow” models. Your team has reviewed a total of 500 documents, with:

  • 100 rated “Hot” and 250 rated “Cold”
  • 25 coded “People: Andrew Fastow” and 10 coded “People: Not Andrew Fastow”

Given the above, the rating model will generate predictions, but the Andrew Fastow model will not, because it does not yet have enough reviewed documents.
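
Here is a minimal sketch of the activation rule described above, assuming the reviewed count is measured per model (documents matching either of that model’s criteria). The function and case size are invented for illustration; the thresholds mirror the numbers stated in this article.

    def generates_predictions(total_case_docs, positive_count, negative_count):
        """Sketch of the activation rule: ~200 documents or 5% of the case
        (whichever is smaller), with at least 50 positive and 50 negative."""
        reviewed_needed = min(200, 0.05 * total_case_docs)
        enough_reviewed = (positive_count + negative_count) >= reviewed_needed
        return enough_reviewed and positive_count >= 50 and negative_count >= 50

    # The two models from the example, in a hypothetical 10,000-document case:
    print(generates_predictions(10_000, positive_count=100, negative_count=250))  # True  (rating model)
    print(generates_predictions(10_000, positive_count=25, negative_count=10))    # False (Fastow model)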

Return to table of contents

When are predictions updated?

Predictions for each model are generally updated once a day to reflect ongoing review work, unless no documents have been coded since the last update. If there are a lot of models in the update queue, it might take longer to update the predictions.
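
As a rough sketch of that refresh rule (illustration only; the real scheduler also depends on how many models are in the update queue):

    from datetime import datetime, timedelta

    def due_for_update(last_update, docs_coded_since_last_update):
        """Roughly once a day, and only if there has been new coding work."""
        day_elapsed = datetime.now() - last_update >= timedelta(days=1)
        return day_elapsed and docs_coded_since_last_update > 0

    print(due_for_update(datetime.now() - timedelta(days=2), 40))  # True
    print(due_for_update(datetime.now() - timedelta(days=2), 0))   # False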

Return to table of contents

What information am I provided about any given prediction model?

Information about the predictive coding system can be found in the ‘Predictive coding’ tab on the Analytics page. Clicking the tab will expand it, allowing you to see all the models you have access to, as well as the new model creation wizard.

For each model, you can see:

  • The model’s criteria for relevance and irrelevance
  • Suggested next steps about how to start or improve the model
  • A graph showing the distribution of predicted values across all the documents in the case
  • A training coverage graph showing how well the model “understands” the documents in the case
  • Various performance metrics
  • Status indicators to let you know when the model was last updated
  • Any training set(s) associated with the model, and an option to create a new training set

For more information on how to use the graphs or interpret the performance metrics, please review the predictive coding user guide article.

Return to table of contents

How can I create a new prediction model?

To create a new prediction model, select "create new model" at the bottom of the list of accessible models. A wizard will open and walk you through the creation of a new model.


  • Step 1: Set the criteria for relevant documents - Choose the combination of codes that will designate documents that are relevant to the model. The criteria builder functions exactly like the search builder. Review the search documentation if you want a primer on how to use the builder. 
  • If you want to restrict the training of the model to the review and rating decisions of a select group of users, be sure to use the ‘person’ parameter in the “Rating”, “Code”, or “Category” search term. For example, a model set up this way could consider only the review decisions of Ian and Lisa when learning from review behavior.

  • Step 2: Set the criteria for irrelevant documents - Choose the combination of codes that will designate documents that are irrelevant to the model. Again, the criteria builder functions exactly like the search builder. We recommend either negating the search terms you used for the relevant criteria, or creating new codes that are the antithesis of the code(s) used for the relevant criteria (a conceptual sketch of both approaches follows these steps). In general, using antithetical codes will result in more accurate predictions, but reviewers must actively apply the antithetical code to documents.
  • Step 3: Optionally exclude documents - If you want the model to exclude certain documents during analysis, you can create the criteria for exclusion in this step.
  • Step 4: Name your model
  • Step 5: Set sharing permissions - By default, any model you create is private to you. If you want, you can share the model with individual users or entire groups of users so that they can view and access it, and you can set the permission level for each user or group you choose. Users given only the “read” permission can view information about the model and its performance; users given the additional “admin” permission can also create training sets for the model and delete the model.

With that, you have a new model that’s ready to run in your case. To navigate between different models, click on the dropdown menu with the model name, and select from the list.
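
To illustrate the relevant and irrelevant criteria from Step 1 and Step 2, here is a conceptual sketch using the privilege codes from earlier in this article. In practice these criteria are built in the search-style criteria builder, not in code; the code names below are hypothetical.

    # Hypothetical code names from the privilege example above.
    PRIVILEGE_CODES = {
        "Privilege: Attorney-Client",
        "Privilege: Trade Secret",
        "Privilege: Work Product",
    }
    NOT_PRIVILEGED_CODE = "Privilege: Not Privileged"

    def is_relevant(doc_codes):
        """Step 1: relevant if the document carries any of the privilege codes."""
        return bool(PRIVILEGE_CODES & set(doc_codes))

    def is_irrelevant_by_negation(doc_codes):
        """Step 2, option A: negate the relevance criteria."""
        return not is_relevant(doc_codes)

    def is_irrelevant_by_antithetical_code(doc_codes):
        """Step 2, option B: require an explicit 'not privileged' code."""
        return NOT_PRIVILEGED_CODE in doc_codes

    print(is_relevant(["Privilege: Work Product"]))            # True
    print(is_irrelevant_by_negation(["Responsive"]))           # True
    print(is_irrelevant_by_antithetical_code(["Responsive"]))  # False until the code is applied

The last two lines show why antithetical codes tend to produce more accurate predictions: a document only counts as irrelevant once a reviewer has affirmatively said so.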

Return to table of contents

What is a training set?

A training set is a randomly seeded collection of documents that you or your team can review to better train the system. You can seed the training set from the entire document corpus or from a search; either way, a random, representative sample of documents is included. Reviewing these documents helps the model better understand the documents in your case, resulting in improved predictions.
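
Conceptually, seeding a training set is just drawing a random sample from whichever pool you choose. A minimal sketch (the sample size and document IDs are invented, not Everlaw defaults):

    import random

    def seed_training_set(doc_ids, size=200, seed=None):
        """Draw a random, representative sample from a pool of documents."""
        rng = random.Random(seed)
        return rng.sample(doc_ids, min(size, len(doc_ids)))

    corpus = list(range(1, 10_001))                   # every document ID in the case
    search_hits = [d for d in corpus if d % 7 == 0]   # stand-in for a saved search

    print(len(seed_training_set(corpus)))       # sample drawn from the whole corpus
    print(len(seed_training_set(search_hits)))  # sample drawn from a search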

Return to table of contents

Can I search along predicted values?

Yes! The “Predicted” search term allows you to search across the predicted values for any given model that you have access to. Once you select the model, you can then input a range for the predicted values, or set an upper or lower bound. Remember that prediction ratings are given on a 0-100 scale, with values closer to 100 indicating a greater likelihood of being relevant to the model, and values closer to 0 indicating a greater likelihood of being irrelevant to the model.

Here are some example use cases using the “Rating” model (note that the numerical cutoff used for the relevant/irrelevant distinction is left up to you; a conceptual sketch follows the list):

    • QAing review work: Documents predicted to be irrelevant but rated “hot”
    • Finding new documents to review: Documents predicted to be relevant but not yet viewed by anyone on the review team
    • Deposition prep: Documents from the custodian that is being deposed that are predicted to be relevant
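
The use cases above translate directly into filters that combine predictions with other document metadata. The sketch below uses invented documents and arbitrary cutoffs to show the first two use cases; in practice you would build the equivalent searches with the “Predicted” search term.

    # Hypothetical per-document metadata: the "Rating" model's prediction,
    # the human rating (if any), and whether anyone has viewed the document.
    docs = [
        {"id": 1, "predicted": 12, "rating": "hot",  "viewed": True},
        {"id": 2, "predicted": 91, "rating": None,   "viewed": False},
        {"id": 3, "predicted": 85, "rating": "hot",  "viewed": True},
        {"id": 4, "predicted": 8,  "rating": "cold", "viewed": True},
    ]

    # Cutoffs of 30 and 70 are arbitrary; the relevant/irrelevant line is up to you.
    qa_candidates  = [d["id"] for d in docs if d["predicted"] <= 30 and d["rating"] == "hot"]
    next_to_review = [d["id"] for d in docs if d["predicted"] >= 70 and not d["viewed"]]

    print(qa_candidates)   # [1]  predicted irrelevant but rated "hot" -> worth a second look
    print(next_to_review)  # [2]  predicted relevant but not yet reviewed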

Return to table of contents
