To view all of Everlaw's predictive coding-related content, please see our predictive coding section.
For an introduction to the concepts behind predictive coding and machine learning, feel free to reference our beginner’s guide to predictive coding.
For a guide to predictive coding-related terms and commonly asked questions about Everlaw's predictive coding feature, see our Predictive Coding Terms and FAQs.
If you are a Project Administrator, you have access to predictive coding by default. Project Administrators can also grant Prediction Model access to specific groups on the Permissions page.
To start building a predictive coding model, go to your Project Analytics page by clicking on the Project Analytics icon on your top toolbar. Then, choose Predictive Coding from the left-hand menu, and select “Create New Model.”
The first page you'll see provides an introduction to predictive coding, as well as a link to the Everlaw predictive coding beginner's guide. Click "Next" to begin building your model.
Next, decide which documents you’d like to consider “reviewed” for the purposes of the model. These documents, which are a subset of the entire database, will be used to train the model. For example, let’s say that you’re trying to build a model that predicts whether documents are likely to be responsive or not. Documents that have already been coded in the responsiveness category by a human reviewer will help the model learn how to make predictions. Therefore, you’ll want to set your criteria for reviewed documents as “coded under Responsiveness.” The model will look at all documents that have a code within this category to help it make decisions.
It’s important that the criteria include the entire category, instead of an individual code within the category. This is because it’s useful for the model to learn what both relevant and non-relevant documents look like.
Here, specify which documents count as relevant for your model. If you want your model to help you find responsive documents, documents with the specific code of Responsive will be relevant. Documents that are coded in the Responsiveness category, but not coded Responsive, will be counted as irrelevant, and your model will learn from those as well in order to improve its predictions.
Unlike the previous step, specificity is key when identifying the criteria for relevance. Your model should be told exactly what a relevant document looks like, so it can better predict which unreviewed documents are relevant.
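To make the training idea above concrete, here is a toy sketch of how a model can learn from documents coded relevant or irrelevant and then score unreviewed documents. The document texts, labels, and the simple word-frequency scorer are all illustrative assumptions; Everlaw's actual model is far more sophisticated.

```python
import math
from collections import Counter

# Hypothetical reviewed documents: (text, label) pairs, where each label
# comes from a human reviewer's code in the Responsiveness category.
reviewed = [
    ("quarterly merger negotiations with acme", "relevant"),
    ("merger terms and acquisition pricing", "relevant"),
    ("office holiday party planning", "irrelevant"),
    ("cafeteria menu for next week", "irrelevant"),
]

def train(docs):
    """Count word frequencies per label -- a toy Naive Bayes-style trainer."""
    counts = {"relevant": Counter(), "irrelevant": Counter()}
    for text, label in docs:
        counts[label].update(text.split())
    return counts

def relevance_score(counts, text):
    """Return a 0-1 score estimating how likely the document is relevant."""
    log_odds = 0.0
    n_rel = sum(counts["relevant"].values())
    n_irr = sum(counts["irrelevant"].values())
    for word in text.split():
        # Laplace smoothing so unseen words don't zero out the estimate
        p_rel = (counts["relevant"][word] + 1) / (n_rel + 2)
        p_irr = (counts["irrelevant"][word] + 1) / (n_irr + 2)
        log_odds += math.log(p_rel / p_irr)
    return 1 / (1 + math.exp(-log_odds))

model = train(reviewed)
print(relevance_score(model, "draft merger agreement"))   # scores above 0.5
print(relevance_score(model, "party menu suggestions"))   # scores below 0.5
```

Because the model sees examples of both relevant and irrelevant documents, it learns which vocabulary distinguishes the two, which is why the review criteria should cover the whole category rather than a single code.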
This optional step allows you to exclude documents from your model. The model will not look at excluded documents at all, either for learning purposes or to make predictions. For example, you might want to exclude a certain type of file, like spreadsheets or audio files, from your model. If you do this, the model will not learn from spreadsheets or audio files, even if they have been coded in the category you chose as your review criteria. Additionally, it will not make predictions about whether spreadsheets or audio files are likely to be relevant. If you do not exclude any documents, all documents with adequate text (including transcribed audio and video files) will receive prediction scores.
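The exclusion behavior described above amounts to filtering documents out before the model ever sees them. This sketch uses hypothetical document records with an assumed `file_type` field:

```python
# Hypothetical document records; "file_type" is an assumed field used only
# to illustrate exclusion criteria.
documents = [
    {"id": 1, "file_type": "email"},
    {"id": 2, "file_type": "spreadsheet"},
    {"id": 3, "file_type": "audio"},
    {"id": 4, "file_type": "email"},
]

EXCLUDED_TYPES = {"spreadsheet", "audio"}

# Excluded documents are dropped entirely: they are neither used for
# training nor given prediction scores.
in_model = [d for d in documents if d["file_type"] not in EXCLUDED_TYPES]

print([d["id"] for d in in_model])  # [1, 4]
```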
Finalizing your model
On the next page, enter a name for your model. This name will be visible to everyone who uses the model, and will also be the name you use to search for the model’s predictions. By default, the name of your model will be the relevance criteria for your model, but you can rename it to whatever you like.
Finally, submit your model. Your model will initialize and begin making predictions once at least 200 documents have been reviewed according to your review criteria, with at least 50 reviewed as relevant and at least 50 as irrelevant.
You can share your model even before it has generated any predictions by clicking the share button in the top right of the model's page. To delete the model, click the trash can.
To read more about analyzing your model’s results, see the Predictive Coding Model Interpretation article.