Sample Use Case for Rating Trends and Reviewer Accuracy

Imagine that you’re a couple of weeks into your review and want to get a sense of how well things are running. Here’s an example of how you can use the rating trends and reviewer accuracy statistics to monitor and gauge review quality.

Rating Trends

Rating trends give you insight into how reviewers are applying the hot, warm, and cold ratings. The patterns revealed here can help you identify reviewers you might want to retrain, or whose work you should consider QAing.

For example, the graph above indicates that:

  • Adam applies the ‘hot’ rating more often than the average of the other reviewers in the table. In addition, he never applies the ‘warm’ or ‘cold’ ratings. If you know that he hasn’t been reviewing an unusually relevant set of documents, you might want to retrain him.
  • Mondee only applies the ‘hot’ and ‘cold’ ratings to documents, with ‘hot’ commanding the greater proportion. This suggests that Mondee might be too generous in marking documents relevant, and could benefit from a short retraining.
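To make patterns like these concrete, here is a minimal sketch of how you might compute each reviewer’s rating distribution yourself from exported rating data. The reviewer names and counts are illustrative assumptions, not real data from the example:

```python
from collections import Counter

# Hypothetical exported ratings; in practice these would come from your
# review platform. Names and counts are invented for illustration.
ratings = {
    "Adam":   ["hot"] * 40,
    "Mondee": ["hot"] * 30 + ["cold"] * 10,
    "Casey":  ["hot"] * 12 + ["warm"] * 15 + ["cold"] * 13,
}

def rating_distribution(docs):
    """Return each rating's share of a reviewer's rated documents."""
    counts = Counter(docs)
    total = len(docs)
    return {r: counts.get(r, 0) / total for r in ("hot", "warm", "cold")}

for reviewer, docs in ratings.items():
    dist = rating_distribution(docs)
    print(reviewer, {r: f"{p:.0%}" for r, p in dist.items()})
```

Comparing each reviewer’s shares against the table-wide average is what surfaces outliers like Adam (all ‘hot’) or Mondee (no ‘warm’).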

Reviewer Accuracy

Accuracy statistics are an easy way to perform rigorous QA on a reviewer’s work product. To start generating the statistics, an admin on the project must review a portion of the documents that a given reviewer has rated (‘hot’, ‘warm’, or ‘cold’). Based on discrepancies between the admin’s and the reviewer’s ratings, Everlaw will project an estimated error rate.

In the example above, Demo Reviewer 2 has an estimated error rate of 9%, Demo Reviewer 1 has an estimated error rate of 6%, and Demo Reviewer 3 has an estimated error rate of 1%. There is high statistical confidence for each of these estimates. You can easily click through to see the documents that registered a rating discrepancy. From there, you might be able to identify a pattern in the types of documents a reviewer struggles with, which will help you tailor future trainings or discussions with that reviewer.
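The general idea behind such an estimate can be sketched as follows: the admin’s overturn rate on the sampled documents gives a point estimate of the reviewer’s error rate, and a confidence interval around it quantifies the statistical confidence. This sketch uses a standard Wilson score interval; Everlaw’s exact statistical method is not documented here and may differ:

```python
import math

def estimated_error_rate(overturned, sampled, z=1.96):
    """Point estimate and 95% Wilson score interval for a reviewer's
    error rate, given how many admin-QA'd documents were overturned.
    Illustrative only; not Everlaw's actual calculation."""
    p = overturned / sampled
    denom = 1 + z**2 / sampled
    center = (p + z**2 / (2 * sampled)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / sampled + z**2 / (4 * sampled**2))
    return p, (max(0.0, center - half), min(1.0, center + half))

# e.g. suppose the admin overturned 9 of 100 sampled ratings
rate, (low, high) = estimated_error_rate(9, 100)
print(f"estimated error rate: {rate:.0%} (95% CI {low:.1%} to {high:.1%})")
```

A narrow interval (from a large QA sample) is what justifies the “high statistical confidence” described above; with only a handful of sampled documents, the interval would be too wide to draw conclusions.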

