Case Analytics: Evaluating Your Reviewers

Case analytics on Everlaw provide several ways to evaluate reviewer activity on a group or individual level. You can use these tools to visualize trends and pinpoint areas for future improvement among your reviewers.

Table of Contents

  • Accuracy
  • Conflicts
  • Usage Data
  • Review Time
  • Reviewer Pace
  • Custom

Accuracy

If you prefer to watch a video, click here.

Reviewer accuracy statistics give admins a way to see how accurately reviewers are rating documents. To calculate a reviewer's accuracy, an admin on the project must review a portion of the documents that reviewer has rated and correct any mis-rated documents. Based on these corrections, Everlaw calculates an estimated error rate for the reviewer and provides a statistical confidence measure for that estimate.

analytics6.png

The reviewer accuracy table displays:

  • The statistical confidence the system has in the predicted error rate. This is represented by the green circle in the leftmost column. The more filled-in the circle is, the higher the confidence level. Hovering over the circle will provide the exact percentage.
  • The reviewer's name.
  • The predicted error rate as a percentage (e.g., for Demo Reviewer 2 in the table above, the system estimates that they make an error on 9% of the documents they review).
  • The next step column, which provides recommended actions the admin can take to either (i) start generating predictions, (ii) improve the accuracy of the predictions, or (iii) review the documents that have registered errors. Clicking the button in this column opens a results table that the admin can use to review the relevant documents for the recommended next step.

Clicking "more -->" in the upper right of the table will expand the table to reveal additional columns.

analytics7.png

Clicking on any of the numbers in the four right columns will open a results table with those documents.

By default, the statistics are generated with data starting from the beginning of the project. If you want to use a different start date to calculate the statistics, you can input a new date into the "filter by starting date" box. You can permanently use a different start date by selecting "set as project default" after choosing a date to filter by.

analytics8.png
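
Everlaw does not document the exact statistical model behind these figures, so as a rough illustration only, the sketch below assumes a simple binomial error model with a normal-approximation confidence measure (which may differ from Everlaw's actual calculation). It shows how an estimated error rate and a confidence level could be derived from a sample of admin-corrected documents.

    import math

    def estimate_error_rate(sample_size, errors_found, margin=0.05):
        """Estimate a reviewer's error rate from an admin-corrected sample.

        Returns the estimated error rate and an approximate confidence that
        the true rate lies within +/- `margin` of the estimate, using a
        normal approximation to the binomial distribution. Illustrative only;
        this is not Everlaw's published method.
        """
        p = errors_found / sample_size                # estimated error rate
        se = math.sqrt(p * (1 - p) / sample_size)     # standard error of the estimate
        if se == 0:
            return p, 1.0                             # no observed variance in the sample
        z = margin / se                               # margin expressed in standard errors
        confidence = math.erf(z / math.sqrt(2))       # P(|estimate - true rate| <= margin)
        return p, confidence

    # Example: an admin corrected 200 of a reviewer's documents and found 18 errors.
    rate, conf = estimate_error_rate(200, 18)
    print(f"Estimated error rate: {rate:.0%}, confidence: {conf:.0%}")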

Return to table of contents

Conflicts

The rating conflicts page displays all the rating conflicts in the project, broken down by type of conflict.

analytics9.png

To resolve conflicts, click the card or bar graph for the conflict type or user you want to address. This opens a results table displaying the affected documents and their conflict type. Open a document and modify its rating to resolve the conflict; the conflict statistics will update accordingly.

resolve_conflicts.gif

Return to table of contents

Usage Data

The Usage Data table shows the raw stats for all the reviewers on the project. These stats include time spent in review, documents and pages viewed, documents and pages viewed per hour, the percentage of repeat document views, and the percentage of rated documents that were rated hot, warm, or cold.

analytics11.png

The data can be aggregated by month or week; click the By Month/By Week button to toggle between the two. The drop-down menu next to the button allows you to select which week or month you would like to view, or “Total,” which displays the sum across all time periods.

The table can be downloaded as a CSV file to your computer by clicking the Download icon in the top left corner of the Usage Data table.

You can use this data to help diagnose reviewer performance. Comparing how many documents or pages per hour each reviewer has moved through shows their relative efficiency, which can help you adjust review assignments and even future team composition (see the sketch after the column list below).

The columns displayed by the Usage Table are:

  • Review Time: The length of time the user has spent in review. This timer is active whenever the user has a review window open and is viewing page images or native pages. A maximum of five minutes will be logged on a page if there is no intervening activity, to prevent issues with users idling or leaving review windows open. Time spent on the home page, searching, or viewing search results is not logged.
  • Docs Viewed: A count of the total number of times the user opened up a document in the review window (even if the user had already previously viewed that document).
  • Pages Viewed: A count of how many native or non-native pages the user viewed (counting duplicates as well).
  • Docs Per Hour: Docs Viewed divided by Review Time (in hours).
  • Pages Per Hour: Pages Viewed (from this chart) divided by Review Time (in hours).
  • Repeat Views: The percentage of all Docs Viewed that the user had already previously viewed.
  • Ratings: A count of the total number of documents the user rated (manually or batch rated, hot/warm/cold).
  • Hot/Warm/Cold: The percentage of documents the user rated hot/warm/cold out of the total number they rated (e.g. if they rated 10 docs - 5 hot, 2 warm, and 3 cold - the percentages would be 50% hot, 20% warm and 30% cold)
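
As an illustration of the comparison described above, the following sketch loads a downloaded Usage Data CSV with pandas and ranks reviewers by documents reviewed per hour. The file name and column labels ("Reviewer", "Review Time", "Docs Viewed") are assumptions made for the example; adjust them to match the headers in your exported file.

    import pandas as pd

    # Load the Usage Data table exported from Everlaw (file name is hypothetical).
    usage = pd.read_csv("usage_data.csv")

    # Assumed columns: "Reviewer", "Review Time" (as "HH:MM:SS"), "Docs Viewed".
    # Convert the review-time strings to hours so a rate can be computed.
    hours = pd.to_timedelta(usage["Review Time"]).dt.total_seconds() / 3600

    # Docs Per Hour is derived from Docs Viewed and Review Time, as described above.
    usage["Docs Per Hour"] = usage["Docs Viewed"] / hours

    # Rank reviewers from fastest to slowest to spot outliers on either end.
    print(usage.sort_values("Docs Per Hour", ascending=False)[["Reviewer", "Docs Per Hour"]])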

Return to table of contents

Review Time

Review time shows the total review time for each reviewer as a percentage of the overall review time. You can hover over any area in the pie chart to see the total time spent in the system by a reviewer, as well as what percentage of the total time the user's time represents.

analytics12.png
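
To reproduce the same breakdown outside of Everlaw, a short sketch like the one below can compute each reviewer's share of the total review time from the exported Usage Data CSV (the file name and column labels are assumptions, as in the previous sketch).

    import pandas as pd

    usage = pd.read_csv("usage_data.csv")  # hypothetical export of the Usage Data table

    # Assumed columns: "Reviewer", plus "Review Time" as "HH:MM:SS" strings.
    hours = pd.to_timedelta(usage["Review Time"]).dt.total_seconds() / 3600

    # Each reviewer's review time as a percentage of the team's total review time.
    share = (hours / hours.sum() * 100).round(1)
    print(pd.DataFrame({"Reviewer": usage["Reviewer"], "Share of review time (%)": share}))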

Return to table of contents

Reviewer Pace

The Reviewer Pace scatter plot graphs the number of documents viewed over the lifetime of the project. You can select users' data to appear in the graph by selecting them in the User key. If there are overlapping lines, hovering over one of the users in the key will emphasize that user's data by bringing the user's line into the foreground.

Under the graph is a bar that allows you to set which time period you would like to view. You can also toggle between weeks and months to narrow down or expand the time frame you wish to see.

analytics13.png

Return to table of contents

Custom

If the built-in reports do not provide all of the information you want, you can build your own custom reports on the Custom page. You can select any metrics you want to compare, as well as any users in the project. Select the metrics from the dropdown menus under the "Custom Visualization" title, and the users you wish to analyze from the Users key. You can hover over one of the users in the legend to emphasize that user's data by bringing the user's line into the foreground. For a detailed description of the various statistics available, please see the Data Collection and Metrics Details section at the end of this page.

Under the graph is a bar that allows you to set which time period you would like to view. The default view is the last 4 weeks, but it is easily adjusted by dragging the markers along the bar. You may also toggle between weeks and months to narrow down or expand any time frame you wish to see.

analytics5.PNG

 

The raw statistics that we collect (and that are available in the Custom Visualization widget) are listed below. "New" means not previously viewed by the user. Some of these statistics are the same as those detailed in the Usage Data table above:

  • Review Time: The length of time the user has spent in review. This timer is active whenever the user has a review window open and is viewing page images or native pages. A maximum of five minutes will be logged on a page if there is no intervening activity, to prevent issues with users idling or leaving review windows open. Time spent on the home page, searching, or viewing search results is not logged.
  • Docs Viewed: A count of the total number of times the user opened up a document in the review window (even if the user had already previously viewed that document).
  • Pages Viewed: A count of the total number of pages the user viewed in the review window (even if they had already viewed that page).
  • New Docs: A count of the total number of new documents the user opened.
  • New Pages: A count of the total number of new non-native pages the user viewed.
  • New Natives: A count of the total number of new native files the user opened.
  • Docs With Natives: A count of the total number of new documents the user opened that had natives.
  • Pages Opened: A count of the total number of non-native pages in the new documents the user opened (e.g. if the user opened document A, which has 3 pages, and document B, which has 2, this count would be 5 - which pages the user views doesn't matter here).
  • Native Pages Viewed: A count of the total number of native pages the user viewed.
  • New Native Pages: A count of the total number of new native pages the user viewed.
  • Rated Hot/Warm/Cold/Unrated: A count of the number of documents the user manually rated.
  • Batch Rated Hot/Warm/Cold/Unrated: A count of the number of documents the user rated using the Batch Rate tool.
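
The Custom Visualization charts these raw counts for you, but as a rough illustration of how they relate to one another, the sketch below combines a few of them into derived figures (the share of document views that were first-time views, and the share of ratings applied manually rather than with the Batch Rate tool). The counts are invented for the example.

    # Illustrative raw counts for one reviewer over one week (invented numbers).
    stats = {
        "Docs Viewed": 480,
        "New Docs": 350,
        "Rated Hot": 40, "Rated Warm": 90, "Rated Cold": 120,
        "Batch Rated Hot": 10, "Batch Rated Warm": 0, "Batch Rated Cold": 60,
    }

    manual = stats["Rated Hot"] + stats["Rated Warm"] + stats["Rated Cold"]
    batch = stats["Batch Rated Hot"] + stats["Batch Rated Warm"] + stats["Batch Rated Cold"]

    # Share of document views that were first-time views ("new" = not previously viewed).
    new_view_share = stats["New Docs"] / stats["Docs Viewed"]

    # Share of all ratings applied manually rather than with the Batch Rate tool.
    manual_share = manual / (manual + batch)

    print(f"First-time views: {new_view_share:.0%} of all views")
    print(f"Manual ratings: {manual_share:.0%} of all ratings")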

 

Return to table of contents
