Case Analytics, Reviewer Accuracy, and Rating Conflicts

 

Table of Contents:

  • Accessing Analytics
  • Case Overview
  • Case Size
  • Reviewer Accuracy
  • Rating Conflicts
  • Usage Data
  • Review Time
  • Review Progress
  • Rating Trends
  • Reviewer Pace
  • Custom
  • Data Collection and Metrics Details

Accessing Analytics

Case Analytics lets case admins track both case and reviewer progress over the lifetime of the case. To access each of the analytics charts described below, expand the "Case Analytics" section of the Analytics page sidebar and select your desired metric.

Return to table of contents

Case Overview

Case Overview provides high-level statistics about the progress of review.

  • Documents coded: The percent of documents that have been coded in the case
  • Documents viewed: The percent of documents that have been viewed in the case
  • Documents rated: The percent of documents that have been rated in the case, and a breakdown of the distribution of hot, warm, and cold ratings
  • Total Documents: The total number of documents in the case, including duplicates
  • Docs Reviewed/Week: The average number of documents reviewed in a week
  • ETA to View All Docs: The estimated amount of time it would take to view all the documents in the case given the current review pace
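
Everlaw does not publish the exact formula behind "ETA to View All Docs", but the description suggests simple pace-based arithmetic. Here is a minimal sketch, assuming a constant weekly pace (the function and parameter names are ours, not Everlaw's):

    # Hypothetical sketch of the pace-based ETA: remaining documents divided
    # by the average weekly review pace. Everlaw's exact formula is not
    # published; this assumes the current pace stays constant.

    def eta_weeks(total_docs: int, docs_viewed: int, docs_per_week: float) -> float:
        """Estimated weeks needed to view the remaining documents."""
        if docs_per_week <= 0:
            raise ValueError("review pace must be positive to estimate an ETA")
        return (total_docs - docs_viewed) / docs_per_week

    # Example: 80,000 documents total, 50,000 viewed, pace of 6,000 docs/week
    print(eta_weeks(80_000, 50_000, 6_000))  # -> 5.0 (weeks)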

Return to table of contents

Case Size

The Case Size tab shows you the current and historical size of your database on Everlaw. You can also disaggregate the information to show the total GB of native data and the total GB of processed data. Only cases started on or after August 16, 2015 have case size information; if your case started before then, the Case Size tab will not be visible.

Return to table of contents

Reviewer Accuracy

Reviewer accuracy statistics let admins see how accurately reviewers are rating documents. To calculate a reviewer's accuracy, an admin on the case must look over a portion of the documents the reviewer has rated and correct any mis-rated documents. Based on this information, Everlaw calculates an estimated error rate for the reviewer and provides a statistical confidence measure for that estimate.
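
Everlaw does not publish the statistical model behind these estimates. Purely as an illustration, if the audited sample were treated as a binomial proportion, an estimated error rate and a confidence interval could be computed as below (the function name, and the choice of a 95% Wilson score interval, are our assumptions):

    import math

    # Illustration only: Everlaw's actual model is not published. This treats
    # an admin's audit of n sampled documents, k of them found mis-rated, as
    # a binomial proportion and reports a 95% Wilson score interval.

    def estimated_error_rate(k: int, n: int, z: float = 1.96):
        """Point estimate and 95% Wilson score interval for the error rate."""
        if n == 0:
            raise ValueError("no audited documents yet")
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return p, (max(0.0, center - half), min(1.0, center + half))

    # Example: 9 mis-rated documents found in an audit of 100
    rate, (low, high) = estimated_error_rate(9, 100)
    print(f"estimated error rate {rate:.0%}, 95% interval {low:.0%}-{high:.0%}")

In this reading, auditing more documents narrows the interval, matching the intuition that confidence grows as the admin reviews more of a reviewer's work.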

The reviewer accuracy table displays:

  • The statistical confidence the system has in the predicted error rate (how filled in the green circle in the leftmost column is serves as a visual indicator of the confidence level; hovering over the circle shows the exact percentage)
  • The reviewer's name
  • The predicted error rate as a percent (e.g., an estimate of 9% means the system predicts the reviewer makes an error on 9% of the documents they review)
  • The Next Step column provides next steps that the admin can take to (i) start generating predictions, (ii) improve the accuracy of the predictions, or (iii) review the documents that have registered errors. Clicking the button in the column opens a results table that the admin can use to review particular documents, depending on the recommended next step.

Clicking "more -->" in the upper right of the table will expand the table to reveal additional columns.

Clicking on any of the numbers in the four rightmost columns will open a results table with those documents.

By default, the statistics are generated with data starting from the beginning of the case. If you want to use a different start date to calculate the statistics, you can input a new date into the "filter by starting date" box. You can permanently use a different start date by selecting "set as project default" after choosing a date to filter by.

Return to table of contents

Rating Conflicts

The rating conflicts page displays all the rating conflicts in the case, broken down by type of conflict.

To resolve conflicts, click on the card or bar graph for the conflict type or user you want to address. This opens a results table displaying the documents and the conflict type. Open a document and modify its rating; the conflict statistics will update accordingly.
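
The page does not spell out each conflict type, but a common example is exact duplicates that carry different ratings. Here is a minimal sketch of detecting that one conflict type, under an assumed data shape (Everlaw's actual conflict types and internals may differ):

    from collections import defaultdict

    # Assumed data shape: (doc_id, duplicate_group, rating) tuples. Here a
    # "conflict" is any duplicate group whose members carry more than one
    # distinct rating -- one plausible conflict type, not Everlaw's full list.

    docs = [
        ("DOC-1", "groupA", "hot"),
        ("DOC-2", "groupA", "cold"),   # conflicts with DOC-1
        ("DOC-3", "groupB", "warm"),
    ]

    ratings_by_group = defaultdict(set)
    for doc_id, group, rating in docs:
        if rating is not None:
            ratings_by_group[group].add(rating)

    conflicted = {g for g, ratings in ratings_by_group.items() if len(ratings) > 1}
    print(conflicted)  # -> {'groupA'}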

Return to table of contents

Usage Data

The Usage Data table shows the raw stats of all the reviewers on the case. These stats include: time in the system, documents and pages viewed, documents and pages viewed per hour, percent of new documents viewed, and the percent of documents viewed that were rated hot, warm, or cold. For a detailed description of the various columns, please see the Data Collection and Metrics Details section at the end of this page.

The data can be aggregated by month or week; click the "By Month"/"By Week" button to toggle between the two. The drop-down menu next to the button lets you select which week or month you would like to view, or 'Total', which displays the sum of all the time periods.

The table can be downloaded as a CSV file to your computer by clicking the 'Download' icon in the top left corner of the Usage Data table.

Everlaw suggestion: Use this data to help diagnose performance. See how many documents or pages per hour each reviewer has moved through to understand comparative efficiencies. This can help you adjust review assignments and even future team composition. 

Return to table of contents

Review Time

Review time shows the total review time for each reviewer as a percentage of the overall review time. You can hover over any area in the pie chart to see the total time spent in the system by a reviewer, as well as what percentage of the total time the user's time represents.

Return to table of contents

Review Progress

The Review Progress page charts the documents viewed and documents rated over the lifetime of the case (for cases that were on Everlaw before analytics was released, the chart begins on the release date, 7/13/2012). You can hover your mouse over the graph to see any week since the case was created; the tooltip shows how many new documents were viewed that week and how many documents were rated. This graph is similar to the Overview tab, except that you can see the entire history of the case.

You can add or remove series from the graph by clicking the checkboxes in the legend. The checkbox next to the legend title selects or deselects all the series. If only some of the data is selected, the checkbox will appear gray, but it is still clickable and will add all the data to the graph.

You may hover over one of the series in the legend to emphasize it in the graph by bringing it into the foreground.

To see the data more clearly, click on the Y-axis of the graph. The graph will resize to display the documents as a percentage instead of an absolute number, and the Y-axis labels will update to percentages as the graph zooms in. Click on the Y-axis again to return the axis labels to absolute numbers.

Return to table of contents

Rating Trends

Rating trends displays how users rate the documents they are viewing. For each user, you can see what percentage of their ratings falls into the hot, warm, and cold categories. This graph is good for gauging how reviewers are distributing their ratings.

You can use the user table to add additional reviewers to the graph. The "average" metrics on the graph are the average of the selected users for any given rating.
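
We read "average" as the mean of each selected user's percentage for a given rating, which can differ from pooling all of their documents together when users review different volumes. A two-user illustration with hypothetical numbers:

    # Hypothetical illustration: the per-user "hot" percentages are averaged
    # directly, rather than pooling every rated document into one percentage.
    user_hot_pct = {"Reviewer A": 0.60, "Reviewer B": 0.20}
    average_hot = sum(user_hot_pct.values()) / len(user_hot_pct)
    print(f"{average_hot:.0%}")  # -> 40%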

You can use the slide bar and toggle at the bottom of the graph to adjust what time frame the metrics reflect.

Return to table of contents

Reviewer Pace

The Reviewer Pace scatter plot graphs the number of documents viewed over the lifetime of the case. You can select users' data to appear in the graph by selecting them in the User key. If there are overlapping lines, hovering over one of the users in the key will emphasize that user's data by bringing the user's line into the foreground.

Under the graph is a bar that allows you to set which time period you would like to view. You can also toggle between weeks and months to narrow down or expand the time frame you wish to see.

Return to table of contents

Custom

If the built-in reports do not display the information you want, you can make your own custom reports by going to the Custom page. You can select any metrics you want to compare, as well as any users in the case. Select the metrics from the dropdown menus under the "Custom Visualization" title, and the users you wish to analyze from the Users key. You may hover over one of the users in the legend to emphasize that user's data by bringing the user's line into the foreground. For a detailed description of the various statistics available, please see the Data Collection and Metrics Details section at the end of this page.

Under the graph is a bar that allows you to set which time period you would like to view. The default view is the last 4 weeks, but it is easily adjusted by dragging the markers along the bar. You may also toggle between weeks and months to narrow down or expand any time frame you wish to see.

Return to table of contents

Data Collection and Metrics Details

Everlaw collects a variety of statistics as our users search and review documents. We present the most relevant of these statistics for admins to see at a glance in the Usage Data table, but more specific stats can be charted using the Custom Visualization widget. To help you understand and use the data you're seeing, it's important that we explain how we collect it.

The raw statistics that we collect, all of which are available in the Custom Visualization widget, are listed below (in the following, "new" means not previously viewed by the user):

  • Review Time: The length of time the user has spent in review. This timer is active whenever the user has a review window open and is viewing page images or native pages. A maximum of five minutes will be logged on a page if there is no intervening activity, to prevent issues with users idling or leaving review windows open (see the sketch after this list). Time spent on the home page, searching, or viewing search results is not logged.
  • Docs Viewed: A count of the total number of times the user opened up a document in the review window (even if the user had already previously viewed that document).
  • Pages Viewed: A count of the total number of pages the user viewed in the review window (even if they had already viewed that page).
  • New Docs: A count of the total number of new documents the user opened.
  • New Pages: A count of the total number of new non-native pages the user viewed.
  • New Natives: A count of the total number of new native files the user opened.
  • Docs With Natives: A count of the total number of new documents the user opened that had natives.
  • Pages Opened: A count of the total number of non-native pages in the new documents the user opened (e.g. if the user opened document A, which has 3 pages, and document B, which has 2, this count would be 5 - which pages the user views doesn't matter here)
  • Native Pages Viewed: A count of the total number of native pages the user viewed.
  • New Native Pages: A count of the total number of new native pages the user viewed.
  • Rated Hot/Warm/Cold/Unrated: A count of the number of documents the user manually rated.
  • Batch Rated Hot/Warm/Cold/Unrated: A count of the number of documents the user rated using the Batch Rate tool.
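
To make the Review Time rule concrete, here is a rough sketch of the five-minute idle cap as we read it: each gap between consecutive page events counts toward review time, but no single gap contributes more than 300 seconds. The event format and per-gap capping are our assumptions, not Everlaw's implementation:

    # Rough sketch of the five-minute idle cap on Review Time: each gap
    # between consecutive page-view events is counted, capped at 300 seconds,
    # so idle or abandoned review windows don't inflate the total.
    CAP_SECONDS = 5 * 60

    def review_time_seconds(event_times: list[float]) -> float:
        """Sum the gaps between consecutive events, capping each at 5 minutes."""
        return sum(min(later - earlier, CAP_SECONDS)
                   for earlier, later in zip(event_times, event_times[1:]))

    # Events at 0s and 60s, then a 30-minute idle gap before the next page:
    print(review_time_seconds([0, 60, 1860]))  # 60 + min(1800, 300) = 360.0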

The Usage Table displays some of these stats, as well as a number of additional values. These columns are:

  • Review Time: Same as the stat above
  • Docs Viewed: Same as the stat above
  • Pages Viewed: A count of how many native OR non-native pages the user viewed (counting duplicates as well)
  • Docs Per Hour: Calculated using Docs Viewed and Review Time (see the sketch after this list).
  • Pages Per Hour: Calculated using Pages Viewed (from this chart) and Review Time.
  • Repeat Views: The percentage of all Docs Viewed that the user had already previously viewed
  • Docs Rated: A count of the total number of documents the user rated (manually or batch rated, hot/warm/cold)
  • Hot/Warm/Cold: The percentage of documents the user rated hot/warm/cold out of the total number they rated (e.g. if they rated 10 docs - 5 hot, 2 warm, and 3 cold - the percentages would be 50% hot, 20% warm and 30% cold)
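
These derived columns are simple ratios over the raw counts listed earlier. Here is a minimal sketch using the worked example above (the input names, and the assumption that Repeat Views is (Docs Viewed - New Docs) / Docs Viewed, are ours):

    # Minimal sketch of the Usage Table's derived columns as ratios over the
    # raw counts described above. Input names are ours, not Everlaw's.

    def derived_columns(review_hours: float, docs_viewed: int, new_docs: int,
                        pages_viewed: int, hot: int, warm: int, cold: int) -> dict:
        docs_rated = hot + warm + cold
        return {
            "Docs Per Hour": docs_viewed / review_hours,
            "Pages Per Hour": pages_viewed / review_hours,
            # assumed: the share of views that were re-opens
            "Repeat Views": (docs_viewed - new_docs) / docs_viewed,
            "Docs Rated": docs_rated,
            "Hot": hot / docs_rated,     # 5/10 -> 50% in the example above
            "Warm": warm / docs_rated,   # 2/10 -> 20%
            "Cold": cold / docs_rated,   # 3/10 -> 30%
        }

    print(derived_columns(review_hours=8, docs_viewed=200, new_docs=150,
                          pages_viewed=1200, hot=5, warm=2, cold=3))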

 

Return to table of contents
