Evaluating Your Reviewers Using Project Analytics

Project Analytics on Everlaw provides several ways to evaluate reviewer activity on a group or individual level. You can use these tools to visualize trends and pinpoint areas for future improvement among your reviewers.

All metrics on the Project Analytics page are based on the time zone set in your Project Settings (Project Settings > General > Timezone). For example, if an action takes place at 11pm PST but the project time zone is EST, it will be reported as happening on the following day for the purposes of reporting time spent on the platform.
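To illustrate the effect, here is a minimal Python sketch (not Everlaw code, using hypothetical timestamps) showing how the reporting day changes when the same UTC timestamp is converted to different project time zones before the date is taken:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    # Hypothetical action stored in UTC: 11pm PST on January 15.
    action_utc = datetime(2024, 1, 16, 7, 0, tzinfo=timezone.utc)

    # The reporting date depends on the project's configured time zone.
    for project_tz in ("America/Los_Angeles", "America/New_York"):
        local = action_utc.astimezone(ZoneInfo(project_tz))
        print(project_tz, "->", local.date())
    # America/Los_Angeles -> 2024-01-15
    # America/New_York -> 2024-01-16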

The underlying data is reaggregated every 15 minutes. To check for fresh data, reload the page or continue to edit your input values.

Table of Contents

  • Platform Time
  • Review Activity
  • Reviewer Accuracy
  • Rating Conflicts
  • Review Progress
  • Rating Trends
  • Reviewer Pace
  • Custom

Platform Time

Any time a user spends on your project is logged and reported under Platform Time. If a user does not perform an action for 15 minutes, Everlaw considers the account idle and stops tracking time until the next action is performed. An action is clicking any button on the page or scrolling the page.
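As a rough illustration of the idle rule (a sketch with hypothetical data, not Everlaw's implementation; exactly how the first 15 minutes of an idle gap are treated is not specified here), non-idle time can be approximated by summing the gaps between consecutive actions and discarding any gap longer than the idle threshold:

    from datetime import datetime, timedelta

    IDLE_THRESHOLD = timedelta(minutes=15)

    def non_idle_time(action_times: list[datetime]) -> timedelta:
        """Sum gaps between consecutive actions, discarding gaps past the idle threshold."""
        total = timedelta()
        for earlier, later in zip(action_times, action_times[1:]):
            gap = later - earlier
            if gap <= IDLE_THRESHOLD:
                total += gap
            # A longer gap means the account went idle until the next action,
            # so that stretch is not counted.
        return total

    # Hypothetical clicks/scrolls: 20 minutes of activity, a 40-minute break, 5 more minutes.
    t0 = datetime(2024, 1, 15, 9, 0)
    actions = [t0, t0 + timedelta(minutes=10), t0 + timedelta(minutes=20),
               t0 + timedelta(minutes=60), t0 + timedelta(minutes=65)]
    print(non_idle_time(actions))  # 0:25:00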

[Animation: the Platform Time table]

Platform time is separated into seven categories:

  • Total platform: The total amount of non-idle time a user has spent logged in
  • Review time: Time spent in the review window, on the results table, using Quick Review, or using the search page
  • Project management: Time spent on any of the following pages: Project Settings, Advanced Settings, Productions Notifications Tab, Database Settings, Project Analytics, or the Assignments page
  • Document analytics: Time spent on any of the following pages: Clustering, Predictive Coding, Search Term Reports, the Data Visualizer, or Legal Holds
  • Uploads: Time spent on the Uploads page. This includes native and processed uploads as well as file transfers
  • Productions: Time spent on the Productions page. This includes creating a Production, creating a Production Protocol, or time spent on the Production Protocol page
  • Storybuilder: Time spent using Storybuilder

Clicking on a column header will sort the column by descending values, and clicking again will sort the column by ascending values.

You can change the date range shown with the date filter above the table. You can also filter by User and User Group. You can search for:

  • Specific User
  • User Groups
  • Users within the User Groups you have selected 

To export an Excel file of the Platform Time table, select the “Export” option from the three-dot menu above the table. Any applied filters (such as date range, User Group, and User), sorting behavior, and removal and/or addition of columns will be reflected in your export.

[Screenshot: the Export option in the three-dot menu above the Platform Time table]

The export includes separate tabs for Total, Monthly, Weekly, and Daily platform time.

The Total tab will show all users, including those with no recorded activity. The Monthly, Weekly, and Daily tabs will only include users with activity data. Please note that the smallest unit of time is one second; users that appear with data values of 0 may have activity that amounts to less than one second.

Return to table of contents

Review Activity

The Review Activity table shows the raw stats of all the reviewers on the project. These stats include documents and pages viewed, documents and pages viewed per hour, repeat views, the percent of new documents viewed, and the percent of documents rated hot, warm, or cold.

[Screenshot: the Review Activity table]

The data can be aggregated by month or week; click the By Month/By Week button to toggle between the two. The drop-down menu next to the button allows you to select which week or month you would like to view, or “Total,” which displays the sum of all the time periods.

You can also filter by User and User Group. For both filters, you can type text to search for specific User Groups or Users, and you can filter by Users within the User Groups you have selected.

The table can be downloaded as a CSV file to your computer by clicking the three-dot menu in the top right of the Review Activity table and selecting the Export option.

You can use this data to help diagnose performance. See how many documents or pages per hour each reviewer has moved through to understand comparative efficiencies. This can help you adjust review assignments and even future team composition.

The columns displayed in the Review Activity table are listed below; a short worked example of the derived metrics follows the list.

  • Total Docs: A count of the total number of times the user opened a document in the review window (even if the user had already previously viewed that document). A "view" is counted regardless of how long the document was open.
  • Pages Viewed: A count of how many native or non-native pages the user viewed (counting duplicates as well)
  • Docs Per Hour: Calculated using Docs Viewed and Review Time
  • Pages Per Hour: Calculated using Pages Viewed (from this chart) and Review Time
  • Repeat Views: The percentage of all Docs Viewed that the user had already previously viewed. A previous view is counted if the document was viewed at any time, regardless of how long it was open.
  • Ratings: A count of the total number of documents the user rated (manually or batch rated, hot/warm/cold)
  • Hot/Warm/Cold: The percentage of documents the user rated hot/warm/cold out of the total number they rated (e.g., if they rated 10 docs - 5 hot, 2 warm, and 3 cold - the percentages would be 50% hot, 20% warm, and 30% cold)
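The derived columns can be reproduced from the raw counts. Below is a minimal sketch using hypothetical numbers of the kind you might pull from the exported CSV (the variable names are illustrative, not an Everlaw API):

    # Hypothetical raw stats for one reviewer over one period.
    review_time_hours = 2.5
    docs_viewed = 180
    pages_viewed = 950
    repeat_doc_views = 27
    ratings = {"hot": 5, "warm": 2, "cold": 3}

    docs_per_hour = docs_viewed / review_time_hours         # 72.0
    pages_per_hour = pages_viewed / review_time_hours       # 380.0
    repeat_view_pct = 100 * repeat_doc_views / docs_viewed  # 15.0

    total_rated = sum(ratings.values())
    rating_pct = {r: 100 * n / total_rated for r, n in ratings.items()}
    print(rating_pct)  # {'hot': 50.0, 'warm': 20.0, 'cold': 30.0}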

Return to table of contents

Reviewer Accuracy

Reviewer accuracy statistics estimate how accurately reviewers rate documents. To calculate a reviewer's accuracy, a Project Administrator must look over a portion of the documents the reviewer has rated and correct any mis-rated documents. Once a (non-admin) reviewer has rated a document, it counts as reviewed. Based on this information, Everlaw calculates an estimated error rate for the reviewer and provides a statistical confidence measure for that estimate.

A document counts as reviewed by an admin once an admin views or rates a document. If a Project Admin opens a document that a reviewer has rated, even briefly, and does not change the rating, the document will count in favor of the reviewer's accuracy.

[Screenshot: the Reviewer Accuracy table]

The Reviewer Accuracy table displays:

  • The statistical confidence the system has in the predicted error rate. This is represented by the green circle in the leftmost column. The more filled-in the circle is, the higher the confidence level. Hovering over the circle will provide the exact percentage
  • The reviewer's name
  • The predicted error rate as a percent (for example, for Demo Reviewer 2 in the table above, the system estimates that they make an error on 9% of the documents they review)
  • Next steps

Next steps allow Project Admins to start acting on the information they've gathered, for example by generating predictions, improving the accuracy of those predictions, or reviewing the documents that have registered errors. You can either verify documents or review errors. Admin-verified documents are documents that have been viewed or rated by an admin.

[Screenshot: the next-steps options in the Reviewer Accuracy table]

"Verify x docs" refers to the number of documents the admin must view or rate in order to achieve 95% confidence in the reviewer's accuracy.

Clicking "more -->" in the upper right of the table will expand the table to reveal additional columns.

[Screenshot: the expanded Reviewer Accuracy table]

Clicking on any of the numbers in the four right columns will open a results table with those documents.

By default, the statistics are generated with data starting from the beginning of the project. If you want to use a different start date to calculate the statistics, you can input a new date into the "filter by starting date" box. You can permanently use a different start date by selecting "set as project default" after choosing a date to filter by.

[Screenshot: the filter by starting date box]

Return to table of contents

Rating Conflicts

The Rating Conflicts page displays all the rating conflicts in the project, broken down by type of conflict.

[Screenshot: the Rating Conflicts page]

For the purpose of calculating rating conflicts, if a Project Admin opens a document that a reviewer has rated and does not change the rating, the Admin is "agreeing" with that rating. If another Admin changes the rating later, this creates an Admin-Admin conflict.
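To make the agreement rule concrete, here is a small sketch (hypothetical data structures, not Everlaw's implementation, and limited to the Admin-Admin case described above) that walks a document's rating history, treats an admin view that leaves the rating unchanged as agreement, and flags a later change by a different admin as a conflict:

    from dataclasses import dataclass

    @dataclass
    class RatingEvent:
        user: str
        role: str           # "admin" or "reviewer"
        rating: str | None  # None: the user viewed the doc without changing the rating

    def has_admin_admin_conflict(history: list[RatingEvent]) -> bool:
        """Return True if an admin overrides a rating that another admin agreed with."""
        current_rating = None
        agreeing_admin = None
        for event in history:
            if event.rating is None:
                # Viewing without changing the rating counts as agreement for admins.
                if event.role == "admin" and current_rating is not None:
                    agreeing_admin = event.user
            else:
                if (event.role == "admin" and agreeing_admin is not None
                        and event.user != agreeing_admin
                        and event.rating != current_rating):
                    return True
                current_rating = event.rating
                agreeing_admin = None  # a new rating starts a fresh agreement cycle
        return False

    history = [
        RatingEvent("reviewer1", "reviewer", "hot"),
        RatingEvent("admin1", "admin", None),    # admin1 views and keeps "hot": agreement
        RatingEvent("admin2", "admin", "cold"),  # admin2 changes it: Admin-Admin conflict
    ]
    print(has_admin_admin_conflict(history))  # True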

To resolve conflicts, click the card or bar graph for the conflict type or user you want to address. This opens a results table displaying the documents and the conflict type. Open a document and modify its rating to resolve the conflict; the conflict statistics will update accordingly.

[Animation: resolving a rating conflict from the results table]

Alternatively, you can batch resolve rating conflicts from the results table. This will affirm the documents' current ratings. 

[Screenshot: batch resolving rating conflicts from the results table]

Return to table of contents

Review Progress

The Review Progress page charts the documents viewed and documents rated over the lifetime of the project (for projects that were in Everlaw before analytics was released, the review progress chart begins on that release date, 7/13/2012). You can hover your mouse over the graph to see any week since the project was created; the tooltip shows how many documents had been viewed and how many had been rated as of that week. Both series are cumulative, so the number of documents that have been viewed and rated should rise over time as review activity progresses on your project. (Note that there may not be a one-to-one correspondence between documents that have been viewed and those that have been rated. For example, some documents may have been viewed but not given a rating, and it is possible to rate documents without viewing them through batch actions.)
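Because both series are running totals, a week's tooltip values can be reproduced by accumulating per-week counts. A short sketch with made-up numbers:

    from datetime import date
    from itertools import accumulate

    # Hypothetical per-week counts of documents viewed and rated.
    weeks = [date(2024, 1, 1), date(2024, 1, 8), date(2024, 1, 15)]
    viewed_per_week = [120, 80, 60]
    rated_per_week = [90, 100, 40]  # batch actions can rate docs that were never viewed

    cumulative_viewed = list(accumulate(viewed_per_week))  # [120, 200, 260]
    cumulative_rated = list(accumulate(rated_per_week))    # [90, 190, 230]
    for row in zip(weeks, cumulative_viewed, cumulative_rated):
        print(row)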

This graph is similar to the Overview Tab, except that you can see the entire history of the project. You can add or remove parameters, or series, from the graph by clicking the appropriate checkbox in the legend. The checkbox next to the legend title selects or deselects all the series. If only some of the data is selected, the checkbox appears gray, but it is still clickable and will add all the data to the graph.

You may hover over one of the series in the legend to emphasize that parameter in the graph by bringing it into the foreground.

To see the data more clearly, click on the Y-axis of the graph. The graph will resize to display the documents as a percentage instead of an absolute number. For example, if you have rated 4.1% of the documents on your project and viewed 8.4% of the documents, the Y-axis will resize and display units from 0-10%. This makes the current percentage and the historical progression of viewed and rated documents easier to visualize. Click on the Y-axis again to return the axis labels to absolute numbers. The image below shows a graph that has been resized by clicking on the Y-axis.

[Screenshot: the Review Progress graph with a percentage Y-axis]

Return to table of contents

Rating Trends

Rating trends give you insight into how reviewers are applying the hot, warm, and cold ratings. The patterns revealed here can help you identify reviewers you might want to retrain, or whose work you should consider QAing. Each rating is represented as a separate bar chart, and you can include or remove users from the bar chart by using the right side legend. 

[Screenshot: the Rating Trends bar charts]

Each bar represents the percentage of that user's rated documents that fall into that rating category. Hover over a bar to see the number of documents the user rated in that category and what percentage that is of all the documents they have rated.

[Screenshot: a Rating Trends bar tooltip]

These comparative charts can help you understand, for example, whether someone rates documents hot more liberally than their colleagues. Or you might find that a reviewer rates documents "hot" or "cold" relatively more often than "warm." In that case, you might want to clarify your criteria for the various rating categories to ensure consistency.

Return to table of contents

Reviewer Pace

The Reviewer Pace scatter plot graphs the number of documents viewed over the lifetime of the project. You can select users' data to appear in the graph by selecting them in the User key. If there are overlapping lines, hovering over one of the users in the key will emphasize that user's data by bringing the user's line into the foreground.

Under the graph is a bar that allows you to set which time period you would like to view. You can also toggle between weeks and months to narrow down or expand the time frame you wish to see.

[Screenshot: the Reviewer Pace graph]

Return to table of contents

Custom

If built-in reports do not provide all of the information you want, you can make your own custom reports by going to the Custom page. You can select any metrics you want to compare, as well as any users in the project. Select the metrics from the dropdown menus under the Custom Visualization, and the users you wish to analyze from the Users key. You may hover over one of the users in the legend to emphasize that user's data by bringing the user's line into the foreground. For a detailed description of the various statistics available, please see the Data Collection and Metrics Details section at the end of this page.

Under the graph is a bar that allows you to set which time period you would like to view. The default view is the last 4 weeks, but it is easily adjusted by dragging the markers along the bar. You may also toggle between weeks and months to narrow down or expand any time frame you wish to see.

[Screenshot: the Custom visualization]

 

The raw statistics that we collect (and that are available in the Custom Visualization widget) are the following ("New" means not previously viewed by the user):

  • Review Time: The length of time the user has spent in review. This timer counts any time a user spends in the review window, on the results table, or on the search page.
  • Docs Viewed: A count of the total number of times the user opened a document in the review window (even if the user had already previously viewed that document).
  • Pages Viewed: A count of the total number of pages the user viewed in the review window (even if they had already viewed that page).
  • New Docs: A count of the total number of new documents the user opened
  • New Pages: A count of the total number of new non-native pages the user viewed.
  • New Natives: A count of the total number of new native files the user opened
  • Docs With Natives: A count of the total number of new documents the user opened that had natives
  • Pages Opened: A count of the total number of non-native pages in the new documents the user opened (e.g. if the user opened document A, which has 3 pages, and document B, which has 2, this count would be 5 - which pages the user views doesn't matter here)
  • Native Pages Viewed: A count of the total number of native pages the user viewed.
  • New Native Pages:  A count of the total number of new native pages the user viewed.
  • Rated Hot/Warm/Cold/Unrated: A count of the number of documents the user manually rated.
  • Batch Rated Hot/Warm/Cold/Unrated: A count of the number of documents the user rated using the Batch Rate tool. 

Return to table of contents
