Review Cycle Time

How to discover bottlenecks in review time and turn them into actionable improvements

Written by José Caldeira

The Review Cycle Time represents the average time your team spends on review activities within your specified time frame.

  • Review Time = Last Approval - Last Commit Activity (see the sketch after these definitions)

  • Last Approval: Last event in a review process before a merge happens. This is typically the last approval date.

  • Last Commit Activity: Last commit date before the first review activity or before a review is explicitly requested.
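To make the formula above concrete, here is a minimal sketch in Python of how the Review Cycle Time could be computed from those two timestamps. The field names and sample dates are hypothetical, not Athenian's actual data model.

```python
from datetime import datetime, timedelta

def review_cycle_time(last_commit_at: datetime, last_approval_at: datetime) -> timedelta:
    # Review Time = Last Approval - Last Commit Activity
    return last_approval_at - last_commit_at

# Hypothetical pull requests: (last commit activity, last approval).
prs = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 2, 15, 30)),
    (datetime(2023, 5, 3, 11, 0), datetime(2023, 5, 3, 17, 45)),
]

times = [review_cycle_time(committed, approved) for committed, approved in prs]
average = sum(times, timedelta()) / len(times)  # average over the time frame
print(f"Average Review Cycle Time: {average}")
```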

How to use Athenian to improve the review process

Review Dashboard

The main review dashboard shows a timeline of how the time your teams have spent on review items has changed across your chosen date range, along with the distribution of Pull Request Cycle Times.

You'll be able to see, at a glance:

  • The overall amount of time your team has spent Reviewing Items in your chosen date range;

  • The total proportion of your Cycle Time taken up by review processes;

  • The total number of Pull Requests reviewed in your chosen time frame;

  • The total number of Reviewers and the distribution of Review Time per repository.

Review Metrics and Insights

Scroll down and you'll see some key insights concerning your review data.

You can also switch to a view of the data itself, with crucial information about each of the Pull Requests included in your chosen timeframe.

In the Pull Requests tab, the data can be filtered by the current review status:

  • Not reviewed;

  • Review ignored;

  • Review approved;

  • Changes requested;

  • Never released.

Code Reviews Per Repository

This shows you which repositories receive the most review requests.

The color-coded bar chart shows you the number of Pull Requests reviewed vs. the number of Pull Requests not reviewed for each repository.

Review Time Per Repository

Here you can see which repositories have the highest average review times, allowing you to identify patterns across repositories.

Wait Time for First Review

This will show you how the Wait Time for First Review, meaning the average time before code starts being reviewed, has changed over your chosen time period.

You will also see a figure for the average wait time for first review, and a percentage figure for the proportion of review time taken up by first reviews.
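As a rough illustration only (the exact endpoints Athenian uses are not spelled out here), the sketch below assumes the wait time runs from the moment a review is requested until the first review activity, and derives the same two figures: the average wait and its share of the Review Cycle Time. All field names and values are made up.

```python
from datetime import datetime, timedelta

# Hypothetical timestamps and cycle times; these names are illustrative only.
prs = [
    {"review_requested_at": datetime(2023, 5, 1, 10, 0),
     "first_review_at":     datetime(2023, 5, 1, 16, 0),
     "review_cycle_time":   timedelta(hours=20)},
    {"review_requested_at": datetime(2023, 5, 2, 9, 0),
     "first_review_at":     datetime(2023, 5, 3, 9, 0),
     "review_cycle_time":   timedelta(hours=40)},
]

waits = [pr["first_review_at"] - pr["review_requested_at"] for pr in prs]
avg_wait = sum(waits, timedelta()) / len(waits)
avg_cycle = sum((pr["review_cycle_time"] for pr in prs), timedelta()) / len(prs)

print(f"Average Wait Time for First Review: {avg_wait}")
print(f"Share of Review Cycle Time: {avg_wait / avg_cycle:.0%}")
```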

Code Review Engagement

Here you will see four timelines, all showing you how your teams’ workload in relation to reviews has changed across your chosen timeframe:

  • Pull requests reviewed;

  • Code review/pull request;

  • Review comments/pull request;

  • Participants/pull request.

Review Activity

The first chart shows how your team members spent their time, as a ratio of unique Pull Requests Reviewed vs. Pull Requests Created. This represents the work done by team members vs. their review activity.

The second chart shows the total number of comments over the total number of Pull Requests Reviewed. This represents how active developers have been in the review process.
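To make the two ratios concrete, here is a small illustrative sketch; the per-developer counts and field names are made up, not taken from Athenian.

```python
# Made-up per-developer counts to illustrate the two ratios described above;
# these names are not Athenian's API.
activity = [
    {"dev": "alice", "prs_reviewed": 12, "prs_created": 8,  "review_comments": 30},
    {"dev": "bob",   "prs_reviewed": 4,  "prs_created": 10, "review_comments": 6},
]

for person in activity:
    reviewed_vs_created = person["prs_reviewed"] / person["prs_created"]
    comments_per_reviewed_pr = person["review_comments"] / person["prs_reviewed"]
    print(f"{person['dev']}: reviewed/created = {reviewed_vs_created:.2f}, "
          f"comments per reviewed PR = {comments_per_reviewed_pr:.2f}")
```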

⚠️ Please note ⚠️: Team metrics are always more valuable than individual metrics. Do not use the review activity metrics to rank individual developers. Instead, cross reference this information with other key metrics, including the Average Pull Request Cycle Time, to assess whether you should structure your teams differently to improve your processes.


🔎 What to look for?

  • What’s the total Review Cycle Time? Is it connected with PR size or the number of Participants per PR?

  • What's the time until the First Review? What percentage of the Review Cycle Time does it represent?

  • How many Code Reviews are in flight? How many comments are happening per review process?

  • Is the time spent by teams in Code Reviews increasing? According to the Review Activity section, is there a person acting as a bottleneck? Do code review times change per repository, indicating that there may be blockers on reviews that depend on different teams?

  • What are the PRs with the longest time in code review? Is the team aware of why this is happening?

  • Did the team see an increase in the number of contributors? Is the impact shown in the Code Review Engagement section reflected in the Pull Request Review Time?

🚀 Actions to improve

  • If PRs are big and they are impacting the review time, try to create smaller, independent Pull Requests. Pick the largest item in the next sprint and ensure it can be delivered in more than two tasks, each with its own Pull Request and, if possible, a different developer. TDD, API-first, and other development patterns can help here.

  • Deal with this at the product level by breaking stories into smaller value increments. Then deal with it at the execution level by making code deployment independent of feature delivery.

  • Check the Time for First Review periodically and use dailies to ensure developers are doing code reviews in a timely manner.

  • Create specific review time slots, before and after meetings, to keep the review process flowing. Ensure small PRs are reviewed as soon as possible when team members are interrupted; for larger PRs, plan time for the reviews.

  • Ensure the teams finish the pending reviews before moving on to new items. To track that, look at the number of Pull Requests in the following states: Review Submitted, Changes Requested or Review Required.

  • When you have complex Pull Requests, or Pull Requests with more than 10 review activities, use Zoom to do live code reviews and mob reviews. This will decrease the review time while also supporting better knowledge transfer.

  • It's also a good idea to use these activities to define common code review guidelines, ensuring team members are more effective at reviewing and at preparing Pull Requests to be reviewed.
