The Team Benchmark dashboard is your central hub for gaining a deep understanding of your development team's performance, productivity, and collaboration dynamics. Going beyond basic metrics, it offers a comprehensive suite of insights to help you identify strengths, address bottlenecks, and foster a culture of continuous improvement.
This dashboard is designed to empower engineering leaders, team leads, and developers alike with actionable data, enabling them to optimize workflows, boost individual contributions, and achieve peak team performance.
1. General insights
[Team] LoC - 95th Percentile: This insight reports on the 95th percentile of lines of code (LoC) changed over the selected period (e.g., 12 months). Monitoring the 95th percentile of LoC changed offers a crucial perspective on the size of the largest code changes being deployed. While the average LoC changed can be misleading due to outliers, the 95th percentile gives you a realistic view of the upper limit of change size that's commonly being pushed.
[Team] LoC - Average - Team-wide: This insight reports on the average lines of code (LoC) per pull request (PR) over the selected period (e.g., 12 months). Monitoring the average LoC per PR is useful because it gives you a direct measure of the size and complexity of the code changes being reviewed. It offers insight into how well features are being broken down into manageable units for review and integration, and it helps balance the need for frequent iteration against the cognitive load placed on reviewers.
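To make these two LoC metrics concrete, here is a minimal sketch that computes both from a list of per-PR line counts. The `loc_per_pr` values are hypothetical sample data; the dashboard derives the real numbers from your repository activity.

```python
import math

# Hypothetical export: lines changed (added + deleted) per PR
# over the selected period.
loc_per_pr = [42, 118, 7, 950, 61, 233, 15, 480, 88, 1340,
              29, 175, 66, 310, 12, 205, 54, 720, 33, 140]

average_loc = sum(loc_per_pr) / len(loc_per_pr)

# 95th percentile via the nearest-rank method: sort, then take the
# value at rank ceil(0.95 * n). A single huge outlier PR (1340 here)
# pulls the average up while the 95th percentile stays representative.
ranked = sorted(loc_per_pr)
p95_loc = ranked[math.ceil(0.95 * len(ranked)) - 1]

print(f"Average LoC per PR: {average_loc:.0f}")   # ~254
print(f"95th-percentile LoC per PR: {p95_loc}")   # 950
```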
Lines Changed Per Developer Over Time: This insight tracks the number of lines changed by each developer over the selected period. Monitoring lines changed per developer over time offers valuable insight into individual contributions, workload distribution, and potential skill development within a team. It's a more granular view than the team-wide LoC metrics, allowing you to identify patterns and trends specific to each developer, which can inform resource allocation, mentorship opportunities, and process improvements.
Ranking - Total LoC per Developer: This insight shows a breakdown of lines changed, added, deleted, and the number of PRs per developer. Where the previous insight shows trends over time, this ranking makes it easy to compare total contribution volume across the team at a glance, which can likewise inform resource allocation, mentorship opportunities, and process improvements.
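Outside the dashboard, you can roughly approximate the per-developer line counts from git history alone. The sketch below aggregates lines added and deleted per author from `git log --numstat`; note that it works at the commit level (PR counts live in your hosting platform, not in git itself), so treat it as a cross-check rather than a replica of this insight.

```python
import subprocess
from collections import defaultdict

# Rough cross-check of the per-developer LoC ranking, computed from
# commit history rather than PRs.
log = subprocess.run(
    ["git", "log", "--since=12 months ago", "--numstat",
     "--format=@%an"],  # "@" marks the author line before each commit's numstat rows
    capture_output=True, text=True, check=True,
).stdout

totals = defaultdict(lambda: {"added": 0, "deleted": 0})
author = None
for line in log.splitlines():
    if line.startswith("@"):
        author = line[1:]
    elif line.strip() and author:
        added, deleted, _path = line.split("\t")
        if added != "-":  # binary files report "-" instead of line counts
            totals[author]["added"] += int(added)
            totals[author]["deleted"] += int(deleted)

# Rank by total lines changed (added + deleted).
for name, t in sorted(totals.items(),
                      key=lambda kv: kv[1]["added"] + kv[1]["deleted"],
                      reverse=True):
    print(f"{name}: +{t['added']} / -{t['deleted']}")
```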
Ranking - Total Commits per Developer: This insight reports the number of commits per developer over the selected period (e.g., 12 months). Monitoring total commits per developer provides a window into each developer's activity and engagement with the codebase over a given period. It helps you understand contribution patterns and can highlight potential areas for investigation, though it's essential to interpret this metric carefully and within context.
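Commit counts themselves are simple to reproduce locally, if git history is an acceptable proxy for the period shown in the dashboard:

```python
import subprocess
from collections import Counter

# Count commits per author over (roughly) the selected period.
authors = subprocess.run(
    ["git", "log", "--since=12 months ago", "--format=%an"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for name, count in Counter(authors).most_common():
    print(f"{name}: {count} commits")
```

The built-in `git shortlog -s -n --since="12 months ago"` prints the same ranking in a single command.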
2. Cycle Time insights
Average Cycle Times Per PR (Days) - Per Developer: This insight shows a breakdown of the average cycle time per developer over the selected period (e.g., 12 months). Monitoring the average cycle time of pull requests (PRs) is crucial for understanding the efficiency and velocity of your software development process. PR cycle time, typically defined as the time from when a PR is opened to when it's merged, directly impacts how quickly code changes are integrated into the main codebase and ultimately delivered to users.
[Team] Average Cycle Times Per PR (Days): This insight shows the overall average PR cycle time for your whole team. Tracked alongside the per-developer breakdown above, the team-wide average helps you judge whether slow cycle times are a systemic issue or concentrated on specific contributors.
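Both cycle-time views reduce to averaging merged-minus-opened durations, grouped per author or over all PRs. A minimal sketch, assuming hypothetical PR records with `author`, `opened_at`, and `merged_at` fields:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical export: one record per merged PR.
prs = [
    {"author": "alice", "opened_at": "2024-03-01T09:00", "merged_at": "2024-03-03T15:00"},
    {"author": "bob",   "opened_at": "2024-03-02T10:00", "merged_at": "2024-03-02T18:00"},
    {"author": "alice", "opened_at": "2024-03-04T11:00", "merged_at": "2024-03-09T11:00"},
]

def cycle_days(pr):
    opened = datetime.fromisoformat(pr["opened_at"])
    merged = datetime.fromisoformat(pr["merged_at"])
    return (merged - opened).total_seconds() / 86400

# Per-developer averages.
per_dev = defaultdict(list)
for pr in prs:
    per_dev[pr["author"]].append(cycle_days(pr))
for author, times in per_dev.items():
    print(f"{author}: {sum(times) / len(times):.1f} days")

# Team-wide average over the same PRs.
team = [cycle_days(pr) for pr in prs]
print(f"team-wide: {sum(team) / len(team):.1f} days")
```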
Average Coding Time per PR (Days): This insight reports on the average time from the beginning of development until the first review request. Monitoring Coding Time provides insights into the upfront development effort and efficiency before the code is exposed to peer review. This metric helps identify potential bottlenecks *before* the code review process even begins, offering a more holistic view of the development lifecycle.
Average Idle Time per PR (Days): This insight reports on the average time it takes from the first review request until the review starts. *Can be negative if a review is provided before it is requested.* Monitoring Idle Time provides critical insight into the responsiveness and efficiency of your code review process. It highlights a potential bottleneck in the workflow and directly impacts the overall pull request cycle time.
Average Peer Review Time per PR (Days): This insight reports on the average time from the first peer review until the last review is submitted. Monitoring Review Time is crucial for understanding the efficiency and effectiveness of the code review process itself. It goes beyond just knowing that reviews are happening; it helps you assess how quickly and thoroughly they are being conducted.
Average Merge Time per PR (Days): This insight reports on the average time it takes from the last review submission until the pull request is merged. Monitoring Merge Time helps identify any lingering delays or bottlenecks that occur *after* the code has been approved. While the code review process itself might be efficient, delays at this stage can still impact the overall development cycle.
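Taken together, the four stage averages above partition the full cycle: coding time (development start to first review request), idle time (request to first review, possibly negative), review time (first to last review), and merge time (last review to merge). A sketch of that decomposition for a single PR, using hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical timeline for a single PR; field names are illustrative,
# not an actual export schema.
events = {
    "dev_start":        "2024-03-01T09:00",
    "review_requested": "2024-03-02T14:00",
    "first_review":     "2024-03-02T11:00",  # before the request => negative idle time
    "last_review":      "2024-03-03T16:00",
    "merged":           "2024-03-04T10:00",
}
t = {name: datetime.fromisoformat(stamp) for name, stamp in events.items()}

def days(delta):
    return delta.total_seconds() / 86400

stages = {
    "coding time": days(t["review_requested"] - t["dev_start"]),
    "idle time":   days(t["first_review"] - t["review_requested"]),
    "review time": days(t["last_review"] - t["first_review"]),
    "merge time":  days(t["merged"] - t["last_review"]),
}
for name, value in stages.items():
    print(f"{name}: {value:+.2f} days")

# The stages telescope: their sum equals the dev-start-to-merge span.
print(f"total: {days(t['merged'] - t['dev_start']):+.2f} days")
```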
3. Insights showing work generated for others
Average Comments per PR - Per Developer: This insight reports on the average number of comments made per PR, per developer, including review comments, whether from the author or from others. Monitoring the average comments per pull request (PR) provides valuable insight into code quality, the effectiveness of the code review process, and the level of collaboration within the team. It's a proxy indicator for discussion, understanding, and potential complexity.
Average Comments per PR - Team-wide: This insight reports the same metric aggregated across the whole team. Comparing the team-wide average with the per-developer breakdown above helps you see whether heavy or light discussion is a team-wide norm or specific to individual developers.
Average Peer Reviews per PR - Per Developer: This insight reports on the average number of reviews made per PR and per developer, including approval reviews. Monitoring the average number of Peer Reviews per Pull Request (PR) provides a critical perspective on the depth and rigor of your code review process. It helps ensure that code is being adequately scrutinized and that potential issues are being identified before integration.
Average Peer Reviews per PR - Team-wide: This insight reports the same metric aggregated across the whole team, again including approval reviews. The team-wide figure helps ensure that code is being adequately scrutinized overall, while the per-developer breakdown above shows whether review effort is evenly distributed.
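All four insights in this section follow the same pattern: count comments or reviews on each PR, then average per developer or across the whole team. A minimal sketch, assuming a hypothetical export with per-PR counts:

```python
from collections import defaultdict

# Hypothetical export: comment and review counts per PR.
prs = [
    {"author": "alice", "comments": 6, "reviews": 2},
    {"author": "bob",   "comments": 1, "reviews": 1},
    {"author": "alice", "comments": 9, "reviews": 3},
    {"author": "carol", "comments": 4, "reviews": 2},
]

def averages(records, field):
    """Per-developer and team-wide average of `field` per PR."""
    per_author = defaultdict(list)
    for pr in records:
        per_author[pr["author"]].append(pr[field])
    per_dev = {a: sum(v) / len(v) for a, v in per_author.items()}
    team = sum(pr[field] for pr in records) / len(records)
    return per_dev, team

for field in ("comments", "reviews"):
    per_dev, team = averages(prs, field)
    print(f"{field}: team-wide {team:.1f}, per developer {per_dev}")
```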