Developer Summary dashboard
Written by Tom Azernour

This dashboard offers a summary of engineering metrics for a given user or team. It can help contextualize delivery performance and explain delivery issues by connecting seemingly unrelated events.

It's important to note that while tracking individual performance can provide valuable insights, it should be done in a way that fosters a positive and supportive work environment. The goal is to facilitate improvement and collaboration, and to surface potential hurdles affecting the team and its members rather than creating a retributive atmosphere. Regular communication, feedback sessions, and a focus on continuous improvement can help make the tracking process more constructive and beneficial for the development team.

Below are a few examples of how you can leverage this dashboard:

Example 1:

  • Question: Why did the flow metrics (review time, coding time) surge recently?

  • Possible answer: An unusually high number of review requests was assigned to the user, reducing the time available to perform thorough testing.

Example 2:

  • Question: Why did one's delivery metrics gradually slow, reducing the ability to ship code and features to users?

  • Possible answer: Most work is going toward the same project or branch. The collaborator may be at risk of burnout due to the lack of variety in topics.

Example 3:

  • Question: Why did deployment frequency dip while average PR size and commit creation surged?

  • Possible answer: The engineer seemingly worked on larger chunks of work, which may indicate that the scope was too broad. This affects performance and well-being, both for the engineer and for the peers who review these large units.

Insights

Commit frequency: Measures the average number of commits created per day.
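
To make the definition concrete, here is a minimal sketch of how a commits-per-day average could be derived from a list of commit dates. The sample data and the exact window definition are illustrative assumptions; Keypup computes this insight for you from your connected repositories.

```python
# Illustrative sketch only: averaging commits per day over the observed period.
from datetime import date

commit_dates = [date(2024, 5, 1), date(2024, 5, 1), date(2024, 5, 2)]

# Length of the observed period, in days (inclusive of both endpoints).
period_days = (max(commit_dates) - min(commit_dates)).days + 1

commit_frequency = len(commit_dates) / period_days
print(f"Average commits per day: {commit_frequency:.2f}")  # 1.50
```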

Deployment frequency: This metric informs teams about their ability to ship features, enhancements or fixes to users.

Average PR size: This metric is used as an entry point for development good practices, such as breaking big items into smaller ones to facilitate and accelerate code reviews and deployments.
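
As a rough illustration, PR size is commonly defined as the number of lines changed (additions plus deletions); the sketch below averages that across pull requests. The sample values are assumptions, and the dashboard's exact definition may differ.

```python
# Hypothetical example: averaging PR size as lines added plus lines deleted.
pull_requests = [
    {"additions": 120, "deletions": 30},
    {"additions": 45, "deletions": 5},
]

sizes = [pr["additions"] + pr["deletions"] for pr in pull_requests]
average_pr_size = sum(sizes) / len(sizes)
print(f"Average PR size: {average_pr_size:.0f} lines changed")  # 100
```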

Coding time: Companies use this stage of the Cycle Time to measure the duration of the first development pass, which is composed of the development time and the review wait time.

Average review duration (GitHub specific): Measures the average time it takes from creating to submitting reviews. This insight should be used with GitHub repositories exclusively, as GitLab and Bitbucket do not track the initial creation of reviews, only the submission time.
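
For illustration, the sketch below averages the elapsed time between each review's creation and submission timestamps, which is the pair of events this insight relies on (and the reason it is GitHub-specific). The sample data is assumed.

```python
# Minimal sketch: average elapsed time from review creation to submission.
from datetime import datetime, timedelta

# (created_at, submitted_at) pairs; sample values are assumptions.
reviews = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 11, 30)),
    (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 2, 15, 0)),
]

durations = [submitted - created for created, submitted in reviews]
average_duration = sum(durations, timedelta()) / len(durations)
print(f"Average review duration: {average_duration}")  # 1:45:00
```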

Issues assigned: This KPI is a visual indicator showing the total number of Open and Closed issues, providing a quantitative overview of the workload of the user(s).

Due dates: Shows a breakdown per due date of items (issues and pull requests) due within the selected period.

Ongoing PRs: Measures the number of pull requests that are actively being worked on.

Reviews assigned: This KPI is a visual indicator showing the total number of reviews assigned on pull requests that are still open, providing a quantitative overview of tasks that generate workload for the user(s).

Historical work on branches: Measures the number of pull requests created monthly per destination branch (base ref).

Historical work on repositories: Measures the number of pull requests created per month per project/repository.

Engineering workload distribution: This metric provides an overview of the types of tasks individual engineers have been working on, based on labels.

Project effort distribution: This metric is used to understand how the development effort is spread across coexisting projects.

Engineering work pattern analysis: Measures the number of commits per user per day of the week.
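
To show the shape of this grouping, here is a small sketch that counts commits per weekday from commit timestamps. The sample data is hypothetical; the dashboard performs this aggregation for you.

```python
# Illustrative only: grouping commits by day of the week.
from collections import Counter
from datetime import datetime

commit_times = [
    datetime(2024, 5, 6, 10, 0),   # Monday
    datetime(2024, 5, 6, 16, 0),   # Monday
    datetime(2024, 5, 8, 11, 0),   # Wednesday
]

per_weekday = Counter(ts.strftime("%A") for ts in commit_times)
print(per_weekday)  # Counter({'Monday': 2, 'Wednesday': 1})
```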
