
SPACE Metrics dashboard

Written by Liam Davis
Updated over a week ago

The SPACE framework provides a comprehensive approach to understanding developer productivity, moving beyond single-metric assessments.

This dashboard offers actionable insights across five key dimensions: Satisfaction & Well-being, Performance, Activity, Communication & Collaboration, and Efficiency & Flow. Empower your team with data-driven insights to optimize workflows, boost performance, and foster a thriving development environment.


1. Satisfaction & Well-being

1.1 Developer Burnout & Engagement

Engineering work pattern analysis: This insight measures the number of commits per user per day of the week, over the selected period. It lets you identify whether team members are working on weekends or follow a set pattern for when they create commits.
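As a rough sketch of how such a pattern could be derived from raw data, the snippet below buckets commits per (user, weekday); the commit log shape here is a hypothetical example, not the dashboard's actual data model:

```python
from collections import Counter
from datetime import datetime

# Hypothetical commit log: (author, ISO timestamp) pairs.
commits = [
    ("alice", "2024-03-04T10:15:00"),  # Monday
    ("alice", "2024-03-09T22:40:00"),  # Saturday
    ("bob",   "2024-03-05T09:05:00"),  # Tuesday
]

# Count commits per (user, weekday) bucket.
pattern = Counter(
    (author, datetime.fromisoformat(ts).strftime("%A"))
    for author, ts in commits
)

# A non-zero weekend bucket flags potential out-of-hours work.
print(pattern[("alice", "Saturday")])  # → 1
```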

Project effort distribution: This insight measures the number of commits per project over the selected period. You can use it to understand a project's activity level and whether it lines up with the business's overall goals.

Engineering workload distribution: This insight measures the number of pull requests created per user per label, over the selected period. It provides an overview of the type of tasks individual engineers have been working on, based on labels (e.g., feature development, bug fixes).

Engineering Proficiency: Engineering Proficiency measures the number of commits per user per project, showing where developers concentrate their activity. It helps onboard engineers, highlight deep knowledge for complex tasks, and ensure knowledge isn't siloed.

1.2 Developer Satisfaction

Closed Issues: This insight tracks completed bug fixes, reflecting the volume of bug-related work done. It detects shifts in focus, revealing whether bug fixing is being prioritized: a surge signals a dedicated bug-fixing sprint, while a drop suggests quality may be getting neglected. Correlate it with 'Bugs Raised' to ensure issues aren't ignored and code health is maintained.

PRs Completed: This insight counts merged pull requests, indicating code integration volume. Monitor trends: drops signal slowdowns (lengthy reviews, complexity), while surges often reflect smaller tasks. It helps assess delivery pace and identify potential process impacts, balancing speed with quality considerations.

Workload categorization: This insight visualizes the distribution of pull requests across categories (bug, feature, etc.). It highlights imbalances (e.g., bug fixes overshadowing new features), signaling potential problems or shifts in priority. Adjust labels to match the team's conventions for relevant insights into task focus and effort.

Ongoing tasks per user: A visualization of open tasks for each team member, broken down by recommended action.


2. Performance

2.1 Rework & Timely Delivery

Rework ratio: This insight provides the ratio of rework commits versus all commits for the selected period.
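The ratio itself is straightforward; assuming commits have already been classified as rework upstream (the boolean flag here is illustrative, not the dashboard's actual schema), it could be computed like this:

```python
# Hypothetical commit records, each flagged upstream as rework or not
# (e.g., a commit that modifies recently written lines).
commits = [
    {"sha": "a1", "rework": False},
    {"sha": "b2", "rework": True},
    {"sha": "c3", "rework": False},
    {"sha": "d4", "rework": True},
]

# Rework ratio = rework commits / all commits in the period.
rework_ratio = sum(c["rework"] for c in commits) / len(commits)
print(f"{rework_ratio:.0%}")  # → 50%
```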

Overdue items over time: Overdue Items Over Time tracks unresolved, past-deadline tasks. An increasing count may signal overcommitment, planning issues, or a need for resource adjustment. A valuable metric for assessing workload pressure and its impact on velocity and focus within the team.

Overdue Items List: This insight provides a list of items (issues and pull requests) that are currently overdue. Overdue items should be prioritized by the team to minimize the impact of the delay or have their due dates postponed if no longer a priority. Diligent management of due dates and overdue items will ensure the team gets better at estimating, prioritizing, and scheduling items over time.

2.2 Bugs & System Stability

Change Failure Rate: Change Failure Rate (PR-based) estimates production failures by tracking bug-related pull requests. Monitor to improve coding practices and implement preventive actions. Staying between 0-15% indicates healthy deliveries.
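A minimal sketch of the calculation, using assumed counts and treating bug-labeled PRs as the proxy for failed changes:

```python
merged_prs = 40   # total PRs merged in the selected period (assumed)
bug_fix_prs = 4   # PRs labeled bug/hotfix, used as a proxy for failures (assumed)

# PR-based Change Failure Rate: bug-related PRs / all merged PRs.
change_failure_rate = bug_fix_prs / merged_prs

# 0-15% is considered a healthy range.
healthy = 0.0 <= change_failure_rate <= 0.15
print(f"CFR: {change_failure_rate:.0%}, healthy: {healthy}")  # → CFR: 10%, healthy: True
```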

Mean Time To Recovery: This insight reports on the average number of hours it takes to resolve incident-related issues (from open to close), over the selected period. Essentially, it informs on the capacity to handle incidents and restore service.
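Assuming each incident-related issue has open and close timestamps, the average could be computed as follows (the data here is illustrative):

```python
from datetime import datetime

# Hypothetical incident issues: (opened, closed) ISO timestamps.
incidents = [
    ("2024-03-01T09:00:00", "2024-03-01T13:00:00"),  # 4 h
    ("2024-03-02T10:00:00", "2024-03-02T12:00:00"),  # 2 h
]

# Hours from open to close for each incident.
hours = [
    (datetime.fromisoformat(c) - datetime.fromisoformat(o)).total_seconds() / 3600
    for o, c in incidents
]

# Mean Time To Recovery = average resolution time in hours.
mttr = sum(hours) / len(hours)
print(mttr)  # → 3.0
```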


3. Activity

3.1 Developer Activity

Story points completed: This insight indicates the team’s work completion pace within a sprint. Tracking trends enables more accurate sprint planning, helps assess team workload, and improves delivery velocity. It also gives the team a way to communicate how realistic work planning was.

Merged PRs: This insight visually showcases where engineers are most active, based on commit messages, providing an instant view of potential areas for improvement.

Closed Bugs: This insight tracks closed bugs over time, for teams watching the volume of completed work increase. Over time, this lets you see improvement across your development team, and it correlates well with open work.

Reviews Distribution: This insight breaks down review statuses (approved, changes requested) to reveal code review process trends. A skew toward "Approved" suggests strong code quality or lenient reviews; imbalances signal potential issues with the code or with engagement in the process. A great way to keep tabs on overall team collaboration.

3.2 Deployments & Incidents

Incidents raised: A count of incidents raised over time. This metric measures the overall quality of the releases in terms of code, design, testing, and deployment. It also provides a good indication of the scalability and stability of the platform.

Bugs raised recently: Measures the evolution of production defects raised by your team. The insight includes both open and closed bugs.

Bugs evolution breakdown: A breakdown of bugs raised over the selected period and grouped by severity. This metric is used to draw the line between high-impact and low-priority bugs, rather than assessing the global quality of code.

Activity feed: This insight complements project management boards (e.g. Jira) by showing you detailed information about issues and pull requests, sorted by due dates.


4. Communication & Collaboration

4.1 Review Responsiveness & Cycle Time

Reviews Overview: The list of most reviewed pull requests, over the selected period. This insight is used to track and contextualize insights such as Total Reviews, to highlight outliers or pull requests that generated lots of reviews.

Reviews performed: The number of pull request reviews submitted, over the selected period. Best practices in software development recommend that every pull request be reviewed by a peer before being merged into the code base.

PR review ratio: The yearly approval ratio of all merged pull requests. This is an audit insight checking that pull requests met their minimum review requirements.

PR Idle Time: The average time it takes from the first review request until the review starts, over the selected period.

PR Review Time: The average time it takes from creating or submitting the first review until the last review is submitted, over the selected period.

PR Merge Time: The average time it takes from the last review submission until the pull request is merged, over the selected period.

PR Cycle Time Overview: This metric is calculated by looking at the time taken by each stage of development, across the entire pull request lifecycle (from development until the pull request is merged).
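The three PR stages described above (idle, review, merge) add up to the overall cycle time. Below is a minimal sketch with hypothetical timestamp fields, not the dashboard's actual schema:

```python
from datetime import datetime

def hours_between(a: str, b: str) -> float:
    """Elapsed hours between two ISO timestamps."""
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 3600

# Hypothetical PR timeline (field names are illustrative, not an actual API).
pr = {
    "review_requested": "2024-03-01T09:00:00",
    "first_review":     "2024-03-01T15:00:00",
    "last_review":      "2024-03-02T09:00:00",
    "merged":           "2024-03-02T11:00:00",
}

idle_time   = hours_between(pr["review_requested"], pr["first_review"])  # 6 h
review_time = hours_between(pr["first_review"], pr["last_review"])       # 18 h
merge_time  = hours_between(pr["last_review"], pr["merged"])             # 2 h
cycle_time  = idle_time + review_time + merge_time                       # 26 h
```

Averaging each stage across all PRs in the selected period yields the dashboard's per-stage metrics.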

Issue Cycle Time Overview: This metric is calculated by looking at the time taken by each stage of product development, across the entire issue lifecycle (from backlog to release).

4.2 Collaboration Patterns

Discussed Items Overview: The list of most commented issues and pull requests, over the selected period. This insight is used to track and contextualize insights such as Total Comments, to highlight outliers or items (issues, pull requests) that generated lots of back and forth.

Average peer comments per PR: This insight indicates the overall level of activity and communication for each pull request, and gives a good sense of the level of interaction between peers within a team.

Average comments per review: The average number of comments per pull request review, over the selected period. This count is a quantitative evaluation of the quality of pull requests and reviews across the board.
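As a simple illustration, given the number of comments left in each submitted review (the counts here are made up), the average is just:

```python
# Hypothetical data: comments left in each pull request review
# submitted during the selected period.
review_comments = [0, 3, 5, 2]

# Average comments per review across all reviews in the period.
avg_comments_per_review = sum(review_comments) / len(review_comments)
print(avg_comments_per_review)  # → 2.5
```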


5. Efficiency & Flow

5.1 Overall Efficiency (DORA extract)

Deployment frequency: The average number of pull requests merged on a daily basis, over the selected period. This metric informs teams about their ability to ship features, enhancements, or fixes to users.
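A minimal sketch of this calculation, with illustrative merge dates and period length:

```python
from datetime import date

# Hypothetical merge dates within a 5-day selected period.
merge_dates = [date(2024, 3, d) for d in (1, 1, 2, 4, 5)]
period_days = 5

# Deployment frequency = merged PRs per day over the period.
deployment_frequency = len(merge_dates) / period_days
print(deployment_frequency)  # → 1.0
```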

PR lead time for changes: The average number of days it takes to develop, approve, and merge pull requests (from open to merge).

Issue lead time for changes: The average number of days it takes to resolve issues (from assigned to close), over the selected period.

5.2 Work Efficiency

Planned vs. unplanned work ratio: Shows the breakdown between planned and orphan pull requests, over the selected period.

Commit frequency: The average number of commits created per day, over the selected period.

Issue Implementation Time: The average time it takes to perform the core work (e.g., implementation + review) on issues.
