Jérôme Coupé

In the world of modern software engineering, measuring delivery performance has become essential. The DORA metrics — introduced and popularized by the groundbreaking research in the book Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim — offer a data-driven framework to evaluate and improve how teams build and deliver software.

But while these four key metrics provide a powerful baseline, going beyond them is necessary to develop a more holistic understanding of software health and organizational effectiveness.


What are DORA metrics?

DORA (DevOps Research and Assessment) metrics emerged from years of research into what makes elite technology organizations successful. The results, first published in Accelerate and supported by the annual State of DevOps Reports, demonstrated a strong correlation between high software delivery performance and business success.

The four DORA metrics measure both velocity and stability (a computation sketch follows the list):

1. Deployment frequency

  • Definition: How often code is successfully deployed to production.
  • Why it matters: Frequent deployments reflect rapid value delivery and fast feedback cycles.
  • Elite benchmark: On-demand, multiple times per day.

2. Lead time for changes

  • Definition: Time from code commit to successful deployment in production.
  • Why it matters: Shorter lead times mean the organization can respond quickly to change.
  • Elite benchmark: Less than one hour.

3. Mean time to recovery (MTTR)

  • Definition: Average time to restore service after a failure or incident.
  • Why it matters: A fast recovery capability minimizes customer impact and builds system resilience.
  • Elite benchmark: Less than one hour.

4. Change failure rate

  • Definition: Percentage of deployments that result in a failure requiring remediation (rollback, hotfix, etc.).
  • Why it matters: A low failure rate reflects the robustness of the deployment pipeline.
  • Elite benchmark: 0–15%.
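
To make these definitions concrete, here is a minimal sketch of how the four metrics could be computed from raw delivery data. It is written in Python; the record layout, field names, and sample values are illustrative assumptions, not the schema of any particular tool.

    # Four DORA metrics from hypothetical deployment and incident records.
    from datetime import datetime
    from statistics import mean

    deployments = [
        # (commit time, production deploy time, needed remediation?)
        (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 40), False),
        (datetime(2024, 5, 1, 13, 0), datetime(2024, 5, 1, 13, 55), True),
        (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 10, 30), False),
    ]
    incidents = [
        # (detected at, service restored at)
        (datetime(2024, 5, 1, 14, 0), datetime(2024, 5, 1, 14, 45)),
    ]
    period_days = 2

    # 1. Deployment frequency: deployments per day over the period.
    deployment_frequency = len(deployments) / period_days

    # 2. Lead time for changes: mean commit-to-production time, in hours.
    lead_time_h = mean((deploy - commit).total_seconds() / 3600
                       for commit, deploy, _ in deployments)

    # 3. MTTR: mean detection-to-restoration time, in hours.
    mttr_h = mean((restored - detected).total_seconds() / 3600
                  for detected, restored in incidents)

    # 4. Change failure rate: share of deployments needing remediation.
    failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)

    print(f"Deployments/day: {deployment_frequency:.1f}")  # 1.5
    print(f"Lead time:       {lead_time_h:.2f} h")         # 0.69 h
    print(f"MTTR:            {mttr_h:.2f} h")              # 0.75 h
    print(f"Failure rate:    {failure_rate:.0%}")          # 33%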

These benchmarks have evolved slightly over the years, as shown in the annual State of DevOps Reports. For example, the 2016 and 2017 reports categorized teams into “High,” “Medium,” and “Low” performers; the “Elite” tier was added in the 2018 report as top-tier recognition.


Why DORA metrics matter

  • Evidence-based: Grounded in rigorous research from thousands of organizations.
  • Actionable: They help teams pinpoint specific improvement areas in delivery performance.
  • Balanced: They emphasize both speed and stability, avoiding the “move fast and break things” trap.
  • Predictive: Teams that perform well on DORA metrics tend to achieve better business outcomes—higher profitability, customer satisfaction, and market share.

Why go beyond DORA?

While the DORA metrics provide an excellent foundation for measuring the efficiency and performance of a software delivery process, they focus primarily on delivery speed and operational stability. These two dimensions — while critical — do not capture the full complexity of modern software engineering.

To gain a more comprehensive and strategic view of an organization’s software development health, it is essential to consider additional indicators across the entire lifecycle, from code quality to business impact.

There are three key reasons to expand beyond DORA:

  1. DORA doesn’t measure what you build — only how you deliver it.
    A team can deploy frequently with low failure rates and still ship low-value or poor-quality features. Metrics around customer satisfaction, ROI, or retention help validate whether engineering efforts are aligned with business goals.
  2. DORA is delivery-centric, not development-centric.
    DORA starts the clock at the moment of code commit, but what happens before that—design, code reviews, testing—is just as crucial. Metrics like cycle time, code quality, and technical debt help diagnose upstream inefficiencies or risks that DORA alone won’t detect.
  3. DORA assumes a stable environment—but real systems are complex and dynamic.
    In highly regulated, security-sensitive, or high-load environments, delivery performance must be balanced with metrics around security, compliance, system resilience, and scalability.

In short, DORA tells you how fast and safely you’re delivering, but not whether you’re building the right thing, in the right way, or for the right reasons. Complementing DORA with broader metrics helps teams focus not just on velocity, but also on value, quality, and impact — the true hallmarks of a high-performing engineering organization.

Here are six complementary domains worth tracking:


1. Code quality

  • Code coverage: Measures the percentage of source code exercised by automated tests. While not a guarantee of quality, higher coverage often indicates better-tested, more robust code (a coverage-gate sketch follows this list).
  • Static code analysis: Tools like SonarQube identify code smells, complexity, duplication, and violations of coding standards.
  • Technical debt: Quantifies the cost of suboptimal code that will need future correction. High technical debt slows delivery and increases long-term risk.
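
To make coverage actionable rather than decorative, the sketch below reads a Cobertura-style coverage report (the XML format emitted by tools such as coverage.py) and fails the build below a threshold. The file path and the 80% bar are illustrative assumptions.

    # Minimal coverage gate: fail the build when line coverage drops too low.
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 0.80  # assumed quality bar, tune per project

    root = ET.parse("coverage.xml").getroot()
    line_rate = float(root.get("line-rate", "0"))  # overall line coverage, 0..1

    print(f"Line coverage: {line_rate:.1%} (threshold {THRESHOLD:.0%})")
    if line_rate < THRESHOLD:
        sys.exit("Coverage below threshold, failing the build")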

2. Developer productivity

  • Cycle time: Measures the time it takes for a task or feature to go from “in progress” to “done.” It reveals how efficiently work flows through the system (see the sketch after this list).
  • Throughput: Tracks the number of work items (stories, tasks, bugs) completed in a given time period. This metric helps assess team velocity and forecast delivery capacity.
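
A minimal sketch of both productivity metrics, assuming timestamps exported from an issue tracker; the ticket data is invented for illustration.

    # Cycle time and throughput from "in progress" / "done" timestamps.
    from datetime import date
    from statistics import median

    tickets = [
        # (moved to "in progress", moved to "done")
        (date(2024, 5, 1), date(2024, 5, 3)),
        (date(2024, 5, 2), date(2024, 5, 7)),
        (date(2024, 5, 6), date(2024, 5, 8)),
    ]

    cycle_times = [(done - started).days for started, done in tickets]

    # The median resists skew from the occasional long-running ticket.
    print(f"Median cycle time: {median(cycle_times)} days")   # 2 days
    print(f"Throughput:        {len(tickets)} items/period")  # 3 items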

3. Customer-centric metrics

  • Customer satisfaction: Based on post-interaction surveys, this metric captures how satisfied users are with a feature or release.
  • Net promoter score (NPS): Measures how likely users are to recommend your product. A high NPS reflects strong perceived value and loyalty (the standard calculation is sketched below).
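
The NPS arithmetic itself is standard: on a 0–10 scale, scores of 9–10 count as promoters and 0–6 as detractors. A minimal sketch, with invented sample scores:

    # Standard NPS: % promoters (9-10) minus % detractors (0-6), -100..+100.
    def net_promoter_score(scores: list[int]) -> float:
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    print(net_promoter_score([10, 9, 8, 7, 6, 10, 4, 9]))  # 25.0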

4. Operational excellence

  • System uptime: The percentage of time a system is available and functioning correctly. It is a direct measure of system reliability and is critical for customer trust.
  • Latency: The time it takes for a system to respond to a user request. Lower latency means a more responsive and pleasant user experience (uptime and latency calculations are sketched after this list).
  • Performance under load: How the system behaves during traffic spikes or resource contention. This includes stress testing and monitoring for degradation under pressure.
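
As an illustration, the sketch below derives an availability percentage and a nearest-rank p95 latency from monitoring data; the downtime figure and latency samples are invented.

    # Availability percentage and p95 latency from monitoring samples.
    def availability(total_minutes: int, downtime_minutes: int) -> float:
        return 100 * (total_minutes - downtime_minutes) / total_minutes

    def percentile(values: list[float], pct: float) -> float:
        # Nearest-rank method: take the ceil(pct% of n)-th smallest sample.
        ordered = sorted(values)
        rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
        return ordered[int(rank) - 1]

    # 22 minutes of downtime in a 30-day month is roughly 99.95% uptime.
    print(f"Uptime: {availability(30 * 24 * 60, 22):.3f}%")

    latencies_ms = [120, 95, 110, 480, 100, 105, 130, 90, 115, 2100]
    # Note how a single outlier dominates the tail percentile.
    print(f"p95 latency: {percentile(latencies_ms, 95)} ms")  # 2100 ms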

5. Security and compliance

  • Time to patch vulnerabilities: Measures how quickly your teams address known security flaws after discovery. Faster patching reduces exposure to attacks (see the sketch after this list).
  • Number of security incidents: Tracks breaches, unauthorized access, or any production security events. Fewer incidents suggest stronger preventive controls.
  • Compliance automation: Percentage of compliance checks performed automatically (e.g., infrastructure as code validations, automated audit trails), reducing manual effort and risk.
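
A minimal sketch of time-to-patch tracking, grouped by severity; the vulnerability records and severity labels are illustrative assumptions.

    # Mean time to patch, per severity, from disclosure and patch dates.
    from datetime import date
    from statistics import mean

    vulnerabilities = [
        # (severity, disclosed, patched in production)
        ("critical", date(2024, 5, 1), date(2024, 5, 2)),
        ("high",     date(2024, 5, 3), date(2024, 5, 10)),
        ("critical", date(2024, 5, 8), date(2024, 5, 9)),
    ]

    by_severity: dict[str, list[int]] = {}
    for severity, disclosed, patched in vulnerabilities:
        by_severity.setdefault(severity, []).append((patched - disclosed).days)

    for severity, days in sorted(by_severity.items()):
        print(f"{severity}: mean time to patch = {mean(days):.1f} days")
    # critical: 1.0 days, high: 7.0 days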

6. Business impact

  • Return on investment (ROI): Compares the business value created by a product or feature against the cost of development and maintenance (both ROI and cost per feature are sketched after this list).
  • Customer retention rate: Measures how many users continue to use the product over time. High retention is often a sign of good product-market fit and reliable delivery.
  • Cost per feature delivered: Helps understand how efficiently teams convert investment into customer-facing value.
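
Both business-impact calculations are simple ratios once the inputs exist; the hard part is attributing value and cost, not the arithmetic. A sketch with invented figures:

    # ROI and cost per feature; all figures are illustrative assumptions.
    value_generated = 250_000   # attributed revenue or savings for the period
    total_cost = 160_000        # development + maintenance, same period
    features_delivered = 8

    roi = (value_generated - total_cost) / total_cost
    cost_per_feature = total_cost / features_delivered

    print(f"ROI:              {roi:.0%}")                 # 56%
    print(f"Cost per feature: ${cost_per_feature:,.0f}")  # $20,000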

Conclusion

The DORA metrics, as defined in Accelerate and validated by years of research, remain a cornerstone of modern software delivery measurement. They are a proven lens for understanding how fast and reliably your teams deliver.

But delivery performance is only part of the equation. To truly build products that solve the right problems, delight users, and stand the test of time, organizations must go beyond DORA—incorporating metrics that reflect engineering health, customer value, and business impact.

Use DORA to measure how well you deliver.
Use complementary metrics to measure why it matters.

