Our documentation on "What are the Google DORA stats, and how to interpret your own DevOps performance?" describes the "what" of the classic DORA metrics. However, in practice, translating "what" into "how" requires a mix of "practical" and "analytical" expertise. Applying Google's superficially simple metrics to your real-world production environment requires precisely defining how "intent" maps to "visualization."
On this page, we will visit each of Google's "Classic Four" DORA metrics. We will discuss how the concepts shake out as measurements that your team can adopt. And we'll show how each looks when graphed over time on GitClear (where you can track your own DORA metrics, free).
For example, take "Failed deployment recovery time" (aka "Mean time to recover" or "MTTR"). What are the "start" and "finish" endpoints for a defect? Does it "start" when a "Very High" severity Jira is opened, or did it start in the release that preceded that Jira (the point the bug slipped into production)? Does it "end" when the Jira issue is marked "resolved", or the first time that a deploy occurs after work was done on the issue? Maybe it is "resolved" the last time that an API call is made claiming the defect to be "resolved"?
For technical minds that want to rely on a predictable interpretation of their DORA numbers, these details are essential to the utility of the DORA charts. Thus, this page is dedicated to going deep into explaining how GitClear transforms raw data into interpreted stats like "Failed deployment recovery time."
Google's own words for how it defines its four classic DORA metrics
Deployment frequency is the least ambiguously defined among the DORA metrics. Almost all companies interpret a "deploy" or a "release" as a discrete event that successfully transitions source code from "underway" to "presented to customers."
Release Count is the default featured metric when visiting the DORA tab, where other recent stats are presented
Even the simplest DORA metric has some room for ambiguity, though. Since GitClear allows releases to be defined by numerous different rules (and API calls), it may not be obvious what happens if a user has duplicative data for a release -- for example, if a git tag is pushed when the user has set up a rule that "any tag push is a release," and a subsequent API call designates that a release has occurred.
The key to preventing duplicate deploys on GitClear is to understand that each release is connected to a single commit within the repo's default branch.
GitClear's implementation (Reports API segment release_count) goes to great lengths to prevent counting duplicate deploys. If a commit title matches the deploy rule, AND the commit is part of a git tag, AND the API release endpoint is called with the commit's sha, that will still be interpreted as a single release.
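To make that concrete, here is a minimal sketch (not GitClear's actual internals) of how keying releases to a single commit sha naturally prevents duplicates, no matter how many triggers fire for the same commit:

# Hypothetical sketch: de-duplicating release triggers by commit sha.
# The function and field names are illustrative, not GitClear's API.
releases_by_sha = {}

def register_release_trigger(commit_sha, source):
    """Record a release for a commit, however many triggers fire for it."""
    release = releases_by_sha.setdefault(commit_sha, {"sha": commit_sha, "sources": set()})
    release["sources"].add(source)  # e.g. "tag_push", "commit_title_rule", "api_call"
    return release

# A tag push, a commit-title match, and an API call for the same sha
# still yield exactly one release.
register_release_trigger("a1b2c3d", "tag_push")
register_release_trigger("a1b2c3d", "commit_title_rule")
register_release_trigger("a1b2c3d", "api_call")
assert len(releases_by_sha) == 1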
Perhaps the biggest potential risk for registering a "duplicate deploy" is calling the Release Endpoint without a commit sha. It is possible to call the endpoint with either a released_at time, or a release_now boolean. When these are used, GitClear translates them to the most proximal commit on the default branch that was committed prior to the timestamp specified (for released_at), or prior to the timestamp when the API call was received (for release_now).
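For illustration, the three ways of registering a release might look roughly like the sketch below. The endpoint URL, auth header, and payload shape are assumptions; released_at and release_now are the params described above.

# Hypothetical sketch of registering a release. The base URL and auth header
# are placeholders; released_at / release_now are the params discussed above.
import requests

API = "https://example-gitclear-host.com/api"  # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Safest: pin the release to a specific commit sha, so no inference is needed.
requests.post(f"{API}/releases", headers=headers, json={"commit_sha": "a1b2c3d4e5"})

# Without a sha, GitClear maps the release to the nearest default-branch commit
# authored before the supplied timestamp...
requests.post(f"{API}/releases", headers=headers, json={"released_at": "2025-06-01T14:00:00Z"})

# ...or before the moment the API call was received.
requests.post(f"{API}/releases", headers=headers, json={"release_now": True})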
In Google DORA's most recent report, "Change Lead Time" is defined as "The time it takes for a code commit or change to be successfully deployed to production." The primary ambiguity here: "Which commit should be measured from?"
GitClear's interpretation of "Change Lead Time," as seen on the "DORA" stats tab: What is the average number of business hours that elapsed between first commit and deploy, for each ticket whose first commit was authored on the date shown on the graph?
As teams onboard with GitClear, "Change Lead Time" is often one of the first metrics to improve
For example, assume that three tickets had work commence: one on June 1, one on June 2, and one on June 3. Say that, for the three tickets, the times between "when the first commit was authored on the ticket" and "when the last commit implementing the issue was deployed," were 10 business hours, 20 business hours, and 30 business hours. If one were to look at the data point for "June" (on a graph that had a long enough time range to group by month) in this case, the Change Lead Time would be "20 business hours" -- the average time that elapsed between when an issue received its first commit, and when the final commit for the issue was deployed to production.
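In code form, that grouping-and-averaging amounts to something like the toy sketch below (illustrative only; the field names are hypothetical):

# Toy sketch of the averaging above; not GitClear's implementation.
# Each ticket is keyed by the date its first commit was authored, plus the
# business hours that elapsed until its final commit was deployed.
from statistics import mean

tickets = [
    {"first_commit_date": "2025-06-01", "lead_business_hours": 10},
    {"first_commit_date": "2025-06-02", "lead_business_hours": 20},
    {"first_commit_date": "2025-06-03", "lead_business_hours": 30},
]

# Grouped by month, all three tickets land in the "June" data point.
june = [t["lead_business_hours"] for t in tickets
        if t["first_commit_date"].startswith("2025-06")]
print(mean(june))  # => 20 business hours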
GitClear offers "Change Lead Time" on both a per-pull request (Reports API segment pr_lead_time
) and per-issue basis (Reports API segment dora_change_lead_time
). The former measures the time from first commit in a PR until the PR is deployed. The latter measures the time from first commit on an issue until the last time the issue is deployed. Neither of these versions is more "intrinsically correct," choosing which to focus on is mostly a matter of considering the percent of work that occurs in the PR review flow.
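Pulling both flavors from the Reports API might look like the sketch below. The endpoint path and query parameters are assumptions; only the segment names pr_lead_time and dora_change_lead_time come from the description above.

# Hypothetical sketch of requesting both lead-time segments from the Reports API.
import requests

API = "https://example-gitclear-host.com/api"  # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

for segment in ("pr_lead_time", "dora_change_lead_time"):
    resp = requests.get(f"{API}/reports", headers=headers,
                        params={"segment": segment, "interval": "week"})
    print(segment, resp.json())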
A final point of ambiguity in the definition of Change Lead Time is whether it should measure "actual time" or "business time," i.e., 8 hours per day on weekdays. GitClear's DORA measurements default to business time, since it muddies interpretation if the user has to wonder whether (and how many) weekends occurred during the development of a feature.
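For intuition, here is a minimal sketch of that "business time" convention, assuming 8 working hours per weekday and ignoring holidays and partial days (the real accounting may be finer-grained):

# Minimal sketch: count business hours as 8 hours per weekday, ignoring
# holidays and partial days. GitClear's actual accounting may differ.
from datetime import date, timedelta

def business_hours_between(start: date, end: date) -> int:
    """Whole weekdays from start (inclusive) to end (exclusive), times 8 hours."""
    hours = 0
    day = start
    while day < end:
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            hours += 8
        day += timedelta(days=1)
    return hours

# A Friday-to-Tuesday span skips the weekend: Friday + Monday = 16 business hours.
print(business_hours_between(date(2025, 6, 6), date(2025, 6, 10)))  # => 16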
Another graph that GitClear provides in its DORA stats is labeled "Lead Hours to Resolve." This data point differs from Change Lead Time in its "start time," which here is "the time that the issue was opened in Jira." For "Lead Hours to Resolve," each data point shown on the graph corresponds to the date range in which tickets were opened. For example, if viewing the past 6 months, the default interval used would be "per-week," and each data point would show the number of business hours that passed between when an issue was opened and when it was marked as "deployed."
Most often, an issue is considered "deployed" when the final commit referencing the issue is present on the default branch when a deploy occurs. However, for customers that use the Release API, if the defect_keys_resolved or issue_keys_resolved params are specified, that will override the per-commit interpretation. In this case, the release date specified by the API call will be considered the "resolve time" endpoint that defines the issue's Lead Hours to Resolve.
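A sketch of declaring those resolutions directly on a release follows. The endpoint path and auth header are placeholders; the two *_keys_resolved params are the ones described above.

# Hypothetical sketch: declaring resolved issues directly on a release, which
# overrides the per-commit "which deploy contains the fix" inference.
import requests

API = "https://example-gitclear-host.com/api"  # placeholder base URL
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

requests.post(f"{API}/releases", headers=headers, json={
    "released_at": "2025-06-03T18:30:00Z",
    "defect_keys_resolved": ["PROJ-1432"],          # critical defect fixed in this release
    "issue_keys_resolved": ["PROJ-1398", "PROJ-1401"],
})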
Change Failure Rate is simple in principle. But, as you've gathered if you've read from the start to here, the simplicity of any DORA stat is "in concept" only. When you're responsible for explaining DORA to teammates, converting these "simple concepts" into implementation terms is, yet again, steeped in gray area.
When zooming in on Change Failure Rate in GitClear's DORA stats, it's often possible to isolate individual sprints where repos struggled
Conceptually, "Change Fail Rate" gives managers a sense for the extent to which the company's release cycle is being driven by urgent bug fixes / hotfixes.
The denominator for this metric is the deploy count defined above, in "Deployment frequency." The numerator is the number of deploys that included work expected to resolve a critical defect, by whatever definition you choose for a defect.
For example, if you call the Release API endpoint, supplying either a defect_keys_resolved param, or an issue_keys_resolved param that specifies a Jira issue designated a "Bug," then that release is deemed "a deploy that resolves a critical defect." If it is the only deploy you ever make, then your Release Defect Percent would be 100%.
As another example, say that a Jira is opened with the Severity/Priority of "Very High". By default, an issue of that rating is considered critical. When a pull request that references that issue gets merged, the default branch is "armed" to resolve the defect. The next release after the pull request is merged would then be the "deploy that resolves a defect."
It's rare for the count of "releases with a defect" to differ meaningfully from "releases that resolve an urgent defect." If you generate 3 defects, scattered over a year where you deploy 100 times, your Release Defect % is 3% regardless of which side is instrumented. Sometimes multiple defects will be born together, and just as often, they'll be resolved together.
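In code, the ratio from that example works out like this (a toy calculation, not GitClear's implementation):

# Toy calculation of Change Fail Rate: defect-resolving deploys / total deploys.
releases = ([{"resolves_critical_defect": False}] * 97
            + [{"resolves_critical_defect": True}] * 3)

fail_rate = sum(r["resolves_critical_defect"] for r in releases) / len(releases)
print(f"{fail_rate:.0%}")  # => 3%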
Since the options measure "either side of the same phenomenon," why prefer to measure the release in which the defect is resolved? Because customers far more commonly have specific data about when a defect was resolved than about which release introduced it.
GitClear's beta implementation of Change Fail Rate based this metric on our derivation of the "most proximal release prior to defect detected." But just because a defect was detected on Dec 1 doesn't mean it came from the Nov 30 deploy. Nobody knows how much time might have passed between the release where a bug originated and when the bug was logged to Jira. Thus, trying to instrument "releases where a defect occurred" is inherently fraught.
Finally, we arrive at the DORA stat that isn't even very simple in concept, let alone implementation. As the Google DORA team explains it in 2025:
The time it takes to recover from a failed deployment
To implement this stat, we need to define a "start time" that represents "the time the defect was observed," and an "end time," representing "when a release that successfully resolves the defect has been completed."
Business hours from issue detection to deployment, as shown in the "DORA Stats" tab
GitClear offers multiple choices from the "DORA Stats" tab, with varying graphs presenting this interval as Mean (or Median) Hours to Repair (MTTR).
The "start time" is the time the defect was reported (i.e., the detected_at
param to the API). The endpoint is when the defect is marked as "Resolved." It is possible for an API call (to Releases or Critical Defects) to set an explicit released_at
or fix_released_at
, respectively) to explicitly set a date for when the defect was resolved. Only if the API endpoints do not possess data on an explicit resolution, GitClear will check whether the defect is associated with an issue tracker ticket, and if the ticket has a "resolved at" date. If it does, we'll search for an existing release within a few days of the ticket's "resolved at" timestamp, and designate that as the "resolved" time. If both of these fail, then GitClear will check whether a commit or PR that references the defect can be found, and if so, what time the commit or PR was deployed. This is the third option for defining the "resolved at" endpoint for MTTR.
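The fallback order above can be summarized in a short sketch. The field and helper names here are hypothetical; only the ordering mirrors the prose.

# Sketch of the resolved-at fallback order described above. All field and
# helper names are hypothetical; only the ordering mirrors the prose.
def find_release_near(timestamp):
    """Stand-in for looking up a release within a few days of a timestamp."""
    return None

def deploy_time_of(change):
    """Stand-in for finding when a commit or PR reached production."""
    return change.get("deployed_at")

def resolved_at(defect):
    # 1. An explicit released_at / fix_released_at supplied via the API wins.
    if defect.get("explicit_resolved_at"):
        return defect["explicit_resolved_at"]
    # 2. Otherwise, use the linked issue tracker ticket's "resolved at" date,
    #    matched to a release within a few days of that timestamp.
    ticket = defect.get("ticket") or {}
    if ticket.get("resolved_at"):
        release = find_release_near(ticket["resolved_at"])
        if release:
            return release["released_at"]
    # 3. Finally, fall back to the deploy time of a commit or PR that
    #    references the defect.
    for change in defect.get("commits_and_prs", []):
        deployed = deploy_time_of(change)
        if deployed:
            return deployed
    return None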
In addition to these DORA graphs indicating how much time is passing between "defect detected" and "defect resolved," GitClear also offers a great deal of granular data within the Defect Browse tab.
For the "Hours to Fix Activity" metric, the "start time" is still the time the defect was reported.
How long have various repos taken between defect detection & initial team response?
The "end time" is the first measurable progress toward resolving the issue: either a commit, an opened pull request, or a release that is designated as resolving the issue.
Located alongside "Hours to Fix Activity," this graph measures specifically the average time that has elapsed between when defects were detected, and when they had subsequently received a pull request that got merged.
Pull Request Cycle Time is available as a chart within the "Google DORA" and "Pull Request" tabs
This metric straddles the line between a "devops" stat and a "dev team" stat, so GitClear houses it within both the DORA and Pull Request stats.
It's easy to refer to a concept like "time to recovery," but harder to specify exactly what should be measured to capture "how fast is the team reacting to an urgent problem?"
GitClear has spent several years working with enterprise customers like Bank of Georgia to understand what DORA means at scale. We hope that our iteratively derived approach to interpreting these key metrics is useful to teams that want to better instrument how changes like "AI adoption" impact service availability. If you'd like to get a free report on your team's performance with these metrics, learn more here.