Tracking Defects

Tracking defects during the testing phase is a recommended standard practice for any software testing effort.

Defect Arrival Rate

A project's pattern of defect arrivals helps the team understand its likelihood of meeting release commitments. It indicates how much more testing is needed before the system can be released, and is also used to gauge the completeness of the testing process, the team's ability to find defects (the rate of defect discovery), and the effect of process improvements.

  • Count the number of defects found in each unit of time (usually a week, but could be a day or month, depending on iteration length).
  • Count the total cumulative defects found.
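The per-period and cumulative counts described above can be sketched as a small helper. This is a minimal illustration, assuming each defect record carries a `found` date; the field name and the ISO-week bucketing are illustrative choices, not part of any specific tool.

```python
from collections import Counter
from datetime import date

def arrival_counts(defects, period_of):
    """Count defects found in each period plus the running cumulative total.

    defects   -- iterable of dicts with a 'found' date (field name assumed)
    period_of -- maps a date to a period key, e.g. an ISO week number
    """
    per_period = Counter(period_of(d["found"]) for d in defects)
    cumulative, running = {}, 0
    for period in sorted(per_period):
        running += per_period[period]
        cumulative[period] = running
    return per_period, cumulative

# Example: bucket arrivals by ISO week.
defects = [
    {"found": date(2024, 1, 2)},   # ISO week 1
    {"found": date(2024, 1, 3)},   # ISO week 1
    {"found": date(2024, 1, 9)},   # ISO week 2
]
weekly, cumulative = arrival_counts(defects, lambda d: d.isocalendar()[1])
```

The same per-period counts can then be split by severity to plot one trend line per severity level.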

Monitor defect arrivals using a trend line for each severity. Plot the number of defects on the Y axis and iterations on the X axis. Below is an example of a Defect Arrival Rate report, which shows the number of defects across the development time frame. The high severity lines trending upward in this example indicate the project is not stabilizing.

Defect Arrival Rate chart

Expected trend - Ideally, the defect arrival rate will slow down over time for each severity type. As defect arrivals diminish, confidence in meeting release goals increases.

Increasing defect arrivals - When defect arrivals are not tapering down (especially in the last half of the lifecycle), it suggests the team may continue to find defects at a rate that impacts the release date. It can also forecast more defects found after the software ships. This trend can occur as a result of increased testing efforts, the injection of more code defects, or increased code development activity (the more code is created, the more defects are likely to be introduced). Perform analysis to determine the cause of the increasing defect arrival rate in order to take the appropriate corrective action. The team may not be successfully adopting DevOps best practices.

Defect Trends 

Monitor Defect Trends to ensure that arrival and closure rates correlate (i.e., that arrivals do not consistently outpace closures, resulting in a high defect backlog).

  • Count the number of defects found and closed each unit of time (usually a week, but could be a day or month, depending on your iteration length). This is a graph over time, typically in two lines.
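The two-line graph described above can be derived from per-week arrival and closure counts; the running backlog is the cumulative gap between them. A minimal sketch, assuming counts are already keyed by week number:

```python
def defect_trends(arrivals, closures):
    """Given per-week arrival and closure counts (dicts keyed by week),
    return (week, arrivals, closures, open_backlog) rows in week order."""
    weeks = sorted(set(arrivals) | set(closures))
    backlog, rows = 0, []
    for w in weeks:
        a = arrivals.get(w, 0)
        c = closures.get(w, 0)
        backlog += a - c          # cumulative found minus cumulative fixed
        rows.append((w, a, c, backlog))
    return rows

# Closures catch up to arrivals in week 2 and overtake them in week 3,
# so the backlog shrinks toward the end of the cycle.
rows = defect_trends({1: 5, 2: 4, 3: 2}, {1: 1, 2: 4, 3: 5})
```

Note that, as stated below, this arithmetic ignores any backlog that predates the data collection.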

The following graph is an example of a defect trends report.


Expected Trend - As the end of the cycle (ship date) nears, you should expect to see the arrival rate slowing and the closure rate surpassing it. This indicates that fewer defects are being found (hopefully because you have found them already), and that any backlog or outstanding defects are being fixed.

Increasing defect arrivals - If the arrival curve does not taper down, it suggests that you may continue to find defects post-ship. In the cumulative chart, you expect to see the gap between total found and total fixed narrow. If a large gap remains, you have many defects that are still not closed: this may necessitate lengthening the schedule or shipping with unclosed defects. Note that looking purely at arrivals and closures does not include defects in a backlog that predates the data collection.

Defect Aging 

Defect Aging is a measurement of team capacity, process efficiency, and defect complexity. Teams monitor Defect Aging trends in order to:

  • determine whether defects are being addressed in a timely manner
  • understand the capacity of the team to resolve past defects to help with future planning
  • determine if there is a problem with high severity, critical defects taking too long to fix

Count the number of days each defect spends in each state (e.g. submitted, assigned, resolved).

Calculate the following:

  • Average number of days defects spend in each state.
  • Maximum number of days defects spend in each state.
  • Minimum number of days defects spend in each state.
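The three statistics above can be computed from per-state durations. A minimal sketch, assuming the input is a flat list of (state, days) records, one per defect per state:

```python
from collections import defaultdict
from statistics import mean

def aging_stats(state_days):
    """state_days: iterable of (state, days_spent) records.
    Returns {state: (average, maximum, minimum)} in days."""
    by_state = defaultdict(list)
    for state, days in state_days:
        by_state[state].append(days)
    return {s: (mean(v), max(v), min(v)) for s, v in by_state.items()}

stats = aging_stats([
    ("submitted", 1), ("submitted", 3),
    ("assigned", 4), ("assigned", 10),
])
```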

Use a bar chart to monitor Defect Aging trends. Plot the number of days on the Y axis, and the states on the X axis. Plot a vertical bar for the average, maximum, and minimum days for each state. Use stacked bars to group by severity. Another useful way to monitor defect aging is to monitor how many defects have remained in specific date ranges by severity. Typically teams track under 3 days, 3-5 days, and over 5 days.
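The date-range view described above amounts to counting open defects per (severity, age bucket) pair. A sketch using the under-3 / 3-5 / over-5 day split, with illustrative field names:

```python
from collections import Counter

def age_buckets(open_defects):
    """Count open defects per (severity, age range) pair.
    'severity' and 'age_days' are assumed field names."""
    def bucket(days):
        if days < 3:
            return "under 3 days"
        if days <= 5:
            return "3-5 days"
        return "over 5 days"
    return Counter((d["severity"], bucket(d["age_days"])) for d in open_defects)

counts = age_buckets([
    {"severity": "high", "age_days": 1},
    {"severity": "high", "age_days": 6},
    {"severity": "low", "age_days": 4},
])
```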

The following figure is an example of a Defect Aging report, which shows days spent in each state.

Defect Aging Report

Expected trend - The average time in each state is acceptable (meeting the team's established target) and there are no excessive maximum times. The average should not necessarily be the same for all states. For example, a defect should move quickly from Submitted to Assigned if the team's process is working efficiently.

High average in assigned state - This trend occurs when defects spend a long time in the assigned state prior to resolution. This can occur when developers place low priority on debugging and fixing, or when defects are identified late in the process. The older a defect is, the more difficult it may be to correct, since additional code may be created based upon it, and correcting the defect may have a larger impact throughout the system. When developers don't fix defects quickly, testers may run into a related defect in another area, creating a duplicate report. Confirm that the team is working to address defects by priority in each iteration.

High average time in submitted state - Defects that are not promptly assigned (according to their priority) indicate a problem with the team's process. Either analysis is taking too long, or the team is not placing high enough priority on reviewing submitted defects. Confirm there is sufficient, consistent process and tooling in place that will alert the team to newly submitted defects and that necessary information is captured with each report for efficient analysis. Confirm the team works to move submissions through the process as quickly as possible.


Finding more defects is an indication of high error injection during the development process, unless testing effectiveness has improved drastically. This metric can therefore be interpreted in the following ways:

  • If the defect rate is the same or lower than that of the previous release, then ask: Is the testing for the current release less effective?

– If the answer is no, the quality perspective is positive.

– If the answer is yes, extra testing is needed. And beyond immediate actions for the current project, process improvement in development and testing should be sought.

  • If the defect rate is substantially higher than that of the previous release, then ask: Did we plan for and actually improve testing effectiveness significantly?

– If the answer is no, the quality perspective is negative. Ironically, at this stage of the development cycle, any remedial actions will yield higher defect rates.

– If the answer is yes, then the quality perspective is the same or positive.
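The interpretation rules above form a small decision table, sketched here as a function. The verdict strings and parameter names are illustrative labels, not established terminology:

```python
def interpret_defect_rate(rate_vs_previous, testing_less_effective, testing_improved):
    """Encode the two questions above.

    rate_vs_previous       -- "same_or_lower" or "higher" than the prior release
    testing_less_effective -- answer when the rate is the same or lower
    testing_improved       -- answer when the rate is substantially higher
    """
    if rate_vs_previous == "same_or_lower":
        # Lower rate is only good news if testing did not get weaker.
        return "extra testing needed" if testing_less_effective else "positive"
    if rate_vs_previous == "higher":
        # Higher rate is acceptable only if testing was deliberately improved.
        return "same or positive" if testing_improved else "negative"
    raise ValueError(rate_vs_previous)
```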