How to determine test case run result when there are multiple runs in one iteration?


Yuanhui Liu (623) | asked Feb 20 '15, 12:56 p.m.
We can have multiple runs of the same test case in the same test phase within one iteration, and our backend automation engine decides whether a test case needs to be re-run based on the run results in that iteration. For example:

Scenario #1: Test Case A had a failed automated run followed by two failed manual runs. Our backend automation engine concludes that Test Case A has failed in this iteration so far and needs to be re-run in this iteration.

Scenario #2: Test Case B had a failed automated run followed by one successful manual run and one failed manual run. Because there was a successful run during the iteration, our backend automation engine concludes that Test Case B has passed for this iteration and does not need to be re-run.
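
To illustrate the rule in code (a minimal sketch only, not our actual engine; the function name and the result strings are made up):

```python
# Simplified sketch of the re-run decision described above.
# needs_rerun() and the "passed"/"failed" strings are illustrative only.

def needs_rerun(run_results):
    """Re-run a test case in this iteration only if none of its runs
    (automated or manual) has passed so far."""
    return not any(result == "passed" for result in run_results)

# Scenario #1: all runs failed -> re-run is needed
print(needs_rerun(["failed", "failed", "failed"]))  # True

# Scenario #2: one run passed -> no re-run is needed
print(needs_rerun(["failed", "passed", "failed"]))  # False
```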

However, we noticed that RQM seems to take only the last run result as the current TCER result for the iteration, which is a little different from our process. How do other teams handle multiple runs of a test case within one iteration? And in RQM, instead of using the last run result as the current TCER result, is there a way to mark the TCER as Passed for the iteration as long as there is at least one successful run during that iteration?

One answer



John Nason (2.4k1012) | answered Mar 02 '15, 2:23 p.m.
FORUM MODERATOR / JAZZ DEVELOPER
Generally test teams adopt multiple builds during an iteration, so we assume the last test result for a given iteration is "closest to the truth". It's a somewhat unusual process for an earlier success of a test to trump a later failure of the same test.
You could differentiate between manual and automated runs by using different TCERs (perhaps through the test environment, e.g. an automated environment vs. a manual environment) and track the results separately.
Another approach could be to manually edit the result after the run to change it to PASSED even though it FAILED, and perhaps add a note to the result explaining why it was changed.
Other than that, you'd need custom reporting that takes all results into account and applies your own logic to surface the data the way you want.
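
As a rough illustration of that last option: if you export the execution results for an iteration (for example to CSV), you could post-process them yourself with a pass-if-any-run-passed rule. This is only a sketch; the column names below are assumptions, not an actual RQM export schema, so adjust them to whatever your export produces.

```python
import csv
from collections import defaultdict

# Minimal sketch of custom reporting logic: a test case counts as Passed
# for the iteration if at least one of its runs passed; otherwise the
# last recorded result is used. The CSV columns ("Test Case",
# "Iteration", "Actual Result") are assumed, not an actual RQM schema.

def iteration_verdicts(csv_path):
    runs = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["Test Case"], row["Iteration"])
            runs[key].append(row["Actual Result"])

    verdicts = {}
    for key, results in runs.items():
        # Pass-if-any-passed rule, falling back to the last result.
        verdicts[key] = "Passed" if "Passed" in results else results[-1]
    return verdicts

for (test_case, iteration), verdict in iteration_verdicts("results.csv").items():
    print(f"{test_case} [{iteration}]: {verdict}")
```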
Hope this helps.
