RQM test execution record
Hi everyone,

I have a few doubts about the test execution record. I'm new to testing and am trying to execute a test case by creating a test execution record. My question is: if I'm running a test every day, do I have to create a test execution record every time? If so, how do the points work? The question about points concerns reports. How does this affect reports, for example the execution trend report?

Thanks and regards,
potluriv
6 answers
Hi,

I will try to explain how test execution records and the points system work. A test case has a weight (points) associated with it. A test execution record (TER) is created for a particular test case and can be reused multiple times. To run a test, you need a TER, and each execution creates an execution result. Assuming a manual test, you can allocate points in the execution result based on the number of steps that passed or failed (a percentage of the weight, or similar logic). These points summarize at a high level what happened during execution and can be reported on.

-Vaibhav
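To make the "percentage of the weight" idea concrete, here is a minimal sketch of one way points could be split between passed and failed based on step results. This is illustrative only; the function and its behavior are assumptions, not the RQM API or its internal formula.

```python
# Hypothetical sketch: derive passed/failed points from a test case's
# weight and its manual step results. Not the actual RQM calculation.

def allocate_points(weight, step_results):
    """Split a test case's weight into (passed, failed) points
    based on the fraction of steps that passed."""
    total = len(step_results)
    if total == 0:
        return 0, 0
    passed = sum(1 for r in step_results if r == "pass")
    passed_points = round(weight * passed / total)
    return passed_points, weight - passed_points

# A 100-point test case where 3 of 4 steps passed:
print(allocate_points(100, ["pass", "pass", "pass", "fail"]))  # (75, 25)
```

Whatever formula you pick, the point is that the execution result carries a points breakdown that the reports can aggregate.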
vaibhav wrote:
potluriv wrote: [...]

The points/weight let you compare tests. Not all tests are of equal value: a test that exercises a single API and takes seconds to run does not have the same value as a test that requires a complex setup and exercises the product under stress. The points let you weigh those two tests against each other. The points finally assigned are a little more subjective. Let's say that in the second scenario everything ran and passed, but one API test failed. Obviously you want to give partial credit here, since the majority of the tests ran, so you can assign a percentage of the total weight to passed and a portion of the points to failed. There are many different ways this can be done, and people come up with all kinds of formulas to calculate what it should be. The main thing is to be consistent in how you do it, so that the data has meaning.
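The unequal-weights idea described above can be sketched in a few lines. The test names, weights, and pass fractions below are invented for illustration; RQM does not prescribe this particular formula.

```python
# Illustrative only: comparing two tests of unequal value via weights.
# A quick API test carries few points; a complex stress scenario
# carries many, so a single failure in it still earns partial credit.

tests = [
    {"name": "single API call", "weight": 10, "fraction_passed": 1.0},
    {"name": "stress scenario", "weight": 90, "fraction_passed": 0.9},
]

passed_points = sum(t["weight"] * t["fraction_passed"] for t in tests)
total_points = sum(t["weight"] for t in tests)
print(f"{passed_points:.0f} of {total_points} points passed")  # 91 of 100
```

Counting raw test cases would report "1 of 2 passed"; weighting reports 91 of 100 points, which better reflects the actual risk.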
vaibhav wrote: potluriv wrote: [...] The real main thing is to try to be consistent in how you do it so that data has meaning.

For the second part of the question, should we reuse the same execution record or create a new one? I think we need to create a new TER for every new test milestone or new test plan; we cannot reuse the same one. Am I right?
ianhe wrote:
dmehaffy wrote: [...] Am I right?

Yes. If you want to do any reporting that involves tracking progress, you need to create TERs per the milestones you plan; otherwise the results just get stacked under one TER, with only the last one indicated on the TER results page. We have a regression test bucket that we run every week on many different machine configurations, so we created a milestone for each week, and we create a TER for each milestone and each configuration. Then we can report on total attempts, with passed and failed points, either in a graph or in a spreadsheet-like table.
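The one-TER-per-(milestone, configuration) scheme and the resulting report can be sketched roughly as below. The field names and data are made up for illustration; this is not the RQM data model or reporting API.

```python
# Sketch: one TER per (milestone, configuration), aggregated into a
# per-milestone summary of attempts and passed/failed points.
from collections import defaultdict

ters = [
    {"milestone": "week1", "config": "linux",   "verdict": "pass", "points": 10},
    {"milestone": "week1", "config": "windows", "verdict": "fail", "points": 10},
    {"milestone": "week2", "config": "linux",   "verdict": "pass", "points": 10},
]

summary = defaultdict(lambda: {"attempts": 0, "passed_points": 0, "failed_points": 0})
for ter in ters:
    row = summary[ter["milestone"]]
    row["attempts"] += 1
    key = "passed_points" if ter["verdict"] == "pass" else "failed_points"
    row[key] += ter["points"]

for milestone, row in sorted(summary.items()):
    print(milestone, row)
```

Because each milestone/configuration pair has its own TER, the weekly trend survives; stacking every run under one TER would collapse all of this into a single latest result.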
Thanks David. I wonder why you care about how many attempts there are. We could rerun the same execution result many times until it passes; what we care about is the final result of that execution.
ianhe wrote: Thanks David. I wonder why you care about how many attempts [...]

If you have several thousand tests to run, you want to spread them out over time and be able to track that you are making the progress required to complete the testing before you ship the product. You cannot tell that by just running the same execution over and over. Also, if your code is changing, you want to schedule the TERs near the beginning, middle, and end of your test cycle: a test that passes at the beginning will not necessarily pass at the end if a regression has been introduced in the code base. We care what the final result is, but we also need an indication of success rates. The normal "power curve" that you can generate, if you also track time, is important in normal large-scale testing such as a system test.
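The point about tracking success rates over the cycle, rather than only the final result, can be shown with a toy pass-rate trend per milestone. The numbers below are invented purely for illustration.

```python
# Why attempts per milestone matter: the trend is invisible if every
# rerun overwrites a single reused result. Data is invented.

runs = {
    "start":  {"attempted": 40,  "passed": 30},
    "middle": {"attempted": 80,  "passed": 68},
    "end":    {"attempted": 100, "passed": 97},
}

for milestone, r in runs.items():
    rate = 100 * r["passed"] / r["attempted"]
    print(f"{milestone:>6}: {r['passed']}/{r['attempted']} ({rate:.0f}% pass)")
```

A dip in this curve mid-cycle is exactly the regression signal that a single "final" result would hide.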