
RQM best practices - tracking/reporting new vs regression

CLM 6.0.4

All,

How are other testers organizing their work with test suites, test plans, and test cases? Here is how we have set ours up:

1.0 Test Plan
--New test cases covering the new functionality
2.0 Test Plan
--New test cases covering the new functionality
--Test suites (regression) built from 1.x test cases
3.0 Test Plan
--New test cases covering the new functionality
--Test suites (regression) built from 1.x and 2.x test cases

Seems simple, but is there a more efficient way? Have we missed some new feature for organizing things? There are tons of great widgets and reports, and I like to check every couple of years to make sure we are using RQM to its full potential.
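For reference, a structure like this can be audited programmatically through the RQM Reportable REST API, for example to see which 1.x test cases recur in the later plans' regression suites. Below is a minimal Python sketch; the host, project alias, and credentials are placeholders, and authentication (basic auth vs. form login via j_security_check) varies by deployment:

    import requests
    import xml.etree.ElementTree as ET

    BASE = ("https://clm.example.com:9443/qm/service/"
            "com.ibm.rqm.integration.service.IIntegrationService/resources/MyProject")
    NS = {"atom": "http://www.w3.org/2005/Atom",
          "qm": "http://jazz.net/xmlns/alm/qm/v0.1/"}

    session = requests.Session()
    session.auth = ("tester", "password")   # placeholder credentials
    session.verify = False                  # CLM servers often use self-signed certs

    # The testplan feed is Atom; abbreviate=false embeds each plan's full XML.
    feed = ET.fromstring(session.get(BASE + "/testplan",
                                     params={"abbreviate": "false"}).text)
    for entry in feed.findall("atom:entry", NS):
        title = entry.findtext("atom:title", namespaces=NS)
        plan = entry.find(".//qm:testplan", NS)
        if plan is None:
            continue
        # Each <testcase href="..."/> child is a reference to one test case.
        cases = [tc.get("href") for tc in plan.findall("qm:testcase", NS)]
        print(f"{title}: {len(cases)} test case(s)")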

thanks

One answer

Hi Mahari,

Your approach seems good. Here is how we have organized our test assets:

1) We create a fresh test plan for every release by copying the previous release's test plan. Copying saves us from manually re-adding all of the regression data from the previous release into the new plan, item by item. (A scripted version of this copy is sketched after this list.)
2) Each test plan contains sprint sections. Every time we copy a plan, we delete the sprint sections carried over from the previous release, so that what gets added is just the delta for the current release.
3) Sprint sections generally contain all of the ongoing new-functionality test cases, with the plan items linked to them.
4) We then start adding the newly created test cases to the new plan, which also contains all of the regression test cases and test suites.
5) When we generate TCERs for each sprint, they are a combination of the new-functionality test cases, the regression test cases from previous releases, and test cases for the areas that might regress because of the new functionality. Adding regression test cases to the bucket is done purely on the test lead's experience, by analyzing the features coming into the new release and what from the previous release could be affected. We also keep a set of regression test cases that must be run every sprint regardless of their relevance to the new features. (Bulk-generating TCERs can also be scripted; see the second sketch after this list.)
6) Once the test plan is drafted, it is reviewed by the development leads and their inputs are incorporated.
7) Now comes the reporting part.
a) First, we have a dashboard for the project area where all test assets can be reported on.
b) This dashboard typically contains widgets like:
    - TCERs owned by each tester
    - Execution status, showing total execution points and how many passed, failed, were blocked, and so on (a scripted tally of the same numbers is in the third sketch after this list)
    - Defects logged during the particular sprint
    - A TCER widget depicting all In Progress and Not Run TCERs
    - A metrics graph per tester showing the current status of execution

8) We can also add any custom reports created in JRS to our test dashboard as widgets.
9) Widgets depicting blocking defects that need tracking can also be added.
10) We can also add widgets for test suites (regression only), and then a separate widget for the current release.
The list of widgets is long; it all depends on the needs of your current project.
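On step 1: besides duplicating the plan in the RQM UI, the copy can be scripted over the Reportable REST API. This is a rough sketch, not our production tooling; the host, project alias, external IDs, and basic authentication are placeholder assumptions, and the XML element names should be checked against what your server actually returns:

    import requests
    import xml.etree.ElementTree as ET

    BASE = ("https://clm.example.com:9443/qm/service/"
            "com.ibm.rqm.integration.service.IIntegrationService/resources/MyProject")

    session = requests.Session()
    session.auth = ("tester", "password")   # placeholder credentials
    session.verify = False

    # Fetch the previous release's plan (this external ID is hypothetical).
    plan = ET.fromstring(
        session.get(BASE + "/testplan/urn:com.ibm.rqm:testplan:release-2.0").text)

    # Retitle the copy; match on local name, since the namespace prefix of the
    # title element can differ between RQM versions.
    title = next(el for el in plan.iter() if el.tag.endswith("}title"))
    title.text = "3.0 Test Plan"

    # Drop the server-assigned web ID so the server mints a fresh one.
    for el in list(plan):
        if el.tag.endswith("}webId"):
            plan.remove(el)

    # PUT under a new external ID creates the copy (expect 201 Created).
    resp = session.put(BASE + "/testplan/urn:com.ibm.rqm:testplan:release-3.0",
                       data=ET.tostring(plan),
                       headers={"Content-Type": "application/xml"})
    print(resp.status_code)

Sprint sections survive the copy, so deleting the stale ones (step 2) would still be a manual or follow-up step.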
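On step 5: TCERs are exposed as executionworkitem resources in the same API, so a sprint's bucket can be bulk-generated by pairing each chosen test case with the release plan. Again a hedged sketch; the hrefs, titles, and external-ID scheme below are invented for illustration:

    import requests

    BASE = ("https://clm.example.com:9443/qm/service/"
            "com.ibm.rqm.integration.service.IIntegrationService/resources/MyProject")
    QM = "http://jazz.net/xmlns/alm/qm/v0.1/"
    DC = "http://purl.org/dc/elements/1.1/"

    session = requests.Session()
    session.auth = ("tester", "password")   # placeholder credentials
    session.verify = False

    plan_href = BASE + "/testplan/urn:com.ibm.rqm:testplan:release-3.0"
    case_hrefs = [
        BASE + "/testcase/urn:com.ibm.rqm:testcase:101",  # new functionality
        BASE + "/testcase/urn:com.ibm.rqm:testcase:17",   # 1.x regression pick
    ]

    for i, case_href in enumerate(case_hrefs, start=1):
        # Minimal executionworkitem body: a title plus testcase and testplan refs.
        body = f"""<ns2:executionworkitem xmlns:ns2="{QM}" xmlns:ns1="{DC}">
          <ns1:title>Sprint 1 TCER {i}</ns1:title>
          <ns2:testcase href="{case_href}"/>
          <ns2:testplan href="{plan_href}"/>
        </ns2:executionworkitem>"""
        resp = session.put(
            BASE + f"/executionworkitem/urn:com.ibm.rqm:executionworkitem:sprint1-{i}",
            data=body.encode(),
            headers={"Content-Type": "application/xml"})
        print(case_href, resp.status_code)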
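And on 7(b): the numbers behind the execution status widget can be reproduced offline by tallying executionresult resources by state. One more sketch under the same placeholder assumptions; note the Atom feed is paged, so a real script would also follow the rel="next" links:

    from collections import Counter
    import requests
    import xml.etree.ElementTree as ET

    BASE = ("https://clm.example.com:9443/qm/service/"
            "com.ibm.rqm.integration.service.IIntegrationService/resources/MyProject")
    NS = {"atom": "http://www.w3.org/2005/Atom",
          "qm": "http://jazz.net/xmlns/alm/qm/v0.1/"}

    session = requests.Session()
    session.auth = ("tester", "password")   # placeholder credentials
    session.verify = False

    counts = Counter()
    feed = ET.fromstring(session.get(BASE + "/executionresult",
                                     params={"abbreviate": "false"}).text)
    for entry in feed.findall("atom:entry", NS):
        state = entry.findtext(".//qm:state", namespaces=NS)
        if state:
            # e.g. 'com.ibm.rqm.execution.common.state.passed' -> 'passed'
            counts[state.rsplit(".", 1)[-1]] += 1

    for state, n in counts.most_common():
        print(f"{state}: {n}")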

Hope this helps,
Krupa Gunhalkar
