Best practices for which level to add test cases


David Landry (2347) | asked Sep 08 '14, 12:35 p.m.
edited Sep 08 '14, 12:36 p.m.
What are the best practices for where to create your execution records and add your test cases?

As far as I can tell (and I may be wrong), the summary reporting only happens at the level where test cases are added. That is, if I create a Master Test Plan with Child Test Plans, with Test Suites, with Test Cases, and create execution records at the Test Case level, then since the Test Suites are the only artifacts that actually contain test cases, the summary reporting only shows up at the Test Suite level.

Note: By Summary Reporting I mean the percentage bars at the top right corner of the dashboards.

Where is the best place to add your test cases and create your test execution records? Are there best practices for this, and if so, where is the documentation with this information?

What I would like to do is start by creating test cases, adding a test script to each case, and then creating the environment-based test execution records. Later on, I would group the test cases at the Test Suite level based on a feature function, group the suites into Child Test Plans based on features, and group the Child Test Plans into Master Test Plans based on test/iteration/release/etc. But I'm having issues with the dashboard summary reporting not being rolled up all the way. I assume the level at which the test cases are added is the limiting factor here?

Is there an RQM resource at the IBM 8200 Warden Markham (Toronto) labs whose brain I can pick?

Accepted answer


John Nason (2.4k1012) | answered Sep 10 '14, 11:16 a.m.
FORUM MODERATOR / JAZZ DEVELOPER
Hi David,
I strongly suggest trying out the video series on understanding Test Case Execution Records: https://www.youtube.com/watch?v=pJEeTqWWTrE

The Test Plan establishes the common goals for testing and the timeline to do your testing.
The Test Case represents the "what" of testing - what is being validated?
The Test Script captures specific details of how to do the "what" of the Test Case, or is an automation of the Test Case.
The Test Suite is a grouping and/or ordering mechanism. As in, these 10 test cases need to be run in order to realize a scenario, or represent a logical unit, e.g. "the regression bucket". Suites are frequently "abused" as a planning mechanism when the test plan is actually better suited, or as an organization mechanism when categories on the test case are better suited. Don't fall into that trap.
The Test Case Execution Record is a planned execution of a Test Case that assigns the "who", "when", and "where" (environment).
The Test Suite Execution Record is a planned execution of a Test Suite, similar to the TCER above. In fact, the act of planning a Test Suite Execution Record creates all of the necessary Test Case Execution Records for the cases in the suite.
Master Test Plans are an aggregation concept to "pull in" multiple child test plans into a unified testing effort. E.g., you have 3 components in a system, each as a child test plan, and you can report on the overall health of the system via the master test plan.
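The suite-to-TCER relationship described above can be sketched as a toy model (illustrative Python only; the class names, field names, and sample data are assumptions, not RQM's actual schema or API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCase:
    name: str

@dataclass
class TestSuite:
    name: str
    cases: list  # ordered: a suite can define execution order

def plan_suite_execution(suite, environment, tester):
    """Planning a Test Suite Execution Record creates one TCER per case."""
    return [{"case": c.name, "env": environment, "tester": tester}
            for c in suite.cases]

# Hypothetical example: a 2-case regression suite planned for one environment.
regression = TestSuite("Regression", [TestCase("Login"), TestCase("Checkout")])
tcers = plan_suite_execution(regression, "Win7/Firefox", "david")
print(len(tcers))  # 2: one TCER per case in the suite
```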

All reporting stems from data linked to a test plan, and planned for a particular testing phase in the test plan's timeline. This is done through the results of your test execution records. As a simple example, I have a "Product 1.0" test plan with 2 sprints defined, and that contains 4 test cases. I want to execute 2 test cases in the first sprint, then all 4 test cases in the second sprint. This results in 6 test case execution records associated with this plan. There are 4 test cases, but 6 planned executions of those cases.
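The arithmetic in that example can be checked with a few lines of illustrative Python (the case and sprint names are hypothetical, just mirroring the scenario):

```python
# "Product 1.0" plan: 4 test cases, 2 sprints.
test_cases = ["TC1", "TC2", "TC3", "TC4"]

# Sprint 1 runs 2 of the cases; Sprint 2 runs all 4.
planned = {"Sprint 1": test_cases[:2], "Sprint 2": test_cases}

# One TCER per (case, sprint) pairing that is planned.
tcers = [(case, sprint) for sprint, cases in planned.items() for case in cases]
print(len(test_cases), len(tcers))  # 4 test cases, 6 planned executions
```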

For your specific scenario:
Rather than using Child Test Plans for features, I'd categorize your test cases appropriately based on feature. Then you can run "by category" reports to report on features.
Create your test plans and timelines before running tests. If you create and run "orphan" test cases you'll have no way to unify these results in a cohesive way.
Don't use suites to group by feature function. Again, you're better served with categories I think.
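A "by category" report of the kind suggested above amounts to grouping execution results by the feature category set on each test case. A minimal sketch (the verdicts, case IDs, and category names are made up for illustration and don't come from any real RQM report):

```python
from collections import defaultdict

# (test case, feature category, latest execution verdict) tuples.
results = [
    ("TC1", "Search", "passed"),
    ("TC2", "Search", "failed"),
    ("TC3", "Checkout", "passed"),
]

by_feature = defaultdict(lambda: [0, 0])  # feature -> [passed, total]
for case, feature, verdict in results:
    by_feature[feature][1] += 1
    if verdict == "passed":
        by_feature[feature][0] += 1

for feature, (passed, total) in sorted(by_feature.items()):
    print(f"{feature}: {passed}/{total} passed")
# Checkout: 1/1 passed
# Search: 1/2 passed
```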
David Landry selected this answer as the correct answer

One other answer



Paul Slauenwhite (8.4k12) | answered Sep 09 '14, 11:56 a.m.
FORUM MODERATOR / JAZZ DEVELOPER
Hi David,

Sounds like you need to start from the beginning.  I would suggest the following:

http://www.ibm.com/developerworks/rational/library/08/1230_kelly (a little dated but a good starting point)

https://jazz.net/help-dev/clm/topic/com.ibm.rational.test.qm.doc/topics/c_workflows_guidance.html

https://jazz.net/help-dev/clm/topic/com.ibm.rational.test.qm.doc/topics/c_node_tutorials.html

You can continue asking questions in this post.

Comments
David Landry commented Sep 10 '14, 9:06 a.m.

Thanks Paul, but I can do all that. 


My question was about where to add the test cases, where to create execution records, and why I would do so at a certain level.

Let's assume I already have a test plan (GA Release), child plans (one per product feature), test suites (one per feature function), and test cases (one per function scenario).

Now, it seems I am able to add test cases and/or generate execution records at all of those levels.

In what scenario would I want to add test cases and/or generate executions at a specific level, and why?

Are there any resources that explain why you would want to add test cases and/or create execution at the MTP level vs the CTP vs the Suite level?
