Jazz Community Forum

Best practice for setting up test plans, test env, and TERs?

Another newbie question.

I am planning on using RQM to manage manual and automated tests (RFT/RPT) for a project. This project has several different customers and has a new release (build) about once a quarter.


In preparation, I created approximately 250 requirements, 250 test cases, and 200 manual test scripts. The test cases map 1-1 with each requirement and the test scripts are re-used among the test cases. There are 7 roles for the different users so some scripts are reusable between the roles.


I created 1 test plan that identifies 1 customer and 1 release baseline (the customer and baseline are identified in the name, along with the release number, e.g. 1.1.1). I am also setting the Test Phase to System Acceptance Test. I created a new category type called "Customer" and added the different customers there.

I started with all the requirements and test cases in the 1st test plan (core). I then created 2 test environments in that plan. I also created 4 test environments globally. I finally put 1 of the 2 test environments from the plan in each test case. I am not scheduling any lab resources - do not need to.


Now for the questions. I am still really confused about how to create test environments, when I should do it, and where I should do it.

- Should I have the test environments all defined globally or only define them in each test plan?
- If a test case is used in x plans, should I create x TERs (1 per plan)?
- How do I track the completion of a given test plan's testing (what reports are best used to show what is going on per plan)?
- Should each Plan's Test Environments be unique for tracking that plan's execution?
- Some test cases have multiple scripts. How do I handle executing each script manually in relation to the TER? Do I keep reusing the TER? If there are 5 scripts, how do I set it up so that, after the last script, I can still indicate any issues that may have been found in script 2? How will this impact the reporting/tracking if I keep adding TERs?
- I am adding automated scripts to each test case. How is that handled with TERs? Suppose I create a TER for the manual script and a TER for the automated one. If I get status based on TER count, the numbers will be way off, because if I run the automated script, I will not run the manual one. Therefore, I can never reach 100% test coverage. However, if I run both manual and automated in that case, the count may indicate more test cases executed than actually expected.

I am trying to see what to do, but the documentation in that area is pretty simplistic. Maybe I have missed a section, but I do not see the best practices for handling multiple releases/customers combined with manual/automated test scripts contained in the same test case (only one or the other would be executed) and tracking results.

Thanks for any help you can give me.




3 answers

Hello,

I didn't quite follow the description of your setup, but I might have some answers for your questions anyway:

- Should I have the test environments all defined globally or only define them in each test plan?


- If a test case is used in x plans, should I create x TERs (1 per plan)?

-> You should consider creating the TERs in the test case. You want to create one TER per test plan, since the TER links the test execution result to the test plan.


- How do I track the completion of a given test plan's testing (what reports are best used to show what is going on per plan)?

-> Most of the reports are capable of filtering on test plans. A "Scorecard" report will give you good insight into how many test cases have been executed, how many passed, how many failed, and how many have never been executed.
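To make the per-plan tally concrete, here is a minimal sketch of what a Scorecard-style report computes. This is a hypothetical in-memory model, not the RQM API; the plan names, TER fields, and the `scorecard` function are all illustrative assumptions.

```python
from collections import Counter

# Hypothetical model: each TER links a test case to one test plan and
# carries the verdict of its most recent execution (None = never run).
ters = [
    {"plan": "Customer-A 1.1.1", "case": "TC-001", "verdict": "passed"},
    {"plan": "Customer-A 1.1.1", "case": "TC-002", "verdict": "failed"},
    {"plan": "Customer-A 1.1.1", "case": "TC-003", "verdict": None},
    {"plan": "Customer-B 1.1.1", "case": "TC-001", "verdict": "passed"},
]

def scorecard(ters, plan):
    """Tally passed / failed / never-run TERs for a single test plan."""
    return dict(Counter(
        t["verdict"] or "never run" for t in ters if t["plan"] == plan
    ))

print(scorecard(ters, "Customer-A 1.1.1"))
# {'passed': 1, 'failed': 1, 'never run': 1}
```

Because every TER belongs to exactly one plan, filtering by plan gives each customer/release its own clean execution status.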

- Should each Plan's Test Environments be unique for tracking that plan's execution?

-> Nope. See above.

- Some test cases have multiple scripts. How do I handle executing each script manually in relation to the TER? Do I keep reusing the TER? If there are 5 scripts, how do I set it up so that, after the last script, I can still indicate any issues that may have been found in script 2? How will this impact the reporting/tracking if I keep adding TERs?

-> Execution status is tracked by the last execution result of the TER. If a TER is reused for five scripts, you'll miss the execution results of the first four scripts. So you probably want to have one TER per test script. But please note: the Scorecard reports the number of test cases and the number of TERs, so it is a bit confusing and hard to tell how many test cases haven't been executed yet. You may therefore want to consider having one test case with one test script and one TER.
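The "last result wins" behavior described above can be sketched as follows. This is an illustrative model under the stated assumption that a TER reports only its most recent verdict, not a representation of RQM internals.

```python
# Reusing a single TER: each run overwrites the previous verdict,
# so the failure found in script 2 disappears from the report.
shared_ter = {"verdict": None}
runs = ["passed", "failed", "passed", "passed", "passed"]  # script 2 failed
for verdict in runs:
    shared_ter["verdict"] = verdict
print(shared_ter["verdict"])   # only the last run survives: 'passed'

# One TER per script: the failure in script 2 stays visible.
per_script_ters = [{"script": i + 1, "verdict": v} for i, v in enumerate(runs)]
failures = [t["script"] for t in per_script_ters if t["verdict"] == "failed"]
print(failures)                # [2]
```

This is exactly the trade-off above: one TER per script preserves every verdict, at the cost of TER counts no longer matching test-case counts on the Scorecard.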

- I am adding automated scripts to each test case. How is that handled with TERs? Suppose I create a TER for the manual script and a TER for the automated one. If I get status based on TER count, the numbers will be way off, because if I run the automated script, I will not run the manual one. Therefore, I can never reach 100% test coverage. However, if I run both manual and automated in that case, the count may indicate more test cases executed than actually expected.

-> If you are not going to run a test script manually why would you create a TER?

Hope that helps.

Kind Regards
Joerg



Yeah, I think you only need a separate TER if you want to report on it as its own planned test execution. So in the case where you have one manual script and one automated script, you would likely just use the same TER, because they seem to cover the same test effort; it's just that one is automated. Also, as you said, after the script has been automated you wouldn't run it manually any more, so there is no need to skew your reporting with something you don't plan to run.

However, if your test case contains multiple scripts and you plan to run all of them and want to see this represented in your reporting, then you would need multiple TERs. You would just set the default script of each TER accordingly. Also, by setting the default script of the TER (as opposed to selecting it in the run test case dialog), you will avoid the case where the results of one script override the results of another script.
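The default-script advice above can be sketched as a keying problem. This is a hypothetical model (not RQM's API): once each planned script has its own TER, results are keyed by TER, so one script's run can never override another's. The TER names are made up for illustration.

```python
# Hypothetical sketch: results are recorded per TER, and each planned
# script gets its own TER with its own default script.
results = {}  # ter_id -> verdict of the most recent run of that TER

def execute(ter_id, verdict):
    """Record a run against a specific TER."""
    results[ter_id] = verdict

# One TER per planned script: both verdicts remain visible in reports.
execute("TER-manual-login", "failed")
execute("TER-automated-login", "passed")
print(results)
# {'TER-manual-login': 'failed', 'TER-automated-login': 'passed'}
```

With a single shared TER, both runs would land on the same key and the second verdict would silently replace the first, which is the override case the answer warns about.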



Thank you for your response. It helped quite a bit. I have other questions now and will be putting those in a separate thread.


Question details

Question asked: Jan 28 '11, 4:34 p.m.

Question was seen: 6,330 times

Last updated: Jan 28 '11, 4:34 p.m.
