Hi Mahari,
Your approach seems good. This is how we have organized our test assets.
1) We create a fresh Test Plan for every release by copying the previous release's Test Plan. Copying saves us the time of manually putting all the regression data from the previous release back into the new plan, item by item. (A rough scripted version of this copy step is sketched at the end of this reply.)
2) Each test plan contains Sprint sections. Every time we copy a plan, we delete the Sprint sections carried over from the previous release so that only the delta for the current release gets added.
3) Sprint sections generally contain the test cases for the new functionality currently in progress, along with the plan items linked to them.
4) We then start adding the newly created test cases to the new plan, which also contains all the regression test cases/test suites.
5) When we generate TCERs for each sprint, they are a combination of the new functionality test cases, the regression test cases from previous releases, and the areas that might regress because of the new functionality. Adding regression test cases to this bucket is done purely on the test lead's judgment, based on analyzing the features coming into the new release and what could be affected from the previous release. We also have a set of regression test cases that needs to be run every sprint regardless of its relevance to the new features.
6) Once the test plan is drafted, it is reviewed by the development leads and their inputs are incorporated.
7) Now comes the reporting part.
a) First, we have a dashboard for the project area where all test assets can be reported on.
b) This dashboard typically contains widgets such as:
- TCERs owned by each tester.
- Execution Status, depicting total execution points and the passed, failed, and blocked points, and so on. (A small script that produces a similar tally is sketched just after this list.)
- Defects logged during the current sprint.
- A TCER widget depicting all the In Progress and Not Run TCERs.
- A metrics graph for each individual tester showing the current status of the execution.
8) We can also add any custom report created in JRS (Jazz Reporting Service) to the test dashboard as a widget.
9) Widgets depicting blocking defects that need tracking can also be added.
10) We can also add a widget for the regression-only Test Suites and a separate widget for the current release.
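
To give a feel for what the Execution Status widget summarizes, below is a minimal sketch that tallies execution results by state through the RQM Reportable REST API. This is an illustration, not our actual tooling: the host, project alias, and credentials are placeholders, basic authentication has to be enabled on your server, the state URIs differ across RQM versions, and feed paging is ignored for brevity.

```python
# Minimal sketch: count RQM execution results per state (passed/failed/...).
# Placeholder host, project alias, and credentials; adjust for your server.
from collections import Counter
import xml.etree.ElementTree as ET
import requests

BASE = ("https://rqm.example.com:9443/qm/service/"
        "com.ibm.rqm.integration.service.IIntegrationService/resources")
PROJECT = "Sample+Project"        # URL-encoded project area alias (placeholder)
AUTH = ("tester", "password")     # assumes basic auth is enabled

# abbreviate=false asks the feed to inline the full resource XML per entry.
# Paging (the Atom rel="next" link) is ignored here to keep the sketch short.
feed = requests.get(f"{BASE}/{PROJECT}/executionresult",
                    params={"abbreviate": "false"},
                    auth=AUTH, verify=False)   # test servers often self-sign
feed.raise_for_status()

counts = Counter()
for elem in ET.fromstring(feed.content).iter():
    # Match <state> elements regardless of namespace prefix.
    if elem.tag.rsplit("}", 1)[-1] == "state" and elem.text:
        # e.g. "com.ibm.rqm.execution.common.state.passed" -> "passed"
        counts[elem.text.rsplit(".", 1)[-1]] += 1

for state, n in counts.most_common():
    print(f"{state:12s} {n}")
```

In practice you would also page through the feed and scope the query to a single test plan rather than tallying the whole project area.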
The list of possible widgets is long; it all depends on the needs of your current project.
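
And coming back to step 1: we normally do the copy through the Duplicate action in the RQM UI, but if you wanted to script it, something along these lines could work against the same Reportable REST API. Treat it as a sketch under assumptions: the IDs, titles, and the dc: namespace prefix are placeholders that will differ in your repository, and PUT-to-create must be permitted for your user.

```python
# Sketch of step 1: copy last release's test plan by GET-ting its XML and
# PUT-ting it back under a new external ID. All IDs/titles are placeholders.
import requests

BASE = ("https://rqm.example.com:9443/qm/service/"
        "com.ibm.rqm.integration.service.IIntegrationService/resources")
PROJECT = "Sample+Project"        # URL-encoded project area alias (placeholder)
AUTH = ("tester", "password")     # assumes basic auth is enabled

# 1. Fetch the previous release's test plan as XML; links to its test cases
#    and test suites are part of the payload and travel with the copy.
old = requests.get(f"{BASE}/{PROJECT}/testplan/urn:com.ibm.rqm:testplan:42",
                   auth=AUTH, verify=False)
old.raise_for_status()
xml = old.text

# 2. Retitle it for the new release. A real script would parse the XML; a
#    naive replace is enough to illustrate (the title element's namespace
#    prefix in your payload may well differ from "dc:").
xml = xml.replace("<dc:title>Release 1.0 Test Plan</dc:title>",
                  "<dc:title>Release 2.0 Test Plan</dc:title>")

# 3. PUT the modified XML to a new external ID, which creates the copy.
new = requests.put(f"{BASE}/{PROJECT}/testplan/release-2.0-plan",
                   data=xml.encode("utf-8"),
                   headers={"Content-Type": "application/xml"},
                   auth=AUTH, verify=False)
new.raise_for_status()
print("Copy created, HTTP status", new.status_code)
```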
Hope this helps,
Krupa Gunhalkar