IBM Rational Quality Manager
Quality management · Manual testing · Continuous improvement
Rational Quality Manager 6.0.1
Rational Quality Manager 6.0.1 New & Noteworthy
Rational Quality Manager is an integral part of the Rational solution for Collaborative Lifecycle Management (CLM). For new and noteworthy information about other CLM applications, see these pages:
- Jazz Foundation (Jazz Team Server) and Global Configuration Management 6.0.1
- Change and Configuration Management 6.0.1
- Requirements Management 6.0.1
- Jazz Reporting Service 6.0.1
- Rational Engineering Lifecycle Manager 6.0.1
For a list of changes to the installation and upgrade process, see CLM 6.0.1 Installation and Upgrade Notes.
To activate the configuration management capabilities that were added in CLM 6.0, Rational DOORS Next Generation and Rational Quality Manager require a no-cost activation key. For more information, see Enabling configuration management in CLM 6.0.1 applications.
New in Rational Quality Manager 6.0.1
- Usability improvements
- Progress indicator for exporting to PDF refreshes automatically
- Progress indicator for Reconcile Requirements wizard
- New sections are shown when test cases are executed without a script
- Test cases can be opened in a new tab when test cases are added to test plans
- Attachments can be added to comments in formal reviews
- New keyboard shortcuts for navigating between test environment panels
- Pasting images into manual test script steps
- Test environments
- Administration
- Dashboards
- Validity
- Reporting
- Project-area level access control is available for Jazz Reporting Service reports
- New artifacts are available for Jazz Reporting Service reporting
- Category hierarchy in Jazz Reporting Service reports
- Filter on enumeration values in Jazz Reporting Service reports
- New representation for the test environment in Jazz Reporting Service reports
- Specify external value for priority, workflow state, and stable external URI for custom attribute, category type, and category value
- Table views
- Cross-artifact columns and filter options
- Display parent test case custom attributes (test case execution record view and test case result view)
- Display test cases passed, failed, blocked, in progress, inconclusive, and total test cases (test suite execution record view)
- Display host name (test case result view)
- Filter test cases by test scripts
- Filter test cases by test suites
- Filter test scripts by keywords
- Filter test scripts by test cases
- Filter test cases based on the state of advanced test script filters
- "Used keywords" filter and column in test scripts table view
- "Used in scripts" filter and column in keywords table view
- Test plans filter and column in test suites table view
- Filter master test plans in test plans table view
- Test suite execution records filter and column in test case execution records table view
- Requirements column in test case execution records table view
- Requirements column in test case execution results table view
- Ability to filter columns in the "Change column display settings" for the test case execution record view
- Option to filter by all categories or all iterations
- New "Today" and "Yesterday" filters for test artifacts
- PDF and CSV export support for columns in table views
- Requirements
- New Suspected By column in the test case editor
- Suspected By column in the test case table view
- Filter test cases by the Suspected By column
- Bulk updating for Suspected By requirements
- List of Suspected By requirements in the test case editor
- Change suspect state of test case by using a dialog box in the test case editor
- Project areas
- Configuration management
- Rational Quality Manager mobile application
Usability improvements
Progress indicator for exporting to PDF refreshes automatically
When you export data in PDF format, the progress indicator for the operation now automatically refreshes to show the state of the export queue until the operation is complete. You can set an option to change the refresh interval or disable the refresh feature.
Progress indicator for Reconcile Requirements wizard
A progress indicator was added to the Reconcile Requirements wizard to show the loading percentage and the current preprocessing step.
New sections are shown when test cases are executed without a script
When test cases are executed without a script, the following sections from the associated test case are shown on the execution result page:
- Pre-Condition
- Post-Condition
- Expected Results
- Test Description
After the execution result page is saved, these sections are removed from the page.
Test cases can be opened in a new tab when test cases are added to test plans
When you add test cases to test plans by using the Add Test Cases dialog box, you can hover over a test case to see more information about it. Previously, the test case entries were not links, so you could not navigate away from the page. Because you might need more information about a test case to determine whether to add it, the test case entries are now links. However, to prevent mistakenly navigating away from the page, you must right-click the link to follow it, which opens the link in a new tab.
Attachments can be added to comments in formal reviews
You can now add attachments to comments in formal reviews as you can for any other rich-text section by using the Insert Attachment button. Attachments in formal review comments are read-only and cannot be removed from the content.
New keyboard shortcuts for navigating between test environment panels
In the Generate Test Case Execution Records dialog box, on the Generate Test Environments tab, the following new keyboard shortcuts are available for navigating between test environment panels:
Key | Action |
---|---|
Left Arrow | Move to the previous test environment panel. |
Right Arrow | Move to the next test environment panel. |
Up Arrow | Move to the previous test environment in the test environment panel. |
Down Arrow | Move to the next test environment in the test environment panel. |
Home | Move to the first test environment panel. |
End | Move to the last test environment panel. |
Tab | Move to the next focusable area outside of the last test environment panel. |
Pasting images into manual test script steps
Previously, when you pasted an image from the clipboard into a manual test script step using Mozilla Firefox 4.0.1 or later, PNG images were converted to attachments, and other image formats were stored as inline images. Inline images are problematic for Rational Publishing Engine because it does not support them.
For more information, see Work Item 136928: manual script results hold html inline images instead of attachments when copy and pasting screenshots.
In this release, the following image formats are now converted to attachments when they are pasted into manual test script steps: PNG, JPEG, GIF, BMP. For other image formats, use the Capture Screenshot or the Insert Image options. The conversion to attachments is currently supported for Mozilla Firefox.
For support for other browsers, see Work Item 86891: Explore support for Chrome and IE for 'paste into browser' approach for insertion of Image.
Test environments
Editable test environment of execution records
New project properties enable the test environment to be edited
Previously, the test environment could only be edited when it was first created. Now, you can use the Execution Records preferences in the Project Properties to edit the test environment after an execution record is created.
Change the test environment in the execution record editor
After the test execution record preference is turned on, you can modify the test environment in the editor.
Change the test environment in execution record lists
You can also edit the test environment directly in execution record lists.
Change the test environment in bulk in execution record sections
In the test plan editor, in the execution record sections, you can perform a bulk update from the test environment column.
Test environment is optional when you generate test case execution records from a test plan
Specifying the test environment is optional when you manually create a new execution record. Now, specifying the test environment is also optional if you use the wizard to generate execution records from a test plan.
Administration
Ability to specify that execution records cannot run without required associations
For a test case execution record or a test suite execution record to run, you can specify that it must be associated with required attributes such as a test plan, iteration, or test environment. You can select any combination of attributes to restrict the execution records from running without those attributes.
For more information, see Work Item 67050: Process control to only allow execution of a TCER if it has a test plan associated to it.
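The precondition above amounts to a simple presence check over the administrator-selected attributes. The following sketch illustrates the idea; the function, field names, and data shape are illustrative assumptions, not the actual Rational Quality Manager API:

```python
# Hypothetical sketch of the "required associations" precondition: an
# execution record may run only if every attribute the administrator
# marked as required has a value.

REQUIRED_ATTRIBUTES = {"test_plan", "iteration", "test_environment"}

def can_run(execution_record: dict, required: set) -> bool:
    """Return True if the record has a value for every required attribute."""
    return all(execution_record.get(attr) is not None for attr in required)

record = {"test_plan": "Plan A", "iteration": "Sprint 1", "test_environment": None}
print(can_run(record, REQUIRED_ATTRIBUTES))  # False: test_environment is missing
```

Any combination of attributes can be required, so the check is driven by the configured set rather than hard-coded fields.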
Limit the maximum size for attachments
You can add attachments to test artifacts: for example, images and attachments in rich-text sections, test case attachments, and test data files. Administrators can now limit the maximum size for attachments to help reduce the size of the Rational Quality Manager repository.
The Test Component > com.ibm.rqm.planning.service.integration.AttachmentUtilService > Maximum Attachment Size (MB) advanced server configuration property specifies the maximum size for attachments in MB. By default, the maximum size for attachments is 50 MB.
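As a rough illustration of the limit, a client could pre-check a file's size against the configured maximum before uploading. This sketch is illustrative only; the actual enforcement happens on the server:

```python
import os
import tempfile

# Client-side sketch of the attachment-size limit: reject files larger
# than the configured maximum (50 MB by default). The function name is
# an assumption for this example, not a product API.

MAX_ATTACHMENT_MB = 50

def within_limit(path: str, limit_mb: int = MAX_ATTACHMENT_MB) -> bool:
    return os.path.getsize(path) <= limit_mb * 1024 * 1024

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)  # a 1 KB sample attachment
print(within_limit(f.name))  # True: well under the 50 MB default
```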
Prevent blocked test case execution records from running
A new precondition named Disallow Execution of Test Case Execution Record with Blocking Status was added to the project area properties. This precondition is enforced in the UI only. If the precondition is set, Rational Quality Manager does not allow a test case execution record to run from the UI if it has a blocking status. The list of defects that block the execution record is shown.
Also, blocking defects cannot be added to test case execution records that have a Passed execution result. You must change the execution result and manage the blocking defects in the execution result editor to block the corresponding test case execution record.
Dashboards
New Test Matrix dashboard widget
A new Test Matrix dashboard widget is available in the catalog. You can use this widget to group the results of any saved test artifact query by using two attributes and to show the results in a graphical format, such as in stacked bar charts, stacked column charts, and tables.
New Execution Status by Points dashboard widget
The Execution Status by Points dashboard widget is now available to show execution status by points for test case execution records and test suite execution records. This is a live widget and always shows the latest information. It does not require the data warehouse. Points are calculated based on the last result of the execution record.
Users can see the execution status based on the selected grouping attributes. Example grouping attributes include:
- Priority
- Owner
- Test Environment
- Test Plan
- Iteration
- Build for Last Result
- Last Result
- Categories
- Parent categories
Available report types include:
- Table
- Points
- Percentage
- Stacked Bar Chart
- Stacked Column Chart
The stacked bars, stacked columns, and table cells are clickable; clicking one shows the filtered records.
For more information, see Work Item 138704: As a User, I want to see the execution status of TCER and TSER by points in dashboard.
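The grouping the widget performs can be sketched as summing points per last-result verdict within each group. The record fields and point values below are invented for the example; the product's actual data model and point weighting are not shown here:

```python
from collections import defaultdict

# Illustrative sketch: each execution record carries a point value and
# the verdict of its last result; totals are accumulated per verdict
# within each value of the chosen grouping attribute (e.g. Owner).

records = [
    {"owner": "pat", "points": 3, "last_result": "Passed"},
    {"owner": "pat", "points": 2, "last_result": "Failed"},
    {"owner": "sam", "points": 5, "last_result": "Passed"},
]

def points_by_group(records, group_attr):
    totals = defaultdict(lambda: defaultdict(int))
    for rec in records:
        totals[rec[group_attr]][rec["last_result"]] += rec["points"]
    return {group: dict(verdicts) for group, verdicts in totals.items()}

print(points_by_group(records, "owner"))
# {'pat': {'Passed': 3, 'Failed': 2}, 'sam': {'Passed': 5}}
```

The same totals can then be rendered as a table, percentages, or stacked charts, which matches the report types listed above.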
List of alternate widgets for BIRT report widgets
BIRT report widgets are not supported for project areas that have the configuration management capabilities enabled. The following table shows the alternate, existing widgets to use instead of BIRT report widgets in a project area that is enabled for configuration management.
BIRT report widget | Alternate widget |
---|---|
Execution and defects by owner | Test Artifact |
Execution by Test Schedule (Live) | Execution Status By Points |
Execution Status by Iteration using TCER Count (Live) | Test Matrix |
Execution Status by Iteration using Weight (Live) | Execution Status By Points |
Execution Status by Owner using TCER Count (Live) | Test Matrix |
Execution Status by Owner using Weight (Live) | Execution Status By Points |
Execution Status by Test Case Category using TCER Count (Live) | Test Matrix |
Execution Status by Test Case Category using Weight (Live) | Execution Status By Points |
Execution Status by Test Suite using TSER Count (Live) | Test Matrix |
Execution Status by Test Suite using Weight (Live) | Execution Status By Points |
Execution Status using TCER Count (Live) | Test Matrix |
Execution Status using Weight (Live) | Execution Status By Points |
TCER Listing (Live) | Test Artifact |
TCER Status Counts (Live) | Execution Status By Points |
Tester Report using TCER Count | Test Matrix |
Tester Report using Weight | Execution Status By Points |
Test cases (Live) | Test Artifact |
Test cases by Development State (Live) | Test Artifact |
Test cases by Team (Live) | Test Artifact |
Test cases by Test Environment (Live) | Test Artifact |
Test Cases Impacted by Defects | Test Artifact |
Test Case Coverage by TCER (Live) | Test Matrix |
Test Case Coverage by Test Script (Live) | Test Artifact |
Test Case Review | Test Artifact |
Test Scripts by Development State (Live) | Test Artifact |
Test Suites by Development State (Live) | Test Artifact |
TSER Listing (Live) | Test Artifact |
TSER Status by Owner using TSER Count (Live) | Test Matrix |
TSER Status by Owner using Weight (Live) | Execution Status By Points |
TSER Status using Weight (Live) | Execution Status By Points |
Expanded records are included in the Test Case Coverage by TCER report
The Test Case Coverage by TCER report now includes information for test cases that are directly linked to the test plans that are selected as parameters. If a test case does not have an associated test case execution record that meets the criteria of the selected parameters, no bar is displayed on the chart; however, hovering over the empty space displays a "0" value, and you can drill down to the test case itself. If the test case has an associated test case execution record, the report behaves as usual: the total number is displayed on the bar chart, and drilling down displays the TCER Listing (Live) report.
Validity
For more information, see Work Item 34952: [QM] validity service.
Link validity
When configuration management capabilities are not enabled, Rational Quality Manager can specify whether a requirement is suspect; that is, whether it has changed since the previous requirement reconciliation. When the configuration management capabilities are enabled, Rational Quality Manager can also specify the link validity, which monitors changes in the requirement. The link validity feature applies in these situations:
- Available for project areas that are enabled for global configurations
- Applies to the link between a requirement and a test case
Link validity has three states:
- Valid: The information in the artifacts at both ends of a link is logically consistent.
- Invalid: The information in an artifact on one side of a link is not logically consistent with the information on the other side.
- Suspect: A link that is not assigned as either valid or invalid.
You can change the link validity to any of the three states. If you change the associated requirement, the link becomes suspect.
Enabling link validity
A new advanced server property named Link Validity enables or disables the link validity feature. By default, the property is enabled.
You can enable or disable link validity at the project level by using the Manage Project Properties settings. By default, the project-level property is disabled.
Link validity for test cases
The Requirement Links section of the test case editor has a new column named Link Validity. You can filter, sort, or group requirements based on link validity. Inline editing and bulk editing options are also available. To update the link validity, a user must have the following permission assigned: Assert Link Validity > Assert Validity.
In the test case table, a Link Validity column shows the rolled-up status for links for all associated requirements of a test case. The link validity for a test case is based on this logic:
- Valid: All associated requirements links are valid.
- Invalid: At least one associated requirements link is invalid.
- Suspect: At least one associated requirements link is suspect, and not invalid.
You can change the link validity of test cases by using the bulk update operation. To perform a bulk update, the Validate Requirements column must be displayed in the test case table.
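The rolled-up status described above is a simple precedence rule: Invalid outranks Suspect, which outranks Valid. A minimal sketch, with illustrative names and string values:

```python
# Compute a test case's rolled-up Link Validity from the states of its
# requirement links: Invalid if any link is invalid, Suspect if any link
# is suspect (and none invalid), otherwise Valid.

def rollup_link_validity(link_states):
    """link_states: iterable of 'valid', 'invalid', or 'suspect'."""
    states = set(link_states)
    if "invalid" in states:
        return "invalid"
    if "suspect" in states:
        return "suspect"
    return "valid"

print(rollup_link_validity(["valid", "suspect", "valid"]))  # suspect
print(rollup_link_validity(["valid", "valid"]))             # valid
```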
Bidirectional link validity
Bidirectional link validity is now supported: if you change the properties of a test case, the link between the test case and the requirement is marked as suspect. These test case properties affect link validity:
- Item ID
- Test script addition or removal
- Archived status
- Name
- Description
- Weight
- Team area
- Template
- Categories
- Custom attributes
- Sections
- Trigger
- Activity
- Execution variables
Sort, search, and filter by validity summary
In previous releases, the browse test case view did not support filtering, sorting, or grouping by Validity Summary. Now, the Requirement Collection Links section of the test case has a new view named Validity. You can use the view to see which test cases are suspect without performing a reconcile operation. Currently, the Validity view supports only 500 test cases. You can increase that number by using the advanced server property named Maximum number of test cases in the validity view, but increasing the value beyond 500 might affect the performance of the view.
Reporting
Project-area level access control is available for Jazz Reporting Service reports
In Lifecycle Query Engine, access context is now disabled by default. When access context is enabled, report data is displayed only to users who have read access to the project area; users without read access see no data in the report.
To enable access context in Lifecycle Query Engine:
1. Open the Query Service page.
2. Clear Ignore Data Source Access Controls.
3. Select Use Access Control 2.0.
4. Restart the Lifecycle Query Engine server.
New artifacts are available for Jazz Reporting Service reports
In the Jazz Reporting Service, when choosing an artifact, three more types are available in the Quality Management category:
- Test environment
- Test data
- Keyword
The traceability from other test artifacts to the new types is also available. For example, you can view the traceability of a test script step with a linked keyword, or of a test execution record with executions on a test environment.
Another new artifact type, named TestScriptStepResult, is available for Jazz Reporting Service reports. This artifact type includes attributes for these aspects, and others, of test script step results:
- Expected result
- Actual result
- Verdict
- Tested by
- Start time
- End time
- Comment
- Compare
This artifact type links to other artifact types, including relatedChangeRequest and includedInTestResult. For example, a Jazz Reporting Service report can include values for the test case, test result, and test script step result.
For more information, see Work Item 138562: As report user, I want to see Test Script Step Result in my report.
Category hierarchy in Jazz Reporting Service reports
Report Builder can now filter to show the tree hierarchy for the category. For more information, see Work Item 138259: As report user, I want to see category hierarchy prompt in the report.
Filter on enumeration values in Jazz Reporting Service reports
Report Builder can now filter to show the enumeration values for properties. Currently, the following properties are supported:
- rqm_qm:verdict
- rqm_qm:hasWorkflowState
- rqm_qm:hasPriority
- rqm_qm:testcaseWorkflowState
- rqm_qm:testplanWorkflowState
- rqm_qm:testscriptWorkflowState
- rqm_qm:testsuiteWorkflowState
- rqm_qm:scriptType
- rqm_qm:scriptStepType
- process:iteration
For more information, see Work Item 136789: Enumeration types and related values should be defined in a vocabulary using stable URIs.
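Conceptually, an enumeration filter keeps only the artifacts whose property value matches one of the selected enumeration values. The sketch below uses the rqm_qm:verdict property name from the list above, but the record structure and values are invented for the example:

```python
# Hedged illustration of filtering on an enumeration-valued property:
# keep only the records whose property value is in the selected set.

results = [
    {"rqm_qm:verdict": "passed", "id": "TCR-1"},
    {"rqm_qm:verdict": "failed", "id": "TCR-2"},
    {"rqm_qm:verdict": "passed", "id": "TCR-3"},
]

def filter_by_enum(records, prop, allowed_values):
    """Return the records whose value for prop is one of allowed_values."""
    return [r for r in records if r.get(prop) in allowed_values]

print([r["id"] for r in filter_by_enum(results, "rqm_qm:verdict", {"passed"})])
# ['TCR-1', 'TCR-3']
```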
New representation for the test environment in Jazz Reporting Service reports
The description for a test environment's lab resource can be complex. The RDF representation is now simplified by using the lab resource type and lab resource attribute as the property name, and by defining them in the vocabulary.
For more information, see Work Item 141172: re-design the TestEnvironment and LabResourceDescription's RDF representation.
Specify external value for priority, workflow state, and stable external URI for custom attribute, category type, and category value
Previously, the default values that are provided for priority (High, Medium, Low, Unassigned) and the workflow state (Draft, Under Review, Approved, Retired) were defined by using server-specific and project-specific URIs that prevented cross-project querying and filtering.
Rational Quality Manager now allows you to specify an external value for priority and workflow state. The value can be a stable absolute URI or a non-URI value. If you specify the same value in different projects, Jazz Reporting Service can run cross-project queries against the property.
In addition, you can specify a stable external URI for custom attributes, category types, and category values. The external URI must be in this format:
[scheme:][//authority][path][?query][#fragment]
You can specify a unique external value or URI to identify the attribute across projects. For example, Jazz Reporting Service can use the URI to map similar attributes, which might have different names in different projects, across project areas.
For more information, see Work Item 140704: As a Admin, I want to specify an external URI for the RDF term for a custom attribute/category/category type/workflow state/priority.
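The external URI follows the generic [scheme:][//authority][path][?query][#fragment] form shown above. Python's standard urlparse splits a candidate value into those same components, which is a simple way to sanity-check one; the URI below is an invented example, not a real project value:

```python
from urllib.parse import urlparse

# Split an example external URI into the components named in the format
# string above: scheme, authority (netloc), path, query, and fragment.

uri = "https://example.com/vocab/priority#high"  # illustrative value only
parts = urlparse(uri)
print(parts.scheme)    # https
print(parts.netloc)    # example.com
print(parts.path)      # /vocab/priority
print(parts.fragment)  # high
```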
Table views
Cross-artifact columns and filter options
New columns and filter options were added to several table views and dialog boxes. The new filters can be combined with any other existing filters, for example, the "Filter by priority" filter.
Display parent test case custom attributes (test case execution record view and test case result view)
Use this column to display the custom attributes of the parent test case for a test case execution record or a test case result. You can also sort or filter the test case execution records whose associated test case contains a user-defined custom attribute value.
Display test cases passed, failed, blocked, in progress, inconclusive, and total test cases (test suite execution record view)
Use these columns to display the number of test cases that are passed, failed, blocked, in progress, or inconclusive, and the total number of test cases in a specific test suite execution record.
Display host name (test case result view)
Use this column to display the associated host name for a specific test case execution result. You can also filter to display the test case execution results that were run on a user-specified host name value.
Filter test cases by test scripts
Use this filter to show test cases that contain a specific test script, a group of test scripts, any test scripts, or no test scripts.
Filter test cases by test suites
Use this filter to show test cases that are contained in a specific test suite, a group of test suites, or no test suites.
Filter test scripts by keywords
Use this filter to show test scripts that contain a specific keyword, a group of keywords, or no keywords.
Filter test scripts by test cases
Use this filter to show test scripts that are contained by a specific test case, a group of test cases, or no test cases.
Filter test cases based on the state of advanced test script filters
You can now filter test cases that contain test scripts with specific attributes, for example, test script priority, test script state, test script owner, and so on.
"Used keywords" filter and column in test scripts table view
Use this column to display the keywords that are used in a specific test script. You can also filter to display the scripts that use a specific keyword, a group of keywords, any keywords, or no keywords at all.
"Used in scripts" filter and column in keywords table view
Use this column to display the scripts that use a specific keyword. You can also filter to display the keywords that are used by a specific script, a group of scripts, any script, or no scripts at all.
Test plans filter and column in test suites table view
Use this column to display the associated test plans for a specific test suite. You can also filter to display the test suites associated with a specific test plan, a group of test plans, any test plan, or no test plans at all.
Filter master test plans in test plans table view
Use this filter to show the test plans that have one or more associated child test plans.
Test suite execution records filter and column in test case execution records table view
Use this column to display the associated test suite execution records for a specific test case execution record. You can also filter to display the test case execution records that are associated with a specific test suite execution record or a group of test suite execution records.
Requirements column in test case execution records table view
Use this column to display the requirements linked to a test case that is associated with a specific test case execution record. You can also filter the test case execution records that have an associated test case with at least one linked requirement, or no linked requirements at all.
Requirements column in test case execution results table view
Use this column to display the requirements linked to a test case that is associated with a specific test case execution result. Additionally, you can filter the test case results that have an associated test case with at least one linked requirement or no linked requirements at all.
Ability to filter columns in the "Change column display settings" for the test case execution record view
You can now filter to display specific columns in the "Change column display settings" in the test case execution record view.
Option to filter by all categories or all iterations
When you filter by category or iteration, you can now select an All check box to specify all categories and all iterations.
For more information, see Work Item 136471: Add Select All Checkboxes Functionality for Category and Iteration filters.
New "Today" and "Yesterday" filters for test artifacts
In the test artifact table view, you can filter the test artifacts that were modified or completed today or yesterday. The "Today" and "Yesterday" filters were added to the columns that show dates. For example, in table views, the filters are available in these columns for the following artifacts:
- Modified column: Test plan, test case, test suite, test script
- Last Modified column: Test case execution record
- Last Result Completed column: Test suite execution record
- Completed column: Test case result, test suite result
PDF and CSV export support for columns in table views
PDF and CSV export support is now available for the following views and columns:
- Test Cases view: Test Scripts column, Test Suites column
- Test Suites view: Test Plans column
- Test Scripts view: Test Cases column, Used Keywords column
- Keywords view: Used in Scripts column
- Test Case Execution Record view: Validates Test Case Requirements column, Test Case custom attributes column, Test Suite Execution Record column
- Test Suite Execution Record view: Test Cases Passed column, Test Cases Failed column, Test Cases Blocked column, Test Cases In Progress column, Test Cases Inconclusive column
- Test Case Results view: Validates Test Case Requirements column, Test Case custom attributes column, Host Name column
Requirements
New Suspected By column in the test case editor
In the test case editor, in the Requirements Links section, a new Suspected By column shows whether linked requirements are suspect or not. The Suspected By cell shows one of two options:
- Suspect
- Not Suspect
For a test case, you can set a requirement as suspect or clear its suspect state. When one or more requirements are set as Suspect, the test case is automatically marked as suspect. When all the requirements are marked as Not Suspect, the test case is automatically marked as cleared.
For more information, see Work Item 137648: As a User, I want to Determine which of linked requirements in a testcase caused it to become suspect.
Suspected By column in the test case table view
A new column named Suspected By was added to the test case table view. The column contains a list of the requirements that make the associated test case suspect; it is a subset of the existing Validates Requirement column. Note that this new column is not a default test case table column, so you must enable it in the "Change column display settings" window for the test case table. Filtering, sorting, and grouping are not supported for this column. However, the column is included in CSV and PDF reports.
For more information, see Work Item 137648: As a User, I want to Determine which of linked requirements in a testcase caused it to become suspect.
Filter test cases by the Suspected By column
In the test case table view, you can filter test cases by the two values in the Suspected By column:
- Has any requirement link
- Does not have any requirement link
If you filter by "Has any requirement link," the table shows all the test case rows that have requirement links in the Suspected By column. If you filter by "Does not have any requirement link," the table shows all the test case rows that do not have any requirement links in the Suspected By column.
For more information, see Work Item 137648: As a User, I want to Determine which of linked requirements in a testcase caused it to become suspect.
Bulk updating for Suspected By requirements
You can bulk update the suspect state for requirements. That is, you can set multiple selected requirements as Suspect or Not Suspect in a single action.
For more information, see Work Item 137648: As a User, I want to Determine which of linked requirements in a testcase caused it to become suspect.
List of Suspected By requirements in the test case editor
A new list named Suspected By Requirement was added to the right side of the test case editor, below the existing Validates Requirement list. It contains the suspected by requirements; that is, the requirements that made the associated test case suspect. This new list is a subset of the existing Validates Requirement list.
For more information, see Work Item 137648: As a User, I want to Determine which of linked requirements in a testcase caused it to become suspect.
Change suspect state of test case by using a dialog box in the test case editor
A new dialog box is displayed when you click the Change Suspect State icon in the test case editor. The dialog box shows the list of linked requirements for the test case and provides three actions for each requirement:
- Ignore
- Clear Suspicion
- Mark Suspect
You can add or remove suspected requirements for the test case by using the Mark Suspect or Clear Suspicion action, similar to the Reconcile Requirements wizard. If all the requirements are removed from the suspected list, the test case is automatically removed from the suspect state. All these actions can also be viewed in the History section of the test case editor.
For more information, see Work Item 137648: As a User, I want to Determine which of linked requirements in a testcase caused it to become suspect.
Project areas
New process template for Scaled Agile Framework (SAFe)
A new Quality Management process template is provided for Scaled Agile Framework (SAFe). This process template provides preconfigured elements that are commonly used in SAFe projects, including a single timeline with two program iterations that include program increments and sprints, SAFe-specific artifact templates and categories, and a preconfigured project dashboard template. Use this template to create a SAFe-enabled testing portfolio.
Predefined queries while creating a new project area
When you create a new project area by using the Quality Manager Default Process or Quality Manager Legacy Process template, the following predefined queries are created:
Test case execution records table:
- All Incomplete Test Case Execution Records
- My Incomplete Test Case Execution Records
- All Test Case Execution Records for Current Iteration
Test suite execution records table:
- All Incomplete Test Suite Execution Records
- My Incomplete Test Suite Execution Records
- All Test Suite Execution Records for Current Iteration
Test case results table:
- All Test Case Results for Current Iteration
- All Test Case Results from Today
- All Test Case Results from Yesterday
- All Current Test Case Results
Test suite results table:
- All Test Suite Results for Current Iteration
- All Test Suite Results from Today
- All Test Suite Results from Yesterday
- All Current Test Suite Results
Configuration management
External utilities now support configuration management
Note: To find the configuration URI, see Finding the Configuration URI. For more information, see the readme files for each tool.
Command-line utilities
A new command-line argument, -configURI, was added to the Rational Quality Manager URL Utility and the Copy Tool. It specifies the stream, baseline, or global configuration that the tool runs against; the value for this argument is the URI of that stream, baseline, or configuration.
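As a rough sketch, an invocation of the URL Utility with the new argument might look like the following. Only -configURI is documented here; the other flags, the server host, and both URIs are placeholders based on typical RQMUrlUtility usage and may differ in your version, so consult the tool's readme.

```shell
# Illustrative only: fetch a test case in the context of a specific stream.
# -configURI is the new argument described above; the remaining flags and
# all URIs are hypothetical placeholders -- verify them against the readme.
java -jar RQMUrlUtility.jar \
  -command GET \
  -user JazzUserID \
  -password JazzPassword \
  -filepath testcase.xml \
  -url "https://qm.example.com:9443/qm/service/com.ibm.rqm.integration.service.IIntegrationService/resources/ProjectA/testcase/tc_1" \
  -configURI "https://qm.example.com:9443/qm/oslc_config/resources/com.ibm.team.vvc.Configuration/_exampleStreamId"
```

Without -configURI, the tool behaves as in earlier releases; with it, the request is resolved against the specified stream, baseline, or global configuration.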
Importer for Microsoft Word and Excel
You can now specify the target configuration when you import test resources by using the import tool for Microsoft Word and Excel. You can enter any configuration URI manually, and a new configuration selection tool is available for choosing local configurations.
Offline execution
The Run Offline option is now available for project areas that are enabled for configuration management. You must specify the target configuration when you upload the results to the QM server by using the import tool for Microsoft Word and Excel. For more information, see Importer for Microsoft Word and Excel.
Finding changed requirements in projects that have configuration management enabled
In a project that does not have configuration management enabled, Quality Management practitioners use the requirement reconcile operation to find which requirements changed. In a configuration-enabled project, you can use the link validity status to determine which requirements have changed. The link validity status is set to "Suspect" in the following cases:
- The link validity status cannot be set to Valid.
- The test case might have changed after it was marked as Valid.
- The requirement might have changed after it was marked as Valid.
- Both the test case and requirement might have changed after they were marked as Valid.
A suspect status does not mean that a requirement has changed; it means that it might have changed and the artifacts and relationship require review. There are three ways to track requirement changes in projects that are enabled for configuration management:
- From Rational DOORS Next Generation, compare the requirement baseline with another active configuration (stream). The comparison shows which requirements have changed. Mark corresponding test cases as Suspect from Rational Quality Manager. Create a new requirement baseline. Remove the old requirement baseline from the global configuration and add the new one.
- Select all test cases from Rational Quality Manager and use the bulk edit feature to mark the validity status as Valid. Then, create a new requirement baseline from the active requirement stream. Remove the old requirement baseline from the global configuration and add the new one. Any status that is Suspect means that a requirement has changed.
- A new link validity project property named When a test case is approved, mark the validity to Valid is available. When this property is selected, the validity status of a test case is set to Valid after you author the test case and have it approved. Approved test cases cannot be modified, so if an approved test case is marked as Suspect, the requirement changed after the test case was approved.
For more information, see Work Item 139993: Support reconcile operation opt-in.
Configuration management support for the reconcile operation
When the configuration management capabilities are enabled, requirement reconciliation operations are available in the Requirement Collection Links section of the test plan editor. Requirement reconciliation identifies the following use cases:
1. A requirement does not have a test case
2. A requirement changed since the last reconciliation
3. A requirement was removed from the requirement collection after the last reconciliation
4. A requirement was deleted
5. A requirement is unplanned
Except for use case 2, all of these scenarios are supported when the configuration management capabilities are enabled. For use cases 3, 4, and 5, the removed, deleted, and unplanned requirements are shown automatically, without any user action. You cannot mark a test case as suspect during the reconcile operation.
For more information, see Work Item 139993: Support reconcile operation opt-in.
Rational Quality Manager mobile application
End user license agreement
When the Rational Quality Manager mobile application is opened for the first time, the end user license agreement is displayed. You must accept the license agreement to use the mobile application. After the license agreement is accepted, it is not shown again unless the application is reinstalled on the device.
New in previous versions of Rational Quality Manager
For details about new features and functionality in previous releases, see these pages: