Jazz Reporting Service 6.0.1 New & Noteworthy

Jazz Reporting Service provides an alternative to Rational Insight. It is not based on Cognos, and it includes these components: Report Builder, Lifecycle Query Engine, the Data Collection Component, and the ALM Cognos Connector.

Jazz Reporting Service is an integral part of the Rational solution for Collaborative Lifecycle Management (CLM), but it is an optional installation. For new and noteworthy information about other CLM applications, see the New & Noteworthy pages for those applications.

New in Jazz Reporting Service 6.0.1

Report Builder

Report Builder, a key component of the Jazz Reporting Service, lets you quickly and easily consolidate data from across your tools and project areas. Depending on the applications that you use, and the type of information that you want to report on, you can choose to report on data that is collected in a data warehouse or that is indexed by Lifecycle Query Engine (LQE).

You can use the ready-to-use and ready-to-copy reports to quickly generate and share information about your projects, or you can create new reports. Your reports can show trends in project or team progress and quality throughout a release, sprint, or a specific time range. Traceability reports can show how artifacts are linked across lifecycle tools. If you use the configuration management features in the RM and QM applications, you can use Report Builder and Lifecycle Query Engine (LQE) to report on data in the configurations. You can export the report data in different formats, including Microsoft Excel spreadsheets or a template that you can use with Rational Publishing Engine to generate document-style reports. You can also export graphs as images for use in documents.

With Report Builder, it is easy to set up reports and integrate them with the CLM dashboards through the widget catalog. The Report Builder widget catalog includes the ready-to-use reports that cut across testing, requirements management, and change management. You can also share reports that you create.

Improved support for custom attributes

Report Builder now has more robust metadata about custom attributes in RTC. As a result, Report Builder can retrieve proper labels for the custom attributes listed in the UI, along with more reliable type information and the project source. The metadata also improves the Data Source refresh time considerably, because Report Builder no longer has to query the ODS data to derive information about the custom attributes.

New ready-to-use reports for Track and Plan

Four new ready-to-use reports are available that are related to tracking and planning for organizations adopting agile methodology. The reports are fully parameterized so that they can be customized in the dashboard for a particular iteration, team, category, work item type, and severity.

Report Management

New tag user interface

Tagging is now more consistent with how tags are shown in the RTC web client. Now you can simply type to add a tag, or press the space bar to use content assistance to find an existing tag. The same control also works for individual reports. You can remove a tag from a report by simply clicking the "x" on the tag bubble.

Selecting all reports in a folder

The Use page is where reports are managed and organized by using tag folders. However, it can be cumbersome to select each report individually to perform an action, such as delete. To make this easier, you can now select reports at the tag folder level. The folder automatically expands and all the reports in it are selected. If the folder is already selected, clicking it again clears the selection of its reports.

Click the folder selection check box to expand the folder and select its contents:

Report creation (all data sources)

Select all for hierarchical attribute conditions

The 6.0 release of Report Builder introduced a hierarchical tree display for certain attributes in the conditions pane. This improvement makes it possible to see values in the context of their parents - such as iteration by release, subcategory by category, team by project, and so on. Now in 6.0.1 we've improved this control with a Select all visible items check box. This approach is valuable when you search for a particular string (such as "JKE") and then want to select all the results.

Limiting project scope at the beginning of the creation workflow

To better support the LQE data source, where artifact types are project-specific, the Limit scope section has been moved to the beginning of the creation workflow. Now it is easy to filter the artifact section down to the types that apply to the selected projects. You can also navigate back to the Limit scope section after selecting artifacts and show only the projects that are relevant for those selections: click the Show applicable projects for selected artifact types check box. When it is cleared, you see all projects that you can access.

Previewing is now part of Format results

The preview section is now merged into the Format results page in the Format section (which was previously called Columns). The preview won't display unless you explicitly click Refresh. You switch to a graph the same way as before by clicking Graph. There is now a more direct correlation between making any changes in the format section and seeing them reflected in your report.

Combining table columns

When you report across more than one project, you often find custom attributes in different projects that are semantically equivalent. In an enterprise you might want to use a common set of custom attributes across all project areas to establish consistency when reporting status to stakeholders. However, if these custom attributes were added to each project area inconsistently, with different IDs, Report Builder cannot discover that they are semantically equivalent and considers them to be different. In the Conditions pane you will see a set of seemingly duplicate attributes, although their project area is listed to differentiate them.

Let's consider a specific example with a set of projects with a custom attribute called Risk Status. It has been defined in each project area with different IDs so Report Builder detects them as distinct attributes. In the Conditions dialog box they are all listed separately with their project areas:

To report on them, you would have to add conditions for each of these attributes and then group them with an OR condition, since the values should be mutually exclusive. If they had been originally defined with the same IDs, they would show up correctly as a single consolidated attribute. The real problem happens when these attributes show up in a report table - each one becomes a separate column.

The result is a report that is difficult to read and hard to interpret since equivalent values are spread out among multiple columns.

Since the different Risk Status columns are semantically equivalent, you ideally want them all combined into a single column. This is now possible by using the same labels for each of the consecutive column entries. This clue tells the report engine to combine the results for these columns into a single column entry with the common label.

In this example you can rename all the columns to Risk Status in the Format pane as follows:

Then when you run the report you can see a consolidated view of the data with a single Risk Status column:

Changes to the calculations dialog flow

In the 6.0 release we introduced the concept of a calculated column. In the dialog box you could calculate a total count of all artifacts in a group or all artifacts based on the value of a particular attribute. In the evolution of this feature we discovered that in practice these two tasks are quite different. We count all artifacts based on the reference ID attribute, which you see in the UI. Unfortunately this made for a slightly awkward workflow because it appeared that you could choose an attribute but you couldn't. The new workflow orders the steps more logically and clarifies that counting based on a limiter value is different from counting all artifacts.

To count all artifacts, you just have to select All:

Then, to limit the count based on a particular value, select Limit to select a particular attribute and a value to limit against.

Time range support with drill-down when counting artifacts

Another improvement to calculations is the ability to count based on a date range. Previously you could add a condition for a single date, but not between two different dates. As an example, you might want to look at the age of particular defects in a set of ranges such as <30 days, 30-90 days, and 90-180 days.

When limiting values based on a date attribute (such as Creation Date), the time range interface lets you select a time range between a start date and an end date.

If you click Set time range, you get a number of options, such as the ability to pick a relative start date and end date or specific dates.

You can use this capability to construct aging reports that show different time ranges and their counts. The following example shows the defect counts within different creation date ranges. You can drill down on each segment of the bars to see the work items that fall into the defined range.
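Conceptually, each bar segment corresponds to a count over one date range. Here is a minimal SQL sketch of a single bucket; the CREATION_DATE column name and the DB2-style date arithmetic are illustrative assumptions, not the exact query that Report Builder generates:

-- Count defects created between 90 and 30 days ago (one aging bucket)
SELECT COUNT(*) AS DEFECTS_AGED_30_TO_90_DAYS
FROM RIDW.VW_REQUEST T1
WHERE T1.REQUEST_TYPE = 'Defect'
  AND T1.CREATION_DATE BETWEEN (CURRENT DATE - 90 DAYS) AND (CURRENT DATE - 30 DAYS)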

Dynamic filtering is available in the Add Conditions dialog box

With dynamic filters, you can right-click the lock icon to see the report conditions that will be required at run time in dashboards. These dynamic filters are generated by default for every condition, but users can turn them off or require individual conditions when the report is run.

However, usability testing indicated that this functionality is not easy to discover. Therefore, we added the same lock selection to the Add Conditions dialog box as well, so there are now two ways to find it.

Improved layout of conditions and groups

The layout of conditions and groups is now more readable, and groups are more easily distinguished. Conditions in a group are lined up and sub-groups are indented.

Customizing graph colors

In previous releases, graphical reports were limited to default monochromatic colors to ensure accessibility (for visual impairments) and to avoid implying semantic meaning (for example, red for errors, green for OK). Now you can customize the graph colors for a particular attribute value or column. You do this in the Format section when configuring a graph.

In the Modify colors section, click a color swatch and change it; the change appears in the report immediately.

New Bluemix styling, header and collapsible side menu

The styling for the Report Builder has been adjusted to be more consistent with IBM Bluemix and the Track and Plan component within Rational Team Concert (RTC). The closer integration with the Track and Plan component makes it easier to navigate between the two.

After you make a selection in the action side bar, the side bar collapses to give you more space for creating or navigating reports. Reopen it with the expansion handle to see and select another action.

Report creation (Data Warehouse)

Workflow for reports on historical metrics

Report Builder now supports the creation of historical trend reports. These reports show how selected metrics change over time, optionally separated by an attribute dimension. Their unique character warrants a different workflow from reports on current data. A new section lets you choose your report type: either Current data (the same as the existing flow) or the new Historical Trends report type, which replaces the traceability section with a section where you set the time range for the trend you are measuring. Historical trend reporting is currently available only for the Rational Data Warehouse data source.

Let's consider an example where you want to display the total work items separated by a value dimension of their status (resolved versus unresolved). First, you pick the Historical Trends report type:

Next, you decide what trend and artifact you are interested in measuring. In the Focus on artifact trends section you select from a list of artifacts and associated trends. Since you want to measure work item totals, you pick the basic Work Item Metrics item. If you wanted to find out about work item creation trends instead, you'd pick Work Item Creation Metrics.

The next step is to select a time range for the trend that you want to capture. In the Set a time range section the default time range is set for the last 3 months, and you can change it to any range you like. For this example, let's set it to the last 2 months.

You'll notice under Conditions that you now see new conditions for Start date and End date after you set the time range, and you can change them here, too. Continuing on to Format results, you'll notice the default is a graph presentation. Historical trend reports are typically shown as a line graph, but you can also use a bar chart (stacked or grouped).

In the graph section you need to select the unit to measure: the total count of the related artifact (work items) or some other criteria that was captured in the metrics artifact. You can also select a dimension to separate the value being measured by all the values of the dimension, such as Status, Filed Against and so on. Before you can preview a graph, you must select a value for the unit of measure, and optionally, the dimension.

After you select Total Work Items for the unit of measure and Status Group Category for the dimension, the graph should look something like this:

Now the report is done, and you can save and publish it to a dashboard like any other report created by the Report Builder.

Organizing historical trends underneath the artifact type

Historical trend reporting is based on selecting a specific trend, such as creation or totals, and seeing how it changes over time. In previous milestones the trend was embedded in a long list of artifacts, which made it difficult to find a particular trend and artifact type: you needed to read through the entire list for the combination you were looking for. Now that artifact types and trends are organized hierarchically, you can look for the artifacts you want to report on and expand them to see what trends are available.

In this workflow, you expand the target artifact, Work Item, to see a list of trends. Then you select one trend and, optionally, a specific work item type to focus the trend on (such as Defect).

Limiting or excluding trend dates

For trend reporting, you can select a date range to restrict the results. You might want to restrict the result further by only allowing work days (such as Monday to Friday) or specific months or quarters of the year. Now you do this when adding a condition by selecting the Date artifact type.

This ability could be most useful for improving report performance by reducing the results to a specific day of the week. If you're examining a long date range, you can pick a specific day of the week to draw the trend from (such as Wednesday), significantly reducing the number of results the query and chart engine have to process while providing the same trending information.

Validation for historical trend reports

Historical trend reports query a vast set of data, potentially across multiple project areas. Some reports can take a long time to run if they are not filtered down to a smaller subset of data. To help, validation now ensures that some performance constraints are in place:

  1. A time range is now mandatory when you create trend reports. Although an infinite range is still possible, it is better to set a specific time frame. You cannot format the results until you enter a time range.
  2. The graph must include a value for the unit of measure, so that the report generation can optimize the set of tables to query against.
  3. You get a warning when you save if no conditions and no project scoping are set. It is advisable to set some scoping to narrow down the results, such as work items with conditions on the Release (Planned for) and/or Category (Filed Against) fields.

Visualizing multiple historical trends

Many different trends are stored in the data warehouse as part of the data mart / fact collection process in the Data Collection Component (DCC). For instance, work items have trends for discovering creation rates, general totals, and resolution rates. Now you can visualize more than one trend on a graph so you can compare them. You can select more than one trend in the Focus on artifact trends section:

Then, in the Format Results section, because you want to visualize multiple measures, select the radio button to add lines or bar segments based on numeric measures. You can then select one or more specific measures from each of the selected trends and display them as a line or a bar segment, depending on which graph type you are using.

After you click Add measures you can choose an available trend and pick a measure to visualize in the graph. In this case you are interested in the Total Work Items measure that provides a count of all work items filtered by conditions you added previously on the Choose Data tab.

After you choose the measures, the graph preview is generated with a line representing each measure. You can also choose the type of graph and the date scale (such as Weeks) for the X axis. You might also want to decide how the data is gathered within the date scale period. The default is to average all of the data points discovered within the date scale range, but you can also sum all the data points or consider only the first or last data point. In this example you want to sum all the created work item totals for the week, and then look only at the last data point for the total work items within the week.

Setting a goal line

If you want to track a trend to a particular goal, click Add goal line. A dialog box opens so you can select a particular value.

The value displays as a horizontal dotted line in your graph. You can change its color, and you can add more goal lines by repeating the process.

Traceability: the ability to refine cross-artifact links by specific link types

The ability to trace links between artifact types has been available since version 5.0.2. However, between different artifact types (such as tracing a work item to a requirement), you could trace only from a work item to any related requirement, not by a specific link type (such as tracks versus affects). Now you can trace by a specific link type, and you can still trace to all related links. The relationships are also shown hierarchically, making it easier to find a particular link type for a given artifact.

Report creation (Lifecycle Query Engine)

Merged support for artifact types

Each CLM application defines slightly different data structures (called resource shapes in the specification).

Reports based on a Lifecycle Query Engine (LQE) data source now use merged artifact types (resource shapes) instead of the published artifact types. Previously, each project in Rational Quality Manager (RQM or QM) and DOORS Next Generation (DNG or RM) published its own set of artifact types, which caused scalability issues when there were many projects. Loading the metadata from the data source (reading all the artifact types) requires many queries to LQE, and some queries exceeded the result limit. The artifact section of Report Builder tried to show all the artifact types, but the list was long and confusing.

LQE added a function that is triggered when an application publishes an artifact type. That type's data is converted into a stable format based on the resource name and is saved in a separate "shapes" partition in LQE. Now, if applications publish artifact types with the same name from different projects, Report Builder loads the stable version instead of the actual resource shapes, and the artifact section shows one merged artifact type with all of the properties defined by each project. The following image shows the expansion of Test Plan and a single merged type "QM Test Plan".

When you select a merged type, the report will include all artifacts from the corresponding types in the various application projects, in other words, a cross-project report. You can use the top-level Open Services for Lifecycle Collaboration (OSLC) "Test Plan" type for reports across applications, but it has very limited attributes. When you use the merged "QM Test Plan" type, the available relationships and attributes are from all the project-specific artifact types. However, you still filter the list of properties based on the projects you select in the Limit scope section.

Important: To get these new merged artifact types, you must use only the latest versions of RQM, RTC, DNG, LQE, and Report Builder. If an LQE data source contains artifact types published by an older application version, Report Builder shows the project-specific artifact types for that data source.

Merged artifact type property conflicts

When you select a merged type, the relationship and attribute properties are combined from all project artifact types that have the same name. When these properties are merged, several kinds of conflicts are detected by Report Builder and are flagged by a help icon. If you hover over the icon, a tooltip describes the conflict that was detected.

These conflicts are detected:

  • PropertyNames - A property with a specific predicate has multiple names. The first property name is used.
  • PropertyEnumNames - The possible values of an enumeration property result in multiple enumeration value names for a specific ID. When this conflict occurs, the first name for the enumeration value is used.
  • PropertyTargets - For LQE-based relationship properties, any given property can have multiple target types if they are other artifact types (not attribute types). In this case, when the relationship is being created, you can choose which target type to use in the report. This conflict occurs in two situations: 1) the target types of a property contain both other artifact types (a relationship) and XSD types (an attribute), or 2) there are only XSD target types but they differ (string, integer, Boolean). In either case, the conflict is resolved by changing the property target type to XSD string.
  • PropertyPredicates - Two or more properties were found with the same name but different predicates. If the property was contributed by a single project, the tooltip names the project. If the property came from multiple projects, the tooltip shows the predicate.

Rational Team Concert (RTC) artifact type support

RTC now publishes its resource shapes to LQE for use in reports. In the Report Builder artifact section, when you expand Work Item, you now see a single list of all possible work item types instead of the multi-selection list of the known types from the existing work items. If you select a type such as Story, the report shows only work items of that type. If you want to report on multiple work item types, then select the top Work Item type and in the Conditions section, create a filter for the work item "type" property and select the work item types you want, such as Story, Defect, or Task.

Advanced Queries

Advanced query editing supports string templates

If you need to create more complex queries you can use the advanced editing mode to edit the generated SQL or SPARQL query. In a previous sprint we added parsing of the query and generation of dynamic filters for all of the variables in the SELECT statement without a modifier, so that people can enter alternative values to filter the report when they run it. However, these automatic dynamic filters required the variable to be in the SELECT statement and hence part of the report output.

Now you can explicitly add a template parameter inside the query logic to ensure that a particular aspect is exposed as a dynamic filter. For SQL, template parameters are defined by a string between dollar signs ($), such as '$Priority$' in the following example. This string defines a text parameter for priority that is substituted into the query at run time, based on what a user enters when they click the lock on the dashboard.
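A minimal sketch of such a query follows; the query body is illustrative, reusing the RIDW.VW_REQUEST view and REQUEST_PRIORITY column from the generated query shown later on this page:

SELECT T1.REFERENCE_ID,
     T1.NAME,
     T1.REQUEST_PRIORITY
FROM RIDW.VW_REQUEST T1
-- '$Priority$' is replaced at run time with the value that the user enters
WHERE T1.REQUEST_PRIORITY = '$Priority$'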

When they run the report, there will be a filter for the Priority variable:

Automatic dynamic filter support for manual reports (SQL / SPARQL)

One of the powerful features of Report Builder is its ability to fall back to the native query language of the data source (such as SQL for the data warehouse) when you reach the limits of the user interface. You can go as far as possible with the Report Builder UI and then move to the Advanced pane and work with the SQL that has been generated up to that point. You can modify the existing SQL as required, within the capabilities of the data source itself (DB2, Oracle, and so on). Until now, any dynamic filters would no longer be available in the resulting report.

Now, dynamic filters are generated automatically for certain variables in the SELECT statement of the native query. Filters for some variables cannot be generated, but where possible Report Builder makes a best guess to discover the underlying enumeration values, or it simply provides a native type filter (string, integer, and so on).

Consider a simple work item report with a set of conditions and columns, and look at its Filters section when running the report.

If you modify the Advanced pane to base the report on the native SQL instead of the Report Builder metadata, you can see how this works. In the following query, the CONCAT expression combines the Severity and Priority columns into a single IMPORTANCE column.

SELECT DISTINCT T1.PROJECT_NAME,
     T1.REFERENCE_ID,
     T1.NAME AS URL1_title,
     T1.URL AS URL1,
     T1.REQUEST_TYPE,
     CONCAT(CONCAT(T1.REQUEST_SEVERITY, ', '), T1.REQUEST_PRIORITY) AS IMPORTANCE,
     T1.REQUEST_CATEGORY_NAME,
     T1.ITERATION_NAME
FROM RIDW.VW_REQUEST T1
WHERE (T1.REQUEST_TYPE = 'Defect' AND
  (T1.REQUEST_SEVERITY = 'Blocker' OR T1.REQUEST_SEVERITY = 'Critical' OR T1.REQUEST_SEVERITY = 'Major') AND
  (T1.REQUEST_PRIORITY = 'High') AND
  ((T1.REQUEST_CATEGORY_NAME = 'Jazz Reporting Service' OR T1.REQUEST_CATEGORY_NAME LIKE 'Jazz Reporting Service/%') AND (T1.PROJECT_ID = 53))
) AND
(T1.ISSOFTDELETED = 0) AND
(T1.REQUEST_ID <> -1 AND T1.REQUEST_ID IS NOT NULL)

Then you can look at the report results to see how the Filters section is changed.

All the previous filters still exist except for Severity and Priority, which are combined into a new column. Any column variables that are the result of complex manipulation cannot be generated as a runtime filter.

Lifecycle Query Engine

Lifecycle Query Engine (LQE) implements a linked lifecycle data index of data provided by one or more lifecycle tools. A lifecycle tool makes its data available for indexing by exposing it through a tracked resource set, whose members must be retrievable resources with RDF representations, called index resources. An LQE index built from one or more tracked resource sets allows SPARQL queries to be run against the RDF dataset that aggregates the RDF graphs of the index resources. Thus, data from multiple lifecycle tools can be queried together, including cross-tool links between resources.
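As an illustration, here is a minimal cross-tool SPARQL sketch. The prefixes are the standard OSLC Change Management and Dublin Core vocabularies; the exact shape of the indexed data varies by tool and version:

PREFIX oslc_cm: <http://open-services.net/ns/cm#>
PREFIX dcterms: <http://purl.org/dc/terms/>

# Find change requests and the requirements they track, across tools
SELECT ?changeRequest ?requirement ?title
WHERE {
  ?changeRequest oslc_cm:tracksRequirement ?requirement .
  ?requirement dcterms:title ?title .
}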

New in version 6.0.1

Discover access context lists (ACLs) from a TRS feed

LQE can now discover ACLs from a Tracked Resource Set (TRS) feed. An application can include its ACL resources in its tracked resource set, enabling LQE to scan for ACLs during indexing. An application can secure an access control description by putting it in its own graph and setting the graph's acc:accessContext to the access context that it describes.

The new "inherit and add" permission enables resource groups to inherit the permissions from their parents. Now administrators can define additional permissions.

Synchronization of LQE project area membership

LQE supports the OSLC Indexable Linked Data Provider Specification Version 2.0, which includes a specification for access context, yet there has been no way to automatically associate user memberships in tools with the access context that the tools provide. Permissions in LQE can now be synchronized with the permissions that users have in project areas.

LQE has adopted a service to synchronize read access list members (only from CLM tool project areas) with LQE access context user groups. The service automatically accesses project membership information through the tool's reportable REST API. The synchronization service automatically detects changes in read access list membership in project areas and sends those changes to LQE without manual intervention by the administrator.

REST APIs

We added APIs for accessing historical metric jobs.

Infrastructure to support historical metrics

LQE has taken another step toward providing access to historical data, similar to the data warehouse, through the use of partitions:

  • Periodic execution of SPARQL queries to collect historical metrics
  • Persistence of historical metrics as JSON-LD in an LQE partition
  • Support for SPARQL queries for historical reporting

Historical metrics support

Historical metrics reporting is a technical preview in 6.0.1, and an LQE endpoint is now provided to enable metric collection. To run this setup, an authenticated user with the JazzAdmin role must send a POST request to lqe/setup/history. After the setup, historical metrics are collected every midnight and saved in a separate partition.

For details about the metric collection jobs, make a GET request to lqe/api/metric-job.
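For example, against a hypothetical server root (the host and port are illustrative; both requests must be authenticated, and the POST requires the JazzAdmin role):

POST https://clm.example.com:9443/lqe/setup/history
GET  https://clm.example.com:9443/lqe/api/metric-job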

Further support was added for historical metrics:

  1. Updated the vocabulary for Rational Quality Manager
  2. Added support for the WebSphere Liberty server

The following use cases were created and added as automated tests for LQE builds:

  1. Quality metrics for work items - Each day, count metrics are generated for work item creation, closure and opened (by type, by team, by project area, by category).
  2. Blocked work items - Each day, count metrics are generated for open and blocking work items (by type, by team, by project area, by category).
  3. Closed work items - Total (max) closed work items (by iteration, by team, by project area, by category).
  4. Requirements by test cases - Each day, count metrics for linked to test case, not linked to test case (validatedBy property, by project area).
  5. Requirements blocked by open change requests - Counted each day (affectedBy, by project area).
  6. Outstanding requirements - Count of requirements with associated test execution records that have not been run each day (by project area).
  7. Iteration scorecard - Total number of new test results with each status each day (by project area, by iteration).
  8. Test cases by plans - Count of test cases associated with and without test plans each day (by project area).
  9. Test cases states - Count of test cases in each state each day (by project area, by category).
Two more use cases were added to the tests, but data gaps still prevent their success so far:

  10. Burndown - Remaining estimated work (in hours) over time, as well as total work over time (per iteration, per team, per project area, per category).
  11. Burnup - Completed work (in hours) over time, compared with total work and team capacity (per iteration, per team, per project area, per category).

Improved navigation

With the addition of a link back to Jazz Team Server Home, users can now easily return to the JTS Administration pages. The link is near the top right.

Reporting across versions and projects

LQE has added SPARQL queries to support merging of artifact details for reporting across versions and projects. Each CLM application defines slightly different data structures (resource shapes). These queries help LQE merge the resource-shape data from each application.

Updated RQM vocabulary

The vocabulary has been updated to include the various test artifact types.

Vocabulary restoration link added to the UI

The vocabulary page has a new link for restoring the default vocabularies, to accommodate use cases where they might have been deleted.

Data source management improvements

The data source page now has additional buttons to facilitate actions on all data sources, such as removing, pausing, resuming, and reindexing. Now you can work with one data source or all of them together, making the process far more efficient.

The new Test Connection button provides an easy way to make sure your connection is live before adding or editing data sources, again increasing efficiency. If the URL is incorrect, you can fix it immediately rather than discovering the problem only after the connection fails.

Data Collection Component

The CLM Java ETLs have been replaced with the Data Collection Component. Install and configure the Data Collection Component to report on data warehouse data. Designed to address key performance and deployment concerns with the previous ETL solutions, the Data Collection Component uses parallel processing and the available system resources to process data efficiently. Overall execution time for the CLM ETL process is reduced, whether there is a single instance or multiple instances to process.

New in version 6.0.1

These capabilities were added to DCC:

  • DCC now behaves like any CLM application, so you can customize the behavior of its log files by using the log4j.properties file.
  • Be sure to set up the new data collection property, ETL Administrator E-mail List, with a comma-separated list of email addresses to notify when an ETL job fails.
  • Work item planned start and end dates are now loaded in the RIODS.TASK_DETAIL table in the data warehouse.
  • If applications are registered against the same Jazz Team Server where DCC is registered, those applications are registered as data sources for the data warehouse on startup.
  • RQM attribute information for configurations is now loaded into the data warehouse.

DCC now stores the RTC work item custom attribute definitions in a new table named RICALM.REQUEST_ATTRDEF in the data warehouse.
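For example, a quick way to inspect the stored definitions from a SQL client (a sketch; the table's columns are not documented here, so it simply selects everything):

SELECT * FROM RICALM.REQUEST_ATTRDEF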

ALM Cognos Connector

ALM Cognos Connector enables a Jazz server to connect to an existing Cognos BI server already deployed in a WebSphere Application Server environment. This component provides the configuration for connecting the Jazz and Cognos BI servers. When it is configured, users can create and manage Cognos-based reports based on the artifacts from the Jazz server, similar to the capability provided by Rational Reporting for Development Intelligence in previous versions.

For more information about how to use the ALM Cognos Connector with a Cognos BI server and move existing reports to the new Cognos BI server, refer to this topic: Migrating Rational Reporting for Development Intelligence reports to Cognos Business Intelligence using the ALM Cognos Connector

New in previous milestones of Jazz Reporting Service 6.0.1

For details about new features and functionality in previous milestones of this release, see the New & Noteworthy pages for those milestones.