
CLM Operations to Investigate as Potentially Expensive (DRAFT)

Authors: TimFeeney
Build basis: The Rational solution for Collaborative Lifecycle Management (CLM) and the Rational solution for systems and software engineering (SSE) v6.0.1.

This page captures CLM operations that are potentially expensive but need further investigation to characterize and confirm them. See the parent topic, CLMExpensiveOperations, for the list of known expensive operations.

Note: The information herein is a work in progress and is continually being reviewed and refined.

Table 1: Summary of CLM Operations to Investigate

| *Product* | *Scenario* | *Scenario ID* | *Best Practice* | *Investigate* |
| Rational DOORS Next Generation (DRAFT) | Comparing local configurations with large number of artifacts | DNG_Compare_Configuration | link | 106102 |
| Rational Team Concert (DRAFT) | Concurrently loading a large number of large configurations | RTC_Load_Workspace | link | 389280 |
| | Import of large Microsoft Project plans | RTC_MSP_Import | link | 389355 |
| | Import of large CSV files | RTC_CSV_Import | link | 389357 |
| | Export of large Microsoft Project plans | RTC_MSP_Export | link | 389358 |
| | Export of large number of work items to CSV file | RTC_Workitem_Export | link | 389359 |
| | Large bulk edits of work items | RTC_WI_Bulk_Edit | link | 389361 |
| | Adding many build result contributions to build result | RTC_Add_Build_Contribution | link | 389278 |
| | Loading a large plan | RTC_Load_Plan | link | 389362 |
| Rational Quality Manager (DRAFT) | Comparing local configurations with large number of artifacts | RQM_Compare_Configuration | link | 151178 |
| Jazz Reporting Services (all reporting technologies) (DRAFT) | | | | |
| Global Configuration (DRAFT) | Creating streams in a global configuration hierarchy | GC_Create_Stream | link | 389494 |
| | Creating a baseline staging stream from a global stream hierarchy | GC_Create_Baseline_Staging | link | 389497 |
| | Creating a global baseline (or committing a baseline staging stream) from a global stream hierarchy | GC_Create_Global_Baseline | link | 389495 |

Rational DOORS Next Generation

Comparing local configurations with large number of artifacts

This scenario needs further analysis to confirm that it can be expensive and to characterize it further. See work item 106102.

Rational Rhapsody Design Manager

Importing or moving a large model to DM

When you import a large model, or move a large model (in actively-managed mode) to DM, the server can become unresponsive, especially when other users are working in that same stream. DM notifies users that an import is in progress.

Operations that load a large model

Some operations require loading a model in its entirety; if the model is large, it can impact server memory and processing. Such operations include:
  • Generating code from a model
  • Exporting a model (using “Save as”)
  • Creating a table layout in Rhapsody with too broad a scope

As a best practice, use smaller, related models to reduce these demands on the system.

OSLC integration to DNG/DOORS

Rhapsody DM can manage OSLC links between model elements and requirements artifacts in related DOORS and DNG projects. To do so, DM retrieves all available requirements from those projects and all defined links from the DM server, then synchronizes them to find new requirements and links. If the requirements project is large, this link management can be resource-intensive.

To reduce resource demands, use a filtered view (e.g., a collection or module) to narrow the set of requirements retrieved.
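To illustrate the idea, here is a minimal sketch of a narrowed requirements query using the generic OSLC 2.0 query parameters (oslc.prefix, oslc.where, oslc.select). The query URL, filter date, and selected properties are hypothetical placeholders, and authentication is omitted; the real query capability URL comes from your server's service provider documents.

```python
# Hedged sketch: narrow an OSLC requirements query rather than retrieving every
# requirement in the project. The URL and filter values are placeholders.
import requests

QUERY_BASE = "https://dng.example.com/rm/query"  # hypothetical query capability URL

params = {
    "oslc.prefix": ("dcterms=<http://purl.org/dc/terms/>,"
                    "xsd=<http://www.w3.org/2001/XMLSchema#>"),
    # Restrict the result set, e.g. to recently modified requirements:
    "oslc.where": 'dcterms:modified>"2016-01-01T00:00:00"^^xsd:dateTime',
    # Retrieve only the properties the integration actually needs:
    "oslc.select": "dcterms:title,dcterms:identifier",
}
headers = {"Accept": "application/rdf+xml", "OSLC-Core-Version": "2.0"}

response = requests.get(QUERY_BASE, params=params, headers=headers)  # add auth as needed
response.raise_for_status()
print(response.text[:500])  # RDF/XML result set, truncated for display
```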

Rational Team Concert

Concurrently loading a large number of large configurations

This scenario may occur when many continuous builds concurrently load large configurations (those approaching the file and folder limits described in RTC Limits).

This scenario needs further analysis to confirm that it can be expensive and to characterize it further. See work item 389280.

Import of large Microsoft Project plans

How long an import takes depends on the number of items in the plan and how deeply they are nested. For example, importing an MS Project file containing 2000 tasks could take up to 30 minutes on the first import and 8-10 minutes on subsequent imports, depending on server configuration and load. Consider also the memory demands of an import: expect roughly 100 KB per imported task over and above the memory needed for typical RTC operation. In most cases, importing Microsoft Project plans is an infrequent operation, generally performed at the start of a project. However, if imports will be frequent, make sure the server memory allocation has ample spare capacity. Note that these numbers are based on testing in a non-production environment.
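As a rough illustration, the back-of-the-envelope calculation below applies the approximate 100 KB-per-task figure to the 2000-task example above; it is an estimate, not a measurement.

```python
# Rough heap estimate for an MS Project import, using the ~100 KB-per-task
# figure quoted above. Numbers are illustrative, not measured.
tasks = 2000
per_task_kb = 100                      # approximate transient memory per task
extra_mb = tasks * per_task_kb / 1024
print(f"~{extra_mb:.0f} MB of heap over normal RTC operation")  # ~195 MB
```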

Further investigation to better characterize a 'large' Microsoft Project plan and the load generated when importing one is tracked by work item 389355.

Best Practices for Import of large Microsoft Project plans

If your MS Project file contains more than 1000 tasks, we recommend you import or export during off-hours or when the server is lightly loaded.

Import of large CSV files

Similar to an import of a Microsoft Project plan, importing a large CSV file can keep the server busy and drive memory usage. However, unlike plan imports, CSV file imports occur more frequently and are often part of a round-trip update.

Further investigation to better characterize a 'large' CSV file import and the load generated when importing one is tracked by work item 389357.

Best Practices for Import of large CSV files

For more than 1000 work items, it is best to wait for off-hours or until the server is lightly loaded.
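One way to keep individual imports manageable is to break a large CSV file into smaller batches before importing. The sketch below does this with Python's standard csv module; the file names and batch size are illustrative.

```python
# Hedged sketch: split a large work-item CSV into smaller batches so each
# import stays manageable. File names and batch size are illustrative.
import csv

BATCH_SIZE = 500

def write_part(header, rows, part):
    """Write one batch to its own CSV file, repeating the header row."""
    with open(f"workitems_part{part}.csv", "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)

with open("workitems.csv", newline="", encoding="utf-8") as src:
    reader = csv.reader(src)
    header = next(reader)          # keep the column headers for every part
    rows, part = [], 1
    for row in reader:
        rows.append(row)
        if len(rows) == BATCH_SIZE:
            write_part(header, rows, part)
            rows, part = [], part + 1
    if rows:                       # flush the final partial batch
        write_part(header, rows, part)
```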

Export of large Microsoft Project plans

Similar to an import, export time and load are dependent on the size and complexity of the plan. The impact is primarily to memory on the server.

Further investigation to better characterize a 'large' Microsoft Project plan and the load generated when exporting one is tracked by work item 389358.

Export of large number of work items to CSV file

Exporting a large number of work items primarily impacts memory on the server.

Further investigation to better characterize a 'large' number of work items and the load generated when exporting them is tracked by work item 389359.

Large bulk edits of work items

The performance and load driven by this scenario are a function of the number of work items and the number of attributes changed, since each work item is changed and saved one at a time.
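Because the saves are serialized, a rough duration estimate is simply the item count multiplied by the per-item save time. Both numbers below are illustrative assumptions; measure the per-item save time on your own server.

```python
# Illustrative linear-cost estimate for a bulk edit; values are not measured.
items = 3000          # work items in the bulk edit
save_seconds = 0.4    # per-item save time, as measured on your server
print(f"~{items * save_seconds / 60:.0f} minutes of serialized saves")  # ~20 min
```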

This scenario needs further analysis to confirm that it can be expensive and to characterize it further. See work item 389361.

Adding many build result contributions to build result

When a large number of contributions (compilation results, JUnit tests, downloads, log files, links, work item references, and so on) are included on a build result, the server can spend considerable time marshalling and unmarshalling the persisted contributions whenever contributions are added or deleted, because they are stored in a single data structure. At best this is a slow-running operation; if a large number of concurrent builds perform similar work (each adding many build result contributions), the potential impact on the server increases.
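The following conceptual sketch (plain Python, not RTC internals) shows why keeping all contributions in one serialized structure makes repeated adds expensive: each add unmarshals and remarshals the entire blob, so n adds cost O(n^2) total work.

```python
# Conceptual sketch: contributions kept in a single serialized blob force every
# add to re-read and re-write the whole structure.
import json
import time

def add_contribution(blob: str, item: dict) -> str:
    contributions = json.loads(blob)   # unmarshal the entire structure
    contributions.append(item)         # add a single new contribution
    return json.dumps(contributions)   # marshal the entire structure again

blob = "[]"
start = time.perf_counter()
for i in range(5000):
    blob = add_contribution(blob, {"id": i, "type": "logFile"})
elapsed = time.perf_counter() - start
print(f"5000 adds: {elapsed:.1f}s, final blob {len(blob)} bytes")
```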

This scenario needs further analysis to confirm that it can be expensive and to characterize it further. See work item 389278.

Loading a large plan

The RTC plan editor gives users great flexibility to display plans in customized configurations. To render these custom configurations rapidly, the planning editor must fetch the full details of each work item when loading a plan. Consequently, when the scope of a plan includes a large number of work items, loading it can drive significant server load. Plan loading performance has improved greatly with each release, for example by deferring the loading of outplaced "child" work items and by allowing users to turn on and configure server-side plan filtering to avoid loading items that will never be displayed in the plan.

This scenario needs further analysis to confirm that it can be expensive and to characterize it further. See work item 389362.

Best Practices for Loading a large plan

For large plans in the traditional RTC plan UI on RTC 6.x or later, if plan loading performance is not acceptable, consider configuring the plan to perform server-side filtering; note that server-side filtering is turned off by default (see plan performance). Also check that you are not displaying costly attributes in the plan, such as Gantt charts. Breaking the plan into a more manageable size and removing items you never need to display may also help.

In addition, RTC releases from 5.0.2 onward include a new planning tool, RTC Quick Planner, which allows teams to edit plans in a fast and fluid manner. Quick Planner loads plans much faster because it loads only a few screens of the plan at a time, then loads additional plan items as you page through the plan and ask to display them. Developers, Scrum Masters, and other users who need to load plans quickly and create new work items dynamically should be encouraged to use Quick Planner for their daily plan operations.
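The sketch below illustrates the paged-loading idea behind Quick Planner: fetch the details for only one screenful of plan items at a time instead of all details up front. The function and page size are illustrative, not the Quick Planner API.

```python
# Conceptual sketch of paged plan loading; not the actual Quick Planner API.
from typing import List

PAGE_SIZE = 50  # roughly one screenful of plan items

def load_page(item_ids: List[int], page: int) -> List[int]:
    """Return the ids for one page; only these items get their details fetched."""
    start = page * PAGE_SIZE
    return item_ids[start:start + PAGE_SIZE]

all_ids = list(range(5000))          # a large plan's work item ids
first_screen = load_page(all_ids, 0)
print(f"fetch details for {len(first_screen)} of {len(all_ids)} items up front")
```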

Slow running operations

Synchronizing attributes after a large number of attribute changes for a work item

There are two ways to synchronize work item attributes. The first, in the RTC Eclipse client, is to run a query, select a set of work items, right-click the type column, and select Synchronize Attributes. This affects only the selected work items, so response time is not a concern unless thousands of work items are selected. The second is to trigger the synchronization from the process configuration editor via the "check attributes usages in repository" action. This can take a long time because it synchronizes all work items of all work item types within the same project area. If the project area contains thousands of work items with a large number of attribute changes, the operation can take some time to complete (a warning is presented to the user).

Loading a work item with a large number of custom attributes

When a work item type has more than 100 custom attributes, loading a work item of that type can be slow from the client's perspective, but this is not known to drive server load.

Rational Quality Manager

Comparing local configurations with large number of artifacts

This scenario needs further analysis to confirm that it can be expensive and to characterize it further. See work item 151178.

Jazz Reporting Services (all reporting technologies)

Global Configuration

Creating streams in a global configuration hierarchy

When you create a new stream for a global configuration, the operation may also create new streams for the DNG, RQM, and DM contributions, and may need to traverse the global configuration hierarchy to create global streams from global baselines. The time to generate the streams depends on the complexity of the configuration hierarchy, the number of local application (DNG, RQM, and DM) baselines in the hierarchy, and the number of versioned artifacts in each.

Because the local applications (DNG, RQM, DM) create their own streams, most of the demand is placed on those servers. If many local streams must be created on the application servers, or if the GCM hierarchy is very deep or complex, consider creating the new streams during a period of light usage.
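To make the fan-out concrete, the sketch below walks a simplified configuration hierarchy and counts how many streams a single "create stream" operation would produce across GCM and the contributing applications. The Config class and example hierarchy are illustrative; this is not the GCM data model or API.

```python
# Hedged sketch: count how many streams one "create stream" would fan out to.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Config:
    name: str
    children: List["Config"] = field(default_factory=list)

def streams_to_create(cfg: Config) -> int:
    # One new stream for this configuration plus one for every configuration
    # reached anywhere below it in the hierarchy.
    return 1 + sum(streams_to_create(child) for child in cfg.children)

system_gc = Config("System GC", [
    Config("DNG requirements"),
    Config("RQM tests"),
    Config("Subsystem GC", [Config("DM model")]),
])
print(streams_to_create(system_gc))  # 5 streams across GCM, DNG, RQM, and DM
```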

This scenario needs further analysis to characterize it and to establish best practices that limit its impact. See work item 389494.

Creating a baseline staging stream from a global stream hierarchy

When you create a baseline staging stream (BSS) for a global configuration hierarchy, you also create a new baseline staging stream for each global stream in the hierarchy. The time to do this depends on the number of global configurations in the hierarchy, how deeply nested they are, and the number of custom properties and links they have.

This scenario needs further analysis to confirm that it can be expensive and to characterize it further. See work item 389497.

Creating a global baseline (or committing a baseline staging stream) from a global stream hierarchy

When you create a global baseline (or commit the baseline staging stream) for a global configuration hierarchy, it requests baselines for each local configuration from the contributing application (DNG, RQM, or DM). It also traverses the global configuration hierarchy, committing each included baseline staging stream and requesting local configuration baselines for those streams as well.

Similar to creating streams from the global baseline, much of the processing is done by the contributing application servers (DNG, RQM, and DM). If there are many local configurations, consider creating the baseline when usage of those servers is light.

This scenario needs further analysis to characterize it and to establish best practices that limit its impact. See work item 389495.


Related topics: Deployment web home
