Collaborative Lifecycle Management 2011 Sizing Guide

Enterprise scalability of Collaborative Lifecycle Management (CLM) 2011 poses the dual challenge of expanded degrees of flexibility and deployment complexity. Using a sophisticated performance test environment and a full application lifecycle workload, we simulated a realistic customer workload to validate the CLM 2011 scalability requirements. Based on this workload testing, we are confident that the solution can support over 2,000 active users in an enterprise environment. Server performance data collected during these test runs formed the basis for the results expressed in this sizing document.

CLM Test Environment

Our in-house IBM Jazz development environment served as the workload model for generating the Rational Team Concert (RTC) test workload. At the time of this writing, the CLM 2011 performance test environment generates load levels of over 2,000 active developers operating within 10 active streams (200 or more active users per stream). During a typical scalability run, users across all roles generate over 3,600 work items per hour (more than 86,000 work items over 24 hours). Builds are simulated by loading source code from the RTC server at a rate equivalent to 95 builds per hour (roughly 2,300 builds over 24 hours). In addition, quality professional users run production-like test scenarios for up to 200 active users, and analyst users run requirements definition and management scenarios for up to 100 active users.

With the full CLM solution configured, there are additional scenarios that check traceability and links between assets in different repositories and answer key questions about test coverage and about requirements impacted by defects. All of these scenarios require reading data from the data warehouse or from multiple live data repositories. These scenarios are also factored into the CLM performance workload, as is the expectation that dashboards may (and usually will) contain viewlets with data from different repositories.

Workload Characterization Summary

The CLM workload is broken down by the various user roles occurring across the lifecycle of a software development project. The roles defined — developers, quality professionals, and analysts — mirror the designed role-based licensing and user workflows that implement project tasks using the CLM platform. Within each role, various representative user workflows or tasks were broken out as a percentage of overall work done. To the highest degree possible, these percentages were gleaned from currently active production usage of the CLM platform (or individual instances of the version 2.x applications) both within IBM and by external customers. More details on the exact breakdown for workloads can be found later in this document.

There are really two distinct dimensions to the test environment sizing: the number of assets and the number and roles of users actively interacting with those assets. As your user community grows in size, number of hosted projects, and length of time using the CLM platform, the supporting hardware resources need to grow as well. By testing this extreme workload on a very large asset base, we show how well the CLM 2011 platform scales. Note that the user sizing relates to the actual number of active users, which is a small fraction of the total number of registered users.

Measurements were taken in either a single-server setup or a distributed-server setup:

  • Single server setup uses a single application server and represents a development shop with many active people all concurrently accessing the CLM platform. In this workload test, the role mixture was varied from developer-centric to heavier mixed-role loads, which placed a disproportionately heavy load on the back-end database server. This ensures that, regardless of the role mixture, the single application server should be able to handle the user workload.
    Single server solution: RRC, RQM, RTC, and JTS are on the same server, with a shared JTS and a dual-tier database.

    Deployment | RTC Developer | RQM Quality Manager | RRC Analyst
    Large Scale Enterprise Configuration, CLM | no data yet | no data yet | no data yet
    Large Scale Enterprise Configuration, RTC only | 2000+ active users, 1 million artifacts, 60 GB database | 0 | 0
    Mid Scale Enterprise Configuration, CLM | 400 active users, 1 million artifacts, 60 GB database | 50 active users, 100K artifacts, 3 GB database | 50 active users, 50K artifacts
    Mid Scale Enterprise Configuration, RTC only | 700 active users, 1 million artifacts, 60 GB database | 0 | 0
    Small Scale Enterprise Configuration, CLM | not supported | not supported | not supported
    Small Scale Enterprise Configuration, RTC only (single tier with applications and database on the same server) | 100 active users, 1 million artifacts, 60 GB database | 0 | 0

    There is no sizing available yet for a single-server CLM solution on large-scale hardware; use the distributed server solution on mid-scale hardware instead. Note that all RTC-only scalability testing was done with a 50K-artifact repository. Prior releases and the CLM 2011 combined workloads (including up to 1000 RTC users) were tested against the large RTC repository, covering that aspect of this release’s scalability claims.

  • Distributed server setup uses a distributed configuration representing a whole division or even an enterprise using the CLM solution as the platform for their software development shop. While there is no limit or constraint on the number of separate servers put in place by an enterprise (since there is no longer a server license fee), the scalability of a single CLM platform, consisting of a single Jazz Team Server (JTS) and linked Change and Configuration Management (CCM), Quality Management (QM), and Requirements Management (RM) applications, can be expanded to handle most software development shops or project sizes. Our measurements do not in any way reflect a hard limit to the size of the user community supported by a single JTS; they are merely representative of a fairly large enterprise customer. By engaging software sizing specialists in our Techline organization, larger configurations can be derived that will handle much larger user communities on a single JTS. These configurations will be documented in a future update to this document.

    We currently have no recommendations for 32-bit servers in a distributed topology, as we have so far tested performance only on 64-bit servers.
    Distributed server solution: RRC, RQM, RTC, and JTS are on different servers, with a shared JTS and a dual-tier database.

    Deployment | RTC Developer | RQM Quality Manager | RRC Analyst
    Large Scale Enterprise Configuration, CLM | 2000+ active users, 1 million artifacts, 60 GB database | 1000 active users, 500K artifacts, 15 GB database | 200 active users, 100K artifacts
    Mid Scale Enterprise Configuration, CLM | 700 active users, 1 million artifacts, 60 GB database | 500 active users, 500K artifacts, 15 GB database | 100 active users, 100K artifacts

    Customers may confidently choose a lower-cost single application server configuration with the option of moving one or more of the CLM applications to additional servers as the team grows. This implies that you start with (or add) a front-end web server that forwards different application roots to different application servers, that is, you utilize a reverse proxy server (a sketch follows below). Alternatively, you could start with a single application server using four hostname aliases for the various Public URIs for each application (RM, CCM, QM, and JTS), that is, you configure virtual hosts for the applications. With a little planning, you can adjust your hardware configurations while keeping the Public URIs stable, which is currently a requirement with the CLM 2011 applications.
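    As a rough illustration only (not a tested or official configuration), a front-end Apache-based web server such as IBM HTTP Server could forward each application root to its own back-end application server along the following lines. The host names and ports are placeholders, and the context roots shown follow common CLM naming; substitute the context roots configured in your own deployment.

        # Hypothetical reverse proxy sketch: the Public URIs stay on the front-end host,
        # while each application root is forwarded to a separate back-end server.
        ProxyPreserveHost On
        SSLProxyEngine On
        ProxyPass        /jts https://jts.internal.example.com:9443/jts
        ProxyPassReverse /jts https://jts.internal.example.com:9443/jts
        ProxyPass        /ccm https://ccm.internal.example.com:9443/ccm
        ProxyPassReverse /ccm https://ccm.internal.example.com:9443/ccm
        ProxyPass        /qm  https://qm.internal.example.com:9443/qm
        ProxyPassReverse /qm  https://qm.internal.example.com:9443/qm
        ProxyPass        /rm  https://rm.internal.example.com:9443/rm
        ProxyPassReverse /rm  https://rm.internal.example.com:9443/rm

    Because the Public URIs never change, an application can later be moved to different hardware by updating only the corresponding back-end address in the proxy configuration.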

CLM Reference Configurations and Test Results

Two primary reference configurations were tested for scalability: the single application server and the distributed application server environments. Block diagrams of the two configurations are shown below, and each configuration was tested with different user loads. The block diagrams are followed by tables of test results complete with hardware specifications, utilization, and response time numbers. These results can be used as a starting point for scalability expectations for these products or for the entire CLM 2011 solution. If further scalability advice is needed, have your IBM sales team contact Techline for specific advice based on your workload and hardware constraints.


Configuration 3: CLM 3.0 on Single Server (64-bit WAS and 64-bit DB2)

Users/Data Volume: 2000 RTC users on a 50K-artifact repository. Average page response time: 1.2 seconds.

Role | Manufacturer | Model | HW Architecture | Number of Processors | Processor Speed | Memory | Total Average CPU Usage | Total Average Memory Usage
CLM – RTC Server | IBM | System x3250 M3-4251/62Y | Intel Xeon X5560 | 2 | 2.8 GHz | 18 GB | 44.8% | 7.6 GB



Configuration 4a: CLM 3.0 on Separate Servers (64-bit WAS and 64-bit DB2)

Users/Data Volume: 700 RTC, 500 RQM, and 100 RRC users; repositories of 1 million (RTC), 200K (RQM), and 100K (RRC) artifacts. Average page response time: 2 seconds.

Role | Manufacturer | Model | HW Architecture | Number of Processors | Processor Speed | Memory | Total Average CPU Usage | Total Average Memory Usage
CLM – RTC Server | IBM | 8203-E4A | Power6 | 4 | 4 GHz | 16 GB | 40% | 7 GB
CLM – RQM Server | IBM | 8203-E4A | Power6 | 4 | 4 GHz | 16 GB | 40% | 7 GB
CLM – RRC Server | IBM | 8840 | Intel Xeon | 2 | 3.4 GHz | 8 GB | 30% | 7 GB
CLM – JTS Server | IBM | 8203-E4A | Power6 | 4 | 4 GHz | 16 GB | 50% | 7 GB
CLM – DB Server | IBM | 8840-MC1 | Intel Xeon | 2 | 2.8 GHz | 8 GB | 50% | 5 GB

Users/Data Volume: 1000 RQM users on a 500K-artifact repository. Average page response time: 2.6 seconds.

Role | Manufacturer | Model | HW Architecture | Number of Processors | Processor Speed | Memory | Total Average CPU Usage | Total Average Memory Usage
RQM – Server | IBM | x3550 7978AC1 | Intel Xeon EM64T | 2 | 2.9 GHz | 16 GB | 59% | 4.9 GB
RQM – JTS Server | IBM | xSeries 336 8837Z11 | Intel Xeon 7520 | 2 | 3.6 GHz | 4 GB | 9.2% | 1.7 GB
RQM – DB Server | IBM | x3550 7978AC1 | Intel Xeon EM64T | 2 | 2.9 GHz | 16 GB | 8% | 2.7 GB

Users/Data Volume: 200 RRC users on a 100K-artifact repository. Average page response time: 2.4 seconds.

Role | Manufacturer | Model | HW Architecture | Number of Processors | Processor Speed | Memory | Total Average CPU Usage | Total Average Memory Usage
RRC – Server | IBM | x3550 7978AC1 | Intel Xeon | 2 | 3 GHz | 8 GB | 54% | 2.9 GB
RRC – DB Server | IBM | x3550 9196A49 | Intel Core E6750 | 2 | 2.6 GHz | 4 GB | 7.1% | 2.6 GB
RRC – JTS Server | IBM | x3550 6234E81 | Intel E8400 | 2 | 3 GHz | 8 GB | 16% | 2.8 GB

CLM Hardware Sizing Recommendations

Using the workload described above, the tables below were compiled from the various hardware and software configurations tested. When considering sizing options, CLM 2011 supports several multi-tier configurations depending on your deployment size. In our test environment, with at least two servers based on one of the latest x64 hardware chipsets (for example, Xeon 5500 or better), we demonstrated the ability to handle an enterprise load with a single server for the applications and a second server for the database.

Here are the current sizing recommendations for different enterprise deployments. These recommendations will be refined as more complete performance testing is done and the Techline hardware scaling model becomes available. As noted above, we currently have no recommendations for 32-bit servers in a distributed topology, as we have so far tested performance only on 64-bit servers. This information will also be provided in a future version of this document.

Small Scale Enterprise Configuration – Dual-tier
Servers: 2 IBM System x3250, dual-CPU Intel Xeon 3060 2.4 GHz or higher, 64-bit (one for the web application server and one for the database server)
Memory: 4 GB or higher for each server
Disk: High-performance SAS disks (15K RPM); RAID, SAN, or NAS direct-connected disk subsystem
Operating System: Red Hat Enterprise Linux 5.3 or Windows Server 2003 (or 2008)
Web Application Server: WAS 7.0.0.11 or higher, or Tomcat 5.5
Database: DB2 9.7 FP2, SQL Server 2008, or Oracle 11g R2

Mid Scale Enterprise Configuration – Dual-tier
Servers: 2 IBM System x3550, dual-CPU Intel Xeon 5160 2.4 GHz or higher, 64-bit (one for the web application server and one for the database server)
Memory: 8 GB or higher for each server
Disk: High-performance SAS disks (15K RPM); RAID, SAN, or NAS direct-connected disk subsystem
Operating System: Red Hat Enterprise Linux 5.3 or Windows Server 2003 (or 2008)
Web Application Server: WAS 7.0.0.11 or higher (Tomcat 5.5 not recommended)
Database: DB2 9.7 FP2, SQL Server 2008, or Oracle 11g R2

Large Scale Enterprise Configuration – Dual-tier
Servers: 2-5 IBM systems, dual-CPU Intel Xeon 5500 2.4 GHz or higher, 64-bit (one or more for the application servers and one for the database server)
Memory: 12 GB or higher for each server (2-system configuration) or 4-8 GB for each server (4- or 5-system configuration)
Disk: SAN or NAS/DAS disk subsystem; we recommend a NetApp 3050C with a Fibre Channel connection
Operating System: Red Hat Enterprise Linux 5.3 or Windows Server 2003 (or 2008)
Web Application Server: WAS 7.0.0.11 or higher (Tomcat 5.5 not recommended)
Database: DB2 9.7 FP2, SQL Server 2008, or Oracle 11g R2

Tomcat and WebSphere Application Server (WAS) Configurations

While the Jazz Server comes pre-configured with Tomcat out of the box, which is fine for evaluation and smaller departmental deployments, we recommend WebSphere Application Server (WAS) for large-scale and enterprise deployments.

Tomcat does include a simple, file-based user realm for non-LDAP setups and provides minimal support for Single Sign-On (SSO). The SSO experience is limited to one host (that is, all applications deployed on the same (virtual) host). External SSO solutions are available, but we have not yet confirmed that they are supported for Jazz deployments on Tomcat.
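As a small illustration of the file-based realm (a sketch only; the user names are hypothetical, and the Jazz role names should be verified against your installation), users and roles for a non-LDAP Tomcat setup are declared in conf/tomcat-users.xml along these lines:

    <tomcat-users>
      <!-- Illustrative entries; JazzAdmins and JazzUsers are the assumed Jazz role names -->
      <role rolename="JazzAdmins"/>
      <role rolename="JazzUsers"/>
      <user username="clmadmin" password="change_me" roles="JazzAdmins,JazzUsers"/>
      <user username="builder"  password="change_me" roles="JazzUsers"/>
    </tomcat-users>

Because this realm lives in a flat file on one host, it does not extend to the multi-server, LDAP-backed deployments discussed below, which is one reason we point enterprise deployments toward WAS.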

WebSphere Application Server (WAS) provides the flexibility to create multiple profiles, has a built-in configurable reverse proxy, and fully supports Single Sign-On (SSO), whether the applications are all on one server or distributed among multiple machines for better scalability. WAS also offers an out-of-the-box administration user interface for installing, stopping, and (re)starting applications, configuring Java virtual machine properties, security, and so on. Customers have told us that they take advantage of the ability to quickly create new WAS instances to deploy a new Jazz server that uses their existing security configuration. WAS also provides more flexible LDAP support, role-to-group mapping, and monitoring through Tivoli ITCAM integration. The idle standby failover setup procedures are documented only for WAS.

For more information on the benefits of WebSphere, please go to the following link: http://www-01.ibm.com/software/webservers/appserv/was/features/

Network Connectivity

There are two aspects to network connectivity: connectivity amongst the Jazz application and database servers within the CLM environment and connectivity between the CLM servers and the end users.

For dual-tier configurations of the CLM environment, the recommendation is to minimize latency between the application server and the database server (no more than 1-2 ms) and to locate the servers on the same subnet. When using external storage solutions, the recommendation is to minimize connectivity latencies, with the optimal configuration being a Fibre Channel (optical) connection.
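As a quick sanity check (our suggestion, not part of the tested configurations), the round-trip latency can be sampled from the application server host with a standard ping of the database server; the host name below is a placeholder:

    # Send 20 echo requests; the average round-trip time in the summary line
    # should be in the 1-2 ms range for a well-placed dual-tier deployment.
    ping -c 20 dbserver.example.com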

As for network connectivity with the end users of the CLM platform, conditions of slow WAN performance combined with high-volume build or SCM usage can be vastly improved by implementing a content caching proxy server scheme. This configuration is described in the content caching proxy server article on jazz.net.

Disks

There are several considerations when choosing disk subsystems, including continued availability in the event of a disk crash, prevention of repository data loss, and raw throughput and low latency of disk operations. In our small and medium-sized configurations for the CLM 2011 testing, we chose a RAID 1E (mirrored disk) configuration, which provides complete data redundancy and continued operation in the event of a single disk failure. This configuration is not cost-prohibitive for customers without large SAN or NAS solutions available, but it still provides reasonable operational characteristics. For larger configurations, use of a NAS solution with direct Fibre Channel connectivity is recommended but not required.

As background, during our original 2000-user scalability runs for RTC 2.0, we noticed that our 1 TB SATA single disk drive was under a tremendous amount of load and physical stress. We knew that this single-drive configuration was clearly a bottleneck, but we needed to collect data to prove it. We measured the load on both a single drive and on Network Attached Storage (NAS)/Storage Area Network (SAN) based drive solutions, as well as overall system behavior. Our measurements were made using an IBM AIX tool called nmon to characterize overall system behavior.
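For readers who want to capture similar data, a typical nmon recording session (a sketch; the interval and snapshot count are assumptions to adjust to the length of your own runs) writes periodic snapshots to a file for later analysis:

    # Record one snapshot every 30 seconds, 2880 times (about 24 hours), to an .nmon file
    nmon -f -s 30 -c 2880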

During our RTC 3.0 performance test runs, we could see that the single-disk SATA solution was simply not handling the user load for deployments of more than 500 users. It was clear that we needed a more scalable enterprise storage solution, as the single drive and controller had no idle time. We considered an internal RAID configuration, a SAN, and the enterprise storage solutions we had in-house; our choice was the IBM N5500 (equivalent to the IBM OEM NetApp 3050C). We also knew that our customers would appreciate the fact that we certify and directly support this solution. It provides numerous benefits, such as high availability and disaster recovery, as well as the ability to use it as a typical network attached storage (NAS) or direct attached storage (DAS) solution.

In our RTC 3.0 testing, our goal was to provide optimal disk performance without the overhead of NAS network connectivity. With the storage directly connected via a Fibre Channel (FC) connection, latency between the storage and the server is negligible, even if the devices are located on separate floors. Using this configuration offloaded all disk I/O from the server to the NAS while providing a fully fault-tolerant disk subsystem with instantaneous snapshot backup capabilities. While the NAS is one solution, the Intel 5500-based system we tested had motherboard RAID support for up to six disk drives. If you consider that route, make sure that you are using a fairly new RAID controller that can provide very fast RAID 1E and/or RAID 5 performance with hot disk swapping capabilities. Alternative enterprise storage solutions also provide real-time monitoring, hot-swap drive replacement, and real-time failover with zero downtime.

In summary, our recommendation for storage for large scale deployments is to consider a SAN or NAS solution. Please review the IBM NAS solutions when making your decision as they provide both NAS and DAS connectivity.

Multiple repositories

As suggested by the tested topologies, each application server is connected to a Jazz Team Server, which provides central administration and other common services for the registered applications. Each “cluster” of applications that work together should share the same JTS. This provides many benefits, such as lifecycle project creation and common user and license administration. With all of these applications sharing a common data warehouse, you also get cross-repository reporting through the RRDI reporting server, which can generate reports from the data warehouse as well as directly from the various repositories in the “cluster”. Once you break out some of these applications and their repositories to be served by a different JTS, reporting across the “clusters” requires the Rational Insight product. Basic linking of artifacts is also not as easy to set up, because servers from the different clusters have to be “friended” in the old CLM 2010 way for their projects to link assets.

Obviously, fewer servers mean fewer machines and application server instances to configure, monitor, and pay for, which is a big factor in the total cost of ownership. With the read permission and scalability support in CLM 2011, you can now host many projects on the same server. But where does this end? Should all 30,000 developers, testers, and analysts in IBM end up running on one giant server in the cloud? We don’t think so.

There are other reasons why it makes sense to plan for multiple servers. Scheduling maintenance windows is a very practical issue: a single server supporting intense round-the-globe development has no window for maintenance, whereas deploying a few regional servers to host projects where they are centered gives each of those servers a reasonable maintenance window when needed. The sheer size of a single repository is another consideration. We already struggle with the time it takes to ship backups around and perform migrations of our 40 GB repository on jazz.net; extrapolating this to a repository 300 times as large exacerbates the problem. There are things we can do to mitigate this: SAN vendors provide some pretty amazing backup technology, and we are working on minimizing future migration impacts. What do you lose? Once you decide that it makes more sense to have multiple repositories deployed in your organization, there are many things that will continue to work and others that become more difficult.

Here is a quick summary:

  • You won’t be able to link work items as easily across repositories. Instead, you will have to create a “URL Reference” between servers providing the same role, such as two CCM servers.
  • You won’t be able to share source control between CCM repositories in streams and repository workspaces.
  • You can continue to build by fetching source from multiple repositories.
  • You can continue to use the same LDAP server to control authentication on your servers.

Artifact Sizing Guidelines

Here are recommendations on artifact sizing that will help keep repository performance optimal as data sizes increase significantly.

Repository: There are no limits to the number of projects, users, or iteration structures in the server.

Projects: There are no limits to size of projects or number of contained assets.

Work Items: There is no upper limit to the number of work items in the repository.

Plans: You can have as many plans as needed in the repository and they can also be deleted.

Test Assets: You can have as many test plans, suites, cases, scripts, execution records, environments, results, and schedules (automations) as you want in a project.

Requirements Assets: You can have as many requirements of any type, collections, link types, and links among your assets as you want in a project.

Reports: The bulk of the data warehouse size is a function of the number of work items. The only performance limitation we have seen is based on the number of streams that are selected as part of the “Stream Size” reports. This is configurable, and we recommend that you configure only your integration streams to be collected. By default, no streams are configured; you can use the “Administer SCM Snapshot” page when you are logged in as the administrator in the web user interface.

Build: There is no limit on the number of build results as such, but there are guidelines on the size of the build results and the pruning schedule. When you reach enterprise scale and have hundreds of build results a day, ensure that continuous builds are being pruned (for example, check the pruning option in the build definition editor). In addition, instead of uploading gigabytes of build output, we keep the larger content on the build machines and create links from the build results to that location. This avoids putting too much transient data into the repository.

Source Control: We recommend at most 50K files and folders in a single component; if you have more files, split them into multiple components. We encourage individual users to keep fewer than 300 change sets suspended. While there is no hard limit, a large suspended set tends to slow down operations involving it (suspend, resume, and discard) and adds some cost when refreshing or changing collaborations. We have tested the user experience of working with a large number of components in workspaces and streams: there are no known issues with having hundreds of components in a stream or workspace, and we have tested the user experience with 500 components in a collaboration.

Dashboards: Dashboard configuration data can hold up to 32 KB. This affects viewlets that can be edited directly by users, such as Static HTML, Headlines, and Bookmarks, as well as any newly written viewlets that use the dashboard mementos to store their data.

Workload Details for Scalability Testing

The following tables give a detailed description of the workload used to test the scalability of the CLM 2011 solution.

Purpose and Content

This workload creates a realistic, dynamic user load. It contains standalone requirements management, change and configuration management, and quality management scenarios, as well as combined scenarios involving integrations and reporting across the applications. Users periodically log off and on, and then randomly choose a new use case. This creates a dynamic user load with behaviors similar to live servers.

RQM Role Details

User Role | Percentage of QM total | Role Details
QE Manager 9% Creates test plans, reviews test plans, runs reports related to testing progress, reviews various test assets.
Test Lead 18% Creates test cases, edits test plans, browses test assets, runs reports, modifies test environments, executes tests.
Tester 68% Creates test cases, test scripts, submits and modifies defects, submits requests, edits test scripts and executes tests.
Dashboard Viewer 5% User that logs in, checks the dashboard, then logs out. Note: this role is for performance test purposes only [not an actual product role].
Integrations 9% Integrations which start from the RQM UI and integrate with RTC or RRC.
RRDI Reports 1% RQM related reports pulled from RRDI

RQM Scenario Details

User Role | Percentage of role total | Scenario Details
QE Manager 53% Browse Test Plans and Test Cases: user browses assets by viewing Test Plans, configuring the View Builder for a name search, clicking Next a few times (server-side paging feature), opening the test plan found, reviewing various sections, then closing it. The user then searches for a test case by name, opens the test case found, reviews various sections, then closes it.
47% Defect Search: user searches for a specific defect by number, reviews the defect (pause), then closes it.
Test Lead 55% Test Script Review: user searches for a test script by name, opens it, reviews it, then closes it.
15% Test Case Creation: user creates a test case by opening the Create Test Case page, entering data for a new test case, and then saving the test case.
30% Test Plan Editing: user lists all test plans; from the query result, a test plan is opened for editing, a test case is added to the test plan, a few other sections of the test plan are edited, and then the test plan is saved.
Tester 50% Test Execution: user selects test execution records by name, selects a TER for execution, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves.
20% Test Execution Record Browsing: user browses TERs by name, then selects a TER and opens the most recent results.
5% Test Script Creation: user creates a test script by opening the Create Test Script page, entering data for a new test script, and then saving the test script.
11% Test Script Editing: user selects a test script by name; the test script is then opened for editing, modified, and saved.
14% Defect Creation: user creates a defect by opening the Create Defect page, entering data for a new defect, and then saving the defect.
Dashboard Viewer 100% View Dashboard: user logs in, views dashboard, then logs out. This user provides some login/logout behavior to the workload and will be controlled in RPT to occur 4 times per hour per user.
Integrations 14% Test Plan Requirements Collection links to new RTC Task: user logs in, creates a test plan, goes to the requirement collection section, and links a new RTC task.
14% Test Case links to new RRC Requirement: user logs in, creates a test case, goes to the requirement links section, and links a new RRC requirement.
14% Test Case links to RTC Development Item: user logs in, creates a test case, goes to the development links section, and links a new RTC defect.
14% Test Execution Record links to new RTC Defect: user logs in, creates a test case, runs that test case (creating an execution record), and links a new RTC Defect.
14% Test Script links to new RTC Task: user logs in, creates a test script, adds one step to the test script, and links a new RTC task.
15% Test Plan links to RRC Requirements Collection: user logs in, creates a test plan, goes to the requirement collections section, and links a new RRC requirement collection.
15% Test Plan links to RTC Development Release Plan: user logs in, creates a test plan, goes to the development plan section, and links a development plan.
RRDI Reports 100% RQM report viewed from RRDI: user logs in, finds an RQM report, and views the report.

RTC Role Details

User Role | Percentage of RTC total | Role Details
Developer 90% Creates defects and tasks; edits and queries defects
Integrations 9% Integrations which start from the RTC UI and integrate with RQM or RRC.
RRDI Reports 1% RTC related reports pulled from RRDI
Developer/SCM 175 users Special developer role that performs SCM activity
Build Project Manager 10 users Build project manager who updates, publishes, and creates builds

RTC Scenario Details

User Role | Percentage of role total | Scenario Details
Developer 10% Create Defect: user logs in and creates a new defect.
50% Query Defects: user logs in, views the shared queries, runs a query, and views one defect.
30% Update Defect: user logs in, searches for a specific defect, edits the defect, and saves the defect.
10% Create Task: user logs in and creates a new task.
Integrations 50% Development Release Plan links to RRC Requirements Collection: user logs in, creates a dev release plan, goes to the requirement collection section, and links a new requirement collection.
50% Development Release Plan links to RQM Quality Plan: user logs in, creates a dev release plan, and links a new RQM Quality Plan.
RRDI Reports 100% RTC report viewed from RRDI: user logs in, finds an RTC report, and views the report.

RTC Build and SCM Details

User Role | Approximate Number of Simulated Users | Workload Details | Role Details
Developer/SCM | 175 | 4 daemons running as fast as possible on 1 agent machine | Files change sets, accepts and delivers change sets, accepts changes, refreshes pending changes, checks history, delivers baseline changes, and performs other SCM-related activities.
Build Project Manager | 10 | 10 virtual users/threads at 2 builds per hour on 1 agent machine | Ant build that runs standalone without the Jazz Build Engine. Each build has a separate build definition/engine. Notifies Jazz of build start, loads the workspace, publishes links, pretends the build is compiling, publishes artifacts, pretends the build is testing, and notifies Jazz of build completion.
Note: This special workload requires a different infrastructure than our standard tests. The work simulates a specific number of users and does not factor into the percentages of the load generated by our RPT/web-based testing. The infrastructure and scripts for SCM and build activity are provided by the RTC SVT team.

RRC Role Details

User Role | Percentage of RRC total | Role Details
Analyst 90% Creates, views, edits, and manages requirement artifacts.
Integrations 9% Integrations which start from the Analyst UI and integrate with RTC or RQM.
RRDI Reports 1% RRC related reports pulled from RRDI

RRC Scenario Details

User Role | Percentage of role total | Scenario Details
Analyst 5% Create Requirement: user logs in and creates a new Requirement.
5% Create Requirement with Requirement: user logs in, creates a new requirement, during creation they link another requirement to it.
5% Create Link: user logs in, opens an existing artifact, and links two other artifacts to it.
5% Review Approve: user logs in, opens the review panel, adds 20 reviewers and 10 artifacts, saves the review, starts the review, opens the existing review, abstains on the first artifact, approves the second artifact, and disapproves the remaining.
30% Query Folder: user logs in, opens a project, and queries a folder by saved filter.
5% Create Snapshot: user logs in, opens a project, and creates a snapshot.
2% Upload and Open 12 MB Artifact: user logs in, uploads a 12 MB artifact, and opens the artifact once the upload is complete.
2% Move Assets: user logs in, opens a folder, moves 50 assets to another folder, moves the 50 assets back.
5% Create Collection: user logs in and creates a new collection.
20% View Collection: user logs in and opens and views one collection.
16% Filter Collection: user logs in, opens the collection tab, and filters by name, user and attribute.
Integrations 25% Individual Artifacts linked to RQM Test Case: user logs in, creates an artifact, and links a new RQM test case.
25% Individual Artifacts linked to RTC Story: user logs in, creates an artifact, and links a new RTC story.
25% Requirements Collection links to RTC Development Release Plan: user logs in, creates a requirement collection, and links an existing RTC dev release plan.
25% Requirements Collection links to RQM Quality Plan: user logs in, creates a requirement collection, and links a new RQM quality plan.
RRDI Reports 100% RRC report viewed from RRDI: user logs in, finds an RRC report, and views the report.
