Rational Requirements Management 4.0.1 sizing and tuning guide

Please be aware: The content of this article has been migrated to the Deployment wiki: Collaborative Lifecycle Management performance report: RRC 4.0.4 release topic page. The content will be maintained and updated in the wiki going forward, so this version of the article might not contain the most up-to-date information.

Going forward, all deployment guidance and best practices will be published in the Deployment wiki rather than as Jazz.net articles.

Introduction

Rational Requirements Management provides a complete requirements management capability to define and capture business needs and manage all requirements throughout a project lifecycle. It fosters better business and development collaboration in an online agile requirements community, involving a wide set of stakeholders that can include customers, analysts, developers, testers, and anyone involved in operations and production.

Rational Requirements Management (RM) is an application of the Collaborative Lifecycle Management (CLM) solution, and provides the requirements management capability of the integrated solution. It provides tools to capture, organize, and collaboratively review, analyze, and report on requirements, especially in relation to their associated development work items and test artifacts.

Rational Requirements Composer (RRC) 4.0.1 and Rational DOORS Next Generation (DOORS NG) are two products that use the Rational RM application. RRC 4.0.1 contains significant new function as well as enhancements in many areas of the product compared to RRC 4.0. The goal of this article is to give guidance on the user load a given hardware configuration and topology can support. The test data and the workload used reflect some of the new function in 4.0.1. Additionally, this document provides tuning recommendations.

This document covers the results of the scalability and performance tests conducted in a RRC-only deployment. Scalability results may vary when introducing other CLM applications.


Disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.


Server topology

The performance tests were conducted in our in-house lab, taking into account the various topologies and most of the complexities involved in a large-scale deployment of the product.

The performance test environment consisted of an LDAP server, a database server, and a proxy server, along with the Requirements Management (RM) and Jazz Team Server (JTS) applications. We refer to this as a distributed topology, with the JTS and RM server applications hosted on separate servers; see Figure 1.

Distributed Topology: Separate JTS/RM/DB Server

Figure 1: Separate JTS and RM Server Configuration


Hardware & software configurations

The following table lists the hardware and software used in the testing. All hardware is 64-bit, and all of the hardware supports hyperthreading, as indicated by the (Threads) detail. The Memory column lists the physical RAM installed. The Java heap size for the RM and JTS servers was 8 GB, for which 16 GB of system RAM would be sufficient. See the section “Server tuning recommendations” for details on Java heap sizing.

Role         CPU(s)        Cores (Threads)   Processor Frequency   Memory   Disk                   OS         Software
IHS          Intel X3480   4 (8)             3.06 GHz              16 GB    1x 10k RPM SAS         RHEL 6.3   IHS 8.0.0.3
RM Server    Intel X5667   4 (8)             3.06 GHz              32 GB    8x 10k RPM SAS RAID0   RHEL 6.2   WAS ND 8.0.0.3
JTS Server   Intel X5667   4 (8)             3.06 GHz              32 GB    2x 10k RPM SAS RAID1   RHEL 6.2   WAS ND 8.0.0.3
DB2 Server   Intel X5667   8 (16)            3.06 GHz              32 GB    2x 10k RPM SAS RAID1   RHEL 6.3   DB2 9.7

Pools, limits, and JVM arguments

The following JVM arguments were set on the RM and JTS servers:

-Xgcpolicy:gencon -Xmx8g -Xms8g -Xmn1g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000

The above JVM arguments are specific recommendations for Linux and Windows. More information, including recommendations for other operating systems, can be found in the section Server tuning recommendations.
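For reference, here is a brief annotation of each argument (the annotations are our summary of these standard IBM J9 JVM options; enter the arguments themselves on a single line, without the comments):

-Xgcpolicy:gencon                    # generational concurrent garbage collection policy
-Xmx8g -Xms8g                        # fixed 8 GB heap (equal minimum and maximum avoids heap resizing)
-Xmn1g                               # 1 GB nursery (young generation) for the gencon policy
-Xcompressedrefs                     # compressed object references to reduce heap footprint
-Xgc:preferredHeapBase=0x100000000   # place the heap above the 4 GB boundary so compressed references leave low memory free for native allocations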

For user loads of 400 and lower, the WebContainer thread pool was set to a minimum of 300 and a maximum of 300 on all systems.
For higher user loads, this number was increased to 0.75 times the user load (for example, 450 threads for 600 users).

The JDBC connection pool size was left at the default value configured in the JTS server (128).

For the IHS systems used during these tests, the following parameters were configured in the server pool regulation section (worker.c) of httpd.conf:

ThreadLimit         25
ServerLimit         100
StartServers        2
MaxClients          2500
MinSpareThreads     25
MaxSpareThreads     500
ThreadsPerChild     25
MaxRequestsPerChild 0

Workload characterization

Rational Performance Tester was used to simulate the workload. Each user performs a random use case from a set of available use cases. Users are added at a rate of one every 7 seconds. Once all users are online, a 10-minute settle time is allowed, after which tests are iterated for 60 minutes. During this time, tests are executed as quickly as possible, with a 2-minute think time between pages for each user.

There were five individual user groups used in all workloads:

  • Non-module use cases – Performs use cases in all areas except Modules. These are use cases also executed in RRC 4.0 testing.
  • Modules web – Focuses on use cases in Modules as they are done from the RRC web UI
  • Modules rich client – Focuses mostly on use cases in Modules as they are done from the DOORS NG rich client
  • Reporting – Generates traceability reports in PDF form
  • Importing – Imports CSV files, Word documents, ReqIF files, image files

Non-module use cases – 40% of users

Use case Description Share of user group
Login Connect to the server using server credentials.  
Create Collection Create collections with 10 artifacts 6.2 %
Create Link Create multiple links on an artifact 6.2 %
Filter Query Execute a query with 100 results and open 3 levels of nested folders. 8.2 %
Create, Modify, Delete Folder Creates a new folder, moves it to a new location then deletes it 1.0 %
Search String Search for a string returning 30 matched items 10.3 %
Search ID Search for a specific requirement by ID in the repository 8.2 %
View Collections View collection with 100 artifacts from the collections folder 12.4 %
Add Comments Open a requirement with 100 comments and raise a comment directed to 8 people on the team. 8.2 %
View Project Dashboard Open the project and its dashboard, both for the first time (uncached) and again after caching. 7.2 %
Create Requirement Create a requirement with a table, an image, and rich text. Also edit an artifact with 100 enumerated attributes and modify one of them. 4.1 %
Show Tree Open folder with artifacts with links and perform a Show tree in the grid 8.2 %
Open Graphical Ux Open business process diagrams, use cases, parts, images, sketches, and storyboard graphical artifacts 5.2 %
Create Open Business Process Diagram Create and open a business process diagram 4.1 %
Create Review Create a review with random artifacts 4.1 %
View Suspect Links Open a random artifact which has suspect links, and view them 6.2 %

Modules web – 42% of users

Use case Description Share of user group
Login Connect to the server using server credentials.  
Save Module A random module is opened from a random project, a change is made, and the module is saved 14.6 %
Switch Module A random module is opened from a random project, the module is closed, and a second module is opened 14.6 %
Add Module Comment A random module is opened from a random project, and a comment is added to this module 19.5 %
Open Large Module A random ‘large’ module is opened from a random project 19.5 %
Display Module History A random module is opened from a random project, and history for this module is opened 9.8 %
Create Artifact in a Module A random module is opened from a random project, and a new artifact is added to this module 12.2 %
Switch Module View A random module is opened from a random project, and a predefined saved view is opened 9.8 %

Modules rich client – 15% of users

Use case Description Share of user group
Login Connect to the server using server credentials.  
Open Project A random project is opened 5 %
Create Artifact A random module is opened from a random project, and a new artifact is added to this module 5 %
View Dashboard The project dashboard for a random project is opened 50 %
Open Collection A random collection is opened from a random project 5 %
Switch Module View A random module is opened from a random project, and a predefined saved view is opened 5 %
Hover Artifact A random artifact from a random project is ‘hovered’ over 5 %
Open Large Module A random ‘large’ module is opened from a random project 5 %
Open & Edit Large Module A random module is opened from a random project, a change is made, and the module is saved 5 %
View Artifact History A random module is opened from a random project, and history for this module is opened 5 %
Create Link A random artifact from a random project is opened, and a link is created to a different random artifact 5 %
Hover on Linked Artifact A random artifact from a random project with existing links is ‘hovered’ over 5 %

Reporting – 2% of users

Use case Description Share of user group
Login Connect to the server using server credentials.  
Create Traceability Report A traceability report is generated from random artifacts 100 %

Importing – 1% of users

Use case Description Share of user group
Login Connect to the server using server credentials.
Import CSV Artifacts are created by importing from CSV 8.3 %
Import, Copy, Paste Artifacts are created by importing from CSV, then the imported artifacts are duplicated using copy & paste 8.3 %
Import 500 CSV (Rich) 500 artifacts are created by importing from CSV, using the DNG Rich Client 8.3 %
Module Word Import A module is created by importing a Word document 8.3 %
Create Child Large Module A random ‘large’ module is opened and a new child artifact is created 8.3 %
Delete Module Artifacts A module is created by importing a Word document, 100 artifacts are deleted from this module 8.3 %
Upload Attachment (Rich) An image file is uploaded using the DNG Rich Client 8.3 %
Upload Attachment (Web) An image file is uploaded using the Web Client 8.3 %
ReqIF Import Update 100 A ReqIF file is imported with 100 artifacts, performing updates where necessary 8.3 %
Import 1000 CSV (Web) 1,000 artifacts are created by importing from CSV using the Web Client 8.3 %
Import 1000 Word 1,000 artifacts are created by importing from a Word document 8.3 %

Data volume

The test data used for these sizing tests represents an extremely large repository compared to most customer environments. We chose this data size to test the limits of the RM application with an unusually large data volume. The data shape details are shown in the table below.

Artifact type             Number
Projects                  75
Modules                   600
Collections               950
Folders                   38,400
Module artifacts          127,500
Non-module artifacts      187,200
Comments                  659,800
Links                     467,700
Reviews                   5,400
Views (public/private)    7,500 / 7,500
Tags (public/private)     22,500 / 3,750
Terms                     9,900
Index size on disk        51 GB

Test scenarios and results

We executed the test workload with different user loads to find the maximum sustainable load for the given repository. The tables and charts below detail the data collected for these load runs. We started with 400 concurrent users and increased the user load in subsequent test runs based on the initial results. The response time remained stable at 500 users but increased significantly at 600 users. Based on these results, we recommend no more than 500 concurrent users with this type of workload and repository data when using RM 4.0.1. This recommendation is specific to the test data and the hardware configuration used in this sizing test.

Response time & standard deviation by user load

User Load   Average Response Time (seconds)   Standard Deviation (seconds)   Pages Per Second
400         1.73                               7.27                           2.5
500         1.37                               4.59                           3.09
600         18.51                              68.71                          3.01

CPU utilization by user load

User Load   RM Server   JTS Server   DB Server   IHS Server
400         5.10%       20.50%       0.60%       3.20%
500         5.80%       24.70%       0.60%       3.20%
600         5.90%       45.60%       0.70%       3.70%

The following charts show CPU utilization and I/O load for all servers executing the tests at the determined sustainable 500 user load. Java heap utilization is shown for RM and JTS.

  • Physical RM 500 User CPU/Disk Chart
  • Physical JTS 500 User CPU/Disk Chart
  • Physical DB 500 User CPU/Disk Chart
  • Physical IHS 500 User CPU/Disk Chart
  • Physical RM 500 User GC Chart
  • Physical JTS 500 User GC Chart

Recommendations for optimal performance

Application server

IBM® WebSphere® Application Server version 8.0.0.3 or later

Database server

Aside from disk storage, the database server hardware does not significantly impact RM system scalability; the same hardware recommendation applies to small and large scale deployments. See the disks and disk space discussion below for details on storage recommendations.

Network connectivity

When choosing network connectivity in dual-tier configurations, minimize latency between the application server and the database server (no more than 1–2 ms) and locate the servers on the same subnet. When using external storage, minimize connectivity latency as well (the optimal configuration is a Fibre Channel connection).

Disks

Based on the performance test results, we found that an increase in repository size, combined with an increase in concurrent users, places significantly more load on the disk storage system. For large-scale deployments, therefore, consider a SAN or NAS solution with the storage connected directly through a Fibre Channel (FC) connection to avoid latency between the storage and the server. This configuration offloads all disk I/O from the server to the storage subsystem while providing a fully fault-tolerant disk subsystem with instantaneous snapshot backup, high availability, and disaster recovery capabilities.

Disk space usage

The disk space usage for the Rational Requirements Management server is composed of the disk space used for the Rational Requirements Management index data and the disk space needed for the database instance.

The indexed data is located on the same machine as the JTS application server, so providing adequate disk space there is important. For example, for our extremely large test repository with 100 million triples, the index size on disk was 51 GB.

The database instance must also be located on a server with adequate disk space. For example, the database instance that housed the extremely large test repository took up 52 GB of disk space.


Server tuning recommendations

The following recommendations should be noted for optimal performance:

  1. Use a 64-bit server with at least 12 GB of RAM and quad-core CPUs
  2. Set the available TCP/IP ports to the maximum allowed by the particular hardware/OS
  3. Increase the thread pool size of the application container from the default
  4. Increase the JVM heap size to 8 GB, but never set it equal, or close to equal, to the total OS memory

Available TCP/IP ports

The default number of TCP/IP ports available on AIX and Linux operating systems is too low and needs to be increased. Windows has a higher default limit, but it may still be too low for large concurrent user loads. Use the instructions below to increase these limits.

AIX/Linux: ulimit -n 65000
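Note that ulimit -n raises the open file descriptor limit (which bounds the number of sockets) for the current shell session only. As a sketch of one common way to make the change persistent on Linux (an assumption, not part of the original guidance), add the following to /etc/security/limits.conf and log in again:

* soft nofile 65000
* hard nofile 65000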

Windows 2003 Server: Open the registry editor. Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, create a new DWORD value named MaxUserPort and set its value to 65000. Restart the computer.
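As a sketch, the same change can be captured in a .reg file for import with the registry editor (0xFDE8 is 65000 in hexadecimal); verify it against your environment before applying:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fde8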

Windows 2008 Server: Follow the instructions in http://support.microsoft.com/kb/929851 to change the dynamic port range. Start port: 2000; number of ports: 63535.
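With the values above, the netsh command described in that article would look like this (run from an elevated command prompt):

netsh int ipv4 set dynamicport tcp start=2000 num=63535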

Thread pool size

The WebSphere Application Server WebContainer thread pool size needs to be increased from the default of 50 to 0.75 times the expected user load. For 400 concurrent users, the thread pool maximum should be 300.
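The pool can be changed in the WebSphere administrative console under the server's Thread pools settings. Alternatively, here is a minimal wsadmin (Jython) sketch, assuming a server named server1 on node node1 (hypothetical names; adjust to your cell):

# Find the WebContainer thread pool for the server and fix minimum/maximum at 300
server = AdminConfig.getid('/Node:node1/Server:server1/')
for tp in AdminConfig.list('ThreadPool', server).splitlines():
    if AdminConfig.showAttribute(tp, 'name') == 'WebContainer':
        AdminConfig.modify(tp, [['minimumSize', '300'], ['maximumSize', '300']])
AdminConfig.save()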

JVM heap size and other JVM arguments

The WebSphere Application Server JVM maximum heap size should be set to at least 6 GB if the machine has at least 12 GB of RAM. If the server machine has only 8 GB of RAM, a maximum heap size of no more than 5 GB is advisable. Never set the maximum heap size to more than 70% of the available machine memory, or performance will suffer (for example, no more than about 22 GB on a 32 GB machine).

A complete list of recommended JVM arguments by platform can be found in the CLM 4.0.1 documentation.

Additional suggestions

In a three-server topology (JTS and RM servers on separate hardware), two settings in the fronting.properties file need to be adjusted for loads higher than 200 concurrent users, as shown in the example after this list:

  • com.ibm.rdm.fronting.server.connection.route.max – should equal the number of concurrent users
  • com.ibm.rdm.fronting.server.connection.max – should be set to twice that value
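For example, for a 500-user load, the entries in fronting.properties would look like this (illustrative values following the rule above):

com.ibm.rdm.fronting.server.connection.route.max=500
com.ibm.rdm.fronting.server.connection.max=1000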

When using a proxy/reverse proxy server the maximum allowed connections on the proxy server may have to be increased based on the concurrent user load. Refer to the IBM HTTP Server tuning recommendation for more information.

Database optimization

To ensure optimal performance, the database itself must be kept fully optimized. With both DB2 and Oracle, you can gather statistics that allow the database to analyze the shape of the schema and optimize access to its contents. Databases generally manage statistics automatically (in a scheduled overnight operation, for example); however, to ensure that the database is fully optimized, you can run the statistics manually as follows.

DB2

DB2 REORGCHK UPDATE STATISTICS ON TABLE ALL
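A minimal sketch of running this from the DB2 command line processor, assuming a database named JAZZ (hypothetical name; substitute your own):

db2 connect to JAZZ
db2 "REORGCHK UPDATE STATISTICS ON TABLE ALL"
db2 connect reset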

Oracle

EXEC DBMS_STATS.gather_schema_stats('JAZZDBUSER');

