IBM Rational Requirements Composer 3.0.1 server sizing guide

Introduction

Rational Requirements Composer (RRC) is the requirements management component of the Rational solution for Collaborative Lifecycle Management (CLM). The product enables teams to define, manage, and report on requirements throughout a development project lifecycle. Compared to RRC 2.0, RRC 3.0.1 contains enhancements in almost all areas of the product.

This document covers the results of the scalability and performance tests conducted in an RRC-only deployment. For information on integrated CLM deployments, refer to the Collaborative Lifecycle Management 2011 Sizing Guide on jazz.net. We cover our findings through comparisons between different topologies, databases, web application servers, and other configuration variations. Additionally, this document provides recommendations for optimal performance.


Disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.


Server Topology

The performance tests were conducted in our in-house lab, taking into account the various topologies and most of the complexities involved in a large-scale deployment of the product.

The performance test environment consisted of an LDAP server, a token license server, a database server, and a reporting (RRDI) server, along with the Requirements Management (RM) server and the Jazz Team Server (JTS). For the distributed environment testing, the JTS and RM server applications were split across two servers (see Figure 1). The dual-tier configuration consisted of a single server hosting both the JTS and RM applications, plus a separate database server.

Figure 1: Distributed topology with separate JTS, RM, and database servers


Hardware Configurations

The following table lists the hardware used in the tests:

Role                   | Manufacturer | Model                 | HW Architecture  | Processors (total cores) | Processor Speed | Memory
-----------------------|--------------|-----------------------|------------------|--------------------------|-----------------|-------
64-bit RM Server       | IBM          | xSeries x3550 7978AC1 | Intel Xeon       | 4                        | 3 GHz           | 8 GB
64-bit JTS Server      | IBM          | xSeries x3550 6234E81 | Intel E8400      | 2                        | 3 GHz           | 8 GB
64-bit RRDI Server     | IBM          | xSeries x3550 6234E81 | Intel E8400      | 2                        | 3 GHz           | 8 GB
64-bit DB2 Server      | IBM          | xSeries x3550 9196A49 | Intel Core E6750 | 2                        | 2.6 GHz         | 4 GB
64-bit Oracle Server   | IBM          | xSeries x3550 7979AC1 | Intel Xeon       | 4                        | 3 GHz           | 12 GB
32-bit RM Server       | IBM          | xSeries x3550         | Intel Xeon       | 4                        | 3.2 GHz         | 3 GB
32-bit JTS Server      | IBM          | xSeries x3550         | Intel Xeon       | 4                        | 3.2 GHz         | 3 GB
64-bit Linux RM Server | IBM          | xSeries x3550 9196A49 | Intel Xeon       | 8                        | 3 GHz           | 8 GB

Workload Characterization

Rational Performance Tester was used to simulate the workload. Each simulated user performs a random use case from the set of available use cases, weighted as shown below. Based on real customer usage, our test scenario consists of a ratio of 70% reads to 30% writes. Users execute use cases at a rate of 30 pages per hour per user. Each test runs for 60 minutes after all of the users are in the system.

Each use case below is listed with its share of the overall mix (% composition):

  • Login: Connect to the server using server credentials.
  • Reports (5%): Open two reports from the RRDI server and generate two RPE reports, one 50-artifact PDF report and one 100-artifact MS report.
  • Create Collection (6%): Create collections with 10 artifacts.
  • Create Link (6%): Create multiple links on an artifact.
  • Filter Query (8%): Execute a query returning 100 results and open 3 levels of nested folders.
  • Create Modify Delete (6%): Create a new folder, move it to a new location, then delete it.
  • Search String (12%): Search for a string, returning 30 matched items.
  • Search ID (8%): Search for a specific ID in the repository.
  • View Collections (12%): View collections with 100 artifacts from the collections folders.
  • Windows Layout (6%): Expand and collapse the window layout in the web UI.
  • Add Comments (8%): Open a requirement with 100 comments and raise a comment directed to 8 people on the team.
  • ViewProject_Dashboard (7%): Open the project and dashboard, both for the first time and again after caching.
  • CreateRequirement (10%): Create a requirement with a table, an image, and rich text; also edit an artifact with 100 enumerated attributes and modify one of them.
  • Show Tree (4%): Open a folder of artifacts with links and perform a Show Tree in the grid.
  • Open Graphical Ux (5%): Open business process diagram, use case, part, image, sketch, and storyboard graphical artifacts.
  • Create Open Business Process Diagram (4%): Create and open a business process diagram.
  • Hover Collection (4%): Open a collection with 100 artifacts and hover over the artifacts pane.
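
To make the mix concrete, the following Python sketch (illustrative only; not the actual Rational Performance Tester script) draws use cases at the weights in the list above and at the stated pacing of 30 pages per hour:

import random

# Use-case mix from the list above (use case -> % composition).
# Login's share is not given in the source, so it is omitted here.
USE_CASE_MIX = {
    "Reports": 5, "Create Collection": 6, "Create Link": 6,
    "Filter Query": 8, "Create Modify Delete": 6, "Search String": 12,
    "Search ID": 8, "View Collections": 12, "Windows Layout": 6,
    "Add Comments": 8, "ViewProject_Dashboard": 7, "CreateRequirement": 10,
    "Show Tree": 4, "Open Graphical Ux": 5,
    "Create Open Business Process Diagram": 4, "Hover Collection": 4,
}

PAGES_PER_HOUR = 30                   # per-user pacing from the text above
THINK_TIME_S = 3600 / PAGES_PER_HOUR  # an average of 120 s between pages

def simulate_user(duration_s: float) -> list[str]:
    """Return the sequence of use cases one simulated user would execute."""
    names, weights = zip(*USE_CASE_MIX.items())
    executed, elapsed = [], 0.0
    while elapsed < duration_s:
        executed.append(random.choices(names, weights=weights, k=1)[0])
        elapsed += THINK_TIME_S       # a real load driver would sleep here
    return executed

print(len(simulate_user(3600)))       # one simulated user-hour: 30 pages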


Data Volume

The repository contained approximately 100,000 artifacts and consisted of the following:

  • Small Project Count: 40
  • Medium Project Count: 9
  • Total number of folders: 2,415
  • Total number of requirement artifacts: 101,929
  • Total number of collections: 292
  • Total reviews: 169
  • Total comments: 78,795
  • Total public tags: 479
  • Total private tags: 59
  • Total term count: 6,652
  • Total link count: 56,276

We also tested a 200,000-artifact scenario and found adequate performance on large-scale hardware (RRC and JTS on separate servers; see "Hardware sizing" below).


Test Scenarios

The following scenarios were tested:

  • Windows 32-bit RM server vs. 64-bit server
  • DB2 vs. Oracle database server
  • WebSphere Application Server (WAS) vs. Tomcat
  • RRC 2.0.0.3 vs. RRC 3.0.1, using the same workload and data volume
  • Linux-based RM server vs. Windows-based RM server
  • Servlet dispatching enabled vs. disabled (WAS, RM and JTS on the same server)

Test Results

The tests we ran to gather performance and sizing data for this document are summarized below:

  • Windows 32-bit servers: The distributed 32-bit topology (separate JTS and RM servers) supported only 50,000 artifacts. The equivalent 64-bit topology supported 200 users and 100,000 artifacts.
  • DB2 vs. Oracle database server: Switching between Oracle and DB2 revealed almost identical performance results.
  • WebSphere Application Server vs. Tomcat: Average response times in the WAS topology were 65% better than in Tomcat using the same workload and topology.
  • RRC 2.0.0.3 vs. RRC 3.0.1: Using test data in 3.0.1 comparable to the 2.0.0.3 data, we tested with 100 users and 10,000 artifacts. RRC 3.0.1 showed a 10x improvement over 2.0.0.3.
  • Linux vs. Windows 64-bit server: Linux servers showed 20-30% better average response times than Windows 64-bit servers using the same workload and topology.
  • Servlet dispatching enabled vs. disabled on the same server: Dispatching is enabled by default when installing RM and JTS on the same server and yields a 30% better average response time than running with dispatching disabled.

Detailed Test Results

32-bit Windows

Due to the limits on the maximum memory available to the application server JVM, a 32-bit server can only support a small repository (no more than 50,000 artifacts). A dual-tier 32-bit configuration can additionally only support a small user load (no more than 25 users).

RRC 2.0.0.3 vs. RRC 3.0.1

The test data for this scenario differed from the other scenarios in that we used existing 2.0.0.3 performance test data and the equivalent data replicated in 3.0.1. The workload is also slightly different from that described above: for this 2.0.0.3-to-3.0.1 comparison we used the same test workload that had previously been used to compare 2.x releases. The primary differences are that links and reports are not created, and graphical artifacts are only opened, not created. The 10x performance improvement seen in this 3.0.1 test over 2.0.0.3 may not apply to other workloads or repository data.

Other Scenarios

All other deployment scenarios were tested with 200 concurrent users and a 100,000-artifact repository, using the hardware detailed in the "Hardware Configurations" table above. Due to time and resource constraints, the hardware for the Linux vs. Windows comparison differed slightly: both tests were run on 8-core servers, but the Windows server ran at 2.8 GHz whereas the Linux server ran at 3 GHz. The performance gain seen on Linux is therefore attributable to some extent to the higher processor speed.

[Chart: results for the deployment scenarios]

We were unable to do a direct comparison test of the multi-tier topology (RM and JTS on separate servers) with the dual-tier topology. The latter benefits from JTS servlet dispatching, which eliminates HTTP connections for RM-JTS communication. Our testing did show that the dual-tier topology has a 30% performance improvement when using servlet dispatching; this is the default deployment when installing JTS and RM on the same server using the same public URI for both applications. The limiting factor for scalability and performance under high user load in this dual-tier deployment is the increased CPU load on the combined JTS/RM server. The recommended topologies that follow reflect this.


Recommendations for optimal performance

Database

IBM® DB2® or Oracle®

Application Server

IBM® WebSphere® Application Server Version 7.0.0.17 and up

Hardware sizing

Using the performance test data that we compiled, we created the sizing lists that follow for the various hardware and software configurations for optimal deployment of the Rational Requirements Composer server. When considering sizing options, note that Version 3.0.1 supports both dual-tier and multi-tier configurations.

The three lists that follow show the sizing for different enterprise deployments.


Small-scale enterprise configuration, 10,000 resources and up to 100 users:

  • 2 Systems: RRC/JTS server Quad Core CPU 2.4 GHz or higher, 64-bit; DB server Dual Core CPU 2.4 GHz or higher, 64-bit
  • Memory: 4 GB or higher
  • Operating system: Linux or Windows server
  • Web application server: WebSphere Application Server 7.0.0.17
  • Database: Oracle 10GR2 or DB2 V9.1 or DB2 V9.5 Fix Pack 4

Medium-scale enterprise configuration, 100,000 resources and up to 200 users:

  • 2 Systems: RRC/JTS server Quad Core CPU 2.8 GHz or higher, 64-bit; DB server Dual Core CPU 2.4 GHz or higher, 64-bit
  • Memory: 8 GB or higher
  • Disk: High-performance SAS disk (15K), RAID
  • Operating system: Linux or Windows server
  • Web application server: WebSphere Application Server 7.0.0.17
  • Database: Oracle 10GR2 or DB2 V9.1 or DB2 V9.5 Fix Pack 4

Large-scale enterprise configuration, 100,000 resources and up to 500 users:

  • 3 Systems: RRC server Quad Core CPU 2.8 GHz or higher, 64-bit; JTS server Quad Core CPU 2.8 GHz or higher, 64-bit; DB server Dual Core CPU 2.4 GHz or higher, 64-bit
  • Memory: 8 GB or higher
  • Disk: High-performance SAS disk (15K), RAID, SAN or NAS direct-connected disk subsystem
  • Operating system: Linux or Windows server
  • Web application server: WebSphere Application Server 7.0.0.17
  • Database: Oracle 10GR2 or DB2 V9.1 or DB2 V9.5 Fix Pack 4
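
As a compact summary of the three tiers, the following Python sketch encodes the thresholds from the lists above; the recommended_tier helper is purely illustrative and not part of the product:

def recommended_tier(resources: int, users: int) -> str:
    """Map repository size and user count to the tiers listed above."""
    if resources <= 10_000 and users <= 100:
        return "small-scale: 2 systems, quad-core 2.4 GHz RRC/JTS, 4 GB RAM"
    if resources <= 100_000 and users <= 200:
        return "medium-scale: 2 systems, quad-core 2.8 GHz RRC/JTS, 8 GB RAM"
    if resources <= 100_000 and users <= 500:
        return "large-scale: 3 systems, separate RRC and JTS servers, 8 GB RAM"
    return "beyond the configurations tested in this guide"

print(recommended_tier(100_000, 200))  # medium-scale tier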

Database server

The database server hardware does not significantly impact RRC system scalability. The same hardware recommendation applies to both small- and large-scale deployments.

Network connectivity

When choosing network connectivity in dual-tier configurations, the goal is to minimize latency between the application server and the database server (no more than 1-2 ms), with the servers located on the same subnet. When using external storage, minimize connectivity latencies (the optimal configuration is through Fibre Channel).
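
One simple way to check the 1-2 ms target is to time TCP connections from the application server to the database port. The Python sketch below is illustrative only; the hostname is hypothetical, and 50000 is the default DB2 service port:

import socket
import time

def tcp_connect_rtt_ms(host: str, port: int, samples: int = 10) -> float:
    """Average TCP connect round-trip time in milliseconds; this
    approximates network latency to the database server."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass                      # connection is closed on block exit
        total += (time.perf_counter() - start) * 1000.0
    return total / samples

# Hypothetical database host; aim for no more than 1-2 ms.
print(f"{tcp_connect_rtt_ms('db.example.com', 50000):.2f} ms")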

Disks

Based on the performance test results, we found that an increase in resources, along with an increase in concurrent users, increases the load on the hard disk. Therefore, for storage in large-scale deployments, consider a SAN or NAS solution with the storage directly connected through a Fibre Channel (FC) connection to avoid latency between the storage and the server. This configuration offloads all disk I/O from the server to the storage subsystem while providing a fully fault-tolerant disk subsystem with instantaneous snapshot backup, high availability, and disaster recovery capabilities.

Disk space usage

The disk space usage for the Rational Requirements Composer server is composed of the disk space used for the server's indexed data and the disk space needed for the database instance. By default, the indexed data is located on the same machine as the JTS application server, so providing adequate disk space there is important.

For example, for our Windows repository loaded with 100,000 Rational Requirements Composer resources, the indexed data takes up 5 GB of disk space. The database instance must also be located on a server with appropriate disk space; for example, the database that housed 100,000 Rational Requirements Composer resources amounted to approximately 10 GB of disk space, including the backup.


Server Tuning Recommendations

The WebSphere thread pool size needs to be increased from the default of 50 to 1.5 times the user load. In Tomcat, the maxThreads value needs to be set accordingly in server.xml.
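
For example, the 200-user load used in these tests calls for 1.5 x 200 = 300 threads. In Tomcat that translates to a server.xml fragment like the following sketch; maxThreads is the standard connector attribute, while the port and other attributes shown are illustrative defaults rather than values from this guide:

<!-- server.xml: size maxThreads to about 1.5x the concurrent user load -->
<Connector port="9443" protocol="HTTP/1.1"
           SSLEnabled="true" scheme="https" secure="true"
           maxThreads="300" />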

The WebSphere Java heap should be set to 4 GB if the machine has at least 6 GB of RAM. In a three-server topology, two settings in fronting.properties need to be adjusted for loads higher than 200 concurrent users: com.ibm.rdm.fronting.server.connection.route.max should equal the number of users, and com.ibm.rdm.fronting.server.connection.max should be twice that value.
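
For example, under that rule a deployment sized for 300 concurrent users would use the following values (the property names come from the text above; the user count is illustrative):

com.ibm.rdm.fronting.server.connection.route.max=300
com.ibm.rdm.fronting.server.connection.max=600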

In the unusual case that a plain HTTP connection is used for RM-JTS communication in the three-server topology, the OAuth nonce timeout needs to be increased; refer to IBM technote 1502335.

Database optimization: To ensure optimal database performance, the database must be fully optimized. With both DB2 and Oracle, you can run statistics to allow the database to analyze the shape of a schema and optimize access to its contents. Databases generally manage statistics automatically (in a scheduled overnight operation, for example). However, to ensure that the database is fully optimized, you can run the statistics manually:

DB2

DB2 REORGCHK UPDATE STATISTICS ON TABLE ALL

Oracle

EXEC DBMS_STATS.gather_schema_stats('JAZZDBUSER');

Jazz indexing persistence interval: This interval controls how often the progress of the underlying index task is recorded. While progress is being saved, requests take longer than usual to respond. Increasing the persistence interval therefore reduces how often this sluggish behavior occurs; the downside is that each save then has more accumulated progress data to persist. If there are large amounts of resource creation or updates, decrease the persistence interval. If repository use consists mainly of read-only operations, increase the persistence interval for better performance.

The persistence interval property is modified from the Jazz admin page and requires Administrator privileges.

Advanced Properties: Persist index progress every N ms

Default value is 60000 ms (1 minute)


For more information

Collaborative Lifecycle Management 2011 Sizing Guide
