
Performance Impact of Sharing a Jazz Team Server within a Deployment of Multiple RDNG Servers

Authors: SentellaCystrunk, VaughnRokosz
Last updated: September 29, 2014
Build basis: Rational DOORS Next Generation (RDNG) 5.0


Beginning with the Rational DOORS Next Generation (RDNG) 5.0 release, storage for the Requirements Management (RM) application is decoupled from the Jazz Team Server (JTS). With this architectural change, multiple RDNG servers can share a single JTS server for common user and license information. This report examines the performance impact of sharing one JTS instance among multiple RDNG instances.


The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Summary of Results

Results from the tests conducted for this study show virtually no performance impact when up to three instances of RDNG share a Jazz Team Server.

Tests were conducted with 100 users per RDNG server against three configurations:

  • 1 JTS, 1 RDNG
  • 1 JTS, 2 RDNG
  • 1 JTS, 3 RDNG

In all configurations, CPU and memory utilization on JTS remained fairly steady while each RDNG server operated independently without any signs of interference from JTS. CPU and heap consumption on JTS averaged about 1% and 0.26 GB, respectively. CPU utilization for each RDNG server averaged about 23% and page response times for their simulated users averaged about 0.8 seconds.

Server Topology

The following topology was used for this testing:


The specific versions of software used were:

  • IBM Rational CLM Applications
  • IBM HTTP Server and Web Server Plugin for WebSphere
  • IBM WebSphere Application Server
  • Oracle Database

Hardware Description

This table lists the servers that were used for our testing:

| Role | Manufacturer / Model | CPU | Number of CPUs / Cores | CPU Speed | Memory | Disk Capacity | Operating System |
|---|---|---|---|---|---|---|---|
| Reverse Proxy Server | IBM x3250 M3 | Intel Xeon X3480 (1 quad-core) | 8 / 4 | 3.07 GHz | 15.5 GB | RAID 0, SAS Disk x 1, 279 GB | RedHat Enterprise Linux Server 6.3, 64-bit |
| JTS Server | IBM Flex System x240 Compute Node | Intel Xeon E5-2670 0 (virtual processor) | 4 / 4 | 2.6 GHz | 16 GB | VMWare Virtual Disk (SCSI), 60 GB | Windows Server 2012 Standard, 64-bit |
| DNG Server #1 | IBM Flex System x240 Compute Node | Intel Xeon E5-2670 0 (virtual processor) | 4 / 4 | 2.6 GHz | 16 GB | VMWare Virtual Disk (SCSI), 60 GB | Windows Server 2012 Standard, 64-bit |
| DNG Server #2 | IBM Flex System x240 Compute Node | Intel Xeon E5-2670 0 (virtual processor) | 4 / 4 | 2.6 GHz | 16 GB | VMWare Virtual Disk (SCSI), 60 GB | Windows Server 2012 Standard, 64-bit |
| DNG Server #3 | IBM Flex System x240 Compute Node | Intel Xeon E5-2670 0 (virtual processor) | 4 / 4 | 2.6 GHz | 16 GB | VMWare Virtual Disk (SCSI), 60 GB | Windows Server 2012 Standard, 64-bit |
| Database Server | IBM x3650 M3 | Intel Xeon X5667 (2 quad-core) | 16 / 8 | 3.07 GHz | 31.3 GB | RAID 10, SAS Disk x 8 (with write-back caching), 279 GB | RedHat Enterprise Linux Server 6.3, 64-bit |

Data Volume and Shape

For each DNG server, the artifacts were distributed across 6 projects, for a total of 517,080 artifacts.

The repository contained the following data:

  • 312 modules
  • 510,000 module artifacts
  • 7,080 requirement artifacts
  • 714 folders
  • 84 collections
  • 1,800 Public Tags
  • 1,428 Terms
  • 1,824,174 Links

JTS index size on disk = 2.6 GB
RM index size on disk = 40.2 GB

Each project contained the following data:

  • 52 modules
  • 85,000 module artifacts
  • 1,180 requirement artifacts
  • 119 folders
  • 14 collections
  • 300 Public Tags
  • 238 Terms
  • 304,029 Links
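Because all 6 projects hold identical data, each repository-wide count above is exactly six times its per-project count. A quick arithmetic check (illustrative only; the dictionaries simply restate the figures from the two lists above):

```python
# Per-project counts and repository totals, as reported above.
per_project = {
    "modules": 52,
    "module artifacts": 85_000,
    "requirement artifacts": 1_180,
    "folders": 119,
    "collections": 14,
    "public tags": 300,
    "terms": 238,
    "links": 304_029,
}
repository = {
    "modules": 312,
    "module artifacts": 510_000,
    "requirement artifacts": 7_080,
    "folders": 714,
    "collections": 84,
    "public tags": 1_800,
    "terms": 1_428,
    "links": 1_824_174,
}

PROJECTS = 6
for name, count in per_project.items():
    # Every repository total should be 6x the per-project count.
    assert count * PROJECTS == repository[name], name
print("All repository totals equal 6 x per-project counts")
```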

Test Methodology

IBM® Rational® Performance Tester was used to simulate the workload. A Rational Performance Tester script was created for each use case. The scripts are organized by pages, and each page represents a user action. Users were distributed into many user groups, and each user group repeatedly ran one script (use case). Each user looped through their script continuously, pausing only for a 1-minute "think time" between pages.

Tests were run against the single-JTS, single-RDNG configuration to find the maximum user load that could be sustained over an eight-hour period with acceptable response times for the given workload. That maximum user load was then run against the remaining configurations to determine whether there was any performance impact.
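The pacing model above (each virtual user loops over a script's pages with a fixed think time between them, while response times are recorded) can be sketched as a minimal thread-per-user load loop. This is an illustrative stand-in for what Rational Performance Tester does, not its implementation; the `pages` callables are hypothetical page actions, and the demo uses millisecond-scale timings rather than the 1-minute think time of the actual tests:

```python
import threading
import time

def simulate_user(pages, think_time_s, duration_s, record):
    """One virtual user: loop over a script's pages until the test
    duration elapses, with a think-time pause between pages, and
    record each page's response time."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        for page in pages:
            start = time.monotonic()
            page()  # hypothetical page action (an HTTP request in the real test)
            record.append(time.monotonic() - start)
            time.sleep(think_time_s)  # 60 s in the actual test runs

def run_load(user_count, pages, think_time_s, duration_s):
    """Run user_count virtual users concurrently; return all page timings."""
    record = []  # list.append is atomic in CPython, so threads can share it
    threads = [
        threading.Thread(target=simulate_user,
                         args=(pages, think_time_s, duration_s, record))
        for _ in range(user_count)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return record

# Tiny demo with a stub page and a short think time.
timings = run_load(user_count=5,
                   pages=[lambda: time.sleep(0.01)],
                   think_time_s=0.01,
                   duration_s=0.1)
print(f"{len(timings)} page hits, avg response {sum(timings)/len(timings):.3f}s")
```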

Workload Description

This table describes the use cases; each was repeatedly run by a share of the 100 simulated users per server:

| Use case | Description |
|---|---|
| | Open a module that contains 1500 artifacts, select 25 artifacts, move them by using the copy and paste functions, and then delete the copied artifacts. |
| Create an artifact | Create non-module artifacts. |
| Create a collection | Create collections that contain 10 artifacts. |
| Create a module artifact end-to-end scenario | Open a medium module that contains 1500 artifacts, create a module artifact, edit the new artifact, and delete the new artifact. |
| Create a small module artifact end-to-end scenario | Open a small module that contains 500 artifacts, create a module artifact, edit that new artifact, and delete the new artifact. |
| Display module history | Open a module that contains 1500 artifacts and show the module history and the module audit history. |
| Hover over a module artifact and edit it | Open a module that contains 1500 artifacts and hover over an artifact. When the rich hover is displayed, edit the artifact text. |
| Hover over and open a collection | Display all of the collections, hover over a collection, and then open it. |
| Manage folders | Click "Show Artifacts" to display the folder tree and then create a folder. Move the new folder into another folder and then delete the folder that you just created. |
| Open the RM admin page | Open the rm/admin page. |
| Open the project dashboard | Open a dashboard that displays the default dashboard. |
| Search by ID and string | Open a project, select a folder, search for an artifact by its numeric ID, and click a search result to display an artifact. Also search for artifacts by using a generic string search that produces about 50 results. |
| Scroll 20 pages in a module | Open a module that contains 1500 artifacts and then scroll through 20 pages. |
| Switch the module view | Open a module that contains 1500 artifacts and then change the view to add columns that display user-defined attributes. |
| Upload a 4 MB file as a new artifact | Upload a file and create an artifact. |

Test Results - Compare Configurations

The following charts display the average server resource utilization and page response times in all tested configurations.

The configurations are labeled as follows:

  • 1 JTS, 1 RDNG = 1-RM
  • 1 JTS, 2 RDNG = 2-RM
  • 1 JTS, 3 RDNG = 3-RM

Each RDNG server is labeled by the unique context root (/rm1, /rm2, or /rm3) under which its RM application was deployed.


Key Configuration & Tuning Parameters

WebSphere Application Servers

JVM arguments set to:

Each DNG server functions independently, so a separate converter application is deployed for each DNG server. In the RM administration console for each server, set the ConverterURL property to the unique URL of that server's converter installation.

WebContainer thread pool set to: minimum 500, maximum 500


IBM HTTP Server

In httpd.conf:

<IfModule worker.c>
      ThreadLimit           25
      ServerLimit           80
      StartServers          1
      MaxClients            2000
      MinSpareThreads       25
      MaxSpareThreads       75
      ThreadsPerChild       25
      MaxRequestsPerChild   0
</IfModule>
Oracle Database

Initialization parameters:

processes = 500
shared_servers = 20
dispatchers = 5

Related Information
