In IBM® Rational® DOORS® Next Generation (RDNG), administrators can configure their hardware and software to support the user load on repositories of varied sizes. The Requirements Management (RM) team, which develops Rational DOORS Next Generation, knows that a one-size-fits-all configuration cannot satisfy the wide variation in scalability requirements. The team tested the performance of the RM server on various repository sizes with various hardware and software configurations to create guidance for hardware sizing and server tuning. You can use that guidance to determine which sizing and tuning settings most closely match your scalability requirements.
The guidance is based on the results of performance tests that the team ran on many hardware and software configurations. For details about the hardware to deploy and the server tuning settings for repositories, see the "RM server hardware configuration and tuning" section.
The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment; therefore, the results obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should therefore be viewed as a comparison of performance between versions, not as an absolute benchmark of performance.
Function | Number of machines | Machine type | Processor/machine | Total processor cores/threads | Memory/machine | Network interface | OS and version
---|---|---|---|---|---|---|---
Proxy server (IBM HTTP Server and WebSphere Plugin) | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) with Intel® Hyper-Threading Technology | 12/24 | 32 GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.3 (Santiago)
Jazz Team Server (WebSphere Application Server 8.5.5.1) | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) with Intel® Hyper-Threading Technology | 12/24 | 32 GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.3 (Santiago)
RM server (WebSphere Application Server 8.5.5.1) | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) with Intel® Hyper-Threading Technology | 12/24 | Varied by repository size and hard disk type: 32 GB to 128 GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.3 (Santiago)
Database server (DB2 10.1.2) | 1 | IBM System x3650 M4 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) with Intel® Hyper-Threading Technology | 12/24 | 32 GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.3 (Santiago)
Network switches | N/A | Cisco 2960G-24TC-L | N/A | N/A | N/A | Gigabit Ethernet (24 Ethernet 10/100/1000 ports) | N/A
The test data that was used for these sizing-guide tests represents extremely large repositories compared to most environments that are currently deployed. The RM team chose three repository sizes to test the limits of the RM application. (See Appendix C for a fourth data shape: 1 million artifacts split into two equal projects.) You can choose the appropriate hardware and software configuration based on your scalability goals and requirements. The artifacts, modules, comments, links, and other elements were evenly distributed among the projects. Each project had the data that is shown in the following table; for example, the module artifact count per project is 2 x 10,000 + 40 x 1,500 + 10 x 500 = 85,000.
Artifact type | Number
---|---
Large modules (10,000 artifacts) | 2
Medium modules (1,500 artifacts) | 40
Small modules (500 artifacts) | 10
Folders | 119
Module artifacts | 85,000
Non-module artifacts | 1,181
Comments | 260,582
Links | 304,029
Collections | 14
Public tags | 300
Private tags | 50
Views | 200
Terms | 238
Each repository that was tested contained a different number of projects. In each project, the data was distributed as shown in the previous table. The categories of RM artifacts in the repository were evenly distributed among all of the available projects in the repositories.
Artifact type | 500,000-artifact repository | 1-million-artifact repository | 2-million-artifact repository
---|---|---|---
Projects | 6 | 12 | 24
Large modules (10,000 artifacts) | 12 | 24 | 48
Medium modules (1,500 artifacts) | 240 | 480 | 960
Small modules (500 artifacts) | 60 | 120 | 240
Folders | 714 | 1,428 | 2,856
Module artifacts | 510,000 | 1,020,000 | 2,040,000
Non-module artifacts | 7,086 | 14,172 | 28,344
Comments | 1,563,492 | 3,126,984 | 6,253,968
Links | 1,824,174 | 3,648,348 | 7,296,696
Collections | 84 | 168 | 336
Public tags | 1,800 | 3,600 | 7,200
Private tags | 300 | 600 | 1,200
Views | 1,200 | 2,400 | 4,800
Terms | 1,428 | 2,856 | 5,712
Index size on disk | 39 GB | 79 GB | 157 GB
In each stage, the users ramped up over 15 minutes. After they logged in, they were given another 15 minutes to settle. Tests were then iterated for 60 minutes; during that time, tests ran as quickly as possible with a 1-minute "think time" between pages for each user.
This table shows the use cases and the number of users who were repeatedly running each script.
Use case | Description | Number of users
---|---|---
Copy/Paste/Move/Delete | Open a module that contains 1,500 artifacts, select 25 artifacts, move them by using the copy and paste functions, and then delete the copied artifacts. | 1%
Create an artifact | Create non-module artifacts. | 3%
Create a collection | Create collections that contain 10 artifacts. | 2%
Create a module artifact end-to-end scenario | Open a medium module that contains 1,500 artifacts, create a module artifact, edit the new artifact, and delete the new artifact. | 12%
Create a small module artifact end-to-end scenario | Open a small module that contains 500 artifacts, create a module artifact, edit that new artifact, and delete the new artifact. | 3%
Create a comment in a module artifact | Open a medium module that contains 1,500 artifacts, open an artifact that is in the module, expand the Comments section of the artifact, and create a comment. | 17%
Create a module artifact link by dragging an artifact between modules | Open a medium module that contains 1,500 artifacts in a new browser tab and change the column settings to show links of the LinkTo type. Return to the other browser tab and open another module that contains 1,500 artifacts. Drag one requirement to the module in the new tab and drop it on one of the cells in the LinkTo column. When you are prompted, create a link. | 6%
Display module history | Open a module that contains 1,500 artifacts and show the module history and the module audit history. | 1%
Hover over a module artifact and edit it | Open a module that contains 1,500 artifacts and hover over an artifact. When the rich hover is displayed, edit the artifact text. | 2%
Hover over and open a collection | Display all of the collections, hover over a collection, and then open it. | 1%
Manage folders | Click "Show Artifacts" to display the folder tree, and then create a folder. Move the new folder into another folder, and then delete the folder that you just created. | 1%
Open the RM admin page | Open the rm/admin page. | 1%
Open the project dashboard | Open a dashboard that displays the default dashboard. | 4%
Search by ID and string | Open a project, select a folder, search for an artifact by its numeric ID, and click a search result to display an artifact. Search for artifacts by using a generic string search that produces about 50 results. | 9%
Scroll 20 pages in a module | Open a module that contains 1,500 artifacts and then scroll through 20 pages. | 19%
Switch the module view | Open a module that contains 1,500 artifacts and then change the view to add columns that display user-defined attributes. | 14%
Upload a 4 MB file as a new artifact | Upload a file and create an artifact. | 4%
Review a module | Open a module that contains 1,500 artifacts and create a review of the module. This use case is unique and complex. | 1 user, irrespective of how many users are in the stage
Import a ReqIF module that contains 1000 artifacts | Upload a ReqIF file and import it to create a module that contains 1000 artifacts. | 1 user, irrespective of how many users are in the stage
Server activity | 100 users in 1 hour | 100 users in 8 hours | 400 users in 8 hours
---|---|---|---
Number of artifacts created | 412 + 1,000 = 1,412 | 3,296 + 8,000 = 11,296 | 13,184 + 8,000 = 21,184
Number of artifacts opened | 505 | 4,040 | 16,160
Number of artifacts edited or deleted | 500 | 4,000 | 16,000
Display a list of modules | 600 | 4,800 | 19,200
Comments created | 204 | 1,632 | 1,632
Links created | 72 | 572 | 2,288
Modules opened | 740 | 5,920 | 23,680
Search by ID and open the artifact | 131 | 1,048 | 4,192
Search by string | 135 | 1,080 | 4,320
Switch module view to filter by attribute | 274 | 2,192 | 8,768
Number of module pages scrolled | 1,032 | 8,256 | 33,024
Review module | 1 per hour, irrespective of the number of users | 8 | 8
The RM team ran many tests with varied hardware configurations on different repository sizes. The goal was to identify the hardware configuration that yields the best performance and scalability for a given repository. For each test, only the configuration of the RM server was changed; the same hardware and software tuning was used for Jazz Team Server and the database server, irrespective of the repository size. For the results and the corresponding guidance, see the "RM server hardware configuration and tuning" section.
As mentioned, this article provides guidance about the hardware and software configurations for the RM server. However, it is also important to know about the hardware and software configuration of Jazz Team Server. When Jazz Team Server was configured as described in this section, its resources were underutilized, and it was never the bottleneck that slowed server performance.
Jazz Team Server ran with the following JVM settings:
-Xgcpolicy:gencon -Xmx16g -Xms16g -Xmn4G -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
Repository | Combined hard disk usage of the Jazz Team Server and RM databases | Hard disk size
---|---|---
Up to 500,000 artifacts | ~100 GB | 300 GB
Up to 1 million artifacts | ~150 GB | 400 GB
Up to 2 million artifacts | ~300 GB | 700 GB
Setting | Value
---|---
ThreadLimit | 25
ServerLimit | 100
StartServers | 2
MaxClients | 2500
MinSpareThreads | 25
MaxSpareThreads | 500
ThreadsPerChild | 25
MaxRequestsPerChild | 0
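These directives belong to the threaded (worker) multi-processing module of IBM HTTP Server. As a minimal sketch of how they might be set in httpd.conf (assuming the worker MPM; the surrounding file layout varies by installation):

# Worker MPM tuning sketch for IBM HTTP Server; adjust values to your install
<IfModule worker.c>
   ThreadLimit          25
   ServerLimit          100
   StartServers         2
   MaxClients           2500
   MinSpareThreads      25
   MaxSpareThreads      500
   ThreadsPerChild      25
   MaxRequestsPerChild  0
</IfModule>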
To keep DB2 optimizer statistics current, update the statistics on all tables:
DB2 REORGCHK UPDATE STATISTICS ON TABLE ALL
To reduce the startup time of large deployments, adjust the buffer pool size on the DB2 database. Use 300000 pages (16 K page size).
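For example, assuming the repository tables use a 16 K buffer pool named BP16K (the buffer pool name is installation-specific), the size can be set from the DB2 command line:

db2 "ALTER BUFFERPOOL BP16K IMMEDIATE SIZE 300000"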
On Oracle, gather schema statistics for the Jazz database user:
EXEC DBMS_STATS.gather_schema_stats('JAZZDBUSER');
RM server JVM settings for the 8 GB heap configuration:
-Xgcpolicy:gencon -Xmx8g -Xms8g -Xmn2700M -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
The following table shows the hardware configurations, with RAM and JVM heap sizes, and the maximum number of concurrent users that each configuration supported.

RM server hardware and JVM configuration | Jazz Team Server | DB2 server | Number of users supported
---|---|---|---
32 GB RAM with 16 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 200
64 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 450
128 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 475
JVM settings for the 16 GB heap configurations:
-Xgcpolicy:gencon -Xmx16g -Xms16g -Xmn5700M -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
JVM settings for the 24 GB heap configurations:
-Xgcpolicy:gencon -Xmx24g -Xms24g -Xmn8G -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
The following table outlines the different hardware and JVM heap configurations and the maximum number of users that are supported in each configuration.

RM server hardware and JVM configuration | Jazz Team Server | DB2 server | Number of users supported
---|---|---|---
HDD – 32 GB RAM with 16 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 50
SSD – 32 GB RAM with 16 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 150
SAN – 64 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 200
HDD – 64 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 425
SSD – 64 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 425
HDD – 128 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 425
JVM settings for the 16 GB heap configurations:
-Xgcpolicy:gencon -Xmx16g -Xms16g -Xmn5700M -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
JVM settings for the 24 GB heap configurations:
-Xgcpolicy:gencon -Xmx24g -Xms24g -Xmn8G -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
The following table outlines the different hardware and JVM heap configurations and the maximum number of users that are supported in each configuration.

RM server hardware and JVM configuration | Jazz Team Server | DB2 server | Number of users supported
---|---|---|---
SAN – 64 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | <100
HDD – 128 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 200
SSD – 64 GB RAM with 24 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 300
JVM settings for the 24 GB heap configurations:
-Xgcpolicy:gencon -Xmx24g -Xms24g -Xmn8G -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
The following table summarizes the RM server configurations that supported 200 and 400 concurrent users for each repository size.

Repository size | 200 users | 400 users
---|---|---
200,000 requirements | 16 GB RAM, HDD, 8 GB JVM max heap | 32 GB RAM, HDD, 16 GB JVM max heap
500,000 requirements | 32 GB RAM, HDD, 16 GB JVM max heap | 64 GB RAM, HDD, 24 GB JVM max heap
1 million requirements | 64 GB RAM, SAN, 24 GB JVM max heap | 64 GB RAM, HDD, 24 GB JVM max heap
2 million requirements | 128 GB RAM, HDD, 24 GB JVM max heap | 64 GB RAM, SSD, 24 GB JVM max heap
Various server performance benchmarks were captured on the RM server by using Sysbench.
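For reference, these invocations match the option sets reported in the outputs that follow. Only the file I/O command appears verbatim in its output; the others are reconstructed from the reported options and may differ from the exact commands that were used:

sysbench --test=cpu --cpu-max-prime=20000 --num-threads=2 run
sysbench --test=threads --num-threads=1 --thread-yields=1000 --thread-locks=1 run
sysbench --test=mutex --num-threads=64 run
sysbench --test=memory --num-threads=4 --memory-block-size=1K --memory-total-size=100G run
sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw --max-time=300 --max-requests=0 run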
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 2

Doing CPU performance benchmark
Threads started!
Done.

Maximum prime number checked in CPU test: 20000

Test execution summary:
    total time: 14.1921s
    total number of events: 10000
    total time taken by event execution: 28.3806
    per-request statistics:
         min: 2.81ms
         avg: 2.84ms
         max: 6.16ms
         approx. 95 percentile: 2.89ms

Threads fairness:
    events (avg/stddev): 5000.0000/5.00
    execution time (avg/stddev): 14.1903/0.00
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing thread subsystem performance test
Thread yields per test: 1000 Locks used: 1
Threads started!
Done.

Test execution summary:
    total time: 2.4548s
    total number of events: 10000
    total time taken by event execution: 2.4534
    per-request statistics:
         min: 0.24ms
         avg: 0.25ms
         max: 0.64ms
         approx. 95 percentile: 0.25ms

Threads fairness:
    events (avg/stddev): 10000.0000/0.00
    execution time (avg/stddev): 2.4534/0.00
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Doing mutex performance test
Threads started!
Done.

Test execution summary:
    total time: 1.8363s
    total number of events: 64
    total time taken by event execution: 113.1430
    per-request statistics:
         min: 1526.83ms
         avg: 1767.86ms
         max: 1834.26ms
         approx. 95 percentile: 1831.69ms

Threads fairness:
    events (avg/stddev): 1.0000/0.00
    execution time (avg/stddev): 1.7679/0.07
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 4

Doing memory operations speed test
Memory block size: 1K
Memory transfer size: 102400M
Memory operations type: write
Memory scope type: global
Threads started!
Done.

Operations performed: 104857600 (1513850.88 ops/sec)
102400.00 MB transferred (1478.37 MB/sec)

Test execution summary:
    total time: 69.2655s
    total number of events: 104857600
    total time taken by event execution: 135.7675
    per-request statistics:
         min: 0.00ms
         avg: 0.00ms
         max: 0.72ms
         approx. 95 percentile: 0.00ms

Threads fairness:
    events (avg/stddev): 26214400.0000/2003181.97
    execution time (avg/stddev): 33.9419/0.65
sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw --max-time=300 --max-requests=0 run

sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 1.1719Gb each
150Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 64501 Read, 43000 Write, 137600 Other = 245101 Total
Read 1007.8Mb Written 671.88Mb Total transferred 1.6403Gb (5.5981Mb/sec)
358.28 Requests/sec executed

Test execution summary:
    total time: 300.0506s
    total number of events: 107501
    total time taken by event execution: 292.2219
    per-request statistics:
         min: 0.00ms
         avg: 2.72ms
         max: 298.61ms
         approx. 95 percentile: 6.50ms

Threads fairness:
    events (avg/stddev): 107501.0000/0.00
    execution time (avg/stddev): 292.2219/0.00
sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw --max-time=300 --max-requests=0 run

sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 1.1719Gb each
150Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 922042 Read, 614694 Write, 1966976 Other = 3503712 Total
Read 14.069Gb Written 9.3795Gb Total transferred 23.449Gb (80.038Mb/sec)
5122.45 Requests/sec executed

Test execution summary:
    total time: 300.0002s
    total number of events: 1536736
    total time taken by event execution: 171.4439
    per-request statistics:
         min: 0.00ms
         avg: 0.11ms
         max: 3.46ms
         approx. 95 percentile: 0.25ms

Threads fairness:
    events (avg/stddev): 1536736.0000/0.00
    execution time (avg/stddev): 171.4439/0.00
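The file I/O test operates on a set of test files that must be created before the run and removed afterward. A typical sequence for the same 150 GB working set (a sketch; the sizes must match the run command) is:

sysbench --test=fileio --file-total-size=150G prepare
sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
sysbench --test=fileio --file-total-size=150G cleanup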
The large-project tests were conducted by using a 1-million-artifact repository that was divided into two equal projects, with the data distributed as shown in the following table:
Artifact type | Number
---|---
Large modules (10,000 artifacts) | 12
Medium modules (1,500 artifacts) | 240
Small modules (500 artifacts) | 60
Folders | 370
Module artifacts | 510,000
Non-module artifacts | 7,086
Comments | 1,563,492
Links | 1,824,174
Collections | 84
Public tags | 1,800
Private tags | 300
Views | 1,200
Terms | 1,428
The following table outlines the different hardware and JVM heap configurations and the maximum number of users that are supported in each configuration.

RM server hardware and JVM configuration | Jazz Team Server | DB2 server | Number of users supported
---|---|---|---
HDD – 32 GB RAM with 16 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 100
SSD – 32 GB RAM with 16 GB JVM heap | 32 GB RAM with 16 GB JVM heap | 32 GB RAM | 300
SSD – 64 GB RAM with 24 GB JVM heap | 64 GB RAM with 24 GB JVM heap | 64 GB RAM | 300
'Standard' 1M SSD – 64 GB RAM with 24 GB JVM heap | 64 GB RAM with 24 GB JVM heap | 64 GB RAM | 425