The topology used for this testing was based on the standard enterprise topology E1. The E1 topology uses WebSphere Application Server and DB2, and installs each of the CLM applications onto a separate server. We extended this topology by adding a second RQM server and a second RTC server, creating a topology in which up to five CLM application servers interact with a single JTS server. We used only a single RRC server because multiple RRC servers are not supported in the 4.0.3 release. This deployment used floating licenses provided by the JTS server. A single IBM HTTP Server acted as a reverse proxy.
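In a deployment like this, IBM HTTP Server is normally front-ended onto the CLM servers through the WebSphere web server plug-in rather than plain `mod_proxy`. The report does not include the proxy configuration, so the following is only an illustrative sketch; the module and configuration paths are assumptions based on a default plug-in install, not values taken from this test:

```apache
# Load the WebSphere plug-in module for Apache 2.2.
# (Install path is an assumption; it depends on where the plug-in was installed.)
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so

# Point the plug-in at its generated routing file, which maps the
# /jts, /ccm, /qm, and /rm context roots to the back-end CLM servers.
WebSpherePluginConfig /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml
```

The `plugin-cfg.xml` file itself is generated from the WebSphere configuration, so the routing rules follow the application server definitions rather than being hand-written in httpd.conf.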
Here are the specific versions of software we used:

Software | Version
---|---
CLM applications | 4.0.3 M2
IBM HTTP Server | 8.0.0.3 (Apache 2.2.15)
IBM DB2 Enterprise Server Edition | 9.7.0.6
IBM WebSphere Application Server | 8.0.0.3
This table lists the servers that were used for our testing:

Role | Manufacturer / Model | CPU | Number of CPU/Cores | CPU Speed | Memory | OS
---|---|---|---|---|---|---
IBM HTTP Server | IBM x3250 M4 | Intel Xeon E3-1240 v2 | 1 / 4 | 3.4 GHz | 16 GB | RedHat Enterprise Linux Server 64-bit v6.3
CLM - JTS Server | IBM x3550 M4 | Intel Xeon E5-2640 | 2 / 12 | 2.5 GHz | 32 GB | RedHat Enterprise Linux Server 64-bit v6.3
CLM - RTC Server #1 | IBM x3550 M4 | Intel Xeon E5-2640 | 2 / 12 | 2.5 GHz | 32 GB | RedHat Enterprise Linux Server 64-bit v6.3
CLM - RTC Server #2 | IBM x3550 M4 | Intel Xeon E5-2640 | 2 / 12 | 2.5 GHz | 32 GB | RedHat Enterprise Linux Server 64-bit v6.3
CLM - RQM Server #1 | IBM x3550 M4 | Intel Xeon E5-2640 | 2 / 12 | 2.5 GHz | 32 GB | RedHat Enterprise Linux Server 64-bit v6.3
CLM - RQM Server #2 | IBM x3550 M4 | Intel Xeon E5-2640 | 2 / 12 | 2.5 GHz | 32 GB | RedHat Enterprise Linux Server 64-bit v6.3
CLM - RRS Server | IBM x3550 M4 | Intel Xeon E5-2640 | 2 / 12 | 2.5 GHz | 32 GB | RedHat Enterprise Linux Server 64-bit v6.3
CLM - DB2 Server | IBM x3650 M4 | Intel Xeon E5-2640 | 2 / 12 | 2.5 GHz | 64 GB | RedHat Enterprise Linux Server 64-bit v6.3
The data repository was initially populated with 100K RTC work items, 100K RQM artifacts, and 200K RRC requirements. There were 5 projects in each RTC server, 5 projects in each RQM server, and 10 projects in the RM server.

Each RTC project contains 10K work items and 20 plans.

Each RQM project contains 250 Test Plans, 1250 Test Cases, 1284 Test Scripts, 375 Test Suites, 2500 Test Case Execution Records, 375 Test Suite Execution Records, 5000 Test Case Results, and 750 Test Suite Results.

In RRC, we used medium-sized project areas for this test; each consists of 22 modules with 20K artifacts across the modules, 1K traditional artifacts, 3 collections, and 15K comments.
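The repository totals follow from the per-project figures above; a quick arithmetic sketch of how they add up:

```python
# Repository sizing derived from the per-project figures above.
rtc_work_items = 2 * 5 * 10_000       # 2 RTC servers x 5 projects x 10K work items each
rrc_module_artifacts = 10 * 20_000    # 10 RM projects x 20K artifacts across each project's modules

print(rtc_work_items)        # 100000 -- the stated 100K RTC work items
print(rrc_module_artifacts)  # 200000 -- the stated 200K RRC requirements
```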
Test | CCM1 | CCM2 | QM1 | QM2 | RM | TOTALS | CCM TOTAL | QM TOTAL | RM TOTAL
---|---|---|---|---|---|---|---|---|---
Single RM | -- | -- | -- | -- | 84 | 84 | -- | -- | 84
Single CCM1 | 581 | -- | -- | -- | -- | 581 | 581 | -- | --
Single CCM2 | -- | 598 | -- | -- | -- | 598 | 598 | -- | --
Single QM1 | -- | -- | 581 | -- | -- | 581 | -- | 581 | --
Single QM2 | -- | -- | -- | 561 | -- | 561 | -- | 561 | --
CCM + RM | -- | 586 | -- | -- | 84 | 671 | 586 | -- | 84
QM + RM | -- | -- | -- | 558 | 86 | 644 | -- | 558 | 86
Dual CCM | 571 | 577 | -- | -- | -- | 1147 | 1147 | -- | --
Dual QM | -- | -- | 583 | 566 | -- | 1149 | -- | 1149 | --
CCM + QM | 599 | -- | -- | 597 | -- | 1196 | 599 | 597 | --
Dual CCM + RM | 603 | 602 | -- | -- | 82 | 1287 | 1205 | -- | 82
Dual QM + RM | -- | -- | 568 | 542 | 80 | 1189 | -- | 1109 | 80
Dual CCM + QM | 551 | 571 | 534 | -- | -- | 1656 | 1122 | 534 | --
Dual QM + CCM | 587 | -- | 530 | 557 | -- | 1675 | 587 | 1087 | --
Dual CCM + Dual QM | 556 | 528 | 555 | 556 | -- | 2195 | 1084 | 1111 | --
Dual CCM + Dual QM + RM | 532 | 536 | 560 | 545 | 78 | 2250 | 1068 | 1104 | 78
The chart below shows the maximum throughput observed through RTC server #2, for a variety of workloads. The estimated error for these measurements is +/- 20 transactions per second. The transaction rate for RTC server #2 is largely independent of the load on the other servers. There is only a slight fall-off in throughput for the most complete workloads (those involving both RQM servers, both RTC servers, and the RRC server).
The chart below shows the maximum throughput observed through RQM server #1, for a variety of workloads. The estimated error for these measurements is +/- 20 transactions per second. The transaction rate for RQM server #1 is largely independent of the load on the other servers.
The chart below shows the maximum throughput observed through RQM server #2, for a variety of workloads. The estimated error for these measurements is +/- 20 transactions per second. The transaction rate for RQM server #2 is largely independent of the load on the other servers.
The chart below shows the maximum throughput observed through the RM server, for a variety of workloads. The estimated error for these measurements is +/- 5 transactions per second. The transaction rate for the RM server is largely independent of the load on the other servers.
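The "largely independent" claim can be quantified directly from the results table: compare each server's single-server baseline against its throughput under the full "Dual CCM + Dual QM + RM" workload. A small sketch of that arithmetic, using the values from the table above:

```python
# Max throughput per server: single-server baseline vs. the full
# "Dual CCM + Dual QM + RM" workload, from the results table above.
baseline  = {"CCM1": 581, "CCM2": 598, "QM1": 581, "QM2": 561, "RM": 84}
full_load = {"CCM1": 532, "CCM2": 536, "QM1": 560, "QM2": 545, "RM": 78}

for server, base in baseline.items():
    falloff = 100 * (base - full_load[server]) / base
    print(f"{server}: {falloff:.1f}% fall-off")
```

The fall-off ranges from roughly 3% to 10% per server, which is small relative to the +/- 20 transactions-per-second measurement error quoted above.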
WebSphere Application Server JVM arguments:

```
-Xdump:none
-Xdump:heap+java:events=systhrow+user,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk
-Xgcpolicy:gencon
-Xmx8g -Xms8g -Xmn1g
-Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
-Xverbosegclog:gc.log
-Xdump:heap:file=/home/wasdumps/heapdump.%Y%m%d.%H%M%S.%pid.%seq.txt
-Xdump:java:file=/home/wasdumps/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt
```
In httpd.conf:

```
<IfModule worker.c>
    ThreadLimit         25
    ServerLimit         100
    StartServers        2
    MaxClients          2500
    MinSpareThreads     25
    MaxSpareThreads     500
    ThreadsPerChild     25
    MaxRequestsPerChild 0
</IfModule>
```