A study was conducted to examine the impact on DNG performance when the server is deployed with different virtual system and software configurations on Windows. Variations were based on the number of processors and the amount of memory allocated to the virtual system hosting the DNG server, and on the size of the Java heap allocated to the Requirements Management (RM) application.
A total of nine configurations were tested, which included varying:
Of the configurations tested, the 6 vCPU / 32 GB RAM / 16 GB heap configuration proved to be the minimum recommendation for supporting the various loads against a DNG server with approximately 500,000 artifacts.
For more details, see Test Results - Compare Configurations.
The following topology was used for this testing:
The specific versions of software used were:
| Software | Version |
|---|---|
| IBM Rational CLM Applications | 5.0 |
| IBM HTTP Server and Web Server Plugin for WebSphere | 8.5.5.2 |
| IBM WebSphere Application Server | 8.5.5.2 |
| IBM Tivoli Directory Server | 6.1 |
| Oracle Database | 11.2.0.3.0 |
This table lists the servers that were used for our testing:
| Role | Manufacturer / Model | CPU | Number of CPU/Cores | CPU Speed | Memory | Disk | Disk Capacity | OS |
|---|---|---|---|---|---|---|---|---|
| Reverse Proxy Server | IBM x3250 M3 | Intel Xeon CPU X3480 (1 quad-core) | 8 / 4 | 3.07 GHz | 15.5 GB | RAID 0 – SAS Disk x 1 | 279 GB | RedHat Enterprise Linux Server 6.3, 64-bit |
| LDAP Server | VMWare Virtual Platform | Intel Xeon CPU E5-2665 0 (Virtual Processor) | 2 / 2 | 2.4 GHz | 2 GB | VMWare Virtual Disk - SCSI | 232 GB | Windows Server 2003 Enterprise SP2, 32-bit |
| JTS Server | VMWare Virtual Platform on IBM Flex System x240 Compute Node (Hypervisor) | Intel Xeon CPU E5-2670 0 (Virtual Processor) | 4 / 4 | 2.6 GHz | 16 GB | VMWare Virtual Disk - SCSI | 60 GB | Windows Server 2012 Standard, 64-bit |
| DNG Server | VMWare Virtual Platform on IBM Flex System x240 Compute Node (Hypervisor) | Intel Xeon CPU E5-2670 0 (Virtual Processor) | 4 - 8 / 4 - 8 (varied, depending on test configuration) | 2.6 GHz | 16 - 64 GB (varied, depending on test configuration) | VMWare Virtual Disk - SCSI | 60 GB | Windows Server 2012 Standard, 64-bit |
| Database Server | IBM x3650 M3 | Intel Xeon CPU X5667 (2 quad-core) | 16 / 8 | 3.07 GHz | 31.3 GB | RAID 10 – SAS Disk x 8 (with write-back caching) | 279 GB | RedHat Enterprise Linux Server 6.3, 64-bit |
The repository contained the following data:
JTS index size on disk = 2.6 GB
RM index size on disk = 40.2 GB
Each project contained the following data:
Users were distributed across many user groups, and each user group repeatedly ran at least one script (use case). Tests were run as quickly as possible, with a one-minute "think time" between pages for each user. Each test simulated between 50 and 300 virtual users across multiple stages. Users were loaded in at a rate of one user every seven seconds. After the initial stage, 25 users were added at each subsequent stage; each stage included a one-hour measurement period preceded by a thirty-minute settle time.
For every configuration tested, processor utilization and page response times were measured and their patterns were analyzed to determine the impact of the additional system resources.
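As an illustration of this staged workload, the following sketch computes the ramp-up, settle, and measurement timeline for each stage. It is not part of the original test harness; it assumes an initial stage of 50 users and that the users added in each stage arrive at the same one-per-seven-seconds rate, with the other parameters taken from the description above.

```python
# Hypothetical sketch of the staged load schedule described above.
# Assumptions: initial stage of 50 users, +25 users per later stage up to 300,
# new users join at one user every 7 seconds, and every stage runs a
# 30-minute settle time followed by a 60-minute measurement period.

RAMP_SECONDS_PER_USER = 7
SETTLE_MINUTES = 30
MEASURE_MINUTES = 60

def stage_schedule(start_users=50, max_users=300, step=25):
    """Yield (stage, total users, added users, ramp/settle/measure minutes)."""
    previous = 0
    stage = 1
    users = start_users
    while users <= max_users:
        added = users - previous
        ramp_minutes = added * RAMP_SECONDS_PER_USER / 60
        yield stage, users, added, ramp_minutes, SETTLE_MINUTES, MEASURE_MINUTES
        previous = users
        users += step
        stage += 1

if __name__ == "__main__":
    total_minutes = 0
    for stage, users, added, ramp, settle, measure in stage_schedule():
        total_minutes += ramp + settle + measure
        print(f"Stage {stage:2d}: {users:3d} users (+{added}), "
              f"ramp {ramp:4.1f} min, settle {settle} min, measure {measure} min")
    print(f"Approximate total test time: {total_minutes / 60:.1f} hours")
```

Under these assumptions the run works out to 11 stages and roughly 17 hours of total test time per configuration.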
This table shows the use cases and the number of simulated users who were repeatedly running each script:
| Use case | Description | Percentage of 100 users |
|---|---|---|
| Copy/Paste/Move/Delete; Display module history; Hover over and open a collection; Manage folders; Open the RM admin page | Open a module that contains 1500 artifacts, select 25 artifacts, move them by using the copy and paste functions, and then delete the copied artifacts. Open a module that contains 1500 artifacts and show the module history and the module audit history. Display all of the collections, hover over a collection, and then open it. Click “Show Artifacts” to display the folder tree and then create a folder. Move the new folder into another folder and then delete the folder that you just created. Open the rm/admin page. | 3% |
| Create an artifact | Create non-module artifacts. | 4% |
| Create a collection | Create collections that contain 10 artifacts. | 3% |
| Create a module artifact end-to-end scenario | Open a medium module that contains 1500 artifacts, create a module artifact, edit the new artifact, and delete the new artifact. | 20% |
| Create a small module artifact end-to-end scenario | Open a small module that contains 500 artifacts, create a module artifact, edit that new artifact, and delete the new artifact. | 11% |
| Hover over a module artifact and edit it | Open a module that contains 1500 artifacts and hover over an artifact. When the rich hover is displayed, edit the artifact text. | 2% |
| Open the project dashboard | Open a dashboard that displays the default dashboard. | 4% |
| Search by ID and string | Open a project, select a folder, search for an artifact by its numeric ID, and click a search result to display an artifact. Search for artifacts by using a generic string search that produces about 50 results. | 9% |
| Scroll 20 pages in a module | Open a module that contains 1500 artifacts and then scroll through 20 pages. | 19% |
| Switch the module view | Open a module that contains 1500 artifacts and then change the view to add columns that display user-defined attributes. | 18% |
| Upload a 4 MB file as a new artifact | Upload a file and create an artifact. | 7% |
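For a concrete sense of how this mix translates into virtual users, the sketch below distributes a given total number of users across the use cases according to the percentages in the table. It is illustrative only: the dictionary keys abbreviate the table's use-case names (the first entry collapses the combined row at the top of the table), `users_per_use_case` is a hypothetical helper, and largest-remainder rounding is a choice made here, not something stated in the report.

```python
# Hypothetical sketch: turn the percentage mix from the table above into
# whole-number user counts for a given total, e.g. 100 or 300 virtual users.
# Largest-remainder rounding keeps the counts summing exactly to the total.

WORKLOAD_MIX = {
    "Copy/Paste/Move/Delete and other admin scripts": 3,  # combined first row
    "Create an artifact": 4,
    "Create a collection": 3,
    "Create a module artifact end-to-end scenario": 20,
    "Create a small module artifact end-to-end scenario": 11,
    "Hover over a module artifact and edit it": 2,
    "Open the project dashboard": 4,
    "Search by ID and string": 9,
    "Scroll 20 pages in a module": 19,
    "Switch the module view": 18,
    "Upload a 4 MB file as a new artifact": 7,
}

def users_per_use_case(total_users):
    """Allocate total_users across the mix proportionally (largest remainder)."""
    raw = {name: total_users * pct / 100 for name, pct in WORKLOAD_MIX.items()}
    counts = {name: int(value) for name, value in raw.items()}
    shortfall = total_users - sum(counts.values())
    # Hand any leftover users to the use cases with the largest fractional parts.
    for name in sorted(raw, key=lambda n: raw[n] - counts[n], reverse=True)[:shortfall]:
        counts[name] += 1
    return counts

if __name__ == "__main__":
    for name, users in users_per_use_case(300).items():
        print(f"{users:3d}  {name}")
```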
-Xmx4g -Xms4g -Xmn1g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000 -Xverbosegclog:gc.log -XX:MaxDirectMemorySize=1G -Xgcpolicy:gencon -Xloaminimum0.02
The WebContainer thread pool was set to a minimum of 500 and a maximum of 500.
For 16G RAM, JVM arguments were set to:
-Xmx8g -Xms8g -Xmn2g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000 -Xverbosegclog:gc.log -XX:MaxDirectMemorySize=1G -Xgcpolicy:gencon -Xloaminimum0.02
For 32G RAM, JVM arguments were set to:
-Xmx16g -Xms16g -Xmn5700M -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000 -Xverbosegclog:gc.log -XX:MaxDirectMemorySize=1G -Xgcpolicy:gencon -Xloaminimum0.02
For 64G RAM, JVM arguments were set to:
-Xmx24g -Xms24g -Xmn8G -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000 -Xverbosegclog:gc.log -XX:MaxDirectMemorySize=1G -Xgcpolicy:gencon -Xloaminimum0.02
Note: The heap was sized so that it did not exceed 50% of the system RAM, up to a maximum heap size of 24 GB.
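The sizing rule in the note can be expressed as a small calculation. The sketch below is only an illustration of that rule (50% of RAM, capped at 24 GB); `heap_size_gb` is a hypothetical helper, and the output reproduces the heap sizes used for the 16 GB, 32 GB, and 64 GB configurations above.

```python
# Illustrative sketch of the heap-sizing note above: the RM JVM heap was kept
# at no more than 50% of the virtual machine's RAM, capped at 24 GB.

MAX_HEAP_GB = 24

def heap_size_gb(ram_gb):
    """Return the heap size implied by the 50%-of-RAM rule with a 24 GB cap."""
    return min(ram_gb * 0.5, MAX_HEAP_GB)

if __name__ == "__main__":
    for ram in (16, 32, 64):
        print(f"{ram} GB RAM -> -Xmx{heap_size_gb(ram):g}g")  # 8g, 16g, 24g
```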
In httpd.conf:
<IfModule worker.c>
    ThreadLimit          25
    ServerLimit          80
    StartServers          1
    MaxClients         2000
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>
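As a quick sanity check on this worker MPM configuration (a sketch, not part of the original tuning work): with the worker MPM, the effective concurrency ceiling is ServerLimit × ThreadsPerChild, so MaxClients 2000 is fully backed by ServerLimit 80 and ThreadsPerChild 25.

```python
# Sketch: verify that the worker MPM settings above are internally consistent.
# With the Apache worker MPM, the concurrency ceiling is
# ServerLimit * ThreadsPerChild, and MaxClients must not exceed it.

settings = {
    "ThreadLimit": 25,
    "ServerLimit": 80,
    "MaxClients": 2000,
    "ThreadsPerChild": 25,
}

ceiling = settings["ServerLimit"] * settings["ThreadsPerChild"]
print(f"ServerLimit * ThreadsPerChild = {ceiling}")            # 80 * 25 = 2000
assert settings["MaxClients"] <= ceiling                       # 2000 <= 2000
assert settings["ThreadsPerChild"] <= settings["ThreadLimit"]  # 25 <= 25
print("MaxClients is fully backed by available worker threads")
```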
For the Oracle database, the following initialization parameters were set:
processes = 500
shared_servers = 20
dispatchers = 5
The comparison charts for average processor utilization and page response time (Figure 2 and Figure 3, respectively) indicate the following:
Status icon key: