Revision 15 - 2025-08-22 - 10:57:22 - JayrajsinhGohil

DOORS Next 7.2.0 Cross Server Linking Performance Report

Authors: Jayrajsinh Gohil
Build basis: 7.2.0

Introduction

This article presents the results of performance testing conducted on DOORS Next (DN) version 7.2.0 for cross-server linking scenarios in which link constraint rules are enforced. The test environment consists of requirements configurations with a data shape of 1 million artifacts on both the local and the remote DN server. Each component was configured with 500 link constraint rules under an integrated global configuration (GC) setup, with requirements managed in DN components.

Summary of Test Scenarios

The performance test was executed with a load of 500 concurrent users generating multiple links (25 links) between two artifacts hosted on separate servers, governed by link constraint rules. The primary objective was to evaluate the system's response time under high load conditions when link constraints are enabled across servers.

Linking with Constraints (within-server vs cross-server): We executed a linking scenario with constraints between artifacts, covering both within-server and cross-server configurations. The objective was to assess the response time impact when creating multiple links (25 links) under constrained conditions. Testing was conducted across three component sizes: small, medium, and large. The results indicated that cross-server linking takes longer than within-server linking, with an observed increase ranging from approximately 0.3 to 1.9 seconds.

Cross-Server Linking (with vs without constraints): Cross-server multi-link (25 links) creation was tested with and without link constraint rules across three component sizes: small, medium, and large. The results showed an increase in response time when link constraint rules were applied. Specifically, the response time increased by approximately 3 to 4.9 seconds compared to scenarios where constraints were not applied.

Standard Disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.


Data Shapes

The data shape for version 7.2.0 consists of a small number of standard-sized components distributed across five small project areas. The test environment is enabled for configuration management, and all operations are performed within the context of a global configuration.

For each component across all five projects on both DN servers, 500 random link constraint rules have been defined. The configuration includes one main Global Configuration (GC) component and two child GC components, each referencing 215 RM components from both DN application servers.

Link Constraints: Each component includes 500 random link constraint rules on both DN servers. These rules allow links to be created between the two servers when their conditions are satisfied. For performance testing, we used Requirement as the source and target artifact type, and Embeds as the link type, for each component on both servers to create links between two artifacts.

LinkConstraints_Rules.png
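The rule evaluation described above can be sketched as follows. This is a simplified illustration only, not DOORS Next's implementation; the rule tuple shape and the `is_link_allowed` helper are hypothetical.

```python
# Hypothetical sketch: a link is permitted only when some constraint rule
# matches its (source type, link type, target type) triple.
from typing import NamedTuple

class ConstraintRule(NamedTuple):
    source_type: str
    link_type: str
    target_type: str

def is_link_allowed(rules, source_type, link_type, target_type):
    """Return True if any constraint rule permits this link."""
    return any(r == (source_type, link_type, target_type) for r in rules)

# The rule shape used in this report: Requirement --Embeds--> Requirement
rules = [ConstraintRule("Requirement", "Embeds", "Requirement")]

print(is_link_allowed(rules, "Requirement", "Embeds", "Requirement"))     # True
print(is_link_allowed(rules, "Requirement", "Satisfies", "Requirement"))  # False
```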

Global Configurations

We created global configurations (GCs) and added the DN components as contributions.

  • Main GC component
  • 2 Child GC components
  • Each Child GC contains 215 contributions
GC_Configurations.png

Artifact Counts

We have 3 different standard module sizes:

  • Small: 200 artifacts
  • Medium: 1500 artifacts
  • Large: 10,000 artifacts

| Artifact Type | Small Component | Medium Component | Large Component |
| Number of large modules | 0 | 0 | 1 |
| Number of medium modules | 0 | 3 | 0 |
| Number of small modules | 20 | 0 | 0 |
| Total module artifacts | 4000 | 4500 | 10000 |

We use these standard components to create 5 small-sized projects. Artifacts are imported via components to create our standardized shape. Within the 1 million artifacts on each server, each artifact inside a module has 20 links to a matching "Linksource module artifact", resulting in 18,100,000 links. The remaining 2 million links are with the link source module.
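The artifact and link counts can be cross-checked with a few lines of arithmetic; all inputs below are taken from the component and module tables in this section:

```python
# Cross-check of the data-shape arithmetic from the tables in this section.
module_artifacts = {
    "small": 20 * 200,    # 20 small modules x 200 artifacts
    "medium": 3 * 1500,   # 3 medium modules x 1500 artifacts
    "large": 1 * 10000,   # 1 large module x 10000 artifacts
}

# Per project: 36 small + 6 medium + 1 large components; 5 projects per server.
per_project = (36 * module_artifacts["small"]
               + 6 * module_artifacts["medium"]
               + 1 * module_artifacts["large"])
per_server = 5 * per_project

links_per_artifact = 20
print(per_server)                       # 905000 module artifacts per server
print(per_server * links_per_artifact)  # 18100000 links per server
```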

| Server Data | Local RM2 Server | Remote RM3 Server |
| Total Projects | 5 | 5 |
| Small Component | 36 | 36 |
| Medium Component | 6 | 6 |
| Large Component | 1 | 1 |
| Total Components/Project | 43 | 43 |
| Total Components/Server | 215 | 215 |
| Total Module Artifacts | 905,000 | 905,000 |
| RM Links (artifacts * 20 links) | 18,100,000 | 18,100,000 |


Performance Test Results

We performed a 500-user load test to assess cross-server linking functionality. The test involved selecting a remote server, creating 25 links between two artifacts, and subsequently removing those links—all while link constraint rules were in effect. The RPT schedule was configured with 500 users—78% accessing small components, 20% medium, and 2% large—running for a 1-hour steady-state duration.
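As a rough illustration of how such a schedule distributes virtual users across component sizes, the sketch below samples 500 users with the 78/20/2 weighting described above. The sampling code is purely illustrative and is not RPT's scheduling mechanism:

```python
import random

random.seed(42)  # reproducible demo

# 500 virtual users split 78% / 20% / 2% across component sizes.
weights = {"small": 78, "medium": 20, "large": 2}
users = random.choices(list(weights), weights=list(weights.values()), k=500)

# Counts land close to the nominal 390 / 100 / 10 split.
print({size: users.count(size) for size in weights})
```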

The test environment contained 1 million artifacts on each RM server.

The objective was to validate the cross-server linking response time with link constraints enabled, using an Oracle database backend.

All tests were conducted using a DOORS Next data source on the 7.2.0 M7 build.

Test Configuration (DN 7.2.0 on Oracle):

  • Performed on a Linux/Oracle environment
  • Performance workload as detailed above
  • Number of contributions in global configuration: 430
  • XLinkConstraintsChecker thread pool size: 20
  • Repository size: 1 million artifacts on each DN server
  • Number of users: 500
  • Run duration: 1-hour steady state

Resource Utilization:

As part of performance testing, CPU and Disk usage were monitored across the 4 systems involved:

  • DN Application Server 1
  • DN Application Server 2
  • JTS/GC Application Server
  • Oracle DB with Application and LQE DB

Metrics were collected during a 1-hour steady state test to ensure consistent load conditions.

CPU Utilization

  • The DN database server (OracleDB) showed average CPU usage of ~48.51% throughout the run.
  • The local DN application server (DN Server1) showed average CPU usage of ~34% throughout the run.
  • The remote DN application server (DN Server2) showed average CPU usage of ~13.78% throughout the run.
  • The JTS and GC server (JTS/GC Server) showed average CPU usage of ~3% throughout the run.
CPU__Utilization.png

Disk Utilization
  • The DN database server (OracleDB) showed disk utilization of ~69.45% throughout the run.
  • The JTS and GC server (JTS/GC Server) showed disk utilization of ~1% throughout the run.
  • The local DN application server (DN Server1) showed disk utilization of ~2.18% throughout the run.
  • The remote DN application server (DN Server2) showed disk utilization of ~2.43% throughout the run.
  • The LQE application server (LQE Server) showed disk utilization of ~0.1% throughout the run.

Disk_Percentage__Busy_OracleDB.png

Disk_Percent_Busy_JTSGC.png

Disk_Percent_Busy_DNServer1.png

Disk_Percent_Busy_DNServer2.png

Disk_Percent_Busy__LQEServer.png

Page Element Throughput

The chart below shows how throughput (measured in transactions per second) scales with user load. A "page element" refers to an individual HTTP call. Throughput reached 142 transactions per second at 500 users.

PageElemetThroughput.png

Response Time

Creating 25 cross-server multi-links between two artifacts takes approximately 15 to 16 seconds across all three component sizes (Small, Medium, and Large) under a 500-user concurrent load.

| Page Name | Small Component (ms) | Medium Component (ms) | Large Component (ms) |
| Cross-server multi-link creation | 15128.3 | 15128.8 | 15915.8 |


RPT_Top10Page_ResponseTime.png

RPT_AllPages_ResponseTime.png


Linking Scenario with Constraints (Within-Server vs Cross-Server)

We executed a linking scenario to evaluate the response time when creating 25 links between two artifacts governed by link constraint rules. The test focused on both:

  • Within-server linking
  • Cross-server linking

Test Configuration

  • XLinkConstraintsChecker thread pool size: 20
  • Repository size: 1 million artifacts on each DN server
  • Users: 10
  • Duration: 1-hour steady state

Observation

  • Cross-server linking between two artifacts showed higher response times than within-server linking.
  • The increase in response time for cross-server scenarios ranged from 0.3 to 1.9 seconds across the three component sizes, as shown in the table below.
Average Response Time in Milliseconds: Within-Server vs Cross-Server Linking (With Constraints)

| Component Size | Create Multilinks within Server (ms) | Create Multilinks across Servers (ms) |
| Small | 6117.53 | 6740.12 |
| Medium | 6328.21 | 8231.17 |
| Large | 6599.71 | 6922.90 |
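The quoted 0.3 to 1.9 second overhead follows directly from these measurements; a quick derivation from the table values (in milliseconds):

```python
# Cross-server minus within-server response time, per component size.
within = {"small": 6117.53, "medium": 6328.21, "large": 6599.71}
cross  = {"small": 6740.12, "medium": 8231.17, "large": 6922.90}

# Overhead in seconds, rounded to two decimals.
deltas_s = {size: round((cross[size] - within[size]) / 1000, 2) for size in within}
print(deltas_s)  # {'small': 0.62, 'medium': 1.9, 'large': 0.32}
```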


Cross-Server Linking Scenario (With vs Without Constraints)

This test measured the difference in response times for cross-server multi-link creation with and without link constraint rules applied. The test focused on both:

  • Without link constraint rules applied
  • With link constraint rules applied

Test Configuration

  • XLinkConstraintsChecker thread pool size: 20
  • Repository size: 1 million artifacts on each DN server
  • Users: 10
  • Duration: 1-hour steady state

Observation

  • Cross-server linking between two artifacts with link constraint rules in place showed higher response times compared to linking without constraints.
  • The increase in response time ranged from 3 to 4.9 seconds across three component sizes, as shown in the table below.

| Component Size | Without Constraints (ms) | With Constraints (ms) |
| Small | 3124.55 | 6740.12 |
| Medium | 3325.75 | 8231.17 |
| Large | 3877.14 | 6922.90 |
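The constraint-checking overhead per component size can be derived from the table values above (in milliseconds):

```python
# Response-time overhead of enabling link constraints, per component size.
without = {"small": 3124.55, "medium": 3325.75, "large": 3877.14}
with_c  = {"small": 6740.12, "medium": 8231.17, "large": 6922.90}

# Overhead in seconds, rounded to two decimals.
overhead_s = {size: round((with_c[size] - without[size]) / 1000, 2) for size in without}
print(overhead_s)  # {'small': 3.62, 'medium': 4.91, 'large': 3.05}
```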


Performance Tuning

To evaluate the impact of thread pool size on the performance of cross-server linking with link constraints enabled, we conducted performance tests using different values for the XLinkConstraintsChecker thread pool parameter: 5, 10, 15, 20, 25, 30, 35, and 40.

Each configuration was tested under a 500-user load for the cross-server multi-linking scenario. Based on the results, a thread pool size of 20 provided the best balance of response time and resource usage, delivering roughly a 37% improvement in response time compared to the default configuration using a thread pool size of 10.
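Conceptually, the constraint checker fans validation work out to a fixed-size worker pool, which is why the pool size matters under concurrent load. The sketch below illustrates that pattern with Python's standard library; it is purely illustrative and `check_constraint` is a hypothetical stand-in, not DOORS Next code:

```python
from concurrent.futures import ThreadPoolExecutor

def check_constraint(link_id):
    """Hypothetical stand-in for one constraint-rule evaluation."""
    return link_id % 2 == 0  # dummy pass/fail result

links = range(25)  # one multi-link operation creates 25 links

# A pool that is too small serializes the checks; one that is too large
# adds contention on shared resources (here, just the interpreter).
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(check_constraint, links))

print(sum(results))  # 13 links pass the dummy check (even ids 0..24)
```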

Response Time

The table below shows the average response time (in milliseconds) for creating cross-server multi-links at different thread pool sizes.

| Thread Size | Small Component (ms) | Medium Component (ms) | Large Component (ms) |
| 5 | 204250.5 | 197453.18 | 214238.12 |
| 10 | 23920 | 23884.4 | 25266 |
| 15 | 19748.5 | 20301.5 | 21810.6 |
| 20 | 15128.3 | 15128.8 | 15915.8 |
| 25 | 15145.7 | 14742 | 15105.3 |
| 30 | 14846.5 | 14311.4 | 14939.6 |
| 35 | 16480.14 | 15985.46 | 18064.05 |
| 40 | 16935.81 | 15806.68 | 16826.62 |
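The improvement of thread pool size 20 over the default size 10 can be computed directly from the table (values in milliseconds):

```python
# Response times at thread pool size 10 (default) vs 20, per component size.
rt10 = {"small": 23920.0, "medium": 23884.4, "large": 25266.0}
rt20 = {"small": 15128.3, "medium": 15128.8, "large": 15915.8}

improvement = {
    size: round(100 * (rt10[size] - rt20[size]) / rt10[size], 1)
    for size in rt10
}
print(improvement)  # roughly a 37% improvement for all three sizes
```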
CrossServerMultiLinkResponseTime_vs_ThreadSize.png

CPU Utilization

The table below shows the average CPU utilization (in percentage) for the Oracle DB server and the two DN application servers during cross-server multi-linking performance tests.

| Thread Size | Oracle DB Server (%) | DN Server 1 (%) | DN Server 2 (%) |
| 5 | 34.73 | 34.12 | 8.94 |
| 10 | 46.6 | 36.19 | 13.35 |
| 15 | 48.06 | 35.42 | 12.91 |
| 20 | 48.51 | 34.14 | 13.78 |
| 25 | 50.96 | 35.7 | 15.02 |
| 30 | 51.69 | 34.33 | 13.74 |
| 35 | 51.6 | 35.37 | 13.91 |
| 40 | 51 | 34.32 | 13.6 |
OracleDBCPU_Vs_ThreadSize.png

DNServer1CPU_Vs_ThreadSize.png

DNServer2CPU_Vs_ThreadSize.png

Test Environment Details:

The performance test environment consists of:

  • 2 DN Servers
  • 1 JTS/GC Server
  • 1 LQE Server
  • 1 Oracle Database Server with Application DB and LQE DB

  • DN Artifacts: 1 Million on each server
  • DN Indexed Resources: 40 Million (includes links) across both DN application servers
  • LQE RS as LDX

The servers in the test environment are all physical machines. The hardware specifications are listed below.

Hardware Specifications

| Role | Server | Number of machines | Machine type | Processor | Total processors | Memory | Storage | Network interface | OS and version |
| Proxy Server | IBM HTTP Server and WebSphere Plugin | 1 | IBM System x3550 M3 | 1 x Intel Xeon X3480 3.07 GHz (quad-core) | 8 | 16 GB | RAID 0 – 279 GB SAS Disk x 1 | Gigabit Ethernet | Red Hat Enterprise Linux Server 7 (Maipo) |
| JTS/GC | Embedded WebSphere Liberty | 1 | IBM System x3550 M3 | 2 x Intel Xeon X5649 2.53 GHz (six-core) | 24 | 32 GB | RAID 1 – 136 GB SAS Disk x 2 | Gigabit Ethernet | Red Hat Enterprise Linux Server 8 (Maipo) |
| DN Server1 | Embedded WebSphere Liberty | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) | 24 | 32 GB | RAID 5 – 279 GB SAS Disk x 4 | Gigabit Ethernet | Red Hat Enterprise Linux Server 7 (Maipo) |
| DN Server2 | Embedded WebSphere Liberty | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) | 24 | 32 GB | RAID 5 – 279 GB SAS Disk x 4 | Gigabit Ethernet | Red Hat Enterprise Linux Server 7 (Maipo) |
| LQE RS | Embedded WebSphere Liberty | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) | 24 | 128 GB | RAID 5 – 279 GB SAS Disk x 4 | Gigabit Ethernet | Red Hat Enterprise Linux Server 9 (Maipo) |
| Database | Oracle 19c Enterprise Edition | 1 | IBM System x3550 M3 | 2 x Intel Xeon E5-2640 2.5 GHz (six-core) | 24 | 64 GB | RAID 10 – 279 GB SAS Disk x 16 | Gigabit Ethernet | Red Hat Enterprise Linux Server 8 (Ootpa) |

Abbreviations

  • JTS: Jazz Team Server
  • JAS: Jazz Authorization Server
  • GCM: Global Configuration Management
  • DN: DOORS Next
  • RM: Requirements Management
  • Jazz Reporting components:
    • LQE RS: Lifecycle Query Engine (relational store)
Topic attachments
| Attachment | Size | Date | Who | Comment |
| Average_DiskActivity.png | 114.6 K | 2025-07-25 - 08:45 | JayrajsinhGohil | Disk Activity |
| CPU__Utilization.png | 520.0 K | 2025-07-31 - 12:01 | JayrajsinhGohil | CPU Utilization |
| CrossServerLinkCreation_ResponseTime.png | 334.7 K | 2025-07-25 - 08:49 | JayrajsinhGohil | Cross Server Link Creation Response Time |
| CrossServerMultiLinkResponseTime_vs_ThreadSize.png | 172.4 K | 2025-07-31 - 13:21 | JayrajsinhGohil | CrossServerMultiLinkResponseTime_vs_ThreadSize |
| DNServer1CPU_Vs_ThreadSize.png | 116.7 K | 2025-08-22 - 10:23 | JayrajsinhGohil | DNServer1CPU_Vs_ThreadSize |
| DNServer2CPU_Vs_ThreadSize.png | 114.8 K | 2025-08-22 - 10:24 | JayrajsinhGohil | DNServer2CPU_Vs_ThreadSize |
| Disk_Percent_Busy_DNServer1.png | 558.4 K | 2025-08-22 - 10:49 | JayrajsinhGohil | Disk utilization percentage - DNServer1 |
| Disk_Percent_Busy_DNServer2.png | 561.8 K | 2025-08-22 - 10:49 | JayrajsinhGohil | Disk utilization percentage - DNServer2 |
| Disk_Percent_Busy_JTSGC.png | 510.4 K | 2025-07-31 - 11:27 | JayrajsinhGohil | Disk utilization percentage - JTSGCServer |
| Disk_Percent_Busy__LQEServer.png | 474.9 K | 2025-08-05 - 10:41 | JayrajsinhGohil | Disk utilization percentage - LQE Server |
| Disk_Percentage__Busy_OracleDB.png | 608.9 K | 2025-08-05 - 10:32 | JayrajsinhGohil | Disk utilization percentage - OracleDB (App + LQE DB) |
| GC_Configurations.png | 831.5 K | 2025-07-25 - 10:28 | JayrajsinhGohil | GC_Configurations |
| LinkConstraints_Rules.png | 964.9 K | 2025-07-25 - 10:36 | JayrajsinhGohil | LinkConstraints_Rules |
| OracleDBCPU_Vs_ThreadSize.png | 114.1 K | 2025-07-31 - 13:22 | JayrajsinhGohil | OracleDBCPU_Vs_ThreadSize |
| PageElemetThroughput.png | 240.8 K | 2025-07-25 - 08:46 | JayrajsinhGohil | Page Element Throughput |
| RPT_AllPages_ResponseTime.png | 1265.2 K | 2025-07-25 - 09:14 | JayrajsinhGohil | RPT_ResponseTime_PageSummary |
| RPT_Top10Page_ResponseTime.png | 567.0 K | 2025-07-25 - 10:10 | JayrajsinhGohil | RPT_Top10_PagesResponseTime |