
Load testing for ELM on Hybrid Cloud v1.0

Authors: VaughnRokosz, SameerkumarKalaria, RamnarayanKumar, JayrajsinhGohil, PriyankaKarande, AjayTaran
Build basis: 7.1

Introduction

IBM® Engineering Lifecycle Management (ELM) consists of a set of interconnected web applications on multiple physical or virtual server instances. Engineering Lifecycle Management on Hybrid Cloud v1.0 provides containers of ELM applications coupled with an operator that offers the ability to quickly deploy ELM on any of the following environments:

  • Red Hat® OpenShift® cluster platform (on-premise)
  • Red Hat OpenShift cluster on IBM Cloud®
  • IBM Cloud Kubernetes Services
  • Other OpenShift or Kubernetes services in private clouds, or on-premises

This article provides the results of performance testing for Engineering Lifecycle Management (ELM) applications using the IBM Cloud Kubernetes service. The objective of the performance testing was to determine the user loads that the small and medium ELM application containers can support, and to identify the bottlenecks that limit them.

Summary of results

The supported user loads for the applications are shown below. In this table, the "Model" column shows the number of users defined for the small and medium deployments in the ELM 7.1 sizing overview. The "Actual" column shows the user loads that could be handled during the testing. Higher user loads can be tolerated, but response times will degrade.

The limiting factor is the number of CPUs assigned to the container. Once the CPU usage reaches 100%, response times degrade. For Report Builder, disk speed is also a bottleneck.
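
Since container CPU limits drive the results throughout this article, the sketch below shows how sizes like "small" and "medium" translate into Kubernetes resource settings. This is a minimal illustration using the Kubernetes Python client; the image reference is hypothetical, the sizes shown are the ERM values (they vary by application), and in a real deployment the ELM operator derives these settings from its custom resource.

  # Minimal sketch: mapping the container sizes used in this article onto
  # Kubernetes resource requests/limits. Illustration only; the ELM
  # operator manages this in a real deployment.
  from kubernetes import client

  # ERM sizes from this article; other applications use different values
  # (for example, the small Report Builder container is 2 CPUs / 4G).
  SIZES = {
      "small":  {"cpu": "4", "memory": "8Gi"},
      "medium": {"cpu": "8", "memory": "16Gi"},
  }

  def elm_container(name: str, image: str, size: str) -> client.V1Container:
      res = SIZES[size]
      return client.V1Container(
          name=name,
          image=image,  # hypothetical image reference
          resources=client.V1ResourceRequirements(
              requests=res,  # scheduling guarantee
              limits=res,    # hard cap: the bottleneck observed in these tests
          ),
      )

  erm = elm_container("erm", "example-registry/elm/rm:7.1", "medium")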

Abbreviations:

  • JTS: Jazz Team Server
  • JAS: Jazz Authorization Server
  • GCM: Global Configuration Management
  • EWM: Engineering Workflow Management
  • ERM: Engineering Requirements Management DOORS Next
  • ETM: Engineering Test Management
  • Jazz Reporting components
    • DCC: Data Collection Component (part of Jazz Reporting Service)
    • RB: Report Builder (part of Jazz Reporting Service)
    • LQE rs: Lifecycle Query Engine (Relational store)
    • LDX rs: Link Index Provider (Relational store) - service provided by LQE in ELM on HC v1.0
  • PUB: PUB Document Builder
  • ENI: Engineering Lifecycle Optimization - Engineering Insights

Standard disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.

Test environment

The high-level architecture for Engineering Lifecycle Management on Hybrid Cloud is shown below.

The performance test environment was an instance of this architecture hosted in the IBM Cloud, using the IBM Cloud Kubernetes service. The Kubernetes cluster was configured with 8 worker nodes, each with 32 CPUs and 128G of RAM.

A bare metal database server running Oracle 19c was used for the application databases. This server was configured with 1 terabyte of RAM and 96 CPUs. A RAID array of 16 NVME drives was used for the Oracle storage subsystem.

The performance tests focused on the ELM application containers, and applied load to containers of different sizes. The tests did not explicitly test the ELM operator or other parts of the IBM Cloud infrastructure.

Test data

The repositories used for testing were of medium size:

  • 1 million ERM artifacts
  • 1 million ETM artifacts
  • 100,000 EWM work items

The sizes of the application databases were:

Application      Database size (GB)
ERM DOORS Next   240
LQE              203
EWM               98
ETM               67

ERM container testing

This section provides analysis of the performance testing conducted for IBM Engineering Requirements Management DOORS Next. Two configurations were tested:

  • A small container (4 CPUs, 8G RAM)
  • A medium container (8 CPUs, 16G RAM)

Workload description

Two workloads were used in the testing:

  • A standard workload that simulates a wide range of ERM operations
  • A views-only workload that simulates opening module views with varying filters

There were two variants of each of these workloads: one with link validity enabled, and one with link validity disabled. All workloads were executed in a global configuration having 250 contributions.

Summary of results

The test results are summarized below:
Workload     Link Validity   Small (4 CPU, 8G RAM)   Medium (8 CPU, 16G RAM)
Standard     Enabled         25 users                125 users
Standard     Disabled        50 users                200 users
Views only   Enabled         25 users                50 users
Views only   Disabled        50 users                100 users

The bottleneck is CPU. Adding more CPUs allows the container to support a higher user load.


Note that the workloads simulate opening module views that display link columns. If link validity is enabled, there is additional CPU load and the max user load is slightly lower than when link validity is disabled.

Small container

The small ERM container was configured with 4 CPUs and 8G of RAM.

Performance analysis - standard workload

The test results from executing the standard workload are shown below.

  1. Performance Stability
    • Performance degrades (sometimes significantly) when the user load exceeds 25-50 users
    • With link validity disabled, the system performs well until 50-75 users
  2. CPU Utilization
    • CPU reaches 100% between 75 and 100 users
  3. Response Time
    • Response time increases significantly between 25 and 50 users
    • Response time increases significantly between 50 and 75 users when link validity is disabled
  4. Throughput
    • Maximum throughput is 20 transactions per second
  5. Conclusion
    • The small environment can handle workloads up to 25 users (50 users when link validity is disabled). The bottleneck is CPU.

CPU

In this table, "Average CPU Usage" is measured in cores; the percentage is that value divided by the 4 CPUs allocated to the small container (for example, 3.18 / 4 ≈ 80%).

User Load   Average CPU Usage   CPU Usage Percentage
10          1.35                34%
25          2.35                59%
50          3.18                80%
75          3.79                95%
100         3.98                100%
125         4                   100%


Response Times

Response times increase between 25 and 50 users. At 75 users, when the CPU is 100% utilized, response times for many operations are over 1 minute.

Throughput

Performance analysis - views-only workload

The test results from executing a views-only workload are shown below.

  1. Performance Stability
    • Response times are reasonable up to 25 users
  2. CPU Utilization
    • CPU reaches 100% at 75 users.
  3. Response Time
    • Response times for many views are longer than 1 minute at 50 users
  4. Conclusion
    • The small container can handle workloads effectively only up to 25 users. If link validity is disabled, the small container can support up to 50 users. CPU is the bottleneck.

CPU

User Load   Average CPU Usage   CPU Usage Percentage
25          2.14                54%
50          3.51                88%
75          4                   100%
100         3.98                100%
125         3.99                100%
150         3.97                99%
175         3.97                99%
200         3.97                99%

Response Time

Throughput



Medium container

The medium container for ERM was configured with 8 CPUs and 16G of RAM.

Performance analysis - standard workload

  1. Performance Stability
    • Response times are stable up to 125 users
    • Some operations (especially view loading) are degraded at 150 users
  2. CPU Utilization
    • CPU reaches 100% between 150 and 175 users
  3. Throughput
    • Throughput reaches a maximum of 40 transactions per second at 125 users
  4. Conclusion
    • The medium container can handle workloads up to 125 users. If link validity is disabled, the medium container can handle 200 users.

CPU

User Load   Average CPU Usage   Percentage
25          1.64                21%
50          2.69                34%
75          3.56                45%
100         4.18                52%
125         5.56                70%
150         7.6                 95%
175         7.88                99%
200         7.9                 99%
250         7.6                 95%

Response Time

Throughput

Performance analysis - views-only workload

  1. Performance Stability
    • Response times degrade as the load increases, with some operations taking over a minute above 75 users
  2. CPU Utilization
    • CPU reaches 100% between 150 and 175 users
  3. Throughput
    • Throughput levels out at 28 transactions per second at 100 users
  4. Conclusion
    • The medium container can handle between 50 and 75 users when running a views-only workload

CPU

User Load   Average CPU Usage   Percentage
25          2.38                30%
50          3.91                49%
75          4.94                62%
100         6.77                85%
125         6.95                87%
150         7.66                96%
175         7.82                98%
200         7.88                99%
250         6.58                82%

Response Time

Throughput

ETM container testing

This section provides analysis of the performance testing conducted for ETM. Two configurations were tested:

  • A small container (4 CPUs, 8G RAM)
  • A medium container (8 CPUs, 16G RAM)

Summary of results

Metric            Small container                           Medium container
CPU Utilization   100% at 1000 users                        30% at 1000 users
Response Time     Increases above 900 users                 Stable
Throughput        65 transactions per second at 900 users   75 transactions per second at 1000 users


The bottleneck is CPU for the small container tests. The medium container handled 1000 users without problems.

The workload for the cloud testing was the same as that used in previous ETM tests in the IBM lab. For the container tests, the user load was increased from 200 to 1000 users, with each stage lasting 15 minutes.
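
As a rough sketch, the staging can be expressed as a schedule of load levels. The 200-user step size below is an assumption, though it matches the increments reported in the CPU tables that follow.

  # Staged ramp described above: load rises from 200 to 1000 simulated
  # users, holding each stage for 15 minutes. The 200-user step is an
  # assumption consistent with the CPU tables below.
  def ramp_schedule(start=200, stop=1000, step=200, stage_minutes=15):
      """Yield (users, hold_seconds) pairs for a staged load test."""
      for users in range(start, stop + 1, step):
          yield users, stage_minutes * 60

  for users, hold in ramp_schedule():
      print(f"hold {users} simulated users for {hold} s")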

Small container

The small ETM container was configured with 4 CPUs and 8G of RAM.

Performance Analysis

  1. Performance Stability
    • The system performs well up to 900 users, with acceptable CPU utilization and stable response times and throughput.
  2. CPU Utilization
    • CPU reaches 100% at 1000 users
  3. Response Time
    • Response times increase once the CPU usage reaches 100%
  4. Throughput
    • Throughput reaches a maximum of 65 transactions per second
  5. Conclusion
    • The small environment can handle workloads effectively up to 800-900 users. At 1000 users, CPU utilization is 100%

CPU is the bottleneck for the small container.

CPU

Users   Average CPU Usage   Percentage
200     1.39                34.75%
400     2.09                52.25%
600     2.72                68%
800     3.21                80.25%
1000    3.98                99.50%

Throughput

Response Time

Response times are stable as load increases, up until the point at which the CPU usage reaches 100% (1000 users). At that point, response times increase.


Medium container

The medium container was configured with 8 CPU and 16G of RAM.

Performance Analysis

  1. Performance Stability:
    • The system performs well across the entire run
  2. CPU Utilization:
    • CPU is not exhausted during the test. At 1000 users, the medium container uses only 30% CPU.
  3. Response Time:
    • Response times are stable across the run
  4. Throughput:
    • Throughput increases to 75 transactions per second at 1000 users.
  5. Conclusion:
    • The medium environment can handle 1000 users

The medium container was capable of handling more than 1000 users.

CPU

Users   Average CPU Usage   Percentage
200     0.82                10.25%
400     1.24                15.50%
600     1.38                17.25%
800     1.89                23.62%
1000    2.38                29.75%

Throughput

Response Time

LDX container testing

This section provides the results of performance testing conducted for the Link Index Provider. In a hybrid cloud deployment, the Lifecycle Query Engine serves as the Link Index Provider. The Lifecycle Query Engine additionally indexes data sources to support reporting. LQE is configured to use a relational store as the backend.

Two LQE configurations were tested:

  • A small container (8 CPUs, 16G RAM)
  • A medium container (16 CPUs, 32G RAM)

Summary of results

The test results are summarized below.

Metric                 Small environment                       Medium environment
Peak Throughput        50–56 transactions/sec at 20 threads    180 transactions/sec at 100 threads
CPU Utilization        100% at 30 threads                      90% at 100 threads
Estimated user limit   600 users                               2000 users


The bottleneck is CPU. Once the available CPUs are 100% utilized, performance starts to degrade. Adding more CPUs to the LQE container will increase throughput.

Threads vs. users

The performance tests characterize workload in terms of threads. The workload is generated by sending requests directly to the LQE container from a test program, and the test program simulates multi-user load by running multiple threads. Each thread can be thought of as simulating a very active user that does nothing but make requests for links. There is no pausing, so each thread makes requests as fast as it can. This is an extremely intense workload that is designed to drive the LQE container to its limits.

Real users are much less active and interact with LQE indirectly through the applications. For example, when a user opens a view that has link columns using ERM DOORS Next, a request will be sent to the LQE container from the ERM DOORS Next container. You can consider each thread (or simulated user) to be the equivalent of 20 real users.
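
The sketch below illustrates this kind of closed-loop, zero-think-time driver. The endpoint URL is hypothetical (the real tests sent link requests directly to the LQE container), but the structure shows why N threads represent far more than N real users: each thread issues its next request the moment the previous one completes.

  # Closed-loop load driver: each thread requests links back-to-back with
  # no think time. URL is hypothetical; illustration only.
  import threading
  import time
  import urllib.request

  URL = "https://lqe.example.com/lqe/links"  # hypothetical endpoint
  DURATION_S = 60

  def worker(stop_at: float, counts: list) -> None:
      n = 0
      while time.time() < stop_at:
          try:
              with urllib.request.urlopen(URL, timeout=30) as resp:
                  resp.read()
              n += 1       # count only completed requests
          except OSError:
              pass         # failures do not count toward throughput
      counts.append(n)

  def run(threads: int) -> float:
      counts: list = []
      stop_at = time.time() + DURATION_S
      ts = [threading.Thread(target=worker, args=(stop_at, counts))
            for _ in range(threads)]
      for t in ts:
          t.start()
      for t in ts:
          t.join()
      return sum(counts) / DURATION_S  # transactions per second

  # 30 threads ~ 600 real users at the 20:1 ratio described above
  print(f"{run(30):.1f} transactions/sec")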

Small container

The small container is configured with 8 CPUs and 16G of RAM. For this configuration:

  1. Throughput:
    • Throughput peaks at approximately 50 transactions per second with 20 threads.
    • Beyond this point, throughput does not increase as user load increases (since response times are rising)
  2. CPU Utilization:
    • CPU usage is 100% at 30 threads
  3. Response Time:
    • Response times degrade as the user load increases
  4. Conclusion:
    • The small environment is suitable for handling workloads up to 30 threads. Beyond this threshold, performance degradation is evident due to CPU exhaustion
    • 30 threads correspond to 600 users

Throughput

CPU

Response Time


Medium container

The medium container is configured with 16 CPUs and 32G of RAM. For this configuration:

  1. Throughput:
    • Throughput reaches approximately 180 transactions per second at 100 threads
  2. CPU Utilization:
    • At 100 threads, CPU usage reaches 87%
  3. Response Time:
    • Response times increase slightly between 0 and 50 threads
    • Response times increase at a higher rate between 50 and 100 threads, but remain below 1 second
    • Response times continue to rise as the user load increases beyond 100 threads
  4. Conclusion:
    • The medium container can handle between 50 and 100 concurrent threads
    • Higher loads result in higher response times
    • 100 threads correspond to 2000 real users

CPU is the primary bottleneck.

Throughput

CPU

Response Time


Report Builder testing

This section provides analysis of the performance testing conducted for Report Builder. Two configurations were tested:

  • A small container (2 CPUs, 4G RAM)
  • A medium container (8 CPUs, 16G RAM)

Workload

The following reports were used when applying load to the Report Builder application:

  • List modules and components
  • Filter requirements by contributor
  • List requirements validated by test cases (with link validity = 'Ignore')
  • List requirements validated by test cases linked to failed test results
  • List requirements linked to test artifacts not related to work items
  • List requirements implemented by work items (with link validity = 'Ignored')
  • List test plans, test cases used and test results without passed status and assigned 100 total points
  • List test plans with test cases linked to requirements and work items
  • Count work items by type and status
  • List resolved work items that implement requirements

Each simulated user selects one of these reports to execute. The reports are executed at the rate of one report per minute per user. The number of simulated users was increased until the Report Builder application was overloaded.
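
A minimal sketch of this pacing follows, assuming a hypothetical run_report call in place of the real Report Builder request: each simulated user picks a report at random and is paced to one execution per minute, regardless of how long the report takes.

  # Open-loop pacing: one randomly chosen report per minute per user.
  # run_report is a hypothetical stand-in for the real HTTP request.
  import random
  import threading
  import time

  REPORTS = [
      "List modules and components",
      "Count work items by type and status",
      # ...the remaining reports from the list above
  ]

  def run_report(name: str) -> None:
      time.sleep(0.1)  # stand-in for the real report execution

  def simulated_user(minutes: int) -> None:
      for _ in range(minutes):
          start = time.time()
          run_report(random.choice(REPORTS))
          # sleep out the rest of the minute to hold the one-per-minute rate
          time.sleep(max(0.0, 60 - (time.time() - start)))

  users = [threading.Thread(target=simulated_user, args=(15,))
           for _ in range(75)]  # 75 users: the small container's maximum
  for u in users:
      u.start()
  for u in users:
      u.join()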

Summary of results

Metric            Small (2 CPU, 4G RAM)             Medium (8 CPU, 16G RAM)
Max users         75 users                          250 users
Throughput        6 reports/second at 75 users      Peaks at 15 transactions/sec at 250 users
CPU Utilization   90% at 100 users                  Never exceeds 25%
Response Time     Sharp increase at 100 users       Increases as load rises past 100 users

For the small container, the bottleneck is CPU. For the medium container, there is a bottleneck in Report Builder created by the storage subsystem.

Small container

The small container is configured with 2 CPUs and 4G of RAM.

Performance Analysis

  1. Performance Stability
    • The system performs well at 75 users, with acceptable CPU utilization and stable response times and throughput.
  2. CPU Utilization
    • CPU is 90% at 100 users
  3. Response Time
    • Response time increases at 100 users
  4. Throughput
    • Throughput reaches a maximum of 6 reports per second at 75 users.
  5. Conclusion
    • The small environment can handle workloads effectively only up to 75 users. CPU is the bottleneck.

Report Throughput

CPU

Report Performance


Medium container

The medium container for Report Builder is configured with 8 CPUs and 16G of RAM.

Performance Analysis

  1. Throughput
    • Throughput peaks at 13–15 transactions per second at 200 users
  2. CPU Utilization
    • CPU usage never reaches 100%, and stops increasing beyond 250 users
    • This indicates a bottleneck in the Report Builder code
  3. Response Time
    • Response time increases as the load increases
    • At 500 users, the response time is more than 10 times the value at 100 users
  4. Conclusion
    • The medium environment is capable of handling workloads up to 250 users. For higher loads, response times degrade significantly, and throughput is capped.

An analysis of the Report Builder performance indicated that the throughput is limited by an internal bottleneck related to the storage subsystem. Report Builder stores runtime statistics on the persistent volume associated with the container. The writes to the persistent volume are limiting throughput.
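
One way to check for this kind of bottleneck is to time small synchronous writes on the persistent volume itself. The sketch below does that; the mount path is hypothetical, and high per-write fsync times would be consistent with the PV capping throughput.

  # Probe synchronous write latency on the persistent volume. The mount
  # path is hypothetical; illustration only.
  import os
  import time

  PV_PATH = "/persistent-volume/rb-probe.tmp"  # hypothetical mount point

  def synced_write_ms(path: str, size: int = 4096, n: int = 100) -> float:
      """Average milliseconds per fsync'd write of `size` bytes."""
      buf = os.urandom(size)
      fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
      try:
          start = time.perf_counter()
          for _ in range(n):
              os.write(fd, buf)
              os.fsync(fd)  # force each write to stable storage
          return (time.perf_counter() - start) * 1000 / n
      finally:
          os.close(fd)
          os.remove(path)

  print(f"avg synced 4K write: {synced_write_ms(PV_PATH):.2f} ms")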

Page Throughput

CPU

Report Performance

EWM container testing

This section provides analysis of the performance testing conducted for EWM across small and medium containers in the IBM cloud. Two configurations were tested:

  • A small container (4 CPUs, 8G RAM)
  • A medium container (8 CPUs, 16G RAM)

Workload

The workload used in testing the EWM container included these operations:

  • Load and Open Work Items
  • Create Defects
  • Create Defects with Attachments
  • Query and Edit Defects
  • Query Dashboard View
  • Plan Dashboard View
  • Query and Open Work Items with 100 EWM Links
  • Query and Open Work Items with Cross Links
  • Import 100 Work Items from CSV
  • Import 500 Work Items from CSV

Each operation is 12.5% of the total user load, with the exception of the two "Import CSV" operations (which are limited to 2 users); the eight remaining operations thus account for the full user load (8 × 12.5% = 100%). The simulation uses a 30-second think time. User load is increased incrementally until the container becomes overloaded.
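
A small sketch of that mix follows. Whether the 2 import users apply per import or combined is not stated in this article; per import is assumed here.

  # User mix described above: eight operations at 12.5% of the load each,
  # plus the two CSV imports pinned at 2 users apiece (assumption).
  INTERACTIVE = [
      "Load and Open Work Items",
      "Create Defects",
      "Create Defects with Attachments",
      "Query and Edit Defects",
      "Query Dashboard View",
      "Plan Dashboard View",
      "Query and Open Work Items with 100 EWM Links",
      "Query and Open Work Items with Cross Links",
  ]
  IMPORTS = ["Import 100 Work Items from CSV", "Import 500 Work Items from CSV"]
  THINK_TIME_S = 30  # pause between operations for each simulated user

  def allocate(total_users: int) -> dict:
      """Distribute simulated users across the workload's operations."""
      mix = {op: round(total_users * 0.125) for op in INTERACTIVE}
      mix.update({op: 2 for op in IMPORTS})
      return mix

  print(allocate(70))  # 70 users: the small container's maximum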

Summary of results

Metric            Small container                          Medium container
Max users         70                                       125
CPU Utilization   100% at 75 users                         100% at 150 users
Response Time     Stable until 70 users                    Stable until 125 users
Throughput        28 transactions per second at 70 users   52 transactions per second at 125 users

Small container

The small container is configured with 4 CPUs and 8G of RAM.

Performance Analysis

  1. Performance Stability:
    • The system performs well until 70 users, with acceptable CPU utilization and stable response times and throughput.
  2. CPU Utilization:
    • CPU reaches 100% at 75 users
  3. Response Time:
    • Response times rise at 75 users and above (since the CPU is 100% utilized)
  4. Throughput:
    • Throughput increases with the user load, reaching a maximum of 28 transactions per second at 70 users.
  5. Conclusion
    • The small container can handle workloads effectively only up to 70 users. Performance degrades once the container CPUs are exhausted.
    • CPU is the bottleneck

CPU

CPU usage reaches 100% at 75 users.

Users   % CPU
40      42.25%
50      56.00%
60      61.75%
65      73.50%
70      76.75%
75      100%
80      100%


Page Element Throughput

The chart below shows how throughput (measured in transactions per second) varies with user load. A "page element" is an HTTP call. Throughput increases as user load increases, up to 75 users. Throughput then levels out, since the workload overloads the CPU and response times start to increase.

Response Time

Response times increase slightly as the workload increases, but there is a sharper increase in response times at 75 users (when the CPU is 100% utilized).


Medium container

The medium container is configured with 8 CPUs and 16G of RAM.

Performance Analysis

  1. Performance Stability:
    • The system performs well until 125 users, with acceptable CPU utilization and stable response times and throughput.
  2. CPU Utilization:
    • CPU reaches 100% at 150 users
  3. Response Time:
    • Response times degrade at 150 users, since the CPU is 100% utilized
  4. Throughput:
    • Throughput reaches a maximum of 52 transactions per second at 125 users
  5. Conclusion:
    • The medium container can handle up to 125 users. CPU is the bottleneck.

CPU

Users   % CPU
100     55.37%
125     70.75%
150     100%
200     100%
250     100%
300     100%


Page Element Throughput

The chart below shows how throughput (measured in transactions per second) varies with user load. A "page element" is an HTTP call. Throughput increases as user load increases, up to a maximum of 52 transactions per second at 125 users. Throughput then levels out, since the workload overloads the CPU and response times increase.

Response Time

The response times for opening a dashboard tab that includes plan widgets degrade faster than those of other operations. The response time for that operation is shown below.

The response time increases once the CPU maxes out (at 150 users).
