
LDX rs 7.1 Performance Report

Authors: SameerkumarKalaria, VaughnRokosz
Build basis: 7.1.0

This article discusses the results of the Link Index Provider (LDX) in 7.1 performance testing. The LDX builds and maintains an index of links between artifacts in different project areas. Applications query the LDX to get information about links.

The LDX in 7.1 has been rearchitected to store link information in a relational database. This architectural approach is the same as that adopted for the Lifecycle Query Engine in 7.0.3 (LQE rs). In fact, 7.1 allows you to use an LQE rs as your link index provider.

Here's what you can expect from LDX in 7.1:

  • Simplified deployment topologies, allowing for a single LQE rs to support both reporting and link resolution. A separate LDX server is no longer required (although that is still an option).
  • Improved scalability and resilience. LDX rs handles concurrent queries more efficiently, and long link queries will no longer impact the entire LDX server.
  • Equivalent or better performance for link queries.
  • Reduced memory requirements for the LDX rs server since load shifts to the database server.
  • Increased CPU and memory usage on your database server. Consider deploying a new database server to support LQE/LDX rs.

The following topics are covered in this article:

  • Architectural changes for LDX rs in the 7.1 release
  • Deployment topology
  • Data shape used for performance testing and test environment details
  • Performance test results
  • Automation tools used during testing

Standard Disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.


How LQE and LDX architecture has evolved

Up until the 7.0.2 release, the Lifecycle Query Engine (LQE) stored information using a triple store database provided by Apache Jena (Jena TDB). This was also true for the Link Index Provider (LDX), since LDX was just a specialized instance of LQE. This is shown in the architectural diagram below.

Both LQE and LDX used TRS feeds provided by applications to index data and store it in the Jena triple store. LDX indexed a subset of the TRS data, since it was focused only on links.

Reporting against LQE data involved an instance of the Report Builder application. Reports were defined in Report Builder and then executed by the LQE server. The LQE server would read data from the Jena triple store and return it to Report Builder. For link-related queries, applications made requests to the LDX asking for links. The LDX would read data from the Jena triple store and return it to the applications. Note that the LDX was often hosted in the same Java Virtual Machine (JVM) as the Jazz Team Server (JTS), although larger customers were encouraged to move the LDX onto its own dedicated server.

This architecture had several disadvantages for large indexes or high concurrent load:

  • The TDB index must be local to the LQE/LDX server machine, which prevents addressing scale issues through clustering
  • There are a minimum of two servers indexing application data, which adds load to the ELM application servers
  • Large Jena indexes must be almost entirely cached in memory for performance reasons, so that LQE/LDX servers require large amounts of RAM (as well as large Java heaps)
  • Jena does not have a sophisticated query optimization engine, so complex traceability reports can be slow, and there are limited options to tune SPARQL queries to perform better.

An additional limitation of the Jena architecture is how it deals with load. A single long-running query can severely impact the performance of the entire LQE or LDX system. There are two reasons this happens.

  • There is contention for internal resources within Jena when running queries. Too many simultaneous queries will interfere with each other, and a bottleneck is likely to form when some of those queries take a long time to run (minutes).
  • Workloads that involve both query execution and indexing create contention for resources in LQE/LDX. If a query is running when indexed data is written to disk, the updated data is temporarily stored in a journal file. Jena can combine the data in the main database with the updates in the journal to present a consistent view of the state of the database at the time the query started. The journal is merged into the main database once there are no reads pending.

There are some problems with this approach. First, the journal will continue to grow as long as there are queries running. And queries will be running if the reports are slow, or if the report workload is high (i.e., many users are running reports frequently). A busy LQE server may not be idle very often, which means that the journal keeps growing. LQE tries to limit the impact by suspending indexing when the number of pending journal updates reaches a threshold, but that means that reports will include stale data until the indexing resumes.

Additionally, queries get slower as the journal grows, since Jena has to do more work to identify the correct dataset for a given report. This means there is a feedback loop where slow queries cause the journal to grow, which in turn causes the queries to be slower and then prevents the journal from being merged into the main database. This feedback loop can even be triggered by a single slow query.

Because of these disadvantages, the 7.0.3 release added support for LQE rs to replace the Jena-based LQE. LDX in 7.0.3 stayed with the Jena architecture by default.

LQE and LDX architecture in the 7.1 release

In the 7.1 release, both LQE and LDX have adopted the relational store. The recommended deployment topology is shown below.

In 7.1, you can deploy a single LQE rs instance that will support both reporting and link resolution. This LQE rs instance will still read the TRS feeds provided by the applications, but the indexed data is stored in a relational database instead of in the Jena triple store. Requests to generate reports flow from Report Builder to the relational database, bypassing the LQE server. Requests for link information flow from the applications to LQE rs (in its role as LDX), but the link data is retrieved from the relational database instead of from the Jena triple store.

This improves the scalability of the LQE solution in several ways:

  • Database query optimizers are more sophisticated than Jena's, allowing for improved query execution.
  • There is no longer a need for a separate LDX application, simplifying the topology (and reducing load on the ELM applications by eliminating unnecessary reads of the TRS feeds).
  • There are more options for tuning the performance of SQL queries than of SPARQL queries.
  • Relational databases are designed for high data volumes.
  • Relational databases are much better at dealing with concurrent reading and writing. While locking is still needed, it usually happens at the row level. Lock escalation to the table level is infrequent, so one slow SQL query will not disrupt the entire server.
  • Since query processing shifts from the LQE server to the database server in this architecture, the LQE server requires less RAM. RAM freed up from LQE can be moved to the database server, where it can be used more efficiently. Less memory is also required for link resolution, since the caching of selections within LQE/LDX is no longer needed (that moves to the relational database).
  • Data is represented more efficiently in the database than in Apache Jena TDB files, saving storage space.
  • Clustering becomes possible through standard database solutions such as Oracle RAC.

You have the option to deploy LDX rs as an application separate from LQE rs, but this requires that you set up a separate relational database for the LDX rs data. An instance of LDX rs cannot share the index data created by LQE rs.


Data Shape

In our test deployment, we have established various links between DOORS Next and other applications using a Java-based automation tool. The applications involved are ETM, EWM, and RMM. The different link types created in each DOORS Next artifact are listed below:

  1. Validated By (ETM): Links artifacts in DOORS Next that are validated by test cases in ETM.
  2. Tracked By (EWM): Links artifacts in DOORS Next that are tracked by work items in EWM.
  3. Implemented By (EWM): Links artifacts in DOORS Next that are implemented by tasks in EWM.
  4. Affected By (EWM): Links artifacts in DOORS Next that are affected by changes in EWM.
  5. Derives Architecture Element (RMM): Links artifacts in DOORS Next that derive architectural elements in RMM.
  6. Traced By Architecture Element (RMM): Links artifacts in DOORS Next that are traced by architectural elements in RMM.
  7. Satisfied By Architecture Element (RMM): Links artifacts in DOORS Next that are satisfied by architectural elements in RMM.
  8. Refined By Architecture Element (RMM): Links artifacts in DOORS Next that are refined by architectural elements in RMM.

LDX rs REST API

For the LDX rs REST API, users need to provide the artifact URL to retrieve backlink details. The API allows users to submit a minimum of 1 and a maximum of 1000 URLs in a single request. All performance tests were conducted using the maximum limit of 1000 target URLs per request.
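To make the request pattern concrete, the following is a minimal Java sketch of a batched backlink query. The endpoint path, the JSON payload shape (a "targetUrls" array), and the raw-string response handling are illustrative assumptions, and authentication is omitted; consult the LDX rs API documentation for the actual contract.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.stream.Collectors;

public class LdxBacklinkQuery {
    // Hypothetical endpoint for illustration only; not the documented LDX rs path.
    private static final String LDX_ENDPOINT = "https://ldx.example.com/ldx/query/backlinks";
    private static final int MAX_URLS_PER_REQUEST = 1000; // API limit per request

    public static String fetchBacklinks(List<String> artifactUrls) throws Exception {
        if (artifactUrls.isEmpty() || artifactUrls.size() > MAX_URLS_PER_REQUEST) {
            throw new IllegalArgumentException("Provide between 1 and 1000 artifact URLs");
        }
        // Assumed payload shape: a JSON array of target artifact URLs.
        String body = artifactUrls.stream()
                .map(u -> "\"" + u + "\"")
                .collect(Collectors.joining(",", "{\"targetUrls\":[", "]}"));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(LDX_ENDPOINT))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // backlink details for the requested artifacts
    }
}

Submitting the maximum batch of 1000 target URLs per request, as the tests here did, amortizes the HTTP overhead across many link lookups.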

GC Selection Size

In our load testing, we created various Global Configuration (GC) components with different selection sizes to analyze their impact. The selection sizes ranged from 0.7M to 50M, encompassing a total of 12 distinct GC components. This approach allowed us to comprehensively evaluate how varying GC selection sizes influence performance metrics during the load tests.


Performance Test Environment Details

Oracle Configuration

  • LDX: LQE rs Oracle Server (Used as LDX)
  • Database: LQE rs Oracle Server (Installed with Oracle 19c)
  • Single Server Installation: Both the LDX application and the Oracle 19c database are installed on a single server.

Db2 Configuration

  • LDX: LQE rs Db2 Server (Used as LDX)
  • Database: LQE rs Db2 Server (Installed with Db2)
  • Single Server Installation: Both the LDX application and the Db2 database are installed on a single server.

Separate LDX/Oracle Topology - Pluggable Oracle Configuration

The primary goal of this topology was to provide more CPU power by moving LDX to a separate server, allowing for a comparison with the combined topology. In addition, the separate Oracle server has significantly better storage than the Oracle server in the combined topology.

  • LDX: Separate LDX Server (Installed with LDX 7.1 build)
  • Database: Separate Server with Pluggable Oracle Database installed
  • Dedicated Servers: This setup uses separate servers for the LDX application and the Oracle database, enhancing modularity and potentially improving performance through dedicated resources.

Jena Configuration

  • LDX: LQE Server with Jena
  • Database: LQE Server with Apache Jena Database
  • Single Server Installation: Both the LDX application and the Apache Jena database are installed on a single server.

The servers in the test environment are all physical servers. The hardware specifications are listed below.

| Role | Server | Machine Type | Processor | Total Processors | Memory | Storage | OS Version |
| Proxy Server - IHS | IBM HTTP Server and WebSphere Plugin | IBM System x3550 M3 | 2 x Intel Xeon X5667 3.07 GHz (quad-core) | 16 | 16 GB | RAID 5 – 279GB SAS Disk x 2 | RHEL 7 |
| JTS/GC/LDX | WebSphere Liberty | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 32 GB | RAID 5 – 279GB SAS Disk x 4 | RHEL 7 |
| LQE rs Oracle/Jena | Oracle 19c | IBM System SR650 | 2 x Xeon Silver 4114 10C 2.2GHz (ten-core) | 40 | 768 GB | RAID 10 – 900GB SAS Disk x 16 | RHEL 7 |
| LQE rs Db2 | Db2 11.5 | IBM System SR650 | 2 x Xeon Silver 4114 10C 2.2GHz (ten-core) | 40 | 768 GB | RAID 10 – 900GB SAS Disk x 16 | RHEL 7 |
| App Server1 | DOORS Next/ETM/EWM/RMM on WebSphere Liberty | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 32 GB | - | RHEL 7 |
| App Server2 | DOORS Next/ETM/EWM/RMM on WebSphere Liberty | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 32 GB | - | RHEL 7 |
| LDX | WebSphere Liberty | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 64 GB | RAID 1 – 300GB SAS Disk x 4 – 128K Strip Size | RHEL 7 |
| LDX rs Oracle DB | Oracle 19c | IBM System SR650 V2 | 2 x Xeon Silver 4114 10C 2.2GHz (ten-core) | 80 | 768 GB | RAID 10 – 800GB NVMe Kioxia Mainstream Drive x 24 (7.3 TB on /mnt/raid10) | RHEL 8 |

Abbreviations:

  • JTS = Jazz Team Server
  • GC = Global configuration application
  • LDX = Link index provider
  • EWM = Engineering Workflow Management
  • ETM = Engineering Test Management
  • LQE rs = Lifecycle Query Engine relational store
  • RHEL = Red Hat Enterprise Linux
  • LDX rs = Link index provider relational store


Performance Test Results

Single Thread Test

This section compares LDX rs REST API response times on the relational databases against Jena. The runs used identical datasets for gathering statistics. These measurements were executed using a single thread, so there was no other load on the system.

Test Parameters:

| Parameter | Value |
| Threads | Single thread |
| Target URLs | 1000 |
| GC Selection Size | 0.7M to 50M |
| Number of Link types | 7 |

Performance Comparison of Oracle, LDX rs DB, and Db2 vs. Jena

In the performance evaluation, Oracle, LDX rs DB, and Db2 all demonstrated superior performance compared to Jena. This suggests that relational database management systems (RDBMS) such as Oracle and Db2, including the dedicated LDX rs Oracle database server (LDX rs DB), are more efficient at handling the specific workloads tested.

Comparison Among Relational Databases

When comparing the relational databases with one another, LDX rs DB exhibited marginally better performance than Oracle and Db2. This slight edge reflects the effectiveness of the dedicated LDX rs DB topology. However, the performance difference between Oracle and LDX rs DB was minimal.

Impact of GC Selection Size

  • In the case of relational databases, the response time remained relatively stable and was not significantly affected by changes in GC selection size. This indicates that relational databases are well-optimized to handle variations in GC parameters, maintaining consistent performance regardless of the GC selection size.
  • Conversely, Jena exhibited noticeable sensitivity to changes in GC selection size: its response time varied significantly with different GC sizes.

The following chart shows the impact of GC selection size on response time for the relational databases and Jena.


SingleThreadTestData.png SingleThreadChart.png

Load test results - Oracle

This section presents the load test results based on key performance indicators (KPIs) such as response time, throughput, CPU usage, and disk utilization. The tests were conducted on an Oracle database with a 40M GC selection size, with varying loads from 10 to 100 threads.

Test Parameters:

| Parameter | Value |
| Threads | 10 to 100 |
| Target URLs | 1000 |
| GC Selection Size | 40M |
| Number of Link types | 7 |

Summary of results:

  • Maximum throughput: 180 requests per second at 100 threads
  • Throughput is limited by available CPU
  • Response times increase gradually if the system becomes overloaded
  • Disk utilization on the database server is low
  • System behavior under load is improved when compared to the 7.0.3 architecture.

Throughput

Throughput, measured as the number of LDX rs API executions per second, increases with higher load. Oracle efficiently handles approximately 190-200 requests per second at 100 threads, demonstrating its capacity to manage high concurrency effectively.


Throughput-Oracle.png
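As a rough sanity check, Little's Law (concurrency = throughput x response time) ties these figures together. Assuming each of the 100 threads issues requests back-to-back with no think time, roughly 190 requests per second implies an average response time of about 100 / 190 ≈ 0.5 seconds per request.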

CPU Usage

CPU usage for both the database and the LDX server grows with increasing load. The chart below shows that CPU consumption peaks at around 85% (DB + LDX app) at 100 threads; in this topology, the LQE rs and Oracle applications are co-located on the same server.


CPU-Oracle.png

Average Response Time

As the number of threads increases, the response time rises accordingly. The following chart illustrates the average response time at varying loads, ranging from 10 to 100 threads. The results show that the system reaches its limit at around 40-60 threads; once it hits that limit, response times degrade, but slowly. The Jena system peaks at 50 threads, so the Jena tests were not run beyond 50 threads.


ResponseTime-Oracle.png

Disk Utilization on DB Server

Disk utilization on the database server also rises as the load increases: at 100 threads, disk usage reaches around 15%, compared to 4% at 10 threads. Even at the highest load, disk utilization remains low and relatively steady, so the disk is not a limiting factor.


DiskIO-Oracle.png

SQL Execution Rate

The chart below illustrates the number of SQL statements executed by Oracle per second, which increases as the load grows. At higher thread counts, particularly at 100 threads, Oracle efficiently manages to execute approximately 230 statements per second, demonstrating its scalability under increased demand.


SQLRate-Oracle.png
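Read together with the throughput chart, this works out to roughly 230 / 190 ≈ 1.2 SQL statements per API request on average (assuming the SQL workload is dominated by the API calls). In other words, a single batched request for 1000 target URLs resolves with close to one SQL query.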


Load test results - Db2

This section presents the load test results based on key performance indicators (KPIs) such as response time, throughput, CPU usage, and disk utilization. The tests were conducted on a Db2 database with a 40M GC selection size, with varying loads from 10 to 100 threads.

Test Parameters:

| Parameter | Value |
| Threads | 10 to 100 |
| Target URLs | 1000 |
| GC Selection Size | 40M |
| Number of Link types | 7 |

Summary of results:

  • Maximum throughput: 40 requests per second at 60 threads
  • Throughput is limited by available CPU
  • Response times increase gradually if the system becomes overloaded
  • Disk utilization on the database server is low
  • System performance under load is improved when compared to the 7.0.3 architecture.

Throughput

Throughput, measured as the number of LDX rs API executions per second, also scales with the load. At 60 threads, Db2 is capable of handling approximately 40 requests per second, showcasing its capacity under peak load conditions.


Throughput-Db2.png

CPU Usage

CPU usage for both the database (DB) and the LDX server grows as the load increases. As shown in the chart, the system is overloaded at 60 threads, with CPU near 100% (DB + LDX app); in this topology, the LQE rs and Db2 applications are co-located on the same server.


CPU-Db2.png

Average Response Time

The average response time increases as the number of threads rises. The chart below demonstrates the impact of increasing load, from 10 to 100 threads, on response time, highlighting a direct correlation between higher thread counts and slower response times.


ResponseTime-Db2.png

Disk Utilization on DB Server

Disk utilization on the DB server shows a gradual increase as load increases. At 10 threads, disk usage is around 4%, and at 100 threads, it rises to approximately 12%. The chart demonstrates that while disk utilization grows with higher load, the changes are relatively moderate.


DiskIO-Db2.png


Impact of GC Selection Size

This section demonstrates how GC selection size affects the performance of LDX rs, specifically in terms of response time and throughput. Tests were conducted with GC selection sizes ranging from 0.5M to 50M under varying loads for Db2. For Oracle, GC selection size had minimal impact on performance, so this section primarily focuses on Db2.

Average Response Time

Response time increases as the GC selection size grows. The trend line chart below illustrates the rise in response time with larger GC sizes for both 50 and 100 threads. Notably, fluctuations in response time are observed at the largest GC sizes.


GCImpactResponseTime-Db2.png

Throughput

Throughput decreases as the GC selection size increases, as shown in the chart for 20 threads. At smaller GC sizes, where response times are faster, throughput reaches approximately 120 requests per second. As the GC selection size grows, throughput drops significantly, from about 120 requests per second at a 0.5M GC to as low as 25 requests per second at a 50M GC. This decline shows the increasing impact of larger GC sizes on system performance, particularly in handling high request volumes.


GCImpactThroughput-Db2.png
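Throughput and response time here are two views of the same limit. Assuming the 20 threads issue requests back-to-back with no think time, 120 requests per second corresponds to an average of about 20 / 120 ≈ 0.17 seconds per request at the smallest GC sizes, versus 20 / 25 = 0.8 seconds per request at 50M.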


Impact of separate LDX and database server topology

This section analyzes the performance of the LDX rs system under a topology where the LDX application and the database are hosted on separate servers. Oracle is deployed as the relational store on the dedicated database server.

Summary of system specifications:

  • LDX rs server: 24 vCPU, 32 GB RAM
  • Database server: 40 vCPU, 768 GB RAM
    • Storage: RAID 10 array of 24 NVMe drives

Summary of results:

  • Maximum throughput: 500 requests per second at 500 threads.
  • Response times are stable up to 500 threads. As load increases past that point, response times gradually increase.
  • The LDX rs server reached 100% CPU around 500 threads.
  • The system could handle up to 2000 threads without hanging or crashing, but response times had degraded to 9 seconds at that level.
  • CPU and Disk utilization on the database server is low.

Throughput

Throughput increases with the number of threads, peaking at around 500 requests per second at 500 threads. Beyond this point, throughput starts to decline. CPU usage on the LDX rs server is near 100% at that point, so throughput is limited by the available CPU.


Throughput-LDXrs.png

CPU Usage

The CPU usage analysis reveals that the LDX rs server becomes a limiting factor in the system’s performance. At 500 threads, the LDX rs server’s CPU usage nears 95%, while the database server still has spare CPU capacity. This indicates that the LDX rs server is the bottleneck rather than the database. As the load increases beyond 500 threads, the LDX rs server becomes overloaded, and response times degrade. The CPU usage for the LDX rs server reaches 100% at around 1200 threads. The database server’s CPU usage remains relatively constant beyond 500 threads, suggesting that the database is not the limiting factor in this scenario.


CPU-LDXrs.png

Average Response Time

The response time analysis is presented in two charts. The first chart illustrates the system’s response time up to its effective limit of 500 threads. In this range, the response time remains consistently under 1 second, indicating that the system can handle the load efficiently up to this point. The second chart shows the system’s behavior under extreme load, ranging from 500 to 2000 threads. In this scenario, the response time increases rapidly as the thread count surpasses 500. Despite the increasing response times, the system continues to process requests, unlike in the case of Jena, which stops responding after 50 threads. This indicates that while the system experiences significant performance degradation under high load, it still maintains functionality, processing requests without completely failing.


ResponseTime-LDXrs.png

Disk Utilization

Disk utilization on the database server increases as the thread load rises. The disk I/O stabilizes at around 30% after the load reaches 1000 threads. This indicates that the disk is not a bottleneck, as it has sufficient capacity to handle the I/O demands even under higher loads.


DiskIO-LDXrs.png

SQL Execution Rate

The SQL execution rate increases with the thread load until it reaches the effective limit of the LDX rs server at around 500 threads. Beyond this point, the LDX rs server cannot send more queries to the database, causing the SQL execution rate to plateau. This further confirms that the performance limit is on the LDX rs side rather than the database.


SQLRate-LDXrs.png

Data shapes


Artifact counts in DOORS Next

The data shape for DOORS Next is made up of standard-sized components. Each component is built from standard modules of three sizes (small, medium, and large); the module sizes and component contents are summarized below.

We have 3 different standard module sizes:

  • Small (200 artifacts)
  • Medium (1500 artifacts)
  • Large (10,000 artifacts)

We use these standard module sizes to create 3 different standard components.

| Artifact type | Small component | Medium component | Large component |
| Number of large modules | 0 | 0 | 1 |
| Number of medium modules | 0 | 3 | 0 |
| Number of small modules | 20 | 0 | 0 |
| Total module artifacts | 4200 | 4500 | 10000 |
| Non-module artifacts | 100 | 100 | 100 |

We created 1 small project with 3 components: 1 small, 1 medium, and 1 large. The number of artifacts for this standard small project is summarized below.

| Artifact type | Count |
| Large components | 1 |
| Medium components | 1 |
| Small components | 1 |
| Total components | 3 |
| Module artifacts | 18,700 |
| Non-module artifacts | 300 |
| Total artifacts | 19,000 |
| Large modules (10,000 artifacts) | 1 |
| Medium modules (1,500 artifacts) | 3 |
| Small modules (200 artifacts) | 20 |

So, in total there are 19,000 artifacts in one DOORS Next small project.


Artifact counts in ETM

The test artifacts were created by the in-house data generation utility.

| Artifact Type | Count |
| test plans | 12 |
| test cases | 10,200 |
| test scripts | 11,875 |
| test suites | 780 |
| test case execution records | 50,756 |
| test suite execution records | 720 |
| test case results | 122,400 |
| test suite results | 3,600 |
| test execution schedules | 2,848 |
| test phases and test environments | 840 |
| build definitions and build records | 244 |
| total | 504,275 |

Artifact counts in EWM

There are a total of 30,187 work items in the EWM project area.

| Work Item Type | Count |
| Defect | 4,537 |
| Task | 25,609 |
| Story | 38 |
| Epic | 3 |
| total | 30,187 |

Global configurations

We created 12 different global configurations (GCs) and added different DOORS Next components to them. Each GC component has a different selection size.

One of the GC components looks like this:

GCComponent.png


LDX rs Database

Below is the LDX rs database table data, showing how many records each table has.

To see how many rows each table has on Oracle, run the following SQL command:

-- num_rows comes from optimizer statistics, so these counts reflect
-- the last statistics gathering rather than exact live row counts.
select
   table_name,
   num_rows counter
from
   dba_tables
where
   owner = 'RITRS'   -- schema that owns the LDX rs tables in this environment
order by
   counter desc;

Below are the top 5 tables by row count. See Appendix: Oracle Tables Details for the full list.

TABLE NAME ROW COUNT
SELECTIONS_SELECTS 411549948
URL 135587348
WORK_ITEM_LINK_PROP 18371610
RDF_STATEMENT 4846212
RDF_STATEMENT_RDF_TYPE 4846212


Performance Automation Tools Used in Testing

To evaluate the performance of the LDX rs REST API under various conditions, we employed a combination of custom-developed automation tools and industry-standard monitoring solutions. This approach allowed us to generate load, measure response times, and monitor throughput and CPU usage effectively.

Load Generation and Response Time Measurement

  1. Custom Java Automation Tool:
    • Functionality: The tool was developed in Java to generate load on the LDX rs REST API through a multithreading mechanism. It triggers LDX rs API calls in parallel based on settings specified in a configuration file.
    • Configuration Options:
      • Thread Load: Number of concurrent threads.
      • Database Type: Choice between relational databases and Jena.
      • Number of Target URLs: The number of URLs to target per API call.
      • API Call Frequency: How many times to trigger the same API.
      • ‘RunBoth’ Property: If set to true, the tool can drive load against both relational databases and Jena simultaneously and compare their results at the end.
    • Output: The tool generates a simple text file as the result file, capturing the response times for each test run.
  2. Usage: This tool was instrumental in measuring the response time of the LDX rs API under different thread loads and configurations, providing essential data for performance analysis. A simplified sketch of the multithreaded pattern follows this list.
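The listing below is a minimal sketch of that pattern, reusing the hypothetical LdxBacklinkQuery class from the REST API section above. The thread count and call frequency are hard-coded stand-ins for values the real tool reads from its configuration file; the result-file output, 'RunBoth' comparison, and NMON integration are omitted.

import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LdxLoadDriver {
    // Hypothetical stand-ins for values read from the tool's configuration file
    static final int THREADS = 50;          // "Thread Load": concurrent threads
    static final int CALLS_PER_THREAD = 10; // "API Call Frequency"

    public static void main(String[] args) throws Exception {
        // Example target; the real tests sent 1000 target URLs per request
        List<String> targetUrls = List.of("https://dng.example.com/rm/resources/A1");
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        CompletionService<Long> results = new ExecutorCompletionService<>(pool);
        for (int t = 0; t < THREADS; t++) {
            results.submit(() -> {
                long elapsed = 0;
                for (int i = 0; i < CALLS_PER_THREAD; i++) {
                    long start = System.nanoTime();
                    LdxBacklinkQuery.fetchBacklinks(targetUrls); // one batched API call
                    elapsed += System.nanoTime() - start;
                }
                return elapsed / CALLS_PER_THREAD; // mean per-call latency (ns) for this thread
            });
        }
        long sum = 0;
        for (int t = 0; t < THREADS; t++) {
            sum += results.take().get();
        }
        pool.shutdown();
        System.out.printf("Mean response time across threads: %.1f ms%n",
                sum / (double) THREADS / 1e6);
    }
}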

Throughput and CPU Usage Monitoring

  1. NMON:
    • Functionality: NMON is used for monitoring system performance and resource usage.
    • Integration: The NMON tool was started automatically by the Java automation tool during each performance run (see the sketch after this list).
    • Output: After the performance run, the NMON data was used for detailed analysis of CPU usage and other system metrics over the duration of the test.
  2. IBM Instana:
    • Functionality: IBM Instana Observability automatically discovers, maps, and monitors services and infrastructure components, providing visibility across the application stack. We used it for monitoring throughput and CPU usage during the load tests.
    • Configuration:
      • Central Monitoring: The Instana backend was configured on a central machine.
      • Agents: Instana agents were running on each server that needed to be monitored.
    • Output: Instana provided real-time data and insights into the system's performance, aiding in the identification of performance bottlenecks.
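Launching NMON from Java amounts to spawning the nmon binary in file-capture mode. The sketch below is illustrative; the sampling interval and sample count are example values, not the exact settings used in these tests.

import java.io.IOException;

public class NmonLauncher {
    // Illustrative example values, not the settings used in these tests:
    // -f: capture to a .nmon file; -s 30: sample every 30 seconds;
    // -c 120: collect 120 samples (about one hour in total).
    public static Process startNmon() throws IOException {
        return new ProcessBuilder("nmon", "-f", "-s", "30", "-c", "120")
                .redirectErrorStream(true)
                .start();
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        Process nmon = startNmon();
        // ... run the load test here ...
        nmon.waitFor(); // wait for NMON to finish writing its samples
    }
}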



Appendix: Oracle Tables Details

TABLE NAME ROW COUNT
SELECTIONS_SELECTS 411549948
URL 135587348
WORK_ITEM_LINK_PROP 18371610
RDF_STATEMENT 4846212
RDF_STATEMENT_RDF_TYPE 4846212
VERSION_RESOURCE 4251587
VERSION_RESOURCE_RDF_TYPE 4251587
TEST_SCRIPT_STEP_RESULT_LINK_PROP 3110140
TEST_RESULT_LINK_PROP 2956644
WORK_ITEM_SUBSCRIBERS 2731486
WORK_ITEM_SCHEDULE_RDF_TYPE 1954679
WORK_ITEM 1954679
WORK_ITEM_SCHEDULE 1954679
WORK_ITEM_RDF_TYPE 1954679
TEST_SCRIPT_STEP 1567500
TEST_SCRIPT_STEP_RDF_TYPE 1567500
TEST_EXECUTION_RECORD_LINK_PROP 1091366
TEST_SCRIPT_STEP_RESULT_RDF_TYPE 969595
TEST_SCRIPT_STEP_RESULT 969595
REQUIREMENT_RELATIONSHIP 851067
REQUIREMENT_LINK_PROP 851066
ITEM 832101
RDF_STATEMENT_LINK_PROP 645587
ARCHITECTURE_ELEMENT_RDF_TYPE 522294
TEST_CASE_REQUIREMENT_RELATIONSHIP 458191
TEST_RICH_TEXT_SECTION 428836
TEST_RICH_TEXT_SECTION_RDF_TYPE 428836
ITEM_LINK_PROP 427743
TEST_SCRIPT 391875
TEST_SCRIPT_RDF_TYPE 391875
TEST_CASE 390207
TEST_CASE_RDF_TYPE 390207
TEST_EXECUTION_SUITE_ELEMENT_LINK_PROP 296265
ARCHITECTURE_ELEMENT 261147
TEST_RESULT_RDF_TYPE 242398
TEST_RESULT 242398
WORK_ITEM_REQUIREMENT_RELATIONSHIP 201702
REQUIREMENT_RDF_TYPE 198532
TEST_SUITE_RESULT_LINK_PROP 188803
TEST_CASE_LINK_PROP 143323
TEST_SCRIPT_STEP_LINK_PROP 120000
REQUIREMENT 99266
TEST_EXECUTION_RECORD 99134
TEST_EXECUTION_RECORD_RDF_TYPE 99134
TEST_EXECUTION_SUITE_ELEMENT_RDF_TYPE 98755
TEST_EXECUTION_SUITE_ELEMENT 98755
TEST_SCRIPT_LINK_PROP 80000
WORK_ITEM_TEST_CASE_RELATIONSHIP 79422
TEST_APPROVAL_DESCRIPTOR_APPROVAL 59870
TEST_APPROVAL_RDF_TYPE 59870
TEST_APPROVAL 59870
TEST_SUITE_ELEMENT_LINK_PROP 59400
RESOURCE_PROPERTY_LINK_PROP 52116
CONFIGURATION_LINK_PROP 49922
CONFIGURATION_RDF_TYPE 41101
SELECTIONS_RDF_TYPE 28123
SELECTIONS_LINK_PROP 25602
TEST_SUITE_LINK_PROP 25600
ARCHITECTURE_RELATIONSHIP 25348
ARCHITECTURE_RELATIONSHIP_LINK_PROP 25348
TEST_APPROVAL_DESCRIPTOR_RDF_TYPE 20040
TEST_APPROVAL_DESCRIPTOR 20040
TEST_QUALITY_APPROVAL_APPROVAL_DESCRIPTOR 20040
TEST_PLAN_LINK_PROP 19924
TEST_SUITE_ELEMENT 19800
TEST_SUITE_ELEMENT_RDF_TYPE 19800
CONFIGURATION_NAME 19342
CONFIGURATION 19321
CONFIGURATION_ACCEPTED_BY 19321
CONFIGURATION_SELECTIONS 15313
SELECTIONS 15313
RESOURCE_PROPERTY_RDF_TYPE 15225
RESOURCE_PROPERTY 15225
ITEM_STRING_PROP 11885
CONFIGURATION_CONTRIBUTION 11885
TEST_QUALITY_APPROVAL 10020
TEST_QUALITY_APPROVAL_RDF_TYPE 10020
RDF_PROPERTY_RDF_TYPE 9463
RDF_PROPERTY 9413
CONFIGURATION_PREVIOUS_BASELINE 8952
CONFIGURATION_BASELINE_OF_STREAM 8952
TEST_SUITE_EXECUTION_RECORD_LINK_PROP 8123
RESOURCE_SHAPE_PROPERTY 7495
CACHED_CONFIG_ALL_CONFIGS 6757
TEST_SUITE_RESULT 6600
TEST_SUITE_RESULT_RDF_TYPE 6600
CONFIGURATION_ACCEPTS 4920
CACHED_CONFIG_ALL_SELECTIONS 4279
TEST_PLAN_RDF_TYPE 4123
TEST_PLAN 4123
COMPONENT_RDF_TYPE 3936
COMPONENT 3936
COMPONENT_LINK_PROP 3925
ITEM_RDF_TYPE 3779
RDF_PROPERTY_LINK_PROP 3467
CONFIGURATION_BOOLEAN_PROP 2503
RESOURCE_SHAPE_LINK_PROP 2263
CONFIGURATION_COMMITTER 1561
RDF_CLASS_LINK_PROP 1522
TEST_SUITE_RDF_TYPE 1380
TEST_SUITE 1380
TEST_SUITE_EXECUTION_RECORD 1320
TEST_SUITE_EXECUTION_RECORD_RDF_TYPE 1320
CONFIGURATION_DESCRIPTION 1290
RDF_CLASS_RDF_TYPE 1218
RDF_CLASS 1154
TEST_ENVIRONMENT_RDF_TYPE 1040
TEST_ENVIRONMENT 1040
TEST_ENVIRONMENT_LINK_PROP 800
WORK_ITEM_PLAN_LINK_PROP 566
TEST_PLATFORM_COVERAGE_LINK_PROP 400
RESOURCE_SHAPE_RDF_TYPE 360
RESOURCE_SHAPE 360
TEST_PHASE 330
TEST_PHASE_RDF_TYPE 330
TEST_BUILD_RECORD 320
TEST_BUILD_RECORD_RDF_TYPE 320
TEST_PHASE_LINK_PROP 292
TEST_CATEGORY_RDF_TYPE 216
TEST_CATEGORY 216
TEST_PLATFORM_COVERAGE_RDF_TYPE 200
TEST_PLATFORM_COVERAGE 200
TEST_BUILD_RECORD_LINK_PROP 200
SOURCE_FILE 159
SOURCE_FILE_RDF_TYPE 159
WORK_ITEM_CATEGORY_LINK_PROP 156
WORK_ITEM_PLAN 154
WORK_ITEM_PLAN_RDF_TYPE 154
TEST_BUILD_DEFINITION_LINK_PROP 112
TEST_CATEGORY_LINK_PROP 108
WORK_ITEM_DELIVERABLE_RDF_TYPE 93
WORK_ITEM_DELIVERABLE_LINK_PROP 93
WORK_ITEM_DELIVERABLE 93
WORK_ITEM_DELIVERABLE_RELEASE_PREDECESSOR 93
WORK_ITEM_CATEGORY_RDF_TYPE 92
WORK_ITEM_CATEGORY 92
PREFIX 55
TEST_BUILD_DEFINITION_RDF_TYPE 30
TEST_BUILD_DEFINITION 30
ACCESS_CONTEXT_RDF_TYPE 26
LQE_INDEXING_ERROR 18
LQE_INDEXING_ERROR_LINK_PROP 18
TRS_DESCRIPTOR 16
TRS_DESCRIPTOR_RDF_TYPE 16
TRS_DESCRIPTOR_LINK_PROP 16
RESOURCE_GROUP_USER_GROUPS 15
RESOURCE_GROUP_RDF_TYPE 15
RESOURCE_GROUP 15
ACCESS_CONTEXT 13
USER_GROUP 13
USER_GROUP_RDF_TYPE 13
USER_GROUP_NAME 12
CACHED_CONFIG 12
RESOURCE_GROUP_NAME 12