[1]: Versioned test artifact counts are based on the data generation utility, which collects the sum from each artifact type visible in the UI.
The testing in this report does not cover performance under a Global Configuration, nor capacity/scalability as the number of streams or baselines increases for any given ETM component. Integration scenarios, including cross-product links, were not tested in this release.
While the report provides a summary of results, detailed information is covered in the various sections. Use the page content menu in the upper-right section of the report to navigate. We may provide raw results upon request.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
This testing was done to compare and characterize the differences in performance between different versions of the product. The results shown here should therefore be viewed as a relative comparison between versions, not as an absolute benchmark of performance.
Counts | Extra large component | Smaller component |
---|---|---|
test plans | 1,681 | 50 |
test cases | 1,000,207 | 30,000 |
test scripts | 1,000,209 | 30,000 |
test suites | 100,020 | 3,000 |
test case execution records | 4,000,800 | 120,000 |
test case results | 4,000,921 | 360,000 |
test suite results | 500,100 | 15,000 |
test execution schedules | 47,467 | 1,424 |
test phases and test environments | 31,636 | 800 |
build definitions and build records | 33,533 | 1,006 |
total | 10,716,574 | 561,280 |
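As a sanity check, the totals row is the sum of the per-type counts. The following short script (illustrative only, with the numbers copied from the table above) confirms the arithmetic:

```python
# Per-artifact-type counts copied from the table above, in row order
# (test plans through build definitions and build records).
extra_large = [1_681, 1_000_207, 1_000_209, 100_020, 4_000_800,
               4_000_921, 500_100, 47_467, 31_636, 33_533]
smaller = [50, 30_000, 30_000, 3_000, 120_000,
           360_000, 15_000, 1_424, 800, 1_006]

# The totals row should equal the column sums.
assert sum(extra_large) == 10_716_574
assert sum(smaller) == 561_280
```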
The test artifacts maintain the following associations:
The specifications of the machines under test are listed in the table below. Server tuning details are listed in Appendix - Key configuration parameters.
Function | Number of Machines | Machine Type | CPU / Machine | Total # of CPU vCores/Machine | Memory/Machine | Disk | Disk capacity | Network interface | OS and Version |
---|---|---|---|---|---|---|---|---|---|
Proxy Server (IBM HTTP Server and WebSphere Plugin) | 1 | IBM System x3250 M4 | 1 x Intel Xeon E3-1240 3.4GHz (quad-core) | 8 | 16GB | RAID 1 -- SAS Disk x 2 | 299GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.5 |
JTS Server | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 32GB | RAID 5 -- SAS Disk x 2 | 897GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.5 |
ETM Server | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 32GB | RAID 5 -- SAS Disk x 2 | 897GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.5 |
Database Server | 1 | IBM System x3650 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 40 | 64GB | RAID 10 – 279GB SAS Disk x 16 | 4TB | Gigabit Ethernet | Red Hat Enterprise Linux Server 7 (Maipo) |
RPT workbench | 1 | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 32GB | RAID 5 -- SAS Disk x 2 | 897GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.4 |
RPT Agents | 6 | VM image | 4 x Intel Xeon X5650 CPU (1-Core 2.67GHz) | 1 | 2GB | N/A | 30GB | Gigabit Ethernet | Red Hat Enterprise Linux Server release 6.5 |
Network switches | N/A | Cisco 2960G-24TC-L | N/A | N/A | N/A | N/A | N/A | Gigabit Ethernet | 24 Ethernet 10/100/1000 ports |
N/A: not applicable. vCores = cores with hyperthreading enabled.
All server machines and test clients are located on the same subnet. The LAN has a maximum bandwidth of 1000 Mbps and less than 0.3 ms ping latency.
Rational Performance Tester (RPT) is used to simulate the load created by the web client. Each virtual user completes a random use case from a set of available use cases. Each use case is implemented as an RPT script, and these scripts are organized into pages representing user actions.
The performance workload is role-based: each of the areas defined under the sequence of actions is separated into an individual user group within an RPT schedule. See User roles, test cases and workload characterization for a detailed description of each test scenario.
Configuration | User role | % of total | Related actions |
---|---|---|---|
Stream | | 98 | |
 | QE Manager | 8 | Create test plan, browse test plan and test case, browse test script, simple test plan copy, search defect, view dashboard. |
 | Test Lead | 19 | Edit test environments, edit test plan, create test case, bulk edit of test cases, browse test script, test execution, search defect. |
 | Tester | 66 | Create defect, modify defect, search defect, edit test case, create test script, edit test script, test execution, browse test execution record. |
 | Dashboard Viewer | 5 | View dashboard, with login and logout. |
Baseline | | 2 | |
 | Baseline Viewer | 2 | Browse a test case, test plan, or test case execution record in a baseline. |
User Role | Percentage within the user role | Sequence of Operations |
---|---|---|
QE Manager | 1 | Create test plan: user creates a test plan; adds a description, business objectives, test objectives, two test schedules, a test estimate, quality objectives, and entry and exit criteria. |
 | 26 | Browse test plans and test cases: user lists all test plans, types a keyword in the name filter, opens a test plan found, reviews various sections, and closes the test plan. |
 | 26 | Browse test script: user searches for a test script by name, opens it, and reviews the contents. |
 | 1 | Simple test plan copy: user searches for a test plan by name and makes a copy. |
 | 23 | Search defect: user searches for a specific defect by number and reviews the defect. |
 | 20 | Refresh dashboard: user views the dashboard. |
Test Lead | 8 | Edit test environment: user lists all test environments, selects a test environment, modifies the description, and saves the change. |
 | 15 | Edit test plan: user lists all test plans, opens a test plan for editing, adds a test case to the test plan, modifies a few other sections of the test plan, and saves the changes. |
 | 4 | Create test case: user opens the Create Test Case page, enters data for a new test case, and saves the new test case. |
 | 1 | Bulk edit of test cases: user searches for test cases with a root name and edits all found with an owner change. |
 | 32 | Browse test script: user searches for a test script by name, opens and reviews the contents, and closes the test script. |
 | 26 | Test execution: user searches for a test case execution record by name, selects the record, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves. |
 | 11 | Search defect: user searches for a specific defect by number, reviews the defect, and closes it. |
Tester | 8 | Create defect: user opens the Create Defect page, enters data for a new defect, and saves the defect. |
 | 5 | Modify defect: user searches for a specific defect by number, modifies, and saves the defect. |
 | 14 | Search defect: user searches for a specific defect by number, reviews the defect, and closes it. |
 | 6 | Edit test case: user searches for a test case by name, opens the test case, adds a test script to the test case (clicking Next a few times, via the server-side paging feature, before selecting a test script), and saves the test case. |
 | 4 | Create test script: user opens the Create Test Script page, enters data for a new test script, and saves the test script. |
 | 8 | Edit test script: user searches for a test script by name, opens it, modifies, and saves the test script. |
 | 42 | Test execution: user searches for a test case execution record by name, selects it and starts execution, enters a pass/fail verdict, reviews results, sets points, then saves. |
 | 7 | Browse test execution record: user searches for a test case execution record by name and opens the most recent results. |
Dashboard Viewer | 100 | View dashboard (with login and logout): user logs in, views the dashboard, and logs out. |
Baseline Viewer | 34 | Browse test script in baseline: user switches to a baseline, browses test scripts, and opens a test script from the list. |
 | 33 | Browse test case execution record in baseline: user switches to a baseline, browses test case execution records, and opens one from the list. |
 | 33 | Browse test plans and test cases in baseline: user switches to a baseline, lists all test plans, types a keyword in the name filter, opens a test plan found, reviews various sections, and closes the test plan. |
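Each virtual user completes a random use case weighted by the percentages above. The selection logic can be sketched as a weighted random choice; this is a simplified illustration, not the actual RPT schedule, using the weights from the QE Manager role:

```python
import random

# Use-case weights for the QE Manager role, from the table above
# (percentages within the role; they need not sum to exactly 100).
qe_manager = {
    "create test plan": 1,
    "browse test plans and test cases": 26,
    "browse test script": 26,
    "simple test plan copy": 1,
    "search defect": 23,
    "refresh dashboard": 20,
}

def pick_use_case(weights: dict, rng: random.Random) -> str:
    """Pick one use case with probability proportional to its weight."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

rng = random.Random(42)  # fixed seed for reproducibility
picks = [pick_use_case(qe_manager, rng) for _ in range(10_000)]
# High-weight use cases dominate the simulated mix.
assert picks.count("browse test script") > picks.count("create test plan")
```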
In all charts, each column represents the page load time (page response time) in milliseconds; smaller values are better.
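The report also quotes 90th-percentile response times for some pages. As an illustration only (the sample data below is hypothetical, not actual measurements), a nearest-rank percentile can be computed from a list of page response times like this:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample such that at least
    p% of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(len(ordered) * p / 100))
    return ordered[rank - 1]

# Hypothetical page response times in milliseconds.
times_ms = [120, 135, 150, 160, 180, 210, 240, 300, 450, 980]
assert percentile(times_ms, 90) == 450  # 9th of 10 sorted samples
```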
Browse Test Plans & Test Cases
*(Charts: page response times with the repository size increased by 50% vs. the component size increased by 2,000%. Note: this page's response time can be variable; however, its 90th percentile is under 1 second in both cases.)*
Test Execution Record Browsing
*(Charts: page response times with the repository size increased by 50% vs. the component size increased by 2,000%.)*
*(Charts for the remaining scenarios: page response times with the repository size increased by 50% vs. the component size increased by 2,000%.)*
Browse Test Script In Baseline
*(Charts: page response times with the repository size increased by 50% vs. the component size increased by 2,000%.)*
Browse Test Case Execution Record In Baseline
*(Charts: page response times with the repository size increased by 50% vs. the component size increased by 2,000%.)*
Browse Test Plans And Test Cases In Baseline
*(Charts for this and the remaining scenarios: page response times with the repository size increased by 50% vs. the component size increased by 2,000%.)*
Test scenario | Average execution count per load test |
---|---|
Execute the TER (Test Execution For 4 Steps) | 1,478 |
Browse TER | 864 |
Search Defect | 650 |
View Dashboard | 508 |
Browse Test Script | 460 |
Edit Test Script | 245 |
Create Defect | 222 |
Edit Test Case | 170 |
Edit Test Plan | 163 |
Modify Defect | 150 |
Create Test Script | 119 |
Browse Test Plans And Test Cases | 110 |
Refresh Dashboard | 98 |
Edit Test Environment | 91 |
Create Test Case | 64 |
Browse Test Script In Baseline | 23 |
Browse TER In Baseline | 23 |
Browse Test Plans And Test Cases In Baseline | 17 |
Full Text Search and Open Test Suite | 0 |
Bulk Edit of Test Cases | 17 |
Create Test Plan | 2 |
Simple Test Plan Copy | 2 |
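The execution counts above can be converted into workload shares to see which scenarios dominate the mix. A short sketch (illustrative; counts copied from the table above):

```python
# Average execution counts per load test, from the table above.
counts = {
    "Execute the TER": 1_478, "Browse TER": 864, "Search Defect": 650,
    "View Dashboard": 508, "Browse Test Script": 460, "Edit Test Script": 245,
    "Create Defect": 222, "Edit Test Case": 170, "Edit Test Plan": 163,
    "Modify Defect": 150, "Create Test Script": 119,
    "Browse Test Plans And Test Cases": 110, "Refresh Dashboard": 98,
    "Edit Test Environment": 91, "Create Test Case": 64,
    "Browse Test Script In Baseline": 23, "Browse TER In Baseline": 23,
    "Browse Test Plans And Test Cases In Baseline": 17,
    "Full Text Search and Open Test Suite": 0, "Bulk Edit of Test Cases": 17,
    "Create Test Plan": 2, "Simple Test Plan Copy": 2,
}
total = sum(counts.values())
shares = {name: round(100 * n / total, 1) for name, n in counts.items()}
# TER execution accounts for roughly a quarter of all executed scenarios.
assert shares["Execute the TER"] == 27.0
```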
Extra large component vs. smaller component:

*(Charts: CPU utilization, disk utilization, memory, disk transfer.)*
Extra large component vs. smaller component:

*(Charts: CPU utilization, disk utilization.)*
Verbose garbage collection logging is enabled to record GC activity. The GC logs showed very little variation between runs, and no discernible difference between versions. Below is one example of the GC log output for each application.
*(Chart: sample verbose GC log output.)*
These pages were among the most expensive in our performance test, sometimes taking more than 10 seconds to load. After examining the queries from within the database, we decided to add indexes to improve both response time and memory usage. The following indexes were created on the ETM database schema:
    -- function-based indexes for using the name column for filtering and searching
    CREATE INDEX "QM_USER"."NAMECOL_UPPER_TER" ON "QM_USER"."EXECUTION_EXECUTION_RECORD" (UPPER("NAME"));
    CREATE INDEX "QM_USER"."NAMECOL_UPPER_TCER" ON "QM_USER"."EXECUTION_EXECUTION_TASK_RESLT" (UPPER("NAME"));
    CREATE INDEX "QM_USER"."NAMECOL_UPPER_TS" ON "QM_USER"."PLANNING_EXECUTION_SCRIPT" (UPPER(NAME));
    CREATE INDEX "QM_USER"."NAMECOL_UPPER_CATG" ON "QM_USER"."PLANNING_CATEGORY" (UPPER(NAME));
    CREATE INDEX "QM_USER"."NAMECOL_UPPER_TC" ON "QM_USER"."PLANNING_TEST_CASE" (UPPER(NAME));
    CREATE INDEX "QM_USER"."NAMECOL_UPPER_TSU" ON "QM_USER"."PLANNING_TEST_SUITE" (UPPER(NAME));
    CREATE INDEX "QM_USER"."NAMECOL_UPPER_CTRR" ON "QM_USER"."REPOSITORY_CONTRIBUTOR_RECORD" (UPPER(NAME));
    CREATE INDEX "QM_USER"."USERID_UPPER_CTRR" ON "QM_USER"."REPOSITORY_CONTRBTR_RCRD_SR_DS" (UPPER(USER_ID));
    -- composite indexes to improve large views such as execution records and test results
    CREATE INDEX "QM_USER"."TER_JZ_ARC_MOD_PRJ_ITM_ST" ON "QM_USER"."EXECUTION_EXECUTION_RECORD" ("JZ_DISCRIMINATOR", "ARCHIVED", "MODIFIED", "PROJECT_AREA_ITEM_ID", "ITEM_ID", "STATE_ID");
    CREATE INDEX "QM_USER"."TCER_JZ_ARC_ET_PRJ_ITM_ST" ON "QM_USER"."EXECUTION_EXECUTION_TASK_RESLT" ("JZ_DISCRIMINATOR", "ARCHIVED", "END_TIME", "PROJECT_AREA_ITEM_ID", "ITEM_ID", "STATE_ID");
    CREATE INDEX "QM_USER"."JZ_ARC_PRJ_SNPS_ITM_ST" ON "QM_USER"."PLANNING_VERSIONABLE_ITEM" ("JZ_DISCRIMINATOR", "ARCHIVED", "PROJECT_AREA_ITEM_ID", "SNAPSHOT", "ITEM_ID", "STATE_ID");
    -- composite index for process
    CREATE INDEX "QM_USER"."PRJITMID_ITMID" ON "QM_USER"."PROCESS_PROCESS_AREA" ("PROJECT_AREA_ITEM_ID", "ITEM_ID");
In ETM 7.0.1, we found that the optimizer may stop choosing the function-based indexes through SQL profiling because of an index, "RPSTR_VRSNSTDBDSCRNTTMCRTDCNFG", present in the ETM user schema. After dropping this index, the EXPLAIN PLAN from the SQL profile produced the desired plan. Based on the performance payload test and extended load runs, dropping this particular index from the ETM user schema does not appear to have a negative performance impact on the application.
    #### enable automatic memory management (AMM)
    # original memory_max_target and memory_target are both 0;
    # original sga_target=28800M and sga_max_size=40G;
    # may need to increase the memory settings on the system first; run as root:
    mount -t tmpfs shmfs -o size=56g /dev/shm
    alter system set memory_max_target=52000M scope=spfile;
    alter system set memory_target=52000M scope=spfile;
    alter system set sga_max_size=0 scope=spfile;
    alter system set sga_target=0 scope=spfile;
    alter system set pga_aggregate_limit=0 scope=spfile;
    alter system set pga_aggregate_target=0 scope=spfile;
    # bounce the database. note that in the test we also fixed the sizes of sga_max_size and
    # db_cache_size to match the high-water mark observed prior to tuning:
    alter system set sga_max_size=40G scope=spfile;
    alter system set db_cache_size=24G scope=spfile;
    # bounce the database
    #### disable adaptive plans
    alter system set optimizer_adaptive_plans=false;  # default true
    # _optimizer_use_feedback=false  # not available in 12c
    #### these settings can help enable auto parallelism in SQL execution;
    #### for a high payload, turn these on only when you have enough disk capacity
    # alter system set parallel_degree_policy=auto scope=spfile;  # default manual
    # alter system set filesystemio_options=setall scope=spfile;  # default none
    # bounce the database
Product | Version | Highlights for configurations under test |
---|---|---|
IBM HTTP Server for WebSphere Application Server | 8.5.5.12 | IBM HTTP Server functions as a reverse proxy server implemented via the web server plug-in for WebSphere Application Server. Configuration details (HTTP server httpd.conf and OS configuration) can be found in the CLM infocenter. |
Embedded WebSphere Liberty server | 20.0.0.3 | JVM settings: -Xgcpolicy:gencon -Xmx16g -Xms16g -Xmn4g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000 -verbose:gc -Xverbosegclog:gc.log -XX:MaxDirectMemorySize=1G. SDK version: OS configuration: system-wide resources for the app server process owner. |
Oracle | Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production | |
LDAP server | | |
License server | N/A | |
RPT workbench | 8.3.0.3 | Defaults |
RPT agents | 8.3.0.3 | Defaults |
Network | | Shared subnet within test lab |