This topic provides general information about the performance tests for Rational Team Concert Enterprise Extensions (RTCEE) on the System z platform.
There are two different sets of tests and two testing aspects to consider when talking about RTC and the System z platform:
The information and topics in this page therefore focus on describing the performance tests of RTCEE, where we can differentiate:
This page covers in detail how RTCEE performance tests are run and what they cover, with links to relevant information where applicable. A description of the main building blocks needed to understand the tests follows: (1) the topologies in which the tests are run; (2) the data that is used in the tests; and (3) the scenarios that are defined for testing different usages of the features.
Specific RTCEE performance tests are typically executed in a Single Tier Topology infrastructure like the one in the following diagram:
This section describes the testing topologies used for performance tests focused on RTCEE features for Application Development for z/OS.
From the diagram, we can differentiate:
This section describes the testing topologies used for performance tests focused on RTCEE features for Application Development for IBM i.
From the diagram, we can differentiate:
Note: specific RTCEE performance tests may be executed on different topologies. In such cases, the topology is detailed in the relevant test reports.
This section describes the data that is used for the performance tests specific to the Enterprise Extensions feature test scenarios. The data is based on a sample application that is replicated into several different variations, in order to reach volumes that allow us to test the scalability of the solution. The base application used as test data is the Mortgage Application, a sample application included as part of the Money That Matters sample. The application contains five zComponent projects with the following relevant numbers for the base assets:
Element Type | Number of Elements | Total Size | Average Size | Max Size |
---|---|---|---|---|
COBOL program | 6 | 165 KB | 27.5 KB | 133 KB |
Copybook | 6 | 3 KB | 0.5 KB | 568 bytes |
REXX | 2 | 2.6 KB | 1.3 KB | 1.3 KB |
BMS | 2 | 9 KB | 4.5 KB | 6.5 KB |
Link Card | 1 | 0.5 KB | 0.5 KB | 0.5 KB |
BIND file | 2 | 0.8 KB | 0.4 KB | 492 bytes |
The sample application is a COBOL/CICS application that contains both statically and dynamically called programs, with a number of common and module-specific copybooks. The process flow of the application from a CICS perspective is as follows:
To produce relevant volumes of information, the test data is replicated several times and used in the tests. Detailed information about the data used is provided in the specific information pages for each test.
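As an illustration of what such replication involves, the following is a minimal sketch that copies the base zComponent projects N times to build a larger scalability data set. All names, paths, and the copy count are illustrative assumptions, not the actual test tooling.

```python
# Minimal sketch: replicate the base sample projects to build a larger
# scalability data set. Names, paths, and the count are assumptions.
import shutil
from pathlib import Path

BASE_DIR = Path("MortgageApplication")      # holds the base projects (assumed)
OUT_DIR = Path("MortgageApplicationx100")   # replicated data set (assumed)
COPIES = 100

OUT_DIR.mkdir(exist_ok=True)
for project in (p for p in BASE_DIR.iterdir() if p.is_dir()):
    for i in range(1, COPIES + 1):
        # A unique suffix per copy keeps project and artifact names distinct.
        shutil.copytree(project, OUT_DIR / f"{project.name}{i:03d}")
```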
A standard set of scenarios is defined and executed as part of the RTCEE Benchmark Tests. While these standard scenarios are run regularly, additional scenarios may be defined for specifically designed tests.
This scenario tests the performance of the Enterprise Dependency Based Build, focused on a single user executing a team build. The tests are executed against two sets of data: "MortgageApplicationx100" and "MortgageApplicationx1000" on z/OS, and "Maillistx100" and "Maillistx1000" on the IBM i platform. Unless otherwise specified, the following options are commonly used in the tests that execute this scenario:
Detailed information about this scenario and test reports can be found here.
This scenario consists of testing the performance of the Enterprise Extensions features in a complete end-to-end run, from Promotion and Packaging through final Deployment.
Detailed information about this scenario and test reports can be found here.
This scenario tests the performance of the "zImport" and "zLoad" utilities.
The tests are executed against different sets of scalability data based on the Mortgage Application sample. Tests for this scenario are usually executed as a single-user operation run.
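For illustration, a single-user run of this kind can be timed with a small wrapper like the sketch below. The command line is deliberately a placeholder: the real zImport/zLoad invocations are installation-specific and are not spelled out here.

```python
# Minimal sketch: time a single-user import/load run over data sets of
# increasing size. The command is a placeholder, not the real CLI.
import subprocess
import time

DATASETS = ["MortgageApplicationx100", "MortgageApplicationx1000"]

for dataset in DATASETS:
    cmd = ["echo", f"import {dataset}"]  # placeholder for the actual command
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    print(f"{dataset}: {time.monotonic() - start:.1f}s elapsed")
```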
Detailed information about this scenario and test reports can be found here.
This scenario tests the performance of the Rational Team Concert ISPF client for the operation of loading sources. Two sets of sample projects are used:
Detailed information about this scenario and test reports can be found here.
This scenario targets the concurrent execution of builds. To accomplish that, team builds and personal builds are executed concurrently in different steps of the scenario.
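A driver for such a step can be sketched as follows; request_build() is a hypothetical stand-in for whatever mechanism actually queues a team or personal build and waits for its result.

```python
# Minimal sketch: run one team build and several personal builds
# concurrently. request_build() is a hypothetical placeholder.
from concurrent.futures import ThreadPoolExecutor
import random
import time

def request_build(kind: str, user: str) -> str:
    time.sleep(random.uniform(0.1, 0.5))  # stands in for the real build time
    return f"{kind} build for {user} finished"

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(request_build, "team", "buildadmin")]
    futures += [pool.submit(request_build, "personal", f"dev{i}") for i in range(5)]
    for future in futures:
        print(future.result())
```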
Detailed information about this scenario and test reports can be found here.
This scenario is considered one of the "Additional" scenarios, as it is not part of the standard set of test scenarios executed every time (that is, in every release) as part of the RTCEE Benchmark Tests.
The specific scenarios in this category vary because they are designed anew in each release to test the features that have performance-related changes.
Note: For the detailed scenarios, refer to the 'Test Report' section to find the specific feature test you are interested in.
Detailed information about this scenario and test reports can be found here.
This scenario tests the performance of the Enterprise Dependency Based Build feature under a high-volume workload, simulating the concurrent build operations that developers would execute in a real-world scenario.
The test scenarios themselves are based on observations of how our customers use the feature and on the practices we recommend to them.
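One way to approximate such a workload in a load driver is to issue build requests at randomized intervals that mimic developer activity. The sketch below uses exponential inter-arrival times; the rate and the submit_build() placeholder are assumptions for illustration, not the documented test harness.

```python
# Minimal sketch: generate a high-volume stream of build requests with
# randomized (exponential) gaps to mimic concurrent developer activity.
import random
import threading
import time

def submit_build(n: int) -> None:
    print(f"build request {n} submitted")  # placeholder for a real request

MEAN_GAP_SECONDS = 2.0   # assumed average gap between requests
TOTAL_REQUESTS = 20

for n in range(TOTAL_REQUESTS):
    threading.Thread(target=submit_build, args=(n,)).start()
    time.sleep(random.expovariate(1.0 / MEAN_GAP_SECONDS))
```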
Detailed information about this scenario and test reports can be found here.
The information presented previously in this wiki page describes the types of performance tests regularly executed for RTCEE features and the different assets used in them. As a wrap-up, this subsection describes the data that you can expect to find in the RTCEE performance test reports.
The reports include information about the specifics of the systems used as well as the software versions. In the test reports you will usually find:
The following tables outline the resources that are tracked and the units in which results are reported, first for the RTC server and then for the build machine:
Resource | Measurement |
---|---|
CPU | Percentage of processing usage. The actual RTC server process is monitored. |
Memory | Measured in kilobytes. Whole-system memory consumption is monitored. |
Resource | Measurement |
---|---|
CPU | Percentage of processing usage. Whole-system CPU consumption is monitored; because the build machine is a dedicated LPAR, it is assumed that system CPU and Build Agent CPU for builds are comparable. |
Memory | Response times and rates are measured. |
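A resource sampler matching the units above can be sketched with the third-party psutil package; the server process id and the sampling interval are assumptions for illustration.

```python
# Minimal sketch: sample CPU of the server process (percent) and
# whole-system memory (kilobytes). Pid and interval are assumptions;
# requires the third-party psutil package.
import psutil

SERVER_PID = 1234                 # hypothetical pid of the RTC server process
server = psutil.Process(SERVER_PID)

for _ in range(10):
    cpu_pct = server.cpu_percent(interval=5)        # blocks 5s, then returns %
    mem_kb = psutil.virtual_memory().used // 1024   # whole-system memory in KB
    print(f"cpu={cpu_pct:.1f}% mem={mem_kb}KB")
```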
The following is an example of a resource usage graph and the information presented. Axis meanings vary depending on the information tracked:
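To reproduce a graph of this kind from sampled data, a sketch along the following lines could be used; the samples.csv layout (elapsed seconds, CPU percent) is an illustrative assumption, and the third-party matplotlib package is required.

```python
# Minimal sketch: render sampled resource data as a time-series graph.
# The samples.csv layout is an assumption for illustration.
import csv
import matplotlib.pyplot as plt

seconds, cpu = [], []
with open("samples.csv") as f:
    for row in csv.reader(f):
        seconds.append(float(row[0]))
        cpu.append(float(row[1]))

plt.plot(seconds, cpu)
plt.xlabel("Elapsed time (s)")   # axis meaning varies by report
plt.ylabel("CPU (%)")
plt.title("Resource usage during a test run")
plt.savefig("resources.png")
```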