Introduction
This report compares the performance of RTC for z/OS across releases from v4.0.1 through v6.0.1. Performance improvements are made continually in each release of RTC for z/OS, and the objective of this report is to present those improvements. The performance data was obtained by running a benchmark test against each release. Currently the report covers only build performance.
Disclaimer
The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.
What our tests measure
We predominantly use automated tooling such as Rational Performance Tester (RPT) to simulate a workload that would normally be generated by client software such as the Eclipse client or web browsers. All response times listed are those measured by our automated tooling, not by a client.
The diagram below describes, at a very high level, which aspects of the entire end-to-end experience (human end user to server and back again) our performance tests simulate. The tests described in this article simulate the segment of the end-to-end transaction indicated in the middle of the diagram. Performance tests are server-side and capture response times for this segment of the transaction.
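As a minimal illustration of this style of measurement (not the actual RPT scripts; the URL below is a placeholder, not an endpoint from this report), the following sketch times a single server request the way the automated tooling does, capturing only the server response time with no client rendering:

```python
import time
import urllib.request

# Placeholder URL; the real tests drive the RTC (CCM) server under test.
URL = "https://rtc.example.com:9443/ccm/web"

start = time.perf_counter()
with urllib.request.urlopen(URL) as response:
    response.read()  # include the full response transfer in the timing
elapsed = time.perf_counter() - start
print(f"server response time: {elapsed:.3f} s")
```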
Findings
Based on the test data, build performance of RTC for z/OS has improved significantly from 4.0.1 to 6.0.1.
From 4.0.1 to 6.0, the overall improvement in enterprise build time is about 50%:
- from 4.0.2 to 4.0.3, enterprise build time improved by about 25% ("Collecting buildable files" shows the most notable improvement, at more than 80%)
- from 4.0.4 to 4.0.5, enterprise build time improved by about 10% (around 15% improvement in "Collecting buildable files" and about 10% improvement in "Compile" time)
- from 4.0.6 to 5.0.1, enterprise build time improved by about 10% (around 30% improvement in the "Updating dependency data" activity and about 65% improvement in the "Collecting buildable files" activity)
- from 5.0.2 to 6.0, enterprise build time improved by about 2% to 5% (more than 60% improvement in the "Collecting buildable files" activity and more than 30% improvement in the "Updating dependency data" activity)
- from 6.0 to 6.0.1, enterprise build time improved by about 6% (more than 54% in the "Updating dependency data" activity and 6% in the "Compile" activity)
Topology
The tests are executed in a Single Tier Topology infrastructure like the one in the following diagram:
The RTC server was set up with WebSphere and DB2 on Linux for System z. The build machine, running the Rational Build Agent, was on z/OS.
Test Environment

RTC Server
- Operating System & Version: Linux for System z (SUSE Linux Enterprise Server 10 (s390x))
- System Resource: 10 GB storage, 4 CPs (2000 MIPS, CPU type: 2097.710, CPU model: E12)
- CLM: from 4.0.1 GA to 6.0.1 GA, 4 GB heap size
- DB2: 9.7.0.5 (from 4.0.1 GA to 4.0.6 GA), 10.1.0.0 (from 5.0 GA to 6.0.1 GA)
- WAS: 8.0.0.3 (from 4.0.1 GA to 4.0.5 GA), 8.5.5.1 (from 4.0.6 GA to 6.0.1 GA)

Build Forge Agent
- Operating System & Version: z/OS 01.12.00
- System Resource: 6 GB storage, 4 CPs (2000 MIPS, CPU type: 2097.710, CPU model: E12)
- Build System Toolkit: from 4.0.1 GA to 6.0.1 GA
Methodology
Build time and individual activity times are compared using the start and end timestamps recorded for each test run.
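As an example of this comparison, the sketch below computes run durations from start and end timestamps and averages the two runs of a release. It is a minimal illustration: the timestamp format and values are hypothetical, not taken from the actual test logs.

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format, for illustration only

def duration_seconds(start: str, end: str) -> float:
    """Elapsed seconds between a recorded start and end timestamp."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

# Hypothetical timestamps for the two runs against one release:
runs = [
    ("2015-10-01 09:00:00", "2015-10-01 09:14:30"),
    ("2015-10-01 10:00:00", "2015-10-01 10:15:10"),
]
average = sum(duration_seconds(s, e) for s, e in runs) / len(runs)
print(f"average enterprise build time: {average:.0f} s")
```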
The sample projects for the test are:
Test Data

| Sample Project | Mortgage*100 | Mortgage*1000 |
| --- | --- | --- |
| Assets | 600 COBOL programs, 400 Copybooks, 200 BMS, 3 others | 6000 COBOL programs, 4000 Copybooks, 2000 BMS, 3 others |
| Total Assets | 1203 | 12003 |
In the repository, the source code is stored in one stream with a single component that includes 5 zComponent Projects.
Enterprise builds are executed twice against each version.
Results
Run duration
The chart below compares run durations from 4.0.1 through 6.0.1. Builds are run twice against each release and the average time is taken for the comparison.
The enterprise build time shows a total improvement of about 54% from 4.0.1 to 6.0.1. For the release-by-release comparison, it improved by 25% from 4.0.2 to 4.0.3, 10% from 4.0.4 to 4.0.5, 10% from 4.0.6 to 5.0, 2% to 5% from 5.0.2 to 6.0, and 1% from 6.0 to 6.0.1.
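Note that release-over-release percentages compound multiplicatively rather than adding up. The sketch below, using the figures quoted in the Findings section (and the midpoint of the quoted 2% to 5% range), shows how they combine; the compounded result of about 45% sits below the measured 54% total, which is consistent with the releases not itemized there contributing further small gains.

```python
# Per-release improvements quoted in the Findings section:
# 25% (4.0.2->4.0.3), 10% (4.0.4->4.0.5), 10% (4.0.6->5.0.1),
# 3.5% (midpoint of 2%-5%, 5.0.2->6.0), 6% (6.0->6.0.1).
gains = [0.25, 0.10, 0.10, 0.035, 0.06]

remaining = 1.0
for g in gains:
    remaining *= 1.0 - g  # each release keeps (1 - gain) of the previous build time
print(f"compounded improvement: {1.0 - remaining:.1%}")  # about 45%
```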
NOTE: Our version 4.0.6 benchmark test environment applied a hot fix on top of the 4.0.6 GA build for a product defect. Refer to the technote for further details:
An invalid or expired OAuth token error may occur during an Enterprise Dependency Build in IBM Rational Team Concert 4.0.6
Build Activities
The charts below display the run times of the build activities "Collecting buildable files", "Compile", and "Updating dependency data", which have gained improvements between releases. Builds are run twice against each release and the average time is taken for the comparison.
In v4.0.3 compared with v4.0.2, "Collecting buildable files" shows a large improvement of more than 80%.
In v4.0.5 compared with v4.0.4, the "Collecting buildable files" activity is about 15% faster, while "Compile" time is almost unchanged.
In v5.0 compared with v4.0.6, the "Collecting buildable files" activity is about 60% faster.
In v6.0 compared with v5.0.2, the "Collecting buildable files" activity is more than 60% faster.
In v6.0.1 compared with v6.0, the "Updating dependency data" activity is more than 54% faster and the "Compile" activity is about 6% faster.
A detailed comparison of the build activities can be found here.
Appendix A - Version 3.0.1 to Version 4.0
The historical enterprise build data for version 3.0.1 and version 4.0 is appended here for reference. Note that this data was measured on an older processor, which is relatively slow. The sample project used for the test is Mortgage Application *100.
| Version | Enterprise build time |
| --- | --- |
| RTC 4.0 | 34 minutes, 1 second |
| RTC 3.0.1 | 1 hour, 3 minutes, 2 seconds |
Appendix B - Key Tuning Parameters
IBM WebSphere Application Server, 8.0.0.3 (4.0.1 GA to 4.0.5 GA) and 8.5.5.1 (4.0.6 GA to 6.0.1 GA):
- JVM settings (GC policy and arguments, max and init heap sizes):
  -Xmn512m -Xgcpolicy:gencon -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000 -Xmx4g -Xms4g
- OS configuration:
  hard nofile 120000
  soft nofile 120000
- Refer to http://pic.dhe.ibm.com/infocenter/clmhelp/v4r0m4/topic/com.ibm.jazz.install.doc/topics/c_special_considerations_linux.html for details

DB2, DB2 Enterprise Server 9.7.0.5 (4.0.1 GA to 4.0.6 GA) and 10.1.0.0 (5.0 GA to 6.0.1 GA):
- Tablespace is stored on the same machine as IBM WebSphere Application Server

License Server, same version as CLM:
- Hosted locally by the JTS server

Network:
- Shared subnet within the test lab
About the authors
SuHui LuLu
Questions and comments:
- What other performance information would you like to see here?
- Do you have performance scenarios to share?
- Do you have scenarios that are not addressed in documentation?
- Where are you having problems in performance?