Rational Team Concert for z/OS Performance Improvement of Source Code Data Collection and Query in 5.0

Author: Su Hui
Last updated: April 18, 2014
Build basis: Rational Team Concert on z/OS 5.0 & 4.0.6

Introduction

This report compares the performance of Rational Team Concert (RTC) for z/OS version 5.0 to that of the previous 4.0.6 release.

The version 5.0 enhancements include performance improvements in the Enterprise Extensions source code data collection service and in source code data queries.

RTC collects source code data to identify source code dependencies, which are interrogated during dependency builds. The source code data collection service runs as an asynchronous daemon on the Jazz Team Server and is also invoked when a dependency build is requested; it can additionally be run manually from the Eclipse client.

The test measured the performance of the source code data collection service by requesting dependency builds and by running the service manually from the Eclipse client. In version 5.0, source code data collection is significantly faster than in the previous 4.0.6 release.

After the source code data is collected, queries can be created in the Eclipse client to find resources with any given source code data value. Version 5.0 reduces the client-side processing time of source code data queries. In this test, large source code data queries ran notably faster in version 5.0 than in version 4.0.6.

Disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.

What our tests measure

We use predominantly automated tooling such as Rational Performance Tester (RPT) to simulate a workload normally generated by client software such as the Eclipse client or web browsers. All response times listed are those measured by our automated tooling and not a client.

The diagram below describes at a very high level which aspects of the entire end-to-end experience (human end-user to server and back again) that our performance tests simulate. The tests described in this article simulate a segment of the end-to-end transaction as indicated in the middle of the diagram. Performance tests are server-side and capture response times for this segment of the transaction.

[Figure: schematic map of which segment of the end-to-end transaction the performance tests measure]

Findings

Based on the test data, RTC for z/OS 5.0 improves Enterprise Extensions dependency build preprocessing speeds over 4.0.6:

  • In the scenario of the initial enterprise build, preprocessing is up to 49% faster. Within preprocessing, the "Updating dependency data" activity improved by about 31% and the "Collecting buildable files" activity by about 65%.
  • In the scenario of a build after changing all the copybook files, preprocessing is up to 25% faster. The "Updating dependency data" activity improved by about 33% and "Collecting buildable files" by about 24%.
  • In the scenario of a build after changing all the COBOL files, preprocessing is up to 37% faster. The "Updating dependency data" activity improved by about 41% and "Collecting buildable files" by about 35%.

In the tests that manually rescan or update source code data, version 5.0 is 25% to 35% faster, depending on the scenario.

For source code data queries, version 5.0 is 52% to 80% faster, depending on the size of the returned result set.
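
The "X% faster" figures above are derived from run durations averaged over the two runs of each release (see Methodology). As a minimal sketch of that calculation, using made-up durations rather than the measured values from this report:

    # Percentage improvement of 5.0 over 4.0.6, from averaged run durations.
    # The run times below are placeholders, not measured values.
    def percent_faster(runs_406, runs_50):
        avg_406 = sum(runs_406) / len(runs_406)  # average 4.0.6 duration
        avg_50 = sum(runs_50) / len(runs_50)     # average 5.0 duration
        return (avg_406 - avg_50) / avg_406 * 100

    # Two runs per release, durations in seconds (hypothetical):
    print(round(percent_faster([100.0, 104.0], [51.0, 53.0])))  # -> 49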

Topology

The tests were executed on a Single Tier Topology infrastructure like the one in the following diagram:

[Figure: single-tier RTC Enterprise Extensions test topology]

The RTC server ran on WebSphere Application Server with DB2, on Linux for System z. The build machine, running the Rational Build Agent, was on z/OS.

Test Environment
RTC Server:
  Operating System & Version: Linux for System z (SUSE Linux Enterprise Server 10, s390x)
  System Resources: 10 GB storage, 4 CPs (20,000 MIPS; CPU type 2097.710, CPU model E12)
  CLM: 5.0 M3 (CALM-I20140228-2021), 5.0 M4 (CALM-I20140329-2009), 4.0.6 GA (CALM-I20140205-1923); 4 GB heap size
  DB2: 9.7.0.5
  WAS: 8.5.5.1

Build Forge Agent:
  Operating System & Version: z/OS 01.12.00
  System Resources: 6 GB storage, 4 CPs (20,000 MIPS; CPU type 2097.710, CPU model E12)
  Build System Toolkit: 5.0 M3 (RTC-I20140228-1725), 4.0.6 GA (RTC-I20140205-1437)

Methodology

Test run durations were computed from each test's start and completion times. CPU and memory utilization were monitored with NMON on the RTC server and with RMF on z/OS for the Rational Build Agent.
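
A minimal sketch of that duration calculation, assuming ISO-style timestamps (the report does not show the actual log format):

    # Derive a run duration from a test's start and completion timestamps.
    # The timestamp format is an assumption; adapt it to the actual logs.
    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S"

    def run_duration(start: str, end: str):
        return datetime.strptime(end, FMT) - datetime.strptime(start, FMT)

    print(run_duration("2014-04-01 10:00:00", "2014-04-01 10:42:30"))  # 0:42:30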

The sample project for the test was Mortgage*1000, which consists of 1000 duplicates of the Mortgage sample application.

Test Data
Sample Project: Mortgage*1000
Assets: 6000 COBOL programs, 4000 copybooks, 2000 BMS files, 3 others
Total Assets: 12,003

In the repository, the source code is stored in one stream with a single component that includes 4 zComponent Projects.

Each test was executed twice against both versions.

Test Scenarios

Dependency Build:
  1. Request a full dependency build.
  2. Change all the copybooks and request a build.
  3. Change all the COBOL files and request a build.

Rescan/Update Source Code Data (from the Eclipse client):
  1. Reset (rescan) the source code data of one stream with the Mortgage*1000 application.
  2. Change all copybook files and update the source code data of the stream.
  3. Change all COBOL files and update the source code data of the stream.

Source Code Data Query: create source code data queries on one stream with the Mortgage*1000 application, add the conditions below respectively, and run the queries (illustrated by the sketch after this list):
  1. fileType is 'MAIN' and logicalName contains 'E'
  2. fileType is 'MAIN'
  3. fileType is not 'DBD'
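
To make the three query conditions concrete, the sketch below evaluates them against a toy in-memory model of source code data records. The record layout and file names are illustrative assumptions, not RTC's query API or data model:

    # Illustrative only: a toy model of source code data records and the
    # three query conditions above. This is not RTC's query API.
    records = [
        {"file": "MORTGAGE1.cbl", "fileType": "MAIN",     "logicalName": "MORTGAGE1"},
        {"file": "EPSNBRVL.cpy",  "fileType": "COPYBOOK", "logicalName": "EPSNBRVL"},
        {"file": "MORTDBD.dbd",   "fileType": "DBD",      "logicalName": "MORTDBD"},
    ]

    q1 = [r for r in records if r["fileType"] == "MAIN" and "E" in r["logicalName"]]
    q2 = [r for r in records if r["fileType"] == "MAIN"]
    q3 = [r for r in records if r["fileType"] != "DBD"]

    # The larger the result set (q3 here), the more client-side processing
    # version 5.0 saves.
    print(len(q1), len(q2), len(q3))  # -> 1 1 2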

Results

Run duration

Dependency Build

The data tables below show the total run durations of the dependency builds in 5.0 and 4.0.6, derived from the test start and completion times.

[Table: dependency build start and completion times and total run durations, 5.0 vs. 4.0.6]

The charts below display the run durations of build preprocessing and of the individual preprocessing activities "Updating dependency data" and "Collecting buildable files", which gained significant improvements between 4.0.6 and 5.0. Builds were run twice against each release, and the average time is used for comparison.

[Chart: preprocessing run duration, 5.0 vs. 4.0.6]

[Chart: "Updating dependency data" run duration, 5.0 vs. 4.0.6]

[Chart: "Collecting buildable files" run duration, 5.0 vs. 4.0.6]

Rescan/Update Source Code Data

The charts below display the run durations when rescanning or updating source code data manually from the Eclipse client. They show that the source code data collection service is significantly faster in version 5.0 than in version 4.0.6.

[Table: rescan/update source code data run durations, 5.0 vs. 4.0.6]

[Chart: source code data collection run duration, 5.0 vs. 4.0.6]

Source Code Data Query

The chart below displays the average source code data query times for different result set sizes. The data shows that the client-side processing of source code data queries has improved by up to 80% for large result sets.

[Chart: average source code data query time by result set size, 5.0 vs. 4.0.6]

Resource Utilization

CPU and Memory for RTC server

These graphs show CPU and memory utilization for the RTC server, collected with NMON. The 5.0 and 4.0.6 environments show similar CPU and memory utilization.

[Charts: RTC server CPU and memory utilization, 4.0.6 and 5.0]

CPU and Memory for Build Agent

These graphs show CPU and memory utilization for the Build Forge Agent on the z/OS machine, collected with RMF. The 5.0 and 4.0.6 environments show similar CPU and memory utilization.

[Charts: Build Forge Agent CPU and memory utilization, 4.0.6 and 5.0]

Appendix A - Key Tuning Parameters

IBM WebSphere Application Server 8.5.5.1:
  JVM settings (GC policy and arguments, max and initial heap sizes):

    -Xmn512m -Xgcpolicy:gencon -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
    -Xmx4g -Xms4g

  OS configuration:

    hard nofile 120000
    soft nofile 120000

  Refer to http://pic.dhe.ibm.com/infocenter/clmhelp/v4r0m4/topic/com.ibm.jazz.install.doc/topics/c_special_considerations_linux.html for details.

DB2 Enterprise Server 9.7.0.5:
  Tablespace is stored on the same machine as IBM WebSphere Application Server.

License Server (same version as CLM):
  Hosted locally by the JTS server.

Network:
  Shared subnet within the test lab.

About the author

Su Hui


Questions and comments:
  • What other performance information would you like to see here?
  • Do you have performance scenarios to share?
  • Do you have scenarios that are not addressed in documentation?
  • Where are you having problems in performance?
