
[DRAFT] IBM Engineering Test Management (ETM) 7.0.2: performance/scalability updates

Authors: Hongyan Huo
Last updated: Nov 20, 2020
Build basis: IBM Engineering Test Management 7.0.2

Standard disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.

Updates from previous report:

This report provides an update on IBM Engineering Test Management (ETM) 7.0.2 performance and scalability testing, relative to the 7.0.1 release.

Datashape

In this release, the datashape has evolved as outlined below:
  • Number of components: the total number of components increased from 11 to 2511
  • Repository size [1]: increased from 15 million to 20 million total artifacts, or by 33% (see the brief calculation after this list)
  • ETM database size on disk has grown from 775 GB to 1171 GB
  • Component size [1]:
    • added small (<1K), medium (5K), and large (50K) sizes
    • retained the extra-large (500K) and extra-extra-large (10M) sizes
[1]: measured as the total number of versioned artifacts generated by the Data Generation Tool, which sums the counts for each artifact type visible in the UI.
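The 33% figure is simply the relative growth of the total artifact count. A minimal check of that arithmetic (the variable names are illustrative only):

    # Relative growth of the repository from the 7.0.1 datashape to the 7.0.2 datashape.
    artifacts_701 = 15_000_000  # total versioned artifacts in the previous repository
    artifacts_702 = 20_000_000  # total versioned artifacts in the current repository

    growth = (artifacts_702 - artifacts_701) / artifacts_701
    print(f"repository growth: {growth:.0%}")  # -> 33%, matching the figure above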

The size and number of the different components are summarized in the table below:

Versioned test artifact counts by component size:

| Counts | Extra-extra-large component (10M) | Extra-large component (500K) | Large (50K) | Medium (5K) | Small (<1K) |
| test plans | 1,681 | 50 | 6 | 4 | 1 |
| test cases | 1,000,207 | 30,000 | 3,000 | 400 | 20 |
| test scripts | 1,000,209 | 30,000 | 3,000 | 400 | 20 |
| test suites | 100,020 | 3,000 | 300 | 40 | 10 |
| test case execution records | 4,000,800 | 120,000 | 12,000 | 1,200 | 40 |
| test case results | 4,000,921 | 360,000 | 36,000 | 2,400 | 80 |
| test suite results | 500,100 | 15,000 | 1,500 | 160 | 20 |
| test execution schedules | 47,467 | 1,424 | 500 | 200 | 20 |
| test phases and test environments | 31,636 | 800 | 92 | 63 | 20 |
| build definitions and build records | 33,533 | 1,006 | 120 | 70 | 25 |
| Total # of artifacts/component | 10,716,574 | 561,280 | 56,518 | 4,937 | 256 |
| Total # of components/repository | 1 | 10 | 50 | 450 | 2,000 |
| Newly added in 7.0.2 | N | N | Y | Y | Y |
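The per-component totals in the table are the sum of the versioned artifact counts across all artifact types, as collected by the Data Generation Tool. A minimal sketch of that tally, using the small (<1K) component column above (the dictionary and script are illustrative, not part of the tool):

    # Versioned artifact counts for the small (<1K) component, taken from the table above.
    small_component = {
        "test plans": 1,
        "test cases": 20,
        "test scripts": 20,
        "test suites": 10,
        "test case execution records": 40,
        "test case results": 80,
        "test suite results": 20,
        "test execution schedules": 20,
        "test phases and test environments": 20,
        "build definitions and build records": 25,
    }

    # Component size = sum of versioned artifacts across all artifact types visible in the UI.
    total = sum(small_component.values())
    print(total)  # -> 256, matching the "Total # of artifacts/component" row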

Configuration test coverage

The performance and load tests of the small, medium, and large-sized components are evaluated under a Global Configuration (GC) for this release. The GC is composed of all 2500 ETM components. The extra-large and extra-extra-large components continue to be tested against their local ETM configurations for regression purposes.

Disclaimer

The extra-extra-large component was created initially to expedite the growth of the repository without consideration of artifact versions (that is, the artifacts were generated in an opt-out environment before the Configuration Management feature was enabled). The test against this particular component was initially experimental during 7.0, but it continues to serve as a way to regression test the performance of IBM Engineering Test Management (ETM). Because of its huge size, the results from this component can represent edge cases; the measurements for this component shown in this report should therefore be read as a comparison of performance across releases in the context of the datashape, not as a performance benchmark.

Summary of regression test results

We continue to observe the following performance characteristics as the repository size increases:

Component performance vs. component size

For any given repository size, generally:
  • The smaller a component, the faster its page response times (better performance). With the 20-million-artifact repository, for components ranging in size from small to large, 98% of the individual page response times are under 2 seconds (the exceptions are loading the dashboard, at 4 seconds, and saving a test plan, which may take 5 seconds due to the large number of iterations defined); whereas for the extra-large component, 11 out of 120 pages exceed 5 seconds. Most of these slower pages browse or search an artifact type with a large total count, including searching/filtering test case results (9 seconds) and test case records (5.5 seconds) in the default views.
A detailed analysis of component performance as a function of component size can be found in the ETM 7.0.1 Performance Report. It is worth noting that, because component size can vary substantially, each individual test load focused only on homogeneously sized components. Carrying out the test loads this way not only yields statistically normalized page response times, but also allows the servers to warm up thoroughly and the database to reach a more optimal state. A short sketch of the percentile calculation behind these response-time summaries follows.
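The percentile summaries quoted above (for example, "98% of the individual page response times are under 2 seconds") can be reproduced from the raw per-page timings with a simple calculation. A minimal sketch, assuming the timings are available as a flat list of seconds (the sample values are illustrative, not measured data):

    # Summarize page response times: share of pages under a threshold and a high percentile.
    # sample_times is illustrative; in practice it holds one entry per measured page action.
    sample_times = [0.4, 0.7, 1.1, 1.3, 1.8, 1.9, 2.4, 4.0, 5.0, 0.9]

    def share_under(times, threshold_s):
        """Fraction of response times below the given threshold, in seconds."""
        return sum(1 for t in times if t < threshold_s) / len(times)

    def percentile(times, pct):
        """Nearest-rank percentile of the response times."""
        ordered = sorted(times)
        rank = max(1, round(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    print(f"{share_under(sample_times, 2.0):.0%} of pages under 2 seconds")
    print(f"95th percentile: {percentile(sample_times, 95):.1f} s")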

Component performance vs. growing repository size

  • The general performance of a given component [2] degrades as the repository size grows. Although the measurements based on the extra-extra-large component stress test some edge cases, we observed a slight degradation when comparing the 15-million repository to the 10-million repository, and a minor to moderate degradation when comparing the 20-million repository to the 15-million repository. The degradation for the 500K component remains minor at the 20-million repository size: the average page response time across all pages degraded by less than 10% compared with the 15-million repository (see the sketch after this list).
  • Within a component, the pages that are most sensitive to the size of the entire repository are those involving the largest test artifact counts; for instance, in our test environment, the Browse Test Case Execution Records page (a total count of 4 million in a single component, or 20% of the total artifact count in the repository), Browse Test Case Results, and the related pages that search/filter these test artifacts.
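The less-than-10% figure above is a simple relative comparison of average page response times between two repository sizes. A minimal sketch of that comparison (the averages shown are illustrative placeholders, not the measured values):

    # Compare average page response times between two repository sizes.
    # The values are illustrative; in practice each list holds the measured time per page.
    times_15m = [1.2, 0.8, 2.1, 1.5, 0.9]   # seconds, 15-million-artifact repository
    times_20m = [1.3, 0.85, 2.3, 1.6, 0.95]  # seconds, 20-million-artifact repository

    avg_15m = sum(times_15m) / len(times_15m)
    avg_20m = sum(times_20m) / len(times_20m)
    degradation = (avg_20m - avg_15m) / avg_15m

    print(f"average page response time degraded by {degradation:.1%}")
    print("within the 'minor' range" if degradation < 0.10 else "more than minor")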

For a detailed analysis of component performance as a function of repository size, please refer to the ETM 7.0.1 Performance Report.

[2]: component size did not grow prior to the performance load test.

What is not in scope of testing for this release

As in all previous releases, scalability that may be impacted by any of the following dimensions is not covered in the 7.0.2 tests:
  • The growth of number of baselines for a given component or stream
  • The growth of number of versions for a given stream
  • The growth of number of states for a given artifact

References:

Related topics: ETM 7.0.1 Performance Report, Engineering Test Management (ETM) Adapter Deployment Guide, DOORS Next performance guide


Questions and comments:
  • What other performance information would you like to see here?
  • Do you have performance scenarios to share?
  • Do you have scenarios that are not addressed in documentation?
  • Where are you having problems in performance?

