
[DRAFT] IBM Engineering Test Management 7.0.2: performance/scalability updates

Authors: Hongyan Huo
Last updated: Nov 18, 2020
Build basis: IBM Engineering Test Management 7.0.2

Updates from previous report:

The data shape has evolved as outlined below:
  • Number of components: the total number of components increased from 11 to 2511
  • Size of a component [1]:
    • added small (<1K), medium (5K), and large (50K) sizes
    • keeping extra large (500K) and huge (10M) sizes

  • Repository size [1]: increased from 15 million to 20 million
  • ETM database size on disk has grown from 775 GB to 1171 GB

[1]: measured as the total number of versioned artifacts generated by the data generation utility, summed across the artifact types visible in the UI.
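
As a rough illustration of footnote [1], the sketch below shows how a repository-size figure of this kind can be derived by summing per-artifact-type counts. The artifact types and counts here are hypothetical placeholders, not the actual test data (Python):

    # Hypothetical per-artifact-type counts; the real figures come from the
    # artifact types visible in the ETM UI after the data generation run
    # (the actual repository in this report totals roughly 20 million).
    artifact_counts = {
        "test plans": 1_200,
        "test cases": 450_000,
        "test scripts": 430_000,
        "test case execution records": 4_000_000,
        "test case results": 4_100_000,
    }

    # Repository size, as used in this report, is the sum of versioned
    # artifacts across all artifact types.
    repository_size = sum(artifact_counts.values())
    print(f"Repository size: {repository_size:,} versioned artifacts")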

Disclaimer

The huge-sized component was created initially to expedite the growth of the repository, without consideration of artifact versions (i.e., the artifacts were generated in an opt-out environment before Configuration Management was enabled). The test against the huge-sized component was originally experimental but continues to serve as a way to compare and characterize the differences in performance between releases of ETM. The results for the huge-sized component shown in this report should therefore be viewed as a comparison of relative performance between releases, and not as an absolute benchmark of performance.

Summary of regression test results

We continue to observe the following performance characteristics as the repository size increases:

Component performance vs. component size:

For any given repository size, generally:
  • The smaller a component, the faster its page response times and the better its overall performance. With the 20-million repository, for components ranging in size from small to large, 98% of the individual page response times are under 2 seconds (the exceptions being Dashboard loading at 4 seconds, and saving a test plan, which may take 5 seconds because of the large number of iterations defined). For the extra-large component, 11 out of 120 pages exceed 5 seconds; most of these slower pages involve browsing or searching an artifact type with a larger total count (see the sketch below).
It is worth noting that, because component size can vary substantially, each individual test load focused only on homogeneously sized components. Running the test loads this way not only minimizes variation in page response times, but also allows the servers to warm up thoroughly and the database to reach a more optimal state.
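
To make the threshold-based summaries above concrete, here is a minimal sketch (with hypothetical page names and timings, not the actual measurement harness) of how the share of pages under 2 seconds and the list of pages exceeding 5 seconds can be derived from per-page response times:

    # Minimal sketch, not the actual test harness: given average per-page
    # response times in seconds, report the share of pages under a fast
    # threshold and the pages at or above a slow threshold.
    def summarize(page_times, fast_threshold=2.0, slow_threshold=5.0):
        total = len(page_times)
        fast_share = sum(1 for t in page_times.values() if t < fast_threshold) / total
        slow_pages = [name for name, t in page_times.items() if t >= slow_threshold]
        return fast_share, slow_pages

    # Hypothetical sample timings for illustration only.
    sample = {"Dashboard": 4.0, "Browse Test Cases": 1.2, "Save Test Plan": 5.1}
    share, slow = summarize(sample)
    print(f"{share:.0%} of pages under 2 s; pages over 5 s: {slow}")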

Component performance vs. growing repository size:

  • The general performance of a given component [2] degrades as the repository size grows. Although the measurements were based on a huge-sized component, which may represent an edge case, we observed only a slight degradation when comparing the 15-million repository with the 10-million repository, but a minor to moderate degradation when comparing the 20-million repository with the 15-million repository.
  • Within a component context, the pages that are most sensitive to the size of the overall repository are those containing the largest test artifact counts; in our test environment, for instance, the Browse Test Case Execution Records page (a total count of 4 million in a single component, or 20% of the total artifact count in the repository), Browse Test Case Results, and the related pages used to search or filter those artifact types (see the comparison sketch below).

For a detailed analysis of component performance as a function of repository size, please refer to the ETM 7.0.1 Performance Report.

[2]: component size did not grow prior to the performance load test.
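
As a simple illustration of how the repository-size comparisons above are expressed, the following sketch (hypothetical timings, not the published results) computes the relative change in per-page response times between two repository sizes for the same component:

    # Minimal sketch with assumed data: compare per-page response times for
    # the same huge-sized component measured against two repository sizes.
    def degradation(baseline, current):
        # Percent change per page between two runs keyed by page name.
        return {
            page: (current[page] - baseline[page]) / baseline[page] * 100.0
            for page in baseline
            if page in current and baseline[page] > 0
        }

    # Hypothetical timings (seconds) for illustration only.
    repo_15m = {"Browse Test Case Execution Records": 6.0, "Dashboard": 3.5}
    repo_20m = {"Browse Test Case Execution Records": 7.2, "Dashboard": 3.7}

    for page, pct in degradation(repo_15m, repo_20m).items():
        print(f"{page}: {pct:+.1f}% vs. the 15-million repository")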

What is not in scope of testing for this release

As in all previous releases, scalability that may be impacted by any of the following dimensions is not covered in the 7.0.2 tests:
  • The growth of number of baselines for a given component or stream
  • The growth of number of versions for a given stream
  • The growth of number of states for a given artifact

Standard disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.

Related topics: ETM 7.0.1 Performance Report, Engineering Test Management (ETM) Adapter Deployment Guide, DOORS Next performance guide


Questions and comments:
  • What other performance information would you like to see here?
  • Do you have performance scenarios to share?
  • Do you have scenarios that are not addressed in documentation?
  • Where are you having problems in performance?

