Rational Build Forge performance test results: Evaluating the improved performance in 7.1.2 relative to 7.1.1.x


Rational Build Forge 7.1.1.x releases suffered from performance inefficiencies because of architectural limitations. The 7.1.2 release reduces these inefficiencies.

The improvements targeted specific functional areas: logs, purges, schedules, server tests, manifest refreshes, build execution (overhead), and filters.

Performance tests of the 7.1.1.x and 7.1.2 releases provide data that show the substantial improvements achieved in many of those functional areas.

This report is the preliminary set of results that demonstrate those improvements. A full report that covers more test cases and platforms will be released later.

Technical summary


The focus of the performance testing was two-fold:

  • To create performance baselines for the following functionality:
    • Logs
    • Purges
    • Schedules
    • Server tests
    • Manifest refreshes (not included)
    • Build Execution (overhead time)
  • To generate and publish performance data for 7.1.1.x and 7.1.2, with benchmark comparisons between the two releases for the target use cases. Those comparisons include configuration coverage for all the primary supported databases.

Test methodology

To achieve our goal of establishing baselines for the target functionality, we focused on the performance of the target function in relation to a specific object. We took measurements as we increased the object's value to determine how well the functionality scales and the effect on performance.
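
The measurement approach described above can be sketched as a simple timing harness. This is a hypothetical illustration only: `run_operation` is a placeholder for the function under test (a purge, a log display, and so on), not a Build Forge API, and the object counts are arbitrary.

```python
import time

def run_operation(object_count):
    """Placeholder for the operation under test; does proportional busywork."""
    total = 0
    for i in range(object_count * 1000):
        total += i
    return total

def measure_scaling(operation, object_counts):
    """Time the operation at each object count to see how it scales."""
    results = {}
    for count in object_counts:
        start = time.perf_counter()
        operation(count)
        results[count] = time.perf_counter() - start
    return results

# Six data points per run, as in most of the tests described in this report.
timings = measure_scaling(run_operation, [1, 2, 4, 8, 16, 32])
for count, seconds in timings.items():
    print(f"{count:>3} objects: {seconds:.4f} s")
```

Comparing the resulting timing curves for two releases at the same object counts yields the percentage improvements reported in the sections that follow.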

These results are preliminary. Consider the results within the following context:

  • The results are based on one execution of the tests for each release. However, we plan to execute multiple runs to determine a standard deviation.
  • The plan calls for multiple configurations; however, at the time of this initial report, the data obtained span only three configurations. We plan to update the report as more data becomes available.
  • The majority of our tests include results for six data points; however, two tests had five data points. All future runs will have a minimum of six data points.

Test platforms

Test topology

Three consoles are connected to four different databases. Two of the consoles have agents installed.

Test configurations

The OS and database combinations covered by the plan are:

  • Databases: IBM DB2 9.5, Oracle 10.2, SQL Server 2005, and MySQL 5.0
  • Operating systems: Windows Server 2003 (one database combination not in plan) and Linux RHEL5 (one database combination not applicable)

Test hardware

  • User desktop – Two computers used for UI testing
    • Windows Server 2003 Release 2, Service Pack 2
    • 2.4 GHz dual core
    • 1 GB of RAM
  • DB2 9.5 Server
    • 4 core Intel Xeon MP CPU at 3.66 GHz in 64-bit mode
    • 8 GB of RAM
    • 70 GB of disk space across 2 drives in a RAID1 configuration for the operating system
    • 140 GB of disk space across 2 drives in a RAID0 configuration for data
    • Linux: CentOS release 5.5 (Final)
  • SQLServer 2005 Server
    • 4 core Intel Xeon MP CPU at 3.66 GHz in 32-bit mode
    • 3.25 GB of RAM (used)
    • 70 GB of disk space across 2 drives in a RAID1 configuration for the operating system
    • 140 GB of disk space across 2 drives in a RAID0 configuration for data
    • Windows Server 2003 Release 2, Service Pack 2
  • Linux Console
    • Dual core Intel Xeon CPU 3.20GHz
    • 2 GB of RAM
    • 40 GB of disk space for the operating system and data across 1 drive
    • CentOS release 5.4
  • Windows Console
    • Dual core Intel Xeon CPU 3.20GHz
    • 2 GB of RAM
    • 40 GB of disk space for the operating system and data across 1 drive
    • Windows Server 2003 Release 2, Service Pack 2
  • Meta Agent computers – 8 computers that respond like multiple agents
    Common configuration:
    • Dual core Intel Xeon CPU 3.06 GHz
    • 2 GB of RAM
    • CentOS release 5.5
    • 40 GB of disk space across 1 drive

Test results summary

Populate log tests

These tests show how increasing the number of log lines affects the performance of the UI log display.

The results for these tests showed a consistent performance improvement, averaging 25%, at each increase in the number of log lines. The number of log lines varied from 500 to 60,000.

Purge tests

The purge tests show how increasing the number of builds, steps, logs, environment variables, and artifacts affects purge performance.

The biggest improvement in purge performance occurred when increasing the number of builds. The performance increase varied from 100% to 850%, depending on the number of builds, the operating system, and the database.

Connection tests

The connection tests measured the performance of the server connections in cases where all the servers were valid, all the servers were invalid, or 10% of the servers were invalid.

For all the connection tests, the improvement in performance was significant. In the 7.1.1.x release, as the number of servers increased, the connection time grew to hours. In 7.1.2, with the same number of servers, the connection time was reduced to minutes. For example, for 4800 server connections, the time was reduced from 5 hours in 7.1.1.x to fewer than 5 minutes in 7.1.2.
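
The headline connection-test figure works out to roughly a 60-fold speedup. A quick check of the arithmetic, using the 5-hour and 5-minute figures from the example above:

```python
# 4800 server connections: 5 hours in 7.1.1.x vs. under 5 minutes in 7.1.2.
old_seconds = 5 * 60 * 60   # 18,000 s
new_seconds = 5 * 60        # 300 s

speedup = old_seconds / new_seconds
improvement_pct = (old_seconds - new_seconds) / new_seconds * 100

print(f"speedup: {speedup:.0f}x")              # speedup: 60x
print(f"improvement: {improvement_pct:.0f}%")  # improvement: 5900%
```

Because the 7.1.2 time is stated as "fewer than 5 minutes," 60x is a lower bound on the actual speedup for this case.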

Schedule tests

The results for the schedule tests for 7.1.2 were consistent with the expectation of improved performance relative to 7.1.1.x.

Each schedule starts builds two minutes apart and should start 10 builds within 20 minutes. The same tests were executed on 7.1.1.x and 7.1.2. With a smaller number of schedules, 7.1.1.x and 7.1.2 performed comparably. However, as the number of schedules increased, 7.1.2 continued to meet the expected 20-minute completion time, while 7.1.1.x performance decreased. At 600 schedules, 7.1.2 showed a 100% improvement relative to 7.1.1.x.
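
The 20-minute expectation follows directly from the schedule definition. A small sketch, assuming each schedule fires its first build at minute 0 and each subsequent build two minutes later:

```python
BUILD_INTERVAL_MIN = 2    # builds start two minutes apart
BUILDS_PER_SCHEDULE = 10  # each schedule should start 10 builds

# First build at minute 0, so the tenth build starts at minute 18;
# all 10 builds therefore fall inside a 20-minute window.
start_times = [i * BUILD_INTERVAL_MIN for i in range(BUILDS_PER_SCHEDULE)]
window_min = start_times[-1] + BUILD_INTERVAL_MIN

print(start_times)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
print(window_min)   # 20
```

A run passes the expectation when every scheduled build has started within this window; the 7.1.1.x slippage reported above means later builds started after the window closed.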

Build execution tests

These tests measure the performance for build execution while increasing the number of log lines, steps, environment variables, log filters, and command size for individual tests.

The data showed no significant improvements in build execution performance. One configuration showed a 30% improvement with increased log lines. In general, however, the improvement in build execution performance from 7.1.1.x to 7.1.2 was slight, if any.

Full test results

For more information about the tests and the resulting data, see the full test results PDF.

For more information

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
