With great power comes great responsibility. Using a supported IBM Engineering Lifecycle Management (ELM) public API opens up many possibilities to extend its functionality or enable new IBM Engineering integrations. However, excessive API usage can lead to unexpected performance degradation, a diminished user experience, or frequent system outages.
Below are the practices you should follow when using ELM public APIs in custom automation:
Deployment of custom scripts in a production environment should be reviewed and approved by a Change Control Board (CCB). If you do not have a CCB process in place, talk to your server administrators before running automation against the server. This ensures your administrators are prepared and aware of potential performance impacts.
Try to estimate the number of API calls or the number of artifacts fetched over time. If your estimates show that the number of calls or artifact fetches is too high, expect and plan for a negative impact on performance.
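A back-of-the-envelope calculation is often enough for this estimate. The sketch below assumes paged fetching and hypothetical numbers (artifact count, page size, time window); substitute your own figures.

```python
# Back-of-the-envelope estimate of API load. All numbers below are hypothetical.
def estimate_load(artifact_count, page_size, seconds_available):
    """Return (total_requests, required_requests_per_second) for a paged fetch."""
    requests = -(-artifact_count // page_size)  # ceiling division
    return requests, requests / seconds_available

# Example: 200,000 requirements, 100 artifacts per page, a 4-hour night window.
requests, rps = estimate_load(200_000, 100, 4 * 3600)
print(requests, round(rps, 2))  # 2000 requests, ~0.14 requests/second
```

If the required request rate looks high relative to normal interactive load on the server, plan mitigations (smaller scope, off-peak scheduling) before deploying.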
The impact of your API requests depends strongly on the data shape of the requests, their result set, and the destination repository. For example, fetching one attribute on one requirement is very different from fetching dozens of attributes across a project containing millions of requirements. Therefore, before running a script against a production system, run it first on a test system with production-sized data and resources.
If possible, start with a small data set and then extend it. For example, limit your script to only one or a few smaller project areas, or to specific artifact types. If the initial results are positive and you do not see a negative performance impact, extend the scope gradually.
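When your automation uses OSLC query capabilities, you can bound the result set server-side rather than filtering client-side. A minimal sketch using only the Python standard library; the base URL and type URI are placeholders, while `oslc.where`, `oslc.select`, and `oslc.pageSize` are standard OSLC query parameters.

```python
from urllib.parse import urlencode

def scoped_query_url(query_base, artifact_type_uri, page_size=50):
    """Build an OSLC query URL limited to one artifact type with a small page size."""
    params = {
        "oslc.where": f"rdf:type=<{artifact_type_uri}>",  # restrict to one type
        "oslc.select": "dcterms:title",                   # fetch only needed attributes
        "oslc.pageSize": page_size,                       # keep pages small at first
    }
    return f"{query_base}?{urlencode(params)}"

# Hypothetical query base and type URI:
url = scoped_query_url(
    "https://elm.example.com/rm/views",
    "http://example.com/types/Requirement",
)
```

Starting with a narrow `oslc.where` and a modest `oslc.pageSize`, then widening them once you have measured the impact, matches the "small data set first" advice above.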
Massive data modification from scripts can also affect the Data Collection Component (DCC), Lifecycle Query Engine (LQE), and Link Index Provider (LDX) applications.
When using custom scripts extensively, monitor your system and take action when CPU, memory, disk usage, or thread and connection pool usage grows, especially if a diminished user experience is observed. If necessary, stop the script and investigate whether it is the cause of the performance issue. Address the issue before redeploying the script. You can find more information about monitoring in the Deployment wiki.
Follow the guidance in Register Custom Automation As a Resource-intensive Scenario. Enable and display the Expensive Scenario Details MBean in your monitoring dashboard to diagnose whether the script is the cause of, or a contributing factor in, performance degradation. As a best practice, use a common naming pattern for the scenario; for example, include the automation name, version, and other details such as the command the automation executes. This makes it much easier to understand what is going on. Example: use a scenario name like myautomation v2.7 exportdata.
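The referenced article describes REST endpoints for starting and stopping an expensive-scenario instance. The sketch below only builds the request URLs and JSON bodies; sending them with an authenticated session is omitted, and the service path is taken from the Deployment wiki article, so verify it against your ELM version before relying on it.

```python
# Scenario service path as documented in the Deployment wiki (verify for your version).
SCENARIO_SERVICE = ("service/com.ibm.team.repository.service."
                    "serviceability.IScenarioRestService/scenarios")

def start_scenario_request(app_url, scenario_name):
    """URL and JSON body to start an expensive-scenario instance."""
    return f"{app_url}/{SCENARIO_SERVICE}/startscenario", {"scenarioName": scenario_name}

def stop_scenario_request(app_url, start_response_json):
    """URL and body to stop the instance; echo back the JSON the start call returned."""
    return f"{app_url}/{SCENARIO_SERVICE}/stopscenario", start_response_json

# Hypothetical application URL; the scenario name follows the pattern suggested above.
url, body = start_scenario_request("https://elm.example.com/rm",
                                   "myautomation v2.7 exportdata")
```

Bracketing your automation run with these two calls makes the script show up by name in the Expensive Scenario Details MBean, so administrators can attribute load to it directly.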
Schedule automation for off-peak hours. For example, run it at night or during the weekend, when usage is low and there are no maintenance activities.
Every environment is different, so it is impossible to provide specific limits, for example: What is the maximum number of artifacts you can fetch? What is the memory footprint of one artifact? How many calls can you make concurrently? You have to test your specific data on your own system.