<div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%"> ---+!! API based automations <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/uc.png" alt="todo.png" width="50" height="50" align="right"> %DKGRAY% Authors: Main.KrzysztofKazmierczyk, Main.PaulEllis <br> Build basis: None %ENDCOLOR%</div></sticky> <!-- Page contents top of page on right hand side in box --> <sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky> <sticky><div style="margin:15px;"></sticky> With great power comes great responsibility. Using a supported IBM Engineering Lifecycle Management (ELM) public API opens up many possibilities to extend its functionality or allow new IBM Engineering integrations. However, excessive usage of API can lead to unexpected performance degradation, diminished user experience or frequent system outages. There are three basic methods to cope with performance issues: * Rewrite your scripts, * Run them less frequently * Reduce their parallelism. Below is the list of practices you should follow when using ELM public APIs in custom automation: ---++ Put governance processes in place for custom script deployment Deployment of custom scripts in a production deployment should be reviewed and approved by a Change Control Board (CCB). If you do not have CCB process in place, talk to your server administrators before running automation against the server This allows your administrators to be prepared and aware of potential performance impacts. ---++ Make your expectations reliable Try to estimate the number of API calls or number of artifacts fetched over time. If your estimates shows that the number of calls or artifact fetches are too high, you should expect and plan for negative impact on performance. ---++ Evaluate the functionality and performance impact on production equivalent test system first The approach of your API requests depend strongly on result set and the data shape of the requests and destination repository. For example, the approach to fetch 1 attribute on 1 requirement, is very different to fetching dozens of attributes across a project containing millions of requirements. Therefore, before running on a production system, run it first on a test system with production-sized data and resources. ---++ Start with small portion of data If possible, try with small data set, and then extend it. For example, you can limit your script only to one or few smaller project areas or only for specific artifact types. If initial results are positive, and you do not see a performance impact, extend the scope gradually. ---++ Consider increased load on other applications Massive data modification from the scripts can also affect the Data Collection Component (DCC), Lifecycle Query Engine (LQE) and Link Index Provider (LDX) applications. * DCC component is used for gathering data for data warehouse reports. Your script can cause increased execution times of Data Collection jobs and storage used by your Data Warehouse database. * LQE and LDX applications store the data in their internal Jena index. Large updates of the applications increase the time needed for updating data sources related to an application. Reading the data (e.g. 
Below is the list of practices you should follow when using ELM public APIs in custom automation:

---++ Put governance processes in place for custom script deployment

Deployment of custom scripts to a production environment should be reviewed and approved by a Change Control Board (CCB). If you do not have a CCB process in place, talk to your server administrators before running automation against the server. This allows your administrators to be prepared for, and aware of, potential performance impacts.

---++ Set realistic expectations

Try to estimate the number of API calls, or the number of artifacts fetched, over time. If your estimates show that the number of calls or artifact fetches is too high, expect and plan for a negative impact on performance. A back-of-the-envelope estimate such as the one sketched below is usually enough.
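As an example of such an estimate, the figures below are invented for illustration; substitute the sizes of your own repository, the attributes your script actually reads, and the request rate you agreed with your administrators.

<verbatim>
# Hypothetical sizing figures -- replace them with values from your own repository.
requirements = 50_000           # artifacts in scope
page_size = 100                 # artifacts returned per query page

# If the script pages through a query and then issues one GET per artifact:
query_pages = -(-requirements // page_size)      # ceiling division -> 500 pages
per_artifact_gets = requirements                 # 50,000 individual requests
total_calls = query_pages + per_artifact_gets    # about 50,500 HTTP calls

# At an assumed sustained rate of 2 calls per second:
hours = total_calls / 2 / 3600                   # roughly 7 hours of continuous load
print(f"{total_calls} calls, about {hours:.1f} hours at 2 calls/second")
</verbatim>

If the result looks like many hours of continuous load, that is a strong signal to narrow the scope, cache results, or rework the script before it reaches production.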
---++ Evaluate the functionality and performance impact on a production-equivalent test system first

The impact of your API requests depends strongly on the size of the result set and on the data shape of the requests and the destination repository. For example, fetching one attribute of one requirement is very different from fetching dozens of attributes across a project containing millions of requirements. Therefore, before running your automation on a production system, run it first on a test system with production-sized data and resources.

---++ Start with a small portion of data

If possible, start with a small data set and then extend it. For example, limit your script to one or a few smaller project areas, or to specific artifact types, as in the query sketch below. If the initial results are positive and you do not see a performance impact, extend the scope gradually.
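As a minimal sketch of scoping a read to one project area and a few attributes, the snippet below assumes an OSLC query capability URL for a single DOORS Next project area. The URL, the selected properties, and the authentication are placeholders; discover the real query capability URL from the project area's service provider document.

<verbatim>
import requests

# Placeholder: the OSLC query capability URL of ONE project area, discovered from
# the service provider catalog of your own server (not a real URL).
QUERY_CAPABILITY = "https://elm.example.com/rm/query"

session = requests.Session()          # authentication setup omitted

params = {
    "oslc.prefix": "dcterms=<http://purl.org/dc/terms/>",
    "oslc.select": "dcterms:identifier,dcterms:title",   # only the attributes you need
    "oslc.pageSize": "100",                              # small pages keep responses cheap
    # An oslc.where clause can narrow the scope further, for example to one artifact
    # type; see the OSLC Query specification for the exact syntax.
}
headers = {
    "Accept": "application/rdf+xml",
    "OSLC-Core-Version": "2.0",
}

response = session.get(QUERY_CAPABILITY, params=params, headers=headers, timeout=60)
response.raise_for_status()
# The body is a single page of results; follow the oslc:nextPage link in the
# ResponseInfo resource only once the small scope has proven harmless.
</verbatim>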
---++ Consider increased load on other applications

Massive data modification from scripts can also affect the Data Collection Component (DCC), Lifecycle Query Engine (LQE), and Link Index Provider (LDX) applications.
   * The DCC component gathers data for data warehouse reports. Your script can increase the execution time of the data collection jobs and the storage used by your data warehouse database.
   * The LQE and LDX applications store data in their internal Jena index. Large updates increase the time needed to update the data sources related to an application. Reading the data (for example, querying LQE for reports, or fetching backlinks from LDX in the Engineering Workflow Management or DOORS Next applications) is much slower in an LQE/LDX Jena store while the index is being updated at the same time. See more recommendations on the [[https://www.ibm.com/docs/en/engineering-lifecycle-management-suite/lifecycle-management/7.0.3?topic=builder-best-practices-creating-reports-report#c_best_practices_for_creating_reports_with_RB__best_prac3__title__1][Tips for administrators]] page.
   * Since version 7.0.3 there is an option to [[https://www.ibm.com/docs/en/engineering-lifecycle-management-suite/lifecycle-management/7.0.3?topic=overview-lqe-relational-store][store LQE data in a relational database]]. Your script can impact the performance of the database where the data is stored. More information is available on the [[https://jazz.net/blog/index.php/2023/12/06/whats-new-for-ibm-engineering-7-0-3-administrators/][What's new for IBM Engineering 7.0.3 administrators]] page.

---++ Use monitoring

When using custom scripts extensively, monitor your system and take action when CPU, memory, disk usage, or thread and connection pool usage grows, especially if a diminished user experience is observed. If necessary, stop the script and investigate whether it is the cause of the performance issue. Address the issue before redeploying the script. You can find more information about monitoring in the [[DeploymentMonitoring][Deployment Wiki article]].

---++ Register your custom automation as a Resource-intensive Scenario

Link to the article: [[CreateCustomScenarios][Register Custom Automation As a Resource-intensive Scenario]]. Display the scenario in your monitoring dashboard to diagnose whether the script is the cause of, or a contributing factor in, the performance degradation. As a best practice, use a common naming pattern for the scenario: for example, include the automation name, version, and other details such as the command that the automation executes. This makes it much easier to understand what is going on. For example, use a scenario name like =myautomation v2.7 exportdata=. A start/stop sketch follows below.
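As a minimal sketch, assuming the start/stop scenario REST endpoints described in the linked CreateCustomScenarios article: the server URL, the scenario name, and the authentication are placeholders, and the exact service URIs and payload format should be taken from that article rather than from this snippet.

<verbatim>
import requests

# Placeholders -- take the exact service URIs and payload format from the
# CreateCustomScenarios article; the paths below are assumptions.
APP_BASE = "https://elm.example.com/ccm"
SCENARIO_SERVICE = (APP_BASE + "/service/"
                    "com.ibm.team.repository.service.serviceability.IScenarioRestService"
                    "/scenarios")
SCENARIO_NAME = "myautomation v2.7 exportdata"   # follow your own naming pattern

session = requests.Session()                     # authentication setup omitted

def run_export():
    """Hypothetical stand-in for the real work that the automation performs."""
    pass

# Start the resource-intensive scenario before the automation does its work.
start = session.post(f"{SCENARIO_SERVICE}/startscenario",
                     json={"scenarioName": SCENARIO_NAME}, timeout=60)
start.raise_for_status()
scenario_instance = start.json()    # keep the returned instance so it can be stopped

try:
    run_export()
finally:
    # Always stop the scenario, even if the automation fails part-way through.
    session.post(f"{SCENARIO_SERVICE}/stopscenario",
                 json=scenario_instance, timeout=60).raise_for_status()
</verbatim>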
---++ Consider running your scripts when the system has low usage

For example, run them at night or during the weekend, when usage is low and there are no maintenance activities.

---++ Note that IBM cannot provide specific limits for your system

Every environment is different, so it is impossible to provide specific limits, for example: What is the maximum number of artifacts you can fetch? What is the memory footprint of one artifact? How many calls can you make concurrently? You have to test your specific data on your own system.

---+++++!! Related topics: [[CLMProductAPILanding][API Landing page]]

---+++++!! External links:

---+++++!! Additional contributors: Main.TimFeeney, Main.DanielMoul, Main.FarizSaracevic, Main.RalphSchoon