The Link Index Provider (LDX) is a component of the IBM Engineering Lifecycle Management (ELM) solution that builds and maintains an index of links between artifacts. ELM applications query the LDX for information about links. In the ELM 7.1 release, the LDX was rearchitected to use a relational database for storing link information. This article discusses the performance of the new LDX architecture.
The new LDX architecture, which IBM calls LDX rs, is based on the same architecture that was adopted for the Lifecycle Query Engine (LQE) in the 7.0.3 release.
Here's what you can expect from LDX in 7.1:
The following topics are covered in this article:
The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.
Up until the 7.0.2 release, the Lifecycle Query Engine (LQE) stored information using a triple store database provided by Apache Jena (Jena TDB). This was also true for the Link Index Provider (LDX), since LDX is just a specialized instance of LQE. This is shown in the architectural diagram below.
Abbreviations:
Both LQE and LDX used Tracked Resource Set (TRS) feeds provided by applications to index data and store it in the Jena triple store. LDX indexed a subset of the TRS data, focusing just on links.
Reporting against LQE data involved an instance of the Report Builder application. Reports were defined in Report Builder and then executed by the LQE server. The LQE server would read data from the Jena triple store and return it to Report Builder. For link-related queries, applications made requests to the LDX asking for links. The LDX would read data from the Jena triple store and return it to the applications. Note that the LDX was often hosted in the same Java Virtual Machine (JVM) as the Jazz Team Server (JTS), although larger customers were encouraged to move the LDX onto its own dedicated server.
This architecture had several disadvantages for large indexes or high concurrent load:
An additional limitation of the Jena architecture is how it deals with load. A single long-running query can severely impact the performance of the entire LQE or LDX server. There are two reasons this happens.
Jena handles updates by writing them to a journal, which can be merged into the main database only when the server is idle (no queries running). There are some problems with this approach. First, the journal will continue to grow as long as there are queries running. And queries will be running if the reports are slow, or if the report workload is high (i.e., many users are running reports frequently). A busy LQE server may not be idle very often, which means that the journal keeps growing. LQE tries to limit the impact by suspending indexing when the number of pending journal updates reaches a threshold, but that means that reports will include stale data until indexing resumes.
Additionally, queries get slower as the journal grows, since Jena has to do more work to identify the correct dataset for a given query. This creates a feedback loop: slow queries cause the journal to grow, a larger journal makes queries slower still, and the slower queries in turn prevent the journal from being merged into the main database. This feedback loop can be triggered by even a single slow query.
Because of these disadvantages, LQE was rearchitected in 7.0.3 to replace Jena with a relational database. LDX in 7.0.3 stayed with the Jena architecture.
In the 7.1 release, LDX has adopted the relational store. The recommended deployment topology is shown below.
Abbreviation:
In 7.1, you can deploy a single LQE rs instance that will support both reporting and link resolution. This LQE rs instance will still read the TRS feeds provided by the applications, but the indexed data is stored in a relational database instead of in the Jena triple store. Requests to generate reports flow from Report Builder to the relational database, bypassing the LQE server. Requests for link information flow from the applications to LQE rs (in its role as LDX), but the link data is retrieved from the relational database instead of from the Jena triple store.
This improves the scalability of the LQE solution in several ways:
You have the option to deploy LDX rs as an application separate from LQE rs, but this requires that you set up a separate data warehouse for LDX rs data. An instance of LDX cannot share the index data created by LQE rs. IBM recommends that you start with a single LQE rs server and only deploy a separate LDX server if the LQE rs server cannot handle your workload. IBM strongly recommends using monitoring tools to watch for signs that a single LQE rs is not enough.
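To make the link resolution flow described above concrete, here is a minimal Java/JDBC sketch of the kind of relational lookup involved. The table names come from the row-count listing in the appendix of this article, but the column names, join condition, and parameters are illustrative assumptions, not the actual LQE rs schema or query.

import java.sql.*;
import java.util.*;

// Illustrative sketch only: the table names appear in this article's appendix,
// but every column name and the join below are assumptions, not the real schema.
public class LinkLookupSketch {
    public static List<String> linksFor(Connection con, String artifactUrl,
                                        long selectionId) throws SQLException {
        String sql =
            "SELECT rel.TARGET_URL "                        // hypothetical column
          + "FROM REQUIREMENT_RELATIONSHIP rel "
          + "JOIN SELECTIONS_SELECTS sel "                  // scope to the GC selection
          + "  ON sel.VERSION_ID = rel.SOURCE_VERSION_ID "  // hypothetical columns
          + "WHERE rel.SOURCE_URL = ? AND sel.SELECTION_ID = ?";
        List<String> links = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, artifactUrl);
            ps.setLong(2, selectionId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    links.add(rs.getString(1));
                }
            }
        }
        return links;
    }
}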
This section discusses how the LDX rs performance tests were conducted. It covers:
The test repository was based on the repository used during LQE rs testing in 7.0.3. That repository was extended with 6 additional ELM applications, and links were created between the artifacts.
Application | Artifact count | Database size |
---|---|---|
rm1 | 10 million | 800G |
rm2 | 20,000 | 27G |
rm3 | 20,000 | 27G |
etm1 | 5 million | 832G |
etm2 | 700,000 | 15G |
etm3 | 700,000 | 12G |
ewm1 | 2 million | 152G |
ewm2 | 60,000 | 65G |
ewm3 | 60,000 | 3.5G |
gc | 1000 components | |
Links were established between the requirements in the rm2 and rm3 servers and artifacts in ETM, EWM, and RMM:
Abbreviations:
There are a total of 10,000 cross-application links. There are an additional 50 million links between ERM artifacts.
Global configurations
Since cross-application linking always happens in the context of a global configuration (GC), GCs were created with selection sizes ranging from 700,000 to 50 million. Selection size refers to the number of versions that are part of the stream or baseline that is referenced by the GC component.
Here is an example of one of the test GCs. This particular GC contains 40 million versions.
In normal operation, users interact with applications, and those applications decide whether to send requests for link information to LDX. The load tests simulate user activity by sending requests directly to the LDX server, bypassing the applications. Direct interaction with the LDX during load testing offers several advantages:
The workload applies the maximum level of stress possible.
The performance tests characterize workload in terms of threads. The workload is generated by sending requests directly to the LDX server from a test program, and the test program simulates multi-user load by running multiple threads. Each thread can be thought of as simulating a very active user that does nothing but make requests for links. There is no pausing, so each thread makes requests as fast as it can. This is an extremely intense workload that is designed to drive the LDX rs system to its limits.
Real users are much less active, and they interact with LDX rs indirectly through the applications. For example, when a user opens a view that has link columns in ERM, the ERM server sends a request to LDX rs on the user's behalf. You can consider each thread (or simulated user) to be the equivalent of 20 real users, so a 100-thread test approximates a workload of 2,000 real users. A sketch of this thread-based load model follows.
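For illustration, here is a minimal Java sketch of the kind of test driver described above. The endpoint URL, request payload, thread count, and duration are hypothetical placeholders; the actual test program and LDX rs request format are not shown in this article.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the thread-based load model: N threads, no think time, each
// repeatedly asking LDX rs for the links of a batch of artifact URLs.
// The endpoint and request body are hypothetical placeholders.
public class LdxLoadSketch {
    static final String LDX_URL = "https://ldx.example.com/ldx/query"; // placeholder
    static final int THREADS = 60;           // simulated users
    static final int DURATION_SECONDS = 300;

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .build();
        String body = "{ \"gc\": \"...\", \"artifactUrls\": [ ] }"; // 1000 URLs in the real tests
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        long end = System.nanoTime() + DURATION_SECONDS * 1_000_000_000L;
        for (int i = 0; i < THREADS; i++) {
            pool.submit(() -> {
                while (System.nanoTime() < end) {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(LDX_URL))
                            .header("Content-Type", "application/json")
                            .POST(HttpRequest.BodyPublishers.ofString(body))
                            .build();
                    long start = System.nanoTime();
                    try {
                        client.send(req, HttpResponse.BodyHandlers.ofString());
                        System.out.println("ms: " + (System.nanoTime() - start) / 1_000_000);
                    } catch (Exception e) {
                        System.err.println("request failed: " + e);
                    }
                    // no pause: each thread issues the next request immediately
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(DURATION_SECONDS + 60L, TimeUnit.SECONDS);
    }
}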
To evaluate the performance of the LDX rs under various conditions, we employed a combination of custom-developed automation tools and industry-standard monitoring solutions. This approach allowed us to generate load, measure response times, and monitor throughput and CPU usage.
Load Generation and Response Time Measurement
Throughput and CPU Usage Monitoring
Performance tests were conducted against 4 different test environments, in order to examine LDX performance on both Oracle and Db2, as well as to collect baseline data from Jena. The 4 test systems:
# | Description | Specs | Notes |
---|---|---|---|
1 | LQE Jena | 40 vCPU, 768G RAM | |
2 | LQE rs/Oracle | 40 vCPU, 768G RAM | Oracle co-located with LQE rs |
3 | LQE rs/Db2 | 40 vCPU, 768G RAM | Db2 co-located with LQE rs |
4 | LDX rs (Oracle) | LDX rs: 24 vCPU, 64G RAM; Oracle: 80 vCPU, 768G RAM | LDX rs and Oracle on separate servers |
Topologies 1-3 are the same as those used in the LQE rs performance testing in 7.0.3.
Topology 4 sets up an LDX rs server with its own dedicated database server. LDX rs and Oracle are each deployed onto separate servers, and the Oracle instance uses high-speed storage (a RAID 10 array of 24 NVMe drives).
The servers in the test environment are all physical servers. The hardware specifications are listed below.
Role | Server | Machine Type | Processor | Total Processors | Memory | Storage | OS Version |
---|---|---|---|---|---|---|---|
LQE rs/Oracle | Oracle 19c | IBM System SR650 | 2 x Xeon Silver 4114 10C 2.2GHz (ten-core) | 40 | 768 GB | RAID 10 – 900GB SAS Disk x 16 | RHEL 7 |
LQE/Jena | Oracle 19c | IBM System SR650 | 2 x Xeon Silver 4114 10C 2.2GHz (ten-core) | 40 | 768 GB | RAID 10 – 900GB SAS Disk x 16 | RHEL 7 |
LQE rs/Db2 | Db2 11.5 | IBM System SR650 | 2 x Xeon Silver 4114 10C 2.2GHz (ten-core) | 40 | 768 GB | RAID 10 – 900GB SAS Disk x 16 | RHEL 7 |
LDX rs | WebSphere Liberty | IBM System x3550 M4 | 2 x Intel Xeon E5-2640 2.5GHz (six-core) | 24 | 64 GB | RAID 1 – 300GB SAS Disk x 4 – 128K Strip Size | RHEL 7 |
Oracle for LDX rs | Oracle 19c | IBM System SR650 V2 | 2 x Xeon Silver 4114 10C 2.2GHz (ten-core) | 80 | 768 GB | RAID 10 – 800GB NVMe Kioxia Mainstream Drive x 24 – 7.3 TB on /mnt/raid10 | RHEL 8 |
This section presents the results of the performance tests against the 4 topologies. In summary:
This section looks at the response times for LDX rs queries for the 4 different test topologies, using one simulated user. You can consider these to be "best case" numbers since they reflect performance when the systems are not under heavy load. Each request asks for the links associated with 1000 different artifacts. The tests were repeated with global configurations of various sizes (ranging from 700,000 versions to 50 million versions).
Test Parameters:
Parameter | Value |
---|---|
Threads | Single thread |
Target URLs | 1000 |
GC Selection Size | 0.7M to 50M |
Number of Link types | 7 |
The chart below summarizes the response times for a query returning links for 1000 URLs, executed against all 4 topologies. The relational store topologies outperform Jena at every level of scale, and are not sensitive to the version counts. LQE/Jena degrades as the version counts increase.
This section presents the results of the load testing against topology 2 (LQE rs/Oracle).
Test Parameters:
Parameter | Value |
---|---|
Test environment | LQE rs/Oracle (topology 2) |
Threads | 10 to 100 |
Target URLs | 1000 |
GC Selection Size | 40M |
Number of Link types | 7 |
Summary:
The detailed results for response time, CPU usage, and throughput are shown below. Looking at the three charts together, the throughput levels out at 60 threads. The total CPU usage on the server at that point is 70%. Response times increase as the load increases, but remain within reasonable levels.
Disk utilization on the LQE rs/Oracle server is low (15% at 100 threads). The storage subsystem is not a bottleneck.
LQE/Jena reaches a maximum throughput of 3 transactions per second at only 10 threads.
Response times for LQE/Jena are much higher than for LQE rs/Oracle, and the Jena response times degrade quickly as load increases.
This section presents the results of the load testing against LQE rs/Db2 (topology 3).
Test Parameters:
Parameter | Value |
---|---|
Threads | 10 to 100 |
Target URLs | 1000 |
GC Selection Size | 0.7M to 50M |
Number of Link types | 7 |
Environment | Topology 3 |
Summary of results:
Throughput
The throughput decreases as the GC selection size increases. The chart below shows the impact of selection size on throughput for a constant workload generated by 20 threads. At smaller selection sizes, where response times are faster, throughput reaches approximately 120 requests per second. As the selection size grows, however, throughput drops to as low as 25 requests per second (for 50M versions).
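Because the load model has no think time (each thread issues its next request immediately), these throughput figures imply average response times of roughly N/X, where N is the thread count and X is the throughput: 20 threads at 120 requests per second corresponds to about 0.17 seconds per request, while 20 threads at 25 requests per second corresponds to about 0.8 seconds per request. These are derived estimates, not separately measured values.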
The table below compares the throughput of LQE rs/Db2 to LQE/Jena, for a large GC containing 40 million versions. LQE rs/Db2 outperforms LQE/Jena significantly, even with this large GC.
CPU Usage
The chart below shows CPU usage for LQE rs/Db2 and Jena, as load increases. For LQE rs/Db2, the server has maxed out the CPU at 60 threads. In this topology, the LQE rs server and the Db2 server are co-located; separating the LQE rs application onto its own server would improve performance.
LQE/Jena becomes non-responsive past 50 threads.
Average Response Time
The table below shows how response times vary as load increases, for both LQE rs/Db2 and LQE/Jena, for a large GC selection size of 40 million versions. LQE rs/Db2 outperforms Jena.
Looking at the response times for LQE rs/Db2 using a smaller GC (10 million versions), the response times increase slightly as the load increases. For this run, the CPU on the test system maxes out around 60 threads.
Disk Utilization on DB Server
Disk utilization on the DB server shows a gradual increase as load increases but remains low. The storage subsystem is not a bottleneck.
This section analyzes the performance of the LDX rs system in a topology where the LDX application and the database are hosted on separate servers. Oracle is deployed as the relational store on the dedicated database server.
Test Parameters:
Parameter | Value |
---|---|
Threads | Up to 500 |
Target URLs | 1000 |
GC Selection Size | 40M |
Number of Link types | 7 |
Topology | Topology 4 |
Topology 4 involves a standalone LDX rs and a separate high-capacity database server:
Summary of results:
The limiting factor is the number of processors available on the LDX rs server.
Throughput
The expectation is that throughput should increase steadily as the number of threads increases (more threads means more work gets done), until a bottleneck forms and the system reaches its maximum throughput. In this test, throughput increases steadily up to 300 threads, with a smaller additional gain between 300 and 500 threads. There is no additional throughput gain beyond 500 threads.
The CPU usage on the LDX rs server maxes out between 300 and 500 threads, so CPU is the bottleneck. The expectation is that adding more processors to the LDX rs server would allow for higher throughput.
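As a rough sanity check (illustrative numbers, not measurements from this test): in a closed workload with no think time, Little's law gives throughput X ≈ N / R, where N is the number of threads and R is the average response time. For example, 300 threads each seeing an average response time of 0.8 seconds could sustain at most 300 / 0.8 = 375 requests per second. Once the CPU saturates, adding threads increases R rather than X, which is why the throughput curve flattens.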
CPU Usage
At 500 threads, the LDX rs server’s CPU usage nears 95%, while the database server still has spare CPU capacity. This indicates that the LDX rs server, rather than the database, is the bottleneck. As the load increases beyond 500 threads, the LDX rs server becomes overloaded and response times degrade, while the database server’s CPU usage remains relatively constant, since the bottleneck is in the LDX rs server.
Average Response Time
The response time analysis is presented in two charts. The first chart illustrates the system’s response time up to its effective limit of 500 threads. In this range, the response time remains consistently under 1 second.
The second chart shows the system’s behavior under extreme load, ranging from 500 to 2000 threads. In this scenario, the response time increases as the thread count increases. Despite the increasing response times, the system continues to process requests, unlike in the case of Jena, which stops responding after 50 threads. This indicates that while the system experiences performance degradation under high load, it still maintains functionality, processing requests without failing.
Disk Utilization
Disk utilization on the database server increases as the thread count rises. The disk I/O stabilizes at around 30% after the load reaches 1000 threads. This indicates that the disk is not a bottleneck, as it has sufficient capacity to handle the I/O demands even under higher loads.
To ensure optimal performance and stability of LDX rs under high-load conditions, several tuning configurations need to be adjusted for large workloads, such as scenarios involving 1000, 2000, or 5000 concurrent threads. These adjustments aim to enhance connection handling capabilities. The sections below provide recommendations for tuning critical parameters on the LDX rs server, the JTS server, IBM HTTP Server (IHS), and the Oracle and Db2 databases to handle large workloads effectively.
LDX rs tuning
To enhance the ability of LDX rs to manage a high number of concurrent connections, and to reduce potential bottlenecks caused by connection constraints, apply the following tuning settings (an illustrative sketch of the connection pool limits follows the list).
Increase Maximum Total Connections and Maximum Connections Per Route:
Disable Process Query Metrics:
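The exact property names and values for the connection limits are configured through the LDX rs advanced properties. As a generic illustration of what maximum total connections and maximum connections per route control, here is how the same two limits look in Apache HttpClient, a common Java implementation of this pooling model; the values shown are examples, not recommendations, and this is not LDX rs source code.

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

// Generic illustration of the two pool limits named above, using Apache
// HttpClient. In LDX rs the limits are set through the advanced properties;
// the values here are examples only.
public class ConnectionPoolSketch {
    public static CloseableHttpClient buildClient() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(500);           // maximum total connections across all routes
        cm.setDefaultMaxPerRoute(200); // maximum connections per route (per target host)
        return HttpClients.custom().setConnectionManager(cm).build();
    }
}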
JTS tuning
To manage the number of outgoing HTTP connections effectively and avoid resource contention, apply the following tuning settings.
Increase Outgoing HTTP Connection Properties:
IHS tuning
To allow the IHS server to handle high loads, adjust the maximum number of clients (concurrent HTTP requests on the IHS server) it will accept.
Increase MaxClients Value in httpd.conf:
Oracle tuning
For optimal performance on Oracle, it is recommended to tune the specified parameters while keeping filesystemio_options and parallel_degree_policy at their default values to maintain system stability and avoid potential overhead.
# restart (bounce) the database after setting these spfile parameters; note that in the test we also set the sizes of sga_max_size, db_cache_size and session_cached_cursors
alter system set sga_max_size=123G scope=spfile;
alter system set db_cache_size=32G scope=spfile;
alter system set session_cached_cursors=1000 scope=spfile;
# restart (bounce) the database
# disable adaptive plans
alter system set optimizer_adaptive_plans=false;
Db2 tuning
This section addresses Db2 configuration parameters used during the performance testing.
We used the following settings for the dwDB database:
Log file size (4KB) | (LOGFILSIZ) = 393216 |
Number of primary log files | (LOGPRIMARY) = 60 |
Number of secondary log files | (LOGSECOND) = 112 |
Size of database shared memory (4KB) | (DATABASE_MEMORY) = 64000000 (4KB pages, ≈256 GB) |
Buffer pool BP_32K size | 3950000 (32KB pages, ≈126 GB) |
Restart the Db2 instance for these settings to take effect.
The data shape for DOORS NEXT is made up of standard-sized components, each built from standard-sized modules. The number of artifacts in each standard component is summarized below.
We have 3 different standard module sizes:
We use these standard module sizes to create 3 different standard components.
Artifact type | Small component | Medium component | Large component |
---|---|---|---|
Number of large modules | 0 | 0 | 1 |
Number of medium modules | 0 | 3 | 0 |
Number of small modules | 20 | 0 | 0 |
Total module artifacts | 4200 | 4500 | 10000 |
Non-module artifacts | 100 | 100 | 100 |
We created 1 small project with 3 components: 1 small, 1 medium, and 1 large. The artifact counts for this standard small project are summarized below.
Artifact type | Count |
---|---|
Large components | 1 |
Medium components | 1 |
Small components | 1 |
Total components | 3 |
Module artifacts | 18,700 |
Non-module artifacts | 300 |
Total artifacts | 19,000 |
Large modules (10,000 artifacts) | 1 |
Medium modules (1,500 artifacts) | 3 |
Small modules (200 artifacts) | 20 |
In total, there are 19,000 artifacts in one DOORS NEXT small project.
Artifact Type | Count |
---|---|
test plans | 12 |
test cases | 10,200 |
test scripts | 11,875 |
test suites | 780 |
test case execution records | 50,756 |
test suite execution records | 720 |
test case results | 122,400 |
test suite results | 3,600 |
test execution schedules | 2,848 |
test phases and test environments | 840 |
build definitions and build records | 244 |
There are a total of 30,187 work items in the EWM project area.
Work Item Type | Count |
---|---|
Defect | 4,537 |
Task | 25,609 |
Story | 38 |
Epic | 3 |
Total | 30,187 |
TABLE NAME | ROW COUNT |
---|---|
SELECTIONS_SELECTS | 411549948 |
URL | 135587348 |
WORK_ITEM_LINK_PROP | 18371610 |
RDF_STATEMENT | 4846212 |
RDF_STATEMENT_RDF_TYPE | 4846212 |
VERSION_RESOURCE | 4251587 |
VERSION_RESOURCE_RDF_TYPE | 4251587 |
TEST_SCRIPT_STEP_RESULT_LINK_PROP | 3110140 |
TEST_RESULT_LINK_PROP | 2956644 |
WORK_ITEM_SUBSCRIBERS | 2731486 |
WORK_ITEM_SCHEDULE_RDF_TYPE | 1954679 |
WORK_ITEM | 1954679 |
WORK_ITEM_SCHEDULE | 1954679 |
WORK_ITEM_RDF_TYPE | 1954679 |
TEST_SCRIPT_STEP | 1567500 |
TEST_SCRIPT_STEP_RDF_TYPE | 1567500 |
TEST_EXECUTION_RECORD_LINK_PROP | 1091366 |
TEST_SCRIPT_STEP_RESULT_RDF_TYPE | 969595 |
TEST_SCRIPT_STEP_RESULT | 969595 |
REQUIREMENT_RELATIONSHIP | 851067 |
REQUIREMENT_LINK_PROP | 851066 |
ITEM | 832101 |
RDF_STATEMENT_LINK_PROP | 645587 |
ARCHITECTURE_ELEMENT_RDF_TYPE | 522294 |
TEST_CASE_REQUIREMENT_RELATIONSHIP | 458191 |
TEST_RICH_TEXT_SECTION | 428836 |
TEST_RICH_TEXT_SECTION_RDF_TYPE | 428836 |
ITEM_LINK_PROP | 427743 |
TEST_SCRIPT | 391875 |
TEST_SCRIPT_RDF_TYPE | 391875 |
TEST_CASE | 390207 |
TEST_CASE_RDF_TYPE | 390207 |
TEST_EXECUTION_SUITE_ELEMENT_LINK_PROP | 296265 |
ARCHITECTURE_ELEMENT | 261147 |
TEST_RESULT_RDF_TYPE | 242398 |
TEST_RESULT | 242398 |
WORK_ITEM_REQUIREMENT_RELATIONSHIP | 201702 |
REQUIREMENT_RDF_TYPE | 198532 |
TEST_SUITE_RESULT_LINK_PROP | 188803 |
TEST_CASE_LINK_PROP | 143323 |
TEST_SCRIPT_STEP_LINK_PROP | 120000 |
REQUIREMENT | 99266 |
TEST_EXECUTION_RECORD | 99134 |
TEST_EXECUTION_RECORD_RDF_TYPE | 99134 |
TEST_EXECUTION_SUITE_ELEMENT_RDF_TYPE | 98755 |
TEST_EXECUTION_SUITE_ELEMENT | 98755 |
TEST_SCRIPT_LINK_PROP | 80000 |
WORK_ITEM_TEST_CASE_RELATIONSHIP | 79422 |
TEST_APPROVAL_DESCRIPTOR_APPROVAL | 59870 |
TEST_APPROVAL_RDF_TYPE | 59870 |
TEST_APPROVAL | 59870 |
TEST_SUITE_ELEMENT_LINK_PROP | 59400 |
RESOURCE_PROPERTY_LINK_PROP | 52116 |
CONFIGURATION_LINK_PROP | 49922 |
CONFIGURATION_RDF_TYPE | 41101 |
SELECTIONS_RDF_TYPE | 28123 |
SELECTIONS_LINK_PROP | 25602 |
TEST_SUITE_LINK_PROP | 25600 |
ARCHITECTURE_RELATIONSHIP | 25348 |
ARCHITECTURE_RELATIONSHIP_LINK_PROP | 25348 |
TEST_APPROVAL_DESCRIPTOR_RDF_TYPE | 20040 |
TEST_APPROVAL_DESCRIPTOR | 20040 |
TEST_QUALITY_APPROVAL_APPROVAL_DESCRIPTOR | 20040 |
TEST_PLAN_LINK_PROP | 19924 |
TEST_SUITE_ELEMENT | 19800 |
TEST_SUITE_ELEMENT_RDF_TYPE | 19800 |
CONFIGURATION_NAME | 19342 |
CONFIGURATION | 19321 |
CONFIGURATION_ACCEPTED_BY | 19321 |
CONFIGURATION_SELECTIONS | 15313 |
SELECTIONS | 15313 |
RESOURCE_PROPERTY_RDF_TYPE | 15225 |
RESOURCE_PROPERTY | 15225 |
ITEM_STRING_PROP | 11885 |
CONFIGURATION_CONTRIBUTION | 11885 |
TEST_QUALITY_APPROVAL | 10020 |
TEST_QUALITY_APPROVAL_RDF_TYPE | 10020 |
RDF_PROPERTY_RDF_TYPE | 9463 |
RDF_PROPERTY | 9413 |
CONFIGURATION_PREVIOUS_BASELINE | 8952 |
CONFIGURATION_BASELINE_OF_STREAM | 8952 |
TEST_SUITE_EXECUTION_RECORD_LINK_PROP | 8123 |
RESOURCE_SHAPE_PROPERTY | 7495 |
CACHED_CONFIG_ALL_CONFIGS | 6757 |
TEST_SUITE_RESULT | 6600 |
TEST_SUITE_RESULT_RDF_TYPE | 6600 |
CONFIGURATION_ACCEPTS | 4920 |
CACHED_CONFIG_ALL_SELECTIONS | 4279 |
TEST_PLAN_RDF_TYPE | 4123 |
TEST_PLAN | 4123 |
COMPONENT_RDF_TYPE | 3936 |
COMPONENT | 3936 |
COMPONENT_LINK_PROP | 3925 |
ITEM_RDF_TYPE | 3779 |
RDF_PROPERTY_LINK_PROP | 3467 |
CONFIGURATION_BOOLEAN_PROP | 2503 |
RESOURCE_SHAPE_LINK_PROP | 2263 |
CONFIGURATION_COMMITTER | 1561 |
RDF_CLASS_LINK_PROP | 1522 |
TEST_SUITE_RDF_TYPE | 1380 |
TEST_SUITE | 1380 |
TEST_SUITE_EXECUTION_RECORD | 1320 |
TEST_SUITE_EXECUTION_RECORD_RDF_TYPE | 1320 |
CONFIGURATION_DESCRIPTION | 1290 |
RDF_CLASS_RDF_TYPE | 1218 |
RDF_CLASS | 1154 |
TEST_ENVIRONMENT_RDF_TYPE | 1040 |
TEST_ENVIRONMENT | 1040 |
TEST_ENVIRONMENT_LINK_PROP | 800 |
WORK_ITEM_PLAN_LINK_PROP | 566 |
TEST_PLATFORM_COVERAGE_LINK_PROP | 400 |
RESOURCE_SHAPE_RDF_TYPE | 360 |
RESOURCE_SHAPE | 360 |
TEST_PHASE | 330 |
TEST_PHASE_RDF_TYPE | 330 |
TEST_BUILD_RECORD | 320 |
TEST_BUILD_RECORD_RDF_TYPE | 320 |
TEST_PHASE_LINK_PROP | 292 |
TEST_CATEGORY_RDF_TYPE | 216 |
TEST_CATEGORY | 216 |
TEST_PLATFORM_COVERAGE_RDF_TYPE | 200 |
TEST_PLATFORM_COVERAGE | 200 |
TEST_BUILD_RECORD_LINK_PROP | 200 |
SOURCE_FILE | 159 |
SOURCE_FILE_RDF_TYPE | 159 |
WORK_ITEM_CATEGORY_LINK_PROP | 156 |
WORK_ITEM_PLAN | 154 |
WORK_ITEM_PLAN_RDF_TYPE | 154 |
TEST_BUILD_DEFINITION_LINK_PROP | 112 |
TEST_CATEGORY_LINK_PROP | 108 |
WORK_ITEM_DELIVERABLE_RDF_TYPE | 93 |
WORK_ITEM_DELIVERABLE_LINK_PROP | 93 |
WORK_ITEM_DELIVERABLE | 93 |
WORK_ITEM_DELIVERABLE_RELEASE_PREDECESSOR | 93 |
WORK_ITEM_CATEGORY_RDF_TYPE | 92 |
WORK_ITEM_CATEGORY | 92 |
PREFIX | 55 |
TEST_BUILD_DEFINITION_RDF_TYPE | 30 |
TEST_BUILD_DEFINITION | 30 |
ACCESS_CONTEXT_RDF_TYPE | 26 |
LQE_INDEXING_ERROR | 18 |
LQE_INDEXING_ERROR_LINK_PROP | 18 |
TRS_DESCRIPTOR | 16 |
TRS_DESCRIPTOR_RDF_TYPE | 16 |
TRS_DESCRIPTOR_LINK_PROP | 16 |
RESOURCE_GROUP_USER_GROUPS | 15 |
RESOURCE_GROUP_RDF_TYPE | 15 |
RESOURCE_GROUP | 15 |
ACCESS_CONTEXT | 13 |
USER_GROUP | 13 |
USER_GROUP_RDF_TYPE | 13 |
USER_GROUP_NAME | 12 |
CACHED_CONFIG | 12 |
RESOURCE_GROUP_NAME | 12 |