Monitoring Lifecycle Query Engine performance

When you log on to Lifecycle Query Engine (LQE), you can quickly assess the overall status by viewing information in various widgets on the home page. You can view the high-level status of data sources, nodes, the query service, and backups. If issues need to be addressed, such as errors with data sources, you can click to see the details and take the appropriate actions.

Monitoring the health of your LQE environment

About this task

You can monitor the status of the Lifecycle Query Engine environment to determine the operating state and view indexing and maintenance activities in the associated feeds.


  1. To see the status of a particular LQE node, on the home page at https://<host_name>:<port>/lqe/web/admin/home, click Health Overview under the node name in the Health Monitoring widget.
    Screen capture of the Health Monitoring widget. It shows the details for the first node on the list.
    You can see details about the system memory and processor, and how the node is performing against the defined thresholds. These thresholds are defined on the Query Service configuration page that you can access by clicking the Edit query service configuration link in the Query Service widget.
    Screen capture of the Query Service widget. It shows the link to the query service configuration page.
    You can also define the thresholds for specific nodes on the LQE nodes page in the administration section at https://<host_name>:<port>/lqe/web/admin/nodes.
  2. To filter the content of the indexing and activity feeds, click Show filtering options in the Recent Indexer Activity widget on the home page, and select items in the lists.
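The threshold comparison behind the Health Monitoring widget can be sketched roughly as follows. This is an illustrative sketch, not LQE's implementation: the memory sampling is Linux-only (it reads /proc/meminfo), and the warning and critical percentages are assumed example values, not LQE defaults.

```python
# Illustrative sketch (not LQE's implementation) of the kind of
# threshold check the Health Monitoring widget performs: sample
# system memory use and compare it with administrator-defined
# thresholds. Linux-only sampling; threshold values are assumptions.

def memory_used_percent(meminfo_path="/proc/meminfo"):
    """Percentage of physical memory in use (Linux /proc format)."""
    info = {}
    with open(meminfo_path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are in kB
    total = info["MemTotal"]
    available = info.get("MemAvailable", info["MemFree"])
    return 100.0 * (total - available) / total

def node_status(used_pct, warn_pct=75.0, critical_pct=90.0):
    """Map a usage percentage onto a coarse health status."""
    if used_pct >= critical_pct:
        return "critical"
    if used_pct >= warn_pct:
        return "warning"
    return "ok"

# Example: a node at 82% memory use falls in the warning band.
print(node_status(82.0))  # warning
```

In LQE itself, the equivalent thresholds are the ones you set on the Query Service configuration page or per node on the LQE nodes page.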

Examining the status of nodes

As an administrator, it is important that you monitor what is happening on the Lifecycle Query Engine system at any point in time. By viewing the information available on the Health Monitoring page, you can assess how the system is performing, identify potential issues, and take steps to improve the performance of the servers. You can also compare current information with historical data about the system.


  • To view a high-level summary of the health of the system, in the Health Monitoring page navigation, click Overview.

    Screen capture of the Health Monitoring Overview page showing the overall status of the node.

  • To view detailed information for a specific node, go to the Performance page, and choose the node from the list.

    Screen capture of the Health Monitoring Performance page showing the memory consumption and CPU consumption graphs for the selected node.

    The following types of information are displayed in graphs or as lists:
    • Memory consumption
    • CPU consumption
    • Query and indexing summary
    • Index size
    To view details about how the LQE node is performing over a period, select the date and time. In each section, the charts are updated to reflect what you selected.
    Tip: You can show or hide individual lines in the charts by clicking the corresponding item in the legend.
  • To monitor the performance on a specific partition, go to the Partition Performance page.
    The following types of information are displayed in graphs or as lists:
    • Partition I/O operations
    • Partition I/O operation time
    • Partition size
    • Query summary
    To view details about how a specific partition of the LQE node is performing over a period, select the date and time. In each section, the charts are updated to reflect what you selected.

Monitoring indexing activity statistics

By monitoring the indexing activity, you ensure that indexing and queries are completing successfully, and that defined memory thresholds are not exceeded.


  • On the Health Monitoring page, click Statistics in the menu, and go to the properties under Dataset Statistics.
    Screen capture of the Health Monitoring Statistics page showing the dataset statistics.
    • Active Transactions: The number of active readers and writers, providing a view of index activity: queries, base logs, change logs, and so on.
    • Completed Transactions: The number of completed transactions, including read transactions, write transactions, and aborted transactions.
    • Journal Writebacks: A journal is a Jena TDB write-ahead log. This field displays the number of transactions in the journal that are pending or have been written to the backing index. You should monitor the number of pending and completed journal writebacks:
      • Under normal conditions, there should be a low number of pending journal writebacks as the journal writebacks move from a pending state to a completed state.
      • When the number of pending journal writebacks starts to increase and does not decrease over time, there is potential for heap space issues if this condition continues unabated.
    • Suspensions: The number of suspensions that are pending or completed, that timed out, or that showed errors because the JVM heap space reached a threshold (the heap usage threshold is set in the advanced properties). When the threshold is reached, incoming transaction activities are suspended.
      To address this problem, you can use the dataset suspension feature in Lifecycle Query Engine (LQE). A dataset suspension does the following tasks:
      • Blocks all new read and write operations to the dataset
      • Waits for existing read operations to complete
      • Attempts to flush journal writebacks to the index
      The following LQE advanced properties control dataset suspension:
      • Heap Suspension Enabled: Initiates dataset suspension when a heap threshold is exceeded. This property is disabled by default.
      • Heap Usage Threshold: Percentage of heap that is used to trigger suspension. The default is 85%.
      • Stack Suspension Enabled: After a commit, initiates dataset suspension if the number of pending journal writes exceeds the maximum pending writebacks threshold. Starting in version 7.0.1, this property is enabled by default.
      • Maximum Pending Writebacks: The threshold of pending writebacks when stack suspension takes place.
      • Suspend Timeout: The number of seconds to wait for read operations to complete before attempting a journal flush. This value must be greater than the total of query timeout and rogue query timeout. For example, if the default query timeout is 600 seconds and default rogue timeout is 180 seconds, this value must be greater than 780 seconds.
    • Overloads: The number of times an overload condition was encountered. An overload can occur in the following situations:
      • When the JVM garbage collection process starts because the heap usage threshold was reached and, after garbage collection, the JVM heap still exceeds the heap usage threshold.
      • When the maximum value of pending writebacks is exceeded and, after stack suspension, the maximum value of pending writebacks is still exceeded.
      A journal writeback might have a stack of dataset views and, if the incoming requests don't pause, a backlog of journal writebacks, with a corresponding stack of datasets, can be queued and might lead to stack overflow. The stack count indicates the number of times a journal writeback stack overflow was prevented when the heap usage threshold was exceeded.
    • Running or Completed Queries: The number of queries that are running or completed. Watch the number of running queries over time: queries that don't finish or are not timed out properly can block index writes.
  • To view currently running queries, go to Administration > Queries, and click Running Queries. All queries should end normally or by timing out; however, in Apache Jena, some queries might become rogue.
    For information about how to handle rogue queries, see Preventing Out-of-memory errors in Lifecycle Query Engine.
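The guidance above on pending journal writebacks (a count that keeps increasing and does not come back down signals potential heap trouble) can be sketched as a simple trend check over sampled statistics. How the samples are collected is left open; the window size and the monotonic-growth rule are illustrative assumptions, not an LQE API.

```python
# Illustrative trend check (not an LQE API): flag a dataset when the
# pending journal writeback count grows monotonically across a window
# of samples, the condition the text identifies as a precursor to
# heap space issues. Window size is an assumption.
from collections import deque

class WritebackTrend:
    def __init__(self, window=6):
        # Keep only the most recent `window` sampled pending counts.
        self.samples = deque(maxlen=window)

    def record(self, pending_count):
        self.samples.append(pending_count)

    def sustained_growth(self):
        """True when the window is full and every sample exceeds the previous one."""
        if len(self.samples) < self.samples.maxlen:
            return False
        values = list(self.samples)
        return all(later > earlier for earlier, later in zip(values, values[1:]))

trend = WritebackTrend(window=4)
for count in [2, 5, 9, 14]:   # steadily increasing backlog
    trend.record(count)
print(trend.sustained_growth())  # True: investigate heap usage
```

A healthy system would show the count oscillating as writebacks move from pending to completed, so the check only fires on sustained growth across the whole window.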
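The Suspend Timeout constraint described above (the value must exceed the query timeout plus the rogue query timeout) is easy to check mechanically before applying a configuration change. A minimal sketch, assuming the three values are supplied in seconds; the function names are hypothetical and not part of LQE.

```python
# Hypothetical helper (not part of LQE): verify that a proposed
# Suspend Timeout satisfies the documented constraint that it must
# exceed query timeout + rogue query timeout.

def minimum_suspend_timeout(query_timeout_s, rogue_timeout_s):
    """Floor that Suspend Timeout must exceed, in seconds."""
    return query_timeout_s + rogue_timeout_s

def validate_suspend_timeout(suspend_s, query_timeout_s=600, rogue_timeout_s=180):
    """Raise ValueError if the value does not exceed the documented minimum."""
    floor = minimum_suspend_timeout(query_timeout_s, rogue_timeout_s)
    if suspend_s <= floor:
        raise ValueError(
            f"Suspend Timeout must be greater than {floor} s "
            f"(query timeout {query_timeout_s} s + rogue timeout {rogue_timeout_s} s)"
        )
    return suspend_s

# With the defaults cited in the text: 600 + 180 = 780, so 800 passes.
print(validate_suspend_timeout(800))  # 800
```

The defaults (600-second query timeout, 180-second rogue query timeout) match the worked example in the text; substitute your own configured values when checking.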