Problem determination from heapdump?
Hi,
Our RTC 3.0.1.3 instance comes crashing down with an OutOfMemoryError. Analysing the heap dump shows:
A Linked List Data Structure Detected.
3,591,650,456 bytes (76.85 %) of Java heap is used by 255,780 instances of java/util/LinkedList$Entry
Contains the following object:
- 255,779 instances of java/lang/Object holding 3,579,362,328 bytes
I *think* this means the average is roughly 14,000 bytes for each of those 255,779 entries, but together they're hitting 3.5 GB or thereabouts.
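As a quick sanity check on that average, straight division of the figures reported above gives:

3,579,362,328 bytes / 255,779 entries ≈ 13,994 bytes per entry
255,779 entries x ~14 KB ≈ 3.5 GB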
In my jazz logs:
2012-08-27 15:01:51,854 [http-rtp-rtc8%2F9.42.60.213-9443-Processor1611] WARN LeakWrapperLogger - Aborting transaction Connection #34820051 has 1 open statements
2012-08-27 15:01:51,861 [jazz: AsynchronousTaskRunner-3] ERROR com.ibm.team.repository - CRJAZ1451I An exception occurred while executing the task "ItemCleanupTask", contributed by component "com.ibm.team.repository". The task will be rescheduled.
java.lang.OutOfMemoryError
==
2012-08-27 15:05:53,461 [jazz: AsynchronousTaskRunner-2] ERROR nternal.service.BuildForgeEventPollerScheduledTask - Error running event poller task.
java.lang.OutOfMemoryError
at java.lang.ref.ReferenceQueue.<init>(ReferenceQueue.java:177)
at java.util.WeakHashMap.<init>(WeakHashMap.java:178)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.leak.ResultSetProducerLeakWrapper.<init>(ResultSetProducerLeakWrapper.java:24)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.leak.StatementLeakWrapper.<init>(StatementLeakWrapper.java:26)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.leak.PreparedStatementLeakWrapper.<init>(PreparedStatementLeakWrapper.java:37)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.leak.ConnectionLeakWrapper.wrap(ConnectionLeakWrapper.java:92)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.leak.ConnectionLeakWrapper.prepareStatement(ConnectionLeakWrapper.java:221)
at com.ibm.team.repository.service.db.provider.AbstractDbProvider.executeStatement(AbstractDbProvider.java:623)
at com.ibm.team.repository.jdbcdriver.internal.db2.Db2Provider.executeStatement(Db2Provider.java:615)
==
And a few more like it. On the database side, the nearest thing to a "problem" is:
2012-08-28-13.57.17.267439 Instance:rtcinst1 Node:000
PID:11468960(db2agent (RTCDB8) 0) TID:107304 Appid:9.42.60.73.38324.12081418
2454
data protection services sqlpgResSpace Probe:2860 Database:RTCDB8
ADM1823E The active log is full and is held by application handle "60621..".
Terminate this application by COMMIT, ROLLBACK or FORCE APPLICATION.
That message is repeated several times.
This application also died in a similar fashion on Aug 14, after which I increased the JVM heap by 1 GB.
By the way, I've been monitoring our server farm with a self-written tool that collects the basic server info (JVM used, JVM free, JVM max, uptime) every half hour, and this server's timeline never showed usage at or near its maximum heap.
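For context, that tool is essentially a periodic sample of the standard JVM memory figures. The sketch below is only an assumption of roughly what such a sampler looks like (the class name and output format are made up, and the real tool presumably polls each server remotely rather than in-process); it mainly illustrates why a 30-minute interval can miss a heap that balloons and dies between two samples.

import java.lang.management.ManagementFactory;

// Hypothetical 30-minute heap sampler, not the actual monitoring tool.
public class HeapSampler {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long freeMb = rt.freeMemory() / (1024 * 1024);
            long maxMb  = rt.maxMemory() / (1024 * 1024);
            long uptimeMin = ManagementFactory.getRuntimeMXBean().getUptime() / 60000;
            System.out.printf("uptime=%dmin used=%dMB free=%dMB max=%dMB%n",
                    uptimeMin, usedMb, freeMb, maxMb);
            // A 30-minute gap between samples can easily miss a leak that fills
            // the heap and kills the JVM in between, which would explain a
            // timeline that never appears to approach -Xmx before the crash.
            Thread.sleep(30 * 60 * 1000L);
        }
    }
}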
Suggestions?
One answer
A Linked List Data Structure Detected.
3,374,476,848 bytes (73.39 %) of Java heap is used by 254,114 instances of java/util/LinkedList$Entry
Contains the following object:
- 254,113 instances of java/lang/Object holding 3,362,284,984 bytes
Another one bites the dust. My suspicion is that BIRT reports are to blame, as I often see java.util.LinkedList at the top of the stack traces in "Active Services" whenever "postRenderReport" turns up there ...