How has memory management changed in CLM 4?


Kevin Ramer (4.5k9186201) | asked Dec 18 '12, 9:10 a.m.
After upgrading most of our CLM environment from 3.0.1.3 to 4.0.1, our hosts are hogging virtual memory in ways they never did with V3. For example, an LPAR on an IBM 9117-570 with 4 CPUs and 32 GB RAM hosts 3 JTS, RM, and 6 QM applications spread across the 3 JTS, with each of the 9 application servers (Tomcat) configured with a 4 GB JVM heap. The database is on a 2nd LPAR. Within the first day after the upgrade, one of the QM servers crashed due to no swap space (never seen before in this environment). We have since increased RAM to 48 GB and doubled the swap.

Our 2nd LPAR, with 8 CPUs and 96 GB of RAM, is also swapping to beat the band. Granted, we have a TON of things there, but this VM activity _never_ manifested itself in as short a time frame under V3. On the Application management page, the reported values almost always indicate a high percentage of VM in use, whereas under V3 the VM use was much more moderate and fluctuated a great deal. VM use seems to be highest under the WAS application servers (we have 2: one full-blown, the other hosting just JTS).

We try to keep our OS installs "bare bones", with little besides the CLM applications installed and running.

Ideas?
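For anyone chasing the same symptoms, paging pressure like this can be watched on AIX with standard commands along these lines (a sketch; output formats vary by release):

lsps -a       # paging-space utilization per device
svmon -G      # system-wide real and virtual memory snapshot
vmstat 5      # watch the pi/po (page-in/page-out) columns over time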



Comments
Kevin Ramer commented Dec 26 '12, 1:46 p.m.

No, but server.startup has, it seems.  In that file one will find:

JAVA_OPTS="$JAVA_OPTS -Dcom.ibm.team.repository.tempDir=$TEMP_DIR"
JAVA_OPTS="$JAVA_OPTS -Dlog4j.configuration=file://"pwd"/conf/startup_log4j.properties"
JAVA_OPTS="$JAVA_OPTS -Xmx4G"
JAVA_OPTS="$JAVA_OPTS -Xms4G"

The java -X help says:

  -Xms<x>         set initial memory size to <x>
  -Xmx<x>         set memory maximum to <x>

So each server takes 4 GB right out of the gate.
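One way to confirm that reservation per JVM, assuming an AIX LPAR like the one described (a sketch, not from the original post):

ps -ef | grep java     # find each Tomcat JVM's process ID
svmon -P <pid>         # per-process breakdown; the Virtual column reflects the reserved heap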



John Morales commented Dec 27 '12, 10:34 a.m.

Has this issue been resolved with RTC CLM 4.0.1?


Kevin Ramer commented Dec 27 '12, 10:38 a.m.

Speaking as the OP, we only noticed this upon upgrading to 4.0.1. I have commented out the -Xms4G addition in server.startup and restarted some of the less active CLM applications. Virtual memory use dropped dramatically.
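Concretely, the change amounts to something like this in server.startup (a sketch based on the file quoted above):

JAVA_OPTS="$JAVA_OPTS -Xmx4G"
# JAVA_OPTS="$JAVA_OPTS -Xms4G"   # commented out: the heap now starts small and grows on demand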

3 answers



Karl Weinert (2.0k52736) | answered Dec 27 '12, 8:32 a.m.
JAZZ DEVELOPER
-Xgc:preferredHeapBase=0x100000000 is also new in 4.x and might be related to your problem if you have it set to the same value for each application server you are running.

It is actually a new setting introduced in IBM Java 6 SR7. You can read more about it in the IBM Java SDK information center.
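If you want to experiment, each server's startup script could be given its own value (a sketch; the addresses are examples only, and whether staggering them helps is an assumption here, not something this answer confirms):

# in server A's startup script (0x100000000 = 4 GB):
JAVA_OPTS="$JAVA_OPTS -Xgc:preferredHeapBase=0x100000000"
# in server B's startup script (0x200000000 = 8 GB):
JAVA_OPTS="$JAVA_OPTS -Xgc:preferredHeapBase=0x200000000"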

Karl Weinert (2.0k52736) | answered Dec 27 '12, 12:04 p.m.
JAZZ DEVELOPER
You might want to check out this comment, as it discusses some of the memory settings. It also links to Work Item 198140, "track recommended JVM options" (the link in that post seems to be broken).
Also see the Memory Requirements section of the System Requirements.
Rather than removing the -Xms setting, it might be better to adjust both the -Xms and -Xmx settings for each application server. Setting -Xms a lot lower than -Xmx can come back to bite you when garbage collection kicks in. The setting was added because a lot of performance issues in 3.x were resolved by setting -Xms equal to the -Xmx value.
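As a sketch of that compromise (the values are illustrative, not a tested recommendation):

JAVA_OPTS="$JAVA_OPTS -Xmx4G"   # ceiling stays at 4 GB
JAVA_OPTS="$JAVA_OPTS -Xms2G"   # start at half the ceiling instead of 4G or the tiny default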



Comments
Kevin Ramer commented Dec 27 '12, 12:41 p.m.

All of this is a trade-off of one kind or another. Due to the way RTC "grew up", our shop has numerous CCM servers, mostly because project owners wanted visibility limited, which RTC did not offer until version 2. Also, we must manage each RTC machine (or LPAR) to high security standards. The overhead of that security prevents the use of dozens of smaller LPARs over one relatively honkin' LPAR. Right now our major LPAR has 96 GB RAM and 20 application servers (the database is elsewhere). Setting the starting point of the JVM heap high is more detrimental in our experience.

I'll review the links provided and look toward some compromise until we can get another 512 GB added to our server.


Philippe Mulet (55112) | answered Jan 14 '13, 6:51 a.m.
FORUM MODERATOR / JAZZ DEVELOPER
The -Xms setting provides guidance to the JVM garbage collector that it doesn't need to battle too much to bring memory usage down. It is a performance optimization, as CLM applications try to use heap memory to cache frequently used information (and avoid hitting the DB so much).

At full capacity, 9 JVMs of 4 GB each are going to consume more than 32 GB of RAM anyway (9 × 4 GB = 36 GB of heap alone, before any native JVM overhead).
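To see how the collector actually behaves under either choice, verbose GC logging can be enabled (a sketch; -Xverbosegclog is an IBM J9 option, and the file name is an example):

JAVA_OPTS="$JAVA_OPTS -verbose:gc"
JAVA_OPTS="$JAVA_OPTS -Xverbosegclog:gclog.txt"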


