Physical disk allocation for Red Hat Linux implementation?
Hi all,
Looking for some physical disk allocation estimates for a Red Hat Enterprise Linux (RHEL) 6.5 implementation with the following dev/test environment configuration (WAS v8.5.5 and CLM v6.0.2 for all servers listed below):
- Server 1: (WAS, JTS, JRS, DCC, LDX, Rational License Key Server) (8 vCPUs, 16 GB RAM)
- Server 2: (WAS, RTC) (8 vCPUs, 16 GB RAM)
- Server 3: (WAS, DNG) (8 vCPUs, 16 GB RAM)
- Server 4: (WAS, RQM) (8 vCPUs, 16 GB RAM)
All the repositories will be on a separate Oracle server.
As mentioned, this is a dev/test environment that will see a spike of up to 25 users. However, it will also serve as the proof of concept for the production environment configuration, which has an initial estimate of 50 concurrent users and an expectation of growth. Please note that this is NOT the configuration the production environment will follow. I realize that hosting JTS, JRS, DCC, and LDX on the same server would kill us in production, as would the lack of allocated resources.
Any physical disk space allocation estimates for each of the following (including logs, caching, etc.) on a RHEL 6.5 server environment would be helpful:
- WAS
- JTS
- JRS
- DCC
- LDX
- RTC
- DNG
- RQM
I could run through each install and generate some estimates, but that would take time, and I'm hoping someone already has this information readily available from their own direct experience.
Best regards,
-michael
Accepted answer
Hi Michael,
I am not sure how useful our datasheets are going to be for you, as the RTC one shows hundreds of GB required to support 1,000 concurrent users.
Your real consideration is how much data you are going to bring in for the PoC and dev/test. Our products require a decent-sized footprint whether you have 1 user or 25, but more importantly, it is the data that is going to answer your question.
If you're not going to import GBs of requirements data, SCM data, etc., then you can assume that RM is going to have the largest indices due to its architecture of external Jena indices. A simple rule of thumb is that 100,000 artifacts equate to at least 5GB of indices. As you are using Linux and 6.0.2, the guideline that RAM should be greater than indices + heap + 2GB doesn't apply as much, because indices can be read from disk more efficiently.
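To make that rule of thumb concrete, here is a minimal sketch of the arithmetic (the linear scaling is an assumption; real index growth depends on artifact size and link density, so treat the output as a floor):

```python
# Rule of thumb from above: at least ~5GB of RM (Jena) indices per 100,000 artifacts.
# Linear scaling is an assumption; treat the result as a floor, not a ceiling.
GB_PER_100K_ARTIFACTS = 5

def rm_index_estimate_gb(artifact_count: int) -> float:
    """Estimate RM index disk usage in GB for a given artifact count."""
    return artifact_count / 100_000 * GB_PER_100K_ARTIFACTS

for count in (25_000, 100_000, 500_000):
    print(f"{count:>7,} artifacts -> ~{rm_index_estimate_gb(count):.1f} GB of indices")
```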
If you're just considering the space of each application, then the reporting tools in Liberty take up around 1.2GB; I don't have a WAS install to hand. Each application is going to use roughly the same. I have them all installed together (really not recommended) and the total is 8.3GB for the applications you mention plus LQE.
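If you want to verify the footprint on your own servers rather than take my numbers, a quick du-style check along these lines works (a minimal sketch; the two paths are common Linux default install locations and are assumptions, so substitute your own):

```python
import os

# Assumed default Linux install locations; adjust to your environment.
INSTALL_DIRS = [
    "/opt/IBM/WebSphere/AppServer",  # typical WAS install root
    "/opt/IBM/JazzTeamServer",       # typical CLM install root
]

def dir_size_gb(root: str) -> float:
    """Total size in GB of all regular files under root (symlinks skipped)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                total += os.path.getsize(path)
    return total / 1024**3

for d in INSTALL_DIRS:
    print(f"{d}: {dir_size_gb(d):.2f} GB" if os.path.isdir(d) else f"{d}: not found")
```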
In summary, your servers 2-4 should work with 50GB each, although again this is based on the aforementioned caveats.
Server 1 is a problem, as you have DCC and JRS on there. You don't mention LQE or a DW, so you need to add one of these two depending on whether you're going to use configuration management in RM/RQM, or global configurations. If not, you need a DW and no LDX. If you are, you need LDX and LQE.
You are probably going to need 100GB for all of this, but once again you will need to add or remove some applications compared to what is above.
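To show how those figures hang together, here is a back-of-the-envelope sketch (every constant is an assumption pulled loosely from this thread, not a product minimum; tune them to your own data plans):

```python
# Rough per-server disk budget: installs + indices + logs + any DW/LQE store,
# with a growth multiplier. All constants are assumptions, not product figures.
APP_GB = 1.2    # per-application install footprint (the Liberty figure above)
INDEX_GB = 5    # index allowance per the 100,000-artifact rule of thumb
LOGS_GB = 15    # assumed allowance for logs, caching, and temp space
GROWTH = 2.0    # assumed headroom multiplier for data growth

def budget_gb(apps: int, data_store_gb: float = 0.0) -> float:
    """Rough disk budget in GB for a server hosting `apps` applications."""
    return (apps * APP_GB + INDEX_GB + LOGS_GB + data_store_gb) * GROWTH

print(f"Servers 2-4 (WAS + one app):        ~{budget_gb(2):.0f} GB each")
print(f"Server 1 (six components + DW/LQE): ~{budget_gb(6, data_store_gb=25):.0f} GB")
```

With these assumptions the outputs land in the same ballpark as the 50GB and 100GB figures above.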
I hope that answers your questions.
Paul
Comments
Hi Paul,
Thanks for the response! I ended up allocating 65 GB each just to get things started. I appreciate the feedback and the elaboration. I have control over how much data I import into the initial dev/test environment and can extend the disks at a later date, but I really wanted an idea of what I would be looking at during initial installation and preparation.
Everything being cyclical, I seem to get a lot of Windows installation requests for a while, and then, out of the blue, the Unix/Linux requests start coming in and I have to put myself back in the right frame of reference.
Cheers!