Size of DB2 incremental backup seems too big for a CCM database with no activity
We have a client who uses DB2 incremental backup on a CCM server.
They did not do anything on the CCM database (no work item creation or update; no one even logged in), yet the size of the CCM DB incremental backup was 45 GB. The full backup they took was 60 GB.
Is this a reasonable size for an incremental backup?
I know a DB2 incremental backup includes all long/LOB data, but the client did not even log in.
*** About incremental backup ***
Incremental backup and recovery
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.admin.ha.doc/doc/c0006069.html
Note:
- If a table space contains long field or large object data and an incremental backup is taken, all of the long field or large object data will be copied into the backup image if any of the pages in that table space have been modified since the previous backup.
- If you take an incremental backup of a table space that contains a dirty page (that is, a page that contains data that has been changed but has not yet been written to disk) then all large object data is backed up. Normal data is backed up only if it has changed.
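The notes above explain why any page change in a table space causes all of its LOB data to be copied into the incremental image. To see which table spaces of the CCM database actually hold long/LOB columns (and are therefore candidates for this wholesale copy), a catalog query along these lines could help. This is a sketch: the database name CCM is an assumption, and note that a table may keep its LOBs in a separate long table space.

```shell
# Sketch: list table spaces that hold LOB columns in the CCM database.
# Per the notes above, one modified page in such a table space makes an
# incremental backup copy ALL of its long/LOB data.
db2 connect to CCM
db2 "SELECT DISTINCT T.TBSPACE, T.LONG_TBSPACE
     FROM SYSCAT.TABLES T
     JOIN SYSCAT.COLUMNS C
       ON C.TABSCHEMA = T.TABSCHEMA AND C.TABNAME = T.TABNAME
     WHERE C.TYPENAME IN ('BLOB', 'CLOB', 'DBCLOB')"
db2 connect reset
```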
Client did:
1. Stop RTC (Tomcat)
2. Set TRACKMOD to ON for the CCM database
3. db2stop and db2start
4. DB2 full backup of CCM -> 60 GB
5. RTC server stop and start (no one logged in)
6. Run incremental backup -> 45 GB
7. Start RTC server
8. Create 1 new work item and attach a binary file (398 KB)
9. Stop RTC server
10. Run incremental backup on the CCM DB -> 45 GB
If anyone knows about or has experience backing up a CCM server with incremental backup, please let us know.
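The TRACKMOD and backup steps above correspond to roughly the following DB2 commands. This is a sketch; the database name CCM and the backup target path /backups are assumptions, and these commands need a live DB2 instance.

```shell
# Step 2: enable modification tracking so DB2 records which pages change.
db2 update db cfg for CCM using TRACKMOD ON

# Step 3: recycle the instance so the configuration change takes effect.
db2stop
db2start

# Step 4: full backup that serves as the baseline for later incremental backups.
db2 backup db CCM to /backups

# Step 6: cumulative incremental backup of everything changed since the last full backup.
db2 backup db CCM incremental to /backups
```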
2 answers
Hi Ryosuke,
60 GB is far too big for an empty DB2 database.
You can check whether any large data was added during setup. You will find the details in this forum post: https://jazz.net/forum/questions/111357/how-to-check-which-tables-are-using-the-most-space.
Once you confirm that there is only a small amount of data in the database, you should check with DB2 support why the backup uses so much space.
Let us know if this helps.
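Checking which tables use the most space, as the linked post suggests, boils down to querying the SYSIBMADM.ADMINTABINFO administrative view. A sketch of such a query (the database name CCM is an assumption; sizes are reported in KB):

```shell
# Sketch: top 20 tables in the CCM database by total physical size (KB),
# summing data, index, long, and LOB storage from ADMINTABINFO.
db2 connect to CCM
db2 "SELECT SUBSTR(TABSCHEMA, 1, 20) AS SCHEMA,
            SUBSTR(TABNAME, 1, 40) AS TABNAME,
            DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE +
            LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE AS TOTAL_KB
     FROM SYSIBMADM.ADMINTABINFO
     ORDER BY TOTAL_KB DESC
     FETCH FIRST 20 ROWS ONLY"
db2 connect reset
```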
I tested this further and found:
A DB2 incremental backup backs up all source files, regardless of whether we touched them or not.
I guess this is working as designed, but it means this customer cannot use incremental backup.
Test scenario in my test environment:
1. Stop RTC
2. Run full backup (4.5 GB)
3. Start RTC server, create only 1 work item, and save.
* I did nothing except create the new work item.
4. Stop RTC server
5. Run incremental backup (3.8 GB)
Note: since this is a test environment, my test DB was small, so I had added large source files to a stream in the project beforehand.
* I also used db2pd -tablespaces trackmodstate.
According to this, 5 out of 7 table spaces are Dirty, meaning there are changes in those table spaces.
However, I only created 1 work item (no attachment, no link).
Tablespace Statistics:
Address            Id TotalPgs UsablePgs UsedPgs FreePgs HWM    Max HWM TrackmodState
0x0000000016F76840 0  10240    10236     10068   168     10068  10068   Dirty
0x0000000016F77FC0 1  1        1         1       0       0      0       Dirty
0x0000000016F7B700 2  2048     2016      928     1088    928    928     Dirty
0x0000000016F7CE80 3  250000   249952    199072  50880   199072 199072  Clean
0x0000000016F7E600 4  2500     2464      64      2400    64     64      Dirty
0x0000000016F81DA0 5  2048     2044      108     1936    108    108     Clean
0x0000000016F83520 6  1        1         1       0       0      0       Dirty
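To pull just the dirty table spaces out of output like the statistics above, a small awk filter works. Since db2pd needs a live instance, the sketch below runs the filter against the captured sample rows instead of a live `db2pd -db CCM -tablespaces trackmodstate` call:

```shell
# Filter data rows whose last column (TrackmodState) is "Dirty" and print
# the table space Id (second column). The here-doc stands in for the rows
# of db2pd's "Tablespace Statistics" section.
awk '$NF == "Dirty" { print "Tablespace", $2, "is dirty" }' <<'EOF'
0x0000000016F76840 0 10240 10236 10068 168 10068 10068 Dirty
0x0000000016F77FC0 1 1 1 1 0 0 0 Dirty
0x0000000016F7B700 2 2048 2016 928 1088 928 928 Dirty
0x0000000016F7CE80 3 250000 249952 199072 50880 199072 199072 Clean
0x0000000016F7E600 4 2500 2464 64 2400 64 64 Dirty
0x0000000016F81DA0 5 2048 2044 108 1936 108 108 Clean
0x0000000016F83520 6 1 1 1 0 0 0 Dirty
EOF
```

Against the sample it reports table spaces 0, 1, 2, 4, and 6 as dirty, matching the 5-of-7 count above.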
Comments
Takehiko Amano
JAZZ DEVELOPER Sep 03 '13, 10:52 p.m. I think it's better to diagnose what's happening using db2diag.
ryosuke oka
Sep 18 '13, 6:24 a.m. Thank you, Amano-san!