JTS and RTC Online Backup Scenario with Oracle
I am operating from the information provided in the Online Backup Wiki.
I need to provide failover for RTC within a given recovery window of 60 minutes. We will be using RTC 5.0.2, so we can utilize Oracle online backups and the full-text index files.
Is it possible to detect when a user is performing an admin function that would update the config files that need to be copied for the backup? I noticed that the floating user log file gives some hint as to what operation the user is performing. Maybe that log could be queried before and after to see whether an 'Admin Function' had been performed?
Valid online backup procedure?
1. Backup Application Config files.
2. Backup Server Config Files (RTC, JTS)
3. Backup RTC and JTS Index Files
4. Backup CCM Full Index Files
5. Online Backup of Oracle database.
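The five steps above could be scripted roughly as follows. This is only a sketch: the config layout under `jazz_conf` (e.g. `/opt/IBM/JazzTeamServer/server/conf`), the `jts`/`ccm` application names, and the way the Oracle online backup is triggered are all assumptions about a typical install, not product defaults.

```python
import shutil
import subprocess
import tarfile
import time
from pathlib import Path

def backup_rtc(jazz_conf: Path, backup_root: Path, db_backup_cmd=None) -> Path:
    """Run steps 1-5: copy config files, archive indices, back up the DB."""
    dest = backup_root / time.strftime("%Y%m%d%H%M%S")
    dest.mkdir(parents=True)

    # Steps 1-2: application and server configuration files (JTS and CCM/RTC)
    for app in ("jts", "ccm"):
        props = jazz_conf / app / "teamserver.properties"
        if props.exists():
            shutil.copy2(props, dest / f"{app}-teamserver.properties")

    # Steps 3-4: JTS and CCM full-text index files
    for app in ("jts", "ccm"):
        indices = jazz_conf / app / "indices"
        if indices.is_dir():
            with tarfile.open(dest / f"{app}-indices.tar.gz", "w:gz") as tar:
                tar.add(indices, arcname="indices")

    # Step 5: online backup of the Oracle database, e.g. an RMAN script.
    # Passed in by the caller so the DB tooling stays out of this sketch.
    if db_backup_cmd:
        subprocess.run(db_backup_cmd, check=True)
    return dest
```

The database step is deliberately a caller-supplied command (e.g. an `rman` invocation) so the file-copy logic stays independent of the Oracle tooling.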
If all goes right, I will have created a 'hot backup' of my production instance. I could copy these backup files to my hot instance, restore the Oracle backup there, shut down the production instance, and fire up the hot instance.
I understand that I would lose the data from the interval since the last online backup was taken.
I am trying to avoid having to do a full text index restore after the database is restored, which could take hours depending on the size of my database, and I am constrained to the 60-minute window.
Holes in this procedure? Did I miss anything?
Accepted answer
Robert,
What you describe is a cold standby scenario, where the standby server is either down or inactive.
Instead of trying to use the online backup to create a second server, I would consider the following scenario:
- Move the index files to a high performance network storage that can be reached from multiple machines
- Make the database highly available e.g. by supported HA solutions
- Set up the standby server (image) pretty much identical to the hot one, including access to the DB and index files, with the same connection/location information
- Frequently copy the configuration files to the standby server or a backup location
If the main server fails:
- If possible (you can access the machines file system) copy the configuration files over
- Shut down the failed server
- Bring up the standby server and reroute network to it
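The failover steps above can be sketched as a small driver. Every concrete command here is an assumption supplied by the caller: how you copy the config files (e.g. `scp`/`rsync`), how you stop the failed server, and how you start the standby and reroute traffic all depend on your environment.

```python
import subprocess

def fail_over(copy_cmds, stop_cmd, start_cmd):
    """Run the failover steps in order; each argument is a command
    (or list of commands) chosen for the specific environment."""
    # 1. If the failed machine's file system is reachable, copy config files over
    for cmd in copy_cmds:
        subprocess.run(cmd, check=False)  # tolerate an unreachable host
    # 2. Shut down the failed server
    subprocess.run(stop_cmd, check=False)
    # 3. Bring up the standby server and reroute the network to it
    subprocess.run(start_cmd, check=True)
```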
You would only lose data if the DB gets corrupted, in which case you would need to fall back to a valid backup and lose data anyway. I think the DB is most unlikely to fail, especially if you use available HA solutions.
I don't think there is anything built in that allows you to detect that the server configuration files are changed by a user. However, these files are usually pretty stable after the first setup and configuration is done. You could run a job that checks the modification time of these files and copies them over whenever the timestamp changes, indicating a change.
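The modification-time check suggested above could look like this minimal sketch. The file names and where you schedule it (e.g. a cron job every few minutes over each `teamserver.properties`) are assumptions for your environment.

```python
import shutil
from pathlib import Path

def sync_if_changed(src: Path, dest_dir: Path, last_mtime: float) -> float:
    """Copy src to dest_dir if it changed since last_mtime.

    Returns the newest modification time seen, to be passed back in
    on the next scheduled run.
    """
    mtime = src.stat().st_mtime
    if mtime > last_mtime:  # timestamp moved forward: the file was changed
        dest_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest_dir / src.name)  # copy2 preserves the timestamp
    return max(mtime, last_mtime)
```

Persist the returned timestamp between runs (a small state file would do) so the job only copies when a change actually happened.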
Comments
Donald Nong
Apr 12 '15, 8:13 p.m.
This message is repeated several times in the Wiki.
So your last concern may not be completely addressed.