
JTS and RTC Online Backup Scenario with Oracle


Robert Carter (42928087) | asked Apr 10 '15, 1:23 p.m.
 I am operating from the information provided in the Online Backup Wiki.

I have a need to provide a failover for RTC within a given time window of 60 minutes.  We will be using RTC 5.0.2 so we can utilize the online backups of Oracle and the full-text index files.

Is it possible to detect when a user is performing an admin function that would update the config files that need to be copied for the backup?  I noticed that the floating user log file gives some hint as to what operation the user is performing.  Maybe that could be queried before and after to see if an 'Admin Function' had been performed?

Valid online backup procedure?
1. Backup application config files
   - RTC
   - JTS
2. Backup server config files
   - RTC
   - JTS
3. Backup RTC and JTS index files
4. Backup CCM full-text index files
5. Online backup of the Oracle database
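
The file-copy steps above (1-4) could be sketched roughly as a shell script. This is a sketch only: the JAZZ_HOME layout, the file names, and the backup location are assumptions based on a default Jazz Team Server install and should be verified against your own deployment; the Oracle hot backup in step 5 would run separately, e.g. via RMAN.

```shell
#!/bin/sh
# Sketch only: paths and file names below are assumptions; check them
# against your actual Jazz Team Server deployment before relying on this.
JAZZ_HOME="${JAZZ_HOME:-/opt/IBM/JazzTeamServer}"
BACKUP_DIR="${BACKUP_DIR:-/tmp/rtc-online-backup}"

for app in ccm jts; do
  src="$JAZZ_HOME/server/conf/$app"
  dst="$BACKUP_DIR/$app"
  mkdir -p "$dst"
  # steps 1-2: application and server configuration files
  [ -f "$src/teamserver.properties" ] && cp -p "$src/teamserver.properties" "$dst/"
  # steps 3-4: work item and full-text index files (location varies by install)
  [ -d "$src/indices" ] && cp -rp "$src/indices" "$dst/"
done

# step 5: online Oracle backup, run separately as the Oracle owner, e.g.:
#   rman target / <<'EOF'
#   BACKUP DATABASE PLUS ARCHIVELOG;
#   EOF
echo "configuration and index files copied to $BACKUP_DIR"
```

The script is tolerant of missing files so it can be dry-run; a production version would want to fail loudly instead and quiesce or verify the indexes before copying them.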

If all goes right, I will have created a 'hot backup' of my production instance.  I could copy these backup files to my hot instance, restore the Oracle backup to the hot instance, shut down the production instance, and fire up the hot instance.

I understand that I would lose data from the interval since the last online backup was taken.

I am trying to avoid having to do a full-text index restore after the database is restored, which could take hours depending on the size of my database, and I am constrained to the 60-minute window.

Holes in this procedure?  Did I miss anything?


Comments
Donald Nong commented Apr 12 '15, 8:13 p.m.

This message is repeated several times in the Wiki:

To get a consistent backup of the work item index files a recreation of the index might still be necessary. If a consistent online backup of the work index is the primary concern, a complete offline backup should then be performed in order to get a fully consistent backup of the databases and the other files.
So your last concern may not be completely addressed.
Unless you can take a true snapshot of the entire system (database, indexes and configuration files), in other words an offline backup, there are always gaps between taking backups of the different parts of the system. Anything falling into those gaps becomes an inconsistency. For example, when a work item is updated, it is stored in the DB, and the work item index is also updated. If the database restore removes the latest update to this work item, can you be sure that update is also removed from the work item index?

Accepted answer


Ralph Schoon (61.4k33643) | answered Apr 13 '15, 4:43 a.m.
FORUM ADMINISTRATOR / FORUM MODERATOR / JAZZ DEVELOPER
edited Apr 13 '15, 4:47 a.m.
Robert,

what you describe is a cold standby scenario, where the standby server is either down or inactive.
Instead of trying to use the online backup to create a second server, I would consider the following scenario:

  1. Move the index files to a high performance network storage that can be reached from multiple machines
  2. Make the database highly available e.g. by supported HA solutions
  3. Setup the standby server (image) pretty much identical to the hot one, including access to the DB and index files having the same connection/location information
  4. Frequently copy the configuration files to the standby server or a backup location 

If the main server fails:

  1. If possible (i.e. you can still access the machine's file system), copy the configuration files over
  2. Shut down the failed server
  3. Bring up the standby server and reroute network to it

This way you avoid having to rely on online DB and index backups for the standby. If the indexes are corrupted, you would likely be able to restore them from a backup that you take every now and then. If you still need to reindex, you could live with some data not being indexed for a while and schedule a reindex run at a time that does not affect production.

You would only lose data if the DB gets corrupted, in which case you would need to fall back to a valid backup and lose data anyway. I think the DB is the least likely component to fail, especially if you use available HA solutions.

I don't think there is anything built in that detects when a user changes the server configuration files. However, these files are usually pretty stable after the initial setup and configuration is done. You could probably run a job that checks the modification time of these files and copies them over when it changes - indicating a change.
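
That modification-time job could be sketched like this. A sketch only, assuming default install paths: CONF_DIR, STANDBY_DIR, the marker-file approach and the file patterns are all assumptions, not part of any product tooling.

```shell
#!/bin/sh
# Sketch: mirror the configuration files to a standby location only when
# something changed since the last run. All paths and patterns are assumptions.
CONF_DIR="${CONF_DIR:-/opt/IBM/JazzTeamServer/server/conf}"
STANDBY_DIR="${STANDBY_DIR:-/tmp/rtc-conf-mirror}"
MARKER="$STANDBY_DIR/.last-sync"

mkdir -p "$STANDBY_DIR/mirror"
# on the first run the marker does not exist yet, so everything gets copied
[ -f "$MARKER" ] || touch -t 197001010000 "$MARKER"

# look for configuration files modified since the last sync
changed=$(find "$CONF_DIR" \( -name '*.properties' -o -name '*.xml' \) \
          -newer "$MARKER" 2>/dev/null)
if [ -n "$changed" ]; then
  cp -rp "$CONF_DIR/." "$STANDBY_DIR/mirror/"
  touch "$MARKER"
  echo "configuration changed; mirrored to $STANDBY_DIR/mirror"
else
  echo "no configuration change since last sync"
fi
```

Run it from cron on a short interval; since it copies only on change, the common case is a cheap `find` and no I/O against the standby.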
 

Robert Carter selected this answer as the correct answer
