Jazz Forum: Welcome to the Jazz Community Forum. Connect and collaborate with IBM Engineering experts and users.

Restore an RTC repository onto a backup system

We're configuring RTCz V3.0.1 on several z/OS systems. The production RTC system has a different real hostname from the backup RTC system that we keep for disaster recovery, and the backup system is not usually up. Our design is that unloaded repository data from the production system will be loaded into the RTC database of the backup system in case of disaster. Our concern is the public URI, which has to be maintained and cannot be changed: the loaded RTC repository data will carry the public URI of the production system, but the backup system has a different real hostname. Could someone suggest how this issue can be solved? I'm not aware of a "virtual hostname" concept, but is that a solution or a technique for the above issue? Please advise.

Reference: https://jazz.net/forums/viewtopic.php?p=64222#64222

0 votes



9 answers

Permanent link
Could someone review the following recovery steps to confirm that the backup RTC server will work in the above environment? To clarify the environment, we assume the following conditions.
<Assumption>
1. The production RTC system and the backup one have exactly the same system parameters for z/OS, DB2 for z/OS, and WAS for z/OS, plus RTC.
2. The only z/OS system differences are the DASD volume serial numbers and the hostname.
3. The backup system is not usually up and is started only when the production system goes down.
4. The repository backup is taken weekly and is restored onto the backup system in case of disaster.

<Recovery>
1. The repository backup is taken weekly with the supplied JCL (RTCz V3.0.1) SBLZSAMP(BLZDB2UN).
2. If the production system suffers a disaster, the backup system is IPLed. The repository backup is restored into newly created RTC repository tables with the supplied JCL SBLZSAMP(BLZDB2LD).
3. The restored repository carries the public URIs of the production system, so at this point RTC users cannot log in to the restored repository.
4. To resolve the hostname, the z/OS TCPIP.DATA configuration file is updated so that the production hostname (z1.xxxxxx) resolves to the same IP address as the backup system.

example:
HOSTNAME e1
DOMAINORIGIN e1.xxxxxx (backup server hostname)
NSINTERADDR e1-ip-address (backup server IP address)
HOSTNAME z1
DOMAINORIGIN z1.xxxxxx (failed production hostname)
NSINTERADDR e1-ip-address (backup server IP address)

5. Start TCP/IP and WAS.
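The intent of step 4 can be sketched offline (the IP address below is a placeholder, and z1/e1 stand for the production and backup hostnames as above): after failover, both names must resolve to the backup system's address, so clients keeping the unchanged public URI still reach the backup server.

```python
# Offline sketch of the step-4 name-resolution goal; the IP address is a
# placeholder, and z1/e1 stand for the production and backup hostnames.
BACKUP_IP = "10.1.2.3"

# What TCPIP.DATA (or IPNODES/DNS) should yield after failover:
resolution_table = {
    "e1.xxxxxx": BACKUP_IP,   # backup server's real hostname
    "z1.xxxxxx": BACKUP_IP,   # production hostname, redirected to the backup
}

def resolve(hostname: str) -> str:
    """Look up a hostname in the simulated resolution table."""
    return resolution_table[hostname]

# Clients using the unchanged public URI (z1.xxxxxx) reach the backup system:
assert resolve("z1.xxxxxx") == resolve("e1.xxxxxx") == BACKUP_IP
```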

0 votes


Permanent link

Given just the information posted here, yes, we believe this general approach is good. Of course, the process should be tested in your environment. I think the key is that the DNS is updated for the clients and that the z/OS machine knows the production hostname by the backup IP address.

We actually have a z/OS test server that is known by multiple hostnames in our DNS. The clients and the z/OS machine both use that DNS. In our case we did not have to add additional HOSTNAMEs to the TCPIP.DATA configuration, so I cannot be 100% certain that your syntax is correct.

In our case, WAS was originally configured for HOSTA, but we have no problem using the public URI of HOSTB, which I believe is similar to your case.

A few other considerations:
a) Are you also backing up and restoring the RTC configuration and work directories?
b) I assume you are talking about RTC 3.0.1, since you have the sample LOAD and UNLOAD jobs. If you have implemented the data warehouse, you need the fix in https://jazz.net/jazz/resource/itemName/com.ibm.team.workitem.WorkItem/169115 to use the UNLOAD and LOAD process for those. This fix will be delivered in the 3.0.1.1 fix pack. It is only an update to the sample JCL and is also attached to that work item.

Please post if you have additional questions.
Thank you - Bruce Green

0 votes


Permanent link
Reply to your points:

A few other considerations:
a) Are you also backing up and restoring the RTC configuration and work directories?
+>> The teamserver.properties files are the same on both systems; the only difference is the hostname in them. I think the one on the backup server should be updated to the z1 hostname (the failed one), and then the jts/setup URL should be run.
+>> We use the configuration files for the RBA process, JMON, and the ISPF client. These are the same on both systems.
+>> What do you mean by 'work directories'? Do you mean RTC work directories? Could you specifically tell us what kinds of work directories exist? We need to back up the users' work directories.

b) I assume you are talking about RTC 3.0.1, since you have the sample LOAD and UNLOAD jobs. If you have implemented the data warehouse, you need the fix in https://jazz.net/jazz/resource/itemName/com.ibm.team.workitem.WorkItem/169115 to use the UNLOAD and LOAD process for those. This fix will be delivered in the 3.0.1.1 fix pack. It is only an update to the sample JCL and is also attached to that work item.
+>> Yes, we use V3.0.1 and do not use the data warehouse. Can we use the V3.0.1 JCL in our case?


0 votes


Permanent link
I still do not think this will work. The issue is not in "finding" the app server and the Jazz application; the issue is with finding the REST data. This data is referenced by its public URI, which must be maintained. So even if you map the different hostnames to the same IP address, the data in the repository will still reference other data in the repository with the original URI (something like z1.xxxxxx/datalabel), while any new objects or references will be created with a different URI (something like e1.xxxxxx/datalabel).

Now, if you use a reverse proxy to make it appear that you are addressing the original z1 hostname, and the repository keeps its default base URI of z1.xxxxxx, then you should be OK.
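The distinction can be made concrete with a small offline sketch (the URIs are illustrative, not taken from a real repository): artifacts embed absolute links under the original public URI, so redirecting DNS keeps them resolvable only as long as the server also keeps advertising z1.xxxxxx as its base URI.

```python
from urllib.parse import urlparse

# Illustrative stored data: repository artifacts reference each other by
# absolute public URI, not by hostname-independent paths.
stored_links = [
    "https://z1.xxxxxx:9443/ccm/resource/itemOid/abc123",
    "https://z1.xxxxxx:9443/ccm/oslc/workitems/42",
]

def consistent_with(public_uri: str) -> bool:
    """True if every stored link lives under the advertised public URI host."""
    host = urlparse(public_uri).hostname
    return all(urlparse(link).hostname == host for link in stored_links)

# Keeping the base URI as z1.xxxxxx (DNS-redirected or reverse-proxied): OK.
assert consistent_with("https://z1.xxxxxx:9443/ccm")

# Re-basing the server on the backup hostname would orphan every stored link.
assert not consistent_with("https://e1.xxxxxx:9443/ccm")
```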

0 votes


Permanent link
Reply to your points:

A few other considerations:
a) Are you also backing up and restoring the RTC configuration and work directories?
+>> The teamserver.properties files are the same on both systems; the only difference is the hostname in them. I think the one on the backup server should be updated to the z1 hostname (the failed one), and then the jts/setup URL should be run.
+>> We use the configuration files for the RBA process, JMON, and the ISPF client. These are the same on both systems.
+>> What do you mean by 'work directories'? Do you mean RTC work directories? Could you specifically tell us what kinds of work directories exist? We need to back up the users' work directories.
When you ran BLZCPJTS, BLZCPCCM, and BLZCPBTK to create your configurations, you created what we call, for RTC on z/OS, the "configuration directory" and the "working directories". These default to /etc/jazz for configuration and /u/jazz for working files. The working directories contain work item indices, Lucene indices, and other important files that should also be copied over as part of your DR. If you take a close look at the existing teamserver.properties, you will see a number of references to the working directory, which should give you a clearer understanding of some of these files and directories.
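A hypothetical sketch of including these directories in the weekly backup cycle alongside the DB2 unload (the /etc/jazz and /u/jazz defaults are from the description above; the demo below uses a temporary directory so it can run anywhere):

```python
# Hypothetical sketch: the weekly DB2 unload covers the repository tables,
# but the RTC configuration directory (/etc/jazz by default) and working
# directories (/u/jazz by default, holding work-item and Lucene indices)
# should be archived on the same cycle. All paths here are illustrative.
import datetime
import pathlib
import tarfile
import tempfile

def archive_dir(src: str, dest_dir: str, label: str) -> pathlib.Path:
    """Archive src as dest_dir/rtc-<label>-YYYYMMDD.tar.gz and return the path."""
    stamp = datetime.date.today().strftime("%Y%m%d")
    dest = pathlib.Path(dest_dir) / f"rtc-{label}-{stamp}.tar.gz"
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(src, arcname=label)
    return dest

# Stand-in for /etc/jazz on a real system:
conf_dir = pathlib.Path(tempfile.mkdtemp())
(conf_dir / "teamserver.properties").write_text("com.ibm.team.repository.db=...\n")

backup = archive_dir(str(conf_dir), tempfile.mkdtemp(), "conf")
with tarfile.open(backup) as tar:
    members = tar.getnames()
```

On a real system the same call would be made against /etc/jazz and /u/jazz, with the archives written to the shared DASD used for the repository unload files.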


b) I assume you are talking about RTC 3.0.1, since you have the sample LOAD and UNLOAD jobs. If you have implemented the data warehouse, you need the fix in https://jazz.net/jazz/resource/itemName/com.ibm.team.workitem.WorkItem/169115 to use the UNLOAD and LOAD process for those. This fix will be delivered in the 3.0.1.1 fix pack. It is only an update to the sample JCL and is also attached to that work item.
+>> Yes, we use V3.0.1 and do not use the data warehouse. Can we use the V3.0.1 JCL in our case?
Yes.

Please post if you have additional questions.
Thank you - Bruce Green

0 votes


Permanent link

I think this bit - "4. To resolve the hostname, the z/OS TCPIP.DATA configuration file needs to be updated to assign the IP address against the production hostname (z1.xxxxxx) in order to resolve the hostnames by the same IP address of the backup system." - is intended to resolve your concern. That is, the hostname will in fact be maintained, just by a different underlying IP address.

0 votes


Permanent link
> I think this bit - "4. To resolve the hostname, the z/OS TCPIP.DATA configuration file needs to be updated to assign the IP address against the production hostname (z1.xxxxxx) in order to resolve the hostnames by the same IP address of the backup system." - is intended to resolve your concern. That is, the hostname will in fact be maintained, just by a different underlying IP address.

Bruce,

Thank you for following up. We'll test the above environment. First we will use our internal test machine.
- Disable the DNS reference by commenting out all NSINTERADDR entries in TCPIP.DATA.
- Add a z1.xxxxxx hostname entry and assign the e1 IP address in TCPIP.DATA.

Could you suggest what kind of scenario/operation should be exercised to verify that our disaster-recovery RTC server works OK? (In addition to our normal operational scenario.)

0 votes


Permanent link
We tested our disaster recovery scenario, and we believe the result was successful.
The following is our scenario.

Assumption:
1. We have the production RTC system, z1.
2. We have another RTC system, a1, which has the same z/OS, DB2 for z/OS, and WAS for z/OS configuration, plus RTC. The hostnames and VOLSERs used are different, though.
3. In this test, we back up the a1 repository, restore it onto the z1 system, and verify it.

Operation:
1. Set up the 'hosts' file on a client PC like this:
z1_ip_address z1.hostname
z1_ip_address a1.hostname

2. Set up the SYS1.TCPIP.Z1.IPNODES file and activated it. (The original plan was to use TCPIP.DATA, but our TCP/IP people set up an IPNODES file instead.)
z1_ip_address z1.hostname
z1_ip_address a1.hostname

3. Backed up the RTC repository of a1 using the *.SBLZSAMP(BLZDB2UN) job. (We split the supplied JCL and made it possible to specify the volume serial number.) The backed-up PS files are located on shared DASD.
4. Restored the a1 RTC repository using the *.SBLZSAMP(BLZDB2LD) job. (We did not recreate the jts/ccm tables on z1.)

5. Copied the jts/ccm teamserver.properties of a1 onto z1.

6. Accessed jts/admin via a browser and verified that login works and the a1 user registrations can be seen. Moving to the ccm project area also looks OK.
7. Accessed the project area via the RTC client.

Result:
Accessing the RTC server (jts/ccm) via a browser using the a1 public URI works. RTC client access is also successful with the z1 connection information.

Question raised:
The original intent was satisfied, but one question arose after this test. What happens if operation #2 (the IPNODES setting) is not activated? In an earlier post, Daniel Toczala mentioned REST data and suggested that our scenario might not work. Does this happen? What is REST data, and how can we test it explicitly?
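One explicit way to check the REST data could be sketched like this (offline; the canned RDF response below is a made-up stand-in for a real fetch from the restored server, using the hostnames from the test above): fetch a few resources and verify that every absolute link they embed still uses the preserved public URI host.

```python
# Hypothetical REST-data check: the canned response stands in for a real
# HTTP fetch from the restored server. On a real system, a few resources
# would be fetched and scanned the same way.
import re
from urllib.parse import urlparse

PUBLIC_HOST = "a1.hostname"   # the public URI host that must be preserved

canned_response = """
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about="https://a1.hostname:9443/ccm/oslc/workitems/1">
    <rdf:value rdf:resource="https://a1.hostname:9443/ccm/oslc/workitems/2"/>
  </rdf:Description>
</rdf:RDF>
"""

# Extract every absolute https link embedded in the representation:
embedded = re.findall(r'https://[^"\s>]+', canned_response)

# All embedded links must live under the preserved public URI host:
assert embedded and all(urlparse(u).hostname == PUBLIC_HOST for u in embedded)
```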

0 votes


Permanent link
Hello Bruce et al.,

We only backed up 'teamserver.properties' and restored it when loading the repository. But we are concerned about the contents of the /u/jazz/ccm/indices directory. Do we need to back up this directory and restore it for disaster recovery? For some reason, we have noticed that the build engines are not listed under the /ProjectArea/Build/BuildEngine folder. We're not sure when this happened, but we are worried that it may be a side effect. Any information?

Moto

0 votes

Question details

Question asked: Sep 12 '11, 9:19 p.m.

Question was seen: 10,809 times

Last updated: Sep 12 '11, 9:19 p.m.
