r31 - 2017-10-26 - 19:04:56 - Main.prabhatgupta
<sticky><div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%">
---+!! Setting up a Change and Configuration Management application clustered environment version 6.0.4
%DKGRAY% Authors: Main.MichaelAfshar, Main.AlexBernstein, Main.ChrisAustin, Main.PrabhatGupta, Main.YanpingChen, Main.BreunReed<br> Build basis: Change and Configuration Management 6.0.4 %ENDCOLOR%</div></sticky>
<!-- Page contents top of page on right hand side in box -->
<sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky>
<sticky><div style="margin:15px;"></sticky>
You can set up a clustered environment to host a Jazz Team Server and multiple Change and Configuration Management (CCM) nodes. For instructions on setting up a clustered environment in version 6.0.5, see [[ChangeAndConfigurationManagementClusteredEnvironmentVersion605][Change and Configuration Management clustered environment version 6.0.5]].

---++ Approach to clustering

The CCM application must be installed on multiple servers connected through an MQTT broker, which enables synchronization across the nodes. A load balancer serves as the front-end URL: it accepts connections and distributes each request to one of the back-end CCM nodes. The host name of the load balancer is used as the public URL for the CCM application during setup.
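As a rough illustration of the synchronization mechanism (not the product's actual implementation), the broker fans each published change notification out to every subscribed node. A minimal in-memory stand-in, with invented class and topic names:

```python
# Toy in-memory stand-in for an MQTT broker, illustrating how cluster
# nodes could fan out change notifications to their peers. Illustration
# only: the product uses a real MQTT broker, and the topic name and
# classes here are invented for this sketch.

from collections import defaultdict

class Broker:
    """Toy publish/subscribe hub keyed by topic name."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(topic, message)

class Node:
    """Toy CCM node that records peer update notifications."""
    def __init__(self, node_id, broker, topic="ccm/cache-invalidate"):
        self.node_id = node_id
        self.received = []
        broker.subscribe(topic, self.on_message)

    def on_message(self, topic, message):
        # A real node would skip its own messages and refresh the
        # affected state here.
        self.received.append(message)

broker = Broker()
nodes = [Node(f"node{i}", broker) for i in range(3)]
broker.publish("ccm/cache-invalidate", "workitem:1234")
print([n.received for n in nodes])
```

The point of the sketch is only that every node sees every notification; delivery guarantees in the real deployment depend on the MQTT !QoS settings described below.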
* The following diagram illustrates the approach to clustering: <br /> <img src="%ATTACHURLPATH%/SingleJTS_cluster.png" alt="SingleJTS_cluster.png" width="772" height="462" />

To install the Change and Configuration Management application in a clustered environment, see the [[https://www.ibm.com/support/knowledgecenter/SSYMRC_6.0.4/com.ibm.jazz.install.doc/topics/roadmap_form.html][Interactive Installation Guide]].

---++ Modifying the MQTT advanced properties

You can set or modify the default MQTT-related service properties on the application's Advanced Properties page:

   1. Log in to the application (CCM) server as an administrator.
   1. Click *Application* > *Advanced Properties*.
   1. Search for <code>com.ibm.team.repository.service.mqtt.internal.MqttService</code>.
   1. Click *Edit* in the title bar and modify each entry. Click *Preview* in the title bar to exit the edit mode and save your changes.

The following table lists all MQTT properties:

<table width="700px" border="1" style="border-collapse:collapse;" cellspacing="3" cellpadding="5" bordercolor="#999">
<caption> <i>Table 1. MQTT properties</i> </caption>
<tr> <th width="230" bgcolor="#CCCCCC"><b>Property</b></th> <th width="470" bgcolor="#CCCCCC"><b>Description</b></th> </tr>
<tr> <td>MQTT Broker address</td> <td>The address of the MQTT (MQ Telemetry Transport) broker. If not provided, clustering support is disabled.</td> </tr>
<tr> <td>Session persistence location</td> <td>The folder where "in-flight" messages are stored when file-based persistence is enabled. The default is the mqtt sub-folder under the server folder.</td> </tr>
<tr> <td>Unique cluster name</td> <td>Uniquely names your cluster. This property must be set when more than one cluster communicates through the same MQTT message broker, or when there is more than one cluster-enabled application on the same cluster. 
If not provided, the port number of the MQTT message broker is used, but that may not be adequate.</td> </tr>
<tr> <td>Enable file-based persistence</td> <td>Causes published MQTT messages to be saved until each message is confirmed to be delivered. If a node goes down and is restarted, or a failed connection to the MQTT broker is restored, persisted messages with !QoS 1 and 2 are resent.</td> </tr>
<tr> <td>Enable in-memory MQTT message</td> <td>Message logging causes the last received message for each topic to be retained in memory for debug access.</td> </tr>
<tr> <td>MQTT message log size</td> <td>Number of messages to store in the MQTT message log.</td> </tr>
<tr> <td>Maximum callback processing</td> <td>Maximum number of concurrent background threads to use for processing incoming MQTT messages. Must be greater than the minimum.</td> </tr>
<tr> <td>Minimum callback processing</td> <td>Minimum number of concurrent background threads to use for processing incoming MQTT messages.</td> </tr>
<tr> <td>Queue size</td> <td>This queue holds background tasks submitted to process incoming MQTT messages until a callback processing thread becomes available.</td> </tr>
</table>

---++ Configure HAProxy server as load balancer for ccm cluster / JAS cluster / !IoT !MessageSight cluster

Installation and setup instructions for the HAProxy server can be found in the [[https://www.ibm.com/support/knowledgecenter/SSYMRC_6.0.4/com.ibm.jazz.install.doc/topics/roadmap_form.html][Interactive Installation Guide]]. In the SVT Single JTS topology, the HAProxy server connects to the ccm cluster and the JAS cluster in http mode, and connects to the !IoT !MessageSight cluster in tcp mode. The connection information needs to be configured in the HAProxy_Install_Dir/haproxy.cfg file. The following example shows how to define an http connection and a tcp connection in the haproxy.cfg file. 
<verbatim>
HAProxy_Install_Dir/haproxy.cfg

# connect JAS cluster in http mode
frontend jas-proxy
    bind *:80
    bind *:9643 ssl crt /etc/haproxy/ssl/proxy.pem no-sslv3
    log global
    option httplog
    mode http
    capture cookie SERVERID len 32
    redirect scheme https if !{ ssl_fc }
    maxconn 2000    # The expected number of the users of the system.
    default_backend jas

backend jas
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    fullconn 1000   # if not specified, HAProxy will set this to 10% of 'maxconn' specified on the frontend
    balance leastconn
    cookie SERVERID insert indirect nocache
    server jas1 [JAS server 1 URI]:9643 minconn 100 maxconn 500 ssl check cookie jas1 verify none
    server jas2 [JAS server 2 URI]:9643 minconn 100 maxconn 500 ssl check cookie jas2 verify none

# connect MessageSight cluster in tcp mode
listen MessageSight
    bind *:1883
    mode tcp
    balance leastconn
    option tcplog
    server MessageSight1 [MessageSight server 1 URI]:1883 check
    server MessageSight2 [MessageSight server 2 URI]:1883 check
</verbatim>

After restarting the HAProxy server, the HAProxy Stats pages for HAProxy Cluster1 and HAProxy Cluster2 in the SVT Single JTS topology should look like the following:

<img src="%ATTACHURLPATH%/HAProxy_cluster1.png" alt="HAProxy_cluster1.png" width="772" height="462" />
<img src="%ATTACHURLPATH%/HAProxy_cluster2.png" alt="HAProxy_cluster2.png" width="772" height="462" />

*Note:* The !MessageSight1 instance is online; !MessageSight2 is shown as offline because it serves as the non-primary standby message server in our HA configuration. On failure of the active appliance, the standby appliance activates itself by reconstructing the IBM !MessageSight server state from the data in the store.

---++ Configure high availability for HAProxy clusters

We use the Keepalived service and a virtual IP address, shared between the master and slave HAProxy nodes, to set up high availability. 
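Both the JAS backend and the !MessageSight listener in the haproxy.cfg example use <code>balance leastconn</code>, which routes each new connection to the server currently holding the fewest active connections. A minimal sketch of that selection policy (an illustration, not HAProxy's actual implementation; the server names mirror the example config):

```python
# Toy model of HAProxy's "balance leastconn" policy: each incoming
# connection is assigned to the server with the fewest active
# connections at that moment. Illustration only.

def pick_server(active):
    """Return the name of the server with the fewest active connections."""
    return min(active, key=active.get)

active = {"jas1": 0, "jas2": 0}
for _ in range(5):                 # five incoming connections
    chosen = pick_server(active)
    active[chosen] += 1

print(active)                      # connections end up spread evenly
```

In the real deployment the cookie-based session affinity (<code>cookie SERVERID insert indirect nocache</code>) overrides this choice for returning clients, so leastconn mainly governs where *new* sessions land.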
We use a script, master_backup.sh, to check the status of the HAProxy server nodes and to perform the switch between the master and slave nodes. Examples of Keepalived_Install_Dir/keepalived.conf and Keepalived_Install_Dir/master_backup.sh follow:

<verbatim>
Keepalived_Install_Dir/keepalived.conf

! Configuration File for keepalived
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER              # BACKUP on the stand-by node
    interface eno192          # To be replaced by real interface number
    virtual_router_id 11
    priority 101              # 100 on the stand-by node
    virtual_ipaddress {
        10.10.10.20/24        # To be replaced by real cluster ViP
    }
    notify_master "/etc/keepalived/master_backup.sh MASTER"
    notify_backup "/etc/keepalived/master_backup.sh BACKUP"
    notify_fault "/etc/keepalived/master_backup.sh FAULT"
    track_script {
        chk_haproxy
    }
}

Keepalived_Install_Dir/master_backup.sh

#! /bin/bash
STATE=$1
NOW=$(date)
KEEPALIVED="/etc/keepalived"

case $STATE in
    "MASTER")
        touch $KEEPALIVED/MASTER
        echo "$NOW Becoming MASTER" >> $KEEPALIVED/COUNTER
        /bin/systemctl start haproxy
        exit 0
        ;;
    "BACKUP")
        echo "$NOW Becoming BACKUP" >> $KEEPALIVED/COUNTER
        /bin/systemctl stop haproxy || killall -9 haproxy
        exit 0
        ;;
    "FAULT")
        echo "$NOW Becoming FAULT" >> $KEEPALIVED/COUNTER
        /bin/systemctl stop haproxy || killall -9 haproxy
        exit 0
        ;;
    *)
        echo "unknown state" >> $KEEPALIVED/COUNTER
        echo "$NOW Becoming UNKNOWN" >> $KEEPALIVED/COUNTER
        exit 1
        ;;
esac
</verbatim>

---++ Configure high availability for !IoT !MessageSight cluster

Before configuring high availability for the !IoT !MessageSight servers, add two additional vNICs to each !IoT !MessageSight node: one for the discovery interface and one for the replication interface. The discovery IP should be in the same subnet as the host's IP, and the replication IP should be in a different subnet. 
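This subnet rule can be checked with Python's standard ipaddress module. The addresses below are the sample vNIC values used in this topology; the /24 prefix length is an assumption for illustration:

```python
# Check the vNIC addressing rule: the discovery IP must share the
# host's subnet, while the replication IP must be in a different one.
# The /24 host subnet is an assumption made for this sketch.

import ipaddress

host_subnet = ipaddress.ip_network("10.10.40.0/24")

host_ip        = ipaddress.ip_address("10.10.40.1")    # MessageSight1
discovery_ip   = ipaddress.ip_address("10.10.40.20")   # MessageSight1-discover
replication_ip = ipaddress.ip_address("10.10.90.48")   # MessageSight1-replicate

print(discovery_ip in host_subnet)     # discovery shares the host subnet
print(replication_ip in host_subnet)   # replication does not
```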
For example, we can have additional vNICs like this:

<table width="500px" border="1" style="border-collapse:collapse;" cellspacing="3" cellpadding="5" bordercolor="#999">
<caption> <i>Table 2. vNICs</i> </caption>
<tr> <th width="180" bgcolor="#CCCCCC"><b>Address</b></th> <th width="320" bgcolor="#CCCCCC"><b>Hostname</b></th> </tr>
<tr> <td>10.10.40.1</td> <td>MessageSight1</td> </tr>
<tr> <td>10.10.40.20</td> <td>MessageSight1-discover</td> </tr>
<tr> <td>10.10.90.48</td> <td>MessageSight1-replicate</td> </tr>
<tr> <td>10.10.40.2</td> <td>MessageSight2</td> </tr>
<tr> <td>10.10.40.30</td> <td>MessageSight2-discover</td> </tr>
<tr> <td>10.10.90.49</td> <td>MessageSight2-replicate</td> </tr>
</table>

The steps to set up the !IoT !MessageSight cluster are as follows:

   1. For each !IoT !MessageSight node, install the non-Dockerized version of !IoT !MessageSight and configure it according to the [[https://www.ibm.com/support/knowledgecenter/SSYMRC_6.0.4/com.ibm.jazz.install.doc/topics/roadmap_form.html][Interactive Installation Guide]] by using the !IoT !MessageSight Web UI.
   1. Disable all existing endpoints.
   1. In the !IoT !MessageSight Web UI, do the following on the !MessageSight1 node (the primary node):
      a. Select *Server* > *High Availability*, then click *Edit* in the Configuration section.
      a. Configure the server as follows, then *Save*. <img src="%ATTACHURLPATH%/MessageSight1_1.png" alt="MessageSight1_1.png" width="772" height="400" /> <img src="%ATTACHURLPATH%/MessageSight1_2.png" alt="MessageSight1_2.png" width="772" height="400" /> <img src="%ATTACHURLPATH%/MessageSight1_3.png" alt="MessageSight1_3.png" width="772" height="162" />
      a. Switch to !MessageSight2 (the standby node).
      a. Select *Server* > *High Availability*, then click *Edit* in the Configuration section.
      a. Configure the server as follows, then *Save*. 
<img src="%ATTACHURLPATH%/MessageSight2_1.png" alt="MessageSight2_1.png" width="772" height="400" /> <img src="%ATTACHURLPATH%/MessageSight2_2.png" alt="MessageSight2_2.png" width="772" height="400" /> <img src="%ATTACHURLPATH%/MessageSight2_3.png" alt="MessageSight2_3.png" width="772" height="162" />
      a. Switch to !MessageSight1 and restart the server in *Clean store* mode: select *Server* > *Server Control*, then click *Clean store* in the !IoT !MessageSight Server section.
      a. Switch back to !MessageSight2 and restart the server in *Clean store* mode. Make sure there is no error in node synchronization.
      a. Switch to !MessageSight1, enable the endpoints, then restart the server in *Clean store* mode again.

For more information about implementing high availability for !IoT !MessageSight, see the following topics in the IBM !IoT !MessageSight documentation:

   * [[http://www.ibm.com/support/knowledgecenter/en/SSWMAJ_2.0.0/com.ibm.ism.doc/Overview/ov40000.html][High Availability overview]]
   * [[http://www.ibm.com/support/knowledgecenter/en/SSWMAJ_2.0.0/com.ibm.ism.doc/Administering/ad00400_.html][Configuring your system for high availability]]
   * [[http://www.ibm.com/support/knowledgecenter/en/SSWMAJ_2.0.0/com.ibm.ism.doc/Administering/ad00401_.html][Configuring MessageSight by using the Web UI]]
   * [[http://www.ibm.com/support/knowledgecenter/en/SSWMAJ_2.0.0/com.ibm.ism.doc/Administering/ad00992_.html][Configuring MessageSight by using REST Administration APIs]]

To set up communication between the ccm cluster and the !IoT !MessageSight cluster, add the following lines to the CCM_Install_Dir/server/server.startup file:

<verbatim>
CCM_Install_Dir/server/server.startup

JAVA_OPTS="$JAVA_OPTS -Dcom.ibm.team.repository.cluster.nodeId="To be replaced by a unique ccm node id""
JAVA_OPTS="$JAVA_OPTS -Dcom.ibm.team.repository.service.internal.db.allowConcurrentAccess=true"
JAVA_OPTS="$JAVA_OPTS -Dretry.count=0"
JAVA_OPTS="$JAVA_OPTS -Dretry.wait=10"
JAVA_OPTS="$JAVA_OPTS -Dactivation.code.ClusterSupport=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
</verbatim>

Add the following line to the CCM_Install_Dir/server/conf/ccm/teamserver.properties file:

<verbatim>
CCM_Install_Dir/server/conf/ccmwork/teamserver.properties

com.ibm.team.repository.mqtt.broker.address=tcp\://[HAProxy cluster 2 virtual IP]\:1883
</verbatim>

---++ Configure IHS as Reverse Proxy

*Note:* This section describes configuring IHS only as a reverse proxy, not for load balancing between clustered nodes. You can also configure IHS to load balance between clustered nodes in addition to acting as a reverse proxy, which is described in the next section.

   1. [[https://www.ibm.com/support/knowledgecenter/SSYMRC_6.0.4/com.ibm.jazz.install.doc/topics/t_config_reverse_proxy_ihs.html][Set up an IHS proxy server]] on a [[http://pic.dhe.ibm.com/infocenter/wasinfo/v8r0/topic/com.ibm.websphere.base.doc/info/aes/ae/tihs_remotesetup.html][remote web server]].
   1. [[https://jazz.net/library/article/745][Configure CLM plugins]] for each WAS-based CLM application.
   1. Configure IHS for all Liberty-based applications, and for both HAProxy clusters.
      a. If the non-WAS-based applications require different ports, define the ports in the HTTPServer_Install_Dir/conf/httpd.conf file and in !WebSphere_Plugins_Install_Dir/config/webserver1/plugin-cfg.xml. For example, to define port 9643, add the following to the httpd.conf and plugin-cfg.xml files:

<verbatim>
HTTPServer_Install_Dir/conf/httpd.conf

Listen 0.0.0.0:9643
<VirtualHost *:9643>
    SSLEnable
</VirtualHost>
KeyFile HTTPServer_Install_Dir/ihskeys.kdb
SSLStashFile HTTPServer_Install_Dir/ihskeys.sth
SSLDisable

WebSphere_Plugins_Install_Dir/Plugins/config/webserver1/plugin-cfg.xml

<VirtualHostGroup Name="default_host">
    <VirtualHost Name="*:9443"/>
    <VirtualHost Name="*:9444"/>
    <VirtualHost Name="*:9643"/>
</VirtualHostGroup>
</verbatim>

      a. In the plugin-cfg.xml file, add access information for each of the non-WAS-based applications/clusters. 
For a cluster, only one entry is required. In the SVT Single JTS topology, we need entries for JTS, HAProxy cluster1, and HAProxy cluster2. An example follows:

<verbatim>
WebSphere_Plugins_Install_Dir/Plugins/config/webserver1/plugin-cfg.xml

<ServerCluster Name="JTS" ServerIOTimeoutRetry="-1" CloneSeparatorChange="false" LoadBalance="Round Robin" GetDWLMTable="false" PostBufferSize="0" IgnoreaproxyAffinityRequests="false" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server Name="jts" ConnectTimeout="0" ExtendedHandshake="false" ServerIOTimeout="900" LoadBalanceWeight="1" MaxConnections="-1" WaitForContinue="false">
        <Transport Protocol="https" Port="9443" Hostname="To be replaced by the JTS URL">
            <Property name="keyring" value="HTTPServer_Install_Dir/ihskeys.kdb"/>
            <Property name="stashfile" value="HTTPServer_Install_Dir/ihskeys.sth"/>
        </Transport>
    </Server>
</ServerCluster>
<UriGroup Name="jts_URIs">
    <Uri Name="/jts/*" AffinityURLIdentifier="jsessionid" AffinityCookie="JSESSIONID"/>
    <Uri Name="/clmhelp/*" AffinityURLIdentifier="jsessionid" AffinityCookie="JSESSIONID"/>
</UriGroup>
<Route VirtualHostGroup="default_host" UriGroup="jts_URIs" ServerCluster="JTS"/>

<ServerCluster Name="ccm_cluster_server1" ServerIOTimeoutRetry="-1" CloneSeparatorChange="false" LoadBalance="Round Robin" GetDWLMTable="false" PostBufferSize="0" IgnoreaproxyAffinityRequests="false" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server Name="ccm_cluster_server1" ConnectTimeout="0" ExtendedHandshake="false" ServerIOTimeout="900" LoadBalanceWeight="1" MaxConnections="-1" WaitForContinue="false">
        <Transport Protocol="https" Port="9443" Hostname="To be replaced by the virtual IP defined for HAProxy cluster1">
            <Property name="keyring" value="HTTPServer_Install_Dir/ihskeys.kdb"/>
            <Property name="stashfile" value="HTTPServer_Install_Dir/ihskeys.sth"/>
        </Transport>
    </Server>
</ServerCluster>
<UriGroup Name="ccm_server_Cluster_URIs">
    <Uri Name="/ccmwork/*" AffinityURLIdentifier="jsessionid" AffinityCookie="JSESSIONID"/>
</UriGroup>
<Route VirtualHostGroup="default_host" UriGroup="ccm_server_Cluster_URIs" ServerCluster="ccm_cluster_server1"/>

<ServerCluster Name="haproxy_server1_status" ServerIOTimeoutRetry="-1" CloneSeparatorChange="false" LoadBalance="Round Robin" GetDWLMTable="false" PostBufferSize="0" IgnoreaproxyAffinityRequests="false" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server Name="haproxy_server1" ConnectTimeout="0" ExtendedHandshake="false" ServerIOTimeout="900" LoadBalanceWeight="1" MaxConnections="-1" WaitForContinue="false">
        <Transport Protocol="https" Port="9444" Hostname="To be replaced by the virtual IP defined for HAProxy cluster1">
            <Property name="keyring" value="HTTPServer_Install_Dir/ihskeys.kdb"/>
            <Property name="stashfile" value="HTTPServer_Install_Dir/ihskeys.sth"/>
        </Transport>
    </Server>
</ServerCluster>
<UriGroup Name="haproxy_server1_Cluster_URIs">
    <Uri Name="/haproxy1_stats" AffinityURLIdentifier="jsessionid" AffinityCookie="JSESSIONID"/>
</UriGroup>
<Route VirtualHostGroup="default_host" UriGroup="haproxy_server1_Cluster_URIs" ServerCluster="haproxy_server1_status"/>

<ServerCluster Name="jas_cluster_server1" ServerIOTimeoutRetry="-1" CloneSeparatorChange="false" LoadBalance="Round Robin" GetDWLMTable="false" PostBufferSize="0" IgnoreaproxyAffinityRequests="false" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server Name="jas_server1" ConnectTimeout="0" ExtendedHandshake="false" ServerIOTimeout="900" LoadBalanceWeight="1" MaxConnections="-1" WaitForContinue="false">
        <Transport Protocol="https" Port="9643" Hostname="To be replaced by the virtual IP defined for HAProxy cluster2">
            <Property name="keyring" value="HTTPServer_Install_Dir/ihskeys.kdb"/>
            <Property name="stashfile" value="HTTPServer_Install_Dir/ihskeys.sth"/>
        </Transport>
    </Server>
</ServerCluster>
<UriGroup Name="jas_cluster_server1_URIs">
    <Uri Name="/oidc/*" AffinityURLIdentifier="jsessionid" AffinityCookie="JSESSIONID"/>
    <Uri Name="/jazzop/*" AffinityURLIdentifier="jsessionid" AffinityCookie="JSESSIONID"/>
</UriGroup>
<Route VirtualHostGroup="default_host" UriGroup="jas_cluster_server1_URIs" ServerCluster="jas_cluster_server1"/>

<ServerCluster Name="haproxy_server2_stats" ServerIOTimeoutRetry="-1" CloneSeparatorChange="false" LoadBalance="Round Robin" GetDWLMTable="false" PostBufferSize="0" IgnoreaproxyAffinityRequests="false" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server Name="haproxy_server2_stats" ConnectTimeout="0" ExtendedHandshake="false" ServerIOTimeout="900" LoadBalanceWeight="1" MaxConnections="-1" WaitForContinue="false">
        <Transport Protocol="https" Port="9444" Hostname="To be replaced by the virtual IP defined for HAProxy cluster2">
            <Property name="keyring" value="HTTPServer_Install_Dir/ihskeys.kdb"/>
            <Property name="stashfile" value="HTTPServer_Install_Dir/ihskeys.sth"/>
        </Transport>
    </Server>
</ServerCluster>
<UriGroup Name="haproxy_server2_stats_Cluster_URIs">
    <Uri Name="/haproxy2_stats" AffinityURLIdentifier="jsessionid" AffinityCookie="JSESSIONID"/>
</UriGroup>
<Route VirtualHostGroup="default_host" UriGroup="haproxy_server2_stats_Cluster_URIs" ServerCluster="haproxy_server2_stats"/>
</verbatim>

      a. Import the certificates for each of the non-WAS-based applications/clusters/HAProxy into the IHS keystore.
      a. Restart IHS.

*Accepting all content for requests through the proxy*

It is important to make sure that IHS is enabled to accept the content (payload) of all HTTP requests (GET, PUT, POST, and so on). If this is not enabled, you might see HTTP 400 Bad Request errors in the proxy server, because some HTTP GET requests are sent with a payload and are rejected by the proxy. Make sure your plugin-cfg.xml sets *AcceptAllContent="true"* to avoid such errors.

---++ Troubleshooting the cluster

---+++++!! 
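A first check when nodes fail to cluster is whether each node can reach the MQTT broker address configured in teamserver.properties. As a hedged sketch, the escaped property value shown earlier can be parsed into a host and port for a connectivity test; the parsing helper and the sample VIP below are illustrative, not part of the product:

```python
# Parse the escaped MQTT broker address from teamserver.properties and
# extract the host and port so they can be probed (for example with a
# TCP connection check). The helper is hypothetical; the sample address
# is the example cluster VIP used earlier on this page.

from urllib.parse import urlparse

def broker_endpoint(property_line):
    """Return (host, port) from a com.ibm.team.repository.mqtt.broker.address line."""
    _, _, value = property_line.partition("=")
    # Java .properties files escape ':' as '\:'; undo that first.
    url = urlparse(value.replace("\\:", ":").strip())
    return url.hostname, url.port

line = "com.ibm.team.repository.mqtt.broker.address=tcp\\://10.10.10.20\\:1883"
print(broker_endpoint(line))   # ('10.10.10.20', 1883)
```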
Related topics: [[MigrateFromTWAStoLiberty][Migrate from Traditional !WebSphere to !WebSphere Liberty]]

---+++++!! External links:

   * [[https://www.ibm.com/support/knowledgecenter/SSYMRC_6.0.4/com.ibm.jazz.install.doc/topics/roadmap_form.html][Interactive Installation Guide]]

---+++++!! Additional contributors: Main.TWikiUser

<sticky></div></sticky>
Copyright © by IBM and non-IBM contributing authors. All material on this collaboration platform is the property of the contributing authors.