Bridge Two Infinispan Clustered Caches with RELAY2
External
- Red Hat JDG Manual, Cross-Datacenter Replication: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Grid/6.6/html/Administration_and_Configuration_Guide/chap-Set_Up_Cross-Datacenter_Replication.html
Internal
Relevance
- JDG 6.6
Overview
For more details on the concepts behind cross-site replication, see the Red Hat JDG manual page listed in the External section above.
Procedure
The following procedure assumes a TCP-based clustering configuration, as described in WildFly Clustering without Multicast.
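For reference, a minimal sketch of what such a TCP-based main stack might look like. The stack name "tcp" and the socket bindings match the ones referenced later in this procedure; the ${jboss.cluster.tcp.initial_hosts} property name is an assumption, and the linked article is authoritative:

<stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="TCPPING">
        <!-- assumed property name; lists the members of the local cluster -->
        <property name="initial_hosts">${jboss.cluster.tcp.initial_hosts}</property>
        <property name="num_initial_members">2</property>
        <property name="port_range">0</property>
        <property name="timeout">2000</property>
    </protocol>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    <protocol type="FD_ALL"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2">
        <property name="use_mcast_xmit">false</property>
    </protocol>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
</stack>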
1. JGroups Configuration
1.1 Declare the Bridging Cluster JGroups Stack
The bridging stack is usually TCP-based, and a good name for it is "tcp-relay". An example is available below.
Note that the additional stack configuration must be present on all local cluster nodes, because any node may become site master and manage cross-site replication.
<stack name="tcp-relay"> <transport type="TCP" socket-binding="jgroups-tcp-relay"/> <protocol type="TCPPING"> <property name="initial_hosts">${jboss.cluster.tcp.relay.initial_hosts}</property> <property name="num_initial_members">2</property> <property name="port_range">0</property> <property name="timeout">2000</property> </protocol> <protocol type="MERGE3"/> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd-relay"/> <protocol type="FD_ALL"/> <protocol type="VERIFY_SUSPECT"/> <protocol type="pbcast.NAKACK2"> <property name="use_mcast_xmit"> false </property> </protocol> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="MFC"/> <protocol type="FRAG2"/> </stack>
Note that the configuration of the relay JGroups stack must be identical on all nodes across all clusters that will be relaying to each other.
The ${jboss.cluster.tcp.relay.initial_hosts} system property must contain host[port] pairs: first this host with its TCP relay transport port, then every host, with its TCP relay transport port, belonging to the sites we cross-replicate with. All "remote" nodes must be listed because any remote node can become site master. The property can be declared in the <system-properties> section, as follows:
<system-properties> <property name="jboss.cluster.tcp.relay.initial_hosts" value="this-node-address[tcp-relay-port], the-other-site-node1[the-other-site-node1-tcp-relay-port], the-other-site-node2[the-other-site-node2-tcp-relay-port], ..."/> </system-properties>
The stack definition refers to two new socket bindings ("jgroups-tcp-relay" and "jgroups-tcp-fd-relay"), which must be different from the main JGroups stack's bindings:
<socket-binding-group name="standard-sockets" ...> ... <socket-binding name="jgroups-tcp" port="7600"/> <socket-binding name="jgroups-tcp-fd" port="57600"/> <socket-binding name="jgroups-tcp-relay" port="7610"/> <socket-binding name="jgroups-tcp-fd-relay" port="57610"/> ... </socket-binding-group>
1.2 Declare the RELAY2 Protocol
Declare the RELAY2 protocol, referring to the bridging cluster JGroups stack. The RELAY2 protocol is also where the name of the local cluster (site) is declared. It must be placed at the top of the main stack:
<stack/> ... <protocol type="FRAG2"/> <relay site="blue"> <remote-site name="red" stack="tcp-relay" cluster="blue-red-bridge"/> <property name="relay_multicasts">false</property> </relay> </stack>
where the "site" is the name of the local site - the local cluster that will be relaying messages via its coordinator to other "sites".
One or more <remote-site>s can be declared. A <remote-site> declaration contains:
- the site name - the name of the remote site (the other local cluster) to forward messages to.
- the stack - the name of the bridging stack, which must be declared in the same jgroups subsystem.
- the cluster name - the name of the bridging JGroups group. All nodes that bridge over the same bridging cluster must declare the same name here: the cluster name (in this case "blue-red-bridge") must be identical across all configurations.
The symmetrical declaration for the "red" site is:
<stack/> ... <protocol type="FRAG2"/> <relay site="red"> <remote-site name="blue" stack="tcp-relay" cluster="blue-red-bridge"/> <property name="relay_multicasts">false</property> </relay> </stack>
2. Infinispan Configuration
2.1 Declare The Local Clusters (Sites) a Cache Wants to Relay To
On the "blue" site:
<distributed-cache name="bridged-cache" owners="2" mode="SYNC" start="EAGER"> <backups> <backup site="red" strategy="SYNC" /> <backup site="green" strategy="SYNC" /> </backups> </distributed-cache>
2.2 Configure the Local Cluster Transport
Specify the cluster name for the main JGroups stack:
<cache-container ...> <transport executor="infinispan-transport" lock-timeout="60000" cluster="blue" stack="tcp"/> </cache-container>
Why is this necessary? Why would RELAY2 care how the local cluster is named? It has its own configuration that specifies what "site" we are.