Bridge Two Infinispan Clustered Caches with RELAY2
Latest revision as of 21:57, 5 October 2016
External
- RedHat JDG Manual Cross-Datacenter Replication https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Grid/6.6/html/Administration_and_Configuration_Guide/chap-Set_Up_Cross-Datacenter_Replication.html
Internal
Relevance
- JDG 6.6
Overview
For more details on concepts behind cross-site replication, see Infinispan Concepts - Cross-Site Replication.
Procedure
The following procedure assumes a TCP-based clustering configuration, as described in WildFly Clustering without Multicast.
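For reference, the assumed main JGroups stack might look like the sketch below. The exact stack is defined in WildFly Clustering without Multicast; the jboss.cluster.tcp.initial_hosts property name and the protocol selection are assumptions here, not prescriptions.

```xml
<stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="TCPPING">
        <!-- initial membership for the local cluster; property name is an assumption -->
        <property name="initial_hosts">${jboss.cluster.tcp.initial_hosts}</property>
        <property name="num_initial_members">2</property>
    </protocol>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    ...
</stack>
```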
1. JGroups Configuration
1.1 Declare the Bridging Cluster JGroups Stack
The bridging stack is usually TCP-based, and a good name for it is "tcp-relay". An example is available below.
Note that the additional stack configuration must be present on all local cluster nodes, because any node may become site master and manage cross-site replication.
<stack name="tcp-relay">
    <transport type="TCP" socket-binding="jgroups-tcp-relay"/>
    <protocol type="TCPPING">
        <property name="initial_hosts">${jboss.cluster.tcp.relay.initial_hosts}</property>
        <property name="num_initial_members">2</property>
        <property name="port_range">0</property>
        <property name="timeout">2000</property>
    </protocol>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd-relay"/>
    <protocol type="FD_ALL"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2">
        <property name="use_mcast_xmit">false</property>
    </protocol>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
</stack>
Note that the configuration of the relay JGroups stack must be identical on all nodes across all clusters that will be relaying to each other.
The ${jboss.cluster.tcp.relay.initial_hosts} system property must contain host[port] pairs: first this node's address and TCP relay transport port, followed by the address and TCP relay transport port of every node belonging to the sites we cross-replicate to. All remote nodes must be listed, because any remote node can become site master. The property can be declared in the <system-properties> section, as follows:
<system-properties>
    <property name="jboss.cluster.tcp.relay.initial_hosts"
              value="this-node-address[tcp-relay-port],the-other-site-node1[the-other-site-node1-tcp-relay-port],the-other-site-node2[the-other-site-node2-tcp-relay-port],..."/>
</system-properties>
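For example, a "blue" node bridging to a two-node "red" site might declare the following. All addresses are hypothetical, and the relay port 7610 matches the socket binding used below.

```xml
<system-properties>
    <!-- hypothetical addresses: this blue node first, then both red nodes -->
    <property name="jboss.cluster.tcp.relay.initial_hosts"
              value="192.168.1.10[7610],192.168.2.20[7610],192.168.2.21[7610]"/>
</system-properties>
```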
The stack definition refers to two new socket bindings ("jgroups-tcp-relay" and "jgroups-tcp-fd-relay"), which must be different from the main JGroups stack's bindings:
<socket-binding-group name="standard-sockets" ...>
    ...
    <socket-binding name="jgroups-tcp" port="7600"/>
    <socket-binding name="jgroups-tcp-fd" port="57600"/>
    <socket-binding name="jgroups-tcp-relay" port="7610"/>
    <socket-binding name="jgroups-tcp-fd-relay" port="57610"/>
    ...
</socket-binding-group>
1.2 Declare the RELAY2 Protocol
Declare the RELAY2 protocol that refers to the bridging cluster JGroups stack. The RELAY2 protocol is where the name of the local cluster (site) is declared. It must be placed at the top of the main stack:
<stack name="tcp">
    ...
    <protocol type="FRAG2"/>
    <relay site="blue">
        <remote-site name="red" stack="tcp-relay" cluster="blue-red-bridge"/>
        <property name="relay_multicasts">false</property>
    </relay>
</stack>
where the "site" is the name of the local site - the local cluster that will be relaying messages via its coordinator to other "sites".
One or more <remote-site>s can be declared. A <remote-site> declaration contains:
- the site name - the name of the remote site (the other local cluster) to forward messages to.
- the stack - represents the bridging stack name which must be declared in the same jgroups subsystem declaration.
- the cluster name - the name of the bridging JGroups group. All nodes bridged over the same bridging cluster must declare the same name here; in this case, "blue-red-bridge" must be identical across all configurations.
The symmetrical declaration for the "red" site is:
<stack name="tcp">
    ...
    <protocol type="FRAG2"/>
    <relay site="red">
        <remote-site name="blue" stack="tcp-relay" cluster="blue-red-bridge"/>
        <property name="relay_multicasts">false</property>
    </relay>
</stack>
2. Infinispan Configuration
2.1 Declare The Local Clusters (Sites) a Cache Wants to Relay To
On the "blue" site:
<distributed-cache name="bridged-cache" owners="2" mode="SYNC" start="EAGER">
    <backups>
        <backup site="red" strategy="SYNC"/>
        <backup site="green" strategy="SYNC"/>
    </backups>
</distributed-cache>
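Note that the cache above backs up to both "red" and "green", so the "blue" RELAY2 declaration would need a <remote-site> entry for each. A sketch follows; whether "green" shares the same bridging cluster name is an assumption here, the name only needs to match across the configurations of the nodes being bridged.

```xml
<relay site="blue">
    <remote-site name="red" stack="tcp-relay" cluster="blue-red-bridge"/>
    <!-- assumes green joins the same bridging cluster; adjust the name as appropriate -->
    <remote-site name="green" stack="tcp-relay" cluster="blue-red-bridge"/>
    <property name="relay_multicasts">false</property>
</relay>
```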
2.2 Configure the Local Cluster Transport
Specify the cluster name for the main JGroups stack:
<cache-container ...>
    <transport executor="infinispan-transport" lock-timeout="60000" cluster="blue" stack="tcp"/>
</cache-container>
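Following the symmetry used throughout this procedure, the corresponding declaration on the "red" site would set its own cluster name:

```xml
<cache-container ...>
    <transport executor="infinispan-transport" lock-timeout="60000" cluster="red" stack="tcp"/>
</cache-container>
```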
Why is this necessary? Why would RELAY2 care how the local cluster is named? It has its own configuration that specifies what "site" we are.