WildFly HornetQ Message Replication-Based HA Configuration
External
- EAP 6 Administration and Configuration Guide - HornetQ Message Replication - https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/sect-High_Availability.html#HornetQ_Message_Replication
Internal
Overview
For high availability purposes, the live server and the backup server must be installed on two separate physical (or virtual) hosts, provisioned in such a way as to minimize the probability of both hosts failing at the same time.
Configuration
backup-group-name
This is the unique name which identifies a live/backup pair that should replicate with each other. Example: "pair-A".
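For example, assuming the pair name "pair-A" used throughout this page, both the live server and its backup carry the same value (a placement sketch, consistent with the full subsystem configuration shown below):
<hornetq-server>
    ...
    <!-- must be identical on the live server and its backup -->
    <backup-group-name>pair-A</backup-group-name>
    ...
</hornetq-server>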
check-for-live-server
Whether a replicated live server should check the current cluster to see if there is already a live server with the same node ID. Default is false.
TODO: ?
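Per the HornetQ documentation, enabling this allows a live server that restarts after a failover to detect that another live server with its node ID (typically its former backup) is already active, which is the basis for failback. It is enabled in the configuration below (placement sketch only):
<hornetq-server>
    ...
    <check-for-live-server>true</check-for-live-server>
    ...
</hornetq-server>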
failover-on-shutdown
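Per the HornetQ documentation, this controls whether a clean, administrative shutdown of the live server (as opposed to a crash) also triggers failover to the backup. The default is false; it is set to true in the configuration below (placement sketch only):
<hornetq-server>
    ...
    <failover-on-shutdown>true</failover-on-shutdown>
    ...
</hornetq-server>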
Procedure
Common Configuration
Specifying the common configuration once and externalizing the per-server differences as system properties makes sense for WildFly instances running in domain mode, because it permits the use of the same configuration for both the live server and the backup server; the differences in behavior are expressed via system properties. In standalone mode, the sequences below can be copied and pasted into the respective standalone*.xml files.
In-Memory Replication Based High Availability
Use the following "messaging" subsystem configuration on both live and stand-by servers. This is convenient because the servers can be made part of the same server group.
...
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <shared-store>false</shared-store>
        <backup-group-name>pair-A</backup-group-name>
        <check-for-live-server>true</check-for-live-server>
        <backup>${jboss.messaging.hornetq.backup:false}</backup>
        <failover-on-shutdown>true</failover-on-shutdown>
        <persistence-enabled>true</persistence-enabled>
        ...
        <jms-connection-factories>
            ...
            <connection-factory name="RemoteConnectionFactory">
                <ha>true</ha>
                <retry-interval>1000</retry-interval>
                <retry-interval-multiplier>1.0</retry-interval-multiplier>
                <reconnect-attempts>-1</reconnect-attempts>
                <connectors>
                    <connector-ref connector-name="netty"/>
                </connectors>
                <entries>
                    <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
                </entries>
            </connection-factory>
            ...
        </jms-connection-factories>
    </hornetq-server>
</subsystem>
...
This is usually common for the entire domain, so it can be specified in the top-level section of the domain configuration.
...
<paths>
    <path name="hornetq.shared.dir" path="/nfs/hornetq-shared-storage"/>
</paths>
...
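This path is only referenced when the journal directories are placed on the file system; in that case the messaging subsystem points its directory elements at it, along these lines (a sketch, assuming the messaging:1.4 schema; the directory names match the log output shown below):
<hornetq-server>
    ...
    <bindings-directory path="bindings" relative-to="hornetq.shared.dir"/>
    <journal-directory path="journal" relative-to="hornetq.shared.dir"/>
    <large-messages-directory path="large-messages" relative-to="hornetq.shared.dir"/>
    <paging-directory path="paging" relative-to="hornetq.shared.dir"/>
    ...
</hornetq-server>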
Live Server Configuration
jboss.messaging.hornetq.backup is false by default, but it is a good idea to set it explicitly so the configuration is obvious. Add the following to the active node's host.xml:
<host ...>
    <system-properties>
        <property name="jboss.messaging.hornetq.backup" value="false"/>
    </system-properties>
    ...
</host>
Stand-By Server Configuration
jboss.messaging.hornetq.backup should be set to "true" in the stand-by node's host.xml:
<host ...>
    <system-properties>
        <property name="jboss.messaging.hornetq.backup" value="true"/>
    </system-properties>
    ...
</host>
JMS Connection Factories
A backup HornetQ instance does not need the <jms-connection-factories> and <jms-destinations> sections, as any JMS components are created from the journal when the backup server becomes live.
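The stand-by server's configuration can therefore be reduced to the HA-related settings when it is maintained separately (a sketch for standalone mode; in domain mode this page deliberately keeps the same profile on both servers):
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <shared-store>false</shared-store>
        <backup-group-name>pair-A</backup-group-name>
        <check-for-live-server>true</check-for-live-server>
        <backup>true</backup>
        <failover-on-shutdown>true</failover-on-shutdown>
        <persistence-enabled>true</persistence-enabled>
        ...
    </hornetq-server>
</subsystem>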
Log Output
Active server starting:
13:14:00,312 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221000: live server is starting with configuration HornetQ Configuration (clustered=false,backup=false,sharedStore=true,journalDirectory=/nfs/hornetq-shared-storage/journal,bindingsDirectory=/nfs/hornetq-shared-storage/bindings,largeMessagesDirectory=/nfs/hornetq-shared-storage/large-messages,pagingDirectory=/nfs/hornetq-shared-storage/paging)
13:14:00,313 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221006: Waiting to obtain live lock
[...]
13:14:00,614 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221035: Live Server Obtained live lock
[...]
13:14:01,800 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221007: Server is now live
13:14:01,801 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221001: HornetQ Server version 2.3.25.Final (2.3.x, 123) [db446058-de41-11e5-aea0-174ba3e38330]
Stand-by server starting:
13:18:19,380 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221000: backup server is starting with configuration HornetQ Configuration (clustered=false,backup=true,sharedStore=true,journalDirectory=/nfs/hornetq-shared-storage/journal,bindingsDirectory=/nfs/hornetq-shared-storage/bindings,largeMessagesDirectory=/nfs/hornetq-shared-storage/large-messages,pagingDirectory=/nfs/hornetq-shared-storage/paging)
13:18:19,402 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221032: Waiting to become backup node
13:18:19,449 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221033: ** got backup lock
[...]
13:18:19,680 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221109: HornetQ Backup Server version 2.3.25.Final (2.3.x, 123) [db446058-de41-11e5-aea0-174ba3e38330] started, waiting live to fail before it gets active
Failover to stand-by server:
[...]
13:20:21,911 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221010: Backup Server is now live