WildFly HornetQ Shared Filesystem-Based Dedicated HA Configuration

External

Internal

Overview

For high availability purposes, the live server and the backup server must be installed on two separate physical (or virtual) hosts, provisioned in such a way as to minimize the probability of both hosts failing at the same time. Highly available HornetQ requires access to reliable shared file system storage, so a file system such as GFS2 or a SAN must be made available to both hosts. HornetQ instances will store their bindings and journal files, among other things, in the shared directory. Appropriately configured NFS v4 is also an option.
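
For example, if NFS v4 is used, the shared directory can be mounted on both hosts with an /etc/fstab entry similar to the one below. This is only a sketch: the server name, export path and mount options are assumptions and should be validated against the NFS and HornetQ documentation for the versions in use.

# /etc/fstab on both the live and the stand-by host (hypothetical names and options)
nfs-server:/exports/hornetq-shared-storage  /nfs/hornetq-shared-storage  nfs4  rw,sync,hard,intr,noac  0 0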

WildFly Clustering and HornetQ High Availability

This document contains instructions for setting up a configuration in which HornetQ HA is configured independently of WildFly clustering.

For more details see:

Concepts: WildFly Clustering and HornetQ High Availability

Procedure

Common Configuration

Specifying the common configuration once and externalizing per-server differences as system properties makes sense for WildFly instances running in domain mode, because it permits the use of the same configuration for both the live server and the backup server; the differences in behavior are expressed via system properties. In standalone mode, the sequences below can be copied and pasted into their respective standalone*.xml files.
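
In standalone mode, the backup flag can also be supplied on the command line instead of being hardcoded in the file; a minimal sketch, assuming the messaging configuration below has been added to standalone-full-ha.xml:

# live server
./bin/standalone.sh -c standalone-full-ha.xml -Djboss.messaging.hornetq.backup=false

# stand-by server
./bin/standalone.sh -c standalone-full-ha.xml -Djboss.messaging.hornetq.backup=true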

Shared-Storage Based High Availability

Use the following "messaging" subsystem configuration on both the live and the stand-by server. This is convenient because the servers can be made part of the same server group (a server group sketch is shown after the configuration below).

...
<subsystem xmlns="urn:jboss:domain:messaging:1.4"> 
   <hornetq-server> 

      <persistence-enabled>true</persistence-enabled>
      ...
      <!-- "true" selects shared filesystem (shared store) high availability -->
      <shared-store>true</shared-store>
      <!-- "false" makes this instance the live server, "true" makes it the backup; externalized per host via a system property -->
      <backup>${jboss.messaging.hornetq.backup:false}</backup>
      <create-bindings-dir>true</create-bindings-dir>
      <create-journal-dir>true</create-journal-dir>
      <!-- fail over to the backup even on a clean administrative shutdown of the live server -->
      <failover-on-shutdown>true</failover-on-shutdown>

      <paging-directory path="paging" relative-to="hornetq.shared.dir"/>
      <bindings-directory path="bindings" relative-to="hornetq.shared.dir"/> 
      <journal-directory path="journal" relative-to="hornetq.shared.dir"/>
      <large-messages-directory path="large-messages" relative-to="hornetq.shared.dir"/>
      
      ...

      <jms-connection-factories>
         ...
         <connection-factory name="RemoteConnectionFactory">
            <ha>true</ha>
            <retry-interval>1000</retry-interval>
            <retry-interval-multiplier>1.0</retry-interval-multiplier>
            <reconnect-attempts>-1</reconnect-attempts> 
            <connectors> 
               <connector-ref connector-name="netty"/>
            </connectors> 
            <entries> 
               <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/> 
            </entries> 
         </connection-factory>
         ...
      </jms-connection-factories>
   </hornetq-server>
</subsystem>
...
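
To make both servers part of the same server group, the group can reference the profile that contains the messaging configuration above, and each host.xml declares a server in that group. The group, profile and server names below are assumptions used only for illustration:

<!-- domain.xml (sketch) -->
<server-groups>
   <server-group name="messaging-ha-group" profile="full-ha">
      <socket-binding-group ref="full-ha-sockets"/>
   </server-group>
</server-groups>

<!-- host.xml, on both the live and the stand-by host (sketch) -->
<servers>
   <server name="messaging-node" group="messaging-ha-group" auto-start="true"/>
</servers>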

HornetQ JMS ConnectionFactory Configuration

HornetQ JMS ConnectionFactory Configuration

Shared Path Declaration

This is usually common for the entire domain, so it can be specified in the top-level <paths> section of domain.xml.

   ...
   <paths>
      <path name="hornetq.shared.dir" path="/nfs/hornetq-shared-storage"/>
   </paths>
   ...

Live Server Configuration

jboss.messaging.hornetq.backup defaults to false, but it is a good idea to make the configuration explicit. Add the following to the active node's host.xml:

<host ...>
   <system-properties>
      <property name="jboss.messaging.hornetq.backup" value="false"/>
   </system-properties>
   ...
</host>

Stand-By Server Configuration

jboss.messaging.hornetq.backup should be set to "true" in the stand-by node's host.xml:

<host ...>
   <system-properties>
      <property name="jboss.messaging.hornetq.backup" value="true"/>
   </system-properties>
   ...
</host>
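
Alternatively, the same host-level system properties can be set with the management CLI instead of editing host.xml directly; a sketch, assuming the hosts are named live-host and backup-host:

/host=live-host/system-property=jboss.messaging.hornetq.backup:add(value=false)
/host=backup-host/system-property=jboss.messaging.hornetq.backup:add(value=true)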

JMS Connection Factories

A backup HornetQ instance does not need the <jms-connection-factories> and <jms-destinations> sections, as any JMS components are created from the shared journal when the backup server becomes live.
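
On the live server, destinations are declared as usual in the messaging subsystem; a minimal sketch, with a hypothetical queue name:

<jms-destinations>
   <jms-queue name="ExampleQueue">
      <entry name="java:jboss/exported/jms/queue/ExampleQueue"/>
      <durable>true</durable>
   </jms-queue>
</jms-destinations>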

Log Output

Active server starting:

13:14:00,312 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221000: live server is starting with configuration HornetQ Configuration (clustered=false,backup=false,sharedStore=true,journalDirectory=/nfs/hornetq-shared-storage/journal,bindingsDirectory=/nfs/hornetq-shared-storage/bindings,largeMessagesDirectory=/nfs/hornetq-shared-storage/large-messages,pagingDirectory=/nfs/hornetq-shared-storage/paging)
13:14:00,313 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221006: Waiting to obtain live lock
[...]
13:14:00,614 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221035: Live Server Obtained live lock
[...]
13:14:01,800 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221007: Server is now live
13:14:01,801 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221001: HornetQ Server version 2.3.25.Final (2.3.x, 123) [db446058-de41-11e5-aea0-174ba3e38330] 

Stand-by server starting:

13:18:19,380 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221000: backup server is starting with configuration HornetQ Configuration (clustered=false,backup=true,sharedStore=true,journalDirectory=/nfs/hornetq-shared-storage/journal,bindingsDirectory=/nfs/hornetq-shared-storage/bindings,largeMessagesDirectory=/nfs/hornetq-shared-storage/large-messages,pagingDirectory=/nfs/hornetq-shared-storage/paging)
13:18:19,402 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221032: Waiting to become backup node
13:18:19,449 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221033: ** got backup lock
[...]
13:18:19,680 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221109: HornetQ Backup Server version 2.3.25.Final (2.3.x, 123) [db446058-de41-11e5-aea0-174ba3e38330] started, waiting live to fail before it gets active
root@h2# 

Failover to stand-by server:

[...]
13:20:21,911 INFO  [org.hornetq.core.server] (HQ119000: Activation for server HornetQServerImpl::serverUUID=db446058-de41-11e5-aea0-174ba3e38330) HQ221010: Backup Server is now live

Other Examples

  • An isolated domain that starts an active and a standby HornetQ node with shared filesystem-based HA: domain.xml (https://github.com/NovaOrdis/playground/blob/master/jboss/hornetq/configuration-examples/domain-shared-filesystem-based-dedicated-ha/domain.xml), host.xml (https://github.com/NovaOrdis/playground/blob/master/jboss/hornetq/configuration-examples/domain-shared-filesystem-based-dedicated-ha/host.xml).