WildFly HornetQ-Based Messaging Subsystem Concepts
External
- Messaging chapter in "EAP 6.4 Administration and Configuration Guide" https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/chap-Messaging.html
Internal
Acceptors and Connectors
Acceptor
An acceptor is a HornetQ mechanism that defines which types of connections are accepted by a HornetQ server. A specific acceptor matches a specific connector.
Connector
A connector is a HornetQ mechanism that defines how a client or another server can connect to a server. The connector information is used by HornetQ clients. A specific connector matches a specific acceptor. A server may broadcast its connectors over the network as a way to make itself known and to allow clients (and other servers in the same cluster) to connect to it. A sketch of a matching connector/acceptor pair is shown below.
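As an illustration, a minimal sketch of matching connectors and acceptors in the messaging subsystem; the names and the "messaging" socket binding follow the EAP 6 default configuration:

    <subsystem xmlns="urn:jboss:domain:messaging:1.2">
        <hornetq-server>
            <connectors>
                <!-- used by clients and other servers to connect to this server -->
                <netty-connector name="netty" socket-binding="messaging"/>
                <in-vm-connector name="in-vm" server-id="0"/>
            </connectors>
            <acceptors>
                <!-- define which connections this server accepts; each matches a connector above -->
                <netty-acceptor name="netty" socket-binding="messaging"/>
                <in-vm-acceptor name="in-vm" server-id="0"/>
            </acceptors>
            ...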
Address
Persistence
HornetQ does not support database-based persistence. For the reasoning behind this decision, see https://developer.jboss.org/thread/153581. More details are available in "Messaging persistence in EAP 6.x" https://access.redhat.com/solutions/226743.
Journal
When a node is started for the first time, it persists a unique identifier into its journal directory. This ID is needed for proper formation of clusters.
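A sketch of where the journal lives, assuming the default EAP 6 directory layout (the path and relative-to values below are the defaults; adjust as needed):

    <hornetq-server>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>ASYNCIO</journal-type>
        <journal-min-files>2</journal-min-files>
        <!-- the unique node ID is persisted under this directory on first start -->
        <journal-directory path="messagingjournal" relative-to="jboss.server.data.dir"/>
        ...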
Security
HornetQ does not allow creation of unauthenticated connections. The connection's user name and password are authenticated against the "other" security domain. The security domain name is not explicitly specified in the configuration, but the default value is "other", and the following declaration has the same effect as the default configuration:
    <subsystem xmlns="urn:jboss:domain:messaging:1.2">
        <hornetq-server>
            ...
            <security-domain>other</security-domain>
            ...
        </hornetq-server>
    </subsystem>
Clustering
- EAP 6.4 Administration and Configuration Guide - HornetQ Clustering https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/sect-HornetQ_Clustering.html
Clustering in this context means establishing a mesh of HornetQ brokers. The main purpose of creating a cluster is to spread message processing load across more than one node. Each active node in the cluster acts as an independent HornetQ server and manages its own connections and messages. HornetQ ensures that messages can be intelligently load balanced between the servers in the cluster, according to the number of consumers on each node and whether they are ready for messages. HornetQ can also automatically redistribute messages between nodes of a cluster to prevent starvation on any particular node.
Clustering does not automatically ensure high availability. See the High Availability section below for more details.
Connections between nodes are explicitly declared in configuration. Messages are passed between nodes over core bridges. Core bridges consume messages from a source queue and forward them to a target queue deployed on a HornetQ node, which may or may not be in the same cluster. When a node forms a cluster connection with another node, it automatically creates a core bridge internally. For an example of how to configure a WildFly HornetQ-based messaging cluster, see the sketch below.
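A minimal cluster-connection sketch, assuming a connector named "netty" and a discovery group named "dg-group1" as in the EAP 6 full-ha profile:

    <cluster-connections>
        <cluster-connection name="my-cluster">
            <!-- only addresses with this prefix are load balanced across the cluster -->
            <address>jms</address>
            <!-- the connector advertised to other cluster members -->
            <connector-ref>netty</connector-ref>
            <discovery-group-ref discovery-group-name="dg-group1"/>
        </cluster-connection>
    </cluster-connections>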
Broadcast and Discovery Groups
Broadcast Group
A broadcast group is a HornetQ mechanism used to advertise connector information over the network. If the HornetQ server is configured for high availability, and thus has an active and a stand-by node, the broadcast group advertises connector pairs: a live server connector and a stand-by server connector.
Internally, broadcast groups use either UDP multicast or JGroups channels.
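A UDP-based broadcast group sketch, following the EAP 6 full-ha profile defaults ("messaging-group" is a multicast socket binding defined in the server's socket binding group):

    <broadcast-groups>
        <broadcast-group name="bg-group1">
            <!-- multicast socket binding over which the connector info is advertised -->
            <socket-binding>messaging-group</socket-binding>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty</connector-ref>
        </broadcast-group>
    </broadcast-groups>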
Discovery Group
A discovery group defines how connector information broadcast over a broadcast group is received from the broadcast endpoint. Discovery groups are used by JMS clients and by cluster connections to obtain the initial connection information, from which the actual cluster topology is downloaded.
A discovery group maintains lists of connectors (or connector pairs, in case the server is HA), one connector per server. Each broadcast updates the connector information.
A discovery group implies a broadcast group, so they require either UDP or JGroups.
Note that a discovery group intrinsically means dynamic discovery over UDP multicast or JGroups; there is no "static" discovery group. To declare connectors statically, a cluster connection (or connection factory) is configured with a <static-connectors> list instead of a discovery group reference.
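The discovery group matching the broadcast group sketched above, again following the full-ha profile defaults:

    <discovery-groups>
        <discovery-group name="dg-group1">
            <!-- must listen on the same multicast socket binding the broadcast group writes to -->
            <socket-binding>messaging-group</socket-binding>
            <refresh-timeout>10000</refresh-timeout>
        </discovery-group>
    </discovery-groups>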
Client-Side Load Balancing
All a client needs to do is look up a connection factory that was configured to load balance amongst multiple connectors. For an example of how such a connection factory is configured, see: Configure a ConnectionFactory for Load Balancing.
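A hedged sketch of such a connection factory; the second connector name "netty-remote" is hypothetical and stands for a connector pointing at another node. Round-robin is HornetQ's default load-balancing policy and is only shown explicitly here for illustration:

    <connection-factory name="RemoteConnectionFactory">
        <connectors>
            <connector-ref connector-name="netty"/>
            <!-- "netty-remote" is a hypothetical connector targeting a second node -->
            <connector-ref connector-name="netty-remote"/>
        </connectors>
        <entries>
            <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
        </entries>
        <connection-load-balancing-policy-class-name>org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy</connection-load-balancing-policy-class-name>
    </connection-factory>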
High Availability
- EAP 6.4 Administration and Configuration Guide - HornetQ High Availability https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/sect-High_Availability.html
High Availability in this context means the ability of HornetQ to continue functioning after the failure of one or more nodes. High availability is implemented on the server side by a pair of active/stand-by (backup) broker nodes, and on the client side by logic that allows client connections to automatically migrate from the active server to the stand-by server in the event of active server failure.
High Availability does not necessarily mean that the load is spread across more than one active node. See the Clustering section above for more details about HornetQ clustering for load balancing.
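A shared-store active/backup sketch, assuming both servers point their journal directories at the same shared file system; only the HA-relevant elements are shown:

    <!-- live server -->
    <hornetq-server>
        <shared-store>true</shared-store>
        <allow-failback>true</allow-failback>
        <failover-on-shutdown>true</failover-on-shutdown>
        ...

    <!-- backup server -->
    <hornetq-server>
        <backup>true</backup>
        <shared-store>true</shared-store>
        <allow-failback>true</allow-failback>
        ...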
Dedicated Topology
Collocated Topology
Automatic Client Fail-Over
HornetQ clients can be configured with knowledge of live and backup servers, so that in the event of live server failure the client will detect this and reconnect to the backup server. The backup server will then automatically recreate any sessions and consumers that existed on each connection before fail-over, saving the user from having to hand-code manual reconnection logic. HornetQ clients detect connection failure when they have not received packets from the server within the time given by 'client-failure-check-period'.
Client code example:
    import java.util.Properties;
    import javax.jms.ConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    final Properties env = new Properties();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
    env.put("jboss.naming.client.connect.timeout", "10000");
    // both the live and the backup host are listed; the client tries them in order
    env.put(Context.PROVIDER_URL, "remote://<host1>:4447,remote://<host2>:4447");
    env.put(Context.SECURITY_PRINCIPAL, "username");
    env.put(Context.SECURITY_CREDENTIALS, "password");
    Context context = new InitialContext(env);
    ConnectionFactory cf = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
If <host1> (i.e. your "live" server) is down, the client will automatically try <host2> (i.e. your "backup" server).
If you want the JMS connections to fail back to the live server when it comes back up, set <allow-failback> to "true" on both servers.
WildFly Clustering and HornetQ High Availability
HornetQ High Availability is configured independently of WildFly Clustering, so it is entirely possible to run WildFly nodes non-clustered while their embedded HornetQ instances are configured for High Availability.