WildFly HornetQ-Based Messaging Subsystem Concepts

Migrating from https://home.feodorov.com:9443/wiki/Wiki.jsp?page=HornetQClustering



External

Internal

Acceptors and Connectors

Acceptor

An acceptor is a HornetQ mechanism that defines which types of connections are accepted by a HornetQ server. A specific acceptor matches a specific connector.

Connector

A connector is a HornetQ mechanism that defines how a client or another server can connect to a server. The connector information is used by HornetQ clients. A specific connector matches a specific acceptor. A server may broadcast its connectors over the network, as a way to make itself known and to allow clients (and other servers in the same cluster) to connect to it.
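
As an illustration only, this is a minimal sketch of how acceptors and connectors are typically declared in the messaging subsystem; the names and the socket binding references are assumptions, not values taken from a tested configuration on this page:

<hornetq-server>
  ...
  <connectors>
    <!-- used by clients and by other servers to connect to this server -->
    <netty-connector name="netty" socket-binding="messaging"/>
    <in-vm-connector name="in-vm" server-id="0"/>
  </connectors>
  <acceptors>
    <!-- define which connections this server accepts; each matches a connector above -->
    <netty-acceptor name="netty" socket-binding="messaging"/>
    <in-vm-acceptor name="in-vm" server-id="0"/>
  </acceptors>
  ...
</hornetq-server>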

Address

Persistence

HornetQ does not support database-based persistence. For the reasons behind this decision, see https://developer.jboss.org/thread/153581. More details are available in "Messaging persistence in EAP 6.x": https://access.redhat.com/solutions/226743.

Journal

When a node is started for the first time it persists a unique identifier into its journal directory. This ID is needed for proper formation of clusters.

HornetQ Data Directories

The configuration makes it possible to create the HornetQ bindings and journal data directories at startup, if they do not already exist. This can be useful in "experimental" mode, when one deletes and recreates the HornetQ data files for whatever reason, and probably not that useful in production. If the directories exist, they are not re-created, so the "create" options can be left in place, even in a production configuration. However, there is another set of directories (large messages and paging) that is created automatically if it does not exist, in the absence of any explicit configuration option. For production, it is probably best if the directories are created manually as part of the installation procedure and the "create-*" options are removed from the configuration.
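
For reference, a sketch of how the four data directories can be declared explicitly; the paths and the relative-to base are assumptions. The "create" behavior mentioned above corresponds to the create-bindings-dir and create-journal-dir attributes of the hornetq-server resource.

<hornetq-server>
  ...
  <!-- explicit locations for the HornetQ data directories -->
  <bindings-directory path="messagingbindings" relative-to="jboss.server.data.dir"/>
  <journal-directory path="messagingjournal" relative-to="jboss.server.data.dir"/>
  <large-messages-directory path="messaginglargemessages" relative-to="jboss.server.data.dir"/>
  <paging-directory path="messagingpaging" relative-to="jboss.server.data.dir"/>
  ...
</hornetq-server>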

Security

HornetQ does not allow creation of unauthenticated connections. The connection's user name and password are authenticated against the "other" security domain, and the user must belong to the "guest" role. The security domain name is not explicitly specified in the configuration, but the default value is "other", and the following declaration has the same effect as the default configuration:

<subsystem xmlns="urn:jboss:domain:messaging:1.2">
  <hornetq-server>
    ...
    <security-domain>other</security-domain>
    ...
  </hornetq-server>
</subsystem>

For more details about the relationship between security domains and security realms, see Relationship between a Security Realm and a Security Domain.

Users can be added to the ApplicationRealm with add-user.sh.

For details on how to secure destinations, see:

Securing a JMS Destination

For details on how to secure cluster connections, see:

Securing a Cluster Connection

Clustering

Clustering in this context means establishing a mesh of HornetQ brokers. The main purpose of creating a cluster is to spread the message processing load across more than one node. Each active node in the cluster acts as an independent HornetQ server and manages its own connections and messages. HornetQ ensures that messages can be intelligently load balanced between the servers in the cluster, according to the number of consumers on each node and whether they are ready for messages. HornetQ also has the ability to automatically redistribute messages between the nodes of a cluster, to prevent starvation on any particular node.

Clustering does not automatically ensure high availability. Go here for more details on HornetQ high availability.

Cluster Connection

The elements that turn a HornetQ instance into a clustered HornetQ instance are the presence of one or more cluster connections in the configuration and the configuration setting that tells the instance it is clustered: <clustered>true</clustered>.

Cluster connections represent (bidirectional? unidirectional?) connections between nodes. They need to be explicitly declared in the configuration. Messages are passed between nodes over core bridges. A core bridge consumes messages from a source queue and forwards them to a target queue deployed on a HornetQ node, which may or may not be in the same cluster. When a node forms a cluster connection with another node, it automatically creates a core bridge internally. For more details on how to configure a WildFly HornetQ-based messaging cluster, see:

WildFly HornetQ-Based Messaging Subsystem - Clustering with TCP
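
For quick orientation, a minimal sketch of a cluster connection declaration; the cluster connection name, the connector reference and the discovery group reference are assumptions. The sketch uses a discovery group reference, while the article above describes the alternative of statically declared connectors over TCP.

<hornetq-server>
  <clustered>true</clustered>
  ...
  <cluster-connections>
    <cluster-connection name="my-cluster">
      <!-- only messages sent to addresses starting with this prefix are load balanced -->
      <address>jms</address>
      <!-- the connector this node advertises to the other cluster members -->
      <connector-ref>netty</connector-ref>
      <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
  </cluster-connections>
  ...
</hornetq-server>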

Broadcast and Discovery Groups

Broadcast Group

A broadcast group is a HornetQ mechanism used to advertise connector information over the network. If the HornetQ server is configured for high availability, and thus has an active and a stand-by node, the broadcast group advertises connector pairs: a live server connector and a stand-by server connector.

Internally, broadcast groups use either UDP multicast or JGroups channels.
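
A sketch of a UDP multicast-based broadcast group declaration; the group name, the multicast socket binding and the broadcast period are assumptions:

<hornetq-server>
  ...
  <broadcast-groups>
    <broadcast-group name="bg-group1">
      <!-- multicast socket binding over which the connector information is advertised -->
      <socket-binding>messaging-group</socket-binding>
      <broadcast-period>5000</broadcast-period>
      <!-- the connector being advertised -->
      <connector-ref>netty</connector-ref>
    </broadcast-group>
  </broadcast-groups>
  ...
</hornetq-server>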

Discovery Group

A discovery group defines how connector information broadcast over a broadcast group is received from the broadcast endpoint. Discovery groups are used by JMS clients and by cluster connections to obtain the initial connection information, in order to download the actual topology.

A discovery group maintains a list of connectors (or of connector pairs, if the servers are HA), one entry per server. Each broadcast updates the connector information.

A discovery group implies a broadcast group, so they require either UDP or JGroups.
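
A sketch of a discovery group declaration that matches the broadcast group sketched above; the group name, the socket binding and the refresh timeout are assumptions:

<hornetq-server>
  ...
  <discovery-groups>
    <discovery-group name="dg-group1">
      <!-- multicast socket binding the broadcasts are received on -->
      <socket-binding>messaging-group</socket-binding>
      <!-- how long to wait after the last broadcast before removing a server's connector entry -->
      <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
  </discovery-groups>
  ...
</hornetq-server>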

Does "discovery group" intrinsically mean multicast - or we can have "static" discovery groups where the connectors are statically declared?

Client-Side Load Balancing

All a client needs to do in order to load balance messages across a number of HornetQ nodes is to look up a connection factory that was configured to load balance amongst those nodes. The functionality has been tested with EAP 6.4. For an example of how such a connection factory is configured see: Configure a ConnectionFactory for Load Balancing.
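
For illustration only, a sketch of a connection factory that references more than one connector, which is the precondition for client-side load balancing; the connector names are assumptions, and the tested configuration is in the article referenced above. By default, HornetQ distributes new connections across the listed connectors in a round-robin fashion.

<hornetq-server>
  ...
  <jms-connection-factories>
    <connection-factory name="RemoteConnectionFactory">
      <connectors>
        <!-- one connector per cluster node the client should balance across -->
        <connector-ref connector-name="netty-node1"/>
        <connector-ref connector-name="netty-node2"/>
      </connectors>
      <entries>
        <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
      </entries>
    </connection-factory>
  </jms-connection-factories>
  ...
</hornetq-server>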

High Availability

High Availability in this context means the ability of HornetQ to continue functioning after the failure of one or more nodes. High availability is implemented on the server side by using a pair of active/stand-by (backup) broker nodes, and on the client side by logic that allows client connections to automatically migrate from the active server to the stand-by server in the event of an active server failure.

High Availability does not necessarily mean that the load is spread across more than one active node. Go here for more details about HornetQ clustering for load balancing.

Replication Types

Shared Filesystem-based Replication

In-Memory Replication

In an in-memory replication configuration, the active and stand-by nodes do not share filesystem-based data stores. Message replication is done via network traffic, over cluster connections.

For step-by-step instructions on how to configure such a topology see:

Dedicated Topology with In-Memory Replication
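
As an outline only, these are the HA-related elements typically involved on the two hornetq-server instances in this topology; the backup group name and the values shown are assumptions, and the article above contains the complete configuration. Some of these elements (check-for-live-server, backup-group-name) require a messaging schema version newer than 1.2.

<!-- live server -->
<hornetq-server>
  ...
  <clustered>true</clustered>
  <shared-store>false</shared-store>
  <backup>false</backup>
  <!-- on restart, look for a live server with the same node ID before activating -->
  <check-for-live-server>true</check-for-live-server>
  <backup-group-name>pair-A</backup-group-name>
  ...
</hornetq-server>

<!-- backup (stand-by) server -->
<hornetq-server>
  ...
  <clustered>true</clustered>
  <shared-store>false</shared-store>
  <backup>true</backup>
  <!-- give control back to the live server when it comes back -->
  <allow-failback>true</allow-failback>
  <backup-group-name>pair-A</backup-group-name>
  ...
</hornetq-server>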

Dedicated Topology

For step-by-step instructions on how to configure such a topology see:

Dedicated Topology with Shared Filesystem
Dedicated Topology with In-Memory Replication

Collocated Topology

The diagram shows an example of a collocated topology:

HornetQTopologies CollocatedTopology withoutLB.png

For step-by-step instructions on how to configure such a topology see:

Collocated Topology with Shared Filesystem

Collocated Topology with Load Balancing

HornetQTopologies CollocatedTopology withLB.png

Server State Replication

TODO: explain this: https://access.redhat.com/knowledge/docs/en-US/JBoss_Enterprise_Application_Platform/5/html/HornetQ_User_Guide/failover.html

HornetQ does not replicate full server state between live and backup servers. When the new session is automatically recreated on the backup it will not have any knowledge of messages already sent or acknowledged in that session. Any in-flight sends or acknowledgments at the time of fail-over might also be lost.

Client-Side Failover

Playground Example

TODO

Failover Limitations

Due to the way HornetQ was designed, failover is not fully transparent and requires the application's cooperation.

There are two notable situations when the application will be notified of live server failure:

  1. The application performs a blocking operation (for example, a message send()). In this situation, if a live server failure occurs, the client-side messaging runtime will interrupt the send operation and cause it to throw a JMSException.
  2. The live server failure occurs during a transaction. In this case, the client-side messaging runtime rolls back the transaction.

Automatic Client Fail-Over

HornetQ clients can be configured with knowledge of live and backup servers, so that in the event of a live server failure, the client will detect this and reconnect to the backup server. The backup server will then automatically recreate any sessions and consumers that existed on each connection before fail-over, thus saving the user from having to hand-code manual reconnection logic. HornetQ clients detect connection failure when they have not received packets from the server within the time given by 'client-failure-check-period'.
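
On the server side, this behavior is driven by the HA-related settings of the connection factory the client looks up. The following is only a sketch with assumed values; the element names mirror the connection factory attributes of the messaging subsystem (including client-failure-check-period), but they are not taken from a tested configuration. The client code example below then simply looks this factory up via JNDI.

<hornetq-server>
  ...
  <jms-connection-factories>
    <connection-factory name="RemoteConnectionFactory">
      <connectors>
        <connector-ref connector-name="netty"/>
      </connectors>
      <entries>
        <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
      </entries>
      <!-- enable client-side failover to the backup server -->
      <ha>true</ha>
      <block-on-acknowledge>true</block-on-acknowledge>
      <retry-interval>1000</retry-interval>
      <retry-interval-multiplier>1.0</retry-interval-multiplier>
      <!-- -1 means retry forever -->
      <reconnect-attempts>-1</reconnect-attempts>
      <!-- how long the client waits for packets before considering the connection dead -->
      <client-failure-check-period>30000</client-failure-check-period>
    </connection-factory>
  </jms-connection-factories>
  ...
</hornetq-server>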

Client code example:

import java.util.Properties;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

// the PROVIDER_URL lists both the live and the backup server
final Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
env.put("jboss.naming.client.connect.timeout", "10000");
env.put(Context.PROVIDER_URL, "remote://<host1>:4447,remote://<host2>:4447");
env.put(Context.SECURITY_PRINCIPAL, "username");
env.put(Context.SECURITY_CREDENTIALS, "password");
Context context = new InitialContext(env);
ConnectionFactory cf = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");

If <host1> (i.e. your "live" server) is down, the client will automatically try <host2> (i.e. your "backup" server).

If you want the JMS connections to move back to the live server when it comes back, set <allow-failback> to "true" on both servers.

Failover in Case of Administrative Shutdown of the Live Server

HornetQ makes it possible to specify the client-side failover behavior in the case of an administrative shutdown of the live server. There are two options (the setting that controls the behavior is sketched after the list):

  1. Client does not fail over to the backup server on administrative shutdown of the live server. If the connection factory is configured to contain other live server connectors, the client will reconnect to those; if not, it will issue a warning log entry and close the connection.
  2. Client does fail over to the backup server on administrative shutdown of the live server. If there are no other live servers available, this is probably a sensible option.
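
The behavior is controlled by the failover-on-shutdown element of the live hornetq-server; a minimal sketch, in which the value shown selects option 2 (the default, false, corresponds to option 1):

<hornetq-server>
  ...
  <!-- when true, clients fail over to the backup even when the live server is shut down administratively -->
  <failover-on-shutdown>true</failover-on-shutdown>
  ...
</hornetq-server>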

WildFly Clustering and HornetQ High Availability

HornetQ High Availability is configured independently of WildFly Clustering, so a configuration in which WildFly nodes are running in a non-clustered configuration but the embedded HornetQ instances are configured for High Availability is entirely possible.

Generic JMS Client with HornetQ

Playground Example

https://github.com/NovaOrdis/playground/tree/master/wildfly/hornetq/simplest-client