WildFly Isolated Self-Contained Standalone Profile

Relevance

EAP 6.0.1, EAP 6.3, WildFly 9.0.2.Final, EAP 7.0.4

Overview

This article contains step-by-step instructions to create isolated clustered or non-clustered WildFly profiles. The procedure also applies to EAP. The only place where the profile name is externalized is the name of the $JBOSS_HOME/profiles sub-directory that actually contains the profile. For this procedure to work, the name of the sub-directory must be the name of the profile. All files, directories and configuration under that sub-directory should be name-agnostic, so profiles can be copied across without the need for reconfiguration.

For more details about WildFly profiles, see:

WildFly Profile

Create a new profile sub-directory in $JBOSS_HOME/profiles

cd $JBOSS_HOME/profiles
mkdir node01

This is the only place where the profile name is externalized.

Create the directory structure

    cd $JBOSS_HOME/profiles/node01
    mkdir ./configuration; mkdir ./deployments

The "deployments" directory must exist. It is fine if it's empty, but it has to be there, otherwise the JBoss instance will complain at boot.

Copy necessary artifacts

Maintain the names of the server configuration files; this way we know what they are.

    cd $JBOSS_HOME/standalone/configuration
    cp application-* mgmt-* logging.properties standalone-full.xml ../../profiles/node01/configuration

Pick the appropriate standalone*.xml configuration file, depending on the type of instance you are attempting to start up: typically standalone.xml for a non-clustered instance and standalone-full-ha.xml for a clustered instance. For more on the differences between the standalone configuration files, see https://home.feodorov.com:9443/wiki/Wiki.jsp?page=JBoss7DifferencesBetweenStandaloneConfigurationFile.

The authentication credential files copied as part of the application-* and mgmt-* file transfer can be further modified in place by a custom add-user.sh script, described below:

Custom add-user.sh Script

Create a custom profiles/node01/add-user.sh shell wrapper to allow modification of this profile's authentication credential files. Note that both -sc (the location of the server config directory) and -dc (the location of the domain config directory) must point to the same profiles/node01/configuration directory; otherwise undesired effects, such as modifying the default domain files, will occur.

#!/bin/bash

# the directory this script lives in (the profile directory)
reldir=$(dirname "$0")

# prevent add-user.sh from picking up configuration via a pre-set JBOSS_HOME
unset JBOSS_HOME

# point both the standalone (-sc) and the domain (-dc) configuration
# directories at this profile's configuration directory
${reldir}/../../bin/add-user.sh -sc "${reldir}/configuration" -dc "${reldir}/configuration"

then:

chmod a+x add-user.sh

When attempting to add users to an isolated self-contained standalone profile, make sure you use the script provided with the profile, and NOT the $JBOSS_HOME/bin script: if you use the $JBOSS_HOME/bin script, the users won't be added to the profile's user files, but to the default user files. For more details see: Adding a User to an Isolated Self-Contained Standalone Profile.
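
A typical invocation, run from the profile directory (add-user.sh prompts interactively for the user type, name and password; the modified *-users.properties and *-roles.properties files end up under profiles/node01/configuration):

cd $JBOSS_HOME/profiles/node01
./add-user.sh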

Copy the initial version of the .conf file

Copy $JBOSS_HOME/bin/standalone.conf as $JBOSS_HOME/profiles/node01/profile.conf.

cp $JBOSS_HOME/bin/standalone.conf $JBOSS_HOME/profiles/node01/profile.conf

Use the standard name "profile.conf".
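
Because the run file described below exports PROFILE_DIR before standalone.sh sources ${RUN_CONF}, per-profile paths can be expressed in profile.conf without hardcoding the profile name. A minimal sketch, assuming we want the GC log in the profile's log directory (the -Xloggc setting is a hypothetical example, not part of the default standalone.conf):

# profile.conf excerpt: route the GC log into this profile's log directory
JAVA_OPTS="$JAVA_OPTS -Xloggc:${PROFILE_DIR}/log/gc.log"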

Create the run file

Create the run file in $JBOSS_HOME/profiles/node01. Use the standard name "run". Make sure server_config_file refers to the same standalone*.xml configuration file as the one that was previously copied under "configuration" (the example below uses standalone-ha.xml).

#!/bin/bash

port_offset=0
server_config_file=standalone-ha.xml

reldir=$(dirname "$0")

# the profile directory name; fall back to the current working directory
# when the script is invoked as ./run
profile_dir=${reldir##*/}
[ "${profile_dir}" = "." ] && profile_dir=$(pwd)

#
# The node name is how a JBoss instance identifies itself in a cluster. If you intend to
# stand up a cluster comprising multiple JBoss nodes running on the same host, then it
# makes sense to maintain configuration for those nodes in different profile
# sub-directories under the same 'profiles' directory. The node name will be inferred from
# the name of the profile directory. This is the default behavior. However, if you intend
# to stand up a cluster where the nodes run on different hosts, then it is better to use
# the same name for the profile directory across nodes (management uniformity across the
# cluster) and infer the node name from the host name.
#
node_name=${profile_dir##*/}
#node_name=$(hostname -s)

export RUN_CONF=${reldir}/profile.conf

#
# Export PROFILE_DIR so ${PROFILE_DIR} can be used in ${RUN_CONF} definitions
#
export PROFILE_DIR=${reldir}

unset JBOSS_HOME

${reldir}/../../bin/standalone.sh \
 --server-config=${server_config_file} \
 -Djboss.server.base.dir=${reldir} \
 -Djboss.node.name=${node_name} \
 -Djboss.socket.binding.port-offset=${port_offset}

then:

chmod a+x run

For cluster nodes running on the same host, increment port_offset (jboss.socket.binding.port-offset) accordingly, so their socket bindings do not collide.
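
For example, with three nodes on the same host (the offsets are illustrative; the offset shifts every binding in the socket binding group, including the management ports):

# node01: port_offset=0    -> HTTP 8080, management 9990
# node02: port_offset=100  -> HTTP 8180, management 10090
# node03: port_offset=200  -> HTTP 8280, management 10190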

TODO: add logic that will calculate the binding port offset based on the nodeXX.sh index.
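
A possible sketch for that TODO, assuming the profile directories follow a nodeXX naming convention and an arbitrary stride of 100 ports per node; these lines would go after node_name is computed in the run file:

# derive the port offset from the numeric suffix of the node name:
# node01 -> 0, node02 -> 100, node03 -> 200, ...
index=${node_name##*[!0-9]}                    # trailing digits, e.g. "01"
port_offset=$(( (10#${index:-1} - 1) * 100 ))  # 10# avoids octal parsing of "08", "09"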

jboss.node.name

The node name is how a JBoss instance identifies itself in a cluster. If you intend to stand up a cluster comprising multiple JBoss nodes running on the same host, then it makes sense to maintain configuration for those nodes in different profile sub-directories under the same 'profiles' directory. The node name will be automatically inferred from the name of the profile directory. This is the default behavior. However, if you intend to stand up a cluster where the nodes run on different hosts, then it is better to use the same name for the profile directory across nodes (management uniformity across the cluster) and infer the node name from the host name. More details about jboss.node.name are available here: jboss.node.name.

jboss-cli Support

Simply use $JBOSS_HOME/bin/jboss-cli.sh. This is possible because nothing in the CLI wrapper depends on the profile: it connects to the default management address, or to the address of the node provided explicitly. For more details on the CLI see WildFly CLI.
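
For example, against a WildFly 9/EAP 7 instance started with port_offset=0 (for a non-zero offset, add the offset to the 9990 management port; on EAP 6 the native management port 9999 is used instead):

$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=localhost:9990 --command=":read-attribute(name=server-state)"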

Start-up

Start the instance by executing the profile's run file:

$JBOSS_HOME/profiles/node01/run

If the instance was configured correctly, the data, log and tmp directories will be created in $JBOSS_HOME/profiles/node01.
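
After a successful first boot, the profile directory should look similar to this (data, log and tmp are created by the server; everything else was put in place by the steps above):

$JBOSS_HOME/profiles/node01/
├── add-user.sh
├── configuration/
├── data/
├── deployments/
├── log/
├── profile.conf
├── run
└── tmp/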