Infinispan Configuration
Internal
Relevance
- JDG 6.6.0
JDG Configuration Files
A JDG instance intended to provide access to an internal local cache (maintained within the memory of the JVM running the JDG instance) is configured from $JDG_HOME/standalone/configuration/standalone.xml. A JDG instance intended to cluster with other instances to provide a distributed cache space is configured from $JDG_HOME/standalone/configuration/clustered.xml.
The clustered instance is supposed to be started with $JDG_HOME/bin/clustered.sh. When started this way, the configuration file $JDG_HOME/bin/clustered.conf is sourced. However, when systemctl starts the instance, the systemd startup scripts are written to invoke $JDG_HOME/bin/standalone.sh -c clustered.xml, which sources standalone.conf, not clustered.conf. It is best to keep the configuration in both files in sync.
JDG Main Configuration File
The JDG main configuration file is standalone.xml or clustered.xml, depending on the mode JDG is started in. Relevant sections:
infinispan:server:jgroups Section
The "infinispan:server:jgroups" subsystem contains details related to how JDG nodes cluster with each other. The two main transport choices are multicast (UDP) and TCP. The configuration details are similar to WildFly's:
infinispan:server:core Section
The "infinispan:server:core" subsystem contains the declaration of the cache containers and their corresponding caches that will be exposed for use by the Infinispan cluster:
infinispan:server:endpoint Section
The "infinispan:server:endpoint" subsystem contains the declaration and configuration of various connector endpoints (HotRod, memcached and REST). The connector endpoints are used by clients to connect to the Infinispan cluster and get access to caches:
<subsystem xmlns="urn:infinispan:server:endpoint:6.1"> <hotrod-connector cache-container="clustered" socket-binding="hotrod"> <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/> </hotrod-connector> <memcached-connector cache-container="clustered" socket-binding="memcached"/> <rest-connector cache-container="clustered" auth-method="BASIC" security-domain="other" virtual-server="default-host"/> </subsystem>
The corresponding socket bindings are declared in the <socket-binding-group> element:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> ... <socket-binding name="hotrod" interface="management" port="11222"/> <socket-binding name="http" port="8080"/> <socket-binding name="https" port="8443"/> ... <socket-binding name="memcached" interface="management" port="11211"/> ... </socket-binding-group>
For more details on HotRod clients and how they connect to a HotRod server, see HotRod Java Client API.