Infinispan HotRod Java Client
Internal
Overview
HotRod defines three levels of intelligence for clients:
- basic client, interested in neither cluster nor hash information
- topology-aware client, interested in cluster information
- hash-distribution-aware client, interested in both cluster and hash information
The Java client implementation supports all three levels of intelligence. It is transparently notified whenever a server is added to or removed from the HotRod cluster. At startup it only needs to know the address of one HotRod server. On connection to that server, the cluster topology is piggybacked to the client, and all further requests are dispatched across the available servers. Any subsequent topology change is piggybacked in the same way.
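In more recent client releases the intelligence level can also be selected explicitly through the configuration builder. A minimal sketch, assuming the ClientIntelligence enum and the clientIntelligence() builder method present in newer client generations (not necessarily in the 6.x generation documented here):

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

...

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("localhost").port(11222);
// Restrict the client to plain topology awareness; the default is HASH_DISTRIBUTION_AWARE.
builder.clientIntelligence(ClientIntelligence.TOPOLOGY_AWARE);
Configuration c = builder.build();
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(c);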
The client is also hash-distribution-aware. This means that, for each operation, the client chooses the most appropriate remote server to contact: the data owner. As an example, for a put(k,v) operation, the client calculates k's hash value and knows exactly on which server the data resides. It then picks a TCP connection to that particular server and dispatches the operation to it. This means less burden on the server side, which would otherwise need to look up the owner based on the key's hash, and a quicker response from the server, as an additional network roundtrip is skipped. This hash-distribution-aware aspect is only relevant to distributed HotRod clusters and makes no difference for replicated server deployments.
For a replicated cluster, the client can load balance requests. This aspect is irrelevant for distributed clusters, where, as shown above, the client sends/receives data to/from the data owner node. The default strategy is round-robin: requests are dispatched to all existing servers in a circular manner. Custom balancing policies can be defined by implementing the RequestBalancingStrategy interface and specifying it through the infinispan.client.hotrod.request_balancing_strategy configuration property.
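A minimal sketch of a custom balancing policy, assuming the RequestBalancingStrategy interface exposes the setServers and nextServer methods used by the bundled round-robin implementation (check the exact interface in your client version; newer clients use FailoverRequestBalancingStrategy with a slightly different signature). The RandomBalancingStrategy name below is purely illustrative:

import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Random;

import org.infinispan.client.hotrod.impl.transport.tcp.RequestBalancingStrategy;

// Illustrative only: picks a random server for each request instead of round-robin.
public class RandomBalancingStrategy implements RequestBalancingStrategy {

   private final Random random = new Random();
   private volatile List<SocketAddress> servers = new ArrayList<SocketAddress>();

   @Override
   public void setServers(Collection<SocketAddress> servers) {
      // Called by the client whenever the known server list changes (topology updates).
      this.servers = new ArrayList<SocketAddress>(servers);
   }

   @Override
   public SocketAddress nextServer() {
      // Called before each request on a replicated cluster to pick the target server.
      List<SocketAddress> current = servers;
      return current.get(random.nextInt(current.size()));
   }
}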
Connection Pooling
To avoid creating a TCP connection on each request (which is a costly operation), the client keeps a pool of persistent connections to all the available servers and reuses these connections whenever possible.
Connection Validation
The validity of the connections is checked by an async thread that iterates over the connections in the pool and sends a HotRod ping command to the server. This connection validation process makes the client proactive: there is a high chance for broken connections to be found while they are idle in the pool, rather than on an actual request from the application.
Connection Pooling Configuration
The number of connections per server, the total number of connections, how long a connection should be kept idle in the pool before being closed - all these (and more) can be configured. The RemoteCacheManager documentation describes the available configuration options.
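A minimal sketch of pool tuning through the fluent configuration builder, assuming the connectionPool() builder of this client generation; the exact set of pool attributes varies between client versions, so treat the names below as indicative rather than definitive:

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

...

Configuration c = new ConfigurationBuilder()
      .addServer().host("localhost").port(11222)
      .connectionPool()
         .maxActive(20)   // at most 20 connections per server
         .minIdle(5)      // keep at least 5 idle connections ready in the pool
      .build();
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(c);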
Threading
TODO
Metrics
Configuration
API
The basic API to interact with an Infinispan server over the HotRod protocol:
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

...

String host = "localhost";
int port = 11222;

Configuration c = new ConfigurationBuilder().addServer().host(host).port(port).build();
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(c);
RemoteCache defaultCache = remoteCacheManager.getCache();
RemoteCache namedCache = remoteCacheManager.getCache("some-cache");
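Once a RemoteCache handle is obtained, it is used much like a java.util.Map; a short continuation of the example above (the key/value pair is purely illustrative):

// Store and retrieve a value over HotRod.
namedCache.put("car", "ferrari");
String value = (String) namedCache.get("car");

// Remove the entry and release client resources when done.
namedCache.remove("car");
remoteCacheManager.stop();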
RemoteCacheManager
In order to be able to use a RemoteCache instance, the RemoteCacheManager must be started first: among other things, this initializes the connections to the Hot Rod server(s). A RemoteCacheManager constructed with a Configuration instance starts automatically; otherwise, the start() method must be called. The RemoteCacheManager is an "expensive" object, as it manages a set of persistent TCP connections to the Hot Rod servers. It is recommended to have only one instance per JVM and to cache it between calls to the server. stop() needs to be called explicitly in order to release all the resources (e.g. threads, TCP connections).
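A minimal sketch of the explicit lifecycle, assuming the two-argument constructor that defers startup:

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

...

Configuration c = new ConfigurationBuilder().addServer().host("localhost").port(11222).build();

// Do not start on construction; start explicitly later, e.g. from application lifecycle code.
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(c, false);
remoteCacheManager.start();

// ... use remoteCacheManager.getCache(...) for the lifetime of the application ...

// Release threads and TCP connections when the application shuts down.
remoteCacheManager.stop();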
Relationship between RemoteCacheManager and a Server-Side Cache Container
The RemoteCacheManager is configured with a host name and port. The host:port pair is associated on the server-side with a specific socket binding definition ("hotrod" in this case):
<socket-binding-group name="standard-sockets" ...>
   ...
   <socket-binding name="hotrod" port="11222"/>
   ...
</socket-binding-group>
The socket binding definition is associated with the HotRod connector configuration declared in the "infinispan:server:endpoint" subsystem (more details are available in The HotRod Connector documentation):
<subsystem xmlns="urn:infinispan:server:endpoint:6.1">
   <hotrod-connector cache-container="clustered" socket-binding="hotrod">
      <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
   </hotrod-connector>
</subsystem>
The HotRod connector thus declared is bound to a specific cache container, and this is what the client will get access to:
<subsystem xmlns="urn:infinispan:server:core:6.4">
   <cache-container name="clustered" default-cache="default" statistics="true">
      ...
      <distributed-cache name="default" .../>
      <distributed-cache name="something" .../>
   </cache-container>
   <cache-container name="security"/>
</subsystem>
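Given the server configuration above, a client connecting to the "hotrod" socket binding sees the caches of the "clustered" container; a short illustrative snippet:

// getCache() returns the container's default cache ("default" above),
// getCache("something") returns the named distributed cache.
RemoteCache defaultCache = remoteCacheManager.getCache();
RemoteCache somethingCache = remoteCacheManager.getCache("something");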
Replicated Clusters Only
infinispan.client.hotrod.request_balancing_strategy
The default value is org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy. For replicated (as opposed to distributed) Hot Rod server clusters, the client balances requests across the servers according to this strategy. Distributed clusters do not need it, since requests go directly to the data owner. For more details see the load balancing section above.
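A minimal sketch of setting a custom strategy through properties, assuming ConfigurationBuilder.withProperties(...) and the hypothetical RandomBalancingStrategy class sketched earlier (its package name here is illustrative):

import java.util.Properties;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

...

Properties props = new Properties();
props.setProperty("infinispan.client.hotrod.server_list", "localhost:11222");
// Hypothetical custom strategy from the earlier sketch.
props.setProperty("infinispan.client.hotrod.request_balancing_strategy",
      "com.example.RandomBalancingStrategy");

Configuration c = new ConfigurationBuilder().withProperties(props).build();
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(c);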
Interacting with a HotRod Server from within the Same JVM
TODO: