Infinispan HotRod Client Configuration

From NovaOrdis Knowledge Base
Revision as of 02:29, 28 October 2016

Internal

Overview

This article describes the configuration of a HotRod client, which is performed by building a Configuration object and passing it to the RemoteCacheManager constructor.
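
A minimal sketch of the property-based flavor of this flow, using only java.util.Properties. The key names are the ones documented in this article; how the Properties object is handed to the client (for example via a ConfigurationBuilder or a RemoteCacheManager constructor overload) depends on the client version, so that step is shown only as a comment:

```java
import java.util.Properties;

public class HotRodConfigSketch {

    public static Properties buildProperties() {
        Properties p = new Properties();
        // Initial server list, in host1:port1;host2:port2 format
        p.setProperty("infinispan.client.hotrod.server_list", "127.0.0.1:11222");
        // Implicitly apply FORCE_RETURN_VALUE to all calls (default is false)
        p.setProperty("infinispan.client.hotrod.force_return_values", "true");
        return p;
    }

    public static void main(String[] args) {
        Properties p = buildProperties();
        // With the Infinispan client on the classpath, the properties would
        // typically be applied along these lines (version-dependent):
        //   Configuration cfg = new ConfigurationBuilder().withProperties(p).build();
        //   RemoteCacheManager rcm = new RemoteCacheManager(cfg);
        System.out.println(p.getProperty("infinispan.client.hotrod.server_list"));
    }
}
```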

Configuration Elements

infinispan.client.hotrod.server_list

The default is 127.0.0.1:11222. This is the initial list of Hot Rod servers to connect to, specified in the following format: host1:port1;host2:port2... At least one host:port pair must be specified.
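
The host1:port1;host2:port2 format can be taken apart with a few lines of standard Java. This parser is purely illustrative; it is not part of the client API:

```java
import java.util.ArrayList;
import java.util.List;

public class ServerListParser {

    /** Splits "host1:port1;host2:port2" into {host, port} pairs. */
    public static List<String[]> parse(String serverList) {
        List<String[]> servers = new ArrayList<>();
        for (String entry : serverList.split(";")) {
            // lastIndexOf tolerates colons inside the host part
            int idx = entry.lastIndexOf(':');
            servers.add(new String[] { entry.substring(0, idx), entry.substring(idx + 1) });
        }
        return servers;
    }
}
```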

infinispan.client.hotrod.force_return_values

The default value is false. Whether or not to implicitly apply org.infinispan.client.hotrod.Flag.FORCE_RETURN_VALUE to all calls.

infinispan.client.hotrod.tcp_no_delay

The default is true. Enables or disables TCP_NODELAY on the TCP stack.

infinispan.client.hotrod.tcp_keep_alive

The default is false. Enables or disables TCP_KEEPALIVE on the TCP stack.

infinispan.client.hotrod.transport_factory

The default is org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory. It controls which transport to use. Currently only the TcpTransport is supported.

infinispan.client.hotrod.marshaller

The default is org.infinispan.marshall.jboss.GenericJBossMarshaller. Allows specifying a custom org.infinispan.marshall.Marshaller implementation to serialize and deserialize user objects. For portable serialization payloads, configure the marshaller to be org.infinispan.client.hotrod.marshall.ApacheAvroMarshaller.

infinispan.client.hotrod.async_executor_factory

Default is org.infinispan.client.hotrod.impl.async.DefaultAsyncExecutorFactory. Allows specifying a custom asynchronous executor for async calls.

infinispan.client.hotrod.default_executor_factory.pool_size

Default is 99. If the default executor is used, this configures the number of threads to initialize the executor with.

infinispan.client.hotrod.default_executor_factory.queue_size

Default value 100000. If the default executor is used, this configures the queue size to initialize the executor with.

infinispan.client.hotrod.hash_function_impl.1

By default, the client uses the hash function specified by the server in its responses, as indicated in org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashFactory. This property specifies the version of the hash function and consistent hash algorithm in use, and is closely tied to the Hot Rod server version used.

infinispan.client.hotrod.key_size_estimate

Default 64. This hint allows sizing of byte buffers when serializing and deserializing keys, to minimize array resizing.

infinispan.client.hotrod.value_size_estimate

Default 512. This hint allows sizing of byte buffers when serializing and deserializing values, to minimize array resizing.

infinispan.client.hotrod.socket_timeout

Default = 60000 (60 seconds). This property defines the maximum socket read timeout before giving up waiting for bytes from the server.

infinispan.client.hotrod.protocol_version

Default 2.0. This property defines the protocol version that this client should use. Other valid values include 1.0.

infinispan.client.hotrod.connect_timeout

Default = 60000 (60 seconds). This property defines the maximum socket connect timeout before giving up connecting to the server.

infinispan.client.hotrod.max_retries

Default 10. This property defines the maximum number of retries in case of a recoverable error. A valid value must be greater than or equal to 0 (zero). Zero means no retry.
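
The retry semantics can be illustrated with a small stand-alone sketch (not the client's actual retry code): with max_retries = N, an operation is attempted at most 1 + N times before the error is propagated.

```java
import java.util.function.IntPredicate;

public class RetrySketch {

    /**
     * Attempts an operation up to 1 + maxRetries times and returns the number
     * of attempts consumed. succeedsOnAttempt simulates a recoverable error
     * clearing up on a given attempt.
     */
    public static int attempts(int maxRetries, IntPredicate succeedsOnAttempt) {
        for (int attempt = 1; attempt <= 1 + maxRetries; attempt++) {
            if (succeedsOnAttempt.test(attempt)) {
                return attempt; // succeeded on this attempt
            }
        }
        return 1 + maxRetries; // gave up after the final retry
    }
}
```

With max_retries = 0 the operation is tried exactly once, matching "Zero means no retry" above.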

The following properties are related to connection pooling:

maxActive

Controls the maximum number of connections per server that are allocated (checked out to client threads, or idle in the pool) at one time. When non-positive, there is no limit to the number of connections per server. When maxActive is reached, the connection pool for that server is said to be exhausted. The default setting for this parameter is -1, i.e. there is no limit.

maxTotal

Sets a global limit on the number of persistent connections that can be in circulation within the combined set of servers. When non-positive, there is no limit to the total number of persistent connections in circulation. When maxTotal is exceeded, all connection pools are exhausted. The default setting for this parameter is -1 (no limit).

maxIdle

Controls the maximum number of idle persistent connections, per server, at any time. When negative, there is no limit to the number of connections that may be idle per server. The default setting for this parameter is -1.

minIdle

Sets a target value for the minimum number of idle connections (per server) that should always be available. If this parameter is set to a positive number and timeBetweenEvictionRunsMillis > 0, each time the idle connection eviction thread runs, it will try to create enough idle instances so that there will be minIdle idle instances available for each server. The default setting for this parameter is 1.

whenExhaustedAction

Specifies what happens when a connection is requested from a server's pool and that pool is exhausted. Possible values:

  • 0 - an exception will be thrown to the calling user
  • 1 - the caller will block (the invocation waits until a new or idle connection becomes available)
  • 2 - a new persistent connection will be created and returned (essentially making maxActive meaningless)

The default whenExhaustedAction setting is 1.
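
The pool-related settings above can be collected as plain properties. This is a hedged sketch: whether these keys are read bare, as below, or under a prefix depends on the client version, and the values shown are simply the documented defaults:

```java
import java.util.Properties;

public class PoolConfigSketch {

    public static Properties poolProperties() {
        Properties p = new Properties();
        p.setProperty("maxActive", "-1");          // no per-server connection limit (default)
        p.setProperty("maxTotal", "-1");           // no global connection limit (default)
        p.setProperty("maxIdle", "-1");            // no per-server idle limit (default)
        p.setProperty("minIdle", "1");             // keep one idle connection per server (default)
        p.setProperty("whenExhaustedAction", "1"); // block until a connection frees up (default)
        return p;
    }
}
```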

Optionally, one may configure the pool to examine and possibly evict connections as they sit idle in the pool and to ensure that a minimum number of idle connections is maintained for each server. This is performed by an "idle connection eviction" thread, which runs asynchronously. The idle object evictor does not lock the pool throughout its execution. The idle connection eviction thread may be configured using the following attributes:

timeBetweenEvictionRunsMillis

Indicates how long the eviction thread should sleep between runs of examining idle connections. When non-positive, no eviction thread will be launched. The default setting for this parameter is 2 minutes (120000 ms).

minEvictableIdleTimeMillis

Specifies the minimum amount of time that a connection may sit idle in the pool before it is eligible for eviction due to idle time. When non-positive, no connection will be dropped from the pool due to idle time alone. This setting has no effect unless timeBetweenEvictionRunsMillis > 0. The default setting for this parameter is 1800000 (30 minutes).

testWhileIdle

Indicates whether or not idle connections should be validated by sending a TCP packet to the server during idle connection eviction runs. Connections that fail to validate will be dropped from the pool. This setting has no effect unless timeBetweenEvictionRunsMillis > 0. The default setting for this parameter is true.
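
The three eviction-thread attributes can likewise be sketched as plain properties, again with the caveat that the exact key form is version-dependent; the values are the documented defaults, with the evictor explicitly enabled:

```java
import java.util.Properties;

public class EvictionConfigSketch {

    public static Properties evictionProperties() {
        Properties p = new Properties();
        // Run the idle-connection evictor every 2 minutes; a non-positive value
        // disables the thread, which also makes the next two settings inert
        p.setProperty("timeBetweenEvictionRunsMillis", "120000");
        p.setProperty("minEvictableIdleTimeMillis", "1800000"); // 30 minutes (default)
        p.setProperty("testWhileIdle", "true");                 // validate idle connections (default)
        return p;
    }
}
```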

Replicated Clusters Only

infinispan.client.hotrod.request_balancing_strategy

The default value is org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy. For replicated (vs distributed) Hot Rod server clusters, the client balances requests to the servers according to this strategy. Distributed clusters do not require this. For more details see load balancing above.