Infinispan Concepts
External
- Infinispan 6 User Guide http://infinispan.org/docs/6.0.x/user_guide/user_guide.html
Internal
Usage Modes
Remote Client-Server Mode
In client-server mode, the Infinispan server runs as a separate data grid server. The data grid may contain one or more clustered Infinispan nodes. Each Infinispan node in client-server mode runs as a self-contained process using a container based on WildFly. When there is more than one node, the nodes cluster over JGroups.
Client applications can access the data grid server using the Hot Rod, Memcached and REST client APIs.
An Infinispan server in client-server mode does NOT offer transactional operations.
However, client-server mode allows for easy scalability, simply by adding nodes, and for easier upgrades of the data grid without impact on client applications.
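For illustration, a minimal Hot Rod client sketch in Java, assuming a server listening on the default Hot Rod port 11222 on localhost and the infinispan-client-hotrod library on the classpath:

 import org.infinispan.client.hotrod.RemoteCache;
 import org.infinispan.client.hotrod.RemoteCacheManager;
 import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
 public class HotRodClientExample {
     public static void main(String[] args) {
         // point the client at the data grid server (host and port are assumptions)
         ConfigurationBuilder builder = new ConfigurationBuilder();
         builder.addServer().host("127.0.0.1").port(11222);
         RemoteCacheManager remoteManager = new RemoteCacheManager(builder.build());
         // obtain the default cache exposed by the server and use it
         RemoteCache<String, String> cache = remoteManager.getCache();
         cache.put("key", "value");
         System.out.println(cache.get("key"));
         remoteManager.stop();
     }
 }

Note that RemoteCacheManager is a heavyweight object maintaining connections to the grid, so a client application would normally create one instance and reuse it.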
Library Mode
Cache Modes
Local Mode
Distributed Mode
Replicated Mode
Invalidation Mode
External: http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#invalidation_mode
The number of invalidations is collected and exposed as the invalidations CLI metric.
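As a sketch of how a cache ends up in invalidation mode, using the embedded programmatic configuration API (synchronous invalidation is chosen just for the example):

 import org.infinispan.configuration.cache.CacheMode;
 import org.infinispan.configuration.cache.Configuration;
 import org.infinispan.configuration.cache.ConfigurationBuilder;
 // invalidation mode: entries are not replicated; on a write, the other
 // nodes are told to invalidate their now-stale copies of the key
 ConfigurationBuilder builder = new ConfigurationBuilder();
 builder.clustering().cacheMode(CacheMode.INVALIDATION_SYNC);
 Configuration invalidationConfig = builder.build();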
Eviction
Passivation
Passivation is the process of writing data that is being evicted from memory into a store. Also see Cache Store Passivation.
TODO process http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#cache-passivation.
Activation
The process of bringing an entry into memory from a cache store. Also see Cache Store Activation.
Expiration
Cache Container
A cache container is the runtime structure that instantiates and manages one or more caches. In the case of clustered caches, the cache container encapsulates the networking mechanisms required to maintain state across more than one JVM for its caches, including a JGroups stack.
Each cache container declares a set of caches that share a global configuration, so caches belonging to different cache containers can have different transport configurations, optimized for different use cases.
Cache container implementations are heavyweight objects. It should be possible to use just one cache container per JVM, unless a specific configuration requires more than one instance - and even then there should be a minimal and finite number of such instances.
A WildFly cache container is the WildFly service wrapper around an Infinispan cache container. Each <cache-container> element eventually results in an org.infinispan.manager.DefaultCacheManager instance being created in the JVM.
The corresponding WildFly/JDG configuration element is <cache-container>. The <cache-container> elements are children of the "infinispan" (for WildFly) or "infinispan:server:core:" (for JDG) subsystems. More details about cache container configuration can be found here:
From an API perspective, the cache container is the primary API mechanism to retrieve cache instances or create cache instances on demand. For more details see:
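A minimal library-mode sketch of that API usage; it assumes an infinispan.xml on the classpath declaring a cache named "myCache" (both the file name and the cache name are illustrative):

 import java.io.IOException;
 import org.infinispan.Cache;
 import org.infinispan.manager.DefaultCacheManager;
 public class CacheContainerExample {
     public static void main(String[] args) throws IOException {
         // the cache container is heavyweight: one instance per JVM is usually enough
         DefaultCacheManager manager = new DefaultCacheManager("infinispan.xml");
         // the container is the API entry point for retrieving cache instances
         Cache<String, String> cache = manager.getCache("myCache");
         cache.put("key", "value");
         manager.stop();
     }
 }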
Cache Manager
A cache manager and a cache container represent essentially the same concept: "cache container" is the name used in WildFly/JDG configuration, while "cache manager" (DefaultCacheManager) is the name used at the API level.
Cache
Default Cache and Named Caches
Each cache container has a default cache instance. The default cache can be retrieved via the CacheManager.getCache() API.
Named caches are retrieved via the CacheManager.getCache(String name) API. Note that the name attribute of a named cache is mandatory and must be unique for every named cache specified. Named caches have the same XML schema as the default cache, so they inherit settings from the default cache, while additional behavior can be specified or overridden.
The default cache for a specific cache container is configured using the default-cache configuration attribute.
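A short sketch of the difference using the embedded API; the named cache "orders" and its configuration are illustrative:

 import org.infinispan.Cache;
 import org.infinispan.configuration.cache.ConfigurationBuilder;
 import org.infinispan.manager.DefaultCacheManager;
 DefaultCacheManager manager = new DefaultCacheManager();
 // the default cache of this cache container
 Cache<String, String> defaultCache = manager.getCache();
 // define and retrieve a named cache; the name must be unique within the container
 manager.defineConfiguration("orders", new ConfigurationBuilder().build());
 Cache<String, String> orders = manager.getCache("orders");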
Persistence
Cache Store
A cache store implements the CacheLoader or CacheWriter interfaces, or both.
Cache stores are deployed in a chain. A cache read operation looks at all of the installed CacheLoaders, in the order they are installed, until it finds a valid and non-null element of data. When performing writes, all CacheWriters are written to, unless the ignoreModifications element has been set to true for a specific cache writer.
More details on cache store configuration:
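A programmatic sketch using the single-file store that ships with Infinispan 6; the store location is an assumption:

 import org.infinispan.configuration.cache.Configuration;
 import org.infinispan.configuration.cache.ConfigurationBuilder;
 // one store in the chain; further addStore()/addSingleFileStore() calls extend the chain
 ConfigurationBuilder builder = new ConfigurationBuilder();
 builder.persistence()
     .addSingleFileStore()
         .location("/tmp/ispn-store")
         .ignoreModifications(false); // set true to skip this writer on writes
 Configuration config = builder.build();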
Cache Store Passivation
If passivation is enabled (by default it is not), data is only written to the cache store when it is evicted from memory. The next time the data is requested, it will be 'activated', which means that the data will be brought back into memory and removed from the persistent store. If passivation is disabled (the default), the cache store contains a copy of the contents in memory, so writes to the cache result in cache store writes. This essentially implements 'write-through' behavior. The interaction between the cache and cache store during passivation is described in detail here: Passivation.
Configuration details:
The number of passivations per node is exposed via the passivations CLI metric.
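A sketch of a passivation-enabled cache, assuming an in-memory limit of 1000 entries and a single-file store:

 import org.infinispan.configuration.cache.Configuration;
 import org.infinispan.configuration.cache.ConfigurationBuilder;
 ConfigurationBuilder builder = new ConfigurationBuilder();
 // entries evicted past this limit are passivated to the store
 builder.eviction().maxEntries(1000);
 builder.persistence()
     .passivation(true) // write to the store only on eviction
     .addSingleFileStore().location("/tmp/ispn-store");
 Configuration config = builder.build();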
Cache Store Activation
The process of bringing an entry into memory from a cache store.
The number of activations per node is exposed via the activations CLI metric.
Shared Cache Store
A shared cache store means that the cache store instance is shared among different cache instances (e.g., multiple nodes in a cluster using a JDBC-based CacheStore pointing to the same, shared database). Setting "shared" to true prevents repeated and unnecessary writes of the same data to the cache store by different cache instances: only the node where the modification originated will write to the cache store. If disabled, each individual cache reacts to a potential remote update by storing the data to the cache store.
Configuration details:
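A sketch using the JDBC string-based store; the table and connection settings (an in-memory H2 database) are illustrative assumptions, and a file-based store is deliberately not used here, since a shared store must be reachable by all nodes:

 import org.infinispan.configuration.cache.Configuration;
 import org.infinispan.configuration.cache.ConfigurationBuilder;
 import org.infinispan.persistence.jdbc.configuration.JdbcStringBasedStoreConfigurationBuilder;
 ConfigurationBuilder builder = new ConfigurationBuilder();
 builder.persistence()
     .addStore(JdbcStringBasedStoreConfigurationBuilder.class)
         .shared(true) // only the originating node writes to the shared database
         .table()
             .tableNamePrefix("ISPN_ENTRIES")
             .idColumnName("ID").idColumnType("VARCHAR(255)")
             .dataColumnName("DATA").dataColumnType("BINARY")
             .timestampColumnName("TS").timestampColumnType("BIGINT")
         .connectionPool()
             .connectionUrl("jdbc:h2:mem:grid;DB_CLOSE_DELAY=-1")
             .username("sa")
             .driverClass("org.h2.Driver");
 Configuration config = builder.build();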
Cache Store Preloading
If the store is configured to do preloading, data stored in the cache loader will be pre-loaded into memory when the cache starts. This is particularly useful when data in the cache loader is needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. It can be used to provide a 'warm' cache on startup, though there is a performance penalty, as startup time is affected by this process. Note that preloading is done in a local fashion, so any data loaded is only stored locally in the node. No replication or distribution of the preloaded data happens. Also, Infinispan only preloads up to the maximum number of entries configured in eviction.
Configuration details:
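A sketch, again assuming a single-file store:

 import org.infinispan.configuration.cache.Configuration;
 import org.infinispan.configuration.cache.ConfigurationBuilder;
 ConfigurationBuilder builder = new ConfigurationBuilder();
 builder.persistence()
     .addSingleFileStore()
         .location("/tmp/ispn-store")
         .preload(true); // load stored entries into memory when the cache starts
 Configuration config = builder.build();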
Cache Store Purge
If purge is set to true, the specified cache store is emptied (provided ignoreModifications is false) when the cache loader starts up.
Configuration details:
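A sketch, once more assuming a single-file store:

 import org.infinispan.configuration.cache.Configuration;
 import org.infinispan.configuration.cache.ConfigurationBuilder;
 ConfigurationBuilder builder = new ConfigurationBuilder();
 builder.persistence()
     .addSingleFileStore()
         .location("/tmp/ispn-store")
         .purgeOnStartup(true); // empty this store when the cache loader starts
 Configuration config = builder.build();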
Connecting to an Infinispan Server
HotRod Connector
The HotRod connector is configured in the infinispan:server:endpoint Section.
For more details on HotRod clients and how they connect to a HotRod server:
memcached Connector
The memcached connector is configured in the infinispan:server:endpoint Section.
REST Connector
The REST connector is configured in the infinispan:server:endpoint Section.
HotRod
Cross-Site Replication
JDG allows linking of two otherwise isolated clustered caches over a link optimized to traverse a WAN.
RELAY Protocol
For concepts related to the underlying JGroups RELAY2 protocol see:
Site Master
Cross-Site Replication Configuration
The configuration procedure is documented here: