Nishant Chandra is a Principal Software Engineer. His main interests are in building scalable software, SOA, data mining and mobile. He has been working on e-commerce applications based on large J2EE and peer-to-peer technology. In the past, Nishant has worked at Adobe Inc., among others. He also contributes to open source projects. Other than software technology, he is interested in analytics, product management, Internet marketing and startups. Nishant is a DZone MVB and is not an employee of DZone.

Getting Started: Infinispan as Remote Cache Cluster

This guide will walk you through configuring and running Infinispan as a remote distributed cache cluster. There is straightforward documentation for running Infinispan in embedded mode. But there is no complete documentation for running Infinispan in client/server or remote mode. This guide helps bridge the gap.

Infinispan offers four modes of operation, which determine how and where the data is stored:
  • Local, where entries are stored on the local node only, regardless of whether a cluster has formed. In this mode Infinispan is typically operating as a local cache
  • Invalidation, where all entries are stored into a cache store (such as a database) only, and invalidated from all nodes. When a node needs the entry it will load it from a cache store. In this mode Infinispan is operating as a distributed cache, backed by a canonical data store such as a database
  • Replication, where all entries are replicated to all nodes. In this mode Infinispan is typically operating as a data grid or a temporary data store, but doesn't offer an increased heap space
  • Distribution, where entries are distributed to a subset of the nodes only. In this mode Infinispan is typically operating as a data grid providing an increased heap space
Invalidation, Replication and Distribution can all use synchronous or asynchronous communication.
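In the XML configuration, these modes map onto the mode attribute of the <clustering> element, with <sync/> or <async/> selecting the communication style. A minimal sketch (the cache names here are made up for illustration, and numOwners="2" is just an example value):

```xml
<!-- Hypothetical named caches illustrating each clustered mode (Infinispan 5.x schema) -->
<namedCache name="invalidatedCache">
   <clustering mode="invalidation"><sync/></clustering>
</namedCache>
<namedCache name="replicatedCache">
   <clustering mode="replication"><async/></clustering>
</namedCache>
<namedCache name="distributedCache">
   <clustering mode="distribution">
      <sync/>
      <hash numOwners="2"/> <!-- each entry is kept on 2 nodes -->
   </clustering>
</namedCache>
```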

Infinispan offers two access patterns, both of which are available in any runtime:
  • Embedded into your application code
  • As a Remote server accessed by a client (REST, memcached or Hot Rod)
In this guide, we will configure an Infinispan server with a Hot Rod endpoint and access it via a Java Hot Rod client. One reason to use the Hot Rod protocol is that it provides automatic load balancing and failover.

1. Download the full Infinispan distribution. I will use version 5.1.5.
2. Configure Infinispan to run in distributed mode. Create infinispan-distributed.xml.

 <infinispan xmlns="urn:infinispan:config:5.1">
    <global>
       <globalJmxStatistics enabled="true"/>
       <transport clusterName="demoCluster">
          <properties>
             <property name="configurationFile" value="jgroups.xml"/>
          </properties>
       </transport>
    </global>
    <default>
       <jmxStatistics enabled="true"/>
       <clustering mode="distribution">
          <async/>
          <hash numOwners="2"/>
       </clustering>
    </default>
    <namedCache name="myCache">
       <clustering mode="distribution">
          <sync/>
          <hash numOwners="2"/>
       </clustering>
    </namedCache>
 </infinispan>

(Attribute values such as the cluster name and numOwners are representative; adjust them to your needs.)

We will use JGroups to set up cluster communication. Copy etc/jgroups-tcp.xml as jgroups.xml.

3. Place infinispan-distributed.xml and jgroups.xml in the bin folder. Start two Infinispan instances on the same or different machines.

Starting an Infinispan server is pretty easy. You need to download and unzip the Infinispan distribution and use the startServer script.

bin\startServer.bat --help // Print all available options
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11222
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11223

The 2 server instances will start talking to each other via JGroups.
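If the instances run on different machines, the discovery section of jgroups.xml must list the remote hosts. A minimal sketch of the relevant TCPPING fragment (the host names and ports below are placeholders, not values from this article):

```xml
<!-- Fragment of jgroups.xml: TCP transport with static host discovery. -->
<!-- host1/host2 and port 7800 are hypothetical; replace with your nodes. -->
<TCP bind_port="7800"/>
<TCPPING initial_hosts="host1[7800],host2[7800]"
         port_range="1"
         timeout="3000"
         num_initial_members="2"/>
```

With multicast discovery (the MPING/PING protocols), no host list is needed, but multicast is often blocked on LANs, so a static TCPPING list is the safer default.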

4. Create a simple Remote HotRod Java Client.
import java.net.URL;
import java.util.Map;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.ServerStatistics;

public class Quickstart {

   public static void main(String[] args) {
      // Load the client configuration from the classpath
      URL resource = Thread.currentThread().getContextClassLoader()
            .getResource("hotrod-client.properties");
      RemoteCacheManager cacheContainer = new RemoteCacheManager(resource, true);

      // Obtain a handle to the remote cache
      RemoteCache<String, String> cache = cacheContainer.getCache("myCache");

      // Now add something to the cache and make sure it is there
      cache.put("car", "ferrari");
      if (cache.get("car").equals("ferrari")) {
         System.out.println("Found");
      } else {
         System.out.println("Not found!");
      }

      // Remove the data
      cache.remove("car");

      // Print cache statistics
      ServerStatistics stats = cache.stats();
      for (Map.Entry<String, String> stat : stats.getStatsMap().entrySet()) {
         System.out.println(stat.getKey() + " : " + stat.getValue());
      }

      // Print cache properties
      System.out.println(cacheContainer.getProperties());

      cacheContainer.stop();
   }
}

5. Define hotrod-client.properties and place it on the classpath:

infinispan.client.hotrod.server_list = localhost:11222;localhost:11223;
infinispan.client.hotrod.socket_timeout = 500
infinispan.client.hotrod.connect_timeout = 10

## below is connection pooling config
maxTotal = -1
maxIdle = -1
whenExhaustedAction = 1
testWhileIdle = true
minIdle = 1

See RemoteCacheManager for all available properties.

6. Run the client. You will see something like this on the console:

Jul 22, 2012 9:40:39 PM org.infinispan.client.hotrod.impl.protocol.Codec10 readNewTopologyAndHash
INFO: ISPN004006: localhost/ sent new topology view (id=3) containing 2 addresses: [/, /]
Found
hits : 3
currentNumberOfEntries : 1
totalBytesRead : 332
timeSinceStart : 1281
totalNumberOfEntries : 8
totalBytesWritten : 926
removeMisses : 0
removeHits : 0
retrievals : 3
stores : 8
misses : 0
{, , , , , , , , ;localhost:11223;, , }

As you will notice, the cache server returns the cluster topology when the connection is established. You can start more Infinispan instances and notice that the cluster topology changes quickly.

That's it!

Published at DZone with permission of Nishant Chandra, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Jacob Nikom replied on Wed, 2012/10/03 - 2:49pm

Hi Chandra,

Thank you for your tutorial.


I implemented it and for the local server it worked as you described.


However, when I tried to use a non-local server (on the same LAN),

it did not work - it complained that it could not find the server.

Have you ever tried your program with the Hot Rod server running on a non-local node?


Here is my properties file:

 shell> more
infinispan.client.hotrod.server_list = localhost:11222;;
infinispan.client.hotrod.socket_timeout = 500
infinispan.client.hotrod.connect_timeout = 10

## below is connection pooling config
maxTotal = -1
maxIdle = -1
whenExhaustedAction = 1
testWhileIdle = true
minIdle = 1

I found that the server was removed by TcpTransportFactory class

15:37:15,410 TRACE TcpTransportFactory:200 - Current list: [/, localhost/]
15:37:15,410 TRACE TcpTransportFactory:201 - New list: [/, /]
15:37:15,411 TRACE TcpTransportFactory:202 - Added servers: [/]
15:37:15,411 TRACE TcpTransportFactory:203 - Removed servers: [/]
15:37:15,411  INFO TcpTransportFactory:212 - ISPN004014: New server added(/, adding to the pool.
15:37:15,412 TRACE TransportObjectFactory:59 - Created tcp transport: TcpTransport{socket=Socket[addr=/,port=11223,localport=53248], serverAddress=/, id =3}
15:37:15,412 TRACE TransportObjectFactory:111 - Returning to pool: TcpTransport{socket=Socket[addr=/,port=11223,localport=53248], serverAddress=/, id =3}

So I no longer have this server and your program does not work.


Do you know why it was removed from the server list?

What should be done to prevent it and fix the problem?


Best regards,


Jacob Nikom


Aamir Iqbal replied on Wed, 2012/10/31 - 2:49am

 Dear Nishant !

         It is truly very informative article. Thank you very much for writing it. BUT

         As I am currently implementing RemoteCache, I am facing many issues. For example, you can't get the size of a remote cache, and you can't get the keySet, values, or entrySet of a remote cache, as they have not been implemented yet. Which is very frustrating as well as disappointing.

Also, I have concerns about your above code snippet. Does the following line compile without errors?

" (Map.Entry stat : stats.getStatsMap().entrySet()) { "

where are:

      Variable "cache"

      Variable "resource"

      Variable "stats"

defined ????

Waiting for your response,


Amir Iqbal
