
Implementing a Declarative and Flexible Cache in Spring


Caching has always been a common need, both to speed up an application and to alleviate its workload. Its usefulness is particularly evident today with the rise of social web applications that are visited by thousands of people at the same time. From an architectural point of view, however, caching is orthogonal to the application’s business logic, and for this reason it should have a minimal impact on the development of the application itself.

That’s why I tried to leverage some of the features provided by Spring, in particular aspects and events, to build a declarative yet flexible caching mechanism. First of all I created a @Cacheable annotation defined as:

// must be retained at runtime so the aspect can read it
@Retention(RetentionPolicy.RUNTIME)
public @interface Cacheable {
    // cache entry expires in 1 hour by default
    int expiresInSec() default 3600;
}

with the purpose of using it to annotate any method whose invocation result I want to cache, as follows:

@Cacheable(expiresInSec = 600)
public Result getResult() {
    // executes some very expensive calculation
}

The expiresInSec parameter defines how long the cached result remains valid before being recomputed, and defaults to 1 hour. To check whether the result of a method annotated with @Cacheable is already held in a valid cache entry before actually invoking it, I wrote an aspect that intercepts the invocations themselves:

@Component @Aspect
public class Memoizer {

    private Cache cache;

    private CacheManager cacheManager;

    public void init() {
        cache = cacheManager.getCache("Cache");
    }

    // the pointcut binds the @Cacheable annotation of the intercepted method
    @Around("@annotation(cacheable)")
    public Object aroundCacheableMethod(ProceedingJoinPoint pjp, Cacheable cacheable) throws Throwable {
        String objectKey = getObjectKey(pjp);
        Element result = cache.get(objectKey);

        if (result == null) {
            Object value = pjp.proceed();
            // element() is a small helper building an EHCache Element with the given time-to-live
            result = element(objectKey, value, cacheable.expiresInSec());
            cache.put(result);
            registerKeyCache(cacheable, objectKey);
        }
        return result.getObjectValue();
    }
}

The around advice intercepts the invocation, generates a key that uniquely identifies it and checks whether there is already a valid cached value for that key. If so, it returns the cached value; otherwise it invokes the target method, caches its result in order to serve subsequent calls, and returns it. I used EHCache to hold the cached results because I needed a solution that works in a cluster, but you can easily replace it with similar solutions like Memcached, or even a plain Map if your application runs in a single JVM. The key used to uniquely identify a method invocation is calculated as follows:

private String getObjectKey(ProceedingJoinPoint pjp) {
    String targetName = pjp.getTarget().getClass().getSimpleName();
    String methodName = pjp.getSignature().getName();
    Object[] args = pjp.getArgs();

    StringBuilder sb = new StringBuilder(targetName).append(".").append(methodName);
    if (args != null) for (Object arg : args) sb.append(".").append(arg);
    return sb.toString();
}

The key is just a String generated by concatenating the name of the Class containing the invoked method, the name of the method itself and, finally, the result of invoking toString() on each argument. The reason behind this choice should be obvious: if the application invokes the same method many times with the same arguments, it should generate the same result, at least during a given time frame. In particular, using the arguments’ String representation in the key works only if you provide your domain objects with a meaningful toString(), which is in general a good practice anyway. However, if this strategy is too simplistic for your model, it is trivial to create an EntryKey object containing the target Class, the method name and the array of arguments, and implement its hashCode() and equals() methods accordingly.
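For what it’s worth, a minimal sketch of that EntryKey alternative could look like the following (the class name and field layout are my own guess, not taken from the attached sources):

```java
import java.util.Arrays;

// Identifies a method invocation by target class, method name and arguments,
// without relying on the arguments' toString() implementations.
final class EntryKey {

    private final Class<?> targetClass;
    private final String methodName;
    private final Object[] args;

    EntryKey(Class<?> targetClass, String methodName, Object[] args) {
        this.targetClass = targetClass;
        this.methodName = methodName;
        this.args = args == null ? new Object[0] : args;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof EntryKey)) return false;
        EntryKey other = (EntryKey) o;
        return targetClass.equals(other.targetClass)
                && methodName.equals(other.methodName)
                && Arrays.deepEquals(args, other.args);
    }

    @Override
    public int hashCode() {
        return 31 * (31 * targetClass.hashCode() + methodName.hashCode())
                + Arrays.deepHashCode(args);
    }
}
```

Arrays.deepEquals() and Arrays.deepHashCode() are used so that array-typed arguments are compared by content rather than by reference.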

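As an aside, the plain-Map alternative for single-JVM deployments mentioned earlier could be sketched roughly like this (MapCache and its expiry scheme are purely illustrative, not part of the attached code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal single-JVM stand-in for EHCache: values expire after a per-entry TTL.
class MapCache {

    private static final class CacheEntry {
        final Object value;
        final long expiresAtMillis;
        CacheEntry(Object value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, CacheEntry> entries = new ConcurrentHashMap<String, CacheEntry>();

    void put(String key, Object value, int expiresInSec) {
        entries.put(key, new CacheEntry(value, System.currentTimeMillis() + expiresInSec * 1000L));
    }

    // Returns the cached value, or null if absent or expired.
    Object get(String key) {
        CacheEntry entry = entries.get(key);
        if (entry == null) return null;
        if (System.currentTimeMillis() > entry.expiresAtMillis) {
            entries.remove(key);  // lazily evict expired entries
            return null;
        }
        return entry.value;
    }

    void remove(String key) {
        entries.remove(key);
    }
}
```

Of course this loses clustering and the background eviction EHCache provides, but it keeps the aspect code unchanged.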
That said, defining an interval during which a cached value is valid is often not enough: something could happen elsewhere in the application that invalidates the cached result regardless of its age. That’s why I felt the need for a more sophisticated invalidation mechanism, and I decided to implement it with events in order to achieve the highest possible decoupling. The idea is pretty simple: notify the cache that the entries generated by some specific method invocations need to be invalidated by triggering an event inside the application context. To this end I added a parameter to the @Cacheable annotation:

public @interface Cacheable {
    int expiresInSec() default 3600;
    Class<? extends ApplicationEvent> invalidatingEvent() default VoidEvent.class;
}

The invalidatingEvent is the class of the ApplicationEvent that, when triggered, invalidates the cache entries generated by the annotated method. VoidEvent is just a default placeholder used when the cache entries don’t need to be invalidated by any event. Moreover, I let the cache listen to all the events triggered inside the application by implementing the ApplicationListener interface:
@Component @Aspect
public class Memoizer implements ApplicationListener<ApplicationEvent> {

    // second cache binding the class of an invalidating event to the keys it invalidates
    private Cache keyCache;

    public void onApplicationEvent(ApplicationEvent event) {
        Element result = keyCache.get(event.getClass());
        if (result == null) return;

        Set<CachedKey> cachedKeys = (Set<CachedKey>) result.getObjectValue();
        for (CachedKey cachedKey : cachedKeys) cache.remove(cachedKey.getKey());
    }
}

If listening to all the ApplicationEvents triggered in the application context seems too much, you could define a hierarchy of events extending an InvalidateCacheEvent and make the Memoizer listen only to those. Here the keyCache is a second cache I added to the Memoizer in order to bind the class of the invalidating event to the Set of keys of the cache entries that have to be removed when that specific event is triggered. Indeed, the registerKeyCache() method invoked in the around advice when a new entry is about to be added to the cache is implemented as follows:

private void registerKeyCache(Cacheable cacheable, String key) {
    Class<? extends ApplicationEvent> invalidatingEvent = cacheable.invalidatingEvent();
    if (invalidatingEvent == VoidEvent.class) return;

    Element result = keyCache.get(invalidatingEvent);
    Set<CachedKey> keys;

    if (result == null) {
        keys = new HashSet<CachedKey>();
        keyCache.put(new Element(invalidatingEvent, keys));
    } else keys = (Set<CachedKey>) result.getObjectValue();

    keys.add(new CachedKey(key, cacheable));
}

Pretty good, but not completely satisfying. Suppose a cached method invocation returns a list of objects (for example the results of a query on our database) and then only one of those objects is deleted in a different part of the application. We could invalidate the cached list by triggering an event that notifies of the object’s deletion, but in this case we probably don’t want to throw away the whole list, just to remove the no longer existing object from it. To this end I added a third parameter to @Cacheable that makes it possible to optionally define an invalidation strategy:

public @interface Cacheable {
    int expiresInSec() default 3600;
    Class<? extends ApplicationEvent> invalidatingEvent() default VoidEvent.class;
    Class<? extends CacheInvalidator> invalidationStrategy() default DefaultCacheInvalidator.class;
}

where the DefaultCacheInvalidator just removes the whole entry from the cache as it did before:

public class DefaultCacheInvalidator implements CacheInvalidator<ApplicationEvent> {
    public boolean invalidate(Cache cache, String key, ApplicationEvent invalidatingEvent) {
        // returning true tells the Memoizer to remove the whole entry from the cache
        return true;
    }
}

To achieve this result I slightly modified the method that listens to the invalidating events as follows:

public void onApplicationEvent(ApplicationEvent event) {
    Element result = keyCache.get(event.getClass());
    if (result == null) return;

    Set<CachedKey> cachedKeys = (Set<CachedKey>) result.getObjectValue();
    Set<CachedKey> toBeRemoved = new HashSet<CachedKey>();
    for (CachedKey cachedKey : cachedKeys) {
        if (cachedKey.invalidate(cache, event)) toBeRemoved.add(cachedKey);
    }

    // entries whose strategy asked for full invalidation are dropped from both caches
    for (CachedKey cachedKey : toBeRemoved) cache.remove(cachedKey.getKey());
    cachedKeys.removeAll(toBeRemoved);
}

and implemented the invalidate() method of the CachedKey to make it use the chosen invalidation strategy:

public <T extends ApplicationEvent> boolean invalidate(Cache cache, T invalidatingEvent) {
    return getInvalidator().invalidate(cache, key, invalidatingEvent);
}

private CacheInvalidator getInvalidator() {
    try {
        return cacheable.invalidationStrategy().newInstance();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

After this last refactoring it is possible to implement a custom CacheInvalidator that, as required by the former example, removes a single item from a list:

public final class ItemCacheInvalidator implements CacheInvalidator<RemovedItemEvent> {
    public boolean invalidate(Cache cache, String key, RemovedItemEvent invalidatingEvent) {
        Element element = cache.get(key);
        if (element == null) return true;
        List<Item> items = (List<Item>) element.getObjectValue();
        items.remove(invalidatingEvent.getItem()); // assuming the event carries the removed Item
        return false; // the pruned list stays in the cache
    }
}

This, in the end, allows us, as promised, to cache the result of an expensive query in a declarative though very flexible and readable way, as simply as follows:

@Cacheable(expiresInSec = 3600, invalidatingEvent = RemovedItemEvent.class, invalidationStrategy = ItemCacheInvalidator.class)
public List<Item> fetchItems() {
    // Executes a very time consuming query
}

The complete source code implementing the caching mechanism I am proposing is attached to the article. I hope I have demonstrated how a mix of Spring features, well-chosen patterns and a few fairly simple ideas can help to find better solutions to everyday problems, allowing us to develop cleaner, and therefore easier to maintain, applications.

Published at DZone with permission of its author, Mario Fusco.



Amin Abbaspour replied on Mon, 2010/06/14 - 2:16am

Come on dude. Why reinvent the wheel? Haven't you seen this?

Alessandro Santini replied on Mon, 2010/06/14 - 2:26am

This example could be a good point to start again the debate about when and how annotations should be used.

Nothing against the general principle - leaving application code unaware of caching is indeed a good thing - but let me point out that:

  • Using annotations/AOP is everything but a new thing;
  • I would generally refrain from specifying a timeout value in the annotation. This kind of setting is one of the most common tweaking factors of an application and I would hardly like to recompile a class just to change it.
  • On top of all: I was following a quite similar approach years ago using AspectJ and a Cacheable interface. Why an annotation now? Where's the extra benefit?
Thanks for your answers.

Mario Fusco replied on Mon, 2010/06/14 - 2:33am in response to: Amin Abbaspour

Sorry, but I am afraid you didn't read my article carefully. The choice of the Cache/Map/Collection in which to keep the cached data is only a marginal point, and I believe I underlined that. The purpose of the article was to show how to intercept method invocations and cache their results (I don't mind where) by simply annotating them.

Cheers, Mario

Mario Fusco replied on Mon, 2010/06/14 - 2:48am in response to: Alessandro Santini

> Using annotations/AOP is everything but a new thing

I didn't say I was going to rewrite the history of software engineering :) Anyway I think there is something new, or at least something I never saw implemented anywhere else, in how I used events to invalidate cache entries in a more precise and business-driven way.

> I would generally refrain from specifying a timeout value in the annotation. This kind of setting is one of the most common tweaking factors of an application and I would hardly like to recompile a class just to change it

I don't think a timeout is the best way to decide whether a cache entry is still valid. That's why I implemented the event-driven invalidation mechanism I mentioned before. Moreover, the timeout value is not the same for the whole application but can change from method to method, giving you finer granularity when tweaking your caches.

> On top of all: I was following a quite similar approach years ago using AspectJ and a Cacheable interface. Why an annotation now? Where's the extra benefit?

I believe once again the answer is finer granularity. Suppose you have a DAO with, let's say, 5 methods and you want to cache the results of only 3 of them, with different timeouts, and possibly define an event that allows you to invalidate a given cache entry regardless of the chosen timeout. How could you do that just by implementing an interface?
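To make that concrete, here is a rough sketch of such a DAO (ItemDao and its method names are made up for illustration, and the annotation is repeated from the article so the fragment stands alone; the invalidatingEvent parameter is omitted for brevity):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Collections;
import java.util.List;

// The article's annotation, repeated here so the example compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Cacheable {
    int expiresInSec() default 3600;
}

// Hypothetical DAO: three of its methods are cached with different timeouts,
// the other two are left uncached simply by not annotating them.
class ItemDao {

    @Cacheable(expiresInSec = 3600)
    public List<String> fetchAllItems() { return Collections.emptyList(); }

    @Cacheable(expiresInSec = 600)
    public List<String> fetchRecentItems() { return Collections.emptyList(); }

    @Cacheable  // default: 1 hour
    public String fetchDescription(long id) { return ""; }

    public void saveItem(String item) { }   // not cached

    public void deleteItem(long id) { }     // not cached
}
```

This per-method granularity is exactly what a class-level marker interface cannot express.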

Nello Sgambato replied on Mon, 2010/06/14 - 9:31am

i don't care if this snippet could be used in production.

i really appreciate the informative intent.

you expressed distinctly how to join several puzzle pieces together

throwing over the glue code!



could you supply us with an example of interaction with the cache, please

i.e. firing an invalidation event



Mario Fusco replied on Mon, 2010/06/14 - 10:53am in response to: Nello Sgambato

Actually I am already using that code in production, but as you pointed out the main purpose of my article is to show how to put together some Spring features in order to implement another reusable component.

As for the event firing part, sorry that I gave it for granted. Basically what I did in my code is to have another component defined as it follows:

public class SpringEventPublisher implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    public void publishEvent(ApplicationEvent event) {
        applicationContext.publishEvent(event);
    }

    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}

so I can inject the SpringEventPublisher bean into other beans of my application and trigger ApplicationEvents as easily as:

eventPublisher.publishEvent(new RemovedItemEvent());

where RemovedItemEvent (which extends ApplicationEvent) is the event I was speaking about in the last example of my article.

I hope this helps

Cheers, Mario

Seshendra Nalla replied on Mon, 2010/06/14 - 1:37pm

I have been researching an optimal way to prefetch and cache some of our application data (through DB, WebService, MQ etc.) responsible for painting the first page. From the point the user hits the home page until he reaches the first landing page (after logging in), there are several extraneous authentication calls which consume a significant amount of time (which is really out of my control), hence I decided to utilize that time to prefetch some of the data, so that, when the requests hit my application, it would start serving immediately instead of triggering and waiting for the data aggregation process.

 I found a way to trigger the prefetch, but the whole point here is to persist the prefetched data in an efficient and flexible manner.

  • I initially considered using a persistent data store but decided against it, as that would make unnecessary remote calls and nullify the benefit of prefetching
  • Later I spent some time exploring an In-Memory DB, but the overhead of transforming object data to RDBMS and vice-versa proved inefficient in my current context. Also, the memory footprint of the In-Memory DB is definitely a drawback.
  • The third option I considered was to use a Cache Implementation (such as OSCache or EHCache) to cache java objects.

Having found the Cache implementation close enough for my requirement, and your article explaining how to do it using AOP/Spring and annotations (which was very similar to what I wanted to design), I want to seek some inputs from you (and the group) on your past experiences with such implementations, common pain areas, maintenance drawbacks etc.


Seshendra Nalla.
