Sunday, 10 April 2016

How to implement a thread-safe cache?


Example 1. Use HashMap as cache
// Define cache
Map<A, V> cache = new HashMap<A, V>();

// Check the cache; on a miss, compute the value and store it
V result = cache.get(key);
if (result == null) {
  result = compute(key);
  cache.put(key, result);
}
return result;


HashMap is not thread-safe: multiple threads can end up computing the result for the same key at the same time, and concurrent puts can even corrupt the map's internal structure. A complete sketch of this approach follows the diagram below.

A ---> compute
B --------------> compute
C -------------------------> compute 
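
To make this concrete, here is a minimal sketch of example 1 wrapped in a class. The Computable interface and the HashMapCache name are only illustrative (not from the snippet above); compute(key) stands for the expensive work we want to cache.

import java.util.HashMap;
import java.util.Map;

// Illustrative interface for the expensive computation we want to cache
interface Computable<A, V> {
    V compute(A key) throws InterruptedException;
}

// Example 1 as a class: correct for a single thread, but NOT thread-safe
class HashMapCache<A, V> implements Computable<A, V> {
    private final Map<A, V> cache = new HashMap<A, V>();
    private final Computable<A, V> delegate;

    HashMapCache(Computable<A, V> delegate) {
        this.delegate = delegate;
    }

    public V compute(A key) throws InterruptedException {
        V result = cache.get(key);            // check-then-act, not atomic
        if (result == null) {
            result = delegate.compute(key);   // several threads can get here
            cache.put(key, result);           // concurrent puts can corrupt HashMap
        }
        return result;
    }
}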
 

Example 2. Use ConcurrentHashMap as cache instead of HashMap
// Define cache
Map<A, V> cache = new ConcurrentHashMap<A, V>();

ConcurrentHashMap makes the individual map operations thread-safe, but the check-then-act sequence (get, then compute, then put) is still not atomic.
If two threads call compute for the same key at the same time, both can miss the cache and compute the same value.
For an expensive computation this duplicated work is a real performance hit. A sketch of this version follows the diagram below.

A ---> Check in cache  ---> compute ----> Add to cache
B ---------------------> Check in cache  ------> compute ----> Add to cache
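
For comparison, here is the same wrapper over a ConcurrentHashMap, reusing the illustrative Computable interface from the sketch above. Each map call is now safe on its own, but the get/compute/put sequence is still not atomic:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Example 2 as a class: the map is thread-safe, but duplicate computation remains
class ConcurrentHashMapCache<A, V> implements Computable<A, V> {
    private final Map<A, V> cache = new ConcurrentHashMap<A, V>();
    private final Computable<A, V> delegate;

    ConcurrentHashMapCache(Computable<A, V> delegate) {
        this.delegate = delegate;
    }

    public V compute(A key) throws InterruptedException {
        V result = cache.get(key);            // two threads can both see a miss here
        if (result == null) {
            result = delegate.compute(key);   // ...and both compute the same value
            cache.put(key, result);           // last writer wins; the work was wasted
        }
        return result;
    }
}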


Example 3. Use FutureTask as the value in ConcurrentHashMap (best)

Future.get returns the result of the computation: immediately if it is already available, otherwise it blocks until the result has been computed.
// Define cache
ConcurrentMap<A, Future<V>> cache = new ConcurrentHashMap<A, Future<V>>();

// Get the cached Future; on a miss, install a new FutureTask atomically
Future<V> value = cache.get(key);
if (value == null) {
    Callable<V> eval = new Callable<V>() {
        public V call() throws InterruptedException {
            return compute(key);
        }
    };
    FutureTask<V> ft = new FutureTask<V>(eval);
    value = cache.putIfAbsent(key, ft);   // atomic: only one thread per key wins
    if (value == null) {
        value = ft;
        ft.run();   // invokes eval.call(), i.e. compute(key), on this thread
    }
}

try {
    return value.get();   // returns immediately if done, otherwise blocks
} catch (ExecutionException e) {
    throw launderThrowable(e.getCause());
}
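
The snippet calls launderThrowable, which is not defined above; a common version (this memoizer idiom comes from Java Concurrency in Practice) rethrows RuntimeExceptions and Errors and treats any other cause as an unexpected checked exception. Putting everything together, here is a self-contained sketch of example 3 as a class, again reusing the illustrative Computable interface from the first sketch. Removing the Future on cancellation or failure is an extra design choice, so a bad attempt does not stay cached forever.

import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Example 3 as a class: at most one thread computes each key, the rest wait on the Future
class FutureCache<A, V> implements Computable<A, V> {
    private final ConcurrentMap<A, Future<V>> cache = new ConcurrentHashMap<A, Future<V>>();
    private final Computable<A, V> delegate;

    FutureCache(Computable<A, V> delegate) {
        this.delegate = delegate;
    }

    public V compute(final A key) throws InterruptedException {
        while (true) {
            Future<V> value = cache.get(key);
            if (value == null) {
                Callable<V> eval = new Callable<V>() {
                    public V call() throws InterruptedException {
                        return delegate.compute(key);
                    }
                };
                FutureTask<V> ft = new FutureTask<V>(eval);
                value = cache.putIfAbsent(key, ft);  // atomic: only one task per key
                if (value == null) {
                    value = ft;
                    ft.run();                        // the winning thread computes
                }
            }
            try {
                return value.get();                  // everyone else blocks here
            } catch (CancellationException e) {
                cache.remove(key, value);            // retry after a cancelled attempt
            } catch (ExecutionException e) {
                cache.remove(key, value);            // don't cache a failed computation
                throw launderThrowable(e.getCause());
            }
        }
    }

    // One common form of launderThrowable
    private static RuntimeException launderThrowable(Throwable t) {
        if (t instanceof RuntimeException) {
            return (RuntimeException) t;
        } else if (t instanceof Error) {
            throw (Error) t;
        } else {
            throw new IllegalStateException("Not unchecked", t);
        }
    }
}

Wrapping any expensive function is then just new FutureCache<A, V>(expensiveFunction), and concurrent callers for the same key share a single computation.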

