How to Lock Cache in ASP.NET

What is the best way to lock the cache in ASP.NET?

Here's the basic pattern:

  • Check the cache for the value; return it if it's available
  • If the value is not in the cache, acquire a lock
  • Inside the lock, check the cache again; you might have been blocked while another thread populated it
  • Perform the value lookup and cache it
  • Release the lock

In code, it looks like this:

private static readonly object ThisLock = new object();

public string GetFoo()
{
    // try to pull from cache here

    lock (ThisLock)
    {
        // cache was empty before we got the lock, check again inside the lock

        // cache is still empty, so retrieve the value here

        // store the value in the cache here
    }

    // return the cached value here
}
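A filled-in version of that skeleton might look like the following sketch. The cache key "foo", the ten-minute expiration, and the ExpensiveLookup method are placeholders for your own key, policy, and data access:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class FooProvider
{
    private static readonly object ThisLock = new object();

    public string GetFoo()
    {
        // Try to pull from cache first.
        var value = HttpRuntime.Cache["foo"] as string;
        if (value != null) return value;

        lock (ThisLock)
        {
            // Cache was empty before we got the lock; check again inside
            // the lock, because another thread may have filled it.
            value = HttpRuntime.Cache["foo"] as string;
            if (value != null) return value;

            // Cache is still empty, so retrieve the value.
            value = ExpensiveLookup();

            // Store the value in the cache.
            HttpRuntime.Cache.Insert("foo", value, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        }

        return value;
    }

    private string ExpensiveLookup()
    {
        // Hypothetical stand-in for a database or web-service call.
        return "foo-value";
    }
}
```

Only one thread at a time runs ExpensiveLookup for an empty cache; all later callers return from the first cache check without ever touching the lock.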

Lock item in Cache ASP.NET MVC due to concurrency issues

As you said, you can introduce a lock:

public class CacheUtils : ICacheService
{
    private readonly object _lock = new object();

    public T Get<T>(string cacheKey, int hoursUntilExpire, Func<T> getItemCallback) where T : class
    {
        var item = HttpRuntime.Cache.Get(cacheKey) as T;
        if (item != null) return item;

        lock (_lock)
        {
            // Check again inside the lock; another thread may have
            // populated the cache while we were waiting.
            item = HttpRuntime.Cache.Get(cacheKey) as T;
            if (item != null) return item;

            item = getItemCallback();
            HttpRuntime.Cache.Insert(cacheKey, item, null,
                DateTime.UtcNow.AddHours(hoursUntilExpire), Cache.NoSlidingExpiration);
        }

        return item;
    }

    public void Refresh(string key)
    {
        lock (_lock)
        {
            HttpRuntime.Cache.Remove(key);
        }
    }
}

Note that the callback must run inside the lock, after a second cache check; if you only lock the individual Get and Insert calls, several threads can still execute the expensive callback at the same time.

In what use cases is locking on ASP.NET cache required/desirable

Locking is not about where in the code you use it, but about whether more than one thread can execute that code at the same time. If there is any chance of a race condition, you need to lock, even if no errors show up in development on your specific setup. Multi-threaded behavior depends on so many factors that you cannot predict it.

So if this is a multi-threaded application (a web server), you need to lock.

Reducing the amount of code inside the lock is also good advice: collisions put threads into a waiting state, so you want them to wait as little as possible.

If you have many read accesses and few write accesses, you may want to look into more elaborate mechanisms such as a semaphore, and adopt a multiple-reader/single-writer pattern.

Really, it all depends on what you do in the rest of the application (threads or parallel web requests).

asp.net proper async locking per cache key

If you are going this route then yes, you need a separate lock for each key. You can achieve that, for example, with a ConcurrentDictionary:

static ConcurrentDictionary<string, Lazy<AsyncLock>> _keyLocks = new ConcurrentDictionary<string, Lazy<AsyncLock>>();

public async Task<IActionResult> Index(int articleId)
{
    var key = CacheKeysFor.Article.ById(articleId);
    var cacheEntry = _cache.Get<ArticleModel>(key);

    if (cacheEntry == null)
    {
        var keyLock = _keyLocks.GetOrAdd(key, _ => new Lazy<AsyncLock>(() => new AsyncLock())).Value;
        using (await keyLock.LockAsync())
        {
            // Check the cache again; another request may have
            // filled it while we were waiting for the lock.
            cacheEntry = _cache.Get<ArticleModel>(key);
            if (cacheEntry == null)
            {
                cacheEntry = await SomeDatabaseCall();
                _cache.Set(key, cacheEntry, TimeSpan.FromSeconds(60));
            }
        }
    }

    return View(cacheEntry);
}

Note that key locks will accumulate over time. That is not a big problem unless you have many millions of them. If you do, you can clear the lock collection from time to time. Doing so is not strictly safe and might let multiple threads enter the protected block, but in this concrete case that does not seem to be a problem either, because the locks only exist to avoid an expensive database call.
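If you would rather not take a dependency on an AsyncLock implementation (such as the one in Nito.AsyncEx), the same per-key pattern can be sketched with the built-in SemaphoreSlim. The class and method names below are assumptions for illustration, and a ConcurrentDictionary stands in for the real cache:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class KeyedAsyncCache
{
    // One SemaphoreSlim(1, 1) per cache key acts as an async-compatible lock.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> _locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public static async Task<T> GetOrCreateAsync<T>(
        string key, Func<Task<T>> factory, ConcurrentDictionary<string, T> cache)
        where T : class
    {
        T value;
        if (cache.TryGetValue(key, out value)) return value;

        var gate = _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Re-check inside the lock; another caller may have filled it.
            if (!cache.TryGetValue(key, out value))
            {
                value = await factory();
                cache[key] = value;
            }
        }
        finally
        {
            gate.Release();
        }
        return value;
    }
}
```

Unlike lock, SemaphoreSlim.WaitAsync can be awaited, so no thread is blocked while waiting; the semaphore entries accumulate per key just like the AsyncLock entries above.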

Doing locking in ASP.NET correctly

Unless you're absolutely certain that it's critical to have no redundant queries then I would avoid locking altogether. The ASP.NET cache is inherently thread-safe, so the only drawback to the following code is that you might temporarily see a few redundant queries racing each other when their associated cache entry expires:

public static string DoSearch(string query)
{
    var results = (string)HttpContext.Current.Cache[query];
    if (results == null)
    {
        results = GetResultsFromSlowDb(query);

        HttpContext.Current.Cache.Insert(query, results, null,
            DateTime.Now.AddHours(1), Cache.NoSlidingExpiration);
    }
    return results;
}

If you decide that you really must avoid all redundant queries then you could use a set of more granular locks, one lock per query:

public static string DoSearch(string query)
{
    var results = (string)HttpContext.Current.Cache[query];
    if (results == null)
    {
        object miniLock = _miniLocks.GetOrAdd(query, k => new object());
        lock (miniLock)
        {
            results = (string)HttpContext.Current.Cache[query];
            if (results == null)
            {
                results = GetResultsFromSlowDb(query);

                HttpContext.Current.Cache.Insert(query, results, null,
                    DateTime.Now.AddHours(1), Cache.NoSlidingExpiration);
            }

            // Clean up our lock entry, but only if it's still the one we took.
            object temp;
            if (_miniLocks.TryGetValue(query, out temp) && (temp == miniLock))
                _miniLocks.TryRemove(query, out temp);
        }
    }
    return results;
}

private static readonly ConcurrentDictionary<string, object> _miniLocks =
    new ConcurrentDictionary<string, object>();

How can I lock by cache key?

For non-shared data among pools

When you have many pools (a web garden), each pool has its own static data. I have measured recently that ConcurrentDictionary<TKey, TItem> is the fastest option here, because it is implemented with techniques that avoid taking locks internally, which makes it extremely fast.

So I suggest ConcurrentDictionary<TKey, TItem> for non-shared data among pools.

In this case you must still take care of synchronizing the data itself, to avoid concurrent changes to the same piece of data. There you can use a slim lock (such as ReaderWriterLockSlim) or a plain lock.

Common resources shared among pools

Now, when you have a resource that is shared among pools, you need to use a mutex. For example, if you try to save a file from many threads, or open a file for modification from many threads, you need a mutex to synchronize access to that common resource.

So for common resources you use a mutex.

A mutex can use a key so that locking is based on that key - but you still cannot change the same resource concurrently.

public T GetCache<T>(string key, Func<T> valueFactory...) 
{
    // note here that I use the key as the name of the mutex;
    // also check that the key contains no characters that are
    // invalid in a mutex name
    using (var mut = new Mutex(false, key))
    {
        try
        {
            // Wait until it is safe to enter.
            mut.WaitOne();

            // here you create your cache
        }
        finally
        {
            // Release the Mutex.
            mut.ReleaseMutex();
        }
    }
}

Note that the mutex is created with initiallyOwned: false. Creating it with true and then also calling WaitOne would leave the mutex owned after a single ReleaseMutex, blocking every other caller.

What kind of lock

We have two cases for locking.

  1. One case is when we use resources that are common to all pools and all threads. A common resource can be a file, or the database itself.

In the common-resource case we need to use a mutex.

  2. The second case is when we use variables that are visible only inside a pool - different pools cannot see those resources. For example a static List<>, a static Dictionary, etc. These static variables can only be accessed inside the pool and are not the same across different pools.

In this second case, lock() is the easiest and most common way to go.

Faster than lock

Now, when we have a static dictionary that we keep for long time and make too many reads/writes there, a faster approach to avoid the full program to wait, is the ReaderWriterLockSlim

you can take a full example from here: ReaderWriterLockSlim

Using the ReaderWriterLockSlim, we can avoid the locks when we do not need them - and we do not need to lock the static values when we read - only when we write on them. So I can suggest it for static values that we use them as cache.
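As a minimal sketch of that idea (the class name ReadMostlyCache is an assumption, not a framework type), a dictionary wrapped in a ReaderWriterLockSlim lets any number of readers proceed in parallel while writers get exclusive access:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class ReadMostlyCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _data = new Dictionary<TKey, TValue>();
    private readonly ReaderWriterLockSlim _rw = new ReaderWriterLockSlim();

    public bool TryGet(TKey key, out TValue value)
    {
        _rw.EnterReadLock();          // many readers may hold this at once
        try { return _data.TryGetValue(key, out value); }
        finally { _rw.ExitReadLock(); }
    }

    public void Set(TKey key, TValue value)
    {
        _rw.EnterWriteLock();         // writers get exclusive access
        try { _data[key] = value; }
        finally { _rw.ExitWriteLock(); }
    }
}
```

For read-heavy static caches this avoids serializing every read the way a plain lock() would, at the cost of slightly more expensive lock operations.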

What is a pool in ASP.NET?

Imagine different programs that run isolated from each other but serve the incoming requests from users. Each pool has its own world, and pools do not communicate with each other. Each pool has its own initialization, its own static values, and its own lifetime. To share a resource between pools you need some third party, like a database, a file on disk, or a service.

So if you have many pools (a web garden), you need a mutex to synchronize them around a common resource. To synchronize inside a pool, you use lock.

IIS app pools, worker processes, app domains

Lifetime of ASP.NET Static Variable


