System.Lazy<T> with Different Thread-Safety Mode

My attempt at a version of Darin's updated answer that doesn't have the race condition I pointed out... warning: I'm not completely sure this is now completely free of race conditions.

private static int waiters = 0;
private static volatile Lazy<object> lazy = new Lazy<object>(GetValueFromSomewhere);

public static object Value
{
    get
    {
        Lazy<object> currLazy = lazy;
        if (currLazy.IsValueCreated)
            return currLazy.Value;

        Interlocked.Increment(ref waiters);

        try
        {
            return lazy.Value;

            // just leave "waiters" at whatever it is... no harm in it.
        }
        catch
        {
            if (Interlocked.Decrement(ref waiters) == 0)
                lazy = new Lazy<object>(GetValueFromSomewhere);
            throw;
        }
    }
}
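
To make the retry behaviour concrete, here is a self-contained sketch of the same wrapper in use. The demo class, the attempt counter, and the fail-once factory standing in for GetValueFromSomewhere are all hypothetical; the fields and the Value getter are the ones from the code above.

using System;
using System.Threading;

// Hypothetical demo: GetValueFromSomewhere fails on its first call and succeeds
// afterwards, so the first read of Value throws, the faulted Lazy<object> is
// replaced in the catch block, and the second read runs the factory again.
public static class LazyRetryDemo
{
    private static int attempts = 0;
    private static int waiters = 0;
    private static volatile Lazy<object> lazy = new Lazy<object>(GetValueFromSomewhere);

    private static object GetValueFromSomewhere()
    {
        if (Interlocked.Increment(ref attempts) == 1)
            throw new InvalidOperationException("transient failure");
        return new object();
    }

    public static object Value
    {
        get
        {
            Lazy<object> currLazy = lazy;
            if (currLazy.IsValueCreated)
                return currLazy.Value;

            Interlocked.Increment(ref waiters);
            try
            {
                return lazy.Value;
            }
            catch
            {
                if (Interlocked.Decrement(ref waiters) == 0)
                    lazy = new Lazy<object>(GetValueFromSomewhere);
                throw;
            }
        }
    }

    public static void Main()
    {
        try { _ = Value; }                    // first read: the factory throws, the faulted Lazy is discarded
        catch (InvalidOperationException) { }
        Console.WriteLine(Value != null);     // second read retries the factory and prints True
    }
}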

Update: I thought I found a race condition after posting this. The behavior should actually be acceptable, as long as you're OK with a presumably rare case where some thread throws an exception it observed from a slow Lazy<T> after another thread has already returned from a successful fast Lazy<T> (future requests will all succeed).

  • waiters = 0
  • t1: comes in and runs up to just before the Interlocked.Decrement (waiters = 1)
  • t2: comes in and runs up to just before the Interlocked.Increment (waiters = 1)
  • t1: does its Interlocked.Decrement and prepares to overwrite (waiters = 0)
  • t2: runs up to just before the Interlocked.Decrement (waiters = 1)
  • t1: overwrites lazy with a new one (call it lazy1) (waiters = 1)
  • t3: comes in and blocks on lazy1 (waiters = 2)
  • t2: does its Interlocked.Decrement (waiters = 1)
  • t3: gets and returns the value from lazy1 (waiters is now irrelevant)
  • t2: rethrows its exception

I can't come up with a sequence of events that will cause something worse than "this thread threw an exception after another thread yielded a successful result".

Update2: declared lazy as volatile to ensure that the guarded overwrite is seen by all readers immediately. Some people (myself included) see volatile and immediately think "well, that's probably being used incorrectly", and they're usually right. Here's why I used it here: in the sequence of events from the example above, t3 could still read the old lazy instead of lazy1 if it was positioned just before the read of lazy.Value the moment that t1 modified lazy to contain lazy1. volatile protects against that so that the next attempt can start immediately.

I've also reminded myself why I had this thing in the back of my head saying "low-lock concurrent programming is hard, just use a C# lock statement!!!" the entire time I was writing the original answer.

Update3: just changed some text in Update2 pointing out the actual circumstance that makes volatile necessary -- the Interlocked operations used here are apparently implemented full-fence on the important CPU architectures of today and not half-fence as I had originally just sort-of assumed, so volatile protects a much narrower section than I had originally thought.

Lazy<T> with LazyThreadSafetyMode.None can throw NullReferenceException

The reason this sometimes fails is because LazyThreadSafetyMode.None doesn't give any guarantees about correctness when accessed over multiple threads. The documentation for LazyThreadSafetyMode.None states:

The Lazy<T> instance is not thread safe; if the instance is accessed from multiple threads, its behavior is undefined. Use this mode only when high performance is crucial and the Lazy<T> instance is guaranteed never to be initialized from more than one thread.

You incorrectly assumed that since your delegate always returned the same value, no thread-safety was needed, but this isn't the case.

You should initialize the Lazy<T> with either LazyThreadSafetyMode.PublicationOnly or LazyThreadSafetyMode.ExecutionAndPublication.
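
For example, a minimal sketch of both options (MyExpensiveObject and CreateExpensiveObject are illustrative placeholders, not names from the question):

// ExecutionAndPublication is also what the Lazy<T>(Func<T>) constructor uses by
// default: the factory runs at most once, even under concurrent first access.
private static readonly Lazy<MyExpensiveObject> _safeLazy =
    new Lazy<MyExpensiveObject>(CreateExpensiveObject,
                                LazyThreadSafetyMode.ExecutionAndPublication);

// PublicationOnly lets racing threads run the factory concurrently, but only one
// result is ever published; it also never caches exceptions.
private static readonly Lazy<MyExpensiveObject> _publicationOnlyLazy =
    new Lazy<MyExpensiveObject>(CreateExpensiveObject,
                                LazyThreadSafetyMode.PublicationOnly);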

How can Lazy<T> provide thread-safe lazy loading when its .Value property does not lock?

Reads and writes of a variable of a reference type are atomic, so it's not possible for such a read to ever return a value that was never written to it, even with no locking. The value being read there is only ever assigned once, when the Lazy<T> generates its value, so either it is null and the getter moves on to the more complex logic, or it isn't and we already have a value to return. If it does move on, it uses locking internally to ensure multiple threads aren't attempting to create the value at the same time.
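
A simplified illustration of that fast-path/slow-path structure (this is only a sketch of the idea, not the real Lazy<T> implementation, which also handles value types, exception caching, and the different thread-safety modes):

public sealed class SketchLazy<T> where T : class
{
    private readonly Func<T> factory;
    private readonly object gate = new object();
    private volatile T instance;              // reference reads/writes are atomic

    public SketchLazy(Func<T> factory) => this.factory = factory;

    public T Value
    {
        get
        {
            T local = instance;               // lock-free read of the reference
            if (local != null)
                return local;                 // fast path: value already published

            lock (gate)                       // slow path: only one thread creates it
            {
                if (instance == null)
                    instance = factory();
                return instance;
            }
        }
    }
}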

Lazy<T> without exception caching

It's hard to build that on top of the built-in Lazy<T>: you'd have to wrap your LazyWithoutExceptionCaching.Value getter in a lock, which makes the built-in Lazy<T> redundant, because you'd also be paying for the now-unnecessary locking inside the Lazy<T>.Value getter itself.

It's better to write your own Lazy implementation. If you only intend to instantiate reference types, it turns out to be rather simple:

public class SimpleLazy<T> where T : class
{
    private readonly Func<T> valueFactory;
    private T instance;
    private readonly object locker = new object();

    public SimpleLazy(Func<T> valueFactory)
    {
        this.valueFactory = valueFactory;
        this.instance = null;
    }

    public T Value
    {
        get
        {
            lock (locker)
                return instance ?? (instance = valueFactory());
        }
    }
}
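
For example (the fail-once factory below is purely illustrative), an exception thrown by the factory is not cached, so a later read of Value simply retries:

// Illustrative usage: a factory that fails on its first call, then succeeds.
int attempts = 0;
var lazy = new SimpleLazy<string>(() =>
{
    attempts++;
    if (attempts == 1)
        throw new InvalidOperationException("transient failure");
    return "created on attempt " + attempts;
});

try { _ = lazy.Value; }               // first read: the factory throws...
catch (InvalidOperationException) { } // ...and nothing is cached

Console.WriteLine(lazy.Value);        // second read retries: "created on attempt 2"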

P.S. Maybe we'll have this functionality built-in when this issue gets closed.

Can a singleton be initialized thread-safely using just a public static field, rather than a Lazy<T> or another singleton solution?

Fundamentally, you are seeing behaviour that isn't guaranteed, and because it isn't guaranteed, you can't rely on it. Take the code below as an example. What output will it generate? The answer is: it depends.

17.4.5.1: "If a static constructor (§17.11) exists in the class, execution of the static field initializers occurs immediately prior to executing that static constructor. Otherwise, the static field initializers are executed at an implementation-dependent time prior to the first use of a static field of that class."

"Implementation-dependent" means that more than one ordering is valid, as long as those rules are respected.

It might generate:

The Dictionary has been initialized now
Start Running...
Start Running...

Or it might generate:

Start Running...
Start Running...
The Dictionary has been initialized now

You can validate it yourself at https://dotnetfiddle.net/d5Tuev (switch runtimes on the left side).

Now, in your testing you have experienced one of those behaviours. But that behaviour isn't guaranteed, and thus you shouldn't rely on it.

using System.Collections.Concurrent;
using System.Threading;

namespace Samples.Console
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Thread thread1 = new Thread(() => StaticField.StaticFieldTest());
            Thread thread2 = new Thread(() => StaticField.StaticFieldTest());

            thread1.Start();
            thread2.Start();

            Thread.Sleep(1000);
        }

        public class StaticField
        {
            public static readonly ConcurrentDictionary<string, string> _dic = InitDictionary();

            public static void StaticFieldTest()
            {
                System.Console.WriteLine("Start Running...");

                _dic.TryAdd(string.Empty, string.Empty);
                _dic.TryAdd(string.Empty, string.Empty);
                _dic.TryAdd(string.Empty, string.Empty);
            }

            public static ConcurrentDictionary<string, string> InitDictionary()
            {
                System.Console.WriteLine("The Dictionary has been initialized now");
                Thread.Sleep(500);
                return new ConcurrentDictionary<string, string>();
            }
        }
    }
}

In addition, Lazy<T> (or a LazyWithNoExceptionCaching variant) is quite useful in some contexts, e.g. if you have an object that takes 20 seconds to create but you want to ensure only one of them is ever created.
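
For example, a small sketch of that scenario (the expensive construction is simulated with Thread.Sleep and the class name is illustrative); concurrent readers all block on the same single initialization:

using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch: Lazy<T> guarantees the factory runs exactly once even though many
// threads request the value concurrently.
public static class ExpensiveSingletonDemo
{
    private static int creations = 0;

    private static readonly Lazy<object> Expensive = new Lazy<object>(() =>
    {
        Interlocked.Increment(ref creations);
        Thread.Sleep(2000);          // stands in for the slow construction
        return new object();
    });

    public static void Main()
    {
        Task[] readers = new Task[10];
        for (int i = 0; i < readers.Length; i++)
            readers[i] = Task.Run(() => { var _ = Expensive.Value; });
        Task.WaitAll(readers);

        Console.WriteLine(creations); // prints 1
    }
}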

Is Lazy<T> a good solution for a thread-safe lazy-loaded singleton?

I suggest you read the articles referenced in the comments:

  • Lazy Class
  • Implementing the Singleton Pattern in C#

In all cases the Lazy<T> class itself is thread-safe, but you need to remember that the Value it returns may not be thread-safe and can be corrupted in a multithreaded environment:

private static Lazy<MyClass> _instance = new Lazy<MyClass>(() => new MyClass());

public static MyClass Instance
{
    get
    {
        return _instance.Value;
    }
}

public void MyConsumerMethod()
{
    lock (Instance)
    {
        // this is safe usage
        Instance.SomeMethod();
    }

    // this can be an unsafe operation
    Instance.SomeMethod();
}

You can also use whichever Lazy<T> constructor suits the needs of your application.
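
For example (CreateMyClass is a placeholder factory method for the MyClass type above), a few of the available Lazy<T> constructors:

// Default: thread-safe, equivalent to LazyThreadSafetyMode.ExecutionAndPublication.
var a = new Lazy<MyClass>(CreateMyClass);

// Explicit thread-safety mode.
var b = new Lazy<MyClass>(CreateMyClass, LazyThreadSafetyMode.PublicationOnly);

// Boolean shorthand: true = ExecutionAndPublication, false = None.
var c = new Lazy<MyClass>(CreateMyClass, isThreadSafe: false);

// No factory: uses the public parameterless constructor of MyClass.
var d = new Lazy<MyClass>(LazyThreadSafetyMode.ExecutionAndPublication);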

What are the dangers of a non-thread-safe singleton?

Since the singleton pattern restricts the class to only one instance, your approach violates that pattern.

If for any reason this is OK for your implementation, and we follow your stated example, there is a very high chance that each of the two concurrent accesses will get a different instance of ThreadNotSafeSingleton. This may happen due to optimization, if the compiler decides there is no need to read back the _instance variable it just wrote before returning. This optimization behaviour is defined by the memory model of your C# implementation.

The volatile keyword is often cited as a possible solution, but it will not solve the synchronization issue (as pointed out by BionicCode): when thread 1 passes the if (_instance == null) line and gets put to sleep, thread 2 can also evaluate the same if and go on to instantiate the singleton. When thread 1 wakes up later, it will then instantiate another singleton.
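
A common way to close that check-then-create window is to take a lock around it, or to hand the problem to Lazy<T>. Both versions below are sketches; the class names are illustrative, not from the question:

// Sketch: lock-based lazy singleton; only one thread at a time runs the check.
public sealed class ThreadSafeSingleton
{
    private static readonly object _gate = new object();
    private static ThreadSafeSingleton _instance;

    private ThreadSafeSingleton() { }

    public static ThreadSafeSingleton Instance
    {
        get
        {
            lock (_gate)
            {
                if (_instance == null)
                    _instance = new ThreadSafeSingleton();
                return _instance;
            }
        }
    }
}

// Equivalent Lazy<T>-based version: the race is handled by the default
// ExecutionAndPublication mode, so the constructor runs exactly once.
public sealed class LazySingleton
{
    private static readonly Lazy<LazySingleton> _lazy =
        new Lazy<LazySingleton>(() => new LazySingleton());

    private LazySingleton() { }

    public static LazySingleton Instance => _lazy.Value;
}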


