C# Thread Lock Mechanisms

.Net provides a lot of different thread locking mechanisms. Some are better than others; some you really shouldn’t use at all. This post will walk you through some of the most common scenarios and suggest mechanisms for dealing with them.

The ‘volatile’ Keyword

The first thing you should know about volatile is that you shouldn’t use it. No, really. It’s a keyword that acts in a particular manner that is often misunderstood. It really only works under very specific circumstances:

  • You want all threads to have access to the most up-to-date data all the time;
  • AND the data writers you’re using only ever write (they do not read);
  • AND the data readers you’re using only ever read (they do not write);
  • AND the data you’re acting upon is an atomic value (something that doesn’t take the CPU more than a single instruction to act upon).

If any of these conditions do not hold true, do not use volatile!

There are also performance implications to using volatile. The compiler and JIT are prevented from caching a volatile field in a register or reordering accesses to it, so every read and write goes out to memory. If you’re making very frequent accesses, this will be noticeable!
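
For completeness, here is a minimal sketch of the one shape of problem where volatile does fit: a simple stop flag with a single writer and readers that only ever read (DoWork() is a hypothetical stand-in for real work):

private volatile bool _stopRequested; // atomic value: one writer, read-only consumers

// Writer thread: only ever writes the flag.
public void RequestStop() => _stopRequested = true;

// Reader thread: only ever reads the flag.
public void WorkLoop()
{
    while (!_stopRequested) // always observes the most up-to-date value
    {
        DoWork();
    }
}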

The ‘lock’ Keyword

The most common locking mechanism, lock is C#’s built-in syntax for critical sections. This will usually be your go-to mechanism. You want to use this whenever there’s a piece of code that you only ever want one thread at a time to execute.

However, there are a few caveats to be aware of when using lock. If you find yourself in trouble with any of these, you should examine using a different locking mechanism:

  • Locks are held indefinitely. If your thread tries to enter a lock that is never released, it will be suspended forever. This is the famous deadlock. Once it occurs, you cannot break it; your only recourse is to programmatically kill the thread!
  • The reference being locked on must be shared across threads. If you create a new non-static object inside the method and then lock on it, you’re not actually locking anything, as every thread will be ‘locked’ on its own instance of the object (see the sketch after this list).
  • Locks are fast for uncontended usage, but quickly drop off in performance once they come under heavy use.
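
To make the second caveat concrete, here is a minimal sketch of correct (and broken) usage; the field names are illustrative:

private readonly object _sync = new object(); // shared by every thread using this instance
private int _counter;

public void Increment()
{
    lock (_sync) // only one thread at a time may enter this block
    {
        _counter++;
    }
}

// Broken: each call creates its own locker, so nothing is actually synchronized.
public void BrokenIncrement()
{
    var locker = new object();
    lock (locker)
    {
        _counter++;
    }
}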

Exponential Backoff

If you’re running into lock contention, the best ‘easy’ solution is an exponential backoff algorithm. A simple, fixed Thread.Sleep(...) between retries leaves you with the same contention, just delayed: all of the threads sleep for the same amount of time, then retry at once, producing spikes in usage. Applying random sleep times mitigates this, but the standard solution, borrowed from network engineering, is to have each thread follow an exponentially increasing sleep time, which gives better total throughput.

This algorithm can also be helpful with non-locking problems, such as file I/O, DB, and network requests.

Note: This algorithm only works when your lock or ‘wait’ mechanisms are not indefinite.
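
A minimal sketch of the idea, using Monitor.TryEnter (a non-indefinite way to attempt the same lock the lock keyword uses under the hood) so the note above holds; the delay values and the _sync field are illustrative:

var random = new Random();
int delayMs = 10;            // starting delay; grows exponentially on each failure
const int maxDelayMs = 5000; // cap so the wait never grows unbounded

while (!Monitor.TryEnter(_sync, 0)) // attempt the lock without blocking
{
    Thread.Sleep(delayMs + random.Next(delayMs)); // random jitter breaks retry lockstep
    delayMs = Math.Min(delayMs * 2, maxDelayMs);  // exponential growth, capped
}
try
{
    // ... critical section ...
}
finally
{
    Monitor.Exit(_sync);
}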

The ReaderWriterLockSlim Class

.Net provides a class called ReaderWriterLockSlim that is useful for conditions where you want to allow multiple concurrent read access, but want to block all threads when a write operation occurs. This is typically used when:

  • You want to control access to a data set that is not updated frequently, but you want to ensure that all readers have the latest information;
  • AND you want to lock all readers out of accessing the data while an update is occurring (this is something that volatile does not provide!).

ReaderWriterLockSlim is your best bet for resolving these needs. .Net provides other similar locking mechanisms (the older ReaderWriterLock, for example), but ReaderWriterLockSlim is usually more performant and reliable than the alternatives.
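
A minimal sketch of the read/write pattern (the cache field is illustrative):

private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();

public string Read(string key)
{
    _rwLock.EnterReadLock(); // any number of readers may hold this concurrently
    try
    {
        return _cache.TryGetValue(key, out var value) ? value : null;
    }
    finally
    {
        _rwLock.ExitReadLock();
    }
}

public void Write(string key, string value)
{
    _rwLock.EnterWriteLock(); // exclusive: waits for readers to drain, blocks new ones
    try
    {
        _cache[key] = value;
    }
    finally
    {
        _rwLock.ExitWriteLock();
    }
}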

As a bonus, you can also use this mechanism to gate access to an API, not just a data set. For example, you might allow concurrent DB access during normal run time, but block any further DB access once the app enters static de-initialization at process shutdown. The read lock allows fast, concurrent access, while taking the write lock acts as a shutdown signal that prevents further DB access.
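
A rough sketch of that shutdown-gate idea (TryQueryDb, QueryResult, and RunQuery are hypothetical stand-ins; _shutdownGate is a shared ReaderWriterLockSlim):

public bool TryQueryDb(out QueryResult result)
{
    // Zero timeout: if shutdown has already taken the write lock, fail fast.
    if (!_shutdownGate.TryEnterReadLock(0))
    {
        result = null;
        return false;
    }
    try
    {
        result = RunQuery();
        return true;
    }
    finally
    {
        _shutdownGate.ExitReadLock();
    }
}

// During static de-initialization: take the write lock and never release it,
// so no further DB access can begin.
public void SignalShutdown() => _shutdownGate.EnterWriteLock();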

The Interlocked API

.Net provides an API called Interlocked that is ideal for fine-grained locking needs. It performs fast, atomic operations on data that would not normally be atomic, without taking an explicit lock.

For example, you could lock on an object in order to increment a counter:

lock (this.locker) this.counter++;

This works, but it’s slow. Interlocked.Increment(ref this.counter); does the same thing atomically, without taking a lock, and would be a better option.

Explore the API to see its further capabilities; a few are sketched below. This should be your go-to solution if the above recommendations do not meet your needs.
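
A few of its other members, in sketch form (the fields are illustrative):

Interlocked.Add(ref this.total, 42);                    // atomic addition
int previous = Interlocked.Exchange(ref this.state, 1); // atomic swap; returns the old value
Interlocked.CompareExchange(ref this.flag, 1, 0);       // set flag to 1 only if it is currently 0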

Events

.Net provides a few classes related to event signaling. The two most interesting are ManualResetEvent and AutoResetEvent. These are typically used to signal when access to a resource is ready for consumption. The general pattern is:

  • One thread will call myEvent.WaitOne() to block until the event is signaled.
  • When another thread wants to signal that the resource is ready, it will call myEvent.Set() and the waiting thread will be released.
  • If myEvent.Set() had already been called before another thread called myEvent.WaitOne(), the thread will not be blocked (the signal will be consumed as “ready”).
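
A minimal sketch of that pattern (PrepareSharedResource() is a hypothetical stand-in for whatever makes the resource ready):

var ready = new ManualResetEvent(false); // starts unsignaled

var worker = new Thread(() =>
{
    ready.WaitOne(); // blocks here until another thread calls Set()
    Console.WriteLine("Resource is ready; consuming it.");
});
worker.Start();

PrepareSharedResource(); // prepare the shared resource
ready.Set();             // releases the waiting thread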

Very Important!

ManualResetEvent and AutoResetEvent have different responses to releasing waiting threads:

  • ManualResetEvent stays signaled until you manually call myEvent.Reset(). While it is signaled, no calls to WaitOne() will block.
  • AutoResetEvent automatically resets itself after releasing a single blocked thread. That means that, if you have multiple threads waiting, you will need to call Set() multiple times to release them all (see the sketch below).
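
In sketch form, assuming three threads are already blocked in WaitOne():

var auto = new AutoResetEvent(false);
// ... three threads blocked in auto.WaitOne() ...
auto.Set(); // releases exactly one waiter, then automatically resets
auto.Set(); // releases a second; a third Set() is needed for the last thread

var manual = new ManualResetEvent(false);
// ... three threads blocked in manual.WaitOne() ...
manual.Set(); // releases all three and stays signaled until Reset() is called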

You should really know what you’re doing if you’re using the event classes, and even then, in practice you probably want ManualResetEvent.

Software Transactional Memory

Also referred to as “reference swapping”, STM provides the capability to do lock-free (or implicitly lock-free) data manipulation from multiple threads. In C#, this leverages Interlocked, but in an interesting way. Readers of data will access the raw object, but writers will:

  • Copy the original reference;
  • Make changes to the data;
  • Use Interlocked.CompareExchange() to swap the original reference with the updated reference.

If another thread swaps in a new reference while the writer is working on its copy, the CompareExchange will fail, and the writer simply retries as many times as necessary to complete the operation. The same style of operation can be performed in a write-centric manner: allow writers raw access to the data, but require readers to make a copy.
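
A minimal sketch of the read-centric version (the list field is illustrative; readers just use _data directly):

private List<int> _data = new List<int>();

public void Add(int item)
{
    while (true)
    {
        var original = _data;                  // copy the original reference
        var updated = new List<int>(original); // make changes on a private copy
        updated.Add(item);

        // Swap in the copy only if no other writer got there first.
        if (Interlocked.CompareExchange(ref _data, updated, original) == original)
            return;

        // Another thread swapped first; loop and retry against the new data.
    }
}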

This mechanism works well when your algorithm is unbalanced heavily either towards reading or writing, but struggles when both operations are more balanced, due to retry contention. In practice, however, most data is either primarily read-centric or primarily write-centric, so this won’t usually be a problem. Just make sure whichever operation is more common is the one that is allowed access to the raw data, rather than being forced to make the copy.

You can squeeze even more performance out of this by replacing Interlocked with volatile, when you know that the data being modified at any given time is separate from the data being read, but:

  • You had better know exactly what you’re doing, because this violates the usual rule about volatile;
  • AND you can absolutely ensure that no conflict will occur by doing so;
  • AND you understand that the problem you’re working on can probably be resolved in a better way without getting yourself into this kind of danger;
  • AND if you’re attempting to do such an operation, you probably are enough of an expert that you aren’t reading this post.

These cautions aside, you can do some interesting stuff this way: Things like reading from one part of a List, while updating another part of the List from a different thread, making the memory operation truly lock-free. But you will probably never need to use such a thing, as your job is probably not to manipulate the low-level compiled statements in a way to get raw, bleeding-edge speed. Leave that to the compiler developers.

Further Reading

There are lots of different techniques to handle multithreaded memory models. If you’re interested in researching further nuances, this article is a great place to start (be prepared to learn about different CPU architecture memory models and compiler optimizations!).

Update: I wrote a follow-up post on this topic, to reflect on discussion from an engaging Facebook conversation around this post.
