Tag Archive: multithreading

Threads, locks, deadlocks, race conditions, and thread safety in multi-threaded code

Microsoft .NET

Before proceeding, I will say that multi-threading with a user interface warrants its own post, so I will cover it in a different blog post; here we will focus on pure threading concepts.

At some point in your career you will need to write a multi-threaded application, but you probably won’t do it right. Even worse, you won’t know it, and arguably you can get away with it most of the time. Some developers go their entire careers without touching multi-threading, which I have found to be more common in web development than client applications, services, or systems development. I believe it is something every programmer should understand, just like I believe every programmer should understand pointers.

My definition of a thread is a unit of work that can be scheduled for execution. A process in Windows contains one or more threads whose work is scheduled onto the CPU. On a processor with only one core, true concurrency, where two units of work execute at the same instant, does not exist. It appears to exist, otherwise your system would feel unusable, because the CPU switches between threads fast enough to give each one a fair slice of CPU time, resulting in a responsive system. Faster clock speeds simply mean more work, and more context switches, can be processed in the same amount of time. True concurrency exists on processors with more than one core, where each core can execute a thread independently of the others. Of course, all of this is still managed by the operating system's scheduler, which decides which threads get CPU time, when they get it, and at what priority.
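As a quick, minimal sketch (the printed values will vary per machine), you can ask the runtime how many logical processors it sees and which thread your code is currently running on:

using System;
using System.Threading;

class CoreInfo
{
    static void Main()
    {
        // Number of logical processors available for true concurrency.
        Console.WriteLine("Logical processors: " + Environment.ProcessorCount);

        // Even before you create any threads, your code is already running on one.
        Console.WriteLine("Current thread id: " + Thread.CurrentThread.ManagedThreadId);
    }
}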

Now that you understand a bit about what a thread is, why use them? You're using them whether you know it or not; every application has threads before you write a single line of code in your IDE, such as the thread behind a console window. Usually threads are used for long-running tasks so as not to block some other operation. This is very common in a UI: in Internet Explorer, for example, a file download runs on a separate thread so you can keep browsing the web. If the download ran on the UI thread, you would be blocked from doing anything until it completed, and you couldn't even cancel it, because the UI would be blocked as well. Other reasons include performance, splitting large amounts of work up when processing power is available, or utilizing more than one core (parallel programming).

Let's take a look at how you would create a thread in C#.

static void Main() 
{
    Thread thread = new Thread(DoWork);
    thread.Start();
}

static void DoWork() 
{
}

This is simple, requiring us to only create the thread and invoke Start() to begin the unit of work that will be performed by DoWork(). You can observe the number of threads in an application via Windows Task Manager. You will need to enable the threads column though; it isn’t enabled by default.

(Screenshot: Windows Task Manager with the Threads column enabled.)

You would need a simple application that allows you to observe this. Sometimes the thread count will be adjusted by the operating system; remember that threads already exist for the bare-bones things the operating system needs to make your program work, such as the console window. At the end of the day, though, you should be able to observe the thread count increase using the sample below. Another note: even though we are kicking off a separate thread, the console application doesn't exit, and we will talk about that later in the article.

static void Main()
{
    Console.ReadKey();   // note the baseline thread count in Task Manager before starting

    Thread thread = new Thread(DoWork);
    thread.Start();

    Console.ReadKey();   // the thread count should now be one higher while DoWork spins
}

static void DoWork()
{
    while (true) ;       // spin forever so the thread stays visible in Task Manager
}

I'll give one more example, but I think by this point the concepts should resonate. Let's look at a simple program that contains two threads for processing work.

using System.Diagnostics;
using System.Threading;

class Program
{
    static int value;

    static void Main(string[] args)
    {
        Thread thread1 = new Thread(DoWork1),
               thread2 = new Thread(DoWork2);

        thread1.Start();
        thread2.Start();
    }

    static void DoWork1()
    {
        // Debug.Print writes to the debug output window, so run this under the debugger.
        while (true) Debug.Print((value++).ToString());
    }

    static void DoWork2()
    {
        while (true) Debug.Print((value -= 2).ToString());
    }
}

When you run this multiple times and compare the output of each, you should immediately notice that the output varies. Here are two runs that I performed.

run one: -1, 0, -3, -2, -3, -2, -4, -1, -6, -5, -4, -3
run two: -1, 0, -3, -3, -2, -3, -3, -4, -3, -4, -3, -5

Once you understand how multi-threading works, this is expected. Each thread gets a slice of CPU time to execute, but there is no guarantee exactly how large that slice will be or when it will arrive; all you are guaranteed is that you will get one. Your threads are also competing with every other thread in the operating system, and by this point you should be connecting the dots: when a user complains that your software is slow and you find out they are running another program consuming 90% of the available CPU, you know your threads are not getting the time slices they need, or not when you need them.

Background and Foreground Threads

Look closely at the example program. You will notice that there is no blocking call after invoking Thread.Start. If you caught on, you are wondering why the console application does not exit immediately, since the thread runs asynchronously and the Main method should return and end execution. The Thread class has a property called IsBackground, and it is important for you to understand it. Let's look at the MSDN documentation.

A thread is either a background thread or a foreground thread. Background threads are identical to foreground threads, except that background threads do not prevent a process from terminating. Once all foreground threads belonging to a process have terminated, the common language runtime ends the process. Any remaining background threads are stopped and do not complete.

A thread is a foreground thread by default. Even though Main has completed its work, the runtime prevents the process from exiting until the foreground threads have completed theirs; in this case that never happens, because they are stuck in an infinite while loop, so the process runs until you terminate it. You can change this behavior by setting IsBackground to true, which allows the program to exit without waiting for the threads to complete. If you do that, the console application exits immediately, and you would need to add a blocking call such as Console.ReadKey to give the threads a chance to run.
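Here is a minimal sketch of that difference, reusing the DoWork method from above with only IsBackground changed. Because the worker is now a background thread, the process would exit immediately without the blocking Console.ReadKey call:

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread thread = new Thread(DoWork);
        thread.IsBackground = true;   // background threads no longer keep the process alive
        thread.Start();

        // Without this blocking call the process exits right away,
        // and the background thread is stopped before it does anything observable.
        Console.ReadKey();
    }

    static void DoWork()
    {
        while (true) ;
    }
}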

Lock

By now you understand how to create threads, work with them, and how they behave, but what about a lock? You have probably read about locks and even seen a code sample, but you likely had the same questions I did when I first read about them:

  • What is a lock?
  • When is a lock appropriate to use?
  • How do I use a lock correctly?

Let's look at how the lock keyword is defined by the MSDN documentation first.

The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock.

Now let's turn that into something more understandable.

(Diagram: two people sharing a single key to one car.)

The image above is simple. There are two people but only one key to their only car. If person one takes the key to go run an errand, person two has to wait until the key comes back before they can drive the car for their own errand. This has both an advantage and a disadvantage. The advantage is that having only one key ensures they never try to use the car at the same time. That seems like a silly scenario, but imagine both people running to the car and fighting for hours over who gets to use it: hours lost, nothing accomplished, and no errand run. This is the advantage of a lock; it prevents that sort of contention by acting as a mediator and forcing each thread to wait its turn to use the resource.

The disadvantage is that the thread holding the mutually-exclusive lock may never finish, so the lock may never be released. This is a deadlock, and it is the worst sort, because the other thread will wait until the lock is released, which might be never. These are the kinds of situations that hang an application and leave you scratching your head.

Race Condition

Before I can show you how to use a lock or demonstrate a deadlock, we need to demonstrate a race condition, which is the root problem you will solve with a lock in the first place.

class Program
{
    private static List<Guid> guids = new List<Guid>();

    static void Main(string[] args)
    {
        Thread thread1 = new Thread(DoWork1),
               thread2 = new Thread(DoWork2);
            
        thread1.Start();
        thread2.Start();

        while (true)
        {
            Console.Clear();
            Console.Write(guids.Count);
        }
    }

    static void DoWork1()
    {
        while (true) guids.Add(Guid.NewGuid());
    }

    static void DoWork2()
    {
        while (true) guids.Add(Guid.NewGuid());
    }
}

This code will most likely result in an ArgumentOutOfRangeException. I say most likely because, strictly speaking, there is a tiny chance the threads interleave in a way that never trips the race, but in practice you will see the exception almost every time; I only mention the possibility to be truthful from an educational standpoint.

Moving on, this results in an exception because List<T> is not thread-safe. Thread safety describes a mechanism, component, or piece of code that was designed with multi-threading in mind and is guaranteed to behave correctly when accessed from multiple threads concurrently. Authors of thread-safe code are responsible for ensuring that safety, and exactly what is guaranteed varies, which is where you have to rely on documentation, if there is any. Microsoft is great about documenting thread safety throughout .NET, but that may not be the case for third-party code.

The threads in the sample above are adding to the list as fast as they possibly can. When you add an item to a List<T>, it grows its backing array automatically when the capacity is exhausted, and that grow-then-write sequence is not atomic. When the two threads interleave in the middle of it, the list's internal count and backing array get out of sync, and the next add falls outside the valid range. The exception message in this case is quite articulate about the problem.
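To make the race concrete, here is a simplified, hypothetical sketch of the check-then-grow-then-write sequence a growable list performs on Add. This is not the real List<T> source, just the shape of the problem:

using System;

// Illustration only; the real List<T> implementation is more involved.
class NaiveList<T>
{
    private T[] items = new T[4];
    private int size;

    public void Add(T item)
    {
        if (size == items.Length)                       // step 1: do we need to grow?
        {
            Array.Resize(ref items, items.Length * 2);  // step 2: grow the backing array
        }
        items[size] = item;                             // step 3: write the new item
        size++;                                         // step 4: bump the count
    }
}

If two threads interleave between those steps, size can end up pointing past the end of items, which is exactly the kind of corruption that surfaces as the exception above.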

We’re going to solve this problem with a lock.

class Program
{
    private static List<Guid> guids = new List<Guid>();
    private static object key = new object();

    static void Main(string[] args)
    {
        Thread thread1 = new Thread(DoWork1),
               thread2 = new Thread(DoWork2);
            
        thread1.Start();
        thread2.Start();

        while (true)
        {
            Console.Clear();
            Console.Write(guids.Count);
        }
    }

    static void DoWork1()
    {
        while (true) 
            lock (key)
                guids.Add(Guid.NewGuid());
    }

    static void DoWork2()
    {
        while (true)
            lock (key) 
                guids.Add(Guid.NewGuid());
    }
}

Make sure you read the MSDN documentation on lock; I'm not going to repeat it here, but it is very important that you lock on an object that is private and not accessible to anything beyond your control. The lock code itself is simple, and you should be able to correlate it with the picture of the people, the key, and the car. What you need to take from this is that key is just an object in memory acting as the mediator that forces the threads to wait their turn before running their critical section of code. Re-run the sample and you can verify that the race condition is resolved.
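As a quick illustration of that guidance, here is a small sketch (not part of the sample above) showing a private lock object and, in the comments, the patterns to avoid:

class Account
{
    // Prefer a private, readonly object that only this class can lock on.
    private readonly object key = new object();
    private decimal balance;

    public void Deposit(decimal amount)
    {
        lock (key)   // outside code cannot reach 'key', so it cannot take this lock from us
        {
            balance += amount;
        }
    }

    // Avoid lock (this), lock (typeof(Account)), or lock ("some string"):
    // all of those objects are reachable by other code, which could acquire
    // the same lock and block or deadlock you from the outside.
}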

Deadlocks

Locks are simple, and as promised we can now demonstrate a deadlock.

class Program
{
    private static List<Guid> guids = new List<Guid>();
    private static object key = new object();

    static void Main(string[] args)
    {
        Thread thread1 = new Thread(DoWork1),
               thread2 = new Thread(DoWork2);
            
        thread1.Start();
        thread2.Start();

        while (true)
        {
            Console.Clear();
            Console.Write(guids.Count);
        }
    }

    static void DoWork1()
    {
        while (true) 
            lock (key)
                while (true)
                    guids.Add(Guid.NewGuid());
    }

    static void DoWork2()
    {
        while (true)
            lock (key) 
                guids.Add(Guid.NewGuid());
    }
}

thread2 might get a chance to run, but once thread1 acquires the lock it will deadlock thread2, causing it to wait forever, because the infinite loop we added inside thread1's lock means the lock is never released. This is a very simple demonstration, but the important thing to note is that a deadlock can involve a single stuck thread like this, or many threads waiting on one another, as shown in the sketch below; it just depends on your code.
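To illustrate the multi-thread case, here is a minimal sketch of the classic two-lock deadlock. Each thread acquires one lock and then waits forever for the lock the other thread is holding; the Thread.Sleep calls just make the fatal interleaving all but certain:

using System;
using System.Threading;

class Program
{
    private static object keyA = new object();
    private static object keyB = new object();

    static void Main()
    {
        new Thread(DoWork1).Start();
        new Thread(DoWork2).Start();
        Console.ReadKey();
    }

    static void DoWork1()
    {
        lock (keyA)
        {
            Thread.Sleep(100);   // give the other thread time to take keyB
            lock (keyB) { }      // waits forever: thread 2 holds keyB and wants keyA
        }
    }

    static void DoWork2()
    {
        lock (keyB)
        {
            Thread.Sleep(100);   // give the other thread time to take keyA
            lock (keyA) { }      // waits forever: thread 1 holds keyA and wants keyB
        }
    }
}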

Worthy mention: Task Parallel Library

I have written other articles on the TPL, but what you need to know is that it is the modern way to start tasks that run on the managed thread pool (which warrants its own article). Everything I have shown you uses System.Threading.Thread, which has been around forever and a day. There is nothing wrong with using Thread over Task, but the API surface the TPL provides makes writing multi-threaded code much easier.
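For comparison, here is roughly what the very first Thread example looks like with the TPL; a minimal sketch only (Task.Run requires .NET 4.5, on .NET 4.0 you would use Task.Factory.StartNew instead):

using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Task task = Task.Run(() => DoWork());   // queue DoWork onto the managed thread pool
        task.Wait();                            // block until the task completes
    }

    static void DoWork()
    {
        Console.WriteLine("Working on a thread pool thread.");
    }
}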

I'm not going to write much more about the TPL in this article, but I want to end by leaving you with an understanding of how multi-threading works, because as you begin to use the TPL, all of these concepts still apply; it's just another layer of abstraction.

Conclusion

There is still a lot I did not cover about multi-threading. How do I know when a thread completes? How do I share thread state? What is a thread pool? What is thread synchronization? I like to say to colleagues that everything in software development is its own college degree. You can spend years learning about multi-threading, all the way down to the operating system's inner workings, and I recommend you do. I hope this article at least covered the areas of multi-threading that I always see repeated questions about, and that in my experience many developers know nothing about, struggle with at first, or never fully understand before moving on.

Happy coding!

Avoiding a deadlock when creating an STA thread and using Dispatcher.

I have always been pretty careful and deliberate when writing multi-threaded code, but today I wrote my first deadlock that had me baffled for a few minutes. I'm working on a licensing DLL that handles a lot of processing, and it also handles some user interface display using WPF windows. I cannot guarantee that the calling thread will be a single-threaded apartment, because the assembly consuming the library might be a console application or a windowed application; the library doesn't know which, or which threading model that caller uses. The good news is that System.Threading provides very rich types for checking this as well as handling it.

First, let's take a look at checking for the threading model and creating an STA thread if needed.

if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA) {
    Thread thread = new Thread(() => {
        // ...
    });

    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
}

You can see that the code to check the current threading apartment isn't anything hard, and neither is setting the apartment state. My requirement was that the WPF Window be shown modally by calling Window.ShowDialog, so I could consume the DialogResult and continue processing in the licensing service. To do this I created the Window in the newly created STA thread.

if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA) {
    ActivateWindowController controller = new ActivateWindowController();
    ActivateWindow window = null;

    bool? dialogResult = null;

    Thread thread = new Thread(() => {
        window = new ActivateWindow(controller);
        dialogResult = window.ShowDialog();
    });

    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join();
}

There's nothing too exciting about any of this either. I'm using the Model-View-Controller (MVC) pattern, so I have an ActivateWindowController that gets passed to the ActivateWindow constructor. Because the Window needs to run in a single-threaded apartment, I do the actual instantiation in the new thread. Then I call ShowDialog() and consume the result. The last bit is the thread.Join(), which blocks the calling thread until the new thread terminates, which is when the dialog is closed.

Now I'll try to explain what ActivateWindowController does as best I can. When ActivateWindow is loaded, the WPF stack invokes the Loaded event handler. The code isn't anything special, so I'll just show it.

public partial class ActivateWindow : Window
{
    /// <summary>
    /// Initializes a new instance of the DCOMProductions.Licensing.Windows.ActivateWindow class.
    /// </summary>
    public ActivateWindow(ActivateWindowController controller) {
        InitializeComponent();
        _controller = controller;
    }

    #region Controller Members

    private ActivateWindowController _controller;

    private void Window_Loaded(object sender, RoutedEventArgs e) {
        _controller.ActivateComplete += ActivateComplete;
        _controller.Activate();
    }

    private void ActivateComplete(object sender, ActivateWindowController.ActivateEventArgs e) {
        DialogResult = e.Result == ActivationResult.Passed;
    }

    #endregion
}

When the Loaded event handler is invoked, a call to ActivateWindowController.Activate() is made. What this method does is spin off a new Task by calling Task.Factory.StartNew(...) and consume a WCF service; in short, another thread. When the task completes, a continuation set up with Task.ContinueWith(...) is responsible for wrapping up the result and delegating everything back to the proper thread.

.ContinueWith((task) => {
    CloseDialogCallback method = new CloseDialogCallback(() => {
        OnActivateComplete(new ActivateEventArgs(result));
    });
    _Dispatcher.BeginInvoke(method, null);
});

Now it is important to note that _Dispatcher is a private instance that was initialized in the constructor of ActivateWindowController. The intent was for that dispatcher to live on the UI thread of the Window, because the dispatcher is what delegates calls to the correct thread; in this case we want to delegate the callback method to the UI thread. And remember, the UI thread is the STA thread I created earlier.

The last line is where the deadlock is introduced. When _Dispatcher.BeginInvoke(method, null); is called, it asynchronously schedules the callback to execute on the thread the dispatcher was created on. And remember, the thread _Dispatcher was created on is the same thread the ActivateWindowController instance was created on. Let's double check that.

if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA) {
    ActivateWindowController controller = new ActivateWindowController();
    ActivateWindow window = null;

    bool? dialogResult = null;

    Thread thread = new Thread(() => {
        window = new ActivateWindow(controller);
        dialogResult = window.ShowDialog();
    });

    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join();
}

Well, hopefully by now you see the problem and why there is a deadlock. ActivateWindowController is not being instantiated on our STA thread. So when _Dispatcher.BeginInvoke is called, it schedules the callback to execute on the very thread we told to block by calling thread.Join(). Because that thread is blocked, and the dialog can't return until the callback is executed by the dispatcher, I have a blocked thread and a scheduled callback that will never run; i.e. a deadlock.

The fix is to instantiate ActivateWindowController on the STA thread I created, which is where we want the dispatcher to delegate the callback to anyway. This resolves the deadlock. It's important to watch out for little quirks like this; it didn't take me long to realize what created the deadlock, but on most days it would have taken much longer to figure out. The most important thing to remember is that this is not specific to Dispatcher; it applies to multi-threading in general.
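For clarity, here is what the corrected snippet could look like; it is the same code as before, with only the controller's instantiation moved inside the STA thread so that the Dispatcher it captures belongs to the window's UI thread:

if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA) {
    ActivateWindow window = null;
    bool? dialogResult = null;

    Thread thread = new Thread(() => {
        // Constructed here, the controller's _Dispatcher belongs to this STA thread,
        // which is the thread ShowDialog will pump messages on.
        ActivateWindowController controller = new ActivateWindowController();

        window = new ActivateWindow(controller);
        dialogResult = window.ShowDialog();
    });

    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join();
}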

Async, Await, Tasks, and UI synchronization with C# 5, .NET 4.5 and WinForms

Microsoft .NET

I'm going to cover an easy example of the new asynchronous programming model in C# 5 and .NET 4.5, available with the recent release of Visual Studio 11. The example we will write is a small WinForm that runs a long square-root calculation while providing progress updates to the UI, and, more importantly, does all of this asynchronously and safely.

The first note is that all of this can be done right inside our form class (e.g. Form1.cs), so I won't include a ZIP file or source download; the example is very straightforward and simple. Let's dig right in.

To start, we're going to capture a SynchronizationContext. This part is nothing new and has been around since .NET 2.0, so I won't be covering it in depth; in short, it's a very helpful object that allows you to marshal work between threads or other asynchronous environments. Add a field, and initialize it in the constructor.

public partial class Form1 : Form 
{
    public Form1() 
    {
        InitializeComponent();
        m_SynchronizationContext = SynchronizationContext.Current;
    }

    private SynchronizationContext m_SynchronizationContext;
}

We will be using this to invoke calls to the UI thread safely later on. Next, let’s declare our async method.

private async void ComputeSquareRootAsync(object sender, EventArgs e) 
{
    double sqrt = await Task<double>.Run(() => 
    {
        double result = 0;

        for (int i = 0; i < 5000000; i++)
        {
            result += Math.Sqrt(i);
        }

        return result;
   });
}

There are a few things to note here. First, notice the declaration private async void. We are telling the compiler that this method is asynchronous, and because it contains await expressions, the C# compiler knows to rewrite the method appropriately under the hood.

If you don't know already, async and await simply build on the existing Task types in the framework. By awaiting Task.Run, we are saying "run all code up to this point synchronously, start the work on a background thread without blocking the UI, return control to the caller immediately, and resume the rest of this method when the result is available". This means that while our Task is running on a background thread, the remainder of the method does not continue until the result is returned, but at the same time the calling context is not blocked.

Now drag a button onto your form (e.g. button1) and wire its Click event to our ComputeSquareRootAsync method. Then drag on a label (e.g. label1) and a progress bar (e.g. progressBar1). Let's update our method a bit.

private async void ComputeSquareRootAsync(object sender, EventArgs e) 
{
    label1.Text = "Calculating sqrt of 5000000";
    button1.Enabled = false;
    progressBar1.Visible = true;

    double sqrt = await Task<double>.Run(() => 
    {
        double result = 0;

        for (int i = 0; i < 5000000; i++)
        {
            result += Math.Sqrt(i);
        }

        return result;
   });

   label1.Text = "The sqrt of 5000000 is " + sqrt;
   button1.Enabled = true;
   progressBar1.Visible = false;
}

Set the label's initial text in the designer to "Click the button to begin", and set the progress bar's visibility to false initially. I also set the button's text to "Calculate". This is all cosmetic, but it makes for a small, if not terribly practical, demo app.

Now the square root calculation will execute asynchronously without blocking the UI, while the last three lines of code that update the controls will not execute until the result is returned (i.e. the Task has completed). This is the magic of the new async model.

Let's implement our progress updates now. To do this, we're going to implement an interface on our form that .NET 4.5 provides explicitly for this scenario: IProgress<T>.

To make things simple, we will use IProgress<Tuple<int, int>> so we can pass both the maximum value and the current value of our progress operation (computing the square roots).

public partial class Form1 : Form, IProgress<Tuple<int,int>>

Now implement the interface’s Report method.

public void Report(Tuple<int, int> value) 
{
    DateTime now = DateTime.Now;

    // Throttle UI updates to at most one every 20 milliseconds of elapsed time.
    if ((now - m_PreviousTime).TotalMilliseconds > 20) 
    {
        m_SynchronizationContext.Post((@object) => 
        {
            Tuple<int, int> minMax = (Tuple<int, int>)@object;
            progressBar1.Maximum = minMax.Item1;
            progressBar1.Value = minMax.Item2;
        }, value);

        m_PreviousTime = now;
    }
}

You will notice one thing right off the bat: I reference a DateTime field named m_PreviousTime. Add it as a field in the Form1 class and set its value to DateTime.Now.

private DateTime m_PreviousTime = DateTime.Now;

You could also do that in the constructor where we initialized our synchronization context. The method is simple: we create an anonymous function that receives our Tuple as an object parameter, which we convert back with an explicit cast. Post queues that anonymous function onto the synchronization context, which belongs to our UI thread, giving us a safe UI update and no cross-thread violations. This is similar to doing a Control.BeginInvoke (Post is asynchronous, while Send is the synchronous counterpart, like Control.Invoke), and I suspect that under the hood it actually does exactly that, though I haven't looked at the implementation.

The final step is to call Report from inside the loop in ComputeSquareRootAsync, passing the maximum and the current index; you can see that call in the full listing. And now we're done. Here's the full Form1 class code so you can make sure you implemented all the steps.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace Async {
    public partial class Form1 : Form, IProgress<Tuple<int,int>> {
        public Form1() {
            InitializeComponent();
            m_SynchronizationContext = SynchronizationContext.Current;
        }

        private SynchronizationContext m_SynchronizationContext;
        private DateTime m_PreviousTime = DateTime.Now;

        private async void ComputeSquareRootAsync(object sender, EventArgs e) {
            label1.Text = "Calculating Sqrt of 5000000";
            button1.Enabled = false;
            progressBar1.Visible = true;

            double sqrt = await Task<double>.Run(() => {
                double result = 0;

                for (int i = 0; i < 5000000; i++) {
                    result += Math.Sqrt(i);
                    Report(new Tuple<int,int>(5000000, i));
                }

                return result;
            });

            progressBar1.Visible = false;
            button1.Enabled = true;
            label1.Text = "The sqrt of 5000000 is " + sqrt;
        }

        public void Report(Tuple<int, int> value) {
            DateTime now = DateTime.Now;

            if ((now - m_PreviousTime).TotalMilliseconds > 20) {
                m_SynchronizationContext.Post((@object) => {
                    Tuple<int, int> minMax = (Tuple<int, int>)@object;
                    progressBar1.Maximum = minMax.Item1;
                    progressBar1.Value = minMax.Item2;
                }, value);

                m_PreviousTime = now;
            }
        }
    }
}

I may come back and add the source when I have more time.

Using ISynchronizeInvoke to update your UI safely from another Thread.

Microsoft .NET

If you are still using .NET threads directly for multi-threading in your applications (as opposed to the Task Parallel Library or something else), then updating the UI from those threads is something developers often get wrong. First off, stay away from setting CheckForIllegalCrossThreadCalls to false. If you want your application to have unpredictable behavior, throw intermittent exceptions, and have loads of problems, go right ahead. If you want to do things right, read on.

In the early days of .NET, it was common to implement the full Invoke pattern, which is more work than needed. Most if not all WinForms controls implement an interface called ISynchronizeInvoke. You can use it to update your controls safely from another thread in a single line of code. Here is the implementation:

namespace YourApplication {
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ComponentModel;

    /// <summary>
    /// Helper class that allows synchronized invoking to be performed in a single line of code.
    /// </summary>
    internal static class SynchronizedInvoke {
        /// <summary>
        /// Invokes the specified action on the thread that the specified sync object was created on.
        /// </summary>
        public static void Invoke(ISynchronizeInvoke sync, Action action) {
            if (!sync.InvokeRequired) {
                action();
            }
            else {
                object[] args = new object[] { };
                sync.Invoke(action, args);
            }
        }
    }
}

This is just a helper class I usually include in all my WinForms projects as SynchronizedInvoke.cs. The call itself is very simple. Let's say you have a thread that runs a method called ThreadWork, and at the end of the method you want to set the value of a progress bar named uxProgressBar to 100.

private void ThreadWork() {
    // Do some work on a thread
    SynchronizedInvoke.Invoke(uxProgressBar, () => uxProgressBar.Value = 100);
}

The key is the first parameter, the 'sync', which is the object you want to invoke on; the second is just an anonymous method (you can also pass an actual Action). There's really nothing more to it, but you should also read the article I wrote on using the Task Parallel Library, 'Writing thread-safe event handlers with the Task Parallel Library in .NET 4.0', which in my opinion is a better way to perform threaded operations and UI updates.

Writing thread-safe event handlers with the Task Parallel Library in .NET 4.0

Microsoft .NET

Download Example Code
TasksWithThreadSafeEvents.zip

In this article we will be using the following technologies:

  • .NET 4.0
  • Windows Forms (WinForms)
  • Task Parallel Library (TPL, part of .NET 4.0)

In a nutshell, I am talking about writing thread-safe events for WinForms. I’ve not ventured into the world of WPF quite yet, so this article may or may not apply to WPF.

Now, with that said, you may be familiar with using BeginInvoke, EndInvoke, and Invoke to access WinForms controls safely from other threads. The amount of code to do that can be quite cumbersome, and to me it looks like spaghetti. Another way is ISynchronizeInvoke, which you can wrap in a helper class and use to invoke in a single line of code. These methods all work fine.

There are quite a few articles that explain how to do thread-safe and synchronized events with the Task Parallel Library, but they are often long and complicated. The other problem is that the majority of them assume one, single, horrific thing: that you will always be writing your parallel code inside the form, with access to your controls and other referenceable objects. Furthermore, they all seem to implement some sort of helper class that comes as an extra. In this article, my aim is a bit more specific. Take the System.Net.WebClient class for example. It exposes an event called DownloadProgressChanged. You know what is great about it? It's thread-safe, and it doesn't have a clue about your form or controls. That's what this article is about: I am going to show you how to write a class with completely thread-safe events using the TPL and just a few lines of code.

The code:

//-----------------------------------------------------------------------------
// <copyright file="Counter.cs" company="DCOM Productions">
//     Copyright (c) DCOM Productions.  All rights reserved.
// </copyright>
//-----------------------------------------------------------------------------

namespace TasksWithThreadSafeEvents.Objects {
    using System;
    using System.Threading.Tasks;
    using System.Threading;

    internal class Counter {
        // CLR generated constructor

        #region Events

        /// <summary>
        /// Occurs when the counter has counted
        /// </summary>
        public event EventHandler CountChanged;
        private void OnCountChanged() {
            // Copy to a local so the delegate cannot become null between the check and the call.
            EventHandler handler = CountChanged;
            if (handler != null) {
                handler(this, EventArgs.Empty);
            }
        }

        /// <summary>
        /// Occurs when the counter has completed counting
        /// </summary>
        public event EventHandler CountCompleted;
        private void OnCountCompleted(Task task) {
            EventHandler handler = CountCompleted;
            if (handler != null) {
                handler(this, EventArgs.Empty);
            }
        }

        #endregion

        #region Properties

        private int m_Maximum = 100;
        /// <summary>
        /// Gets or sets the maximum value the counter will count to
        /// </summary>
        public int Maximum {
            get {
                return m_Maximum;
            }
            set {
                m_Maximum = value;
            }
        }

        #endregion

        #region Methods

        private void Count() {
            for (int i = 0; i < Maximum; i++) {
                Thread.Sleep(50);
            }
        }

        /// <summary>
        /// Runs the counter by starting from 0 and incrementing by one, until the counter reaches its maximum
        /// </summary>
        public void Run() {
            Task.Factory.StartNew(Count).ContinueWith(OnCountCompleted);
        }

        #endregion
    }
}

I shouldn't have to explain the code, but I will point out that the main difference in this event model is that OnCountCompleted takes an argument of type Task. That is because ContinueWith expects an Action<Task> when we run our task; we don't care about the Task object when raising the event, we just want to signal that the work completed, so having to accept a Task is a small side-effect of the approach.

Often in development you want to write components like this (not specifically a counter): a class or wrapper that does all the work you need and is thread-safe at the same time. The reason is that we don't want to make the UI responsible for thread safety everywhere the class is used; if a class does not provide thread-safe events, I have to write all that invoking code in every UI that consumes it. The class we just went over is extremely simple, so let's wire up the progress.

First, we need to add a TaskScheduler field to the class.

#region Fields

private TaskScheduler m_TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext();

#endregion

This should be pretty self-explanatory, but in a nutshell we declare this at class scope, and the field initializer runs when the object is constructed, so the scheduler captures the synchronization context of the thread the class is created on. So when you create the Counter somewhere in your WinForm, the TaskScheduler is tied to the same thread as the form: your UI thread.

Next, we need to report progress from our Count() method, which does the actual counting. The cool thing about the Task Parallel Library is that we can do this in a single statement.

private void Count() {
    for (int i = 0; i < Maximum; i++) {
        Thread.Sleep(50);
        Task.Factory.StartNew(() => OnCountChanged(),
            CancellationToken.None,
            TaskCreationOptions.None,
            m_TaskScheduler)
        .Wait();
    }
}

We are simply starting a new task using the existing TaskFactory and pointing it at the method that raises the event. We don't need anything special for the other arguments, but we must pass in our TaskScheduler; this is important, because the scheduler will run the task on the thread it was created from: the UI thread. If you want CountCompleted raised on the UI thread as well, ContinueWith also has an overload that accepts a TaskScheduler.

The second important thing is that we call Wait. We do this to give whatever is hooked up to the event time to execute its logic, for example for a progress bar to update and repaint, before the counter moves on.

That's all there is to it. Let's use the code in a simple WinForms app with a progress bar and a button.

//-----------------------------------------------------------------------------
// <copyright file="ShellForm.cs" company="DCOM Productions">
//     Copyright (c) DCOM Productions.  All rights reserved.
// </copyright>
//-----------------------------------------------------------------------------

namespace TasksWithThreadSafeEvents.Forms {
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Windows.Forms;
    using TasksWithThreadSafeEvents.Objects;

    public partial class ShellForm : Form {
        /// <summary>
        /// Instantiates a new instance of the TasksWithThreadSafeEvents.Forms.ShellForm class
        /// </summary>
        public ShellForm() {
            InitializeComponent();
        }

        private void OnButtonClick(object sender, EventArgs e) {
            Counter counter = new Counter();
            counter.CountChanged += OnCountChanged;
            counter.CountCompleted += OnCountCompleted;
            counter.Maximum = uxProgressBar.Maximum;
            uxProgressBar.Value = 0;
            counter.Run();
        }

        private void OnCountChanged(object sender, EventArgs e) {
            uxProgressBar.Value++;
        }

        private void OnCountCompleted(object sender, EventArgs e) {
            MessageBox.Show("The counter has reached its maximum", "Counter", 
                MessageBoxButtons.OK, MessageBoxIcon.Information);
        }
    }
}

The code looks clean, and it was very little work to implement. You could of course modify the code to report an actual progress value by creating your own class deriving from EventArgs and passing the for-loop indexer through the event, as sketched below. I hope this helps those out there looking to write simple, thread-safe classes using the Task Parallel Library.
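As a sketch of that idea (the names here are placeholders, not part of the download), you could define an EventArgs subclass that carries the current count:

using System;

// Hypothetical event args carrying a progress value.
internal class CountChangedEventArgs : EventArgs {
    public CountChangedEventArgs(int current) {
        Current = current;
    }

    /// <summary>
    /// The value the counter has reached.
    /// </summary>
    public int Current { get; private set; }
}

The CountChanged event would then be declared as EventHandler<CountChangedEventArgs>, Count() would pass the loop indexer when raising it, and the form could assign uxProgressBar.Value directly instead of incrementing it.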

Download Example Code
TasksWithThreadSafeEvents.zip