ConcurrentDictionary<TKey,TValue> used with Lazy<T>
In a recent thread on the MSDN forum for the TPL, Stephen Toub suggested combining ConcurrentDictionary<TKey,TValue> with Lazy<TValue>. This provides a fantastic model for creating a thread-safe dictionary of values where constructing each value is expensive. It is an incredibly useful pattern for many operations, such as value caches.
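A minimal sketch of the pattern, assuming a hypothetical ExpensiveValue type whose construction is costly:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class ValueCache
{
    // Store Lazy<TValue> rather than TValue so the expensive construction
    // runs at most once per key, even if multiple threads race on GetOrAdd.
    private readonly ConcurrentDictionary<string, Lazy<ExpensiveValue>> _cache =
        new ConcurrentDictionary<string, Lazy<ExpensiveValue>>();

    public ExpensiveValue Get(string key)
    {
        // GetOrAdd may invoke the factory more than once under contention,
        // but only one Lazy instance is ever published for a given key, and
        // ExecutionAndPublication ensures its value is constructed exactly once.
        return _cache.GetOrAdd(
            key,
            k => new Lazy<ExpensiveValue>(
                () => new ExpensiveValue(k),
                LazyThreadSafetyMode.ExecutionAndPublication)).Value;
    }
}

class ExpensiveValue
{
    public ExpensiveValue(string key)
    {
        // Expensive construction would happen here.
    }
}
```

The key design point is that the cheap Lazy<TValue> wrapper, not the expensive value itself, is what competes in the GetOrAdd race; losing wrappers are simply discarded without ever running their factory.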
Parallelism in .NET – Part 20, Using Task with Existing APIs
Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs based on the previous asynchronous programming model. While Task and Task<T> provide a much nicer syntax and extended flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don’t directly support this workflow.
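As a rough illustration (not code from the article), the TaskFactory.FromAsync helpers can wrap an existing Begin/End pair, such as Stream.BeginRead/EndRead, in a Task<TResult> so it can participate in continuations:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class ApmWrapper
{
    // Wrap the classic Begin/End (APM) pattern in a Task<int> so the
    // existing API behaves like any other task-based operation.
    public static Task<int> ReadAsync(Stream stream, byte[] buffer)
    {
        return Task<int>.Factory.FromAsync(
            stream.BeginRead,        // begin method of the existing API
            stream.EndRead,          // end method of the existing API
            buffer, 0, buffer.Length,
            null);                   // optional state object
    }
}
```

A caller could then write, for example, ReadAsync(stream, buffer).ContinueWith(t => Console.WriteLine(t.Result)) instead of juggling IAsyncResult and callbacks directly.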
Parallelism in .NET – Part 19, TaskContinuationOptions
My introduction to Task continuations demonstrated continuations on the Task class. In addition, I’ve shown how continuations allow multiple tasks to be handled in a clean, concise manner. Continuations can also be used to handle exceptional situations with a clean, simple syntax.
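For example, a continuation restricted with TaskContinuationOptions.OnlyOnFaulted runs only when its antecedent throws; a small sketch of the idea:

```csharp
using System;
using System.Threading.Tasks;

class ContinuationOptionsDemo
{
    static void Main()
    {
        Task work = Task.Factory.StartNew(() =>
        {
            throw new InvalidOperationException("Something went wrong");
        });

        // Runs only if the antecedent completed without faulting or cancelling.
        work.ContinueWith(
            t => Console.WriteLine("Succeeded"),
            TaskContinuationOptions.OnlyOnRanToCompletion);

        // Runs only if the antecedent threw; reading t.Exception here
        // observes the exception so it is not left unhandled.
        Task onError = work.ContinueWith(
            t => Console.WriteLine("Failed: {0}", t.Exception.InnerException.Message),
            TaskContinuationOptions.OnlyOnFaulted);

        onError.Wait();
    }
}
```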
Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks
In my introduction to Task continuations, I demonstrated how the Task class provides a more expressive alternative to traditional callbacks. Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations…
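One such reason is handling several antecedents at once. A small sketch using Task.Factory.ContinueWhenAll, with trivial stand-ins for real work:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class MultiTaskContinuation
{
    static void Main()
    {
        Task<int>[] downloads =
        {
            Task.Factory.StartNew(() => 1),  // stand-ins for real operations
            Task.Factory.StartNew(() => 2),
            Task.Factory.StartNew(() => 3)
        };

        // A single continuation scheduled to run once all antecedents finish.
        Task<int> total = Task.Factory.ContinueWhenAll(
            downloads,
            completed => completed.Sum(t => t.Result));

        Console.WriteLine(total.Result); // prints 6
    }
}
```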
Parallelism in .NET – Part 17, Think Continuations, not Callbacks
In traditional asynchronous programming, we’d often use a callback to handle notification of a background task’s completion. The Task class in the Task Parallel Library introduces a cleaner alternative to the traditional callback: continuation tasks.
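As a minimal illustration of that shift (the work here is a trivial stand-in), the "call me when you're done" relationship is expressed on the task itself rather than passed in as a callback delegate:

```csharp
using System;
using System.Threading.Tasks;

class ContinuationVsCallback
{
    static void Main()
    {
        // Start the background work as a Task<TResult>...
        Task<string> loadTask = Task.Factory.StartNew(() => "loaded data");

        // ...and attach the "what happens next" step as a continuation.
        Task done = loadTask.ContinueWith(
            t => Console.WriteLine("Completed: " + t.Result));

        done.Wait(); // demo only: keep the process alive until the continuation runs
    }
}
```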
Parallelism in .NET – Part 16, Creating Tasks via a TaskFactory
The Task class in the Task Parallel Library supplies a large set of features. However, creating a task, assigning it to a TaskScheduler, and starting it involves quite a few steps. This becomes even more cumbersome when multiple tasks are involved: each task must be constructed, duplicating any required options, and then started individually, potentially on a specific scheduler. At first glance, this makes the new Task class seem like more work than ThreadPool.QueueUserWorkItem in .NET 3.5.
To simplify this process, and to make Tasks easy to use in common cases without sacrificing their power and flexibility, the Task Parallel Library added a new class: TaskFactory.
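A brief sketch of the difference, using the default Task.Factory and a custom TaskFactory with baked-in options (the workloads are illustrative only):

```csharp
using System;
using System.Threading.Tasks;

class FactoryDemo
{
    static void Main()
    {
        // One call constructs and starts the task; the factory supplies
        // the scheduler and default creation options for us.
        Task<double> computation = Task.Factory.StartNew(() => Math.Sqrt(42.0));

        // A custom factory can bake in options shared by many tasks,
        // avoiding the need to repeat them on every call.
        var longRunningFactory = new TaskFactory(
            TaskCreationOptions.LongRunning,
            TaskContinuationOptions.None);

        Task background = longRunningFactory.StartNew(
            () => Console.WriteLine("Result: {0}", computation.Result));

        background.Wait();
    }
}
```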
Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler
In my introduction to the Task class, I specifically mentioned that the Task class does not directly provide its own execution. I also made a strong point that the Task class itself is not directly related to threads or multithreading. Rather, the Task class is used to implement our decomposition of tasks.
Once we’ve implemented our tasks, we need to execute them. In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class.
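A small sketch showing a task handed explicitly to a scheduler, here the default ThreadPool-based TaskScheduler:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SchedulerDemo
{
    static void Main()
    {
        // The default scheduler targets the .NET ThreadPool.
        TaskScheduler scheduler = TaskScheduler.Default;

        Task work = Task.Factory.StartNew(
            () => Console.WriteLine("Running via the scheduler"),
            CancellationToken.None,
            TaskCreationOptions.None,
            scheduler);   // the scheduler decides where and when this executes

        work.Wait();

        // In a UI application (not shown here), supplying
        // TaskScheduler.FromCurrentSynchronizationContext() instead would
        // push the work back onto the UI thread.
    }
}
```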
Parallelism in .NET – Part 14, The Different Forms of Task
Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and give a short explanation of the different forms of Task. The Task Parallel Library includes four distinct, though related, variations on the Task class.
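The post itself enumerates the variations; as a small illustration of just two of them, a plain Task performs work with no result, while Task<TResult> acts as a future whose value becomes available on completion:

```csharp
using System;
using System.Threading.Tasks;

class TaskForms
{
    static void Main()
    {
        // Task: an operation run for its side effects, with no return value.
        Task plain = Task.Factory.StartNew(() => Console.WriteLine("working"));

        // Task<TResult>: a "future" whose Result is produced by the operation.
        Task<int> future = Task.Factory.StartNew(() => 40 + 2);

        plain.Wait();
        Console.WriteLine(future.Result); // blocks until the value is ready
    }
}
```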
Parallelism in .NET – Part 13, Introducing the Task class
Once we’ve used task-based decomposition to break down a problem, we need a clean abstraction with which to implement the resulting decomposition. Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task-related issues: the aptly named Task class.
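A minimal sketch of constructing, starting, and waiting on a Task:

```csharp
using System;
using System.Threading.Tasks;

class TaskIntro
{
    static void Main()
    {
        // A Task wraps a single discrete unit of work from our decomposition.
        Task task = new Task(() => Console.WriteLine("Discrete unit of work"));

        task.Start();   // hand the task to a scheduler for execution
        task.Wait();    // block until the work has completed
    }
}
```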
Parallelism in .NET – Part 12, More on Task Decomposition
Many problems can be decomposed using a Data Decomposition approach, but often this is not appropriate. Frequently, decomposing the problem into the distinct tasks that must be performed is a more natural abstraction.
However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require more effort. Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem and pay special attention to considerations such as the ordering and grouping of tasks.
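As a tiny, hypothetical sketch of how such an ordering constraint might be expressed once the tasks are identified (the task names are made up for illustration):

```csharp
using System;
using System.Threading.Tasks;

class DecompositionSketch
{
    static void Main()
    {
        // Two independent tasks can be grouped and run concurrently.
        Task loadUsers = Task.Factory.StartNew(() => Console.WriteLine("Loading users"));
        Task loadOrders = Task.Factory.StartNew(() => Console.WriteLine("Loading orders"));

        // A task that depends on both is ordered after them via a continuation,
        // encoding the ordering constraint discovered during decomposition.
        Task buildReport = Task.Factory.ContinueWhenAll(
            new[] { loadUsers, loadOrders },
            _ => Console.WriteLine("Building report"));

        buildReport.Wait();
    }
}
```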