MadDotNet–Parallel Programming

Wednesday, August 3, 2011
by asalvo

On Wednesday, August 3, 2011, Dr. Joe Hummel presented to the Madison .NET Users Group. The presentation focused on the new Task class and the Task Parallel Library (TPL), and it started out by explaining what we have been using and why we need something new. Four current approaches were identified:

  • Threads (var thread = new System.Threading.Thread(...)); see the sketch after this list
  • Asynchronous Programming Model (async delegate invocation)
  • Event-based Asynchronous Pattern (the BackgroundWorker class)
  • ThreadPool.QueueUserWorkItem
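
To make the "tedious" point concrete, here is a minimal sketch (my own, not from the talk) of the first approach, a dedicated System.Threading.Thread:

using System;
using System.Threading;

class ThreadExample
{
    static void Main()
    {
        // Create and start a dedicated OS thread for the work.
        var thread = new Thread(() => Console.WriteLine("Doing work on a thread"));
        thread.Start();

        // Wait for it to finish. Cancellation, exceptions, and return
        // values are all left for the caller to wire up by hand.
        thread.Join();
    }
}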

While the old ways work, they are tedious to work with. The new TPL improves on these previous approaches by adding support for:

  • Cancellation
  • Easier exception handling
  • Higher-level constructs

Tasks

Having used the Task object in a couple of small applications, I can say that it is much easier to work with. The cancellation construct seemed a little more involved than I thought it should be, but I can see where the advanced cancellation features would make sense in a more involved scenario.

Tasks live in the System.Threading.Tasks namespace and can be used as easily as:

Task T = new Task(() => { /* Do Something */ });
T.Start();

// Preferred technique; slightly less overhead for the CLR to create
var t = Task.Factory.StartNew(() => { /* Do Something */ });
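
Cancellation, which I mentioned felt a bit involved, hangs off a CancellationTokenSource. A rough sketch of the pattern (my own example, not from the demo):

using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationExample
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        var task = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < int.MaxValue; i++)
            {
                // The work has to cooperatively observe the token.
                cts.Token.ThrowIfCancellationRequested();
            }
        }, cts.Token);

        cts.Cancel(); // request cancellation from the caller

        try
        {
            task.Wait();
        }
        catch (AggregateException)
        {
            // A canceled task surfaces as an AggregateException
            // wrapping a TaskCanceledException.
            Console.WriteLine("Task was canceled.");
        }
    }
}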

Parallel Computing

Rewriting all of the demo code is not practical, so be sure to check out the code which is available from Dr. Hummel’s website, http://joehummel.net/downloads.html.

The main demo presented showed how you can easily replace a for loop with a Parallel.For statement. Obviously there was a performance increase, but there were some other interesting observations. 
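
The flavor of that replacement looks roughly like this (a sketch with a made-up per-iteration method, not the actual demo code):

using System;
using System.Threading.Tasks;

class ParallelForExample
{
    static void Main()
    {
        var results = new double[600];

        // Sequential version: one iteration at a time.
        for (int i = 0; i < results.Length; i++)
            results[i] = Compute(i);

        // Parallel version: the TPL partitions the range across
        // thread-pool workers.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Compute(i);
        });
    }

    // Placeholder for whatever per-iteration work the demo did.
    static double Compute(int i)
    {
        double sum = 0;
        for (int j = 1; j < 100000; j++)
            sum += Math.Sqrt(i * j);
        return sum;
    }
}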

While the “for loop” required 600 operations, only about four threads were actually being used. A visualization built into the demo showed that the thread usage was dynamic: as one thread finished up its work, it was then used to help out with the slower-running portions of the computation.

In the demo code there was an inner loop inside the Parallel.For, and as a test, the inner loop was replaced with a Parallel.For as well. However, the performance was the same, which shows that some thought should be given to how much parallelism you add. In the case of the demo, had there been more processor cores available, it would have run faster.

The second demonstration walked through a more difficult, real-world example of converting a single-threaded application to a parallel one. The demo consisted of reading in a 120 MB text file that contained Netflix movie ratings and performing some calculations.

The demo showed three different ways to approach the code: imperative (standard C#), declarative (LINQ), and functional (F#). Imperative is your traditional coding using looping constructs and so on. The declarative approach reduced the code down to two LINQ statements that were parallelized by appending the AsParallel extension method to the end. The final implementation was done using F#.
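
I did not copy down the demo's LINQ statements, but the declarative-plus-AsParallel idea looks roughly like this sketch, with a hypothetical Rating type standing in for one parsed Netflix record:

using System;
using System.Collections.Generic;
using System.Linq;

class PlinqExample
{
    // Hypothetical type standing in for one parsed rating row.
    class Rating
    {
        public int MovieId { get; set; }
        public double Score { get; set; }
    }

    static void Main()
    {
        IEnumerable<Rating> ratings = LoadRatings();

        // Declarative (LINQ): average score per movie, sequential.
        var averages = ratings
            .GroupBy(r => r.MovieId)
            .Select(g => new { MovieId = g.Key, Average = g.Average(r => r.Score) })
            .ToList();

        // Same query parallelized by appending AsParallel().
        var parallelAverages = ratings
            .AsParallel()
            .GroupBy(r => r.MovieId)
            .Select(g => new { MovieId = g.Key, Average = g.Average(r => r.Score) })
            .ToList();
    }

    static IEnumerable<Rating> LoadRatings()
    {
        // Placeholder data source; the demo read a ~120 MB text file.
        return new[] { new Rating { MovieId = 1, Score = 4.0 } };
    }
}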

Performance in Seconds

Sequential C#: 16.1
Sequential Linq: 24
Sequential F#: 19
Parallel C#: ~6
Parallel Linq: 8.1
Parallel F#: 8.1

The takeaway from this is that if you need raw performance, go with C++ (not demoed). If you don’t need raw performance, then pick the programming model (C#/imperative, LINQ/declarative, F#/functional) that fits your method of thinking.

Summary

Your responsibility is to expose your application to parallelism by creating tasks; you still need to take resource contention and deadlocks into consideration. The CLR’s responsibility is then to execute your tasks as effectively as possible.
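
For example (my own illustration, not from the talk), naively updating shared state from inside a Parallel.For is still a race; you have to synchronize it yourself:

using System;
using System.Threading;
using System.Threading.Tasks;

class ContentionExample
{
    static long total;

    static void Main()
    {
        // Racy: 'total += i' is a read-modify-write that several
        // threads can interleave.
        // Parallel.For(0, 1000, i => total += i);

        // One fix: make each update atomic.
        Parallel.For(0, 1000, i => Interlocked.Add(ref total, i));

        Console.WriteLine(total); // prints 499500
    }
}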

Q and A

Q: Is it still a good idea to use Thread.Sleep inside a task?
A: When possible, you should opt to create more tasks and eliminate the Thread.Sleep.
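
One way to read that advice (my own sketch): rather than sleeping inside a task while it waits on another result, express the dependency as a continuation so no thread sits idle:

using System;
using System.Threading.Tasks;

class ContinuationExample
{
    static void Main()
    {
        // A task that produces a value.
        var produce = Task.Factory.StartNew(() => 42);

        // A second task that runs only once the first has finished,
        // instead of a task that sleeps and polls for the result.
        var consume = produce.ContinueWith(t =>
            Console.WriteLine("Result: " + t.Result));

        consume.Wait();
    }
}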

Q: What is cheaper to create, a Task or a Thread?
A: A Task takes a couple hundred cycles to create; a Thread takes a couple thousand.
