Thread.Sleep() is OK to use in some situations, e.g. watchdog threads. However, in your case it may not be the optimal solution, as pointed out by others.
Without a code sample it's hard to tell, but based on your description I don't think it's a question of Thread.Sleep() or not. I suspect you are suffering from a race condition - that is usually why you see "random" buggy behavior, or even "random" crashes, in multithreaded code, which seems to be what you are experiencing.
For whatever reason, your for-loop may make the subtle critical timing of the race condition occur less often, but it won't solve the root cause. There are many pitfalls to be aware of when doing multithreaded programming; I can only advise you to read up on the topic if you want to avoid them. I'd recommend reading Joe Duffy's Concurrent Programming on Windows (http://www.amazon.com/Concurrent-Programming-Windows-Joe-Duffy/dp/032143482X).
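For illustration only (this is a hypothetical sketch, not based on your code, assuming the bug is a start-up race): sleeping merely makes the race less likely, whereas waiting on an explicit signal removes it.

using System;
using System.Threading;

public class StartupRace
{
    private static string sharedResult;                                    // written by the worker thread
    private static readonly ManualResetEventSlim done = new ManualResetEventSlim();

    public static void Main()
    {
        new Thread(() => { sharedResult = "computed"; done.Set(); }).Start();

        // Fragile: assumes the worker finishes within 100 ms - sometimes it won't have.
        // Thread.Sleep(100);

        // Robust: wait for the worker's signal instead of guessing how long it takes.
        done.Wait();

        Console.WriteLine(sharedResult);
    }
}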
In the .NET memory model, the runtime (CLR) ensures that changes to volatile fields are not cached in registers, so a change made on one thread is immediately visible to other threads (NB this is not true in some other memory models, including Java's).
But this says nothing about the relative ordering of operations across multiple fields, volatile or not.
To provide a consistent ordering across multiple fields you need to use a lock (or a memory barrier, either explicitly or implicitly via one of the methods that include a memory barrier).
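Below is a minimal sketch (the type and member names are illustrative, not from your code) of using a lock to get a consistent ordering across two fields; without the lock, a reader could observe ready == true while still seeing a stale value.

public class Handoff
{
    private readonly object gate = new object();   // illustrative lock object
    private int value;
    private bool ready;

    public void Publish(int v)
    {
        lock (gate)    // the lock orders the writes to both fields as a unit
        {
            value = v;
            ready = true;
        }
    }

    public bool TryRead(out int v)
    {
        lock (gate)    // and orders the reads consistently with those writes
        {
            v = value;
            return ready;
        }
    }
}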
For more details see Concurrent Programming on Windows, Joe Duffy, Addison-Wesley, 2008.
The lock statement used in your previous code is locking an object instance that is created in the scope of a method, so it will have no effect on another thread calling into the same method. Each thread must lock the same instance of an object in order to synchronise access to the given block of code. One way to do this (depending on the semantics you require) is to make the locking object a private static field of the class it is used in. This allows multiple instances of a given object to synchronise access to a block of code or a single shared resource. If synchronisation is only required for individual instances of an object, or for a resource that is instance specific, then static should be omitted. A sketch of both options follows below.
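A minimal sketch of the two shapes just described (the names are illustrative):

public class Worker
{
    // Shared by every instance of Worker, so all threads and all instances
    // contend on the same lock when guarding a single shared resource.
    private static readonly object sharedLock = new object();

    // Per-instance lock: use this instead when each instance guards only its own state.
    private readonly object instanceLock = new object();

    public void UpdateSharedResource()
    {
        lock (sharedLock)   // NOT lock (new object()) - locking a fresh object synchronises nothing
        {
            // ... touch the shared resource here ...
        }
    }

    public void UpdateInstanceState()
    {
        lock (instanceLock)
        {
            // ... touch this instance's own state here ...
        }
    }
}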
Volatile doesn't guarantee that reads or writes to the given variable will be atomic amongst different threads. It is a hint to the compiler and runtime to preserve the ordering of instructions, and it prevents the variable from being cached in a register. In general, unless you are working on something extremely performance sensitive (low-lock / lock-free algorithms, data structures, etc.) or really know what you are doing, I would opt for using Interlocked. The performance difference between volatile / Interlocked / lock in most applications is negligible, so if you are unsure it's best to use whatever gives you the safest guarantee (read Joe Duffy's blog and book).
For example, using volatile in the example below is not thread safe and the incremented counter does not reach 10,000,000 (when I ran the test it reached 8,848,450). This is because volatile only guarantees reading the latest value (i.e. the value is not cached in a register); the increment itself is still a non-atomic read-modify-write. When using Interlocked, the operation is thread safe and the counter does reach 10,000,000.
using System.Threading;
using NUnit.Framework;

public class Incrementor
{
    // volatile guarantees visibility of the latest value, but not atomicity of ++
    private volatile int count;

    public int Count
    {
        get { return count; }
    }

    public void UnsafeIncrement()
    {
        count++; // non-atomic read-modify-write: concurrent threads can lose updates
    }

    public void SafeIncrement()
    {
        // Interlocked makes the read-modify-write atomic. (The compiler warning about
        // passing a volatile field by ref is expected and harmless here.)
        Interlocked.Increment(ref count);
    }
}

[TestFixture]
public class ThreadingTest
{
    private const int fiveMillion = 5000000;
    private const int tenMillion = 10000000;

    [Test]
    public void UnsafeCountShouldNotCountToTenMillion()
    {
        const int iterations = fiveMillion;

        Incrementor incrementor = new Incrementor();
        Thread thread1 = new Thread(() => UnsafeIncrement(incrementor, iterations));
        Thread thread2 = new Thread(() => UnsafeIncrement(incrementor, iterations));

        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();

        // This assertion fails when increments are lost, demonstrating that volatile alone is not enough.
        Assert.AreEqual(tenMillion, incrementor.Count);
    }

    [Test]
    public void SafeIncrementShouldCountToTenMillion()
    {
        const int iterations = fiveMillion;

        Incrementor incrementor = new Incrementor();
        Thread thread1 = new Thread(() => SafeIncrement(incrementor, iterations));
        Thread thread2 = new Thread(() => SafeIncrement(incrementor, iterations));

        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();

        Assert.AreEqual(tenMillion, incrementor.Count);
    }

    private void UnsafeIncrement(Incrementor incrementor, int times)
    {
        for (int i = 0; i < times; ++i)
            incrementor.UnsafeIncrement();
    }

    private void SafeIncrement(Incrementor incrementor, int times)
    {
        for (int i = 0; i < times; ++i)
            incrementor.SafeIncrement();
    }
}
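For completeness, a lock-based version of the same increment (not part of the tests above, names illustrative) also reaches 10,000,000; for a trivial counter it is usually a little slower than Interlocked.Increment because it takes a full mutual-exclusion lock:

public class LockedIncrementor
{
    private readonly object gate = new object();
    private int count;

    public int Count
    {
        get { lock (gate) { return count; } }
    }

    public void SafeIncrement()
    {
        lock (gate)   // mutual exclusion makes the read-modify-write atomic
        {
            count++;
        }
    }
}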
If you search for 'interlocked volatile' you will find a number of existing answers that address this question.
If you're talking about the kinds of patterns and problems you might stumble into when doing multithreaded concurrent programming in general, I've heard a lot of good things about Joe Duffy's Concurrent Programming on Windows (http://www.amazon.com/Concurrent-Programming-Windows-Joe-Duffy/dp/032143482X). For C# specifics, Albahari's book or C# 4.0 in a Nutshell is a good reference.
You ask for "limitations".
I don't think you will find anything that can't be done with the APM (the Asynchronous Programming Model, i.e. the Begin/End pattern). The point is performance and programmer effort.
The verdict seems unanimous; Joe Duffy also warns you away from the APM.
And the conclusion is easy: use the TPL if you can. It is easy and efficient, and it is still only at the beginning of its further development.
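As a rough sketch of the difference in programmer effort (the stream and buffer plumbing here is illustrative), the TPL can even wrap an existing Begin/End (APM) pair via Task.Factory.FromAsync, after which you compose with tasks instead of hand-written callbacks:

using System;
using System.IO;
using System.Threading.Tasks;

public class ApmVsTpl
{
    // Old APM style: Begin/End pair plus a callback you wire up yourself.
    public static void ReadWithApm(FileStream stream, byte[] buffer)
    {
        stream.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            int bytesRead = stream.EndRead(ar);
            Console.WriteLine("APM read {0} bytes", bytesRead);
        }, null);
    }

    // TPL style: wrap the same Begin/End pair in a Task and compose from there.
    public static Task<int> ReadWithTpl(FileStream stream, byte[] buffer)
    {
        return Task<int>.Factory.FromAsync(
            stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);
    }
}

Usage would then be, for example, ReadWithTpl(stream, buffer).ContinueWith(t => Console.WriteLine("TPL read {0} bytes", t.Result));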
Concurrent Programming on Windows (http://www.amazon.com/gp/product/032143482X/ref=ox_ya_oh_product) has some great content on this subject which you could take to the team.