Mutual exclusion
For the concept, see Mutually exclusive events. "Mutex" redirects here. For the computer program object that negotiates mutual exclusion among threads, see lock (computer science).
Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. A critical section is a piece of code in which a process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion. A program, process, or thread can have the critical section in it without any mechanism or algorithm which implements mutual exclusion.
Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers. The synchronization of access to those resources is an acute problem because a thread can be stopped or started at any time.
To illustrate: suppose a section of code is altering a piece of data over several program steps, when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, the data, which is in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries to overwrite that data, the ensuing state will probably be unrecoverable. Shared data accessed by critical sections must therefore be protected, so that other processes that read from or write to it are excluded from running at the same time.
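To make the hazard concrete, the following minimal sketch (assuming POSIX threads; the names `worker` and `counter` are illustrative, not from the article) lets two threads increment a shared counter with no mutual exclusion. The final total is usually well below the expected 2,000,000 because the threads interleave their read-modify-write steps.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;          /* shared data, deliberately unprotected */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                /* a read-modify-write over several machine steps */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Without mutual exclusion the printed value is typically less than
       2000000, because partial updates from one thread are overwritten
       by the other. */
    printf("counter = %ld\n", counter);
    return 0;
}
```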
Enforcing mutual exclusion
There are both software and hardware solutions for enforcing mutual exclusion. The different solutions are shown below.
Hardware solutions
On a uniprocessor system, a common way to achieve mutual exclusion inside kernels is to disable interrupts for the smallest possible number of instructions that will prevent corruption of the shared data structure, the critical section. This prevents interrupt code from running in the critical section and also protects against an interrupt-driven process switch.
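A minimal sketch of this idea, assuming hypothetical `arch_disable_interrupts()` and `arch_enable_interrupts()` helpers that stand in for the platform's real primitives (for example a cli/sti instruction pair or a kernel's interrupt-save macros):

```c
/* Sketch only: arch_disable_interrupts()/arch_enable_interrupts() are
   hypothetical stand-ins for the platform's real interrupt-control
   primitives. */
extern void arch_disable_interrupts(void);
extern void arch_enable_interrupts(void);

static volatile int shared_count;   /* data also touched by an interrupt handler */

void update_shared(void) {
    arch_disable_interrupts();      /* no interrupt handler can run now */
    shared_count++;                 /* critical section, kept as short as possible */
    arch_enable_interrupts();
}
```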
In a computer in which several processors share memory, an indivisible test-and-set of a flag could be used in a tight loop to wait until the other processor clears the flag. The test-and-set performs both operations without releasing the memory bus to another processor. When the code leaves the critical section, it clears the flag. This is called a "spinlock" or "busy-wait".
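Such a test-and-set spinlock might look roughly like this C11-atomics sketch (the names `acquire` and `release` are illustrative):

```c
#include <stdatomic.h>

/* A minimal test-and-set spinlock built on a C11 atomic flag. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* Atomically set the flag and observe its previous value; spin
       (busy-wait) for as long as it was already set by another processor. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;   /* busy-wait */
}

void release(void) {
    /* Leaving the critical section clears the flag. */
    atomic_flag_clear_explicit(&lock, memory_order_release);
}
```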
Similar atomic multiple-operation instructions, e.g., compare-and-swap, are commonly used for lock-free manipulation of linked lists and other data structures.
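As one illustration, a compare-and-swap retry loop can push a node onto a lock-free stack (a singly linked list) without taking any lock. This is a sketch of the push operation only, using C11 atomics, not a complete lock-free list implementation:

```c
#include <stdatomic.h>
#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* Head of a lock-free stack (a singly linked list). */
static _Atomic(struct node *) head = NULL;

/* Push without a lock: retry the compare-and-swap until it succeeds. */
void push(struct node *n) {
    struct node *old = atomic_load(&head);
    do {
        n->next = old;
        /* If head still equals `old`, swing it to `n`; otherwise `old`
           is refreshed with the current head and the loop retries. */
    } while (!atomic_compare_exchange_weak(&head, &old, n));
}
```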
Software solutions
Beside the hardware supported solution, some software solutions exist that use "busy-wait" to achieve the goal. Examples of these include the following:
- Dekker's algorithm
- Peterson's algorithm (see the sketch after this list)
- Lamport's bakery algorithm
- Szymanski's algorithm
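For example, Peterson's algorithm for two threads can be sketched as follows. Sequentially consistent C11 atomics are used here to supply the strict memory ordering these algorithms require on out-of-order hardware; the function names are illustrative.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's algorithm for two threads with ids 0 and 1. The default
   sequentially consistent atomics prevent the reordering that would
   otherwise break the algorithm. */
static atomic_bool flag[2];
static atomic_int  turn;

void peterson_lock(int id) {
    int other = 1 - id;
    atomic_store(&flag[id], true);   /* announce intent to enter */
    atomic_store(&turn, other);      /* but yield priority to the other thread */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                            /* busy-wait */
}

void peterson_unlock(int id) {
    atomic_store(&flag[id], false);  /* leave the critical section */
}
```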
Spin locks and busy waiting take up excessive processor time and power and are considered anti-patterns in almost every case.[citation needed] In addition, these algorithms do not work correctly if out-of-order execution is used on the platform that executes them; programmers have to specify strict ordering of the memory operations within a thread.[citation needed]
The solution to these problems is to use the synchronization facilities provided by an operating system's multithreading library, which will take advantage of hardware solutions if possible but will fall back on software solutions otherwise. For example, when the operating system's lock library is used and a thread tries to acquire an already acquired lock, the operating system suspends the thread with a context switch and swaps in another thread that is ready to run, or puts the processor into a low-power state if no other thread can be run. Therefore, most modern mutual exclusion methods attempt to reduce latency and busy-waiting by using queuing and context switches. However, if the time spent suspending a thread and then restoring it can be proven to be always more than the time that must be spent waiting for the lock to become available in a particular situation, then spinlocks are an acceptable solution for that situation only.
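For instance, with the POSIX threads library a critical section is protected by a mutex, and a thread that finds the mutex already locked is suspended by the scheduler instead of busy-waiting. A minimal sketch (the name `increment` is illustrative):

```c
#include <pthread.h>

/* A lock provided by the operating system's threading library. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

void increment(void) {
    pthread_mutex_lock(&m);     /* sleeps if the mutex is already held */
    shared_counter++;           /* critical section */
    pthread_mutex_unlock(&m);   /* wakes one waiting thread, if any */
}
```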
Advanced mutual exclusion
Higher-level synchronization primitives, such as semaphores, can be built by using the solutions explained above.
Many forms of mutual exclusion have side-effects. For example, classic semaphores permit deadlocks, in which one process gets a semaphore, another process gets a second semaphore, and then both wait forever for the other semaphore to be released. Other common side-effects include starvation, in which a process never gets sufficient resources to run to completion; priority inversion, in which a higher-priority thread waits for a lower-priority thread; and high latency, in which response to interrupts is not prompt.
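The deadlock pattern can be sketched with two POSIX mutexes instead of semaphores (the structure of the problem is the same): each thread acquires its first lock and may then wait forever for the lock held by the other.

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 takes A then B; thread 2 takes B then A. If each thread
   acquires its first lock before the other releases, both block forever. */
static void *thread1(void *arg) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);   /* may wait forever for thread2 */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg) {
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);   /* may wait forever for thread1 */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}
```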
Much research is aimed at eliminating the above effects, such as by guaranteeing non-blocking progress. No perfect scheme is known.
Further reading
- Michel Raynal: Algorithms for Mutual Exclusion, MIT Press, ISBN 0-262-18119-3
- Sunil R. Das, Pradip K. Srimani: Distributed Mutual Exclusion Algorithms, IEEE Computer Society, ISBN 0-8186-3380-8
- Thomas W. Christopher, George K. Thiruvathukal: High-Performance Java Platform Computing, Prentice Hall, ISBN 0-13-016164-0
- Gadi Taubenfeld, Synchronization Algorithms and Concurrent Programming, Pearson/Prentice Hall, ISBN 0-13-197259-6
See also
- Atomicity (programming)
- Concurrency control
- Mutually exclusive events
- Semaphore
- Dining philosophers problem
- Reentrant mutex