Critical section
In concurrent programming, a critical section is a piece of code that accesses a shared resource (a data structure or device) that must not be concurrently accessed by more than one thread of execution. A critical section will usually terminate in fixed time, and a thread, task or process will have to wait only a bounded time to enter it (bounded waiting). Some synchronization mechanism is required at the entry and exit of the critical section to ensure exclusive use, for example a semaphore.
By carefully controlling which variables are modified inside and outside the critical section (usually, by accessing important state only from within), concurrent access to that state is prevented. A critical section is typically used when a multithreaded program must update multiple related variables without a separate thread making conflicting changes to that data. In a related situation, a critical section may be used to ensure a shared resource, for example a printer, can only be accessed by one process at a time.
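For illustration, the following sketch (using the POSIX threads API shown in the examples further below; the variable and function names are only illustrative) updates two related variables inside one critical section, so no other thread can observe them in an inconsistent state:

/* Illustrative sketch -- names are placeholders, not part of any standard API. */
#include <pthread.h>

/* Two related pieces of shared state that must always change together. */
static long total_bytes;
static long write_count;
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

void record_write(long nbytes)
{
    pthread_mutex_lock(&state_lock);    /* enter the critical section */
    total_bytes += nbytes;              /* both updates complete ...            */
    write_count += 1;                   /* ... before any other thread can look */
    pthread_mutex_unlock(&state_lock);  /* leave the critical section */
}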
How critical sections are implemented varies among operating systems.
The simplest method is to prevent any change of processor control inside the critical section. On uniprocessor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit. With this implementation, any thread of execution entering any critical section anywhere in the system prevents every other thread, including interrupt handlers, from getting the CPU, and therefore from entering any other critical section or, indeed, any code at all, until the original thread leaves its critical section.
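A minimal sketch of this approach, assuming hypothetical architecture-specific primitives save_and_disable_interrupts() and restore_interrupts() (real kernels use their own equivalents), might look like this:

/* Hypothetical uniprocessor kernel code -- the two primitives below are
   placeholders for architecture-specific interrupt control. */
extern int  save_and_disable_interrupts(void);   /* returns the previous interrupt state */
extern void restore_interrupts(int saved_state);

void update_shared_state(void)
{
    int saved = save_and_disable_interrupts();   /* enter: no preemption, no interrupts */

    /* ... access the shared data; avoid blocking system calls here ... */

    restore_interrupts(saved);                   /* leave: previous state restored */
}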
This brute-force approach can be improved upon by using semaphores. To enter a critical section, a thread must obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical section at the same time as the original thread, but are free to gain control of the CPU and execute other code, including other critical sections that are protected by different semaphores.
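As a sketch of this scheme using the POSIX semaphore API, a binary semaphore initialized to 1 plays the role described above (error handling omitted):

#include <semaphore.h>

static sem_t cs_sem;   /* binary semaphore guarding the critical section */

void init(void)
{
    sem_init(&cs_sem, 0, 1);   /* initial value 1; 0 = not shared between processes */
}

void f(void)
{
    sem_wait(&cs_sem);         /* enter: blocks until the semaphore is obtained */
    /* ... access the shared resource ... */
    sem_post(&cs_sem);         /* leave: another waiting thread may now enter */
}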
Some confusion exists in the literature about the relationship between different critical sections in the same program.[citation needed] In general, a resource that must be protected from concurrent access may be accessed by several pieces of code. Each piece must be guarded by a common semaphore. Is each piece now a critical section or are all the pieces guarded by the same semaphore in aggregate a single critical section? This confusion is evident in definitions of a critical section such as "... a piece of code that can only be executed by one process or thread at a time". This only works if all access to a protected resource is contained in one "piece of code", which requires either the definition of a piece of code or the code itself to be somewhat contrived.
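To make the situation concrete, here is a sketch (names are illustrative) in which two separate pieces of code access the same resource and are therefore guarded by the same lock:

#include <pthread.h>

static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter;   /* the protected resource */

/* Piece of code #1 */
void increment(void)
{
    pthread_mutex_lock(&resource_lock);
    shared_counter++;
    pthread_mutex_unlock(&resource_lock);
}

/* Piece of code #2 -- guarded by the same lock, so at most one of the two
   guarded regions can be executing at any time. */
int read_value(void)
{
    pthread_mutex_lock(&resource_lock);
    int v = shared_counter;
    pthread_mutex_unlock(&resource_lock);
    return v;
}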
Application Level Critical Sections
Application-level critical sections reside in the memory range of the process and are usually modifiable by the process itself. This is called a user-space object because the program run by the user (as opposed to the kernel) can modify and interact with the object. However, the functions called may jump to kernel-space code to register the user-space object with the kernel.
Example Code For Critical Sections with POSIX pthread library
/* Sample C/C++, Unix/Linux */
#include <pthread.h>

/* This is the critical section object (statically allocated). */
static pthread_mutex_t cs_mutex = PTHREAD_MUTEX_INITIALIZER;

void f()
{
    /* Enter the critical section -- other threads are locked out */
    pthread_mutex_lock( &cs_mutex );

    /* Do some thread-safe processing! */

    /* Leave the critical section -- other threads can now pthread_mutex_lock() */
    pthread_mutex_unlock( &cs_mutex );
}
Example Code For Critical Sections with Win32 API
/* Sample C/C++, Windows, link to kernel32.dll */
#include <windows.h>

/* This is the critical section object -- once initialized, it cannot be moved in memory */
/* If you program in OOP, declare this as a non-static member in your class */
static CRITICAL_SECTION cs;

/* Initialize the critical section before entering a multi-threaded context. */
void init(void)
{
    InitializeCriticalSection(&cs);
}

void f()
{
    /* Enter the critical section -- other threads are locked out */
    EnterCriticalSection(&cs);

    /* Do some thread-safe processing! */

    /* Leave the critical section -- other threads can now EnterCriticalSection() */
    LeaveCriticalSection(&cs);
}

/* Release the system object when all finished -- usually at the end of the cleanup code. */
void cleanup(void)
{
    DeleteCriticalSection(&cs);
}
Note that on Windows NT (not 9x/ME), the function TryEnterCriticalSection() can be used to attempt to enter the critical section. This function returns immediately, so the thread can do other things if it fails to enter the critical section (usually because another thread has locked it). With the pthreads library, the equivalent function is pthread_mutex_trylock(). Note that a Win32 CRITICAL_SECTION is not the same as a Win32 mutex, which is an object used for inter-process synchronization. A Win32 CRITICAL_SECTION is for intra-process synchronization (and is much faster in terms of lock times), but it cannot be shared across processes.
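A sketch of this non-blocking pattern with pthreads (the Win32 version is analogous, using TryEnterCriticalSection()):

#include <pthread.h>

static pthread_mutex_t cs_mutex = PTHREAD_MUTEX_INITIALIZER;

void g(void)
{
    if (pthread_mutex_trylock(&cs_mutex) == 0) {
        /* Lock acquired -- do the protected work, then release it. */
        /* ... thread-safe processing ... */
        pthread_mutex_unlock(&cs_mutex);
    } else {
        /* Lock is held by another thread (trylock returned EBUSY) --
           do something else instead of blocking here. */
    }
}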
Kernel Level Critical Sections
Typically, critical sections prevent process and thread migration between processors and the preemption of processes and threads by interrupts and other processes and threads.
Critical sections often allow nesting. Nesting allows multiple critical sections to be entered and exited at little cost.
If the scheduler interrupts the current process or thread in a critical section, the scheduler will either allow the process or thread to run to completion of the critical section, or it will schedule the process or thread for another complete quantum. The scheduler will not migrate the process or thread to another processor, and it will not schedule another process or thread to run while the current process or thread is in a critical section.
Similarly, if an interrupt occurs in a critical section, the interrupt's information is recorded for future processing, and execution is returned to the process or thread in the critical section. Once the critical section is exited (and, in some cases, once the scheduled quantum completes), the pending interrupt will be executed.
Since critical sections may execute only on the processor on which they are entered, synchronization is required only within the executing processor. This allows critical sections to be entered and exited at almost zero cost. No interprocessor synchronization is required, only instruction stream synchronization. Most processors provide the required amount of synchronization by the simple act of interrupting the current execution state. This allows critical sections, in most cases, to be nothing more than a per-processor count of critical sections entered.
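A sketch of such a per-processor nesting count (the interrupt-control primitives are again hypothetical placeholders, and on a multiprocessor the variables would live in per-CPU data):

extern int  save_and_disable_interrupts(void);   /* hypothetical, architecture-specific */
extern void restore_interrupts(int saved_state);

static int cs_nesting;        /* 0 means "not inside any critical section" */
static int saved_irq_state;   /* interrupt state saved at the outermost entry */

void kcs_enter(void)
{
    if (cs_nesting == 0)
        saved_irq_state = save_and_disable_interrupts();   /* outermost entry */
    cs_nesting++;                                           /* nested entries just count */
}

void kcs_exit(void)
{
    cs_nesting--;
    if (cs_nesting == 0)
        restore_interrupts(saved_irq_state);   /* outermost exit: pending interrupts
                                                  and the scheduler may now run */
}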
Performance enhancements include executing pending interrupts at the exit of all critical sections and allowing the scheduler to run at the exit of all critical sections. Furthermore, pending interrupts may be transferred to other processors for execution.
Critical sections should not be used as a long-lived locking primitive. They should be short enough that they can be entered, executed, and exited without any interrupts occurring, whether from the hardware or from the scheduler.
Kernel-level critical sections are the basis of the software lockout issue.
External links
- Critical Section documentation on the MSDN Library homepage: http://msdn2.microsoft.com/en-us/library/ms682530.aspx
- Tutorial on critical sections: http://www.futurechips.org/tips-for-power-coders/parallel-programming-understanding-impact-critical-sections.html