Lock (computer science)

In computer science, a lock is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution. Locks are one way of enforcing concurrency control policies.

Types

Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access.

A (binary) semaphore is the simplest type of lock. In terms of access to the data, no distinction is made between shared (read-only) and exclusive (read-and-write) modes. Other schemes provide a shared mode, where several threads can acquire a shared lock for read-only access to the data. Other modes, such as exclusive, intend-to-exclude and intend-to-upgrade, are also widely implemented.
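
As an illustration of shared versus exclusive modes, the following C sketch uses POSIX reader-writer locks; the counter structure and function names are invented for this example.

#include <pthread.h>

/* Hypothetical shared structure protected by a reader-writer lock;
   initialise the rwlock with pthread_rwlock_init() before use. */
struct counter {
    pthread_rwlock_t rwlock;
    long value;
};

/* Several readers may hold the lock in shared mode at the same time. */
long counter_read(struct counter *c)
{
    long v;
    pthread_rwlock_rdlock(&c->rwlock);   /* shared (read-only) mode */
    v = c->value;
    pthread_rwlock_unlock(&c->rwlock);
    return v;
}

/* A writer must hold the lock exclusively. */
void counter_add(struct counter *c, long delta)
{
    pthread_rwlock_wrlock(&c->rwlock);   /* exclusive (read-write) mode */
    c->value += delta;
    pthread_rwlock_unlock(&c->rwlock);
}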

Independent of the type of lock chosen above, locks can be classified by what happens when the lock strategy prevents progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. A spinlock is a lock where the thread simply waits ("spins") until the lock becomes available. It is very efficient if threads are only likely to be blocked for a short period of time, as it avoids the overhead of operating system process re-scheduling. It is wasteful if the lock is held for a long period of time.

Locks typically require hardware support for efficient implementation. This usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.
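
A minimal test-and-set spinlock can be sketched with the C11 <stdatomic.h> facilities; this is illustrative only and omits back-off, fairness and error handling.

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void)
{
    /* atomic_flag_test_and_set atomically sets the flag and returns its
       previous value, so the loop spins until the lock was observed free. */
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
        ;   /* busy-wait (spin) */
}

void release(void)
{
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}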

Uniprocessor architectures have the option of using uninterruptable sequences of instructions, using special instructions or instruction prefixes to disable interrupts temporarily, but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.

An atomic operation is required because of concurrency, where more than one task may execute the same logic. For example, consider the following C code:

if (lock == 0) {
    /* lock free - set it */
    lock = myPID; 
}

The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Both tasks may detect that the lock is free and attempt to set it, each unaware that the other task is doing the same. Dekker's or Peterson's algorithm is a possible substitute if atomic locking operations are not available.
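
For reference, here is a sketch of Peterson's algorithm for two threads (ids 0 and 1), written with C11 atomics so the flag and turn accesses are sequentially consistent; the function names are chosen for this example.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];   /* flag[i] is true while thread i wants the lock */
static atomic_int  turn;      /* id of the thread whose turn it is to wait     */

void peterson_lock(int i)     /* i is the calling thread's id, 0 or 1 */
{
    int other = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, other);
    /* wait while the other thread wants the lock and it is our turn to yield */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                     /* busy-wait */
}

void peterson_unlock(int i)
{
    atomic_store(&flag[i], false);
}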

Careless use of locks can result in deadlock or livelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired and released in a specifically defined "cascade" order.)
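
As a sketch of the ordering strategy, assuming POSIX mutexes, two locks can always be acquired in a fixed (here, address-based) order so that no two threads can each hold one lock while waiting for the other.

#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a globally consistent (address) order. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if (a == b) {                        /* same lock requested twice: take it once */
        pthread_mutex_lock(a);
        return;
    }
    if ((uintptr_t)a > (uintptr_t)b) {   /* normalise to the fixed order */
        pthread_mutex_t *tmp = a;
        a = b;
        b = tmp;
    }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a);
    if (a != b)
        pthread_mutex_unlock(b);
}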

Some languages support locks syntactically. An example in C# follows:

class Account {     // this is a monitor of an account
  long val = 0;
 
  public void Deposit(long x) {
    lock (this) {   // only 1 thread at a time may execute this statement
      val += x;
    }
  }
 
  public void Withdraw(long x) {
    lock (this) {
      val -= x;
    }
  }
}

A lock can be taken on any object:

  object semaphore = new object();
  ...
  lock (semaphore) { ... critical code... }

Unlike Java, C# does not have synchronized methods.[1]

Granularity

Before introducing lock granularity, it is necessary to understand three concepts about locks:

  • lock overhead: The extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage.
  • lock contention: This occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more granular the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row.)
  • deadlock: The situation when each of two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever.

There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.

An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock[citation needed].
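
As an illustration of the trade-off, assuming a hypothetical hash table protected by POSIX mutexes, coarse and fine granularity might look as follows.

#include <pthread.h>

#define NBUCKETS 64
struct entry;                        /* hypothetical hash-table entry type */

/* Coarse granularity: a single lock protects the whole table.
   Low lock overhead, but every access contends for the same lock. */
struct coarse_table {
    pthread_mutex_t lock;
    struct entry   *buckets[NBUCKETS];
};

/* Fine granularity: one lock per bucket. More locks to store and
   initialise (overhead), but threads touching different buckets
   no longer contend with one another. */
struct fine_table {
    pthread_mutex_t locks[NBUCKETS];
    struct entry   *buckets[NBUCKETS];
};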

In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.

Database locks

Database locks can be used as a means of ensuring transaction synchronicity; i.e., when transaction processing is made concurrent (transactions are interleaved), using two-phase locking ensures that the concurrent execution of the transactions is equivalent to some serial ordering of the transactions. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions, or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.
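
A minimal sketch of the two-phase discipline, assuming a hypothetical row structure guarded by POSIX mutexes: the transaction acquires every lock it needs (growing phase) before it releases any of them (shrinking phase).

#include <pthread.h>

/* A hypothetical database "row" with its own lock and a balance field. */
struct row {
    pthread_mutex_t lock;
    long balance;
};

void transfer(struct row *from, struct row *to, long amount)
{
    /* growing phase: acquire all locks the transaction needs */
    pthread_mutex_lock(&from->lock);
    pthread_mutex_lock(&to->lock);

    from->balance -= amount;
    to->balance   += amount;

    /* shrinking phase: release locks; no new locks may be acquired now */
    pthread_mutex_unlock(&to->lock);
    pthread_mutex_unlock(&from->lock);
}

In practice this discipline is combined with a predetermined lock order or with deadlock detection, as noted above, since two transfers running in opposite directions could otherwise deadlock.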

Mechanisms are employed to manage the actions of multiple concurrent users on a database; their purpose is to prevent lost updates and dirty reads. The two types of locking are pessimistic locking and optimistic locking.

  • Pessimistic locking: A user who reads a record, with the intention of updating it, places an exclusive lock on the record to prevent other users from manipulating it. This means no one else can manipulate that record until the user releases the lock. The downside is that users can be locked out for a very long time, thereby slowing the overall system response and causing frustration.
    • Where to use pessimistic locking: this is mainly used in environments where data contention (the degree to which users request data from the database system at any one time) is heavy, and where the cost of protecting data through locks is less than the cost of rolling back transactions if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively long periods of time. It is not appropriate for use in Web application development.
  • Optimistic locking: this allows multiple concurrent users to access the database while the system keeps a copy of the initial read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial read held in memory to the database record to verify any changes made to the record. Any discrepancy between the initial read and the database record violates concurrency rules, and the system disregards the update request; an error message is generated and the user is asked to start the update process again. Optimistic locking improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates, since no users are locked out. However, some updates may fail; the downside is constant update failures when there are high volumes of update requests from multiple concurrent users, which can be frustrating for those users. (A minimal sketch of the version-check pattern follows after this list.)
    • Where to use optimistic locking: This is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications,[2] where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.
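
Below is a minimal sketch of the version-check pattern behind optimistic locking, using a hypothetical in-memory record with a version counter and a C11 compare-and-swap; a real database would perform the equivalent check inside the update statement itself.

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical record carrying a version counter for optimistic checks. */
struct record {
    atomic_long version;
    long        value;
};

/* Try to apply an update computed from an earlier snapshot. Returns false
   if another writer changed the record in the meantime, in which case the
   caller re-reads and retries (or reports the conflict to the user). */
bool optimistic_update(struct record *r, long snapshot_version, long new_value)
{
    long expected = snapshot_version;
    /* succeed only if the version still matches the initial read */
    if (atomic_compare_exchange_strong(&r->version, &expected,
                                       snapshot_version + 1)) {
        r->value = new_value;   /* simplification: the value write itself
                                   is not atomic with the version bump */
        return true;
    }
    return false;               /* concurrency conflict: update rejected */
}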

The problems with locks

Lock-based resource protection and thread/process synchronization have many disadvantages:

  • They cause blocking, which means some threads/processes have to wait until a lock (or a whole set of locks) is released.
  • Lock handling adds overhead for each access to a resource, even when the chance of collision is very rare. (Any chance of such a collision is, however, a race condition that must be handled.)
  • Locks can be vulnerable to failures and faults that are often very subtle and may be difficult to reproduce reliably. One example is the deadlock. If one thread holding a lock dies, stalls/blocks or goes into any sort of infinite loop, other threads waiting for the lock may wait forever.
  • Lock contention limits scalability and adds complexity.
  • Balances between lock overhead and contention can be unique to given problem domains (applications) as well as sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of any given application/implementation and may entail tremendous changes to update (re-balance).
  • Locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate (high-overhead) software support and perfect adherence by application programmers to rigorous conventions.
  • Priority inversion: high-priority threads/processes cannot proceed if a low-priority thread/process is holding the common lock.
  • Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault (see lock convoy).
  • Hard to debug: bugs associated with locks are time dependent and extremely hard to replicate.
  • There must be sufficient resources (exclusively dedicated memory, real or virtual) available for the locking mechanisms to maintain their state information in response to a varying number of simultaneous invocations. Without such resources the mechanisms will fail, or "crash", bringing down everything that depends on them and the operating region in which they reside. Failure is better than crashing: a proper locking mechanism ought to be able to return an "unable to obtain lock" status to the critical section in the application, which in turn ought to handle that situation gracefully. The logical design of an application must take these considerations into account from the very beginning.

Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can make software immune to the biggest problem: deadlocks. Other approaches avoid locks entirely, using non-blocking synchronization methods such as lock-free programming techniques and transactional memory. However, many of the above disadvantages have analogues with these alternative synchronization methods.
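
As a small example of non-blocking synchronization, here is a lock-free counter built on an atomic fetch-and-add (C11): no thread ever blocks another, so a stalled or preempted thread cannot prevent the rest from making progress.

#include <stdatomic.h>

static atomic_long counter = 0;

/* Lock-free increment: atomic_fetch_add never blocks; some thread
   always makes progress even if others are preempted mid-operation. */
void increment(void)
{
    atomic_fetch_add(&counter, 1);
}

long read_counter(void)
{
    return atomic_load(&counter);
}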

Language support

Language support for locking depends on the language used:

  • The ISO C standard has provided an optional threading API, including mutexes, since C11 (the <threads.h> header); the ISO C++ standard, C++11, likewise supports threading facilities. The OpenMP standard is supported by some compilers, and allows critical sections to be specified using pragmas. The POSIX pthread API provides lock support, but its use is not straightforward[3] (a minimal example appears after this list). Visual C++ allows adding the synchronize attribute in the code to mark methods that must be synchronized, but this is specific to "COM objects" in the Windows architecture and the Visual C++ compiler.[4] C and C++ can easily access any native operating-system locking features.
  • Java provides the keyword synchronized to put locks on blocks of code, methods or objects[5] and libraries featuring concurrency-safe data structures.
  • In the C# programming language, the lock keyword can be used to ensure that a thread has exclusive access to a certain resource.
  • VB.NET provides a SyncLock keyword for the same purpose as C#'s lock keyword.
  • Python does not provide a lock keyword, but it is possible to use a lower level mutex mechanism to acquire or release a lock.[6]
  • Ruby also doesn't provide a keyword for synchronization, but it is possible to use an explicit low level mutex object.[7]
  • In x86 Assembly, the LOCK prefix prevents another processor from doing anything in the middle of certain operations: it guarantees atomicity.
  • Objective-C provides the keyword "@synchronized"[8] to put locks on blocks of code and also provides the classes NSLock,[9] NSRecursiveLock,[10] and NSConditionLock[11] along with the NSLocking protocol[12] for locking as well.
  • Ada provides protected objects[13][14] and the rendezvous mechanism for synchronization.
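
As a minimal illustration of the POSIX pthread locking API mentioned above (error checking omitted for brevity):

#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static long shared_value = 0;

void add(long x)
{
    pthread_mutex_lock(&mutex);     /* block until the mutex is acquired */
    shared_value += x;              /* critical section */
    pthread_mutex_unlock(&mutex);
}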

References

  1. ^ Mössenböck, Hanspeter (2002-03-25). "Advanced C#: Variable Number of Parameters". Institut für Systemsoftware, Johannes Kepler Universität Linz, Fachbereich Informatik. p. ??. http://ssw.jku.at/Teaching/Lectures/CSharp/Tutorial/Part2.pdf. Retrieved 2011-08-08. 
  2. ^ "Designing Data Tier Components and Passing Data Through Tiers". Microsoft. August 2002. http://msdn.microsoft.com/en-us/library/ms978496.aspx. Retrieved 2008-05-30. 
  3. ^ Marshall, Dave (March 1999). "Mutual Exclusion Locks". http://www.cs.cf.ac.uk/Dave/C/node31.html#SECTION003110000000000000000. Retrieved 2008-05-30. 
  4. ^ "Synchronize". msdn.microsoft.com. http://msdn.microsoft.com/en-us/library/34d2s8k3(VS.80).aspx. Retrieved 2008-05-30. 
  5. ^ "Synchronization". Sun Microsystems. http://java.sun.com/docs/books/tutorial/essential/concurrency/sync.html. Retrieved 2008-05-30. 
  6. ^ Lundh, Fredrik (July 2007). "Thread Synchronization Mechanisms in Python". http://effbot.org/zone/thread-synchronization.htm. Retrieved 2008-05-30. 
  7. ^ "Programming Ruby: Threads and Processes". 2001. http://www.ruby-doc.org/docs/ProgrammingRuby/html/tut_threads.html. Retrieved 2008-05-30. 
  8. ^ "Apple Threading Reference". Apple, inc. http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/ObjectiveC/Articles/ocThreading.html. Retrieved 2009-10-17. 
  9. ^ "NSLock Reference". Apple, inc. http://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Classes/NSLock_Class/Reference/Reference.html. Retrieved 2009-10-17. 
  10. ^ "NSRecursiveLock Reference". Apple, inc. http://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Classes/NSRecursiveLock_Class/Reference/Reference.html. Retrieved 2009-10-17. 
  11. ^ "NSConditionLock Reference". Apple, inc. http://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Classes/NSConditionLock_Class/Reference/Reference.html. Retrieved 2009-10-17. 
  12. ^ "NSLocking Protocol Reference". Apple, inc. http://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Protocols/NSLocking_Protocol/Reference/Reference.html. Retrieved 2009-10-17. 
  13. ^ ISO/IEC 8652:2007. "Protected Units and Protected Objects". Ada 2005 Reference Manual. http://www.adaic.com/standards/1zrm/html/RM-9-4.html. Retrieved 2010-02-37. "A protected object provides coordinated access to shared data, through calls on its visible protected operations, which can be protected subprograms or protected entries." 
  14. ^ ISO/IEC 8652:2007. "Example of Tasking and Synchronization". Ada 2005 Reference Manual. http://www.adaic.com/standards/1zrm/html/RM-9-11.html. Retrieved 2010-02-37. 
