Cache coherency

In computing, cache coherency (also cache coherence) refers to the consistency of data stored in local caches of a shared resource. Cache coherence is a special case of memory coherence.

When clients in a system maintain caches of a common memory resource, problems may arise with inconsistent data. This is particularly true of CPUs in a multiprocessing system. If one client has a copy of a memory block from an earlier read and another client then changes that block, the first client can be left with a stale cached copy without any notification of the change. Cache coherence is intended to manage such conflicts and maintain consistency between the caches and memory.

Definition

Coherence defines the behavior of reads and writes to the same memory location. Caches are coherent if the following conditions are met:
# A read by a processor P from a location X that follows a write by the same processor P to X, with no writes to X by another processor occurring between P's write and read, must always return the value written by P. This condition preserves program order and must hold even in uniprocessor architectures.
# A read by a processor P1 from location X that follows a write by another processor P2 to X must return the value written by P2 if no other writes to X by any processor occur between the two accesses. This condition defines a coherent view of memory: if a processor can still read the old value after P2's write, the memory is incoherent.
# Writes to the same location must be serialized. In other words, if location X receives two different values A and B, in that order, from any two processors, no processor may read X as B and then later read it as A. Every processor must see location X take the values A and B in that order.

These conditions are defined under the assumption that read and write operations take effect instantaneously. In real hardware this is not the case, because of memory latency and other aspects of the architecture: a write by one processor may not be seen by a read from another processor if the read occurs very shortly after the write. The memory consistency model defines when a written value must become visible to subsequent reads by other processors.
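The three conditions can be made concrete with a small trace checker. The sketch below is entirely illustrative (the function and the trace format are invented for this example, not part of any real protocol): it replays a recorded sequence of reads and writes to a single location X and flags traces that violate the conditions above, under the same simplifying assumption that operations take effect instantaneously.

```python
# Illustrative checker for the three coherence conditions on one location X.
# Trace entries have the form (processor, "read" | "write", value).

def check_coherence(trace):
    """Return a list of violations found in the trace."""
    violations = []
    last_write = None        # (processor, value) of the most recent write
    write_order = []         # global order of written values (condition 3)
    last_seen = {}           # per-processor index of the newest value read

    for i, (proc, op, value) in enumerate(trace):
        if op == "write":
            last_write = (proc, value)
            write_order.append(value)
        else:  # read
            # Conditions 1 and 2: assuming instantaneous propagation, a
            # read must return the value of the most recent write.
            if last_write is not None and value != last_write[1]:
                violations.append((i, proc, "stale read", value))
            # Condition 3: writes are serialized, so no processor may
            # observe written values "backwards" in the global order.
            if value in write_order:
                idx = write_order.index(value)
                if idx < last_seen.get(proc, -1):
                    violations.append((i, proc, "out-of-order read", value))
                last_seen[proc] = idx
    return violations

# P2 writes A then B; P1 must not observe B before A (condition 3).
ok  = [("P2", "write", "A"), ("P1", "read", "A"),
       ("P2", "write", "B"), ("P1", "read", "B")]
bad = [("P2", "write", "A"), ("P2", "write", "B"),
       ("P1", "read", "B"), ("P1", "read", "A")]

print(check_coherence(ok))   # [] -> coherent
print(check_coherence(bad))  # flags the read of "A" after "B"
```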

Cache coherence mechanisms

* Directory-based coherence mechanisms maintain a central directory of cached blocks.

* Snooping is the process where individual caches monitor address lines for accesses to memory locations that they have cached. When a write is observed to a location of which a cache holds a copy, the cache controller invalidates its own copy of the snooped memory location (a write-invalidate sketch follows this list).

* Snarfing is where a cache controller watches both the address and data lines so that it can update its own copy of a memory location when a second master modifies that location in main memory.
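The following minimal sketch illustrates write-invalidate snooping. The names (SnoopingCache, Bus, snoop_write, and so on) are invented for this example, and the shared bus is modeled as a Python object rather than real address and data lines; it is a sketch of the idea, not a hardware design.

```python
# Write-invalidate snooping, illustrative only. Every cache attaches to a
# shared bus and snoops all bus writes; on a write by another master to an
# address it has cached, it invalidates (drops) its own copy.

class SnoopingCache:
    def __init__(self, name, bus):
        self.name, self.lines = name, {}
        bus.attach(self)

    def read(self, bus, addr):
        if addr not in self.lines:                # miss: fetch from memory
            self.lines[addr] = bus.memory.get(addr)
        return self.lines[addr]

    def write(self, bus, addr, value):
        self.lines[addr] = value
        bus.broadcast_write(self, addr, value)    # every other cache snoops

    def snoop_write(self, addr):
        self.lines.pop(addr, None)                # invalidate stale copy

class Bus:
    def __init__(self):
        self.memory, self.caches = {}, []
    def attach(self, cache):
        self.caches.append(cache)
    def broadcast_write(self, writer, addr, value):
        self.memory[addr] = value                 # write-through for brevity
        for cache in self.caches:
            if cache is not writer:
                cache.snoop_write(addr)

bus = Bus()
top, bottom = SnoopingCache("top", bus), SnoopingCache("bottom", bus)
bus.memory[0x10] = "old"
print(top.read(bus, 0x10))      # "old" is now cached by top
bottom.write(bus, 0x10, "new")  # top snoops the write and invalidates
print(top.read(bus, 0x10))      # miss -> refetches "new" from memory
```

A snarfing controller would differ only in snoop_write: instead of discarding the line, it would store the snooped value into its own copy.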

Distributed shared memory systems mimic these mechanisms in an attempt to maintain consistency between blocks of memory in loosely coupled systems.

The two most commonly studied classes of coherence protocol are snooping and directory-based, each with its own benefits and drawbacks. Snooping protocols tend to be faster, provided enough bandwidth is available, since every transaction is a request/response seen by all processors. Their drawback is that snooping does not scale: every request must be broadcast to all nodes in the system, so as the system grows, the size of the (logical or physical) bus and the bandwidth it provides must grow with it. Directories, on the other hand, tend to have longer latencies (with a three-hop request/forward/respond pattern) but use much less bandwidth, since messages are point-to-point rather than broadcast. For this reason, many larger systems (more than 64 processors) use directory-based cache coherence.
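The bandwidth argument can be seen in a corresponding directory sketch, again with invented names and Python objects standing in for hardware: the directory records which caches share each block, so a write triggers point-to-point invalidations to just those sharers rather than a broadcast to every cache.

```python
# Directory-based coherence, illustrative only. A central directory tracks
# the sharers of each block and sends invalidations point-to-point.

class Directory:
    def __init__(self):
        self.memory = {}    # addr -> value
        self.sharers = {}   # addr -> set of caches holding a copy

    def read(self, cache, addr):
        self.sharers.setdefault(addr, set()).add(cache)
        return self.memory.get(addr)

    def write(self, cache, addr, value):
        self.memory[addr] = value
        # Only the recorded sharers are contacted -- no broadcast.
        for sharer in self.sharers.get(addr, set()) - {cache}:
            sharer.invalidate(addr)
        self.sharers[addr] = {cache}

class DirCache:
    def __init__(self, name, directory):
        self.name, self.directory, self.lines = name, directory, {}
    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.directory.read(self, addr)
        return self.lines[addr]
    def write(self, addr, value):
        self.lines[addr] = value
        self.directory.write(self, addr, value)
    def invalidate(self, addr):
        self.lines.pop(addr, None)

d = Directory()
p1, p2 = DirCache("P1", d), DirCache("P2", d)
d.memory[0x10] = "old"
p1.read(0x10)            # P1 is recorded as a sharer of 0x10
p2.write(0x10, "new")    # the directory invalidates only P1's copy
print(p1.read(0x10))     # miss -> refetches "new"
```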

Coherence models

Various models and protocols have been devised for maintaining cache coherence, such as:
* MSI protocol
* MESI protocol
* MOSI protocol
* MOESI protocol
* Write-once protocol
* Synapse protocol
* Berkeley protocol
* Illinois protocol
* Firefly protocol
* Dragon protocol

Choice of the consistency model is crucial to designing a cache coherent system. Coherence models differ in performance and scalability; each must be evaluated for every system design.

Furthermore, transitions between states in any specific implementation of these protocols may vary. For example, an implementation may choose different update and invalidation transitions such as update-on-read, update-on-write, invalidate-on-read, or invalidate-on-write. The choice of transition may affect the amount of inter-cache traffic, which in turn may affect the amount of cache bandwidth available for actual work. This should be taken into consideration in the design of distributed software that could cause strong contention between the caches of multiple processors.
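As a concrete example of such a state machine, the sketch below encodes the transitions of a simple MSI protocol (the first protocol in the list above) as a lookup table. This is one plausible invalidate-on-write formulation, not a definitive implementation; real protocols add bus transactions, write-backs, and race handling that are omitted here.

```python
# One MSI cache line's state machine, illustrative only. States:
# M(odified), S(hared), I(nvalid). Events are the local processor's reads
# and writes plus traffic snooped from other caches on the bus.

MSI = {
    # (state, event) -> next state
    ("I", "local_read"):  "S",   # fetch a shared copy
    ("I", "local_write"): "M",   # fetch and take exclusive ownership
    ("S", "local_read"):  "S",
    ("S", "local_write"): "M",   # upgrade: other sharers are invalidated
    ("S", "bus_write"):   "I",   # another cache wrote: invalidate
    ("M", "local_read"):  "M",
    ("M", "local_write"): "M",
    ("M", "bus_read"):    "S",   # another cache read: write back, demote
    ("M", "bus_write"):   "I",
}

def next_state(state, event):
    # Events with no entry (e.g. bus traffic seen while Invalid) leave
    # the line's state unchanged.
    return MSI.get((state, event), state)

state = "I"
for event in ["local_read", "bus_write", "local_write", "bus_read"]:
    state = next_state(state, event)
    print(event, "->", state)    # S, I, M, S
```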

Further reading

*Handy, Jim. "The Cache Memory Book". Academic Press, Inc., 1998. ISBN 0-12-322980-4

See also

*ccNUMA
*Write barrier

