Page cache

In computing, a page cache, sometimes ambiguously called disk cache, is a "transparent" buffer of disk-backed pages kept in main memory (RAM) by the operating system for quicker access. The page cache is typically implemented in kernels with paging memory management and is completely transparent to applications. All memory that is not directly allocated to applications is usually used for the page cache. Compared to main memory, hard disk reads are slow and random accesses require expensive disk seeks, which is why RAM upgrades usually yield significant improvements in a computer's speed and responsiveness.[citation needed] Separate disk caching is provided on the hardware side by dedicated RAM or NVRAM chips located either in the disk controller (inside a hard disk drive, where it is properly called the disk buffer[citation needed]) or in a disk array controller. Such memory should not be confused with the page cache.

Memory conservation

Since non-dirty pages in the page cache have identical copies in secondary storage (e.g. a hard disk or flash drive), discarding and re-using their space is much quicker than paging out application memory, and is often preferred. Executable binaries, such as applications and libraries, are also typically accessed through the page cache and mapped into individual process address spaces using virtual memory (this is done through the mmap system call on Unix-like operating systems). This not only means that the binary files are shared between separate processes, but also that unused parts of binaries will eventually be pushed out of main memory, conserving memory.
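A minimal sketch of such a mapping, assuming a Unix-like system with mmap available (the file path is purely illustrative): the file is mapped read-only, so its pages are served from the page cache and can be shared with any other process mapping the same file.

/* Map a file read-only via the page cache and print it (illustrative path). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/etc/hostname";          /* hypothetical example file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return EXIT_FAILURE; }

    /* PROT_READ + MAP_PRIVATE: the mapping is backed by the page cache;
     * pages are read in on demand when they are first touched (page faults). */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    fwrite(data, 1, st.st_size, stdout);          /* touch the cached pages */

    munmap(data, st.st_size);
    close(fd);
    return EXIT_SUCCESS;
}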

Since cached pages can be easily evicted and re-used, some operating systems, notably Windows NT, even report the page cache usage as "free" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of page cache in Windows.

Page cache and disk writes

The page cache also aids in writing to a disk. Pages that have been modified in memory are marked "dirty" and have to be flushed to disk before they can be freed. When a file write occurs, the page backing the particular block is looked up. If it is already in the cache, the write is done to that page in memory. If it is not, and the write falls exactly on page-size boundaries, the page is not even read from disk, but is allocated and immediately marked dirty. Otherwise, the page(s) are fetched from disk and the requested modifications are applied. A file that has been created or written only in the page cache, with its dirty pages not yet flushed to disk, may appear as a zero-byte file at a later read, for example after a crash.
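A minimal sketch of this behaviour, assuming a Unix-like system (the filename is purely illustrative): write() only copies the data into dirty page-cache pages, and fsync() asks the kernel to flush those dirty pages to the disk.

/* Write through the page cache, then force the dirty pages to disk. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    const char msg[] = "hello, page cache\n";
    if (write(fd, msg, sizeof msg - 1) < 0) {     /* data lands in dirty pages */
        perror("write");
        return EXIT_FAILURE;
    }

    if (fsync(fd) < 0) {                          /* flush dirty pages to disk */
        perror("fsync");
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}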

However, not all cached pages can be written back to disk: program code is often mapped read-only or copy-on-write; in the copy-on-write case, modifications to code are visible only to the process that made them and are never written to disk.
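A minimal sketch of copy-on-write mapping semantics, again assuming a Unix-like system (the filename is purely illustrative): the first write to a page of a MAP_PRIVATE mapping gives the process its own copy of that page, so the change is never written back to the file.

/* Private (copy-on-write) mapping: writes do not reach the file on disk. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.txt", O_RDONLY);       /* hypothetical example file */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return EXIT_FAILURE; }

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    p[0] = 'X';   /* copy-on-write: only this process sees the modified page */

    munmap(p, st.st_size);
    close(fd);
    return EXIT_SUCCESS;
}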

History

The first commercially available page cache (disk cache) for microcomputers was MicroCache from Microcosm Ltd. This appeared in 1982, initially for the CP/M operating system and later for MS-DOS.

Microsoft added a disk cache, called SmartDrive, to MS-DOS version 4.01 in 1988.
