Computer storage density

Computer storage density is a measure of the quantity of information bits that can be stored on a given length of track, area of surface, or in a given volume of a computer storage medium. Generally, higher density is more desirable, for it allows greater volumes of data to be stored in the same physical space. Density therefore has a direct relationship to the storage capacity of a given medium. Density also generally has a fairly direct effect on both the performance and the price of a particular medium.

Examples

Hard drives store data in the magnetic polarization of small patches of the surface coating on a (normally) metal disk. The maximum areal density is defined by the size of the magnetic particles in the surface, as well as the size of the "head" used to read and write the data. The areal density of disk storage devices has increased dramatically since IBM introduced the RAMAC, the first hard disk drive, in 1956. The RAMAC had an areal density of 2,000 bit/in². Commercial hard drives in 2005 typically offer densities between 100 and 150 Gbit/in², an increase of about 75 million times over the RAMAC. In 2005 Toshiba introduced a hard drive using perpendicular recording, which features a density of 179 Gbit/in² [http://www.toshiba-europe.com/storage/products/Documents/pressrelease/PR_MK2035GSS_English.pdf Toshiba press release announcing their perpendicular recording drives]. Toshiba's experimental systems have demonstrated 277 Gbit/in², and more recently Seagate Technology has demonstrated a drive with a density of 421 Gbit/in² [http://arstechnica.com/news.ars/post/20060918-7765.html Seagate hits new heights in disk platter density]. Perpendicular recording technology is expected to scale to a maximum of about 1 Tbit/in².
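The growth factor quoted above follows directly from the ratio of areal densities; a quick sketch, using only the figures given in the text:

```python
# Areal densities quoted above, in bits per square inch.
ramac_density = 2_000        # IBM RAMAC, 1956
drive_2005_density = 150e9   # typical commercial drive, 2005 (150 Gbit/in²)

# Improvement factor since the RAMAC.
factor = drive_2005_density / ramac_density
print(f"Improvement: {factor:,.0f}x")  # → Improvement: 75,000,000x
```

The same ratio against the 100 Gbit/in² low end of the 2005 range gives 50 million, which is why the improvement is stated only approximately.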

Compact Discs (CDs), another common storage medium of the early 2000s, store data as small pits in a plastic surface that is then covered with a thin layer of reflective metal. The standard defines pits that are 0.83 micrometers long and 0.5 micrometers wide, arranged in tracks spaced 1.6 micrometers apart, offering a density of about 0.90 Gbit/in². DVDs are essentially a product-improved CD, using more of the disc surface, smaller pits (0.4 micrometers), and tighter tracks (0.74 micrometers), offering a density of about 2.2 Gbit/in². Further improvements in HD DVD and Blu-ray offer densities around 7.5 Gbit/in² and 12.5 Gbit/in², respectively (for single-layer discs in both cases) [http://www.infotechresearch.com/2005/02/roadmap-for-tracking-optical-disc_09.html Road map for Tracking Optical Disc Technology]. When CDs were first introduced they had considerably higher densities (and overall capacity) than the hard drives of the time; however, hard drives have improved much more quickly than optical media, and by the time the latest blue-laser systems become widely available in 2007, the average hard drive will store somewhere between 500 and 750 GB with densities between 150 and 250 Gbit/in².

A number of technologies are attempting to surpass the densities of all of these media. IBM is attempting to commercialize its Millipede memory at 1 Tbit/in² in 2007 (800 Gbit/in² was demonstrated in 2005). This is about the same density at which perpendicular hard drives are expected to "top out", and Millipede technology has so far been losing the density race with hard drives; development since mid-2006 appears to be moribund. A newer IBM technology, racetrack memory, uses many nanoscopic wires arranged in 3D, each holding several bits, to improve density. Although exact numbers have not been mentioned, IBM news articles talk of "100 times" increases. Various holographic storage technologies are also attempting to leapfrog existing systems, but they too have been losing the race, and are estimated to offer 1 Tbit/in² as well, with about 250 Gbit/in² being the best demonstrated to date.

Effects on Performance

Increasing storage density of a medium almost always improves the transfer speed at which that medium can operate. This is most obvious when considering various disk-based media, where the storage elements are spread over the surface of the disk and must be physically rotated under the "head" in order to be read or written. Higher density means more data moves under the head for any given mechanical movement.

Considering the floppy disk as a basic example, we can calculate the effective transfer speed by determining how fast the bits move under the head. A standard 3½" floppy disk spins at 300 rpm, and its innermost track is about 66 mm long (10.5 mm radius). At 300 rpm the linear speed of the media under the head is thus about 66 mm × 300 rpm = 19,800 mm/minute, or 330 mm/s. Along that track the bits are stored at a density of 686 bit/mm, which means that the head sees 686 bit/mm × 330 mm/s = 226,380 bit/s, or about 226 kbit/s.
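The calculation above can be sketched directly, deriving the track length from the radius rather than taking it as given:

```python
import math

# Floppy-disk geometry and recording parameters from the text above.
rpm = 300                  # rotation speed of a 3.5-inch floppy
inner_radius_mm = 10.5     # radius of the innermost track
density_bit_per_mm = 686   # linear recording density on that track

track_length_mm = 2 * math.pi * inner_radius_mm    # ≈ 66 mm
linear_speed_mm_s = track_length_mm * rpm / 60     # ≈ 330 mm/s
bit_rate = density_bit_per_mm * linear_speed_mm_s  # ≈ 226,000 bit/s

print(f"track length: {track_length_mm:.0f} mm")   # → track length: 66 mm
print(f"bit rate: {bit_rate / 1000:.0f} kbit/s")   # → bit rate: 226 kbit/s
```

Using the exact circumference rather than the rounded 66 mm gives 226,290 bit/s instead of 226,380 bit/s; the small discrepancy is purely rounding.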

Now consider an improvement to the design that doubles the density of the bits by halving their length along the track while keeping the same track spacing. This would immediately double the transfer speed, because the bits would pass under the head twice as fast. Early floppy disk interfaces were designed with 250 kbit/s transfer speeds in mind, and were already being outperformed with the introduction of the "high density" 1.44 MB (1,440 KiB) floppies in the 1980s. The vast majority of PCs therefore included interfaces designed for high-density drives that ran at 500 kbit/s instead. These too were completely overwhelmed by newer devices like the LS-120, which were forced to use higher-speed interfaces such as IDE.
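The proportionality is easy to check numerically, reusing the floppy figures from above: with the rotation speed held constant, the data rate scales linearly with the linear bit density.

```python
# Doubling the linear bit density doubles the data rate under the head,
# holding rotation speed (and hence media speed) constant.
linear_speed_mm_s = 330  # media speed under the head, from the floppy example
base_density = 686       # bit/mm, standard density

for density in (base_density, 2 * base_density):
    rate_kbit = density * linear_speed_mm_s / 1000
    print(f"{density} bit/mm -> {rate_kbit:.0f} kbit/s")
# → 686 bit/mm -> 226 kbit/s
# → 1372 bit/mm -> 453 kbit/s
```

The doubled figure (about 453 kbit/s) shows why the original 250 kbit/s floppy interfaces had to be replaced by 500 kbit/s designs for high-density media.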

Although the effect on performance is most obvious on rotating media, similar effects come into play even for solid-state media like Flash RAM or DRAM. In this case the performance is generally defined by the time it takes for the electrical signals to travel through the computer bus to the chips, and then through the chips to the individual "cells" used to store data (each cell holds one bit).

One defining electrical property is the resistance of the wires inside the chips. As the cell size decreases, through the improvements in semiconductor fabrication that drive Moore's Law, the resistance is reduced and less power is needed to operate the cells. This, in turn, means that less electrical current is needed for operation, and thus less time is needed to send the required amount of electrical charge into the system. In DRAM in particular, the amount of charge that needs to be stored in a cell's capacitor also directly affects this time.

As fabrication has improved, solid-state memory has improved dramatically in terms of performance. Modern DRAM chips have operational speeds on the order of 10 ns or less. A less obvious effect is that as density improves, the number of DIMMs needed to supply any particular amount of memory decreases, which in turn means fewer DIMMs overall in any particular computer. This often leads to improved performance as well, as there is less bus traffic. However, this effect is generally not linear.

Effects on Price

Storage density also has a strong effect on the price of memory, although in this case the reasons are not so obvious.

In the case of disk-based media, the primary cost is the moving parts inside the drive. This sets a fixed lower limit, which is why most modern hard drives bottom out around $100 US retail, and have for many years now. That said, the price of high-end drives has fallen rapidly, and this is indeed an effect of density. At a given density, the only way to make a higher-capacity drive is to use more platters, essentially individual hard disks within the case. As the density increases, the number of platters needed to supply any given amount of storage falls, leading to lower costs due to the reduction of mechanical parts inside. So while a low-end drive still costs about $100 (though its capacity grows rapidly), the price of a large drive is falling rapidly as it becomes mechanically simpler. For this reason, price per gigabyte is a more useful measure for comparing hard drives than list price alone.

The fact that overall price has remained fairly steady has led to the common measure of the price/performance ratio in terms of cost per bit. In these terms the increase in density of hard drives becomes much more obvious. IBM's RAMAC from 1956 supplied 5 MB for $50,000, or $10,000 per megabyte. In 1989 a typical 40 MB hard drive from Western Digital retailed for $1,199.00, or about $30/MB. Drives broke the $1/MB barrier in 1994, and in early 2000 were about 2¢/MB. By 2004 the 250 GB Western Digital Caviar SE listed for $249.99, approaching $1/GB, an improvement of about 30 thousand times since 1989, and 10 million times since the RAMAC [http://www.alts.net/ns1625/winchest.html Cost of Hard Drive Storage Space]. This is all without adjusting for inflation, which adds another factor of about seven times since 1956.
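The cost-per-megabyte figures above follow from simple division of list price by capacity; a sketch using the prices quoted in the text:

```python
# Price-per-megabyte figures derived from the drives quoted above.
usd_per_mb = {
    1956: 50_000 / 5,             # RAMAC: $50,000 for 5 MB
    1989: 1_199 / 40,             # 40 MB Western Digital drive at $1,199
    2004: 249.99 / (250 * 1000),  # 250 GB Caviar SE at $249.99
}
for year, price in usd_per_mb.items():
    print(f"{year}: ${price:,.4f}/MB")

improvement = usd_per_mb[1956] / usd_per_mb[2004]
print(f"improvement since 1956: about {improvement / 1e6:.0f} million times")
```

Note the 250 GB capacity is treated here as 250,000 MB (decimal units, as drive makers quote them); the 1956-to-2004 ratio comes out at almost exactly ten million, matching the figure in the text.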

Solid-state storage has seen similar dramatic reductions in cost per bit. In this case the primary determinant of cost is "yield", the fraction of chips produced that actually work. Chips are produced in batches printed on the surface of a single large silicon wafer, which is then cut up, with non-working examples discarded. To improve yield, modern fabrication has moved to ever-larger wafers and made great improvements in the quality of the production environment. Other factors include packaging the individual chips cut from the wafer, which puts a lower limit on this process of about $1 per completed chip [http://www.iiasa.ac.at/Research/TNT/WEB/Research/Understanding_the_dynamics_of_/DRAM_3/dram_3.html DRAM prices].

Given this, it becomes more obvious why density has such an effect on cost per bit here as well. A memory chip that stores a given amount of memory but is half the physical size means that twice as many units can be produced on the same wafer, essentially halving the price of each one. DRAM was first introduced commercially in 1971, as a 1 kbit part that cost about $50 in large quantities, or about 5 cents per bit. 64 Mbit parts were common in 1999, at a cost of about 0.00002 cents per bit (20 microcents/bit).
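The two cost-per-bit figures can be checked with a short calculation. The 1999 part price of $13.42 below is not stated in the text; it is back-computed from the quoted 20 microcents/bit, and is an assumption for illustration only.

```python
# Cost per bit for the two DRAM generations mentioned above.
# The 1999 price ($13.42) is an assumption back-computed from the
# quoted 20 microcents/bit figure, not a documented list price.
def cents_per_bit(price_usd, bits):
    return price_usd * 100 / bits

print(f"1971: {cents_per_bit(50, 1024):.2f} cents/bit")          # ≈ 4.88
print(f"1999: {cents_per_bit(13.42, 64 * 2**20):.8f} cents/bit") # ≈ 0.00002
```

The 1971 figure of roughly 4.9 cents/bit rounds to the "about 5 cents per bit" in the text, an improvement of about 250,000 times in 28 years.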

See also

* Bit cell — the length, area or volume required to store a single bit
* Patterned media

Wikimedia Foundation. 2010.
