Supercomputer
A supercomputer is a computer that is at the frontline of processing capacity, particularly speed of calculation, at the time of its introduction. The term "Super Computing" was first used by the New York World newspaper in 1929 [Eames, Charles and Ray Eames, A Computer Perspective, Cambridge, Mass.: Harvard University Press, 1973, p. 95. Page 95 identifies the article as "Super Computing Machines Shown", New York World, March 1, 1920; however, the article shown on page 95 references the Statistical Bureau in Hamilton Hall, and an article at the Columbia Computing History web site states that the bureau did not exist until 1929. See [http://www.columbia.edu/acis/history/packard.html The Columbia Difference Tabulator - 1931]] to refer to large custom-built tabulators that IBM had made for Columbia University.

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he recognized only the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and HP, which purchased many of the 1980s companies to gain their experience.

The term "supercomputer" itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at lower prices to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Common uses
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.

A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires effectively unlimited computing resources.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time; often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.
Hardware and software design
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and to using hardware to address the remaining bottlenecks; the worked example below shows why even a small serial fraction matters.
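As a brief illustration, Amdahl's law bounds the speedup available from parallel hardware (the 95% figure below is an arbitrary illustration, not data about any particular machine):

$$ S(N) = \frac{1}{(1 - P) + \frac{P}{N}} $$

where $P$ is the fraction of the workload that can be parallelized and $N$ is the number of processors. If $P = 0.95$, then even as $N \to \infty$ the speedup approaches only $1/(1 - 0.95) = 20$, which is why designers work so hard to drive the serial fraction toward zero.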
Supercomputer challenges, technologies
*A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
*Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many metres across must have latencies between its components measured at least in the tens of nanoseconds (see the worked estimate after this list). Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
*Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly. Technologies developed for supercomputers include:
*Vector processing
*Liquid cooling
*Non-Uniform Memory Access (NUMA)
*Striped disks (the first instance of what was later called RAID)
*Parallel filesystems
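To make the speed-of-light constraint above concrete, here is a rough worked estimate (the 10 m span is an arbitrary illustration, not a measurement of any particular machine):

$$ t = \frac{d}{c} = \frac{10\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 33\ \text{ns} $$

A signal crossing a 10 m machine therefore needs at least about 33 ns one way, and signals in real cables propagate at only roughly two-thirds of $c$, which is why Cray kept cable runs short.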
Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The range of applications to which this power could be applied was limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: general-purpose computing on graphics processing units (GPGPU). A small SIMD sketch in C appears below.
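As a minimal illustration of the SIMD idea described above, the following C sketch adds two float arrays four elements at a time using x86 SSE intrinsics. The function name and the multiple-of-4 length restriction are hypothetical simplifications; an SSE-capable processor and compiler are assumed.

```c
#include <xmmintrin.h>  /* x86 SSE intrinsics (assumes an SSE-capable CPU) */

/* Hypothetical helper: add two float arrays element-wise.
   n is assumed to be a multiple of 4 to keep the sketch short. */
void add_vectors(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);  /* load 4 floats from a */
        __m128 vb = _mm_loadu_ps(&b[i]);  /* load 4 floats from b */
        __m128 vc = _mm_add_ps(va, vb);   /* 4 additions in one instruction */
        _mm_storeu_ps(&out[i], vc);       /* store 4 results */
    }
}
```

Each `_mm_add_ps` performs four floating-point additions at once, which is exactly the "same instruction on more than one set of data" behaviour the article describes.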
Operating systems
Supercomputer operating systems, today most often variants of Linux or UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). This stems from the fact that because these computers, often priced at millions of dollars, are sold to a very small market, their R&D budgets are often limited. (The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.)

Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as AMD and NVIDIA, who have been able to produce cheap, feature-rich, high-performance, and innovative products due to the vast number of consumers driving their R&D.

Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers of this era (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos and today's Linux). For this reason, the highest-performance systems in the future are likely to use a variant of UNIX or a UNIX-like operating system, but with incompatible system-unique features (especially for the highest-end systems at secure facilities).
Programming
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Because Fortran has relatively few features and a simple programmatic model, special-purpose compilers can often generate faster code than C or C++ compilers, so Fortran remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programming environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines are used; a minimal MPI sketch follows.
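As an illustration of the message-passing style mentioned above, this C sketch uses MPI to sum the integers 1..N across several processes. The value of N and the summing task are arbitrary examples; only standard MPI calls are used.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* Each process sums a strided slice of 1..N (N is an arbitrary example). */
    const long N = 1000000;
    long local = 0;
    for (long i = rank + 1; i <= N; i += size)
        local += i;

    /* Combine the partial sums on rank 0. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum 1..%ld = %ld\n", N, total);

    MPI_Finalize();
    return 0;
}
```

Built with an MPI compiler wrapper (e.g. mpicc) and launched with mpirun, each process computes its own partial sum and MPI_Reduce combines them on rank 0, the same divide-and-combine pattern used at far larger scale on clustered supercomputers.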
Software tools
Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open-source software solutions such as Beowulf, WareWulf and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy-to-use programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community, which often creates disruptive technology in this arena.
Modern supercomputer architecture
[Image: IBM Roadrunner - LANL]
As of November 2006, the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
*A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an operating system (OS).
*A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, where the application-level software is indifferent to the number of processors. The processors share tasks using symmetric multiprocessing (SMP) and non-uniform memory access (NUMA).
*A SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general-purpose commodity processor or a special-purpose vector processor. It could also be a high-performance or a low-power processor. As of 2007, such processors execute several SIMD instructions per nanosecond.

As of July 2008 the fastest machine is
IBM Roadrunner. This machine is a cluster of 3,240 computers, each with 40 processing cores. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.

Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a ten-year-old supercomputer, and the design concepts that allowed past supercomputers to out-perform contemporaneous desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars.

Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.
Special-purpose supercomputers
Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing better price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking.

Historically, a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular special set of problems.

Examples of special-purpose supercomputers:
*Belle, Deep Blue, and Hydra, for playing chess
*Reconfigurable computing machines or parts of machines
*GRAPE, for astrophysics and molecular dynamics
*Deep Crack, for breaking the DES cipher
*MDGRAPE-3, for protein structure computation

The fastest supercomputers today
Measuring supercomputer speed
The speed of a supercomputer is generally measured in "FLOPS" (FLoating point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced "teraflops"), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced "petaflops"). This measurement is based on a particular benchmark, which performs LU decomposition of a large matrix. This mimics a class of real-world problems but is significantly easier to compute than a majority of actual real-world problems. A common back-of-envelope formula for a machine's theoretical peak appears below.
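For orientation only — this is the standard rule of thumb for theoretical peak, not the LINPACK benchmark itself, and the numbers below are illustrative:

$$ R_{\text{peak}} = N_{\text{cores}} \times f_{\text{clock}} \times \text{FLOPs per cycle} $$

For example, 1,000 cores at 3 GHz performing 4 FLOPs per cycle give $1000 \times 3 \times 10^{9} \times 4 = 1.2 \times 10^{13}$ FLOPS, or 12 TFLOPS. Sustained benchmark results ("Rmax" on the Top500 list) are always lower than this peak.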
The Top500 list
Since 1993, the fastest supercomputers have been ranked on the Top500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
Current fastest supercomputer system
On June 8, 2008, the Cell/AMD Opteron-based IBM Roadrunner at the Los Alamos National Laboratory (LANL) was announced as the fastest operational supercomputer, with a sustained processing rate of 1.026 PFLOPS. ["Military supercomputer sets record", cnet.com, June 2008; http://news.cnet.com/Military-supercomputer-sets-record/2100-1010_3-6241145.html, accessed 2008-06-09] [New York Times, June 9, 2008] However, Roadrunner was then taken out of service to be shipped to its new home.
Quasi-supercomputing
Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme.

One such example is the BOINC platform, a host for a number of distributed computing projects. On August 26, 2008, BOINC recorded a processing power of over 1,080 TFLOPS through over 550,000 active computers on the network. ["BOINCstats: BOINC Combined", BOINC; http://www.boincstats.com/stats/project_graph.php?pr=bo, accessed 2008-08-26] The largest project, SETI@home, reported processing power of over 450 TFLOPS through almost 330,000 active computers. ["BOINCstats: SETI@Home", BOINC; http://www.boincstats.com/stats/project_graph.php?pr=sah, accessed 2008-08-26]

Another distributed computing project, Folding@home, reported over 3.3 PFLOPS of processing power in August 2008. A little over 1 PFLOPS of this processing power is contributed by clients running on PlayStation 3 systems, and another 1.8 PFLOPS is contributed by the newly released GPU2 client. ["Folding@home: OS Statistics", Stanford University; http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats]

GIMPS's distributed Mersenne prime search currently achieves 29 TFLOPS (as of May 2008).

Google's search engine system may be faster, with an estimated total processing power of between 126 and 316 TFLOPS. The New York Times estimates that the Googleplex and its server farms contain 450,000 servers. [Markoff, John and Saul Hansell, "Hiding in Plain Sight, Google Seeks More Power", The New York Times, June 14, 2006; http://www.nytimes.com/2006/06/14/technology/14search.html?pagewanted=2&ei=5090&en=d96a72b3c5f91c47&ex=1307937600, accessed 2008-03-16]

Research and development
On September 9, 2006, the U.S. Department of Energy's National Nuclear Security Administration (NNSA) selected IBM to design and build the world's first supercomputer to use the Cell Broadband Engine (Cell B.E.) processor, aiming to produce a machine capable of a sustained speed of up to 1,000 trillion (one quadrillion) calculations per second, or one PFLOPS. This supercomputer, called Roadrunner, was completed in May 2008; a test on 25 May 2008 reached a speed of 1.026 PFLOPS. Roadrunner was then moved to its new home at Los Alamos National Laboratory.

Another project in development by IBM is the Cyclops64 architecture, intended to create a "supercomputer on a chip".

Other PFLOPS projects include one by Dr. Narendra Karmarkar in India [Athley, Gouri Agtey and Rajeshwari Adappa, "Tatas get Karmakar to make super comp", The Economic Times, 30 October 2006; http://economictimes.indiatimes.com/articleshow/msid-225517,curpg-2.cms, accessed 2008-03-16], a CDAC effort targeted for 2010 ["C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010", http://www.flonnet.com/stories/20070518003711400.htm (dead link as of March 2008)], and the Blue Waters Petascale Computing System, funded by the NSF ($200 million), which is being built by the NCSA at the University of Illinois at Urbana-Champaign and slated to be completed by 2011. ["National Science Board Approves Funds for Petascale Computing Systems", U.S. National Science Foundation, August 10, 2007; http://www.nsf.gov/news/news_summ.jsp?cntn_id=109850, accessed 2008-03-16]

Timeline of supercomputers
This is a list of the record-holders for fastest general-purpose supercomputer in the world, and the year each one set the record. For entries prior to 1993, this list refers to various sources. From 1993 to the present, the list reflects the Top500 listing, and the "Peak speed" is given as the "Rmax" rating.

See also
Supercomputer companies / manufacturers
Supercomputer companies in operation
"These companies make supercomputer hardware and/or software, either as their sole activity, or as one of several activities".
*Advanced Micro Devices, Inc.
*Cray Inc.
*Dell
*Fujitsu
*Groupe Bull
*Hitachi
*HP
*IBM
*Microsoft
*nCUBE
*NEC Corporation
*NVIDIA Corporation
*Quadrics
*Sun Microsystems
*SGI
*Supercomputing Systems
Defunct supercomputer companies
"These companies have either folded, or no longer operate in the supercomputer market".
*Control Data Corporation (CDC)
*Convex Computer
*Kendall Square Research
*MasPar Computer Corporation
*Meiko Scientific
*Sequent Computer Systems
*Supercomputer Systems, Inc., Eau Claire, Wisconsin (S. Chen)
*Supercomputer Systems, Inc., San Diego, California
*Thinking Machines
General concepts and history
*Beowulf cluster
*Computational Science and Engineering
*Distributed computing
*Flash mob computer
*Grid computing
*High-performance computing (HPC)
*History of computing hardware
*Metacomputing
*MOSIX
*Parallel computing
*Symmetric multiprocessing
*Quantum computer
External links
Information resources
* [http://www.nytimes.com/2008/06/09/technology/09petaflops.html?ex=1213675200&en=487c5593296d8bab&ei=5070&emc=eta1 Military Supercomputer]
* [http://www.top500.org/ TOP500 Supercomputer list]
* [http://www.green500.org/ Green500 Supercomputer list by efficiency]
* [http://www.LinuxHPC.org LinuxHPC.org] Linux High Performance Computing and Clustering Portal
* [http://www.WinHPC.org WinHPC.org] Windows High Performance Computing and Clustering Portal
* [http://www.clusterresources.com Cluster Resources]
* [http://www.clusterbuilder.org Cluster Builder]
* [http://www.cdac.in Centre for Development of Advanced Computing (CDAC)]
* [http://www.microsoft.com/windowsserver2003/ccs/default.mspx Microsoft Windows Compute Cluster Server (CCS)]
* [http://www.perceus.org/portal/ Infiscale Cluster Portal - Free GPL HPC Resources ]
* [http://www.supercomputingonline.com Supercomputing Online] HPC, Networking & Storage Professionals
* [http://www.technetworld.info Degree project about alternatives for implementing an HPC cluster]
Supercomputing centers, organizations
"Organizations"
* [http://www.deisa.org/ DEISA] Distributed European Infrastructure for Supercomputing Applications, a facility integrating eleven European supercomputing centers.
* [http://www.naregi.org/index_e.html NAREGI] Japanese NAtional REsearch Grid Initiative involving several supercomputer centers
* [http://www.teragrid.org TeraGrid], a national facility integrating nine US supercomputing centers
"Centers"
* [http://www.arsc.edu ARSC] Arctic Region Supercomputing Center at the University of Alaska Fairbanks
* [http://www.bsc.es BSC] Barcelona Supercomputing Center - Spanish national supercomputing facility and R&D center
* [http://www.cesca.es/en CESCA] Supercomputing Centre of Catalonia - Centre de Supercomputacio de Catalunya
* [http://www.cesga.es CESGA] Galicia Supercomputing Center - Centro de Supercomputación de Galicia
* [http://www.cineca.it/en CINECA] CINECA Interuniversity Consortium, Italy
* [http://www.csar.cfs.ac.uk CSAR] UK national supercomputer service operated by [http://www.mc.manchester.ac.uk Manchester Computing]
* [http://www.epcc.ed.ac.uk/ EPCC] Edinburgh Parallel Computing Centre, based at the University of Edinburgh.
* [http://www.gsic.titech.ac.jp/ GSIC] Global Scientific Information and Computing Center at the [http://www.titech.ac.jp/ Tokyo Institute of Technology]
*HECToR UK national supercomputer service provided by a consortium of EPCC, Cray and the Numerical Algorithms Group (NAG)
* [http://www.hpcx.ac.uk HPCx] UK national supercomputer service operated by EPCC and Daresbury Lab
* [http://www.irb.hr/en/ IRB]
* [http://www.msi.umn.edu Minnesota Supercomputer Institute] (formerly Minnesota Supercomputer Center) operated by the University of Minnesota
* [http://www.nas.nasa.gov NASA Advanced Supercomputing facility]
* [http://www.uybhm.itu.edu.tr/eng/index.html National Center for High Performance Computing], operated by Istanbul Technical University
* [http://www.ucar.edu National Center for Atmospheric Research (NCAR)]
* [http://www.ncsa.uiuc.edu National Center for Supercomputing Applications (NCSA)]
* [http://www.nersc.gov National Energy Research Scientific Computing Center (NERSC)]
* [http://www.osc.edu Ohio Supercomputer Center (OSC)]
* [http://www.psc.edu/ Pittsburgh Supercomputing Center] operated by the University of Pittsburgh and Carnegie Mellon University.
*Research Computing Services ( [http://www.mc.manchester.ac.uk/researchcomputing web site]) at the University of Manchester.
* [http://www.sdsc.edu San Diego Supercomputer Center (SDSC)]
* [http://www.sara.nl/userinfo/lisa/usage/batch/index.html SARA] (Stichting Academisch Rekencentrum Amsterdam), Amsterdam, The Netherlands
* [http://www.tcf.vt.edu/systemX.html System X] at Virginia Tech
* [http://www.tacc.utexas.edu Texas Advanced Computing Center (TACC)]
* [http://www.westgrid.ca/support/topics/scheduling.php WestGrid]
* [http://www.tchpc.tcd.ie/ TCHPC] Trinity Centre for High Performance Computing, based at the University of Dublin.
* [http://www.dcsc.ku.dk/ DCSC] Danish Centre for Scientific Computing, based at the University of Copenhagen.
* [http://www.man.poznan.pl PSNC] (Poznan Supercomputing and Networking Center), Poznan, Poland
* [http://www.nsc.liu.se/ NSC] National Supercomputer Centre in Sweden at Linköping University
Specific machines, general-purpose
* [http://lwn.net/Articles/4759/ Linux NetworX press release: Linux NetworX to build "largest" Linux supercomputer]
* [http://www.llnl.gov/asci/news/white_news.html ASCI White press release]
* [http://www.hoise.com/primeur/02/articles/weekly/AE-PR-05-02-59.html Article about Japanese "Earth Simulator" computer]
* [http://www.es.jamstec.go.jp/esc/eng/ "Earth Simulator" website (in English)]
* [http://www.nec.com.sg/necsin/hpcs.htm NEC high-performance computing information]
* [http://www.hq.nasa.gov/hpcc/insights/vol6/supercom.htm Superconducting Supercomputer]
* [http://www.ncsa.uiuc.edu/BlueWaters/ Blue Waters Petascale Computing System]
Specific machines, special-purpose
* [http://grape.c.u-tokyo.ac.jp/gp/paper/hardpaper.html Papers on the GRAPE special-purpose computer]
* [http://big.gsc.riken.go.jp/SPCtext.htm More special-purpose supercomputer information]
* [http://chimera.roma1.infn.it/apehdoc/apemille/INFN_APEmille.html Information about the APEmille special-purpose computer]
* [http://apegate.roma1.infn.it Information about the apeNEXT special-purpose computer]
* [http://phys.columbia.edu/~cqft/ Information about the QCDOC project, machines]