QCDOC
The QCDOC ("Quantum ChromoDynamics On a Chip") is a supercomputer technology focused on using relatively cheap, low-power processing elements to produce a massively parallel machine. As the name suggests, the machine is custom-built to solve small but extremely demanding problems in the field of quantum physics.

Overview
The computers were designed and built jointly by the University of Edinburgh (UKQCD), Columbia University, the RIKEN Brookhaven Research Center and IBM. The purpose of the collaboration was to provide computing facilities for lattice field theory calculations, whose primary aim is to increase the predictive power of the Standard Model of elementary particle interactions through numerical simulation of quantum chromodynamics (QCD). The target was to build a massively parallel supercomputer with a peak speed of 10 Tflops and sustained performance at 50% of that capacity.

Three QCDOCs are in service, each reaching 10 Tflops peak operation:
* University of Edinburgh's Parallel Computing Centre (EPCC), in operation by UKQCD since 2005
* RIKEN Brookhaven Research Center at Brookhaven National Laboratory
* U.S. Department of Energy Program in High Energy and Nuclear Physics at Brookhaven National Laboratory

Around 23 UK academic staff, their postdocs and students, from seven universities, belong to UKQCD. Costs were funded through a Joint Infrastructure Fund Award of £6.6 million. Staff costs (system support, physicist programmers and postdocs) are around £1 million per year; other computing and operating costs are around £0.2 million per year. [http://www.scitech.ac.uk/roadmap/rmProject.aspx?q=82]

QCDOC was built to replace an earlier design, QCDSP, in which the power came from connecting large numbers of DSPs together in a similar fashion. QCDSP linked 12,288 nodes in a 4D network and reached 1 Tflops in 1998.
QCDOC can be seen as a predecessor to the highly successful BlueGene/L supercomputer. The two machines share many design traits, and the similarities go beyond superficial characteristics: BlueGene is also a massively parallel supercomputer built from a large number of cheap, relatively weak PowerPC 440-based SoC nodes connected by a high-bandwidth multidimensional mesh. They differ, however, in that the computing nodes in BG/L are more powerful, and its network is faster and more sophisticated, allowing it to scale to several hundred thousand nodes per system.

Architecture
Computing node
The computing nodes are custom-built ASICs with 50 million transistors, using mainly existing building blocks from IBM. Each is built around a 500 MHz PowerPC 440 core with 4 MB of on-chip DRAM, a memory controller for external DDR SDRAM, system I/O for inter-node communications, and dual built-in Ethernet controllers. A computing node is capable of 1 double-precision Gflops. Each node has one DIMM socket capable of holding 128-2048 MB of 333 MHz ECC DDR SDRAM.
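The 1 Gflops per-node figure is consistent with the clock rate if one assumes the floating-point unit retires one fused multiply-add (two floating-point operations) per cycle; this back-of-the-envelope check is our arithmetic, not a quoted specification:

```latex
% Assumed: one fused multiply-add (2 flops) retired per cycle.
500\,\mathrm{MHz} \times 2\,\frac{\mathrm{flops}}{\mathrm{cycle}} = 1\,\mathrm{Gflops}
% At 1 Gflops per node, a 10 Tflops peak machine needs on the order of 10{,}000 nodes.
```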
Inter-node communication

Each node can send data to and receive data from each of its twelve nearest neighbors in a six-dimensional mesh at a rate of 500 Mbit/s per link, providing a total off-node bandwidth of 12 Gbit/s across the 24 channels. Each of these 24 channels has DMA access to the other nodes' on-chip DRAM or external SDRAM. In practice, only four dimensions are used to form a communications sub-torus; the remaining two dimensions are used to partition the system.
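To make the neighbor structure concrete, the following is a minimal C sketch of how the twelve nearest neighbors of a node can be enumerated on a six-dimensional torus with wrap-around in each dimension. The dimension extents and the row-major rank function are illustrative assumptions, not QCDOC's actual communications API.

```c
#include <stdio.h>

#define NDIM 6  /* six-dimensional mesh: 2 neighbors per dimension = 12 total */

/* Hypothetical machine extent in each dimension (illustrative values only). */
static const int extent[NDIM] = {4, 4, 4, 4, 2, 2};

/* Map 6D coordinates to a linear node rank (row-major order). */
static int rank_of(const int c[NDIM]) {
    int r = 0;
    for (int d = 0; d < NDIM; d++)
        r = r * extent[d] + c[d];
    return r;
}

int main(void) {
    int me[NDIM] = {1, 2, 3, 0, 1, 0};   /* this node's coordinates */

    /* Each node links to the +1 and -1 neighbor in every dimension. */
    for (int d = 0; d < NDIM; d++) {
        for (int dir = -1; dir <= 1; dir += 2) {
            int nb[NDIM];
            for (int k = 0; k < NDIM; k++) nb[k] = me[k];
            /* Torus wrap-around: coordinates are taken modulo the extent. */
            nb[d] = (me[d] + dir + extent[d]) % extent[d];
            printf("dim %d, dir %+d -> neighbor rank %d\n", d, dir, rank_of(nb));
        }
    }
    return 0;
}
```

Restricting the outer loop to the first four dimensions would give the 4D communications sub-torus described above, with the last two dimensions reserved for partitioning.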
The operating system communicates with the computing nodes using the Ethernet network. This is also used for diagnostics, configuration and communications with disk storage.
Mechanical design
Two nodes are placed together on a daughter card along with one DIMM socket and a 4:1 Ethernet hub for off-card communications. The daughter cards have two connectors, one carrying the inter-node communications network and one carrying power, Ethernet, clock and other housekeeping facilities.
Thirty-two daughter cards are placed in two rows on a motherboard that supports 800 Mbit/s off-board Ethernet communications. Eight motherboards are placed in crates with two backplanes supporting four motherboards each. Each crate contains 512 processor nodes and a 2⁶ hypercube communications network. One node consumes about 5 W of power, and each crate is air- and water-cooled. A complete system can consist of any number of crates, up to several tens of thousands of computing nodes.
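As a check on the packaging numbers, the per-crate node count follows directly from the card and board counts, and the crate power budget from the stated ~5 W per node (the ~2.6 kW figure is our arithmetic, not a quoted specification):

```latex
% Nodes per crate: 2 nodes/card x 32 cards/board x 8 boards/crate
2 \times 32 \times 8 = 512\ \text{nodes per crate}
% Crate power from ~5 W per node:
512 \times 5\,\mathrm{W} \approx 2.6\,\mathrm{kW}
```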
Operating system
The operating system, QOS, is custom-built to facilitate booting, runtime control, monitoring, diagnostics, performance, and ease of use of 10,000+ nodes. It uses a custom embedded kernel and provides single-process POSIX ("unix-like") compatibility using the Cygnus newlib library. The kernel includes a specially written UDP/IP stack and an NFS client for disk access.

The operating system also maintains system partitions, so that several users can have access to separate parts of the system for different applications. Each partition runs only one client application at any given time. Any multitasking is scheduled by the host controller system, a regular computer using a large number of Ethernet ports connected to the QCDOC.
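As an illustration of the single-process POSIX model that QOS exposes, a node application can be written as an ordinary C program; under QOS, file I/O like the following would travel over the kernel's UDP/IP stack and NFS client rather than to a local disk. The file path is purely hypothetical:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Plain POSIX file I/O; on QCDOC this would be served by the
       kernel's NFS client over Ethernet. The path is illustrative. */
    int fd = open("/nfs/results/node0.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    const char msg[] = "lattice sweep complete\n";
    if (write(fd, msg, strlen(msg)) != (ssize_t)strlen(msg))
        perror("write");
    close(fd);
    return 0;
}
```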
As a consequence of the UNIX-like operating system and the PowerPC-based hardware, relatively little effort is needed to design or port applications for the QCDOC.
References
* [http://phys.columbia.edu/~cqft/ Computational Quantum Field Theory at Columbia – Columbia University]
* [http://phys.columbia.edu/~cqft/qcdoc/qcdoc.htm QCDOC Architecture – Columbia University]
* [http://www.theregister.co.uk/2008/03/07/edinburgh_qcdoc_calculations/ UK supercomputer probes secrets of universe – The Register]
* [http://www.bnl.gov/lqcd/linkable_files/pdf/pap231.pdf QCDOC: A 10 Teraflops Computer for Tightly-coupled Calculations – Brookhaven National Laboratory]
* [http://www.research.ibm.com/journal/rd/492/boyle.html Overview of the QCDSP and QCDOC computers – IBM]
* [http://www.scitech.ac.uk/roadmap/rmProject.aspx?q=82 UKQCD – Science and Technology Facilities Council]

See also
* Norman Christ
* PowerPC 440
* BlueGene/L
* Power Architecture
* Supercomputer