Embarrassingly parallel

In the jargon of parallel computing, an embarrassingly parallel workload (or embarrassingly parallel problem) is one for which little or no effort is needed to split the problem into a very large number of parallel tasks, and there is no essential dependency (or communication) between those tasks. (Ian Foster, Designing and Building Parallel Programs, Addison-Wesley, 1995, ISBN 9780201575941, Section 1.4.4; http://www-unix.mcs.anl.gov/dbpp/text/node10.html)

A very common instance of an embarrassingly parallel problem arises in graphics processing units (GPUs), for tasks such as 3D projection, since each pixel on the screen can be rendered independently of every other pixel.
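The pixel-independence property described above can be sketched in Python with a process pool; the `shade` function here is a hypothetical placeholder for real per-pixel work, not an actual rendering algorithm:

```python
from multiprocessing import Pool

def shade(pixel):
    # Hypothetical per-pixel computation: the result depends only on
    # this pixel's own coordinates, never on another pixel's result.
    x, y = pixel
    return (x * 31 + y * 17) % 256  # placeholder brightness value

if __name__ == "__main__":
    width, height = 64, 48
    pixels = [(x, y) for y in range(height) for x in range(width)]
    # Because no task communicates with any other, the work splits
    # trivially: any pixel can be handed to any worker in any order.
    with Pool() as pool:
        image = pool.map(shade, pixels)
    print(len(image))  # one brightness value per pixel
```

Because the tasks share no state, the same code scales from one worker to many with no change in the result.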

Embarrassingly parallel problems are ideally suited to distributed computing over the Internet (e.g. SETI@home), and are also easy to run on server farms that lack the specialized interconnect infrastructure of a true supercomputer cluster.

Embarrassingly parallel problems lie at one end of the spectrum of parallelization, the degree to which a computational problem can be readily divided amongst processors. At the other end of the spectrum are "disconcertingly serial" workloads.

Examples

Some examples of embarrassingly parallel problems include:
* The Mandelbrot set and other fractal calculations, where each point can be calculated independently.
* Distributed rendering of non-real-time computer graphics. In ray tracing, each pixel may be rendered independently. In computer animation, each frame may be rendered independently (see parallel rendering).
* Brute-force searches in cryptography. A notable real-world example is distributed.net.
* BLAST searches in bioinformatics.
* Computer simulations comparing many independent scenarios, such as climate models.
* Genetic algorithms and other evolutionary computation metaheuristics.
* Ensemble calculations in numerical weather prediction.
* Event simulation and reconstruction in particle physics.
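The Mandelbrot example from the list above can be sketched in Python: each point's escape count depends only on that point, so entire rows of the image can be farmed out to a process pool with no inter-task communication (the resolution, iteration limit, and coordinate window below are arbitrary illustrative choices):

```python
from multiprocessing import Pool

def escape_count(c, max_iter=50):
    # Iterate z -> z*z + c; the count depends only on this one point c.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def render_row(y, width=60, height=40):
    # One row of the image; computing it requires no data from other rows.
    return [escape_count(complex(-2.0 + 3.0 * x / width,
                                 -1.2 + 2.4 * y / height))
            for x in range(width)]

if __name__ == "__main__":
    with Pool() as pool:
        # Each row is an independent task: a textbook embarrassingly
        # parallel decomposition.
        rows = pool.map(render_row, range(40))
    print(len(rows), len(rows[0]))
```

The decomposition could just as easily be per pixel or per tile; because the tasks are independent, the granularity is purely a performance tuning choice.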

See also

* Amdahl's law



Wikimedia Foundation. 2010.
