Amdahl's law

Amdahl's law, also known as Amdahl's argument [Rodgers 85, p. 226], is named after computer architect Gene Amdahl and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup obtainable with multiple processors.

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular 1-hour portion cannot be parallelized, while the remaining 19 hours (95%) can be, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×.

Description

Amdahl's law is a model for the relationship between the expected speedup of parallelized implementations of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same when parallelized. For example, if for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily fast (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is 1/(1 - 0.12) = 1.136 times faster than the non-parallelized implementation.

More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion "P" of that computation where the improvement has a speedup of "S". (For example, if an improvement can speed up 30% of the computation, "P" will be 0.3; if the improvement makes the portion affected twice as fast, "S" will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be

:\frac{1}{(1 - P) + \frac{P}{S}}.

To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes, (which is 1 − "P"), plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the length of the improved part's former running time divided by the speedup, making the length of time of the improved part "P"/"S". The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.
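
The derivation above translates directly into a few lines of code. The following is a minimal sketch (the function name amdahl_speedup is ours, not a standard API) that evaluates the formula for the two examples given so far: the 12% portion made arbitrarily fast, and a 30% portion made twice as fast.

```python
def amdahl_speedup(p, s):
    """Overall speedup when a proportion p of a computation is sped up by a factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# 12% of the operations made arbitrarily fast (S -> infinity): at most ~1.136x.
print(amdahl_speedup(0.12, float("inf")))

# 30% of the computation made twice as fast (P = 0.3, S = 2): ~1.176x.
print(amdahl_speedup(0.30, 2.0))
```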

Here is another example. We are given a task that is split into four parts: P1 = 11%, P2 = 18%, P3 = 23%, and P4 = 48%, which add up to 100%. Suppose P1 is not sped up, so S1 = 1; P2 is sped up 5×, so S2 = 5; P3 is sped up 20×, so S3 = 20; and P4 is sped up 1.6×, so S4 = 1.6. Using the formula P1/S1 + P2/S2 + P3/S3 + P4/S4, the new running time is

:0.11/1 + 0.18/5 + 0.23/20 + 0.48/1.6 = 0.4575,

a little less than half the original running time of 1. The overall speedup is therefore

:1/(P1/S1 + P2/S2 + P3/S3 + P4/S4) = 1/0.4575 ≈ 2.186,

a little more than double the original speed. Notice how the 20× and 5× speedups have little effect on the overall result when over half of the task is sped up only 1× (i.e. not sped up) or 1.6×.
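
The same arithmetic can be checked with a short script; the part fractions and per-part speedups below are simply the numbers from the example above.

```python
# Four-part task from the example above.
parts    = [0.11, 0.18, 0.23, 0.48]   # fraction of the original running time per part
speedups = [1.0,  5.0,  20.0, 1.6]    # speedup factor applied to each part

new_time = sum(p / s for p, s in zip(parts, speedups))
print(new_time)        # 0.4575 of the original running time
print(1.0 / new_time)  # ~2.186 overall speedup
```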

Parallelization

In the case of parallelization, Amdahl's law states that if "P" is the proportion of a program that can be made parallel (i.e. benefit from parallelization), and "(1 − P)" is the proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by using "N" processors is

:\frac{1}{(1 - P) + \frac{P}{N}}.

In the limit, as "N" tends to infinity, the maximum speedup tends to "1 / (1-P)". In practice, performance/price falls rapidly as "N" is increased once there is even a small component of "(1 − P)".

As an example, if "P" is 90%, then "(1 − P)" is 10%, and the problem can be sped up by a maximum of a factor of 10, no matter how large the value of "N" used. For this reason, parallel computing is only useful for either small numbers of processors, or problems with very high values of "P": so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce "(1-P)" to the smallest possible value.
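
As a rough illustration of this behaviour (assuming P = 0.9, as in the example above), the following sketch tabulates the predicted speedup as the processor count grows; the values approach the 10× ceiling but never reach it.

```python
def parallel_speedup(p, n):
    """Amdahl's law speedup for a parallel fraction p executed on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 16, 64, 256, 1024):
    print(n, round(parallel_speedup(0.9, n), 2))
# 1 1.0, 2 1.82, 4 3.08, 8 4.71, 16 6.4, 64 8.77, 256 9.66, 1024 9.91
```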

Relation to law of diminishing returns

Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates the law of diminishing returns. If one picks optimally (in terms of the achieved speedup) what to improve, one sees monotonically decreasing improvements with each successive step. If, however, one picks non-optimally, then after improving a sub-optimal component and moving on to improve a more profitable component, the return can increase. Consider, for instance, two components A and B, where improving A yields the larger speedup: working on B and then on A gives an increasing return, whereas working on A and then on B gives a diminishing return. Thus, strictly speaking, only the optimal case can appropriately be said to demonstrate the law of diminishing returns. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or more time-consuming to develop than others.

Amdahl's law does represent the law of diminishing returns when one considers the return obtained by adding more processors to a machine, assuming a fixed-size computation that will use all available processors to their capacity. Each new processor added to the system contributes less usable power than the previous one. Each time the number of processors is doubled, the speedup ratio diminishes, as the total throughput heads toward the limit of

:\frac{1}{(1 - P)}.

This analysis neglects other potential bottlenecks such as memory bandwidth and I/O bandwidth, if they do not scale with the number of processors; however, taking such bottlenecks into account would tend to further demonstrate the diminishing returns of only adding processors.
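
The diminishing returns from doubling the processor count can be made concrete with a small sketch. Assuming P = 0.95 (so the ceiling is 1/(1 − P) = 20), the factor by which each doubling improves the overall speedup shrinks steadily toward 1.

```python
p, prev = 0.95, 1.0
for n in (2, 4, 8, 16, 32, 64, 128):
    s = 1.0 / ((1.0 - p) + p / n)   # Amdahl speedup at n processors
    print(f"N={n:4d}  speedup={s:6.2f}  x{s / prev:.3f} over N={n // 2}")
    prev = s
# The ratio column falls from ~1.9 toward 1.0 as the speedup nears the limit of 20.
```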

Speedup in a sequential program

The maximum speedup in an improved sequential program, where some part was sped up by a factor of p, is

:\text{Max. Speedup} \le \frac{p}{1 + f \cdot (p - 1)}

where f (0.0 < f < 1.0) is the fraction of time (before the improvement) spent in the part that was not improved. For example,

*If part B is made five times faster: p = 5.0, t_n = 3 seconds (the time of the unimproved part A), t_i = 1 second (the original time of part B), and
*:f = t_n / (t_n + t_i) = 0.75
*:\text{Max. Speedup} \le \frac{5}{1 + 0.75 \cdot (5 - 1)} = 1.25
*If part A is instead made to run twice as fast: p = 2.0, t_n = 1 second, t_i = 3 seconds, and
*:f = t_n / (t_n + t_i) = 0.25
*:\text{Max. Speedup} \le \frac{2}{1 + 0.25 \cdot (2 - 1)} = 1.60 (better!)

Therefore, making A twice as fast is better than making B five times faster.

*Improving part A by a factor of two will result in a +60% increase in overall program speed.
*However, improving part B by a factor of 5 (which presumably requires more effort) will only achieve an overall speedup of +25%.
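
Both cases can be checked with a short sketch of the sequential-speedup bound (the helper name max_sequential_speedup is ours):

```python
def max_sequential_speedup(p, f):
    """Upper bound on overall speedup when the improved part is made p times faster
    and f is the fraction of the original time spent in the part NOT improved."""
    return p / (1.0 + f * (p - 1.0))

print(max_sequential_speedup(5.0, 0.75))  # part B made 5x faster -> 1.25
print(max_sequential_speedup(2.0, 0.25))  # part A made 2x faster -> 1.6
```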

Limitations

According to Amdahl's law, the theoretical maximum speedup from using N processors is N, namely "linear speedup". However, it is not uncommon in practice to observe speedups greater than N on a machine with N processors, namely "super-linear speedup". One possible reason is the effect of cache aggregation. In parallel computers, not only does the number of processors change, but so does the combined size of the caches contributed by the different processors. With the larger accumulated cache size, more or even all of the data set can fit into caches, dramatically reducing memory access time and producing an additional speedup beyond that arising from pure computation.

Amdahl's law also does not take into account that problem sizes may be scaled with an increased number of processors, which typically reduces the relative amount of non-parallelizable work.

Amdahl's Rule of Thumb

Amdahl's Rule of Thumb is that 1 byte of memory and 1 byte per second of I/O are required for each instruction per second supported by a computer. This also goes by the title "Amdahl's Other Law."

See also

* Speedup
* Amdahl Corporation
* Ninety-ninety rule
* Gustafson's Law
* Karp-Flatt Metric
* Brooks's law
* Moore's Law

References

* Gene Amdahl, "[http://www-inst.eecs.berkeley.edu/~n252/paper/Amdahl.pdf Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities]", AFIPS Conference Proceedings (30), pp. 483–485, 1967. Note: Gene Amdahl has approved the use of his complete text in the Usenet comp.sys.super news group FAQ, which goes out on the 20th of each month.
* Rodgers, David P. (1985). "[http://portal.acm.org/citation.cfm?id=327215 Improvements in multiprocessor system design]". ACM SIGARCH Computer Architecture News 13 (3), pp. 225–231, June 1985. ISSN 0163-5964. Special issue: Proceedings of the 12th Annual International Symposium on Computer Architecture (ISCA '85), Boston, Massachusetts, United States.

External links

* [http://www.cbi.umn.edu/oh/display.phtml?id=59 Gene Amdahl. Oral history interview.] Charles Babbage Institute, University of Minnesota, Minneapolis.
* [http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html Reevaluating Amdahl's Law]
* [http://www.cis.temple.edu/~shi/docs/amdahl/amdahl.html Reevaluating Amdahl's Law and Gustafson's Law]
* [http://www.julianbrowne.com/article/viewer/amdahls-law A simple interactive Amdahl's Law calculator]
* [http://demonstrations.wolfram.com/AmdahlsLaw/ "Amdahl's Law"] by Joel F. Klein, The Wolfram Demonstrations Project, 2007.
* [http://www.cs.wisc.edu/multifacet/amdahl/ Amdahl's Law in the Multicore Era]
* [http://www.cilk.com/multicore-blog/bid/5365/What-the-is-Parallelism-Anyhow Blog Post: "What the $#@! is Parallelism, Anyhow?"]

