Karp-Flatt metric

The Karp-Flatt metric is a measure of the parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.

Description

Given a parallel computation exhibiting speedup \psi on p processors, where p > 1, the experimentally determined serial fraction e is defined to be the Karp-Flatt metric, viz.:

:e = \frac{\frac{1}{\psi} - \frac{1}{p}}{1 - \frac{1}{p}}

The smaller the value of e, the better the parallelization.
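The metric is straightforward to compute from a measured speedup. The following Python function is a minimal sketch; the function name and the sample values are illustrative, not from Karp and Flatt's paper:

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e for a measured
    speedup on p > 1 processors (the Karp-Flatt metric)."""
    if p <= 1:
        raise ValueError("p must be greater than 1")
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Hypothetical measurement: a speedup of 6.0 on 8 processors
print(karp_flatt(6.0, 8))  # (1/6 - 1/8) / (1 - 1/8) ~= 0.0476
```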

Justification

There are many ways to measure the performance of a parallel algorithm running on a parallel processor. The Karp-Flatt metric reveals aspects of the performance that are not easily discerned from other metrics. A pseudo-"derivation" of sorts follows from Amdahl's law, which can be written as:

:T(p) = T_s + \frac{T_p}{p}

Where:
*T(p) is the total time taken for code execution in a p-processor system
*T_s is the time taken for the serial part of the code to run
*T_p is the time taken for the parallel part of the code to run on one processor
*p is the number of processors

with the obvious result obtained by substituting p = 1, viz. T(1) = T_s + T_p. If we define the serial fraction e = \frac{T_s}{T(1)}, then the equation can be rewritten as

:T(p) = T(1) e + \frac{T(1) (1-e)}{p}

In terms of the speedup \psi = \frac{T(1)}{T(p)}:

:\frac{1}{\psi} = e + \frac{1-e}{p}

Solving for the serial fraction, we get the Karp-Flatt metric as above. Note that this is not a "derivation" from Amdahl's law, as the left-hand side represents a metric rather than a mathematically derived quantity. The treatment above merely shows that the Karp-Flatt metric is consistent with Amdahl's law.
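This consistency can be checked numerically. The sketch below, using hypothetical values, assumes a serial fraction, computes the speedup predicted by Amdahl's law, and then recovers that same fraction via the Karp-Flatt metric:

```python
def amdahl_time(t1, e, p):
    """Amdahl's model: serial part e*T(1) plus the parallel part
    (1 - e)*T(1) divided across p processors."""
    return t1 * e + t1 * (1.0 - e) / p

def karp_flatt(speedup, p):
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

t1, e_true, p = 100.0, 0.05, 16        # hypothetical values
psi = t1 / amdahl_time(t1, e_true, p)  # speedup predicted by Amdahl's law
print(karp_flatt(psi, p))              # recovers e_true = 0.05
```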

Use

While the serial fraction is mentioned frequently in the computer science literature, it was rarely used as a diagnostic tool the way speedup and efficiency are; Karp and Flatt hoped to correct this by proposing this metric. The metric addresses the inadequacies of the other laws and quantities used to measure the parallelization of computer code. In particular, Amdahl's law does not take load balancing into account, nor does it consider overhead. Using the serial fraction as a metric has definite advantages over the others, particularly as the number of processors grows.

For a problem of fixed size, the efficiency of a parallel computation typically decreases as the number of processors increases. By using the serial fraction obtained experimentally with the Karp-Flatt metric, we can determine whether the efficiency decrease is due to limited opportunities for parallelism or to growth in algorithmic or architectural overhead.
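For instance, one might tabulate e across increasing processor counts. In the hypothetical Python sketch below, the measured speedups are invented for illustration; a roughly constant e points to an inherently serial fraction of the code, while an e that grows with p points to rising overhead such as communication or load imbalance:

```python
def karp_flatt(speedup, p):
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Hypothetical measurements: (processor count, observed speedup)
measurements = [(2, 1.87), (4, 3.29), (8, 5.02), (16, 6.31)]

for p, psi in measurements:
    print(p, round(karp_flatt(psi, p), 3))
# e rises with p here (~0.070, 0.072, 0.085, 0.102), suggesting that
# growing overhead, not just serial code, is limiting the speedup.
```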

References

* Quinn, Michael J. Parallel Programming in C with MPI and OpenMP. McGraw-Hill, 2004. ISBN 0-07-058201-7.
* Karp, Alan H. and Flatt, Horace P. "Measuring Parallel Processor Performance". Communications of the ACM, Volume 33, Number 5, May 1990.

External links

* [http://courses.cs.vt.edu/~cs4234/F03/notes/ch7/1003.html Lecture Notes on Karp-Flatt metric] - Virginia Tech

