Loop scheduling

In parallel computing, loop scheduling is the problem of assigning the iterations of a parallelizable loop to n processors so as to achieve load balancing and preserve data locality with minimum dispatch overhead.
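As a concrete picture of what "assigning iterations" means, the following is a minimal C sketch (the names N, NTHREADS, worker and the loop body are hypothetical) in which an iteration space of N independent iterations is split into one contiguous chunk per thread:

    /* Minimal sketch: N iterations divided evenly among NTHREADS workers,
       each worker executing one contiguous chunk of the iteration space. */
    #include <pthread.h>
    #include <stdio.h>

    #define N        1000
    #define NTHREADS 4

    static double result[N];

    static void *worker(void *arg)
    {
        long id    = (long)arg;
        long chunk = (N + NTHREADS - 1) / NTHREADS;   /* ceil(N / NTHREADS) */
        long lo    = id * chunk;
        long hi    = (lo + chunk < N) ? lo + chunk : N;

        for (long i = lo; i < hi; i++)
            result[i] = (double)i * i;                /* iterations are independent */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (long id = 0; id < NTHREADS; id++)
            pthread_create(&t[id], NULL, worker, (void *)id);
        for (long id = 0; id < NTHREADS; id++)
            pthread_join(t[id], NULL);
        printf("result[%d] = %.0f\n", N - 1, result[N - 1]);
        return 0;
    }

This fixed, even split is the simplest policy; the methods below differ in how chunks are sized and when they are handed out.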

Typical loop scheduling methods (see the OpenMP sketch after this list) are:
* static even scheduling: the loop iteration space is divided evenly into n chunks and each chunk is assigned to one processor
* dynamic scheduling: a chunk of iterations is dispatched at run time to whichever processor is idle; when the chunk size is a single iteration, this is also called self-scheduling
* guided scheduling: similar to dynamic scheduling, but the chunk size per dispatch keeps shrinking until it reaches a preset minimum
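A minimal OpenMP sketch of the three policies (the array, loop bodies, and chunk sizes are hypothetical placeholders); the schedule clause selects the method and, for dynamic and guided, the chunk size:

    #include <omp.h>
    #include <stdio.h>

    #define N 100000

    int main(void)
    {
        static double a[N];

        /* static even scheduling: iteration space split into equal chunks up front */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; i++)
            a[i] = i * 0.5;

        /* dynamic scheduling with chunk size 1: single iterations handed to
           idle threads at run time (self-scheduling) */
        #pragma omp parallel for schedule(dynamic, 1)
        for (int i = 0; i < N; i++)
            a[i] += i % 7;

        /* guided scheduling: chunk size shrinks per dispatch, never below 16 */
        #pragma omp parallel for schedule(guided, 16)
        for (int i = 0; i < N; i++)
            a[i] *= 2.0;

        printf("a[%d] = %f\n", N - 1, a[N - 1]);
        return 0;
    }

Compiled with OpenMP support (e.g. gcc -fopenmp), the pragmas parallelize the loops; without it they are ignored and the loops run serially.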

See also

* OpenMP
* Automatic parallelization
* Loop nest optimization

