Task parallelism

Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing execution processes (threads) across different parallel computing nodes. It contrasts with data parallelism, another form of parallelism.

Description

In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work. Communication usually takes place to pass data from one thread to the next as part of a workflow.
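
As a concrete illustration, the sketch below in plain Java runs two threads as a two-stage workflow: the first thread produces data and passes it through a queue to the second thread, which processes it. The class name Workflow and the sentinel value -1 are chosen here purely for illustration.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Workflow {
    public static void main(String[] args) throws InterruptedException {
        // Channel through which the first thread hands data to the second.
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(10);

        // First stage of the workflow: produce the numbers 0..4.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    channel.put(i);
                }
                channel.put(-1); // sentinel marking the end of the stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Second stage: take each value from the queue and process it.
        Thread consumer = new Thread(() -> {
            try {
                int value;
                while ((value = channel.take()) != -1) {
                    System.out.println("processed " + value * value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}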

As a simple example, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the run time of the execution. The tasks can be assigned using conditional statements, as described below.

Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e., threads), as opposed to the data (data parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.

Example

The pseudocode below illustrates task parallelism:

program:
...
if CPU="a" then
   do task "A"
else if CPU="b" then
   do task "B"
end if
...
end program

The goal of the program is to do some net total task ("A + B"). If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it as follows:

  • In an SPMD system, both CPUs will execute the code.
  • In a parallel environment, both will have access to the same data.
  • The "if" clause differentiates between the CPUs. The "if" condition evaluates to true on CPU "a" and the "else if" condition evaluates to true on CPU "b", so each CPU selects its own task.
  • Both CPUs then execute separate code blocks simultaneously, performing different tasks.

Code executed by CPU "a":

program:
...
do task "A"
...
end program

Code executed by CPU "b":

program:
...
do task "B"
...
end program

This concept can now be generalized to any number of processors.
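
As a minimal sketch of this generalization in plain Java (the names GeneralizedTasks and doTask are illustrative, and a fixed-size thread pool stands in for the processors), each worker receives the same code and selects its task by its index, just as the pseudocode above branches on the CPU name:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GeneralizedTasks {
    public static void main(String[] args) throws InterruptedException {
        // One worker per available processor; doTask(id) stands in for
        // tasks "A", "B", "C", ... from the pseudocode.
        int n = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(n);

        for (int cpu = 0; cpu < n; cpu++) {
            final int id = cpu;
            // Every worker runs the same code; the id selects that
            // worker's own task, mirroring the if/else if above.
            pool.submit(() -> doTask(id));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    static void doTask(int id) {
        System.out.println("task " + id + " running on its own thread");
    }
}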

JVM example

Similar to the previous example, task parallelism is also possible on the Java Virtual Machine (JVM).

The code below illustrates task parallelism on the JVM using Ateji PX, a commercial third-party extension of Java.[1] Statements or blocks of statements can be composed in parallel using the || operator inside a parallel block, introduced by square brackets:

[
   || a++;
   || b++;
]

or in short form:

[ a++; || b++; ]

Each parallel statement within the composition is called a branch. We purposely avoid the terms task and process, which mean very different things in different contexts.
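
Roughly the same parallel composition can be written with the standard java.util.concurrent library. The sketch below is illustrative rather than a translation of Ateji PX (the class name Branches is made up here); AtomicInteger is used because Java lambdas cannot mutate captured local variables:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class Branches {
    public static void main(String[] args) {
        AtomicInteger a = new AtomicInteger(0);
        AtomicInteger b = new AtomicInteger(0);

        // Two branches run in parallel and are then joined: a
        // standard-library counterpart of [ a++; || b++; ].
        CompletableFuture<Void> branchA = CompletableFuture.runAsync(a::incrementAndGet);
        CompletableFuture<Void> branchB = CompletableFuture.runAsync(b::incrementAndGet);
        CompletableFuture.allOf(branchA, branchB).join();

        System.out.println("a = " + a.get() + ", b = " + b.get());
    }
}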

Languages

Examples of (fine-grained) task-parallel languages can be found in the realm of hardware description languages such as Verilog and VHDL. These can also be seen as representing a "code static" software paradigm, in which the program has a static structure while the data changes, as opposed to a "data static" model, in which the data does not change (or changes slowly) while the processing (the applied methods) changes, as in a database search.

Notes

  1. ^ Task Parallelism using Ateji PX, an extension of Java. http://www.ateji.com/px/patterns.html#task
