Parallel Random Access Machine

In computer science, a parallel random access machine (PRAM) is a shared-memory abstract machine. As its name indicates, the PRAM was intended as the parallel-computing analogy to the random access machine (RAM). In the same way that the RAM is used by sequential algorithm designers to model algorithmic performance (such as time complexity), the PRAM is used by parallel algorithm designers to model parallel algorithmic performance (such as time complexity, where the number of processors assumed is typically also stated). Similar to the way in which the RAM model neglects practical issues such as the access time of cache memory versus main memory, the PRAM model neglects such issues as synchronization and communication, but provides any (problem-size-dependent) number of processors. Algorithm cost, for instance, is estimated using two parameters: O(time) and O(time × processor_number).
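For example, the sum of n numbers can be computed on an EREW PRAM by n/2 processors that repeatedly add pairs of partial sums along a balanced binary tree; the result is ready after O(log n) steps, so the two cost parameters are O(log n) and O(log n × n/2) = O(n log n). Assigning only n/log n processors, each of which first sums a block of log n elements sequentially, lowers the second parameter to O(n) without changing the asymptotic time.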

Read/write conflicts

Read/write conflicts in accessing the same shared memory location simultaneously are resolved by one of the following strategies:

  1. Exclusive Read Exclusive Write (EREW)—every memory cell can be read or written to by only one processor at a time
  2. Concurrent Read Exclusive Write (CREW)—multiple processors can read a memory cell but only one can write at a time
  3. Exclusive Read Concurrent Write (ERCW)—never considered
  4. Concurrent Read Concurrent Write (CRCW)—multiple processors can read and write. A CRCW PRAM is sometimes called a concurrent random access machine.[1]

Here, E and C stand for 'exclusive' and 'concurrent' respectively. A concurrent read causes no discrepancies, while a concurrent write is further defined as one of the following:

Common—all processors write the same value; otherwise the write is illegal
Arbitrary—only one arbitrary attempt is successful, the others retire
Priority—processor rank indicates who gets to write
Reduction—the concurrently written values are combined by an array reduction operation such as SUM, logical AND, or MAX
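
The FindMax example given below, for instance, resolves its write conflicts in the Common manner: during the compare phase several processors may set the same flag m[i] in the same clock cycle, but they all write the identical value 1, so the concurrent write is legal.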

Several simplifying assumptions are made while considering the development of algorithms for the PRAM. They are:

  1. There is no limit on the number of processors in the machine.
  2. Any memory location is uniformly accessible from any processor.
  3. There is no limit on the amount of shared memory in the system.
  4. Resource contention is absent.
  5. The programs written on these machines are, in general, of type MIMD. Certain special cases such as SIMD may also be handled in such a framework.

These kinds of algorithms are useful for understanding the exploitation of concurrency: the original problem is divided into similar sub-problems, which are then solved in parallel.

Implementation

These algorithms cannot be implemented with a combination of CPU and DRAM, because DRAM does not allow concurrent access; they can, however, be implemented in hardware, or using reads from and writes to the internal memory (SRAM) of an FPGA, in which case a CRCW algorithm can be used.

However, the test of practical relevance of PRAM (or RAM) algorithms depends on whether their cost model provides an effective abstraction of some computer; the structure of that computer can be quite different from that of the abstract model. The knowledge of the layers of software and hardware that need to be inserted is beyond the scope of this article. Nevertheless, articles such as Vishkin (2011) demonstrate how a PRAM-like abstraction can be supported by the Explicit multi-threading (XMT) paradigm, and articles such as Caragea & Vishkin (2011) demonstrate that a PRAM algorithm for the maximum flow problem can provide strong speedups relative to the fastest serial program for the same problem.

Example code

This is an example of SystemVerilog code which finds the maximum value in an array in only two clock cycles. It compares all pairs of elements of the array during the first clock cycle and merges the result during the second. It uses CRCW memory; m[i] <= 1 and maxNo <= data[i] are written concurrently. The concurrency causes no conflicts because the algorithm guarantees that the same value is written to the same memory location. This code can be run on FPGA hardware.

module FindMax #(parameter int len = 8)
                (input bit clock, resetN, input bit[7:0] data[len], output bit[7:0] maxNo);
    typedef enum bit[1:0] {COMPARE, MERGE, DONE} State;

    State state;
    bit m[len];   // m[i] is set when data[i] is smaller than some other element
    int i, j;

    always_ff @(posedge clock, negedge resetN) begin
        if (!resetN) begin
            for (i = 0; i < len; i++) m[i] <= 0;
            state <= COMPARE;
        end else begin
            case (state)
                // First clock cycle: compare all pairs of elements in parallel.
                // Several comparisons may write the same m[i] concurrently,
                // but they all write the identical value 1 (Common CRCW).
                COMPARE: begin
                    for (i = 0; i < len; i++) begin
                        for (j = 0; j < len; j++) begin
                            if (data[i] < data[j]) m[i] <= 1;
                        end
                    end
                    state <= MERGE;
                end

                // Second clock cycle: the element whose flag is still 0
                // has no larger element, i.e. it is the maximum.
                MERGE: begin
                    for (i = 0; i < len; i++) begin
                        if (m[i] == 0) maxNo <= data[i];
                    end
                    state <= DONE;
                end
            endcase
        end
    end
endmodule
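
The module can be exercised in simulation with a small testbench such as the following minimal sketch, which is not part of the original example; the input values and the delays are arbitrary and only illustrate that maxNo holds the maximum of data once the two cycles after reset have elapsed.

module FindMax_tb;
    bit clock, resetN;
    bit[7:0] data[8] = '{8'd3, 8'd141, 8'd59, 8'd26, 8'd53, 8'd58, 8'd97, 8'd93};
    bit[7:0] maxNo;

    // Device under test: the FindMax module defined above, with len = 8.
    FindMax #(.len(8)) dut (.clock(clock), .resetN(resetN), .data(data), .maxNo(maxNo));

    always #5 clock = ~clock;              // free-running clock, period 10 time units

    initial begin
        clock = 0;
        resetN = 0;                        // assert reset so the m[] flags are cleared
        #12 resetN = 1;                    // release reset between clock edges
        #30;                               // COMPARE and MERGE have completed by now
        $display("maximum = %0d", maxNo);  // prints 141 for the data above
        $finish;
    end
endmodule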

References

  1. Neil Immerman, "Expressibility and parallel complexity", SIAM Journal on Computing, vol. 18, no. 3, pp. 625–638, 1989.


Eppstein, David; Galil, Zvi (1988), "Parallel algorithmic techniques for combinatorial computation", Annual Review of Computer Science 3: 233–283, doi:10.1146/annurev.cs.03.060188.001313

JaJa, Joseph (1992), An Introduction to Parallel Algorithms, Addison-Wesley, ISBN 0-201-54856-9

Karp, Richard M.; Ramachandran, Vijaya (1988), "A Survey of Parallel Algorithms for Shared-Memory Machines", University of California, Berkeley, Department of EECS, Tech. Rep. UCB/CSD-88-408

Keller, Jörg; Keßler, Christoph; Träff, Jesper (2001), Practical PRAM Programming, John Wiley and Sons, ISBN 0471353515

Vishkin, Uzi (2009), Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages, class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion, http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf

Vishkin, Uzi (2011), "Using simple abstraction to reinvent computing for parallelism", Communications of the ACM 54 (1): 75–85, doi:10.1145/1866739.1866757

Caragea, George Constantin; Vishkin, Uzi (2011), "Better speedups for parallel max-flow", ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), doi:10.1145/1989493.1989511

