Parallel array

In computing, a parallel array is a data structure for representing arrays of records. It keeps a separate, homogeneous array for each field of the record, each having the same number of elements. Then, objects located at the same index in each array are implicitly the fields of a single record. Pointers from one object to another are replaced by array indices. This contrasts with the normal approach of storing all fields of each record together in memory. For example, one might declare an array of 100 names, each a string, and 100 ages, each an integer, associating each name with the age that has the same index.

An example in C using parallel arrays:

int ages[]    = {0, 17, 2, 52, 25};
char *names[] = {"None", "Mike", "Billy", "Tom", "Stan"};
int parent[]  = {0 /*None*/, 3 /*Tom*/, 1 /*Mike*/, 0 /*None*/, 3 /*Tom*/};

for (int i = 1; i <= 4; i++) {
    printf("Name: %s, Age: %d, Parent: %s\n", names[i], ages[i], names[parent[i]]);
}

Or, in Python:

firstName  = ['Joe',   'Bob',   'Frank',   'Hans'    ]
lastName   = ['Smith', 'Seger', 'Sinatra', 'Schultze']
heightInCM = [169, 158, 201, 199]

for i in range(len(firstName)):
    print("Name: %s %s" % (firstName[i], lastName[i]))
    print("Height in CM: %s" % heightInCM[i])

Parallel arrays have a number of practical advantages over the normal approach:
* They can be used in languages which support only arrays of primitive types and not of records (or perhaps don't support records at all).
* Parallel arrays are simple to understand and use, and are often used where declaring a record is more trouble than it's worth.
* They can save a substantial amount of space in some cases by avoiding alignment issues. For example, one of the fields of the record can be a single bit, and its array would only need to reserve one bit for each record, whereas in the normal approach many more bits would "pad" the field so that it consumes an entire byte or a word (see the sketch after this list).
* If the number of items is small, array indices can occupy significantly less space than full pointers, particularly on architectures with large words.
* Sequentially examining a single field of each record in the array is very fast on modern machines, since this amounts to a linear traversal of a single array, exhibiting ideal locality of reference and cache behavior.
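
As a rough sketch of the padding point above (the struct layout, field names, and NUM_RECORDS are illustrative assumptions, and the exact sizes depend on the platform and compiler), a one-bit flag stored inside each record usually costs a full byte plus alignment padding, while a parallel bit array packs eight flags per byte:

#include <stdio.h>
#include <stdint.h>

#define NUM_RECORDS 1000

/* Record layout: the one-bit "active" flag occupies a whole byte and,
   on common platforms, drags in three more bytes of alignment padding,
   so each record typically costs 8 bytes. */
struct person {
    int32_t age;
    uint8_t active;   /* holds a single bit of information */
};

int main(void) {
    /* Parallel-array layout: ages stored densely, flags packed 8 per byte. */
    int32_t ages[NUM_RECORDS];
    uint8_t active_bits[(NUM_RECORDS + 7) / 8];   /* 1 bit per record */

    printf("array of records: %zu bytes\n", sizeof(struct person) * NUM_RECORDS);
    printf("parallel arrays : %zu bytes\n", sizeof ages + sizeof active_bits);
    return 0;
}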

However, parallel arrays also have several strong disadvantages, which serve to explain why they are not generally preferred:

* They have significantly worse locality of reference when visiting the records sequentially and examining multiple fields of each record, which is the norm.
* They obscure the relationship between fields of a single record.
* They have little direct language support (the language and its syntax typically express no relationship between the arrays in the parallel array).
* They are expensive to grow or shrink, since each of several arrays must be reallocated.
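
A rough sketch of that last point, assuming the names/ages/parent arrays from the C example above had been allocated on the heap (the function name and signature are illustrative, not from the original): growing the collection by even one record means reallocating every array and keeping their capacities in sync.

#include <stdlib.h>

/* Grow three parallel arrays to hold new_n records each.
   Every array has to be reallocated; a failure partway through
   leaves the arrays with mismatched capacities to clean up. */
static int grow_records(char ***names, int **ages, int **parent, size_t new_n)
{
    char **n = realloc(*names, new_n * sizeof **names);
    if (!n) return -1;
    *names = n;

    int *a = realloc(*ages, new_n * sizeof **ages);
    if (!a) return -1;   /* names is already grown; capacities now differ */
    *ages = a;

    int *p = realloc(*parent, new_n * sizeof **parent);
    if (!p) return -1;
    *parent = p;

    return 0;            /* a single array of records needs only one realloc */
}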

The bad locality of reference is the worst issue. However, a compromise can be made in some cases: if a structure can be divided into groups of fields that are generally accessed together, an array can be constructed for each group, whose elements are records containing only that group's fields. This is a valuable way of speeding up access to very large structures with many members, while keeping the portions of the structure tied together. An alternative to tying them together using array indexes is to use references to tie the portions together, but this can be less efficient in time and space. Some compiler optimizations, particularly for vector processors, are able to perform this transformation automatically when arrays of structures are created in the program.
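
A minimal C sketch of this compromise, reusing the names and heights from the Python example above (the particular grouping is an illustrative assumption): the first and last name, which are usually read together, are grouped into one small record, while the height stays in its own parallel array and can be scanned on its own with good cache behavior.

#include <stdio.h>

#define NUM_PEOPLE 4

/* Fields that are usually accessed together form one small record... */
struct name_rec {
    const char *first;
    const char *last;
};

/* ...while a field that is usually scanned on its own stays in a plain
   parallel array, tied to the records by index rather than by pointer. */
struct name_rec full_names[NUM_PEOPLE] = {
    {"Joe", "Smith"}, {"Bob", "Seger"}, {"Frank", "Sinatra"}, {"Hans", "Schultze"}
};
int height_in_cm[NUM_PEOPLE] = {169, 158, 201, 199};

int main(void)
{
    /* This pass touches only the name records... */
    for (int i = 0; i < NUM_PEOPLE; i++)
        printf("Name: %s %s\n", full_names[i].first, full_names[i].last);

    /* ...and this one walks only the compact integer array. */
    long total = 0;
    for (int i = 0; i < NUM_PEOPLE; i++)
        total += height_in_cm[i];
    printf("Average height: %ld cm\n", total / NUM_PEOPLE);
    return 0;
}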

See also

* An example in the linked list article
* Column-oriented DBMS

References

* Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. "Introduction to Algorithms", Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Page 209 of section 10.3: Implementing pointers and objects.

