Chord (distributed hash table)

Chord is one of the original distributed hash table protocols. It was developed at MIT, and the Chord source code can be downloaded and used under the MIT License.

Overview

Using the Chord lookup protocol, node keys are arranged on a circle. The circle cannot have more than 2^m nodes, and the ring can have IDs/keys ranging from 0 to 2^m - 1, where m is the number of bits in an identifier.

Each node and key is assigned an m-bit identifier using consistent hashing, with the SHA-1 algorithm as the base hash function. Consistent hashing is integral to the robustness and performance of Chord because both keys and node IDs (derived from IP addresses) are uniformly distributed in the same identifier space, with negligible probability of collision. Consistent hashing is also what lets nodes join and leave the network without disrupting it.
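
As a rough illustration, here is a minimal Python sketch of how a node address or key name could be mapped to an m-bit identifier with SHA-1; the function name chord_id and the small choice of m are assumptions for this example, not part of the protocol specification:

import hashlib

M = 16  # identifier bits for this example; with full SHA-1 output, m can be up to 160

def chord_id(value: str, m: int = M) -> int:
    """Map an IP address or key name to an m-bit Chord identifier."""
    digest = hashlib.sha1(value.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

node_id = chord_id("192.168.0.7:4000")   # a node hashes its own address
key_id  = chord_id("my-file.txt")        # a key hashes its name
print(node_id, key_id)                   # both land in [0, 2^16 - 1]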

Each node has a "successor" and a "predecessor". The "successor" of a node or key is the next node on the identifier circle moving clockwise; the "predecessor" is the next node moving counter-clockwise. If there were a node for every possible ID, the successor of node 2 would be node 3, and the predecessor of node 1 would be node 0. Normally, however, there are holes in the sequence: for example, the successor of node 153 may be node 167 (nodes 154 to 166 not existing), in which case the predecessor of node 167 is node 153. Since the successor (or predecessor) node may disappear from the network (because of failure or departure), each node records a whole segment of the circle adjacent to it, i.e. the K nodes preceding it and the K nodes following it. Keeping more than one successor and predecessor in a list maintains a high probability that the successor and predecessor pointers still point to the correct nodes after the failure or departure of the initial successor or predecessor.
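
For intuition, the following sketch finds the successor and predecessor of an identifier on a ring with holes, assuming a globally known, sorted list of live node IDs; a real Chord node never has this global view and instead uses the finger tables described below:

import bisect

def successor(nodes: list[int], ident: int) -> int:
    """First node clockwise from ident (nodes must be sorted)."""
    i = bisect.bisect_left(nodes, ident)
    return nodes[i % len(nodes)]          # wrap around past the highest ID

def predecessor(nodes: list[int], ident: int) -> int:
    """First node counter-clockwise from ident."""
    i = bisect.bisect_left(nodes, ident)
    return nodes[i - 1]                   # negative indexing handles the wrap

ring = [0, 1, 2, 3, 153, 167]
print(successor(ring, 154))    # 167: nodes 154..166 do not exist
print(predecessor(ring, 167))  # 153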

Potential Uses

*Cooperative Mirroring: A load-balancing mechanism in which a local network hosts information that is available to computers outside the local network. This scheme could allow developers to balance load across many computers instead of a central server, helping ensure availability of their product.

*Time-shared storage: Once a computer joins the network, its available data is distributed throughout the network for retrieval when that computer disconnects. Likewise, other computers' data is sent to the computer in question for offline retrieval when those computers are no longer connected. This is mainly intended for nodes that cannot connect to the network full time.

*Distributed Indices: Retrieval of files over the network within a searchable database, e.g. P2P file-transfer clients.

*Large scale combinatorial searches: Keys are candidate solutions to a problem, and each key maps to the node (computer) responsible for evaluating it as a solution, e.g. code breaking.

Proof sketches

With high probability, Chord contacts O(log N) nodes to find a successor in an N-node network

Consider a node n that issues a query for a key k, and let p be the node that k maps to in Chord (n ≠ p). Node n forwards the query to node f, the closest predecessor of k in its finger table. If k lies in the i-th finger interval of n, the distance between n and p is at most 2^i, and the new distance between f and p is at most 2^{i-1}. Iterating, the distance at least halves with every forwarding, so within m steps (with m as defined above) the query arrives at node p. Since the identifiers are uniformly random, after log N forwardings the remaining distance to p is at most 2^m / N; the expected number of node identifiers falling in an interval of that length is 1, so with high probability only O(log N) nodes need to be contacted.
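
The halving argument can also be checked numerically. The sketch below simulates greedy finger-table routing over N random identifiers and compares the average hop count against log2 N; the parameters (m = 32, N = 4096) and all helper names are choices made for this illustration:

import bisect, math, random

m, N = 32, 4096
SPACE = 2 ** m
random.seed(1)
nodes = sorted(random.sample(range(SPACE), N))

def succ(ident):
    """First node identifier clockwise from ident (inclusive)."""
    i = bisect.bisect_left(nodes, ident % SPACE)
    return nodes[i % N]

def in_open(x, a, b):
    """x in the open circular interval (a, b)."""
    return (a < x < b) if a < b else (x > a or x < b)

def in_half_open(x, a, b):
    """x in the circular interval (a, b]."""
    return x == b or in_open(x, a, b)

def lookup_hops(n, key):
    """Greedy finger-table routing from node n toward key; counts forwardings."""
    hops = 0
    while not in_half_open(key, n, succ(n + 1)):  # stop once n's successor owns key
        for k in range(m, 0, -1):                 # scan fingers, farthest first
            f = succ(n + 2 ** (k - 1))            # finger[k] of node n
            if in_open(f, n, key):                # closest preceding node of key
                n = f
                break
        hops += 1
    return hops

samples = [lookup_hops(random.choice(nodes), random.randrange(SPACE))
           for _ in range(500)]
print(f"mean hops: {sum(samples) / len(samples):.1f}  (log2 N = {math.log2(N):.0f})")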

If Chord keeps track of r = O(log N) predecessors/successors, then with high probability, even if each node fails independently with probability 1/4, find_successor (see below) and find_predecessor (see below) will return the correct nodes

Simply, the probability that all r nodes fail is (1/4)^r, which for r = log_4 N = O(log N) equals 1/N, a low probability; so with high probability at least one of them is alive and the node will have the correct pointer.
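
As a sanity check, the bound is easy to evaluate numerically; here r is taken to be log_4 N, which makes (1/4)^r exactly 1/N (the particular N is arbitrary):

import math

N = 4096
r = round(math.log(N, 4))   # track r = log4(N) = 6 successors
p_all_fail = 0.25 ** r      # each fails independently with probability 1/4
print(p_all_fail, 1 / N)    # both 0.000244140625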

Pseudocode

Definitions for pseudocode:
*finger[k]: the first node that succeeds (n + 2^{k-1}) mod 2^m, for 1 ≤ k ≤ m
*successor: the next node clockwise from the node in question on the identifier ring
*predecessor: the next node counter-clockwise from the node in question on the identifier ring

The pseudocode to find the "successor" node of an id is given below:

// ask node n to find the successor of id
n.find_successor(id)
    if (id ∈ (n, successor])
        return successor;
    else
        // forward the query around the circle
        n0 = closest_preceding_node(id);
        return n0.find_successor(id);

// search the local table for the highest predecessor of id
n.closest_preceding_node(id)
    for i = m downto 1
        if (finger[i] ∈ (n, id))
            return finger[i];
    return n;
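
For concreteness, here is a hedged translation of the lookup pseudocode into runnable Python. It simulates the ring in a single process by holding direct references between node objects, with finger tables precomputed from a known node list; the class layout, the helper names, and the example IDs are inventions of this sketch, not an official implementation:

import bisect

M = 6                        # identifier bits for this toy ring
SPACE = 2 ** M

def in_interval(x, a, b, right_closed=False):
    """Membership of x in the circular interval (a, b) or (a, b]."""
    if right_closed and x == b:
        return True
    if a < b:
        return a < x < b
    return x > a or x < b    # the interval wraps past zero

class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self
        self.finger = [self] * (M + 1)   # finger[1..M]; index 0 unused

    def find_successor(self, ident):
        # ask this node to find the successor of ident
        if in_interval(ident, self.id, self.successor.id, right_closed=True):
            return self.successor
        n0 = self.closest_preceding_node(ident)
        return n0.find_successor(ident)

    def closest_preceding_node(self, ident):
        # search the local table for the highest predecessor of ident
        for i in range(M, 0, -1):
            if in_interval(self.finger[i].id, self.id, ident):
                return self.finger[i]
        return self

# wire up a static example ring and fill in correct finger tables
ids = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])
nodes = {i: Node(i) for i in ids}

def owner(ident):
    """ID of the first node clockwise from ident."""
    j = bisect.bisect_left(ids, ident % SPACE)
    return ids[j % len(ids)]

for n in nodes.values():
    n.successor = nodes[owner(n.id + 1)]
    for k in range(1, M + 1):
        n.finger[k] = nodes[owner(n.id + 2 ** (k - 1))]

print(nodes[8].find_successor(54).id)   # 56: the query for key 54 hops 8 -> 42 -> 51 -> 56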

The pseudocode to stabilize the Chord ring/circle after node joins and departures is as follows:

// create a new Chord ring.
n.create()
    predecessor = nil;
    successor = n;

// join a Chord ring containing node n'.
n.join(n')
    predecessor = nil;
    successor = n'.find_successor(n);

// called periodically. verifies n's immediate
// successor, and tells the successor about n.
n.stabilize()
    x = successor.predecessor;
    if (x ∈ (n, successor))
        successor = x;
    successor.notify(n);

// n' thinks it might be our predecessor.
n.notify(n')
    if (predecessor is nil or n' ∈ (predecessor, n))
        predecessor = n';

// called periodically. refreshes finger table entries.
// next stores the index of the finger to fix
n.fix_fingers()
    next = next + 1;
    if (next > m)
        next = 1;
    finger[next] = find_successor(n + 2^{next-1});

// called periodically. checks whether predecessor has failed.
n.check_predecessor()
    if (predecessor has failed)
        predecessor = nil;
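
Again as a sketch only, the stabilization routines can be exercised in a single-process Python simulation. This version omits finger tables (lookups walk the ring via immediate successors) and models joins followed by periodic stabilize/notify rounds; all names and the join order are choices of the example:

def in_open(x, a, b):
    """x in the open circular interval (a, b)."""
    if a == b:
        return x != a        # degenerate interval covers the whole ring
    if a < b:
        return a < x < b
    return x > a or x < b

def in_half_open(x, a, b):
    """x in the circular interval (a, b]."""
    return x == b or in_open(x, a, b)

class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self    # a fresh ring: the node is its own successor
        self.predecessor = None

    def find_successor(self, ident):
        if in_half_open(ident, self.id, self.successor.id):
            return self.successor
        return self.successor.find_successor(ident)   # no fingers: walk the ring

    def join(self, known):
        # join a Chord ring containing node `known`
        self.predecessor = None
        self.successor = known.find_successor(self.id)

    def stabilize(self):
        # verify our immediate successor and tell it about us
        x = self.successor.predecessor
        if x is not None and in_open(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, other):
        # `other` thinks it might be our predecessor
        if self.predecessor is None or in_open(other.id, self.predecessor.id, self.id):
            self.predecessor = other

# nodes join one at a time; a few stabilization rounds repair the ring
first = Node(1)
ring = [first]
for ident in (14, 8, 42, 32):
    n = Node(ident)
    n.join(first)
    ring.append(n)
    for _ in range(2 * len(ring)):
        for node in ring:
            node.stabilize()

print(sorted((n.id, n.successor.id) for n in ring))
# [(1, 8), (8, 14), (14, 32), (32, 42), (42, 1)]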

See also

* CAN
* Kademlia
* Pastry (DHT)
* Tapestry (DHT)
* Koorde
* OverSim - the overlay simulation framework

External links

* [http://www.pdos.lcs.mit.edu/chord The Chord Project]
* [http://www.pdos.lcs.mit.edu/papers/chord:sigcomm01/ Paper proposing Chord: "Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications"]
* [http://pdos.csail.mit.edu/papers/ton:chord/ Updated version of the above paper]
* [http://open-chord.sourceforge.net/ Open Chord - An Open Source Java Implementation]

