Gradient Networks

Motivation

Real-world networks evolve to fulfill a main function, which is often to transport entities such as information, cars, power, water, etc. These large-scale networks are not globally designed; they evolve and grow through local changes, through a natural selection-like dynamics. For example, if a router on the Internet is frequently congested and packets are lost or delayed because of it, it will get replaced by several interconnected new routers. Recent research investigates the connection between network topology and the efficiency of the transported flow.[1]

The flow is often generated or influenced by local gradients of a scalar. For example, electric current is driven by a gradient of electric potential, and in information networks the properties of nodes bias the way information is transmitted from a node to its neighbors. This idea motivated the approach through gradient networks, which studies flow efficiency on a network when the flow is driven by gradients of a scalar field distributed on the network.[2]

Fig. 2. The gradient at node i is a directed edge pointing towards the largest increase of the scalar potential in the node's neighborhood.

Definition of a Gradient Network

Gradient networks are directed subnetworks of an undirected "substrate" network in which each node has an associated scalar potential and one out-link that points to the node with the smallest (or largest) potential in its neighborhood, defined as the union of itself and its nearest neighbors on the substrate network.[3]

Let us consider that transport takes place on a fixed network G = G(V, E), called the substrate graph. It has N nodes, V = {0, 1, ..., N − 1}, and the set of edges E = {(i, j) | i, j ∈ V}. Given a node i, we can define its set of neighbors in G by S_i^{(1)} = {j ∈ V | (i, j) ∈ E}.

Let us also consider a scalar field h = {h_0, ..., h_{N−1}} defined on the set of nodes V, so that every node i has a scalar value h_i associated with it.

" Gradient ∇hi on a Network" : "∇h"i"= (i, μ(i))"i.e. the directed edge from "i" to "μ(i)" , where "μ(i)" ∈ Si(1) ∪ {i}, and hμ has the maximum value in { hj | j ∈ Si(1) ∪ {i

"Gradient Network" : "∇G = ∇G (V, F) "where F is the set of gradient edges on G.

In general, the scalar field depends on time, due to the flow and to external sources and sinks on the network. The gradient network ∇G is therefore itself dynamic.
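The construction above translates directly into code. The following is a minimal sketch, not taken from the cited papers: it builds ∇G for a given substrate and scalar field using the networkx library. The function name gradient_network, the Erdős–Rényi substrate with uniform random potentials, and the choice to skip self-loop gradient edges (μ(i) = i) are illustrative assumptions of this sketch.

```python
# Minimal sketch (not from the cited papers): build the gradient network
# of a substrate graph G under a scalar field h, using networkx.
import random
import networkx as nx

def gradient_network(G, h):
    """Return the directed gradient network of substrate G under the
    scalar field h (a dict mapping node -> scalar potential)."""
    nabla_G = nx.DiGraph()
    nabla_G.add_nodes_from(G.nodes())
    for i in G.nodes():
        # Neighborhood S_i^(1) ∪ {i}: the node itself plus its nearest neighbors.
        neighborhood = list(G.neighbors(i)) + [i]
        # μ(i): the member of the neighborhood with the largest potential.
        mu = max(neighborhood, key=lambda j: h[j])
        # Gradient edge (i, μ(i)).  When μ(i) = i the edge would be a self-loop
        # carrying no flow between nodes; it is skipped here (an assumption of
        # this sketch).
        if mu != i:
            nabla_G.add_edge(i, mu)
    return nabla_G

# Example: Erdős–Rényi substrate with i.i.d. uniform potentials.
G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=0)
h = {i: random.random() for i in G.nodes()}
grad = gradient_network(G, h)
print(grad.number_of_nodes(), grad.number_of_edges())
```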

In-degree distribution of Gradient Networks

In a gradient network, the in-degree k_i^{(in)} of a node i is the number of gradient edges pointing into i, and the in-degree distribution is R(l) = P{k_i^{(in)} = l}.
Fig. 5. The degree distributions of the gradient network and the substrate (BA model).

When the substrate G is a random graph in which each pair of nodes is connected with probability p, and the scalars h_i are i.i.d. (independent, identically distributed), the exact expression for R(l) is given by
R(l) = \frac{1}{N}\sum_{n=0}^{N-1} \binom{N-1-n}{l}\,\left[p(1-p)^n\right]^{l}\,\left[1-p(1-p)^n\right]^{N-1-n-l}

In the limit N → ∞ and p → 0, the in-degree distribution becomes the power law
R(l) \approx l^{-1}
This shows that in this limit the gradient network of a random graph is scale-free. If the substrate network G is itself scale-free, like the BA model, then the in-degree distribution of the gradient network also follows a power law, with the same exponent as that of G.
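As a numerical illustration (a sketch under the same assumptions as above, reusing gradient_network, nx and random from the previous snippet; R_exact and the parameters N = 1000, p = 0.01 are assumptions), the exact R(l) can be evaluated directly and compared with the in-degree histogram of a sampled gradient network:

```python
# Numerical check of the exact R(l) above (illustrative sketch; reuses
# gradient_network, networkx (nx) and random from the previous snippet).
from math import comb
from collections import Counter

def R_exact(l, N, p):
    """Exact in-degree distribution of the gradient network on an
    Erdős–Rényi substrate G(N, p) with i.i.d. scalar potentials."""
    total = 0.0
    for n in range(N):
        q = p * (1 - p) ** n          # per-node probability of sending a gradient edge to i
        k = N - 1 - n                 # number of potential senders
        if l > k:
            continue                  # binomial term vanishes
        total += comb(k, l) * q**l * (1 - q) ** (k - l)
    return total / N

N, p = 1000, 0.01
G = nx.erdos_renyi_graph(N, p, seed=1)
h = {i: random.random() for i in G.nodes()}
in_degrees = Counter(d for _, d in gradient_network(G, h).in_degree())
for l in range(5):
    print(l, round(R_exact(l, N, p), 4), in_degrees[l] / N)
```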

Congestion on Networks

The fact that the topology of the substrate network influences the level of congestion can be illustrated by a simple example (Fig. 6): if the network has a star-like structure, the flow congests at the central node, because that node has to handle the flow from all other nodes. In contrast, if the network has a ring-like structure, every node plays the same role in the transport and no traffic jam arises.

Under the assumption that the flow is generated by gradients in the network, the efficiency of flow on a network can be characterized through the jamming factor (or congestion factor), defined as:


J = 1 - \left\langle \left\langle \frac{N_{\mathrm{receive}}}{N_{\mathrm{send}}} \right\rangle_{h} \right\rangle_{\mathrm{network}} = R(0)
where N_receive is the number of nodes that receive gradient flow and N_send is the number of nodes that send it. Since every node has exactly one gradient out-link, N_send = N, so J is simply the average fraction of nodes that receive no gradient flow, i.e. R(0). The value of J lies between 0 and 1: J = 0 means no congestion, and J = 1 corresponds to maximal congestion. In the limit N → ∞, with the probability p that two arbitrary nodes are connected kept constant, the congestion factor of a random network becomes
J(N, p) = 1 - \frac{\ln N}{N \ln\!\left(\frac{1}{1-p}\right)}\left[1 + O\!\left(\frac{1}{N}\right)\right] \rightarrow 1
This result shows that random networks are maximally congested in that limit. In contrast, for scale-free networks J is a constant independent of N, which means scale-free networks are not prone to maximal jamming.[4]
Fig. 7. The congestion coefficient for random graphs and scale-free networks.
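The relation J = R(0) also suggests a direct numerical check. The sketch below uses the same assumptions and helper function as the earlier snippets; the names J_analytic and J_empirical and the parameters N = 1000, p = 0.05 are illustrative, and the empirical estimate averages over random scalar fields on a single sampled substrate rather than over networks as well:

```python
# Illustrative sketch (same assumptions and helpers as above): estimate
# the congestion factor J = R(0) and compare with the analytic J(N, p).
from math import log

def J_analytic(N, p):
    """Leading-order congestion factor of an Erdős–Rényi substrate."""
    return 1 - log(N) / (N * log(1 / (1 - p)))

def J_empirical(G, trials=20):
    """Average fraction of nodes receiving no gradient flow, taken over
    independent i.i.d. scalar fields (the average over h)."""
    N = G.number_of_nodes()
    total = 0.0
    for _ in range(trials):
        h = {i: random.random() for i in G.nodes()}
        grad = gradient_network(G, h)
        receivers = sum(1 for _, d in grad.in_degree() if d > 0)
        # Every node sends along its single gradient out-link, so N_send = N.
        total += 1 - receivers / N
    return total / trials

N, p = 1000, 0.05
G = nx.erdos_renyi_graph(N, p, seed=2)
print(J_analytic(N, p), J_empirical(G))
```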

An efficient approach to control congestion

A key problem in communication networks is to understand how to control congestion and maintain normal, efficient functioning of the network.[5] Zonghua Liu et al. studied network congestion and found that congestion is more likely to occur at high-degree nodes (see Fig. 8). They showed that selectively enhancing the message-processing capability of a small fraction (e.g. 3%) of nodes performs just as well as enhancing the capability of all nodes (see Fig. 9).
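The selection step of such a strategy is straightforward to express in code. The sketch below only illustrates picking the nodes to enhance, not the traffic model of Liu et al.; the function name nodes_to_enhance and the BA substrate parameters are assumptions:

```python
# Sketch of the node-selection step only (not the traffic model of Liu
# et al.): pick the small fraction of highest-degree nodes whose
# message-processing capability would be enhanced.
def nodes_to_enhance(G, fraction=0.03):
    """Return the top `fraction` of nodes of G ranked by degree."""
    k = max(1, int(fraction * G.number_of_nodes()))
    return sorted(G.nodes(), key=G.degree, reverse=True)[:k]

hubs = nodes_to_enhance(nx.barabasi_albert_graph(1000, 3, seed=3))
print(len(hubs), "nodes selected for enhanced capability")
```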

References

1. http://cnls.lanl.gov/External/people/highlights/Toroczkai_net.pdf
2. Z. Toroczkai, B. Kozma, K.E. Bassler, N.W. Hengartner and G. Korniss, "Gradient Networks", cond-mat/0408262.
3. B. Danila, "Congestion-gradient driven transport on complex networks", Physical Review E 74, 046114 (2006).
4. Z. Toroczkai and K.E. Bassler, Nature 428, 716 (2004).
5. Z. Liu, W. Ma, H. Zhang, Y. Sun and P.M. Hui, "An efficient approach of controlling traffic congestion in scale-free networks", Physica A 370 (2), 843–853 (2006).