Radial basis function network

A radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks are used in function approximation, time series prediction, and control.

Network architecture

Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer. The output, \varphi : \mathbb{R}^n \to \mathbb{R}, of the network is thus

: \varphi(\mathbf{x}) = \sum_{i=1}^N a_i \rho(\| \mathbf{x} - \mathbf{c}_i \|)

where N is the number of neurons in the hidden layer, \mathbf{c}_i is the center vector for neuron i, and a_i are the weights of the linear output neuron. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance and the basis function is taken to be Gaussian

: \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) = \exp\left[ -\beta \left\| \mathbf{x} - \mathbf{c}_i \right\|^2 \right].

The Gaussian basis functions are local in the sense that

: \lim_{\|\mathbf{x}\| \to \infty} \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) = 0,

i.e., changing the parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.

RBF networks are universal approximators on a compact subset of \mathbb{R}^n. This means that an RBF network with enough hidden neurons can approximate any continuous function with arbitrary precision.

The parameters a_i, \mathbf{c}_i, and \beta are determined in a manner that optimizes the fit between \varphi and the data.
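
As a concrete illustration of the unnormalized architecture, the following sketch evaluates \varphi(\mathbf{x}) with Gaussian basis functions in NumPy; the variable names (centers, weights, beta) and the example values are illustrative, not taken from this article.

```python
import numpy as np

def rbf_forward(x, centers, weights, beta):
    """Unnormalized RBF network: phi(x) = sum_i a_i * exp(-beta * ||x - c_i||^2)."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 for every center
    rho = np.exp(-beta * sq_dist)                  # Gaussian basis responses
    return weights @ rho                           # linear output layer

# Illustrative two-center network in one input dimension.
centers = np.array([[0.75], [3.25]])
weights = np.array([1.0, -0.5])
print(rbf_forward(np.array([1.0]), centers, weights, beta=5.0))
```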

Figure: two radial basis functions in one input dimension, with centers c_1 = 0.75 and c_2 = 3.25.

Normalized

Normalized architecture

In addition to the above "unnormalized" architecture, RBF networks can be "normalized". In this case the mapping is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^N a_i \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)}{\sum_{i=1}^N \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)} = \sum_{i=1}^N a_i u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)

where

: u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)}{\sum_{j=1}^N \rho\big(\left\| \mathbf{x} - \mathbf{c}_j \right\|\big)}

is known as a "normalized radial basis function".
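
Evaluating the normalized architecture differs only in dividing by the sum of the basis responses. A minimal sketch, under the same illustrative assumptions as above:

```python
import numpy as np

def normalized_rbf_forward(x, centers, weights, beta):
    """Normalized RBF network: phi(x) = sum_i a_i * u_i(x),
    with u_i(x) = rho_i(x) / sum_j rho_j(x)."""
    rho = np.exp(-beta * np.sum((centers - x) ** 2, axis=1))
    u = rho / np.sum(rho)      # normalized basis functions; they sum to one
    return weights @ u
```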

Theoretical motivation for normalization

There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density

: P(\mathbf{x} \land y) = \sum_{i=1}^N \, \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) \, \sigma\big(\left| y - e_i \right|\big)

where the weights \mathbf{c}_i and e_i are exemplars from the data and we require the kernels to be normalized:

: \int \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) \, d^n\mathbf{x} = 1

and

: \int \sigma\big(\left| y - e_i \right|\big) \, dy = 1.

The probability densities in the input and output spaces are

: P(\mathbf{x}) = \int P(\mathbf{x} \land y) \, dy = \sum_{i=1}^N \, \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)

and

: P(y) = \int P(\mathbf{x} \land y) \, d^n\mathbf{x} = \sum_{i=1}^N \, \sigma\big(\left| y - e_i \right|\big).

The expectation of y given an input \mathbf{x} is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ E(y \mid \mathbf{x}) = \int y \, P(y \mid \mathbf{x}) \, dy

where P(y \mid \mathbf{x}) is the conditional probability of y given \mathbf{x}. The conditional probability is related to the joint probability through Bayes' theorem

: P(y \mid \mathbf{x}) = \frac{P(\mathbf{x} \land y)}{P(\mathbf{x})},

which yields

: \varphi(\mathbf{x}) = \int y \, \frac{P(\mathbf{x} \land y)}{P(\mathbf{x})} \, dy.

This becomes

: \varphi(\mathbf{x}) = \frac{\sum_{i=1}^N a_i \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)}{\sum_{i=1}^N \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)} = \sum_{i=1}^N a_i u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)

when the integrations are performed, with a_i = \int y \, \sigma\big(\left| y - e_i \right|\big) \, dy, which equals e_i when \sigma is symmetric about e_i.

Local linear models

It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,

: \varphi(\mathbf{x}) = \sum_{i=1}^N \left( a_i + \mathbf{b}_i \cdot \left( \mathbf{x} - \mathbf{c}_i \right) \right) \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)

and

: \varphi(\mathbf{x}) = \sum_{i=1}^N \left( a_i + \mathbf{b}_i \cdot \left( \mathbf{x} - \mathbf{c}_i \right) \right) u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)

in the unnormalized and normalized cases, respectively. Here mathbf{b}_i are weights to be determined. Higher order linear terms are also possible.

This result can be written

: \varphi(\mathbf{x}) = \sum_{i=1}^{2N} \sum_{j=1}^n e_{ij} v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big)

where

: e_{ij} = \begin{cases} a_i, & \mbox{if } i \in [1, N] \\ b_{ij}, & \mbox{if } i \in [N+1, 2N] \end{cases}

and

: v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big), & \mbox{if } i \in [1, N] \\ \left( x_{ij} - c_{ij} \right) \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big), & \mbox{if } i \in [N+1, 2N] \end{cases}

in the unnormalized case and

: v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big), & \mbox{if } i \in [1, N] \\ \left( x_{ij} - c_{ij} \right) u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big), & \mbox{if } i \in [N+1, 2N] \end{cases}

in the normalized case.

Here \delta_{ij} is a Kronecker delta function defined as

: \delta_{ij} = \begin{cases} 1, & \mbox{if } i = j \\ 0, & \mbox{if } i \ne j \end{cases}.

Training

In an RBF network there are three types of parameters that need to be chosen to adapt the network for a particular task: the center vectors \mathbf{c}_i, the output weights w_i, and the RBF width parameters \beta_i. In sequential training, the weights are updated at each time step as data streams in.

For some tasks it makes sense to define an objective function and select the parameter values that minimize its value. The most common objective function is the least squares function

: K(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^\infty K_t(\mathbf{w})

where

: K_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big]^2.

We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.

There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as

: H(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K(\mathbf{w}) + \lambda S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^\infty H_t(\mathbf{w})

where

: S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^\infty S_t(\mathbf{w})

and

: H_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K_t(\mathbf{w}) + \lambda S_t(\mathbf{w})

where optimization of S maximizes smoothness and \lambda is known as a regularization parameter.

Interpolation

RBF networks can be used to interpolate a function y: \mathbb{R}^n \to \mathbb{R} when the values of that function are known on a finite number of points: y(\mathbf{x}_i) = b_i, \; i = 1, \ldots, N. Taking the known points \mathbf{x}_i to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, g_{ij} = \rho(\| \mathbf{x}_j - \mathbf{x}_i \|), the weights can be solved from the equation

: \left[ \begin{matrix} g_{11} & g_{12} & \cdots & g_{1N} \\ g_{21} & g_{22} & \cdots & g_{2N} \\ \vdots & & \ddots & \vdots \\ g_{N1} & g_{N2} & \cdots & g_{NN} \end{matrix} \right] \left[ \begin{matrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{matrix} \right] = \left[ \begin{matrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{matrix} \right]

It can be shown that the interpolation matrix in the above equation is non-singular if the points \mathbf{x}_i are distinct, and thus the weights w can be solved for by simple linear algebra:

: \mathbf{w} = \mathbf{G}^{-1} \mathbf{b}
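
As a sketch of this interpolation step (a Gaussian basis and illustrative one-dimensional data are assumed), the weights follow from a standard linear solve:

```python
import numpy as np

# Illustrative one-dimensional data: function values b_i known at points x_i.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
b = np.sin(x)
beta = 2.0

# Interpolation matrix with g_ij = rho(||x_j - x_i||) for a Gaussian basis.
G = np.exp(-beta * (x[:, None] - x[None, :]) ** 2)

# Solve G w = b; G is non-singular because the points x_i are distinct.
w = np.linalg.solve(G, b)
print(np.allclose(G @ w, b))   # the interpolant reproduces the data exactly
```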

Function approximation

If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases: first the widths and centers are fixed, and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.

Training the basis function centers

Basis function centers can be randomly sampled among the input instances, obtained by the orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers.

The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
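
A common concrete recipe, sketched below, is to pick the centers by k-means clustering of the input exemplars and then set a single shared width from the maximum inter-center distance; the particular heuristic \beta = N / d_max^2 is a textbook convention assumed here, not one prescribed by this article.

```python
import numpy as np

def choose_centers_and_width(X, n_centers, n_iter=50, seed=0):
    """Select RBF centers by k-means on the inputs and one shared width.

    Returns (centers, beta), where beta = n_centers / d_max**2 uses the
    maximum inter-center distance d_max (a common heuristic, assumed here).
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    for _ in range(n_iter):                                  # Lloyd iterations
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    d_max = max(np.linalg.norm(ci - cj) for ci in centers for cj in centers)
    beta = n_centers / d_max ** 2
    return centers, beta

# Usage with illustrative two-dimensional inputs:
X = np.random.default_rng(1).random((200, 2))
centers, beta = choose_centers_and_width(X, n_centers=5)
```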

Pseudoinverse solution for the linear weights

After the centers c_i have been fixed, the weights that minimize the error at the output are computed with a linear pseudoinverse solution

: \mathbf{w} = \mathbf{G}^+ \mathbf{b},

where the entries of G are the values of the radial basis functions evaluated at the points x_i: g_{ji} = \rho(\| \mathbf{x}_j - \mathbf{c}_i \|).

The existence of this linear solution means that, unlike multi-layer perceptron (MLP) networks, RBF networks have a unique local minimum (when the centers are fixed).
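
A minimal sketch of the pseudoinverse step, assuming a Gaussian basis and fixed centers and width (names are illustrative):

```python
import numpy as np

def fit_output_weights(X, y, centers, beta):
    """Least-squares output weights w = G^+ b for fixed centers and width,
    with g_ji = exp(-beta * ||x_j - c_i||^2)."""
    sq_dist = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    G = np.exp(-beta * sq_dist)     # one row per point, one column per center
    return np.linalg.pinv(G) @ y    # equivalently: np.linalg.lstsq(G, y)[0]

# Usage, continuing the illustrative data and centers from the previous sketch:
# y = np.sin(X[:, 0]); w = fit_output_weights(X, y, centers, beta)
```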

Gradient descent training of the linear weights

Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite to the gradient of the objective function

: \mathbf{w}(t+1) = \mathbf{w}(t) - \nu \frac{d}{d\mathbf{w}} H_t(\mathbf{w})

where \nu is a "learning parameter."

For the case of training the linear weights, a_i, the algorithm becomes

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \rho\big(\left\| \mathbf{x}(t) - \mathbf{c}_i \right\|\big)

in the unnormalized case and

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] u\big(\left\| \mathbf{x}(t) - \mathbf{c}_i \right\|\big)

in the normalized case.

For local-linear architectures, gradient-descent training is

: e_{ij}(t+1) = e_{ij}(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] v_{ij}\big(\mathbf{x}(t) - \mathbf{c}_i\big)
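
The following sketch implements one sequential pass of the unnormalized linear-weight update above in NumPy (a Gaussian basis and illustrative names are assumed):

```python
import numpy as np

def gradient_descent_pass(X, y, centers, beta, nu=0.1, a=None):
    """One sequential pass of gradient descent on the linear weights a_i of an
    unnormalized Gaussian RBF network (least-squares objective)."""
    if a is None:
        a = np.zeros(len(centers))
    for x_t, y_t in zip(X, y):
        rho = np.exp(-beta * np.sum((centers - x_t) ** 2, axis=1))
        error = y_t - a @ rho        # y(t) - phi(x(t), w)
        a = a + nu * error * rho     # step opposite to the gradient of K_t
    return a
```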

Projection operator training of the linear weights

For the case of training the linear weights, a_i and e_{ij}, the algorithm becomes

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{\rho\big(\left\| \mathbf{x}(t) - \mathbf{c}_i \right\|\big)}{\sum_{j=1}^N \rho^2\big(\left\| \mathbf{x}(t) - \mathbf{c}_j \right\|\big)}

in the unnormalized case and

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{u\big(\left\| \mathbf{x}(t) - \mathbf{c}_i \right\|\big)}{\sum_{j=1}^N u^2\big(\left\| \mathbf{x}(t) - \mathbf{c}_j \right\|\big)}

in the normalized case and

: e_{ij}(t+1) = e_{ij}(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{v_{ij}\big(\mathbf{x}(t) - \mathbf{c}_i\big)}{\sum_{k=1}^N \sum_{l=1}^n v_{kl}^2\big(\mathbf{x}(t) - \mathbf{c}_k\big)}

in the local-linear case.

For one basis function, projection operator training reduces to Newton's method.

Examples

Logistic map

The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype chaotic time series. The map, in the fully chaotic regime, is given by

: x(t+1) \ \stackrel{\mathrm{def}}{=}\ f\left[ x(t) \right] = 4 x(t) \left[ 1 - x(t) \right]

where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.

Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate

: x(t+1) = f\left[ x(t) \right] \approx \varphi(t) = \varphi\left[ x(t) \right]

for f.
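
Generating the data stream for the forward problem takes only a few lines; the seed value in the sketch below is an arbitrary assumption.

```python
import numpy as np

def logistic_series(length, x0=0.3):
    """Iterate the fully chaotic logistic map x(t+1) = 4 x(t) (1 - x(t))."""
    x = np.empty(length)
    x[0] = x0
    for t in range(length - 1):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x

series = logistic_series(101)              # 100 (x(t), x(t+1)) exemplar pairs
inputs, targets = series[:-1], series[1:]
```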

Function approximation

Unnormalized radial basis functions

The architecture is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^N a_i \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)

where

: \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) = \exp\left[ -\beta \left\| \mathbf{x} - \mathbf{c}_i \right\|^2 \right] = \exp\left[ -\beta \left( x(t) - c_i \right)^2 \right].

Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N = 5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The width \beta is taken to be a constant equal to 5. The weights c_i are five exemplars from the time series. The weights a_i are trained with projection operator training:

: a_i(t+1) = a_i(t) + \nu \big[ x(t+1) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{\rho\big(\left\| \mathbf{x}(t) - \mathbf{c}_i \right\|\big)}{\sum_{j=1}^N \rho^2\big(\left\| \mathbf{x}(t) - \mathbf{c}_j \right\|\big)}

where the learning rate \nu is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15.
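
A compact reconstruction of this experiment is sketched below. The article does not specify the random seed or which five exemplars serve as centers, so those choices are assumptions, and the resulting rms error will differ somewhat from the 0.15 quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 exemplars of the chaotic logistic map (assumed seed value 0.3).
x = np.empty(101)
x[0] = 0.3
for t in range(100):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
inputs, targets = x[:-1], x[1:]

# Unnormalized Gaussian RBF: N = 5 centers drawn from the series, beta = 5.
centers = rng.choice(inputs, 5, replace=False)
beta, nu = 5.0, 0.3
a = np.zeros(5)

def phi(x_t, a):
    rho = np.exp(-beta * (x_t - centers) ** 2)
    return a @ rho, rho

# One pass of projection-operator training over the 100 training points.
for x_t, y_t in zip(inputs, targets):
    pred, rho = phi(x_t, a)
    a += nu * (y_t - pred) * rho / np.sum(rho ** 2)

preds = np.array([phi(x_t, a)[0] for x_t in inputs])
print("rms error:", np.sqrt(np.mean((targets - preds) ** 2)))
```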

Normalized radial basis functions

The normalized RBF architecture is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^N a_i \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)}{\sum_{i=1}^N \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)} = \sum_{i=1}^N a_i u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)

where

: u\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big)}{\sum_{j=1}^N \rho\big(\left\| \mathbf{x} - \mathbf{c}_j \right\|\big)}.

Again:

: \rho\big(\left\| \mathbf{x} - \mathbf{c}_i \right\|\big) = \exp\left[ -\beta \left\| \mathbf{x} - \mathbf{c}_i \right\|^2 \right] = \exp\left[ -\beta \left( x(t) - c_i \right)^2 \right].

Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The width \beta is taken to be a constant equal to 6. The weights c_i are five exemplars from the time series. The weights a_i are trained with projection operator training:

: a_i(t+1) = a_i(t) + \nu \big[ x(t+1) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{u\big(\left\| \mathbf{x}(t) - \mathbf{c}_i \right\|\big)}{\sum_{j=1}^N u^2\big(\left\| \mathbf{x}(t) - \mathbf{c}_j \right\|\big)}

where the learning rate \nu is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error: normalization yields an accuracy improvement. Typically, the accuracy advantage of normalized basis functions over unnormalized ones grows as the input dimensionality increases.

Time series prediction

Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:

: \varphi(0) = x(1)

: x(t) \approx \varphi(t-1)

: x(t+1) \approx \varphi(t) = \varphi[\varphi(t-1)].

A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.

Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series, a consequence of the sensitive dependence on initial conditions: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent.
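
Iterated prediction amounts to feeding the model's own output back into itself; a minimal sketch is given below, where phi stands for any fitted one-step model such as the one trained above (a hypothetical callable).

```python
def predict(phi, x0, steps):
    """Iterated prediction: start from the known x(0) and repeatedly apply the
    fitted one-step model, x_hat(t+1) = phi(x_hat(t))."""
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(phi(trajectory[-1]))
    return trajectory

# With the exact map as the model, the iteration reproduces the true series:
print(predict(lambda z: 4.0 * z * (1.0 - z), 0.3, 5))
```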

Control of a chaotic time series

We assume the output of the logistic map can be manipulated through a control parameter c[x(t), t] such that

: x(t+1) = 4 x(t) \left[ 1 - x(t) \right] + c[x(t), t].

The goal is to choose the control parameter in such a way as to drive the time series to a desired output d(t). This can be done if we choose the control parameter to be

: c[x(t), t] \ \stackrel{\mathrm{def}}{=}\ -\varphi[x(t)] + d(t+1)

where

: \varphi[x(t)] \approx f[x(t)] = x(t+1) - c[x(t), t]

is an approximation to the underlying natural dynamics of the system.

The learning algorithm is given by

: a_i(t+1) = a_i(t) + \nu \, \varepsilon \, \frac{u\big(\left\| \mathbf{x}(t) - \mathbf{c}_i \right\|\big)}{\sum_{j=1}^N u^2\big(\left\| \mathbf{x}(t) - \mathbf{c}_j \right\|\big)}

where

: \varepsilon \ \stackrel{\mathrm{def}}{=}\ f[x(t)] - \varphi[x(t)] = x(t+1) - c[x(t), t] - \varphi[x(t)] = x(t+1) - d(t+1).
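
Putting the pieces together, the control law above can be exercised in a short loop; in the sketch below, phi is a hypothetical callable standing in for the (online-trained) model of the natural dynamics, and the weight update is omitted for brevity.

```python
def control_step(x_t, phi, d_next):
    """Apply c[x(t), t] = -phi[x(t)] + d(t+1), then advance the map."""
    c = -phi(x_t) + d_next
    x_next = 4.0 * x_t * (1.0 - x_t) + c
    return x_next, c

# With a perfect model of the dynamics, the output tracks the target exactly.
x, d = 0.3, 0.8
for _ in range(3):
    x, _ = control_step(x, lambda z: 4.0 * z * (1.0 - z), d)
    print(x)                   # 0.8 from the first controlled step onward
```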

See also

* Predictive analytics
* Chaos theory

References

* J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281-294 (1989). Also see [http://www.ki.inf.tu-dresden.de/~fritzke/FuzzyPaper/node5.html Radial basis function networks according to Moody and Darken]
* T. Poggio and F. Girosi, "Networks for approximation and learning," Proc. IEEE 78(9), 1484-1487 (1990).
* Roger D. Jones, Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, "[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=137644 Function approximation and time series prediction with neural networks]," Proceedings of the International Joint Conference on Neural Networks, June 17-21, p. I-649 (1990).
* John R. Davies, Stephen V. Coggeshall, Roger D. Jones, and Daniel Schutzer, "Intelligent Security Systems," in Freedman, Roy S., Flein, Robert A., and Lederman, Jess (eds.), Artificial Intelligence in the Capital Markets, Chicago: Irwin, 1995. ISBN 1-55738-811-3.
* S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks," IEEE Transactions on Neural Networks, Vol. 2, No. 2 (March 1991).

