# Recursive least squares filter


The recursive least squares (RLS) algorithm is used in adaptive filters to find the filter coefficients that recursively minimize a weighted least squares cost (the sum of the absolute squared errors, where the error is the difference between the desired and the actual signal). This is in contrast to other algorithms, such as the least mean squares (LMS) filter, that aim to reduce the mean square error. The difference is that RLS filters depend on the signals themselves, whereas MSE filters depend on their statistics (specifically, the autocorrelation of the input and the cross-correlation of the input and desired signals). If these statistics are known, an MSE filter with fixed coefficients (i.e., independent of the incoming data) can be built.

## Motivation

Suppose that a signal $\mathbf{d}$ is transmitted over an echoey, noisy channel that causes it to be received as

:$x(n)=\sum_{k=0}^{q}b_{n}(k)\,d(n-k)+v(n)$

where $v(n)$ represents white noise. We will attempt to recover the desired signal $\mathbf{d}$ by use of an FIR filter, $\mathbf{w}$:

:$\hat{d}(n)=y(n)=\mathbf{w}_{n}^{T}\,\mathbf{x}(n)$

where $\mathbf{x}(n)=\left[x(n),\,x(n-1),\,\dots,\,x(n-p)\right]^{T}$ is the vector containing the $p+1$ most recent samples of the input.
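As a concrete sketch, the channel model and the FIR estimate above can be simulated numerically. The channel taps, signal length, noise level, and trial filter coefficients below are all illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical echo channel of order q = 2 with fixed taps b(k).
b = np.array([1.0, 0.5, 0.25])       # channel impulse response
d = rng.standard_normal(300)         # transmitted (desired) signal
v = 0.1 * rng.standard_normal(300)   # additive white noise

# Received signal: x(n) = sum_{k=0}^{q} b(k) d(n-k) + v(n)
x = np.convolve(d, b)[: len(d)] + v

# FIR estimate d_hat(n) = w^T [x(n), x(n-1), ..., x(n-p)] for some
# illustrative (not yet optimized) coefficients w.
p = 2
w = np.array([1.0, -0.5, 0.0])
d_hat = np.array([
    sum(w[k] * (x[n - k] if n - k >= 0 else 0.0) for k in range(p + 1))
    for n in range(len(x))
])
```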

Our goal is to estimate the parameters of the filter $\mathbf{w}$; at each time $n$ we refer to the new least squares estimate as $\mathbf{w}_{n}$. As time evolves, we would like to avoid completely redoing the least squares algorithm, and instead express the new estimate $\mathbf{w}_{n+1}$ in terms of $\mathbf{w}_{n}$.

The benefit of the RLS algorithm is that there is no need to invert matrices, thereby saving computational power. Another advantage is that it provides intuition behind such results as the Kalman filter.

## Discussion

The idea behind RLS filters is to minimize a cost function $C$ by appropriately selecting the filter coefficients $\mathbf{w}_{n}$, updating the filter as new data arrives. The filter operates in a negative feedback loop: the error signal $e(n)$, the difference between the desired signal $d(n)$ and the filter output, drives the coefficient updates.

The error implicitly depends on the filter coefficients through the estimate $\hat{d}(n)$:

:$e(n)=d(n)-\hat{d}(n)$

The weighted least squares error function $C$, the cost function we desire to minimize, being a function of $e(n)$, is therefore also dependent on the filter coefficients:

:$C(\mathbf{w}_{n})=\sum_{i=0}^{n}\lambda^{n-i}|e(i)|^{2}=\sum_{i=0}^{n}\lambda^{n-i}e(i)\,e^{*}(i)$

where $0<\lambda\le 1$ is the "forgetting factor", which gives exponentially less weight to older error samples.
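As a minimal sketch, this weighted cost can be evaluated directly from an error sequence (the function name and inputs are illustrative):

```python
import numpy as np

def rls_cost(e, lam):
    """Weighted least squares cost C(w_n) = sum_{i=0}^{n} lam^(n-i) |e(i)|^2.

    `e` holds the error samples e(0), ..., e(n); `lam` is the forgetting
    factor (0 < lam <= 1).
    """
    n = len(e) - 1
    weights = lam ** (n - np.arange(n + 1))  # lam^(n-i): heaviest on recent i
    return float(np.sum(weights * np.abs(e) ** 2))
```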

The cost function is minimized by taking the partial derivatives with respect to all entries $k$ of the coefficient vector $\mathbf{w}_{n}$ and setting the results to zero:

:$\frac{\partial C(\mathbf{w}_{n})}{\partial w^{*}_{n}(k)}=\sum_{i=0}^{n}\lambda^{n-i}e(i)\,\frac{\partial e^{*}(i)}{\partial w^{*}_{n}(k)}=-\sum_{i=0}^{n}\lambda^{n-i}e(i)\,x^{*}(i-k)=0$

Next, replace $e(i)$ with the definition of the error signal:

:$\sum_{i=0}^{n}\lambda^{n-i}\left[d(i)-\sum_{l=0}^{p}w_{n}(l)\,x(i-l)\right]x^{*}(i-k)=0$

Rearranging the equation yields

:$\sum_{l=0}^{p}w_{n}(l)\left[\sum_{i=0}^{n}\lambda^{n-i}\,x(i-l)\,x^{*}(i-k)\right]=\sum_{i=0}^{n}\lambda^{n-i}d(i)\,x^{*}(i-k)$

This form can be expressed in terms of matrices:

:$\mathbf{R}_{x}(n)\,\mathbf{w}_{n}=\mathbf{r}_{dx}(n)$

where $\mathbf{R}_{x}(n)$ is the weighted autocorrelation matrix for $x(n)$ and $\mathbf{r}_{dx}(n)$ is the weighted cross-correlation between $d(n)$ and $x(n)$. Based on this expression we find the coefficients which minimize the cost function as

:$\mathbf{w}_{n}=\mathbf{R}_{x}^{-1}(n)\,\mathbf{r}_{dx}(n)$

This is the main result of the discussion.
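A direct, non-recursive sketch of this result builds $\mathbf{R}_{x}(n)$ and $\mathbf{r}_{dx}(n)$ explicitly and solves the linear system; doing this at every time step is exactly the cost the recursive algorithm of the later section avoids. Real-valued signals are assumed here, so the conjugates drop out:

```python
import numpy as np

def batch_wls(x, d, p, lam):
    """Solve the normal equations R_x(n) w_n = r_dx(n) at the last sample.

    Builds the weighted autocorrelation matrix R_x and cross-correlation
    vector r_dx from scratch, so the cost is O(n p^2) plus an O(p^3) solve.
    """
    n = len(x)
    R = np.zeros((p + 1, p + 1))
    r = np.zeros(p + 1)
    for i in range(n):
        # Regressor [x(i), x(i-1), ..., x(i-p)], zero before the start.
        xi = np.array([x[i - k] if i - k >= 0 else 0.0 for k in range(p + 1)])
        wgt = lam ** (n - 1 - i)          # forgetting-factor weight
        R += wgt * np.outer(xi, xi)       # weighted autocorrelation
        r += wgt * d[i] * xi              # weighted cross-correlation
    return np.linalg.solve(R, r)
```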

## Choosing $\lambda$

The smaller $\lambda$ is, the smaller the contribution of previous samples to the cost. This makes the filter more sensitive to recent samples, which means more fluctuation in the filter coefficients. The $\lambda=1$ case is referred to as the "growing window RLS algorithm".
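A quick numerical illustration (with arbitrarily chosen values of $\lambda$) of how the forgetting factor controls the filter's memory:

```python
import numpy as np

# Weight lam**m given to an error sample m steps in the past, and the
# number of steps after which that weight falls below one half.
for lam in (0.99, 0.9, 0.5):
    half_life = np.log(0.5) / np.log(lam)
    print(f"lam={lam}: weight 10 steps back = {lam**10:.4f}, "
          f"half-life = {half_life:.1f} steps")
```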

## Recursive algorithm

The discussion resulted in a single equation to determine the coefficient vector which minimizes the cost function. In this section we want to derive a recursive solution of the form

:$\mathbf{w}_{n}=\mathbf{w}_{n-1}+\Delta\mathbf{w}_{n-1}$

where $\Delta\mathbf{w}_{n-1}$ is a correction factor at time $n-1$. We start the derivation of the recursive algorithm by expressing the cross-correlation $\mathbf{r}_{dx}(n)$ in terms of $\mathbf{r}_{dx}(n-1)$:

:$\mathbf{r}_{dx}(n)=\lambda\,\mathbf{r}_{dx}(n-1)+d(n)\,\mathbf{x}^{*}(n)$

The weighted autocorrelation matrix satisfies the analogous rank-one recursion

:$\mathbf{R}_{x}(n)=\lambda\,\mathbf{R}_{x}(n-1)+\mathbf{x}^{*}(n)\,\mathbf{x}^{T}(n)$

Applying the Woodbury matrix identity to this recursion, a recursion for the inverse $\mathbf{P}(n)=\mathbf{R}_{x}^{-1}(n)$ follows:

:$\mathbf{P}(n)=\lambda^{-1}\mathbf{P}(n-1)-\mathbf{g}(n)\,\mathbf{x}^{T}(n)\,\lambda^{-1}\mathbf{P}(n-1)$

where the gain vector is

:$\mathbf{g}(n)=\frac{\lambda^{-1}\mathbf{P}(n-1)\,\mathbf{x}^{*}(n)}{1+\lambda^{-1}\mathbf{x}^{T}(n)\,\mathbf{P}(n-1)\,\mathbf{x}^{*}(n)}$

Before we move on, it is necessary to bring $\mathbf{g}(n)$ into another form. Multiplying the recursion for $\mathbf{P}(n)$ by $\mathbf{x}^{*}(n)$ on the right shows that

:$\mathbf{g}(n)=\mathbf{P}(n)\,\mathbf{x}^{*}(n)$

Next we incorporate the recursive definition of $\mathbf{r}_{dx}(n)$ together with this alternate form of $\mathbf{g}(n)$ into $\mathbf{w}_{n}=\mathbf{P}(n)\,\mathbf{r}_{dx}(n)$ and get

:$\mathbf{w}_{n}=\mathbf{w}_{n-1}+\mathbf{g}(n)\left[d(n)-\mathbf{x}^{T}(n)\,\mathbf{w}_{n-1}\right]$

That is, the correction $\Delta\mathbf{w}_{n-1}$ is the gain vector scaled by the "a priori" error $\alpha(n)=d(n)-\mathbf{x}^{T}(n)\,\mathbf{w}_{n-1}$, the error computed with the previous coefficients.

Note that the recursion for $P$ follows a Riccati equation and thus draws parallels to the Kalman filter.
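Collecting the recursions for the gain vector, $\mathbf{P}(n)$, and $\mathbf{w}_{n}$ gives the following minimal sketch for real-valued signals. The initialization $\mathbf{P}(0)=\delta\mathbf{I}$ is a common practical choice rather than part of the derivation, and `delta` and `lam` are illustrative defaults:

```python
import numpy as np

def rls_filter(x, d, p, lam=0.99, delta=100.0):
    """Recursive least squares for a real-valued FIR filter with p+1 taps.

    P tracks the inverse weighted autocorrelation matrix; it is started
    at delta * I, a standard practical choice (delta is illustrative).
    """
    w = np.zeros(p + 1)
    P = delta * np.eye(p + 1)
    for n in range(len(x)):
        # Regressor [x(n), x(n-1), ..., x(n-p)], zero before the start.
        xn = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(p + 1)])
        Px = P @ xn                      # P(n-1) x(n)
        g = Px / (lam + xn @ Px)         # gain vector g(n)
        alpha = d[n] - xn @ w            # a priori error
        w = w + alpha * g                # coefficient update
        P = (P - np.outer(g, Px)) / lam  # Riccati-type update of P
    return w
```

For example, feeding the filter a known input and the output of a hypothetical two-tap system lets it identify that system's taps.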

## See also

*Least mean squares filter

