Linear prediction
Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.
In digital signal processing, linear prediction is often called linear predictive coding (LPC) and can thus be viewed as a subset of filter theory. In system analysis (a subfield of mathematics), linear prediction can be viewed as a part of mathematical modelling or optimization.

The prediction model
The most common representation is

$\hat{x}(n) = \sum_{i=1}^{p} a_i x(n-i),$

where $\hat{x}(n)$ is the predicted signal value, $x(n-i)$ the previous observed values, and $a_i$ the predictor coefficients. The error generated by this estimate is

$e(n) = x(n) - \hat{x}(n),$

where $x(n)$ is the true signal value.
These equations are valid for all types of (one-dimensional) linear prediction. The differences are found in the way the parameters are chosen.
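As a minimal illustration of these two equations, the sketch below evaluates the predictor and the error for a single time index; the signal values, the order p = 3 and the coefficients are made-up numbers for demonstration, not anything prescribed by the text.

```python
import numpy as np

def predict(x, a, n):
    """One-step linear prediction x_hat(n) = sum_i a_i * x(n - i).

    x : 1-D signal, a : predictor coefficients a_1..a_p (a[0] holds a_1).
    Assumes n >= len(a) so all required past samples exist.
    """
    p = len(a)
    past = x[n - p:n][::-1]          # x(n-1), x(n-2), ..., x(n-p)
    return np.dot(a, past)

# Arbitrary demonstration values (not from the text).
x = np.array([1.0, 0.9, 0.7, 0.4, 0.1, -0.2])
a = np.array([1.5, -0.8, 0.2])       # hypothetical a_1, a_2, a_3

n = 5
x_hat = predict(x, a, n)
e = x[n] - x_hat                      # prediction error e(n) = x(n) - x_hat(n)
print(x_hat, e)
```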
For multi-dimensional signals the error metric is often defined as

$e(n) = \| x(n) - \hat{x}(n) \|,$

where $\| \cdot \|$ is a suitably chosen vector norm.
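As a small illustration (with arbitrary, made-up sample vectors), the Euclidean norm is one common choice for this metric:

```python
import numpy as np

x_n = np.array([1.0, 2.0, -0.5])       # true multi-dimensional sample (illustrative)
x_hat_n = np.array([0.8, 2.1, -0.4])   # predicted sample (illustrative)

e_n = np.linalg.norm(x_n - x_hat_n)    # Euclidean (L2) norm of the residual
print(e_n)
```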
Estimating the parameters
The most common choice in optimization of parameters $a_i$ is the root mean square criterion, which is also called the autocorrelation criterion. In this method we minimize the expected value of the squared error $E[e^2(n)]$, which yields the equation

$\sum_{i=1}^{p} a_i R(j - i) = R(j)$

for $1 \le j \le p$, where $R$ is the autocorrelation of the signal $x_n$, defined as

$R(i) = E\{x(n)\,x(n-i)\},$

where $E$ is the expected value. In the multi-dimensional case this corresponds to minimizing the L2 norm.
The above equations are called the normal equations or Yule–Walker equations. In matrix form the equations can be equivalently written as

$\mathbf{R}\mathbf{a} = \mathbf{r},$

where the autocorrelation matrix $\mathbf{R}$ is a symmetric, $p \times p$ Toeplitz matrix with elements $r_{i,j} = R(i - j)$, the vector $\mathbf{r}$ is the autocorrelation vector $r_j = R(j)$, and the vector $\mathbf{a}$ is the parameter vector.
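To make the procedure concrete, the following sketch estimates the autocorrelation from a finite signal with a biased sample estimate, assembles the Toeplitz matrix and solves $\mathbf{R}\mathbf{a} = \mathbf{r}$ with a generic linear solver; the AR(2) test signal, its coefficients and the order p = 2 are arbitrary choices for illustration, not values from the text.

```python
import numpy as np
from scipy.linalg import toeplitz

def autocorr(x, max_lag):
    """Biased sample autocorrelation R(0..max_lag) of a 1-D signal."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

# Illustrative test signal: an AR(2) process driven by white noise.
rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)
x = np.zeros_like(w)
for n in range(2, len(x)):
    x[n] = 1.5 * x[n - 1] - 0.8 * x[n - 2] + w[n]

p = 2
R = autocorr(x, p)            # R(0), R(1), ..., R(p)
Ra = toeplitz(R[:p])          # p x p symmetric Toeplitz matrix, r_{i,j} = R(i - j)
r = R[1:p + 1]                # right-hand side r_j = R(j)
a = np.linalg.solve(Ra, r)    # predictor coefficients a_1 ... a_p
print(a)                      # close to the generating coefficients [1.5, -0.8]
```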
Another, more general, approach is to minimize the sum of squares of the errors

$e(n) = x(n) - \hat{x}(n) = x(n) - \sum_{i=1}^{p} a_i x(n-i) = -\sum_{i=0}^{p} a_i x(n-i),$

where we usually constrain the parameters with $a_0 = -1$ to avoid the trivial solution. This constraint yields the same predictor as above, but the normal equations are then

$\mathbf{R}\mathbf{a} = [-\sigma^2, 0, \dots, 0]^{\mathrm{T}},$

where the index $i$ of the parameters $a_i$ now ranges from 0 to $p$, $\mathbf{R}$ is a $(p + 1) \times (p + 1)$ autocorrelation matrix, and $\sigma^2 = E[e^2(n)]$ is the prediction error power.
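A consequence of this augmented form (under the conventions above) is that one can solve $\mathbf{R}\mathbf{b} = [1, 0, \dots, 0]^{\mathrm{T}}$ and rescale so that the first component becomes $-1$; the rescaled vector contains the same predictor coefficients, and $1/b_0$ is the error power. The short numerical check below uses made-up autocorrelation values, not values from the text.

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative autocorrelation values R(0), R(1), R(2); p = 2 here.
R = np.array([4.9, 4.2, 2.9])
p = len(R) - 1

# Standard p x p normal equations R a = r.
a = np.linalg.solve(toeplitz(R[:p]), R[1:])

# Augmented (p+1) x (p+1) form: solve R_full b = [1, 0, ..., 0]^T and rescale
# so that the first component is a_0 = -1.
R_full = toeplitz(R)
rhs = np.zeros(p + 1)
rhs[0] = 1.0
b = np.linalg.solve(R_full, rhs)

sigma2 = 1.0 / b[0]           # prediction error power E[e^2(n)]
a_aug = -b / b[0]             # [a_0, a_1, ..., a_p] with a_0 = -1

print(a, a_aug[1:], sigma2)   # a and a_aug[1:] agree
```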
Optimisation of the parameters is a wide topic and a large number of other approaches have been proposed.
Still, the autocorrelation method is the most common and it is used, for example, for speech coding in the GSM standard.

Solution of the matrix equation $\mathbf{R}\mathbf{a} = \mathbf{r}$ is computationally a relatively expensive process. The Gauss algorithm for matrix inversion is probably the oldest solution, but this approach does not efficiently use the symmetry of $\mathbf{R}$ and $\mathbf{r}$. A faster algorithm is the Levinson recursion proposed by Norman Levinson in 1947, which recursively calculates the solution. Later, Delsarte et al. proposed an improvement to this algorithm called the split Levinson recursion, which requires about half the number of multiplications and divisions. It uses a special symmetrical property of parameter vectors on subsequent recursion levels.
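The sketch below is a standard Levinson–Durbin recursion (the single-channel special case of the Levinson recursion), written from the textbook description of the algorithm rather than from any specific reference below; it solves the Yule–Walker system in O(p^2) operations instead of the O(p^3) of a general-purpose solver.

```python
import numpy as np

def levinson_durbin(R, p):
    """Solve the Yule-Walker equations sum_i a_i R(j-i) = R(j), j = 1..p,
    given the autocorrelation sequence R = [R(0), ..., R(p)].

    Returns the predictor coefficients a_1..a_p and the final prediction
    error power, using O(p^2) operations.
    """
    a = np.zeros(p + 1)              # a[1..m] holds the current coefficients
    err = R[0]                       # zeroth-order prediction error power
    for m in range(1, p + 1):
        # Reflection coefficient k_m from the current residual correlation.
        acc = R[m] - np.dot(a[1:m], R[m - 1:0:-1])
        k = acc / err
        # Order update of the coefficient vector.
        a_new = a.copy()
        a_new[m] = k
        a_new[1:m] = a[1:m] - k * a[m - 1:0:-1]
        a = a_new
        err *= (1.0 - k * k)         # updated prediction error power
    return a[1:], err

# Illustrative autocorrelation values (same as the earlier sketch).
R = np.array([4.9, 4.2, 2.9])
a, err = levinson_durbin(R, 2)
print(a, err)                        # matches a direct solve of the Toeplitz system
```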
References

Original
* G. U. Yule. On a method of investigating periodicities in disturbed series, with special reference to Wolfer's sunspot numbers. Phil. Trans. Roy. Soc., 226-A:267–298, 1927.
Overview
* J. Makhoul. Linear prediction: A tutorial review. Proceedings of the IEEE, 63 (5):561–580, April 1975.
* M. H. Hayes. Statistical Digital Signal Processing and Modeling. J. Wiley & Sons, Inc., New York, 1996.

External links
* [http://labrosa.ee.columbia.edu/matlab/rastamat/ PLP and RASTA (and MFCC, and inversion) in Matlab]
See also
* Forecasting
* Prediction interval
* Deconvolution