# Linear least squares

Linear least squares is an important computational problem that arises primarily in applications where a linear mathematical model must be fitted to measurements obtained from experiments. The goals of linear least squares are to extract predictions from the measurements and to reduce the effect of measurement errors. Mathematically, it can be stated as the problem of finding an approximate solution to an overdetermined system of linear equations.

Linear least squares problems admit a closed-form solution, in contrast to non-linear least squares problems, which often must be solved by an iterative procedure.

## Motivational example

As a result of an experiment, four $(x, y)$ data points were obtained: $(1, 6),$ $(2, 5),$ $(3, 7),$ and $(4, 10).$ It is desired to find a line $y = \alpha + \beta x$ that fits these four points "best". In other words, we would like to find the numbers $\alpha$ and $\beta$ that approximately solve the overdetermined linear system

$\alpha + 1\beta = 6$

$\alpha + 2\beta = 5$

$\alpha + 3\beta = 7$

$\alpha + 4\beta = 10$

of four equations in two unknowns in some "best" sense.

The least squares approach to solving this problem is to make the sum of the squares of the "errors" between the right- and left-hand sides of these equations as small as possible, that is, to find the minimum of the function

$S(\alpha, \beta) = \left[6 - (\alpha + 1\beta)\right]^2 + \left[5 - (\alpha + 2\beta)\right]^2 + \left[7 - (\alpha + 3\beta)\right]^2 + \left[10 - (\alpha + 4\beta)\right]^2.$

The minimum is determined by calculating the partial derivatives of $S(\alpha, \beta)$ with respect to $\alpha$ and $\beta$ and setting them to zero. This results in a system of two equations in two unknowns, which, when solved, gives the solution

$\alpha = 3.5, \qquad \beta = 1.4,$

and the equation $y = 3.5 + 1.4x$ of the line of best fit. The residuals, that is, the discrepancies between the $y$ values from the experiment and the $y$ values calculated using the line of best fit, are then found to be $1.1,$ $-1.3,$ $-0.7,$ and $0.9.$ The minimum value of the sum of squares is $S(3.5, 1.4) = 1.1^2 + (-1.3)^2 + (-0.7)^2 + 0.9^2 = 4.2.$
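
The arithmetic above is easy to reproduce numerically. The following short Python sketch (an illustration added to this text, assuming NumPy is available) sets up the four data points and recovers the same coefficients and residuals:

```python
import numpy as np

# Data points from the example: (1, 6), (2, 5), (3, 7), (4, 10)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])

# Design matrix for the model y = alpha + beta*x: a column of ones and a column of x
X = np.column_stack([np.ones_like(x), x])

# Solve the overdetermined system X @ [alpha, beta] ~= y in the least-squares sense
(alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - (alpha + beta * x)

print(alpha, beta)           # 3.5 1.4
print(residuals)             # [ 1.1 -1.3 -0.7  0.9]
print(np.sum(residuals**2))  # approximately 4.2
```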

## The general problem

Consider an overdetermined system

$\sum_{j=1}^{n} X_{ij}\beta_j = y_i, \qquad i = 1, 2, \dots, m,$

of $m$ linear equations in $n$ unknowns, $\beta_1, \beta_2, \dots, \beta_n,$ with $m > n,$ written in matrix form as

$\mathbf{X\boldsymbol\beta = y}.$

Such a system usually has no exact solution, and the goal is then to find the coefficients $\boldsymbol\beta$ which fit the equations "best", in the sense of minimizing the sum of squares of the differences between the right- and left-hand sides of the equations. The justification for choosing this criterion is given in the section on properties below.

The linear least squares problem has a unique solution, provided that the $n$ columns of the matrix $\mathbf{X}$ are linearly independent. The solution is obtained by solving the normal equations

$\mathbf{\left(X^TX\right)\hat{\boldsymbol\beta} = X^Ty}.$

## Uses in data fitting

The primary application of linear least squares is in data fitting. Given a set of $m$ data points $y_1, y_2, \dots, y_m,$ consisting of experimentally measured values taken at $m$ values $x_1, x_2, \dots, x_m$ of an independent variable ($x_i$ may be scalar or vector quantities), and given a model function $y = f(x, \boldsymbol\beta),$ with $\boldsymbol\beta = (\beta_1, \beta_2, \dots, \beta_n),$ it is desired to find the parameters $\beta_j$ such that the model function fits the data best. In linear least squares the model function is assumed to be linear in the parameters $\beta_j,$ so

$f(x, \boldsymbol\beta) = \sum_{j=1}^{n} \beta_j \phi_j(x).$

Here, the functions $\phi_j$ may be nonlinear in the variable $x$.

Ideally, the model function fits the data exactly, so

$y_i = f(x_i, \boldsymbol\beta)$

for all $i = 1, 2, \dots, m.$ This is usually not possible in practice, as there are more data points than there are parameters to be determined. The approach chosen then is to find the minimal possible value of the sum of squares of the residuals

$r_i(\boldsymbol\beta) = y_i - f(x_i, \boldsymbol\beta), \qquad i = 1, 2, \dots, m,$

that is, to minimize the function

$S(\boldsymbol\beta) = \sum_{i=1}^{m} r_i^2(\boldsymbol\beta).$

After substituting for the residuals, the problem reduces to the overdetermined linear system considered above, with $X_{ij} = \phi_j(x_i).$
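
To make the correspondence $X_{ij} = \phi_j(x_i)$ concrete, here is a small Python sketch (added for illustration, not part of the original text) that builds the design matrix for a quadratic model $f(x, \boldsymbol\beta) = \beta_1 + \beta_2 x + \beta_3 x^2$, whose basis functions are $\phi_1(x) = 1$, $\phi_2(x) = x$, $\phi_3(x) = x^2$; the data values are arbitrary example numbers:

```python
import numpy as np

# Basis functions phi_j(x); the model is linear in beta even though phi_3 is nonlinear in x
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # values of the independent variable
y = np.array([1.0, 2.2, 5.1, 9.8, 17.3])  # measured values (example numbers)

# Design matrix with X[i, j] = phi_j(x_i)
X = np.column_stack([phi(x) for phi in basis])

# Least-squares estimate of the parameters beta
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
```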

## Derivation of the normal equations

$S$ is minimized when its gradient with respect to each parameter is equal to zero. The elements of the gradient vector are the partial derivatives of $S$ with respect to the parameters:

$\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^{m} r_i \frac{\partial r_i}{\partial \beta_j}, \qquad j = 1, 2, \dots, n.$

Since $r_i = y_i - \sum_{k=1}^{n} X_{ik}\beta_k$, the derivatives are

$\frac{\partial r_i}{\partial \beta_j} = -X_{ij}.$

Substitution of the expressions for the residuals and the derivatives into the gradient equations gives

$-2\sum_{i=1}^{m} X_{ij}\left(y_i - \sum_{k=1}^{n} X_{ik}\hat\beta_k\right) = 0, \qquad j = 1, 2, \dots, n.$

Upon rearrangement, the normal equations

$\sum_{i=1}^{m}\sum_{k=1}^{n} X_{ij}X_{ik}\hat\beta_k = \sum_{i=1}^{m} X_{ij}y_i, \qquad j = 1, 2, \dots, n,$

are obtained. The normal equations are written in matrix notation as

$\mathbf{\left(X^TX\right)\hat{\boldsymbol\beta} = X^Ty}.$

The solution of the normal equations yields the vector $\hat{\boldsymbol\beta}$ of the optimal parameter values.
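
The derivation can be checked numerically. In the sketch below (added for illustration, using NumPy and arbitrary example data), the normal equations are assembled and solved explicitly, and the condition that the gradient vanishes at the minimum, $\mathbf{X^T\left(y - X\hat{\boldsymbol\beta}\right) = 0}$, is verified:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 3
X = rng.normal(size=(m, n))   # example design matrix
y = rng.normal(size=m)        # example observations

# Assemble and solve the normal equations (X^T X) beta_hat = X^T y
XtX = X.T @ X
Xty = X.T @ y
beta_hat = np.linalg.solve(XtX, Xty)

# At the minimum the residual vector is orthogonal to the columns of X
residual = y - X @ beta_hat
print(X.T @ residual)         # numerically close to the zero vector
```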

## Inverting the normal equations

Although the algebraic solution of the normal equations can be written as

$\mathbf{\hat{\boldsymbol\beta} = \left(X^TX\right)^{-1}X^Ty},$

it is not good practice to invert the normal equations matrix. An exception occurs in numerical smoothing and differentiation, where an analytical expression is required.

If the matrix $\mathbf{X^TX}$ is well-conditioned and positive definite, that is, it has full rank, the normal equations can be solved directly by using the Cholesky decomposition $\mathbf{X^TX = R^TR}$, where $\mathbf{R}$ is an upper triangular matrix, giving

$\mathbf{R^TR\hat{\boldsymbol\beta} = X^Ty}.$

The solution is obtained in two stages: a forward substitution, $\mathbf{R^Tz = X^Ty}$, followed by a backward substitution, $\mathbf{R\hat{\boldsymbol\beta} = z}$. Both substitutions are facilitated by the triangular nature of $\mathbf{R}$.
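
A sketch of the two-stage Cholesky solution in Python is shown below (an added illustration, assuming SciPy is available; `scipy.linalg.cholesky` with `lower=False` returns the upper triangular factor $\mathbf{R}$ with $\mathbf{X^TX = R^TR}$, and the data are arbitrary example values):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))   # example design matrix with full column rank
y = rng.normal(size=30)        # example observations

XtX = X.T @ X
Xty = X.T @ y

R = cholesky(XtX, lower=False)             # upper triangular R with X^T X = R^T R

z = solve_triangular(R.T, Xty, lower=True) # forward substitution: R^T z = X^T y
beta_hat = solve_triangular(R, z, lower=False)  # backward substitution: R beta_hat = z

assert np.allclose(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0])
```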

See example of linear regression for a worked-out numerical example with three parameters.

## Orthogonal decomposition methods

Orthogonal decomposition methods of solving the least squares problem are slower than the normal equations method but are more numerically stable.

The extra stability results from not having to form the product $\mathbf{X^TX}$. The residuals are written in matrix notation as

$\mathbf{r = y - X\boldsymbol\beta}.$

The matrix $\mathbf{X}$ is subjected to an orthogonal decomposition; the QR decomposition will serve to illustrate the process:

$\mathbf{X = QR},$

where $\mathbf{Q}$ is an $m \times m$ orthogonal matrix and $\mathbf{R}$ is an $m \times n$ matrix partitioned into an $n \times n$ upper triangular block, $\mathbf{R}_n$, and an $(m-n) \times n$ zero block:

$\mathbf{R} = \begin{bmatrix} \mathbf{R}_n \\ \mathbf{0} \end{bmatrix}.$

The residual vector is left-multiplied by $\mathbf{Q^T}$:

$\mathbf{Q^Tr = Q^Ty - R\boldsymbol\beta} = \begin{bmatrix} \left(\mathbf{Q^Ty}\right)_n - \mathbf{R}_n\boldsymbol\beta \\ \left(\mathbf{Q^Ty}\right)_{m-n} \end{bmatrix} = \begin{bmatrix} \mathbf{U} \\ \mathbf{L} \end{bmatrix}.$

The sum of squares of the transformed residuals, $S = \mathbf{r^TQQ^Tr}$, is the same as before, $S = \mathbf{r^Tr}$, because $\mathbf{Q}$ is orthogonal:

$S = \mathbf{U^TU + L^TL}.$

The minimum value of $S$ is attained when the upper block, $\mathbf{U}$, is zero. Therefore the parameters are found by solving

$\mathbf{R}_n\hat{\boldsymbol\beta} = \left(\mathbf{Q^Ty}\right)_n.$

These equations are easily solved as $\mathbf{R}_n$ is upper triangular.
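
The same steps can be expressed with NumPy's QR routine in the sketch below (added for illustration; `np.linalg.qr` in its default "reduced" mode returns the first $n$ columns of $\mathbf{Q}$ together with the $n \times n$ block $\mathbf{R}_n$, which is all that is needed to form $\mathbf{R}_n\hat{\boldsymbol\beta} = \left(\mathbf{Q^Ty}\right)_n$; the data are arbitrary example values):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))   # example design matrix
y = rng.normal(size=50)        # example observations

# Reduced QR decomposition: Q_n has orthonormal columns, R_n is n x n upper triangular
Q_n, R_n = np.linalg.qr(X)

# Solve R_n beta_hat = Q_n^T y by back-substitution; X^T X is never formed
beta_hat = solve_triangular(R_n, Q_n.T @ y, lower=False)

assert np.allclose(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0])
```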

An alternative decomposition of $\mathbf{X}$ is the singular value decomposition (SVD) [C.L. Lawson and R.J. Hanson, Solving Least Squares Problems, Prentice-Hall, 1974]:

$\mathbf{X = U\Sigma V^*}.$

This is effectively another kind of orthogonal decomposition, as both $\mathbf{U}$ and $\mathbf{V}$ are orthogonal. This method is the most computationally intensive, but is particularly useful if the normal equations matrix, $\mathbf{X^TX}$, is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured with the truncated SVD approach, which gives a more stable and exact answer by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.
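
A minimal sketch of the truncated SVD approach follows (an added illustration; the cut-off `tol` is an assumed threshold chosen relative to the largest singular value, and the nearly duplicated column is constructed only to make $\mathbf{X^TX}$ ill-conditioned):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 5))
# Make the design matrix nearly rank-deficient by almost duplicating a column
X = np.column_stack([A, A[:, 0] + 1e-10 * rng.normal(size=40)])
y = rng.normal(size=40)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

tol = 1e-8 * s[0]          # assumed threshold: drop singular values below tol
keep = s > tol

# Truncated pseudo-inverse solution: small singular values are set to zero (ignored)
beta_hat = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

# np.linalg.pinv with rcond applies the same truncation
assert np.allclose(beta_hat, np.linalg.pinv(X, rcond=1e-8) @ y)
```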

## Properties of the least-squares estimators

The gradient equations at the minimum can be written as

$\mathbf{X^T\left(y - X\hat{\boldsymbol\beta}\right) = 0}.$

A geometrical interpretation of these equations is that the vector of residuals, $\mathbf{y - X\hat{\boldsymbol\beta}}$, is orthogonal to the column space of $\mathbf{X}$, since the dot product $\left(\mathbf{y - X\hat{\boldsymbol\beta}}\right)\cdot\mathbf{Xv}$ is equal to zero for any conformal vector, $\mathbf{v}$. This means that $\mathbf{y - X\hat{\boldsymbol\beta}}$ is the shortest of all possible vectors $\mathbf{y - X\boldsymbol\beta}$, that is, the variance of the residuals is the minimum possible.

If the experimental errors, $\epsilon$, are uncorrelated, have a mean of zero and a constant variance, $\sigma^2$, the Gauss-Markov theorem states that the least-squares estimator, $\hat{\boldsymbol\beta}$, has the minimum variance of all unbiased estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a normal distribution.

For example, it is easy to show that the arithmetic mean of a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss-Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.
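
The claim about the arithmetic mean is easy to verify: fitting the one-parameter constant model $f(x, \beta) = \beta$ by least squares reproduces the mean of the measurements. The Python sketch below (added for illustration, with assumed example measurements) does exactly that:

```python
import numpy as np

# Repeated measurements of a single quantity (example values)
y = np.array([10.1, 9.8, 10.3, 10.0, 9.9])

# Constant model: the design matrix is a single column of ones
X = np.ones((y.size, 1))

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.isclose(beta_hat[0], y.mean())  # least-squares estimate equals the arithmetic mean
```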

However, in the case that the experimental errors do belong to a normal distribution, the least-squares estimator is also a maximum likelihood estimator. [H. Margenau and G.M. Murphy, The Mathematics of Physics and Chemistry, Van Nostrand, 1943, 1956]

These properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid.

## Limitations

An assumption underlying the treatment given above is that the independent variable, $x$, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case, total least squares, also known as the "errors-in-variables model" or "rigorous least squares", should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure. [P. Gans, Data Fitting in the Chemical Sciences, Wiley, 1992] [W.E. Deming, Statistical Adjustment of Data, Wiley, 1943]

In some cases the (weighted) normal equations matrix $\mathbf{X^TX}$ is ill-conditioned; this occurs when the measurements have only a marginal effect on one or more of the estimated parameters. [When fitting polynomials the normal equations matrix is a Vandermonde matrix; Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases.] In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate. Various regularization techniques can be applied in such cases, the most common of which is called Tikhonov regularization. If further information about the parameters is known, for example, a range of possible values of $x$, then minimax techniques can also be used to increase the stability of the solution.
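
As a rough sketch of the most common remedy mentioned above (an added illustration, not part of the original text), Tikhonov (ridge) regularization replaces the normal equations by $\left(\mathbf{X^TX} + \lambda\mathbf{I}\right)\hat{\boldsymbol\beta} = \mathbf{X^Ty}$ for some chosen $\lambda > 0$; the value of `lam` and the example data below are arbitrary assumptions:

```python
import numpy as np

def ridge_solve(X, y, lam):
    """Tikhonov-regularized least squares: minimize ||y - X b||^2 + lam * ||b||^2."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, size=30)
# Polynomial design matrix: a Vandermonde matrix, increasingly ill-conditioned with degree
X = np.vander(x, N=8, increasing=True)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=30)

beta_plain = np.linalg.solve(X.T @ X, X.T @ y)  # may be dominated by amplified noise
beta_ridge = ridge_solve(X, y, lam=1e-6)        # damped, more stable estimate
print(np.linalg.norm(beta_plain), np.linalg.norm(beta_ridge))
```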

Another drawback of the least squares estimator is the fact that the norm of the residuals, $\|\mathbf{y - X\hat{\boldsymbol\beta}}\|$, is minimized, whereas in some cases one is truly interested in obtaining a small error in the parameter $\hat{\boldsymbol\beta}$, e.g., a small value of $\|\boldsymbol\beta - \hat{\boldsymbol\beta}\|$. However, since the true parameter $\boldsymbol\beta$ is unknown, this quantity cannot be directly minimized. If a prior probability on $\boldsymbol\beta$ is known, then a Bayes estimator can be used to minimize the mean squared error, $E\left\{\|\boldsymbol\beta - \hat{\boldsymbol\beta}\|^2\right\}$. The least squares method is often applied when no prior is known. Surprisingly, however, better estimators can be constructed, an effect known as Stein's phenomenon. For example, if the measurement error is Gaussian, several estimators are known which dominate, or outperform, the least squares technique; the best known of these is the James-Stein estimator.

## Weighted linear least squares

When the observations are not equally reliable, a weighted sum of squares

$S = \sum_{i=1}^{m} W_{ii} r_i^2$

may be minimized.

Each element of the diagonal weight matrix, $\mathbf{W}$, should ideally be equal to the reciprocal of the variance of the corresponding measurement. [This implies that the observations are uncorrelated. If the observations are correlated, the expression $S = \sum_k \sum_j r_k W_{kj} r_j$ applies. In this case the weight matrix should ideally be equal to the inverse of the variance-covariance matrix of the observations.] The normal equations are then

$\mathbf{\left(X^TWX\right)\hat{\boldsymbol\beta} = X^TWy}.$
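
A minimal sketch of the weighted solve is given below (added for illustration; the per-point standard deviations `sigma` are assumed example values, and the weights are their reciprocal variances):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])
sigma = np.array([0.5, 0.5, 1.0, 2.0])  # assumed standard deviation of each observation

X = np.column_stack([np.ones_like(x), x])
W = np.diag(1.0 / sigma**2)             # diagonal weight matrix, W_ii = 1 / var(y_i)

# Weighted normal equations: (X^T W X) beta_hat = X^T W y
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_hat)
```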

## Parameter errors, correlation and confidence limits

The parameter values are linear combinations of the observed values:

$\mathbf{\hat{\boldsymbol\beta} = \left(X^TWX\right)^{-1}X^TWy}.$

Therefore an expression for the errors on the parameters can be obtained by error propagation from the errors on the observations. Let the variance-covariance matrix for the observations be denoted by $\mathbf{M}$ and that of the parameters by $\mathbf{M}^\beta$. Then

$\mathbf{M}^\beta = \mathbf{\left(X^TWX\right)^{-1}X^TW M WX\left(X^TWX\right)^{-1}}.$

When $\mathbf{W = M^{-1}}$, this simplifies to

$\mathbf{M}^\beta = \mathbf{\left(X^TWX\right)^{-1}}.$

When unit weights are used ($\mathbf{W = I}$), it is implied that the experimental errors are uncorrelated and all equal: $\mathbf{M} = \sigma^2\mathbf{I}$, where $\sigma^2$ is known as the variance of an observation of unit weight, and $\mathbf{I}$ is an identity matrix. In this case $\sigma^2$ is approximated by $\frac{S}{m-n}$, where $S$ is the minimum value of the objective function:

$\mathbf{M}^\beta = \frac{S}{m-n}\mathbf{\left(X^TX\right)^{-1}}.$

In all cases, the variance of the parameter $\beta_j$ is given by $M^\beta_{jj}$ and the covariance between parameters $\beta_j$ and $\beta_k$ is given by $M^\beta_{jk}$. Standard deviation is the square root of variance, $\sigma_{\beta_j} = \sqrt{M^\beta_{jj}}$, and the correlation coefficient is given by $\rho_{jk} = M^\beta_{jk} / (\sigma_{\beta_j}\sigma_{\beta_k})$. These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors which, by definition, cannot be quantified. Note that even though the observations may be uncorrelated, the parameters are in general correlated.
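
These expressions translate almost line-for-line into the following sketch (added for illustration, using unit weights and the same four-point data as the motivational example):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])
X = np.column_stack([np.ones_like(x), x])
m, n = X.shape

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
S = np.sum((y - X @ beta_hat) ** 2)              # minimum value of the objective function

sigma2 = S / (m - n)                             # variance of an observation of unit weight
M_beta = sigma2 * np.linalg.inv(X.T @ X)         # variance-covariance matrix of the parameters

std_err = np.sqrt(np.diag(M_beta))               # standard deviations of the parameters
corr = M_beta[0, 1] / (std_err[0] * std_err[1])  # correlation between the two parameters
print(beta_hat, std_err, corr)
```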

It is often "assumed", for want of any concrete evidence, that the error on a parameter belongs to a normal distribution with a mean of zero and standard deviation $\sigma$. Under that assumption the following confidence limits can be derived:

68% confidence limits: $\hat\beta_j \pm \sigma_{\beta_j}$

95% confidence limits: $\hat\beta_j \pm 2\sigma_{\beta_j}$

99% confidence limits: $\hat\beta_j \pm 2.6\sigma_{\beta_j}$

The assumption is not unreasonable when $m \gg n$. If the experimental errors are normally distributed, the parameters will belong to a Student's t-distribution with $m-n$ degrees of freedom. When $m \gg n$ the Student's t-distribution approximates a normal distribution. Note, however, that these confidence limits cannot take systematic error into account. Also, parameter errors should be quoted to one significant figure only, as they are subject to sampling error. [J. Mandel, The Statistical Analysis of Experimental Data, Interscience, 1964]

When the number of observations is relatively small, Chebyshev's inequality can be used for an upper bound on probabilities, regardless of any assumptions about the distribution of experimental errors: the maximum probabilities that a parameter will be more than 1, 2 or 3 standard deviations away from its expectation value are 100%, 25% and 11% respectively.

## Residual values and correlation

The residuals are related to the observations by

$\mathbf{\hat r = y - X\hat{\boldsymbol\beta} = y - X\left(X^TWX\right)^{-1}X^TWy}.$

The idempotent matrix $\mathbf{H = X\left(X^TWX\right)^{-1}X^TW}$ is known in the statistics literature as the hat matrix. Thus,

$\mathbf{\hat r = \left(I - H\right)y},$

where $\mathbf{I}$ is an identity matrix. The variance-covariance matrix of the residuals, $\mathbf{M}^r$, is given by

$\mathbf{M}^r = \mathbf{\left(I - H\right)M\left(I - H\right)^T}.$

This shows that even though the observations may be uncorrelated, the residuals are always correlated.

The sum of residual values is equal to zero whenever the model function contains a constant term. Left-multiplying the expression for the residuals by $\mathbf{X^T}$ and using the normal equations (with unit weights) gives

$\mathbf{X^T\hat r = X^Ty - \left(X^TX\right)\hat{\boldsymbol\beta} = 0}.$

Say, for example, that the first term of the model is a constant, so that $X_{i1} = 1$ for all $i$. In that case it follows that

$\sum_i^m X_{i1}\hat r_i = \sum_i^m \hat r_i = 0.$

Thus, in the motivational example above, the fact that the sum of residual values is equal to zero is not accidental, but is a consequence of the presence of the constant term, $\alpha$, in the model.
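
The hat matrix relationships can be checked numerically; the sketch below (added for illustration, unit weights, same data as the motivational example) computes $\mathbf{H}$, verifies that it is idempotent, and confirms that the residuals sum to zero because the model contains a constant term:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])
X = np.column_stack([np.ones_like(x), x])  # first column constant => residuals sum to zero

# Hat matrix for unit weights: H = X (X^T X)^{-1} X^T
H = X @ np.linalg.solve(X.T @ X, X.T)
assert np.allclose(H @ H, H)               # idempotent

r_hat = (np.eye(len(y)) - H) @ y           # residuals, r_hat = (I - H) y
print(r_hat)                               # [ 1.1 -1.3 -0.7  0.9]
print(r_hat.sum())                         # numerically zero
```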

If experimental error follows a normal distribution, then, because of the linear relationship between residuals and observations, so should the residuals, [K.V. Mardia, J.T. Kent and J.M. Bibby, Multivariate Analysis, Academic Press, 1979] but since the observations are only a sample of the population of all possible observations, the residuals should belong to a Student's t-distribution. Studentized residuals are useful in making a statistical test for an outlier when a particular residual appears to be excessively large.

## Objective function

The objective function can be written as

$S = \mathbf{y^T\left(I - H\right)^T\left(I - H\right)y = y^T\left(I - H\right)y},$

since, with unit weights, $\mathbf{\left(I - H\right)}$ is also symmetric and idempotent. It can be shown from this [W. C. Hamilton, Statistics in Physical Science, The Ronald Press, New York, 1964] that the expected value of $S$ is $m-n$. Note, however, that this is true only if the weights have been assigned correctly. If unit weights are assumed, the expected value of $S$ is $(m-n)\sigma^2$, where $\sigma^2$ is the variance of an observation.

If it is assumed that the residuals belong to a normal distribution, the objective function, being a sum of weighted squared residuals, will belong to a chi-squared ($\chi^2$) distribution with $m-n$ degrees of freedom. Illustrative percentile values of $\chi^2$ can be found in standard tables. [M.R. Spiegel, Probability and Statistics, Schaum's Outline Series, McGraw-Hill, 1982] These values can be used as a statistical criterion for the goodness of fit. When unit weights are used, the numbers should be divided by the variance of an observation.
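
The percentile values referred to above are most easily reproduced with a statistics library; the sketch below (added for illustration, assuming SciPy is available, with assumed example values of $m$ and $n$) computes a few $\chi^2$ percentiles for $m-n$ degrees of freedom:

```python
from scipy.stats import chi2

m, n = 20, 3          # example: 20 observations, 3 fitted parameters
dof = m - n

# Percentiles of the chi-squared distribution with m - n degrees of freedom
for p in (0.50, 0.90, 0.95, 0.99):
    print(f"{100*p:.0f}th percentile of chi2({dof}): {chi2.ppf(p, dof):.2f}")

# Goodness-of-fit check: an observed S much larger than these values suggests a poor fit
# (with unit weights, divide S by the estimated variance of an observation first).
```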

## Typical uses and applications

* Polynomial fitting: models are polynomials in an independent variable, $x$:
  * Straight line: $f(x, \boldsymbol\beta) = \alpha + \beta x$. [F.S. Acton, Analysis of Straight-Line Data, Wiley, 1959]
  * Cubic, quartic and higher polynomials. For high-order polynomials the use of orthogonal polynomials is recommended. [P.G. Guest, Numerical Methods of Curve Fitting, Cambridge University Press, 1961]
* Numerical smoothing and differentiation: an application of polynomial fitting.
* Multinomials in more than one independent variable, including surface fitting.
* Curve fitting with B-splines.
* Chemometrics: calibration curves, standard addition, Gran plots, analysis of mixtures.


## External links

* [Least Squares Fitting – From MathWorld](http://mathworld.wolfram.com/LeastSquaresFitting.html)
* [Least Squares Fitting: Polynomial – From MathWorld](http://mathworld.wolfram.com/LeastSquaresFittingPolynomial.html)
