Least-squares estimation of linear regression coefficients
In parametric statistics, the **least-squares estimator** is often used to estimate the coefficients of a linear regression. The least-squares estimator optimizes a simple criterion: it minimizes the sum of the squares of the residuals. In this article, after setting the mathematical context of linear regression, we motivate the use of the least-squares estimator $\widehat{\theta}_{LS}$ and derive its expression (as seen for example in the article regression analysis):

$\widehat{\theta}_{LS}=(\mathbf{X}^t\mathbf{X})^{-1}\mathbf{X}^t\vec{Y}$

We conclude by giving some qualities of this estimator and a geometrical interpretation.

**Assumptions**

For $p\in\mathbb{N}^+$, let $Y$ be a random variable taking values in $\mathbb{R}$, called the **observation**. We next define the function $\eta$, linear in $\theta$:

$\eta(X;\theta)=\sum_{j=1}^p \theta_j X_j,$ where

* for $j\in\{1,\dots,p\}$, $X_j$ is a random variable taking values in $\mathbb{R}$, called a **factor**, and

* $\theta_j$ is a scalar, for $j\in\{1,\dots,p\}$, and $\theta^t=(\theta_1,\cdots,\theta_p)$, where $\theta^t$ denotes the transpose of the vector $\theta$.

Let $X^t=(X_1,\cdots,X_p)$. We can then write $\eta(X;\theta)=X^t\theta$. Define the **error** to be:

$\varepsilon(\theta)=Y-X^t\theta$

We suppose that there exists a **true parameter** $\overline{\theta}\in\mathbb{R}^{p}$ such that $\mathbb{E}[\varepsilon(\overline{\theta})\,|\,X]=0$. This means that, given the random variables $(X_1,\cdots,X_p)$, the best prediction we can make of $Y$ is $\eta(X;\overline{\theta})=X^t\overline{\theta}$. Henceforth, $\varepsilon$ will denote $\varepsilon(\overline{\theta})$ and $\eta$ will represent $\eta(X;\overline{\theta})$.

**Least-squares estimator**

The idea behind the least-squares estimator is to see linear regression as an orthogonal projection. Let $F$ be the $L^2$-space of all random variables whose square has a finite Lebesgue integral, and let $G$ be the linear subspace of $F$ generated by $X_1,\cdots,X_p$ (supposing that $Y\in F$ and $(X_1,\cdots,X_p)\in F^p$). We show in this section that $\eta$ is an orthogonal projection of $Y$ on $G$, and we then construct the least-squares estimator.

**Seeing linear regression as an orthogonal projection**

We have $\mathbb{E}(Y|X)=\eta$, and $Y\mapsto\mathbb{E}(Y|X)$ is a projection, which means that $\eta$ is a projection of $Y$ on $G$. What is more, this projection is an orthogonal one.

To see this, we can build a scalar product on $F$: for every pair of random variables $X,Y\in F$, define $\langle X,Y\rangle_2:=\mathbb{E}[XY]$. It is indeed a scalar product because if $\|X\|_2^2=0$, then $X=0$ almost everywhere (where $\|X\|_2^2:=\langle X,X\rangle_2$ is the norm corresponding to this scalar product).

For all $1\leq j\leq p$,

$\langle X_j,\varepsilon\rangle_2=\mathbb{E}[X_j\,\varepsilon]=\mathbb{E}\bigl[\mathbb{E}[X_j\,\varepsilon\,|\,X]\bigr]=\mathbb{E}\bigl[X_j\,\mathbb{E}[\varepsilon\,|\,X]\bigr]=0.$

Therefore, $\varepsilon$ is orthogonal to every $X_j$, and hence to the whole subspace $G$, which means that $\eta$ is a projection of $Y$ on $G$, orthogonal with respect to the scalar product we have just defined. We have therefore shown:

$\eta(X;\overline{\theta})=\underset{f\in G}{\arg\min}\,\|Y-f\|^2_2.$

**Estimating the coefficients**

If, for each $j\in\{1,\cdots,p\}$, we have a sample $(X^1_j,\cdots,X^n_j)$ of size $n>p$ of $X_j$, along with a vector $\vec{Y}$ of $n$ observations of $Y$, we can build an estimate of the coefficients of this orthogonal projection. To do this, we can use an estimate of the scalar product defined earlier. For every pair of samples of size $n$, $\vec{U},\vec{V}\in F^n$, of random variables $U$ and $V$, we define $\langle\vec{U},\vec{V}\rangle:=\vec{U}^t\vec{V}$, where $\vec{U}^t$ is the transpose of the vector $\vec{U}$, and $\|\cdot\|:=\sqrt{\langle\cdot,\cdot\rangle}$. Note that this scalar product $\langle\cdot,\cdot\rangle$ is defined on $F^n$ and no longer on $F$.
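As a minimal numerical sketch of this empirical scalar product (using NumPy, with hypothetical synthetic samples $U$ and $V$), note that $\frac{1}{n}\langle\vec{U},\vec{V}\rangle$ estimates $\mathbb{E}[UV]$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two samples of size n from hypothetical random variables U and V = 2U + noise.
n = 100_000
U = rng.normal(size=n)
V = 2.0 * U + rng.normal(size=n)

# Empirical scalar product <U, V> = U^t V, and the corresponding norm.
inner = U @ V
norm_U = np.sqrt(U @ U)

# (1/n) <U, V> estimates E[UV]; here E[UV] = 2 E[U^2] = 2.
print(inner / n)  # close to 2
```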

Let us define the **design matrix** (or **random design**), an $n\times p$ random matrix:

$\mathbf{X}=\left[\begin{matrix}X^1_1&\cdots&X^1_p\\ \vdots&&\vdots\\ X^n_1&\cdots&X^n_p\end{matrix}\right]$

We can now adapt the minimization of the sum of the squared residuals: the least-squares estimator $\widehat{\theta}_{LS}$ will be the value, if it exists, of $\theta$ which minimizes $\|\mathbf{X}\theta-\vec{Y}\|^2$. At this minimum, the residual vector is orthogonal to the columns of $\mathbf{X}$, so $\langle\mathbf{X},\vec{\varepsilon}(\widehat{\theta}_{LS})\rangle=\mathbf{X}^t(\mathbf{X}\widehat{\theta}_{LS}-\vec{Y})=0$.

This yields the normal equations $\mathbf{X}^t\mathbf{X}\,\widehat{\theta}_{LS}=\mathbf{X}^t\vec{Y}$. If $\mathbf{X}$ has full column rank, then $\mathbf{X}^t\mathbf{X}$ is invertible. In that case we can compute the least-squares estimator explicitly by inverting the $p\times p$ matrix $\mathbf{X}^t\mathbf{X}$:
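A minimal sketch of solving the normal equations numerically (assuming NumPy and hypothetical synthetic data; in practice one solves the $p\times p$ linear system rather than forming the inverse explicitly):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic design matrix X (n x p) and observations Y = X theta_bar + noise.
n, p = 200, 3
X = rng.normal(size=(n, p))
theta_bar = np.array([1.0, -2.0, 0.5])
Y = X @ theta_bar + 0.1 * rng.normal(size=n)

# Normal equations: X^t X theta = X^t Y, solved as a p x p linear system.
theta_ls = np.linalg.solve(X.T @ X, X.T @ Y)

# Same result via the explicit closed form (X^t X)^{-1} X^t Y.
theta_closed = np.linalg.inv(X.T @ X) @ X.T @ Y

# The residual vector is orthogonal to every column of X: X^t (X theta - Y) = 0.
residual = X @ theta_ls - Y
print(np.max(np.abs(X.T @ residual)))  # numerically ~ 0
```

The two computations agree; with only mild noise, both recover a value close to the hypothetical true parameter `theta_bar`.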

$\widehat{\theta}_{LS}=(\mathbf{X}^t\mathbf{X})^{-1}\mathbf{X}^t\vec{Y}$

**Qualities and geometrical interpretation**

**Qualities of this estimator**

Not only is the least-squares estimator easy to compute but, under the Gauss-Markov assumptions, the Gauss-Markov theorem states that the least-squares estimator is the best linear unbiased estimator (BLUE) of $\overline{\theta}$.

The vector of errors $\vec{\varepsilon}=\vec{Y}-\mathbf{X}\overline{\theta}$ is said to fulfil the **Gauss-Markov assumptions** if:

* $\mathbb{E}\vec{\varepsilon}=\vec{0}$

* $\mathbb{V}\vec{\varepsilon}=\sigma^2\mathbf{I}_n$ (uncorrelated but not necessarily independent; homoscedastic but not necessarily identically distributed)

where $\sigma^2<+\infty$ and $\mathbf{I}_n$ is the $n\times n$ identity matrix.
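The unbiasedness part of the Gauss-Markov theorem can be illustrated by a small Monte Carlo sketch (assuming NumPy, with a hypothetical fixed design and homoscedastic, uncorrelated noise):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed design, true parameter, and noise satisfying the Gauss-Markov
# assumptions: E[eps] = 0 and V[eps] = sigma^2 I_n.
n, p, sigma = 50, 2, 1.0
X = rng.normal(size=(n, p))
theta_bar = np.array([3.0, -1.0])

# Monte Carlo: average the least-squares estimate over many noise draws.
estimates = []
for _ in range(2000):
    eps = sigma * rng.normal(size=n)
    Y = X @ theta_bar + eps
    estimates.append(np.linalg.solve(X.T @ X, X.T @ Y))

# Unbiasedness: the average estimate is close to the true parameter.
print(np.mean(estimates, axis=0))  # close to [3.0, -1.0]
```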

This decisive advantage has led to a sometimes abusive use of least squares: the method depends on the fulfilment of the Gauss-Markov assumptions, and applying it in a situation where these conditions are not met can lead to inaccurate results. For example, in the study of time series, it is often difficult to assume independence of the residuals.

**Geometrical interpretation**

The situation described by the linear regression problem can be seen geometrically as follows: $\vec{Y}$ is a vector in $\mathbb{R}^n$, and the fitted vector $\mathbf{X}\widehat{\theta}_{LS}$ is its orthogonal projection onto the subspace spanned by the columns of $\mathbf{X}$.

Least squares is also an M-estimator of $\rho$-type, for $\rho(r):=\frac{r^2}{2}$.
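A minimal sketch of this M-estimation view (assuming NumPy and hypothetical synthetic data): minimizing $\sum_i \rho(Y_i - x_i^t\theta)$ with $\rho(r)=r^2/2$ by gradient descent recovers the closed-form least-squares solution, since the gradient of this objective is $-\mathbf{X}^t(\vec{Y}-\mathbf{X}\theta)$, the same stationarity condition as the normal equations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical synthetic data.
n, p = 100, 2
X = rng.normal(size=(n, p))
Y = X @ np.array([2.0, -1.0]) + 0.2 * rng.normal(size=n)

# Gradient descent on the M-estimation objective sum_i rho(Y_i - x_i^t theta)
# with rho(r) = r^2 / 2.
theta = np.zeros(p)
lr = 1e-3
for _ in range(5000):
    grad = -X.T @ (Y - X @ theta)  # gradient of the sum of rho(residuals)
    theta -= lr * grad

# Closed-form least-squares solution for comparison.
theta_ls = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.max(np.abs(theta - theta_ls)))  # ~ 0: the M-estimator matches least squares
```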

*Wikimedia Foundation. 2010.*