Bayesian linear regression
In statistics, Bayesian linear regression is a Bayesian alternative to the more well-known ordinary least-squares linear regression.

Consider the standard linear regression problem, where we specify the conditional density of the response y_i given the predictor variables \mathbf{x}_i:

: y_i = \mathbf{x}_i^\mathsf{T} \beta + \epsilon_i

where the noise \epsilon_i is i.i.d. and normally distributed:

: \epsilon_i \sim N(0, \sigma^2)
A common, linear least squares solution is to estimate the slope \hat{\beta} using the Moore–Penrose pseudoinverse:

: \hat{\beta} = (X^\mathsf{T} X)^{-1} X^\mathsf{T} \mathbf{y}

where X is the n \times k matrix whose rows are the \mathbf{x}_i^\mathsf{T}, and \mathbf{y} is the vector of the y_i (of length n).
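The estimator is easy to compute numerically. The following is a minimal sketch (assuming NumPy, with made-up data standing in for the measurements; beta_true exists only to generate the illustration):

 import numpy as np
 # Hypothetical data: n observations of k predictors, plus noisy responses.
 rng = np.random.default_rng(0)
 n, k = 50, 3
 X = rng.normal(size=(n, k))               # design matrix; rows are the x_i
 beta_true = np.array([1.5, -2.0, 0.5])    # for generating the fake data only
 y = X @ beta_true + rng.normal(scale=0.3, size=n)
 # Least squares estimate of the slope via the Moore-Penrose pseudoinverse.
 beta_hat = np.linalg.pinv(X) @ y          # same as solving (X^T X) beta = X^T y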
This is a frequentist's view, and it assumes we have enough measurements of the \mathbf{x}_i and y_i to say something meaningful about \beta. In the empirical Bayes approach, we will assume we have only a small sample of measurements for our individual problem, and we seek to correct our estimate by "borrowing" information from a larger set of similar observations.

Let us write our conditional likelihood as:

: \rho(\mathbf{y} \mid X, \beta, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\!\left( -\frac{1}{2\sigma^2} (\mathbf{y} - X\beta)^\mathsf{T} (\mathbf{y} - X\beta) \right)
We seek a natural conjugate prior (a joint density \rho(\beta, \sigma^2) which is of the same functional form as the likelihood). Since the likelihood is quadratic in \beta, we re-write the likelihood so it is normal in (\beta - \hat{\beta}). Write:

: (\mathbf{y} - X\beta)^\mathsf{T} (\mathbf{y} - X\beta) = (\mathbf{y} - X\hat{\beta})^\mathsf{T} (\mathbf{y} - X\hat{\beta}) + (\beta - \hat{\beta})^\mathsf{T} (X^\mathsf{T} X) (\beta - \hat{\beta})

Now re-write the likelihood as:

: \rho(\mathbf{y} \mid X, \beta, \sigma^2) \propto (\sigma^2)^{-\nu/2} \exp\!\left( -\frac{\nu s^2}{2\sigma^2} \right) (\sigma^2)^{-k/2} \exp\!\left( -\frac{1}{2\sigma^2} (\beta - \hat{\beta})^\mathsf{T} (X^\mathsf{T} X) (\beta - \hat{\beta}) \right)

where:

: \nu s^2 = (\mathbf{y} - X\hat{\beta})^\mathsf{T} (\mathbf{y} - X\hat{\beta}), \qquad \nu = n - k

with k as the number of parameters to estimate.
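This decomposition can be sanity-checked numerically. A small sketch, continuing the names from the snippet above, with an arbitrary trial value of \beta:

 # Verify: (y - Xb)'(y - Xb) = nu s^2 + (b - beta_hat)' X'X (b - beta_hat).
 beta = np.array([1.0, -1.0, 1.0])         # arbitrary trial value of beta
 nu = n - k                                # residual degrees of freedom
 resid = y - X @ beta_hat
 s2 = resid @ resid / nu
 d = beta - beta_hat
 lhs = (y - X @ beta) @ (y - X @ beta)
 rhs = nu * s2 + d @ (X.T @ X) @ d
 assert np.isclose(lhs, rhs)               # holds because X'(y - X beta_hat) = 0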
This suggests a form for the priors:

: \rho(\beta, \sigma^2) = \rho(\sigma^2)\, \rho(\beta \mid \sigma^2)

where \rho(\sigma^2) is an inverse-gamma distribution:

: \rho(\sigma^2) \propto (\sigma^2)^{-(\nu_0/2 + 1)} \exp\!\left( -\frac{\nu_0 s_0^2}{2\sigma^2} \right)

and \rho(\beta \mid \sigma^2) is a normal distribution:

: \rho(\beta \mid \sigma^2) \propto (\sigma^2)^{-k/2} \exp\!\left( -\frac{1}{2\sigma^2} (\beta - \beta_0)^\mathsf{T} A (\beta - \beta_0) \right)

with \nu_0 and s_0^2 as the prior values of \nu and s^2, respectively, \beta_0 as the prior mean of \beta, and A a positive-definite matrix playing the role of X^\mathsf{T} X in the prior.
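As a concrete illustration, one can draw (\beta, \sigma^2) from this joint prior. A sketch continuing the running example, assuming SciPy and hyperparameter values chosen purely for illustration:

 from scipy import stats
 # Hypothetical hyperparameters of the normal-inverse-gamma prior.
 nu0, s02 = 5.0, 0.25                      # prior degrees of freedom and scale
 beta0 = np.zeros(k)                       # prior mean of beta
 A = 2.0 * np.eye(k)                       # positive-definite prior matrix
 # sigma^2 ~ Inv-Gamma(nu0/2, nu0 s0^2/2), then beta | sigma^2 ~ N(beta0, sigma^2 A^{-1}).
 sigma2 = stats.invgamma.rvs(a=nu0 / 2, scale=nu0 * s02 / 2, random_state=rng)
 beta_prior = rng.multivariate_normal(beta0, sigma2 * np.linalg.inv(A))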
With the prior now specified, we can express the posterior distribution as:

: \rho(\beta, \sigma^2 \mid \mathbf{y}, X) \propto \rho(\mathbf{y} \mid X, \beta, \sigma^2)\, \rho(\beta \mid \sigma^2)\, \rho(\sigma^2)

:: \propto (\sigma^2)^{-n/2} \exp\!\left( -\frac{1}{2\sigma^2} (\mathbf{y} - X\beta)^\mathsf{T} (\mathbf{y} - X\beta) \right)

::: \times (\sigma^2)^{-k/2} \exp\!\left( -\frac{1}{2\sigma^2} (\beta - \beta_0)^\mathsf{T} A (\beta - \beta_0) \right)

::: \times (\sigma^2)^{-(\nu_0/2 + 1)} \exp\!\left( -\frac{\nu_0 s_0^2}{2\sigma^2} \right)
With some re-arrangement, we can re-write the posterior so that the posterior mean \beta_n is a weighted average of the least squares estimator \hat{\beta} and the prior mean \beta_0:

: \beta_n = (X^\mathsf{T} X + U^\mathsf{T} U)^{-1} (X^\mathsf{T} X \hat{\beta} + U^\mathsf{T} U \beta_0)

where U comes from the Cholesky decomposition of A (which is a positive-definite matrix by design):

: A = U^\mathsf{T} U

This is the key result of the empirical Bayes approach; it allows us to estimate the slope \beta for our original linear regression problem by combining the least squares estimate \hat{\beta} from a single set of measurements with the empirical prior estimate \beta_0 from a large collection of similar measurements. (Notice that the weighted average also depends on the empirical estimate of the prior precision matrix A.)
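In code, the weighted average is a single linear solve. A sketch continuing the example above (U is formed explicitly only to mirror the formula; U^\mathsf{T} U just reproduces A):

 # Posterior mean as a weighted average of beta_hat and beta0.
 XtX = X.T @ X
 U = np.linalg.cholesky(A).T               # upper-triangular factor, A = U'U
 UtU = U.T @ U                             # equals A up to rounding
 beta_n = np.linalg.solve(XtX + UtU, XtX @ beta_hat + UtU @ beta0)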
To justify this, collect the quadratic terms in the exponential and express them as a quadratic form in \beta - \beta_n:

: \nu s^2 + \nu_0 s_0^2 + (\beta - \hat{\beta})^\mathsf{T} (X^\mathsf{T} X) (\beta - \hat{\beta}) + (\beta - \beta_0)^\mathsf{T} A (\beta - \beta_0)

:: = \nu_n s_n^2 + (\beta - \beta_n)^\mathsf{T} (X^\mathsf{T} X + A) (\beta - \beta_n)

where

:: \nu_n = \nu + \nu_0 + k, \qquad \nu_n s_n^2 = \nu s^2 + \nu_0 s_0^2 + \hat{\beta}^\mathsf{T} X^\mathsf{T} X \hat{\beta} + \beta_0^\mathsf{T} A \beta_0 - \beta_n^\mathsf{T} (X^\mathsf{T} X + A) \beta_n

The posterior can now be expressed as a normal distribution times an inverse-gamma distribution:

: \rho(\beta, \sigma^2 \mid \mathbf{y}, X) \propto (\sigma^2)^{-k/2} \exp\!\left( -\frac{1}{2\sigma^2} (\beta - \beta_n)^\mathsf{T} (X^\mathsf{T} X + A) (\beta - \beta_n) \right) (\sigma^2)^{-(\nu_n/2 + 1)} \exp\!\left( -\frac{\nu_n s_n^2}{2\sigma^2} \right)
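Continuing the sketch, the updated parameters \nu_n and s_n^2 follow directly from the formulas above, and sampling from the posterior then mirrors sampling from the prior:

 # Updated inverse-gamma parameters of the posterior.
 nu_n = nu + nu0 + k
 s2_n = (nu * s2 + nu0 * s02
         + beta_hat @ XtX @ beta_hat + beta0 @ A @ beta0
         - beta_n @ (XtX + A) @ beta_n) / nu_n
 # Draw sigma^2 from the inverse-gamma factor, then beta | sigma^2 from the normal factor.
 sigma2_post = stats.invgamma.rvs(a=nu_n / 2, scale=nu_n * s2_n / 2, random_state=rng)
 beta_post = rng.multivariate_normal(beta_n, sigma2_post * np.linalg.inv(XtX + A))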
A similar analysis can be performed for the general case of multivariate regression; part of this involves the Bayesian estimation of covariance matrices.

Example
Suppose the weights of a large population of 35-year-old men are normally distributed with expected value μ and standard deviation σ. A crude measuring instrument measures a man's weight with a measurement error that is normally distributed with expected value 0 and standard deviation τ. The man's true weight is not observable; his weight measured with error is observed. The conditional probability distribution of a randomly chosen man's true weight, given his weight-measured-with-error, can be found by using Bayes' theorem, and then the conditional expected value can be used as an estimate of his true weight, provided that the values of μ, σ, and τ are "known". But they are not. One may use the data to estimate the standard deviation of the measurement errors by measuring each man multiple times. One may similarly estimate the population average weight and the population standard deviation of weights by weighing multiple men. These estimates of parameters based on the data are the occasion for the use of the word "empirical". Finally, one may then estimate the aforementioned conditional expected true weight by using Bayes' theorem.
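When μ, σ, and τ are treated as known, the conditional expected true weight given a measurement m has the familiar closed form (\tau^2 \mu + \sigma^2 m)/(\sigma^2 + \tau^2). A self-contained sketch with made-up values:

 # Hypothetical values: population mean/sd (mu, sigma), measurement-error sd (tau), in kg.
 mu, sigma, tau = 80.0, 10.0, 5.0
 m = 95.0                                  # one man's measured weight
 w_est = (tau**2 * mu + sigma**2 * m) / (sigma**2 + tau**2)
 print(w_est)                              # 92.0: the reading shrinks toward the mean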
See also
* Bayesian multivariate linear regression