Autocovariance
In statistics, given a real stochastic process X(t), the autocovariance is the covariance of the signal against a time-shifted version of itself. If each state of the series has a mean, \operatorname{E}[X_t] = \mu_t, then the autocovariance is given by

:K_\mathrm{XX}(t,s) = \operatorname{E}[(X_t - \mu_t)(X_s - \mu_s)] = \operatorname{E}[X_t \cdot X_s] - \mu_t \cdot \mu_s,

where E is the expectation operator.
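As a sketch of this definition, the autocovariance at a pair of times (t, s) can be estimated by averaging over many independent realizations of the process. The AR(1) example process and all function names below are illustrative assumptions, not from the text:

```python
import random

def sample_autocovariance(realizations, t, s):
    """Estimate K_XX(t, s) = E[(X_t - mu_t)(X_s - mu_s)] by averaging
    over many independent realizations (sample paths) of the process."""
    n = len(realizations)
    mu_t = sum(path[t] for path in realizations) / n
    mu_s = sum(path[s] for path in realizations) / n
    return sum((path[t] - mu_t) * (path[s] - mu_s)
               for path in realizations) / n

# Hypothetical example process: X_k = 0.5 * X_{k-1} + Gaussian noise.
def simulate(length=20):
    x, path = 0.0, []
    for _ in range(length):
        x = 0.5 * x + random.gauss(0.0, 1.0)
        path.append(x)
    return path

random.seed(0)
paths = [simulate() for _ in range(5000)]
k_10_12 = sample_autocovariance(paths, t=10, s=12)  # estimate of K_XX(10, 12)
```

For this AR(1) sketch the autocovariance at lag 2 is roughly 0.5² · Var(X) ≈ 0.33, so the estimate should land near that value; the equivalent form E[X_t · X_s] − μ_t · μ_s yields the same estimate up to the averaging.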
Stationarity
If X(t) is wide-sense stationary, then the following conditions hold:

:\mu_t = \mu_s = \mu \quad \text{for all } t, s
and
:K_\mathrm{XX}(t,s) = K_\mathrm{XX}(s-t) = K_\mathrm{XX}(\tau),
where
:\tau = s - t
is the lag time, or the amount of time by which the signal has been shifted.
As a result, the autocovariance becomes
:K_\mathrm{XX}(\tau) = \operatorname{E}[(X(t) - \mu)(X(t+\tau) - \mu)]
:::: = \operatorname{E}[X(t) \cdot X(t+\tau)] - \mu^2
:::: = R_\mathrm{XX}(\tau) - \mu^2,
where R_\mathrm{XX} represents the autocorrelation, in the signal-processing sense.

Normalization
When normalized by dividing by the variance σ², the autocovariance becomes the autocorrelation coefficient ρ. That is:

:\rho_\mathrm{XX}(\tau) = \frac{K_\mathrm{XX}(\tau)}{\sigma^2}.
Note, however, that some disciplines use the terms autocovariance and autocorrelation interchangeably.
The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalizing by the variance puts this measure into the range [−1, 1].
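These relations can be sketched for a single stationary series. The function names below are illustrative, not from any particular library, and the time averages stand in for the expectations, so the identity K(τ) = R(τ) − μ² holds only approximately, up to edge effects at the ends of the series:

```python
import random

def autocovariance(x, lag):
    """Biased time-average estimate of K_XX(tau); dividing by len(x)
    rather than len(x) - lag keeps |rho| <= 1 by Cauchy-Schwarz."""
    n = len(x)
    mu = sum(x) / n
    return sum((x[t] - mu) * (x[t + lag] - mu) for t in range(n - lag)) / n

def autocorrelation_sp(x, lag):
    """R_XX(tau) = E[X(t) X(t + tau)], in the signal-processing sense."""
    n = len(x)
    return sum(x[t] * x[t + lag] for t in range(n - lag)) / n

def autocorrelation_coefficient(x, lag):
    """rho_XX(tau) = K_XX(tau) / sigma^2, which lies in [-1, 1]."""
    return autocovariance(x, lag) / autocovariance(x, 0)

# Illustrative stationary series with a nonzero mean (assumed AR(1) example).
random.seed(1)
x, series = 0.0, []
for _ in range(100_000):
    x = 0.5 * x + random.gauss(0.0, 1.0)
    series.append(x + 2.0)  # shift so that mu is about 2

mu = sum(series) / len(series)
k2 = autocovariance(series, 2)
r2 = autocorrelation_sp(series, 2)
rho2 = autocorrelation_coefficient(series, 2)
```

Here k2 ≈ r2 − μ², ρ(0) = 1 exactly, and ρ(2) ≈ 0.25 for this AR(1) example. Using the biased estimator (denominator n) is a deliberate choice: it guarantees the normalized coefficient stays within [−1, 1].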
See also
* Autocorrelation

References
* P. G. Hoel (1984). Mathematical Statistics. New York: Wiley.
* [http://w3eos.whoi.edu/12.747/notes/lect06/l06s02.html Lecture notes on autocovariance from WHOI]
Wikimedia Foundation. 2010.