**Particle filter**
**Particle filters**, also known as **sequential Monte Carlo methods** (SMC), are sophisticated model estimation techniques based on simulation. They are usually used to estimate

Bayesian models and are the sequential ('on-line') analogue of Markov chain Monte Carlo (MCMC) batch methods; they are often similar to importance sampling methods. If well designed, particle filters can be much faster than MCMC. They are often an alternative to the extended Kalman filter (EKF) or unscented Kalman filter (UKF), with the advantage that, with sufficient samples, they approach the Bayesian optimal estimate, so they can be made more accurate than either the EKF or UKF. The approaches can also be combined by using a version of the Kalman filter as a proposal distribution for the particle filter.

**Goal**

The particle filter aims to estimate the sequence of hidden parameters $x_k$ for $k = 0, 1, 2, 3, \ldots$, based only on the observed data $y_k$ for $k = 0, 1, 2, 3, \ldots$. All Bayesian estimates of $x_k$ follow from the

posterior distribution $p(x_k|y_0, y_1, \ldots, y_k)$. In contrast, the MCMC or importance sampling approach would model the full posterior $p(x_0, x_1, \ldots, x_k|y_0, y_1, \ldots, y_k)$.

**Model**

Particle methods assume that $x_k$ and the observations $y_k$ can be modeled in this form:

* $x_0, x_1, \ldots$ is a first-order Markov process such that

:$x_k|x_{k-1} \sim p_{x_k|x_{k-1}}(x|x_{k-1})$

and with an initial distribution $p(x_0)$.

* The observations $y_0, y_1, \ldots$ are conditionally independent provided that $x_0, x_1, \ldots$ are known; in other words, each $y_k$ depends only on $x_k$:

:$y_k|x_k \sim p_{y|x}(y|x_k)$

One example form of this scenario is

:$x_k = f(x_{k-1}) + v_k$
:$y_k = h(x_k) + w_k$

where both $v_k$ and $w_k$ are mutually independent and identically distributed sequences with known probability density functions, and $f(\cdot)$ and $h(\cdot)$ are known functions. These two equations can be viewed as state-space equations and look similar to the state-space equations for the Kalman filter. If the functions $f(\cdot)$ and $h(\cdot)$ were linear, and if both $v_k$ and $w_k$ were Gaussian, the Kalman filter would find the exact Bayesian filtering distribution. If not, Kalman filter based methods are a first-order approximation. Particle filters are also an approximation, but with enough particles they can be much more accurate.

**Monte Carlo approximation**

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution $p(x_k|y_0, \dots, y_k)$. So, with $P$ samples, expectations with respect to the filtering distribution are approximated by

:$\int f(x_k)\, p(x_k|y_0, \dots, y_k)\, dx_k \approx \frac{1}{P} \sum_{L=1}^P f(x_k^{(L)})$

and, by choosing $f(\cdot)$ appropriately, this gives all the moments etc. of the distribution, in the usual Monte Carlo way, up to some degree of approximation.
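As a concrete illustration, the sketch below approximates $E[f(x_k)]$ by averaging $f$ over equally weighted samples. The Gaussian here is an assumed stand-in for the filtering distribution (chosen only so the exact moments are known to check against); it is not part of the original text.

```python
import numpy as np

def mc_expectation(f, samples):
    """Approximate E[f(x_k)] by the plain average of f over the samples."""
    return float(np.mean(f(np.asarray(samples))))

rng = np.random.default_rng(0)
# Assumed stand-in for p(x_k | y_0, ..., y_k): a Gaussian with known moments.
samples = rng.normal(loc=2.0, scale=0.5, size=100_000)
second_moment = mc_expectation(lambda x: x**2, samples)
# Exact E[x^2] = mu^2 + sigma^2 = 2.0**2 + 0.5**2 = 4.25; the estimate is close.
```

With many samples the estimate converges at the usual $O(1/\sqrt{P})$ Monte Carlo rate.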

**Sampling Importance Resampling (SIR)**

Sampling importance resampling (SIR) is a very commonly used particle filtering algorithm, which approximates the filtering distribution $p(x_k|y_0, \ldots, y_k)$ by a weighted set of $P$ particles

: $\{(w^{(L)}_k, x^{(L)}_k) : L = 1, \ldots, P\}.$

The "importance weights" $w^{(L)}_k$ are approximations to the relative posterior probabilities (or densities) of the particles, such that $\sum_{L=1}^P w^{(L)}_k = 1$.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function $f(\cdot)$ can be approximated as a weighted average:

: $\int f(x_k)\, p(x_k|y_0, \dots, y_k)\, dx_k \approx \sum_{L=1}^P w^{(L)}_k f(x_k^{(L)}).$
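A minimal sketch of this weighted-average approximation, using a hypothetical three-particle set with made-up weights:

```python
import numpy as np

def weighted_expectation(f, particles, weights):
    """Importance-sampling estimate of E[f(x_k)]: a weighted average over particles."""
    x = np.asarray(particles, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # enforce that the weights sum to 1
    return float(np.sum(w * f(x)))

# Hypothetical particle set: three particles with unequal normalized weights.
particles = [0.0, 1.0, 2.0]
weights = [0.2, 0.5, 0.3]
mean_est = weighted_expectation(lambda x: x, particles, weights)
# Weighted mean: 0.2*0.0 + 0.5*1.0 + 0.3*2.0 = 1.1
```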

For a finite set of particles, the algorithm's performance depends on the choice of the "proposal distribution"

: $\pi(x_k|x_{0:k-1}, y_{0:k}).$

The "optimal proposal distribution" is given as the "target distribution":

: $\pi(x_k|x_{0:k-1}, y_{0:k}) = p(x_k|x_{k-1}, y_k).$

However, the transition prior is often used as the importance function, since it is easier to draw particles (or samples) from it and to perform the subsequent importance-weight calculations:

: $\pi(x_k|x_{0:k-1}, y_{0:k}) = p(x_k|x_{k-1}).$

Sampling importance resampling (SIR) filters with the transition prior as importance function are commonly known as the bootstrap filter and the condensation algorithm.

"Resampling" is used to avoid the problem of degeneracy of the algorithm, that is, avoiding the situation in which all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The "stratified resampling" proposed by Kitagawa (1996) is optimal in terms of variance.

A single step of sequential importance resampling is as follows:

:1) For $L = 1, \ldots, P$, draw samples from the "proposal distribution"

:: $x^{(L)}_k \sim \pi(x_k|x^{(L)}_{0:k-1}, y_{0:k})$

:2) For $L = 1, \ldots, P$, update the importance weights up to a normalizing constant:

:: $\hat{w}^{(L)}_k = w^{(L)}_{k-1} \frac{p(y_k|x^{(L)}_k)\, p(x^{(L)}_k|x^{(L)}_{k-1})}{\pi(x_k^{(L)}|x^{(L)}_{0:k-1}, y_{0:k})}.$

:3) For $L = 1, \ldots, P$, compute the normalized importance weights:

:: $w^{(L)}_k = \frac{\hat{w}^{(L)}_k}{\sum_{J=1}^P \hat{w}^{(J)}_k}$

:4) Compute an estimate of the effective number of particles as

:: $\hat{N}_\mathit{eff} = \frac{1}{\sum_{L=1}^P \left( w^{(L)}_k \right)^2}$

:5) If the effective number of particles is less than a given threshold, $\hat{N}_\mathit{eff} < N_{thr}$, then perform resampling:

::a) Draw $P$ particles from the current particle set, with probabilities proportional to their weights, and replace the current particle set with this new one.

::b) For $L = 1, \ldots, P$, set $w^{(L)}_k = 1/P.$
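The five steps above can be sketched in code as follows. This is an illustrative sketch, not a reference implementation: it assumes the transition prior is used as the proposal (the bootstrap filter case, in which the weight ratio in step 2 reduces to multiplying by the likelihood), it uses plain multinomial resampling rather than stratified resampling, and the random-walk model at the bottom is invented for the example.

```python
import numpy as np

def sir_step(particles, weights, y_k, propose, likelihood, rng, n_thr=None):
    """One step of sequential importance resampling.

    Assumes the proposal is the transition prior, so the weight update is
    w_k proportional to w_{k-1} * p(y_k | x_k).
    """
    P = len(particles)
    # 1) Draw each particle from the proposal distribution.
    particles = propose(particles, rng)
    # 2)-3) Update the importance weights and normalize them.
    w = weights * likelihood(y_k, particles)
    w = w / w.sum()
    # 4) Estimate the effective number of particles.
    n_eff = 1.0 / np.sum(w**2)
    # 5) Resample (multinomial) when n_eff drops below the threshold.
    if n_thr is None:
        n_thr = P / 2.0
    if n_eff < n_thr:
        idx = rng.choice(P, size=P, p=w)
        particles = particles[idx]
        w = np.full(P, 1.0 / P)
    return particles, w

# Invented toy model: random-walk state, Gaussian measurement noise.
rng = np.random.default_rng(1)
P = 500
particles = rng.normal(0.0, 1.0, size=P)                       # initial particle cloud
weights = np.full(P, 1.0 / P)
propose = lambda x, g: x + g.normal(0.0, 0.5, size=x.shape)    # draw from p(x_k|x_{k-1})
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2)          # p(y_k|x_k), unit variance
particles, weights = sir_step(particles, weights, 1.0, propose, likelihood, rng)
state_est = float(np.sum(weights * particles))                 # posterior-mean estimate
```

After the step, the weighted particle mean tracks the Bayesian posterior mean of the state given the new measurement.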

The term "**Sequential** Importance Resampling" is also sometimes used when referring to SIR filters.

**Sequential Importance Sampling (SIS)**

* The same as sampling importance resampling, but without the resampling stage.

**"Direct version" algorithm**

The "direct version" algorithm is rather simple (compared to other particle filtering algorithms); it uses composition and rejection. To generate a single sample $x$ at $k$ from $p_{x_k|y_{1:k}}(x|y_{1:k})$:

:1) Set $p = 1$

:2) Uniformly generate $L$ from $\{1, \ldots, P\}$

:3) Generate a test $\hat{x}$ from its distribution $p_{x_k|x_{k-1}}(x|x_{k-1|k-1}^{(L)})$

:4) Generate the probability of $\hat{y}$ using $\hat{x}$ from $p_{y|x}(y_k|\hat{x})$, where $y_k$ is the measured value

:5) Generate another uniform $u$ from $[0, m_k]$

:6) Compare $u$ and the probability $p_{y|x}(y_k|\hat{x})$ from step 4

::6a) If $u$ is larger, then repeat from step 2

::6b) If $u$ is smaller, then save $\hat{x}$ as $x_{k|k}^{(p)}$ and increment $p$

:7) If $p > P$, then quit

The goal is to generate $P$ "particles" at $k$ using only the particles from $k-1$. This requires that a Markov equation can be written (and computed) to generate an $x_k$ based only upon $x_{k-1}$. This algorithm uses composition of the $P$ particles from $k-1$ to generate a particle at $k$ and repeats (steps 2–6) until $P$ particles are generated at $k$.
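The loop above can be sketched in code under the assumption (implicit in step 5) that $m_k$ is a known upper bound on the likelihood $p_{y|x}(y_k|x)$; the random-walk transition and Gaussian likelihood below are invented toy choices, not part of the original description.

```python
import numpy as np

def direct_version(prev_particles, y_k, transition_sample, likelihood, m_k, P, rng):
    """Generate P particles at time k by composition and rejection.

    m_k is assumed to be an upper bound on likelihood(y_k, x) for all x.
    """
    accepted = []
    while len(accepted) < P:                      # step 7: stop once P samples accepted
        L = rng.integers(len(prev_particles))     # step 2: pick a particle uniformly
        x_hat = transition_sample(prev_particles[L], rng)  # step 3: draw from p(x_k|x_{k-1})
        p_y = likelihood(y_k, x_hat)              # step 4: probability of the measurement
        u = rng.uniform(0.0, m_k)                 # step 5: uniform on [0, m_k]
        if u <= p_y:                              # step 6: accept; otherwise retry
            accepted.append(x_hat)
    return np.array(accepted)

# Invented toy model: random-walk transition, Gaussian likelihood (bounded above by 1).
rng = np.random.default_rng(2)
prev = rng.normal(0.0, 1.0, size=200)
transition = lambda x, g: x + g.normal(0.0, 0.5)
lik = lambda y, x: np.exp(-0.5 * (y - x) ** 2)
new_particles = direct_version(prev, 0.5, transition, lik, m_k=1.0, P=200, rng=rng)
```

Because accepted samples are drawn in proportion to the likelihood, the returned cloud is an (approximate) sample from the filtering distribution at $k$.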

This can be more easily visualized if $x$ is viewed as a two-dimensional array. One dimension is $k$ and the other is the particle number. For example, $x(k, L)$ would be the $L$th particle at $k$, and can also be written $x_k^{(L)}$ (as done above in the algorithm). Step 3 generates a potential $x_k$ based on a randomly chosen particle $x_{k-1}^{(L)}$ at time $k-1$ and rejects or accepts it in step 6. In other words, the $x_k$ values are generated using the previously generated $x_{k-1}$.

**Other Particle Filters**

* Auxiliary particle filter
* Gaussian particle filter
* Unscented particle filter
* Monte Carlo particle filter
* Gauss–Hermite particle filter
* Cost reference particle filter
* Rao–Blackwellized particle filter

**See also**

* Ensemble Kalman filter

* Recursive Bayesian estimation

**References**

* Doucet, A.; De Freitas, N.; Gordon, N.J. (2001). *Sequential Monte Carlo Methods in Practice*. Springer.
* Cappe, O.; Moulines, E.; Ryden, T. (2005). *Inference in Hidden Markov Models*. Springer.
* Liu, J. (2001). *Monte Carlo Strategies in Scientific Computing*. Springer.
* Ristic, B.; Arulampalam, S.; Gordon, N. (2004). *Beyond the Kalman Filter: Particle Filters for Tracking Applications*. Artech House.
* Doucet, A.; Godsill, S.; Andrieu, C. (2000). "On Sequential Monte Carlo Methods for Bayesian Filtering". *Statistics and Computing* 10 (3): 197–208. http://www.springerlink.com/content/q6452k2x37357l3r/
* Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. (2002). "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking". *IEEE Transactions on Signal Processing* 50 (2): 174–188. doi:10.1109/78.978374. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=978374
* Cappe, O.; Godsill, S.; Moulines, E. (2007). "An overview of existing methods and recent advances in sequential Monte Carlo". *Proceedings of the IEEE* 95 (5): 899. doi:10.1109/JPROC.2007.893250.
* Kotecha, J.H.; Djuric, P. (2003). "Gaussian Particle Filtering". *IEEE Transactions on Signal Processing* 51 (10).
* Haug, A.J. (2005). "A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes". The MITRE Corporation, USA, Tech. Rep., Feb. http://www.mitre-corporation.net/work/tech_papers/tech_papers_05/05_0211/05_0211.pdf (accessed 2008-05-06)
* Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". *Journal of the American Statistical Association* 94 (446): 590–591. doi:10.2307/2670179. http://www.questia.com/PM.qst?a=o&se=gglsc&d=5002321997 (accessed 2008-05-06)

**External links**

* [http://www-sigproc.eng.cam.ac.uk/smc/ Sequential Monte Carlo Methods (Particle Filtering)] homepage at the University of Cambridge
* [http://www.cs.washington.edu/ai/Mobile_Robotics/mcl/ Dieter Fox's MCL Animations]
* [http://web.engr.oregonstate.edu/~hess/ Rob Hess' free software]
* [http://babel.isa.uma.es/mrpt/index.php/Particle_Filters Tutorial on particle filtering] with the MRPT C++ library, and a [http://babel.isa.uma.es/mrpt/index.php/Paper:An_Optimal_Filtering_Algorithm_for_Non-Parametric_Observation_Models_in_Robot_Localization_(ICRA_2008)#Video mobile robot localization video].
