Compressed sensing

Compressed sensing, also known as compressive sensing, compressive sampling, or sparse sampling, is a technique for finding sparse solutions to underdetermined linear systems. In electrical engineering, particularly in signal processing, compressed sensing is the process of acquiring and reconstructing a signal that is assumed to be sparse or compressible.

History

Several scientific fields have made use of L1 techniques.[1] In statistics, the least-squares method was complemented by the L1-norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the L1-norm was used in computational statistics. In statistical theory, the L1-norm was used by George W. Brown and later writers on median-unbiased estimators, and by Peter Huber and others working on robust statistics. The L1-norm was also used in signal processing, for example in the 1970s, when seismologists constructed images of reflective layers within the earth from data that did not seem to satisfy the Nyquist–Shannon criterion.[2] It was used in matching pursuit in 1993, in the LASSO estimator by Robert Tibshirani in 1996,[3] and in basis pursuit in 1998.[4] Theoretical results described when these algorithms recover sparse solutions, but the required type and number of measurements were sub-optimal and were subsequently greatly improved by compressed sensing. Around 2004, Emmanuel Candès, Terence Tao and David Donoho proved important results on the minimum amount of data needed to reconstruct an image, even when that amount would be deemed insufficient by the Nyquist–Shannon criterion.[5][6]

Underdetermined linear system

An underdetermined system of linear equations has more unknowns than equations and generally has infinitely many solutions. However, if the system has a unique sparse solution, the compressed-sensing framework allows that solution to be recovered. Not all underdetermined systems of linear equations have a sparse solution.
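
For illustration, consider the two equations x1 + x2 + x3 = 1 and x1 + 2·x2 + 4·x3 = 1 in three unknowns. The general solution is the one-parameter family (1 + 2t, −3t, t), so there are infinitely many solutions, but only the choice t = 0 yields a solution with a single nonzero entry, namely (1, 0, 0); that 1-sparse solution is therefore unique.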

Solution / reconstruction method

Compressed sensing takes advantage of the redundancy present in many interesting signals; they are not pure noise. In particular, many signals are sparse: when represented in some domain, they contain many coefficients that are close to or equal to zero.[7] This is the same insight that underlies many forms of lossy compression.
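
As a concrete illustration of sparsity in a transform domain, the following Python sketch (an illustrative example with arbitrary signal length and coefficient values, not taken from any cited reference) builds a signal out of three discrete-cosine-transform basis functions: the waveform looks dense in the sampling domain but has only three nonzero coefficients in the DCT domain.

```python
import numpy as np
from scipy.fft import dct, idct

n = 256
coeffs_true = np.zeros(n)
coeffs_true[[3, 17, 40]] = [1.0, -0.7, 0.4]   # only three nonzero DCT coefficients (arbitrary choice)

signal = idct(coeffs_true, norm='ortho')      # time-domain waveform: nearly every sample is nonzero
coeffs = dct(signal, norm='ortho')            # back in the DCT domain, the sparsity is evident

print(np.count_nonzero(np.abs(signal) > 1e-12))   # close to 256: dense in the sampling domain
print(np.count_nonzero(np.abs(coeffs) > 1e-8))    # 3: sparse in the transform domain
```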

Compressed sensing typically starts by taking a weighted linear combination of samples, also called compressive measurements, in a basis different from the basis in which the signal is known to be sparse. The results found by Emmanuel Candès, Justin Romberg and Terence Tao[5] and by David Donoho[6] showed that the number of these compressive measurements can be small and still contain nearly all the useful information. Therefore, the task of converting the image back into the intended domain involves solving an underdetermined matrix equation, since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, adding the constraint that the initial signal is sparse enables one to solve this underdetermined system of linear equations.
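
A minimal sketch of the acquisition step, assuming a random Gaussian measurement matrix and a signal that is sparse directly in the sampling domain (the dimensions and sparsity level below are illustrative, not from the original article):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                      # ambient dimension, number of measurements, sparsity

x = np.zeros(n)                           # a signal with only k nonzero coefficients
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # each row defines one weighted linear combination
y = Phi @ x                                      # the m compressive measurements

print(y.shape)        # (64,): far fewer numbers acquired than the n = 256 original coefficients
```

Recovering x from y then amounts to solving the underdetermined system Phi x = y under the sparsity constraint.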

The least-squares approach to such problems is to minimize the L2 norm, that is, to minimize the amount of energy in the system. This is usually simple mathematically (involving only multiplication by the pseudo-inverse of the basis sampled in). However, it leads to poor results for many practical applications, because the minimum-energy solution is rarely sparse: it spreads energy over nearly all coefficients rather than concentrating it in a few.
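
The following sketch, under the same illustrative assumptions as above (random Gaussian measurements, a synthetically sparse signal), shows the minimum-L2-norm solution obtained via the pseudo-inverse and how it fails to be sparse:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8
Phi = rng.standard_normal((m, n))

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true

x_l2 = np.linalg.pinv(Phi) @ y            # minimum-energy (L2) solution of the underdetermined system

print(np.count_nonzero(np.abs(x_true) > 1e-9))   # 8: the true signal is sparse
print(np.count_nonzero(np.abs(x_l2) > 1e-9))     # typically 256: the least-squares solution is not
```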

To enforce the sparsity constraint when solving the underdetermined system of linear equations, one can instead minimize the number of nonzero components of the solution.

The function counting the number of non-zero components of a vector was called the L0 "norm" by David Donoho. The quotation marks served two warnings. First, the number-of-nonzeros L0 "norm" is not a proper F-norm, because it is not continuous in its scalar argument: nnz(αx) is constant as α approaches zero, then jumps to zero at α = 0. Second, the terminology clashes with the established use of the L0 norm for the space of measurable functions (equipped with an appropriate metric) and for the space of sequences with the F-norm (x_n) ↦ ∑_n 2^(−n) |x_n| / (1 + |x_n|).[8] Unfortunately, later authors have neglected Donoho's quotation marks and abused the terminology.
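
A tiny sketch of the counting "norm" and its discontinuity in the scalar argument (the sample vector is arbitrary):

```python
import numpy as np

def l0(v):
    """Number of nonzero entries: Donoho's L0 'norm'."""
    return np.count_nonzero(v)

x = np.array([0.0, 3.0, 0.0, -1.5])
print(l0(x))                                   # 2

for alpha in (1.0, 1e-3, 1e-12):
    print(alpha, l0(alpha * x))                # stays 2 for every nonzero alpha ...
print(0.0, l0(0.0 * x))                        # ... but drops to 0 at alpha = 0: not continuous
```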

Candès et al. proved that for many problems it is probable that the L1 norm is equivalent to the L0 norm, in a technical sense. This equivalence result allows one to solve the L1 problem, which is easier than the L0 problem. Finding the candidate with the smallest L1 norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist.[9]
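
As a sketch of that linear-programming formulation, the basis pursuit problem (minimize ||x||_1 subject to A x = y) can be rewritten with auxiliary variables t satisfying −t ≤ x ≤ t. The example below uses SciPy's general-purpose linprog solver rather than a dedicated package such as the L1-MAGIC routines cited above, and the problem sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5                         # signal length, measurements, sparsity (illustrative)

A = rng.standard_normal((m, n)) / np.sqrt(m) # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                               # underdetermined measurements (m < n)

# Basis pursuit:  minimize ||x||_1  subject to  A x = y,
# written as an LP over z = [x, t] with the constraint  -t <= x <= t.
c = np.concatenate([np.zeros(n), np.ones(n)])          # objective: sum of the auxiliary variables t
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])                   #  x - t <= 0  and  -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])                #  A x = y  (t plays no role in the equality)
bounds = [(None, None)] * n + [(0, None)] * n          #  x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
x_hat = res.x[:n]
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```

With these settings the sparse signal is typically recovered essentially exactly, in contrast to the pseudo-inverse solution shown earlier.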

Implementations

The field of compressive sensing is related to other topics in signal processing and computational mathematics, such as underdetermined linear systems, group testing, heavy hitters, sparse coding, multiplexing, sparse sampling, and finite rate of innovation. Imaging techniques with a strong affinity to compressive sensing include coded aperture and computational photography.

Starting with the single-pixel camera[10] from Rice University, an up-to-date list of the most recent hardware implementations of compressive sensing at different technology readiness levels is available.[11] Some hardware implementations (such as those used in MRI or compressed genotyping) do not require an actual physical change, whereas others require substantial re-engineering to perform this new type of sampling. Similarly, a number of hardware implementations already existed before 2004; however, while they acquired signals in a compressed manner, they generally did not use compressive-sensing reconstruction techniques to reconstruct the original signal. The results of those reconstructions were suboptimal and have been greatly improved thanks to compressive sensing.

Compressive sensing in the news

Compressed sensing was in the news as part of the single-pixel camera[10] from Rice University. Some aspects of compressed sensing were featured in Wired's "Engineers Test Highly Accurate Face Recognition".[12] A more recent Wired article, "Using Math to Turn Lo-Res Datasets Into Hi-Res Samples", described compressed sensing as a full-fledged technique.[13] Because that article discussed sampling for MRI, some confusion may have arisen about what the technique can and cannot do.[14][15]

References

  1. ^ Goyal, Vivek; Fletcher, Alyson; Rangan, Sundeep, "The Optimistic Bayesian: Replica Method Analysis of Compressed Sensing" (list of L1 regularization ideas).
  2. ^ Hayes, Brian, "The Best Bits", American Scientist, July 2009.
  3. ^ Tibshirani, Robert, "Regression shrinkage and selection via the lasso", J. Royal Statist. Soc. B, Vol. 58, No. 1, pp. 267–288 (see the Lasso page at Robert Tibshirani's homepage).
  4. ^ Chen, Scott Shaobing; Donoho, David L.; Saunders, Michael A., "Atomic decomposition by basis pursuit", SIAM Journal on Scientific Computing.
  5. ^ Candès, E. J.; Romberg, J.; Tao, T., "Stable signal recovery from incomplete and inaccurate measurements", Comm. Pure Appl. Math., 59, 1207–1223 [1]
  6. ^ a b Donoho, D. L., "Compressed Sensing", IEEE Transactions on Information Theory, V. 52(4), 1289–1306, 2006 [2]
  7. ^ Candès, E. J.; Wakin, M. B., "An Introduction To Compressive Sampling", IEEE Signal Processing Magazine, V. 21, March 2008 [3]
  8. ^ Rolewicz, Stefan, Metric Linear Spaces.
  9. ^ L1-MAGIC, a collection of MATLAB routines [4]
  10. ^ a b http://dsp.rice.edu/cscamera
  11. ^ Compressive Sensing Hardware, http://sites.google.com/site/igorcarron2/compressedsensinghardware
  12. ^ "Engineers Test Highly Accurate Face Recognition", Wired.
  13. ^ "Using Math to Turn Lo-Res Datasets Into Hi-Res Samples", Wired, http://www.wired.com/magazine/2010/02/ff_algorithm/all/1
  14. ^ "Why Compressed Sensing is NOT a CSI 'Enhance' technology ... yet!"
  15. ^ "Surely You Must Be Joking Mr. Screenwriter"
