Scale space
Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing and signal processing communities, with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales, by representing an image as a one-parameter family of smoothed images, the scale-space representation, parametrized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter $t$ in this family is referred to as the "scale parameter", with the interpretation that image structures of spatial size smaller than about $\sqrt{t}$ have largely been smoothed away in the scale-space level at scale $t$.

The main type of scale-space is the "linear (Gaussian) scale-space", which has wide applicability as well as the attractive property of being derivable from a small set of "scale-space axioms". The corresponding scale-space framework encompasses a theory for Gaussian derivative operators, which can be used as a basis for expressing a large class of visual operations for computerized systems that process visual information. This framework also allows visual operations to be made "scale invariant", which is necessary for dealing with the size variations that may occur in image data, because real-world objects may be of different sizes and, in addition, the distance between the object and the camera may be unknown and may vary depending on the circumstances.

Definition
The notion of scale-space applies to signals of arbitrary numbers of variables. The most common case in the literature applies to two-dimensional images, which is what is presented here. For a given image $f(x, y)$, its linear (Gaussian) "scale-space representation" is a family of derived signals $L(x, y; t)$ defined by the convolution of $f(x, y)$ with the Gaussian kernel

$g(x, y; t) = \frac{1}{2\pi t} e^{-(x^2 + y^2)/(2t)}$

such that

$L(\cdot, \cdot; t) = g(\cdot, \cdot; t) * f(\cdot, \cdot),$

where the semicolon in the argument of $L$ implies that the convolution is performed only over the variables $x, y$, while the scale parameter $t$ after the semicolon just indicates which scale level is being defined. This definition of $L$ works for a continuum of scales $t \ge 0$, but typically only a finite discrete set of levels in the scale-space representation would actually be considered.

$t = \sigma^2$ is the variance of the Gaussian filter, and as a limit for $t = 0$ the filter $g$ becomes an impulse function such that $L(x, y; 0) = f(x, y)$, that is, the scale-space representation at scale level $t = 0$ is the image $f$ itself. As $t$ increases, $L$ is the result of smoothing $f$ with a larger and larger filter, thereby removing more and more of the details that the image contains. Since the standard deviation of the filter is $\sigma = \sqrt{t}$, details that are significantly smaller than this value are to a large extent removed from the image at scale parameter $t$.
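As a concrete illustration of this definition, the following minimal Python sketch (assuming NumPy and SciPy are available; the helper name gaussian_scale_space is chosen here only for illustration) computes a discrete approximation of the Gaussian scale-space representation by smoothing an image with Gaussian kernels of standard deviation $\sigma = \sqrt{t}$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, scales):
    """Return {t: L(.,.;t)} by smoothing `image` with sigma = sqrt(t).

    `image` is a 2-D array; `scales` is an iterable of scale parameters t >= 0.
    This is only a discrete approximation of the continuous definition above.
    """
    image = np.asarray(image, dtype=float)
    representation = {}
    for t in scales:
        if t == 0:
            representation[t] = image.copy()   # L(x, y; 0) = f(x, y)
        else:
            representation[t] = gaussian_filter(image, sigma=np.sqrt(t))
    return representation

# Example usage on a random test image at a few scale levels.
f = np.random.rand(128, 128)
L = gaussian_scale_space(f, scales=[0, 1, 4, 16, 64])
```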
[Figure: scale-space representations $L(x, y; t)$ of a grey-level image at a sequence of increasing scale levels $t$, the first corresponding to the original image at $t = 0$.]

Why a Gaussian filter?
When faced with the task of generating a multi-scale representation, one may ask: could any filter $g$ of low-pass type, with a parameter $t$ that determines its width, be used to generate a scale-space? This is, however, not the case. It is of crucial importance that the smoothing filter does not introduce new spurious structures at coarse scales that do not correspond to simplifications of corresponding structures at finer scales. In the scale-space literature, a number of different ways of formulating this criterion in precise mathematical terms have been expressed.
The conclusion from several different axiomatic derivations that have been presented is that the Gaussian scale-space constitutes the "canonical" way to generate a linear scale-space, based on the essential requirement that new structures must not be created when going from a fine scale to any coarser scale. [Witkin, A. P. "Scale-space filtering", Proc. 8th Int. Joint Conf. Art. Intell., Karlsruhe, Germany, 1019–1022, 1983.] [Koenderink, Jan "The structure of images", Biological Cybernetics, 50:363–370, 1984.] [http://www.nada.kth.se/~tony/book.html Lindeberg, Tony, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, 1994, ISBN 0-7923-9418-6.] [Florack, Luc, Image Structure, Kluwer Academic Publishers, 1997.] [http://www.springer.com/sgw/cda/frontpage/0,,5-40356-72-33673666-0,00.html Sporring, Jon et al. (Eds), Gaussian Scale-Space Theory, Kluwer Academic Publishers, 1997.] [Romeny, Bart ter Haar, Front-End Vision and Multi-Scale Image Analysis, Kluwer Academic Publishers, 2003.] Conditions, referred to as "scale-space axioms", that have been used for deriving the uniqueness of the Gaussian kernel include linearity, shift invariance, semi-group structure, non-enhancement of local extrema, scale invariance and rotational invariance.

Equivalently, the scale-space family can be defined as the solution of the diffusion equation (the heat equation)

$\partial_t L = \frac{1}{2} \nabla^2 L = \frac{1}{2} (L_{xx} + L_{yy}),$
with initial condition $L(x, y; 0) = f(x, y)$. This formulation of the scale-space representation $L$ means that it is possible to interpret the intensity values of the image $f$ as a "temperature distribution" in the image plane, and that the process which generates the scale-space representation as a function of $t$ corresponds to heat diffusion in the image plane over time $t$ (assuming the thermal conductivity of the material to be equal to the arbitrarily chosen constant 1/2). Although this connection may appear superficial to a reader not familiar with differential equations, the main scale-space formulation in terms of non-enhancement of local extrema is indeed expressed as a sign condition on partial derivatives in the 2+1-D volume generated by the scale-space, and thus falls within the framework of partial differential equations. Furthermore, a detailed analysis of the discrete case shows that the diffusion equation provides a unifying link between continuous and discrete scale-spaces, which also generalizes to non-linear scale-spaces, for example those based on anisotropic diffusion. Hence, one may say that the primary way to generate a scale-space is by the diffusion equation, and that the Gaussian kernel arises as the Green's function of this specific partial differential equation.
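To make this equivalence concrete, the short sketch below (an illustration only, not an efficient implementation; the function name and step size are chosen for this example) evolves an image under an explicit finite-difference discretization of $\partial_t L = \tfrac{1}{2}\nabla^2 L$ and compares the result with direct Gaussian smoothing at $\sigma = \sqrt{t}$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def diffuse(image, t, dt=0.1):
    """Evolve `image` under dL/dt = 0.5 * Laplacian(L) for total time t,
    using an explicit Euler scheme (grid spacing 1, replicated borders)."""
    L = np.asarray(image, dtype=float).copy()
    for _ in range(int(round(t / dt))):
        Lp = np.pad(L, 1, mode="edge")
        laplacian = (Lp[:-2, 1:-1] + Lp[2:, 1:-1] +
                     Lp[1:-1, :-2] + Lp[1:-1, 2:] - 4.0 * L)
        L += dt * 0.5 * laplacian
    return L

f = np.random.rand(64, 64)
t = 4.0
L_diffusion = diffuse(f, t)                                          # heat-equation solution
L_gaussian = gaussian_filter(f, sigma=np.sqrt(t), mode="nearest")    # Gaussian smoothing
print(np.max(np.abs(L_diffusion - L_gaussian)))  # small discrepancy from the discretization
```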
Motivations
The motivation for generating a scale-space representation of a given data set originates from the basic observation that real-world objects are composed of different structures at different scales. This implies that real-world objects, in contrast to idealized mathematical entities such as points or lines, may appear in different ways depending on the scale of observation. For example, the concept of a "tree" is appropriate at the scale of meters, while concepts such as leaves and molecules are more appropriate at finer scales. For a machine vision system analysing an unknown scene, there is no way to know a priori what scales are appropriate for describing the interesting structures in the image data. Hence, the only reasonable approach is to consider descriptions at multiple scales, in order to capture the unknown scale variations that may occur. Taken to the limit, a scale-space representation considers representations at all scales.

Another motivation for the scale-space concept originates from the process of performing a physical measurement on real-world data. In order to extract any information from a measurement process, one has to apply operators of non-infinitesimal size to the data. In many branches of computer science and applied mathematics, the size of the measurement operator is disregarded in the theoretical modelling of a problem. Scale-space theory, on the other hand, explicitly incorporates the need for a non-infinitesimal size of the image operators as an integral part of any measurement, as well as of any other operation that depends on a real-world measurement.
There is a close link between scale-space theory and biological vision. Many scale-space operations show a high degree of similarity with receptive field profiles recorded from the mammalian retina and the first stages in the visual cortex. In these respects, the scale-space framework can be seen as a theoretically well-founded paradigm for early vision, which in addition has been thoroughly tested by algorithms and experiments.
Gaussian derivatives and the notion of a visual front-end
At any scale $t$ in scale-space, we can apply local derivative operators to the scale-space representation:

$L_{x^m y^n}(x, y; t) = \partial_{x^m y^n} L(x, y; t).$

Due to the commutative property between the derivative operator and the Gaussian smoothing operator, such "scale-space derivatives" can equivalently be computed by convolving the original image with Gaussian derivative operators. For this reason they are often also referred to as "Gaussian derivatives":

$L_{x^m y^n}(\cdot, \cdot; t) = \partial_{x^m y^n} g(\cdot, \cdot; t) * f(\cdot, \cdot).$
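As an illustration of this property (a minimal sketch assuming SciPy; the helper name gaussian_derivative is made up for this example), first-order Gaussian derivatives and the gradient magnitude at a given scale can be computed with SciPy's derivative-of-Gaussian filtering:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative(image, t, dx=0, dy=0):
    """Gaussian derivative L_{x^dx y^dy}(.,.;t): smoothing at sigma = sqrt(t)
    combined with differentiation of the given orders."""
    sigma = np.sqrt(t)
    # `order` is specified per axis: (row/y order, column/x order).
    return gaussian_filter(np.asarray(image, dtype=float),
                           sigma=sigma, order=(dy, dx))

f = np.random.rand(128, 128)
t = 4.0
Lx = gaussian_derivative(f, t, dx=1)          # first derivative in x at scale t
Ly = gaussian_derivative(f, t, dy=1)          # first derivative in y at scale t
grad_magnitude = np.sqrt(Lx**2 + Ly**2)       # |grad L| at scale t
```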
Interestingly, the uniqueness of the Gaussian derivative operators as local operations derived from a scale-space representation can be obtained by similar axiomatic derivations as are used for deriving the uniqueness of the Gaussian kernel for scale-space smoothing. [Koenderink, Jan and van Doorn, Ans: "Generic neighbourhood operators", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 14, pp 597-605, 1992]
These Gaussian derivative operators can in turn be combined by linear or non-linear operators into a larger variety of different types of feature detectors, which in many cases can be well modelled by differential geometry. Specifically, invariance (or more appropriately "covariance") to local geometric transformations, such as rotations or local affine transformations, can be obtained by considering differential invariants under the appropriate class of transformations, or alternatively by normalizing the Gaussian derivative operators to a locally determined coordinate frame, determined for example from a preferred orientation in the image domain or by applying a preferred local affine transformation to a local image patch (see the article on affine shape adaptation for further details).

When Gaussian derivative operators and differential invariants are used in this way as basic feature detectors at multiple scales, the uncommitted first stages of visual processing are often referred to as a "visual front-end". This overall framework has been applied to a large variety of problems in computer vision, including feature detection, feature classification, image segmentation, image matching, motion estimation, computation of shape cues and object recognition. The set of Gaussian derivative operators up to a certain order is often referred to as the "N-jet" and constitutes a basic type of feature within the scale-space framework.

Examples of multi-scale feature detectors expressed within the scale-space framework
Following the idea of expressing visual operations in terms of differential invariants computed at multiple scales using Gaussian derivative operators, we can express an edge detector from the set of points that satisfy the requirement that the gradient magnitude

$|\nabla L| = \sqrt{L_x^2 + L_y^2}$

should assume a local maximum in the gradient direction $\nabla L = (L_x, L_y)^T$. By working out the differential geometry, it can be shown that this differential edge detector can equivalently be expressed from the zero-crossings of the second-order differential invariant

$L_v^2 L_{vv} = L_x^2 L_{xx} + 2 L_x L_y L_{xy} + L_y^2 L_{yy} = 0$

that satisfy the following sign condition on a third-order differential invariant:

$L_v^3 L_{vvv} = L_x^3 L_{xxx} + 3 L_x^2 L_y L_{xxy} + 3 L_x L_y^2 L_{xyy} + L_y^3 L_{yyy} < 0.$

Similarly, multi-scale blob detectors at any given fixed scale can be obtained from local maxima and local minima of either the Laplacian operator (also referred to as the Laplacian of Gaussian)

$\nabla^2 L = L_{xx} + L_{yy}$

or the determinant of the Hessian matrix

$\det H L = L_{xx} L_{yy} - L_{xy}^2.$

In an analogous fashion, corner detectors and ridge and valley detectors can be expressed as local maxima, minima or zero-crossings of multi-scale differential invariants defined from Gaussian derivatives. The algebraic expressions for the corner and ridge detection operators are, however, somewhat more complex, and the reader is referred to the articles on corner detection and ridge detection for further details. Scale-space operations have also been frequently used for expressing coarse-to-fine methods, in particular for tasks such as image matching and multi-scale image segmentation.
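As a minimal sketch of such a fixed-scale blob detector (assuming SciPy; the threshold value and the 3x3 maximum-detection strategy are illustrative choices, not part of the theory above), the Laplacian and determinant-of-Hessian responses at scale $t$ can be computed from Gaussian derivatives and their local extrema located:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def blob_responses(image, t):
    """Laplacian and determinant-of-Hessian responses at a fixed scale t."""
    sigma = np.sqrt(t)
    f = np.asarray(image, dtype=float)
    Lxx = gaussian_filter(f, sigma, order=(0, 2))   # second derivative in x
    Lyy = gaussian_filter(f, sigma, order=(2, 0))   # second derivative in y
    Lxy = gaussian_filter(f, sigma, order=(1, 1))   # mixed derivative
    laplacian = Lxx + Lyy
    det_hessian = Lxx * Lyy - Lxy**2
    return laplacian, det_hessian

def local_maxima(response, threshold):
    """Boolean mask of local maxima above `threshold` (3x3 neighbourhood)."""
    return (response == maximum_filter(response, size=3)) & (response > threshold)

f = np.random.rand(128, 128)
lap, detH = blob_responses(f, t=9.0)
# Bright blobs correspond to minima of the Laplacian, i.e. maxima of -lap.
bright_blobs = local_maxima(-lap, threshold=0.0)
```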
Automatic scale selection and scale invariant feature detection
The theory presented so far describes a well-founded framework for "representing" image structures at multiple scales. In many cases it is, however, also necessary to select locally appropriate scales for further analysis. This need for "scale selection" originates from two major reasons: (i) real-world objects may have different size, and this size may be unknown to the vision system, and (ii) the distance between the object and the camera can vary, and this distance information may also be unknown "a priori".

A highly useful property of scale-space representation is that image representations can be made invariant to scales, by performing automatic local scale selection [http://www.nada.kth.se/cvap/abstracts/cvap198.html Lindeberg, Tony "Feature detection with automatic scale selection", International Journal of Computer Vision, 30, 2, pp 77–116, 1998.] [http://www.nada.kth.se/cvap/abstracts/cvap191.html Lindeberg, Tony "Edge detection and ridge detection with automatic scale selection", International Journal of Computer Vision, 30, 2, pp 117–154, 1998.] based on local maxima (or minima) over scales of normalized derivatives

$L_{\xi^m \eta^n}(x, y; t) = t^{(m+n)\gamma/2} L_{x^m y^n}(x, y; t),$

where $\gamma$ is a parameter that is related to the dimensionality of the image feature. This algebraic expression for "scale normalized Gaussian derivative operators" originates from the introduction of "$\gamma$-normalized derivatives" according to

$\partial_\xi = t^{\gamma/2} \partial_x \quad \text{and} \quad \partial_\eta = t^{\gamma/2} \partial_y.$

It can be theoretically shown that a scale selection module working according to this principle will satisfy the following "scale invariance property": if, for a certain type of image feature, a local maximum is assumed in a certain image at a certain scale $t_0$, then under a rescaling of the image by a scale factor $s$ the local maximum over scales in the rescaled image will be transformed to the scale level $s^2 t_0$.

Following this approach of gamma-normalized derivatives, it can be shown that different types of "scale adaptive and scale invariant feature detectors" can be expressed for tasks such as blob detection, corner detection, ridge detection and edge detection (see the specific articles on these topics for in-depth descriptions of how these scale-invariant feature detectors are formulated).
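As a minimal illustration of this principle (a sketch assuming SciPy; the feature type, Laplacian blobs with $\gamma = 1$, and the discrete set of scale levels are illustrative assumptions only), automatic scale selection can be performed by searching for extrema of the scale-normalized Laplacian $t\,\nabla^2 L$ over scales:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_laplacian_volume(image, scales):
    """Stack of scale-normalized Laplacian responses t*(Lxx + Lyy), gamma = 1."""
    f = np.asarray(image, dtype=float)
    responses = []
    for t in scales:
        sigma = np.sqrt(t)
        Lxx = gaussian_filter(f, sigma, order=(0, 2))
        Lyy = gaussian_filter(f, sigma, order=(2, 0))
        responses.append(t * (Lxx + Lyy))
    return np.stack(responses)          # shape: (n_scales, height, width)

def select_scale(response_volume, scales, y, x):
    """Return the scale at which |t * Laplacian| is maximal at pixel (y, x)."""
    index = np.argmax(np.abs(response_volume[:, y, x]))
    return scales[index]

f = np.random.rand(128, 128)
scales = [1, 2, 4, 8, 16, 32]
volume = normalized_laplacian_volume(f, scales)
t_selected = select_scale(volume, scales, y=64, x=64)
```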
Furthermore, the scale levels obtained from automatic scale selection can be used for determining regions of interest for subsequent affine shape adaptation [http://www.nada.kth.se/~tony/abstracts/LG94-ECCV.html Lindeberg, T. and Garding, J.: Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D structure, Image and Vision Computing, 15, 415–434, 1997.] to obtain affine invariant interest points [http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/cvpr/2000/0662/01/0662toc.xml&DOI=10.1109/CVPR.2000.855899 Baumberg, A.: Reliable feature matching across widely separated views, Proc. Computer Vision Pattern Recognition, I:1774–1781, 2000.] [http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf Mikolajczyk, K. and Schmid, C.: Scale and affine invariant interest point detectors, Int. Journal of Computer Vision, 60:1, 63–86, 2004.] or for determining scale levels for computing associated image descriptors, such as locally scale adapted N-jets.

Recent work has shown that more complex operations, such as scale-invariant object recognition, can also be performed in this way, by computing local image descriptors (N-jets or local histograms of gradient directions) at scale-adapted interest points obtained from scale-space maxima of the normalized Laplacian operator (see also the scale-invariant feature transform [http://citeseer.ist.psu.edu/lowe04distinctive.html Lowe, D. G., "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, 60, 2, pp. 91–110, 2004.]).

Related multi-scale representations
Pyramid representation is a predecessor to scale-space representation, constructed by simultaneously smoothing and subsampling a given signal. [Burt, Peter and Adelson, Ted, "The Laplacian Pyramid as a Compact Image Code", IEEE Trans. Communications, 9:4, 532–540, 1983.] [http://www-prima.inrialpes.fr/Prima/Homepages/jlc/papers/Crowley-Sanderson-PAMI87.pdf Crowley, J. L. and Sanderson, A. C. "Multiple resolution representation and probabilistic matching of 2-D gray-scale shape", IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(1), pp 113–121, 1987.] In this way, computationally highly efficient algorithms can be obtained. In a pyramid, however, it is usually algorithmically harder to relate structures at different scales, due to the discrete nature of the scale levels. In a scale-space representation, the existence of a continuous scale parameter makes it conceptually much easier to express this so-called "deep structure". For features defined as zero-crossings of differential invariants, the implicit function theorem directly defines trajectories across scales, and at those scales where bifurcations occur, the local behaviour can be modelled by singularity theory.
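For comparison with the scale-space construction above, the following sketch (an illustration only; the amount of pre-smoothing per level is a common but not unique choice) builds a simple Gaussian pyramid by repeated smoothing and subsampling by a factor of two:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels, sigma=1.0):
    """Return a list of images, each one smoothed and subsampled by a factor of 2."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma)   # anti-aliasing smoothing
        pyramid.append(smoothed[::2, ::2])               # drop every second row and column
    return pyramid

f = np.random.rand(256, 256)
levels = gaussian_pyramid(f, levels=4)
print([level.shape for level in levels])   # (256, 256), (128, 128), (64, 64), (32, 32)
```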
Extensions of linear scale-space theory concern the formulation of non-linear scale-space concepts more committed to specific purposes. [Romeny, Bart (Ed), Geometry-Driven Diffusion in Computer Vision, Kluwer Academic Publishers, 1994.] [http://www.mia.uni-saarland.de/weickert/book.html Weickert, J., Anisotropic diffusion in image processing, Teubner Verlag, Stuttgart, 1998.] These "non-linear scale-spaces" often start from the equivalent diffusion formulation of the scale-space concept, which is subsequently extended in a non-linear fashion. A large number of evolution equations have been formulated in this way, motivated by different specific requirements (see the above-mentioned book references for further information). Not all of these non-linear scale-spaces, however, satisfy similar "nice" theoretical requirements as the linear Gaussian scale-space concept. Hence, unexpected artefacts may sometimes occur, and one should be careful not to use the term "scale-space" for just any type of one-parameter family of images.

A first-order extension of the isotropic Gaussian scale-space is provided by the "affine (Gaussian) scale-space". One motivation for this extension originates from the common need for computing image descriptors of real-world objects that are viewed under a perspective camera model. To handle such non-linear deformations locally, partial invariance (or more correctly covariance) to local affine deformations can be achieved by considering affine Gaussian kernels with their shapes determined by the local image structure; see the article on affine shape adaptation for theory and algorithms. Indeed, this affine scale-space can also be expressed from a non-isotropic extension of the linear (isotropic) diffusion equation, while still being within the class of linear partial differential equations.
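As an illustration of what such an affine Gaussian kernel looks like (a sketch only; the covariance matrix below is an arbitrary example rather than one estimated from local image structure as affine shape adaptation would do), a non-isotropic two-dimensional Gaussian kernel can be sampled from a 2x2 covariance matrix $\Sigma$:

```python
import numpy as np

def affine_gaussian_kernel(covariance, radius):
    """Sampled affine (anisotropic) Gaussian kernel g(x; Sigma) on a square grid."""
    covariance = np.asarray(covariance, dtype=float)
    inv = np.linalg.inv(covariance)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(covariance)))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Quadratic form x^T Sigma^{-1} x evaluated at every grid point.
    quad = inv[0, 0] * xs**2 + 2.0 * inv[0, 1] * xs * ys + inv[1, 1] * ys**2
    kernel = norm * np.exp(-0.5 * quad)
    return kernel / kernel.sum()        # normalize the discrete kernel

# Example: an elongated, tilted kernel, as might arise for a slanted surface patch.
Sigma = np.array([[9.0, 3.0],
                  [3.0, 4.0]])
kernel = affine_gaussian_kernel(Sigma, radius=15)
```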
There are strong relations between scale-space theory and wavelet theory, although these two notions of multi-scale representation have been developed from somewhat different premises. There has also been work on other multi-scale approaches, such as pyramids and a variety of other kernels, that do not exploit or require the same requirements as true scale-space descriptions do.

Relations to biological vision
There are interesting relations between scale-space representation and biological vision. Neurophysiological studies have shown that there are receptive field profiles in the mammalian retina and visual cortex which can be well modelled by linear Gaussian derivative operators, in some cases also complemented by a non-isotropic affine scale-space model and/or non-linear combinations of such linear operators. [Young, R. A. "The Gaussian derivative model for spatial vision: Retinal mechanisms", Spatial Vision, 2:273–293, 1987.] [http://cobalt056.bpe.es.osaka-u.ac.jp/ohzawa-lab/publications/1995/TINS95.html DeAngelis, G. C., Ohzawa, I., and Freeman, R. D., "Receptive-field dynamics in the central visual pathways", Trends Neurosci. 18: 451–458, 1995.]

Implementation issues
When implementing scale-space smoothing in practice, a number of different approaches can be taken in terms of continuous or discrete Gaussian smoothing, implementation in the Fourier domain, pyramids based on binomial filters that approximate the Gaussian, or recursive filters. More details about this are given in a separate article on scale-space implementation.
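As a small illustration of the Fourier-domain approach (a sketch assuming SciPy; in practice the dedicated schemes discussed in the scale-space implementation article may be preferable), Gaussian smoothing can be carried out by multiplication with a Gaussian transfer function in the frequency domain and compared against direct spatial filtering:

```python
import numpy as np
from scipy.ndimage import fourier_gaussian, gaussian_filter

f = np.random.rand(128, 128)
t = 9.0
sigma = np.sqrt(t)

# Fourier-domain implementation: FFT, multiply by a Gaussian transfer function, inverse FFT.
L_fourier = np.fft.ifft2(fourier_gaussian(np.fft.fft2(f), sigma)).real

# Direct spatial-domain implementation with periodic ("wrap") boundary conditions,
# so that both results use the same boundary model.
L_spatial = gaussian_filter(f, sigma, mode="wrap")

print(np.max(np.abs(L_fourier - L_spatial)))   # small difference from kernel truncation
```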
See also
Complementary articles on specific sub-topics of scale-space:
*scale-space axioms
*scale-space implementation
*scale-space segmentation
*multi-scale approaches
Multi-scale feature detection within the scale-space framework:
*edge detection
*blob detection
*corner detection
*ridge detection
*affine shape adaptation
*interest point detection
The Gaussian function and other smoothing or multi-scale approaches:
*Gaussian function
*Gaussian filter
*multi-scale approaches
*wavelets
*non-linear scale-space
*smoothing
*pyramid (image processing)
*mipmapping
More general articles on feature detection, computer vision and image processing:
*feature detection (computer vision)
*computer vision
*image processing
External links
* [http://www.nada.kth.se/~tony/cern-review/cern-html/cern-html.html Lindeberg, Tony, "Scale-space: A framework for handling image structures at multiple scales", In: Proc. CERN School of Computing, Egmond aan Zee, The Netherlands, 8-21 September, 1996] (online web tutorial)
* [http://www.nada.kth.se/~tony/abstracts/Lin94-SI-abstract.html Lindeberg, Tony: Scale-space theory: A basic tool for analysing structures at different scales, in J. of Applied Statistics, 21(2), pp. 224–270, 1994] (longer pdf tutorial on scale-space)
* [http://www.nada.kth.se/cvap/abstracts/cvap222.html Lindeberg, Tony, "Principles for automatic scale selection", In: B. Jähne (et al., eds.), Handbook on Computer Vision and Applications, volume 2, pp 239–274, Academic Press, Boston, USA, 1999.] (tutorial on approaches to automatic scale selection)
* [http://micro.magnet.fsu.edu/primer/java/scienceopticsu/powersof10/index.html Powers of ten interactive Java tutorial at Molecular Expressions website]
* [http://cobalt056.bpe.es.osaka-u.ac.jp/ohzawa-lab/teaching/AA_RFtutorial.html On-line resource with space-time receptive fields of visual neurons provided by Izumi Ohzawa at Osaka University]
* [http://wagga.cs.umass.edu/~manmatha/cmpsci670/lecturespdf/lecture10.pdf Lecture on scale-space at the University of Massachusetts] (pdf)
* [http://operaomnia.interfree.it/thesis/thesis_italy_XX_ciclo_andrea_anzalone.html Multiscale analysis for optimized vessel segmentation of fundus retina images] Ph.D Thesis