Structured Light 3D Scanner

Projecting a narrow band of light onto a three-dimensionally shaped surface produces a line of illumination that appears distorted from perspectives other than that of the projector, and can be used for an exact geometric reconstruction of the surface shape (light sectioning).

A faster and more versatile method is the projection of patterns consisting of many stripes at once, or of arbitrary fringes, as this allows for the acquisition of a multitude of samples simultaneously. Seen from different viewpoints, the pattern appears geometrically distorted due to the surface shape of the object.

Although many other variants of structured light projection are possible, patterns of parallel stripes are widely used. The picture shows the geometric deformation of a single stripe projected onto a simple 3D surface. The displacement of the stripes allows for an exact retrieval of the 3D coordinates of any detail on the object's surface.
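The displacement-to-coordinates step can be sketched as a plane–ray triangulation. The following is a minimal illustration under simplifying assumptions (projector and camera centres on a common baseline, computation restricted to the 2D plane containing the measured point), not any particular scanner's implementation:

```python
import math

def triangulate(baseline, theta_proj, phi_cam):
    """Depth of a surface point from one stripe (light-plane) observation.

    baseline   -- distance between projector and camera centres
    theta_proj -- angle of the projected light plane, measured from the baseline
    phi_cam    -- viewing angle of the camera ray to the point, from the baseline

    The projector ray z = x*tan(theta) and the camera ray z = (b - x)*tan(phi)
    are intersected; their crossing point is the measured surface point.
    """
    tt, tp = math.tan(theta_proj), math.tan(phi_cam)
    z = baseline * tt * tp / (tt + tp)   # perpendicular distance from the baseline
    x = z / tt                           # position along the baseline
    return x, z
```

With both angles at 45° and a unit baseline, the point lands halfway between projector and camera at half-unit depth; any displacement of the observed stripe changes `phi_cam` and hence the recovered depth.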

Generation of stripe patterns

Two major methods of stripe pattern generation have been established: laser interference and projection.

The laser interference method works with two wide, planar laser beam fronts. Their interference produces regular, equidistant line patterns; different pattern pitches can be obtained by changing the angle between the beams. The method allows for the exact and easy generation of very fine patterns with unlimited depth of field. Disadvantages are the high cost of implementation, difficulty in providing the ideal beam geometry, and laser-typical effects such as speckle noise and possible self-interference with beam parts reflected from objects. Also, there is typically no means of modulating individual stripes, e.g. with Gray codes (see below).
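The fringe pitch of such an interference pattern follows directly from the wavelength and the crossing angle of the two beams, Λ = λ / (2 sin(θ/2)). A minimal sketch (the example values are illustrative):

```python
import math

def fringe_pitch(wavelength, crossing_angle):
    """Period of the interference fringes of two crossing plane waves.

    wavelength     -- laser wavelength (e.g. in micrometres)
    crossing_angle -- full angle between the two beam directions (radians)
    """
    return wavelength / (2.0 * math.sin(crossing_angle / 2.0))

# Illustrative example: a 0.633 um HeNe laser with a 1 degree crossing angle
# yields a fringe pitch of roughly 36 um; smaller angles give coarser fringes.
pitch = fringe_pitch(0.633, math.radians(1.0))
```

This also shows why the method reaches very fine patterns easily: the pitch shrinks smoothly as the crossing angle grows, with no display pixels involved.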

The projection method uses incoherent light and basically works like a video projector. Patterns are generated by a display within the projector, typically an LCD (liquid crystal) or LCOS (liquid crystal on silicon) display.

A proprietary projection method uses DLP (moving micro-mirror) displays. DLP displays do not absorb light significantly and therefore allow very high light intensities. They also have extremely linear gray-value reproduction, since they are driven by pulse-length modulation.

In principle, stripes generated by display projectors have small discontinuities at the pixel boundaries of the displays. Sufficiently small boundaries can, however, practically be neglected, as they are evened out by the slightest defocus.

A typical measuring assembly consists of one stripe projector and at least one camera. For many applications, two cameras on opposite sides of the projector have proven useful.


Geometric distortions caused by optics and perspective must be compensated for by calibrating the measuring equipment, using special calibration patterns and surfaces. A mathematical model describes the imaging properties of the projector and cameras. Essentially based on the simple geometric properties of a pinhole camera, the model must also take into account the geometric distortions and optical aberrations of the projector and camera lenses. The parameters of the cameras, as well as their orientation in space, can be determined by a series of calibration measurements using photogrammetric bundle adjustment.
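A minimal sketch of such an imaging model, here a pinhole projection with two radial distortion coefficients; the parameter names and values are illustrative only, since a real calibration estimates them from images of the calibration patterns via bundle adjustment:

```python
import numpy as np

def project_pinhole(points_cam, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project 3D points (already in camera coordinates) to pixel coordinates.

    fx, fy  -- focal lengths in pixels        cx, cy -- principal point
    k1, k2  -- radial distortion coefficients (hypothetical values; estimated
               during calibration in a real system)
    """
    X, Y, Z = points_cam.T
    x, y = X / Z, Y / Z                    # normalised image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2       # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.column_stack([u, v])
```

Calibration then amounts to finding the parameter values that make the projected positions of known calibration-target points match their observed pixel positions.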

Analysis of stripe patterns

There are several depth cues contained in the observed stripe patterns. The displacement of any single stripe can be converted directly into 3D coordinates. For this purpose, the individual stripe has to be identified, which can be accomplished e.g. by tracing or counting stripes (pattern recognition method). Another common method projects alternating stripe patterns, resulting in binary Gray code sequences that identify the number of each individual stripe hitting the object.

An important depth cue also results from the varying stripe widths along the object surface. Stripe width is a function of the steepness of the surface, i.e. the first derivative of the elevation. Stripe frequency and phase deliver similar cues and can be analyzed by Fourier transform. Finally, the wavelet transform has recently been discussed for the same purpose.
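The Gray code stripe numbering can be sketched as follows: each projected pattern contributes one bit of the stripe index, and because consecutive Gray codes differ in exactly one bit, a decoding error at a stripe boundary misidentifies the index by at most one. A minimal encoder/decoder:

```python
def gray_encode(n):
    """Gray code of integer n: consecutive stripe indices differ in one bit,
    so pattern k of the projected sequence simply shows bit k of this code."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code: recover the stripe index from the bits a camera
    pixel collected over the sequence of projected patterns."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, stripe 5 (binary 101) is encoded as 111; with 10 projected patterns, 1024 stripes can be numbered unambiguously.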

In many practical implementations, series of measurements combining pattern recognition, Gray codes and Fourier transform are obtained for a complete and unambiguous reconstruction of shapes.
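One common way such combined series are merged (a sketch of the principle, not any specific system's algorithm): the Gray code yields an unambiguous but coarse stripe index, while phase analysis yields a fine but 2π-ambiguous phase; adding them gives an absolute, unambiguous phase per pixel:

```python
import math

def absolute_phase(wrapped_phase, stripe_index):
    """Combine a Gray-code stripe index (coarse, unambiguous) with a wrapped
    phase measurement (fine, ambiguous modulo 2*pi) into an absolute phase.
    The absolute phase is proportional to the stripe displacement and hence
    to the surface coordinate being measured."""
    return 2.0 * math.pi * stripe_index + wrapped_phase
```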

Another method belonging to the area of fringe projection utilizes the depth of field of the camera (Univ. of Stuttgart). It is also possible to use projected patterns primarily as a means of inserting structure into scenes, for an essentially photogrammetric acquisition.

Precision and range

The optical resolution of fringe projection methods depends on the width of the stripes used and on their optical quality. It is also limited by the wavelength of light.

An extreme reduction of stripe width proves inefficient due to limitations in depth of field, camera resolution and display resolution. Therefore the phase shift method has been widely established: at least 3, typically about 10, exposures are taken with slightly shifted stripes. The first theoretical treatments of this method relied on stripes with a sine-shaped intensity modulation, but the method works with "rectangular" modulated stripes, as delivered by LCD or DLP displays, as well. By phase shifting, surface detail of e.g. 1/10 of the stripe pitch can be resolved. Current optical stripe pattern profilometry hence allows for detail resolutions down to the wavelength of light, below 1 micrometer in practice, or, with larger stripe patterns, down to approx. 1/10 of the stripe width. Concerning level accuracy, interpolating over several pixels of the acquired camera image can yield a reliable height resolution, and accuracy, down to 1/50 of a pixel.
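Per pixel, the phase shift evaluation reduces to a discrete Fourier-style sum over the N exposures. A minimal sketch for equidistant shifts of 2π/N (the standard N-step algorithm; a real profilometer additionally unwraps the phase and calibrates it to height):

```python
import math

def phase_from_shifts(intensities):
    """Recover the fringe phase at one pixel from N phase-shifted exposures.

    intensities -- list of N >= 3 samples I_k = A + B*cos(phi + 2*pi*k/N),
                   where A is background, B is modulation, phi the phase.
    Returns phi in (-pi, pi]; the (unwrapped) phase, scaled by the stripe
    pitch, gives the stripe displacement and hence the surface elevation.
    """
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)
```

Because background A and modulation B cancel out of the two sums, the result is insensitive to illumination and reflectance variations, which is one reason the method resolves far below the stripe pitch.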

Arbitrarily large objects can be measured with accordingly large stripe patterns and setups. Practical applications are documented involving objects several meters in size.

Typical accuracy figures are:
* Planarity of a 2 ft (60 cm) wide surface, to 10 μm
* Shape of a motor combustion chamber, to 2 μm (elevation), yielding a volume accuracy 10 times better than with volumetric dosing
* Shape of an object 2 in (5 cm) large, to about 1 μm
* Radius of a blade edge of e.g. 10 μm, to ±0.4 μm


As the method can measure shapes from only one perspective at a time, complete 3D shapes have to be combined from measurements taken at different angles. This can be accomplished by attaching marker points to the object and combining the perspectives afterwards by matching these markers. The process can be automated by mounting the object on a motorized turntable or CNC positioning device. Markers can also be applied to a positioning device instead of the object itself.
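Matching marker sets from two perspectives amounts to estimating a rigid transform between corresponding points; a standard way is the least-squares Kabsch/SVD method. A minimal sketch, not tied to any particular scanner software:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping marker set src onto dst
    in the least-squares sense (Kabsch/SVD method).

    src, dst -- (N, 3) arrays of matched marker coordinates, N >= 3.
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Applying the recovered transform to one scan expresses it in the coordinate frame of the other, so overlapping scans merge into one model.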

The 3D data gathered can be used to retrieve CAD (computer aided design) data and models from existing components (reverse engineering), hand formed samples or sculptures, natural objects or artifacts.


As with all optical methods, reflective or transparent surfaces raise difficulties. Reflections cause light to be directed either away from the camera or straight into its optics; in both cases, the dynamic range of the camera can be exceeded. Double reflections can cause the stripe pattern to be overlaid with unwanted light, entirely preventing proper detection. Reflective cavities are therefore difficult to handle. Transparent or semi-transparent surfaces also cause major difficulties. In these cases, coating the surfaces with a thin opaque lacquer, just for measuring purposes, is common practice. For measuring entirely reflective surfaces, the alternative method of fringe reflection has been implemented.


Although several patterns have to be taken per picture in most structured light variants, high-speed implementations are available for a number of applications, for example:
* Inline precision inspection of components during the production process
* Health care applications, such as live measurement of human body shapes or the microstructures of human skin

Motion picture applications have been proposed, for example the acquisition of spatial scene data for three-dimensional television.


Applications

* Precision shape measurement for production control (e.g. turbine blades)
* Reverse engineering (obtaining precision CAD data from existing objects)
* Volume measurement (e.g. combustion chamber volume in motors)
* Classification of grinding materials and tools
* Precision structure measurement of ground surfaces
* Radius determination of cutting tool blades
* Precision measurement of planarity
* Documenting objects of cultural heritage
* Skin surface measurement for cosmetics and medicine
* Body shape measurement
* Forensic inspections
* Road pavement structure and roughness
* Wrinkle measurement on cloth and leather
* Measurement of topography of solar cells (see reference W J Walecki, et al. 2008)


References

* Fringe 2005: The 5th International Workshop on Automatic Processing of Fringe Patterns. Berlin: Springer, 2006. ISBN 3-540-26037-4, ISBN 978-3-540-26037-0
* Fechteler, P., Eisert, P., Rurainsky, J.: Fast and High Resolution 3D Face Scanning. Proc. of ICIP 2007
* Fechteler, P., Eisert, P.: Adaptive Color Classification for Structured Light Systems. Proc. of CVPR 2008
* Peng, T., Gupta, S.K.: Model and algorithms for point cloud construction using digital projection patterns. Journal of Computing and Information Science in Engineering, 7(4): 372–381, 2007
* Hof, C., Hopermann, H.: Comparison of Replica- and In-Vivo-Measurement of the Microtopography of Human Skin. University of the Federal Armed Forces, Hamburg
* Frankowski, G., Chen, M., Huth, T.: Real-time 3D Shape Measurement with Digital Stripe Projection by Texas Instruments Micromirror Devices (DMD). Proc. of SPIE Vol. 3958 (2000), pp. 90–106
* Frankowski, G., Chen, M., Huth, T.: Optical Measurement of the 3D-Coordinates and the Combustion Chamber Volume of Engine Cylinder Heads. Proc. of "Fringe 2001", pp. 593–598
* Elena Stoykova, Jana Harizanova, Ventseslav Sainov: Pattern Projection Profilometry for 3D Coordinates Measurement of Dynamic Scenes. In: Three-Dimensional Television, Springer, 2008, ISBN 978-3-540-72531-2
* Song Zhang, Peisen Huang: High-resolution, Real-time 3-D Shape Measurement (PhD dissertation, Stony Brook Univ., 2005)
* W. Wilke: Segmentierung und Approximation großer Punktwolken (Segmentation and approximation of large point clouds; dissertation, Univ. Darmstadt, 2000)
* G. Wiora: Optische 3D-Messtechnik: Präzise Gestaltvermessung mit einem erweiterten Streifenprojektionsverfahren (Optical 3D metrology: precise shape measurement with an extended fringe projection method; dissertation, Univ. Heidelberg, 2001)
* Klaus Körner, Ulrich Droste: Tiefenscannende Streifenprojektion (DSFP) (depth-scanning fringe projection), University of Stuttgart (further English references on the site)
* W. J. Walecki, F. Szondy, M. M. Hilali: "Fast in-line surface topography metrology enabling stress calculation for solar cell manufacturing for throughput in excess of 2000 wafers per hour". Meas. Sci. Technol. 19 (2008) 025302 (6 pp), doi:10.1088/0957-0233/19/2/025302

Wikimedia Foundation. 2010.
