Wave field synthesis
Wave field synthesis (WFS) is a spatial audio rendering technique for creating virtual acoustic environments. It produces "artificial" wave fronts synthesized by a large number of individually driven loudspeakers. Such wave fronts seem to originate from a virtual starting point, the "virtual source" or "notional source". Contrary to traditional spatialization techniques such as stereo, the localization of virtual sources in WFS does not depend on or change with the listener's position.
Physical fundamentals
WFS is based on Huygens' principle, which states that any wave front can be regarded as a superposition of elementary spherical waves. Therefore, any wave front can be synthesized from such elementary waves. In practice, a computer controls a large array of individual loudspeakers and actuates each one at exactly the time when the desired virtual wave front would pass through it. The basic procedure was developed in 1988 by Professor A.J. Berkhout at the Delft University of Technology. [1] Its mathematical basis is the Kirchhoff-Helmholtz integral, which states that the sound pressure within a source-free volume is completely determined if the sound pressure and particle velocity are known at all points on its surface:
$$P(\mathbf{x},\omega)=\oint_{\partial V}\left[G(\mathbf{x}|\mathbf{x}_0,\omega)\,\frac{\partial P(\mathbf{x}_0,\omega)}{\partial n}-P(\mathbf{x}_0,\omega)\,\frac{\partial G(\mathbf{x}|\mathbf{x}_0,\omega)}{\partial n}\right]\mathrm{d}S,\qquad G(\mathbf{x}|\mathbf{x}_0,\omega)=\frac{e^{-j\frac{\omega}{c}|\mathbf{x}-\mathbf{x}_0|}}{4\pi\,|\mathbf{x}-\mathbf{x}_0|}$$

Here P(x, ω) is the sound pressure at a point x inside the source-free volume V, ∂/∂n denotes the derivative along the outward normal of the boundary ∂V, and the normal pressure gradient is linked to the particle velocity v_n by Euler's equation, ∂P/∂n = −jωρ v_n (for the e^{jωt} time convention).
Therefore, any sound field can be reconstructed if the sound pressure and acoustic particle velocity are restored at all points on the surface of its volume. This approach is the underlying principle of holophony.
For reproduction, the entire surface of the volume would have to be covered with closely spaced monopole and dipole loudspeakers, each individually driven with its own signal. Moreover, the listening area would have to be anechoic, in order to comply with the source-free volume assumption. In practice, this is hardly feasible.
According to the Rayleigh II integral, the sound pressure is determined at every point of a half-space if the sound pressure is known at every point of the dividing plane. Because our acoustic perception is most accurate in the horizontal plane, practical approaches generally reduce the problem to a horizontal loudspeaker line, circle or rectangle around the listener.
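In the frequency domain, and for the e^{jωt} time convention used above, one common statement of the Rayleigh II integral for a receiver point x in the half-space in front of the plane S is

$$P(\mathbf{x},\omega)=\frac{1}{2\pi}\int_{S}P(\mathbf{x}_0,\omega)\,\frac{1+j\frac{\omega}{c}\,\Delta r}{\Delta r}\,\cos\varphi\;\frac{e^{-j\frac{\omega}{c}\,\Delta r}}{\Delta r}\,\mathrm{d}S,$$

where Δr = |x − x_0| and φ is the angle between the normal of the plane and the line from the surface point x_0 to the receiver; the sign and prefactor depend on the chosen normal direction and time convention.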
The origin of the synthesized wave front can be at any point on the horizontal plane of the loudspeakers. It represents the virtual acoustic source, which hardly differs from a material acoustic source at the same position. Unlike conventional (stereo) reproduction, the virtual sources do not move along when the listener moves in the room. For sources behind the loudspeakers, the array produces convex wave fronts. Sources in front of the speakers can be rendered by concave wave fronts that focus in the virtual source and diverge again. Hence the reproduction inside the volume is incomplete: it breaks down if the listener sits between the loudspeakers and a focused source.
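The timing rule above can be illustrated with a minimal sketch. The following Python snippet is a hypothetical, simplified delay-and-gain model only; a complete WFS driving function additionally contains a spectral pre-filter and amplitude-correction terms that are omitted here. It computes when and how loudly each loudspeaker of a linear array must fire for a virtual point source, and time-reverses the delays for a focused source in front of the array:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s


def delay_and_gain(speaker_positions, virtual_source, focused=False):
    """Per-loudspeaker delays (s) and gains for a simplified point-source model."""
    speakers = np.asarray(speaker_positions, dtype=float)
    source = np.asarray(virtual_source, dtype=float)
    distances = np.linalg.norm(speakers - source, axis=1)
    delays = distances / C                     # time until the virtual front reaches each speaker
    gains = 1.0 / np.maximum(distances, 1e-3)  # 1/r spherical spreading, clipped near zero
    if focused:
        # A source in front of the array is rendered by time-reversing the delays,
        # so the elementary waves converge on the focus point and diverge again.
        delays = delays.max() - delays
    return delays, gains


# Example: 16 loudspeakers along the x-axis, 10 cm apart, virtual source 2 m behind them.
speakers = [(0.1 * i, 0.0) for i in range(16)]
delays, gains = delay_and_gain(speakers, (0.75, -2.0))
print(np.round(delays * 1000, 2))  # delays in milliseconds
```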
Procedural advantages
Using level and time information stored in the impulse response of the recording room, or with the help of a model-based mirror-source approach, wave field synthesis can establish a sound field with very stable positions of the acoustic sources. In principle it would even be possible to create a virtual copy of a genuine sound field that is indistinguishable from the real sound. Changes of the listener's position in the rendition area would produce the same impression as a corresponding change of location in the recording room. Listeners are no longer confined to a "sweet spot" within the room.
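The mirror-source idea can be sketched as follows. The helper below is purely illustrative (it is not part of any WFS system) and computes only the first-order image sources of a point source in a rectangular room; a model-based renderer would treat each image source as an additional virtual source, and higher reflection orders would be obtained by mirroring the images again:

```python
def first_order_image_sources(source, room):
    """First-order mirror (image) sources of a point source in a shoebox room.

    `source` is (x, y, z) inside a room spanning (0, 0, 0) to `room` = (Lx, Ly, Lz).
    Each wall reflection is modelled by mirroring the source across that wall.
    """
    x, y, z = source
    Lx, Ly, Lz = room
    return [
        (-x, y, z), (2 * Lx - x, y, z),   # left / right wall
        (x, -y, z), (x, 2 * Ly - y, z),   # front / back wall
        (x, y, -z), (x, y, 2 * Lz - z),   # floor / ceiling
    ]


# A 5 m x 4 m x 3 m room with the source at (1.0, 2.0, 1.5):
print(first_order_image_sources((1.0, 2.0, 1.5), (5.0, 4.0, 3.0)))
```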
The Moving Picture Experts Group standardized the object-oriented transmission standard MPEG-4, which allows separate transmission of content (the dry recorded audio signal) and form (the impulse response or the acoustic model). Each virtual acoustic source needs its own (mono) audio channel. The spatial sound field in the recording room consists of the direct wave of the acoustic source and a spatially distributed pattern of mirror sources caused by reflections from the recording room surfaces. Reducing that spatial mirror-source distribution to a few transmission channels inevitably causes a significant loss of spatial information. This spatial distribution can be synthesized much more accurately at the rendition side.
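The separation of content and form can be pictured as an object-based scene description. The structure below is a hypothetical illustration only, not the MPEG-4 syntax: each object carries its dry mono signal together with the scene metadata from which the rendition side synthesizes the spatial distribution:

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class AudioObject:
    """One virtual source: a dry mono signal ("content") plus scene metadata ("form")."""
    name: str
    samples: List[float]                  # dry, unreverberated mono recording
    position: Tuple[float, float, float]  # virtual source position in the scene
    room_model: str = "shoebox 5x4x3 m"   # acoustic model or reference to an impulse response


@dataclass
class Scene:
    """An object-based scene; the rendering method (WFS, stereo, ...) is chosen at playback."""
    objects: List[AudioObject] = field(default_factory=list)


scene = Scene([AudioObject("voice", [0.0, 0.1, -0.05], (1.0, -3.0, 1.7))])
print(len(scene.objects), "object(s) in the scene")
```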
Compared with conventional, channel-oriented rendition procedures, WFS provides a clear advantage: virtual acoustic sources acting as "virtual panning spots", driven by the signal content of the associated channels, can be positioned far beyond the physical rendition area. This reduces the influence of the listener's position, because the relative changes in angles and levels are much smaller than with nearby, fixed loudspeaker boxes. The sweet spot is extended considerably and can now cover nearly the entire rendition area. Wave field synthesis is thus not only compatible with conventional transmission methods, it clearly improves their reproduction.
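The reduced sensitivity to the listening position can be checked with a small calculation. The snippet below is only a rough illustration with made-up distances: it compares how much the apparent direction of a source straight ahead changes when the listener steps 1 m to the side, for a nearby physical loudspeaker and for a distant virtual panning spot:

```python
import math


def angle_shift_deg(source_distance_m, listener_offset_m=1.0):
    """Change in apparent source direction (degrees) for a sideways listener move."""
    return math.degrees(math.atan2(listener_offset_m, source_distance_m))


print(round(angle_shift_deg(2.5), 1))   # ~21.8 degrees for a loudspeaker 2.5 m away
print(round(angle_shift_deg(20.0), 1))  # ~2.9 degrees for a panning spot 20 m away
```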
Remaining problems
The most perceptible deviation from the original sound field is, to date, the reduction to the horizontal plane of the loudspeaker lines. It is particularly noticeable because the acoustic damping required in the rendition area means that hardly any mirror sources arise outside this plane. Without this acoustic treatment, however, the source-free-volume condition of the mathematical approach would be violated.
Since WFS attempts to simulate the acoustic characteristics of the recording space, the acoustics of the rendition area must be suppressed. One possible solution is to make the walls absorbent and non-reflective. A second possibility is playback in the near field; for this to work effectively, the loudspeakers must couple very closely to the listening zone or the diaphragm surface must be very large.
Another cause of disturbance of the synthesized wave front is the truncation effect. Because the resulting wave front is a composite of elementary waves, a sudden pressure change occurs where the loudspeaker row ends and no further speakers deliver elementary waves, which is perceived as a "shadow wave". This effect can be reduced by lowering the volume of the outer loudspeakers. For virtual sources in front of the loudspeaker arrangement, however, this pressure change travels ahead of the actual wave front, making it clearly audible.
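Lowering the volume of the outer loudspeakers amounts to tapering the gains toward the ends of the array. The sketch below applies a simple cosine ramp; the fraction of the array that is tapered is an arbitrary, illustrative choice:

```python
import numpy as np


def tapered_gains(num_speakers, taper_fraction=0.25):
    """Fade the outermost speakers' gains toward zero to soften array truncation."""
    gains = np.ones(num_speakers)
    n_taper = max(1, int(taper_fraction * num_speakers))
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_taper) / n_taper))  # rises from 0 toward 1
    gains[:n_taper] = ramp           # fade-in at one end of the row
    gains[-n_taper:] = ramp[::-1]    # fade-out at the other end
    return gains


print(np.round(tapered_gains(16), 2))
```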
A further problem is the high cost: a large number of individual transducers must be placed very close together, otherwise spatial aliasing effects become audible. Aliasing arises because only a finite number of transducers (and hence elementary waves) is available.
Discretisation also causes position-dependent, narrow-band dips in the frequency response within the rendition area. Their frequency depends on the angle of the virtual acoustic source and on the angle of the listener relative to the loudspeaker arrangement. For aliasing-free rendition over the entire audio range, a driver spacing of less than about 2 cm would be necessary. Fortunately, the ear is not particularly sensitive to this effect, so that with a driver spacing of 10–15 cm it is hardly disturbing. On the other hand, the size of the emitter array limits the representation area: outside its borders no virtual sources can be produced. The restriction of the procedure to the horizontal plane therefore seems justified for now.
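As a rough, hedged estimate of these numbers (a plane-wave spatial-sampling argument, not a formula taken from the references cited here): treating the loudspeaker row as a sampled aperture with spacing Δx gives an aliasing frequency of approximately

$$f_{\mathrm{alias}}\approx\frac{c}{2\,\Delta x\,\sin\theta_{\max}},$$

where c ≈ 343 m/s and θ_max is the largest incidence angle of the synthesized wave fronts relative to the array normal. For θ_max = 90°, a spacing of 0.10–0.15 m yields roughly 1.1–1.7 kHz, while covering the full audio range in this worst case would require spacings of roughly a centimetre or less, consistent with the "below 2 cm" figure quoted above.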
Research and market maturity
Modern development of WFS began in 1988 at the Delft University of Technology. In the context of the CARROUSO project, funded by the European Union (January 2001 to June 2003), ten institutes across Europe were involved in this research. The WFS sound system IOSONO was developed by the Fraunhofer Institute for Digital Media Technology (IDMT) at the Technical University of Ilmenau. Such loudspeaker rows have been installed in some cinemas, theatres and public venues with good success. For home audio applications, however, the procedure has not yet prospered; besides the high effort, considerable acceptance problems remain. If these problems are solved, the prospects for creating virtual acoustic environments become very interesting.
References
* Berkhout, A.J.: A Holographic Approach to Acoustic Control, J. Audio Eng. Soc., vol. 36, December 1988, pp. 977–995
* Berkhout, A.J.; De Vries, D.; Vogel, P.: Acoustic Control by Wave Field Synthesis, J. Acoust. Soc. Am., vol. 93, May 1993, pp. 2764–2778
External links
* [http://www.hauptmikrofon.de/HW/Wittek_thesis_201207.pdf Perceptual Differences Between Wavefield Synthesis and Stereophony] by Helmut Wittek
* [http://www.syntheticwave.de Patented 3D WFS "Transformation" approach aimed at home use]
* [http://www.hauptmikrofon.de/theile/Theile_DAFX.PDF Wave Field Synthesis – A Promising Spatial Audio Rendering Concept] by Günther Theile (IRT)
* [http://recherche.ircam.fr/equipes/salles/WFS_WEBSITE/Index_wfs_site.htm Wave Field Synthesis at IRCAM]
* [http://www-nt.e-technik.uni-erlangen.de/lms/research/projects/WFS/index.php?lang=eng Wave Field Synthesis at the University of Erlangen-Nuremberg]
Demo
* [http://www.holophony.net/Principle%20of%20wave%20field%20synthesis.htm Wave field synthesis demo (Flash animation, 60 sec.)]