Sound synthesis
Basics of sound
When a mechanical collision occurs, such as a fork being dropped, sound is produced. The energy from the collision is transferred through the air and other media and, if heard, into your ears. On a small scale, the collision creates sine waves. When different sine waves exist in the same place at the same time, they add together to produce more complex waves that can still be described in terms of sine functions. The environment in which the collision occurs modulates, or changes, the wave, or "sound". The basic idea behind sound synthesis is to use a machine to produce a sound wave that would naturally have been produced by a collision, and then to manipulate it the way an environment does.
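To make the superposition idea concrete, here is a minimal sketch in Python with NumPy (the 440 Hz and 660 Hz frequencies and the variable names are illustrative choices, not from the text): two sine waves occupying the same medium at the same time simply add into a more complex wave.

import numpy as np

sample_rate = 44100                       # samples per second (CD-quality rate)
t = np.arange(sample_rate) / sample_rate  # one second of time points

wave_a = np.sin(2 * np.pi * 440.0 * t)        # a 440 Hz sine wave
wave_b = 0.5 * np.sin(2 * np.pi * 660.0 * t)  # a quieter 660 Hz sine wave

# Superposition: the two waves add sample by sample into a more
# complex waveform that is still built entirely from sines.
complex_wave = wave_a + wave_b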
Generally, a single "sound" will include a fundamental frequency and any number of overtones. The frequencies of these overtones are either integer multiples of the fundamental frequency or integer fractions thereof (subharmonics). The study of how complex waveforms can be alternately represented in this way is covered by the Laplace and Fourier transforms.

When the sounds of natural tonal instruments are analyzed in the frequency domain (as on a spectrum analyzer), their spectra exhibit amplitude spikes at each of the fundamental tone's harmonics. Some harmonics may have higher amplitudes than others. The specific set of harmonic-vs-amplitude pairs is known as a sound's harmonic content.

When analyzed in the time domain, a sound does not necessarily keep the same harmonic content throughout its duration. Typically, high-frequency harmonics die out more quickly than the lower harmonics. For a synthesized sound to "sound" right, the original sound must be reproduced accurately in both the frequency domain and the time domain.

Percussion instruments and rasps have very low harmonic content, and exhibit spectra composed mainly of noise shaped by the resonant frequencies of the structures that produce the sounds. However, the resonant properties of the instruments (the spectral peaks of which are also referred to as formants) also shape an instrument's spectrum, especially in string, wind, voice and other natural instruments.
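As a rough illustration of harmonic content, the following sketch builds a tone from a 220 Hz fundamental plus integer-multiple harmonics whose amplitudes fall off as 1/n, then locates the amplitude spikes in its spectrum. The specific frequency, roll-off and peak threshold are assumptions for demonstration, not values from the text.

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
fundamental = 220.0

# Sum the fundamental and its integer harmonics, amplitude ~ 1/n.
tone = sum(np.sin(2 * np.pi * n * fundamental * t) / n for n in range(1, 9))

spectrum = np.abs(np.fft.rfft(tone))                     # magnitude spectrum
freqs = np.fft.rfftfreq(len(tone), d=1.0 / sample_rate)  # bin frequencies in Hz

# Amplitude spikes sit at 220, 440, 660, ... Hz: the tone's harmonic content.
strong = freqs[spectrum > 0.2 * spectrum.max()]
print(strong)  # expect integer multiples of 220 Hz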
In most conventional synthesizers, for purposes of re-synthesis, recordings of real instruments are treated as composites of several components. These component sounds represent the acoustic responses of different parts of the instrument, the sounds produced by the instrument during different parts of a performance, or the behavior of the instrument under different playing conditions (pitch, intensity of playing, fingering, etc.). The distinctive timbre, intonation and attack of a real instrument can therefore be created by mixing these components together in a way that resembles the natural behavior of the real instrument. Nomenclature varies by synthesizer methodology and manufacturer, but the components are often referred to as oscillators or partials. A higher-fidelity reproduction of a natural instrument can typically be achieved using more oscillators, but this requires more computational power and human programming; most synthesizers use between one and four oscillators by default.

One of the most important parts of any sound is its amplitude envelope. This envelope determines whether the sound is percussive, like a snare drum, or persistent, like a violin string. Most often, this shaping of the sound's amplitude profile is realized with an "ADSR" (Attack Decay Sustain Release) envelope model applied to control oscillator volumes. Apart from Sustain, each of these stages is modeled by a change in volume (typically exponential); a sketch of such an envelope follows the stage definitions below.
Although the oscillations in real instruments also change frequency, most instruments can be modeled well without this refinement. The refinement is, however, necessary to generate a vibrato.

Attack time is the time taken for the initial run-up of the sound level from nil to its peak (100%, or whatever percentage is designated).
Decay time is the time taken for the subsequent run-down from the attack level to the designated Sustain level.
Sustain level is the steady volume produced while a key is held down.
Release time is the time taken for the sound to decay from the Sustain level to nil when the key is released. If a key is released during the Attack or Decay stage, the Sustain phase is usually skipped. Similarly, a Sustain level of zero will produce a more-or-less piano-like (or percussive) envelope, with no continuous steady level, even when a key is held. Exponential rates are commonly used because they closely model real physical vibrations, which usually rise or decay exponentially.
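The following is one possible rendering of such an ADSR envelope in Python with NumPy; the stage times, sustain level and exponential curve shape are illustrative assumptions rather than a standard.

import numpy as np

sample_rate = 44100

def seg(start, end, seconds, curve=5.0):
    # Exponential-style approach from `start` toward `end`, normalized
    # so the segment lands exactly on `end` after `seconds`.
    n = max(1, int(seconds * sample_rate))
    shape = (1 - np.exp(-curve * np.linspace(0.0, 1.0, n))) / (1 - np.exp(-curve))
    return start + (end - start) * shape

def adsr(attack, decay, sustain, release, held):
    return np.concatenate([
        seg(0.0, 1.0, attack),                      # Attack: nil up to peak
        seg(1.0, sustain, decay),                   # Decay: peak down to sustain level
        np.full(int(held * sample_rate), sustain),  # Sustain: steady while key is held
        seg(sustain, 0.0, release),                 # Release: sustain level back to nil
    ])

envelope = adsr(attack=0.02, decay=0.10, sustain=0.6, release=0.30, held=0.5)
t = np.arange(len(envelope)) / sample_rate
note = envelope * np.sin(2 * np.pi * 440.0 * t)  # envelope shapes the oscillator's volume

Setting sustain=0.0 here yields the piano-like, percussive envelope described above: the sound dies away even while the key is held.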
Overview of popular synthesis methods
Subtractive synthesizers use a simple acoustic model that assumes an instrument can be approximated by a simple signal generator (producing sawtooth waves, square waves, etc.) followed by a filter which represents the frequency-dependent losses and resonances in the instrument body. For reasons of simplicity and economy, these filters are typically low-order lowpass filters. The combination of simple modulation routings (such as pulse width modulation and oscillator sync), along with the physically unrealistic lowpass filters, is responsible for the "classic synthesizer" sound commonly associated with "analog synthesis", a term often mistakenly applied to software synthesizers that use subtractive synthesis. Although physical modeling synthesis, in which the sound is generated according to the physics of the instrument, has superseded subtractive synthesis for accurately reproducing natural instrument timbres, the subtractive paradigm is still ubiquitous, with most modern designs still offering low-order lowpass or bandpass filters following the oscillator stage (a minimal sketch of this oscillator-plus-filter chain follows this overview).

One of the newest approaches in music synthesis is physical modeling. This involves taking models of components of musical objects and creating systems which define the action, filters, envelopes and other parameters over time. The range of such instruments is virtually limitless, since any of the available models can be combined with any number of modulation sources for pitch, frequency and contour: for example, the model of a violin with the characteristics of a pedal steel guitar and perhaps the action of a piano hammer. Physical modeling on computers gets better and faster with more processing power.
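Here is the promised sketch of the subtractive chain: a naive sawtooth oscillator followed by a one-pole lowpass filter. The 110 Hz pitch, 800 Hz cutoff and one-pole design are illustrative assumptions; real subtractive synthesizers typically use steeper, resonant filters.

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate

# Naive sawtooth at 110 Hz: ramps from -1 to 1 once per period,
# giving a bright, harmonically rich source signal.
saw = 2.0 * (t * 110.0 % 1.0) - 1.0

def one_pole_lowpass(signal, cutoff_hz):
    # Classic one-pole smoother: y[n] = y[n-1] + a * (x[n] - y[n-1]).
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += a * (x - y)
        out[i] = y
    return out

# "Subtract" the high harmonics to darken the raw sawtooth.
darker = one_pole_lowpass(saw, cutoff_hz=800.0)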
One of the simplest synthesis systems is to record a real instrument as a digitized waveform and then play the recording back at different speeds to produce different tones. This is the technique used in "sampling". Most samplers designate a part of the sample for each component of the ADSR envelope, and then repeat that section while changing the volume for that segment of the envelope. This lets the sampler produce a convincingly different envelope using the same note.
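A minimal sketch of the speed-to-pitch idea follows, assuming a synthetic sine stands in for a real recording (with a real sampler you would load audio data instead; the names and rates here are illustrative):

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
recording = np.sin(2 * np.pi * 261.63 * t)  # stand-in for a sampled C4 note

def play_at_speed(sample, speed):
    # Read through the data `speed` times faster (or slower),
    # interpolating between stored samples.
    positions = np.arange(0, len(sample) - 1, speed)
    return np.interp(positions, np.arange(len(sample)), sample)

up_a_fifth = play_at_speed(recording, 1.5)   # ~392 Hz, and 2/3 the duration
octave_down = play_at_speed(recording, 0.5)  # ~131 Hz, and twice the duration

Note the side effect this sketch makes visible: changing playback speed changes duration along with pitch, which is one reason samplers loop a designated section to sustain a note.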
Synthesizer basics
There are three major kinds of synthesizers: analog, digital and software. In addition, there are synthesizers that rely upon combinations of those three kinds, known as hybrid synthesizers. There are also many different kinds of synthesis methods, each applicable to both analog and digital synthesizers. These techniques tend to be mathematically related, especially frequency modulation and phase modulation (a sketch of that relationship follows the list below).
* Subtractive synthesis
* Additive synthesis
* Granular synthesis
* Wavetable synthesis
* Frequency modulation synthesis
* Phase distortion synthesis
* Physical modeling synthesis
* Sample-based synthesis
* Subharmonic synthesis
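To illustrate the relationship mentioned above between frequency modulation and phase modulation, the sketch below generates the same waveform both ways: modulating the carrier's phase with a sine is equivalent to modulating its instantaneous frequency with that sine's derivative, a cosine, and then integrating. The carrier, modulator and index values are illustrative.

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
fc, fm, index = 440.0, 110.0, 2.0  # carrier, modulator, modulation depth

# PM: push the carrier's phase around directly.
pm = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# FM: modulate the instantaneous frequency with the derivative of the
# PM modulator (a cosine), then integrate to recover the phase.
inst_freq = fc + index * fm * np.cos(2 * np.pi * fm * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / sample_rate
fm_wave = np.sin(phase)

print(np.max(np.abs(pm - fm_wave)))  # small; only discrete-integration error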