MUSIC-N
MUSIC-N refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs.[1] MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music (in actuality, sound) on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task. The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard.[2] However, CSIRAC played its sound directly from programmed machine pulses rather than generating digital audio, as the MUSIC-series programs did.
MUSIC had a number of descendants, e.g.:
- MUSIC II, MUSIC III, MUSIC IV (all developed at Bell Labs)
- MUSIC IV-B (developed at Princeton University to run on an IBM mainframe)
- MUSIC IV-BF (re-written in FORTRAN, therefore portable)
- MUSIC V (the last of the Bell Labs line)
- MUSIC 360 and MUSIC 11 (written by Barry Vercoe at MIT, descended from MUSIC IV-BF)
- Csound (descended from MUSIC 11 and in wide use today)
- Cmix / Real-Time Cmix (RTcmix) (by Paul Lansky, Brad Garton, and others)
- CMusic (by F. Richard Moore)
- Structured Audio Orchestra Language (SAOL), which is part of the MPEG-4 audio standard, by Eric Scheirer
Less obviously, MUSIC can be seen as the parent program for:
- Max/MSP
- Pure Data
- AudioMulch
- SuperCollider
- JSyn
- Common Lisp Music
- ChucK
- Any other computer synthesis language that relies on a modular system (e.g. Reaktor).
All MUSIC-N derivative programs share a (more-or-less) common design: a library of functions built around simple signal-processing and synthesis routines, written as opcodes or unit generators. The user combines these simple opcodes into an instrument (usually through a text-based instruction file, but increasingly through a graphical interface) that defines a sound. That instrument is then "played" by a second file, called the score, which specifies the notes, durations, pitches, amplitudes, and other parameters relevant to the musical informatics of the piece. Some variants of the language merge the instrument and score, though most still distinguish between control-rate functions (which operate on the music) and functions that run at the sampling rate of the audio being generated (which operate on the sound). A notable exception is ChucK, which unifies audio-rate and control-rate timing into a single framework, allowing arbitrarily fine time granularity and a single mechanism to manage both. This yields more flexible and readable code, at the cost of reduced system performance.
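The instrument/score split described above can be illustrated with a minimal sketch in Python. This is a hypothetical rendering loop for exposition only, not the syntax of any actual MUSIC-N language: `oscil` stands in for a unit generator, `instrument` for a user-built instrument, and `score` for a note list.

```python
import math

SR = 44100  # sampling rate in Hz (assumed)

# A unit generator ("opcode"): a sine oscillator, modeled as a generator.
def oscil(freq, amp):
    phase = 0.0
    while True:
        yield amp * math.sin(phase)
        phase += 2 * math.pi * freq / SR

# An "instrument": combines opcodes; here, a sine with a linear decay envelope.
def instrument(freq, amp, dur):
    osc = oscil(freq, amp)
    nsamps = int(dur * SR)
    for n in range(nsamps):
        env = 1.0 - n / nsamps  # control-level shaping of the audio signal
        yield env * next(osc)

# A "score": note events as (start time, duration, frequency, amplitude).
score = [
    (0.0, 0.5, 440.0, 0.5),
    (0.5, 0.5, 660.0, 0.5),
]

# "Performance": render each note and mix it into one output buffer.
total = max(start + dur for start, dur, _, _ in score)
out = [0.0] * int(total * SR)
for start, dur, freq, amp in score:
    for i, sample in enumerate(instrument(freq, amp, dur)):
        out[int(start * SR) + i] += sample

# out now holds one second (44100 samples) of mixed audio.
```

In a real MUSIC-N descendant such as Csound, the instrument would live in an orchestra file and the note list in a separate score file; the sketch collapses both into one script to show how the two halves interact.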
MUSIC-N and derived software are mostly available as complete self-contained programs, which can have different types of user-interfaces, from text- to GUI-based ones. In this aspect, Csound and RTcmix have since evolved to work effectively as software libraries which can be accessed through a variety of frontends and programming languages, such as C, C++, Java, Python, Tcl, Lua, Lisp, Scheme, etc., as well as other music systems such as Pure Data, Max/MSP and plugin frameworks LADSPA and VST.
MUSIC and its descendants embody a number of highly original (and to this day largely unchallenged) assumptions about the best way to create sound on a computer. Many of Mathews's design decisions (such as the use of pre-calculated arrays for waveform and envelope storage, and a scheduler that runs in musical time rather than at audio rate) remain the norm for most hardware and software synthesis and audio DSP systems today.
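The pre-calculated-array idea can be sketched as follows. This is an illustrative Python example, not code from MUSIC itself: one cycle of a waveform is computed once into a table, and the oscillator then merely steps through that table at a phase increment proportional to the desired frequency, replacing per-sample transcendental function calls with cheap array lookups.

```python
import math

SR = 44100        # sampling rate in Hz (assumed)
TABLE_SIZE = 1024 # table length (assumed; MUSIC's "function tables" work similarly)

# Pre-calculated wavetable: one cycle of a sine, computed once up front.
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def table_oscil(freq, nsamps):
    """Generate nsamps samples by stepping through the stored table.

    The phase increment per sample is freq * TABLE_SIZE / SR, so higher
    frequencies step through the table faster."""
    incr = freq * TABLE_SIZE / SR
    phase = 0.0
    out = []
    for _ in range(nsamps):
        out.append(table[int(phase) % TABLE_SIZE])  # truncating table lookup
        phase += incr
    return out

samples = table_oscil(440.0, SR)  # one second of a 440 Hz tone
```

Real systems refine this with interpolated lookup for lower distortion, but the basic table-plus-phase-increment structure is the same one found in Csound's oscillators and in most hardware wavetable synthesizers.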
References
- ^ Peter Manning, Computer and Electronic Music. Oxford Univ. Press, 1993.
- ^ The music of CSIRAC
Categories:
- Audio programming languages
- Software synthesizers
Wikimedia Foundation. 2010.