

SYMBOLIC CONTROL OF SOUND SYNTHESIS IN
COMPUTER ASSISTED COMPOSITION

Jean Bresson
Ircam, Musical Representations Team, Paris, France

Marco Stroppa
Staatliche Hochschule für Musik und Darstellende Kunst, Stuttgart, Germany

Carlos Agon
Ircam, Musical Representations Team, Paris, France

ABSTRACT

This paper presents current work and future directions concerning the control of sound synthesis in OpenMusic. We focus in particular on the concept of synthesis models for composition and on the representation of sound synthesis objects.

1. INTRODUCTION

Computer-assisted composition (CAC) systems are designed to let composers use computers to formalize and experiment with musical ideas, and to create and manipulate musical structures through programming techniques. Progress in computer-music research in the field of sound synthesis has extended the possibilities of sound generation, allowing composers, following the pioneering work of K. Stockhausen and I. Xenakis, to go beyond instrumental music and into the composition of sound itself. However, despite the many software synthesizers now available, these tools usually remain hard to control without significant technical skills. In this article we propose solutions for integrating sound synthesis into compositional systems, in order to allow real musical expressiveness by means of synthesis technologies.

OpenMusic (OM) is a visual programming language dedicated to music composition [1]. Following previous articles on the implementation of high-level data structures for controlling sound synthesis in OpenMusic [2][3], we present here tools for representing synthesis data and processes, and means of linking compositional concepts with sound synthesis in this environment. After a general presentation of the issue, we put forward our approach to the representation of sound models, with examples in OM.

2. SOUND SYNTHESIS AND COMPUTER ASSISTED COMPOSITION

Although CAC environments are generally used for writing instrumental music and for manipulating symbolic structures (notes, chords, rhythms, etc.), their functional paradigm can conceptually be applied to the composition of sound as well. However, the control of sound synthesis also brings specific problems to CAC.

2.1. New problems of representation

The first obvious specificity of sound synthesis in CAC environments is the large amount of data that must be computed. Another, more fundamental aspect lies in the nature of this data, which generally consists of control functions or other kinds of time-sampled data whose basic elements may have no significance outside their context, contrary to the symbolic objects of CAC systems. We call this sub-symbolic data.

When writing an instrumental score, the physical reality is not entirely specified, but only expressed up to a certain level. When synthesizing a sound, however, the sound must be described exhaustively. Because of the potentially large number of parameters, which ought to be constrained as little as possible, the issue of description and notation becomes more complex. To achieve more efficient computation and easier user interaction, we may be tempted to limit the number of parameters that can be freely controlled and to fix the others, but it is important to let users make this choice themselves and to provide them with the means to do so. Moreover, the problem of finding pertinent, subjective representations of these parameters remains unsolved. The system cannot decide automatically how to interpret data so as to maintain a correspondence between the sonic and musical domains, but it should allow the user to do so.

In spite of some potentially powerful systems, the earliest attempts at musical control of sound synthesis were not widely adopted by composers, probably because of their low-level, generally text-based interfaces. A meaningful, musical and, above all, subjective representation of synthesis data seems to be mandatory. Some studies have investigated the relationship between visual and perceptual descriptions of synthesis parameters (e.g. [13][7]), but they are either hard to put into practice in a given synthesis environment or require important restrictions.

2.2. A specific conception of time

One of the most innovative characteristics of sound synthesis, at both the musical and the technological level, is that it makes it possible to connect the microscopic and macroscopic compositional aspects of music, i.e. the creation of the inner structure of sounds and the composition carried out with these sounds. One might then assume the necessity of a variable scale of temporal and logical granularities.
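
To make the notion of sub-symbolic data introduced in section 2.1 more concrete, the following sketch contrasts a symbolic object with a time-sampled control function. It is written in plain Python rather than OM's Lisp, and all names and values are purely illustrative; it is not OpenMusic code.

# Illustrative sketch (not OpenMusic code): the same musical material seen as
# symbolic objects and as sub-symbolic, time-sampled control data.

# Symbolic representation: each element is meaningful on its own.
chord = [("C4", 60), ("E4", 64), ("G4", 67)]          # (note name, MIDI pitch)

# Sub-symbolic representation: a breakpoint control function, e.g. for a
# synthesis amplitude. Individual samples have no musical meaning outside
# the context of the whole function.
breakpoints = [(0.0, 0.0), (0.05, 1.0), (0.4, 0.6), (1.0, 0.0)]   # (time s, value)

def sample_envelope(bpf, sr=100):
    """Linearly resample a breakpoint function at a fixed control rate."""
    samples = []
    for i in range(int(bpf[-1][0] * sr) + 1):
        t = i / sr
        for (t0, v0), (t1, v1) in zip(bpf, bpf[1:]):
            if t0 <= t <= t1:
                samples.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
                break
    return samples

print(len(sample_envelope(breakpoints)))   # 101 control values for one second
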
3. SYNTHESIS MODELS: A MEANINGFUL REPRESENTATION OF SOUND OBJECTS AND PROCESSES

3.1. Models for composition

When using computers, an important part of the composer's task is to formalize her/his musical ideas. An interesting approach to this compositional formalization is the concept of models described by M. Malt in [10]: a compositional model is a conceptual representation that links an abstract musical concept to the concrete world. A CAC environment must then be a place where composers develop and manipulate models, by means of experiments on data and processes. These models should be connected together, interact, and be embedded in other models, as are the corresponding musical concepts in the composer's mind.

3.2. Synthesis models

In sound synthesis it is usual to distinguish various families of models corresponding to different synthesis methods (physical models, abstract models, signal models, etc.). One tends to identify models with these techniques, but this is not enough to define real compositional models. Such a definition must also reflect a musical intention; that is, it must implicitly determine an identifiable group among the infinity of possible sounds. What really defines a model might be the choice of the variable and invariant parameters, of the degrees of freedom, and of the deterministic or non-deterministic parts of the algorithm. From this perspective, the choice of the synthesizer itself can be just one variable parameter in a data-based sound model.

A sound model therefore represents a sonic potential, a multidimensional space that a composer can explore and manipulate in order to investigate the class of sounds it determines. The resulting sound is a realization, an instance of this sonic potential. It will be more or less musical and lively depending on the richness of the model.

3.3. Sound representation

From the perspective put forward above, a compositional object corresponding to a physical sound can no longer be a simple sound file or waveform data buffer, but a set of data and processes yielding a sonic result, consistent with all the possible objects contained in the model. The external representation of this model gives it a musical potential: it determines the realm of modifications and the possibilities of experimentation [6]. A complete representation requires a symbolic description language incorporating several levels: the representation of the processes of sound creation that define the model; the representation of the processing parameters and data that are the compositional inputs of the model; and the representation of the actual resulting sound.

4. SYNTHESIS MODELS IN OPENMUSIC

The Lisp-based OpenMusic environment can communicate with external synthesizers by creating parameter files or structures and by sending commands to external programs. Composers can therefore use the symbolic and computational possibilities of OM to control these sound synthesizers. In this section we present examples of such applications.

4.1. Sound transformation with SuperVP

The OM-AS library, written for OpenMusic by H. Tutschku, contains a set of functions that create parameter files for SuperVP [5], a phase vocoder whose re-synthesis of sound files can be modified by intermediate, possibly time-varying, transformations such as time stretching, pitch shifting and filtering. A parameter file allows complex time-varying functions to be specified. The new OM-SuperVP interface computes the sound files within an OM patch (see Figure 1), so that sounds can be created directly using graphical algorithms.

Figure 1. Sound synthesis by SuperVP in OpenMusic. A sound is transformed by a time-varying frequency shift computed from a melody.

4.2. Csound synthesis patches

The OM2CSound library was originally written for PatchWork [9] by L. Pottier. It allows the graphical design of Csound [4] score files. The library was later ported to OpenMusic and enhanced by K. Haddad, who developed boxes for generating Csound orchestras. A Csound synthesis function can now synthesize the sound directly in OM. Figure 2 shows a simple example of a Csound synthesis patch.

This approach is a functional, generative model of sound description, which can potentially produce a very large set of sounds (Csound can be used to implement many different synthesis techniques). It also illustrates the potential of a graphical implementation of synthesis processes within a visual language: the inner structure of the process becomes more accessible and can easily be edited.
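
To give an idea of the textual material such a patch generates, here is a small sketch that writes a minimal Csound orchestra and score from note-level data. It is plain Python, not the OM2CSound API; the file names, instrument design and parameter fields are hypothetical and only illustrate the kind of files described above.

# Hypothetical sketch: generating Csound orchestra (.orc) and score (.sco)
# files from note-level data, as an OM2CSound-style patch does graphically.

notes = [            # (onset s, duration s, amplitude, frequency Hz)
    (0.0, 1.0, 0.3, 440.0),
    (0.5, 1.5, 0.2, 660.0),
    (1.0, 2.0, 0.1, 880.0),
]

orchestra = """sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

instr 1                            ; simple sine-wave instrument
  aenv linen  p4, 0.05, p3, 0.2   ; amplitude envelope
  asig oscili aenv, p5, 1         ; function table 1 = sine wave
       out    asig
endin
"""

score = ["f 1 0 4096 10 1                  ; sine wave table"]
for onset, dur, amp, freq in notes:
    score.append(f"i 1 {onset} {dur} {amp} {freq}")
score.append("e")

with open("synth.orc", "w") as f:
    f.write(orchestra)
with open("synth.sco", "w") as f:
    f.write("\n".join(score) + "\n")
# Rendering, for example: csound synth.orc synth.sco -o out.wav

In OM2CSound the same text is assembled from graphical boxes, so the structure of the score remains visible and editable inside the patch.
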
Figure 2. A simple Csound patch in OpenMusic. Double-clicking on the rightmost myinstr1 box opens the graphical definition of the instrument (at the right).

However, a graphical patch cannot contain as much information as a textual score or orchestra file does. The production of complex sounds therefore requires describing abstractions or partially defined models, in order to limit the amount of data and allow for musical experimentation.

4.3. OMChroma: high-level abstractions

The Chroma system [12] was developed by M. Stroppa in the course of his musical activity. Its concept of a "virtual synthesizer" allows the composer to control sound synthesis by creating data sets that are internally translated and formatted for different possible synthesizers (Csound, CHANT, etc.). OMChroma [2] is the implementation of Chroma in OpenMusic (see Figure 3).

Figure 3. Csound additive synthesis in OMChroma. The matrix can be instantiated in different ways (functions, bpf, numerical values, lists), and is then formatted for the synthesizer by the synthesize generic function.

This system provides pre-determined matrix classes that represent different synthesis models and can be sub-classed and enriched by the user. The object-oriented system of OpenMusic is used to improve the modularity and the behaviour of these classes during the computation of the matrix components and the synthesis process. These high-level abstractions reflect the conception of data-processing objects corresponding to classes of musical objects sharing common properties.

This matrix-based representation of a sound thus permits powerful computational possibilities, but its unfolding in time remains a problem. In [3], we proposed a generalized sound-description model based on SDIF (the Sound Description Interchange Format), but the issue of a meaningful representation and manipulation of synthesis parameters could only be partially addressed.

4.4. Extension to other synthesis models

Being a complete programming language, OpenMusic allows any kind of synthesis model to be implemented within it. Modalys physical-model synthesis has recently been implemented in OM by N. Ellis at Ircam. Communication with external systems is also possible via OSC; control parameters can then be generated for any synthesizer compatible with this standard transfer protocol.

5. TEMPORAL ISSUES

5.1. Synthesis models in time

Time in composition must be considered at multiple levels: as linear time, the final arrangement of pre-calculated musical material, but also as logical and hierarchical time, with temporal dependencies, recursively nested structures, and rules and constraints between the different objects.

The examples presented in section 4 do not have any advanced temporal structure. We can see them as a way of generating sound objects "out of time", or rather with only a local time. These objects can be considered as compositional primitives [8] that need to be integrated into a musical structure. The next step consists in embedding these primitives in a temporal context in order to unfold the models in time.

5.2. Synthesis models in the maquette

In OpenMusic, the maquette [1] is a special object which is at the same time a scheduling tool that can play MIDI and audio objects, and a patch (i.e. a graphical program) in which these objects can be manipulated, connected and computed.

A maquette contains boxes which can represent different types of objects in a global temporal context. Such objects are either simple musical objects (sounds, notes, chords, etc.), patches (graphical programs having a resulting "temporal" output), or other maquettes. In this way, hierarchical structures can be created, as well as complex functional and temporal relationships between the patch boxes.
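
To show very schematically how such a nested temporal structure can be unfolded into absolute time, here is a toy sketch that flattens a two-level hierarchy by accumulating each box's offset. It is plain Python with hypothetical Box and Container classes, not the actual maquette implementation.

# Toy sketch of a maquette-like hierarchy: each box has an onset relative to
# its parent and holds either a leaf object (here just a label standing for a
# sound or a synthesis patch) or a nested container. Flattening accumulates
# the offsets, much like shifting sound units by their hierarchical offset.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Box:
    onset: float                        # onset relative to the parent, in seconds
    content: Union[str, "Container"]    # leaf label or nested container

@dataclass
class Container:
    boxes: List[Box]

def flatten(container, t0=0.0):
    """Return (absolute onset, leaf) pairs for every leaf in the hierarchy."""
    events = []
    for box in container.boxes:
        t = t0 + box.onset
        if isinstance(box.content, Container):
            events.extend(flatten(box.content, t))
        else:
            events.append((t, box.content))
    return sorted(events)

inner = Container([Box(0.0, "sound-a"), Box(0.5, "sound-b")])
top = Container([Box(0.0, inner), Box(2.0, inner), Box(4.0, "sound-c")])
print(flatten(top))
# [(0.0, 'sound-a'), (0.5, 'sound-b'), (2.0, 'sound-a'), (2.5, 'sound-b'), (4.0, 'sound-c')]
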
Synthesis-model patches such as those examined in section 4 can be embedded in this temporal structure, which can integrate their local temporal logic within the general time flow.

We made an example with Marco Stroppa's piece Traiettoria… deviata [11]. The electronic part of this piece was originally computed at the Centro di Sonologia Computazionale of the University of Padua (Italy) in 1982, using an ancestor of the Chroma system. A part of the electronics was recently re-created by the composer in OpenMusic. The sound components were created with patches similar to the one in Figure 3. The original temporal organization, however, was a hierarchical structure of such sound units, which could only be reproduced in OMChroma by the algorithmic concatenation of matrices shifted by the corresponding hierarchical offsets. By putting these patches in maquettes (see Figures 4 and 5), we could recreate the temporal and hierarchical structure of the piece.

Figure 4. Sound synthesis patches in a maquette. Each box contains a patch producing a sound.

Figure 5. Top-level structure of an extract from Traiettoria… deviata (M. Stroppa) reconstructed in a maquette. The topmost box is the maquette of Figure 4.

The subcomponents of the maquette of Figure 4 are displayed either as sounds or as hand-made pictures similar to the original "paper" score. The sound boxes are calculated with the input data coming from the topmost box. Opening each of these boxes shows its contents either as a sound or as a synthesis patch.

6. CONCLUSION

In this discussion of the symbolic control of sound synthesis, we have emphasized some of the salient features required of such an environment: an expressive, personalized, interactive representation of sound synthesis objects, and means of implementing synthesis models that take into account the temporal organization and user-defined rules at different structural levels.

We believe that the association of visual programs with the properties of the maquette might make it possible to integrate the successive layers of electronic composition, from the creation of sounds to the construction of the global structure. The sonic representation and temporal organization make the composer's thought easier to interpret, and allow experimentation upon the musical piece itself.

This graphical and interactive system may also be a way of documenting, transmitting, analysing and learning about compositional thought, through the formalization and representation of models. It could therefore lead to the development of a common knowledge of sound synthesis for musical composition.

7. REFERENCES

[1] C. Agon, "OpenMusic : Un Langage Visuel pour la Composition Assistée par Ordinateur", PhD Thesis, Université Paris VI, 1998.

[2] C. Agon, M. Stroppa, G. Assayag, "High Level Musical Control of Sound Synthesis in OpenMusic", Proc. ICMC, Berlin, 2000.

[3] J. Bresson, C. Agon, "SDIF Sound Description Data Representation and Manipulation in Computer Assisted Composition", Proc. ICMC, Miami, 2004.

[4] R. Boulanger (ed.), The Csound Book, MIT Press, 2000.

[5] Ph. Depalle, G. Poirot, "A Modular System for Analysis, Processing and Synthesis of Sound Signals", Proc. ICMC, Montreal, Canada, 1991.

[6] G. Eckel, R. Gonzalez-Arroyo, "Musically Salient Control Abstractions for Sound Synthesis", Proc. ICMC, Aarhus, Denmark, 1994.

[7] K. Giannakis, M. Smith, "Auditory-Visual Associations for Music Compositional Processes: a Survey", Proc. ICMC, Berlin, 2000.

[8] H. Honing, "Issues in the Representation of Time and Structure in Music", Contemporary Music Review, 9, 1993.

[9] M. Laurson, J. Duthen, "Patchwork, a Graphic Language in PreForm", Proc. ICMC, Ohio State University, USA, 1989.

[10] M. Malt, "Concepts et modèles, de l'imaginaire à l'écriture dans la composition assistée par ordinateur", Actes du séminaire Musique, instruments, machines, 2003.

[11] M. Stroppa, Traiettoria (1982-84), a cycle of three pieces (Traiettoria… deviata, Dialoghi, Contrasti) for piano and computer-generated sounds. Recorded by Wergo: WER 2030-2, 1992.

[12] M. Stroppa, "Paradigms for the High-level Musical Control of Digital Signal Processing", Proc. DAFx, Verona, 2000.

[13] D. L. Wessel, "Timbre Space as a Musical Control Structure", Computer Music Journal, Vol. 3(2), 1979.
