Sound Processes: A New Computer Music Framework

Hanns Holger Rutz
ABSTRACT

Sound Processes is an open source computer music framework providing abstractions suitable for composing real-time sound synthesis processes. It posits a memory model that automatically persists object graphs in a database, preserving the evolution of these objects over time and making them available either for later analysis or for incorporation into the compositional process itself. We report on the experience of using a prototype of this framework for a generative sound installation; in a second iteration, a graphical front-end was created that focuses on tape music composition and introduces new abstractions. Using this more controlled setting allowed us to study the implications of using a live versioning system for composition. We encountered a number of challenges in this system and present suggestions to tackle them: the relationship between compositional time (versions) and performance time; the relationship between text and interface and between object dependencies and interface; the representation, organisation and querying of musical data; the preservation and evolution of compositions.

Copyright: © 2014 Hanns Holger Rutz. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

1. INTRODUCTION

The emergence of computer music systems is often tied to general developments in the computer science discipline, such as the establishment of new programming languages which serve as host languages, or the appearance of new programming paradigms—e.g. object-oriented programming—that find their way into domain specific languages. Hardware developments also play a role, for example by making it possible in the mid 1990s to build new real-time sound synthesis systems for desktop computers. There is probably no abstraction or paradigm that has not been explored for its musical potential: functional programming, dataflow programming, constraint and logic programming, concurrency abstractions, aspect-oriented programming, along with a number of design patterns.

On the other hand, the basic questions one has to answer when designing such a system appear to be unchanged. Already in 1976, Barry Truax listed the following: Which data representations are chosen, which operational capabilities, how is the flow of control organised, and what are the input and output requirements? What are the structural levels, and what is the granularity of access to the musical data; how can it be arranged and grouped? But also: What is the coverage of the system, to what extent does it intend to reflect the overall compositional process? [1]

1.1 Musical Representation

A great deal has been written about the representation of musical data, but some of the debate, such as the age-old juxtaposition between procedural (implicit) and declarative (explicit) representation [2], has obscured more relevant aspects: The first concerns the understanding of representation as knowledge representation. Michael Hamman discusses this problem and defines ‘representation’ as something that «constitutes the agency through which an interface is embodied by orienting a particular way of conceiving and understanding a signal» [3]. If we rely solely on the established cultural denotation of representations, these might be useful, but we run the danger of confounding the representation with the represented.

Second, taking the previous definition, it is clear that representations have a translational potency. A representation can always be rewritten as another, qualitatively distinct representation. For example, a procedural description of a sound production can be unfolded by following the procedure and recording its output, perhaps yielding an explicit sequence of events in time. Procedures in turn can be specified declaratively, giving rise to an abstraction such as the dataflow variable.

Finally, a representation specifies what is not represented. In the aforementioned article, Truax made two important remarks: Before the advent of computer composition systems, the process of composing was difficult to assess, relying on artefacts such as the final score or, at best, sketch-book notes. The introduction of computer programs and the use of technical aids have resulted in «an increasing observability of musical activity», since these aids “externalise” the process. He then posits the thesis that any computer system embodies a model of the musical process; it becomes a “data source” for the study of musical activity.

The corollary that can be derived from these remarks is that a computer music system should take the activity of composing into account. But in the nearly forty years that have passed, the interest in musical representations has almost entirely focused on the way “musical time” is formulated—the time in which elements are placed during the performance of a piece.
2. ACCOUNTING FOR CREATION TIME

Music software already stores data in a persistent way so that it becomes available for later inspection. A number of experiments that observed composers at work asked them to store “snapshots” of these data, so that the evolution of the composition process could be examined. Apart from the coarse granularity of such sequences of snapshots, this approach requires an active intervention of the composer. As Christopher Burns notes:

«Composers are generally more interested in producing work than in documenting it. Sketches and drafts are often saved only if their continuing availability is necessary for the completion of a project, and mistakes and false starts are unlikely to be preserved.» [4]

My main critique, however, concerns the usage of the data thus obtained. Truax reserves the observation for “theorists” who seek to understand the musical activity, whereas the composers themselves are not mentioned. The externalisation of the storage action means that the historic trace of the decision-making process has no useful representation within the composition system itself. There is no re-entry of the temporal embedding of the decisions into the decision-making process. This is also implicit in Burns’ reflection, which assumes a complementarity between production and documentation.

As an analogy, we can look at the process of software development. Today it is impossible to imagine this process without version control systems such as Git or Subversion. These technologies have multiple goals, including the review of decisions in order to find mistakes and the possibility for multiple users to concurrently manipulate the code base and eventually “merge” their work. What is not provided is for the developed software to engage with its own history; there is no interface back from the versioning system to the developed software.

This is probably fine, since versioning is just a “tool” in the software design process that helps to achieve the design goals. In computer composition, however, questions of representation—the data structures, their interfaces and relations—are the very materials of the composition itself. Hamman, looking at the work of Agostino Di Scipio and Gottfried Michael Koenig, argues that «just as one might compose musical and acoustical materials per se, one might also compose aspects of the very task environment in which those materials are composed.» If the process of decision-making is itself made manifest within the composition system, it can re-enter that process as one of its possible materials.

To distinguish the different temporal ascriptions of a datum, we proposed the following terminology [5]:

• The (actual) performance time T_P. When a musical datum is heard in a “real-time” performance, this happens in T_P.

• The virtual performance time T_(P). This is the representational form of T_P. For example, if we think of a timeline view, the positions of elements on the timeline are values in T_(P).

• The creation time T_K. This is the time when an object is created, modified or deleted as part of the composition process.

In Sound Processes the primary concern is the handling of T_K as it informs the underlying memory model. The data structures employed and their interaction have been described before [6], so we give only a brief overview.

2.1 A Memory Model for Sound Processes

The memory model is an extension of software transactional memory (STM). In STM, the basic unit of operation is a reference cell that stores a value. The two permitted operations are access (reading the value) and update (writing or overwriting the value). This value can be either an immutable entity such as a number or a pointer to another reference cell. The operations must be performed within a transaction that provides the properties of atomicity, consistency and isolation: Multiple operations performed inside the same transaction form one compound and indivisible operation. If an error occurs, all operations participating in the transaction are undone together.
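To give these building blocks a concrete shape, the following minimal sketch uses the general-purpose scala-stm library rather than the Sound Processes API itself; the reference cells and the shift operation are invented for the example.

    import scala.concurrent.stm._

    object StmSketch {
      // two reference cells describing a region, in sample frames
      val start : Ref[Long] = Ref(0L)
      val length: Ref[Long] = Ref(44100L)

      // Shift the region. Both the read and the write participate in one
      // transaction: either all of it takes effect, or none of it does, and no
      // concurrent transaction observes a half-updated state.
      def shift(delta: Long): Unit = atomic { implicit txn =>
        val newStart = start() + delta        // access
        require(newStart >= 0L, "region must not start before the origin")
        start() = newStart                    // update; rolled back if an error occurs
      }
    }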
Transactions are also used in databases, and since version control systems utilise databases, there are similarities between an STM and a VCS. Similar to the snapshot scenario above, in a VCS the user explicitly decides when to make a new snapshot. This action is called a commit. It is a manual transaction, and it is the responsibility of the user to maintain some sort of “consistency” for the state of the code base at the moment of committing. Each commit is tagged with a user identifier and a time stamp and constitutes a new version. The VCS allows one to create new branches from any previous version and to merge multiple branches into one, producing a version graph.

In Sound Processes, the STM is extended with the semantics of a versioning system: Each transaction is associated with a time stamp representing T_K, and the evolution of the reference cells is automatically persisted to secondary memory (hard disk). From the user’s perspective, these cells still look like ordinary STM cells, but they have to be accessed through special transaction handles provided by so-called cursors. A cursor represents a path into the version graph, and when a cell is accessed or updated, behind the scenes a complex index resolves the history of that cell to find the value associated with it at the particular moment in T_K. From the system’s point of view, it makes no difference whether one looks at the most recent “version” of a composition or any other moment in its history. Moreover, we can now programmatically ask when a datum was modified or what its past states were, and we may use this information in an artistically meaningful way.
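The following self-contained sketch is not the actual Sound Processes API, and it ignores branching altogether, but it indicates the shape of access through a cursor and the kind of question that becomes possible once every update is stamped with its moment in T_K.

    import scala.collection.mutable

    object VersionSketch {
      final class Tx private[VersionSketch] (val timeStamp: Long)

      // a cursor provides the transaction handles through which cells are accessed
      final class Cursor {
        def step[A](fun: Tx => A): A = fun(new Tx(System.currentTimeMillis()))
      }

      // A versioned cell: instead of overwriting its value, it appends
      // (timeStamp, value) pairs, so that past states remain queryable.
      final class VersionedVar[A](initial: A) {
        private val history = mutable.ArrayBuffer[(Long, A)]((Long.MinValue, initial))

        def apply()(implicit tx: Tx): A             = history.last._2
        def update(value: A)(implicit tx: Tx): Unit = history += (tx.timeStamp -> value)

        // "ask when a datum was modified or what its past states were"
        def valueAt(time: Long): A          = history.takeWhile(_._1 <= time).last._2
        def modificationTimes: Vector[Long] = history.iterator.map(_._1).toVector.tail
      }

      def demo(): Unit = {
        val cursor = new Cursor
        val gain   = new VersionedVar(1.0)
        cursor.step { implicit tx => gain() = 0.5 }          // update in a new transaction
        val now    = cursor.step { implicit tx => gain() }   // read the most recent value
        val before = gain.valueAt(0L)                        // read an earlier moment in T_K
        println((now, before, gain.modificationTimes))
      }
    }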
On top of this fundamental level of automatic and concomitant versioning, arbitrary structures can now be defined for the “intrinsic” musical data. Many authors have taken up the distinction between in-time and outside-time data prominently expressed by Iannis Xenakis. We can now say that “outside-time” only refers to T_(P). The composer conceptually “spatialises” material […]

3.1 Expressions

We provide simple data types for numbers, boolean values, strings etc., along with tuples and ordered and unordered collections. In order to establish relationships between such elements, we create a dataflow-like layer in which objects can propagate changes to their dependents. Unlike variables in common dataflow programming languages, whose values are initially unknown and are assigned only once, we use the concept of expressions that have an initial value and may be updated multiple times. Thus they more closely resemble objects in a Pd or Max patch.

Without loss of generality, we propose to represent points in T_(P) as expressions whose value is of type Long, a 64-bit integer representing a logical offset in sample frames at a chosen sample rate. Time intervals use the type Span, which can be thought of as a tuple of a start and a stop point in time. Unbounded intervals are also permitted; for example, if an object is created in a real-time live situation, it may have a defined start point but an undefined end point. If the object is eventually deleted, the span is updated with a defined end point.
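The framework's actual Span class is richer than this, but as a self-contained sketch the interval type described above could be modelled as follows:

    sealed trait SpanLike
    final case class Span(start: Long, stop: Long) extends SpanLike {
      require(stop >= start, "a span must not end before it starts")
      def length: Long = stop - start
    }
    // an interval with a defined start but a still undefined end, as for an
    // object created during a live performance
    final case class SpanFrom(start: Long) extends SpanLike {
      // when the object is eventually deleted, the open interval becomes bounded
      def close(stop: Long): Span = Span(start, stop)
    }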
The following code shows what the programmatic creation of an expression tree looks like. It defines a function that ties a span succ to an arithmetic expression formed by an offset gap appended to another span pred:

    def placeAfter(pred: Expr.Var[S, Span],
                   succ: Expr.Var[S, Span],
                   gap : Expr    [S, Long])
                  (implicit tx: S#Tx): Unit = {
      val newStart = pred.stop + gap
      val newStop  = newStart + succ().length
      succ()       = Span(newStart, newStop)
    }

A visualisation of the structure is shown in Fig. 1. In short, an object Expr[S, Long] is an expression in system S which evaluates to a long integer. Different systems can be used to decide whether a structure should be traced in T_K or not. An Expr.Var is a variable holding an expression. The broken arrow results from reading the old value of succ() to determine the length of the updated span. The graph is thus acyclic—cyclic object graphs are currently not supported.

Figure 1. Expression chains produced by function placeAfter. Arrows point in dataflow direction from dependency to dependent.

3.2 Sounding Objects

The symbolic nature of programming languages naturally produces a bias towards supporting symbolically represented structures. To improve the support for electronic and electro-acoustic materials, we base our core abstraction for sounding objects, Proc, on three members:

1. An expression graph that evaluates to a unit generator graph handled by the ScalaCollider library, a client for the SuperCollider server.

2. A dictionary scans that maps logical signal names to real-time input or output signals.

3. A dictionary attributes that maps logical key names to heterogeneous values used to configure the sound process.

The unit generators are extended by various elements which interact with the Proc structure, for example by reading from a scan input, writing to a scan output, determining the placement of the process in time, accessing the attributes dictionary, etc.
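As an indication of the first member, the following ScalaCollider fragment constructs such a unit generator graph. The special elements that would tie it to the scans and attributes dictionaries are only hinted at in comments, since their exact names are specific to the framework and omitted here.

    import de.sciss.synth._
    import de.sciss.synth.ugen._

    object GraphSketch {
      // a unit generator graph as handled by ScalaCollider; constructing it
      // does not require a running server
      val example: SynthGraph = SynthGraph {
        // in a Proc, a value like this could be looked up in the `attributes`
        // dictionary instead of being hard-coded
        val freq = 441.0
        val sig  = SinOsc.ar(freq) * 0.1
        // and the output would feed a named entry of the `scans` dictionary
        // rather than a fixed hardware bus
        Out.ar(0, Pan2.ar(sig))
      }
    }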
A ‘scan’ is a connecting point; it administers sinks (process outputs) and sources (process inputs). A sink or source may be either a grapheme or another scan. A grapheme is a random access object—accessible both in real-time and offline—producing a linear time signal from segments of break-point functions or stored audio files. A scan signal is produced either by linking the scan’s source to another scan’s sink—thus establishing “bus routing” between processes—or to a grapheme, or it is produced by the process’ graph function itself. This is illustrated in Fig. 2 (the dashed arrow from grapheme to graph indicates that the implementation for plugging graphemes directly into sinks is currently missing, but there are work-arounds that record the real-time signal and introduce the recording as a new grapheme).

Figure 2. Interaction between scans, graphemes and graph functions

Processes are placed in T_(P) by associating them with a time span—which may be an expression and thus algorithmically specified and updated. A special data structure keeps a designated group of processes indexed in T_(P), and a transport class may then iterate over this temporal dimension in real-time (or offline for the purpose of bouncing).
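A strongly simplified, self-contained sketch of such a span-indexed group is given below; the framework uses a dedicated data structure and a real-time scheduler, but the query a transport needs is of the same shape. Span is re-declared to keep the sketch self-contained.

    import scala.collection.immutable.SortedMap

    object GroupSketch {
      final case class Span(start: Long, stop: Long)

      // processes keyed by the start of their span in T_(P); a sorted map
      // stands in for the framework's dedicated index structure
      final class ProcGroup private (index: SortedMap[Long, List[(Span, String)]]) {
        def add(span: Span, name: String): ProcGroup =
          new ProcGroup(index.updated(span.start,
            (span, name) :: index.getOrElse(span.start, Nil)))

        // all processes whose start falls into [from, until): the question a
        // transport asks while advancing through performance time
        def startingIn(from: Long, until: Long): Iterable[(Span, String)] =
          index.range(from, until).values.flatten
      }
      object ProcGroup {
        def empty: ProcGroup = new ProcGroup(SortedMap.empty)
      }
    }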
4. VOICE TRAP

Several pieces were realised using the system. We report on two of them: a sound installation, Voice Trap, written using the “bare-bones” framework, and a tape composition, (Inde)terminus, written using an emerging environment with a graphical front-end.

Voice Trap is a collaboration between me and visual artist Nayarí Castillo. It is spun around the story of a girl who is haunted by voices. The story is written across four large mirrors on the floor of the room. Large jars, “voice traps”, are filled with different materials and placed on the mirrors. The jars are tagged with the written description of a particular voice, and their contents relate to the sound qualities of the imaginary voices. The sound installation is diffused from 96 piezo speakers grouped into twelve channels, which are placed on a metal grid suspended below the ceiling. Fig. 3 shows photos of the exhibition.

Figure 3. Wide shot and details of Voice Trap (top), and version graph detail (bottom)

The material of the sound composition comes from a microphone that picks up the noises from the street in front of the gallery. These are fed into a database from which individual phrases are constructed. An algorithm searches the database for sounds that are similar both to the currently playing sounds and to an inaudible “hidden” file containing different voice recordings. The idea is that, among the outside sounds, those fragments which contain speech will be preferred. Each of the twelve channels operates independently; the evolution of each channel is captured by our framework, and the algorithm can make references to this history.

The bottom of Fig. 3 shows an example version graph for four channels. Each channel has a dedicated cursor, and each horizontal stretch is the succession of transactions producing a certain number of iterations over the sound phrases, followed by a jump into the “past”, going halfway back between the current transaction and the last branching point. After a jump back in T_K, the sound phrase from that past version is heard again, but the successive evolution (overwriting of fragments with new sounds) diverges from the previous path, because the sound database itself is ephemeral and not reverted to a previous state.
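The retrograde step can be sketched as follows; version identifiers are reduced to a plain sequence since the last branching point, which glosses over the actual version graph:

    object JumpSketch {
      // the path of versions a channel has produced since its last branching
      // point, oldest first, most recent last
      def jumpTarget(pathSinceBranch: Vector[Long]): Long = {
        require(pathSinceBranch.nonEmpty, "path must at least contain the branching point")
        // halfway back between the current transaction (last element) and the
        // last branching point (first element)
        pathSinceBranch((pathSinceBranch.size - 1) / 2)
      }
    }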
Although I found it difficult to perceive these jumps—perhaps due to the channel-locality of the jump, or due to the fact that the specific environmental sounds are more difficult to distinguish than traditional musical gestures made from pitches—this piece demonstrated that the framework is functional and can handle a continuously growing database even after tens of thousands of transactions and a file size of several hundred megabytes.

There was no specific development environment that allowed the composition of the algorithms in a traceable way; they were written in the object language using a traditional IDE, an activity which remained unobserved. On the other hand, the traces the algorithm produced inside the observed domain were easily captured. Constructing a whole meta language was too much of an effort at this stage, so another […]
5. (INDE)TERMINUS

Such a setting was established in another experiment. Its working title (Inde)terminus refers to Gottfried Michael Koenig’s tape piece Terminus I from 1961, which is based on a scheme for deriving sounds from previous sounds by applying a set of transformations [7].

To realise this electroacoustic study, a graphical tape music environment named Mellite was written, based on Sound Processes. A screenshot is shown in Fig. 4. On the left side, a timeline view can be seen with several audio file regions placed on the canvas. The supported operations are: adding and removing, selecting, moving, resizing, muting or un-muting a region, and adjusting its gain and fade curves.

We use the concept of a workspace, which is a tree of “elements”, shown as a window on the right-hand side of the screenshot. The opened popup menu shows the types of elements supported: folders, process groups (timelines), artefact stores (hard-disk locations), audio files, text strings, integer and decimal numbers, and code fragments. Elements can be dragged and dropped between different locations of the interface.
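As a rough sketch only (the actual Mellite classes differ in naming and capabilities), the element types listed above can be thought of as an algebraic data type:

    object WorkspaceSketch {
      sealed trait Element
      final case class Folder       (name: String, children: List[Element]) extends Element
      final case class ProcGroup    (name: String)                          extends Element // a timeline
      final case class ArtifactStore(directory: String)                     extends Element // hard-disk location
      final case class AudioFile    (path: String)                          extends Element
      final case class StringElem   (value: String)                         extends Element
      final case class IntElem      (value: Int)                            extends Element
      final case class DoubleElem   (value: Double)                         extends Element
      final case class CodeFragment (source: String)                        extends Element
    }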
The code fragment elements played an essential part. The experiment begins with an initial hand-constructed canvas of three minutes duration, sparsely placing sounds on an 8-channel layout. In the next step a bounce is carried out and fed through a signal processing stage, becoming the blueprint for the next iteration. Here, a new canvas is built around this blueprint, possibly cutting it up, removing some parts of it and adding new sounds. Then again a bounce and a transformation are carried out, and so forth. This is illustrated in the top part of Fig. 5.

The environment uses an embedded Scala interpreter and an integrated code editor to textually manipulate objects or, in this case, to define transformations of the bounced sounds. The creation procedure of this transformed sound file is memorised, so it can be re-rendered at a later point even if the input canvas has changed. This idea is illustrated in Fig. 6 and works as a generalisation of the expression cells, whereby the deployed sound file artifact serves as the “evaluated” expression.

[Figure: a recursion object referencing a process group, a selected time span and output channels, feeding a bounce]
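A self-contained sketch of this memorisation (not the Mellite implementation, whose types differ) would store everything needed to re-render the artifact: the process group, the selected time span, the number of output channels, and the transformation itself.

    object RecursionSketch {
      final case class Recursion[Timeline, Audio](
        group    : Timeline,            // the process group (timeline) to bounce
        span     : (Long, Long),        // the selected time span in T_(P)
        channels : Int,                 // number of output channels
        transform: Audio => Audio       // e.g. segmentation and reversal of segments
      ) {
        // re-rendering: bounce the (possibly changed) input canvas first, then
        // apply the memorised transformation; the result replaces the deployed
        // sound file artifact
        def render(bounce: (Timeline, (Long, Long), Int) => Audio): Audio =
          transform(bounce(group, span, channels))
      }
    }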
In this study the transformation was a segmentation and reversal of the resultant segments of the bounced file. Each channel was bounced and transformed separately, leading to different segmentations, so that not only a diachronous reversal occurs, but also a synchronous scattering. The transformed bounces were placed on a new timeline and cut again into chunks to remove the silent parts. The new temporal structure was then adjusted and “composed”, possibly thinning out the material further or introducing new elements. Since the next iteration would again reverse the temporal succession, a specific similarity arises within the group of even-numbered iterations and within the group of odd-numbered iterations.

The trace of the re-imported bounces permitted the creation of a closed recursive setting: After a certain number of iterations, the input to the initial bounce is exchanged for the result of the most recent (fifth) iteration, retroactively re-triggering the bounce and transformation of Fig. 6. Consecutively, the iterations would be re-worked, a procedure that could be repeated ad infinitum, explaining the title of the study. Practically, this re-working was carried out for the second (sixth), third (seventh) and fourth (eighth) iteration, as shown in the bottom row of Fig. 5.

The “flattening operation” of the bounce establishes what may be perceived as a crucial deferral or suspension in the process: A time canvas is manipulated whose product is used in another canvas, but the propagation of the changes from the former to the latter is suspended until a conscious decision is made. Furthermore, the flattening bounce provides the closure of the material, which makes it possible to subject it again to general transformations such as segmentation and recombination. This connectivity is an important feature of a representation, perhaps more important than its “symbol” function (Hamman).

5.1 Ex Post Analysis

From an outside perspective, the version history can now be used to query different aspects of the process. As an example, Fig. 7 shows a “punch card” plot similar to the ones provided by the popular open source platform GitHub. It indicates at what times of the week someone has worked on a piece of software. While composing is hardly an office job, charts like this, especially when more data is available, could reveal different profiles of composers, or they could be used to compare different types of activities.

Figure 7. “Punch card” of working hours distribution
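Given the list of transaction time stamps in T_K, such a punch card reduces to a simple bucketing by weekday and hour. The following sketch uses the Java time API; how the stamps are obtained from the version graph is omitted.

    import java.time.{DayOfWeek, Instant, ZoneId}

    object PunchCardSketch {
      /** Counts transactions per (weekday, hour-of-day) bucket.
        * @param timeStamps  transaction creation times in T_K, as epoch milliseconds
        */
      def punchCard(timeStamps: Seq[Long],
                    zone: ZoneId = ZoneId.systemDefault()): Map[(DayOfWeek, Int), Int] =
        timeStamps
          .map { millis =>
            val dt = Instant.ofEpochMilli(millis).atZone(zone)
            (dt.getDayOfWeek, dt.getHour)
          }
          .groupBy(identity)
          .map { case (bucket, hits) => bucket -> hits.size }
    }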
[Figure: counts of editing operations, including Add, Remove, Move, Resize, Mute, Fade and File changes]

While the selection of sound files was not important to the composition, […] to reveal when particular sound files had been added to the piece, and it was possible to further elucidate […]
[…] horizontal segmentation. One could interpret that diagram again. One would find the “carriage returns” in scanning through the timelines; discern the initial phase of each iteration from the subsequent refinement; see moments of obstinate distillation at a particular spot; see at which point in T_K a certain part of the piece is more or less finished…

[Figure: two histograms of editing amounts, frequency versus amount [s], with logarithmically spaced bins between −393.66 and +393.66 seconds]

6. LIMITATIONS AND FUTURE DIRECTIONS

Drawing from the experience gathered so far, we will now highlight some limitations and make suggestions for future refinements of the framework. First of all, the querying possibilities should be improved and extended, especially for collections: Finding out when elements were added or removed requires iteration over the whole data structure for each possible version step. What we envision is a general indexing operation that produces auxiliary data structures for ordered or unordered sequences. One should be able to index a group of sound processes not just by their position […]
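In a much simplified form, the envisioned auxiliary structure could record additions and removals as they happen, keyed by the version in which they occur; the following sketch is illustrative only:

    import scala.collection.immutable.SortedMap

    object IndexSketch {
      final case class CollectionIndex[A](
        added  : SortedMap[Long, Set[A]] = SortedMap.empty[Long, Set[A]],
        removed: SortedMap[Long, Set[A]] = SortedMap.empty[Long, Set[A]]
      ) {
        def recordAdd(version: Long, elem: A): CollectionIndex[A] =
          copy(added = added.updated(version, added.getOrElse(version, Set.empty[A]) + elem))

        def recordRemove(version: Long, elem: A): CollectionIndex[A] =
          copy(removed = removed.updated(version, removed.getOrElse(version, Set.empty[A]) + elem))

        // "when was this element added?" answered from the index alone, without
        // replaying every version step of the underlying collection
        def additionsOf(elem: A): Iterable[Long] =
          added.collect { case (version, elems) if elems(elem) => version }
      }
    }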
[…] constraints between them: Instead of saying that a sound object starts this much time after another sound object (the placeAfter example), we could just say generally that it starts after that sound, or we could say that it starts at most such-and-such an amount of time after that sound.
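To make the contrast concrete, such constraints could be stated declaratively. The names below are hypothetical and no solver is shown; the point is merely that the relation is expressed once and would be kept satisfied whenever either span changes:

    object ConstraintSketch {
      sealed trait TimeConstraint
      // b must start after a has ended
      final case class StartsAfter (a: String, b: String)               extends TimeConstraint
      // b must start no later than maxGap frames after a has ended
      final case class StartsWithin(a: String, b: String, maxGap: Long) extends TimeConstraint

      val constraints: List[TimeConstraint] = List(
        StartsAfter ("intro", "answer"),
        StartsWithin("intro", "answer", maxGap = 44100L)  // at most one second later
      )
    }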
In terms of the representation of musical data, we feel that the current timeline model is too limited. A more powerful representation would allow the hierarchic and recursive nesting of elements in T_(P). Similar to the idea of filtering collections as an expression operator, fragments of one timeline could appear within an outer timeline.

In terms of usage scenarios, the studies have shown that the framework scales reasonably well, being usable for real-time generative sound installations as well as for mixed offline/online work such as tape composition. We have also developed a real-time graphical user interface for live improvisation, but it has not yet been coupled with the current version of Sound Processes, a case we still have to explore. A second scenario is the collaboration of multiple composers on a composition, or performers improvising together; can we associate transactions with different users? What is the nature of distributed transactions, or do we need to constantly merge multiple distributed transactions?

The previous suggestions have been summarised in Table 1. Of course, there are many more paths to explore. Graphical user interfaces are one of them. How should interconnected dataflow expressions be represented and edited? How do we convey links and dependencies between different elements across the user interface, without resorting to “patch cords”? How continuous are the transitions between a live improvisation view and a tape editing view? What is the relation between code fragments and graphical, symbolic or iconic elements?

    Property          Current state       Proposal
    Memory disposal   Manual              Garbage collected
    Serialisation     Static, top-down    + Dynamic
    Cyclic graphs     No                  Yes
    Indices           Specific            Generic
    Timeline objects  Non-nested          Nested, recursive
    Expressions       Determinate         + Constrained
    Workspace         Isolated            Interacting
    User              Single              Collaborative

Table 1. Suggestions for improving the framework

7. CONCLUSIONS

We concluded our previous paper [6] by saying that the most important task would be to put the framework into production in different contexts and see how it scaled under real-world conditions. We believe this task has been successfully completed, and the current paper showed that a great number of interesting questions arise from the possibility to concomitantly trace the version history or to analyse it ex post facto.

Our next research focuses on the challenges and suggestions described in the previous section, as well as the extension of the Mellite front-end to a full-blown environment usable by other composers. The conflict between such usability and the critical value software plays in the artistic episteme is aptly worded by Hamman [3]:

«When well-designed, the interface should tell us, by reminding us of our history of experience, how it works. We shouldn’t have to think about how to use a door knob, for instance … At precisely the moment when an interface becomes sensible and useful, however, the shapes, materials, and structures which constitute its physical and epistemological frame, cease to exist in themselves …»

We should thus not forget the advantage of having—and retaining—a prototypical situation that can be understood as a “foregrounding” of representations, viewing music composition «as a task that is as much concerned with the theories and procedures by which musical artifacts might be generated as it is with the actual generation of those artifacts.» (Hamman)

Acknowledgments

The research was supported by a PhD grant from the University of Plymouth. The (Inde)terminus study was carried out during a studio residency provided by ZKM Karlsruhe.

8. REFERENCES

[1] B. Truax, “A communicational approach to computer sound programs,” Journal of Music Theory, vol. 20, no. 2, pp. 227–300, 1976.

[2] T. Winograd, “Frame representations and the declarative/procedural controversy,” in Representation and Understanding: Studies in Cognitive Science, D. G. Bobrow and A. Collins, Eds. New York: Academic Press, 1975, pp. 185–210.

[3] M. Hamman, “From Symbol to Semiotic: Representation, Signification, and the Composition of Music Interaction,” Journal of New Music Research, vol. 28, no. 2, pp. 90–104, 1999.

[4] C. Burns, “Tracing Compositional Process: Software synthesis code as documentary evidence,” in Proceedings of the 28th International Computer Music Conference (ICMC), Göteborg, 2002, pp. 568–571.

[5] H. H. Rutz, E. Miranda, and G. Eckel, “On the Traceability of the Compositional Process,” in Proceedings of the 7th Sound and Music Computing Conference (SMC), Barcelona, 2010, pp. 38:1–38:7.

[6] H. H. Rutz, “A Reactive, Confluently Persistent Framework for the Design of Computer Music Systems,” in Proceedings of the 9th Sound and Music Computing Conference (SMC), Copenhagen, 2012, pp. 121–129.

[7] G. M. Koenig, “Genesis der Form unter technischen Bedingungen,” in Ästhetische Praxis, ser. Texte zur Musik. Saarbrücken: PFAU Verlag, 1993, vol. 3, pp. 277–288.