Making Programming Languages To Dance To: Live Coding With Tidal
Abstract
Live coding of music has grown into a vibrant international community of research and practice over the past decade, providing a
new research domain where computer science blends with the arts.
In this paper the domain of live coding is described, with a focus
on the programming language design challenges involved, and the
ways in which a functional approach can meet those challenges.
This leads to the introduction of Tidal 0.4, a Domain Specific Language embedded in Haskell. This is a substantial restructuring of Tidal, which now represents musical patterns as functions from time to events, inspired by Functional Reactive Programming.
Categories and Subject Descriptors J.5 [Performing Arts]; J.5
[Music]; D.3.2 [Applicative (functional) languages]
Keywords domain specific languages, live coding, music
1. Introduction
Live coding is where source code is edited and interpreted in order to modify and control a running process. Over the past decade,
this technique has been increasingly used as a means of creating
live, improvised music (Collins et al. 2003), with new programming languages and environments developed as end-user music interfaces (e.g. Wang and Cook 2004; Sorensen 2005; Aaron et al.
2011; McLean et al. 2010). Live coding of music and video is now
a vibrant area of research, a core topic in major Computer Music conferences, the subject of journal special issues, and the focus of international seminars. This research runs alongside emerging communities of live coding practitioners, with international live
coding music festivals held in the UK, Germany and Mexico. Speculative, isolated experiments by both researchers and practitioners
have expanded, developing into active communities of practice.
Live coding has predominantly emerged from digital performing arts and related research contexts, but also connects with activities in Software Engineering and Computer Science, under the developing umbrella of live programming language research (see for example the proceedings of the LIVE workshop, ICSE 2013).
These intertwined strands are revitalising ideas around liveness first explored in earlier decades of computing research.
2.1 Liveness
It is worth considering what we mean by the word live. In practice, the speed of communications is never instantaneous, and so in a sense nothing is completely live. Instead, let's consider liveness in terms of live feedback loops, where two agents (human or computational) continually influence one another. We can then identify different forms of liveness in terms of different arrangements of feedback loops.
In a live coded performance, there are at least three main feedback loops. One is between the programmer and their code; making
a change, and reading it in context alongside any syntactical errors
or warnings. This loop is known as manipulation feedback (Nash and Blackwell 2011), and may include process and/or data visualisation through debugging and other programmer tools. A
second feedback loop, known as performance feedback (Nash and
Blackwell 2011), connects the programmer and the program output, in this case music carried by sound. In live coding of music, the
feedback cycle of software development is shared with that of musical development. The third loop is between the programmer and
their audience and/or co-performers. We can call this feedback loop
social feedback, which is foregrounded at algorave events, where
the audience is dancing.
3. Introducing Tidal
type Time = Rational
type Arc = (Time, Time)
type Event a = (Arc, Arc, a)
data Pattern a = Pattern (Arc -> [Event a])
A pattern is thus a function from a time arc to events; when queried, the events active during the given time are returned. The arcs of these events may overlap, in other words supporting musical polyphony.
All Tidal patterns are notionally infinite in length; they cycle indefinitely, and can be queried for events at any point. Long-term structure is certainly possible to represent, although Tidal's development has been focused on live coding situations where such structure is already provided by the live coder, who is continually changing the pattern.
This use of functions to represent time-varying values borrows
ideas from Functional Reactive Programming (Elliott 2009). However, the particular use of time arcs appears to be novel, and allows both continuous and discrete patterns to be represented within
the same datatype. For discrete patterns, events active during the
given time arc are returned. For continuous structures, an event
value is sampled with a granularity given by the duration of the
Arc. In practice, this allows discrete and continuous patterns to
be straightforwardly combined, allowing expressive composition of
music through composition of functions.
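As an illustrative sketch (not Tidal's actual definitions; the names sig and sine are assumed here), a continuous pattern can be built by sampling a function of time at the midpoint of the queried arc:

-- A sketch only: sample a function of time once per query,
-- at the midpoint of the queried arc.
sig :: (Time -> a) -> Pattern a
sig f = Pattern (\(s, e) -> [((s, e), (s, e), f (s + (e - s) / 2))])

-- For example, a sinewave oscillating once per cycle:
sine :: Pattern Double
sine = sig (\t -> sin (2 * pi * fromRational t))

Queried over a narrow arc, sine returns a single event whose value approximates the waveform at that moment; the narrower the arc, the finer the sampling.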
3.2 Building patterns
We will now look into how patterns are built and combined in Tidal.
Our focus in this section will be on implementation rather than use,
but this will hopefully provide some important insights into how
Tidal may be used.
Perhaps the simplest pattern is silence, which returns no events
for any time:
silence :: Pattern a
silence = Pattern (const [])
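Single values become endlessly repeating patterns through pure. Tidal defines pure via its Applicative instance; the following stand-alone sketch (a hypothetical atom constructor, under the representation above) conveys the idea:

-- A sketch, not Tidal's definition: one whole-cycle event for
-- each cycle overlapped by the queried arc.
atom :: a -> Pattern a
atom x = Pattern f
  where f (s, e) = [ ((c, c + 1), (c, c + 1), x)
                   | c' <- [floor s .. (ceiling e - 1 :: Integer)]
                   , let c = fromIntegral c' ]

So atom blue (like pure blue) yields a whole-cycle event of blue, repeated every cycle.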
If such a pattern appears as a single event when visualised, this is because we are only visualising the first cycle; the others are still there.
The definition for combining patterns so that their events co-occur is straightforward:
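A sketch of such a function, assuming the representation above (Tidal's own stack may differ in detail):

stack :: [Pattern a] -> Pattern a
stack ps = Pattern (\a -> concatMap (\(Pattern f) -> f a) ps)

Each pattern in the list is queried with the same arc, and the resulting events are concatenated.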
The vertical order of the events as visualised above is not meaningful; that the events co-occur simply allows us to make polyphonic music, where multiple events may sound at the same time.
By combining the functions we have seen so far, we may already
begin to compose some interesting patterns:
density 16 $ stack [pure blue,
                    cat [silence,
                         cat [pure green,
                              pure yellow]],
                    pure orange]
3.3 Parsing strings
This already makes certain pattern transformations straightforward. For example, musical transposition (increasing or decreasing all musical note values) may be defined in terms of addition:
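As a sketch (the transpose name is assumed here), given a Functor instance for Pattern, transposition is just addition mapped over every event value:

-- A sketch of a Functor instance: apply a function to each
-- event's value, leaving the event arcs untouched.
instance Functor Pattern where
  fmap g (Pattern f) = Pattern (map (\(w, p, x) -> (w, p, g x)) . f)

-- Raise every note in a pattern by seven semitones:
transpose :: Num a => a -> Pattern a -> Pattern a
transpose n = fmap (+ n)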
So, values within square brackets are combined over time with cat, so that together they take up a single step.
If curly brackets rather than square brackets are used, subpatterns are combined in a different way, timewise. The first subpattern still takes up a single cycle, but the other subpatterns on that level are stretched or shrunk so that each element within them is the same length. For example, compare two patterns such as the following (an illustrative pair):
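-- layered: red and purple each last half a cycle
"[blue green orange, red purple]"

-- polymetric: every element lasts a third of a cycle
"{blue green orange, red purple}"

In the first, the two-element subpattern is squashed to fit the cycle, so red and purple each last half a cycle; in the second, every element lasts a third of a cycle, so red and purple drift across successive cycles.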
Notice that the resulting pattern will always maintain the structure of the first pattern over time. However, where an event in the left-hand pattern matches multiple events in the right-hand pattern, the number of events within this structure will be multiplied. For example:
(blend 0.5 <$> "[black grey white]" <*> "[red green, magenta yellow]")
4. Transformations
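Tidal's <~ operator shifts a pattern earlier in time by a given number of cycles; combined with the higher-order every function introduced below, we might write something like (an assumed example):

every 4 ((1/3) <~) "[blue orange green]"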
The above shows every fourth cycle (starting with the first) being shifted to the left by a third of a cycle.
iter The iter transformation is related to <~, but the shift is compounded until the cycle gets back to its starting position. The number of steps that this takes place over is given as a parameter. The shift amount is therefore one divided by the given number of steps, which in the example below is a quarter.
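An assumed example with four steps:

iter 4 "[blue orange green purple]"

Each successive cycle is shifted a further quarter to the left, returning to the original position after four cycles.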
every Reversing a pattern is not very interesting unless you contrast it with the original, to create symmetries. To do this, we can use every, a higher-order transformation which applies a given pattern transformation every given number of cycles. The following (an illustrative example, using Tidal's rev) reverses every third cycle:
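every 3 rev "[blue orange green]"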
superimpose is another higher-order transformation, which combines the given pattern with the result of the given transformation. For example, we can use it with the transformation in the above example (again an illustrative pattern):
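superimpose (every 3 rev) "[blue orange green]"

The pattern and its cyclically reversed variant then sound together.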
To visualise some of the repeating structure, the above image
shows a ten-by-twenty grid of cycles, scanning across and down.
5. Patterning sound
The visual examples only work up to a point, and the multidimensional nature of timbre is difficult to get across with colour alone. Tidal allows many aspects of sound, such as formant filters, spatialisation, pitch, onset and offset, to be patterned separately, and then composed into patterns of synthesiser control messages. Pattern transforms can then manipulate multiple aspects of sound at once; for example the jux transform works similarly to superimpose, but the original pattern is panned to the left speaker, and the transformed pattern to the right. The striate transform effectively cuts a sample into multiple sound grains, so that those patterns of grains can then be manipulated with further transforms. For details, please refer to the Tidal documentation, and also to the numerous video examples linked to from the homepage http://yaxu.org/tidal.
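For instance, a typical performance line might look like the following (an assumed example; d1 plays a pattern of synthesiser control messages, and sound names the samples to trigger):

d1 $ jux rev $ striate 4 $ sound "bd sn"

Here the bass drum and snare samples are each cut into four grains, played straight in the left speaker and reversed in the right.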
7. Community
Over the past year, a community of Tidal users has started to grow. This followed a residency in Hangar, Barcelona, during which the Tidal installation procedure was improved and documented. This community was surveyed by invitation via the Tidal on-line forum; respondents were encouraged to give honest answers, and fifteen responded. Two demographic questions were asked. Given an optional free-text question "What is your gender?", 10 identified as male, and the remainder chose not to answer. Given an optional question "What is your age?", 7 chose 17-25, 4 chose 26-40, and the remainder chose not to answer.
Respondents were asked to estimate the number of hours they
had used Tidal. Answers ranged from 2 to 300, with a mean of 44.2
and a standard deviation of 80.8. We can say that all had at least
played around with it for over an hour, and that many had invested
significant time in learning it; the mode was 8 hours.
A surprising finding was that respondents generally had little or no experience of functional programming languages before trying Tidal, as shown by their answers to the question "How much experience of functional programming languages (e.g. Haskell, Lisp, etc.) did you have before trying Tidal?".
[Figure: survey responses on a Likert scale, from "Strongly disagree" to "Strongly agree".]
References
S. Aaron, A. F. Blackwell, R. Hoadley, and T. Regan. A principled approach to developing new languages for live coding. In Proceedings of New Interfaces for Musical Expression 2011, pages 381–386, 2011.
R. Bell. An interface for realtime music using interpreted Haskell. In Proceedings of LAC 2011, 2011.
A. Blackwell, A. McLean, J. Noble, and J. Rohrhuber. Collaboration and learning through live coding (Dagstuhl Seminar 13382). Dagstuhl Reports, 3(9):130–168, 2014. URL http://drops.dagstuhl.de/opus/volltexte/2014/4420.
M. Clayton. Time in Indian Music: Rhythm, Metre, and Form in North Indian Rag Performance (Oxford Monographs on Music). Oxford University Press, USA, Aug. 2008. ISBN 0195339681. URL http://www.worldcat.org/isbn/0195339681.
N. Collins and A. McLean. Algorave: A survey of the history, aesthetics and technology of live performance of algorithmic electronic dance music. In Proceedings of the International Conference on New Interfaces for Musical Expression, 2014.
N. Collins, A. McLean, J. Rohrhuber, and A. Ward. Live coding in laptop performance. Organised Sound, 8(3):321–330, 2003. URL http://dx.doi.org/10.1017/s135577180300030x.
C. Elliott. Push-pull functional reactive programming. In Proceedings of the 2nd ACM SIGPLAN Symposium on Haskell, 2009.
T. Hall. Towards a Slow Code Manifesto. Published online: http://www.ludions.com/slowcode/, Apr. 2007.