Bean: A Digital Musical Instrument For Use in Music Therapy
Nicholas J. Kirwan
Department of Architecture, Design & Media Technology, Medialogy section
Aalborg University Copenhagen
[email protected]
Winter 2014
Abstract
The use of interactive technology in music therapy is growing, and with good reason. The flexibility afforded by the use of these technologies in music therapy is substantial. Presented here are the initial steps in the development of a Digital Musical Instrument designed for use in a music therapy setting. An informal evaluation was performed, including both clients and therapists, in order to assess the current state of development and to provide clues for optimal improvement going forward. Both the strengths and the weaknesses of the design at the time of the evaluation were assessed. Using this information, the current design has been updated and is now closer to a state appropriate for further, formal evaluation.
Introduction
A basic working definition of music therapy is the use of music as a tool in a therapeutic setting. Tailored to the individual needs of the client, this tool can be used to achieve therapeutic goals such as enabling communication or improving motor skills [1]. The flexible nature of a Digital Musical Instrument's (DMI)1 sonic output and control possibilities could be a powerful tool to add to the arsenal of a music therapist. Indeed, it has been shown that the use of electronic musical technologies has an impact on outcomes relating to communication and expression [2], while also enabling a sense of achievement and empowerment [3]. As mentioned, communication is a common goal in this form of therapy. Facilitating performance and ancillary gestures through tangible interaction could therefore lead to expressive communication when combined with music [4]. For some clients the "up to date technology" itself can be a positive and engaging factor in music therapy, in addition to the possibility of new and interesting sounds or "new sound worlds" [5].

The use of novel technologies in music therapy can, however, pose some practical as well as design problems. For instance, can clients easily understand that the musical contribution is of their making? Is the control of these contributions intuitive and understandable? Is the experience of using these technologies engaging, with enough variance to hold interest? These issues are not specific to music therapy, but are in fact universal factors associated with DMI design, for example the "ubiquitous mapping problem" [6]. Effective handling of these factors could be of even greater importance when the user has complex needs.

In this paper, the iterative development of Bean, a novel tangible DMI, will be presented. Bean was created to investigate problems like those mentioned above, and to help provide some answers. After outlining the background research relevant to the design, currently popular technologies used in music therapy will be discussed. After this, the design and construction process of Bean will be elaborated on, in regard to the design, hardware and software. Next, an initial client evaluation will be described, followed by a discussion/conclusion. Finally, future plans for the development of Bean will be discussed.

1 "Digital musical instrument" is used here as described in the first paragraphs of [15].
Background
As mentioned, the use of technology in music therapy has the potential for many positive applications, but the therapist must have the required knowledge to use these technologies effectively in a therapeutic setting [7]. Research has been conducted investigating technology use in music therapy [7][8]. Interestingly, these studies make clear that distance sensing2 is the most frequently used sensing mode when sensing technologies are used. Tangible interface use, however, is not as widespread in music therapy, which is understandable, as a proportion of clients have physical disabilities that could hinder such interaction. Despite this lack of total inclusivity, there is still a need for the option of tangible interaction for clients with this ability, ideally enabling an embodied musical experience.

A framework for the use of music therapy related technologies was developed through an investigation of music therapists' experience with technology use [2] (Figure 1). The data gathered there can, in part, be used to effectively design technologies suited to this setting. On reflection, the first two points are aimed more at informing the therapist, and have little relevance to the design of instruments. In essence, these points address the resources available to the therapist, and the evaluation of suitable sensor technologies to fit the individual client's challenges and needs. Of course, "understanding movement" could be seen as a relevant element of developing a gesture-based instrument. The context, as used by Magee et al. (2008) in this framework, seems to describe a subjective evaluation of the client's physical needs by the therapist. The three last points can be intrinsically linked to the functionality and design of DMIs such as Bean. In the context of DMI design, however, these elements would be more intuitive in the following order: cause/effect and a sense of agency is the primary element; after this comes enabling the client through effective mapping, which should lead to musical play that holds the interest of the user.

Figure 1: Framework for technology use in music therapy, adapted to aid the design of therapeutically oriented DMIs.

2 Referring to infrared distance sensing, which is used in Soundbeam. http://www.soundbeam.co.uk/ [2]

This statement could be seen as alluding to a sense of agency3 [9]. Paine et al. (2009) categorize agency into two approaches in relation to DMI design. The first is the control of predetermined sequences of sounds, such as triggering sounds in sample-based software. The second is the creation of sound through real-time manipulation of software synthesis variables. Furthermore, when the creation paradigm is designed for, it is suggested that immediate agency should be facilitated, accounting for primary causality in the use of the DMI. Immediate agency and corresponding feedback could be seen as modeling the cause and effect cycle.

3 Agency as a term has many different meanings, but its use in this paper can be defined as the ability to act, and to understand the causal significance of one's actions. Feedback is paramount in facilitating agency in the context of DMI use [12].
Related work
In this section, some of the most popular interactive technologies currently in use in music therapy will be looked at. According to a survey including over 600 therapists from around the globe, Soundbeam was the most popular system of interactive technology in use [8]. Soundbeam was followed in second place by MIDIcreator4. As regards tangible interfaces, a notable commercially available example is the Skoog5 [7] (Figure 2).

Figure 2: Examples of technologies used in music therapy
Soundbeam
The Soundbeam is a powerful, complete system. Incorporated in it are sample-based sound production and preset sequencer aspects. The method of interaction is through distance sensing and switches. It provides a particularly effective platform for clients with physical disabilities, where the smallest of movements, even chest movement when breathing, can be translated to sound. Via MIDI6, the system can communicate with external MIDI-enabled hardware and software music devices. However, the system lacks an option for tangible, embodied interaction.
MIDIcreator
This system is very similar to Soundbeam in that it is a self-sufficient interactive music making system. Various plug and play sensors can be used to trigger the preset sounds. MIDIcreator offers a more varied sensor choice; apart from motion sensing there is, for example, squeeze sensing, two-axis acceleration sensing and a cushion weight sensor. Like the Soundbeam, MIDIcreator also has MIDI pass-through functionality.

4 http://www.midicreator-resources.co.uk/
5 http://www.skoogmusic.com/
6 http://www.midi.org/aboutmidi/index.php
Skoog
The Skoog is a commercially available tangible interface aimed towards novice musicians. It is also well suited to music therapy situations; in fact, preliminary case studies were carried out in which no adverse effects related to the Skoog were encountered7. The modes of interaction the Skoog provides are tapping, shaking, squeezing and twisting, which are assignable through use of the supplied software. There is a very interesting visual/colour cue system designed to aid pedagogically. Structurally, the Skoog is a cube-like shape with five semi-spherical protruding buttons. Each button is a different colour, which is mirrored in the accompanying software. The interface can therefore be played by colour cues through visual feedback from the software. As with the other systems mentioned so far, the Skoog has MIDI out capabilities. There is no spatial change sensing; the Skoog is a static interface.
Bean
Bean is a novel, gesturally controlled digital musical instrument. The user interaction is minimalistic, consisting of spatial movement of the instrument along with two push buttons. The instrument is played by a combination of these two modes of interaction. This simplicity was an intentional design feature, with transparency in mind. Although primarily a musical instrument, there are some visual aspects integrated in Bean. Direct visual feedback from the instrument itself is mirrored in accompanying software, where a 3D virtual representation of the instrument can be seen. These aspects were also developed with the aim of encouraging an immediate sense of agency. In essence, the instrument can be seen as having both a physical and a virtual segment. The construction of Bean, as well as an outline of the system which runs behind it, will be elaborated on next.
Constructing Bean

Initial Design
Bean is ellipsoidal in shape, which innately fits well between two hands. The initial step in realizing this shape for rapid prototyping was the use of 3D imaging software. Meshmixer8 was used first, to create a 3D model of an ellipsoid. After this, 123D Make9 was used. 123D Make is a powerful 3D modeling package, which facilitates the segmenting of 3D shapes to provide laser-cuttable templates (Figure 3). These templates can then be cut and used to reconstruct the 3D shapes in a press-fit format. The templates were then transferred to Corel Draw10, the graphics software which drives the laser cutter. Using Corel Draw, the template was modified to enable secure attachment of internal hardware. Several iterations were cut during a fine-tuning process for both fit and size. The material used to manufacture the press-fit skeleton was 3 mm hardboard. Corel Draw and the laser cutter were also used to cut the button tops from 3 mm acrylic sheet material. These additions were needed to increase the pressable surface area of each of the buttons.

7 http://www.skoogmusic.com/community/case-study/watsonej-summary
8 http://www.meshmixer.com/
9 http://www.123dapp.com/make
10 http://www.coreldraw.com/rw/product/graphic-design-software/?hptrack=eu2bb1
Figure 3: The partially assembled press-fit structure, showing the modification for the attachment of hardware.

Finally, the outer surface covering consists of layers of PVC foil11, covered by a double layer of nylon from a pair of stockings. This covering has a dual purpose. The first is to restrict access to the internal hardware by enclosing the skeletal frame. The second is partly cosmetic: to diffuse the internal light source and make Bean pleasing to the eye.
Hardware
Embedded computing is at the heart of Bean (Figure 4). A Teensy 3.012, a compact Arduino13-compatible USB microcontroller, is the "brain" of the physical segment of the instrument, i.e. the ellipsoid. The Teensy board powers up and initiates communication with the Wii Nunchuck14 board. It then receives all the sensor data, turns the relevant data into direct visual feedback, and also transmits all the data onward over serial communication to the computer. For ease of connection, the Teensy was mounted on a custom-made circuit board, which allowed for effective connection and disconnection of both the Nunchuck board and the LED.

11 Also known as cling film. Commonly used for food storage purposes.
12 https://www.pjrc.com/teensy/index.html
13 http://arduino.cc/
14 The Wii Nunchuck is a controller for use with the Nintendo Wii game console.
Figure 4: The internal layout of Bean

The sensor unit is in fact a modified Wii Nunchuck, altered so that the original buttons could be extended away from the body of the Nunchuck and placed on the outer shell of the instrument. The main sensor is the Nunchuck's on-board accelerometer. This sensor enables movement tracking in both pitch (X-axis) and roll (Y-axis), and jolt detection vertically (Z-axis). The two buttons allow extra access to control parameters. The RGB, or multicolour, LED is an individually addressable LED containing a WS280115 control chip. The direct visual feedback mentioned earlier is provided by light from this onboard LED.
Software
A system overview can be seen in Figure 5, a data flow diagram that outlines the process of turning raw sensor data into aural and visual feedback. To facilitate this process, a number of software solutions were created:

Figure 5: A data flow diagram showing the sensor data, control paths and feedback of the system.

15 http://www.adafruit.com/datasheets/WS2801.pdf
Sensor Input
The first step in the development of the software used in the instrument was to program the Teensy microcontroller. An Arduino sketch was created that enables the Teensy to initialize the Wii Nunchuck, using the I2C16 communication protocol, and begin receiving the sensor data. The LED is also initialized by this sketch, and is communicated with using the SPI17 communication protocol. The sketch also directly maps certain sensor data to different colours produced by the LED. The final step is the formatting and transmission of the sensor data over the serial bus to the laptop.
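The sketch itself is not reproduced here, but its structure can be illustrated with the minimal version below. It assumes the standard 0x52 Nunchuck I2C address, the common "unencrypted" initialization sequence, a single WS2801 LED on the hardware SPI pins and a simple comma-separated serial format; the exact pins, colour mapping and message format used in Bean may differ.

// Illustrative Arduino sketch: read a Wii Nunchuck over I2C, light a WS2801
// LED over SPI, and forward the sensor data over serial. Values are examples.
#include <Wire.h>
#include <SPI.h>

const int NUNCHUCK_ADDR = 0x52;
byte data[6];                              // one raw Nunchuck report

void setup() {
  Serial.begin(115200);                    // serial link to the laptop
  Wire.begin();                            // I2C bus to the Nunchuck
  SPI.begin();                             // SPI bus to the WS2801 LED
  Wire.beginTransmission(NUNCHUCK_ADDR);   // "unencrypted" init sequence
  Wire.write(0xF0); Wire.write(0x55);
  Wire.endTransmission();
  Wire.beginTransmission(NUNCHUCK_ADDR);
  Wire.write(0xFB); Wire.write(0x00);
  Wire.endTransmission();
}

void loop() {
  Wire.requestFrom(NUNCHUCK_ADDR, 6);      // read one 6-byte report
  for (int i = 0; i < 6 && Wire.available(); i++) data[i] = Wire.read();
  Wire.beginTransmission(NUNCHUCK_ADDR);   // request the next report
  Wire.write(0x00);
  Wire.endTransmission();

  byte accX = data[2], accY = data[3], accZ = data[4];
  byte buttons = ~data[5] & 0x03;          // C and Z buttons, active low

  // Direct visual feedback: map pitch (X) to an illustrative colour.
  byte r = constrain(map(accX, 70, 170, 0, 255), 0, 255);
  SPI.transfer(r); SPI.transfer(0); SPI.transfer(255 - r);
  delayMicroseconds(600);                  // WS2801 latch

  // Forward everything to the laptop as a comma-separated line.
  Serial.print(accX); Serial.print(',');
  Serial.print(accY); Serial.print(',');
  Serial.print(accZ); Serial.print(',');
  Serial.println(buttons);
  delay(20);
}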
Aural feedback
The concept behind the current implementation of the aural feedback is that of harmonic backing chords, which shift autonomously. This harmony provides a musical setting, a starting point. Over this, the client has the opportunity to improvise using a solo voice, which is governed by certain rules that enable the client to easily find notes that fit with these chords. When "fit" is used here, it is with the understanding that music is subjective, and that people may have differing opinions on which notes successfully fit with certain chords. The meaning of the word in the context of this paper is that the notes available to the client are harmonically consonant with the backing chords.

The harmonic content of the chords is noncomplex in nature. The four chords are Cmaj9, Dmin9, Emin7 and Fmaj9. These chords use only notes from the C major scale, and are therefore relatively close, harmonically speaking. The major advantage of using these chords for the accompanying element of the aural feedback is that all the notes of the C major pentatonic scale18 fit with them. For this reason, the notes of the C pentatonic scale are used for the solo voice element of the aural feedback. To provide more content to choose from, two octaves are used, giving a total of ten tones for the user to choose from during a solo. There is also another group of tones made available to the user when the instrument is shaken briefly. These notes constitute an A blues scale19. This new state lasts for 30 seconds, providing an option for tonal variance and possible dissonance in the solo, before the pentatonic tone mode is re-engaged.
Pure Data
Aural feedback was implemented using Pure Data, a graphical programming language. Bean.pd is the main hub where the sensor data is received and formatted. Open Sound Control20 (OSC) is used to transmit the sensor data into this patch. Formatting, in this context, can be understood as follows: the accelerometer data and the current state of both buttons are transformed into data usable by the synthesizers and control elements, e.g. accelerometer roll data is received as numbers between 70 and 170 and is then scaled to a number between 0 and 1.
16 http://www.i2c-bus.org/
17 http://arduino.cc/en/Reference/SPI
18 The pentatonic scale is possibly best known from the black keys of a piano. The C pentatonic scale includes the notes C, D, E, G, A and C at the octave.
19 The A blues scale is a common scale used in jazz improvisation. This scale has a dissonant note available in this context. The notes are A, C, D, D#, E, G and A at the octave. D# is an idiomatic dissonant tone.
20 http://opensoundcontrol.org/introduction-osc
This is done for practical reasons; a number between 0 and 1 is easier for the designer to assess in relative terms.
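As a concrete illustration of this step (a simple linear scaling, which is the natural reading of the description above rather than the exact expression used in the patch):

scaled = (raw - 70) / (170 - 70), so raw = 70 gives 0, raw = 120 gives 0.5, and raw = 170 gives 1.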
There are also OSC control messages broadcast from Bean.pd. These messages are composed using the sub-patch OSCreturn.pd, and have the purpose of controlling certain aspects of the visual feedback. The reason for these messages will be elaborated on in a later section of this paper.
The frequencies corresponding to the notes of both the C pentatonic and the A blues scales are held in the sub-patches Cpenta.pd and Ablues.pd. When a frequency is selected, it is sent to the synthesizer sub-patch soloSynth.pd and the corresponding note is played. The sub-patch soloSynth.pd receives the frequency information from either Cpenta.pd or Ablues.pd and translates these frequencies into notes; the user's solo voice is composed of these notes. soloSynth.pd is a monophonic synthesizer. The method of sound creation is a combination of additive synthesis and frequency modulation synthesis. The additive synthesis comprises a fundamental and three partials. These partials are individually adjusted in amplitude to provide an element of timbre change. Frequency modulation is used to add complexity to the aural content of the user's solo. An ADSR21 envelope is also implemented here. This envelope enables amplitude shaping of the output from soloSynth.pd, which leads to a more responsive solo sound.
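To make the voice structure concrete, the following C++ sketch renders one note in the same spirit: a frequency-modulated fundamental summed with three additive partials, shaped by a rough attack/release envelope standing in for the full ADSR. The partial amplitudes, modulation ratio and depth, and envelope times are illustrative values only, not those used in soloSynth.pd.

#include <cmath>
#include <vector>

// Render one solo note: additive fundamental + three partials, FM on the fundamental.
std::vector<float> renderNote(float f0, float seconds, int sampleRate = 44100) {
    const float TWO_PI = 6.2831853f;
    const float partialAmp[4] = {1.0f, 0.5f, 0.3f, 0.2f};  // fundamental and three partials
    const float modRatio = 2.0f;   // modulator frequency = modRatio * f0
    const float modIndex = 1.5f;   // depth of the frequency modulation
    const int n = static_cast<int>(seconds * sampleRate);
    std::vector<float> out(n);

    for (int i = 0; i < n; ++i) {
        float t = static_cast<float>(i) / sampleRate;
        float mod = modIndex * std::sin(TWO_PI * modRatio * f0 * t);
        float sample = 0.0f;
        for (int p = 0; p < 4; ++p)   // sum the additive partials
            sample += partialAmp[p] * std::sin(TWO_PI * (p + 1) * f0 * t + (p == 0 ? mod : 0.0f));
        // Crude envelope: 10 ms attack, 100 ms release.
        float env = std::fmin(t / 0.01f, 1.0f) * std::fmin((seconds - t) / 0.1f, 1.0f);
        out[i] = 0.2f * env * sample;
    }
    return out;
}

A call such as renderNote(261.6f, 1.0f) would produce one second of the middle-C solo tone at the default sample rate.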
Another sub-patch within Bean.pd, namely Backing.pd, is where the accompanying harmonic element of the aural feedback is created. Contained in this patch is a bank of five additive synthesizers, one for each note in the harmony. Each of these synthesizers in turn composes a tone constructed of a fundamental and three partials. These tones combine to give a full yet somehow open sound. The four chords change randomly over time, with equally weighted probability for each. In the current implementation there is also an additional option for the user to intentionally change the accompanying chord.
Mapping
The mapping strategy for Bean is generally one-to-one mapping; however, in practice, some of these mappings combine naturally through gestures. This could be described as an extra mapping layer [10]. In the case of Bean, considering the intended use, this strategy was deemed a good starting point. The selection of note in the solo voice is the most discernible change aurally. This change is mapped to the pitch angle of the instrument (Figure 6). When the instrument is swiveled downwards on the X-axis the pitches fall, and conversely, when the instrument is swiveled upwards on this axis the pitches rise. The scope of measurable movement is divided into ten regions, one for each of the available notes.
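A minimal sketch of this quantization step is shown below, assuming the accelerometer pitch value arrives in the same 70-170 range quoted earlier for the roll axis; the actual boundaries used in Bean may differ.

#include <algorithm>

// Map a raw accelerometer pitch reading to an index 0-9, selecting one of the
// ten solo-voice notes (e.g. an entry in the Cpenta.pd frequency list).
int noteIndexFromPitch(int rawPitch) {
    int clamped = std::min(std::max(rawPitch, 70), 170);
    // Divide the measurable range into ten equal regions, one per note.
    return std::min((clamped - 70) / 10, 9);
}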
21 ADSR stands for Attack, Decay, Sustain and Release.
Figure 6: The accelerometer control data useful for mapping.
Visual Feedback
The LED installed inside the physical element of the instrument provides the primary visual feedback. This feedback is mirrored in a secondary visual feedback: a 3D virtual representation of Bean. The reason for this representation is to aid in reinforcing the causal connection between physical movement and perceptual change in visual and aural feedback. Through iterative development, the functionality of this feedback has been revised and added to. Colour-to-note mapping was implemented to provide a form of visual cueing. To facilitate this virtual representation, an application was created using Processing22, a Java-based software development environment.
Visualbean
The Visualbean application facilitates both visual feedback and the throughput of data from the Teensy to Pure Data. The serial data from the Teensy is received by Visualbean and then transformed into OSC data, which is sent on to Pure Data. The visual representation is a mirroring 3D ellipsoid, e.g. when the instrument is tilted on the X-axis, the image tilts on its X-axis. As mentioned earlier, colour-to-tone mapping is implemented in the physical part of the instrument. This feature is also mirrored in Visualbean. The ellipsoid changes colour to match the internal LED, which represents the selected note (Figure 7).
A possible method for assigning colour to musical tone would have been to study examples of chromesthesia23. These examples, however, tend to be subjective in nature, and colour combinations vary from person to person [13]. It was decided that a more logical and universal approach would be to transpose the equal temperament frequencies of the selected notes from the audible range up into the visible frequency range and take the corresponding colours. This method was used to assign colours to notes in the Bean system24. The resultant colour/note combinations can be seen in Table 1.
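As a worked example of this transposition (a sketch only, assuming repeated octave doubling until the frequency lands in the visible band of roughly 400-790 THz, and standard A4 = 440 Hz tuning; the exact octave counts and colour boundaries used in Bean are not stated here):

A4: 440 Hz x 2^40 ≈ 4.84 x 10^14 Hz ≈ 484 THz, wavelength = c/f ≈ 620 nm, i.e. orange
C4: 261.6 Hz x 2^41 ≈ 5.75 x 10^14 Hz ≈ 575 THz, wavelength ≈ 521 nm, i.e. green

Both results agree with the combinations listed in Table 1.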
The OSC messages received back from Pure Data control the colour of the virtual representation. This message is a number between 0 and 9 corresponding to the currently selected note, which in turn is mapped to the assigned colour.
22 https://www.processing.org/
23 Chromesthesia is the most common form of synesthesia. Hearing sound induces the sensation of colour.
Table 1: The colour to note combinations for the C pentatonic scale.

Note:    C      D     E       G    A
Colour:  Green  Blue  Violet  Red  Orange
Informal Evaluation
An initial evaluation of the Bean system was carried out. This evaluation occurred in two sessions over two days.

25 Cope Foundation is a non-profit organization which supports approximately 2000 people with
Free play
Participant A was initially hesitant in using Bean. His interaction was exploratory, starting with just moving the instrument in space, registering that the representation on screen was mirroring the physical movements. Shortly after, the buttons were pressed, with resulting surprise when the solo voice engaged. Participant B was more direct in use, engaging the play button immediately. This was to be expected, as he could see the first participant's use of the device. His gestures were slow and deliberate at the start, but quickly changed to moving the device more aggressively.
User Impressions
An open discussion followed the free play. Some open questions were posed: What are your first impressions? Did you understand the control functionality? Was it interesting to use? How would you change/improve it?
What are your first impressions?
Both participants were positive about the instrument. It was different, but fun. Whether this fun factor arose because the technology is new, or because making music was facilitated in a new way, was unclear. They were nevertheless both eager to try the interface again.

Did you understand the control functionality?
Both participants understood that movement affected the sound, and that the play button had to be pressed to solo. The change-chord button, however, was a mystery. Participant B triggered the jolt-controlled A blues scale; the participants did not notice the change in scale.

Was it interesting to use?
Both participants were positive about using the device. When asked, in connection with interest, whether they could see themselves using the instrument for a sustained time, they both answered yes. As with the first question, it is unclear whether the opportunity to play music or the opportunity to play with new technology was the deciding factor.

How would you change/improve it?
Both participants agreed that a cover for the surface of the device would be a good idea. Participant A also felt that the device could be used for other purposes, relating to computer control. The member of staff was also of the opinion that the device was very flexible and could be used for other purposes.
Figure 8: Participant B playing Bean during the evaluation session.

26 In Drum and Bass music, for instance, the beat is a very prominent element.
There appeared to be clearly conscious perception27 of action and engaged use of Bean. This paper has outlined a vital initial step in the design and further development of a digital musical instrument, Bean, which is primarily designed for use as a novel tool in the arsenal of the music therapist. Research pertaining to the fields of music therapy practice, DMI/NIME design and human-computer interaction has guided the process. An initial informal evaluation of a functioning prototype by a possible target group and professionals in the field has proved informative for the further development of Bean.
Future Plans
It is clear that much work is still needed on some aspects of the system, but it is safe to say there is a firm foundation to build on. The developments carried out since this evaluation have improved the device structurally, and the hope is that the instrument now has better playability following the introduction of visual cueing. Some aspects of the mapping strategy will also be reviewed, such as the change-chord option. This could possibly be changed to an option allowing extended range, similar to what some small MIDI keyboard controllers offer. The jolt option could also be revised, possibly to apply a dynamic audio effect, such as a phaser, for a predetermined amount of time. To provide more flexibility in sound choice, and a protocol familiar to music therapists, MIDI messaging could be implemented. The proliferation of MIDI device use in music therapy would suggest that it would be preferable to have some MIDI functionality integrated in the system. The Bean.pd patch could be developed further to facilitate flexibility with regard to MIDI communication. There are also plans to replicate the Bean system, in order to enable musically collaborative therapeutic group work. A larger-scale, more formal evaluation would, however, be the next step, in order to gather empirical data on how Bean would perform in a therapeutic setting; the informal evaluation described here was a vital step in preparing the instrument optimally for that test.
Acknowledgements
Many thanks go to the service users and staff members from Cope Foundation for facilitating and participating in this evaluation. Also, thanks to both therapists, Eoin Nash and Ed Kuczaj, who generously offered their professional opinions on Bean. Thanks to Cumhur Erkut for valuable advice and guidance throughout the project. And last but not least, thanks go to my lovely wife, Louise, for bringing her considerable scientific writing knowledge to bear in a time of need.

27 Perception is used here to describe both audio and visual sensing in the use of Bean.
References
[1] K. E. Bruscia, Defining Music Therapy. Barcelona Publishers, 1998, p. 300.
[2] W. L. Magee and K. Burland, "An Exploratory Study of the Use of Electronic Music Technologies in Clinical Music Therapy," Nord. J. Music Ther., vol. 17, no. 2, pp. 124–141, Jul. 2008.
[3] K. Burland and W. Magee, "Developing identities using music technology in therapeutic settings," Psychol. Music, vol. 42, no. 2, pp. 177–189, Nov. 2012.
[4] M. Wanderley and B. Vines, "The musical significance of clarinetists' ancillary gestures: an exploration of the field," J. New Music …, 2005.
[5] A. Hunt, R. Kirk, and M. Neighbour, "Multiple media interfaces for music therapy," IEEE Multimed., 2004.
[6] J. Malloch and M. Wanderley, "The T-Stick: From musical interface to musical instrument," … 7th Int. Conf. New …, 2007.
[7] B. Farrimond, D. Gillard, D. Bott, and D. Lonie, "Engagement with Technology in Special Educational & Disabled Music Settings," Youth Music, 2011.
[8] N. D. Hahna, S. Hadley, V. H. Miller, and M. Bonaventura, "Music technology usage in music therapy: A survey of practice," Arts Psychother., vol. 39, no. 5, pp. 456–464, Nov. 2012.
[9] G. Paine and J. Drummond, "Developing an Ontology of New Interfaces for Realtime Electronic Music Performance," … Electroacoust. Music Stud., 2009.
[10] A. Hunt, M. M. Wanderley, and M. Paradis, "The Importance of Parameter Mapping in Electronic Instrument Design," J. New Music Res., vol. 32, no. 4, pp. 429–440, Dec. 2003.
[11] S. Fels and M. Lyons, "NIME 2011 Tutorial: NIME Primer," University of British Columbia, 2011.
[12] P. Wyeth, "Agency, tangible technology and young children," IDC '07: Proc. 6th Int. Conf. Interact. Des. Child., pp. 101–104, 2007.
[13] G. Rogers, "Four cases of pitch-specific chromesthesia in trained musicians with absolute pitch," Psychol. Music, 1987.
[14] A. Tanaka, "Interaction, experience and the future of music," in Consuming Music Together, K. O'Hara and B. Brown, Eds. Springer, 2006.
[15] J. Solis and K. Ng, "Input Devices and Music Interaction," in Musical Robots and Interactive Multimodal Systems, Springer, 2011.