AFPLectureNotes
Reactive Programming
Yale University
Department of Computer Science
[email protected], [email protected],
[email protected], [email protected]
1 Introduction
Can functional languages be used in the real world, and in particular for real-
time systems? More specifically, can the expressiveness of functional languages
be used advantageously in such systems, and can performance issues be overcome
at least for the most common applications?
For the past several years we have been trying to answer these questions in
the affirmative. We have developed a general paradigm called functional reactive
programming that is well suited to programming hybrid systems, i.e. systems
with both continuous and discrete components. An excellent example of a hybrid
system is a mobile robot. From a physical perspective, mobile robots have
continuous components such as voltage-controlled motors, batteries, and range
finders, as well as discrete components such as microprocessors, bumper switches,
and digital communication. More importantly, from a logical perspective, mobile
robots have continuous notions such as wheel speed, orientation, and distance
from a wall, as well as discrete notions such as running into another object,
receiving a message, or achieving a goal.
⋆ This research was supported in part by grants from the National Science Foundation
(CCR-9900957 and CCR-9706747), the Defense Advanced Research Projects
Agency (F33615-99-C-3013 and DABT63-00-1-0002), and the National Aeronautics
and Space Administration (NCC 2-1229). The second author was also supported by
an NSF Graduate Research Fellowship.
Functional reactive programming was first manifested in Fran, a domain
specific language (DSL) for graphics and animation developed by Conal Elliott
at Microsoft Research [5, 4]. FRP [13, 8, 16] is a DSL developed at Yale that is
the “essence” of Fran in that it exposes the key concepts without bias toward
application specifics. FAL [6], Frob [11, 12], Fvision [14], and Fruit [2] are four
other DSLs that we have developed, each embracing the paradigm in ways suited
to a particular application domain. In addition, we have pushed FRP toward
real-time embedded systems through several variants including Real-Time FRP
and Event-Driven FRP [18, 17, 15].
The core ideas of functional reactive programming have evolved (often in
subtle ways) through these many language designs, culminating in what we now
call Yampa, which is the main topic of this paper.1 Yampa is a DSL embedded
in Haskell and is a refinement of FRP. Its most distinguishing feature is that the
core FRP concepts are represented using arrows [7], a generalization of monads.
The programming discipline induced by arrows prevents certain kinds of time-
and space-leaks that are common in generic FRP programs, thus making Yampa
more suitable for systems having real-time constraints.
Yampa has been used to program real industrial-strength mobile robots [10,
8],2 building on earlier experience with FRP and Frob [11, 12]. In this paper,
however, we will use a robot simulator. In this way, the reader will be able to run
all of our programs, as well as new ones that she might write, without having to
buy a $10,000 robot. All of the code in this paper, and the simulator itself, are
available via the Yampa home page at www.haskell.org/yampa.
The simulated robot, which we refer to as a simbot, is a differential drive
robot, meaning that it has two wheels, much like a cart, each driven by an
independent motor. The relative velocity of these two wheels thus governs the
turning rate of the simbot; if the velocities are identical, the simbot will go
straight. The physical simulation of the simbot includes translational inertia,
but (for simplicity) not rotational inertia.
The motors are what make the simbot go; but it also has several kinds of
sensors. First, it has a bumper switch to detect when the simbot gets “stuck.”
1 Yampa is a river in Colorado whose long placid sections are occasionally interrupted
by turbulent rapids, and is thus a good metaphor for the continuous and discrete
components of hybrid systems. But if you prefer acronyms, Yampa was started at
YAle, ended in Arrows, and had Much Programming in between.
2 In these two earlier papers we referred to Yampa as AFRP.
That is, if the simbot runs into something, it will just stop and signal the pro-
gram. Second, it has a range finder that can determine the nearest object in any
given direction. In our examples we will assume that the simbot has independent
range finders that only look forward, backward, left, and right, and thus we will
only query the range finder at these four angles. Third, the simbot has what we
call an “animate object tracker” that gives the location of all other simbots, as
well as possibly some free-moving balls, that are within a certain distance from
the simbot. You can think of this tracker as modelling either a visual subsystem
that can see these objects, or a communication subsystem through which the
simbots and balls share each others’ coordinates. Each simbot also has a unique
ID and a few other capabilities that we will introduce as we need them.
2 Yampa Basics
A signal is a time-varying value: a value of type Signal a is a function mapping
suitable values of time (Double is used in our implementation) to a value of
type a. Conceptually, then, a signal s’s value at some time t is just s(t).
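Conceptually, the Signal type can be sketched in ordinary Haskell as follows (a model only; as we will see shortly, Yampa's actual implementation does not expose signals directly):

```haskell
type Time = Double

-- Conceptually, a signal is a function from time to values.
type Signal a = Time -> a

-- Example: a sinusoidal signal.
sinSig :: Signal Double
sinSig t = sin t

-- Sampling the signal at time t is just function application: sinSig 0 is 0.0.
```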
For example, the velocity of a differential drive robot is a pair of numbers
representing the speeds of the left and right wheels. If the speeds are in turn
represented as type Speed, then the robot’s velocity can be represented as type
Signal (Speed,Speed). A program controlling the robot must therefore provide
such a value as output.
Being able to define and manipulate continuous values in a programming
language provides great expressive power. For example, the equations governing
the motion of a differential drive robot [3] are:
    x(t) = (1/2) ∫₀ᵗ (vr(t) + vl(t)) cos(θ(t)) dt
    y(t) = (1/2) ∫₀ᵗ (vr(t) + vl(t)) sin(θ(t)) dt
    θ(t) = (1/l) ∫₀ᵗ (vr(t) − vl(t)) dt
where x(t), y(t), and θ(t) are the robot’s x and y coordinates and orientation,
respectively; vr (t) and vl (t) are the right and left wheel speeds, respectively; and
l is the distance between the two wheels. In FRP these equations can be written
directly as signal-level definitions, using an integral combinator.
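The FRP code itself is not shown here, but the equations are easy to check with a discrete-time sketch in plain Haskell, approximating each signal by a list of samples taken every dt seconds and integral by Euler integration (a model for intuition, not the actual FRP implementation):

```haskell
-- Discrete-time model: a signal is a list of samples, dt seconds apart.
dt :: Double
dt = 0.01

-- Euler integration of a sampled signal, starting from 0.
integralS :: [Double] -> [Double]
integralS = scanl (\acc v -> acc + v * dt) 0

-- Pose of a differential drive robot with wheel base l, given sampled
-- wheel-speed signals vr and vl, following the three equations above.
pose :: Double -> [Double] -> [Double]
     -> ([Double], [Double], [Double])
pose l vr vl = (x, y, theta)
  where
    theta = integralS (zipWith (\r s -> (r - s) / l) vr vl)
    v     = zipWith (\r s -> (r + s) / 2) vr vl
    x     = integralS (zipWith (\v' th -> v' * cos th) v theta)
    y     = integralS (zipWith (\v' th -> v' * sin th) v theta)
```

With both wheels at 1 m/s the orientation stays at 0 and x grows at 1 m/s, as the equations predict.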
Although quite general, the concept of a signal can lead to programs that have
conspicuous time- and space-leaks,4 for reasons that are beyond the scope of
this paper. Earlier versions of Fran, FAL, and FRP used various methods to
make this performance problem less of an issue, but ultimately they all either
suffered from the problem in one way or another, or introduced other problems
as a result of fixing it.
In Yampa, the problem is solved in a more radical way: signals are simply not
allowed as first-class values! Instead, the programmer has access only to signal
transformers, or what we prefer to call signal functions. A signal function is just
a function that maps signals to signals:
For example, suppose f1 :: A -> B and f2 :: B -> C. Instead of defining:
g :: A -> C
g x = f2 (f1 x)
we can write in a point-free style using the familiar function composition
operator:
g = f2 . f1
This code is “point-free” in that the values (points) passed to and returned from
a function are never directly manipulated.
To do this at the level of signal functions, all we need are a primitive operator
arr :: (a -> b) -> SF a b to “lift” ordinary functions to the level of signal
functions, and a composition operator (>>>) :: SF a b -> SF b c -> SF a c:
g’ :: SF A C
g’ = arr g
   = arr f1 >>> arr f2
Note that (>>>) actually represents reverse function composition, and thus its
arguments are reversed in comparison to (.).
Unfortunately, most programs are not simply linear compositions of func-
tions, and it is often the case that more than one input and/or output is needed.
For example, suppose that f1 :: A -> B, f2 :: A -> C and we wish to define
the following in point-free style:
h :: A -> (B,C)
h x = (f1 x, f2 x)
Using a pairing combinator (&), defined by (f & g) x = (f x, g x) [1], this can
be written point-free as:
h = f1 & f2
In Yampa there is a combinator (&&&) :: SF a b -> SF a c -> SF a (b,c)
that is analogous to &, thus allowing us to write:
h’ :: SF A (B,C)
h’ = arr h
= arr f1 &&& arr f2
As another example, suppose that f1 :: A -> B and f2 :: C -> D. One
can easily write a point-free version of:
i :: (A,C) -> (B,D)
i (x,y) = (f1 x, f2 y)
by using (&) defined above and Haskell’s standard fst and snd operators:
i = (f1 . fst) & (f2 . snd)
For signal functions, all we need are analogous versions of fst and snd, which
we can achieve via lifting:
i’ :: SF (A,C) (B,D)
i’ = arr i
= arr (f1 . fst) &&& arr (f2 . snd)
= (arr fst >>> arr f1) &&& (arr snd >>> arr f2)
The “argument wiring” pattern captured by i’ is in fact a common one, and
thus Yampa provides the following combinator:
(***) :: SF b c -> SF b’ c’ -> SF (b,b’) (c,c’)
f *** g = (arr fst >>> f) &&& (arr snd >>> g)
so that i’ can be written simply as:
i’ = arr f1 *** arr f2
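Since Yampa's combinators come from Hughes's general Arrow interface [7], and ordinary functions are themselves an Arrow instance, the wiring equations above can be tried out with plain functions using Control.Arrow from the standard library (a sketch; Int stands in for the abstract types A, B, C, and D):

```haskell
import Control.Arrow (arr, (&&&), (***))

f1 :: Int -> Int
f1 = (+ 1)

f2 :: Int -> Int
f2 = (* 2)

-- h' x = (f1 x, f2 x): both components applied to the same input.
h' :: Int -> (Int, Int)
h' = arr f1 &&& arr f2

-- i' (x, y) = (f1 x, f2 y): each component applied to its own input.
i' :: (Int, Int) -> (Int, Int)
i' = arr f1 *** arr f2
```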
g’, h’, and i’ were derived from g, h, and i, respectively, by appealing to
one’s intuition about functions and their composition. In the next section we
will formalize this using type classes.
Exercise 1. Define (a) first in terms of just arr, (>>>), and (&&&), (b) (***)
in terms of just first, second, and (>>>), and (c) (&&&) in terms of just arr,
(>>>), and (***).
Just as arr lifts unary functions, with a combinator
arr2 :: (a -> b -> c) -> SF (a,b) c
we can also lift binary operators. For example, arr2 (+) has type
Num a => SF (a,a) a.
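A plausible definition of arr2 (an assumption; the library version may differ in detail) is just arr composed with uncurry:

```haskell
import Control.Arrow (Arrow, arr)

-- Lift a binary function into any arrow (for Yampa, the arrow is SF).
arr2 :: Arrow a => (b -> c -> d) -> a (b, c) d
arr2 f = arr (uncurry f)

-- With the function instance of Arrow, arr2 (+) sums a pair.
addPair :: (Int, Int) -> Int
addPair = arr2 (+)
```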
To see all of this in action, consider the FRP code presented earlier for the
coordinates and orientation of the mobile robot. We will rewrite the code for the
x-coordinate in Yampa (leaving the y-coordinate and orientation as an exercise).
Suppose there are signal functions vrSF, vlSF :: SF SimbotInput Speed
and thetaSF :: SF SimbotInput Angle. The type SimbotInput represents the
input state of the simbot, which we will have much more to say about later. With
these signal functions in hand, the previous FRP code for x can be transcribed
into a corresponding signal function xSF.
Exercise 2. Define signal functions ySF and thetaSF in Yampa that correspond
to the definitions of y and theta, respectively, in FRP.
Although we have achieved the goal of preventing direct access to signals, one
might argue that we have lost the clarity of the original FRP code: the code
for xSF is certainly more difficult to understand than that for x. Most of the
complexity is due to the need to wire signal functions together using the various
pairing/unpairing combinators such as (&&&) and (***). Precisely to address
this problem, Paterson [9] has suggested the use of special syntax to make arrow
programming more readable, and has written a preprocessor that converts the
syntactic sugar into conventional Haskell code. Using this special arrow syntax,
the above Yampa code for xSF can be rewritten as:
Although not quite as readable as the original FRP definition of x, this code is
far better than the unsugared version. There are several things to note about
the structure of this code:
1. The syntax proc pat -> ... is analogous to a Haskell lambda expression
of the form \ pat -> ... , except that it defines a signal function rather
than a normal Haskell function.
2. In the syntax pat <- SFexpr -< expr, the expression SFexpr must be a
signal function, say of type SF T1 T2, in which case expr must have type
T1 and pat must have type T2. This is analogous to pat = expr 1 expr 2 in a
Haskell let or where clause, in which case if expr 1 has type T1 -> T2, then
expr 2 must have type T1 and pat must have type T2.
3. The overall syntax:
It is important to note that the arrow syntax allows one to get a handle
on a signal’s values (or samples), but not on the signals themselves. In other
words, first recalling that a signal function SF a b can be thought of as a type
Signal a -> Signal b, which in turn can be thought of as type
(Time -> a) -> (Time -> b), the syntax allows getting a handle on values of
type a and b, but not on values of type Time -> a or Time -> b.
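The arrow notation is not specific to Yampa: GHC's Arrows extension desugars proc for any Arrow instance, including ordinary functions, so the shape of such code can be experimented with directly (a sketch with hypothetical component functions):

```haskell
{-# LANGUAGE Arrows #-}
import Control.Arrow (arr, returnA)

-- proc defines an arrow; with the function instance of Arrow,
-- pairUp x = (x + 1, x * 2).
pairUp :: Int -> (Int, Int)
pairUp = proc x -> do
  a <- arr (+ 1) -< x
  b <- arr (* 2) -< x
  returnA -< (a, b)
```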
Figure 2(a) is a signal flow diagram that precisely represents the wiring im-
plied by the sugared definition of xSF’. (It also reflects well the data dependencies
in the original FRP program for x.) Figure 2(b) shows the same diagram, except
that it has been overlaid with the combinator applications implied by the unsug-
ared definition of xSF (for clarity, the lifting via arr of the primitive functions
– i.e. those drawn with circles – is omitted). These diagrams demonstrate nicely
the relationship between the sugared and unsugared forms of Yampa programs.
Exercise 3. Rewrite the definitions of ySF and thetaSF from the previous exer-
cise using the arrow syntax. Also draw their signal flow diagrams.
Given signal functions sfx, sfy :: SF a b and flag :: SF a Bool, the signal
function:
sf :: SF a b
sf = proc i -> do
x <- sfx -< i
y <- sfy -< i
b <- flag -< i
returnA -< if b then x else y
behaves like sfx whenever flag yields a true value, and like sfy whenever it
yields false.
However, this is not completely satisfactory, because there are many situ-
ations where one would prefer that a signal function switch into, or literally
become, some other signal function, rather than continually alternate between
two signal functions based on the value of a boolean. Indeed, there is often a
succession of new signal functions to switch into as a succession of particular
events occurs, much like state changes in a finite state automaton. Furthermore,
we would like for these newly invoked signal functions to start afresh from time
zero, rather than being signal functions that have been “running” since the
beginning of the program.
Fig. 2. The signal flow diagram for xSF, with vrSF, vlSF, and thetaSF feeding +,
(/2), cos, *, and integral: (a) sugared; (b) unsugared, overlaid with the >>> and
&&& combinator applications.
Exercise 4. Rather than set the wheel speed to zero when the robot gets stuck,
negate it instead. Then define xspd recursively so that the velocity gets negated
every time the robot gets stuck.
6 Certain input events such as key presses are in fact properly buffered in our
implementation such that none will be lost.
Switching semantics. There are several kinds of switching combinators in
Yampa, four of which we will use in this paper. These four switchers arise out
of two choices in the semantics:
1. Whether the switch happens exactly at the time of the event or infinitesimally
just after. In the latter case, a “d” (for “delayed”) is prefixed to the name
switch.
2. Whether the switch happens just for the first event in an event stream or
for every event. In the latter case, an “r” (for “recurring”) is prefixed to
the name switch.
This leads to the four switchers, whose names and types are:
switch, dSwitch :: SF a (b, Event c) -> (c -> SF a b) -> SF a b
rSwitch, drSwitch :: SF a b -> SF (a, Event (SF a b)) b
An example of the use of switch was given above. Delayed switching is useful
for certain kinds of recursive signal functions. In Sec. 2.7 we will see an example
of the use of drSwitch.
As mentioned earlier, an important property of switching is that time begins
afresh within each signal function being switched into. For example, consider the
expression:
let sinSF = time >>> arr sin
in (sinSF &&& rsStuck) `switch` const sinSF
sinSF to the left of the switch generates a sinusoidal signal. If the first event
generated by rsStuck happens at time t, then the sinSF on the right will begin
at time 0, regardless of what the time t is; i.e. the sinusoidal signal will start
over at the time of the event.
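The “time begins afresh” semantics can be illustrated with a toy, list-based model of signal functions and events (an illustration only, not Yampa's actual implementation): a signal function maps a list of input samples to output samples, an event occurrence is a Just, and at the first occurrence switch feeds the remaining input to the new signal function from its beginning.

```haskell
-- Toy model: signal functions as list transformers, events as Maybe.
type SF a b = [a] -> [b]
type Event c = Maybe c

-- Run sf until its first event occurrence, then switch into (k c),
-- which sees the remaining input starting "afresh".
switch :: SF a (b, Event c) -> (c -> SF a b) -> SF a b
switch sf k as = go (sf as) 0
  where
    go ((b, Nothing) : rest) n = b : go rest (n + 1)
    go ((_, Just c) : _)     n = k c (drop n as)
    go []                    _ = []

-- Example: echo the input, firing an event when a 0 arrives; after the
-- event, scale the rest of the input by 10.
echoUntilZero :: SF Int (Int, Event ())
echoUntilZero = map (\x -> (x, if x == 0 then Just () else Nothing))

demo :: [Int]
demo = switch echoUntilZero (const (map (* 10))) [1, 2, 0, 3, 4]
-- demo == [1, 2, 0, 30, 40]
```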
The simulator knows about two versions of the simbot, and each of these
properties differs slightly between the two. The RobotType field is just a string,
which for the simbots will be either "SimbotA" or "SimbotB". The remaining
fields are self-explanatory.
To actually run the simulator, we use the function:
runSim :: Maybe WorldTemplate ->
SimbotController -> SimbotController -> IO ()
where a WorldTemplate is a data type that describes the initial state of the sim-
ulator world. It is a list of simbots, walls, balls, and blocks, along with locations
of the centers of each:
data ObjectTemplate =
OTBlock { otPos :: Position2 } -- Square obstacle
| OTVWall { otPos :: Position2 } -- Vertical wall
| OTHWall { otPos :: Position2 } -- Horizontal wall
| OTBall { otPos :: Position2 } -- Ball
| OTSimbotA { otRId :: RobotId, -- Simbot A robot
otPos :: Position2,
otHdng :: Heading }
| OTSimbotB { otRId :: RobotId, -- Simbot B robot
otPos :: Position2,
otHdng :: Heading }
The constants worldXMin, worldYMin, worldXMax, and worldYMax are the bounds
of the simulated world, and are assumed to be in meters. Currently these values
are -5, -5, 5, and 5, respectively (i.e. the world is 10 meters by 10 meters, with
the center coordinate being (0, 0)). The walls are currently fixed in size at 1.0 m
by 0.1 m, and the blocks are 0.5 m by 0.5 m. The diameter of a simbot is 0.5 m.
Your overall program should be structured as follows:
import AFrob
import AFrobRobotSim
main :: IO ()
main = runSim (Just world) rcA rcB
world :: WorldTemplate
world = ...
rcA :: SimbotController
rcA rProps =
case rpId rProps of
1 -> rcA1 rProps
2 -> rcA2 rProps
3 -> rcA3 rProps
Stop, go, and turn. For starters, let’s define the world’s dumbest controller –
one for a stationary simbot:
rcStop :: SimbotController
rcStop _ = constant (mrFinalize ddBrake)
We can do one better than this, however, by first determining the maximal
allowable wheel speeds and then running the simbot at, say, one-half that speed:
rcBlind2 rps =
let max = rpWSMax rps
in constant (mrFinalize $ ddVelDiff (max/2) (max/2))
We can also control the simbot through ddVelTR, which allows specifying
the simbot’s forward and rotational velocities, rather than the individual wheel
speeds. For a differential drive robot, the maximal rotational velocity depends on
the vehicle’s forward velocity; it can rotate most quickly when it is standing still,
and cannot rotate at all if it is going at its maximal forward velocity (because to
turn while going at its maximal velocity, one of the wheels would have to slow
down, in which case it would no longer be going at its maximal velocity). If the
maximal wheel velocity is vmax , and the forward velocity is vf , then it is easy
to show that the maximal rotational velocity in radians per second is given by:
    ωmax = 2(vmax − vf) / l
For example, this simbot turns as fast as possible while going at a given speed:
rcTurn :: Velocity -> SimbotController
rcTurn vel rps =
let vMax = rpWSMax rps
rMax = 2 * (vMax - vel) / rpDiameter rps
in constant (mrFinalize $ ddVelTR vel rMax)
Exercise 7. Link rcBlind2, rcTurn, and rcStop together in the following way:
Perform rcBlind2 for 2 seconds, then rcTurn for three seconds, and then do
rcStop. (Hint: use after to generate an event after a given time interval.)
The simbot talks (sort of ). For something more interesting, let’s define
a simbot that, whenever it gets stuck, reverses its direction and displays the
message "Ouch!!" on the console:
rcReverse :: Velocity -> SimbotController
rcReverse v rps = beh `dSwitch` const (rcReverse (-v) rps)
  where beh = proc sbi -> do
          stuckE <- rsStuck -< sbi
          let mr = ddVelDiff v v `mrMerge`
                   tcoPrintMessage (tag stuckE "Ouch!!")
          returnA -< (mrFinalize mr, stuckE)
Note the use of a let binding within a proc: this is analogous to a let binding
within Haskell’s monadic do syntax. Note also that rcReverse is recursive –
this is how the velocity is reversed every time the simbot gets stuck – and
therefore requires the use of dSwitch to ensure that the recursion is well founded.
(It does not require the rec keyword, however, because the recursion occurs
outside of the proc expression.) The other reason for the dSwitch is rather sub-
tle: tcoPrintMessage uses stuckE to control when the message is printed, but
stuckE also controls the switch; thus if the switch happened instantaneously,
the message would be missed!
If preferred, it is not hard to write rcReverse without the arrow syntax:
rcReverse' v rps =
  (rsStuck >>> arr fun) `dSwitch` const (rcReverse' (-v) rps)
  where fun stuckE =
          let mr = ddVelDiff v v `mrMerge`
                   tcoPrintMessage (tag stuckE "Ouch!!")
          in (mrFinalize mr, stuckE)
Finding our way using odometry. Note from Fig. 3 that our simbots have
odometry; that is, the ability of a robot to track its own location. This capability
on a real robot can be approximated by so-called “dead reckoning,” in which the
robot monitors its actual wheel velocities and keeps track of its position incre-
mentally. Unfortunately, this is not particularly accurate, because of the errors
that arise from wheel slippage, uneven terrain, and so on. A better technique is
to use GPS (global positioning system), which uses satellite signals to determine
a vehicle’s position to within a few feet of accuracy. In our simulator we will
assume that the simbot’s odometry is perfect.
We can use odometry readings as feedback into a controller to stabilize and
increase the accuracy of some desired action. For example, suppose we wish to
move the simbot at a fixed speed in a certain direction. We can set the speed
easily enough as shown in the examples above, but we cannot directly specify
the direction. However, we can read the direction using the odometry function
odometryHeading :: SimbotInput -> Heading and use this to control the ro-
tational velocity.
(A note about robot headings. In AFrob there are three data types that relate
to headings:
The parameter k is called the gain of the controller, and can be adjusted to give
a faster response, at the risk of being too fast and thus being unstable. lim m y
limits the maximum absolute value of y to m.
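lim is presumably just a clamp (an assumed definition; the actual AFrob version may differ):

```haskell
-- lim m y limits y to the interval [-m, m].
lim :: (Num a, Ord a) => a -> a -> a
lim m y = max (-m) (min m y)
```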
Before the next example we will first rewrite the above program in the fol-
lowing way:
rcHeading’ :: Velocity -> Heading -> SimbotController
rcHeading’ vel hd rps =
proc sbi -> do
rcHeadingAux rps -< (sbi, vel, hd)
Note the use of vector arithmetic to compute the difference between the de-
sired position pd and actual position odometryPosition sbi, and the use of
vector2RhoTheta to convert the error vector into distance d and heading h.
vel is the speed at which we will approach the target. Finally, note the use
of rcHeadingAux defined above to move the simbot at the desired velocity and
heading.
Exercise 9. rcMoveTo will behave a little bit funny once the simbot reaches its
destination, because a differential drive robot is not able to maneuver well at
slow velocities (compare the difficulty of parallel parking a car to the ease of
switching lanes at high speed). Modify rcMoveTo so that once it gets reasonably
close to its target, it stops (using rcStop).
Exercise 11. Define a controller that takes a list of points and causes the robot
to move to each point successively in turn.
Exercise 12. (a) Define a controller that chases a ball. (Hint: use the aotBalls
method in class HasAnimateObjectTracker to find the location of the ball.) (b)
Once the ball is hit, the simulator will stop the robot and create an rsStuck
event. Therefore, modify your controller so that it restarts the robot whenever
it gets stuck, or perhaps backs up first and then restarts.
Home on the range. Recall that our simbots have range finders that are able
to determine the distance of the nearest object in a given direction. We will
assume that there are four of these, one looking forward, one backward, one to
the left, and one to the right:
rfFront :: HasRangeFinder i => i -> Distance
rfBack :: HasRangeFinder i => i -> Distance
rfLeft :: HasRangeFinder i => i -> Distance
rfRight :: HasRangeFinder i => i -> Distance
These are intended to simulate four sonar sensors, except that they are far more
accurate than a conventional sonar, which has a rather broad signal. They are
more similar to the capability of a laser-based range finder.
With a range finder we can do some degree of autonomous navigation in
“unknown terrain.” That is, navigation in an area where we do not have a
precise map. In such situations a certain degree of the navigation must be done
based on local features that the robot “sees,” such as walls, doors, and other
objects.
For example, let’s define a controller that causes our simbot to follow a wall
that is on its left. The idea is to move forward at a constant velocity v, and as
the desired distance d from the wall varies from the left range finder reading r,
adjustments are made to the rotational velocity ω to keep the simbot in line.
This task is not quite as simple as the previous ones, and for reasons that are
beyond the scope of this paper, it is desirable to use what is known as a PD
(for “proportional-derivative”) controller, which means that the error signal is
fed back proportionally and also as its derivative. More precisely, one can show
that, for small deviations from the norm:
    ω = Kp (r − d) + Kd (dr/dt)
Kp and Kd are the proportional gain and derivative gain, respectively. Generally
speaking, the higher the gain, the better the response will be, but care must be
taken to avoid responding too quickly, which may cause over-shooting the mark,
or worse, unstable behavior that is oscillatory or that diverges. It can be shown
that the optimal relationship between Kp and Kd is given by:
    Kp = v Kd² / 4
In the code below, we will set Kd to 5. For pragmatic reasons we will also put a
limit on the absolute value of ω using the limiting function lim.
Assuming all of this mathematics is correct, writing the controller is fairly
straightforward:
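The controller code itself is omitted here, but the PD law is easy to test in isolation with a pure step function (a sketch; the clamping mirrors the lim function mentioned above, and the gain and limit values used below are hypothetical):

```haskell
-- One step of the PD law: rotational velocity from the wall-distance
-- error (r - d) and the estimated derivative drdt, clamped to [-wMax, wMax].
pdOmega :: Double  -- kp, proportional gain
        -> Double  -- kd, derivative gain
        -> Double  -- wMax, limit on |omega|
        -> Double  -- d, desired distance from the wall
        -> Double  -- r, current range-finder reading
        -> Double  -- drdt, derivative of the reading
        -> Double
pdOmega kp kd wMax d r drdt =
  max (-wMax) (min wMax (kp * (r - d) + kd * drdt))
```

At the setpoint (r equals d, no drift) the commanded ω is 0; a large error saturates at wMax.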
1. If the simbot sees a wall directly in front of itself, it should slow down as it
approaches the wall, stopping at distance d from the wall. Then it should
turn right and continue following the wall which should now be on its left.
(This is an inside-corner right turn.)
2. If the simbot loses track of the wall on its left, it continues straight ahead for
a distance d, turns left, goes straight for distance d again, and then follows
the wall which should again be on its left. (This is an outside-corner left
turn.)
The first of these operations permits us to determine the angle and distance of
each of the other simbots. By converting these measurements to vectors, we can
add them and take their average, then use rcHeading to steer the robot toward
the resulting centroid.
Other than dealing with numeric conversions, the final code is fairly straight-
forward:
rcAlign :: Velocity -> SimbotController
rcAlign v rps = proc sbi -> do
let neighbors = aotOtherRobots sbi
vs = map (\(_,_,a,d) -> vector2Polar d a) neighbors
avg = if vs==[] then zeroVector
else foldl1 (^+^) vs ^/ intToFloat (length vs)
heading = vector2Theta avg + odometryHeading sbi
rcHeadingAux rps -< (sbi, v, heading)
intToFloat = fromInteger . toInteger
When observing the world through robot sensors, one should not make too
many assumptions about what one is going to see, because noise, varying light
conditions, occlusion, etc. can destroy those expectations. For example, in the
case of the simbots, the simulator does not guarantee that all other robots will
be visible through the animate object tracker. Indeed, at the very first time-step,
none are visible. For reasons of causality, sensor data is delayed one time-step;
but at the very first time step, there is no previous data to report, and thus the
animate object tracker returns an empty list of other robots. This is why in the
code above the list vs is tested for being empty.
Exercise 15. Write a program for two simbots that are traveling in a straight
path, except that their paths continually interleave each other, as in a braid of
rope. (Hint: treat the velocities as vectors, and determine the proper equations
for two simbots to circle one another while maintaining a specified distance.
Then add these velocities to the simbots’ forward velocities to yield the desired
behavior.)
Exercise 16. Write a program to play “robocup soccer,” as follows. Using wall
segments, create two goals at either end of the field. Decide on a number of
players on each team, and write controllers for each of them. You may wish to
write a couple of generic controllers, such as one for a goalkeeper, one for attack,
and one for defense. Create an initial world where the ball is at the center mark,
and each of the players is positioned strategically while being on-side (with the
defensive players also outside of the center circle). Each team may use the same
controller, or different ones. Indeed, you can pit your controller-writing skills
against those of your friends (but we do not recommend betting money on the
game’s outcome).
4 Acknowledgements
We wish to thank all the members of the Yale Haskell Group for their support and
feedback on many of the ideas in this paper. In particular, thanks to Zhanyong
Wan, who undoubtedly would have been deeply involved in this work if he had
not been so busy writing his thesis. Also thanks to Greg Hager and Izzet Pembeci
at Johns Hopkins, who believed enough in our ideas to try them out on real
robots. Finally, thanks to Conal Elliott, who started us on our path toward
continuous nirvana, despite discrete moments of trauma.
References
1. Richard S. Bird. A calculus of functions for program derivation. In David A.
Turner, editor, Research Topics in Functional Programming. Addison-Wesley, 1990.
2. Antony Courtney and Conal Elliott. Genuinely functional user interfaces. In Proc.
of the 2001 Haskell Workshop, September 2001.
3. Gregory Dudek and Michael Jenkin. Computational Principles of Mobile Robots.
Cambridge University Press, New York, 2000.
4. Conal Elliott. Functional implementations of continuous modeled animation. In
Proceedings of PLILP/ALP ’98. Springer-Verlag, 1998.
5. Conal Elliott and Paul Hudak. Functional reactive animation. In International
Conference on Functional Programming, pages 263–273, June 1997.
6. Paul Hudak. The Haskell School of Expression – Learning Functional Programming
through Multimedia. Cambridge University Press, New York, 2000.
7. John Hughes. Generalising monads to arrows. Science of Computer Programming,
37:67–111, May 2000.
8. Henrik Nilsson, Antony Courtney, and John Peterson. Functional Reactive Pro-
gramming, continued. In ACM SIGPLAN 2002 Haskell Workshop, October 2002.
9. Ross Paterson. A new notation for arrows. In ICFP’01: International Conference
on Functional Programming, pages 229–240, Firenze, Italy, 2001.
10. Izzet Pembeci, Henrik Nilsson, and Gregory Hager. Functional reactive robotics:
An exercise in principled integration of domain-specific languages. In Principles
and Practice of Declarative Programming (PPDP’02), October 2002.
11. John Peterson, Gregory Hager, and Paul Hudak. A language for declarative robotic
programming. In International Conference on Robotics and Automation, 1999.
12. John Peterson, Paul Hudak, and Conal Elliott. Lambda in motion: Controlling
robots with Haskell. In First International Workshop on Practical Aspects of
Declarative Languages. SIGPLAN, Jan 1999.
13. John Peterson, Zhanyong Wan, Paul Hudak, and Henrik Nilsson. Yale FRP User’s
Manual. Department of Computer Science, Yale University, January 2001. Avail-
able at http://www.haskell.org/frp/manual.html.
14. Alastair Reid, John Peterson, Greg Hager, and Paul Hudak. Prototyping real-
time vision systems: An experiment in DSL design. In Proc. Int’l Conference on
Software Engineering, May 1999.
15. Zhanyong Wan. Functional Reactive Programming for Real-Time Embedded Sys-
tems. PhD thesis, Department of Computer Science, Yale University, December
2002.
16. Zhanyong Wan and Paul Hudak. Functional reactive programming from first prin-
ciples. In Proceedings of the ACM SIGPLAN ’00 Conference on Programming
Language Design and Implementation (PLDI), pages 242–252, Vancouver, BC,
Canada, June 2000. ACM, ACM Press.
17. Zhanyong Wan, Walid Taha, and Paul Hudak. Real-time FRP. In Proceedings
of Sixth ACM SIGPLAN International Conference on Functional Programming,
Florence, Italy, September 2001. ACM.
18. Zhanyong Wan, Walid Taha, and Paul Hudak. Event-driven FRP. In Proceedings
of Fourth International Symposium on Practical Aspects of Declarative Languages.
ACM, Jan 2002.