
Semi-Autonomous Car Control Using Brain Computer Interfaces
Daniel Göhring, David Latotzky, Miao Wang, Raúl Rojas
Artificial Intelligence Group
Institut für Informatik
Freie Universität Berlin, Germany

Abstract—In this paper we present an approach to control a real car with brain signals. To achieve this, we use a brain computer interface (BCI) which is connected to our autonomous car. The car is equipped with a variety of sensors and can be controlled by a computer. We implemented two scenarios to test the usability of the BCI for controlling our car. In the first scenario our car is completely brain controlled, using four different brain patterns for steering and throttle/brake. We describe the control interface which is necessary for smooth, brain-controlled driving. In a second scenario, decisions for path selection at intersections and forkings are made using the BCI. Between these points, the remaining autonomous functions (e.g. path following and obstacle avoidance) are still active. We evaluated our approach in a variety of experiments on a closed airfield and present results on accuracy, reaction times and usability.

I. INTRODUCTION

Autonomous cars play an important role in current robotics and A.I. research. The development of driverless cars started in the late '70s and '80s. Ernst Dickmanns' Mercedes Benz achieved a travel velocity of 100 km/h on restricted highways without traffic [3]. In the DARPA Grand Challenge 2005, autonomous cars drove off-road on desert terrain, several of them reaching the finish line [9]. DARPA's Urban Challenge of 2007 demonstrated that intelligent cars are able to handle urban scenarios and situations with simulated traffic [10]. Lately, autonomous cars have been driving through real-world traffic for testing purposes in urban and rural areas alike [8].

This research led to the introduction of various driver assistance systems for street cars. One key aspect of driver assistance systems is how the interface between human and machine affects usability. This interface question is even more important for people without full bodily control. Brain-computer interfaces can be a solution here. Recently, BCI systems have become relatively affordable and allow people to interact directly with their environment [5]. Another big field lies in human interaction within computer games, e.g. in the research game "Brain Basher" [1] or in [6]. As a sub-field of BCI research, BCIs using motor-imagination brain patterns have become popular, where the user has to think of a motion instead of performing it physically [4]. In other work, users could control mechanical devices with EEG patterns [7]. In this paper we present a solution where a human controls a car just by using brain signals, i.e., without the need for any physical interaction with the car.

In a first application, computer-aided free driving allows the passenger to claim steering and speed control in special areas. The car prevents traffic-rule violations and accidents by reclaiming control before they happen. The second application implements semi-autonomous path planning, where the car drives autonomously through a road network until it arrives at so-called decision points. Typically located at crossings, decision points require the passenger to choose which way to drive next.

The paper is structured as follows: Section II introduces the autonomous car "MadeInGermany" and the applied BCI hardware. In Section III we describe the training process and the classification approach used. Section IV presents the developed usability interface, which enables a human to easily and safely control the car using brain patterns, followed by Section V, which shows experimental results of the presented approach. Section VI summarizes the paper and suggests future work.

II. AUTONOMOUS CAR AND BCI HARDWARE

Fig. 1. The autonomous car "MadeInGermany".

A. Autonomous Car

Our autonomous car "MadeInGermany" served as the test platform, c.f. Fig. 1: a modified Volkswagen Passat, equipped with a variety of different sensors and a drive-by-wire control via CAN bus. An introduction to these sensors is necessary at this stage, as they are used in the semi-autonomous mode described here. The platform is equipped with six laser scanners, three at the front and three at the back. Additionally, a rotating laser scanner from Velodyne on top of the car scans the near environment, c.f. Fig. 2. Further, the car has different radar sensors for obstacle detection and cameras, which are used for 3D feature extraction, lane detection and traffic light detection. The car actuators, i.e., gear shifting, motor and brake control, are manipulated via CAN bus.

Fig. 2. Sensor configuration of "MadeInGermany".

A modular architecture allows separate software components for the different sensors and actuators on each car, while utilizing the same modules for decision-making and other higher-level functions. Besides GPS and CAN data, the car relies on camera, lidar and radar sensors. The authors also want to mention that the architecture described in this paper is applied to a semi-autonomous wheelchair as well, c.f. Fig. 3, but in this paper we focus on the application to the semi-autonomous car.

Fig. 3. Autonomous wheelchair equipped with Kinect and lidar sensors.

B. Brain Computer Interface

The brain computer interface used in this approach is a commercial product, the Epoc cap from Emotiv. It has 16 EEG sensors which measure potential differences on the scalp. A contact fluid is necessary for good recognition. As a first step, the device has to be trained on the brain patterns of a user. The 16 sensor readings are mapped to four different direction classes or to the neutral class. Unfortunately, we had no access to the raw readings of the head sensors; thus, this first classification process was not transparent to us. The classification result is used by the controller module to generate throttle, brake and steering commands.

Fig. 4. Epoc neuroheadset from Emotiv. The cap is connected wirelessly to the computer.

III. TRAINING AND CLASSIFICATION

In the training phase the user can decide whether to control the steering only (two classes) or to control both steering and velocity (four classes). The classification program then asks the user to sequentially think of the different direction schemes. Many users tend to think of different motions, i.e. they think of moving the right arm, without actually performing those motions. Such motor images do activate certain regions in the brain, but not necessarily the same regions as would be activated during real motions [2]. The corresponding electric brain patterns are measured by the BCI. This method is usually called "motor imagery".

The training process must be executed every time the user puts the cap on his head. After some time, a retraining can become necessary. After the training process, the user can estimate the quality of the classification by performing an evaluation test. If the classification is not sufficiently accurate, a retraining is necessary; sometimes the user must choose other patterns, e.g. think of other motions or images.
IV. INTERFACE DESIGN

A. Requirements

With the free drive controller and the BrainChooser alike, the design of the interface is essential for BCI usability. Important aspects we focused on were
• stability and smoothness of executed actions,
• robustness to falsely classified brain patterns,
• safety of executed maneuvers with respect to physical limitations and to the surrounding area,
• minimality of necessary actions for maneuvers.
Two solutions were developed and tested on the closed Tempelhof airfield, for which we designed a demonstration course, see Fig. 5.

Fig. 5. Test course on the closed Berlin-Tempelhof airfield.

B. Free Drive

In free drive mode the operator has access to steering and speed control. Accidents are prevented by constantly monitoring the operator's decisions. If a collision is imminent or if the car is about to leave the free drive zone, the computer immediately stops the car.

a) Control Actions: The four brain commands ("left", "right", "push", "pull") are mapped to steering and velocity commands as follows: "left" and "right" increase or decrease the current steering wheel angle, while "push" and "pull" increase or decrease the desired velocity. This solution proved superior to giving direct throttle or brake commands, as it makes it easier to maintain a certain speed. When none of the four commands is detected, the velocity stays constant. The steering angle stays constant for one second and is then reduced (or respectively increased) towards the zero position, in which the car moves straight.

b) Control Frequency: Steering a car requires the driver to execute steering motions with short response times. To allow filtering of noisy brain signals, we allowed only one steering command per second in earlier tests, where each command added a large steering angle to the current position. With this solution the driver had problems executing small steering angles, and the response times proved to be too long. In further tests a control frequency of 5 Hz and a steering step of 0.6 rad proved to be a good solution, meaning that one revolution of the steering wheel takes about two seconds. The velocity can be increased or decreased by 0.15 m/s in each step. The steering angle is limited to ±2.5π rad, and velocities are cropped between 0 and 10 m/s.
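The mapping described above can be summarized in a few lines of code. The following Python sketch is our own illustration of the 5 Hz command handling with the step sizes and limits quoted in the text; class and variable names, as well as the sign convention, are assumptions and not the original controller implementation.

# Minimal sketch of the free-drive command mapping (illustration only).
import math

STEER_STEP = 0.6               # rad per 5 Hz step ("left"/"right")
STEER_LIMIT = 2.5 * math.pi    # maximum steering wheel angle in rad
SPEED_STEP = 0.15              # m/s per step ("push"/"pull")
SPEED_MIN, SPEED_MAX = 0.0, 10.0

class FreeDriveController:
    def __init__(self):
        self.steer = 0.0       # steering wheel angle set-point [rad], left assumed positive
        self.speed = 0.0       # velocity set-point [m/s]

    def update(self, command):
        """Apply one 5 Hz BCI command: 'left', 'right', 'push', 'pull' or None."""
        if command == "left":
            self.steer = min(self.steer + STEER_STEP, STEER_LIMIT)
        elif command == "right":
            self.steer = max(self.steer - STEER_STEP, -STEER_LIMIT)
        elif command == "push":
            self.speed = min(self.speed + SPEED_STEP, SPEED_MAX)
        elif command == "pull":
            self.speed = max(self.speed - SPEED_STEP, SPEED_MIN)
        # With no command the velocity stays constant; the decay of the steering
        # angle towards neutral after one second is handled by the steering logic below.
        return self.steer, self.speed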
c) Steering: When no "left" or "right" command is received for more than one second, the steering wheel returns to the neutral position. At higher velocities (>2 m/s), the steering wheel turns more slowly and the steering angle limits are reduced. This prevents excessive centrifugal forces and allows the driver to stay on a desired trajectory without causing oscillations. The steering angle ω, depending on the velocity v of the car in m/s, is limited to the following value (in rad):

ω = 2.5π ∗ min(1.0, 2.0/v)    (1)

Accordingly, the steering angle change δω within each time step is:

δω = 0.6π ∗ min(1.0, 2.0/v)    (2)

When the driver is accelerating or decelerating ("pull" or "push" commands), we also reduce the steering angle.
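Equations (1) and (2) translate directly into code. The sketch below assumes v in m/s and angles in rad, as in the text; the function names and the guard against division by zero are ours.

import math

def steering_angle_limit(v):
    """Eq. (1): maximum steering wheel angle in rad at velocity v in m/s."""
    return 2.5 * math.pi * min(1.0, 2.0 / max(v, 1e-6))

def steering_step_limit(v):
    """Eq. (2): maximum steering angle change per time step in rad."""
    return 0.6 * math.pi * min(1.0, 2.0 / max(v, 1e-6))

# Example: the full range is available up to 2 m/s; at 5 m/s it shrinks to 40 %.
print(steering_angle_limit(2.0))   # ~7.85 rad (2.5*pi)
print(steering_angle_limit(5.0))   # ~3.14 rad (2.5*pi*0.4)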
d) Steer Angle Smoother: Sending steering angle commands to the steering controller at only 5 Hz causes non-smooth steering maneuvers. The steering controller runs at 100 Hz; therefore we implemented a steer angle smoother, which linearly extrapolates 20 desired angle values (one value per 10 ms) for the controller. The result was a very softly turning steering wheel.

Fig. 6. Steer Angle Smoother: the black dotted curve shows the raw angle values at 5 Hz; the blue, red and green curves show interpolated values for interpolation frequencies higher than, equal to, and lower than 5 Hz.
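To illustrate the smoother, the sketch below expands each 5 Hz steering command into 20 intermediate set-points for a 100 Hz controller. It interpolates linearly between consecutive commands, which is a simplified reconstruction of the described extrapolation, not the original module.

def smooth_steering(prev_angle, new_angle, substeps=20):
    """Generate 20 angle values (one per 10 ms) bridging two consecutive
    5 Hz steering commands for a 100 Hz steering controller."""
    delta = (new_angle - prev_angle) / substeps
    return [prev_angle + delta * (i + 1) for i in range(substeps)]

# Example: a 0.6 rad step arriving at 5 Hz becomes 20 small 0.03 rad increments.
targets = smooth_steering(0.0, 0.6)
assert len(targets) == 20 and abs(targets[-1] - 0.6) < 1e-9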

e) Velocity Controller: The desired velocity is input to a PID controller. The PID controller generates positive or negative output values: positive outputs are weighted and mapped to throttle commands, negative outputs are similarly mapped to brake commands.
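The split of the PID output into throttle and brake commands can be sketched as follows; the gains, the time step and the clipping to [0, 1] are placeholder assumptions and not values from the paper.

class VelocityPID:
    """Toy PID velocity controller: positive output -> throttle, negative -> brake."""
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, dt=0.2):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired_v, measured_v):
        error = desired_v - measured_v
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        throttle = min(1.0, u) if u > 0.0 else 0.0    # weighted and clipped to [0, 1]
        brake = min(1.0, -u) if u < 0.0 else 0.0
        return throttle, brake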
As the experiments will show later, staying on a given trajectory can be hard at higher velocities, so an application in open traffic is still far away. Therefore we implemented and tested another solution for the BCI, in which it assists the human in deciding which direction to take; all driving and safety-relevant decisions are made by the car.

C. BrainChooser

While the autonomous car usually plans the best trajectory through the road network to reach its destination, the BrainChooser application allows the passenger to modify the route at certain decision points by using the BCI device.
The road network is represented as a directed graph of way-points which are connected by lanes. Curved lanes are approximated by spline interpolation over the way-points. Fig. 7 shows the spline interpolation of the road network graph with the car at a decision point with two possible directions.

Fig. 7. A spline interpolation of the road network with a point cloud from the lidar scanner for detecting obstacles. A path along the right lane has been chosen at the decision point in front of the crossroad, as indicated by the highlighted red-green line.

To find a desired trajectory in the road network, weights are assigned to the lanes, representing their distance, speed limit and, if necessary, obstacles on the lanes. When the car reaches a decision point where an operator's choice is required, e.g. an intersection, the operator is requested to choose a direction. The request is issued with the help of a synthetic voice recording. Once a choice is made, the chosen trajectory on the road network is processed and executed by the steering and velocity controllers.
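As an illustration of planning over such a weighted lane graph, the snippet below runs a standard shortest-path search in which each lane weight already combines distance, speed limit and obstacle penalties; the data layout, the weight values and the idea of pruning the lane rejected by the operator are hypothetical, not taken from the paper.

import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a lane graph: graph[node] = [(next_node, weight), ...]."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical decision point: the operator's BCI choice removes the rejected
# outgoing lane before replanning.
lanes = {"A": [("left_1", 4.0), ("right_1", 3.5)], "left_1": [("goal", 2.0)],
         "right_1": [("goal", 1.0)], "goal": []}
print(shortest_route(lanes, "A", "goal"))   # (4.5, ['A', 'right_1', 'goal'])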
At decision points, the operator is requested by the voice recording to input a direction with the BCI. Since it is usually not possible to hold a brain pattern steady over a long period of time, messages with detected patterns arrive at irregular intervals and include false positives. To robustly classify the brain pattern into one of the four categories, four variables (one for each possible pattern) accumulate the detection probabilities. The variable which first passes a certain threshold defines the operator's decision. This method proved to be relatively robust to false detections. It also gives the operator the required time to enter the desired direction. To prevent distraction, no audio feedback is given during the selection. However, a display presents the currently detected pattern, resulting in faster decisions.
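A minimal sketch of this accumulation scheme is given below: every incoming detection adds its probability to the accumulator of its class, and the first accumulator to pass the threshold fixes the decision. The threshold value and the reset policy are assumptions, not values from the paper.

class PatternAccumulator:
    """Accumulate detection probabilities per pattern; first class over the threshold wins."""
    def __init__(self, classes=("left", "right", "push", "pull"), threshold=5.0):
        self.threshold = threshold
        self.scores = {c: 0.0 for c in classes}

    def add_detection(self, pattern, probability):
        """Feed one (possibly false-positive) detection; return a decision or None."""
        if pattern in self.scores:
            self.scores[pattern] += probability
            if self.scores[pattern] >= self.threshold:
                self.scores = {c: 0.0 for c in self.scores}   # reset for the next decision point
                return pattern
        return None

# Example: sporadic, noisy detections still converge to the dominant pattern.
acc = PatternAccumulator()
decision = None
for p, prob in [("left", 0.9), ("right", 0.4), ("left", 0.8), ("left", 0.9),
                ("left", 0.7), ("left", 0.9), ("left", 0.9)]:
    decision = acc.add_detection(p, prob)
    if decision:
        break
print(decision)   # 'left'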
V. EXPERIMENTS

A. Benchmarks

We conducted different experiments on the former Tempelhof airport in Berlin.

Experiment 1: First we measured the accuracy of control. The task was to keep the car on an infield course, see Fig. 5, using the "left" and "right" patterns for steering only. The velocity was set to 2 meters per second. The driver had to drive the track for three laps to see whether the accuracy remained constant over time. The resulting traces are depicted in Fig. 8; the errors are shown in Fig. 13.

Experiment 2: In the second experiment the driver had to control throttle and brake in addition to the steering commands for left and right. The car was now able to accelerate from 0 to 3 meters per second. The resulting trace is shown in Fig. 11; the errors are shown in Fig. 13.

Experiment 3: To check the lateral error to the lane at higher speeds, we designed another track with long straight lanes and two sharp corners. The velocity was fixed to 5 meters per second and, as in the first experiments, the driver had to steer left and right only, trying to stay on the reference lane. The resulting trajectory is shown in Fig. 12, the errors in Fig. 13.

Experiment 4: We checked the response time of the test person. The test person received different commands, such as "left", "right", "push" or "pull", from another person and had to generate the corresponding brain pattern, which then had to be recognized by the control computer. The time from the command until the recognition within the control computer was measured. We also measured falsely classified patterns.

Experiment 5: In this experiment we tested the second module, the BrainChooser. At intersections, the operator was asked to decide for the left or the right route. The test person then had about ten seconds to decide for the left or the right direction. This long decision phase helps to filter out noise and ensures that the test person was generating the desired pattern over a longer time, reducing the risk of coincidentally generated patterns.

B. Experimental Results

Experiment 1: At the beginning of the first experiment we marked the desired lanes on the airfield. As we found, on a flat surface those lanes are hard to see from greater distances. Moreover, it is difficult for a human driver to estimate his distance to the middle of the lane with centimeter accuracy. Therefore the test person had access to a computer monitor, which displayed a model of the car on the virtual track from a bird's eye perspective. The test person succeeded in keeping a close distance to the desired trajectory, while only having to steer the car. We performed three tests to observe the variance between different laps. The standard deviation of the lateral error over time was 1.875 meters for one lap; the error function is shown in Fig. 9. One lap lasted about 10 minutes. In the following laps this error did not diverge by more than 0.2 m. The angular error standard deviation was 0.20 rad. The traces of the driven laps are shown in Fig. 8. Fig. 13 comprises the results of the first three experiments.

Fig. 8. Experiment 1: Infield test course. The test person had to control the car with two steering commands ("left" and "right"). Velocity was set to 2 meters per second. The traces of all three driven laps are depicted in red; the reference trace of the original track is shown in blue.

Fig. 9. Experiment 1: Lateral error of the car to the reference trajectory.

Fig. 10. Experiment 1: Orientation error of the car to the reference trajectory.

Experiment 2: The test person managed to control the car, adjusting both the velocity and the steering wheel. However, the accuracy of the steering control was reduced compared to Experiment 1, resulting in a larger standard deviation of the lateral error, which was 2.765 m. The standard deviation of the orientation error was 0.410 rad and thus larger as well.

Fig. 11. Experiment 2: Infield test course. The test person had to control the car with four commands ("left", "right", "push", "pull") to steer the car and to adjust the velocity between 0 and 3 meters per second. The traces of the car (red) and of the original lap (blue) are depicted.

Experiment 3: The lateral error became even greater on the speedway. The speed was set to 5 meters per second and the test person tried to focus on heading in the right direction (keeping the orientation error small) rather than on reducing the lateral distance. This is due to the fact that at higher speeds the target point for orienting the car is displaced forwards. The standard deviation of the lateral error was 4.484 m, the standard deviation of the orientation error was 0.222 rad. The results are contained in Fig. 13.

Fig. 12. Experiment 3: Speedway test course. As in the first experiment, the test person had to control the car with two steering commands. Velocity was set to 5 meters per second. The traces of the car (red) and of the original lap (blue) are depicted.

Fig. 13. Error measurements: lateral distance to the reference trajectory in meters and orientation error in rad; 2 or 4 DOF refer to the two or four patterns the test person had to generate.

                          σ_lateral [m]   σ_angle [rad]
  Infield 2 m/s, 2 DOF        1.875           0.200
  Infield 3 m/s, 4 DOF        2.765           0.410
  Speedway 5 m/s, 2 DOF       4.484           0.222

Experiment 4: In this experiment we measured the time it takes to generate a pattern with the brain and to classify it. The results are shown in Fig. 14. Over 60 percent of the brain commands could be generated within 5 seconds or less, about 26 percent even within two seconds or less. In 20 percent of all cases the generated pattern was wrong. This was usually due to concentration problems of the test person. After a while, at the latest after one hour, a new training of the brain patterns is necessary. Further, after using the BCI for 90 minutes we experienced some tiredness of our test subject, which resulted in longer response times or higher inaccuracies.

Fig. 14. Experiment 4: Reaction times. The test subject is told to generate a certain pattern; a pattern counts as recognized when the computer recognizes the correct class.

              2 s or less   2-5 s   5-10 s   10 s or more   falsely classified
  percent         26 %       36 %    10 %        9 %              20 %
Experiment 5: In this experiment for the BrainChooser, the test person achieved correctly classified directions in more than 90 percent of the cases.

VI. CONCLUSION AND FUTURE WORK

Brain-computer interfaces pose a great opportunity to interact with highly intelligent systems such as autonomous vehicles. While relying on the car as a smart assistance system, they allow a passenger to gain control of the very essential aspect of driving without the need to use arms or legs. Even while legal issues remain for public deployment, this could already enable a wide range of disabled people to command a vehicle in closed environments such as parks, zoos, or inside buildings.

Free drive with the brain and the BrainChooser give a glimpse of what is already possible with brain-computer interfaces for commanding autonomous cars. Modifying the route of a vehicle with a BCI is already an interesting option for applications that help disabled people to become more mobile. It has been proven that free driving with a BCI is possible, but the control is still too inaccurate to let mind-controlled cars operate in open traffic. The semi-autonomous BrainChooser overcame this weakness, and decisions were performed with high precision. Improvements of the BCI device could have multiple positive effects. One effect, of course, would be a more accurate control of the car, i.e., more accurate steering and velocity control in free drive mode. Further, it is desirable to be able to distinguish more than four brain patterns in the future. This would enable the driver to give further commands, e.g., switching the lights off and on, or setting the onboard navigation system to the desired location by thought alone.

More detailed experiments regarding this decline of concentration over time and within the context of car driving will be future work as well.

ACKNOWLEDGMENTS

The authors wish to thank the German Federal Ministry of Education and Research (BMBF). Further, the authors would like to thank Henrik Matzke for his support as a test candidate in the experiments.

REFERENCES

[1] D. O. Bos and B. Reuderink. BrainBasher: a BCI game. In P. Markopoulos, J. Hoonhout, I. Soute, and J. Read, editors, Extended Abstracts of the International Conference on Fun and Games 2008, Eindhoven, Netherlands, pages 36–39, 2008.
[2] J. Decety. Do executed and imagined movements share the same central structures? Cognitive Brain Research, volume 3, pages 87–93, 1996.
[3] E. Dickmanns, R. Behringer, D. Dickmanns, T. Hildebrandt, M. Maurer, F. Thomanek, and J. Schiehlen. The seeing passenger car 'VaMoRs-P'. In IVS94, pages 68–73, 1994.
[4] B. van de Laar, D. O. Bos, B. Reuderink, and D. Heylen. Actual and imagined movement in BCI gaming. In Adaptive and Emergent Behaviour and Complex Systems (AISB 09), 2009.
[5] R. Leeb, F. Lee, C. Keinrath, R. Scherer, H. Bischof, and G. Pfurtscheller. Brain-computer communication: Motivation, aim, and impact of exploring a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering, volume 15, pages 473–482, 2007.
[6] A. Nijholt, D. Tan, B. Allison, J. Milan, and B. Graimann. Brain-computer interfaces for HCI and games. In CHI '08 Extended Abstracts on Human Factors in Computing Systems, pages 3925–3928, 2008.
[7] J. Pineda, D. Silverman, A. Vankov, and J. Hestenes. Learning to control brain rhythms: making a brain-computer interface possible. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(2):181–184, 2003.
[8] S. Thrun. Official Google Blog: What we're driving at. http://googleblog.blogspot.com/2010/10/what-were-driving-at.html, Oct 2010.
[9] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L.-E. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney. Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics, 23(9):661–692, September 2006.
[10] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, J. Dolan, D. Duggins, D. Ferguson, T. Galatali, C. Geyer, M. Gittleman, S. Harbaugh, M. Hebert, T. Howard, A. Kelly, D. Kohanbash, M. Likhachev, N. Miller, K. Peterson, R. Rajkumar, P. Rybski, B. Salesky, S. Scherer, Y.-W. Seo, R. Simmons, S. Singh, J. Snider, A. Stentz, W. Whittaker, J. Ziglar, H. Bae, B. Litkouhi, J. Nickolaou, V. Sadekar, S. Zeng, J. Struble, M. Taylor, and M. Darms. Tartan Racing: A multi-modal approach to the DARPA Urban Challenge. Technical report, Tartan Racing, April 2007.
