
sensors

Article
A New Controller for a Smart Walker Based on
Human-Robot Formation
Carlos Valadão 1, *, Eliete Caldeira 2 , Teodiano Bastos-Filho 1 , Anselmo Frizera-Neto 1
and Ricardo Carelli 3,4
1 Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo (UFES),
Fernando Ferrari Av., 514, 29075-910 Vitoria, Brazil; [email protected] (T.B.-F.);
[email protected] (A.F.-N.)
2 Electrical Engineering Department, Federal University of Espirito Santo (UFES), Fernando Ferrari Av., 514,
29075-910 Vitoria, Brazil; [email protected]
3 Institute of Automatics, National University of San Juan (UNSJ), San Martín Av. (Oeste), 1109,
J5400ARL San Juan, Argentina; [email protected]
4 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), C1425FQB Buenos Aires, Argentina
* Correspondence: [email protected]; Tel.: +55-27-4009-2661

Academic Editor: Dan Zhang


Received: 12 May 2016; Accepted: 14 July 2016; Published: 19 July 2016

Abstract: This paper presents the development of a smart walker that uses a formation controller
in its displacements. Encoders, a laser range finder and ultrasound are the sensors used in the
walker. The control actions are based on the location of the user (human), who is the actual formation
leader. There is neither a sensor attached to the user’s body nor force sensors attached to the arm
supports of the walker; thus, the control algorithm projects the measurements taken from the
laser sensor into the user reference and then calculates the walker’s linear and angular velocities
to keep the formation (distance and angle) in relation to the user. An algorithm was developed to
detect the user’s legs, whose distances from the laser sensor provide the information needed by the
controller. The controller was theoretically analyzed regarding its stability, simulated and validated
with real users, showing accurate performance in all experiments. In addition, safety rules are used
to check both the user and the device conditions, in order to guarantee that the user is not exposed
to risks when using the smart walker. This device is intended to help people with lower limb
mobility impairments.

Keywords: smart walker; robotic walker; accessibility; mobility; assistive technology

1. Introduction
Mobility can be defined as the “ability to move or be moved freely and easily” [1], which is an
important skill of the human body that affects virtually all areas of a person’s life, since it is used for
working, entertaining, having social relationships and exercising, among other daily tasks. Mobility
also works as a form of primary exercise for the elderly [2]. People whose mobility is impaired
usually must rely on other people to perform daily tasks. In addition, the lack of mobility inevitably
decreases the quality of life of those affected [3].
Impaired people can totally or partially lack the force needed to perform their movements. In the
first group, the affected limb is unable to exert any force to allow the movement, while in the second
group, the force is not enough to perform the movement [4]. Having a place or device on which the
person can partially support his/her weight helps with balance and makes walking easier.
Assistive technologies (AT) are used to help impaired people, including those with problems
related to mobility. They are defined as a set of equipment, services, strategies and practices,
whose concept and application are used to minimize the problems of people with impairments [5].

Sensors 2016, 16, 1116; doi:10.3390/s16071116 www.mdpi.com/journal/sensors



This kind of technology has gained importance and awareness, due to the increase of the population
who need assistance in several forms, including mobility, especially for the elderly, who can suffer
from fall-related problems [6].
Mobility-impaired people may resort to assistive devices to improve their quality of life,
independence and self-esteem, and to avoid the fear of falling [7,8]. Several situations may lead a person
to require an assistive device, such as infirmities, accidents and natural aging, which brings some
age-related illnesses [9]. Studies from the United Nations show a tendency toward an increasing
share of elderly people in the overall population [10], which underlines the importance of
studying technologies to assist the elderly. In addition, there are also people with non-age-related
illnesses that affect mobility.
Mobility-aid devices are divided into two major categories: alternative and augmentative
ones [7]. In the first category, the device changes the way the person moves himself/herself and does
not require residual force of the affected limbs to allow the movement [7]. Examples of this category
are wheelchairs, cars adapted for elderly people [11] and auto-guided vehicles (AGV), which allow
the user to move without using the force of the affected limb [12]. These kinds of auxiliary
devices benefit people who do not have, or are unable to use, remaining forces [13,14].
In contrast, in the second category, the augmentative devices are designed for those who can still
use their residual forces to move. These devices only work if the user intentionally applies the
remaining forces to aid the movement [15]. Examples of these devices are canes, crutches, walkers and
exoskeletons [9].
Each group has its advantages and disadvantages. Notably, the alternative devices can
be used for most kinds of impairments, i.e., for people both with and without residual forces;
however, their disadvantage is exactly the lack of use of residual forces, because if a person still has
remaining muscle forces and does not use them, his/her muscles may suffer atrophy [16–18]. Thus,
augmentative devices use these remaining forces, thereby keeping the muscle tone and avoiding atrophy.
The disadvantage of these devices lies in the fact that not everyone who suffers from an impairment can
use them, exactly because they require residual forces, which some people may not have [7].

Figure 1. UFES’s smart walker.

The choice of the mobility-aid device must be made by a physician, who will take into
consideration all of the factors and issues about the user’s health and rehabilitation needs [9].
If there are still usable remaining forces in the affected limbs, an augmentative device must be
chosen [19,20]. Additionally, augmentative devices can be used as tools in physiotherapy sessions
to enhance the muscular tone of the weakened limbs and improve the movement ability [21]. Some
authors make a subdivision of augmentative devices according to the application: for transportation
and for rehabilitation (for both transportation and physiotherapy) [7]. Figure 1 exemplifies this
kind of device, which is a smart walker developed in our lab at UFES/Brazil to help people in gait
rehabilitation [22].

In terms of their frames, walkers are categorized into basically three types, according to
their ground reaction forces [23]: four-legged, front-wheeled and rollators (Figure 2). Each one has its
advantages and disadvantages, which are intrinsic to their construction.


Figure 2. Types of walkers according to their mechanical structure. (a) Four-legged; (b) front-wheeled [24];
(c) rollator [24].

Four-legged walkers are considered the most stable of the conventional walkers and can be used
by people who do not have good body balance [23]. As a disadvantage, this kind of walker requires
more force to use, since the user has to lift the whole device and then put it back on the ground at
each step. Moreover, this walker does not offer a natural gait, due to the need for lifting it [7,9,25].
On the other hand, front-wheeled walkers do not require much force, since the user only has to lift
the rear part of the walker, keeping the wheels on the ground and using them to move the device.
This walker provides a better gait when compared to the four-legged one. However, it still requires
partially lifting the walker during the gait; it demands less force than the previous type, but offers
less stability and requires more balance and control, especially when the walker is not totally on the
ground [7]. Finally, the third type of walker is the rollator, which is the one that provides the most
natural gait. In addition, it requires less force than the two previous walkers, since no lifting is
necessary. However, this structure requires better control and good balance from the user, since the
wheels can run freely [7,9,23,25].
The rollator is the kind of walker used in this work, although built from a four-legged frame
(converted to free wheels) and attached to a mobile robot to become a smart walker, with electronics,
sensors, control system and actuators (motors) added to it.

1.1. Smart Walkers


As previously mentioned, smart walkers are walkers that contain, besides the mechanical structure
to support the user, electronics, control systems and sensors, in order to allow a better user experience
and minimize the risks of falling. In addition, they also provide a more natural and smooth gait [7,26]

as they are usually built on a rollator structure [25]. Some of them are designed to help users in
other functions, such as guiding visually-impaired people or elderly people who have memory
deficits [27–30]. There are also examples of walkers that go beyond the mobility support and expand
the user experience, providing sensorial, cognitive and health monitoring support [7]. They can be
classified into passive or active depending on the propulsion movement being made/aided by the
device or not. A passive smart walker may contain actuators to help the orientation of the device, but
not propulsion, while an active smart walker provides support in propelling the movement [15]. Some
notable smart walkers that contain other features, besides aiding the users with movement, are:

• RT Walker: this is a passive walker, which contains a laser sensor to monitor the surrounding
area and two other laser sensors to find the user’s legs. In addition, this walker also contains
inclinometers to find the walker inclination [31].
• GUIDO Smart Walker: this walker had its first version (PAM-AID walker) developed to assist
blind people [32]. Throughout time, this walker received new sensors, other control strategies
and, currently, has ultrasound or a laser sensor (depending on the version) to help users
avoid obstacles [29]. It does not offer propulsion, being classified as a passive walker [33].
The current version of this walker can also perform SLAM (simultaneous localization and
mapping), which allows the device to map the environment (while assisting the user)
and use this information to detect obstacles [27].
• JARoW: this walker uses omni-directional wheels to improve its maneuverability. Two infrared
sensors are used to detect the user’s legs in order to control the walker speed [34].
• Other devices: there are several other devices, with different techniques and purposes.
The SIMBIOSIS Smart Walker, for example, was designed to study human gait [35]; the PAMM
(Personal Assistant for Mobility and Monitoring) monitors the user’s health while he/she uses the
device [36]; the iWalker was developed to be used inside a structured environment using local
sensors and others in the environment [30]. These walkers focus either on elderly people or
on studying human gait.

In this work, a new controller for a human-robot formation is introduced, in which the human is
the leader of the formation and does not have any sensor on him/her, and the follower is the robot,
which contains all of the sensing devices. All measurements necessary for the walker controller are
obtained from the distance to the user’s legs (through a laser sensor), in addition to measurements
from the ultrasound sensors and robot odometry. No force sensors are used, which is an advance in
relation to the smart walker developed in [22].
Table 1 shows a comparison among smart walker controllers. As can be seen, the smart
walker of this paper uses a novel controller based on human-robot formation. Besides, this work
does not require any sensors attached to the user’s body, unlike some of those presented in Table 1.

Table 1. Smart walkers’ controllers and sensors.

Smart Walker             | Sensors                                                                  | Controllers
RT Walker [37]           | Force/moment sensing and encoders                                        | Several algorithmic controllers for motion (obstacle avoidance, path following, among others)
GUIDO Smart Walker [29]  | Laser sensor, force sensors, switches, sonar and encoders                | Shared control approach
PAM-AID [38]             | Ultrasound and laser sensors                                             | Algorithmic controller
PAMM [36]                | Health sensors, external sensors, encoders, force sensors, among others  | Admittance-based controller
JARoW [34]               | Infrared sensors                                                         | Algorithmic controllers
iWalker [30]             | RFID, encoders and external sensors                                      | Algorithmic controllers
UFES’ Smart Walker [22]  | IMUs, laser sensor and force sensor                                      | Force and inverse kinematics controllers
Our device               | Laser sensor, encoders and ultrasound                                    | Formation-based controller

2. Materials and Methods

2.1. Mechanical Structure and Hardware


Our smart walker was designed by adapting a conventional commercial four-legged walker
(Figure 2a) into a structure, which was attached to a Pioneer 3-DX robot (Figure 3); the original
four-legged walker was modified to include free wheels. Thus, our smart walker can move freely in
any direction and, once attached to the robot, its movements are imposed by the robot. The human,
in turn, controls the robot; therefore, the movement is guided by the human.

Figure 3. Diagram showing the structures that compose our smart walker [39].

To make the whole structure of the walker, some pieces were built in aluminum to attach the
walker frame to the robot. Figure 3 shows details of the smart walker. Item A is the modified walker
with four free wheels (Item D) and with other options for height configuration. Foam supports for the
forearms were built to give more comfort to the user during the use, shown in Item G. Item B shows
the robot and the ultrasound sensors to detect obstacles. The robot provides both propulsion and the
non-holonomic restrictions needed for this application. Items E and F are, respectively, a support to
store the battery used by the laser sensor SICK LMS-200 LRF (item C, [40]) and a WiFi to Ethernet
converter used to allow the robot to communicate wirelessly.

2.2. Algorithms
The controller used as a basis to develop the human-robot formation controller presented in this
paper was based on the studies of [41]. Here, the idea is to substitute the master robot (leader) with a
human, with all of the sensors included in the follower robot. The human does not have any sensors,
thus the acquired data by the robot are translated and rotated and then projected into the human
reference. In other words, the data collected by the robot are processed and projected on the human
pose as if the human were the master robot. Thus, the human commands the formation. By using the
laser range sensor, the smart walker can calculate the human speed, position and orientation inferred
from the distance to the human’s legs and, therefore, adjust its own speed, keeping the formation
maintained. After the output of the formation control, the data of the linear and angular speed are
processed by an internal servo PID (proportional-integral-derivative) controller, which calculates the
rotational speed of each wheel of the robot.
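As a rough illustration of this last step, the linear/angular command pair of a differential-drive robot can be mapped to per-wheel angular speeds as sketched below; the axle length and wheel radius are placeholder values, not Pioneer 3-DX specifications, and on the real robot this conversion happens inside the internal servo loop:

```python
def wheel_speeds(v, omega, axle_length=0.33, wheel_radius=0.0975):
    """Map body commands (v in m/s, omega in rad/s) to left/right wheel
    angular speeds (rad/s) for a differential-drive robot.
    axle_length and wheel_radius are illustrative placeholder values."""
    v_left = v - omega * axle_length / 2.0   # linear speed of the left wheel
    v_right = v + omega * axle_length / 2.0  # linear speed of the right wheel
    return v_left / wheel_radius, v_right / wheel_radius
```

A pure rotation (v = 0) yields wheel speeds of equal magnitude and opposite sign, as expected for a differential robot turning in place.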

The steps of the controller are presented in the block diagram shown in Figure 4. Blocks, such
as “LD Safety Rules” (LD meaning leg detection), “Overspeed” and “Backwards move”, shown in
Figure 4, are related to the safety rules, explained in Section 2.2.4. The human-robot kinematics is
explained in Section 2.2.2.

Figure 4. Flowchart of the data acquisition, controller and safety rules applied to the smart walker.

2.2.1. Legs and User Detection


To detect the human pose, the laser sensor firstly scans the area in front of it (180◦ ) with 1◦
resolution, searching for the legs [40]. The control algorithm uses only the filtered data in a region
of interest (ROI), defined from 68◦ to 112◦ and 1 m long, which is the area where the human’s legs
should be located. Inside this region, only the human’s legs should be detected, as the walker frames
are outside that region.
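A minimal sketch of this ROI filtering, assuming one range reading per degree and using NaN as an out-of-range marker (the paper does not specify how discarded readings are flagged):

```python
import numpy as np

def crop_roi(ranges_m, angle_min_deg=68, angle_max_deg=112, max_range_m=1.0):
    """Keep only the laser readings inside the region of interest
    (68 to 112 degrees, at most 1 m), where the user's legs should be.
    ranges_m holds one reading per degree over the 180-degree scan."""
    angles = np.arange(len(ranges_m))
    roi = np.asarray(ranges_m, dtype=float)[
        (angles >= angle_min_deg) & (angles <= angle_max_deg)
    ].copy()
    roi[roi > max_range_m] = np.nan  # discard everything beyond the ROI depth
    return roi
```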
The algorithm to detect the legs is based on the one developed by [42]. This algorithm works
with the signal transitions, which are large variations in the laser signal; they can be found by
differentiating the distance signal with respect to the scan angle, dd/dθ. By analyzing this derivative
signal and finding its peaks and valleys, it is possible to infer where the transitions provoked by the
legs are. Figure 5 shows what the transitions are and how they are analyzed by the algorithm.
The variables T1 up to T4 represent the transitions (high variation of the laser signal), and L L
and R L are the left and right leg, respectively. Finally, the variable “user” means the position of the
user itself.

Figure 5. Example of finding the user based on the position of each leg. (a) Laser signal inside the
region of interest; (b) signal derivative and transitions (peaks and valleys); (c) transitions in the original
signal; (d) legs and user position.

Summarizing, the algorithm detects where the legs are located by performing the analysis of the
number, length and amplitude of the transitions in the derivative function of the laser signal inside the
ROI area. These transitions express big variations in the laser reading, generating peaks and valleys,
which are used to find the legs’ extremities.
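The transition search can be sketched as follows; the 0.3 m threshold is an illustrative value, since the paper only states that a predefined threshold is used:

```python
import numpy as np

def find_transitions(roi_ranges, threshold_m=0.3):
    """Find transitions (large jumps between consecutive laser readings)
    by differentiating the range-vs-angle signal and keeping the peaks
    and valleys whose magnitude exceeds the threshold.
    Returns the indexes (relative to the ROI) where transitions occur."""
    derivative = np.diff(roi_ranges)  # discrete approximation of dd/dtheta
    return np.where(np.abs(derivative) > threshold_m)[0] + 1
```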

There are five possible cases analyzed by the algorithm to find each of the human’s legs,
in addition to estimating the human’s pose (legs’ position and orientation). The first case happens when
there are fewer than two transitions or more than four transitions. In such cases, the algorithm ignores
the frame and waits for the next laser scan. If more than 20 consecutive frames are ignored,
a safety rule stops the walker. The other cases are when two, three or four
transitions are detected. Figure 6 shows examples of leg detection for: two transitions (a); four (b); three
with the left leg closer to the laser (c); three with the right leg closer to the laser (d); and a case in which
only one leg is detected, with the other outside the safe zone, where the laser sensor cannot detect it (e).
The system detects this by analyzing the apparent leg diameter in the space between two transitions:
to be considered a leg, it should be wider than the predefined value of 9 cm (after projection). This value
was based on the average leg width, measured in the frontal plane at a 30-cm height (the height of the
laser from the ground), of people who work in our laboratory. The dots in the graphics
represent the variations, i.e., the transitions.

Figure 6. Diagram showing the operation of the leg detection algorithm. (a) Legs together - two
transitions; (b) legs separated - four transitions; (c) three transitions - left leg closer to the laser sensor;
(d) three transitions - right leg closer to the laser sensor; (e) only one leg detected.

The flowcharts in Figure 7 show the decision tree used to determine where the user is,
by analyzing the signals from the laser sensor for each of the aforementioned cases. Figure 8 is an
extension of Figure 7, which details how the algorithm behaves for each number of transitions.
If the system finds fewer than two transitions, which means no leg
or only one leg was detected, it ignores the frame and increments the safety counter. If this
counter reaches a maximum value, the walker stops. If there are only two transitions, the legs are
probably in superposition, without a significant difference between them regarding the distance from
the laser sensor.
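This safety counter can be sketched as a small watchdog (the 20-frame limit is taken from the text; the actual stop command issued to the robot is not shown here):

```python
class FrameWatchdog:
    """Stops the walker when too many consecutive laser frames are
    discarded by the leg-detection rules (20 frames in this work)."""

    def __init__(self, max_ignored=20):
        self.max_ignored = max_ignored
        self.ignored = 0

    def frame_processed(self, legs_found):
        """Call once per laser scan; returns True when the walker must stop."""
        self.ignored = 0 if legs_found else self.ignored + 1
        return self.ignored >= self.max_ignored
```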
Figures 7 and 8 show the algorithm used to make the system detect where the user’s legs are.
First, the algorithm starts reading the laser sensor with its 180◦ angle range and a 30-m distance range.
Second, it is necessary to crop the area into the region of interest, which is the area behind the walker.
This area is defined by the angles between 68◦ and 112◦ and 1 m long. Everything outside this area
is ignored.
Then, using this filtered signal, the algorithm computes its derivative and finds where
the major variations are, which most probably correspond to the user’s legs. The transitions should be
higher than a predefined threshold; this prevents the system from picking up noise or spurious
transitions, which could be misinterpreted as legs. With the information of the derivative of the laser
signal, the algorithm finds the peaks and valleys, which indicate the transitions.
According to the number of transitions, distinct scripts are used to process the information. If there
is only one transition, this laser reading is ignored and the robot waits for the next one. With only one
transition, it is not possible to detect and define where the two legs are. The same behavior is adopted
in the case of more than four transitions, since then it is impossible to determine which transitions
represent the legs.
For the other cases, it is possible to calculate the position of each leg. For two transitions, the extreme
positions of the transitions are analyzed, and the person is considered to be in the middle between these
points. Mathematically, this is represented by Equations (1) and (2):

L_L = T1 + φ/2 (1)

R_L = T2 − φ/2 (2)

where L_L is the left leg and R_L is the right leg. T1 and T2 are the first and second transitions (from
left to right), respectively. The variable φ is related to the size of the leg projected at the distance of
each transition.
In the case of three transitions, there are two possibilities: either the left leg or the right leg is
closer to the laser sensor. To determine which leg is closer, the first and the last
transitions are analyzed. If the first transition (T1) is closer to the sensor, the left leg is in
front of the right leg. Otherwise (if T3 is closer to the sensor), the right leg is in front of the
left leg.
In the first case, the expressions that represent each leg position are given by
Equations (3) and (4):

L_L = (T1 + T2)/2 (3)

R_L = T3 − φ/2 (4)

where T3 is the third transition (from left to right).
where T3 is the third transition (from left to right).
On the other hand, the second case is given by Equations (5) and (6):

L_L = T1 + φ/2 (5)

R_L = (T2 + T3)/2 (6)
In the case of four transitions, the leg angles are defined by Equations (7) and (8):

L_L = (T1 + T2)/2 (7)

R_L = (T3 + T4)/2 (8)
After detecting each leg position, the user position is estimated by taking the average position
of both legs. The information given by L_L and R_L, as well as the transitions, are actually the indexes
(angles) in the vector of the laser signal. To find the distance, it is necessary to look up the amplitude
(distance, represented by variable d) at those angles. Therefore, the user distance and angle
from the walker are defined as in Equations (9) and (10):

d_h^L = (d(L_L) + d(R_L))/2 (9)

φ_h^L = (L_L + R_L)/2 (10)

where d_h^L is the distance from the human to the robot, d(L_L) is the distance of the left leg and
d(R_L) is the distance of the right leg (Equation (9)). In Equation (10), φ_h^L, L_L and R_L are,
respectively, the human orientation and his/her left and right leg orientations (angles). The superscript
L means the laser axis, and the subscript h means the human/user.
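Equations (1)–(10) can be gathered into one sketch (assuming the transitions are already sorted left to right and that, for three transitions, the caller has decided from the range readings whether the left leg is the closer one):

```python
def legs_from_transitions(t, phi, left_closer=False):
    """Leg angles from the transition angles t (sorted left to right),
    following Equations (1)-(8); phi is the projected leg width.
    Returns (L_L, R_L), or None when the frame must be ignored."""
    if len(t) == 2:                                  # legs together
        return t[0] + phi / 2, t[1] - phi / 2        # Eqs (1)-(2)
    if len(t) == 3 and left_closer:                  # left leg in front
        return (t[0] + t[1]) / 2, t[2] - phi / 2     # Eqs (3)-(4)
    if len(t) == 3:                                  # right leg in front
        return t[0] + phi / 2, (t[1] + t[2]) / 2     # Eqs (5)-(6)
    if len(t) == 4:                                  # legs separated
        return (t[0] + t[1]) / 2, (t[2] + t[3]) / 2  # Eqs (7)-(8)
    return None                                      # frame ignored

def user_pose(l_leg, r_leg, dist_at):
    """User distance and angle in the laser reference, Equations (9)-(10);
    dist_at maps a leg angle (index) to its measured range."""
    return (dist_at(l_leg) + dist_at(r_leg)) / 2, (l_leg + r_leg) / 2
```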

Figure 7. Flowchart used to infer the human position by finding where the legs are (main part).

Figure 8. Details of the scripts for each kind of leg detection.

2.2.2. Human-Robot Kinematics


The human-robot interaction is given by a system that helps the user to move him/herself with
the aid of the weight support and balance. In addition, different from other works that use inertial
measurement units and/or force sensors, there are no sensors attached to the user’s body. All sensing
is made by the robot, which is the follower in the formation. The laser sensor is used to verify where
the user is, and in the case of obstacle detection, the ultrasound sensors present in the robot are also
used. Therefore, in this work, a human-robot interaction is shown in which the human does not need
to wear any sensor, since all of the data needed are acquired by the robot and its sensors.
The diagram of the human-robot interaction is depicted in Figure 9. This is the diagram used to
generate the human-robot kinematics model and, further, the control laws. In the diagram depicted
in Figure 9, it is possible to see the variables used by the controller algorithm to calculate the robot’s
linear and angular speed to keep the formation. The variables presented in this picture are described
in Table 2.

Figure 9. Diagram showing the human-robot interaction and the human, robot, laser sensor and
absolute references.

Table 2. Variables in Figure 9.

Variable          | Details                                                              | Unit
v_h               | Human linear speed in the absolute axis                              | m/s
ω_h               | Human angular speed in the absolute axis                             | rad/s
d                 | Human-robot distance                                                 | m
θ                 | Robot angle in the human reference                                   | rad
φ                 | Human angle in the robot reference                                   | rad
x_L and y_L       | Laser sensor longitudinal and transversal axes                       | m
x_H and y_H       | Human longitudinal and transversal axes                              | m
x_R and y_R       | Robot longitudinal and transversal axes                              | m
x_h^L and y_h^L   | Human position in the laser sensor reference                         | m
x_h^R and y_h^R   | Human position in the robot reference                                | m
x_r^h and y_r^h   | Robot position in the human reference                                | m
ḣ_ref             | Speed vector the robot should follow to keep the formation           | m/s
α                 | Robot orientation in the human reference                             | rad
α_ref             | Set-point orientation the robot should achieve to keep the formation | rad
v_r               | Robot linear speed                                                   | m/s
ω_r               | Robot angular speed                                                  | rad/s
β                 | Human orientation in the robot reference                             | rad
v_hx              | Human linear speed in the transversal axis                           | m/s
v_hy              | Human linear speed in the longitudinal axis                          | m/s
k                 | Current sample instant (sample time of 0.1 s, i.e., 100 ms)          | -
k − 1             | Previous sample instant                                              | -
k − 1|k           | Positions and angles of instant (k − 1) projected into instant (k)   | -

The mathematical model of the smart walker pose in the human reference is described in
Equations (11)–(13). The Pioneer 3-DX is a differential mobile robot with non-holonomic constraints.

ẋ_r^h = v_r^0 · cos α + ω_h^0 · d · sin θ (11)

ẏ_r^h = v_r^0 · sin α + ω_h^0 · d · cos θ − v_h (12)

α̇ = ω_r^0 − ω_h^0 (13)

where ẋ_r^h and ẏ_r^h are the robot-human position variations due to the human’s movements (linear
speed v_h and angular speed ω_h), and α is the robot-human angle, also related to his/her
displacements. θ is the angle of the robot in the human reference, and d is the distance between the
human and the robot. Velocities v_r^0 and ω_r^0 are, respectively, the robot linear and angular speeds.
In this model, all variables with the index h are related to the user (human), while variables
with index r are related to the robot (smart walker). The indexes L and 0 mean, respectively, the laser
sensor and absolute references.
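A numeric sketch of Equations (11)–(13), dropping the superscripts for readability (v_r and w_r stand for the robot’s absolute linear and angular speeds, v_h and w_h for the human’s):

```python
import math

def formation_kinematics(v_r, w_r, v_h, w_h, d, theta, alpha):
    """Right-hand side of Equations (11)-(13): the rate of change of the
    robot pose (x, y, alpha) expressed in the human reference frame."""
    x_dot = v_r * math.cos(alpha) + w_h * d * math.sin(theta)
    y_dot = v_r * math.sin(alpha) + w_h * d * math.cos(theta) - v_h
    alpha_dot = w_r - w_h
    return x_dot, y_dot, alpha_dot
```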
As shown in Equations (11)–(13), the absolute speeds of the human (both linear and angular)
are needed to calculate the control actions. On the other hand, as shown in the diagram in Figure 4,
the first step made by the controller after receiving the distance and angle from the laser sensor is to
convert them into the Cartesian system using the laser sensor as the reference.
The following steps are used to find the linear and angular speed the robot should perform to
keep the formation. Some of those steps can be visualized in Figure 4.

1. First, the coordinates are calculated in the Cartesian system (Equation (14)):

   (x_h^L(k), y_h^L(k))ᵀ = d(k) · (cos φ(k), sin φ(k))ᵀ (14)

   where x_h^L(k) and y_h^L(k) are the Cartesian coordinates in the laser sensor reference at instant
   k. These coordinates are calculated using the distance and angle from the laser sensor (d(k)
   and φ(k)).
2. Second, after computing the human coordinates in the laser reference, it is necessary to measure
   the human linear and angular speeds. To this end, it is necessary to calculate the robot angle
   variation, which is given by Equation (15). Figure 10 depicts the robot’s absolute angle variation.

   ∆α_r(k) = α_r(k) − α_r(k − 1) (15)

   where α_r is the robot absolute angle, and ∆α_r is its variation between two consecutive instants.
3. Following the control algorithm, this variation is used to calculate the rotation and translation
   transform matrices used to find the human’s previous location projected into the current one
   (Figure 11).

The projection shown in Figure 11 is represented in Equation (16):

[x_h^L′; y_h^L′]_{k−1|k} = [cos(∆α_r(k)), sin(∆α_r(k)); −sin(∆α_r(k)), cos(∆α_r(k))] · [x_h^L; y_h^L]_{k−1} − [v_r · ∆k; 0] (16)

where:

• x_h^L and y_h^L are the human position in the laser reference at instant k − 1.
• x_h^L′ and y_h^L′ are the previous position projected into the current instant k.
• v_r is the robot linear speed.
• ∆k is the sample time between two consecutive instants (in the figure, the time needed for the
walker to move from Y_L, which corresponds to instant k − 1, to Y_L′, which corresponds to instant
k, the current instant).
• The subscript k − 1|k means the positions at instant k − 1 projected into instant k.
Sensors 2016, 16, 1116 14 of 26

Figure 10. Variation of the robot in the absolute reference and calculation of the angular speed.

Figure 11. Projection of the previous position in the current time.

This information makes it possible to calculate the human speed using a single time reference, namely instant $k$.

4. Once the human's previous location projected onto the current robot reference and the current user location are available, the human linear and angular speeds can be computed. The human speed in this case is the same in the robot reference and in the absolute reference, because it is a variation, and the robot displacement in the absolute reference was already taken into account in the last term of Equation (16).

Figure 12 shows the user speed calculation based on the robot movement, assuming the human did not move.

Figure 12. Human speed vector calculation.

Therefore, it is possible to calculate the absolute human linear speed along both axes (x and y), as well as the modulus of the speed, as shown in Equations (17)–(19), respectively:

$$v_{hx}(k) = \frac{x_{hL}(k) - x_{hL}(k-1|k)}{\Delta k} \quad (17)$$

$$v_{hy}(k) = \frac{y_{hL}(k) - y_{hL}(k-1|k)}{\Delta k} \quad (18)$$

$$v_h(k) = \sqrt{v_{hx}^2(k) + v_{hy}^2(k)} \quad (19)$$

This velocity value is independent of the reference system that is being used.

5. The next step consists of computing the human orientation in the robot reference, as shown in Figure 13.

Figure 13. Calculation of the angle of the human in the laser reference.

This information is necessary to find the human angular speed. The orientation is exactly the angle of the human speed vector, i.e., the angle between its two components, given by Equation (20):

$$\beta(k) = \arctan\left(\frac{y_{hL}(k) - y_{hL}(k-1|k)}{x_{hL}(k) - x_{hL}(k-1|k)}\right) \quad (20)$$

where $\beta$ is the human speed vector angle.

6. Using the variation of the angle $\beta$, it is possible to find the human angular speed in the robot reference. The robot angular speed $\omega_r$ is provided by the robot encoders. Figure 14 shows how the angle variation affects the user angular speed.

$$\omega_h(k) = \frac{\beta(k) - \beta(k-1)}{\Delta k} + \omega_r(k) \quad (21)$$

where $\omega_h$ and $\omega_r$ are the absolute angular speeds of the human and the robot.
7. From the human orientation in the robot's reference, it is possible to find the robot's orientation in the human reference through Equation (22). The angle between the laser reference and the robot reference is $\frac{\pi}{2}$ (assuming counterclockwise positive), as shown in Figure 15.

$$\alpha(k) = -\beta(k) + \frac{\pi}{2} \quad (22)$$
8. Continuing toward the control actions, the next step is to find the displacement vector $T$, which converts the robot's reference into the user's reference:

$$T(k) = \begin{pmatrix} \cos(\frac{\pi}{2}) & \sin(\frac{\pi}{2}) \\ -\sin(\frac{\pi}{2}) & \cos(\frac{\pi}{2}) \end{pmatrix} \cdot \begin{pmatrix} x_{hL}(k) \\ y_{hL}(k) \end{pmatrix} \quad (23)$$

9. Finally, it is possible to compute the robot position in the human reference, as shown in Equation (24) and represented in Figure 16:

$$\begin{pmatrix} x_{rh}(k) \\ y_{rh}(k) \end{pmatrix} = \begin{pmatrix} \cos(-\alpha_r(k) + \frac{\pi}{2}) & \sin(-\alpha_r(k) + \frac{\pi}{2}) \\ -\sin(-\alpha_r(k) + \frac{\pi}{2}) & \cos(-\alpha_r(k) + \frac{\pi}{2}) \end{pmatrix} \cdot \left( 0 - T(k) \right) \quad (24)$$

where $x_{rh}$ and $y_{rh}$ are the robot position in the human reference.
10. To define the control laws, it is important to first define the vector $h$, which contains the robot position in the human reference, shown in Equation (25). Figure 16 details the desired vector $h_d$ and the current vector $h$.

$$h = \begin{bmatrix} x_{rh} \\ y_{rh} \end{bmatrix} \quad (25)$$

11. It is also essential to determine the desired values, given by the vector $h_d$, represented in Equation (26):

$$h_d = \begin{bmatrix} x_{rh}|_d \\ y_{rh}|_d \end{bmatrix} \quad (26)$$

where the subscript $d$ indicates the desired values, i.e., the set-points of the robot position vector and each of its components.
12. The error vector is given by Equation (27):

$$\tilde{h} = h - h_d \quad (27)$$
13. By using the inverse kinematic model, the speed reference vector can be calculated as in Equation (28):

$$\dot{h}_{ref} = -K\tilde{h} - \dot{h}_r^h \quad (28)$$

where $K$ is a control gain and $\dot{h}_r^h$ refers to the human contribution to the movement of the whole system, given by:

$$\dot{h}_r^h = \begin{bmatrix} \omega_h \cdot d \cdot \sin\theta \\ -\omega_h \cdot d \cdot \cos\theta - v_h \end{bmatrix} \quad (29)$$

where $\theta$ refers to the human angle in the absolute reference.
14. Finally, the control laws are computed, as described in Equations (30) and (31):

$$v_c = |\dot{h}_{ref}| \cos\tilde{\alpha} \quad (30)$$

$$\omega_c = k_\omega \tilde{\alpha} + \dot{\alpha}_{ref} + \omega_h \quad (31)$$

where $v_c$ and $\omega_c$ are the controller outputs, i.e., the reference speeds the controller sends to the robot.

The kinematic controller is given by Equations (30) and (31). Even though the dynamic model can be affected by the user's weight, the low speeds typical of walker operation make the dynamic effects negligible. Therefore, a purely kinematic controller is sufficient to keep the user distance and angle, and the changes in the dynamic model due to the weight of the user and the structure can be ignored. Besides, part of the weight is supported not by the robot, but by the walker frame and the user's legs.
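As a summary of Steps 1–14, one control cycle can be sketched in code. The following Python fragment is an illustrative, hypothetical implementation, not the authors' code: names such as `d`, `phi`, `K` and `k_w` mirror the paper's symbols, while the choice of the human angle `theta` and of the orientation error `alpha_tilde` are simplifying assumptions made only for this sketch.

```python
import numpy as np

def rot(a):
    """Rotation matrix of the form used in Equations (16), (23) and (24)."""
    return np.array([[np.cos(a),  np.sin(a)],
                     [-np.sin(a), np.cos(a)]])

def control_step(d, phi, alpha_r, alpha_r_prev, v_r, w_r,
                 p_prev, beta_prev, dt, h_d, K, k_w, dalpha_ref=0.0):
    # Step 1: polar laser reading -> Cartesian, laser frame (Eq. 14)
    p = d * np.array([np.cos(phi), np.sin(phi)])
    # Step 2: robot heading variation between samples (Eq. 15)
    dalpha = alpha_r - alpha_r_prev
    # Step 3: project the previous human position into the current frame (Eq. 16)
    p_proj = rot(dalpha) @ p_prev - np.array([v_r * dt, 0.0])
    # Step 4: human linear speed (Eqs. 17-19)
    v_h = np.linalg.norm((p - p_proj) / dt)
    # Step 5: human speed-vector angle (Eq. 20)
    beta = np.arctan2(p[1] - p_proj[1], p[0] - p_proj[0])
    # Step 6: human angular speed (Eq. 21)
    w_h = (beta - beta_prev) / dt + w_r
    # Steps 7-9: robot position in the human reference (Eqs. 22-24)
    alpha = -beta + np.pi / 2
    T = rot(np.pi / 2) @ p
    h = rot(-alpha_r + np.pi / 2) @ (-T)
    # Steps 10-13: error vector and speed reference (Eqs. 25-28)
    h_err = h - h_d
    theta = beta  # assumption: approximate the absolute human angle by beta
    hdot_h = np.array([w_h * d * np.sin(theta),
                       -w_h * d * np.cos(theta) - v_h])
    hdot_ref = -K @ h_err - hdot_h
    # Step 14: control actions (Eqs. 30-31); zero desired orientation assumed
    alpha_tilde = alpha
    v_c = np.linalg.norm(hdot_ref) * np.cos(alpha_tilde)
    w_c = k_w * alpha_tilde + dalpha_ref + w_h
    return v_c, w_c, p, beta
```

In a real loop, `control_step` would be called once per laser scan, feeding `p` and `beta` back as `p_prev` and `beta_prev` for the next sample.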

Figure 14. User angular speed by β-angle variation.

Figure 15. Conversion from the laser reference to the robot reference. Note that the $\beta$ angle goes from the $X_H$ axis to the $Y_L$ axis; since it goes clockwise, it is negative.

Figure 16. Robot position projected into the human reference.

2.2.3. Control Stability Proof


The stability proof of the controller is made by analyzing the whole system with the direct Lyapunov method. Taking the error vector as the state vector, consider the positive definite function of Equation (32):

$$V = \frac{1}{2}\tilde{h}^T\tilde{h} \quad (32)$$
The time derivative of Equation (32) is shown in Equation (33):

$$\dot{V} = \tilde{h}^T\dot{\tilde{h}} \quad (33)$$

By substituting $\dot{\tilde{h}} = \dot{h}_{ref} = -K\tilde{h}$, this derivative is found to be negative definite, as shown in Equation (34), thus concluding asymptotic stability at the zero-error equilibrium point:

$$\dot{V} = -\tilde{h}^T K\tilde{h} < 0, \quad \forall \tilde{h} \neq 0, \; K > 0 \quad (34)$$

Similarly, we can prove that the robot's orientation error converges to zero by taking the candidate function of Equation (35):

$$V = \frac{1}{2}\tilde{\alpha}^2 > 0 \quad (35)$$

Deriving Equation (35), we find:

$$\dot{V} = \tilde{\alpha}\dot{\tilde{\alpha}} \quad (36)$$

To prove that Equation (36) is negative definite, it is necessary to close the loop and isolate the term $\dot{\tilde{\alpha}}$. Considering that the robot executes the commanded speed, i.e., that $\omega_c$ of Equation (31) equals $\omega_r'$ of Equation (13), substituting this into Equation (13) yields:

$$\dot{\alpha} = \omega_r' - \omega_h' \quad (37)$$
$$\dot{\alpha} = \omega_c - \omega_h \quad (38)$$
$$\dot{\alpha} = k_\omega\tilde{\alpha} + \dot{\alpha}_{ref} + \omega_h - \omega_h \quad (39)$$
$$0 = k_\omega\tilde{\alpha} + \dot{\alpha}_{ref} - \dot{\alpha} \quad (40)$$
$$0 = k_\omega\tilde{\alpha} + \underbrace{\dot{\alpha}_{ref} - \dot{\alpha}}_{\dot{\tilde{\alpha}}} \quad (41)$$
$$\dot{\tilde{\alpha}} = -k_\omega\tilde{\alpha} \quad (42)$$
Sensors 2016, 16, 1116 19 of 26

Now, by replacing Equation (42) in Equation (36), Equation (43) is obtained, thus proving that $\dot{V}$ is negative definite:

$$\dot{V} = -k_\omega\tilde{\alpha}^2 < 0 \quad (43)$$
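The closed-loop behavior established by the proof can be checked numerically: under the error dynamics of Equations (28) and (42), both errors decay exponentially to zero. The Euler discretization, step size and gains below are illustrative choices, not values from the paper.

```python
import numpy as np

# Closed-loop error dynamics from the stability proof:
#   h_tilde_dot = -K h_tilde      (Eq. 28 with perfect tracking)
#   a_tilde_dot = -k_w a_tilde    (Eq. 42)
K, k_w, dt = np.diag([1.0, 1.5]), 2.0, 0.01
h_tilde = np.array([0.4, -0.3])   # initial position error [m] (assumed)
a_tilde = 0.5                     # initial orientation error [rad] (assumed)

for _ in range(1000):             # 10 s of simulated time, Euler steps
    h_tilde = h_tilde + dt * (-K @ h_tilde)
    a_tilde = a_tilde + dt * (-k_w * a_tilde)

# Both errors have decayed by several orders of magnitude
print(np.linalg.norm(h_tilde), abs(a_tilde))
```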

2.2.4. Safety Rules


The safety rules are a part of the algorithm separate from the controller, which analyzes whether the controller's output is safe. Normally, the controller's outputs are executed directly; however, some special situations may require a safety supervisor. One example is when the laser sensor detects only one leg, which may indicate that the user is losing his or her balance and may fall. Table 3 shows the safety rules used in this work and when each is applicable. The safety supervisor can thus override the controller's output in order to guarantee the user's safety.

Table 3. Safety rules for the smart walker.

1. Situation: no legs/only one leg detected several times sequentially. Action: increase the counter. Notes: a counter increases each time both legs are not detected; if it reaches the limit number (defined inside the code), the walker stops immediately. If the legs are detected again before the counter reaches the limit, the counter is zeroed.
2. Situation: counter exceeded the limit. Action: brake the robot. Notes: when the counter reaches the maximum limit, the robot is braked and its movement stops.
3. Situation: high speed. Action: limit the speed. Notes: this rule applies to both linear and angular speeds.
4. Situation: backwards movement. Action: brake the robot. Notes: braking here means the speed is set to zero.
5. Situation: obstacle detected by the ultrasound sensors. Action: brake the robot. Notes: the robot remains stopped until the obstacle is removed from the path.
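The rules of Table 3 amount to a small supervisory filter applied to the controller's output. The sketch below is a hypothetical rendering: the thresholds `COUNTER_LIMIT`, `V_MAX`, `W_MAX` and the 0.5 m obstacle range stand in for the limits defined inside the walker's code and are assumed values.

```python
COUNTER_LIMIT = 10       # consecutive samples without both legs (assumed value)
V_MAX, W_MAX = 0.6, 0.8  # linear/angular speed limits (assumed values)
OBSTACLE_RANGE = 0.5     # meters; ultrasound braking distance

def supervise(v_c, w_c, legs_detected, obstacle_dist, counter):
    """Return safe speeds (v, w) and the updated leg-loss counter (Table 3)."""
    # Rule 1: count consecutive samples in which both legs are not seen;
    # the counter is zeroed as soon as both legs reappear
    counter = 0 if legs_detected == 2 else counter + 1
    # Rule 2: brake the robot if the counter reached the limit
    if counter >= COUNTER_LIMIT:
        return 0.0, 0.0, counter
    # Rule 5: brake while an obstacle is within the ultrasound range
    if obstacle_dist is not None and obstacle_dist < OBSTACLE_RANGE:
        return 0.0, 0.0, counter
    # Rule 4: no backwards movement; braking means zero speed
    if v_c < 0.0:
        return 0.0, 0.0, counter
    # Rule 3: limit both linear and angular speeds
    v = min(v_c, V_MAX)
    w = max(-W_MAX, min(w_c, W_MAX))
    return v, w, counter
```

Called once per control cycle, this filter either passes the controller's output through, saturates it, or replaces it with a full stop.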

3. Obstacle Avoidance
The robot is equipped with ultrasound sensors that can be used to avoid obstacles. When obstacle avoidance is enabled, the robot stops if there is an object or person within 50 cm in front of it. Figure 17 shows how the ultrasound sensors act when an obstacle is found in front of the robot.

Figure 17. Ultrasound sensors are used to avoid collision. In (a), it stops moving to avoid the obstacle;
in (b), it keeps moving, since there is no obstacle.

4. Experiments
Experiments were performed to validate the controller and the safety rules in real situations. This is a proof-of-concept application; the goal is to show that the system works, and it was not tested with people with disabilities.

4.1. Straight Line


In the first experiment set, the user was asked to walk a 10 m straight path three times with the help of the smart walker, going from the start point to the finish point and then back to the start point. In the first instance, there was no obstacle along the path. In the second instance, a wooden board was placed as an obstacle that should be detected by the ultrasound sensors, braking the walker according to Safety Rule #5. The path and movements are described in Figure 18. As can be seen in the results (Figure 19), the errors in distance and angle converge towards zero, and since the path is a straight line, there is almost no angular speed.

Figure 18. Straight path where the user guided the robot. The human walked through that path, and
the robot helped him to perform such an action. The human guides the robot, not the opposite.

The error tends to zero but never settles there, due to the movement of the user. During walking, the human-robot distance changes not only because of the robot movement, but also because of the human movement. The controller therefore continuously drives the error towards zero, but each user movement changes the distance again, making the error different from zero. Since the speed is limited, the robot cannot react quickly enough to keep the error at zero at all times; the error stays bounded, however, which shows that the controller acts while the user walks.
In the straight line experiment, the angle error was −0.04 rad, while the distance error was 0.0009 m.
In order to validate the obstacle detection algorithm, which is Safety Rule #5, the ultrasound
sensors are monitored, and the walker stops immediately if the robot finds an obstacle within 50 cm
from the robot. In the beginning of this experiment, there is an obstacle, and when the robot achieves
50 cm from the obstacle, it stops. Then, the obstacle is removed, and the walker can move forward again.

Figure 19. Results for the straight path. (a) Graphical data about the experiment; (b) Graphical path of
the experiment.

Figure 20 shows some photos of the experiment and the diagram of the path with the obstacles.
Every time the obstacle was put in front of the smart walker, it stopped, and once it was removed,
the device started moving again. The results of this experiment are shown in Figure 21.
Similarly to the previous case, the errors stay bounded. When the walker approaches the obstacle, it stops in order not to collide with it; only when the obstacle is removed does it allow the user to walk again. In this experiment, the angle error was −0.02 rad, while the distance error was 0.05 m.

Figure 20. Diagram of the path with obstacles.



Figure 21. Results for the straight path with obstacles. (a) Graphical data about the experiment;
(b) Graphical path of the experiment, showing the obstacles and the end of the path.

4.2. Lemniscate Path


The second experiment was conducted with the walker following a Lemniscate curve path, which is typically used to validate robot performance. In the first instance there was no obstacle, while in the second instance the walker had to brake before colliding with one. The user started at Point "A" and walked along the whole curve, returning to the same point, as shown in Figure 22.

Figure 22. Lemniscate curve followed in the second set of experiments (with photos).

Mathematically, this Lemniscate curve is represented by Equations (44) and (45):

$$x(t) = a \cdot \frac{\cos(t)}{\sin^2(t) + 1} \quad (44)$$

$$y(t) = b \cdot \frac{\sin(t) \cdot \cos(t)}{\sin^2(t) + 1} \quad (45)$$

where $a$ and $b$ are constants defining the length along each axis.
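For reference, the path of Equations (44) and (45) can be generated numerically; the snippet below samples one full period of the curve, with the axis lengths `a` and `b` chosen only for illustration.

```python
import numpy as np

def lemniscate(a, b, n=200):
    """Sample the Lemniscate of Equations (44)-(45) over one period."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    denom = np.sin(t) ** 2 + 1.0
    x = a * np.cos(t) / denom
    y = b * np.sin(t) * np.cos(t) / denom
    return x, y

# The resulting figure-eight is closed and centered at the origin
x, y = lemniscate(a=2.0, b=2.0)
```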
The Lemniscate curve followed by the robot can be viewed in Figure 23. The error varies through
time, but always tends to zero.

Figure 23. Results for the Lemniscate curve. (a) Graphical data about the experiment; (b) Graphical
path of the experiment.

The shape generated by the robot odometry in Figure 23 corresponds to the Lemniscate curve. It does not fit the reference curve exactly for two reasons: (1) the human guided the robot and therefore did not walk exactly on the Lemniscate curve; (2) there are odometry errors due to wheel slip, in addition to other errors that naturally accumulate over time when using odometry. The control error tends to zero: the mean distance error is 0.02 m and the mean angle error is −0.08 rad. It is important to emphasize that the error varies with the human movement, i.e., it keeps changing while the human moves, but the controller drives it towards zero over time.
After collecting the data from the experiments, the results, discussion and conclusions are presented in Sections 5 and 6.

5. Discussion
The results of the experiments show that the controller maintained stability and helped the user
in different paths, including complex curves, such as the Lemniscate one. In addition, the safety rules
were functional when necessary.
The graphs in Figure 23 show that in abrupt changes, there were slight increases of the errors,
but they quickly went towards zero, due to the effect of the controller. Additionally, in the cases when
the safety rules were needed, the robot responded in a suitable way, detecting the obstacle and braking
the smart walker.
Still, regarding the safety rules, the smart walker was stopped whenever only one leg was detected. This is an important issue addressed here, since the detection of only one leg may indicate that the user is falling.

The results of this research show that the controller kept the set-point distance and angle, and the safety rules were activated as expected.
This shows that the whole smart walker structure, i.e., the controller, sensors and actuators together with the mechanical frame, can be a useful tool for mobility and rehabilitation, and may also be used in clinics for such purposes.

6. Conclusions
This work presented a new controller and a safety supervisor to guide a smart walker with
no sensors attached to the user. Additionally, it was shown how these concepts were applied in a
modified walker frame attached to a mobile robot, which has a laser sensor to detect the user’s legs.
In addition, ultrasound sensors on-board this robot allowed braking the smart walker to avoid the
collision with obstacles.
The algorithm used to detect the user’s legs was suitable for this application, and the whole
system integration was tested with two different kinds of experiments, involving obstacles and free
paths (without obstacles).
The results were satisfactory, and this controller, together with the safety rules, is a potential tool for helping people both with mobility and with physical rehabilitation, since the controller was shown to provide safe walking. The safety rules behaved as expected, offering the user the safe experience needed in both mobility and rehabilitation.

Acknowledgments: The authors acknowledge the financial support from CNPq (#458651/2013-3) and technical
support from the Federal University of Espirito Santo, the National University of San Juan and CONICET.
Author Contributions: Carlos Valadao envisioned and developed all of the algorithms for leg detection and the smart walker's controller, in addition to writing this manuscript. All of the work was supervised and technically advised by Teodiano Bastos-Filho, Eliete Caldeira, Anselmo Frizera-Neto and Ricardo Carelli, who also contributed to the editing of this manuscript. Teodiano Freire Bastos-Filho and Ricardo Carelli provided the general direction of the research.
Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

LD Leg detection
LMS Laser measurement systems
LRF Laser range finder
PID Proportional-integral-derivative
ROI Region of interest


© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
