
Conference Paper · October 2015 · DOI: 10.1109/LARS-SBR.2015.23

Proposal for Natural Human-Robot Interaction
through the Use of Robotic Arms
Giovanni Leal-Cárdenas, Germán Medina-Niño, Fernando De la Rosa
Systems and Computing Engineering Department
Universidad de los Andes
Bogotá, Colombia
{eg.leal2132, gd.medina2889, fde}@uniandes.edu.co

Abstract—The study of the ways in which humans interact with robots is a multidisciplinary field with contributions from electronics, robotics, human-computer interaction, ergonomics and even the social sciences. The robotics industry is mainly focused on the development of conventional technologies that improve efficiency and reduce the amount of repetitive work. To achieve this, enterprises must train their technical staff to accompany the robot when it performs tasks, during configuration and during the technical programming required for proper operation. Taking the latter into account, the development of unconventional interfaces for interaction between humans and robots is critical, because such interfaces allow natural control over a robot and can generate wide acceptance and massive use across a wide range of possible tasks. This paper presents the challenges in the design, implementation and testing of a hand-based interface to control two robotic arms, and the benefits of this technology, which sits between robotics and human interaction.

Keywords—Robotic arm, Human-Robot Interaction, gesture-based commands, hand-based interface.

I. INTRODUCTION

Currently, the use, implementation and programming of robots in small-business and domestic environments have been associated with an unfavorable cost-efficiency ratio. This factor is regarded as the main obstacle to the popularization and promotion of the advantages of robots among the general public. Machines that can be controlled using gestures, or that enable a natural interaction in order to perform and repeat tasks of daily life without the intervention of a professional or the need for technical knowledge, can be the differentiating factor that allows the massive entry of robots into small and medium businesses as well as domestic environments. Other proposed research directions include the use of gestures as guidelines for autonomous devices; in that case, however, the perception of the environment and the planning are delegated to a human [1].

This paper presents the process of design, implementation and testing, and the benefits, of a hand-based interface that enables controlling two robotic arms. All test iterations were designed with the aim of improving the precision of the system and reducing the perception of discomfort that any user may manifest.

II. PROBLEM OF INTEREST

The robotics industry is preparing itself for a new business opportunity made possible by new affordable technologies: the domestic market. However, the fact that electronic agents cannot be used universally prevents the industry from promoting and selling them to the average consumer. Fortunately, the challenge of producing a robot that adapts to an untested environment is no longer an issue; for example, the Roomba robot vacuum cleaner manufactured by the American company iRobot [2] has often been compared to a pet due to the low level of maintenance it requires. This example also shows the unawareness of users regarding the pieces of equipment they acquire, making it the most remarkable success story in the industry by sales and the sole example in this market.

A proposition has been made to simply offer the public nonspecific-use robots at marginal prices in order to see what consumers would do with them. At first glance, this looks like a move from a desperate industry trying to gain market share without bothering to solve a real problem; in other words, a capitalist move for a capitalist society. However, it is a deeper, well-thought-out step. After seeing the evolution of the MakerBot Replicator 3D printer, which was initially conceived without a target market and, as shown in an architectural publication [3], was created for the sole purpose of designing structures but ended up having unintended uses, it is evident that the eye of the consumer is the one with the real potential to exploit the capabilities of such a product. The latter can be seen as an opportunity that confronts companies with one of the biggest challenges of the industry, Human-Robot Interaction (HRI). If the average consumer has to find a way to use an advanced piece of electronics in his/her daily life, the interaction between them must be seamless to facilitate its use.

The implementation of new interfaces for interaction between humans and robots poses several unique challenges to the systems being developed. The problem addressed in this paper is to develop an unconventional interface to control two robotic arms so that the user can perform tasks near the arms' workspace or, for safety reasons, from a remote workspace.
The developed system must adhere to certain restrictions besides working properly and differentiating itself. The first constraint is economic in nature and is one of the necessary requirements for a project that differentiates itself from those currently on the market: the budget of the whole device should not exceed US$500. The OWI robotic arm provides the second limitation. The budgeted hardware moves using DC motors, which are operated by pulses of current in one direction for each degree of freedom. These motors, unlike servo or stepper motors, have no way of knowing the arm's position. For this reason, the system is command-based.

III. RELATED WORK

Regarding the natural interface, the proposed project focuses on the ease of configuration and the intuitive implementation of such systems (without the need for writing code). The Danish company Universal Robots [4] and Rethink Robotics [5] have proposed approaches based on this premise. These two companies focus their efforts on the ability of their robotic arms to maintain a state of neutral gravity. The operator can guide the system and point out which locations should be reached sequentially, without learning about its inner workings, which is an interesting contribution to assembly-line technology. The mentioned technologies describe a promising future and provide improvements for the use of robotic arms in medium-sized industries; nevertheless, they consider neither the cost-benefit limitation nor the implementation of an implicitly natural interaction.

The Leap Motion device has been used in several other HRI projects. For instance, Pomodoro is a platform that integrates the Leap Motion to control a mobile robot, with the goal of rehabilitating children with reduced hand mobility in a didactic manner [6]. This work exemplifies the range of applications that become possible by using a natural interface between a human and a robot. It also highlights the importance of studying the factors that improve this experience and make it viable for the general public. Another sensor that has been a disruptive technology in commercial and research fields is the Microsoft Kinect [7], where the human body is meant to be the controller. This sensor provides a high level of precision that boosts its capabilities as an input device in fields such as virtual reality, augmented reality and, clearly, HRI.

IV. PROPOSED NATURAL INTERFACE

The following section describes the materials, concepts and architecture used to build the system of two robotic arms controlled through a natural interface, with the capability of remote control via a Web protocol.

A. Components

The movement of the user's hands and fingers is captured in real time, at a frequency of 120 fps, by a single Leap Motion device [8]. A computer then processes this information. If the user generates a gesture or places a hand past a limit, a command is triggered and sent to one of the two Arduino boards that serve as controllers of the robotic arms (each Arduino is associated with a specific arm). Each Arduino is attached to an Adafruit Motor Shield [9] that sends electrical signals to an OWI robotic arm, which moves according to the user's movements (Fig. 1).

Fig. 1. (Top left) Adafruit Motor Shield, (top right) exploded view of the Leap Motion peripheral, (bottom) an assembled OWI robotic arm.

B. Hand position tracking

A software component was developed on the Processing platform, which uses the Java language, to track the user's hand position and gestures and to determine whether the tracked hand is the right or the left one. The Leap Motion interface provides the real-time position along the three conventional axes as well as the other required information mentioned above (Fig. 2). This information is analyzed to determine whether it activates a trigger that sends a command to a robotic arm, depending on the hand being used. The triggers are conditional statements that fire when a hand trespasses a limit.

Fig. 2. Real-time position visualizer provided by the Leap Motion API.
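To make the trigger idea concrete, below is a minimal plain-Java sketch of the limit checks described above. The limit values, the Trigger names and the millimetre units are assumptions made for illustration; the actual checks run inside the Processing sketch on the hand data delivered by the Leap Motion SDK at roughly 120 fps.

/** Minimal model of the triggers described above: conditional statements that fire when a
 *  tracked hand crosses a limit on one of the three axes, with (0, 0, 0) at the centre of the
 *  sensing space. The limit values are assumptions made for illustration. */
public class LimitTriggers {

    // Assumed half-widths of the neutral zone, in millimetres.
    static final float X_LIMIT = 80f, Y_LIMIT = 60f, Z_LIMIT = 80f;

    enum Trigger { NONE, X_POS, X_NEG, Y_POS, Y_NEG, Z_POS, Z_NEG }

    /** Returns the first limit the hand trespasses, or NONE while it stays in the neutral zone. */
    static Trigger check(float x, float y, float z) {
        if (x >  X_LIMIT) return Trigger.X_POS;
        if (x < -X_LIMIT) return Trigger.X_NEG;
        if (y >  Y_LIMIT) return Trigger.Y_POS;
        if (y < -Y_LIMIT) return Trigger.Y_NEG;
        if (z >  Z_LIMIT) return Trigger.Z_POS;
        if (z < -Z_LIMIT) return Trigger.Z_NEG;
        return Trigger.NONE;
    }

    public static void main(String[] args) {
        System.out.println(check(120f, 10f, 0f));  // X_POS: hand moved past the +X limit
        System.out.println(check(10f, 20f, -5f));  // NONE: hand inside the neutral zone
    }
}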
The Leap Motion input interface enables the interaction between the user and the robotic arms by capturing the motions and gestures made by the user's hands. The developed software receives the information about these motions and gestures in real time and then processes it according to the commands presented in the function shown in Fig. 3. These commands depend on the hand being used (right or left, which the Leap Motion can detect on its own) as well as on the position of the hand along the X, Y and Z axes, with (0, 0, 0) defined as the center of the space. Additionally, hand gestures made by the user are taken into account. When any of these triggers is detected, the corresponding command is sent to the Arduino board, which is responsible for sending electrical signals through the Adafruit Motor Shield to move a robotic arm according to the user's motions and/or gestures. All of these factors determine the first letter of the command (R for right or L for left) and a number corresponding to a selected ASCII character.

Fig. 3. Commands resulting from the processing of the information retrieved by the Leap Motion interface.
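As an illustration of how such a command could be assembled and routed, the sketch below builds the two-character command and hands it to the link of the corresponding arm. The CommandSink abstraction and the example action characters are assumptions; in the real system the command is written to the serial port of the matching Arduino (for example, with Processing's Serial library), and the concrete action codes are those defined in Fig. 3.

/** Illustrative sketch of how a command could be assembled and routed: the first letter
 *  selects the arm (R or L, after the hand detected by the Leap Motion), and the second
 *  character is the ASCII code of the selected action. The action codes and the
 *  CommandSink abstraction are assumptions made for illustration. */
public class CommandDispatcher {

    /** Stand-in for the serial link to one Arduino board. */
    public interface CommandSink {
        void write(String command);
    }

    private final CommandSink rightArm;
    private final CommandSink leftArm;

    public CommandDispatcher(CommandSink rightArm, CommandSink leftArm) {
        this.rightArm = rightArm;
        this.leftArm = leftArm;
    }

    /** Builds "R"/"L" plus one ASCII action character and sends it to the matching Arduino. */
    public void dispatch(boolean rightHand, char actionCode) {
        String command = (rightHand ? "R" : "L") + actionCode;
        (rightHand ? rightArm : leftArm).write(command);
    }

    public static void main(String[] args) {
        CommandDispatcher d = new CommandDispatcher(
                cmd -> System.out.println("to right-arm Arduino: " + cmd),
                cmd -> System.out.println("to left-arm Arduino:  " + cmd));
        d.dispatch(true, 'A');   // e.g. right hand crossed a limit
        d.dispatch(false, 'B');  // e.g. left-hand gesture detected
    }
}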

C. System architecture

Next, the overall system architecture and the integration of its various components, both software and hardware, are described. The two architectures developed to facilitate user interaction with the robotic arms are also explained. These architectures allow the user to interact naturally with the robotic arms, either locally or remotely.

• Local Architecture

The local architecture configuration allows the user to perform tasks while having physical access to the robotic arms. The physical components shown in Fig. 4 are needed for this configuration.

The integration of two robotic arms with only one Leap Motion interface takes place here, and the user can control the robots through the motions and gestures of his/her hands. The necessary adjustments were made so that the Leap Motion interface recognizes each hand; this means that the Leap Motion recognizes the movements of both hands and these are processed properly so that the arms move in an equivalent way. This architecture may be simplified to allow control of a single robotic arm with one of the user's hands.

Fig. 4. Hardware architecture for controlling both arms, operated locally.

• Remote Architecture

A remote communication platform for handling the robotic arms was developed, so that the user can interact with the robotic arms from any place in the world through an Internet connection. This platform consists of a client-server architecture that uses the UDP communication protocol, which allows for fast communication due to the small size of the data packets (Fig. 5).

First, the connection between the client and the server is established: the user enters the IP address of the server, and the IP address of the client is also configured on the server side. The client software is responsible for capturing the information received by the interface and building datagrams, or data packets, that are sent over the preset connection to the server software where the arms are located. The server software receives the datagrams sent by the client software, processes them, and sends the information to the Arduino Uno boards, which also process it, thus enabling the arms to follow the movements executed by the user from the client software.

Fig. 5. Hardware architecture for controlling both arms, operated remotely.
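A minimal plain-Java sketch of this client-server exchange, using the standard java.net datagram classes, is shown below. The port number, the plain-text payload and the class name are assumptions made for illustration, and the forwarding of each received command to the Arduino boards over the serial link is only indicated by a comment.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Minimal sketch of the UDP exchange: the client sends short command datagrams,
 *  the server receives them and would forward each command to the Arduino boards. */
public class UdpCommandLink {

    // Client side: send one command (e.g. "R1") to the server that hosts the arms.
    static void sendCommand(String serverIp, int port, String command) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] data = command.getBytes(StandardCharsets.US_ASCII);
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(serverIp), port));
        }
    }

    // Server side: receive commands and hand them to the Arduino link.
    static void serve(int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(port)) {
            byte[] buffer = new byte[16];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);   // blocks until a datagram arrives
                String command = new String(packet.getData(), 0, packet.getLength(),
                        StandardCharsets.US_ASCII);
                System.out.println("Received command: " + command);
                // In the real system the command is written here to the serial port.
            }
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0 && args[0].equals("serve")) serve(9750);
        else sendCommand("127.0.0.1", 9750, "R1");
    }
}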

Under this architecture, feedback from two cameras was necessary to provide the user with sufficient visual information. One camera is situated on top of the workspace, giving a bird's-eye view, and the second camera is placed parallel to the arms, compensating for the loss of depth perception. Together, these two cameras provide sufficient feedback for the user to maintain a correct orientation, as was tested in the third iteration described in the following section.


V. METHODOLOGY

The project was developed through incremental iterations, where solutions to the issues found in each iteration were provided by adding new functionality in the next one. The first testing iteration uses only one arm, controlled in mirror and natural configurations. In the mirror configuration the user and the robotic arm are located one in front of the other, which created difficulties for the user related to location and guidance with respect to the arm (Fig. 6, top). For this reason, a natural configuration is proposed, where the robotic arm and the user are side by side and the robotic arm follows the motion of the user's hand (either the right one or the left one) (Fig. 6, bottom). Ten users were selected and each one executed two tasks. The first task entailed picking up a pill and placing it in a cup; this was designed to test the level of precision of the gripper of the robotic arm. The second task involved a large continuous displacement of the arm, in order to test how tedious it was to maintain a certain hand position over a prolonged period of time.

Fig. 6. Arrangement of elements for the first iteration. (Top) Mirror configuration. (Bottom) Natural configuration.

In the second iteration, tasks with two arms are performed in both natural and mirror configurations, where the purpose is to evaluate the use of both arms simultaneously. Each of the user's hands controls a different robotic arm, so both robotic arms are able to move simultaneously (Fig. 7). The proposed task involved relocating different objects of laboratory equipment and interacting with them. The task was posed as a sequential process, aiming for a better interaction so that users felt that they were doing something interesting and productive.

Fig. 7. Arrangement of elements for the second iteration. (Top) Mirror configuration. (Bottom) Natural configuration.

In the third and final iteration, the task to be performed is the same as in the second iteration, but it was conceived to analyze the behavioral aspects of the tele-operated control of the arms (Fig. 8). Tests were designed so that the user views the arms through a camera located frontally, in front of them, but the performance was significantly poorer compared to that obtained with the natural configuration. One change to the setup of the robots is a camera placed over the system to give a bird's-eye view and another one to portray a second angle. Another change is a setup in which the user is able to see through the two aforementioned cameras, as shown in Fig. 10.

Subsequently, the time that a user needed to complete the proposed task is consolidated and classified for further analysis. Finally, it is noted that technological limitations are an important factor when acquiring practical results, because the movement of the robotic arms by DC motors hinders some of the interaction with users. This behavior could be more natural if the robotic arms were built with servomotors, which provide greater feedback from the movement.

VI. RESULTS

In the execution of the first task of the first iteration, users showed great interest in the developed system and made an effort to perform the exercises in the best way possible. Note that this is directly related to the user's perception of the complexity and usefulness of the task.
TABLE I. RESULTS FROM USERS IN THE FIRST ITERATION (NATURAL CONFIGURATION)

Fig. 8. Experimental workspace: arrangement of elements for the third iteration, with a Web camera at the roof (bird's-eye view) and a cell phone (camera 2) at one side of the experimental workspace (natural configuration).

As shown in Fig. 9, the execution times dropped dramatically (by 40-65%) as users performed more tasks and became more familiar with the system. This learning curve suggests that the processes become natural and easy to internalize after trial and error. The time differences between the natural and mirror configurations always favored the natural one, because the perception of depth played an important role in the performance of the tasks. Nevertheless, the second iteration was conceived considering both configurations, and it confirms the finding from the results of the first iteration.

Fig. 9. Time graph of Task 1 (natural configuration) in the first iteration, repeated three times (t1-t3) by each of the ten users, together with the average, and displaying a clear learning curve.

The second task of the first iteration was tedious and repetitive for users. They complained about the lack of a real application for the procedures they were asked to execute. However, it was possible to identify the problem that was expected to be found given the posed constraints: unable to replicate the movements of the arm, due to technical limitations, users have trouble handling tasks that require movement along different spatial axes simultaneously. This is noted and will be considered in future work, due to the impact that it has on an application-level system in a real environment.

The results of this iteration were assertive, and the events that were foreseen in the planning stages of the tests were identified. User perception was generally positive (Table I), and the users' recommendations concern the hardware, features and applications of the system, not the interface. Possible errors associated with the tools used, such as timing and measurement errors, are negligible considering the magnitude of the average times.

In the second iteration the process was more focused and motivated, because users were facing a task that had an actual application; the users felt they were doing something productive.

The same test of the second iteration was performed in the third one, where users complete a set of tasks handling laboratory instruments, but this time the tasks are performed remotely. In this case, the setup includes two workspaces: the user workspace, where users perform the tasks remotely (Fig. 10), and the experimental workspace, where the robotic arms and the elements to be manipulated are located (Fig. 8).

Fig. 10. User workspace: a user going through the process in the remote configuration with feedback from two cameras.

The test results of iteration 3, where the user is in a remote workspace with the feedback of only one camera, compared to the results of iteration 2, where the user shares the workspace with the robotic arms, show that it was difficult for users to complete the task because they lacked the depth information needed to manipulate the robots and the elements in the experimental workspace (Table II). Regarding communication quality, there were minimal delays between the user interaction, the Leap Motion interface and the robotic arms thanks to the implementation of the UDP protocol; however, there were problems with the transmission of the video because of the use of commercial video-streaming providers. By the end of the tasks, the users had learned to work around and compensate for the video delays, but they needed the two perspectives from the two cameras.
In order to achieve the proposed goals, based on the tests performed on the system, two cameras with different viewing angles were integrated into the tele-operated system. With the second camera located at the same level as the robotic arms, tests were executed and completed without complications and without any instructions from external sources. These conditions let users reach the task targets in a moderate time and by themselves, thus achieving the proposed goal.

TABLE II. ELAPSED TIME WHILE TESTING FOR ITERATION 2, ITERATION 3 WITH ONE CAMERA*, AND ITERATION 3 WITH TWO CAMERAS**

          Iteration 2 (min:sec)   Iteration 3* (min:sec)   Iteration 3** (min:sec)
User 1          08:03                   17:29                    11:42
User 2          09:00                   10:32                    07:03
User 3          06:32                   14:00                    09:23
User 4          11:44                   14:57                    10:01
User 5          05:25                   14:29                    07:05
User 6          06:56                   13:27                    06:35
User 7          03:39                   13:58                    09:46
User 8          04:02                   21:46                    10:39
User 9          06:38                   14:27                    07:04
User 10         06:53                   18:06                    08:52
Average         06:53                   15:19                    08:49
In the improved workspace of the robotic arms, the system has one camera at the roof and another camera at arm level, for an easier and better view of the system area. Through this distribution, the system provides the user with a wide field of vision. Moreover, the room from which the user controls the arms through the Leap Motion is equipped with a projector that displays the working area of the arms in real time, so that the user receives feedback on the current state of the system. At the same time, the second camera shows the user a view of the system from the same height as the arms, thus providing the user with greater detail of the arms' movements. This configuration allowed validating the progress made by the implementation of a tele-operated control and signaled the importance of multiple views for humans to perceive depth. Therefore, the latter was defined as an important requirement in HRI.

VII. CONCLUSIONS AND FUTURE WORK

A natural human-robot interaction system has been implemented and tested in an experimental environment with good performance. A user can interact with and control two robotic arms by using his/her hands in the same workspace, or he/she can tele-operate the robotic arms remotely. Using new technologies to control prototypes can be a cyclic trial-and-error process, but the results with untrained users are valuable and make the whole project worthy of follow-up work and continuous monitoring.

The different iterations, accompanied by user feedback, made clear that this kind of technology, as currently known, takes place only in local industries and small businesses. However, the developed prototype shows how small the investment needs to be for a technology that, within a couple of years, could offer the ability to train people without the assistance of a technical professional, and that can contribute to manual labor in common environments such as production chains. The most important factor for proper control, precision and good user performance was the use of the users' own hands to control, and to interact with, the robotic arms and/or another robotic machine. The results from this work suggest that natural forms of interaction will yield good results in the Human-Robot Interaction field.

The authors are convinced that these technologies, such as the Leap Motion device, provide developers and the robot industry with a disruptive way to approach the HRI field due to their low investment value and good level of precision.

Regarding future work related to the project, its extension and evolution can be accomplished by implementing common robotics technologies. The implementation of servomotors is certain to generate a positive impact on the system's interaction with the user, because this technology provides greater control and more degrees of freedom of movement, thus allowing replication of the movements of the user's hands and wrists. The developed system provides facilities for scaling to new features and technologies, because a non-layered software architecture was implemented that avoids dependencies on hardware and software, therefore allowing better fluency in future implementations of the system.

REFERENCES

[1] J. Burke, R. Murphy, E. Rogers, V. Lumelsky, and J. Scholtz, "Final report for the DARPA/NSF interdisciplinary study on human-robot interaction," IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), vol. 34, no. 2, pp. 103-112, May 2004.
[2] Nytimes.com (2014). Review: The Roomba 880 From iRobot. [Online]. Available: http://www.nytimes.com/2014/01/23/technology/personaltech/review-the-roomba-880-from-irobot.html?_r=0. Accessed on: June 1, 2015.
[3] M. Stokes, 3D Printing for Architects with MakerBot. Birmingham, UK: Packt Publishing, 2013.
[4] Universal Robots. Collaborative Industrial Robotic Arms | Universal Robots. [Online]. Available: http://www.universal-robots.com/en/. Accessed on: June 1, 2015.
[5] Rethink Robotics. Rethink Robotics | Advanced Robotics Technology | Collaborative Robots. [Online]. Available: http://www.rethinkrobotics.com/. Accessed on: June 1, 2015.
[6] S. F. dos Reis Alves, A. J. Uribe-Quevedo, I. Nunes da Silva, and H. Ferasoli Filho, "Pomodoro, a mobile robot platform for hand motion exercising," in 2014 5th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2014.
[7] Microsoft, "The Kinect effect: How the world is using Kinect," 2013. [Online]. Available: http://www.xbox.com/enGB/Kinect/Kinect-Effect. Accessed on: June 1, 2015.
[8] Leap Motion | Mac & PC Motion Controller for Games, Design, & More. [Online]. Available: http://www.leapmotion.com. Accessed on: March 2, 2015.
[9] Motor/Stepper/Servo Shield for Arduino v2 Kit. Adafruit.com. [Online]. Available: http://www.adafruit.com/product/1438. Accessed on: June 1, 2015.
