An Integrated Planning and Programming System for Human-Robot-Cooperation

Julia Berg, Gunther Reinhart
Procedia CIRP 63 (2017) 95 – 100
* Corresponding author. Tel.: +49-821-90678153; fax: +49 821 90678-199. E-mail address: [email protected]
Abstract
The application of human-robot-cooperation in industrial environments is still at an initial stage. This is caused by safety concerns, the complexity of distributing tasks between the human and the robot, and the time-consuming programming of the robot. Therefore, this paper focuses on a concept that combines an automated distribution of the tasks between human and robot with a task-oriented programming of the robot. The programming system consists of four parts: a task and world model, a planning system for the distribution of the tasks, a programming system to generate the robot program, and an operation module containing an action recognition. After the planning system has suggested the allocation of the tasks to human and robot, the tasks are presented to the user, who can approve or adapt them on a human machine interface. According to the determined distribution, the robot program is generated subsequently. This automatically generated program consists of several modules, where each module represents an assembly task. In order to detect the assembly task which is currently carried out by the human, the movements of the worker's hands and of the object are detected by a camera system. To interpret the movements of the worker, an approach based on Hidden Markov Models is used, where the Hidden Markov Model is fed with the position data of the hands.
Keywords: human-robot-cooperation; planning; programming

© 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the scientific committee of The 50th CIRP Conference on Manufacturing Systems.
doi:10.1016/j.procir.2017.03.318
1. Introduction

[…] that will support the planning process of the task division. These tasks will then be transferred into robot code, where a task-oriented programming approach is applied. To enhance the human's acceptance of the robot, action recognition is included in the code so that the robot is able to react to the human's working processes.

Figure 1: Scenario of human-robot-cooperation

2. State of the art

The planning of human-robot applications has been highlighted in several papers. One of the first approaches was a task planning algorithm presented by Beumelburg [4]. In order to assign the tasks to human and robot, the skills of both are modelled and compared to the requirements of the task. Based on the work of Beumelburg, Heinze et al. set up a methodology for the combined planning and simulation of human-robot applications, which was developed during the project MANUSERV [5]. The aim of the project was the development of a system for human-robot-cooperation solutions based on the manual processes and a database in which robot technologies are collected. An important aspect of the work was the description of the processes in a standardized description language [5]. The description of the processes is based on Methods-Time Measurement (MTM), in which the five basic motions reach, grasp, move, position and release describe assembly processes. Another aspect within this project was the planning of the application based on the Planning Domain Definition Language (PDDL). Next to the planning process, simulation was a focus of the project. With the defined working plan and CAD data of the robot system, the application can be simulated using the simulation software VEROSIM [4]. Another project concerning the planning of individual assistance systems in assembly processes is called INDIVA. The main goal of this project is the early simulation of human-robot collaboration and the individual adjustment of the application to the worker [6]. More approaches concerning task assignment in human-robot-cooperation are presented in [7, 8, 9].

In order to ease the process of programming robots in general, not necessarily restricted to human-robot applications, many approaches have been presented. Two ways to easily program robots are programming by demonstration and task-oriented programming. The aim of these programming techniques is that non-experts in programming are able to program the robot. Orendt gave an example of programming by demonstration through the robot's Tool Center Point (TCP), which is seen as an intuitive way of programming [10]. The robot is guided by its TCP, and the program is generated from the recorded path. The research shows that this way of programming is considered intuitive and can be performed by non-experts [10]. A similar approach is presented by [11].

A different type of intuitive programming designed for non-experts is task-oriented programming, which has already been addressed in several works [12]. Earlier approaches lacked fast adaptation of the task-oriented planning system to changing conditions. Therefore, [13] expanded earlier approaches to a skill-based model in order to adapt the task-oriented programming system to different assembly systems. The main focus of this work was the modelling of skills, which provide an independent description of the functions of a resource. Furthermore, every process is described by the necessary process skills. The description and sequence of the skills is the basis for the generation of the robot and PLC code, executed by a postprocessor developed by [14]. Other similar approaches for easier programming of the robot, in which the user tells the robot "what" to do instead of "how" to do it, are the focus of the project SMErobotics and a drag-and-drop approach presented by [15-17]. In the latter approach, the user defines the robot program by dragging blocks on a user interface. The blocks contain information such as "move to" and can be specified further by the user. More approaches for simplified programming of robots are presented in [18-20].

In order to increase the acceptance of collaboration with the robot in human-robot-cooperation, several approaches address the anticipation capability of the robot. Lenz et al. present an approach in which the decisions for the next steps of the robot are derived from the subsequent steps that need to be performed, based on the assembly plan and the observed human actions. A database with information about the assembly plan, tools and products supports the decision process [21]. Another approach in this context was presented by Faber et al., in which human-robot-cooperation in assembly tasks is supported by a cognitive architecture. In this approach, the planning of the robot's tasks is performed online. The basis for the planning process is the CAD data of the product, from which the assembly sequence is derived. With this assembly sequence, the costs for possible assembly solutions are taken into account and calculated with respect to the participant. The path with the lowest costs is taken. Information about the decisions of the system is visualized on a human machine interface [22]. Other approaches use Bayesian models to estimate the worker's intention in order to adapt the robot to the worker's behavior [23, 24].

In the last two presented approaches, the autonomy of the robot system increases. However, studies show that the efficiency of a worker decreases when the autonomy and speed of the robot increase [25].

Although many approaches for easier programming of the robot have been presented, they mostly do not take the human's needs into account, such as an adaptation to the human's speed. In order to ease the programming process, but also to meet the human's requirements in collaboration, an approach for an automated programming system with integrated action recognition is introduced in this paper. The following section describes the system architecture of the approach. After giving an overview of the architecture, each module is described.
3. System architecture

The system consists of four modules: the task and world model, the planning module, the programming module and the operation module. The main purpose of each module is described in the following sections. Figure 2 gives an overview of the system architecture.

3.1. Task and world model

[…] available robot, the gripper, and other components such as feeding systems. If the layout of the scene is already fixed, the layout definition will be taken into account in the planning module when the tasks are assigned to human or robot.

3.2. Planning module
[…] process, the task assignment is presented on a user interface as blocks on a time bar, similar to the Gantt-chart representation of the tasks presented in [7]. The worker then has the possibility to agree with the assigned tasks or to rearrange the tasks by moving the blocks on the screen. After the agreement, the tasks will be transferred into robot code in the programming module, which is described in the following section.
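A minimal sketch of the hand-over from the planning module to the programming module, assuming a simple list-based task representation (the names Task, resource and predecessors are illustrative, not taken from the paper):

```python
from dataclasses import dataclass, field

# Illustrative task representation as the planning module might pass it on;
# all names and fields are assumptions for this sketch.
@dataclass
class Task:
    name: str
    resource: str                                      # "human" or "robot"
    predecessors: list = field(default_factory=list)   # assembly priority plan

assignment = [
    Task("task_1", resource="human"),
    Task("task_2", resource="robot", predecessors=["task_1"]),
]

# The worker may override the suggested allocation on the user interface:
assignment[0].resource = "robot"
```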
3.3. Programming module

Within the programming module the code for the robot is generated. It is divided into two parts: program code according to [26] and variable definitions for the action recognition. The programming module described by Weck & Brecher consists of an action planner, a gripping planner, a path planner and an assembly planner. The main task of the action planner is to divide the tasks into smaller units which can be transferred into robot program code. If the task is a gripping task, the gripping planner is responsible for the detailed planning of the best gripping position in order to ensure safe transport and no damage to the part. The choice of the gripper is also a task of the gripping planner. The focus of the path planner is mainly the evaluation of a possible path without any collision with objects in the robot's surroundings [26].

The inputs for the programming module are the tasks of human and robot in a certain description. The task description is provided by the planning module, but on a level which is not yet detailed enough to generate robot program code. As described for the action planner by [26], the tasks have to be subdivided into fractions which can be assigned to the gripping, path and assembly planners. Each task will be transferred into one program module, so that the program will consist of several modules. Backhaus describes a post-processor which conducts several steps to generate the robot code [14]. The post-processor uses a task description model with six levels. It starts from a primary process level, where the assembly process and the starting and end states of the product are described. The last level is the code level, where the program code is generated [16]. The presented approach leans on the concept of Backhaus. The robot code will neither contain any waiting times nor wait for a signal from a button in order to start the robot. Instead, the modules will be started by the action recognition. The action recognition itself is described in the following section; here, only the variables for the action recognition that are integrated in the code are introduced.

In order to realize the action recognition during operation, the code modules have to be prepared. Therefore, variables will be generated. A module to fulfil a task by the robot can only be started when the requirements according to the assembly priority plan are met. This means that the basis for the generation and assignment of the variables is the assembly plan of the product. The idea is to set up the modules with if-conditions. For example, if task 2 can only be started after task 1 has finished, the module for task 2 would start with the condition: If task_1 = True Then…

During the assignment of the tasks, task 1 will be assigned to either human or robot. If the task is assigned to the robot, the change of the variable task_1 will be conducted in the module for task 1. If task 1 however is conducted by the human, the variable has to be changed by the action recognition. The robot program itself runs on the robot control; the analysis of the action recognition is conducted online by a PC. When the execution of task 1 by the worker is identified by the system, the variable task_1 will be changed to True, which means that all requirements for task 2 are fulfilled and the module can be started.
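A minimal Python sketch of this gating mechanism follows. On the real system the generated code runs on the robot control and the variable is written either by the robot module itself or by the action-recognition PC; robot_execute and the shared dictionary are assumptions for illustration:

```python
import time

def robot_execute(task_name):
    # Placeholder for the generated robot motion commands of one module.
    print(f"executing {task_name}")

# Shared task-state variables generated from the assembly priority plan.
task_state = {"task_1": False, "task_2": False}

def module_task_1():
    # Task 1 performed by the robot: execute, then set the flag itself.
    robot_execute("task_1")
    task_state["task_1"] = True        # requirement for task 2 is now met

def module_task_2():
    # Generated if-condition "If task_1 = True Then…": wait until the flag
    # is set, either by module_task_1 or by the action recognition when
    # the worker carries out task 1.
    while not task_state["task_1"]:
        time.sleep(0.01)
    robot_execute("task_2")
    task_state["task_2"] = True

module_task_1()   # robot performs task 1 and sets the flag
module_task_2()   # start condition for task 2 is now fulfilled
```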
According to [25], a big issue for the worker in human-robot-cooperation is unpredictable robot movements. To help the worker get used to working on the assembly together with the robot, the sequence of the robot tasks is defined once and not changed. However, when a change in sequence caused by the worker results in a change of tasks on the task list of the robot, the necessary task can be conducted if the requirements according to the assembly plan are met.
3.4. Operation module

The operation module consists of two parts: action recognition and collision detection. Both elements are described in detail within the next two sections.

3.4.1. Action Recognition

As described above, the generated robot code includes a basis for an action recognition performed during operation. The action recognition aims to identify tasks performed by the worker, so that a slight adaptation to the worker's behavior can be achieved. At first, the basics of action recognition based on Hidden Markov Models are explained, followed by the adaptation of this model to an assembly workplace.

The basics of Hidden Markov Models have been presented by Rabiner [28]. A Hidden Markov Model (HMM) is a special case of a Bayesian network. Sequences of certain events can be detected without knowing their real states. The basis for an HMM is the Markov chain: the probability of changing to a different state depends only on the current state and the next state itself. In an HMM the actual states are not visible; rather, they are hidden (cf. Figure 3). However, every state sends out emissions that can be detected. The emission matrix describes the probability of detecting a certain emission in a certain state [28].

Figure 3: Hidden Markov Model (cf. [28])
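As a small worked example of these definitions, the forward algorithm computes the likelihood of an emission sequence from the transition matrix, the emission matrix and the initial state distribution. All numbers below are toy values chosen for illustration, not values from the paper:

```python
import numpy as np

A = np.array([[0.7, 0.3],     # transition matrix: P(next state | state)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],     # emission matrix: P(emission | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])     # initial state distribution

def forward(observations):
    """Likelihood of an emission sequence under the HMM (A, B, pi)."""
    alpha = pi * B[:, observations[0]]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

print(forward([0, 0, 1]))     # probability of seeing emissions e0, e0, e1
```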
In order to increase the accuracy and robustness of an HMM, several HMMs can be connected on several levels to build up a layered hidden Markov model (LHMM) [29]. A layered hidden Markov model consists of at least two layers, where one is the top-layer model and one is a layer with several models. Layered hidden Markov models have been a subject of research in action recognition.
[29] uses an LHMM to identify the worker's actions in an assembly line [30]. One layer represents the subtasks of the worker, which are combined into a task in the layer above. In order to apply this LHMM to the assembly line, a large set of training runs had to be performed. In order to integrate the action recognition using HMMs into this task-oriented programming system, the training effort of the model has to be reduced. The presented approach applies the HMM to an assembly workspace and gives an approach to reduce the training. The LHMM consists of two layers: in the top layer, the assembly tasks are identified; the bottom layer consists of two HMMs, one for the right and one for the left hand. The input for the bottom layer is the position of the right or left hand. Figure 4 shows the setup of the layers for the considered structure. For the identification of the actions, the assembly step is divided into the five basic motions according to MTM. To identify the assembly step, the reach and move motions must be identified. If the hand position of either the right or left hand moves towards an object which is assigned to a task, there is a certain probability that this task is being executed. To estimate the probability that a hand moves towards an object, a line from the point where the assembly takes place to the object is defined. If the hand mostly moves along this line, the probability for the reach movement is high. This identification takes place in the bottom layer, as seen in Figure 4.
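One way to turn this line criterion into an input feature for the bottom layer is to measure how far the observed hand positions deviate from the line between the assembly point and the object. The sketch below assumes 3-D position vectors; the 5 cm distance scale is an assumed value:

```python
import numpy as np

def distance_to_line(p, a, b):
    """Perpendicular distance of hand position p from the line
    through assembly point a and object position b."""
    d = (b - a) / np.linalg.norm(b - a)            # unit direction of the line
    return np.linalg.norm((p - a) - ((p - a) @ d) * d)

def reach_score(hand_positions, assembly_point, obj):
    """Heuristic probability that the hand is reaching for obj:
    close to 1 if the observed positions stay near the line."""
    dists = [distance_to_line(p, assembly_point, obj) for p in hand_positions]
    return float(np.exp(-np.mean(dists) / 0.05))   # 0.05 m scale, assumed

# Example: a hand moving along the x-axis towards an object at 40 cm.
path = [np.array([0.02 * i, 0.0, 0.01]) for i in range(10)]
print(reach_score(path, np.zeros(3), np.array([0.4, 0.0, 0.0])))
```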
Figure 4: Structure of the layered Hidden Markov Model

The positions of the objects corresponding to the tasks have to be given to the system by camera detection. In order to obtain the positions of the hands, a Leap Motion Controller is used. Coming from the consumer industry, the Leap Motion Controller is a device to detect hand positions and gestures. Studies of the Leap Motion Controller show a high accuracy with respect to the hand positions and processing time [31]. First studies of the action recognition also show that the Leap Motion Controller is an appropriate device to fulfill the requirements. The disadvantage of using one Leap Motion Controller is its working space, which is too small compared to the assembly space.
3.4.2. Collision detection

In order to enhance the acceptance and the safety of the worker in close collaboration with the robot, a collision detection has been set up. Using the Leap Motion Controller, which detects the hand positions for the action recognition, a collision detection between the hands and the TCP of the robot has been developed. To detect possible collisions, the coordinates of the middle of the hand and the coordinate of the TCP are compared. The hands are modelled as balls with the position of the hand as center. The arms of the human are modelled as cuboids. The robot is also modelled by a ball with the TCP coordinate as center, as shown in Figure 5. If the ball of a hand and the ball representing the robot overlap, a collision is detected, which results in a stop of the robot. The robot will pause its program as long as the collision is detected. Figure 5 shows the visualization of the collision detection. In this case no collision is detected, because there is no overlap of the robot and the hands.

Figure 5: Visualization of collision detection
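The described overlap test reduces to comparing the distance between the two ball centers with the sum of their radii; a minimal sketch, with assumed radii:

```python
import numpy as np

HAND_RADIUS = 0.10   # m, assumed size of the ball modelling a hand
TCP_RADIUS = 0.15    # m, assumed size of the ball around the robot's TCP

def collision(hand_center, tcp_center):
    """True if the hand ball and the TCP ball overlap."""
    dist = np.linalg.norm(np.asarray(hand_center) - np.asarray(tcp_center))
    return dist < HAND_RADIUS + TCP_RADIUS

# The robot pauses its program as long as a collision is detected, e.g.:
# if any(collision(h, tcp) for h in (left_hand, right_hand)): pause_robot()
```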
4. Conclusion and future work

In this paper an approach for a task-oriented programming system has been introduced. The programming system consists of four modules: the task and world model, the planning module, the programming module and the operation module. The planning module's main task is the assignment of the tasks to the resources human and robot and the planning of the application. The programming module is responsible for the code generation, which includes the preparations for the action recognition performed in the operation module. Within the operation module, an action recognition based on layered Hidden Markov Models is executed. The action recognition identifies tasks executed by the worker, so that the robot system can slightly adapt to the worker's behavior, for example by starting its task at the right moment without getting a signal from a button push. Regarding the action recognition and collision detection, the system has been implemented and tested with a Leap Motion Controller. To expand the detected space, the approach will be enhanced by a Kinect. Thus the safety space can be expanded and bigger parts of the body can be detected to feed the model for action recognition.

The presented approach is a concept for an architecture for an automated programming system for human-robot-cooperation. The programming module will now be implemented. This includes the development and design of the human machine interface where the tasks will be presented to the user. Furthermore, it is planned that more tasks can be added on the interface in case tasks are missing within the process. Within the project, some scenarios for a human-robot-cooperation will be identified. Through the implementation of the presented system in these applications, the system will be evaluated.