
Proceedings of the 11th World Congress in Mechanism and Machine Science
April 1-4, 2004, Tianjin, China
China Machine Press, edited by Tian Huang

Dataglove Based Grasp Planning for Multi-Fingered Robot Hand


Jie Liu, Yuru Zhang
Robotics Institute, Beijing Univ. of Aeronautics & Astronautics, Beijing 100083, China
e-mail: [email protected]

Abstract: We developed a dataglove based grasp planning system for the simulation, visualization, and analysis of motion mapping from the human hand to a robot hand. Mapping in the joint coordinate system during the preliminary grasp is achieved. The system can effectively analyze motion mapping from human hand to robot hand in a virtual environment. It is also useful for building virtual prototypes of different hand designs and determining how the design affects grasp ability. This paper describes the working principle, the hardware and software architecture, and the grasp strategy. Furthermore, we developed a classifier for human hand grasps based on grasp features represented by a Gaussian Mixture Model (GMM). The taxonomy of robot hand grasps is achieved by the joint coordinate mapping from human hand to robot hand. Our aim is to control robotic finger motions through human grasp experience.

Keywords: Grasp planning, Motion mapping, Gaussian mixture model

1 Introduction
In the field of robot hand programming, grasp planning involves determining the hand placement relative to the object of interest as well as the posture of the hand assumed in grasping the object. It is computationally intensive due to both the large search space and the incorporation of geometric and task-oriented constraints. Globally optimal grasp planning using comprehensive or exhaustive search is not feasible, especially for robot hands with many joints. Telemanipulation entails tracking human finger motions to control robotic finger motions, with the intention of providing human-like versatility and complexity in robot hand performance. However, teleoperation is less suitable for controlling robots performing repetitive tasks, as it requires the full-time attention of a human operator.

Napier [1] performed the seminal work in analyzing the grasping movements of the human hand. More recently, techniques using human motion measurements as teaching data for robots have been proposed as teaching-by-showing methods [2-3]. Task programming based on the observation of human operation is viable for humanoid robots because it is not necessary to describe the details of motions and forces explicitly for the robot to accomplish the task. Direct teaching presents one difficulty: the operator is under stress, because a mistake of the operator is immediately reflected in the robot motions and may result in a fatal accident. Robot teaching in a VR environment can overcome this problem. Takahashi et al. [4] presented a robot teaching method using a dataglove to measure the finger motion of the human in a VR environment. Kaiser et al. [5] analyzed human skill. For a dexterous hand, it is desirable to control it by tracking compatible motions of the human hand. Grasp planning of a robot hand involves determining the hand placement relative to the object of interest as well as the posture of the hand assumed in grasping the object. The operator's movements in the VR space are interpreted as task-level operations. Takahashi and Ogata investigated the transfer of gross human motion in the VR environment to robot motion [6].

Our aim is to develop a friendly task-teaching system for multi-fingered robotic hands with the ability to learn human motion. The system will provide solutions for mapping human skill to robot hands and thus enhance the usage and productivity of robots. This paper presents a teaching method for grasp planning of a multi-fingered robot hand based on a dataglove. The position and orientation of the robot hand are determined online by the operator. In the system, grasp planning is made more tractable by using cues extracted from human execution of grasps. We use pattern recognition technology to design a classifier that analyzes human hand grasps and realizes the taxonomy of grasps. This research provides a natural and intuitive means for a human to control a robot hand by wearing an instrumented glove. Data measured during the human execution of the task produces commands for the robot system to replicate the observed task, first in the VR environment and then in the physical environment.

The remainder of this paper is laid out as follows. Section 2 describes the system implementation. Section 3 presents the grasping simulator. Grasp taxonomy based on GMM is covered in section 4, and experimentation and results are shown in section 5. Finally, section 6 summarizes the discussion and future work.

2 System Implementation
Figures 1 and 2 show the hardware and software architectures of the grasp planning system.
[Figure 2 Software architecture: the user interface and the CyberTouch driver API connect to a simulator core based on OpenGL, Virtual Hand Suite and MFC, with interfaces for graphical simulation of the human hand and for BH3 joint angle data]

The hardware system consists of four parts: the dexterous hand, the controller, the CyberGlove and the PC-based virtual environment. The dexterous hand BH3 (Figure 3) was developed at the Robotics Institute, Beijing University of Aeronautics & Astronautics. It is a three-fingered articulated hand with a total of 9 degrees of freedom. There are three links and three independently controllable joints in each of the three identical fingers.

Figure 3 Configuration of the BH3 dexterous hand

The controller is a 9-axis servo motion board with a PC-ISA bus plug structure. It is designed with DSP and FPGA techniques. The PWM output, A/D conversion, serial interface, watchdog and real-time interrupt are all integrated into the DSP chip TMS320F240, which gives the controller a good ratio of performance to price. Communication between the controller and the PC is achieved through two RAM and I/O channels.

The CyberGlove from Immersion Corporation is a fully instrumented glove that provides up to 22 joint-angle measurements. It uses proprietary resistive bend-sensing technology to transform hand motions into real-time digital joint-angle data. The CyberGlove we used is a right-handed glove with 18 bend sensors: two bend sensors on each finger, four abduction sensors, plus sensors measuring thumb crossover, palm arch, wrist flexion and wrist abduction. The position information is transmitted to the hand controller over an RS-232 serial port.

We developed the system framework on a PC. It is a real-time interactive system using the CyberGlove as an input device. The real-time 3D animated simulation is achieved with Virtual Hand Suite, the software development kit of the CyberGlove. The system provides a virtual environment with low cost and high quality, and it can be integrated with many PC-based applications.
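As a rough illustration of how such a framework can be organized, a minimal C++ sketch of the two-thread structure (one thread records glove data while another refreshes the scene, as described in section 3) might look as follows. The functions readGloveFrame() and renderScene() are hypothetical placeholders standing in for the Virtual Hand Suite and OpenGL calls; they are not actual SDK functions.

```cpp
// Minimal sketch: one thread records glove data, another refreshes the
// display. readGloveFrame() and renderScene() are placeholders for the
// Virtual Hand Suite / OpenGL calls.
#include <array>
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

using GloveFrame = std::array<float, 18>;   // 18 bend-sensor joint angles

std::mutex frameMutex;
GloveFrame latestFrame{};                   // most recent glove sample
std::atomic<bool> running{true};

GloveFrame readGloveFrame() { return GloveFrame{}; }  // stub: poll the glove
void renderScene(const GloveFrame&) {}                // stub: redraw hands

void samplingThread() {
    while (running) {
        GloveFrame f = readGloveFrame();
        std::lock_guard<std::mutex> lock(frameMutex);
        latestFrame = f;                    // publish the newest sample
    }
}

void renderThread() {
    while (running) {
        GloveFrame f;
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            f = latestFrame;                // consistent snapshot for drawing
        }
        renderScene(f);                     // update human and BH3 hand models
    }
}

int main() {
    std::thread sampler(samplingThread), renderer(renderThread);
    std::this_thread::sleep_for(std::chrono::seconds(5));  // run briefly
    running = false;
    sampler.join();
    renderer.join();
}
```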

3 The Grasping Simulator
To make use of human intelligence in robot hand grasp planning, one of the key technologies is the grasp mapping strategy [7]. The strategy used in our system is shown in Figure 4.

[Figure 4 Grasp mapping strategy: a grasp translator maps the human grasp to the robot hand grasp, drawing on observation, analytic grasp measures, a priori knowledge of the human hand, a priori knowledge of the robot hand, and the mapping algorithm]
Mappings developed for robot hands include direct joint angle mapping, pose mapping, and fingertip position mapping [8][9]. Fingertip position mapping is used in precision grasps. Drawbacks of pose mapping include operator dependency and the lack of a clear definition of what identical poses are for kinematically dissimilar hands. For the preliminary grasp, we use direct joint angle mapping. This simple method generates roughly similar motions of the human and robot hands. The aim of the preliminary grasp phase is to determine the accessible surface areas of the object for each finger and to select the best configuration for each finger based on finger-related criteria. The task during this phase consists of a pre-shape determination and a set of digit trajectories. The pre-shape is a prescribed hand configuration before grasping. The digit trajectories are the motions of the fingertips before contact with the object [10]. Our focus is to map the human hand gesture to the BH3 robot hand to obtain a pre-shape grasp configuration.
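As an illustration of direct joint angle mapping, the sketch below maps the 9 relevant CyberGlove readings (the sensor subset listed in section 5) to the 9 BH3 joints through a per-joint linear calibration. The sensor indices, gains, offsets and joint limits are hypothetical values chosen for illustration, not the calibration actually used.

```cpp
// Direct joint angle mapping: 9 CyberGlove readings drive the 9 BH3
// joints. Sensor indices, gains, offsets and joint limits below are
// hypothetical calibration values.
#include <algorithm>
#include <array>

struct JointMap {
    int   sensor;   // index into the 18-element glove frame
    float gain;     // human-to-robot scale factor
    float offset;   // offset in radians
    float lo, hi;   // BH3 joint limits in radians
};

// One entry per BH3 joint (3 joints x 3 fingers); all numbers illustrative.
constexpr std::array<JointMap, 9> kJointMap{{
    {0, 1.0f, 0.0f, -0.6f, 1.6f},   // thumb rotation
    {1, 0.9f, 0.0f,  0.0f, 1.4f},   // thumb TMJ
    {2, 0.9f, 0.0f,  0.0f, 1.4f},   // thumb IJ
    {3, 1.0f, 0.0f,  0.0f, 1.5f},   // index PIJ
    {4, 1.0f, 0.0f,  0.0f, 1.5f},   // index DIJ
    {5, 0.8f, 0.1f, -0.4f, 0.4f},   // thumb-index abduction
    {6, 1.0f, 0.0f,  0.0f, 1.5f},   // middle PIJ
    {7, 1.0f, 0.0f,  0.0f, 1.5f},   // middle DIJ
    {8, 0.8f, 0.0f, -0.4f, 0.4f},   // middle-index abduction
}};

// Map one glove frame to a BH3 joint command (preliminary grasp phase).
std::array<float, 9> mapToBH3(const std::array<float, 18>& glove) {
    std::array<float, 9> q{};
    for (std::size_t j = 0; j < q.size(); ++j) {
        const JointMap& m = kJointMap[j];
        q[j] = std::clamp(m.gain * glove[m.sensor] + m.offset, m.lo, m.hi);
    }
    return q;
}
```

Tuning the mapping from the on-screen comparison of the two hands then amounts to editing this calibration table.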
The 3D grasp simulator is written in C++ using MFC and OpenGL. The human hand model is generated with Virtual Hand Suite through MFC. The 3D model of the robot hand is developed in Unigraphics 16 and imported into the OpenGL programming environment. The computer is a Pentium IV 1.7 GHz PC running the Windows 2000 operating system. We use multiple threads to record data and refresh the display scene simultaneously. The graphical interface of the simulator is shown in Figure 5.

During experiments with the simulator, the human operator can control the motion of the two virtual hands online by wearing the CyberGlove. As the BH3 motion commands are generated from human motion, the kinematic difference between the BH3 and the human hand must be taken into consideration. The major differences between the human and the robot hand are the structure of the finger base joints and the location of the thumb. By mapping the motions from the human hand to the robot hand, we can observe the configuration difference between the two hands on screen and adjust the parameters of the mapping algorithm. The system is also helpful for robot hand designers when prototyping different hand models to determine how design decisions affect a hand's grasping ability in simulation.

Figure 5 Main interface of the grasping simulator

4 Grasp Taxonomy Based on Gaussian Mixture Model
The taxonomy of multi-fingered robot hand grasps is one of the key problems in grasp planning [11]. To solve this problem, we use pattern recognition technology to design a classifier that recognizes grasp types based on the Gaussian Mixture Model (GMM). The GMM is a statistical model widely used in the domains of identification, verification and sign language recognition [12-14]. According to the characteristics of the hand motion data, a GMM recognizer is used to classify the grasp types. A GMM has three kinds of parameters: the mixture weights c_i, the mean vectors µ_i, and the covariance matrices Σ_i. With the notation λ indicating the complete set of model parameters, let

    λ = (c_i, µ_i, Σ_i),  i = 1, 2, …, M                    (1)

where M is the number of mixtures and ∑_{i=1}^{M} c_i = 1. We estimate the parameters of the GMM by the Expectation-Maximization (EM) algorithm, a method for maximum likelihood estimation (MLE).
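For concreteness, the following sketch shows one EM iteration for a GMM with the dimensions used in the experiments of section 5 (15-dimensional samples, 5 mixtures). The diagonal-covariance simplification and the variance floor are our assumptions; the paper does not specify the covariance structure.

```cpp
// One EM iteration for a diagonal-covariance GMM (simplifying assumption).
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

constexpr int D = 15;   // feature dimension (section 5)
constexpr int M = 5;    // number of mixtures (section 5)

struct Gmm {
    double c[M];        // mixture weights c_i, summing to 1
    double mu[M][D];    // mean vectors
    double var[M][D];   // diagonal covariances
};

// log N(x | mu_k, Sigma_k) for a diagonal Gaussian
double logGauss(const Gmm& g, int k, const double* x) {
    double s = -0.5 * D * std::log(2.0 * 3.141592653589793);
    for (int d = 0; d < D; ++d) {
        double diff = x[d] - g.mu[k][d];
        s -= 0.5 * (std::log(g.var[k][d]) + diff * diff / g.var[k][d]);
    }
    return s;
}

// One E-step + M-step over the training samples X, updating g in place.
void emStep(Gmm& g, const std::vector<std::array<double, D>>& X) {
    const int N = static_cast<int>(X.size());
    std::vector<std::array<double, M>> r(N);   // responsibilities
    for (int n = 0; n < N; ++n) {              // E-step
        double denom = 0.0;
        for (int k = 0; k < M; ++k) {
            r[n][k] = g.c[k] * std::exp(logGauss(g, k, X[n].data()));
            denom += r[n][k];
        }
        for (int k = 0; k < M; ++k) r[n][k] /= denom;
    }
    for (int k = 0; k < M; ++k) {              // M-step
        double Nk = 0.0;
        for (int n = 0; n < N; ++n) Nk += r[n][k];
        g.c[k] = Nk / N;
        for (int d = 0; d < D; ++d) {
            double mu = 0.0, var = 0.0;
            for (int n = 0; n < N; ++n) mu += r[n][k] * X[n][d];
            mu /= Nk;
            for (int n = 0; n < N; ++n) {
                double diff = X[n][d] - mu;
                var += r[n][k] * diff * diff;
            }
            g.mu[k][d] = mu;
            g.var[k][d] = std::max(var / Nk, 1e-6);  // variance floor
        }
    }
}
```

Iterating emStep() until the log-likelihood of the training set stops improving yields the MLE parameters of one grasp model.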
The grasp features of the human hand are represented by the GMM. The taxonomy of robot hand grasps is achieved by the joint coordinate mapping from human hand to robot hand. The structure of a general grasp recognition system is presented briefly in Figure 6. The system includes a data collection module, a data preprocessing module, a training module (model parameter estimation module) and a recognition module. The classification process of the GMM classifier applied to grasp taxonomy is shown in Figure 7. Suppose x is a sample vector and V is the total number of GMMs. We calculate the probability of x under each model λ_v (1 ≤ v ≤ V) in the basic model base:

    p_vk(x) = N(µ_vk, Σ_vk)                                 (2)

    p(x | λ_v) = ∑_{k=1}^{M} c_vk p_vk(x)                   (3)

where N(µ_vk, Σ_vk) is the normal distribution. The class that x belongs to is determined by the following formula:

    v* = argmax_{1≤v≤V} log p(x | λ_v)                      (4)

That is, we calculate the log-likelihood score of x for each model λ_v and select the model index v* that maximizes it. The algorithm is written in C++.
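A minimal sketch of this decision step, implementing Eqs. (2)-(4) on top of the Gmm struct and logGauss() helper from the EM sketch above:

```cpp
// Grasp classification per Eqs. (2)-(4): score the sample against every
// grasp model and choose the class with the highest log-likelihood.
// Reuses the Gmm struct, M, and logGauss() from the EM sketch above.
#include <cmath>
#include <vector>

// log p(x | lambda_v) = log sum_k c_vk N(x | mu_vk, Sigma_vk)   -- Eq. (3)
double logLikelihood(const Gmm& g, const double* x) {
    double p = 0.0;
    for (int k = 0; k < M; ++k)
        p += g.c[k] * std::exp(logGauss(g, k, x));
    return std::log(p);
}

// v* = argmax_v log p(x | lambda_v)                             -- Eq. (4)
int classify(const std::vector<Gmm>& models, const double* x) {
    int best = 0;
    double bestScore = logLikelihood(models[0], x);
    for (int v = 1; v < static_cast<int>(models.size()); ++v) {
        double s = logLikelihood(models[v], x);
        if (s > bestScore) { bestScore = s; best = v; }
    }
    return best;   // index of the recognized grasp type
}
```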


[Figure 6 A GMM-based human grasp recognition system: data collection and pre-processing feed the GMM classifier, which outputs the list of grasps; training data feed GMM parameter estimation, which produces the basic grasp models]

[Figure 7 Diagram of the GMM classifier: the grasp signal X is pre-processed and features are extracted from key frames; the probability p(X | λ_v) is computed for each grasp model λ_1, …, λ_V, and X is assigned to the class v* = argmax_{1≤v≤V} log p(X | λ_v)]

5 Experimentation and Results
The feature extraction process focuses on the changes of the joint angles. The grasp gesture input device is a right-handed CyberGlove with 18 sensors, operating at 38400 baud. Via the CyberGlove we obtain the readings of all 18 sensors at each sample time. Since the BH3 hand has a total of 9 degrees of freedom, we use 9 sensor readings from the thumb, index and middle fingers of the human hand to map human hand motion to the BH3 hand. The data used are from the following sensors: thumb rotation, thumb TMJ (trapeziometacarpal joint), thumb IJ (outer thumb joint), thumb abduction (angle between thumb and index finger), index PIJ (joint second from the fingertip), index DIJ (joint closest to the fingertip), middle PIJ, middle DIJ, and middle-index abduction (angle between middle and index fingers) [15], as shown in Figure 8.

Figure 8 Anatomical definitions of the human hand

Five types of grasp are recognized: hook grip, power grasp, cylindrical grasp, lateral pinch and chuck grip. We define the corresponding BH3 grasps so that their poses resemble those of the human hand during grasping. The hook grips of the BH3 and of the human hand are shown in Figure 9 and Figure 10, respectively.

Figure 9 Hook grip of the BH3

Figure 10 Hook grip of the human hand

For each grasp type, 30 observations are collected and stored in a single hand grasp database; 20 of them serve as training samples and the remaining 10 as testing samples. The GMM parameters are initialized by a modified K-means algorithm, a general method for vector quantization. The number of mixtures in each GMM is 5, the training samples are 15-dimensional vectors, and there are 5 grasp types. According to the results, the GMM classifier classifies the five types of grasp with an accuracy of 88%.
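The modification to K-means is not specified in the paper, so the sketch below shows ordinary K-means (Lloyd's algorithm) as it would be used to seed the mixture means; the initial weights and variances can then be derived from the resulting cluster memberships.

```cpp
// Plain k-means (Lloyd's algorithm) used to seed the GMM: each of the
// M clusters supplies an initial mean. The paper's "modified" variant
// is unspecified, so this is the ordinary algorithm.
#include <array>
#include <cstddef>
#include <vector>

constexpr int DIM = 15;                   // feature dimension (section 5)
using Vec = std::array<double, DIM>;

double dist2(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (int d = 0; d < DIM; ++d) { double t = a[d] - b[d]; s += t * t; }
    return s;
}

// Runs `iters` Lloyd steps; centers are updated in place and each
// sample's final cluster label is returned.
std::vector<int> kmeans(const std::vector<Vec>& X,
                        std::vector<Vec>& centers, int iters = 20) {
    std::vector<int> label(X.size(), 0);
    for (int it = 0; it < iters; ++it) {
        for (std::size_t n = 0; n < X.size(); ++n) {   // assignment step
            double best = dist2(X[n], centers[0]);
            label[n] = 0;
            for (std::size_t k = 1; k < centers.size(); ++k) {
                double d = dist2(X[n], centers[k]);
                if (d < best) { best = d; label[n] = static_cast<int>(k); }
            }
        }
        for (std::size_t k = 0; k < centers.size(); ++k) {  // update step
            Vec sum{}; int cnt = 0;
            for (std::size_t n = 0; n < X.size(); ++n) {
                if (label[n] == static_cast<int>(k)) {
                    ++cnt;
                    for (int d = 0; d < DIM; ++d) sum[d] += X[n][d];
                }
            }
            if (cnt > 0)
                for (int d = 0; d < DIM; ++d) centers[k][d] = sum[d] / cnt;
        }
    }
    return label;
}
```

From the returned labels, each cluster's sample fraction and per-dimension sample variance give the initial mixture weights and covariances.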
Since the robot hand is very likely to be kinematically dissimilar to the human hand, controlling its degrees of freedom is not entirely intuitive, and further research is required on how to map human hand motion to robot hand motion.

6 Discussion and Future Works
Although the system is still in an initial phase, we believe it is a major step toward implementing both human tele-manipulation and autonomous manipulation. The work presented here develops a procedure intended to further this research. It reduces the complexity of planning grasp tasks by taking advantage of human intelligence, learning and experience, and combines them with the strength, endurance and speed of the robot hand.

In summary, we feel the system has the following advantages:
(1) It provides a novel visualization environment for the user to understand pre-grasps of the human hand and the robot hand at the same time.
(2) It gives a grasping taxonomy for the robot hand based on GMM.
(3) As an integrated graphical simulation platform, the grasp simulator is a virtual environment with low cost and high quality. In addition, it has the flexibility to integrate with many PC-based applications.
(4) By employing a database of grasps, we can plan robot grasps online or offline and replicate the same grasp exactly.
(5) The simulator can be used to find the most effective motion mapping method for preliminary grasp planning by comparing human hand motion and robot hand motion in the virtual environment under different mapping methods.

Acknowledgements
This research is sponsored by the Natural Science Foundation of China (Grant No. 59985001) and the Doctoral Grant of the Education Ministry of China (Grant No. 2000000605).

References
1. Napier, J. R., The prehensile movements of the human hand, J. of Bone and Joint Surgery, 38(4): 902-913, 1956
2. Kuniyoshi, T., Inaba, M., Inoue, H., Teaching by showing: generating robot programs by visual observation of human performance, In: Proc. 20th Int. Symp. Ind. Robots, 119-126, 1989
3. Kang, S. B., Ikeuchi, K., Toward automatic robot instruction from perception: temporal segmentation of tasks from human hand motion, IEEE Trans. on Robotics and Automation, 11(5): 670-681, 1995
4. Takahashi, T., Ogata, H., Robotic assembly operation based on task-level teaching in virtual reality, In: Proc. of ICRA, Nice, France, 1083-1088, 1992
5. Kaiser, M., Dillmann, R., Building elementary robot skills from human demonstration, In: Proc. of ICRA, Minneapolis, MN, USA, 2700-2705, 1996
6. Takahashi, T., Ogata, H., Muto, S., A method for analyzing human assembly operations for use in automatically generating robot commands, In: Proc. of ICRA, Atlanta, GA, USA, 695-700, 1993
7. Kang, S. B., Ikeuchi, K., Toward automatic robot instruction from perception: mapping human grasps to manipulator grasps, IEEE Trans. on Robotics and Automation, 13(1): 81-95, 1997
8. Griffin, W. B., et al., Calibration and mapping of a human hand for dexterous telemanipulation, In: Proc. 2000 ASME IMECE DSC-Symposium on Haptic Interfaces, 1-8, 2000
9. Kawasaki, H., et al., Virtual teaching based on hand manipulability for multi-fingered robots, In: Proc. of ICRA, Seoul, Korea, 1388-1393, 2001
10. Wren, D., Fisher, R. B., Dexterous hand grasping strategies using preshapes and digit trajectories, In: Proc. of ICSMC, 910-915, 1995
11. Cutkosky, M. R., On grasp choice, grasp models, and the design of hands for manufacturing tasks, IEEE Trans. on Robotics and Automation, 5(3): 269-279, 1989
12. Reynolds, D. A., Speaker identification and verification using Gaussian mixture speaker models, Speech Communication, 17: 91-108, 1995
13. Jiyong, M., Wen, G., The supervised Gaussian mixture model, J. of Comput. Sci. & Technol., 13(5): 471-474, 1998
14. Jiangqin, W., et al., A hierarchical DGMM recognizer for Chinese sign language recognition, J. of Software, 11(11): 1430-1439, 2000
15. CyberGlove User's Manual, Virtual Technologies Inc., 1997
