
Hand Gesture Recognition for Human-Machine Interaction

Elena Sánchez-Nielsen
Department of Statistics, O.R. and Computer Science, University of La Laguna
Edificio de Física y Matemáticas, 38271 La Laguna, Spain
[email protected]

Luis Antón-Canalís
Department of Statistics, O.R. and Computer Science, University of La Laguna
Edificio de Física y Matemáticas, 38271 La Laguna, Spain
[email protected]

Mario Hernández-Tejera
Institute of Intelligent Systems and Numerical Applications in Engineering
Campus Universitario de Tafira, 35017 Las Palmas, Spain
[email protected]

ABSTRACT
Even after more than two decades of input-device development, many people still find the interaction with computers an uncomfortable experience. Efforts should be made to adapt computers to our natural means of communication: speech and body language. The PUI (Perceptual User Interface) paradigm has emerged as a post-WIMP interface paradigm in order to cover these preferences. The aim of this paper is to propose a real-time vision system for application within visual interaction environments through hand gesture recognition, using general-purpose hardware and low-cost sensors, such as a simple personal computer and a USB webcam, so that any user could make use of it at home or in the office. The basis of our approach is a fast segmentation process that extracts the moving hand from the whole image, able to deal with a large number of hand shapes against different backgrounds and lighting conditions, and a recognition process that identifies the hand posture from the temporal sequence of segmented hands. The most important part of the recognition process is a robust shape comparison carried out through a Hausdorff distance approach, which operates on edge maps. The use of a visual memory allows the system to handle variations within a gesture and to speed up the recognition process through the storage of different variables related to each gesture. This paper includes an experimental evaluation of the recognition process over 26 hand postures and discusses the results. Experiments show that the system achieves a 90% average recognition rate and is suitable for real-time applications.

Keywords
Man-Machine Interaction, Perceptual user interface, Image Processing, Hand gesture recognition, Hausdorff
distance

1. INTRODUCTION
Body language is an important way of communication among humans, adding emphasis to voice messages or even constituting a complete message by itself. Thus, automatic posture recognition systems could be used for improving human-machine interaction [Turk98]. This kind of human-machine interface would allow a human user to remotely control a wide variety of devices through hand postures. Different applications have been suggested, such as the contact-less control of home appliances for welfare improvement [Pen98, Wil95, Ju97, Dam97].

In order to represent a serious alternative to conventional input devices like keyboards and mice, applications based on computer vision like those mentioned above should be able to work successfully under uncontrolled light conditions, no matter what kind of background the user stands in front of. In addition, deformable and articulated objects like hands imply an increased difficulty not only in the segmentation process but also in the shape recognition stage.

Most work in this research field tries to elude the problem by using markers, using marked gloves, or requiring a simple background [Dav94, Bob95, Hun95, Mag95]. Other approaches are based on complex representations of hand shapes, which makes them unsuitable for implementation in real-time applications [Tri01].

A new vision-based framework is presented in this paper, which allows users to interact with computers through hand postures, the system being adaptable to different light conditions and backgrounds. Its efficiency makes it suitable for real-time applications. The present paper focuses on the diverse stages involved in hand posture recognition, from the original captured image to its final classification. Frames from video sequences are processed and analyzed in order to remove noise, find skin tones and label every object pixel. Once the hand has been segmented, it is identified as a certain posture or discarded if it does not belong to the visual memory.

The recognition problem is approached through a matching process in which the segmented hand is compared with all the postures in the system's memory using the Hausdorff distance. The system's visual memory stores all the recognizable postures, their distance transforms, their edge maps and their morphologic information. A faster and more robust comparison is performed thanks to this data, properly classifying postures, even those which are similar, saving valuable time needed for real-time processing.

The postures included in the visual memory may be initialized by the human user, learned or trained from previous hand motion tracking [San99], or generated during the recognition process.

This paper is organized as follows: Section 2 introduces an overview of the system components. The hand gesture detection and recognition approaches are presented in Sections 3 and 4. The advantages of the proposed system are demonstrated through experimental evaluations on real-world scenes in Section 5. Conclusions and future work are finally described in Section 6.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
Journal of WSCG, Vol.12, No.1-3, ISSN 1213-6972
WSCG'2004, February 2-6, 2004, Plzen, Czech Republic.
Copyright UNION Agency – Science Press

2. SYSTEM COMPONENTS
A low-cost computer vision system that can be executed on a common PC equipped with a USB webcam is one of the main objectives of our approach. The system should be able to work under different degrees of scene background complexity and illumination conditions, which should not change during the execution.

Figure 1 shows an overview of our hand posture detection and recognition framework, which contains two major modules: (i) user hand posture location; and (ii) user hand posture recognition. The following processes compose the general framework:

1. Initialization: the recognizable postures are stored in a visual memory which is created in a start-up step. In order to configure this memory, different ways are proposed.

2. Acquisition: a frame from the webcam is captured.

3. Segmentation: each frame is processed separately before its analysis: the image is smoothed, skin pixels are labelled, noise is removed and small gaps are filled. Image edges are found and finally, after a blob analysis, the blob which represents the user's hand is segmented. A new image is created which contains the portion of the original one where the user's hand was placed.

4. Pattern Recognition: once the user's hand has been segmented, its posture is compared with those stored in the system's visual memory (VMS) using an innovative Hausdorff matching approach.

5. Executing Action: finally, the system carries out the corresponding action according to the recognized hand posture.

Figure 1. Global Hand Posture Detection and Recognition Diagram.

3. HAND POSTURE DETECTION
The operators developed for image processing must be kept cheap in computation time in order to obtain the fast processing rate needed to achieve real-time speed. Furthermore, certain operators should be adaptable to different light conditions and backgrounds.

Skin Colour Features
Modelling skin colour requires the selection of an appropriate colour space and the identification of the cluster associated with skin colour in this space. The HSI space (Hue, Saturation and Intensity) was chosen since the hue and saturation pair of skin-tone colours is independent of the intensity component [Jon98]. Thus, colours can be specified using just two parameters instead of the three specified by the RGB colour space (Red, Green, Blue).
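The paper does not give the conversion formulas it used; a standard RGB-to-HSI conversion, as commonly defined in the image processing literature, can be sketched as follows (hue in degrees, saturation and intensity in [0, 1]):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB to HSI: hue in degrees [0, 360),
    saturation and intensity in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    if i == 0:
        return 0.0, 0.0, 0.0          # black: hue/saturation undefined
    s = 1.0 - min(r, g, b) / i        # saturation: distance from the grey axis
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                       # grey pixel: hue undefined, use 0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:                     # angles beyond 180° wrap around
            h = 360.0 - h
    return h, s, i
```

How the resulting hue and saturation are scaled (degrees versus a byte range) is an implementation choice; the ranges quoted below in the paper suggest the authors worked on a rescaled representation.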
In order to find common skin-tone features, several images involving people with different backgrounds and light conditions were processed by hand to separate skin areas. Figure 2 shows the distribution of skin colour features. Yellow dots represent samples of skin-tone colour from segmented images, while blue dots are the rest of the image colour samples. It can be observed that skin-tone pixels are concentrated in a parametrical elliptical model. For practical purposes, however, skin-tone pixel classification was simplified using a rectangular model. The fact that the appearance of the skin colour tone depends on the lighting conditions was confirmed in the analysis of these images. The values lay between 0 and 35 for the hue component and between 20 and 220 for the saturation component.

Figure 2. Skin-tone colour distribution in HSI space: (a) original image under natural and artificial light conditions; (b) segmented skin image; (c) HSI colour space (yellow dots represent skin colour samples and blue dots represent the rest of the image samples), x-axis for the hue component and y-axis for the saturation component.

Colour Smoothing
An image acquired by a low-cost webcam is corrupted by random variations in intensity and illumination. A linear smoothing filter was applied in order to remove noisy pixels and homogenize colours. Among the different linear filters proposed in [Jai95], the best results were achieved using a mean filter.

The appearance of the skin-tone colour depends on the lighting conditions. Artificial light may create reddish pictures, as shown in Figure 3, which means different values for skin-tone colours. The histograms on the left side of Figure 3 represent the distribution of the skin hue and saturation components for artificial light (red line) and natural light (blue line). Values are shifted to the right for artificial light. A lighting compensation technique that uses a "reference average" was introduced to normalize the colour appearance. The normalization operation subtracts from each pixel colour band (R,G,B) the average of the whole image, so oddly coloured images like the reddish one are turned into more natural images. The histograms on the right side of Figure 3 show that after this operation, skin-tone colours under different light conditions are much more similar.

Figure 3. Skin detection: (a) histograms for the hue and saturation components before the normalization operation, red line for artificial light and blue line for natural light; (b) original image under artificial and natural light conditions; (c) normalized image; (d) histograms for the hue and saturation components after the normalization operation.

Grouping Skin-Tone Pixels
Once the initial image has been smoothed and normalized, a binary image is obtained in which every white pixel represents a skin-tone pixel from the original image. The skin-tone classification is based on the normalized image and the considerations about the HSI colour space mentioned in Section 3.1. A pixel is classified as a skin-tone pixel if its hue and saturation components lie in a certain range. However, these ranges still vary slightly depending on the light conditions, the user's skin colour and the background. They are defined by two rectangles in the HS plane: the R1 rectangle for natural light (0 ≤ H ≤ 15; 20 ≤ S ≤ 120) and the R2 rectangle for artificial light (0 ≤ H ≤ 30; 60 ≤ S ≤ 160).

Once the binary skin image has been computed, it is necessary to deal with wrongly classified pixels: not only false positives but also false negatives. In order to remove background noisy pixels and fill small gaps inside the areas of interest, all 5x5 neighbourhoods are analyzed. The value of a certain pixel may change from skin to background and vice versa depending on the average amount of skin pixels in its 5x5 neighbourhood.
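As an illustration, the "reference average" normalization and the rectangle test described above can be sketched as follows. The rectangle bounds are the ones quoted in the text; how the hue and saturation components are scaled, and how negative normalized values are handled, are not specified in the paper and are assumptions here:

```python
# HS-plane skin rectangles as quoted in the text (component scaling assumed
# to match the paper's own units).
R1_NATURAL = ((0, 15), (20, 120))     # (h_min, h_max), (s_min, s_max)
R2_ARTIFICIAL = ((0, 30), (60, 160))

def reference_average_normalize(pixels):
    """'Reference average' lighting compensation: subtract the per-band
    image mean from every pixel. Values may go negative; a real
    implementation would re-offset or clamp them."""
    n = float(len(pixels))
    mr = sum(p[0] for p in pixels) / n
    mg = sum(p[1] for p in pixels) / n
    mb = sum(p[2] for p in pixels) / n
    return [(r - mr, g - mg, b - mb) for r, g, b in pixels]

def is_skin(h, s, artificial=False):
    """True if (hue, saturation) falls inside the rectangle for the
    current lighting condition."""
    (h_min, h_max), (s_min, s_max) = R2_ARTIFICIAL if artificial else R1_NATURAL
    return h_min <= h <= h_max and s_min <= s <= s_max
```

The 5x5 majority cleanup described above would then be run over the binary map produced by `is_skin`.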
The next step consists of the elimination of all those pixels that are not critical for shape comparison. The use of classical convolution operators is not necessary because the image at this stage is a binary one, so edge borders were found by keeping in the image just those pixels that have at least one background pixel in their neighbourhood. Optimal edge maps, where no redundant pixels could be found, were produced using a 4-connectivity neighbourhood.

Blobs Analysis
Blobs (Binary Linked Objects) are groups of pixels that share the same label due to their connectivity in a binary image. After a blob analysis, all those pixels that belong to the same object share a unique label, so every blob can be identified by this label. Blob analysis creates a list of all the blobs in the image, along with global features of each one: area, perimeter length, compactness and mass centre. After this stage, the image contains blobs that represent skin areas of the original image. The user's hand may be located using the global features available for every blob, but the system must have been informed whether the user is right- or left-handed. Most likely, the two largest blobs will be the user's hand and face, so it is assumed that the hand corresponds to the rightmost blob for a right-handed user and vice versa.

4. HAND POSTURE RECOGNITION
Once the user's hand has been segmented, a model-based approach based on the Hausdorff distance, which works on edge map images and a visual memory, is proposed to recognize the hand posture.

Hausdorff Distance (HD)
The Hausdorff distance [Ruc96] is a measure between two sets of points. Unlike most shape comparison methods, the Hausdorff distance between two images can be calculated without the explicit pairing of points of their respective data sets, A and B. Formally, given two finite sets of points A = {a1,...,am} and B = {b1,...,bn}, the Hausdorff distance is defined as:

H(A, B) = max(h(A, B), h(B, A))   (1)

where

h(A, B) = max_{a∈A} min_{b∈B} ||a − b||   (2)

The function h(A, B) is called the directed Hausdorff distance from set A to B. It ranks each point of A based on its distance to the nearest point in B and then uses the largest ranked such point as the measure of distance. The Manhattan distance is used to define the distance between any two data points.

In order to avoid erroneous results due to occlusions or noise conditions, the Hausdorff distance can be naturally extended to find the best partial Hausdorff distance between sets A and B. It is defined as:

h_k(A, B) = k-th_{a∈A} min_{b∈B} ||a − b||   (3)

Using the previous definition, the Bidirectional Partial Hausdorff distance is defined as follows:

H_kl(A, B) = max(h_k(A, B), h_l(B, A))   (4)

Matching Hand Postures using Hausdorff Distance (HD) and the Visual Memory System (VMS)
A hand posture matching approach was developed by introducing the notion of a visual memory system and building on the Hausdorff distance introduced in Section 4.1.

4.2.1 Visual Memory System
Because our problem approach is slightly different from the common one [San01, Bar98, Ruc96], different solutions are required. Our system has a visual memory (VMS) which stores the recognizable pattern postures. In order to address the diverse visual aspects of each stored pattern and non-local distortions, the pattern postures are represented by Q different samples:

VMS = {P_mq^in : m = 1,2,...,M; q = 1,2,...,Q; i = 1,...,3; n = 1,2,...,N}   (5)

where each qth sample of each mth pattern hand posture is defined by its binary edge map (i = 1), its distance transform [Pag92] (i = 2) and its morphologic information (i = 3), respectively:

P_mq^1n = {p_mq^11, p_mq^12, ..., p_mq^1N}; P_mq^2n = {p_mq^21, p_mq^22, ..., p_mq^2N}; P_mq^3n = {p_mq^31, p_mq^32, ..., p_mq^3N}   (6)

The VMS can be created in a start-up stage, where the user introduces a specific hand gesture vocabulary. New postures can be added to the visual system whenever the user wants. Furthermore, the hand gesture vocabulary could be obtained by a hand tracking system such as the one proposed in [San99]. A detail of four hand posture patterns that compose the VMS is shown in Figure 4.

Every segmented user hand posture (UHP), the jth of J, is defined as:

UHP = {U_j^in : j = 1,2,...,J; i = 1,...,3; n = 1,...,N}   (7)

In the same way as in (6), each ith value of U_j is defined by its binary edge map, distance transform and morphologic information.
Figure 4. Detail of six stored hand postures of the Visual Memory System (VMS). There are four samples for each posture (q = 4). Each one is represented by its edge map (a), its morphologic information (b) (e.g. Gesture 3: L:52,75, U:85, Height: 87, Width: 42, X Center: 19, Y Center: 36, Area: 402) and its distance transform (c).

4.2.2 Matching Hand Postures
The matching solution involves finding the minimum bidirectional partial Hausdorff distance (4) between U_j^1n and P_mq^1n, where U_j^1n represents the jth user hand posture edge map computed by the user hand posture detection module (described in Section 3) and P_mq^1n denotes the edge map of each one of the M stored patterns in the VMS.

With the aim of considering the different changes in appearance and the non-rigid distortions of user hand postures with respect to the stored patterns, a resize operation is computed. Firstly, U_j^1n is scaled to the size of the qth sample, which represents a stored pattern P_mq^1n in the VMS, and secondly, the specific pattern in the VMS, P_mq^1n, is scaled to the size of U_j^1n. Figure 5 illustrates this process in steps 1 and 4. We denote these two scaling operations S(U_j^1n) and S(P_mq^1n), respectively.

In order to accelerate the computation of the bidirectional partial Hausdorff distance and make it suitable for real-time recognition, a pre-processing stage is performed for each one of the stored patterns in the VMS. This pre-processing step is based on the storage of the edge map, the distance transform and the morphologic information in the initialization process of the system (described in Section 2). The distance transform [Pag92] of each edge map posture image generates an image where background pixels have been labelled with their Manhattan distance to the closest object pixel.

The computation of the bidirectional partial Hausdorff distance consists of a seven-stage processing scheme, which is graphically shown in Figure 5. This process is repeated for every stored pattern that composes the VMS. Finally, the posture is classified as the pattern for which the minimum bidirectional partial Hausdorff distance is obtained. Postures may not belong to the VMS, so a decision rule is used for rejecting them:

output = P_m*q^1n if H_kl(S(U_j^1n), P_m*q^1n) ≤ α; otherwise, reject   (8)

where P_m*q^1n is the classified pattern. In this way, the output is discarded if the minimum bidirectional partial Hausdorff distance is higher than a given threshold α.

The computation of the partial directed distance can be sped up using the notion of the distance transform image. It is based on overlapping the edge points of S(U_j^1n) on P_mq^1n's distance transform image. Then, for every edge point in S(U_j^1n), the value in P_mq^1n's distance transform is read, and the directed partial distance h_k(S(U_j^1n), P_mq^1n) corresponds to the kth quartile of those values. h_l(S(P_mq^1n), U_j^1n) is computed in the same fashion as h_k(S(U_j^1n), P_mq^1n), replacing U_j^1n by P_mq^1n and P_mq^1n by U_j^1n. In each comparison stage, it is only necessary to compute the distance transform image of U_j^1n, since that of P_mq^1n has previously been calculated and stored in the VMS.

Figure 6(a) shows a detail of the comparison process, corresponding to the computation of h_k(U_j^1n, P_mq^1n), where the user hand posture is compared with the distance transform image of one of the stored patterns in the VMS. It should be noticed that, if the compared postures are similar, the size change does not gravely affect the shape of the posture, as observed in 6(a). If they are different, on the other hand, an image deformation can take place, as observed in 6(b).

5. EXPERIMENTS
The hand posture detection and recognition approach was implemented in Borland Delphi on an AMD K7 700MHz PC running Windows XP with a USB Logitech QuickCam webcam. The recognition approach has been tested with real-world images, as the input mechanism of a user interface, a surveillance system, an autonomous robot vision system or virtual environment applications. All tests were executed on 128x96 images with 24-bit colour depth, sampled at a rate of 25 frames/sec.
The system was tested under two different light conditions, natural daylight and artificial light, in order to test its adaptability to different lighting conditions. Two different users were also considered. The first one, named the "original subject", was the one who created the visual memory. The second one tested the application using the same visual memory created by the original subject. The gesture vocabulary of our gesture interface is composed of 26 postures, shown in Figure 7, each of which represents a gesture command mode. It can be observed that some postures are really similar, like the posture pairs 1-20, 2-21, 5-25, 18-19 and 21-22.

The average processing rate on the AMD K7 700MHz PC system is 8 frames per second. Since it is impossible for a human to make 25 postures in one second, the analysis of just 8 frames is feasible. Nevertheless, this aspect needs to be improved; a tracking system should be implemented in order to avoid the complete analysis of each frame. In any case, the use of a faster PC system than the one currently used would assure real-time processing, which would allow the use of the proposed approach in real-time video applications.

Recognition Performance and Discussion
In order to test the system's recognition performance, 100 frames were processed for each posture in the visual memory (VMS), and three outputs were considered: "right" meant a correct classification; "discarded" meant a posture which did not belong to the visual memory (VMS); and "wrong" was an incorrect classification. The graphic illustrated in Figure 8 shows the average classification output under artificial and natural light conditions for the original subject, and the results under artificial light conditions for the second subject. The system reaches a 95% recognition rate under optimal circumstances (same visual memory user and artificial light conditions). Even under different light conditions (natural light), with slightly different segmentation results, the system is able to reach an 86% recognition rate (Figure 8b).

In order to study the system's adaptability to hand morphologic changes, the test was repeated with a different user. Results are shown in Figure 9. Although reasonably high, these results imply that each user should create their own visual memory, in the same fashion as voice recognition applications must be trained for each user. In these situations, it has been observed that better results are obtained if the samples for each stored posture in the visual memory (VMS) are generated using more than one user, because a certain degree of variability is added to the samples.

Figure 5. Computation of the bidirectional partial Hausdorff distance for the hand posture recognition problem. D.T denotes the distance transform image.

Results show a high recognition rate: the system achieves a 90% average recognition rate. It can be affirmed that if a posture edge map is properly segmented in the image processing stage, the Hausdorff matching approach will classify it properly. What is more, posture pairs which are really similar, like 1-20, 2-21, 5-25 and 21-22 in Figure 7, are properly classified.

Figure 6. Shape comparison between a user hand posture and a stored pattern in the VMS: (a) similar postures, (b) deformed postures, (1) user hand posture edge map, (2) specific pattern edge map in the VMS, (3) user hand posture edge map (in blue) overlapped on the pattern's distance transform image.

Figure 10 shows the time required to process each of the stages of the developed approach. A total of 500 frames were analysed in order to obtain an average of the time needed by each stage.

Figure 10. Time distribution, measured in percentage, of each one of the stages of the hand posture detection and recognition approach.
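Putting the recognition core together, the distance-transform pre-computation and the decision rule of eq. (8) can be sketched as below. The names (`Pattern`, `classify`) are illustrative, not from the paper, and the scaling step and morphologic filtering are omitted; edge maps are given as lists of (x, y) points:

```python
from collections import namedtuple

# Hypothetical container for a stored VMS sample: its edge points and
# its precomputed Manhattan distance transform.
Pattern = namedtuple("Pattern", "name edges dt")

def manhattan_dt(edges, w, h):
    """Two-pass (chamfer-style) Manhattan distance transform:
    dt[y][x] = city-block distance to the nearest edge point (cf. [Pag92])."""
    INF = w + h
    dt = [[INF] * w for _ in range(h)]
    for x, y in edges:
        dt[y][x] = 0
    for y in range(h):                    # forward pass
        for x in range(w):
            if y > 0:
                dt[y][x] = min(dt[y][x], dt[y - 1][x] + 1)
            if x > 0:
                dt[y][x] = min(dt[y][x], dt[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):        # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                dt[y][x] = min(dt[y][x], dt[y + 1][x] + 1)
            if x < w - 1:
                dt[y][x] = min(dt[y][x], dt[y][x + 1] + 1)
    return dt

def h_k(edges, dt, k):
    """Partial directed distance h_k (eq. 3) via distance-transform
    lookup: read each edge point's distance, take the k-th ranked value."""
    ranked = sorted(dt[y][x] for x, y in edges)
    return ranked[min(k, len(ranked)) - 1]

def classify(u_edges, u_dt, patterns, k, l, alpha):
    """Decision rule of eq. (8): return the name of the pattern with the
    minimum bidirectional partial distance H_kl, or None (reject) if
    that minimum exceeds the threshold alpha."""
    best, best_d = None, float("inf")
    for p in patterns:
        d = max(h_k(u_edges, p.dt, k), h_k(p.edges, u_dt, l))
        if d < best_d:
            best, best_d = p, d
    return best.name if best_d <= alpha else None
```

Each comparison only needs the user posture's distance transform, since every pattern's transform is computed once at initialization, which is exactly the saving the timing breakdown of Figure 10 relies on.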

Figure 7. Posture data, numbered consecutively from 0 to 25.

Figure 8. Recognition rates: (a) original subject, artificial light (94% right, 5% discarded, 1% wrong); (b) original subject, natural light (84% right, 14% discarded, 2% wrong).

6. CONCLUSIONS AND FUTURE WORK
A fast processing pipeline and a robust matching carried out through a Hausdorff distance approach, together with a visual memory system and the resolution of non-rigid distortions, have been presented for the hand posture detection and recognition problem. Different light conditions, backgrounds and human users have been tested in order to evaluate the system's performance. The recognition rates show that the system is robust against similar postures. Moreover, the runtime behaviour allows its use in real-time video applications with a simple personal computer and a standard USB camera.

Future research will concentrate on investigating efficient hierarchical N-template matching and on studying other robust and efficient methods for face and hand location, in order to integrate the components of the system into a gesture interface for an anthropomorphic autonomous robot with an active vision system [Her99] and into virtual environment applications.

Figure 9. Recognition rates: second subject, artificial light (86% right, 11% discarded, 3% wrong).

7. ACKNOWLEDGMENTS
This work has been partially supported by the Canary Islands Autonomous Government under Project PI2000/042.

8. REFERENCES
[Bar98] Takacs, B. Comparing face images using the modified Hausdorff distance. Pattern Recognition, Vol. 31, No. 12, pp. 1873-1881, 1998.
[Bob95] Bobick, A. and Wilson, A. A state-based technique for the summarization and recognition of gesture. In Proc. IEEE Fifth Int. Conf. on Computer Vision, Cambridge, pp. 382-388, 1995.
[Dam97] van Dam, A. Post-WIMP user interfaces. Communications of the ACM, Vol. 40, No. 2, pp. 63-67, February 1997.
[Dav94] Davis, J. and Shah, M. Visual gesture recognition. IEE Proc. Vision, Image and Signal Processing, 141(2):101-106, 1994.
[Her99] Hernández, M., Cabrera, J., Castrillón, M., Domínguez, A., Guerra, C., Hernández, D., and Isern, J. DESEO: An active vision system for detection, tracking and recognition. Volume 1542 of Lecture Notes in Computer Science, 1999.
[Hun95] Hunter, E., Schlenzig, J., and Jain, R. Posture estimation in reduced-model gesture input systems. Proc. Int'l Workshop on Automatic Face and Gesture Recognition, pp. 296-301, 1995.
[Jai95] Jain, R., Kasturi, R., and Schunck, B. Machine Vision. McGraw-Hill, 1995.
[Jon98] Jones, M. and Rehg, J. M. Statistical colour models with application to skin detection. Technical Report Series, Cambridge Research Laboratory, December 1998.
[Ju97] Ju, S., Black, M., Minneman, S., and Kimber, D. Analysis of gesture and action in technical talks for video indexing. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR'97.
[Mag95] Maggioni, C. GestureComputer: new ways of operating a computer. Proc. Int'l Workshop on Automatic Face and Gesture Recognition, 1995.
[Pag92] Paglieroni, D. W. Distance transforms. Computer Vision, Graphics and Image Processing: Graphical Models and Image Processing, 54:56-74, 1992.
[Pen98] Pentland, A. Smart rooms: machine understanding of human behavior. In Computer Vision for Human-Machine Interaction, Cambridge University Press, pp. 3-21, 1998.
[Ruc96] Rucklidge, W. Efficient Visual Recognition Using the Hausdorff Distance. Volume 1173 of Lecture Notes in Computer Science, Springer-Verlag, 1996.
[San01] Sánchez-Nielsen, E., Lorenzo-Navarro, J., and Hernández-Tejera, F. M. Increasing efficiency of the Hausdorff approach for tracking real scenes with complex environments. 11th IEEE International Conference on Image Analysis and Processing, pp. 131-136, 2001.
[San99] Sánchez-Nielsen, E. and Hernández-Tejera, F. M. Hand tracking using the Hausdorff distance. Proceedings of the IV Ibero-American Symposium on Pattern Recognition, 1999.
[Tri01] Triesch, J. and von der Malsburg, C. A system for person-independent hand posture recognition against complex backgrounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 12, December 2001.
[Turk98] Turk, M. (ed.). Proceedings of the Workshop on Perceptual User Interfaces, San Francisco, CA, November 1998.
[Wil95] Freeman, W. T. and Weissman, C. D. Television control by hand gestures. IEEE Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, June 1995.