What's Up CAPTCHA?
A CAPTCHA Based on Image Orientation

Rich Gossweiler, Maryam Kamvar, Shumeet Baluja
Google, Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043
[email protected], [email protected], [email protected]
ABSTRACT
We present a new CAPTCHA which is based on identifying an image's upright orientation. This task requires analysis of the often complex contents of an image, a task which humans usually perform well and machines generally do not. Given a large repository of images, such as those from a web search result, we use a suite of automated orientation detectors to prune those images that can be automatically set upright easily. We then apply a social feedback mechanism to verify that the remaining images have a human-recognizable upright orientation. The main advantages of our CAPTCHA technique over the traditional text recognition techniques are that it is language-independent, does not require text-entry (e.g. for a mobile device), and employs another domain for CAPTCHA generation beyond character obfuscation. This CAPTCHA lends itself to rapid implementation and has an almost limitless supply of images. We conducted extensive experiments to measure the viability of this technique.

Categories and Subject Descriptors
D.4.6 [Security and Protection]: Access Control and Authentication

General Terms
Security, Human Factors, Experimentation.

1. INTRODUCTION
With an increasing number of free services on the internet, we find a pronounced need to protect these services from abuse. Automated programs (often referred to as bots) have been designed to attack a variety of services. For example, attacks are common on free email providers to acquire accounts. Nefarious bots use these accounts to send spam emails, to post spam and advertisements on discussion boards, and to skew the results of on-line polls.

To thwart automated attacks, services often ask users to solve a puzzle before being given access to a service. These puzzles, first introduced by von Ahn et al. in 2003 [2], were CAPTCHAs: Completely Automated Public Turing test to tell Computers and Humans Apart. CAPTCHAs are designed to be simple problems that can be quickly solved by humans, but are difficult for computers to solve. Using CAPTCHAs, services can distinguish legitimate users from computer bots while requiring minimal effort by the human user.

We present a novel CAPTCHA which requires users to adjust randomly rotated images to their upright orientation. Previous research has shown that humans can achieve accuracy rates above 90% for rotating high resolution images to their upright orientation, and can achieve a success rate of approximately 84% for thumbnail images [27]. However, rotating images to their upright orientation is a difficult task for computers and can only be done successfully for a subset of images [15][19].

Figure 1 illustrates that some images are:

(A) easy for both computers and people to orient (because the image contains a face, which can be detected and oriented by computers),
(B) easy for humans to orient (because the image contains an object, e.g. a bird, that is easily recognized by humans) but difficult for computers to orient (because the image contains multiple objects with few guidelines for meaningful segmentation, and the object in the foreground is of an irregular, deformable shape),
(C) difficult for both people and computers to orient (because the image is ambiguous and there is no "correct" upright orientation).

To obtain candidate images for our CAPTCHA system, we start with a large repository and then remove images that a computer can successfully orient as well as those that are difficult for humans to orient. For example, all of the images returned from an image search start as potential candidates for our system. We then use a suite of automated orientation detectors to remove those that can be set upright by a computer. We discuss the system used to automatically determine upright orientation in Section 2. We then apply a social feedback mechanism to verify that the remaining images are easily oriented by humans. In order to identify images that people cannot orient, we compute the variance of users' submitted orientations and reject images which have a high variance. We discuss this social-feedback mechanism in detail in Section 3.

Our CAPTCHA technique achieves high success rates for humans and low success rates for bots, does not require text entry, and is more enjoyable for the user than text-based CAPTCHAs. We discuss two user studies we have performed to demonstrate both the viability and the user-experience of our system in Section 4. In Section 5, we present directions for future study.

1.1 BACKGROUND: CAPTCHAs
Traditional CAPTCHAs require the user to identify a series of letters that may be warped or obscured by distracting backgrounds and other noise in the image. Various amounts of warping and distraction can be used; examples are shown in Figure 2.

Figure 2: Typical character-recognition CAPTCHAs (from Google's Gmail, Yahoo Mail, xdrive.com, forexhound.com).

Recently, many character recognition CAPTCHAs have been deciphered using automated computer vision techniques. These methods have been custom designed to remove noise and to segment the images to make the characters amenable to optical character recognition [3][4][5]. Because of the large pragmatic and economic incentives for spammers to defeat CAPTCHAs, the techniques introduced in academia to defeat CAPTCHAs are soon likely to be in widespread use by spammers. To minimize the success of these automated methods, systems increase the noise and warping used in these CAPTCHAs. Unfortunately, this not only makes it harder for computers to solve, but it also makes it difficult for people to solve, leading to higher error rates [8][9] and higher associated frustration levels.

To address this, numerous alternate CAPTCHAs (including image-based ones) have been proposed [1][6][7][8]. In designing a new CAPTCHA, the basic tenets for creating a CAPTCHA (from [10]) should be kept in mind:

1. Easy for most people to solve
2. Difficult for automated bots to solve
3. Easy to generate and evaluate

It is straightforward to create a system that fulfills the first two requirements. The first requirement suggests the need for usability evaluations, ensuring that people can solve the CAPTCHA in a reasonable amount of time and with reasonable success rates. The second requirement suggests that we test state-of-the-art automated methods against the CAPTCHA. In the CAPTCHA proposed here, we ensure the automated methods cannot be used to defeat our CAPTCHA by using them to filter out images which can be automatically recognized and oriented.

The third requirement is harder to fulfill; it is this requirement that presents the greatest challenge to image-based CAPTCHA systems. The early success of the text CAPTCHAs was aided by the ease with which they could be generated: random sequences of letters could be chosen, distorted, and distracting pixels, noise, colors, etc. added. Subsequent image-based CAPTCHAs were proposed which required users to identify images with labels. The difficulty with these systems is that they require a priori knowledge of the image labels. Reliable labels are not available for most images on the web, so common techniques used to obtain labels included:

(1) using the label assigned to an image by a search engine,
(2) using the context of the page to determine a label,
(3) using images that were labeled when they were encountered in a different task, or
(4) using games to extract the labels from users (such as the ESP game [11]).

Unfortunately, the labels obtained by the former two methods are often noisy and unreliable in practice, so people are needed to manually verify the labels. The latter two approaches provide less noisy labels. However, even in the cases in which labels can be obtained, it is necessary to be careful how they are used. Asking the user to come up with the label may be difficult unless many labels are assigned to each image. Furthermore, unless exact matches are entered, similarity distances between given and expected answers may be quite complex to compute (for example, a number of measurements can be used: edit distance, ad-hoc semantic distance, thesaurus distance, WordNet distance, etc.).

Other, more interesting uses of labeled images, such as finding sets of images with recurring themes (or images that do not belong to the same set), are possible [10]. However, it is likely that
when a small set of N images is given, and the goal is to find which of the N images does not pertain to the same set (i.e. the anomalous CAPTCHA, as described in [10]), automated methods may be able to make significant inroads. For example, if N-1 images are of a chair in several different orientations and the anomalous image is of a tree, the use of current computer-vision techniques will be able to narrow down the candidates rapidly (e.g. using local-feature detection [20] and its many variants [21]).

In the CAPTCHA we propose, we are careful not to provide the user with a small set of images to compare. Any similarity computation must be done against the entire set of possible images, without any a priori filtering clues given. The success of our CAPTCHA rests on the fact that orienting an image is an AI-hard problem. In the next section, we review the many systems that attempt to determine an image's upright orientation. Although a few systems achieve success, their success is, when tested in realistic scenarios, limited to a small subset of image types [19].

2. DETECTING ORIENTATION
Interest in automated orientation detection arose rapidly with the advent of digital cameras and camera phones that did not have built-in physical orientation sensors. When images were taken, software systems needed a method to determine whether the image was portrait (upright) or landscape (horizontal). The problem is still relevant because of the large-scale scanning and digitization of printed material.

The seemingly simple task of making an image upright is quite difficult to automate over a wide variety of photographic content. There are several classes of images which can be successfully oriented by computers. Some objects, such as faces, cars, pedestrians, sky, grass, etc. [22][23], are easily recognizable by computers. It is important to note that computer-vision techniques have not yet been successful at unconstrained object detection; therefore, it is infeasible to recognize the vast majority of objects in typical images and use the knowledge of the object's shape to orient the image.

Instead of relying on object recognition, the majority of the techniques explored for upright detection do not attempt to understand the contents of the image. Rather, they rely on an assortment of high-level statistics about regions of the image (such as edges, colors, color gradients, textures), combined with a statistical or machine learning approach, to categorize the image orientation [12]. For example, many typical vacation images (such as sunsets, beaches, etc.) have an easily recognizable pattern of light and dark or consistent color patches that can be exploited to yield good results.

Many images, however, are difficult for computers to orient. For example, indoor scenes have variations in lighting sources, and abstract and close-up images provide the greatest challenge to both computers and people, often because no clear anchor points or lighting sources exist.

The classes of images that are easily oriented by computers are explicitly handled in our system. A detailed examination of a recent machine learning approach [19] is given below. It is incorporated in our system to ensure that the chosen images are difficult for computers to solve.

2.1 LEARNING IMAGE ORIENTATION
In order to identify images that are easy for computers to orient, we pass the images through an automated orientation detection system developed by Baluja [19]. Although the particular machine learning tools and features used make this orientation-detection system distinct, the overall architecture is typical of many current systems.

When the orientation detection system receives an image, it computes a number of simple transformations on the image, yielding 15 single-channel images:

• 1-3: Red, Green, Blue (R,G,B) channels.
• 4-6: Y, I, Q (a transformation of R,G,B) channels.
• 7-9: Normalized versions of R,G,B (linearly scaled to span 0-255).
• 10-12: Normalized versions of Y,I,Q (linearly scaled to span 0-255).
• 13: Intensity (simple average of R, G, B).
• 14: Horizontal edge image computed from intensity.
• 15: Vertical edge image computed from intensity.

For each of these single-band images, the system computes the mean and variance of the entire image as well as of square sub-regions of the image. The sub-regions cover (1/2)x(1/2) to (1/6)x(1/6) of the image (there are a total of 91 = 1+4+9+16+25+36 squares). The mean and variance of vertical and horizontal slices of the image that cover 1/2 to 1/6 of the image (there are a total of 20 = 2+3+4+5+6 vertical and 20 horizontal slices) are also computed. In sum, there are 1965 features representing means (15*(91+20+20)) and 1965 features representing variances, for a total of 3930 features.

Figure 3: Features extracted from an image. A rotated input image is decomposed into the 15 channels listed above; the means and variances of blocks form a feature vector of 1965 means and 1965 variances.
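The feature accounting above can be made concrete with a short numpy sketch. This is illustrative only: the paper does not give the YIQ coefficients or the exact edge operator, so we assume the standard NTSC YIQ transform and simple finite differences.

```python
import numpy as np

def channels(img):
    """Build the 15 single-channel images from an HxWx3 uint8 RGB array."""
    rgb = img.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # NTSC YIQ transform (assumed; the paper does not give coefficients).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b

    def norm(c):  # linearly rescale to span 0-255
        lo, hi = c.min(), c.max()
        return (c - lo) * 255.0 / (hi - lo) if hi > lo else np.zeros_like(c)

    intensity = (r + g + b) / 3.0
    # Simple finite differences stand in for the unspecified edge operator.
    h_edges = np.abs(np.diff(intensity, axis=0, append=intensity[-1:, :]))
    v_edges = np.abs(np.diff(intensity, axis=1, append=intensity[:, -1:]))
    return ([r, g, b, y, i, q]
            + [norm(c) for c in (r, g, b, y, i, q)]
            + [intensity, h_edges, v_edges])

def extract_features(img):
    """Per channel: mean and variance of the whole image, 91 square
    sub-regions (1x1 .. 6x6 grids), and 20 + 20 slices, giving
    15 * 131 = 1965 means and 1965 variances (3930 features)."""
    means, variances = [], []
    for c in channels(img):
        h, w = c.shape
        regions = []
        for n in range(1, 7):          # 1 + 4 + 9 + 16 + 25 + 36 = 91 squares
            for yy in range(n):
                for xx in range(n):
                    regions.append(c[yy*h//n:(yy+1)*h//n, xx*w//n:(xx+1)*w//n])
        for n in range(2, 7):          # 2 + 3 + 4 + 5 + 6 = 20 slices each way
            for k in range(n):
                regions.append(c[k*h//n:(k+1)*h//n, :])    # horizontal slice
                regions.append(c[:, k*w//n:(k+1)*w//n])    # vertical slice
        means.extend(reg.mean() for reg in regions)
        variances.extend(reg.var() for reg in regions)
    return np.array(means + variances)   # length 3930
```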
These 'retinal', or localized, features have been successfully employed for detection tasks in a variety of visual domains. Figure 3 shows the features in detail.

The image is then rotated by a set amount, and the process repeats. Each time, the feature vector is passed through a classifier (in this case a machine-learning based AdaBoost [25] classifier that is trained to give a +1 response if the image is upright and a -1 response otherwise). The classifier was previously trained using thousands of images for which the upright orientation was known (these were labeled +1) and which were then rotated by random amounts (these rotations were labeled -1). Although a description of AdaBoost and its training is beyond the scope of this paper, the classifiers found by AdaBoost are both simple to compute (orders of magnitude faster than the somewhat worse-performing Support Vector Machine based classifiers for this task) and memory efficient; both are important considerations for deployment.

As Figure 4 illustrates, when an image is given for classification, it is rotated to numerous orientations, depending on the accuracy needed, and features are extracted from the image at each orientation. Each set of these features is then passed through a classifier. The classifier is trained to output a real value between +1.0 for upright and -1.0 for not upright. The rotation with the maximal output (closest to +1.0) is chosen as the correct one. Figure 4 shows four orientations; however, any number can be used.

Figure 4: An input image rotated to four orientations; the classifier scores each rotation (e.g. -0.6, -0.2, +0.7) and the highest-scoring rotation is chosen as upright.
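A minimal sketch of this rotate-and-score loop, assuming an `extract_features` routine like the one above and a trained scorer `classifier_score` that maps a feature vector to a value in [-1, +1]; both names are our own stand-ins, not the authors' code:

```python
from scipy.ndimage import rotate

def predict_upright_angle(image, extract_features, classifier_score,
                          angles=(0, 90, 180, 270)):
    """Rotate the image to each candidate angle, score each rotation with a
    trained classifier (+1.0 ~ upright, -1.0 ~ not), return the best angle."""
    best_angle, best_score = None, float("-inf")
    for angle in angles:
        rotated = rotate(image, angle, reshape=False)  # rotate about the center
        score = classifier_score(extract_features(rotated))
        if score > best_score:
            best_angle, best_score = angle, score
    # More candidate angles can be tested when finer accuracy is needed.
    return best_angle
```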
When tried on a variety of images to determine the correct upright orientation from only the four canonical 90° rotations, the system yielded wildly varying accuracies, ranging from approximately 90% down to the 25% random-guessing rate, depending on the content of the image. The average performance on outdoor photographs, architecture photographs, and typical tourist-type photographs was significantly higher than the performance on abstract photographs, close-ups, and backgrounds. An analysis of the features used to make the discriminations found that the edge features play a significant role. This is important since they are not reliant on color information, so black-and-white images can also be handled, albeit with less accuracy.

For our use, we use multiples of the classifiers described above: 180 AdaBoost classifiers were trained to examine each image and determine the susceptibility of that image to automated attacks when used as a CAPTCHA. The details of the image selection process and how the AdaBoost classifiers are used are given in the next section.

3. SELECTING IMAGES FOR THE ROTATIONAL CAPTCHA SYSTEM
As previously mentioned, a two-step process is needed to determine which images should be included in our CAPTCHA system. First, in Section 3.1, we describe the automated methods used to detect whether a candidate image should be excluded because it is easily oriented by a computer. In Section 3.2, we describe the social-feedback mechanism that can harness the power of users to further identify which images should be excluded from the dataset because they are too difficult for humans to orient.

3.1 Removing Computer-Detectable Images
It is important not to simply select random images for this task. There are many cues which can quickly reveal the upright orientation of an image to automated systems; these images must be filtered out. For example, if typical vacation or snapshot photos are used, automated rotation accuracies can be in the 90% range [14][15][19]. The existence of any of these cues in the presented images will severely limit the effectiveness of the approach. Three common cues are listed below:

1. Text: Usually the predominant orientation of text in an image reveals the upright orientation of the image.
2. Faces and People: Most photographs are taken with the face(s) / people upright in the image.
3. Blue skies, green grass, and beige sand: These are all revealing clues, and are present in many travel/tourist photographs found on the web. Extending this beyond color: in general, the sky often has fewer textures/edges in comparison to the ground. Additional cues found important in human tests include "grass", "trees", "cars", "water" and "clouds" [27][16].

Ideally, we would like to use only images that do not contain any of the elements listed above. All of the images chosen for presentation to a user were scanned automatically for faces and for the existence of large blocks of text; if either existed, the image was no longer a candidate. Although accurate detectors do not exist for all the objects of interest listed in (3) above, the types of images containing the other objects (trees, cars, clouds) were often outdoors and were effectively eliminated through the use of the automated orientation classifiers described in Section 2.1.

If the image had neither text nor faces, it was passed through the set of 180 AdaBoost classifiers in order to further ensure that the candidate image was not too easy for automated systems. The output of these classifiers determined whether the image was accepted into the final image pool. The following heuristics were used when analyzing the 180 outputs of the classifiers:
• If a single orientation dominated the predictions of the classifiers, the image was rejected: a computer can already set such an image upright, so it is too easy for automated attack.
• If the predictions of the classifiers together had too large an entropy, the image was rejected. Because the classifiers are trained independently, they make different guesses on ambiguous images. Some images (such as simple textures, macro images, etc.) have no discernible upright orientation for humans or computers. Therefore, if the entropy of the guesses was high, the image may not actually have a discernible correct orientation.

These two heuristics attempt to find images that are not too easy, yet still possible to orient correctly. The goal is to be conservative on both ends of the spectrum; the images must be neither too easy nor too hard. Images were accepted when no single orientation dominated the results, while ensuring that there were still peaks in a histogram of the orientations returned.
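A sketch of how these two acceptance checks might look, applied to the per-classifier orientation guesses. The histogram bin width and both thresholds are hypothetical placeholders; the paper does not publish the values used:

```python
import math
from collections import Counter

def accept_image(predicted_angles, num_bins=36,
                 dominance_thresh=0.5, entropy_thresh=4.0):
    """Apply the two selection heuristics to the orientation guesses made
    by the suite of classifiers. Thresholds and bin width are placeholders."""
    bin_width = 360 // num_bins
    counts = Counter(int(a) % 360 // bin_width for a in predicted_angles)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    # Too easy: a single orientation dominates, so a bot can orient the image.
    if max(probs) > dominance_thresh:
        return False
    # Too hard: guesses are spread nearly uniformly (high entropy), so the
    # image may have no discernible upright orientation at all.
    entropy = -sum(p * math.log2(p) for p in probs)
    if entropy > entropy_thresh:
        return False
    # Otherwise the histogram still has peaks without one dominant bin.
    return True
```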
There are many methods to make the selection even more amenable to people while remaining difficult for computers. It has been found in [15] that determining the correct orientation of images of indoor objects is more difficult than of outdoor objects. This may be due to the larger variance of lighting directionality and the larger amounts of texture throughout the image. Therefore, using a classifier to first select only indoor images may be useful. Second, due to sometimes-warped objects, a lack of shading and lighting cues, and often unrealistic colors, cartoons also make ideal candidates. Automated classifiers to determine whether an image is a cartoon also exist [26] and may be useful here to scan the web for such images. Finally, although we did not alter the content of the images, it may be possible to simply alter the color mapping, overall lighting curves, and hue/saturation levels to yield images that appear unnatural but remain recognizable to people.
3.2 Removing Images Difficult for Humans to Orient
Once we have pruned from our data set the images that a computer can successfully orient, we identify images that are too difficult for a human to successfully rotate upright. To do this, we present several randomly rotated images to the user in the deployed system. One of the images presented is a "new" candidate image being considered to join the pool of valid images. As large numbers of users rotate the new image, we examine the average and standard deviation of the human orientations.

We identify images that are difficult to rotate upright by analyzing the angles which multiple users submitted as upright for a given image. Images that have a high variation in their submitted orientations are likely to have no clear upright orientation. Based on this simple analysis of user submissions, we can identify and exclude difficult images from our dataset.

This social feedback mechanism also has the added advantage of being able to "correct" images whose default orientation is not originally upright, for example images where the photographer may not have held the camera exactly upright. Though the variance of the submitted orientations across users may be small, the average orientation may differ from the image's posted orientation. Users will correct this image to its natural upright position, compensating for the angle of the original image. This social-feedback mechanism, used in conjunction with the automated techniques to exclude machine-recognizable images, produces a dataset for our rotational CAPTCHA system.
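One practical detail: submitted angles wrap around at ±180°, so a naive average misbehaves (+179° and -179° would average to 0° instead of 180°). The paper does not spell out how the average and standard deviation are computed; a standard remedy is circular statistics, sketched here:

```python
import math

def circular_mean_and_std(angles_deg):
    """Mean and spread of submitted orientations, treating angles as points
    on a circle so that -179 and +179 are neighbors.
    Returns (mean in (-180, 180], circular standard deviation in degrees)."""
    xs = [math.cos(math.radians(a)) for a in angles_deg]
    ys = [math.sin(math.radians(a)) for a in angles_deg]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    mean = math.degrees(math.atan2(my, mx))
    r = math.hypot(mx, my)  # resultant length in [0, 1]; 1 = perfect agreement
    std = math.degrees(math.sqrt(-2.0 * math.log(max(r, 1e-12))))
    return mean, std

# High spread -> reject the image; low spread with a nonzero mean ->
# "socially correct" the posted orientation by the mean offset.
mean, std = circular_mean_and_std([3.0, -4.5, 1.0, 2.5])  # hypothetical votes
```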
4. USER EXPERIMENTS
In this section, we describe two user studies. The first study was designed to determine whether this system would result in a viable CAPTCHA system in terms of user-success rates and bot-failure rates. The second study was designed to informally gauge user reactions to the system in comparison to existing CAPTCHAs. Since these were uncontrolled studies, we did not measure task-completion times.

4.1 Viability Study
The goal of this study was to understand whether users would determine the same upright orientation for candidate images in the rotational CAPTCHA system. We found that after applying a social-correction heuristic (which can be applied in real time in a deployed system), our CAPTCHA system meets high human-success and high computer-failure standards.

4.1.1 Image Dataset
The set of images used for our rotational CAPTCHA experiment was collected from the top 1,000 search results for popular image queries. We rejected from the dataset any image which could be machine-recognizable, according to the process described in Section 3. From the remaining candidate images, we selected a set of approximately 500 images to be the final dataset used in our study. This ensures that our dataset meets the two requirements laid forth by [2]:

• First, that this CAPTCHA does not base its security on the secrecy of a database. The set of images used is the set of images on the WWW, and is thus non-secretive. Further, it is possible to alter the images to produce ones that are arbitrarily more difficult.
• Second, that there is an automated way to generate problem instances, along with their solutions. We generate the problem instances by issuing an image search query; their solution (the image's orientation) defaults to the posted orientation of the image on the web, but may be changed to incorporate the corrective offset found by the social-feedback mechanism.

To normalize the shape and size of the images, we scaled each image to a 180x180 pixel square and then applied a circular mask to remove the image corners.
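A minimal sketch of this normalization, using PIL purely for illustration (the paper does not name a library). The circular mask matters: the corners of a rotated square would otherwise reveal the rotation angle.

```python
from PIL import Image, ImageDraw

def normalize(img: Image.Image) -> Image.Image:
    """Scale to 180x180 and apply a circular mask, so that corner artifacts
    cannot betray the rotation applied to the image."""
    img = img.convert("RGBA").resize((180, 180))
    mask = Image.new("L", (180, 180), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, 180, 180), fill=255)
    img.putalpha(mask)  # corners become fully transparent
    return img

# e.g. normalize(Image.open("candidate.jpg"))
```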
4.1.2 Experiment Setup
500 users were recruited through Google-internal company email groups used for miscellaneous communications. The users came from a wide cross-section of the company and included engineers, sales associates, administrative assistants, and product managers. Users participated in the study from their own computers and were not compensated for their participation. Since the study was done remotely at each participant's computer, no human moderator was present. Participants received an email with a link to the experiment website, which included a brief introduction to the study.
Each user was asked to rotate 10 images to their natural upright position. The first six images and their offset angles were the same for each user. We kept these trials constant to ensure that we would have a significant sample size for some of the problem instances. Figure 6 shows the first six images at the orientation at which they were shown to the users. The last four images and their offset angles were randomly selected at runtime. We did this to evaluate our technique on a wide variety of images. For each trial, we recorded the image ID, the image's offset angle (a number between ±180 indicating the orientation at which the image was presented to the user), and the user's final rotation angle (a number between ±180 indicating the angle at which the user submitted the image).

Figure 6: The first six images displayed to users.
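For concreteness, each logged trial can be modeled as a small record; the field and method names here are our own, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One logged rotation trial."""
    image_id: str
    offset_deg: float   # presentation angle, in (-180, 180]
    final_deg: float    # angle at which the user submitted the image

    def degrees_from_upright(self) -> float:
        """Absolute error relative to the posted upright orientation,
        folded into 0..180 (the quantity plotted in Figure 8)."""
        err = self.final_deg % 360.0
        return min(err, 360.0 - err)
```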
4.1.3 Results
We have created a system that has sufficiently high human-success rates and sufficiently low computer-success rates. When using three images, the rotational CAPTCHA system results in an 84% human-success metric and a .009% bot-success metric (assuming random guessing). These metrics are based on two variables: the number of images we require a user to rotate and the size of the acceptable error window (the number of degrees from upright which we still consider to be upright). Predictably, as the number of images shown becomes greater, the probability of correctly solving them decreases. However, as the error window increases, the probability of correctly solving them increases. The system which results in an 84% human success rate and a .009% bot success rate asks the user to rotate three images, each to within 16° of upright (8 degrees on either side of upright).
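The .009% figure is consistent with random guessing: a random angle lands inside a 16° window with probability 16/360, and all three images must succeed independently. A quick check:

```python
window_deg = 16.0   # counted as upright: within 8 degrees on either side
num_images = 3      # images the user must rotate

per_image = window_deg / 360.0          # chance a random guess lands in the window
bot_success = per_image ** num_images   # (16/360)^3 ~= 8.8e-5
print(f"{bot_success:.4%}")             # prints 0.0088%, i.e. ~.009%
```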
Figure 7 illustrates the average and standard deviation of users' final rotation angles for the first six images (the images which were shown to all of the users). There are some images which users rotate very accurately (images 1, 5 and 6), and some which users do not seem to rotate accurately (images 2, 3 and 4). The images with poor results can be attributed to three factors, each of which can be addressed by our social feedback mechanism:

Figure 7: Average degrees from the original orientation that each image was rotated.

1. Some images are simply difficult to determine which way is upright. Figure 8 shows one such image and plots the absolute number of degrees-from-upright at which each user rotated the image. Based on the standard deviation in responses, this image is not a good candidate for social correction. We see that its standard deviation was greater than half of the error window; it was deemed not to have an identifiable upright position, and it was rejected from the dataset.
Figure 8: An image with large distribution of orientations (plot of each user's absolute degrees-from-upright, 0-180, for image #1 across the ~500 users).
2. Some images' default upright orientation may not correspond to the users' view of their natural upright orientation. We designate the default upright orientation as the angle at which the image was originally taken. This is illustrated by the picture of the toy car (image #3). Figure 9 shows the original orientation of the image, in contrast to the orientation which most users thought was "natural", shown in the graph. Based on the low deviation in responses, this image is a good candidate for being "socially corrected". If this image were used after the social correction phase, the "upright" orientation would be changed to approximately 60° from the shown orientation.
Figure 9: The toy-car image (image #3) at its posted orientation, with a plot of users' submitted orientations; the responses cluster tightly around an angle roughly 60° from the posted orientation.

3. Some images have more than one natural upright orientation; Figure 10 shows an image with two possible natural orientations.

Figure 10: Two possible natural orientations of the image.

It is important to note that the decisions about whether an image falls into one of the above categories can be made in real time by a system that presents a user a "candidate" image in addition to the CAPTCHA images. The "candidate" image need not be used to influence the user's success at solving the CAPTCHA, but is simply used to gather information. The user is not informed of which image is a candidate image.

In our analysis, the human success rate is determined by the average probability that a user can rotate an image correctly. However, we exclude any images which fall into case 1 or case 3 outlined above; those images would be identified and subsequently rejected from the dataset by our social-feedback mechanism. If an image falls into case 2, we corrected the upright orientation based on the mode of the users' final rotations, as this could be similarly determined by the correction aspect of our social-feedback mechanism.

Human success rates are influenced by two factors: the size of the error window and the number of images needed to rotate. Table 1 shows the effect on human success as the size of the error window and the number of images we require a user to successfully orient vary. The configurations which have a success rate greater than 80% are highlighted in green.

Table 1: Human-success rates (%), as the number of images shown and the size of the acceptable error window vary.
4.2.2 Results
68.75% of users (11 users) preferred rotating images, and 31.25% of users (5 users) preferred deciphering text.

Comments from users who preferred the rotational approach indicated they thought that method was "easy", "cool", "fun" and "faster". One user stated that he preferred "visual cues over text", and many users referenced feeling like they were at an eye exam while deciphering the text.

Only two of the five users who stated their preference as "deciphering text" provided insight into their choice. One user pointed to an implementation flaw (that the slider should retain focus even when the mouse left its bounding box) as the reason he did not like the rotational approach, while the other user pointed to familiarity with the text CAPTCHA and its more absolute input mechanism as the rationale for her preference: "I prefer [deciphering text] since it requires simple keyboard inputs which are absolute. With rotating pictures I found myself continually making fine adjustments to make them perfectly upright, therefore taking a slight bit longer to accomplish. Also, I'm much more familiar with [deciphering text] since it's what most internet portals use for security purposes."

From these two studies, we conclude that not only is the rotational task a viable one but, compared to standard text deciphering, users may even prefer it.

Figure 12: Example of rotating an image on a mobile device.

Finally, another interesting aspect of this system relates to adoption and user perception. Most CAPTCHAs are viewed as intrusive and annoying. To alleviate user dissatisfaction with them, we can use images that keep the user within the overall experience of the website. For example, on a Disney sign-up page, Disney characters, movie stills, or cartoon sketches can be used as the images to rotate; eBay could use images of objects that are for sale; a Baseball Fantasy Group site could use baseball-related items when creating a user account.
Finally, our system provides opportunities for a number of interesting extensions. First, the set of images selected can be chosen to be more interesting or valuable to the end-user by displaying those that are related to the overall theme of the website. Second, more aggressive social correction can be used through the presentation of multiple images of which only a few must be uprighted; this gives real, and immediate, insight into which images may be too hard for users. Third, the large number of 3D models being created for independent applications, such as Google's Sketch-Up, can be used as sources of new images as well as full-object rotations.

7. ACKNOWLEDGMENTS
Many thanks are extended to Henry Rowley and Ranjith Unnikrishnan for the text-identification system. Thanks are also given to Kaari Flagstad Baluja for her valuable comments.

8. REFERENCES
[1] Shahreza, A., Shahreza, S. (2008) "Advanced Collage CAPTCHA", Fifth International Conference on Information Technology, pp. 1234-1235.
[2] von Ahn, L., Blum, M., Hopper, N., Langford, J. (2003) CAPTCHA: Using Hard AI Problems for Security. Advances in Cryptology, Eurocrypt 2003, pp. 294-311.
[3] Huang, S.Y., Lee, Y.K., Bell, G., Ou, Z.h. (2008) "A Projection-based Segmentation Algorithm for Breaking MSN and YAHOO CAPTCHAs", The 2008 International Conference of Signal and Image Engineering.
[4] Chellapilla, K., Simard, P. "Using Machine Learning to Break Visual Human Interaction Proofs (HIPs)", in L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pp. 265-272, MIT Press.
[5] Mori, G., Malik, J. (2003) "Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA", in Computer Vision and Pattern Recognition (CVPR-2003).
[6] Elson, J., Douceur, J., Howell, J., Saul, J. (2007) Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization, in Proceedings of the 14th ACM Conference on Computer and Communications Security.
[7] Golle, P. (2008) Machine Learning Attacks against the Asirra CAPTCHA, in Proceedings of the 15th ACM Conference on Computer and Communications Security.
[8] Chellapilla, K., Larson, K., Simard, P., Czerwinski, M. (2005) Designing Human Friendly Human Interaction Proofs (HIPs), CHI-2005.
[9] Yan, J., Ahmad, A.S.E. (2008) Usability of CAPTCHAs, or Usability Issues in CAPTCHA Design. In Symposium on Usable Privacy and Security (SOUPS) 2008.
[10] Chew, M., Tygar, D. (2004) Image Recognition CAPTCHAs, in Proceedings of the 7th International Information Security Conference (ISC 2004).
[11] von Ahn, L., Dabbish, L. (2004) Labeling Images with a Computer Game, CHI-2004.
[12] Vailaya, A., Zhang, H., Yang, C., Liu, F., Jain, A. (2002) "Automatic Image Orientation Detection", IEEE Transactions on Image Processing, 11(7).
[13] Wang, Y., Zhang, H. (2001) "Content-Based Image Orientation Detection with Support Vector Machines", in IEEE Workshop on Content-Based Access of Image and Video Libraries, pp. 17-23.
[14] Wang, Y., Zhang, H. (2004) "Detecting Image Orientation Based on Low-Level Visual Content", Computer Vision and Image Understanding, 2004.
[15] Zhang, L., Li, M., Zhang, H. (2002) "Boosting Image Orientation Detection with Indoor vs. Outdoor Classification", Workshop on Application of Computer Vision, 2002.
[16] Luo, J., Boutell, M. (2005) A Probabilistic Approach to Image Orientation Detection via Confidence-Based Integration of Low-Level and Semantic Cues, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5), pp. 715-726.
[17] Lyu, S. (2005) Automatic Image Orientation Determination with Natural Image Statistics, Proceedings of the 13th Annual ACM International Conference on Multimedia, pp. 491-494.
[18] Wang, L., Liu, X., Xia, L., Xu, G., Bruckstein, A. (2003) "Image Orientation Detection with Integrated Human Perception Cues (or Which Way Is Up)", ICIP-2003.
[19] Baluja, S. (2007) Automated Image-Orientation Detection: A Scalable Boosting Approach, Pattern Analysis & Applications, 10(3).
[20] Lowe, D.G. (2004) Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, 60(2), pp. 91-110.
[21] Tuytelaars, T., Mikolajczyk, K. A Survey on Local Invariant Features, preprint, Foundations and Trends in Computer Graphics and Vision, 1(1), pp. 1-106.
[22] Yang, M.H., Kriegman, D.J., Ahuja, N. (2002) "Detecting Faces in Images", IEEE-PAMI, 24(1).
[23] Bileschi, S., Leung, B., Rifkin, R. (2004) Towards Component-Based Car Detection, ECCV Workshop on Statistical Learning and Computer Vision.
[24] Rowley, H., Jing, Y., Baluja, S. (2006) Large-Scale Image-Based Adult-Content Filtering, International Conference on Computer Vision Theory and Applications.
[25] Freund, Y., Schapire, R. (1996) "Experiments with a New Boosting Algorithm", in Machine Learning: Proceedings of the Thirteenth International Conference.
[26] Lienhart, R., Hartmann, A. (2002) Classifying Images on the Web Automatically, Journal of Electronic Imaging, 11, 445.
[27] Luo, J., Crandall, D., Singhal, A., Boutell, M., Gray, R. "Psychophysical Study of Image Orientation Perception", Spatial Vision, 16(5).