Synthesizing Robust Adversarial Examples
Abstract
Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world. We present an algorithm for synthesizing adversarial examples that are robust over a chosen distribution of transformations, and we demonstrate the efficacy of this algorithm in both the 2D and 3D case. We succeed in computing and fabricating physical-world 3D adversarial objects that are robust over a large, realistic distribution of 3D viewpoints, demonstrating that the algorithm successfully produces adversarial three-dimensional objects that are adversarial in the physical world.

1. Introduction

Our contributions are as follows:
• We develop Expectation Over Transformation (EOT), the first algorithm that produces robust adversarial examples: single adversarial examples that are simultaneously adversarial over an entire distribution of transformations.

• We consider the problem of constructing 3D adversarial examples under the EOT framework, viewing the 3D rendering process as part of the transformation, and we show that the approach successfully synthesizes adversarial objects.

2.1. Expectation Over Transformation

In the standard single-viewpoint case, an adversarial example x' for a target class y_t is found by maximizing the log-likelihood of the target class subject to a perturbation constraint:

    argmax_{x'}  log P(y_t | x')
    subject to  ||x' − x||_p < ε
                x' ∈ [0, 1]^d

This approach has been shown to be effective at generating adversarial examples. However, prior work has shown that these adversarial examples fail to remain adversarial under image transformations that occur in the real world, such as angle and viewpoint changes (Luo et al., 2016; Lu et al., 2017).

To address this issue, we introduce Expectation Over Transformation (EOT). The key insight behind EOT is to model such perturbations within the optimization procedure. Rather than optimizing the log-likelihood of a single example, EOT uses a chosen distribution T of transformation functions t taking an input x' controlled by the adversary to the "true" input t(x') perceived by the classifier. Furthermore, rather than simply taking the norm of x' − x to constrain the solution space, given a distance function d(·, ·), EOT instead aims to constrain the expected effective distance between the adversarial and original inputs, which we define as:

    E_{t∼T}[d(t(x'), t(x))]
For example, in the case where x' is a texture and t(x') is a rendering corresponding to the texture, we care to minimize the visual difference between t(x') and t(x) rather than minimizing the distance in texture space.

Thus, we have the following optimization problem:

    argmax_{x'}  E_{t∼T}[log P(y_t | t(x'))]
    subject to  E_{t∼T}[d(t(x'), t(x))] < ε
                x' ∈ [0, 1]^d

In practice, the distribution T can model perceptual distortions such as random rotation, translation, or addition of noise. However, the method generalizes beyond simple transformations; transformations in T can perform operations such as 3D rendering of a texture.

We maximize the objective via stochastic gradient descent. We approximate the gradient of the expected value by sampling transformations independently at each gradient descent step and differentiating through the transformation.
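As a concrete illustration of this sampling-based gradient estimation, here is a minimal PyTorch-style sketch of a single ascent step on the EOT objective. It is not the authors' implementation; `model`, `sample_transform`, the target class index, and the hyperparameters are all assumed placeholders.

```python
import torch
import torch.nn.functional as F

def eot_step(x_adv, model, sample_transform, target, n_samples=10, lr=1e-2):
    """One stochastic-gradient step on  E_{t~T}[ log P(y_target | t(x_adv)) ],
    estimated by sampling transformations and differentiating through them."""
    x_adv = x_adv.clone().requires_grad_(True)
    obj = 0.0
    for _ in range(n_samples):
        t = sample_transform()                    # draw t ~ T (differentiable in its input)
        logits = model(t(x_adv))                  # classify the transformed adversarial input
        obj = obj + F.log_softmax(logits, dim=1)[:, target].mean()
    (obj / n_samples).backward()                  # Monte Carlo estimate of the expectation
    with torch.no_grad():
        x_adv = (x_adv + lr * x_adv.grad).clamp(0.0, 1.0)   # ascend and stay a valid image
    return x_adv.detach()
```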
2.2. Choosing a distribution of transformations

Given its ability to synthesize robust adversarial examples, we use the EOT framework for generating 2D examples, 3D models, and ultimately physical-world adversarial objects. Within the framework, however, there is a great deal of freedom in the actual method by which examples are generated, including choice of T, distance metric, and optimization method.

2.2.1. 2D CASE

In the 2D case, we choose T to approximate a realistic space of possible distortions involved in printing out an image and taking a natural picture of it. This amounts to a set of random transformations of the form t(x) = Ax + b, which are more thoroughly described in Section 3. These random transformations are easy to differentiate, allowing for a straightforward application of EOT.
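For illustration, the sketch below samples one such transformation: a random rotation, scaling, and translation applied with differentiable resampling, followed by additive Gaussian noise. The parameter ranges are placeholders, not the values used in the paper (those are listed in the appendix tables).

```python
import math
import torch
import torch.nn.functional as F

def sample_2d_transform(max_rot=math.radians(30), scale_range=(0.9, 1.4),
                        max_shift=0.1, noise_std=0.05):
    """Return a differentiable t(x) ~ T approximating print-and-photograph distortions.
    All ranges here are illustrative placeholders."""
    angle = (2 * torch.rand(1).item() - 1) * max_rot
    scale = torch.empty(1).uniform_(*scale_range).item()
    tx, ty = ((2 * torch.rand(2) - 1) * max_shift).tolist()
    theta = torch.tensor([[scale * math.cos(angle), -scale * math.sin(angle), tx],
                          [scale * math.sin(angle),  scale * math.cos(angle), ty]])

    def t(x):  # x: (N, C, H, W) image batch in [0, 1]
        grid = F.affine_grid(theta.to(x).unsqueeze(0).expand(x.size(0), -1, -1),
                             list(x.size()), align_corners=False)
        warped = F.grid_sample(x, grid, align_corners=False)
        return (warped + noise_std * torch.randn_like(warped)).clamp(0.0, 1.0)

    return t
```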
2.2.2. 3D CASE

We note that the domain and codomain of t ∈ T need not be the same. To synthesize 3D adversarial examples, we consider textures (color patterns) x corresponding to some chosen 3D object (shape), and we choose a distribution of transformation functions t(x) that take a texture and render a pose of the 3D object with the texture x applied. The transformation functions map a texture to a rendering of an object, simulating functions including rendering, lighting, rotation, translation, and perspective projection of the object. Finding textures that are adversarial over a realistic distribution of poses allows for transfer of adversarial examples to the physical world.

To solve this optimization problem, EOT requires the ability to differentiate through the 3D rendering function with respect to the texture. Given a particular pose and choices for all other transformation parameters, a simple 3D rendering process can be modeled as a matrix multiplication and addition: every pixel in the rendering is some linear combination of pixels in the texture (plus some constant term). Given a particular choice of parameters, the rendering of a texture x can be written as Mx + b for some coordinate map M and background b.

Standard 3D renderers, as part of the rendering pipeline, compute the texture-space coordinates corresponding to on-screen coordinates; we modify an existing renderer to return this information. Then, instead of differentiating through the renderer, we compute and then differentiate through Mx + b. We must re-compute M and b using the renderer for each pose, because EOT samples new poses at each gradient descent step.
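A minimal sketch of this idea follows, assuming the (non-differentiable) renderer can be asked for a per-pixel texture-coordinate (UV) map, an object mask, and the background for each sampled pose; these helper outputs and shapes are assumptions for illustration, not the interface of any particular renderer.

```python
import torch.nn.functional as F

def render_from_texture(texture, uv, mask, background):
    """Re-express a fixed-pose rendering as a differentiable function of the texture
    (the "M x + b" form described above).

    texture:    (1, 3, Ht, Wt) adversarial texture (requires_grad=True)
    uv:         (1, H, W, 2)   per-pixel texture coordinates in [-1, 1], from the renderer
    mask:       (1, 1, H, W)   1 where the object covers the pixel, 0 elsewhere
    background: (1, 3, H, W)   background image (the "+ b" term)
    """
    # The "M x" term: every rendered pixel is a bilinear combination of texture pixels.
    foreground = F.grid_sample(texture, uv, align_corners=False)
    return mask * foreground + (1.0 - mask) * background

# uv, mask, and background must be re-queried from the renderer for every sampled pose,
# since EOT draws a new pose at each gradient descent step.
```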
2.3. Optimizing the objective

Once EOT has been parameterized, i.e. once a distribution T is chosen, the issue of actually optimizing the induced objective function remains. Rather than solving the constrained optimization problem given above, we use the Lagrangian-relaxed form of the problem, as Carlini & Wagner (2017c) do in the standard single-viewpoint case:

    argmax_{x'}  E_{t∼T}[log P(y_t | t(x'))] − λ E_{t∼T}[d(t(x'), t(x))]

In order to encourage visual imperceptibility of the generated images, we set d(x', x) to be the ℓ2 norm in the LAB color space, a perceptually uniform color space where Euclidean distance roughly corresponds with perceptual distance (McLaren, 1976). Using distance in LAB space as a proxy for human perceptual distance is a standard technique in computer vision. Note that E_{t∼T}[||LAB(t(x')) − LAB(t(x))||_2] can be sampled and estimated in conjunction with E[P(y_t | t(x'))]; in general, the Lagrangian formulation gives EOT the ability to constrain the search space (in our case, using LAB distance) without computing a complex projection. Our optimization, then, is:

    argmax_{x'}  E_{t∼T}[log P(y_t | t(x')) − λ ||LAB(t(x')) − LAB(t(x))||_2]

We use projected gradient descent to maximize the objective, and clip to the set of valid inputs (e.g. [0, 1] for images).
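The following sketch combines the pieces into one Lagrangian-relaxed ascent step. It is illustrative only: the distance term uses a plain ℓ2 distance in pixel space as a stand-in for the LAB-space distance described above (a differentiable RGB-to-LAB conversion would slot in where noted), and `model`, `sample_transform`, and the hyperparameters are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def eot_lagrangian_step(x_adv, x_orig, model, sample_transform, target,
                        lam=0.1, lr=1e-2, n_samples=10):
    """One ascent step on  E_t[ log P(y_t | t(x')) - lam * d(t(x'), t(x)) ].
    Assumes each sampled t is deterministic given its drawn parameters, so the
    same transformation is applied to the adversarial and the original input."""
    x_adv = x_adv.clone().requires_grad_(True)
    obj = 0.0
    for _ in range(n_samples):
        t = sample_transform()
        adv_t, orig_t = t(x_adv), t(x_orig)
        log_p = F.log_softmax(model(adv_t), dim=1)[:, target].mean()
        # Stand-in distance; the paper uses ||LAB(t(x')) - LAB(t(x))||_2 here instead.
        dist = (adv_t - orig_t).flatten(1).norm(dim=1).mean()
        obj = obj + log_p - lam * dist
    (obj / n_samples).backward()
    with torch.no_grad():
        x_adv = (x_adv + lr * x_adv.grad).clamp(0.0, 1.0)   # clip back to valid inputs
    return x_adv.detach()
```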
3. Evaluation
First, we describe our procedure for quantitatively evaluating the efficacy of EOT for generating 2D, 3D, and physical-world adversarial examples. Then, we show that we can reliably produce transformation-tolerant adversarial examples in both the 2D and 3D case. We show that we can synthesize and fabricate 3D adversarial objects, even those with complex shapes, in the physical world: these adversarial objects remain adversarial regardless of viewpoint, camera noise, and other similar real-world factors. Finally, we present a qualitative analysis of our results and discuss some challenges in applying EOT in the physical world.

[Figure: a 2D adversarial example (original "Persian cat", adversarial target "jacamar"), showing classifier confidence in the true / adversarial classes over sampled transformations: original 97%/0%, 99%/0%, 19%/0%, 95%/0%; adversarial 0%/91%, 0%/96%, 0%/83%, 0%/97%.]
Table 1. Evaluation of 1000 2D adversarial examples with random targets. We evaluate each example over 1000 randomly sampled
transformations to calculate classification accuracy and adversariality (percent classified as the adversarial class).
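The two quantities reported in these tables (classification accuracy and adversariality) can be estimated by sampling transformations at evaluation time, along the lines of the sketch below; the classifier, sampler, and class indices are assumed placeholders.

```python
import torch

@torch.no_grad()
def evaluate(x, model, sample_transform, true_class, adv_class, n_samples=1000):
    """Fraction of sampled transformations classified as the true class (accuracy)
    and as the adversarial target class (adversariality)."""
    correct = adversarial = 0
    for _ in range(n_samples):
        t = sample_transform()
        pred = model(t(x)).argmax(dim=1).item()   # assumes a batch of one input
        correct += int(pred == true_class)
        adversarial += int(pred == adv_class)
    return correct / n_samples, adversarial / n_samples
```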
Figure 3. A 3D adversarial example showing classifier confidence in true / adversarial classes over randomly sampled poses. [Renderings show the original "turtle" classified as the true class with 97%, 96%, 96%, and 20% confidence (0% adversarial), and the adversarial "jigsaw puzzle" example classified as the adversarial class with 100%, 99%, 99%, and 83% confidence (0% true).]

Figure 4. A sample of photos of unperturbed 3D prints. The unperturbed 3D-printed objects are consistently classified as the true class.
For each pair, in our final evaluation we used the example with the smallest λ that still maintained >90% adversariality over 100 held-out random transformations.

We consider 10 3D models, obtained from 3D asset sites, that represent different ImageNet classes: barrel, baseball, dog, orange, turtle, clownfish, sofa, teddy bear, car, and taxi.

We choose 20 random target classes per 3D model, and use EOT to synthesize adversarial textures for the 3D models with minimal parameter search (four pre-chosen λ values were tested across each (3D model, target) pair). For each of the 200 adversarial examples, we sample 100 random transformations from the distribution at evaluation time. Table 2 summarizes results, and Figure 3 shows renderings of drawn samples, along with classification probabilities. See the appendix for more examples.

The adversarial objects have a mean adversariality of 83.4% with a long left tail, showing that EOT usually produces highly adversarial objects. See the appendix for a plot of the distribution of adversariality over the 200 examples.

3.4. Physical adversarial examples

In the case of the physical world, we cannot capture the "true" distribution unless we perfectly model all physical phenomena. Therefore, we must approximate the distribution and perform EOT over the proxy distribution. We find that this works well in practice: we produce objects that are optimized for the proxy distribution, and we find that they generalize to the "true" physical-world distribution and remain adversarial.

Beyond modeling the 3D rendering process, we need to model physical-world phenomena such as lighting effects and camera noise. Furthermore, we need to model the 3D printing process: in our case, we use commercially available full-color 3D printing. With the 3D printing technology we use, we find that color accuracy varies between prints, so we model printing errors as well. We approximate all of these phenomena by a distribution of transformations under EOT: in addition to the transformations considered for 3D in simulation, we consider camera noise, additive and multiplicative lighting, and per-channel color inaccuracies.

We evaluate physical adversarial examples over two 3D-printed objects: one of a turtle (where we consider any of the 5 turtle classes in ImageNet as the "true" class), and one of a baseball. The unperturbed 3D-printed objects are correctly classified as the true class with 100% accuracy over a large number of samples. Figure 4 shows example photographs of unperturbed objects, along with their classifications.

We choose target classes for each of the 3D models at random ("rifle" for the turtle, and "espresso" for the baseball), and we use EOT to synthesize adversarial examples. We evaluate the performance of our two 3D-printed adversarial objects over a variety of viewpoints.
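The physical-world effects named above (camera noise, additive and multiplicative lighting, and per-channel color inaccuracy) can be folded into the sampled transformation as simple differentiable image operations. The sketch below is illustrative only; the parameter ranges are placeholders, not the values from the paper's appendix.

```python
import torch

def apply_physical_effects(img, light_mult=(0.8, 1.2), light_add=0.1,
                           channel_err=0.1, noise_std=0.02):
    """Approximate lighting, per-channel printing error, and camera noise.
    img: (N, 3, H, W) rendering in [0, 1]; all ranges are illustrative placeholders."""
    n, dev = img.size(0), img.device
    mult = torch.empty(n, 1, 1, 1, device=dev).uniform_(*light_mult)            # multiplicative lighting
    add = torch.empty(n, 1, 1, 1, device=dev).uniform_(-light_add, light_add)   # additive lighting
    channel = 1 + torch.empty(n, 3, 1, 1, device=dev).uniform_(-channel_err, channel_err)
    noise = noise_std * torch.randn_like(img)                                   # camera noise
    return (img * mult * channel + add + noise).clamp(0.0, 1.0)
```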
Table 2. Evaluation of 200 3D adversarial examples with random targets. We evaluate each example over 100 randomly sampled poses to
calculate classification accuracy and adversariality (percent classified as the adversarial class).
Figure 6. Three pictures of the same adversarial turtle (all classified as "rifle"), demonstrating the need for a wide distribution and the efficacy of EOT in finding examples robust across wide distributions of physical-world effects like lighting.

Figure 7. A side-by-side comparison of a 3D-printed model (left) along with a printout of the corresponding texture, printed on a standard laser color printer (center), and the original digital texture (right), showing significant error in color accuracy in printing.
3.5. Discussion

Robust adversarial examples are only guaranteed to be adversarial within the chosen distribution of transformations, not outside of the chosen distribution. In constructing physical-world adversarial objects, we use a crude approximation of the rendering and capture process, and this succeeds in ensuring robustness in a diverse set of environments; see, for example, Figure 6, which shows the same adversarial turtle in vastly different lighting conditions. When a stronger guarantee is needed, a domain expert may opt to model the perceptual distribution more precisely in order to better constrain the search space.
Semantically relevant misclassification. Interestingly, for the majority of viewpoints where the adversarial target class is not the top-1 predicted class, the classifier also fails to correctly predict the source class. Instead, we find that the classifier often classifies the object as an object that is semantically similar to the adversarial target; while generating the adversarial turtle to be classified as a rifle, for example, the second most popular class (after "rifle") was "revolver," followed by "holster" and then "assault rifle." Similarly, when generating the baseball to be classified as an espresso, the example was often classified as "coffee" or "bakery."

Breaking defenses. The existence of robust adversarial examples implies that defenses based on randomly transforming the input are not secure: adversarial examples generated using EOT can circumvent these defenses. Athalye et al. (2018) investigate this further and circumvent several published defenses by applying Expectation Over Transformation.

Limitations. There are two possible failure cases of the EOT algorithm. As with any adversarial attack, if the attacker is constrained to too small an ℓp ball, EOT will be unable to create an adversarial example. Another case is when the distribution of transformations the attacker chooses is too "large". As a simple example, it is impossible to make an adversarial example robust to a function that randomly perturbs each pixel value to the interval [0, 1] uniformly at random.

4. Related Work

4.1. Attacks

State-of-the-art neural networks are vulnerable to adversarial examples (Szegedy et al., 2013; Biggio et al., 2013). Researchers have proposed a number of methods for synthesizing adversarial examples in the white-box setting (with access to the gradient of the classifier), including L-BFGS (Szegedy et al., 2013), the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), the Jacobian-based Saliency Map Attack (JSMA) (Papernot et al., 2016b), a Lagrangian relaxation formulation (Carlini & Wagner, 2017c), and DeepFool (Moosavi-Dezfooli et al., 2015), all for what we call the single-viewpoint case where the adversary directly controls the input to the neural network. Projected Gradient Descent (PGD) can be seen as a universal first-order adversary (Madry et al., 2017). A number of approaches find adversarial examples in the black-box setting, with some relying on the transferability phenomenon and making use of substitute models (Papernot et al., 2017; 2016a) and others applying black-box gradient estimation (Chen et al., 2017).
Moosavi-Dezfooli et al. (2017) show the existence of universal (image-agnostic) adversarial perturbations, small perturbation vectors that can be applied to any image to induce misclassification. Their work solves a different problem than we do: they propose an algorithm that finds perturbations that are universal over images; in our work, we give an algorithm that finds a perturbation to a single image or object that is universal over a chosen distribution of transformations. In preliminary experiments, we found that universal adversarial perturbations, like standard adversarial perturbations to single images, are not inherently robust to transformation.
4.2. Defenses

Some progress has been made in defending against adversarial examples in the white-box setting, but a complete solution has not yet been found. Many proposed defenses (Papernot et al., 2016c; Hendrik Metzen et al., 2017; Hendrycks & Gimpel, 2017; Meng & Chen, 2017; Zantedeschi et al., 2017; Buckman et al., 2018; Ma et al., 2018; Guo et al., 2018; Dhillon et al., 2018; Xie et al., 2018; Song et al., 2018; Samangouei et al., 2018) have been found to be vulnerable to iterative optimization-based attacks (Carlini & Wagner, 2016; 2017c;b;a; Athalye et al., 2018).

Some of these defenses, namely those that can be viewed as "input transformation" defenses, are circumvented through application of EOT.
4.3. Physical-world adversarial examples

In the first work on physical-world adversarial examples, Kurakin et al. (2016) demonstrate the transferability of FGSM-generated adversarial misclassification on a printed page. In their setup, a photo is taken of a printed image with QR code guides, and the resultant image is warped, cropped, and resized to become a square of the same size as the source image before classifying it. Their results show the existence of 2D physical-world adversarial examples for approximately axis-aligned views, demonstrating that adversarial perturbations produced using FGSM can transfer to the physical world and are robust to camera noise, rescaling, and lighting effects. However, Kurakin et al. (2016) do not synthesize targeted physical-world adversarial examples, they do not evaluate other real-world 2D transformations such as rotation, skew, translation, or zoom, and their approach does not translate to the 3D case.

Sharif et al. (2016) develop a real-world adversarial attack on a state-of-the-art face recognition algorithm, where adversarial eyeglass frames cause targeted misclassification in portrait photos. The algorithm produces robust perturbations by optimizing over a fixed set of inputs: the attacker collects a set of images and finds a perturbation that minimizes cross-entropy loss over the set. The algorithm solves a different problem than we do in our work: it produces adversarial perturbations universal over portrait photos taken head-on from a single viewpoint, while EOT produces 2D/3D adversarial examples robust over transformations. Their approach also includes a mechanism for enhancing perturbations' printability using a color map to address the limited color gamut and color inaccuracy of the printer. Note that this differs from our approach to achieving printability: rather than creating a color map, we find an adversarial example that is robust to color inaccuracy. Our approach has the advantage of working in settings where color accuracy varies between prints, as was the case with our 3D printer.

Concurrently to our work, Evtimov et al. (2017) proposed a method for generating robust physical-world adversarial examples in the 2D case by optimizing over a fixed set of manually captured images. However, the approach is limited to the 2D case, with no clear translation to 3D, where there is no simple mapping between what the adversary controls (the texture) and the observed input to the classifier (an image). Furthermore, the approach requires the taking and preprocessing of a large number of photos in order to produce each adversarial example, which may be expensive or even infeasible for many objects.

Brown et al. (2016) apply our EOT algorithm to produce an "adversarial patch", a small image patch that can be applied to any scene to cause targeted misclassification in the physical world.

Real-world adversarial examples have also been demonstrated in contexts other than image classification/detection, such as speech-to-text (Carlini et al., 2016).
5. Conclusion

Our work demonstrates the existence of robust adversarial examples, adversarial inputs that remain adversarial over a chosen distribution of transformations. By introducing EOT, a general-purpose algorithm for creating robust adversarial examples, and by modeling 3D rendering and printing within the framework of EOT, we succeed in fabricating three-dimensional adversarial objects. With access only to low-cost commercially available 3D printing technology, we successfully print physical adversarial objects that are classified as a chosen target class over a variety of angles, viewpoints, and lighting conditions by a standard ImageNet classifier. Our results suggest that adversarial examples and objects are a practical concern for real-world systems, even when the examples are viewed from a variety of angles and viewpoints.
Acknowledgments

We wish to thank Ilya Sutskever for providing feedback on early parts of this work, and we wish to thank John Carrington and ZVerse for providing financial and technical support with 3D printing. We are grateful to Tatsu Hashimoto, Daniel Kang, Jacob Steinhardt, and Aditi Raghunathan for helpful comments on early drafts of this paper.

References

Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., and Hsieh, C.-J. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec '17, pp. 15–26, New York, NY, USA, 2017. ACM. ISBN 978-1-4503-5202-4. doi: 10.1145/3128572.3140448. URL http://doi.acm.org/10.1145/3128572.3140448.

[...] International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1gJ1L2aW. Accepted as oral presentation.

Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. 2017. URL https://arxiv.org/abs/1706.06083.

McLaren, K. XIII. The development of the CIE 1976 (L*a*b*) uniform colour space and colour-difference formula. Journal of the Society of Dyers and Colourists, 92(9):338–341, September 1976. doi: 10.1111/j.1478-4408.1976.tb03301.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1478-4408.1976.tb03301.x.

Meng, D. and Chen, H. MagNet: a two-pronged defense against adversarial examples. In ACM Conference on Computer and Communications Security (CCS), 2017. arXiv preprint arXiv:1705.09064.

Moosavi-Dezfooli, S., Fawzi, A., and Frossard, P. DeepFool: a simple and accurate method to fool deep neural networks. CoRR, abs/1511.04599, 2015. URL http://arxiv.org/abs/1511.04599.

Moosavi-Dezfooli, S.-M., Fawzi, A., Fawzi, O., and Frossard, P. Universal adversarial perturbations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Papernot, N., McDaniel, P., and Goodfellow, I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. 2016a. URL https://arxiv.org/abs/1605.07277.

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., and Swami, A. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security & Privacy, 2016b.

Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pp. 582–597. IEEE, 2016c.

Samangouei, P., Kabkab, M., and Chellappa, R. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BkJ3ibb0-. Accepted as poster.

Sharif, M., Bhagavatula, S., Bauer, L., and Reiter, M. K. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pp. 1528–1540, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4139-4. doi: 10.1145/2976749.2978392. URL http://doi.acm.org/10.1145/2976749.2978392.

Song, Y., Kim, T., Nowozin, S., Ermon, S., and Kushman, N. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJUYGxbCW. Accepted as poster.

Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. 2013. URL https://arxiv.org/abs/1312.6199.

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. 2015. URL https://arxiv.org/abs/1512.00567.

Xie, C., Wang, J., Zhang, Z., Ren, Z., and Yuille, A. Mitigating adversarial effects through randomization. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Sk9yuql0Z. Accepted as poster.

Zantedeschi, V., Nicolae, M.-I., and Rawat, A. Efficient defenses against adversarial attacks. arXiv preprint arXiv:1707.06728, 2017.
A. Distributions of Transformations
Under the EOT framework, we must choose a distribution of transformations, and the optimization produces an adversarial
example that is robust under the distribution of transformations. Here, we give the specific parameters we chose in the 2D
(Table 4), 3D (Table 5), and physical-world case (Table 6).
Table 4. Distribution of transformations for the 2D case, where each parameter is sampled uniformly at random from the specified range.
Table 5. Distribution of transformations for the 3D case when working in simulation, where each parameter is sampled uniformly at
random from the specified range.
Table 6. Distribution of transformations for the physical-world 3D case, approximating rendering, physical-world phenomena, and
printing error.
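Since the table bodies are not reproduced here, the sketch below only illustrates the sampling mechanism itself; the parameter names and ranges are hypothetical placeholders and should be replaced with the values from Tables 4-6.

```python
import random

# Hypothetical placeholder ranges; substitute the ranges from Tables 4-6.
PARAM_RANGES_2D = {
    "rotation_degrees": (-30.0, 30.0),
    "scale": (0.9, 1.4),
    "translation": (-0.05, 0.05),
    "gaussian_noise_stdev": (0.0, 0.1),
}

def sample_parameters(ranges):
    """Draw each transformation parameter uniformly at random from its range."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

params = sample_parameters(PARAM_RANGES_2D)  # one draw from the distribution of transformations
```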
[Appendix figures: renderings and photographs of additional examples with classifier confidence P(true) / P(adv) over sampled transformations, for original / adversarial class pairs including European fire salamander, caldron, barrel / guillotine, baseball, turtle, barracouta, tiger cat / tiger, speedboat / crossword puzzle, baseball / Airedale, orange, dog / bittern, teddy bear / sock, clownfish / panpipe, and sofa / sturgeon.]
Figure 13. A histogram of adversariality (percent of 100 samples classified as the adversarial class) across the 200 3D adversarial
examples.