art.attacks.evasion
Adversarial Patch
class art.attacks.evasion.AdversarialPatch(classifier:
Union[art.estimators.classification.classifier.ClassifierNeuralNetwork,
art.estimators.classification.classifier.ClassifierGradients], rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float =
1.0, learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16, patch_shape: Optional[Tuple[int, int, int]] = None)
generate(*args, **kwargs)
Parameters: x – An array with the original inputs. x is expected to have spatial dimensions.
y – An array with the original labels to be predicted.
set_params(**kwargs) → None
Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes.
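A minimal usage sketch (not from the original page): `classifier`, `x_train`, `y_train`, and `x_test` are placeholders for an already-wrapped ART neural-network classifier and image batches. In recent ART versions generate returns the trained patch together with its mask, and apply_patch pastes the patch onto new images; check your installed version.

```python
import numpy as np
from art.attacks.evasion import AdversarialPatch

# `classifier` is assumed to be an ART classifier wrapper (e.g. a
# KerasClassifier or PyTorchClassifier) trained on image data.
attack = AdversarialPatch(
    classifier=classifier,
    rotation_max=22.5,
    scale_min=0.1,
    scale_max=1.0,
    learning_rate=5.0,
    max_iter=500,
    batch_size=16,
)

# x_train: images with spatial dimensions, y_train: one-hot labels (placeholders).
patch, patch_mask = attack.generate(x=x_train, y=y_train)

# Apply the learned patch to new images at a chosen scale.
x_patched = attack.apply_patch(x_test, scale=0.5)
```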
class art.attacks.evasion.AdversarialPatchNumpy(classifier:
Union[art.estimators.classification.classifier.ClassifierNeuralNetwork,
art.estimators.classification.classifier.ClassifierGradients], target: int = 0, rotation_max: float = 22.5, scale_min: float = 0.1,
scale_max: float = 1.0, learning_rate: float = 5.0, max_iter: int = 500, clip_patch: Optional[Union[list, tuple]] = None,
batch_size: int = 16)
generate(*args, **kwargs)
Parameters: x – An array with the original inputs. x is expected to have spatial dimensions.
y – An array with the original labels to be predicted.
class art.attacks.evasion.AdversarialPatchTensorFlowV2(classifier:
Union[art.estimators.classification.classifier.ClassifierNeuralNetwork,
art.estimators.classification.classifier.ClassifierGradients], rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float =
1.0, learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16, patch_shape: Optional[Tuple[int, int, int]] = None)
Parameters: initial_patch_value (ndarray) – Patch value to use for resetting the patch.
Auto Attack
__init__(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: Union[int, float] = inf, eps: float =
0.3, eps_step: float = 0.1, attacks: Optional[List[art.attacks.attack.EvasionAttack]] = None, batch_size: int = 32,
estimator_orig: Optional[art.estimators.estimator.BaseEstimator] = None)
generate(*args, **kwargs)
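A hedged usage sketch: `classifier` and `x_test`/`y_test` are placeholders for a gradient-providing ART classifier and test data. By default AutoAttack runs its built-in list of attacks; a custom list can be passed via the attacks argument.

```python
import numpy as np
from art.attacks.evasion import AutoAttack

# `classifier` is a placeholder for an ART classifier with gradients.
attack = AutoAttack(
    estimator=classifier,
    norm=np.inf,
    eps=0.3,
    eps_step=0.1,
    batch_size=32,
)
x_adv = attack.generate(x=x_test, y=y_test)
```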
Auto Projected Gradient Descent
__init__(estimator: art.estimators.estimator.BaseEstimator, norm: Union[float, int] = inf, eps: float = 0.3, eps_step:
float = 0.1, max_iter: int = 100, targeted: bool = False, nb_random_init: int = 5, batch_size: int = 32, loss_type:
Optional[str] = None)
generate(*args, **kwargs)
Boundary Attack
Implementation of the boundary attack from Brendel et al. (2018). This is a powerful black-box attack
that only requires final class prediction.
__init__(estimator: art.estimators.classification.classifier.Classifier, targeted: bool = True, delta: float = 0.01, epsilon:
float = 0.01, step_adapt: float = 0.667, max_iter: int = 5000, num_trial: int = 25, sample_size: int = 20, init_size: int =
100) → None
generate(*args, **kwargs)
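A sketch of an untargeted run, assuming a wrapped classifier that only needs to expose class predictions (the attack is decision-based); all names are placeholders.

```python
from art.attacks.evasion import BoundaryAttack

# Decision-based: only the predicted class of `classifier` is used.
attack = BoundaryAttack(
    estimator=classifier,  # placeholder ART classifier
    targeted=False,        # default is targeted=True, which also requires y
    delta=0.01,
    epsilon=0.01,
    max_iter=5000,
)
x_adv = attack.generate(x=x_test)
```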
Carlini and Wagner L_2 Attack
The L_2 optimized attack of Carlini and Wagner (2016). This attack is among the most effective and
should be used among the primary attacks to evaluate potential defences. A major difference with respect to the
original implementation (https://github.com/carlini/nn_robust_attacks) is that we use line search in the
optimization of the attack objective.
generate(*args, **kwargs)
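A usage sketch for the L_2 attack (the class is CarliniL2Method in art.attacks.evasion); `classifier` and `x_test` are placeholders.

```python
from art.attacks.evasion import CarliniL2Method

attack = CarliniL2Method(
    classifier=classifier,  # placeholder gradient-providing classifier
    confidence=0.0,
    targeted=False,
    max_iter=10,
)
x_adv = attack.generate(x=x_test)
```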
Carlini and Wagner L_Inf Attack
class art.attacks.evasion.CarliniLInfMethod(classifier:
art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool = False, learning_rate: float =
0.01, max_iter: int = 10, max_halving: int = 5, max_doubling: int = 5, eps: float = 0.3, batch_size: int = 128)
This is a modified version of the L_2 optimized attack of Carlini and Wagner (2016). It controls the L_Inf
norm, i.e. the maximum perturbation applied to each pixel.
__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool =
False, learning_rate: float = 0.01, max_iter: int = 10, max_halving: int = 5, max_doubling: int = 5, eps: float = 0.3,
batch_size: int = 128) → None
generate(*args, **kwargs)
Decision Tree Attack
class art.attacks.evasion.DecisionTreeAttack(classifier:
art.estimators.classification.scikitlearn.ScikitlearnDecisionTreeClassifier, offset: float = 0.001)
Close implementation of Papernot's attack on decision trees following Algorithm 2 and communication
with the authors.
generate(*args, **kwargs)
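A self-contained sketch using a small scikit-learn dataset, since this attack targets ScikitlearnDecisionTreeClassifier wrappers; the dataset choice is arbitrary.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from art.estimators.classification.scikitlearn import ScikitlearnDecisionTreeClassifier
from art.attacks.evasion import DecisionTreeAttack

# Fully synthetic setup so the sketch is self-contained.
x, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier().fit(x, y)
classifier = ScikitlearnDecisionTreeClassifier(model=model)

attack = DecisionTreeAttack(classifier=classifier, offset=0.001)
x_adv = attack.generate(x=x.astype(np.float32))
```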
DeepFool
__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, max_iter: int = 100, epsilon: float = 1e-06,
nb_grads: int = 10, batch_size: int = 1) → None
generate(*args, **kwargs)
Returns: An array holding the adversarial examples.
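A hedged usage sketch; `classifier` and `x_test` are placeholders for a gradient-providing ART classifier and input batch.

```python
from art.attacks.evasion import DeepFool

attack = DeepFool(
    classifier=classifier,  # placeholder ART classifier with gradients
    max_iter=100,
    epsilon=1e-6,
    nb_grads=10,  # number of class gradients computed per sample
)
x_adv = attack.generate(x=x_test)
```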
DPatch
generate(*args, **kwargs)
Generate DPatch.
Fast Gradient Method (FGM)
This attack was originally implemented by Goodfellow et al. (2015) with the infinity norm (and is known
as the "Fast Gradient Sign Method"). This implementation extends the attack to other norms, and is
therefore called the Fast Gradient Method.
__init__(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: int = inf, eps: float = 0.3,
eps_step: float = 0.1, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, minimal: bool = False) →
None
generate(*args, **kwargs)
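A usage sketch for the page's title attack; `classifier` and `x_test` are placeholders. With norm=np.inf this recovers the original Fast Gradient Sign Method.

```python
import numpy as np
from art.attacks.evasion import FastGradientMethod

attack = FastGradientMethod(
    estimator=classifier,  # placeholder ART classifier
    norm=np.inf,           # infinity norm = classic FGSM
    eps=0.3,
    minimal=False,  # if True, search for the smallest effective perturbation
)
x_adv = attack.generate(x=x_test)
```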
Feature Adversaries
class art.attacks.evasion.FeatureAdversaries(classifier:
art.estimators.classification.classifier.ClassifierNeuralNetwork, delta: Optional[float] = None, layer: Optional[int] = None,
batch_size: int = 32)
generate(*args, **kwargs)
maxcor : int
    The maximum number of variable metric corrections used to define the
    limited memory matrix. (The limited-memory BFGS method does not store
    the full Hessian but uses this many terms in an approximation to it.)
ftol : float
    The iteration stops when (f^k - f^{k+1}) / max{|f^k|, |f^{k+1}|, 1} <= ftol.
gtol : float
    The iteration will stop when max{|proj g_i| : i = 1, ..., n} <= gtol.
eps : float
    Step size used for numerical approximation of the Jacobian.
maxfun : int
    Maximum number of function evaluations.
maxiter : int
    Maximum number of iterations.
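The options above match scipy's L-BFGS-B optimizer, so a plausible reading (an assumption, not confirmed by this page) is that generate forwards them to the underlying scipy optimization. A hedged sketch with placeholder names:

```python
from art.attacks.evasion import FeatureAdversaries

attack = FeatureAdversaries(
    classifier=classifier,  # placeholder neural-network classifier
    delta=0.1,              # maximum allowed perturbation (assumed meaning)
    layer=5,                # internal layer whose feature representation is matched
)

# x_source: images to perturb; x_guide: images whose internal representations
# the adversaries should mimic (both placeholders). The keyword arguments are
# assumed to be passed through to the L-BFGS-B optimizer.
x_adv = attack.generate(x=x_source, y=x_guide, maxiter=100, ftol=1e-4)
```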
Frame Saliency Attack
Implementation of the attack framework proposed by Inkawhich et al. (2018). Prioritizes the frame of a
sequential input to be adversarially perturbed based on the saliency score of each frame.
__init__(classifier: art.estimators.classification.classifier.Classifier, attacker: art.attacks.attack.EvasionAttack, method:
str = 'iterative_saliency', frame_index: int = 1, batch_size: int = 1)
generate(*args, **kwargs)
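A sketch that wraps a per-frame attacker (FGM here, an arbitrary choice) for sequential inputs; all names are placeholders.

```python
from art.attacks.evasion import FastGradientMethod, FrameSaliencyAttack

inner = FastGradientMethod(estimator=classifier, eps=0.1)  # per-frame attacker
attack = FrameSaliencyAttack(
    classifier=classifier,        # placeholder ART classifier
    attacker=inner,
    method="iterative_saliency",  # perturb frames in order of saliency
    frame_index=1,                # axis of x holding the frame/time dimension
)
x_adv = attack.generate(x=x_sequences)  # placeholder batch of sequential inputs
```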
HopSkipJump Attack
class art.attacks.evasion.HopSkipJump(classifier: Classifier, targeted: bool = False, norm: int = 2, max_iter: int =
50, max_eval: int = 10000, init_eval: int = 100, init_size: int = 100)
Implementation of the HopSkipJump attack from Jianbo et al. (2019). This is a powerful black-box
attack that only requires final class prediction, and is an advanced version of the boundary attack.
generate(*args, **kwargs)
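A hedged usage sketch; like the boundary attack, only class predictions of the placeholder `classifier` are queried.

```python
from art.attacks.evasion import HopSkipJump

attack = HopSkipJump(
    classifier=classifier,  # placeholder ART classifier
    targeted=False,
    norm=2,          # the attack also supports an L_inf variant
    max_iter=50,
    max_eval=10000,
)
x_adv = attack.generate(x=x_test)
```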
High Confidence Low Uncertainty Attack
class art.attacks.evasion.HighConfidenceLowUncertainty(classifier:
art.estimators.classification.GPy.GPyGaussianProcessClassifier, conf: float = 0.95, unc_increase: float = 100.0, min_val: float =
0.0, max_val: float = 1.0)
generate(*args, **kwargs)
Basic Iterative Method (BIM)
The Basic Iterative Method is the iterative version of FGM and FGSM.
__init__(estimator: ClassifierGradients, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool =
False, batch_size: int = 32) → None
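A usage sketch; the class is BasicIterativeMethod in art.attacks.evasion, and `classifier`/`x_test` are placeholders.

```python
from art.attacks.evasion import BasicIterativeMethod

attack = BasicIterativeMethod(
    estimator=classifier,  # placeholder classifier with gradients
    eps=0.3,       # total perturbation budget
    eps_step=0.1,  # per-iteration step size
    max_iter=100,
)
x_adv = attack.generate(x=x_test)
```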
Projected Gradient Descent (PGD)
class art.attacks.evasion.ProjectedGradientDescent(estimator, norm: int = inf, eps: float = 0.3, eps_step:
float = 0.1, max_iter: int = 100, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, random_eps: bool =
False)
The Projected Gradient Descent attack is an iterative method in which, after each iteration, the
perturbation is projected on an Lp-ball of specified radius (in addition to clipping the values of the
adversarial sample so that it lies in the permitted data range). This is the attack proposed by Madry et
al. for adversarial training.
__init__(estimator, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False,
num_random_init: int = 0, batch_size: int = 32, random_eps: bool = False)
generate(*args, **kwargs)
set_params(**kwargs) → None
Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes.
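A hedged sketch of the Madry-style configuration with a random restart; `classifier`, `x_test`, and `y_test` are placeholders.

```python
import numpy as np
from art.attacks.evasion import ProjectedGradientDescent

attack = ProjectedGradientDescent(
    estimator=classifier,  # placeholder ART classifier
    norm=np.inf,
    eps=0.3,
    eps_step=0.1,
    max_iter=100,
    num_random_init=1,  # random starting point inside the eps-ball
)
x_adv = attack.generate(x=x_test, y=y_test)
```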