attacks module

class cleverhans.attacks.Attack(model, back='tf', sess=None)[source]

Bases: object

Abstract base class for all attack classes.

construct_graph(fixed, feedable, x_val, hash_key)[source]

Construct the graph required to run the attack through generate_np.

Parameters:
  • fixed – Structural elements that require defining a new graph.
  • feedable – Arguments that can be fed to the same graph when they take different values.
  • x_val – symbolic adversarial example
  • hash_key – the key used to store this graph in our cache
generate(x, **kwargs)[source]

Generate the attack’s symbolic graph for adversarial examples. This method should be overridden in any child class that implements an attack that is expressible symbolically. Otherwise, it will wrap the numerical implementation as a symbolic operator.

Parameters:
  • x – The model’s symbolic inputs.
  • **kwargs – optional parameters used by child classes.
Returns:

A symbolic representation of the adversarial examples.

generate_np(x_val, **kwargs)[source]

Generate adversarial examples and return them as a NumPy array. Sub-classes should not implement this method unless they must perform special handling of arguments.

Parameters:
  • x_val – A NumPy array with the original inputs.
  • **kwargs – optional parameters used by child classes.
Returns:

A NumPy array holding the adversarial examples.
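The two methods above offer two modes of use: generate builds a symbolic graph over tensors, while generate_np runs the attack end-to-end on NumPy inputs. A minimal sketch of both modes follows, using FastGradientMethod (documented below) as a concrete subclass; the names model (a cleverhans.model.Model wrapping your classifier), sess (an active TensorFlow session), x (the input placeholder) and x_test (a NumPy batch) are assumptions, not part of the API above:

    from cleverhans.attacks import FastGradientMethod

    # Assumed to exist already: `model` (cleverhans.model.Model), `sess`
    # (active tf.Session), `x` (input placeholder), `x_test` (NumPy batch).
    attack = FastGradientMethod(model, back='tf', sess=sess)

    # Symbolic mode: returns a tensor that can be composed with other ops.
    adv_x = attack.generate(x, eps=0.3, clip_min=0., clip_max=1.)

    # NumPy mode: builds (and caches) the graph internally, then runs it.
    adv_np = attack.generate_np(x_test, eps=0.3, clip_min=0., clip_max=1.)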

get_or_guess_labels(x, kwargs)[source]

Get the label to use in generating an adversarial example for x. The kwargs are fed directly from the kwargs of the attack. If ‘y’ is in kwargs, then assume it’s an untargeted attack and use that as the label. If ‘y_target’ is in kwargs, then assume it’s a targeted attack and use that as the label. Otherwise, use the model’s prediction as the label and perform an untargeted attack.

parse_params(params=None)[source]

Takes in a dictionary of parameters and applies attack-specific checks before saving them as attributes.

Parameters:
  • params – a dictionary of attack-specific parameters
Returns:

True when parsing was successful

class cleverhans.attacks.BasicIterativeMethod(model, back='tf', sess=None)[source]

Bases: cleverhans.attacks.Attack

The Basic Iterative Method (Kurakin et al. 2016). The original paper used hard labels for this attack; no label smoothing. Paper link: https://arxiv.org/pdf/1607.02533.pdf

generate(x, **kwargs)[source]

Generate symbolic graph for adversarial examples and return.

Parameters:
  • x – The model’s symbolic inputs.
  • eps – (required float) maximum distortion of adversarial example compared to original input
  • eps_iter – (required float) step size for each attack iteration
  • nb_iter – (required int) Number of attack iterations.
  • y – (optional) A tensor with the model labels.
  • y_target – (optional) A tensor with the labels to target. Leave y_target=None if y is also set. Labels should be one-hot-encoded.
  • ord – (optional) Order of the norm (mimics NumPy). Possible values: np.inf, 1 or 2.
  • clip_min – (optional float) Minimum input component value
  • clip_max – (optional float) Maximum input component value
parse_params(eps=0.3, eps_iter=0.05, nb_iter=10, y=None, ord=inf, clip_min=None, clip_max=None, y_target=None, **kwargs)[source]

Takes in a dictionary of parameters and applies attack-specific checks before saving them as attributes.

Attack-specific parameters:
  • eps – (required float) maximum distortion of adversarial example compared to original input
  • eps_iter – (required float) step size for each attack iteration
  • nb_iter – (required int) Number of attack iterations.
  • y – (optional) A tensor with the model labels.
  • y_target – (optional) A tensor with the labels to target. Leave y_target=None if y is also set. Labels should be one-hot-encoded.
  • ord – (optional) Order of the norm (mimics NumPy). Possible values: np.inf, 1 or 2.
  • clip_min – (optional float) Minimum input component value
  • clip_max – (optional float) Maximum input component value
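A minimal usage sketch with the parameters documented above, assuming model, sess and x_test are defined as in the earlier example (a model wrapper, an active session, and a NumPy batch of inputs scaled to [0, 1]):

    from cleverhans.attacks import BasicIterativeMethod

    # Assumptions: `model` is a cleverhans.model.Model, `sess` an active
    # tf.Session, `x_test` a NumPy batch of inputs in [0, 1].
    bim = BasicIterativeMethod(model, back='tf', sess=sess)
    adv_np = bim.generate_np(x_test,
                             eps=0.3,        # total distortion budget
                             eps_iter=0.05,  # step size per iteration
                             nb_iter=10,     # number of iterations
                             clip_min=0., clip_max=1.)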
class cleverhans.attacks.CarliniWagnerL2(model, back='tf', sess=None)[source]

Bases: cleverhans.attacks.Attack

This attack was originally proposed by Carlini and Wagner. It is an iterative attack that finds adversarial examples on many defenses that are robust to other attacks. Paper link: https://arxiv.org/abs/1608.04644

At a high level, this attack is an iterative attack using Adam and a specially-chosen loss function to find adversarial examples with lower distortion than other attacks. This comes at the cost of speed, as this attack is often much slower than others.

generate(x, **kwargs)[source]

Return a tensor that constructs adversarial examples for the given input. Generate uses tf.py_func in order to operate over tensors.

Parameters:
  • x – (required) A tensor with the inputs.
  • y – (optional) A tensor with the true labels for an untargeted attack. If None (and y_target is None) then use the original labels the classifier assigns.
  • y_target – (optional) A tensor with the target labels for a targeted attack.
  • confidence – Confidence of adversarial examples: higher produces examples with larger l2 distortion, but more strongly classified as adversarial.
  • batch_size – Number of attacks to run simultaneously.
  • learning_rate – The learning rate for the attack algorithm. Smaller values produce better results but are slower to converge.
  • binary_search_steps – The number of times we perform binary search to find the optimal tradeoff constant between the norm of the perturbation and the confidence of the classification.
  • max_iterations – The maximum number of iterations. Setting this to a larger value will produce lower distortion results. Using only a few iterations requires a larger learning rate, and will produce larger distortion results.
  • abort_early – If true, allows early aborts if gradient descent is unable to make progress (i.e., gets stuck in a local minimum).
  • initial_const – The initial tradeoff constant used to tune the relative importance of the size of the perturbation and the confidence of classification. If binary_search_steps is large, the initial constant is not important. A smaller value of this constant gives lower distortion results.
  • clip_min – (optional float) Minimum input component value
  • clip_max – (optional float) Maximum input component value
parse_params(y=None, y_target=None, nb_classes=None, batch_size=1, confidence=0, learning_rate=0.005, binary_search_steps=5, max_iterations=1000, abort_early=True, initial_const=0.01, clip_min=0, clip_max=1)[source]
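A minimal usage sketch with the parameters from the signature above; model, sess and x_test are the same assumed names as in the earlier examples:

    from cleverhans.attacks import CarliniWagnerL2

    # Assumptions: `model`, `sess` and `x_test` as in the earlier sketches.
    cw = CarliniWagnerL2(model, back='tf', sess=sess)
    adv_np = cw.generate_np(x_test,
                            binary_search_steps=5,
                            max_iterations=1000,
                            learning_rate=0.005,
                            batch_size=10,     # attacks run simultaneously
                            initial_const=0.01,
                            clip_min=0., clip_max=1.)

Expect this attack to be noticeably slower than the single-step or few-step methods; the distortion of the resulting examples is typically lower.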
class cleverhans.attacks.FastGradientMethod(model, back='tf', sess=None)[source]

Bases: cleverhans.attacks.Attack

This attack was originally implemented by Goodfellow et al. (2015) with the infinity norm (and is known as the “Fast Gradient Sign Method”). This implementation extends the attack to other norms, and is therefore called the Fast Gradient Method. Paper link: https://arxiv.org/abs/1412.6572

generate(x, **kwargs)[source]

Generate symbolic graph for adversarial examples and return.

Parameters:
  • x – The model’s symbolic inputs.
  • eps – (optional float) attack step size (input variation)
  • ord – (optional) Order of the norm (mimics NumPy). Possible values: np.inf, 1 or 2.
  • y – (optional) A tensor with the model labels. Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None. Labels should be one-hot-encoded.
  • y_target – (optional) A tensor with the labels to target. Leave y_target=None if y is also set. Labels should be one-hot-encoded.
  • clip_min – (optional float) Minimum input component value
  • clip_max – (optional float) Maximum input component value
parse_params(eps=0.3, ord=inf, y=None, y_target=None, clip_min=None, clip_max=None, **kwargs)[source]

Takes in a dictionary of parameters and applies attack-specific checks before saving them as attributes.

Attack-specific parameters:
  • eps – (optional float) attack step size (input variation)
  • ord – (optional) Order of the norm (mimics NumPy). Possible values: np.inf, 1 or 2.
  • y – (optional) A tensor with the model labels. Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None. Labels should be one-hot-encoded.
  • y_target – (optional) A tensor with the labels to target. Leave y_target=None if y is also set. Labels should be one-hot-encoded.
  • clip_min – (optional float) Minimum input component value
  • clip_max – (optional float) Maximum input component value
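The sketch below shows the same attack under two norms, per the ord parameter above; model, sess and x_test are assumed as before, and y is omitted so that model predictions are used (avoiding the label leaking effect described above):

    import numpy as np
    from cleverhans.attacks import FastGradientMethod

    # Assumptions: `model`, `sess` and `x_test` as in the earlier sketches.
    fgm = FastGradientMethod(model, back='tf', sess=sess)

    # L-infinity norm (the classic Fast Gradient Sign Method):
    adv_inf = fgm.generate_np(x_test, eps=0.3, ord=np.inf,
                              clip_min=0., clip_max=1.)

    # L2 norm variant of the same attack:
    adv_l2 = fgm.generate_np(x_test, eps=3.0, ord=2,
                             clip_min=0., clip_max=1.)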
class cleverhans.attacks.SaliencyMapMethod(model, back='tf', sess=None)[source]

Bases: cleverhans.attacks.Attack

The Jacobian-based Saliency Map Method (Papernot et al. 2016). Paper link: https://arxiv.org/pdf/1511.07528.pdf

generate(x, **kwargs)[source]

Generate symbolic graph for adversarial examples and return.

Parameters:
  • x – The model’s symbolic inputs.
  • theta – (optional float) Perturbation introduced to modified components (can be positive or negative)
  • gamma – (optional float) Maximum percentage of perturbed features
  • clip_min – (optional float) Minimum component value for clipping
  • clip_max – (optional float) Maximum component value for clipping
  • y_target – (optional) Target tensor if the attack is targeted
parse_params(theta=1.0, gamma=inf, nb_classes=None, clip_min=0.0, clip_max=1.0, y_target=None, **kwargs)[source]

Takes in a dictionary of parameters and applies attack-specific checks before saving them as attributes.

Attack-specific parameters:
  • theta – (optional float) Perturbation introduced to modified components (can be positive or negative)
  • gamma – (optional float) Maximum percentage of perturbed features
  • nb_classes – (optional int) Number of model output classes
  • clip_min – (optional float) Minimum component value for clipping
  • clip_max – (optional float) Maximum component value for clipping
  • y_target – (optional) Target tensor if the attack is targeted
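A hedged sketch of a targeted call; the class count, target class and the single-sample input x_sample are illustrative assumptions, while model and sess are as in the earlier examples:

    import numpy as np
    from cleverhans.attacks import SaliencyMapMethod

    # Assumptions: `model` and `sess` as before; `x_sample` is a single
    # input with a leading batch dimension of 1; 10 classes is illustrative.
    nb_classes, target_class = 10, 3
    one_hot_target = np.zeros((1, nb_classes), dtype=np.float32)
    one_hot_target[0, target_class] = 1.

    jsma_attack = SaliencyMapMethod(model, back='tf', sess=sess)
    adv_np = jsma_attack.generate_np(x_sample,
                                     theta=1.,   # per-feature perturbation
                                     gamma=0.1,  # max fraction of features changed
                                     clip_min=0., clip_max=1.,
                                     y_target=one_hot_target)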
class cleverhans.attacks.VirtualAdversarialMethod(model, back='tf', sess=None)[source]

Bases: cleverhans.attacks.Attack

This attack was originally proposed by Miyato et al. (2016) and was used for virtual adversarial training. Paper link: https://arxiv.org/abs/1507.00677

generate(x, **kwargs)[source]

Generate symbolic graph for adversarial examples and return.

Parameters:
  • x – The model’s symbolic inputs.
  • eps – (optional float) the epsilon (input variation parameter)
  • num_iterations – (optional) the number of iterations
  • xi – (optional float) the finite difference parameter
  • clip_min – (optional float) Minimum input component value
  • clip_max – (optional float) Maximum input component value

parse_params(eps=2.0, num_iterations=1, xi=1e-06, clip_min=None, clip_max=None, **kwargs)[source]

Takes in a dictionary of parameters and applies attack-specific checks before saving them as attributes.

Attack-specific parameters:
  • eps – (optional float) the epsilon (input variation parameter)
  • num_iterations – (optional) the number of iterations
  • xi – (optional float) the finite difference parameter
  • clip_min – (optional float) Minimum input component value
  • clip_max – (optional float) Maximum input component value
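A minimal sketch with the parameters documented above; model, sess and x_test are the same assumed names as in the earlier examples:

    from cleverhans.attacks import VirtualAdversarialMethod

    # Assumptions: `model`, `sess` and `x_test` as in the earlier sketches.
    vat = VirtualAdversarialMethod(model, back='tf', sess=sess)
    adv_np = vat.generate_np(x_test,
                             eps=2.0,           # input variation parameter
                             num_iterations=1,  # number of iterations
                             xi=1e-6,           # finite difference parameter
                             clip_min=0., clip_max=1.)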

cleverhans.attacks.fgsm(x, predictions, eps, back='tf', clip_min=None, clip_max=None)[source]

A wrapper for the Fast Gradient Sign Method. It calls the right function, depending on the user’s backend.

Parameters:
  • x – the input
  • predictions – the model’s output. (Note: in the original paper that introduced this attack, the loss was computed by comparing the model predictions with the hard labels (from the dataset). Instead, this version implements the loss by comparing the model predictions with the most likely class. This tweak is recommended since the discovery of label leaking in the following paper: https://arxiv.org/abs/1611.01236)
  • eps – the epsilon (input variation parameter)
  • back – switch between TensorFlow (‘tf’) and Theano (‘th’) implementation
  • clip_min – optional parameter that can be used to set a minimum value for components of the example returned
  • clip_max – optional parameter that can be used to set a maximum value for components of the example returned
Returns:

a tensor for the adversarial example
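A brief sketch of calling this legacy wrapper directly on symbolic tensors; x (the input placeholder) and preds (the model’s output tensor) are assumed to have been built elsewhere:

    from cleverhans.attacks import fgsm

    # Assumptions: `x` is the input placeholder and `preds` is the model's
    # output tensor, both built elsewhere.
    adv_x = fgsm(x, preds, eps=0.3, back='tf', clip_min=0., clip_max=1.)
    # `adv_x` is a symbolic tensor; evaluate it with a session and feed_dict.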

cleverhans.attacks.jsma(sess, x, predictions, grads, sample, target, theta, gamma=inf, increase=True, back='tf', clip_min=None, clip_max=None)[source]

A wrapper for the Jacobian-based saliency map approach. It calls the right function, depending on the user’s backend.

Parameters:
  • sess – TF session
  • x – the input
  • predictions – the model’s symbolic output (linear output, pre-softmax)
  • sample – (1 x 1 x img_rows x img_cols) numpy array with sample input
  • target – target class for input sample
  • theta – delta for each feature adjustment
  • gamma – a float between 0 and 1 indicating the maximum distortion percentage
  • increase – boolean; true if we are increasing pixels, false otherwise
  • back – switch between TensorFlow (‘tf’) and Theano (‘th’) implementation
  • clip_min – optional parameter that can be used to set a minimum value for components of the example returned
  • clip_max – optional parameter that can be used to set a maximum value for components of the example returned
Returns:

an adversarial sample
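A hedged sketch of calling this wrapper; sess, x, preds (pre-softmax output) and grads (the symbolic Jacobian tensors of the model output with respect to x, built separately) are all assumed to exist, and the target class is illustrative:

    from cleverhans.attacks import jsma

    # Assumptions: `sess`, `x`, `preds` (linear, pre-softmax output) and
    # `grads` (symbolic Jacobian of the output w.r.t. x) exist already;
    # `sample` is a single NumPy input of shape (1, 1, img_rows, img_cols).
    adv_sample = jsma(sess, x, preds, grads, sample,
                      target=3,      # illustrative target class
                      theta=1.,      # delta per feature adjustment
                      gamma=0.1,     # max fraction of features perturbed
                      increase=True, back='tf',
                      clip_min=0., clip_max=1.)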

cleverhans.attacks.vatm(model, x, logits, eps, back='tf', num_iterations=1, xi=1e-06, clip_min=None, clip_max=None)[source]

A wrapper for the perturbation methods used for virtual adversarial training: https://arxiv.org/abs/1507.00677 It calls the right function, depending on the user’s backend.

Parameters:
  • model – the model which returns the network unnormalized logits
  • x – the input placeholder
  • logits – the model’s unnormalized output tensor
  • eps – the epsilon (input variation parameter)
  • num_iterations – the number of iterations
  • xi – the finite difference parameter
  • clip_min – optional parameter that can be used to set a minimum value for components of the example returned
  • clip_max – optional parameter that can be used to set a maximum value for components of the example returned
Returns:

a tensor for the adversarial example
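A minimal sketch of the wrapper call; model (a callable returning unnormalized logits), x (the input placeholder) and logits (the model’s unnormalized output tensor) are assumed to be built elsewhere:

    from cleverhans.attacks import vatm

    # Assumptions: `model`, `x` and `logits` are built elsewhere as
    # described in the parameter list above.
    adv_x = vatm(model, x, logits, eps=2.0, back='tf',
                 num_iterations=1, xi=1e-6,
                 clip_min=0., clip_max=1.)
    # `adv_x` is a symbolic tensor for the adversarial example.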