The following code corresponds to the paper "Towards Evaluating the Robustness of Neural Networks".
In it, we develop three attacks against neural networks for producing adversarial examples: given an instance x, can we produce an instance x' that is visually similar to x but is classified as a different class? The attacks are tailored to three distance metrics (L0, L2, and L-infinity).
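As a rough illustration of the idea (not the repository's implementation), the sketch below shows the core optimization behind the L2 attack described in the paper: minimize the squared L2 distortion plus a term that pushes the logit of a target class above all others, with a change of variables that keeps pixels in [0, 1]. It is written in PyTorch for brevity; the function name `cw_l2_attack`, the `model` argument, and all hyperparameter defaults are hypothetical choices for this example.

```python
import torch

def cw_l2_attack(model, x, target, c=1.0, kappa=0.0, steps=1000, lr=0.01):
    """Minimal sketch of the L2 attack objective from the paper:
        minimize ||x' - x||_2^2 + c * f(x'),
    where x' = 0.5 * (tanh(w) + 1) keeps pixels inside [0, 1]."""
    # Change of variables: optimize w instead of x' so the box constraint always holds.
    w = torch.atanh((x * 2 - 1).clamp(-0.999999, 0.999999)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)          # candidate adversarial example
        logits = model(x_adv)
        target_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
        # Largest logit among all non-target classes.
        other_logit = logits.masked_fill(
            torch.nn.functional.one_hot(target, logits.size(1)).bool(), float("-inf")
        ).max(dim=1).values
        # f(x') rewards making the target class beat every other class by at least kappa.
        f = torch.clamp(other_logit - target_logit, min=-kappa)
        l2 = ((x_adv - x) ** 2).flatten(1).sum(dim=1)
        loss = (l2 + c * f).sum()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (0.5 * (torch.tanh(w) + 1)).detach()
```

The tanh change of variables is what lets the attack use an unconstrained optimizer (Adam) while still producing valid images; in the paper, the trade-off constant c is additionally chosen by binary search rather than fixed as in this sketch.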
The code is available on GitHub or as a direct zip download, and is released under the BSD license.