Defensive Distillation Attack Code

Defensive distillation was recently proposed as a defense against adversarial examples.

Unfortunately, distillation is not secure, as we show in our paper. We strongly believe that research should be reproducible, and so we are releasing the code required to train a baseline model on MNIST, train a defensively distilled model on MNIST, and attack the defensively distilled model.
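For readers unfamiliar with the defense, the following is a minimal sketch of the defensive distillation training procedure (train a teacher at temperature T, relabel the training data with its temperature-T softmax outputs, then train the distilled model on those soft labels). It assumes TensorFlow/Keras; the architecture, temperature, and hyperparameters are illustrative and are not the ones used in the released code.

```python
# Minimal sketch of defensive distillation on MNIST (illustrative, not the released code).
import tensorflow as tf

TEMPERATURE = 20.0  # assumed value; the defense is evaluated at several temperatures

def make_model():
    """Small convnet that outputs logits (no softmax)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(200, activation="relu"),
        tf.keras.layers.Dense(10),  # logits
    ])

def train_at_temperature(model, x, y, temperature, epochs=10):
    """Train the model with a softmax taken at the given temperature (logits / T)."""
    scaled = tf.keras.Sequential([
        model,
        tf.keras.layers.Lambda(lambda z: z / temperature),
        tf.keras.layers.Softmax(),
    ])
    scaled.compile(optimizer="adam", loss="categorical_crossentropy")
    scaled.fit(x, y, epochs=epochs, batch_size=128)
    return scaled

# Load MNIST and one-hot encode the hard labels.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_hard = tf.keras.utils.to_categorical(y_train, 10)

# Step 1: train the teacher (baseline architecture) at temperature T on hard labels.
teacher = make_model()
teacher_scaled = train_at_temperature(teacher, x_train, y_hard, TEMPERATURE)

# Step 2: the teacher's temperature-T softmax outputs become the soft labels.
y_soft = teacher_scaled.predict(x_train, batch_size=128)

# Step 3: train the distilled model at temperature T on the soft labels.
distilled = make_model()
train_at_temperature(distilled, x_train, y_soft, TEMPERATURE)

# At test time the distilled model is used at temperature 1, i.e. softmax(distilled(x)).
```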

The code is available from GitHub or as a direct zip download, and is released under the GPLv3 license.