We show that neural networks operating on audio are also vulnerable to adversarial examples by making a speech-to-text neural network transcribe any input waveform as any desired sentence.
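One way to mount such a targeted attack is to optimize an additive perturbation so that the network's transcription of the perturbed waveform matches an attacker-chosen phrase while the perturbation stays small. The sketch below illustrates that idea in TensorFlow; the tiny random network, frame sizes, alphabet, target indices, and penalty weight are all hypothetical stand-ins for the real speech-to-text model and its settings.

```python
import tensorflow as tf

FRAMES, ALPHABET = 50, 28                              # hypothetical frame count and character-set size
TARGET = tf.constant([[8, 5, 12, 12, 15]], tf.int32)   # desired transcription, as character indices

# Hypothetical stand-in for the victim speech-to-text model: it maps audio frames
# to per-frame character logits, just enough structure to show the optimization.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(ALPHABET),
])

audio = tf.random.normal([1, FRAMES, 160])   # the benign waveform, split into frames
delta = tf.Variable(tf.zeros_like(audio))    # adversarial perturbation to be optimized
opt = tf.keras.optimizers.Adam(1e-2)

for _ in range(100):
    with tf.GradientTape() as tape:
        logits = model(audio + delta)        # shape [batch, time, alphabet]
        ctc = tf.nn.ctc_loss(
            labels=TARGET,
            logits=logits,
            label_length=tf.fill([1], 5),
            logit_length=tf.fill([1], FRAMES),
            logits_time_major=False,
            blank_index=ALPHABET - 1,
        )
        # Match the target phrase while keeping the perturbation small.
        loss = tf.reduce_mean(ctc) + 0.1 * tf.nn.l2_loss(delta)
    opt.apply_gradients(zip(tape.gradient(loss, [delta]), [delta]))
```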
Neural networks are highly vulnerable to evasion attacks. This project contains code to perform these attacks in a robust manner, so that possible future defenses can be evaluated against them.
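At its core, an evasion attack of this kind solves an optimization problem that trades off the size of the perturbation against a term that is minimized once the classifier outputs the attacker's chosen label. A minimal sketch of that objective is below, using a hypothetical stand-in classifier; a full-strength attack would also use a change of variables for the box constraint and a binary search over the trade-off constant, both omitted here.

```python
import tensorflow as tf

def cw_objective(logits, target, kappa=0.0):
    # Hinge term that reaches its minimum once the target class beats every
    # other class by at least the margin kappa.
    onehot = tf.one_hot(target, logits.shape[-1])
    target_logit = tf.reduce_sum(onehot * logits, axis=-1)
    best_other = tf.reduce_max(logits - 1e9 * onehot, axis=-1)
    return tf.maximum(best_other - target_logit, -kappa)

# Hypothetical stand-in for the classifier under attack.
classifier = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(10),
])

x = tf.random.uniform([1, 28, 28, 1])    # benign input scaled to [0, 1]
target = tf.constant([3])                # attacker-chosen label
delta = tf.Variable(tf.zeros_like(x))
opt = tf.keras.optimizers.Adam(1e-2)
c = 10.0                                 # trade-off constant (normally chosen by binary search)

for _ in range(200):
    with tf.GradientTape() as tape:
        adv = tf.clip_by_value(x + delta, 0.0, 1.0)   # keep the adversarial input valid
        loss = (tf.nn.l2_loss(adv - x)
                + c * tf.reduce_sum(cw_objective(classifier(adv), target)))
    opt.apply_gradients(zip(tape.gradient(loss, [delta]), [delta]))
```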
Defensive Distillation was recently proposed as a defense against adversarial examples. This project contains the TensorFlow models required to train a defensively distilled network and to show that it is broken.
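For background, defensive distillation trains a first network with its softmax smoothed by a temperature T, harvests that network's soft predictions as labels, and then trains the distilled network on those soft labels at the same temperature, deploying it at temperature 1. The sketch below is a rough rendition of that procedure, assuming MNIST-shaped data; the architecture, temperature, epoch counts, and data subset are placeholder choices rather than the exact setup used here.

```python
import tensorflow as tf

T = 20.0  # distillation temperature (placeholder value)

def make_network():
    # Logit-producing network; the temperature is applied by the wrapper below.
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

def at_temperature(network):
    # Wrap the network so that training sees its logits divided by T.
    inputs = tf.keras.Input(shape=(28, 28))
    return tf.keras.Model(inputs, network(inputs) / T)

(x, y), _ = tf.keras.datasets.mnist.load_data()
x, y = x[:5000].astype("float32") / 255.0, y[:5000]   # small subset to keep the demo fast

# 1. Train the teacher at temperature T on the hard labels.
teacher = make_network()
teacher_T = at_temperature(teacher)
teacher_T.compile("adam", tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
teacher_T.fit(x, y, epochs=3, verbose=0)

# 2. Extract soft labels from the teacher at the same temperature.
soft_labels = tf.nn.softmax(teacher(x) / T)

# 3. Train the distilled network at temperature T on the soft labels.
student = make_network()
student_T = at_temperature(student)
student_T.compile("adam", tf.keras.losses.CategoricalCrossentropy(from_logits=True))
student_T.fit(x, soft_labels, epochs=3, verbose=0)

# At test time the distilled network `student` is evaluated at temperature 1.
```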
Printf is, unintentionally, a Turing-complete language. We demonstrate this by implementing a brainfuck interpreter using only calls to the standard C printf.