Talks

On Evaluating Adversarial Robustness. CAMLIS (keynote), 2019. Slides. Talk.

Recent Advances in Adversarial Machine Learning. USENIX Security (invited talk), 2019. Slides. Talk.

Recent Advances in Adversarial Machine Learning. ScAINet (keynote), 2019. Slides.

Making and Measuring Progress in Adversarial Machine Learning. Deep Learning and Security Workshop (keynote), 2019. Slides. Talk.

Evaluating the Robustness of Neural Networks. Various lectures, 2017-2019. Slides.

Attacking Machine Learning: On the Security and Privacy of Neural Networks. RSA, 2019. Talk. Slides.

On the (In-)Security of Machine Learning. Various, 2018. Slides.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML Plenary, 2018. Talk. Slides.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML, 2018. Slides.

Security & Privacy in Machine Learning. VOICE, 2018. Slides.

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. IEEE DLS, 2018. Talk. Slides.

Tutorial on Adversarial Machine Learning with CleverHans. Open Data Science Conference, 2017. Slides.

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. ACM Workshop on Artificial Intelligence and Security, 2017. Slides.

Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy, 2017. Talk. Slides.

Hidden Voice Commands. USENIX Security, 2016. Talk. Slides.

Control-Flow Bending: On the Effectiveness of Control-Flow Integrity. USENIX Security, 2015. Talk. Slides.

ROP is Still Dangerous: Breaking Modern Defenses. USENIX Security, 2014. Talk. Slides.