Talks

Applications of Large Language Models to Security. Stanford, 2023. Slides.

Security & Privacy of LLMs. Various guest lectures, 2023. Slides.

Are aligned language models adversarially aligned? Simons Institute, 2023. Talk. Slides.

Practical poisoning of machine learning models. Stanford, 2023. Talk. Slides.

An Intro to Adversarial Machine Learning. Simons Institute, 2023. Slides.

Underspecified Foundation Models Considered Harmful. ICICS (keynote), 2022. Talk. Slides.

Machine learning is becoming less dependable. DSN (keynote), 2022. Slides.

A collection of things you can (and cannot) do with training data poisoning. DLS (keynote), 2022. Slides.

A crisis in adversarial machine learning. Art of Robustness, 2022. Slides.

When Machine Learning Isn't Private. USENIX Enigma, 2022. Talk. Slides.

Adversarial Attacks That Matter. ICCV AROW2, 2021. Slides.

An Unreliable Foundation: Security & Privacy of Large Scale Machine Learning. UK Security and Privacy Seminar. Talk. Slides

Extracting Training Data from Large Language Models. Alan Turing Institute's Interest Group on Privacy and Machine Learning. Talk. Slides.

Extracting Training Data from Large Language Models. USENIX Security, 2021. Talk. Slides.

Poisoning the Unlabeled Dataset of Semi-Supervised Learning. USENIX Security, 2021. Talk. Slides.

Adversarially (non-)Robust Deep Learning. AI For Good, 2021. Slides. Talk.

How Private is Machine Learning? Boston University, 2021. Slides. Talk.

Adversarial Examples for Robust Detection of Synthetic Media. DARPA MediFor, 2021. Slides.

Deep Learning: (still) Not Robust. S+SSPR, 2021. Slides. Talk.

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. ICT4V, 2020. Slides. Talk.

Deep Learning: (still) Not Robust. Carnegie Mellon University, 2020. Slides.

Cryptanalytic Extraction of Neural Networks. CRYPTO, 2020. Slides. Talk.

A (short) Primer on Adversarial Robustness. CVPR Workshop on Media Forensics, 2020. Talk.

On Evaluating Adversarial Robustness. CAMLIS (keynote), 2019. Slides. Talk.

The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks. USENIX Security, 2019. Slides. Talk.

Lessons Learned from Evaluating the Robustness of Neural Networks to Adversarial Examples. USENIX Security (invited talk), 2019. Slides. Talk.

Recent Advances in Adversarial Machine Learning. ScAINet (keynote), 2019. Slides.

Lessons Learned from Evaluating the Robustness of Neural Networks to Adversarial Examples. Simons Institute, Berkeley (invited talk), 2019. Slides. Talk.

Making and Measuring Progress in Adversarial Machine Learning. Deep Learning and Security Workshop (keynote), 2019. Slides. Talk.

Attacking Machine Learning: On the Security and Privacy of Neural Networks. RSA, 2019. Talk. Slides.

On the (In-)Security of Machine Learning. Various, 2018. Slides.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML Plenary, 2018. Talk. Slides.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML, 2018. Slides.

Security & Privacy in Machine Learning. VOICE, 2018. Slides.

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. IEEE DLS, 2018. Talk. Slides.

Tutorial on Adversarial Machine Learning with CleverHans. Open Data Science Conference, 2017. Slides.

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. ACM Workshop on Artificial Intelligence and Security, 2017. Slides.

Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy, 2017. Talk. Slides.

Hidden Voice Commands. USENIX Security, 2016. Talk. Slides.

Control-Flow Bending: On the Effectiveness of Control-Flow Integrity. USENIX Security, 2015. Talk. Slides.

ROP is Still Dangerous: Breaking Modern Defenses. USENIX Security, 2014. Talk. Slides.