About Me
As a Staff Research Scientist at Google Brain, I focus on understanding and improving the security of machine learning systems. My research has shaped how the field understands adversarial examples and neural network security, and its results have influenced both academic research and industry practice.
My journey in computer security began during my undergraduate years at MIT, where I first became fascinated with the intersection of machine learning and security. That interest deepened during my Ph.D. at UC Berkeley under David Wagner's supervision, where I began my work in adversarial machine learning. My dissertation, "Understanding and Improving Neural Network Robustness," introduced new methodologies for evaluating the security of ML models and has become a standard reference in the field.
Throughout my career, I've had the privilege of collaborating with leading researchers across institutions including Google Research, OpenAI, and various academic partners. My work has been recognized with multiple prestigious awards, including the Distinguished Paper Award at IEEE S&P and the USENIX Security Distinguished Paper Award.
Research Philosophy
My approach to research combines rigorous theoretical analysis with practical applications. I believe in the importance of reproducible research and open science, which has led me to make most of my work and implementations publicly available. This commitment to transparency has helped establish widely adopted practices for evaluating adversarial machine learning.
Current Focus
- Developing novel techniques for evaluating and improving neural network robustness
- Investigating the fundamental limitations of secure machine learning systems
- Exploring the intersection of privacy and machine learning
- Advancing methods for formal verification of neural networks
Research Impact
Created the Carlini-Wagner attack, which has become the de facto standard for evaluating adversarial robustness in neural networks; the paper introducing it has been cited over 3,000 times.
Recognition
Multiple distinguished paper awards at top security conferences, including IEEE S&P and USENIX Security. Regular speaker at major ML and security conferences worldwide.
Security Focus
Leading research in ML security, developing methods to make neural networks more robust and reliable for real-world applications.
Research Areas
Adversarial Machine Learning
Pioneering work in understanding and creating adversarial examples, developing the influential Carlini-Wagner attack, and establishing methodologies for evaluating ML model robustness.
ML Security & Privacy
Research on membership inference attacks, model extraction, and privacy-preserving machine learning techniques. Working on understanding the fundamental limitations of secure ML systems.
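To make the flavor of this work concrete, here is a minimal sketch of the simplest style of membership inference attack, the loss-threshold attack: training-set members tend to have lower loss than non-members, so thresholding per-example loss already yields a baseline attack. The function names, and the assumption that per-example losses have been precomputed, are mine, for illustration only.

```python
import numpy as np

def loss_threshold_membership(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Guess that examples with loss below the threshold are training-set
    members, since models typically fit members more closely."""
    return losses < threshold

def calibrate_threshold(member_losses: np.ndarray,
                        nonmember_losses: np.ndarray) -> float:
    """Pick the threshold that maximizes attack accuracy on a calibration
    set where membership is known."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    best_t, best_acc = 0.0, 0.0
    for t in np.unique(losses):
        acc = ((losses < t) == labels).mean()
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

Even this crude baseline often reveals measurable leakage, which is part of why membership inference is a useful lens on ML privacy.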
Neural Network Verification
Developing methods for formally verifying properties of neural networks and creating provably robust ML systems.
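As an illustration of the underlying idea, the sketch below implements interval bound propagation (IBP), one standard technique from the verification literature (not necessarily the specific method developed in this research): it propagates an L-infinity ball through a feed-forward ReLU network and checks whether the predicted class is provably stable. The `Ws`/`bs` weight-list representation is an assumption made for this sketch.

```python
import numpy as np

def interval_bound_propagation(Ws, bs, x, eps):
    """Propagate the box [x - eps, x + eps] through a ReLU network,
    returning sound (but possibly loose) bounds on the output logits."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(Ws, bs)):
        # Split weights by sign so each bound uses the worst-case input.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(Ws) - 1:  # ReLU on all hidden layers
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

def certified_robust(Ws, bs, x, eps, label):
    """The prediction is provably unchanged on the whole eps-ball if the
    true logit's lower bound beats every other logit's upper bound."""
    lo, hi = interval_bound_propagation(Ws, bs, x, eps)
    return lo[label] > np.delete(hi, label).max()
```

The check is sound but incomplete: when the interval bounds are loose it can fail to certify points that are in fact robust, which is what much of the verification research works to tighten.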
Key Publications
Towards Evaluating the Robustness of Neural Networks
Introduced the Carlini-Wagner attack, which has become the standard benchmark for evaluating adversarial robustness. This paper presented novel optimization-based attack methods and demonstrated fundamental flaws in defensive distillation.
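For readers unfamiliar with the attack, a condensed sketch of its targeted L2 variant follows: it minimizes ||delta||_2^2 + c * f(x + delta), where f rewards pushing the target logit above all others, and uses a tanh change of variables to keep pixels in [0, 1]. This is a simplified single-c version with placeholder model and tensor names, not the full implementation.

```python
import torch

def cw_l2_attack(model, x, target, c=1.0, kappa=0.0, steps=1000, lr=0.01):
    """Simplified targeted Carlini-Wagner L2 attack.
    Optimizes w such that x_adv = 0.5 * (tanh(w) + 1) stays in [0, 1]."""
    # Change of variables, initialized so that x_adv starts at x.
    w = torch.atanh((2 * x - 1).clamp(-1 + 1e-6, 1 - 1e-6))
    w = w.detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        # f(x'): push the target logit above the best other logit by kappa.
        target_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
        other_logit = logits.scatter(
            1, target.unsqueeze(1), float("-inf")).max(dim=1).values
        f = torch.clamp(other_logit - target_logit, min=-kappa)
        # Trade off distortion against attack success, per example.
        loss = ((x_adv - x) ** 2).flatten(1).sum(dim=1) + c * f
        optimizer.zero_grad()
        loss.sum().backward()
        optimizer.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```

The full attack additionally binary-searches over c for each example to find the minimal-distortion perturbation; that outer loop is omitted here for brevity.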
On Evaluating Adversarial Robustness
Comprehensive analysis of common pitfalls in evaluating adversarial robustness, providing guidelines for proper evaluation methodology and highlighting the importance of thorough testing.
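A couple of the recommended sanity checks are easy to express in code. The sketch below, with a placeholder `attack` callable of my own naming, verifies that robust accuracy does not increase as the perturbation budget grows and that an effectively unbounded attack drives accuracy to roughly zero.

```python
import torch

def robustness_sanity_checks(model, attack, x, y,
                             eps_grid=(0.01, 0.03, 0.1, 0.3, 1.0)):
    """Basic evaluation sanity checks:
    (1) robust accuracy should be non-increasing in the budget eps;
    (2) an effectively unbounded attack should reach ~0% accuracy."""
    accs = []
    for eps in eps_grid:
        x_adv = attack(model, x, y, eps)  # attack() is a placeholder
        preds = model(x_adv).argmax(dim=1)
        accs.append((preds == y).float().mean().item())
    for acc_small, acc_large in zip(accs, accs[1:]):
        assert acc_large <= acc_small + 1e-3, \
            "robust accuracy increased with a larger budget"
    assert accs[-1] < 0.05, "unbounded attack failed to reach ~0% accuracy"
    return dict(zip(eps_grid, accs))
```

Failing either assertion usually points to a broken attack or gradient masking rather than genuine robustness, which is exactly the kind of pitfall the paper catalogs.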
Audio Adversarial Examples
First comprehensive study of adversarial examples in the audio domain, demonstrating vulnerabilities in speech recognition systems and establishing new attack methodologies.