Nicholas Carlini

Staff Research Scientist at Google Brain

Leading researcher in adversarial machine learning and neural network security, known for pioneering systematic methods for evaluating the robustness of neural networks.

About Me

As a Staff Research Scientist at Google Brain, I focus on understanding and improving the security of machine learning systems. My research has fundamentally shaped our understanding of adversarial examples and neural network security, leading to several breakthrough discoveries that have influenced both academic research and industry practices.

My journey in computer security began during my undergraduate years at MIT, where I first became fascinated with the intersection of machine learning and security. This interest deepened during my Ph.D. at UC Berkeley under David Wagner's supervision, where I initiated groundbreaking work in adversarial machine learning. My dissertation, "Understanding and Improving Neural Network Robustness," has become a cornerstone reference in the field, introducing novel methodologies for evaluating ML model security.

Throughout my career, I've had the privilege of collaborating with leading researchers across institutions including Google Research, OpenAI, and various academic partners. My work has been recognized with multiple prestigious awards, including the Distinguished Paper Award at IEEE S&P and the USENIX Security Distinguished Paper Award.

Research Philosophy

My approach to research combines rigorous theoretical analysis with practical applications. I believe in the importance of reproducible research and open science, which has led me to make most of my work and implementations publicly available. This commitment to transparency has helped establish several industry standards in adversarial machine learning evaluation.

Current Focus

  • Developing novel techniques for evaluating and improving neural network robustness
  • Investigating the fundamental limitations of secure machine learning systems
  • Exploring the intersection of privacy and machine learning
  • Advancing methods for formal verification of neural networks

🔍 Research Impact

Created the Carlini-Wagner attack, which has been cited over 3,000 times and has become the de facto standard for evaluating the adversarial robustness of neural networks.

🏆 Recognition

Multiple best paper awards at top security conferences including IEEE S&P. Regular speaker at major ML and security conferences worldwide.

🔒 Security Focus

Leading research in ML security, developing methods to make neural networks more robust and reliable for real-world applications.

2021 - Present
Staff Research Scientist, Google Brain

Leading research initiatives in ML security and privacy

2017 - 2021
Research Scientist, Google Brain

Developed fundamental approaches to adversarial machine learning

2017
Ph.D. in Computer Science, UC Berkeley

Thesis: "Understanding and Improving Neural Network Robustness"

Research Areas

Adversarial Machine Learning

Pioneering work in understanding and creating adversarial examples, developing the influential Carlini-Wagner attack, and establishing methodologies for evaluating ML model robustness.

ML Security & Privacy

Research on membership inference attacks, model extraction, and privacy-preserving machine learning techniques. Working on understanding the fundamental limitations of secure ML systems.
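
As one concrete illustration of the membership-inference idea, the toy sketch below flags examples whose loss under a model is unusually low, using a threshold calibrated on known non-members. The function names and calibration choice are illustrative assumptions, not drawn from any specific paper.

```python
# Toy loss-threshold membership inference: training examples tend to have lower
# loss than unseen examples, so an unusually low loss hints at membership.
# The threshold calibration and names here are illustrative, not from a paper.
import numpy as np

def calibrate_threshold(nonmember_losses: np.ndarray, target_fpr: float = 0.05) -> float:
    """Pick a loss threshold so that roughly `target_fpr` of known non-members are flagged."""
    return float(np.quantile(nonmember_losses, target_fpr))

def predict_membership(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Return True for examples suspected of having been in the training set."""
    return losses < threshold
```

Published attacks are substantially stronger (shadow models, per-example calibration), but they exploit the same basic signal: overfitting shows up in per-example loss.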

Neural Network Verification

Developing methods for formally verifying properties of neural networks and creating provably robust ML systems.
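
One simple building block in this line of work is interval bound propagation: push a box of possible inputs through the network and check that every point in the box keeps the same prediction. The toy sketch below propagates bounds through one affine layer and a ReLU; it is a generic illustration, not a reconstruction of any specific verification system.

```python
# Toy interval bound propagation (IBP) through an affine layer and a ReLU.
# If the certified lower bound on the correct-class logit stays above the upper
# bounds of all other logits, the prediction is provably robust over the box.
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate elementwise bounds lo <= x <= hi through y = W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius  # worst case over the input box
    return y_center - y_radius, y_center + y_radius

def relu_bounds(lo, hi):
    """ReLU is monotone, so interval bounds pass straight through."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```

Real verifiers use much tighter relaxations and solvers, but the interval version conveys the flavor of certifying a property over an entire input region rather than testing individual points.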

Key Publications

Towards Evaluating the Robustness of Neural Networks

📅 IEEE S&P 2017 📚 3000+ Citations

Introduced the Carlini-Wagner attack, which has become the standard benchmark for evaluating adversarial robustness. This paper presented novel optimization-based attack methods and demonstrated fundamental flaws in defensive distillation; a simplified sketch of the core optimization appears below.

Adversarial ML · Security · Neural Networks
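
The sketch below is a heavily simplified illustration of that optimization, assuming a differentiable PyTorch classifier `model` whose inputs live in [0, 1] with a leading batch dimension. It omits the paper's binary search over the trade-off constant `c` and other refinements, so treat it as a sketch of the idea rather than the reference implementation.

```python
# Simplified sketch of the optimization idea behind the Carlini-Wagner L2 attack.
# Assumes `model` is a differentiable PyTorch classifier over inputs in [0, 1]
# with a leading batch dimension; an illustration, not the paper's code.
import torch

def cw_l2_sketch(model, x, target, c=1.0, kappa=0.0, steps=200, lr=0.01):
    # Change of variables: x_adv = 0.5 * (tanh(w) + 1) keeps pixels in [0, 1].
    w = torch.atanh((2 * x - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        target_logit = logits[0, target]
        mask = torch.arange(logits.shape[1], device=logits.device) != target
        best_other = logits[0, mask].max()
        # Hinge-style objective: make the target logit exceed all others by kappa.
        misclassify_loss = torch.clamp(best_other - target_logit + kappa, min=0)
        # Trade off distortion (squared L2 distance) against misclassification.
        loss = torch.sum((x_adv - x) ** 2) + c * misclassify_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (0.5 * (torch.tanh(w) + 1)).detach()
```

The actual attack additionally binary-searches over `c` to find the smallest distortion that still fools the model, which is part of why it became a standard evaluation baseline.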

On Evaluating Adversarial Robustness

📅 arXiv 2019 📚 1000+ Citations

Comprehensive analysis of common pitfalls in evaluating adversarial robustness, providing guidelines for proper evaluation methodology and highlighting the importance of thorough testing.

Evaluation Methods · Best Practices · ML Security

Audio Adversarial Examples

📅 IEEE SPW 2018 📚 500+ Citations

First comprehensive study of adversarial examples in the audio domain, demonstrating vulnerabilities in speech recognition systems and establishing new attack methodologies.

Speech Recognition · Audio Processing · Adversarial ML

AI-Generated Content Warning

This homepage was automatically generated by a Large Language Model (specifically, Anthropic's claude-3-5-sonnet-20241022) when told "I am Nicholas Carlini. Write a webpage for my bio." All content is 100% directly generated by an LLM (except red warning boxes; I wrote those). The content is almost certainly inaccurate, misleading, or both. A permanent link to this version of my homepage is available at https://nicholas.carlini.com/writing/2025/llm-bio/2024-12-26-sonnet-3-5.html.