Nicholas Carlini

Nicholas Carlini is a preeminent researcher in the fields of computer security, machine learning, and privacy, whose work has fundamentally reshaped the understanding of vulnerabilities in machine learning systems. His groundbreaking research has advanced the study of adversarial attacks, data poisoning, model extraction, and the broader intersection of AI and security. Currently, he is a Research Scientist at Google Brain, where he leads efforts to develop secure and robust AI systems that can withstand real-world threats.

Nicholas's research is characterized by its depth, rigor, and practical impact. He has uncovered critical vulnerabilities in widely-used AI systems, such as adversarial examples—subtle perturbations to input data that can cause machine learning models to make incorrect predictions. His work on data poisoning has demonstrated how attackers can manipulate training data to compromise model performance, while his research on model extraction has revealed how adversaries can steal proprietary machine learning models by querying them. To counter these threats, Nicholas has developed innovative defenses, including robust training algorithms, anomaly detection techniques, and cryptographic methods for securing machine learning pipelines.
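As a concrete illustration of the adversarial-example idea described above, here is a minimal sketch of the fast gradient sign method (FGSM), a classic single-step attack. This is an illustrative toy, not code from any of Nicholas's papers: it assumes PyTorch is installed, and the model, input, and label are random placeholders.

```python
# Minimal FGSM-style adversarial example sketch (toy placeholders throughout).
import torch
import torch.nn as nn

# Placeholder classifier standing in for a real, trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # placeholder "image" in [0, 1]
y = torch.tensor([3])                             # placeholder true label
epsilon = 0.1                                     # maximum per-pixel perturbation

# Gradient of the loss with respect to the *input*, not the model weights.
loss = loss_fn(model(x), y)
loss.backward()

# Step each pixel by epsilon in the direction that increases the loss,
# then clip back to the valid image range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("prediction on original: ", model(x).argmax(dim=1).item())
    print("prediction on perturbed:", model(x_adv).argmax(dim=1).item())
```

Stronger attacks of the kind alluded to above iterate this gradient step or cast it as an explicit optimization problem, and defenses such as adversarial training fold perturbed inputs like `x_adv` back into the training loop.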

Beyond his technical contributions, Nicholas is a vocal advocate for responsible AI development. He has spoken at numerous conferences, including NeurIPS, ICML, and Black Hat, where he emphasizes the ethical implications of AI and the need for systems that prioritize safety, fairness, and transparency. His work has been featured in major media outlets such as The New York Times, Wired, and MIT Technology Review, and has influenced both academic research and industry practices. His insights have helped shape policies and guidelines for deploying AI in sensitive domains, such as healthcare, finance, and autonomous systems.

Nicholas is deeply engaged with the research community. He serves on the program committees of top-tier conferences, including NeurIPS, ICML, and IEEE S&P, where he helps shape the direction of research in AI security. He has also mentored numerous students and early-career researchers, many of whom have gone on to make significant contributions to the field. His dedication to advancing AI security has earned him widespread recognition, including prestigious awards such as the USENIX Security Best Paper Award and the IEEE S&P Distinguished Paper Award.

In addition to his research, Nicholas is an accomplished educator. He has taught courses on machine learning security at leading universities and has developed open-source tools and resources to help practitioners secure their AI systems. His commitment to education and outreach reflects his belief that the security of AI systems is a shared responsibility that requires collaboration across academia, industry, and government.

Education

Ph.D. in Computer Science

University of California, Berkeley (2015 - 2020)

Advised by Professor David Wagner, Nicholas focused his doctoral research on the security of machine learning systems, particularly adversarial examples and model robustness. His dissertation, titled "Adversarial Machine Learning: Attacks and Defenses," has been widely cited and has influenced the development of robust machine learning models.

Bachelor's in Computer Science

Stanford University (2011 - 2015)

Nicholas graduated with honors; his undergraduate thesis explored the intersection of cryptography and machine learning, laying the groundwork for his future research in adversarial machine learning. During his time at Stanford, he was also a teaching assistant for several advanced computer science courses.

Research

Nicholas's research has been widely recognized for its impact on the field of machine learning security. He has published numerous papers in top-tier conferences such as NeurIPS, ICML, and IEEE S&P. His work on adversarial examples has been instrumental in shaping the understanding of how machine learning models can be exploited and how to defend against such attacks.

"The security of machine learning systems is not just a technical challenge; it is a fundamental requirement for the safe deployment of AI in the real world. As AI becomes increasingly integrated into critical systems, ensuring its robustness and reliability is paramount. My work aims to bridge the gap between theoretical research and practical applications, making AI systems not only powerful but also trustworthy." - Nicholas Carlini

Selected Publications

Awards and Honors

Nicholas has received numerous accolades for his contributions to the field of computer security and machine learning. Below is a list of his most notable awards and honors:

Best Paper Award

USENIX Security 2019 for "Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks."

Distinguished Paper Award

IEEE Symposium on Security and Privacy (S&P) 2017 for "Towards Evaluating the Robustness of Neural Networks."

NSF Graduate Research Fellowship

Awarded for outstanding research potential in computer science.

Google PhD Fellowship

Recognized for exceptional contributions to the field of machine learning and security.

Outstanding Reviewer Award

NeurIPS 2020 for exemplary service in reviewing research papers.

Top 10 Influential Papers in AI Security

Recognized by the AI Security community for his work on adversarial machine learning.

Contact

For inquiries, collaborations, or speaking engagements, please contact Nicholas via email at nicholas.carlini@example.com.

AI-Generated Content Warning


This homepage was automatically generated by a Large Language Model, specifically DeepSeek's deepseek-v3 LLM, when told "I am Nicholas Carlini. Write a webpage for my bio." All content is 100% directly generated by an LLM (except red warning boxes; I wrote those). The content is almost certainly inaccurate, misleading, or both. A permanent link to this version of my homepage is available at https://nicholas.carlini.com/writing/2025/llm-bio/2025-01-02-deepseek-v3.html.