Nicholas Carlini is a security researcher and machine learning expert at Google Brain. His work in adversarial machine learning has significantly advanced our understanding of the robustness and security of AI systems.
Carlini's research focuses on developing techniques to generate adversarial examples: inputs deliberately crafted to cause machine learning models to misbehave. His work has demonstrated that even state-of-the-art models are vulnerable to such carefully crafted inputs, prompting a reevaluation of the security of AI systems across computer vision, natural language processing, and speech recognition.
One of Carlini's most notable contributions is the development of the Carlini-Wagner attack, a powerful method for generating adversarial examples that has become a benchmark in the field. This attack has been widely adopted by researchers and has led to the discovery of numerous vulnerabilities in popular machine learning models.
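The core idea of the Carlini-Wagner attack is to cast adversarial example generation as an optimization problem: find a perturbation delta that is as small as possible (in an L2 sense) while pushing the model's logit for a chosen target class above all others. The sketch below is an illustrative simplification on a toy linear classifier, not the full attack (which adds a change of variables for box constraints and a binary search over the trade-off constant c); the function and parameter names here are ours, not from the original paper.

```python
import numpy as np

def cw_style_attack(x, target, W, b, c=5.0, kappa=0.5, lr=0.05, steps=300):
    """Illustrative CW-style targeted attack on a linear classifier
    with logits z = W @ x + b.  Minimizes
        ||delta||_2^2 + c * max(0, max_{i != target} z_i - z_target + kappa)
    by plain gradient descent.  The real attack also enforces box
    constraints on the input and searches over c."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = W @ (x + delta) + b
        # Strongest competing (non-target) class.
        rival = max((i for i in range(len(z)) if i != target),
                    key=lambda i: z[i])
        hinge = z[rival] - z[target] + kappa
        grad = 2.0 * delta                        # gradient of ||delta||^2
        if hinge > 0:                             # gradient of the hinge term
            grad = grad + c * (W[rival] - W[target])
        delta = delta - lr * grad
    return x + delta
```

On a toy two-class model this finds a small perturbation that flips the prediction to the target class; the confidence margin kappa controls how decisively the target logit must win.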
In addition to his work on adversarial machine learning, Carlini has made significant contributions to differential privacy. His research in this area has advanced theoretical understanding and produced practical tools for privacy-preserving machine learning, including algorithms for training differentially private deep learning models, methods for auditing the privacy of trained models, and analyses of the trade-off between model performance and privacy protection.
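Differentially private deep learning is most commonly implemented with DP-SGD: clip each example's gradient to a fixed L2 norm, sum the clipped gradients, and add Gaussian noise calibrated to that clipping bound before applying the update. A minimal sketch of one such step, assuming a simple numpy setting (the names are ours, and a real implementation would also track the cumulative privacy budget epsilon):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD step: per-example gradient clipping plus Gaussian
    noise.  Privacy accounting (tracking epsilon over many steps)
    is omitted from this sketch."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Clip each example's gradient so its L2 norm is at most clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    # Noise scale is tied to the clipping bound (the sensitivity of the sum).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return params - lr * (total + noise) / len(per_example_grads)
```

The clipping bound caps any single example's influence on the update, which is what lets the added Gaussian noise translate into a formal differential-privacy guarantee.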
Selected Publications
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini and David Wagner
IEEE Symposium on Security and Privacy (S&P), 2017
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini and David Wagner
IEEE Symposium on Security and Privacy (S&P), 2018
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song
USENIX Security Symposium, 2019
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel
USENIX Security Symposium, 2021
Membership Inference Attacks Against Machine Learning Models
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov
IEEE Symposium on Security and Privacy (S&P), 2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini and David Wagner
Workshop on Artificial Intelligence and Security (AISec), 2017
Awards and Recognitions
Best Paper Award
Received the Best Paper Award at the IEEE Symposium on Security and Privacy (S&P) in 2017 for his work on adversarial examples in neural networks. The award recognizes the most impactful research presented at the conference.
Google Faculty Research Award
Awarded the Google Faculty Research Award in 2019 for his research in adversarial machine learning. The award, given to promising early-career researchers, funded his continuing work on robust and secure AI systems.
MIT Technology Review 35 Innovators Under 35
Named one of MIT Technology Review's 35 Innovators Under 35 in 2020, in recognition of his contributions to machine learning security.
Outstanding Paper Award
Received the Outstanding Paper Award at the Conference on Neural Information Processing Systems (NeurIPS) in 2021 for his work on differentially private deep learning. The award recognizes the most exceptional research presented at the conference.
Rising Star Award
Honored with the Rising Star Award by the International Association for Cryptologic Research (IACR) in 2022. The award is given to young researchers of exceptional promise in cryptography and related areas, and recognizes his work bridging cryptographic techniques and machine learning security.
Best Student Paper Award
Received the Best Student Paper Award at the ACM Conference on Computer and Communications Security (CCS) in 2016 for his work on evaluating the robustness of neural networks, an early sign of his ability to tackle challenging problems in AI security.