It can be hard to stay up-to-date on published papers in
the field of adversarial examples, where the number of
papers written each year has grown massively.
I have been somewhat religiously keeping track of these
papers for the last few years, and realized it might be
helpful to others if I released this list.
The only requirement for including a paper in this list
is that it is primarily about adversarial examples,
or uses adversarial examples extensively.
Due to the sheer quantity of papers, I can't guarantee
that I have actually found all of them.
But I did try.
I may also have included papers that don't match
these criteria (and are about something else entirely),
or made inconsistent
judgement calls as to whether any given paper is
mainly an adversarial example paper.
Send me an email if something is wrong and I'll correct it.
Beyond that, this list is completely unfiltered:
everything that mainly presents itself as an adversarial
example paper is listed here, and I pass no judgement on quality.
For a curated list of papers that I think are excellent and
worth reading, see the
Adversarial Machine Learning Reading List.
One final note about the data.
This list automatically updates with new papers, even before I
get a chance to filter through them manually.
I do this filtering roughly twice a week, and only
then do I remove the entries that aren't actually related to
adversarial examples. As a result, there may be some
false positives among the most recent few entries.
Each new, unverified entry is annotated with the probability
that my simplistic (but reasonably well calibrated)
bag-of-words classifier assigns to the paper
actually being about adversarial examples.
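To make the idea concrete, here is a minimal sketch of a bag-of-words classifier of the kind described above. The model choice (multinomial naive Bayes) and the training titles are my own assumptions for illustration, not the classifier actually used for this list.

```python
# A hypothetical bag-of-words classifier over paper titles.
# Naive Bayes and the toy training set below are illustrative
# assumptions, not the actual model behind the list.
import math
from collections import Counter

def tokenize(title):
    return [w.strip(".,:?!()").lower() for w in title.split()]

class BagOfWordsNB:
    """Multinomial naive Bayes over title words, add-one smoothed."""

    def __init__(self):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.doc_counts = {0: 0, 1: 0}

    def fit(self, titles, labels):
        for title, y in zip(titles, labels):
            self.doc_counts[y] += 1
            self.word_counts[y].update(tokenize(title))

    def predict_proba(self, title):
        # Score each class in log space, then normalize to P(class = 1).
        vocab = len(set(self.word_counts[0]) | set(self.word_counts[1]))
        n_docs = self.doc_counts[0] + self.doc_counts[1]
        scores = {}
        for y in (0, 1):
            total = sum(self.word_counts[y].values())
            score = math.log(self.doc_counts[y] / n_docs)
            for w in tokenize(title):
                score += math.log((self.word_counts[y][w] + 1) / (total + vocab))
            scores[y] = score
        m = max(scores.values())
        odds = {y: math.exp(s - m) for y, s in scores.items()}
        return odds[1] / (odds[0] + odds[1])

# Hypothetical training titles, labeled 1 if about adversarial examples.
clf = BagOfWordsNB()
clf.fit(
    ["Adversarial Examples for Image Classifiers",
     "Defending Against Adversarial Perturbations",
     "Black-box Adversarial Attacks on Deep Networks",
     "A Survey of Graph Databases",
     "Efficient Sorting Algorithms on GPUs",
     "Neural Machine Translation for Low-Resource Languages"],
    [1, 1, 1, 0, 0, 0],
)
```

Even a model this simple picks up strong signal from words like "adversarial", which is roughly why a well-calibrated bag-of-words filter works well enough for a first pass.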
The full paper list appears below. I've also released a
TXT file (and a TXT file
with abstracts) and a file
with the same data. If you do anything interesting with
this data, I'd be happy to hear what it was.
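Each unverified entry in the list below follows the pattern "Title. (NN%)", and verified entries omit the percentage. If you want to consume the list programmatically, a sketch of a parser for that pattern (my own helper, not part of the released files) might look like:

```python
import re

# Matches "Some Title. (99%)" as well as "Some Title." with no
# probability. The lazy ".+?" keeps internal punctuation (colons,
# question marks) inside the title.
ENTRY_RE = re.compile(r"^(?P<title>.+?)\.?\s*(?:\((?P<prob>\d+)%\))?\s*$")

def parse_entry(line):
    """Return (title, probability or None) for one list entry."""
    m = ENTRY_RE.match(line.strip())
    title = m.group("title")
    prob = int(m.group("prob")) / 100 if m.group("prob") else None
    return title, prob
```

For example, `parse_entry("Attacking Optical Flow.")` yields the bare title with `None` for the probability, while an entry ending in "(99%)" yields `0.99`.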
Fine-grained Synthesis of Unrestricted Adversarial Examples. (99%)
Deep Minimax Probability Machine. (99%)
Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method. (84%)
Evaluating the Transferability and Adversarial Discrimination of Convolutional Neural Networks for Threat Object Detection and Classification within X-Ray Security Imagery. (2%)
Outside the Box: Abstraction-Based Monitoring of Neural Networks. (2%)
Defective Convolutional Layers Learn Robust CNNs. (99%)
Generate (non-software) Bugs to Fool Classifiers. (99%)
Adversarial Robustness of Flow-Based Generative Models. (96%)
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?. (92%)
Logic-inspired Deep Neural Networks. (67%)
Towards non-toxic landscapes: Automatic toxic comment detection using DNN. (1%)
Poison as a Cure: Detecting & Neutralizing Variable-Sized Backdoor Attacks in Deep Neural Networks. (92%)
Deep Detector Health Management under Adversarial Campaigns. (87%)
Adversarial Attacks on Grid Events Classification: An Adversarial Machine Learning Approach. (83%)
Can You Really Backdoor Federated Learning?. (82%)
WITCHcraft: Efficient PGD attacks with random step size. (81%)
A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories. (78%)
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic. (38%)
A novel method for identifying the deep neural network model with the Serial Number. (38%)
Smoothed Inference for Adversarially-Trained Models. (98%)
REFIT: a Unified Watermark Removal Framework for Deep Learning Systems with Limited Data. (87%)
Countering Inconsistent Labelling by Google's Vision API for Rotated Images. (81%)
Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models. (70%)
NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations. (61%)
Justification-Based Reliability in Machine Learning. (1%)
Black-Box Adversarial Attack with Transferable Model-based Embedding. (99%)
Defensive Few-shot Adversarial Learning. (99%)
Suspicion-Free Adversarial Attacks on Clustering Algorithms. (98%)
SMART: Skeletal Motion Action Recognition aTtack. (96%)
Defending Against Model Stealing Attacks with Adaptive Misinformation. (93%)
The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. (86%)
Signed Input Regularization. (15%)
Maintaining Discrimination and Fairness in Class Incremental Learning. (1%)
Learning To Characterize Adversarial Subspaces. (99%)
Simple iterative method for generating targeted universal adversarial perturbations. (99%)
AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients. (99%)
On Model Robustness Against Adversarial Examples. (98%)
Robust Reading Comprehension with Linguistic Constraints via Posterior Regularization. (82%)
DomainGAN: Generating Adversarial Examples to Attack Domain Generation Algorithm Classifiers.
Self-supervised Adversarial Training. (99%)
CAGFuzz: Coverage-Guided Adversarial Generative Fuzzing Testing of Deep Learning Systems. (99%)
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks.
Adversarial Margin Maximization Networks.
Improving Robustness of Task Oriented Dialog Systems.
On Robustness to Adversarial Examples and Polynomial Optimization.
Adversarial Examples in Modern Machine Learning: A Review.
RNN-Test: Adversarial Testing Framework for Recurrent Neural Network Systems.
Few-Features Attack to Fool Machine Learning Models through Mask-Based GAN.
CALPA-NET: Channel-pruning-assisted Deep Residual Network for Steganalysis of Digital Images.
Learning From Brains How to Regularize Machines.
Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory.
GraphDefense: Towards Robust Graph Convolutional Networks.
A Reinforced Generation of Adversarial Samples for Neural Machine Translation.
Improving Machine Reading Comprehension via Adversarial Training.
Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples.
Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy.
Intrusion Detection for Industrial Control Systems: Evaluation Analysis and Adversarial Attacks.
Domain Robustness in Neural Machine Translation.
Imperceptible Adversarial Attacks on Tabular Data.
Adversarial Attacks on GMM i-vector based Speaker Verification Systems.
Patch augmentation: Towards efficient decision boundaries for neural networks. (99%)
White-Box Target Attack for EEG-Based BCI Regression Problems.
Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces.
How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods.
Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance.
Reversible Adversarial Examples based on Reversible Image Transformation.
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey.
Adversarial Enhancement for Community Detection in Complex Networks.
Test Metrics for Recurrent Neural Networks.
Intriguing Properties of Adversarial ML Attacks in the Problem Space.
DLA: Dense-Layer-Analysis for Adversarial Example Detection.
The Tale of Evil Twins: Adversarial Inputs versus Backdoored Models.
Persistency of Excitation for Robustness of Neural Networks.
Fast-UAP: Algorithm for Speeding up Universal Adversarial Perturbation Generation with Orientation of Perturbation Vectors.
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems.
Improved Detection of Adversarial Attacks via Penetration Distortion Maximization.
Security of Facial Forensics Models Against Adversarial Attacks.
Enhancing Certifiable Robustness via a Deep Model Ensemble.
Certifiable Robustness to Graph Perturbations.
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System.
Universal Adversarial Perturbations Against Person Re-Identification.
Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization.
Adversarial Example in Remote Sensing Image Recognition.
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks.
Certified Adversarial Robustness for Deep Reinforcement Learning.
Open the Boxes of Words: Incorporating Sememes into Textual Adversarial Attack.
EdgeFool: An Adversarial Image Enhancement Filter.
Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolution Neural Networks.
Adversarial Defense Via Local Flatness Regularization.
Detection of Adversarial Attacks and Characterization of Adversarial Subspace.
Understanding and Quantifying Adversarial Examples Existence in Linear Classification.
Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples.
MediaEval 2019: Concealed FGSM Perturbations for Privacy Preservation.
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?.
ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries.
A Useful Taxonomy for Adversarial Robustness of Neural Networks.
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks.
Attacking Optical Flow.
Cross-Representation Transferability of Adversarial Perturbations: From Spectrograms to Audio Waveforms.
Adversarial Example Detection by Classification for Deep Speech Recognition.
Structure Matters: Towards Generating Transferable Adversarial Images.
Recovering Localized Adversarial Attacks.
Learning to Learn by Zeroth-Order Oracle.
An Alternative Surrogate Loss for PGD-based Adversarial Testing.
Enhancing Recurrent Neural Networks with Sememes.
Adversarial Attacks on Spoofing Countermeasures of automatic speaker verification.
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?.
A Saddle-Point Dynamical System Approach for Robust Deep Learning.
Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking.
Evading Real-Time Person Detectors by Adversarial T-shirt.
Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets.
Enforcing Linearity in DNN succours Robustness and Adversarial Image Generation.
LanCe: A Comprehensive and Lightweight CNN Defense Methodology against Physical Adversarial Attacks on Embedded Multimedia Applications.
A New Defense Against Adversarial Images: Turning a Weakness into a Strength.
On adversarial patches: real-world attack on ArcFace-100 face recognition system.
Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation.
Understanding Misclassifications by Attributes.
Adversarial Examples for Models of Code.
Real-world attack on MTCNN face detection system.
DeepSearch: Simple and Effective Blackbox Fuzzing of Deep Neural Networks.
Confidence-Calibrated Adversarial Training: Towards Robust Models Generalizing Beyond the Attack Used During Training.
ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization.
Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models.
On Robustness of Neural Ordinary Differential Equations.
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems.
Verification of Neural Networks: Specifying Global Robustness using Generative Models.
Information Robust Dirichlet Networks for Predictive Uncertainty Estimation.
Universal Adversarial Perturbation for Text Classification.
Learning deep forest with multi-scale Local Binary Pattern features for face anti-spoofing.
Adversarial Learning of Deepfakes in Accounting.
Deep Latent Defence.
Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system.
Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications.
SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations.
Interpretable Disentanglement of Neural Networks by Extracting Class-Specific Subnetwork.
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies.
Unrestricted Adversarial Attacks for Semantic Segmentation.
Requirements for Developing Robust Neural Networks.
Adversarial Examples for Cost-Sensitive Classifiers.
Verification of Neural Network Behaviour: Formal Guarantees for Power System Applications.
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions.
BUZz: BUffer Zones for defending adversarial examples in image classification.
Attacking Vision-based Perception in End-to-End Autonomous Driving Models.
Adversarially Robust Few-Shot Learning: A Meta-Learning Approach.
Boosting Image Recognition with Non-differentiable Constraints.
Generating Semantic Adversarial Examples with Differentiable Rendering.
Attacking CNN-based anti-spoofing face authentication in the physical domain.
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack.
Deep Neural Rejection against Adversarial Examples.
Cross-Layer Strategic Ensemble Defense Against Adversarial Examples.
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML.
Adversarial Patches Exploiting Contextual Reasoning in Object Detection.
Black-box Adversarial Attacks with Bayesian Optimization.
Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems.
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest.
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks.
Towards Understanding the Transferability of Deep Representations.
Towards neural networks that provably know when they don't know.
Adversarial Machine Learning Attack on Modulation Classification.
Adversarial ML Attack on Self Organizing Cellular Networks.
Lower Bounds on Adversarial Robustness from Optimal Transport.
Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection.
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks.
FreeLB: Enhanced Adversarial Training for Language Understanding.
A Visual Analytics Framework for Adversarial Text Generation.
Intelligent image synthesis to attack a segmentation CNN using adversarial learning.
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack.
Adversarial Examples for Deep Learning Cyber Security Analytics.
Robust Local Features for Improving the Generalization of Adversarial Training.
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.
HAWKEYE: Adversarial Example Detector for Deep Neural Networks.
Adversarial Learning with Margin-based Triplet Embedding Regularization.
Defending Against Physically Realizable Attacks on Image Classification.
COPYCAT: Practical Adversarial Attacks on Visualization-Based Malware Detection.
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation.
Adversarial Vulnerability Bounds for Gaussian Process Classification.
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks.
Training Robust Deep Neural Networks via Adversarial Noise Propagation.
Toward Robust Image Classification.
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review.
Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model.
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges.
HAD-GAN: A Human-perception Auxiliary Defense GAN model to Defend Adversarial Examples.
They Might NOT Be Giants: Crafting Black-Box Adversarial Examples with Fewer Queries Using Particle Swarm Optimization.
Towards Quality Assurance of Software Product Lines with Adversarial Configurations.
Interpreting and Improving Adversarial Robustness with Neuron Sensitivity.
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms.
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors.
Natural Language Adversarial Attacks and Defenses in Word Level.
Adversarial Attack on Skeleton-based Human Action Recognition.
Say What I Want: Towards the Dark Side of Neural Dialogue Models.
White-Box Adversarial Defense via Self-Supervised Data Estimation.
Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix.
Inspecting adversarial examples using the Fisher information.
An Empirical Investigation of Randomized Defenses against Adversarial Attacks.
Transferable Adversarial Robustness using Adversarially Trained Autoencoders.
Feedback Learning for Improving the Robustness of Neural Networks.
Sparse and Imperceivable Adversarial Attacks.
Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification.
Identifying and Resisting Adversarial Videos Using Temporal Consistency.
Effectiveness of Adversarial Examples and Defenses for Malware Classification.
Towards Noise-Robust Neural Networks via Progressive Adversarial Training.
UPC: Learning Universal Physical Camouflage Attacks on Object Detectors.
FDA: Feature Disruptive Attack.
Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection.
Toward Finding The Global Optimal of Adversarial Examples.
Adversarial Robustness Against the Union of Multiple Perturbation Models.
STA: Adversarial Attacks on Siamese Trackers.
When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures.
Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification.
Natural Adversarial Sentence Generation with Gradient-based Perturbation.
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information.
Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents.
Adversarial Examples with Difficult Common Words for Paraphrase Identification.
Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes ?.
Certified Robustness to Adversarial Word Substitutions.
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation.
Metric Learning for Adversarial Robustness.
Adversarial Training Methods for Network Embedding.
Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness.
Defending Against Misclassification Attacks in Transfer Learning.
Universal, transferable and targeted adversarial attacks.
A Statistical Defense Approach for Detecting Adversarial Examples.
Adversarial Edit Attacks for Tree Data.
advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns.
Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower.
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
AdvHat: Real-world adversarial attack on ArcFace Face ID system.
Saliency Methods for Explaining Adversarial Attacks.
Testing Robustness Against Unforeseen Adversaries.
Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples.
Robust Graph Neural Network Against Poisoning Attacks via Transfer Learning.
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks.
Universal Adversarial Triggers for NLP.
Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries.
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses.
On the Robustness of Human Pose Estimation.
Adversarial Defense by Suppressing High-frequency Components.
Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation.
Nesterov Accelerated Gradient and Scale Invariance for Improving Transferability of Adversarial Examples.
Adversarial point perturbations on 3D objects.
DAPAS : Denoising Autoencoder to Prevent Adversarial attack in Semantic Segmentation.
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once.
AdvFaces: Adversarial Face Synthesis.
On Defending Against Label Flipping Attacks on Malware Detection Systems.
Adversarial Neural Pruning.
On the Adversarial Robustness of Neural Networks without Weight Transport.
Defending Against Adversarial Iris Examples Using Wavelet Decomposition.
Universal Adversarial Audio Perturbations.
Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations.
Investigating Decision Boundaries of Trained Neural Networks.
Explaining Deep Neural Networks Using Spectrum-Based Fault Localization.
MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks.
BlurNet: Defense by Filtering the Feature Maps.
Random Directional Attack for Fooling Deep Neural Networks.
Adversarial Self-Defense for Cycle-Consistent GANs.
Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve.
A principled approach for generating adversarial images under non-smooth dissimilarity metrics.
Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems.
A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models.
Exploring the Robustness of NMT Systems to Nonsensical Inputs.
AdvGAN++ : Harnessing latent layers for adversary generation.
Black-box Adversarial ML Attack on Modulation Classification.
Robustifying deep networks for image segmentation.
Adversarial Robustness Curves.
Optimal Attacks on Reinforcement Learning Policies.
Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation.
Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding.
Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples.
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment.
Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin.
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method.
Towards Adversarially Robust Object Detection.
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems.
Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks.
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training.
Weakly Supervised Localization using Min-Max Entropy: an Interpretable Framework.
Enhancing Adversarial Example Transferability with an Intermediate Level Attack.
Characterizing Attacks on Deep Reinforcement Learning.
Connecting Lyapunov Control Theory to Adversarial Attacks.
Real-time Evasion Attacks with Physical Constraints on Deep Learning-based Anomaly Detectors in Industrial Control Systems.
Robustness properties of Facebook's ResNeXt WSL models.
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods.
Natural Adversarial Examples.
Latent Adversarial Defence with Boundary-guided Generation.
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving.
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics.
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning.
Recovery Guarantees for Compressible Signals with Adversarial Noise.
Measuring the Transferability of Adversarial Examples.
Unsupervised Adversarial Attacks on Deep Feature-based Retrieval with GAN.
Stateful Detection of Black-Box Adversarial Attacks.
Generative Modeling by Estimating Gradients of the Data Distribution.
Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn.
Adversarial Objects Against LiDAR-Based Autonomous Driving Systems.
Metamorphic Detection of Adversarial Examples in Deep Learning Models With Affine Transformations.
Generating Adversarial Fragments with Adversarial Networks for Physical-world Implementation.
Affine Disentangled GAN for Interpretable and Robust AV Perception.
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions.
Adversarial Robustness through Local Linearization.
Adversarial Attacks in Sound Event Classification.
Robust Synthesis of Adversarial Visual Examples Using a Deep Image Prior.
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack.
Efficient Cyber Attacks Detection in Industrial Control Systems Using Lightweight Neural Networks and PCA.
Treant: Training Evasion-Aware Decision Trees.
Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network".
Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation.
Accurate, reliable and fast robustness evaluation.
Fooling a Real Car with Adversarial Traffic Signs.
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty.
Certifiable Robustness and Robust Training for Graph Convolutional Networks.
Learning to Cope with Adversarial Attacks.
Robustness Guarantees for Deep Neural Networks on Videos.
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense.
Evolving Robust Neural Architectures to Defend from Adversarial Attacks.
Adversarial Robustness via Label-Smoothing.
The Adversarial Robustness of Sampling.
Defending Adversarial Attacks by Correcting logits.
Quantitative Verification of Neural Networks And its Security Applications.
Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection.
Deceptive Reinforcement Learning Under Adversarial Manipulations on Cost Signals.
Defending Against Adversarial Examples with K-Nearest Neighbor.
Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations.
A Fourier Perspective on Model Robustness in Computer Vision.
Evolution Attack On Neural Networks.
Adversarial Examples to Fool Iris Recognition Systems.
On Physical Adversarial Patches for Object Detection.
Catfish Effect Between Internal and External Attackers:Being Semi-honest is Helpful.
Improving the robustness of ImageNet classifiers using elements of human visual cognition.
A unified view on differential privacy and robustness to adversarial examples.
Convergence of Adversarial Training in Overparametrized Networks.
Global Adversarial Attacks for Assessing Deep Learning Robustness.
Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield.
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing.
Adversarial attacks on Copyright Detection Systems.
Improving Black-box Adversarial Attacks with a Transfer-based Prior.
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks.
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Accuracy.
Defending Against Adversarial Attacks Using Random Forests.
Uncovering Why Deep Neural Networks Lack Robustness: Representation Metrics that Link to Adversarial Attacks.
Adversarial Training Can Hurt Generalization.
Towards Compact and Robust Deep Neural Networks.
Perceptual Based Adversarial Audio Attacks.
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks.
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks.
Towards Stable and Efficient Training of Verifiably Robust Neural Networks.
Model Agnostic Dual Quality Assessment for Adversarial Machine Learning and an Analysis of Current Neural Networks and Defenses.
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks.
Lower Bounds for Adversarially Robust PAC Learning.
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers.
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks.
Mimic and Fool: A Task Agnostic Adversarial Attack.
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks.
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles.
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective.
Robustness Verification of Tree-based Models.
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective.
On the Vulnerability of Capsule Networks to Adversarial Attacks.
Intriguing properties of adversarial training.
Improved Adversarial Robustness via Logit Regularization Methods.
Attacking Graph Convolutional Networks via Rewiring.
Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.
Strategies to architect AI Safety: Defense to guard AI from Adversaries.
Making targeted black-box evasion attacks effective and efficient.
Sensitivity of Deep Convolutional Networks to Gabor Noise.
ML-LOO: Detecting Adversarial Examples with Feature Attribution.
Defending against Adversarial Attacks through Resilient Feature Regeneration.
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks.
A cryptographic approach to black box adversarial machine learning.
Using learned optimizers to make models robust to input noise.
Adversarial Examples for Non-Parametric Methods: Attacks, Defenses and Large Sample Limits.
Efficient Project Gradient Descent for Ensemble Adversarial Attack.
Inductive Bias of Gradient Descent based Adversarial Training on Separable Data.
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness.
Robust Attacks against Multiple Classifiers.
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation.
Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric.
Should Adversarial Attacks Use Pixel p-Norm?.
Image Synthesis with a Single (Robust) Classifier.
Query-efficient Meta Attack to Deep Neural Networks.
MNIST-C: A Robustness Benchmark for Computer Vision.
Enhancing Gradient-based Attacks with Symbolic Intervals.
Multi-way Encoding for Robustness.
Adversarial Training Generalizes Data-dependent Spectral Norm Regularization.
Conditional Generative Models are not Robust.
Adversarial Exploitation of Policy Imitation.
RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies.
Adversarial Risk Bounds for Neural Networks through Sparsity based Compression.
The Adversarial Machine Learning Conundrum: Can The Insecurity of ML Become The Achilles' Heel of Cognitive Networks?.
Adversarial Robustness as a Prior for Learned Representations.
Achieving Generalizable Robustness of Deep Neural Networks by Stability Training.
A Surprising Density of Illusionable Natural Speech.
Fast and Stable Interval Bounds Propagation for Training Verifiably Robust Models.
Adversarially Robust Generalization Just Requires More Unlabeled Data.
Adversarial Examples for Edge Detection: They Exist, and They Transfer.
Enhancing Transformation-based Defenses using a Distribution Classifier.
Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification.
Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness.
Unlabeled Data Improves Adversarial Robustness.
Are Labels Required for Improving Adversarial Robustness?.
Real-Time Adversarial Attacks.
Residual Networks as Nonlinear Systems: Stability Analysis using Linearization.
Identifying Classes Susceptible to Adversarial Attacks.
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness.
Interpretable Adversarial Training for Text.
Bandlimiting Neural Networks Against Adversarial Attacks.
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward.
Misleading Authorship Attribution of Source Code using Adversarial Learning.
Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations.
Functional Adversarial Attacks.
High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks.
ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation.
Snooping Attacks on Deep Reinforcement Learning.
Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics.
Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss.
Probabilistically True and Tight Bounds for Robust Deep Neural Network Training.
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness.
Cross-Domain Transferability of Adversarial Perturbations.
Certifiably Robust Interpretation in Deep Learning.
Brain-inspired reverse adversarial examples.
Adversarially Robust Learning Could Leverage Computational Hardness.
Label Universal Targeted Attack.
Divide-and-Conquer Adversarial Detection.
Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$.
Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking.
Scaleable input gradient regularization for adversarial robustness.
Combating Adversarial Misspellings with Robust Word Recognition.
Analyzing the Interpretability Robustness of Self-Explaining Models.
Unsupervised Euclidean Distance Attack on Network Embedding.
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations.
Non-Determinism in Neural Networks for Adversarial Robustness.
Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders.
Enhancing ML Robustness Using Physical-World Constraints.
Robust Classification using Robust Feature Augmentation.
Generalizable Adversarial Attacks Using Generative Models.
Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks.
Adversarial Distillation for Ordered Top-k Attacks.
Adversarial Policies: Attacking Deep Reinforcement Learning.
Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness.
Robustness to Adversarial Perturbations in Learning from Incomplete Data.
Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering.
Enhancing Adversarial Defense by k-Winners-Take-All.
A Direct Approach to Robust Deep Learning Using Adversarial Networks.
PHom-GeM: Persistent Homology for Generative Models.
Thwarting finite difference adversarial attacks with output randomization.
Interpreting Adversarially Trained Convolutional Neural Networks.
Adversarially Robust Distillation.
Convergence and Margin of Adversarial Training on Separable Data.
Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection.
DoPa: A Fast and Comprehensive CNN Defense Methodology against Physical Adversarial Attacks.
Adversarially robust transfer learning.
Testing Deep Neural Network based Image Classifiers.
What Do Adversarially Robust Models Look At?.
Taking Care of The Discretization Problem: A Black-Box Adversarial Image Attack in Discrete Integer Domain.
POPQORN: Quantifying Robustness of Recurrent Neural Networks.
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models.
Simple Black-box Adversarial Attacks.
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization.
War: Detecting adversarial examples by pre-processing input data.
On Norm-Agnostic Robustness of Adversarial Training.
Robustification of deep net classifiers by key based diversified aggregation with pre-filtering.
Adversarial Examples for Electrocardiograms.
Analyzing Adversarial Attacks Against Deep Learning for Intrusion Detection in IoT Networks.
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models.
Moving Target Defense for Deep Visual Sensing against Adversarial Examples.
Interpreting and Evaluating Neural Network Robustness.
On the Connection Between Adversarial Robustness and Saliency Map Interpretability.
Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables.
Adversarial Defense Framework for Graph Neural Network.
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain.
Exploring the Hyperparameter Landscape of Adversarial Robustness.
Learning Interpretable Features via Adversarially Robust Optimization.
Universal Adversarial Perturbations for Speech Recognition Systems.
ROSA: Robust Salient Object Detection against Adversarial Attacks.
Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems.
Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction.
A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks.
Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study.
An Empirical Evaluation of Adversarial Robustness under Transfer Learning.
Adaptive Generation of Unrestricted Adversarial Inputs.
Batch Normalization is a Cause of Adversarial Vulnerability.
Adversarial Examples Are Not Bugs, They Are Features.
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples.
Transfer of Adversarial Robustness Between Perturbation Types.
Adversarial Training with Voronoi Constraints.
Weight Map Layer for Noise and Adversarial Attack Robustness.
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle.
POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm.
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks.
Dropping Pixels for Adversarial Robustness.
Test Selection for Deep Learning Systems.
Detecting Adversarial Examples through Nonlinear Dimensionality Reduction.
Adversarial Training for Free!.
Adversarial Training and Robustness for Multiple Perturbations.
Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks.
Robustness Verification of Support Vector Machines.
A Robust Approach for Securing Audio Classification Against Adversarial Attacks.
Physical Adversarial Textures that Fool Visual Object Tracking.
Minimizing Perceived Image Quality Loss Through Adversarial Attack Scoping.
A blessing in disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness.
Using Videos to Evaluate Image Model Robustness.
Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning.
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach.
Salient Object Detection in the Deep Learning Era: An In-Depth Survey.
Fooling automated surveillance cameras: adversarial patches to attack person detection.
ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks.
Defensive Quantization: When Efficiency Meets Robustness.
Interpreting Adversarial Examples with Attributes.
Adversarial Defense Through Network Profiling Based Path Extraction.
Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks.
Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers.
Reducing Adversarial Example Transferability Using Gradient Regularization.
AT-GAN: A Generative Attack Model for Adversarial Transferring on Generative Adversarial Nets.
Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction.
Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks.
Exploiting Vulnerabilities of Load Forecasting Through Adversarial Attacks.
Big but Imperceptible Adversarial Perturbations via Semantic Manipulation.
Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense.
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks.
Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients.
Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks.
Black-Box Decision based Adversarial Attack with Symmetric $\alpha$-stable Distribution.
Learning to Generate Synthetic Data via Compositing.
Black-box Adversarial Attacks on Video Recognition Models.
Generation & Evaluation of Adversarial Examples for Malware Obfuscation.
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition.
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning.
JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks.
Malware Evasion Attack and Defense.
On Training Robust PDF Malware Classifiers.
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks.
Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks.
White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks.
Understanding the efficacy, reliability and resiliency of computer vision techniques for malware detection and future research directions.
Interpreting Adversarial Examples by Activation Promotion and Suppression.
HopSkipJumpAttack: A Query-Efficient Decision-Based Attack.
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.
Adversarial Attacks against Deep Saliency Models.
Curls & Whey: Boosting Black-Box Adversarial Attacks.
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses.
Robustness of 3D Deep Learning in an Adversarial Setting.
Defending against adversarial attacks by randomized diversification.
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks.
On the Vulnerability of CNN Classifiers in EEG-Based BCIs.
Adversarial Robustness vs Model Compression, or Both?.
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.
Smooth Adversarial Examples.
Rallying Adversarial Techniques against Deep Learning for Network Security.
Bridging Adversarial Robustness and Gradient Interpretability.
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems.
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks.
On the Adversarial Robustness of Multivariate Robust Estimation.
A geometry-inspired decision-based attack.
Defending against Whitebox Adversarial Attacks via Randomized Discretization.
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness.
The LogBarrier adversarial attack: making effective use of decision boundary information.
Robust Neural Networks using Randomized Adversarial Training.
A Formalization of Robustness for Deep Neural Networks.
Variational Inference with Latent Space Quantization for Adversarial Resilience.
Improving Adversarial Robustness via Guided Complement Entropy.
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition.
Adversarial camera stickers: A physical camera-based attack on deep learning systems.
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes.
On the Robustness of Deep K-Nearest Neighbors.
Generating Adversarial Examples With Conditional Generative Adversarial Net.
Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems.
Adversarial Attacks on Deep Neural Networks for Time Series Classification.
On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models.
On Certifying Non-uniform Bound against Adversarial Attacks.
A Research Agenda: Dynamic Models to Defend Against Correlated Attacks.
Attribution-driven Causal Analysis for Detection of Adversarial Examples.
Adversarial attacks against Fact Extraction and VERification.
Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models.
Can Adversarial Network Attack be Defended?.
Manifold Preserving Adversarial Learning.
Attack Type Agnostic Perceptual Enhancement of Adversarial Images.
Out-domain examples for generative models.
GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier.
Statistical Guarantees for the Robustness of Bayesian Neural Networks.
L1-norm double backpropagation adversarial defense.
Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search.
The Vulnerabilities of Graph Convolutional Networks: Stronger Attacks and Defensive Techniques.
Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming.
Complement Objective Training.
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations.
Evaluating Adversarial Evasion Attacks in the Context of Wireless Communications.
PuVAE: A Variational Autoencoder to Purify Adversarial Examples.
Attacking Graph-based Classification via Manipulating the Graph Structure.
On the Effectiveness of Low Frequency Perturbations.
Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN.
Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors.
Adversarial Attack and Defense on Point Sets.
Stochastically Rank-Regularized Tensor Regression Networks.
Adversarial Attacks on Time Series.
Communication without Interception: Defense against Deep-Learning-based Modulation Detection.
Robust Decision Trees Against Adversarial Examples.
Disentangled Deep Autoencoding Regularization for Robust Image Classification.
Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification.
Verification of Non-Linear Specifications for Neural Networks.
Adversarial attacks hidden in plain sight.
MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses.
Adversarial Reinforcement Learning under Partial Observability in Software-Defined Networking.
Re-evaluating ADEM: A Deeper Look at Scoring Dialogue Responses.
A Deep, Information-theoretic Framework for Robust Biometric Recognition.
Adversarial Attacks on Graph Neural Networks via Meta Learning.
Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems.
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks.
On the Sensitivity of Adversarial Robustness to Input Data Distributions.
Quantifying Perceptual Distortion of Adversarial Examples.
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations.
Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure.
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch.
Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers.
There are No Bit Parts for Sign Bits in Black-Box Attacks.
On Evaluating Adversarial Robustness.
AuxBlocks: Defense Adversarial Example via Auxiliary Blocks.
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces.
Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training.
Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness.
DeepFault: Fault Localization for Deep Neural Networks.
Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?.
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples.
Examining Adversarial Learning against Graph-based IoT Malware Detection Systems.
Adversarial Samples on Android Malware Detection Systems for IoT Systems.
A Survey: Towards a Robust Deep Neural Network in Text Domain.
Model Compression with Adversarial Robustness: A Unified Optimization Framework.
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks.
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images.
Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis.
Discretization based Solutions for Secure Machine Learning against Adversarial Attacks.
Robustness Of Saak Transform Against Adversarial Attacks.
Certified Adversarial Robustness via Randomized Smoothing.
Fooling Neural Network Interpretations via Adversarial Model Manipulation.
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples.
Fatal Brain Damage.
SNN under Attack: are Spiking Deep Belief Networks vulnerable to Adversarial Examples?.
Theoretical evidence for adversarial robustness through randomization.
Robustness Certificates Against Adversarial Examples for ReLU Networks.
Natural and Adversarial Error Detection using Invariance to Image Transformations.
Adaptive Gradient for Adversarial Perturbations Generation.
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks.
The Efficacy of SHIELD under Different Threat Models.
A New Family of Neural Networks Provably Resistant to Adversarial Attacks.
Training Artificial Neural Networks by Generalized Likelihood Ratio Method: Exploring Brain-like Learning to Improve Robustness.
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance.
Augmenting Model Robustness with Transformation-Invariant Attacks.
Metric Attack and Defense for Person Re-identification.
Adversarial Examples Are a Natural Consequence of Test Error in Noise.
On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks.
RED-Attack: Resource Efficient Decision based Attack for Machine Learning.
Reliable Smart Road Signs.
Improving Adversarial Robustness of Ensembles with Diversity Training.
CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks.
Defense Methods Against Adversarial Examples for Recurrent Neural Networks.
Using Pre-Training Can Improve Model Robustness and Uncertainty.
An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers.
Characterizing the Shape of Activation Space in Deep Neural Networks.
Strong Black-box Adversarial Attacks on Unsupervised Machine Learning Models.
A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm.
Towards Weighted-Sampling Audio Adversarial Example Attack.
Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data.
Improving Adversarial Robustness via Promoting Ensemble Diversity.
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples.
Theoretically Principled Trade-off between Robustness and Accuracy.
Sitatapatra: Blocking the Transfer of Adversarial Samples.
SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems.
Sensitivity Analysis of Deep Neural Networks.
Universal Rules for Fooling Deep Neural Networks based Text Classification.
Perception-in-the-Loop Adversarial Examples.
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey.
Easy to Fool? Testing the Anti-evasion Capabilities of PDF Malware Scanners.
The Limitations of Adversarial Training and the Blind-Spot Attack.
Generating Adversarial Perturbation with Root Mean Square Gradient.
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System.
Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries.
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification.
Image Transformation can make Neural Networks more robust against Adversarial Examples.
Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers.
Interpretable BoW Networks for Adversarial Example Detection.
Image Super-Resolution as a Defense Against Adversarial Attacks.
Fake News Detection via NLP is Vulnerable to Adversarial Attacks.
Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study.
Multi-Label Adversarial Perturbations.
Adversarial Robustness May Be at Odds With Simplicity.
A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks.
DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems.
Adversarial Attack and Defense on Graph Data: A Survey.
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples.
A Data-driven Adversarial Examples Recognition Framework via Adversarial Feature Genome.
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition.
PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning.
Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors.
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense.
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks.
Markov Game Modeling of Moving Target Defense for Strategic Detection of Threats in Cloud Networks.
Increasing the adversarial robustness and explainability of capsule networks with $\gamma$-capsules. (2%)
Exploiting the Inherent Limitation of L0 Adversarial Examples.
Dissociable neural representations of adversarially perturbed images in deep neural networks and the human brain.
Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge.
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach.
Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings.
Designing Adversarially Resilient Classifiers using Resilient Feature Engineering.
A Survey of Safety and Trustworthiness of Deep Neural Networks.
Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks.
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples.
Trust Region Based Adversarial Attack on Neural Networks.
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing.
TextBugger: Generating Adversarial Text Against Real-world Applications.
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem.
Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform.
Mix'n'Squeeze: Thwarting Adaptive Adversarial Samples Using Randomized Squeezing.
Adversarial Framing for Image and Video Classification.
Defending Against Universal Perturbations With Shared Adversarial Training.
Learning Transferable Adversarial Examples via Ghost Networks.
Feature Denoising for Improving Adversarial Robustness.
AutoGAN: Robust Classifier Against Adversarial Attacks.
Detecting Adversarial Examples in Convolutional Neural Networks.
Deep-RBF Networks Revisited: Robust Classification with Rejection.
Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach.
Adversarial Defense of Image Classification Using a Variational Auto-Encoder.
Adversarial Attacks, Regression, and Numerical Stability Regularization.
Prior Networks for Detection of Adversarial Attacks.
Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack.
Fooling Network Interpretation in Image Classification.
The Limitations of Model Uncertainty in Adversarial Settings.
Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training.
On Configurable Defense against Adversarial Example Attacks.
SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications.
Regularized Ensembles and Transferability in Adversarial Learning.
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures.
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples.
Disentangling Adversarial Robustness and Generalization.
Interpretable Deep Learning under Fire.
Adversarial Example Decomposition.
Model-Reuse Attacks on Deep Learning Systems.
Universal Perturbation Attack Against Image Retrieval.
FineFool: Fine Object Contour Attack via Attention.
SentiNet: Detecting Physical Attacks Against Deep Learning Systems.
Building robust classifiers through generation of confident out of distribution examples.
Effects of Loss Functions And Target Representations on Adversarial Robustness.
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification.
Transferable Adversarial Attacks for Image and Video Object Detection.
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples.
Adversarial Defense by Stratified Convolutional Sparse Coding.
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks.
Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting.
Adversarial Examples as an Input-Fault Tolerance Problem.
Analyzing Federated Learning through an Adversarial Lens.
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers.
Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects.
A randomized gradient-free attack on ReLU networks.
Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness.
Universal Adversarial Training.
Robust Classification of Financial Risk.
Using Attribution to Decode Dataset Bias in Neural Network Models for Chemistry.
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks.
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies.
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks.
Is Data Clustering in Adversarial Settings Secure?.
Attention, Please! Adversarial Defense via Attention Rectification and Preservation.
Robustness via curvature regularization, and vice versa.
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses.
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack.
Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles.
Detecting Adversarial Perturbations Through Spatial Behavior in Activation Spaces.
Task-generalizable Adversarial Attack based on Perceptual Metric.
Towards Robust Neural Networks with Lipschitz Continuity.
How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples.
MimicGAN: Corruption-Mimicking for Blind Image Recovery & Adversarial Defense.
Intermediate Level Adversarial Attack for Enhanced Transferability.
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples.
Convolutional Neural Networks with Transformed Input based on Robust Tensor Network Decomposition.
Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding.
Generalizable Adversarial Training via Spectral Normalization.
The Taboo Trap: Behavioural Detection of Adversarial Samples.
Regularized adversarial examples for model interpretability.
DeepConsensus: using the consensus of features from multiple layers to attain robust image classification.
Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples.
Boosting the Robustness Verification of DNN by Identifying the Achilles's Heel.
Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues.
DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules.
A Spectral View of Adversarially Robust Features.
A note on hyperparameters in black-box adversarial examples.
Mathematical Analysis of Adversarial Attacks.
Adversarial Examples from Cryptographic Pseudo-Random Generators.
Verification of Recurrent Neural Networks Through Rule Extraction.
Robustness of spectral methods for community detection.
Deep Q learning for fooling neural networks.
Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses.
New CleverHans Feature: Better Adversarial Robustness Evaluations with Attack Bundling.
A Geometric Perspective on the Transferability of Adversarial Directions.
CAAD 2018: Iterative Ensemble Adversarial Attack.
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning.
MixTrain: Scalable Training of Verifiably Robust Neural Networks.
SparseFool: a few pixels make a big difference.
Active Deep Learning Attacks under Strict Rate Limitations for Online API Calls.
FUNN: Flexible Unsupervised Neural Network.
On the Transferability of Adversarial Examples Against CNN-Based Image Forensics.
FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning.
SSCNets: A Selective Sobel Convolution-based Technique to Enhance the Robustness of Deep Neural Networks against Security Attacks.
QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks.
CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network.
Learning to Defense by Learning to Attack.
Adversarial Black-Box Attacks on Automatic Speech Recognition Systems using Multi-Objective Evolutionary Optimization.
A Marauder's Map of Security and Privacy in Machine Learning.
Semidefinite relaxations for certifying robustness to adversarial examples.
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks.
Efficient Neural Network Robustness Certification with General Activation Functions.
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks.
Improving Adversarial Robustness by Encouraging Discriminative Features.
On the Geometry of Adversarial Examples.
Excessive Invariance Causes Adversarial Vulnerability.
When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers.
Reversible Adversarial Examples.
Improved Network Robustness with Adversary Critic.
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models.
Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution.
Logit Pairing Methods Can Fool Gradient-Based Attacks.
Rademacher Complexity for Adversarially Robust Generalization.
RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications.
Robust Audio Adversarial Example for a Physical Attack.
Towards Robust Deep Neural Networks.
Regularization Effect of Fast Gradient Sign Method and its Generalization.
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples.
Law and Adversarial Machine Learning.
Attack Graph Convolutional Networks by Adding Fake Nodes.
Evading classifiers in discrete domains with provable optimality guarantees.
Robust Adversarial Learning via Sparsifying Front Ends.
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses.
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy.
Et Tu Alexa? When Commodity WiFi Devices Turn into Adversarial Motion Sensors.
Adversarial Risk Bounds via Function Transformation.
Cost-Sensitive Robustness against Adversarial Examples.
Sparse DNNs with Improved Adversarial Robustness.
On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm.
Exploring Adversarial Examples in Malware Detection.
A Training-based Identification Approach to VIN Adversarial Examples.
Provable Robustness of ReLU networks via Maximization of Linear Regions.
Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers.
Security Matters: A Survey on Adversarial Machine Learning.
Concise Explanations for Neural Networks using Adversarial Training.
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation.
MeshAdv: Adversarial Meshes for Visual Recognition.
Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only.
Analyzing the Noise Robustness of Deep Neural Networks.
The Adversarial Attack and Detection under the Fisher Information Metric.
Limitations of adversarial robustness: strong No Free Lunch Theorem.
Average Margin Regularization for Classifiers.
Efficient Two-Step Adversarial Defense for Deep Neural Networks.
Combinatorial Attacks on Binarized Neural Networks.
Improved Generalization Bounds for Robust Learning.
Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness.
Can Adversarially Robust Learning Leverage Computational Hardness?.
Adversarial Examples - A Complete Characterisation of the Phenomenon.
Link Prediction Adversarial Attack.
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network.
Large batch size training of neural networks with adversarial training and second-order information.
Improving the Generalization of Adversarial Training with Domain Adaptation.
Improved robustness to adversarial examples using Lipschitz regularization of the loss.
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks.
CAAD 2018: Generating Transferable Adversarial Examples.
To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression.
Interpreting Adversarial Robustness: A View from Decision Surface in Input Space.
Characterizing Audio Adversarial Examples Using Temporal Dependency.
Adversarial Attacks and Defences: A Survey.
Explainable Black-Box Attacks Against Model-based Authentication.
Adversarial Attacks on Cognitive Self-Organizing Networks: The Challenge and the Way Forward.
Neural Networks with Structural Resistance to Adversarial Attacks.
Fast Geometrically-Perturbed Adversarial Faces.
On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces.
Low Frequency Adversarial Perturbation.
Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR.
Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization.
Unrestricted Adversarial Examples.
Adversarial Binaries for Authorship Identification.
Playing the Game of Universal Adversarial Perturbations.
Efficient Formal Safety Analysis of Neural Networks.
Adversarial Training Towards Robust Multimedia Recommender System.
Generating 3D Adversarial Point Clouds.
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples.
Robustness Guarantees for Bayesian Inference with Gaussian Processes.
Exploring the Vulnerability of Single Shot Module in Object Detectors via Imperceptible Background Patches.
Robust Adversarial Perturbation on Deep Proposal-based Models.
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks.
Query-Efficient Black-Box Attack by Active Learning.
Adversarial Examples: Opportunities and Challenges.
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions.
Isolated and Ensemble Audio Preprocessing Methods for Detecting Adversarial Examples against Automatic Speech Recognition.
Humans can decipher adversarial images.
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure.
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability.
Certified Adversarial Robustness with Additive Noise.
Towards Query Efficient Black-box Attacks: An Input-free Perspective.
Fast Gradient Attack on Network Embedding.
Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples.
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
Open Set Adversarial Examples.
A Deeper Look at 3D Shape Classifiers.
Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer.
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Examples for Question Answering.
Are adversarial examples inevitable?.
Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models.
IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection.
Adversarial Reprogramming of Text Classification Neural Networks.
Bridging machine learning and cryptography in defence against adversarial attacks.
Adversarial Attacks on Node Embeddings.
HASP: A High-Performance Adaptive Mobile Security Enhancement Against Malicious Speech Recognition.
Adversarial Attack Type I: Cheat Classifiers by Significant Changes.
MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks.
DLFuzz: Differential Fuzzing Testing of Deep Learning Systems.
All You Need is "Love": Evading Hate-speech Detection.
Lipschitz regularized Deep Neural Networks generalize and are adversarially robust.
Targeted Nonlinear Adversarial Perturbations in Images and Videos.
Generalisation in humans and deep neural networks.
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge.
Guiding Deep Learning System Testing using Surprise Adequacy.
Analysis of adversarial attacks against CNN-based image forgery detectors.
Is Machine Learning in Power Systems Vulnerable?.
Maximal Jacobian-based Saliency Map Attack.
Adversarial Attacks on Deep-Learning Based Radio Signal Classification.
Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection.
Stochastic Combinatorial Ensembles for Defending Against Adversarial Examples.
Reinforcement Learning for Autonomous Defence in Software-Defined Networking.
Mitigation of Adversarial Attacks through Embedded Feature Selection.
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding.
Distributionally Adversarial Attack.
Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection.
Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks.
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer.
Data augmentation using synthetic data for time series classification with deep residual networks.
Adversarial Vision Challenge.
Defense Against Adversarial Attacks with Saak Transform.
Gray-box Adversarial Training.
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models.
Structured Adversarial Attack: Towards General Implementation and Better Interpretability.
Traits & Transferability of Adversarial Examples against Instance Segmentation & Object Detection.
ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples.
DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes.
Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions.
EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report).
Rob-GAN: Generator, Discriminator, and Adversarial Attacker.
A general metric for identifying adversarial images.
Evaluating and Understanding the Robustness of Adversarial Logit Pairing.
HiDDeN: Hiding Data With Deep Networks.
Limitations of the Lipschitz constant as a defense against adversarial examples.
Unbounded Output Networks for Classification.
Learning Discriminative Video Representations Using Adversarial Perturbations.
Simultaneous Adversarial Training - Learn from Others Mistakes.
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors.
Physical Adversarial Examples for Object Detectors.
Harmonic Adversarial Attack Method.
Gradient Band-based Adversarial Training for Generalized Attack Immunity of A3C Path Finding.
Motivating the Rules of the Game for Adversarial Example Research.
Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions.
Online Robust Policy Learning in the Presence of Unknown Adversaries.
Manifold Adversarial Learning.
Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach.
With Friends Like These, Who Needs Adversaries?.
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks.
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees.
Attack and defence in cellular decision-making: lessons from machine learning.
Adaptive Adversarial Attack on Scene Text Recognition.
Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks.
Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness.
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations.
Local Gradients Smoothing: Defense against localized adversarial attacks.
Adversarial Robustness Toolbox v1.0.0.
Adversarial Perturbations Against Real-Time Video Classification Systems.
Towards Adversarial Training with Moderate Performance Improvement for Neural Network Classification.
Adversarial Examples in Deep Learning: Characterization and Divergence.
Adversarial Reprogramming of Neural Networks.
Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning.
Customizing an Adversarial Example Generator with Class-Conditional GANs.
Exploring Adversarial Examples: Patterns of One-Pixel Attacks.
Defending Malware Classification Networks Against Adversarial Perturbations with Non-Negative Weight Restrictions.
On Adversarial Examples for Character-Level Neural Machine Translation.
Evaluation of Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) Based Attack Method on MCS 2018 Adversarial Attacks on Black Box Face Recognition System.
Detection based Defense against Adversarial Examples from the Steganalysis Point of View.
Gradient Adversarial Training of Neural Networks.
Combinatorial Testing for Deep Learning Systems.
On the Learning of Deep Local Features for Robust Face Spoofing Detection.
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations.
Non-Negative Networks Against Adversarial Attacks.
Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data.
Hierarchical interpretations for neural network predictions.
Manifold Mixup: Better Representations by Interpolating Hidden States.
Adversarial Attacks on Variational Autoencoders.
Ranking Robustness Under Adversarial Document Manipulations.
Defense Against the Dark Arts: An overview of adversarial example security research and future research directions.
Monge blunts Bayes: Hardness Results for Adversarial Training.
Revisiting Adversarial Risk.
Training Augmentation with Adversarial Examples for Robust Speech Recognition.
Adversarial Attack on Graph Structured Data.
Adversarial Regression with Multiple Learners.
Killing Four Birds with one Gaussian Process: Analyzing Test-Time Attack Vectors on Classification.
DPatch: An Adversarial Patch Attack on Object Detectors.
Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise.
An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks.
PAC-learning in the presence of evasion adversaries.
Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks.
Detecting Adversarial Examples via Key-based Network.
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks.
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders.
Scaling provable adversarial defenses.
Sequential Attacks on Agents for Long-Term Adversarial Goals.
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data.
Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization.
ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio.
Robustifying Models Against Adversarial Attacks by Langevin Dynamics.
Robustness May Be at Odds with Accuracy.
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks.
Adversarial Noise Attacks of Deep Learning Architectures -- Stability Analysis via Sparse Modeled Signals.
Why Botnets Work: Distributed Brute-Force Attacks Need No Synchronization.
Adversarial Examples in Remote Sensing.
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization.
Defending Against Adversarial Attacks by Leveraging an Entire GAN.
Training verified learners with learned verifiers.
Adversarial examples from computational constraints.
Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness.
Anonymizing k-Facial Attributes via Adversarial Perturbations.
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients.
Towards the first adversarially robust neural network model on MNIST.
Adversarially Robust Training through Structured Gradient Regularization.
Adversarial Noise Layer: Regularize Neural Network By Adding Noise.
Adversarial Attacks on Neural Networks for Graph Data.
Constructing Unrestricted Adversarial Examples with Generative Models.
Bidirectional Learning for Robust Neural Networks.
Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference.
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks.
Targeted Adversarial Examples for Black Box Audio Systems.
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models.
Towards Robust Neural Machine Translation.
Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing.
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning.
Curriculum Adversarial Training.
Breaking Transferability of Adversarial Samples with Randomness.
On Visual Hallmarks of Robustness to Adversarial Malware.
Robust Classification with Convolutional Prototype Learning.
Interpretable Adversarial Perturbation in Input Embedding Space for Text.
A Counter-Forensic Method for CNN-Based Camera Model Identification.
Siamese networks for generating adversarial examples.
Concolic Testing for Deep Neural Networks.
How Robust are Deep Neural Networks?.
Adversarially Robust Generalization Requires More Data.
Adversarial Regression for Detecting Attacks in Cyber-Physical Systems.
Formal Security Analysis of Neural Networks using Symbolic Intervals.
Towards Fast Computation of Certified Robustness for ReLU Networks.
Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning.
Siamese Generative Adversarial Privatizer for Biometric Data.
Black-box Adversarial Attacks with Limited Queries and Information.
VectorDefense: Vectorization as a Defense to Adversarial Examples.
Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers.
Generating Natural Language Adversarial Examples.
Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size.
Learning More Robust Features with Adversarial Training.
ADef: an Iterative Algorithm to Construct Adversarial Deformations.
Attacking Convolutional Neural Network using Differential Evolution.
Semantic Adversarial Deep Learning.
Simulation-based Adversarial Test Generation for Autonomous Vehicles with Machine Learning Components.
Neural Automated Essay Scoring and Coherence Modeling for Adversarially Crafted Input.
Robust Machine Comprehension Models via Adversarial Training.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks.
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm.
ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector.
On the Limitation of MagNet Defense against $L_1$-based Adversarial Examples.
Adversarial Attacks Against Medical Deep Learning Systems.
Detecting Malicious PowerShell Commands using Deep Neural Networks.
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses.
Adversarial Training Versus Weight Decay.
An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks.
Adaptive Spatial Steganography Based on Probability-Controlled Adversarial Examples.
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations.
Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks.
Adversarial Attacks and Defences Competition.
Security Consideration For Deep Learning-Based Image Forensics.
Defending against Adversarial Images using Basis Functions Transformations.
The Effects of JPEG and JPEG2000 Compression on Attacks using Adversarial Examples.
Bypassing Feature Squeezing by Increasing Adversary Strength.
On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples.
Clipping free attacks against artificial neural networks.
Security Theater: On the Vulnerability of Classifiers to Exploratory Attacks.
A Dynamic-Adversarial Mining Approach to the Security of Machine Learning.
An Overview of Vulnerabilities of Voice Controlled Systems.
Generalizability vs. Robustness: Adversarial Examples for Medical Imaging.
CNN Based Adversarial Embedding with Minimum Alteration for Image Steganography.
Detecting Adversarial Perturbations with Saliency.
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization.
Understanding Measures of Uncertainty for Adversarial Example Detection.
Adversarial Defense based on Structure-to-Signal Autoencoders.
Task-specific Deep LDA pruning of neural networks.
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems.
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks.
Improving Transferability of Adversarial Examples with Input Diversity.
A Dual Approach to Scalable Verification of Deep Networks.
Adversarial Logit Pairing.
Semantic Adversarial Examples.
Large Margin Deep Networks for Classification.
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples.
Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training.
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning.
Invisible Mask: Practical Attacks on Face Recognition with Infrared.
Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.
Combating Adversarial Attacks Using Sparse Representations.
Detecting Adversarial Examples via Neural Fingerprinting.
Detecting Adversarial Examples - A Lesson from Multimedia Forensics.
On Generation of Adversarial Examples using Convex Programming.
Explaining Black-box Android Malware Detection.
Rethinking Feature Distribution for Loss Functions in Image Classification.
Sparse Adversarial Perturbations for Videos.
Stochastic Activation Pruning for Robust Adversarial Defense.
Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples.
Protecting JPEG Images Against Adversarial Attacks.
Understanding and Enhancing the Transferability of Adversarial Examples.
On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples.
Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples.
Max-Mahalanobis Linear Discriminant Analysis Networks.
Deep Defense: Training DNNs with Improved Adversarial Robustness.
Sensitivity and Generalization in Neural Networks: an Empirical Study.
Adversarial vulnerability for any classifier.
Verifying Controllers Against Adversarial Examples with Bayesian Optimization.
Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks.
Hessian-based Analysis of Large Batch Training and Robustness to Adversaries.
Adversarial Examples that Fool both Computer Vision and Time-Limited Humans.
Adversarial Training for Probabilistic Spiking Neural Networks.
L2-Nonexpansive Neural Networks.
Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch.
Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning.
Out-distribution training confers robustness to deep neural networks.
On Lyapunov exponents and adversarial perturbation.
Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression.
Divide, Denoise, and Defend against Adversarial Attacks.
Robustness of Rotation-Equivariant Networks to Adversarial Perturbations.
Are Generative Classifiers More Robust to Adversarial Attacks?.
DARTS: Deceiving Autonomous Cars with Toxic Signs.
ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction.
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks.
Fooling OCR Systems with Adversarial Text Images.
Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks.
Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints.
Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models.
Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples.
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks.
Predicting Adversarial Examples with High Confidence.
Certified Robustness to Adversarial Examples with Differential Privacy.
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection.
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples.
First-order Adversarial Vulnerability of Neural Networks and Input Dimension.
Secure Detection of Image Manipulation by means of Random Feature Selection.
Hardening Deep Neural Networks via Adversarial Model Cascades.
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach.
Robustness of classification ability of spiking neural networks.
Certified Defenses against Adversarial Examples.
Towards an Understanding of Neural Networks in Natural-Image Spaces.
Deflecting Adversarial Attacks with Pixel Deflection.
Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning.
CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition.
Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations.
Adversarial Texts with Gradient Methods.
A Comparative Study of Rule Extraction for Recurrent Neural Networks.
Sparsity-based Defense against Adversarial Attacks on Linear Classifiers.
Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks.
Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers.
A3T: Adversarially Augmented Adversarial Training.
Fooling End-to-end Speaker Verification by Adversarial Examples.
Adversarial Deep Learning for Robust Detection of Binary Encoded Malware.
Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks.
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos.
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality.
Spatially Transformed Adversarial Examples.
Generating Adversarial Examples with Adversarial Networks.
LaVAN: Localized and Visible Adversarial Noise.
Attacking Speaker Recognition With Deep Generative Models.
HeNet: A Deep Learning Approach on Intel® Processor Trace for Effective Exploit Detection.
Denoising Dictionary Learning Against Adversarial Perturbations.
Adversarial Perturbation Intensity Achieving Chosen Intra-Technique Transferability Level for Logistic Regression.
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text.
Shielding Google's language toxicity model against adversarial attacks.
Facial Attributes: Accuracy and Adversarial Robustness.
Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space.
High Dimensional Spaces, Deep Learning and Adversarial Examples.
Did you hear that? Adversarial Examples Against Automatic Speech Recognition.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey.
A General Framework for Adversarial Examples with Objectives.
Gradient Regularization Improves Accuracy of Discriminative Models.
Exploring the Space of Black-box Attacks on Deep Neural Networks.
Building Robust Deep Neural Networks for Road Sign Detection.
The Robust Manifold Defense: Adversarial Training using Generative Models.
Android Malware Detection using Deep Learning on API Method Sequences.
Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger.
Query-limited Black-box Attacks to Classifiers.
Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks.
ReabsNet: Detecting and Revising Adversarial Examples.
Note on Attacking Object Detectors with Adversarial Stickers.
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications.
Query-Efficient Black-box Adversarial Examples (superceded).
Adversarial Examples: Attacks and Defenses for Deep Learning.
HotFlip: White-Box Adversarial Examples for Text Classification.
When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study.
Super-sparse Learning in Similarity Spaces.
Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models.
DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation.
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models.
Training Ensembles to Detect Adversarial Examples.
Robust Deep Reinforcement Learning with Adversarial Attacks.
NAG: Network for Adversary Generation.
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning.
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser.
Adversarial Examples that Fool Detectors.
Exploring the Landscape of Spatial Robustness.
Generative Adversarial Perturbations.
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning.
Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems.
Improving Network Robustness against Adversarial Attacks with Compact Convolution.
Towards Robust Neural Networks via Random Self-ensemble.
Where Classification Fails, Interpretation Rises.
Measuring the tendency of CNNs to Learn Surface Statistical Regularities.
Adversary Detection in Neural Networks via Persistent Homology.
On the Robustness of Semantic Segmentation Models to Adversarial Attacks.
Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation.
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients.
Geometric robustness of deep networks: analysis and improvement.
Safer Classification by Synthesis.
MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples.
Adversarial Phenomenon in the Eyes of Bayesian Deep Learning.
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training.
Evaluating Robustness of Neural Networks with Mixed Integer Programming.
Adversarial Attacks Beyond the Image Space.
How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models.
Enhanced Attacks on Defensively Distilled Deep Neural Networks.
Defense against Universal Adversarial Perturbations.
The best defense is a good offense: Countering black box attacks by predicting slightly wrong labels.
Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples.
Crafting Adversarial Examples For Speech Paralinguistics Applications.
Intriguing Properties of Adversarial Examples.
Mitigating Adversarial Effects Through Randomization.
HyperNetworks with statistical filtering for defending adversarial examples.
Towards Reverse-Engineering Black-Box Neural Networks.
The (Un)reliability of saliency methods.
Provable defenses against adversarial examples via the convex outer adversarial polytope.
Attacking Binarized Neural Networks.
Countering Adversarial Images using Input Transformations.
Conditional Variance Penalties and Domain Shift Robustness.
Generating Natural Adversarial Examples.
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples.
Attacking the Madry Defense Model with $L_1$-based Adversarial Examples.
Certifying Some Distributional Robustness with Principled Adversarial Training.
Interpretation of Neural Networks is Fragile.
Adversarial Detection of Flash Malware: Limitations and Open Issues.
mixup: Beyond Empirical Risk Minimization.
One pixel attack for fooling deep neural networks.
Feature-Guided Black-Box Safety Testing of Deep Neural Networks.
Boosting Adversarial Attacks with Momentum.
Game-Theoretic Design of Secure and Resilient Distributed Support Vector Machines with Adversaries.
Standard detectors aren't (currently) fooled by physical adversarial stop signs.
Verification of Binarized Neural Networks via Inter-Neuron Factoring.
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight.
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks.
Provably Minimally-Distorted Adversarial Examples.
DR.SGX: Hardening SGX Enclaves against Cache Attacks with Data Location Randomization.
Output Range Analysis for Deep Neural Networks.
Fooling Vision and Language Models Despite Localization and Attention Mechanism.
Verifying Properties of Binarized Deep Neural Networks.
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification.
A Learning and Masking Approach to Secure Learning.
Models and Framework for Adversarial Attacks on Complex Adaptive Systems.
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples.
Art of singular vectors and universal adversarial perturbations.
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks.
Towards Proving the Adversarial Robustness of Deep Neural Networks.
DeepFense: Online Accelerated Defense Against Adversarial Deep Learning.
Security Evaluation of Pattern Classifiers under Attack.
On Security and Sparsity of Linear Classifiers for Adversarial Settings.
Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin.
Practical Attacks Against Graph-based Clustering.
DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars.
Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features.
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid.
CNN Fixations: An unraveling approach to visualize the discriminative image regions.
Evasion Attacks against Machine Learning at Test Time.
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.
Learning Universal Adversarial Perturbations with Generative Models.
Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API.
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models.
Cascade Adversarial Machine Learning Regularized with a Unified Embedding.
Adversarial Robustness: Softmax versus Openmax.
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning.
Robust Physical-World Attacks on Deep Learning Models.
Synthesizing Robust Adversarial Examples.
Adversarial Examples for Evaluating Reading Comprehension Systems.
Confidence estimation in Deep Neural networks via density modelling.
Efficient Defenses Against Adversarial Attacks.
Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers.
Fast Feature Fool: A data independent approach to universal adversarial perturbations.
APE-GAN: Adversarial Perturbation Elimination with GAN.
Houdini: Fooling Deep Structured Prediction Models.
Foolbox: A Python toolbox to benchmark the robustness of machine learning models.
NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles.
A Survey on Resilient Machine Learning.
Towards Crafting Text Adversarial Samples.
UPSET and ANGRI : Breaking High Performance Image Classifiers.
Comparing deep neural networks against humans: object recognition when the signal gets weaker.
Towards Deep Learning Models Resistant to Adversarial Attacks.
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong.
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples.
Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation.
Towards Robust Detection of Adversarial Examples.
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples.
MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks.
Analysis of universal adversarial perturbations.
Classification regions of deep neural networks.
MagNet: a Two-Pronged Defense against Adversarial Examples.
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation.
Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction.
Black-Box Attacks against RNN based Malware Detection Algorithms.
Regularizing deep networks using efficient layerwise adversarial training.
Evading Classifiers by Morphing in the Dark.
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods.
Ensemble Adversarial Training: Attacks and Defenses.
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense.
DeepXplore: Automated Whitebox Testing of Deep Learning Systems.
Delving into adversarial attacks on deep policies.
Extending Defensive Distillation.
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN.
Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression.
Detecting Adversarial Samples Using Density Ratio Estimates.
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection.
Parseval Networks: Improving Robustness to Adversarial Examples.
Deep Text Classification Can be Fooled.
Universal Adversarial Perturbations Against Semantic Image Segmentation.
Adversarial and Clean Data Are Not Twins.
Google's Cloud Vision API Is Not Robust To Noise.
The Space of Transferable Adversarial Examples.
Enhancing Robustness of Machine Learning Systems via Data Transformations.
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks.
Comment on "Biologically inspired protection of deep networks from adversarial attacks".
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks.
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly.
Adversarial Transformation Networks: Learning to Generate Adversarial Examples.
Biologically inspired protection of deep networks from adversarial attacks.
Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos.
Adversarial Examples for Semantic Segmentation and Object Detection.
Self corrective Perturbations for Semantic Segmentation and Classification.
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains.
On the Limitation of Convolutional Neural Networks in Recognizing Negative Images.
Fraternal Twins: Unifying Attacks on Machine Learning and Digital Watermarking.
Blocking Transferability of Adversarial Examples in Black-Box Learning Systems.
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents.
Adversarial Examples for Semantic Image Segmentation.
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components.
Detecting Adversarial Samples from Artifacts.
Deceiving Google's Perspective API Built for Detecting Toxic Comments.
Robustness to Adversarial Examples through an Ensemble of Specialists.
Adversarial examples for generative models.
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples.
On the (Statistical) Detection of Adversarial Examples.
Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN.
On Detecting Adversarial Perturbations.
Adversarial Attacks on Neural Network Policies.
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks.
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks.
Dense Associative Memory is Robust to Adversarial Inputs.
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics.
Simple Black-Box Adversarial Perturbations for Deep Networks.
Learning Adversary-Resistant Deep Neural Networks.
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples.
Adversarial Images for Variational Autoencoders.
Deep Variational Information Bottleneck.
Towards Robust Deep Neural Networks with BANG.
LOTS about Attacking Deep Features.
AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack.
Towards the Science of Security and Privacy in Machine Learning.
Delving into Transferable Adversarial Examples and Black-box Attacks.
Adversarial Machine Learning at Scale.
Universal adversarial perturbations.
Safety Verification of Deep Neural Networks.
Are Accuracy and Robustness Correlated?.
Assessing Threat of Adversarial Examples on Deep Neural Networks.
Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks.
Adversary Resistant Deep Neural Networks with an Application to Malware Detection.
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library.
Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems.
Randomized Prediction Games for Adversarial Machine Learning.
Robustness of classifiers: from adversarial to random noise.
A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples.
Towards Evaluating the Robustness of Neural Networks.
A study of the effect of JPG compression on adversarial images.
Early Methods for Detecting Adversarial Images.
On the Effectiveness of Defensive Distillation.
Defensive Distillation is Not Robust to Adversarial Examples.
Adversarial examples in the physical world.
Adversarial Perturbations Against Deep Neural Networks for Malware Classification.
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.
Measuring Neural Net Robustness with Constraints.
Are Facial Attributes Adversarially Robust?.
Adversarial Diversity and Hard Positive Generation.
Crafting Adversarial Input Sequences for Recurrent Neural Networks.
Improving the Robustness of Deep Neural Networks via Stability Training.
A General Retraining Framework for Scalable Adversarial Classification.
Suppressing the Unusual: towards Robust CNNs using Symmetric Activation Functions.
Practical Black-Box Attacks against Machine Learning.
Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms.
Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization.
The Limitations of Deep Learning in Adversarial Settings.
A Unified Gradient Regularization Family for Adversarial Examples.
Manifold Regularized Deep Neural Networks using Adversarial Examples.
Robust Convolutional Neural Networks under Adversarial Noise.
Foveation-based Mechanisms Alleviate Adversarial Examples.
Towards Open Set Deep Networks.
Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization.
Adversarial Manipulation of Deep Representations.
DeepFool: a simple and accurate method to fool deep neural networks.
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks.
Learning with a Strong Adversary.
Exploring the Space of Adversarial Images.
Improving Back-Propagation by Adding an Adversarial Gradient.
Deep Learning and Music Adversaries.
Analysis of classifiers' robustness to adversarial perturbations.
Explaining and Harnessing Adversarial Examples.
Towards Deep Neural Network Architectures Robust to Adversarial Examples.
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.
Security Evaluation of Support Vector Machines in Adversarial Environments.
Intriguing properties of neural networks.