It can be hard to stay up to date with published papers in
the field of adversarial examples,
which has seen massive growth in the number of papers
written each year.
I have been somewhat religiously keeping track of these
papers for the last few years, and realized it might be
helpful to others if I released this list.
The only requirement I used when selecting papers for this list
is that each one is primarily about adversarial examples,
or uses adversarial examples extensively.
Due to the sheer quantity of papers, I can't guarantee
that I actually have found all of them.
But I did try.
I also may have included papers that don't meet
this requirement (and are about something else instead),
or made inconsistent
judgement calls as to whether any given paper is
mainly about adversarial examples.
Send me an email if something is wrong and I'll correct it.
Note that this list is completely unfiltered:
everything that mainly presents itself as an adversarial
example paper is listed here, and I pass no judgement on quality.
For a curated list of papers that I think are excellent and
worth reading, see the
Adversarial Machine Learning Reading List.
One final note about the data.
This list automatically updates with new papers, even before I
get a chance to manually filter through them.
I do this filtering roughly twice a week, and it's
then that I'll remove the ones that aren't related to
adversarial examples. As a result, there may be some
false positives among the most recent few entries.
Each new, unverified entry is annotated with the probability that my
simplistic (but reasonably well-calibrated)
bag-of-words classifier assigns to the paper
actually being about adversarial examples.
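To give a sense of what such a classifier involves, here is a minimal sketch of a bag-of-words approach: a tiny multinomial Naive Bayes over word counts, trained on a handful of made-up titles. This is an illustration only, not the actual classifier used for this list; the class names, training titles, and smoothing choices are all my own assumptions.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-letters: a crude bag-of-words featurizer.
    return ''.join(c if c.isalpha() else ' ' for c in text.lower()).split()

class NaiveBayes:
    """Tiny multinomial Naive Bayes: scores P(label | words) from word counts.
    A stand-in sketch, not the classifier actually used for this list."""
    def fit(self, texts, labels):
        self.counts = {0: Counter(), 1: Counter()}
        self.label_totals = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts[label].update(tokenize(text))
        self.vocab = set(self.counts[0]) | set(self.counts[1])
        return self

    def prob(self, text):
        # Returns P(label=1 | text) with add-one (Laplace) smoothing.
        logp = {}
        for label in (0, 1):
            total = sum(self.counts[label].values())
            logp[label] = math.log(self.label_totals[label])
            for w in tokenize(text):
                logp[label] += math.log((self.counts[label][w] + 1) /
                                        (total + len(self.vocab)))
        m = max(logp.values())  # subtract max for numerical stability
        num = math.exp(logp[1] - m)
        return num / (num + math.exp(logp[0] - m))

# Toy training data: label 1 = adversarial-example paper, 0 = unrelated.
titles = ["Adversarial Examples for Image Classifiers",
          "Robustness of Neural Networks to Adversarial Perturbations",
          "A Survey of Graph Databases",
          "Efficient Query Processing in Relational Databases"]
labels = [1, 1, 0, 0]
clf = NaiveBayes().fit(titles, labels)
print(round(clf.prob("Adversarial Perturbations on Neural Networks"), 2))  # prints 0.95
```

Even a classifier this simple separates the toy titles cleanly, which is roughly why a bag-of-words model over paper titles and abstracts can be "reasonably well calibrated" for a binary topic filter like this.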
The full paper list appears below. I've also released a
TXT file (and a TXT file
with abstracts)
with the same data. If you do anything interesting with
this data, I'd be happy to hear about it.
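If you want to work with the data, each entry follows the pattern visible in the list below: a title, then an optional parenthesized probability for the newer unverified entries. A small parser sketch, assuming the TXT file uses the same one-entry-per-line format shown here (the exact file layout is my assumption):

```python
import re

# Each entry looks like "Some Paper Title. (99%)"; older, manually
# verified entries omit the percentage.
ENTRY = re.compile(r'^(?P<title>.*?)\s*(?:\((?P<pct>\d+)%\))?\s*$')

def parse_entry(line):
    """Split one list line into (title, probability-or-None)."""
    m = ENTRY.match(line.strip())
    title = m.group('title')
    pct = int(m.group('pct')) if m.group('pct') else None
    return title, pct

print(parse_entry("RobustBench: a standardized adversarial robustness benchmark. (99%)"))
print(parse_entry("Second-Order NLP Adversarial Examples."))
```

From there it's straightforward to, say, sort entries by classifier confidence or count papers per topic keyword.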
A Distributional Robustness Certificate by Randomized Smoothing. (99%)
Amnesiac Machine Learning. (9%)
VENOMAVE: Clean-Label Poisoning Against Speech Recognition. (99%)
Boosting Gradient for White-Box Adversarial Attacks. (99%)
Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods. (98%)
Tight Second-Order Certificates for Randomized Smoothing. (97%)
Mitigating Sybil Attacks on Differential Privacy based Federated Learning. (87%)
Towards Understanding the Dynamics of the First-Order Adversaries. (74%)
Preventing Personal Data Theft in Images with Adversarial ML. (68%)
RobustBench: a standardized adversarial robustness benchmark. (99%)
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness. (99%)
Verifying the Causes of Adversarial Examples. (99%)
When Bots Take Over the Stock Market: Evasion Attacks Against Algorithmic Traders. (99%)
L-RED: Efficient Post-Training Detection of Imperceptible Backdoor Attacks without Access to the Training Set. (99%)
Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification. (84%)
FLAG: Adversarial Data Augmentation for Graph Neural Networks. (10%)
A Survey of Machine Learning Techniques in Adversarial Image Forensics. (1%)
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers. (1%)
Poisoned classifiers are not only backdoored, they are fundamentally broken. (99%)
FADER: Fast Adversarial Example Rejection. (99%)
Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing. (99%)
A Stochastic Neural Network for Attack-Agnostic Adversarial Robustness. (98%)
A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models. (92%)
Layer-wise Characterization of Latent Information Leakage in Federated Learning. (2%)
Learning Robust Algorithms for Online Allocation Problems Using Adversarial Training. (96%)
Mischief: A Simple Black-Box Attack Against Transformer Architectures. (92%)
DOOM: A Novel Adversarial-DRL-Based Op-Code Level Metamorphic Malware Obfuscator for the Enhancement of IDS. (5%)
Formal Verification of Robustness and Resilience of Learning-Enabled State Estimation Systems for Robotics. (2%)
Embedding and Synthesis of Knowledge in Tree Ensemble Classifiers. (2%)
Generalizing Universal Adversarial Attacks Beyond Additive Perturbations. (99%)
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning. (99%)
Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks. (99%)
Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things. (99%)
Input-Aware Dynamic Backdoor Attack. (56%)
Adversarial Images through Stega Glasses. (50%)
Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness. (50%)
Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training. (13%)
Certifying Neural Network Robustness to Random Input Noise from Samples. (10%)
Federated Learning in Adversarial Settings. (3%)
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing. (99%)
Towards Resistant Audio Adversarial Examples. (99%)
GreedyFool: An Imperceptible Black-box Adversarial Example Attack against Neural Networks. (99%)
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability. (99%)
An Adversarial Attack against Stacked Capsule Autoencoder. (98%)
Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability. (83%)
FAR: A General Framework for Attributional Robustness. (38%)
BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture. (5%)
Exploiting Interfaces of Secure Encrypted Virtual Machines. (1%)
Linking average- and worst-case perturbation robustness via class selectivity and dimensionality. (98%)
Higher-Order Certification for Randomized Smoothing. (98%)
Toward Few-step Adversarial Training from a Frequency Perspective. (86%)
To be Robust or to be Fair: Towards Fairness in Adversarial Training. (99%)
Towards Understanding Pixel Vulnerability under Adversarial Attacks for Images. (99%)
EFSG: Evolutionary Fooling Sentences Generator. (93%)
From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks. (54%)
Shape-Texture Debiased Neural Network Training. (33%)
On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness. (10%)
Universal Model for 3D Medical Image Analysis. (1%)
FaiR-N: Fair and Robust Neural Networks for Structured Data. (1%)
IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration. (93%)
Gradient-based Analysis of NLP Models is Manipulable. (10%)
Is It Time to Redefine the Classification Task for Deep Neural Networks? (98%)
ByzShield: An Efficient and Robust System for Distributed Training. (2%)
Understanding Spatial Robustness of Deep Neural Networks. (92%)
How Does Mixup Help With Robustness and Generalization? (83%)
Targeted Attention Attack on Deep Learning Models in Road Sign Recognition. (99%)
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks. (99%)
Improved Techniques for Model Inversion Attacks. (83%)
Affine-Invariant Robust Training. (81%)
A Unified Approach to Interpreting and Boosting Adversarial Transferability. (61%)
Improve Adversarial Robustness via Weight Penalization on Classification Layer. (54%)
Transcending Transcend: Revisiting Malware Classification with Conformal Evaluation. (13%)
An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference. (1%)
Hiding the Access Pattern is Not Enough: Exploiting Search Pattern Leakage in Searchable Encryption.
Learning Clusterable Visual Features for Zero-Shot Recognition.
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks.
Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features.
Global Optimization of Objective Functions Represented by ReLU Networks.
CD-UAP: Class Discriminative Universal Adversarial Perturbation.
Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples.
Double Targeted Universal Adversarial Perturbations.
Adversarial attacks on audio source separation.
Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples. (99%)
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems. (99%)
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks. (76%)
Revisiting Batch Normalization for Improving Corruption Robustness. (45%)
Moving Target Defense for Robust Monitoring of Electric Grid Transformers in Adversarial Environments. (1%)
Robust Semi-Supervised Learning with Out of Distribution Data. (1%)
Visualizing Color-wise Saliency of Black-Box Image Classification Models.
Constraining Logits by Bounded Function for Adversarial Robustness.
Adversarial Patch Attacks on Monocular Depth Estimation Networks.
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models.
Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model.
Adversarial Boot Camp: label free certified robustness in one epoch.
InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective.
Understanding Classifier Mistakes with Generative Models.
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference.
CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation.
Second-Order NLP Adversarial Examples.
Understanding Catastrophic Overfitting in Single-step Adversarial Training.
Downscaling Attack and Defense: Turning What You See Back Into What You Get. (1%)
Unknown Presentation Attack Detection against Rational Attackers.
Geometry-aware Instance-reweighted Adversarial Training.
TextAttack: Lessons learned in designing Python frameworks for NLP.
A Study for Universal Adversarial Attacks on Texture Recognition.
Adversarial Attack and Defense of Structured Prediction Models.
Adversarial and Natural Perturbations for General Robustness.
Multi-Step Adversarial Perturbations on Recommender Systems Embeddings.
A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples.
Does Network Width Really Help Adversarial Robustness?
Efficient Robust Training via Backward Smoothing.
Note: An alternative proof of the vulnerability of $k$-NN classifiers in high intrinsic dimensionality regions.
Query complexity of adversarial attacks.
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders.
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense.
CorrAttack: Black-box Adversarial Attack with Structured Search.
A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations.
Bag of Tricks for Adversarial Training.
Assessing Robustness of Text Classification through Maximal Safe Radius Computation.
Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning.
Accurate and Robust Feature Importance Estimation under Distribution Shifts.
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks.
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles.
Neural Topic Modeling with Cycle-Consistent Adversarial Training.
Fast Fréchet Inception Distance.
Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients.
Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability.
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment.
STRATA: Building Robustness with a Simple Method for Generating Black-box Adversarial Attacks for Models of Code.
Graph Adversarial Networks: Protecting Information against Adversarial Attacks.
RoGAT: a robust GNN combined revised GAT with adjusted graphs.
Learning to Improve Image Compression without Changing the Standard Decoder.
Where Does the Robustness Come from? A Study of the Transformation-based Ensemble Defence.
Differentially Private Adversarial Robustness Through Randomized Perturbations.
Beneficial Perturbations Network for Defending Adversarial Examples.
Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision.
Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training.
Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities.
Adversarial Examples in Deep Learning for Multivariate Time Series Regression.
Improving Query Efficiency of Black-box Adversarial Attack.
Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization.
Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining.
Detection of Iterative Adversarial Attacks via Counter Attack.
Adversarial robustness via stochastic regularization of neural activation sensitivity.
A Partial Break of the Honeypots Defense to Catch Adversarial Attacks.
Semantics-Preserving Adversarial Training.
Robustification of Segmentation Models Against Adversarial Perturbations In Medical Imaging.
Torchattacks: A PyTorch Repository for Adversarial Attacks.
What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors.
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time.
Adversarial Attack Based Countermeasures against Deep Learning Side-Channel Attacks.
Uncertainty-aware Attention Graph Neural Network for Defending Adversarial Attacks.
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing.
Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers.
Generating Adversarial yet Inconspicuous Patches with a Single Image.
Adversarial Training with Stochastic Weight Average.
Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations.
Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness.
DeepDyve: Dynamic Verification for Deep Neural Networks.
Feature Distillation With Guided Adversarial Contrastive Learning.
Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version).
Improving Robustness and Generality of NLP Models Using Disentangled Representations.
Efficient Certification of Spatial Robustness.
OpenAttack: An Open-source Textual Adversarial Attack Toolkit.
It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception.
Making Images Undiscoverable from Co-Saliency Detection.
Adversarial Exposure Attack on Diabetic Retinopathy Imagery.
Bias Field Poses a Threat to DNN-based X-Ray Recognition.
Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations.
EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks.
Robust Decentralized Learning for Neural Networks.
Certifying Confidence via Randomized Smoothing.
On the Transferability of Minimal Prediction Preserving Inputs in Question Answering.
Generating Label Cohesive and Well-Formed Adversarial Claims.
Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks.
Large Norms of CNN Layers Do Not Hurt Adversarial Robustness.
Label Smoothing and Adversarial Robustness.
MultAV: Multiplicative Adversarial Videos.
Online Alternate Generator against Adversarial Attacks.
Analysis of Generalizability of Deep Neural Networks Based on the Complexity of Decision Boundary.
Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation.
Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View.
Contextualized Perturbation for Textual Adversarial Attack.
Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup.
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems.
Switching Gradient Directions for Query-Efficient Black-Box Adversarial Attacks.
Decision-based Universal Adversarial Attack.
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses.
Input Hessian Regularization of Neural Networks.
Robust Deep Learning Ensemble against Deception.
Hold Tight and Never Let Go: Security of Deep Learning based Automated Lane Centering under Physical-World Attack.
Towards the Quantification of Safety Risks in Deep Neural Networks.
Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing.
Achieving Adversarial Robustness via Sparsity.
Counterfactual Explanations & Adversarial Examples -- Common Grounds, Essential Differences, and Potential Transfers.
Defending Against Multiple and Unforeseen Adversarial Videos.
Robust Neural Machine Translation: Modeling Orthographic and Interpunctual Variation.
Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection.
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent.
Second Order Optimization for Adversarial Robustness and Interpretability.
A Black-box Adversarial Attack for Poisoning Clustering.
End-to-end Kernel Learning via Generative Random Fourier Features.
SoK: Certified Robustness for Deep Neural Networks.
Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples.
Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models.
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective.
Adversarial attacks on deep learning models for fatty liver disease classification by modification of ultrasound image reconstruction method.
Adversarial Attack on Large Scale Graph.
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing.
A Game Theoretic Analysis of LQG Control under Adversarial Attack.
Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks.
Detection Defense Against Adversarial Attacks with Saliency Map.
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks.
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks.
Yet Meta Learning Can Adapt Fast, It Can Also Break Easily.
Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation.
MetaSimulator: Simulating Unknown Target Models for Query-Efficient Black-box Attacks.
Open-set Adversarial Defense.
Adversarially Robust Neural Architectures.
Flow-based detection and proxy-based evasion of encrypted malware C2 traffic.
Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors.
Defending against substitute model black box adversarial attacks with the 01 loss.
Adversarial Patch Camouflage against Aerial Detection.
Evasion Attacks to Graph Neural Networks via Influence Function.
MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models.
An Integrated Approach to Produce Robust Models with High Efficiency.
Benchmarking adversarial attacks and defenses for time-series data.
Improving Resistance to Adversarial Deformations by Regularizing Gradients.
Adversarially Robust Learning via Entropic Regularization.
A Scene-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video.
GhostBuster: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing.
Robustness Hidden in Plain Sight: Can Analog Computing Defend Against Adversarial Attacks?
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds.
Color and Edge-Aware Adversarial Image Perturbations.
Adversarial Eigen Attack on Black-Box Models.
Adversarially Training for Audio Classifiers.
Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks.
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses.
Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning.
An Adversarial Attack Defending System for Securing In-Vehicle Networks.
Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation.
Developing and Defeating Adversarial Examples.
Ptolemy: Architecture Support for Robust Deep Learning.
PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards.
Self-Competitive Neural Networks.
A Survey on Assessing the Generalization Envelope of Deep Neural Networks at Inference Time for Image Classification.
Not My Deepfake: Towards Plausible Deniability for Machine-Generated Media.
Towards adversarial robustness with 01 loss neural networks.
$\beta$-Variational Classifiers Under Attack.
Yet Another Intermediate-Level Attack.
Prototype-based interpretation of the functionality of neurons in winner-take-all neural networks.
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training.
On $\ell_p$-norm Robustness of Ensemble Stumps and Trees.
Improving adversarial robustness of deep neural networks by using semantic information.
Accelerated Zeroth-Order Momentum Methods from Mini to Minimax Optimization.
Direct Adversarial Training for GANs.
Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection.
A Deep Dive into Adversarial Robustness in Zero-Shot Learning.
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems.
TextDecepter: Hard Label Black Box Attack on Text Classifiers.
Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks.
Attack on Multi-Node Attention for Object Detection.
On the Generalization Properties of Adversarial Training.
Efficiently Constructing Adversarial Examples by Feature Watermarking.
Defending Adversarial Attacks without Adversarial Attacks in Deep Reinforcement Learning.
Adversarial Training and Provable Robustness: A Tale of Two Objectives.
Semantically Adversarial Learnable Filters.
Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise.
Defending Adversarial Examples via DNN Bottleneck Reinforcement.
Feature Binding with Category-Dependant MixUp for Semantic Segmentation and Adversarial Robustness.
Semantics-preserving adversarial attacks in NLP.
Revisiting Adversarially Learned Injection Attacks Against Recommender Systems.
Informative Dropout for Robust Representation Learning: A Shape-bias Perspective.
FireBERT: Hardening BERT-based classifiers against adversarial attack.
Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems.
Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks.
Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length.
Visual Attack and Defense on Text.
Optimizing Information Loss Towards Robust Neural Networks.
Adversarial Examples on Object Recognition: A Comprehensive Survey.
Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations.
Stronger and Faster Wasserstein Adversarial Attacks.
One word at a time: adversarial attacks on retrieval models.
Robust Deep Reinforcement Learning through Adversarial Loss.
Entropy Guided Adversarial Model for Weakly Supervised Object Localization.
TREND: Transferability based Robust ENsemble Design.
Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples.
Can Adversarial Weight Perturbations Inject Neural Backdoors?
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks.
Anti-Bandit Neural Architecture Search for Model Defense.
Trojaning Language Models for Fun and Profit.
Efficient Adversarial Attacks for Visual Object Tracking.
Vulnerability Under Adversarial Machine Learning: Bias or Variance?
Physical Adversarial Attack on Vehicle Detector in the Carla Simulator.
Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers.
TEAM: We Need More Powerful Adversarial Examples for DNNs.
Black-box Adversarial Sample Generation Based on Differential Evolution.
A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks.
End-to-End Adversarial White Box Attacks on Music Instrument Classification.
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data.
Stylized Adversarial Defense.
Generative Classifiers as a Basis for Trustworthy Computer Vision.
Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers.
Cassandra: Detecting Trojaned Networks from Adversarial Perturbations.
Reachable Sets of Classifiers & Regression Models: (Non-)Robustness Analysis and Robust Training.
Label-Only Membership Inference Attacks.
Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning.
Attacking and Defending Machine Learning Applications of Public Cloud.
KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real Transfer for Robotics Manipulation.
From Sound Representation to Model Robustness.
Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing.
RANDOM MASK: Towards Robust Convolutional Neural Networks.
Robust Collective Classification against Structural Attacks.
MirrorNet: Bio-Inspired Adversarial Attack for Camouflaged Object Segmentation.
Adversarial Privacy-preserving Filter.
MP3 Compression To Diminish Adversarial Noise in End-to-End Speech Recognition.
Scalable Inference of Symbolic Adversarial Examples.
SOCRATES: Towards a Unified Platform for Neural Network Verification.
Adversarial Training Reduces Information and Improves Transferability.
Threat of Adversarial Attacks on Face Recognition: A Comprehensive Survey.
Robust Machine Learning via Privacy/Rate-Distortion Theory.
Audio Adversarial Examples for Robust Hybrid CTC/Attention Speech Recognition.
Towards Visual Distortion in Black-Box Attacks.
Neural Network Robustness Verification on GPUs.
DeepNNK: Explaining deep models and their generalization using polytope interpolation.
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers.
Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks.
Robust Tracking against Adversarial Attacks.
Semantic Equivalent Adversarial Data Augmentation for Visual Question Answering.
Exploiting vulnerabilities of deep neural networks for privacy protection.
Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency.
Adversarial Immunization for Improving Certifiable Robustness on Graphs.
DDR-ID: Dual Deep Reconstruction Networks Based Image Decomposition for Anomaly Detection.
Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of Multimodal Data with Adversarial Defense.
Neural Networks with Recurrent Generative Feedback.
Understanding and Diagnosing Vulnerability under Adversarial Attacks.
Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources.
Accelerated Stochastic Gradient-free and Projection-free Methods.
Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data.
An Empirical Study on the Robustness of NAS based Architectures.
Do Adversarially Robust ImageNet Models Transfer Better?
Learning perturbation sets for robust machine learning.
Artificial GAN Fingerprints: Rooting Deepfake Attribution in Training Data. (1%)
A Survey of Privacy Attacks in Machine Learning.
Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels.
A Survey on Security Attacks and Defense Techniques for Connected and Autonomous Vehicles.
Towards robust sensing for Autonomous Vehicles: An adversarial perspective.
Robustifying Reinforcement Learning Agents via Action Space Adversarial Training.
Bounding The Number of Linear Regions in Local Area for Neural Networks with ReLU Activations.
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack.
Multitask Learning Strengthens Adversarial Robustness.
Adversarial Examples and Metrics.
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows.
Towards a Theoretical Understanding of the Robustness of Variational Autoencoders.
Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components.
A simple defense against adversarial attacks on heatmap explanations.
Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations.
Adversarial robustness via robust low rank representations.
Calling Out Bluff: Attacking the Robustness of Automatic Scoring Systems with Simple Adversarial Testing.
Security and Machine Learning in the Real World.
Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes.
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems.
Patch-wise Attack for Fooling Deep Neural Network.
Adversarial jamming attacks and defense strategies via adaptive deep reinforcement learning.
Generating Fluent Adversarial Examples for Natural Languages.
Probabilistic Jacobian-based Saliency Maps Attacks.
Understanding Object Detection Through An Adversarial Lens.
ManiGen: A Manifold Aided Black-box Generator of Adversarial Examples.
Improved Detection of Adversarial Images Using Deep Neural Networks.
Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection.
Generating Adversarial Inputs Using A Black-box Differential Technique.
Boundary thickness and robustness in learning models.
Improving Adversarial Robustness by Enforcing Local and Global Compactness.
Node Copying for Protection Against Graph Neural Network Topology Attacks.
Efficient detection of adversarial images.
How benign is benign overfitting?
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations.
Delving into the Adversarial Robustness on Face Recognition.
A Critical Evaluation of Open-World Machine Learning.
On the relationship between class selectivity, dimensionality, and robustness.
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs.
Robust Learning with Frequency Domain Regularization.
Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability.
Fast Training of Deep Neural Networks Robust to Adversarial Perturbations.
Making Adversarial Examples More Transferable and Indistinguishable.
Detection as Regression: Certified Object Detection by Median Smoothing.
Certifying Decision Trees Against Evasion Attacks by Program Analysis.
On Data Augmentation and Adversarial Risk: An Empirical Analysis.
Understanding and Improving Fast Adversarial Training.
Black-box Adversarial Example Generation with Normalizing Flows.
Adversarial Learning in the Cyber Security Domain.
On Connections between Regularizations for Improving DNN Robustness.
Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors.
Towards Robust Deep Learning with Ensemble Networks and Noisy Layers.
Efficient Proximal Mapping of the 1-path-norm of Shallow Networks.
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment.
Decoder-free Robustness Disentanglement without (Additional) Supervision.
Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring.
Trace-Norm Adversarial Examples.
Generating Adversarial Examples with Controllable Non-transferability.
Fundamental Limits of Adversarial Learning.
Unifying Model Explainability and Robustness via Machine-Checkable Concepts.
Measuring Robustness to Natural Distribution Shifts in Image Classification.
Robust Learning against Logical Adversaries.
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks.
Query-Free Adversarial Transfer via Undertrained Surrogates.
Adversarial Example Games.
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey.
Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures.
Black-box Certification and Learning under Adversarial Perturbations.
Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection.
Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications.
Generating Adversarial Examples with an Optimized Quality.
Harnessing Adversarial Distances to Discover High-Confidence Errors.
Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification.
Legal Risks of Adversarial Machine Learning Research.
Biologically Inspired Mechanisms for Adversarial Robustness.
Improving Uncertainty Estimates through the Relationship with Adversarial Robustness.
FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications.
Geometry-Inspired Top-k Adversarial Perturbations.
Orthogonal Deep Models As Defense Against Black-Box Attacks.
A Unified Framework for Analyzing and Detecting Malicious Examples of DNN Models.
Learning Diverse Latent Representations for Improving the Resilience to Adversarial Attacks.
Informative Outlier Matters: Robustifying Out-of-distribution Detection Using Outlier Mining.
Can 3D Adversarial Logos Cloak Humans?
Proper Network Interpretability Helps Adversarial Robustness in Classification.
Smooth Adversarial Training.
Does Adversarial Transferability Indicate Knowledge Transferability?
Compositional Explanations of Neurons.
Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks.
Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness.
Defending against adversarial attacks on medical imaging AI system, classification or detection?
Towards Robust Sensor Fusion in Visual Perception.
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks.
RayS: A Ray Searching Method for Hard-label Adversarial Attack.
Learning to Generate Noise for Robustness against Multiple Perturbations.
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models.
The Generalized Lasso with Nonlinear Observations and Generative Priors. (1%)
Network Moments: Extensions and Sparse-Smooth Attacks.
How do SGD hyperparameters in natural training affect adversarial robustness?
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble.
Using Learning Dynamics to Explore the Role of Implicit Regularization in Adversarial Examples.
A general framework for defining and optimizing robustness.
Analyzing the Real-World Applicability of DGA Classifiers.
Towards an Adversarially Robust Normalization Approach.
Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers.
Adversarial Attacks for Multi-view Deep Models.
Local Competition and Uncertainty for Adversarial Robustness in Deep Learning.
Dissecting Deep Networks into an Ensemble of Generative Classifiers for Robust Predictions.
The Dilemma Between Dimensionality Reduction and Adversarial Robustness.
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples.
Noise or Signal: The Role of Image Backgrounds in Object Recognition.
Adversarial Examples Detection and Analysis with Layer-wise Autoencoders.
Adversarial Defense by Latent Style Transformations.
Disrupting Deepfakes with an Adversarial Attack that Survives Training.
Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning.
Universal Lower-Bounds on Classification Error under Adversarial Attacks and Random Corruption.
Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets.
SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness.
Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs.
On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron.
AdvMind: Inferring Adversary Intent of Black-Box Attacks.
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs.
Total Deep Variation: A Stable Regularizer for Inverse Problems.
Multiscale Deep Equilibrium Models.
DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder.
Improving Adversarial Robustness via Unlabeled Out-of-Domain Data.
Fast & Accurate Method for Bounding the Singular Values of Convolutional Layers with Application to Lipschitz Regularization.
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks.
Efficient Black-Box Adversarial Attack Guided by the Distribution of Adversarial Perturbations.
GradAug: A New Regularization Method for Deep Neural Networks.
PatchUp: A Regularization Technique for Convolutional Neural Networks.
On Saliency Maps and Adversarial Robustness.
On the transferability of adversarial examples between convex and 01 loss models.
Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems.
Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks.
ClustTR: Clustering Training for Robustness.
The Pitfalls of Simplicity Bias in Neural Networks.
Adversarial Self-Supervised Contrastive Learning.
Defensive Approximation: Enhancing CNNs Security through Approximate Computing.
Targeted Adversarial Perturbations for Monocular Depth Prediction.
Provably Robust Metric Learning.
Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces.
D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack.
Achieving robustness in classification using optimal transport with hinge regularization.
Large-Scale Adversarial Training for Vision-and-Language Representation Learning.
Smoothed Geometry for Robust Attribution.
Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks.
Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification.
Robustness to Adversarial Attacks in Learning-Enabled Controllers.
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples.
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors.
Evaluating Graph Vulnerability and Robustness using TIGER.
Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features.
Deterministic Gaussian Averaged Neural Networks.
Interpolation between Residual and Non-Residual Networks.
Towards Certified Robustness of Metric Learning.
Towards an Intrinsic Definition of Robustness for a Classifier.
Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access.
GAP++: Learning to generate target-conditioned adversarial examples.
Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers.
Provable tradeoffs in adversarially robust classification.
Calibrated neighborhood aware confidence measure for deep metric learning.
A Self-supervised Approach for Adversarial Robustness.
Distributional Robustness with IPMs and links to Regularization and GANs.
On Universalized Adversarial and Invariant Perturbations.
Adversarial Feature Desensitization.
Tricking Adversarial Attacks To Fail.
Global Robustness Verification Networks.
Provable trade-offs between private & robust machine learning.
Extensions and limitations of randomized smoothing for robustness guarantees.
Uncertainty-Aware Deep Classifiers using Generative Models.
Unique properties of adversarially trained linear classifiers on Gaussian data.
Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?
Sponge Examples: Energy-Latency Attacks on Neural Networks.
Adversarial Image Generation and Training for Deep Convolutional Neural Networks.
Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing.
Characterizing the Weight Space for Different Learning Models.
Towards Understanding Fast Adversarial Training.
Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning.
Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection.
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization.
Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense.
Perturbation Analysis of Gradient-based Adversarial Attacks.
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start.
Detecting Audio Attacks on ASR Systems with Dropout Uncertainty.
Second-Order Provable Defenses against Adversarial Attacks.
Adversarial Attacks on Reinforcement Learning based Energy Management Systems of Extended Range Electric Delivery Vehicles.
Adversarial Attacks on Classifiers for Eye-based User Modelling.
Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods.
Evaluations and Methods for Explanation through Robustness Analysis.
Estimating Principal Components under Adversarial Perturbations.
Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training.
SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions.
Monocular Depth Estimators: Vulnerabilities and Attacks.
QEBA: Query-Efficient Boundary-Based Blackbox Attack.
Adversarial Attacks and Defense on Texts: A Survey.
Adversarial Robustness of Deep Convolutional Candlestick Learner.
Enhancing Resilience of Deep Learning Networks by Means of Transferable Adversaries.
Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models.
Calibrated Surrogate Losses for Adversarially Robust Classification.
Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques.
Effects of Forward Error Correction on Communications Aware Evasion Attacks.
Generating Semantically Valid Adversarial Questions for TableQA.
Investigating a Spectral Deception Loss Metric for Training Machine Learning-based Evasion Attacks.
Adversarial Feature Selection against Evasion Attacks.
Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification.
Adaptive Adversarial Logits Pairing.
SoK: Arms Race in Adversarial Malware Detection.
ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds.
Adversarial Attack on Hierarchical Graph Pooling Neural Networks.
Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks.
Revisiting Role of Autoencoders in Adversarial Settings.
Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack.
Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition.
Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning.
Model-Based Robust Deep Learning.
Adversarial Machine Learning in Recommender Systems: State of the art and Challenges.
Graph Structure Learning for Robust Graph Neural Networks.
Feature Purification: How Adversarial Training Performs Robust Deep Learning.
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks.
Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models.
Bias-based Universal Adversarial Patch Attack for Automatic Check-out.
An Evasion Attack against ML-based Phishing URL Detectors.
Defending Your Voice: Adversarial Attack on Voice Conversion.
Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective.
Universalization of any adversarial attack using very few test examples.
On Intrinsic Dataset Properties for Adversarial Machine Learning.
Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks.
Spatiotemporal Attacks for Embodied Agents.
Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks.
Universal Adversarial Perturbations: A Survey.
Encryption Inspired Adversarial Defense for Visual Classification.
PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields.
How to Make 5G Communications "Invisible": Adversarial Machine Learning for Wireless Privacy.
Practical Traffic-space Adversarial Attacks on Learning-based NIDSs.
Initializing Perturbations in Multiple Directions for Fast Adversarial Training.
Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning.
Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness.
A Deep Learning-based Fine-grained Hierarchical Learning Approach for Robust Malware Classification.
DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses.
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients.
Evaluating Ensemble Robustness Against Adversarial Attacks.
Increased-confidence adversarial examples for improved transferability of Counter-Forensic attacks.
Adversarial examples are useful too!
Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data.
Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers.
It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations.
Class-Aware Domain Adaptation for Improving Adversarial Robustness.
Towards Robustness against Unsuspicious Adversarial Examples.
Efficient Exact Verification of Binarized Neural Networks.
Projection & Probability-Driven Black-Box Attack.
Defending Hardware-based Malware Detectors against Adversarial Attacks.
GraCIAS: Grassmannian of Corrupted Images for Adversarial Security.
Training robust neural networks using Lipschitz bounds.
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder.
Hacking the Waveform: Generalized Wireless Adversarial Deep Learning.
Adversarial Training against Location-Optimized Adversarial Patches.
Proper measure for adversarial robustness.
On the Benefits of Models with Perceptually-Aligned Gradients.
Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?
Robust Encodings: A Framework for Combating Adversarial Typos.
Jacks of All Trades, Masters Of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks.
Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models.
Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees.
Defense of Word-level Adversarial Attacks via Random Substitution Encoding.
Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks.
Universal Adversarial Attacks with Natural Triggers for Text Classification.
Imitation Attacks and Defenses for Black-box Machine Translation Systems.
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness.
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability.
TAVAT: Token-Aware Virtual Adversarial Training for Language Understanding.
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP.
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks.
Minority Reports Defense: Defending Against Adversarial Patches.
DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking.
Adversarial Fooling Beyond "Flipping the Label".
Improved Image Wasserstein Attacks and Defenses.
Towards Feature Space Adversarial Attack.
Transferable Perturbations of Deep Feature Distributions.
Printing and Scanning Attack for Image Counter Forensics.
Improved Adversarial Training via Learned Optimizer.
Enabling Fast and Universal Audio Adversarial Attack Using Generative Model.
Harnessing adversarial examples with a surprisingly simple defense.
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty.
One Sparse Perturbation to Fool them All, almost Always!
Reevaluating Adversarial Examples in Natural Language.
Adversarial Machine Learning in Network Intrusion Detection Systems.
Adversarial Attacks and Defenses: An Interpretation Perspective.
RAIN: A Simple Approach for Robust and Accurate Image Classification Networks.
Evaluating Adversarial Robustness for Deep Neural Network Interpretability using fMRI Decoding.
On Adversarial Examples for Biomedical NLP Tasks.
Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks.
Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation.
CodNN -- Robust Neural Networks From Coded Classification.
Provably robust deep generative models.
QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks.
Adversarial examples and where to find them.
Scalable Attack on Graph Data by Injecting Vicious Nodes.
Certifying Joint Adversarial Robustness for Model Ensembles.
Probabilistic Safety for Bayesian Neural Networks.
BERT-ATTACK: Adversarial Attack Against BERT Using BERT.
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks.
GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples.
Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning.
Adversarial Training for Large Neural Language Models.
Headless Horseman: Adversarial Attacks on Transfer Learning Models.
Protecting Classifiers From Attacks. A Bayesian Approach.
Single-step Adversarial training with Dropout Scheduling.
Adversarial Attack on Deep Learning-Based Splice Localization.
Shortcut Learning in Deep Neural Networks.
Enhancing Deep Neural Networks Against Adversarial Malware Examples.
Targeted Attack for Deep Hashing based Retrieval.
Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers.
On the Optimal Interaction Range for Multi-Agent Systems Under Adversarial Attack.
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions.
Adversarial robustness guarantees for random deep neural networks.
Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples.
Towards Transferable Adversarial Attack against Deep Face Recognition.
Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension.
Adversarial Weight Perturbation Helps Robust Generalization.
Towards Robust Classification with Image Quality Assessment.
PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning.
Robust Large-Margin Learning in Hyperbolic Space.
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images.
Certified Adversarial Robustness for Deep Reinforcement Learning.
Verification of Deep Convolutional Neural Networks Using ImageStars.
Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems.
Luring of Adversarial Perturbations.
Blind Adversarial Training: Balance Accuracy and Robustness.
Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness.
On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems.
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking.
Towards Evaluating the Robustness of Chinese BERT Classifiers.
Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios.
Learning to fool the speaker recognition.
Universal Adversarial Perturbations Generative Network for Speaker Recognition.
Approximate Manifold Defense Against Multiple Adversarial Perturbations.
Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks.
BAE: BERT-based Adversarial Examples for Text Classification.
Adversarial Robustness through Regularization: A Second-Order Approach.
Evading Deepfake-Image Detectors with White- and Black-Box Attacks.
Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes.
Physically Realizable Adversarial Examples for LiDAR Object Detection.
A Thorough Comparison Study on Adversarial Attacks and Defenses for Common Thorax Disease Classification in Chest X-rays.
Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement.
Adversarial Attacks on Multivariate Time Series.
Improved Gradient based Adversarial Attacks for Quantized Networks.
Towards Deep Learning Models Resistant to Large Perturbations.
Efficient Black-box Optimization of Adversarial Windows Malware with Constrained Manipulations.
Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning.
DaST: Data-free Substitute Training for Adversarial Attacks.
Adversarial Imitation Attack.
Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks.
Challenging the adversarial robustness of DNNs based on error-correcting output codes.
Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples.
Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study.
Defense Through Diverse Directions.
Adversarial Attacks on Monocular Depth Estimation.
Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations.
Adversarial Perturbations Fool Deepfake Detectors.
Understanding the robustness of deep neural network classifiers for breast cancer screening.
Architectural Resilience to Foreground-and-Background Adversarial Noise.
Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression.
Robust Out-of-distribution Detection in Neural Networks.
Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises.
Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning.
Investigating Image Applications Based on Spatial-Frequency Transform and Deep Learning Techniques.
Quantum noise protects quantum classifiers against adversaries.
One Neuron to Fool Them All.
Adversarial Robustness on In- and Out-Distribution Improves Explainability.
Face-Off: Adversarial Face Obfuscation.
Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates.
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations.
Vulnerabilities of Connectionist AI Applications: Evaluation and Defence.
Improving Adversarial Robustness Through Progressive Hardening.
Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles.
Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method.
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior.
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples.
Adversarial Transferability in Wearable Sensor Systems.
Towards Privacy Protection by Generating Adversarial Identity Masks.
Anomalous Instance Detection in Deep Learning: A Survey.
Output Diversified Initialization for Adversarial Attacks.
Toward Adversarial Robustness via Semi-supervised Robust Training.
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation.
Minimum-Norm Adversarial Examples on KNN and KNN-Based Models.
VarMixup: Exploiting the Latent Space for Robust Training and Inference.
Certified Defenses for Adversarial Patches.
Towards a Resilient Machine Learning Classifier -- a Case Study of Ransomware Detection.
GeoDA: a geometric framework for black-box adversarial attacks.
When are Non-Parametric Methods Robust?
ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection.
Topological Effects on Attacks Against Vertex Classification.
Inline Detection of DGA Domains Using Side Information.
ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems.
Frequency-Tuned Universal Adversarial Attacks.
SAD: Saliency-based Defenses Against Adversarial Examples.
Using an ensemble color space model to tackle adversarial examples.
Cryptanalytic Extraction of Neural Network Models.
A Survey of Adversarial Learning on Graphs.
Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift.
Towards Probabilistic Verification of Machine Unlearning.
Manifold Regularization for Locally Stable Deep Neural Networks.
Generating Natural Language Adversarial Examples on a Large Scale with Generative Models.
Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world.
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM.
An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods.
On the Robustness of Cooperative Multi-Agent Reinforcement Learning.
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models.
No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks.
Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles.
Dynamic Backdoor Attacks Against Machine Learning Models.
Triple Memory Networks: a Brain-Inspired Method for Continual Learning.
Defense against adversarial attacks on spoofing countermeasures of ASV.
Automatic Generation of Adversarial Examples for Interpreting Malware Classifiers.
Towards Practical Lottery Ticket Hypothesis for Adversarial Training.
Exploiting Verified Neural Networks via Floating Point Numerical Error.
Detection and Recovery of Adversarial Attacks with Injected Attractors.
Adversarial Robustness Through Local Lipschitzness.
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization.
Search Space of Adversarial Perturbations against Image Filters.
Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems.
Colored Noise Injection for Training Adversarially Robust Neural Networks.
Double Backpropagation for Training Autoencoders against Adversarial Attack.
Black-box Smoothing: A Provable Defense for Pretrained Classifiers.
Metrics and methods for robustness evaluation of neural networks with generative models.
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks.
Analyzing Accuracy Loss in Randomized Smoothing Defenses.
Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack.
Type I Attack for Generative Models.
Rethinking Randomized Smoothing for Adversarial Robustness.
Data-Free Adversarial Perturbations for Practical Black-Box Attack.
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness.
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.
Adversarial Network Traffic: Towards Evaluating the Robustness of Deep Learning-Based Network Traffic Classification.
Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study.
Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models.
Why is the Mahalanobis Distance Effective for Anomaly Detection?
End-to-end Robustness for Sensing-Reasoning Machine Learning Pipelines.
Applying Tensor Decomposition to image for Robustness against Adversarial Attack.
Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT.
Detecting Patch Adversarial Attacks with Image Residuals.
Are L2 adversarial examples intrinsically different?
Provable Robust Learning Based on Transformation-Specific Smoothing.
Utilizing Network Properties to Detect Erroneous Inputs.
On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks.
Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy.
Randomization matters. How to defend against strong adversarial attacks.
Invariance vs. Robustness of Neural Networks.
Overfitting in adversarially robust deep learning.
MGA: Momentum Gradient Attack on Network.
Improving Robustness of Deep-Learning-Based Image Reconstruction.
Defense-PointNet: Protecting PointNet Against Adversarial Attacks.
Adversarial Attack on Deep Product Quantization Network for Image Retrieval.
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization.
Understanding and Mitigating the Tradeoff Between Robustness and Accuracy.
The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization.
Gödel's Sentence Is An Adversarial Example But Unsolvable.
(De)Randomized Smoothing for Certifiable Defense against Patch Attacks.
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks.
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
Adversarial Ranking Attack and Defense.
A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA.
Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks.
Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space.
Towards Rapid and Robust Adversarial Training with One-Step Attacks.
Precise Tradeoffs in Adversarial Training for Linear Regression.
On Pruning Adversarially Robust Neural Networks.
Adversarial Attack on DL-based Massive MIMO CSI Feedback.
Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference.
VisionGuard: Runtime Detection of Adversarial Inputs to Perception Systems.
Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks.
Temporal Sparse Adversarial Attack on Gait Recognition.
Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples.
Polarizing Front Ends for Robust CNNs.
UnMask: Adversarial Detection and Defense Through Robust Feature Alignment.
Robustness from Simple Classifiers.
Adversarial Detection and Correction by Matching Prediction Distributions.
Robustness to Programmable String Transformations via Augmented Abstract Training.
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework.
Adversarial Attacks on Machine Learning Systems for High-Frequency Trading.
Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning.
On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective.
A Bayes-Optimal View on Adversarial Examples.
Towards Certifiable Adversarial Sample Detection.
Boosting Adversarial Training with Hypersphere Embedding.
Bayes-TrEx: Model Transparency by Example.
AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks.
NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion.
On Adaptive Attacks to Adversarial Example Defenses.
Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks.
Randomized Smoothing of All Shapes and Sizes.
Action-Manipulation Attacks Against Stochastic Bandits: Attacks and Defense.
Deflecting Adversarial Attacks.
Block Switching: A Stochastic Approach for Deep Learning Security.
Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent.
TensorShield: Tensor-based Defense Against Adversarial Attacks on Images.
On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples.
Scalable Quantitative Verification For Deep Neural Networks.
Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification.
CAT: Customized Adversarial Training for Improved Robustness.
On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks.
Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack.
Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness.
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality.
Undersensitivity in Neural Reading Comprehension.
Hold me tight! Influence of discriminative features on deep network boundaries.
Blind Adversarial Network Perturbations.
Adversarial Distributional Training for Robust Deep Learning.
Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets.
Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks.
The Conditional Entropy Bottleneck.
Identifying Audio Adversarial Examples via Anomalous Pattern Detection.
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization.
Flickering Adversarial Attacks against Video Recognition Networks.
Adversarial Robustness for Code.
Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models.
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations.
Robustness of Bayesian Neural Networks to Gradient-Based Attacks.
Improving the affordability of robustness training for DNNs.
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models.
Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers.
Adversarial Data Encryption.
Generalised Lipschitz Regularisation Equals Distributional Robustness.
MDEA: Malware Detection with Evolutionary Adversarial Learning.
Input Validation for Neural Networks via Runtime Local Robustness Verification.
Robust binary classification with the 01 loss.
Watch out! Motion is Blurring the Vision of Your Deep Neural Networks.
Feature-level Malware Obfuscation in Deep Learning.
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples.
Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection.
Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing.
Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images.
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks.
Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness.
Improving the Adversarial Robustness of Transfer Learning via Noisy Feature Distillation.
Semantic Robustness of Models of Source Code.
Analysis of Random Perturbations for Robust Convolutional Neural Networks.
RAID: Randomized Adversarial-Input Detection for Neural Networks.
Assessing the Adversarial Robustness of Monte Carlo and Distillation Methods for Deep Bayesian Neural Network Classification.
Reliability Validation of Learning Enabled Vehicle Tracking.
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models.
AI-GAN: Attack-Inspired Generation of Adversarial Examples.
Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels.
Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study.
Adversarially Robust Frame Sampling with Bounded Irregularities.
Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria.
Minimax Defense against Gradient-based Adversarial Attacks.
A Differentiable Color Filter for Generating Unrestricted Adversarial Images.
Regularizers for Single-step Adversarial Training.
Defending Adversarial Attacks via Semantic Feature Manipulation.
Robust saliency maps with decoy-enhanced saliency score.
Towards Sharper First-Order Adversary with Quantized Gradients.
AdvJND: Generating Adversarial Examples with Just Noticeable Difference.
Additive Tree Ensembles: Reasoning About Potential Instances.
Politics of Adversarial Machine Learning.
FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications.
Tiny Noise Can Make an EEG-Based Brain-Computer Interface Speller Output Anything.
A4 : Evading Learning-based Adblockers.
D2M: Dynamic Defense and Modeling of Adversarial Movement in Networks.
Just Noticeable Difference for Machines to Generate Adversarial Images.
Semantic Adversarial Perturbations using Learnt Representations.
Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain.
Modelling and Quantifying Membership Information Leakage in Machine Learning.
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis.
Generating Natural Adversarial Hyperspectral examples with a modified Wasserstein GAN.
FakeLocator: Robust Localization of GAN-Based Face Manipulations via Semantic Segmentation Networks with Bells and Whistles.
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning.
Practical Fast Gradient Sign Attack against Mammographic Image Classifier.
Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks.
Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles.
AI-Powered GUI Attack and Its Defensive Methods.
Analyzing the Noise Robustness of Deep Neural Networks.
When Wireless Security Meets Machine Learning: Motivation, Challenges, and Research Directions.
Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack.
Towards Robust DNNs: An Taylor Expansion-Based Method for Generating Powerful Adversarial Examples.
On the human evaluation of audio adversarial examples.
Adversarial Attack on Community Detection by Hiding Individuals.
SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation.
Secure and Robust Machine Learning for Healthcare: A Survey.
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence.
GhostImage: Perception Domain Attacks against Vision-based Object Classification Systems.
Generate High-Resolution Adversarial Samples by Identifying Effective Features.
Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning.
Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP.
Cyber Attack Detection thanks to Machine Learning Algorithms.
Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks.
A Little Fog for a Large Turn.
The gap between theory and practice in function approximation with deep neural networks.
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.
Increasing the robustness of DNNs against image corruptions by playing the Game of Noise.
Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation.
Advbox: a toolbox to generate adversarial examples that fool neural networks.
Membership Inference Attacks Against Object Detection Models.
An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs.
Fast is better than free: Revisiting adversarial training.
Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses.
Sparse Black-box Video Attack with Reinforcement Learning.
ReluDiff: Differential Verification of Deep Neural Networks.
Guess First to Enable Better Compression and Adversarial Robustness.
To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers.
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius.
Transferability of Adversarial Examples to Attack Cloud-based Image Classifier Service.
Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring.
Generating Semantic Adversarial Examples via Feature Manipulation.
Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations.
The Human Visual System and Adversarial AI.
Reject Illegal Inputs with Generative Classifier Derived from Any Discriminative Classifier.
Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient.
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks Against Adversarial Attacks.
Automated Testing for Deep Learning Systems with Differential Behavior Criteria.
Protecting GANs against privacy attacks by preventing overfitting.
Erase and Restore: Simple, Accurate and Resilient Detection of $L_2$ Adversarial Examples.
Quantum Adversarial Machine Learning.
Adversarial Example Generation using Evolutionary Multi-objective Optimization.
Defending from adversarial examples with a two-stream architecture.
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices.
Search Based Repair of Deep Neural Networks.
Benchmarking Adversarial Robustness.
Efficient Adversarial Training with Transferable Adversarial Examples.
Attack-Resistant Federated Learning with Residual-based Reweighting.
Analysis of Moving Target Defense Against False Data Injection Attacks on Power Grid.
Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer.
Characterizing the Decision Boundary of Deep Neural Networks.
White Noise Analysis of Neural Networks.
Geometry-aware Generation of Adversarial and Cooperative Point Clouds.
T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack.
Measuring Dataset Granularity.
Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing.
secml: A Python Library for Secure and Explainable Machine Learning.
Jacobian Adversarially Regularized Networks for Robustness.
Explainability and Adversarial Robustness for RNNs.
Adversarial symmetric GANs: bridging adversarial samples and adversarial networks.
Does Symbolic Knowledge Prevent Adversarial Fooling?
A New Ensemble Method for Concessively Targeted Multi-model Attack.
Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples).
Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection.
$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers.
Towards Verifying Robustness of Neural Networks Against Semantic Perturbations.
Perturbations on the Perceptual Ball.
Identifying Adversarial Sentences by Analyzing Text Complexity.
An Adversarial Perturbation Oriented Domain Adaptation Approach for Semantic Segmentation.
Adversarial VC-dimension and Sample Complexity of Neural Networks.
SIGMA : Strengthening IDS with GAN and Metaheuristics Attacks.
Detecting Adversarial Attacks On Audio-Visual Speech Recognition.
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection.
CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator.
MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking.
On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration.
Constructing a provably adversarially-robust classifier from a high accuracy one.
DAmageNet: A Universal Adversarial Dataset.
What Else Can Fool Deep Learning? Addressing Color Constancy Errors on Deep Neural Network Performance.
Towards Robust Toxic Content Classification.
Potential adversarial samples for white-box attacks.
Learning to Model Aspects of Hearing Perception Using Neural Loss Functions.
Gabor Layers Enhance Network Robustness.
What it Thinks is Important is Important: Robustness Transfers through Input Gradients.
An Efficient Approach for Using Expectation Maximization Algorithm in Capsule Networks.
Detecting and Correcting Adversarial Images Using Image Processing Operations and Convolutional Neural Networks.
Towards a Robust Classifier: An MDL-Based Method for Generating Adversarial Examples.
Training Provably Robust Models by Polyhedral Envelope Regularization.
Appending Adversarial Frames for Universal Video Attack.
Feature Losses for Adversarial Robustness.
Hardening Random Forest Cyber Detectors Against Adversarial Attacks.
Amora: Black-box Adversarial Morphing Attack.
Exploring the Back Alleys: Analysing The Robustness of Alternative Neural Network Architectures against Adversarial Attacks.
Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations.
Principal Component Properties of Adversarial Samples.
Does Interpretability of Neural Networks Imply Adversarial Robustness?
Detection of Face Recognition Adversarial Attacks.
The Search for Sparse, Robust Neural Networks.
Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples.
Learning with Multiplicative Perturbations.
A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks.
Walking on the Edge: Fast, Low-Distortion Adversarial Examples.
Towards Robust Image Classification Using Sequential Attention Models.
Scratch that! An Evolution-based Adversarial Attack against Neural Networks.
A Survey of Black-Box Adversarial Attacks on Computer Vision Models.
FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks.
Cost-Aware Robust Tree Ensembles for Security Applications.
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples.
Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs.
Adversary A3C for Robust Reinforcement Learning.
A Method for Computing Class-wise Universal Adversarial Perturbations.
AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds.
Design and Interpretation of Universal Adversarial Patches in Face Detection.
Error-Correcting Neural Network.
Square Attack: a query-efficient black-box adversarial attack via random search.
Towards Privacy and Security of Deep Learning Systems: A Survey.
Survey of Attacks and Defenses on Edge-Deployed Neural Networks.
An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense.
Can Attention Masks Improve Adversarial Robustness?
Defending Against Adversarial Machine Learning.
Using Depth for Pixel-Wise Detection of Adversarial Attacks in Crowd Counting.
Playing it Safe: Adversarial Robustness with an Abstain Option.
ColorFool: Semantic Adversarial Colorization.
Adversarial Attack with Pattern Replacement.
One Man's Trash is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples.
When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks.
Time-aware Gradient Attack on Dynamic Network Link Prediction.
Universal Adversarial Perturbations to Understand Robustness of Texture vs. Shape-biased Training.
Robust Assessment of Real-World Adversarial Examples.
Bounding Singular Values of Convolution Layers.
Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction.
Attack Agnostic Statistical Method for Adversarial Detection.
Universal adversarial examples in speech command classification.
Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Networks for Secure Inference.
Heuristic Black-box Adversarial Attacks on Video Recognition Models.
Adversarial Examples Improve Image Recognition.
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation.
Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method.
Fine-grained Synthesis of Unrestricted Adversarial Examples.
Deep Minimax Probability Machine.
Logic-inspired Deep Neural Networks.
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Adversarial Robustness of Flow-Based Generative Models.
Defective Convolutional Layers Learn Robust CNNs.
Generate (non-software) Bugs to Fool Classifiers.
A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories.
A novel method for identifying the deep neural network model with the Serial Number.
Adversarial Attacks on Grid Events Classification: An Adversarial Machine Learning Approach.
WITCHcraft: Efficient PGD attacks with random step size.
Deep Detector Health Management under Adversarial Campaigns.
Countering Inconsistent Labelling by Google's Vision API for Rotated Images.
Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models.
Smoothed Inference for Adversarially-Trained Models.
SMART: Skeletal Motion Action Recognition aTtack.
Suspicion-Free Adversarial Attacks on Clustering Algorithms.
Black-Box Adversarial Attack with Transferable Model-based Embedding.
Defensive Few-shot Adversarial Learning.
Learning To Characterize Adversarial Subspaces.
On Model Robustness Against Adversarial Examples.
Simple iterative method for generating targeted universal adversarial perturbations.
AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients.
Adversarial Embedding: A robust and elusive Steganography and Watermarking technique.
Self-supervised Adversarial Training.
DomainGAN: Generating Adversarial Examples to Attack Domain Generation Algorithm Classifiers.
CAGFuzz: Coverage-Guided Adversarial Generative Fuzzing Testing of Deep Learning Systems.
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks.
Adversarial Margin Maximization Networks.
Improving Robustness of Task Oriented Dialog Systems.
On Robustness to Adversarial Examples and Polynomial Optimization.
Adversarial Examples in Modern Machine Learning: A Review.
RNN-Test: Adversarial Testing Framework for Recurrent Neural Network Systems.
Few-Features Attack to Fool Machine Learning Models through Mask-Based GAN.
Learning From Brains How to Regularize Machines.
Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory.
CALPA-NET: Channel-pruning-assisted Deep Residual Network for Steganalysis of Digital Images.
GraphDefense: Towards Robust Graph Convolutional Networks.
A Reinforced Generation of Adversarial Samples for Neural Machine Translation.
Improving Machine Reading Comprehension via Adversarial Training.
Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples.
Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy.
Intrusion Detection for Industrial Control Systems: Evaluation Analysis and Adversarial Attacks.
Patch augmentation: Towards efficient decision boundaries for neural networks.
Domain Robustness in Neural Machine Translation.
Adversarial Attacks on GMM i-vector based Speaker Verification Systems.
Imperceptible Adversarial Attacks on Tabular Data.
Reducing Sentiment Bias in Language Models via Counterfactual Evaluation. (1%)
White-Box Target Attack for EEG-Based BCI Regression Problems.
Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces.
Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance.
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods.
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey.
Reversible Adversarial Example based on Reversible Image Transformation.
Adversarial Enhancement for Community Detection in Complex Networks.
Test Metrics for Recurrent Neural Networks.
DLA: Dense-Layer-Analysis for Adversarial Example Detection.
Intriguing Properties of Adversarial ML Attacks in the Problem Space.
Persistency of Excitation for Robustness of Neural Networks.
A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models.
Fast-UAP: An Algorithm for Speeding up Universal Adversarial Perturbation Generation with Orientation of Perturbation Vectors.
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems.
MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks.
Automatic Detection of Generated Text is Easiest when Humans are Fooled.
Security of Facial Forensics Models Against Adversarial Attacks.
Enhancing Certifiable Robustness via a Deep Model Ensemble.
Certifiable Robustness to Graph Perturbations.
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System.
Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization.
Beyond Universal Person Re-ID Attack.
Adversarial Example in Remote Sensing Image Recognition.
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks.
Certified Adversarial Robustness for Deep Reinforcement Learning.
Open the Boxes of Words: Incorporating Sememes into Textual Adversarial Attack.
EdgeFool: An Adversarial Image Enhancement Filter.
Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolutional Neural Networks.
Detection of Adversarial Attacks and Characterization of Adversarial Subspace.
Understanding and Quantifying Adversarial Examples Existence in Linear Classification.
Adversarial Defense Via Local Flatness Regularization.
Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples.
MediaEval 2019: Concealed FGSM Perturbations for Privacy Preservation.
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries.
A Useful Taxonomy for Adversarial Robustness of Neural Networks.
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks.
Attacking Optical Flow.
Adversarial Example Detection by Classification for Deep Speech Recognition.
Cross-Representation Transferability of Adversarial Attacks: From Spectrograms to Audio Waveforms.
Structure Matters: Towards Generating Transferable Adversarial Images.
Recovering Localized Adversarial Attacks.
Learning to Learn by Zeroth-Order Oracle.
An Alternative Surrogate Loss for PGD-based Adversarial Testing.
Enhancing Recurrent Neural Networks with Sememes.
Adversarial Attacks on Spoofing Countermeasures of automatic speaker verification.
Toward Metrics for Differentiating Out-of-Distribution Sets.
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking.
A Fast Saddle-Point Dynamical System Approach to Robust Deep Learning.
Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets.
Enforcing Linearity in DNN succours Robustness and Adversarial Image Generation.
LanCe: A Comprehensive and Lightweight CNN Defense Methodology against Physical Adversarial Attacks on Embedded Multimedia Applications.
Adversarial T-shirt! Evading Person Detectors in A Physical World.
A New Defense Against Adversarial Images: Turning a Weakness into a Strength.
Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation.
Understanding Misclassifications by Attributes.
Adversarial Examples for Models of Code.
On adversarial patches: real-world attack on ArcFace-100 face recognition system.
DeepSearch: Simple and Effective Blackbox Fuzzing of Deep Neural Networks.
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.
ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization.
Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models.
Real-world adversarial attack on MTCNN face detection system.
On Robustness of Neural Ordinary Differential Equations.
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems.
Verification of Neural Networks: Specifying Global Robustness using Generative Models.
Universal Adversarial Perturbation for Text Classification.
Information Aware Max-Norm Dirichlet Networks for Predictive Uncertainty Estimation.
Learning deep forest with multi-scale Local Binary Pattern features for face anti-spoofing.
Adversarial Learning of Deepfakes in Accounting.
Deep Latent Defence.
Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system.
Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications.
SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations.
Interpretable Disentanglement of Neural Networks by Extracting Class-Specific Subnetwork.
Unrestricted Adversarial Attacks for Semantic Segmentation.
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies.
Requirements for Developing Robust Neural Networks.
Adversarial Examples for Cost-Sensitive Classifiers.
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions.
BUZz: BUffer Zones for defending adversarial examples in image classification.
Verification of Neural Network Behaviour: Formal Guarantees for Power System Applications.
Attacking Vision-based Perception in End-to-End Autonomous Driving Models.
Adversarially Robust Few-Shot Learning: A Meta-Learning Approach.
Boosting Image Recognition with Non-differentiable Constraints.
Generating Semantic Adversarial Examples with Differentiable Rendering.
Attacking CNN-based anti-spoofing face authentication in the physical domain.
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack.
Cross-Layer Strategic Ensemble Defense Against Adversarial Examples.
Deep Neural Rejection against Adversarial Examples.
Black-box Adversarial Attacks with Bayesian Optimization.
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML.
Role of Spatial Context in Adversarial Robustness for Object Detection.
Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems.
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest.
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks.
Towards Understanding the Transferability of Deep Representations.
Adversarial Machine Learning Attack on Modulation Classification.
Adversarial ML Attack on Self Organizing Cellular Networks.
Towards neural networks that provably know when they don't know.
Lower Bounds on Adversarial Robustness from Optimal Transport.
Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection.
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks.
FreeLB: Enhanced Adversarial Training for Natural Language Understanding.
A Visual Analytics Framework for Adversarial Text Generation.
Intelligent image synthesis to attack a segmentation CNN using adversarial learning.
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack.
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.
Robust Local Features for Improving the Generalization of Adversarial Training.
FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments.
HAWKEYE: Adversarial Example Detector for Deep Neural Networks.
Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction.
Adversarial Learning with Margin-based Triplet Embedding Regularization.
COPYCAT: Practical Adversarial Attacks on Visualization-Based Malware Detection.
Defending Against Physically Realizable Attacks on Image Classification.
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation.
Adversarial Vulnerability Bounds for Gaussian Process Classification.
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks.
Training Robust Deep Neural Networks via Adversarial Noise Propagation.
Toward Robust Image Classification.
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review.
Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model.
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges.
They Might NOT Be Giants: Crafting Black-Box Adversarial Examples with Fewer Queries Using Particle Swarm Optimization.
HAD-GAN: A Human-perception Auxiliary Defense GAN to Defend Adversarial Examples.
Towards Quality Assurance of Software Product Lines with Adversarial Configurations.
Interpreting and Improving Adversarial Robustness with Neuron Sensitivity.
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms.
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors.
Natural Language Adversarial Attacks and Defenses in Word Level.
Adversarial Attack on Skeleton-based Human Action Recognition.
Say What I Want: Towards the Dark Side of Neural Dialogue Models.
White-Box Adversarial Defense via Self-Supervised Data Estimation.
Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix.
Inspecting adversarial examples using the Fisher information.
An Empirical Investigation of Randomized Defenses against Adversarial Attacks.
Transferable Adversarial Robustness using Adversarially Trained Autoencoders.
Feedback Learning for Improving the Robustness of Neural Networks.
Sparse and Imperceivable Adversarial Attacks.
Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification.
Identifying and Resisting Adversarial Videos Using Temporal Consistency.
Effectiveness of Adversarial Examples and Defenses for Malware Classification.
Towards Noise-Robust Neural Networks via Progressive Adversarial Training.
UPC: Learning Universal Physical Camouflage Attacks on Object Detectors.
FDA: Feature Disruptive Attack.
Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection.
Toward Finding The Global Optimal of Adversarial Examples.
Adversarial Robustness Against the Union of Multiple Perturbation Models.
STA: Adversarial Attacks on Siamese Trackers.
When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures.
Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification.
Natural Adversarial Sentence Generation with Gradient-based Perturbation.
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information.
Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents.
Adversarial Examples with Difficult Common Words for Paraphrase Identification.
Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes ?
Certified Robustness to Adversarial Word Substitutions.
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation.
Metric Learning for Adversarial Robustness.
Adversarial Training Methods for Network Embedding.
Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness.
Defending Against Misclassification Attacks in Transfer Learning.
Universal, transferable and targeted adversarial attacks.
A Statistical Defense Approach for Detecting Adversarial Examples.
Gated Convolutional Networks with Hybrid Connectivity for Image Classification.
Adversarial Edit Attacks for Tree Data.
advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns.
Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower.
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
AdvHat: Real-world adversarial attack on ArcFace Face ID system.
Saliency Methods for Explaining Adversarial Attacks.
Testing Robustness Against Unforeseen Adversaries.
Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples.
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks.
Transferring Robustness for Graph Neural Network Against Poisoning Attacks.
Universal Adversarial Triggers for NLP.
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses.
Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries.
On the Robustness of Human Pose Estimation.
Adversarial Defense by Suppressing High-frequency Components.
Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation.
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks.
Adversarial point perturbations on 3D objects.
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once.
AdvFaces: Adversarial Face Synthesis.
DAPAS: Denoising Autoencoder to Prevent Adversarial attack in Semantic Segmentation.
On Defending Against Label Flipping Attacks on Malware Detection Systems.
Adversarial Neural Pruning with Latent Vulnerability Suppression.
On the Adversarial Robustness of Neural Networks without Weight Transport.
Defending Against Adversarial Iris Examples Using Wavelet Decomposition.
Universal Adversarial Audio Perturbations.
Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations.
Investigating Decision Boundaries of Trained Neural Networks.
Explaining Deep Neural Networks Using Spectrum-Based Fault Localization.
MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks.
BlurNet: Defense by Filtering the Feature Maps.
Random Directional Attack for Fooling Deep Neural Networks.
Adversarial Self-Defense for Cycle-Consistent GANs.
Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve.
A principled approach for generating adversarial images under non-smooth dissimilarity metrics.
Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems.
A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models.
Exploring the Robustness of NMT Systems to Nonsensical Inputs.
AdvGAN++ : Harnessing latent layers for adversary generation.
Black-box Adversarial ML Attack on Modulation Classification.
Robustifying deep networks for image segmentation.
Adversarial Robustness Curves.
Optimal Attacks on Reinforcement Learning Policies.
Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation.
Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding.
Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples.
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment.
Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin.
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method.
Towards Adversarially Robust Object Detection.
Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks.
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training.
Weakly Supervised Localization using Min-Max Entropy: an Interpretable Framework.
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems.
Enhancing Adversarial Example Transferability with an Intermediate Level Attack.
Characterizing Attacks on Deep Reinforcement Learning.
Connecting Lyapunov Control Theory to Adversarial Attacks.
Robustness properties of Facebook's ResNeXt WSL models.
Constrained Concealment Attacks against Reconstruction-based Anomaly Detectors in Industrial Control Systems.
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods.
Natural Adversarial Examples.
Latent Adversarial Defence with Boundary-guided Generation.
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving.
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics.
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning.
Recovery Guarantees for Compressible Signals with Adversarial Noise.
Measuring the Transferability of Adversarial Examples.
Unsupervised Adversarial Attacks on Deep Feature-based Retrieval with GAN.
Stateful Detection of Black-Box Adversarial Attacks.
Generative Modeling by Estimating Gradients of the Data Distribution.
Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn.
Adversarial Objects Against LiDAR-Based Autonomous Driving Systems.
Metamorphic Detection of Adversarial Examples in Deep Learning Models With Affine Transformations.
Generating Adversarial Fragments with Adversarial Networks for Physical-world Implementation.
Affine Disentangled GAN for Interpretable and Robust AV Perception.
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions.
Adversarial Robustness through Local Linearization.
Adversarial Attacks in Sound Event Classification.
Robust Synthesis of Adversarial Visual Examples Using a Deep Image Prior.
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack.
Efficient Cyber Attacks Detection in Industrial Control Systems Using Lightweight Neural Networks and PCA.
Treant: Training Evasion-Aware Decision Trees.
Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network".
Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation.
Accurate, reliable and fast robustness evaluation.
Fooling a Real Car with Adversarial Traffic Signs.
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty.
Certifiable Robustness and Robust Training for Graph Convolutional Networks.
Learning to Cope with Adversarial Attacks.
Robustness Guarantees for Deep Neural Networks on Videos.
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense.
Adversarial Robustness via Label-Smoothing.
Evolving Robust Neural Architectures to Defend from Adversarial Attacks.
The Adversarial Robustness of Sampling.
Defending Adversarial Attacks by Correcting logits.
Quantitative Verification of Neural Networks And its Security Applications.
Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection.
Deceptive Reinforcement Learning Under Adversarial Manipulations on Cost Signals.
Defending Against Adversarial Examples with K-Nearest Neighbor.
Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations.
A Fourier Perspective on Model Robustness in Computer Vision.
Evolution Attack On Neural Networks.
Adversarial Examples to Fool Iris Recognition Systems.
A Cyclically-Trained Adversarial Network for Invariant Representation Learning.
On Physical Adversarial Patches for Object Detection.
Catfish Effect Between Internal and External Attackers: Being Semi-honest is Helpful.
Improving the robustness of ImageNet classifiers using elements of human visual cognition.
A unified view on differential privacy and robustness to adversarial examples.
Convergence of Adversarial Training in Overparametrized Networks.
Global Adversarial Attacks for Assessing Deep Learning Robustness.
Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield.
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing.
Adversarial attacks on Copyright Detection Systems.
Improving Black-box Adversarial Attacks with a Transfer-based Prior.
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks.
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Accuracy.
Defending Against Adversarial Attacks Using Random Forests.
Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences.
Adversarial Training Can Hurt Generalization.
Towards Compact and Robust Deep Neural Networks.
Perceptual Based Adversarial Audio Attacks.
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks.
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks.
Towards Stable and Efficient Training of Verifiably Robust Neural Networks.
Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary.
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks.
Lower Bounds for Adversarially Robust PAC Learning.
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers.
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks.
Mimic and Fool: A Task Agnostic Adversarial Attack.
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks.
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles.
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective.
Robustness Verification of Tree-based Models.
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective.
On the Vulnerability of Capsule Networks to Adversarial Attacks.
Intriguing properties of adversarial training.
Improved Adversarial Robustness via Logit Regularization Methods.
Attacking Graph Convolutional Networks via Rewiring.
Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.
Strategies to architect AI Safety: Defense to guard AI from Adversaries.
Making targeted black-box evasion attacks effective and efficient.
Sensitivity of Deep Convolutional Networks to Gabor Noise.
ML-LOO: Detecting Adversarial Examples with Feature Attribution.
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks.
Defending Against Universal Attacks Through Selective Feature Regeneration.
A cryptographic approach to black box adversarial machine learning.
Using learned optimizers to make models robust to input noise.
Efficient Project Gradient Descent for Ensemble Adversarial Attack.
Inductive Bias of Gradient Descent based Adversarial Training on Separable Data.
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness.
Robustness for Non-Parametric Classification: A Generic Attack and Defense.
Robust Attacks against Multiple Classifiers.
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation.
Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric.
Should Adversarial Attacks Use Pixel p-Norm?
Image Synthesis with a Single (Robust) Classifier.
MNIST-C: A Robustness Benchmark for Computer Vision.
Enhancing Gradient-based Attacks with Symbolic Intervals.
Query-efficient Meta Attack to Deep Neural Networks.
c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation.
Multi-way Encoding for Robustness.
Adversarial Training is a Form of Data-dependent Operator Norm Regularization.
Adversarial Exploitation of Policy Imitation.
RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies.
Adversarial Risk Bounds for Neural Networks through Sparsity based Compression.
The Adversarial Machine Learning Conundrum: Can The Insecurity of ML Become The Achilles' Heel of Cognitive Networks?
Adversarial Robustness as a Prior for Learned Representations.
Achieving Generalizable Robustness of Deep Neural Networks by Stability Training.
A Surprising Density of Illusionable Natural Speech.
Fast and Stable Interval Bounds Propagation for Training Verifiably Robust Models.
Understanding the Limitations of Conditional Generative Models.
Adversarially Robust Generalization Just Requires More Unlabeled Data.
Adversarial Examples for Edge Detection: They Exist, and They Transfer.
Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification.
Enhancing Transformation-based Defenses using a Distribution Classifier.
Unlabeled Data Improves Adversarial Robustness.
Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness.
Are Labels Required for Improving Adversarial Robustness?
Real-Time Adversarial Attacks.
Residual Networks as Nonlinear Systems: Stability Analysis using Linearization.
Identifying Classes Susceptible to Adversarial Attacks.
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness.
Interpretable Adversarial Training for Text.
Bandlimiting Neural Networks Against Adversarial Attacks.
Misleading Authorship Attribution of Source Code using Adversarial Learning.
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward.
Functional Adversarial Attacks.
CopyCAT: Taking Control of Neural Policies with Constant Attacks.
ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation.
Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics.
Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss.
Snooping Attacks on Deep Reinforcement Learning.
Probabilistically True and Tight Bounds for Robust Deep Neural Network Training.
High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks.
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness.
Cross-Domain Transferability of Adversarial Perturbations.
Certifiably Robust Interpretation in Deep Learning.
Brain-inspired reverse adversarial examples.
Label Universal Targeted Attack.
Divide-and-Conquer Adversarial Detection.
Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking.
Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$.
Scaleable input gradient regularization for adversarial robustness.
Combating Adversarial Misspellings with Robust Word Recognition.
Analyzing the Interpretability Robustness of Self-Explaining Models.
Adversarially Robust Learning Could Leverage Computational Hardness.
Unsupervised Euclidean Distance Attack on Network Embedding.
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations.
Non-Determinism in Neural Networks for Adversarial Robustness.
Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders.
Rearchitecting Classification Frameworks For Increased Robustness.
Robust Classification using Robust Feature Augmentation.
Generalizable Adversarial Attacks Using Generative Models.
Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks.
Adversarial Distillation for Ordered Top-k Attacks.
Adversarial Policies: Attacking Deep Reinforcement Learning.
Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness.
Robustness to Adversarial Perturbations in Learning from Incomplete Data.
Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering.
Enhancing Adversarial Defense by k-Winners-Take-All.
A Direct Approach to Robust Deep Learning Using Adversarial Networks.
PHom-GeM: Persistent Homology for Generative Models.
Thwarting finite difference adversarial attacks with output randomization.
Interpreting Adversarially Trained Convolutional Neural Networks.
Adversarially Robust Distillation.
Convergence and Margin of Adversarial Training on Separable Data.
Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection.
DoPa: A Fast and Comprehensive CNN Defense Methodology against Physical Adversarial Attacks.
Adversarially robust transfer learning.
Testing DNN Image Classifiers for Confusion & Bias Errors.
What Do Adversarially Robust Models Look At?
Taking Care of The Discretization Problem: A Black-Box Adversarial Image Attack in Discrete Integer Domain.
POPQORN: Quantifying Robustness of Recurrent Neural Networks.
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models.
Simple Black-box Adversarial Attacks.
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization.
On Norm-Agnostic Robustness of Adversarial Training.
An Efficient Pre-processing Method to Eliminate Adversarial Effects.
Robustification of deep net classifiers by key based diversified aggregation with pre-filtering.
Adversarial Examples for Electrocardiograms.
Analyzing Adversarial Attacks Against Deep Learning for Intrusion Detection in IoT Networks.
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models.
Moving Target Defense for Deep Visual Sensing against Adversarial Examples.
Interpreting and Evaluating Neural Network Robustness.
On the Connection Between Adversarial Robustness and Saliency Map Interpretability.
Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables.
Adversarial Defense Framework for Graph Neural Network.
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain.
Exploring the Hyperparameter Landscape of Adversarial Robustness.
Learning Interpretable Features via Adversarially Robust Optimization.
Universal Adversarial Perturbations for Speech Recognition Systems.
ROSA: Robust Salient Object Detection against Adversarial Attacks.
Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction.
Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems.
A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks.
Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study.
An Empirical Evaluation of Adversarial Robustness under Transfer Learning.
Adaptive Generation of Unrestricted Adversarial Inputs.
Batch Normalization is a Cause of Adversarial Vulnerability.
Adversarial Examples Are Not Bugs, They Are Features.
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples.
Transfer of Adversarial Robustness Between Perturbation Types.
Adversarial Training with Voronoi Constraints.
Weight Map Layer for Noise and Adversarial Attack Robustness.
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle.
POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm.
Dropping Pixels for Adversarial Robustness.
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks.
Test Selection for Deep Learning Systems.
Detecting Adversarial Examples through Nonlinear Dimensionality Reduction.
Adversarial Training for Free!
Adversarial Training and Robustness for Multiple Perturbations.
Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks.
Robustness Verification of Support Vector Machines.
A Robust Approach for Securing Audio Classification Against Adversarial Attacks.
Physical Adversarial Textures that Fool Visual Object Tracking.
Minimizing Perceived Image Quality Loss Through Adversarial Attack Scoping.
Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness.
Using Videos to Evaluate Image Model Robustness.
Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning.
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach.
Salient Object Detection in the Deep Learning Era: An In-Depth Survey.
Fooling automated surveillance cameras: adversarial patches to attack person detection.
ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks.
Defensive Quantization: When Efficiency Meets Robustness.
Interpreting Adversarial Examples with Attributes.
Adversarial Defense Through Network Profiling Based Path Extraction.
Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks.
Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers.
Reducing Adversarial Example Transferability Using Gradient Regularization.
AT-GAN: An Adversarial Generator Model for Non-constrained Adversarial Examples.
Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction.
Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks.
Exploiting Vulnerabilities of Load Forecasting Through Adversarial Attacks.
Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense.
Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients.
Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks.
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks.
Unrestricted Adversarial Examples via Semantic Manipulation.
Black-Box Decision based Adversarial Attack with Symmetric $\alpha$-stable Distribution.
Learning to Generate Synthetic Data via Compositing.
Black-box Adversarial Attacks on Video Recognition Models.
Generation & Evaluation of Adversarial Examples for Malware Obfuscation.
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition.
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning.
JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks.
Malware Evasion Attack and Defense.
On Training Robust PDF Malware Classifiers.
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks.
White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks.
Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks.
Understanding the efficacy, reliability and resiliency of computer vision techniques for malware detection and future research directions.
Interpreting Adversarial Examples by Activation Promotion and Suppression.
HopSkipJumpAttack: A Query-Efficient Decision-Based Attack.
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.
Adversarial Attacks against Deep Saliency Models.
Curls & Whey: Boosting Black-Box Adversarial Attacks.
Robustness of 3D Deep Learning in an Adversarial Setting.
Defending against adversarial attacks by randomized diversification.
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks.
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses.
On the Vulnerability of CNN Classifiers in EEG-Based BCIs.
Adversarial Robustness vs Model Compression, or Both?
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.
Smooth Adversarial Examples.
Rallying Adversarial Techniques against Deep Learning for Network Security.
Bridging Adversarial Robustness and Gradient Interpretability.
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks.
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems.
On the Adversarial Robustness of Multivariate Robust Estimation.
A geometry-inspired decision-based attack.
Defending against Whitebox Adversarial Attacks via Randomized Discretization.
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness.
The LogBarrier adversarial attack: making effective use of decision boundary information.
Robust Neural Networks using Randomized Adversarial Training.
A Formalization of Robustness for Deep Neural Networks.
Variational Inference with Latent Space Quantization for Adversarial Resilience.
Improving Adversarial Robustness via Guided Complement Entropy.
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition.
Adversarial camera stickers: A physical camera-based attack on deep learning systems.
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes.
On the Robustness of Deep K-Nearest Neighbors.
Generating Adversarial Examples With Conditional Generative Adversarial Net.
Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems.
Adversarial Attacks on Deep Neural Networks for Time Series Classification.
On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models.
On Certifying Non-uniform Bound against Adversarial Attacks.
A Research Agenda: Dynamic Models to Defend Against Correlated Attacks.
Attribution-driven Causal Analysis for Detection of Adversarial Examples.
Adversarial attacks against Fact Extraction and VERification.
Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models.
Can Adversarial Network Attack be Defended?
Manifold Preserving Adversarial Learning.
Attack Type Agnostic Perceptual Enhancement of Adversarial Images.
Out-domain examples for generative models.
GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier.
Statistical Guarantees for the Robustness of Bayesian Neural Networks.
L1-norm double backpropagation adversarial defense.
Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search.
The Vulnerabilities of Graph Convolutional Networks: Stronger Attacks and Defensive Techniques.
Complement Objective Training.
Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming.
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations.
Evaluating Adversarial Evasion Attacks in the Context of Wireless Communications.
PuVAE: A Variational Autoencoder to Purify Adversarial Examples.
Attacking Graph-based Classification via Manipulating the Graph Structure.
On the Effectiveness of Low Frequency Perturbations.
Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN.
Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors.
Adversarial Attack and Defense on Point Sets.
Stochastically Rank-Regularized Tensor Regression Networks.
Adversarial Attacks on Time Series.
Robust Decision Trees Against Adversarial Examples.
The Best Defense Is a Good Offense: Adversarial Attacks to Avoid Modulation Detection.
Disentangled Deep Autoencoding Regularization for Robust Image Classification.
Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification.
Verification of Non-Linear Specifications for Neural Networks.
Adversarial attacks hidden in plain sight.
MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses.
Adversarial Reinforcement Learning under Partial Observability in Software-Defined Networking.
Re-evaluating ADEM: A Deeper Look at Scoring Dialogue Responses.
A Deep, Information-theoretic Framework for Robust Biometric Recognition.
Adversarial Attacks on Graph Neural Networks via Meta Learning.
Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems.
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks.
On the Sensitivity of Adversarial Robustness to Input Data Distributions.
Quantifying Perceptual Distortion of Adversarial Examples.
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations.
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch.
Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers.
Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure.
There are No Bit Parts for Sign Bits in Black-Box Attacks.
On Evaluating Adversarial Robustness.
AuxBlocks: Defense Adversarial Example via Auxiliary Blocks.
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces.
Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training.
Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness.
DeepFault: Fault Localization for Deep Neural Networks.
Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples.
Examining Adversarial Learning against Graph-based IoT Malware Detection Systems.
Adversarial Samples on Android Malware Detection Systems for IoT Systems.
A Survey: Towards a Robust Deep Neural Network in Text Domain.
Model Compression with Adversarial Robustness: A Unified Optimization Framework.
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks.
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images.
Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis.
Discretization based Solutions for Secure Machine Learning against Adversarial Attacks.
Robustness Of Saak Transform Against Adversarial Attacks.
Certified Adversarial Robustness via Randomized Smoothing.
Fooling Neural Network Interpretations via Adversarial Model Manipulation.
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples.
Fatal Brain Damage.
Theoretical evidence for adversarial robustness through randomization.
Predictive Uncertainty Quantification with Compound Density Networks.
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks.
Robustness Certificates Against Adversarial Examples for ReLU Networks.
Natural and Adversarial Error Detection using Invariance to Image Transformations.
Adaptive Gradient for Adversarial Perturbations Generation.
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks.
The Efficacy of SHIELD under Different Threat Models.
A New Family of Neural Networks Provably Resistant to Adversarial Attacks.
Training Artificial Neural Networks by Generalized Likelihood Ratio Method: Exploring Brain-like Learning to Improve Robustness.
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance.
Augmenting Model Robustness with Transformation-Invariant Attacks.
Adversarial Examples Are a Natural Consequence of Test Error in Noise.
On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks.
RED-Attack: Resource Efficient Decision based Attack for Machine Learning.
Reliable Smart Road Signs.
Adversarial Metric Attack and Defense for Person Re-identification.
Improving Adversarial Robustness of Ensembles with Diversity Training.
CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks.
Defense Methods Against Adversarial Examples for Recurrent Neural Networks.
Using Pre-Training Can Improve Model Robustness and Uncertainty.
An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers.
Characterizing the Shape of Activation Space in Deep Neural Networks.
Strong Black-box Adversarial Attacks on Unsupervised Machine Learning Models.
A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm.
Weighted-Sampling Audio Adversarial Example Attack.
Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data.
Improving Adversarial Robustness via Promoting Ensemble Diversity.
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples.
Theoretically Principled Trade-off between Robustness and Accuracy.
SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems.
Sitatapatra: Blocking the Transfer of Adversarial Samples.
Universal Rules for Fooling Deep Neural Networks based Text Classification.
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey.
Sensitivity Analysis of Deep Neural Networks.
Perception-in-the-Loop Adversarial Examples.
Easy to Fool? Testing the Anti-evasion Capabilities of PDF Malware Scanners.
The Limitations of Adversarial Training and the Blind-Spot Attack.
Generating Adversarial Perturbation with Root Mean Square Gradient.
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System.
Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries.
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification.
Image Transformation can make Neural Networks more robust against Adversarial Examples.
Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers.
Interpretable BoW Networks for Adversarial Example Detection.
Image Super-Resolution as a Defense Against Adversarial Attacks.
Fake News Detection via NLP is Vulnerable to Adversarial Attacks.
Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study.
Multi-Label Adversarial Perturbations.
Adversarial Robustness May Be at Odds With Simplicity.
A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks.
DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems.
Adversarial Attack and Defense on Graph Data: A Survey.
A Data-driven Adversarial Examples Recognition Framework via Adversarial Feature Genome.
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition.
PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning.
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples.
Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors.
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense.
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks.
Markov Game Modeling of Moving Target Defense for Strategic Detection of Threats in Cloud Networks.
Exploiting the Inherent Limitation of L0 Adversarial Examples.
Dissociable neural representations of adversarially perturbed images in convolutional neural networks and the human brain.
Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge.
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach.
Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings.
Designing Adversarially Resilient Classifiers using Resilient Feature Engineering.
A Survey of Safety and Trustworthiness of Deep Neural Networks.
Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks.
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples.
Trust Region Based Adversarial Attack on Neural Networks.
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing.
TextBugger: Generating Adversarial Text Against Real-world Applications.
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem.
Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform.
On the Security of Randomized Defenses Against Adversarial Samples.
Adversarial Framing for Image and Video Classification.
Defending Against Universal Perturbations With Shared Adversarial Training.
Feature Denoising for Improving Adversarial Robustness.
AutoGAN: Robust Classifier Against Adversarial Attacks.
Detecting Adversarial Examples in Convolutional Neural Networks.
Learning Transferable Adversarial Examples via Ghost Networks.
Deep-RBF Networks Revisited: Robust Classification with Rejection.
Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach.
Adversarial Defense of Image Classification Using a Variational Auto-Encoder.
Adversarial Attacks, Regression, and Numerical Stability Regularization.
Prior Networks for Detection of Adversarial Attacks.
Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack.
Fooling Network Interpretation in Image Classification.
The Limitations of Model Uncertainty in Adversarial Settings.
MMA Training: Direct Input Space Margin Maximization through Adversarial Training.
On Configurable Defense against Adversarial Example Attacks.
Regularized Ensembles and Transferability in Adversarial Learning.
SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications.
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures.
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples.
Disentangling Adversarial Robustness and Generalization.
Interpretable Deep Learning under Fire.
Adversarial Example Decomposition.
Model-Reuse Attacks on Deep Learning Systems.
Universal Perturbation Attack Against Image Retrieval.
FineFool: Fine Object Contour Attack via Attention.
Building robust classifiers through generation of confident out of distribution examples.
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification.
Effects of Loss Functions And Target Representations on Adversarial Robustness.
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems.
Transferable Adversarial Attacks for Image and Video Object Detection.
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples.
Adversarial Defense by Stratified Convolutional Sparse Coding.
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks.
Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting.
Adversarial Examples as an Input-Fault Tolerance Problem.
Analyzing Federated Learning through an Adversarial Lens.
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers.
Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects.
A randomized gradient-free attack on ReLU networks.
Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness.
Robust Classification of Financial Risk.
Universal Adversarial Training.
Using Attribution to Decode Dataset Bias in Neural Network Models for Chemistry.
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks.
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies.
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks.
Is Data Clustering in Adversarial Settings Secure?
Attention, Please! Adversarial Defense via Attention Rectification and Preservation.
Robustness via curvature regularization, and vice versa.
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses.
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack.
Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles.
Detecting Adversarial Perturbations Through Spatial Behavior in Activation Spaces.
Task-generalizable Adversarial Attack based on Perceptual Metric.
Towards Robust Neural Networks with Lipschitz Continuity.
How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples.
MimicGAN: Corruption-Mimicking for Blind Image Recovery & Adversarial Defense.
Intermediate Level Adversarial Attack for Enhanced Transferability.
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples.
Convolutional Neural Networks with Transformed Input based on Robust Tensor Network Decomposition.
Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding.
Generalizable Adversarial Training via Spectral Normalization.
Regularized adversarial examples for model interpretability.
The Taboo Trap: Behavioural Detection of Adversarial Samples.
DeepConsensus: using the consensus of features from multiple layers to attain robust image classification.
Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples.
Boosting the Robustness Verification of DNN by Identifying the Achilles's Heel.
Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues.
DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules.
A Spectral View of Adversarially Robust Features.
A note on hyperparameters in black-box adversarial examples.
Mathematical Analysis of Adversarial Attacks.
Adversarial Examples from Cryptographic Pseudo-Random Generators.
Verification of Recurrent Neural Networks Through Rule Extraction.
Robustness of spectral methods for community detection.
Deep Q learning for fooling neural networks.
Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses.
New CleverHans Feature: Better Adversarial Robustness Evaluations with Attack Bundling.
A Geometric Perspective on the Transferability of Adversarial Directions.
CAAD 2018: Iterative Ensemble Adversarial Attack.
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning.
MixTrain: Scalable Training of Verifiably Robust Neural Networks.
SparseFool: a few pixels make a big difference.
Active Deep Learning Attacks under Strict Rate Limitations for Online API Calls.
FUNN: Flexible Unsupervised Neural Network.
On the Transferability of Adversarial Examples Against CNN-Based Image Forensics.
FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning.
QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks.
SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters.
CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network.
Adversarial Black-Box Attacks on Automatic Speech Recognition Systems using Multi-Objective Evolutionary Optimization.
Learning to Defense by Learning to Attack.
A Marauder's Map of Security and Privacy in Machine Learning.
Semidefinite relaxations for certifying robustness to adversarial examples.
Efficient Neural Network Robustness Certification with General Activation Functions.
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks.
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks.
Improving Adversarial Robustness by Encouraging Discriminative Features.
On the Geometry of Adversarial Examples.
Excessive Invariance Causes Adversarial Vulnerability.
When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers.
Reversible Adversarial Examples.
Improved Network Robustness with Adversary Critic.
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models.
Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution.
Logit Pairing Methods Can Fool Gradient-Based Attacks.
RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications.
Rademacher Complexity for Adversarially Robust Generalization.
Robust Audio Adversarial Example for a Physical Attack.
Towards Robust Deep Neural Networks.
Regularization Effect of Fast Gradient Sign Method and its Generalization.
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples.
Law and Adversarial Machine Learning.
Attack Graph Convolutional Networks by Adding Fake Nodes.
Evading classifiers in discrete domains with provable optimality guarantees.
Robust Adversarial Learning via Sparsifying Front Ends.
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses.
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy.
Et Tu Alexa? When Commodity WiFi Devices Turn into Adversarial Motion Sensors.
Adversarial Risk Bounds via Function Transformation.
Cost-Sensitive Robustness against Adversarial Examples.
Sparse DNNs with Improved Adversarial Robustness.
On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm.
Exploring Adversarial Examples in Malware Detection.
A Training-based Identification Approach to VIN Adversarial Examples.
Provable Robustness of ReLU networks via Maximization of Linear Regions.
Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers.
Security Matters: A Survey on Adversarial Machine Learning.
Concise Explanations of Neural Networks using Adversarial Training.
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation.
MeshAdv: Adversarial Meshes for Visual Recognition.
Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only.
Analyzing the Noise Robustness of Deep Neural Networks.
The Adversarial Attack and Detection under the Fisher Information Metric.
Limitations of adversarial robustness: strong No Free Lunch Theorem.
Efficient Two-Step Adversarial Defense for Deep Neural Networks.
Combinatorial Attacks on Binarized Neural Networks.
Average Margin Regularization for Classifiers.
Improved Generalization Bounds for Robust Learning.
Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness.
Can Adversarially Robust Learning Leverage Computational Hardness?
Adversarial Examples - A Complete Characterisation of the Phenomenon.
Link Prediction Adversarial Attack.
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network.
Improving the Generalization of Adversarial Training with Domain Adaptation.
Large batch size training of neural networks with adversarial training and second-order information.
Improved robustness to adversarial examples using Lipschitz regularization of the loss.
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks.
CAAD 2018: Generating Transferable Adversarial Examples.
Interpreting Adversarial Robustness: A View from Decision Surface in Input Space.
To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression.
Characterizing Audio Adversarial Examples Using Temporal Dependency.
Adversarial Attacks and Defences: A Survey.
Explainable Black-Box Attacks Against Model-based Authentication.
Adversarial Attacks on Cognitive Self-Organizing Networks: The Challenge and the Way Forward.
Neural Networks with Structural Resistance to Adversarial Attacks.
Fast Geometrically-Perturbed Adversarial Faces.
On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces.
Low Frequency Adversarial Perturbation.
Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR.
Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization.
Unrestricted Adversarial Examples.
Adversarial Binaries for Authorship Identification.
Playing the Game of Universal Adversarial Perturbations.
Efficient Formal Safety Analysis of Neural Networks.
Adversarial Training Towards Robust Multimedia Recommender System.
Generating 3D Adversarial Point Clouds.
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples.
Robustness Guarantees for Bayesian Inference with Gaussian Processes.
Exploring the Vulnerability of Single Shot Module in Object Detectors via Imperceptible Background Patches.
Robust Adversarial Perturbation on Deep Proposal-based Models.
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks.
Query-Efficient Black-Box Attack by Active Learning.
Adversarial Examples: Opportunities and Challenges.
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions.
Isolated and Ensemble Audio Preprocessing Methods for Detecting Adversarial Examples against Automatic Speech Recognition.
Humans can decipher adversarial images.
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure.
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability.
Certified Adversarial Robustness with Additive Noise.
Towards Query Efficient Black-box Attacks: An Input-free Perspective.
Fast Gradient Attack on Network Embedding.
Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples.
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
A Deeper Look at 3D Shape Classifiers.
Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer.
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Examples for Question Answering.
Query Attack via Opposite-Direction Feature: Towards Robust Image Retrieval.
Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models.
Are adversarial examples inevitable?
IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection.
Adversarial Reprogramming of Text Classification Neural Networks.
Bridging machine learning and cryptography in defence against adversarial attacks.
Adversarial Attacks on Node Embeddings.
HASP: A High-Performance Adaptive Mobile Security Enhancement Against Malicious Speech Recognition.
Adversarial Attack Type I: Cheat Classifiers by Significant Changes.
MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks.
DLFuzz: Differential Fuzzing Testing of Deep Learning Systems.
All You Need is "Love": Evading Hate-speech Detection.
Lipschitz regularized Deep Neural Networks generalize and are adversarially robust.
Targeted Nonlinear Adversarial Perturbations in Images and Videos.
Generalisation in humans and deep neural networks.
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge.
Guiding Deep Learning System Testing using Surprise Adequacy.
Analysis of adversarial attacks against CNN-based image forgery detectors.
Is Machine Learning in Power Systems Vulnerable?
Maximal Jacobian-based Saliency Map Attack.
Adversarial Attacks on Deep-Learning Based Radio Signal Classification.
Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection.
Stochastic Combinatorial Ensembles for Defending Against Adversarial Examples.
Reinforcement Learning for Autonomous Defence in Software-Defined Networking.
Mitigation of Adversarial Attacks through Embedded Feature Selection.
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding.
Distributionally Adversarial Attack.
Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection.
Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks.
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer.
Data augmentation using synthetic data for time series classification with deep residual networks.
Adversarial Vision Challenge.
Defense Against Adversarial Attacks with Saak Transform.
Gray-box Adversarial Training.
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models.
Structured Adversarial Attack: Towards General Implementation and Better Interpretability.
Traits & Transferability of Adversarial Examples against Instance Segmentation & Object Detection.
ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples.
Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions.
DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes.
EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report).
Rob-GAN: Generator, Discriminator, and Adversarial Attacker.
A general metric for identifying adversarial images.
Evaluating and Understanding the Robustness of Adversarial Logit Pairing.
HiDDeN: Hiding Data With Deep Networks.
Limitations of the Lipschitz constant as a defense against adversarial examples.
Unbounded Output Networks for Classification.
Contrastive Video Representation Learning via Adversarial Perturbations.
Simultaneous Adversarial Training - Learn from Others Mistakes.
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors.
Physical Adversarial Examples for Object Detectors.
Harmonic Adversarial Attack Method.
Gradient Band-based Adversarial Training for Generalized Attack Immunity of A3C Path Finding.
Motivating the Rules of the Game for Adversarial Example Research.
Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions.
Online Robust Policy Learning in the Presence of Unknown Adversaries.
Manifold Adversarial Learning.
Query-Efficient Hard-label Black-box Attack:An Optimization-based Approach.
With Friends Like These, Who Needs Adversaries?
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks.
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees.
Attack and defence in cellular decision-making: lessons from machine learning.
Adaptive Adversarial Attack on Scene Text Recognition.
Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks.
Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness.
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations.
Local Gradients Smoothing: Defense against localized adversarial attacks.
Adversarial Robustness Toolbox v1.0.0.
Adversarial Perturbations Against Real-Time Video Classification Systems.
Towards Adversarial Training with Moderate Performance Improvement for Neural Network Classification.
Adversarial Examples in Deep Learning: Characterization and Divergence.
Adversarial Reprogramming of Neural Networks.
Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning.
Customizing an Adversarial Example Generator with Class-Conditional GANs.
Exploring Adversarial Examples: Patterns of One-Pixel Attacks.
Defending Malware Classification Networks Against Adversarial Perturbations with Non-Negative Weight Restrictions.
On Adversarial Examples for Character-Level Neural Machine Translation.
Evaluation of Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) Based Attack Method on MCS 2018 Adversarial Attacks on Black Box Face Recognition System.
Detection based Defense against Adversarial Examples from the Steganalysis Point of View.
Gradient Adversarial Training of Neural Networks.
Combinatorial Testing for Deep Learning Systems.
On the Learning of Deep Local Features for Robust Face Spoofing Detection.
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations.
Non-Negative Networks Against Adversarial Attacks.
Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data.
Hierarchical interpretations for neural network predictions.
Manifold Mixup: Better Representations by Interpolating Hidden States.
Adversarial Attacks on Variational Autoencoders.
Ranking Robustness Under Adversarial Document Manipulations.
Defense Against the Dark Arts: An overview of adversarial example security research and future research directions.
Monge blunts Bayes: Hardness Results for Adversarial Training.
Revisiting Adversarial Risk.
Training Augmentation with Adversarial Examples for Robust Speech Recognition.
Adversarial Attack on Graph Structured Data.
Adversarial Regression with Multiple Learners.
Killing Four Birds with one Gaussian Process: Analyzing Test-Time Attack Vectors on Classification.
DPatch: An Adversarial Patch Attack on Object Detectors.
Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise.
An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks.
PAC-learning in the presence of evasion adversaries.
Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks.
Detecting Adversarial Examples via Key-based Network.
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks.
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders.
Scaling provable adversarial defenses.
Sequential Attacks on Agents for Long-Term Adversarial Goals.
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data.
Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization.
ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio.
Robustifying Models Against Adversarial Attacks by Langevin Dynamics.
Robustness May Be at Odds with Accuracy.
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks.
Adversarial Noise Attacks of Deep Learning Architectures -- Stability Analysis via Sparse Modeled Signals.
Why Botnets Work: Distributed Brute-Force Attacks Need No Synchronization.
Adversarial Examples in Remote Sensing.
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization.
Defending Against Adversarial Attacks by Leveraging an Entire GAN.
Training verified learners with learned verifiers.
Adversarial examples from computational constraints.
Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness.
Anonymizing k-Facial Attributes via Adversarial Perturbations.
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients.
Towards the first adversarially robust neural network model on MNIST.
Adversarially Robust Training through Structured Gradient Regularization.
Adversarial Noise Layer: Regularize Neural Network By Adding Noise.
Adversarial Attacks on Neural Networks for Graph Data.
Constructing Unrestricted Adversarial Examples with Generative Models.
Bidirectional Learning for Robust Neural Networks.
Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference.
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks.
Targeted Adversarial Examples for Black Box Audio Systems.
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models.
Towards Robust Neural Machine Translation.
Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing.
Curriculum Adversarial Training.
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning.
Breaking Transferability of Adversarial Samples with Randomness.
On Visual Hallmarks of Robustness to Adversarial Malware.
Robust Classification with Convolutional Prototype Learning.
Interpretable Adversarial Perturbation in Input Embedding Space for Text.
A Counter-Forensic Method for CNN-Based Camera Model Identification.
Siamese networks for generating adversarial examples.
Concolic Testing for Deep Neural Networks.
How Robust are Deep Neural Networks?
Adversarially Robust Generalization Requires More Data.
Adversarial Regression for Detecting Attacks in Cyber-Physical Systems.
Formal Security Analysis of Neural Networks using Symbolic Intervals.
Towards Fast Computation of Certified Robustness for ReLU Networks.
Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning.
Siamese Generative Adversarial Privatizer for Biometric Data.
Black-box Adversarial Attacks with Limited Queries and Information.
VectorDefense: Vectorization as a Defense to Adversarial Examples.
Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers.
Generating Natural Language Adversarial Examples.
Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size.
Learning More Robust Features with Adversarial Training.
ADef: an Iterative Algorithm to Construct Adversarial Deformations.
Attacking Convolutional Neural Network using Differential Evolution.
Semantic Adversarial Deep Learning.
Simulation-based Adversarial Test Generation for Autonomous Vehicles with Machine Learning Components.
Neural Automated Essay Scoring and Coherence Modeling for Adversarially Crafted Input.
Robust Machine Comprehension Models via Adversarial Training.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks.
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm.
ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector.
On the Limitation of MagNet Defense against $L_1$-based Adversarial Examples.
Adversarial Attacks Against Medical Deep Learning Systems.
Detecting Malicious PowerShell Commands using Deep Neural Networks.
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses.
Adversarial Training Versus Weight Decay.
An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks.
Adaptive Spatial Steganography Based on Probability-Controlled Adversarial Examples.
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations.
Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks.
Adversarial Attacks and Defences Competition.
Security Consideration For Deep Learning-Based Image Forensics.
Defending against Adversarial Images using Basis Functions Transformations.
The Effects of JPEG and JPEG2000 Compression on Attacks using Adversarial Examples.
Bypassing Feature Squeezing by Increasing Adversary Strength.
On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples.
Clipping free attacks against artificial neural networks.
Security Theater: On the Vulnerability of Classifiers to Exploratory Attacks.
A Dynamic-Adversarial Mining Approach to the Security of Machine Learning.
An Overview of Vulnerabilities of Voice Controlled Systems.
Generalizability vs. Robustness: Adversarial Examples for Medical Imaging.
CNN Based Adversarial Embedding with Minimum Alteration for Image Steganography.
Detecting Adversarial Perturbations with Saliency.
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization.
Understanding Measures of Uncertainty for Adversarial Example Detection.
Adversarial Defense based on Structure-to-Signal Autoencoders.
Task-specific Deep LDA pruning of neural networks.
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems.
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks.
Improving Transferability of Adversarial Examples with Input Diversity.
A Dual Approach to Scalable Verification of Deep Networks.
Adversarial Logit Pairing.
Semantic Adversarial Examples.
Large Margin Deep Networks for Classification.
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples.
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning.
Invisible Mask: Practical Attacks on Face Recognition with Infrared.
Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training.
Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.
Combating Adversarial Attacks Using Sparse Representations.
Detecting Adversarial Examples via Neural Fingerprinting.
Detecting Adversarial Examples - A Lesson from Multimedia Forensics.
On Generation of Adversarial Examples using Convex Programming.
Explaining Black-box Android Malware Detection.
Rethinking Feature Distribution for Loss Functions in Image Classification.
Sparse Adversarial Perturbations for Videos.
Stochastic Activation Pruning for Robust Adversarial Defense.
Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples.
Protecting JPEG Images Against Adversarial Attacks.
Understanding and Enhancing the Transferability of Adversarial Examples.
On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples.
Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples.
Max-Mahalanobis Linear Discriminant Analysis Networks.
Deep Defense: Training DNNs with Improved Adversarial Robustness.
Sensitivity and Generalization in Neural Networks: an Empirical Study.
Adversarial vulnerability for any classifier.
Verifying Controllers Against Adversarial Examples with Bayesian Optimization.
Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks.
Hessian-based Analysis of Large Batch Training and Robustness to Adversaries.
Adversarial Examples that Fool both Computer Vision and Time-Limited Humans.
Adversarial Training for Probabilistic Spiking Neural Networks.
L2-Nonexpansive Neural Networks.
Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch.
Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning.
Out-distribution training confers robustness to deep neural networks.
On Lyapunov exponents and adversarial perturbation.
Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression.
Divide, Denoise, and Defend against Adversarial Attacks.
Robustness of Rotation-Equivariant Networks to Adversarial Perturbations.
Are Generative Classifiers More Robust to Adversarial Attacks?
DARTS: Deceiving Autonomous Cars with Toxic Signs.
ASP:A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction.
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks.
Fooling OCR Systems with Adversarial Text Images.
Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks.
Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints.
Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models.
Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples.
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks.
Predicting Adversarial Examples with High Confidence.
Certified Robustness to Adversarial Examples with Differential Privacy.
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection.
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples.
First-order Adversarial Vulnerability of Neural Networks and Input Dimension.
Secure Detection of Image Manipulation by means of Random Feature Selection.
Hardening Deep Neural Networks via Adversarial Model Cascades.
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach.
Robustness of classification ability of spiking neural networks.
Certified Defenses against Adversarial Examples.
Towards an Understanding of Neural Networks in Natural-Image Spaces.
Deflecting Adversarial Attacks with Pixel Deflection.
Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning.
CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition.
Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations.
Adversarial Texts with Gradient Methods.
A Comparative Study of Rule Extraction for Recurrent Neural Networks.
Sparsity-based Defense against Adversarial Attacks on Linear Classifiers.
Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks.
Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers.
A3T: Adversarially Augmented Adversarial Training.
Fooling End-to-end Speaker Verification by Adversarial Examples.
Adversarial Deep Learning for Robust Detection of Binary Encoded Malware.
Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks.
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos.
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality.
Spatially Transformed Adversarial Examples.
Generating Adversarial Examples with Adversarial Networks.
LaVAN: Localized and Visible Adversarial Noise.
Attacking Speaker Recognition With Deep Generative Models.
HeNet: A Deep Learning Approach on Intel® Processor Trace for Effective Exploit Detection.
Denoising Dictionary Learning Against Adversarial Perturbations.
Adversarial Perturbation Intensity Achieving Chosen Intra-Technique Transferability Level for Logistic Regression.
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text.
Shielding Google's language toxicity model against adversarial attacks.
Facial Attributes: Accuracy and Adversarial Robustness.
Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space.
High Dimensional Spaces, Deep Learning and Adversarial Examples.
Did you hear that? Adversarial Examples Against Automatic Speech Recognition.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey.
A General Framework for Adversarial Examples with Objectives.
Gradient Regularization Improves Accuracy of Discriminative Models.
Exploring the Space of Black-box Attacks on Deep Neural Networks.
Building Robust Deep Neural Networks for Road Sign Detection.
The Robust Manifold Defense: Adversarial Training using Generative Models.
Android Malware Detection using Deep Learning on API Method Sequences.
Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger.
Query-limited Black-box Attacks to Classifiers.
Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks.
ReabsNet: Detecting and Revising Adversarial Examples.
Note on Attacking Object Detectors with Adversarial Stickers.
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications.
Query-Efficient Black-box Adversarial Examples (superceded).
Adversarial Examples: Attacks and Defenses for Deep Learning.
HotFlip: White-Box Adversarial Examples for Text Classification.
When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study.
Super-sparse Learning in Similarity Spaces.
Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models.
DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation.
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models.
Training Ensembles to Detect Adversarial Examples.
Robust Deep Reinforcement Learning with Adversarial Attacks.
NAG: Network for Adversary Generation.
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning.
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser.
Adversarial Examples that Fool Detectors.
Exploring the Landscape of Spatial Robustness.
Generative Adversarial Perturbations.
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning.
Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems.
Improving Network Robustness against Adversarial Attacks with Compact Convolution.
Towards Robust Neural Networks via Random Self-ensemble.
Where Classification Fails, Interpretation Rises.
Measuring the tendency of CNNs to Learn Surface Statistical Regularities.
Adversary Detection in Neural Networks via Persistent Homology.
On the Robustness of Semantic Segmentation Models to Adversarial Attacks.
Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation.
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients.
Geometric robustness of deep networks: analysis and improvement.
Safer Classification by Synthesis.
MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples.
Adversarial Phenomenon in the Eyes of Bayesian Deep Learning.
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training.
Evaluating Robustness of Neural Networks with Mixed Integer Programming.
Adversarial Attacks Beyond the Image Space.
How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models.
Enhanced Attacks on Defensively Distilled Deep Neural Networks.
Defense against Universal Adversarial Perturbations.
The best defense is a good offense: Countering black box attacks by predicting slightly wrong labels.
Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples.
Crafting Adversarial Examples For Speech Paralinguistics Applications.
Intriguing Properties of Adversarial Examples.
Mitigating Adversarial Effects Through Randomization.
HyperNetworks with statistical filtering for defending adversarial examples.
Towards Reverse-Engineering Black-Box Neural Networks.
The (Un)reliability of saliency methods.
Provable defenses against adversarial examples via the convex outer adversarial polytope.
Attacking Binarized Neural Networks.
Countering Adversarial Images using Input Transformations.
Conditional Variance Penalties and Domain Shift Robustness.
Generating Natural Adversarial Examples.
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples.
Attacking the Madry Defense Model with $L_1$-based Adversarial Examples.
Certifying Some Distributional Robustness with Principled Adversarial Training.
Interpretation of Neural Networks is Fragile.
Adversarial Detection of Flash Malware: Limitations and Open Issues.
mixup: Beyond Empirical Risk Minimization.
One pixel attack for fooling deep neural networks.
Feature-Guided Black-Box Safety Testing of Deep Neural Networks.
Boosting Adversarial Attacks with Momentum.
Game-Theoretic Design of Secure and Resilient Distributed Support Vector Machines with Adversaries.
Standard detectors aren't (currently) fooled by physical adversarial stop signs.
Verification of Binarized Neural Networks via Inter-Neuron Factoring.
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight.
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks.
Provably Minimally-Distorted Adversarial Examples.
DR.SGX: Hardening SGX Enclaves against Cache Attacks with Data Location Randomization.
Output Range Analysis for Deep Neural Networks.
Fooling Vision and Language Models Despite Localization and Attention Mechanism.
Verifying Properties of Binarized Deep Neural Networks.
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification.
A Learning and Masking Approach to Secure Learning.
Models and Framework for Adversarial Attacks on Complex Adaptive Systems.
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples.
Art of singular vectors and universal adversarial perturbations.
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks.
Towards Proving the Adversarial Robustness of Deep Neural Networks.
DeepFense: Online Accelerated Defense Against Adversarial Deep Learning.
Security Evaluation of Pattern Classifiers under Attack.
On Security and Sparsity of Linear Classifiers for Adversarial Settings.
Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin.
Practical Attacks Against Graph-based Clustering.
DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars.
Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features.
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid.
CNN Fixations: An unraveling approach to visualize the discriminative image regions.
Evasion Attacks against Machine Learning at Test Time.
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.
Learning Universal Adversarial Perturbations with Generative Models.
Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API.
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models.
Cascade Adversarial Machine Learning Regularized with a Unified Embedding.
Adversarial Robustness: Softmax versus Openmax.
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning.
Robust Physical-World Attacks on Deep Learning Models.
Synthesizing Robust Adversarial Examples.
Adversarial Examples for Evaluating Reading Comprehension Systems.
Confidence estimation in Deep Neural networks via density modelling.
Efficient Defenses Against Adversarial Attacks.
Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers.
Fast Feature Fool: A data independent approach to universal adversarial perturbations.
APE-GAN: Adversarial Perturbation Elimination with GAN.
Houdini: Fooling Deep Structured Prediction Models.
Foolbox: A Python toolbox to benchmark the robustness of machine learning models.
NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles.
A Survey on Resilient Machine Learning.
Towards Crafting Text Adversarial Samples.
UPSET and ANGRI : Breaking High Performance Image Classifiers.
Comparing deep neural networks against humans: object recognition when the signal gets weaker.
Towards Deep Learning Models Resistant to Adversarial Attacks.
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong.
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples.
Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation.
Towards Robust Detection of Adversarial Examples.
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples.
MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks.
Analysis of universal adversarial perturbations.
Classification regions of deep neural networks.
MagNet: a Two-Pronged Defense against Adversarial Examples.
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation.
Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction.
Black-Box Attacks against RNN based Malware Detection Algorithms.
Regularizing deep networks using efficient layerwise adversarial training.
Evading Classifiers by Morphing in the Dark.
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods.
Ensemble Adversarial Training: Attacks and Defenses.
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense.
DeepXplore: Automated Whitebox Testing of Deep Learning Systems.
Delving into adversarial attacks on deep policies.
Extending Defensive Distillation.
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN.
Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression.
Detecting Adversarial Samples Using Density Ratio Estimates.
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection.
Parseval Networks: Improving Robustness to Adversarial Examples.
Deep Text Classification Can be Fooled.
Universal Adversarial Perturbations Against Semantic Image Segmentation.
Adversarial and Clean Data Are Not Twins.
Google's Cloud Vision API Is Not Robust To Noise.
The Space of Transferable Adversarial Examples.
Enhancing Robustness of Machine Learning Systems via Data Transformations.
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks.
Comment on "Biologically inspired protection of deep networks from adversarial attacks".
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks.
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly.
Adversarial Transformation Networks: Learning to Generate Adversarial Examples.
Biologically inspired protection of deep networks from adversarial attacks.
Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos.
Adversarial Examples for Semantic Segmentation and Object Detection.
Self corrective Perturbations for Semantic Segmentation and Classification.
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains.
On the Limitation of Convolutional Neural Networks in Recognizing Negative Images.
Fraternal Twins: Unifying Attacks on Machine Learning and Digital Watermarking.
Blocking Transferability of Adversarial Examples in Black-Box Learning Systems.
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents.
Adversarial Examples for Semantic Image Segmentation.
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components.
Detecting Adversarial Samples from Artifacts.
Deceiving Google's Perspective API Built for Detecting Toxic Comments.
Robustness to Adversarial Examples through an Ensemble of Specialists.
Adversarial examples for generative models.
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples.
On the (Statistical) Detection of Adversarial Examples.
Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN.
On Detecting Adversarial Perturbations.
Adversarial Attacks on Neural Network Policies.
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks.
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks.
Dense Associative Memory is Robust to Adversarial Inputs.
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics.
Simple Black-Box Adversarial Perturbations for Deep Networks.
Learning Adversary-Resistant Deep Neural Networks.
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples.
Adversarial Images for Variational Autoencoders.
Deep Variational Information Bottleneck.
Towards Robust Deep Neural Networks with BANG.
LOTS about Attacking Deep Features.
AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack.
Towards the Science of Security and Privacy in Machine Learning.
Delving into Transferable Adversarial Examples and Black-box Attacks.
Adversarial Machine Learning at Scale.
Universal adversarial perturbations.
Safety Verification of Deep Neural Networks.
Are Accuracy and Robustness Correlated?
Assessing Threat of Adversarial Examples on Deep Neural Networks.
Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks.
Adversary Resistant Deep Neural Networks with an Application to Malware Detection.
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library.
Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems.
Randomized Prediction Games for Adversarial Machine Learning.
Robustness of classifiers: from adversarial to random noise.
A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples.
Towards Evaluating the Robustness of Neural Networks.
A study of the effect of JPG compression on adversarial images.
Early Methods for Detecting Adversarial Images.
On the Effectiveness of Defensive Distillation.
Defensive Distillation is Not Robust to Adversarial Examples.
Adversarial examples in the physical world.
Adversarial Perturbations Against Deep Neural Networks for Malware Classification.
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.
Measuring Neural Net Robustness with Constraints.
Are Facial Attributes Adversarially Robust?
Adversarial Diversity and Hard Positive Generation.
Crafting Adversarial Input Sequences for Recurrent Neural Networks.
Improving the Robustness of Deep Neural Networks via Stability Training.
A General Retraining Framework for Scalable Adversarial Classification.
Suppressing the Unusual: towards Robust CNNs using Symmetric Activation Functions.
Practical Black-Box Attacks against Machine Learning.
Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms.
Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization.
The Limitations of Deep Learning in Adversarial Settings.
A Unified Gradient Regularization Family for Adversarial Examples.
Manifold Regularized Deep Neural Networks using Adversarial Examples.
Robust Convolutional Neural Networks under Adversarial Noise.
Foveation-based Mechanisms Alleviate Adversarial Examples.
Towards Open Set Deep Networks.
Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization.
Adversarial Manipulation of Deep Representations.
DeepFool: a simple and accurate method to fool deep neural networks.
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks.
Learning with a Strong Adversary.
Exploring the Space of Adversarial Images.
Improving Back-Propagation by Adding an Adversarial Gradient.
Deep Learning and Music Adversaries.
Analysis of classifiers' robustness to adversarial perturbations.
Explaining and Harnessing Adversarial Examples.
Towards Deep Neural Network Architectures Robust to Adversarial Examples.
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.
Security Evaluation of Support Vector Machines in Adversarial Environments.
Intriguing properties of neural networks.