It can be hard to stay up to date on published papers in
the field of adversarial examples, where the number of
papers written each year has grown massively.
I have been somewhat religiously keeping track of these
papers for the last few years, and realized it might be
helpful to others if I released this list.
The only requirement I used for selecting papers for this list
is that the paper is primarily about adversarial examples,
or makes extensive use of them.
Due to the sheer quantity of papers, I can't guarantee
that I actually have found all of them.
But I did try.
I may also have included papers that don't match
this requirement (and are about something else instead),
or made inconsistent
judgement calls as to whether a given paper is
mainly an adversarial example paper.
If something is wrong, send me an email and I'll correct it.
Note that this list is completely unfiltered:
everything that mainly presents itself as an adversarial
example paper is listed here, and I pass no judgement of quality.
For a curated list of papers that I think are excellent and
worth reading, see the
Adversarial Machine Learning Reading List.
One final note about the data.
This list updates automatically with new papers, even before I
get a chance to filter through them manually.
I do this filtering roughly twice a week, and it's
then that I remove the entries that aren't related to
adversarial examples. As a result, there may be some
false positives among the most recent few entries.
Each new, unverified entry is annotated with the probability that my
simplistic (but reasonably well calibrated)
bag-of-words classifier assigns to the paper
actually being about adversarial examples.
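If you're curious what such a classifier looks like, here is a minimal sketch of the idea (not my actual code; it assumes scikit-learn and a small hand-labeled set of abstracts):

```python
# A minimal sketch of a calibrated bag-of-words classifier.
# Not the actual model behind this list; the training data is hypothetical.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled abstracts: 1 = about adversarial examples, 0 = not.
abstracts = [
    "we craft adversarial perturbations that fool image classifiers",
    "a certified defense against adversarial examples via smoothing",
    "black-box adversarial attacks on speech recognition systems",
    "we train a transformer language model on a large web corpus",
    "a new optimizer improves convergence on ImageNet training",
    "we study generalization bounds for overparameterized networks",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(
    CountVectorizer(),  # bag-of-words features
    CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=2),
)
model.fit(abstracts, labels)

# Probability that a new paper is actually about adversarial examples.
new = ["imperceptible perturbations evade a deployed malware detector"]
print(model.predict_proba(new)[0, 1])
```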
The full paper list appears below. I've also released a
TXT file (and a TXT file
with abstracts) and a JSON file
with the same data. If you do anything interesting with
this data, I'd be happy to hear what it was.
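For example, filtering the machine-readable data down to the high-confidence entries takes only a few lines (the filename and field names below are hypothetical; check the released files for the actual schema):

```python
# Hypothetical example of working with the released data. I'm assuming
# a JSON list of objects with "title" and "probability" fields; the
# actual schema may differ.
import json

with open("adversarial_papers.json") as f:  # hypothetical filename
    papers = json.load(f)

# Keep only the entries the classifier is confident about.
confident = [p for p in papers if p.get("probability", 1.0) >= 0.9]
for paper in confident[:10]:
    print(paper["title"])
```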
Universal Adversarial Attack on Deep Learning Based Prognostics. (99%)
Balancing detectability and performance of attacks on the control channel of Markov Decision Processes. (98%)
FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack. (93%)
BERT is Robust! A Case Against Synonym-Based Adversarial Examples in Text Classification. (92%)
Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup. (13%)
Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel. (10%)
A Novel Data Encryption Method Inspired by Adversarial Attacks. (99%)
Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder. (99%)
PETGEN: Personalized Text Generation Attack on Deep Sequence Embedding-based Classification Models. (99%)
Dodging Attack Using Carefully Crafted Natural Makeup. (47%)
Avengers Ensemble! Improving Transferability of Authorship Obfuscation. (12%)
ARCH: Efficient Adversarial Regularized Training with Caching. (2%)
Adversarial Bone Length Attack on Action Recognition. (99%)
Randomized Substitution and Vote for Textual Adversarial Example Detection. (99%)
Improving Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator. (99%)
Evolving Architectures with Gradient Misalignment toward Low Adversarial Transferability. (98%)
A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems. (98%)
Adversarial Examples for Evaluating Math Word Problem Solvers. (96%)
PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos. (86%)
SignGuard: Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering. (81%)
Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models. (50%)
Formalizing and Estimating Distribution Inference Risks. (50%)
Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models. (16%)
Adversarially Trained Object Detector for Unsupervised Domain Adaptation. (2%)
How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding. (1%)
Perturbation CheckLists for Evaluating NLG Evaluation Metrics. (1%)
Detecting Safety Problems of Multi-Sensor Fusion in Autonomous Driving. (1%)
TREATED:Towards Universal Defense against Textual Adversarial Attacks. (99%)
CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph. (98%)
RockNER: A Simple Method to Create Adversarial Examples for Evaluating the Robustness of Named Entity Recognition Models. (84%)
Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain. (41%)
Shape-Biased Domain Generalization via Shock Graph Embeddings. (2%)
Source Inference Attacks in Federated Learning. (1%)
RobustART: Benchmarking Robustness on Architecture Design and Training Techniques. (98%)
2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency. (81%)
A Strong Baseline for Query Efficient Attacks in a Black Box Setting. (99%)
Protein Folding Neural Networks Are Not Robust. (99%)
Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification. (99%)
Towards Transferable Adversarial Attacks on Vision Transformers. (99%)
Energy Attack: On Transferring Adversarial Examples. (99%)
Multi-granularity Textual Adversarial Attack with Behavior Cloning. (98%)
Spatially Focused Attack against Spatiotemporal Graph Neural Networks. (81%)
Differential Privacy in Personalized Pricing with Nonparametric Demand Models. (26%)
Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning. (89%)
Robust Optimal Classification Trees Against Adversarial Examples. (80%)
Adversarial Parameter Defense by Multi-Step Risk Minimization. (98%)
POW-HOW: An enduring timing side-channel to evade online malware sandboxes. (10%)
Unpaired Adversarial Learning for Single Image Deraining with Rain-Space Contrastive Constraints. (1%)
Robustness and Generalization via Generative Adversarial Training. (82%)
Trojan Signatures in DNN Weights. (33%)
Automated Robustness with Adversarial Training as a Post-Processing Step. (4%)
Exposing Length Divergence Bias of Textual Matching Models. (2%)
Efficient Combinatorial Optimization for Word-level Adversarial Textual Attack. (98%)
Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning. (2%)
DexRay: A Simple, yet Effective Deep Learning Approach to Android Malware Detection based on Image Representation of Bytecode. (1%)
Real-World Adversarial Examples involving Makeup Application. (99%)
Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness. (99%)
Training Meta-Surrogate Model for Transferable Adversarial Attack. (99%)
SEC4SR: A Security Analysis Platform for Speaker Recognition. (99%)
Risk Assessment for Connected Vehicles under Stealthy Attacks on Vehicle-to-Vehicle Networks. (1%)
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples. (99%)
Impact of Attention on Adversarial Robustness of Image Classification Models. (99%)
Adversarial Robustness for Unsupervised Domain Adaptation. (98%)
Real World Robustness from Systematic Noise. (91%)
Building Compact and Robust Deep Neural Networks with Toeplitz Matrices. (61%)
Towards Improving Adversarial Training of NLP Models. (98%)
Excess Capacity and Backdoor Poisoning. (97%)
Regional Adversarial Training for Better Robust Generalization. (96%)
R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors. (86%)
Proof Transfer for Neural Network Verification. (9%)
Guarding Machine Learning Hardware Against Physical Side-Channel Attacks. (2%)
Morphence: Moving Target Defense Against Adversarial Examples. (99%)
EG-Booster: Explanation-Guided Booster of ML Evasion Attacks. (99%)
DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors. (86%)
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction. (83%)
Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning. (75%)
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. (4%)
Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings. (99%)
Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise. (99%)
Investigating Vulnerabilities of Deep Neural Policies. (99%)
Single Node Injection Attack against Graph Neural Networks. (68%)
Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms. (41%)
Adaptive perturbation adversarial training: based on reinforcement learning. (41%)
How Does Adversarial Fine-Tuning Benefit BERT? (33%)
ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation. (26%)
DuTrust: A Sentiment Analysis Dataset for Trustworthiness Evaluation. (1%)
Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution. (99%)
Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models. (98%)
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks. (82%)
Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights. (99%)
Disrupting Adversarial Transferability in Deep Neural Networks. (98%)
Evaluating the Robustness of Neural Language Models to Input Perturbations. (16%)
Deep learning models are not robust against noise in clinical text. (1%)
Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks. (99%)
A Hierarchical Assessment of Adversarial Severity. (98%)
Physical Adversarial Attacks on an Aerial Imagery Object Detector. (96%)
Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference. (80%)
Detection and Continual Learning of Novel Face Presentation Attacks. (2%)
Adversarially Robust One-class Novelty Detection. (99%)
Backdoor Attacks on Network Certification via Data Poisoning. (98%)
Bridged Adversarial Training. (93%)
Generalized Real-World Super-Resolution through Adversarial Robustness. (93%)
Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE. (99%)
Are socially-aware trajectory prediction models really socially-aware? (92%)
OOWL500: Overcoming Dataset Collection Bias in the Wild. (76%)
StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures. (1%)
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications. (99%)
Semantic-Preserving Adversarial Text Attacks. (99%)
Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks. (99%)
Kryptonite: An Adversarial Attack Using Regional Focus. (99%)
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning. (54%)
SegMix: Co-occurrence Driven Mixup for Semantic Segmentation and Adversarial Robustness. (4%)
Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations. (99%)
Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency. (92%)
Relating CNNs with brain: Challenges and findings. (10%)
A Hard Label Black-box Adversarial Attack Against Graph Neural Networks. (99%)
"Adversarial Examples" for Proof-of-Learning. (98%)
Regularizing (Stabilizing) Deep Learning Based Reconstruction Algorithms. (2%)
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier. (99%)
AdvDrop: Adversarial Attack to DNNs by Dropping Information. (99%)
Integer-arithmetic-only Certified Robustness for Quantized Neural Networks. (98%)
Towards Understanding the Generative Capability of Adversarially Robust Classifiers. (98%)
Detecting and Segmenting Adversarial Graphics Patterns from Images. (93%)
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning. (1%)
Early-exit deep neural networks for distorted images: providing an efficient edge offloading. (1%)
Application of Adversarial Examples to Physical ECG Signals. (99%)
Pruning in the Face of Adversaries. (99%)
ASAT: Adaptively Scaled Adversarial Training in Time Series. (98%)
Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain. (80%)
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better. (99%)
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes. (98%)
MBRS : Enhancing Robustness of DNN-based Watermarking by Mini-Batch of Real and Simulated JPEG Compression. (45%)
Proceedings of the 1st International Workshop on Adaptive Cyber Defense. (1%)
When Should You Defend Your Classifier -- A Game-theoretical Analysis of Countermeasures against Adversarial Examples. (98%)
Adversarial Relighting against Face Recognition. (98%)
Semantic Perturbations with Normalizing Flows for Improved Generalization. (13%)
Coalesced Multi-Output Tsetlin Machines with Clause Sharing. (1%)
Appearance Based Deep Domain Adaptation for the Classification of Aerial Images. (1%)
Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy. (99%)
Interpreting Attributions and Interactions of Adversarial Attacks. (83%)
Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? (62%)
NeuraCrypt is not private. (10%)
On the Opportunities and Risks of Foundation Models. (2%)
Identifying and Exploiting Structures for Reliable Deep Learning. (2%)
Neural Architecture Dilation for Adversarial Robustness. (81%)
Deep Adversarially-Enhanced k-Nearest Neighbors. (74%)
IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance. (1%)
LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis. (1%)
Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks. (99%)
Optical Adversarial Attack. (98%)
Understanding Structural Vulnerability in Graph Convolutional Networks. (96%)
The Forgotten Threat of Voltage Glitching: A Case Study on Nvidia Tegra X2 SoCs. (1%)
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning. (99%)
Deep adversarial attack on target detection systems. (99%)
Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate. (68%)
Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack. (98%)
Attacks against Ranking Algorithms with Text Embeddings: a Case Study on Recruitment Algorithms. (78%)
Are Neural Ranking Models Robust? (4%)
Logic Explained Networks. (1%)
Simple black-box universal adversarial attacks on medical image classification based on deep neural networks. (99%)
On the Effect of Pruning on Adversarial Robustness. (81%)
SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version). (68%)
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. (64%)
UniNet: A Unified Scene Understanding Network and Exploring Multi-Task Relationships through the Lens of Adversarial Attacks. (2%)
Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation. (1%)
Meta Gradient Adversarial Attack. (99%)
On Procedural Adversarial Noise Attack And Defense. (99%)
Enhancing Knowledge Tracing via Adversarial Training. (98%)
Neural Network Repair with Reachability Analysis. (96%)
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks. (92%)
Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning. (82%)
Privacy-Preserving Machine Learning: Methods, Challenges and Directions. (16%)
Explainable AI and susceptibility to adversarial attacks: a case study in classification of breast ultrasound images. (15%)
Jointly Attacking Graph Neural Network and its Explanations. (96%)
Membership Inference Attacks on Lottery Ticket Networks. (33%)
Information Bottleneck Approach to Spatial Attention Learning. (1%)
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles. (80%)
Ensemble Augmentation for Deep Neural Networks Using 1-D Time Series Vibration Data. (2%)
BOSS: Bidirectional One-Shot Synthesis of Adversarial Examples. (99%)
Poison Ink: Robust and Invisible Backdoor Attack. (99%)
Imperceptible Adversarial Examples by Spatial Chroma-Shift. (99%)
Householder Activations for Provable Robustness against Adversarial Attacks. (83%)
Fairness Properties of Face Recognition and Obfuscation Systems. (68%)
Exploring Structure Consistency for Deep Model Watermarking. (10%)
Locally Interpretable One-Class Anomaly Detection for Credit Card Fraud Detection. (1%)
Robust Transfer Learning with Pretrained Language Models through Adapters. (82%)
Semi-supervised Conditional GAN for Simultaneous Generation and Detection of Phishing URLs: A Game theoretic Perspective. (31%)
On the Robustness of Domain Adaption to Adversarial Attacks. (99%)
On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples. (99%)
AdvRush: Searching for Adversarially Robust Neural Architectures. (99%)
The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks. (88%)
DeepFreeze: Cold Boot Attacks and High Fidelity Model Recovery on Commercial EdgeML Device. (69%)
Tutorials on Testing Neural Networks. (1%)
Hybrid Classical-Quantum Deep Learning Models for Autonomous Vehicle Traffic Image Classification Under Adversarial Attack. (98%)
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles. (10%)
Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks. (9%)
Efficacy of Statistical and Artificial Intelligence-based False Information Cyberattack Detection Models for Connected Vehicles. (1%)
Advances in adversarial attacks and defenses in computer vision: A survey. (92%)
Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders. (80%)
An Effective and Robust Detector for Logo Detection. (70%)
Style Curriculum Learning for Robust Medical Image Segmentation. (2%)
Delving into Deep Image Prior for Adversarial Defense: A Novel Reconstruction-based Defense Framework. (99%)
Adversarial Robustness of Deep Code Comment Generation. (99%)
Towards Adversarially Robust and Domain Generalizable Stereo Matching by Rethinking DNN Feature Backbones. (93%)
T$_k$ML-AP: Adversarial Attacks to Top-$k$ Multi-Label Learning. (81%)
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. (67%)
Fair Representation Learning using Interpolation Enabled Disentanglement. (1%)
Who's Afraid of Thomas Bayes? (92%)
Practical Attacks on Voice Spoofing Countermeasures. (86%)
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers. (22%)
Unveiling the potential of Graph Neural Networks for robust Intrusion Detection. (13%)
Feature Importance-aware Transferable Adversarial Attacks. (99%)
Enhancing Adversarial Robustness via Test-time Transformation Ensembling. (98%)
The Robustness of Graph k-shell Structure under Adversarial Attacks. (93%)
Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality. (31%)
Towards robust vision by multi-task learning on monkey visual cortex. (3%)
Imbalanced Adversarial Training with Reweighting. (86%)
Towards Robustness Against Natural Language Word Substitutions. (73%)
Models of Computational Profiles to Study the Likelihood of DNN Metamorphic Test Cases. (67%)
WaveCNet: Wavelet Integrated CNNs to Suppress Aliasing Effect for Noise-Robust Image Classification. (15%)
TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing. (2%)
Towards Black-box Attacks on Deep Learning Apps. (89%)
Poisoning of Online Learning Filters: DDoS Attacks and Countermeasures. (50%)
PDF-Malware: An Overview on Threats, Detection and Evasion Attacks. (8%)
Benign Adversarial Attack: Tricking Algorithm for Goodness. (99%)
Learning to Adversarially Blur Visual Object Tracking. (98%)
Adversarial Attacks with Time-Scale Representations. (96%)
Adversarial training may be a double-edged sword. (99%)
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. (98%)
Stress Test Evaluation of Biomedical Word Embeddings. (73%)
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering. (1%)
A Differentiable Language Model Adversarial Attack on Text Classifiers. (99%)
Structack: Structure-based Adversarial Attacks on Graph Neural Networks. (86%)
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation. (45%)
Free Hyperbolic Neural Networks with Limited Radii. (8%)
On the Certified Robustness for Ensemble Models and Beyond. (99%)
Unsupervised Detection of Adversarial Examples with Model Explanations. (99%)
Membership Inference Attack and Defense for Wireless Signal Classifiers with Deep Learning. (83%)
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks. (75%)
Estimating Predictive Uncertainty Under Program Data Distribution Shift. (1%)
Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack. (1%)
Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients. (98%)
Improved Text Classification via Contrastive Adversarial Training. (84%)
Black-box Probe for Unsupervised Domain Adaptation without Model Transferring. (81%)
Defending against Reconstruction Attack in Vertical Federated Learning. (10%)
Generative Models for Security: Attacks, Defenses, and Opportunities. (10%)
A Tandem Framework Balancing Privacy and Security for Voice User Interfaces. (5%)
Spinning Sequence-to-Sequence Models with Meta-Backdoors. (4%)
On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms. (2%)
Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks. (99%)
A Markov Game Model for AI-based Cyber Security Attack Mitigation. (10%)
Leaking Secrets through Modern Branch Predictor in the Speculative World. (1%)
Discriminator-Free Generative Adversarial Attack. (99%)
Feature-Filter: Detecting Adversarial Examples through Filtering off Recessive Features. (99%)
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition. (98%)
On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples. (80%)
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI. (33%)
Structural Watermarking to Deep Neural Networks via Network Channel Pruning. (11%)
Generative Adversarial Neural Cellular Automata. (1%)
Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units. (1%)
Just Train Twice: Improving Group Robustness without Training Group Information. (1%)
RobustFed: A Truth Inference Approach for Robust Federated Learning. (1%)
BEDS-Bench: Behavior of EHR-models under Distributional Shift--A Benchmark. (9%)
EGC2: Enhanced Graph Classification with Easy Graph Compression. (84%)
Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI. (1%)
Self-Supervised Contrastive Learning with Adversarial Perturbations for Robust Pretrained Language Models. (99%)
Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving. (98%)
ECG-Adv-GAN: Detecting ECG Adversarial Examples with Conditional Generative Adversarial Networks. (92%)
Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks. (80%)
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting. (16%)
Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks. (1%)
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning. (99%)
Conservative Objective Models for Effective Offline Model-Based Optimization. (67%)
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense. (88%)
Using BERT Encoding to Tackle the Mad-lib Attack in SMS Spam Detection. (69%)
Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors. (41%)
What classifiers know what they don't? (1%)
EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks. (99%)
Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising. (99%)
Perceptual-based deep-learning denoiser as a defense against adversarial attacks on ASR systems. (96%)
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning. (81%)
A Closer Look at the Adversarial Robustness of Information Bottleneck Models. (70%)
SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks. (56%)
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks. (76%)
Stateful Detection of Model Extraction Attacks. (2%)
Attack Rules: An Adversarial Approach to Generate Attacks for Industrial Control Systems using Machine Learning. (1%)
Hack The Box: Fooling Deep Learning Abstraction-Based Monitors. (91%)
Identifying Layers Susceptible to Adversarial Attacks. (83%)
HOMRS: High Order Metamorphic Relations Selector for Deep Neural Networks. (81%)
Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis. (22%)
Cyber-Security Challenges in Aviation Industry: A Review of Current and Future Trends. (1%)
Learning to Detect Adversarial Examples Based on Class Scores. (99%)
Resilience of Autonomous Vehicle Object Category Detection to Universal Adversarial Perturbations. (99%)
Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems. (99%)
GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network. (98%)
Towards Robust General Medical Image Segmentation. (83%)
ARC: Adversarially Robust Control Policies for Autonomous Vehicles. (38%)
Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models. (99%)
Improving Model Robustness with Latent Distribution Locally and Globally. (99%)
Analytically Tractable Hidden-States Inference in Bayesian Neural Networks. (50%)
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning. (31%)
Controlled Caption Generation for Images Through Adversarial Attacks. (99%)
Incorporating Label Uncertainty in Understanding Adversarial Robustness. (38%)
RoFL: Attestable Robustness for Secure Federated Learning. (1%)
GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization. (99%)
Self-Adversarial Training incorporating Forgery Attention for Image Forgery Localization. (95%)
ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients. (76%)
On Generalization of Graph Autoencoders with Adversarial Training. (12%)
On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving. (1%)
When and How to Fool Explainable Models (and Humans) with Adversarial Examples. (99%)
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks. (99%)
Adversarial Robustness of Probabilistic Network Embedding for Link Prediction. (87%)
Dealing with Adversarial Player Strategies in the Neural Network Game iNNk through Ensemble Learning. (69%)
Understanding the Security of Deepfake Detection. (33%)
Poisoning Attack against Estimating from Pairwise Comparisons. (15%)
Confidence Conditioned Knowledge Distillation. (10%)
Certifiably Robust Interpretation via Renyi Differential Privacy. (67%)
Mirror Mirror on the Wall: Next-Generation Wireless Jamming Attacks Based on Software-Controlled Surfaces. (1%)
Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity. (99%)
Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples. (99%)
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks. (99%)
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding. (68%)
Spotting adversarial samples for speaker verification by neural vocoders. (26%)
The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification. (16%)
Reinforcement Learning for Feedback-Enabled Cyber Resilience. (10%)
Single-Step Adversarial Training for Semantic Segmentation. (96%)
Explanation-Guided Diagnosis of Machine Learning Evasion Attacks. (82%)
Understanding Adversarial Attacks on Observations in Deep Reinforcement Learning. (80%)
Bi-Level Poisoning Attack Model and Countermeasure for Appliance Consumption Data of Smart Homes. (8%)
Exploring Robustness of Neural Networks through Graph Measures. (8%)
A Context-Aware Information-Based Clone Node Attack Detection Scheme in Internet of Things. (1%)
Understanding and Improving Early Stopping for Learning with Noisy Labels. (1%)
Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges. (99%)
Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions. (99%)
Attack Transferability Characterization for Adversarially Robust Multi-label Classification. (99%)
Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices. (99%)
Bio-Inspired Adversarial Attack Against Deep Neural Networks. (98%)
Do Not Deceive Your Employer with a Virtual Background: A Video Conferencing Manipulation-Detection System. (62%)
The Threat of Offensive AI to Organizations. (54%)
Local Reweighting for Adversarial Training. (22%)
On the Interaction of Belief Bias and Explanations. (15%)
Feature Importance Guided Attack: A Model Agnostic Adversarial Attack. (99%)
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. (99%)
Improving Transferability of Adversarial Patches on Face Recognition with Generative Models. (99%)
Data Poisoning Won't Save You From Facial Recognition. (93%)
Adversarial Robustness of Streaming Algorithms through Importance Sampling. (61%)
Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation. (2%)
Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. (1%)
RAILS: A Robust Adversarial Immune-inspired Learning System. (96%)
Who is Responsible for Adversarial Defense? (93%)
Immuno-mimetic Deep Neural Networks (Immuno-Net). (64%)
ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense. (31%)
Stabilizing Equilibrium Models by Jacobian Regularization. (1%)
Multi-stage Optimization based Adversarial Training. (99%)
The Feasibility and Inevitability of Stealth Attacks. (68%)
On the (Un-)Avoidability of Adversarial Examples. (99%)
Countering Adversarial Examples: Combining Input Transformation and Noisy Training. (99%)
Adversarial Examples in Multi-Layer Random ReLU Networks. (81%)
Teacher Model Fingerprinting Attacks Against Transfer Learning. (2%)
Feature Attributions and Counterfactual Explanations Can Be Manipulated. (1%)
DetectX -- Adversarial Input Detection using Current Signatures in Memristive XBar Arrays. (99%)
Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack. (99%)
Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks. (83%)
On Adversarial Robustness of Synthetic Code Generation. (81%)
NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data. (67%)
Policy Smoothing for Provably Robust Reinforcement Learning. (99%)
Delving into the pixels of adversarial samples. (98%)
Hardness of Samples Is All You Need: Protecting Deep Learning Models Using Hardness of Samples. (98%)
Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier. (91%)
Membership Inference on Word Embedding and Beyond. (38%)
An Alternative Auxiliary Task for Enhancing Image Classification. (11%)
Zero-shot learning approach to adaptive Cybersecurity using Explainable AI. (1%)
Adversarial Examples Make Strong Poisons. (98%)
Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem. (95%)
Generative Model Adversarial Training for Deep Compressed Sensing. (8%)
Attack to Fool and Explain Deep Networks. (99%)
A Stealthy and Robust Fingerprinting Scheme for Generative Models. (47%)
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. (99%)
On the Connections between Counterfactual Explanations and Adversarial Examples. (99%)
Residual Error: a New Performance Measure for Adversarial Robustness. (99%)
The Dimpled Manifold Model of Adversarial Examples in Machine Learning. (98%)
Light Lies: Optical Adversarial Attack. (92%)
BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection. (82%)
Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks. (80%)
Group-Structured Adversarial Training. (68%)
Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks. (45%)
Accumulative Poisoning Attacks on Real-time Data. (22%)
Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. (5%)
Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective. (99%)
Bad Characters: Imperceptible NLP Attacks. (99%)
Adversarial Visual Robustness by Causal Intervention. (99%)
DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs. (99%)
Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning. (92%)
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks. (91%)
Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems. (82%)
Poisoning and Backdooring Contrastive Learning. (70%)
CoCoFuzzing: Testing Neural Code Models with Coverage-Guided Fuzzing. (64%)
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing. (56%)
Real-time Attacks Against Deep Reinforcement Learning Policies. (99%)
Localized Uncertainty Attacks. (99%)
Evaluating the Robustness of Bayesian Neural Networks Against Different Types of Attacks. (67%)
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. (38%)
Explainable AI for Natural Adversarial Images. (13%)
A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness. (2%)
Scaling-up Diverse Orthogonal Convolutional Networks with a Paraunitary Framework. (1%)
Loki: Hardening Code Obfuscation Against Automated Attacks. (1%)
Adversarial Attacks on Deep Models for Financial Transaction Records. (99%)
Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information. (99%)
Towards Adversarial Robustness via Transductive Learning. (80%)
Voting for the right answer: Adversarial defense for speaker verification. (78%)
Detect and remove watermark in deep neural networks via generative adversarial networks. (68%)
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. (13%)
Securing Face Liveness Detection Using Unforgeable Lip Motion Patterns. (12%)
Probabilistic Margins for Instance Reweighting in Adversarial Training. (8%)
CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals. (1%)
PopSkipJump: Decision-Based Attack for Probabilistic Classifiers. (99%)
Audio Attacks and Defenses against AED Systems -- A Practical Study. (99%)
Now You See It, Now You Dont: Adversarial Vulnerabilities in Computational Pathology. (99%)
Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions. (92%)
Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs. (81%)
Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery. (81%)
Partial success in closing the gap between human and machine vision. (15%)
Text Generation with Efficient (Soft) Q-Learning. (2%)
Resilient Control of Platooning Networked Robotic Systems via Dynamic Watermarking. (1%)
Self-training Guided Adversarial Domain Adaptation For Thermal Imagery. (1%)
Code Integrity Attestation for PLCs using Black Box Neural Network Predictions. (1%)
Target Model Agnostic Adversarial Attacks with Query Budgets on Language Understanding Models. (99%)
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks. (99%)
ATRAS: Adversarially Trained Robust Architecture Search. (96%)
Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles. (64%)
Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis. (1%)
HistoTransfer: Understanding Transfer Learning for Histopathology. (1%)
Adversarial Robustness via Fisher-Rao Regularization. (54%)
What can linearized neural networks actually say about generalization? (31%)
FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack. (2%)
Adversarial Robustness through the Lens of Causality. (99%)
Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks. (99%)
Adversarial purification with Score-based generative models. (89%)
Relaxing Local Robustness. (80%)
TDGIA:Effective Injection Attacks on Graph Neural Networks. (76%)
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution. (56%)
CARTL: Cooperative Adversarially-Robust Transfer Learning. (8%)
A Shuffling Framework for Local Differential Privacy. (1%)
Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm. (99%)
Deep neural network loses attention to adversarial images. (99%)
Verifying Quantized Neural Networks using SMT-Based Model Checking. (86%)
Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation. (80%)
An Ensemble Approach Towards Adversarial Robustness. (41%)
Towards an Automated Pipeline for Detecting and Classifying Malware through Machine Learning. (1%)
Fair Classification with Adversarial Perturbations. (1%)
HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks. (99%)
Towards Defending against Adversarial Examples via Attack-Invariant Features. (99%)
Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training. (99%)
Attacking Adversarial Attacks as A Defense. (99%)
We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature. (98%)
Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. (88%)
URLTran: Improving Phishing URL Detection Using Transformers. (10%)
ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs. (3%)
Practical Machine Learning Safety: A Survey and Primer. (2%)
Network insensitivity to parameter noise via adversarial regularization. (2%)
On Improving Adversarial Transferability of Vision Transformers. (99%)
Simulated Adversarial Testing of Face Recognition Models. (99%)
Towards the Memorization Effect of Neural Networks in Adversarial Training. (93%)
Handcrafted Backdoors in Deep Neural Networks. (92%)
Enhancing Robustness of Neural Networks through Fourier Stabilization. (73%)
Provably Robust Detection of Out-of-distribution Data (almost) for free. (1%)
Reveal of Vision Transformers Robustness against Adversarial Attacks. (99%)
Adversarial Attack and Defense in Deep Ranking. (99%)
Position Bias Mitigation: A Knowledge-Aware Graph Model for Emotion Cause Extraction. (89%)
3DB: A Framework for Debugging Computer Vision Models. (45%)
RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models. (11%)
A Primer on Multi-Neuron Relaxation-based Adversarial Robustness Certification. (98%)
Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model. (4%)
Ensemble Defense with Data Diversity: Weak Correlation Implies Strong Robustness. (92%)
Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks. (69%)
RDA: Robust Domain Adaptation via Fourier Adversarial Attacking. (2%)
BO-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization. (99%)
Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness. (93%)
Human-Adversarial Visual Question Answering. (31%)
DOCTOR: A Simple Method for Detecting Misclassification Errors. (1%)
Teaching keyword spotters to spot new keywords with limited examples. (1%)
Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout. (99%)
Imperceptible Adversarial Examples for Fake Image Detection. (99%)
A Little Robustness Goes a Long Way: Leveraging Universal Features for Targeted Transfer Attacks. (99%)
Transferable Adversarial Examples for Anchor Free Object Detection. (99%)
Exploring Memorization in Adversarial Training. (98%)
Defending against Backdoor Attacks in Natural Language Generation. (38%)
Robust Learning via Persistency of Excitation. (22%)
Sneak Attack against Mobile Robotic Networks under Formation Control. (1%)
PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack. (99%)
Towards Robustness of Text-to-SQL Models against Synonym Substitution. (75%)
BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively Inspired Orthographic Adversarial Attacks. (62%)
Adversarial Defense for Automatic Speaker Verification by Self-Supervised Learning. (99%)
Improving Compositionality of Neural Networks by Decoding Representations to Inputs. (54%)
Markpainting: Adversarial Machine Learning meets Inpainting. (12%)
On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study. (9%)
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models. (5%)
Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models. (1%)
Concurrent Adversarial Learning for Large-Batch Training. (1%)
Adaptive Feature Alignment for Adversarial Training. (99%)
QueryNet: An Efficient Attack Framework with Surrogates Carrying Multiple Identities. (99%)
Transferable Sparse Adversarial Attack. (99%)
Adversarial Training with Rectified Rejection. (87%)
Robustifying $\ell_\infty$ Adversarial Training to the Union of Perturbation Models. (82%)
Dominant Patterns: Critical Features Hidden in Deep Neural Networks. (80%)
Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models. (75%)
Gradient-based Data Subversion Attack Against Binary Classifiers. (73%)
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals. (1%)
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores. (1%)
Generating Adversarial Examples with Graph Neural Networks. (99%)
Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice. (98%)
Evaluating Resilience of Encrypted Traffic Classification Against Adversarial Evasion Attacks. (62%)
NoiLIn: Do Noisy Labels Always Hurt Adversarial Training? (26%)
DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows. (12%)
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations. (99%)
Analysis and Applications of Class-wise Robustness in Adversarial Training. (99%)
A Measurement Study on the (In)security of End-of-Life (EoL) Embedded Devices. (2%)
Demotivate adversarial defense in remote sensing. (99%)
AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption. (92%)
Robust Regularization with Adversarial Labelling of Perturbed Samples. (83%)
SafeAMC: Adversarial training for robust modulation recognition models. (83%)
Towards optimally abstaining from prediction. (78%)
Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness. (76%)
Visualizing Representations of Adversarially Perturbed Inputs. (68%)
Chromatic and spatial analysis of one-pixel attacks against an image classifier. (15%)
DeepMoM: Robust Deep Learning With Median-of-Means. (1%)
A BIC based Mixture Model Defense against Data Poisoning Attacks on Classifiers. (84%)
Deep Repulsive Prototypes for Adversarial Robustness. (99%)
Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge. (98%)
Adversarial robustness against multiple $l_p$-threat models at the price of one and how to quickly fine-tune robust models to another threat model. (76%)
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger. (61%)
Fooling Partial Dependence via Data Poisoning. (13%)
Practical Convex Formulation of Robust One-hidden-layer Neural Network Training. (98%)
Adversarial Attack Driven Data Augmentation for Accurate And Robust Medical Image Segmentation. (98%)
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs. (67%)
Robust Value Iteration for Continuous Control Tasks. (9%)
OFEI: A Semi-black-box Android Adversarial Sample Attack Framework Against DLaaS. (99%)
Learning Security Classifiers with Verified Global Robustness Properties. (92%)
Feature Space Targeted Attacks by Statistic Alignment. (82%)
Improved OOD Generalization via Adversarial Training and Pre-training. (12%)
Out-of-Distribution Detection in Dermatology using Input Perturbation and Subset Scanning. (5%)
Every Byte Matters: Traffic Analysis of Bluetooth Wearable Devices. (1%)
Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models. (1%)
Dissecting Click Fraud Autonomy in the Wild. (1%)
Killing Two Birds with One Stone: Stealing Model and Inferring Attribute from BERT-based APIs. (92%)
CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes. (92%)
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters. (12%)
Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems. (99%)
Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation. (98%)
Securing Optical Networks using Quantum-secured Blockchain: An Overview. (1%)
ReLUSyn: Synthesizing Stealthy Attacks for Deep Neural Network Based Cyber-Physical Systems. (81%)
Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks. (76%)
Backdoor Attacks on Self-Supervised Learning. (47%)
Intriguing Properties of Vision Transformers. (8%)
Explainable Enterprise Credit Rating via Deep Feature Crossing Network. (1%)
Simple Transparent Adversarial Examples. (99%)
Anomaly Detection of Test-Time Evasion Attacks using Class-conditional Generative Adversarial Networks. (86%)
Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance. (11%)
TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks. (1%)
Attack on practical speaker verification system using universal adversarial perturbations. (99%)
Local Aggressive Adversarial Attacks on 3D Point Cloud. (99%)
An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks. (76%)
Balancing Robustness and Sensitivity using Feature Contrastive Learning. (15%)
DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator in Cloud-FPGA. (1%)
User Label Leakage from Gradients in Federated Learning. (1%)
Hunter in the Dark: Deep Ensemble Networks for Discovering Anomalous Activity from Smart Networks. (1%)
Sparta: Spatially Attentive and Adversarially Robust Activation. (99%)
Detecting Adversarial Examples with Bayesian Neural Network. (99%)
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks. (98%)
On the Robustness of Domain Constraints. (98%)
Learning and Certification under Instance-targeted Poisoning. (82%)
Towards Robust Vision Transformer. (95%)
Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning. (93%)
An SDE Framework for Adversarial Training, with Convergence and Robustness Analysis. (69%)
Vision Transformers are Robust Learners. (99%)
Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing. (99%)
SoundFence: Securing Ultrasonic Sensors in Vehicles Using Physical-Layer Defense. (2%)
Real-time Detection of Practical Universal Adversarial Perturbations. (99%)
Salient Feature Extractor for Adversarial Defense on Deep Neural Networks. (99%)
High-Robustness, Low-Transferability Fingerprinting of Neural Networks. (9%)
Information-theoretic Evolution of Model Agnostic Global Explanations. (1%)
Iterative Algorithms for Assessing Network Resilience Against Structured Perturbations. (1%)
Stochastic-Shield: A Probabilistic Approach Towards Training-Free Adversarial Defense in Quantized CNNs. (98%)
When Human Pose Estimation Meets Robustness: Adversarial Algorithms and Benchmarks. (5%)
DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks. (1%)
Biometrics: Trust, but Verify. (1%)
AVA: Adversarial Vignetting Attack against Visual Recognition. (99%)
OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack. (70%)
Adversarial Reinforcement Learning in Dynamic Channel Access and Power Control. (2%)
A Statistical Threshold for Adversarial Classification in Laplace Mechanisms. (1%)
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds. (99%)
Improving Adversarial Transferability with Gradient Refining. (99%)
Accuracy-Privacy Trade-off in Deep Ensemble. (4%)
Adversarial examples attack based on random warm restart mechanism and improved Nesterov momentum. (99%)
Examining and Mitigating Kernel Saturation in Convolutional Neural Networks using Negative Images. (1%)
Automated Decision-based Adversarial Attacks. (99%)
Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks. (88%)
Security Concerns on Machine Learning Solutions for 6G Networks in mmWave Beam Prediction. (81%)
Robust Training Using Natural Transformation. (13%)
Learning Image Attacks toward Vision Guided Autonomous Vehicles. (4%)
Combining Time-Dependent Force Perturbations in Robot-Assisted Surgery Training. (1%)
Self-Supervised Adversarial Example Detection by Disentangled Representation. (99%)
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks. (96%)
Certified Robustness to Text Adversarial Attacks by Randomized [MASK]. (93%)
Provable Guarantees against Data Poisoning Using Self-Expansion and Compatibility. (16%)
Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition. (99%)
Uniform Convergence, Adversarial Spheres and a Simple Remedy. (15%)
Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model. (99%)
A Simple and Strong Baseline for Universal Targeted Attacks on Siamese Visual Tracking. (99%)
Understanding Catastrophic Overfitting in Adversarial Training. (92%)
Attestation Waves: Platform Trust via Remote Power Analysis. (1%)
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning. (99%)
Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks. (97%)
Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation. (1%)
A Theoretical-Empirical Approach to Estimating Sample Complexity of DNNs. (1%)
Poisoning the Unlabeled Dataset of Semi-Supervised Learning. (92%)
Broadly Applicable Targeted Data Sample Omission Attacks. (68%)
An Overview of Laser Injection against Embedded Neural Network Models. (2%)
Physical world assistive signals for deep neural network classifiers -- neither defense nor attack. (83%)
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack. (73%)
Intriguing Usage of Applicability Domain: Lessons from Cheminformatics Applied to Adversarial Learning. (99%)
Who's Afraid of Adversarial Transferability? (99%)
Multi-Robot Coordination and Planning in Uncertain and Adversarial Environments. (10%)
GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning. (1%)
Spinner: Automated Dynamic Command Subsystem Perturbation. (1%)
Adversarial Example Detection for DNN Models: A Review. (99%)
A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation. (96%)
On the Adversarial Robustness of Quantized Neural Networks. (75%)
Hidden Backdoors in Human-Centric Language Models. (73%)
One Detector to Rule Them All: Towards a General Deepfake Attack Detection Framework. (62%)
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification. (62%)
Load Oscillating Attacks of Smart Grids: Demand Strategies and Vulnerability Analysis. (2%)
RATT: Leveraging Unlabeled Data to Guarantee Generalization. (1%)
Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense. (99%)
Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks. (99%)
Black-box adversarial attacks using Evolution Strategies. (98%)
IPatch: A Remote Adversarial Patch. (97%)
DeFiRanger: Detecting Price Manipulation Attacks on DeFi Applications. (10%)
FIPAC: Thwarting Fault- and Software-Induced Control-Flow Attacks with ARM Pointer Authentication. (2%)
GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathology Image Classification. (67%)
A neural anisotropic view of underspecification in deep learning. (26%)
Analytical bounds on the local Lipschitz constants of ReLU networks. (12%)
Learning Robust Variational Information Bottleneck with Reference. (5%)
AdvHaze: Adversarial Haze Attack. (99%)
Improved and Efficient Text Adversarial Attacks using Target Information. (97%)
Metamorphic Detection of Repackaged Malware. (91%)
Structure-Aware Hierarchical Graph Pooling using Information Bottleneck. (2%)
Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity. (1%)
Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT. (99%)
Delving into Data: Effectively Substitute Training for Black-box Attack. (99%)
secml-malware: Pentesting Windows Malware Classifiers with Adversarial EXEmples in Python. (99%)
Impact of Spatial Frequency Based Constraints on Adversarial Robustness. (98%)
PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches. (87%)
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks. (22%)
3D Adversarial Attacks Beyond Point Cloud. (99%)
Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors. (80%)
Influence Based Defense Against Data Poisoning Attacks in Online Learning. (99%)
Theoretical Study of Random Noise Defense against Query-Based Black-Box Attacks. (92%)
Evaluating Deception Detection Model Robustness To Linguistic Variation. (82%)
Lightweight Detection of Out-of-Distribution and Adversarial Samples via Channel Mean Discrepancy. (3%)
Improving Neural Silent Speech Interface Models by Adversarial Training. (1%)
Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting. (99%)
Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors. (98%)
Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions. (86%)
SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics. (22%)
Dual Head Adversarial Training. (99%)
Mixture of Robust Experts (MoRE): A Flexible Defense Against Multiple Perturbations. (99%)
Robust Certification for Laplace Learning on Geometric Graphs. (96%)
Jacobian Regularization for Mitigating Universal Adversarial Perturbations. (95%)
Dataset Inference: Ownership Resolution in Machine Learning. (83%)
Adversarial Training for Deep Learning-based Intrusion Detection Systems. (99%)
MixDefense: A Defense-in-Depth Framework for Adversarial Example Detection Based on Statistical and Semantic Analysis. (99%)
MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training. (64%)
Does enhanced shape bias improve neural network robustness to common corruptions? (26%)
Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles. (9%)
Network Defense is Not a Game. (1%)
Staircase Sign Method for Boosting Adversarial Attacks. (99%)
Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models. (99%)
LAFEAT: Piercing Through Adversarial Defenses with Latent Features. (99%)
Removing Adversarial Noise in Class Activation Feature Space. (99%)
Direction-Aggregated Attack for Transferable Adversarial Examples. (99%)
Manipulating SGD with Data Ordering Attacks. (95%)
Improving Adversarial Robustness Using Proxy Distributions. (92%)
Provable Robustness of Adversarial Training for Learning Halfspaces with Noise. (22%)
Protecting the Intellectual Properties of Deep Neural Networks with an Additional Class and Steganographic Images. (11%)
Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning. (1%)
Best Practices for Noise-Based Augmentation to Improve the Performance of Emotion Recognition "In the Wild". (83%)
Attacking Text Classifiers via Sentence Rewriting Sampler. (99%)
Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning Classifiers. (99%)
Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training. (87%)
Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation. (67%)
AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples. (15%)
Fashion-Guided Adversarial Attack on Person Segmentation. (99%)
Towards Variable-Length Textual Adversarial Attacks. (99%)
An Adversarially-Learned Turing Test for Dialog Generation Models. (96%)
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. (81%)
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries. (2%)
Gradient-based Adversarial Attacks against Text Transformers. (99%)
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World. (86%)
Are Multilingual BERT models robust? A Case Study on Adversarial Attacks for Multilingual Question Answering. (12%)
Federated Learning for Malware Detection in IoT Devices. (10%)
Meaningful Adversarial Stickers for Face Recognition in Physical World. (98%)
Orthogonalizing Convolutional Layers with the Cayley Transform. (80%)
Defending against Adversarial Denial-of-Service Attacks. (38%)
Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition. (1%)
Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune. (99%)
Detecting Operational Adversarial Examples for Reliable Deep Learning. (82%)
Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack. (75%)
Sparse Coding Frontend for Robust Neural Networks. (99%)
A Backdoor Attack against 3D Point Cloud Classifiers. (96%)
Plot-guided Adversarial Example Construction for Evaluating Open-domain Story Generation. (56%)
Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation. (50%)
Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack. (1%)
Achieving Model Robustness through Discrete Adversarial Training. (99%)
Fool Me Twice: Entailment from Wikipedia Gamification. (61%)
Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach. (15%)
Disentangled Contrastive Learning for Learning Robust Textual Representations. (11%)
Relating Adversarially Robust Generalization to Flat Minima. (99%)
Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication. (1%)
Learning Sampling Policy for Faster Derivative Free Optimization. (1%)
FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems. (98%)
Explainability-based Backdoor Attacks Against Graph Neural Networks. (15%)
A single gradient step finds adversarial examples on random two-layers neural networks. (10%)
Adversarial Learning Inspired Emerging Side-Channel Attacks and Defenses. (8%)
Universal Adversarial Training with Class-Wise Perturbations. (99%)
Universal Spectral Adversarial Attacks for Deformable Shapes. (81%)
Adversarial Robustness Guarantees for Gaussian Processes. (68%)
The art of defense: letting networks fool the attacker. (64%)
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. (61%)
Improving Robustness of Deep Reinforcement Learning Agents: Environment Attacks based on Critic Networks. (8%)
Sparse Oblique Decision Trees: A Tool to Understand and Manipulate Neural Net Features. (3%)
An Object Detection based Solver for Google's Image reCAPTCHA v2. (1%)
Exploring Targeted Universal Adversarial Perturbations to End-to-end ASR Models. (93%)
Adversarial Robustness under Long-Tailed Distribution. (89%)
Taming Adversarial Robustness via Abstaining. (67%)
Backdoor Attack in the Physical World. (2%)
Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model. (99%)
Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification. (98%)
BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for Text Classification. (96%)
Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses. (74%)
Can audio-visual integration strengthen robustness under multimodal attacks? (68%)
Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. (33%)
Unified Detection of Digital and Physical Face Attacks. (8%)
Beyond Categorical Label Representations for Image Classification. (2%)
Rethinking Perturbations in Encoder-Decoders for Fast Training. (1%)
Adversarial Attack in the Context of Self-driving. (99%)
Reliably fast adversarial training via latent adversarial perturbation. (93%)
Mitigating Gradient-based Adversarial Attacks via Denoising and Compression. (99%)
Gradient-based Adversarial Deep Modulation Classification with Data-driven Subsampling. (93%)
Property-driven Training: All You (N)Ever Wanted to Know About. (26%)
Defending Against Image Corruptions Through Adversarial Augmentations. (92%)
RABA: A Robust Avatar Backdoor Attack on Deep Neural Network. (83%)
Fast-adapting and Privacy-preserving Federated Recommender System. (1%)
TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness. (99%)
Domain Invariant Adversarial Learning. (98%)
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction. (92%)
Towards Evaluating and Training Verifiably Robust Neural Networks. (45%)
Augmenting Zero Trust Architecture to Endpoints Using Blockchain: A Systematic Review. (3%)
Learning from Noisy Labels via Dynamic Loss Thresholding. (1%)
Adversarial Heart Attack: Neural Networks Fooled to Segment Heart Symbols in Chest X-Ray Images. (99%)
Adversarial Attacks and Defenses for Speech Recognition Systems. (99%)
Fast Certified Robust Training via Better Initialization and Shorter Warmup. (86%)
Fast Jacobian-Vector Product for Deep Networks. (22%)
Too Expensive to Attack: A Joint Defense Framework to Mitigate Distributed Attacks for the Internet of Things Grid. (2%)
Digital Forensics vs. Anti-Digital Forensics: Techniques, Limitations and Recommendations. (1%)
On the Robustness of Vision Transformers to Adversarial Examples. (99%)
Class-Aware Robust Adversarial Training for Object Detection. (96%)
PointBA: Towards Backdoor Attacks in 3D Point Cloud. (92%)
Statistical inference for individual fairness. (67%)
Learning Robust Feedback Policies from Demonstrations. (47%)
What Causes Optical Flow Networks to be Vulnerable to Physical Adversarial Attacks. (33%)
Improving robustness against common corruptions with frequency biased models. (1%)
On the Adversarial Robustness of Visual Transformers. (99%)
Lagrangian Objective Function Leads to Improved Unforeseen Attack Generalization in Adversarial Training. (99%)
Enhancing the Transferability of Adversarial Attacks through Variance Tuning. (99%)
ZeroGrad: Mitigating and Explaining Catastrophic Overfitting in FGSM Adversarial Training. (95%)
Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing. (93%)
Fooling LiDAR Perception via Adversarial Trajectory Perturbation. (83%)
Robust Reinforcement Learning under model misspecification. (31%)
Automating Defense Against Adversarial Attacks: Discovery of Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed Models. (16%)
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models. (9%)
Improved Autoregressive Modeling with Distribution Smoothing. (86%)
On the benefits of robust models in modulation recognition. (99%)
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking. (99%)
LiBRe: A Practical Bayesian Approach to Adversarial Detection. (99%)
Cyclic Defense GAN Against Speech Adversarial Attacks. (99%)
Combating Adversaries with Anti-Adversaries. (93%)
On Generating Transferable Targeted Perturbations. (93%)
Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation. (86%)
Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness. (83%)
Visual Explanations from Spiking Neural Networks using Interspike Intervals. (62%)
Unsupervised Robust Domain Adaptation without Source Data. (13%)
Adversarial Attacks are Reversible with Natural Supervision. (99%)
Adversarial Attacks on Deep Learning Based mmWave Beam Prediction in 5G and Beyond. (98%)
MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes. (81%)
Deep-RBF Networks for Anomaly Detection in Automotive Cyber-Physical Systems. (70%)
Orthogonal Projection Loss. (45%)
THAT: Two Head Adversarial Training for Improving Robustness at Scale. (26%)
A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography. (11%)
HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks. (10%)
The Geometry of Over-parameterized Regression and Adversarial Perturbations. (2%)
Synthesize-It-Classifier: Learning a Generative Classifier through Recurrent Self-analysis. (1%)
Spirit Distillation: Precise Real-time Prediction with Insufficient Data. (1%)
Recent Advances in Large Margin Learning. (1%)
Adversarial Feature Stacking for Accurate and Robust Predictions. (99%)
Vulnerability of Appearance-based Gaze Estimation. (97%)
Black-box Detection of Backdoor Attacks with Limited Information and Data. (96%)
Deepfake Forensics via An Adversarial Game. (10%)
Robust and Accurate Object Detection via Adversarial Learning. (98%)
CLIP: Cheap Lipschitz Training of Neural Networks. (96%)
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? (92%)
Leveraging background augmentations to encourage semantic focus in self-supervised contrastive learning. (83%)
RPATTACK: Refined Patch Attack on General Object Detectors. (76%)
NNrepair: Constraint-based Repair of Neural Network Classifiers. (50%)
Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs. (31%)
Improved Estimation of Concentration Under $\ell_p$-Norm Distance Metrics Using Half Spaces. (22%)
ESCORT: Ethereum Smart COntRacTs Vulnerability Detection using Deep Neural Network and Transfer Learning. (1%)
Grey-box Adversarial Attack And Defence For Sentiment Classification. (99%)
Fast Approximate Spectral Normalization for Robust Deep Neural Networks. (98%)
Spatio-Temporal Sparsification for General Robust Graph Convolution Networks. (87%)
RA-BNN: Constructing Robust & Accurate Binary Neural Network to Simultaneously Defend Adversarial Bit-Flip Attack and Improve Accuracy. (75%)
Adversarial Feature Augmentation and Normalization for Visual Recognition. (13%)
Adversarially Optimized Mixup for Robust Classification. (13%)
ExAD: An Ensemble Approach for Explanation-based Adversarial Detection. (99%)
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing. (75%)
Natural Perturbed Training for General Robustness of Neural Network Classifiers. (38%)
Self adversarial attack as an augmentation method for immunohistochemical stainings. (33%)
Boundary Attributions Provide Normal (Vector) Attributions. (15%)
LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack. (99%)
SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems. (93%)
Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions. (86%)
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond. (2%)
Generating Adversarial Computer Programs using Optimized Obfuscations. (99%)
Boosting Adversarial Transferability through Enhanced Momentum. (99%)
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles. (98%)
Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training. (76%)
Model Extraction and Adversarial Transferability, Your BERT is Vulnerable! (69%)
TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation. (61%)
Noise Modulation: Let Your Model Interpret Itself. (54%)
KoDF: A Large-scale Korean DeepFake Detection Dataset. (16%)
Reading Isn't Believing: Adversarial Attacks On Multi-Modal Neurons. (9%)
Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap? (99%)
Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection. (98%)
Improved, Deterministic Smoothing for L1 Certified Robustness. (82%)
Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition. (41%)
Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots. (38%)
Cyber Intrusion Detection by Using Deep Neural Networks with Attack-sharing Loss. (13%)
Adversarial YOLO: Defense Human Detection Patch Attacks via Detecting Adversarial Patches. (92%)
Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation. (75%)
Bio-inspired Robustness: A Review. (70%)
Adversarial Driving: Attacking End-to-End Autonomous Driving Systems. (68%)
Constant Random Perturbations Provide Adversarial Robustness with Minimal Effect on Accuracy. (83%)
Adversarial Training is Not Ready for Robot Learning. (67%)
HDTest: Differential Fuzz Testing of Brain-Inspired Hyperdimensional Computing. (64%)
Understanding invariance via feedforward inversion of discriminatively trained classifiers. (10%)
Meta-Solver for Neural Ordinary Differential Equations. (2%)
Towards Robust Speech-to-Text Adversarial Attack. (99%)
BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks. (98%)
Multi-Discriminator Sobolev Defense-GAN Against Adversarial Attacks for End-to-End Speech Systems. (82%)
Attack as Defense: Characterizing Adversarial Examples using Robustness. (99%)
Generating Unrestricted Adversarial Examples via Three Parameters. (99%)
Simeon -- Secure Federated Machine Learning Through Iterative Filtering. (12%)
Learning Defense Transformers for Counterattacking Adversarial Examples. (99%)
Internal Wasserstein Distance for Adversarial Attack and Defense. (99%)
Game-theoretic Understanding of Adversarially Learned Features. (98%)
Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case. (82%)
Network Environment Design for Autonomous Cyberdefense. (1%)
Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling. (99%)
Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification. (99%)
Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink. (99%)
DAFAR: Detecting Adversaries by Feedback-Autoencoder Reconstruction. (99%)
ReinforceBug: A Framework to Generate Adversarial Textual Examples. (97%)
Multi-Task Federated Reinforcement Learning with Adversaries. (15%)
BODAME: Bilevel Optimization for Defense Against Model Extraction. (8%)
Improving Adversarial Robustness via Channel-wise Activation Suppressing. (99%)
TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack. (92%)
VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples. (67%)
Fine-tuning of Pre-trained End-to-end Speech Recognition with Generative Adversarial Networks. (1%)
Stabilized Medical Image Attacks. (99%)
Revisiting Model's Uncertainty and Confidences for Adversarial Example Detection. (99%)
Practical Relative Order Attack in Deep Ranking. (99%)
BASAR: Black-box Attack on Skeletal Action Recognition. (99%)
Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack. (98%)
Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency. (10%)
Deep Learning for Android Malware Defenses: a Systematic Literature Review. (4%)
Towards Strengthening Deep Learning-based Side Channel Attacks with Mixup. (2%)
Packet-Level Adversarial Network Traffic Crafting using Sequence Generative Adversarial Networks. (99%)
Enhancing Transformation-based Defenses against Adversarial Examples with First-Order Perturbations. (99%)
Contemplating real-world object classification. (81%)
Consistency Regularization for Adversarial Robustness. (47%)
Deep Model Intellectual Property Protection via Deep Watermarking. (1%)
Universal Adversarial Perturbations and Image Spam Classifiers. (99%)
Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain. (99%)
Improving Global Adversarial Robustness Generalization With Adversarially Trained GAN. (99%)
Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and Accuracy. (76%)
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification. (98%)
Hidden Backdoor Attack against Semantic Segmentation Models. (93%)
Cyber Threat Intelligence Model: An Evaluation of Taxonomies, Sharing Standards, and Ontologies within Cyber Threat Intelligence. (13%)
Don't Forget to Sign the Gradients! (10%)
PCP: Preemptive Circuit Padding against Tor circuit fingerprinting. (1%)
Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples. (99%)
WaveGuard: Understanding and Mitigating Audio Adversarial Examples. (99%)
Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack. (99%)
QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval. (99%)
SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain. (99%)
Gradient-Guided Dynamic Efficient Adversarial Training. (96%)
PointGuard: Provably Robust 3D Point Cloud Classification. (92%)
Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods. (12%)
A Novel Framework for Threat Analysis of Machine Learning-based Smart Healthcare Systems. (1%)
Structure-Preserving Progressive Low-rank Image Completion for Defending Adversarial Attacks. (99%)
A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models. (89%)
Shift Invariance Can Reduce Adversarial Robustness. (87%)
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks. (81%)
On the effectiveness of adversarial training against common corruptions. (67%)
Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations. (64%)
A Survey On Universal Adversarial Attack. (99%)
Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training. (99%)
Online Adversarial Attacks. (99%)
Adversarial Examples for Unsupervised Machine Learning Models. (98%)
ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples. (97%)
DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers. (97%)
Fixing Data Augmentation to Improve Adversarial Robustness. (69%)
A Brief Survey on Deep Learning Based Data Hiding, Steganography and Watermarking. (26%)
Group-wise Inhibition based Feature Regularization for Robust Classification. (16%)
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations. (1%)
Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World. (99%)
Brain Programming is Immune to Adversarial Attacks: Towards Accurate and Robust Image Classification using Symbolic Learning. (99%)
Smoothness Analysis of Adversarial Training. (98%)
Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis. (96%)
Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers. (93%)
Adversarial training in communication constrained federated learning. (87%)
Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms. (82%)
Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack. (70%)
A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness. (64%)
Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. (62%)
Model-Agnostic Defense for Lane Detection against Adversarial Attack. (98%)
Robust learning under clean-label attack. (22%)
Effective Universal Unrestricted Adversarial Attacks using a MOE Approach. (98%)
Tiny Adversarial Multi-Objective Oneshot Neural Architecture Search. (93%)
End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering. (73%)
Adversarial Information Bottleneck. (33%)
Neuron Coverage-Guided Domain Generalization. (2%)
What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors.
NEUROSPF: A tool for the Symbolic Analysis of Neural Networks. (68%)
On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adversarial Perturbations.
Do Input Gradients Highlight Discriminative Features?
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints.
Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks.
Understanding Robustness in Teacher-Student Setting: A New Perspective.
Cybersecurity Threats in Connected and Automated Vehicles based Federated Learning Systems.
Confidence Calibration with Bounded Error Using Transformations.
Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks.
Multiplicative Reweighting for Robust Neural Network Optimization.
Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis.
Graphfool: Targeted Label Adversarial Attack on Graph Embedding.
The Sensitivity of Word Embeddings-based Author Detection Models to Semantic-preserving Adversarial Perturbations.
Rethinking Natural Adversarial Examples for Classification Models.
Automated Discovery of Adaptive Attacks on Adversarial Defenses.
Adversarial Robustness with Non-uniform Perturbations.
Non-Singular Adversarial Robustness of Neural Networks.
Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation.
Adversarial Examples Detection beyond Image Space.
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models.
On the robustness of randomized classifiers to adversarial examples.
Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks.
Man-in-The-Middle Attacks and Defense in a Power System Cyber-Physical Testbed.
Sandwich Batch Normalization.
The Effects of Image Distribution and Task on Adversarial Robustness.
A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization.
Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes. (1%)
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning.
Measuring $\ell_\infty$ Attacks by the $\ell_2$ Norm.
A PAC-Bayes Analysis of Adversarial Robustness.
Effective and Efficient Vote Attack on Capsule Networks.
Verifying Probabilistic Specifications with Functional Lagrangians.
Random Projections for Improved Adversarial Robustness.
Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks.
Center Smoothing: Provable Robustness for Functions with Metric-Space Outputs.
Consistent Non-Parametric Methods for Adaptive Robustness.
Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids.
Improving Hierarchical Adversarial Robustness of Deep Neural Networks.
Bridging the Gap Between Adversarial Robustness and Optimization Bias.
Globally-Robust Neural Networks.
A Law of Robustness for Weight-bounded Neural Networks.
Just Noticeable Difference for Machine Perception and Generation of Regularized Adversarial Images with Minimal Perturbation.
Data Profiling for Adversarial Training: On the Ruin of Problematic Data.
Certifiably Robust Variational Autoencoders.
Certified Robustness to Programmable Transformations in LSTMs.
And/or trade-off in artificial neurons: impact on adversarial robustness.
Generating Structured Adversarial Attacks Using Frank-Wolfe Method.
Universal Adversarial Examples and Perturbations for Quantum Classifiers.
Low Curvature Activations Reduce Overfitting in Adversarial Training.
Guided Interpolation for Adversarial Training.
Cross-modal Adversarial Reprogramming.
Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS.
Exploring Adversarial Robustness of Deep Metric Learning.
Adversarial Attack on Network Embeddings via Supervised Network Poisoning.
Perceptually Constrained Adversarial Attacks.
CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification.
Mixed Nash Equilibria in the Adversarial Examples Game.
Adversarial defense for automatic speaker verification by cascaded self-supervised learning models.
UAVs Path Deviation Attacks: Survey and Research Challenges.
Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective.
Universal Adversarial Perturbations for Malware.
Certified Defenses: Why Tighter Relaxations May Hurt Training. (13%)
Adversarially robust deepfake media detection using fused convolutional neural network predictions.
Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy.
RobOT: Robustness-Oriented Testing for Deep Learning Systems.
RoBIC: A benchmark suite for assessing classifiers robustness.
Meta Federated Learning.
Adversarial Robustness: What fools you makes you stronger.
CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection.
Dompteur: Taming Audio Adversarial Examples.
Enhancing Real-World Adversarial Patches through 3D Modeling of Complex Target Scenes.
Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons.
Bayesian Inference with Certifiable Adversarial Robustness.
Target Training Does Adversarial Training Without Adversarial Samples.
Security and Privacy for Artificial Intelligence: Opportunities and Challenges.
"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models.
Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers.
Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis.
Making Paper Reviewing Robust to Bid Manipulation Attacks.
Adversarially Trained Models with Test-Time Covariate Shift Adaptation.
Efficient Certified Defenses Against Patch Attacks on Image Classifiers.
A Real-time Defense against Website Fingerprinting Attacks.
Benford's law: what does it say on adversarial images?
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.
Adversarial example generation with AdaBelief Optimizer and Crop Invariance.
Adversarial Imaging Pipelines.
SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation.
Corner Case Generation and Analysis for Safety Assessment of Autonomous Vehicles.
Model Agnostic Answer Reranking System for Adversarial Question Answering.
Robust Single-step Adversarial Training with Regularizer.
Understanding the Interaction of Adversarial Training with Noisy Labels.
Optimal Transport as a Defense Against Adversarial Attacks.
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks.
Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression.
Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images.
PredCoin: Defense against Query-based Hard-label Attack.
Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review.
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models.
Audio Adversarial Examples: Attacks Using Vocal Masks.
Adversarially Robust Learning with Unknown Perturbation Sets.
IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks.
On Robustness of Neural Semantic Parsers.
Towards Robust Neural Networks via Close-loop Control.
Recent Advances in Adversarial Training for Adversarial Robustness.
Probabilistic Trust Intervals for Out of Distribution Detection. (2%)
Fast Training of Provably Robust Neural Networks by SingleProp.
Towards Speeding up Adversarial Training in Latent Spaces.
Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems.
Deep Deterministic Information Bottleneck with Matrix-based Entropy Functional.
Towards Imperceptible Query-limited Adversarial Attacks with Perceptual Feature Fidelity Loss.
Admix: Enhancing the Transferability of Adversarial Attacks.
Cortical Features for Defense Against Adversarial Audio Attacks.
You Only Query Once: Effective Black Box Adversarial Attacks with Minimal Repeated Queries.
Increasing the Confidence of Deep Neural Networks by Coverage Analysis.
Adversarial Machine Learning Attacks on Condition-Based Maintenance Capabilities.
Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network.
Adversarial Learning with Cost-Sensitive Classes.
Robust Android Malware Detection System against Adversarial Attacks using Q-Learning.
Adversaries in Online Learning Revisited: with applications in Robust Optimization and Adversarial training.
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling.
Meta Adversarial Training against Universal Patches.
Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting.
Improving Neural Network Robustness through Neighborhood Preserving Layers.
Blind Image Denoising and Inpainting Using Robust Hadamard Autoencoders.
Property Inference From Poisoning.
Adversarial Vulnerability of Active Transfer Learning.
Introducing and assessing the explainable AI (XAI) method: SIDU.
SkeletonVis: Interactive Visualization for Understanding Adversarial Attacks on Human Action Recognition Models.
The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs.
Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers.
Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems.
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models.
Diverse Adversaries for Mitigating Bias in Training.
They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors.
Probabilistic Robustness Analysis for DNNs based on PAC Learning.
Generalizing Adversarial Examples by AdaBelief Optimizer.
Few-Shot Website Fingerprinting Attack.
Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning.
A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network.
A Comprehensive Evaluation Framework for Deep Model Robustness.
Error Diffusion Halftoning Against Adversarial Examples.
Partition-Based Convex Relaxations for Certifying the Robustness of ReLU Neural Networks.
Online Adversarial Purification based on Self-Supervision.
Generating Black-Box Adversarial Examples in Sparse Domain.
Adaptive Neighbourhoods for the Discovery of Adversarial Examples.
Self-Adaptive Training: Bridging the Supervised and Self-Supervised Learning.
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary.
Adv-OLM: Generating Textual Adversaries via OLM.
A Person Re-identification Data Augmentation Method with Adversarial Defense Effect.
Adversarial Attacks and Defenses for Speaker Identification Systems.
A general multi-modal data learning method for Person Re-identification. (78%)
Fooling thermal infrared pedestrian detectors in real world using small bulbs.
Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data.
Invariance, encodings, and generalization: learning identity effects with neural networks.
LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition.
A Search-Based Testing Framework for Deep Neural Networks of Source Code Embedding.
PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack.
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization.
What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space.
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks. (1%)
GraphAttacker: A General Multi-Task GraphAttack Framework.
Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions.
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving.
Adversarial Attacks On Multi-Agent Communication.
Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks.
Fundamental Tradeoffs in Distributionally Adversarial Training.
Black-box Adversarial Attacks in Autonomous Vehicle Technology.
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds.
Mining Data Impressions from Deep Models as Substitute for the Unavailable Training Data.
Context-Aware Image Denoising with Auto-Threshold Canny Edge Detection to Suppress Adversarial Perturbation.
Robusta: Robust AutoML for Feature Selection via Reinforcement Learning.
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks.
Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series.
Image Steganography based on Iteratively Adversarial Samples of A Synchronized-directions Sub-image.
Robustness Gym: Unifying the NLP Evaluation Landscape.
Small Input Noise is Enough to Defend Against Query-based Black-box Attacks.
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps.
Random Transformation of Image Brightness for Adversarial Attack.
The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing.
Adversarially Robust and Explainable Model Compression with On-Device Personalization for Text Classification.
Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks.
DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning.
Exploring Adversarial Fake Images on Face Manifold.
The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks.
Robust Text CAPTCHAs Using Adversarial Examples.
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients.
Understanding the Error in Evaluating Adversarial Robustness.
Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks.
Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks.
Local Competition and Stochasticity for Adversarial Robustness in Deep Learning.
Local Black-box Adversarial Attacks: A Query Efficient Approach.
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead.
Improving DGA-Based Malicious Domain Classifiers for Malware Defense with Adversarial Machine Learning.
Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning.
Patch-wise++ Perturbation for Adversarial Targeted Attacks.
Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers.
Beating Attackers At Their Own Games: Adversarial Example Detection Using Adversarial Gradient Directions.
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization.
Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces.
Improving Adversarial Robustness in Weight-quantized Neural Networks.
With False Friends Like These, Who Can Have Self-Knowledge?
Generating Natural Language Attacks in a Hard Label Black Box Setting.
Enhanced Regularizers for Attributional Robustness.
Analysis of Dominant Classes in Universal Adversarial Perturbations.
Person Re-identification with Adversarial Triplet Embedding.
My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism.
Sparse Adversarial Attack to Object Detection.
Assessment of the Relative Importance of different hyper-parameters of LSTM for an IDS.
Robustness, Privacy, and Generalization of Adversarial Training.
A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning.
A Context Aware Approach for Generating Natural Language Attacks.
Exploring Adversarial Examples via Invertible Neural Networks.
Improving the Certified Robustness of Neural Networks via Consistency Regularization.
Adversarial Momentum-Contrastive Pre-Training.
Learning Robust Representation for Clustering through Locality Preserving Variational Discriminative Network.
The Translucent Patch: A Physical and Universal Attack on Object Detectors.
Gradient-Free Adversarial Attacks for Bayesian Neural Networks.
SCOPE CPS: Secure Compiling of PLCs in Cyber-Physical Systems.
Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems.
Learning to Initialize Gradient Descent Using Gradient Descent.
Unadversarial Examples: Designing Objects for Robust Vision.
Multi-shot NAS for Discovering Adversarially Robust Convolutional Neural Architectures at Targeted Capacities.
On Frank-Wolfe Optimization for Adversarial Robustness and Interpretability.
Genetic Adversarial Training of Decision Trees.
Incremental Verification of Fixed-Point Implementations of Neural Networks.
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring.
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks.
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification.
Self-Progressing Robust Training.
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition.
Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines.
On Success and Simplicity: A Second Look at Transferable Targeted Attacks.
Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks.
Sample Complexity of Adversarially Robust Linear Classification on Separated Data.
Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks.
ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries.
AdvExpander: Generating Natural Language Adversarial Examples by Expanding Text.
Adversarially Robust Estimate and Risk Analysis in Linear Regression.
RAILS: A Robust Adversarial Immune-inspired Learning System.
Efficient Training of Robust Decision Trees Against Adversarial Examples.
On the human-recognizability phenomenon of adversarially trained deep image classifiers.
Characterizing the Evasion Attackability of Multi-label Classifiers.
A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks.
On the Limitations of Denoising Strategies as Adversarial Defenses.
FoggySight: A Scheme for Facial Lookup Privacy.
FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems.
Amata: An Annealing Mechanism for Adversarial Training Acceleration.
Disentangled Information Bottleneck.
Adaptive Verifiable Training Using Pairwise Class Similarity.
Robustness Threats of Differential Privacy.
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios.
Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints.
Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model.
Contrastive Learning with Adversarial Perturbations for Conditional Text Generation.
Achieving Adversarial Robustness Requires An Active Teacher.
Query-free Black-box Adversarial Attacks on Graphs.
Random Projections for Adversarial Attack Detection.
Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning.
GNNUnlock: Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking.
Next Wave Artificial Intelligence: Robust, Explainable, Adaptable, Ethical, and Accountable.
DSRNA: Differentiable Search of Robust Neural Architectures.
I-GCN: Robust Graph Convolutional Network via Influence Mechanism.
An Empirical Review of Adversarial Defenses.
Robustness and Transferability of Universal Attacks on Compressed Models.
Geometric Adversarial Attacks and Defenses on 3D Point Clouds.
SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers.
Generating Out of Distribution Adversarial Attack using Latent Space Poisoning.
Detection of Adversarial Supports in Few-shot Classifiers Using Self-Similarity and Filtering.
Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters.
Composite Adversarial Attacks.
Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective.
On 1/n neural representation and robustness.
Locally optimal detection of stochastic targeted universal adversarial perturbations.
A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models.
Using Feature Alignment can Improve Clean Average Precision and Adversarial Robustness in Object Detection.
EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation.
Overcomplete Representations Against Adversarial Videos.
Mitigating the Impact of Adversarial Attacks in Very Deep Networks.
Reinforcement Based Learning on Classification Task Could Yield Better Generalization and Adversarial Accuracy.
A Singular Value Perspective on Model Robustness.
Sparse Fooling Images: Fooling Machine Perception through Unrecognizable Images.
Backpropagating Linearly Improves Transferability of Adversarial Examples.
Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection.
Reprogramming Language Models for Molecular Representation Learning.
Black-box Model Inversion Attribute Inference Attacks on Classification Models.
PAC-Learning for Strategic Classification.
Evaluating adversarial robustness in simulated cerebellum.
Advocating for Multiple Defense Strategies against Adversarial Examples.
Practical No-box Adversarial Attacks against DNNs.
Towards Natural Robustness Against Adversarial Examples.
Unsupervised Adversarially-Robust Representation Learning on Graphs.
Kernel-convoluted Deep Neural Networks with Data Augmentation.
Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning.
FAT: Federated Adversarial Training.
An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks.
Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers.
Attribute-Guided Adversarial Training for Robustness to Natural Perturbations.
From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation.
Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization.
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques.
Essential Features: Content-Adaptive Pixel Discretization to Improve Model Robustness to Adaptive Adversarial Attacks.
How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Adversarial Robustness Across Representation Spaces.
Robustness Out of the Box: Compositional Representations Naturally Defend Against Black-Box Patch Attacks.
Boosting Adversarial Attacks on Neural Networks with Better Optimizer.
One-Pixel Attack Deceives Computer-Assisted Diagnosis of Cancer.
Towards Imperceptible Adversarial Image Patches Based on Network Explanations.
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses.
Just One Moment: Structural Vulnerability of Deep Action Recognition against One Frame Attack.
Architectural Adversarial Robustness: The Case for Deep Pursuit.
SwitchX: Gmin-Gmax Switching for Energy-Efficient and Robust Implementation of Binary Neural Networks on Memristive Xbars.
A Targeted Universal Attack on Graph Convolutional Network.
Cyberbiosecurity: DNA Injection Attack in Synthetic Biology.
Deterministic Certification to Adversarial Attacks via Bernstein Polynomial Approximation.
FaceGuard: A Self-Supervised Defense Against Adversarial Face Images.
3D Invisible Cloak.
SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images.
Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks.
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers.
Voting based ensemble improves robustness of defensive models.
Generalized Adversarial Examples: Attacks and Defenses.
Robust and Natural Physical Adversarial Examples for Object Detectors.
Regularization with Latent Space Virtual Adversarial Training.
Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness.
Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks.
Robust Attacks on Deep Learning Face Recognition in the Physical World.
Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect.
Adversarial Attack on Facial Recognition using Visible Light.
SurFree: a fast surrogate-free black-box attack.
Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption.
Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization.
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization. (80%)
Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning.
Stochastic sparse adversarial attacks.
On the Adversarial Robustness of 3D Point Cloud Classification.
Towards Imperceptible Universal Attacks on Texture Recognition.
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack.
Augmented Lagrangian Adversarial Attacks.
Learnable Boundary Guided Adversarial Training.
Nudge Attacks on Point-Cloud DNNs.
Spatially Correlated Patterns in Adversarial Images.
A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations.
Robust Data Hiding Using Inverse Gradient Attention. (2%)
Are Chess Discussions Racist? An Adversarial Hate Speech Data Set.
Detecting Universal Trigger's Adversarial Attack with Honeypot.
An Experimental Study of Semantic Continuity for Deep Learning Models.
Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams.
Adversarial Threats to DeepFake Detection: A Practical Perspective.
Multi-Task Adversarial Attack.
Latent Adversarial Debiasing: Mitigating Collider Bias in Deep Neural Networks.
Robustified Domain Adaptation.
Adversarial collision attacks on image hashing functions.
Contextual Fusion For Adversarial Robustness.
Adversarial Turing Patterns from Cellular Automata.
Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs.
FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances.
SIENA: Stochastic Multi-Expert Neural Patcher.
Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification.
Generating universal language adversarial examples by understanding and enhancing the transferability across neural models.
Robustness and Generalization to Nearest Categories. (54%)
MAAC: Novel Alert Correlation Method To Detect Multi-step Attack.
Enforcing robust control guarantees within neural network policies.
Adversarially Robust Classification based on GLRT.
Combining GANs and AutoEncoders for Efficient Anomaly Detection.
Extreme Value Preserving Networks.
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations.
Towards Understanding the Regularization of Adversarial Robustness on Neural Networks.
Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks.
Power Side-Channel Attacks on BNN Accelerators in Remote FPGAs. (1%)
Audio-Visual Event Recognition through the lens of Adversary.
Transformer-Encoder Detector Module: Using Context to Improve Robustness to Adversarial Attacks on Object Detection.
Query-based Targeted Action-Space Adversarial Policies on Deep Reinforcement Learning Agents.
Adversarial Robustness Against Image Color Transformation within Parametric Filter Space.
Sparse PCA: Algorithms, Adversarial Perturbations and Certificates.
Adversarial images for the primate brain.
Detecting Adversarial Patches with Class Conditional Reconstruction Networks.
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks.
Solving Inverse Problems With Deep Neural Networks -- Robustness Included?
Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks.
Bridging the Performance Gap between FGSM and PGD Adversarial Training.
Single-Node Attack for Fooling Graph Neural Networks.
A survey on practical adversarial examples for malware classifiers.
A Black-Box Attack Model for Visually-Aware Recommender Systems.
Data Augmentation via Structured Adversarial Perturbations.
Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty.
Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks.
You Do (Not) Belong Here: Detecting DPI Evasion Attacks with Context Learning.
Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks.
Penetrating RF Fingerprinting-based Authentication with a Generative Adversarial Attack.
Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks.
MalFox: Camouflaged Adversarial Malware Example Generation Based on Conv-GANs Against Black-Box Detectors.
A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs.
Adversarial Examples in Constrained Domains.
Frequency-based Automated Modulation Classification in the Presence of Adversaries.
Robust Algorithms for Online Convex Problems via Primal-Dual.
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks.
Vulnerability of the Neural Networks Against Adversarial Examples: A Survey.
MAD-VAE: Manifold Awareness Defense Variational Autoencoder.
Integer Programming-based Error-Correcting Output Code Design for Robust Classification.
Leveraging Extracted Model Adversaries for Improved Black Box Attacks.
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks.
Adversarial Attacks on Optimization based Planners.
Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks.
Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification.
Adversarial Robust Training of Deep Learning MRI Reconstruction Models.
Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine Framework and Its Adversarial Examples.
Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection.
Can the state of relevant neurons in a deep neural network serve as indicators for detecting adversarial attacks?
Reliable Graph Neural Networks via Robust Aggregation.
Passport-aware Normalization for Deep Model Protection.
Robustifying Binary Classification to Adversarial Perturbation.
Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification.
WaveTransform: Crafting Adversarial Examples via Input Decomposition.
Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations.
Object Hider: Adversarial Patch Attack Against Object Detectors.
Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
Transferable Universal Adversarial Perturbations Using Generative Models.
Fast Local Attack: Generating Local Adversarial Examples for Object Detectors.
Anti-perturbation of Online Social Networks by Graph Label Transition.
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes.
GreedyFool: Distortion-Aware Sparse Adversarial Attack.
Robust Pre-Training by Adversarial Contrastive Learning.
Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy.
Versatile Verification of Tree Ensembles.
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound.
Dynamic Adversarial Patch for Evading Object Detection Models.
Asymptotic Behavior of Adversarial Training in Binary Classification.
ATRO: Adversarial Training with a Rejection Option.
Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks.
Stop Bugging Me! Evading Modern-Day Wiretapping Using Adversarial Perturbations.
Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures.
Learn Robust Features via Orthogonal Multi-Path.
Contrastive Learning with Adversarial Examples.
Adversarial Attacks on Binary Image Recognition Systems.
Rewriting Meaningful Sentences via Conditional BERT Sampling and an application on fooling text classifiers.
An Efficient Adversarial Attack for Tree Ensembles.
Adversarial Robustness of Supervised Sparse Coding.
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming.
Defense-guided Transferable Adversarial Attacks.
Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free.
Adversarial Attacks on Deep Algorithmic Trading Policies.
Maximum Mean Discrepancy is Aware of Adversarial Attacks.
Precise Statistical Analysis of Classification Accuracies for Adversarial Training.
Learning Black-Box Attackers with Transferable Priors and Query Feedback.
Class-Conditional Defense GAN Against End-to-End Speech Attacks.
A Distributional Robustness Certificate by Randomized Smoothing.
Preventing Personal Data Theft in Images with Adversarial ML.
Towards Understanding the Dynamics of the First-Order Adversaries.
Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods.
Boosting Gradient for White-Box Adversarial Attacks.
Tight Second-Order Certificates for Randomized Smoothing.
A Survey of Machine Learning Techniques in Adversarial Image Forensics.
Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification.
RobustBench: a standardized adversarial robustness benchmark.
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness.
Verifying the Causes of Adversarial Examples.
When Bots Take Over the Stock Market: Evasion Attacks Against Algorithmic Traders.
FLAG: Adversarial Data Augmentation for Graph Neural Networks.
Poisoned classifiers are not only backdoored, they are fundamentally broken.
FADER: Fast Adversarial Example Rejection.
A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models.
Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing.
Weight-Covariance Alignment for Adversarially Robust Neural Networks.
DPAttack: Diffused Patch Attacks against Universal Object Detection.
Mischief: A Simple Black-Box Attack Against Transformer Architectures.
Learning Robust Algorithms for Online Allocation Problems Using Adversarial Training.
Certifying Neural Network Robustness to Random Input Noise from Samples.
Adversarial Images through Stega Glasses.
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning.
Generalizing Universal Adversarial Attacks Beyond Additive Perturbations.
Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training.
Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness.
Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks.
Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things.
Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability.
Towards Resistant Audio Adversarial Examples.
An Adversarial Attack against Stacked Capsule Autoencoder.
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability.
GreedyFool: Multi-Factor Imperceptibility and Its Application to Designing Black-box Adversarial Example Attack.
Toward Few-step Adversarial Training from a Frequency Perspective.
Higher-Order Certification for Randomized Smoothing.
Linking average- and worst-case perturbation robustness via class selectivity and dimensionality.
Universal Model for 3D Medical Image Analysis.
To be Robust or to be Fair: Towards Fairness in Adversarial Training.
Towards Understanding Pixel Vulnerability under Adversarial Attacks for Images.
Shape-Texture Debiased Neural Network Training.
On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness.
From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks.
EFSG: Evolutionary Fooling Sentences Generator.
Contrast and Classify: Training Robust VQA Models. (2%)
Gradient-based Analysis of NLP Models is Manipulable.
IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration.
Is It Time to Redefine the Classification Task for Deep Neural Networks?
Regularizing Neural Networks via Adversarial Model Perturbation. (1%)
Understanding Spatial Robustness of Deep Neural Networks.
How Does Mixup Help With Robustness and Generalization?
Transcending Transcend: Revisiting Malware Classification with Conformal Evaluation.
Improve Adversarial Robustness via Weight Penalization on Classification Layer.
A Unified Approach to Interpreting and Boosting Adversarial Transferability.
Improved Techniques for Model Inversion Attacks.
Affine-Invariant Robust Training.
Targeted Attention Attack on Deep Learning Models in Road Sign Recognition.
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks.
Hiding the Access Pattern is Not Enough: Exploiting Search Pattern Leakage in Searchable Encryption.
Learning Clusterable Visual Features for Zero-Shot Recognition.
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks.
Revisiting Batch Normalization for Improving Corruption Robustness.
Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features.
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks.
Global Optimization of Objective Functions Represented by ReLU Networks.
CD-UAP: Class Discriminative Universal Adversarial Perturbation.
Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples.
Double Targeted Universal Adversarial Perturbations.
Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples.
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems.
Adversarial attacks on audio source separation.
Visualizing Color-wise Saliency of Black-Box Image Classification Models.
Constraining Logits by Bounded Function for Adversarial Robustness.
Adversarial Patch Attacks on Monocular Depth Estimation Networks.
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models.
Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model.
Adversarial Boot Camp: label free certified robustness in one epoch.
Understanding Classifier Mistakes with Generative Models.
CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation.
Second-Order NLP Adversarial Examples.
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference.
InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective.
Understanding Catastrophic Overfitting in Single-step Adversarial Training.
Downscaling Attack and Defense: Turning What You See Back Into What You Get.
TextAttack: Lessons learned in designing Python frameworks for NLP.
A Study for Universal Adversarial Attacks on Texture Recognition.
Adversarial Attack and Defense of Structured Prediction Models.
Geometry-aware Instance-reweighted Adversarial Training.
Unknown Presentation Attack Detection against Rational Attackers.
Adversarial and Natural Perturbations for General Robustness.
Multi-Step Adversarial Perturbations on Recommender Systems Embeddings.
A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples.
Efficient Robust Training via Backward Smoothing.
Do Wider Neural Networks Really Help Adversarial Robustness?
Note: An alternative proof of the vulnerability of $k$-NN classifiers in high intrinsic dimensionality regions.
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders.
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense.
Query complexity of adversarial attacks.
CorrAttack: Black-box Adversarial Attack with Structured Search.
A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations.
Assessing Robustness of Text Classification through Maximal Safe Radius Computation.
Bag of Tricks for Adversarial Training.
Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning.
Accurate and Robust Feature Importance Estimation under Distribution Shifts.
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks.
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles.
Neural Topic Modeling with Cycle-Consistent Adversarial Training.
Fast Fréchet Inception Distance.
Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability.
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment.
STRATA: Building Robustness with a Simple Method for Generating Black-box Adversarial Attacks for Models of Code.
Graph Adversarial Networks: Protecting Information against Adversarial Attacks.
Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients.
Learned Fine-Tuner for Incongruous Few-Shot Adversarial Learning. (82%)
Learning to Improve Image Compression without Changing the Standard Decoder.
RoGAT: a robust GNN combined revised GAT with adjusted graphs.
Where Does the Robustness Come from? A Study of the Transformation-based Ensemble Defence.
Differentially Private Adversarial Robustness Through Randomized Perturbations.
Beneficial Perturbations Network for Defending Adversarial Examples.
Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision.
Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training.
Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities.
Adversarial Examples in Deep Learning for Multivariate Time Series Regression.
Improving Query Efficiency of Black-box Adversarial Attack.
Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization.
Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining.
Adversarial robustness via stochastic regularization of neural activation sensitivity.
A Partial Break of the Honeypots Defense to Catch Adversarial Attacks.
Semantics-Preserving Adversarial Training.
Robustification of Segmentation Models Against Adversarial Perturbations In Medical Imaging.
Detection of Iterative Adversarial Attacks via Counter Attack.
Torchattacks: A PyTorch Repository for Adversarial Attacks.
What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors.
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time.
Adversarial Attack Based Countermeasures against Deep Learning Side-Channel Attacks.
Uncertainty-aware Attention Graph Neural Network for Defending Adversarial Attacks.
Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers.
Generating Adversarial yet Inconspicuous Patches with a Single Image.
Adversarial Training with Stochastic Weight Average.
Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness.
DeepDyve: Dynamic Verification for Deep Neural Networks.
Feature Distillation With Guided Adversarial Contrastive Learning.
Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version).
Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations.
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing.
Password Strength Signaling: A Counter-Intuitive Defense Against Password Cracking. (1%)
Improving Robustness and Generality of NLP Models Using Disentangled Representations.
Efficient Certification of Spatial Robustness.
OpenAttack: An Open-source Textual Adversarial Attack Toolkit.
It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception.
Making Images Undiscoverable from Co-Saliency Detection.
Adversarial Exposure Attack on Diabetic Retinopathy Imagery.
Bias Field Poses a Threat to DNN-based X-Ray Recognition.
Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations.
EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks.
Robust Decentralized Learning for Neural Networks.
MIRAGE: Mitigating Conflict-Based Cache Attacks with a Practical Fully-Associative Design. (1%)
Certifying Confidence via Randomized Smoothing.
Generating Label Cohesive and Well-Formed Adversarial Claims.
Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks.
Label Smoothing and Adversarial Robustness.
MultAV: Multiplicative Adversarial Videos.
Online Alternate Generator against Adversarial Attacks.
On the Transferability of Minimal Prediction Preserving Inputs in Question Answering.
Large Norms of CNN Layers Do Not Hurt Adversarial Robustness.
Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation.
Analysis of Generalizability of Deep Neural Networks Based on the Complexity of Decision Boundary.
Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View.
Contextualized Perturbation for Textual Adversarial Attack.
Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup.
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems.
Switching Gradient Directions for Query-Efficient Black-Box Adversarial Attacks.
Decision-based Universal Adversarial Attack.
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses.
Input Hessian Regularization of Neural Networks.
Robust Deep Learning Ensemble against Deception.
Hold Tight and Never Let Go: Security of Deep Learning based Automated Lane Centering under Physical-World Attack.
Towards the Quantification of Safety Risks in Deep Neural Networks.
Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing.
Achieving Adversarial Robustness via Sparsity.
Defending Against Multiple and Unforeseen Adversarial Videos.
Robust Neural Machine Translation: Modeling Orthographic and Interpunctual Variation.
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples.
Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection.
Second Order Optimization for Adversarial Robustness and Interpretability.
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent.
A Black-box Adversarial Attack for Poisoning Clustering.
End-to-end Kernel Learning via Generative Random Fourier Features.
SoK: Certified Robustness for Deep Neural Networks.
Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples.
Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models.
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective.
Adversarial attacks on deep learning models for fatty liver disease classification by modification of ultrasound image reconstruction method.
Adversarial Attack on Large Scale Graph.
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing.
A Game Theoretic Analysis of LQG Control under Adversarial Attack.
Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks.
Detection Defense Against Adversarial Attacks with Saliency Map.
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks.
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks.
MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN. (10%)
Yet Meta Learning Can Adapt Fast, It Can Also Break Easily.
Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation.
Open-set Adversarial Defense.
Adversarially Robust Neural Architectures.
Flow-based detection and proxy-based evasion of encrypted malware C2 traffic.
Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors.
Simulating Unknown Target Models for Query-Efficient Black-box Attacks.
Defending against substitute model black box adversarial attacks with the 01 loss.
Adversarial Patch Camouflage against Aerial Detection.
Evasion Attacks to Graph Neural Networks via Influence Function.
MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models.
An Integrated Approach to Produce Robust Models with High Efficiency.
Benchmarking adversarial attacks and defenses for time-series data.
Improving Resistance to Adversarial Deformations by Regularizing Gradients.
A Scene-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video.
GhostBuster: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing.
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds.
On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks.
Adversarial Eigen Attack on Black-Box Models.
Color and Edge-Aware Adversarial Image Perturbations.
Adversarially Robust Learning via Entropic Regularization.
Adversarially Training for Audio Classifiers.
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses.
Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning.
Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks.
An Adversarial Attack Defending System for Securing In-Vehicle Networks.
Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation.
Developing and Defeating Adversarial Examples.
Ptolemy: Architecture Support for Robust Deep Learning.
PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards.
Self-Competitive Neural Networks.
A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples.
Towards adversarial robustness with 01 loss neural networks.
On Attribution of Deepfakes.
$\beta$-Variational Classifiers Under Attack.
Yet Another Intermediate-Level Attack.
Prototype-based interpretation of the functionality of neurons in winner-take-all neural networks.
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training.
On $\ell_p$-norm Robustness of Ensemble Stumps and Trees.
Improving adversarial robustness of deep neural networks by using semantic information.
Direct Adversarial Training for GANs.
Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization.
A Deep Dive into Adversarial Robustness in Zero-Shot Learning.
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems.
Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection.
Robustness Verification of Quantum Classifiers. (81%)
TextDecepter: Hard Label Black Box Attack on Text Classifiers.
Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks.
Relevance Attack on Detectors.
Efficiently Constructing Adversarial Examples by Feature Watermarking.
Defending Adversarial Attacks without Adversarial Attacks in Deep Reinforcement Learning.
On the Generalization Properties of Adversarial Training.
Semantically Adversarial Learnable Filters.
Adversarial Training and Provable Robustness: A Tale of Two Objectives.
Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise.
Defending Adversarial Examples via DNN Bottleneck Reinforcement.
Feature Binding with Category-Dependant MixUp for Semantic Segmentation and Adversarial Robustness.
Semantics-preserving adversarial attacks in NLP.
Revisiting Adversarially Learned Injection Attacks Against Recommender Systems.
Informative Dropout for Robust Representation Learning: A Shape-bias Perspective.
FireBERT: Hardening BERT-based classifiers against adversarial attack.
Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems.
Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks.
Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length.
Visual Attack and Defense on Text.
Optimizing Information Loss Towards Robust Neural Networks.
Adversarial Examples on Object Recognition: A Comprehensive Survey.
Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations.
Stronger and Faster Wasserstein Adversarial Attacks.
One word at a time: adversarial attacks on retrieval models.
Robust Deep Reinforcement Learning through Adversarial Loss.
Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples.
TREND: Transferability based Robust ENsemble Design.
Can Adversarial Weight Perturbations Inject Neural Backdoors?
Entropy Guided Adversarial Model for Weakly Supervised Object Localization.
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks.
Anti-Bandit Neural Architecture Search for Model Defense.
Efficient Adversarial Attacks for Visual Object Tracking.
Trojaning Language Models for Fun and Profit.
Vulnerability Under Adversarial Machine Learning: Bias or Variance?
Physical Adversarial Attack on Vehicle Detector in the Carla Simulator.
Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers.
TEAM: We Need More Powerful Adversarial Examples for DNNs.
Black-box Adversarial Sample Generation Based on Differential Evolution.
A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks.
End-to-End Adversarial White Box Attacks on Music Instrument Classification.
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data.
Stylized Adversarial Defense.
Generative Classifiers as a Basis for Trustworthy Computer Vision.
Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers.
Cassandra: Detecting Trojaned Networks from Adversarial Perturbations.
Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning.
Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training.
Label-Only Membership Inference Attacks.
Attacking and Defending Machine Learning Applications of Public Cloud.
KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real Transfer for Robotics Manipulation.
From Sound Representation to Model Robustness.
Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing.
RANDOM MASK: Towards Robust Convolutional Neural Networks.
Robust Collective Classification against Structural Attacks.
Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection. (1%)
MirrorNet: Bio-Inspired Adversarial Attack for Camouflaged Object Segmentation.
Adversarial Privacy-preserving Filter.
MP3 Compression To Diminish Adversarial Noise in End-to-End Speech Recognition.
Scalable Inference of Symbolic Adversarial Examples.
SOCRATES: Towards a Unified Platform for Neural Network Verification.
Adversarial Training Reduces Information and Improves Transferability.
Robust Machine Learning via Privacy/Rate-Distortion Theory.
Threat of Adversarial Attacks on Face Recognition: A Comprehensive Survey.
Audio Adversarial Examples for Robust Hybrid CTC/Attention Speech Recognition.
Towards Visual Distortion in Black-Box Attacks.
DeepNNK: Explaining deep models and their generalization using polytope interpolation.
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers.
Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks.
Robust Tracking against Adversarial Attacks.
Scaling Polyhedral Neural Network Verification on GPUs.
Semantic Equivalent Adversarial Data Augmentation for Visual Question Answering.
Exploiting vulnerabilities of deep neural networks for privacy protection.
Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency.
Adversarial Immunization for Improving Certifiable Robustness on Graphs.
DDR-ID: Dual Deep Reconstruction Networks Based Image Decomposition for Anomaly Detection.
Towards Quantum-Secure Authentication and Key Agreement via Abstract Multi-Agent Interaction. (1%)
Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of Multimodal Data with Adversarial Defense.
Neural Networks with Recurrent Generative Feedback.
Understanding and Diagnosing Vulnerability under Adversarial Attacks.
Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources.
Accelerated Stochastic Gradient-free and Projection-free Methods.
Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data.
An Empirical Study on the Robustness of NAS based Architectures.
Do Adversarially Robust ImageNet Models Transfer Better?
Learning perturbation sets for robust machine learning.
On Robustness and Transferability of Convolutional Neural Networks. (1%)
Less is More: A privacy-respecting Android malware classifier using Federated Learning. (1%)
A Survey of Privacy Attacks in Machine Learning.
Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels.
A Survey on Security Attacks and Defense Techniques for Connected and Autonomous Vehicles.
Towards robust sensing for Autonomous Vehicles: An adversarial perspective.
Robustifying Reinforcement Learning Agents via Action Space Adversarial Training.
Bounding The Number of Linear Regions in Local Area for Neural Networks with ReLU Activations.
Multitask Learning Strengthens Adversarial Robustness.
Adversarial Examples and Metrics.
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows.
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack.
Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components.
Towards a Theoretical Understanding of the Robustness of Variational Autoencoders.
A simple defense against adversarial attacks on heatmap explanations.
Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations.
Adversarial robustness via robust low rank representations.
Security and Machine Learning in the Real World.
Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes.
Calling Out Bluff: Attacking the Robustness of Automatic Scoring Systems with Simple Adversarial Testing.
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems.
Patch-wise Attack for Fooling Deep Neural Network.
Adversarial jamming attacks and defense strategies via adaptive deep reinforcement learning.
Generating Fluent Adversarial Examples for Natural Languages.
Probabilistic Jacobian-based Saliency Maps Attacks.
Understanding Object Detection Through An Adversarial Lens.
ManiGen: A Manifold Aided Black-box Generator of Adversarial Examples.
Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification. (15%)
Improved Detection of Adversarial Images Using Deep Neural Networks.
Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection.
Generating Adversarial Inputs Using A Black-box Differential Technique.
Improving Adversarial Robustness by Enforcing Local and Global Compactness.
Boundary thickness and robustness in learning models.
Node Copying for Protection Against Graph Neural Network Topology Attacks.
Efficient detection of adversarial images.
How benign is benign overfitting?
Delving into the Adversarial Robustness on Face Recognition.
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations.
A Critical Evaluation of Open-World Machine Learning.
On the relationship between class selectivity, dimensionality, and robustness.
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs.
Robust Learning with Frequency Domain Regularization.
Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability.
Fast Training of Deep Neural Networks Robust to Adversarial Perturbations.
Making Adversarial Examples More Transferable and Indistinguishable.
Detection as Regression: Certified Object Detection by Median Smoothing.
Certifying Decision Trees Against Evasion Attacks by Program Analysis.
On Data Augmentation and Adversarial Risk: An Empirical Analysis.
Understanding and Improving Fast Adversarial Training.
Black-box Adversarial Example Generation with Normalizing Flows.
Adversarial Learning in the Cyber Security Domain.
On Connections between Regularizations for Improving DNN Robustness.
Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors.
Deep Active Learning via Open Set Recognition. (1%)
Towards Robust Deep Learning with Ensemble Networks and Noisy Layers.
Efficient Proximal Mapping of the 1-path-norm of Shallow Networks.
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment.
Decoder-free Robustness Disentanglement without (Additional) Supervision.
Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring.
Trace-Norm Adversarial Examples.
Generating Adversarial Examples with Controllable Non-transferability.
Unifying Model Explainability and Robustness via Machine-Checkable Concepts.
Measuring Robustness to Natural Distribution Shifts in Image Classification.
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks.
Query-Free Adversarial Transfer via Undertrained Surrogates.
Adversarial Example Games.
Robustness against Relational Adversary.
A Le Cam Type Bound for Adversarial Learning and Applications.
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey.
Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures.
Black-box Certification and Learning under Adversarial Perturbations.
Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection.
Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications.
Generating Adversarial Examples with an Optimized Quality.
Harnessing Adversarial Distances to Discover High-Confidence Errors.
Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification.
Legal Risks of Adversarial Machine Learning Research.
Biologically Inspired Mechanisms for Adversarial Robustness.
Improving Uncertainty Estimates through the Relationship with Adversarial Robustness.
FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications.
Geometry-Inspired Top-k Adversarial Perturbations.
Orthogonal Deep Models As Defense Against Black-Box Attacks.
A Unified Framework for Analyzing and Detecting Malicious Examples of DNN Models.
Informative Outlier Matters: Robustifying Out-of-distribution Detection Using Outlier Mining.
Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks.
Smooth Adversarial Training.
Proper Network Interpretability Helps Adversarial Robustness in Classification.
Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability.
Can 3D Adversarial Logos Cloak Humans?
Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks.
Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness.
Defending against adversarial attacks on medical imaging AI system, classification or detection?
Compositional Explanations of Neurons.
Towards Robust Sensor Fusion in Visual Perception.
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks.
RayS: A Ray Searching Method for Hard-label Adversarial Attack.
Learning to Generate Noise for Multi-Attack Robustness.
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models.
Network Moments: Extensions and Sparse-Smooth Attacks.
How do SGD hyperparameters in natural training affect adversarial robustness?
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble.
Stochastic Shortest Path with Adversarially Changing Costs. (1%)
Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples.
A general framework for defining and optimizing robustness.
Analyzing the Real-World Applicability of DGA Classifiers.
Towards an Adversarially Robust Normalization Approach.
Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers.
Adversarial Attacks for Multi-view Deep Models.
Local Competition and Uncertainty for Adversarial Robustness in Deep Learning.
Dissecting Deep Networks into an Ensemble of Generative Classifiers for Robust Predictions.
The Dilemma Between Dimensionality Reduction and Adversarial Robustness.
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples.
Noise or Signal: The Role of Image Backgrounds in Object Recognition.
Adversarial Examples Detection and Analysis with Layer-wise Autoencoders.
Adversarial Defense by Latent Style Transformations.
Disrupting Deepfakes with an Adversarial Attack that Survives Training.
Universal Lower-Bounds on Classification Error under Adversarial Attacks and Random Corruption.
Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning.
Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets.
SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness.
Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs.
On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron.
AdvMind: Inferring Adversary Intent of Black-Box Attacks.
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs.
Total Deep Variation: A Stable Regularizer for Inverse Problems.
DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder.
Improving Adversarial Robustness via Unlabeled Out-of-Domain Data.
Fast & Accurate Method for Bounding the Singular Values of Convolutional Layers with Application to Lipschitz Regularization.
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks.
CG-ATTACK: Modeling the Conditional Distribution of Adversarial Perturbations to Boost Black-Box Attack.
Multiscale Deep Equilibrium Models.
GradAug: A New Regularization Method for Deep Neural Networks.
PatchUp: A Regularization Technique for Convolutional Neural Networks.
On Saliency Maps and Adversarial Robustness.
On the transferability of adversarial examples between convex and 01 loss models.
Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems.
Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks.
Duplicity Games for Deception Design with an Application to Insider Threat Mitigation. (10%)
ClusTR: Clustering Training for Robustness.
The Pitfalls of Simplicity Bias in Neural Networks.
Adversarial Self-Supervised Contrastive Learning.
Defensive Approximation: Securing CNNs using Approximate Computing.
Provably Robust Metric Learning.
Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces.
D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack.
Targeted Adversarial Perturbations for Monocular Depth Prediction.
Large-Scale Adversarial Training for Vision-and-Language Representation Learning.
Smoothed Geometry for Robust Attribution.
Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks.
Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification.
Robustness to Adversarial Attacks in Learning-Enabled Controllers.
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples.
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors.
Achieving robustness in classification using optimal transport with hinge regularization.
Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks. (96%)
Evaluating Graph Vulnerability and Robustness using TIGER.
Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features.
Deterministic Gaussian Averaged Neural Networks.
Interpolation between Residual and Non-Residual Networks.
Towards Certified Robustness of Metric Learning.
Towards an Intrinsic Definition of Robustness for a Classifier.
Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access.
GAP++: Learning to generate target-conditioned adversarial examples.
Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers.
Provable tradeoffs in adversarially robust classification.
Calibrated neighborhood aware confidence measure for deep metric learning.
A Self-supervised Approach for Adversarial Robustness.
Distributional Robustness with IPMs and links to Regularization and GANs.
On Universalized Adversarial and Invariant Perturbations.
Tricking Adversarial Attacks To Fail.
Global Robustness Verification Networks.
Provable trade-offs between private & robust machine learning.
Adversarial Feature Desensitization.
Extensions and limitations of randomized smoothing for robustness guarantees.
Uncertainty-Aware Deep Classifiers using Generative Models.
Unique properties of adversarially trained linear classifiers on Gaussian data.
Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?
Adversarial Image Generation and Training for Deep Convolutional Neural Networks.
Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing.
Sponge Examples: Energy-Latency Attacks on Neural Networks.
Characterizing the Weight Space for Different Learning Models.
Towards Understanding Fast Adversarial Training.
Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning.
Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection.
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization.
Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense.
Perturbation Analysis of Gradient-based Adversarial Attacks.
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start.
Detecting Audio Attacks on ASR Systems with Dropout Uncertainty.
Second-Order Provable Defenses against Adversarial Attacks.
Adversarial Attacks on Reinforcement Learning based Energy Management Systems of Extended Range Electric Delivery Vehicles.
Adversarial Attacks on Classifiers for Eye-based User Modelling.
Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods.
Evaluations and Methods for Explanation through Robustness Analysis.
Estimating Principal Components under Adversarial Perturbations.
Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training.
SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions.
Monocular Depth Estimators: Vulnerabilities and Attacks.
QEBA: Query-Efficient Boundary-Based Blackbox Attack.
Adversarial Attacks and Defense on Texts: A Survey.
Adversarial Robustness of Deep Convolutional Candlestick Learner.
Enhancing Resilience of Deep Learning Networks by Means of Transferable Adversaries.
Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques.
Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models.
Calibrated Surrogate Losses for Adversarially Robust Classification.
Effects of Forward Error Correction on Communications Aware Evasion Attacks.
Investigating a Spectral Deception Loss Metric for Training Machine Learning-based Evasion Attacks.
Generating Semantically Valid Adversarial Questions for TableQA.
Adversarial Feature Selection against Evasion Attacks.
Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification.
SoK: Arms Race in Adversarial Malware Detection.
Adaptive Adversarial Logits Pairing.
ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds.
Adversarial Attack on Hierarchical Graph Pooling Neural Networks.
Frontal Attack: Leaking Control-Flow in SGX via the CPU Frontend. (1%)
Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks.
Revisiting Role of Autoencoders in Adversarial Settings.
Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack.
Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition.
Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning.
Graph Structure Learning for Robust Graph Neural Networks.
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data.
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks.
A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks.
Feature Purification: How Adversarial Training Performs Robust Deep Learning.
Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models.
Bias-based Universal Adversarial Patch Attack for Automatic Check-out.
An Evasion Attack against ML-based Phishing URL Detectors.
Universalization of any adversarial attack using very few test examples.
On Intrinsic Dataset Properties for Adversarial Machine Learning.
Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks.
Defending Your Voice: Adversarial Attack on Voice Conversion.
Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective.
Spatiotemporal Attacks for Embodied Agents.
Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks.
Universal Adversarial Perturbations: A Survey.
Encryption Inspired Adversarial Defense for Visual Classification.
PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields.
How to Make 5G Communications "Invisible": Adversarial Machine Learning for Wireless Privacy.
Practical Traffic-space Adversarial Attacks on Learning-based NIDSs.
Initializing Perturbations in Multiple Directions for Fast Adversarial Training.
Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning.
Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness.
A Deep Learning-based Fine-grained Hierarchical Learning Approach for Robust Malware Classification.
DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses.
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients.
Evaluating Ensemble Robustness Against Adversarial Attacks.
Increased-confidence adversarial examples for improved transferability of Counter-Forensic attacks.
Adversarial examples are useful too!
Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers.
Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data.
It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations.
Class-Aware Domain Adaptation for Improving Adversarial Robustness.
Towards Robustness against Unsuspicious Adversarial Examples.
Efficient Exact Verification of Binarized Neural Networks.
Projection & Probability-Driven Black-Box Attack.
Defending Hardware-based Malware Detectors against Adversarial Attacks.
GraCIAS: Grassmannian of Corrupted Images for Adversarial Security.
Training robust neural networks using Lipschitz bounds.
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder.
Hacking the Waveform: Generalized Wireless Adversarial Deep Learning.
Adversarial Training against Location-Optimized Adversarial Patches.
Measuring Adversarial Robustness using a Voronoi-Epsilon Adversary.
On the Benefits of Models with Perceptually-Aligned Gradients.
Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?
Robust Encodings: A Framework for Combating Adversarial Typos.
Jacks of All Trades, Masters Of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks.
Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models.
Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees.
Defense of Word-level Adversarial Attacks via Random Substitution Encoding.
Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks.
Imitation Attacks and Defenses for Black-box Machine Translation Systems.
Universal Adversarial Attacks with Natural Triggers for Text Classification.
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness.
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability.
TAVAT: Token-Aware Virtual Adversarial Training for Language Understanding.
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP.
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks.
Minority Reports Defense: Defending Against Adversarial Patches.
DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking.
Adversarial Fooling Beyond "Flipping the Label".
"Call me sexist, but...": Revisiting Sexism Detection Using Psychological Scales and Adversarial Samples. (81%)
Improved Image Wasserstein Attacks and Defenses.
Transferable Perturbations of Deep Feature Distributions.
Towards Feature Space Adversarial Attack.
Printing and Scanning Attack for Image Counter Forensics.
Improved Adversarial Training via Learned Optimizer.
Enabling Fast and Universal Audio Adversarial Attack Using Generative Model.
Harnessing adversarial examples with a surprisingly simple defense.
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty.
A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers.
Reevaluating Adversarial Examples in Natural Language.
Adversarial Machine Learning in Network Intrusion Detection Systems.
Adversarial Attacks and Defenses: An Interpretation Perspective.
Evaluating Adversarial Robustness for Deep Neural Network Interpretability using fMRI Decoding.
On Adversarial Examples for Biomedical NLP Tasks.
Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks.
Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation.
RAIN: A Simple Approach for Robust and Accurate Image Classification Networks.
CodNN -- Robust Neural Networks From Coded Classification.
Provably robust deep generative models.
QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks.
Adversarial examples and where to find them.
Scalable Attack on Graph Data by Injecting Vicious Nodes.
Certifying Joint Adversarial Robustness for Model Ensembles.
Probabilistic Safety for Bayesian Neural Networks.
BERT-ATTACK: Adversarial Attack Against BERT Using BERT.
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks.
GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples.
Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning.
Adversarial Training for Large Neural Language Models.
Headless Horseman: Adversarial Attacks on Transfer Learning Models.
Protecting Classifiers From Attacks. A Bayesian Approach.
Single-step Adversarial training with Dropout Scheduling.
Adversarial Attack on Deep Learning-Based Splice Localization.
Shortcut Learning in Deep Neural Networks.
Targeted Attack for Deep Hashing based Retrieval.
A Framework for Enhancing Deep Neural Networks Against Adversarial Malware.
Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers.
On the Optimal Interaction Range for Multi-Agent Systems Under Adversarial Attack.
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions.
Adversarial Robustness Guarantees for Random Deep Neural Networks.
Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples.
Adversarial Weight Perturbation Helps Robust Generalization.
Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension.
Towards Robust Classification with Image Quality Assessment.
Towards Transferable Adversarial Attack against Deep Face Recognition.
PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning.
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images.
Certified Adversarial Robustness for Deep Reinforcement Learning.
Robust Large-Margin Learning in Hyperbolic Space.
Verification of Deep Convolutional Neural Networks Using ImageStars.
Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems.
Luring of transferable adversarial perturbations in the black-box paradigm.
Blind Adversarial Training: Balance Accuracy and Robustness.
Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness.
On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems.
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking.
Towards Evaluating the Robustness of Chinese BERT Classifiers.
Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios.
Learning to fool the speaker recognition.
Universal Adversarial Perturbations Generative Network for Speaker Recognition.
Approximate Manifold Defense Against Multiple Adversarial Perturbations.
Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks.
BAE: BERT-based Adversarial Examples for Text Classification.
Adversarial Robustness through Regularization: A Second-Order Approach.
Evading Deepfake-Image Detectors with White- and Black-Box Attacks.
Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes.
Physically Realizable Adversarial Examples for LiDAR Object Detection.
A Thorough Comparison Study on Adversarial Attacks and Defenses for Common Thorax Disease Classification in Chest X-rays.
Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement.
Adversarial Attacks on Multivariate Time Series.
Improved Gradient based Adversarial Attacks for Quantized Networks.
Towards Deep Learning Models Resistant to Large Perturbations.
Efficient Black-box Optimization of Adversarial Windows Malware with Constrained Manipulations.
Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning.
DaST: Data-free Substitute Training for Adversarial Attacks.
Adversarial Imitation Attack.
Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks.
Challenging the adversarial robustness of DNNs based on error-correcting output codes.
Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples.
Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study.
Defense Through Diverse Directions.
Adversarial Attacks on Monocular Depth Estimation.
Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations.
Adversarial Perturbations Fool Deepfake Detectors.
Understanding the robustness of deep neural network classifiers for breast cancer screening.
Architectural Resilience to Foreground-and-Background Adversarial Noise.
Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression.
Robust Out-of-distribution Detection in Neural Networks.
Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises.
Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning.
Investigating Image Applications Based on Spatial-Frequency Transform and Deep Learning Techniques.
Quantum noise protects quantum classifiers against adversaries.
One Neuron to Fool Them All.
Adversarial Robustness on In- and Out-Distribution Improves Explainability.
Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates.
Face-Off: Adversarial Face Obfuscation.
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations.
Vulnerabilities of Connectionist AI Applications: Evaluation and Defence.
Improving Adversarial Robustness Through Progressive Hardening.
Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles.
Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method.
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior.
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples.
Adversarial Transferability in Wearable Sensor Systems.
Output Diversified Initialization for Adversarial Attacks.
Anomalous Example Detection in Deep Learning: A Survey.
Towards Face Encryption by Generating Adversarial Identity Masks.
Toward Adversarial Robustness via Semi-supervised Robust Training.
Minimum-Norm Adversarial Examples on KNN and KNN-Based Models.
Certified Defenses for Adversarial Patches.
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation.
On the benefits of defining vicinal distributions in latent space.
Towards a Resilient Machine Learning Classifier -- a Case Study of Ransomware Detection.
GeoDA: a geometric framework for black-box adversarial attacks.
When are Non-Parametric Methods Robust?
Topological Effects on Attacks Against Vertex Classification.
Inline Detection of DGA Domains Using Side Information.
ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection.
ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems.
Frequency-Tuned Universal Adversarial Attacks.
SAD: Saliency-based Defenses Against Adversarial Examples.
Using an ensemble color space model to tackle adversarial examples.
Cryptanalytic Extraction of Neural Network Models.
A Survey of Adversarial Learning on Graphs.
Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift.
Towards Probabilistic Verification of Machine Unlearning.
Manifold Regularization for Locally Stable Deep Neural Networks.
Generating Natural Language Adversarial Examples on a Large Scale with Generative Models.
Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world.
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM.
An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods.
On the Robustness of Cooperative Multi-Agent Reinforcement Learning.
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models.
Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles.
No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks.
Dynamic Backdoor Attacks Against Machine Learning Models.
Triple Memory Networks: a Brain-Inspired Method for Continual Learning.
Defense against adversarial attacks on spoofing countermeasures of ASV.
MAB-Malware: A Reinforcement Learning Framework for Attacking Static Malware Classifiers.
Towards Practical Lottery Ticket Hypothesis for Adversarial Training.
Exploiting Verified Neural Networks via Floating Point Numerical Error.
Detection and Recovery of Adversarial Attacks with Injected Attractors.
Adversarial Robustness Through Local Lipschitzness.
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization.
Search Space of Adversarial Perturbations against Image Filters.
Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems.
Colored Noise Injection for Training Adversarially Robust Neural Networks.
Double Backpropagation for Training Autoencoders against Adversarial Attack.
Black-box Smoothing: A Provable Defense for Pretrained Classifiers.
Metrics and methods for robustness evaluation of neural networks with generative models.
Discriminative Multi-level Reconstruction under Compact Latent Space for One-Class Novelty Detection.
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks.
Analyzing Accuracy Loss in Randomized Smoothing Defenses.
Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack.
Type I Attack for Generative Models.
Data-Free Adversarial Perturbations for Practical Black-Box Attack.
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness.
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.
Hidden Cost of Randomized Smoothing.
Adversarial Network Traffic: Towards Evaluating the Robustness of Deep Learning-Based Network Traffic Classification.
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies.
Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models.
Why is the Mahalanobis Distance Effective for Anomaly Detection?
End-to-end Robustness for Sensing-Reasoning Machine Learning Pipelines.
Applying Tensor Decomposition to image for Robustness against Adversarial Attack.
Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT.
Detecting Patch Adversarial Attacks with Image Residuals.
Certified Defense to Image Transformations via Randomized Smoothing.
Are L2 adversarial examples intrinsically different?
Provable Robust Learning Based on Transformation-Specific Smoothing.
Utilizing Network Properties to Detect Erroneous Inputs.
On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks.
FMix: Enhancing Mixed Sample Data Augmentation. (22%)
Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy.
Invariance vs. Robustness of Neural Networks.
Overfitting in adversarially robust deep learning.
MGA: Momentum Gradient Attack on Network.
Improving Robustness of Deep-Learning-Based Image Reconstruction.
Defense-PointNet: Protecting PointNet Against Adversarial Attacks.
Adversarial Attack on Deep Product Quantization Network for Image Retrieval.
Randomization matters. How to defend against strong adversarial attacks.
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization.
Understanding and Mitigating the Tradeoff Between Robustness and Accuracy.
The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization.
G\"odel's Sentence Is An Adversarial Example But Unsolvable.
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks.
(De)Randomized Smoothing for Certifiable Defense against Patch Attacks.
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
Adversarial Ranking Attack and Defense.
A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA.
Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks.
Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space.
Towards Rapid and Robust Adversarial Training with One-Step Attacks.
Precise Tradeoffs in Adversarial Training for Linear Regression.
HYDRA: Pruning Adversarially Robust Neural Networks.
Adversarial Attack on DL-based Massive MIMO CSI Feedback.
Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference.
VisionGuard: Runtime Detection of Adversarial Inputs to Perception Systems.
Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks.
Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition.
Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples.
Polarizing Front Ends for Robust CNNs.
Robustness from Simple Classifiers.
Adversarial Detection and Correction by Matching Prediction Distributions.
UnMask: Adversarial Detection and Defense Through Robust Feature Alignment.
Robustness to Programmable String Transformations via Augmented Abstract Training.
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework.
Adversarial Attacks on Machine Learning Systems for High-Frequency Trading.
Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning.
On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective.
A Bayes-Optimal View on Adversarial Examples.
Towards Certifiable Adversarial Sample Detection.
Boosting Adversarial Training with Hypersphere Embedding.
Bayes-TrEx: Model Transparency by Example.
AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks.
NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion.
On Adaptive Attacks to Adversarial Example Defenses.
Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks.
Randomized Smoothing of All Shapes and Sizes.
Action-Manipulation Attacks Against Stochastic Bandits: Attacks and Defense.
Deflecting Adversarial Attacks.
Block Switching: A Stochastic Approach for Deep Learning Security.
Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent.
TensorShield: Tensor-based Defense Against Adversarial Attacks on Images.
On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples.
Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification.
Scalable Quantitative Verification For Deep Neural Networks.
CAT: Customized Adversarial Training for Improved Robustness.
On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks.
Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack.
Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness.
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality.
Undersensitivity in Neural Reading Comprehension.
Hold me tight! Influence of discriminative features on deep network boundaries.
Blind Adversarial Network Perturbations.
Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets.
Adversarial Distributional Training for Robust Deep Learning.
Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks.
The Conditional Entropy Bottleneck.
Identifying Audio Adversarial Examples via Anomalous Pattern Detection.
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization.
Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks.
Adversarial Robustness for Code.
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations.
Robustness of Bayesian Neural Networks to Gradient-Based Attacks.
Improving the affordability of robustness training for DNNs.
Fast Geometric Projections for Local Robustness Certification.
Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models.
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models.
Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers.
Adversarial Data Encryption.
Generalised Lipschitz Regularisation Equals Distributional Robustness.
MDEA: Malware Detection with Evolutionary Adversarial Learning.
Input Validation for Neural Networks via Runtime Local Robustness Verification.
Robust binary classification with the 01 loss.
Watch out! Motion is Blurring the Vision of Your Deep Neural Networks.
Feature-level Malware Obfuscation in Deep Learning.
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples.
Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection.
Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing.
Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images.
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks.
Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness.
Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness.
Semantic Robustness of Models of Source Code.
Analysis of Random Perturbations for Robust Convolutional Neural Networks.
RAID: Randomized Adversarial-Input Detection for Neural Networks.
Assessing the Adversarial Robustness of Monte Carlo and Distillation Methods for Deep Bayesian Neural Network Classification.
Reliability Validation of Learning Enabled Vehicle Tracking.
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models.
AI-GAN: Attack-Inspired Generation of Adversarial Examples.
Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels.
Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study.
Adversarially Robust Frame Sampling with Bounded Irregularities.
Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria.
Minimax Defense against Gradient-based Adversarial Attacks.
A Differentiable Color Filter for Generating Unrestricted Adversarial Images.
Regularizers for Single-step Adversarial Training.
Defending Adversarial Attacks via Semantic Feature Manipulation.
Robust saliency maps with decoy-enhanced saliency score.
Towards Sharper First-Order Adversary with Quantized Gradients.
AdvJND: Generating Adversarial Examples with Just Noticeable Difference.
Additive Tree Ensembles: Reasoning About Potential Instances.
Politics of Adversarial Machine Learning.
FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications.
Tiny Noise Can Make an EEG-Based Brain-Computer Interface Speller Output Anything.
A4 : Evading Learning-based Adblockers.
D2M: Dynamic Defense and Modeling of Adversarial Movement in Networks.
Just Noticeable Difference for Machines to Generate Adversarial Images.
Semantic Adversarial Perturbations using Learnt Representations.
Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain.
Modelling and Quantifying Membership Information Leakage in Machine Learning.
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis.
Generating Natural Adversarial Hyperspectral examples with a modified Wasserstein GAN.
FakeLocator: Robust Localization of GAN-Based Face Manipulations via Semantic Segmentation Networks with Bells and Whistles.
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning.
Practical Fast Gradient Sign Attack against Mammographic Image Classifier.
Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks.
Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles.
AI-Powered GUI Attack and Its Defensive Methods.
Analyzing the Noise Robustness of Deep Neural Networks.
When Wireless Security Meets Machine Learning: Motivation, Challenges, and Research Directions.
Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack.
Towards Robust DNNs: An Taylor Expansion-Based Method for Generating Powerful Adversarial Examples.
On the human evaluation of audio adversarial examples.
Adversarial Attack on Community Detection by Hiding Individuals.
SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation.
Secure and Robust Machine Learning for Healthcare: A Survey.
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence.
GhostImage: Perception Domain Attacks against Vision-based Object Classification Systems.
Generate High-Resolution Adversarial Samples by Identifying Effective Features.
Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning.
Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP.
Cyber Attack Detection thanks to Machine Learning Algorithms.
Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks.
A Little Fog for a Large Turn.
The gap between theory and practice in function approximation with deep neural networks.
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.
Increasing the robustness of DNNs against image corruptions by playing the Game of Noise.
Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation.
Advbox: a toolbox to generate adversarial examples that fool neural networks.
Membership Inference Attacks Against Object Detection Models.
An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs.
Fast is better than free: Revisiting adversarial training.
Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses.
Sparse Black-box Video Attack with Reinforcement Learning.
ReluDiff: Differential Verification of Deep Neural Networks.
Guess First to Enable Better Compression and Adversarial Robustness.
To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers.
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius.
Transferability of Adversarial Examples to Attack Cloud-based Image Classifier Service.
Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring.
Generating Semantic Adversarial Examples via Feature Manipulation.
Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations.
The Human Visual System and Adversarial AI.
Reject Illegal Inputs with Generative Classifier Derived from Any Discriminative Classifier.
Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient.
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks Against Adversarial Attacks.
Automated Testing for Deep Learning Systems with Differential Behavior Criteria.
Protecting GANs against privacy attacks by preventing overfitting.
Erase and Restore: Simple, Accurate and Resilient Detection of $L_2$ Adversarial Examples.
Quantum Adversarial Machine Learning.
Adversarial Example Generation using Evolutionary Multi-objective Optimization.
Defending from adversarial examples with a two-stream architecture.
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices.
Search Based Repair of Deep Neural Networks.
Benchmarking Adversarial Robustness.
Efficient Adversarial Training with Transferable Adversarial Examples.
Attack-Resistant Federated Learning with Residual-based Reweighting.
Analysis of Moving Target Defense Against False Data Injection Attacks on Power Grid.
Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer.
Characterizing the Decision Boundary of Deep Neural Networks.
White Noise Analysis of Neural Networks.
Geometry-aware Generation of Adversarial and Cooperative Point Clouds.
T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack.
Measuring Dataset Granularity.
Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing.
secml: A Python Library for Secure and Explainable Machine Learning.
Jacobian Adversarially Regularized Networks for Robustness.
Explainability and Adversarial Robustness for RNNs.
Adversarial symmetric GANs: bridging adversarial samples and adversarial networks.
Does Symbolic Knowledge Prevent Adversarial Fooling?
A New Ensemble Method for Concessively Targeted Multi-model Attack.
Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples).
Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection.
$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers.
Towards Verifying Robustness of Neural Networks Against Semantic Perturbations.
Perturbations on the Perceptual Ball.
Identifying Adversarial Sentences by Analyzing Text Complexity.
An Adversarial Perturbation Oriented Domain Adaptation Approach for Semantic Segmentation.
Adversarial VC-dimension and Sample Complexity of Neural Networks.
SIGMA: Strengthening IDS with GAN and Metaheuristics Attacks.
Detecting Adversarial Attacks On Audio-Visual Speech Recognition.
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection.
CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator.
MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking.
On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration.
Constructing a provably adversarially-robust classifier from a high accuracy one.
DAmageNet: A Universal Adversarial Dataset.
What Else Can Fool Deep Learning? Addressing Color Constancy Errors on Deep Neural Network Performance.
Towards Robust Toxic Content Classification.
Potential adversarial samples for white-box attacks.
Learning to Model Aspects of Hearing Perception Using Neural Loss Functions.
Gabor Layers Enhance Network Robustness.
An Efficient Approach for Using Expectation Maximization Algorithm in Capsule Networks.
Detecting and Correcting Adversarial Images Using Image Processing Operations and Convolutional Neural Networks.
What it Thinks is Important is Important: Robustness Transfers through Input Gradients.
Towards a Robust Classifier: An MDL-Based Method for Generating Adversarial Examples.
Training Provably Robust Models by Polyhedral Envelope Regularization.
Appending Adversarial Frames for Universal Video Attack.
Statistically Robust Neural Network Classification. (22%)
Feature Losses for Adversarial Robustness.
Hardening Random Forest Cyber Detectors Against Adversarial Attacks.
Amora: Black-box Adversarial Morphing Attack.
Exploring the Back Alleys: Analysing The Robustness of Alternative Neural Network Architectures against Adversarial Attacks.
Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations.
Principal Component Properties of Adversarial Samples.
Training Deep Neural Networks for Interpretability and Adversarial Robustness.
Detection of Face Recognition Adversarial Attacks.
The Search for Sparse, Robust Neural Networks.
Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples.
Learning with Multiplicative Perturbations.
A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks.
Walking on the Edge: Fast, Low-Distortion Adversarial Examples.
Towards Robust Image Classification Using Sequential Attention Models.
Scratch that! An Evolution-based Adversarial Attack against Neural Networks.
A Survey of Black-Box Adversarial Attacks on Computer Vision Models.
FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks.
Cost-Aware Robust Tree Ensembles for Security Applications.
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples.
Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs.
Adversary A3C for Robust Reinforcement Learning.
A Method for Computing Class-wise Universal Adversarial Perturbations.
AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds.
Design and Interpretation of Universal Adversarial Patches in Face Detection.
Error-Correcting Neural Network.
Square Attack: a query-efficient black-box adversarial attack via random search.
Towards Privacy and Security of Deep Learning Systems: A Survey.
Survey of Attacks and Defenses on Edge-Deployed Neural Networks.
An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense.
Can Attention Masks Improve Adversarial Robustness?
Defending Against Adversarial Machine Learning.
Using Depth for Pixel-Wise Detection of Adversarial Attacks in Crowd Counting.
Playing it Safe: Adversarial Robustness with an Abstain Option.
ColorFool: Semantic Adversarial Colorization.
Adversarial Attack with Pattern Replacement.
One Man's Trash is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples.
When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks.
Time-aware Gradient Attack on Dynamic Network Link Prediction.
Robust Assessment of Real-World Adversarial Examples.
Universal Adversarial Robustness of Texture and Shape-Biased Models.
Bounding Singular Values of Convolution Layers.
Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction.
Attack Agnostic Statistical Method for Adversarial Detection.
Universal adversarial examples in speech command classification.
Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Networks for Secure Inference.
Heuristic Black-box Adversarial Attacks on Video Recognition Models.
Adversarial Examples Improve Image Recognition.
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation.
Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method.
Fine-grained Synthesis of Unrestricted Adversarial Examples.
Deep Minimax Probability Machine.
Logic-inspired Deep Neural Networks.
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Adversarial Robustness of Flow-Based Generative Models.
Defective Convolutional Layers Learn Robust CNNs.
Generate (non-software) Bugs to Fool Classifiers.
A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories.
A novel method for identifying the deep neural network model with the Serial Number.
Adversarial Attacks on Grid Events Classification: An Adversarial Machine Learning Approach.
WITCHcraft: Efficient PGD attacks with random step size.
Deep Detector Health Management under Adversarial Campaigns.
Countering Inconsistent Labelling by Google's Vision API for Rotated Images.
Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models.
Smoothed Inference for Adversarially-Trained Models.
SMART: Skeletal Motion Action Recognition aTtack.
Suspicion-Free Adversarial Attacks on Clustering Algorithms.
Black-Box Adversarial Attack with Transferable Model-based Embedding.
Defensive Few-shot Adversarial Learning.
Learning To Characterize Adversarial Subspaces.
On Model Robustness Against Adversarial Examples.
Simple iterative method for generating targeted universal adversarial perturbations.
AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients.
Adversarial Embedding: A robust and elusive Steganography and Watermarking technique.
Self-supervised Adversarial Training.
DomainGAN: Generating Adversarial Examples to Attack Domain Generation Algorithm Classifiers.
CAGFuzz: Coverage-Guided Adversarial Generative Fuzzing Testing of Deep Learning Systems.
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks.
Adversarial Margin Maximization Networks.
Improving Robustness of Task Oriented Dialog Systems.
On Robustness to Adversarial Examples and Polynomial Optimization.
Adversarial Examples in Modern Machine Learning: A Review.
Few-Features Attack to Fool Machine Learning Models through Mask-Based GAN.
RNN-Test: Towards Adversarial Testing for Recurrent Neural Network Systems.
Learning From Brains How to Regularize Machines.
Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory.
CALPA-NET: Channel-pruning-assisted Deep Residual Network for Steganalysis of Digital Images.
GraphDefense: Towards Robust Graph Convolutional Networks.
A Reinforced Generation of Adversarial Samples for Neural Machine Translation.
Improving Machine Reading Comprehension via Adversarial Training.
Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples.
Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy.
Intrusion Detection for Industrial Control Systems: Evaluation Analysis and Adversarial Attacks.
Patch augmentation: Towards efficient decision boundaries for neural networks.
Domain Robustness in Neural Machine Translation.
Adversarial Attacks on GMM i-vector based Speaker Verification Systems.
Imperceptible Adversarial Attacks on Tabular Data.
White-Box Target Attack for EEG-Based BCI Regression Problems.
Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces.
Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance.
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods.
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey.
Reversible Adversarial Example based on Reversible Image Transformation.
Adversarial Enhancement for Community Detection in Complex Networks.
DLA: Dense-Layer-Analysis for Adversarial Example Detection.
Intriguing Properties of Adversarial ML Attacks in the Problem Space.
Coverage Guided Testing for Recurrent Neural Networks.
Persistency of Excitation for Robustness of Neural Networks.
Fast-UAP: An Algorithm for Speeding up Universal Adversarial Perturbation Generation with Orientation of Perturbation Vectors.
A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models.
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems.
MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks.
Automatic Detection of Generated Text is Easiest when Humans are Fooled.
Security of Facial Forensics Models Against Adversarial Attacks.
Enhancing Certifiable Robustness via a Deep Model Ensemble.
Certifiable Robustness to Graph Perturbations.
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System.
Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization.
Beyond Universal Person Re-ID Attack.
Adversarial Example in Remote Sensing Image Recognition.
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks.
Certified Adversarial Robustness for Deep Reinforcement Learning.
Word-level Textual Adversarial Attacking as Combinatorial Optimization.
EdgeFool: An Adversarial Image Enhancement Filter.
Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolutional Neural Networks.
Detection of Adversarial Attacks and Characterization of Adversarial Subspace.
Understanding and Quantifying Adversarial Examples Existence in Linear Classification.
Adversarial Defense Via Local Flatness Regularization.
Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples.
MediaEval 2019: Concealed FGSM Perturbations for Privacy Preservation.
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries.
A Useful Taxonomy for Adversarial Robustness of Neural Networks.
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks.
Attacking Optical Flow.
Adversarial Example Detection by Classification for Deep Speech Recognition.
Cross-Representation Transferability of Adversarial Attacks: From Spectrograms to Audio Waveforms.
Structure Matters: Towards Generating Transferable Adversarial Images.
Recovering Localized Adversarial Attacks.
Learning to Learn by Zeroth-Order Oracle.
An Alternative Surrogate Loss for PGD-based Adversarial Testing.
Enhancing Recurrent Neural Networks with Sememes.
Adversarial Attacks on Spoofing Countermeasures of automatic speaker verification.
Toward Metrics for Differentiating Out-of-Distribution Sets.
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking.
A Fast Saddle-Point Dynamical System Approach to Robust Deep Learning.
Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets.
Enforcing Linearity in DNN succours Robustness and Adversarial Image Generation.
LanCe: A Comprehensive and Lightweight CNN Defense Methodology against Physical Adversarial Attacks on Embedded Multimedia Applications.
Adversarial T-shirt! Evading Person Detectors in A Physical World.
A New Defense Against Adversarial Images: Turning a Weakness into a Strength.
Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation.
Understanding Misclassifications by Attributes.
Adversarial Examples for Models of Code.
On adversarial patches: real-world attack on ArcFace-100 face recognition system.
DeepSearch: Simple and Effective Blackbox Fuzzing of Deep Neural Networks.
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.
ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization.
Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models.
Real-world adversarial attack on MTCNN face detection system.
On Robustness of Neural Ordinary Differential Equations.
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems.
Verification of Neural Networks: Specifying Global Robustness using Generative Models.
Universal Adversarial Perturbation for Text Classification.
Information Aware Max-Norm Dirichlet Networks for Predictive Uncertainty Estimation.
Learning deep forest with multi-scale Local Binary Pattern features for face anti-spoofing.
Adversarial Learning of Deepfakes in Accounting.
Deep Latent Defence.
Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system.
Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications.
SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations.
Interpretable Disentanglement of Neural Networks by Extracting Class-Specific Subnetwork.
Unrestricted Adversarial Attacks for Semantic Segmentation.
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies.
Requirements for Developing Robust Neural Networks.
Adversarial Examples for Cost-Sensitive Classifiers.
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions.
BUZz: BUffer Zones for defending adversarial examples in image classification.
Verification of Neural Network Behaviour: Formal Guarantees for Power System Applications.
Attacking Vision-based Perception in End-to-End Autonomous Driving Models.
Adversarially Robust Few-Shot Learning: A Meta-Learning Approach.
Boosting Image Recognition with Non-differentiable Constraints.
Generating Semantic Adversarial Examples with Differentiable Rendering.
Attacking CNN-based anti-spoofing face authentication in the physical domain.
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack.
Cross-Layer Strategic Ensemble Defense Against Adversarial Examples.
Deep Neural Rejection against Adversarial Examples.
Black-box Adversarial Attacks with Bayesian Optimization.
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML.
Role of Spatial Context in Adversarial Robustness for Object Detection.
Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems.
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest.
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks.
Towards Understanding the Transferability of Deep Representations.
Adversarial Machine Learning Attack on Modulation Classification.
Adversarial ML Attack on Self Organizing Cellular Networks.
Towards neural networks that provably know when they don't know.
Lower Bounds on Adversarial Robustness from Optimal Transport.
Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection.
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks.
FreeLB: Enhanced Adversarial Training for Natural Language Understanding.
A Visual Analytics Framework for Adversarial Text Generation.
Intelligent image synthesis to attack a segmentation CNN using adversarial learning.
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack.
Matrix Sketching for Secure Collaborative Machine Learning. (1%)
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.
Robust Local Features for Improving the Generalization of Adversarial Training.
FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments.
HAWKEYE: Adversarial Example Detector for Deep Neural Networks.
Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction.
Adversarial Learning with Margin-based Triplet Embedding Regularization.
COPYCAT: Practical Adversarial Attacks on Visualization-Based Malware Detection.
Defending Against Physically Realizable Attacks on Image Classification.
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation.
Adversarial Vulnerability Bounds for Gaussian Process Classification.
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks.
Toward Robust Image Classification.
Training Robust Deep Neural Networks via Adversarial Noise Propagation.
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review.
Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model.
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges.
They Might NOT Be Giants: Crafting Black-Box Adversarial Examples with Fewer Queries Using Particle Swarm Optimization.
HAD-GAN: A Human-perception Auxiliary Defense GAN to Defend Adversarial Examples.
Towards Quality Assurance of Software Product Lines with Adversarial Configurations.
Interpreting and Improving Adversarial Robustness with Neuron Sensitivity.
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms.
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors.
Natural Language Adversarial Attacks and Defenses in Word Level.
Adversarial Attack on Skeleton-based Human Action Recognition.
Say What I Want: Towards the Dark Side of Neural Dialogue Models.
White-Box Adversarial Defense via Self-Supervised Data Estimation.
Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix.
Inspecting adversarial examples using the Fisher information.
An Empirical Investigation of Randomized Defenses against Adversarial Attacks.
Transferable Adversarial Robustness using Adversarially Trained Autoencoders.
Feedback Learning for Improving the Robustness of Neural Networks.
Sparse and Imperceivable Adversarial Attacks.
Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification.
Identifying and Resisting Adversarial Videos Using Temporal Consistency.
Effectiveness of Adversarial Examples and Defenses for Malware Classification.
Towards Noise-Robust Neural Networks via Progressive Adversarial Training.
UPC: Learning Universal Physical Camouflage Attacks on Object Detectors.
FDA: Feature Disruptive Attack.
Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection.
Toward Finding The Global Optimal of Adversarial Examples.
Adversarial Robustness Against the Union of Multiple Perturbation Models.
DeepObfuscator: Obfuscating Intermediate Representations with Privacy-Preserving Adversarial Learning on Smartphones. (1%)
STA: Adversarial Attacks on Siamese Trackers.
When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures.
Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification.
Natural Adversarial Sentence Generation with Gradient-based Perturbation.
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information.
Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents.
Adversarial Examples with Difficult Common Words for Paraphrase Identification.
Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes ?
Certified Robustness to Adversarial Word Substitutions.
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation.
Metric Learning for Adversarial Robustness.
Adversarial Training Methods for Network Embedding.
Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness.
Defending Against Misclassification Attacks in Transfer Learning.
Universal, transferable and targeted adversarial attacks.
A Statistical Defense Approach for Detecting Adversarial Examples.
Gated Convolutional Networks with Hybrid Connectivity for Image Classification.
Adversarial Edit Attacks for Tree Data.
advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns.
Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower.
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
AdvHat: Real-world adversarial attack on ArcFace Face ID system.
Saliency Methods for Explaining Adversarial Attacks.
Testing Robustness Against Unforeseen Adversaries.
Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples.
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks.
Transferring Robustness for Graph Neural Network Against Poisoning Attacks.
Universal Adversarial Triggers for NLP.
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses.
Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries.
On the Robustness of Human Pose Estimation.
Adversarial Defense by Suppressing High-frequency Components.
Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation.
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks.
Adversarial point perturbations on 3D objects.
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once.
AdvFaces: Adversarial Face Synthesis.
DAPAS: Denoising Autoencoder to Prevent Adversarial attack in Semantic Segmentation.
On Defending Against Label Flipping Attacks on Malware Detection Systems.
Adversarial Neural Pruning with Latent Vulnerability Suppression.
On the Adversarial Robustness of Neural Networks without Weight Transport.
Defending Against Adversarial Iris Examples Using Wavelet Decomposition.
Universal Adversarial Audio Perturbations.
Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations.
Investigating Decision Boundaries of Trained Neural Networks.
Explaining Deep Neural Networks Using Spectrum-Based Fault Localization.
MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks.
BlurNet: Defense by Filtering the Feature Maps.
Random Directional Attack for Fooling Deep Neural Networks.
Adversarial Self-Defense for Cycle-Consistent GANs.
Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve.
A principled approach for generating adversarial images under non-smooth dissimilarity metrics.
Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems.
A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models.
Exploring the Robustness of NMT Systems to Nonsensical Inputs.
AdvGAN++ : Harnessing latent layers for adversary generation.
Black-box Adversarial ML Attack on Modulation Classification.
Robustifying deep networks for image segmentation.
Adversarial Robustness Curves.
Optimal Attacks on Reinforcement Learning Policies.
Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation.
Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding.
Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples.
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment.
Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin.
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method.
Towards Adversarially Robust Object Detection.
Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks.
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training.
Weakly Supervised Localization using Min-Max Entropy: an Interpretable Framework.
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems.
Enhancing Adversarial Example Transferability with an Intermediate Level Attack.
Characterizing Attacks on Deep Reinforcement Learning.
Connecting Lyapunov Control Theory to Adversarial Attacks.
Robustness properties of Facebook's ResNeXt WSL models.
Constrained Concealment Attacks against Reconstruction-based Anomaly Detectors in Industrial Control Systems.
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods.
Latent Adversarial Defence with Boundary-guided Generation.
Natural Adversarial Examples.
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving.
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics.
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning.
Recovery Guarantees for Compressible Signals with Adversarial Noise.
Measuring the Transferability of Adversarial Examples.
Unsupervised Adversarial Attacks on Deep Feature-based Retrieval with GAN.
Stateful Detection of Black-Box Adversarial Attacks.
Generative Modeling by Estimating Gradients of the Data Distribution.
Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn.
Adversarial Objects Against LiDAR-Based Autonomous Driving Systems.
Metamorphic Detection of Adversarial Examples in Deep Learning Models With Affine Transformations.
PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving.
Affine Disentangled GAN for Interpretable and Robust AV Perception.
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions.
Adversarial Robustness through Local Linearization.
Adversarial Attacks in Sound Event Classification.
Robust Synthesis of Adversarial Visual Examples Using a Deep Image Prior.
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack.
Efficient Cyber Attacks Detection in Industrial Control Systems Using Lightweight Neural Networks and PCA.
Treant: Training Evasion-Aware Decision Trees.
Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network".
Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation.
Accurate, reliable and fast robustness evaluation.
Fooling a Real Car with Adversarial Traffic Signs.
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty.
Certifiable Robustness and Robust Training for Graph Convolutional Networks.
Learning to Cope with Adversarial Attacks.
Robustness Guarantees for Deep Neural Networks on Videos.
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense.
Adversarial Robustness via Label-Smoothing.
Evolving Robust Neural Architectures to Defend from Adversarial Attacks.
The Adversarial Robustness of Sampling.
Defending Adversarial Attacks by Correcting logits.
Quantitative Verification of Neural Networks And its Security Applications.
Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection.
Deceptive Reinforcement Learning Under Adversarial Manipulations on Cost Signals.
Defending Against Adversarial Examples with K-Nearest Neighbor.
Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations.
A Fourier Perspective on Model Robustness in Computer Vision.
Evolution Attack On Neural Networks.
Adversarial Examples to Fool Iris Recognition Systems.
A Cyclically-Trained Adversarial Network for Invariant Representation Learning.
On Physical Adversarial Patches for Object Detection.
Catfish Effect Between Internal and External Attackers: Being Semi-honest is Helpful.
Improving the robustness of ImageNet classifiers using elements of human visual cognition.
A unified view on differential privacy and robustness to adversarial examples.
Convergence of Adversarial Training in Overparametrized Networks.
Global Adversarial Attacks for Assessing Deep Learning Robustness.
Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield.
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing.
Adversarial attacks on Copyright Detection Systems.
Improving Black-box Adversarial Attacks with a Transfer-based Prior.
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks.
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Accuracy.
Defending Against Adversarial Attacks Using Random Forests.
Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences.
Adversarial Training Can Hurt Generalization.
Towards Compact and Robust Deep Neural Networks.
Perceptual Based Adversarial Audio Attacks.
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks.
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks.
Towards Stable and Efficient Training of Verifiably Robust Neural Networks.
Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary.
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks.
Lower Bounds for Adversarially Robust PAC Learning.
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers.
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks.
Mimic and Fool: A Task Agnostic Adversarial Attack.
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks.
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles.
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective.
Robustness Verification of Tree-based Models.
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective.
On the Vulnerability of Capsule Networks to Adversarial Attacks.
Intriguing properties of adversarial training.
Improved Adversarial Robustness via Logit Regularization Methods.
Attacking Graph Convolutional Networks via Rewiring.
Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.
Strategies to architect AI Safety: Defense to guard AI from Adversaries.
Sensitivity of Deep Convolutional Networks to Gabor Noise.
ML-LOO: Detecting Adversarial Examples with Feature Attribution.
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks.
Making targeted black-box evasion attacks effective and efficient.
Defending Against Universal Attacks Through Selective Feature Regeneration.
A cryptographic approach to black box adversarial machine learning.
Using learned optimizers to make models robust to input noise.
Efficient Project Gradient Descent for Ensemble Adversarial Attack.
Inductive Bias of Gradient Descent based Adversarial Training on Separable Data.
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness.
Robustness for Non-Parametric Classification: A Generic Attack and Defense.
Robust Attacks against Multiple Classifiers.
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation.
Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric.
Should Adversarial Attacks Use Pixel p-Norm?
Image Synthesis with a Single (Robust) Classifier.
MNIST-C: A Robustness Benchmark for Computer Vision.
Enhancing Gradient-based Attacks with Symbolic Intervals.
Query-efficient Meta Attack to Deep Neural Networks.
c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation.
Multi-way Encoding for Robustness.
Adversarial Training is a Form of Data-dependent Operator Norm Regularization.
Adversarial Exploitation of Policy Imitation.
RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies.
Adversarial Risk Bounds for Neural Networks through Sparsity based Compression.
The Adversarial Machine Learning Conundrum: Can The Insecurity of ML Become The Achilles' Heel of Cognitive Networks?
Adversarial Robustness as a Prior for Learned Representations.
Achieving Generalizable Robustness of Deep Neural Networks by Stability Training.
A Surprising Density of Illusionable Natural Speech.
Fast and Stable Interval Bounds Propagation for Training Verifiably Robust Models.
Understanding the Limitations of Conditional Generative Models.
Adversarially Robust Generalization Just Requires More Unlabeled Data.
Adversarial Examples for Edge Detection: They Exist, and They Transfer.
Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification.
Enhancing Transformation-based Defenses using a Distribution Classifier.
Unlabeled Data Improves Adversarial Robustness.
Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness.
Are Labels Required for Improving Adversarial Robustness?
Real-Time Adversarial Attacks.
Residual Networks as Nonlinear Systems: Stability Analysis using Linearization.
Identifying Classes Susceptible to Adversarial Attacks.
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness.
Interpretable Adversarial Training for Text.
Bandlimiting Neural Networks Against Adversarial Attacks.
Misleading Authorship Attribution of Source Code using Adversarial Learning.
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward.
Functional Adversarial Attacks.
CopyCAT: Taking Control of Neural Policies with Constant Attacks.
ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation.
Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics.
Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss.
Snooping Attacks on Deep Reinforcement Learning.
High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks.
Expected Tight Bounds for Robust Training.
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness.
Cross-Domain Transferability of Adversarial Perturbations.
Certifiably Robust Interpretation in Deep Learning.
Brain-inspired reverse adversarial examples.
Label Universal Targeted Attack.
Divide-and-Conquer Adversarial Detection.
Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking.
Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$.
Scaleable input gradient regularization for adversarial robustness.
Combating Adversarial Misspellings with Robust Word Recognition.
Analyzing the Interpretability Robustness of Self-Explaining Models.
Adversarially Robust Learning Could Leverage Computational Hardness.
Unsupervised Euclidean Distance Attack on Network Embedding.
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations.
Non-Determinism in Neural Networks for Adversarial Robustness.
Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders.
Rearchitecting Classification Frameworks For Increased Robustness.
Robust Classification using Robust Feature Augmentation.
Generalizable Adversarial Attacks Using Generative Models.
Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks.
Adversarial Distillation for Ordered Top-k Attacks.
Adversarial Policies: Attacking Deep Reinforcement Learning.
Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness.
Robustness to Adversarial Perturbations in Learning from Incomplete Data.
Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering.
Enhancing Adversarial Defense by k-Winners-Take-All.
A Direct Approach to Robust Deep Learning Using Adversarial Networks.
PHom-GeM: Persistent Homology for Generative Models.
Thwarting finite difference adversarial attacks with output randomization.
Interpreting Adversarially Trained Convolutional Neural Networks.
Adversarially Robust Distillation.
Convergence and Margin of Adversarial Training on Separable Data.
Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection.
DoPa: A Fast and Comprehensive CNN Defense Methodology against Physical Adversarial Attacks.
Adversarially robust transfer learning.
Testing DNN Image Classifiers for Confusion & Bias Errors.
What Do Adversarially Robust Models Look At?
Taking Care of The Discretization Problem: A Black-Box Adversarial Image Attack in Discrete Integer Domain.
POPQORN: Quantifying Robustness of Recurrent Neural Networks.
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models.
Simple Black-box Adversarial Attacks.
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization.
On Norm-Agnostic Robustness of Adversarial Training.
An Efficient Pre-processing Method to Eliminate Adversarial Effects.
Robustification of deep net classifiers by key based diversified aggregation with pre-filtering.
Adversarial Examples for Electrocardiograms.
Analyzing Adversarial Attacks Against Deep Learning for Intrusion Detection in IoT Networks.
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models.
Moving Target Defense for Deep Visual Sensing against Adversarial Examples.
Interpreting and Evaluating Neural Network Robustness.
On the Connection Between Adversarial Robustness and Saliency Map Interpretability.
Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables.
Adversarial Defense Framework for Graph Neural Network.
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain.
Exploring the Hyperparameter Landscape of Adversarial Robustness.
Learning Interpretable Features via Adversarially Robust Optimization.
Universal Adversarial Perturbations for Speech Recognition Systems.
ROSA: Robust Salient Object Detection against Adversarial Attacks.
Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction.
Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems.
A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks.
Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study.
An Empirical Evaluation of Adversarial Robustness under Transfer Learning.
Adaptive Generation of Unrestricted Adversarial Inputs.
Batch Normalization is a Cause of Adversarial Vulnerability.
Adversarial Examples Are Not Bugs, They Are Features.
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples.
Transfer of Adversarial Robustness Between Perturbation Types.
Adversarial Training with Voronoi Constraints.
Weight Map Layer for Noise and Adversarial Attack Robustness.
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle.
POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm.
Dropping Pixels for Adversarial Robustness.
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks.
Test Selection for Deep Learning Systems.
Detecting Adversarial Examples through Nonlinear Dimensionality Reduction.
Adversarial Training for Free!
Adversarial Training and Robustness for Multiple Perturbations.
Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks.
Robustness Verification of Support Vector Machines.
A Robust Approach for Securing Audio Classification Against Adversarial Attacks.
Physical Adversarial Textures that Fool Visual Object Tracking.
Minimizing Perceived Image Quality Loss Through Adversarial Attack Scoping.
blessing in disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness.
Using Videos to Evaluate Image Model Robustness.
Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning.
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach.
Salient Object Detection in the Deep Learning Era: An In-Depth Survey.
Fooling automated surveillance cameras: adversarial patches to attack person detection.
ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks.
Defensive Quantization: When Efficiency Meets Robustness.
Interpreting Adversarial Examples with Attributes.
Adversarial Defense Through Network Profiling Based Path Extraction.
Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks.
Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers.
Reducing Adversarial Example Transferability Using Gradient Regularization.
AT-GAN: An Adversarial Generator Model for Non-constrained Adversarial Examples.
Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction.
Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks.
Exploiting Vulnerabilities of Load Forecasting Through Adversarial Attacks.
Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense.
Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients.
Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks.
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks.
Unrestricted Adversarial Examples via Semantic Manipulation.
Black-Box Decision based Adversarial Attack with Symmetric $\alpha$-stable Distribution.
Learning to Generate Synthetic Data via Compositing.
Black-box Adversarial Attacks on Video Recognition Models.
Generation & Evaluation of Adversarial Examples for Malware Obfuscation.
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition.
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning.
JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks.
Malware Evasion Attack and Defense.
On Training Robust PDF Malware Classifiers.
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks.
White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks.
Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks.
Understanding the efficacy, reliability and resiliency of computer vision techniques for malware detection and future research directions.
Interpreting Adversarial Examples by Activation Promotion and Suppression.
HopSkipJumpAttack: A Query-Efficient Decision-Based Attack.
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.
Adversarial Attacks against Deep Saliency Models.
Curls & Whey: Boosting Black-Box Adversarial Attacks.
Robustness of 3D Deep Learning in an Adversarial Setting.
Defending against adversarial attacks by randomized diversification.
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks.
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses.
On the Vulnerability of CNN Classifiers in EEG-Based BCIs.
Adversarial Robustness vs Model Compression, or Both?
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.
Smooth Adversarial Examples.
Rallying Adversarial Techniques against Deep Learning for Network Security.
Bridging Adversarial Robustness and Gradient Interpretability.
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks.
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems.
On the Adversarial Robustness of Multivariate Robust Estimation.
A geometry-inspired decision-based attack.
Defending against Whitebox Adversarial Attacks via Randomized Discretization.
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness.
The LogBarrier adversarial attack: making effective use of decision boundary information.
Robust Neural Networks using Randomized Adversarial Training.
A Formalization of Robustness for Deep Neural Networks.
Variational Inference with Latent Space Quantization for Adversarial Resilience.
Improving Adversarial Robustness via Guided Complement Entropy.
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition.
Fast Bayesian Uncertainty Estimation and Reduction of Batch Normalized Single Image Super-Resolution Network. (45%)
Adversarial camera stickers: A physical camera-based attack on deep learning systems.
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes.
On the Robustness of Deep K-Nearest Neighbors.
Generating Adversarial Examples With Conditional Generative Adversarial Net.
Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems.
Adversarial Attacks on Deep Neural Networks for Time Series Classification.
On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models.
On Certifying Non-uniform Bound against Adversarial Attacks.
A Research Agenda: Dynamic Models to Defend Against Correlated Attacks.
Attribution-driven Causal Analysis for Detection of Adversarial Examples.
Adversarial attacks against Fact Extraction and VERification.
Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models.
Can Adversarial Network Attack be Defended?
Manifold Preserving Adversarial Learning.
Attack Type Agnostic Perceptual Enhancement of Adversarial Images.
Out-domain examples for generative models.
GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier.
Statistical Guarantees for the Robustness of Bayesian Neural Networks.
$L_1$-norm double backpropagation adversarial defense.
Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search.
The Vulnerabilities of Graph Convolutional Networks: Stronger Attacks and Defensive Techniques.
Complement Objective Training.
Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming.
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations.
Evaluating Adversarial Evasion Attacks in the Context of Wireless Communications.
PuVAE: A Variational Autoencoder to Purify Adversarial Examples.
Attacking Graph-based Classification via Manipulating the Graph Structure.
On the Effectiveness of Low Frequency Perturbations.
Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN.
Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors.
Adversarial Attack and Defense on Point Sets.
Adversarial Attacks on Time Series.
Robust Decision Trees Against Adversarial Examples.
Tensor Dropout for Robust Learning.
The Best Defense Is a Good Offense: Adversarial Attacks to Avoid Modulation Detection.
A Distributionally Robust Optimization Method for Adversarial Multiple Kernel Learning. (76%)
AutoGAN-based Dimension Reduction for Privacy Preservation. (1%)
Disentangled Deep Autoencoding Regularization for Robust Image Classification.
Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification.
Verification of Non-Linear Specifications for Neural Networks.
Adversarial attacks hidden in plain sight.
MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses.
Adversarial Reinforcement Learning under Partial Observability in Software-Defined Networking.
Re-evaluating ADEM: A Deeper Look at Scoring Dialogue Responses.
A Deep, Information-theoretic Framework for Robust Biometric Recognition.
Adversarial Attacks on Graph Neural Networks via Meta Learning.
Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems.
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks.
On the Sensitivity of Adversarial Robustness to Input Data Distributions.
Quantifying Perceptual Distortion of Adversarial Examples.
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations.
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch.
Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers.
Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure.
There are No Bit Parts for Sign Bits in Black-Box Attacks.
On Evaluating Adversarial Robustness.
AuxBlocks: Defense Adversarial Example via Auxiliary Blocks.
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces.
Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training.
Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness.
DeepFault: Fault Localization for Deep Neural Networks.
Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples.
Examining Adversarial Learning against Graph-based IoT Malware Detection Systems.
Adversarial Samples on Android Malware Detection Systems for IoT Systems.
A Survey: Towards a Robust Deep Neural Network in Text Domain.
Model Compression with Adversarial Robustness: A Unified Optimization Framework.
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks.
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images.
Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis.
Discretization based Solutions for Secure Machine Learning against Adversarial Attacks.
Robustness Of Saak Transform Against Adversarial Attacks.
Certified Adversarial Robustness via Randomized Smoothing.
Fooling Neural Network Interpretations via Adversarial Model Manipulation.
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples.
Fatal Brain Damage.
Theoretical evidence for adversarial robustness through randomization.
Predictive Uncertainty Quantification with Compound Density Networks.
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks.
Robustness Certificates Against Adversarial Examples for ReLU Networks.
Natural and Adversarial Error Detection using Invariance to Image Transformations.
Adaptive Gradient for Adversarial Perturbations Generation.
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks.
The Efficacy of SHIELD under Different Threat Models.
A New Family of Neural Networks Provably Resistant to Adversarial Attacks.
Training Artificial Neural Networks by Generalized Likelihood Ratio Method: Exploring Brain-like Learning to Improve Robustness.
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance.
Augmenting Model Robustness with Transformation-Invariant Attacks.
Adversarial Examples Are a Natural Consequence of Test Error in Noise.
On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks.
RED-Attack: Resource Efficient Decision based Attack for Machine Learning.
Reliable Smart Road Signs.
Adversarial Metric Attack and Defense for Person Re-identification.
Improving Adversarial Robustness of Ensembles with Diversity Training.
CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks.
Defense Methods Against Adversarial Examples for Recurrent Neural Networks.
Using Pre-Training Can Improve Model Robustness and Uncertainty.
An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers.
Characterizing the Shape of Activation Space in Deep Neural Networks.
Strong Black-box Adversarial Attacks on Unsupervised Machine Learning Models.
A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm.
Weighted-Sampling Audio Adversarial Example Attack.
Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data.
Improving Adversarial Robustness via Promoting Ensemble Diversity.
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples.
Theoretically Principled Trade-off between Robustness and Accuracy.
SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems.
Sitatapatra: Blocking the Transfer of Adversarial Samples.
Universal Rules for Fooling Deep Neural Networks based Text Classification.
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey.
Sensitivity Analysis of Deep Neural Networks.
Perception-in-the-Loop Adversarial Examples.
Easy to Fool? Testing the Anti-evasion Capabilities of PDF Malware Scanners.
The Limitations of Adversarial Training and the Blind-Spot Attack.
Generating Adversarial Perturbation with Root Mean Square Gradient.
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System.
Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries.
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification.
Image Transformation can make Neural Networks more robust against Adversarial Examples.
Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers.
Interpretable BoW Networks for Adversarial Example Detection.
Image Super-Resolution as a Defense Against Adversarial Attacks.
Fake News Detection via NLP is Vulnerable to Adversarial Attacks.
Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study.
Multi-Label Adversarial Perturbations.
Adversarial Robustness May Be at Odds With Simplicity.
A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks.
DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems.
Adversarial Attack and Defense on Graph Data: A Survey.
A Data-driven Adversarial Examples Recognition Framework via Adversarial Feature Genome.
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition.
PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning.
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples.
Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors.
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense.
Markov Game Modeling of Moving Target Defense for Strategic Detection of Threats in Cloud Networks.
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks.
Exploiting the Inherent Limitation of L0 Adversarial Examples.
Dissociable neural representations of adversarially perturbed images in convolutional neural networks and the human brain.
Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge.
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach.
Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings.
Designing Adversarially Resilient Classifiers using Resilient Feature Engineering.
A Survey of Safety and Trustworthiness of Deep Neural Networks.
Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks.
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples.
Trust Region Based Adversarial Attack on Neural Networks.
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing.
TextBugger: Generating Adversarial Text Against Real-world Applications.
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem.
Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform.
On the Security of Randomized Defenses Against Adversarial Samples.
Adversarial Framing for Image and Video Classification.
Defending Against Universal Perturbations With Shared Adversarial Training.
Feature Denoising for Improving Adversarial Robustness.
AutoGAN: Robust Classifier Against Adversarial Attacks.
Detecting Adversarial Examples in Convolutional Neural Networks.
Learning Transferable Adversarial Examples via Ghost Networks.
Deep-RBF Networks Revisited: Robust Classification with Rejection.
Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach.
Adversarial Defense of Image Classification Using a Variational Auto-Encoder.
Adversarial Attacks, Regression, and Numerical Stability Regularization.
Prior Networks for Detection of Adversarial Attacks.
Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack.
Fooling Network Interpretation in Image Classification.
The Limitations of Model Uncertainty in Adversarial Settings.
MMA Training: Direct Input Space Margin Maximization through Adversarial Training.
On Configurable Defense against Adversarial Example Attacks.
Regularized Ensembles and Transferability in Adversarial Learning.
SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications.
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures.
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples.
Disentangling Adversarial Robustness and Generalization.
Interpretable Deep Learning under Fire.
Adversarial Example Decomposition.
Model-Reuse Attacks on Deep Learning Systems.
Universal Perturbation Attack Against Image Retrieval.
FineFool: Fine Object Contour Attack via Attention.
Building robust classifiers through generation of confident out of distribution examples.
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification.
Effects of Loss Functions And Target Representations on Adversarial Robustness.
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems.
Transferable Adversarial Attacks for Image and Video Object Detection.
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples.
Adversarial Defense by Stratified Convolutional Sparse Coding.
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks.
Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting.
Adversarial Examples as an Input-Fault Tolerance Problem.
Analyzing Federated Learning through an Adversarial Lens.
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers.
Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects.
A randomized gradient-free attack on ReLU networks.
Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness.
Robust Classification of Financial Risk.
Universal Adversarial Training.
Using Attribution to Decode Dataset Bias in Neural Network Models for Chemistry.
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks.
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies.
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks.
Is Data Clustering in Adversarial Settings Secure?
Attention, Please! Adversarial Defense via Attention Rectification and Preservation.
Robustness via curvature regularization, and vice versa.
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses.
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack.
Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles.
Detecting Adversarial Perturbations Through Spatial Behavior in Activation Spaces.
Task-generalizable Adversarial Attack based on Perceptual Metric.
Towards Robust Neural Networks with Lipschitz Continuity.
How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples.
MimicGAN: Corruption-Mimicking for Blind Image Recovery & Adversarial Defense.
Intermediate Level Adversarial Attack for Enhanced Transferability.
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples.
Convolutional Neural Networks with Transformed Input based on Robust Tensor Network Decomposition.
Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding.
Generalizable Adversarial Training via Spectral Normalization.
Regularized adversarial examples for model interpretability.
The Taboo Trap: Behavioural Detection of Adversarial Samples.
DeepConsensus: using the consensus of features from multiple layers to attain robust image classification.
Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples.
Boosting the Robustness Verification of DNN by Identifying the Achilles's Heel.
Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues.
DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules.
A Spectral View of Adversarially Robust Features.
A note on hyperparameters in black-box adversarial examples.
Mathematical Analysis of Adversarial Attacks.
Adversarial Examples from Cryptographic Pseudo-Random Generators.
Verification of Recurrent Neural Networks Through Rule Extraction.
Robustness of spectral methods for community detection.
Deep Q learning for fooling neural networks.
Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses.
New CleverHans Feature: Better Adversarial Robustness Evaluations with Attack Bundling.
A Geometric Perspective on the Transferability of Adversarial Directions.
CAAD 2018: Iterative Ensemble Adversarial Attack.
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning.
MixTrain: Scalable Training of Verifiably Robust Neural Networks.
SparseFool: a few pixels make a big difference.
Active Deep Learning Attacks under Strict Rate Limitations for Online API Calls.
FUNN: Flexible Unsupervised Neural Network.
On the Transferability of Adversarial Examples Against CNN-Based Image Forensics.
FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning.
QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks.
SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters.
CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network.
Adversarial Black-Box Attacks on Automatic Speech Recognition Systems using Multi-Objective Evolutionary Optimization.
Learning to Defense by Learning to Attack.
A Marauder's Map of Security and Privacy in Machine Learning.
Semidefinite relaxations for certifying robustness to adversarial examples.
Efficient Neural Network Robustness Certification with General Activation Functions.
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks.
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks.
Improving Adversarial Robustness by Encouraging Discriminative Features.
On the Geometry of Adversarial Examples.
Excessive Invariance Causes Adversarial Vulnerability.
When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers.
Reversible Adversarial Examples.
Improved Network Robustness with Adversary Critic.
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models.
Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution.
Logit Pairing Methods Can Fool Gradient-Based Attacks.
RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications.
Rademacher Complexity for Adversarially Robust Generalization.
Robust Audio Adversarial Example for a Physical Attack.
Towards Robust Deep Neural Networks.
Regularization Effect of Fast Gradient Sign Method and its Generalization.
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples.
Law and Adversarial Machine Learning.
Attack Graph Convolutional Networks by Adding Fake Nodes.
Evading classifiers in discrete domains with provable optimality guarantees.
Robust Adversarial Learning via Sparsifying Front Ends.
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses.
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy.
Et Tu Alexa? When Commodity WiFi Devices Turn into Adversarial Motion Sensors.
Adversarial Risk Bounds via Function Transformation.
Cost-Sensitive Robustness against Adversarial Examples.
Sparse DNNs with Improved Adversarial Robustness.
On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm.
Exploring Adversarial Examples in Malware Detection.
A Training-based Identification Approach to VIN Adversarial Examples.
Provable Robustness of ReLU networks via Maximization of Linear Regions.
Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers.
Security Matters: A Survey on Adversarial Machine Learning.
Concise Explanations of Neural Networks using Adversarial Training.
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation.
MeshAdv: Adversarial Meshes for Visual Recognition.
Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only.
Analyzing the Noise Robustness of Deep Neural Networks.
The Adversarial Attack and Detection under the Fisher Information Metric.
Limitations of adversarial robustness: strong No Free Lunch Theorem.
Efficient Two-Step Adversarial Defense for Deep Neural Networks.
Combinatorial Attacks on Binarized Neural Networks.
Average Margin Regularization for Classifiers.
Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness.
Improved Generalization Bounds for Robust Learning.
Can Adversarially Robust Learning Leverage Computational Hardness?
Adversarial Examples - A Complete Characterisation of the Phenomenon.
Link Prediction Adversarial Attack.
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network.
Improving the Generalization of Adversarial Training with Domain Adaptation.
Large batch size training of neural networks with adversarial training and second-order information.
Improved robustness to adversarial examples using Lipschitz regularization of the loss.
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks.
CAAD 2018: Generating Transferable Adversarial Examples.
Interpreting Adversarial Robustness: A View from Decision Surface in Input Space.
To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression.
Characterizing Audio Adversarial Examples Using Temporal Dependency.
Adversarial Attacks and Defences: A Survey.
Explainable Black-Box Attacks Against Model-based Authentication.
Adversarial Attacks on Cognitive Self-Organizing Networks: The Challenge and the Way Forward.
Neural Networks with Structural Resistance to Adversarial Attacks.
Fast Geometrically-Perturbed Adversarial Faces.
On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces.
Low Frequency Adversarial Perturbation.
Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR.
Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization.
Unrestricted Adversarial Examples.
Adversarial Binaries for Authorship Identification.
Playing the Game of Universal Adversarial Perturbations.
Efficient Formal Safety Analysis of Neural Networks.
Adversarial Training Towards Robust Multimedia Recommender System.
Generating 3D Adversarial Point Clouds.
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples.
Robustness Guarantees for Bayesian Inference with Gaussian Processes.
Exploring the Vulnerability of Single Shot Module in Object Detectors via Imperceptible Background Patches.
Robust Adversarial Perturbation on Deep Proposal-based Models.
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks.
Query-Efficient Black-Box Attack by Active Learning.
Adversarial Examples: Opportunities and Challenges.
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions.
Isolated and Ensemble Audio Preprocessing Methods for Detecting Adversarial Examples against Automatic Speech Recognition.
Humans can decipher adversarial images.
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure.
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability.
Certified Adversarial Robustness with Additive Noise.
Towards Query Efficient Black-box Attacks: An Input-free Perspective.
Fast Gradient Attack on Network Embedding.
Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples.
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
A Deeper Look at 3D Shape Classifiers.
Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer.
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Examples for Question Answering.
Query Attack via Opposite-Direction Feature: Towards Robust Image Retrieval.
Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models.
Are adversarial examples inevitable?
IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection.
Adversarial Reprogramming of Text Classification Neural Networks.
Bridging machine learning and cryptography in defence against adversarial attacks.
Adversarial Attacks on Node Embeddings.
HASP: A High-Performance Adaptive Mobile Security Enhancement Against Malicious Speech Recognition.
Adversarial Attack Type I: Cheat Classifiers by Significant Changes.
MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks.
DLFuzz: Differential Fuzzing Testing of Deep Learning Systems.
All You Need is "Love": Evading Hate-speech Detection.
Lipschitz regularized Deep Neural Networks generalize and are adversarially robust.
Targeted Nonlinear Adversarial Perturbations in Images and Videos.
Generalisation in humans and deep neural networks.
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge.
Guiding Deep Learning System Testing using Surprise Adequacy.
Analysis of adversarial attacks against CNN-based image forgery detectors.
Is Machine Learning in Power Systems Vulnerable?
Maximal Jacobian-based Saliency Map Attack.
Adversarial Attacks on Deep-Learning Based Radio Signal Classification.
Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection.
Stochastic Combinatorial Ensembles for Defending Against Adversarial Examples.
Reinforcement Learning for Autonomous Defence in Software-Defined Networking.
Mitigation of Adversarial Attacks through Embedded Feature Selection.
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding.
Distributionally Adversarial Attack.
Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection.
Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks.
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer.
Data augmentation using synthetic data for time series classification with deep residual networks.
Adversarial Vision Challenge.
Defense Against Adversarial Attacks with Saak Transform.
Gray-box Adversarial Training.
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models.
Structured Adversarial Attack: Towards General Implementation and Better Interpretability.
Traits & Transferability of Adversarial Examples against Instance Segmentation & Object Detection.
ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples.
Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions.
DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes.
EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report).
Rob-GAN: Generator, Discriminator, and Adversarial Attacker.
A general metric for identifying adversarial images.
Evaluating and Understanding the Robustness of Adversarial Logit Pairing.
HiDDeN: Hiding Data With Deep Networks.
Limitations of the Lipschitz constant as a defense against adversarial examples.
Unbounded Output Networks for Classification.
Contrastive Video Representation Learning via Adversarial Perturbations.
Simultaneous Adversarial Training - Learn from Others Mistakes.
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors.
Physical Adversarial Examples for Object Detectors.
Harmonic Adversarial Attack Method.
Gradient Band-based Adversarial Training for Generalized Attack Immunity of A3C Path Finding.
Motivating the Rules of the Game for Adversarial Example Research.
Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions.
Online Robust Policy Learning in the Presence of Unknown Adversaries.
Manifold Adversarial Learning.
Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach.
With Friends Like These, Who Needs Adversaries?
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks.
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees.
Attack and defence in cellular decision-making: lessons from machine learning.
Adaptive Adversarial Attack on Scene Text Recognition.
Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks.
Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness.
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations.
Local Gradients Smoothing: Defense against localized adversarial attacks.
Adversarial Robustness Toolbox v1.0.0.
Adversarial Perturbations Against Real-Time Video Classification Systems.
Towards Adversarial Training with Moderate Performance Improvement for Neural Network Classification.
Adversarial Examples in Deep Learning: Characterization and Divergence.
Adversarial Reprogramming of Neural Networks.
Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning.
Customizing an Adversarial Example Generator with Class-Conditional GANs.
Exploring Adversarial Examples: Patterns of One-Pixel Attacks.
Defending Malware Classification Networks Against Adversarial Perturbations with Non-Negative Weight Restrictions.
On Adversarial Examples for Character-Level Neural Machine Translation.
Evaluation of Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) Based Attack Method on MCS 2018 Adversarial Attacks on Black Box Face Recognition System.
Detection based Defense against Adversarial Examples from the Steganalysis Point of View.
Gradient Adversarial Training of Neural Networks.
Combinatorial Testing for Deep Learning Systems.
On the Learning of Deep Local Features for Robust Face Spoofing Detection.
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations.
Non-Negative Networks Against Adversarial Attacks.
Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data.
Hierarchical interpretations for neural network predictions.
Manifold Mixup: Better Representations by Interpolating Hidden States.
Adversarial Attacks on Variational Autoencoders.
Ranking Robustness Under Adversarial Document Manipulations.
Defense Against the Dark Arts: An overview of adversarial example security research and future research directions.
Monge blunts Bayes: Hardness Results for Adversarial Training.
Revisiting Adversarial Risk.
Training Augmentation with Adversarial Examples for Robust Speech Recognition.
Adversarial Attack on Graph Structured Data.
Adversarial Regression with Multiple Learners.
Killing four birds with one Gaussian process: the relation between different test-time attacks.
DPatch: An Adversarial Patch Attack on Object Detectors.
Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise.
An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks.
PAC-learning in the presence of evasion adversaries.
Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks.
Detecting Adversarial Examples via Key-based Network.
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks.
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders.
Scaling provable adversarial defenses.
Sequential Attacks on Agents for Long-Term Adversarial Goals.
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data.
Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization.
ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio.
Robustifying Models Against Adversarial Attacks by Langevin Dynamics.
Robustness May Be at Odds with Accuracy.
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks.
Adversarial Noise Attacks of Deep Learning Architectures -- Stability Analysis via Sparse Modeled Signals.
Why Botnets Work: Distributed Brute-Force Attacks Need No Synchronization.
Adversarial Examples in Remote Sensing.
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization.
Defending Against Adversarial Attacks by Leveraging an Entire GAN.
Training verified learners with learned verifiers.
Adversarial examples from computational constraints.
Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness.
Anonymizing k-Facial Attributes via Adversarial Perturbations.
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients.
Towards the first adversarially robust neural network model on MNIST.
Adversarially Robust Training through Structured Gradient Regularization.
Adversarial Noise Layer: Regularize Neural Network By Adding Noise.
Adversarial Attacks on Neural Networks for Graph Data.
Constructing Unrestricted Adversarial Examples with Generative Models.
Bidirectional Learning for Robust Neural Networks.
Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference.
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks.
Targeted Adversarial Examples for Black Box Audio Systems.
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models.
Towards Robust Neural Machine Translation.
Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing.
Curriculum Adversarial Training.
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning.
Breaking Transferability of Adversarial Samples with Randomness.
On Visual Hallmarks of Robustness to Adversarial Malware.
Robust Classification with Convolutional Prototype Learning.
Interpretable Adversarial Perturbation in Input Embedding Space for Text.
A Counter-Forensic Method for CNN-Based Camera Model Identification.
Siamese networks for generating adversarial examples.
Concolic Testing for Deep Neural Networks.
How Robust are Deep Neural Networks?
Adversarially Robust Generalization Requires More Data.
Adversarial Regression for Detecting Attacks in Cyber-Physical Systems.
Formal Security Analysis of Neural Networks using Symbolic Intervals.
Towards Fast Computation of Certified Robustness for ReLU Networks.
Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning.
Siamese Generative Adversarial Privatizer for Biometric Data.
Black-box Adversarial Attacks with Limited Queries and Information.
VectorDefense: Vectorization as a Defense to Adversarial Examples.
Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers.
Generating Natural Language Adversarial Examples.
Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size.
Learning More Robust Features with Adversarial Training.
ADef: an Iterative Algorithm to Construct Adversarial Deformations.
Attacking Convolutional Neural Network using Differential Evolution.
Semantic Adversarial Deep Learning.
Simulation-based Adversarial Test Generation for Autonomous Vehicles with Machine Learning Components.
Neural Automated Essay Scoring and Coherence Modeling for Adversarially Crafted Input.
Robust Machine Comprehension Models via Adversarial Training.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks.
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm.
ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector.
On the Limitation of MagNet Defense against $L_1$-based Adversarial Examples.
Adversarial Attacks Against Medical Deep Learning Systems.
Detecting Malicious PowerShell Commands using Deep Neural Networks.
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses.
Adversarial Training Versus Weight Decay.
An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks.
Adaptive Spatial Steganography Based on Probability-Controlled Adversarial Examples.
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations.
Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks.
Adversarial Attacks and Defences Competition.
Security Consideration For Deep Learning-Based Image Forensics.
Defending against Adversarial Images using Basis Functions Transformations.
The Effects of JPEG and JPEG2000 Compression on Attacks using Adversarial Examples.
Bypassing Feature Squeezing by Increasing Adversary Strength.
On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples.
Clipping free attacks against artificial neural networks.
Security Theater: On the Vulnerability of Classifiers to Exploratory Attacks.
A Dynamic-Adversarial Mining Approach to the Security of Machine Learning.
An Overview of Vulnerabilities of Voice Controlled Systems.
Generalizability vs. Robustness: Adversarial Examples for Medical Imaging.
CNN Based Adversarial Embedding with Minimum Alteration for Image Steganography.
Detecting Adversarial Perturbations with Saliency.
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization.
Understanding Measures of Uncertainty for Adversarial Example Detection.
Adversarial Defense based on Structure-to-Signal Autoencoders.
Task dependent Deep LDA pruning of neural networks.
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems.