It can be hard to stay up-to-date on the published papers in the field of adversarial examples, which has seen massive growth in the number of papers written each year.
I have been somewhat religiously keeping track of these papers for the last few years, and realized it might be helpful to others if I released this list.
The only requirement I used for selecting a paper for this list is that it is primarily about adversarial examples, or uses adversarial examples extensively.
Due to the sheer quantity of papers, I can't guarantee
that I actually have found all of them.
But I did try.
I may also have included papers that don't match this criterion (and are about something else entirely), or made inconsistent judgement calls as to whether any given paper is mainly an adversarial example paper.
Send me an email if something is wrong and I'll correct it.
Note also that this list is completely unfiltered: everything that presents itself primarily as an adversarial example paper is listed here, and I pass no judgement of quality.
For a curated list of papers that I think are excellent and
worth reading, see the
Adversarial Machine Learning Reading List.
One final note about the data.
This list automatically updates with new papers, even before I
get a chance to manually filter through them.
I do this filtering roughly twice a week, and only then do I remove the ones that aren't related to adversarial examples.
As a result, there may be some
false positives on the most recent few entries.
Each new, unverified entry is annotated with the probability that my simplistic (but reasonably well calibrated) bag-of-words classifier assigns to the paper actually being about adversarial examples.
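I won't go into the details of the classifier here, but the general idea is simple enough to sketch in a few lines. The snippet below is only an illustration of the technique, written against scikit-learn with two made-up training abstracts; it is not the actual code or data behind this list.

# Minimal bag-of-words "is this an adversarial examples paper?" scorer.
# Illustrative only: toy data, not the classifier used for this list.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled abstracts: 1 = about adversarial examples, 0 = not.
train_texts = [
    "We craft adversarial perturbations that fool image classifiers.",
    "We analyze convergence rates of stochastic gradient descent.",
]
train_labels = [1, 0]

model = make_pipeline(
    CountVectorizer(stop_words="english"),  # word-count (bag-of-words) features
    LogisticRegression(max_iter=1000),      # maps counts to a probability
)
model.fit(train_texts, train_labels)

new_abstract = "A black-box attack that evades malware detectors."
prob = model.predict_proba([new_abstract])[0, 1]
print(f"P(adversarial examples paper) = {prob:.0%}")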
The full paper list appears below. I've also released a
TXT file (and a TXT file
with abstracts) and a
JSON file
with the same data. If you do anything interesting with this data, I'd be happy to hear what it was.
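As one trivial starting point, here is a minimal sketch of loading the JSON dump and counting papers per year. The file name and field names in this sketch are only for illustration; check the released file for the actual schema and adjust accordingly.

# Minimal sketch: count papers per year from the released JSON dump.
# The file name and the "date" field are placeholders for the real schema.
import json
from collections import Counter

with open("adversarial_papers.json") as f:
    papers = json.load(f)  # assumed: a list of per-paper records

per_year = Counter(p["date"][:4] for p in papers)
for year, count in sorted(per_year.items()):
    print(year, count)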
Paper List
2022-05-13
l-Leaks: Membership Inference Attacks with Logits. (41%)Shuhao Li; Yajie Wang; Yuanzhang Li; Yu-an Tan
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations. (26%)Yongjie Wang; Hangwei Qian; Chunyan Miao
Millimeter-Wave Automotive Radar Spoofing. (2%)Mihai Ordean; Flavio D. Garcia
2022-05-12
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks. (75%)Pascale Gourdeau; Varun Kanade; Marta Kwiatkowska; James Worrell
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. (61%)Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
How to Combine Membership-Inference Attacks on Multiple Updated Models. (11%)Matthew Jagielski; Stanley Wu; Alina Oprea; Jonathan Ullman; Roxana Geambasu
Infrared Invisible Clothing: Hiding from Infrared Detectors at Multiple Angles in Real World. (4%)Xiaopei Zhu; Zhanhao Hu; Siyuan Huang; Jianmin Li; Xiaolin Hu
Smooth-Reduce: Leveraging Patches for Improved Certified Robustness. (2%)Ameya Joshi; Minh Pham; Minsu Cho; Leonid Boytsov; Filipe Condessa; J. Zico Kolter; Chinmay Hegde
Stalloris: RPKI Downgrade Attack. (1%)Tomas Hlavacek; Philipp Jeitner; Donika Mirdita; Haya Shulman; Michael Waidner
2022-05-11
A Longitudal Study of Cryptographic API -- a Decade of Android Malware. (1%)Adam Janovsky; Davide Maiorca; Dominik Macko; Vashek Matyas; Giorgio Giacinto
Injection Attacks Reloaded: Tunnelling Malicious Payloads over DNS. (1%)Philipp Jeitner; Haya Shulman
The Hijackers Guide To The Galaxy: Off-Path Taking Over Internet Resources. (1%)Tianxiang Dai; Philipp Jeitner; Haya Shulman; Michael Waidner
2022-05-10
Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training. (1%)Cheng Xue; Lequan Yu; Pengfei Chen; Qi Dou; Pheng-Ann Heng
White-box Testing of NLP models with Mask Neuron Coverage. (1%)Arshdeep Sekhon; Yangfeng Ji; Matthew B. Dwyer; Yanjun Qi
2022-05-09
Using Frequency Attention to Make Adversarial Patch Powerful Against Person Detector. (98%)Xiaochun Lei; Chang Lu; Zetao Jiang; Zhaoting Gong; Xiang Cai; Linjun Lu
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks. (97%)Harel Berger; Amit Dvir; Chen Hajaj; Rony Ronen
Model-Contrastive Learning for Backdoor Defense. (87%)Zhihao Yue; Jun Xia; Zhiwei Ling; Ting Wang; Xian Wei; Mingsong Chen
How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? (61%)Alvin Chan; Yew-Soon Ong; Clement Tan
Federated Multi-Armed Bandits Under Byzantine Attacks. (2%)Ilker Demirel; Yigit Yildirim; Cem Tekin
Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples. (2%)Lina Lin; Hanzhou Wu
2022-05-08
Fingerprint Template Invertibility: Minutiae vs. Deep Templates. (68%)Kanishka P. Wijewardena; Steven A. Grosz; Kai Cao; Anil K. Jain
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. (22%)Jingtao Li; Adnan Siraj Rakin; Xing Chen; Zhezhi He; Deliang Fan; Chaitali Chakrabarti
VPN: Verification of Poisoning in Neural Networks. (9%)Youcheng Sun; Muhammad Usman; Divya Gopinath; Corina S. Păsăreanu
FOLPETTI: A Novel Multi-Armed Bandit Smart Attack for Wireless Networks. (3%)Emilie Bout; Alessandro Brighente; Mauro Conti; Valeria Loscri
PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning Under the Support-Query Shift. (1%)Siyang Jiang; Wei Ding; Hsi-Wen Chen; Ming-Syan Chen
2022-05-07
Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees. (92%)Binghui Wang; Youqi Li; Pan Zhou
2022-05-06
Imperceptible Backdoor Attack: From Input Space to Feature Representation. (68%)Nan Zhong; Zhenxing Qian; Xinpeng Zhang
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data. (22%)Joceline Ziegler; Bjarne Pfitzner; Heinrich Schulz; Axel Saalbach; Bert Arnrich
LPGNet: Link Private Graph Networks for Node Classification. (1%)Aashish Kolluri; Teodora Baluta; Bryan Hooi; Prateek Saxena
Unlimited Lives: Secure In-Process Rollback with Isolated Domains. (1%)Merve Turhan; Thomas Nyman; Christoph Bauman; Jan Tobias Mühlberg
2022-05-05
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems. (99%)Gaurav Kumar Nayak; Ruchit Rawal; Rohit Lal; Himanshu Patil; Anirban Chakraborty
Can collaborative learning be private, robust and scalable? (12%)Dmitrii Usynin; Helena Klause; Daniel Rueckert; Georgios Kaissis
Large Scale Transfer Learning for Differentially Private Image Classification. (2%)Harsh Mehta; Abhradeep Thakurta; Alexey Kurakin; Ashok Cutkosky
Are GAN-based Morphs Threatening Face Recognition? (1%)Eklavya Sarkar; Pavel Korshunov; Laurent Colbois; Sébastien Marcel
2022-05-04
Based-CE white-box adversarial attack will not work using super-fitting. (99%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Rethinking Classifier And Adversarial Attack. (98%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. (98%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Sebastiano Vascon; Werner Zellinger; Bernhard A. Moser; Alina Oprea; Battista Biggio; Marcello Pelillo; Fabio Roli
Robust Conversational Agents against Imperceptible Toxicity Triggers. (92%)Ninareh Mehrabi; Ahmad Beirami; Fred Morstatter; Aram Galstyan
Subverting Fair Image Search with Generative Adversarial Perturbations. (83%)Avijit Ghosh; Matthew Jagielski; Christo Wilson
2022-05-03
Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks. (96%)Jonathan Rusert; Padmini Srinivasan
Adversarial Training for High-Stakes Reliability. (68%)Daniel M. Ziegler; Seraphina Nix; Lawrence Chan; Tim Bauman; Peter Schmidt-Nielsen; Tao Lin; Adam Scherlis; Noa Nabeshima; Ben Weinstein-Raun; Daniel de Haas; Buck Shlegeris; Nate Thomas
On the uncertainty principle of neural networks. (3%)Jun-Jie Zhang; Dong-Xiao Zhang; Jian-Nan Chen; Long-Gang Pang
Meta-Cognition. An Inverse-Inverse Reinforcement Learning Approach for Cognitive Radars. (1%)Kunal Pattanayak; Vikram Krishnamurthy; Christopher Berry
2022-05-02
SemAttack: Natural Textual Attacks via Different Semantic Spaces. (96%)Boxin Wang; Chejian Xu; Xiangyu Liu; Yu Cheng; Bo Li
Deep-Attack over the Deep Reinforcement Learning. (93%)Yang Li; Quan Pan; Erik Cambria
Enhancing Adversarial Training with Feature Separability. (92%)Yaxin Li; Xiaorui Liu; Han Xu; Wentao Wang; Jiliang Tang
BERTops: Studying BERT Representations under a Topological Lens. (92%)Jatin Chauhan; Manohar Kaul
MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer. (83%)Shoukun Sun; Min Xian; Aleksandar Vakanski; Hossny Ghanem
Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters. (1%)Ole Christian Eidheim
2022-05-01
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction. (98%)Yong Xie; Dakuo Wang; Pin-Yu Chen; Jinjun Xiong; Sijia Liu; Sanmi Koyejo
Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances. (9%)Shoujie Tong; Qingxiu Dong; Damai Dai; Yifan song; Tianyu Liu; Baobao Chang; Zhifang Sui
A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness. (3%)Jeremiah Zhe Liu; Shreyas Padhy; Jie Ren; Zi Lin; Yeming Wen; Ghassen Jerfel; Zack Nado; Jasper Snoek; Dustin Tran; Balaji Lakshminarayanan
Adversarial Plannning. (2%)Valentin Vie; Ryan Sheatsley; Sophia Beyda; Sushrut Shringarputale; Kevin Chan; Trent Jaeger; Patrick McDaniel
2022-04-30
Optimizing One-pixel Black-box Adversarial Attacks. (82%)Tianxun Zhou; Shubhankar Agrawal; Prateek Manocha
"And Then There Were None": Cracking White-box DNN Watermarks via Invariant Neuron Transforms. (26%)Yifan Yan; Xudong Pan; Yining Wang; Mi Zhang; Min Yang
Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees. (1%)Jonathan Brophy; Zayd Hammoudeh; Daniel Lowd
Loss Function Entropy Regularization for Diverse Decision Boundaries. (1%)Chong Sue Sin
2022-04-29
Adversarial attacks on an optical neural network. (92%)Shuming Jiao; Ziwei Song; Shuiying Xiang
Logically Consistent Adversarial Attacks for Soft Theorem Provers. (2%)Alexander Gaskell; Yishu Miao; Lucia Specia; Francesca Toni
Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation. (1%)Heng Zhu; Qing Ling
2022-04-28
Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations. (99%)Na Liu; Mark Dras; Wei Emma Zhang
Formulating Robustness Against Unforeseen Attacks. (99%)Sihui Dai; Saeed Mahloujifar; Prateek Mittal
Randomized Smoothing under Attack: How Good is it in Pratice? (84%)Thibault Maho; Teddy Furon; Erwan Le Merrer
Improving robustness of language models from a geometry-aware perspective. (68%)Bin Zhu; Zhaoquan Gu; Le Wang; Jinyin Chen; Qi Xuan
Mixup-based Deep Metric Learning Approaches for Incomplete Supervision. (50%)Luiz H. Buris; Daniel C. G. Pedronette; Joao P. Papa; Jurandy Almeida; Gustavo Carneiro; Fabio A. Faria
AGIC: Approximate Gradient Inversion Attack on Federated Learning. (16%)Jin Xu; Chi Hong; Jiyue Huang; Lydia Y. Chen; Jérémie Decouchant
An Online Ensemble Learning Model for Detecting Attacks in Wireless Sensor Networks. (1%)Hiba Tabbaa; Samir Ifzarne; Imad Hafidi
2022-04-27
Adversarial Fine-tune with Dynamically Regulated Adversary. (99%)Pengyue Hou; Ming Zhou; Jie Han; Petr Musilek; Xingyu Li
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame. (98%)Youngjoon Yu; Hong Joo Lee; Hakmin Lee; Yong Man Ro
An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework. (81%)Ehsan Nowroozi; Abhishek; Mohammadreza Mohammadi; Mauro Conti
2022-04-26
Boosting Adversarial Transferability of MLP-Mixer. (99%)Haoran Lyu; Yajie Wang; Yu-an Tan; Huipeng Zhou; Yuhang Zhao; Quanxin Zhang
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping. (99%)Junhao Dong; Yuan Wang; Jianhuang Lai; Xiaohua Xie
Improving the Transferability of Adversarial Examples with Restructure Embedded Patches. (99%)Huipeng Zhou; Yu-an Tan; Yajie Wang; Haoran Lyu; Shangbo Wu; Yuanzhang Li
On Fragile Features and Batch Normalization in Adversarial Training. (97%)Nils Philipp Walter; David Stutz; Bernt Schiele
Mixed Strategies for Security Games with General Defending Requirements. (75%)Rufan Bai; Haoxing Lin; Xinyu Yang; Xiaowei Wu; Minming Li; Weijia Jia
Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios. (26%)Dazhong Rong; Qinming He; Jianhai Chen
Designing Perceptual Puzzles by Differentiating Probabilistic Programs. (13%)Kartik Chandra; Tzu-Mao Li; Joshua Tenenbaum; Jonathan Ragan-Kelley
Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies. (8%)Shaltiel Eloul; Fran Silavong; Sanket Kamthe; Antonios Georgiadis; Sean J. Moran
Performance Analysis of Out-of-Distribution Detection on Trained Neural Networks. (4%)Jens Henriksson; Christian Berger; Markus Borg; Lars Tornberg; Sankar Raman Sathyamoorthy; Cristofer Englund
2022-04-25
Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks. (99%)Jiawei Zhang; Jinwei Wang; Hao Wang; Xiangyang Luo
When adversarial examples are excusable. (89%)Pieter-Jan Kindermans; Charles Staats
A Simple Structure For Building A Robust Model. (81%)Xiao Tan; JingBo Gao; Ruolin Li
Real or Virtual: A Video Conferencing Background Manipulation-Detection System. (67%)Ehsan Nowroozi; Yassine Mekdad; Mauro Conti; Simone Milani; Selcuk Uluagac; Berrin Yanikoglu
Can Rationalization Improve Robustness? (12%)Howard Chen; Jacqueline He; Karthik Narasimhan; Danqi Chen
PhysioGAN: Training High Fidelity Generative Model for Physiological Sensor Readings. (1%)Moustafa Alzantot; Luis Garcia; Mani Srivastava
VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization. (1%)Minghui Chen; Cheng Wen; Feng Zheng; Fengxiang He; Ling Shao
Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications. (1%)Han Cai; Ji Lin; Yujun Lin; Zhijian Liu; Haotian Tang; Hanrui Wang; Ligeng Zhu; Song Han
2022-04-24
A Hybrid Defense Method against Adversarial Attacks on Traffic Sign Classifiers in Autonomous Vehicles. (99%)Zadid Khan; Mashrur Chowdhury; Sakib Mahmud Khan
Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity. (81%)Marco Marchetti; Edmond S. L. Ho
2022-04-23
Smart App Attack: Hacking Deep Learning Models in Android Apps. (98%)Yujin Huang; Chunyang Chen
Towards Data-Free Model Stealing in a Hard Label Setting. (13%)Sunandini Sanyal; Sravanti Addepalli; R. Venkatesh Babu
Reinforced Causal Explainer for Graph Neural Networks. (1%)Xiang Wang; Yingxin Wu; An Zhang; Fuli Feng; Xiangnan He; Tat-Seng Chua
2022-04-22
How Sampling Impacts the Robustness of Stochastic Neural Networks. (98%)Sina Däubener; Asja Fischer
A Tale of Two Models: Constructing Evasive Attacks on Edge Models. (83%)Wei Hao; Aahil Awatramani; Jiayang Hu; Chengzhi Mao; Pin-Chun Chen; Eyal Cidon; Asaf Cidon; Junfeng Yang
Enhancing the Transferability via Feature-Momentum Adversarial Attack. (82%)Xianglong; Yuezun Li; Haipeng Qu; Junyu Dong
Data-Efficient Backdoor Attacks. (75%)Pengfei Xia; Ziqiang Li; Wei Zhang; Bin Li
2022-04-21
A Mask-Based Adversarial Defense Scheme. (99%)Weizhen Xu; Chenyi Zhang; Fangzhen Zhao; Liangda Fang
Is Neuron Coverage Needed to Make Person Detection More Robust? (98%)Svetlana Pavlitskaya; Şiyar Yıkmış; J. Marius Zöllner
Robustness of Machine Learning Models Beyond Adversarial Attacks. (98%)Sebastian Scher; Andreas Trügler
Adversarial Contrastive Learning by Permuting Cluster Assignments. (15%)Muntasir Wahed; Afrina Tabassum; Ismini Lourentzou
Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation. (4%)Jun Xia; Ting Wang; Jiepin Ding; Xian Wei; Mingsong Chen
Detecting Topology Attacks against Graph Neural Networks. (1%)Senrong Xu; Yuan Yao; Liangyue Li; Wei Yang; Feng Xu; Hanghang Tong
2022-04-20
GUARD: Graph Universal Adversarial Defense. (99%)Jintang Li; Jie Liao; Ruofan Wu; Liang Chen; Changhua Meng; Zibin Zheng; Weiqiang Wang
Adversarial Scratches: Deployable Attacks to CNN Classifiers. (99%)Loris Giulivi; Malhar Jere; Loris Rossi; Farinaz Koushanfar; Gabriela Ciocarlie; Briland Hitaj; Giacomo Boracchi
Fast AdvProp. (98%)Jieru Mei; Yucheng Han; Yutong Bai; Yixiao Zhang; Yingwei Li; Xianhang Li; Alan Yuille; Cihang Xie
Case-Aware Adversarial Training. (98%)Mingyuan Fan; Yang Liu; Wenzhong Guo; Ximeng Liu; Jianhua Li
Improved Worst-Group Robustness via Classifier Retraining on Independent Splits. (1%)Thien Hang Nguyen; Hongyang R. Zhang; Huy Le Nguyen
2022-04-19
Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks. (99%)Kenneth T. Co; David Martinez-Rego; Zhongyuan Hau; Emil C. Lupu
Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems. (86%)Xugui Zhou; Maxfield Kouzel; Homa Alemzadeh
Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation. (83%)Siyu Lai; Zhen Yang; Fandong Meng; Xue Zhang; Yufeng Chen; Jinan Xu; Jie Zhou
2022-04-18
UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples. (99%)Rahim Taheri
Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge. (99%)Qun Song; Zhenyu Yan; Wenjie Luo; Rui Tan
Centralized Adversarial Learning for Robust Deep Hashing. (99%)Xunguang Wang; Xu Yuan; Zheng Zhang; Guangming Lu; Xiaomeng Li
Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors. (98%)Nyee Thoang Lim; Meng Yi Kuan; Muxin Pu; Mei Kuan Lim; Chun Yong Chong
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. (75%)Enyan Dai; Tianxiang Zhao; Huaisheng Zhu; Junjie Xu; Zhimeng Guo; Hui Liu; Jiliang Tang; Suhang Wang
CorrGAN: Input Transformation Technique Against Natural Corruptions. (70%)Mirazul Haque; Christof J. Budnik; Wei Yang
Poisons that are learned faster are more effective. (64%)Pedro Sandoval-Segura; Vasu Singla; Liam Fowl; Jonas Geiping; Micah Goldblum; David Jacobs; Tom Goldstein
2022-04-17
Residue-Based Natural Language Adversarial Attack Detection. (99%)Vyas Raina; Mark Gales
Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning. (95%)Jun Guo; Yonghong Chen; Yihang Hao; Zixin Yin; Yin Yu; Simin Li
2022-04-16
SETTI: A Self-supervised Adversarial Malware Detection Architecture in an IoT Environment. (95%)Marjan Golmaryami; Rahim Taheri; Zahra Pooranian; Mohammad Shojafar; Pei Xiao
Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case. (67%)Febrianti Wibawa; Ferhat Ozgur Catak; Salih Sarp; Murat Kuzlu; Umit Cali
2022-04-15
Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning. (92%)Mathias Lechner; Alexander Amini; Daniela Rus; Thomas A. Henzinger
2022-04-14
From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Planting Undetectable Backdoors in Machine Learning Models. (99%)Shafi Goldwasser; Michael P. Kim; Vinod Vaikuntanathan; Or Zamir
Q-TART: Quickly Training for Adversarial Robustness and in-Transferability. (50%)Madan Ravi Ganesh; Salimeh Yasaei Sekeh; Jason J. Corso
Robotic and Generative Adversarial Attacks in Offline Writer-independent Signature Verification. (41%)Jordan J. Bird
2022-04-13
Task-Driven Data Augmentation for Vision-Based Robotic Control. (96%)Shubhankar Agarwal; Sandeep P. Chinchali
Stealing Malware Classifiers and AVs at Low False Positive Conditions. (82%)Maria Rigaki; Sebastian Garcia
Defensive Patches for Robust Recognition in the Physical World. (80%)Jiakai Wang; Zixin Yin; Pengfei Hu; Aishan Liu; Renshuai Tao; Haotong Qin; Xianglong Liu; Dacheng Tao
A Novel Approach to Train Diverse Types of Language Models for Health Mention Classification of Tweets. (78%)Pervaiz Iqbal Khan; Imran Razzak; Andreas Dengel; Sheraz Ahmed
Overparameterized Linear Regression under Adversarial Attacks. (76%)Antônio H. Ribeiro; Thomas B. Schön
Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. (38%)Huming Qiu; Hua Ma; Zhi Zhang; Alsharif Abuadbba; Wei Kang; Anmin Fu; Yansong Gao
A Natural Language Processing Approach for Instruction Set Architecture Identification. (1%)Dinuka Sahabandu; Sukarno Mertoguno; Radha Poovendran
2022-04-12
Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS. (99%)Ke He; Dan Dongseong Kim; Jing Sun; Jeong Do Yoo; Young Hun Lee; Huy Kang Kim
Examining the Proximity of Adversarial Examples to Class Manifolds in Deep Networks. (98%)Štefan Pócoš; Iveta Bečková; Igor Farkaš
Toward Robust Spiking Neural Network Against Adversarial Perturbation. (98%)Ling Liang; Kaidi Xu; Xing Hu; Lei Deng; Yuan Xie
Machine Learning Security against Data Poisoning: Are We There Yet? (92%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms. (11%)Saeed Mahloujifar; Alexandre Sablayrolles; Graham Cormode; Somesh Jha
3DeformRS: Certifying Spatial Deformations on Point Clouds. (9%)Gabriel Pérez S.; Juan C. Pérez; Motasem Alfarra; Silvio Giancola; Bernard Ghanem
2022-04-11
A Simple Approach to Adversarial Robustness in Few-shot Image Classification. (98%)Akshayvarun Subramanya; Hamed Pirsiavash
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. (92%)Yi Zeng; Minzhou Pan; Hoang Anh Just; Lingjuan Lyu; Meikang Qiu; Ruoxi Jia
Generalizing Adversarial Explanations with Grad-CAM. (84%)Tanmay Chakraborty; Utkarsh Trehan; Khawla Mallat; Jean-Luc Dugelay
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization. (83%)Jungbeom Lee; Eunji Kim; Jisoo Mok; Sungroh Yoon
Exploring the Universal Vulnerability of Prompt-based Learning Paradigm. (47%)Lei Xu; Yangyi Chen; Ganqu Cui; Hongcheng Gao; Zhiyuan Liu
medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space. (1%)Amil Dravid; Florian Schiffers; Boqing Gong; Aggelos K. Katsaggelos
2022-04-10
"That Is a Suspicious Reaction!": Interpreting Logits Variation to Detect NLP Adversarial Attacks. (88%)Edoardo Mosca; Shreyash Agarwal; Javier Rando-Ramirez; Georg Groh
Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks. (54%)Karthikeyan Nagarajan; Junde Li; Sina Sayyah Ensan; Mohammad Nasim Imtiaz Khan; Sachhidh Kannan; Swaroop Ghosh
Measuring the False Sense of Security. (26%)Carlos Gomes
2022-04-08
Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser. (99%)Sonal Joshi; Saurabh Kataria; Yiwen Shao; Piotr Zelasko; Jesus Villalba; Sanjeev Khudanpur; Najim Dehak
AdvEst: Adversarial Perturbation Estimation to Classify and Detect Adversarial Attacks against Speaker Identification. (99%)Sonal Joshi; Saurabh Kataria; Jesus Villalba; Najim Dehak
Evaluating the Adversarial Robustness for Fourier Neural Operators. (92%)Abolaji D. Adesoji; Pin-Yu Chen
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense. (87%)Shaik Mohammed Maqsood; Viveros Manuela Ceron; Addluri GowthamKrishna
An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks. (45%)Xinqiao Zhang; Huili Chen; Ke Huang; Farinaz Koushanfar
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment. (13%)Qiang Hu; Yuejun Guo; Maxime Cordy; Xiaofei Xie; Wei Ma; Mike Papadakis; Yves Le Traon
Labeling-Free Comparison Testing of Deep Learning Models. (11%)Yuejun Guo; Qiang Hu; Maxime Cordy; Xiaofei Xie; Mike Papadakis; Yves Le Traon
Does Robustness on ImageNet Transfer to Downstream Tasks? (2%)Yutaro Yamada; Mayu Otani
The self-learning AI controller for adaptive power beaming with fiber-array laser transmitter system. (1%)A. M. Vorontsov; G. A. Filimonov
2022-04-07
Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings. (99%)Yuhao Mao; Chong Fu; Saizhuo Wang; Shouling Ji; Xuhong Zhang; Zhenguang Liu; Jun Zhou; Alex X. Liu; Raheem Beyah; Ting Wang
Adaptive-Gravity: A Defense Against Adversarial Samples. (99%)Ali Mirzaeian; Zhi Tian; Sai Manoj P D; Banafsheh S. Latibari; Ioannis Savidis; Houman Homayoun; Avesta Sasan
Using Multiple Self-Supervised Tasks Improves Model Robustness. (81%)Matthew Lawhon; Chengzhi Mao; Junfeng Yang
Transformer-Based Language Models for Software Vulnerability Detection: Performance, Model's Security and Platforms. (69%)Chandra Thapa; Seung Ick Jang; Muhammad Ejaz Ahmed; Seyit Camtepe; Josef Pieprzyk; Surya Nepal
Defending Active Directory by Combining Neural Network based Dynamic Program and Evolutionary Diversity Optimisation. (1%)Diksha Goel; Max Hector Ward-Graham; Aneta Neumann; Frank Neumann; Hung Nguyen; Mingyu Guo
2022-04-06
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks. (99%)Xu Han; Anmin Liu; Yifeng Xiong; Yanbo Fan; Kun He
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network. (95%)Byung-Kwan Lee; Junho Kim; Yong Man Ro
Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck. (93%)Junho Kim; Byung-Kwan Lee; Yong Man Ro
Optimization Models and Interpretations for Three Types of Adversarial Perturbations against Support Vector Machines. (68%)Wen Su; Qingna Li; Chunfeng Cui
Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems. (62%)Furkan Mumcu; Keval Doshi; Yasin Yilmaz
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures. (4%)Md Tamjid Hossain; Shahriar Badsha; Hung La; Haoting Shen; Shafkat Islam; Ibrahim Khalil; Xun Yi
2022-04-05
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning. (98%)Nilaksh Das; Duen Horng Chau
Adversarial Robustness through the Lens of Convolutional Filters. (87%)Paul Gavrikov; Janis Keuper
User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning. (2%)Tiantian Feng; Raghuveer Peri; Shrikanth Narayanan
SwapMix: Diagnosing and Regularizing the Over-Reliance on Visual Context in Visual Question Answering. (1%)Vipul Gupta; Zhuowan Li; Adam Kortylewski; Chenyu Zhang; Yingwei Li; Alan Yuille
GAIL-PT: A Generic Intelligent Penetration Testing Framework with Generative Adversarial Imitation Learning. (1%)Jinyin Chen; Shulong Hu; Haibin Zheng; Changyou Xing; Guomin Zhang
2022-04-04
Experimental quantum adversarial learning with programmable superconducting qubits. (99%)Wenhui Ren; Weikang Li; Shibo Xu; Ke Wang; Wenjie Jiang; Feitong Jin; Xuhao Zhu; Jiachen Chen; Zixuan Song; Pengfei Zhang; Hang Dong; Xu Zhang; Jinfeng Deng; Yu Gao; Chuanyu Zhang; Yaozu Wu; Bing Zhang; Qiujiang Guo; Hekang Li; Zhen Wang; Jacob Biamonte; Chao Song; Dong-Ling Deng; H. Wang
RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition. (99%)Jianfei Yang; Han Zou; Lihua Xie
PRADA: Practical Black-Box Adversarial Attacks against Neural Ranking Models. (99%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Yixing Fan; Xueqi Cheng
DAD: Data-free Adversarial Defense at Test Time. (99%)Gaurav Kumar Nayak; Ruchit Rawal; Anirban Chakraborty
FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and Countering Deepfakes. (98%)Paarth Neekhara; Shehzeen Hussain; Xinqiao Zhang; Ke Huang; Julian McAuley; Farinaz Koushanfar
2022-04-03
Breaking the De-Pois Poisoning Defense. (98%)Alaa Anani; Mohamed Ghanem; Lotfy Abdel Khaliq
Adversarially robust segmentation models learn perceptually-aligned gradients. (16%)Pedro Sandoval-Segura
Detecting In-vehicle Intrusion via Semi-supervised Learning-based Convolutional Adversarial Autoencoders. (1%)Thien-Nu Hoang; Daehee Kim
2022-04-02
Adversarial Neon Beam: Robust Physical-World Adversarial Attack to DNNs. (98%)Chengyin Hu; Kalibinuer Tiliwalidi
DST: Dynamic Substitute Training for Data-free Black-box Attack. (98%)Wenxuan Wang; Xuelin Qian; Yanwei Fu; Xiangyang Xue
2022-04-01
SkeleVision: Towards Adversarial Resiliency of Person Tracking with Multi-Task Learning. (47%)Nilaksh Das; Sheng-Yun Peng; Duen Horng Chau
Robust and Accurate -- Compositional Architectures for Randomized Smoothing. (31%)Miklós Z. Horváth; Mark Niklas Müller; Marc Fischer; Martin Vechev
FrequencyLowCut Pooling -- Plug & Play against Catastrophic Overfitting. (16%)Julia Grabinski; Steffen Jung; Janis Keuper; Margret Keuper
Preventing Distillation-based Attacks on Neural Network IP. (2%)Mahdieh Grailoo; Zain Ul Abideen; Mairo Leier; Samuel Pagliarini
FedRecAttack: Model Poisoning Attack to Federated Recommendation. (1%)Dazhong Rong; Shuai Ye; Ruoyan Zhao; Hon Ning Yuen; Jianhai Chen; Qinming He
2022-03-31
Improving Adversarial Transferability via Neuron Attribution-Based Attacks. (99%)Jianping Zhang; Weibin Wu; Jen-tse Huang; Yizhan Huang; Wenxuan Wang; Yuxin Su; Michael R. Lyu
Adversarial Examples in Random Neural Networks with General Activations. (98%)Andrea Montanari; Yuchen Wu
Scalable Whitebox Attacks on Tree-based Models. (96%)Giuseppe Castiglione; Gavin Ding; Masoud Hashemi; Christopher Srinivasa; Ga Wu
Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond. (86%)Yi Yu; Wenhan Yang; Yap-Peng Tan; Alex C. Kot
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. (81%)Florian Tramèr; Reza Shokri; Ayrton San Joaquin; Hoang Le; Matthew Jagielski; Sanghyun Hong; Nicholas Carlini
2022-03-30
Investigating Top-$k$ White-Box and Transferable Black-box Attack. (87%)Chaoning Zhang; Philipp Benz; Adil Karjauv; Jae Won Cho; Kang Zhang; In So Kweon
Sensor Data Validation and Driving Safety in Autonomous Driving Systems. (83%)Jindi Zhang
Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis. (56%)Yi Chang; Zhao Ren; Thanh Tam Nguyen; Wolfgang Nejdl; Björn W. Schuller
2022-03-29
Mel Frequency Spectral Domain Defenses against Adversarial Attacks on Speech Recognition Systems. (99%)Nicholas Mehlman; Anirudh Sreeram; Raghuveer Peri; Shrikanth Narayanan
Zero-Query Transfer Attacks on Context-Aware Object Detectors. (99%)Zikui Cai; Shantanu Rane; Alejandro E. Brito; Chengyu Song; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury; M. Salman Asif
Exploring Frequency Adversarial Attacks for Face Forgery Detection. (99%)Shuai Jia; Chao Ma; Taiping Yao; Bangjie Yin; Shouhong Ding; Xiaokang Yang
StyleFool: Fooling Video Classification Systems via Style Transfer. (99%)Yuxin Cao; Xi Xiao; Ruoxi Sun; Derui Wang; Minhui Xue; Sheng Wen
Recent improvements of ASR models in the face of adversarial attacks. (98%)Raphael Olivier; Bhiksha Raj
Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients. (83%)Kaidong Li; Ziming Zhang; Cuncong Zhong; Guanghui Wang
Treatment Learning Transformer for Noisy Image Classification. (26%)Chao-Han Huck Yang; I-Te Danny Hung; Yi-Chieh Liu; Pin-Yu Chen
Can NMT Understand Me? Towards Perturbation-based Evaluation of NMT Models for Code Generation. (11%)Pietro Liguori; Cristina Improta; Simona De Vivo; Roberto Natella; Bojan Cukic; Domenico Cotroneo
2022-03-28
Boosting Black-Box Adversarial Attacks with Meta Learning. (99%)Junjie Fu; Jian Sun; Gang Wang
A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness. (62%)Souvik Kundu; Sairam Sundaresan; Massoud Pedram; Peter A. Beerel
Robust Unlearnable Examples: Protecting Data Against Adversarial Learning. (16%)Shaopeng Fu; Fengxiang He; Yang Liu; Li Shen; Dacheng Tao
Neurosymbolic hybrid approach to driver collision warning. (15%)Kyongsik Yun; Thomas Lu; Alexander Huyen; Patrick Hammer; Pei Wang
Attacker Attribution of Audio Deepfakes. (1%)Nicolas M. Müller; Franziska Dieckmann; Jennifer Williams
2022-03-27
Rebuild and Ensemble: Exploring Defense Against Text Adversaries. (76%)Linyang Li; Demin Song; Jiehang Zeng; Ruotian Ma; Xipeng Qiu
Adversarial Representation Sharing: A Quantitative and Secure Collaborative Learning Framework. (8%)Jikun Chen; Feng Qiang; Na Ruan
2022-03-26
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective. (99%)Yimeng Zhang; Yuguang Yao; Jinghan Jia; Jinfeng Yi; Mingyi Hong; Shiyu Chang; Sijia Liu
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies. (99%)Zhuang Qian; Kaizhu Huang; Qiu-Feng Wang; Xu-Yao Zhang
Reverse Engineering of Imperceptible Adversarial Image Perturbations. (99%)Yifan Gong; Yuguang Yao; Yize Li; Yimeng Zhang; Xiaoming Liu; Xue Lin; Sijia Liu
Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding. (33%)Zhilu Wang; Chao Huang; Qi Zhu
A Systematic Survey of Attack Detection and Prevention in Connected and Autonomous Vehicles. (1%)Trupil Limbasiya; Ko Zheng Teng; Sudipta Chattopadhyay; Jianying Zhou
A Roadmap for Big Model. (1%)Sha Yuan; Hanyu Zhao; Shuai Zhao; Jiahong Leng; Yangxiao Liang; Xiaozhi Wang; Jifan Yu; Xin Lv; Zhou Shao; Jiaao He; Yankai Lin; Xu Han; Zhenghao Liu; Ning Ding; Yongming Rao; Yizhao Gao; Liang Zhang; Ming Ding; Cong Fang; Yisen Wang; Mingsheng Long; Jing Zhang; Yinpeng Dong; Tianyu Pang; Peng Cui; Lingxiao Huang; Zheng Liang; Huawei Shen; Hui Zhang; Quanshi Zhang; Qingxiu Dong; Zhixing Tan; Mingxuan Wang; Shuo Wang; Long Zhou; Haoran Li; Junwei Bao; Yingwei Pan; Weinan Zhang; Zhou Yu; Rui Yan; Chence Shi; Minghao Xu; Zuobai Zhang; Guoqiang Wang; Xiang Pan; Mengjie Li; Xiaoyu Chu; Zijun Yao; Fangwei Zhu; Shulin Cao; Weicheng Xue; Zixuan Ma; Zhengyan Zhang; Shengding Hu; Yujia Qin; Chaojun Xiao; Zheni Zeng; Ganqu Cui; Weize Chen; Weilin Zhao; Yuan Yao; Peng Li; Wenzhao Zheng; Wenliang Zhao; Ziyi Wang; Borui Zhang; Nanyi Fei; Anwen Hu; Zenan Ling; Haoyang Li; Boxi Cao; Xianpei Han; Weidong Zhan; Baobao Chang; Hao Sun; Jiawen Deng; Chujie Zheng; Juanzi Li; Lei Hou; Xigang Cao; Jidong Zhai; Zhiyuan Liu; Maosong Sun; Jiwen Lu; Zhiwu Lu; Qin Jin; Ruihua Song; Ji-Rong Wen; Zhouchen Lin; Liwei Wang; Hang Su; Jun Zhu; Zhifang Sui; Jiajun Zhang; Yang Liu; Xiaodong He; Minlie Huang; Jian Tang; Jie Tang
2022-03-25
Improving Adversarial Transferability with Spatial Momentum. (99%)Guoqiu Wang; Xingxing Wei; Huanqian Yan
Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness. (89%)Giulio Lovisotto; Nicole Finnie; Mauricio Munoz; Chaithanya Kumar Mummadi; Jan Hendrik Metzen
Origins of Low-dimensional Adversarial Perturbations. (89%)Elvis Dohmatob; Chuan Guo; Morgane Goibert
Improving robustness of jet tagging algorithms with adversarial training. (10%)Annika Stein; Xavier Coubez; Spandan Mondal; Andrzej Novak; Alexander Schmidt
A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training. (5%)Yifei Wang; Yisen Wang; Jiansheng Yang; Zhouchen Lin
A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration. (1%)Ramya Hebbalaguppe; Jatin Prakash; Neelabh Madan; Chetan Arora
2022-03-24
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning. (99%)Arezoo Rajabi; Bhaskar Ramasubramanian; Radha Poovendran
A Perturbation Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow. (99%)Jenny Schmalfuss; Philipp Scholze; Andrés Bruhn
NPC: Neuron Path Coverage via Characterizing Decision Logic of Deep Neural Networks. (93%)Xiaofei Xie; Tianlin Li; Jian Wang; Lei Ma; Qing Guo; Felix Juefei-Xu; Yang Liu
MERLIN -- Malware Evasion with Reinforcement LearnINg. (56%)Tony Quertier; Benjamin Marais; Stéphane Morucci; Bertrand Fournel
Repairing Group-Level Errors for DNNs Using Weighted Regularization. (13%)Ziyuan Zhong; Yuchi Tian; Conor J. Sweeney; Vicente Ordonez-Roman; Baishakhi Ray
A Manifold View of Adversarial Risk. (11%)Wenjia Zhang; Yikai Zhang; Xiaoling Hu; Mayank Goswami; Chao Chen; Dimitris Metaxas
2022-03-23
Powerful Physical Adversarial Examples Against Practical Face Recognition Systems. (99%)Inderjeet Singh; Toshinori Araki; Kazuya Kakizaki
Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation. (99%)Hanjie Chen; Yangfeng Ji
Input-specific Attention Subnetworks for Adversarial Detection. (99%)Emil Biju; Anirudh Sriram; Pratyush Kumar; Mitesh M Khapra
Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection. (69%)Liang Chen; Yong Zhang; Yibing Song; Lingqiao Liu; Jue Wang
Distort to Detect, not Affect: Detecting Stealthy Sensor Attacks with Micro-distortion. (3%)Suman Sourav; Binbin Chen
On the (Limited) Generalization of MasterFace Attacks and Its Relation to the Capacity of Face Representations. (3%)Philipp Terhörst; Florian Bierbaum; Marco Huber; Naser Damer; Florian Kirchbuchner; Kiran Raja; Arjan Kuijper
2022-03-22
Exploring High-Order Structure for Robust Graph Structure Learning. (99%)Guangqian Yang; Yibing Zhan; Jinlong Li; Baosheng Yu; Liu Liu; Fengxiang He
On Adversarial Robustness of Large-scale Audio Visual Learning. (93%)Juncheng B Li; Shuhui Qu; Xinjian Li; Po-Yao Huang; Florian Metze
On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes. (83%)Elvis Dohmatob; Alberto Bietti
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis. (78%)Yuwei Sun; Hideya Ochiai; Jun Sakuma
A Girl Has A Name, And It's ... Adversarial Authorship Attribution for Deobfuscation. (2%)Wanyue Zhai; Jonathan Rusert; Zubair Shafiq; Padmini Srinivasan
GradViT: Gradient Inversion of Vision Transformers. (1%)Ali Hatamizadeh; Hongxu Yin; Holger Roth; Wenqi Li; Jan Kautz; Daguang Xu; Pavlo Molchanov
On Robust Classification using Contractive Hamiltonian Neural ODEs. (1%)Muhammad Zakwan; Liang Xu; Giancarlo Ferrari-Trecate
2022-03-21
Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack. (92%)Chi Liu; Huajie Chen; Tianqing Zhu; Jun Zhang; Wanlei Zhou
Integrity Fingerprinting of DNN with Double Black-box Design and Verification. (10%)Shuo Wang; Sidharth Agarwal; Sharif Abuadbba; Kristen Moore; Surya Nepal; Salil Kanhere
On The Robustness of Offensive Language Classifiers. (2%)Jonathan Rusert; Zubair Shafiq; Padmini Srinivasan
Defending against Co-residence Attack in Energy-Efficient Cloud: An Optimization based Real-time Secure VM Allocation Strategy. (1%)Lu Cao; Ruiwen Li; Xiaojun Ruan; Yuhong Liu
2022-03-20
An Intermediate-level Attack Framework on The Basis of Linear Regression. (99%)Yiwen Guo; Qizhang Li; Wangmeng Zuo; Hao Chen
A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement. (99%)Yuting Yang; Pei Huang; Juan Cao; Jintao Li; Yun Lin; Jin Song Dong; Feifei Ma; Jian Zhang
Leveraging Expert Guided Adversarial Augmentation For Improving Generalization in Named Entity Recognition. (82%)Aaron Reich; Jiaao Chen; Aastha Agrawal; Yanzhe Zhang; Diyi Yang
Adversarial Parameter Attack on Deep Neural Networks. (62%)Lijia Yu; Yihan Wang; Xiao-Shan Gao
2022-03-19
Adversarial Defense via Image Denoising with Chaotic Encryption. (99%)Shi Hu; Eric Nalisnick; Max Welling
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense. (98%)Thai Le; Jooyoung Lee; Kevin Yen; Yifan Hu; Dongwon Lee
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model. (84%)Jiayi Wang; Rongzhou Bao; Zhuosheng Zhang; Hai Zhao
Efficient Neural Network Analysis with Sum-of-Infeasibilities. (74%)Haoze Wu; Aleksandar Zeljić; Guy Katz; Clark Barrett
Deep Learning Generalization, Extrapolation, and Over-parameterization. (68%)Roozbeh Yousefzadeh
On Robust Prefix-Tuning for Text Classification. (10%)Zonghan Yang; Yang Liu
2022-03-18
Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike. (99%)Johannes Schneider; Giovanni Apruzzese
Adversarial Attacks on Deep Learning-based Video Compression and Classification Systems. (99%)Jung-Woo Chang; Mojan Javaheripi; Seira Hidano; Farinaz Koushanfar
Neural Predictor for Black-Box Adversarial Attacks on Speech Recognition. (99%)Marie Biolková; Bac Nguyen
AutoAdversary: A Pixel Pruning Method for Sparse Adversarial Attack. (99%)Jinqiao Li; Xiaotao Liu; Jian Zhao; Furao Shen
Defending Variational Autoencoders from Adversarial Attacks with MCMC. (83%)Anna Kuzina; Max Welling; Jakub M. Tomczak
DTA: Physical Camouflage Attacks using Differentiable Transformation Network. (83%)Naufal Suryanto; Yongsu Kim; Hyoeun Kang; Harashta Tatimma Larasati; Youngyeo Yun; Thi-Thu-Huong Le; Hunmin Yang; Se-Yoon Oh; Howon Kim
AdIoTack: Quantifying and Refining Resilience of Decision Tree Ensemble Inference Models against Adversarial Volumetric Attacks on IoT Networks. (78%)Arman Pashamokhtari; Gustavo Batista; Hassan Habibi Gharakheili
Towards Robust 2D Convolution for Reliable Visual Recognition. (9%)Lida Li; Shuai Li; Kun Wang; Xiangchu Feng; Lei Zhang
2022-03-17
Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input. (99%)Junyoung Byun; Seungju Cho; Myung-Joon Kwon; Hee-Seon Kim; Changick Kim
Self-Ensemble Adversarial Training for Improved Robustness. (99%)Hongjun Wang; Yisen Wang
Leveraging Adversarial Examples to Quantify Membership Information Leakage. (98%)Ganesh Del Grosso; Hamid Jalalzai; Georg Pichler; Catuscia Palamidessi; Pablo Piantanida
On the Properties of Adversarially-Trained CNNs. (93%)Mattia Carletti; Matteo Terzi; Gian Antonio Susto
PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks. (89%)Yue Wang; Wenqing Li; Esha Sarkar; Muhammad Shafique; Michail Maniatakos; Saif Eddin Jabari
HDLock: Exploiting Privileged Encoding to Protect Hyperdimensional Computing Models against IP Stealing. (1%)Shijin Duan; Shaolei Ren; Xiaolin Xu
2022-03-16
Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training. (99%)Adir Rahamim; Itay Naeh
Towards Practical Certifiable Patch Defense with Vision Transformer. (98%)Zhaoyu Chen; Bo Li; Jianghe Xu; Shuang Wu; Shouhong Ding; Wenqiang Zhang
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations? (97%)Yonggan Fu; Shunyao Zhang; Shang Wu; Cheng Wan; Yingyan Lin
Provable Adversarial Robustness for Fractional Lp Threat Models. (87%)Alexander Levine; Soheil Feizi
What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study. (83%)Binxiao Huang; Chaofan Tao; Rui Lin; Ngai Wong
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks. (82%)Fan Wu; Linyi Li; Chejian Xu; Huan Zhang; Bhavya Kailkhura; Krishnaram Kenthapadi; Ding Zhao; Bo Li
Reducing Flipping Errors in Deep Neural Networks. (68%)Xiang Deng; Yun Xiao; Bo Long; Zhongfei Zhang
Attacking deep networks with surrogate-based adversarial black-box methods is easy. (45%)Nicholas A. Lord; Romain Mueller; Luca Bertinetto
On the Convergence of Certified Robust Training with Interval Bound Propagation. (15%)Yihan Wang; Zhouxing Shi; Quanquan Gu; Cho-Jui Hsieh
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients. (15%)Xiaoyu Cao; Neil Zhenqiang Gong
Understanding robustness and generalization of artificial neural networks through Fourier masks. (2%)Nikos Karantzas; Emma Besier; Josue Ortega Caro; Xaq Pitkow; Andreas S. Tolias; Ankit B. Patel; Fabio Anselmi
2022-03-15
Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness. (76%)Tejas Gokhale; Swaroop Mishra; Man Luo; Bhavdeep Singh Sachdeva; Chitta Baral
SoK: Why Have Defenses against Social Engineering Attacks Achieved Limited Success? (11%)Theodore Longtchi; Rosana Montañez Rodriguez; Laith Al-Shawaf; Adham Atyabi; Shouhuai Xu
Towards Adversarial Control Loops in Sensor Attacks: A Case Study to Control the Kinematics and Actuation of Embedded Systems. (10%)Yazhou Tu; Sara Rampazzi; Xiali Hei
LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference. (1%)Zhongzhi Yu; Yonggan Fu; Shang Wu; Mengquan Li; Haoran You; Yingyan Lin
Adversarial Counterfactual Augmentation: Application in Alzheimer's Disease Classification. (1%)Tian Xia; Pedro Sanchez; Chen Qin; Sotirios A. Tsaftaris
2022-03-14
Efficient universal shuffle attack for visual object tracking. (99%)Siao Liu; Zhaoyu Chen; Wei Li; Jiwei Zhu; Jiafeng Wang; Wenqiang Zhang; Zhongxue Gan
Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training. (99%)Jiahao Shao; Shijia Geng; Zhaoji Fu; Weilun Xu; Tong Liu; Shenda Hong
Task-Agnostic Robust Representation Learning. (98%)A. Tuan Nguyen; Ser Nam Lim; Philip Torr
Adversarial amplitude swap towards robust image classifiers. (83%)Chun Yang Tan; Hiroshi Kera; Kazuhiko Kawamoto
On the benefits of knowledge distillation for adversarial robustness. (82%)Javier Maroto; Guillermo Ortiz-Jiménez; Pascal Frossard
RES-HD: Resilient Intelligent Fault Diagnosis Against Adversarial Attacks Using Hyper-Dimensional Computing. (82%)Onat Gungor; Tajana Rosing; Baris Aksanli
Energy-Latency Attacks via Sponge Poisoning. (80%)Antonio Emanuele Cinà; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis. (54%)Giulio Rossolini; Federico Nesti; Fabio Brau; Alessandro Biondi; Giorgio Buttazzo
2022-03-13
LAS-AT: Adversarial Training with Learnable Attack Strategy. (99%)Xiaojun Jia; Yong Zhang; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
Generating Practical Adversarial Network Traffic Flows Using NIDSGAN. (99%)Bolor-Erdene Zolbayar; Ryan Sheatsley; Patrick McDaniel; Michael J. Weisman; Sencun Zhu; Shitong Zhu; Srikanth Krishnamurthy
Model Inversion Attack against Transfer Learning: Inverting a Model without Accessing It. (92%)Dayong Ye; Huiqiang Chen; Shuai Zhou; Tianqing Zhu; Wanlei Zhou; Shouling Ji
One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy. (67%)Dayong Ye; Sheng Shen; Tianqing Zhu; Bo Liu; Wanlei Zhou
Policy Learning for Robust Markov Decision Process with a Mismatched Generative Model. (3%)Jialian Li; Tongzheng Ren; Dong Yan; Hang Su; Jun Zhu
2022-03-12
Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior. (99%)Yinpeng Dong; Shuyu Cheng; Tianyu Pang; Hang Su; Jun Zhu
A survey in Adversarial Defences and Robustness in NLP. (99%)Shreya Goyal; Sumanth Doddapaneni; Mitesh M. Khapra; Balaraman Ravindran
Label-only Model Inversion Attack: The Attack that Requires the Least Information. (47%)Dayong Ye; Tianqing Zhu; Shuai Zhou; Bo Liu; Wanlei Zhou
2022-03-11
Block-Sparse Adversarial Attack to Fool Transformer-Based Text Classifiers. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
Learning from Attacks: Attacking Variational Autoencoder for Improving Image Classification. (98%)Jianzhang Zheng; Fan Yang; Hao Shen; Xuan Tang; Mingsong Chen; Liang Song; Xian Wei
An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks. (96%)Anirudh Yadav; Ashutosh Upadhyay; S. Sharanya
Enhancing Adversarial Training with Second-Order Statistics of Weights. (38%)Gaojie Jin; Xinping Yi; Wei Huang; Sven Schewe; Xiaowei Huang
ROOD-MRI: Benchmarking the robustness of deep learning segmentation models to out-of-distribution and corrupted data in MRI. (33%)Lyndon Boone; Mahdi Biparva; Parisa Mojiri Forooshani; Joel Ramirez; Mario Masellis; Robert Bartha; Sean Symons; Stephen Strother; Sandra E. Black; Chris Heyn; Anne L. Martel; Richard H. Swartz; Maged Goubran
Perception Over Time: Temporal Dynamics for Robust Image Understanding. (16%)Maryam Daniali; Edward Kim
Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation. (15%)Yunhan Huang; Quanyan Zhu
2022-03-10
Exploiting the Potential of Datasets: A Data-Centric Approach for Model Robustness. (92%)Yiqi Zhong; Lei Wu; Xianming Liu; Junjun Jiang
Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation. (75%)Saeed Ranjbar Alvar; Lanjun Wang; Jian Pei; Yong Zhang
Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method. (69%)Arbena Musa; Kamer Vishi; Blerim Rexha
Attacks as Defenses: Designing Robust Audio CAPTCHAs Using Attacks on Automatic Speech Recognition Systems. (64%)Hadi Abdullah; Aditya Karlekar; Saurabh Prasad; Muhammad Sajidur Rahman; Logan Blue; Luke A. Bauer; Vincent Bindschaedler; Patrick Traynor
SoK: On the Semantic AI Security in Autonomous Driving. (10%)Junjie Shen; Ningfei Wang; Ziwen Wan; Yunpeng Luo; Takami Sato; Zhisheng Hu; Xinyang Zhang; Shengjian Guo; Zhenyu Zhong; Kang Li; Ziming Zhao; Chunming Qiao; Qi Alfred Chen
2022-03-09
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation. (99%)Qilong Zhang; Chaoning Zhang; Chaoqun Li; Jingkuan Song; Lianli Gao; Heng Tao Shen
Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack. (99%)Ye Liu; Yaya Cheng; Lianli Gao; Xianglong Liu; Qilong Zhang; Jingkuan Song
Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity. (99%)Cheng Luo; Qinliang Lin; Weicheng Xie; Bizhu Wu; Jinheng Xie; Linlin Shen
Binary Classification Under $\ell_0$ Attacks for General Noise Distribution. (98%)Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition. (97%)Xiao Yang; Yinpeng Dong; Tianyu Pang; Zihao Xiao; Hang Su; Jun Zhu
Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees. (92%)Darshan Thaker; Paris Giampouras; René Vidal
Defending Black-box Skeleton-based Human Activity Classifiers. (92%)He Wang; Yunfeng Diao; Zichang Tan; Guodong Guo
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition. (81%)Yi Chang; Sofiane Laridi; Zhao Ren; Gregory Palmer; Björn W. Schuller; Marco Fisichella
Improving Neural ODEs via Knowledge Distillation. (80%)Haoyu Chu; Shikui Wei; Qiming Lu; Yao Zhao
Physics-aware Complex-valued Adversarial Machine Learning in Reconfigurable Diffractive All-optical Neural Network. (22%)Ruiyang Chen; Yingjie Li; Minhan Lou; Jichao Fan; Yingheng Tang; Berardi Sensale-Rodriguez; Cunxi Yu; Weilu Gao
On the surprising tradeoff between ImageNet accuracy and perceptual similarity. (1%)Manoj Kumar; Neil Houlsby; Nal Kalchbrenner; Ekin D. Cubuk
2022-03-08
Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust NIDS. (99%)João Vitorino; Nuno Oliveira; Isabel Praça
Shape-invariant 3D Adversarial Point Clouds. (99%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Hang Zhou; Weiming Zhang; Nenghai Yu
ART-Point: Improving Rotation Robustness of Point Cloud Classifiers via Adversarial Rotation. (92%)Robin Wang; Yibo Yang; Dacheng Tao
Robustly-reliable learners under poisoning attacks. (13%)Maria-Florina Balcan; Avrim Blum; Steve Hanneke; Dravyansh Sharma
DeepSE-WF: Unified Security Estimation for Website Fingerprinting Defenses. (2%)Alexander Veicht; Cedric Renggli; Diogo Barradas
Harmonicity Plays a Critical Role in DNN Based Versus in Biologically-Inspired Monaural Speech Segregation Systems. (1%)Rahil Parikh; Ilya Kavalerov; Carol Espy-Wilson; Shihab Shamma
2022-03-07
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches. (99%)Maura Pintor; Daniele Angioni; Angelo Sotgiu; Luca Demetrio; Ambra Demontis; Battista Biggio; Fabio Roli
Art-Attack: Black-Box Adversarial Attack via Evolutionary Art. (99%)Phoenix Williams; Ke Li
Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon. (99%)Yiqi Zhong; Xianming Liu; Deming Zhai; Junjun Jiang; Xiangyang Ji
Adversarial Texture for Fooling Person Detectors in the Physical World. (98%)Zhanhao Hu; Siyuan Huang; Xiaopei Zhu; Xiaolin Hu; Fuchun Sun; Bo Zhang
Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision. (83%)Jun Zhuang; Mohammad Al Hasan
Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation. (31%)Xiaogeng Liu; Haoyu Wang; Yechao Zhang; Fangzhou Wu; Shengshan Hu
2022-03-06
Searching for Robust Neural Architectures via Comprehensive and Reliable Evaluation. (99%)Jialiang Sun; Tingsong Jiang; Chao Li; Weien Zhou; Xiaoya Zhang; Wen Yao; Xiaoqian Chen
Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer. (98%)Shengshan Hu; Xiaogeng Liu; Yechao Zhang; Minghui Li; Leo Yu Zhang; Hai Jin; Libing Wu
Scalable Uncertainty Quantification for Deep Operator Networks using Randomized Priors. (45%)Yibo Yang; Georgios Kissas; Paris Perdikaris
2022-03-05
aaeCAPTCHA: The Design and Implementation of Audio Adversarial CAPTCHA. (92%)Md Imran Hossen; Xiali Hei
2022-03-04
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation. (82%)Xudong Zhang; Zan Wang; Jingke Zhao; Lanjun Wang
2022-03-03
Ad2Attack: Adaptive Adversarial Attack on Real-Time UAV Tracking. (99%)Changhong Fu; Sihang Li; Xinnan Yuan; Junjie Ye; Ziang Cao; Fangqiang Ding
Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation. (98%)KiYoon Yoo; Jangho Kim; Jiho Jang; Nojun Kwak
Adversarial Patterns: Building Robust Android Malware Classifiers. (98%)Dipkamal Bhusal; Nidhi Rastogi
Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training. (84%)Pervaiz Iqbal Khan; Shoaib Ahmed Siddiqui; Imran Razzak; Andreas Dengel; Sheraz Ahmed
Label-Only Model Inversion Attacks via Boundary Repulsion. (74%)Mostafa Kahla; Si Chen; Hoang Anh Just; Ruoxi Jia
Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models. (56%)Zhibo Wang; Xiaowei Dong; Henry Xue; Zhifei Zhang; Weifeng Chiu; Tao Wei; Kui Ren
Why adversarial training can hurt robust accuracy. (22%)Jacob Clarysse; Julia Hörmann; Fanny Yang
Understanding Failure Modes of Self-Supervised Learning. (4%)Neha Mukund Kalibhat; Kanika Narang; Liang Tan; Hamed Firooz; Maziar Sanjabi; Soheil Feizi
Ensemble Methods for Robust Support Vector Machines using Integer Programming. (2%)Jannis Kurtz
Autonomous and Resilient Control for Optimal LEO Satellite Constellation Coverage Against Space Threats. (1%)Yuhan Zhao; Quanyan Zhu
2022-03-02
Enhancing Adversarial Robustness for Deep Metric Learning. (99%)Mo Zhou; Vishal M. Patel
Detecting Adversarial Perturbations in Multi-Task Perception. (98%)Marvin Klingner; Varun Ravi Kumar; Senthil Yogamani; Andreas Bär; Tim Fingscheidt
Canonical foliations of neural networks: application to robustness. (82%)Eliot Tron; Nicolas Couellan; Stéphane Puechmorel
Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers. (69%)Evan Crothers; Nathalie Japkowicz; Herna Viktor; Paula Branco
Video is All You Need: Attacking PPG-based Biometric Authentication. (13%)Lin Li; Chao Chen; Lei Pan; Jun Zhang; Yang Xiang
A Quantitative Geometric Approach to Neural Network Smoothness. (2%)Zi Wang; Gautam Prakriya; Somesh Jha
MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members. (2%)Ismat Jarin; Birhanu Eshete
2022-03-01
Adversarial samples for deep monocular 6D object pose estimation. (99%)Jinlai Zhang; Weiming Li; Shuang Liang; Hao Wang; Jihong Zhu
Global-Local Regularization Via Distributional Robustness. (86%)Hoang Phan; Trung Le; Trung Phung; Tuan Anh Bui; Nhat Ho; Dinh Phung
Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. (11%)Wei Dai; Daniel Berleant
Signature Correction Attack on Dilithium Signature Scheme. (1%)Saad Islam; Koksal Mus; Richa Singh; Patrick Schaumont; Berk Sunar
2022-02-28
Enhance transferability of adversarial examples with model architecture. (99%)Mingyuan Fan; Wenzhong Guo; Shengxing Yu; Zuobin Ying; Ximeng Liu
Towards Robust Stacked Capsule Autoencoder with Hybrid Adversarial Training. (99%)Jiazhu Dai; Siwei Xiong
Evaluating the Adversarial Robustness of Adaptive Test-time Defenses. (98%)Francesco Croce; Sven Gowal; Thomas Brunner; Evan Shelhamer; Matthias Hein; Taylan Cemgil
MaMaDroid2.0 -- The Holes of Control Flow Graphs. (88%)Harel Berger; Chen Hajaj; Enrico Mariconti; Amit Dvir
Improving Lexical Embeddings for Robust Question Answering. (67%)Weiwen Xu; Bowei Zou; Wai Lam; Ai Ti Aw
Robust Textual Embedding against Word-level Adversarial Attacks. (26%)Yichen Yang; Xiaosen Wang; Kun He
Artificial Intelligence for Cyber Security (AICS). (1%)James Holt; Edward Raff; Ahmad Ridley; Dennis Ross; Arunesh Sinha; Diane Staheli; William Streilen; Milind Tambe; Yevgeniy Vorobeychik; Allan Wollaber
Explaining RADAR features for detecting spoofing attacks in Connected Autonomous Vehicles. (1%)Nidhi Rastogi; Sara Rampazzi; Michael Clifford; Miriam Heller; Matthew Bishop; Karl Levitt
2022-02-27
A Unified Wasserstein Distributional Robustness Framework for Adversarial Training. (99%)Tuan Anh Bui; Trung Le; Quan Tran; He Zhao; Dinh Phung
Robust Control of Partially Specified Boolean Networks. (1%)Luboš Brim; Samuel Pastva; David Šafránek; Eva Šmijáková
2022-02-26
Adversarial robustness of sparse local Lipschitz predictors. (87%)Ramchandran Muthukumar; Jeremias Sulam
Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations. (45%)Metehan Cekic; Can Bakiskan; Upamanyu Madhow
2022-02-25
ARIA: Adversarially Robust Image Attribution for Content Provenance. (99%)Maksym Andriushchenko; Xiaoyang Rebecca Li; Geoffrey Oxholm; Thomas Gittings; Tu Bui; Nicolas Flammarion; John Collomosse
Projective Ranking-based GNN Evasion Attacks. (97%)He Zhang; Xingliang Yuan; Chuan Zhou; Shirui Pan
On the Effectiveness of Dataset Watermarking in Adversarial Settings. (56%)Buse Gul Atli Tekgul; N. Asokan
2022-02-24
Towards Effective and Robust Neural Trojan Defenses via Input Filtering. (92%)Kien Do; Haripriya Harikumar; Hung Le; Dung Nguyen; Truyen Tran; Santu Rana; Dang Nguyen; Willy Susilo; Svetha Venkatesh
Robust Probabilistic Time Series Forecasting. (76%)TaeHo Yoon; Youngsuk Park; Ernest K. Ryu; Yuyang Wang
Understanding Adversarial Robustness from Feature Maps of Convolutional Layers. (50%)Cong Xu; Min Yang
Measuring CLEVRness: Blackbox testing of Visual Reasoning Models. (16%)Spyridon Mouselinos; Henryk Michalewski; Mateusz Malinowski
Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration. (3%)Ryan Soklaski; Michael Yee; Theodoros Tsiligkaridis
Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations. (2%)Chih-Kuan Yeh; Kuan-Yun Lee; Frederick Liu; Pradeep Ravikumar
Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning. (1%)Xihong Yang; Xiaochang Hu; Sihang Zhou; Xinwang Liu; En Zhu
2022-02-23
Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling. (96%)Zhi-Yuan Zhang; Di Liu
Using calibrator to improve robustness in Machine Reading Comprehension. (13%)Jing Jin; Houfeng Wang
2022-02-22
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis. (99%)Hanieh Naderi; Arian Etemadi; Kimia Noorbakhsh; Shohreh Kasaei
Universal adversarial perturbation for remote sensing images. (92%)Zhaoxia Yin; Qingyu Wang; Jin Tang; Bin Luo
Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era. (84%)Changjiang Li; Li Wang; Shouling Ji; Xuhong Zhang; Zhaohan Xi; Shanqing Guo; Ting Wang
2022-02-21
Adversarial Attacks on Speech Recognition Systems for Mission-Critical Applications: A Survey. (99%)Ngoc Dung Huynh; Mohamed Reda Bouadjenek; Imran Razzak; Kevin Lee; Chetan Arora; Ali Hassani; Arkady Zaslavsky
Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness. (99%)Beomsu Kim; Junghoon Seo
HoneyModels: Machine Learning Honeypots. (99%)Ahmed Abdou; Ryan Sheatsley; Yohan Beugin; Tyler Shipp; Patrick McDaniel
Transferring Adversarial Robustness Through Robust Representation Matching. (99%)Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
On the Effectiveness of Adversarial Training against Backdoor Attacks. (96%)Yinghua Gao; Dongxian Wu; Jingfeng Zhang; Guanhao Gan; Shu-Tao Xia; Gang Niu; Masashi Sugiyama
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey. (83%)Miguel A. Ramirez; Song-Kyoo Kim; Hussam Al Hamadi; Ernesto Damiani; Young-Ji Byon; Tae-Yeon Kim; Chung-Suk Cho; Chan Yeob Yeun
A Tutorial on Adversarial Learning Attacks and Countermeasures. (75%)Cato Pauling; Michael Gimson; Muhammed Qaid; Ahmad Kida; Basel Halak
Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection. (41%)Yein Kim; Huili Chen; Farinaz Koushanfar
Privacy Leakage of Adversarial Training Models in Federated Learning Systems. (38%)Jingyang Zhang; Yiran Chen; Hai Li
Robustness and Accuracy Could Be Reconcilable by (Proper) Definition. (10%)Tianyu Pang; Min Lin; Xiao Yang; Jun Zhu; Shuicheng Yan
Cyber-Physical Defense in the Quantum Era. (2%)Michel Barbeau; Joaquin Garcia-Alfaro
2022-02-20
Real-time Over-the-air Adversarial Perturbations for Digital Communications using Deep Neural Networks. (93%)Roman A. Sandler; Peter K. Relich; Cloud Cho; Sean Holloway
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. (26%)Tianlong Chen; Zhenyu Zhang; Pengjun Wang; Santosh Balachandra; Haoyu Ma; Zehao Wang; Zhangyang Wang
Overparametrization improves robustness against adversarial attacks: A replication study. (3%)Ali Borji
2022-02-18
Exploring Adversarially Robust Training for Unsupervised Domain Adaptation. (99%)Shao-Yuan Lo; Vishal M. Patel
Learning Representations Robust to Group Shifts and Adversarial Examples. (93%)Ming-Chang Chiu; Xuezhe Ma
Critical Checkpoints for Evaluating Defence Models Against Adversarial Attack and Robustness. (92%)Kanak Tekwani; Manojkumar Parmar
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches. (80%)Reena Zelenkova; Jack Swallow; M. A. P. Chamikara; Dongxi Liu; Mohan Baruwal Chhetri; Seyit Camtepe; Marthie Grobler; Mahathir Almashor
Data-Driven Mitigation of Adversarial Text Perturbation. (75%)Rasika Bhalerao; Mohammad Al-Rubaie; Anand Bhaskar; Igor Markov
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias. (68%)Shangxi Wu; Qiuyang He; Yi Zhang; Jitao Sang
Stochastic Perturbations of Tabular Features for Non-Deterministic Inference with Automunge. (38%)Nicholas J. Teague
Label-Smoothed Backdoor Attack. (33%)Minlong Peng; Zidi Xiong; Mingming Sun; Ping Li
Black-box Node Injection Attack for Graph Neural Networks. (33%)Mingxuan Ju; Yujie Fan; Yanfang Ye; Liang Zhao
Robust Reinforcement Learning as a Stackelberg Game via Adaptively-Regularized Adversarial Training. (9%)Peide Huang; Mengdi Xu; Fei Fang; Ding Zhao
Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML Systems. (4%)Mohamad Fazelnia; Igor Khokhlov; Mehdi Mirakhorli
Synthetic Disinformation Attacks on Automated Fact Verification Systems. (1%)Yibing Du; Antoine Bosselut; Christopher D. Manning
2022-02-17
Rethinking Machine Learning Robustness via its Link with the Out-of-Distribution Problem. (99%)Abderrahmen Amich; Birhanu Eshete
Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition. (98%)Chao-Han Huck Yang; Zeeshan Ahmed; Yile Gu; Joseph Szurley; Roger Ren; Linda Liu; Andreas Stolcke; Ivan Bulyko
Developing Imperceptible Adversarial Patches to Camouflage Military Assets From Computer Vision Enabled Technologies. (98%)Chris Wise; Jo Plested
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations. (78%)Zirui Peng; Shaofeng Li; Guoxing Chen; Cheng Zhang; Haojin Zhu; Minhui Xue
2022-02-16
The Adversarial Security Mitigations of mmWave Beamforming Prediction Models using Defensive Distillation and Adversarial Retraining. (99%)Murat Kuzlu; Ferhat Ozgur Catak; Umit Cali; Evren Catak; Ozgur Guler
Understanding and Improving Graph Injection Attack by Promoting Unnoticeability. (10%)Yongqiang Chen; Han Yang; Yonggang Zhang; Kaili Ma; Tongliang Liu; Bo Han; James Cheng
Gradient Based Activations for Accurate Bias-Free Learning. (1%)Vinod K Kurmi; Rishabh Sharma; Yash Vardhan Sharma; Vinay P. Namboodiri
2022-02-15
Unreasonable Effectiveness of Last Hidden Layer Activations. (99%)Omer Faruk Tuna; Ferhat Ozgur Catak; M. Taner Eskil
StratDef: a strategic defense against adversarial attacks in malware detection. (98%)Aqib Rashid; Jose Such
Random Walks for Adversarial Meshes. (96%)Amir Belder; Gal Yefet; Ran Ben Izhak; Ayellet Tal
Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing. (93%)Zhiyan Chen; Burak Kantarci
Applying adversarial networks to increase the data efficiency and reliability of Self-Driving Cars. (89%)Aakash Kumar
Improving the repeatability of deep learning models with Monte Carlo dropout. (1%)Andreanne Lemay; Katharina Hoebel; Christopher P. Bridge; Brian Befano; Silvia De Sanjosé; Didem Egemen; Ana Cecilia Rodriguez; Mark Schiffman; John Peter Campbell; Jayashree Kalpathy-Cramer
Taking a Step Back with KCal: Multi-Class Kernel-Based Calibration for Deep Neural Networks. (1%)Zhen Lin; Shubhendu Trivedi; Jimeng Sun
Holistic Adversarial Robustness of Deep Learning Models. (1%)Pin-Yu Chen; Sijia Liu
2022-02-14
Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark. (99%)Yonghao Xu; Pedram Ghamisi
Finding Dynamics Preserving Adversarial Winning Tickets. (86%)Xupeng Shi; Pengfei Zheng; A. Adam Ding; Yuan Gao; Weizhong Zhang
Recent Advances in Reliable Deep Graph Learning: Adversarial Attack, Inherent Noise, and Distribution Shift. (83%)Bingzhe Wu; Jintang Li; Chengbin Hou; Guoji Fu; Yatao Bian; Liang Chen; Junzhou Huang
UA-FedRec: Untargeted Attack on Federated News Recommendation. (1%)Jingwei Yi; Fangzhao Wu; Bin Zhu; Yang Yu; Chao Zhang; Guangzhong Sun; Xing Xie
2022-02-13
Adversarial Fine-tuning for Backdoor Defense: Connecting Backdoor Attacks to Adversarial Attacks. (99%)Bingxu Mu; Zhenxing Niu; Le Wang; Xue Wang; Rong Jin; Gang Hua
Towards Understanding and Defending Input Space Trojans. (12%)Zhenting Wang; Hailun Ding; Juan Zhai; Shiqing Ma
Extracting Label-specific Key Input Features for Neural Code Intelligence Models. (9%)Md Rafiqul Islam Rabin
Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey. (2%)Zhilin Wang; Qiao Kang; Xinyi Zhang; Qin Hu
SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation. (1%)Cong Guo; Yuxian Qiu; Jingwen Leng; Xiaotian Gao; Chen Zhang; Yunxin Liu; Fan Yang; Yuhao Zhu; Minyi Guo
2022-02-12
RoPGen: Towards Robust Code Authorship Attribution via Automatic Coding Style Transformation. (98%)Zhen Li; Guenevere Qian Chen; Chen Chen; Yayi Zou; Shouhuai Xu
DeepSensor: Deep Learning Testing Framework Based on Neuron Sensitivity. (78%)Haibo Jin; Ruoxi Chen; Haibin Zheng; Jinyin Chen; Zhenguang Liu; Qi Xuan; Yue Yu; Yao Cheng
2022-02-11
Adversarial Attacks and Defense Methods for Power Quality Recognition. (99%)Jiwei Tian; Buhong Wang; Jing Li; Zhen Wang; Mete Ozay
Open-set Adversarial Defense with Clean-Adversarial Mutual Learning. (98%)Rui Shao; Pramuditha Perera; Pong C. Yuen; Vishal M. Patel
Towards Adversarially Robust Deepfake Detection: An Ensemble Approach. (98%)Ashish Hooda; Neal Mangaokar; Ryan Feng; Kassem Fawaz; Somesh Jha; Atul Prakash
Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models. (92%)Abigail Swenor; Jugal Kalita
Predicting Out-of-Distribution Error with the Projection Norm. (62%)Yaodong Yu; Zitong Yang; Alexander Wei; Yi Ma; Jacob Steinhardt
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers. (62%)Limin Yang; Zhi Chen; Jacopo Cortellazzi; Feargus Pendlebury; Kevin Tu; Fabio Pierazzi; Lorenzo Cavallaro; Gang Wang
White-Box Attacks on Hate-speech BERT Classifiers in German with Explicit and Implicit Character Level Defense. (12%)Shahrukh Khan; Mahnoor Shahid; Navdeeppal Singh
On the Detection of Adaptive Adversarial Attacks in Speaker Verification Systems. (10%)Zesheng Chen
Noise Augmentation Is All You Need For FGSM Fast Adversarial Training: Catastrophic Overfitting And Robust Overfitting Require Different Augmentation. (10%)Chaoning Zhang; Kang Zhang; Axi Niu; Chenshuang Zhang; Jiu Feng; Chang D. Yoo; In So Kweon
Improving Generalization via Uncertainty Driven Perturbations. (2%)Matteo Pagliardini; Gilberto Manunza; Martin Jaggi; Michael I. Jordan; Tatjana Chavdarova
CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning. (1%)Jun Shu; Xiang Yuan; Deyu Meng; Zongben Xu
2022-02-10
FAAG: Fast Adversarial Audio Generation through Interactive Attack Optimisation. (99%)Yuantian Miao; Chao Chen; Lei Pan; Jun Zhang; Yang Xiang
Towards Assessing and Characterizing the Semantic Robustness of Face Recognition. (76%)Juan C. Pérez; Motasem Alfarra; Ali Thabet; Pablo Arbeláez; Bernard Ghanem
Controlling the Complexity and Lipschitz Constant improves polynomial nets. (12%)Zhenyu Zhu; Fabian Latorre; Grigorios G Chrysos; Volkan Cevher
FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling. (8%)Chuhan Wu; Fangzhao Wu; Tao Qi; Yongfeng Huang; Xing Xie
A Field of Experts Prior for Adapting Neural Networks at Test Time. (1%)Neerav Karani; Georg Brunner; Ertunc Erdil; Simin Fei; Kerem Tezcan; Krishna Chaitanya; Ender Konukoglu
2022-02-09
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios. (99%)Jung Im Choi; Qing Tian
Gradient Methods Provably Converge to Non-Robust Networks. (82%)Gal Vardi; Gilad Yehudai; Ohad Shamir
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger. (22%)Muhammad Umer; Robi Polikar
Learning to Bootstrap for Combating Label Noise. (2%)Yuyin Zhou; Xianhang Li; Fengze Liu; Xuxi Chen; Lequan Yu; Cihang Xie; Matthew P. Lungren; Lei Xing
Model Architecture Adaption for Bayesian Neural Networks. (1%)Duo Wang; Yiren Zhao; Ilia Shumailov; Robert Mullins
ARIBA: Towards Accurate and Robust Identification of Backdoor Attacks in Federated Learning. (1%)Yuxi Mi; Jihong Guan; Shuigeng Zhou
2022-02-08
Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations. (99%)Yun-Yun Tsai; Lei Hsiung; Pin-Yu Chen; Tsung-Yi Ho
Verification-Aided Deep Ensemble Selection. (96%)Guy Amir; Guy Katz; Michael Schapira
Adversarial Detection without Model Information. (87%)Abhishek Moitra; Youngeun Kim; Priyadarshini Panda
Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks. (1%)Alireza Sadeghi
2022-02-07
On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks. (99%)Salijona Dyrmishi; Salah Ghamizi; Thibault Simonetto; Yves Le Traon; Maxime Cordy
Blind leads Blind: A Zero-Knowledge Attack on Federated Learning. (98%)Jiyue Huang; Zilong Zhao; Lydia Y. Chen; Stefanie Roos
Evaluating Robustness of Cooperative MARL: A Model-based Approach. (97%)Nhan H. Pham; Lam M. Nguyen; Jie Chen; Hoang Thanh Lam; Subhro Das; Tsui-Wei Weng
Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests. (74%)Xilie Xu; Jingfeng Zhang; Feng Liu; Masashi Sugiyama; Mohan Kankanhalli
Membership Inference Attacks and Defenses in Neural Network Pruning. (50%)Xiaoyong Yuan; Lan Zhang
SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation. (4%)Jun Xia; Lirong Wu; Jintao Chen; Bozhen Hu; Stan Z. Li
Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning. (3%)Ji Gao; Sanjam Garg; Mohammad Mahmoody; Prashant Nalini Vasudevan
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks. (2%)Jing Xu; Rui Wang; Kaitai Liang; Stjepan Picek
2022-02-06
Pipe Overflow: Smashing Voice Authentication for Fun and Profit. (99%)Shimaa Ahmed; Yash Wani; Ali Shahin Shamsabadi; Mohammad Yaghini; Ilia Shumailov; Nicolas Papernot; Kassem Fawaz
Redactor: Targeted Disinformation Generation using Probabilistic Decision Boundaries. (8%)Geon Heo; Steven Euijong Whang
2022-02-05
Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework. (99%)Mohammad Khalooei; Mohammad Mehdi Homayounpour; Maryam Amirmazlaghani
Adversarial Detector with Robust Classifier. (93%)Takayuki Osakabe; Maungmaung Aprilpyone; Sayaka Shiota; Hitoshi Kiya
Memory Defense: More Robust Classification via a Memory-Masking Autoencoder. (76%)Eashan Adhikarla; Dan Luo; Brian D. Davison
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation. (75%)Wenxiao Wang; Alexander Levine; Soheil Feizi
2022-02-04
Pixle: a fast and effective black-box attack based on rearranging pixels. (98%)Jary Pomponi; Simone Scardapane; Aurelio Uncini
Backdoor Defense via Decoupling the Training Process. (80%)Kunzhe Huang; Yiming Li; Baoyuan Wu; Zhan Qin; Kui Ren
LTU Attacker for Membership Inference. (67%)Joseph Pedersen; Rafael Muñoz-Gómez; Jiangnan Huang; Haozhe Sun; Wei-Wei Tu; Isabelle Guyon
A Survey on Safety-Critical Scenario Generation for Autonomous Driving -- A Methodological Perspective. (1%)Wenhao Ding; Chejian Xu; Mansur Arief; Haohong Lin; Bo Li; Ding Zhao
2022-02-03
ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking. (93%)Chong Xiang; Alexander Valtchanov; Saeed Mahloujifar; Prateek Mittal
Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization. (67%)Xiaojun Xu; Jacky Yibo Zhang; Evelyn Ma; Danny Son; Oluwasanmi Koyejo; Bo Li
2022-02-02
An Eye for an Eye: Defending against Gradient-based Attacks with Gradients. (99%)Hanbin Hong; Yuan Hong; Yu Kong
Smoothed Embeddings for Certified Few-Shot Learning. (76%)Mikhail Pautov; Olesya Kuznetsova; Nurislam Tursynbek; Aleksandr Petiushko; Ivan Oseledets
Probabilistically Robust Learning: Balancing Average- and Worst-case Performance. (75%)Alexander Robey; Luiz F. O. Chamon; George J. Pappas; Hamed Hassani
Make Some Noise: Reliable and Efficient Single-Step Adversarial Training. (68%)Pau de Jorge; Adel Bibi; Riccardo Volpi; Amartya Sanyal; Philip H. S. Torr; Grégory Rogez; Puneet K. Dokania
Robust Binary Models by Pruning Randomly-initialized Networks. (10%)Chen Liu; Ziqi Zhao; Sabine Süsstrunk; Mathieu Salzmann
NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections. (10%)N. Benjamin Erichson; Soon Hoe Lim; Francisco Utrera; Winnie Xu; Ziang Cao; Michael W. Mahoney
2022-02-01
Language Dependencies in Adversarial Attacks on Speech Recognition Systems. (98%)Karla Markert; Donika Mirdita; Konstantin Böttinger
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks. (80%)Anne Harrington; Arturo Deza
Visualizing Automatic Speech Recognition -- Means for a Better Understanding? (64%)Karla Markert; Romain Parracone; Mykhailo Kulakov; Philip Sperl; Ching-Yu Kao; Konstantin Böttinger
Datamodels: Predicting Predictions from Training Data. (2%)Andrew Ilyas; Sung Min Park; Logan Engstrom; Guillaume Leclerc; Aleksander Madry
2022-01-31
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons. (99%)Chandresh Pravin; Ivan Martino; Giuseppe Nicosia; Varun Ojha
Boundary Defense Against Black-box Adversarial Attacks. (99%)Manjushree B. Aithal; Xiaohua Li
Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models. (99%)Viet Quoc Vo; Ehsan Abbasnejad; Damith C. Ranasinghe
Can Adversarial Training Be Manipulated By Non-Robust Features? (98%)Lue Tao; Lei Feng; Hongxin Wei; Jinfeng Yi; Sheng-Jun Huang; Songcan Chen
GADoT: GAN-based Adversarial Training for Robust DDoS Attack Detection. (96%)Maged Abdelaty; Sandra Scott-Hayward; Roberto Doriguzzi-Corin; Domenico Siracusa
Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? (93%)Youngeun Kim; Hyoungseob Park; Abhishek Moitra; Abhiroop Bhattacharjee; Yeshwanth Venkatesha; Priyadarshini Panda
AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks. (89%)Muhammad Usman; Youcheng Sun; Divya Gopinath; Corina S. Pasareanu
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks. (81%)Mingfu Xue; Shifeng Ni; Yinghao Wu; Yushu Zhang; Jian Wang; Weiqiang Liu
On the Robustness of Quality Measures for GANs. (80%)Motasem Alfarra; Juan C. Pérez; Anna Frühstück; Philip H. S. Torr; Peter Wonka; Bernard Ghanem
MEGA: Model Stealing via Collaborative Generator-Substitute Networks. (76%)Chi Hong; Jiyue Huang; Lydia Y. Chen
Learning Robust Representation through Graph Adversarial Contrastive Learning. (26%)Jiayan Guo; Shangyang Li; Yue Zhao; Yan Zhang
Few-Shot Backdoor Attacks on Visual Object Tracking. (10%)Yiming Li; Haoxiang Zhong; Xingjun Ma; Yong Jiang; Shu-Tao Xia
UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs. (9%)Philipp Oberdiek; Gernot A. Fink; Matthias Rottmann
Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors. (5%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Timo Schenk; Adrian Lars Benjamin Iten; Gérôme Bovet; Gregorio Martínez Pérez; Burkhard Stiller
Securing Federated Sensitive Topic Classification against Poisoning Attacks. (1%)Tianyue Chu; Alvaro Garcia-Recuero; Costas Iordanou; Georgios Smaragdakis; Nikolaos Laoutaris
2022-01-30
Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets. (92%)Yong Guo; David Stutz; Bernt Schiele
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks. (84%)Chenhui Deng; Xiuyu Li; Zhuo Feng; Zhiru Zhang
TPC: Transformation-Specific Smoothing for Point Cloud Models. (75%)Wenda Chu; Linyi Li; Bo Li
2022-01-29
Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses. (99%)Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Robustness of Deep Recommendation Systems to Untargeted Interaction Perturbations. (82%)Sejoon Oh; Srijan Kumar
Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms. (1%)Jeongyeol Kwon; Yonathan Efroni; Constantine Caramanis; Shie Mannor
2022-01-28
Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning. (87%)Jie Zhang; Lei Zhang; Gang Li; Chao Wu
Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods. (81%)Raoul Schönhof; Artem Werner; Jannes Elstner; Boldizsar Zopcsak; Ramez Awad; Marco Huber
Certifying Model Accuracy under Distribution Shifts. (74%)Aounon Kumar; Alexander Levine; Tom Goldstein; Soheil Feizi
Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions. (13%)Jiachen Sun; Qingzhao Zhang; Bhavya Kailkhura; Zhiding Yu; Chaowei Xiao; Z. Morley Mao
Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. (4%)Lukas Struppek; Dominik Hintersdorf; Antonio De Almeida Correia; Antonia Adler; Kristian Kersting
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire. (3%)Siddhartha Datta; Nigel Shadbolt
Toward Training at ImageNet Scale with Differential Privacy. (1%)Alexey Kurakin; Shuang Song; Steve Chien; Roxana Geambasu; Andreas Terzis; Abhradeep Thakurta
2022-01-27
Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains. (99%)Qilong Zhang; Xiaodan Li; Yuefeng Chen; Jingkuan Song; Lianli Gao; Yuan He; Hui Xue
Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities. (10%)Xin Du; Benedicte Legastelois; Bhargavi Ganesh; Ajitha Rajan; Hana Chockler; Vaishak Belle; Stuart Anderson; Subramanian Ramamoorthy
CacheFX: A Framework for Evaluating Cache Security. (1%)Daniel Genkin; William Kosasih; Fangfei Liu; Anna Trikalinou; Thomas Unterluggauer; Yuval Yarom
SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders. (1%)Tianshuo Cong; Xinlei He; Yang Zhang
2022-01-26
Boosting 3D Adversarial Attacks with Attacking On Frequency. (98%)Binbin Liu; Jinlai Zhang; Lyujie Chen; Jihong Zhu
How Robust are Discriminatively Trained Zero-Shot Learning Models? (98%)Mehmet Kerim Yucel; Ramazan Gokberk Cinbis; Pinar Duygulu
Autonomous Cyber Defense Introduces Risk: Can We Manage the Risk? (2%)Alexandre K. Ligo; Alexander Kott; Igor Linkov
Automatic detection of access control vulnerabilities via API specification processing. (1%)Alexander Barabanov; Denis Dergunov; Denis Makrushin; Aleksey Teplov
2022-01-25
Virtual Adversarial Training for Semi-supervised Breast Mass Classification. (3%)Xuxin Chen; Ximin Wang; Ke Zhang; Kar-Ming Fung; Theresa C. Thai; Kathleen Moore; Robert S. Mannel; Hong Liu; Bin Zheng; Yuchen Qiu
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training. (1%)Wenyong Huang; Zhenhe Zhang; Yu Ting Yeung; Xin Jiang; Qun Liu
2022-01-24
What You See is Not What the Network Infers: Detecting Adversarial Examples Based on Semantic Contradiction. (99%)Yijun Yang; Ruiyuan Gao; Yu Li; Qiuxia Lai; Qiang Xu
Attacks and Defenses for Free-Riders in Multi-Discriminator GAN. (76%)Zilong Zhao; Jiyue Huang; Stefanie Roos; Lydia Y. Chen
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation. (75%)Zayd Hammoudeh; Daniel Lowd
Backdoor Defense with Machine Unlearning. (33%)Yang Liu; Mingyuan Fan; Cen Chen; Ximeng Liu; Zhuo Ma; Li Wang; Jianfeng Ma
On the Complexity of Attacking Elliptic Curve Based Authentication Chips. (1%)Ievgen Kabin; Zoya Dyka; Dan Klann; Jan Schaeffner; Peter Langendoerfer
2022-01-23
Efficient and Robust Classification for Sparse Attacks. (83%)Mark Beliaev; Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning. (78%)Chenghao Fan; Ziao Li; Wei Wei
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models. (56%)Shagufta Mehnaz; Sayanton V. Dibbo; Ehsanul Kabir; Ninghui Li; Elisa Bertino
Increasing the Cost of Model Extraction with Calibrated Proof of Work. (22%)Adam Dziedzic; Muhammad Ahmad Kaleem; Yu Shen Lu; Nicolas Papernot
2022-01-22
Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection. (99%)Siyuan Liang; Baoyuan Wu; Yanbo Fan; Xingxing Wei; Xiaochun Cao
Robust Unpaired Single Image Super-Resolution of Faces. (98%)Saurabh Goswami; A. N. Rajagopalan
On the Robustness of Counterfactual Explanations to Adverse Perturbations. (10%)Marco Virgolin; Saverio Fracaros
2022-01-21
Robust Unsupervised Graph Representation Learning via Mutual Information Maximization. (99%)Jihong Wang; Minnan Luo; Jundong Li; Ziqi Liu; Jun Zhou; Qinghua Zheng
Natural Attack for Pre-trained Models of Code. (99%)Zhou Yang; Jieke Shi; Junda He; David Lo
The Security of Deep Learning Defences for Medical Imaging. (80%)Moshe Levy; Guy Amit; Yuval Elovici; Yisroel Mirsky
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World. (75%)Hua Ma; Yinshan Li; Yansong Gao; Alsharif Abuadbba; Zhi Zhang; Anmin Fu; Hyoungshick Kim; Said F. Al-Sarawi; Surya Nepal; Derek Abbott
Identifying Adversarial Attacks on Text Classifiers. (73%)Zhouhang Xie; Jonathan Brophy; Adam Noack; Wencong You; Kalyani Asthana; Carter Perkins; Sabrina Reis; Sameer Singh; Daniel Lowd
The Many Faces of Adversarial Risk. (47%)Muni Sreenivas Pydi; Varun Jog
2022-01-20
Learning-based Hybrid Local Search for the Hard-label Textual Attack. (99%)Zhen Yu; Xiaosen Wang; Wanxiang Che; Kun He
Cheating Automatic Short Answer Grading: On the Adversarial Usage of Adjectives and Adverbs. (95%)Anna Filighera; Sebastian Ochs; Tim Steuer; Thomas Tregel
Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges. (93%)Nuria Rodríguez-Barroso; Daniel Jiménez López; M. Victoria Luzón; Francisco Herrera; Eugenio Martínez-Cámara
Low-Interception Waveform: To Prevent the Recognition of Spectrum Waveform Modulation via Adversarial Examples. (83%)Haidong Xie; Jia Tan; Xiaoying Zhang; Nan Ji; Haihua Liao; Zuguo Yu; Xueshuang Xiang; Naijin Liu
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios. (70%)Zhen Xiang; David J. Miller; George Kesidis
Adversarial Jamming for a More Effective Constellation Attack. (56%)Haidong Xie; Yizhou Xu; Yuanqing Chen; Nan Ji; Shuai Yuan; Naijin Liu; Xueshuang Xiang
Steerable Pyramid Transform Enables Robust Left Ventricle Quantification. (13%)Xiangyang Zhu; Kede Ma; Wufeng Xue
DeepGalaxy: Testing Neural Network Verifiers via Two-Dimensional Input Space Exploration. (1%)Xuan Xie; Fuyuan Zhang
2022-01-19
Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation. (96%)Sixiao Zhang; Hongxu Chen; Xiangguo Sun; Yicong Li; Guandong Xu
Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. (8%)Zeyang Sha; Xinlei He; Ning Yu; Michael Backes; Yang Zhang
2022-01-18
TAFA: Task-Agnostic Model Fingerprinting for Deep Neural Networks. (99%)Xudong Pan; Mi Zhang; Yifan Lu; Yifan Yan; Min Yang
Adversarial vulnerability of powerful near out-of-distribution detection. (78%)Stanislav Fort
Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning. (13%)Phung Lai; NhatHai Phan; Abdallah Khreishah; Issa Khalil; Xintao Wu
Secure IoT Routing: Selective Forwarding Attacks and Trust-based Defenses in RPL Network. (2%)Jun Jiang; Yuhong Liu
Lung Swapping Autoencoder: Learning a Disentangled Structure-texture Representation of Chest Radiographs. (1%)Lei Zhou; Joseph Bae; Huidong Liu; Gagandeep Singh; Jeremy Green; Amit Gupta; Dimitris Samaras; Prateek Prasanna
2022-01-17
Masked Faces with Faced Masks. (81%)Jiayi Zhu; Qing Guo; Felix Juefei-Xu; Yihao Huang; Yang Liu; Geguang Pu
Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations. (56%)Chris Emmery; Ákos Kádár; Grzegorz Chrupała; Walter Daelemans
AugLy: Data Augmentations for Robustness. (3%)Zoe Papakipos; Joanna Bitton
2022-01-16
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems. (99%)Wei Jia; Zhaojun Lu; Haichun Zhang; Zhenglin Liu; Jie Wang; Gang Qu
ALA: Adversarial Lightness Attack via Naturalness-aware Regularizations. (99%)Liangru Sun; Felix Juefei-Xu; Yihao Huang; Qing Guo; Jiayi Zhu; Jincao Feng; Yang Liu; Geguang Pu
Adversarial Machine Learning Threat Analysis in Open Radio Access Networks. (33%)Ron Bitton; Dan Avraham; Eitan Klevansky; Dudu Mimran; Oleg Brodt; Heiko Lehmann; Yuval Elovici; Asaf Shabtai
Neighboring Backdoor Attacks on Graph Convolutional Network. (22%)Liang Chen; Qibiao Peng; Jintang Li; Yang Liu; Jiawei Chen; Yong Li; Zibin Zheng
2022-01-15
StolenEncoder: Stealing Pre-trained Encoders. (13%)Yupei Liu; Jinyuan Jia; Hongbin Liu; Neil Zhenqiang Gong
2022-01-14
CommonsenseQA 2.0: Exposing the Limits of AI through Gamification. (56%)Alon Talmor; Ori Yoran; Ronan Le Bras; Chandra Bhagavatula; Yoav Goldberg; Yejin Choi; Jonathan Berant
Security Orchestration, Automation, and Response Engine for Deployment of Behavioural Honeypots. (1%)Upendra Bartwal; Subhasis Mukhopadhyay; Rohit Negi; Sandeep Shukla
2022-01-13
Evaluation of Four Black-box Adversarial Attacks and Some Query-efficient Improvement Analysis. (96%)Rui Wang
The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression. (93%)Hamed Hassani; Adel Javanmard
On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles. (83%)Qingzhao Zhang; Shengtuo Hu; Jiachen Sun; Qi Alfred Chen; Z. Morley Mao
Reconstructing Training Data with Informed Adversaries. (54%)Borja Balle; Giovanni Cherubin; Jamie Hayes
Jamming Attacks on Federated Learning in Wireless Networks. (2%)Yi Shi; Yalin E. Sagduyu
2022-01-12
Adversarially Robust Classification by Conditional Generative Model Inversion. (99%)Mitra Alirezaei; Tolga Tasdizen
Towards Adversarially Robust Deep Image Denoising. (99%)Hanshu Yan; Jingfeng Zhang; Jiashi Feng; Masashi Sugiyama; Vincent Y. F. Tan
Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data. (70%)Sunder Ali Khowaja; Ik Hyun Lee; Kapal Dev; Muhammad Aslam Jarwar; Nawab Muhammad Faseeh Qureshi
2022-01-11
Quantifying Robustness to Adversarial Word Substitutions. (99%)Yuting Yang; Pei Huang; FeiFei Ma; Juan Cao; Meishan Zhang; Jian Zhang; Jintao Li
Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition. (99%)Hanrui Wang; Shuo Wang; Zhe Jin; Yandan Wang; Cunjian Chen; Massimo Tistarelli
2022-01-10
Evaluation of Neural Networks Defenses and Attacks using NDCG and Reciprocal Rank Metrics. (98%)Haya Brama; Lihi Dery; Tal Grinshpoun
IoTGAN: GAN Powered Camouflage Against Machine Learning Based IoT Device Identification. (89%)Tao Hou; Tao Wang; Zhuo Lu; Yao Liu; Yalin Sagduyu
Reciprocal Adversarial Learning for Brain Tumor Segmentation: A Solution to BraTS Challenge 2021 Segmentation Task. (73%)Himashi Peiris; Zhaolin Chen; Gary Egan; Mehrtash Harandi
GMFIM: A Generative Mask-guided Facial Image Manipulation Model for Privacy Preservation. (3%)Mohammad Hossein Khojaste; Nastaran Moradzadeh Farid; Ahmad Nickabadi
Towards Group Robustness in the presence of Partial Group Labels. (1%)Vishnu Suresh Lokhande; Kihyuk Sohn; Jinsung Yoon; Madeleine Udell; Chen-Yu Lee; Tomas Pfister
2022-01-09
Rethink Stealthy Backdoor Attacks in Natural Language Processing. (89%)Lingfeng Shen; Haiyun Jiang; Lemao Liu; Shuming Shi
A Retrospective and Futurespective of Rowhammer Attacks and Defenses on DRAM. (76%)Zhi Zhang; Jiahao Qi; Yueqiang Cheng; Shijie Jiang; Yiyang Lin; Yansong Gao; Surya Nepal; Yi Zou
Privacy-aware Early Detection of COVID-19 through Adversarial Training. (10%)Omid Rohanian; Samaneh Kouchaki; Andrew Soltan; Jenny Yang; Morteza Rohanian; Yang Yang; David Clifton
2022-01-08
LoMar: A Local Defense Against Poisoning Attack on Federated Learning. (9%)Xingyu Li; Zhe Qu; Shangqing Zhao; Bo Tang; Zhuo Lu; Yao Liu
PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++. (1%)Jaewoo Song; Fangzhen Lin
2022-01-07
iDECODe: In-distribution Equivariance for Conformal Out-of-distribution Detection. (93%)Ramneet Kaur; Susmit Jha; Anirban Roy; Sangdon Park; Edgar Dobriban; Oleg Sokolsky; Insup Lee
Asymptotic Security using Bayesian Defense Mechanisms with Application to Cyber Deception. (11%)Hampei Sasahara; Henrik Sandberg
Negative Evidence Matters in Interpretable Histology Image Classification. (1%)Soufiane Belharbi; Marco Pedersoli; Ismail Ben Ayed; Luke McCaffrey; Eric Granger
2022-01-06
Phrase-level Adversarial Example Generation for Neural Machine Translation. (98%)Juncheng Wan; Jian Yang; Shuming Ma; Dongdong Zhang; Weinan Zhang; Yong Yu; Furu Wei
Learning to be adversarially robust and differentially private. (31%)Jamie Hayes; Borja Balle; M. Pawan Kumar
Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time Algorithms and Adversarial Training. (2%)Yatong Bai; Tanmay Gautam; Somayeh Sojoudi
2022-01-05
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving. (99%)Giulio Rossolini; Federico Nesti; Gianluca D'Amico; Saasha Nair; Alessandro Biondi; Giorgio Buttazzo
ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints. (99%)Amira Guesmi; Khaled N. Khasawneh; Nael Abu-Ghazaleh; Ihsen Alouani
Adversarial Robustness in Cognitive Radio Networks. (1%)Makan Zamanipour
2022-01-04
Towards Transferable Unrestricted Adversarial Examples with Minimum Changes. (99%)Fangcheng Liu; Chao Zhang; Hongyang Zhang
Towards Understanding and Harnessing the Effect of Image Transformation in Adversarial Detection. (99%)Hui Liu; Bo Zhao; Yuefeng Peng; Weidong Li; Peng Liu
On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error. (86%)Fabio Brau; Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
Corrupting Data to Remove Deceptive Perturbation: Using Preprocessing Method to Improve System Robustness. (10%)Hieu Le; Hans Walker; Dung Tran; Peter Chin
Towards Understanding Quality Challenges of the Federated Learning: A First Look from the Lens of Robustness. (2%)Amin Eslami Abyane; Derui Zhu; Roberto Medeiros de Souza; Lei Ma; Hadi Hemmati
2022-01-03
Compression-Resistant Backdoor Attack against Deep Neural Networks. (75%)Mingfu Xue; Xin Wang; Shichang Sun; Yushu Zhang; Jian Wang; Weiqiang Liu
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. (68%)Phillip Rieger; Thien Duc Nguyen; Markus Miettinen; Ahmad-Reza Sadeghi
Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control. (11%)Aaron Havens; Darioush Keivan; Peter Seiler; Geir Dullerud; Bin Hu
2022-01-02
Actor-Critic Network for Q&A in an Adversarial Environment. (33%)Bejan Sadeghian
On Sensitivity of Deep Learning Based Text Classification Algorithms to Practical Input Perturbations. (12%)Aamir Miyajiwala; Arnav Ladkat; Samiksha Jagadale; Raviraj Joshi
2022-01-01
Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness. (87%)Hao Yang; Min Wang; Zhengfei Yu; Yun Zhou
Revisiting Neuron Coverage Metrics and Quality of Deep Neural Networks. (41%)Zhou Yang; Jieke Shi; Muhammad Hilmi Asyrofi; David Lo
Generating Adversarial Samples For Training Wake-up Word Detection Systems Against Confusing Words. (1%)Haoxu Wang; Yan Jia; Zeqing Zhao; Xuyang Wang; Junjie Wang; Ming Li
2021-12-31
Adversarial Attack via Dual-Stage Network Erosion. (99%)Yexin Duan; Junhua Zou; Xingyu Zhou; Wu Zhang; Jin Zhang; Zhisong Pan
On Distinctive Properties of Universal Perturbations. (83%)Sung Min Park; Kuo-An Wei; Kai Xiao; Jerry Li; Aleksander Madry
2021-12-30
Benign Overfitting in Adversarially Robust Linear Classification. (99%)Jinghui Chen; Yuan Cao; Quanquan Gu
2021-12-29
Invertible Image Dataset Protection. (92%)Kejiang Chen; Xianhan Zeng; Qichao Ying; Sheng Li; Zhenxing Qian; Xinpeng Zhang
Challenges and approaches for mitigating byzantine attacks in federated learning. (4%)Shengshan Hu; Jianrong Lu; Wei Wan; Leo Yu Zhang
2021-12-28
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently. (99%)Futa Waseda; Sosuke Nishikawa; Trung-Nghia Le; Huy H. Nguyen; Isao Echizen
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. (99%)Weiran Lin; Keane Lucas; Lujo Bauer; Michael K. Reiter; Mahmood Sharif
Repairing Adversarial Texts through Perturbation. (99%)Guoliang Dong; Jingyi Wang; Jun Sun; Sudipta Chattopadhyay; Xinyu Wang; Ting Dai; Jie Shi; Jin Song Dong
DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification. (91%)Aleksandra Ćiprijanović; Diana Kafkes; Gregory Snyder; F. Javier Sánchez; Gabriel Nathan Perdue; Kevin Pedro; Brian Nord; Sandeep Madireddy; Stefan M. Wild
Super-Efficient Super Resolution for Fast Adversarial Defense at the Edge. (88%)Kartikeya Bhardwaj; Dibakar Gope; James Ward; Paul Whatmough; Danny Loh
Mind Your Solver! On Adversarial Attack and Defense for Combinatorial Optimization. (86%)Han Lu; Zenan Li; Runzhong Wang; Qibing Ren; Junchi Yan; Xiaokang Yang
Gas Gauge: A Security Analysis Tool for Smart Contract Out-of-Gas Vulnerabilities. (1%)Behkish Nassirzadeh; Huaiying Sun; Sebastian Banescu; Vijay Ganesh
2021-12-27
Adversarial Attack for Asynchronous Event-based Data. (99%)Wooju Lee; Hyun Myung
PRIME: A Few Primitives Can Boost Robustness to Common Corruptions. (81%)Apostolos Modas; Rahul Rade; Guillermo Ortiz-Jiménez; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Associative Adversarial Learning Based on Selective Attack. (26%)Runqi Wang; Xiaoyue Duan; Baochang Zhang; Song Xue; Wentao Zhu; David Doermann; Guodong Guo
Learning Robust and Lightweight Model through Separable Structured Transformations. (8%)Yanhui Huang; Yangyu Xu; Xian Wei
2021-12-26
Perlin Noise Improve Adversarial Robustness. (99%)Chengjun Tang; Kun Zhang; Chunfang Xing; Yong Ding; Zengmin Xu
2021-12-25
Task and Model Agnostic Adversarial Attack on Graph Neural Networks. (93%)Kartik Sharma; Samidha Verma; Sourav Medya; Sayan Ranu; Arnab Bhattacharya
NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification. (50%)Haibin Zheng; Zhiqing Chen; Tianyu Du; Xuhong Zhang; Yao Cheng; Shouling Ji; Jingyi Wang; Yue Yu; Jinyin Chen
2021-12-24
Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping. (99%)Behnam Ghavami; Seyd Movi; Zhenman Fang; Lesley Shannon
NIP: Neuron-level Inverse Perturbation Against Adversarial Attacks. (98%)Ruoxi Chen; Haibo Jin; Jinyin Chen; Haibin Zheng; Yue Yu; Shouling Ji
CatchBackdoor: Backdoor Testing by Critical Trojan Neural Path Identification via Differential Fuzzing. (82%)Haibo Jin; Ruoxi Chen; Jinyin Chen; Yao Cheng; Chong Fu; Ting Wang; Yue Yu; Zhaoyan Ming
SoK: A Study of the Security on Voice Processing Systems. (9%)Robert Chang; Logan Kuo; Arthur Liu; Nader Sehatbakhsh
DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning. (1%)Ismat Jarin; Birhanu Eshete
Gradient Leakage Attack Resilient Deep Learning. (1%)Wenqi Wei; Ling Liu
2021-12-23
Adaptive Modeling Against Adversarial Attacks. (99%)Zhiwen Yan; Teck Khim Ng
Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization. (99%)Yihua Zhang; Guanhua Zhang; Prashant Khanduri; Mingyi Hong; Shiyu Chang; Sijia Liu
Robust Secretary and Prophet Algorithms for Packing Integer Programs. (2%)C. J. Argue; Anupam Gupta; Marco Molinaro; Sahil Singla
Counterfactual Memorization in Neural Language Models. (2%)Chiyuan Zhang; Daphne Ippolito; Katherine Lee; Matthew Jagielski; Florian Tramèr; Nicholas Carlini
2021-12-22
Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art. (99%)Xiang Ling; Lingfei Wu; Jiangyu Zhang; Zhenqing Qu; Wei Deng; Xiang Chen; Chunming Wu; Shouling Ji; Tianyue Luo; Jingzheng Wu; Yanjun Wu
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? (98%)Xinhsuai Dong; Luu Anh Tuan; Min Lin; Shuicheng Yan; Hanwang Zhang
Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems. (98%)Islam Debicha; Thibault Debatty; Jean-Michel Dricot; Wim Mees; Tayeb Kenaza
Adversarial Deep Reinforcement Learning for Trustworthy Autonomous Driving Policies. (96%)Aizaz Sharif; Dusica Marijan
Understanding and Measuring Robustness of Multimodal Learning. (69%)Nishant Vishwamitra; Hongxin Hu; Ziming Zhao; Long Cheng; Feng Luo
Evaluating the Robustness of Deep Reinforcement Learning for Autonomous and Adversarial Policies in a Multi-agent Urban Driving Environment. (1%)Aizaz Sharif; Dusica Marijan
2021-12-21
A Theoretical View of Linear Backpropagation and Its Convergence. (99%)Ziang Li; Yiwen Guo; Haodi Liu; Changshui Zhang
An Attention Score Based Attacker for Black-box NLP Classifier. (91%)Yueyang Liu; Hunmin Lee; Zhipeng Cai
Covert Communications via Adversarial Machine Learning and Reconfigurable Intelligent Surfaces. (81%)Brian Kim; Tugba Erpek; Yalin E. Sagduyu; Sennur Ulukus
Input-Specific Robustness Certification for Randomized Smoothing. (68%)Ruoxin Chen; Jie Li; Junchi Yan; Ping Li; Bin Sheng
Improving Robustness with Image Filtering. (68%)Matteo Terzi; Mattia Carletti; Gian Antonio Susto
On the Adversarial Robustness of Causal Algorithmic Recourse. (10%)Ricardo Dominguez-Olmedo; Amir-Hossein Karimi; Bernhard Schölkopf
MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation. (4%)Zhongzhi Yu; Yonggan Fu; Sicheng Li; Chaojian Li; Yingyan Lin
Exploring Credibility Scoring Metrics of Perception Systems for Autonomous Driving. (2%)Viren Khandal; Arth Vidyarthi
Longitudinal Study of the Prevalence of Malware Evasive Techniques. (1%)Lorenzo Maffia; Dario Nisi; Platon Kotzias; Giovanni Lagorio; Simone Aonzo; Davide Balzarotti
Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems. (1%)Andrea Stocco; Brian Pulfer; Paolo Tonella
2021-12-20
Certified Federated Adversarial Training. (98%)Giulio Zizzo; Ambrish Rawat; Mathieu Sinn; Sergio Maffeis; Chris Hankin
Energy-bounded Learning for Robust Models of Code. (83%)Nghi D. Q. Bui; Yijun Yu
Black-Box Testing of Deep Neural Networks through Test Case Diversity. (81%)Zohreh Aghababaeyan; Manel Abdellatif; Lionel Briand; Ramesh S; Mojtaba Bagherzadeh
Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction. (80%)Dongfang Li; Baotian Hu; Qingcai Chen; Tujie Xu; Jingcong Tao; Yunan Zhang
Adversarially Robust Stability Certificates can be Sample-Efficient. (2%)Thomas T. C. K. Zhang; Stephen Tu; Nicholas M. Boffi; Jean-Jacques E. Slotine; Nikolai Matni
2021-12-19
Initiative Defense against Facial Manipulation. (67%)Qidong Huang; Jie Zhang; Wenbo Zhou; Weiming Zhang; Nenghai Yu
2021-12-18
Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks. (12%)Simone Marullo; Matteo Tiezzi; Marco Gori; Stefano Melacci
Android-COCO: Android Malware Detection with Graph Neural Network for Byte- and Native-Code. (1%)Peng Xu
2021-12-17
Reasoning Chain Based Adversarial Attack for Multi-hop Question Answering. (92%)Jiayu Ding; Siyuan Wang; Qin Chen; Zhongyu Wei
Deep Bayesian Learning for Car Hacking Detection. (81%)Laha Ale; Scott A. King; Ning Zhang
Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations. (81%)Siddhant Arora; Danish Pruthi; Norman Sadeh; William W. Cohen; Zachary C. Lipton; Graham Neubig
Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network. (80%)An Tao; Yueqi Duan; He Wang; Ziyi Wu; Pengliang Ji; Haowen Sun; Jie Zhou; Jiwen Lu
Provable Adversarial Robustness in the Quantum Model. (62%)Khashayar Barooti; Grzegorz Głuch; Ruediger Urbanke
Domain Adaptation on Point Clouds via Geometry-Aware Implicits. (1%)Yuefan Shen; Yanchao Yang; Mi Yan; He Wang; Youyi Zheng; Leonidas Guibas
2021-12-16
Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning. (99%)Tong Chen; Zhan Ma
Addressing Adversarial Machine Learning Attacks in Smart Healthcare Perspectives. (99%)Arawinkumaar Selvakkumar; Shantanu Pal; Zahra Jadidi
All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines. (99%)Yuxuan Zhang; Bo Dong; Felix Heide
TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations. (64%)Shivangi Aneja; Lev Markhasin; Matthias Niessner
A Robust Optimization Approach to Deep Learning. (45%)Dimitris Bertsimas; Xavier Boix; Kimberly Villalobos Carballo; Dick den Hertog
APTSHIELD: A Stable, Efficient and Real-time APT Detection System for Linux Hosts. (16%)Tiantian Zhu; Jinkai Yu; Tieming Chen; Jiayu Wang; Jie Ying; Ye Tian; Mingqi Lv; Yan Chen; Yuan Fan; Ting Wang
Dataset correlation inference attacks against machine learning models. (13%)Ana-Maria Creţu; Florent Guépin; Yves-Alexandre de Montjoye
Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images. (2%)Shiran Zada; Itay Benou; Michal Irani
Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants. (2%)Max Bartolo; Tristan Thrush; Sebastian Riedel; Pontus Stenetorp; Robin Jia; Douwe Kiela
2021-12-15
On the Convergence and Robustness of Adversarial Training. (99%)Yisen Wang; Xingjun Ma; James Bailey; Jinfeng Yi; Bowen Zhou; Quanquan Gu
Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks. (97%)Jaehui Hwang; Huan Zhang; Jun-Ho Choi; Cho-Jui Hsieh; Jong-Seok Lee
DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models. (75%)Hongyu Zhu; Yan Chen; Jing Yan; Jing Liu; Yu Hong; Ying Chen; Hua Wu; Haifeng Wang
Robust Neural Network Classification via Double Regularization. (1%)Olof Zetterqvist; Rebecka Jörnsten; Johan Jonasson
2021-12-14
Robustifying automatic speech recognition by extracting slowly varying features. (99%)Matias Pizarro; Dorothea Kolossa; Asja Fischer
Adversarial Examples for Extreme Multilabel Text Classification. (99%)Mohammadreza Qaraei; Rohit Babbar
Dual-Key Multimodal Backdoors for Visual Question Answering. (81%)Matthew Walmer; Karan Sikka; Indranil Sur; Abhinav Shrivastava; Susmit Jha
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training. (76%)Chen Liu; Zhichao Huang; Mathieu Salzmann; Tong Zhang; Sabine Süsstrunk
MuxLink: Circumventing Learning-Resilient MUX-Locking Using Graph Neural Network-based Link Prediction. (4%)Lilas Alrahis; Satwik Patnaik; Muhammad Shafique; Ozgur Sinanoglu
2021-12-13
Detecting Audio Adversarial Examples with Logit Noising. (99%)Namgyu Park; Sangwoo Ji; Jong Kim
Triangle Attack: A Query-efficient Decision-based Adversarial Attack. (99%)Xiaosen Wang; Zeliang Zhang; Kangheng Tong; Dihong Gong; Kun He; Zhifeng Li; Wei Liu
2021-12-12
Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses. (98%)Chun Pong Lau; Jiang Liu; Hossein Souri; Wei-An Lin; Soheil Feizi; Rama Chellappa
Quantifying and Understanding Adversarial Examples in Discrete Input Spaces. (91%)Volodymyr Kuleshov; Evgenii Nikishin; Shantanu Thakoor; Tingfung Lau; Stefano Ermon
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. (91%)Ashwinee Panda; Saeed Mahloujifar; Arjun N. Bhagoji; Supriyo Chakraborty; Prateek Mittal
WOOD: Wasserstein-based Out-of-Distribution Detection. (12%)Yinan Wang; Wenbo Sun; Jionghua "Judy" Jin; Zhenyu "James" Kong; Xiaowei Yue
2021-12-11
MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare. (99%)Muchao Ye; Junyu Luo; Guanjie Zheng; Cao Xiao; Ting Wang; Fenglong Ma
Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting. (98%)Junhua Zou; Zhisong Pan; Junyang Qiu; Xin Liu; Ting Rui; Wei Li
Stereoscopic Universal Perturbations across Different Architectures and Datasets. (98%)Zachary Berger; Parth Agrawal; Tian Yu Liu; Stefano Soatto; Alex Wong
2021-12-10
Learning to Learn Transferable Attack. (99%)Shuman Fang; Jie Li; Xianming Lin; Rongrong Ji
Cross-Modal Transferable Adversarial Attacks from Images to Videos. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks. (92%)Seungyong Moon; Gaon An; Hyun Oh Song
Attacking Point Cloud Segmentation with Color-only Perturbation. (87%)Jiacen Xu; Zhe Zhou; Boyuan Feng; Yufei Ding; Zhou Li
Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning. (75%)Yang Liu; Tianyuan Zou; Yan Kang; Wenhan Liu; Yuanqin He; Zhihao Yi; Qiang Yang
Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models. (68%)Jialuo Chen; Jingyi Wang; Tinglan Peng; Youcheng Sun; Peng Cheng; Shouling Ji; Xingjun Ma; Bo Li; Dawn Song
Efficient Action Poisoning Attacks on Linear Contextual Bandits. (67%)Guanlin Liu; Lifeng Lai
How Private Is Your RL Policy? An Inverse RL Based Analysis Framework. (41%)Kritika Prakash; Fiza Husain; Praveen Paruchuri; Sujit P. Gujar
SoK: On the Security & Privacy in Federated Learning. (5%)Gorka Abad; Stjepan Picek; Aitor Urbieta
2021-12-09
RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit. (99%)Viet Quoc Vo; Ehsan Abbasnejad; Damith C. Ranasinghe
Amicable Aid: Turning Adversarial Attack to Benefit Classification. (99%)Juyeop Kim; Jun-Ho Choi; Soobeom Jang; Jong-Seok Lee
Mutual Adversarial Training: Learning together is better than going alone. (99%)Jiang Liu; Chun Pong Lau; Hossein Souri; Soheil Feizi; Rama Chellappa
PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function. (99%)Manaar Alam; Shubhajit Datta; Debdeep Mukhopadhyay; Arijit Mondal; Partha Pratim Chakrabarti
Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures. (69%)Eugene Bagdasaryan; Vitaly Shmatikov
Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach. (38%)Saber Jafarpour; Matthew Abate; Alexander Davydov; Francesco Bullo; Samuel Coogan
PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures. (10%)Dan Hendrycks; Andy Zou; Mantas Mazeika; Leonard Tang; Dawn Song; Jacob Steinhardt
Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems. (2%)Jiankai Jin; Eleanor McMurtry; Benjamin I. P. Rubinstein; Olga Ohrimenko
3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection. (1%)Alexander Lehner; Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Mohammad-Ali Nikouei Mahani; Nassir Navab; Benjamin Busam; Federico Tombari
2021-12-08
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection. (99%)Jiang Liu; Alexander Levine; Chun Pong Lau; Rama Chellappa; Soheil Feizi
On visual self-supervision and its effect on model robustness. (99%)Michal Kucer; Diane Oyen; Garrett Kenyon
SNEAK: Synonymous Sentences-Aware Adversarial Attack on Natural Language Video Localization. (93%)Wenbo Gou; Wen Shi; Jian Lou; Lijie Huang; Pan Zhou; Ruixuan Li
Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework. (8%)Ching-Yun Ko; Jeet Mohapatra; Sijia Liu; Pin-Yu Chen; Luca Daniel; Lily Weng
2021-12-07
Saliency Diversified Deep Ensemble for Robustness to Adversaries. (99%)Alex Bogun; Dimche Kostadinov; Damian Borth
Vehicle trajectory prediction works, but not everywhere. (50%)Mohammadhossein Bahari; Saeed Saadatnejad; Ahmad Rahimi; Mohammad Shaverdikondori; Mohammad Shahidzadeh; Seyed-Mohsen Moosavi-Dezfooli; Alexandre Alahi
Lightning: Striking the Secure Isolation on GPU Clouds with Transient Hardware Faults. (11%)Rihui Sun; Pengfei Qiu; Yongqiang Lyu; Dongsheng Wang; Jiang Dong; Gang Qu
Membership Inference Attacks From First Principles. (2%)Nicholas Carlini; Steve Chien; Milad Nasr; Shuang Song; Andreas Terzis; Florian Tramer
Training Deep Models to be Explained with Fewer Examples. (1%)Tomoharu Iwata; Yuya Yoshikawa
Presentation Attack Detection Methods based on Gaze Tracking and Pupil Dynamic: A Comprehensive Survey. (1%)Jalil Nourmohammadi Khiarak
2021-12-06
Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review. (99%)Huda Ali Alatwi; Charles Morisset
Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal. (84%)Yucheng Shi; Yahong Han
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks. (82%)Jing Lin; Long Dang; Mohamed Rahouti; Kaiqi Xiong
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks. (82%)Xi Li; Zhen Xiang; David J. Miller; George Kesidis
When the Curious Abandon Honesty: Federated Learning Is Not Private. (68%)Franziska Boenisch; Adam Dziedzic; Roei Schuster; Ali Shahin Shamsabadi; Ilia Shumailov; Nicolas Papernot
Defending against Model Stealing via Verifying Embedded External Features. (33%)Yiming Li; Linghui Zhu; Xiaojun Jia; Yong Jiang; Shu-Tao Xia; Xiaochun Cao
Context-Aware Transfer Attacks for Object Detection. (1%)Zikui Cai; Xinxin Xie; Shasha Li; Mingjun Yin; Chengyu Song; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury; M. Salman Asif
2021-12-05
Robust Active Learning: Sample-Efficient Training of Robust Deep Learning Models. (96%)Yuejun Guo; Qiang Hu; Maxime Cordy; Mike Papadakis; Yves Le Traon
Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness. (88%)Konstantinos P. Panousis; Sotirios Chatzis; Sergios Theodoridis
Beyond Robustness: Resilience Verification of Tree-Based Classifiers. (2%)Stefano Calzavara; Lorenzo Cazzaro; Claudio Lucchese; Federico Marcuzzi; Salvatore Orlando
On Impact of Semantically Similar Apps in Android Malware Datasets. (1%)Roopak Surendran
2021-12-04
RADA: Robust Adversarial Data Augmentation for Camera Localization in Challenging Weather. (10%)Jialu Wang; Muhamad Risqi U. Saputra; Chris Xiaoxuan Lu; Niki Trigon; Andrew Markham
2021-12-03
Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach. (99%)James Lee Hu; Mohammadreza Ebrahimi; Hsinchun Chen
Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing. (99%)Bhagyashree Puranik; Upamanyu Madhow; Ramtin Pedarsani
Blackbox Untargeted Adversarial Testing of Automatic Speech Recognition Systems. (98%)Xiaoliang Wu; Ajitha Rajan
Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models. (54%)Tochukwu Idika; Ismail Akturk
Adversarial Attacks against a Satellite-borne Multispectral Cloud Detector. (13%)Andrew Du; Yee Wei Law; Michele Sasdelli; Bo Chen; Ken Clarke; Michael Brown; Tat-Jun Chin
A Game-Theoretic Approach for AI-based Botnet Attack Defence. (9%)Hooman Alavizadeh; Julian Jang-Jaccard; Tansu Alpcan; Seyit A. Camtepe
2021-12-02
A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space. (99%)Thibault Simonetto; Salijona Dyrmishi; Salah Ghamizi; Maxime Cordy; Yves Le Traon
Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? (93%)Ayesha Siddique; Khaza Anuarul Hoque
Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness? (75%)Peter Lorenz; Dominik Strassel; Margret Keuper; Janis Keuper
Training Efficiency and Robustness in Deep Learning. (41%)Fartash Faghri
FedRAD: Federated Robust Adaptive Distillation. (10%)Stefán Páll Sturluson; Samuel Trew; Luis Muñoz-González; Matei Grama; Jonathan Passerat-Palmbach; Daniel Rueckert; Amir Alansary
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis. (3%)Yu Feng; Benteng Ma; Jing Zhang; Shanshan Zhao; Yong Xia; Dacheng Tao
On the Existence of the Adversarial Bayes Classifier (Extended Version). (2%)Pranjal Awasthi; Natalie S. Frank; Mehryar Mohri
Editing a classifier by rewriting its prediction rules. (1%)Shibani Santurkar; Dimitris Tsipras; Mahalaxmi Elango; David Bau; Antonio Torralba; Aleksander Madry
2021-12-01
Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems. (99%)Siyu Wang; Yuanjiang Cao; Xiaocong Chen; Lina Yao; Xianzhi Wang; Quan Z. Sheng
Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness. (99%)Jia-Li Yin; Lehui Xie; Wanqing Zhu; Ximeng Liu; Bo-Hao Chen
$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training. (99%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines. (96%)Jiachen Sun; Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Dan Hendrycks; Jihun Hamm; Z. Morley Mao
Adv-4-Adv: Thwarting Changing Adversarial Perturbations via Adversarial Domain Adaptation. (95%)Tianyue Zheng; Zhe Chen; Shuya Ding; Chao Cai; Jun Luo
Robustness in Deep Learning for Computer Vision: Mind the gap? (31%)Nathan Drenkow; Numair Sani; Ilya Shpitser; Mathias Unberath
CYBORG: Blending Human Saliency Into the Loss Improves Deep Learning. (1%)Aidan Boyd; Patrick Tinsley; Kevin Bowyer; Adam Czajka
2021-11-30
Using a GAN to Generate Adversarial Examples to Facial Image Recognition. (99%)Andrew Merrigan; Alan F. Smeaton
Mitigating Adversarial Attacks by Distributing Different Copies to Different Users. (86%)Jiyi Zhang; Wesley Joon-Wie Tann; Ee-Chien Chang
Human Imperceptible Attacks and Applications to Improve Fairness. (83%)Xinru Hua; Huanzhong Xu; Jose Blanchet; Viet Nguyen
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. (81%)Yangsibo Huang; Samyak Gupta; Zhao Song; Kai Li; Sanjeev Arora
FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection. (78%)Nikolaos Dionelis
COREATTACK: Breaking Up the Core Structure of Graphs. (78%)Bo Zhou; Yuqian Lv; Jinhuan Wang; Jian Zhang; Qi Xuan
Adversarial Attacks Against Deep Generative Models on Data: A Survey. (12%)Hui Sun; Tianqing Zhu; Zhiqiu Zhang; Dawei Jin; Ping Xiong; Wanlei Zhou
A Face Recognition System's Worst Morph Nightmare, Theoretically. (1%)Una M. Kelly; Raymond Veldhuis; Luuk Spreeuwers
New Datasets for Dynamic Malware Classification. (1%)Berkant Düzgün; Aykut Çayır; Ferhat Demirkıran; Ceyda Nur Kayha; Buket Gençaydın; Hasan Dağ
Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems. (1%)Xingyu Zhao; Wei Huang; Vibhav Bharti; Yi Dong; Victoria Cox; Alec Banks; Sen Wang; Sven Schewe; Xiaowei Huang
2021-11-29
MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack. (99%)Mengting Xu; Tao Zhang; Daoqiang Zhang
Adversarial Attacks in Cooperative AI. (82%)Ted Fujimoto; Arthur Paul Pedersen
Living-Off-The-Land Command Detection Using Active Learning. (10%)Talha Ongun; Jack W. Stokes; Jonathan Bar Or; Ke Tian; Farid Tajaddodianfar; Joshua Neil; Christian Seifert; Alina Oprea; John C. Platt
A Simple Long-Tailed Recognition Baseline via Vision-Language Model. (1%)Teli Ma; Shijie Geng; Mengmeng Wang; Jing Shao; Jiasen Lu; Hongsheng Li; Peng Gao; Yu Qiao
ROBIN : A Benchmark for Robustness to Individual Nuisances in Real-World Out-of-Distribution Shifts. (1%)Bingchen Zhao; Shaozuo Yu; Wufei Ma; Mingxin Yu; Shenxiao Mei; Angtian Wang; Ju He; Alan Yuille; Adam Kortylewski
Pyramid Adversarial Training Improves ViT Performance. (1%)Charles Herrmann; Kyle Sargent; Lu Jiang; Ramin Zabih; Huiwen Chang; Ce Liu; Dilip Krishnan; Deqing Sun
2021-11-28
Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images. (96%)Dvij Kalaria; Aritra Hazra; Partha Pratim Chakrabarti
MALIGN: Adversarially Robust Malware Family Detection using Sequence Alignment. (54%)Shoumik Saha; Sadia Afroz; Atif Rahman
Automated Runtime-Aware Scheduling for Multi-Tenant DNN Inference on GPU. (1%)Fuxun Yu; Shawn Bray; Di Wang; Longfei Shangguan; Xulong Tang; Chenchen Liu; Xiang Chen
ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification. (1%)Zhibo Zhang; Jongseong Jang; Chiheb Trabelsi; Ruiwen Li; Scott Sanner; Yeonjeong Jeong; Dongsub Shim
2021-11-27
Adaptive Image Transformations for Transfer-based Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Shiguang Shan
Adaptive Perturbation for Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Shiguang Shan
Statically Detecting Adversarial Malware through Randomised Chaining. (98%)Matthew Crawford; Wei Wang; Ruoxi Sun; Minhui Xue
Dissecting Malware in the Wild. (1%)Hamish Spencer; Wei Wang; Ruoxi Sun; Minhui Xue
2021-11-26
ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural Networks. (50%)Hua Qi; Zhijie Wang; Qing Guo; Jianlang Chen; Felix Juefei-Xu; Lei Ma; Jianjun Zhao
2021-11-25
AdvBokeh: Learning to Adversarially Defocus Blur. (99%)Yihao Huang; Felix Juefei-Xu; Qing Guo; Weikai Miao; Yang Liu; Geguang Pu
Clustering Effect of (Linearized) Adversarial Robust Models. (97%)Yang Bai; Xin Yan; Yong Jiang; Shu-Tao Xia; Yisen Wang
Simple Contrastive Representation Adversarial Learning for NLP Tasks. (93%)Deshui Miao; Jiaqi Zhang; Wenbo Xie; Jian Song; Xin Li; Lijuan Jia; Ning Guo
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. (92%)Xiangyu Qi; Tinghao Xie; Ruizhe Pan; Jifeng Zhu; Yong Yang; Kai Bu
Going Grayscale: The Road to Understanding and Improving Unlearnable Examples. (92%)Zhuoran Liu; Zhengyu Zhao; Alex Kolmus; Tijn Berns; Laarhoven Twan van; Tom Heskes; Martha Larson
Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning. (3%)Sanjay Kariyappa; Moinuddin K Qureshi
Joint inference and input optimization in equilibrium networks. (1%)Swaminathan Gurumurthy; Shaojie Bai; Zachary Manchester; J. Zico Kolter
2021-11-24
Thundernna: a white box adversarial attack. (99%)Linfeng Ye
Unity is strength: Improving the Detection of Adversarial Examples with Ensemble Approaches. (99%)Francesco Craighero; Fabrizio Angaroni; Fabio Stella; Chiara Damiani; Marco Antoniotti; Alex Graudenzi
Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity. (92%)Bernardo Aquino; Arash Rahnama; Peter Seiler; Lizhen Lin; Vijay Gupta
WFDefProxy: Modularly Implementing and Empirically Evaluating Website Fingerprinting Defenses. (15%)Jiajun Gong; Wuqi Zhang; Charles Zhang; Tao Wang
SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation. (5%)Yizhou Wang; Can Qin; Rongzhe Wei; Yi Xu; Yue Bai; Yun Fu
An Attack on Feature Level-based Facial Soft-biometric Privacy Enhancement. (2%)Dailé Osorio-Roig; Christian Rathgeb; Pawel Drozdowski; Philipp Terhörst; Vitomir Štruc; Christoph Busch
Accelerating Deep Learning with Dynamic Data Pruning. (1%)Ravi S Raju; Kyle Daruwalla; Mikko Lipasti
2021-11-23
Adversarial machine learning for protecting against online manipulation. (92%)Stefano Cresci; Marinella Petrocchi; Angelo Spognardi; Stefano Tognazzi
Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS. (84%)Witt Christian Schroeder de; Yongchao Huang; Philip H. S. Torr; Martin Strohmeier
Subspace Adversarial Training. (69%)Tao Li; Yingwen Wu; Sizhe Chen; Kun Fang; Xiaolin Huang
HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance. (1%)Huanrui Yang; Xiaoxuan Yang; Neil Zhenqiang Gong; Yiran Chen
2021-11-22
Adversarial Examples on Segmentation Models Can be Easy to Transfer. (99%)Jindong Gu; Hengshuang Zhao; Volker Tresp; Philip Torr
Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes. (99%)Utku Ozbulak; Maura Pintor; Messem Arnout Van; Neve Wesley De
Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification. (99%)Daizong Liu; Wei Hu
Backdoor Attack through Frequency Domain. (92%)Tong Wang; Yuan Yao; Feng Xu; Shengwei An; Hanghang Tong; Ting Wang
NTD: Non-Transferability Enabled Backdoor Detection. (69%)Yinshan Li; Hua Ma; Zhi Zhang; Yansong Gao; Alsharif Abuadbba; Anmin Fu; Yifeng Zheng; Said F. Al-Sarawi; Derek Abbott
A Comparison of State-of-the-Art Techniques for Generating Adversarial Malware Binaries. (33%)Prithviraj Dasgupta; Zachariah Osman
Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. (13%)Yongji Wu; Xiaoyu Cao; Jinyuan Jia; Neil Zhenqiang Gong
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration. (1%)Yifan Gong; Geng Yuan; Zheng Zhan; Wei Niu; Zhengang Li; Pu Zhao; Yuxuan Cai; Sijia Liu; Bin Ren; Xue Lin; Xulong Tang; Yanzhi Wang
Electric Vehicle Attack Impact on Power Grid Operation. (1%)Mohammad Ali Sayed; Ribal Atallah; Chadi Assi; Mourad Debbabi
2021-11-21
Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models. (99%)Alon Zolfi; Shai Avidan; Yuval Elovici; Asaf Shabtai
Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability. (99%)Yifeng Xiong; Jiadong Lin; Min Zhang; John E. Hopcroft; Kun He
Medical Aegis: Robust adversarial protectors for medical images. (99%)Qingsong Yao; Zecheng He; S. Kevin Zhou
Local Linearity and Double Descent in Catastrophic Overfitting. (73%)Varun Sivashankar; Nikil Selvam
Denoised Internal Models: a Brain-Inspired Autoencoder against Adversarial Attacks. (62%)Kaiyuan Liu; Xingyu Li; Yi Zhou; Jisong Guan; Yurui Lai; Ge Zhang; Hang Su; Jiachen Wang; Chunxu Guo
2021-11-20
Are Vision Transformers Robust to Patch Perturbations? (98%)Jindong Gu; Volker Tresp; Yao Qin
2021-11-19
Zero-Shot Certified Defense against Adversarial Patches with Vision Transformers. (99%)Yuheng Huang; Yuanchun Li
Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method. (99%)Tao Bai; Jun Zhao; Jinlin Zhu; Shoudong Han; Jiefeng Chen; Bo Li; Alex Kot
Meta Adversarial Perturbations. (99%)Chia-Hung Yuan; Pin-Yu Chen; Chia-Mu Yu
Resilience from Diversity: Population-based approach to harden models against adversarial attacks. (99%)Jasser Jasser; Ivan Garibay
Enhanced countering adversarial attacks via input denoising and feature restoring. (99%)Yanni Li; Wenhui Zhang; Jiawei Liu; Xiaoli Kou; Hui Li; Jiangtao Cui
Fooling Adversarial Training with Inducing Noise. (98%)Zhirui Wang; Yifei Wang; Yisen Wang
Exposing Weaknesses of Malware Detectors with Explainability-Guided Evasion Attacks. (86%)Wei Wang; Ruoxi Sun; Tian Dong; Shaofeng Li; Minhui Xue; Gareth Tyson; Haojin Zhu
2021-11-18
TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems. (99%)Bao Gia Doan; Minhui Xue; Shiqing Ma; Ehsan Abbasnejad; Damith C. Ranasinghe
A Review of Adversarial Attack and Defense for Classification Methods. (99%)Yao Li; Minhao Cheng; Cho-Jui Hsieh; Thomas C. M. Lee
Robust Person Re-identification with Multi-Modal Joint Defence. (98%)Yunpeng Gong; Lifei Chen
Enhancing the Insertion of NOP Instructions to Obfuscate Malware via Deep Reinforcement Learning. (96%)Daniel Gibert; Matt Fredrikson; Carles Mateu; Jordi Planes; Quan Le
How to Build Robust FAQ Chatbot with Controllable Question Generator? (80%)Yan Pan; Mingyang Ma; Bernhard Pflugfelder; Georg Groh
Adversarial attacks on voter model dynamics in complex networks. (76%)Katsumi Chiyomaru; Kazuhiro Takemoto
Enhanced Membership Inference Attacks against Machine Learning Models. (12%)Jiayuan Ye; Aadyaa Maddi; Sasi Kumar Murakonda; Reza Shokri
Wiggling Weights to Improve the Robustness of Classifiers. (2%)Sadaf Gulshad; Ivan Sosnovik; Arnold Smeulders
Improving Transferability of Representations via Augmentation-Aware Self-Supervision. (1%)Hankook Lee; Kibok Lee; Kimin Lee; Honglak Lee; Jinwoo Shin
2021-11-17
TraSw: Tracklet-Switch Adversarial Attacks against Multi-Object Tracking. (99%)Delv Lin; Qi Chen; Chengyu Zhou; Kun He
Generating Unrestricted 3D Adversarial Point Clouds. (99%)Xuelong Dai; Yanjie Li; Hua Dai; Bin Xiao
SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness. (93%)Jongheon Jeong; Sejun Park; Minkyu Kim; Heung-Chang Lee; Doguk Kim; Jinwoo Shin
Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation. (92%)Mehdi Sadi; B. M. S. Bahar Talukder; Kaniz Mishty; Md Tauhidur Rahman
Do Not Trust Prediction Scores for Membership Inference Attacks. (33%)Dominik Hintersdorf; Lukas Struppek; Kristian Kersting
2021-11-16
Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks. (99%)Adaku Uchendu; Daniel Campoy; Christopher Menart; Alexandra Hildenbrandt
Improving the robustness and accuracy of biomedical language models through adversarial training. (99%)Milad Moradi; Matthias Samwald
Detecting AutoAttack Perturbations in the Frequency Domain. (99%)Peter Lorenz; Paula Harder; Dominik Strassel; Margret Keuper; Janis Keuper
Adversarial Tradeoffs in Linear Inverse Problems and Robust State Estimation. (92%)Bruce D. Lee; Thomas T. C. K. Zhang; Hamed Hassani; Nikolai Matni
Consistent Semantic Attacks on Optical Flow. (81%)Tom Koren; Lior Talker; Michael Dinerstein; Roy J Jevnisek
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences. (54%)Wei Guo; Benedetta Tondi; Mauro Barni
Enabling equivariance for arbitrary Lie groups. (1%)Lachlan Ewen MacDonald; Sameera Ramasinghe; Simon Lucey
2021-11-15
A Survey on Adversarial Attacks for Malware Analysis. (98%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam
Triggerless Backdoor Attack for NLP Tasks with Clean Labels. (68%)Leilei Gan; Jiwei Li; Tianwei Zhang; Xiaoya Li; Yuxian Meng; Fei Wu; Shangwei Guo; Chun Fan
Property Inference Attacks Against GANs. (67%)Junhao Zhou; Yufei Chen; Chao Shen; Yang Zhang
2021-11-14
Generating Band-Limited Adversarial Surfaces Using Neural Networks. (99%)Roee Ben-Shlomo; Yevgeniy Men; Ido Imanuel
Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks. (76%)Chen Ma; Xiangyu Guo; Li Chen; Jun-Hai Yong; Yisen Wang
Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning. (75%)Youxiang Zhu; Bang Tran; Xiaohui Liang; John A. Batsis; Robert M. Roth
Improving Compound Activity Classification via Deep Transfer and Representation Learning. (1%)Vishal Dey; Raghu Machiraju; Xia Ning
2021-11-13
Robust and Accurate Object Detection via Self-Knowledge Distillation. (62%)Weipeng Xu; Pengzhi Chu; Renhao Xie; Xiongziyan Xiao; Hongcheng Huang
UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction. (2%)Lilas Alrahis; Satwik Patnaik; Muhammad Abdullah Hanif; Muhammad Shafique; Ozgur Sinanoglu
2021-11-12
Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception. (99%)Joel Dapello; Jenelle Feather; Hang Le; Tiago Marques; David D. Cox; Josh H. McDermott; James J. DiCarlo; SueYeon Chung
Measuring the Contribution of Multiple Model Representations in Detecting Adversarial Instances. (98%)Daniel Steinberg; Paul Munro
Adversarially Robust Learning for Security-Constrained Optimal Power Flow. (10%)Priya L. Donti; Aayushya Agarwal; Neeraj Vijay Bedmutha; Larry Pileggi; J. Zico Kolter
On Transferability of Prompt Tuning for Natural Language Understanding. (8%)Yusheng Su; Xiaozhi Wang; Yujia Qin; Chi-Min Chan; Yankai Lin; Zhiyuan Liu; Peng Li; Juanzi Li; Lei Hou; Maosong Sun; Jie Zhou
A Bayesian Nash equilibrium-based moving target defense against stealthy sensor attacks. (1%)David Umsonst; Serkan Sarıtaş; György Dán; Henrik Sandberg
Resilient Consensus-based Multi-agent Reinforcement Learning. (1%)Martin Figura; Yixuan Lin; Ji Liu; Vijay Gupta
2021-11-11
On the Equivalence between Neural Network and Support Vector Machine. (1%)Yilan Chen; Wei Huang; Lam M. Nguyen; Tsui-Wei Weng
2021-11-10
Trustworthy Medical Segmentation with Uncertainty Estimation. (13%)Giuseppina Carannante; Dimah Dera; Nidhal C. Bouaynaya; Ghulam Rasool; Hassan M. Fathallah-Shaykh
Robust Learning via Ensemble Density Propagation in Deep Neural Networks. (2%)Giuseppina Carannante; Dimah Dera; Ghulam Rasool; Nidhal C. Bouaynaya; Lyudmila Mihaylova
2021-11-09
Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search. (99%)Pengfei Xia; Ziqiang Li; Bin Li
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps. (99%)Muhammad Awais; Fengwei Zhou; Chuanlong Xie; Jiawei Li; Sung-Ho Bae; Zhenguo Li
Sparse Adversarial Video Attacks with Spatial Transformations. (98%)Ronghui Mu; Wenjie Ruan; Leandro Soriano Marcolino; Qiang Ni
A Statistical Difference Reduction Method for Escaping Backdoor Detection. (97%)Pengfei Xia; Hongjing Niu; Ziqiang Li; Bin Li
Data Augmentation Can Improve Robustness. (73%)Sylvestre-Alvise Rebuffi; Sven Gowal; Dan A. Calian; Florian Stimberg; Olivia Wiles; Timothy Mann
Are Transformers More Robust Than CNNs? (67%)Yutong Bai; Jieru Mei; Alan Yuille; Cihang Xie
2021-11-08
Geometrically Adaptive Dictionary Attack on Face Recognition. (99%)Junyoung Byun; Hyojun Go; Changick Kim
Defense Against Explanation Manipulation. (98%)Ruixiang Tang; Ninghao Liu; Fan Yang; Na Zou; Xia Hu
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories. (98%)Adnan Siraj Rakin; Md Hafizul Islam Chowdhuryy; Fan Yao; Deliang Fan
On Assessing The Safety of Reinforcement Learning algorithms Using Formal Methods. (75%)Paulina Stevia Nouwou Mindom; Amin Nikanjam; Foutse Khomh; John Mullins
Get a Model! Model Hijacking Attack Against Machine Learning Models. (69%)Ahmed Salem; Michael Backes; Yang Zhang
Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks. (69%)Lijia Yu; Xiao-Shan Gao
Characterizing the adversarial vulnerability of speech self-supervised learning. (68%)Haibin Wu; Bo Zheng; Xu Li; Xixin Wu; Hung-yi Lee; Helen Meng
HAPSSA: Holistic Approach to PDF Malware Detection Using Signal and Statistical Analysis. (67%)Tajuddin Manhar Mohammed; Lakshmanan Nataraj; Satish Chikkagoudar; Shivkumar Chandrasekaran; B. S. Manjunath
Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning. (67%)Qinkai Zheng; Xu Zou; Yuxiao Dong; Yukuo Cen; Da Yin; Jiarong Xu; Yang Yang; Jie Tang
BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination. (1%)Ece Isik-Polat; Gorkem Polat; Altan Kocyigit
2021-11-07
Generative Dynamic Patch Attack. (99%)Xiang Li; Shihao Ji
Natural Adversarial Objects. (81%)Felix Lau; Nishant Subramani; Sasha Harrison; Aerin Kim; Elliot Branson; Rosanne Liu
2021-11-06
"How Does It Detect A Malicious App?" Explaining the Predictions of AI-based Android Malware Detector. (11%)Zhi Lu; Vrizlynn L. L. Thing
2021-11-05
A Unified Game-Theoretic Interpretation of Adversarial Robustness. (98%)Jie Ren; Die Zhang; Yisen Wang; Lu Chen; Zhanpeng Zhou; Yiting Chen; Xu Cheng; Xin Wang; Meng Zhou; Jie Shi; Quanshi Zhang
Sequential Randomized Smoothing for Adversarially Robust Speech Recognition. (96%)Raphael Olivier; Bhiksha Raj
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups. (2%)Aidmar Wainakh; Ephraim Zimmer; Sandeep Subedi; Jens Keim; Tim Grube; Shankar Karuppayah; Alejandro Sanchez Guinea; Max Mühlhäuser
2021-11-04
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models. (99%)Boxin Wang; Chejian Xu; Shuohang Wang; Zhe Gan; Yu Cheng; Jianfeng Gao; Ahmed Hassan Awadallah; Bo Li
Adversarial Attacks on Graph Classification via Bayesian Optimisation. (87%)Xingchen Wan; Henry Kenlay; Binxin Ru; Arno Blaas; Michael A. Osborne; Xiaowen Dong
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods. (47%)Peru Bhardwaj; John Kelleher; Luca Costabello; Declan O'Sullivan
Attacking Deep Reinforcement Learning-Based Traffic Signal Control Systems with Colluding Vehicles. (3%)Ao Qu; Yihong Tang; Wei Ma
2021-11-03
LTD: Low Temperature Distillation for Robust Adversarial Training. (54%)Erh-Chung Chen; Che-Rung Lee
Multi-Glimpse Network: A Robust and Efficient Classification Architecture based on Recurrent Downsampled Attention. (41%)Sia Huat Tan; Runpei Dong; Kaisheng Ma
2021-11-02
HydraText: Multi-objective Optimization for Adversarial Textual Attack. (99%)Shengcai Liu; Ning Lu; Wenjing Hong; Chao Qian; Ke Tang
Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks. (96%)Maksym Yatsura; Jan Hendrik Metzen; Matthias Hein
Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds. (70%)Yujia Huang; Huan Zhang; Yuanyuan Shi; J Zico Kolter; Anima Anandkumar
Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness. (68%)Ke Sun; Mingjie Li; Zhouchen Lin
Knowledge Cross-Distillation for Membership Privacy. (38%)Rishav Chourasia; Batnyam Enkhtaivan; Kunihiro Ito; Junki Mori; Isamu Teranishi; Hikaru Tsuchida
Adversarially Perturbed Wavelet-based Morphed Face Generation. (9%)Kelsey O'Haire; Sobhan Soleymani; Baaria Chaudhary; Poorya Aghdaie; Jeremy Dawson; Nasser M. Nasrabadi
2021-11-01
Robustness of deep learning algorithms in astronomy -- galaxy morphology studies. (83%)A. Ćiprijanović; D. Kafkes; G. N. Perdue; K. Pedro; G. Snyder; F. J. Sánchez; S. Madireddy; S. Wild; B. Nord
Indiscriminate Poisoning Attacks Are Shortcuts. (75%)Da Yu; Huishuai Zhang; Wei Chen; Jian Yin; Tie-Yan Liu
When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? (69%)Lijie Fan; Sijia Liu; Pin-Yu Chen; Gaoyuan Zhang; Chuang Gan
Graph Structural Attack by Spectral Distance. (67%)Lu Lin; Ethan Blaser; Hongning Wang
ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack. (9%)Dahoon Park; Kon-Woo Kwon; Sunghoon Im; Jaeha Kung
2021-10-31
An Actor-Critic Method for Simulation-Based Optimization. (56%)Kuo Li; Qing-Shan Jia; Jiaqi Yan
2021-10-30
Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach. (97%)Anindya Sarkar; Anirban Sarkar; Sowrya Gali; Vineeth N Balasubramanian
AdvCodeMix: Adversarial Attack on Code-Mixed Data. (93%)Sourya Dipta Das; Ayan Basak; Soumil Mandal; Dipankar Das
Backdoor Pre-trained Models Can Transfer to All. (3%)Lujia Shen; Shouling Ji; Xuhong Zhang; Jinfeng Li; Jing Chen; Jie Shi; Chengfang Fang; Jianwei Yin; Ting Wang
2021-10-29
Attacking Video Recognition Models with Bullet-Screen Comments. (99%)Kai Chen; Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Adversarial Robustness with Semi-Infinite Constrained Learning. (92%)Alexander Robey; Luiz F. O. Chamon; George J. Pappas; Hamed Hassani; Alejandro Ribeiro
$\epsilon$-weakened Robustness of Deep Neural Networks. (62%)Pei Huang; Yuting Yang; Minghao Liu; Fuqi Jia; Feifei Ma; Jian Zhang
You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership. (11%)Xuxi Chen; Tianlong Chen; Zhenyu Zhang; Zhangyang Wang
2021-10-28
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework. (99%)Lifan Yuan; Yichi Zhang; Yangyi Chen; Wei Wei
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis. (92%)Junfeng Guo; Ang Li; Cong Liu
The magnitude vector of images. (1%)Michael F. Adamer; Leslie O'Bray; Brouwer Edward De; Bastian Rieck; Karsten Borgwardt
2021-10-27
Towards Evaluating the Robustness of Neural Networks Learned by Transduction. (98%)Jiefeng Chen; Xi Wu; Yang Guo; Yingyu Liang; Somesh Jha
CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks. (98%)Haotian Xue; Kaixiong Zhou; Tianlong Chen; Kai Guo; Xia Hu; Yi Chang; Xin Wang
Towards Robust Reasoning over Knowledge Graphs. (83%)Zhaohan Xi; Ren Pang; Changjiang Li; Shouling Ji; Xiapu Luo; Xusheng Xiao; Ting Wang
Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks. (74%)Hassan Dbouk; Naresh R. Shanbhag
Adversarial Neuron Pruning Purifies Backdoored Deep Models. (15%)Dongxian Wu; Yisen Wang
From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems. (5%)Yao Zhou; Haonan Wang; Jingrui He; Haixun Wang
Robust Contrastive Learning Using Negative Samples with Diminished Semantics. (1%)Songwei Ge; Shlok Mishra; Haohan Wang; Chun-Liang Li; David Jacobs
RoMA: Robust Model Adaptation for Offline Model-based Optimization. (1%)Sihyun Yu; Sungsoo Ahn; Le Song; Jinwoo Shin
2021-10-26
Can't Fool Me: Adversarially Robust Transformer for Video Understanding. (99%)Divya Choudhary; Palash Goyal; Saurabh Sahu
Frequency Centric Defense Mechanisms against Adversarial Examples. (99%)Sanket B. Shah; Param Raval; Harin Khakhi; Mehul S. Raval
ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers. (99%)Husheng Han; Kaidi Xu; Xing Hu; Xiaobing Chen; Ling Liang; Zidong Du; Qi Guo; Yanzhi Wang; Yunji Chen
Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks. (99%)Yonggan Fu; Qixuan Yu; Yang Zhang; Shang Wu; Xu Ouyang; David Cox; Yingyan Lin
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective. (98%)Jingwei Sun; Ang Li; Louis DiValentin; Amin Hassanzadeh; Yiran Chen; Hai Li
A Frequency Perspective of Adversarial Robustness. (98%)Shishira R Maiya; Max Ehrlich; Vatsal Agarwal; Ser-Nam Lim; Tom Goldstein; Abhinav Shrivastava
Disrupting Deep Uncertainty Estimation Without Harming Accuracy. (86%)Ido Galil; Ran El-Yaniv
Improving Local Effectiveness for Global robust training. (83%)Jingyue Lu; M. Pawan Kumar
Robustness of Graph Neural Networks at Scale. (76%)Simon Geisler; Tobias Schmidt; Hakan Şirin; Daniel Zügner; Aleksandar Bojchevski; Stephan Günnemann
Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions. (75%)Izzat Alsmadi; Kashif Ahmad; Mahmoud Nazzal; Firoj Alam; Ala Al-Fuqaha; Abdallah Khreishah; Abdulelah Algosaibi
Adversarial Robustness in Multi-Task Learning: Promises and Illusions. (64%)Salah Ghamizi; Maxime Cordy; Mike Papadakis; Yves Le Traon
AugMax: Adversarial Composition of Random Augmentations for Robust Training. (56%)Haotao Wang; Chaowei Xiao; Jean Kossaifi; Zhiding Yu; Anima Anandkumar; Zhangyang Wang
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes. (50%)Sanghyun Hong; Michael-Andrei Panaitescu-Liess; Yiğitcan Kaya; Tudor Dumitraş
Semantic Host-free Trojan Attack. (10%)Haripriya Harikumar; Kien Do; Santu Rana; Sunil Gupta; Svetha Venkatesh
CAFE: Catastrophic Data Leakage in Vertical Federated Learning. (3%)Xiao Jin; Pin-Yu Chen; Chia-Yi Hsu; Chia-Mu Yu; Tianyi Chen
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge. (1%)Geng Yuan; Xiaolong Ma; Wei Niu; Zhengang Li; Zhenglun Kong; Ning Liu; Yifan Gong; Zheng Zhan; Chaoyang He; Qing Jin; Siyue Wang; Minghai Qin; Bin Ren; Yanzhi Wang; Sijia Liu; Xue Lin
Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection. (1%)Chunjong Park; Anas Awadalla; Tadayoshi Kohno; Shwetak Patel
Defensive Tensorization. (1%)Adrian Bulat; Jean Kossaifi; Sourav Bhattacharya; Yannis Panagakis; Timothy Hospedales; Georgios Tzimiropoulos; Nicholas D Lane; Maja Pantic
Task-Aware Meta Learning-based Siamese Neural Network for Classifying Obfuscated Malware. (1%)Jinting Zhu; Julian Jang-Jaccard; Amardeep Singh; Paul A. Watters; Seyit Camtepe
2021-10-25
Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks. (99%)Qiyu Kang; Yang Song; Qinxu Ding; Wee Peng Tay
Generating Watermarked Adversarial Texts. (99%)Mingjie Li; Hanzhou Wu; Xinpeng Zhang
Beyond $L_p$ clipping: Equalization-based Psychoacoustic Attacks against ASRs. (92%)Hadi Abdullah; Muhammad Sajidur Rahman; Christian Peeters; Cassidy Gibson; Washington Garcia; Vincent Bindschaedler; Thomas Shrimpton; Patrick Traynor
Fast Gradient Non-sign Methods. (92%)Yaya Cheng; Jingkuan Song; Xiaosu Zhu; Qilong Zhang; Lianli Gao; Heng Tao Shen
Ensemble Federated Adversarial Training with Non-IID data. (87%)Shuang Luo; Didi Zhu; Zexi Li; Chao Wu
GANash -- A GAN approach to steganography. (81%)Venkatesh Subramaniyan; Vignesh Sivakumar; A. K. Vagheesan; S. Sakthivelan; K. J. Jegadish Kumar; K. K. Nagarajan
A Dynamical System Perspective for Lipschitz Neural Networks. (81%)Laurent Meunier; Blaise Delattre; Alexandre Araujo; Alexandre Allauzen
An Adaptive Structural Learning of Deep Belief Network for Image-based Crack Detection in Concrete Structures Using SDNET2018. (13%)Shin Kamada; Takumi Ichimura; Takashi Iwasaki
2021-10-24
Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples. (80%)Yi Xiang Marcus Tan; Penny Chong; Jiamei Sun; Ngai-man Cheung; Yuval Elovici; Alexander Binder
2021-10-23
ADC: Adversarial attacks against object Detection that evade Context consistency checks. (99%)Mingjun Yin; Shasha Li; Chengyu Song; M. Salman Asif; Amit K. Roy-Chowdhury; Srikanth V. Krishnamurthy
A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness. (81%)Chang Song; Riya Ranjan; Hai Li
2021-10-22
Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations. (99%)Marek Galovic; Branislav Bosansky; Viliam Lisy
How and When Adversarial Robustness Transfers in Knowledge Distillation? (91%)Rulin Shao; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh
Fairness Degrading Adversarial Attacks Against Clustering Algorithms. (86%)Anshuman Chhabra; Adish Singla; Prasant Mohapatra
Adversarial robustness for latent models: Revisiting the robust-standard accuracies tradeoff. (80%)Adel Javanmard; Mohammad Mehrabi
ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI. (15%)Samuel Hess; Gregory Ditzler
PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy. (15%)Xiaolan Gu; Ming Li; Li Xiong
Spoofing Detection on Hand Images Using Quality Assessment. (1%)Asish Bera; Ratnadeep Dey; Debotosh Bhattacharjee; Mita Nasipuri; Hubert P. H. Shum
Text Counterfactuals via Latent Optimization and Shapley-Guided Search. (1%)Quintin Pope; Xiaoli Z. Fern
MANDERA: Malicious Node Detection in Federated Learning via Ranking. (1%)Wanchuang Zhu; Benjamin Zi Hao Zhao; Simon Luo; Ke Deng
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. (1%)Anvith Thudi; Hengrui Jia; Ilia Shumailov; Nicolas Papernot
2021-10-21
CAPTIVE: Constrained Adversarial Perturbations to Thwart IC Reverse Engineering. (98%)Amir Hosein Afandizadeh Zargari; Marzieh AshrafiAmiri; Minjun Seo; Sai Manoj Pudukotai Dinakarrao; Mohammed E. Fouda; Fadi Kurdahi
PROVES: Establishing Image Provenance using Semantic Signatures. (93%)Mingyang Xie; Manav Kulshrestha; Shaojie Wang; Jinghan Yang; Ayan Chakrabarti; Ning Zhang; Yevgeniy Vorobeychik
RoMA: a Method for Neural Network Robustness Measurement and Assessment. (92%)Natan Levy; Guy Katz
Anti-Backdoor Learning: Training Clean Models on Poisoned Data. (83%)Yige Li; Xixiang Lyu; Nodens Koren; Lingjuan Lyu; Bo Li; Xingjun Ma
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion. (68%)Shijie Zhang; Hongzhi Yin; Tong Chen; Zi Huang; Quoc Viet Hung Nguyen; Lizhen Cui
Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness. (61%)Simon Geisler; Johanna Sommer; Jan Schuchardt; Aleksandar Bojchevski; Stephan Günnemann
Physical Side-Channel Attacks on Embedded Neural Networks: A Survey. (8%)Maria Méndez Real; Rubén Salvador
Watermarking Graph Neural Networks based on Backdoor Attacks. (1%)Jing Xu; Stjepan Picek
2021-10-20
Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning. (83%)Thai Le; Long Tran-Thanh; Dongwon Lee
Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks. (62%)Zihan Liul; Yun Luo; Zelin Zang; Stan Z. Li
Moiré Attack (MA): A New Potential Risk of Screen Photos. (56%)Dantong Niu; Ruohao Guo; Yisen Wang
Adversarial attacks against Bayesian forecasting dynamic models. (13%)Roi Naveiro
No One Representation to Rule Them All: Overlapping Features of Training Methods. (1%)Raphael Gontijo-Lopes; Yann Dauphin; Ekin D. Cubuk
2021-10-19
Multi-concept adversarial attacks. (99%)Vibha Belavadi; Yan Zhou; Murat Kantarcioglu; Bhavani M. Thuraisingham
A Regularization Method to Improve Adversarial Robustness of Neural Networks for ECG Signal Classification. (96%)Linhai Ma; Liang Liang
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks. (69%)Atul Sharma; Wei Chen; Joshua Zhao; Qiang Qiu; Somali Chaterji; Saurabh Bagchi
Understanding Convolutional Neural Networks from Theoretical Perspective via Volterra Convolution. (61%)Tenghui Li; Guoxu Zhou; Yuning Qiu; Qibin Zhao
Detecting Backdoor Attacks Against Point Cloud Classifiers. (26%)Zhen Xiang; David J. Miller; Siheng Chen; Xi Li; George Kesidis
Speech Pattern based Black-box Model Watermarking for Automatic Speech Recognition. (13%)Haozhe Chen; Weiming Zhang; Kunlin Liu; Kejiang Chen; Han Fang; Nenghai Yu
A Deeper Look into RowHammer's Sensitivities: Experimental Analysis of Real DRAM Chips and Implications on Future Attacks and Defenses. (5%)Lois Orosa; Abdullah Giray Yağlıkçı; Haocong Luo; Ataberk Olgun; Jisung Park; Hasan Hassan; Minesh Patel; Jeremie S. Kim; Onur Mutlu
2021-10-18
Boosting the Transferability of Video Adversarial Examples via Temporal Translation. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information. (99%)Baolin Zheng; Peipei Jiang; Qian Wang; Qi Li; Chao Shen; Cong Wang; Yunjie Ge; Qingyang Teng; Shenyi Zhang
Improving Robustness using Generated Data. (97%)Sven Gowal; Sylvestre-Alvise Rebuffi; Olivia Wiles; Florian Stimberg; Dan Andrei Calian; Timothy Mann
MEMO: Test Time Robustness via Adaptation and Augmentation. (13%)Marvin Zhang; Sergey Levine; Chelsea Finn
Minimal Multi-Layer Modifications of Deep Neural Networks. (4%)Idan Refaeli; Guy Katz
2021-10-17
ECG-ATK-GAN: Robustness against Adversarial Attacks on ECG using Conditional Generative Adversarial Networks. (99%)Khondker Fariha Hossain; Sharif Amit Kamran; Xingjun Ma; Alireza Tavakkoli
Unrestricted Adversarial Attacks on ImageNet Competition. (99%)Yuefeng Chen; Xiaofeng Mao; Yuan He; Hui Xue; Chao Li; Yinpeng Dong; Qi-An Fu; Xiao Yang; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu; Fangcheng Liu; Chao Zhang; Hongyang Zhang; Yichi Zhang; Shilong Liu; Chang Liu; Wenzhao Xiang; Yajie Wang; Huipeng Zhou; Haoran Lyu; Yidan Xu; Zixuan Xu; Taoyu Zhu; Wenjun Li; Xianfeng Gao; Guoqiu Wang; Huanqian Yan; Ying Guo; Chaoning Zhang; Zheng Fang; Yang Wang; Bingyang Fu; Yunfei Zheng; Yekui Wang; Haorong Luo; Zhen Yang
Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training. (99%)Alexander Daniel Pan; Daniel Yongkyun Lee; Huan Zhang; Yize Chen; Yuanyuan Shi
Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications. (22%)Bang Wu; Xiangwen Yang; Shirui Pan; Xingliang Yuan
Poisoning Attacks on Fair Machine Learning. (12%)Minh-Hao Van; Wei Du; Xintao Wu; Aidong Lu
2021-10-16
Black-box Adversarial Attacks on Network-wide Multi-step Traffic State Prediction Models. (99%)Bibek Poudel; Weizi Li
Analyzing Dynamic Adversarial Training Data in the Limit. (82%)Eric Wallace; Adina Williams; Robin Jia; Douwe Kiela
Characterizing Improper Input Validation Vulnerabilities of Mobile Crowdsourcing Services. (5%)Sojhal Ismail Khan; Dominika Woszczyk; Chengzeng You; Soteris Demetriou; Muhammad Naveed
Tackling the Imbalance for GNNs. (4%)Rui Wang; Weixuan Xiong; Qinghu Hou; Ou Wu
2021-10-15
Adversarial Attacks on Gaussian Process Bandits. (99%)Eric Han; Jonathan Scarlett
Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm. (99%)Tengfei Zhao; Zhaocheng Ge; Hanping Hu; Dingmeng Shi
Adversarial Attacks on ML Defense Models Competition. (99%)Yinpeng Dong; Qi-An Fu; Xiao Yang; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu; Jiayu Tang; Yuefeng Chen; XiaoFeng Mao; Yuan He; Hui Xue; Chao Li; Ye Liu; Qilong Zhang; Lianli Gao; Yunrui Yu; Xitong Gao; Zhe Zhao; Daquan Lin; Jiadong Lin; Chuanbiao Song; Zihao Wang; Zhennan Wu; Yang Guo; Jiequan Cui; Xiaogang Xu; Pengguang Chen
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture. (76%)Xinyu Tang; Saeed Mahloujifar; Liwei Song; Virat Shejwalkar; Milad Nasr; Amir Houmansadr; Prateek Mittal
Robustness of different loss functions and their impact on networks learning capability. (76%)Vishal Rajput
Chunked-Cache: On-Demand and Scalable Cache Isolation for Security Architectures. (22%)Ghada Dessouky; Alexander Gruler; Pouya Mahmoody; Ahmad-Reza Sadeghi; Emmanuel Stapf
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks. (10%)Yangyi Chen; Fanchao Qi; Zhiyuan Liu; Maosong Sun
Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation. (8%)Yao Qin; Chiyuan Zhang; Ting Chen; Balaji Lakshminarayanan; Alex Beutel; Xuezhi Wang
Hand Me Your PIN! Inferring ATM PINs of Users Typing with a Covered Hand. (1%)Matteo Cardaioli; Stefano Cecconello; Mauro Conti; Simone Milani; Stjepan Picek; Eugen Saraci
2021-10-14
Adversarial examples by perturbing high-level features in intermediate decoder layers. (99%)Vojtěch Čermák; Lukáš Adam
DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks. (99%)Yixiang Wang; Jiqiang Liu; Xiaolin Chang; Jianhua Wang; Ricardo J. Rodríguez
Adversarial Purification through Representation Disentanglement. (99%)Tao Bai; Jun Zhao; Lanqing Guo; Bihan Wen
RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models. (93%)Wenkai Yang; Yankai Lin; Peng Li; Jie Zhou; Xu Sun
An Optimization Perspective on Realizing Backdoor Injection Attacks on Deep Neural Networks in Hardware. (87%)M. Caner Tol; Saad Islam; Berk Sunar; Ziming Zhang
Interactive Analysis of CNN Robustness. (80%)Stefan Sietzen; Mathias Lechner; Judy Borowski; Ramin Hasani; Manuela Waldner
On Adversarial Vulnerability of PHM algorithms: An Initial Study. (69%)Weizhong Yan; Zhaoyuan Yang; Jianwei Qiu
Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models. (61%)Tianlu Wang; Diyi Yang; Xuezhi Wang
Toward Degradation-Robust Voice Conversion. (9%)Chien-yu Huang; Kai-Wei Chang; Hung-yi Lee
Interpreting the Robustness of Neural NLP Models to Textual Perturbations. (9%)Yunxiang Zhang; Liangming Pan; Samson Tan; Min-Yen Kan
Retrieval-guided Counterfactual Generation for QA. (2%)Bhargavi Paranjape; Matthew Lamm; Ian Tenney
Effective Certification of Monotone Deep Equilibrium Models. (1%)Mark Niklas Müller; Robin Staab; Marc Fischer; Martin Vechev
2021-10-13
A Framework for Verification of Wasserstein Adversarial Robustness. (99%)Tobias Wegel; Felix Assion; David Mickisch; Florens Greßner
Identification of Attack-Specific Signatures in Adversarial Examples. (99%)Hossein Souri; Pirazh Khorramshahi; Chun Pong Lau; Micah Goldblum; Rama Chellappa
Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness. (99%)Xiao Yang; Yinpeng Dong; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer. (98%)Fanchao Qi; Yangyi Chen; Xurui Zhang; Mukai Li; Zhiyuan Liu; Maosong Sun
Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack. (93%)Davis Brown; Henry Kvinge
Traceback of Data Poisoning Attacks in Neural Networks. (92%)Shawn Shan; Arjun Nitin Bhagoji; Haitao Zheng; Ben Y. Zhao
Boosting the Certified Robustness of L-infinity Distance Nets. (1%)Bohang Zhang; Du Jiang; Di He; Liwei Wang
Benchmarking the Robustness of Spatial-Temporal Models Against Corruptions. (1%)Chenyu Yi; SIYUAN YANG; Haoliang Li; Yap-peng Tan; Alex Kot
2021-10-12
Adversarial Attack across Datasets. (99%)Yunxiao Qin; Yuanhao Xiong; Jinfeng Yi; Cho-Jui Hsieh
Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning. (99%)Jinyin Chen; Guohan Huang; Haibin Zheng; Shanqing Yu; Wenrong Jiang; Chen Cui
SEPP: Similarity Estimation of Predicted Probabilities for Defending and Detecting Adversarial Text. (92%)Hoang-Quoc Nguyen-Son; Seira Hidano; Kazuhide Fukushima; Shinsaku Kiyomoto
On the Security Risks of AutoML. (45%)Ren Pang; Zhaohan Xi; Shouling Ji; Xiapu Luo; Ting Wang
Zero-bias Deep Neural Network for Quickest RF Signal Surveillance. (1%)Yongxin Liu; Yingjie Chen; Jian Wang; Shuteng Niu; Dahai Liu; Houbing Song
2021-10-11
Boosting Fast Adversarial Training with Learnable Adversarial Initialization. (99%)Xiaojun Jia; Yong Zhang; Baoyuan Wu; Jue Wang; Xiaochun Cao
Parameterizing Activation Functions for Adversarial Robustness. (98%)Sihui Dai; Saeed Mahloujifar; Prateek Mittal
Amicable examples for informed source separation. (86%)Naoya Takahashi; Yuki Mitsufuji
Doubly-Trained Adversarial Data Augmentation for Neural Machine Translation. (12%)Weiting Tan; Shuoyang Ding; Huda Khayrallah; Philipp Koehn
Large Language Models Can Be Strong Differentially Private Learners. (1%)Xuechen Li; Florian Tramèr; Percy Liang; Tatsunori Hashimoto
Intriguing Properties of Input-dependent Randomized Smoothing. (1%)Peter Súkeník; Aleksei Kuvshinov; Stephan Günnemann
Hiding Images into Images with Real-world Robustness. (1%)Qichao Ying; Hang Zhou; Xianhan Zeng; Haisheng Xu; Zhenxing Qian; Xinpeng Zhang
Source Mixing and Separation Robust Audio Steganography. (1%)Naoya Takahashi; Mayank Kumar Singh; Yuki Mitsufuji
Homogeneous Learning: Self-Attention Decentralized Deep Learning. (1%)Yuwei Sun; Hideya Ochiai
Certified Patch Robustness via Smoothed Vision Transformers. (1%)Hadi Salman; Saachi Jain; Eric Wong; Aleksander Mądry
2021-10-10
Adversarial Attacks in a Multi-view Setting: An Empirical Study of the Adversarial Patches Inter-view Transferability. (98%)Bilel Tarchoun; Ihsen Alouani; Anouar Ben Khalifa; Mohamed Ali Mahjoub
Universal Adversarial Attacks on Neural Networks for Power Allocation in a Massive MIMO System. (92%)Pablo Millán Santos; B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
2021-10-09
Demystifying the Transferability of Adversarial Attacks in Computer Networks. (99%)Ehsan Nowroozi; Yassine Mekdad; Mohammad Hajian Berenjestanaki; Mauro Conti; Abdeslam EL Fergougui
Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning. (93%)Guanlin Liu; Lifeng Lai
Widen The Backdoor To Let More Attackers In. (13%)Siddhartha Datta; Giulio Lovisotto; Ivan Martinovic; Nigel Shadbolt
2021-10-08
Explainability-Aware One Point Attack for Point Cloud Neural Networks. (99%)Hanxiao Tan; Helena Kotthaus
Game Theory for Adversarial Attacks and Defenses. (98%)Shorya Sharma
Graphs as Tools to Improve Deep Learning Methods. (10%)Carlos Lassance; Myriam Bontonou; Mounia Hamidouche; Bastien Pasdeloup; Lucas Drumetz; Vincent Gripon
IHOP: Improved Statistical Query Recovery against Searchable Symmetric Encryption through Quadratic Optimization. (3%)Simon Oya; Florian Kerschbaum
A Wireless Intrusion Detection System for 802.11 WPA3 Networks. (1%)Neil Dalal; Nadeem Akhtar; Anubhav Gupta; Nikhil Karamchandani; Gaurav S. Kasbekar; Jatin Parekh
Salient ImageNet: How to discover spurious features in Deep Learning? (1%)Sahil Singla; Soheil Feizi
2021-10-07
One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features. (99%)Stephen Casper; Max Nadeau; Gabriel Kreiman
EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection. (99%)Hamid Bostani; Veelasha Moonsamy
Adversarial Attack by Limited Point Cloud Surface Modifications. (98%)Atrin Arya; Hanieh Naderi; Shohreh Kasaei
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. (98%)Hanxun Huang; Yisen Wang; Sarah Monazam Erfani; Quanquan Gu; James Bailey; Xingjun Ma
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction. (80%)Jinyin Chen; Haiyang Xiong; Haibin Zheng; Jian Zhang; Guodong Jiang; Yi Liu
Fingerprinting Multi-exit Deep Neural Network Models via Inference Time. (62%)Tian Dong; Han Qiu; Tianwei Zhang; Jiwei Li; Hewu Li; Jialiang Lu
Adversarial Unlearning of Backdoors via Implicit Hypergradient. (56%)Yi Zeng; Si Chen; Won Park; Z. Morley Mao; Ming Jin; Ruoxi Jia
MPSN: Motion-aware Pseudo Siamese Network for Indoor Video Head Detection in Buildings. (1%)Kailai Sun; Xiaoteng Ma; Peng Liu; Qianchuan Zhao
2021-10-06
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise. (99%)Souvik Kundu; Massoud Pedram; Peter A. Beerel
Reversible adversarial examples against local visual perturbation. (99%)Zhaoxia Yin; Li Chen; Shaowei Zhu
Attack as the Best Defense: Nullifying Image-to-image Translation GANs via Limit-aware Adversarial Attack. (99%)Chin-Yuan Yeh; Hsi-Wen Chen; Hong-Han Shuai; De-Nian Yang; Ming-Syan Chen
Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs. (99%)Philipp Benz; Soomin Ham; Chaoning Zhang; Adil Karjauv; In So Kweon
Adversarial Attacks on Machinery Fault Diagnosis. (99%)Jiahao Chen; Diqun Yan
Adversarial Attacks on Spiking Convolutional Networks for Event-based Vision. (98%)Julian Büchel; Gregor Lenz; Yalun Hu; Sadique Sheik; Martino Sorbaro
A Uniform Framework for Anomaly Detection in Deep Neural Networks. (97%)Fangzhen Zhao; Chenyi Zhang; Naipeng Dong; Zefeng You; Zhenxin Wu
Double Descent in Adversarial Training: An Implicit Label Noise Perspective. (88%)Chengyu Dong; Liyuan Liu; Jingbo Shang
Improving Adversarial Robustness for Free with Snapshot Ensemble. (83%)Yihao Wang
DoubleStar: Long-Range Attack Towards Depth Estimation based Obstacle Avoidance in Autonomous Systems. (45%)Ce Zhou; Qiben Yan; Yan Shi; Lichao Sun
Inference Attacks Against Graph Neural Networks. (2%)Zhikun Zhang; Min Chen; Michael Backes; Yun Shen; Yang Zhang
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks. (1%)Jiawei Du; Hanshu Yan; Jiashi Feng; Joey Tianyi Zhou; Liangli Zhen; Rick Siow Mong Goh; Vincent Y. F. Tan
Data-driven behavioural biometrics for continuous and adaptive user verification using Smartphone and Smartwatch. (1%)Akriti Verma; Valeh Moghaddam; Adnan Anwar
On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks. (1%)Yunhao Yang; Parham Gohari; Ufuk Topcu
Stegomalware: A Systematic Survey of MalwareHiding and Detection in Images, Machine LearningModels and Research Challenges. (1%)Rajasekhar Chaganti; Vinayakumar Ravi; Mamoun Alazab; Tuan D. Pham
Exploring the Common Principal Subspace of Deep Features in Neural Networks. (1%)Haoran Liu; Haoyi Xiong; Yaqing Wang; Haozhe An; Dongrui Wu; Dejing Dou
Generalizing Neural Networks by Reflecting Deviating Data in Production. (1%)Yan Xiao; Yun Lin; Ivan Beschastnikh; Changsheng Sun; David S. Rosenblum; Jin Song Dong
2021-10-05
Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. (99%)Lisa Oakley; Alina Oprea; Stavros Tripakis
Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations. (99%)Shasha Li; Abhishek Aich; Shitong Zhu; M. Salman Asif; Chengyu Song; Amit K. Roy-Chowdhury; Srikanth Krishnamurthy
Adversarial defenses via a mixture of generators. (99%)Maciej Żelaszczyk; Jacek Mańdziuk
Neural Network Adversarial Attack Method Based on Improved Genetic Algorithm. (92%)Dingming Yang; Yanrong Cui; Hongqiang Yuan
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. (33%)Kangjie Chen; Yuxian Meng; Xiaofei Sun; Shangwei Guo; Tianwei Zhang; Jiwei Li; Chun Fan
Spectral Bias in Practice: The Role of Function Frequency in Generalization. (1%)Sara Fridovich-Keil; Raphael Gontijo-Lopes; Rebecca Roelofs
CADA: Multi-scale Collaborative Adversarial Domain Adaptation for Unsupervised Optic Disc and Cup Segmentation. (1%)Peng Liu; Charlie T. Tran; Bin Kong; Ruogu Fang
Noisy Feature Mixup. (1%)Soon Hoe Lim; N. Benjamin Erichson; Francisco Utrera; Winnie Xu; Michael W. Mahoney
2021-10-04
Benchmarking Safety Monitors for Image Classifiers with Machine Learning. (1%)Raul Sena Ferreira; Jean Arlat; Jeremie Guiochet; Hélène Waeselynck
2021-10-03
Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models. (82%)Wenqian Ye; Fei Xu; Yaojia Huang; Cassie Huang; Ji A
2021-10-02
Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication. (98%)Elliu Huang; Troia Fabio Di; Mark Stamp
Anti-aliasing Deep Image Classifiers using Novel Depth Adaptive Blurring and Activation Function. (13%)Md Tahmid Hossain; Shyh Wei Teng; Ferdous Sohel; Guojun Lu
2021-10-01
Calibrated Adversarial Training. (98%)Tianjin Huang; Vlado Menkovski; Yulong Pei; Mykola Pechenizkiy
Universal Adversarial Spoofing Attacks against Face Recognition. (87%)Takuma Amada; Seng Pei Liew; Kazuya Kakizaki; Toshinori Araki
Score-Based Generative Classifiers. (84%)Roland S. Zimmermann; Lukas Schott; Yang Song; Benjamin A. Dunn; David A. Klindt
One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency. (1%)Sayeed Shafayet Chowdhury; Nitin Rathi; Kaushik Roy
2021-09-30
Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation. (98%)Manjushree B. Aithal; Xiaohua Li
You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors. (95%)Zijian Zhu; Hang Su; Chang Liu; Wenzhao Xiang; Shibao Zheng
Adversarial Semantic Contour for Object Detection. (92%)Yichi Zhang; Zijian Zhu; Xiao Yang; Jun Zhu
From Zero-Shot Machine Learning to Zero-Day Attack Detection. (10%)Mohanad Sarhan; Siamak Layeghy; Marcus Gallagher; Marius Portmann
2021-09-29
On Brightness Agnostic Adversarial Examples Against Face Recognition Systems. (99%)Inderjeet Singh; Satoru Momiyama; Kazuya Kakizaki; Toshinori Araki
Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks. (70%)Kaleel Mahmood; Rigel Mahmood; Ethan Rathbun; Dijk Marten van
BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining. (41%)Weizhe Hua; Yichi Zhang; Chuan Guo; Zhiru Zhang; G. Edward Suh
Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP). (10%)Nancirose Piazza; Vahid Behzadan
2021-09-28
slimTrain -- A Stochastic Approximation Method for Training Separable Deep Neural Networks. (1%)Elizabeth Newman; Julianne Chung; Matthias Chung; Lars Ruthotto
2021-09-27
MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles. (99%)Yuejun Guo; Qiang Hu; Maxime Cordy; Michail Papadakis; Yves Le Traon
Query-based Adversarial Attacks on Graph with Fake Nodes. (99%)Zhengyi Wang; Zhongkai Hao; Hang Su; Jun Zhu
Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective. (98%)Adhyyan Narang; Vidya Muthukumar; Anant Sahai
GANG-MAM: GAN based enGine for Modifying Android Malware. (64%)Renjith G; Sonia Laudanna; Aji S; Corrado Aaron Visaggio; Vinod P
Distributionally Robust Multi-Output Regression Ranking. (3%)Shahabeddin Sotudian; Ruidi Chen; Ioannis Paschalidis
Improving Uncertainty of Deep Learning-based Object Classification on Radar Spectra using Label Smoothing. (1%)Kanil Patel; William Beluch; Kilian Rambach; Michael Pfeiffer; Bin Yang
Federated Deep Learning with Bayesian Privacy. (1%)Hanlin Gu; Lixin Fan; Bowen Li; Yan Kang; Yuan Yao; Qiang Yang
2021-09-26
Distributionally Robust Multiclass Classification and Applications in Deep CNN Image Classifiers. (11%)Ruidi Chen; Boran Hao; Ioannis Paschalidis
2021-09-25
Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency. (99%)Sohaib Kiani; Sana Awan; Chao Lan; Fengjun Li; Bo Luo
Contributions to Large Scale Bayesian Inference and Adversarial Machine Learning. (98%)Víctor Gallego
MINIMAL: Mining Models for Data Free Universal Adversarial Triggers. (93%)Swapnil Parekh; Yaman Singla Kumar; Somesh Singh; Changyou Chen; Balaji Krishnamurthy; Rajiv Ratn Shah
2021-09-24
Local Intrinsic Dimensionality Signals Adversarial Perturbations. (98%)Sandamal Weerasinghe; Tansu Alpcan; Sarah M. Erfani; Christopher Leckie; Benjamin I. P. Rubinstein
2021-09-23
Breaking BERT: Understanding its Vulnerabilities for Biomedical Named Entity Recognition through Adversarial Attack. (98%)Anne Dirkson; Suzan Verberne; Wessel Kraaij
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training. (88%)Jakub Breier; Xiaolu Hou; Martín Ochoa; Jesus Solano
AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses. (68%)Yaman Kumar Singla; Swapnil Parekh; Somesh Singh; Junyi Jessy Li; Rajiv Ratn Shah; Changyou Chen
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. (1%)Dongqi Han; Zhiliang Wang; Wenqi Chen; Ying Zhong; Su Wang; Han Zhang; Jiahai Yang; Xingang Shi; Xia Yin
2021-09-22
Exploring Adversarial Examples for Efficient Active Learning in Machine Learning Classifiers. (99%)Honggang Yu; Shihfeng Zeng; Teng Zhang; Ing-Chao Lin; Yier Jin
CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks. (81%)Mikhail Pautov; Nurislam Tursynbek; Marina Munkhoeva; Nikita Muravev; Aleksandr Petiushko; Ivan Oseledets
Security Analysis of Capsule Network Inference using Horizontal Collaboration. (69%)Adewale Adeyemo; Faiq Khalid; Tolulope A. Odetola; Syed Rafay Hasan
Adversarial Transfer Attacks With Unknown Data and Class Overlap. (62%)Luke E. Richards; André Nguyen; Ryan Capps; Steven Forsythe; Cynthia Matuszek; Edward Raff
Pushing the Right Buttons: Adversarial Evaluation of Quality Estimation. (1%)Diptesh Kanojia; Marina Fomicheva; Tharindu Ranasinghe; Frédéric Blain; Constantin Orăsan; Lucia Specia
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis. (1%)Zeyuan Yin; Ye Yuan; Panfeng Guo; Pan Zhou
2021-09-21
Attacks on Visualization-Based Malware Detection: Balancing Effectiveness and Executability. (99%)Hadjer Benkraouda; Jingyu Qian; Hung Quoc Tran; Berkay Kaplan
3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation. (93%)Mengxi Wu; Hao Huang; Yi Fang
DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning. (76%)Md Tamjid Hossain; Shafkat Islam; Shahriar Badsha; Haoting Shen
Privacy, Security, and Utility Analysis of Differentially Private CPES Data. (13%)Md Tamjid Hossain; Shahriar Badsha; Haoting Shen
2021-09-20
Robust Physical-World Attacks on Face Recognition. (99%)Xin Zheng; Yanbo Fan; Baoyuan Wu; Yong Zhang; Jue Wang; Shirui Pan
Modeling Adversarial Noise for Adversarial Defense. (99%)Dawei Zhou; Nannan Wang; Bo Han; Tongliang Liu
Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection? (99%)Deqiang Li; Tian Qiu; Shuo Chen; Qianmu Li; Shouhuai Xu
Robustness Analysis of Deep Learning Frameworks on Mobile Platforms. (10%)Amin Eslami Abyane; Hadi Hemmati
"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World. (2%)Emily Wenger; Max Bronckers; Christian Cianfarani; Jenna Cryan; Angela Sha; Haitao Zheng; Ben Y. Zhao
Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework. (1%)Muhammad Shafique; Alberto Marchisio; Rachmad Vidya Wicaksana Putra; Muhammad Abdullah Hanif
2021-09-19
On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars. (99%)Deboleena Roy; Chun Tao; Indranil Chakraborty; Kaushik Roy
Adversarial Training with Contrastive Learning in NLP. (16%)Daniela N. Rim; DongNyeong Heo; Heeyoul Choi
2021-09-18
Clean-label Backdoor Attack against Deep Hashing based Retrieval. (98%)Kuofeng Gao; Jiawang Bai; Bin Chen; Dongxian Wu; Shu-Tao Xia
2021-09-17
Messing Up 3D Virtual Environments: Transferable Adversarial 3D Objects. (98%)Enrico Meloni; Matteo Tiezzi; Luca Pasqualini; Marco Gori; Stefano Melacci
Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations. (8%)Ke Sun; Yi Liu; Yingnan Zhao; Hengshuai Yao; Shangling Jui; Linglong Kong
2021-09-16
Harnessing Perceptual Adversarial Patches for Crowd Counting. (99%)Shunchang Liu; Jiakai Wang; Aishan Liu; Yingwei Li; Yijie Gao; Xianglong Liu; Dacheng Tao
KATANA: Simple Post-Training Robustness Using Test Time Augmentations. (98%)Gilad Cohen; Raja Giryes
Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns. (96%)Prasanth Buddareddygari; Travis Zhang; Yezhou Yang; Yi Ren
Adversarial Attacks against Deep Learning Based Power Control in Wireless Communications. (95%)Brian Kim; Yi Shi; Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus
Don't Search for a Search Method -- Simple Heuristics Suffice for Adversarial Text Attacks. (68%)Nathaniel Berger; Stefan Riezler; Artem Sokolov; Sebastian Ebert
Membership Inference Attacks Against Recommender Systems. (3%)Minxing Zhang; Zhaochun Ren; Zihan Wang; Pengjie Ren; Zhumin Chen; Pengfei Hu; Yang Zhang
2021-09-15
Universal Adversarial Attack on Deep Learning Based Prognostics. (99%)Arghya Basak; Pradeep Rathore; Sri Harsha Nistala; Sagar Srinivas; Venkataramana Runkana
Balancing detectability and performance of attacks on the control channel of Markov Decision Processes. (98%)Alessio Russo; Alexandre Proutiere
FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack. (95%)Donghua Wang; Tingsong Jiang; Jialiang Sun; Weien Zhou; Xiaoya Zhang; Zhiqiang Gong; Wen Yao; Xiaoqian Chen
BERT is Robust! A Case Against Synonym-Based Adversarial Examples in Text Classification. (92%)Jens Hauser; Zhao Meng; Damián Pascual; Roger Wattenhofer
Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup. (13%)Guang Liu; Yuzhao Mao; Hailong Huang; Weiguo Gao; Xuan Li
Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel. (10%)Henrique Teles Maia; Chang Xiao; Dingzeyu Li; Eitan Grinspun; Changxi Zheng
2021-09-14
A Novel Data Encryption Method Inspired by Adversarial Attacks. (99%)Praveen Fernando; Jin Wei-Kocsis
Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder. (99%)Yao Qiu; Jinchao Zhang; Jie Zhou
PETGEN: Personalized Text Generation Attack on Deep Sequence Embedding-based Classification Models. (99%)Bing He; Mustaque Ahamad; Srijan Kumar
EVAGAN: Evasion Generative Adversarial Network for Low Data Regimes. (76%)Rizwan Hamid Randhawa; Nauman Aslam; Muhammad Alauthman; Husnain Rafiq; Muhammad Khalid
Dodging Attack Using Carefully Crafted Natural Makeup. (47%)Nitzan Guetta; Asaf Shabtai; Inderjeet Singh; Satoru Momiyama; Yuval Elovici
Avengers Ensemble! Improving Transferability of Authorship Obfuscation. (12%)Muhammad Haroon; Muhammad Fareed Zaffar; Padmini Srinivasan; Zubair Shafiq
ARCH: Efficient Adversarial Regularized Training with Caching. (8%)Simiao Zuo; Chen Liang; Haoming Jiang; Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen; Tuo Zhao
2021-09-13
Adversarial Bone Length Attack on Action Recognition. (99%)Nariki Tanaka; Hiroshi Kera; Kazuhiko Kawamoto
Randomized Substitution and Vote for Textual Adversarial Example Detection. (99%)Xiaosen Wang; Yifeng Xiong; Kun He
Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator. (99%)Wenzhao Xiang; Hang Su; Chang Liu; Yandong Guo; Shibao Zheng
Evolving Architectures with Gradient Misalignment toward Low Adversarial Transferability. (98%)Kevin Richard G. Operiano; Wanchalerm Pora; Hitoshi Iba; Hiroshi Kera
A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems. (98%)Moein Sabounchi; Jin Wei-Kocsis
Adversarial Examples for Evaluating Math Word Problem Solvers. (96%)Vivek Kumar; Rishabh Maheshwary; Vikram Pudi
PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos. (86%)Nupur Thakur; Baoxin Li
SignGuard: Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering. (81%)Jian Xu; Shao-Lun Huang; Linqi Song; Tian Lan
Formalizing and Estimating Distribution Inference Risks. (56%)Anshuman Suri; David Evans
Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models. (50%)Kun Zhou; Wayne Xin Zhao; Sirui Wang; Fuzheng Zhang; Wei Wu; Ji-Rong Wen
Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models. (16%)Won Park; Nan Li; Qi Alfred Chen; Z. Morley Mao
Adversarially Trained Object Detector for Unsupervised Domain Adaptation. (3%)Kazuma Fujii; Hiroshi Kera; Kazuhiko Kawamoto
Perturbation CheckLists for Evaluating NLG Evaluation Metrics. (1%)Ananya B. Sai; Tanay Dixit; Dev Yashpal Sheth; Sreyas Mohan; Mitesh M. Khapra
How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding. (1%)Tianda Li; Ahmad Rashid; Aref Jafari; Pranav Sharma; Ali Ghodsi; Mehdi Rezagholizadeh
Detecting Safety Problems of Multi-Sensor Fusion in Autonomous Driving. (1%)Ziyuan Zhong; Zhisheng Hu; Shengjian Guo; Xinyang Zhang; Zhenyu Zhong; Baishakhi Ray
2021-09-12
TREATED: Towards Universal Defense against Textual Adversarial Attacks. (99%)Bin Zhu; Zhaoquan Gu; Le Wang; Zhihong Tian
CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph. (98%)Xugang Wu; Huijun Wu; Xu Zhou; Kai Lu
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain. (93%)Hasan Abed Al Kader Hammoud; Bernard Ghanem
RockNER: A Simple Method to Create Adversarial Examples for Evaluating the Robustness of Named Entity Recognition Models. (84%)Bill Yuchen Lin; Wenyang Gao; Jun Yan; Ryan Moreno; Xiang Ren
Shape-Biased Domain Generalization via Shock Graph Embeddings. (2%)Maruthi Narayanan; Vickram Rajendran; Benjamin Kimia
Source Inference Attacks in Federated Learning. (1%)Hongsheng Hu; Zoran Salcic; Lichao Sun; Gillian Dobbie; Xuyun Zhang
2021-09-11
RobustART: Benchmarking Robustness on Architecture Design and Training Techniques. (98%)Shiyu Tang; Ruihao Gong; Yan Wang; Aishan Liu; Jiakai Wang; Xinyun Chen; Fengwei Yu; Xianglong Liu; Dawn Song; Alan Yuille; Philip H. S. Torr; Dacheng Tao
2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency. (81%)Yonggan Fu; Yang Zhao; Qixuan Yu; Chaojian Li; Yingyan Lin
2021-09-10
A Strong Baseline for Query Efficient Attacks in a Black Box Setting. (99%)Rishabh Maheshwary; Saket Maheshwary; Vikram Pudi
2021-09-09
Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification. (99%)Maximilian Mozes; Max Bartolo; Pontus Stenetorp; Bennett Kleinberg; Lewis D. Griffin
Energy Attack: On Transferring Adversarial Examples. (99%)Ruoxi Shi; Borui Yang; Yangzhou Jiang; Chenglong Zhao; Bingbing Ni
Protein Folding Neural Networks Are Not Robust. (99%)Sumit Kumar Jha; Arvind Ramanathan; Rickard Ewetz; Alvaro Velasquez; Susmit Jha
Towards Transferable Adversarial Attacks on Vision Transformers. (99%)Zhipeng Wei; Jingjing Chen; Micah Goldblum; Zuxuan Wu; Tom Goldstein; Yu-Gang Jiang
Multi-granularity Textual Adversarial Attack with Behavior Cloning. (98%)Yangyi Chen; Jin Su; Wei Wei
Spatially Focused Attack against Spatiotemporal Graph Neural Networks. (81%)Fuqiang Liu; Luis Miranda-Moreno; Lijun Sun
Differential Privacy in Personalized Pricing with Nonparametric Demand Models. (26%)Xi Chen; Sentao Miao; Yining Wang
EvilModel 2.0: Bringing Neural Network Models into Malware Attacks. (5%)Zhi Wang; Chaoge Liu; Xiang Cui; Jie Yin; Xutong Wang
2021-09-08
Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning. (89%)Maziar Gomrokchi; Susan Amin; Hossein Aboutalebi; Alexander Wong; Doina Precup
Robust Optimal Classification Trees Against Adversarial Examples. (80%)Daniël Vos; Sicco Verwer
2021-09-07
Adversarial Parameter Defense by Multi-Step Risk Minimization. (98%)Zhiyuan Zhang; Ruixuan Luo; Xuancheng Ren; Qi Su; Liangyou Li; Xu Sun
POW-HOW: An enduring timing side-channel to evade online malware sandboxes. (12%)Antonio Nappa; Panagiotis Papadopoulos; Matteo Varvello; Daniel Aceituno Gomez; Juan Tapiador; Andrea Lanzi
Unpaired Adversarial Learning for Single Image Deraining with Rain-Space Contrastive Constraints. (1%)Xiang Chen; Jinshan Pan; Kui Jiang; Yufeng Huang; Caihua Kong; Longgang Dai; Yufeng Li
2021-09-06
Robustness and Generalization via Generative Adversarial Training. (82%)Omid Poursaeed; Tianxing Jiang; Harry Yang; Serge Belongie; SerNam Lim
Trojan Signatures in DNN Weights. (33%)Greg Fields; Mohammad Samragh; Mojan Javaheripi; Farinaz Koushanfar; Tara Javidi
Automated Robustness with Adversarial Training as a Post-Processing Step. (4%)Ambrish Rawat; Mathieu Sinn; Beat Buesser
Exposing Length Divergence Bias of Textual Matching Models. (2%)Lan Jiang; Tianshu Lyu; Chong Meng; Xiaoyong Lyu; Dawei Yin
2021-09-05
Efficient Combinatorial Optimization for Word-level Adversarial Textual Attack. (98%)Shengcai Liu; Ning Lu; Cheng Chen; Ke Tang
Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning. (2%)Yusen Wu; Hao Chen; Xin Wang; Chao Liu; Phuong Nguyen; Yelena Yesha
DexRay: A Simple, yet Effective Deep Learning Approach to Android Malware Detection based on Image Representation of Bytecode. (1%)Nadia Daoudi; Jordan Samhi; Abdoul Kader Kabore; Kevin Allix; Tegawendé F. Bissyandé; Jacques Klein
2021-09-04
Real-World Adversarial Examples involving Makeup Application. (99%)Chang-Sheng Lin; Chia-Yi Hsu; Pin-Yu Chen; Chia-Mu Yu
Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness. (99%)Uriya Pesso; Koby Bibas; Meir Feder
Training Meta-Surrogate Model for Transferable Adversarial Attack. (99%)Yunxiao Qin; Yuanhao Xiong; Jinfeng Yi; Cho-Jui Hsieh
2021-09-03
SEC4SR: A Security Analysis Platform for Speaker Recognition. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Yang Liu
Risk Assessment for Connected Vehicles under Stealthy Attacks on Vehicle-to-Vehicle Networks. (1%)Tianci Yang; Carlos Murguia; Chen Lv
2021-09-02
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples. (99%)Guanxiong Liu; Issa Khalil; Abdallah Khreishah; NhatHai Phan
Impact of Attention on Adversarial Robustness of Image Classification Models. (99%)Prachi Agrawal; Narinder Singh Punn; Sanjay Kumar Sonbhadra; Sonali Agarwal
Adversarial Robustness for Unsupervised Domain Adaptation. (98%)Muhammad Awais; Fengwei Zhou; Hang Xu; Lanqing Hong; Ping Luo; Sung-Ho Bae; Zhenguo Li
Real World Robustness from Systematic Noise. (91%)Yan Wang; Yuhang Li; Ruihao Gong
Building Compact and Robust Deep Neural Networks with Toeplitz Matrices. (61%)Alexandre Araujo
2021-09-01
Towards Improving Adversarial Training of NLP Models. (98%)Jin Yong Yoo; Yanjun Qi
Excess Capacity and Backdoor Poisoning. (97%)Naren Sarayu Manoj; Avrim Blum
Regional Adversarial Training for Better Robust Generalization. (96%)Chuanbiao Song; Yanbo Fan; Yicheng Yang; Baoyuan Wu; Yiming Li; Zhifeng Li; Kun He
R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors. (86%)Alberto Marchisio; Giacomo Pira; Maurizio Martina; Guido Masera; Muhammad Shafique
Proof Transfer for Neural Network Verification. (9%)Christian Sprecher; Marc Fischer; Dimitar I. Dimitrov; Gagandeep Singh; Martin Vechev
Guarding Machine Learning Hardware Against Physical Side-Channel Attacks. (2%)Anuj Dubey; Rosario Cammarota; Vikram Suresh; Aydin Aysu
2021-08-31
EG-Booster: Explanation-Guided Booster of ML Evasion Attacks. (99%)Abderrahmen Amich; Birhanu Eshete
Morphence: Moving Target Defense Against Adversarial Examples. (99%)Abderrahmen Amich; Birhanu Eshete
DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors. (86%)Yexin Duan; Jialin Chen; Xingyu Zhou; Junhua Zou; Zhengyun He; Wu Zhang; Jin Zhang; Zhisong Pan
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction. (83%)Zhenrui Yue; Zhankui He; Huimin Zeng; Julian McAuley
Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning. (75%)Doha Al Bared; Mohamed Nassar
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. (4%)Linyang Li; Demin Song; Xiaonan Li; Jiehang Zeng; Ruotian Ma; Xipeng Qiu
2021-08-30
Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings. (99%)Mazda Moayeri; Soheil Feizi
Investigating Vulnerabilities of Deep Neural Policies. (99%)Ezgi Korkmaz
Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise. (99%)Mingyu Dong; Diqun Yan; Yongkang Gong; Rangding Wang
Single Node Injection Attack against Graph Neural Networks. (68%)Shuchang Tao; Qi Cao; Huawei Shen; Junjie Huang; Yunfan Wu; Xueqi Cheng
Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms. (41%)Albert Jiménez Sanfiz; Mohamed Akrout
Adaptive perturbation adversarial training: based on reinforcement learning. (41%)Zhishen Nie; Ying Lin; Sp Ren; Lan Zhang
How Does Adversarial Fine-Tuning Benefit BERT? (33%)Javid Ebrahimi; Hao Yang; Wei Zhang
ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation. (26%)Ahmed Abusnaina; Afsah Anwar; Sultan Alshamrani; Abdulrahman Alabduljabbar; RhongHo Jang; Daehun Nyang; David Mohaisen
DuTrust: A Sentiment Analysis Dataset for Trustworthiness Evaluation. (1%)Lijie Wang; Hao Liu; Shuyuan Peng; Hongxuan Tang; Xinyan Xiao; Ying Chen; Hua Wu; Haifeng Wang
2021-08-29
Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution. (99%)Zongyi Li; Jianhan Xu; Jiehang Zeng; Linyang Li; Xiaoqing Zheng; Qi Zhang; Kai-Wei Chang; Cho-Jui Hsieh
Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models. (98%)Zeyuan Wang; Chaofeng Sha; Su Yang
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks. (82%)Shiwen Ni; Jiawen Li; Hung-Yu Kao
Rumor Detection on Social Media with Hierarchical Adversarial Training. (47%)Shiwen Ni; Jiawen Li; Hung-Yu Kao
2021-08-27
Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights. (99%)Omid Kargarnovin; Amir Mahdi Sadeghzadeh; Rasool Jalili
Disrupting Adversarial Transferability in Deep Neural Networks. (98%)Christopher Wiedeman; Ge Wang
Evaluating the Robustness of Neural Language Models to Input Perturbations. (16%)Milad Moradi; Matthias Samwald
Deep learning models are not robust against noise in clinical text. (1%)Milad Moradi; Kathrin Blagec; Matthias Samwald
2021-08-26
Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks. (99%)Landan Seguin; Anthony Ndirango; Neeli Mishra; SueYeon Chung; Tyler Lee
A Hierarchical Assessment of Adversarial Severity. (98%)Guillaume Jeanneret; Juan C Perez; Pablo Arbelaez
Physical Adversarial Attacks on an Aerial Imagery Object Detector. (96%)Andrew Du; Bo Chen; Tat-Jun Chin; Yee Wei Law; Michele Sasdelli; Ramesh Rajasegaran; Dillon Campbell
Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference. (80%)Yang Zheng; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Ambra Demontis; Maura Pintor; Battista Biggio; Fabio Roli
Detection and Continual Learning of Novel Face Presentation Attacks. (2%)Mohammad Rostami; Leonidas Spinoulas; Mohamed Hussein; Joe Mathai; Wael Abd-Almageed
2021-08-25
Adversarially Robust One-class Novelty Detection. (99%)Shao-Yuan Lo; Poojan Oza; Vishal M. Patel
Uncertify: Attacks Against Neural Network Certification. (99%)Tobias Lorenz; Marta Kwiatkowska; Mario Fritz
Bridged Adversarial Training. (93%)Hoki Kim; Woojin Lee; Sungyoon Lee; Jaewook Lee
Generalized Real-World Super-Resolution through Adversarial Robustness. (93%)Angela Castillo; María Escobar; Juan C. Pérez; Andrés Romero; Radu Timofte; Luc Van Gool; Pablo Arbeláez
2021-08-24
Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE. (99%)Wenzhao Xiang; Chang Liu; Shibao Zheng
Are socially-aware trajectory prediction models really socially-aware? (92%)Saeed Saadatnejad; Mohammadhossein Bahari; Pedram Khorsandi; Mohammad Saneian; Seyed-Mohsen Moosavi-Dezfooli; Alexandre Alahi
OOWL500: Overcoming Dataset Collection Bias in the Wild. (76%)Brandon Leung; Chih-Hui Ho; Amir Persekian; David Orozco; Yen Chang; Erik Sandstrom; Bo Liu; Nuno Vasconcelos
StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures. (1%)Sanghyuk Chun; Song Park
2021-08-23
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications. (99%)Wenjie Ruan; Xinping Yi; Xiaowei Huang
Semantic-Preserving Adversarial Text Attacks. (99%)Xinghao Yang; Weifeng Liu; James Bailey; Tianqing Zhu; Dacheng Tao; Wei Liu
Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks. (99%)Nima Mirnateghi; Syed Afaq Ali Shah; Mohammed Bennamoun
Kryptonite: An Adversarial Attack Using Regional Focus. (99%)Yogesh Kulkarni; Krisha Bhambani
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning. (73%)Virat Shejwalkar; Amir Houmansadr; Peter Kairouz; Daniel Ramage
SegMix: Co-occurrence Driven Mixup for Semantic Segmentation and Adversarial Robustness. (4%)Md Amirul Islam; Matthew Kowal; Konstantinos G. Derpanis; Neil D. B. Bruce
2021-08-22
Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations. (99%)Inci M. Baytas; Debayan Deb
Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency. (98%)Xueping Wang; Shasha Li; Min Liu; Yaonan Wang; Amit K. Roy-Chowdhury
Relating CNNs with brain: Challenges and findings. (10%)Reem Abdel-Salam
2021-08-21
A Hard Label Black-box Adversarial Attack Against Graph Neural Networks. (99%)Jiaming Mu; Binghui Wang; Qi Li; Kun Sun; Mingwei Xu; Zhuotao Liu
"Adversarial Examples" for Proof-of-Learning. (98%)Rui Zhang; Jian Liu; Yuan Ding; Qingbiao Wu; Kui Ren
Regularizing Instabilities in Image Reconstruction Arising from Learned Denoisers. (2%)Abinash Nayak
2021-08-20
AdvDrop: Adversarial Attack to DNNs by Dropping Information. (99%)Ranjie Duan; Yuefeng Chen; Dantong Niu; Yun Yang; A. K. Qin; Yuan He
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier. (99%)Chong Xiang; Saeed Mahloujifar; Prateek Mittal
Integer-arithmetic-only Certified Robustness for Quantized Neural Networks. (98%)Haowen Lin; Jian Lou; Li Xiong; Cyrus Shahabi
Towards Understanding the Generative Capability of Adversarially Robust Classifiers. (98%)Yao Zhu; Jiacheng Ma; Jiacheng Sun; Zewei Chen; Rongxin Jiang; Zhenguo Li
Detecting and Segmenting Adversarial Graphics Patterns from Images. (93%)Xiangyu Qu; Stanley H. Chan
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning. (1%)Ege Erdogan; Alptekin Kupcu; A. Ercument Cicek
Early-exit deep neural networks for distorted images: providing an efficient edge offloading. (1%)Roberto G. Pacheco; Fernanda D. V. R. Oliveira; Rodrigo S. Couto
2021-08-19
Application of Adversarial Examples to Physical ECG Signals. (99%)Taiga Ono; Takeshi Sugawara; Jun Sakuma; Tatsuya Mori
Pruning in the Face of Adversaries. (99%)Florian Merkle; Maximilian Samsinger; Pascal Schöttle
ASAT: Adaptively Scaled Adversarial Training in Time Series. (98%)Zhiyuan Zhang; Wei Li; Ruihan Bao; Keiko Harimoto; Yunfang Wu; Xu Sun
Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain. (80%)Guangyao Chen; Peixi Peng; Li Ma; Jia Li; Lin Du; Yonghong Tian
2021-08-18
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better. (99%)Bojia Zi; Shihao Zhao; Xingjun Ma; Yu-Gang Jiang
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes. (98%)Mingjun Yin; Shasha Li; Zikui Cai; Chengyu Song; M. Salman Asif; Amit K. Roy-Chowdhury; Srikanth V. Krishnamurthy
MBRS: Enhancing Robustness of DNN-based Watermarking by Mini-Batch of Real and Simulated JPEG Compression. (45%)Zhaoyang Jia; Han Fang; Weiming Zhang
Proceedings of the 1st International Workshop on Adaptive Cyber Defense. (1%)Damian Marriott; Kimberly Ferguson-Walter; Sunny Fugate; Marco Carvalho
2021-08-17
When Should You Defend Your Classifier -- A Game-theoretical Analysis of Countermeasures against Adversarial Examples. (98%)Maximilian Samsinger; Florian Merkle; Pascal Schöttle; Tomas Pevny
Adversarial Relighting against Face Recognition. (98%)Ruijun Gao; Qing Guo; Qian Zhang; Felix Juefei-Xu; Hongkai Yu; Wei Feng
Semantic Perturbations with Normalizing Flows for Improved Generalization. (13%)Oguz Kaan Yuksel; Sebastian U. Stich; Martin Jaggi; Tatjana Chavdarova
Coalesced Multi-Output Tsetlin Machines with Clause Sharing. (1%)Sondre Glimsdal; Ole-Christoffer Granmo
Appearance Based Deep Domain Adaptation for the Classification of Aerial Images. (1%)Dennis Wittich; Franz Rottensteiner
2021-08-16
Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy. (99%)Ruikui Wang; Yuanfang Guo; Ruijie Yang; Yunhong Wang
Interpreting Attributions and Interactions of Adversarial Attacks. (83%)Xin Wang; Shuyun Lin; Hao Zhang; Yufei Zhu; Quanshi Zhang
Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? (62%)Max Lennon; Nathan Drenkow; Philippe Burlina
NeuraCrypt is not private. (10%)Nicholas Carlini; Sanjam Garg; Somesh Jha; Saeed Mahloujifar; Mohammad Mahmoody; Florian Tramer
Identifying and Exploiting Structures for Reliable Deep Learning. (2%)Amartya Sanyal
On the Opportunities and Risks of Foundation Models. (2%)Rishi Bommasani; Drew A. Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney von Arx; Michael S. Bernstein; Jeannette Bohg; Antoine Bosselut; Emma Brunskill; Erik Brynjolfsson; Shyamal Buch; Dallas Card; Rodrigo Castellon; Niladri Chatterji; Annie Chen; Kathleen Creel; Jared Quincy Davis; Dora Demszky; Chris Donahue; Moussa Doumbouya; Esin Durmus; Stefano Ermon; John Etchemendy; Kawin Ethayarajh; Li Fei-Fei; Chelsea Finn; Trevor Gale; Lauren Gillespie; Karan Goel; Noah Goodman; Shelby Grossman; Neel Guha; Tatsunori Hashimoto; Peter Henderson; John Hewitt; Daniel E. Ho; Jenny Hong; Kyle Hsu; Jing Huang; Thomas Icard; Saahil Jain; Dan Jurafsky; Pratyusha Kalluri; Siddharth Karamcheti; Geoff Keeling; Fereshte Khani; Omar Khattab; Pang Wei Koh; Mark Krass; Ranjay Krishna; Rohith Kuditipudi; Ananya Kumar; Faisal Ladhak; Mina Lee; Tony Lee; Jure Leskovec; Isabelle Levent; Xiang Lisa Li; Xuechen Li; Tengyu Ma; Ali Malik; Christopher D. Manning; Suvir Mirchandani; Eric Mitchell; Zanele Munyikwa; Suraj Nair; Avanika Narayan; Deepak Narayanan; Ben Newman; Allen Nie; Juan Carlos Niebles; Hamed Nilforoshan; Julian Nyarko; Giray Ogut; Laurel Orr; Isabel Papadimitriou; Joon Sung Park; Chris Piech; Eva Portelance; Christopher Potts; Aditi Raghunathan; Rob Reich; Hongyu Ren; Frieda Rong; Yusuf Roohani; Camilo Ruiz; Jack Ryan; Christopher Ré; Dorsa Sadigh; Shiori Sagawa; Keshav Santhanam; Andy Shih; Krishnan Srinivasan; Alex Tamkin; Rohan Taori; Armin W. Thomas; Florian Tramèr; Rose E. Wang; William Wang; Bohan Wu; Jiajun Wu; Yuhuai Wu; Sang Michael Xie; Michihiro Yasunaga; Jiaxuan You; Matei Zaharia; Michael Zhang; Tianyi Zhang; Xikun Zhang; Yuhui Zhang; Lucia Zheng; Kaitlyn Zhou; Percy Liang
2021-08-15
Neural Architecture Dilation for Adversarial Robustness. (81%)Yanxi Li; Zhaohui Yang; Yunhe Wang; Chang Xu
Deep Adversarially-Enhanced k-Nearest Neighbors. (74%)Ren Wang; Tianqi Chen
IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance. (1%)Ruixuan Liu; Changliu Liu
2021-08-14
LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis. (1%)Fan Wu; Yunhui Long; Ce Zhang; Bo Li
2021-08-13
Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks. (99%)Federico Nesti; Giulio Rossolini; Saasha Nair; Alessandro Biondi; Giorgio Buttazzo
Optical Adversarial Attack. (98%)Abhiram Gnanasambandam; Alex M. Sherman; Stanley H. Chan
Understanding Structural Vulnerability in Graph Convolutional Networks. (96%)Liang Chen; Jintang Li; Qibiao Peng; Yang Liu; Zibin Zheng; Carl Yang
The Forgotten Threat of Voltage Glitching: A Case Study on Nvidia Tegra X2 SoCs. (1%)Otto Bittner; Thilo Krachenfels; Andreas Galauner; Jean-Pierre Seifert
2021-08-12
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning. (99%)Hong Wang; Yuefan Deng; Shinjae Yoo; Haibin Ling; Yuewei Lin
Deep adversarial attack on target detection systems. (99%)Uche M. Osahor; Nasser M. Nasrabadi
Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate. (69%)Hannah Rose Kirk; Bertram Vidgen; Paul Röttger; Tristan Thrush; Scott A. Hale
2021-08-11
Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attacks. (98%)Zitao Chen; Pritam Dash; Karthik Pattabiraman
Attacks against Ranking Algorithms with Text Embeddings: a Case Study on Recruitment Algorithms. (78%)Anahita Samadi; Debapriya Banerjee; Shirin Nilizadeh
Are Neural Ranking Models Robust? (4%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Yixing Fan; Xueqi Cheng
Logic Explained Networks. (1%)Gabriele Ciravegna; Pietro Barbiero; Francesco Giannini; Marco Gori; Pietro Lió; Marco Maggini; Stefano Melacci
2021-08-10
Simple black-box universal adversarial attacks on medical image classification based on deep neural networks. (99%)Kazuki Koga; Kazuhiro Takemoto
On the Effect of Pruning on Adversarial Robustness. (81%)Artur Jordao; Helio Pedrini
SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version). (68%)Nils Lukas; Edward Jiang; Xinda Li; Florian Kerschbaum
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. (64%)Sanchit Sinha; Hanjie Chen; Arshdeep Sekhon; Yangfeng Ji; Yanjun Qi
UniNet: A Unified Scene Understanding Network and Exploring Multi-Task Relationships through the Lens of Adversarial Attacks. (2%)NareshKumar Gurulingan; Elahe Arani; Bahram Zonooz
Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation. (1%)Weilun Wang; Wengang Zhou; Jianmin Bao; Dong Chen; Houqiang Li
2021-08-09
Meta Gradient Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Yunpei Jia; Chuanqi Tan; Tao Xue; Shiguang Shan
On Procedural Adversarial Noise Attack And Defense. (99%)Jun Yan; Xiaoyang Deng; Huilin Yin; Wancheng Ge
Enhancing Knowledge Tracing via Adversarial Training. (98%)Xiaopeng Guo; Zhijie Huang; Jie Gao; Mingyu Shang; Maojing Shu; Jun Sun
Neural Network Repair with Reachability Analysis. (96%)Xiaodong Yang; Tom Yamaguchi; Hoang-Dung Tran; Bardh Hoxha; Taylor T Johnson; Danil Prokhorov
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks. (92%)Fereshteh Razmi; Li Xiong
Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning. (82%)Wanqi Xue; Wei Qiu; Bo An; Zinovi Rabinovich; Svetlana Obraztsova; Chai Kiat Yeo
Privacy-Preserving Machine Learning: Methods, Challenges and Directions. (16%)Runhua Xu; Nathalie Baracaldo; James Joshi
Explainable AI and susceptibility to adversarial attacks: a case study in classification of breast ultrasound images. (15%)Hamza Rasaee; Hassan Rivaz
2021-08-07
Jointly Attacking Graph Neural Network and its Explanations. (96%)Wenqi Fan; Wei Jin; Xiaorui Liu; Han Xu; Xianfeng Tang; Suhang Wang; Qing Li; Jiliang Tang; Jianping Wang; Charu Aggarwal
Membership Inference Attacks on Lottery Ticket Networks. (33%)Aadesh Bagmar; Shishira R Maiya; Shruti Bidwalka; Amol Deshpande
Information Bottleneck Approach to Spatial Attention Learning. (1%)Qiuxia Lai; Yu Li; Ailing Zeng; Minhao Liu; Hanqiu Sun; Qiang Xu
2021-08-06
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles. (80%)Jindi Zhang; Yang Lou; Jianping Wang; Kui Wu; Kejie Lu; Xiaohua Jia
Ensemble Augmentation for Deep Neural Networks Using 1-D Time Series Vibration Data. (2%)Atik Faysal; Ngui Wai Keng; M. H. Lim
2021-08-05
BOSS: Bidirectional One-Shot Synthesis of Adversarial Examples. (99%)Ismail Alkhouri; Alvaro Velasquez; George Atia
Poison Ink: Robust and Invisible Backdoor Attack. (99%)Jie Zhang; Dongdong Chen; Jing Liao; Qidong Huang; Gang Hua; Weiming Zhang; Nenghai Yu
Imperceptible Adversarial Examples by Spatial Chroma-Shift. (99%)Ayberk Aydin; Deniz Sen; Berat Tuna Karli; Oguz Hanoglu; Alptekin Temizel
Householder Activations for Provable Robustness against Adversarial Attacks. (83%)Sahil Singla; Surbhi Singla; Soheil Feizi
Fairness Properties of Face Recognition and Obfuscation Systems. (68%)Harrison Rosenberg; Brian Tang; Kassem Fawaz; Somesh Jha
Exploring Structure Consistency for Deep Model Watermarking. (10%)Jie Zhang; Dongdong Chen; Jing Liao; Han Fang; Zehua Ma; Weiming Zhang; Gang Hua; Nenghai Yu
Locally Interpretable One-Class Anomaly Detection for Credit Card Fraud Detection. (1%)Tungyu Wu; Youting Wang
2021-08-04
Robust Transfer Learning with Pretrained Language Models through Adapters. (82%)Wenjuan Han; Bo Pang; Yingnian Wu
Semi-supervised Conditional GAN for Simultaneous Generation and Detection of Phishing URLs: A Game theoretic Perspective. (31%)Sharif Amit Kamran; Shamik Sengupta; Alireza Tavakkoli
2021-08-03
On the Robustness of Domain Adaption to Adversarial Attacks. (99%)Liyuan Zhang; Yuhang Zhou; Lei Zhang
On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples. (99%)Adelin Travers; Lorna Licollari; Guanghan Wang; Varun Chandrasekaran; Adam Dziedzic; David Lie; Nicolas Papernot
AdvRush: Searching for Adversarially Robust Neural Architectures. (99%)Jisoo Mok; Byunggook Na; Hyeokjun Choe; Sungroh Yoon
The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks. (88%)Ambrish Rawat; Killian Levacher; Mathieu Sinn
DeepFreeze: Cold Boot Attacks and High Fidelity Model Recovery on Commercial EdgeML Device. (69%)Yoo-Seung Won; Soham Chatterjee; Dirmanto Jap; Arindam Basu; Shivam Bhasin
Tutorials on Testing Neural Networks. (1%)Nicolas Berthier; Youcheng Sun; Wei Huang; Yanghao Zhang; Wenjie Ruan; Xiaowei Huang
2021-08-02
Hybrid Classical-Quantum Deep Learning Models for Autonomous Vehicle Traffic Image Classification Under Adversarial Attack. (98%)Reek Majumder; Sakib Mahmud Khan; Fahim Ahmed; Zadid Khan; Frank Ngeni; Gurcan Comert; Judith Mwakalonge; Dimitra Michalaka; Mashrur Chowdhury
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles. (10%)Anum Talpur; Mohan Gurusamy
Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks. (9%)Yuwei Sun; Ng Chong; Hideya Ochiai
Efficacy of Statistical and Artificial Intelligence-based False Information Cyberattack Detection Models for Connected Vehicles. (1%)Sakib Mahmud Khan; Gurcan Comert; Mashrur Chowdhury
2021-08-01
Advances in adversarial attacks and defenses in computer vision: A survey. (92%)Naveed Akhtar; Ajmal Mian; Navid Kardan; Mubarak Shah
Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders. (80%)Huimin Zeng; Jiahao Su; Furong Huang
An Effective and Robust Detector for Logo Detection. (70%)Xiaojun Jia; Huanqian Yan; Yonglin Wu; Xingxing Wei; Xiaochun Cao; Yong Zhang
Style Curriculum Learning for Robust Medical Image Segmentation. (2%)Zhendong Liu; Van Manh; Xin Yang; Xiaoqiong Huang; Karim Lekadir; Víctor Campello; Nishant Ravikumar; Alejandro F Frangi; Dong Ni
2021-07-31
Delving into Deep Image Prior for Adversarial Defense: A Novel Reconstruction-based Defense Framework. (99%)Li Ding; Yongwei Wang; Xin Ding; Kaiwen Yuan; Ping Wang; Hua Huang; Z. Jane Wang
Adversarial Robustness of Deep Code Comment Generation. (99%)Yu Zhou; Xiaoqing Zhang; Juanjuan Shen; Tingting Han; Taolue Chen; Harald Gall
Towards Adversarially Robust and Domain Generalizable Stereo Matching by Rethinking DNN Feature Backbones. (93%)Kelvin Cheng; Christopher Healey; Tianfu Wu
T$_k$ML-AP: Adversarial Attacks to Top-$k$ Multi-Label Learning. (81%)Shu Hu; Lipeng Ke; Xin Wang; Siwei Lyu
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. (67%)Jinyuan Jia; Yupei Liu; Neil Zhenqiang Gong
Fair Representation Learning using Interpolation Enabled Disentanglement. (1%)Akshita Jha; Bhanukiran Vinzamuri; Chandan K. Reddy
2021-07-30
Who's Afraid of Thomas Bayes? (92%)Erick Galinkin
Practical Attacks on Voice Spoofing Countermeasures. (86%)Andre Kassis; Urs Hengartner
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers. (50%)Stefanos Koffas; Jing Xu; Mauro Conti; Stjepan Picek
Unveiling the potential of Graph Neural Networks for robust Intrusion Detection. (13%)David Pujol-Perich; José Suárez-Varela; Albert Cabellos-Aparicio; Pere Barlet-Ros
2021-07-29
Feature Importance-aware Transferable Adversarial Attacks. (99%)Zhibo Wang; Hengchang Guo; Zhifei Zhang; Wenxin Liu; Zhan Qin; Kui Ren
Enhancing Adversarial Robustness via Test-time Transformation Ensembling. (98%)Juan C. Pérez; Motasem Alfarra; Guillaume Jeanneret; Laura Rueda; Ali Thabet; Bernard Ghanem; Pablo Arbeláez
The Robustness of Graph k-shell Structure under Adversarial Attacks. (93%)B. Zhou; Y. Q. Lv; Y. C. Mao; J. H. Wang; S. Q. Yu; Q. Xuan
Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality. (31%)Vito Walter Anelli; Yashar Deldjoo; Tommaso Di Noia; Felice Antonio Merra
Towards robust vision by multi-task learning on monkey visual cortex. (3%)Shahd Safarani; Arne Nix; Konstantin Willeke; Santiago A. Cadena; Kelli Restivo; George Denfield; Andreas S. Tolias; Fabian H. Sinz
2021-07-28
Imbalanced Adversarial Training with Reweighting. (86%)Wentao Wang; Han Xu; Xiaorui Liu; Yaxin Li; Bhavani Thuraisingham; Jiliang Tang
Towards Robustness Against Natural Language Word Substitutions. (73%)Xinshuai Dong; Anh Tuan Luu; Rongrong Ji; Hong Liu
Models of Computational Profiles to Study the Likelihood of DNN Metamorphic Test Cases. (67%)Ettore Merlo; Mira Marhaba; Foutse Khomh; Houssem Ben Braiek; Giuliano Antoniol
WaveCNet: Wavelet Integrated CNNs to Suppress Aliasing Effect for Noise-Robust Image Classification. (15%)Qiufu Li; Linlin Shen; Sheng Guo; Zhihui Lai
TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing. (2%)Aoting Hu; Renjie Xie; Zhigang Lu; Aiqun Hu; Minhui Xue
2021-07-27
Towards Black-box Attacks on Deep Learning Apps. (89%)Hongchen Cao; Shuai Li; Yuming Zhou; Ming Fan; Xuejiao Zhao; Yutian Tang
Poisoning Online Learning Filters: DDoS Attacks and Countermeasures. (50%)Wesley Joon-Wie Tann; Ee-Chien Chang
PDF-Malware: An Overview on Threats, Detection and Evasion Attacks. (8%)Nicolas Fleury; Theo Dubrunquez; Ihsen Alouani
2021-07-26
Benign Adversarial Attack: Tricking Algorithm for Goodness. (99%)Xian Zhao; Jiaming Zhang; Zhiyu Lin; Jitao Sang
Learning to Adversarially Blur Visual Object Tracking. (98%)Qing Guo; Ziyi Cheng; Felix Juefei-Xu; Lei Ma; Xiaofei Xie; Yang Liu; Jianjun Zhao
Adversarial Attacks with Time-Scale Representations. (96%)Alberto Santamaria-Pang; Jianwei Qiu; Aritra Chowdhury; James Kubricht; Peter Tu; Naresh Iyer; Nurali Virani
2021-07-24
Adversarial training may be a double-edged sword. (99%)Ali Rahmati; Seyed-Mohsen Moosavi-Dezfooli; Huaiyu Dai
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. (98%)Florian Tramèr
Stress Test Evaluation of Biomedical Word Embeddings. (73%)Vladimir Araujo; Andrés Carvallo; Carlos Aspillaga; Camilo Thorne; Denis Parra
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering. (1%)Jingjing Jiang; Ziyi Liu; Yifan Liu; Zhixiong Nan; Nanning Zheng
2021-07-23
A Differentiable Language Model Adversarial Attack on Text Classifiers. (99%)Ivan Fursov; Alexey Zaytsev; Pavel Burnyshev; Ekaterina Dmitrieva; Nikita Klyuchnikov; Andrey Kravchenko; Ekaterina Artemova; Evgeny Burnaev
Structack: Structure-based Adversarial Attacks on Graph Neural Networks. (86%)Hussain Hussain; Tomislav Duricic; Elisabeth Lex; Denis Helic; Markus Strohmaier; Roman Kern
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation. (45%)Bingqian Lin; Yi Zhu; Yanxin Long; Xiaodan Liang; Qixiang Ye; Liang Lin
Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers. (8%)Yunhui Guo; Xudong Wang; Yubei Chen; Stella X. Yu
2021-07-22
On the Certified Robustness for Ensemble Models and Beyond. (99%)Zhuolin Yang; Linyi Li; Xiaojun Xu; Bhavya Kailkhura; Tao Xie; Bo Li
Unsupervised Detection of Adversarial Examples with Model Explanations. (99%)Gihyuk Ko; Gyumin Lim
Membership Inference Attack and Defense for Wireless Signal Classifiers with Deep Learning. (83%)Yi Shi; Yalin E. Sagduyu
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks. (75%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Estimating Predictive Uncertainty Under Program Data Distribution Shift. (1%)Yufei Li; Simin Chen; Wei Yang
Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack. (1%)Fan Wu; Min Gao; Junliang Yu; Zongwei Wang; Kecheng Liu; Xu Wange
2021-07-21
Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients. (98%)Huimin Wu; Zhengmian Hu; Bin Gu
Improved Text Classification via Contrastive Adversarial Training. (84%)Lin Pan; Chung-Wei Hang; Avirup Sil; Saloni Potdar
Black-box Probe for Unsupervised Domain Adaptation without Model Transferring. (81%)Kunhong Wu; Yucheng Shi; Yahong Han; Yunfeng Shao; Bingshuai Li
Defending against Reconstruction Attack in Vertical Federated Learning. (10%)Jiankai Sun; Yuanshun Yao; Weihao Gao; Junyuan Xie; Chong Wang
Generative Models for Security: Attacks, Defenses, and Opportunities. (10%)Luke A. Bauer; Vincent Bindschaedler
A Tandem Framework Balancing Privacy and Security for Voice User Interfaces. (5%)Ranya Aloufi; Hamed Haddadi; David Boyle
Spinning Sequence-to-Sequence Models with Meta-Backdoors. (4%)Eugene Bagdasaryan; Vitaly Shmatikov
On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms. (2%)Shuyu Cheng; Guoqiang Wu; Jun Zhu
2021-07-20
Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks. (99%)Saikat Majumdar; Mohammad Hossein Samavatian; Kristin Barber; Radu Teodorescu
A Markov Game Model for AI-based Cyber Security Attack Mitigation. (10%)Hooman Alavizadeh; Julian Jang-Jaccard; Tansu Alpcan; Seyit A. Camtepe
Leaking Secrets through Modern Branch Predictor in the Speculative World. (1%)Md Hafizul Islam Chowdhuryy; Fan Yao
2021-07-19
Discriminator-Free Generative Adversarial Attack. (99%)Shaohao Lu; Yuqiao Xian; Ke Yan; Yi Hu; Xing Sun; Xiaowei Guo; Feiyue Huang; Wei-Shi Zheng
Feature-Filter: Detecting Adversarial Examples through Filtering off Recessive Features. (99%)Hui Liu; Bo Zhao; Yuefeng Peng; Jiabao Guo; Peng Liu
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition. (98%)Benjamin Spetter-Goldstein; Nataniel Ruiz; Sarah Adel Bargal
On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples. (80%)Verena Praher; Katharina Prinz; Arthur Flexer; Gerhard Widmer
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI. (33%)Takayuki Miura; Satoshi Hasegawa; Toshiki Shibahara
Structural Watermarking to Deep Neural Networks via Network Channel Pruning. (11%)Xiangyu Zhao; Yinzhe Yao; Hanzhou Wu; Xinpeng Zhang
Generative Adversarial Neural Cellular Automata. (1%)Maximilian Otte; Quentin Delfosse; Johannes Czech; Kristian Kersting
Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units. (1%)Woo-Jeoung Nam; Seong-Whan Lee
Just Train Twice: Improving Group Robustness without Training Group Information. (1%)Evan Zheran Liu; Behzad Haghgoo; Annie S. Chen; Aditi Raghunathan; Pang Wei Koh; Shiori Sagawa; Percy Liang; Chelsea Finn
2021-07-18
RobustFed: A Truth Inference Approach for Robust Federated Learning. (1%)Farnaz Tahmasebian; Jian Lou; Li Xiong
2021-07-17
BEDS-Bench: Behavior of EHR-models under Distributional Shift--A Benchmark. (9%)Anand Avati; Martin Seneviratne; Emily Xue; Zhen Xu; Balaji Lakshminarayanan; Andrew M. Dai
2021-07-16
EGC2: Enhanced Graph Classification with Easy Graph Compression. (84%)Jinyin Chen; Dunjie Zhang; Zhaoyan Ming; Mingwei Jia; Yi Liu
Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI. (1%)Quanshi Zhang; Tian Han; Lixin Fan; Zhanxing Zhu; Hang Su; Ying Nian Wu; Jie Ren; Hao Zhang
2021-07-15
Self-Supervised Contrastive Learning with Adversarial Perturbations for Robust Pretrained Language Models. (99%)Zhao Meng; Yihan Dong; Mrinmaya Sachan; Roger Wattenhofer
Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving. (98%)Ibrahim Sobh; Ahmed Hamed; Varun Ravi Kumar; Senthil Yogamani
ECG-Adv-GAN: Detecting ECG Adversarial Examples with Conditional Generative Adversarial Networks. (92%)Khondker Fariha Hossain; Sharif Amit Kamran; Alireza Tavakkoli; Lei Pan; Xingjun Ma; Sutharshan Rajasegarar; Chandan Karmaker
Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks. (80%)Ismail Alarab; Simant Prakoonwit
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting. (16%)Xiangyu Qi; Jifeng Zhu; Chulin Xie; Yong Yang
Tailor: Generating and Perturbing Text with Semantic Controls. (3%)Alexis Ross; Tongshuang Wu; Hao Peng; Matthew E. Peters; Matt Gardner
Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks. (1%)Andrey Malinin; Neil Band; Alexander Ganshin; German Chesnokov; Yarin Gal; Mark J. F. Gales; Alexey Noskov; Andrey Ploskonosov; Liudmila Prokhorenkova; Ivan Provilkov; Vatsal Raina; Vyas Raina; Denis Roginskiy; Mariya Shmatova; Panos Tigas; Boris Yangel
2021-07-14
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning. (99%)Yihao Huang; Qing Guo; Felix Juefei-Xu; Lei Ma; Weikai Miao; Yang Liu; Geguang Pu
Conservative Objective Models for Effective Offline Model-Based Optimization. (67%)Brandon Trabucco; Aviral Kumar; Xinyang Geng; Sergey Levine
2021-07-13
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense. (88%)Duhun Hwang; Eunjung Lee; Wonjong Rhee
Using BERT Encoding to Tackle the Mad-lib Attack in SMS Spam Detection. (69%)Sergio Rojas-Galeano
Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors. (41%)M. Ben Amor; J. Stier; M. Granitzer
What classifiers know what they don't? (1%)Mohamed Ishmael Belghazi; David Lopez-Paz
2021-07-12
EvoBA: An Evolution Strategy as a Strong Baseline forBlack-Box Adversarial Attacks. (99%)Andrei Ilie; Marius Popescu; Alin Stefanescu
Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising. (99%)Anouar Kherchouche; Sid Ahmed Fezza; Wassim Hamidouche
Perceptual-based deep-learning denoiser as a defense against adversarial attacks on ASR systems. (96%)Anirudh Sreeram; Nicholas Mehlman; Raghuveer Peri; Dillon Knox; Shrikanth Narayanan
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning. (81%)Jun Wang; Chang Xu; Francisco Guzman; Ahmed El-Kishky; Yuqing Tang; Benjamin I. P. Rubinstein; Trevor Cohn
A Closer Look at the Adversarial Robustness of Information Bottleneck Models. (70%)Iryna Korshunova; David Stutz; Alexander A. Alemi; Olivia Wiles; Sven Gowal
SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks. (56%)Timoleon Moraitis; Dmitry Toichkin; Yansong Chua; Qinghai Guo
2021-07-11
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks. (76%)Kendra Albert; Maggie Delano; Bogdan Kulynych; Ram Shankar Siva Kumar
Stateful Detection of Model Extraction Attacks. (2%)Soham Pal; Yash Gupta; Aditya Kanade; Shirish Shevade
Attack Rules: An Adversarial Approach to Generate Attacks for Industrial Control Systems using Machine Learning. (1%)Muhammad Azmi Umer; Chuadhry Mujeeb Ahmed; Muhammad Taha Jilani; Aditya P. Mathur
2021-07-10
Hack The Box: Fooling Deep Learning Abstraction-Based Monitors. (91%)Sara Hajj Ibrahim; Mohamed Nassar
HOMRS: High Order Metamorphic Relations Selector for Deep Neural Networks. (88%)Florian Tambon; Giulio Antoniol; Foutse Khomh
Identifying Layers Susceptible to Adversarial Attacks. (83%)Shoaib Ahmed Siddiqui; Thomas Breuel
Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis. (22%)Anisie Uwimana; Ransalu Senanayake
Cyber-Security Challenges in Aviation Industry: A Review of Current and Future Trends. (1%)Elochukwu Ukwandu; Mohamed Amine Ben Farah; Hanan Hindy; Miroslav Bures; Robert Atkinson; Christos Tachtatzis; Xavier Bellekens
2021-07-09
Learning to Detect Adversarial Examples Based on Class Scores. (99%)Tobias Uelwer; Felix Michels; Oliver De Candido
Resilience of Autonomous Vehicle Object Category Detection to Universal Adversarial Perturbations. (99%)Mohammad Nayeem Teli; Seungwon Oh
Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems. (99%)Shangyu Xie; Han Wang; Yu Kong; Yuan Hong
GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network. (98%)Zuohui Chen; Renxuan Wang; Jingyang Xiang; Yue Yu; Xin Xia; Shouling Ji; Qi Xuan; Xiaoniu Yang
Towards Robust General Medical Image Segmentation. (83%)Laura Daza; Juan C. Pérez; Pablo Arbeláez
ARC: Adversarially Robust Control Policies for Autonomous Vehicles. (38%)Sampo Kuutti; Saber Fallah; Richard Bowden
2021-07-08
Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models. (99%)Daniel Park; Haidar Khan; Azer Khan; Alex Gittens; Bülent Yener
Improving Model Robustness with Latent Distribution Locally and Globally. (99%)Zhuang Qian; Shufei Zhang; Kaizhu Huang; Qiufeng Wang; Rui Zhang; Xinping Yi
Analytically Tractable Hidden-States Inference in Bayesian Neural Networks. (50%)Luong-Ha Nguyen; James-A. Goulet
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning. (33%)Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Jihun Hamm
2021-07-07
Controlled Caption Generation for Images Through Adversarial Attacks. (99%)Nayyer Aafaq; Naveed Akhtar; Wei Liu; Mubarak Shah; Ajmal Mian
Incorporating Label Uncertainty in Understanding Adversarial Robustness. (38%)Xiao Zhang; David Evans
RoFL: Attestable Robustness for Secure Federated Learning. (2%)Lukas Burkhalter; Hidde Lycklama; Alexander Viand; Nicolas Küchler; Anwar Hithnawi
2021-07-06
GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization. (99%)Sungyoon Lee; Hoki Kim; Jaewook Lee
Self-Adversarial Training incorporating Forgery Attention for Image Forgery Localization. (95%)Long Zhuo; Shunquan Tan; Bin Li; Jiwu Huang
ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients. (76%)Alessandro Cappelli; Julien Launay; Laurent Meunier; Ruben Ohana; Iacopo Poli
On Generalization of Graph Autoencoders with Adversarial Training. (12%)Tianjin Huang; Yulong Pei; Vlado Menkovski; Mykola Pechenizkiy
On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving. (1%)Takami Sato; Qi Alfred Chen
2021-07-05
When and How to Fool Explainable Models (and Humans) with Adversarial Examples. (99%)Jon Vadillo; Roberto Santana; Jose A. Lozano
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks. (99%)Xiao Yang; Yinpeng Dong; Tianyu Pang; Hang Su; Jun Zhu
Adversarial Robustness of Probabilistic Network Embedding for Link Prediction. (87%)Xi Chen; Bo Kang; Jefrey Lijffijt; Tijl De Bie
Dealing with Adversarial Player Strategies in the Neural Network Game iNNk through Ensemble Learning. (69%)Mathias Löwe; Jennifer Villareale; Evan Freed; Aleksanteri Sladek; Jichen Zhu; Sebastian Risi
Understanding the Security of Deepfake Detection. (33%)Xiaoyu Cao; Neil Zhenqiang Gong
Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems. (15%)Ron Bitton; Nadav Maman; Inderjeet Singh; Satoru Momiyama; Yuval Elovici; Asaf Shabtai
Poisoning Attack against Estimating from Pairwise Comparisons. (15%)Ke Ma; Qianqian Xu; Jinshan Zeng; Xiaochun Cao; Qingming Huang
Confidence Conditioned Knowledge Distillation. (10%)Sourav Mishra; Suresh Sundaram
2021-07-04
Certifiably Robust Interpretation via Renyi Differential Privacy. (67%)Ao Liu; Xiaoyu Chen; Sijia Liu; Lirong Xia; Chuang Gan
Mirror Mirror on the Wall: Next-Generation Wireless Jamming Attacks Based on Software-Controlled Surfaces. (1%)Paul Staat; Harald Elders-Boll; Christian Zenger; Christof Paar
2021-07-03
Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity. (99%)Yajie Wang; Shangbo Wu; Wenyi Jiang; Shengang Hao; Yu-an Tan; Quanxin Zhang
2021-07-01
Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples. (99%)Nelson Manohar-Alers; Ryan Feng; Sahib Singh; Jiguo Song; Atul Prakash
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks. (99%)Alberto Marchisio; Giacomo Pira; Maurizio Martina; Guido Masera; Muhammad Shafique
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding. (68%)Dong Wang; Ning Ding; Piji Li; Hai-Tao Zheng
Spotting adversarial samples for speaker verification by neural vocoders. (26%)Haibin Wu; Po-chun Hsu; Ji Gao; Shanshan Zhang; Shen Huang; Jian Kang; Zhiyong Wu; Helen Meng; Hung-yi Lee
The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification. (16%)Alireza Mousavi Hosseini; Amir Mohammad Abouei; Mohammad Hossein Rohban
Reinforcement Learning for Feedback-Enabled Cyber Resilience. (10%)Yunhan Huang; Linan Huang; Quanyan Zhu
2021-06-30
Single-Step Adversarial Training for Semantic Segmentation. (96%)Daniel Wiens; Barbara Hammer
Understanding Adversarial Attacks on Observations in Deep Reinforcement Learning. (84%)You Qiaoben; Chengyang Ying; Xinning Zhou; Hang Su; Jun Zhu; Bo Zhang
Explanation-Guided Diagnosis of Machine Learning Evasion Attacks. (82%)Abderrahmen Amich; Birhanu Eshete
Bi-Level Poisoning Attack Model and Countermeasure for Appliance Consumption Data of Smart Homes. (8%)Mustain Billah; Adnan Anwar; Ziaur Rahman; Syed Md. Galib
Exploring Robustness of Neural Networks through Graph Measures. (8%)Asim Waqas; Ghulam Rasool; Hamza Farooq; Nidhal C. Bouaynaya
A Context-Aware Information-Based Clone Node Attack Detection Scheme in Internet of Things. (1%)Khizar Hameed; Saurabh Garg; Muhammad Bilal Amin; Byeong Kang; Abid Khan
Understanding and Improving Early Stopping for Learning with Noisy Labels. (1%)Yingbin Bai; Erkun Yang; Bo Han; Yanhua Yang; Jiatong Li; Yinian Mao; Gang Niu; Tongliang Liu
2021-06-29
Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges. (99%)Bowei Xi
Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions. (99%)Juan Shu; Bowei Xi; Charles Kamhoua
Attack Transferability Characterization for Adversarially Robust Multi-label Classification. (99%)Zhuo Yang; Yufei Han; Xiangliang Zhang
Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices. (99%)Tao Bai; Jinqi Luo; Jun Zhao
Bio-Inspired Adversarial Attack Against Deep Neural Networks. (98%)Bowei Xi; Yujie Chen; Fan Fei; Zhan Tu; Xinyan Deng
Do Not Deceive Your Employer with a Virtual Background: A Video Conferencing Manipulation-Detection System. (62%)Mauro Conti; Simone Milani; Ehsan Nowroozi; Gabriele Orazi
The Threat of Offensive AI to Organizations. (54%)Yisroel Mirsky; Ambra Demontis; Jaidip Kotak; Ram Shankar; Deng Gelei; Liu Yang; Xiangyu Zhang; Wenke Lee; Yuval Elovici; Battista Biggio
Local Reweighting for Adversarial Training. (22%)Ruize Gao; Feng Liu; Kaiwen Zhou; Gang Niu; Bo Han; James Cheng
On the Interaction of Belief Bias and Explanations. (15%)Ana Valeria Gonzalez; Anna Rogers; Anders Søgaard
2021-06-28
Feature Importance Guided Attack: A Model Agnostic Adversarial Attack. (99%)Gilad Gressel; Niranjan Hegde; Archana Sreekumar; Michael Darling
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. (99%)Oliver Bryniarski; Nabeel Hingun; Pedro Pachuca; Vincent Wang; Nicholas Carlini
Improving Transferability of Adversarial Patches on Face Recognition with Generative Models. (99%)Zihao Xiao; Xianfeng Gao; Chilin Fu; Yinpeng Dong; Wei Gao; Xiaolu Zhang; Jun Zhou; Jun Zhu
Data Poisoning Won't Save You From Facial Recognition. (97%)Evani Radiya-Dixit; Florian Tramèr
Adversarial Robustness of Streaming Algorithms through Importance Sampling. (61%)Vladimir Braverman; Avinatan Hassidim; Yossi Matias; Mariano Schain; Sandeep Silwal; Samson Zhou
Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation. (2%)Chaithanya Kumar Mummadi; Robin Hutmacher; Kilian Rambach; Evgeny Levinkov; Thomas Brox; Jan Hendrik Metzen
Certified Robustness via Randomized Smoothing over Multiplicative Parameters. (1%)Nikita Muravev; Aleksandr Petiushko
Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. (1%)Chuanpu Fu; Qi Li; Meng Shen; Ke Xu
2021-06-27
RAILS: A Robust Adversarial Immune-inspired Learning System. (98%)Ren Wang; Tianqi Chen; Stephen Lindsly; Cooper Stansbury; Alnawaz Rehemtulla; Indika Rajapakse; Alfred Hero
Who is Responsible for Adversarial Defense? (93%)Kishor Datta Gupta; Dipankar Dasgupta
ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense. (82%)Ren Wang; Tianqi Chen; Philip Yao; Sijia Liu; Indika Rajapakse; Alfred Hero
Immuno-mimetic Deep Neural Networks (Immuno-Net). (64%)Ren Wang; Tianqi Chen; Stephen Lindsly; Cooper Stansbury; Indika Rajapakse; Alfred Hero
Stabilizing Equilibrium Models by Jacobian Regularization. (1%)Shaojie Bai; Vladlen Koltun; J. Zico Kolter
2021-06-26
Multi-stage Optimization based Adversarial Training. (99%)Xiaosen Wang; Chuanbiao Song; Liwei Wang; Kun He
The Feasibility and Inevitability of Stealth Attacks. (68%)Ivan Y. Tyukin; Desmond J. Higham; Eliyas Woldegeorgis; Alexander N. Gorban
2021-06-24
On the (Un-)Avoidability of Adversarial Examples. (99%)Sadia Chowdhury; Ruth Urner
Countering Adversarial Examples: Combining Input Transformation and Noisy Training. (99%)Cheng Zhang; Pan Gao
2021-06-23
Adversarial Examples in Multi-Layer Random ReLU Networks. (81%)Peter L. Bartlett; Sébastien Bubeck; Yeshwanth Cherapanamjeri
Teacher Model Fingerprinting Attacks Against Transfer Learning. (2%)Yufei Chen; Chao Shen; Cong Wang; Yang Zhang
Meaningfully Explaining Model Mistakes Using Conceptual Counterfactuals. (1%)Abubakar Abid; Mert Yuksekgonul; James Zou
Feature Attributions and Counterfactual Explanations Can Be Manipulated. (1%)Dylan Slack; Sophie Hilgard; Sameer Singh; Himabindu Lakkaraju
2021-06-22
DetectX -- Adversarial Input Detection using Current Signatures in Memristive XBar Arrays. (99%)Abhishek Moitra; Priyadarshini Panda
Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack. (99%)Sungmin Cha; Naeun Ko; Youngjoon Yoo; Taesup Moon
Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks. (83%)Fan Liu; Shuyu Zhao; Xuelong Dai; Bin Xiao
On Adversarial Robustness of Synthetic Code Generation. (81%)Mrinal Anand; Pratik Kayal; Mayank Singh
NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data. (67%)I-Chung Hsieh; Cheng-Te Li
2021-06-21
Policy Smoothing for Provably Robust Reinforcement Learning. (99%)Aounon Kumar; Alexander Levine; Soheil Feizi
Delving into the pixels of adversarial samples. (98%)Blerta Lindqvist
HODA: Hardness-Oriented Detection of Model Extraction Attacks. (98%)Amir Mahdi Sadeghzadeh; Amir Mohammad Sobhanian; Faezeh Dehghan; Rasool Jalili
Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier. (91%)Simone Marullo; Matteo Tiezzi; Marco Gori; Stefano Melacci
Membership Inference on Word Embedding and Beyond. (38%)Saeed Mahloujifar; Huseyin A. Inan; Melissa Chase; Esha Ghosh; Marcello Hasegawa
An Alternative Auxiliary Task for Enhancing Image Classification. (11%)Chen Liu
Zero-shot learning approach to adaptive Cybersecurity using Explainable AI. (1%)Dattaraj Rao; Shraddha Mane
2021-06-20
Adversarial Examples Make Strong Poisons. (98%)Liam Fowl; Micah Goldblum; Ping-yeh Chiang; Jonas Geiping; Wojtek Czaja; Tom Goldstein
Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem. (95%)Jiaqi Ma; Junwei Deng; Qiaozhu Mei
Generative Model Adversarial Training for Deep Compressed Sensing. (8%)Ashkan Esmaeili
2021-06-19
Attack to Fool and Explain Deep Networks. (99%)Naveed Akhtar; Muhammad A. A. K. Jalwana; Mohammed Bennamoun; Ajmal Mian
A Stealthy and Robust Fingerprinting Scheme for Generative Models. (47%)Li Guanlin; Guo Shangwei; Wang Run; Xu Guowen; Zhang Tianwei
2021-06-18
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. (99%)Maura Pintor; Luca Demetrio; Angelo Sotgiu; Giovanni Manca; Ambra Demontis; Nicholas Carlini; Battista Biggio; Fabio Roli
Residual Error: a New Performance Measure for Adversarial Robustness. (99%)Hossein Aboutalebi; Mohammad Javad Shafiee; Michelle Karg; Christian Scharfenberger; Alexander Wong
Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis. (99%)Martin Pawelczyk; Chirag Agarwal; Shalmali Joshi; Sohini Upadhyay; Himabindu Lakkaraju
The Dimpled Manifold Model of Adversarial Examples in Machine Learning. (98%)Adi Shamir; Odelia Melamed; Oriel BenShmuel
Light Lies: Optical Adversarial Attack. (92%)Kyulim Kim; JeongSoo Kim; Seungri Song; Jun-Ho Choi; Chulmin Joo; Jong-Seok Lee
BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection. (82%)Yulin Zhu; Yuni Lai; Kaifa Zhao; Xiapu Luo; Mingquan Yuan; Jian Ren; Kai Zhou
Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks. (80%)Emre Ozfatura; Muhammad Zaid Hameed; Kerem Ozfatura; Deniz Gunduz
Group-Structured Adversarial Training. (68%)Farzan Farnia; Amirali Aghazadeh; James Zou; David Tse
Accumulative Poisoning Attacks on Real-time Data. (45%)Tianyu Pang; Xiao Yang; Yinpeng Dong; Hang Su; Jun Zhu
Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks. (45%)Suyoung Lee; Wonho Song; Suman Jana; Meeyoung Cha; Sooel Son
Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. (5%)Junyuan Hong; Haotao Wang; Zhangyang Wang; Jiayu Zhou
2021-06-17
Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective. (99%)Lina Wang; Xingshu Chen; Yulong Wang; Yawei Yue; Yi Zhu; Xuemei Zeng; Wei Wang
Bad Characters: Imperceptible NLP Attacks. (99%)Nicholas Boucher; Ilia Shumailov; Ross Anderson; Nicolas Papernot
DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs. (99%)Junhao Zhu; Yalu Shan; Jinhuan Wang; Shanqing Yu; Guanrong Chen; Qi Xuan
Adversarial Visual Robustness by Causal Intervention. (99%)Kaihua Tang; Mingyuan Tao; Hanwang Zhang
Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning. (92%)Shubham Jain; Ana-Maria Cretu; Yves-Alexandre de Montjoye
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks. (91%)Yulong Cao; Ningfei Wang; Chaowei Xiao; Dawei Yang; Jin Fang; Ruigang Yang; Qi Alfred Chen; Mingyan Liu; Bo Li
Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems. (82%)Giovanni Apruzzese; Mauro Andreolini; Luca Ferretti; Mirco Marchetti; Michele Colajanni
Poisoning and Backdooring Contrastive Learning. (70%)Nicholas Carlini; Andreas Terzis
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing. (69%)Fan Wu; Linyi Li; Zijian Huang; Yevgeniy Vorobeychik; Ding Zhao; Bo Li
CoCoFuzzing: Testing Neural Code Models with Coverage-Guided Fuzzing. (64%)Moshi Wei; Yuchao Huang; Jinqiu Yang; Junjie Wang; Song Wang
On Deep Neural Network Calibration by Regularization and its Impact on Refinement. (3%)Aditya Singh; Alessandro Bay; Biswa Sengupta; Andrea Mirabile
Effective Model Sparsification by Scheduled Grow-and-Prune Methods. (1%)Xiaolong Ma; Minghai Qin; Fei Sun; Zejiang Hou; Kun Yuan; Yi Xu; Yanzhi Wang; Yen-Kuang Chen; Rong Jin; Yuan Xie
2021-06-16
Real-time Adversarial Perturbations against Deep Reinforcement Learning Policies: Attacks and Defenses. (99%)Buse G. A. Tekgul; Shelly Wang; Samuel Marchal; N. Asokan
Localized Uncertainty Attacks. (99%)Ousmane Amadou Dia; Theofanis Karaletsos; Caner Hazirbas; Cristian Canton Ferrer; Ilknur Kaynar Kabul; Erik Meijer
Evaluating the Robustness of Bayesian Neural Networks Against Different Types of Attacks. (67%)Yutian Pang; Sheng Cheng; Jueming Hu; Yongming Liu
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. (38%)Hossein Souri; Liam Fowl; Rama Chellappa; Micah Goldblum; Tom Goldstein
Explainable AI for Natural Adversarial Images. (13%)Tomas Folke; ZhaoBin Li; Ravi B. Sojitra; Scott Cheng-Hsin Yang; Patrick Shafto
A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness. (2%)James Diffenderfer; Brian R. Bartoldson; Shreya Chaganti; Jize Zhang; Bhavya Kailkhura
Scaling-up Diverse Orthogonal Convolutional Networks with a Paraunitary Framework. (1%)Jiahao Su; Wonmin Byeon; Furong Huang
Loki: Hardening Code Obfuscation Against Automated Attacks. (1%)Moritz Schloegel; Tim Blazytko; Moritz Contag; Cornelius Aschermann; Julius Basler; Thorsten Holz; Ali Abbasi
2021-06-15
Adversarial Attacks on Deep Models for Financial Transaction Records. (99%)Ivan Fursov; Matvey Morozov; Nina Kaploukhaya; Elizaveta Kovtun; Rodrigo Rivera-Castro; Gleb Gusev; Dmitry Babaev; Ivan Kireev; Alexey Zaytsev; Evgeny Burnaev
Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information. (99%)Tommy Li; Cory Merkel
Towards Adversarial Robustness via Transductive Learning. (80%)Jiefeng Chen; Yang Guo; Xi Wu; Tianqi Li; Qicheng Lao; Yingyu Liang; Somesh Jha
Voting for the right answer: Adversarial defense for speaker verification. (78%)Haibin Wu; Yang Zhang; Zhiyong Wu; Dong Wang; Hung-yi Lee
Detect and remove watermark in deep neural networks via generative adversarial networks. (68%)Haoqi Wang; Mingfu Xue; Shichang Sun; Yushu Zhang; Jian Wang; Weiqiang Liu
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. (13%)Chulin Xie; Minghao Chen; Pin-Yu Chen; Bo Li
Securing Face Liveness Detection Using Unforgeable Lip Motion Patterns. (12%)Man Zhou; Qian Wang; Qi Li; Peipei Jiang; Jingxiao Yang; Chao Shen; Cong Wang; Shouhong Ding
Probabilistic Margins for Instance Reweighting in Adversarial Training. (8%)Qizhou Wang; Feng Liu; Bo Han; Tongliang Liu; Chen Gong; Gang Niu; Mingyuan Zhou; Masashi Sugiyama
CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals. (1%)Efrat Levy; Asaf Shabtai; Bogdan Groza; Pal-Stefan Murvay; Yuval Elovici
2021-06-14
PopSkipJump: Decision-Based Attack for Probabilistic Classifiers. (99%)Carl-Johann Simon-Gabriel; Noman Ahmed Sheikh; Andreas Krause
Now You See It, Now You Don't: Adversarial Vulnerabilities in Computational Pathology. (99%)Alex Foote; Amina Asif; Ayesha Azam; Tim Marshall-Cox; Nasir Rajpoot; Fayyaz Minhas
Audio Attacks and Defenses against AED Systems -- A Practical Study. (99%)Rodrigo dos Santos; Shirin Nilizadeh
Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions. (92%)Antonio Emanuele Cinà; Kathrin Grosse; Sebastiano Vascon; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery. (81%)John Boutsikas; Maksim E. Eren; Charles Varga; Edward Raff; Cynthia Matuszek; Charles Nicholas
On the Relationship between Heterophily and Robustness of Graph Neural Networks. (81%)Jiong Zhu; Junchen Jin; Donald Loveland; Michael T. Schaub; Danai Koutra
Partial success in closing the gap between human and machine vision. (15%)Robert Geirhos; Kantharaju Narayanappa; Benjamin Mitzkus; Tizian Thieringer; Matthias Bethge; Felix A. Wichmann; Wieland Brendel
Text Generation with Efficient (Soft) Q-Learning. (2%)Han Guo; Bowen Tan; Zhengzhong Liu; Eric P. Xing; Zhiting Hu
Resilient Control of Platooning Networked Robotic Systems via Dynamic Watermarking. (1%)Matthew Porter; Arnav Joshi; Sidhartha Dey; Qirui Wu; Pedro Hespanhol; Anil Aswani; Matthew Johnson-Roberson; Ram Vasudevan
Self-training Guided Adversarial Domain Adaptation For Thermal Imagery. (1%)Ibrahim Batuhan Akkaya; Fazil Altinel; Ugur Halici
Code Integrity Attestation for PLCs using Black Box Neural Network Predictions. (1%)Yuqi Chen; Christopher M. Poskitt; Jun Sun
2021-06-13
Target Model Agnostic Adversarial Attacks with Query Budgets on Language Understanding Models. (99%)Jatin Chauhan; Karan Bhukar; Manohar Kaul
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks. (99%)Utku Ozbulak; Esla Timothy Anzaku; Wesley De Neve; Arnout Van Messem
ATRAS: Adversarially Trained Robust Architecture Search. (96%)Yigit Alparslan; Edward Kim
Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles. (64%)R. Spencer Hallyburton; Yupei Liu; Miroslav Pajic
Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis. (1%)Kangning Liu; Yiqiu Shen; Nan Wu; Jakub Chłędowski; Carlos Fernandez-Granda; Krzysztof J. Geras
HistoTransfer: Understanding Transfer Learning for Histopathology. (1%)Yash Sharma; Lubaina Ehsan; Sana Syed; Donald E. Brown
2021-06-12
Adversarial Robustness via Fisher-Rao Regularization. (67%)Marine Picot; Francisco Messina; Malik Boudiaf; Fabrice Labeau; Ismail Ben Ayed; Pablo Piantanida
What can linearized neural networks actually say about generalization? (31%)Guillermo Ortiz-Jiménez; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack. (2%)Tolulope Odetola; Faiq Khalid; Travis Sandefur; Hawzhin Mohammed; Syed Rafay Hasan
2021-06-11
Adversarial Robustness through the Lens of Causality. (99%)Yonggang Zhang; Mingming Gong; Tongliang Liu; Gang Niu; Xinmei Tian; Bo Han; Bernhard Schölkopf; Kun Zhang
Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks. (99%)Nezihe Merve Gürel; Xiangyu Qi; Luka Rimanic; Ce Zhang; Bo Li
Adversarial purification with Score-based generative models. (89%)Jongmin Yoon; Sung Ju Hwang; Juho Lee
Relaxing Local Robustness. (80%)Klas Leino; Matt Fredrikson
TDGIA:Effective Injection Attacks on Graph Neural Networks. (76%)Xu Zou; Qinkai Zheng; Yuxiao Dong; Xinyu Guan; Evgeny Kharlamov; Jialiang Lu; Jie Tang
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution. (56%)Fanchao Qi; Yuan Yao; Sophia Xu; Zhiyuan Liu; Maosong Sun
CARTL: Cooperative Adversarially-Robust Transfer Learning. (8%)Dian Chen; Hongxin Hu; Qian Wang; Yinli Li; Cong Wang; Chao Shen; Qi Li
A Shuffling Framework for Local Differential Privacy. (1%)Casey Meehan; Amrita Roy Chowdhury; Kamalika Chaudhuri; Somesh Jha
2021-06-10
Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm. (99%)Mingkang Zhu; Tianlong Chen; Zhangyang Wang
Deep neural network loses attention to adversarial images. (99%)Shashank Kotyan; Danilo Vasconcellos Vargas
Verifying Quantized Neural Networks using SMT-Based Model Checking. (92%)Luiz Sena; Xidan Song; Erickson Alves; Iury Bessa; Edoardo Manino; Lucas Cordeiro; Eddie de Lima Filho
Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation. (80%)Jiawei Zhang; Linyi Li; Huichen Li; Xiaolu Zhang; Shuang Yang; Bo Li
An Ensemble Approach Towards Adversarial Robustness. (41%)Haifeng Qian
Towards an Automated Pipeline for Detecting and Classifying Malware through Machine Learning. (1%)Nicola Loi; Claudio Borile; Daniele Ucci
Fair Classification with Adversarial Perturbations. (1%)L. Elisa Celis; Anay Mehrotra; Nisheeth K. Vishnoi
2021-06-09
HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks. (99%)Mohammad Hossein Samavatian; Saikat Majumdar; Kristin Barber; Radu Teodorescu
Towards Defending against Adversarial Examples via Attack-Invariant Features. (99%)Dawei Zhou; Tongliang Liu; Bo Han; Nannan Wang; Chunlei Peng; Xinbo Gao
Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training. (99%)Dawei Zhou; Nannan Wang; Xinbo Gao; Bo Han; Jun Yu; Xiaoyu Wang; Tongliang Liu
Attacking Adversarial Attacks as A Defense. (99%)Boxi Wu; Heng Pan; Li Shen; Jindong Gu; Shuai Zhao; Zhifeng Li; Deng Cai; Xiaofei He; Wei Liu
We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature. (98%)Bin Liang; Jiachun Li; Jianjun Huang
Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. (93%)Yanchao Sun; Ruijie Zheng; Yongyuan Liang; Furong Huang
URLTran: Improving Phishing URL Detection Using Transformers. (10%)Pranav Maneriker; Jack W. Stokes; Edir Garcia Lazo; Diana Carutasu; Farid Tajaddodianfar; Arun Gururajan
Practical Machine Learning Safety: A Survey and Primer. (4%)Sina Mohseni; Haotao Wang; Zhiding Yu; Chaowei Xiao; Zhangyang Wang; Jay Yadawa
ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs. (3%)Christopher A. Strong; Sydney M. Katz; Anthony L. Corso; Mykel J. Kochenderfer
Network insensitivity to parameter noise via adversarial regularization. (2%)Julian Büchel; Fynn Faber; Dylan R. Muir
2021-06-08
On Improving Adversarial Transferability of Vision Transformers. (99%)Muzammal Naseer; Kanchana Ranasinghe; Salman Khan; Fahad Shahbaz Khan; Fatih Porikli
Simulated Adversarial Testing of Face Recognition Models. (99%)Nataniel Ruiz; Adam Kortylewski; Weichao Qiu; Cihang Xie; Sarah Adel Bargal; Alan Yuille; Stan Sclaroff
Towards the Memorization Effect of Neural Networks in Adversarial Training. (93%)Han Xu; Xiaorui Liu; Wentao Wang; Wenbiao Ding; Zhongqin Wu; Zitao Liu; Anil Jain; Jiliang Tang
Handcrafted Backdoors in Deep Neural Networks. (92%)Sanghyun Hong; Nicholas Carlini; Alexey Kurakin
Enhancing Robustness of Neural Networks through Fourier Stabilization. (73%)Netanel Raviv; Aidan Kelley; Michael Guo; Yevgeny Vorobeychik
Provably Robust Detection of Out-of-distribution Data (almost) for free. (1%)Alexander Meinke; Julian Bitterwolf; Matthias Hein
2021-06-07
Adversarial Attack and Defense in Deep Ranking. (99%)Mo Zhou; Le Wang; Zhenxing Niu; Qilin Zhang; Nanning Zheng; Gang Hua
Reveal of Vision Transformers Robustness against Adversarial Attacks. (99%)Ahmed Aldahdooh; Wassim Hamidouche; Olivier Deforges
Position Bias Mitigation: A Knowledge-Aware Graph Model for Emotion Cause Extraction. (89%)Hanqi Yan; Lin Gui; Gabriele Pergola; Yulan He
3DB: A Framework for Debugging Computer Vision Models. (45%)Guillaume Leclerc; Hadi Salman; Andrew Ilyas; Sai Vemprala; Logan Engstrom; Vibhav Vineet; Kai Xiao; Pengchuan Zhang; Shibani Santurkar; Greg Yang; Ashish Kapoor; Aleksander Madry
RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models. (11%)Xin Guo; Jianlei Yang; Haoyi Zhou; Xucheng Ye; Jianxin Li
2021-06-06
A Primer on Multi-Neuron Relaxation-based Adversarial Robustness Certification. (98%)Kevin Roth
Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model. (4%)Zi Wang
2021-06-05
Ensemble Defense with Data Diversity: Weak Correlation Implies Strong Robustness. (92%)Renjue Li; Hanwei Zhang; Pengfei Yang; Cheng-Chao Huang; Aimin Zhou; Bai Xue; Lijun Zhang
Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks. (69%)Qin Ding; Cho-Jui Hsieh; James Sharpnack
RDA: Robust Domain Adaptation via Fourier Adversarial Attacking. (2%)Jiaxing Huang; Dayan Guan; Aoran Xiao; Shijian Lu
2021-06-04
Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness. (99%)Zifeng Wang; Tong Jian; Aria Masoomi; Stratis Ioannidis; Jennifer Dy
BO-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization. (99%)Zhuosheng Zhang; Shucheng Yu
Human-Adversarial Visual Question Answering. (31%)Sasha Sheng; Amanpreet Singh; Vedanuj Goswami; Jose Alberto Lopez Magana; Wojciech Galuba; Devi Parikh; Douwe Kiela
Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics. (15%)Bhavin Choksi; Milad Mozafari; Callum Biggs O'May; Benjamin Ador; Andrea Alamia; Rufin VanRullen
DOCTOR: A Simple Method for Detecting Misclassification Errors. (1%)Federica Granese; Marco Romanelli; Daniele Gorla; Catuscia Palamidessi; Pablo Piantanida
Teaching keyword spotters to spot new keywords with limited examples. (1%)Abhijeet Awasthi; Kevin Kilgour; Hassan Rom
2021-06-03
Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout. (99%)Pengfei Xie; Linyuan Wang; Ruoxi Qin; Kai Qiao; Shuhao Shi; Guoen Hu; Bin Yan
Imperceptible Adversarial Examples for Fake Image Detection. (99%)Quanyu Liao; Yuezun Li; Xin Wang; Bin Kong; Bin Zhu; Siwei Lyu; Youbing Yin; Qi Song; Xi Wu
A Little Robustness Goes a Long Way: Leveraging Universal Features for Targeted Transfer Attacks. (99%)Jacob M. Springer; Melanie Mitchell; Garrett T. Kenyon
Transferable Adversarial Examples for Anchor Free Object Detection. (99%)Quanyu Liao; Xin Wang; Bin Kong; Siwei Lyu; Bin Zhu; Youbing Yin; Qi Song; Xi Wu
Exploring Memorization in Adversarial Training. (98%)Yinpeng Dong; Ke Xu; Xiao Yang; Tianyu Pang; Zhijie Deng; Hang Su; Jun Zhu
Improving Neural Network Robustness via Persistency of Excitation. (68%)Kaustubh Sridhar; Oleg Sokolsky; Insup Lee; James Weimer
Defending against Backdoor Attacks in Natural Language Generation. (38%)Chun Fan; Xiaoya Li; Yuxian Meng; Xiaofei Sun; Xiang Ao; Fei Wu; Jiwei Li; Tianwei Zhang
Sneak Attack against Mobile Robotic Networks under Formation Control. (1%)Yushan Li; Jianping He; Xuda Ding; Lin Cai; Xinping Guan
2021-06-02
PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack. (99%)Alexander Matyasko; Lap-Pui Chau
Towards Robustness of Text-to-SQL Models against Synonym Substitution. (75%)Yujian Gan; Xinyun Chen; Qiuping Huang; Matthew Purver; John R. Woodward; Jinxia Xie; Pengsheng Huang
BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively Inspired Orthographic Adversarial Attacks. (62%)Yannik Keller; Jan Mackensen; Steffen Eger
2021-06-01
Adversarial Defense for Automatic Speaker Verification by Self-Supervised Learning. (99%)Haibin Wu; Xu Li; Andy T. Liu; Zhiyong Wu; Helen Meng; Hung-yi Lee
Improving Compositionality of Neural Networks by Decoding Representations to Inputs. (68%)Mike Wu; Noah Goodman; Stefano Ermon
Markpainting: Adversarial Machine Learning meets Inpainting. (12%)David Khachaturov; Ilia Shumailov; Yiren Zhao; Nicolas Papernot; Ross Anderson
On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study. (9%)Divyansh Kaushik; Douwe Kiela; Zachary C. Lipton; Wen-tau Yih
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models. (5%)Linjie Li; Jie Lei; Zhe Gan; Jingjing Liu
Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models. (1%)Biagio La Rosa; Roberto Capobianco; Daniele Nardi
Concurrent Adversarial Learning for Large-Batch Training. (1%)Yong Liu; Xiangning Chen; Minhao Cheng; Cho-Jui Hsieh; Yang You
2021-05-31
Adaptive Feature Alignment for Adversarial Training. (99%)Tao Wang; Ruixin Zhang; Xingyu Chen; Kai Zhao; Xiaolin Huang; Yuge Huang; Shaoxin Li; Jilin Li; Feiyue Huang
QueryNet: An Efficient Attack Framework with Surrogates Carrying Multiple Identities. (99%)Sizhe Chen; Zhehao Huang; Qinghua Tao; Xiaolin Huang
Transferable Sparse Adversarial Attack. (99%)Ziwen He; Wei Wang; Jing Dong; Tieniu Tan
Adversarial Training with Rectified Rejection. (99%)Tianyu Pang; Huishuai Zhang; Di He; Yinpeng Dong; Hang Su; Wei Chen; Jun Zhu; Tie-Yan Liu
Robustifying $\ell_\infty$ Adversarial Training to the Union of Perturbation Models. (82%)Ameya D. Patil; Michael Tuttle; Alexander G. Schwing; Naresh R. Shanbhag
Dominant Patterns: Critical Features Hidden in Deep Neural Networks. (80%)Zhixing Ye; Shaofei Qin; Sizhe Chen; Xiaolin Huang
Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models. (75%)Chong Li; Cenyuan Zhang; Xiaoqing Zheng; Xuanjing Huang
Gradient-based Data Subversion Attack Against Binary Classifiers. (73%)Rosni K Vasu; Sanjay Seetharaman; Shubham Malaviya; Manish Shukla; Sachin Lodha
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals. (1%)Asma Ghandeharioun; Been Kim; Chun-Liang Li; Brendan Jou; Brian Eoff; Rosalind W. Picard
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores. (1%)Giang Nguyen; Daeyoung Kim; Anh Nguyen
2021-05-30
Generating Adversarial Examples with Graph Neural Networks. (99%)Florian Jaeckle; M. Pawan Kumar
Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice. (98%)Rongzhou Bao; Jiayi Wang; Hai Zhao
Evaluating Resilience of Encrypted Traffic Classification Against Adversarial Evasion Attacks. (62%)Ramy Maarouf; Danish Sattar; Ashraf Matrawy
NoiLIn: Do Noisy Labels Always Hurt Adversarial Training? (26%)Jingfeng Zhang; Xilie Xu; Bo Han; Tongliang Liu; Gang Niu; Lizhen Cui; Masashi Sugiyama
DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows. (12%)Samuel von Baußnern; Johannes Otterbach; Adrian Loy; Mathieu Salzmann; Thomas Wollmann
EEG-based Cross-Subject Driver Drowsiness Recognition with an Interpretable Convolutional Neural Network. (1%)Jian Cui; Zirui Lan; Olga Sourina; Wolfgang Müller-Wittig
2021-05-29
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations. (99%)Mingfu Xue; Yinghao Wu; Zhiyu Wu; Jian Wang; Yushu Zhang; Weiqiang Liu
Analysis and Applications of Class-wise Robustness in Adversarial Training. (99%)Qi Tian; Kun Kuang; Kelu Jiang; Fei Wu; Yisen Wang
A Measurement Study on the (In)security of End-of-Life (EoL) Embedded Devices. (2%)Dingding Wang; Muhui Jiang; Rui Chang; Yajin Zhou; Baolei Hou; Xiapu Luo; Lei Wu; Kui Ren
2021-05-28
Demotivate adversarial defense in remote sensing. (99%)Adrien Chan-Hon-Tong; Gaston Lenczner; Aurelien Plyer
AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption. (92%)Mingfu Xue; Zhiyu Wu; Jian Wang; Yushu Zhang; Weiqiang Liu
Robust Regularization with Adversarial Labelling of Perturbed Samples. (83%)Xiaohui Guo; Richong Zhang; Yaowei Zheng; Yongyi Mao
SafeAMC: Adversarial training for robust modulation recognition models. (83%)Javier Maroto; Gérôme Bovet; Pascal Frossard
Towards optimally abstaining from prediction. (81%)Adam Tauman Kalai; Varun Kanade
Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness. (76%)Glenn Dawson; Robi Polikar
Visualizing Representations of Adversarially Perturbed Inputs. (68%)Daniel Steinberg; Paul Munro
Chromatic and spatial analysis of one-pixel attacks against an image classifier. (15%)Janne Alatalo; Joni Korpihalkola; Tuomo Sipola; Tero Kokkonen
DeepMoM: Robust Deep Learning With Median-of-Means. (1%)Shih-Ting Huang; Johannes Lederer
2021-05-27
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers. (84%)Xi Li; David J. Miller; Zhen Xiang; George Kesidis
2021-05-26
Deep Repulsive Prototypes for Adversarial Robustness. (99%)Alex Serban; Erik Poll; Joost Visser
Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge. (98%)Heng Chang; Yu Rong; Tingyang Xu; Wenbing Huang; Honglei Zhang; Peng Cui; Xin Wang; Wenwu Zhu; Junzhou Huang
Adversarial robustness against multiple $l_p$-threat models at the price of one and how to quickly fine-tune robust models to another threat model. (76%)Francesco Croce; Matthias Hein
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger. (61%)Fanchao Qi; Mukai Li; Yangyi Chen; Zhengyan Zhang; Zhiyuan Liu; Yasheng Wang; Maosong Sun
Fooling Partial Dependence via Data Poisoning. (13%)Hubert Baniecki; Wojciech Kretowicz; Przemyslaw Biecek
2021-05-25
Practical Convex Formulation of Robust One-hidden-layer Neural Network Training. (98%)Yatong Bai; Tanmay Gautam; Yu Gai; Somayeh Sojoudi
Adversarial Attack Driven Data Augmentation for Accurate And Robust Medical Image Segmentation. (98%)Mst. Tasnim Pervin; Linmi Tao; Aminul Huq; Zuoxiang He; Li Huo
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs. (67%)Mohammad Malekzadeh; Anastasia Borovykh; Deniz Gündüz
Robust Value Iteration for Continuous Control Tasks. (9%)Michael Lutter; Shie Mannor; Jan Peters; Dieter Fox; Animesh Garg
2021-05-24
OFEI: A Semi-black-box Android Adversarial Sample Attack Framework Against DLaaS. (99%)Guangquan Xu; GuoHua Xin; Litao Jiao; Jian Liu; Shaoying Liu; Meiqi Feng; Xi Zheng
Learning Security Classifiers with Verified Global Robustness Properties. (92%)Yizheng Chen; Shiqi Wang; Yue Qin; Xiaojing Liao; Suman Jana; David Wagner
Feature Space Targeted Attacks by Statistic Alignment. (82%)Lianli Gao; Yaya Cheng; Qilong Zhang; Xing Xu; Jingkuan Song
Improved OOD Generalization via Adversarial Training and Pre-training. (12%)Mingyang Yi; Lu Hou; Jiacheng Sun; Lifeng Shang; Xin Jiang; Qun Liu; Zhi-Ming Ma
Out-of-Distribution Detection in Dermatology using Input Perturbation and Subset Scanning. (5%)Hannah Kim; Girmaw Abebe Tadesse; Celia Cintas; Skyler Speakman; Kush Varshney
Every Byte Matters: Traffic Analysis of Bluetooth Wearable Devices. (1%)Ludovic Barman; Alexandre Dumur; Apostolos Pyrgelis; Jean-Pierre Hubaux
Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models. (1%)Jieyu Lin; Jiajie Zou; Nai Ding
Dissecting Click Fraud Autonomy in the Wild. (1%)Tong Zhu; Yan Meng; Haotian Hu; Xiaokuan Zhang; Minhui Xue; Haojin Zhu
2021-05-23
Killing Two Birds with One Stone: Stealing Model and Inferring Attribute from BERT-based APIs. (99%)Lingjuan Lyu; Xuanli He; Fangzhao Wu; Lichao Sun
CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes. (92%)Hao Huang; Yongtao Wang; Zhaoyu Chen; Yuheng Li; Zhi Tang; Wei Chu; Jingdong Chen; Weisi Lin; Kai-Kuang Ma
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters. (12%)Javier Carnerero-Cano; Luis Muñoz-González; Phillippa Spencer; Emil C. Lupu
2021-05-22
Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems. (99%)Yifan Jia; Jingyi Wang; Christopher M. Poskitt; Sudipta Chattopadhyay; Jun Sun; Yuqi Chen
Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation. (98%)Jinyu Yang; Chunyuan Li; Weizhi An; Hehuan Ma; Yuzhi Guo; Yu Rong; Peilin Zhao; Junzhou Huang
Securing Optical Networks using Quantum-secured Blockchain: An Overview. (1%)Purva Sharma; Vimal Bhatia; Shashi Prakash
2021-05-21
ReLUSyn: Synthesizing Stealthy Attacks for Deep Neural Network Based Cyber-Physical Systems. (81%)Aarti Kashyap; Syed Mubashir Iqbal; Karthik Pattabiraman; Margo Seltzer
Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks. (76%)Leo Schwinn; René Raab; An Nguyen; Dario Zanca; Bjoern Eskofier
Backdoor Attacks on Self-Supervised Learning. (47%)Aniruddha Saha; Ajinkya Tejankar; Soroush Abbasi Koohpayegani; Hamed Pirsiavash
Intriguing Properties of Vision Transformers. (8%)Muzammal Naseer; Kanchana Ranasinghe; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Ming-Hsuan Yang
Explainable Enterprise Credit Rating via Deep Feature Crossing Network. (1%)Weiyu Guo; Zhijiang Yang; Shu Wu; Fu Chen
2021-05-20
Simple Transparent Adversarial Examples. (99%)Jaydeep Borkar; Pin-Yu Chen
Anomaly Detection of Adversarial Examples using Class-conditional Generative Adversarial Networks. (99%)Hang Wang; David J. Miller; George Kesidis
Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance. (11%)Jack W. Stokes; Paul England; Kevin Kane
TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks. (1%)Yu Li; Min Li; Qiuxia Lai; Yannan Liu; Qiang Xu
2021-05-19
Attack on practical speaker verification system using universal adversarial perturbations. (99%)Weiyi Zhang; Shuning Zhao; Le Liu; Jianmin Li; Xingliang Cheng; Thomas Fang Zheng; Xiaolin Hu
Local Aggressive Adversarial Attacks on 3D Point Cloud. (99%)Yiming Sun; Feng Chen; Zhiyu Chen; Mingjie Wang
An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks. (76%)Cong Xu; Xiang Li; Min Yang
Balancing Robustness and Sensitivity using Feature Contrastive Learning. (15%)Seungyeon Kim; Daniel Glasner; Srikumar Ramalingam; Cho-Jui Hsieh; Kishore Papineni; Sanjiv Kumar
DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator in Cloud-FPGA. (1%)Yukui Luo; Cheng Gongye; Yunsi Fei; Xiaolin Xu
User Label Leakage from Gradients in Federated Learning. (1%)Aidmar Wainakh; Fabrizio Ventola; Till Müßig; Jens Keim; Carlos Garcia Cordero; Ephraim Zimmer; Tim Grube; Kristian Kersting; Max Mühlhäuser
Hunter in the Dark: Deep Ensemble Networks for Discovering Anomalous Activity from Smart Networks. (1%)Shiyi Yang; Nour Moustafa; Hui Guo
2021-05-18
Sparta: Spatially Attentive and Adversarially Robust Activation. (99%)Qing Guo; Felix Juefei-Xu; Changqing Zhou; Yang Liu; Song Wang
Detecting Adversarial Examples with Bayesian Neural Network. (99%)Yao Li; Tongyi Tang; Cho-Jui Hsieh; Thomas C. M. Lee
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks. (98%)Dequan Wang; An Ju; Evan Shelhamer; David Wagner; Trevor Darrell
On the Robustness of Domain Constraints. (98%)Ryan Sheatsley; Blaine Hoak; Eric Pauley; Yohan Beugin; Michael J. Weisman; Patrick McDaniel
Learning and Certification under Instance-targeted Poisoning. (82%)Ji Gao; Amin Karbasi; Mohammad Mahmoody
2021-05-17
Towards Robust Vision Transformer. (95%)Xiaofeng Mao; Gege Qi; Yuefeng Chen; Xiaodan Li; Ranjie Duan; Shaokai Ye; Yuan He; Hui Xue
Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning. (93%)Franziska Boenisch; Philip Sperl; Konstantin Böttinger
An SDE Framework for Adversarial Training, with Convergence and Robustness Analysis. (69%)Haotian Gu; Xin Guo
A Fusion-Denoising Attack on InstaHide with Data Augmentation. (1%)Xinjian Luo; Xiaokui Xiao; Yuncheng Wu; Juncheng Liu; Beng Chin Ooi
2021-05-16
Vision Transformers are Robust Learners. (99%)Sayak Paul; Pin-Yu Chen
Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing. (99%)Xunguang Wang; Zheng Zhang; Baoyuan Wu; Fumin Shen; Guangming Lu
SoundFence: Securing Ultrasonic Sensors in Vehicles Using Physical-Layer Defense. (2%)Jianzhi Lou; Qiben Yan; Qing Hui; Huacheng Zeng
2021-05-15
Real-time Detection of Practical Universal Adversarial Perturbations. (99%)Kenneth T. Co; Luis Muñoz-González; Leslie Kanthan; Emil C. Lupu
2021-05-14
Salient Feature Extractor for Adversarial Defense on Deep Neural Networks. (99%)Jinyin Chen; Ruoxi Chen; Haibin Zheng; Zhaoyan Ming; Wenrong Jiang; Chen Cui
High-Robustness, Low-Transferability Fingerprinting of Neural Networks. (9%)Siyue Wang; Xiao Wang; Pin-Yu Chen; Pu Zhao; Xue Lin
Information-theoretic Evolution of Model Agnostic Global Explanations. (1%)Sukriti Verma; Nikaash Puri; Piyush Gupta; Balaji Krishnamurthy
Iterative Algorithms for Assessing Network Resilience Against Structured Perturbations. (1%)Shenyu Liu; Sonia Martinez; Jorge Cortes
2021-05-13
Stochastic-Shield: A Probabilistic Approach Towards Training-Free Adversarial Defense in Quantized CNNs. (98%)Lorena Qendro; Sangwon Ha; René de Jong; Partha Maji
When Human Pose Estimation Meets Robustness: Adversarial Algorithms and Benchmarks. (5%)Jiahang Wang; Sheng Jin; Wentao Liu; Weizhong Liu; Chen Qian; Ping Luo
DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks. (1%)Yingzhe He; Guozhu Meng; Kai Chen; Jinwen He; Xingbo Hu
Biometrics: Trust, but Verify. (1%)Anil K. Jain; Debayan Deb; Joshua J. Engelsma
2021-05-12
AVA: Adversarial Vignetting Attack against Visual Recognition. (99%)Binyu Tian; Felix Juefei-Xu; Qing Guo; Xiaofei Xie; Xiaohong Li; Yang Liu
OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack. (70%)DongHyun Choi; Myeong Cheol Shin; EungGyun Kim; Dong Ryeol Shin
Adversarial Reinforcement Learning in Dynamic Channel Access and Power Control. (2%)Feng Wang; M. Cenk Gursoy; Senem Velipasalar
A Statistical Threshold for Adversarial Classification in Laplace Mechanisms. (1%)Ayşe Ünsal; Melek Önen
2021-05-11
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds. (99%)Guiyu Tian; Wenhao Jiang; Wei Liu; Yadong Mu
Improving Adversarial Transferability with Gradient Refining. (99%)Guoqiu Wang; Huanqian Yan; Ying Guo; Xingxing Wei
Accuracy-Privacy Trade-off in Deep Ensemble: A Membership Inference Perspective. (5%)Shahbaz Rezaei; Zubair Shafiq; Xin Liu
2021-05-10
Adversarial examples attack based on random warm restart mechanism and improved Nesterov momentum. (99%)Tiangang Li
Examining and Mitigating Kernel Saturation in Convolutional Neural Networks using Negative Images. (1%)Nidhi Gowdra; Roopak Sinha; Stephen MacDonell
2021-05-09
Automated Decision-based Adversarial Attacks. (99%)Qi-An Fu; Yinpeng Dong; Hang Su; Jun Zhu
Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks. (88%)Abhiroop Bhattacharjee; Abhishek Moitra; Priyadarshini Panda
Security Concerns on Machine Learning Solutions for 6G Networks in mmWave Beam Prediction. (81%)Ferhat Ozgur Catak; Evren Catak; Murat Kuzlu; Umit Cali
Robust Training Using Natural Transformation. (13%)Shuo Wang; Lingjuan Lyu; Surya Nepal; Carsten Rudolph; Marthie Grobler; Kristen Moore
Learning Image Attacks toward Vision Guided Autonomous Vehicles. (4%)Hyung-Jin Yoon; Hamidreza Jafarnejadsani; Petros Voulgaris
Combining Time-Dependent Force Perturbations in Robot-Assisted Surgery Training. (1%)Yarden Sharon; Daniel Naftalovich; Lidor Bahar; Yael Refaely; Ilana Nisky
2021-05-08
Self-Supervised Adversarial Example Detection by Disentangled Representation. (99%)Zhaoxi Zhang; Leo Yu Zhang; Xufei Zheng; Shengshan Hu; Jinyu Tian; Jiantao Zhou
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks. (96%)Jian Chen; Xuxin Zhang; Rui Zhang; Chen Wang; Ling Liu
Certified Robustness to Text Adversarial Attacks by Randomized [MASK]. (93%)Jiehang Zeng; Xiaoqing Zheng; Jianhan Xu; Linyang Li; Liping Yuan; Xuanjing Huang
Provable Guarantees against Data Poisoning Using Self-Expansion and Compatibility. (16%)Charles Jin; Melinda Sun; Martin Rinard
2021-05-07
Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition. (99%)Bangjie Yin; Wenxuan Wang; Taiping Yao; Junfeng Guo; Zelun Kong; Shouhong Ding; Jilin Li; Cong Liu
Uniform Convergence, Adversarial Spheres and a Simple Remedy. (15%)Gregor Bachmann; Seyed-Mohsen Moosavi-Dezfooli; Thomas Hofmann
2021-05-06
Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model. (99%)Ruoxi Qin; Linyuan Wang; Xingyuan Chen; Xuehui Du; Bin Yan
A Simple and Strong Baseline for Universal Targeted Attacks on Siamese Visual Tracking. (99%)Zhenbang Li; Yaya Shi; Jin Gao; Shaoru Wang; Bing Li; Pengpeng Liang; Weiming Hu
Understanding Catastrophic Overfitting in Adversarial Training. (92%)Peilin Kang; Seyed-Mohsen Moosavi-Dezfooli
Attestation Waves: Platform Trust via Remote Power Analysis. (1%)Ignacio M. Delgado-Lozano; Macarena C. Martínez-Rodríguez; Alexandros Bakas; Billy Bob Brumley; Antonis Michalas
2021-05-05
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning. (99%)Matthew Watson; Noura Al Moubayed
Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks. (97%)Faiq Khalid; Muhammad Abdullah Hanif; Muhammad Shafique
Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation. (1%)Robert A. Marsden; Alexander Bartler; Mario Döbler; Bin Yang
A Theoretical-Empirical Approach to Estimating Sample Complexity of DNNs. (1%)Devansh Bisla; Apoorva Nandini Saridena; Anna Choromanska
2021-05-04
Poisoning the Unlabeled Dataset of Semi-Supervised Learning. (92%)Nicholas Carlini
Broadly Applicable Targeted Data Sample Omission Attacks. (68%)Guy Barash; Eitan Farchi; Sarit Kraus; Onn Shehory
An Overview of Laser Injection against Embedded Neural Network Models. (2%)Mathieu Dumont; Pierre-Alain Moellic; Raphael Viera; Jean-Max Dutertre; Rémi Bernhard
2021-05-03
Physical world assistive signals for deep neural network classifiers -- neither defense nor attack. (83%)Camilo Pestana; Wei Liu; David Glance; Robyn Owens; Ajmal Mian
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack. (73%)Yixu Wang; Jie Li; Hong Liu; Yan Wang; Yongjian Wu; Feiyue Huang; Rongrong Ji
2021-05-02
Intriguing Usage of Applicability Domain: Lessons from Cheminformatics Applied to Adversarial Learning. (99%)Luke Chang; Katharina Dost; Kaiqi Zhao; Ambra Demontis; Fabio Roli; Gill Dobbie; Jörg Wicker
Who's Afraid of Adversarial Transferability? (99%)Ziv Katzir; Yuval Elovici
Multi-Robot Coordination and Planning in Uncertain and Adversarial Environments. (10%)Lifeng Zhou; Pratap Tokekar
GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning. (2%)Hanchi Ren; Jingjing Deng; Xianghua Xie
Spinner: Automated Dynamic Command Subsystem Perturbation. (1%)Meng Wang; Chijung Jung; Ali Ahad; Yonghwi Kwon
2021-05-01
Adversarial Example Detection for DNN Models: A Review and Experimental Comparison. (99%)Ahmed Aldahdooh; Wassim Hamidouche; Sid Ahmed Fezza; Olivier Deforges
A Perceptual Distortion Reduction Framework: Towards Generating Adversarial Examples with High Perceptual Quality and Attack Success Rate. (98%)Ruijie Yang; Yunhong Wang; Ruikui Wang; Yuanfang Guo
On the Adversarial Robustness of Quantized Neural Networks. (75%)Micah Gorsline; James Smith; Cory Merkel
Hidden Backdoors in Human-Centric Language Models. (73%)Shaofeng Li; Hui Liu; Tian Dong; Benjamin Zi Hao Zhao; Minhui Xue; Haojin Zhu; Jialiang Lu
One Detector to Rule Them All: Towards a General Deepfake Attack Detection Framework. (62%)Shahroz Tariq; Sangyup Lee; Simon S. Woo
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification. (62%)Wei Guo; Benedetta Tondi; Mauro Barni
Load Oscillating Attacks of Smart Grids: Demand Strategies and Vulnerability Analysis. (2%)Falah Alanazi; Jinsub Kim; Eduardo Cotilla-Sanchez
RATT: Leveraging Unlabeled Data to Guarantee Generalization. (1%)Saurabh Garg; Sivaraman Balakrishnan; J. Zico Kolter; Zachary C. Lipton
2021-04-30
Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks. (99%)Jun-Ho Choi; Huan Zhang; Jun-Hyuk Kim; Cho-Jui Hsieh; Jong-Seok Lee
Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense. (99%)Haoxi Zhan; Xiaobing Pei
Black-box adversarial attacks using Evolution Strategies. (98%)Hao Qiu; Leonardo Lucio Custode; Giovanni Iacca
IPatch: A Remote Adversarial Patch. (97%)Yisroel Mirsky
DeFiRanger: Detecting Price Manipulation Attacks on DeFi Applications. (10%)Siwei Wu; Dabao Wang; Jianting He; Yajin Zhou; Lei Wu; Xingliang Yuan; Qinming He; Kui Ren
FIPAC: Thwarting Fault- and Software-Induced Control-Flow Attacks with ARM Pointer Authentication. (2%)Robert Schilling; Pascal Nasahl; Stefan Mangard
2021-04-29
GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathology Image Classification. (67%)Haoyuan Chen; Chen Li; Xiaoyan Li; Ge Wang; Weiming Hu; Yixin Li; Wanli Liu; Changhao Sun; Yudong Yao; Yueyang Teng; Marcin Grzegorzek
A neural anisotropic view of underspecification in deep learning. (26%)Guillermo Ortiz-Jimenez; Itamar Franco Salazar-Reque; Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Analytical bounds on the local Lipschitz constants of ReLU networks. (12%)Trevor Avant; Kristi A. Morgansen
Learning Robust Variational Information Bottleneck with Reference. (5%)Weizhu Qian; Bowei Chen; Xiaowei Huang
2021-04-28
AdvHaze: Adversarial Haze Attack. (99%)Ruijun Gao; Qing Guo; Felix Juefei-Xu; Hongkai Yu; Wei Feng
2021-04-27
Improved and Efficient Text Adversarial Attacks using Target Information. (97%)Mahmoud Hossam; Trung Le; He Zhao; Viet Huynh; Dinh Phung
Metamorphic Detection of Repackaged Malware. (91%)Shirish Singh; Gail Kaiser
Structure-Aware Hierarchical Graph Pooling using Information Bottleneck. (2%)Kashob Kumar Roy; Amit Roy; A K M Mahbubur Rahman; M Ashraful Amin; Amin Ahsan Ali
Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity. (1%)Mathias P. M. Parisot; Balazs Pejo; Dayana Spagnuelo
2021-04-26
Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT. (99%)Pavlos Papadopoulos; Essen Oliver Thornewill von; Nikolaos Pitropakis; Christos Chrysoulas; Alexios Mylonas; William J. Buchanan
Delving into Data: Effectively Substitute Training for Black-box Attack. (99%)Wenxuan Wang; Bangjie Yin; Taiping Yao; Li Zhang; Yanwei Fu; Shouhong Ding; Jilin Li; Feiyue Huang; Xiangyang Xue
secml-malware: Pentesting Windows Malware Classifiers with Adversarial EXEmples in Python. (99%)Luca Demetrio; Battista Biggio
Impact of Spatial Frequency Based Constraints on Adversarial Robustness. (98%)Rémi Bernhard; Pierre-Alain Moellic; Martial Mermillod; Yannick Bourrier; Romain Cohendet; Miguel Solinas; Marina Reyboz
PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches. (87%)Chong Xiang; Prateek Mittal
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks. (22%)Sebastian Szyller; Vasisht Duddu; Tommi Gröndahl; N. Asokan
2021-04-25
3D Adversarial Attacks Beyond Point Cloud. (99%)Jinlai Zhang; Lyujie Chen; Binbin Liu; Bo Ouyang; Qizhi Xie; Jihong Zhu; Weiming Li; Yanmei Meng
Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors. (80%)Xinwei Zhao; Matthew C. Stamm
2021-04-24
Influence Based Defense Against Data Poisoning Attacks in Online Learning. (99%)Sanjay Seetharaman; Shubham Malaviya; Rosni KV; Manish Shukla; Sachin Lodha
2021-04-23
Theoretical Study of Random Noise Defense against Query-Based Black-Box Attacks. (98%)Zeyu Qin; Yanbo Fan; Hongyuan Zha; Baoyuan Wu
Evaluating Deception Detection Model Robustness To Linguistic Variation. (82%)Maria Glenski; Ellyn Ayton; Robin Cosbey; Dustin Arendt; Svitlana Volkova
Lightweight Detection of Out-of-Distribution and Adversarial Samples via Channel Mean Discrepancy. (3%)Xin Dong; Junfeng Guo; Wei-Te Ting; H. T. Kung
Improving Neural Silent Speech Interface Models by Adversarial Training. (1%)Amin Honarmandi Shandiz; László Tóth; Gábor Gosztolya; Alexandra Markó; Tamás Gábor Csapó
2021-04-22
Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting. (99%)Qiming Wu; Zhikang Zou; Pan Zhou; Xiaoqing Ye; Binghui Wang; Ang Li
Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors. (98%)Arman Maesumi; Mingkang Zhu; Yi Wang; Tianlong Chen; Zhangyang Wang; Chandrajit Bajaj
Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions. (86%)Jing Wu; Mingyi Zhou; Ce Zhu; Yipeng Liu; Mehrtash Harandi; Li Li
Operator Shifting for General Noisy Matrix Systems. (56%)Philip Etter; Lexing Ying
SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics. (22%)Jonathan Hayase; Weihao Kong; Raghav Somani; Sewoong Oh
2021-04-21
Dual Head Adversarial Training. (99%)Yujing Jiang; Xingjun Ma; Sarah Monazam Erfani; James Bailey
Mixture of Robust Experts (MoRE): A Flexible Defense Against Multiple Perturbations. (99%)Kaidi Xu; Chenan Wang; Xue Lin; Bhavya Kailkhura; Ryan Goldhahn
Robust Certification for Laplace Learning on Geometric Graphs. (96%)Matthew Thorpe; Bao Wang
Jacobian Regularization for Mitigating Universal Adversarial Perturbations. (95%)Kenneth T. Co; David Martinez Rego; Emil C. Lupu
Dataset Inference: Ownership Resolution in Machine Learning. (83%)Pratyush Maini; Mohammad Yaghini; Nicolas Papernot
2021-04-20
Adversarial Training for Deep Learning-based Intrusion Detection Systems. (99%)Islam Debicha; Thibault Debatty; Jean-Michel Dricot; Wim Mees
MixDefense: A Defense-in-Depth Framework for Adversarial Example Detection Based on Statistical and Semantic Analysis. (99%)Yijun Yang; Ruiyuan Gao; Yu Li; Qiuxia Lai; Qiang Xu
MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training. (64%)Jian Ma; Shuyi Xie; Haiqin Yang; Lianxin Jiang; Mengyuan Zhou; Xiaoyi Ruan; Yang Mo
Does enhanced shape bias improve neural network robustness to common corruptions? (26%)Chaithanya Kumar Mummadi; Ranjitha Subramaniam; Robin Hutmacher; Julien Vitay; Volker Fischer; Jan Hendrik Metzen
Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles. (9%)Jiwei Guan; Xi Zheng; Chen Wang; Yipeng Zhou; Alireza Jolfa
Network Defense is Not a Game. (1%)Andres Molina-Markham; Ransom K. Winder; Ahmad Ridley
2021-04-19
Staircase Sign Method for Boosting Adversarial Attacks. (99%)Qilong Zhang; Xiaosu Zhu; Jingkuan Song; Lianli Gao; Heng Tao Shen
Improving Adversarial Robustness Using Proxy Distributions. (99%)Vikash Sehwag; Saeed Mahloujifar; Tinashe Handina; Sihui Dai; Chong Xiang; Mung Chiang; Prateek Mittal
Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models. (99%)Lyuyi Zhu; Kairui Feng; Ziyuan Pu; Wei Ma
LAFEAT: Piercing Through Adversarial Defenses with Latent Features. (99%)Yunrui Yu; Xitong Gao; Cheng-Zhong Xu
Removing Adversarial Noise in Class Activation Feature Space. (99%)Dawei Zhou; Nannan Wang; Chunlei Peng; Xinbo Gao; Xiaoyu Wang; Jun Yu; Tongliang Liu
Direction-Aggregated Attack for Transferable Adversarial Examples. (99%)Tianjin Huang; Vlado Menkovski; Yulong Pei; YuHao Wang; Mykola Pechenizkiy
Manipulating SGD with Data Ordering Attacks. (95%)Ilia Shumailov; Zakhar Shumaylov; Dmitry Kazhdan; Yiren Zhao; Nicolas Papernot; Murat A. Erdogdu; Ross Anderson
Provable Robustness of Adversarial Training for Learning Halfspaces with Noise. (22%)Difan Zou; Spencer Frei; Quanquan Gu
Protecting the Intellectual Properties of Deep Neural Networks with an Additional Class and Steganographic Images. (11%)Shichang Sun; Mingfu Xue; Jian Wang; Weiqiang Liu
Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning. (1%)Kai Li; Chang Liu; Handong Zhao; Yulun Zhang; Yun Fu
2021-04-18
Best Practices for Noise-Based Augmentation to Improve the Performance of Emotion Recognition "In the Wild". (83%)Mimansa Jaiswal; Emily Mower Provost
On the Sensitivity and Stability of Model Interpretations in NLP. (1%)Fan Yin; Zhouxing Shi; Cho-Jui Hsieh; Kai-Wei Chang
2021-04-17
Attacking Text Classifiers via Sentence Rewriting Sampler. (99%)Lei Xu; Kalyan Veeramachaneni
Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems. (99%)Yue Gao; Ilia Shumailov; Kassem Fawaz
Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation. (98%)Max Bartolo; Tristan Thrush; Robin Jia; Sebastian Riedel; Pontus Stenetorp; Douwe Kiela
Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training. (87%)Kuan-Hao Huang; Wasi Uddin Ahmad; Nanyun Peng; Kai-Wei Chang
AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples. (15%)Qianchu Liu; Edoardo M. Ponti; Diana McCarthy; Ivan Vulić; Anna Korhonen
2021-04-16
Fashion-Guided Adversarial Attack on Person Segmentation. (99%)Marc Treu; Trung-Nghia Le; Huy H. Nguyen; Junichi Yamagishi; Isao Echizen
Towards Variable-Length Textual Adversarial Attacks. (99%)Junliang Guo; Zhirui Zhang; Linlin Zhang; Linli Xu; Boxing Chen; Enhong Chen; Weihua Luo
An Adversarially-Learned Turing Test for Dialog Generation Models. (96%)Xiang Gao; Yizhe Zhang; Michel Galley; Bill Dolan
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. (81%)David Stutz; Nandhini Chandramoorthy; Matthias Hein; Bernt Schiele
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries. (2%)Arjun Nitin Bhagoji; Daniel Cullina; Vikash Sehwag; Prateek Mittal
2021-04-15
Gradient-based Adversarial Attacks against Text Transformers. (99%)Chuan Guo; Alexandre Sablayrolles; Hervé Jégou; Douwe Kiela
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World. (86%)Mingfu Xue; Can He; Shichang Sun; Jian Wang; Weiqiang Liu
Are Multilingual BERT models robust? A Case Study on Adversarial Attacks for Multilingual Question Answering. (12%)Sara Rosenthal; Mihaela Bornea; Avirup Sil
Federated Learning for Malware Detection in IoT Devices. (10%)Valerian Rey; Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Gérôme Bovet; Martin Jaggi
2021-04-14
Meaningful Adversarial Stickers for Face Recognition in Physical World. (98%)Ying Guo; Xingxing Wei; Guoqiu Wang; Bo Zhang
Orthogonalizing Convolutional Layers with the Cayley Transform. (80%)Asher Trockman; J. Zico Kolter
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks. (38%)Nicolas M. Müller; Simon Roschmann; Konstantin Böttinger
Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition. (1%)Alessandro De Palma; Rudy Bunel; Alban Desmaison; Krishnamurthy Dvijotham; Pushmeet Kohli; Philip H. S. Torr; M. Pawan Kumar
2021-04-13
Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune. (99%)Shanshi Huang; Hongwu Jiang; Shimeng Yu
Detecting Operational Adversarial Examples for Reliable Deep Learning. (82%)Xingyu Zhao; Wei Huang; Sven Schewe; Yi Dong; Xiaowei Huang
Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack. (75%)Luca Pajola; Mauro Conti
2021-04-12
Sparse Coding Frontend for Robust Neural Networks. (99%)Can Bakiskan; Metehan Cekic; Ahmet Dundar Sezer; Upamanyu Madhow
A Backdoor Attack against 3D Point Cloud Classifiers. (96%)Zhen Xiang; David J. Miller; Siheng Chen; Xi Li; George Kesidis
Plot-guided Adversarial Example Construction for Evaluating Open-domain Story Generation. (56%)Sarik Ghazarian; Zixi Liu; Akash SM; Ralph Weischedel; Aram Galstyan; Nanyun Peng
Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation. (50%)Chong Zhang; Jieyu Zhao; Huan Zhang; Kai-Wei Chang; Cho-Jui Hsieh
Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack. (1%)Xinyi Zhang; Chengfang Fang; Jie Shi
2021-04-11
Achieving Model Robustness through Discrete Adversarial Training. (99%)Maor Ivgi; Jonathan Berant
2021-04-10
Fool Me Twice: Entailment from Wikipedia Gamification. (61%)Julian Martin Eisenschlos; Bhuwan Dhingra; Jannis Bulian; Benjamin Börschinger; Jordan Boyd-Graber
Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach. (15%)Simiao Zuo; Chen Liang; Haoming Jiang; Xiaodong Liu; Pengcheng He; Jianfeng Gao; Weizhu Chen; Tuo Zhao
Disentangled Contrastive Learning for Learning Robust Textual Representations. (11%)Xiang Chen; Xin Xie; Zhen Bi; Hongbin Ye; Shumin Deng; Ningyu Zhang; Huajun Chen
2021-04-09
Relating Adversarially Robust Generalization to Flat Minima. (99%)David Stutz; Matthias Hein; Bernt Schiele
SPoTKD: A Protocol for Symmetric Key Distribution over Public Channels Using Self-Powered Timekeeping Devices. (1%)Mustafizur Rahman; Liang Zhou; Shantanu Chakrabartty
Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication. (1%)Xiquan Guan; Huamin Feng; Weiming Zhang; Hang Zhou; Jie Zhang; Nenghai Yu
Learning Sampling Policy for Faster Derivative Free Optimization. (1%)Zhou Zhai; Bin Gu; Heng Huang
2021-04-08
FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems. (98%)Liang Tong; Zhengzhang Chen; Jingchao Ni; Wei Cheng; Dongjin Song; Haifeng Chen; Yevgeniy Vorobeychik
Explainability-based Backdoor Attacks Against Graph Neural Networks. (15%)Jing Xu; Minhui Xue; Stjepan Picek
A single gradient step finds adversarial examples on random two-layers neural networks. (10%)Sébastien Bubeck; Yeshwanth Cherapanamjeri; Gauthier Gidel; Rémi Tachet des Combes
Adversarial Learning Inspired Emerging Side-Channel Attacks and Defenses. (8%)Abhijitt Dhavlle
2021-04-07
Universal Adversarial Training with Class-Wise Perturbations. (99%)Philipp Benz; Chaoning Zhang; Adil Karjauv; In So Kweon
Universal Spectral Adversarial Attacks for Deformable Shapes. (81%)Arianna Rampini; Franco Pestarini; Luca Cosmo; Simone Melzi; Emanuele Rodolà
Adversarial Robustness Guarantees for Gaussian Processes. (68%)Andrea Patane; Arno Blaas; Luca Laurenti; Luca Cardelli; Stephen Roberts; Marta Kwiatkowska
The art of defense: letting networks fool the attacker. (64%)Jinlai Zhang; Binbin Liu; Lyujie Chen; Bo Ouyang; Jihong Zhu; Minchi Kuang; Houqing Wang; Yanmei Meng
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. (61%)Yi Zeng; Won Park; Z. Morley Mao; Ruoxi Jia
Improving Robustness of Deep Reinforcement Learning Agents: Environment Attacks based on Critic Networks. (10%)Lucas Schott; Manon Césaire; Hatem Hajri; Sylvain Lamprier
Sparse Oblique Decision Trees: A Tool to Understand and Manipulate Neural Net Features. (3%)Suryabhan Singh Hada; Miguel Á. Carreira-Perpiñán; Arman Zharmagambetov
An Object Detection based Solver for Google's Image reCAPTCHA v2. (1%)Md Imran Hossen; Yazhou Tu; Md Fazle Rabby; Md Nazmul Islam; Hui Cao; Xiali Hei
2021-04-06
Exploring Targeted Universal Adversarial Perturbations to End-to-end ASR Models. (93%)Zhiyun Lu; Wei Han; Yu Zhang; Liangliang Cao
Adversarial Robustness under Long-Tailed Distribution. (89%)Tong Wu; Ziwei Liu; Qingqiu Huang; Yu Wang; Dahua Lin
Robust Adversarial Classification via Abstaining. (75%)Abed AlRahman Al Makdah; Vaibhav Katewa; Fabio Pasqualetti
Backdoor Attack in the Physical World. (2%)Yiming Li; Tongqing Zhai; Yong Jiang; Zhifeng Li; Shu-Tao Xia
2021-04-05
Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model. (99%)Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification. (98%)Samuel Henrique Silva; Arun Das; Ian Scarff; Peyman Najafirad
BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for Text Classification. (96%)Ishani Mondal
Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses. (74%)Yao Deng; Tiehua Zhang; Guannan Lou; Xi Zheng; Jiong Jin; Qing-Long Han
Can audio-visual integration strengthen robustness under multimodal attacks? (68%)Yapeng Tian; Chenliang Xu
Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. (33%)Neal Mangaokar; Jiameng Pu; Parantapa Bhattacharya; Chandan K. Reddy; Bimal Viswanath
Unified Detection of Digital and Physical Face Attacks. (8%)Debayan Deb; Xiaoming Liu; Anil K. Jain
Beyond Categorical Label Representations for Image Classification. (2%)Boyuan Chen; Yu Li; Sunand Raghupathi; Hod Lipson
Rethinking Perturbations in Encoder-Decoders for Fast Training. (1%)Sho Takase; Shun Kiyono
2021-04-04
Semantically Stealthy Adversarial Attacks against Segmentation Models. (99%)Zhenhua Chen; Chuhua Wang; David J. Crandall
Reliably fast adversarial training via latent adversarial perturbation. (93%)Geon Yeong Park; Sang Wan Lee
2021-04-03
Mitigating Gradient-based Adversarial Attacks via Denoising and Compression. (99%)Rehana Mahfuz; Rajeev Sahay; Aly El Gamal
Gradient-based Adversarial Deep Modulation Classification with Data-driven Subsampling. (93%)Jinho Yi; Aly El Gamal
Property-driven Training: All You (N)Ever Wanted to Know About. (26%)Marco Casadio; Matthew Daggitt; Ekaterina Komendantskaya; Wen Kokke; Daniel Kienitz; Rob Stewart
2021-04-02
Defending Against Image Corruptions Through Adversarial Augmentations. (92%)Dan A. Calian; Florian Stimberg; Olivia Wiles; Sylvestre-Alvise Rebuffi; Andras Gyorgy; Timothy Mann; Sven Gowal
RABA: A Robust Avatar Backdoor Attack on Deep Neural Network. (83%)Ying He; Zhili Shen; Chang Xia; Jingyu Hua; Wei Tong; Sheng Zhong
Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration under Noise Domain Shifts. (2%)Athanasios Tsiligkaridis; Theodoros Tsiligkaridis
Fast-adapting and Privacy-preserving Federated Recommender System. (1%)Qinyong Wang; Hongzhi Yin; Tong Chen; Junliang Yu; Alexander Zhou; Xiangliang Zhang
2021-04-01
TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness. (99%)Zhuolin Yang; Linyi Li; Xiaojun Xu; Shiliang Zuo; Qian Chen; Benjamin Rubinstein; Pan Zhou; Ce Zhang; Bo Li
Domain Invariant Adversarial Learning. (98%)Matan Levi; Idan Attias; Aryeh Kontorovich
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction. (93%)Luoqiu Li; Xiang Chen; Ningyu Zhang; Shumin Deng; Xin Xie; Chuanqi Tan; Mosha Chen; Fei Huang; Huajun Chen
Towards Evaluating and Training Verifiably Robust Neural Networks. (45%)Zhaoyang Lyu; Minghao Guo; Tong Wu; Guodong Xu; Kehuan Zhang; Dahua Lin
Augmenting Zero Trust Architecture to Endpoints Using Blockchain: A Systematic Review. (3%)Lampis Alevizos; Vinh Thong Ta; Max Hashem Eiza
Learning from Noisy Labels via Dynamic Loss Thresholding. (1%)Hao Yang; Youzhi Jin; Ziyin Li; Deng-Bao Wang; Lei Miao; Xin Geng; Min-Ling Zhang
2021-03-31
Adversarial Heart Attack: Neural Networks Fooled to Segment Heart Symbols in Chest X-Ray Images. (99%)Gerda Bortsova; Florian Dubost; Laurens Hogeweg; Ioannis Katramados; Marleen de Bruijne
Adversarial Attacks and Defenses for Speech Recognition Systems. (99%)Piotr Żelasko; Sonal Joshi; Yiwen Shao; Jesus Villalba; Jan Trmal; Najim Dehak; Sanjeev Khudanpur
Fast Certified Robust Training with Short Warmup. (86%)Zhouxing Shi; Yihan Wang; Huan Zhang; Jinfeng Yi; Cho-Jui Hsieh
Fast Jacobian-Vector Product for Deep Networks. (22%)Randall Balestriero; Richard Baraniuk
Too Expensive to Attack: A Joint Defense Framework to Mitigate Distributed Attacks for the Internet of Things Grid. (2%)Jianhua Li; Ximeng Liu; Jiong Jin; Shui Yu
Digital Forensics vs. Anti-Digital Forensics: Techniques, Limitations and Recommendations. (1%)Jean-Paul A. Yaacoub; Hassan N. Noura; Ola Salman; Ali Chehab
2021-03-30
On the Robustness of Vision Transformers to Adversarial Examples. (99%)Kaleel Mahmood; Rigel Mahmood; Marten van Dijk
Class-Aware Robust Adversarial Training for Object Detection. (96%)Pin-Chun Chen; Bo-Han Kung; Jun-Cheng Chen
PointBA: Towards Backdoor Attacks in 3D Point Cloud. (92%)Xinke Li; Zhiru Chen; Yue Zhao; Zekun Tong; Yabang Zhao; Andrew Lim; Joey Tianyi Zhou
What Causes Optical Flow Networks to be Vulnerable to Physical Adversarial Attacks. (88%)Simon Schrodi; Tonmoy Saikia; Thomas Brox
Statistical inference for individual fairness. (67%)Subha Maity; Songkai Xue; Mikhail Yurochkin; Yuekai Sun
Learning Lipschitz Feedback Policies from Expert Demonstrations: Closed-Loop Guarantees, Generalization and Robustness. (47%)Abed AlRahman Al Makdah; Vishaal Krishnan; Fabio Pasqualetti
Improving robustness against common corruptions with frequency biased models. (1%)Tonmoy Saikia; Cordelia Schmid; Thomas Brox
2021-03-29
Lagrangian Objective Function Leads to Improved Unforeseen Attack Generalization in Adversarial Training. (99%)Mohammad Azizmalayeri; Mohammad Hossein Rohban
Enhancing the Transferability of Adversarial Attacks through Variance Tuning. (99%)Xiaosen Wang; Kun He
On the Adversarial Robustness of Vision Transformers. (99%)Rulin Shao; Zhouxing Shi; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh
ZeroGrad : Mitigating and Explaining Catastrophic Overfitting in FGSM Adversarial Training. (95%)Zeinab Golgooni; Mehrdad Saberi; Masih Eskandar; Mohammad Hossein Rohban
Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing. (93%)Cheng Chen; Bhavya Kailkhura; Ryan Goldhahn; Yi Zhou
Fooling LiDAR Perception via Adversarial Trajectory Perturbation. (83%)Yiming Li; Congcong Wen; Felix Juefei-Xu; Chen Feng
Robust Reinforcement Learning under model misspecification. (31%)Lebin Yu; Jian Wang; Xudong Zhang
Automating Defense Against Adversarial Attacks: Discovery of Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed Models. (16%)Josh Kalin; David Noever; Matthew Ciolino; Dominick Hambrick; Gerry Dozier
MISA: Online Defense of Trojaned Models using Misattributions. (10%)Panagiota Kiourti; Wenchao Li; Anirban Roy; Karan Sikka; Susmit Jha
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models. (9%)Wenkai Yang; Lei Li; Zhiyuan Zhang; Xuancheng Ren; Xu Sun; Bin He
Selective Output Smoothing Regularization: Regularize Neural Networks by Softening Output Distributions. (1%)Xuan Cheng; Tianshu Xie; Xiaomin Wang; Qifeng Weng; Minghui Liu; Jiali Deng; Ming Liu
2021-03-28
Improved Autoregressive Modeling with Distribution Smoothing. (86%)Chenlin Meng; Jiaming Song; Yang Song; Shengjia Zhao; Stefano Ermon
2021-03-27
On the benefits of robust models in modulation recognition. (99%)Javier Maroto; Gérôme Bovet; Pascal Frossard
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking. (99%)Shuai Jia; Yibing Song; Chao Ma; Xiaokang Yang
LiBRe: A Practical Bayesian Approach to Adversarial Detection. (99%)Zhijie Deng; Xiao Yang; Shizhen Xu; Hang Su; Jun Zhu
2021-03-26
Cyclic Defense GAN Against Speech Adversarial Attacks. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Combating Adversaries with Anti-Adversaries. (93%)Motasem Alfarra; Juan C. Pérez; Ali Thabet; Adel Bibi; Philip H. S. Torr; Bernard Ghanem
On Generating Transferable Targeted Perturbations. (93%)Muzammal Naseer; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Fatih Porikli
Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation. (86%)Dohun Lim; Hyeonseok Lee; Sungchan Kim
Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness. (83%)Yi Cai; Xuefei Ning; Huazhong Yang; Yu Wang
Visual Explanations from Spiking Neural Networks using Interspike Intervals. (62%)Youngeun Kim; Priyadarshini Panda
Unsupervised Robust Domain Adaptation without Source Data. (13%)Peshal Agarwal; Danda Pani Paudel; Jan-Nico Zaech; Luc Van Gool
2021-03-25
Adversarial Attacks are Reversible with Natural Supervision. (99%)Chengzhi Mao; Mia Chiquier; Hao Wang; Junfeng Yang; Carl Vondrick
Adversarial Attacks on Deep Learning Based mmWave Beam Prediction in 5G and Beyond. (98%)Brian Kim; Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus
MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes. (81%)Zhikai Chen; Lingxi Xie; Shanmin Pang; Yong He; Bo Zhang
Deep-RBF Networks for Anomaly Detection in Automotive Cyber-Physical Systems. (70%)Matthew Burruss; Shreyas Ramakrishna; Abhishek Dubey
Orthogonal Projection Loss. (45%)Kanchana Ranasinghe; Muzammal Naseer; Munawar Hayat; Salman Khan; Fahad Shahbaz Khan
THAT: Two Head Adversarial Training for Improving Robustness at Scale. (26%)Zuxuan Wu; Tom Goldstein; Larry S. Davis; Ser-Nam Lim
A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography. (11%)Xiaoxuan Lou; Tianwei Zhang; Jun Jiang; Yinqian Zhang
HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks. (10%)Peizhuo Lv; Pan Li; Shengzhi Zhang; Kai Chen; Ruigang Liang; Yue Zhao; Yingjiu Li
The Geometry of Over-parameterized Regression and Adversarial Perturbations. (2%)Jason W. Rocks; Pankaj Mehta
Synthesize-It-Classifier: Learning a Generative Classifier through RecurrentSelf-analysis. (1%)Arghya Pal; Rapha Phan; KokSheik Wong
Spirit Distillation: Precise Real-time Prediction with Insufficient Data. (1%)Zhiyuan Wu; Hong Qi; Yu Jiang; Chupeng Cui; Zongmin Yang; Xinhui Xue
Recent Advances in Large Margin Learning. (1%)Yiwen Guo; Changshui Zhang
2021-03-24
Adversarial Feature Stacking for Accurate and Robust Predictions. (99%)Faqiang Liu; Rong Zhao; Luping Shi
Vulnerability of Appearance-based Gaze Estimation. (97%)Mingjie Xu; Haofei Wang; Yunfei Liu; Feng Lu
Black-box Detection of Backdoor Attacks with Limited Information and Data. (96%)Yinpeng Dong; Xiao Yang; Zhijie Deng; Tianyu Pang; Zihao Xiao; Hang Su; Jun Zhu
Deepfake Forensics via An Adversarial Game. (10%)Zhi Wang; Yiwen Guo; Wangmeng Zuo
2021-03-23
Robust and Accurate Object Detection via Adversarial Learning. (98%)Xiangning Chen; Cihang Xie; Mingxing Tan; Li Zhang; Cho-Jui Hsieh; Boqing Gong
CLIP: Cheap Lipschitz Training of Neural Networks. (96%)Leon Bungert; René Raab; Tim Roith; Leo Schwinn; Daniel Tenbrinck
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? (92%)Antonio Emanuele Cinà; Sebastiano Vascon; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations. (87%)Chaitanya K. Ryali; David J. Schwab; Ari S. Morcos
RPATTACK: Refined Patch Attack on General Object Detectors. (76%)Hao Huang; Yongtao Wang; Zhaoyu Chen; Zhi Tang; Wenqiang Zhang; Kai-Kuang Ma
NNrepair: Constraint-based Repair of Neural Network Classifiers. (50%)Muhammad Usman; Divya Gopinath; Youcheng Sun; Yannic Noller; Corina Pasareanu
Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs. (31%)Ramneet Kaur; Susmit Jha; Anirban Roy; Oleg Sokolsky; Insup Lee
Improved Estimation of Concentration Under $\ell_p$-Norm Distance Metrics Using Half Spaces. (22%)Jack Prescott; Xiao Zhang; David Evans
ESCORT: Ethereum Smart COntRacTs Vulnerability Detection using Deep Neural Network and Transfer Learning. (1%)Oliver Lutz; Huili Chen; Hossein Fereidooni; Christoph Sendner; Alexandra Dmitrienko; Ahmad Reza Sadeghi; Farinaz Koushanfar
2021-03-22
Grey-box Adversarial Attack And Defence For Sentiment Classification. (99%)Ying Xu; Xu Zhong; Antonio Jimeno Yepes; Jey Han Lau
Fast Approximate Spectral Normalization for Robust Deep Neural Networks. (98%)Zhixin Pan; Prabhat Mishra
Spatio-Temporal Sparsification for General Robust Graph Convolution Networks. (87%)Mingming Lu; Ya Zhang
RA-BNN: Constructing Robust & Accurate Binary Neural Network to Simultaneously Defend Adversarial Bit-Flip Attack and Improve Accuracy. (75%)Adnan Siraj Rakin; Li Yang; Jingtao Li; Fan Yao; Chaitali Chakrabarti; Yu Cao; Jae-sun Seo; Deliang Fan
Adversarial Feature Augmentation and Normalization for Visual Recognition. (13%)Tianlong Chen; Yu Cheng; Zhe Gan; Jianfeng Wang; Lijuan Wang; Zhangyang Wang; Jingjing Liu
Adversarially Optimized Mixup for Robust Classification. (13%)Jason Bunk; Srinjoy Chattopadhyay; B. S. Manjunath; Shivkumar Chandrasekaran
2021-03-21
ExAD: An Ensemble Approach for Explanation-based Adversarial Detection. (99%)Raj Vardhan; Ninghao Liu; Phakpoom Chinprutthiwong; Weijie Fu; Zhenyu Hu; Xia Ben Hu; Guofei Gu
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing. (75%)Tao Gui; Xiao Wang; Qi Zhang; Qin Liu; Yicheng Zou; Xin Zhou; Rui Zheng; Chong Zhang; Qinzhuo Wu; Jiacheng Ye; Zexiong Pang; Yongxin Zhang; Zhengyan Li; Ruotian Ma; Zichu Fei; Ruijian Cai; Jun Zhao; Xinwu Hu; Zhiheng Yan; Yiding Tan; Yuan Hu; Qiyuan Bian; Zhihua Liu; Bolin Zhu; Shan Qin; Xiaoyu Xing; Jinlan Fu; Yue Zhang; Minlong Peng; Xiaoqing Zheng; Yaqian Zhou; Zhongyu Wei; Xipeng Qiu; Xuanjing Huang
Natural Perturbed Training for General Robustness of Neural Network Classifiers. (38%)Sadaf Gulshad; Arnold Smeulders
Self adversarial attack as an augmentation method for immunohistochemical stainings. (33%)Jelica Vasiljević; Friedrich Feuerhake; Cédric Wemmert; Thomas Lampert
2021-03-20
Robust Models Are More Interpretable Because Attributions Look Normal. (15%)Zifan Wang; Matt Fredrikson; Anupam Datta
2021-03-19
LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack. (99%)Ashkan Esmaeili; Marzieh Edraki; Nazanin Rahnavard; Mubarak Shah; Ajmal Mian
SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems. (93%)Yuxuan Chen; Jiangshan Zhang; Xuejing Yuan; Shengzhi Zhang; Kai Chen; Xiaofeng Wang; Shanqing Guo
Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions. (86%)Michael Goebel; Jason Bunk; Srinjoy Chattopadhyay; Lakshmanan Nataraj; Shivkumar Chandrasekaran; B. S. Manjunath
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond. (2%)Xuhong Li; Haoyi Xiong; Xingjian Li; Xuanyu Wu; Xiao Zhang; Ji Liu; Jiang Bian; Dejing Dou
2021-03-18
Generating Adversarial Computer Programs using Optimized Obfuscations. (99%)Shashank Srikant; Sijia Liu; Tamara Mitrovska; Shiyu Chang; Quanfu Fan; Gaoyuan Zhang; Una-May O'Reilly
Boosting Adversarial Transferability through Enhanced Momentum. (99%)Xiaosen Wang; Jiadong Lin; Han Hu; Jingdong Wang; Kun He
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles. (98%)Gabriel D. Cantareira; Rodrigo F. Mello; Fernando V. Paulovich
Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training. (76%)Saurabh Sahu; Palash Goyal
Model Extraction and Adversarial Transferability, Your BERT is Vulnerable! (69%)Xuanli He; Lingjuan Lyu; Qiongkai Xu; Lichao Sun
TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation. (61%)Todd Huster; Emmanuel Ekwedike
Noise Modulation: Let Your Model Interpret Itself. (54%)Haoyang Li; Xinggang Wang
KoDF: A Large-scale Korean DeepFake Detection Dataset. (16%)Patrick Kwon; Jaeseong You; Gyuhyeon Nam; Sungwoo Park; Gyeongsu Chae
Reading Isn't Believing: Adversarial Attacks On Multi-Modal Neurons. (9%)David A. Noever; Samantha E. Miller Noever
2021-03-17
Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap? (99%)Nathan Inkawhich; Kevin J Liang; Jingyang Zhang; Huanrui Yang; Hai Li; Yiran Chen
Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection. (98%)Mazen Abdelfattah; Kaiwen Yuan; Z. Jane Wang; Rabab Ward
Improved, Deterministic Smoothing for L1 Certified Robustness. (82%)Alexander Levine; Soheil Feizi
Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition. (41%)Yaodong Yu; Zitong Yang; Edgar Dobriban; Jacob Steinhardt; Yi Ma
Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots. (38%)Samson Tan; Shafiq Joty
Cyber Intrusion Detection by Using Deep Neural Networks with Attack-sharing Loss. (13%)Boxiang Dong; Wendy Hui Wang; Aparna S. Varde; Dawei Li; Bharath K. Samanthula; Weifeng Sun; Liang Zhao
2021-03-16
Adversarial YOLO: Defense Human Detection Patch Attacks via Detecting Adversarial Patches. (92%)Nan Ji; YanFei Feng; Haidong Xie; Xueshuang Xiang; Naijin Liu
Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation. (75%)Jungbeom Lee; Eunji Kim; Sungroh Yoon
Bio-inspired Robustness: A Review. (70%)Harshitha Machiraju; Oh-Hyeon Choung; Pascal Frossard; Michael. H Herzog
Adversarial Driving: Attacking End-to-End Autonomous Driving Systems. (68%)Han Wu; Wenjie Ruan
2021-03-15
Constant Random Perturbations Provide Adversarial Robustness with Minimal Effect on Accuracy. (83%)Bronya Roni Chernyak; Bhiksha Raj; Tamir Hazan; Joseph Keshet
Adversarial Training is Not Ready for Robot Learning. (67%)Mathias Lechner; Ramin Hasani; Radu Grosu; Daniela Rus; Thomas A. Henzinger
HDTest: Differential Fuzz Testing of Brain-Inspired Hyperdimensional Computing. (64%)Dongning Ma; Jianmin Guo; Yu Jiang; Xun Jiao
Understanding invariance via feedforward inversion of discriminatively trained classifiers. (10%)Piotr Teterwak; Chiyuan Zhang; Dilip Krishnan; Michael C. Mozer
Meta-Solver for Neural Ordinary Differential Equations. (2%)Julia Gusak; Alexandr Katrutsa; Talgat Daulbaev; Andrzej Cichocki; Ivan Oseledets
2021-03-14
Towards Robust Speech-to-Text Adversarial Attack. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks. (98%)Manoj Rohit Vemparala; Alexander Frickenstein; Nael Fasfous; Lukas Frickenstein; Qi Zhao; Sabine Kuhn; Daniel Ehrhardt; Yuankai Wu; Christian Unger; Naveen Shankar Nagaraja; Walter Stechele
Multi-Discriminator Sobolev Defense-GAN Against Adversarial Attacks for End-to-End Speech Systems. (82%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Membership Inference Attacks on Machine Learning: A Survey. (68%)Hongsheng Hu; Zoran Salcic; Lichao Sun; Gillian Dobbie; Philip S. Yu; Xuyun Zhang
2021-03-13
Attack as Defense: Characterizing Adversarial Examples using Robustness. (99%)Zhe Zhao; Guangke Chen; Jingyi Wang; Yiwei Yang; Fu Song; Jun Sun
Generating Unrestricted Adversarial Examples via Three Parameters. (99%)Hanieh Naderi; Leili Goli; Shohreh Kasaei
Simeon -- Secure Federated Machine Learning Through Iterative Filtering. (12%)Nicholas Malecki; Hye-young Paik; Aleksandar Ignjatovic; Alan Blair; Elisa Bertino
2021-03-12
Learning Defense Transformers for Counterattacking Adversarial Examples. (99%)Jincheng Li; Jiezhang Cao; Yifan Zhang; Jian Chen; Mingkui Tan
Internal Wasserstein Distance for Adversarial Attack and Defense. (99%)Mingkui Tan; Shuhai Zhang; Jiezhang Cao; Jincheng Li; Yanwu Xu
A Unified Game-Theoretic Interpretation of Adversarial Robustness. (98%)Jie Ren; Die Zhang; Yisen Wang; Lu Chen; Zhanpeng Zhou; Yiting Chen; Xu Cheng; Xin Wang; Meng Zhou; Jie Shi; Quanshi Zhang
Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case. (82%)Evren Catak; Ferhat Ozgur Catak; Arild Moldsvor
Network Environment Design for Autonomous Cyberdefense. (1%)Andres Molina-Markham; Cory Miniter; Becky Powell; Ahmad Ridley
2021-03-11
Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling. (99%)Md Shohidul Islam; Ihsen Alouani; Khaled N. Khasawneh
Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification. (99%)Shiqi Wang; Huan Zhang; Kaidi Xu; Xue Lin; Suman Jana; Cho-Jui Hsieh; J. Zico Kolter
Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink. (99%)Ranjie Duan; Xiaofeng Mao; A. K. Qin; Yun Yang; Yuefeng Chen; Shaokai Ye; Yuan He
DAFAR: Detecting Adversaries by Feedback-Autoencoder Reconstruction. (99%)Haowen Liu; Ping Yi; Hsiao-Ying Lin; Jie Shi
ReinforceBug: A Framework to Generate Adversarial Textual Examples. (97%)Bushra Sabir; M. Ali Babar; Raj Gaire
Multi-Task Federated Reinforcement Learning with Adversaries. (15%)Aqeel Anwar; Arijit Raychowdhury
BODAME: Bilevel Optimization for Defense Against Model Extraction. (8%)Yuto Mori; Atsushi Nitanda; Akiko Takeda
2021-03-10
Improving Adversarial Robustness via Channel-wise Activation Suppressing. (99%)Yang Bai; Yuyuan Zeng; Yong Jiang; Shu-Tao Xia; Xingjun Ma; Yisen Wang
TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack. (92%)Yam Sharon; David Berend; Yang Liu; Asaf Shabtai; Yuval Elovici
VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples. (67%)Tian Pan; Yibing Song; Tianyu Yang; Wenhao Jiang; Wei Liu
Fine-tuning of Pre-trained End-to-end Speech Recognition with Generative Adversarial Networks. (1%)Md Akmal Haidar; Mehdi Rezagholizadeh
2021-03-09
Stabilized Medical Image Attacks. (99%)Gege Qi; Lijun Gong; Yibing Song; Kai Ma; Yefeng Zheng
Revisiting Model's Uncertainty and Confidences for Adversarial Example Detection. (99%)Ahmed Aldahdooh; Wassim Hamidouche; Olivier Déforges
Practical Relative Order Attack in Deep Ranking. (99%)Mo Zhou; Le Wang; Zhenxing Niu; Qilin Zhang; Yinghui Xu; Nanning Zheng; Gang Hua
BASAR:Black-box Attack on Skeletal Action Recognition. (99%)Yunfeng Diao; Tianjia Shao; Yong-Liang Yang; Kun Zhou; He Wang
Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack. (98%)He Wang; Feixiang He; Zhexi Peng; Tianjia Shao; Yong-Liang Yang; Kun Zhou; David Hogg
Deep Learning for Android Malware Defenses: a Systematic Literature Review. (11%)Yue Liu; Chakkrit Tantithamthavorn; Li Li; Yepang Liu
Robust Black-box Watermarking for Deep NeuralNetwork using Inverse Document Frequency. (10%)Mohammad Mehdi Yadollahi; Farzaneh Shoeleh; Sajjad Dadkhah; Ali A. Ghorbani
Towards Strengthening Deep Learning-based Side Channel Attacks with Mixup. (2%)Zhimin Luo; Mengce Zheng; Ping Wang; Minhui Jin; Jiajia Zhang; Honggang Hu; Nenghai Yu
2021-03-08
Packet-Level Adversarial Network Traffic Crafting using Sequence Generative Adversarial Networks. (99%)Qiumei Cheng; Shiying Zhou; Yi Shen; Dezhang Kong; Chunming Wu
Enhancing Transformation-based Defenses against Adversarial Examples with First-Order Perturbations. (99%)Haimin Zhang; Min Xu
Contemplating real-world object classification. (81%)Ali Borji
Consistency Regularization for Adversarial Robustness. (50%)Jihoon Tack; Sihyun Yu; Jongheon Jeong; Minseon Kim; Sung Ju Hwang; Jinwoo Shin
Prime+Probe 1, JavaScript 0: Overcoming Browser-based Side-Channel Defenses. (2%)Anatoly Shusterman; Ayush Agarwal; Sioli O'Connell; Daniel Genkin; Yossi Oren; Yuval Yarom
Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors. (1%)Jian Ding; Enze Xie; Hang Xu; Chenhan Jiang; Zhenguo Li; Ping Luo; Gui-Song Xia
Deep Model Intellectual Property Protection via Deep Watermarking. (1%)Jie Zhang; Dongdong Chen; Jing Liao; Weiming Zhang; Huamin Feng; Gang Hua; Nenghai Yu
2021-03-07
Universal Adversarial Perturbations and Image Spam Classifiers. (99%)Andy Phung; Mark Stamp
Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain. (99%)Jinyu Tian; Jiantao Zhou; Yuanman Li; Jia Duan
Improving Global Adversarial Robustness Generalization With Adversarially Trained GAN. (99%)Desheng Wang; Weidong Jin; Yunpu Wu; Aamir Khan
Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and Accuracy. (76%)Chen Chen; Kezhi Kong; Peihong Yu; Juan Luque; Tom Goldstein; Furong Huang
2021-03-06
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification. (98%)Ahmadreza Azizi; Ibrahim Asadullah Tahmid; Asim Waheed; Neal Mangaokar; Jiameng Pu; Mobin Javed; Chandan K. Reddy; Bimal Viswanath
Hidden Backdoor Attack against Semantic Segmentation Models. (93%)Yiming Li; Yanjie Li; Yalei Lv; Yong Jiang; Shu-Tao Xia
2021-03-05
Cyber Threat Intelligence Model: An Evaluation of Taxonomies, Sharing Standards, and Ontologies within Cyber Threat Intelligence. (13%)Vasileios Mavroeidis; Siri Bromander
Don't Forget to Sign the Gradients! (10%)Omid Aramoon; Pin-Yu Chen; Gang Qu
Tor circuit fingerprinting defenses using adaptive padding. (1%)George Kadianakis; Theodoros Polyzos; Mike Perry; Kostas Chatzikokolakis
2021-03-04
Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples. (99%)Washington Garcia; Pin-Yu Chen; Somesh Jha; Scott Clouse; Kevin R. B. Butler
WaveGuard: Understanding and Mitigating Audio Adversarial Examples. (99%)Shehzeen Hussain; Paarth Neekhara; Shlomo Dubnov; Julian McAuley; Farinaz Koushanfar
Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack. (99%)Mengting Xu; Tao Zhang; Zhongnian Li; Mingxia Liu; Daoqiang Zhang
QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval. (99%)Xiaodan Li; Jinfeng Li; Yuefeng Chen; Shaokai Ye; Yuan He; Shuhui Wang; Hang Su; Hui Xue
SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain. (99%)Paula Harder; Franz-Josef Pfreundt; Margret Keuper; Janis Keuper
Gradient-Guided Dynamic Efficient Adversarial Training. (96%)Fu Wang; Yanghao Zhang; Yanbin Zheng; Wenjie Ruan
PointGuard: Provably Robust 3D Point Cloud Classification. (92%)Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods. (12%)William Paul; Yinzhi Cao; Miaomiao Zhang; Phil Burlina
A Novel Framework for Threat Analysis of Machine Learning-based Smart Healthcare Systems. (1%)Nur Imtiazul Haque; Mohammad Ashiqur Rahman; Md Hasan Shahriar; Alvi Ataur Khalil; Selcuk Uluagac
On the privacy-utility trade-off in differentially private hierarchical text classification. (1%)Dominik Wunderlich; Daniel Bernau; Francesco Aldà; Javier Parra-Arnau; Thorsten Strufe
2021-03-03
Structure-Preserving Progressive Low-rank Image Completion for Defending Adversarial Attacks. (99%)Zhiqun Zhao; Hengyou Wang; Hao Sun; Zhihai He
A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models. (89%)Josh Kalin; David Noever; Matthew Ciolino
Shift Invariance Can Reduce Adversarial Robustness. (87%)Songwei Ge; Vasu Singla; Ronen Basri; David Jacobs
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks. (81%)Yudi Dong; Huaxia Wang; Yu-Dong Yao
On the effectiveness of adversarial training against common corruptions. (67%)Klim Kireev; Maksym Andriushchenko; Nicolas Flammarion
Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations. (64%)Yu-Lin Tsai; Chia-Yi Hsu; Chia-Mu Yu; Pin-Yu Chen
2021-03-02
Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training. (99%)Dorjan Hitaj; Giulio Pagnotta; Iacopo Masi; Luigi V. Mancini
A Survey On Universal Adversarial Attack. (99%)Chaoning Zhang; Philipp Benz; Chenguo Lin; Adil Karjauv; Jing Wu; In So Kweon
Online Adversarial Attacks. (99%)Andjela Mladenovic; Avishek Joey Bose; Hugo Berard; William L. Hamilton; Simon Lacoste-Julien; Pascal Vincent; Gauthier Gidel
Adversarial Examples for Unsupervised Machine Learning Models. (98%)Chia-Yi Hsu; Pin-Yu Chen; Songtao Lu; Sijia Liu; Chia-Mu Yu
ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples. (97%)Mingfu Xue; Shichang Sun; Can He; Yushu Zhang; Jian Wang; Weiqiang Liu
DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers. (97%)Colin Paterson; Haoze Wu; John Grese; Radu Calinescu; Corina S. Pasareanu; Clark Barrett
Fixing Data Augmentation to Improve Adversarial Robustness. (69%)Sylvestre-Alvise Rebuffi; Sven Gowal; Dan A. Calian; Florian Stimberg; Olivia Wiles; Timothy Mann
A Brief Survey on Deep Learning Based Data Hiding. (54%)Chaoning Zhang; Chenguo Lin; Philipp Benz; Kejiang Chen; Weiming Zhang; In So Kweon
Group-wise Inhibition based Feature Regularization for Robust Classification. (16%)Haozhe Liu; Haoqian Wu; Weicheng Xie; Feng Liu; Linlin Shen
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations. (1%)Eitan Borgnia; Jonas Geiping; Valeriia Cherepanova; Liam Fowl; Arjun Gupta; Amin Ghiasi; Furong Huang; Micah Goldblum; Tom Goldstein
2021-03-01
Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World. (99%)Jiakai Wang; Aishan Liu; Zixin Yin; Shunchang Liu; Shiyu Tang; Xianglong Liu
Brain Programming is Immune to Adversarial Attacks: Towards Accurate and Robust Image Classification using Symbolic Learning. (99%)Gerardo Ibarra-Vazquez; Gustavo Olague; Mariana Chan-Ley; Cesar Puente; Carlos Soubervielle-Montalvo
Smoothness Analysis of Adversarial Training. (98%)Sekitoshi Kanai; Masanori Yamada; Hiroshi Takahashi; Yuki Yamanaka; Yasutoshi Ida
Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis. (96%)Mahsa Paknezhad; Cuong Phuc Ngo; Amadeus Aristo Winarto; Alistair Cheong; Beh Chuen Yang; Wu Jiayang; Lee Hwee Kuan
Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers. (93%)Francesco Croce; Matthias Hein
Adversarial training in communication constrained federated learning. (87%)Devansh Shah; Parijat Dube; Supriyo Chakraborty; Ashish Verma
Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms. (82%)Miguel Á. Carreira-Perpiñán; Suryabhan Singh Hada
Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack. (70%)Shahroz Tariq; Sowon Jeon; Simon S. Woo
A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness. (64%)Jacob Abernethy; Pranjal Awasthi; Satyen Kale
Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. (62%)Wei Dai; Daniel Berleant
2021-02-28
Model-Agnostic Defense for Lane Detection against Adversarial Attack. (98%)Henry Xu; An Ju; David Wagner
Robust learning under clean-label attack. (22%)Avrim Blum; Steve Hanneke; Jian Qian; Han Shao
2021-02-27
Effective Universal Unrestricted Adversarial Attacks using a MOE Approach. (98%)A. E. Baia; G. Di Bari; V. Poggioni
Tiny Adversarial Mulit-Objective Oneshot Neural Architecture Search. (93%)Guoyang Xie; Jinbao Wang; Guo Yu; Feng Zheng; Yaochu Jin
End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering. (73%)Ruochen Jiao; Hengyi Liang; Takami Sato; Junjie Shen; Qi Alfred Chen; Qi Zhu
Adversarial Information Bottleneck. (33%)Penglong Zhai; Shihua Zhang
Neuron Coverage-Guided Domain Generalization. (2%)Chris Xing Tian; Haoliang Li; Xiaofei Xie; Yang Liu; Shiqi Wang
2021-02-26
What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors.Jonas Geiping; Liam Fowl; Gowthami Somepalli; Micah Goldblum; Michael Moeller; Tom Goldstein
NEUROSPF: A tool for the Symbolic Analysis of Neural Networks. (68%)Muhammad Usman; Yannic Noller; Corina Pasareanu; Youcheng Sun; Divya Gopinath
2021-02-25
On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adverserial Perturbations.Chi Zhang; Jinghan Jia; Burhaneddin Yaman; Steen Moeller; Sijia Liu; Mingyi Hong; Mehmet Akçakaya
Do Input Gradients Highlight Discriminative Features?Harshay Shah; Prateek Jain; Praneeth Netrapalli
Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks.Huichen Li; Linyi Li; Xiaojun Xu; Xiaolu Zhang; Shuang Yang; Bo Li
Understanding Robustness in Teacher-Student Setting: A New Perspective.Zhuolin Yang; Zhaoxi Chen; Tiffany Cai; Xinyun Chen; Bo Li; Yuandong Tian
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints.Maura Pintor; Fabio Roli; Wieland Brendel; Battista Biggio
Cybersecurity Threats in Connected and Automated Vehicles based Federated Learning Systems.Ranwa Al Mallah; Godwin Badu-Marfo; Bilal Farooq
A statistical framework for efficient out of distribution detection in deep neural networks. (1%)Matan Haroush; Tzviel Frostig; Ruth Heller; Daniel Soudry
2021-02-24
Confidence Calibration with Bounded Error Using Transformations.Sooyong Jang; Radoslav Ivanov; Insup Lee; James Weimer
Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks.Apoorva Sharma; Navid Azizan; Marco Pavone
Robust SleepNets.Yigit Alparslan; Edward Kim
Multiplicative Reweighting for Robust Neural Network Optimization.Noga Bar; Tomer Koren; Raja Giryes
Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis.Leo Schwinn; An Nguyen; René Raab; Leon Bungert; Daniel Tenbrinck; Dario Zanca; Martin Burger; Bjoern Eskofier
Graphfool: Targeted Label Adversarial Attack on Graph Embedding.Jinyin Chen; Xiang Lin; Dunjie Zhang; Wenrong Jiang; Guohan Huang; Hui Xiong; Yun Xiang
2021-02-23
The Sensitivity of Word Embeddings-based Author Detection Models to Semantic-preserving Adversarial Perturbations.Jeremiah Duncan; Fabian Fallas; Chris Gropp; Emily Herron; Maria Mahbub; Paula Olaya; Eduardo Ponce; Tabitha K. Samuel; Daniel Schultz; Sudarshan Srinivasan; Maofeng Tang; Viktor Zenkov; Quan Zhou; Edmon Begoli
Rethinking Natural Adversarial Examples for Classification Models.Xiao Li; Jianmin Li; Ting Dai; Jie Shi; Jun Zhu; Xiaolin Hu
Automated Discovery of Adaptive Attacks on Adversarial Defenses.Chengyuan Yao; Pavol Bielik; Petar Tsankov; Martin Vechev
Adversarial Robustness with Non-uniform Perturbations.Ecenaz Erdemir; Jeffrey Bickford; Luca Melis; Sergul Aydore
Non-Singular Adversarial Robustness of Neural Networks.Yu-Lin Tsai; Chia-Yi Hsu; Chia-Mu Yu; Pin-Yu Chen
Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation.Jinfeng Li; Tianyu Du; Xiangyu Liu; Rong Zhang; Hui Xue; Shouling Ji
Adversarial Examples Detection beyond Image Space.Kejiang Chen; Yuefeng Chen; Hang Zhou; Chuan Qin; Xiaofeng Mao; Weiming Zhang; Nenghai Yu
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models.Liuqiao Chen; Hu Wang; Benjamin Zi Hao Zhao; Minhui Xue; Haifeng Qian
2021-02-22
On the robustness of randomized classifiers to adversarial examples.Rafael Pinot; Laurent Meunier; Florian Yger; Cédric Gouy-Pailler; Yann Chevaleyre; Jamal Atif
Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks.Ginevra Carbone; Guido Sanguinetti; Luca Bortolussi
Man-in-The-Middle Attacks and Defense in a Power System Cyber-Physical Testbed.Patrick Wlazlo; Abhijeet Sahu; Zeyu Mao; Hao Huang; Ana Goulart; Katherine Davis; Saman Zonouz
Sandwich Batch Normalization: A Drop-In Replacement for Feature Distribution Heterogeneity.Xinyu Gong; Wuyang Chen; Tianlong Chen; Zhangyang Wang
2021-02-21
The Effects of Image Distribution and Task on Adversarial Robustness.Owen Kunhardt; Arturo Deza; Tomaso Poggio
A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization.HanQin Cai; Yuchen Lou; Daniel McKenzie; Wotao Yin
Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes. (1%)Sara Sangalli; Ertunc Erdil; Andreas Hoetker; Olivio Donati; Ender Konukoglu
2021-02-20
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning.Ren Wang; Kaidi Xu; Sijia Liu; Pin-Yu Chen; Tsui-Wei Weng; Chuang Gan; Meng Wang
Measuring $\ell_\infty$ Attacks by the $\ell_2$ Norm.Sizhe Chen; Qinghua Tao; Zhixing Ye; Xiaolin Huang
2021-02-19
A PAC-Bayes Analysis of Adversarial Robustness.Guillaume Vidot; Paul Viallard; Amaury Habrard; Emilie Morvant
Effective and Efficient Vote Attack on Capsule Networks.Jindong Gu; Baoyuan Wu; Volker Tresp
2021-02-18
Random Projections for Improved Adversarial Robustness.Ginevra Carbone; Guido Sanguinetti; Luca Bortolussi
Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks.Matthew Ciolino; Josh Kalin; David Noever
Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications.Leonard Berrada; Sumanth Dathathri; Krishnamurthy Dvijotham; Robert Stanforth; Rudy Bunel; Jonathan Uesato; Sven Gowal; M. Pawan Kumar
Center Smoothing: Provable Robustness for Functions with Metric-Space Outputs.Aounon Kumar; Tom Goldstein
2021-02-17
Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids.Jiangnan Li; Yingyuan Yang; Jinyuan Stella Sun; Kevin Tomsovic; Hairong Qi
Improving Hierarchical Adversarial Robustness of Deep Neural Networks.Avery Ma; Aladin Virmaux; Kevin Scaman; Juwei Lu
Consistent Non-Parametric Methods for Maximizing Robustness.Robi Bhattacharjee; Kamalika Chaudhuri
Bridging the Gap Between Adversarial Robustness and Optimization Bias.Fartash Faghri; Sven Gowal; Cristina Vasconcelos; David J. Fleet; Fabian Pedregosa; Nicolas Le Roux
2021-02-16
Globally-Robust Neural Networks.Klas Leino; Zifan Wang; Matt Fredrikson
A Law of Robustness for Weight-bounded Neural Networks.Hisham Husain; Borja Balle
Just Noticeable Difference for Machine Perception and Generation of Regularized Adversarial Images with Minimal Perturbation.Adil Kaan Akan; Emre Akbas; Fatos T. Yarman Vural
2021-02-15
Data Profiling for Adversarial Training: On the Ruin of Problematic Data.Chengyu Dong; Liyuan Liu; Jingbo Shang
Certified Robustness to Programmable Transformations in LSTMs.Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Generating Structured Adversarial Attacks Using Frank-Wolfe Method.Ehsan Kazemi; Thomas Kerdreux; Liquang Wang
Universal Adversarial Examples and Perturbations for Quantum Classifiers.Weiyuan Gong; Dong-Ling Deng
Low Curvature Activations Reduce Overfitting in Adversarial Training.Vasu Singla; Sahil Singla; David Jacobs; Soheil Feizi
And/or trade-off in artificial neurons: impact on adversarial robustness.Alessandro Fontana
Certifiably Robust Variational Autoencoders.Ben Barrett; Alexander Camuto; Matthew Willetts; Tom Rainforth
2021-02-14
Guided Interpolation for Adversarial Training.Chen Chen; Jingfeng Zhang; Xilie Xu; Tianlei Hu; Gang Niu; Gang Chen; Masashi Sugiyama
Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS.Felix Olowononi; Danda B. Rawat; Chunmei Liu
Exploring Adversarial Robustness of Deep Metric Learning.Thomas Kobber Panum; Zi Wang; Pengyu Kan; Earlence Fernandes; Somesh Jha
Adversarial Attack on Network Embeddings via Supervised Network Poisoning.Viresh Gupta; Tanmoy Chakraborty
Perceptually Constrained Adversarial Attacks.Muhammad Zaid Hameed; Andras Gyorgy
CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification.Mingu Kang; Trung Quang Tran; Seungju Cho; Daeyoung Kim
Cross-modal Adversarial Reprogramming.Paarth Neekhara; Shehzeen Hussain; Jinglong Du; Shlomo Dubnov; Farinaz Koushanfar; Julian McAuley
2021-02-13
Mixed Nash Equilibria in the Adversarial Examples Game.Laurent Meunier; Meyer Scetbon; Rafael Pinot; Jamal Atif; Yann Chevaleyre
Adversarial defense for automatic speaker verification by cascaded self-supervised learning models.Haibin Wu; Xu Li; Andy T. Liu; Zhiyong Wu; Helen Meng; Hung-yi Lee
2021-02-12
UAVs Path Deviation Attacks: Survey and Research Challenges.Francesco Betti Sorbelli; Mauro Conti; Cristina M. Pinotti; Giulio Rigoni
Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective.Chaoning Zhang; Philipp Benz; Adil Karjauv; In So Kweon
Universal Adversarial Perturbations for Malware.Raphael Labaca-Castro; Luis Muñoz-González; Feargus Pendlebury; Gabi Dreo Rodosek; Fabio Pierazzi; Lorenzo Cavallaro
Certified Defenses: Why Tighter Relaxations May Hurt Training. (13%)Nikola Jovanović; Mislav Balunović; Maximilian Baader; Martin Vechev
2021-02-11
Adversarially robust deepfake media detection using fused convolutional neural network predictions.Sohail Ahmed Khan; Alessandro Artusi; Hang Dai
Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy.Dylan Slack; Nathalie Rauschmayr; Krishnaram Kenthapadi
RobOT: Robustness-Oriented Testing for Deep Learning Systems.Jingyi Wang; Jialuo Chen; Youcheng Sun; Xingjun Ma; Dongxia Wang; Jun Sun; Peng Cheng
2021-02-10
RoBIC: A benchmark suite for assessing classifiers robustness.Thibault Maho; Benoît Bonnet; Teddy Furon; Erwan Le Merrer
Meta Federated Learning.Omid Aramoon; Pin-Yu Chen; Gang Qu; Yuan Tian
Adversarial Robustness: What fools you makes you stronger.Grzegorz Głuch; Rüdiger Urbanke
CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection.Hanshu Yan; Jingfeng Zhang; Gang Niu; Jiashi Feng; Vincent Y. F. Tan; Masashi Sugiyama
Dompteur: Taming Audio Adversarial Examples.Thorsten Eisenhofer; Lea Schönherr; Joel Frank; Lars Speckemeier; Dorothea Kolossa; Thorsten Holz
Enhancing Real-World Adversarial Patches through 3D Modeling of Complex Target Scenes.Yael Mathov; Lior Rokach; Yuval Elovici
Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons.Bohang Zhang; Tianle Cai; Zhou Lu; Di He; Liwei Wang
Bayesian Inference with Certifiable Adversarial Robustness.Matthew Wicker; Luca Laurenti; Andrea Patane; Zhoutong Chen; Zheng Zhang; Marta Kwiatkowska
2021-02-09
Target Training Does Adversarial Training Without Adversarial Samples.Blerta Lindqvist
Security and Privacy for Artificial Intelligence: Opportunities and Challenges.Ayodeji Oseni; Nour Moustafa; Helge Janicke; Peng Liu; Zahir Tari; Athanasios Vasilakos
"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models.Sahar Abdelnabi; Mario Fritz
Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers.Jacob M. Springer; Melanie Mitchell; Garrett T. Kenyon
Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis.Fengting Li; Xuankai Liu; Xiaoli Zhang; Qi Li; Kun Sun; Kang Li
Making Paper Reviewing Robust to Bid Manipulation Attacks.Ruihan Wu; Chuan Guo; Felix Wu; Rahul Kidambi; Laurens van der Maaten; Kilian Q. Weinberger
Adversarially Trained Models with Test-Time Covariate Shift Adaptation.Jay Nandy; Sudipan Saha; Wynne Hsu; Mong Li Lee; Xiao Xiang Zhu
2021-02-08
Efficient Certified Defenses Against Patch Attacks on Image Classifiers.Jan Hendrik Metzen; Maksym Yatsura
A Real-time Defense against Website Fingerprinting Attacks.Shawn Shan; Arjun Nitin Bhagoji; Haitao Zheng; Ben Y. Zhao
Benford's law: what does it say on adversarial images?João G. Zago; Fabio L. Baldissera; Eric A. Antonelo; Rodrigo T. Saad
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.Omer Faruk Tuna; Ferhat Ozgur Catak; M. Taner Eskil
2021-02-07
Adversarial example generation with AdaBelief Optimizer and Crop Invariance.Bo Yang; Hengwei Zhang; Yuchen Zhang; Kaiyong Xu; Jindong Wang
Adversarial Imaging Pipelines.Buu Phan; Fahim Mannan; Felix Heide
2021-02-06
SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation.Wuxinlin Cheng; Chenhui Deng; Zhiqiang Zhao; Yaohui Cai; Zhiru Zhang; Zhuo Feng
2021-02-05
Corner Case Generation and Analysis for Safety Assessment of Autonomous Vehicles.Haowei Sun; Shuo Feng; Xintao Yan; Henry X. Liu
Model Agnostic Answer Reranking System for Adversarial Question Answering.Sagnik Majumder; Chinmoy Samant; Greg Durrett
Robust Single-step Adversarial Training with Regularizer.Lehui Xie; Yaopeng Wang; Jia-Li Yin; Ximeng Liu
Understanding the Interaction of Adversarial Training with Noisy Labels.Jianing Zhu; Jingfeng Zhang; Bo Han; Tongliang Liu; Gang Niu; Hongxia Yang; Mohan Kankanhalli; Masashi Sugiyama
Optimal Transport as a Defense Against Adversarial Attacks.Quentin Bouniot; Romaric Audigier; Angélique Loesch
2021-02-04
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks.Chong Xiang; Prateek Mittal
Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression.Masanori Yamada; Sekitoshi Kanai; Tomoharu Iwata; Tomokatsu Takahashi; Yuki Yamanaka; Hiroshi Takahashi; Atsutoshi Kumagai
Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images.Jiasong Chen; Linchen Qian; Timur Urakov; Weiyong Gu; Liang Liang
PredCoin: Defense against Query-based Hard-label Attack.Junfeng Guo; Yaswanth Yadlapalli; Thiele Lothar; Ang Li; Cong Liu
Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review.Dongrui Wu; Weili Fang; Yi Zhang; Liuqing Yang; Hanbin Luo; Lieyun Ding; Xiaodong Xu; Xiang Yu
Audio Adversarial Examples: Attacks Using Vocal Masks.Lynnette Ng; Kai Yuan Tay; Wei Han Chua; Lucerne Loke; Danqi Ye; Melissa Chua
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models.Yugeng Liu; Rui Wen; Xinlei He; Ahmed Salem; Zhikun Zhang; Michael Backes; Emiliano De Cristofaro; Mario Fritz; Yang Zhang
2021-02-03
Adversarially Robust Learning with Unknown Perturbation Sets.Omar Montasser; Steve Hanneke; Nathan Srebro
IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks.Yixiang Wang; Jiqiang Liu; Xiaolin Chang; Jelena Mišić; Vojislav B. Mišić
2021-02-02
On Robustness of Neural Semantic Parsers.Shuo Huang; Zhuang Li; Lizhen Qu; Lei Pan
Towards Robust Neural Networks via Close-loop Control.Zhuotong Chen; Qianxiao Li; Zheng Zhang
Recent Advances in Adversarial Training for Adversarial Robustness.Tao Bai; Jinqi Luo; Jun Zhao; Bihan Wen; Qian Wang
Probabilistic Trust Intervals for Out of Distribution Detection. (2%)Gagandeep Singh; Deepak Mishra
2021-02-01
Fast Training of Provably Robust Neural Networks by SingleProp.Akhilan Boopathy; Tsui-Wei Weng; Sijia Liu; Pin-Yu Chen; Gaoyuan Zhang; Luca Daniel
Towards Speeding up Adversarial Training in Latent Spaces.Yaguan Qian; Qiqi Shao; Tengteng Yao; Bin Wang; Shaoning Zeng; Zhaoquan Gu; Wassim Swaileh
Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems.Alireza Bahramali; Milad Nasr; Amir Houmansadr; Dennis Goeckel; Don Towsley
2021-01-31
Deep Deterministic Information Bottleneck with Matrix-based Entropy Functional.Xi Yu; Shujian Yu; Jose C. Principe
Towards Imperceptible Query-limited Adversarial Attacks with Perceptual Feature Fidelity Loss.Pengrui Quan; Ruiming Guo; Mani Srivastava
Admix: Enhancing the Transferability of Adversarial Attacks.Xiaosen Wang; Xuanran He; Jingdong Wang; Kun He
2021-01-30
Cortical Features for Defense Against Adversarial Audio Attacks.Ilya Kavalerov; Ruijie Zheng; Wojciech Czaja; Rama Chellappa
2021-01-29
You Only Query Once: Effective Black Box Adversarial Attacks with Minimal Repeated Queries.Devin Willmott; Anit Kumar Sahu; Fatemeh Sheikholeslami; Filipe Condessa; Zico Kolter
2021-01-28
Adversarial Machine Learning Attacks on Condition-Based Maintenance Capabilities.Hamidreza Habibollahi Najaf Abadi
Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network.B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
Increasing the Confidence of Deep Neural Networks by Coverage Analysis.Giulio Rossolini; Alessandro Biondi; Giorgio Carlo Buttazzo
Adversarial Learning with Cost-Sensitive Classes.Haojing Shen; Sihong Chen; Ran Wang; Xizhao Wang
2021-01-27
Robust Android Malware Detection System against Adversarial Attacks using Q-Learning.Hemant Rathore; Sanjay K. Sahay; Piyush Nikam; Mohit Sewak
Adversaries in Online Learning Revisited: with applications in Robust Optimization and Adversarial training.Sebastian Pokutta; Huan Xu
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling.Chris Emmery; Ákos Kádár; Grzegorz Chrupała
Meta Adversarial Training against Universal Patches.Jan Hendrik Metzen; Nicole Finnie; Robin Hutmacher
Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting.Federico Nesti; Alessandro Biondi; Giorgio Buttazzo
Improving Neural Network Robustness through Neighborhood Preserving Layers.Bingyuan Liu; Christopher Malon; Lingzhou Xue; Erik Kruus
2021-01-26
Blind Image Denoising and Inpainting Using Robust Hadamard Autoencoders.Rasika Karkare; Randy Paffenroth; Gunjan Mahindre
Property Inference From Poisoning.Melissa Chase; Esha Ghosh; Saeed Mahloujifar
Adversarial Vulnerability of Active Transfer Learning.Nicolas M. Müller; Konstantin Böttinger
Introducing and assessing the explainable AI (XAI) method: SIDU.Satya M. Muddamsetty; Mohammad N. S. Jahromi; Andreea E. Ciontos; Laura M. Fenoy; Thomas B. Moeslund
SkeletonVis: Interactive Visualization for Understanding Adversarial Attacks on Human Action Recognition Models.Haekyu Park; Zijie J. Wang; Nilaksh Das; Anindya S. Paul; Pruthvi Perumalla; Zhiyan Zhou; Duen Horng Chau
The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs.Xinwei Zhao; Matthew C. Stamm
Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers.Xinwei Zhao; Matthew C. Stamm
Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems.Utku Ozbulak; Baptist Vandersmissen; Azarakhsh Jalalvand; Ivo Couckuyt; Arnout Van Messem; Wesley De Neve
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models.Mazen Abdelfattah; Kaiwen Yuan; Z. Jane Wang; Rabab Ward
2021-01-25
Diverse Adversaries for Mitigating Bias in Training.Xudong Han; Timothy Baldwin; Trevor Cohn
They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors.Sebastian Köhler; Giulio Lovisotto; Simon Birnbach; Richard Baker; Ivan Martinovic
Generalizing Adversarial Examples by AdaBelief Optimizer.Yixiang Wang; Jiqiang Liu; Xiaolin Chang
Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning.Renjue Li; Pengfei Yang; Cheng-Chao Huang; Youcheng Sun; Bai Xue; Lijun Zhang
Few-Shot Website Fingerprinting Attack.Mantun Chen; Yongjun Wang; Zhiquan Qin; Xiatian Zhu
Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning.Anh Bui; Trung Le; He Zhao; Paul Montague; Seyit Camtepe; Dinh Phung
2021-01-23
A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network.Xinwei Zhao; Chen Chen; Matthew C. Stamm
A Comprehensive Evaluation Framework for Deep Model Robustness.Aishan Liu; Xianglong Liu; Jun Guo; Jiakai Wang; Yuqing Ma; Ze Zhao; Xinghai Gao; Gang Xiao
Error Diffusion Halftoning Against Adversarial Examples.Shao-Yuan Lo; Vishal M. Patel
2021-01-22
Partition-Based Convex Relaxations for Certifying the Robustness of ReLU Neural Networks.Brendon G. Anderson; Ziye Ma; Jingqi Li; Somayeh Sojoudi
Online Adversarial Purification based on Self-Supervision.Changhao Shi; Chester Holtz; Gal Mishne
Generating Black-Box Adversarial Examples in Sparse Domain.Hadi Zanddizari; Behnam Zeinali; J. Morris Chang
Adaptive Neighbourhoods for the Discovery of Adversarial Examples.Jay Morgan; Adeline Paiement; Arno Pauly; Monika Seisenberger
2021-01-21
Self-Adaptive Training: Bridging the Supervised and Self-Supervised Learning.Lang Huang; Chao Zhang; Hongyang Zhang
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary.Huan Zhang; Hongge Chen; Duane Boning; Cho-Jui Hsieh
Adv-OLM: Generating Textual Adversaries via OLM.Vijit Malik; Ashwani Bhat; Ashutosh Modi
A Person Re-identification Data Augmentation Method with Adversarial Defense Effect.Yunpeng Gong; Zhiyong Zeng; Liwen Chen; Yifan Luo; Bin Weng; Feng Ye
Adversarial Attacks and Defenses for Speaker Identification Systems.Sonal Joshi; Jesús Villalba; Piotr Żelasko; Laureano Moro-Velázquez; Najim Dehak
A general multi-modal data learning method for Person Re-identification. (78%)Yunpeng Gong
2021-01-20
Fooling thermal infrared pedestrian detectors in real world using small bulbs.Xiaopei Zhu; Xiao Li; Jianmin Li; Zheyao Wang; Xiaolin Hu
Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data.Francesco Cartella; Orlando Anunciacao; Yuki Funabiki; Daisuke Yamaguchi; Toru Akishita; Olivier Elshocht
Invariance, encodings, and generalization: learning identity effects with neural networks.S. Brugiapaglia; M. Liu; P. Tupper
2021-01-19
LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition.Valeriia Cherepanova; Micah Goldblum; Harrison Foley; Shiyuan Duan; John Dickerson; Gavin Taylor; Tom Goldstein
A Search-Based Testing Framework for Deep Neural Networks of Source Code Embedding.Maryam Vahdat Pour; Zhuo Li; Lei Ma; Hadi Hemmati
PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack.Jie Wang; Zhaoxia Yin; Jin Tang; Jing Jiang; Bin Luo
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization.Jie Wang; Zhaoxia Yin; Jing Jiang; Yang Du
2021-01-18
What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space.Shihao Zhao; Xingjun Ma; Yisen Wang; James Bailey; Bo Li; Yu-Gang Jiang
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks. (1%)Zhengyan Zhang; Guangxuan Xiao; Yongwei Li; Tian Lv; Fanchao Qi; Zhiyuan Liu; Yasheng Wang; Xin Jiang; Maosong Sun
2021-01-17
Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions.Nodens Koren; Qiuhong Ke; Yisen Wang; James Bailey; Xingjun Ma
GraphAttacker: A General Multi-Task GraphAttack Framework.Jinyin Chen; Dunjie Zhang; Zhaoyan Ming; Kejie Huang; Wenrong Jiang; Chen Cui
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving.James Tu; Huichen Li; Xinchen Yan; Mengye Ren; Yun Chen; Ming Liang; Eilyan Bitar; Ersin Yumer; Raquel Urtasun
2021-01-16
Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks.Jia Liu; Yaochu Jin
Adversarial Attacks On Multi-Agent Communication.James Tu; Tsunhsuan Wang; Jingkang Wang; Sivabalan Manivasagam; Mengye Ren; Raquel Urtasun
2021-01-15
Fundamental Tradeoffs in Distributionally Adversarial Training.Mohammad Mehrabi; Adel Javanmard; Ryan A. Rossi; Anup Rao; Tung Mai
Black-box Adversarial Attacks in Autonomous Vehicle Technology.K Naveen Kumar; C Vishnu; Reshmi Mitra; C Krishna Mohan
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds.Bogdan Georgiev; Lukas Franken; Mayukh Mukherjee
Mining Data Impressions from Deep Models as Substitute for the Unavailable Training Data.Gaurav Kumar Nayak; Konda Reddy Mopuri; Saksham Jain; Anirban Chakraborty
2021-01-14
Context-Aware Image Denoising with Auto-Threshold Canny Edge Detection to Suppress Adversarial Perturbation.Li-Yun Wang; Yeganeh Jalalpour; Wu-chi Feng
Robusta: Robust AutoML for Feature Selection via Reinforcement Learning.Xiaoyang Wang; Bo Li; Yibo Zhang; Bhavya Kailkhura; Klara Nahrstedt
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks.Yige Li; Xixiang Lyu; Nodens Koren; Lingjuan Lyu; Bo Li; Xingjun Ma
2021-01-13
Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series.Pradeep Rathore; Arghya Basak; Sri Harsha Nistala; Venkataramana Runkana
Image Steganography based on Iteratively Adversarial Samples of A Synchronized-directions Sub-image.Xinghong Qin; Shunquan Tan; Bin Li; Weixuan Tang; Jiwu Huang
2021-01-12
Robustness Gym: Unifying the NLP Evaluation Landscape.Karan Goel; Nazneen Rajani; Jesse Vig; Samson Tan; Jason Wu; Stephan Zheng; Caiming Xiong; Mohit Bansal; Christopher Ré
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps.Yujin Huang; Han Hu; Chunyang Chen
Random Transformation of Image Brightness for Adversarial Attack.Bo Yang; Kaiyong Xu; Hengjun Wang; Hengwei Zhang
On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks.Junyoung Byun; Hyojun Go; Changick Kim
2021-01-11
The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing.Andreas Bär; Jonas Löhdefink; Nikhil Kapoor; Serin J. Varghese; Fabian Hüger; Peter Schlicht; Tim Fingscheidt
2021-01-10
Adversarially Robust and Explainable Model Compression with On-Device Personalization for Text Classification.Yao Qiang; Supriya Tumkur Suresh Kumar; Marco Brocanelli; Dongxiao Zhu
2021-01-08
Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks.Marissa Dotter; Sherry Xie; Keith Manville; Josh Harguess; Colin Busho; Mikel Rodriguez
DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning.Olakunle Ibitoye; M. Omair Shafiq; Ashraf Matrawy
Exploring Adversarial Fake Images on Face Manifold.Dongze Li; Wei Wang; Hongxing Fan; Jing Dong
2021-01-07
The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks.Arno Blaas; Stephen J. Roberts
Robust Text CAPTCHAs Using Adversarial Examples.Rulin Shao; Zhouxing Shi; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh
2021-01-06
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients.Alessandro Cappelli; Ruben Ohana; Julien Launay; Laurent Meunier; Iacopo Poli; Florent Krzakala
Understanding the Error in Evaluating Adversarial Robustness.Pengfei Xia; Ziqiang Li; Hongjing Niu; Bin Li
2021-01-05
Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks.Rachel Sterneck; Abhishek Moitra; Priyadarshini Panda
2021-01-04
Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks.Yanghao Zhang; Fu Wang; Wenjie Ruan
Local Competition and Stochasticity for Adversarial Robustness in Deep Learning.Konstantinos P. Panousis; Sotirios Chatzis; Antonios Alexos; Sergios Theodoridis
Local Black-box Adversarial Attacks: A Query Efficient Approach.Tao Xiang; Hangcheng Liu; Shangwei Guo; Tianwei Zhang; Xiaofeng Liao
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead.Muhammad Shafique; Mahum Naseer; Theocharis Theocharides; Christos Kyrkou; Onur Mutlu; Lois Orosa; Jungwook Choi
2021-01-02
Improving DGA-Based Malicious Domain Classifiers for Malware Defense with Adversarial Machine Learning.Ibrahim Yilmaz; Ambareen Siraj; Denis Ulybyshev
2020-12-31
Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning.Chenglei Si; Zhengyan Zhang; Fanchao Qi; Zhiyuan Liu; Yasheng Wang; Qun Liu; Maosong Sun
Patch-wise++ Perturbation for Adversarial Targeted Attacks.Lianli Gao; Qilong Zhang; Jingkuan Song; Heng Tao Shen
2020-12-30
Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers.Krishna Kanth Nakka; Mathieu Salzmann
Beating Attackers At Their Own Games: Adversarial Example Detection Using Adversarial Gradient Directions.Yuhang Wu; Sunpreet S. Arora; Yanhong Wu; Hao Yang
2020-12-29
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization.Renya Daimo; Satoshi Ono; Takahiro Suzuki
Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces.Linyang Li; Yunfan Shao; Demin Song; Xipeng Qiu; Xuanjing Huang
Improving Adversarial Robustness in Weight-quantized Neural Networks.Chang Song; Elias Fallon; Hai Li
With False Friends Like These, Who Can Have Self-Knowledge?Lue Tao; Songcan Chen
Generating Natural Language Attacks in a Hard Label Black Box Setting.Rishabh Maheshwary; Saket Maheshwary; Vikram Pudi
2020-12-28
Enhanced Regularizers for Attributional Robustness.Anindya Sarkar; Anirban Sarkar; Vineeth N Balasubramanian
Analysis of Dominant Classes in Universal Adversarial Perturbations.Jon Vadillo; Roberto Santana; Jose A. Lozano
2020-12-27
Person Re-identification with Adversarial Triplet Embedding.Xinglu Wang
My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism.Swapnil Parekh; Yaman Kumar Singla; Changyou Chen; Junyi Jessy Li; Rajiv Ratn Shah
2020-12-26
Sparse Adversarial Attack to Object Detection.Jiayu Bao
Assessment of the Relative Importance of different hyper-parameters of LSTM for an IDS.Mohit Sewak; Sanjay K. Sahay; Hemant Rathore
2020-12-25
Robustness, Privacy, and Generalization of Adversarial Training.Fengxiang He; Shaopeng Fu; Bohan Wang; Dacheng Tao
A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning.Ahmadreza Jeddi; Mohammad Javad Shafiee; Alexander Wong
2020-12-24
A Context Aware Approach for Generating Natural Language Attacks.Rishabh Maheshwary; Saket Maheshwary; Vikram Pudi
Exploring Adversarial Examples via Invertible Neural Networks.Ruqi Bai; Saurabh Bagchi; David I. Inouye
Improving the Certified Robustness of Neural Networks via Consistency Regularization.Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Adversarial Momentum-Contrastive Pre-Training.Cong Xu; Min Yang
Learning Robust Representation for Clustering through Locality Preserving Variational Discriminative Network.Ruixuan Luo; Wei Li; Zhiyuan Zhang; Ruihan Bao; Keiko Harimoto; Xu Sun
2020-12-23
The Translucent Patch: A Physical and Universal Attack on Object Detectors.Alon Zolfi; Moshe Kravchik; Yuval Elovici; Asaf Shabtai
Gradient-Free Adversarial Attacks for Bayesian Neural Networks.Matthew Yuan; Matthew Wicker; Luca Laurenti
SCOPE CPS: Secure Compiling of PLCs in Cyber-Physical Systems.Eyasu Getahun Chekole; Martin Ochoa; Sudipta Chattopadhyay
Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems.Moshe Kravchik; Battista Biggio; Asaf Shabtai
2020-12-22
Learning to Initialize Gradient Descent Using Gradient Descent.Kartik Ahuja; Amit Dhurandhar; Kush R. Varshney
Unadversarial Examples: Designing Objects for Robust Vision.Hadi Salman; Andrew Ilyas; Logan Engstrom; Sai Vemprala; Aleksander Madry; Ashish Kapoor
Multi-shot NAS for Discovering Adversarially Robust Convolutional Neural Architectures at Targeted Capacities.Xuefei Ning; Junbo Zhao; Wenshuo Li; Tianchen Zhao; Huazhong Yang; Yu Wang
On Frank-Wolfe Optimization for Adversarial Robustness and Interpretability.Theodoros Tsiligkaridis; Jay Roberts
2020-12-21
Genetic Adversarial Training of Decision Trees.Francesco Ranzato; Marco Zanella
Incremental Verification of Fixed-Point Implementations of Neural Networks.Luiz Sena; Erickson Alves; Iury Bessa; Eddie Filho; Lucas Cordeiro
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring.Chenchen Zhao; Hao Li
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks.Chenchen Zhao; Hao Li
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification.Siyuan Cheng; Yingqi Liu; Shiqing Ma; Xiangyu Zhang
Self-Progressing Robust Training.Minhao Cheng; Pin-Yu Chen; Sijia Liu; Shiyu Chang; Cho-Jui Hsieh; Payel Das
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition.Shoma Ishida; Satoshi Ono
Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines.Aidan Kehoe; Peter Wittek; Yanbo Xue; Alejandro Pozas-Kerstjens
On Success and Simplicity: A Second Look at Transferable Targeted Attacks.Zhengyu Zhao; Zhuoran Liu; Martha Larson
Learning from What We Know: How to Perform Vulnerability Prediction using Noisy Historical Data. (1%)Aayush Garg; Renzo Degiovanni; Matthieu Jimenez; Maxime Cordy; Mike Papadakis; Yves Le Traon
2020-12-20
Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks.Jayendra Kantipudi; Shiv Ram Dubey; Soumendu Chakraborty
2020-12-19
Sample Complexity of Adversarially Robust Linear Classification on Separated Data.Robi Bhattacharjee; Somesh Jha; Kamalika Chaudhuri
2020-12-18
Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks.Kieran Browne; Ben Swift
ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries.Jinyin Chen; Zhen Wang; Haibin Zheng; Jun Xiao; Zhaoyan Ming
AdvExpander: Generating Natural Language Adversarial Examples by Expanding Text.Zhihong Shao; Zitao Liu; Jiyong Zhang; Zhongqin Wu; Minlie Huang
Adversarially Robust Estimate and Risk Analysis in Linear Regression.Yue Xing; Ruizhi Zhang; Guang Cheng
RAILS: A Robust Adversarial Immune-inspired Learning System.Ren Wang; Tianqi Chen; Stephen Lindsly; Alnawaz Rehemtulla; Alfred Hero; Indika Rajapakse
Efficient Training of Robust Decision Trees Against Adversarial Examples.Daniël Vos; Sicco Verwer
On the human-recognizability phenomenon of adversarially trained deep image classifiers.Jonathan Helland; Nathan VanHoudnos
2020-12-17
Characterizing the Evasion Attackability of Multi-label Classifiers.Zhuo Yang; Yufei Han; Xiangliang Zhang
A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks.Qingsong Yao; Zecheng He; Yi Lin; Kai Ma; Yefeng Zheng; S. Kevin Zhou
2020-12-16
On the Limitations of Denoising Strategies as Adversarial Defenses.Zhonghan Niu; Zhaoxi Chen; Linyi Li; Yubin Yang; Bo Li; Jinfeng Yi
2020-12-15
FoggySight: A Scheme for Facial Lookup Privacy.Ivan Evtimov; Pascal Sturmfels; Tadayoshi Kohno
FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems.Lu Chen; Jiao Sun; Wei Xu
Amata: An Annealing Mechanism for Adversarial Training Acceleration.Nanyang Ye; Qianxiao Li; Xiao-Yun Zhou; Zhanxing Zhu
2020-12-14
Disentangled Information Bottleneck.Ziqi Pan; Li Niu; Jianfu Zhang; Liqing Zhang
Adaptive Verifiable Training Using Pairwise Class Similarity.Shiqi Wang; Kevin Eykholt; Taesung Lee; Jiyong Jang; Ian Molloy
Robustness Threats of Differential Privacy.Nurislam Tursynbek; Aleksandr Petiushko; Ivan Oseledets
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios.Hassan Ali; Surya Nepal; Salil S. Kanhere; Sanjay Jha
Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints.Xin Li; Xiangrui Li; Deng Pan; Dongxiao Zhu
Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model.Mohammadreza Ebrahimi; Ning Zhang; James Hu; Muhammad Taqi Raza; Hsinchun Chen
Contrastive Learning with Adversarial Perturbations for Conditional Text Generation.Seanie Lee; Dong Bok Lee; Sung Ju Hwang
2020-12-13
Achieving Adversarial Robustness Requires An Active Teacher.Chao Ma; Lexing Ying
2020-12-12
Query-free Black-box Adversarial Attacks on Graphs.Jiarong Xu; Yizhou Sun; Xin Jiang; Yanhao Wang; Yang Yang; Chunping Wang; Jiangang Lu
2020-12-11
Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning.Omer Faruk Tuna; Ferhat Ozgur Catak; M. Taner Eskil
Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis.Nathan Drenkow; Neil Fendley; Philippe Burlina
Analyzing and Improving Adversarial Training for Generative Modeling. (86%)Xuwang Yin; Shiying Li; Gustavo K. Rohde
2020-12-10
GNNUnlock: Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking.Lilas Alrahis; Satwik Patnaik; Faiq Khalid; Muhammad Abdullah Hanif; Hani Saleh; Muhammad Shafique; Ozgur Sinanoglu
Next Wave Artificial Intelligence: Robust, Explainable, Adaptable, Ethical, and Accountable.Odest Chadwicke Jenkins; Daniel Lopresti; Melanie Mitchell
DSRNA: Differentiable Search of Robust Neural Architectures.Ramtin Hosseini; Xingyi Yang; Pengtao Xie
I-GCN: Robust Graph Convolutional Network via Influence Mechanism.Haoxi Zhan; Xiaobing Pei
An Empirical Review of Adversarial Defenses.Ayush Goel
Robustness and Transferability of Universal Attacks on Compressed Models.Alberto G. Matachana; Kenneth T. Co; Luis Muñoz-González; David Martinez; Emil C. Lupu
Geometric Adversarial Attacks and Defenses on 3D Point Clouds.Itai Lang; Uriel Kotlicki; Shai Avidan
SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers.Bingyao Huang; Haibin Ling
2020-12-09
Generating Out of Distribution Adversarial Attack using Latent Space Poisoning.Ujjwal Upadhyay; Prerana Mukherjee
Detection of Adversarial Supports in Few-shot Classifiers Using Self-Similarity and Filtering.Yi Xiang Marcus Tan; Penny Chong; Jiamei Sun; Ngai-Man Cheung; Yuval Elovici; Alexander Binder
Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters.Rida El-Allami; Alberto Marchisio; Muhammad Shafique; Ihsen Alouani
Composite Adversarial Attacks.Xiaofeng Mao; Yuefeng Chen; Shuhui Wang; Hang Su; Yuan He; Hui Xue
2020-12-08
Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective.Jingwei Sun; Ang Li; Binghui Wang; Huanrui Yang; Hai Li; Yiran Chen
On 1/n neural representation and robustness.Josue Nassar; Piotr Aleksander Sokol; SueYeon Chung; Kenneth D. Harris; Il Memming Park
Locally optimal detection of stochastic targeted universal adversarial perturbations.Amish Goel; Pierre Moulin
A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models.Mohammed Hassanin; Nour Moustafa; Murat Tahtali
Using Feature Alignment can Improve Clean Average Precision and Adversarial Robustness in Object Detection.Weipeng Xu; Hongcheng Huang
EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation.Qi Zhou; Haipeng Chen; Yitao Zheng; Zhen Wang
Overcomplete Representations Against Adversarial Videos.Shao-Yuan Lo; Jeya Maria Jose Valanarasu; Vishal M. Patel
Mitigating the Impact of Adversarial Attacks in Very Deep Networks.Mohammed Hassanin; Ibrahim Radwan; Nour Moustafa; Murat Tahtali; Neeraj Kumar
Reinforcement Based Learning on Classification Task Could Yield Better Generalization and Adversarial Accuracy.Shashi Kant Gupta
Poisoning Semi-supervised Federated Learning via Unlabeled Data: Attacks and Defenses. (95%)Yi Liu; Xingliang Yuan; Ruihui Zhao; Cong Wang; Dusit Niyato; Yefeng Zheng
Data Dependent Randomized Smoothing. (1%)Motasem Alfarra; Adel Bibi; Philip H. S. Torr; Bernard Ghanem
2020-12-07
A Singular Value Perspective on Model Robustness.Malhar Jere; Maghav Kumar; Farinaz Koushanfar
Backpropagating Linearly Improves Transferability of Adversarial Examples.Yiwen Guo; Qizhang Li; Hao Chen
Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection.Byunggill Joe; Jihun Hamm; Sung Ju Hwang; Sooel Son; Insik Shin
Are DNNs fooled by extremely unrecognizable images?Soichiro Kumano; Hiroshi Kera; Toshihiko Yamasaki
Reprogramming Language Models for Molecular Representation Learning.Ria Vinod; Pin-Yu Chen; Payel Das
2020-12-06
Black-box Model Inversion Attribute Inference Attacks on Classification Models.Shagufta Mehnaz; Ninghui Li; Elisa Bertino
PAC-Learning for Strategic Classification.Ravi Sundaram; Anil Vullikanti; Haifeng Xu; Fan Yao
2020-12-05
Evaluating adversarial robustness in simulated cerebellum.Liu Yuezhang; Bo Li; Qifeng Chen
2020-12-04
Advocating for Multiple Defense Strategies against Adversarial Examples.Alexandre Araujo; Laurent Meunier; Rafael Pinot; Benjamin Negrevergne
Practical No-box Adversarial Attacks against DNNs.Qizhang Li; Yiwen Guo; Hao Chen
Towards Natural Robustness Against Adversarial Examples.Haoyu Chu; Shikui Wei; Yao Zhao
Unsupervised Adversarially-Robust Representation Learning on Graphs.Jiarong Xu; Yang Yang; Junru Chen; Chunping Wang; Xin Jiang; Jiangang Lu; Yizhou Sun
Kernel-convoluted Deep Neural Networks with Data Augmentation.Minjin Kim; Young-geun Kim; Dongha Kim; Yongdai Kim; Myunghee Cho Paik
2020-12-03
Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning.Kendra Albert; Maggie Delano; Jonathon Penney; Afsaneh Rigot; Ram Shankar Siva Kumar
FAT: Federated Adversarial Training.Giulio Zizzo; Ambrish Rawat; Mathieu Sinn; Beat Buesser
An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks.Giuseppe Ughi; Vinayak Abrol; Jared Tanner
Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers.Brian Kim; Yalin E. Sagduyu; Tugba Erpek; Kemal Davaslioglu; Sennur Ulukus
Attribute-Guided Adversarial Training for Robustness to Natural Perturbations.Tejas Gokhale; Rushil Anirudh; Bhavya Kailkhura; Jayaraman J. Thiagarajan; Chitta Baral; Yezhou Yang
2020-12-02
From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation.Nikhil Kapoor; Andreas Bär; Serin Varghese; Jan David Schneider; Fabian Hüger; Peter Schlicht; Tim Fingscheidt
Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization.Aishan Liu; Shiyu Tang; Xianglong Liu; Xinyun Chen; Lei Huang; Zhuozhuo Tu; Dawn Song; Dacheng Tao
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques.Han Qiu; Yi Zeng; Tianwei Zhang; Yong Jiang; Meikang Qiu
Essential Features: Content-Adaptive Pixel Discretization to Improve Model Robustness to Adaptive Adversarial Attacks.Ryan Feng; Wu-chi Feng; Atul Prakash
How Robust are Randomized Smoothing based Defenses to Data Poisoning?Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Jihun Hamm
2020-12-01
Adversarial Robustness Across Representation Spaces.Pranjal Awasthi; George Yu; Chun-Sung Ferng; Andrew Tomkins; Da-Cheng Juan
Robustness Out of the Box: Compositional Representations Naturally Defend Against Black-Box Patch Attacks.Christian Cosgrove; Adam Kortylewski; Chenglin Yang; Alan Yuille
Boosting Adversarial Attacks on Neural Networks with Better Optimizer.Heng Yin; Hengwei Zhang; Jindong Wang; Ruiyu Dou
One-Pixel Attack Deceives Computer-Assisted Diagnosis of Cancer.Joni Korpihalkola; Tuomo Sipola; Samir Puuska; Tero Kokkonen
Towards Imperceptible Adversarial Image Patches Based on Network Explanations.Yaguan Qian; Jiamin Wang; Bin Wang; Zhaoquan Gu; Xiang Ling; Chunming Wu
2020-11-30
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses.Gaurang Sriramanan; Sravanti Addepalli; Arya Baburaj; R. Venkatesh Babu
Just One Moment: Structural Vulnerability of Deep Action Recognition against One Frame Attack.Jaehui Hwang; Jun-Hyuk Kim; Jun-Ho Choi; Jong-Seok Lee
2020-11-29
Architectural Adversarial Robustness: The Case for Deep Pursuit.George Cazenavette; Calvin Murdock; Simon Lucey
SwitchX- Gmin-Gmax Switching for Energy-Efficient and Robust Implementation of Binary Neural Networks on Memristive Xbars.Abhiroop Bhattacharjee; Priyadarshini Panda
A Targeted Universal Attack on Graph Convolutional Network.Jiazhu Dai; Weifeng Zhu; Xiangfeng Luo
2020-11-28
Cyberbiosecurity: DNA Injection Attack in Synthetic Biology.Dor Farbiash; Rami Puzis
Deterministic Certification to Adversarial Attacks via Bernstein Polynomial Approximation.Ching-Chia Kao; Jhe-Bang Ko; Chun-Shien Lu
FaceGuard: A Self-Supervised Defense Against Adversarial Face Images.Debayan Deb; Xiaoming Liu; Anil K. Jain
2020-11-27
3D Invisible Cloak.Mingfu Xue; Can He; Zhiyu Wu; Jian Wang; Zhe Liu; Weiqiang Liu
SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images.Mingfu Xue; Shichang Sun; Zhiyu Wu; Can He; Jian Wang; Weiqiang Liu
Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks.Mingfu Xue; Chengxiang Yuan; Can He; Zhiyu Wu; Yushu Zhang; Zhe Liu; Weiqiang Liu
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers.Kaidi Xu; Huan Zhang; Shiqi Wang; Yihan Wang; Suman Jana; Xue Lin; Cho-Jui Hsieh
Voting based ensemble improves robustness of defensive models.Devvrit; Minhao Cheng; Cho-Jui Hsieh; Inderjit Dhillon
Generalized Adversarial Examples: Attacks and Defenses.Haojing Shen; Sihong Chen; Ran Wang; Xizhao Wang
Robust and Natural Physical Adversarial Examples for Object Detectors.Mingfu Xue; Chengxiang Yuan; Can He; Jian Wang; Weiqiang Liu
2020-11-26
Regularization with Latent Space Virtual Adversarial Training.Genki Osada; Budrul Ahsan; Revoti Prasad Bora; Takashi Nishide
Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness.Yilun Jin; Lixin Fan; Kam Woh Ng; Ce Ju; Qiang Yang
Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks.Abhishek Moitra; Priyadarshini Panda
Robust Attacks on Deep Learning Face Recognition in the Physical World.Meng Shen; Hao Yu; Liehuang Zhu; Ke Xu; Qi Li; Xiaojiang Du
Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect.Athena Sayles; Ashish Hooda; Mohit Gupta; Rahul Chatterjee; Earlence Fernandes
2020-11-25
Adversarial Attack on Facial Recognition using Visible Light.Morgan Frearson; Kien Nguyen
SurFree: a fast surrogate-free black-box attack.Thibault Maho; Teddy Furon; Erwan Le Merrer
Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption.Ivan Evtimov; Russel Howes; Brian Dolhansky; Hamed Firooz; Cristian Canton Ferrer
Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization.Tianyu Han; Sven Nebelung; Federico Pedersoli; Markus Zimmermann; Maximilian Schulze-Hagen; Michael Ho; Christoph Haarburger; Fabian Kiessling; Christiane Kuhl; Volkmar Schulz; Daniel Truhn
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization. (80%)Sahil Suneja; Yunhui Zheng; Yufan Zhuang; Jim Laredo; Alessandro Morari
2020-11-24
Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning.Luiz F. O. Chamon; Santiago Paternain; Alejandro Ribeiro
Stochastic sparse adversarial attacks.Manon Césaire; Hatem Hajri; Sylvain Lamprier; Patrick Gallinari
On the Adversarial Robustness of 3D Point Cloud Classification.Jiachen Sun; Karl Koenig; Yulong Cao; Qi Alfred Chen; Z. Morley Mao
Towards Imperceptible Universal Attacks on Texture Recognition.Yingpeng Deng; Lina J. Karam
2020-11-23
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack.Rui Shu; Tianpei Xia; Laurie Williams; Tim Menzies
Augmented Lagrangian Adversarial Attacks.Jérôme Rony; Eric Granger; Marco Pedersoli; Ismail Ben Ayed
2020-11-22
Learnable Boundary Guided Adversarial Training.Jiequan Cui; Shu Liu; Liwei Wang; Jiaya Jia
Nudge Attacks on Point-Cloud DNNs.Yiren Zhao; Ilia Shumailov; Robert Mullins; Ross Anderson
2020-11-21
Spatially Correlated Patterns in Adversarial Images.Nandish Chattopadhyay; Lionell Yip En Zhi; Bryan Tan Bing Xing; Anupam Chattopadhyay
A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations.Can Bakiskan; Metehan Cekic; Ahmet Dundar Sezer; Upamanyu Madhow
Robust Data Hiding Using Inverse Gradient Attention. (2%)Honglei Zhang; Hu Wang; Yuanzhouhan Cao; Chunhua Shen; Yidong Li
2020-11-20
Are Chess Discussions Racist? An Adversarial Hate Speech Data Set.Rupak Sarkar; Ashiqur R. KhudaBukhsh
Detecting Universal Trigger's Adversarial Attack with Honeypot.Thai Le; Noseong Park; Dongwon Lee
2020-11-19
An Experimental Study of Semantic Continuity for Deep Learning Models.Shangxi Wu; Jitao Sang; Xian Zhao; Lizhang Chen
Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams.Chawin Sitawarin; Evgenios M. Kornaropoulos; Dawn Song; David Wagner
Adversarial Threats to DeepFake Detection: A Practical Perspective.Paarth Neekhara; Brian Dolhansky; Joanna Bitton; Cristian Canton Ferrer
Multi-Task Adversarial Attack.Pengxin Guo; Yuancheng Xu; Baijiong Lin; Yu Zhang
Latent Adversarial Debiasing: Mitigating Collider Bias in Deep Neural Networks.Luke Darlow; Stanisław Jastrzębski; Amos Storkey
2020-11-18
Robustified Domain Adaptation.Jiajin Zhang; Hanqing Chao; Pingkun Yan
Adversarial collision attacks on image hashing functions.Brian Dolhansky; Cristian Canton Ferrer
Contextual Fusion For Adversarial Robustness.Aiswarya Akumalla; Seth Haney; Maksim Bazhenov
Adversarial Turing Patterns from Cellular Automata.Nurislam Tursynbek; Ilya Vilkoviskiy; Maria Sindeeva; Ivan Oseledets
Self-Gradient Networks.Hossein Aboutalebi; Mohammad Javad Shafiee; Alexander Wong
Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs.Arezoo Rajabi; Rakesh B. Bobba
2020-11-17
FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances.Ali Shahin Shamsabadi; Francisco Sepúlveda Teixeira; Alberto Abad; Bhiksha Raj; Andrea Cavallaro; Isabel Trancoso
SIENA: Stochastic Multi-Expert Neural Patcher.Thai Le; Noseong Park; Dongwon Lee
Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification.Weitao Wan; Jiansheng Chen; Cheng Yu; Tong Wu; Yuanyi Zhong; Ming-Hsuan Yang
Generating universal language adversarial examples by understanding and enhancing the transferability across neural models.Liping Yuan; Xiaoqing Zheng; Yi Zhou; Cho-Jui Hsieh; Kai-wei Chang; Xuanjing Huang
Probing Predictions on OOD Images via Nearest Categories. (75%)Yao-Yuan Yang; Cyrus Rashtchian; Ruslan Salakhutdinov; Kamalika Chaudhuri
2020-11-16
MAAC: Novel Alert Correlation Method To Detect Multi-step Attack.Xiaoyu Wang; Lei Yu; Houhua He; Xiaorui Gong
Enforcing robust control guarantees within neural network policies.Priya L. Donti; Melrose Roderick; Mahyar Fazlyab; J. Zico Kolter
Adversarially Robust Classification based on GLRT.Bhagyashree Puranik; Upamanyu Madhow; Ramtin Pedarsani
Combining GANs and AutoEncoders for Efficient Anomaly Detection.Fabio Carrara; Giuseppe Amato; Luca Brombin; Fabrizio Falchi; Claudio Gennaro
Extreme Value Preserving Networks.Mingjie Sun; Jianguo Li; Changshui Zhang
2020-11-15
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations.Jinyuan Jia; Binghui Wang; Xiaoyu Cao; Hongbin Liu; Neil Zhenqiang Gong
Towards Understanding the Regularization of Adversarial Robustness on Neural Networks.Yuxin Wen; Shuai Li; Kui Jia
Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks.MaungMaung AprilPyone; Hitoshi Kiya
Power Side-Channel Attacks on BNN Accelerators in Remote FPGAs. (1%)Shayan Moini; Shanquan Tian; Jakub Szefer; Daniel Holcomb; Russell Tessier
2020-11-14
Audio-Visual Event Recognition through the lens of Adversary.Juncheng B Li; Kaixin Ma; Shuhui Qu; Po-Yao Huang; Florian Metze
2020-11-13
Transformer-Encoder Detector Module: Using Context to Improve Robustness to Adversarial Attacks on Object Detection.Faisal Alamri; Sinan Kalkan; Nicolas Pugeault
Query-based Targeted Action-Space Adversarial Policies on Deep Reinforcement Learning Agents.Xian Yeow Lee; Yasaman Esfandiari; Kai Liang Tan; Soumik Sarkar
2020-11-12
Adversarial Robustness Against Image Color Transformation within Parametric Filter Space.Zhengyu Zhao; Zhuoran Liu; Martha Larson
Sparse PCA: Algorithms, Adversarial Perturbations and Certificates.Tommaso d'Orsi; Pravesh K. Kothari; Gleb Novikov; David Steurer
2020-11-11
Adversarial images for the primate brain.Li Yuan; Will Xiao; Gabriel Kreiman; Francis E. H. Tay; Jiashi Feng; Margaret S. Livingstone
Detecting Adversarial Patches with Class Conditional Reconstruction Networks.Perry Deng; Mohammad Saidur Rahman; Matthew Wright
2020-11-10
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks.Martin Gubri; Maxime Cordy; Mike Papadakis; Yves Le Traon
2020-11-09
Solving Inverse Problems With Deep Neural Networks -- Robustness Included?Martin Genzel; Jan Macdonald; Maximilian März
2020-11-07
Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks.Alex Mathai; Shreya Khare; Srikanth Tamilselvam; Senthil Mani
Bridging the Performance Gap between FGSM and PGD Adversarial Training.Tianjin Huang; Vlado Menkovski; Yulong Pei; Mykola Pechenizkiy
2020-11-06
Single-Node Attack for Fooling Graph Neural Networks.Ben Finkelshtein; Chaim Baskin; Evgenii Zheltonozhskii; Uri Alon
A survey on practical adversarial examples for malware classifiers.Daniel Park; Bülent Yener
2020-11-05
A Black-Box Attack Model for Visually-Aware Recommender Systems.Rami Cohen; Oren Sar Shalom; Dietmar Jannach; Amihood Amir
Data Augmentation via Structured Adversarial Perturbations.Calvin Luo; Hossein Mobahi; Samy Bengio
Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty.Camilo Pestana; Wei Liu; David Glance; Ajmal Mian
Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks.Leo Schwinn; An Nguyen; René Raab; Dario Zanca; Bjoern Eskofier; Daniel Tenbrinck; Martin Burger
2020-11-03
You Do (Not) Belong Here: Detecting DPI Evasion Attacks with Context Learning.Shitong Zhu; Shasha Li; Zhongjie Wang; Xun Chen; Zhiyun Qian; Srikanth V. Krishnamurthy; Kevin S. Chan; Ananthram Swami
Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks.Denis Emelin; Ivan Titov; Rico Sennrich
Penetrating RF Fingerprinting-based Authentication with a Generative Adversarial Attack.Samurdhi Karunaratne; Enes Krijestorac; Danijela Cabric
Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks.Tao Bai; Jinqi Luo; Jun Zhao
MalFox: Camouflaged Adversarial Malware Example Generation Based on Conv-GANs Against Black-Box Detectors.Fangtian Zhong; Xiuzhen Cheng; Dongxiao Yu; Bei Gong; Shuaiwen Song; Jiguo Yu
A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs.Souvik Kundu; Mahdi Nazemi; Peter A. Beerel; Massoud Pedram
2020-11-02
Adversarial Examples in Constrained Domains.Ryan Sheatsley; Nicolas Papernot; Michael Weisman; Gunjan Verma; Patrick McDaniel
Frequency-based Automated Modulation Classification in the Presence of Adversaries.Rajeev Sahay; Christopher G. Brinton; David J. Love
Robust Algorithms for Online Convex Problems via Primal-Dual.Marco Molinaro
Trustworthy AI.Richa Singh; Mayank Vatsa; Nalini Ratha
2020-11-01
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks.Hang Zhou; Dongdong Chen; Jing Liao; Weiming Zhang; Kejiang Chen; Xiaoyi Dong; Kunlin Liu; Gang Hua; Nenghai Yu
Vulnerability of the Neural Networks Against Adversarial Examples: A Survey.Rui Zhao
2020-10-31
MAD-VAE: Manifold Awareness Defense Variational Autoencoder.Frederick Morlock; Dingsu Wang
2020-10-30
Integer Programming-based Error-Correcting Output Code Design for Robust Classification.Samarth Gupta; Saurabh Amin
Leveraging Extracted Model Adversaries for Improved Black Box Attacks.Naveen Jafer Nizar; Ari Kobren
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks.Lubin Meng; Jian Huang; Zhigang Zeng; Xue Jiang; Shan Yu; Tzyy-Ping Jung; Chin-Teng Lin; Ricardo Chavarriaga; Dongrui Wu
Adversarial Attacks on Optimization based Planners.Sai Vemprala; Ashish Kapoor
Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks.Dorjan Hitaj; Briland Hitaj; Sushil Jajodia; Luigi V. Mancini
Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification.Yongwei Wang; Mingquan Feng; Rabab Ward; Z. Jane Wang; Lanjun Wang
Adversarial Robust Training of Deep Learning MRI Reconstruction Models.Francesco Calivá; Kaiyang Cheng; Rutwik Shah; Valentina Pedoia
2020-10-29
Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine Framework and Its Adversarial Examples.Yingwei Li; Zhuotun Zhu; Yuyin Zhou; Yingda Xia; Wei Shen; Elliot K. Fishman; Alan L. Yuille
Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection.Yongwei Wang; Xin Ding; Li Ding; Rabab Ward; Z. Jane Wang
Can the state of relevant neurons in a deep neural networks serve as indicators for detecting adversarial attacks?Roger Granda; Tinne Tuytelaars; Jose Oramas
Reliable Graph Neural Networks via Robust Aggregation.Simon Geisler; Daniel Zügner; Stephan Günnemann
Passport-aware Normalization for Deep Model Protection.Jie Zhang; Dongdong Chen; Jing Liao; Weiming Zhang; Gang Hua; Nenghai Yu
Robustifying Binary Classification to Adversarial Perturbation.Fariborz Salehi; Babak Hassibi
Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification.Arslan Ali; Andrea Migliorati; Tiziano Bianchi; Enrico Magli
WaveTransform: Crafting Adversarial Examples via Input Decomposition.Divyam Anshumaan; Akshay Agarwal; Mayank Vatsa; Richa Singh
2020-10-28
Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations.Amit Daniely; Hadas Schacham
Object Hider: Adversarial Patch Attack Against Object Detectors.Yusheng Zhao; Huanqian Yan; Xingxing Wei
Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?Anna-Kathrin Kopetzki; Bertrand Charpentier; Daniel Zügner; Sandhya Giri; Stephan Günnemann
Transferable Universal Adversarial Perturbations Using Generative Models.Atiye Sadat Hashemi; Andreas Bär; Saeed Mozaffari; Tim Fingscheidt
2020-10-27
Fast Local Attack: Generating Local Adversarial Examples for Object Detectors.Quanyu Liao; Xin Wang; Bin Kong; Siwei Lyu; Youbing Yin; Qi Song; Xi Wu
Anti-perturbation of Online Social Networks by Graph Label Transition.Jun Zhuang; Mohammad Al Hasan
2020-10-26
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes.Jinyuan Jia; Binghui Wang; Neil Zhenqiang Gong
GreedyFool: Distortion-Aware Sparse Adversarial Attack.Xiaoyi Dong; Dongdong Chen; Jianmin Bao; Chuan Qin; Lu Yuan; Weiming Zhang; Nenghai Yu; Dong Chen
Robust Pre-Training by Adversarial Contrastive Learning.Ziyu Jiang; Tianlong Chen; Ting Chen; Zhangyang Wang
Versatile Verification of Tree Ensembles.Laurens Devos; Wannes Meert; Jesse Davis
Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy.Philipp Benz; Chaoning Zhang; Adil Karjauv; In So Kweon
Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis. (16%)Xudong Pan; Mi Zhang; Yifan Yan; Jiaming Zhu; Min Yang
2020-10-25
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound.Saheb Chhabra; Akshay Agarwal; Richa Singh; Mayank Vatsa
Dynamic Adversarial Patch for Evading Object Detection Models.Shahar Hoory; Tzvika Shapira; Asaf Shabtai; Yuval Elovici
Asymptotic Behavior of Adversarial Training in Binary Classification.Hossein Taheri; Ramtin Pedarsani; Christos Thrampoulidis
2020-10-24
ATRO: Adversarial Training with a Rejection Option.Masahiro Kato; Zhenghang Cui; Yoshihiro Fukuhara
Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks.Huimin Zeng; Chen Zhu; Tom Goldstein; Furong Huang
Stop Bugging Me! Evading Modern-Day Wiretapping Using Adversarial Perturbations.Yael Mathov; Tal Ben Senior; Asaf Shabtai; Yuval Elovici
2020-10-23
Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures.Nafise Sadat Moosavi; Marcel de Boer; Prasetya Ajie Utama; Iryna Gurevych
Towards Robust Neural Networks via Orthogonal Diversity.Kun Fang; Qinghua Tao; Yingwen Wu; Tao Li; Jia Cai; Feipeng Cai; Xiaolin Huang; Jie Yang
2020-10-22
Contrastive Learning with Adversarial Examples.Chih-Hui Ho; Nuno Vasconcelos
Adversarial Attacks on Binary Image Recognition Systems.Eric Balkanski; Harrison Chase; Kojin Oshiba; Alexander Rilee; Yaron Singer; Richard Wang
Rewriting Meaningful Sentences via Conditional BERT Sampling and an application on fooling text classifiers.Lei Xu; Ivan Ramirez; Kalyan Veeramachaneni
An Efficient Adversarial Attack for Tree Ensembles.Chong Zhang; Huan Zhang; Cho-Jui Hsieh
Adversarial Robustness of Supervised Sparse Coding.Jeremias Sulam; Ramchandran Muthukumar; Raman Arora
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming.Sumanth Dathathri; Krishnamurthy Dvijotham; Alexey Kurakin; Aditi Raghunathan; Jonathan Uesato; Rudy Bunel; Shreya Shankar; Jacob Steinhardt; Ian Goodfellow; Percy Liang; Pushmeet Kohli
Defense-guided Transferable Adversarial Attacks.Zifei Zhang; Kai Qiao; Jian Chen; Ningning Liang
Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free.Haotao Wang; Tianlong Chen; Shupeng Gui; Ting-Kuei Hu; Ji Liu; Zhangyang Wang
2020-10-21
Adversarial Attacks on Deep Algorithmic Trading Policies.Yaser Faghan; Nancirose Piazza; Vahid Behzadan; Ali Fathi
Maximum Mean Discrepancy is Aware of Adversarial Attacks.Ruize Gao; Feng Liu; Jingfeng Zhang; Bo Han; Tongliang Liu; Gang Niu; Masashi Sugiyama
Precise Statistical Analysis of Classification Accuracies for Adversarial Training.Adel Javanmard; Mahdi Soltanolkotabi
Learning Black-Box Attackers with Transferable Priors and Query Feedback.Jiancheng Yang; Yangzhou Jiang; Xiaoyang Huang; Bingbing Ni; Chenglong Zhao
Class-Conditional Defense GAN Against End-to-End Speech Attacks.Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
A Distributional Robustness Certificate by Randomized Smoothing.Jungang Yang; Liyao Xiang; Ruidong Chen; Yukun Wang; Wei Wang; Xinbing Wang
2020-10-20
Preventing Personal Data Theft in Images with Adversarial ML.Thomas Cilloni; Wei Wang; Charles Walter; Charles Fleming
Towards Understanding the Dynamics of the First-Order Adversaries.Zhun Deng; Hangfeng He; Jiaoyang Huang; Weijie J. Su
Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods.Byungjoo Kim; Bryce Chudomelka; Jinyoung Park; Jaewoo Kang; Youngjoon Hong; Hyunwoo J. Kim
Boosting Gradient for White-Box Adversarial Attacks.Hongying Liu; Zhenyu Zhou; Fanhua Shang; Xiaoyu Qi; Yuanyuan Liu; Licheng Jiao
Tight Second-Order Certificates for Randomized Smoothing.Alexander Levine; Aounon Kumar; Thomas Goldstein; Soheil Feizi
2020-10-19
A Survey of Machine Learning Techniques in Adversarial Image Forensics.Ehsan Nowroozi; Ali Dehghantanha; Reza M. Parizi; Kim-Kwang Raymond Choo
Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification.Erwin Quiring; Lukas Pirch; Michael Reimsbach; Daniel Arp; Konrad Rieck
RobustBench: a standardized adversarial robustness benchmark.Francesco Croce; Maksym Andriushchenko; Vikash Sehwag; Nicolas Flammarion; Mung Chiang; Prateek Mittal; Matthias Hein
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness.Guillermo Ortiz-Jimenez; Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Verifying the Causes of Adversarial Examples.Honglin Li; Yifei Fan; Frieder Ganz; Anthony Yezzi; Payam Barnaghi
When Bots Take Over the Stock Market: Evasion Attacks Against Algorithmic Traders.Elior Nehemya; Yael Mathov; Asaf Shabtai; Yuval Elovici
FLAG: Adversarial Data Augmentation for Graph Neural Networks.Kezhi Kong; Guohao Li; Mucong Ding; Zuxuan Wu; Chen Zhu; Bernard Ghanem; Gavin Taylor; Tom Goldstein
2020-10-18
FADER: Fast Adversarial Example Rejection.Francesco Crecchi; Marco Melis; Angelo Sotgiu; Davide Bacciu; Battista Biggio
Poisoned classifiers are not only backdoored, they are fundamentally broken.Mingjie Sun; Siddhant Agarwal; J. Zico Kolter
2020-10-17
A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models.Ferhat Ozgur Catak; Samed Sivaslioglu; Kevser Sahinbas
Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing.Jinghan Yang; Adith Boloor; Ayan Chakrabarti; Xuan Zhang; Yevgeniy Vorobeychik
Weight-Covariance Alignment for Adversarially Robust Neural Networks.Panagiotis Eustratiadis; Henry Gouk; Da Li; Timothy Hospedales
2020-10-16
DPAttack: Diffused Patch Attacks against Universal Object Detection.Shudeng Wu; Tao Dai; Shu-Tao Xia
Mischief: A Simple Black-Box Attack Against Transformer Architectures.Adrian de Wynter
Learning Robust Algorithms for Online Allocation Problems Using Adversarial Training.Goran Zuzic; Di Wang; Aranyak Mehta; D. Sivakumar
2020-10-15
Certifying Neural Network Robustness to Random Input Noise from Samples.Brendon G. Anderson; Somayeh Sojoudi
Adversarial Images through Stega Glasses.Benoît Bonnet; Teddy Furon; Patrick Bas
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning.Hongjun Wang; Guanbin Li; Xiaobai Liu; Liang Lin
Generalizing Universal Adversarial Attacks Beyond Additive Perturbations.Yanghao Zhang; Wenjie Ruan; Fu Wang; Xiaowei Huang
Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training.Zichao Li; Liyuan Liu; Chengyu Dong; Jingbo Shang
Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness.Long Zhao; Ting Liu; Xi Peng; Dimitris Metaxas
Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks.Jiangnan Li; Yingyuan Yang; Jinyuan Stella Sun
Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things.Ling Wang; Cheng Zhang; Zejian Luo; Chenguang Liu; Jie Liu; Xi Zheng; Athanasios Vasilakos
2020-10-14
Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability.Yuxian Meng; Chun Fan; Zijun Sun; Eduard Hovy; Fei Wu; Jiwei Li
Towards Resistant Audio Adversarial Examples.Tom Dörr; Karla Markert; Nicolas M. Müller; Konstantin Böttinger
An Adversarial Attack against Stacked Capsule Autoencoder.Jiazhu Dai; Siwei Xiong
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability.Mahmoud Hossam; Trung Le; He Zhao; Dinh Phung
GreedyFool: Multi-Factor Imperceptibility and Its Application to Designing Black-box Adversarial Example Attack.Hui Liu; Bo Zhao; Jiabao Guo; Yang An; Peng Liu
2020-10-13
Toward Few-step Adversarial Training from a Frequency Perspective.Hans Shih-Han Wang; Cory Cornelius; Brandon Edwards; Jason Martin
Higher-Order Certification for Randomized Smoothing.Jeet Mohapatra; Ching-Yun Ko; Tsui-Wei Weng; Pin-Yu Chen; Sijia Liu; Luca Daniel
Linking average- and worst-case perturbation robustness via class selectivity and dimensionality.Matthew L. Leavitt; Ari Morcos
2020-10-12
Universal Model for 3D Medical Image Analysis.Xiaoman Zhang; Ya Zhang; Xiaoyun Zhang; Yanfeng Wang
To be Robust or to be Fair: Towards Fairness in Adversarial Training.Han Xu; Xiaorui Liu; Yaxin Li; Jiliang Tang
Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks.He Zhao; Thanh Nguyen; Trung Le; Paul Montague; Olivier De Vel; Tamas Abraham; Dinh Phung
Shape-Texture Debiased Neural Network Training.Yingwei Li; Qihang Yu; Mingxing Tan; Jieru Mei; Peng Tang; Wei Shen; Alan Yuille; Cihang Xie
On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness.Maria-Florina Balcan; Avrim Blum; Dravyansh Sharma; Hongyang Zhang
From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks.Steffen Eger; Yannik Benz
EFSG: Evolutionary Fooling Sentences Generator.Marco Di Giovanni; Marco Brambilla
Contrast and Classify: Training Robust VQA Models. (2%)Yash Kant; Abhinav Moudgil; Dhruv Batra; Devi Parikh; Harsh Agrawal
2020-10-11
Gradient-based Analysis of NLP Models is Manipulable.Junlin Wang; Jens Tuyls; Eric Wallace; Sameer Singh
IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration.Ziyi Wu; Yueqi Duan; He Wang; Qingnan Fan; Leonidas J. Guibas
2020-10-10
Is It Time to Redefine the Classification Task for Deep Neural Networks?Keji Han; Yun Li
Regularizing Neural Networks via Adversarial Model Perturbation. (1%)Yaowei Zheng; Richong Zhang; Yongyi Mao
2020-10-09
Understanding Spatial Robustness of Deep Neural Networks.Ziyuan Zhong; Yuchi Tian; Baishakhi Ray
How Does Mixup Help With Robustness and Generalization?Linjun Zhang; Zhun Deng; Kenji Kawaguchi; Amirata Ghorbani; James Zou
2020-10-08
Transcending Transcend: Revisiting Malware Classification with Conformal Evaluation.Federico Barbero; Feargus Pendlebury; Fabio Pierazzi; Lorenzo Cavallaro
Improve Adversarial Robustness via Weight Penalization on Classification Layer.Cong Xu; Dan Li; Min Yang
A Unified Approach to Interpreting and Boosting Adversarial Transferability.Xin Wang; Jie Ren; Shuyun Lin; Xiangming Zhu; Yisen Wang; Quanshi Zhang
Improved Techniques for Model Inversion Attacks.Si Chen; Ruoxi Jia; Guo-Jun Qi
Affine-Invariant Robust Training.Oriol Barbany Mayor
Targeted Attention Attack on Deep Learning Models in Road Sign Recognition.Xinghao Yang; Weifeng Liu; Shengli Zhang; Wei Liu; Dacheng Tao
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks.Anit Kumar Sahu; Satya Narayan Shukla; J. Zico Kolter
2020-10-07
Hiding the Access Pattern is Not Enough: Exploiting Search Pattern Leakage in Searchable Encryption.Simon Oya; Florian Kerschbaum
Learning Clusterable Visual Features for Zero-Shot Recognition.Jingyi Xu; Zhixin Shu; Dimitris Samaras
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks.Ahmed Salem; Michael Backes; Yang Zhang
Revisiting Batch Normalization for Improving Corruption Robustness.Philipp Benz; Chaoning Zhang; Adil Karjauv; In So Kweon
Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features.Philipp Benz; Chaoning Zhang; In So Kweon
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks.Bedeuro Kim; Alsharif Abuadbba; Yansong Gao; Yifeng Zheng; Muhammad Ejaz Ahmed; Hyoungshick Kim; Surya Nepal
Global Optimization of Objective Functions Represented by ReLU Networks.Christopher A. Strong; Haoze Wu; Aleksandar Zeljić; Kyle D. Julian; Guy Katz; Clark Barrett; Mykel J. Kochenderfer
CD-UAP: Class Discriminative Universal Adversarial Perturbation.Chaoning Zhang; Philipp Benz; Tooba Imtiaz; In So Kweon
Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples.Eden Levy; Yael Mathov; Ziv Katzir; Asaf Shabtai; Yuval Elovici
Double Targeted Universal Adversarial Perturbations.Philipp Benz; Chaoning Zhang; Tooba Imtiaz; In So Kweon
Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples.Sven Gowal; Chongli Qin; Jonathan Uesato; Timothy Mann; Pushmeet Kohli
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems.AKM Iqtidar Newaz; Nur Imtiazul Haque; Amit Kumar Sikder; Mohammad Ashiqur Rahman; A. Selcuk Uluagac
Adversarial attacks on audio source separation.Naoya Takahashi; Shota Inoue; Yuki Mitsufuji
2020-10-06
Visualizing Color-wise Saliency of Black-Box Image Classification Models.Yuhki Hatakeyama; Hiroki Sakuma; Yoshinori Konishi; Kohei Suenaga
Constraining Logits by Bounded Function for Adversarial Robustness.Sekitoshi Kanai; Masanori Yamada; Shin'ya Yamaguchi; Hiroshi Takahashi; Yasutoshi Ida
Adversarial Patch Attacks on Monocular Depth Estimation Networks.Koichiro Yamanaka; Ryutaroh Matsumoto; Keita Takahashi; Toshiaki Fujii
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models.Ahmed Salem; Yannick Sautter; Michael Backes; Mathias Humbert; Yang Zhang
2020-10-05
Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model.Xin Qiu; Risto Miikkulainen
Adversarial Boot Camp: label free certified robustness in one epoch.Ryan Campbell; Chris Finlay; Adam M Oberman
Understanding Classifier Mistakes with Generative Models.Laëtitia Shao; Yang Song; Stefano Ermon
CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation.Tianlu Wang; Xuezhi Wang; Yao Qin; Ben Packer; Kang Li; Jilin Chen; Alex Beutel; Ed Chi
Second-Order NLP Adversarial Examples.John X. Morris
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference.Sanghyun Hong; Yiğitcan Kaya; Ionuţ-Vlad Modoranu; Tudor Dumitraş
InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective.Boxin Wang; Shuohang Wang; Yu Cheng; Zhe Gan; Ruoxi Jia; Bo Li; Jingjing Liu
Understanding Catastrophic Overfitting in Single-step Adversarial Training.Hoki Kim; Woojin Lee; Jaewook Lee
Downscaling Attack and Defense: Turning What You See Back Into What You Get.Andrew J. Lohn
Metadata-Based Detection of Child Sexual Abuse Material. (1%)Mayana Pereira; Rahul Dodhia; Hyrum Anderson; Richard Brown
2020-10-04
TextAttack: Lessons learned in designing Python frameworks for NLP.John X. Morris; Jin Yong Yoo; Yanjun Qi
A Study for Universal Adversarial Attacks on Texture Recognition.Yingpeng Deng; Lina J. Karam
Adversarial Attack and Defense of Structured Prediction Models.Wenjuan Han; Liwen Zhang; Yong Jiang; Kewei Tu
Geometry-aware Instance-reweighted Adversarial Training.Jingfeng Zhang; Jianing Zhu; Gang Niu; Bo Han; Masashi Sugiyama; Mohan Kankanhalli
Unknown Presentation Attack Detection against Rational Attackers.Ali Khodabakhsh; Zahid Akhtar
2020-10-03
Adversarial and Natural Perturbations for General Robustness.Sadaf Gulshad; Jan Hendrik Metzen; Arnold Smeulders
Multi-Step Adversarial Perturbations on Recommender Systems Embeddings.Vito Walter Anelli; Alejandro Bellogín; Yashar Deldjoo; Tommaso Di Noia; Felice Antonio Merra
A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples.Zhao Meng; Roger Wattenhofer
Efficient Robust Training via Backward Smoothing.Jinghui Chen; Yu Cheng; Zhe Gan; Quanquan Gu; Jingjing Liu
Do Wider Neural Networks Really Help Adversarial Robustness?Boxi Wu; Jinghui Chen; Deng Cai; Xiaofei He; Quanquan Gu
2020-10-02
Note: An alternative proof of the vulnerability of $k$-NN classifiers in high intrinsic dimensionality regions.Teddy Furon
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders.Vito Walter Anelli; Tommaso Di Noia; Daniele Malitesta; Felice Antonio Merra
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense.MaungMaung AprilPyone; Hitoshi Kiya
Query complexity of adversarial attacks.Grzegorz Głuch; Rüdiger Urbanke
CorrAttack: Black-box Adversarial Attack with Structured Search.Zhichao Huang; Yaowei Huang; Tong Zhang
A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations.Gustavo Olague; Gerardo Ibarra-Vazquez; Mariana Chan-Ley; Cesar Puente; Carlos Soubervielle-Montalvo; Axel Martinez
2020-10-01
Assessing Robustness of Text Classification through Maximal Safe Radius Computation.Emanuele La Malfa; Min Wu; Luca Laurenti; Benjie Wang; Anthony Hartshorn; Marta Kwiatkowska
Bag of Tricks for Adversarial Training.Tianyu Pang; Xiao Yang; Yinpeng Dong; Hang Su; Jun Zhu
2020-09-30
Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning.Guneet S. Dhillon; Nicholas Carlini
Accurate and Robust Feature Importance Estimation under Distribution Shifts.Jayaraman J. Thiagarajan; Vivek Narayanaswamy; Rushil Anirudh; Peer-Timo Bremer; Andreas Spanias
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks.Uday Shankar Shanthamallu; Jayaraman J. Thiagarajan; Andreas Spanias
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles.Huanrui Yang; Jingyang Zhang; Hongliang Dong; Nathan Inkawhich; Andrew Gardner; Andrew Touchet; Wesley Wilkes; Heath Berry; Hai Li
2020-09-29
Neural Topic Modeling with Cycle-Consistent Adversarial Training.Xuemeng Hu; Rui Wang; Deyu Zhou; Yuxuan Xiong
Fast Fréchet Inception Distance.Alexander Mathiasen; Frederik Hvilshøj
2020-09-28
Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability.Ishai Rosenberg; Shai Meir; Jonathan Berrebi; Ilay Gordon; Guillaume Sicard; Eli Omid David
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment.Sharan Raja; Rudraksh Tuwani
STRATA: Building Robustness with a Simple Method for Generating Black-box Adversarial Attacks for Models of Code.Jacob M. Springer; Bryn Marie Reinstadler; Una-May O'Reilly
Graph Adversarial Networks: Protecting Information against Adversarial Attacks.Peiyuan Liao; Han Zhao; Keyulu Xu; Tommi Jaakkola; Geoffrey Gordon; Stefanie Jegelka; Ruslan Salakhutdinov
Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients.Yifei Huang; Yaodong Yu; Hongyang Zhang; Yi Ma; Yuan Yao
Learned Fine-Tuner for Incongruous Few-Shot Adversarial Learning. (82%)Pu Zhao; Sijia Liu; Parikshit Ram; Songtao Lu; Yuguang Yao; Djallel Bouneffouf; Xue Lin
2020-09-27
Learning to Improve Image Compression without Changing the Standard Decoder.Yannick Strümpler; Ren Yang; Radu Timofte
RoGAT: a robust GNN combined revised GAT with adjusted graphs.Xianchen Zhou; Yaoyun Zeng; Hongxia Wang
Where Does the Robustness Come from? A Study of the Transformation-based Ensemble Defence.Chang Liao; Yao Cheng; Chengfang Fang; Jie Shi
2020-09-26
Differentially Private Adversarial Robustness Through Randomized Perturbations.Nan Xu; Oluwaseyi Feyisetan; Abhinav Aggarwal; Zekun Xu; Nathanael Teissier
Beneficial Perturbations Network for Defending Adversarial Examples.Shixian Wen; Amanda Rios; Laurent Itti
2020-09-25
Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision.Sara Mandelli; Nicolò Bonettini; Paolo Bestagini; Stefano Tubaro
Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training.Shunsuke Kitada; Hitoshi Iyatomi
2020-09-24
Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities.Tyler J. Shipp; Daniel J. Clouse; Michael J. De Lucia; Metin B. Ahiskali; Kai Steverson; Jonathan M. Mullin; Nathaniel D. Bastian
Adversarial Examples in Deep Learning for Multivariate Time Series Regression.Gautam Raj Mode; Khaza Anuarul Hoque
Improving Query Efficiency of Black-box Adversarial Attack.Yang Bai; Yuyuan Zeng; Yong Jiang; Yisen Wang; Shu-Tao Xia; Weiwei Guo
2020-09-23
Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization.Prashnna Kumar Gyawali; Sandesh Ghimire; Linwei Wang
Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining.Ananya B. Sai; Akash Kumar Mohankumar; Siddhartha Arora; Mitesh M. Khapra
Adversarial robustness via stochastic regularization of neural activation sensitivity.Gil Fidel; Ron Bitton; Ziv Katzir; Asaf Shabtai
A Partial Break of the Honeypots Defense to Catch Adversarial Attacks.Nicholas Carlini
Semantics-Preserving Adversarial Training.Wonseok Lee; Hanbit Lee; Sang-goo Lee
Robustification of Segmentation Models Against Adversarial Perturbations In Medical Imaging.Hanwool Park; Amirhossein Bayat; Mohammad Sabokrou; Jan S. Kirschke; Bjoern H. Menze
Detection of Iterative Adversarial Attacks via Counter Attack.Matthias Rottmann; Kira Maag; Mathis Peyron; Natasa Krejic; Hanno Gottschalk
Torchattacks: A PyTorch Repository for Adversarial Attacks.Hoki Kim
2020-09-22
What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors.Yi-Shan Lin; Wen-Chuan Lee; Z. Berkay Celik
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time.Ferran Alet; Kenji Kawaguchi; Tomas Lozano-Perez; Leslie Pack Kaelbling
Adversarial Attack Based Countermeasures against Deep Learning Side-Channel Attacks.Ruizhe Gu; Ping Wang; Mengce Zheng; Honggang Hu; Nenghai Yu
2020-09-21
Uncertainty-aware Attention Graph Neural Network for Defending Adversarial Attacks.Boyuan Feng; Yuke Wang; Zheng Wang; Yufei Ding
Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers.Boyuan Feng; Yuke Wang; Xu Li; Yufei Ding
Generating Adversarial yet Inconspicuous Patches with a Single Image.Jinqi Luo; Tao Bai; Jun Zhao; Bo Li
Adversarial Training with Stochastic Weight Average.Joong-Won Hwang; Youngwan Lee; Sungchan Oh; Yuseok Bae
Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness.Anh Bui; Trung Le; He Zhao; Paul Montague; Olivier deVel; Tamas Abraham; Dinh Phung
DeepDyve: Dynamic Verification for Deep Neural Networks.Yu Li; Min Li; Bo Luo; Ye Tian; Qiang Xu
Feature Distillation With Guided Adversarial Contrastive Learning.Tao Bai; Jinnan Chen; Jun Zhao; Bihan Wen; Xudong Jiang; Alex Kot
Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version).Gautam Raj Mode; Khaza Anuarul Hoque
Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations.Alex Wong; Mukund Mundhra; Stefano Soatto
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing.Maurice Weber; Nana Liu; Bo Li; Ce Zhang; Zhikuan Zhao
Password Strength Signaling: A Counter-Intuitive Defense Against Password Cracking. (1%)Wenjie Bai; Jeremiah Blocki; Ben Harsha
2020-09-20
Improving Robustness and Generality of NLP Models Using Disentangled Representations.Jiawei Wu; Xiaoya Li; Xiang Ao; Yuxian Meng; Fei Wu; Jiwei Li
2020-09-19
Efficient Certification of Spatial Robustness.Anian Ruoss; Maximilian Baader; Mislav Balunović; Martin Vechev
OpenAttack: An Open-source Textual Adversarial Attack Toolkit.Guoyang Zeng; Fanchao Qi; Qianrui Zhou; Tingji Zhang; Bairu Hou; Yuan Zang; Zhiyuan Liu; Maosong Sun
Making Images Undiscoverable from Co-Saliency Detection.Ruijun Gao; Qing Guo; Felix Juefei-Xu; Hongkai Yu; Xuhong Ren; Wei Feng; Song Wang
Adversarial Exposure Attack on Diabetic Retinopathy Imagery.Yupeng Cheng; Felix Juefei-Xu; Qing Guo; Huazhu Fu; Xiaofei Xie; Shang-Wei Lin; Weisi Lin; Yang Liu
Bias Field Poses a Threat to DNN-based X-Ray Recognition.Binyu Tian; Qing Guo; Felix Juefei-Xu; Wen Le Chan; Yupeng Cheng; Xiaohong Li; Xiaofei Xie; Shengchao Qin
Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations.Yuan Zang; Bairu Hou; Fanchao Qi; Zhiyuan Liu; Xiaojun Meng; Maosong Sun
Adversarial Rain Attack and Defensive Deraining for DNN Perception.Liming Zhai; Felix Juefei-Xu; Qing Guo; Xiaofei Xie; Lei Ma; Wei Feng; Shengchao Qin; Yang Liu
EI-MTD:Moving Target Defense for Edge Intelligence against Adversarial Attacks.Yaguan Qian; Qiqi Shao; Jiamin Wang; Xiang Lin; Yankai Guo; Zhaoquan Gu; Bin Wang; Chunming Wu
2020-09-18
Robust Decentralized Learning for Neural Networks.Yao Zhou; Jun Wu; Jingrui He
MIRAGE: Mitigating Conflict-Based Cache Attacks with a Practical Fully-Associative Design. (1%)Gururaj Saileshwar; Moinuddin Qureshi
2020-09-17
Certifying Confidence via Randomized Smoothing.Aounon Kumar; Alexander Levine; Soheil Feizi; Tom Goldstein
Generating Label Cohesive and Well-Formed Adversarial Claims.Pepa Atanasova; Dustin Wright; Isabelle Augenstein
Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks.T. Gittings; S. Schneider; J. Collomosse
Label Smoothing and Adversarial Robustness.Chaohao Fu; Hongbin Chen; Na Ruan; Weijia Jia
Online Alternate Generator against Adversarial Attacks.Haofeng Li; Yirui Zeng; Guanbin Li; Liang Lin; Yizhou Yu
MultAV: Multiplicative Adversarial Videos.Shao-Yuan Lo; Vishal M. Patel
On the Transferability of Minimal Prediction Preserving Inputs in Question Answering.Shayne Longpre; Yi Lu; Christopher DuBois
Large Norms of CNN Layers Do Not Hurt Adversarial Robustness.Youwei Liang; Dong Huang
2020-09-16
Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation.Wenhao Ding; Baiming Chen; Bo Li; Kim Ji Eun; Ding Zhao
Analysis of Generalizability of Deep Neural Networks Based on the Complexity of Decision Boundary.Shuyue Guan; Murray Loew
Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View.Erick Galinkin
Contextualized Perturbation for Textual Adversarial Attack.Dianqi Li; Yizhe Zhang; Hao Peng; Liqun Chen; Chris Brockett; Ming-Ting Sun; Bill Dolan
2020-09-15
Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup.Jang-Hyun Kim; Wonho Choo; Hyun Oh Song
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems.Haoliang Li; Yufei Wang; Xiaofei Xie; Yang Liu; Shiqi Wang; Renjie Wan; Lap-Pui Chau; Alex C. Kot
Switching Gradient Directions for Query-Efficient Black-Box Adversarial Attacks.Chen Ma; Shuyu Cheng; Li Chen; Junhai Yong
Decision-based Universal Adversarial Attack.Jing Wu; Mingyi Zhou; Shuaicheng Liu; Yipeng Liu; Ce Zhu
2020-09-14
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses.Ambar Pal; René Vidal
Input Hessian Regularization of Neural Networks.Waleed Mustafa; Robert A. Vandermeulen; Marius Kloft
Robust Deep Learning Ensemble against Deception.Wenqi Wei; Ling Liu
Hold Tight and Never Let Go: Security of Deep Learning based Automated Lane Centering under Physical-World Attack.Takami Sato; Junjie Shen; Ningfei Wang; Yunhan Jack Jia; Xue Lin; Qi Alfred Chen
2020-09-13
Manifold attack.Khanh-Hung Tran; Fred-Maurice Ngole-Mboula; Jean-Luc Starck
Towards the Quantification of Safety Risks in Deep Neural Networks.Peipei Xu; Wenjie Ruan; Xiaowei Huang
2020-09-12
Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing.Zhidong Gao; Rui Hu; Yanmin Gong
2020-09-11
Achieving Adversarial Robustness via Sparsity.Shufan Wang; Ningyi Liao; Liyao Xiang; Nanyang Ye; Quanshi Zhang
Defending Against Multiple and Unforeseen Adversarial Videos.Shao-Yuan Lo; Vishal M. Patel
Robust Neural Machine Translation: Modeling Orthographic and Interpunctual Variation.Toms Bergmanis; Artūrs Stafanovičs; Mārcis Pinnis
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples.Timo Freiesleben
Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection.Lan Zhang; Peng Liu; Yoon-Ho Choi
2020-09-10
Second Order Optimization for Adversarial Robustness and Interpretability.Theodoros Tsiligkaridis; Jay Roberts
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent.Ricardo Bigolin Lanfredi; Joyce D. Schroeder; Tolga Tasdizen
2020-09-09
End-to-end Kernel Learning via Generative Random Fourier Features.Kun Fang; Xiaolin Huang; Fanghui Liu; Jie Yang
Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples.Jin Yong Yoo; John X. Morris; Eli Lifland; Yanjun Qi
A black-box adversarial attack for poisoning clustering.Antonio Emanuele Cinà; Alessandro Torcinovich; Marcello Pelillo
SoK: Certified Robustness for Deep Neural Networks.Linyi Li; Xiangyu Qi; Tao Xie; Bo Li
2020-09-08
Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models.Achyut Mani Tripathi; Ashish Mishra
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective.Gabriel Resende Machado; Eugênio Silva; Ronaldo Ribeiro Goldschmidt
2020-09-07
Adversarial attacks on deep learning models for fatty liver disease classification by modification of ultrasound image reconstruction method.Michal Byra; Grzegorz Styczynski; Cezary Szmigielski; Piotr Kalinowski; Lukasz Michalowski; Rafal Paluszkiewicz; Bogna Ziarkiewicz-Wroblewska; Krzysztof Zieniewicz; Andrzej Nowicki
Adversarial Attack on Large Scale Graph.Jintang Li; Tao Xie; Liang Chen; Fenfang Xie; Xiangnan He; Zibin Zheng
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing.Josh Kalin; Matthew Ciolino; David Noever; Gerry Dozier
2020-09-06
A Game Theoretic Analysis of LQG Control under Adversarial Attack.Zuxing Li; György Dán; Dong Liu
Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks.Shankar A. Deka; Dušan M. Stipanović; Claire J. Tomlin
Detection Defense Against Adversarial Attacks with Saliency Map.Dengpan Ye; Chuanxi Chen; Changrui Liu; Hao Wang; Shunzhi Jiang
2020-09-05
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks.Nilaksh Das; Haekyu Park; Zijie J. Wang; Fred Hohman; Robert Firstman; Emily Rogers; Duen Horng Chau
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks.Wei-An Lin; Chun Pong Lau; Alexander Levine; Rama Chellappa; Soheil Feizi
2020-09-03
MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN. (10%)Haoyu Zhang; Sushma Venkatesh; Raghavendra Ramachandra; Kiran Raja; Naser Damer; Christoph Busch
2020-09-02
Yet Meta Learning Can Adapt Fast, It Can Also Break Easily.Han Xu; Yaxin Li; Xiaorui Liu; Hui Liu; Jiliang Tang
Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation.Danilo Vasconcellos Vargas; Bingli Liao; Takahiro Kanzaki
Open-set Adversarial Defense.Rui Shao; Pramuditha Perera; Pong C. Yuen; Vishal M. Patel
Adversarially Robust Neural Architectures.Minjing Dong; Yanxi Li; Yunhe Wang; Chang Xu
Flow-based detection and proxy-based evasion of encrypted malware C2 traffic.Carlos Novo; Ricardo Morla
Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors.Cezara Benegui; Radu Tudor Ionescu
Simulating Unknown Target Models for Query-Efficient Black-box Attacks.Chen Ma; Li Chen; Jun-Hai Yong
2020-09-01
Defending against substitute model black box adversarial attacks with the 01 loss.Yunzhe Xue; Meiyan Xie; Usman Roshan
2020-08-31
Adversarial Patch Camouflage against Aerial Detection.Ajaya Adhikari; Richard den Hollander; Ioannis Tolios; Michael van Bekkum; Anneloes Bal; Stijn Hendriks; Maarten Kruithof; Dennis Gross; Nils Jansen; Guillermo Pérez; Kit Buurman; Stephan Raaijmakers
Evasion Attacks to Graph Neural Networks via Influence Function.Binghui Wang; Tianxiang Zhou; Minhua Lin; Pan Zhou; Ang Li; Meng Pang; Cai Fu; Hai Li; Yiran Chen
MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models.Thai Le; Suhang Wang; Dongwon Lee
2020-08-30
An Integrated Approach to Produce Robust Models with High Efficiency.Zhijian Li; Bao Wang; Jack Xin
Benchmarking adversarial attacks and defenses for time-series data.Shoaib Ahmed Siddiqui; Andreas Dengel; Sheraz Ahmed
Shape Defense Against Adversarial Attacks.Ali Borji
2020-08-29
Improving Resistance to Adversarial Deformations by Regularizing Gradients.Pengfei Xia; Bin Li
2020-08-27
A Scene-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video.Mariana-Iuliana Georgescu; Radu Tudor Ionescu; Fahad Shahbaz Khan; Marius Popescu; Mubarak Shah
GhostBuster: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing.Zhongyuan Hau; Soteris Demetriou; Luis Muñoz-González; Emil C. Lupu
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds.Jaeyeon Kim; Binh-Son Hua; Duc Thanh Nguyen; Sai-Kit Yeung
On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks.Deboleena Roy; Indranil Chakraborty; Timur Ibrayev; Kaushik Roy
Adversarial Eigen Attack on Black-Box Models.Linjun Zhou; Peng Cui; Yinan Jiang; Shiqiang Yang
Color and Edge-Aware Adversarial Image Perturbations.Robert Bassett; Mitchell Graves; Patrick Reilly
Adversarially Robust Learning via Entropic Regularization.Gauri Jagatap; Ameya Joshi; Animesh Basak Chowdhury; Siddharth Garg; Chinmay Hegde
2020-08-26
Adversarially Training for Audio Classifiers.Raymel Alfonso Sallo; Mohammad Esmaeilpour; Patrick Cardinal
2020-08-25
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses.Fu Lin; Rohit Mittapalli; Prithvijit Chattopadhyay; Daniel Bolya; Judy Hoffman
Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning.Yinghua Zhang; Yangqiu Song; Jian Liang; Kun Bai; Qiang Yang
Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks.Abhiroop Bhattacharjee; Priyadarshini Panda
An Adversarial Attack Defending System for Securing In-Vehicle Networks.Yi Li; Jing Lin; Kaiqi Xiong
2020-08-24
Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation.Binghui Wang; Jinyuan Jia; Xiaoyu Cao; Neil Zhenqiang Gong
2020-08-23
Developing and Defeating Adversarial Examples.Ian McDiarmid-Sterling; Allan Moser
Ptolemy: Architecture Support for Robust Deep Learning.Yiming Gan; Yuxian Qiu; Jingwen Leng; Minyi Guo; Yuhao Zhu
PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards.Masoud Hashemi; Ali Fathi
2020-08-22
Self-Competitive Neural Networks.Iman Saberi; Fathiyeh Faghih
2020-08-21
A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples.Julia Lust; Alexandru Paul Condurache
2020-08-20
Towards adversarial robustness with 01 loss neural networks.Yunzhe Xue; Meiyan Xie; Usman Roshan
On Attribution of Deepfakes.Baiwu Zhang; Jin Peng Zhou; Ilia Shumailov; Nicolas Papernot
$\beta$-Variational Classifiers Under Attack.Marco Maggipinto; Matteo Terzi; Gian Antonio Susto
Yet Another Intermediate-Level Attack.Qizhang Li; Yiwen Guo; Hao Chen
2020-08-19
Prototype-based interpretation of the functionality of neurons in winner-take-all neural networks.Ramin Zarei Sabzevar; Kamaledin Ghiasi-Shirazi; Ahad Harati
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training.Alfred Laugros; Alice Caplier; Matthieu Ospici
On $\ell_p$-norm Robustness of Ensemble Stumps and Trees.Yihan Wang; Huan Zhang; Hongge Chen; Duane Boning; Cho-Jui Hsieh
2020-08-18
Improving adversarial robustness of deep neural networks by using semantic information.Lina Wang; Rui Tang; Yawei Yue; Xingshu Chen; Wei Wang; Yi Zhu; Xuemei Zeng
Direct Adversarial Training for GANs.Ziqiang Li
Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization.Feihu Huang; Shangqian Gao; Jian Pei; Heng Huang
2020-08-17
A Deep Dive into Adversarial Robustness in Zero-Shot Learning.Mehmet Kerim Yucel; Ramazan Gokberk Cinbis; Pinar Duygulu
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems.Arindam Jati; Chin-Cheng Hsu; Monisankha Pal; Raghuveer Peri; Wael AbdAlmageed; Shrikanth Narayanan
Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection.Luca Demetrio; Scott E. Coull; Battista Biggio; Giovanni Lagorio; Alessandro Armando; Fabio Roli
Robustness Verification of Quantum Classifiers. (81%)Ji Guan; Wang Fang; Mingsheng Ying
2020-08-16
TextDecepter: Hard Label Black Box Attack on Text Classifiers.Sachin Saxena
Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks.Elahe Arani; Fahad Sarfraz; Bahram Zonooz
2020-08-15
Relevance Attack on Detectors.Sizhe Chen; Fan He; Xiaolin Huang; Kun Zhang
2020-08-14
Efficiently Constructing Adversarial Examples by Feature Watermarking.Yuexin Xiang; Wei Ren; Tiantian Li; Xianghan Zheng; Tianqing Zhu; Kim-Kwang Raymond Choo
Defending Adversarial Attacks without Adversarial Attacks in Deep Reinforcement Learning.Xinghua Qu; Yew-Soon Ong; Abhishek Gupta; Zhu Sun
On the Generalization Properties of Adversarial Training.Yue Xing; Qifan Song; Guang Cheng
2020-08-13
Adversarial Training and Provable Robustness: A Tale of Two Objectives.Jiameng Fan; Wenchao Li
Semantically Adversarial Learnable Filters.Ali Shahin Shamsabadi; Changjae Oh; Andrea Cavallaro
2020-08-12
Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise.Alex Serban; Erik Poll; Joost Visser
Defending Adversarial Examples via DNN Bottleneck Reinforcement.Wenqing Liu; Miaojing Shi; Teddy Furon; Li Li
Feature Binding with Category-Dependant MixUp for Semantic Segmentation and Adversarial Robustness.Md Amirul Islam; Matthew Kowal; Konstantinos G. Derpanis; Neil D. B. Bruce
Semantics-preserving adversarial attacks in NLP.Rahul Singh; Tarun Joshi; Vijayan N. Nair; Agus Sudjianto
2020-08-11
Revisiting Adversarially Learned Injection Attacks Against Recommender Systems.Jiaxi Tang; Hongyi Wen; Ke Wang
2020-08-10
Informative Dropout for Robust Representation Learning: A Shape-bias Perspective.Baifeng Shi; Dinghuai Zhang; Qi Dai; Zhanxing Zhu; Yadong Mu; Jingdong Wang
FireBERT: Hardening BERT-based classifiers against adversarial attack.Gunnar Mein; Kevin Hartman; Andrew Morris
2020-08-09
Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems.Mohammad J. Hashemi; Eric Keller
Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks.Xiaosen Wang; Yichen Yang; Yihe Deng; Kun He
2020-08-08
Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length.Linhai Ma; Liang Liang
2020-08-07
Visual Attack and Defense on Text.Shengjun Liu; Ningkang Jiang; Yuanbin Wu
Optimizing Information Loss Towards Robust Neural Networks.Philip Sperl; Konstantin Böttinger
Adversarial Examples on Object Recognition: A Comprehensive Survey.Alex Serban; Erik Poll; Joost Visser
2020-08-06
Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations.Ziquan Liu; Yufei Cui; Antoni B. Chan
Stronger and Faster Wasserstein Adversarial Attacks.Kaiwen Wu; Allen Houze Wang; Yaoliang Yu
2020-08-05
One word at a time: adversarial attacks on retrieval models.Nisarg Raval; Manisha Verma
Robust Deep Reinforcement Learning through Adversarial Loss.Tuomas Oikarinen; Wang Zhang; Alexandre Megretski; Luca Daniel; Tsui-Wei Weng
2020-08-04
Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples.Xiaojun Jia; Xingxing Wei; Xiaochun Cao; Xiaoguang Han
TREND: Transferability based Robust ENsemble Design.Deepak Ravikumar; Sangamesh Kodge; Isha Garg; Kaushik Roy
Can Adversarial Weight Perturbations Inject Neural Backdoors?Siddhant Garg; Adarsh Kumar; Vibhor Goel; Yingyu Liang
Entropy Guided Adversarial Model for Weakly Supervised Object Localization.Sabrina Narimene Benassou; Wuzhen Shi; Feng Jiang
2020-08-03
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks.Haoqiang Guo; Lu Peng; Jian Zhang; Fang Qi; Lide Duan
Anti-Bandit Neural Architecture Search for Model Defense.Hanlin Chen; Baochang Zhang; Song Xue; Xuan Gong; Hong Liu; Rongrong Ji; David Doermann
2020-08-01
Efficient Adversarial Attacks for Visual Object Tracking.Siyuan Liang; Xingxing Wei; Siyuan Yao; Xiaochun Cao
Trojaning Language Models for Fun and Profit.Xinyang Zhang; Zheng Zhang; Shouling Ji; Ting Wang
2020-07-31
Vulnerability Under Adversarial Machine Learning: Bias or Variance?Hossein Aboutalebi; Mohammad Javad Shafiee; Michelle Karg; Christian Scharfenberger; Alexander Wong
Physical Adversarial Attack on Vehicle Detector in the Carla Simulator.Tong Wu; Xuefei Ning; Wenshuo Li; Ranran Huang; Huazhong Yang; Yu Wang
Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers.Brian Kim; Yalin E. Sagduyu; Tugba Erpek; Kemal Davaslioglu; Sennur Ulukus
TEAM: We Need More Powerful Adversarial Examples for DNNs.Yaguan Qian; Ximin Zhang; Bin Wang; Wei Li; Zhaoquan Gu; Haijiang Wang; Wassim Swaileh
2020-07-30
Black-box Adversarial Sample Generation Based on Differential Evolution.Junyu Lin; Lei Xu; Yingqi Liu; Xiangyu Zhang
A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks.Yi Zeng; Han Qiu; Gerard Memmi; Meikang Qiu
2020-07-29
End-to-End Adversarial White Box Attacks on Music Instrument Classification.Katharina Prinz; Arthur Flexer
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data.Kai Steverson; Jonathan Mullin; Metin Ahiskali
Stylized Adversarial Defense.Muzammal Naseer; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Fatih Porikli
Generative Classifiers as a Basis for Trustworthy Computer Vision.Radek Mackowiak; Lynton Ardizzone; Ullrich Köthe; Carsten Rother
Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers.Jayaram Raghuram; Varun Chandrasekaran; Somesh Jha; Suman Banerjee
2020-07-28
Cassandra: Detecting Trojaned Networks from Adversarial Perturbations.Xiaoyu Zhang; Ajmal Mian; Rohit Gupta; Nazanin Rahnavard; Mubarak Shah
Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning.Jirong Yi; Raghu Mudumbai; Weiyu Xu
Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training.Anna-Kathrin Kopetzki; Stephan Günnemann
Label-Only Membership Inference Attacks.Christopher A. Choquette-Choo; Florian Tramer; Nicholas Carlini; Nicolas Papernot
2020-07-27
Attacking and Defending Machine Learning Applications of Public Cloud.Dou Goodman; Hao Xin
KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real Transfer for Robotics Manipulation.En Yen Puang; Keng Peng Tee; Wei Jing
From Sound Representation to Model Robustness.Mohamad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing.Yi Zhang; Jitao Sang
2020-07-26
RANDOM MASK: Towards Robust Convolutional Neural Networks.Tiange Luo; Tianle Cai; Mengxiao Zhang; Siyu Chen; Liwei Wang
Robust Collective Classification against Structural Attacks.Kai Zhou; Yevgeniy Vorobeychik
Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection. (1%)Elizabeth Newman; Lars Ruthotto; Joseph Hart; Bart van Bloemen Waanders
2020-07-25
MirrorNet: Bio-Inspired Adversarial Attack for Camouflaged Object Segmentation.Jinnan Yan; Trung-Nghia Le; Khanh-Duy Nguyen; Minh-Triet Tran; Thanh-Toan Do; Tam V. Nguyen
Adversarial Privacy-preserving Filter.Jiaming Zhang; Jitao Sang; Xian Zhao; Xiaowen Huang; Yanfeng Sun; Yongli Hu
MP3 Compression To Diminish Adversarial Noise in End-to-End Speech Recognition.Iustina Andronic; Ludwig Kürzinger; Edgar Ricardo Chavez Rosas; Gerhard Rigoll; Bernhard U. Seeber
2020-07-24
Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation. (1%)Luyu Yang; Yan Wang; Mingfei Gao; Abhinav Shrivastava; Kilian Q. Weinberger; Wei-Lun Chao; Ser-Nam Lim
2020-07-23
Provably Robust Adversarial Examples.Dimitar I. Dimitrov; Gagandeep Singh; Timon Gehr; Martin Vechev
2020-07-22
SOCRATES: Towards a Unified Platform for Neural Network Verification.Long H. Pham; Jiaying Li; Jun Sun
Adversarial Training Reduces Information and Improves Transferability.Matteo Terzi; Alessandro Achille; Marco Maggipinto; Gian Antonio Susto
Robust Machine Learning via Privacy/Rate-Distortion Theory.Ye Wang; Shuchin Aeron; Adnan Siraj Rakin; Toshiaki Koike-Akino; Pierre Moulin
Threat of Adversarial Attacks on Face Recognition: A Comprehensive Survey.Fatemeh Vakhshiteh; Raghavendra Ramachandra; Ahmad Nickabadi
2020-07-21
Audio Adversarial Examples for Robust Hybrid CTC/Attention Speech Recognition.Ludwig Kürzinger; Edgar Ricardo Chavez Rosas; Lujun Li; Tobias Watzel; Gerhard Rigoll
Towards Visual Distortion in Black-Box Attacks.Nannan Li; Zhenzhong Chen
2020-07-20
DeepNNK: Explaining deep models and their generalization using polytope interpolation.Sarath Shekkizhar; Antonio Ortega
Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks.Nupur Thakur; Yuzhen Ding; Baoxin Li
Robust Tracking against Adversarial Attacks.Shuai Jia; Chao Ma; Yibing Song; Xiaokang Yang
Scaling Polyhedral Neural Network Verification on GPUs.Christoph Müller; François Serre; Gagandeep Singh; Markus Püschel; Martin Vechev
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers.Yuzhen Ding; Nupur Thakur; Baoxin Li
2020-07-19
Semantic Equivalent Adversarial Data Augmentation for Visual Question Answering.Ruixue Tang; Chao Ma; Wei Emma Zhang; Qi Wu; Xiaokang Yang
Exploiting vulnerabilities of deep neural networks for privacy protection.Ricardo Sanchez-Matilla; Chau Yi Li; Ali Shahin Shamsabadi; Riccardo Mazzon; Andrea Cavallaro
Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency.Shasha Li; Shitong Zhu; Sudipta Paul; Amit Roy-Chowdhury; Chengyu Song; Srikanth Krishnamurthy; Ananthram Swami; Kevin S Chan
Adversarial Immunization for Improving Certifiable Robustness on Graphs.Shuchang Tao; Huawei Shen; Qi Cao; Liang Hou; Xueqi Cheng
2020-07-18
DDR-ID: Dual Deep Reconstruction Networks Based Image Decomposition for Anomaly Detection.Dongyun Lin; Yiqun Li; Shudong Xie; Tin Lay Nwe; Sheng Dong
Towards Quantum-Secure Authentication and Key Agreement via Abstract Multi-Agent Interaction. (1%)Ibrahim H. Ahmed; Josiah P. Hanna; Elliot Fosong; Stefano V. Albrecht
2020-07-17
Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of Multimodal Data with Adversarial Defense.Sayeed Shafayet Chowdhury; Kaji Mejbaul Islam; Rouhan Noor
Neural Networks with Recurrent Generative Feedback.Yujia Huang; James Gornet; Sihui Dai; Zhiding Yu; Tan Nguyen; Doris Y. Tsao; Anima Anandkumar
2020-07-16
Understanding and Diagnosing Vulnerability under Adversarial Attacks.Haizhong Zheng; Ziqi Zhang; Honglak Lee; Atul Prakash
Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources.Yun-Yun Tsai; Pin-Yu Chen; Tsung-Yi Ho
Accelerated Stochastic Gradient-free and Projection-free Methods.Feihu Huang; Lue Tao; Songcan Chen
Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data.Julian Bitterwolf; Alexander Meinke; Matthias Hein
An Empirical Study on the Robustness of NAS based Architectures.Chaitanya Devaguptapu; Devansh Agarwal; Gaurav Mittal; Vineeth N Balasubramanian
Do Adversarially Robust ImageNet Models Transfer Better?Hadi Salman; Andrew Ilyas; Logan Engstrom; Ashish Kapoor; Aleksander Madry
Learning perturbation sets for robust machine learning.Eric Wong; J. Zico Kolter
On Robustness and Transferability of Convolutional Neural Networks. (1%)Josip Djolonga; Jessica Yung; Michael Tschannen; Rob Romijnders; Lucas Beyer; Alexander Kolesnikov; Joan Puigcerver; Matthias Minderer; Alexander D'Amour; Dan Moldovan; Sylvain Gelly; Neil Houlsby; Xiaohua Zhai; Mario Lucic
Less is More: A privacy-respecting Android malware classifier using Federated Learning. (1%)Rafa Gálvez; Veelasha Moonsamy; Claudia Diaz
2020-07-15
A Survey of Privacy Attacks in Machine Learning.Maria Rigaki; Sebastian Garcia
Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels.Wenjie Wan; Zhaodi Zhang; Yiwei Zhu; Min Zhang; Fu Song
A Survey on Security Attacks and Defense Techniques for Connected and Autonomous Vehicles.Minh Pham; Kaiqi Xiong
2020-07-14
Towards robust sensing for Autonomous Vehicles: An adversarial perspective.Apostolos Modas; Ricardo Sanchez-Matilla; Pascal Frossard; Andrea Cavallaro
Robustifying Reinforcement Learning Agents via Action Space Adversarial Training.Kai Liang Tan; Yasaman Esfandiari; Xian Yeow Lee; Aakanksha; Soumik Sarkar
Bounding The Number of Linear Regions in Local Area for Neural Networks with ReLU Activations.Rui Zhu; Bo Lin; Haixu Tang
Multitask Learning Strengthens Adversarial Robustness.Chengzhi Mao; Amogh Gupta; Vikram Nitin; Baishakhi Ray; Shuran Song; Junfeng Yang; Carl Vondrick
Adversarial Examples and Metrics.Nico Döttling; Kathrin Grosse; Michael Backes; Ian Molloy
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows.Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack.Yupeng Cheng; Qing Guo; Felix Juefei-Xu; Wei Feng; Shang-Wei Lin; Weisi Lin; Yang Liu
Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components.Ken Alparslan; Yigit Alparslan; Matthew Burlick
Towards a Theoretical Understanding of the Robustness of Variational Autoencoders.Alexander Camuto; Matthew Willetts; Stephen Roberts; Chris Holmes; Tom Rainforth
2020-07-13
A simple defense against adversarial attacks on heatmap explanations.Laura Rieger; Lars Kai Hansen
Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations.Chaoning Zhang; Philipp Benz; Tooba Imtiaz; In-So Kweon
Adversarial robustness via robust low rank representations.Pranjal Awasthi; Himanshu Jain; Ankit Singh Rawat; Aravindan Vijayaraghavan
Security and Machine Learning in the Real World.Ivan Evtimov; Weidong Cui; Ece Kamar; Emre Kiciman; Tadayoshi Kohno; Jerry Li
Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes.Satya Narayan Shukla; Anit Kumar Sahu; Devin Willmott; J. Zico Kolter
Calling Out Bluff: Attacking the Robustness of Automatic Scoring Systems with Simple Adversarial Testing.Yaman Kumar; Mehar Bhatia; Anubha Kabra; Jessy Junyi Li; Di Jin; Rajiv Ratn Shah
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems.Hadi Abdullah; Kevin Warren; Vincent Bindschaedler; Nicolas Papernot; Patrick Traynor
Patch-wise Attack for Fooling Deep Neural Network.Lianli Gao; Qilong Zhang; Jingkuan Song; Xianglong Liu; Heng Tao Shen
2020-07-12
Adversarial jamming attacks and defense strategies via adaptive deep reinforcement learning.Feng Wang; Chen Zhong; M. Cenk Gursoy; Senem Velipasalar
Generating Fluent Adversarial Examples for Natural Languages.Huangzhao Zhang; Hao Zhou; Ning Miao; Lei Li
Probabilistic Jacobian-based Saliency Maps Attacks.Théo Combey; António Loison; Maxime Faucher; Hatem Hajri
2020-07-11
Understanding Object Detection Through An Adversarial Lens.Ka-Ho Chow; Ling Liu; Mehmet Emre Gursoy; Stacey Truex; Wenqi Wei; Yanzhao Wu
ManiGen: A Manifold Aided Black-box Generator of Adversarial Examples.Guanxiong Liu; Issa Khalil; Abdallah Khreishah; Abdulelah Algosaibi; Adel Aldalbahi; Mohammed Alaneem; Abdulaziz Alhumam; Mohammed Anan
Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification. (15%)Francisco Utrera; Evan Kravitz; N. Benjamin Erichson; Rajiv Khanna; Michael W. Mahoney
2020-07-10
Improved Detection of Adversarial Images Using Deep Neural Networks.Yutong Gao; Yi Pan
Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection.Qingsong Yao; Zecheng He; Hu Han; S. Kevin Zhou
Generating Adversarial Inputs Using A Black-box Differential Technique.João Batista Pereira Matos Júnior; Lucas Carvalho Cordeiro; Marcelo d'Amorim; Xiaowei Huang
2020-07-09
Improving Adversarial Robustness by Enforcing Local and Global Compactness.Anh Bui; Trung Le; He Zhao; Paul Montague; Olivier deVel; Tamas Abraham; Dinh Phung
Boundary thickness and robustness in learning models.Yaoqing Yang; Rajiv Khanna; Yaodong Yu; Amir Gholami; Kurt Keutzer; Joseph E. Gonzalez; Kannan Ramchandran; Michael W. Mahoney
Node Copying for Protection Against Graph Neural Network Topology Attacks.Florence Regol; Soumyasundar Pal; Mark Coates
Efficient detection of adversarial images.Darpan Kumar Yadav; Kartik Mundra; Rahul Modpur; Arpan Chattopadhyay; Indra Narayan Kar
2020-07-08
How benign is benign overfitting?Amartya Sanyal; Puneet K Dokania; Varun Kanade; Philip H. S. Torr
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations.Giulio Lovisotto; Henry Turner; Ivo Sluganovic; Martin Strohmeier; Ivan Martinovic
RobFR: Benchmarking Adversarial Robustness on Face Recognition.Xiao Yang; Dingcheng Yang; Yinpeng Dong; Hang Su; Wenjian Yu; Jun Zhu
A Critical Evaluation of Open-World Machine Learning.Liwei Song; Vikash Sehwag; Arjun Nitin Bhagoji; Prateek Mittal
On the relationship between class selectivity, dimensionality, and robustness.Matthew L. Leavitt; Ari S. Morcos
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs.Rana Abou Khamis; Ashraf Matrawy
2020-07-07
Robust Learning with Frequency Domain Regularization.Weiyu Guo; Yidong Ouyang
Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability.Utku Ozbulak; Jonathan Peck; Wesley De Neve; Bart Goossens; Yvan Saeys; Arnout Van Messem
Fast Training of Deep Neural Networks Robust to Adversarial Perturbations.Justin Goodwin; Olivia Brown; Victoria Helus
Making Adversarial Examples More Transferable and Indistinguishable.Junhua Zou; Yexin Duan; Boyu Li; Wu Zhang; Yu Pan; Zhisong Pan
Detection as Regression: Certified Object Detection by Median Smoothing.Ping-yeh Chiang; Michael J. Curry; Ahmed Abdelkader; Aounon Kumar; John Dickerson; Tom Goldstein
2020-07-06
Certifying Decision Trees Against Evasion Attacks by Program Analysis.Stefano Calzavara; Pietro Ferrara; Claudio Lucchese
On Data Augmentation and Adversarial Risk: An Empirical Analysis.Hamid Eghbal-zadeh; Khaled Koutini; Paul Primus; Verena Haunschmid; Michal Lewandowski; Werner Zellinger; Bernhard A. Moser; Gerhard Widmer
Understanding and Improving Fast Adversarial Training.Maksym Andriushchenko; Nicolas Flammarion
Black-box Adversarial Example Generation with Normalizing Flows.Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
2020-07-05
Adversarial Learning in the Cyber Security Domain.Ihai Rosenberg; Asaf Shabtai; Yuval Elovici; Lior Rokach
2020-07-04
On Connections between Regularizations for Improving DNN Robustness.Yiwen Guo; Long Chen; Yurong Chen; Changshui Zhang
Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors.Zijian Jiang; Jianwen Zhou; Haiping Huang
Deep Active Learning via Open Set Recognition. (1%)Jaya Krishna Mandivarapu; Blake Camp; Rolando Estrada
2020-07-03
Towards Robust Deep Learning with Ensemble Networks and Noisy Layers.Yuting Liang; Reza Samavi
2020-07-02
Efficient Proximal Mapping of the 1-path-norm of Shallow Networks.Fabian Latorre; Paul Rolland; Nadav Hallak; Volkan Cevher
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment.Xabier Echeberria-Barrio; Amaia Gil-Lerchundi; Ines Goicoechea-Telleria; Raul Orduna-Urrutia
Decoder-free Robustness Disentanglement without (Additional) Supervision.Yifei Wang; Dan Peng; Furui Liu; Zhenguo Li; Zhitang Chen; Jiansheng Yang
Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring.Zhihui Shao; Jianyi Yang; Shaolei Ren
Trace-Norm Adversarial Examples.Ehsan Kazemi; Thomas Kerdreux; Liqiang Wang
Generating Adversarial Examples with Controllable Non-transferability.Renzhi Wang; Tianwei Zhang; Xiaofei Xie; Lei Ma; Cong Tian; Felix Juefei-Xu; Yang Liu
2020-07-01
Unifying Model Explainability and Robustness via Machine-Checkable Concepts.Vedant Nanda; Till Speicher; John P. Dickerson; Krishna P. Gummadi; Muhammad Bilal Zafar
Measuring Robustness to Natural Distribution Shifts in Image Classification.Rohan Taori; Achal Dave; Vaishaal Shankar; Nicholas Carlini; Benjamin Recht; Ludwig Schmidt
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks.Kishor Datta Gupta; Dipankar Dasgupta; Zahid Akhtar
Query-Free Adversarial Transfer via Undertrained Surrogates.Chris Miller; Soroush Vosoughi
Adversarial Example Games.Avishek Joey Bose; Gauthier Gidel; Hugo Berard; Andre Cianflone; Pascal Vincent; Simon Lacoste-Julien; William L. Hamilton
Robustness against Relational Adversary.Yizhen Wang; Xiaozhu Meng; Ke Wang; Mihai Christodorescu; Somesh Jha
A Le Cam Type Bound for Adversarial Learning and Applications.Qiuling Xu; Kevin Bello; Jean Honorio
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey.Samuel Henrique Silva; Peyman Najafirad
2020-06-30
Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures.Jiachen Sun; Yulong Cao; Qi Alfred Chen; Z. Morley Mao
Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection.Deqiang Li; Qianmu Li
Black-box Certification and Learning under Adversarial Perturbations.Hassan Ashtiani; Vinayak Pathak; Ruth Urner
Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications.Eric Wong; Tim Schneider; Joerg Schmitt; Frank R. Schmidt; J. Zico Kolter
Generating Adversarial Examples with an Optimized Quality.Aminollah Khormali; DaeHun Nyang; David Mohaisen
2020-06-29
Harnessing Adversarial Distances to Discover High-Confidence Errors.Walter Bennette; Karsten Maurer; Sean Sisti
Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification.Chen Dan; Yuting Wei; Pradeep Ravikumar
Legal Risks of Adversarial Machine Learning Research.Ram Shankar Siva Kumar; Jonathon Penney; Bruce Schneier; Kendra Albert
Biologically Inspired Mechanisms for Adversarial Robustness.Manish V. Reddy; Andrzej Banburski; Nishka Pant; Tomaso Poggio
Improving Uncertainty Estimates through the Relationship with Adversarial Robustness.Yao Qin; Xuezhi Wang; Alex Beutel; Ed H. Chi
2020-06-28
FDA3 : Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications.Yunfei Song; Tian Liu; Tongquan Wei; Xiangfeng Wang; Zhe Tao; Mingsong Chen
Geometry-Inspired Top-k Adversarial Perturbations.Nurislam Tursynbek; Aleksandr Petiushko; Ivan Oseledets
2020-06-26
Orthogonal Deep Models As Defense Against Black-Box Attacks.Mohammad A. A. K. Jalwana; Naveed Akhtar; Mohammed Bennamoun; Ajmal Mian
A Unified Framework for Analyzing and Detecting Malicious Examples of DNN Models.Kaidi Jin; Tianwei Zhang; Chao Shen; Yufei Chen; Ming Fan; Chenhao Lin; Ting Liu
Informative Outlier Matters: Robustifying Out-of-distribution Detection Using Outlier Mining.Jiefeng Chen; Yixuan Li; Xi Wu; Yingyu Liang; Somesh Jha
Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks.Ali Mirzaeian; Jana Kosecka; Houman Homayoun; Tinoosh Mohsenin; Avesta Sasan
2020-06-25
Smooth Adversarial Training.Cihang Xie; Mingxing Tan; Boqing Gong; Alan Yuille; Quoc V. Le
Proper Network Interpretability Helps Adversarial Robustness in Classification.Akhilan Boopathy; Sijia Liu; Gaoyuan Zhang; Cynthia Liu; Pin-Yu Chen; Shiyu Chang; Luca Daniel
Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability.Kaizhao Liang; Jacky Y. Zhang; Boxin Wang; Zhuolin Yang; Oluwasanmi Koyejo; Bo Li
Can 3D Adversarial Logos Cloak Humans?Yi Wang; Jingyang Zhou; Tianlong Chen; Sijia Liu; Shiyu Chang; Chandrajit Bajaj; Zhangyang Wang
2020-06-24
Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks.Huiying Li; Shawn Shan; Emily Wenger; Jiayun Zhang; Haitao Zheng; Ben Y. Zhao
Defending against adversarial attacks on medical imaging AI system, classification or detection?Xin Li; Deng Pan; Dongxiao Zhu
Compositional Explanations of Neurons.Jesse Mu; Jacob Andreas
Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness.Xingjun Ma; Linxi Jiang; Hanxun Huang; Zejia Weng; James Bailey; Yu-Gang Jiang
2020-06-23
RayS: A Ray Searching Method for Hard-label Adversarial Attack.Jinghui Chen; Quanquan Gu
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks.Francesco Croce; Maksym Andriushchenko; Naman D. Singh; Nicolas Flammarion; Matthias Hein
Adversarial Robustness of Deep Sensor Fusion Models.Shaojie Wang; Tong Wu; Ayan Chakrabarti; Yevgeniy Vorobeychik
2020-06-22
Learning to Generate Noise for Multi-Attack Robustness.Divyam Madaan; Jinwoo Shin; Sung Ju Hwang
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models.Cassidy Laidlaw; Sahil Singla; Soheil Feizi
2020-06-21
Network Moments: Extensions and Sparse-Smooth Attacks.Modar Alfadly; Adel Bibi; Emilio Botero; Salman Alsubaihi; Bernard Ghanem
2020-06-20
How do SGD hyperparameters in natural training affect adversarial robustness?Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble.Yi Zhou; Xiaoqing Zheng; Cho-Jui Hsieh; Kai-wei Chang; Xuanjing Huang
Stochastic Shortest Path with Adversarially Changing Costs. (1%)Aviv Rosenberg; Yishay Mansour
2020-06-19
Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples.Josue Ortega Caro; Yilong Ju; Ryan Pyle; Sourav Dey; Wieland Brendel; Fabio Anselmi; Ankit Patel
A general framework for defining and optimizing robustness.Alessandro Tibo; Manfred Jaeger; Kim G. Larsen
Analyzing the Real-World Applicability of DGA Classifiers.Arthur Drichel; Ulrike Meyer; Samuel Schüppen; Dominik Teubert
Towards an Adversarially Robust Normalization Approach.Muhammad Awais; Fahad Shamshad; Sung-Ho Bae
Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers.I. Fursov; A. Zaytsev; N. Kluchnikov; A. Kravchenko; E. Burnaev
Adversarial Attacks for Multi-view Deep Models.Xuli Sun; Shiliang Sun
2020-06-18
Local Competition and Uncertainty for Adversarial Robustness in Deep Learning.Antonios Alexos; Konstantinos P. Panousis; Sotirios Chatzis
Dissecting Deep Networks into an Ensemble of Generative Classifiers for Robust Predictions.Lokender Tiwari; Anish Madan; Saket Anand; Subhashis Banerjee
The Dilemma Between Dimensionality Reduction and Adversarial Robustness.Sheila Alemany; Niki Pissinou
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples.Kaleel Mahmood; Deniz Gurevin; Marten van Dijk; Phuong Ha Nguyen
2020-06-17
Noise or Signal: The Role of Image Backgrounds in Object Recognition.Kai Xiao; Logan Engstrom; Andrew Ilyas; Aleksander Madry
Adversarial Examples Detection and Analysis with Layer-wise Autoencoders.Bartosz Wójcik; Paweł Morawiecki; Marek Śmieja; Tomasz Krzyżek; Przemysław Spurek; Jacek Tabor
Adversarial Defense by Latent Style Transformations.Shuo Wang; Surya Nepal; Alsharif Abuadbba; Carsten Rudolph; Marthie Grobler
Disrupting Deepfakes with an Adversarial Attack that Survives Training.Eran Segalis
Universal Lower-Bounds on Classification Error under Adversarial Attacks and Random Corruption.Elvis Dohmatob
Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning.Vedant Nanda; Samuel Dooley; Sahil Singla; Soheil Feizi; John P. Dickerson
2020-06-16
Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets.Zhihui Shao; Jianyi Yang; Shaolei Ren
SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness.Mohammadamin Tavakoli; Forest Agostinelli; Pierre Baldi
Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs.Christopher Brix; Thomas Noll
On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron.Sergey Bochkanov
AdvMind: Inferring Adversary Intent of Black-Box Attacks.Ren Pang; Xinyang Zhang; Shouling Ji; Xiapu Luo; Ting Wang
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs.Peijie Chen; Chirag Agarwal; Anh Nguyen
2020-06-15
Total Deep Variation: A Stable Regularizer for Inverse Problems.Erich Kobler; Alexander Effland; Karl Kunisch; Thomas Pock
DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder.Ao Zhang; Jinwen Ma
Improving Adversarial Robustness via Unlabeled Out-of-Domain Data.Zhun Deng; Linjun Zhang; Amirata Ghorbani; James Zou
Fast & Accurate Method for Bounding the Singular Values of Convolutional Layers with Application to Lipschitz Regularization.Alexandre Araujo; Benjamin Negrevergne; Yann Chevaleyre; Jamal Atif
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks.Xiang Zhang; Marinka Zitnik
CG-ATTACK: Modeling the Conditional Distribution of Adversarial Perturbations to Boost Black-Box Attack.Yan Feng; Baoyuan Wu; Yanbo Fan; Li Liu; Zhifeng Li; Shutao Xia
Multiscale Deep Equilibrium Models.Shaojie Bai; Vladlen Koltun; J. Zico Kolter
2020-06-14
GradAug: A New Regularization Method for Deep Neural Networks.Taojiannan Yang; Sijie Zhu; Chen Chen
PatchUp: A Regularization Technique for Convolutional Neural Networks.Mojtaba Faramarzi; Mohammad Amini; Akilesh Badrinaaraayanan; Vikas Verma; Sarath Chandar
On Saliency Maps and Adversarial Robustness.Puneet Mangla; Vedant Singh; Vineeth N Balasubramanian
On the transferability of adversarial examples between convex and 01 loss models.Yunzhe Xue; Meiyan Xie; Usman Roshan
Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems.Yuanjiang Cao; Xiaocong Chen; Lina Yao; Xianzhi Wang; Wei Emma Zhang
Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks.Sarada Krithivasan; Sanchari Sen; Anand Raghunathan
Duplicity Games for Deception Design with an Application to Insider Threat Mitigation. (11%)Linan Huang; Quanyan Zhu
2020-06-13
The Pitfalls of Simplicity Bias in Neural Networks.Harshay Shah; Kaustav Tamuly; Aditi Raghunathan; Prateek Jain; Praneeth Netrapalli
Adversarial Self-Supervised Contrastive Learning.Minseon Kim; Jihoon Tack; Sung Ju Hwang
Rethinking Clustering for Robustness.Motasem Alfarra; Juan C. Pérez; Adel Bibi; Ali Thabet; Pablo Arbeláez; Bernard Ghanem
Defensive Approximation: Securing CNNs using Approximate Computing.Amira Guesmi; Ihsen Alouani; Khaled Khasawneh; Mouna Baklouti; Tarek Frikha; Mohamed Abid; Nael Abu-Ghazaleh
2020-06-12
Provably Robust Metric Learning.Lu Wang; Xuanqing Liu; Jinfeng Yi; Yuan Jiang; Cho-Jui Hsieh
Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces.Chaofei Yang; Lei Ding; Yiran Chen; Hai Li
D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack.Qiuling Xu; Guanhong Tao; Xiangyu Zhang
Targeted Adversarial Perturbations for Monocular Depth Prediction.Alex Wong; Safa Cicek; Stefano Soatto
2020-06-11
Large-Scale Adversarial Training for Vision-and-Language Representation Learning.Zhe Gan; Yen-Chun Chen; Linjie Li; Chen Zhu; Yu Cheng; Jingjing Liu
Smoothed Geometry for Robust Attribution.Zifan Wang; Haofan Wang; Shakul Ramkumar; Matt Fredrikson; Piotr Mardziel; Anupam Datta
Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks.Nataniel Ruiz; Sarah Adel Bargal; Stan Sclaroff
Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification.Xu Li; Na Li; Jinghua Zhong; Xixin Wu; Xunying Liu; Dan Su; Dong Yu; Helen Meng
Robustness to Adversarial Attacks in Learning-Enabled Controllers.Zikang Xiong; Joe Eappen; He Zhu; Suresh Jagannathan
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples.Richard Y. Zhang
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors.Suzanne C. Wetstein; Cristina González-Gonzalo; Gerda Bortsova; Bart Liefers; Florian Dubost; Ioannis Katramados; Laurens Hogeweg; Bram van Ginneken; Josien P. W. Pluim; Marleen de Bruijne; Clara I. Sánchez; Mitko Veta
Achieving robustness in classification using optimal transport with hinge regularization.Mathieu Serrurier; Franck Mamalet; Alberto González-Sanz; Thibaut Boissin; Jean-Michel Loubes; Eustasio del Barrio
Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks. (96%)Kathrin Grosse; Taesung Lee; Battista Biggio; Youngja Park; Michael Backes; Ian Molloy
2020-06-10
Evaluating Graph Vulnerability and Robustness using TIGER.Scott Freitas; Duen Horng Chau
Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features.Krishna Kanth Nakka; Mathieu Salzmann
Deterministic Gaussian Averaged Neural Networks.Ryan Campbell; Chris Finlay; Adam M Oberman
Interpolation between Residual and Non-Residual Networks.Zonghan Yang; Yang Liu; Chenglong Bao; Zuoqiang Shi
Towards Certified Robustness of Metric Learning.Xiaochen Yang; Yiwen Guo; Mingzhi Dong; Jing-Hao Xue
2020-06-09
Towards an Intrinsic Definition of Robustness for a Classifier.Théo Giraudon; Vincent Gripon; Matthias Löwe; Franck Vermet
Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access.Jiaqi Ma; Shuangrui Ding; Qiaozhu Mei
GAP++: Learning to generate target-conditioned adversarial examples.Xiaofeng Mao; Yuefeng Chen; Yuhong Li; Yuan He; Hui Xue
Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers.Fangfang Yang; Shaolei Ren
Provable tradeoffs in adversarially robust classification.Edgar Dobriban; Hamed Hassani; David Hong; Alexander Robey
Distributional Robust Batch Contextual Bandits. (1%)Nian Si; Fan Zhang; Zhengyuan Zhou; Jose Blanchet
2020-06-08
Calibrated neighborhood aware confidence measure for deep metric learning.Maryna Karpusha; Sunghee Yun; Istvan Fehervari
A Self-supervised Approach for Adversarial Robustness.Muzammal Naseer; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Fatih Porikli
Distributional Robustness with IPMs and links to Regularization and GANs.Hisham Husain
On Universalized Adversarial and Invariant Perturbations.Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
Tricking Adversarial Attacks To Fail.Blerta Lindqvist
Global Robustness Verification Networks.Weidi Sun; Yuteng Lu; Xiyue Zhang; Zhanxing Zhu; Meng Sun
Trade-offs between membership privacy & adversarially robust learning.Jamie Hayes
Adversarial Feature Desensitization.Pouya Bashivan; Reza Bayat; Adam Ibrahim; Kartik Ahuja; Mojtaba Faramarzi; Touraj Laleh; Blake Aaron Richards; Irina Rish
2020-06-07
Extensions and limitations of randomized smoothing for robustness guarantees.Jamie Hayes
Uncertainty-Aware Deep Classifiers using Generative Models.Murat Sensoy; Lance Kaplan; Federico Cerutti; Maryam Saleki
2020-06-06
Unique properties of adversarially trained linear classifiers on Gaussian data.Jamie Hayes
Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?Stefano Melacci; Gabriele Ciravegna; Angelo Sotgiu; Ambra Demontis; Battista Biggio; Marco Gori; Fabio Roli
2020-06-05
Adversarial Image Generation and Training for Deep Convolutional Neural Networks.Ronghua Shi; Hai Shu; Hongtu Zhu; Ziqi Chen
Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing.Vishaal Krishnan; Abed AlRahman Al Makdah; Fabio Pasqualetti
Sponge Examples: Energy-Latency Attacks on Neural Networks.Ilia Shumailov; Yiren Zhao; Daniel Bates; Nicolas Papernot; Robert Mullins; Ross Anderson
2020-06-04
Characterizing the Weight Space for Different Learning Models.Saurav Musunuru; Jay N. Paranjape; Rahul Kumar Dubey; Vijendran G. Venkoparao
Towards Understanding Fast Adversarial Training.Bai Li; Shiqi Wang; Suman Jana; Lawrence Carin
Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning.Haibin Wu; Andy T. Liu; Hung-yi Lee
Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection.Omid Mohamad Nezami; Akshay Chaturvedi; Mark Dras; Utpal Garain
2020-06-02
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization.A. F. M. Shahab Uddin; Mst. Sirazam Monira; Wheemyung Shin; TaeChoong Chung; Sung-Ho Bae
Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense.Jay N. Paranjape; Rahul Kumar Dubey; Vijendran V Gopalan
Perturbation Analysis of Gradient-based Adversarial Attacks.Utku Ozbulak; Manvel Gasparyan; Wesley De Neve; Arnout Van Messem
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start.Zhuoran Liu; Martha Larson
Detecting Audio Attacks on ASR Systems with Dropout Uncertainty.Tejas Jayashankar; Jonathan Le Roux; Pierre Moulin
2020-06-01
Second-Order Provable Defenses against Adversarial Attacks.Sahil Singla; Soheil Feizi
Adversarial Attacks on Reinforcement Learning based Energy Management Systems of Extended Range Electric Delivery Vehicles.Pengyue Wang; Yan Li; Shashi Shekhar; William F. Northrop
Adversarial Attacks on Classifiers for Eye-based User Modelling.Inken Hagestedt; Michael Backes; Andreas Bulling
Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods.Kyungmi Lee; Anantha P. Chandrakasan
2020-05-31
Evaluations and Methods for Explanation through Robustness Analysis.Cheng-Yu Hsieh; Chih-Kuan Yeh; Xuanqing Liu; Pradeep Ravikumar; Seungyeon Kim; Sanjiv Kumar; Cho-Jui Hsieh
Estimating Principal Components under Adversarial Perturbations.Pranjal Awasthi; Xue Chen; Aravindan Vijayaraghavan
2020-05-30
Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training.Zheng Xu; Ali Shafahi; Tom Goldstein
2020-05-29
SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions.Mao Ye; Chengyue Gong; Qiang Liu
2020-05-28
Monocular Depth Estimators: Vulnerabilities and Attacks.Alwyn Mathew; Aditya Prakash Patra; Jimson Mathew
QEBA: Query-Efficient Boundary-Based Blackbox Attack.Huichen Li; Xiaojun Xu; Xiaolu Zhang; Shuang Yang; Bo Li
Adversarial Attacks and Defense on Texts: A Survey.Aminul Huq; Mst. Tasnim Pervin
Adversarial Robustness of Deep Convolutional Candlestick Learner.Jun-Hao Chen; Samuel Yen-Chi Chen; Yun-Cheng Tsai; Chih-Shiang Shur
2020-05-27
Enhancing Resilience of Deep Learning Networks by Means of Transferable Adversaries.Moritz Seiler; Heike Trautmann; Pascal Kerschke
Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques.Han Qiu; Yi Zeng; Qinkai Zheng; Tianwei Zhang; Meikang Qiu; Gerard Memmi
Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models.Mitch Hill; Jonathan Mitchell; Song-Chun Zhu
Calibrated Surrogate Losses for Adversarially Robust Classification.Han Bao; Clayton Scott; Masashi Sugiyama
2020-05-26
Effects of Forward Error Correction on Communications Aware Evasion Attacks.Matthew DelVecchio; Bryse Flowers; William C. Headley
Investigating a Spectral Deception Loss Metric for Training Machine Learning-based Evasion Attacks.Matthew DelVecchio; Vanessa Arndorfer; William C. Headley
Generating Semantically Valid Adversarial Questions for TableQA.Yi Zhu; Menglin Xia; Yiwei Zhou
2020-05-25
Adversarial Feature Selection against Evasion Attacks.Fei Zhang; Patrick P. K. Chan; Battista Biggio; Daniel S. Yeung; Fabio Roli
2020-05-24
Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification.Sina Däubener; Lea Schönherr; Asja Fischer; Dorothea Kolossa
SoK: Arms Race in Adversarial Malware Detection.Deqiang Li; Qianmu Li; Yanfang Ye; Shouhuai Xu
Adaptive Adversarial Logits Pairing.Shangxi Wu; Jitao Sang; Kaiyuan Xu; Guanhua Zheng; Changsheng Xu
2020-05-23
ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds.Kibok Lee; Zhuoyuan Chen; Xinchen Yan; Raquel Urtasun; Ersin Yumer
Adversarial Attack on Hierarchical Graph Pooling Neural Networks.Haoteng Tang; Guixiang Ma; Yurong Chen; Lei Guo; Wei Wang; Bo Zeng; Liang Zhan
Frontal Attack: Leaking Control-Flow in SGX via the CPU Frontend. (1%)Ivan Puddu; Moritz Schneider; Miro Haller; Srdjan Čapkun
2020-05-22
Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks.Hokuto Hirano; Kazuki Koga; Kazuhiro Takemoto
2020-05-21
Revisiting Role of Autoencoders in Adversarial Settings.Byeong Cheon Kim; Jung Uk Kim; Hakmin Lee; Yong Man Ro
Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack.Hakmin Lee; Hong Joo Lee; Seong Tae Kim; Yong Man Ro
Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition.Qing Wang; Pengcheng Guo; Lei Xie
Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning.Youngjoon Yu; Hong Joo Lee; Byeong Cheon Kim; Jung Uk Kim; Yong Man Ro
2020-05-20
Graph Structure Learning for Robust Graph Neural Networks.Wei Jin; Yao Ma; Xiaorui Liu; Xianfeng Tang; Suhang Wang; Jiliang Tang
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data.Alexander Robey; Hamed Hassani; George J. Pappas
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks.Arash Rahnama; Andrew Tseng
A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks.Yashar Deldjoo; Tommaso Di Noia; Felice Antonio Merra
Feature Purification: How Adversarial Training Performs Robust Deep Learning.Zeyuan Allen-Zhu; Yuanzhi Li
2020-05-19
Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models.Martin Kotuliak; Sandro E. Schoenborn; Andrei Dan
Bias-based Universal Adversarial Patch Attack for Automatic Check-out.Aishan Liu; Jiakai Wang; Xianglong Liu; Bowen Cao; Chongzhi Zhang; Hang Yu
2020-05-18
An Evasion Attack against ML-based Phishing URL Detectors.Bushra Sabir; M. Ali Babar; Raj Gaire
Universalization of any adversarial attack using very few test examples.Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
On Intrinsic Dataset Properties for Adversarial Machine Learning.Jeffrey Z. Pan; Nicholas Zufelt
Defending Your Voice: Adversarial Attack on Voice Conversion.Chien-yu Huang; Yist Y. Lin; Hung-yi Lee; Lin-shan Lee
Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective.Linhai Ma; Liang Liang
Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks.Linhai Ma; Liang Liang
Spatiotemporal Attacks for Embodied Agents.Aishan Liu; Tairan Huang; Xianglong Liu; Yitao Xu; Yuqing Ma; Xinyun Chen; Stephen J. Maybank; Dacheng Tao
2020-05-17
Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks.Mahdieh Abbasi; Arezoo Rajabi; Christian Gagne; Rakesh B. Bobba
2020-05-16
Universal Adversarial Perturbations: A Survey.Ashutosh Chaubey; Nikhil Agrawal; Kavya Barnwal; Keerat K. Guliani; Pramod Mehta
Encryption Inspired Adversarial Defense for Visual Classification.MaungMaung AprilPyone; Hitoshi Kiya
PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields.Chong Xiang; Arjun Nitin Bhagoji; Vikash Sehwag; Prateek Mittal
2020-05-15
How to Make 5G Communications "Invisible": Adversarial Machine Learning for Wireless Privacy.Brian Kim; Yalin E. Sagduyu; Kemal Davaslioglu; Tugba Erpek; Sennur Ulukus
Practical Traffic-space Adversarial Attacks on Learning-based NIDSs.Dongqi Han; Zhiliang Wang; Ying Zhong; Wenqi Chen; Jiahai Yang; Shuqiang Lu; Xingang Shi; Xia Yin
Initializing Perturbations in Multiple Directions for Fast Adversarial Training.Xunguang Wang; Ship Peng Xu; Eric Ke Wang
2020-05-14
Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning.Jianwen Sun; Tianwei Zhang; Xiaofei Xie; Lei Ma; Yan Zheng; Kangjie Chen; Yang Liu
Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness.Tianhang Zheng; Di Wang; Baochun Li; Jinhui Xu
A Deep Learning-based Fine-grained Hierarchical Learning Approach for Robust Malware Classification.Ahmed Abusnaina; Mohammed Abuhamad; Hisham Alasmary; Afsah Anwar; Rhongho Jang; Saeed Salem; DaeHun Nyang; David Mohaisen
2020-05-13
DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses.Yaxin Li; Wei Jin; Han Xu; Jiliang Tang
2020-05-12
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients.Chengcheng Ma; Baoyuan Wu; Shibiao Xu; Yanbo Fan; Yong Zhang; Xiaopeng Zhang; Zhifeng Li
Evaluating Ensemble Robustness Against Adversarial Attacks.George Adam; Romain Speciel
Increased-confidence adversarial examples for improved transferability of Counter-Forensic attacks.Wenjie Li; Benedetta Tondi; Rongrong Ni; Mauro Barni
Adversarial examples are useful too!Ali Borji
2020-05-11
Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers.Brian Kim; Yalin E. Sagduyu; Kemal Davaslioglu; Tugba Erpek; Sennur Ulukus
Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data.Lu Wang; Huan Zhang; Jinfeng Yi; Cho-Jui Hsieh; Yuan Jiang
2020-05-09
It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations.Samson Tan; Shafiq Joty; Min-Yen Kan; Richard Socher
Class-Aware Domain Adaptation for Improving Adversarial Robustness.Xianxu Hou; Jingxin Liu; Bolei Xu; Xiaolong Wang; Bozhi Liu; Guoping Qiu
2020-05-08
Towards Robustness against Unsuspicious Adversarial Examples.Liang Tong; Minzhe Guo; Atul Prakash; Yevgeniy Vorobeychik
2020-05-07
Efficient Exact Verification of Binarized Neural Networks.Kai Jia; Martin Rinard
Projection & Probability-Driven Black-Box Attack.Jie Li; Rongrong Ji; Hong Liu; Jianzhuang Liu; Bineng Zhong; Cheng Deng; Qi Tian
Defending Hardware-based Malware Detectors against Adversarial Attacks.Abraham Peedikayil Kuruvila; Shamik Kundu; Kanad Basu
2020-05-06
GraCIAS: Grassmannian of Corrupted Images for Adversarial Security.Ankita Shukla; Pavan Turaga; Saket Anand
Training robust neural networks using Lipschitz bounds.Patricia Pauli; Anne Koch; Julian Berberich; Paul Kohler; Frank Allgöwer
2020-05-05
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder.Guanlin Li; Shuya Ding; Jun Luo; Chang Liu
Hacking the Waveform: Generalized Wireless Adversarial Deep Learning.Francesco Restuccia; Salvatore D'Oro; Amani Al-Shawabka; Bruno Costa Rendon; Kaushik Chowdhury; Stratis Ioannidis; Tommaso Melodia
Adversarial Training against Location-Optimized Adversarial Patches.Sukrut Rao; David Stutz; Bernt Schiele
Measuring Adversarial Robustness using a Voronoi-Epsilon Adversary.Hyeongji Kim; Pekka Parviainen; Ketil Malde
2020-05-04
On the Benefits of Models with Perceptually-Aligned Gradients.Gunjan Aggarwal; Abhishek Sinha; Nupur Kumari; Mayank Singh
Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?Marco Melis; Michele Scalas; Ambra Demontis; Davide Maiorca; Battista Biggio; Giorgio Giacinto; Fabio Roli
2020-05-03
Robust Encodings: A Framework for Combating Adversarial Typos.Erik Jones; Robin Jia; Aditi Raghunathan; Percy Liang
2020-05-01
Jacks of All Trades, Masters Of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks.Neil Fendley; Max Lennon; I-Jeng Wang; Philippe Burlina; Nathan Drenkow
Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models.Bill Yuchen Lin; Seyeon Lee; Rahul Khanna; Xiang Ren
Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees.Jacob H. Seidman; Mahyar Fazlyab; Victor M. Preciado; George J. Pappas
Defense of Word-level Adversarial Attacks via Random Substitution Encoding.Zhaoyang Wang; Hongtao Wang
2020-04-30
Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks.Winston Wu; Dustin Arendt; Svitlana Volkova
Imitation Attacks and Defenses for Black-box Machine Translation Systems.Eric Wallace; Mitchell Stern; Dawn Song
Universal Adversarial Attacks with Natural Triggers for Text Classification.Liwei Song; Xinwei Yu; Hsuan-Tung Peng; Karthik Narasimhan
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness.Pu Zhao; Pin-Yu Chen; Payel Das; Karthikeyan Natesan Ramamurthy; Xue Lin
2020-04-29
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability.Nathan Inkawhich; Kevin J Liang; Binghui Wang; Matthew Inkawhich; Lawrence Carin; Yiran Chen
TAVAT: Token-Aware Virtual Adversarial Training for Language Understanding.Linyang Li; Xipeng Qiu
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP.John X. Morris; Eli Lifland; Jin Yong Yoo; Jake Grigsby; Di Jin; Yanjun Qi
2020-04-28
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks.Pranjal Awasthi; Natalie Frank; Mehryar Mohri
Minority Reports Defense: Defending Against Adversarial Patches.Michael McCoyd; Won Park; Steven Chen; Neil Shah; Ryan Roggenkemper; Minjune Hwang; Jason Xinyu Liu; David Wagner
2020-04-27
DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking.Christopher Hidey; Tuhin Chakrabarty; Tariq Alhindi; Siddharth Varia; Kriste Krstovski; Mona Diab; Smaranda Muresan
Adversarial Fooling Beyond "Flipping the Label".Konda Reddy Mopuri; Vaisakh Shaj; R. Venkatesh Babu
"Call me sexist, but...": Revisiting Sexism Detection Using Psychological Scales and Adversarial Samples. (81%)Mattia Samory; Indira Sen; Julian Kohne; Fabian Floeck; Claudia Wagner
2020-04-26
Improved Image Wasserstein Attacks and Defenses.J. Edward Hu; Adith Swaminathan; Hadi Salman; Greg Yang
Transferable Perturbations of Deep Feature Distributions.Nathan Inkawhich; Kevin J Liang; Lawrence Carin; Yiran Chen
Towards Feature Space Adversarial Attack.Qiuling Xu; Guanhong Tao; Siyuan Cheng; Xiangyu Zhang
Printing and Scanning Attack for Image Counter Forensics.Hailey James; Otkrist Gupta; Dan Raviv
2020-04-25
Improved Adversarial Training via Learned Optimizer.Yuanhao Xiong; Cho-Jui Hsieh
Enabling Fast and Universal Audio Adversarial Attack Using Generative Model.Yi Xie; Zhuohang Li; Cong Shi; Jian Liu; Yingying Chen; Bo Yuan
Harnessing adversarial examples with a surprisingly simple defense.Ali Borji
2020-04-24
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty.Xiyue Zhang; Xiaofei Xie; Lei Ma; Xiaoning Du; Qiang Hu; Yang Liu; Jianjun Zhao; Meng Sun
A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers.Arka Ghosh; Sankha Subhra Mullick; Shounak Datta; Swagatam Das; Rammohan Mallipeddi; Asit Kr. Das
Reevaluating Adversarial Examples in Natural Language.John X. Morris; Eli Lifland; Jack Lanchantin; Yangfeng Ji; Yanjun Qi
2020-04-23
Adversarial Machine Learning in Network Intrusion Detection Systems.Elie Alhajjar; Paul Maxwell; Nathaniel D. Bastian
Adversarial Attacks and Defenses: An Interpretation Perspective.Ninghao Liu; Mengnan Du; Ruocheng Guo; Huan Liu; Xia Hu
Evaluating Adversarial Robustness for Deep Neural Network Interpretability using fMRI Decoding.Patrick McClure; Dustin Moraczewski; Ka Chun Lam; Adam Thomas; Francisco Pereira
On Adversarial Examples for Biomedical NLP Tasks.Vladimir Araujo; Andres Carvallo; Carlos Aspillaga; Denis Parra
Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks.Jianhe Yuan; Zhihai He
Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation.Marvin Klingner; Andreas Bär; Tim Fingscheidt
RAIN: A Simple Approach for Robust and Accurate Image Classification Networks.Jiawei Du; Hanshu Yan; Vincent Y. F. Tan; Joey Tianyi Zhou; Rick Siow Mong Goh; Jiashi Feng
2020-04-22
CodNN -- Robust Neural Networks From Coded Classification.Netanel Raviv; Siddharth Jain; Pulakesh Upadhyaya; Jehoshua Bruck; Anxiao Andrew Jiang
Provably robust deep generative models.Filipe Condessa; Zico Kolter
QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks.Priyadarshini Panda
Adversarial examples and where to find them.Niklas Risse; Christina Göpfert; Jan Philip Göpfert
2020-04-21
Scalable Attack on Graph Data by Injecting Vicious Nodes.Jihong Wang; Minnan Luo; Fnu Suya; Jundong Li; Zijiang Yang; Qinghua Zheng
Certifying Joint Adversarial Robustness for Model Ensembles.Mainuddin Ahmad Jonas; David Evans
Probabilistic Safety for Bayesian Neural Networks.Matthew Wicker; Luca Laurenti; Andrea Patane; Marta Kwiatkowska
BERT-ATTACK: Adversarial Attack Against BERT Using BERT.Linyang Li; Ruotian Ma; Qipeng Guo; Xiangyang Xue; Xipeng Qiu
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks.Sanchari Sen; Balaraman Ravindran; Anand Raghunathan
2020-04-20
GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples.Julia Lust; Alexandru Paul Condurache
Approximate exploitability: Learning a best response in large games. (74%)Finbarr Timbers; Nolan Bard; Edward Lockhart; Marc Lanctot; Martin Schmid; Neil Burch; Julian Schrittwieser; Thomas Hubert; Michael Bowling
2020-04-19
Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning.Hongcai Xu; Junpeng Bao; Gaojie Zhang
Adversarial Training for Large Neural Language Models.Xiaodong Liu; Hao Cheng; Pengcheng He; Weizhu Chen; Yu Wang; Hoifung Poon; Jianfeng Gao
Headless Horseman: Adversarial Attacks on Transfer Learning Models.Ahmed Abdelkader; Michael J. Curry; Liam Fowl; Tom Goldstein; Avi Schwarzschild; Manli Shu; Christoph Studer; Chen Zhu
2020-04-18
Protecting Classifiers From Attacks. A Bayesian Approach.Victor Gallego; Roi Naveiro; Alberto Redondo; David Rios Insua; Fabrizio Ruggeri
Single-step Adversarial training with Dropout Scheduling.Vivek B. S.; R. Venkatesh Babu
2020-04-17
Adversarial Attack on Deep Learning-Based Splice Localization.Andras Rozsa; Zheng Zhong; Terrance E. Boult
2020-04-16
Shortcut Learning in Deep Neural Networks.Robert Geirhos; Jörn-Henrik Jacobsen; Claudio Michaelis; Richard Zemel; Wieland Brendel; Matthias Bethge; Felix A. Wichmann
2020-04-15
Targeted Attack for Deep Hashing based Retrieval.Jiawang Bai; Bin Chen; Yiming Li; Dongxian Wu; Weiwei Guo; Shu-tao Xia; En-hui Yang
A Framework for Enhancing Deep Neural Networks Against Adversarial Malware.Deqiang Li; Qianmu Li; Yanfang Ye; Shouhuai Xu
Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers.Yusi Lei; Sen Chen; Lingling Fan; Fu Song; Yang Liu
2020-04-14
On the Optimal Interaction Range for Multi-Agent Systems Under Adversarial Attack.Saad J Saleh
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions.Jon Vadillo; Roberto Santana; Jose A. Lozano
2020-04-13
Adversarial Robustness Guarantees for Random Deep Neural Networks.Giacomo De Palma; Bobak T. Kiani; Seth Lloyd
Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples.Maximilian Mozes; Pontus Stenetorp; Bennett Kleinberg; Lewis D. Griffin
Adversarial Weight Perturbation Helps Robust Generalization.Dongxian Wu; Shu-tao Xia; Yisen Wang
Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension.Adyasha Maharana; Mohit Bansal
Towards Robust Classification with Image Quality Assessment.Yeli Feng; Yiyu Cai
Towards Transferable Adversarial Attack against Deep Face Recognition.Yaoyao Zhong; Weihong Deng
2020-04-12
PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning.Chenglin Yang; Adam Kortylewski; Cihang Xie; Yinzhi Cao; Alan Yuille
2020-04-11
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images.Younghwan Na; Jun Hee Kim; Kyungsu Lee; Juhum Park; Jae Youn Hwang; Jihwan P. Choi
Certified Adversarial Robustness for Deep Reinforcement Learning.Michael Everett; Bjorn Lutjens; Jonathan P. How
Robust Large-Margin Learning in Hyperbolic Space.Melanie Weber; Manzil Zaheer; Ankit Singh Rawat; Aditya Menon; Sanjiv Kumar
Verification of Deep Convolutional Neural Networks Using ImageStars.Hoang-Dung Tran; Stanley Bak; Weiming Xiang; Taylor T. Johnson
2020-04-10
Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems.Eirini Anthi; Lowri Williams; Matilda Rhode; Pete Burnap; Adam Wedgbury
Luring of transferable adversarial perturbations in the black-box paradigm.Rémi Bernhard; Pierre-Alain Moellic; Jean-Max Dutertre
2020-04-09
Blind Adversarial Training: Balance Accuracy and Robustness.Haidong Xie; Xueshuang Xiang; Naijin Liu; Bin Dong
Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness.Haidong Xie; Lixin Qian; Xueshuang Xiang; Naijin Liu
On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems.Ivan Y. Tyukin; Desmond J. Higham; Alexander N. Gorban
2020-04-08
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking.Hongjun Wang; Guangrun Wang; Ya Li; Dongyu Zhang; Liang Lin
2020-04-07
Towards Evaluating the Robustness of Chinese BERT Classifiers.Boxin Wang; Boyuan Pan; Xin Li; Bo Li
Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios.Stefano Calzavara; Claudio Lucchese; Federico Marcuzzi; Salvatore Orlando
Learning to fool the speaker recognition.Jiguo Li; Xinfeng Zhang; Jizheng Xu; Li Zhang; Yue Wang; Siwei Ma; Wen Gao
Universal Adversarial Perturbations Generative Network for Speaker Recognition.Jiguo Li; Xinfeng Zhang; Chuanmin Jia; Jizheng Xu; Li Zhang; Yue Wang; Siwei Ma; Wen Gao
2020-04-05
Approximate Manifold Defense Against Multiple Adversarial Perturbations.Jay Nandy; Wynne Hsu; Mong Li Lee
2020-04-04
Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks.Zuowen Wang; Leo Horne
BAE: BERT-based Adversarial Examples for Text Classification.Siddhant Garg; Goutham Ramakrishnan
2020-04-03
Adversarial Robustness through Regularization: A Second-Order Approach.Avery Ma; Fartash Faghri; Amir-massoud Farahmand
2020-04-01
Evading Deepfake-Image Detectors with White- and Black-Box Attacks.Nicholas Carlini; Hany Farid
Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes.Sravanti Addepalli; Vivek B. S.; Arya Baburaj; Gaurang Sriramanan; R. Venkatesh Babu
Physically Realizable Adversarial Examples for LiDAR Object Detection.James Tu; Mengye Ren; Siva Manivasagam; Ming Liang; Bin Yang; Richard Du; Frank Cheng; Raquel Urtasun
2020-03-31
A Thorough Comparison Study on Adversarial Attacks and Defenses for Common Thorax Disease Classification in Chest X-rays.Chendi Rao; Jiezhang Cao; Runhao Zeng; Qi Chen; Huazhu Fu; Yanwu Xu; Mingkui Tan
2020-03-30
Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement.Chao-Han Huck Yang; Jun Qi; Pin-Yu Chen; Xiaoli Ma; Chin-Hui Lee
Adversarial Attacks on Multivariate Time Series.Samuel Harford; Fazle Karim; Houshang Darabi
Improved Gradient based Adversarial Attacks for Quantized Networks.Kartik Gupta; Thalaiyasingam Ajanthan
Towards Deep Learning Models Resistant to Large Perturbations.Amirreza Shaeiri; Rozhin Nobahari; Mohammad Hossein Rohban
Efficient Black-box Optimization of Adversarial Windows Malware with Constrained Manipulations.Luca Demetrio; Battista Biggio; Giovanni Lagorio; Fabio Roli; Alessandro Armando
2020-03-28
Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning.Tianlong Chen; Sijia Liu; Shiyu Chang; Yu Cheng; Lisa Amini; Zhangyang Wang
DaST: Data-free Substitute Training for Adversarial Attacks.Mingyi Zhou; Jing Wu; Yipeng Liu; Shuaicheng Liu; Ce Zhu
Adversarial Imitation Attack.Mingyi Zhou; Jing Wu; Yipeng Liu; Shuaicheng Liu; Xiang Zhang; Ce Zhu
2020-03-26
Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks.Zain Khan; Jirong Yi; Raghu Mudumbai; Xiaodong Wu; Weiyu Xu
Challenging the adversarial robustness of DNNs based on error-correcting output codes.Bowen Zhang; Benedetta Tondi; Xixiang Lv; Mauro Barni
2020-03-25
Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples.Alejandro Barredo-Arrieta; Javier Del Ser
2020-03-24
Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study.Luan Nguyen; Sunpreet S. Arora; Yuhang Wu; Hao Yang
2020-03-23
Defense Through Diverse Directions.Christopher M. Bender; Yang Li; Yifeng Shi; Michael K. Reiter; Junier B. Oliva
Adversarial Attacks on Monocular Depth Estimation.Ziqi Zhang; Xinge Zhu; Yingwei Li; Xiangqun Chen; Yao Guo
Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations.Saima Sharmin; Nitin Rathi; Priyadarshini Panda; Kaushik Roy
Adversarial Perturbations Fool Deepfake Detectors.Apurva Gandhi; Shomik Jain
2020-03-22
Understanding the robustness of deep neural network classifiers for breast cancer screening.Witold Oleszkiewicz; Taro Makino; Stanisław Jastrzębski; Tomasz Trzciński; Linda Moy; Kyunghyun Cho; Laura Heacock; Krzysztof J. Geras
Architectural Resilience to Foreground-and-Background Adversarial Noise.Carl Cheng; Evan Hu
2020-03-21
Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression.Feiyang Cai; Jiani Li; Xenofon Koutsoukos
Robust Out-of-distribution Detection in Neural Networks.Jiefeng Chen; Yixuan Li; Xi Wu; Yingyu Liang; Somesh Jha
Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises.Bin Yan; Dong Wang; Huchuan Lu; Xiaoyun Yang
2020-03-20
Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning.Cameron Buckner
Investigating Image Applications Based on Spatial-Frequency Transform and Deep Learning Techniques.Qinkai Zheng; Han Qiu; Gerard Memmi; Isabelle Bloch
Quantum noise protects quantum classifiers against adversaries.Yuxuan Du; Min-Hsiu Hsieh; Tongliang Liu; Dacheng Tao; Nana Liu
One Neuron to Fool Them All.Anshuman Suri; David Evans
Adversarial Robustness on In- and Out-Distribution Improves Explainability.Maximilian Augustin; Alexander Meinke; Matthias Hein
2020-03-19
Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates.Amin Ghiasi; Ali Shafahi; Tom Goldstein
Face-Off: Adversarial Face Obfuscation.Varun Chandrasekaran; Chuhan Gao; Brian Tang; Kassem Fawaz; Somesh Jha; Suman Banerjee
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations.Huan Zhang; Hongge Chen; Chaowei Xiao; Bo Li; Mingyan Liu; Duane Boning; Cho-Jui Hsieh
Overinterpretation reveals image classification model pathologies. (81%)Brandon Carter; Siddhartha Jain; Jonas Mueller; David Gifford
2020-03-18
Vulnerabilities of Connectionist AI Applications: Evaluation and Defence.Christian Berghoff; Matthias Neu; Arndt von Twickel
Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles.Songan Zhang; Huei Peng; Subramanya Nageshrao; H. Eric Tseng
Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method.Babak Barazandeh; Meisam Razaviyayn
SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing.Chawin Sitawarin; Supriyo Chakraborty; David Wagner
2020-03-17
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior.Hu Zhang; Linchao Zhu; Yi Zhu; Yi Yang
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples.Haya Brama; Tal Grinshpoun
Adversarial Transferability in Wearable Sensor Systems.Ramesh Kumar Sah; Hassan Ghasemzadeh
2020-03-15
Output Diversified Initialization for Adversarial Attacks.Yusuke Tashiro; Yang Song; Stefano Ermon
Anomalous Example Detection in Deep Learning: A Survey.Saikiran Bulusu; Bhavya Kailkhura; Bo Li; Pramod K. Varshney; Dawn Song
Towards Face Encryption by Generating Adversarial Identity Masks.Xiao Yang; Yinpeng Dong; Tianyu Pang; Hang Su; Jun Zhu; Yuefeng Chen; Hui Xue
Toward Adversarial Robustness via Semi-supervised Robust Training.Yiming Li; Baoyuan Wu; Yan Feng; Yanbo Fan; Yong Jiang; Zhifeng Li; Shutao Xia
2020-03-14
Minimum-Norm Adversarial Examples on KNN and KNN-Based Models.Chawin Sitawarin; David Wagner
Certified Defenses for Adversarial Patches.Ping-Yeh Chiang; Renkun Ni; Ahmed Abdelkader; Chen Zhu; Christoph Studer; Tom Goldstein
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation.Xiaogang Xu; Hengshuang Zhao; Jiaya Jia
On the benefits of defining vicinal distributions in latent space.Puneet Mangla; Vedant Singh; Shreyas Jayant Havaldar; Vineeth N Balasubramanian
2020-03-13
Towards a Resilient Machine Learning Classifier -- a Case Study of Ransomware Detection.Chih-Yuan Yang; Ravi Sahita
GeoDA: a geometric framework for black-box adversarial attacks.Ali Rahmati; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard; Huaiyu Dai
When are Non-Parametric Methods Robust?Robi Bhattacharjee; Kamalika Chaudhuri
2020-03-12
Topological Effects on Attacks Against Vertex Classification.Benjamin A. Miller; Mustafa Çamurcu; Alexander J. Gomez; Kevin Chan; Tina Eliassi-Rad
Inline Detection of DGA Domains Using Side Information.Raaghavi Sivaguru; Jonathan Peck; Femi Olumofin; Anderson Nascimento; Martine De Cock
ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection.Mohammadreza Salehi; Atrin Arya; Barbod Pajoum; Mohammad Otoofi; Amirreza Shaeiri; Mohammad Hossein Rohban; Hamid R. Rabiee
ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems.Jiangnan Li; Yingyuan Yang; Jinyuan Stella Sun; Kevin Tomsovic; Hairong Qi
2020-03-11
Frequency-Tuned Universal Adversarial Attacks.Yingpeng Deng; Lina J. Karam
2020-03-10
SAD: Saliency-based Defenses Against Adversarial Examples.Richard Tran; David Patrick; Michael Geyer; Amanda Fernandez
Using an ensemble color space model to tackle adversarial examples.Shreyank N Gowda; Chun Yuan
Cryptanalytic Extraction of Neural Network Models.Nicholas Carlini; Matthew Jagielski; Ilya Mironov
A Survey of Adversarial Learning on Graphs.Liang Chen; Jintang Li; Jiaying Peng; Tao Xie; Zengxu Cao; Kun Xu; Xiangnan He; Zibin Zheng
2020-03-09
Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift.Remi Tachet des Combes; Han Zhao; Yu-Xiang Wang; Geoff Gordon
Towards Probabilistic Verification of Machine Unlearning.David Marco Sommer; Liwei Song; Sameer Wagh; Prateek Mittal
Manifold Regularization for Locally Stable Deep Neural Networks.Charles Jin; Martin Rinard
Generating Natural Language Adversarial Examples on a Large Scale with Generative Models.Yankun Ren; Jianbin Lin; Siliang Tang; Jun Zhou; Shuang Yang; Yuan Qi; Xiang Ren
Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world.Ivan Fursov; Alexey Zaytsev; Nikita Kluchnikov; Andrey Kravchenko; Evgeny Burnaev
2020-03-08
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM.Rui Zhang; Quanyan Zhu
An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods.Sanghyuk Chun; Seong Joon Oh; Sangdoo Yun; Dongyoon Han; Junsuk Choe; Youngjoon Yoo
On the Robustness of Cooperative Multi-Agent Reinforcement Learning.Jieyu Lin; Kristina Dzeparoska; Sai Qian Zhang; Alberto Leon-Garcia; Nicolas Papernot
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models.Raphaël Dang-Nhu; Gagandeep Singh; Pavol Bielik; Martin Vechev
Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles.Ranjie Duan; Xingjun Ma; Yisen Wang; James Bailey; A. K. Qin; Yun Yang
No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks.Siqi Liu; Arnaud Arindra Adiyoso Setio; Florin C. Ghesu; Eli Gibson; Sasa Grbic; Bogdan Georgescu; Dorin Comaniciu
2020-03-07
Dynamic Backdoor Attacks Against Machine Learning Models.Ahmed Salem; Rui Wen; Michael Backes; Shiqing Ma; Yang Zhang
2020-03-06
Defense against adversarial attacks on spoofing countermeasures of ASV.Haibin Wu; Songxiang Liu; Helen Meng; Hung-yi Lee
Triple Memory Networks: a Brain-Inspired Method for Continual Learning.Liyuan Wang; Bo Lei; Qian Li; Hang Su; Jun Zhu; Yi Zhong
MAB-Malware: A Reinforcement Learning Framework for Attacking Static Malware Classifiers.Wei Song; Xuezixiang Li; Sadia Afroz; Deepali Garg; Dmitry Kuznetsov; Heng Yin
2020-03-05
Towards Practical Lottery Ticket Hypothesis for Adversarial Training.Bai Li; Shiqi Wang; Yunhan Jia; Yantao Lu; Zhenyu Zhong; Lawrence Carin; Suman Jana
Exploiting Verified Neural Networks via Floating Point Numerical Error.Kai Jia; Martin Rinard
Detection and Recovery of Adversarial Attacks with Injected Attractors.Jiyi Zhang; Ee-Chien Chang; Hwee Kuan Lee
Adversarial Robustness Through Local Lipschitzness.Yao-Yuan Yang; Cyrus Rashtchian; Hongyang Zhang; Ruslan Salakhutdinov; Kamalika Chaudhuri
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization.Saehyung Lee; Hyungyu Lee; Sungroh Yoon
Search Space of Adversarial Perturbations against Image Filters.Dang Duy Thang; Toshihiro Matsui
2020-03-04
Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems.Yi Xie; Cong Shi; Zhuohang Li; Jian Liu; Yingying Chen; Bo Yuan
Colored Noise Injection for Training Adversarially Robust Neural Networks.Evgenii Zheltonozhskii; Chaim Baskin; Yaniv Nemcovsky; Brian Chmiel; Avi Mendelson; Alex M. Bronstein
Double Backpropagation for Training Autoencoders against Adversarial Attack.Chengjin Sun; Sizhe Chen; Xiaolin Huang
Black-box Smoothing: A Provable Defense for Pretrained Classifiers.Hadi Salman; Mingjie Sun; Greg Yang; Ashish Kapoor; J. Zico Kolter
Metrics and methods for robustness evaluation of neural networks with generative models.Igor Buzhinsky; Arseny Nerinovsky; Stavros Tripakis
2020-03-03
Discriminative Multi-level Reconstruction under Compact Latent Space for One-Class Novelty Detection.Jaewoo Park; Yoon Gyo Jung; Andrew Beng Jin Teoh
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks.Francesco Croce; Matthias Hein
Analyzing Accuracy Loss in Randomized Smoothing Defenses.Yue Gao; Harrison Rosenberg; Kassem Fawaz; Somesh Jha; Justin Hsu
Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack.Takami Sato; Junjie Shen; Ningfei Wang; Yunhan Jack Jia; Xue Lin; Qi Alfred Chen
Type I Attack for Generative Models.Chengjin Sun; Sizhe Chen; Jia Cai; Xiaolin Huang
2020-03-02
Data-Free Adversarial Perturbations for Practical Black-Box Attack.ZhaoXin Huan; Yulong Wang; Xiaolu Zhang; Lin Shang; Chilin Fu; Jun Zhou
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness.Ahmadreza Jeddi; Mohammad Javad Shafiee; Michelle Karg; Christian Scharfenberger; Alexander Wong
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.Nataniel Ruiz; Sarah Adel Bargal; Stan Sclaroff
Hidden Cost of Randomized Smoothing.Jeet Mohapatra; Ching-Yun Ko; Tsui-Wei Weng; Sijia Liu; Pin-Yu Chen; Luca Daniel
Adversarial Network Traffic: Towards Evaluating the Robustness of Deep Learning-Based Network Traffic Classification.Amir Mahdi Sadeghzadeh; Saeed Shiravi; Rasool Jalili
2020-03-01
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies.Wei Jin; Yaxin Li; Han Xu; Yiqi Wang; Shuiwang Ji; Charu Aggarwal; Jiliang Tang
2020-02-29
Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models.Xiao Zhang; Jinghui Chen; Quanquan Gu; David Evans
Why is the Mahalanobis Distance Effective for Anomaly Detection?Ryo Kamoi; Kei Kobayashi
2020-02-28
End-to-end Robustness for Sensing-Reasoning Machine Learning Pipelines.Zhuolin Yang; Zhikuan Zhao; Hengzhi Pei; Boxin Wang; Bojan Karlas; Ji Liu; Heng Guo; Bo Li; Ce Zhang
Applying Tensor Decomposition to image for Robustness against Adversarial Attack.Seungju Cho; Tae Joon Jun; Mingu Kang; Daeyoung Kim
2020-02-27
Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT.Lichao Sun; Kazuma Hashimoto; Wenpeng Yin; Akari Asai; Jia Li; Philip Yu; Caiming Xiong
Detecting Patch Adversarial Attacks with Image Residuals.Marius Arvinte; Ahmed Tewfik; Sriram Vishwanath
Certified Defense to Image Transformations via Randomized Smoothing.Marc Fischer; Maximilian Baader; Martin Vechev
Are L2 adversarial examples intrinsically different?Mingxuan Li; Jingyuan Wang; Yufan Wu
Utilizing Network Properties to Detect Erroneous Inputs.Matt Gorbett; Nathaniel Blanchard
TSS: Transformation-Specific Smoothing for Robustness Certification.Linyi Li; Maurice Weber; Xiaojun Xu; Luka Rimanic; Bhavya Kailkhura; Tao Xie; Ce Zhang; Bo Li
On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks.Yue Zhao; Yuwei Wu; Caihua Chen; Andrew Lim
FMix: Enhancing Mixed Sample Data Augmentation. (22%)Ethan Harris; Antonia Marcu; Matthew Painter; Mahesan Niranjan; Adam Prügel-Bennett; Jonathon Hare
2020-02-26
Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy.Aditya Saligrama; Guillaume Leclerc
Invariance vs. Robustness of Neural Networks.Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
Overfitting in adversarially robust deep learning.Leslie Rice; Eric Wong; J. Zico Kolter
MGA: Momentum Gradient Attack on Network.Jinyin Chen; Yixian Chen; Haibin Zheng; Shijing Shen; Shanqing Yu; Dan Zhang; Qi Xuan
Improving Robustness of Deep-Learning-Based Image Reconstruction.Ankit Raj; Yoram Bresler; Bo Li
Defense-PointNet: Protecting PointNet Against Adversarial Attacks.Yu Zhang; Gongbo Liang; Tawfiq Salem; Nathan Jacobs
Adversarial Attack on Deep Product Quantization Network for Image Retrieval.Yan Feng; Bin Chen; Tao Dai; Shutao Xia
Randomization matters. How to defend against strong adversarial attacks.Rafael Pinot; Raphael Ettedgui; Geovani Rizk; Yann Chevaleyre; Jamal Atif
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization.Sicheng Zhu; Xiao Zhang; David Evans
2020-02-25
Understanding and Mitigating the Tradeoff Between Robustness and Accuracy.Aditi Raghunathan; Sang Michael Xie; Fanny Yang; John Duchi; Percy Liang
The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization.Yifei Min; Lin Chen; Amin Karbasi
Gödel's Sentence Is An Adversarial Example But Unsolvable.Xiaodong Qi; Lansheng Han
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks.Kaidi Xu; Sijia Liu; Pin-Yu Chen; Mengshu Sun; Caiwen Ding; Bhavya Kailkhura; Xue Lin
(De)Randomized Smoothing for Certifiable Defense against Patch Attacks.Alexander Levine; Soheil Feizi
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.Jingfeng Zhang; Xilie Xu; Bo Han; Gang Niu; Lizhen Cui; Masashi Sugiyama; Mohan Kankanhalli
Adversarial Ranking Attack and Defense.Mo Zhou; Zhenxing Niu; Le Wang; Qilin Zhang; Gang Hua
2020-02-24
A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA.Giuseppe Ughi; Vinayak Abrol; Jared Tanner
Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks.Matthew J. Roos
Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space.Camilo Pestana; Naveed Akhtar; Wei Liu; David Glance; Ajmal Mian
Towards Rapid and Robust Adversarial Training with One-Step Attacks.Leo Schwinn; René Raab; Björn Eskofier
Precise Tradeoffs in Adversarial Training for Linear Regression.Adel Javanmard; Mahdi Soltanolkotabi; Hamed Hassani
HYDRA: Pruning Adversarially Robust Neural Networks.Vikash Sehwag; Shiqi Wang; Prateek Mittal; Suman Jana
2020-02-23
Adversarial Attack on DL-based Massive MIMO CSI Feedback.Qing Liu; Jiajia Guo; Chao-Kai Wen; Shi Jin
Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference.Ting-Kuei Hu; Tianlong Chen; Haotao Wang; Zhangyang Wang
2020-02-22
Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks.Kirthi Shankar Sivamani; Rajeev Sahay; Aly El Gamal
Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition.Ziwen He; Wei Wang; Jing Dong; Tieniu Tan
Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems.Yiannis Kantaros; Taylor Carpenter; Kaustubh Sridhar; Yahan Yang; Insup Lee; James Weimer
Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples.Guanxiong Liu; Issa Khalil; Abdallah Khreishah
2020-02-21
Polarizing Front Ends for Robust CNNs.Can Bakiskan; Soorya Gopalakrishnan; Metehan Cekic; Upamanyu Madhow; Ramtin Pedarsani
Robustness from Simple Classifiers.Sharon Qian; Dimitris Kalimeris; Gal Kaplun; Yaron Singer
Adversarial Detection and Correction by Matching Prediction Distributions.Giovanni Vacanti; Arnaud Van Looveren
UnMask: Adversarial Detection and Defense Through Robust Feature Alignment.Scott Freitas; Shang-Tse Chen; Zijie J. Wang; Duen Horng Chau
Robustness to Programmable String Transformations via Augmented Abstract Training.Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework.Dinghuai Zhang; Mao Ye; Chengyue Gong; Zhanxing Zhu; Qiang Liu
Adversarial Attacks on Machine Learning Systems for High-Frequency Trading.Micah Goldblum; Avi Schwarzschild; Ankit B. Patel; Tom Goldstein
2020-02-20
Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning.Chao-Han Huck Yang; Jun Qi; Pin-Yu Chen; Yi Ouyang; I-Te Danny Hung; Chin-Hui Lee; Xiaoli Ma
On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective.Motasem Alfarra; Adel Bibi; Hasan Hammoud; Mohamed Gaafar; Bernard Ghanem
A Bayes-Optimal View on Adversarial Examples.Eitan Richardson; Yair Weiss
Towards Certifiable Adversarial Sample Detection.Ilia Shumailov; Yiren Zhao; Robert Mullins; Ross Anderson
Boosting Adversarial Training with Hypersphere Embedding.Tianyu Pang; Xiao Yang; Yinpeng Dong; Kun Xu; Hang Su; Jun Zhu
Byzantine-resilient Decentralized Stochastic Gradient Descent. (5%)Shangwei Guo; Tianwei Zhang; Han Yu; Xiaofei Xie; Lei Ma; Tao Xiang; Yang Liu
2020-02-19
Bayes-TrEx: Model Transparency by Example.Serena Booth; Yilun Zhou; Ankit Shah; Julie Shah
AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks.Xiao Wang; Siyue Wang; Pin-Yu Chen; Xue Lin; Peter Chin
NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion.Aritran Piplai; Sai Sree Laya Chukkapalli; Anupam Joshi
On Adaptive Attacks to Adversarial Example Defenses.Florian Tramer; Nicholas Carlini; Wieland Brendel; Aleksander Madry
Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks.Tsubasa Takahashi
Randomized Smoothing of All Shapes and Sizes.Greg Yang; Tony Duan; J. Edward Hu; Hadi Salman; Ilya Razenshteyn; Jerry Li
2020-02-18
Action-Manipulation Attacks Against Stochastic Bandits: Attacks and Defense.Guanlin Liu; Lifeng Lai
Deflecting Adversarial Attacks.Yao Qin; Nicholas Frosst; Colin Raffel; Garrison Cottrell; Geoffrey Hinton
Block Switching: A Stochastic Approach for Deep Learning Security.Xiao Wang; Siyue Wang; Pin-Yu Chen; Xue Lin; Peter Chin
Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent.Pu Zhao; Pin-Yu Chen; Siyue Wang; Xue Lin
2020-02-17
TensorShield: Tensor-based Defense Against Adversarial Attacks on Images.Negin Entezari; Evangelos E. Papalexakis
On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples.Pamela K. Douglas; Farzad Vasheghani Farahani
Scalable Quantitative Verification For Deep Neural Networks.Teodora Baluta; Zheng Leong Chua; Kuldeep S. Meel; Prateek Saxena
CAT: Customized Adversarial Training for Improved Robustness.Minhao Cheng; Qi Lei; Pin-Yu Chen; Inderjit Dhillon; Cho-Jui Hsieh
On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks.Hisaichi Shibata; Shouhei Hanaoka; Yukihiro Nomura; Naoto Hayashi; Osamu Abe
Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack.Ziwei Guan; Kaiyi Ji; Donald J. Bucci Jr; Timothy Y. Hu; Joseph Palombo; Michael Liston; Yingbin Liang
Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness.Huijie Feng; Chunpeng Wu; Guoyang Chen; Weifeng Zhang; Yang Ning
GRAPHITE: A Practical Framework for Generating Automatic Physical Adversarial Machine Learning Attacks.Ryan Feng; Neal Mangaokar; Jiefeng Chen; Earlence Fernandes; Somesh Jha; Atul Prakash
2020-02-16
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality.Yi Zhang; Orestis Plevrakis; Simon S. Du; Xingguo Li; Zhao Song; Sanjeev Arora
2020-02-15
Undersensitivity in Neural Reading Comprehension.Johannes Welbl; Pasquale Minervini; Max Bartolo; Pontus Stenetorp; Sebastian Riedel
Hold me tight! Influence of discriminative features on deep network boundaries.Guillermo Ortiz-Jimenez; Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Blind Adversarial Network Perturbations.Milad Nasr; Alireza Bahramali; Amir Houmansadr
2020-02-14
Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets.Dongxian Wu; Yisen Wang; Shu-Tao Xia; James Bailey; Xingjun Ma
Adversarial Distributional Training for Robust Deep Learning.Yinpeng Dong; Zhijie Deng; Tianyu Pang; Hang Su; Jun Zhu
2020-02-13
Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks.Taro Kiritani; Koji Ono
The Conditional Entropy Bottleneck.Ian Fischer
Identifying Audio Adversarial Examples via Anomalous Pattern Detection.Victor Akinwande; Celia Cintas; Skyler Speakman; Srihari Sridharan
2020-02-12
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization.Xiangning Chen; Cho-Jui Hsieh
Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks.Roi Pony; Itay Naeh; Shie Mannor
2020-02-11
Adversarial Robustness for Code.Pavol Bielik; Martin Vechev
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations.Florian Tramèr; Jens Behrmann; Nicholas Carlini; Nicolas Papernot; Jörn-Henrik Jacobsen
Robustness of Bayesian Neural Networks to Gradient-Based Attacks.Ginevra Carbone; Matthew Wicker; Luca Laurenti; Andrea Patane; Luca Bortolussi; Guido Sanguinetti
Improving the affordability of robustness training for DNNs.Sidharth Gupta; Parijat Dube; Ashish Verma
Fast Geometric Projections for Local Robustness Certification.Aymeric Fromherz; Klas Leino; Matt Fredrikson; Bryan Parno; Corina Păsăreanu
Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models.Xiao Zang; Yi Xie; Jie Chen; Bo Yuan
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models.Lin Chen; Yifei Min; Mingrui Zhang; Amin Karbasi
2020-02-10
Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers.Prithviraj Dasgupta; Joseph B. Collins; Michael McCarrick
Adversarial Data Encryption.Yingdong Hu; Liang Zhang; Wei Shan; Xiaoxiao Qin; Jing Qi; Zhenzhou Wu; Yang Yuan
Generalised Lipschitz Regularisation Equals Distributional Robustness.Zac Cranko; Zhan Shi; Xinhua Zhang; Richard Nock; Simon Kornblith
2020-02-09
MDEA: Malware Detection with Evolutionary Adversarial Learning.Xiruo Wang; Risto Miikkulainen
Input Validation for Neural Networks via Runtime Local Robustness Verification.Jiangchao Liu; Liqian Chen; Antoine Mine; Ji Wang
Robust binary classification with the 01 loss.Yunzhe Xue; Meiyan Xie; Usman Roshan
Watch out! Motion is Blurring the Vision of Your Deep Neural Networks.Qing Guo; Felix Juefei-Xu; Xiaofei Xie; Lei Ma; Jian Wang; Bing Yu; Wei Feng; Yang Liu
Feature-level Malware Obfuscation in Deep Learning.Keith Dillon
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples.Paarth Neekhara; Shehzeen Hussain; Malhar Jere; Farinaz Koushanfar; Julian McAuley
Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection.Quanyu Liao; Xin Wang; Bin Kong; Siwei Lyu; Youbing Yin; Qi Song; Xi Wu
Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing.Jinyuan Jia; Binghui Wang; Xiaoyu Cao; Neil Zhenqiang Gong
Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images.Avrim Blum; Travis Dick; Naren Manoj; Hongyang Zhang
2020-02-08
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks.Lu Chen; Wei Xu
Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness.Aounon Kumar; Alexander Levine; Tom Goldstein; Soheil Feizi
2020-02-07
Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness.Ting-Wu Chin; Cha Zhang; Diana Marculescu
Semantic Robustness of Models of Source Code.Goutham Ramakrishnan; Jordan Henkel; Zi Wang; Aws Albarghouthi; Somesh Jha; Thomas Reps
Analysis of Random Perturbations for Robust Convolutional Neural Networks.Adam Dziedzic; Sanjay Krishnan
RAID: Randomized Adversarial-Input Detection for Neural Networks.Hasan Ferit Eniser; Maria Christakis; Valentin Wüstholz
Assessing the Adversarial Robustness of Monte Carlo and Distillation Methods for Deep Bayesian Neural Network Classification.Meet P. Vadera; Satya Narayan Shukla; Brian Jalaian; Benjamin M. Marlin
2020-02-06
Reliability Validation of Learning Enabled Vehicle Tracking.Youcheng Sun; Yifan Zhou; Simon Maskell; James Sharp; Xiaowei Huang
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models.Yao Deng; Xi Zheng; Tianyi Zhang; Chen Chen; Guannan Lou; Miryung Kim
AI-GAN: Attack-Inspired Generation of Adversarial Examples.Tao Bai; Jun Zhao; Jinlin Zhu; Shoudong Han; Jiefeng Chen; Bo Li; Alex Kot
2020-02-05
Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels.Brian Kim; Yalin E. Sagduyu; Kemal Davaslioglu; Tugba Erpek; Sennur Ulukus
Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study.David Mickisch; Felix Assion; Florens Greßner; Wiebke Günther; Mariele Motta
2020-02-04
Adversarially Robust Frame Sampling with Bounded Irregularities.Hanhan Li; Pin Wang
Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria.Qi Xuan; Yalu Shan; Jinhuan Wang; Zhongyuan Ruan; Guanrong Chen
Minimax Defense against Gradient-based Adversarial Attacks.Blerta Lindqvist; Rauf Izmailov
2020-02-03
A Differentiable Color Filter for Generating Unrestricted Adversarial Images.Zhengyu Zhao; Zhuoran Liu; Martha Larson
Regularizers for Single-step Adversarial Training.B. S. Vivek; R. Venkatesh Babu
Defending Adversarial Attacks via Semantic Feature Manipulation.Shuo Wang; Tianle Chen; Surya Nepal; Carsten Rudolph; Marthie Grobler; Shangyu Chen
2020-02-02
Robust saliency maps with decoy-enhanced saliency score.Yang Lu; Wenbo Guo; Xinyu Xing; William Stafford Noble
2020-02-01
Towards Sharper First-Order Adversary with Quantized Gradients.Zhuanghua Liu; Ivor W. Tsang
AdvJND: Generating Adversarial Examples with Just Noticeable Difference.Zifei Zhang; Kai Qiao; Lingyun Jiang; Linyuan Wang; Bin Yan
2020-01-31
Additive Tree Ensembles: Reasoning About Potential Instances.Laurens Devos; Wannes Meert; Jesse Davis
Politics of Adversarial Machine Learning.Kendra Albert; Jonathon Penney; Bruce Schneier; Ram Shankar Siva Kumar
FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications.Dou Goodman; Lv Zhonghou; Wang minghua
2020-01-30
Tiny Noise Can Make an EEG-Based Brain-Computer Interface Speller Output Anything.Xiao Zhang; Dongrui Wu; Lieyun Ding; Hanbin Luo; Chin-Teng Lin; Tzyy-Ping Jung; Ricardo Chavarriaga
2020-01-29
A4 : Evading Learning-based Adblockers.Shitong Zhu; Zhongjie Wang; Xun Chen; Shasha Li; Umar Iqbal; Zhiyun Qian; Kevin S. Chan; Srikanth V. Krishnamurthy; Zubair Shafiq
D2M: Dynamic Defense and Modeling of Adversarial Movement in Networks.Scott Freitas; Andrew Wicker; Duen Horng Chau; Joshua Neil
Just Noticeable Difference for Machines to Generate Adversarial Images.Adil Kaan Akan; Mehmet Ali Genc; Fatos T. Yarman Vural
Semantic Adversarial Perturbations using Learnt Representations.Isaac Dunn; Tom Melham; Daniel Kroening
Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain.Yigit Alparslan; Ken Alparslan; Jeremy Keim-Shenk; Shweta Khade; Rachel Greenstadt
2020-01-28
Modelling and Quantifying Membership Information Leakage in Machine Learning.Farhad Farokhi; Mohamed Ali Kaafar
2020-01-27
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis.William Briguglio; Sherif Saad
Generating Natural Adversarial Hyperspectral examples with a modified Wasserstein GAN.Jean-Christophe Burnel; Kilian Fatras; Nicolas Courty
FakeLocator: Robust Localization of GAN-Based Face Manipulations via Semantic Segmentation Networks with Bells and Whistles.Yihao Huang; Felix Juefei-Xu; Run Wang; Xiaofei Xie; Lei Ma; Jianwen Li; Weikai Miao; Yang Liu; Geguang Pu
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning.Inaam Ilahi; Muhammad Usama; Junaid Qadir; Muhammad Umar Janjua; Ala Al-Fuqaha; Dinh Thai Hoang; Dusit Niyato
Practical Fast Gradient Sign Attack against Mammographic Image Classifier.Ibrahim Yilmaz
2020-01-26
Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks.Rehana Mahfuz; Rajeev Sahay; Aly El Gamal
2020-01-25
Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles.Yilan Li; Senem Velipasalar
AI-Powered GUI Attack and Its Defensive Methods.Ning Yu; Zachary Tuttle; Carl Jake Thurnau; Emmanuel Mireku
Analyzing the Noise Robustness of Deep Neural Networks.Kelei Cao; Mengchen Liu; Hang Su; Jing Wu; Jun Zhu; Shixia Liu
2020-01-24
When Wireless Security Meets Machine Learning: Motivation, Challenges, and Research Directions.Yalin E. Sagduyu; Yi Shi; Tugba Erpek; William Headley; Bryse Flowers; George Stantchev; Zhuo Lu
2020-01-23
Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack.Bo Zhang; Ruotong Yu; Haipei Sun; Yanying Li; Jun Xu; Hui Wang
Towards Robust DNNs: An Taylor Expansion-Based Method for Generating Powerful Adversarial Examples.Ya-guan Qian; Xi-Ming Zhang; Bin Wang; Wei Li; Jian-Hai Chen; Wu-Jie Zhou; Jing-Sheng Lei
On the human evaluation of audio adversarial examples.Jon Vadillo; Roberto Santana
2020-01-22
Adversarial Attack on Community Detection by Hiding Individuals.Jia Li; Honglei Zhang; Zhichao Han; Yu Rong; Hong Cheng; Junzhou Huang
2020-01-21
SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation.Jesse Sun; Fatemeh Darbeha; Mark Zaidi; Bo Wang
Secure and Robust Machine Learning for Healthcare: A Survey.Adnan Qayyum; Junaid Qadir; Muhammad Bilal; Ala Al-Fuqaha
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence.Kihyuk Sohn; David Berthelot; Chun-Liang Li; Zizhao Zhang; Nicholas Carlini; Ekin D. Cubuk; Alex Kurakin; Han Zhang; Colin Raffel
GhostImage: Perception Domain Attacks against Vision-based Object Classification Systems.Yanmao Man; Ming Li; Ryan Gerdes
Generate High-Resolution Adversarial Samples by Identifying Effective Features.Sizhe Chen; Peidong Zhang; Chengjin Sun; Jia Cai; Xiaolin Huang
Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning.Nilaksh Das; Haekyu Park; Zijie J. Wang; Fred Hohman; Robert Firstman; Emily Rogers; Duen Horng Chau
Elephant in the Room: A