It can be hard to stay up-to-date on the published papers in
the field of adversarial examples,
where we have seen massive growth in the number of papers
written each year.
I have been keeping track of these papers somewhat religiously
for the last few years, and realized it might be helpful
to others if I released this list.
The only requirement I used for selecting papers for this list
is that the paper is primarily about adversarial examples,
or makes extensive use of adversarial examples.
Due to the sheer quantity of papers, I can't guarantee
that I actually have found all of them.
But I did try.
I also may have included papers that don't meet
this requirement (and are about something else entirely),
or made inconsistent
judgement calls as to whether or not any given paper is
mainly an adversarial example paper.
Send me an email if something is wrong and I'll correct it.
To be clear, this list is completely unfiltered:
everything that mainly presents itself as an adversarial
example paper is listed here; I pass no judgement of quality.
For a curated list of papers that I think are excellent and
worth reading, see the
Adversarial Machine Learning Reading List.
One final note about the data.
This list automatically updates with new papers, even before I
get a chance to manually filter through them.
I do this filtering roughly twice a week, and it's
then that I remove the entries that aren't related to
adversarial examples.
As a result, there may be some
false positives among the most recent entries.
Each new, unverified entry is annotated with the probability that my
simplistic (but reasonably well calibrated)
bag-of-words classifier assigns to the paper
actually being about adversarial examples.
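
For the curious, here is a minimal sketch (in Python, and not my actual code) of what such a bag-of-words scorer could look like; the training texts and labels below are made up purely for illustration:

    # Minimal bag-of-words scorer: turn each title/abstract into word counts
    # and fit a logistic regression that outputs the probability the paper is
    # about adversarial examples. The training data here is hypothetical.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = [
        "intriguing properties of neural networks adversarial perturbations",
        "a survey of graph database query languages",
    ]
    labels = [1, 0]  # 1 = kept as an adversarial example paper, 0 = removed

    vectorizer = CountVectorizer(lowercase=True, stop_words="english")
    X = vectorizer.fit_transform(texts)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)

    # Score a new, unverified arXiv entry.
    new_entry = ["adversarial attack and defense for dehazing networks"]
    prob = clf.predict_proba(vectorizer.transform(new_entry))[0, 1]
    print(f"{prob:.0%} likely to be about adversarial examples")
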
The full paper list appears below. I've also released a
TXT file (and a TXT file
with abstracts) and a
JSON file
with the same data. If you do anything interesting with
this data, I'd be happy to hear about it.
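
As a starting point, here is a minimal sketch of loading the JSON data in Python; the file name and the record fields used below ("date", "title") are assumptions of mine, so check the actual file for its real schema:

    # Minimal sketch of working with the released data. The file name and the
    # fields "date" and "title" are assumptions, not a documented format.
    import json
    from collections import Counter

    with open("adversarial_papers.json") as f:  # hypothetical file name
        papers = json.load(f)

    # Count how many listed papers appeared each year.
    per_year = Counter(p["date"][:4] for p in papers)
    for year, count in sorted(per_year.items()):
        print(year, count)
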
Paper List
2023-03-30
Adversarial Attack and Defense for Dehazing Networks. (97%)Jie Gui; Xiaofeng Cong; Chengwei Peng; Yuan Yan Tang; James Tin-Yau Kwok
Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving. (81%)Zijian Zhu; Yichi Zhang; Hai Chen; Yinpeng Dong; Shu Zhao; Wenbo Ding; Jiachen Zhong; Shibao Zheng
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions. (2%)Lingdong Kong; Youquan Liu; Xin Li; Runnan Chen; Wenwei Zhang; Jiawei Ren; Liang Pan; Kai Chen; Ziwei Liu
Explainable Intrusion Detection Systems Using Competitive Learning Techniques. (1%)Jesse Ables; Thomas Kirby; Sudip Mittal; Ioana Banicescu; Shahram Rahimi; William Anderson; Maria Seale
Differential Area Analysis for Ransomware: Attacks, Countermeasures, and Limitations. (1%)Marco Venturini; Francesco Freda; Emanuele Miotto; Alberto Giaretta; Mauro Conti
2023-03-29
Latent Feature Relation Consistency for Adversarial Robustness. (99%)Xingbin Liu; Huafeng Kuang; Hong Liu; Xianming Lin; Yongjian Wu; Rongrong Ji
Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness. (99%)Wei Wei; Jiahuan Zhou; Ying Wu
Targeted Adversarial Attacks on Wind Power Forecasts. (88%)René Heinrich; Christoph Scholz; Stephan Vogt; Malte Lehna
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing. (56%)Xiaodan Li; Yuefeng Chen; Yao Zhu; Shuhui Wang; Rong Zhang; Hui Xue
Graph Neural Networks for Hardware Vulnerability Analysis -- Can you Trust your GNN? (16%)Lilas Alrahis; Ozgur Sinanoglu
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling. (10%)Ethan Wisdom; Tejas Gokhale; Chaowei Xiao; Yezhou Yang
A Tensor-based Convolutional Neural Network for Small Dataset Classification. (2%)Zhenhua Chen; David Crandall
ALUM: Adversarial Data Uncertainty Modeling from Latent Model Uncertainty Compensation. (1%)Wei Wei; Jiahuan Zhou; Hongze Li; Ying Wu
2023-03-28
A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion. (99%)Haomin Zhuang; Yihua Zhang; Sijia Liu
Improving the Transferability of Adversarial Samples by Path-Augmented Method. (99%)Jianping Zhang; Jen-tse Huang; Wenxuan Wang; Yichen Li; Weibin Wu; Xiaosen Wang; Yuxin Su; Michael R. Lyu
Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition. (99%)Xiao Yang; Chang Liu; Longlong Xu; Yikai Wang; Yinpeng Dong; Ning Chen; Hang Su; Jun Zhu
Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization. (98%)Jianping Zhang; Yizhan Huang; Weibin Wu; Michael R. Lyu
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm. (98%)Bakary Badjie; José Cecílio; António Casimiro
TransAudio: Towards the Transferable Adversarial Audio Attack via Learning Contextualized Perturbations. (98%)Gege Qi; Yuefeng Chen; Xiaofeng Mao; Yao Zhu; Binyuan Hui; Xiaodan Li; Rong Zhang; Hui Xue
A Survey on Malware Detection with Graph Representation Learning. (41%)Tristan Bilot; Nour El Madhoun; Khaldoun Al Agha; Anis Zouaoui
Provable Robustness for Streaming Models with a Sliding Window. (15%)Aounon Kumar; Vinu Sankar Sadasivan; Soheil Feizi
On the Use of Reinforcement Learning for Attacking and Defending Load Frequency Control. (3%)Amr S. Mohamed; Deepa Kundur
A Universal Identity Backdoor Attack against Speaker Verification based on Siamese Network. (1%)Haodong Zhao; Wei Du; Junjie Guo; Gongshen Liu
2023-03-27
Classifier Robustness Enhancement Via Test-Time Transformation. (99%)Tsachi Blau; Roy Ganz; Chaim Baskin; Michael Elad; Alex Bronstein
Improving the Transferability of Adversarial Examples via Direction Tuning. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
EMShepherd: Detecting Adversarial Samples via Side-channel Leakage. (99%)Ruyi Ding; Cheng Gongye; Siyue Wang; Aidong Ding; Yunsi Fei
Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks. (97%)Tianrui Qin; Xitong Gao; Juanjuan Zhao; Kejiang Ye; Cheng-Zhong Xu
CAT:Collaborative Adversarial Training. (69%)Xingbin Liu; Huafeng Kuang; Xianming Lin; Yongjian Wu; Rongrong Ji
Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection. (67%)Nicola Franco; Daniel Korth; Jeanette Miriam Lorenz; Karsten Roscher; Stephan Guennemann
Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation. (41%)Yang Lu; Pinxin Qian; Gang Huang; Hanzi Wang
Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder. (41%)Tao Sun; Lu Pang; Chao Chen; Haibin Ling
Sequential training of GANs against GAN-classifiers reveals correlated "knowledge gaps" present among independently trained GAN instances. (41%)Arkanath Pathak; Nicholas Dufour
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis. (5%)Thanh Van Le; Hao Phung; Thuan Hoang Nguyen; Quan Dao; Ngoc Tran; Anh Tran
2023-03-26
MGTBench: Benchmarking Machine-Generated Text Detection. (26%)Xinlei He; Xinyue Shen; Zeyuan Chen; Michael Backes; Yang Zhang
2023-03-25
CFA: Class-wise Calibrated Fair Adversarial Training. (98%)Zeming Wei; Yifei Wang; Yiwen Guo; Yisen Wang
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. (68%)Jinyuan Jia; Yupei Liu; Yuepeng Hu; Neil Zhenqiang Gong
Improving robustness of jet tagging algorithms with adversarial training: exploring the loss surface. (12%)Annika Stein
2023-03-24
PIAT: Parameter Interpolation based Adversarial Training for Image Classification. (99%)Kun He; Xin Liu; Yichen Yang; Zhou Qin; Weigao Wen; Hui Xue; John E. Hopcroft
How many dimensions are required to find an adversarial example? (99%)Charles Godfrey; Henry Kvinge; Elise Bishoff; Myles Mckay; Davis Brown; Tim Doster; Eleanor Byler
Effective black box adversarial attack with handcrafted kernels. (99%)Petr Dvořáček; Petr Hurtik; Petra Števuliáková
Adversarial Attack and Defense for Medical Image Analysis: Methods and Applications. (99%)Junhao Dong; Junxi Chen; Xiaohua Xie; Jianhuang Lai; Hao Chen
Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing. (99%)Lin Li; Michael Spratling
Feature Separation and Recalibration for Adversarial Robustness. (98%)Woo Jae Kim; Yoonki Cho; Junsik Jung; Sung-Eui Yoon
Physically Adversarial Infrared Patches with Learnable Shapes and Locations. (97%)Xingxing Wei; Jie Yu; Yao Huang
Generalist: Decoupling Natural and Robust Generalization. (96%)Hongjun Wang; Yisen Wang
Ensemble-based Blackbox Attacks on Dense Prediction. (86%)Zikui Cai; Yaoteng Tan; M. Salman Asif
Backdoor Attacks with Input-unique Triggers in NLP. (54%)Xukun Zhou; Jiwei Li; Tianwei Zhang; Lingjuan Lyu; Muqiao Yang; Jun He
PoisonedGNN: Backdoor Attack on Graph Neural Networks-based Hardware Security Systems. (22%)Lilas Alrahis; Satwik Patnaik; Muhammad Abdullah Hanif; Muhammad Shafique; Ozgur Sinanoglu
Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck. (5%)Jongheon Jeong; Sihyun Yu; Hankook Lee; Jinwoo Shin
Optimal Smoothing Distribution Exploration for Backdoor Neutralization in Deep Learning-based Traffic Systems. (2%)Yue Wang; Wending Li; Michail Maniatakos; Saif Eddin Jabari
TRAK: Attributing Model Behavior at Scale. (1%)Sung Min Park; Kristian Georgiev; Andrew Ilyas; Guillaume Leclerc; Aleksander Madry
2023-03-23
Watch Out for the Confusing Faces: Detecting Face Swapping with the Probability Distribution of Face Identification Models. (68%)Yuxuan Duan; Xuhong Zhang; Chuer Yu; Zonghui Wang; Shouling Ji; Wenzhi Chen
Quadratic Graph Attention Network (Q-GAT) for Robust Construction of Gene Regulatory Networks. (50%)Hui Zhang; Xuexin An; Qiang He; Yudong Yao; Feng-Lei Fan; Yueyang Teng
Optimization and Optimizers for Adversarial Robustness. (41%)Hengyue Liang; Buyun Liang; Le Peng; Ying Cui; Tim Mitchell; Ju Sun
Adversarial Robustness and Feature Impact Analysis for Driver Drowsiness Detection. (41%)João Vitorino; Lourenço Rodrigues; Eva Maia; Isabel Praça; André Lourenço
Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense. (15%)Kalpesh Krishna; Yixiao Song; Marzena Karpinska; John Wieting; Mohit Iyyer
Decentralized Adversarial Training over Graphs. (13%)Ying Cao; Elsa Rizk; Stefan Vlaski; Ali H. Sayed
Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs. (8%)Hasan Abed Al Kader Hammoud; Adel Bibi; Philip H. S. Torr; Bernard Ghanem
Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness. (1%)Huajie Chen; Tianqing Zhu; Yuan Zhao; Bo Liu; Xin Yu; Wanlei Zhou
Efficient Symbolic Reasoning for Neural-Network Verification. (1%)Zi Wang; Somesh Jha; Krishnamurthy Dvijotham
2023-03-22
Reliable and Efficient Evaluation of Adversarial Robustness for Deep Hashing-Based Retrieval. (99%)Xunguang Wang; Jiawang Bai; Xinyue Xu; Xiaomeng Li
Semantic Image Attack for Visual Model Diagnosis. (99%)Jinqi Luo; Zhaoning Wang; Chen Henry Wu; Dong Huang; Fernando De la Torre
Revisiting DeepFool: generalization and improvement. (99%)Alireza Abdollahpourrostam; Mahed Abroshan; Seyed-Mohsen Moosavi-Dezfooli
Wasserstein Adversarial Examples on Univariant Time Series Data. (99%)Wenjie Wang; Li Xiong; Jian Lou
Test-time Defense against Adversarial Attacks: Detection and Reconstruction of Adversarial Examples via Masked Autoencoder. (99%)Yun-Yun Tsai; Ju-Chin Chao; Albert Wen; Zhaoyuan Yang; Chengzhi Mao; Tapan Shah; Junfeng Yang
Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition. (78%)Zexin Li; Bangjie Yin; Taiping Yao; Juefeng Guo; Shouhong Ding; Simin Chen; Cong Liu
An Extended Study of Human-like Behavior under Adversarial Training. (76%)Paul Gavrikov; Janis Keuper; Margret Keuper
Distribution-restrained Softmax Loss for the Model Robustness. (38%)Hao Wang; Chen Li; Jinzhe Jiang; Xin Zhang; Yaqian Zhao; Weifeng Gong
Backdoor Defense via Adaptively Splitting Poisoned Dataset. (16%)Kuofeng Gao; Yang Bai; Jindong Gu; Yong Yang; Shu-Tao Xia
Edge Deep Learning Model Protection via Neuron Authorization. (11%)Jinyin Chen; Haibin Zheng; Tao Liu; Rongchang Li; Yao Cheng; Xuhong Zhang; Shouling Ji
2023-03-21
State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems. (99%)Junbin Fang; You Jiang; Canjian Jiang; Zoe L. Jiang; Siu-Ming Yiu; Chuanyi Liu
Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems. (99%)Yao Zhu; Yuefeng Chen; Xiaodan Li; Rong Zhang; Xiang Tian; Bolun Zheng; Yaowu Chen
OTJR: Optimal Transport Meets Optimal Jacobian Regularization for Adversarial Robustness. (99%)Binh M. Le; Shahroz Tariq; Simon S. Woo
Efficient Decision-based Black-box Patch Attacks on Video Recognition. (98%)Kaixun Jiang; Zhaoyu Chen; Tony Huang; Jiafeng Wang; Dingkang Yang; Bo Li; Yan Wang; Wenqiang Zhang
Black-box Backdoor Defense via Zero-shot Image Purification. (86%)Yucheng Shi; Mengnan Du; Xuansheng Wu; Zihan Guan; Ninghao Liu
Model Robustness Meets Data Privacy: Adversarial Robustness Distillation without Original Data. (10%)Yuzheng Wang; Zhaoyu Chen; Dingkang Yang; Pinxue Guo; Kaixun Jiang; Wenqiang Zhang; Lizhe Qi
Influencer Backdoor Attack on Semantic Segmentation. (8%)Haoheng Lan; Jindong Gu; Philip Torr; Hengshuang Zhao
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-enabled IoTs: An Anticipatory Study. (1%)Mohamed Amine Ferrag; Burak Kantarci; Lucas C. Cordeiro; Merouane Debbah; Kim-Kwang Raymond Choo
2023-03-20
Adversarial Robustness of Learning-based Static Malware Classifiers. (99%)Shoumik Saha; Wenxiao Wang; Yigitcan Kaya; Soheil Feizi
TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization. (99%)Ziquan Liu; Yi Xu; Xiangyang Ji; Antoni B. Chan
Adversarial Attacks against Binary Similarity Systems. (99%)Gianluca Capozzi; Daniele Cono D'Elia; Giuseppe Antonio Di Luna; Leonardo Querzoni
Translate your gibberish: black-box adversarial attack on machine translation systems. (83%)Andrei Chertkov; Olga Tsymboi; Mikhail Pautov; Ivan Oseledets
GNN-Ensemble: Towards Random Decision Graph Neural Networks. (56%)Wenqi Wei; Mu Qiao; Divyesh Jadav
Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving. (41%)Yinpeng Dong; Caixin Kang; Jinlai Zhang; Zijian Zhu; Yikai Wang; Xiao Yang; Hang Su; Xingxing Wei; Jun Zhu
Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking. (9%)Ruixiang Tang; Qizhang Feng; Ninghao Liu; Fan Yang; Xia Hu
Boosting Semi-Supervised Learning by Exploiting All Unlabeled Data. (2%)Yuhao Chen; Xin Tan; Borui Zhao; Zhaowei Chen; Renjie Song; Jiajun Liang; Xuequan Lu
Robustifying Token Attention for Vision Transformers. (1%)Yong Guo; David Stutz; Bernt Schiele
2023-03-19
Randomized Adversarial Training via Taylor Expansion. (99%)Gaojie Jin; Xinping Yi; Dengyu Wu; Ronghui Mu; Xiaowei Huang
AdaptGuard: Defending Against Universal Attacks for Model Adaptation. (82%)Lijun Sheng; Jian Liang; Ran He; Zilei Wang; Tieniu Tan
2023-03-18
NoisyHate: Benchmarking Content Moderation Machine Learning Models with Human-Written Perturbations Online. (98%)Yiran Ye; Thai Le; Dongwon Lee
FedRight: An Effective Model Copyright Protection for Federated Learning. (96%)Jinyin Chen; Mingjun Li; Haibin Zheng
2023-03-17
Fuzziness-tuned: Improving the Transferability of Adversarial Examples. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
Robust Mode Connectivity-Oriented Adversarial Defense: Enhancing Neural Network Robustness Against Diversified $\ell_p$ Attacks. (99%)Ren Wang; Yuxuan Li; Sijia Liu
It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness. (98%)Peiyu Xiong; Michael Tegegn; Jaskeerat Singh Sarin; Shubhraneel Pal; Julia Rubin
Detection of Uncertainty in Exceedance of Threshold (DUET): An Adversarial Patch Localizer. (83%)Terence Jie Chua; Wenhan Yu; Jun Zhao
Adversarial Counterfactual Visual Explanations. (31%)Guillaume Jeanneret; Loïc Simon; Frédéric Jurie
MedLocker: A Transferable Adversarial Watermarking for Preventing Unauthorized Analysis of Medical Image Dataset. (16%)Bangzheng Pu; Xingxing Wei; Shiji Zhao; Huazhu Fu
Can AI-Generated Text be Reliably Detected? (13%)Vinu Sankar Sadasivan; Aounon Kumar; Sriram Balasubramanian; Wenxiao Wang; Soheil Feizi
Mobile Edge Adversarial Detection for Digital Twinning to the Metaverse with Deep Reinforcement Learning. (9%)Terence Jie Chua; Wenhan Yu; Jun Zhao
Moving Target Defense for Service-oriented Mission-critical Networks. (1%)Doğanalp Ergenç; Florian Schneider; Peter Kling; Mathias Fischer
2023-03-16
Rethinking Model Ensemble in Transfer-based Adversarial Attacks. (99%)Huanran Chen; Yichi Zhang; Yinpeng Dong; Jun Zhu
Image Classifiers Leak Sensitive Attributes About Their Classes. (68%)Lukas Struppek; Dominik Hintersdorf; Felix Friedrich; Manuel Brack; Patrick Schramowski; Kristian Kersting
Among Us: Adversarially Robust Collaborative Perception by Consensus. (67%)Yiming Li; Qi Fang; Jiamu Bai; Siheng Chen; Felix Juefei-Xu; Chen Feng
Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks. (50%)Qifan Xiao; Xudong Pan; Yifan Lu; Mi Zhang; Jiarun Dai; Min Yang
Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation. (11%)Yifan Yan; Xudong Pan; Mi Zhang; Min Yang
2023-03-15
Black-box Adversarial Example Attack towards FCG Based Android Malware Detection under Incomplete Feature Information. (99%)Heng Li; Zhang Cheng; Bang Wu; Liheng Yuan; Cuiying Gao; Wei Yuan; Xiapu Luo
Robust Evaluation of Diffusion-Based Adversarial Purification. (83%)Minjong Lee; Dongwoo Kim
DeeBBAA: A benchmark Deep Black Box Adversarial Attack against Cyber-Physical Power Systems. (81%)Arnab Bhattacharjee; Tapan K. Saha; Ashu Verma; Sukumar Mishra
The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models. (62%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models. (45%)Ian E. Nielsen; Ravi P. Ramachandran; Nidhal Bouaynaya; Hassan M. Fathallah-Shaykh; Ghulam Rasool
Certifiable (Multi)Robustness Against Patch Attacks Using ERM. (10%)Saba Ahmadi; Avrim Blum; Omar Montasser; Kevin Stangl
Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement. (1%)Fartash Faghri; Hadi Pouransari; Sachin Mehta; Mehrdad Farajtabar; Ali Farhadi; Mohammad Rastegari; Oncel Tuzel
2023-03-14
BODEGA: Benchmark for Adversarial Example Generation in Credibility Assessment. (98%)Piotr Przybyła; Alexander Shvets; Horacio Saggion
Resilient Dynamic Average Consensus based on Trusted agents. (69%)Shamik Bhattacharyya; Rachel Kalpana Kalaimani
Improving Adversarial Robustness with Hypersphere Embedding and Angular-based Regularizations. (31%)Olukorede Fakorede; Ashutosh Nirala; Modeste Atsague; Jin Tian
2023-03-13
Constrained Adversarial Learning and its applicability to Automated Software Testing: a systematic review. (99%)João Vitorino; Tiago Dias; Tiago Fonseca; Eva Maia; Isabel Praça
Can Adversarial Examples Be Parsed to Reveal Victim Model Information? (99%)Yuguang Yao; Jiancheng Liu; Yifan Gong; Xiaoming Liu; Yanzhi Wang; Xue Lin; Sijia Liu
Review on the Feasibility of Adversarial Evasion Attacks and Defenses for Network Intrusion Detection Systems. (99%)Islam Debicha; Benjamin Cochez; Tayeb Kenaza; Thibault Debatty; Jean-Michel Dricot; Wim Mees
SMUG: Towards robust MRI reconstruction by smoothed unrolling. (98%)Hui Li; Jinghan Jia; Shijun Liang; Yuguang Yao; Saiprasad Ravishankar; Sijia Liu
Model-tuning Via Prompts Makes NLP Models Adversarially Robust. (93%)Mrigank Raman; Pratyush Maini; J. Zico Kolter; Zachary C. Lipton; Danish Pruthi
Robust Contrastive Language-Image Pretraining against Adversarial Attacks. (76%)Wenhan Yang; Baharan Mirzasoleiman
Model Extraction Attacks on Split Federated Learning. (47%)Jingtao Li; Adnan Siraj Rakin; Xing Chen; Li Yang; Zhezhi He; Deliang Fan; Chaitali Chakrabarti
WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminative Analysis. (1%)Yiye Chen; Yunzhi Lin; Ruinian Xu; Patricio A. Vela
2023-03-12
Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems. (99%)Islam Debicha; Benjamin Cochez; Tayeb Kenaza; Thibault Debatty; Jean-Michel Dricot; Wim Mees
Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality. (99%)Weiquan Liu; Shijun Zheng; Cheng Wang
DNN-Alias: Deep Neural Network Protection Against Side-Channel Attacks via Layer Balancing. (96%)Mahya Morid Ahmadi; Lilas Alrahis; Ozgur Sinanoglu; Muhammad Shafique
Multi-metrics adaptively identifies backdoors in Federated learning. (92%)Siquan Huang; Yijiang Li; Chong Chen; Leyu Shi; Ying Gao
Adversarial Attacks to Direct Data-driven Control for Destabilization. (91%)Hampei Sasahara
Backdoor Defense via Deconfounded Representation Learning. (83%)Zaixi Zhang; Qi Liu; Zhicai Wang; Zepu Lu; Qingyong Hu
Interpreting Hidden Semantics in the Intermediate Layers of 3D Point Cloud Classification Neural Network. (76%)Weiquan Liu; Minghao Liu; Shijun Zheng; Cheng Wang
Boosting Source Code Learning with Data Augmentation: An Empirical Study. (11%)Zeming Dong; Qiang Hu; Yuejun Guo; Zhenya Zhang; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
2023-03-11
Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning. (99%)Jin Ding; Jie-Chao Zhao; Yong-Zhi Sun; Ping Tan; Ji-En Ma; You-Tong Fang
SHIELD: An Adaptive and Lightweight Defense against the Remote Power Side-Channel Attacks on Multi-tenant FPGAs. (8%)Mahya Morid Ahmadi; Faiq Khalid; Radha Vaidya; Florian Kriebel; Andreas Steininger; Muhammad Shafique
2023-03-10
Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks. (99%)Binghui Wang; Meng Pang; Yun Dong
Boosting Adversarial Attacks by Leveraging Decision Boundary Information. (99%)Boheng Zeng; LianLi Gao; QiLong Zhang; ChaoQun Li; JingKuan Song; ShuaiQi Jing
Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey. (99%)Yulong Wang; Tong Sun; Shenghong Li; Xin Yuan; Wei Ni; Ekram Hossain; H. Vincent Poor
Investigating Stateful Defenses Against Black-Box Adversarial Examples. (99%)Ryan Feng; Ashish Hooda; Neal Mangaokar; Kassem Fawaz; Somesh Jha; Atul Prakash
Do we need entire training data for adversarial training? (99%)Vipul Gupta; Apurva Narayan
MIXPGD: Hybrid Adversarial Training for Speech Recognition Systems. (99%)Aminul Huq; Weiyi Zhang; Xiaolin Hu
TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets. (61%)Weixin Chen; Dawn Song; Bo Li
2023-03-09
NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks. (99%)Wenkai Tan; Justus Renkhoff; Alvaro Velasquez; Ziyu Wang; Lusi Li; Jian Wang; Shuteng Niu; Fan Yang; Yongxin Liu; Houbing Song
Evaluating the Robustness of Conversational Recommender Systems by Adversarial Examples. (92%)Ali Montazeralghaem; James Allan
Identification of Systematic Errors of Image Classifiers on Rare Subgroups. (83%)Jan Hendrik Metzen; Robin Hutmacher; N. Grace Hua; Valentyn Boreiko; Dan Zhang
Learning the Legibility of Visual Text Perturbations. (78%)Dev Seth; Rickard Stureborg; Danish Pruthi; Bhuwan Dhingra
Efficient Certified Training and Robustness Verification of Neural ODEs. (75%)Mustafa Zeqiri; Mark Niklas Müller; Marc Fischer; Martin Vechev
2023-03-08
Decision-BADGE: Decision-based Adversarial Batch Attack with Directional Gradient Estimation. (99%)Geunhyeok Yu; Minwoo Jeon; Hyoseok Hwang
Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples. (99%)Jinwei Wang; Hao Wu; Haihua Wang; Jiawei Zhang; Xiangyang Luo; Bin Ma
Exploring Adversarial Attacks on Neural Networks: An Explainable Approach. (99%)Justus Renkhoff; Wenkai Tan; Alvaro Velasquez; William Yichen Wang; Yongxin Liu; Jian Wang; Shuteng Niu; Lejla Begic Fazlic; Guido Dartmann; Houbing Song
BeamAttack: Generating High-quality Textual Adversarial Examples through Beam Search and Mixed Semantic Spaces. (99%)Hai Zhu; Qingyang Zhao; Yuren Wu
DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep Neural Networks. (3%)Zohreh Aghababaeyan; Manel Abdellatif; Mahboubeh Dadkhah; Lionel Briand
2023-03-07
Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration. (99%)Juanjuan Weng; Zhiming Luo; Zhun Zhong; Shaozi Li; Nicu Sebe
Robustness-preserving Lifelong Learning via Dataset Condensation. (96%)Jinghan Jia; Yihua Zhang; Dogyoon Song; Sijia Liu; Alfred Hero
Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors. (93%)Raz Lapid; Moshe Sipper
CUDA: Convolution-based Unlearnable Datasets. (82%)Vinu Sankar Sadasivan; Mahdi Soltanolkotabi; Soheil Feizi
EavesDroid: Eavesdropping User Behaviors via OS Side-Channels on Smartphones. (11%)Quancheng Wang; Ming Tang; Jianming Fu
Stabilized training of joint energy-based models and their practical applications. (2%)Martin Sustek; Samik Sadhu; Lukas Burget; Hynek Hermansky; Jesus Villalba; Laureano Moro-Velazquez; Najim Dehak
2023-03-06
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning. (33%)Hritik Bansal; Nishad Singhi; Yu Yang; Fan Yin; Aditya Grover; Kai-Wei Chang
Students Parrot Their Teachers: Membership Inference on Model Distillation. (31%)Matthew Jagielski; Milad Nasr; Christopher Choquette-Choo; Katherine Lee; Nicholas Carlini
A Unified Algebraic Perspective on Lipschitz Neural Networks. (15%)Alexandre Araujo; Aaron Havens; Blaise Delattre; Alexandre Allauzen; Bin Hu
Learning to Backdoor Federated Learning. (15%)Henger Li; Chen Wu; Senchun Zhu; Zizhan Zheng
On the Feasibility of Specialized Ability Stealing for Large Language Code Models. (2%)Zongjie Li; Chaozheng Wang; Pingchuan Ma; Chaowei Liu; Shuai Wang; Daoyuan Wu; Cuiyun Gao
ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning. (1%)Animesh Basak Chowdhury; Lilas Alrahis; Luca Collini; Johann Knechtel; Ramesh Karri; Siddharth Garg; Ozgur Sinanoglu; Benjamin Tan
Rethinking Confidence Calibration for Failure Prediction. (1%)Fei Zhu; Zhen Cheng; Xu-Yao Zhang; Cheng-Lin Liu
2023-03-05
Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models. (99%)Hassan Ali; Muhammad Atif Butt; Fethi Filali; Ala Al-Fuqaha; Junaid Qadir
Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks. (99%)Yiran Li; Junpeng Wang; Takanori Fujiwara; Kwan-Liu Ma
Adversarial Sampling for Fairness Testing in Deep Neural Network. (98%)Tosin Ige; William Marfo; Justin Tonkinson; Sikiru Adewale; Bolanle Hafiz Matti
Local Environment Poisoning Attacks on Federated Reinforcement Learning. (12%)Evelyn Ma; Tiancheng Qin; Rasoul Etesami
Robustness, Evaluation and Adaptation of Machine Learning Models in the Wild. (10%)Vihari Piratla
Knowledge-Based Counterfactual Queries for Visual Question Answering. (3%)Theodoti Stoikou; Maria Lymperaiou; Giorgos Stamou
2023-03-04
Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes. (68%)Thomas Philippon; Christian Gagné
2023-03-03
PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. (91%)Jinghuai Zhang; Jinyuan Jia; Hongbin Liu; Neil Zhenqiang Gong
Certified Robust Neural Networks: Generalization and Corruption Resistance. (69%)Amine Bennouna; Ryan Lucas; Bart Van Parys
Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions. (47%)Thuy Dung Nguyen; Tuan Nguyen; Phi Le Nguyen; Hieu H. Pham; Khoa Doan; Kok-Seng Wong
Adversarial Attacks on Machine Learning in Embedded and IoT Platforms. (38%)Christian Westbrook; Sudeep Pasricha
Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models. (33%)Naman D Singh; Francesco Croce; Matthias Hein
Stealthy Perception-based Attacks on Unmanned Aerial Vehicles. (16%)Amir Khazraei; Haocheng Meng; Miroslav Pajic
AdvART: Adversarial Art for Camouflaged Object Detection Attacks. (15%)Amira Guesmi; Ioan Marius Bilasco; Muhammad Shafique; Ihsen Alouani
TrojText: Test-time Invisible Textual Trojan Insertion. (2%)Yepeng Liu; Bo Feng; Qian Lou
2023-03-02
Defending against Adversarial Audio via Diffusion Model. (99%)Shutong Wu; Jiongxiao Wang; Wei Ping; Weili Nie; Chaowei Xiao
Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression. (99%)Junho Kim; Byung-Kwan Lee; Yong Man Ro
APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Ihsen Alouani; Muhammad Shafique
AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Muhammad Shafique
Targeted Adversarial Attacks against Neural Machine Translation. (98%)Sahar Sadrizadeh; AmirHossein Dabiri Aghdam; Ljiljana Dolamic; Pascal Frossard
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks. (93%)Spencer Frei; Gal Vardi; Peter L. Bartlett; Nathan Srebro
Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators. (10%)Lennart Brocki; Neo Christopher Chung
D-Score: An Expert-Based Method for Assessing the Detectability of IoT-Related Cyber-Attacks. (3%)Yair Meidan; Daniel Benatar; Ron Bitton; Dan Avraham; Asaf Shabtai
Interpretable System Identification and Long-term Prediction on Time-Series Data. (1%)Xiaoyi Liu; Duxin Chen; Wenjia Wei; Xia Zhu; Wenwu Yu
Consistency Models. (1%)Yang Song; Prafulla Dhariwal; Mark Chen; Ilya Sutskever
CADeSH: Collaborative Anomaly Detection for Smart Homes. (1%)Yair Meidan; Dan Avraham; Hanan Libhaber; Asaf Shabtai
Conflict-Based Cross-View Consistency for Semi-Supervised Semantic Segmentation. (1%)Zicheng Wang; Zhen Zhao; Xiaoxia Xing; Dong Xu; Xiangyu Kong; Luping Zhou
2023-03-01
To Make Yourself Invisible with Adversarial Semantic Contours. (99%)Yichi Zhang; Zijian Zhu; Hang Su; Jun Zhu; Shibao Zheng; Yuan He; Hui Xue
Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Data Manifolds. (98%)Odelia Melamed; Gilad Yehudai; Gal Vardi
Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process. (95%)Mingze Ni; Zhensu Sun; Wei Liu
A Practical Upper Bound for the Worst-Case Attribution Deviations. (70%)Fan Wang; Adams Wai-Kin Kong
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning. (54%)Jianing Zhu; Jiangchao Yao; Tongliang Liu; Quanming Yao; Jianliang Xu; Bo Han
Poster: Sponge ML Model Attacks of Mobile Apps. (8%)Souvik Paul; Nicolas Kourtellis
DOLOS: A Novel Architecture for Moving Target Defense. (8%)Giulio Pagnotta; Fabio De Gaspari; Dorjan Hitaj; Mauro Andreolini; Michele Colajanni; Luigi V. Mancini
Mitigating Backdoors in Federated Learning with FLD. (2%)Yihang Lin; Pengyuan Zhou; Zhiqian Wu; Yong Liao
Competence-Based Analysis of Language Models. (1%)Adam Davies; Jize Jiang; ChengXiang Zhai
2023-02-28
A semantic backdoor attack against Graph Convolutional Networks. (98%)Jiazhu Dai; Zhipeng Xiong
Feature Extraction Matters More: Universal Deepfake Disruption through Attacking Ensemble Feature Extractors. (67%)Long Tang; Dengpan Ye; Zhenhao Lu; Yunming Zhang; Shengshan Hu; Yue Xu; Chuanxi Chen
Single Image Backdoor Inversion via Robust Smoothed Classifiers. (22%)Mingjie Sun; Zico Kolter
Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger. (11%)Yi Yu; Yufei Wang; Wenhan Yang; Shijian Lu; Yap-peng Tan; Alex C. Kot
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases. (1%)Chong Fu; Xuhong Zhang; Shouling Ji; Ting Wang; Peng Lin; Yanghe Feng; Jianwei Yin
2023-02-27
A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking. (99%)Chang Liu; Yinpeng Dong; Wenzhao Xiang; Xiao Yang; Hang Su; Jun Zhu; Yuefeng Chen; Yuan He; Hui Xue; Shibao Zheng
Adversarial Attack with Raindrops. (99%)Jiyuan Liu; Bingyi Lu; Mingkang Xiong; Tao Zhang; Huilin Xiong
Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign Recognition: A Feasibility Study. (99%)Fabian Woitschek; Georg Schneider
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks. (98%)Jialai Wang; Ziyuan Zhang; Meiqi Wang; Han Qiu; Tianwei Zhang; Qi Li; Zongpeng Li; Tao Wei; Chao Zhang
CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World. (98%)Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Shaohui Mei
Improving Model Generalization by On-manifold Adversarial Augmentation in the Frequency Domain. (96%)Chang Liu; Wenzhao Xiang; Yuan He; Hui Xue; Shibao Zheng; Hang Su
Efficient and Low Overhead Website Fingerprinting Attacks and Defenses based on TCP/IP Traffic. (83%)Guodong Huang; Chuan Ma; Ming Ding; Yuwen Qian; Chunpeng Ge; Liming Fang; Zhe Liu
GLOW: Global Layout Aware Attacks on Object Detection. (81%)Buyu Liu; Jun Bao; Jianping Fan; Xi Peng; Kui Ren; Jun Yu
Online Black-Box Confidence Estimation of Deep Neural Networks. (16%)Fabian Woitschek; Georg Schneider
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks. (15%)Mohammad Mohammadi; Jonathan Nöther; Debmalya Mandal; Adish Singla; Goran Radanovic
Differentially Private Diffusion Models Generate Useful Synthetic Images. (10%)Sahra Ghalebikesabi; Leonard Berrada; Sven Gowal; Ira Ktena; Robert Stanforth; Jamie Hayes; Soham De; Samuel L. Smith; Olivia Wiles; Borja Balle
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation. (5%)Gaurav Patel; Konda Reddy Mopuri; Qiang Qiu
2023-02-26
Contextual adversarial attack against aerial detection in the physical world. (99%)Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Shaohui Mei
Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. (96%)Keane Lucas; Matthew Jagielski; Florian Tramèr; Lujo Bauer; Nicholas Carlini
2023-02-25
Deep Learning-based Multi-Organ CT Segmentation with Adversarial Data Augmentation. (99%)Shaoyan Pan; Shao-Yuan Lo; Min Huang; Chaoqiong Ma; Jacob Wynne; Tonghe Wang; Tian Liu; Xiaofeng Yang
Scalable Attribution of Adversarial Attacks via Multi-Task Learning. (99%)Zhongyi Guo; Keji Han; Yao Ge; Wei Ji; Yun Li
SATBA: An Invisible Backdoor Attack Based On Spatial Attention. (67%)Huasong Zhou; Xiaowei Xu; Xiaodong Wang; Leon Bevan Bullock
2023-02-24
Defending Against Backdoor Attacks by Layer-wise Feature Analysis. (68%)Najeeb Moharram Jebreel; Josep Domingo-Ferrer; Yiming Li
Chaotic Variational Auto encoder-based Adversarial Machine Learning. (54%)Pavan Venkata Sainadh Reddy; Yelleti Vivek; Gopi Pranay; Vadlamani Ravi
Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? (12%)Ruisi Cai; Zhenyu Zhang; Zhangyang Wang
2023-02-23
Less is More: Data Pruning for Faster Adversarial Training. (99%)Yize Li; Pu Zhao; Xue Lin; Bhavya Kailkhura; Ryan Goldhahn
A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots. (99%)Boyang Zhang; Xinlei He; Yun Shen; Tianhao Wang; Yang Zhang
Boosting Adversarial Transferability using Dynamic Cues. (99%)Muzammal Naseer; Ahmad Mahmood; Salman Khan; Fahad Khan
HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks. (98%)Chao Hu; Ruishi Yu; Binqi Zeng; Yu Zhan; Ying Fu; Quan Zhang; Rongkai Liu; Heyuan Shi
Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective. (84%)Zhengbao He; Tao Li; Sizhe Chen; Xiaolin Huang
More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models. (70%)Kai Greshake; Sahar Abdelnabi; Shailesh Mishra; Christoph Endres; Thorsten Holz; Mario Fritz
On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space. (68%)Yuyang Deng; Nidham Gazagnadou; Junyuan Hong; Mehrdad Mahdavi; Lingjuan Lyu
Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity. (2%)Khatoon Mohammed
2023-02-22
Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques. (98%)Saminder Dhesi; Laura Fontes; Pedro Machado; Isibor Kennedy Ihianle; Farhad Fassihi Tash; David Ada Adama
PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks. (98%)Deqiang Li; Shicheng Cui; Yun Li; Jia Xu; Fu Xiao; Shouhuai Xu
Feature Partition Aggregation: A Fast Certified Defense Against a Union of Sparse Adversarial Attacks. (97%)Zayd Hammoudeh; Daniel Lowd
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms. (33%)Minzhou Pan; Yi Zeng; Lingjuan Lyu; Xue Lin; Ruoxi Jia
On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective. (12%)Jindong Wang; Xixu Hu; Wenxin Hou; Hao Chen; Runkai Zheng; Yidong Wang; Linyi Yang; Haojun Huang; Wei Ye; Xiubo Geng; Binxin Jiao; Yue Zhang; Xing Xie
2023-02-21
MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection. (99%)Aqib Rashid; Jose Such
MultiRobustBench: Benchmarking Robustness Against Multiple Attacks. (99%)Sihui Dai; Saeed Mahloujifar; Chong Xiang; Vikash Sehwag; Pin-Yu Chen; Prateek Mittal
Interpretable Spectrum Transformation Attacks to Speaker Recognition. (98%)Jiadi Yao; Hong Luo; Xiao-Lei Zhang
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker. (97%)Sihui Dai; Wenxin Ding; Arjun Nitin Bhagoji; Daniel Cullina; Ben Y. Zhao; Haitao Zheng; Prateek Mittal
Generalization Bounds for Adversarial Contrastive Learning. (31%)Xin Zou; Weiwei Liu
2023-02-20
An Incremental Gray-box Physical Adversarial Attack on Neural Network Training. (98%)Rabiah Al-qudah; Moayad Aloqaily; Bassem Ouni; Mohsen Guizani; Thierry Lestable
Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System. (97%)Hao Lv; Bing Li; Lei Zhang; Cheng Liu; Ying Wang
Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts. (88%)Francesco Croce; Sylvestre-Alvise Rebuffi; Evan Shelhamer; Sven Gowal
Poisoning Web-Scale Training Datasets is Practical. (83%)Nicholas Carlini; Matthew Jagielski; Christopher A. Choquette-Choo; Daniel Paleka; Will Pearce; Hyrum Anderson; Andreas Terzis; Kurt Thomas; Florian Tramèr
Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network. (47%)Xiaojian Yuan; Kejiang Chen; Jie Zhang; Weiming Zhang; Nenghai Yu; Yang Zhang
Model-based feature selection for neural networks: A mixed-integer programming approach. (22%)Shudian Zhao; Calvin Tsay; Jan Kronqvist
Take Me Home: Reversing Distribution Shifts using Reinforcement Learning. (8%)Vivian Lin; Kuk Jin Jang; Souradeep Dutta; Michele Caprio; Oleg Sokolsky; Insup Lee
2023-02-19
X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection. (99%)Aishan Liu; Jun Guo; Jiakai Wang; Siyuan Liang; Renshuai Tao; Wenbo Zhou; Cong Liu; Xianglong Liu; Dacheng Tao
Stationary Point Losses for Robust Model. (93%)Weiwei Gao; Dazhi Zhang; Yao Li; Zhichang Guo; Ovanes Petrosian
On Feasibility of Server-side Backdoor Attacks on Split Learning. (76%)Behrad Tajalli; Oguzhan Ersoy; Stjepan Picek
2023-02-18
Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example. (99%)Baoyuan Wu; Li Liu; Zihao Zhu; Qingshan Liu; Zhaofeng He; Siwei Lyu
Delving into the Adversarial Robustness of Federated Learning. (98%)Jie Zhang; Bo Li; Chen Chen; Lingjuan Lyu; Shuang Wu; Shouhong Ding; Chao Wu
Meta Style Adversarial Training for Cross-Domain Few-Shot Learning. (78%)Yuqian Fu; Yu Xie; Yanwei Fu; Yu-Gang Jiang
MedViT: A Robust Vision Transformer for Generalized Medical Image Classification. (12%)Omid Nejati Manzari; Hamid Ahmadabadi; Hossein Kashiani; Shahriar B. Shokouhi; Ahmad Ayatollahi
RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks. (11%)Marwan Omar
2023-02-17
Measuring Equality in Machine Learning Security Defenses. (81%)Luke E. Richards; Edward Raff; Cynthia Matuszek
Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions. (5%)Manish Nagireddy; Moninder Singh; Samuel C. Hoffman; Evaline Ju; Karthikeyan Natesan Ramamurthy; Kush R. Varshney
RetVec: Resilient and Efficient Text Vectorizer. (1%)Elie Bursztein; Marina Zhang; Owen Vallis; Xinyu Jia; Alexey Kurakin
2023-02-16
On the Effect of Adversarial Training Against Invariance-based Adversarial Examples. (99%)Roland Rauter; Martin Nocker; Florian Merkle; Pascal Schöttle
High-frequency Matters: An Overwriting Attack and defense for Image-processing Neural Network Watermarking. (67%)Huajie Chen; Tianqing Zhu; Chi Liu; Shui Yu; Wanlei Zhou
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data. (3%)Pratik Karmakar; Debabrota Basu
A Novel Noise Injection-based Training Scheme for Better Model Robustness. (2%)Zeliang Zhang; Jinyang Jiang; Minjie Chen; Zhiyuan Wang; Yijie Peng; Zhaofei Yu
2023-02-15
Masking and Mixing Adversarial Training. (99%)Hiroki Adachi; Tsubasa Hirakawa; Takayoshi Yamashita; Hironobu Fujiyoshi; Yasunori Ishii; Kazuki Kozuka
Robust Mid-Pass Filtering Graph Convolutional Networks. (98%)Jincheng Huang; Lun Du; Xu Chen; Qiang Fu; Shi Han; Dongmei Zhang
Graph Adversarial Immunization for Certifiable Robustness. (98%)Shuchang Tao; Huawei Shen; Qi Cao; Yunfan Wu; Liang Hou; Xueqi Cheng
XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars. (87%)Abhiroop Bhattacharjee; Abhishek Moitra; Priyadarshini Panda
Tight Auditing of Differentially Private Machine Learning. (41%)Milad Nasr; Jamie Hayes; Thomas Steinke; Borja Balle; Florian Tramèr; Matthew Jagielski; Nicholas Carlini; Andreas Terzis
Field-sensitive Data Flow Integrity. (1%)So Shizukuishi; Yoshitaka Arahori; Katsuhiko Gondow
Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection. (1%)Mouxiao Huang; Yu Qiao
2023-02-14
Randomization for adversarial robustness: the Good, the Bad and the Ugly. (99%)Lucas Gnecco-Heredia; Yann Chevaleyre; Benjamin Negrevergne; Laurent Meunier
Regret-Based Optimization for Robust Reinforcement Learning. (99%)Roman Belaire; Pradeep Varakantham; David Lo
Attacking Fake News Detectors via Manipulating News Social Engagement. (83%)Haoran Wang; Yingtong Dou; Canyu Chen; Lichao Sun; Philip S. Yu; Kai Shu
An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning. (31%)Shenghui Li; Edith C. -H. Ngai; Thiemo Voigt
A modern look at the relationship between sharpness and generalization. (10%)Maksym Andriushchenko; Francesco Croce; Maximilian Müller; Matthias Hein; Nicolas Flammarion
Bounding Training Data Reconstruction in DP-SGD. (8%)Jamie Hayes; Saeed Mahloujifar; Borja Balle
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises. (1%)Chenglei Si; Zhengyan Zhang; Yingfa Chen; Xiaozhi Wang; Zhiyuan Liu; Maosong Sun
2023-02-13
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data. (98%)Gorka Abad; Oguzhan Ersoy; Stjepan Picek; Aitor Urbieta
Raising the Cost of Malicious AI-Powered Image Editing. (82%)Hadi Salman; Alaa Khaddaj; Guillaume Leclerc; Andrew Ilyas; Aleksander Madry
Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge. (8%)Ali Al-Kaswan; Maliheh Izadi; Arie van Deursen
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions. (1%)Marwan Omar
2023-02-12
TextDefense: Adversarial Text Detection based on Word Importance Entropy. (99%)Lujia Shen; Xuhong Zhang; Shouling Ji; Yuwen Pu; Chunpeng Ge; Xing Yang; Yanghe Feng
2023-02-11
Mutation-Based Adversarial Attacks on Neural Text Detectors. (69%)Gongbo Liang; Jesus Guerrero; Izzat Alsmadi
HateProof: Are Hateful Meme Detection Systems really Robust? (13%)Piush Aggarwal; Pranit Chawla; Mithun Das; Punyajoy Saha; Binny Mathew; Torsten Zesch; Animesh Mukherjee
MTTM: Metamorphic Testing for Textual Content Moderation Software. (2%)Wenxuan Wang; Jen-tse Huang; Weibin Wu; Jianping Zhang; Yizhan Huang; Shuqing Li; Pinjia He; Michael Lyu
Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play. (1%)Jeremiah Zhe Liu; Krishnamurthy Dj Dvijotham; Jihyeon Lee; Quan Yuan; Martin Strobel; Balaji Lakshminarayanan; Deepak Ramachandran
High Recovery with Fewer Injections: Practical Binary Volumetric Injection Attacks against Dynamic Searchable Encryption. (1%)Xianglong Zhang; Wei Wang; Peng Xu; Laurence T. Yang; Kaitai Liang
2023-02-10
Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Unnoticeable Backdoor Attacks on Graph Neural Networks. (80%)Enyan Dai; Minhua Lin; Xiang Zhang; Suhang Wang
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks. (73%)Piotr Gaiński; Klaudia Bałazy
2023-02-09
IB-RAR: Information Bottleneck as Regularizer for Adversarial Robustness. (98%)Xiaoyun Xu; Guilherme Perin; Stjepan Picek
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples. (98%)Chumeng Liang; Xiaoyu Wu; Yang Hua; Jiaru Zhang; Yiming Xue; Tao Song; Zhengui Xue; Ruhui Ma; Haibing Guan
Hyperparameter Search Is All You Need For Training-Agnostic Backdoor Robustness. (75%)Eugene Bagdasaryan; Vitaly Shmatikov
Imperceptible Sample-Specific Backdoor to DNN with Denoising Autoencoder. (62%)Jiliang Zhang; Jing Xu; Zhi Zhang; Yansong Gao
Augmenting NLP data to counter Annotation Artifacts for NLI Tasks. (16%)Armaan Singh Bhullar
Better Diffusion Models Further Improve Adversarial Training. (12%)Zekai Wang; Tianyu Pang; Chao Du; Min Lin; Weiwei Liu; Shuicheng Yan
Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks. (1%)Pengfei Yang; Zhiming Chi; Zongxin Liu; Mengyu Zhao; Cheng-Chao Huang; Shaowei Cai; Lijun Zhang
2023-02-08
WAT: Improve the Worst-class Robustness in Adversarial Training. (99%)Boqi Li; Weiwei Liu
Exploiting Certified Defences to Attack Randomised Smoothing. (99%)Andrew C. Cullen; Paul Montague; Shijie Liu; Sarah M. Erfani; Benjamin I. P. Rubinstein
Shortcut Detection with Variational Autoencoders. (13%)Nicolas M. Müller; Simon Roschmann; Shahbaz Khan; Philip Sperl; Konstantin Böttinger
Continuous Learning for Android Malware Detection. (13%)Yizheng Chen; Zhoujie Ding; David Wagner
Training-free Lexical Backdoor Attacks on Language Models. (8%)Yujin Huang; Terry Yue Zhuo; Qiongkai Xu; Han Hu; Xingliang Yuan; Chunyang Chen
On Function-Coupled Watermarks for Deep Neural Networks. (2%)Xiangyu Wen; Yu Li; Wei Jiang; Qiang Xu
Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy. (1%)Cheolhyoung Lee; Kyunghyun Cho
2023-02-07
Toward Face Biometric De-identification using Adversarial Examples. (98%)Mahdi Ghafourian; Julian Fierrez; Luis Felipe Gomez; Ruben Vera-Rodriguez; Aythami Morales; Zohra Rezgui; Raymond Veldhuis
Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence. (83%)Simin Li; Jun Guo; Jingqiao Xiu; Pu Feng; Xin Yu; Jiakai Wang; Aishan Liu; Wenjun Wu; Xianglong Liu
Membership Inference Attacks against Diffusion Models. (64%)Tomoya Matsumoto; Takayuki Miura; Naoto Yanai
Temporal Robustness against Data Poisoning. (12%)Wenxiao Wang; Soheil Feizi
Robustness Implies Fairness in Causal Algorithmic Recourse. (2%)Ahmad-Reza Ehyaei; Amir-Hossein Karimi; Bernhard Schölkopf; Setareh Maghsudi
Low-Latency Communication using Delay-Aware Relays Against Reactive Adversaries. (1%)Vivek Chaudhary; J. Harshan
2023-02-06
Less is More: Understanding Word-level Textual Adversarial Attack via n-gram Frequency Descend. (99%)Ning Lu; Shengcai Liu; Zhirui Zhang; Qi Wang; Haifeng Liu; Ke Tang
SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency. (92%)Junfeng Guo; Yiming Li; Xun Chen; Hanqing Guo; Lichao Sun; Cong Liu
Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness. (87%)Yuancheng Xu; Yanchao Sun; Micah Goldblum; Tom Goldstein; Furong Huang
Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks. (75%)Jan Schuchardt; Aleksandar Bojchevski; Johannes Gasteiger; Stephan Günnemann
GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks. (67%)Salah Ghamizi; Jingfeng Zhang; Maxime Cordy; Mike Papadakis; Masashi Sugiyama; Yves Le Traon
Target-based Surrogates for Stochastic Optimization. (1%)Jonathan Wilder Lavington; Sharan Vaswani; Reza Babanezhad; Mark Schmidt; Nicolas Le Roux
Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks. (1%)Emanuele Ledda; Giorgio Fumera; Fabio Roli
2023-02-05
On the Role of Contrastive Representation Learning in Adversarial Robustness: An Empirical Study. (54%)Fatemeh Ghofrani; Mehdi Yaghouti; Pooyan Jamshidi
2023-02-04
CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks. (98%)Shashank Agnihotri; Margret Keuper
A Minimax Approach Against Multi-Armed Adversarial Attacks Detection. (86%)Federica Granese; Marco Romanelli; Siddharth Garg; Pablo Piantanida
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks. (81%)Keivan Rezaei; Kiarash Banihashem; Atoosa Chegini; Soheil Feizi
Certified Robust Control under Adversarial Perturbations. (78%)Jinghan Yang; Hunmin Kim; Wenbin Wan; Naira Hovakimyan; Yevgeniy Vorobeychik
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Decision Tree Models. (1%)Abdullah Caglar Oksuz; Anisa Halimi; Erman Ayday
2023-02-03
TextShield: Beyond Successfully Detecting Adversarial Sentences in Text Classification. (96%)Lingfeng Shen; Ze Zhang; Haiyun Jiang; Ying Chen
DeTorrent: An Adversarial Padding-only Traffic Analysis Defense. (73%)James K Holland; Jason Carpenter; Se Eun Oh; Nicholas Hopper
A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification. (56%)Gorka Abad; Jing Xu; Stefanos Koffas; Behrad Tajalli; Stjepan Picek
Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels. (15%)Simone Bombari; Shayan Kiyani; Marco Mondelli
Asymmetric Certified Robustness via Feature-Convex Neural Networks. (8%)Samuel Pfrommer; Brendon G. Anderson; Julien Piet; Somayeh Sojoudi
BarrierBypass: Out-of-Sight Clean Voice Command Injection Attacks through Physical Barriers. (2%)Payton Walker; Tianfang Zhang; Cong Shi; Nitesh Saxena; Yingying Chen
From Robustness to Privacy and Back. (2%)Hilal Asi; Jonathan Ullman; Lydia Zakynthinou
Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning. (1%)Jacob Alexander Markson Brown; Xi Jiang; Van Tran; Arjun Nitin Bhagoji; Nguyen Phong Hoang; Nick Feamster; Prateek Mittal; Vinod Yegneswaran
2023-02-02
Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense. (99%)Zunzhi You; Daochang Liu; Chang Xu
TransFool: An Adversarial Attack against Neural Machine Translation Models. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
On the Robustness of Randomized Ensembles to Adversarial Perturbations. (75%)Hassan Dbouk; Naresh R. Shanbhag
A sliced-Wasserstein distance-based approach for out-of-class-distribution detection. (62%)Mohammad Shifat E Rabbi; Abu Hasnat Mohammad Rubaiyat; Yan Zhuang; Gustavo K Rohde
Effective Robustness against Natural Distribution Shifts for Models with Different Training Data. (13%)Zhouxing Shi; Nicholas Carlini; Ananth Balashankar; Ludwig Schmidt; Cho-Jui Hsieh; Alex Beutel; Yao Qin
SPECWANDS: An Efficient Priority-based Scheduler Against Speculation Contention Attacks. (10%)Bowen Tang; Chenggang Wu; Pen-Chung Yew; Yinqian Zhang; Mengyao Xie; Yuanming Lai; Yan Kang; Wei Wang; Qiang Wei; Zhe Wang
Provably Bounding Neural Network Preimages. (8%)Suhas Kotha; Christopher Brix; Zico Kolter; Krishnamurthy Dvijotham; Huan Zhang
Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation. (2%)Hyoungwook Nam; Raghavendra Pradyumna Pothukuchi; Bo Li; Nam Sung Kim; Josep Torrellas
Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications. (1%)Chengyu Dong
Dataset Distillation Fixes Dataset Reconstruction Attacks. (1%)Noel Loo; Ramin Hasani; Mathias Lechner; Daniela Rus
2023-02-01
Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks. (99%)Xiaoyun Xu; Oguzhan Ersoy; Stjepan Picek
Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection. (92%)Aqib Rashid; Jose Such
Exploring Semantic Perturbations on Grover. (56%)Pranav Kulkarni; Ziqing Ji; Yan Xu; Marko Neskovic; Kevin Nolan
BackdoorBox: A Python Toolbox for Backdoor Learning. (10%)Yiming Li; Mengxi Ya; Yang Bai; Yong Jiang; Shu-Tao Xia
2023-01-31
Reverse engineering adversarial attacks with fingerprints from adversarial examples. (99%)David Aaron Nicholson; Vincent Emanuele
The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models. (97%)Son Quoc Tran; Phong Nguyen-Thuan Do; Uyen Le; Matt Kretchmar
Are Defenses for Graph Neural Networks Robust? (80%)Felix Mujkanovic; Simon Geisler; Stephan Günnemann; Aleksandar Bojchevski
Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks. (75%)Zhiyuan Cheng; James Liang; Guanhong Tao; Dongfang Liu; Xiangyu Zhang
Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond. (47%)Meyer Scetbon; Elvis Dohmatob
Fairness-aware Vision Transformer via Debiased Self-Attention. (47%)Yao Qiang; Chengyin Li; Prashant Khanduri; Dongxiao Zhu
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression. (12%)Zhuoran Liu; Zhengyu Zhao; Martha Larson
Identifying the Hazard Boundary of ML-enabled Autonomous Systems Using Cooperative Co-Evolutionary Search. (1%)Sepehr Sharifi; Donghwan Shin; Lionel C. Briand; Nathan Aschbacher
2023-01-30
Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness. (99%)Bao Gia Doan; Shuiqiao Yang; Paul Montague; Olivier De Vel; Tamas Abraham; Seyit Camtepe; Salil S. Kanhere; Ehsan Abbasnejad; Damith C. Ranasinghe
Improving Adversarial Transferability with Scheduled Step Size and Dual Example. (99%)Zeliang Zhang; Peihan Liu; Xiaosen Wang; Chenliang Xu
Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification. (99%)João Vitorino; Isabel Praça; Eva Maia
Certified Robustness of Learning-based Static Malware Detectors. (99%)Zhuoqun Huang; Neil G. Marchant; Keane Lucas; Lujo Bauer; Olga Ohrimenko; Benjamin I. P. Rubinstein
Identifying Adversarially Attackable and Robust Samples. (99%)Vyas Raina; Mark Gales
On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex. (98%)Terry Yue Zhuo; Zhuang Li; Yujin Huang; Fatemeh Shiri; Weiqing Wang; Gholamreza Haffari; Yuan-Fang Li
Anchor-Based Adversarially Robust Zero-Shot Learning Driven by Language. (96%)Xiao Li; Wei Zhang; Yining Liu; Zhanhao Hu; Bo Zhang; Xiaolin Hu
Inference Time Evidences of Adversarial Attacks for Forensic on Transformers. (87%)Hugo Lemarchant; Liangzi Li; Yiming Qian; Yuta Nakashima; Hajime Nagahara
On the Efficacy of Metrics to Describe Adversarial Attacks. (82%)Tommaso Puccetti; Tommaso Zoppi; Andrea Ceccarelli
Benchmarking Robustness to Adversarial Image Obfuscations. (74%)Florian Stimberg; Ayan Chakrabarti; Chun-Ta Lu; Hussein Hazimeh; Otilia Stretcu; Wei Qiao; Yintao Liu; Merve Kaya; Cyrus Rashtchian; Ariel Fuxman; Mehmet Tek; Sven Gowal
Extracting Training Data from Diffusion Models. (5%)Nicholas Carlini; Jamie Hayes; Milad Nasr; Matthew Jagielski; Vikash Sehwag; Florian Tramèr; Borja Balle; Daphne Ippolito; Eric Wallace
M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System. (1%)Chenqi Kong; Kexin Zheng; Yibing Liu; Shiqi Wang; Anderson Rocha; Haoliang Li
2023-01-29
Scaling in Depth: Unlocking Robustness Certification on ImageNet. (98%)Kai Hu; Andy Zou; Zifan Wang; Klas Leino; Matt Fredrikson
Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid. (93%)Farhin Farhad Riya; Shahinul Hoque; Jinyuan Stella Sun; Jiangnan Li; Hairong Qi
Uncovering Adversarial Risks of Test-Time Adaptation. (82%)Tong Wu; Feiran Jia; Xiangyu Qi; Jiachen T. Wang; Vikash Sehwag; Saeed Mahloujifar; Prateek Mittal
Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing. (75%)Yatong Bai; Brendon G. Anderson; Aerin Kim; Somayeh Sojoudi
Adversarial Attacks on Adversarial Bandits. (69%)Yuzhe Ma; Zhijin Zhou
Towards Verifying the Geometric Robustness of Large-scale Neural Networks. (54%)Fu Wang; Peipei Xu; Wenjie Ruan; Xiaowei Huang
Lateralized Learning for Multi-Class Visual Classification Tasks. (13%)Abubakar Siddique; Will N. Browne; Gina M. Grimshaw
Diverse, Difficult, and Odd Instances (D2O): A New Test Set for Object Classification. (3%)Ali Borji
Adversarial Style Augmentation for Domain Generalization. (2%)Yabin Zhang; Bin Deng; Ruihuang Li; Kui Jia; Lei Zhang
Confidence-Aware Calibration and Scoring Functions for Curriculum Learning. (1%)Shuang Ao; Stefan Rueger; Advaith Siddharthan
2023-01-28
Node Injection for Class-specific Network Poisoning. (82%)Ansh Kumar Sharma; Rahul Kukreja; Mayank Kharbanda; Tanmoy Chakraborty
Out-of-distribution Detection with Energy-based Models. (82%)Sven Elflein
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering. (13%)Rui Zhu; Di Tang; Siyuan Tang; Guanhong Tao; Shiqing Ma; Xiaofeng Wang; Haixu Tang
Selecting Models based on the Risk of Damage Caused by Adversarial Attacks. (1%)Jona Klemenc; Holger Trittenbach
2023-01-27
Semantic Adversarial Attacks on Face Recognition through Significant Attributes. (99%)Yasmeen M. Khedr; Yifeng Xiong; Kun He
Targeted Attacks on Timeseries Forecasting. (99%)Yuvaraj Govindarajulu; Avinash Amballa; Pavan Kulkarni; Manojkumar Parmar
Adapting Step-size: A Unified Perspective to Analyze and Improve Gradient-based Methods for Adversarial Attacks. (98%)Wei Tao; Lei Bao; Long Sheng; Gaowei Wu; Qing Tao
PECAN: A Deterministic Certified Defense Against Backdoor Attacks. (97%)Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Vertex-based reachability analysis for verifying ReLU deep neural networks. (93%)João Zago; Eduardo Camponogara; Eric Antonelo
OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks. (92%)Xingwu Guo; Ziwei Zhou; Yueling Zhang; Guy Katz; Min Zhang
PCV: A Point Cloud-Based Network Verifier. (88%)Arup Kumar Sarker; Farzana Yasmin Ahmad; Matthew B. Dwyer
Robust Transformer with Locality Inductive Bias and Feature Normalization. (88%)Omid Nejati Manzari; Hossein Kashiani; Hojat Asgarian Dehkordi; Shahriar Baradaran Shokouhi
Analyzing Robustness of the Deep Reinforcement Learning Algorithm in Ramp Metering Applications Considering False Data Injection Attack and Defense. (87%)Diyi Liu; Lanmin Liu; Lee D Han
Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers. (80%)Sungmin Cha; Sungjun Cho; Dasol Hwang; Honglak Lee; Taesup Moon; Moontae Lee
Certified Invertibility in Neural Networks via Mixed-Integer Programming. (62%)Tianqi Cui; Thomas Bertalan; George J. Pappas; Manfred Morari; Ioannis G. Kevrekidis; Mahyar Fazlyab
2023-01-26
Attacking Important Pixels for Anchor-free Detectors. (99%)Yunxu Xie; Shu Hu; Xin Wang; Quanyu Liao; Bin Zhu; Xi Wu; Siwei Lyu
Certified Interpretability Robustness for Class Activation Mapping. (92%)Alex Gu; Tsui-Wei Weng; Pin-Yu Chen; Sijia Liu; Luca Daniel
Interaction-level Membership Inference Attack Against Federated Recommender Systems. (31%)Wei Yuan; Chaoqun Yang; Quoc Viet Hung Nguyen; Lizhen Cui; Tieke He; Hongzhi Yin
Minerva: A File-Based Ransomware Detector. (13%)Dorjan Hitaj; Giulio Pagnotta; Fabio De Gaspari; Lorenzo De Carli; Luigi V. Mancini
2023-01-25
RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks. (99%)Ayesha Siddique; Ripan Kumar Kundu; Gautam Raj Mode; Khaza Anuarul Hoque
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing. (98%)Jiali Wei; Ming Fan; Wenjing Jiao; Wuxia Jin; Ting Liu
A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection. (96%)Mohammad Azizmalayeri; Arman Zarei; Alireza Isavand; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
On the Adversarial Robustness of Camera-based 3D Object Detection. (81%)Shaoyuan Xie; Zichao Li; Zeyu Wang; Cihang Xie
A Study on FGSM Adversarial Training for Neural Retrieval. (75%)Simon Lupart; Stéphane Clinchant
Distilling Cognitive Backdoor Patterns within an Image. (2%)Hanxun Huang; Xingjun Ma; Sarah Erfani; James Bailey
Connecting metrics for shape-texture knowledge in computer vision. (1%)Tiago Oliveira; Tiago Marques; Arlindo L. Oliveira
2023-01-24
Blockchain-aided Secure Semantic Communication for AI-Generated Content in Metaverse. (13%)Yijing Lin; Hongyang Du; Dusit Niyato; Jiangtian Nie; Jiayi Zhang; Yanyu Cheng; Zhaohui Yang
Learning Effective Strategies for Moving Target Defense with Switching Costs. (1%)Vignesh Viswanathan; Megha Bose; Praveen Paruchuri
Data Augmentation Alone Can Improve Adversarial Training. (1%)Lin Li; Michael Spratling
2023-01-23
DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics. (99%)Onat Gungor; Tajana Rosing; Baris Aksanli
Practical Adversarial Attacks Against AI-Driven Power Allocation in a Distributed MIMO Network. (92%)Ömer Faruk Tuna; Fehmi Emre Kadan; Leyli Karaçay
BayBFed: Bayesian Backdoor Defense for Federated Learning. (78%)Kavita Kumari; Phillip Rieger; Hossein Fereidooni; Murtuza Jadliwala; Ahmad-Reza Sadeghi
Backdoor Attacks in Peer-to-Peer Federated Learning. (31%)Gokberk Yar; Cristina Nita-Rotaru; Alina Oprea
2023-01-22
Provable Unrestricted Adversarial Training without Compromise with Generalizability. (99%)Lilin Zhang; Ning Yang; Yanchao Sun; Philip S. Yu
ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning. (8%)Shangqing Liu; Bozhi Wu; Xiaofei Xie; Guozhu Meng; Yang Liu
2023-01-20
Limitations of Piecewise Linearity for Efficient Robustness Certification. (95%)Klas Leino
Towards Understanding How Self-training Tolerates Data Backdoor Poisoning. (16%)Soumyadeep Pal; Ren Wang; Yuguang Yao; Sijia Liu
Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness. (8%)Shuaichen Chang; Jun Wang; Mingwen Dong; Lin Pan; Henghui Zhu; Alexander Hanbo Li; Wuwei Lan; Sheng Zhang; Jiarong Jiang; Joseph Lilien; Steve Ash; William Yang Wang; Zhiguo Wang; Vittorio Castelli; Patrick Ng; Bing Xiang
Defending SDN against packet injection attacks using deep learning. (2%)Anh Tuan Phu; Bo Li; Faheem Ullah; Tanvir Ul Huque; Ranesh Naha; Ali Babar; Hung Nguyen
2023-01-19
On the Vulnerability of Backdoor Defenses for Federated Learning. (62%)Pei Fang; Jinghui Chen
On the Relationship Between Information-Theoretic Privacy Metrics And Probabilistic Information Privacy. (31%)Chong Xiao Wang; Wee Peng Tay
RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation. (16%)Utkarsh Nath; Yancheng Wang; Yingzhen Yang
Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study. (1%)Adiel Ashrov; Guy Katz
2023-01-17
Denoising Diffusion Probabilistic Models as a Defense against Adversarial Attacks. (98%)Lars Lien Ankile; Anna Midgley; Sebastian Weisshaar
Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness. (68%)Ezgi Korkmaz
Label Inference Attack against Split Learning under Regression Setting. (8%)Shangyu Xie; Xin Yang; Yuanshun Yao; Tianyi Liu; Taiqing Wang; Jiankai Sun
2023-01-16
$\beta$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search. (1%)Peng Ye; Tong He; Baopu Li; Tao Chen; Lei Bai; Wanli Ouyang
Modeling Uncertain Feature Representation for Domain Generalization. (1%)Xiaotong Li; Zixuan Hu; Jun Liu; Yixiao Ge; Yongxing Dai; Ling-Yu Duan
2023-01-15
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense. (4%)Siyuan Cheng; Guanhong Tao; Yingqi Liu; Shengwei An; Xiangzhe Xu; Shiwei Feng; Guangyu Shen; Kaiyuan Zhang; Qiuling Xu; Shiqing Ma; Xiangyu Zhang
2023-01-13
On the feasibility of attacking Thai LPR systems with adversarial examples. (99%)Chissanupong Jiamsuchon; Jakapan Suaboot; Norrathep Rattanavipanon
2023-01-12
Security-Aware Approximate Spiking Neural Networks. (87%)Syed Tihaam Ahmad; Ayesha Siddique; Khaza Anuarul Hoque
Jamming Attacks on Decentralized Federated Learning in General Multi-Hop Wireless Networks. (3%)Yi Shi; Yalin E. Sagduyu; Tugba Erpek
2023-01-11
Phase-shifted Adversarial Training. (82%)Yeachan Kim; Seongyeon Kim; Ihyeok Seo; Bonggun Shin
Universal Detection of Backdoor Attacks via Density-based Clustering and Centroids Analysis. (68%)Wei Guo; Benedetta Tondi; Mauro Barni
2023-01-10
On the Robustness of AlphaFold: A COVID-19 Case Study. (73%)Ismail Alkhouri; Sumit Jha; Andre Beckus; George Atia; Alvaro Velasquez; Rickard Ewetz; Arvind Ramanathan; Susmit Jha
CDA: Contrastive-adversarial Domain Adaptation. (38%)Nishant Yadav; Mahbubul Alam; Ahmed Farahat; Dipanjan Ghosh; Chetan Gupta; Auroop R. Ganguly
User-Centered Security in Natural Language Processing. (12%)Chris Emmery
Diffusion Models For Stronger Face Morphing Attacks. (3%)Zander Blasingame; Chen Liu
2023-01-09
Over-The-Air Adversarial Attacks on Deep Learning Wi-Fi Fingerprinting. (99%)Fei Xiao; Yong Huang; Yingying Zuo; Wei Kuang; Wei Wang
On the Susceptibility and Robustness of Time Series Models through Adversarial Attack and Defense. (98%)Asadullah Hill Galib; Bidhan Bashyal
Is Federated Learning a Practical PET Yet? (4%)Franziska Boenisch; Adam Dziedzic; Roei Schuster; Ali Shahin Shamsabadi; Ilia Shumailov; Nicolas Papernot
SoK: Hardware Defenses Against Speculative Execution Attacks. (1%)Guangyuan Hu; Zecheng He; Ruby Lee
2023-01-08
RobArch: Designing Robust Architectures against Adversarial Attacks. (76%)ShengYun Peng; Weilin Xu; Cory Cornelius; Kevin Li; Rahul Duggal; Duen Horng Chau; Jason Martin
MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope. (1%)Jingwei Zhang; Farzan Farnia
2023-01-07
REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. (99%)Wenjie Qu; Jinyuan Jia; Neil Zhenqiang Gong
Adversarial training with informed data selection. (99%)Marcele O. K. Mendonça; Javier Maroto; Pascal Frossard; Paulo S. R. Diniz
2023-01-06
Adversarial Attacks on Neural Models of Code via Code Difference Reduction. (99%)Zhao Tian; Junjie Chen; Zhi Jin
Stealthy Backdoor Attack for Code Models. (98%)Zhou Yang; Bowen Xu; Jie M. Zhang; Hong Jin Kang; Jieke Shi; Junda He; David Lo
2023-01-05
Silent Killer: Optimizing Backdoor Trigger Yields a Stealthy and Powerful Data Poisoning Attack. (98%)Tzvi Lederer; Gallil Maimon; Lior Rokach
gRoMA: a Tool for Measuring Deep Neural Networks Global Robustness. (96%)Natan Levy; Raz Yerushalmi; Guy Katz
Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks. (61%)Yan Scholten; Jan Schuchardt; Simon Geisler; Aleksandar Bojchevski; Stephan Günnemann
Can Large Language Models Change User Preference Adversarially? (1%)Varshini Subhash
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. (1%)Hojjat Aghakhani; Wei Dai; Andre Manoel; Xavier Fernandes; Anant Kharkar; Christopher Kruegel; Giovanni Vigna; David Evans; Ben Zorn; Robert Sim
2023-01-04
Availability Adversarial Attack and Countermeasures for Deep Learning-based Load Forecasting. (98%)Wangkun Xu; Fei Teng
Beckman Defense. (84%)A. V. Subramanyam
GUAP: Graph Universal Attack Through Adversarial Patching. (81%)Xiao Zang; Jie Chen; Bo Yuan
Enhancement attacks in biomedical machine learning. (1%)Matthew Rosenblatt; Javid Dadashkarimi; Dustin Scheinost
2023-01-03
Explainability and Robustness of Deep Visual Classification Models. (92%)Jindong Gu
Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition. (83%)Hasan Abed Al Kader Hammoud; Shuming Liu; Mohammed Alkhrashi; Fahad AlBalawi; Bernard Ghanem
Backdoor Attacks Against Dataset Distillation. (50%)Yugeng Liu; Zheng Li; Michael Backes; Yun Shen; Yang Zhang
Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector. (33%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam
2023-01-02
Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos. (92%)Wei Xingxing; Wang Songping; Yan Huanqian
2023-01-01
Generalizable Black-Box Adversarial Attack with Meta Learning. (99%)Fei Yin; Yong Zhang; Baoyuan Wu; Yan Feng; Jingyi Zhang; Yanbo Fan; Yujiu Yang
ExploreADV: Towards exploratory attack for Neural Networks. (99%)Tianzuo Luo; Yuyi Zhong; Siaucheng Khoo
Trojaning semi-supervised learning model via poisoning wild images on the web. (47%)Le Feng; Zhenxing Qian; Sheng Li; Xinpeng Zhang
2022-12-30
Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence. (99%)Han Fang; Jiyi Zhang; Yupeng Qiu; Ke Xu; Chengfang Fang; Ee-Chien Chang
Guidance Through Surrogate: Towards a Generic Diagnostic Attack. (99%)Muzammal Naseer; Salman Khan; Fatih Porikli; Fahad Shahbaz Khan
Defense Against Adversarial Attacks on Audio DeepFake Detection. (86%)Piotr Kawa; Marcin Plata; Piotr Syga
Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification. (82%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Gérôme Bovet; Gregorio Martínez Pérez
Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples. (22%)Jiaming Zhang; Xingjun Ma; Qi Yi; Jitao Sang; Yugang Jiang; Yaowei Wang; Changsheng Xu
Targeted k-node Collapse Problem: Towards Understanding the Robustness of Local k-core Structure. (1%)Yuqian Lv; Bo Zhou; Jinhuan Wang; Qi Xuan
2022-12-29
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. (68%)Giovanni Apruzzese; Hyrum S. Anderson; Savino Dambra; David Freeman; Fabio Pierazzi; Kevin A. Roundy
Detection of out-of-distribution samples using binary neuron activation patterns. (11%)Bartlomiej Olber; Krystian Radlak; Adam Popowicz; Michal Szczepankiewicz; Krystian Chachula
2022-12-28
Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions. (99%)Kamilya Smagulova; Mohammed E. Fouda; Ahmed Eltawil
Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks. (98%)Junlin Wu; Hussein Sibai; Yevgeniy Vorobeychik
Publishing Efficient On-device Models Increases Adversarial Vulnerability. (95%)Sanghyun Hong; Nicholas Carlini; Alexey Kurakin
Differentiable Search of Accurate and Robust Architectures. (92%)Yuwei Ou; Xiangning Xie; Shangce Gao; Yanan Sun; Kay Chen Tan; Jiancheng Lv
Robust Ranking Explanations. (76%)Chao Chen; Chenghua Guo; Guixiang Ma; Xi Zhang; Sihong Xie
Evaluating Generalizability of Deep Learning Models Using Indian-COVID-19 CT Dataset. (1%)Suba S; Nita Parekh; Ramesh Loganathan; Vikram Pudi; Chinnababu Sunkavalli
2022-12-27
EDoG: Adversarial Edge Detection For Graph Neural Networks. (98%)Xiaojun Xu; Yue Yu; Hanzhang Wang; Alok Lal; Carl A. Gunter; Bo Li
Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles. (86%)Hyung-Jin Yoon; Hamidreza Jafarnejadsani; Petros Voulgaris
Sparse Mixture Once-for-all Adversarial Training for Efficient In-Situ Trade-Off Between Accuracy and Robustness of DNNs. (62%)Souvik Kundu; Sairam Sundaresan; Sharath Nittur Sridhar; Shunlin Lu; Han Tang; Peter A. Beerel
XMAM:X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning. (56%)Jianyi Zhang; Fangjiao Zhang; Qichao Jin; Zhiqiang Wang; Xiaodong Lin; Xiali Hei
2022-12-25
Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks. (99%)Xingxing Wei; Ying Guo; Jie Yu; Bo Zhang
2022-12-24
Frequency Regularization for Improving Adversarial Robustness. (99%)Binxiao Huang; Chaofan Tao; Rui Lin; Ngai Wong
2022-12-23
Out-of-Distribution Detection with Reconstruction Error and Typicality-based Penalty. (61%)Genki Osada; Tsubasa Takahashi; Budrul Ahsan; Takashi Nishide
Towards Scalable Physically Consistent Neural Networks: an Application to Data-driven Multi-zone Thermal Building Models. (1%)Loris Di Natale; Bratislav Svetozarevic; Philipp Heer; Colin Neil Jones
2022-12-22
Adversarial Machine Learning and Defense Game for NextG Signal Classification with Deep Learning. (98%)Yalin E. Sagduyu
Aliasing is a Driver of Adversarial Attacks. (80%)Adrián Rodríguez-Muñoz; Antonio Torralba
GAN-based Domain Inference Attack. (2%)Yuechun Gu; Keke Chen
Hybrid Quantum-Classical Generative Adversarial Network for High Resolution Image Generation. (1%)Shu Lok Tsang; Maxwell T. West; Sarah M. Erfani; Muhammad Usman
2022-12-21
Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective. (80%)Shihua Huang; Zhichao Lu; Kalyanmoy Deb; Vishnu Naresh Boddeti
Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks. (67%)Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus; Aylin Yener
A Theoretical Study of The Effects of Adversarial Attacks on Sparse Regression. (13%)Deepak Maurya; Jean Honorio
2022-12-20
A Comprehensive Study and Comparison of the Robustness of 3D Object Detectors Against Adversarial Attacks. (98%)Yifan Zhang; Junhui Hou; Yixuan Yuan
Multi-head Uncertainty Inference for Adversarial Attack Detection. (98%)Yuqi Yang; Songyun Yang; Jiyang Xie; Zhongwei Si; Kai Guo; Ke Zhang; Kongming Liang
In and Out-of-Domain Text Adversarial Robustness via Label Smoothing. (98%)Yahan Yang; Soham Dan; Dan Roth; Insup Lee
Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks. (96%)Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus; Aylin Yener
Unleashing the Power of Visual Prompting At the Pixel Level. (92%)Junyang Wu; Xianhang Li; Chen Wei; Huiyu Wang; Alan Yuille; Yuyin Zhou; Cihang Xie
Learned Systems Security. (78%)Roei Schuster; Jin Peng Zhou; Paul Grubbs; Thorsten Eisenhofer; Nicolas Papernot
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks. (22%)Jimmy Z. Di; Jack Douglas; Jayadev Acharya; Gautam Kamath; Ayush Sekhari
ReCode: Robustness Evaluation of Code Generation Models. (10%)Shiqi Wang; Zheng Li; Haifeng Qian; Chenghao Yang; Zijian Wang; Mingyue Shang; Varun Kumar; Samson Tan; Baishakhi Ray; Parminder Bhatia; Ramesh Nallapati; Murali Krishna Ramanathan; Dan Roth; Bing Xiang
SoK: Analysis of Root Causes and Defense Strategies for Attacks on Microarchitectural Optimizations. (5%)Nadja Ramhöj Holtryd; Madhavan Manivannan; Per Stenström
Defending Against Poisoning Attacks in Open-Domain Question Answering. (2%)Orion Weller; Aleem Khan; Nathaniel Weir; Dawn Lawrie; Benjamin Van Durme
DISCO: Distilling Phrasal Counterfactuals with Large Language Models. (1%)Zeming Chen; Qiyue Gao; Kyle Richardson; Antoine Bosselut; Ashish Sabharwal
2022-12-19
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization. (99%)Bairu Hou; Jinghan Jia; Yihua Zhang; Guanhua Zhang; Yang Zhang; Sijia Liu; Shiyu Chang
Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation. (75%)Xinyu Pi; Bing Wang; Yan Gao; Jiaqi Guo; Zhoujun Li; Jian-Guang Lou
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends. (50%)Yonghao Xu; Tao Bai; Weikang Yu; Shizhen Chang; Peter M. Atkinson; Pedram Ghamisi
Task-Oriented Communications for NextG: End-to-End Deep Learning and AI Security Aspects. (26%)Yalin E. Sagduyu; Sennur Ulukus; Aylin Yener
Flareon: Stealthy any2any Backdoor Injection via Poisoned Augmentation. (2%)Tianrui Qin; Xianghuan He; Xitong Gao; Yiren Zhao; Kejiang Ye; Cheng-Zhong Xu
Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning. (1%)Yingchun Wang; Jingcai Guo; Song Guo; Weizhan Zhang; Jie Zhang
2022-12-18
Estimating the Adversarial Robustness of Attributions in Text with Transformers. (99%)Adam Ivankay; Mattia Rigotti; Ivan Girardi; Chiara Marchiori; Pascal Frossard
Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks. (99%)Anqi Zhao; Tong Chu; Yahao Liu; Wen Li; Jingjing Li; Lixin Duan
Discrete Point-wise Attack Is Not Enough: Generalized Manifold Adversarial Attack for Face Recognition. (99%)Qian Li; Yuxiao Hu; Ye Liu; Dongxiao Zhang; Xin Jin; Yuntian Chen
Fine-Tuning Is All You Need to Mitigate Backdoor Attacks. (4%)Zeyang Sha; Xinlei He; Pascal Berrang; Mathias Humbert; Yang Zhang
2022-12-17
Confidence-aware Training of Smoothed Classifiers for Certified Robustness. (86%)Jongheon Jeong; Seojin Kim; Jinwoo Shin
A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness. (2%)Tiantian Feng; Rajat Hebbar; Nicholas Mehlman; Xuan Shi; Aditya Kommineni; Shrikanth Narayanan
HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation. (1%)Hongyi Yuan; Zheng Yuan; Chuanqi Tan; Fei Huang; Songfang Huang
2022-12-16
Adversarial Example Defense via Perturbation Grading Strategy. (99%)Shaowei Zhu; Wanli Lyu; Bin Li; Zhaoxia Yin; Bin Luo
Biomedical image analysis competitions: The state of current participation practice. (4%)Matthias Eisenmann; Annika Reinke; Vivienn Weru; Minu Dietlinde Tizabi; Fabian Isensee; Tim J. Adler; Patrick Godau; Veronika Cheplygina; Michal Kozubek; Sharib Ali; Anubha Gupta; Jan Kybic; Alison Noble; Solórzano Carlos Ortiz de; Samiksha Pachade; Caroline Petitjean; Daniel Sage; Donglai Wei; Elizabeth Wilden; Deepak Alapatt; Vincent Andrearczyk; Ujjwal Baid; Spyridon Bakas; Niranjan Balu; Sophia Bano; Vivek Singh Bawa; Jorge Bernal; Sebastian Bodenstedt; Alessandro Casella; Jinwook Choi; Olivier Commowick; Marie Daum; Adrien Depeursinge; Reuben Dorent; Jan Egger; Hannah Eichhorn; Sandy Engelhardt; Melanie Ganz; Gabriel Girard; Lasse Hansen; Mattias Heinrich; Nicholas Heller; Alessa Hering; Arnaud Huaulmé; Hyunjeong Kim; Bennett Landman; Hongwei Bran Li; Jianning Li; Jun Ma; Anne Martel; Carlos Martín-Isla; Bjoern Menze; Chinedu Innocent Nwoye; Valentin Oreiller; Nicolas Padoy; Sarthak Pati; Kelly Payette; Carole Sudre; Wijnen Kimberlin van; Armine Vardazaryan; Tom Vercauteren; Martin Wagner; Chuanbo Wang; Moi Hoon Yap; Zeyun Yu; Chun Yuan; Maximilian Zenk; Aneeq Zia; David Zimmerer; Rina Bao; Chanyeol Choi; Andrew Cohen; Oleh Dzyubachyk; Adrian Galdran; Tianyuan Gan; Tianqi Guo; Pradyumna Gupta; Mahmood Haithami; Edward Ho; Ikbeom Jang; Zhili Li; Zhengbo Luo; Filip Lux; Sokratis Makrogiannis; Dominik Müller; Young-tack Oh; Subeen Pang; Constantin Pape; Gorkem Polat; Charlotte Rosalie Reed; Kanghyun Ryu; Tim Scherr; Vajira Thambawita; Haoyu Wang; Xinliang Wang; Kele Xu; Hung Yeh; Doyeob Yeo; Yixuan Yuan; Yan Zeng; Xin Zhao; Julian Abbing; Jannes Adam; Nagesh Adluru; Niklas Agethen; Salman Ahmed; Yasmina Al Khalil; Mireia Alenyà; Esa Alhoniemi; Chengyang An; Talha Anwar; Tewodros Weldebirhan Arega; Netanell Avisdris; Dogu Baran Aydogan; Yingbin Bai; Maria Baldeon Calisto; Berke Doga Basaran; Marcel Beetz; Cheng Bian; Hao Bian; Kevin Blansit; Louise Bloch; Robert Bohnsack; Sara Bosticardo; Jack Breen; Mikael Brudfors; Raphael Brüngel; Mariano Cabezas; Alberto Cacciola; Zhiwei Chen; Yucong Chen; Daniel Tianming Chen; Minjeong Cho; Min-Kook Choi; Chuantao Xie Chuantao Xie; Dana Cobzas; Julien Cohen-Adad; Jorge Corral Acero; Sujit Kumar Das; Oliveira Marcela de; Hanqiu Deng; Guiming Dong; Lars Doorenbos; Cory Efird; Di Fan; Mehdi Fatan Serj; Alexandre Fenneteau; Lucas Fidon; Patryk Filipiak; René Finzel; Nuno R. Freitas; Christoph M. Friedrich; Mitchell Fulton; Finn Gaida; Francesco Galati; Christoforos Galazis; Chang Hee Gan; Zheyao Gao; Shengbo Gao; Matej Gazda; Beerend Gerats; Neil Getty; Adam Gibicar; Ryan Gifford; Sajan Gohil; Maria Grammatikopoulou; Daniel Grzech; Orhun Güley; Timo Günnemann; Chunxu Guo; Sylvain Guy; Heonjin Ha; Luyi Han; Il Song Han; Ali Hatamizadeh; Tian He; Jimin Heo; Sebastian Hitziger; SeulGi Hong; SeungBum Hong; Rian Huang; Ziyan Huang; Markus Huellebrand; Stephan Huschauer; Mustaffa Hussain; Tomoo Inubushi; Ece Isik Polat; Mojtaba Jafaritadi; SeongHun Jeong; Bailiang Jian; Yuanhong Jiang; Zhifan Jiang; Yueming Jin; Smriti Joshi; Abdolrahim Kadkhodamohammadi; Reda Abdellah Kamraoui; Inha Kang; Junghwa Kang; Davood Karimi; April Khademi; Muhammad Irfan Khan; Suleiman A. 
Khan; Rishab Khantwal; Kwang-Ju Kim; Timothy Kline; Satoshi Kondo; Elina Kontio; Adrian Krenzer; Artem Kroviakov; Hugo Kuijf; Satyadwyoom Kumar; Rosa Francesco La; Abhi Lad; Doohee Lee; Minho Lee; Chiara Lena; Hao Li; Ling Li; Xingyu Li; Fuyuan Liao; KuanLun Liao; Arlindo Limede Oliveira; Chaonan Lin; Shan Lin; Akis Linardos; Marius George Linguraru; Han Liu; Tao Liu; Di Liu; Yanling Liu; João Lourenço-Silva; Jingpei Lu; Jiangshan Lu; Imanol Luengo; Christina B. Lund; Huan Minh Luu; Yi Lv; Yi Lv; Uzay Macar; Leon Maechler; Sina Mansour L.; Kenji Marshall; Moona Mazher; Richard McKinley; Alfonso Medela; Felix Meissen; Mingyuan Meng; Dylan Miller; Seyed Hossein Mirjahanmardi; Arnab Mishra; Samir Mitha; Hassan Mohy-ud-Din; Tony Chi Wing Mok; Gowtham Krishnan Murugesan; Enamundram Naga Karthik; Sahil Nalawade; Jakub Nalepa; Mohamed Naser; Ramin Nateghi; Hammad Naveed; Quang-Minh Nguyen; Cuong Nguyen Quoc; Brennan Nichyporuk; Bruno Oliveira; David Owen; Jimut Bahan Pal; Junwen Pan; Wentao Pan; Winnie Pang; Bogyu Park; Vivek Pawar; Kamlesh Pawar; Michael Peven; Lena Philipp; Tomasz Pieciak; Szymon Plotka; Marcel Plutat; Fattaneh Pourakpour; Domen Preložnik; Kumaradevan Punithakumar; Abdul Qayyum; Sandro Queirós; Arman Rahmim; Salar Razavi; Jintao Ren; Mina Rezaei; Jonathan Adam Rico; ZunHyan Rieu; Markus Rink; Johannes Roth; Yusely Ruiz-Gonzalez; Numan Saeed; Anindo Saha; Mostafa Salem; Ricardo Sanchez-Matilla; Kurt Schilling; Wei Shao; Zhiqiang Shen; Ruize Shi; Pengcheng Shi; Daniel Sobotka; Théodore Soulier; Bella Specktor Fadida; Danail Stoyanov; Timothy Sum Hon Mun; Xiaowu Sun; Rong Tao; Franz Thaler; Antoine Théberge; Felix Thielke; Helena Torres; Kareem A. Wahid; Jiacheng Wang; YiFei Wang; Wei Wang; Xiong Wang; Jianhui Wen; Ning Wen; Marek Wodzinski; Ye Wu; Fangfang Xia; Tianqi Xiang; Chen Xiaofei; Lizhan Xu; Tingting Xue; Yuxuan Yang; Lin Yang; Kai Yao; Huifeng Yao; Amirsaeed Yazdani; Michael Yip; Hwanseung Yoo; Fereshteh Yousefirizi; Shunkai Yu; Lei Yu; Jonathan Zamora; Ramy Ashraf Zeineldin; Dewen Zeng; Jianpeng Zhang; Bokai Zhang; Jiapeng Zhang; Fan Zhang; Huahong Zhang; Zhongchen Zhao; Zixuan Zhao; Jiachen Zhao; Can Zhao; Qingshuo Zheng; Yuheng Zhi; Ziqi Zhou; Baosheng Zou; Klaus Maier-Hein; Paul F. Jäger; Annette Kopp-Schneider; Lena Maier-Hein
Better May Not Be Fairer: Can Data Augmentation Mitigate Subgroup Degradation? (1%)Ming-Chang Chiu; Pin-Yu Chen; Xuezhe Ma
On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study. (1%)Ming-Chang Chiu; Yingfei Wang; Derrick Eui Gyu Kim; Pin-Yu Chen; Xuezhe Ma
2022-12-15
Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks. (98%)Nikolaos Antoniou; Efthymios Georgiou; Alexandros Potamianos
On Evaluating Adversarial Robustness of Chest X-ray Classification: Pitfalls and Best Practices. (84%)Salah Ghamizi; Maxime Cordy; Michail Papadakis; Yves Le Traon
Are Multimodal Models Robust to Image and Text Perturbations? (5%)Jielin Qiu; Yi Zhu; Xingjian Shi; Florian Wenzel; Zhiqiang Tang; Ding Zhao; Bo Li; Mu Li
Holistic risk assessment of inference attacks in machine learning. (4%)Yang Yang
Defending against cybersecurity threats to the payments and banking system. (2%)Williams Haruna; Toyin Ajiboro Aremu; Yetunde Ajao Modupe
White-box Inference Attacks against Centralized Machine Learning and Federated Learning. (1%)Jingyi Ge
2022-12-14
SAIF: Sparse Adversarial and Interpretable Attack Framework. (98%)Tooba Imtiaz; Morgan Kohler; Jared Miller; Zifeng Wang; Mario Sznaier; Octavia Camps; Jennifer Dy
Dissecting Distribution Inference. (88%)Anshuman Suri; Yifu Lu; Yanjin Chen; David Evans
Generative Robust Classification. (11%)Xuwang Yin
Synthesis of Adversarial DDOS Attacks Using Tabular Generative Adversarial Networks. (8%)Abdelmageed Ahmed Hassan; Mohamed Sayed Hussein; Ahmed Shehata AboMoustafa; Sarah Hossam Elmowafy
DOC-NAD: A Hybrid Deep One-class Classifier for Network Anomaly Detection. (1%)Mohanad Sarhan; Gayan Kulatilleke; Wai Weng Lo; Siamak Layeghy; Marius Portmann
2022-12-13
Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection. (99%)Peter Lorenz; Margret Keuper; Janis Keuper
Object-fabrication Targeted Attack for Object Detection. (99%)Xuchong Zhang; Changfeng Sun; Haoliang Han; Hang Wang; Hongbin Sun; Nanning Zheng
Adversarial Attacks and Defences for Skin Cancer Classification. (99%)Vinay Jogani; Joy Purohit; Ishaan Shivhare; Samina Attari; Shraddha Surtkar
Towards Efficient and Domain-Agnostic Evasion Attack with High-dimensional Categorical Inputs. (80%)Hongyan Bao; Yufei Han; Yujun Zhou; Xin Gao; Xiangliang Zhang
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. (73%)Chengzhi Mao; Scott Geng; Junfeng Yang; Xin Wang; Carl Vondrick
Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection. (56%)Zhenyu Wu; Lin Wang; Wei Wang; Qing Xia; Chenglizhao Chen; Aimin Hao; Shuo Li
AdvCat: Domain-Agnostic Robustness Assessment for Cybersecurity-Critical Applications with Categorical Inputs. (56%)Helene Orsini; Hongyan Bao; Yujun Zhou; Xiangrui Xu; Yufei Han; Longyang Yi; Wei Wang; Xin Gao; Xiangliang Zhang
Privacy-preserving Security Inference Towards Cloud-Edge Collaborative Using Differential Privacy. (1%)Yulong Wang; Xingshu Chen; Qixu Wang
Boosting Semi-Supervised Learning with Contrastive Complementary Labeling. (1%)Qinyi Deng; Yong Guo; Zhibang Yang; Haolin Pan; Jian Chen
2022-12-12
SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation. (98%)Wanqing Zhu; Jia-Li Yin; Bo-Hao Chen; Ximeng Liu
Adversarially Robust Video Perception by Seeing Motion. (98%)Lingyu Zhang; Chengzhi Mao; Junfeng Yang; Carl Vondrick
A Survey on Reinforcement Learning Security with Application to Autonomous Driving. (96%)Ambra Demontis; Maura Pintor; Luca Demetrio; Kathrin Grosse; Hsiao-Ying Lin; Chengfang Fang; Battista Biggio; Fabio Roli
HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design. (96%)Hui Wei; Zhixiang Wang; Xuemei Jia; Yinqiang Zheng; Hao Tang; Shin'ichi Satoh; Zheng Wang
Robust Perception through Equivariance. (96%)Chengzhi Mao; Lingyu Zhang; Abhishek Joshi; Junfeng Yang; Hao Wang; Carl Vondrick
Despite "super-human" performance, current LLMs are unsuited for decisions about ethics and safety. (75%)Joshua Albrecht; Ellie Kitanidis; Abraham J. Fetterman
AFLGuard: Byzantine-robust Asynchronous Federated Learning. (15%)Minghong Fang; Jia Liu; Neil Zhenqiang Gong; Elizabeth S. Bentley
Carpet-bombing patch: attacking a deep network without usual requirements. (2%)Pol Labarbarie; Adrien Chan-Hon-Tong; Stéphane Herbin; Milad Leyli-Abadi
2022-12-11
DISCO: Adversarial Defense with Local Implicit Functions. (99%)Chih-Hui Ho; Nuno Vasconcelos
REAP: A Large-Scale Realistic Adversarial Patch Benchmark. (98%)Nabeel Hingun; Chawin Sitawarin; Jerry Li; David Wagner
2022-12-10
General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments. (99%)Xiaogang Xu; Hengshuang Zhao; Philip Torr; Jiaya Jia
Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense. (93%)Yang Yu; Qi Liu; Likang Wu; Runlong Yu; Sanshi Lei Yu; Zaixi Zhang
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking. (93%)Dennis Gross; Thiago D. Simao; Nils Jansen; Guillermo A. Perez
Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors. (54%)Giovanni Apruzzese; V. S. Subrahmanian
How to Backdoor Diffusion Models? (10%)Sheng-Yen Chou; Pin-Yu Chen; Tsung-Yi Ho
Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification. (1%)Ruixuan Tang; Hanjie Chen; Yangfeng Ji
2022-12-09
Understanding and Combating Robust Overfitting via Input Loss Landscape Analysis and Regularization. (98%)Lin Li; Michael Spratling
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding. (2%)Minh-Long Luu; Zeyi Huang; Eric P. Xing; Yong Jae Lee; Haohan Wang
Robustness Implies Privacy in Statistical Estimation. (1%)Samuel B. Hopkins; Gautam Kamath; Mahbod Majid; Shyam Narayanan
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models. (1%)Rui Zhu; Di Tang; Siyuan Tang; XiaoFeng Wang; Haixu Tang
QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks. (1%)Yedi Zhang; Zhe Zhao; Fu Song; Min Zhang; Taolue Chen; Jun Sun
2022-12-08
Targeted Adversarial Attacks against Neural Network Trajectory Predictors. (99%)Kaiyuan Tan; Jun Wang; Yiannis Kantaros
XRand: Differentially Private Defense against Explanation-Guided Attacks. (68%)Truc Nguyen; Phung Lai; NhatHai Phan; My T. Thai
Robust Graph Representation Learning via Predictive Coding. (22%)Billy Byiringiro; Tommaso Salvatori; Thomas Lukasiewicz
2022-12-06
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. (96%)Hongbin Liu; Wenjie Qu; Jinyuan Jia; Neil Zhenqiang Gong
2022-12-05
Enhancing Quantum Adversarial Robustness by Randomized Encodings. (99%)Weiyuan Gong; Dong Yuan; Weikang Li; Dong-Ling Deng
Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance. (99%)Ngoc N. Tran; Anh Tuan Bui; Dinh Phung; Trung Le
FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration. (92%)Žiga Babnik; Peter Peer; Vitomir Štruc
Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning. (47%)Mingyuan Fan; Cen Chen; Chengyu Wang; Wenmeng Zhou; Jun Huang; Ximeng Liu; Wenzhong Guo
Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria. (8%)Tengyuan Liang
What is the Solution for State-Adversarial Multi-Agent Reinforcement Learning? (3%)Songyang Han; Sanbao Su; Sihong He; Shuo Han; Haizhao Yang; Fei Miao
Spuriosity Rankings: Sorting Data for Spurious Correlation Robustness. (1%)Mazda Moayeri; Wenxiao Wang; Sahil Singla; Soheil Feizi
Efficient Malware Analysis Using Metric Embeddings. (1%)Ethan M. Rudd; David Krisiloff; Scott Coull; Daniel Olszewski; Edward Raff; James Holt
2022-12-04
Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense. (98%)Bao Gia Doan; Ehsan Abbasnejad; Javen Qinfeng Shi; Damith C. Ranasinghe
Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks. (88%)Xiao Li; Ziqi Wang; Bo Zhang; Fuchun Sun; Xiaolin Hu
CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness. (82%)Huy Phan; Miao Yin; Yang Sui; Bo Yuan; Saman Zonouz
FedCC: Robust Federated Learning against Model Poisoning Attacks. (45%)Hyejun Jeong; Hamin Son; Seohu Lee; Jayun Hyun; Tai-Myoung Chung
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder. (8%)Qi Tian; Kun Kuang; Kelu Jiang; Furui Liu; Zhihua Wang; Fei Wu
2022-12-03
LDL: A Defense for Label-Based Membership Inference Attacks. (83%)Arezoo Rajabi; Dinuka Sahabandu; Luyao Niu; Bhaskar Ramasubramanian; Radha Poovendran
Security Analysis of SplitFed Learning. (8%)Momin Ahmad Khan; Virat Shejwalkar; Amir Houmansadr; Fatima Muhammad Anwar
2022-12-02
Membership Inference Attacks Against Semantic Segmentation Models. (45%)Tomas Chobola; Dmitrii Usynin; Georgios Kaissis
Guaranteed Conformance of Neurosymbolic Models to Natural Constraints. (1%)Kaustubh Sridhar; Souradeep Dutta; James Weimer; Insup Lee
2022-12-01
Purifier: Defending Data Inference Attacks via Transforming Confidence Scores. (89%)Ziqi Yang; Lijin Wang; Da Yang; Jie Wan; Ziming Zhao; Ee-Chien Chang; Fan Zhang; Kui Ren
Pareto Regret Analyses in Multi-objective Multi-armed Bandit. (31%)Mengfan Xu; Diego Klabjan
All You Need Is Hashing: Defending Against Data Reconstruction Attack in Vertical Federated Learning. (2%)Pengyu Qiu; Xuhong Zhang; Shouling Ji; Yuwen Pu; Ting Wang
Generalizing and Improving Jacobian and Hessian Regularization. (1%)Chenwei Cui; Zehao Yan; Guangshen Liu; Liangfu Lu
On the Limit of Explaining Black-box Temporal Graph Neural Networks. (1%)Minh N. Vu; My T. Thai
SimpleMind adds thinking to deep neural networks. (1%)Youngwon Choi; M. Wasil Wahi-Anwar; Matthew S. Brown
2022-11-30
Towards Interpreting Vulnerability of Multi-Instance Learning via Customized and Universal Adversarial Perturbations. (97%)Yu-Xuan Zhang; Hua Meng; Xue-Mei Cao; Zhengchun Zhou; Mei Yang; Avik Ranjan Adhikary
Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations. (84%)Haniyeh Ehsani Oskouie; Farzan Farnia
Efficient Adversarial Input Generation via Neural Net Patching. (73%)Tooba Khan; Kumar Madhukar; Subodh Vishnu Sharma
Toward Robust Diagnosis: A Contour Attention Preserving Adversarial Defense for COVID-19 Detection. (69%)Kun Xiang; Xing Zhang; Jinwen She; Jinpeng Liu; Haohan Wang; Shiqi Deng; Shancheng Jiang
Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations. (38%)Hong-Ming Chiu; Richard Y. Zhang
Improved Smoothed Analysis of 2-Opt for the Euclidean TSP. (8%)Bodo Manthey; Jesse van Rhijn
2022-11-29
Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion. (99%)Kui Zhang; Hang Zhou; Jie Zhang; Qidong Huang; Weiming Zhang; Nenghai Yu
Understanding and Enhancing Robustness of Concept-based Models. (99%)Sanchit Sinha; Mengdi Huai; Jianhui Sun; Aidong Zhang
Advancing Deep Metric Learning Through Multiple Batch Norms And Multi-Targeted Adversarial Examples. (88%)Inderjeet Singh; Kazuya Kakizaki; Toshinori Araki
Penalizing Confident Predictions on Largely Perturbed Inputs Does Not Improve Out-of-Distribution Generalization in Question Answering. (83%)Kazutoshi Shinoda; Saku Sugawara; Akiko Aizawa
Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks. (73%)Mathias Lechner; Đorđe Žikelić; Krishnendu Chatterjee; Thomas A. Henzinger; Daniela Rus
AdvMask: A Sparse Adversarial Attack Based Data Augmentation Method for Image Classification. (54%)Suorong Yang; Jinqiao Li; Jian Zhao; Furao Shen
A3T: Accuracy Aware Adversarial Training. (10%)Enes Altinisik; Safa Messaoud; Husrev Taha Sencar; Sanjay Chawla
Building Resilience to Out-of-Distribution Visual Data via Input Optimization and Model Finetuning. (1%)Christopher J. Holder; Majid Khonji; Jorge Dias; Muhammad Shafique
2022-11-28
Adversarial Artifact Detection in EEG-Based Brain-Computer Interfaces. (99%)Xiaoqing Chen; Dongrui Wu
Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning. (95%)Eldor Abdukhamidov; Mohammed Abuhamad; Simon S. Woo; Eric Chan-Tin; Tamer Abuhmed
Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning. (83%)Gyojin Han; Jaehyun Choi; Hyeong Gwon Hong; Junmo Kim
Towards More Robust Interpretation via Local Gradient Alignment. (76%)Sunghwan Joo; Seokhyeon Jeong; Juyeon Heo; Adrian Weller; Taesup Moon
Understanding the Impact of Adversarial Robustness on Accuracy Disparity. (15%)Yuzheng Hu; Fan Wu; Hongyang Zhang; Han Zhao
How Important are Good Method Names in Neural Code Generation? A Model Robustness Perspective. (13%)Guang Yang; Yu Zhou; Wenhua Yang; Tao Yue; Xiang Chen; Taolue Chen
Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning. (8%)Xiaoyue Duan; Guoliang Kang; Runqi Wang; Shumin Han; Song Xue; Tian Wang; Baochang Zhang
Attack on Unfair ToS Clause Detection: A Case Study using Universal Adversarial Triggers. (8%)Shanshan Xu; Irina Broda; Rashid Haddad; Marco Negrini; Matthias Grabmair
Gamma-convergence of a nonlocal perimeter arising in adversarial machine learning. (3%)Leon Bungert; Kerrek Stinson
CoNAL: Anticipating Outliers with Large Language Models. (1%)Albert Xu; Xiang Ren; Robin Jia
Learning Antidote Data to Individual Unfairness. (1%)Peizhao Li; Ethan Xia; Hongfu Liu
2022-11-27
Imperceptible Adversarial Attack via Invertible Neural Networks. (99%)Zihan Chen; Ziyue Wang; Junjie Huang; Wentao Zhao; Xiao Liu; Dejian Guan
Foiling Explanations in Deep Neural Networks. (98%)Snir Vitrack Tamam; Raz Lapid; Moshe Sipper
Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning. (84%)Yunchao Zhang; Zonglin Di; Kaiwen Zhou; Cihang Xie; Xin Wang
Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs. (50%)Guangrun Wang; Philip H. S. Torr
Federated Learning Attacks and Defenses: A Survey. (47%)Yao Chen; Yijie Gui; Hong Lin; Wensheng Gan; Yongdong Wu
Adversarial Rademacher Complexity of Deep Neural Networks. (47%)Jiancong Xiao; Yanbo Fan; Ruoyu Sun; Zhi-Quan Luo
2022-11-26
Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning. (99%)Ethan Rathbun; Kaleel Mahmood; Sohaib Ahmad; Caiwen Ding; Marten van Dijk
2022-11-25
Boundary Adversarial Examples Against Adversarial Overfitting. (99%)Muhammad Zaid Hameed; Beat Buesser
Supervised Contrastive Prototype Learning: Augmentation Free Robust Neural Network. (98%)Iordanis Fostiropoulos; Laurent Itti
Beyond Smoothing: Unsupervised Graph Representation Learning with Edge Heterophily Discriminating. (3%)Yixin Liu; Yizhen Zheng; Daokun Zhang; Vincent CS Lee; Shirui Pan
TrustGAN: Training safe and trustworthy deep learning models through generative adversarial networks. (1%)Hélion du Mas des Bourboux
2022-11-24
SAGA: Spectral Adversarial Geometric Attack on 3D Meshes. (98%)Tomer Stolik; Itai Lang; Shai Avidan
Explainable and Safe Reinforcement Learning for Autonomous Air Mobility. (92%)Lei Wang; Hongyu Yang; Yi Lin; Suwan Yin; Yuankai Wu
Tracking Dataset IP Use in Deep Neural Networks. (76%)Seonhye Park; Alsharif Abuadbba; Shuo Wang; Kristen Moore; Yansong Gao; Hyoungshick Kim; Surya Nepal
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models. (8%)Jacob Shams; Ben Nassi; Ikuya Morikawa; Toshiya Shimizu; Asaf Shabtai; Yuval Elovici
Generative Joint Source-Channel Coding for Semantic Image Transmission. (1%)Ecenaz Erdemir; Tze-Yang Tung; Pier Luigi Dragotti; Deniz Gunduz
CycleGANWM: A CycleGAN watermarking method for ownership verification. (1%)Dongdong Lin; Benedetta Tondi; Bin Li; Mauro Barni
2022-11-23
Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition. (99%)Rohit Gupta; Naveed Akhtar; Gaurav Kumar Nayak; Ajmal Mian; Mubarak Shah
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners. (99%)Elre T. Oldewage; John Bronskill; Richard E. Turner
Reliable Robustness Evaluation via Automatically Constructed Attack Ensembles. (76%)Shengcai Liu; Fu Peng; Ke Tang
Dual Graphs of Polyhedral Decompositions for the Detection of Adversarial Attacks. (62%)Huma Jamil; Yajing Liu; Christina Cole; Nathaniel Blanchard; Emily J. King; Michael Kirby; Christopher Peterson
Privacy-Enhancing Optical Embeddings for Lensless Classification. (11%)Eric Bezzam; Martin Vetterli; Matthieu Simeoni
Principled Data-Driven Decision Support for Cyber-Forensic Investigations. (1%)Soodeh Atefi; Sakshyam Panda; Manos Panaousis; Aron Laszka
Data Provenance Inference in Machine Learning. (1%)Mingxue Xu; Xiang-Yang Li
2022-11-22
Benchmarking Adversarially Robust Quantum Machine Learning at Scale. (99%)Maxwell T. West; Sarah M. Erfani; Christopher Leckie; Martin Sevior; Lloyd C. L. Hollenberg; Muhammad Usman
PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples. (99%)Shengshan Hu; Junwei Zhang; Wei Liu; Junhui Hou; Minghui Li; Leo Yu Zhang; Hai Jin; Lichao Sun
Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces. (98%)Shengbang Fang; Matthew C Stamm
Improving Robust Generalization by Direct PAC-Bayesian Bound Minimization. (70%)Zifan Wang; Nan Ding; Tomer Levinboim; Xi Chen; Radu Soricut
Backdoor Cleansing with Unlabeled Data. (70%)Lu Pang; Tao Sun; Haibin Ling; Chao Chen
SoK: Inference Attacks and Defenses in Human-Centered Wireless Sensing. (69%)Wei Sun; Tingjun Chen; Neil Gong
2022-11-21
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors. (99%)Sizhe Chen; Geng Yuan; Xinwen Cheng; Yifan Gong; Minghai Qin; Yanzhi Wang; Xiaolin Huang
Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization. (99%)Jiafeng Wang; Zhaoyu Chen; Kaixun Jiang; Dingkang Yang; Lingyi Hong; Yan Wang; Wenqiang Zhang
Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack. (99%)Yunfeng Diao; He Wang; Tianjia Shao; Yong-Liang Yang; Kun Zhou; David Hogg
Addressing Mistake Severity in Neural Networks with Semantic Knowledge. (92%)Natalie Abreu; Nathan Vaska; Victoria Helus
Efficient Generalization Improvement Guided by Random Weight Perturbation. (68%)Tao Li; Weihao Yan; Zehao Lei; Yingwen Wu; Kun Fang; Ming Yang; Xiaolin Huang
CLAWSAT: Towards Both Robust and Accurate Code Models. (56%)Jinghan Jia; Shashank Srikant; Tamara Mitrovska; Chuang Gan; Shiyu Chang; Sijia Liu; Una-May O'Reilly
Fairness Increases Adversarial Vulnerability. (54%)Cuong Tran; Keyu Zhu; Ferdinando Fioretto; Pascal Van Hentenryck
Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving. (10%)Yinbo Yu; Jiajia Liu
SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks. (8%)Sunder Ali Khowaja; Parus Khuwaja; Kapal Dev; Angelos Antonopoulos
A Survey on Backdoor Attack and Defense in Natural Language Processing. (2%)Xuan Sheng; Zhaoyang Han; Piji Li; Xiangmao Chang
Understanding and Improving Visual Prompting: A Label-Mapping Perspective. (2%)Aochuan Chen; Yuguang Yao; Pin-Yu Chen; Yihua Zhang; Sijia Liu
Privacy in Practice: Private COVID-19 Detection in X-Ray Images. (1%)Lucas Lange; Maja Schneider; Erhard Rahm
A Tale of Frozen Clouds: Quantifying the Impact of Algorithmic Complexity Vulnerabilities in Popular Web Servers. (1%)Masudul Hasan Masud Bhuiyan; Cristian-Alexandru Staicu
2022-11-20
Spectral Adversarial Training for Robust Graph Neural Network. (99%)Jintang Li; Jiaying Peng; Liang Chen; Zibin Zheng; Tingting Liang; Qing Ling
Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification. (81%)Wenli Sun; Xinyang Jiang; Shuguang Dou; Dongsheng Li; Duoqian Miao; Cheng Deng; Cairong Zhao
Adversarial Cheap Talk. (4%)Chris Lu; Timon Willi; Alistair Letcher; Jakob Foerster
Deep Composite Face Image Attacks: Generation, Vulnerability and Detection. (2%)Jag Mohan Singh; Raghavendra Ramachandra
AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation. (2%)Hyungmin Kim; Sungho Suh; Sunghyun Baek; Daehwan Kim; Daun Jeong; Hansang Cho; Junmo Kim
2022-11-19
Towards Adversarial Robustness of Deep Vision Algorithms. (92%)Hanshu Yan
Phonemic Adversarial Attack against Audio Recognition in Real World. (87%)Jiakai Wang; Zhendong Chen; Zixin Yin; Qinghong Yang; Xianglong Liu
Towards Robust Dataset Learning. (82%)Yihan Wu; Xinda Li; Florian Kerschbaum; Heng Huang; Hongyang Zhang
Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning. (80%)Mingxuan Ju; Yujie Fan; Chuxu Zhang; Yanfang Ye
Mask Off: Analytic-based Malware Detection By Transfer Learning and Model Personalization. (9%)Amirmohammad Pasdar; Young Choon Lee; Seok-Hee Hong
Investigating the Security of EV Charging Mobile Applications As an Attack Surface. (1%)K. Sarieddine; M. A. Sayed; S. Torabi; R. Atallah; C. Assi
2022-11-18
Adversarial Stimuli: Attacking Brain-Computer Interfaces via Perturbed Sensory Events. (98%)Bibek Upadhayay; Vahid Behzadan
Adversarial Detection by Approximation of Ensemble Boundary. (75%)T. Windeatt
Leveraging Algorithmic Fairness to Mitigate Blackbox Attribute Inference Attacks. (68%)Jan Aalmoes; Vasisht Duddu; Antoine Boutet
Invariant Learning via Diffusion Dreamed Distribution Shifts. (10%)Priyatham Kattakinda; Alexander Levine; Soheil Feizi
Improving Robustness of TCM-based Robust Steganography with Variable Robustness. (1%)Jimin Zhang; Xianfeng Zhao; Xiaolei He
Intrusion Detection in Internet of Things using Convolutional Neural Networks. (1%)Martin Kodys; Zhi Lu; Kar Wai Fok; Vrizlynn L. L. Thing
Provable Defense against Backdoor Policies in Reinforcement Learning. (1%)Shubham Kumar Bharti; Xuezhou Zhang; Adish Singla; Xiaojin Zhu
2022-11-17
Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks. (99%)Stephen Casper; Kaivalya Hariharan; Dylan Hadfield-Menell
Towards Good Practices in Evaluating Transfer Adversarial Attacks. (93%)Zhengyu Zhao; Hanwei Zhang; Renjue Li; Ronan Sicre; Laurent Amsaleg; Michael Backes
Assessing Neural Network Robustness via Adversarial Pivotal Tuning. (92%)Peter Ebert Christensen; Vésteinn Snæbjarnarson; Andrea Dittadi; Serge Belongie; Sagie Benaim
UPTON: Unattributable Authorship Text via Data Poisoning. (86%)Ziyao Wang; Thai Le; Dongwon Lee
Generalizable Deepfake Detection with Phase-Based Motion Analysis. (50%)Ekta Prashnani; Michael Goebel; B. S. Manjunath
More Effective Centrality-Based Attacks on Weighted Networks. (15%)Balume Mburano; Weisheng Si; Qing Cao; Wei Xing Zheng
Potential Auto-driving Threat: Universal Rain-removal Attack. (2%)Jincheng Hu; Jihao Li; Zhuoran Hou; Jingjing Jiang; Cunjia Liu; Yuanjian Zhang
Data-Centric Debugging: mitigating model failures via targeted data collection. (1%)Sahil Singla; Atoosa Malemir Chegini; Mazda Moayeri; Soheil Feizi
A Tale of Two Cities: Data and Configuration Variances in Robust Deep Learning. (1%)Guanqin Zhang; Jiankun Sun; Feng Xu; H. M. N. Dilum Bandara; Shiping Chen; Yulei Sui; Tim Menzies
VeriSparse: Training Verified Locally Robust Sparse Neural Networks from Scratch. (1%)Sawinder Kaur; Yi Xiao; Asif Salekin
2022-11-16
T-SEA: Transfer-based Self-Ensemble Attack on Object Detection. (99%)Hao Huang; Ziyan Chen; Huanran Chen; Yongtao Wang; Kevin Zhang
Efficiently Finding Adversarial Examples with DNN Preprocessing. (99%)Avriti Chauhan; Mohammad Afzal; Hrishikesh Karmarkar; Yizhak Elboher; Kumar Madhukar; Guy Katz
Improving Interpretability via Regularization of Neural Activation Sensitivity. (92%)Ofir Moshe; Gil Fidel; Ron Bitton; Asaf Shabtai
Attacking Object Detector Using A Universal Targeted Label-Switch Patch. (86%)Avishag Shapira; Ron Bitton; Dan Avraham; Alon Zolfi; Yuval Elovici; Asaf Shabtai
Differentially Private Optimizers Can Learn Adversarially Robust Models. (83%)Yuan Zhang; Zhiqi Bu
Interpretable Dimensionality Reduction by Feature Preserving Manifold Approximation and Projection. (56%)Yang Yang; Hongjian Sun; Jialei Gong; Yali Du; Di Yu
Privacy against Real-Time Speech Emotion Detection via Acoustic Adversarial Evasion of Machine Learning. (38%)Brian Testa; Yi Xiao; Avery Gump; Asif Salekin
Holistic Evaluation of Language Models. (2%)Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar; Benjamin Newman; Binhang Yuan; Bobby Yan; Ce Zhang; Christian Cosgrove; Christopher D. Manning; Christopher Ré; Diana Acosta-Navas; Drew A. Hudson; Eric Zelikman; Esin Durmus; Faisal Ladhak; Frieda Rong; Hongyu Ren; Huaxiu Yao; Jue Wang; Keshav Santhanam; Laurel Orr; Lucia Zheng; Mert Yuksekgonul; Mirac Suzgun; Nathan Kim; Neel Guha; Niladri Chatterji; Omar Khattab; Peter Henderson; Qian Huang; Ryan Chi; Sang Michael Xie; Shibani Santurkar; Surya Ganguli; Tatsunori Hashimoto; Thomas Icard; Tianyi Zhang; Vishrav Chaudhary; William Wang; Xuechen Li; Yifan Mai; Yuhui Zhang; Yuta Koreeda
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems. (1%)Alessio Russo; Alexandre Proutiere
2022-11-15
Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation. (99%)Zhihao Zhu; Chenwang Wu; Min Zhou; Hao Liao; Defu Lian; Enhong Chen
Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning. (99%)Yiran Huang; Yexu Zhou; Michael Hefenbrock; Till Riedel; Likun Fang; Michael Beigl
MORA: Improving Ensemble Robustness Evaluation with Model-Reweighing Attack. (99%)Yunrui Yu; Xitong Gao; Cheng-Zhong Xu
Person Text-Image Matching via Text-Feature Interpretability Embedding and External Attack Node Implantation. (92%)Fan Li; Hang Zhou; Huafeng Li; Yafei Zhang; Zhengtao Yu
Backdoor Attacks on Time Series: A Generative Approach. (70%)Yujing Jiang; Xingjun Ma; Sarah Monazam Erfani; James Bailey
Improved techniques for deterministic l2 robustness. (22%)Sahil Singla; Soheil Feizi
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning. (22%)Jinghuai Zhang; Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
Backdoor Attacks for Remote Sensing Data with Wavelet Transform. (12%)Nikolaus Dräger; Yonghao Xu; Pedram Ghamisi
2022-11-14
Efficient Adversarial Training with Robust Early-Bird Tickets. (92%)Zhiheng Xi; Rui Zheng; Tao Gui; Qi Zhang; Xuanjing Huang
Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection. (13%)M. Ibsen; C. Rathgeb; F. Brechtel; R. Klepp; K. Pöppelmann; A. George; S. Marcel; C. Busch
Towards Robust Numerical Question Answering: Diagnosing Numerical Capabilities of NLP Systems. (5%)Jialiang Xu; Mengyu Zhou; Xinyi He; Shi Han; Dongmei Zhang
Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations. (5%)Shea Cardozo; Gabriel Islas Montero; Dmitry Kazhdan; Botty Dimanov; Maleakhi Wijaya; Mateja Jamnik; Pietro Lio
Robustifying Deep Vision Models Through Shape Sensitization. (2%)Aditay Tripathi; Rishubh Singh; Anirban Chakraborty; Pradeep Shenoy
2022-11-13
Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation. (26%)Yuan Xiao; Tongtong Bai; Mingzheng Gu; Chunrong Fang; Zhenyu Chen
2022-11-12
Adversarial and Random Transformations for Robust Domain Adaptation and Generalization. (75%)Liang Xiao; Jiaolong Xu; Dawei Zhao; Erke Shang; Qi Zhu; Bin Dai
2022-11-11
Generating Textual Adversaries with Minimal Perturbation. (98%)Xingyi Zhao; Lu Zhang; Depeng Xu; Shuhan Yuan
On the robustness of non-intrusive speech quality model by adversarial examples. (98%)Hsin-Yi Lin; Huan-Hsin Tseng; Yu Tsao
An investigation of security controls and MITRE ATT&CK techniques. (47%)Md Rayhanur Rahman; Laurie Williams
Investigating co-occurrences of MITRE ATT&CK Techniques. (12%)Md Rayhanur Rahman; Laurie Williams
Remapped Cache Layout: Thwarting Cache-Based Side-Channel Attacks with a Hardware Defense. (9%)Wei Song; Rui Hou; Peng Liu; Xiaoxin Li; Peinan Li; Lutan Zhao; Xiaofei Fu; Yifei Sun; Dan Meng
2022-11-10
Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses. (99%)Abhiram Kolli; Muhammad Jehanzeb Mirza; Horst Possegger; Horst Bischof
Impact of Adversarial Training on Robustness and Generalizability of Language Models. (99%)Enes Altinisik; Hassan Sajjad; Husrev Taha Sencar; Safa Messaoud; Sanjay Chawla
Privacy-Utility Balanced Voice De-Identification Using Adversarial Examples. (98%)Meng Chen; Li Lu; Jiadi Yu; Yingying Chen; Zhongjie Ba; Feng Lin; Kui Ren
Stay Home Safe with Starving Federated Data. (80%)Jaechul Roh; Yajun Fang
MSDT: Masked Language Model Scoring Defense in Text Domain. (38%)Jaechul Roh; Minhao Cheng; Yajun Fang
Robust DNN Surrogate Models with Uncertainty Quantification via Adversarial Training. (3%)Lixiang Zhang; Jia Li
Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations. (1%)Sheng-Feng Yu; Wei-Chen Chiu
2022-11-09
On the Robustness of Explanations of Deep Neural Network Models: A Survey. (50%)Amlan Jyoti; Karthik Balaji Ganesh; Manoj Gayala; Nandita Lakshmi Tunuguntla; Sandesh Kamath; Vineeth N Balasubramanian
Are All Edges Necessary? A Unified Framework for Graph Purification. (5%)Zishan Gu; Jintang Li; Liang Chen
QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems. (3%)Ana-Maria Cretu; Florimond Houssiau; Antoine Cully; Yves-Alexandre de Montjoye
Accountable and Explainable Methods for Complex Reasoning over Text. (2%)Pepa Atanasova
2022-11-08
Preserving Semantics in Textual Adversarial Attacks. (99%)David Herel; Hugo Cisneros; Tomas Mikolov
NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries? (98%)Saadia Gabriel; Hamid Palangi; Yejin Choi
How Fraudster Detection Contributes to Robust Recommendation. (54%)Yuni Lai; Kai Zhou
Lipschitz Continuous Algorithms for Graph Problems. (16%)Soh Kumabe; Yuichi Yoshida
Learning advisor networks for noisy image classification. (1%)Simone Ricci; Tiberio Uricchio; Bimbo Alberto Del
2022-11-07
Are AlphaZero-like Agents Robust to Adversarial Perturbations? (99%)Li-Cheng Lan; Huan Zhang; Ti-Rong Wu; Meng-Yu Tsai; I-Chen Wu; Cho-Jui Hsieh
Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation. (82%)Zijie Lou; Gang Cao; Man Lin
Deviations in Representations Induced by Adversarial Attacks. (70%)Daniel Steinberg; Paul Munro
Interpreting deep learning output for out-of-distribution detection. (1%)Damian Matuszewski; Ida-Maria Sintorn
Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks. (1%)Naoya Tezuka; Hideya Ochiai; Yuwei Sun; Hiroshi Esaki
2022-11-06
Contrastive Weighted Learning for Near-Infrared Gaze Estimation. (31%)Adam Lee
2022-11-05
Textual Manifold-based Defense Against Natural Language Adversarial Examples. (99%)Dang Minh Nguyen; Luu Anh Tuan
Stateful Detection of Adversarial Reprogramming. (96%)Yang Zheng; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Maura Pintor; Ambra Demontis; Battista Biggio; Fabio Roli
Robust Lottery Tickets for Pre-trained Language Models. (83%)Rui Zheng; Rong Bao; Yuhao Zhou; Di Liang; Sirui Wang; Wei Wu; Tao Gui; Qi Zhang; Xuanjing Huang
2022-11-04
Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning. (99%)Anaelia Ovalle; Evan Czyzycki; Cho-Jui Hsieh
Logits are predictive of network type. (68%)Ali Borji
An Adversarial Robustness Perspective on the Topology of Neural Networks. (64%)Morgane Goibert; Thomas Ricatte; Elvis Dohmatob
Fairness-aware Regression Robust to Adversarial Attacks. (38%)Yulu Jin; Lifeng Lai
Extension of Simple Algorithms to the Matroid Secretary Problem. (9%)Simon Park
Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content Dilutions. (3%)Gaurav Verma; Vishwa Vinay; Ryan A. Rossi; Srijan Kumar
Data Models for Dataset Drift Controls in Machine Learning With Images. (1%)Luis Oala; Marco Aversa; Gabriel Nobis; Kurt Willis; Yoan Neuenschwander; Michèle Buck; Christian Matek; Jerome Extermann; Enrico Pomarico; Wojciech Samek; Roderick Murray-Smith; Christoph Clausen; Bruno Sanguinetti
2022-11-03
Physically Adversarial Attacks and Defenses in Computer Vision: A Survey. (99%)Xingxing Wei; Bangzheng Pu; Jiefan Lu; Baoyuan Wu
Adversarial Defense via Neural Oscillation inspired Gradient Masking. (98%)Chunming Jiang; Yilei Zhang
M-to-N Backdoor Paradigm: A Stealthy and Fuzzy Attack to Deep Learning Models. (98%)Linshan Hou; Zhongyun Hua; Yuhong Li; Leo Yu Zhang
Robust Few-shot Learning Without Using any Adversarial Samples. (89%)Gaurav Kumar Nayak; Ruchit Rawal; Inder Khatri; Anirban Chakraborty
Data-free Defense of Black Box Models Against Adversarial Attacks. (84%)Gaurav Kumar Nayak; Inder Khatri; Shubham Randive; Ruchit Rawal; Anirban Chakraborty
Leveraging Domain Features for Detecting Adversarial Attacks Against Deep Speech Recognition in Noise. (38%)Christian Heider Nielsen; Zheng-Hua Tan
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems. (33%)Chong Chen; Ying Gao; Leyu Shi; Siquan Huang
Unintended Memorization and Timing Attacks in Named Entity Recognition Models. (12%)Rana Salal Ali; Benjamin Zi Hao Zhao; Hassan Jameel Asghar; Tham Nguyen; Ian David Wood; Dali Kaafar
2022-11-02
Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise. (99%)Jhih-Cing Huang; Yu-Lin Tsai; Chao-Han Huck Yang; Cheng-Fang Su; Chia-Mu Yu; Pin-Yu Chen; Sy-Yen Kuo
Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks. (99%)Amira Guesmi; Ihsen Alouani; Khaled N. Khasawneh; Mouna Baklouti; Tarek Frikha; Mohamed Abid; Nael Abu-Ghazaleh
Improving transferability of 3D adversarial attacks with scale and shear transformations. (99%)Jinali Zhang; Yinpeng Dong; Jun Zhu; Jihong Zhu; Minchi Kuang; Xiaming Yuan
Adversarial Attack on Radar-based Environment Perception Systems. (99%)Amira Guesmi; Ihsen Alouani
Isometric Representations in Neural Networks Improve Robustness. (62%)Kosio Beshkov; Jonas Verhellen; Mikkel Elle Lepperød
BATT: Backdoor Attack with Transformation-based Triggers. (56%)Tong Xu; Yiming Li; Yong Jiang; Shu-Tao Xia
Untargeted Backdoor Attack against Object Detection. (50%)Chengxiao Luo; Yiming Li; Yong Jiang; Shu-Tao Xia
Generative Adversarial Training Can Improve Neural Language Models. (33%)Sajad Movahedi; Azadeh Shakery
Backdoor Defense via Suppressing Model Shortcuts. (3%)Sheng Yang; Yiming Li; Yong Jiang; Shu-Tao Xia
Human-in-the-Loop Mixup. (1%)Katherine M. Collins; Umang Bhatt; Weiyang Liu; Vihari Piratla; Ilia Sucholutsky; Bradley Love; Adrian Weller
2022-11-01
LMD: A Learnable Mask Network to Detect Adversarial Examples for Speaker Verification. (99%)Xing Chen; Jie Wang; Xiao-Lei Zhang; Wei-Qiang Zhang; Kunde Yang
The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training. (99%)Junhao Dong; Seyed-Mohsen Moosavi-Dezfooli; Jianhuang Lai; Xiaohua Xie
DensePure: Understanding Diffusion Models towards Adversarial Robustness. (98%)Chaowei Xiao; Zhongzhu Chen; Kun Jin; Jiongxiao Wang; Weili Nie; Mingyan Liu; Anima Anandkumar; Bo Li; Dawn Song
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks. (87%)Jianan Zhou; Jianing Zhu; Jingfeng Zhang; Tongliang Liu; Gang Niu; Bo Han; Masashi Sugiyama
Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics. (82%)Ekaterina Shumitskaya; Anastasia Antsiferova; Dmitriy Vatolin
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning. (80%)Virat Shejwalkar; Lingjuan Lyu; Amir Houmansadr
Maximum Likelihood Distillation for Robust Modulation Classification. (69%)Javier Maroto; Gérôme Bovet; Pascal Frossard
FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness. (45%)Wenhao Wu; Wei Li; Jiachen Liu; Xinyan Xiao; Ziqiang Cao; Sujian Li; Hua Wu
Amplifying Membership Exposure via Data Poisoning. (22%)Yufei Chen; Chao Shen; Yun Shen; Cong Wang; Yang Zhang
ActGraph: Prioritization of Test Cases Based on Deep Neural Network Activation Graph. (13%)Jinyin Chen; Jie Ge; Haibin Zheng
2022-10-31
Scoring Black-Box Models for Adversarial Robustness. (98%)Jian Vora; Pranay Reddy Samala
ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation. (88%)Tomokatsu Takahashi; Masanori Yamada; Yuuki Yamanaka; Tomoya Yamashita
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. (16%)Daphne Ippolito; Florian Tramèr; Milad Nasr; Chiyuan Zhang; Matthew Jagielski; Katherine Lee; Christopher A. Choquette-Choo; Nicholas Carlini
2022-10-30
Poison Attack and Defense on Deep Source Code Processing Models. (99%)Jia Li; Zhuo Li; Huangzhao Zhang; Ge Li; Zhi Jin; Xing Hu; Xin Xia
Character-level White-Box Adversarial Attacks against Transformers via Attachable Subwords Substitution. (99%)Aiwei Liu; Honghai Yu; Xuming Hu; Shu'ang Li; Li Lin; Fukun Ma; Yawen Yang; Lijie Wen
Benchmarking Adversarial Patch Against Aerial Detection. (99%)Jiawei Lian; Shaohui Mei; Shun Zhang; Mingyang Ma
Symmetric Saliency-based Adversarial Attack To Speaker Identification. (92%)Jiadi Yao; Xing Chen; Xiao-Lei Zhang; Wei-Qiang Zhang; Kunde Yang
FI-ODE: Certified and Robust Forward Invariance in Neural ODEs. (61%)Yujia Huang; Ivan Dario Jimenez Rodriguez; Huan Zhang; Yuanyuan Shi; Yisong Yue
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive Games. (9%)The Viet Bui; Tien Mai; Thanh H. Nguyen
2022-10-29
On the Need of Neuromorphic Twins to Detect Denial-of-Service Attacks on Communication Networks. (10%)Holger Boche; Rafael F. Schaefer; H. Vincent Poor; Frank H. P. Fitzek
2022-10-28
Universal Adversarial Directions. (99%)Ching Lam Choi; Farzan Farnia
Improving the Transferability of Adversarial Attacks on Face Recognition with Beneficial Perturbation Feature Augmentation. (99%)Fengfan Zhou; Hefei Ling; Yuxuan Shi; Jiazhong Chen; Zongyi Li; Ping Li
Improving Hyperspectral Adversarial Robustness using Ensemble Networks in the Presences of Multiple Attacks. (98%)Nicholas Soucy; Salimeh Yasaei Sekeh
Distributed Black-box Attack against Image Classification Cloud Services. (89%)Han Wu; Sareh Rowlands; Johan Wahlstrom
RoChBert: Towards Robust BERT Fine-tuning for Chinese. (75%)Zihan Zhang; Jinfeng Li; Ning Shi; Bo Yuan; Xiangyu Liu; Rong Zhang; Hui Xue; Donghong Sun; Chao Zhang
Robust Boosting Forests with Richer Deep Feature Hierarchy. (56%)Jianqiao Wangni
Localized Randomized Smoothing for Collective Robustness Certification. (26%)Jan Schuchardt; Tom Wollschläger; Aleksandar Bojchevski; Stephan Günnemann
Towards Reliable Neural Specifications. (11%)Chuqin Geng; Nham Le; Xiaojie Xu; Zhaoyue Wang; Arie Gurfinkel; Xujie Si
On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models. (1%)Mauro Conti; Jiaxin Li; Stjepan Picek
2022-10-27
TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems. (99%)Islam Debicha; Richard Bauwens; Thibault Debatty; Jean-Michel Dricot; Tayeb Kenaza; Wim Mees
Isometric 3D Adversarial Examples in the Physical World. (99%)Yibo Miao; Yinpeng Dong; Jun Zhu; Xiao-Shan Gao
LeNo: Adversarial Robust Salient Object Detection Networks with Learnable Noise. (92%)He Tang; He Wang
TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack. (92%)Yu Cao; Dianqi Li; Meng Fang; Tianyi Zhou; Jun Gao; Yibing Zhan; Dacheng Tao
Efficient and Effective Augmentation Strategy for Adversarial Training. (56%)Sravanti Addepalli; Samyak Jain; R. Venkatesh Babu
Noise Injection Node Regularization for Robust Learning. (2%)Noam Levi; Itay M. Bloch; Marat Freytsis; Tomer Volansky
Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather. (1%)Jinlong Li; Runsheng Xu; Jin Ma; Qin Zou; Jiaqi Ma; Hongkai Yu
2022-10-26
Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting. (99%)Pengyue Hou; Jie Han; Xingyu Li
There is more than one kind of robustness: Fooling Whisper with adversarial examples. (98%)Raphael Olivier; Bhiksha Raj
Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness. (86%)Jiahao Zhao; Wenji Mao
BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic Constraints for Adversarial Examples. (75%)Mohaddeseh Bastan; Mihai Surdeanu; Niranjan Balasubramanian
EIPSIM: Modeling Secure IP Address Allocation at Cloud Scale. (11%)Eric Pauley; Kyle Domico; Blaine Hoak; Ryan Sheatsley; Quinn Burke; Yohan Beugin; Patrick McDaniel
V-Cloak: Intelligibility-, Naturalness- & Timbre-Preserving Real-Time Voice Anonymization. (10%)Jiangyi Deng; Fei Teng; Yanjiao Chen; Xiaofu Chen; Zhaohui Wang; Wenyuan Xu
Rethinking the Reverse-engineering of Trojan Triggers. (5%)Zhenting Wang; Kai Mei; Hailun Ding; Juan Zhai; Shiqing Ma
Cover Reproducible Steganography via Deep Generative Models. (1%)Kejiang Chen; Hang Zhou; Yaofei Wang; Menghan Li; Weiming Zhang; Nenghai Yu
DEMIS: A Threat Model for Selectively Encrypted Visual Surveillance Data. (1%)Ifeoluwapo Aribilola; Mamoona Naveed Asghar; Brian Lee
Privately Fine-Tuning Large Language Models with Differential Privacy. (1%)Rouzbeh Behnia; Mohamamdreza Ebrahimi; Jason Pacheco; Balaji Padmanabhan
2022-10-25
Adversarially Robust Medical Classification via Attentive Convolutional Neural Networks. (99%)Isaac Wasserman
A White-Box Adversarial Attack Against a Digital Twin. (99%)Wilson Patterson; Ivan Fernandez; Subash Neupane; Milan Parmar; Sudip Mittal; Shahram Rahimi
Adaptive Test-Time Defense with the Manifold Hypothesis. (98%)Zhaoyuan Yang; Zhiwei Xu; Jing Zhang; Richard Hartley; Peter Tu
LP-BFGS attack: An adversarial attack based on the Hessian with limited pixels. (98%)Jiebao Zhang; Wenhua Qian; Rencan Nie; Jinde Cao; Dan Xu
Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes. (98%)Sina Baharlouei; Fatemeh Sheikholeslami; Meisam Razaviyayn; Zico Kolter
Multi-view Representation Learning from Malware to Defend Against Adversarial Variants. (98%)James Lee Hu; Mohammadreza Ebrahimi; Weifeng Li; Xin Li; Hsinchun Chen
Accelerating Certified Robustness Training via Knowledge Transfer. (73%)Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network. (64%)Huan Hua; Jun Yan; Xi Fang; Weiquan Huang; Huilin Yin; Wancheng Ge
Towards Robust Recommender Systems via Triple Cooperative Defense. (61%)Qingyang Wang; Defu Lian; Chenwang Wu; Enhong Chen
Towards Formal Approximated Minimal Explanations of Neural Networks. (13%)Shahaf Bassan; Guy Katz
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification. (13%)Yulin Zhu; Liang Tong; Kai Zhou
A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks. (3%)M. Kuzlu; F. O. Catak; S. Sarp; U. Cali; O Gueler
Robustness of Locally Differentially Private Graph Analysis Against Poisoning. (1%)Jacob Imola; Amrita Roy Chowdhury; Kamalika Chaudhuri
2022-10-24
Ares: A System-Oriented Wargame Framework for Adversarial ML. (99%)Farhan Ahmed; Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
SpacePhish: The Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning. (99%)Giovanni Apruzzese; Mauro Conti; Ying Yuan
Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs. (96%)Haibin Zheng; Haiyang Xiong; Jinyin Chen; Haonan Ma; Guohan Huang
On the Robustness of Dataset Inference. (81%)Sebastian Szyller; Rui Zhang; Jian Liu; N. Asokan
Flexible Android Malware Detection Model based on Generative Adversarial Networks with Code Tensor. (16%)Zhao Yang; Fengyang Deng; Linxi Han
Revisiting Sparse Convolutional Model for Visual Recognition. (11%)Xili Dai; Mingyang Li; Pengyuan Zhai; Shengbang Tong; Xingjian Gao; Shao-Lun Huang; Zhihui Zhu; Chong You; Yi Ma
2022-10-23
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. (68%)Kaiyuan Zhang; Guanhong Tao; Qiuling Xu; Siyuan Cheng; Shengwei An; Yingqi Liu; Shiwei Feng; Guangyu Shen; Pin-Yu Chen; Shiqing Ma; Xiangyu Zhang
Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future. (45%)Guo-Jun Qi; Mubarak Shah
2022-10-22
ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation. (99%)Fan Yin; Yao Li; Cho-Jui Hsieh; Kai-Wei Chang
Hindering Adversarial Attacks with Implicit Neural Representations. (92%)Andrei A. Rusu; Dan A. Calian; Sven Gowal; Raia Hadsell
GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections. (81%)Junyuan Fang; Haixian Wen; Jiajing Wu; Qi Xuan; Zibin Zheng; Chi K. Tse
Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games. (26%)Maria-Florina Balcan; Rattana Pukdee; Pradeep Ravikumar; Hongyang Zhang
Precisely the Point: Adversarial Augmentations for Faithful and Informative Text Generation. (4%)Wenhao Wu; Wei Li; Jiachen Liu; Xinyan Xiao; Sujian Li; Yajuan Lyu
2022-10-21
Evolution of Neural Tangent Kernels under Benign and Adversarial Training. (99%)Noel Loo; Ramin Hasani; Alexander Amini; Daniela Rus
The Dark Side of AutoML: Towards Architectural Backdoor Search. (68%)Ren Pang; Changjiang Li; Zhaohan Xi; Shouling Ji; Ting Wang
Diffusion Visual Counterfactual Explanations. (10%)Maximilian Augustin; Valentyn Boreiko; Francesco Croce; Matthias Hein
TCAB: A Large-Scale Text Classification Attack Benchmark. (10%)Kalyani Asthana; Zhouhang Xie; Wencong You; Adam Noack; Jonathan Brophy; Sameer Singh; Daniel Lowd
A critical review of cyber-physical security for building automation systems. (2%)Guowen Li; Lingyu Ren; Yangyang Fu; Zhiyao Yang; Veronica Adetola; Jin Wen; Qi Zhu; Teresa Wu; K. Selcuk Candanf; Zheng O'Neill
Extracted BERT Model Leaks More Information than You Think! (1%)Xuanli He; Chen Chen; Lingjuan Lyu; Qiongkai Xu
2022-10-20
Identifying Human Strategies for Generating Word-Level Adversarial Examples. (98%)Maximilian Mozes; Bennett Kleinberg; Lewis D. Griffin
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks. (98%)Jiyang Guan; Jian Liang; Ran He
Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models. (98%)Hannah Chen; Yangfeng Ji; David Evans
Learning Sample Reweighting for Accuracy and Adversarial Robustness. (93%)Chester Holtz; Tsui-Wei Weng; Gal Mishne
Similarity of Neural Architectures Based on Input Gradient Transferability. (86%)Jaehui Hwang; Dongyoon Han; Byeongho Heo; Song Park; Sanghyuk Chun; Jong-Seok Lee
New data poison attacks on machine learning classifiers for mobile exfiltration. (80%)Miguel A. Ramirez; Sangyoung Yoon; Ernesto Damiani; Hussam Al Hamadi; Claudio Agostino Ardagna; Nicola Bena; Young-Ji Byon; Tae-Yeon Kim; Chung-Suk Cho; Chan Yeob Yeun
Attacking Motion Estimation with Adversarial Snow. (16%)Jenny Schmalfuss; Lukas Mehl; Andrés Bruhn
How Does a Deep Learning Model Architecture Impact Its Privacy? (5%)Guangsheng Zhang; Bo Liu; Huan Tian; Tianqing Zhu; Ming Ding; Wanlei Zhou
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario. (4%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Enrique Tomás Martínez Beltrán; Daniel Demeter; Gérôme Bovet; Gregorio Martínez Pérez; Burkhard Stiller
Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning. (3%)Xiaoyi Chen; Baisong Xin; Shengfang Zhai; Shiqing Ma; Qingni Shen; Zhonghai Wu
LOT: Layer-wise Orthogonal Training on Improving $\ell_2$ Certified Robustness. (3%)Xiaojun Xu; Linyi Li; Bo Li
2022-10-19
Few-shot Transferable Robust Representation Learning via Bilevel Attacks. (93%)Minseon Kim; Hyeonjeong Ha; Sung Ju Hwang
Targeted Adversarial Self-Supervised Learning. (86%)Minseon Kim; Hyeonjeong Ha; Sooel Son; Sung Ju Hwang
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis. (83%)Ruinan Jin; Xiaoxiao Li
Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey. (69%)Hui Cao; Wenlong Zou; Yinkun Wang; Ting Song; Mengjun Liu
Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP. (64%)Yangyi Chen; Hongcheng Gao; Ganqu Cui; Fanchao Qi; Longtao Huang; Zhiyuan Liu; Maosong Sun
Model-Free Prediction of Adversarial Drop Points in 3D Point Clouds. (54%)Hanieh Naderi; Chinthaka Dinesh; Ivan V. Bajic; Shohreh Kasaei
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. (41%)Xiaoyu Cao; Jinyuan Jia; Zaixi Zhang; Neil Zhenqiang Gong
Chaos Theory and Adversarial Robustness. (22%)Jonathan S. Kent
Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning. (16%)Ruihan Wu; Xiangyu Chen; Chuan Guo; Kilian Q. Weinberger
Variational Model Perturbation for Source-Free Domain Adaptation. (1%)Mengmeng Jing; Xiantong Zhen; Jingjing Li; Cees G. M. Snoek
2022-10-18
Scaling Adversarial Training to Large Perturbation Bounds. (98%)Sravanti Addepalli; Samyak Jain; Gaurang Sriramanan; R. Venkatesh Babu
Not All Poisons are Created Equal: Robust Training against Data Poisoning. (97%)Yu Yang; Tian Yu Liu; Baharan Mirzasoleiman
ROSE: Robust Selective Fine-tuning for Pre-trained Language Models. (73%)Lan Jiang; Hao Zhou; Yankai Lin; Peng Li; Jie Zhou; Rui Jiang
Analysis of Master Vein Attacks on Finger Vein Recognition Systems. (56%)Huy H. Nguyen; Trung-Nghia Le; Junichi Yamagishi; Isao Echizen
Training set cleansing of backdoor poisoning by self-supervised representation learning. (56%)H. Wang; S. Karami; O. Dia; H. Ritter; E. Emamjomeh-Zadeh; J. Chen; Z. Xiang; D. J. Miller; G. Kesidis
On the Adversarial Robustness of Mixture of Experts. (13%)Joan Puigcerver; Rodolphe Jenatton; Carlos Riquelme; Pranjal Awasthi; Srinadh Bhojanapalli
Transferable Unlearnable Examples. (8%)Jie Ren; Han Xu; Yuxuan Wan; Xingjun Ma; Lichao Sun; Jiliang Tang
Automatic Detection of Fake Key Attacks in Secure Messaging. (8%)Tarun Kumar Yadav; Devashish Gosain; Amir Herzberg; Daniel Zappala; Kent Seamons
Improving Adversarial Robustness by Contrastive Guided Diffusion Process. (2%)Yidong Ouyang; Liyan Xie; Guang Cheng
2022-10-17
Towards Generating Adversarial Examples on Mixed-type Data. (99%)Han Xu; Menghai Pan; Zhimeng Jiang; Huiyuan Chen; Xiaoting Li; Mahashweta Das; Hao Yang
Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors. (99%)Jialiang Sun; Tingsong Jiang; Wen Yao; Donghua Wang; Xiaoqian Chen
Probabilistic Categorical Adversarial Attack & Adversarial Training. (99%)Pengfei He; Han Xu; Jie Ren; Yuxuan Wan; Zitao Liu; Jiliang Tang
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class. (96%)Khoa D. Doan; Yingjie Lao; Ping Li
DE-CROP: Data-efficient Certified Robustness for Pretrained Classifiers. (87%)Gaurav Kumar Nayak; Ruchit Rawal; Anirban Chakraborty
Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations. (78%)Julia El Zini; Mariette Awad
Towards Fair Classification against Poisoning Attacks. (76%)Han Xu; Xiaorui Liu; Yuxuan Wan; Jiliang Tang
Deepfake Text Detection: Limitations and Opportunities. (41%)Jiameng Pu; Zain Sarwar; Sifat Muhammad Abdullah; Abdullah Rehman; Yoonjin Kim; Parantapa Bhattacharya; Mobin Javed; Bimal Viswanath
You Can't See Me: Physical Removal Attacks on LiDAR-based Autonomous Vehicles Driving Frameworks. (15%)Yulong Cao; S. Hrushikesh Bhupathiraju; Pirouz Naghavi; Takeshi Sugawara; Z. Morley Mao; Sara Rampazzi
Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models. (9%)Zhiyuan Zhang; Lingjuan Lyu; Xingjun Ma; Chenguang Wang; Xu Sun
Understanding CNN Fragility When Learning With Imbalanced Data. (1%)Damien Dablain; Kristen N. Jacobson; Colin Bellinger; Mark Roberts; Nitesh Chawla
2022-10-16
Object-Attentional Untargeted Adversarial Attack. (99%)Chao Zhou; Yuan-Gen Wang; Guopu Zhu
Nowhere to Hide: A Lightweight Unsupervised Detector against Adversarial Examples. (99%)Hui Liu; Bo Zhao; Kehuan Zhang; Peng Liu
ODG-Q: Robust Quantization via Online Domain Generalization. (83%)Chaofan Tao; Ngai Wong
Interpretable Machine Learning for Detection and Classification of Ransomware Families Based on API Calls. (1%)Rawshan Ara Mowri; Madhuri Siddula; Kaushik Roy
2022-10-15
RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging. (2%)Ajay Jaiswal; Kumar Ashutosh; Justin F Rousseau; Yifan Peng; Zhangyang Wang; Ying Ding
2022-10-14
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture. (87%)Yichuan Mo; Dongxian Wu; Yifei Wang; Yiwen Guo; Yisen Wang
Dynamics-aware Adversarial Attack of Adaptive Neural Networks. (86%)An Tao; Yueqi Duan; Yingqi Wang; Jiwen Lu; Jie Zhou
Is Face Recognition Safe from Realizable Attacks? (84%)Sanjay Saha; Terence Sim
Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks. (76%)Sishuo Chen; Wenkai Yang; Zhiyuan Zhang; Xiaohan Bi; Xu Sun
Close the Gate: Detecting Backdoored Models in Federated Learning based on Client-Side Deep Layer Output Analysis. (67%)Phillip Rieger; Torsten Krauß; Markus Miettinen; Alexandra Dmitrienko; Ahmad-Reza Sadeghi
2022-10-13
Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition. (99%)Shuai Jia; Bangjie Yin; Taiping Yao; Shouhong Ding; Chunhua Shen; Xiaokang Yang; Chao Ma
AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient. (99%)Farzad Nikfam; Alberto Marchisio; Maurizio Martina; Muhammad Shafique
Demystifying Self-supervised Trojan Attacks. (87%)Changjiang Li; Ren Pang; Zhaohan Xi; Tianyu Du; Shouling Ji; Yuan Yao; Ting Wang
Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors. (81%)Qixun Wang; Yifei Wang; Hong Zhu; Yisen Wang
Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation. (13%)Zhouxing Shi; Yihan Wang; Huan Zhang; Zico Kolter; Cho-Jui Hsieh
Large-Scale Open-Set Classification Protocols for ImageNet. (2%)Jesus Andres Palechor Anacona; Annesha Bhoumik; Manuel Günther
SoK: How Not to Architect Your Next-Generation TEE Malware? (1%)Kubilay Ahmet Küçük; Steve Moyle; Andrew Martin; Alexandru Mereacre; Nicholas Allott
Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning. (1%)Peng Ye; Zhifeng Jiang; Wei Wang; Bo Li; Baochun Li
Characterizing the Influence of Graph Elements. (1%)Zizhang Chen; Peizhao Li; Hongfu Liu; Pengyu Hong
2022-10-12
A Game Theoretical vulnerability analysis of Adversarial Attack. (99%)Khondker Fariha Hossain; Alireza Tavakkoli; Shamik Sengupta
Visual Prompting for Adversarial Robustness. (99%)Aochuan Chen; Peter Lorenz; Yuguang Yao; Pin-Yu Chen; Sijia Liu
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation. (99%)Zeyu Qin; Yanbo Fan; Yi Liu; Li Shen; Yong Zhang; Jue Wang; Baoyuan Wu
Robust Models are less Over-Confident. (96%)Julia Grabinski; Paul Gavrikov; Janis Keuper; Margret Keuper
Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity. (86%)Andrew C. Cullen; Paul Montague; Shijie Liu; Sarah M. Erfani; Benjamin I. P. Rubinstein
Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning. (82%)Yongyuan Liang; Yanchao Sun; Ruijie Zheng; Furong Huang
COLLIDER: A Robust Training Framework for Backdoor Data. (81%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. (76%)Haotao Wang; Junyuan Hong; Aston Zhang; Jiayu Zhou; Zhangyang Wang
Few-shot Backdoor Attacks via Neural Tangent Kernels. (62%)Jonathan Hayase; Sewoong Oh
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? (9%)Yi Zeng; Minzhou Pan; Himanshu Jahagirdar; Ming Jin; Lingjuan Lyu; Ruoxi Jia
Understanding Impacts of Task Similarity on Backdoor Attack and Detection. (2%)Di Tang; Rui Zhu; XiaoFeng Wang; Haixu Tang; Yi Chen
2022-10-11
What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? (99%)Nikolaos Tsilivis; Julia Kempe
Stable and Efficient Adversarial Training through Local Linearization. (91%)Zhuorong Li; Daiwei Yu
RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks. (86%)Alberto Marchisio; Vojtech Mrazek; Andrea Massa; Beatrice Bussolino; Maurizio Martina; Muhammad Shafique
Adversarial Attack Against Image-Based Localization Neural Networks. (78%)Meir Brand; Itay Naeh; Daniel Teitelman
Detecting Backdoors in Deep Text Classifiers. (76%)You Guo; Jun Wang; Trevor Cohn
Human Body Measurement Estimation with Adversarial Augmentation. (33%)Nataniel Ruiz; Miriam Bellver; Timo Bolkart; Ambuj Arora; Ming C. Lin; Javier Romero; Raja Bala
Curved Representation Space of Vision Transformers. (10%)Juyeop Kim; Junha Park; Songkuk Kim; Jong-Seok Lee
Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity. (1%)Vazelhes William de; Hualin Zhang; Huimin Wu; Xiao-Tong Yuan; Bin Gu
Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach. (1%)Peng Mi; Li Shen; Tianhe Ren; Yiyi Zhou; Xiaoshuai Sun; Rongrong Ji; Dacheng Tao
2022-10-10
Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization. (92%)Ziquan Liu; Antoni B. Chan
Revisiting adapters with adversarial training. (88%)Sylvestre-Alvise Rebuffi; Francesco Croce; Sven Gowal
Universal Adversarial Perturbations: Efficiency on a small image dataset. (81%)Waris Radji
Certified Training: Small Boxes are All You Need. (22%)Mark Niklas Müller; Franziska Eckert; Marc Fischer; Martin Vechev
Denoising Masked AutoEncoders Help Robust Classification. (1%)Quanlin Wu; Hang Ye; Yuntian Gu; Huishuai Zhang; Liwei Wang; Di He
2022-10-09
Pruning Adversarially Robust Neural Networks without Adversarial Examples. (99%)Tong Jian; Zifeng Wang; Yanzhi Wang; Jennifer Dy; Stratis Ioannidis
Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective. (99%)Yao Zhu; Yuefeng Chen; Xiaodan Li; Kejiang Chen; Yuan He; Xiang Tian; Bolun Zheng; Yaowu Chen; Qingming Huang
Online Training Through Time for Spiking Neural Networks. (1%)Mingqing Xiao; Qingyan Meng; Zongpeng Zhang; Di He; Zhouchen Lin
2022-10-08
Symmetry Subgroup Defense Against Adversarial Attacks. (99%)Blerta Lindqvist
FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems. (99%)Jiahui Chen; Yi Zhao; Qi Li; Xuewei Feng; Ke Xu
Robustness of Unsupervised Representation Learning without Labels. (54%)Aleksandar Petrov; Marta Kwiatkowska
2022-10-07
Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs. (99%)Prashant Pandey; Aleti Vardhan; Mustafa Chasmai; Tanuj Sur; Brejesh Lall
Pre-trained Adversarial Perturbations. (99%)Yuanhao Ban; Yinpeng Dong
ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints. (93%)Yinpeng Dong; Shouwei Ruan; Hang Su; Caixin Kang; Xingxing Wei; Jun Zhu
Game-Theoretic Understanding of Misclassification. (47%)Kosuke Sumiyasu; Kazuhiko Kawamoto; Hiroshi Kera
A2: Efficient Automated Attacker for Boosting Adversarial Training. (41%)Zhuoer Xu; Guanghui Zhu; Changhua Meng; Shiwen Cui; Zhenzhe Ying; Weiqiang Wang; Ming GU; Yihua Huang
NMTSloth: Understanding and Testing Efficiency Degradation of Neural Machine Translation Systems. (13%)Simin Chen; Cong Liu; Mirazul Haque; Zihe Song; Wei Yang
A Wolf in Sheep's Clothing: Spreading Deadly Pathogens Under the Disguise of Popular Music. (2%)Anomadarshi Barua; Yonatan Gizachew Achamyeleh; Mohammad Abdullah Al Faruque
Improving Fine-Grain Segmentation via Interpretable Modifications: A Case Study in Fossil Segmentation. (1%)Indu Panigrahi; Ryan Manzuk; Adam Maloof; Ruth Fong
Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets. (1%)Chen Gong; Zhou Yang; Yunpeng Bai; Junda He; Jieke Shi; Arunesh Sinha; Bowen Xu; Xinwen Hou; Guoliang Fan; David Lo
2022-10-06
Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. (99%)Chawin Sitawarin; Florian Tramèr; Nicholas Carlini
Enhancing Code Classification by Mixup-Based Data Augmentation. (96%)Zeming Dong; Qiang Hu; Yuejun Guo; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
Deep Reinforcement Learning based Evasion Generative Adversarial Network for Botnet Detection. (92%)Rizwan Hamid Randhawa; Nauman Aslam; Mohammad Alauthman; Muhammad Khalid; Husnain Rafiq
On Optimal Learning Under Targeted Data Poisoning. (82%)Steve Hanneke; Amin Karbasi; Mohammad Mahmoody; Idan Mehalel; Shay Moran
Towards Out-of-Distribution Adversarial Robustness. (73%)Adam Ibrahim; Charles Guille-Escuret; Ioannis Mitliagkas; Irina Rish; David Krueger; Pouya Bashivan
InferES : A Natural Language Inference Corpus for Spanish Featuring Negation-Based Contrastive and Adversarial Examples. (61%)Venelin Kovatchev; Mariona Taulé
Unsupervised Domain Adaptation for COVID-19 Information Service with Contrastive Adversarial Domain Mixup. (41%)Huimin Zeng; Zhenrui Yue; Ziyi Kou; Lanyu Shang; Yang Zhang; Dong Wang
Synthetic Dataset Generation for Privacy-Preserving Machine Learning. (2%)Efstathia Soufleri; Gobinda Saha; Kaushik Roy
Enhancing Mixup-Based Graph Learning for Language Processing via Hybrid Pooling. (1%)Zeming Dong; Qiang Hu; Yuejun Guo; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
Bad Citrus: Reducing Adversarial Costs with Model Distances. (1%)Giorgio Severi; Will Pearce; Alina Oprea
2022-10-05
Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks. (99%)Shengming Yuan; Qilong Zhang; Lianli Gao; Yaya Cheng; Jingkuan Song
Dynamic Stochastic Ensemble with Adversarial Robust Lottery Ticket Subnetworks. (98%)Qi Peng; Wenlin Liu; Ruoxi Qin; Libin Hou; Bin Yan; Linyuan Wang
On Adversarial Robustness of Deep Image Deblurring. (83%)Kanchana Vaishnavi Gandikota; Paramanand Chandramouli; Michael Moeller
A Closer Look at Robustness to L-infinity and Spatial Perturbations and their Composition. (81%)Luke Rowe; Benjamin Thérien; Krzysztof Czarnecki; Hongyang Zhang
Jitter Does Matter: Adapting Gaze Estimation to New Domains. (78%)Ruicong Liu; Yiwei Bao; Mingjie Xu; Haofei Wang; Yunfei Liu; Feng Lu
Image Masking for Robust Self-Supervised Monocular Depth Estimation. (38%)Hemang Chawla; Kishaan Jeeveswaran; Elahe Arani; Bahram Zonooz
Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations. (38%)Jialing Liao; Zheng Chen; Erik G. Larsson
2022-10-04
Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective. (97%)Bohang Zhang; Du Jiang; Di He; Liwei Wang
Robust Fair Clustering: A Novel Fairness Attack and Defense Framework. (93%)Anshuman Chhabra; Peizhao Li; Prasant Mohapatra; Hongfu Liu
A Study on the Efficiency and Generalization of Light Hybrid Retrievers. (86%)Man Luo; Shashank Jain; Anchit Gupta; Arash Einolghozati; Barlas Oguz; Debojeet Chatterjee; Xilun Chen; Chitta Baral; Peyman Heidari
Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models. (81%)Fan Liu; Hao Liu; Wenzhao Jiang
On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses. (75%)Anshuman Chhabra; Ashwin Sekhari; Prasant Mohapatra
Robustness Certification of Visual Perception Models via Camera Motion Smoothing. (70%)Hanjiang Hu; Zuxin Liu; Linyi Li; Jiacheng Zhu; Ding Zhao
Backdoor Attacks in the Supply Chain of Masked Image Modeling. (68%)Xinyue Shen; Xinlei He; Zheng Li; Yun Shen; Michael Backes; Yang Zhang
CADet: Fully Self-Supervised Anomaly Detection With Contrastive Learning. (54%)Charles Guille-Escuret; Pau Rodriguez; David Vazquez; Ioannis Mitliagkas; Joao Monteiro
Invariant Aggregator for Defending Federated Backdoor Attacks. (22%)Xiaoyang Wang; Dimitrios Dimitriadis; Sanmi Koyejo; Shruti Tople
2022-10-03
MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. (99%)Jinyuan Jia; Wenjie Qu; Neil Zhenqiang Gong
Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection. (97%)Xuanjun Chen; Haibin Wu; Helen Meng; Hung-yi Lee; Jyh-Shing Roger Jang
Stability Analysis and Generalization Bounds of Adversarial Training. (96%)Jiancong Xiao; Yanbo Fan; Ruoyu Sun; Jue Wang; Zhi-Quan Luo
On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks. (92%)Huimin Zeng; Zhenrui Yue; Yang Zhang; Ziyi Kou; Lanyu Shang; Dong Wang
Decompiling x86 Deep Neural Network Executables. (83%)Zhibo Liu; Yuanyuan Yuan; Shuai Wang; Xiaofei Xie; Lei Ma
Strength-Adaptive Adversarial Training. (80%)Chaojian Yu; Dawei Zhou; Li Shen; Jun Yu; Bo Han; Mingming Gong; Nannan Wang; Tongliang Liu
ASGNN: Graph Neural Networks with Adaptive Structure. (68%)Zepeng Zhang; Songtao Lu; Zengfeng Huang; Ziping Zhao
UnGANable: Defending Against GAN-based Face Manipulation. (2%)Zheng Li; Ning Yu; Ahmed Salem; Michael Backes; Mario Fritz; Yang Zhang
2022-10-02
Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis. (99%)Jiancong Xiao; Zeyu Qin; Yanbo Fan; Baoyuan Wu; Jue Wang; Zhi-Quan Luo
Understanding Adversarial Robustness Against On-manifold Adversarial Examples. (99%)Jiancong Xiao; Liusha Yang; Yanbo Fan; Jue Wang; Zhi-Quan Luo
FLCert: Provably Secure Federated Learning against Poisoning Attacks. (74%)Xiaoyu Cao; Zaixi Zhang; Jinyuan Jia; Neil Zhenqiang Gong
Optimization for Robustness Evaluation beyond $\ell_p$ Metrics. (16%)Hengyue Liang; Buyun Liang; Ying Cui; Tim Mitchell; Ju Sun
Automated Security Analysis of Exposure Notification Systems. (1%)Kevin Morio; Ilkan Esiyok; Dennis Jackson; Robert Künnemann
2022-10-01
DeltaBound Attack: Efficient decision-based attack in low queries regime. (96%)Lorenzo Rossi
Adversarial Attacks on Transformers-Based Malware Detectors. (91%)Yash Jakhotiya; Heramb Patil; Jugal Rawlani; Dr. Sunil B. Mane
Voice Spoofing Countermeasures: Taxonomy, State-of-the-art, experimental analysis of generalizability, open challenges, and the way forward. (5%)Awais Khan; Khalid Mahmood Malik; James Ryan; Mikul Saravanan
2022-09-30
Your Out-of-Distribution Detection Method is Not Robust! (99%)Mohammad Azizmalayeri; Arshia Soltani Moakhar; Arman Zarei; Reihaneh Zohrabi; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
Learning Robust Kernel Ensembles with Kernel Average Pooling. (99%)Pouya Bashivan; Adam Ibrahim; Amirozhan Dehghani; Yifei Ren
Adversarial Robustness of Representation Learning for Knowledge Graphs. (95%)Peru Bhardwaj
Visual Privacy Protection Based on Type-I Adversarial Attack. (92%)Zhigang Su; Dawei Zhou; Decheng Liu; Nannan Wang; Zhen Wang; Xinbo Gao
On the tightness of linear relaxation based robustness certification methods. (78%)Cheng Tang
Data Poisoning Attacks Against Multimodal Encoders. (73%)Ziqing Yang; Xinlei He; Zheng Li; Michael Backes; Mathias Humbert; Pascal Berrang; Yang Zhang
ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks. (70%)Tim Clifford; Ilia Shumailov; Yiren Zhao; Ross Anderson; Robert Mullins
2022-09-29
Physical Adversarial Attack meets Computer Vision: A Decade Survey. (99%)Hui Wei; Hao Tang; Xuemei Jia; Hanxun Yu; Zhubo Li; Zhixiang Wang; Shin'ichi Satoh; Zheng Wang
Towards Lightweight Black-Box Attacks against Deep Neural Networks. (99%)Chenghao Sun; Yonggang Zhang; Wan Chaoqun; Qizhou Wang; Ya Li; Tongliang Liu; Bo Han; Xinmei Tian
Generalizability of Adversarial Robustness Under Distribution Shifts. (83%)Kumail Alhamoud; Hasan Abed Al Kader Hammoud; Motasem Alfarra; Bernard Ghanem
Digital and Physical Face Attacks: Reviewing and One Step Further. (2%)Chenqi Kong; Shiqi Wang; Haoliang Li
Chameleon Cache: Approximating Fully Associative Caches with Random Replacement to Prevent Contention-Based Cache Attacks. (1%)Thomas Unterluggauer; Austin Harris; Scott Constable; Fangfei Liu; Carlos Rozas
2022-09-28
A Survey on Physical Adversarial Attack in Computer Vision. (99%)Donghua Wang; Wen Yao; Tingsong Jiang; Guijiang Tang; Xiaoqian Chen
Exploring the Relationship between Architecture and Adversarially Robust Generalization. (99%)Aishan Liu; Shiyu Tang; Siyuan Liang; Ruihao Gong; Boxi Wu; Xianglong Liu; Dacheng Tao
A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks. (67%)Kevin Hector; Mathieu Dumont; Pierre-Alain Moellic; Jean-Max Dutertre
Supervised Contrastive Learning as Multi-Objective Optimization for Fine-Tuning Large Pre-trained Language Models. (47%)Youness Moukafih; Mounir Ghogho; Kamel Smaili
On the Robustness of Ensemble-Based Machine Learning Against Data Poisoning. (12%)Marco Anisetti; Claudio A. Ardagna; Alessandro Balestrucci; Nicola Bena; Ernesto Damiani; Chan Yeob Yeun
CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention. (1%)Ziyu Guo; Renrui Zhang; Longtian Qiu; Xianzheng Ma; Xupeng Miao; Xuming He; Bin Cui
Improving alignment of dialogue agents via targeted human judgements. (1%)Amelia Glaese; Nat McAleese; Maja Trębacz; John Aslanides; Vlad Firoiu; Timo Ewalds; Maribeth Rauh; Laura Weidinger; Martin Chadwick; Phoebe Thacker; Lucy Campbell-Gillingham; Jonathan Uesato; Po-Sen Huang; Ramona Comanescu; Fan Yang; Abigail See; Sumanth Dathathri; Rory Greig; Charlie Chen; Doug Fritz; Jaume Sanchez Elias; Richard Green; Soňa Mokrá; Nicholas Fernando; Boxi Wu; Rachel Foley; Susannah Young; Iason Gabriel; William Isaac; John Mellor; Demis Hassabis; Koray Kavukcuoglu; Lisa Anne Hendricks; Geoffrey Irving
2022-09-27
Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection. (74%)Svetlana Pavlitskaya; Jonas Hendl; Sebastian Kleim; Leopold Müller; Fabian Wylczoch; J. Marius Zöllner
Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training. (33%)Saehyung Lee; Hyungyu Lee
Attacking Compressed Vision Transformers. (33%)Swapnil Parekh; Devansh Shah; Pratyush Shukla
Mitigating Attacks on Artificial Intelligence-based Spectrum Sensing for Cellular Network Signals. (8%)Ferhat Ozgur Catak; Murat Kuzlu; Salih Sarp; Evren Catak; Umit Cali
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. (5%)Yiming Li; Yang Bai; Yong Jiang; Yong Yang; Shu-Tao Xia; Bo Li
Reconstruction-guided attention improves the robustness and shape processing of neural networks. (2%)Seoyoung Ahn; Hossein Adeli; Gregory J. Zelinsky
A Learning-based Honeypot Game for Collaborative Defense in UAV Networks. (1%)Yuntao Wang; Zhou Su; Abderrahim Benslimane; Qichao Xu; Minghui Dai; Ruidong Li
Stability Via Adversarial Training of Neural Network Stochastic Control of Mean-Field Type. (1%)Julian Barreiro-Gomez; Salah Eddine Choutri; Boualem Djehiche
Measuring Overfitting in Convolutional Neural Networks using Adversarial Perturbations and Label Noise. (1%)Svetlana Pavlitskaya; Joël Oswald; J. Marius Zöllner
2022-09-26
FG-UAP: Feature-Gathering Universal Adversarial Perturbation. (99%)Zhixing Ye; Xinwen Cheng; Xiaolin Huang
Activation Learning by Local Competitions. (64%)Hongchao Zhou
Multi-Task Adversarial Training Algorithm for Multi-Speaker Neural Text-to-Speech. (1%)Yusuke Nakai; Yuki Saito; Kenta Udagawa; Hiroshi Saruwatari
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification. (1%)Adrien Bennetot; Gianni Franchi; Ser Javier Del; Raja Chatila; Natalia Diaz-Rodriguez
2022-09-25
SPRITZ-1.5C: Employing Deep Ensemble Learning for Improving the Security of Computer Networks against Adversarial Attacks. (81%)Ehsan Nowroozi; Mohammadreza Mohammadi; Erkay Savas; Mauro Conti; Yassine Mekdad
2022-09-24
Approximate better, Attack stronger: Adversarial Example Generation via Asymptotically Gaussian Mixture Distribution. (99%)Zhengwei Fang; Rui Wang; Tao Huang; Liping Jing
2022-09-23
The "Beatrix'' Resurrections: Robust Backdoor Detection via Gram Matrices. (13%)Wanlun Ma; Derui Wang; Ruoxi Sun; Minhui Xue; Sheng Wen; Yang Xiang
2022-09-22
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models. (50%)Sohaib Ahmad; Benjamin Fuller; Kaleel Mahmood
2022-09-21
Fair Robust Active Learning by Joint Inconsistency. (99%)Tsung-Han Wu; Shang-Tse Chen; Winston H. Hsu
Toy Models of Superposition. (45%)Nelson Elhage; Tristan Hume; Catherine Olsson; Nicholas Schiefer; Tom Henighan; Shauna Kravec; Zac Hatfield-Dodds; Robert Lasenby; Dawn Drain; Carol Chen; Roger Grosse; Sam McCandlish; Jared Kaplan; Dario Amodei; Martin Wattenberg; Christopher Olah
DARTSRepair: Core-failure-set Guided DARTS for Network Robustness to Common Corruptions. (13%)Xuhong Ren; Jianlang Chen; Felix Juefei-Xu; Wanli Xue; Qing Guo; Lei Ma; Jianjun Zhao; Shengyong Chen
Fairness Reprogramming. (1%)Guanhua Zhang; Yihua Zhang; Yang Zhang; Wenqi Fan; Qing Li; Sijia Liu; Shiyu Chang
2022-09-20
Understanding Real-world Threats to Deep Learning Models in Android Apps. (99%)Zizhuang Deng; Kai Chen; Guozhu Meng; Xiaodong Zhang; Ke Xu; Yao Cheng
Audit and Improve Robustness of Private Neural Networks on Encrypted Data. (99%)Jiaqi Xue; Lei Xu; Lin Chen; Weidong Shi; Kaidi Xu; Qian Lou
GAMA: Generative Adversarial Multi-Object Scene Attacks. (99%)Abhishek Aich; Calvin-Khang Ta; Akash Gupta; Chengyu Song; Srikanth V. Krishnamurthy; M. Salman Asif; Amit K. Roy-Chowdhury
Sparse Vicious Attacks on Graph Neural Networks. (98%)Giovanni Trappolini; Valentino Maiorca; Silvio Severino; Emanuele Rodolà; Fabrizio Silvestri; Gabriele Tolomei
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks. (98%)Abhishek Aich; Shasha Li; Chengyu Song; M. Salman Asif; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury
Rethinking Data Augmentation in Knowledge Distillation for Object Detection. (68%)Jiawei Liang; Siyuan Liang; Aishan Liu; Mingli Zhu; Danni Yuan; Chenye Xu; Xiaochun Cao
CANflict: Exploiting Peripheral Conflicts for Data-Link Layer Attacks on Automotive Networks. (1%)Alvise de Faveri Tron; Stefano Longari; Michele Carminati; Mario Polino; Stefano Zanero
EM-Fault It Yourself: Building a Replicable EMFI Setup for Desktop and Server Hardware. (1%)Niclas Kühnapfel; Robert Buhren; Hans Niklas Jacob; Thilo Krachenfels; Christian Werling; Jean-Pierre Seifert
2022-09-19
Catoptric Light can be Dangerous: Effective Physical-World Attack by Natural Phenomenon. (99%)Chengyin Hu; Weiwen Shi
Adversarial Color Projection: A Projector-Based Physical Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
2022-09-18
On the Adversarial Transferability of ConvMixer Models. (99%)Ryota Iijima; Miki Tanaka; Isao Echizen; Hitoshi Kiya
AdvDO: Realistic Adversarial Attacks for Trajectory Prediction. (96%)Yulong Cao; Chaowei Xiao; Anima Anandkumar; Danfei Xu; Marco Pavone
Distribution inference risks: Identifying and mitigating sources of leakage. (1%)Valentin Hartmann; Léo Meynent; Maxime Peyrard; Dimitrios Dimitriadis; Shruti Tople; Robert West
2022-09-17
Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models. (99%)Raphael Olivier; Hadi Abdullah; Bhiksha Raj
pFedDef: Defending Grey-Box Attacks for Personalized Federated Learning. (98%)Taejin Kim; Shubhranshu Singh; Nikhil Madaan; Carlee Joe-Wong
A study on the deviations in performance of FNNs and CNNs in the realm of grayscale adversarial images. (4%)Durga Shree Nagabushanam; Steve Mathew; Chiranji Lal Chowdhary
2022-09-16
Robust Ensemble Morph Detection with Domain Generalization. (99%)Hossein Kashiani; Shoaib Meraj Sami; Sobhan Soleymani; Nasser M. Nasrabadi
A Large-scale Multiple-objective Method for Black-box Attack against Object Detection. (99%)Siyuan Liang; Longkang Li; Yanbo Fan; Xiaojun Jia; Jingzhi Li; Baoyuan Wu; Xiaochun Cao
Enhance the Visual Representation via Discrete Adversarial Training. (97%)Xiaofeng Mao; Yuefeng Chen; Ranjie Duan; Yao Zhu; Gege Qi; Shaokai Ye; Xiaodan Li; Rong Zhang; Hui Xue
Model Inversion Attacks against Graph Neural Networks. (92%)Zaixi Zhang; Qi Liu; Zhenya Huang; Hao Wang; Chee-Kong Lee; Enhong Chen
PointCAT: Contrastive Adversarial Training for Robust Point Cloud Recognition. (62%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Hang Zhou; Weiming Zhang; Kui Zhang; Gang Hua; Nenghai Yu
Cascading Failures in Power Grids. (33%)Rounak Meyur
Dataset Inference for Self-Supervised Models. (16%)Adam Dziedzic; Haonan Duan; Muhammad Ahmad Kaleem; Nikita Dhawan; Jonas Guan; Yannis Cattan; Franziska Boenisch; Nicolas Papernot
On the Robustness of Graph Neural Diffusion to Topology Perturbations. (15%)Yang Song; Qiyu Kang; Sijie Wang; Zhao Kai; Wee Peng Tay
A Systematic Evaluation of Node Embedding Robustness. (11%)Alexandru Mara; Jefrey Lijffijt; Stephan Günnemann; Bie Tijl De
2022-09-15
Improving Robust Fairness via Balance Adversarial Training. (99%)Chunyu Sun; Chenye Xu; Chengyuan Yao; Siyuan Liang; Yichao Wu; Ding Liang; XiangLong Liu; Aishan Liu
A Light Recipe to Train Robust Vision Transformers. (98%)Edoardo Debenedetti; Vikash Sehwag; Prateek Mittal
Part-Based Models Improve Adversarial Robustness. (92%)Chawin Sitawarin; Kornrapat Pongmala; Yizheng Chen; Nicholas Carlini; David Wagner
Explicit Tradeoffs between Adversarial and Natural Distributional Robustness. (80%)Mazda Moayeri; Kiarash Banihashem; Soheil Feizi
Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization. (80%)Omar Montasser; Steve Hanneke; Nathan Srebro
Defending Root DNS Servers Against DDoS Using Layered Defenses. (15%)A S M Rizvi; Jelena Mirkovic; John Heidemann; Wesley Hardaker; Robert Story
BadRes: Reveal the Backdoors through Residual Connection. (2%)Mingrui He; Tianyu Chen; Haoyi Zhou; Shanghang Zhang; Jianxin Li
Adversarial Cross-View Disentangled Graph Contrastive Learning. (1%)Qianlong Wen; Zhongyu Ouyang; Chunhui Zhang; Yiyue Qian; Yanfang Ye; Chuxu Zhang
Towards Improving Calibration in Object Detection Under Domain Shift. (1%)Muhammad Akhtar Munir; Muhammad Haris Khan; M. Saquib Sarfraz; Mohsen Ali
2022-09-14
Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries. (99%)Alexander Cann; Ian Colbert; Ihab Amer
PointACL: Adversarial Contrastive Learning for Robust Point Clouds Representation under Adversarial Attack. (99%)Junxuan Huang; Yatong An; Lu Cheng; Bai Chen; Junsong Yuan; Chunming Qiao
Certified Robustness to Word Substitution Ranking Attack for Neural Ranking Models. (99%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Wei Chen; Yixing Fan; Rijke Maarten de; Xueqi Cheng
Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models. (97%)Jiawei Liu; Yangyang Kang; Di Tang; Kaisong Song; Changlong Sun; Xiaofeng Wang; Wei Lu; Xiaozhong Liu
On the interplay of adversarial robustness and architecture components: patches, convolution and attention. (67%)Francesco Croce; Matthias Hein
M^4I: Multi-modal Models Membership Inference. (54%)Pingyi Hu; Zihan Wang; Ruoxi Sun; Hu Wang; Minhui Xue
Finetuning Pretrained Vision-Language Models with Correlation Information Bottleneck for Robust Visual Question Answering. (12%)Jingjing Jiang; Ziyi Liu; Nanning Zheng
Robust Constrained Reinforcement Learning. (9%)Yue Wang; Fei Miao; Shaofeng Zou
2022-09-13
Adversarial Coreset Selection for Efficient Robust Training. (99%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
TSFool: Crafting High-quality Adversarial Time Series through Multi-objective Optimization to Fool Recurrent Neural Network Classifiers. (99%)Yanyun Wang; Dehui Du; Yuanhao Liu
PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models. (92%)William Hackett; Stefan Trawicki; Zhengxin Yu; Neeraj Suri; Peter Garraghan
Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation. (78%)Maksym Yatsura; Kaspar Sakmann; N. Grace Hua; Matthias Hein; Jan Hendrik Metzen
Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. (68%)Hussain Hussain; Meng Cao; Sandipan Sikdar; Denis Helic; Elisabeth Lex; Markus Strohmaier; Roman Kern
ADMM based Distributed State Observer Design under Sparse Sensor Attacks. (22%)Vinaya Mary Prinse; Rachel Kalpana Kalaimani
A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game. (15%)Ke Ma; Qianqian Xu; Jinshan Zeng; Guorong Li; Xiaochun Cao; Qingming Huang
Defense against Privacy Leakage in Federated Learning. (12%)Jing Wu; Munawar Hayat; Mingyi Zhou; Mehrtash Harandi
Federated Learning based on Defending Against Data Poisoning Attacks in IoT. (1%)Jiayin Li; Wenzhong Guo; Xingshuo Han; Jianping Cai; Ximeng Liu
2022-09-12
Adaptive Perturbation Generation for Multiple Backdoors Detection. (95%)Yuhang Wang; Huafeng Shi; Rui Min; Ruijia Wu; Siyuan Liang; Yichao Wu; Ding Liang; Aishan Liu
CARE: Certifiably Robust Learning with Reasoning via Variational Inference. (75%)Jiawei Zhang; Linyi Li; Ce Zhang; Bo Li
Sample Complexity of an Adversarial Attack on UCB-based Best-arm Identification Policy. (69%)Varsha Pendyala
Boosting Robustness Verification of Semantic Feature Neighborhoods. (54%)Anan Kabaha; Dana Drachsler-Cohen
Semantic-Preserving Adversarial Code Comprehension. (1%)Yiyang Li; Hongqiu Wu; Hai Zhao
Holistic Segmentation. (1%)Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Nassir Navab; Benjamin Busam; Federico Tombari
Class-Level Logit Perturbation. (1%)Mengyang Li; Fengguang Su; Ou Wu; Ji Zhang
2022-09-11
Resisting Deep Learning Models Against Adversarial Attack Transferability via Feature Randomization. (99%)Ehsan Nowroozi; Mohammadreza Mohammadi; Pargol Golmohammadi; Yassine Mekdad; Mauro Conti; Selcuk Uluagac
Generate novel and robust samples from data: accessible sharing without privacy concerns. (5%)David Banh; Alan Huang
2022-09-10
Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense. (99%)Bowen Peng; Bo Peng; Jie Zhou; Jianyue Xie; Li Liu
2022-09-09
The Space of Adversarial Strategies. (99%)Ryan Sheatsley; Blaine Hoak; Eric Pauley; Patrick McDaniel
Defend Data Poisoning Attacks on Voice Authentication. (54%)Ke Li; Cameron Baird; Dan Lin
Robust-by-Design Classification via Unitary-Gradient Neural Networks. (41%)Fabio Brau; Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled Membership Inference. (10%)Hanzhou Wu
Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System. (1%)Xin Li; Yao Qiang; Chengyin Li; Sijia Liu; Dongxiao Zhu
2022-09-08
Incorporating Locality of Images to Generate Targeted Transferable Adversarial Examples. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Evaluating the Security of Aircraft Systems. (92%)Edan Habler; Ron Bitton; Asaf Shabtai
Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks. (62%)Chulin Xie; Yunhui Long; Pin-Yu Chen; Bo Li
A Survey of Recent Advances in Deep Learning Models for Detecting Malware in Desktop and Mobile Platforms. (1%)Pascal Maniriho; Abdun Naser Mahmood; Mohammad Jabed Morshed Chowdhury
FADE: Enabling Large-Scale Federated Adversarial Training on Resource-Constrained Edge Devices. (1%)Minxue Tang; Jianyi Zhang; Mingyuan Ma; Louis DiValentin; Aolin Ding; Amin Hassanzadeh; Hai Li; Yiran Chen
2022-09-07
On the Transferability of Adversarial Examples between Encrypted Models. (99%)Miki Tanaka; Isao Echizen; Hitoshi Kiya
Securing the Spike: On the Transferabilty and Security of Spiking Neural Networks to Adversarial Examples. (99%)Nuo Xu; Kaleel Mahmood; Haowen Fang; Ethan Rathbun; Caiwen Ding; Wujie Wen
Reward Delay Attacks on Deep Reinforcement Learning. (70%)Anindya Sarkar; Jiarui Feng; Yevgeniy Vorobeychik; Christopher Gill; Ning Zhang
Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems. (47%)Sahar Abdelnabi; Mario Fritz
Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. (15%)Wai Man Si; Michael Backes; Jeremy Blackburn; Cristofaro Emiliano De; Gianluca Stringhini; Savvas Zannettou; Yand Zhang
Physics-Guided Adversarial Machine Learning for Aircraft Systems Simulation. (1%)Houssem Ben Braiek; Thomas Reid; Foutse Khomh
Hardware faults that matter: Understanding and Estimating the safety impact of hardware faults on object detection DNNs. (1%)Syed Qutub; Florian Geissler; Yang Peng; Ralf Grafe; Michael Paulitsch; Gereon Hinz; Alois Knoll
MalDetConv: Automated Behaviour-based Malware Detection Framework Based on Natural Language Processing and Deep Learning Techniques. (1%)Pascal Maniriho; Abdun Naser Mahmood; Mohammad Jabed Morshed Chowdhury
2022-09-06
Instance Attack:An Explanation-based Vulnerability Analysis Framework Against DNNs for Malware Detection. (99%)Sun RuiJin; Guo ShiZe; Guo JinHong; Xing ChangYou; Yang LuMing; Guo Xi; Pan ZhiSong
Bag of Tricks for FGSM Adversarial Training. (96%)Zichao Li; Li Liu; Zeyu Wang; Yuyin Zhou; Cihang Xie
Improving Robustness to Out-of-Distribution Data by Frequency-based Augmentation. (82%)Koki Mukai; Soichiro Kumano; Toshihiko Yamasaki
Defending Against Backdoor Attack on Graph Nerual Network by Explainability. (80%)Bingchen Jiang; Zhao Li
MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World. (54%)Hua Ma; Yinshan Li; Yansong Gao; Zhi Zhang; Alsharif Abuadbba; Anmin Fu; Said F. Al-Sarawi; Nepal Surya; Derek Abbott
Multimodal contrastive learning for remote sensing tasks. (1%)Umangi Jain; Alex Wilson; Varun Gulshan
Annealing Optimization for Progressive Learning with Stochastic Approximation. (1%)Christos Mavridis; John Baras
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps. (1%)Alireza Ganjdanesh; Shangqian Gao; Heng Huang
A Survey of Machine Unlearning. (1%)Thanh Tam Nguyen; Thanh Trung Huynh; Phi Le Nguyen; Alan Wee-Chung Liew; Hongzhi Yin; Quoc Viet Hung Nguyen
2022-09-05
Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples. (98%)Hezekiah J. Branch; Jonathan Rodriguez Cefalu; Jeremy McHugh; Leyla Hujer; Aditya Bahl; Daniel del Castillo Iglesias; Ron Heichman; Ramesh Darwishi
White-Box Adversarial Policies in Deep Reinforcement Learning. (98%)Stephen Casper; Dylan Hadfield-Menell; Gabriel Kreiman
"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution. (69%)Yuyou Gan; Yuhao Mao; Xuhong Zhang; Shouling Ji; Yuwen Pu; Meng Han; Jianwei Yin; Ting Wang
Adversarial Detection: Attacking Object Detection in Real Time. (64%)Han Wu; Syed Yunas; Sareh Rowlands; Wenjie Ruan; Johan Wahlstrom
PromptAttack: Prompt-based Attack for Language Models via Gradient Search. (16%)Yundi Shi; Piji Li; Changchun Yin; Zhaoyang Han; Lu Zhou; Zhe Liu
Federated Zero-Shot Learning for Visual Recognition. (2%)Zhi Chen; Yadan Luo; Sen Wang; Jingjing Li; Zi Huang
Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training. (2%)Derek Everett; Andre T. Nguyen; Luke E. Richards; Edward Raff
2022-09-04
An Adaptive Black-box Defense against Trojan Attacks (TrojDef). (98%)Guanxiong Liu; Abdallah Khreishah; Fatima Sharadgah; Issa Khalil
Hide & Seek: Seeking the (Un)-Hidden key in Provably-Secure Logic Locking Techniques. (11%)Satwik Patnaik; Nimisha Limaye; Ozgur Sinanoglu
Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles. (1%)Ayoosh Bansal; Simon Yu; Hunmin Kim; Bo Li; Naira Hovakimyan; Marco Caccamo; Lui Sha
2022-09-02
Adversarial Color Film: Effective Physical-World Attack to DNNs. (98%)Chengyin Hu; Weiwen Shi
Impact of Scaled Image on Robustness of Deep Neural Networks. (98%)Chengyin Hu; Weiwen Shi
Property inference attack; Graph neural networks; Privacy attacks and defense; Trustworthy machine learning. (95%)Xiuling Wang; Wendy Hui Wang
Impact of Colour Variation on Robustness of Deep Neural Networks. (92%)Chengyin Hu; Weiwen Shi
Scalable Adversarial Attack Algorithms on Influence Maximization. (68%)Lichao Sun; Xiaobin Rui; Wei Chen
Are Attribute Inference Attacks Just Imputation? (31%)Bargav Jayaraman; David Evans
Explainable AI for Android Malware Detection: Towards Understanding Why the Models Perform So Well? (9%)Yue Liu; Chakkrit Tantithamthavorn; Li Li; Yepang Liu
Revisiting Outer Optimization in Adversarial Training. (5%)Ali Dabouei; Fariborz Taherkhani; Sobhan Soleymani; Nasser M. Nasrabadi
2022-09-01
Adversarial for Social Privacy: A Poisoning Strategy to Degrade User Identity Linkage. (98%)Jiangli Shao; Yongqing Wang; Boshen Shi; Hao Gao; Huawei Shen; Xueqi Cheng
Universal Fourier Attack for Time Series. (12%)Elizabeth Coda; Brad Clymer; Chance DeSmet; Yijing Watkins; Michael Girard
2022-08-31
Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning. (99%)Zhiyuan He; Yijun Yang; Pin-Yu Chen; Qiang Xu; Tsung-Yi Ho
Unrestricted Adversarial Samples Based on Non-semantic Feature Clusters Substitution. (99%)MingWei Zhou; Xiaobing Pei
Membership Inference Attacks by Exploiting Loss Trajectory. (70%)Yiyong Liu; Zhengyu Zhao; Michael Backes; Yang Zhang
Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research. (13%)Zhibo Zhang; Hussam Al Hamadi; Ernesto Damiani; Chan Yeob Yeun; Fatma Taher
Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation. (1%)JoonHo Lee; Gyemin Lee
Vulnerability of Distributed Inverter VAR Control in PV Distributed Energy System. (1%)Bo Tu; Wen-Tai Li; Chau Yuen
2022-08-30
A Black-Box Attack on Optical Character Recognition Systems. (99%)Samet Bayram; Kenneth Barner
Robustness and invariance properties of image classifiers. (99%)Apostolos Modas
Solving the Capsulation Attack against Backdoor-based Deep Neural Network Watermarks by Reversing Triggers. (1%)Fangqi Li; Shilin Wang; Yun Zhu
Constraining Representations Yields Models That Know What They Don't Know. (1%)Joao Monteiro; Pau Rodriguez; Pierre-Andre Noel; Issam Laradji; David Vazquez
2022-08-29
Towards Adversarial Purification using Denoising AutoEncoders. (99%)Dvij Kalaria; Aritra Hazra; Partha Pratim Chakrabarti
Reducing Certified Regression to Certified Classification for General Poisoning Attacks. (54%)Zayd Hammoudeh; Daniel Lowd
Interpreting Black-box Machine Learning Models for High Dimensional Datasets. (1%)Md. Rezaul Karim; Md. Shajalal; Alex Graß; Till Döhmen; Sisay Adugna Chala; Christian Beecks; Stefan Decker
2022-08-28
Cross-domain Cross-architecture Black-box Attacks on Fine-tuned Models with Transferred Evolutionary Strategies. (99%)Yinghua Zhang; Yangqiu Song; Kun Bai; Qiang Yang
2022-08-27
Adversarial Robustness for Tabular Data through Cost and Utility Awareness. (99%)Klim Kireev; Bogdan Kulynych; Carmela Troncoso
SA: Sliding attack for synthetic speech detection with resistance to clipping and self-splicing. (99%)Deng JiaCheng; Dong Li; Yan Diqun; Wang Rangding; Zeng Jiaming
Overparameterized (robust) models from computational constraints. (13%)Sanjam Garg; Somesh Jha; Saeed Mahloujifar; Mohammad Mahmoody; Mingyuan Wang
TrojViT: Trojan Insertion in Vision Transformers. (11%)Mengxin Zheng; Qian Lou; Lei Jiang
RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems. (1%)Emna Baccour; Aiman Erbad; Amr Mohamed; Mounir Hamdi; Mohsen Guizani
2022-08-26
What Does the Gradient Tell When Attacking the Graph Structure. (69%)Zihan Liu; Ge Wang; Yun Luo; Stan Z. Li
Network-Level Adversaries in Federated Learning. (54%)Giorgio Severi; Matthew Jagielski; Gökberk Yar; Yuxuan Wang; Alina Oprea; Cristina Nita-Rotaru
ATTRITION: Attacking Static Hardware Trojan Detection Techniques Using Reinforcement Learning. (45%)Vasudev Gohil; Hao Guo; Satwik Patnaik; Jeyavijayan Rajendran
Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training. (4%)Zihui Wu; Haichang Gao; Bingqian Zhou; Xiaoyan Guo; Shudong Zhang
2022-08-25
Semantic Preserving Adversarial Attack Generation with Autoencoder and Genetic Algorithm. (99%)Xinyi Wang; Simon Yusuf Enoch; Dong Seong Kim
SNAP: Efficient Extraction of Private Properties with Poisoning. (86%)Harsh Chaudhari; John Abascal; Alina Oprea; Matthew Jagielski; Florian Tramèr; Jonathan Ullman
FuncFooler: A Practical Black-box Attack Against Learning-based Binary Code Similarity Detection Methods. (78%)Lichen Jia; Bowen Tang; Chenggang Wu; Zhe Wang; Zihan Jiang; Yuanming Lai; Yan Kang; Ning Liu; Jingfeng Zhang
Robust Prototypical Few-Shot Organ Segmentation with Regularized Neural-ODEs. (31%)Prashant Pandey; Mustafa Chasmai; Tanuj Sur; Brejesh Lall
Calibrated Selective Classification. (15%)Adam Fisch; Tommi Jaakkola; Regina Barzilay
XDRI Attacks - and - How to Enhance Resilience of Residential Routers. (4%)Philipp Jeitner; Haya Shulman; Lucas Teichmann; Michael Waidner
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning. (1%)Haodong Zhao; Wei Du; Fangqi Li; Peixuan Li; Gongshen Liu
2022-08-24
Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries. (99%)Dongbin Na; Sangwoo Ji; Jong Kim
Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps. (98%)Mohammadreza Amirian; Friedhelm Schwenker; Thilo Stadelmann
Attacking Neural Binary Function Detection. (98%)Joshua Bundt; Michael Davinroy; Ioannis Agadakos; Alina Oprea; William Robertson
A Perturbation Resistant Transformation and Classification System for Deep Neural Networks. (98%)Nathaniel Dean; Dilip Sarkar
Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation. (92%)Qiyuan Chen; Raed Al Kontar; Maher Nouiehed; Jessie Yang; Corey Lester
2022-08-23
Towards an Awareness of Time Series Anomaly Detection Models' Adversarial Vulnerability. (99%)Shahroz Tariq; Binh M. Le; Simon S. Woo
Adversarial Vulnerability of Temporal Feature Networks for Object Detection. (99%)Svetlana Pavlitskaya; Nikolai Polley; Michael Weber; J. Marius Zöllner
Transferability Ranking of Adversarial Examples. (99%)Mosh Levy; Yuval Elovici; Yisroel Mirsky
Auditing Membership Leakages of Multi-Exit Networks. (76%)Zheng Li; Yiyong Liu; Xinlei He; Ning Yu; Michael Backes; Yang Zhang
A Comprehensive Study of Real-Time Object Detection Networks Across Multiple Domains: A Survey. (13%)Elahe Arani; Shruthi Gowda; Ratnajit Mukherjee; Omar Magdy; Senthilkumar Kathiresan; Bahram Zonooz
Robust DNN Watermarking via Fixed Embedding Weights with Optimized Distribution. (10%)Benedetta Tondi; Andrea Costanzo; Mauro Barni
2022-08-22
Fight Fire With Fire: Reversing Skin Adversarial Examples by Multiscale Diffusive and Denoising Aggregation Mechanism. (99%)Yongwei Wang; Yuan Li; Zhiqi Shen
Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection. (98%)Simin Li; Huangxinxin Xu; Jiakai Wang; Aishan Liu; Fazhi He; Xianglong Liu; Dacheng Tao
Different Spectral Representations in Optimized Artificial Neural Networks and Brains. (93%)Richard C. Gerum; Cassidy Pirlot; Alona Fyshe; Joel Zylberberg
Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models. (87%)Xinlei He; Zheng Li; Weilin Xu; Cory Cornelius; Yang Zhang
BARReL: Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning. (86%)Eugene Bykovets; Yannick Metz; Mennatallah El-Assady; Daniel A. Keim; Joachim M. Buhmann
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN. (62%)Huy Phan; Cong Shi; Yi Xie; Tianfang Zhang; Zhuohang Li; Tianming Zhao; Jian Liu; Yan Wang; Yingying Chen; Bo Yuan
Toward Better Target Representation for Source-Free and Black-Box Domain Adaptation. (31%)Qucheng Peng; Zhengming Ding; Lingjuan Lyu; Lichao Sun; Chen Chen
Optimal Bootstrapping of PoW Blockchains. (1%)Ranvir Rana; Dimitris Karakostas; Sreeram Kannan; Aggelos Kiayias; Pramod Viswanath
2022-08-21
PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition. (99%)Jiachen Sun; Weili Nie; Zhiding Yu; Z. Morley Mao; Chaowei Xiao
Inferring Sensitive Attributes from Model Explanations. (56%)Vasisht Duddu; Antoine Boutet
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning. (10%)Kerem Ozfatura; Emre Ozfatura; Alptekin Kupcu; Deniz Gunduz
MockingBERT: A Method for Retroactively Adding Resilience to NLP Models. (4%)Jan Jezabek; Akash Singh
NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs. (1%)Yijun Tian; Chuxu Zhang; Zhichun Guo; Xiangliang Zhang; Nitesh V. Chawla
A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective. (1%)Chanwoo Park; Sangdoo Yun; Sanghyuk Chun
2022-08-20
Analyzing Adversarial Robustness of Vision Transformers against Spatial and Spectral Attacks. (86%)Gihyun Kim; Jong-Seok Lee
GAIROSCOPE: Injecting Data from Air-Gapped Computers to Nearby Gyroscopes. (33%)Mordechai Guri
Sensor Security: Current Progress, Research Challenges, and Future Roadmap. (10%)Anomadarshi Barua; Mohammad Abdullah Al Faruque
Evaluating Out-of-Distribution Detectors Through Adversarial Generation of Outliers. (5%)Sangwoong Yoon; Jinwon Choi; Yonghyeon Lee; Yung-Kyun Noh; Frank Chongwoo Park
Adversarial contamination of networks in the setting of vertex nomination: a new trimming method. (1%)Sheyda Peyman; Minh Tang; Vince Lyzinski
2022-08-19
Real-Time Robust Video Object Detection System Against Physical-World Adversarial Attacks. (99%)Husheng Han; Xing Hu; Kaidi Xu; Pucheng Dang; Ying Wang; Yongwei Zhao; Zidong Du; Qi Guo; Yanzhi Yang; Tianshi Chen
Gender Bias and Universal Substitution Adversarial Attacks on Grammatical Error Correction Systems for Automated Assessment. (92%)Vyas Raina; Mark Gales
Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models. (76%)Yulong Wang; Minghui Zhao; Shenghong Li; Xin Yuan; Wei Ni
A Novel Plug-and-Play Approach for Adversarially Robust Generalization. (54%)Deepak Maurya; Adarsh Barik; Jean Honorio
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability. (8%)Wei Huang; Xingyu Zhao; Gaojie Jin; Xiaowei Huang
UKP-SQuARE v2 Explainability and Adversarial Attacks for Trustworthy QA. (1%)Rachneet Sachdeva; Haritz Puerto; Tim Baumgärtner; Sewin Tariverdian; Hao Zhang; Kexin Wang; Hossain Shaikh Saadi; Leonardo F. R. Ribeiro; Iryna Gurevych
2022-08-18
Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries. (99%)Manaar Alam; Shubhajit Datta; Debdeep Mukhopadhyay; Arijit Mondal; Partha Pratim Chakrabarti
Enhancing Targeted Attack Transferability via Diversified Weight Pruning. (99%)Hung-Jui Wang; Yu-Yu Wu; Shang-Tse Chen
Enhancing Diffusion-Based Image Synthesis with Robust Classifier Guidance. (45%)Bahjat Kawar; Roy Ganz; Michael Elad
Reverse Engineering of Integrated Circuits: Tools and Techniques. (33%)Abhijitt Dhavlle
DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization. (10%)Anshul Nasery; Sravanti Addepalli; Praneeth Netrapalli; Prateek Jain
Discovering Bugs in Vision Models using Off-the-shelf Image Generation and Captioning. (3%)Olivia Wiles; Isabela Albuquerque; Sven Gowal
Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy. (2%)Wenqiang Ruan; Mingxin Xu; Wenjing Fang; Li Wang; Lei Wang; Weili Han
Profiler: Profile-Based Model to Detect Phishing Emails. (1%)Mariya Shmalko; Alsharif Abuadbba; Raj Gaire; Tingmin Wu; Hye-Young Paik; Surya Nepal
2022-08-17
Two Heads are Better than One: Robust Learning Meets Multi-branch Models. (99%)Dong Huang; Qingwen Bu; Yuhao Qing; Haowen Pi; Sen Wang; Heming Cui
An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks. (99%)Raz Lapid; Zvika Haramaty; Moshe Sipper
Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against Shadow-Based Adversarial Attacks. (98%)Andrew Wang; Wyatt Mayor; Ryan Smith; Gopal Nookula; Gregory Ditzler
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System. (70%)Abdur R. Shahid; Ahmed Imteaj; Peter Y. Wu; Diane A. Igoche; Tauhidul Alam
An Efficient Multi-Step Framework for Malware Packing Identification. (41%)Jong-Wouk Kim; Yang-Sae Moon; Mi-Jung Choi
An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models. (26%)Jihyeon Hyeong; Jayoung Kim; Noseong Park; Sushil Jajodia
Efficient Detection and Filtering Systems for Distributed Training. (13%)Konstantinos Konstantinidis; Aditya Ramamoorthy
On the Privacy Effect of Data Enhancement via the Lens of Memorization. (10%)Xiao Li; Qiongxiu Li; Zhanhao Hu; Xiaolin Hu
ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach. (2%)Tong Zhou; Shaolei Ren; Xiaolin Xu
DF-Captcha: A Deepfake Captcha for Preventing Fake Calls. (1%)Yisroel Mirsky
Analyzing Robustness of End-to-End Neural Models for Automatic Speech Recognition. (1%)Goutham Rajendran; Wei Zou
2022-08-16
A Context-Aware Approach for Textual Adversarial Attack through Probability Difference Guided Beam Search. (82%)Huijun Liu; Jie Yu; Shasha Li; Jun Ma; Bin Ji
Imperceptible and Robust Backdoor Attack in 3D Point Cloud. (68%)Kuofeng Gao; Jiawang Bai; Baoyuan Wu; Mengxi Ya; Shu-Tao Xia
AutoCAT: Reinforcement Learning for Automated Exploration of Cache-Timing Attacks. (13%)Mulong Luo; Wenjie Xiong; Geunbae Lee; Yueying Li; Xiaomeng Yang; Amy Zhang; Yuandong Tian; Hsien-Hsin S. Lee; G. Edward Suh
2022-08-15
MENLI: Robust Evaluation Metrics from Natural Language Inference. (92%)Yanran Chen; Steffen Eger
Man-in-the-Middle Attack against Object Detection Systems. (86%)Han Wu; Sareh Rowlands; Johan Wahlstrom
Training-Time Attacks against k-Nearest Neighbors. (2%)Ara Vartanian; Will Rosenbaum; Scott Alfeld
CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models. (1%)Chuyen Nguyen; Caleb Morgan; Sudip Mittal
2022-08-14
A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design. (99%)Jialiang Sun; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection. (92%)Haibin Zheng; Haiyang Xiong; Haonan Ma; Guohan Huang; Jinyin Chen
InvisibiliTee: Angle-agnostic Cloaking from Person-Tracking Systems with a Tee. (92%)Yaxian Li; Bingqing Zhang; Guoping Zhao; Mingyu Zhang; Jiajun Liu; Ziwei Wang; Jirong Wen
Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning. (67%)Ashish Gupta; Tie Luo; Mao V. Ngo; Sajal K. Das
2022-08-13
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks. (99%)Tian Yu Liu; Yu Yang; Baharan Mirzasoleiman
Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification. (95%)Beini Xie; Heng Chang; Xin Wang; Tian Bian; Shiji Zhou; Daixin Wang; Zhiqiang Zhang; Wenwu Zhu
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer. (62%)Tong Wang; Yuan Yao; Feng Xu; Miao Xu; Shengwei An; Ting Wang
2022-08-12
Scale-free and Task-agnostic Attack: Generating Photo-realistic Adversarial Patterns with Patch Quilting Generator. (99%)Xiangbo Gao; Cheng Luo; Qinliang Lin; Weicheng Xie; Minmin Liu; Linlin Shen; Keerthy Kusumam; Siyang Song
MaskBlock: Transferable Adversarial Examples with Bayes Approach. (99%)Mingyuan Fan; Cen Chen; Ximeng Liu; Wenzhong Guo
Defensive Distillation based Adversarial Attacks Mitigation Method for Channel Estimation using Deep Learning Models in Next-Generation Wireless Networks. (98%)Ferhat Ozgur Catak; Murat Kuzlu; Evren Catak; Umit Cali; Ozgur Guler
A Knowledge Distillation-Based Backdoor Attack in Federated Learning. (93%)Yifan Wang; Wei Fan; Keke Yang; Naji Alhusaini; Jing Li
Unifying Gradients to Improve Real-world Robustness for Deep Networks. (92%)Yingwen Wu; Sizhe Chen; Kun Fang; Xiaolin Huang
Dropout is NOT All You Need to Prevent Gradient Leakage. (62%)Daniel Scheliga; Patrick Mäder; Marco Seeland
Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons. (2%)Mingyuan Fan; Yang Liu; Cen Chen; Ximeng Liu; Wenzhong Guo
PRIVEE: A Visual Analytic Workflow for Proactive Privacy Risk Inspection of Open Data. (2%)Kaustav Bhattacharjee; Akm Islam; Jaideep Vaidya; Aritra Dasgupta
2022-08-11
Diverse Generative Perturbations on Attention Space for Transferable Adversarial Attacks. (99%)Woo Jae Kim; Seunghoon Hong; Sung-Eui Yoon
General Cutting Planes for Bound-Propagation-Based Neural Network Verification. (68%)Huan Zhang; Shiqi Wang; Kaidi Xu; Linyi Li; Bo Li; Suman Jana; Cho-Jui Hsieh; J. Zico Kolter
On deceiving malware classification with section injection. (5%)Adeilson Antonio da Silva; Mauricio Pamplona Segundo
A Probabilistic Framework for Mutation Testing in Deep Neural Networks. (1%)Florian Tambon; Foutse Khomh; Giuliano Antoniol
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment. (1%)Jie Zhu; Leye Wang; Xiao Han
Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone. (1%)Aghiles Ait Messaoud; Sonia Ben Mokhtar; Vlad Nitu; Valerio Schiavoni
2022-08-10
Explaining Machine Learning DGA Detectors from DNS Traffic Data. (13%)Giorgio Piras; Maura Pintor; Luca Demetrio; Battista Biggio
A Sublinear Adversarial Training Algorithm. (3%)Yeqi Gao; Lianke Qin; Zhao Song; Yitan Wang
DVR: Micro-Video Recommendation Optimizing Watch-Time-Gain under Duration Bias. (1%)Yu Zheng; Chen Gao; Jingtao Ding; Lingling Yi; Depeng Jin; Yong Li; Meng Wang
2022-08-09
Adversarial Machine Learning-Based Anticipation of Threats Against Vehicle-to-Microgrid Services. (98%)Ahmed Omara; Burak Kantarci
Reducing Exploitability with Population Based Training. (67%)Pavel Czempin; Adam Gleave
Robust Machine Learning for Malware Detection over Time. (9%)Daniele Angioni; Luca Demetrio; Maura Pintor; Battista Biggio
2022-08-08
Robust and Imperceptible Black-box DNN Watermarking Based on Fourier Perturbation Analysis and Frequency Sensitivity Clustering. (75%)Yong Liu; Hanzhou Wu; Xinpeng Zhang
PerD: Perturbation Sensitivity-based Neural Trojan Detection Framework on NLP Applications. (67%)Diego Garcia-soto; Huili Chen; Farinaz Koushanfar
Adversarial robustness of $\beta-$VAE through the lens of local geometry. (47%)Asif Khan; Amos Storkey
AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning. (26%)Tianxing Zhang; Hanzhou Wu; Xiaofeng Lu; Guangling Sun
Abutting Grating Illusion: Cognitive Challenge to Neural Network Models. (1%)Jinyu Fan; Yi Zeng
Testing of Machine Learning Models with Limited Samples: An Industrial Vacuum Pumping Application. (1%)Ayan Chatterjee; Bestoun S. Ahmed; Erik Hallin; Anton Engman
2022-08-07
Federated Adversarial Learning: A Framework with Convergence Analysis. (80%)Xiaoxiao Li; Zhao Song; Jiaming Yang
Are Gradients on Graph Structure Reliable in Gray-box Attacks? (13%)Zihan Liu; Yun Luo; Lirong Wu; Siyuan Li; Zicheng Liu; Stan Z. Li
2022-08-06
Blackbox Attacks via Surrogate Ensemble Search. (99%)Zikui Cai; Chengyu Song; Srikanth Krishnamurthy; Amit Roy-Chowdhury; M. Salman Asif
On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning. (22%)Congyu Fang; Hengrui Jia; Anvith Thudi; Mohammad Yaghini; Christopher A. Choquette-Choo; Natalie Dullerud; Varun Chandrasekaran; Nicolas Papernot
Preventing or Mitigating Adversarial Supply Chain Attacks; a legal analysis. (3%)Kaspar Rosager Ludvigsen; Shishir Nagaraja; Angela Daly
2022-08-05
Adversarial Robustness of MR Image Reconstruction under Realistic Perturbations. (73%)Jan Nikolas Morshuis; Sergios Gatidis; Matthias Hein; Christian F. Baumgartner
Data-free Backdoor Removal based on Channel Lipschitzness. (64%)Runkai Zheng; Rongjun Tang; Jianze Li; Li Liu
Lethal Dose Conjecture on Data Poisoning. (2%)Wenxiao Wang; Alexander Levine; Soheil Feizi
LCCDE: A Decision-Based Ensemble Framework for Intrusion Detection in The Internet of Vehicles. (1%)Li Yang; Abdallah Shami; Gary Stevens; Stephen De Rusett
Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks. (1%)Bernd Prach; Christoph H. Lampert
2022-08-04
Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification. (99%)Faris Almalik; Mohammad Yaqub; Karthik Nandakumar
2022-08-03
Spectrum Focused Frequency Adversarial Attacks for Automatic Modulation Classification. (99%)Sicheng Zhang; Jiarun Yu; Zhida Bao; Shiwen Mao; Yun Lin
Design of secure and robust cognitive system for malware detection. (99%)Sanket Shukla
A New Kind of Adversarial Example. (99%)Ali Borji
Adversarial Attacks on ASR Systems: An Overview. (98%)Xiao Zhang; Hao Tan; Xuan Huang; Denghui Zhang; Keke Tang; Zhaoquan Gu
Multiclass ASMA vs Targeted PGD Attack in Image Segmentation. (96%)Johnson Vo; Jiabao Xie; Sahil Patel
MOVE: Effective and Harmless Ownership Verification via Embedded External Features. (84%)Yiming Li; Linghui Zhu; Xiaojun Jia; Yang Bai; Yong Jiang; Shu-Tao Xia; Xiaochun Cao
Robust Graph Neural Networks using Weighted Graph Laplacian. (13%)Bharat Runwal; Vivek; Sandeep Kumar
2022-08-02
Adversarial Camouflage for Node Injection Attack on Graphs. (81%)Shuchang Tao; Qi Cao; Huawei Shen; Yunfan Wu; Liang Hou; Xueqi Cheng
Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry. (2%)Mark Penrod; Harrison Termotto; Varshini Reddy; Jiayu Yao; Finale Doshi-Velez; Weiwei Pan
SCFI: State Machine Control-Flow Hardening Against Fault Attacks. (1%)Pascal Nasahl; Martin Unterguggenberger; Rishub Nagpal; Robert Schilling; David Schrammel; Stefan Mangard
2022-08-01
GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction. (98%)Jiacheng Zhu; Jielin Qiu; Zhuolin Yang; Douglas Weber; Michael A. Rosenberg; Emerson Liu; Bo Li; Ding Zhao
Understanding Adversarial Robustness of Vision Transformers via Cauchy Problem. (81%)Zheng Wang; Wenjie Ruan
On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel. (75%)Shubhi Shukla; Manaar Alam; Sarani Bhattacharya; Debdeep Mukhopadhyay; Pabitra Mitra
Attacking Adversarial Defences by Smoothing the Loss Landscape. (26%)Panagiotis Eustratiadis; Henry Gouk; Da Li; Timothy Hospedales
2022-07-31
DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning. (99%)Mohammad Hossein Samavatian; Saikat Majumdar; Kristin Barber; Radu Teodorescu
Robust Real-World Image Super-Resolution against Adversarial Attacks. (99%)Jiutao Yue; Haofeng Li; Pengxu Wei; Guanbin Li; Liang Lin
Is current research on adversarial robustness addressing the right problem? (97%)Ali Borji
2022-07-30
enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks. (5%)Alessio Colucci; Andreas Steininger; Muhammad Shafique
CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for Energy-Efficient Low-precision Deep Convolutional Neural Networks. (2%)Muhammad Abdullah Hanif; Giuseppe Maria Sarda; Alberto Marchisio; Guido Masera; Maurizio Martina; Muhammad Shafique
2022-07-29
Robust Trajectory Prediction against Adversarial Attacks. (99%)Yulong Cao; Danfei Xu; Xinshuo Weng; Zhuoqing Mao; Anima Anandkumar; Chaowei Xiao; Marco Pavone
Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis. (56%)Tao Li; Haozhe Lei; Quanyan Zhu
2022-07-28
Pro-tuning: Unified Prompt Tuning for Vision Tasks. (1%)Xing Nie; Bolin Ni; Jianlong Chang; Gaomeng Meng; Chunlei Huo; Zhaoxiang Zhang; Shiming Xiang; Qi Tian; Chunhong Pan
2022-07-27
Point Cloud Attacks in Graph Spectral Domain: When 3D Geometry Meets Graph Signal Processing. (96%)Daizong Liu; Wei Hu; Xin Li
Look Closer to Your Enemy: Learning to Attack via Teacher-student Mimicking. (91%)Mingjie Wang; Zhiqing Tang; Sirui Li; Dingwen Xiao
Membership Inference Attacks via Adversarial Examples. (73%)Hamid Jalalzai; Elie Kadoche; Rémi Leluc; Vincent Plassier
Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips. (69%)Jiawang Bai; Kuofeng Gao; Dihong Gong; Shu-Tao Xia; Zhifeng Li; Wei Liu
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking. (47%)Abhishek Chakraborty; Daniel Xing; Yuntao Liu; Ankur Srivastava
Label-Only Membership Inference Attack against Node-Level Graph Neural Networks. (22%)Mauro Conti; Jiaxin Li; Stjepan Picek; Jing Xu
Generative Steganography Network. (1%)Ping Wei; Sheng Li; Xinpeng Zhang; Ge Luo; Zhenxing Qian; Qing Zhou
2022-07-26
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity. (99%)Martin Gubri; Maxime Cordy; Mike Papadakis; Yves Le Traon; Koushik Sen
Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception. (99%)Rui Duan; Zhe Qu; Shangqing Zhao; Leah Ding; Yao Liu; Zhuo Lu
Generative Extraction of Audio Classifiers for Speaker Identification. (73%)Tejumade Afonja; Lucas Bourtoule; Varun Chandrasekaran; Sageev Oore; Nicolas Papernot
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. (8%)Tilman Räuker; Anson Ho; Stephen Casper; Dylan Hadfield-Menell
2022-07-25
$p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations. (99%)Adam Dziedzic; Stephan Rabanser; Mohammad Yaghini; Armin Ale; Murat A. Erdogdu; Nicolas Papernot
Improving Adversarial Robustness via Mutual Information Estimation. (99%)Dawei Zhou; Nannan Wang; Xinbo Gao; Bo Han; Xiaoyu Wang; Yibing Zhan; Tongliang Liu
SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness. (99%)Jindong Gu; Hengshuang Zhao; Volker Tresp; Philip Torr
Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer. (75%)Yingyi Chen; Xi Shen; Yahui Liu; Qinghua Tao; Johan A. K. Suykens
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment. (9%)Tian Liu; Xueyang Hu; Tao Shu
Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning. (2%)Xinlei He; Hongbin Liu; Neil Zhenqiang Gong; Yang Zhang
2022-07-24
Versatile Weight Attack via Flipping Limited Bits. (86%)Jiawang Bai; Baoyuan Wu; Zhifeng Li; Shu-tao Xia
Can we achieve robustness from data alone? (82%)Nikolaos Tsilivis; Jingtong Su; Julia Kempe
Proving Common Mechanisms Shared by Twelve Methods of Boosting Adversarial Transferability. (69%)Quanshi Zhang; Xin Wang; Jie Ren; Xu Cheng; Shuyun Lin; Yisen Wang; Xiangming Zhu
Privacy Against Inference Attacks in Vertical Federated Learning. (2%)Borzoo Rassouli; Morteza Varasteh; Deniz Gunduz
Semantic-guided Multi-Mask Image Harmonization. (1%)Xuqian Ren; Yifan Liu
2022-07-22
Do Perceptually Aligned Gradients Imply Adversarial Robustness? (99%)Roy Ganz; Bahjat Kawar; Michael Elad
Provable Defense Against Geometric Transformations. (47%)Rem Yang; Jacob Laurel; Sasa Misailovic; Gagandeep Singh
Aries: Efficient Testing of Deep Neural Networks via Labeling-Free Accuracy Estimation. (41%)Qiang Hu; Yuejun Guo; Xiaofei Xie; Maxime Cordy; Lei Ma; Mike Papadakis; Yves Le Traon
Learning from Multiple Annotator Noisy Labels via Sample-wise Label Fusion. (1%)Zhengqi Gao; Fan-Keng Sun; Mingran Yang; Sucheng Ren; Zikai Xiong; Marc Engeler; Antonio Burazer; Linda Wildling; Luca Daniel; Duane S. Boning
2022-07-21
Synthetic Dataset Generation for Adversarial Machine Learning Research. (99%)Xiruo Liu; Shibani Singh; Cory Cornelius; Colin Busho; Mike Tan; Anindya Paul; Jason Martin
Careful What You Wish For: on the Extraction of Adversarially Trained Models. (99%)Kacem Khaled; Gabriela Nicolescu; Felipe Gohring de Magalhães
Rethinking Textual Adversarial Defense for Pre-trained Language Models. (99%)Jiayi Wang; Rongzhou Bao; Zhuosheng Zhang; Hai Zhao
AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance. (98%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
Knowledge-enhanced Black-box Attacks for Recommendations. (92%)Jingfan Chen; Wenqi Fan; Guanghui Zhu; Xiangyu Zhao; Chunfeng Yuan; Qing Li; Yihua Huang
Towards Efficient Adversarial Training on Vision Transformers. (92%)Boxi Wu; Jindong Gu; Zhifeng Li; Deng Cai; Xiaofei He; Wei Liu
Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation. (87%)Tong Wu; Tianhao Wang; Vikash Sehwag; Saeed Mahloujifar; Prateek Mittal
Contrastive Self-Supervised Learning Leads to Higher Adversarial Susceptibility. (83%)Rohit Gupta; Naveed Akhtar; Ajmal Mian; Mubarak Shah
A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity. (1%)Michael Weiss; André García Gómez; Paolo Tonella
2022-07-20
Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness. (99%)Sekitoshi Kanai; Shin'ya Yamaguchi; Masanori Yamada; Hiroshi Takahashi; Kentaro Ohno; Yasutoshi Ida
Illusionary Attacks on Sequential Decision Makers and Countermeasures. (98%)Tim Franzmeyer; João F. Henriques; Jakob N. Foerster; Philip H. S. Torr; Adel Bibi; Christian Schroeder de Witt
Test-Time Adaptation via Conjugate Pseudo-labels. (10%)Sachin Goyal; Mingjie Sun; Aditi Raghunathan; Zico Kolter
Malware Triage Approach using a Task Memory based on Meta-Transfer Learning Framework. (9%)Jinting Zhu; Julian Jang-Jaccard; Ian Welch; Harith Al-Sahaf; Seyit Camtepe
2022-07-19
Towards Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms. (99%)Linbo Liu; Youngsuk Park; Trong Nghia Hoang; Hilaf Hasson; Jun Huan
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. (41%)Zaixi Zhang; Xiaoyu Cao; Jinyuan Jia; Neil Zhenqiang Gong
Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond. (26%)Yuzheng Hu; Tianle Cai; Jinyong Shan; Shange Tang; Chaochao Cai; Ethan Song; Bo Li; Dawn Song
Assaying Out-Of-Distribution Generalization in Transfer Learning. (1%)Florian Wenzel; Andrea Dittadi; Peter Vincent Gehler; Carl-Johann Simon-Gabriel; Max Horn; Dominik Zietlow; David Kernert; Chris Russell; Thomas Brox; Bernt Schiele; Bernhard Schölkopf; Francesco Locatello
2022-07-18
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders. (99%)Zhenrui Yue; Huimin Zeng; Ziyi Kou; Lanyu Shang; Dong Wang
Prior-Guided Adversarial Initialization for Fast Adversarial Training. (99%)Xiaojun Jia; Yong Zhang; Xingxing Wei; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
Decorrelative Network Architecture for Robust Electrocardiogram Classification. (99%)Christopher Wiedeman; Ge Wang
Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence. (96%)Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov
Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations. (91%)Hashmat Shadab Malik; Shahina K Kunhimon; Muzammal Naseer; Salman Khan; Fahad Shahbaz Khan
Easy Batch Normalization. (69%)Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov
Adversarial Contrastive Learning via Asymmetric InfoNCE. (61%)Qiying Yu; Jieming Lou; Xianyuan Zhan; Qizhang Li; Wangmeng Zuo; Yang Liu; Jingjing Liu
Detection of Poisoning Attacks with Anomaly Detection in Federated Learning for Healthcare Applications: A Machine Learning Approach. (22%)Ali Raza; Shujun Li; Kim-Phuc Tran; Ludovic Koehl
A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling. (10%)Xudong Pan; Qifan Xiao; Mi Zhang; Min Yang
Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification. (2%)Sarwan Ali; Bikram Sahoo; Alexander Zelikovskiy; Pin-Yu Chen; Murray Patterson
2022-07-17
Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal. (99%)Xinwei Liu; Jian Liu; Yang Bai; Jindong Gu; Tao Chen; Xiaojun Jia; Xiaochun Cao
Threat Model-Agnostic Adversarial Defense using Diffusion Models. (99%)Tsachi Blau; Roy Ganz; Bahjat Kawar; Alex Bronstein; Michael Elad
Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game. (96%)Xiao-Shan Gao; Shuang Liu; Lijia Yu
Automated Repair of Neural Networks. (16%)Dor Cohen; Ofer Strichman
2022-07-16
DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking. (99%)Xiangyu Yin; Wenjie Ruan; Jonathan Fieldsend
Certified Neural Network Watermarks with Randomized Smoothing. (1%)Arpit Bansal; Ping-yeh Chiang; Michael Curry; Rajiv Jain; Curtis Wigington; Varun Manjunatha; John P Dickerson; Tom Goldstein
Progress and limitations of deep networks to recognize objects in unusual poses. (1%)Amro Abbas; Stéphane Deny
Exploring The Resilience of Control Execution Skips against False Data Injection Attacks. (1%)Ipsita Koley; Sunandan Adhikary; Soumyajit Dey
MixTailor: Mixed Gradient Aggregation for Robust Learning Against Tailored Attacks. (1%)Ali Ramezani-Kebrya; Iman Tabrizian; Fartash Faghri; Petar Popovski
2022-07-15
Towards the Desirable Decision Boundary by Moderate-Margin Adversarial Training. (99%)Xiaoyu Liang; Yaguan Qian; Jianchang Huang; Xiang Ling; Bin Wang; Chunming Wu; Wassim Swaileh
CARBEN: Composite Adversarial Robustness Benchmark. (98%)Lei Hsiung; Yun-Yun Tsai; Pin-Yu Chen; Tsung-Yi Ho
Masked Spatial-Spectral Autoencoders Are Excellent Hyperspectral Defenders. (68%)Jiahao Qi; Zhiqiang Gong; Xingyue Liu; Kangcheng Bin; Chen Chen; Yongqian Li; Wei Xue; Yu Zhang; Ping Zhong
Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection. (10%)Svetlana Pavlitskaya; Bianca-Marina Codău; J. Marius Zöllner
PASS: Parameters Audit-based Secure and Fair Federated Learning Scheme against Free Rider. (5%)Jianhua Wang
3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models. (1%)Ronghui Mu; Wenjie Ruan; Leandro S. Marcolino; Qiang Ni
2022-07-14
Adversarial Examples for Model-Based Control: A Sensitivity Analysis. (98%)Po-han Li; Ufuk Topcu; Sandeep P. Chinchali
Adversarial Attacks on Monocular Pose Estimation. (98%)Hemang Chawla; Arnav Varma; Elahe Arani; Bahram Zonooz
Provably Adversarially Robust Nearest Prototype Classifiers. (83%)Václav Voráček; Matthias Hein
Improving Task-free Continual Learning by Distributionally Robust Memory Evolution. (70%)Zhenyi Wang; Li Shen; Le Fang; Qiuling Suo; Tiehang Duan; Mingchen Gao
RSD-GAN: Regularized Sobolev Defense GAN Against Speech-to-Text Adversarial Attacks. (67%)Mohammad Esmaeilpour; Nourhene Chaalia; Patrick Cardinal
Sound Randomized Smoothing in Floating-Point Arithmetics. (50%)Václav Voráček; Matthias Hein
Audio-guided Album Cover Art Generation with Genetic Algorithms. (38%)James Marien; Sam Leroux; Bart Dhoedt; Cedric De Boom
Distance Learner: Incorporating Manifold Prior to Model Training. (16%)Aditya Chetan; Nipun Kwatra
Active Data Pattern Extraction Attacks on Generative Language Models. (11%)Bargav Jayaraman; Esha Ghosh; Huseyin Inan; Melissa Chase; Sambuddha Roy; Wei Dai
Contrastive Adapters for Foundation Model Group Robustness. (1%)Michael Zhang; Christopher Ré
Lipschitz Bound Analysis of Neural Networks. (1%)Sarosij Bose
2022-07-13
Perturbation Inactivation Based Adversarial Defense for Face Recognition. (99%)Min Ren; Yuhao Zhu; Yunlong Wang; Zhenan Sun
On the Robustness of Bayesian Neural Networks to Adversarial Attacks. (93%)Luca Bortolussi; Ginevra Carbone; Luca Laurenti; Andrea Patane; Guido Sanguinetti; Matthew Wicker
Adversarially-Aware Robust Object Detector. (91%)Ziyi Dong; Pengxu Wei; Liang Lin
PIAT: Physics Informed Adversarial Training for Solving Partial Differential Equations. (15%)Simin Shekarpaz; Mohammad Azizmalayeri; Mohammad Hossein Rohban
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. (10%)Subash Neupane; Jesse Ables; William Anderson; Sudip Mittal; Shahram Rahimi; Ioana Banicescu; Maria Seale
Interactive Machine Learning: A State of the Art Review. (4%)Natnael A. Wondimu; Cédric Buche; Ubbo Visser
Sample-dependent Adaptive Temperature Scaling for Improved Calibration. (2%)Tom Joy; Francesco Pinto; Ser-Nam Lim; Philip H. S. Torr; Puneet K. Dokania
DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment. (1%)Ahmed Haj Yahmed; Houssem Ben Braiek; Foutse Khomh; Sonia Bouzidi; Rania Zaatour
2022-07-12
Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information. (99%)Jiebao Zhang; Wenhua Qian; Rencan Nie; Jinde Cao; Dan Xu
Adversarial Robustness Assessment of NeuroEvolution Approaches. (99%)Inês Valentim; Nuno Lourenço; Nuno Antunes
Frequency Domain Model Augmentation for Adversarial Attack. (99%)Yuyang Long; Qilong Zhang; Boheng Zeng; Lianli Gao; Xianglong Liu; Jian Zhang; Jingkuan Song
Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware. (92%)Luca Demetrio; Battista Biggio; Fabio Roli
Game of Trojans: A Submodular Byzantine Approach. (87%)Dinuka Sahabandu; Arezoo Rajabi; Luyao Niu; Bo Li; Bhaskar Ramasubramanian; Radha Poovendran
Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust Deep Neural Architectures. (84%)Jia Liu; Ran Cheng; Yaochu Jin
Certified Adversarial Robustness via Anisotropic Randomized Smoothing. (76%)Hanbin Hong; Yuan Hong
RelaxLoss: Defending Membership Inference Attacks without Losing Utility. (26%)Dingfan Chen; Ning Yu; Mario Fritz
Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations. (5%)Satoshi Munakata; Caterina Urban; Haruki Yokoyama; Koji Yamamoto; Kazuki Munakata
Markov Decision Process For Automatic Cyber Defense. (4%)Simon Yusuf Enoch; Dong Seong Kim
Estimating Test Performance for AI Medical Devices under Distribution Shift with Conformal Prediction. (1%)Charles Lu; Syed Rakin Ahmed; Praveer Singh; Jayashree Kalpathy-Cramer
Backdoor Attacks on Crowd Counting. (1%)Yuhua Sun; Tailai Zhang; Xingjun Ma; Pan Zhou; Jian Lou; Zichuan Xu; Xing Di; Yu Cheng; Lichao
2022-07-11
Statistical Detection of Adversarial examples in Blockchain-based Federated Forest In-vehicle Network Intrusion Detection Systems. (99%)Ibrahim Aliyu; Engelenburg Selinde van; Muhammed Bashir Muazu; Jinsul Kim; Chang Gyoon Lim
RUSH: Robust Contrastive Learning via Randomized Smoothing. (98%)Yijiang Pang; Boyang Liu; Jiayu Zhou
Physical Passive Patch Adversarial Attacks on Visual Odometry Systems. (98%)Yaniv Nemcovsky; Matan Yaakoby; Alex M. Bronstein; Chaim Baskin
Towards Effective Multi-Label Recognition Attacks via Knowledge Graph Consistency. (83%)Hassan Mahmood; Ehsan Elhamifar
"Why do so?" -- A Practical Perspective on Machine Learning Security. (64%)Kathrin Grosse; Lukas Bieringer; Tarek Richard Besold; Battista Biggio; Katharina Krombholz
Susceptibility of Continual Learning Against Adversarial Attacks. (45%)Hikmat Khan; Pir Masoom Shah; Syed Farhan Alam Zaidi; Saif ul Islam
Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches. (22%)Zhiyuan Cheng; James Liang; Hongjun Choi; Guanhong Tao; Zhiwen Cao; Dongfang Liu; Xiangyu Zhang
Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation. (1%)Zhun Zhong; Yuyang Zhao; Gim Hee Lee; Nicu Sebe
2022-07-10
One-shot Neural Backdoor Erasing via Adversarial Weight Masking. (33%)Shuwen Chai; Jinghui Chen
Hiding Your Signals: A Security Analysis of PPG-based Biometric Authentication. (4%)Lin Li; Chao Chen; Lei Pan; Yonghang Tai; Jun Zhang; Yang Xiang
2022-07-09
Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features. (98%)Taha Belkhouja; Janardhan Rao Doppa
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain. (98%)Chang Yue; Peizhuo Lv; Ruigang Liang; Kai Chen
Dynamic Time Warping based Adversarial Framework for Time-Series Domain. (97%)Taha Belkhouja; Yan Yan; Janardhan Rao Doppa
Training Robust Deep Models for Time-Series Domain: Novel Algorithms and Theoretical Analysis. (67%)Taha Belkhouja; Yan Yan; Janardhan Rao Doppa
2022-07-08
Not all broken defenses are equal: The dead angles of adversarial accuracy. (99%)Raphael Olivier; Bhiksha Raj
Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization. (99%)Deyin Liu; Lin Wu; Lingqiao Liu; Haifeng Zhao; Farid Boussaid; Mohammed Bennamoun
Defense Against Multi-target Trojan Attacks. (80%)Haripriya Harikumar; Santu Rana; Kien Do; Sunil Gupta; Wei Zong; Willy Susilo; Svetha Venkastesh
Guiding the retraining of convolutional neural networks against adversarial inputs. (80%)Francisco Durán López; Silverio Martínez-Fernández; Michael Felderer; Xavier Franch
Online Evasion Attacks on Recurrent Models:The Power of Hallucinating the Future. (68%)Byunggill Joe; Insik Shin; Jihun Hamm
Models Out of Line: A Fourier Lens on Distribution Shift Robustness. (10%)Sara Fridovich-Keil; Brian R. Bartoldson; James Diffenderfer; Bhavya Kailkhura; Peer-Timo Bremer
A law of adversarial risk, interpolation, and label noise. (1%)Daniel Paleka; Amartya Sanyal
2022-07-07
On the Relationship Between Adversarial Robustness and Decision Region in Deep Neural Network. (99%)Seongjin Park; Haedong Jeong; Giyoung Jeon; Jaesik Choi
Harnessing Out-Of-Distribution Examples via Augmenting Content and Style. (11%)Zhuo Huang; Xiaobo Xia; Li Shen; Bo Han; Mingming Gong; Chen Gong; Tongliang Liu
CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships. (5%)Rebecca Roelofs; Liting Sun; Ben Caine; Khaled S. Refaat; Ben Sapp; Scott Ettinger; Wei Chai
2022-07-06
The Weaknesses of Adversarial Camouflage in Overhead Imagery. (83%)Adam Van Etten
Adversarial Robustness of Visual Dialog. (64%)Lu Yu; Verena Rieser
Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural Networks with Power Consumption Information. (54%)Cory Merkel
When does Bias Transfer in Transfer Learning? (10%)Hadi Salman; Saachi Jain; Andrew Ilyas; Logan Engstrom; Eric Wong; Aleksander Madry
Privacy-preserving Reflection Rendering for Augmented Reality. (2%)Yiqin Zhao; Sheng Wei; Tian Guo
Not All Models Are Equal: Predicting Model Transferability in a Self-challenging Fisher Space. (1%)Wenqi Shao; Xun Zhao; Yixiao Ge; Zhaoyang Zhang; Lei Yang; Xiaogang Wang; Ying Shan; Ping Luo
2022-07-05
Query-Efficient Adversarial Attack Based on Latin Hypercube Sampling. (99%)Dan Wang; Jiayu Lin; Yuan-Gen Wang
Defending against the Label-flipping Attack in Federated Learning. (98%)Najeeb Moharram Jebreel; Josep Domingo-Ferrer; David Sánchez; Alberto Blanco-Justicia
UniCR: Universally Approximated Certified Robustness via Randomized Smoothing. (93%)Hanbin Hong; Binghui Wang; Yuan Hong
PRoA: A Probabilistic Robustness Assessment against Functional Perturbations. (92%)Tianle Zhang; Wenjie Ruan; Jonathan E. Fieldsend
Learning to Accelerate Approximate Methods for Solving Integer Programming via Early Fixing. (38%)Longkang Li; Baoyuan Wu
Robustness Analysis of Video-Language Models Against Visual and Language Perturbations. (1%)Madeline C. Schiappa; Shruti Vyas; Hamid Palangi; Yogesh S. Rawat; Vibhav Vineet
Conflicting Interactions Among Protection Mechanisms for Machine Learning Models. (1%)Sebastian Szyller; N. Asokan
PoF: Post-Training of Feature Extractor for Improving Generalization. (1%)Ikuro Sato; Ryota Yamada; Masayuki Tanaka; Nakamasa Inoue; Rei Kawakami
Class-Specific Semantic Reconstruction for Open Set Recognition. (1%)Hongzhi Huang; Yu Wang; Qinghua Hu; Ming-Ming Cheng
2022-07-04
Hessian-Free Second-Order Adversarial Examples for Adversarial Learning. (99%)Yaguan Qian; Yuqi Wang; Bin Wang; Zhaoquan Gu; Yuhan Guo; Wassim Swaileh
Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples. (98%)Giovanni Apruzzese; Rodion Vladimirov; Aliya Tastemirova; Pavel Laskov
Task-agnostic Defense against Adversarial Patch Attacks. (98%)Ke Xu; Yao Xiao; Zhaoheng Zheng; Kaijie Cai; Ram Nevatia
Large-scale Robustness Analysis of Video Action Recognition Models. (70%)Madeline C. Schiappa; Naman Biyani; Shruti Vyas; Hamid Palangi; Vibhav Vineet; Yogesh Rawat
Counterbalancing Teacher: Regularizing Batch Normalized Models for Robustness. (1%)Saeid Asgari Taghanaki; Ali Gholami; Fereshte Khani; Kristy Choi; Linh Tran; Ran Zhang; Aliasghar Khani
2022-07-03
RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries. (99%)Keshav Kasichainula; Hadi Mansourifar; Weidong Shi
Removing Batch Normalization Boosts Adversarial Training. (98%)Haotao Wang; Aston Zhang; Shuai Zheng; Xingjian Shi; Mu Li; Zhangyang Wang
Anomaly Detection with Adversarially Learned Perturbations of Latent Space. (13%)Vahid Reza Khazaie; Anthony Wong; John Taylor Jewell; Yalda Mohsenzadeh
Identifying the Context Shift between Test Benchmarks and Production Data. (1%)Matthew Groh
2022-07-02
FL-Defender: Combating Targeted Attacks in Federated Learning. (80%)Najeeb Jebreel; Josep Domingo-Ferrer
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis. (11%)Ruinan Jin; Xiaoxiao Li
PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection. (1%)Zhi Lu; Vrizlynn L. L. Thing
2022-07-01
Efficient Adversarial Training With Data Pruning. (99%)Maximilian Kaufmann; Yiren Zhao; Ilia Shumailov; Robert Mullins; Nicolas Papernot
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label. (99%)Shengshan Hu; Ziqi Zhou; Yechao Zhang; Leo Yu Zhang; Yifeng Zheng; Yuanyuan HE; Hai Jin
2022-06-30
Detecting and Recovering Adversarial Examples from Extracting Non-robust and Highly Predictive Adversarial Perturbations. (99%)Mingyu Dong; Jiahao Chen; Diqun Yan; Jingxing Gao; Li Dong; Rangding Wang
Measuring Forgetting of Memorized Training Examples. (83%)Matthew Jagielski; Om Thakkar; Florian Tramèr; Daphne Ippolito; Katherine Lee; Nicholas Carlini; Eric Wallace; Shuang Song; Abhradeep Thakurta; Nicolas Papernot; Chiyuan Zhang
MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors. (80%)Federica Granese; Marine Picot; Marco Romanelli; Francisco Messina; Pablo Piantanida
Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN. (16%)Kuan Li; Yang Liu; Xiang Ao; Jianfeng Chi; Jinghua Feng; Hao Yang; Qing He
Threat Assessment in Machine Learning based Systems. (13%)Lionel Nganyewou Tidjon; Foutse Khomh
Robustness of Epinets against Distributional Shifts. (1%)Xiuyuan Lu; Ian Osband; Seyed Mohammad Asghari; Sven Gowal; Vikranth Dwaracherla; Zheng Wen; Benjamin Van Roy
ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State. (1%)Xinshao Wang; Yang Hua; Elyor Kodirov; Sankha Subhra Mukherjee; David A. Clifton; Neil M. Robertson
No Reason for No Supervision: Improved Generalization in Supervised Models. (1%)Mert Bulent Sariyildiz; Yannis Kalantidis; Karteek Alahari; Diane Larlus
Augment like there's no tomorrow: Consistently performing neural networks for medical imaging. (1%)Joona Pohjonen; Carolin Stürenberg; Atte Föhr; Reija Randen-Brady; Lassi Luomala; Jouni Lohi; Esa Pitkänen; Antti Rannikko; Tuomas Mirtti
2022-06-29
IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound. (92%)Alessandro De Palma; Rudy Bunel; Krishnamurthy Dvijotham; M. Pawan Kumar; Robert Stanforth
Adversarial Ensemble Training by Jointly Learning Label Dependencies and Member Models. (33%)Lele Wang; Bin Liu
longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks. (10%)Venelin Kovatchev; Trina Chatterjee; Venkata S Govindarajan; Jifan Chen; Eunsol Choi; Gabriella Chronis; Anubrata Das; Katrin Erk; Matthew Lease; Junyi Jessy Li; Yating Wu; Kyle Mahowald
Private Graph Extraction via Feature Explanations. (4%)Iyiola E. Olatunji; Mandeep Rathee; Thorben Funke; Megha Khosla
RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness. (2%)Francesco Pinto; Harry Yang; Ser-Nam Lim; Philip H. S. Torr; Puneet K. Dokania
2022-06-28
Increasing Confidence in Adversarial Robustness Evaluations. (99%)Roland S. Zimmermann; Wieland Brendel; Florian Tramer; Nicholas Carlini
Rethinking Adversarial Examples for Location Privacy Protection. (93%)Trung-Nghia Le; Ta Gu; Huy H. Nguyen; Isao Echizen
A Deep Learning Approach to Create DNS Amplification Attacks. (92%)Jared Mathews; Prosenjit Chatterjee; Shankar Banik; Cory Nance
On the amplification of security and privacy risks by post-hoc explanations in machine learning models. (31%)Pengrui Quan; Supriyo Chakraborty; Jeya Vikranth Jeyakumar; Mani Srivastava
How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection. (12%)Mantas Mazeika; Bo Li; David Forsyth
An Empirical Study of Challenges in Converting Deep Learning Models. (5%)Moses Openja; Amin Nikanjam; Ahmed Haj Yahmed; Foutse Khomh; Zhen Ming Jiang
Reasoning about Moving Target Defense in Attack Modeling Formalisms. (2%)Gabriel Ballot; Vadim Malvone; Jean Leneutre; Etienne Borde
AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE. (1%)Changjie Lu; Shen Zheng; Zirui Wang; Omar Dib; Gaurav Gupta
2022-06-27
Adversarial Example Detection in Deployed Tree Ensembles. (99%)Laurens Devos; Wannes Meert; Jesse Davis
Towards Secrecy-Aware Attacks Against Trust Prediction in Signed Graphs. (31%)Yulin Zhu; Tomasz Michalak; Xiapu Luo; Kai Zhou
Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers. (15%)Georg Siedel; Silvia Vock; Andrey Morozov; Stefan Voß
Cyber Network Resilience against Self-Propagating Malware Attacks. (13%)Alesia Chernikova; Nicolò Gozzi; Simona Boboila; Priyanka Angadi; John Loughner; Matthew Wilden; Nicola Perra; Tina Eliassi-Rad; Alina Oprea
Quantification of Deep Neural Network Prediction Uncertainties for VVUQ of Machine Learning Models. (4%)Mahmoud Yaseen; Xu Wu
2022-06-26
Self-Healing Robust Neural Networks via Closed-Loop Control. (45%)Zhuotong Chen; Qianxiao Li; Zheng Zhang
De-END: Decoder-driven Watermarking Network. (1%)Han Fang; Zhaoyang Jia; Yupeng Qiu; Jiyi Zhang; Weiming Zhang; Ee-Chien Chang
2022-06-25
Empirical Evaluation of Physical Adversarial Patch Attacks Against Overhead Object Detection Models. (99%)Gavin S. Hartnett; Li Ang Zhang; Caolionn O'Connell; Andrew J. Lohn; Jair Aguirre
Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising. (99%)Sandhya Aneja; Nagender Aneja; Pg Emeroylariffion Abas; Abdul Ghani Naim
RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer. (99%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
Defending Multimodal Fusion Models against Single-Source Adversaries. (81%)Karren Yang; Wan-Yi Lin; Manash Barman; Filipe Condessa; Zico Kolter
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning. (12%)Baoyuan Wu; Hongrui Chen; Mingda Zhang; Zihao Zhu; Shaokui Wei; Danni Yuan; Chao Shen; Hongyuan Zha
Cascading Failures in Smart Grids under Random, Targeted and Adaptive Attacks. (1%)Sushmita Ruj; Arindam Pal
2022-06-24
Defending Backdoor Attacks on Vision Transformer via Patch Processing. (99%)Khoa D. Doan; Yingjie Lao; Peng Yang; Ping Li
AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail Problems. (96%)Wenzheng Hou; Qianqian Xu; Zhiyong Yang; Shilong Bao; Yuan He; Qingming Huang
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective. (92%)Mark Huasong Meng; Guangdong Bai; Sin Gee Teo; Zhe Hou; Yan Xiao; Yun Lin; Jin Song Dong
Robustness of Explanation Methods for NLP Models. (82%)Shriya Atmakuri; Tejas Chheda; Dinesh Kandula; Nishant Yadav; Taesung Lee; Hessel Tuinhof
zPROBE: Zero Peek Robustness Checks for Federated Learning. (4%)Zahra Ghodsi; Mojan Javaheripi; Nojan Sheybani; Xinqiao Zhang; Ke Huang; Farinaz Koushanfar
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems. (2%)D'Jeff Kanda Nkashama; Arian Soltani; Jean-Charles Verdier; Marc Frappier; Pierre-Marting Tardif; Froduald Kabanza
2022-06-23
Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
A Framework for Understanding Model Extraction Attack and Defense. (98%)Xun Xian; Mingyi Hong; Jie Ding
Towards End-to-End Private Automatic Speaker Recognition. (76%)Francisco Teixeira; Alberto Abad; Bhiksha Raj; Isabel Trancoso
BERT Rankers are Brittle: a Study using Adversarial Document Perturbations. (75%)Yumeng Wang; Lijun Lyu; Avishek Anand
Never trust, always verify : a roadmap for Trustworthy AI? (1%)Lionel Nganyewou Tidjon; Foutse Khomh
Measuring Representational Robustness of Neural Networks Through Shared Invariances. (1%)Vedant Nanda; Till Speicher; Camila Kolling; John P. Dickerson; Krishna P. Gummadi; Adrian Weller
2022-06-22
AdvSmo: Black-box Adversarial Attack by Smoothing Linear Structure of Texture. (99%)Hui Xia; Rui Zhang; Shuliang Jiang; Zi Kang
InfoAT: Improving Adversarial Training Using the Information Bottleneck Principle. (98%)Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Robust Universal Adversarial Perturbations. (97%)Changming Xu; Gagandeep Singh
Guided Diffusion Model for Adversarial Purification from Random Noise. (68%)Quanlin Wu; Hang Ye; Yuntian Gu
Understanding the effect of sparsity on neural networks robustness. (61%)Lukas Timpl; Rahim Entezari; Hanie Sedghi; Behnam Neyshabur; Olga Saukh
Shilling Black-box Recommender Systems by Learning to Generate Fake User Profiles. (41%)Chen Lin; Si Chen; Meifang Zeng; Sheng Zhang; Min Gao; Hui Li
2022-06-21
SSMI: How to Make Objects of Interest Disappear without Accessing Object Detectors? (99%)Hui Xia; Rui Zhang; Zi Kang; Shuliang Jiang
Transferable Graph Backdoor Attack. (99%)Shuiqiao Yang; Bao Gia Doan; Paul Montague; Olivier De Vel; Tamas Abraham; Seyit Camtepe; Damith C. Ranasinghe; Salil S. Kanhere
(Certified!!) Adversarial Robustness for Free! (84%)Nicholas Carlini; Florian Tramer; Krishnamurthy Dvijotham; J. Zico Kolter
Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems. (81%)Yanchao Sun; Ruijie Zheng; Parisa Hassanzadeh; Yongyuan Liang; Soheil Feizi; Sumitra Ganesh; Furong Huang
FlashSyn: Flash Loan Attack Synthesis via Counter Example Driven Approximation. (68%)Zhiyang Chen; Sidi Mohamed Beillahi; Fan Long
Natural Backdoor Datasets. (33%)Emily Wenger; Roma Bhattacharjee; Arjun Nitin Bhagoji; Josephine Passananti; Emilio Andere; Haitao Zheng; Ben Y. Zhao
The Privacy Onion Effect: Memorization is Relative. (22%)Nicholas Carlini; Matthew Jagielski; Nicolas Papernot; Andreas Terzis; Florian Tramer; Chiyuan Zhang
ProML: A Decentralised Platform for Provenance Management of Machine Learning Software Systems. (1%)Nguyen Khoi Tran; Bushra Sabir; M. Ali Babar; Nini Cui; Mehran Abolhasan; Justin Lipman
2022-06-20
Understanding Robust Learning through the Lens of Representation Similarities. (99%)Christian Cianfarani; Arjun Nitin Bhagoji; Vikash Sehwag; Ben Zhao; Prateek Mittal
Diversified Adversarial Attacks based on Conjugate Gradient Method. (98%)Keiichiro Yamamura; Haruki Sato; Nariaki Tateiwa; Nozomi Hata; Toru Mitsutake; Issa Oe; Hiroki Ishikura; Katsuki Fujisawa
Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum. (76%)Junlin Wu; Yevgeniy Vorobeychik
SafeBench: A Benchmarking Platform for Safety Evaluation of Autonomous Vehicles. (5%)Chejian Xu; Wenhao Ding; Weijie Lyu; Zuxin Liu; Shuai Wang; Yihan He; Hanjiang Hu; Ding Zhao; Bo Li
Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities. (1%)Julian Bitterwolf; Alexander Meinke; Maximilian Augustin; Matthias Hein
2022-06-19
On the Limitations of Stochastic Pre-processing Defenses. (99%)Yue Gao; Ilia Shumailov; Kassem Fawaz; Nicolas Papernot
Towards Adversarial Attack on Vision-Language Pre-training Models. (98%)Jiaming Zhang; Qi Yi; Jitao Sang
A Universal Adversarial Policy for Text Classifiers. (98%)Gallil Maimon; Lior Rokach
JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System. (68%)Jiaming Zhang; Qi Yi; Jitao Sang
Adversarially trained neural representations may already be as robust as corresponding biological neural representations. (31%)Chong Guo; Michael J. Lee; Guillaume Leclerc; Joel Dapello; Yug Rao; Aleksander Madry; James J. DiCarlo
2022-06-18
Demystifying the Adversarial Robustness of Random Transformation Defenses. (99%)Chawin Sitawarin; Zachary Golan-Strieb; David Wagner
On the Role of Generalization in Transferability of Adversarial Examples. (99%)Yilin Wang; Farzan Farnia
DECK: Model Hardening for Defending Pervasive Backdoors. (98%)Guanhong Tao; Yingqi Liu; Siyuan Cheng; Shengwei An; Zhuo Zhang; Qiuling Xu; Guangyu Shen; Xiangyu Zhang
Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning. (10%)Marin Matsumoto; Tsubasa Takahashi; Seng Pei Liew; Masato Oguchi
Adversarial Scrutiny of Evidentiary Statistical Software. (2%)Rediet Abebe; Moritz Hardt; Angela Jin; John Miller; Ludwig Schmidt; Rebecca Wexler
2022-06-17
Detecting Adversarial Examples in Batches -- a geometrical approach. (99%)Danush Kumar Venkatesh; Peter Steinbach
Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation. (99%)Wen Sun; Jian Jin; Weisi Lin
Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization. (99%)Deokjae Lee; Seungyong Moon; Junhyeok Lee; Hyun Oh Song
Comment on Transferability and Input Transformation with Additive Noise. (99%)Hoki Kim; Jinseong Park; Jaewook Lee
Adversarial Robustness is at Odds with Lazy Training. (98%)Yunjuan Wang; Enayat Ullah; Poorya Mianjy; Raman Arora
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection. (83%)Jinyin Chen; Chengyu Jia; Haibin Zheng; Ruoxi Chen; Chenbo Fu
RetrievalGuard: Provably Robust 1-Nearest Neighbor Image Retrieval. (81%)Yihan Wu; Hongyang Zhang; Heng Huang
The Consistency of Adversarial Training for Binary Classification. (26%)Natalie S. Frank; Jonathan Niles-Weed
Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification. (15%)Natalie S. Frank
Understanding Robust Overfitting of Adversarial Training and Beyond. (8%)Chaojian Yu; Bo Han; Li Shen; Jun Yu; Chen Gong; Mingming Gong; Tongliang Liu
2022-06-16
Adversarial Privacy Protection on Speech Enhancement. (99%)Mingyu Dong; Diqun Yan; Rangding Wang
Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge. (99%)Dingcheng Yang; Zihao Xiao; Wenjian Yu
Analysis and Extensions of Adversarial Training for Video Classification. (93%)Kaleab A. Kinfu; René Vidal
Double Sampling Randomized Smoothing. (89%)Linyi Li; Jiawei Zhang; Tao Xie; Bo Li
Adversarial Robustness of Graph-based Anomaly Detection. (76%)Yulin Zhu; Yuni Lai; Kaifa Zhao; Xiapu Luo; Mingquan Yuan; Jian Ren; Kai Zhou
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. (68%)Ganqu Cui; Lifan Yuan; Bingxiang He; Yangyi Chen; Zhiyuan Liu; Maosong Sun
Backdoor Attacks on Vision Transformers. (31%)Akshayvarun Subramanya; Aniruddha Saha; Soroush Abbasi Koohpayegani; Ajinkya Tejankar; Hamed Pirsiavash
Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey. (22%)Abhijith Sharma; Yijun Bian; Phil Munz; Apurva Narayan
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences. (5%)Daryna Oliynyk; Rudolf Mayer; Andreas Rauber
Catastrophic overfitting is a bug but also a feature. (2%)Guillermo Ortiz-Jiménez; Pau de Jorge; Amartya Sanyal; Adel Bibi; Puneet K. Dokania; Pascal Frossard; Grégory Rogez; Philip H. S. Torr
Gradient-Based Adversarial and Out-of-Distribution Detection. (2%)Jinsol Lee; Mohit Prabhushankar; Ghassan AlRegib
"Understanding Robustness Lottery": A Comparative Visual Analysis of Neural Network Pruning Approaches. (1%)Zhimin Li; Shusen Liu; Xin Yu; Kailkhura Bhavya; Jie Cao; Diffenderfer James Daniel; Peer-Timo Bremer; Valerio Pascucci
2022-06-15
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack. (99%)Ruize Gao; Jiongxiao Wang; Kaiwen Zhou; Feng Liu; Binghui Xie; Gang Niu; Bo Han; James Cheng
Morphence-2.0: Evasion-Resilient Moving Target Defense Powered by Out-of-Distribution Detection. (99%)Abderrahmen Amich; Ata Kaboudi; Birhanu Eshete
Architectural Backdoors in Neural Networks. (83%)Mikel Bober-Irizar; Ilia Shumailov; Yiren Zhao; Robert Mullins; Nicolas Papernot
Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning. (75%)Jonah O'Brien Weiss; Tiago Alves; Sandip Kundu
Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. (74%)Tianlong Chen; Huan Zhang; Zhenyu Zhang; Shiyu Chang; Sijia Liu; Pin-Yu Chen; Zhangyang Wang
A Search-Based Testing Approach for Deep Reinforcement Learning Agents. (62%)Amirhossein Zolfagharian; Manel Abdellatif; Lionel Briand; Mojtaba Bagherzadeh; Ramesh S
Can pruning improve certified robustness of neural networks? (56%)Zhangheng Li; Tianlong Chen; Linyi Li; Bo Li; Zhangyang Wang
Improving Diversity with Adversarially Learned Transformations for Domain Generalization. (33%)Tejas Gokhale; Rushil Anirudh; Jayaraman J. Thiagarajan; Bhavya Kailkhura; Chitta Baral; Yezhou Yang
Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning. (11%)Tianlong Chen; Sijia Liu; Shiyu Chang; Lisa Amini; Zhangyang Wang
The Manifold Hypothesis for Gradient-Based Explanations. (2%)Sebastian Bordt; Uddeshya Upadhyay; Zeynep Akata; Ulrike von Luxburg
READ: Aggregating Reconstruction Error into Out-of-distribution Detection. (1%)Wenyu Jiang; Hao Cheng; Mingcai Chen; Shuai Feng; Yuxin Ge; Chongjun Wang
2022-06-14
Adversarial Vulnerability of Randomized Ensembles. (99%)Hassan Dbouk; Naresh R. Shanbhag
Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training. (99%)B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
Proximal Splitting Adversarial Attacks for Semantic Segmentation. (92%)Jérôme Rony; Jean-Christophe Pesquet; Ismail Ben Ayed
Efficiently Training Low-Curvature Neural Networks. (92%)Suraj Srinivas; Kyle Matoba; Himabindu Lakkaraju; Francois Fleuret
Defending Observation Attacks in Deep Reinforcement Learning via Detection and Denoising. (88%)Zikang Xiong; Joe Eappen; He Zhu; Suresh Jagannathan
Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO. (86%)Javier Rando; Nasib Naimi; Thomas Baumann; Max Mathys
Turning a Curse Into a Blessing: Enabling Clean-Data-Free Defenses by Model Inversion. (68%)Si Chen; Yi Zeng; Won Park; Ruoxi Jia
Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises. (62%)Minkyu Choi; Yizhen Zhang; Kuan Han; Xiaokai Wang; Zhongming Liu
When adversarial attacks become interpretable counterfactual explanations. (62%)Mathieu Serrurier; Franck Mamalet; Thomas Fel; Louis Béthune; Thibaut Boissin
Attacks on Perception-Based Control Systems: Modeling and Fundamental Limits. (2%)Amir Khazraei; Henry Pfister; Miroslav Pajic
A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise. (1%)Jongwoo Ko; Bongsoo Yi; Se-Young Yun
A Survey on Gradient Inversion: Attacks, Defenses and Future Directions. (1%)Rui Zhang; Song Guo; Junxiao Wang; Xin Xie; Dacheng Tao
2022-06-13
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations. (99%)Kaustubh Sridhar; Souradeep Dutta; Ramneet Kaur; James Weimer; Oleg Sokolsky; Insup Lee
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale. (99%)Gaoyuan Zhang; Songtao Lu; Yihua Zhang; Xiangyi Chen; Pin-Yu Chen; Quanfu Fan; Lee Martie; Lior Horesh; Mingyi Hong; Sijia Liu
Pixel to Binary Embedding Towards Robustness for CNNs. (47%)Ikki Kishida; Hideki Nakayama
Towards Understanding Sharpness-Aware Minimization. (1%)Maksym Andriushchenko; Nicolas Flammarion
Efficient Human-in-the-loop System for Guiding DNNs Attention. (1%)Yi He; Xi Yang; Chia-Ming Chang; Haoran Xie; Takeo Igarashi
2022-06-12
Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation. (98%)Chengyang Ying; You Qiaoben; Xinning Zhou; Hang Su; Wenbo Ding; Jianyong Ai
Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems. (92%)Zahra Jadidi; Shantanu Pal; Nithesh Nayak K; Arawinkumaar Selvakkumar; Chih-Chia Chang; Maedeh Beheshti; Alireza Jolfaei
Darknet Traffic Classification and Adversarial Attacks. (81%)Nhien Rust-Nguyen; Mark Stamp
InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness. (26%)Shruthi Gowda; Bahram Zonooz; Elahe Arani
RSSD: Defend against Ransomware with Hardware-Isolated Network-Storage Codesign and Post-Attack Analysis. (9%)Benjamin Reidys; Peng Liu; Jian Huang
Neurotoxin: Durable Backdoors in Federated Learning. (5%)Zhengming Zhang; Ashwinee Panda; Linyue Song; Yaoqing Yang; Michael W. Mahoney; Joseph E. Gonzalez; Kannan Ramchandran; Prateek Mittal
An Efficient Method for Sample Adversarial Perturbations against Nonlinear Support Vector Machines. (4%)Wen Su; Qingna Li
2022-06-11
Improving the Adversarial Robustness of NLP Models by Information Bottleneck. (99%)Cenyuan Zhang; Xiang Zhou; Yixin Wan; Xiaoqing Zheng; Kai-Wei Chang; Cho-Jui Hsieh
Defending Adversarial Examples by Negative Correlation Ensemble. (99%)Wenjian Luo; Hongwei Zhang; Linghao Kong; Zhijian Chen; Ke Tang
NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks. (81%)Nuo Xu; Binghui Wang; Ran Ran; Wujie Wen; Parv Venkitasubramaniam
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks. (69%)Xiong Peng; Feng Liu; Jingfen Zhang; Long Lan; Junjie Ye; Tongliang Liu; Bo Han
2022-06-10
Localized adversarial artifacts for compressed sensing MRI. (76%)Rima Alaifari; Giovanni S. Alberti; Tandri Gauksson
Rethinking the Defense Against Free-rider Attack From the Perspective of Model Weight Evolving Frequency. (70%)Jinyin Chen; Mingjun Li; Tao Liu; Haibin Zheng; Yao Cheng; Changting Lin
Blades: A Simulator for Attacks and Defenses in Federated Learning. (33%)Shenghui Li; Li Ju; Tianru Zhang; Edith Ngai; Thiemo Voigt
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers. (9%)Nan Luo; Yuanzhang Li; Yajie Wang; Shangbo Wu; Yu-an Tan; Quanxin Zhang
Deep Leakage from Model in Federated Learning. (3%)Zihao Zhao; Mengen Luo; Wenbo Ding
Adversarial Counterfactual Environment Model Learning. (1%)Xiong-Hui Chen; Yang Yu; Zheng-Mao Zhu; Zhihua Yu; Zhenjun Chen; Chenghe Wang; Yinan Wu; Hongqiu Wu; Rong-Jun Qin; Ruijin Ding; Fangsheng Huang
2022-06-09
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models. (99%)Federico Nesti; Giulio Rossolini; Gianluca D'Amico; Alessandro Biondi; Giorgio Buttazzo
ReFace: Real-time Adversarial Attacks on Face Recognition Systems. (99%)Shehzeen Hussain; Todd Huster; Chris Mesterharm; Paarth Neekhara; Kevin An; Malhar Jere; Harshvardhan Sikka; Farinaz Koushanfar
Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks. (98%)Huishuai Zhang; Da Yu; Yiping Lu; Di He
Meet You Halfway: Explaining Deep Learning Mysteries. (92%)Oriel BenShmuel
Early Transferability of Adversarial Examples in Deep Neural Networks. (86%)Oriel BenShmuel
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing. (86%)Zhongkai Hao; Chengyang Ying; Yinpeng Dong; Hang Su; Jun Zhu; Jian Song
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. (84%)Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal Md Shoeb; Abubakar Abid; Adam Fisch; Adam R. Brown; Adam Santoro; Aditya Gupta; Adrià Garriga-Alonso; et al.
Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. (41%)Tianlong Chen; Zhenyu Zhang; Sijia Liu; Yang Zhang; Shiyu Chang; Zhangyang Wang
DORA: Exploring outlier representations in Deep Neural Networks. (1%)Kirill Bykov; Mayukh Deb; Dennis Grinwald; Klaus-Robert Müller; Marina M. -C. Höhne
Membership Inference via Backdooring. (1%)Hongsheng Hu; Zoran Salcic; Gillian Dobbie; Jinjun Chen; Lichao Sun; Xuyun Zhang
2022-06-08
Wavelet Regularization Benefits Adversarial Training. (99%)Jun Yan; Huilin Yin; Xiaoyang Deng; Ziming Zhao; Wancheng Ge; Hao Zhang; Gerhard Rigoll
Latent Boundary-guided Adversarial Training. (99%)Xiaowei Zhou; Ivor W. Tsang; Jie Yin
Adversarial Text Normalization. (73%)Joanna Bitton; Maya Pavlova; Ivan Evtimov
Autoregressive Perturbations for Data Poisoning. (70%)Pedro Sandoval-Segura; Vasu Singla; Jonas Geiping; Micah Goldblum; Tom Goldstein; David W. Jacobs
Toward Certified Robustness Against Real-World Distribution Shifts. (5%)Haoze Wu; Teruhiro Tagomori; Alexander Robey; Fengjun Yang; Nikolai Matni; George Pappas; Hamed Hassani; Corina Pasareanu; Clark Barrett
Generative Adversarial Networks and Image-Based Malware Classification. (1%)Huy Nguyen; Fabio Di Troia; Genya Ishigaki; Mark Stamp
Robust Deep Ensemble Method for Real-world Image Denoising. (1%)Pengju Liu; Hongzhi Zhang; Jinghui Wang; Yuzhi Wang; Dongwei Ren; Wangmeng Zuo
2022-06-07
Fooling Explanations in Text Classifiers. (99%)Adam Ivankay; Ivan Girardi; Chiara Marchiori; Pascal Frossard
AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Yang Liu
Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Feng Wang; Jiashui Wang
Adaptive Regularization for Adversarial Training. (98%)Dongyoon Yang; Insung Kong; Yongdai Kim
Building Robust Ensembles via Margin Boosting. (83%)Dinghuai Zhang; Hongyang Zhang; Aaron Courville; Yoshua Bengio; Pradeep Ravikumar; Arun Sai Suggala
On the Permanence of Backdoors in Evolving Models. (67%)Huiying Li; Arjun Nitin Bhagoji; Yuxin Chen; Haitao Zheng; Ben Y. Zhao
Subject Membership Inference Attacks in Federated Learning. (4%)Anshuman Suri; Pallika Kanani; Virendra J. Marathe; Daniel W. Peterson
Adversarial Reprogramming Revisited. (3%)Matthias Englert; Ranko Lazic
Certifying Data-Bias Robustness in Linear Regression. (1%)Anna P. Meyer; Aws Albarghouthi; Loris D'Antoni
Parametric Chordal Sparsity for SDP-based Neural Network Verification. (1%)Anton Xue; Lars Lindemann; Rajeev Alur
Can CNNs Be More Robust Than Transformers? (1%)Zeyu Wang; Yutong Bai; Yuyin Zhou; Cihang Xie
2022-06-06
Robust Adversarial Attacks Detection based on Explainable Deep Reinforcement Learning For UAV Guidance and Planning. (99%)Thomas Hickling; Nabil Aouf; Phillippa Spencer
Fast Adversarial Training with Adaptive Step Size. (98%)Zhichao Huang; Yanbo Fan; Chen Liu; Weizhong Zhang; Yong Zhang; Mathieu Salzmann; Sabine Süsstrunk; Jue Wang
Certified Robustness in Federated Learning. (87%)Motasem Alfarra; Juan C. Pérez; Egor Shulgin; Peter Richtárik; Bernard Ghanem
Robust Image Protection Countering Cropping Manipulation. (12%)Qichao Ying; Hang Zhou; Zhenxing Qian; Sheng Li; Xinpeng Zhang
PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN Model. (3%)Xuefeng Fan; Hangyu Gui; Xiaoyi Zhou
Tackling covariate shift with node-based Bayesian neural networks. (1%)Trung Trinh; Markus Heinonen; Luigi Acerbi; Samuel Kaski
Anomaly Detection with Test Time Augmentation and Consistency Evaluation. (1%)Haowei He; Jiaye Teng; Yang Yuan
2022-06-05
Federated Adversarial Training with Transformers. (98%)Ahmed Aldahdooh; Wassim Hamidouche; Olivier Déforges
Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training. (98%)Guodong Cao; Zhibo Wang; Xiaowei Dong; Zhifei Zhang; Hengchang Guo; Zhan Qin; Kui Ren
Which models are innately best at uncertainty estimation? (1%)Ido Galil; Mohammed Dabbah; Ran El-Yaniv
2022-06-04
Soft Adversarial Training Can Retain Natural Accuracy. (76%)Abhijith Sharma; Apurva Narayan
2022-06-03
Saliency Attack: Towards Imperceptible Black-box Adversarial Attack. (99%)Zeyu Dai; Shengcai Liu; Ke Tang; Qing Li
Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis. (96%)Raphael Ettedgui; Alexandre Araujo; Rafael Pinot; Yann Chevaleyre; Jamal Atif
Evaluating Transfer-based Targeted Adversarial Perturbations against Real-World Computer Vision Systems based on Human Judgments. (92%)Zhengyu Zhao; Nga Dang; Martha Larson
A Robust Backpropagation-Free Framework for Images. (80%)Timothy Zee; Alexander G. Ororbia; Ankur Mali; Ifeoma Nwogu
Gradient Obfuscation Checklist Test Gives a False Sense of Security. (73%)Nikola Popovic; Danda Pani Paudel; Thomas Probst; Luc Van Gool
Kallima: A Clean-label Framework for Textual Backdoor Attacks. (26%)Xiaoyi Chen; Yinpeng Dong; Zeyu Sun; Shengfang Zhai; Qingni Shen; Zhonghai Wu
2022-06-02
FACM: Correct the Output of Deep Neural Network with Middle Layers Features against Adversarial Samples. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection. (99%)Linhai Ma; Liang Liang
Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline. (99%)Junjian Li; Honglong Chen
Adversarial Laser Spot: Robust and Covert Physical Adversarial Attack to DNNs. (98%)Chengyin Hu
Adversarial Unlearning: Reducing Confidence Along Adversarial Directions. (31%)Amrith Setlur; Benjamin Eysenbach; Virginia Smith; Sergey Levine
MaxStyle: Adversarial Style Composition for Robust Medical Image Segmentation. (8%)Chen Chen; Zeju Li; Cheng Ouyang; Matt Sinclair; Wenjia Bai; Daniel Rueckert
A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection. (4%)Wei Guo; Benedetta Tondi; Mauro Barni
Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling. (1%)Jian Hu; Haowen Zhong; Junchi Yan; Shaogang Gong; Guile Wu; Fei Yang
2022-06-01
On the reversibility of adversarial attacks. (99%)Chau Yi Li; Ricardo Sánchez-Matilla; Ali Shahin Shamsabadi; Riccardo Mazzon; Andrea Cavallaro
NeuroUnlock: Unlocking the Architecture of Obfuscated Deep Neural Networks. (99%)Mahya Morid Ahmadi; Lilas Alrahis; Alessio Colucci; Ozgur Sinanoglu; Muhammad Shafique
Attack-Agnostic Adversarial Detection. (99%)Jiaxin Cheng; Mohamed Hussein; Jay Billa; Wael AbdAlmageed
On the Perils of Cascading Robust Classifiers. (98%)Ravi Mangal; Zifan Wang; Chi Zhang; Klas Leino; Corina Pasareanu; Matt Fredrikson
Anti-Forgery: Towards a Stealthy and Robust DeepFake Disruption Attack via Adversarial Perceptual-aware Perturbations. (98%)Run Wang; Ziheng Huang; Zhikai Chen; Li Liu; Jing Chen; Lina Wang
Support Vector Machines under Adversarial Label Contamination. (97%)Huang Xiao; Battista Biggio; Blaine Nelson; Han Xiao; Claudia Eckert; Fabio Roli
Defense Against Gradient Leakage Attacks via Learning to Obscure Data. (80%)Yuxuan Wan; Han Xu; Xiaorui Liu; Jie Ren; Wenqi Fan; Jiliang Tang
The robust way to stack and bag: the local Lipschitz way. (70%)Thulasi Tholeti; Sheetal Kalyani
Robustness Evaluation and Adversarial Training of an Instance Segmentation Model. (54%)Jacob Bond; Andrew Lingg
RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model. (1%)Hangzhi Guo; Feiran Jia; Jinghui Chen; Anna Squicciarini; Amulya Yadav
2022-05-31
Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems. (99%)Zeyan Liu; Fengjun Li; Jingqiang Lin; Zhu Li; Bo Luo
Exact Feature Collisions in Neural Networks. (95%)Utku Ozbulak; Manvel Gasparyan; Shodhan Rao; Wesley De Neve; Arnout Van Messem
CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models. (93%)Akshita Jha; Chandan K. Reddy
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences. (83%)Shang Wang; Yansong Gao; Anmin Fu; Zhi Zhang; Yuqing Zhang; Willy Susilo
Semantic Autoencoder and Its Potential Usage for Adversarial Attack. (81%)Yurui Ming; Cuihuan Du; Chin-Teng Lin
An Effective Fusion Method to Enhance the Robustness of CNN. (80%)Yating Ma; Zhichao Lian
Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models. (64%)Kaiji Lu; Anupam Datta
Generative Models with Information-Theoretic Protection Against Membership Inference Attacks. (10%)Parisa Hassanzadeh; Robert E. Tillman
Likelihood-Free Inference with Generative Neural Networks via Scoring Rule Minimization. (1%)Lorenzo Pacchiardi; Ritabrata Dutta
2022-05-30
Exposing Fine-grained Adversarial Vulnerability of Face Anti-spoofing Models. (99%)Songlin Yang; Wei Wang; Chenye Xu; Bo Peng; Jing Dong
Searching for the Essence of Adversarial Perturbations. (99%)Dennis Y. Menn; Tzu-hsun Feng; Hung-yi Lee
Guided Diffusion Model for Adversarial Purification. (99%)Jinyi Wang; Zhaoyang Lyu; Dahua Lin; Bo Dai; Hongfei Fu
Domain Constraints in Feature Space: Strengthening Robustness of Android Malware Detection against Realizable Adversarial Examples. (98%)Hamid Bostani; Zhuoran Liu; Zhengyu Zhao; Veelasha Moonsamy
Why Adversarial Training of ReLU Networks Is Difficult? (68%)Xu Cheng; Hao Zhang; Yue Xin; Wen Shen; Jie Ren; Quanshi Zhang
CalFAT: Calibrated Federated Adversarial Training with Label Skewness. (67%)Chen Chen; Yuchen Liu; Xingjun Ma; Lingjuan Lyu
Securing AI-based Healthcare Systems using Blockchain Technology: A State-of-the-Art Systematic Literature Review and Future Research Directions. (15%)Rucha Shinde; Shruti Patil; Ketan Kotecha; Vidyasagar Potdar; Ganeshsree Selvachandran; Ajith Abraham
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning. (13%)Yinglun Xu; Qi Zeng; Gagandeep Singh
White-box Membership Attack Against Machine Learning Based Retinopathy Classification. (10%)Mounia Hamidouche; Reda Bellafqira; Gwenolé Quellec; Gouenou Coatrieux
Fool SHAP with Stealthily Biased Sampling. (2%)Gabriel Laberge; Ulrich Aïvodji; Satoshi Hara; Mario Marchand; Foutse Khomh
Snoopy: A Webpage Fingerprinting Framework with Finite Query Model for Mass-Surveillance. (2%)Gargi Mitra; Prasanna Karthik Vairam; Sandip Saha; Nitin Chandrachoodan; V. Kamakoti
2022-05-29
Robust Weight Perturbation for Adversarial Training. (99%)Chaojian Yu; Bo Han; Mingming Gong; Li Shen; Shiming Ge; Bo Du; Tongliang Liu
Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks. (99%)Eyad Shtaiwi; Ahmed El Ouadrhiri; Majid Moradikia; Salma Sultana; Ahmed Abdelhadi; Zhu Han
Unfooling Perturbation-Based Post Hoc Explainers. (98%)Zachariah Carmichael; Walter J Scheirer
On the Robustness of Safe Reinforcement Learning under Observational Perturbations. (93%)Zuxin Liu; Zijian Guo; Zhepeng Cen; Huan Zhang; Jie Tan; Bo Li; Ding Zhao
Superclass Adversarial Attack. (80%)Soichiro Kumano; Hiroshi Kera; Toshihiko Yamasaki
Problem-Space Evasion Attacks in the Android OS: a Survey. (50%)Harel Berger; Chen Hajaj; Amit Dvir
Context-based Virtual Adversarial Training for Text Classification with Noisy Labels. (11%)Do-Myoung Lee; Yeachan Kim; Chang-gyun Seo
A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks. (1%)Binyan Hu; Yu Sun; A. K. Qin
2022-05-28
Contributor-Aware Defenses Against Adversarial Backdoor Attacks. (98%)Glenn Dawson; Muhammad Umer; Robi Polikar
BadDet: Backdoor Attacks on Object Detection. (92%)Shih-Han Chan; Yinpeng Dong; Jun Zhu; Xiaolu Zhang; Jun Zhou
Syntax-Guided Program Reduction for Understanding Neural Code Intelligence Models. (62%)Md Rafiqul Islam Rabin; Aftab Hussain; Mohammad Amin Alipour
2022-05-27
fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating Weather Conditions on the Camera Lens of Autonomous Systems. (96%)Alberto Marchisio; Giovanni Caramia; Maurizio Martina; Muhammad Shafique
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power. (95%)Binghui Li; Jikai Jin; Han Zhong; John E. Hopcroft; Liwei Wang
Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction. (93%)Ruochen Jiao; Xiangguo Liu; Takami Sato; Qi Alfred Chen; Qi Zhu
Defending Against Stealthy Backdoor Attacks. (73%)Sangeet Sagar; Abhinav Bhatt; Abhijith Srinivas Bidaralli
EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks. (13%)Runlin Lei; Zhen Wang; Yaliang Li; Bolin Ding; Zhewei Wei
2022-05-26
A Physical-World Adversarial Attack Against 3D Face Recognition. (99%)Yanjie Li; Yiquan Li; Bin Xiao
Transferable Adversarial Attack based on Integrated Gradients. (99%)Yi Huang; Adams Wai-Kin Kong
MALICE: Manipulation Attacks on Learned Image ComprEssion. (99%)Kang Liu; Di Wu; Yiru Wang; Dan Feng; Benjamin Tan; Siddharth Garg
Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors. (98%)Avishag Shapira; Alon Zolfi; Luca Demetrio; Battista Biggio; Asaf Shabtai
Circumventing Backdoor Defenses That Are Based on Latent Separability. (96%)Xiangyu Qi; Tinghao Xie; Yiming Li; Saeed Mahloujifar; Prateek Mittal
An Analytic Framework for Robust Training of Artificial Neural Networks. (93%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Adversarial attacks and defenses in Speaker Recognition Systems: A survey. (81%)Jiahe Lan; Rui Zhang; Zheng Yan; Jie Wang; Yu Chen; Ronghui Hou
PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. (81%)Manaar Alam; Esha Sarkar; Michail Maniatakos
BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning. (81%)Zhenting Wang; Juan Zhai; Shiqing Ma
R-HTDetector: Robust Hardware-Trojan Detection Based on Adversarial Training. (80%)Kento Hasegawa; Seira Hidano; Kohei Nozawa; Shinsaku Kiyomoto; Nozomu Togawa
BagFlip: A Certified Defense against Data Poisoning. (75%)Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Fight Poison with Poison: Detecting Backdoor Poison Samples via Decoupling Benign Correlations. (67%)Xiangyu Qi; Tinghao Xie; Saeed Mahloujifar; Prateek Mittal
Membership Inference Attack Using Self Influence Functions. (45%)Gilad Cohen; Raja Giryes
MemeTector: Enforcing deep focus for meme detection. (1%)Christos Koutlis; Manos Schinas; Symeon Papadopoulos
2022-05-25
Surprises in adversarially-trained linear regression. (87%)Antônio H. Ribeiro; Dave Zachariah; Thomas B. Schön
Textual Backdoor Attacks with Iterative Trigger Injection. (61%)Jun Yan; Vansh Gupta; Xiang Ren
How explainable are adversarially-robust CNNs? (8%)Mehdi Nourelahi; Lars Kotthoff; Peijie Chen; Anh Nguyen
2022-05-24
Defending a Music Recommender Against Hubness-Based Adversarial Attacks. (99%)Katharina Hoedt; Arthur Flexer; Gerhard Widmer
Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks. (99%)Sizhe Chen; Zhehao Huang; Qinghua Tao; Yingwen Wu; Cihang Xie; Xiaolin Huang
Certified Robustness Against Natural Language Attacks by Causal Intervention. (98%)Haiteng Zhao; Chang Ma; Xinshuai Dong; Anh Tuan Luu; Zhi-Hong Deng; Hanwang Zhang
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks. (92%)Shutong Wu; Sizhe Chen; Cihang Xie; Xiaolin Huang
Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation. (64%)Xiaoguang Li; Neil Zhenqiang Gong; Ninghui Li; Wenhai Sun; Hui Li
WeDef: Weakly Supervised Backdoor Defense for Text Classification. (56%)Lesheng Jin; Zihan Wang; Jingbo Shang
Recipe2Vec: Multi-modal Recipe Representation Learning with Graph Neural Networks. (50%)Yijun Tian; Chuxu Zhang; Zhichun Guo; Yihong Ma; Ronald Metoyer; Nitesh V. Chawla
EBM Life Cycle: MCMC Strategies for Synthesis, Defense, and Density Modeling. (10%)Mitch Hill; Jonathan Mitchell; Chu Chen; Yuan Du; Mubarak Shah; Song-Chun Zhu
Comprehensive Privacy Analysis on Federated Recommender System against Attribute Inference Attacks. (9%)Shijie Zhang; Hongzhi Yin
Fast & Furious: Modelling Malware Detection as Evolving Data Streams. (2%)Fabrício Ceschin; Marcus Botacin; Heitor Murilo Gomes; Felipe Pinagé; Luiz S. Oliveira; André Grégio
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. (2%)Tianlong Chen; Zhenyu Zhang; Yihua Zhang; Shiyu Chang; Sijia Liu; Zhangyang Wang
CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing. (1%)Zhiwei Hao; Yong Luo; Zhi Wang; Han Hu; Jianping An
2022-05-23
Collaborative Adversarial Training. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Alleviating Robust Overfitting of Adversarial Training With Consistency Regularization. (98%)Shudong Zhang; Haichang Gao; Tianwei Zhang; Yunyi Zhou; Zihui Wu
Learning to Ignore Adversarial Attacks. (95%)Yiming Zhang; Yangqiaoyu Zhou; Samuel Carton; Chenhao Tan
Towards a Defense against Backdoor Attacks in Continual Federated Learning. (50%)Shuaiqi Wang; Jonathan Hayase; Giulia Fanti; Sewoong Oh
Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation. (10%)Huarui He; Jie Wang; Zhanqiu Zhang; Feng Wu
RCC-GAN: Regularized Compound Conditional GAN for Large-Scale Tabular Data Synthesis. (1%)Mohammad Esmaeilpour; Nourhene Chaalia; Adel Abusitta; Francois-Xavier Devailly; Wissem Maazoun; Patrick Cardinal
2022-05-22
AutoJoin: Efficient Adversarial Training for Robust Maneuvering via Denoising Autoencoder and Joint Learning. (26%)Michael Villarreal; Bibek Poudel; Ryan Wickman; Yu Shen; Weizi Li
Robust Quantity-Aware Aggregation for Federated Learning. (13%)Jingwei Yi; Fangzhao Wu; Huishuai Zhang; Bin Zhu; Tao Qi; Guangzhong Sun; Xing Xie
Analysis of functional neural codes of deep learning models. (10%)Jung Hoon Lee; Sujith Vijayan
2022-05-21
Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models. (99%)Shawn Shan; Wenxin Ding; Emily Wenger; Haitao Zheng; Ben Y. Zhao
Gradient Concealment: Free Lunch for Defending Adversarial Attacks. (99%)Sen Pei; Jiaxi Sun; Xiaopeng Zhang; Gaofeng Meng
Phrase-level Textual Adversarial Attack with Label Preservation. (99%)Yibin Lei; Yu Cao; Dianqi Li; Tianyi Zhou; Meng Fang; Mykola Pechenizkiy
On the Feasibility and Generality of Patch-based Adversarial Attacks on Semantic Segmentation Problems. (16%)Soma Kontar; Andras Horvath
2022-05-20
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness. (99%)Jiankai Jin; Olga Ohrimenko; Benjamin I. P. Rubinstein
Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification. (98%)Jungeum Kim; Xiao Wang
Adversarial joint attacks on legged robots. (86%)Takuto Otomo; Hiroshi Kera; Kazuhiko Kawamoto
Towards Consistency in Adversarial Classification. (82%)Laurent Meunier; Raphaël Ettedgui; Rafael Pinot; Yann Chevaleyre; Jamal Atif
Adversarial Body Shape Search for Legged Robots. (80%)Takaaki Azakami; Hiroshi Kera; Kazuhiko Kawamoto
SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. (64%)Harsh Chaudhari; Matthew Jagielski; Alina Oprea
The developmental trajectory of object recognition robustness: children are like small adults but unlike big deep neural networks. (11%)Lukas S. Huber; Robert Geirhos; Felix A. Wichmann
Vulnerability Analysis and Performance Enhancement of Authentication Protocol in Dynamic Wireless Power Transfer Systems. (10%)Tommaso Bianchi; Surudhi Asokraj; Alessandro Brighente; Mauro Conti; Radha Poovendran
Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization. (4%)Javier Del Ser; Alejandro Barredo-Arrieta; Natalia Díaz-Rodríguez; Francisco Herrera; Andreas Holzinger
2022-05-19
Focused Adversarial Attacks. (99%)Thomas Cilloni; Charles Walter; Charles Fleming
Transferable Physical Attack against Object Detection with Separable Attention. (99%)Yu Zhang; Zhiqiang Gong; Yichuang Zhang; YongQian Li; Kangcheng Bin; Jiahao Qi; Wei Xue; Ping Zhong
Enhancing the Transferability of Adversarial Examples via a Few Queries. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
On Trace of PGD-Like Adversarial Attacks. (99%)Mo Zhou; Vishal M. Patel
Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification. (98%)Leo Schwinn; Leon Bungert; An Nguyen; René Raab; Falk Pulsmeyer; Doina Precup; Björn Eskofier; Dario Zanca
Defending Against Adversarial Attacks by Energy Storage Facility. (96%)Jiawei Li; Jianxiao Wang; Lin Chen; Yang Yu
Sparse Adversarial Attack in Multi-agent Reinforcement Learning. (82%)Yizheng Hu; Zhihua Zhang
Data Valuation for Offline Reinforcement Learning. (1%)Amir Abolfazli; Gregory Palmer; Daniel Kudenko
2022-05-18
Passive Defense Against 3D Adversarial Point Clouds Through the Lens of 3D Steganalysis. (99%)Jiahao Zhu
Property Unlearning: A Defense Strategy Against Property Inference Attacks. (84%)Joshua Stock; Jens Wettlaufer; Daniel Demmler; Hannes Federrath
Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution. (56%)Zhixin Pan; Prabhat Mishra
Empirical Advocacy of Bio-inspired Models for Robust Image Recognition. (38%)Harshitha Machiraju; Oh-Hyeon Choung; Michael H. Herzog; Pascal Frossard
Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing. (1%)Ryan Feng; Somesh Jha; Atul Prakash
Mitigating Neural Network Overconfidence with Logit Normalization. (1%)Hongxin Wei; Renchunzi Xie; Hao Cheng; Lei Feng; Bo An; Yixuan Li
2022-05-17
Hierarchical Distribution-Aware Testing of Deep Learning. (98%)Wei Huang; Xingyu Zhao; Alec Banks; Victoria Cox; Xiaowei Huang
Bankrupting DoS Attackers Despite Uncertainty. (12%)Trisha Chakraborty; Abir Islam; Valerie King; Daniel Rayborn; Jared Saia; Maxwell Young
A two-steps approach to improve the performance of Android malware detectors. (10%)Nadia Daoudi; Kevin Allix; Tegawendé F. Bissyandé; Jacques Klein
Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability. (2%)Jinwei Xing; Takashi Nagata; Xinyun Zou; Emre Neftci; Jeffrey L. Krichmar
Recovering Private Text in Federated Learning of Language Models. (2%)Samyak Gupta; Yangsibo Huang; Zexuan Zhong; Tianyu Gao; Kai Li; Danqi Chen
Semi-Supervised Building Footprint Generation with Feature and Output Consistency Training. (1%)Qingyu Li; Yilei Shi; Xiao Xiang Zhu
2022-05-16
Attacking and Defending Deep Reinforcement Learning Policies. (99%)Chao Wang
Diffusion Models for Adversarial Purification. (99%)Weili Nie; Brandon Guo; Yujia Huang; Chaowei Xiao; Arash Vahdat; Anima Anandkumar
Robust Representation via Dynamic Feature Aggregation. (84%)Haozhe Liu; Haoqin Ji; Yuexiang Li; Nanjun He; Haoqian Wu; Feng Liu; Linlin Shen; Yefeng Zheng
Sparse Visual Counterfactual Explanations in Image Space. (83%)Valentyn Boreiko; Maximilian Augustin; Francesco Croce; Philipp Berens; Matthias Hein
On the Difficulty of Defending Self-Supervised Learning against Model Extraction. (67%)Adam Dziedzic; Nikita Dhawan; Muhammad Ahmad Kaleem; Jonas Guan; Nicolas Papernot
Transferability of Adversarial Attacks on Synthetic Speech Detection. (47%)Jiacheng Deng; Shunyi Chen; Li Dong; Diqun Yan; Rangding Wang
2022-05-15
Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks. (99%)Siddhartha Datta
Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection. (64%)Fan Wang; Adams Wai-Kin Kong
RoMFAC: A Robust Mean-Field Actor-Critic Reinforcement Learning against Adversarial Perturbations on States. (38%)Ziyuan Zhou; Guanjun Liu
Automation Slicing and Testing for in-App Deep Learning Models. (1%)Hao Wu; Yuhang Gong; Xiaopeng Ke; Hanzhong Liang; Minghao Li; Fengyuan Xu; Yunxin Liu; Sheng Zhong
2022-05-14
Evaluating Membership Inference Through Adversarial Robustness. (98%)Zhaoxi Zhang; Leo Yu Zhang; Xufei Zheng; Bilal Hussain Abbasi; Shengshan Hu
Verifying Neural Networks Against Backdoor Attacks. (2%)Long H. Pham; Jun Sun
2022-05-13
Universal Post-Training Backdoor Detection. (98%)Hang Wang; Zhen Xiang; David J. Miller; George Kesidis
l-Leaks: Membership Inference Attacks with Logits. (41%)Shuhao Li; Yajie Wang; Yuanzhang Li; Yu-an Tan
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations. (26%)Yongjie Wang; Hangwei Qian; Chunyan Miao
Millimeter-Wave Automotive Radar Spoofing. (2%)Mihai Ordean; Flavio D. Garcia
2022-05-12
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks. (75%)Pascale Gourdeau; Varun Kanade; Marta Kwiatkowska; James Worrell
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. (61%)Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
How to Combine Membership-Inference Attacks on Multiple Updated Models. (11%)Matthew Jagielski; Stanley Wu; Alina Oprea; Jonathan Ullman; Roxana Geambasu
Infrared Invisible Clothing: Hiding from Infrared Detectors at Multiple Angles in Real World. (4%)Xiaopei Zhu; Zhanhao Hu; Siyuan Huang; Jianmin Li; Xiaolin Hu
Smooth-Reduce: Leveraging Patches for Improved Certified Robustness. (2%)Ameya Joshi; Minh Pham; Minsu Cho; Leonid Boytsov; Filipe Condessa; J. Zico Kolter; Chinmay Hegde
Stalloris: RPKI Downgrade Attack. (1%)Tomas Hlavacek; Philipp Jeitner; Donika Mirdita; Haya Shulman; Michael Waidner
2022-05-11
Injection Attacks Reloaded: Tunnelling Malicious Payloads over DNS. (1%)Philipp Jeitner; Haya Shulman
The Hijackers Guide To The Galaxy: Off-Path Taking Over Internet Resources. (1%)Tianxiang Dai; Philipp Jeitner; Haya Shulman; Michael Waidner
A Longitudinal Study of Cryptographic API: a Decade of Android Malware. (1%)Adam Janovsky; Davide Maiorca; Dominik Macko; Vashek Matyas; Giorgio Giacinto
2022-05-10
Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training. (1%)Cheng Xue; Lequan Yu; Pengfei Chen; Qi Dou; Pheng-Ann Heng
White-box Testing of NLP models with Mask Neuron Coverage. (1%)Arshdeep Sekhon; Yangfeng Ji; Matthew B. Dwyer; Yanjun Qi
2022-05-09
Btech thesis report on adversarial attack detection and purification of adverserially attacked images. (99%)Dvij Kalaria
Using Frequency Attention to Make Adversarial Patch Powerful Against Person Detector. (98%)Xiaochun Lei; Chang Lu; Zetao Jiang; Zhaoting Gong; Xiang Cai; Linjun Lu
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks. (97%)Harel Berger; Amit Dvir; Chen Hajaj; Rony Ronen
Model-Contrastive Learning for Backdoor Defense. (87%)Zhihao Yue; Jun Xia; Zhiwei Ling; Ming Hu; Ting Wang; Xian Wei; Mingsong Chen
How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? (61%)Alvin Chan; Yew-Soon Ong; Clement Tan
Federated Multi-Armed Bandits Under Byzantine Attacks. (2%)Ilker Demirel; Yigit Yildirim; Cem Tekin
Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples. (2%)Lina Lin; Hanzhou Wu
2022-05-08
Fingerprint Template Invertibility: Minutiae vs. Deep Templates. (68%)Kanishka P. Wijewardena; Steven A. Grosz; Kai Cao; Anil K. Jain
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. (22%)Jingtao Li; Adnan Siraj Rakin; Xing Chen; Zhezhi He; Deliang Fan; Chaitali Chakrabarti
VPN: Verification of Poisoning in Neural Networks. (9%)Youcheng Sun; Muhammad Usman; Divya Gopinath; Corina S. Păsăreanu
FOLPETTI: A Novel Multi-Armed Bandit Smart Attack for Wireless Networks. (4%)Emilie Bout; Alessandro Brighente; Mauro Conti; Valeria Loscri
PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning Under the Support-Query Shift. (1%)Siyang Jiang; Wei Ding; Hsi-Wen Chen; Ming-Syan Chen
2022-05-07
A Simple Yet Efficient Method for Adversarial Word-Substitute Attack. (99%)Tianle Li; Yi Yang
Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees. (92%)Binghui Wang; Youqi Li; Pan Zhou
2022-05-06
Imperceptible Backdoor Attack: From Input Space to Feature Representation. (68%)Nan Zhong; Zhenxing Qian; Xinpeng Zhang
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data. (26%)Joceline Ziegler; Bjarne Pfitzner; Heinrich Schulz; Axel Saalbach; Bert Arnrich
Unlimited Lives: Secure In-Process Rollback with Isolated Domains. (1%)Merve Turhan; Thomas Nyman; Christoph Baumann; Jan Tobias Mühlberg
LPGNet: Link Private Graph Networks for Node Classification. (1%)Aashish Kolluri; Teodora Baluta; Bryan Hooi; Prateek Saxena
2022-05-05
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems. (99%)Gaurav Kumar Nayak; Ruchit Rawal; Rohit Lal; Himanshu Patil; Anirban Chakraborty
Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness. (78%)Dávid Szeghy; Mahmoud Aslan; Áron Fóthi; Balázs Mészáros; Zoltán Ádám Milacski; András Lőrincz
Can collaborative learning be private, robust and scalable? (61%)Dmitrii Usynin; Helena Klause; Daniel Rueckert; Georgios Kaissis
Large Scale Transfer Learning for Differentially Private Image Classification. (2%)Harsh Mehta; Abhradeep Thakurta; Alexey Kurakin; Ashok Cutkosky
Heterogeneous Domain Adaptation with Adversarial Neural Representation Learning: Experiments on E-Commerce and Cybersecurity. (1%)Mohammadreza Ebrahimi; Yidong Chai; Hao Helen Zhang; Hsinchun Chen
Are GAN-based Morphs Threatening Face Recognition? (1%)Eklavya Sarkar; Pavel Korshunov; Laurent Colbois; Sébastien Marcel
2022-05-04
Based-CE white-box adversarial attack will not work using super-fitting. (99%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Rethinking Classifier And Adversarial Attack. (98%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. (98%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Sebastiano Vascon; Werner Zellinger; Bernhard A. Moser; Alina Oprea; Battista Biggio; Marcello Pelillo; Fabio Roli
Robust Conversational Agents against Imperceptible Toxicity Triggers. (92%)Ninareh Mehrabi; Ahmad Beirami; Fred Morstatter; Aram Galstyan
Subverting Fair Image Search with Generative Adversarial Perturbations. (83%)Avijit Ghosh; Matthew Jagielski; Christo Wilson
2022-05-03
Adversarial Training for High-Stakes Reliability. (98%)Daniel M. Ziegler; Seraphina Nix; Lawrence Chan; Tim Bauman; Peter Schmidt-Nielsen; Tao Lin; Adam Scherlis; Noa Nabeshima; Ben Weinstein-Raun; Daniel de Haas; Buck Shlegeris; Nate Thomas
Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks. (96%)Jonathan Rusert; Padmini Srinivasan
On the uncertainty principle of neural networks. (3%)Jun-Jie Zhang; Dong-Xiao Zhang; Jian-Nan Chen; Long-Gang Pang
Meta-Cognition. An Inverse-Inverse Reinforcement Learning Approach for Cognitive Radars. (1%)Kunal Pattanayak; Vikram Krishnamurthy; Christopher Berry
2022-05-02
SemAttack: Natural Textual Attacks via Different Semantic Spaces. (96%)Boxin Wang; Chejian Xu; Xiangyu Liu; Yu Cheng; Bo Li
Deep-Attack over the Deep Reinforcement Learning. (93%)Yang Li; Quan Pan; Erik Cambria
Enhancing Adversarial Training with Feature Separability. (92%)Yaxin Li; Xiaorui Liu; Han Xu; Wentao Wang; Jiliang Tang
BERTops: Studying BERT Representations under a Topological Lens. (92%)Jatin Chauhan; Manohar Kaul
MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer. (83%)Shoukun Sun; Min Xian; Aleksandar Vakanski; Hossny Ghanem
Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters. (1%)Ole Christian Eidheim
2022-05-01
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction. (98%)Yong Xie; Dakuo Wang; Pin-Yu Chen; Jinjun Xiong; Sijia Liu; Sanmi Koyejo
DDDM: a Brain-Inspired Framework for Robust Classification. (76%)Xiyuan Chen; Xingyu Li; Yi Zhou; Tianming Yang
Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances. (9%)Shoujie Tong; Qingxiu Dong; Damai Dai; Yifan Song; Tianyu Liu; Baobao Chang; Zhifang Sui
A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness. (3%)Jeremiah Zhe Liu; Shreyas Padhy; Jie Ren; Zi Lin; Yeming Wen; Ghassen Jerfel; Zack Nado; Jasper Snoek; Dustin Tran; Balaji Lakshminarayanan
Adversarial Planning. (2%)Valentin Vie; Ryan Sheatsley; Sophia Beyda; Sushrut Shringarputale; Kevin Chan; Trent Jaeger; Patrick McDaniel
2022-04-30
Optimizing One-pixel Black-box Adversarial Attacks. (82%)Tianxun Zhou; Shubhankar Agrawal; Prateek Manocha
Cracking White-box DNN Watermarks via Invariant Neuron Transforms. (26%)Yifan Yan; Xudong Pan; Yining Wang; Mi Zhang; Min Yang
Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees. (1%)Jonathan Brophy; Zayd Hammoudeh; Daniel Lowd
Loss Function Entropy Regularization for Diverse Decision Boundaries. (1%)Chong Sue Sin
2022-04-29
Adversarial attacks on an optical neural network. (92%)Shuming Jiao; Ziwei Song; Shuiying Xiang
Logically Consistent Adversarial Attacks for Soft Theorem Provers. (2%)Alexander Gaskell; Yishu Miao; Lucia Specia; Francesca Toni
Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation. (1%)Heng Zhu; Qing Ling
2022-04-28
Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations. (99%)Na Liu; Mark Dras; Wei Emma Zhang
Formulating Robustness Against Unforeseen Attacks. (99%)Sihui Dai; Saeed Mahloujifar; Prateek Mittal
Randomized Smoothing under Attack: How Good is it in Pratice? (84%)Thibault Maho; Teddy Furon; Erwan Le Merrer
Improving robustness of language models from a geometry-aware perspective. (68%)Bin Zhu; Zhaoquan Gu; Le Wang; Jinyin Chen; Qi Xuan
Mixup-based Deep Metric Learning Approaches for Incomplete Supervision. (50%)Luiz H. Buris; Daniel C. G. Pedronette; Joao P. Papa; Jurandy Almeida; Gustavo Carneiro; Fabio A. Faria
AGIC: Approximate Gradient Inversion Attack on Federated Learning. (16%)Jin Xu; Chi Hong; Jiyue Huang; Lydia Y. Chen; Jérémie Decouchant
An Online Ensemble Learning Model for Detecting Attacks in Wireless Sensor Networks. (1%)Hiba Tabbaa; Samir Ifzarne; Imad Hafidi
2022-04-27
Adversarial Fine-tune with Dynamically Regulated Adversary. (99%)Pengyue Hou; Ming Zhou; Jie Han; Petr Musilek; Xingyu Li
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame. (98%)Youngjoon Yu; Hong Joo Lee; Hakmin Lee; Yong Man Ro
An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework. (81%)Ehsan Nowroozi; Abhishek; Mohammadreza Mohammadi; Mauro Conti
2022-04-26
Boosting Adversarial Transferability of MLP-Mixer. (99%)Haoran Lyu; Yajie Wang; Yu-an Tan; Huipeng Zhou; Yuhang Zhao; Quanxin Zhang
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping. (99%)Junhao Dong; Yuan Wang; Jianhuang Lai; Xiaohua Xie
Improving the Transferability of Adversarial Examples with Restructure Embedded Patches. (99%)Huipeng Zhou; Yu-an Tan; Yajie Wang; Haoran Lyu; Shangbo Wu; Yuanzhang Li
On Fragile Features and Batch Normalization in Adversarial Training. (97%)Nils Philipp Walter; David Stutz; Bernt Schiele
Mixed Strategies for Security Games with General Defending Requirements. (75%)Rufan Bai; Haoxing Lin; Xinyu Yang; Xiaowei Wu; Minming Li; Weijia Jia
Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios. (26%)Dazhong Rong; Qinming He; Jianhai Chen
Designing Perceptual Puzzles by Differentiating Probabilistic Programs. (13%)Kartik Chandra; Tzu-Mao Li; Joshua Tenenbaum; Jonathan Ragan-Kelley
Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies. (8%)Shaltiel Eloul; Fran Silavong; Sanket Kamthe; Antonios Georgiadis; Sean J. Moran
Performance Analysis of Out-of-Distribution Detection on Trained Neural Networks. (4%)Jens Henriksson; Christian Berger; Markus Borg; Lars Tornberg; Sankar Raman Sathyamoorthy; Cristofer Englund
2022-04-25
Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks. (99%)Jiawei Zhang; Jinwei Wang; Hao Wang; Xiangyang Luo
When adversarial examples are excusable. (89%)Pieter-Jan Kindermans; Charles Staats
A Simple Structure For Building A Robust Model. (81%)Xiao Tan; JingBo Gao; Ruolin Li
Real or Virtual: A Video Conferencing Background Manipulation-Detection System. (67%)Ehsan Nowroozi; Yassine Mekdad; Mauro Conti; Simone Milani; Selcuk Uluagac; Berrin Yanikoglu
Can Rationalization Improve Robustness? (12%)Howard Chen; Jacqueline He; Karthik Narasimhan; Danqi Chen
PhysioGAN: Training High Fidelity Generative Model for Physiological Sensor Readings. (1%)Moustafa Alzantot; Luis Garcia; Mani Srivastava
VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization. (1%)Minghui Chen; Cheng Wen; Feng Zheng; Fengxiang He; Ling Shao
Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications. (1%)Han Cai; Ji Lin; Yujun Lin; Zhijian Liu; Haotian Tang; Hanrui Wang; Ligeng Zhu; Song Han
2022-04-24
A Hybrid Defense Method against Adversarial Attacks on Traffic Sign Classifiers in Autonomous Vehicles. (99%)Zadid Khan; Mashrur Chowdhury; Sakib Mahmud Khan
Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity. (81%)Marco Marchetti; Edmond S. L. Ho
2022-04-23
Smart App Attack: Hacking Deep Learning Models in Android Apps. (98%)Yujin Huang; Chunyang Chen
Towards Data-Free Model Stealing in a Hard Label Setting. (13%)Sunandini Sanyal; Sravanti Addepalli; R. Venkatesh Babu
Reinforced Causal Explainer for Graph Neural Networks. (1%)Xiang Wang; Yingxin Wu; An Zhang; Fuli Feng; Xiangnan He; Tat-Seng Chua
2022-04-22
How Sampling Impacts the Robustness of Stochastic Neural Networks. (99%)Sina Däubener; Asja Fischer
A Tale of Two Models: Constructing Evasive Attacks on Edge Models. (83%)Wei Hao; Aahil Awatramani; Jiayang Hu; Chengzhi Mao; Pin-Chun Chen; Eyal Cidon; Asaf Cidon; Junfeng Yang
Enhancing the Transferability via Feature-Momentum Adversarial Attack. (82%)Xianglong; Yuezun Li; Haipeng Qu; Junyu Dong
Data-Efficient Backdoor Attacks. (76%)Pengfei Xia; Ziqiang Li; Wei Zhang; Bin Li
2022-04-21
A Mask-Based Adversarial Defense Scheme. (99%)Weizhen Xu; Chenyi Zhang; Fangzhen Zhao; Liangda Fang
Is Neuron Coverage Needed to Make Person Detection More Robust? (98%)Svetlana Pavlitskaya; Şiyar Yıkmış; J. Marius Zöllner
Testing robustness of predictions of trained classifiers against naturally occurring perturbations. (98%)Sebastian Scher; Andreas Trügler
Adversarial Contrastive Learning by Permuting Cluster Assignments. (15%)Muntasir Wahed; Afrina Tabassum; Ismini Lourentzou
Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation. (4%)Jun Xia; Ting Wang; Jiepin Ding; Xian Wei; Mingsong Chen
Detecting Topology Attacks against Graph Neural Networks. (1%)Senrong Xu; Yuan Yao; Liangyue Li; Wei Yang; Feng Xu; Hanghang Tong
2022-04-20
GUARD: Graph Universal Adversarial Defense. (99%)Jintang Li; Jie Liao; Ruofan Wu; Liang Chen; Jiawang Dan; Changhua Meng; Zibin Zheng; Weiqiang Wang
Adversarial Scratches: Deployable Attacks to CNN Classifiers. (99%)Loris Giulivi; Malhar Jere; Loris Rossi; Farinaz Koushanfar; Gabriela Ciocarlie; Briland Hitaj; Giacomo Boracchi
Fast AdvProp. (98%)Jieru Mei; Yucheng Han; Yutong Bai; Yixiao Zhang; Yingwei Li; Xianhang Li; Alan Yuille; Cihang Xie
Case-Aware Adversarial Training. (98%)Mingyuan Fan; Yang Liu; Wenzhong Guo; Ximeng Liu; Jianhua Li
Improved Worst-Group Robustness via Classifier Retraining on Independent Splits. (1%)Thien Hang Nguyen; Hongyang R. Zhang; Huy Le Nguyen
2022-04-19
Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks. (99%)Kenneth T. Co; David Martinez-Rego; Zhongyuan Hau; Emil C. Lupu
Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems. (86%)Xugui Zhou; Maxfield Kouzel; Homa Alemzadeh
Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation. (83%)Siyu Lai; Zhen Yang; Fandong Meng; Xue Zhang; Yufeng Chen; Jinan Xu; Jie Zhou
2022-04-18
UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples. (99%)Rahim Taheri
Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge. (99%)Qun Song; Zhenyu Yan; Wenjie Luo; Rui Tan
CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval. (99%)Xunguang Wang; Yiqun Lin; Xiaomeng Li
Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors. (98%)Nyee Thoang Lim; Meng Yi Kuan; Muxin Pu; Mei Kuan Lim; Chun Yong Chong
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. (75%)Enyan Dai; Tianxiang Zhao; Huaisheng Zhu; Junjie Xu; Zhimeng Guo; Hui Liu; Jiliang Tang; Suhang Wang
CorrGAN: Input Transformation Technique Against Natural Corruptions. (70%)Mirazul Haque; Christof J. Budnik; Wei Yang
Poisons that are learned faster are more effective. (64%)Pedro Sandoval-Segura; Vasu Singla; Liam Fowl; Jonas Geiping; Micah Goldblum; David Jacobs; Tom Goldstein
2022-04-17
Residue-Based Natural Language Adversarial Attack Detection. (99%)Vyas Raina; Mark Gales
Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning. (95%)Jun Guo; Yonghong Chen; Yihang Hao; Zixin Yin; Yin Yu; Simin Li
2022-04-16
SETTI: A Self-supervised Adversarial Malware Detection Architecture in an IoT Environment. (95%)Marjan Golmaryami; Rahim Taheri; Zahra Pooranian; Mohammad Shojafar; Pei Xiao
Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case. (67%)Febrianti Wibawa; Ferhat Ozgur Catak; Salih Sarp; Murat Kuzlu; Umit Cali
2022-04-15
Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning. (92%)Mathias Lechner; Alexander Amini; Daniela Rus; Thomas A. Henzinger
2022-04-14
From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Planting Undetectable Backdoors in Machine Learning Models. (99%)Shafi Goldwasser; Michael P. Kim; Vinod Vaikuntanathan; Or Zamir
Q-TART: Quickly Training for Adversarial Robustness and in-Transferability. (50%)Madan Ravi Ganesh; Salimeh Yasaei Sekeh; Jason J. Corso
Robotic and Generative Adversarial Attacks in Offline Writer-independent Signature Verification. (41%)Jordan J. Bird
2022-04-13
Task-Driven Data Augmentation for Vision-Based Robotic Control. (96%)Shubhankar Agarwal; Sandeep P. Chinchali
Stealing Malware Classifiers and AVs at Low False Positive Conditions. (82%)Maria Rigaki; Sebastian Garcia
Defensive Patches for Robust Recognition in the Physical World. (80%)Jiakai Wang; Zixin Yin; Pengfei Hu; Aishan Liu; Renshuai Tao; Haotong Qin; Xianglong Liu; Dacheng Tao
A Novel Approach to Train Diverse Types of Language Models for Health Mention Classification of Tweets. (78%)Pervaiz Iqbal Khan; Imran Razzak; Andreas Dengel; Sheraz Ahmed
Overparameterized Linear Regression under Adversarial Attacks. (76%)Antônio H. Ribeiro; Thomas B. Schön
Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. (38%)Huming Qiu; Hua Ma; Zhi Zhang; Alsharif Abuadbba; Wei Kang; Anmin Fu; Yansong Gao
A Natural Language Processing Approach for Instruction Set Architecture Identification. (1%)Dinuka Sahabandu; Sukarno Mertoguno; Radha Poovendran
2022-04-12
Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS. (99%)Ke He; Dan Dongseong Kim; Jing Sun; Jeong Do Yoo; Young Hun Lee; Huy Kang Kim
Examining the Proximity of Adversarial Examples to Class Manifolds in Deep Networks. (98%)Štefan Pócoš; Iveta Bečková; Igor Farkaš
Toward Robust Spiking Neural Network Against Adversarial Perturbation. (98%)Ling Liang; Kaidi Xu; Xing Hu; Lei Deng; Yuan Xie
Machine Learning Security against Data Poisoning: Are We There Yet? (92%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms. (11%)Saeed Mahloujifar; Alexandre Sablayrolles; Graham Cormode; Somesh Jha
3DeformRS: Certifying Spatial Deformations on Point Clouds. (9%)Gabriel Pérez S.; Juan C. Pérez; Motasem Alfarra; Silvio Giancola; Bernard Ghanem
2022-04-11
A Simple Approach to Adversarial Robustness in Few-shot Image Classification. (98%)Akshayvarun Subramanya; Hamed Pirsiavash
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. (92%)Yi Zeng; Minzhou Pan; Hoang Anh Just; Lingjuan Lyu; Meikang Qiu; Ruoxi Jia
Generalizing Adversarial Explanations with Grad-CAM. (84%)Tanmay Chakraborty; Utkarsh Trehan; Khawla Mallat; Jean-Luc Dugelay
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization. (83%)Jungbeom Lee; Eunji Kim; Jisoo Mok; Sungroh Yoon
Exploring the Universal Vulnerability of Prompt-based Learning Paradigm. (47%)Lei Xu; Yangyi Chen; Ganqu Cui; Hongcheng Gao; Zhiyuan Liu
medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space. (1%)Amil Dravid; Florian Schiffers; Boqing Gong; Aggelos K. Katsaggelos
2022-04-10
"That Is a Suspicious Reaction!": Interpreting Logits Variation to Detect NLP Adversarial Attacks. (88%)Edoardo Mosca; Shreyash Agarwal; Javier Rando-Ramirez; Georg Groh
Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks. (54%)Karthikeyan Nagarajan; Junde Li; Sina Sayyah Ensan; Mohammad Nasim Imtiaz Khan; Sachhidh Kannan; Swaroop Ghosh
Measuring the False Sense of Security. (26%)Carlos Gomes
2022-04-08
Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser. (99%)Sonal Joshi; Saurabh Kataria; Yiwen Shao; Piotr Zelasko; Jesus Villalba; Sanjeev Khudanpur; Najim Dehak
AdvEst: Adversarial Perturbation Estimation to Classify and Detect Adversarial Attacks against Speaker Identification. (99%)Sonal Joshi; Saurabh Kataria; Jesus Villalba; Najim Dehak
Evaluating the Adversarial Robustness for Fourier Neural Operators. (92%)Abolaji D. Adesoji; Pin-Yu Chen
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense. (87%)Shaik Mohammed Maqsood; Viveros Manuela Ceron; Addluri GowthamKrishna
An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks. (45%)Xinqiao Zhang; Huili Chen; Ke Huang; Farinaz Koushanfar
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment. (13%)Qiang Hu; Yuejun Guo; Maxime Cordy; Xiaofei Xie; Wei Ma; Mike Papadakis; Yves Le Traon
Neural Tangent Generalization Attacks. (12%)Chia-Hung Yuan; Shan-Hung Wu
Labeling-Free Comparison Testing of Deep Learning Models. (11%)Yuejun Guo; Qiang Hu; Maxime Cordy; Xiaofei Xie; Mike Papadakis; Yves Le Traon
Does Robustness on ImageNet Transfer to Downstream Tasks? (2%)Yutaro Yamada; Mayu Otani
The self-learning AI controller for adaptive power beaming with fiber-array laser transmitter system. (1%)A. M. Vorontsov; G. A. Filimonov
2022-04-07
Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings. (99%)Yuhao Mao; Chong Fu; Saizhuo Wang; Shouling Ji; Xuhong Zhang; Zhenguang Liu; Jun Zhou; Alex X. Liu; Raheem Beyah; Ting Wang
Adaptive-Gravity: A Defense Against Adversarial Samples. (99%)Ali Mirzaeian; Zhi Tian; Sai Manoj P D; Banafsheh S. Latibari; Ioannis Savidis; Houman Homayoun; Avesta Sasan
Using Multiple Self-Supervised Tasks Improves Model Robustness. (81%)Matthew Lawhon; Chengzhi Mao; Junfeng Yang
Transformer-Based Language Models for Software Vulnerability Detection: Performance, Model's Security and Platforms. (69%)Chandra Thapa; Seung Ick Jang; Muhammad Ejaz Ahmed; Seyit Camtepe; Josef Pieprzyk; Surya Nepal
Defending Active Directory by Combining Neural Network based Dynamic Program and Evolutionary Diversity Optimisation. (1%)Diksha Goel; Max Hector Ward-Graham; Aneta Neumann; Frank Neumann; Hung Nguyen; Mingyu Guo
2022-04-06
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks. (99%)Xu Han; Anmin Liu; Yifeng Xiong; Yanbo Fan; Kun He
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network. (95%)Byung-Kwan Lee; Junho Kim; Yong Man Ro
Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck. (93%)Junho Kim; Byung-Kwan Lee; Yong Man Ro
Optimization Models and Interpretations for Three Types of Adversarial Perturbations against Support Vector Machines. (68%)Wen Su; Qingna Li; Chunfeng Cui
Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems. (62%)Furkan Mumcu; Keval Doshi; Yasin Yilmaz
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures. (33%)Md Tamjid Hossain; Shahriar Badsha; Hung La; Haoting Shen; Shafkat Islam; Ibrahim Khalil; Xun Yi
2022-04-05
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning. (98%)Nilaksh Das; Duen Horng Chau
Adversarial Robustness through the Lens of Convolutional Filters. (87%)Paul Gavrikov; Janis Keuper
User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning. (2%)Tiantian Feng; Raghuveer Peri; Shrikanth Narayanan
SwapMix: Diagnosing and Regularizing the Over-Reliance on Visual Context in Visual Question Answering. (1%)Vipul Gupta; Zhuowan Li; Adam Kortylewski; Chenyu Zhang; Yingwei Li; Alan Yuille
GAIL-PT: A Generic Intelligent Penetration Testing Framework with Generative Adversarial Imitation Learning. (1%)Jinyin Chen; Shulong Hu; Haibin Zheng; Changyou Xing; Guomin Zhang
2022-04-04
DAD: Data-free Adversarial Defense at Test Time. (99%)Gaurav Kumar Nayak; Ruchit Rawal; Anirban Chakraborty
SecureSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition. (99%)Jianfei Yang; Han Zou; Lihua Xie
Experimental quantum adversarial learning with programmable superconducting qubits. (99%)Wenhui Ren; Weikang Li; Shibo Xu; Ke Wang; Wenjie Jiang; Feitong Jin; Xuhao Zhu; Jiachen Chen; Zixuan Song; Pengfei Zhang; Hang Dong; Xu Zhang; Jinfeng Deng; Yu Gao; Chuanyu Zhang; Yaozu Wu; Bing Zhang; Qiujiang Guo; Hekang Li; Zhen Wang; Jacob Biamonte; Chao Song; Dong-Ling Deng; H. Wang
PRADA: Practical Black-Box Adversarial Attacks against Neural Ranking Models. (99%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Yixing Fan; Xueqi Cheng
FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and Countering Deepfakes. (98%)Paarth Neekhara; Shehzeen Hussain; Xinqiao Zhang; Ke Huang; Julian McAuley; Farinaz Koushanfar
2022-04-03
Breaking the De-Pois Poisoning Defense. (98%)Alaa Anani; Mohamed Ghanem; Lotfy Abdel Khaliq
Adversarially robust segmentation models learn perceptually-aligned gradients. (16%)Pedro Sandoval-Segura
Detecting In-vehicle Intrusion via Semi-supervised Learning-based Convolutional Adversarial Autoencoders. (1%)Thien-Nu Hoang; Daehee Kim
Improving Vision Transformers by Revisiting High-frequency Components. (1%)Jiawang Bai; Li Yuan; Shu-Tao Xia; Shuicheng Yan; Zhifeng Li; Wei Liu
2022-04-02
Adversarial Neon Beam: Robust Physical-World Adversarial Attack to DNNs. (98%)Chengyin Hu; Kalibinuer Tiliwalidi
DST: Dynamic Substitute Training for Data-free Black-box Attack. (98%)Wenxuan Wang; Xuelin Qian; Yanwei Fu; Xiangyang Xue
2022-04-01
SkeleVision: Towards Adversarial Resiliency of Person Tracking with Multi-Task Learning. (47%)Nilaksh Das; Sheng-Yun Peng; Duen Horng Chau
Robust and Accurate -- Compositional Architectures for Randomized Smoothing. (31%)Miklós Z. Horváth; Mark Niklas Müller; Marc Fischer; Martin Vechev
FrequencyLowCut Pooling -- Plug & Play against Catastrophic Overfitting. (16%)Julia Grabinski; Steffen Jung; Janis Keuper; Margret Keuper
Preventing Distillation-based Attacks on Neural Network IP. (2%)Mahdieh Grailoo; Zain Ul Abideen; Mairo Leier; Samuel Pagliarini
FedRecAttack: Model Poisoning Attack to Federated Recommendation. (1%)Dazhong Rong; Shuai Ye; Ruoyan Zhao; Hon Ning Yuen; Jianhai Chen; Qinming He
2022-03-31
Improving Adversarial Transferability via Neuron Attribution-Based Attacks. (99%)Jianping Zhang; Weibin Wu; Jen-tse Huang; Yizhan Huang; Wenxuan Wang; Yuxin Su; Michael R. Lyu
Adversarial Examples in Random Neural Networks with General Activations. (98%)Andrea Montanari; Yuchen Wu
Scalable Whitebox Attacks on Tree-based Models. (96%)Giuseppe Castiglione; Gavin Ding; Masoud Hashemi; Christopher Srinivasa; Ga Wu
Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond. (86%)Yi Yu; Wenhan Yang; Yap-Peng Tan; Alex C. Kot
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. (81%)Florian Tramèr; Reza Shokri; Ayrton San Joaquin; Hoang Le; Matthew Jagielski; Sanghyun Hong; Nicholas Carlini
2022-03-30
Investigating Top-$k$ White-Box and Transferable Black-box Attack. (87%)Chaoning Zhang; Philipp Benz; Adil Karjauv; Jae Won Cho; Kang Zhang; In So Kweon
Sensor Data Validation and Driving Safety in Autonomous Driving Systems. (83%)Jindi Zhang
Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis. (56%)Yi Chang; Zhao Ren; Thanh Tam Nguyen; Wolfgang Nejdl; Björn W. Schuller
2022-03-29
Mel Frequency Spectral Domain Defenses against Adversarial Attacks on Speech Recognition Systems. (99%)Nicholas Mehlman; Anirudh Sreeram; Raghuveer Peri; Shrikanth Narayanan
Zero-Query Transfer Attacks on Context-Aware Object Detectors. (99%)Zikui Cai; Shantanu Rane; Alejandro E. Brito; Chengyu Song; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury; M. Salman Asif
Exploring Frequency Adversarial Attacks for Face Forgery Detection. (99%)Shuai Jia; Chao Ma; Taiping Yao; Bangjie Yin; Shouhong Ding; Xiaokang Yang
StyleFool: Fooling Video Classification Systems via Style Transfer. (99%)Yuxin Cao; Xi Xiao; Ruoxi Sun; Derui Wang; Minhui Xue; Sheng Wen
Recent improvements of ASR models in the face of adversarial attacks. (98%)Raphael Olivier; Bhiksha Raj
Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients. (83%)Kaidong Li; Ziming Zhang; Cuncong Zhong; Guanghui Wang
Treatment Learning Transformer for Noisy Image Classification. (26%)Chao-Han Huck Yang; I-Te Danny Hung; Yi-Chieh Liu; Pin-Yu Chen
Can NMT Understand Me? Towards Perturbation-based Evaluation of NMT Models for Code Generation. (11%)Pietro Liguori; Cristina Improta; Simona De Vivo; Roberto Natella; Bojan Cukic; Domenico Cotroneo
2022-03-28
Boosting Black-Box Adversarial Attacks with Meta Learning. (99%)Junjie Fu; Jian Sun; Gang Wang
A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness. (62%)Souvik Kundu; Sairam Sundaresan; Massoud Pedram; Peter A. Beerel
Robust Unlearnable Examples: Protecting Data Against Adversarial Learning. (16%)Shaopeng Fu; Fengxiang He; Yang Liu; Li Shen; Dacheng Tao
Neurosymbolic hybrid approach to driver collision warning. (15%)Kyongsik Yun; Thomas Lu; Alexander Huyen; Patrick Hammer; Pei Wang
Attacker Attribution of Audio Deepfakes. (1%)Nicolas M. Müller; Franziska Dieckmann; Jennifer Williams
2022-03-27
Rebuild and Ensemble: Exploring Defense Against Text Adversaries. (76%)Linyang Li; Demin Song; Jiehang Zeng; Ruotian Ma; Xipeng Qiu
Adversarial Representation Sharing: A Quantitative and Secure Collaborative Learning Framework. (8%)Jikun Chen; Feng Qiang; Na Ruan
2022-03-26
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective. (99%)Yimeng Zhang; Yuguang Yao; Jinghan Jia; Jinfeng Yi; Mingyi Hong; Shiyu Chang; Sijia Liu
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies. (99%)Zhuang Qian; Kaizhu Huang; Qiu-Feng Wang; Xu-Yao Zhang
Reverse Engineering of Imperceptible Adversarial Image Perturbations. (99%)Yifan Gong; Yuguang Yao; Yize Li; Yimeng Zhang; Xiaoming Liu; Xue Lin; Sijia Liu
Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding. (33%)Zhilu Wang; Chao Huang; Qi Zhu
A Systematic Survey of Attack Detection and Prevention in Connected and Autonomous Vehicles. (1%)Trupil Limbasiya; Ko Zheng Teng; Sudipta Chattopadhyay; Jianying Zhou
A Roadmap for Big Model. (1%)Sha Yuan; Hanyu Zhao; Shuai Zhao; Jiahong Leng; Yangxiao Liang; Xiaozhi Wang; Jifan Yu; Xin Lv; Zhou Shao; Jiaao He; Yankai Lin; Xu Han; Zhenghao Liu; Ning Ding; Yongming Rao; Yizhao Gao; Liang Zhang; Ming Ding; Cong Fang; Yisen Wang; Mingsheng Long; Jing Zhang; Yinpeng Dong; Tianyu Pang; Peng Cui; Lingxiao Huang; Zheng Liang; Huawei Shen; Hui Zhang; Quanshi Zhang; Qingxiu Dong; Zhixing Tan; Mingxuan Wang; Shuo Wang; Long Zhou; Haoran Li; Junwei Bao; Yingwei Pan; Weinan Zhang; Zhou Yu; Rui Yan; Chence Shi; Minghao Xu; Zuobai Zhang; Guoqiang Wang; Xiang Pan; Mengjie Li; Xiaoyu Chu; Zijun Yao; Fangwei Zhu; Shulin Cao; Weicheng Xue; Zixuan Ma; Zhengyan Zhang; Shengding Hu; Yujia Qin; Chaojun Xiao; Zheni Zeng; Ganqu Cui; Weize Chen; Weilin Zhao; Yuan Yao; Peng Li; Wenzhao Zheng; Wenliang Zhao; Ziyi Wang; Borui Zhang; Nanyi Fei; Anwen Hu; Zenan Ling; Haoyang Li; Boxi Cao; Xianpei Han; Weidong Zhan; Baobao Chang; Hao Sun; Jiawen Deng; Chujie Zheng; Juanzi Li; Lei Hou; Xigang Cao; Jidong Zhai; Zhiyuan Liu; Maosong Sun; Jiwen Lu; Zhiwu Lu; Qin Jin; Ruihua Song; Ji-Rong Wen; Zhouchen Lin; Liwei Wang; Hang Su; Jun Zhu; Zhifang Sui; Jiajun Zhang; Yang Liu; Xiaodong He; Minlie Huang; Jian Tang; Jie Tang
2022-03-25
Enhancing Transferability of Adversarial Examples with Spatial Momentum. (99%)Guoqiu Wang; Huanqian Yan; Xingxing Wei
Origins of Low-dimensional Adversarial Perturbations. (98%)Elvis Dohmatob; Chuan Guo; Morgane Goibert
Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness. (89%)Giulio Lovisotto; Nicole Finnie; Mauricio Munoz; Chaithanya Kumar Mummadi; Jan Hendrik Metzen
Improving Robustness of Jet Tagging Algorithms with Adversarial Training. (10%)Annika Stein; Xavier Coubez; Spandan Mondal; Andrzej Novak; Alexander Schmidt
A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training. (5%)Yifei Wang; Yisen Wang; Jiansheng Yang; Zhouchen Lin
A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration. (1%)Ramya Hebbalaguppe; Jatin Prakash; Neelabh Madan; Chetan Arora
2022-03-24
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning. (99%)Arezoo Rajabi; Bhaskar Ramasubramanian; Radha Poovendran
A Perturbation Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow. (99%)Jenny Schmalfuss; Philipp Scholze; Andrés Bruhn
NPC: Neuron Path Coverage via Characterizing Decision Logic of Deep Neural Networks. (93%)Xiaofei Xie; Tianlin Li; Jian Wang; Lei Ma; Qing Guo; Felix Juefei-Xu; Yang Liu
MERLIN -- Malware Evasion with Reinforcement LearnINg. (56%)Tony Quertier; Benjamin Marais; Stéphane Morucci; Bertrand Fournel
Repairing Group-Level Errors for DNNs Using Weighted Regularization. (13%)Ziyuan Zhong; Yuchi Tian; Conor J. Sweeney; Vicente Ordonez-Roman; Baishakhi Ray
A Manifold View of Adversarial Risk. (11%)Wenjia Zhang; Yikai Zhang; Xiaoling Hu; Mayank Goswami; Chao Chen; Dimitris Metaxas
2022-03-23
Powerful Physical Adversarial Examples Against Practical Face Recognition Systems. (99%)Inderjeet Singh; Toshinori Araki; Kazuya Kakizaki
Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation. (99%)Hanjie Chen; Yangfeng Ji
Input-specific Attention Subnetworks for Adversarial Detection. (99%)Emil Biju; Anirudh Sriram; Pratyush Kumar; Mitesh M Khapra
Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection. (69%)Liang Chen; Yong Zhang; Yibing Song; Lingqiao Liu; Jue Wang
Distort to Detect, not Affect: Detecting Stealthy Sensor Attacks with Micro-distortion. (3%)Suman Sourav; Binbin Chen
On the (Limited) Generalization of MasterFace Attacks and Its Relation to the Capacity of Face Representations. (3%)Philipp Terhörst; Florian Bierbaum; Marco Huber; Naser Damer; Florian Kirchbuchner; Kiran Raja; Arjan Kuijper
2022-03-22
Exploring High-Order Structure for Robust Graph Structure Learning. (99%)Guangqian Yang; Yibing Zhan; Jinlong Li; Baosheng Yu; Liu Liu; Fengxiang He
On Adversarial Robustness of Large-scale Audio Visual Learning. (93%)Juncheng B Li; Shuhui Qu; Xinjian Li; Po-Yao Huang; Florian Metze
On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes. (86%)Elvis Dohmatob; Alberto Bietti
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis. (78%)Yuwei Sun; Hideya Ochiai; Jun Sakuma
A Girl Has A Name, And It's ... Adversarial Authorship Attribution for Deobfuscation. (2%)Wanyue Zhai; Jonathan Rusert; Zubair Shafiq; Padmini Srinivasan
GradViT: Gradient Inversion of Vision Transformers. (1%)Ali Hatamizadeh; Hongxu Yin; Holger Roth; Wenqi Li; Jan Kautz; Daguang Xu; Pavlo Molchanov
On Robust Classification using Contractive Hamiltonian Neural ODEs. (1%)Muhammad Zakwan; Liang Xu; Giancarlo Ferrari-Trecate
2022-03-21
Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack. (92%)Chi Liu; Huajie Chen; Tianqing Zhu; Jun Zhang; Wanlei Zhou
Integrity Fingerprinting of DNN with Double Black-box Design and Verification. (10%)Shuo Wang; Sidharth Agarwal; Sharif Abuadbba; Kristen Moore; Surya Nepal; Salil Kanhere
On The Robustness of Offensive Language Classifiers. (2%)Jonathan Rusert; Zubair Shafiq; Padmini Srinivasan
Defending against Co-residence Attack in Energy-Efficient Cloud: An Optimization based Real-time Secure VM Allocation Strategy. (1%)Lu Cao; Ruiwen Li; Xiaojun Ruan; Yuhong Liu
2022-03-20
An Intermediate-level Attack Framework on The Basis of Linear Regression. (99%)Yiwen Guo; Qizhang Li; Wangmeng Zuo; Hao Chen
A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement. (99%)Yuting Yang; Pei Huang; Juan Cao; Jintao Li; Yun Lin; Jin Song Dong; Feifei Ma; Jian Zhang
Leveraging Expert Guided Adversarial Augmentation For Improving Generalization in Named Entity Recognition. (82%)Aaron Reich; Jiaao Chen; Aastha Agrawal; Yanzhe Zhang; Diyi Yang
Adversarial Parameter Attack on Deep Neural Networks. (62%)Lijia Yu; Yihan Wang; Xiao-Shan Gao
2022-03-19
Adversarial Defense via Image Denoising with Chaotic Encryption. (99%)Shi Hu; Eric Nalisnick; Max Welling
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense. (98%)Thai Le; Jooyoung Lee; Kevin Yen; Yifan Hu; Dongwon Lee
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model. (84%)Jiayi Wang; Rongzhou Bao; Zhuosheng Zhang; Hai Zhao
Efficient Neural Network Analysis with Sum-of-Infeasibilities. (74%)Haoze Wu; Aleksandar Zeljić; Guy Katz; Clark Barrett
Deep Learning Generalization, Extrapolation, and Over-parameterization. (68%)Roozbeh Yousefzadeh
On Robust Prefix-Tuning for Text Classification. (10%)Zonghan Yang; Yang Liu
2022-03-18
Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike. (99%)Johannes Schneider; Giovanni Apruzzese
Adversarial Attacks on Deep Learning-based Video Compression and Classification Systems. (99%)Jung-Woo Chang; Mojan Javaheripi; Seira Hidano; Farinaz Koushanfar
Neural Predictor for Black-Box Adversarial Attacks on Speech Recognition. (99%)Marie Biolková; Bac Nguyen
AutoAdversary: A Pixel Pruning Method for Sparse Adversarial Attack. (99%)Jinqiao Li; Xiaotao Liu; Jian Zhao; Furao Shen
Alleviating Adversarial Attacks on Variational Autoencoders with MCMC. (96%)Anna Kuzina; Max Welling; Jakub M. Tomczak
DTA: Physical Camouflage Attacks using Differentiable Transformation Network. (83%)Naufal Suryanto; Yongsu Kim; Hyoeun Kang; Harashta Tatimma Larasati; Youngyeo Yun; Thi-Thu-Huong Le; Hunmin Yang; Se-Yoon Oh; Howon Kim
AdIoTack: Quantifying and Refining Resilience of Decision Tree Ensemble Inference Models against Adversarial Volumetric Attacks on IoT Networks. (78%)Arman Pashamokhtari; Gustavo Batista; Hassan Habibi Gharakheili
Towards Robust 2D Convolution for Reliable Visual Recognition. (9%)Lida Li; Shuai Li; Kun Wang; Xiangchu Feng; Lei Zhang
2022-03-17
Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input. (99%)Junyoung Byun; Seungju Cho; Myung-Joon Kwon; Hee-Seon Kim; Changick Kim
Self-Ensemble Adversarial Training for Improved Robustness. (99%)Hongjun Wang; Yisen Wang
Leveraging Adversarial Examples to Quantify Membership Information Leakage. (98%)Ganesh Del Grosso; Hamid Jalalzai; Georg Pichler; Catuscia Palamidessi; Pablo Piantanida
On the Properties of Adversarially-Trained CNNs. (93%)Mattia Carletti; Matteo Terzi; Gian Antonio Susto
PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks. (89%)Yue Wang; Wenqing Li; Esha Sarkar; Muhammad Shafique; Michail Maniatakos; Saif Eddin Jabari
HDLock: Exploiting Privileged Encoding to Protect Hyperdimensional Computing Models against IP Stealing. (1%)Shijin Duan; Shaolei Ren; Xiaolin Xu
2022-03-16
Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training. (99%)Adir Rahamim; Itay Naeh
Towards Practical Certifiable Patch Defense with Vision Transformer. (98%)Zhaoyu Chen; Bo Li; Jianghe Xu; Shuang Wu; Shouhong Ding; Wenqiang Zhang
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations? (97%)Yonggan Fu; Shunyao Zhang; Shang Wu; Cheng Wan; Yingyan Lin
Provable Adversarial Robustness for Fractional Lp Threat Models. (87%)Alexander Levine; Soheil Feizi
What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study. (83%)Binxiao Huang; Chaofan Tao; Rui Lin; Ngai Wong
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks. (82%)Fan Wu; Linyi Li; Chejian Xu; Huan Zhang; Bhavya Kailkhura; Krishnaram Kenthapadi; Ding Zhao; Bo Li
Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning. (70%)Gorka Abad; Servio Paguada; Oguzhan Ersoy; Stjepan Picek; Víctor Julio Ramírez-Durán; Aitor Urbieta
Reducing Flipping Errors in Deep Neural Networks. (68%)Xiang Deng; Yun Xiao; Bo Long; Zhongfei Zhang
Attacking deep networks with surrogate-based adversarial black-box methods is easy. (45%)Nicholas A. Lord; Romain Mueller; Luca Bertinetto
On the Convergence of Certified Robust Training with Interval Bound Propagation. (15%)Yihan Wang; Zhouxing Shi; Quanquan Gu; Cho-Jui Hsieh
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients. (15%)Xiaoyu Cao; Neil Zhenqiang Gong
Understanding robustness and generalization of artificial neural networks through Fourier masks. (2%)Nikos Karantzas; Emma Besier; Josue Ortega Caro; Xaq Pitkow; Andreas S. Tolias; Ankit B. Patel; Fabio Anselmi
2022-03-15
Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness. (76%)Tejas Gokhale; Swaroop Mishra; Man Luo; Bhavdeep Singh Sachdeva; Chitta Baral
Internet-based Social Engineering Attacks, Defenses and Psychology: A Survey. (13%)Theodore Longtchi; Rosana Montañez Rodriguez; Laith Al-Shawaf; Adham Atyabi; Shouhuai Xu
Towards Adversarial Control Loops in Sensor Attacks: A Case Study to Control the Kinematics and Actuation of Embedded Systems. (10%)Yazhou Tu; Sara Rampazzi; Xiali Hei
LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference. (1%)Zhongzhi Yu; Yonggan Fu; Shang Wu; Mengquan Li; Haoran You; Yingyan Lin
Adversarial Counterfactual Augmentation: Application in Alzheimer's Disease Classification. (1%)Tian Xia; Pedro Sanchez; Chen Qin; Sotirios A. Tsaftaris
2022-03-14
Efficient universal shuffle attack for visual object tracking. (99%)Siao Liu; Zhaoyu Chen; Wei Li; Jiwei Zhu; Jiafeng Wang; Wenqiang Zhang; Zhongxue Gan
Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training. (99%)Jiahao Shao; Shijia Geng; Zhaoji Fu; Weilun Xu; Tong Liu; Shenda Hong
Task-Agnostic Robust Representation Learning. (98%)A. Tuan Nguyen; Ser Nam Lim; Philip Torr
Energy-Latency Attacks via Sponge Poisoning. (91%)Antonio Emanuele Cinà; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Adversarial amplitude swap towards robust image classifiers. (83%)Chun Yang Tan; Hiroshi Kera; Kazuhiko Kawamoto
On the benefits of knowledge distillation for adversarial robustness. (82%)Javier Maroto; Guillermo Ortiz-Jiménez; Pascal Frossard
RES-HD: Resilient Intelligent Fault Diagnosis Against Adversarial Attacks Using Hyper-Dimensional Computing. (82%)Onat Gungor; Tajana Rosing; Baris Aksanli
Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis. (54%)Giulio Rossolini; Federico Nesti; Fabio Brau; Alessandro Biondi; Giorgio Buttazzo
2022-03-13
LAS-AT: Adversarial Training with Learnable Attack Strategy. (99%)Xiaojun Jia; Yong Zhang; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
Generating Practical Adversarial Network Traffic Flows Using NIDSGAN. (99%)Bolor-Erdene Zolbayar; Ryan Sheatsley; Patrick McDaniel; Michael J. Weisman; Sencun Zhu; Shitong Zhu; Srikanth Krishnamurthy
Model Inversion Attack against Transfer Learning: Inverting a Model without Accessing It. (92%)Dayong Ye; Huiqiang Chen; Shuai Zhou; Tianqing Zhu; Wanlei Zhou; Shouling Ji
One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy. (67%)Dayong Ye; Sheng Shen; Tianqing Zhu; Bo Liu; Wanlei Zhou
Policy Learning for Robust Markov Decision Process with a Mismatched Generative Model. (3%)Jialian Li; Tongzheng Ren; Dong Yan; Hang Su; Jun Zhu
2022-03-12
Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior. (99%)Yinpeng Dong; Shuyu Cheng; Tianyu Pang; Hang Su; Jun Zhu
A survey in Adversarial Defences and Robustness in NLP. (99%)Shreya Goyal; Sumanth Doddapaneni; Mitesh M. Khapra; Balaraman Ravindran
Label-only Model Inversion Attack: The Attack that Requires the Least Information. (47%)Dayong Ye; Tianqing Zhu; Shuai Zhou; Bo Liu; Wanlei Zhou
2022-03-11
Block-Sparse Adversarial Attack to Fool Transformer-Based Text Classifiers. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
Learning from Attacks: Attacking Variational Autoencoder for Improving Image Classification. (98%)Jianzhang Zheng; Fan Yang; Hao Shen; Xuan Tang; Mingsong Chen; Liang Song; Xian Wei
An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks. (96%)Anirudh Yadav; Ashutosh Upadhyay; S. Sharanya
Enhancing Adversarial Training with Second-Order Statistics of Weights. (38%)Gaojie Jin; Xinping Yi; Wei Huang; Sven Schewe; Xiaowei Huang
ROOD-MRI: Benchmarking the robustness of deep learning segmentation models to out-of-distribution and corrupted data in MRI. (33%)Lyndon Boone; Mahdi Biparva; Parisa Mojiri Forooshani; Joel Ramirez; Mario Masellis; Robert Bartha; Sean Symons; Stephen Strother; Sandra E. Black; Chris Heyn; Anne L. Martel; Richard H. Swartz; Maged Goubran
Perception Over Time: Temporal Dynamics for Robust Image Understanding. (16%)Maryam Daniali; Edward Kim
Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation. (15%)Yunhan Huang; Quanyan Zhu
2022-03-10
Exploiting the Potential of Datasets: A Data-Centric Approach for Model Robustness. (92%)Yiqi Zhong; Lei Wu; Xianming Liu; Junjun Jiang
Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation. (75%)Saeed Ranjbar Alvar; Lanjun Wang; Jian Pei; Yong Zhang
Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method. (69%)Arbena Musa; Kamer Vishi; Blerim Rexha
Attacks as Defenses: Designing Robust Audio CAPTCHAs Using Attacks on Automatic Speech Recognition Systems. (64%)Hadi Abdullah; Aditya Karlekar; Saurabh Prasad; Muhammad Sajidur Rahman; Logan Blue; Luke A. Bauer; Vincent Bindschaedler; Patrick Traynor
SoK: On the Semantic AI Security in Autonomous Driving. (10%)Junjie Shen; Ningfei Wang; Ziwen Wan; Yunpeng Luo; Takami Sato; Zhisheng Hu; Xinyang Zhang; Shengjian Guo; Zhenyu Zhong; Kang Li; Ziming Zhao; Chunming Qiao; Qi Alfred Chen
2022-03-09
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation. (99%)Qilong Zhang; Chaoning Zhang; Chaoqun Li; Jingkuan Song; Lianli Gao; Heng Tao Shen
Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack. (99%)Ye Liu; Yaya Cheng; Lianli Gao; Xianglong Liu; Qilong Zhang; Jingkuan Song
Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity. (99%)Cheng Luo; Qinliang Lin; Weicheng Xie; Bizhu Wu; Jinheng Xie; Linlin Shen
Binary Classification Under $\ell_0$ Attacks for General Noise Distribution. (98%)Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition. (97%)Xiao Yang; Yinpeng Dong; Tianyu Pang; Zihao Xiao; Hang Su; Jun Zhu
Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees. (92%)Darshan Thaker; Paris Giampouras; René Vidal
Defending Black-box Skeleton-based Human Activity Classifiers. (92%)He Wang; Yunfeng Diao; Zichang Tan; Guodong Guo
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition. (81%)Yi Chang; Sofiane Laridi; Zhao Ren; Gregory Palmer; Björn W. Schuller; Marco Fisichella
Improving Neural ODEs via Knowledge Distillation. (80%)Haoyu Chu; Shikui Wei; Qiming Lu; Yao Zhao
Physics-aware Complex-valued Adversarial Machine Learning in Reconfigurable Diffractive All-optical Neural Network. (22%)Ruiyang Chen; Yingjie Li; Minhan Lou; Jichao Fan; Yingheng Tang; Berardi Sensale-Rodriguez; Cunxi Yu; Weilu Gao
On the surprising tradeoff between ImageNet accuracy and perceptual similarity. (1%)Manoj Kumar; Neil Houlsby; Nal Kalchbrenner; Ekin D. Cubuk
2022-03-08
Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust NIDS. (99%)João Vitorino; Nuno Oliveira; Isabel Praça
Shape-invariant 3D Adversarial Point Clouds. (99%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Hang Zhou; Weiming Zhang; Nenghai Yu
ART-Point: Improving Rotation Robustness of Point Cloud Classifiers via Adversarial Rotation. (92%)Robin Wang; Yibo Yang; Dacheng Tao
Robustly-reliable learners under poisoning attacks. (13%)Maria-Florina Balcan; Avrim Blum; Steve Hanneke; Dravyansh Sharma
DeepSE-WF: Unified Security Estimation for Website Fingerprinting Defenses. (2%)Alexander Veicht; Cedric Renggli; Diogo Barradas
Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4. (1%)William Berrios; Arturo Deza
Harmonicity Plays a Critical Role in DNN Based Versus in Biologically-Inspired Monaural Speech Segregation Systems. (1%)Rahil Parikh; Ilya Kavalerov; Carol Espy-Wilson; Shihab Shamma
2022-03-07
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches. (99%)Maura Pintor; Daniele Angioni; Angelo Sotgiu; Luca Demetrio; Ambra Demontis; Battista Biggio; Fabio Roli
Art-Attack: Black-Box Adversarial Attack via Evolutionary Art. (99%)Phoenix Williams; Ke Li
Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon. (99%)Yiqi Zhong; Xianming Liu; Deming Zhai; Junjun Jiang; Xiangyang Ji
Adversarial Texture for Fooling Person Detectors in the Physical World. (98%)Zhanhao Hu; Siyuan Huang; Xiaopei Zhu; Xiaolin Hu; Fuchun Sun; Bo Zhang
Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision. (83%)Jun Zhuang; Mohammad Al Hasan
Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation. (31%)Xiaogeng Liu; Haoyu Wang; Yechao Zhang; Fangzhou Wu; Shengshan Hu
2022-03-06
$A^{3}D$: A Platform of Searching for Robust Neural Architectures and Efficient Adversarial Attacks. (99%)Jialiang Sun; Wen Yao; Tingsong Jiang; Chao Li; Xiaoqian Chen
Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer. (98%)Shengshan Hu; Xiaogeng Liu; Yechao Zhang; Minghui Li; Leo Yu Zhang; Hai Jin; Libing Wu
Scalable Uncertainty Quantification for Deep Operator Networks using Randomized Priors. (45%)Yibo Yang; Georgios Kissas; Paris Perdikaris
Evaluation of Interpretability Methods and Perturbation Artifacts in Deep Neural Networks. (2%)Lennart Brocki; Neo Christopher Chung
2022-03-05
aaeCAPTCHA: The Design and Implementation of Audio Adversarial CAPTCHA. (92%)Md Imran Hossen; Xiali Hei
2022-03-04
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation. (82%)Xudong Zhang; Zan Wang; Jingke Zhao; Lanjun Wang
2022-03-03
Ad2Attack: Adaptive Adversarial Attack on Real-Time UAV Tracking. (99%)Changhong Fu; Sihang Li; Xinnan Yuan; Junjie Ye; Ziang Cao; Fangqiang Ding
Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation. (98%)KiYoon Yoo; Jangho Kim; Jiho Jang; Nojun Kwak
Adversarial Patterns: Building Robust Android Malware Classifiers. (98%)Dipkamal Bhusal; Nidhi Rastogi
Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training. (84%)Pervaiz Iqbal Khan; Shoaib Ahmed Siddiqui; Imran Razzak; Andreas Dengel; Sheraz Ahmed
Label-Only Model Inversion Attacks via Boundary Repulsion. (74%)Mostafa Kahla; Si Chen; Hoang Anh Just; Ruoxi Jia
Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models. (56%)Zhibo Wang; Xiaowei Dong; Henry Xue; Zhifei Zhang; Weifeng Chiu; Tao Wei; Kui Ren
Why adversarial training can hurt robust accuracy. (22%)Jacob Clarysse; Julia Hörmann; Fanny Yang
Understanding Failure Modes of Self-Supervised Learning. (4%)Neha Mukund Kalibhat; Kanika Narang; Liang Tan; Hamed Firooz; Maziar Sanjabi; Soheil Feizi
Ensemble Methods for Robust Support Vector Machines using Integer Programming. (2%)Jannis Kurtz
Autonomous and Resilient Control for Optimal LEO Satellite Constellation Coverage Against Space Threats. (1%)Yuhan Zhao; Quanyan Zhu
2022-03-02
Enhancing Adversarial Robustness for Deep Metric Learning. (99%)Mo Zhou; Vishal M. Patel
Detecting Adversarial Perturbations in Multi-Task Perception. (98%)Marvin Klingner; Varun Ravi Kumar; Senthil Yogamani; Andreas Bär; Tim Fingscheidt
Canonical foliations of neural networks: application to robustness. (82%)Eliot Tron; Nicolas Couellan; Stéphane Puechmorel
Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers. (69%)Evan Crothers; Nathalie Japkowicz; Herna Viktor; Paula Branco
Video is All You Need: Attacking PPG-based Biometric Authentication. (13%)Lin Li; Chao Chen; Lei Pan; Jun Zhang; Yang Xiang
MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members. (2%)Ismat Jarin; Birhanu Eshete
A Quantitative Geometric Approach to Neural-Network Smoothness. (2%)Zi Wang; Gautam Prakriya; Somesh Jha
2022-03-01
Adversarial samples for deep monocular 6D object pose estimation. (99%)Jinlai Zhang; Weiming Li; Shuang Liang; Hao Wang; Jihong Zhu
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving. (87%)Xingshuo Han; Guowen Xu; Yuan Zhou; Xuehuan Yang; Jiwei Li; Tianwei Zhang
Global-Local Regularization Via Distributional Robustness. (86%)Hoang Phan; Trung Le; Trung Phung; Tuan Anh Bui; Nhat Ho; Dinh Phung
Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. (11%)Wei Dai; Daniel Berleant
Signature Correction Attack on Dilithium Signature Scheme. (1%)Saad Islam; Koksal Mus; Richa Singh; Patrick Schaumont; Berk Sunar
2022-02-28
Enhance transferability of adversarial examples with model architecture. (99%)Mingyuan Fan; Wenzhong Guo; Shengxing Yu; Zuobin Ying; Ximeng Liu
Towards Robust Stacked Capsule Autoencoder with Hybrid Adversarial Training. (99%)Jiazhu Dai; Siwei Xiong
Evaluating the Adversarial Robustness of Adaptive Test-time Defenses. (98%)Francesco Croce; Sven Gowal; Thomas Brunner; Evan Shelhamer; Matthias Hein; Taylan Cemgil
MaMaDroid2.0 -- The Holes of Control Flow Graphs. (88%)Harel Berger; Chen Hajaj; Enrico Mariconti; Amit Dvir
Improving Lexical Embeddings for Robust Question Answering. (67%)Weiwen Xu; Bowei Zou; Wai Lam; Ai Ti Aw
Robust Textual Embedding against Word-level Adversarial Attacks. (26%)Yichen Yang; Xiaosen Wang; Kun He
Artificial Intelligence for Cyber Security (AICS). (1%)James Holt; Edward Raff; Ahmad Ridley; Dennis Ross; Arunesh Sinha; Diane Staheli; William Streilen; Milind Tambe; Yevgeniy Vorobeychik; Allan Wollaber
Explaining RADAR features for detecting spoofing attacks in Connected Autonomous Vehicles. (1%)Nidhi Rastogi; Sara Rampazzi; Michael Clifford; Miriam Heller; Matthew Bishop; Karl Levitt
2022-02-27
A Unified Wasserstein Distributional Robustness Framework for Adversarial Training. (99%)Tuan Anh Bui; Trung Le; Quan Tran; He Zhao; Dinh Phung
Robust Control of Partially Specified Boolean Networks. (1%)Luboš Brim; Samuel Pastva; David Šafránek; Eva Šmijáková
2022-02-26
Adversarial robustness of sparse local Lipschitz predictors. (87%)Ramchandran Muthukumar; Jeremias Sulam
Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations. (45%)Metehan Cekic; Can Bakiskan; Upamanyu Madhow
Automation of reversible steganographic coding with nonlinear discrete optimisation. (1%)Ching-Chun Chang
2022-02-25
ARIA: Adversarially Robust Image Attribution for Content Provenance. (99%)Maksym Andriushchenko; Xiaoyang Rebecca Li; Geoffrey Oxholm; Thomas Gittings; Tu Bui; Nicolas Flammarion; John Collomosse
Projective Ranking-based GNN Evasion Attacks. (97%)He Zhang; Xingliang Yuan; Chuan Zhou; Shirui Pan
On the Effectiveness of Dataset Watermarking in Adversarial Settings. (56%)Buse Gul Atli Tekgul; N. Asokan
2022-02-24
Towards Effective and Robust Neural Trojan Defenses via Input Filtering. (92%)Kien Do; Haripriya Harikumar; Hung Le; Dung Nguyen; Truyen Tran; Santu Rana; Dang Nguyen; Willy Susilo; Svetha Venkatesh
Robust Probabilistic Time Series Forecasting. (76%)TaeHo Yoon; Youngsuk Park; Ernest K. Ryu; Yuyang Wang
Understanding Adversarial Robustness from Feature Maps of Convolutional Layers. (50%)Cong Xu; Min Yang
Measuring CLEVRness: Blackbox testing of Visual Reasoning Models. (16%)Spyridon Mouselinos; Henryk Michalewski; Mateusz Malinowski
Bounding Membership Inference. (11%)Anvith Thudi; Ilia Shumailov; Franziska Boenisch; Nicolas Papernot
Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration. (3%)Ryan Soklaski; Michael Yee; Theodoros Tsiligkaridis
Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations. (2%)Chih-Kuan Yeh; Kuan-Yun Lee; Frederick Liu; Pradeep Ravikumar
Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning. (1%)Xihong Yang; Xiaochang Hu; Sihang Zhou; Xinwang Liu; En Zhu
2022-02-23
Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling. (96%)Zhi-Yuan Zhang; Di Liu
Using calibrator to improve robustness in Machine Reading Comprehension. (13%)Jing Jin; Houfeng Wang
2022-02-22
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis. (99%)Hanieh Naderi; Kimia Noorbakhsh; Arian Etemadi; Shohreh Kasaei
Universal adversarial perturbation for remote sensing images. (95%)Zhaoxia Yin; Qingyu Wang; Jin Tang; Bin Luo
Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era. (84%)Changjiang Li; Li Wang; Shouling Ji; Xuhong Zhang; Zhaohan Xi; Shanqing Guo; Ting Wang
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning. (1%)Hao He; Kaiwen Zha; Dina Katabi
2022-02-21
Adversarial Attacks on Speech Recognition Systems for Mission-Critical Applications: A Survey. (99%)Ngoc Dung Huynh; Mohamed Reda Bouadjenek; Imran Razzak; Kevin Lee; Chetan Arora; Ali Hassani; Arkady Zaslavsky
Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness. (99%)Beomsu Kim; Junghoon Seo
HoneyModels: Machine Learning Honeypots. (99%)Ahmed Abdou; Ryan Sheatsley; Yohan Beugin; Tyler Shipp; Patrick McDaniel
Transferring Adversarial Robustness Through Robust Representation Matching. (99%)Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
On the Effectiveness of Adversarial Training against Backdoor Attacks. (96%)Yinghua Gao; Dongxian Wu; Jingfeng Zhang; Guanhao Gan; Shu-Tao Xia; Gang Niu; Masashi Sugiyama
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey. (83%)Miguel A. Ramirez; Song-Kyoo Kim; Hussam Al Hamadi; Ernesto Damiani; Young-Ji Byon; Tae-Yeon Kim; Chung-Suk Cho; Chan Yeob Yeun
A Tutorial on Adversarial Learning Attacks and Countermeasures. (75%)Cato Pauling; Michael Gimson; Muhammed Qaid; Ahmad Kida; Basel Halak
Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection. (41%)Yein Kim; Huili Chen; Farinaz Koushanfar
Privacy Leakage of Adversarial Training Models in Federated Learning Systems. (38%)Jingyang Zhang; Yiran Chen; Hai Li
Robustness and Accuracy Could Be Reconcilable by (Proper) Definition. (11%)Tianyu Pang; Min Lin; Xiao Yang; Jun Zhu; Shuicheng Yan
Cyber-Physical Defense in the Quantum Era. (2%)Michel Barbeau; Joaquin Garcia-Alfaro
2022-02-20
Real-time Over-the-air Adversarial Perturbations for Digital Communications using Deep Neural Networks. (93%)Roman A. Sandler; Peter K. Relich; Cloud Cho; Sean Holloway
Sparsity Winning Twice: Better Robust Generaliztion from More Efficient Training. (26%)Tianlong Chen; Zhenyu Zhang; Pengjun Wang; Santosh Balachandra; Haoyu Ma; Zehao Wang; Zhangyang Wang
Overparametrization improves robustness against adversarial attacks: A replication study. (3%)Ali Borji
2022-02-18
Exploring Adversarially Robust Training for Unsupervised Domain Adaptation. (99%)Shao-Yuan Lo; Vishal M. Patel
Learning Representations Robust to Group Shifts and Adversarial Examples. (93%)Ming-Chang Chiu; Xuezhe Ma
Critical Checkpoints for Evaluating Defence Models Against Adversarial Attack and Robustness. (92%)Kanak Tekwani; Manojkumar Parmar
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches. (80%)Reena Zelenkova; Jack Swallow; M. A. P. Chamikara; Dongxi Liu; Mohan Baruwal Chhetri; Seyit Camtepe; Marthie Grobler; Mahathir Almashor
Data-Driven Mitigation of Adversarial Text Perturbation. (75%)Rasika Bhalerao; Mohammad Al-Rubaie; Anand Bhaskar; Igor Markov
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias. (68%)Shangxi Wu; Qiuyang He; Yi Zhang; Jitao Sang
Stochastic Perturbations of Tabular Features for Non-Deterministic Inference with Automunge. (38%)Nicholas J. Teague
Label-Smoothed Backdoor Attack. (33%)Minlong Peng; Zidi Xiong; Mingming Sun; Ping Li
Black-box Node Injection Attack for Graph Neural Networks. (33%)Mingxuan Ju; Yujie Fan; Yanfang Ye; Liang Zhao
Robust Reinforcement Learning as a Stackelberg Game via Adaptively-Regularized Adversarial Training. (9%)Peide Huang; Mengdi Xu; Fei Fang; Ding Zhao
Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML Systems. (4%)Mohamad Fazelnia; Igor Khokhlov; Mehdi Mirakhorli
Synthetic Disinformation Attacks on Automated Fact Verification Systems. (1%)Yibing Du; Antoine Bosselut; Christopher D. Manning
2022-02-17
Rethinking Machine Learning Robustness via its Link with the Out-of-Distribution Problem. (99%)Abderrahmen Amich; Birhanu Eshete
Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition. (98%)Chao-Han Huck Yang; Zeeshan Ahmed; Yile Gu; Joseph Szurley; Roger Ren; Linda Liu; Andreas Stolcke; Ivan Bulyko
Developing Imperceptible Adversarial Patches to Camouflage Military Assets From Computer Vision Enabled Technologies. (98%)Chris Wise; Jo Plested
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations. (78%)Zirui Peng; Shaofeng Li; Guoxing Chen; Cheng Zhang; Haojin Zhu; Minhui Xue
2022-02-16
The Adversarial Security Mitigations of mmWave Beamforming Prediction Models using Defensive Distillation and Adversarial Retraining. (99%)Murat Kuzlu; Ferhat Ozgur Catak; Umit Cali; Evren Catak; Ozgur Guler
Understanding and Improving Graph Injection Attack by Promoting Unnoticeability. (10%)Yongqiang Chen; Han Yang; Yonggang Zhang; Kaili Ma; Tongliang Liu; Bo Han; James Cheng
Gradient Based Activations for Accurate Bias-Free Learning. (1%)Vinod K Kurmi; Rishabh Sharma; Yash Vardhan Sharma; Vinay P. Namboodiri
2022-02-15
Unreasonable Effectiveness of Last Hidden Layer Activations. (99%)Omer Faruk Tuna; Ferhat Ozgur Catak; M. Taner Eskil
Exploring the Devil in Graph Spectral Domain for 3D Point Cloud Attacks. (99%)Qianjiang Hu; Daizong Liu; Wei Hu
StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection. (99%)Aqib Rashid; Jose Such
Random Walks for Adversarial Meshes. (97%)Amir Belder; Gal Yefet; Ran Ben Izhak; Ayellet Tal
Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing. (93%)Zhiyan Chen; Burak Kantarci
Applying adversarial networks to increase the data efficiency and reliability of Self-Driving Cars. (89%)Aakash Kumar
Improving the repeatability of deep learning models with Monte Carlo dropout. (1%)Andreanne Lemay; Katharina Hoebel; Christopher P. Bridge; Brian Befano; Silvia De Sanjosé; Didem Egemen; Ana Cecilia Rodriguez; Mark Schiffman; John Peter Campbell; Jayashree Kalpathy-Cramer
Holistic Adversarial Robustness of Deep Learning Models. (1%)Pin-Yu Chen; Sijia Liu
Taking a Step Back with KCal: Multi-Class Kernel-Based Calibration for Deep Neural Networks. (1%)Zhen Lin; Shubhendu Trivedi; Jimeng Sun
2022-02-14
Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark. (99%)Yonghao Xu; Pedram Ghamisi
Finding Dynamics Preserving Adversarial Winning Tickets. (86%)Xupeng Shi; Pengfei Zheng; A. Adam Ding; Yuan Gao; Weizhong Zhang
Recent Advances in Reliable Deep Graph Learning: Adversarial Attack, Inherent Noise, and Distribution Shift. (83%)Bingzhe Wu; Jintang Li; Chengbin Hou; Guoji Fu; Yatao Bian; Liang Chen; Junzhou Huang
UA-FedRec: Untargeted Attack on Federated News Recommendation. (1%)Jingwei Yi; Fangzhao Wu; Bin Zhu; Yang Yu; Chao Zhang; Guangzhong Sun; Xing Xie
PFGE: Parsimonious Fast Geometric Ensembling of DNNs. (1%)Hao Guo; Jiyong Jin; Bin Liu
2022-02-13
Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks. (99%)Bingxu Mu; Zhenxing Niu; Le Wang; Xue Wang; Rong Jin; Gang Hua
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training. (92%)Zhenting Wang; Hailun Ding; Juan Zhai; Shiqing Ma
Extracting Label-specific Key Input Features for Neural Code Intelligence Models. (9%)Md Rafiqul Islam Rabin
Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey. (2%)Zhilin Wang; Qiao Kang; Xinyi Zhang; Qin Hu
SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation. (1%)Cong Guo; Yuxian Qiu; Jingwen Leng; Xiaotian Gao; Chen Zhang; Yunxin Liu; Fan Yang; Yuhao Zhu; Minyi Guo
2022-02-12
RoPGen: Towards Robust Code Authorship Attribution via Automatic Coding Style Transformation. (98%)Zhen Li; Guenevere (Qian) Chen; Chen Chen; Yayi Zou; Shouhuai Xu
Excitement Surfeited Turns to Errors: Deep Learning Testing Framework Based on Excitable Neurons. (98%)Haibo Jin; Ruoxi Chen; Haibin Zheng; Jinyin Chen; Yao Cheng; Yue Yu; Xianglong Liu
2022-02-11
Adversarial Attacks and Defense Methods for Power Quality Recognition. (99%)Jiwei Tian; Buhong Wang; Jing Li; Zhen Wang; Mete Ozay
Towards Adversarially Robust Deepfake Detection: An Ensemble Approach. (99%)Ashish Hooda; Neal Mangaokar; Ryan Feng; Kassem Fawaz; Somesh Jha; Atul Prakash
Open-set Adversarial Defense with Clean-Adversarial Mutual Learning. (98%)Rui Shao; Pramuditha Perera; Pong C. Yuen; Vishal M. Patel
Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models. (92%)Abigail Swenor; Jugal Kalita
Fast Adversarial Training with Noise Augmentation: A Unified Perspective on RandStart and GradAlign. (74%)Axi Niu; Kang Zhang; Chaoning Zhang; Chenshuang Zhang; In So Kweon; Chang D. Yoo; Yanning Zhang
Predicting Out-of-Distribution Error with the Projection Norm. (62%)Yaodong Yu; Zitong Yang; Alexander Wei; Yi Ma; Jacob Steinhardt
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers. (62%)Limin Yang; Zhi Chen; Jacopo Cortellazzi; Feargus Pendlebury; Kevin Tu; Fabio Pierazzi; Lorenzo Cavallaro; Gang Wang
White-Box Attacks on Hate-speech BERT Classifiers in German with Explicit and Implicit Character Level Defense. (12%)Shahrukh Khan; Mahnoor Shahid; Navdeeppal Singh
On the Detection of Adaptive Adversarial Attacks in Speaker Verification Systems. (10%)Zesheng Chen
Improving Generalization via Uncertainty Driven Perturbations. (2%)Matteo Pagliardini; Gilberto Manunza; Martin Jaggi; Michael I. Jordan; Tatjana Chavdarova
CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning. (1%)Jun Shu; Xiang Yuan; Deyu Meng; Zongben Xu
2022-02-10
FAAG: Fast Adversarial Audio Generation through Interactive Attack Optimisation. (99%)Yuantian Miao; Chao Chen; Lei Pan; Jun Zhang; Yang Xiang
Towards Assessing and Characterizing the Semantic Robustness of Face Recognition. (76%)Juan C. Pérez; Motasem Alfarra; Ali Thabet; Pablo Arbeláez; Bernard Ghanem
Controlling the Complexity and Lipschitz Constant improves polynomial nets. (12%)Zhenyu Zhu; Fabian Latorre; Grigorios G Chrysos; Volkan Cevher
FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling. (8%)Chuhan Wu; Fangzhao Wu; Tao Qi; Yongfeng Huang; Xing Xie
A Field of Experts Prior for Adapting Neural Networks at Test Time. (1%)Neerav Karani; Georg Brunner; Ertunc Erdil; Simin Fei; Kerem Tezcan; Krishna Chaitanya; Ender Konukoglu
2022-02-09
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios. (99%)Jung Im Choi; Qing Tian
Gradient Methods Provably Converge to Non-Robust Networks. (82%)Gal Vardi; Gilad Yehudai; Ohad Shamir
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger. (22%)Muhammad Umer; Robi Polikar
Learning to Bootstrap for Combating Label Noise. (2%)Yuyin Zhou; Xianhang Li; Fengze Liu; Xuxi Chen; Lequan Yu; Cihang Xie; Matthew P. Lungren; Lei Xing
Model Architecture Adaption for Bayesian Neural Networks. (1%)Duo Wang; Yiren Zhao; Ilia Shumailov; Robert Mullins
ARIBA: Towards Accurate and Robust Identification of Backdoor Attacks in Federated Learning. (1%)Yuxi Mi; Jihong Guan; Shuigeng Zhou
2022-02-08
Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations. (99%)Lei Hsiung; Yun-Yun Tsai; Pin-Yu Chen; Tsung-Yi Ho
Verification-Aided Deep Ensemble Selection. (96%)Guy Amir; Guy Katz; Michael Schapira
Adversarial Detection without Model Information. (87%)Abhishek Moitra; Youngeun Kim; Priyadarshini Panda
Towards Making a Trojan-horse Attack on Text-to-Image Retrieval. (68%)Fan Hu; Aozhu Chen; Xirong Li
Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks. (1%)Alireza Sadeghi
2022-02-07
On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks. (99%)Salijona Dyrmishi; Salah Ghamizi; Thibault Simonetto; Yves Le Traon; Maxime Cordy
Blind leads Blind: A Zero-Knowledge Attack on Federated Learning. (98%)Jiyue Huang; Zilong Zhao; Lydia Y. Chen; Stefanie Roos
Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests. (98%)Xilie Xu; Jingfeng Zhang; Feng Liu; Masashi Sugiyama; Mohan Kankanhalli
Evaluating Robustness of Cooperative MARL: A Model-based Approach. (97%)Nhan H. Pham; Lam M. Nguyen; Jie Chen; Hoang Thanh Lam; Subhro Das; Tsui-Wei Weng
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks. (68%)Jing Xu; Rui Wang; Kaitai Liang; Stjepan Picek
Membership Inference Attacks and Defenses in Neural Network Pruning. (50%)Xiaoyong Yuan; Lan Zhang
SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation. (4%)Jun Xia; Lirong Wu; Jintao Chen; Bozhen Hu; Stan Z. Li
Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning. (3%)Ji Gao; Sanjam Garg; Mohammad Mahmoody; Prashant Nalini Vasudevan
2022-02-06
Pipe Overflow: Smashing Voice Authentication for Fun and Profit. (99%)Shimaa Ahmed; Yash Wani; Ali Shahin Shamsabadi; Mohammad Yaghini; Ilia Shumailov; Nicolas Papernot; Kassem Fawaz
Redactor: A Data-centric and Individualized Defense Against Inference Attacks. (8%)Geon Heo; Steven Euijong Whang
2022-02-05
Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework. (99%)Mohammad Khalooei; Mohammad Mehdi Homayounpour; Maryam Amirmazlaghani
Adversarial Detector with Robust Classifier. (93%)Takayuki Osakabe; Maungmaung Aprilpyone; Sayaka Shiota; Hitoshi Kiya
Memory Defense: More Robust Classification via a Memory-Masking Autoencoder. (76%)Eashan Adhikarla; Dan Luo; Brian D. Davison
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation. (75%)Wenxiao Wang; Alexander Levine; Soheil Feizi
2022-02-04
Pixle: a fast and effective black-box attack based on rearranging pixels. (98%)Jary Pomponi; Simone Scardapane; Aurelio Uncini
Backdoor Defense via Decoupling the Training Process. (80%)Kunzhe Huang; Yiming Li; Baoyuan Wu; Zhan Qin; Kui Ren
LTU Attacker for Membership Inference. (67%)Joseph Pedersen; Rafael Muñoz-Gómez; Jiangnan Huang; Haozhe Sun; Wei-Wei Tu; Isabelle Guyon
A Survey on Safety-Critical Driving Scenario Generation -- A Methodological Perspective. (1%)Wenhao Ding; Chejian Xu; Mansur Arief; Haohong Lin; Bo Li; Ding Zhao
2022-02-03
ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking. (93%)Chong Xiang; Alexander Valtchanov; Saeed Mahloujifar; Prateek Mittal
Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization. (75%)Xiaojun Xu; Jacky Yibo Zhang; Evelyn Ma; Danny Son; Oluwasanmi Koyejo; Bo Li
2022-02-02
An Eye for an Eye: Defending against Gradient-based Attacks with Gradients. (99%)Hanbin Hong; Yuan Hong; Yu Kong
Smoothed Embeddings for Certified Few-Shot Learning. (76%)Mikhail Pautov; Olesya Kuznetsova; Nurislam Tursynbek; Aleksandr Petiushko; Ivan Oseledets
Probabilistically Robust Learning: Balancing Average- and Worst-case Performance. (75%)Alexander Robey; Luiz F. O. Chamon; George J. Pappas; Hamed Hassani
Make Some Noise: Reliable and Efficient Single-Step Adversarial Training. (70%)Pau de Jorge; Adel Bibi; Riccardo Volpi; Amartya Sanyal; Philip H. S. Torr; Grégory Rogez; Puneet K. Dokania
Robust Binary Models by Pruning Randomly-initialized Networks. (10%)Chen Liu; Ziqi Zhao; Sabine Süsstrunk; Mathieu Salzmann
NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections. (10%)N. Benjamin Erichson; Soon Hoe Lim; Francisco Utrera; Winnie Xu; Ziang Cao; Michael W. Mahoney
2022-02-01
Language Dependencies in Adversarial Attacks on Speech Recognition Systems. (98%)Karla Markert; Donika Mirdita; Konstantin Böttinger
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks. (80%)Anne Harrington; Arturo Deza
Visualizing Automatic Speech Recognition -- Means for a Better Understanding? (64%)Karla Markert; Romain Parracone; Mykhailo Kulakov; Philip Sperl; Ching-Yu Kao; Konstantin Böttinger
Datamodels: Predicting Predictions from Training Data. (2%)Andrew Ilyas; Sung Min Park; Logan Engstrom; Guillaume Leclerc; Aleksander Madry
2022-01-31
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons. (99%)Chandresh Pravin; Ivan Martino; Giuseppe Nicosia; Varun Ojha
Boundary Defense Against Black-box Adversarial Attacks. (99%)Manjushree B. Aithal; Xiaohua Li
Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models. (99%)Viet Quoc Vo; Ehsan Abbasnejad; Damith C. Ranasinghe
Can Adversarial Training Be Manipulated By Non-Robust Features? (98%)Lue Tao; Lei Feng; Hongxin Wei; Jinfeng Yi; Sheng-Jun Huang; Songcan Chen
GADoT: GAN-based Adversarial Training for Robust DDoS Attack Detection. (96%)Maged Abdelaty; Sandra Scott-Hayward; Roberto Doriguzzi-Corin; Domenico Siracusa
Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? (93%)Youngeun Kim; Hyoungseob Park; Abhishek Moitra; Abhiroop Bhattacharjee; Yeshwanth Venkatesha; Priyadarshini Panda
AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks. (89%)Muhammad Usman; Youcheng Sun; Divya Gopinath; Corina S. Pasareanu
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks. (81%)Mingfu Xue; Shifeng Ni; Yinghao Wu; Yushu Zhang; Jian Wang; Weiqiang Liu
On the Robustness of Quality Measures for GANs. (80%)Motasem Alfarra; Juan C. Pérez; Anna Frühstück; Philip H. S. Torr; Peter Wonka; Bernard Ghanem
MEGA: Model Stealing via Collaborative Generator-Substitute Networks. (76%)Chi Hong; Jiyue Huang; Lydia Y. Chen
Learning Robust Representation through Graph Adversarial Contrastive Learning. (26%)Jiayan Guo; Shangyang Li; Yue Zhao; Yan Zhang
UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs. (16%)Philipp Oberdiek; Gernot A. Fink; Matthias Rottmann
Few-Shot Backdoor Attacks on Visual Object Tracking. (10%)Yiming Li; Haoxiang Zhong; Xingjun Ma; Yong Jiang; Shu-Tao Xia
Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors. (5%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Timo Schenk; Adrian Lars Benjamin Iten; Gérôme Bovet; Gregorio Martínez Pérez; Burkhard Stiller
Securing Federated Sensitive Topic Classification against Poisoning Attacks. (1%)Tianyue Chu; Alvaro Garcia-Recuero; Costas Iordanou; Georgios Smaragdakis; Nikolaos Laoutaris
2022-01-30
Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets. (92%)Yong Guo; David Stutz; Bernt Schiele
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks. (84%)Chenhui Deng; Xiuyu Li; Zhuo Feng; Zhiru Zhang
TPC: Transformation-Specific Smoothing for Point Cloud Models. (75%)Wenda Chu; Linyi Li; Bo Li
2022-01-29
Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses. (99%)Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Robustness of Deep Recommendation Systems to Untargeted Interaction Perturbations. (82%)Sejoon Oh; Srijan Kumar
Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms. (1%)Jeongyeol Kwon; Yonathan Efroni; Constantine Caramanis; Shie Mannor
2022-01-28
Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning. (87%)Jie Zhang; Lei Zhang; Gang Li; Chao Wu
Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods. (81%)Raoul Schönhof; Artem Werner; Jannes Elstner; Boldizsar Zopcsak; Ramez Awad; Marco Huber
Certifying Model Accuracy under Distribution Shifts. (74%)Aounon Kumar; Alexander Levine; Tom Goldstein; Soheil Feizi
Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions. (13%)Jiachen Sun; Qingzhao Zhang; Bhavya Kailkhura; Zhiding Yu; Chaowei Xiao; Z. Morley Mao
Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. (8%)Lukas Struppek; Dominik Hintersdorf; Antonio De Almeida Correia; Antonia Adler; Kristian Kersting
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire. (3%)Siddhartha Datta; Nigel Shadbolt
Toward Training at ImageNet Scale with Differential Privacy. (1%)Alexey Kurakin; Shuang Song; Steve Chien; Roxana Geambasu; Andreas Terzis; Abhradeep Thakurta
2022-01-27
Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains. (99%)Qilong Zhang; Xiaodan Li; Yuefeng Chen; Jingkuan Song; Lianli Gao; Yuan He; Hui Xue
Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities. (10%)Xin Du; Benedicte Legastelois; Bhargavi Ganesh; Ajitha Rajan; Hana Chockler; Vaishak Belle; Stuart Anderson; Subramanian Ramamoorthy
SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders. (2%)Tianshuo Cong; Xinlei He; Yang Zhang
CacheFX: A Framework for Evaluating Cache Security. (1%)Daniel Genkin; William Kosasih; Fangfei Liu; Anna Trikalinou; Thomas Unterluggauer; Yuval Yarom
2022-01-26
Boosting 3D Adversarial Attacks with Attacking On Frequency. (98%)Binbin Liu; Jinlai Zhang; Lyujie Chen; Jihong Zhu
How Robust are Discriminatively Trained Zero-Shot Learning Models? (98%)Mehmet Kerim Yucel; Ramazan Gokberk Cinbis; Pinar Duygulu
Autonomous Cyber Defense Introduces Risk: Can We Manage the Risk? (2%)Alexandre K. Ligo; Alexander Kott; Igor Linkov
Automatic detection of access control vulnerabilities via API specification processing. (1%)Alexander Barabanov; Denis Dergunov; Denis Makrushin; Aleksey Teplov
2022-01-25
Virtual Adversarial Training for Semi-supervised Breast Mass Classification. (3%)Xuxin Chen; Ximin Wang; Ke Zhang; Kar-Ming Fung; Theresa C. Thai; Kathleen Moore; Robert S. Mannel; Hong Liu; Bin Zheng; Yuchen Qiu
Class-Aware Adversarial Transformers for Medical Image Segmentation. (1%)Chenyu You; Ruihan Zhao; Fenglin Liu; Siyuan Dong; Sandeep Chinchali; Ufuk Topcu; Lawrence Staib; James S. Duncan
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training. (1%)Wenyong Huang; Zhenhe Zhang; Yu Ting Yeung; Xin Jiang; Qun Liu
2022-01-24
What You See is Not What the Network Infers: Detecting Adversarial Examples Based on Semantic Contradiction. (99%)Yijun Yang; Ruiyuan Gao; Yu Li; Qiuxia Lai; Qiang Xu
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation. (95%)Zayd Hammoudeh; Daniel Lowd
Attacks and Defenses for Free-Riders in Multi-Discriminator GAN. (76%)Zilong Zhao; Jiyue Huang; Stefanie Roos; Lydia Y. Chen
Backdoor Defense with Machine Unlearning. (33%)Yang Liu; Mingyuan Fan; Cen Chen; Ximeng Liu; Zhuo Ma; Li Wang; Jianfeng Ma
On the Complexity of Attacking Elliptic Curve Based Authentication Chips. (1%)Ievgen Kabin; Zoya Dyka; Dan Klann; Jan Schaeffner; Peter Langendoerfer
2022-01-23
Efficient and Robust Classification for Sparse Attacks. (83%)Mark Beliaev; Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning. (78%)Chenghao Fan; Ziao Li; Wei wei
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models. (56%)Shagufta Mehnaz; Sayanton V. Dibbo; Ehsanul Kabir; Ninghui Li; Elisa Bertino
Increasing the Cost of Model Extraction with Calibrated Proof of Work. (22%)Adam Dziedzic; Muhammad Ahmad Kaleem; Yu Shen Lu; Nicolas Papernot
2022-01-22
Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection. (99%)Siyuan Liang; Baoyuan Wu; Yanbo Fan; Xingxing Wei; Xiaochun Cao
Robust Unpaired Single Image Super-Resolution of Faces. (98%)Saurabh Goswami; A. N. Rajagopalan
On the Robustness of Counterfactual Explanations to Adverse Perturbations. (10%)Marco Virgolin; Saverio Fracaros
2022-01-21
Robust Unsupervised Graph Representation Learning via Mutual Information Maximization. (99%)Jihong Wang; Minnan Luo; Jundong Li; Ziqi Liu; Jun Zhou; Qinghua Zheng
Natural Attack for Pre-trained Models of Code. (99%)Zhou Yang; Jieke Shi; Junda He; David Lo
The Security of Deep Learning Defences for Medical Imaging. (80%)Moshe Levy; Guy Amit; Yuval Elovici; Yisroel Mirsky
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World. (75%)Hua Ma; Yinshan Li; Yansong Gao; Alsharif Abuadbba; Zhi Zhang; Anmin Fu; Hyoungshick Kim; Said F. Al-Sarawi; Surya Nepal; Derek Abbott
Identifying Adversarial Attacks on Text Classifiers. (73%)Zhouhang Xie; Jonathan Brophy; Adam Noack; Wencong You; Kalyani Asthana; Carter Perkins; Sabrina Reis; Sameer Singh; Daniel Lowd
The Many Faces of Adversarial Risk. (47%)Muni Sreenivas Pydi; Varun Jog
2022-01-20
TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack. (99%)Zhen Yu; Xiaosen Wang; Wanxiang Che; Kun He
Cheating Automatic Short Answer Grading: On the Adversarial Usage of Adjectives and Adverbs. (95%)Anna Filighera; Sebastian Ochs; Tim Steuer; Thomas Tregel
Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges. (93%)Nuria Rodríguez-Barroso; Daniel Jiménez López; M. Victoria Luzón; Francisco Herrera; Eugenio Martínez-Cámara
Low-Interception Waveform: To Prevent the Recognition of Spectrum Waveform Modulation via Adversarial Examples. (83%)Haidong Xie; Jia Tan; Xiaoying Zhang; Nan Ji; Haihua Liao; Zuguo Yu; Xueshuang Xiang; Naijin Liu
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios. (70%)Zhen Xiang; David J. Miller; George Kesidis
Adversarial Jamming for a More Effective Constellation Attack. (56%)Haidong Xie; Yizhou Xu; Yuanqing Chen; Nan Ji; Shuai Yuan; Naijin Liu; Xueshuang Xiang
Steerable Pyramid Transform Enables Robust Left Ventricle Quantification. (13%)Xiangyang Zhu; Kede Ma; Wufeng Xue
Black-box Prompt Learning for Pre-trained Language Models. (13%)Shizhe Diao; Zhichao Huang; Ruijia Xu; Xuechun Li; Yong Lin; Xiao Zhou; Tong Zhang
DeepGalaxy: Testing Neural Network Verifiers via Two-Dimensional Input Space Exploration. (1%)Xuan Xie; Fuyuan Zhang
2022-01-19
Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation. (96%)Sixiao Zhang; Hongxu Chen; Xiangguo Sun; Yicong Li; Guandong Xu
Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. (8%)Zeyang Sha; Xinlei He; Ning Yu; Michael Backes; Yang Zhang
2022-01-18
MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting. (99%)Xudong Pan; Yifan Yan; Mi Zhang; Min Yang
Adversarial vulnerability of powerful near out-of-distribution detection. (78%)Stanislav Fort
Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning. (13%)Phung Lai; NhatHai Phan; Abdallah Khreishah; Issa Khalil; Xintao Wu
Secure IoT Routing: Selective Forwarding Attacks and Trust-based Defenses in RPL Network. (2%)Jun Jiang; Yuhong Liu
Lung Swapping Autoencoder: Learning a Disentangled Structure-texture Representation of Chest Radiographs. (1%)Lei Zhou; Joseph Bae; Huidong Liu; Gagandeep Singh; Jeremy Green; Amit Gupta; Dimitris Samaras; Prateek Prasanna
2022-01-17
Masked Faces with Faced Masks. (81%)Jiayi Zhu; Qing Guo; Felix Juefei-Xu; Yihao Huang; Yang Liu; Geguang Pu
Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations. (56%)Chris Emmery; Ákos Kádár; Grzegorz Chrupała; Walter Daelemans
AugLy: Data Augmentations for Robustness. (3%)Zoe Papakipos; Joanna Bitton
2022-01-16
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems. (99%)Wei Jia; Zhaojun Lu; Haichun Zhang; Zhenglin Liu; Jie Wang; Gang Qu
ALA: Adversarial Lightness Attack via Naturalness-aware Regularizations. (99%)Liangru Sun; Felix Juefei-Xu; Yihao Huang; Qing Guo; Jiayi Zhu; Jincao Feng; Yang Liu; Geguang Pu
Adversarial Machine Learning Threat Analysis in Open Radio Access Networks. (64%)Ron Bitton; Dan Avraham; Eitan Klevansky; Dudu Mimran; Oleg Brodt; Heiko Lehmann; Yuval Elovici; Asaf Shabtai
Neighboring Backdoor Attacks on Graph Convolutional Network. (22%)Liang Chen; Qibiao Peng; Jintang Li; Yang Liu; Jiawei Chen; Yong Li; Zibin Zheng
2022-01-15
Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection. (26%)Yuefei Lyu; Xiaoyu Yang; Jiaxin Liu; Philip S. Yu; Sihong Xie; Xi Zhang
StolenEncoder: Stealing Pre-trained Encoders. (13%)Yupei Liu; Jinyuan Jia; Hongbin Liu; Neil Zhenqiang Gong
2022-01-14
CommonsenseQA 2.0: Exposing the Limits of AI through Gamification. (56%)Alon Talmor; Ori Yoran; Ronan Le Bras; Chandra Bhagavatula; Yoav Goldberg; Yejin Choi; Jonathan Berant
Security Orchestration, Automation, and Response Engine for Deployment of Behavioural Honeypots. (1%)Upendra Bartwal; Subhasis Mukhopadhyay; Rohit Negi; Sandeep Shukla
2022-01-13
Evaluation of Four Black-box Adversarial Attacks and Some Query-efficient Improvement Analysis. (96%)Rui Wang
The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression. (93%)Hamed Hassani; Adel Javanmard
On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles. (83%)Qingzhao Zhang; Shengtuo Hu; Jiachen Sun; Qi Alfred Chen; Z. Morley Mao
Reconstructing Training Data with Informed Adversaries. (54%)Borja Balle; Giovanni Cherubin; Jamie Hayes
Jamming Attacks on Federated Learning in Wireless Networks. (2%)Yi Shi; Yalin E. Sagduyu
2022-01-12
Adversarially Robust Classification by Conditional Generative Model Inversion. (99%)Mitra Alirezaei; Tolga Tasdizen
Towards Adversarially Robust Deep Image Denoising. (99%)Hanshu Yan; Jingfeng Zhang; Jiashi Feng; Masashi Sugiyama; Vincent Y. F. Tan
Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data. (70%)Sunder Ali Khowaja; Ik Hyun Lee; Kapal Dev; Muhammad Aslam Jarwar; Nawab Muhammad Faseeh Qureshi
2022-01-11
Quantifying Robustness to Adversarial Word Substitutions. (99%)Yuting Yang; Pei Huang; FeiFei Ma; Juan Cao; Meishan Zhang; Jian Zhang; Jintao Li
Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition. (99%)Hanrui Wang; Shuo Wang; Zhe Jin; Yandan Wang; Cunjian Chen; Massimo Tistarelli
2022-01-10
Evaluation of Neural Networks Defenses and Attacks using NDCG and Reciprocal Rank Metrics. (98%)Haya Brama; Lihi Dery; Tal Grinshpoun
IoTGAN: GAN Powered Camouflage Against Machine Learning Based IoT Device Identification. (89%)Tao Hou; Tao Wang; Zhuo Lu; Yao Liu; Yalin Sagduyu
Reciprocal Adversarial Learning for Brain Tumor Segmentation: A Solution to BraTS Challenge 2021 Segmentation Task. (73%)Himashi Peiris; Zhaolin Chen; Gary Egan; Mehrtash Harandi
GMFIM: A Generative Mask-guided Facial Image Manipulation Model for Privacy Preservation. (3%)Mohammad Hossein Khojaste; Nastaran Moradzadeh Farid; Ahmad Nickabadi
Towards Group Robustness in the presence of Partial Group Labels. (1%)Vishnu Suresh Lokhande; Kihyuk Sohn; Jinsung Yoon; Madeleine Udell; Chen-Yu Lee; Tomas Pfister
2022-01-09
Rethink Stealthy Backdoor Attacks in Natural Language Processing. (89%)Lingfeng Shen; Haiyun Jiang; Lemao Liu; Shuming Shi
A Retrospective and Futurespective of Rowhammer Attacks and Defenses on DRAM. (76%)Zhi Zhang; Jiahao Qi; Yueqiang Cheng; Shijie Jiang; Yiyang Lin; Yansong Gao; Surya Nepal; Yi Zou
Privacy-aware Early Detection of COVID-19 through Adversarial Training. (10%)Omid Rohanian; Samaneh Kouchaki; Andrew Soltan; Jenny Yang; Morteza Rohanian; Yang Yang; David Clifton
2022-01-08
LoMar: A Local Defense Against Poisoning Attack on Federated Learning. (9%)Xingyu Li; Zhe Qu; Shangqing Zhao; Bo Tang; Zhuo Lu; Yao Liu
PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++. (1%)Jaewoo Song; Fangzhen Lin
2022-01-07
iDECODe: In-distribution Equivariance for Conformal Out-of-distribution Detection. (93%)Ramneet Kaur; Susmit Jha; Anirban Roy; Sangdon Park; Edgar Dobriban; Oleg Sokolsky; Insup Lee
Asymptotic Security using Bayesian Defense Mechanisms with Application to Cyber Deception. (11%)Hampei Sasahara; Henrik Sandberg
Negative Evidence Matters in Interpretable Histology Image Classification. (1%)Soufiane Belharbi; Marco Pedersoli; Ismail Ben Ayed; Luke McCaffrey; Eric Granger
2022-01-06
PAEG: Phrase-level Adversarial Example Generation for Neural Machine Translation. (98%)Juncheng Wan; Jian Yang; Shuming Ma; Dongdong Zhang; Weinan Zhang; Yong Yu; Zhoujun Li
Learning to be adversarially robust and differentially private. (31%)Jamie Hayes; Borja Balle; M. Pawan Kumar
Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time Algorithms and Adversarial Training. (2%)Yatong Bai; Tanmay Gautam; Somayeh Sojoudi
2022-01-05
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving. (99%)Giulio Rossolini; Federico Nesti; Gianluca D'Amico; Saasha Nair; Alessandro Biondi; Giorgio Buttazzo
ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints. (99%)Amira Guesmi; Khaled N. Khasawneh; Nael Abu-Ghazaleh; Ihsen Alouani
Adversarial Robustness in Cognitive Radio Networks. (1%)Makan Zamanipour
2022-01-04
Towards Transferable Unrestricted Adversarial Examples with Minimum Changes. (99%)Fangcheng Liu; Chao Zhang; Hongyang Zhang
Towards Understanding and Harnessing the Effect of Image Transformation in Adversarial Detection. (99%)Hui Liu; Bo Zhao; Yuefeng Peng; Weidong Li; Peng Liu
On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error. (86%)Fabio Brau; Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness. (31%)Amin Eslami Abyane; Derui Zhu; Roberto Souza; Lei Ma; Hadi Hemmati
Corrupting Data to Remove Deceptive Perturbation: Using Preprocessing Method to Improve System Robustness. (10%)Hieu Le; Hans Walker; Dung Tran; Peter Chin
2022-01-03
Compression-Resistant Backdoor Attack against Deep Neural Networks. (75%)Mingfu Xue; Xin Wang; Shichang Sun; Yushu Zhang; Jian Wang; Weiqiang Liu
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. (68%)Phillip Rieger; Thien Duc Nguyen; Markus Miettinen; Ahmad-Reza Sadeghi
Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control. (11%)Aaron Havens; Darioush Keivan; Peter Seiler; Geir Dullerud; Bin Hu
2022-01-02
Actor-Critic Network for Q&A in an Adversarial Environment. (33%)Bejan Sadeghian
On Sensitivity of Deep Learning Based Text Classification Algorithms to Practical Input Perturbations. (12%)Aamir Miyajiwala; Arnav Ladkat; Samiksha Jagadale; Raviraj Joshi
2022-01-01
Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness. (87%)Hao Yang; Min Wang; Zhengfei Yu; Yun Zhou
Revisiting Neuron Coverage Metrics and Quality of Deep Neural Networks. (41%)Zhou Yang; Jieke Shi; Muhammad Hilmi Asyrofi; David Lo
Generating Adversarial Samples For Training Wake-up Word Detection Systems Against Confusing Words. (1%)Haoxu Wang; Yan Jia; Zeqing Zhao; Xuyang Wang; Junjie Wang; Ming Li
2021-12-31
Adversarial Attack via Dual-Stage Network Erosion. (99%)Yexin Duan; Junhua Zou; Xingyu Zhou; Wu Zhang; Jin Zhang; Zhisong Pan
On Distinctive Properties of Universal Perturbations. (83%)Sung Min Park; Kuo-An Wei; Kai Xiao; Jerry Li; Aleksander Madry
2021-12-30
Benign Overfitting in Adversarially Robust Linear Classification. (99%)Jinghui Chen; Yuan Cao; Quanquan Gu
Causal Attention for Interpretable and Generalizable Graph Classification. (1%)Yongduo Sui; Xiang Wang; Jiancan Wu; Min Lin; Xiangnan He; Tat-Seng Chua
2021-12-29
Invertible Image Dataset Protection. (92%)Kejiang Chen; Xianhan Zeng; Qichao Ying; Sheng Li; Zhenxing Qian; Xinpeng Zhang
Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning. (4%)Junyu Shi; Wei Wan; Shengshan Hu; Jianrong Lu; Leo Yu Zhang
2021-12-28
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. (99%)Weiran Lin; Keane Lucas; Lujo Bauer; Michael K. Reiter; Mahmood Sharif
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently. (99%)Futa Waseda; Sosuke Nishikawa; Trung-Nghia Le; Huy H. Nguyen; Isao Echizen
Repairing Adversarial Texts through Perturbation. (99%)Guoliang Dong; Jingyi Wang; Jun Sun; Sudipta Chattopadhyay; Xinyu Wang; Ting Dai; Jie Shi; Jin Song Dong
DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification. (91%)Aleksandra Ćiprijanović; Diana Kafkes; Gregory Snyder; F. Javier Sánchez; Gabriel Nathan Perdue; Kevin Pedro; Brian Nord; Sandeep Madireddy; Stefan M. Wild
Super-Efficient Super Resolution for Fast Adversarial Defense at the Edge. (88%)Kartikeya Bhardwaj; Dibakar Gope; James Ward; Paul Whatmough; Danny Loh
A General Framework for Evaluating Robustness of Combinatorial Optimization Solvers on Graphs. (86%)Han Lu; Zenan Li; Runzhong Wang; Qibing Ren; Junchi Yan; Xiaokang Yang
Gas Gauge: A Security Analysis Tool for Smart Contract Out-of-Gas Vulnerabilities. (1%)Behkish Nassirzadeh; Huaiying Sun; Sebastian Banescu; Vijay Ganesh
2021-12-27
Adversarial Attack for Asynchronous Event-based Data. (99%)Wooju Lee; Hyun Myung
PRIME: A Few Primitives Can Boost Robustness to Common Corruptions. (81%)Apostolos Modas; Rahul Rade; Guillermo Ortiz-Jiménez; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Associative Adversarial Learning Based on Selective Attack. (26%)Runqi Wang; Xiaoyue Duan; Baochang Zhang; Song Xue; Wentao Zhu; David Doermann; Guodong Guo
Learning Robust and Lightweight Model through Separable Structured Transformations. (8%)Yanhui Huang; Yangyu Xu; Xian Wei
2021-12-26
Perlin Noise Improve Adversarial Robustness. (99%)Chengjun Tang; Kun Zhang; Chunfang Xing; Yong Ding; Zengmin Xu
2021-12-25
Task and Model Agnostic Adversarial Attack on Graph Neural Networks. (99%)Kartik Sharma; Samidha Verma; Sourav Medya; Sayan Ranu; Arnab Bhattacharya
NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification. (50%)Haibin Zheng; Zhiqing Chen; Tianyu Du; Xuhong Zhang; Yao Cheng; Shouling Ji; Jingyi Wang; Yue Yu; Jinyin Chen
2021-12-24
Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping. (99%)Behnam Ghavami; Seyd Movi; Zhenman Fang; Lesley Shannon
NIP: Neuron-level Inverse Perturbation Against Adversarial Attacks. (98%)Ruoxi Chen; Haibo Jin; Jinyin Chen; Haibin Zheng; Yue Yu; Shouling Ji
CatchBackdoor: Backdoor Testing by Critical Trojan Neural Path Identification via Differential Fuzzing. (82%)Haibo Jin; Ruoxi Chen; Jinyin Chen; Yao Cheng; Chong Fu; Ting Wang; Yue Yu; Zhaoyan Ming
SoK: A Study of the Security on Voice Processing Systems. (9%)Robert Chang; Logan Kuo; Arthur Liu; Nader Sehatbakhsh
DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning. (1%)Ismat Jarin; Birhanu Eshete
Gradient Leakage Attack Resilient Deep Learning. (1%)Wenqi Wei; Ling Liu
2021-12-23
Adaptive Modeling Against Adversarial Attacks. (99%)Zhiwen Yan; Teck Khim Ng
Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization. (99%)Yihua Zhang; Guanhua Zhang; Prashant Khanduri; Mingyi Hong; Shiyu Chang; Sijia Liu
Robust Secretary and Prophet Algorithms for Packing Integer Programs. (2%)C. J. Argue; Anupam Gupta; Marco Molinaro; Sahil Singla
Counterfactual Memorization in Neural Language Models. (2%)Chiyuan Zhang; Daphne Ippolito; Katherine Lee; Matthew Jagielski; Florian Tramèr; Nicholas Carlini
2021-12-22
Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art. (99%)Xiang Ling; Lingfei Wu; Jiangyu Zhang; Zhenqing Qu; Wei Deng; Xiang Chen; Yaguan Qian; Chunming Wu; Shouling Ji; Tianyue Luo; Jingzheng Wu; Yanjun Wu
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? (98%)Xinhsuai Dong; Luu Anh Tuan; Min Lin; Shuicheng Yan; Hanwang Zhang
Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems. (98%)Islam Debicha; Thibault Debatty; Jean-Michel Dricot; Wim Mees; Tayeb Kenaza
Adversarial Deep Reinforcement Learning for Improving the Robustness of Multi-agent Autonomous Driving Policies. (96%)Aizaz Sharif; Dusica Marijan
Understanding and Measuring Robustness of Multimodal Learning. (69%)Nishant Vishwamitra; Hongxin Hu; Ziming Zhao; Long Cheng; Feng Luo
Evaluating the Robustness of Deep Reinforcement Learning for Autonomous and Adversarial Policies in a Multi-agent Urban Driving Environment. (41%)Aizaz Sharif; Dusica Marijan
2021-12-21
A Theoretical View of Linear Backpropagation and Its Convergence. (99%)Ziang Li; Yiwen Guo; Haodi Liu; Changshui Zhang
An Attention Score Based Attacker for Black-box NLP Classifier. (91%)Yueyang Liu; Hunmin Lee; Zhipeng Cai
Covert Communications via Adversarial Machine Learning and Reconfigurable Intelligent Surfaces. (81%)Brian Kim; Tugba Erpek; Yalin E. Sagduyu; Sennur Ulukus
Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems. (76%)Andrea Stocco; Brian Pulfer; Paolo Tonella
Input-Specific Robustness Certification for Randomized Smoothing. (68%)Ruoxin Chen; Jie Li; Junchi Yan; Ping Li; Bin Sheng
Improving Robustness with Image Filtering. (68%)Matteo Terzi; Mattia Carletti; Gian Antonio Susto
On the Adversarial Robustness of Causal Algorithmic Recourse. (10%)Ricardo Dominguez-Olmedo; Amir-Hossein Karimi; Bernhard Schölkopf
MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation. (4%)Zhongzhi Yu; Yonggan Fu; Sicheng Li; Chaojian Li; Yingyan Lin
Exploring Credibility Scoring Metrics of Perception Systems for Autonomous Driving. (2%)Viren Khandal; Arth Vidyarthi
Adversarial Gradient Driven Exploration for Deep Click-Through Rate Prediction. (2%)Kailun Wu; Zhangming Chan; Weijie Bian; Lejian Ren; Shiming Xiang; Shuguang Han; Hongbo Deng; Bo Zheng
Longitudinal Study of the Prevalence of Malware Evasive Techniques. (1%)Lorenzo Maffia; Dario Nisi; Platon Kotzias; Giovanni Lagorio; Simone Aonzo; Davide Balzarotti
2021-12-20
Certified Federated Adversarial Training. (98%)Giulio Zizzo; Ambrish Rawat; Mathieu Sinn; Sergio Maffeis; Chris Hankin
Energy-bounded Learning for Robust Models of Code. (83%)Nghi D. Q. Bui; Yijun Yu
Black-Box Testing of Deep Neural Networks through Test Case Diversity. (82%)Zohreh Aghababaeyan; Manel Abdellatif; Lionel Briand; Ramesh S; Mojtaba Bagherzadeh
Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction. (80%)Dongfang Li; Baotian Hu; Qingcai Chen; Tujie Xu; Jingcong Tao; Yunan Zhang
Adversarially Robust Stability Certificates can be Sample-Efficient. (2%)Thomas T. C. K. Zhang; Stephen Tu; Nicholas M. Boffi; Jean-Jacques E. Slotine; Nikolai Matni
2021-12-19
Initiative Defense against Facial Manipulation. (67%)Qidong Huang; Jie Zhang; Wenbo Zhou; Weiming Zhang; Nenghai Yu
2021-12-18
Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks. (12%)Simone Marullo; Matteo Tiezzi; Marco Gori; Stefano Melacci
Android-COCO: Android Malware Detection with Graph Neural Network for Byte- and Native-Code. (1%)Peng Xu
2021-12-17
Reasoning Chain Based Adversarial Attack for Multi-hop Question Answering. (92%)Jiayu Ding; Siyuan Wang; Qin Chen; Zhongyu Wei
Deep Bayesian Learning for Car Hacking Detection. (81%)Laha Ale; Scott A. King; Ning Zhang
Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations. (81%)Siddhant Arora; Danish Pruthi; Norman Sadeh; William W. Cohen; Zachary C. Lipton; Graham Neubig
Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network. (80%)An Tao; Yueqi Duan; He Wang; Ziyi Wu; Pengliang Ji; Haowen Sun; Jie Zhou; Jiwen Lu
Provable Adversarial Robustness in the Quantum Model. (62%)Khashayar Barooti; Grzegorz Głuch; Ruediger Urbanke
Domain Adaptation on Point Clouds via Geometry-Aware Implicits. (1%)Yuefan Shen; Yanchao Yang; Mi Yan; He Wang; Youyi Zheng; Leonidas Guibas
2021-12-16
Addressing Adversarial Machine Learning Attacks in Smart Healthcare Perspectives. (99%)Arawinkumaar Selvakkumar; Shantanu Pal; Zahra Jadidi
Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning. (99%)Tong Chen; Zhan Ma
All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines. (99%)Yuxuan Zhang; Bo Dong; Felix Heide
TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations. (64%)Shivangi Aneja; Lev Markhasin; Matthias Niessner
A Robust Optimization Approach to Deep Learning. (45%)Dimitris Bertsimas; Xavier Boix; Kimberly Villalobos Carballo; Dick den Hertog
Sharpness-Aware Minimization with Dynamic Reweighting. (31%)Wenxuan Zhou; Fangyu Liu; Huan Zhang; Muhao Chen
APTSHIELD: A Stable, Efficient and Real-time APT Detection System for Linux Hosts. (16%)Tiantian Zhu; Jinkai Yu; Tieming Chen; Jiayu Wang; Jie Ying; Ye Tian; Mingqi Lv; Yan Chen; Yuan Fan; Ting Wang
Correlation inference attacks against machine learning models. (13%)Ana-Maria Creţu; Florent Guépin; Yves-Alexandre de Montjoye
Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants. (2%)Max Bartolo; Tristan Thrush; Sebastian Riedel; Pontus Stenetorp; Robin Jia; Douwe Kiela
Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images. (2%)Shiran Zada; Itay Benou; Michal Irani
2021-12-15
On the Convergence and Robustness of Adversarial Training. (99%)Yisen Wang; Xingjun Ma; James Bailey; Jinfeng Yi; Bowen Zhou; Quanquan Gu
Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks. (97%)Jaehui Hwang; Huan Zhang; Jun-Ho Choi; Cho-Jui Hsieh; Jong-Seok Lee
DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models. (75%)Hongyu Zhu; Yan Chen; Jing Yan; Jing Liu; Yu Hong; Ying Chen; Hua Wu; Haifeng Wang
Robust Neural Network Classification via Double Regularization. (1%)Olof Zetterqvist; Rebecka Jörnsten; Johan Jonasson
2021-12-14
Robustifying automatic speech recognition by extracting slowly varying features. (99%)Matias Pizarro; Dorothea Kolossa; Asja Fischer
Adversarial Examples for Extreme Multilabel Text Classification. (99%)Mohammadreza Qaraei; Rohit Babbar
Dual-Key Multimodal Backdoors for Visual Question Answering. (81%)Matthew Walmer; Karan Sikka; Indranil Sur; Abhinav Shrivastava; Susmit Jha
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training. (76%)Chen Liu; Zhichao Huang; Mathieu Salzmann; Tong Zhang; Sabine Süsstrunk
MuxLink: Circumventing Learning-Resilient MUX-Locking Using Graph Neural Network-based Link Prediction. (4%)Lilas Alrahis; Satwik Patnaik; Muhammad Shafique; Ozgur Sinanoglu
2021-12-13
Detecting Audio Adversarial Examples with Logit Noising. (99%)Namgyu Park; Sangwoo Ji; Jong Kim
Triangle Attack: A Query-efficient Decision-based Adversarial Attack. (99%)Xiaosen Wang; Zeliang Zhang; Kangheng Tong; Dihong Gong; Kun He; Zhifeng Li; Wei Liu
2021-12-12
Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses. (98%)Chun Pong Lau; Jiang Liu; Hossein Souri; Wei-An Lin; Soheil Feizi; Rama Chellappa
Quantifying and Understanding Adversarial Examples in Discrete Input Spaces. (91%)Volodymyr Kuleshov; Evgenii Nikishin; Shantanu Thakoor; Tingfung Lau; Stefano Ermon
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. (91%)Ashwinee Panda; Saeed Mahloujifar; Arjun N. Bhagoji; Supriyo Chakraborty; Prateek Mittal
WOOD: Wasserstein-based Out-of-Distribution Detection. (12%)Yinan Wang; Wenbo Sun; Jionghua "Judy" Jin; Zhenyu "James" Kong; Xiaowei Yue
2021-12-11
MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare. (99%)Muchao Ye; Junyu Luo; Guanjie Zheng; Cao Xiao; Ting Wang; Fenglong Ma
Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting. (98%)Junhua Zou; Zhisong Pan; Junyang Qiu; Xin Liu; Ting Rui; Wei Li
Stereoscopic Universal Perturbations across Different Architectures and Datasets. (98%)Zachary Berger; Parth Agrawal; Tian Yu Liu; Stefano Soatto; Alex Wong
2021-12-10
Learning to Learn Transferable Attack. (99%)Shuman Fang; Jie Li; Xianming Lin; Rongrong Ji
Cross-Modal Transferable Adversarial Attacks from Images to Videos. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Attacking Point Cloud Segmentation with Color-only Perturbation. (99%)Jiacen Xu; Zhe Zhou; Boyuan Feng; Yufei Ding; Zhou Li
Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks. (92%)Seungyong Moon; Gaon An; Hyun Oh Song
Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning. (75%)Yang Liu; Tianyuan Zou; Yan Kang; Wenhan Liu; Yuanqin He; Zhihao Yi; Qiang Yang
Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models. (68%)Jialuo Chen; Jingyi Wang; Tinglan Peng; Youcheng Sun; Peng Cheng; Shouling Ji; Xingjun Ma; Bo Li; Dawn Song
Efficient Action Poisoning Attacks on Linear Contextual Bandits. (67%)Guanlin Liu; Lifeng Lai
How Private Is Your RL Policy? An Inverse RL Based Analysis Framework. (41%)Kritika Prakash; Fiza Husain; Praveen Paruchuri; Sujit P. Gujar
SoK: On the Security & Privacy in Federated Learning. (5%)Gorka Abad; Stjepan Picek; Aitor Urbieta
2021-12-09
Amicable Aid: Turning Adversarial Attack to Benefit Classification. (99%)Juyeop Kim; Jun-Ho Choi; Soobeom Jang; Jong-Seok Lee
Mutual Adversarial Training: Learning together is better than going alone. (99%)Jiang Liu; Chun Pong Lau; Hossein Souri; Soheil Feizi; Rama Chellappa
PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function. (99%)Manaar Alam; Shubhajit Datta; Debdeep Mukhopadhyay; Arijit Mondal; Partha Pratim Chakrabarti
RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit. (99%)Viet Quoc Vo; Ehsan Abbasnejad; Damith C. Ranasinghe
Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures. (69%)Eugene Bagdasaryan; Vitaly Shmatikov
Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach. (38%)Saber Jafarpour; Matthew Abate; Alexander Davydov; Francesco Bullo; Samuel Coogan
PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures. (10%)Dan Hendrycks; Andy Zou; Mantas Mazeika; Leonard Tang; Dawn Song; Jacob Steinhardt
Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems. (2%)Jiankai Jin; Eleanor McMurtry; Benjamin I. P. Rubinstein; Olga Ohrimenko
3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection. (1%)Alexander Lehner; Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Mohammad-Ali Nikouei Mahani; Nassir Navab; Benjamin Busam; Federico Tombari
2021-12-08
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection. (99%)Jiang Liu; Alexander Levine; Chun Pong Lau; Rama Chellappa; Soheil Feizi
On visual self-supervision and its effect on model robustness. (99%)Michal Kucer; Diane Oyen; Garrett Kenyon
SNEAK: Synonymous Sentences-Aware Adversarial Attack on Natural Language Video Localization. (93%)Wenbo Gou; Wen Shi; Jian Lou; Lijie Huang; Pan Zhou; Ruixuan Li
Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework. (8%)Ching-Yun Ko; Jeet Mohapatra; Sijia Liu; Pin-Yu Chen; Luca Daniel; Lily Weng
2021-12-07
Saliency Diversified Deep Ensemble for Robustness to Adversaries. (99%)Alex Bogun; Dimche Kostadinov; Damian Borth
Vehicle trajectory prediction works, but not everywhere. (50%)Mohammadhossein Bahari; Saeed Saadatnejad; Ahmad Rahimi; Mohammad Shaverdikondori; Mohammad Shahidzadeh; Seyed-Mohsen Moosavi-Dezfooli; Alexandre Alahi
Lightning: Striking the Secure Isolation on GPU Clouds with Transient Hardware Faults. (11%)Rihui Sun; Pengfei Qiu; Yongqiang Lyu; Dongsheng Wang; Jiang Dong; Gang Qu
Membership Inference Attacks From First Principles. (2%)Nicholas Carlini; Steve Chien; Milad Nasr; Shuang Song; Andreas Terzis; Florian Tramer
Training Deep Models to be Explained with Fewer Examples. (1%)Tomoharu Iwata; Yuya Yoshikawa
Presentation Attack Detection Methods based on Gaze Tracking and Pupil Dynamic: A Comprehensive Survey. (1%)Jalil Nourmohammadi Khiarak
2021-12-06
Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review. (99%)Huda Ali Alatwi; Charles Morisset
Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal. (84%)Yucheng Shi; Yahong Han; Yu-an Tan; Xiaohui Kuang
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks. (82%)Jing Lin; Long Dang; Mohamed Rahouti; Kaiqi Xiong
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks. (82%)Xi Li; Zhen Xiang; David J. Miller; George Kesidis
When the Curious Abandon Honesty: Federated Learning Is Not Private. (68%)Franziska Boenisch; Adam Dziedzic; Roei Schuster; Ali Shahin Shamsabadi; Ilia Shumailov; Nicolas Papernot
Defending against Model Stealing via Verifying Embedded External Features. (33%)Yiming Li; Linghui Zhu; Xiaojun Jia; Yong Jiang; Shu-Tao Xia; Xiaochun Cao
Context-Aware Transfer Attacks for Object Detection. (1%)Zikui Cai; Xinxin Xie; Shasha Li; Mingjun Yin; Chengyu Song; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury; M. Salman Asif
2021-12-05
Robust Active Learning: Sample-Efficient Training of Robust Deep Learning Models. (96%)Yuejun Guo; Qiang Hu; Maxime Cordy; Mike Papadakis; Yves Le Traon
Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness. (88%)Konstantinos P. Panousis; Sotirios Chatzis; Sergios Theodoridis
Beyond Robustness: Resilience Verification of Tree-Based Classifiers. (2%)Stefano Calzavara; Lorenzo Cazzaro; Claudio Lucchese; Federico Marcuzzi; Salvatore Orlando
On Impact of Semantically Similar Apps in Android Malware Datasets. (1%)Roopak Surendran
2021-12-04
RADA: Robust Adversarial Data Augmentation for Camera Localization in Challenging Weather. (10%)Jialu Wang; Muhamad Risqi U. Saputra; Chris Xiaoxuan Lu; Niki Trigon; Andrew Markham
2021-12-03
Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach. (99%)James Lee Hu; Mohammadreza Ebrahimi; Hsinchun Chen
Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing. (99%)Bhagyashree Puranik; Upamanyu Madhow; Ramtin Pedarsani
Blackbox Untargeted Adversarial Testing of Automatic Speech Recognition Systems. (98%)Xiaoliang Wu; Ajitha Rajan
Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models. (54%)Tochukwu Idika; Ismail Akturk
Adversarial Attacks against a Satellite-borne Multispectral Cloud Detector. (13%)Andrew Du; Yee Wei Law; Michele Sasdelli; Bo Chen; Ken Clarke; Michael Brown; Tat-Jun Chin
A Game-Theoretic Approach for AI-based Botnet Attack Defence. (9%)Hooman Alavizadeh; Julian Jang-Jaccard; Tansu Alpcan; Seyit A. Camtepe
2021-12-02
A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space. (99%)Thibault Simonetto; Salijona Dyrmishi; Salah Ghamizi; Maxime Cordy; Yves Le Traon
Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? (93%)Ayesha Siddique; Khaza Anuarul Hoque
Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness? (75%)Peter Lorenz; Dominik Strassel; Margret Keuper; Janis Keuper
Training Efficiency and Robustness in Deep Learning. (41%)Fartash Faghri
FedRAD: Federated Robust Adaptive Distillation. (10%)Stefán Páll Sturluson; Samuel Trew; Luis Muñoz-González; Matei Grama; Jonathan Passerat-Palmbach; Daniel Rueckert; Amir Alansary
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis. (3%)Yu Feng; Benteng Ma; Jing Zhang; Shanshan Zhao; Yong Xia; Dacheng Tao
On the Existence of the Adversarial Bayes Classifier (Extended Version). (2%)Pranjal Awasthi; Natalie S. Frank; Mehryar Mohri
Editing a classifier by rewriting its prediction rules. (1%)Shibani Santurkar; Dimitris Tsipras; Mahalaxmi Elango; David Bau; Antonio Torralba; Aleksander Madry
2021-12-01
Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems. (99%)Siyu Wang; Yuanjiang Cao; Xiaocong Chen; Lina Yao; Xianzhi Wang; Quan Z. Sheng
Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness. (99%)Jia-Li Yin; Lehui Xie; Wanqing Zhu; Ximeng Liu; Bo-Hao Chen
$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training. (99%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines. (96%)Jiachen Sun; Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Dan Hendrycks; Jihun Hamm; Z. Morley Mao
Adv-4-Adv: Thwarting Changing Adversarial Perturbations via Adversarial Domain Adaptation. (95%)Tianyue Zheng; Zhe Chen; Shuya Ding; Chao Cai; Jun Luo
Robustness in Deep Learning for Computer Vision: Mind the gap? (31%)Nathan Drenkow; Numair Sani; Ilya Shpitser; Mathias Unberath
CYBORG: Blending Human Saliency Into the Loss Improves Deep Learning. (1%)Aidan Boyd; Patrick Tinsley; Kevin Bowyer; Adam Czajka
2021-11-30
Using a GAN to Generate Adversarial Examples to Facial Image Recognition. (99%)Andrew Merrigan; Alan F. Smeaton
Mitigating Adversarial Attacks by Distributing Different Copies to Different Users. (86%)Jiyi Zhang; Wesley Joon-Wie Tann; Ee-Chien Chang
Human Imperceptible Attacks and Applications to Improve Fairness. (83%)Xinru Hua; Huanzhong Xu; Jose Blanchet; Viet Nguyen
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. (81%)Yangsibo Huang; Samyak Gupta; Zhao Song; Kai Li; Sanjeev Arora
FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection. (78%)Nikolaos Dionelis
COREATTACK: Breaking Up the Core Structure of Graphs. (78%)Bo Zhou; Yuqian Lv; Jinhuan Wang; Jian Zhang; Qi Xuan
Adversarial Attacks Against Deep Generative Models on Data: A Survey. (12%)Hui Sun; Tianqing Zhu; Zhiqiu Zhang; Dawei Jin; Ping Xiong; Wanlei Zhou
A Face Recognition System's Worst Morph Nightmare, Theoretically. (1%)Una M. Kelly; Raymond Veldhuis; Luuk Spreeuwers
New Datasets for Dynamic Malware Classification. (1%)Berkant Düzgün; Aykut Çayır; Ferhat Demirkıran; Ceyda Nur Kayha; Buket Gençaydın; Hasan Dağ
Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems. (1%)Xingyu Zhao; Wei Huang; Vibhav Bharti; Yi Dong; Victoria Cox; Alec Banks; Sen Wang; Sven Schewe; Xiaowei Huang
2021-11-29
MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack. (99%)Mengting Xu; Tao Zhang; Daoqiang Zhang
Adversarial Attacks in Cooperative AI. (82%)Ted Fujimoto; Arthur Paul Pedersen
Living-Off-The-Land Command Detection Using Active Learning. (10%)Talha Ongun; Jack W. Stokes; Jonathan Bar Or; Ke Tian; Farid Tajaddodianfar; Joshua Neil; Christian Seifert; Alina Oprea; John C. Platt
Do Invariances in Deep Neural Networks Align with Human Perception? (9%)Vedant Nanda; Ayan Majumdar; Camila Kolling; John P. Dickerson; Krishna P. Gummadi; Bradley C. Love; Adrian Weller
A Simple Long-Tailed Recognition Baseline via Vision-Language Model. (1%)Teli Ma; Shijie Geng; Mengmeng Wang; Jing Shao; Jiasen Lu; Hongsheng Li; Peng Gao; Yu Qiao
ROBIN : A Benchmark for Robustness to Individual Nuisances in Real-World Out-of-Distribution Shifts. (1%)Bingchen Zhao; Shaozuo Yu; Wufei Ma; Mingxin Yu; Shenxiao Mei; Angtian Wang; Ju He; Alan Yuille; Adam Kortylewski
Pyramid Adversarial Training Improves ViT Performance. (1%)Charles Herrmann; Kyle Sargent; Lu Jiang; Ramin Zabih; Huiwen Chang; Ce Liu; Dilip Krishnan; Deqing Sun
2021-11-28
Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images. (96%)Dvij Kalaria; Aritra Hazra; Partha Pratim Chakrabarti
MALIGN: Adversarially Robust Malware Family Detection using Sequence Alignment. (54%)Shoumik Saha; Sadia Afroz; Atif Rahman
Automated Runtime-Aware Scheduling for Multi-Tenant DNN Inference on GPU. (1%)Fuxun Yu; Shawn Bray; Di Wang; Longfei Shangguan; Xulong Tang; Chenchen Liu; Xiang Chen
ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification. (1%)Zhibo Zhang; Jongseong Jang; Chiheb Trabelsi; Ruiwen Li; Scott Sanner; Yeonjeong Jeong; Dongsub Shim
2021-11-27
Adaptive Image Transformations for Transfer-based Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Shiguang Shan
Adaptive Perturbation for Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Shiguang Shan
Statically Detecting Adversarial Malware through Randomised Chaining. (98%)Matthew Crawford; Wei Wang; Ruoxi Sun; Minhui Xue
Dissecting Malware in the Wild. (1%)Hamish Spencer; Wei Wang; Ruoxi Sun; Minhui Xue
2021-11-26
ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural Networks. (50%)Hua Qi; Zhijie Wang; Qing Guo; Jianlang Chen; Felix Juefei-Xu; Lei Ma; Jianjun Zhao
2021-11-25
AdvBokeh: Learning to Adversarially Defocus Blur. (99%)Yihao Huang; Felix Juefei-Xu; Qing Guo; Weikai Miao; Yang Liu; Geguang Pu
Clustering Effect of (Linearized) Adversarial Robust Models. (97%)Yang Bai; Xin Yan; Yong Jiang; Shu-Tao Xia; Yisen Wang
Simple Contrastive Representation Adversarial Learning for NLP Tasks. (93%)Deshui Miao; Jiaqi Zhang; Wenbo Xie; Jian Song; Xin Li; Lijuan Jia; Ning Guo
Going Grayscale: The Road to Understanding and Improving Unlearnable Examples. (92%)Zhuoran Liu; Zhengyu Zhao; Alex Kolmus; Tijn Berns; Laarhoven Twan van; Tom Heskes; Martha Larson
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. (92%)Xiangyu Qi; Tinghao Xie; Ruizhe Pan; Jifeng Zhu; Yong Yang; Kai Bu
Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning. (3%)Sanjay Kariyappa; Moinuddin K Qureshi
Joint inference and input optimization in equilibrium networks. (1%)Swaminathan Gurumurthy; Shaojie Bai; Zachary Manchester; J. Zico Kolter
2021-11-24
Thundernna: a white box adversarial attack. (99%)Linfeng Ye
Unity is strength: Improving the Detection of Adversarial Examples with Ensemble Approaches. (99%)Francesco Craighero; Fabrizio Angaroni; Fabio Stella; Chiara Damiani; Marco Antoniotti; Alex Graudenzi
Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity. (92%)Bernardo Aquino; Arash Rahnama; Peter Seiler; Lizhen Lin; Vijay Gupta
WFDefProxy: Modularly Implementing and Empirically Evaluating Website Fingerprinting Defenses. (15%)Jiajun Gong; Wuqi Zhang; Charles Zhang; Tao Wang
Sharpness-aware Quantization for Deep Neural Networks. (10%)Jing Liu; Jianfei Cai; Bohan Zhuang
SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation. (5%)Yizhou Wang; Can Qin; Rongzhe Wei; Yi Xu; Yue Bai; Yun Fu
An Attack on Facial Soft-biometric Privacy Enhancement. (2%)Dailé Osorio-Roig; Christian Rathgeb; Pawel Drozdowski; Philipp Terhörst; Vitomir Štruc; Christoph Busch
Accelerating Deep Learning with Dynamic Data Pruning. (1%)Ravi S Raju; Kyle Daruwalla; Mikko Lipasti
2021-11-23
Adversarial machine learning for protecting against online manipulation. (92%)Stefano Cresci; Marinella Petrocchi; Angelo Spognardi; Stefano Tognazzi
Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS. (84%)Witt Christian Schroeder de; Yongchao Huang; Philip H. S. Torr; Martin Strohmeier
Subspace Adversarial Training. (69%)Tao Li; Yingwen Wu; Sizhe Chen; Kun Fang; Xiaolin Huang
HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance. (1%)Huanrui Yang; Xiaoxuan Yang; Neil Zhenqiang Gong; Yiran Chen
2021-11-22
Adversarial Examples on Segmentation Models Can be Easy to Transfer. (99%)Jindong Gu; Hengshuang Zhao; Volker Tresp; Philip Torr
Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes. (99%)Utku Ozbulak; Maura Pintor; Messem Arnout Van; Neve Wesley De
Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification. (99%)Daizong Liu; Wei Hu
Backdoor Attack through Frequency Domain. (92%)Tong Wang; Yuan Yao; Feng Xu; Shengwei An; Hanghang Tong; Ting Wang
NTD: Non-Transferability Enabled Backdoor Detection. (69%)Yinshan Li; Hua Ma; Zhi Zhang; Yansong Gao; Alsharif Abuadbba; Anmin Fu; Yifeng Zheng; Said F. Al-Sarawi; Derek Abbott
A Comparison of State-of-the-Art Techniques for Generating Adversarial Malware Binaries. (33%)Prithviraj Dasgupta; Zachariah Osman
Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. (13%)Yongji Wu; Xiaoyu Cao; Jinyuan Jia; Neil Zhenqiang Gong
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration. (1%)Yifan Gong; Geng Yuan; Zheng Zhan; Wei Niu; Zhengang Li; Pu Zhao; Yuxuan Cai; Sijia Liu; Bin Ren; Xue Lin; Xulong Tang; Yanzhi Wang
Electric Vehicle Attack Impact on Power Grid Operation. (1%)Mohammad Ali Sayed; Ribal Atallah; Chadi Assi; Mourad Debbabi
2021-11-21
Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability. (99%)Yifeng Xiong; Jiadong Lin; Min Zhang; John E. Hopcroft; Kun He
Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Model. (99%)Alon Zolfi; Shai Avidan; Yuval Elovici; Asaf Shabtai
Medical Aegis: Robust adversarial protectors for medical images. (99%)Qingsong Yao; Zecheng He; S. Kevin Zhou
Local Linearity and Double Descent in Catastrophic Overfitting. (73%)Varun Sivashankar; Nikil Selvam
Denoised Internal Models: a Brain-Inspired Autoencoder against Adversarial Attacks. (62%)Kaiyuan Liu; Xingyu Li; Yi Zhou; Jisong Guan; Yurui Lai; Ge Zhang; Hang Su; Jiachen Wang; Chunxu Guo
2021-11-20
Are Vision Transformers Robust to Patch Perturbations? (98%)Jindong Gu; Volker Tresp; Yao Qin
2021-11-19
Zero-Shot Certified Defense against Adversarial Patches with Vision Transformers. (99%)Yuheng Huang; Yuanchun Li
Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method. (99%)Tao Bai; Jun Zhao; Jinlin Zhu; Shoudong Han; Jiefeng Chen; Bo Li; Alex Kot
Meta Adversarial Perturbations. (99%)Chia-Hung Yuan; Pin-Yu Chen; Chia-Mu Yu
Resilience from Diversity: Population-based approach to harden models against adversarial attacks. (99%)Jasser Jasser; Ivan Garibay
Enhanced countering adversarial attacks via input denoising and feature restoring. (99%)Yanni Li; Wenhui Zhang; Jiawei Liu; Xiaoli Kou; Hui Li; Jiangtao Cui
Fooling Adversarial Training with Inducing Noise. (98%)Zhirui Wang; Yifei Wang; Yisen Wang
Exposing Weaknesses of Malware Detectors with Explainability-Guided Evasion Attacks. (86%)Wei Wang; Ruoxi Sun; Tian Dong; Shaofeng Li; Minhui Xue; Gareth Tyson; Haojin Zhu
2021-11-18
TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems. (99%)Bao Gia Doan; Minhui Xue; Shiqing Ma; Ehsan Abbasnejad; Damith C. Ranasinghe
A Review of Adversarial Attack and Defense for Classification Methods. (99%)Yao Li; Minhao Cheng; Cho-Jui Hsieh; Thomas C. M. Lee
Robust Person Re-identification with Multi-Modal Joint Defence. (98%)Yunpeng Gong; Lifei Chen
Enhancing the Insertion of NOP Instructions to Obfuscate Malware via Deep Reinforcement Learning. (96%)Daniel Gibert; Matt Fredrikson; Carles Mateu; Jordi Planes; Quan Le
How to Build Robust FAQ Chatbot with Controllable Question Generator? (80%)Yan Pan; Mingyang Ma; Bernhard Pflugfelder; Georg Groh
Adversarial attacks on voter model dynamics in complex networks. (76%)Katsumi Chiyomaru; Kazuhiro Takemoto
Enhanced Membership Inference Attacks against Machine Learning Models. (12%)Jiayuan Ye; Aadyaa Maddi; Sasi Kumar Murakonda; Reza Shokri
Wiggling Weights to Improve the Robustness of Classifiers. (2%)Sadaf Gulshad; Ivan Sosnovik; Arnold Smeulders
Improving Transferability of Representations via Augmentation-Aware Self-Supervision. (1%)Hankook Lee; Kibok Lee; Kimin Lee; Honglak Lee; Jinwoo Shin
2021-11-17
TraSw: Tracklet-Switch Adversarial Attacks against Multi-Object Tracking. (99%)Delv Lin; Qi Chen; Chengyu Zhou; Kun He
Generating Unrestricted 3D Adversarial Point Clouds. (99%)Xuelong Dai; Yanjie Li; Hua Dai; Bin Xiao
SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness. (93%)Jongheon Jeong; Sejun Park; Minkyu Kim; Heung-Chang Lee; Doguk Kim; Jinwoo Shin
Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation. (92%)Mehdi Sadi; B. M. S. Bahar Talukder; Kaniz Mishty; Md Tauhidur Rahman
Do Not Trust Prediction Scores for Membership Inference Attacks. (33%)Dominik Hintersdorf; Lukas Struppek; Kristian Kersting
2021-11-16
Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks. (99%)Adaku Uchendu; Daniel Campoy; Christopher Menart; Alexandra Hildenbrandt
Improving the robustness and accuracy of biomedical language models through adversarial training. (99%)Milad Moradi; Matthias Samwald
Detecting AutoAttack Perturbations in the Frequency Domain. (99%)Peter Lorenz; Paula Harder; Dominik Strassel; Margret Keuper; Janis Keuper
Adversarial Tradeoffs in Linear Inverse Problems and Robust State Estimation. (92%)Bruce D. Lee; Thomas T. C. K. Zhang; Hamed Hassani; Nikolai Matni
Consistent Semantic Attacks on Optical Flow. (81%)Tom Koren; Lior Talker; Michael Dinerstein; Roy J Jevnisek
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences. (54%)Wei Guo; Benedetta Tondi; Mauro Barni
Enabling equivariance for arbitrary Lie groups. (1%)Lachlan Ewen MacDonald; Sameera Ramasinghe; Simon Lucey
2021-11-15
A Survey on Adversarial Attacks for Malware Analysis. (98%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam
Triggerless Backdoor Attack for NLP Tasks with Clean Labels. (68%)Leilei Gan; Jiwei Li; Tianwei Zhang; Xiaoya Li; Yuxian Meng; Fei Wu; Shangwei Guo; Chun Fan
Property Inference Attacks Against GANs. (67%)Junhao Zhou; Yufei Chen; Chao Shen; Yang Zhang
2021-11-14
Generating Band-Limited Adversarial Surfaces Using Neural Networks. (99%)Roee Ben-Shlomo; Yevgeniy Men; Ido Imanuel
Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks. (76%)Chen Ma; Xiangyu Guo; Li Chen; Jun-Hai Yong; Yisen Wang
Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning. (75%)Youxiang Zhu; Bang Tran; Xiaohui Liang; John A. Batsis; Robert M. Roth
Improving Compound Activity Classification via Deep Transfer and Representation Learning. (1%)Vishal Dey; Raghu Machiraju; Xia Ning
2021-11-13
Robust and Accurate Object Detection via Self-Knowledge Distillation. (62%)Weipeng Xu; Pengzhi Chu; Renhao Xie; Xiongziyan Xiao; Hongcheng Huang
UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction. (2%)Lilas Alrahis; Satwik Patnaik; Muhammad Abdullah Hanif; Muhammad Shafique; Ozgur Sinanoglu
2021-11-12
Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception. (99%)Joel Dapello; Jenelle Feather; Hang Le; Tiago Marques; David D. Cox; Josh H. McDermott; James J. DiCarlo; SueYeon Chung
Measuring the Contribution of Multiple Model Representations in Detecting Adversarial Instances. (98%)Daniel Steinberg; Paul Munro
Adversarially Robust Learning for Security-Constrained Optimal Power Flow. (10%)Priya L. Donti; Aayushya Agarwal; Neeraj Vijay Bedmutha; Larry Pileggi; J. Zico Kolter
On Transferability of Prompt Tuning for Natural Language Understanding. (8%)Yusheng Su; Xiaozhi Wang; Yujia Qin; Chi-Min Chan; Yankai Lin; Zhiyuan Liu; Peng Li; Juanzi Li; Lei Hou; Maosong Sun; Jie Zhou
A Bayesian Nash equilibrium-based moving target defense against stealthy sensor attacks. (1%)David Umsonst; Serkan Sarıtaş; György Dán; Henrik Sandberg
Resilient Consensus-based Multi-agent Reinforcement Learning. (1%)Martin Figura; Yixuan Lin; Ji Liu; Vijay Gupta
2021-11-11
On the Equivalence between Neural Network and Support Vector Machine. (1%)Yilan Chen; Wei Huang; Lam M. Nguyen; Tsui-Wei Weng
2021-11-10
Trustworthy Medical Segmentation with Uncertainty Estimation. (93%)Giuseppina Carannante; Dimah Dera; Nidhal C. Bouaynaya; Ghulam Rasool; Hassan M. Fathallah-Shaykh
Robust Learning via Ensemble Density Propagation in Deep Neural Networks. (2%)Giuseppina Carannante; Dimah Dera; Ghulam Rasool; Nidhal C. Bouaynaya; Lyudmila Mihaylova
2021-11-09
Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search. (99%)Pengfei Xia; Ziqiang Li; Bin Li
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps. (99%)Muhammad Awais; Fengwei Zhou; Chuanlong Xie; Jiawei Li; Sung-Ho Bae; Zhenguo Li
Sparse Adversarial Video Attacks with Spatial Transformations. (98%)Ronghui Mu; Wenjie Ruan; Leandro Soriano Marcolino; Qiang Ni
A Statistical Difference Reduction Method for Escaping Backdoor Detection. (97%)Pengfei Xia; Hongjing Niu; Ziqiang Li; Bin Li
Data Augmentation Can Improve Robustness. (73%)Sylvestre-Alvise Rebuffi; Sven Gowal; Dan A. Calian; Florian Stimberg; Olivia Wiles; Timothy Mann
Are Transformers More Robust Than CNNs? (67%)Yutong Bai; Jieru Mei; Alan Yuille; Cihang Xie
2021-11-08
Geometrically Adaptive Dictionary Attack on Face Recognition. (99%)Junyoung Byun; Hyojun Go; Changick Kim
Defense Against Explanation Manipulation. (98%)Ruixiang Tang; Ninghao Liu; Fan Yang; Na Zou; Xia Hu
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories. (98%)Adnan Siraj Rakin; Md Hafizul Islam Chowdhuryy; Fan Yao; Deliang Fan
On Assessing The Safety of Reinforcement Learning algorithms Using Formal Methods. (75%)Paulina Stevia Nouwou Mindom; Amin Nikanjam; Foutse Khomh; John Mullins
Get a Model! Model Hijacking Attack Against Machine Learning Models. (69%)Ahmed Salem; Michael Backes; Yang Zhang
Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks. (69%)Lijia Yu; Xiao-Shan Gao
Characterizing the adversarial vulnerability of speech self-supervised learning. (68%)Haibin Wu; Bo Zheng; Xu Li; Xixin Wu; Hung-yi Lee; Helen Meng
HAPSSA: Holistic Approach to PDF Malware Detection Using Signal and Statistical Analysis. (67%)Tajuddin Manhar Mohammed; Lakshmanan Nataraj; Satish Chikkagoudar; Shivkumar Chandrasekaran; B. S. Manjunath
Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning. (67%)Qinkai Zheng; Xu Zou; Yuxiao Dong; Yukuo Cen; Da Yin; Jiarong Xu; Yang Yang; Jie Tang
BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination. (1%)Ece Isik-Polat; Gorkem Polat; Altan Kocyigit
2021-11-07
Generative Dynamic Patch Attack. (99%)Xiang Li; Shihao Ji
Natural Adversarial Objects. (81%)Felix Lau; Nishant Subramani; Sasha Harrison; Aerin Kim; Elliot Branson; Rosanne Liu
2021-11-06
"How Does It Detect A Malicious App?" Explaining the Predictions of AI-based Android Malware Detector. (11%)Zhi Lu; Vrizlynn L. L. Thing
2021-11-05
A Unified Game-Theoretic Interpretation of Adversarial Robustness. (98%)Jie Ren; Die Zhang; Yisen Wang; Lu Chen; Zhanpeng Zhou; Yiting Chen; Xu Cheng; Xin Wang; Meng Zhou; Jie Shi; Quanshi Zhang
Sequential Randomized Smoothing for Adversarially Robust Speech Recognition. (96%)Raphael Olivier; Bhiksha Raj
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups. (2%)Aidmar Wainakh; Ephraim Zimmer; Sandeep Subedi; Jens Keim; Tim Grube; Shankar Karuppayah; Alejandro Sanchez Guinea; Max Mühlhäuser
2021-11-04
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models. (99%)Boxin Wang; Chejian Xu; Shuohang Wang; Zhe Gan; Yu Cheng; Jianfeng Gao; Ahmed Hassan Awadallah; Bo Li
Adversarial Attacks on Graph Classification via Bayesian Optimisation. (87%)Xingchen Wan; Henry Kenlay; Binxin Ru; Arno Blaas; Michael A. Osborne; Xiaowen Dong
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods. (47%)Peru Bhardwaj; John Kelleher; Luca Costabello; Declan O'Sullivan
Attacking Deep Reinforcement Learning-Based Traffic Signal Control Systems with Colluding Vehicles. (3%)Ao Qu; Yihong Tang; Wei Ma
2021-11-03
LTD: Low Temperature Distillation for Robust Adversarial Training. (54%)Erh-Chung Chen; Che-Rung Lee
Multi-Glimpse Network: A Robust and Efficient Classification Architecture based on Recurrent Downsampled Attention. (41%)Sia Huat Tan; Runpei Dong; Kaisheng Ma
2021-11-02
Effective and Imperceptible Adversarial Textual Attack via Multi-objectivization. (99%)Shengcai Liu; Ning Lu; Wenjing Hong; Chao Qian; Ke Tang
Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks. (96%)Maksym Yatsura; Jan Hendrik Metzen; Matthias Hein
Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds. (70%)Yujia Huang; Huan Zhang; Yuanyuan Shi; J Zico Kolter; Anima Anandkumar
Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness. (68%)Ke Sun; Mingjie Li; Zhouchen Lin
Knowledge Cross-Distillation for Membership Privacy. (38%)Rishav Chourasia; Batnyam Enkhtaivan; Kunihiro Ito; Junki Mori; Isamu Teranishi; Hikaru Tsuchida
Adversarially Perturbed Wavelet-based Morphed Face Generation. (9%)Kelsey O'Haire; Sobhan Soleymani; Baaria Chaudhary; Poorya Aghdaie; Jeremy Dawson; Nasser M. Nasrabadi
2021-11-01
Graph Structural Attack by Spectral Distance. (93%)Lu Lin; Ethan Blaser; Hongning Wang
Availability Attacks Create Shortcuts. (89%)Da Yu; Huishuai Zhang; Wei Chen; Jian Yin; Tie-Yan Liu
Robustness of deep learning algorithms in astronomy -- galaxy morphology studies. (83%)A. Ćiprijanović; D. Kafkes; G. N. Perdue; K. Pedro; G. Snyder; F. J. Sánchez; S. Madireddy; S. Wild; B. Nord
When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? (69%)Lijie Fan; Sijia Liu; Pin-Yu Chen; Gaoyuan Zhang; Chuang Gan
ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack. (9%)Dahoon Park; Kon-Woo Kwon; Sunghoon Im; Jaeha Kung
2021-10-31
An Actor-Critic Method for Simulation-Based Optimization. (56%)Kuo Li; Qing-Shan Jia; Jiaqi Yan
2021-10-30
Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach. (97%)Anindya Sarkar; Anirban Sarkar; Sowrya Gali; Vineeth N Balasubramanian
AdvCodeMix: Adversarial Attack on Code-Mixed Data. (93%)Sourya Dipta Das; Ayan Basak; Soumil Mandal; Dipankar Das
Backdoor Pre-trained Models Can Transfer to All. (3%)Lujia Shen; Shouling Ji; Xuhong Zhang; Jinfeng Li; Jing Chen; Jie Shi; Chengfang Fang; Jianwei Yin; Ting Wang
Trojan Source: Invisible Vulnerabilities. (1%)Nicholas Boucher; Ross Anderson
2021-10-29
Attacking Video Recognition Models with Bullet-Screen Comments. (99%)Kai Chen; Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Adversarial Robustness with Semi-Infinite Constrained Learning. (92%)Alexander Robey; Luiz F. O. Chamon; George J. Pappas; Hamed Hassani; Alejandro Ribeiro
$\epsilon$-weakened Robustness of Deep Neural Networks. (62%)Pei Huang; Yuting Yang; Minghao Liu; Fuqi Jia; Feifei Ma; Jian Zhang
You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership. (11%)Xuxi Chen; Tianlong Chen; Zhenyu Zhang; Zhangyang Wang
2021-10-28
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework. (99%)Lifan Yuan; Yichi Zhang; Yangyi Chen; Wei Wei
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis. (92%)Junfeng Guo; Ang Li; Cong Liu
The magnitude vector of images. (1%)Michael F. Adamer; Leslie O'Bray; Brouwer Edward De; Bastian Rieck; Karsten Borgwardt
2021-10-27
Towards Evaluating the Robustness of Neural Networks Learned by Transduction. (98%)Jiefeng Chen; Xi Wu; Yang Guo; Yingyu Liang; Somesh Jha
CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks. (98%)Haotian Xue; Kaixiong Zhou; Tianlong Chen; Kai Guo; Xia Hu; Yi Chang; Xin Wang
Towards Robust Reasoning over Knowledge Graphs. (83%)Zhaohan Xi; Ren Pang; Changjiang Li; Shouling Ji; Xiapu Luo; Xusheng Xiao; Ting Wang
Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks. (74%)Hassan Dbouk; Naresh R. Shanbhag
Adversarial Neuron Pruning Purifies Backdoored Deep Models. (15%)Dongxian Wu; Yisen Wang
From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems. (5%)Yao Zhou; Haonan Wang; Jingrui He; Haixun Wang
Robust Contrastive Learning Using Negative Samples with Diminished Semantics. (1%)Songwei Ge; Shlok Mishra; Haohan Wang; Chun-Liang Li; David Jacobs
RoMA: Robust Model Adaptation for Offline Model-based Optimization. (1%)Sihyun Yu; Sungsoo Ahn; Le Song; Jinwoo Shin
2021-10-26
Can't Fool Me: Adversarially Robust Transformer for Video Understanding. (99%)Divya Choudhary; Palash Goyal; Saurabh Sahu
Frequency Centric Defense Mechanisms against Adversarial Examples. (99%)Sanket B. Shah; Param Raval; Harin Khakhi; Mehul S. Raval
ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers. (99%)Husheng Han; Kaidi Xu; Xing Hu; Xiaobing Chen; Ling Liang; Zidong Du; Qi Guo; Yanzhi Wang; Yunji Chen
Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks. (99%)Yonggan Fu; Qixuan Yu; Yang Zhang; Shang Wu; Xu Ouyang; David Cox; Yingyan Lin
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective. (98%)Jingwei Sun; Ang Li; Louis DiValentin; Amin Hassanzadeh; Yiran Chen; Hai Li
A Frequency Perspective of Adversarial Robustness. (98%)Shishira R Maiya; Max Ehrlich; Vatsal Agarwal; Ser-Nam Lim; Tom Goldstein; Abhinav Shrivastava
Disrupting Deep Uncertainty Estimation Without Harming Accuracy. (86%)Ido Galil; Ran El-Yaniv
Improving Local Effectiveness for Global robust training. (83%)Jingyue Lu; M. Pawan Kumar
Robustness of Graph Neural Networks at Scale. (76%)Simon Geisler; Tobias Schmidt; Hakan Şirin; Daniel Zügner; Aleksandar Bojchevski; Stephan Günnemann
Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions. (75%)Izzat Alsmadi; Kashif Ahmad; Mahmoud Nazzal; Firoj Alam; Ala Al-Fuqaha; Abdallah Khreishah; Abdulelah Algosaibi
Adversarial Robustness in Multi-Task Learning: Promises and Illusions. (64%)Salah Ghamizi; Maxime Cordy; Mike Papadakis; Yves Le Traon
AugMax: Adversarial Composition of Random Augmentations for Robust Training. (56%)Haotao Wang; Chaowei Xiao; Jean Kossaifi; Zhiding Yu; Anima Anandkumar; Zhangyang Wang
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes. (50%)Sanghyun Hong; Michael-Andrei Panaitescu-Liess; Yiğitcan Kaya; Tudor Dumitraş
Semantic Host-free Trojan Attack. (10%)Haripriya Harikumar; Kien Do; Santu Rana; Sunil Gupta; Svetha Venkatesh
CAFE: Catastrophic Data Leakage in Vertical Federated Learning. (3%)Xiao Jin; Pin-Yu Chen; Chia-Yi Hsu; Chia-Mu Yu; Tianyi Chen
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge. (1%)Geng Yuan; Xiaolong Ma; Wei Niu; Zhengang Li; Zhenglun Kong; Ning Liu; Yifan Gong; Zheng Zhan; Chaoyang He; Qing Jin; Siyue Wang; Minghai Qin; Bin Ren; Yanzhi Wang; Sijia Liu; Xue Lin
Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection. (1%)Chunjong Park; Anas Awadalla; Tadayoshi Kohno; Shwetak Patel
Defensive Tensorization. (1%)Adrian Bulat; Jean Kossaifi; Sourav Bhattacharya; Yannis Panagakis; Timothy Hospedales; Georgios Tzimiropoulos; Nicholas D Lane; Maja Pantic
Task-Aware Meta Learning-based Siamese Neural Network for Classifying Obfuscated Malware. (1%)Jinting Zhu; Julian Jang-Jaccard; Amardeep Singh; Paul A. Watters; Seyit Camtepe
2021-10-25
Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks. (99%)Qiyu Kang; Yang Song; Qinxu Ding; Wee Peng Tay
Generating Watermarked Adversarial Texts. (99%)Mingjie Li; Hanzhou Wu; Xinpeng Zhang
Beyond $L_p$ clipping: Equalization-based Psychoacoustic Attacks against ASRs. (92%)Hadi Abdullah; Muhammad Sajidur Rahman; Christian Peeters; Cassidy Gibson; Washington Garcia; Vincent Bindschaedler; Thomas Shrimpton; Patrick Traynor
Fast Gradient Non-sign Methods. (92%)Yaya Cheng; Jingkuan Song; Xiaosu Zhu; Qilong Zhang; Lianli Gao; Heng Tao Shen
Ensemble Federated Adversarial Training with Non-IID data. (87%)Shuang Luo; Didi Zhu; Zexi Li; Chao Wu
GANash -- A GAN approach to steganography. (81%)Venkatesh Subramaniyan; Vignesh Sivakumar; A. K. Vagheesan; S. Sakthivelan; K. J. Jegadish Kumar; K. K. Nagarajan
A Dynamical System Perspective for Lipschitz Neural Networks. (81%)Laurent Meunier; Blaise Delattre; Alexandre Araujo; Alexandre Allauzen
An Adaptive Structural Learning of Deep Belief Network for Image-based Crack Detection in Concrete Structures Using SDNET2018. (13%)Shin Kamada; Takumi Ichimura; Takashi Iwasaki
2021-10-24
Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples. (80%)Yi Xiang Marcus Tan; Penny Chong; Jiamei Sun; Ngai-man Cheung; Yuval Elovici; Alexander Binder
2021-10-23
ADC: Adversarial attacks against object Detection that evade Context consistency checks. (99%)Mingjun Yin; Shasha Li; Chengyu Song; M. Salman Asif; Amit K. Roy-Chowdhury; Srikanth V. Krishnamurthy
A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness. (81%)Chang Song; Riya Ranjan; Hai Li
2021-10-22
Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations. (99%)Marek Galovic; Branislav Bosansky; Viliam Lisy
How and When Adversarial Robustness Transfers in Knowledge Distillation? (91%)Rulin Shao; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh
Fairness Degrading Adversarial Attacks Against Clustering Algorithms. (86%)Anshuman Chhabra; Adish Singla; Prasant Mohapatra
Adversarial robustness for latent models: Revisiting the robust-standard accuracies tradeoff. (80%)Adel Javanmard; Mohammad Mehrabi
PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy. (15%)Xiaolan Gu; Ming Li; Li Xiong
ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI. (15%)Samuel Hess; Gregory Ditzler
Spoofing Detection on Hand Images Using Quality Assessment. (1%)Asish Bera; Ratnadeep Dey; Debotosh Bhattacharjee; Mita Nasipuri; Hubert P. H. Shum
Text Counterfactuals via Latent Optimization and Shapley-Guided Search. (1%)Quintin Pope; Xiaoli Z. Fern
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. (1%)Anvith Thudi; Hengrui Jia; Ilia Shumailov; Nicolas Papernot
MANDERA: Malicious Node Detection in Federated Learning via Ranking. (1%)Wanchuang Zhu; Benjamin Zi Hao Zhao; Simon Luo; Tongliang Liu; Ke Deng
2021-10-21
CAPTIVE: Constrained Adversarial Perturbations to Thwart IC Reverse Engineering. (98%)Amir Hosein Afandizadeh Zargari; Marzieh AshrafiAmiri; Minjun Seo; Sai Manoj Pudukotai Dinakarrao; Mohammed E. Fouda; Fadi Kurdahi
PROVES: Establishing Image Provenance using Semantic Signatures. (93%)Mingyang Xie; Manav Kulshrestha; Shaojie Wang; Jinghan Yang; Ayan Chakrabarti; Ning Zhang; Yevgeniy Vorobeychik
RoMA: a Method for Neural Network Robustness Measurement and Assessment. (92%)Natan Levy; Guy Katz
Anti-Backdoor Learning: Training Clean Models on Poisoned Data. (83%)Yige Li; Xixiang Lyu; Nodens Koren; Lingjuan Lyu; Bo Li; Xingjun Ma
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion. (68%)Shijie Zhang; Hongzhi Yin; Tong Chen; Zi Huang; Quoc Viet Hung Nguyen; Lizhen Cui
Robustness through Data Augmentation Loss Consistency. (61%)Tianjian Huang; Shaunak Halbe; Chinnadhurai Sankar; Pooyan Amini; Satwik Kottur; Alborz Geramifard; Meisam Razaviyayn; Ahmad Beirami
Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness. (61%)Simon Geisler; Johanna Sommer; Jan Schuchardt; Aleksandar Bojchevski; Stephan Günnemann
Watermarking Graph Neural Networks based on Backdoor Attacks. (31%)Jing Xu; Stjepan Picek
Physical Side-Channel Attacks on Embedded Neural Networks: A Survey. (8%)Maria Méndez Real; Rubén Salvador
2021-10-20
Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning. (83%)Thai Le; Long Tran-Thanh; Dongwon Lee
Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks. (62%)Zihan Liu; Yun Luo; Zelin Zang; Stan Z. Li
Moiré Attack (MA): A New Potential Risk of Screen Photos. (56%)Dantong Niu; Ruohao Guo; Yisen Wang
Adversarial attacks against Bayesian forecasting dynamic models. (13%)Roi Naveiro
No One Representation to Rule Them All: Overlapping Features of Training Methods. (1%)Raphael Gontijo-Lopes; Yann Dauphin; Ekin D. Cubuk
2021-10-19
Multi-concept adversarial attacks. (99%)Vibha Belavadi; Yan Zhou; Murat Kantarcioglu; Bhavani M. Thuraisingham
A Regularization Method to Improve Adversarial Robustness of Neural Networks for ECG Signal Classification. (96%)Linhai Ma; Liang Liang
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks. (69%)Atul Sharma; Wei Chen; Joshua Zhao; Qiang Qiu; Somali Chaterji; Saurabh Bagchi
Understanding Convolutional Neural Networks from Theoretical Perspective via Volterra Convolution. (61%)Tenghui Li; Guoxu Zhou; Yuning Qiu; Qibin Zhao
Detecting Backdoor Attacks Against Point Cloud Classifiers. (26%)Zhen Xiang; David J. Miller; Siheng Chen; Xi Li; George Kesidis
Speech Pattern based Black-box Model Watermarking for Automatic Speech Recognition. (13%)Haozhe Chen; Weiming Zhang; Kunlin Liu; Kejiang Chen; Han Fang; Nenghai Yu
A Deeper Look into RowHammer's Sensitivities: Experimental Analysis of Real DRAM Chips and Implications on Future Attacks and Defenses. (5%)Lois Orosa; Abdullah Giray Yağlıkçı; Haocong Luo; Ataberk Olgun; Jisung Park; Hasan Hassan; Minesh Patel; Jeremie S. Kim; Onur Mutlu
2021-10-18
Boosting the Transferability of Video Adversarial Examples via Temporal Translation. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information. (99%)Baolin Zheng; Peipei Jiang; Qian Wang; Qi Li; Chao Shen; Cong Wang; Yunjie Ge; Qingyang Teng; Shenyi Zhang
Improving Robustness using Generated Data. (97%)Sven Gowal; Sylvestre-Alvise Rebuffi; Olivia Wiles; Florian Stimberg; Dan Andrei Calian; Timothy Mann
MEMO: Test Time Robustness via Adaptation and Augmentation. (13%)Marvin Zhang; Sergey Levine; Chelsea Finn
Minimal Multi-Layer Modifications of Deep Neural Networks. (4%)Idan Refaeli; Guy Katz
2021-10-17
Unrestricted Adversarial Attacks on ImageNet Competition. (99%)Yuefeng Chen; Xiaofeng Mao; Yuan He; Hui Xue; Chao Li; Yinpeng Dong; Qi-An Fu; Xiao Yang; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu; Fangcheng Liu; Chao Zhang; Hongyang Zhang; Yichi Zhang; Shilong Liu; Chang Liu; Wenzhao Xiang; Yajie Wang; Huipeng Zhou; Haoran Lyu; Yidan Xu; Zixuan Xu; Taoyu Zhu; Wenjun Li; Xianfeng Gao; Guoqiu Wang; Huanqian Yan; Ying Guo; Chaoning Zhang; Zheng Fang; Yang Wang; Bingyang Fu; Yunfei Zheng; Yekui Wang; Haorong Luo; Zhen Yang
Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training. (99%)Alexander Daniel Pan; Daniel Yongkyun Lee; Huan Zhang; Yize Chen; Yuanyuan Shi
ECG-ATK-GAN: Robustness against Adversarial Attacks on ECGs using Conditional Generative Adversarial Networks. (99%)Khondker Fariha Hossain; Sharif Amit Kamran; Alireza Tavakkoli; Xingjun Ma
Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications. (22%)Bang Wu; Xiangwen Yang; Shirui Pan; Xingliang Yuan
Poisoning Attacks on Fair Machine Learning. (12%)Minh-Hao Van; Wei Du; Xintao Wu; Aidong Lu
2021-10-16
Black-box Adversarial Attacks on Network-wide Multi-step Traffic State Prediction Models. (99%)Bibek Poudel; Weizi Li
Analyzing Dynamic Adversarial Training Data in the Limit. (82%)Eric Wallace; Adina Williams; Robin Jia; Douwe Kiela
Characterizing Improper Input Validation Vulnerabilities of Mobile Crowdsourcing Services. (5%)Sojhal Ismail Khan; Dominika Woszczyk; Chengzeng You; Soteris Demetriou; Muhammad Naveed
Tackling the Imbalance for GNNs. (4%)Rui Wang; Weixuan Xiong; Qinghu Hou; Ou Wu
2021-10-15
Adversarial Attacks on Gaussian Process Bandits. (99%)Eric Han; Jonathan Scarlett
Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm. (99%)Tengfei Zhao; Zhaocheng Ge; Hanping Hu; Dingmeng Shi
Adversarial Attacks on ML Defense Models Competition. (99%)Yinpeng Dong; Qi-An Fu; Xiao Yang; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu; Jiayu Tang; Yuefeng Chen; XiaoFeng Mao; Yuan He; Hui Xue; Chao Li; Ye Liu; Qilong Zhang; Lianli Gao; Yunrui Yu; Xitong Gao; Zhe Zhao; Daquan Lin; Jiadong Lin; Chuanbiao Song; Zihao Wang; Zhennan Wu; Yang Guo; Jiequan Cui; Xiaogang Xu; Pengguang Chen
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture. (76%)Xinyu Tang; Saeed Mahloujifar; Liwei Song; Virat Shejwalkar; Milad Nasr; Amir Houmansadr; Prateek Mittal
Robustness of different loss functions and their impact on networks learning capability. (76%)Vishal Rajput
Chunked-Cache: On-Demand and Scalable Cache Isolation for Security Architectures. (22%)Ghada Dessouky; Alexander Gruler; Pouya Mahmoody; Ahmad-Reza Sadeghi; Emmanuel Stapf
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks. (10%)Yangyi Chen; Fanchao Qi; Zhiyuan Liu; Maosong Sun
Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation. (8%)Yao Qin; Chiyuan Zhang; Ting Chen; Balaji Lakshminarayanan; Alex Beutel; Xuezhi Wang
Hand Me Your PIN! Inferring ATM PINs of Users Typing with a Covered Hand. (1%)Matteo Cardaioli; Stefano Cecconello; Mauro Conti; Simone Milani; Stjepan Picek; Eugen Saraci
2021-10-14
Adversarial examples by perturbing high-level features in intermediate decoder layers. (99%)Vojtěch Čermák; Lukáš Adam
DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks. (99%)Yixiang Wang; Jiqiang Liu; Xiaolin Chang; Jianhua Wang; Ricardo J. Rodríguez
Adversarial Purification through Representation Disentanglement. (99%)Tao Bai; Jun Zhao; Lanqing Guo; Bihan Wen
RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models. (93%)Wenkai Yang; Yankai Lin; Peng Li; Jie Zhou; Xu Sun
An Optimization Perspective on Realizing Backdoor Injection Attacks on Deep Neural Networks in Hardware. (87%)M. Caner Tol; Saad Islam; Berk Sunar; Ziming Zhang
Interactive Analysis of CNN Robustness. (80%)Stefan Sietzen; Mathias Lechner; Judy Borowski; Ramin Hasani; Manuela Waldner
On Adversarial Vulnerability of PHM algorithms: An Initial Study. (69%)Weizhong Yan; Zhaoyuan Yang; Jianwei Qiu
Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models. (61%)Tianlu Wang; Diyi Yang; Xuezhi Wang
Toward Degradation-Robust Voice Conversion. (9%)Chien-yu Huang; Kai-Wei Chang; Hung-yi Lee
Interpreting the Robustness of Neural NLP Models to Textual Perturbations. (9%)Yunxiang Zhang; Liangming Pan; Samson Tan; Min-Yen Kan
Retrieval-guided Counterfactual Generation for QA. (2%)Bhargavi Paranjape; Matthew Lamm; Ian Tenney
Effective Certification of Monotone Deep Equilibrium Models. (1%)Mark Niklas Müller; Robin Staab; Marc Fischer; Martin Vechev
2021-10-13
A Framework for Verification of Wasserstein Adversarial Robustness. (99%)Tobias Wegel; Felix Assion; David Mickisch; Florens Greßner
Identification of Attack-Specific Signatures in Adversarial Examples. (99%)Hossein Souri; Pirazh Khorramshahi; Chun Pong Lau; Micah Goldblum; Rama Chellappa
Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness. (99%)Xiao Yang; Yinpeng Dong; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer. (98%)Fanchao Qi; Yangyi Chen; Xurui Zhang; Mukai Li; Zhiyuan Liu; Maosong Sun
Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack. (93%)Davis Brown; Henry Kvinge
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks. (92%)Shawn Shan; Arjun Nitin Bhagoji; Haitao Zheng; Ben Y. Zhao
Boosting the Certified Robustness of L-infinity Distance Nets. (1%)Bohang Zhang; Du Jiang; Di He; Liwei Wang
Benchmarking the Robustness of Spatial-Temporal Models Against Corruptions. (1%)Chenyu Yi; Siyuan Yang; Haoliang Li; Yap-peng Tan; Alex Kot
2021-10-12
Adversarial Attack across Datasets. (99%)Yunxiao Qin; Yuanhao Xiong; Jinfeng Yi; Cho-Jui Hsieh
Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning. (99%)Jinyin Chen; Guohan Huang; Haibin Zheng; Shanqing Yu; Wenrong Jiang; Chen Cui
SEPP: Similarity Estimation of Predicted Probabilities for Defending and Detecting Adversarial Text. (92%)Hoang-Quoc Nguyen-Son; Seira Hidano; Kazuhide Fukushima; Shinsaku Kiyomoto
On the Security Risks of AutoML. (45%)Ren Pang; Zhaohan Xi; Shouling Ji; Xiapu Luo; Ting Wang
Zero-bias Deep Neural Network for Quickest RF Signal Surveillance. (1%)Yongxin Liu; Yingjie Chen; Jian Wang; Shuteng Niu; Dahai Liu; Houbing Song
2021-10-11
Boosting Fast Adversarial Training with Learnable Adversarial Initialization. (99%)Xiaojun Jia; Yong Zhang; Baoyuan Wu; Jue Wang; Xiaochun Cao
Parameterizing Activation Functions for Adversarial Robustness. (98%)Sihui Dai; Saeed Mahloujifar; Prateek Mittal
Amicable examples for informed source separation. (86%)Naoya Takahashi; Yuki Mitsufuji
Doubly-Trained Adversarial Data Augmentation for Neural Machine Translation. (12%)Weiting Tan; Shuoyang Ding; Huda Khayrallah; Philipp Koehn
Intriguing Properties of Input-dependent Randomized Smoothing. (1%)Peter Súkeník; Aleksei Kuvshinov; Stephan Günnemann
Hiding Images into Images with Real-world Robustness. (1%)Qichao Ying; Hang Zhou; Xianhan Zeng; Haisheng Xu; Zhenxing Qian; Xinpeng Zhang
Source Mixing and Separation Robust Audio Steganography. (1%)Naoya Takahashi; Mayank Kumar Singh; Yuki Mitsufuji
Homogeneous Learning: Self-Attention Decentralized Deep Learning. (1%)Yuwei Sun; Hideya Ochiai
Large Language Models Can Be Strong Differentially Private Learners. (1%)Xuechen Li; Florian Tramèr; Percy Liang; Tatsunori Hashimoto
A Closer Look at Prototype Classifier for Few-shot Image Classification. (1%)Mingcheng Hou; Issei Sato
Certified Patch Robustness via Smoothed Vision Transformers. (1%)Hadi Salman; Saachi Jain; Eric Wong; Aleksander Mądry
2021-10-10
Adversarial Attacks in a Multi-view Setting: An Empirical Study of the Adversarial Patches Inter-view Transferability. (98%)Bilel Tarchoun; Ihsen Alouani; Anouar Ben Khalifa; Mohamed Ali Mahjoub
Universal Adversarial Attacks on Neural Networks for Power Allocation in a Massive MIMO System. (92%)Pablo Millán Santos; B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
2021-10-09
Demystifying the Transferability of Adversarial Attacks in Computer Networks. (99%)Ehsan Nowroozi; Yassine Mekdad; Mohammad Hajian Berenjestanaki; Mauro Conti; Abdeslam EL Fergougui
Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning. (93%)Guanlin Liu; Lifeng Lai
Widen The Backdoor To Let More Attackers In. (13%)Siddhartha Datta; Giulio Lovisotto; Ivan Martinovic; Nigel Shadbolt
2021-10-08
Explainability-Aware One Point Attack for Point Cloud Neural Networks. (99%)Hanxiao Tan; Helena Kotthaus
Game Theory for Adversarial Attacks and Defenses. (98%)Shorya Sharma
Graphs as Tools to Improve Deep Learning Methods. (10%)Carlos Lassance; Myriam Bontonou; Mounia Hamidouche; Bastien Pasdeloup; Lucas Drumetz; Vincent Gripon
IHOP: Improved Statistical Query Recovery against Searchable Symmetric Encryption through Quadratic Optimization. (3%)Simon Oya; Florian Kerschbaum
A Wireless Intrusion Detection System for 802.11 WPA3 Networks. (1%)Neil Dalal; Nadeem Akhtar; Anubhav Gupta; Nikhil Karamchandani; Gaurav S. Kasbekar; Jatin Parekh
Salient ImageNet: How to discover spurious features in Deep Learning? (1%)Sahil Singla; Soheil Feizi
2021-10-07
Robust Feature-Level Adversaries are Interpretability Tools. (99%)Stephen Casper; Max Nadeau; Dylan Hadfield-Menell; Gabriel Kreiman
EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection. (99%)Hamid Bostani; Veelasha Moonsamy
Adversarial Attack by Limited Point Cloud Surface Modifications. (98%)Atrin Arya; Hanieh Naderi; Shohreh Kasaei
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. (98%)Hanxun Huang; Yisen Wang; Sarah Monazam Erfani; Quanquan Gu; James Bailey; Xingjun Ma
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction. (80%)Jinyin Chen; Haiyang Xiong; Haibin Zheng; Jian Zhang; Guodong Jiang; Yi Liu
Fingerprinting Multi-exit Deep Neural Network Models via Inference Time. (62%)Tian Dong; Han Qiu; Tianwei Zhang; Jiwei Li; Hewu Li; Jialiang Lu
Adversarial Unlearning of Backdoors via Implicit Hypergradient. (56%)Yi Zeng; Si Chen; Won Park; Z. Morley Mao; Ming Jin; Ruoxi Jia
MPSN: Motion-aware Pseudo Siamese Network for Indoor Video Head Detection in Buildings. (1%)Kailai Sun; Xiaoteng Ma; Peng Liu; Qianchuan Zhao
2021-10-06
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise. (99%)Souvik Kundu; Massoud Pedram; Peter A. Beerel
Reversible adversarial examples against local visual perturbation. (99%)Zhaoxia Yin; Li Chen; Shaowei Zhu
Attack as the Best Defense: Nullifying Image-to-image Translation GANs via Limit-aware Adversarial Attack. (99%)Chin-Yuan Yeh; Hsi-Wen Chen; Hong-Han Shuai; De-Nian Yang; Ming-Syan Chen
Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs. (99%)Philipp Benz; Soomin Ham; Chaoning Zhang; Adil Karjauv; In So Kweon
Adversarial Attacks on Machinery Fault Diagnosis. (99%)Jiahao Chen; Diqun Yan
Adversarial Attacks on Spiking Convolutional Networks for Event-based Vision. (98%)Julian Büchel; Gregor Lenz; Yalun Hu; Sadique Sheik; Martino Sorbaro
A Uniform Framework for Anomaly Detection in Deep Neural Networks. (97%)Fangzhen Zhao; Chenyi Zhang; Naipeng Dong; Zefeng You; Zhenxin Wu
Double Descent in Adversarial Training: An Implicit Label Noise Perspective. (88%)Chengyu Dong; Liyuan Liu; Jingbo Shang
Improving Adversarial Robustness for Free with Snapshot Ensemble. (83%)Yihao Wang
DoubleStar: Long-Range Attack Towards Depth Estimation based Obstacle Avoidance in Autonomous Systems. (45%)Ce Zhou; Qiben Yan; Yan Shi; Lichao Sun
Inference Attacks Against Graph Neural Networks. (2%)Zhikun Zhang; Min Chen; Michael Backes; Yun Shen; Yang Zhang
Data-driven behavioural biometrics for continuous and adaptive user verification using Smartphone and Smartwatch. (1%)Akriti Verma; Valeh Moghaddam; Adnan Anwar
On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks. (1%)Yunhao Yang; Parham Gohari; Ufuk Topcu
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks. (1%)Jiawei Du; Hanshu Yan; Jiashi Feng; Joey Tianyi Zhou; Liangli Zhen; Rick Siow Mong Goh; Vincent Y. F. Tan
Stegomalware: A Systematic Survey of Malware Hiding and Detection in Images, Machine Learning Models and Research Challenges. (1%)Rajasekhar Chaganti; Vinayakumar Ravi; Mamoun Alazab; Tuan D. Pham
Exploring the Common Principal Subspace of Deep Features in Neural Networks. (1%)Haoran Liu; Haoyi Xiong; Yaqing Wang; Haozhe An; Dongrui Wu; Dejing Dou
Generalizing Neural Networks by Reflecting Deviating Data in Production. (1%)Yan Xiao; Yun Lin; Ivan Beschastnikh; Changsheng Sun; David S. Rosenblum; Jin Song Dong
2021-10-05
Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. (99%)Lisa Oakley; Alina Oprea; Stavros Tripakis
Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations. (99%)Shasha Li; Abhishek Aich; Shitong Zhu; M. Salman Asif; Chengyu Song; Amit K. Roy-Chowdhury; Srikanth Krishnamurthy
Adversarial defenses via a mixture of generators. (99%)Maciej Żelaszczyk; Jacek Mańdziuk
Neural Network Adversarial Attack Method Based on Improved Genetic Algorithm. (92%)Dingming Yang; Yanrong Cui; Hongqiang Yuan
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. (33%)Kangjie Chen; Yuxian Meng; Xiaofei Sun; Shangwei Guo; Tianwei Zhang; Jiwei Li; Chun Fan
Spectral Bias in Practice: The Role of Function Frequency in Generalization. (1%)Sara Fridovich-Keil; Raphael Gontijo-Lopes; Rebecca Roelofs
CADA: Multi-scale Collaborative Adversarial Domain Adaptation for Unsupervised Optic Disc and Cup Segmentation. (1%)Peng Liu; Charlie T. Tran; Bin Kong; Ruogu Fang
Noisy Feature Mixup. (1%)Soon Hoe Lim; N. Benjamin Erichson; Francisco Utrera; Winnie Xu; Michael W. Mahoney
2021-10-04
Benchmarking Safety Monitors for Image Classifiers with Machine Learning. (1%)Raul Sena Ferreira; Jean Arlat; Jeremie Guiochet; Hélène Waeselynck
2021-10-03
Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models. (82%)Wenqian Ye; Fei Xu; Yaojia Huang; Cassie Huang; Ji A
2021-10-02
Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication. (98%)Elliu Huang; Troia Fabio Di; Mark Stamp
Anti-aliasing Deep Image Classifiers using Novel Depth Adaptive Blurring and Activation Function. (13%)Md Tahmid Hossain; Shyh Wei Teng; Ferdous Sohel; Guojun Lu
2021-10-01
Calibrated Adversarial Training. (98%)Tianjin Huang; Vlado Menkovski; Yulong Pei; Mykola Pechenizkiy
Universal Adversarial Spoofing Attacks against Face Recognition. (87%)Takuma Amada; Seng Pei Liew; Kazuya Kakizaki; Toshinori Araki
Score-Based Generative Classifiers. (84%)Roland S. Zimmermann; Lukas Schott; Yang Song; Benjamin A. Dunn; David A. Klindt
One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency. (1%)Sayeed Shafayet Chowdhury; Nitin Rathi; Kaushik Roy
2021-09-30
Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation. (98%)Manjushree B. Aithal; Xiaohua Li
You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors. (95%)Zijian Zhu; Hang Su; Chang Liu; Wenzhao Xiang; Shibao Zheng
Adversarial Semantic Contour for Object Detection. (92%)Yichi Zhang; Zijian Zhu; Xiao Yang; Jun Zhu
From Zero-Shot Machine Learning to Zero-Day Attack Detection. (10%)Mohanad Sarhan; Siamak Layeghy; Marcus Gallagher; Marius Portmann
2021-09-29
On Brightness Agnostic Adversarial Examples Against Face Recognition Systems. (99%)Inderjeet Singh; Satoru Momiyama; Kazuya Kakizaki; Toshinori Araki
Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks. (70%)Kaleel Mahmood; Rigel Mahmood; Ethan Rathbun; Dijk Marten van
BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining. (41%)Weizhe Hua; Yichi Zhang; Chuan Guo; Zhiru Zhang; G. Edward Suh
Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP). (10%)Nancirose Piazza; Vahid Behzadan
2021-09-28
slimTrain -- A Stochastic Approximation Method for Training Separable Deep Neural Networks. (1%)Elizabeth Newman; Julianne Chung; Matthias Chung; Lars Ruthotto
2021-09-27
MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles. (99%)Yuejun Guo; Qiang Hu; Maxime Cordy; Michail Papadakis; Yves Le Traon
Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors. (99%)Zhengyi Wang; Zhongkai Hao; Ziqiao Wang; Hang Su; Jun Zhu
Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective. (98%)Adhyyan Narang; Vidya Muthukumar; Anant Sahai
GANG-MAM: GAN based enGine for Modifying Android Malware. (64%)Renjith G; Sonia Laudanna; Aji S; Corrado Aaron Visaggio; Vinod P
Distributionally Robust Multi-Output Regression Ranking. (3%)Shahabeddin Sotudian; Ruidi Chen; Ioannis Paschalidis
Improving Uncertainty of Deep Learning-based Object Classification on Radar Spectra using Label Smoothing. (1%)Kanil Patel; William Beluch; Kilian Rambach; Michael Pfeiffer; Bin Yang
Federated Deep Learning with Bayesian Privacy. (1%)Hanlin Gu; Lixin Fan; Bowen Li; Yan Kang; Yuan Yao; Qiang Yang
2021-09-26
Distributionally Robust Multiclass Classification and Applications in Deep CNN Image Classifiers. (11%)Ruidi Chen; Boran Hao; Ioannis Paschalidis
2021-09-25
Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency. (99%)Sohaib Kiani; Sana Awan; Chao Lan; Fengjun Li; Bo Luo
Contributions to Large Scale Bayesian Inference and Adversarial Machine Learning. (98%)Víctor Gallego
MINIMAL: Mining Models for Data Free Universal Adversarial Triggers. (93%)Swapnil Parekh; Yaman Singla Kumar; Somesh Singh; Changyou Chen; Balaji Krishnamurthy; Rajiv Ratn Shah
2021-09-24
Local Intrinsic Dimensionality Signals Adversarial Perturbations. (98%)Sandamal Weerasinghe; Tansu Alpcan; Sarah M. Erfani; Christopher Leckie; Benjamin I. P. Rubinstein
2021-09-23
Breaking BERT: Understanding its Vulnerabilities for Biomedical Named Entity Recognition through Adversarial Attack. (98%)Anne Dirkson; Suzan Verberne; Wessel Kraaij
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training. (88%)Jakub Breier; Xiaolu Hou; Martín Ochoa; Jesus Solano
AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses. (68%)Yaman Kumar Singla; Swapnil Parekh; Somesh Singh; Junyi Jessy Li; Rajiv Ratn Shah; Changyou Chen
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. (1%)Dongqi Han; Zhiliang Wang; Wenqi Chen; Ying Zhong; Su Wang; Han Zhang; Jiahai Yang; Xingang Shi; Xia Yin
2021-09-22
Exploring Adversarial Examples for Efficient Active Learning in Machine Learning Classifiers. (99%)Honggang Yu; Shihfeng Zeng; Teng Zhang; Ing-Chao Lin; Yier Jin
CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks. (81%)Mikhail Pautov; Nurislam Tursynbek; Marina Munkhoeva; Nikita Muravev; Aleksandr Petiushko; Ivan Oseledets
Security Analysis of Capsule Network Inference using Horizontal Collaboration. (69%)Adewale Adeyemo; Faiq Khalid; Tolulope A. Odetola; Syed Rafay Hasan
Adversarial Transfer Attacks With Unknown Data and Class Overlap. (62%)Luke E. Richards; André Nguyen; Ryan Capps; Steven Forsythe; Cynthia Matuszek; Edward Raff
Pushing the Right Buttons: Adversarial Evaluation of Quality Estimation. (1%)Diptesh Kanojia; Marina Fomicheva; Tharindu Ranasinghe; Frédéric Blain; Constantin Orăsan; Lucia Specia
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis. (1%)Zeyuan Yin; Ye Yuan; Panfeng Guo; Pan Zhou
2021-09-21
Attacks on Visualization-Based Malware Detection: Balancing Effectiveness and Executability. (99%)Hadjer Benkraouda; Jingyu Qian; Hung Quoc Tran; Berkay Kaplan
3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation. (93%)Mengxi Wu; Hao Huang; Yi Fang
DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning. (76%)Md Tamjid Hossain; Shafkat Islam; Shahriar Badsha; Haoting Shen
Privacy, Security, and Utility Analysis of Differentially Private CPES Data. (13%)Md Tamjid Hossain; Shahriar Badsha; Haoting Shen
2021-09-20
Robust Physical-World Attacks on Face Recognition. (99%)Xin Zheng; Yanbo Fan; Baoyuan Wu; Yong Zhang; Jue Wang; Shirui Pan
Modeling Adversarial Noise for Adversarial Defense. (99%)Dawei Zhou; Nannan Wang; Bo Han; Tongliang Liu
Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection? (99%)Deqiang Li; Tian Qiu; Shuo Chen; Qianmu Li; Shouhuai Xu
Robustness Analysis of Deep Learning Frameworks on Mobile Platforms. (10%)Amin Eslami Abyane; Hadi Hemmati
"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World. (2%)Emily Wenger; Max Bronckers; Christian Cianfarani; Jenna Cryan; Angela Sha; Haitao Zheng; Ben Y. Zhao
Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework. (1%)Muhammad Shafique; Alberto Marchisio; Rachmad Vidya Wicaksana Putra; Muhammad Abdullah Hanif
2021-09-19
On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars. (99%)Deboleena Roy; Chun Tao; Indranil Chakraborty; Kaushik Roy
Adversarial Training with Contrastive Learning in NLP. (16%)Daniela N. Rim; DongNyeong Heo; Heeyoul Choi
2021-09-18
Clean-label Backdoor Attack against Deep Hashing based Retrieval. (98%)Kuofeng Gao; Jiawang Bai; Bin Chen; Dongxian Wu; Shu-Tao Xia
2021-09-17
Messing Up 3D Virtual Environments: Transferable Adversarial 3D Objects. (98%)Enrico Meloni; Matteo Tiezzi; Luca Pasqualini; Marco Gori; Stefano Melacci
Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations. (8%)Ke Sun; Yi Liu; Yingnan Zhao; Hengshuai Yao; Shangling Jui; Linglong Kong
2021-09-16
Harnessing Perceptual Adversarial Patches for Crowd Counting. (99%)Shunchang Liu; Jiakai Wang; Aishan Liu; Yingwei Li; Yijie Gao; Xianglong Liu; Dacheng Tao
KATANA: Simple Post-Training Robustness Using Test Time Augmentations. (98%)Gilad Cohen; Raja Giryes
Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns. (96%)Prasanth Buddareddygari; Travis Zhang; Yezhou Yang; Yi Ren
Adversarial Attacks against Deep Learning Based Power Control in Wireless Communications. (95%)Brian Kim; Yi Shi; Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus
Don't Search for a Search Method -- Simple Heuristics Suffice for Adversarial Text Attacks. (68%)Nathaniel Berger; Stefan Riezler; Artem Sokolov; Sebastian Ebert
Membership Inference Attacks Against Recommender Systems. (3%)Minxing Zhang; Zhaochun Ren; Zihan Wang; Pengjie Ren; Zhumin Chen; Pengfei Hu; Yang Zhang
2021-09-15
Universal Adversarial Attack on Deep Learning Based Prognostics. (99%)Arghya Basak; Pradeep Rathore; Sri Harsha Nistala; Sagar Srinivas; Venkataramana Runkana
Balancing detectability and performance of attacks on the control channel of Markov Decision Processes. (98%)Alessio Russo; Alexandre Proutiere
FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack. (95%)Donghua Wang; Tingsong Jiang; Jialiang Sun; Weien Zhou; Xiaoya Zhang; Zhiqiang Gong; Wen Yao; Xiaoqian Chen
BERT is Robust! A Case Against Synonym-Based Adversarial Examples in Text Classification. (92%)Jens Hauser; Zhao Meng; Damián Pascual; Roger Wattenhofer
Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup. (13%)Guang Liu; Yuzhao Mao; Hailong Huang; Weiguo Gao; Xuan Li
Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel. (10%)Henrique Teles Maia; Chang Xiao; Dingzeyu Li; Eitan Grinspun; Changxi Zheng
2021-09-14
A Novel Data Encryption Method Inspired by Adversarial Attacks. (99%)Praveen Fernando; Jin Wei-Kocsis
Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder. (99%)Yao Qiu; Jinchao Zhang; Jie Zhou
PETGEN: Personalized Text Generation Attack on Deep Sequence Embedding-based Classification Models. (99%)Bing He; Mustaque Ahamad; Srijan Kumar
EVAGAN: Evasion Generative Adversarial Network for Low Data Regimes. (76%)Rizwan Hamid Randhawa; Nauman Aslam; Muhammad Alauthman; Husnain Rafiq; Muhammad Khalid
Dodging Attack Using Carefully Crafted Natural Makeup. (47%)Nitzan Guetta; Asaf Shabtai; Inderjeet Singh; Satoru Momiyama; Yuval Elovici
Avengers Ensemble! Improving Transferability of Authorship Obfuscation. (12%)Muhammad Haroon; Muhammad Fareed Zaffar; Padmini Srinivasan; Zubair Shafiq
ARCH: Efficient Adversarial Regularized Training with Caching. (8%)Simiao Zuo; Chen Liang; Haoming Jiang; Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen; Tuo Zhao
2021-09-13
Adversarial Bone Length Attack on Action Recognition. (99%)Nariki Tanaka; Hiroshi Kera; Kazuhiko Kawamoto
Randomized Substitution and Vote for Textual Adversarial Example Detection. (99%)Xiaosen Wang; Yifeng Xiong; Kun He
Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator. (99%)Wenzhao Xiang; Hang Su; Chang Liu; Yandong Guo; Shibao Zheng
Evolving Architectures with Gradient Misalignment toward Low Adversarial Transferability. (98%)Kevin Richard G. Operiano; Wanchalerm Pora; Hitoshi Iba; Hiroshi Kera
A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems. (98%)Moein Sabounchi; Jin Wei-Kocsis
Adversarial Examples for Evaluating Math Word Problem Solvers. (96%)Vivek Kumar; Rishabh Maheshwary; Vikram Pudi
PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos. (86%)Nupur Thakur; Baoxin Li
SignGuard: Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering. (81%)Jian Xu; Shao-Lun Huang; Linqi Song; Tian Lan
Formalizing and Estimating Distribution Inference Risks. (62%)Anshuman Suri; David Evans
Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models. (50%)Kun Zhou; Wayne Xin Zhao; Sirui Wang; Fuzheng Zhang; Wei Wu; Ji-Rong Wen
Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models. (16%)Won Park; Nan Li; Qi Alfred Chen; Z. Morley Mao
Adversarially Trained Object Detector for Unsupervised Domain Adaptation. (3%)Kazuma Fujii; Hiroshi Kera; Kazuhiko Kawamoto
Perturbation CheckLists for Evaluating NLG Evaluation Metrics. (1%)Ananya B. Sai; Tanay Dixit; Dev Yashpal Sheth; Sreyas Mohan; Mitesh M. Khapra
How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding. (1%)Tianda Li; Ahmad Rashid; Aref Jafari; Pranav Sharma; Ali Ghodsi; Mehdi Rezagholizadeh
Detecting Safety Problems of Multi-Sensor Fusion in Autonomous Driving. (1%)Ziyuan Zhong; Zhisheng Hu; Shengjian Guo; Xinyang Zhang; Zhenyu Zhong; Baishakhi Ray
2021-09-12
TREATED: Towards Universal Defense against Textual Adversarial Attacks. (99%)Bin Zhu; Zhaoquan Gu; Le Wang; Zhihong Tian
CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph. (98%)Xugang Wu; Huijun Wu; Xu Zhou; Kai Lu
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain. (93%)Hasan Abed Al Kader Hammoud; Bernard Ghanem
RockNER: A Simple Method to Create Adversarial Examples for Evaluating the Robustness of Named Entity Recognition Models. (84%)Bill Yuchen Lin; Wenyang Gao; Jun Yan; Ryan Moreno; Xiang Ren
Shape-Biased Domain Generalization via Shock Graph Embeddings. (2%)Maruthi Narayanan; Vickram Rajendran; Benjamin Kimia
Source Inference Attacks in Federated Learning. (1%)Hongsheng Hu; Zoran Salcic; Lichao Sun; Gillian Dobbie; Xuyun Zhang
2021-09-11
RobustART: Benchmarking Robustness on Architecture Design and Training Techniques. (98%)Shiyu Tang; Ruihao Gong; Yan Wang; Aishan Liu; Jiakai Wang; Xinyun Chen; Fengwei Yu; Xianglong Liu; Dawn Song; Alan Yuille; Philip H. S. Torr; Dacheng Tao
2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency. (81%)Yonggan Fu; Yang Zhao; Qixuan Yu; Chaojian Li; Yingyan Lin
2021-09-10
A Strong Baseline for Query Efficient Attacks in a Black Box Setting. (99%)Rishabh Maheshwary; Saket Maheshwary; Vikram Pudi
2021-09-09
Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification. (99%)Maximilian Mozes; Max Bartolo; Pontus Stenetorp; Bennett Kleinberg; Lewis D. Griffin
Energy Attack: On Transferring Adversarial Examples. (99%)Ruoxi Shi; Borui Yang; Yangzhou Jiang; Chenglong Zhao; Bingbing Ni
Protein Folding Neural Networks Are Not Robust. (99%)Sumit Kumar Jha; Arvind Ramanathan; Rickard Ewetz; Alvaro Velasquez; Susmit Jha
Towards Transferable Adversarial Attacks on Vision Transformers. (99%)Zhipeng Wei; Jingjing Chen; Micah Goldblum; Zuxuan Wu; Tom Goldstein; Yu-Gang Jiang
Multi-granularity Textual Adversarial Attack with Behavior Cloning. (98%)Yangyi Chen; Jin Su; Wei Wei
Spatially Focused Attack against Spatiotemporal Graph Neural Networks. (81%)Fuqiang Liu; Luis Miranda-Moreno; Lijun Sun
Differential Privacy in Personalized Pricing with Nonparametric Demand Models. (26%)Xi Chen; Sentao Miao; Yining Wang
EvilModel 2.0: Bringing Neural Network Models into Malware Attacks. (5%)Zhi Wang; Chaoge Liu; Xiang Cui; Jie Yin; Xutong Wang
2021-09-08
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning. (89%)Maziar Gomrokchi; Susan Amin; Hossein Aboutalebi; Alexander Wong; Doina Precup
Robust Optimal Classification Trees Against Adversarial Examples. (80%)Daniël Vos; Sicco Verwer
2021-09-07
Adversarial Parameter Defense by Multi-Step Risk Minimization. (98%)Zhiyuan Zhang; Ruixuan Luo; Xuancheng Ren; Qi Su; Liangyou Li; Xu Sun
POW-HOW: An enduring timing side-channel to evade online malware sandboxes. (12%)Antonio Nappa; Panagiotis Papadopoulos; Matteo Varvello; Daniel Aceituno Gomez; Juan Tapiador; Andrea Lanzi
Unpaired Adversarial Learning for Single Image Deraining with Rain-Space Contrastive Constraints. (1%)Xiang Chen; Jinshan Pan; Kui Jiang; Yufeng Huang; Caihua Kong; Longgang Dai; Yufeng Li
2021-09-06
Robustness and Generalization via Generative Adversarial Training. (82%)Omid Poursaeed; Tianxing Jiang; Harry Yang; Serge Belongie; SerNam Lim
Trojan Signatures in DNN Weights. (33%)Greg Fields; Mohammad Samragh; Mojan Javaheripi; Farinaz Koushanfar; Tara Javidi
Automated Robustness with Adversarial Training as a Post-Processing Step. (4%)Ambrish Rawat; Mathieu Sinn; Beat Buesser
Exposing Length Divergence Bias of Textual Matching Models. (2%)Lan Jiang; Tianshu Lyu; Chong Meng; Xiaoyong Lyu; Dawei Yin
2021-09-05
Efficient Combinatorial Optimization for Word-level Adversarial Textual Attack. (98%)Shengcai Liu; Ning Lu; Cheng Chen; Ke Tang
Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning. (2%)Yusen Wu; Hao Chen; Xin Wang; Chao Liu; Phuong Nguyen; Yelena Yesha
DexRay: A Simple, yet Effective Deep Learning Approach to Android Malware Detection based on Image Representation of Bytecode. (1%)Nadia Daoudi; Jordan Samhi; Abdoul Kader Kabore; Kevin Allix; Tegawendé F. Bissyandé; Jacques Klein
2021-09-04
Real-World Adversarial Examples involving Makeup Application. (99%)Chang-Sheng Lin; Chia-Yi Hsu; Pin-Yu Chen; Chia-Mu Yu
Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness. (99%)Uriya Pesso; Koby Bibas; Meir Feder
Training Meta-Surrogate Model for Transferable Adversarial Attack. (99%)Yunxiao Qin; Yuanhao Xiong; Jinfeng Yi; Cho-Jui Hsieh
2021-09-03
SEC4SR: A Security Analysis Platform for Speaker Recognition. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Yang Liu
Risk Assessment for Connected Vehicles under Stealthy Attacks on Vehicle-to-Vehicle Networks. (1%)Tianci Yang; Carlos Murguia; Chen Lv
2021-09-02
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples. (99%)Guanxiong Liu; Issa Khalil; Abdallah Khreishah; NhatHai Phan
Impact of Attention on Adversarial Robustness of Image Classification Models. (99%)Prachi Agrawal; Narinder Singh Punn; Sanjay Kumar Sonbhadra; Sonali Agarwal
Adversarial Robustness for Unsupervised Domain Adaptation. (98%)Muhammad Awais; Fengwei Zhou; Hang Xu; Lanqing Hong; Ping Luo; Sung-Ho Bae; Zhenguo Li
Real World Robustness from Systematic Noise. (91%)Yan Wang; Yuhang Li; Ruihao Gong
Building Compact and Robust Deep Neural Networks with Toeplitz Matrices. (61%)Alexandre Araujo
2021-09-01
Towards Improving Adversarial Training of NLP Models. (98%)Jin Yong Yoo; Yanjun Qi
Excess Capacity and Backdoor Poisoning. (97%)Naren Sarayu Manoj; Avrim Blum
Regional Adversarial Training for Better Robust Generalization. (96%)Chuanbiao Song; Yanbo Fan; Yicheng Yang; Baoyuan Wu; Yiming Li; Zhifeng Li; Kun He
R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors. (86%)Alberto Marchisio; Giacomo Pira; Maurizio Martina; Guido Masera; Muhammad Shafique
Proof Transfer for Neural Network Verification. (9%)Christian Sprecher; Marc Fischer; Dimitar I. Dimitrov; Gagandeep Singh; Martin Vechev
Guarding Machine Learning Hardware Against Physical Side-Channel Attacks. (2%)Anuj Dubey; Rosario Cammarota; Vikram Suresh; Aydin Aysu
2021-08-31
EG-Booster: Explanation-Guided Booster of ML Evasion Attacks. (99%)Abderrahmen Amich; Birhanu Eshete
Morphence: Moving Target Defense Against Adversarial Examples. (99%)Abderrahmen Amich; Birhanu Eshete
DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors. (93%)Yexin Duan; Jialin Chen; Xingyu Zhou; Junhua Zou; Zhengyun He; Wu Zhang; Jin Zhang; Zhisong Pan
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction. (83%)Zhenrui Yue; Zhankui He; Huimin Zeng; Julian McAuley
Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning. (75%)Doha Al Bared; Mohamed Nassar
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. (4%)Linyang Li; Demin Song; Xiaonan Li; Jiehang Zeng; Ruotian Ma; Xipeng Qiu
2021-08-30
Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings. (99%)Mazda Moayeri; Soheil Feizi
Investigating Vulnerabilities of Deep Neural Policies. (99%)Ezgi Korkmaz
Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise. (99%)Mingyu Dong; Diqun Yan; Yongkang Gong; Rangding Wang
Single Node Injection Attack against Graph Neural Networks. (68%)Shuchang Tao; Qi Cao; Huawei Shen; Junjie Huang; Yunfan Wu; Xueqi Cheng
Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms. (41%)Albert Jiménez Sanfiz; Mohamed Akrout
Adaptive perturbation adversarial training: based on reinforcement learning. (41%)Zhishen Nie; Ying Lin; Sp Ren; Lan Zhang
How Does Adversarial Fine-Tuning Benefit BERT? (33%)Javid Ebrahimi; Hao Yang; Wei Zhang
ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation. (26%)Ahmed Abusnaina; Afsah Anwar; Sultan Alshamrani; Abdulrahman Alabduljabbar; RhongHo Jang; Daehun Nyang; David Mohaisen
DuTrust: A Sentiment Analysis Dataset for Trustworthiness Evaluation. (1%)Lijie Wang; Hao Liu; Shuyuan Peng; Hongxuan Tang; Xinyan Xiao; Ying Chen; Hua Wu; Haifeng Wang
2021-08-29
Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution. (99%)Zongyi Li; Jianhan Xu; Jiehang Zeng; Linyang Li; Xiaoqing Zheng; Qi Zhang; Kai-Wei Chang; Cho-Jui Hsieh
Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models. (98%)Zeyuan Wang; Chaofeng Sha; Su Yang
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks. (82%)Shiwen Ni; Jiawen Li; Hung-Yu Kao
HAT4RD: Hierarchical Adversarial Training for Rumor Detection on Social Media. (81%)Shiwen Ni; Jiawen Li; Hung-Yu Kao
2021-08-27
Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights. (99%)Omid Kargarnovin; Amir Mahdi Sadeghzadeh; Rasool Jalili
Disrupting Adversarial Transferability in Deep Neural Networks. (98%)Christopher Wiedeman; Ge Wang
Evaluating the Robustness of Neural Language Models to Input Perturbations. (16%)Milad Moradi; Matthias Samwald
Deep learning models are not robust against noise in clinical text. (1%)Milad Moradi; Kathrin Blagec; Matthias Samwald
2021-08-26
Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks. (99%)Landan Seguin; Anthony Ndirango; Neeli Mishra; SueYeon Chung; Tyler Lee
A Hierarchical Assessment of Adversarial Severity. (98%)Guillaume Jeanneret; Juan C Perez; Pablo Arbelaez
Physical Adversarial Attacks on an Aerial Imagery Object Detector. (96%)Andrew Du; Bo Chen; Tat-Jun Chin; Yee Wei Law; Michele Sasdelli; Ramesh Rajasegaran; Dillon Campbell
Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference. (80%)Yang Zheng; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Ambra Demontis; Maura Pintor; Battista Biggio; Fabio Roli
Detection and Continual Learning of Novel Face Presentation Attacks. (2%)Mohammad Rostami; Leonidas Spinoulas; Mohamed Hussein; Joe Mathai; Wael Abd-Almageed
2021-08-25
Uncertify: Attacks Against Neural Network Certification. (99%)Tobias Lorenz; Marta Kwiatkowska; Mario Fritz
Adversarially Robust One-class Novelty Detection. (99%)Shao-Yuan Lo; Poojan Oza; Vishal M. Patel
Bridged Adversarial Training. (93%)Hoki Kim; Woojin Lee; Sungyoon Lee; Jaewook Lee
Generalized Real-World Super-Resolution through Adversarial Robustness. (93%)Angela Castillo; María Escobar; Juan C. Pérez; Andrés Romero; Radu Timofte; Luc Van Gool; Pablo Arbeláez
2021-08-24
Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE. (99%)Wenzhao Xiang; Chang Liu; Shibao Zheng
Are socially-aware trajectory prediction models really socially-aware? (92%)Saeed Saadatnejad; Mohammadhossein Bahari; Pedram Khorsandi; Mohammad Saneian; Seyed-Mohsen Moosavi-Dezfooli; Alexandre Alahi
OOWL500: Overcoming Dataset Collection Bias in the Wild. (76%)Brandon Leung; Chih-Hui Ho; Amir Persekian; David Orozco; Yen Chang; Erik Sandstrom; Bo Liu; Nuno Vasconcelos
StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures. (1%)Sanghyuk Chun; Song Park
2021-08-23
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications. (99%)Wenjie Ruan; Xinping Yi; Xiaowei Huang
Semantic-Preserving Adversarial Text Attacks. (99%)Xinghao Yang; Weifeng Liu; James Bailey; Tianqing Zhu; Dacheng Tao; Wei Liu
Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks. (99%)Nima Mirnateghi; Syed Afaq Ali Shah; Mohammed Bennamoun
Kryptonite: An Adversarial Attack Using Regional Focus. (99%)Yogesh Kulkarni; Krisha Bhambani
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning. (73%)Virat Shejwalkar; Amir Houmansadr; Peter Kairouz; Daniel Ramage
SegMix: Co-occurrence Driven Mixup for Semantic Segmentation and Adversarial Robustness. (4%)Md Amirul Islam; Matthew Kowal; Konstantinos G. Derpanis; Neil D. B. Bruce
2021-08-22
Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations. (99%)Inci M. Baytas; Debayan Deb
Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency. (98%)Xueping Wang; Shasha Li; Min Liu; Yaonan Wang; Amit K. Roy-Chowdhury
Relating CNNs with brain: Challenges and findings. (10%)Reem Abdel-Salam
2021-08-21
A Hard Label Black-box Adversarial Attack Against Graph Neural Networks. (99%)Jiaming Mu; Binghui Wang; Qi Li; Kun Sun; Mingwei Xu; Zhuotao Liu
"Adversarial Examples" for Proof-of-Learning. (98%)Rui Zhang; Jian Liu; Yuan Ding; Qingbiao Wu; Kui Ren
Regularizing Instabilities in Image Reconstruction Arising from Learned Denoisers. (2%)Abinash Nayak
2021-08-20
AdvDrop: Adversarial Attack to DNNs by Dropping Information. (99%)Ranjie Duan; Yuefeng Chen; Dantong Niu; Yun Yang; A. K. Qin; Yuan He
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier. (99%)Chong Xiang; Saeed Mahloujifar; Prateek Mittal
Integer-arithmetic-only Certified Robustness for Quantized Neural Networks. (98%)Haowen Lin; Jian Lou; Li Xiong; Cyrus Shahabi
Towards Understanding the Generative Capability of Adversarially Robust Classifiers. (98%)Yao Zhu; Jiacheng Ma; Jiacheng Sun; Zewei Chen; Rongxin Jiang; Zhenguo Li
Detecting and Segmenting Adversarial Graphics Patterns from Images. (93%)Xiangyu Qu; Stanley H. Chan
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning. (1%)Ege Erdogan; Alptekin Kupcu; A. Ercument Cicek
Early-exit deep neural networks for distorted images: providing an efficient edge offloading. (1%)Roberto G. Pacheco; Fernanda D. V. R. Oliveira; Rodrigo S. Couto
2021-08-19
Application of Adversarial Examples to Physical ECG Signals. (99%)Taiga Ono; Takeshi Sugawara; Jun Sakuma; Tatsuya Mori
Pruning in the Face of Adversaries. (99%)Florian Merkle; Maximilian Samsinger; Pascal Schöttle
ASAT: Adaptively Scaled Adversarial Training in Time Series. (98%)Zhiyuan Zhang; Wei Li; Ruihan Bao; Keiko Harimoto; Yunfang Wu; Xu Sun
Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain. (80%)Guangyao Chen; Peixi Peng; Li Ma; Jia Li; Lin Du; Yonghong Tian
2021-08-18
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better. (99%)Bojia Zi; Shihao Zhao; Xingjun Ma; Yu-Gang Jiang
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes. (98%)Mingjun Yin; Shasha Li; Zikui Cai; Chengyu Song; M. Salman Asif; Amit K. Roy-Chowdhury; Srikanth V. Krishnamurthy
MBRS: Enhancing Robustness of DNN-based Watermarking by Mini-Batch of Real and Simulated JPEG Compression. (45%)Zhaoyang Jia; Han Fang; Weiming Zhang
Proceedings of the 1st International Workshop on Adaptive Cyber Defense. (1%)Damian Marriott; Kimberly Ferguson-Walter; Sunny Fugate; Marco Carvalho
2021-08-17
When Should You Defend Your Classifier -- A Game-theoretical Analysis of Countermeasures against Adversarial Examples. (98%)Maximilian Samsinger; Florian Merkle; Pascal Schöttle; Tomas Pevny
Adversarial Relighting Against Face Recognition. (98%)Qian Zhang; Qing Guo; Ruijun Gao; Felix Juefei-Xu; Hongkai Yu; Wei Feng
Semantic Perturbations with Normalizing Flows for Improved Generalization. (13%)Oguz Kaan Yuksel; Sebastian U. Stich; Martin Jaggi; Tatjana Chavdarova
Coalesced Multi-Output Tsetlin Machines with Clause Sharing. (1%)Sondre Glimsdal; Ole-Christoffer Granmo
Appearance Based Deep Domain Adaptation for the Classification of Aerial Images. (1%)Dennis Wittich; Franz Rottensteiner
2021-08-16
Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy. (99%)Ruikui Wang; Yuanfang Guo; Ruijie Yang; Yunhong Wang
Interpreting Attributions and Interactions of Adversarial Attacks. (83%)Xin Wang; Shuyun Lin; Hao Zhang; Yufei Zhu; Quanshi Zhang
Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? (62%)Max Lennon; Nathan Drenkow; Philippe Burlina
NeuraCrypt is not private. (10%)Nicholas Carlini; Sanjam Garg; Somesh Jha; Saeed Mahloujifar; Mohammad Mahmoody; Florian Tramer
Identifying and Exploiting Structures for Reliable Deep Learning. (2%)Amartya Sanyal
On the Opportunities and Risks of Foundation Models. (2%)Rishi Bommasani; Drew A. Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Arx Sydney von; Michael S. Bernstein; Jeannette Bohg; Antoine Bosselut; Emma Brunskill; Erik Brynjolfsson; Shyamal Buch; Dallas Card; Rodrigo Castellon; Niladri Chatterji; Annie Chen; Kathleen Creel; Jared Quincy Davis; Dora Demszky; Chris Donahue; Moussa Doumbouya; Esin Durmus; Stefano Ermon; John Etchemendy; Kawin Ethayarajh; Li Fei-Fei; Chelsea Finn; Trevor Gale; Lauren Gillespie; Karan Goel; Noah Goodman; Shelby Grossman; Neel Guha; Tatsunori Hashimoto; Peter Henderson; John Hewitt; Daniel E. Ho; Jenny Hong; Kyle Hsu; Jing Huang; Thomas Icard; Saahil Jain; Dan Jurafsky; Pratyusha Kalluri; Siddharth Karamcheti; Geoff Keeling; Fereshte Khani; Omar Khattab; Pang Wei Koh; Mark Krass; Ranjay Krishna; Rohith Kuditipudi; Ananya Kumar; Faisal Ladhak; Mina Lee; Tony Lee; Jure Leskovec; Isabelle Levent; Xiang Lisa Li; Xuechen Li; Tengyu Ma; Ali Malik; Christopher D. Manning; Suvir Mirchandani; Eric Mitchell; Zanele Munyikwa; Suraj Nair; Avanika Narayan; Deepak Narayanan; Ben Newman; Allen Nie; Juan Carlos Niebles; Hamed Nilforoshan; Julian Nyarko; Giray Ogut; Laurel Orr; Isabel Papadimitriou; Joon Sung Park; Chris Piech; Eva Portelance; Christopher Potts; Aditi Raghunathan; Rob Reich; Hongyu Ren; Frieda Rong; Yusuf Roohani; Camilo Ruiz; Jack Ryan; Christopher Ré; Dorsa Sadigh; Shiori Sagawa; Keshav Santhanam; Andy Shih; Krishnan Srinivasan; Alex Tamkin; Rohan Taori; Armin W. Thomas; Florian Tramèr; Rose E. Wang; William Wang; Bohan Wu; Jiajun Wu; Yuhuai Wu; Sang Michael Xie; Michihiro Yasunaga; Jiaxuan You; Matei Zaharia; Michael Zhang; Tianyi Zhang; Xikun Zhang; Yuhui Zhang; Lucia Zheng; Kaitlyn Zhou; Percy Liang
2021-08-15
Neural Architecture Dilation for Adversarial Robustness. (81%)Yanxi Li; Zhaohui Yang; Yunhe Wang; Chang Xu
Deep Adversarially-Enhanced k-Nearest Neighbors. (74%)Ren Wang; Tianqi Chen
IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance. (1%)Ruixuan Liu; Changliu Liu
2021-08-14
LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis. (1%)Fan Wu; Yunhui Long; Ce Zhang; Bo Li
2021-08-13
Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks. (99%)Federico Nesti; Giulio Rossolini; Saasha Nair; Alessandro Biondi; Giorgio Buttazzo
Optical Adversarial Attack. (98%)Abhiram Gnanasambandam; Alex M. Sherman; Stanley H. Chan
Understanding Structural Vulnerability in Graph Convolutional Networks. (96%)Liang Chen; Jintang Li; Qibiao Peng; Yang Liu; Zibin Zheng; Carl Yang
The Forgotten Threat of Voltage Glitching: A Case Study on Nvidia Tegra X2 SoCs. (1%)Otto Bittner; Thilo Krachenfels; Andreas Galauner; Jean-Pierre Seifert
2021-08-12
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning. (99%)Hong Wang; Yuefan Deng; Shinjae Yoo; Haibin Ling; Yuewei Lin
Deep adversarial attack on target detection systems. (99%)Uche M. Osahor; Nasser M. Nasrabadi
Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate. (69%)Hannah Rose Kirk; Bertram Vidgen; Paul Röttger; Tristan Thrush; Scott A. Hale
2021-08-11
Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attacks. (99%)Zitao Chen; Pritam Dash; Karthik Pattabiraman
Attacks against Ranking Algorithms with Text Embeddings: a Case Study on Recruitment Algorithms. (78%)Anahita Samadi; Debapriya Banerjee; Shirin Nilizadeh
Are Neural Ranking Models Robust? (4%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Yixing Fan; Xueqi Cheng
Logic Explained Networks. (1%)Gabriele Ciravegna; Pietro Barbiero; Francesco Giannini; Marco Gori; Pietro Lió; Marco Maggini; Stefano Melacci
2021-08-10
Simple black-box universal adversarial attacks on medical image classification based on deep neural networks. (99%)Kazuki Koga; Kazuhiro Takemoto
On the Effect of Pruning on Adversarial Robustness. (81%)Artur Jordao; Helio Pedrini
SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version). (68%)Nils Lukas; Edward Jiang; Xinda Li; Florian Kerschbaum
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. (64%)Sanchit Sinha; Hanjie Chen; Arshdeep Sekhon; Yangfeng Ji; Yanjun Qi
UniNet: A Unified Scene Understanding Network and Exploring Multi-Task Relationships through the Lens of Adversarial Attacks. (2%)NareshKumar Gurulingan; Elahe Arani; Bahram Zonooz
Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation. (1%)Weilun Wang; Wengang Zhou; Jianmin Bao; Dong Chen; Houqiang Li
2021-08-09
Meta Gradient Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Yunpei Jia; Chuanqi Tan; Tao Xue; Shiguang Shan
On Procedural Adversarial Noise Attack And Defense. (99%)Jun Yan; Xiaoyang Deng; Huilin Yin; Wancheng Ge
Enhancing Knowledge Tracing via Adversarial Training. (98%)Xiaopeng Guo; Zhijie Huang; Jie Gao; Mingyu Shang; Maojing Shu; Jun Sun
Neural Network Repair with Reachability Analysis. (96%)Xiaodong Yang; Tom Yamaguchi; Hoang-Dung Tran; Bardh Hoxha; Taylor T Johnson; Danil Prokhorov
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks. (92%)Fereshteh Razmi; Li Xiong
Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning. (82%)Wanqi Xue; Wei Qiu; Bo An; Zinovi Rabinovich; Svetlana Obraztsova; Chai Kiat Yeo
Privacy-Preserving Machine Learning: Methods, Challenges and Directions. (16%)Runhua Xu; Nathalie Baracaldo; James Joshi
Explainable AI and susceptibility to adversarial attacks: a case study in classification of breast ultrasound images. (15%)Hamza Rasaee; Hassan Rivaz
2021-08-07
Jointly Attacking Graph Neural Network and its Explanations. (96%)Wenqi Fan; Wei Jin; Xiaorui Liu; Han Xu; Xianfeng Tang; Suhang Wang; Qing Li; Jiliang Tang; Jianping Wang; Charu Aggarwal
Membership Inference Attacks on Lottery Ticket Networks. (33%)Aadesh Bagmar; Shishira R Maiya; Shruti Bidwalka; Amol Deshpande
Information Bottleneck Approach to Spatial Attention Learning. (1%)Qiuxia Lai; Yu Li; Ailing Zeng; Minhao Liu; Hanqiu Sun; Qiang Xu
2021-08-06
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles. (80%)Jindi Zhang; Yang Lou; Jianping Wang; Kui Wu; Kejie Lu; Xiaohua Jia
Ensemble Augmentation for Deep Neural Networks Using 1-D Time Series Vibration Data. (2%)Atik Faysal; Ngui Wai Keng; M. H. Lim
2021-08-05
BOSS: Bidirectional One-Shot Synthesis of Adversarial Examples. (99%)Ismail Alkhouri; Alvaro Velasquez; George Atia
Poison Ink: Robust and Invisible Backdoor Attack. (99%)Jie Zhang; Dongdong Chen; Jing Liao; Qidong Huang; Gang Hua; Weiming Zhang; Nenghai Yu
Imperceptible Adversarial Examples by Spatial Chroma-Shift. (99%)Ayberk Aydin; Deniz Sen; Berat Tuna Karli; Oguz Hanoglu; Alptekin Temizel
Householder Activations for Provable Robustness against Adversarial Attacks. (83%)Sahil Singla; Surbhi Singla; Soheil Feizi
Fairness Properties of Face Recognition and Obfuscation Systems. (68%)Harrison Rosenberg; Brian Tang; Kassem Fawaz; Somesh Jha
Exploring Structure Consistency for Deep Model Watermarking. (10%)Jie Zhang; Dongdong Chen; Jing Liao; Han Fang; Zehua Ma; Weiming Zhang; Gang Hua; Nenghai Yu
Locally Interpretable One-Class Anomaly Detection for Credit Card Fraud Detection. (1%)Tungyu Wu; Youting Wang
2021-08-04
Robust Transfer Learning with Pretrained Language Models through Adapters. (82%)Wenjuan Han; Bo Pang; Yingnian Wu
Semi-supervised Conditional GAN for Simultaneous Generation and Detection of Phishing URLs: A Game theoretic Perspective. (31%)Sharif Amit Kamran; Shamik Sengupta; Alireza Tavakkoli
2021-08-03
On the Robustness of Domain Adaption to Adversarial Attacks. (99%)Liyuan Zhang; Yuhang Zhou; Lei Zhang
On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples. (99%)Adelin Travers; Lorna Licollari; Guanghan Wang; Varun Chandrasekaran; Adam Dziedzic; David Lie; Nicolas Papernot
AdvRush: Searching for Adversarially Robust Neural Architectures. (99%)Jisoo Mok; Byunggook Na; Hyeokjun Choe; Sungroh Yoon
The Devil is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models. (88%)Ambrish Rawat; Killian Levacher; Mathieu Sinn
DeepFreeze: Cold Boot Attacks and High Fidelity Model Recovery on Commercial EdgeML Device. (69%)Yoo-Seung Won; Soham Chatterjee; Dirmanto Jap; Arindam Basu; Shivam Bhasin
Tutorials on Testing Neural Networks. (1%)Nicolas Berthier; Youcheng Sun; Wei Huang; Yanghao Zhang; Wenjie Ruan; Xiaowei Huang
2021-08-02
Hybrid Classical-Quantum Deep Learning Models for Autonomous Vehicle Traffic Image Classification Under Adversarial Attack. (98%)Reek Majumder; Sakib Mahmud Khan; Fahim Ahmed; Zadid Khan; Frank Ngeni; Gurcan Comert; Judith Mwakalonge; Dimitra Michalaka; Mashrur Chowdhury
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles. (10%)Anum Talpur; Mohan Gurusamy
Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks. (9%)Yuwei Sun; Ng Chong; Hideya Ochiai
Efficacy of Statistical and Artificial Intelligence-based False Information Cyberattack Detection Models for Connected Vehicles. (1%)Sakib Mahmud Khan; Gurcan Comert; Mashrur Chowdhury
2021-08-01
Advances in adversarial attacks and defenses in computer vision: A survey. (92%)Naveed Akhtar; Ajmal Mian; Navid Kardan; Mubarak Shah
Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders. (80%)Huimin Zeng; Jiahao Su; Furong Huang
An Effective and Robust Detector for Logo Detection. (70%)Xiaojun Jia; Huanqian Yan; Yonglin Wu; Xingxing Wei; Xiaochun Cao; Yong Zhang
Style Curriculum Learning for Robust Medical Image Segmentation. (2%)Zhendong Liu; Van Manh; Xin Yang; Xiaoqiong Huang; Karim Lekadir; Víctor Campello; Nishant Ravikumar; Alejandro F Frangi; Dong Ni
2021-07-31
Delving into Deep Image Prior for Adversarial Defense: A Novel Reconstruction-based Defense Framework. (99%)Li Ding; Yongwei Wang; Xin Ding; Kaiwen Yuan; Ping Wang; Hua Huang; Z. Jane Wang
Adversarial Robustness of Deep Code Comment Generation. (99%)Yu Zhou; Xiaoqing Zhang; Juanjuan Shen; Tingting Han; Taolue Chen; Harald Gall
Towards Adversarially Robust and Domain Generalizable Stereo Matching by Rethinking DNN Feature Backbones. (93%)Kelvin Cheng; Christopher Healey; Tianfu Wu
T$_k$ML-AP: Adversarial Attacks to Top-$k$ Multi-Label Learning. (81%)Shu Hu; Lipeng Ke; Xin Wang; Siwei Lyu
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. (67%)Jinyuan Jia; Yupei Liu; Neil Zhenqiang Gong
Fair Representation Learning using Interpolation Enabled Disentanglement. (1%)Akshita Jha; Bhanukiran Vinzamuri; Chandan K. Reddy
2021-07-30
Who's Afraid of Thomas Bayes? (92%)Erick Galinkin
Practical Attacks on Voice Spoofing Countermeasures. (86%)Andre Kassis; Urs Hengartner
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers. (50%)Stefanos Koffas; Jing Xu; Mauro Conti; Stjepan Picek
Unveiling the potential of Graph Neural Networks for robust Intrusion Detection. (13%)David Pujol-Perich; José Suárez-Varela; Albert Cabellos-Aparicio; Pere Barlet-Ros
2021-07-29
Feature Importance-aware Transferable Adversarial Attacks. (99%)Zhibo Wang; Hengchang Guo; Zhifei Zhang; Wenxin Liu; Zhan Qin; Kui Ren
Enhancing Adversarial Robustness via Test-time Transformation Ensembling. (98%)Juan C. Pérez; Motasem Alfarra; Guillaume Jeanneret; Laura Rueda; Ali Thabet; Bernard Ghanem; Pablo Arbeláez
The Robustness of Graph k-shell Structure under Adversarial Attacks. (93%)B. Zhou; Y. Q. Lv; Y. C. Mao; J. H. Wang; S. Q. Yu; Q. Xuan
Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality. (31%)Vito Walter Anelli; Yashar Deldjoo; Tommaso Di Noia; Felice Antonio Merra
Towards robust vision by multi-task learning on monkey visual cortex. (3%)Shahd Safarani; Arne Nix; Konstantin Willeke; Santiago A. Cadena; Kelli Restivo; George Denfield; Andreas S. Tolias; Fabian H. Sinz
2021-07-28
Imbalanced Adversarial Training with Reweighting. (86%)Wentao Wang; Han Xu; Xiaorui Liu; Yaxin Li; Bhavani Thuraisingham; Jiliang Tang
Towards Robustness Against Natural Language Word Substitutions. (73%)Xinshuai Dong; Anh Tuan Luu; Rongrong Ji; Hong Liu
Models of Computational Profiles to Study the Likelihood of DNN Metamorphic Test Cases. (67%)Ettore Merlo; Mira Marhaba; Foutse Khomh; Houssem Ben Braiek; Giuliano Antoniol
WaveCNet: Wavelet Integrated CNNs to Suppress Aliasing Effect for Noise-Robust Image Classification. (15%)Qiufu Li; Linlin Shen; Sheng Guo; Zhihui Lai
TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing. (2%)Aoting Hu; Renjie Xie; Zhigang Lu; Aiqun Hu; Minhui Xue
2021-07-27
Towards Black-box Attacks on Deep Learning Apps. (89%)Hongchen Cao; Shuai Li; Yuming Zhou; Ming Fan; Xuejiao Zhao; Yutian Tang
Poisoning Online Learning Filters: DDoS Attacks and Countermeasures. (50%)Wesley Joon-Wie Tann; Ee-Chien Chang
PDF-Malware: An Overview on Threats, Detection and Evasion Attacks. (8%)Nicolas Fleury; Theo Dubrunquez; Ihsen Alouani
2021-07-26
Benign Adversarial Attack: Tricking Models for Goodness. (99%)Jitao Sang; Xian Zhao; Jiaming Zhang; Zhiyu Lin
Learning to Adversarially Blur Visual Object Tracking. (98%)Qing Guo; Ziyi Cheng; Felix Juefei-Xu; Lei Ma; Xiaofei Xie; Yang Liu; Jianjun Zhao
Adversarial Attacks with Time-Scale Representations. (96%)Alberto Santamaria-Pang; Jianwei Qiu; Aritra Chowdhury; James Kubricht; Peter Tu; Naresh Iyer; Nurali Virani
2021-07-24
Adversarial training may be a double-edged sword. (99%)Ali Rahmati; Seyed-Mohsen Moosavi-Dezfooli; Huaiyu Dai
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. (98%)Florian Tramèr
Stress Test Evaluation of Biomedical Word Embeddings. (73%)Vladimir Araujo; Andrés Carvallo; Carlos Aspillaga; Camilo Thorne; Denis Parra
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering. (1%)Jingjing Jiang; Ziyi Liu; Yifan Liu; Zhixiong Nan; Nanning Zheng
2021-07-23
A Differentiable Language Model Adversarial Attack on Text Classifiers. (99%)Ivan Fursov; Alexey Zaytsev; Pavel Burnyshev; Ekaterina Dmitrieva; Nikita Klyuchnikov; Andrey Kravchenko; Ekaterina Artemova; Evgeny Burnaev
Structack: Structure-based Adversarial Attacks on Graph Neural Networks. (86%)Hussain Hussain; Tomislav Duricic; Elisabeth Lex; Denis Helic; Markus Strohmaier; Roman Kern
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation. (45%)Bingqian Lin; Yi Zhu; Yanxin Long; Xiaodan Liang; Qixiang Ye; Liang Lin
Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers. (8%)Yunhui Guo; Xudong Wang; Yubei Chen; Stella X. Yu
2021-07-22
On the Certified Robustness for Ensemble Models and Beyond. (99%)Zhuolin Yang; Linyi Li; Xiaojun Xu; Bhavya Kailkhura; Tao Xie; Bo Li
Unsupervised Detection of Adversarial Examples with Model Explanations. (99%)Gihyuk Ko; Gyumin Lim
Membership Inference Attack and Defense for Wireless Signal Classifiers with Deep Learning. (83%)Yi Shi; Yalin E. Sagduyu
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks. (75%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Estimating Predictive Uncertainty Under Program Data Distribution Shift. (1%)Yufei Li; Simin Chen; Wei Yang
Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack. (1%)Fan Wu; Min Gao; Junliang Yu; Zongwei Wang; Kecheng Liu; Xu Wange
2021-07-21
Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients. (98%)Huimin Wu; Zhengmian Hu; Bin Gu
Improved Text Classification via Contrastive Adversarial Training. (84%)Lin Pan; Chung-Wei Hang; Avirup Sil; Saloni Potdar
Black-box Probe for Unsupervised Domain Adaptation without Model Transferring. (81%)Kunhong Wu; Yucheng Shi; Yahong Han; Yunfeng Shao; Bingshuai Li
Defending against Reconstruction Attack in Vertical Federated Learning. (10%)Jiankai Sun; Yuanshun Yao; Weihao Gao; Junyuan Xie; Chong Wang
Generative Models for Security: Attacks, Defenses, and Opportunities. (10%)Luke A. Bauer; Vincent Bindschaedler
A Tandem Framework Balancing Privacy and Security for Voice User Interfaces. (5%)Ranya Aloufi; Hamed Haddadi; David Boyle
Spinning Sequence-to-Sequence Models with Meta-Backdoors. (4%)Eugene Bagdasaryan; Vitaly Shmatikov
On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms. (2%)Shuyu Cheng; Guoqiang Wu; Jun Zhu
2021-07-20
Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks. (99%)Saikat Majumdar; Mohammad Hossein Samavatian; Kristin Barber; Radu Teodorescu
A Markov Game Model for AI-based Cyber Security Attack Mitigation. (10%)Hooman Alavizadeh; Julian Jang-Jaccard; Tansu Alpcan; Seyit A. Camtepe
Leaking Secrets through Modern Branch Predictor in the Speculative World. (1%)Md Hafizul Islam Chowdhuryy; Fan Yao
2021-07-19
Discriminator-Free Generative Adversarial Attack. (99%)Shaohao Lu; Yuqiao Xian; Ke Yan; Yi Hu; Xing Sun; Xiaowei Guo; Feiyue Huang; Wei-Shi Zheng
Feature-Filter: Detecting Adversarial Examples through Filtering off Recessive Features. (99%)Hui Liu; Bo Zhao; Yuefeng Peng; Jiabao Guo; Peng Liu
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition. (98%)Benjamin Spetter-Goldstein; Nataniel Ruiz; Sarah Adel Bargal
On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples. (80%)Verena Praher; Katharina Prinz; Arthur Flexer; Gerhard Widmer
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI. (33%)Takayuki Miura; Satoshi Hasegawa; Toshiki Shibahara
Structural Watermarking to Deep Neural Networks via Network Channel Pruning. (11%)Xiangyu Zhao; Yinzhe Yao; Hanzhou Wu; Xinpeng Zhang
Generative Adversarial Neural Cellular Automata. (1%)Maximilian Otte; Quentin Delfosse; Johannes Czech; Kristian Kersting
Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units. (1%)Woo-Jeoung Nam; Seong-Whan Lee
Just Train Twice: Improving Group Robustness without Training Group Information. (1%)Evan Zheran Liu; Behzad Haghgoo; Annie S. Chen; Aditi Raghunathan; Pang Wei Koh; Shiori Sagawa; Percy Liang; Chelsea Finn
2021-07-18
RobustFed: A Truth Inference Approach for Robust Federated Learning. (1%)Farnaz Tahmasebian; Jian Lou; Li Xiong
2021-07-17
BEDS-Bench: Behavior of EHR-models under Distributional Shift--A Benchmark. (9%)Anand Avati; Martin Seneviratne; Emily Xue; Zhen Xu; Balaji Lakshminarayanan; Andrew M. Dai
2021-07-16
EGC2: Enhanced Graph Classification with Easy Graph Compression. (89%)Jinyin Chen; Haiyang Xiong; Haibin Zheng; Dunjie Zhang; Jian Zhang; Mingwei Jia; Yi Liu
Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI. (1%)Quanshi Zhang; Tian Han; Lixin Fan; Zhanxing Zhu; Hang Su; Ying Nian Wu; Jie Ren; Hao Zhang
2021-07-15
Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks. (99%)Zhao Meng; Yihan Dong; Mrinmaya Sachan; Roger Wattenhofer
Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving. (98%)Ibrahim Sobh; Ahmed Hamed; Varun Ravi Kumar; Senthil Yogamani
ECG-Adv-GAN: Detecting ECG Adversarial Examples with Conditional Generative Adversarial Networks. (92%)Khondker Fariha Hossain; Sharif Amit Kamran; Alireza Tavakkoli; Lei Pan; Xingjun Ma; Sutharshan Rajasegarar; Chandan Karmaker
Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks. (80%)Ismail Alarab; Simant Prakoonwit
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting. (16%)Xiangyu Qi; Jifeng Zhu; Chulin Xie; Yong Yang
Tailor: Generating and Perturbing Text with Semantic Controls. (3%)Alexis Ross; Tongshuang Wu; Hao Peng; Matthew E. Peters; Matt Gardner
Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks. (1%)Andrey Malinin; Neil Band; Alexander Ganshin; German Chesnokov; Yarin Gal; Mark J. F. Gales; Alexey Noskov; Andrey Ploskonosov; Liudmila Prokhorenkova; Ivan Provilkov; Vatsal Raina; Vyas Raina; Denis Roginskiy; Mariya Shmatova; Panos Tigas; Boris Yangel
2021-07-14
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning. (99%)Yihao Huang; Qing Guo; Felix Juefei-Xu; Lei Ma; Weikai Miao; Yang Liu; Geguang Pu
Conservative Objective Models for Effective Offline Model-Based Optimization. (67%)Brandon Trabucco; Aviral Kumar; Xinyang Geng; Sergey Levine
2021-07-13
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense. (88%)Duhun Hwang; Eunjung Lee; Wonjong Rhee
Using BERT Encoding to Tackle the Mad-lib Attack in SMS Spam Detection. (69%)Sergio Rojas-Galeano
Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors. (41%)M. Ben Amor; J. Stier; M. Granitzer
What classifiers know what they don't? (1%)Mohamed Ishmael Belghazi; David Lopez-Paz
2021-07-12
EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks. (99%)Andrei Ilie; Marius Popescu; Alin Stefanescu
Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising. (99%)Anouar Kherchouche; Sid Ahmed Fezza; Wassim Hamidouche
Perceptual-based deep-learning denoiser as a defense against adversarial attacks on ASR systems. (96%)Anirudh Sreeram; Nicholas Mehlman; Raghuveer Peri; Dillon Knox; Shrikanth Narayanan
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning. (81%)Jun Wang; Chang Xu; Francisco Guzman; Ahmed El-Kishky; Yuqing Tang; Benjamin I. P. Rubinstein; Trevor Cohn
A Closer Look at the Adversarial Robustness of Information Bottleneck Models. (70%)Iryna Korshunova; David Stutz; Alexander A. Alemi; Olivia Wiles; Sven Gowal
SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks. (56%)Timoleon Moraitis; Dmitry Toichkin; Yansong Chua; Qinghai Guo
2021-07-11
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks. (76%)Kendra Albert; Maggie Delano; Bogdan Kulynych; Ram Shankar Siva Kumar
Stateful Detection of Model Extraction Attacks. (2%)Soham Pal; Yash Gupta; Aditya Kanade; Shirish Shevade
Attack Rules: An Adversarial Approach to Generate Attacks for Industrial Control Systems using Machine Learning. (1%)Muhammad Azmi Umer; Chuadhry Mujeeb Ahmed; Muhammad Taha Jilani; Aditya P. Mathur
2021-07-10
Hack The Box: Fooling Deep Learning Abstraction-Based Monitors. (91%)Sara Hajj Ibrahim; Mohamed Nassar
HOMRS: High Order Metamorphic Relations Selector for Deep Neural Networks. (88%)Florian Tambon; Giulio Antoniol; Foutse Khomh
Identifying Layers Susceptible to Adversarial Attacks. (83%)Shoaib Ahmed Siddiqui; Thomas Breuel
Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis. (22%)Anisie Uwimana; Ransalu Senanayake
Cyber-Security Challenges in Aviation Industry: A Review of Current and Future Trends. (1%)Elochukwu Ukwandu; Mohamed Amine Ben Farah; Hanan Hindy; Miroslav Bures; Robert Atkinson; Christos Tachtatzis; Xavier Bellekens
2021-07-09
Learning to Detect Adversarial Examples Based on Class Scores. (99%)Tobias Uelwer; Felix Michels; Oliver De Candido
Resilience of Autonomous Vehicle Object Category Detection to Universal Adversarial Perturbations. (99%)Mohammad Nayeem Teli; Seungwon Oh
Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems. (99%)Shangyu Xie; Han Wang; Yu Kong; Yuan Hong
GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network. (98%)Zuohui Chen; Renxuan Wang; Jingyang Xiang; Yue Yu; Xin Xia; Shouling Ji; Qi Xuan; Xiaoniu Yang
Towards Robust General Medical Image Segmentation. (83%)Laura Daza; Juan C. Pérez; Pablo Arbeláez
ARC: Adversarially Robust Control Policies for Autonomous Vehicles. (38%)Sampo Kuutti; Saber Fallah; Richard Bowden
2021-07-08
Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models. (99%)Daniel Park; Haidar Khan; Azer Khan; Alex Gittens; Bülent Yener
Improving Model Robustness with Latent Distribution Locally and Globally. (99%)Zhuang Qian; Shufei Zhang; Kaizhu Huang; Qiufeng Wang; Rui Zhang; Xinping Yi
Analytically Tractable Hidden-States Inference in Bayesian Neural Networks. (50%)Luong-Ha Nguyen; James-A. Goulet
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning. (33%)Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Jihun Hamm
2021-07-07
Controlled Caption Generation for Images Through Adversarial Attacks. (99%)Nayyer Aafaq; Naveed Akhtar; Wei Liu; Mubarak Shah; Ajmal Mian
Incorporating Label Uncertainty in Understanding Adversarial Robustness. (38%)Xiao Zhang; David Evans
RoFL: Attestable Robustness for Secure Federated Learning. (2%)Lukas Burkhalter; Hidde Lycklama; Alexander Viand; Nicolas Küchler; Anwar Hithnawi
2021-07-06
GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization. (99%)Sungyoon Lee; Hoki Kim; Jaewook Lee
Self-Adversarial Training incorporating Forgery Attention for Image Forgery Localization. (95%)Long Zhuo; Shunquan Tan; Bin Li; Jiwu Huang
ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients. (76%)Alessandro Cappelli; Julien Launay; Laurent Meunier; Ruben Ohana; Iacopo Poli
On Generalization of Graph Autoencoders with Adversarial Training. (12%)Tianjin Huang; Yulong Pei; Vlado Menkovski; Mykola Pechenizkiy
On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving. (1%)Takami Sato; Qi Alfred Chen
2021-07-05
When and How to Fool Explainable Models (and Humans) with Adversarial Examples. (99%)Jon Vadillo; Roberto Santana; Jose A. Lozano
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks. (99%)Xiao Yang; Yinpeng Dong; Tianyu Pang; Hang Su; Jun Zhu
Adversarial Robustness of Probabilistic Network Embedding for Link Prediction. (87%)Xi Chen; Bo Kang; Jefrey Lijffijt; Tijl De Bie
Dealing with Adversarial Player Strategies in the Neural Network Game iNNk through Ensemble Learning. (69%)Mathias Löwe; Jennifer Villareale; Evan Freed; Aleksanteri Sladek; Jichen Zhu; Sebastian Risi
Understanding the Security of Deepfake Detection. (33%)Xiaoyu Cao; Neil Zhenqiang Gong
Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems. (15%)Ron Bitton; Nadav Maman; Inderjeet Singh; Satoru Momiyama; Yuval Elovici; Asaf Shabtai
Poisoning Attack against Estimating from Pairwise Comparisons. (15%)Ke Ma; Qianqian Xu; Jinshan Zeng; Xiaochun Cao; Qingming Huang
Confidence Conditioned Knowledge Distillation. (10%)Sourav Mishra; Suresh Sundaram
2021-07-04
Certifiably Robust Interpretation via Renyi Differential Privacy. (67%)Ao Liu; Xiaoyu Chen; Sijia Liu; Lirong Xia; Chuang Gan
Mirror Mirror on the Wall: Next-Generation Wireless Jamming Attacks Based on Software-Controlled Surfaces. (1%)Paul Staat; Harald Elders-Boll; Christian Zenger; Christof Paar
2021-07-03
Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity. (99%)Yajie Wang; Shangbo Wu; Wenyi Jiang; Shengang Hao; Yu-an Tan; Quanxin Zhang
2021-07-01
Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples. (99%)Nelson Manohar-Alers; Ryan Feng; Sahib Singh; Jiguo Song; Atul Prakash
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks. (99%)Alberto Marchisio; Giacomo Pira; Maurizio Martina; Guido Masera; Muhammad Shafique
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding. (68%)Dong Wang; Ning Ding; Piji Li; Hai-Tao Zheng
Adversarial Sample Detection for Speaker Verification by Neural Vocoders. (41%)Haibin Wu; Po-chun Hsu; Ji Gao; Shanshan Zhang; Shen Huang; Jian Kang; Zhiyong Wu; Helen Meng; Hung-yi Lee
The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification. (16%)Alireza Mousavi Hosseini; Amir Mohammad Abouei; Mohammad Hossein Rohban
Reinforcement Learning for Feedback-Enabled Cyber Resilience. (10%)Yunhan Huang; Linan Huang; Quanyan Zhu
2021-06-30
Single-Step Adversarial Training for Semantic Segmentation. (96%)Daniel Wiens; Barbara Hammer
Adversarial examples within the training distribution: A widespread challenge. (93%)Spandan Madan; Tomotake Sasaki; Hanspeter Pfister; Tzu-Mao Li; Xavier Boix
Understanding Adversarial Attacks on Observations in Deep Reinforcement Learning. (84%)You Qiaoben; Chengyang Ying; Xinning Zhou; Hang Su; Jun Zhu; Bo Zhang
Explanation-Guided Diagnosis of Machine Learning Evasion Attacks. (82%)Abderrahmen Amich; Birhanu Eshete
Bi-Level Poisoning Attack Model and Countermeasure for Appliance Consumption Data of Smart Homes. (8%)Mustain Billah; Adnan Anwar; Ziaur Rahman; Syed Md. Galib
Exploring Robustness of Neural Networks through Graph Measures. (8%)Asim Waqas; Ghulam Rasool; Hamza Farooq; Nidhal C. Bouaynaya
A Context-Aware Information-Based Clone Node Attack Detection Scheme in Internet of Things. (1%)Khizar Hameed; Saurabh Garg; Muhammad Bilal Amin; Byeong Kang; Abid Khan
Understanding and Improving Early Stopping for Learning with Noisy Labels. (1%)Yingbin Bai; Erkun Yang; Bo Han; Yanhua Yang; Jiatong Li; Yinian Mao; Gang Niu; Tongliang Liu
2021-06-29
Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges. (99%)Bowei Xi
Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions. (99%)Juan Shu; Bowei Xi; Charles Kamhoua
Attack Transferability Characterization for Adversarially Robust Multi-label Classification. (99%)Zhuo Yang; Yufei Han; Xiangliang Zhang
Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices. (99%)Tao Bai; Jinqi Luo; Jun Zhao
Bio-Inspired Adversarial Attack Against Deep Neural Networks. (98%)Bowei Xi; Yujie Chen; Fan Fei; Zhan Tu; Xinyan Deng
Do Not Deceive Your Employer with a Virtual Background: A Video Conferencing Manipulation-Detection System. (62%)Mauro Conti; Simone Milani; Ehsan Nowroozi; Gabriele Orazi
The Threat of Offensive AI to Organizations. (54%)Yisroel Mirsky; Ambra Demontis; Jaidip Kotak; Ram Shankar; Deng Gelei; Liu Yang; Xiangyu Zhang; Wenke Lee; Yuval Elovici; Battista Biggio
Local Reweighting for Adversarial Training. (22%)Ruize Gao; Feng Liu; Kaiwen Zhou; Gang Niu; Bo Han; James Cheng
On the Interaction of Belief Bias and Explanations. (15%)Ana Valeria Gonzalez; Anna Rogers; Anders Søgaard
2021-06-28
Feature Importance Guided Attack: A Model Agnostic Adversarial Attack. (99%)Gilad Gressel; Niranjan Hegde; Archana Sreekumar; Michael Darling
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. (99%)Oliver Bryniarski; Nabeel Hingun; Pedro Pachuca; Vincent Wang; Nicholas Carlini
Improving Transferability of Adversarial Patches on Face Recognition with Generative Models. (99%)Zihao Xiao; Xianfeng Gao; Chilin Fu; Yinpeng Dong; Wei Gao; Xiaolu Zhang; Jun Zhou; Jun Zhu
Data Poisoning Won't Save You From Facial Recognition. (97%)Evani Radiya-Dixit; Florian Tramèr
Adversarial Robustness of Streaming Algorithms through Importance Sampling. (61%)Vladimir Braverman; Avinatan Hassidim; Yossi Matias; Mariano Schain; Sandeep Silwal; Samson Zhou
Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation. (2%)Chaithanya Kumar Mummadi; Robin Hutmacher; Kilian Rambach; Evgeny Levinkov; Thomas Brox; Jan Hendrik Metzen
Certified Robustness via Randomized Smoothing over Multiplicative Parameters. (1%)Nikita Muravev; Aleksandr Petiushko
Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. (1%)Chuanpu Fu; Qi Li; Meng Shen; Ke Xu
2021-06-27
RAILS: A Robust Adversarial Immune-inspired Learning System. (98%)Ren Wang; Tianqi Chen; Stephen Lindsly; Cooper Stansbury; Alnawaz Rehemtulla; Indika Rajapakse; Alfred Hero
Who is Responsible for Adversarial Defense? (93%)Kishor Datta Gupta; Dipankar Dasgupta
ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense. (82%)Ren Wang; Tianqi Chen; Philip Yao; Sijia Liu; Indika Rajapakse; Alfred Hero
Immuno-mimetic Deep Neural Networks (Immuno-Net). (64%)Ren Wang; Tianqi Chen; Stephen Lindsly; Cooper Stansbury; Indika Rajapakse; Alfred Hero
Stabilizing Equilibrium Models by Jacobian Regularization. (1%)Shaojie Bai; Vladlen Koltun; J. Zico Kolter
2021-06-26
Multi-stage Optimization based Adversarial Training. (99%)Xiaosen Wang; Chuanbiao Song; Liwei Wang; Kun He
The Feasibility and Inevitability of Stealth Attacks. (69%)Ivan Y. Tyukin; Desmond J. Higham; Eliyas Woldegeorgis; Alexander N. Gorban
2021-06-24
On the (Un-)Avoidability of Adversarial Examples. (99%)Sadia Chowdhury; Ruth Urner
Countering Adversarial Examples: Combining Input Transformation and Noisy Training. (99%)Cheng Zhang; Pan Gao
Break it, Fix it: Attack and Defense for "Add-on" Access Control Solutions in Distributed Data Analytics Platforms. (8%)Fahad Shaon; Sazzadur Rahaman; Murat Kantarcioglu
2021-06-23
Adversarial Examples in Multi-Layer Random ReLU Networks. (81%)Peter L. Bartlett; Sébastien Bubeck; Yeshwanth Cherapanamjeri
Teacher Model Fingerprinting Attacks Against Transfer Learning. (2%)Yufei Chen; Chao Shen; Cong Wang; Yang Zhang
Meaningfully Explaining Model Mistakes Using Conceptual Counterfactuals. (1%)Abubakar Abid; Mert Yuksekgonul; James Zou
Feature Attributions and Counterfactual Explanations Can Be Manipulated. (1%)Dylan Slack; Sophie Hilgard; Sameer Singh; Himabindu Lakkaraju
2021-06-22
DetectX -- Adversarial Input Detection using Current Signatures in Memristive XBar Arrays. (99%)Abhishek Moitra; Priyadarshini Panda
Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack. (99%)Sungmin Cha; Naeun Ko; Youngjoon Yoo; Taesup Moon
Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks. (83%)Fan Liu; Shuyu Zhao; Xuelong Dai; Bin Xiao
On Adversarial Robustness of Synthetic Code Generation. (81%)Mrinal Anand; Pratik Kayal; Mayank Singh
NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data. (67%)I-Chung Hsieh; Cheng-Te Li
FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data. (1%)Eugenia Iofinova; Nikola Konstantinov; Christoph H. Lampert
2021-06-21
Policy Smoothing for Provably Robust Reinforcement Learning. (99%)Aounon Kumar; Alexander Levine; Soheil Feizi
Delving into the pixels of adversarial samples. (98%)Blerta Lindqvist
HODA: Hardness-Oriented Detection of Model Extraction Attacks. (98%)Amir Mahdi Sadeghzadeh; Amir Mohammad Sobhanian; Faezeh Dehghan; Rasool Jalili
Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier. (91%)Simone Marullo; Matteo Tiezzi; Marco Gori; Stefano Melacci
Membership Inference on Word Embedding and Beyond. (38%)Saeed Mahloujifar; Huseyin A. Inan; Melissa Chase; Esha Ghosh; Marcello Hasegawa
An Alternative Auxiliary Task for Enhancing Image Classification. (11%)Chen Liu
Zero-shot learning approach to adaptive Cybersecurity using Explainable AI. (1%)Dattaraj Rao; Shraddha Mane
2021-06-20
Adversarial Examples Make Strong Poisons. (98%)Liam Fowl; Micah Goldblum; Ping-yeh Chiang; Jonas Geiping; Wojtek Czaja; Tom Goldstein
Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem. (95%)Jiaqi Ma; Junwei Deng; Qiaozhu Mei
Generative Model Adversarial Training for Deep Compressed Sensing. (8%)Ashkan Esmaeili
2021-06-19
Attack to Fool and Explain Deep Networks. (99%)Naveed Akhtar; Muhammad A. A. K. Jalwana; Mohammed Bennamoun; Ajmal Mian
A Stealthy and Robust Fingerprinting Scheme for Generative Models. (47%)Li Guanlin; Guo Shangwei; Wang Run; Xu Guowen; Zhang Tianwei
2021-06-18
Residual Error: a New Performance Measure for Adversarial Robustness. (99%)Hossein Aboutalebi; Mohammad Javad Shafiee; Michelle Karg; Christian Scharfenberger; Alexander Wong
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. (99%)Maura Pintor; Luca Demetrio; Angelo Sotgiu; Ambra Demontis; Nicholas Carlini; Battista Biggio; Fabio Roli
The Dimpled Manifold Model of Adversarial Examples in Machine Learning. (99%)Adi Shamir; Odelia Melamed; Oriel BenShmuel
Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis. (99%)Martin Pawelczyk; Chirag Agarwal; Shalmali Joshi; Sohini Upadhyay; Himabindu Lakkaraju
Light Lies: Optical Adversarial Attack. (92%)Kyulim Kim; JeongSoo Kim; Seungri Song; Jun-Ho Choi; Chulmin Joo; Jong-Seok Lee
BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection. (82%)Yulin Zhu; Yuni Lai; Kaifa Zhao; Xiapu Luo; Mingquan Yuan; Jian Ren; Kai Zhou
Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks. (80%)Emre Ozfatura; Muhammad Zaid Hameed; Kerem Ozfatura; Deniz Gunduz
Group-Structured Adversarial Training. (68%)Farzan Farnia; Amirali Aghazadeh; James Zou; David Tse
Accumulative Poisoning Attacks on Real-time Data. (45%)Tianyu Pang; Xiao Yang; Yinpeng Dong; Hang Su; Jun Zhu
Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks. (45%)Suyoung Lee; Wonho Song; Suman Jana; Meeyoung Cha; Sooel Son
Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. (5%)Junyuan Hong; Haotao Wang; Zhangyang Wang; Jiayu Zhou
2021-06-17
Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective. (99%)Lina Wang; Xingshu Chen; Yulong Wang; Yawei Yue; Yi Zhu; Xuemei Zeng; Wei Wang
Bad Characters: Imperceptible NLP Attacks. (99%)Nicholas Boucher; Ilia Shumailov; Ross Anderson; Nicolas Papernot
DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs. (99%)Junhao Zhu; Yalu Shan; Jinhuan Wang; Shanqing Yu; Guanrong Chen; Qi Xuan
Adversarial Visual Robustness by Causal Intervention. (99%)Kaihua Tang; Mingyuan Tao; Hanwang Zhang
Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning. (92%)Shubham Jain; Ana-Maria Cretu; Yves-Alexandre de Montjoye
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks. (91%)Yulong Cao; Ningfei Wang; Chaowei Xiao; Dawei Yang; Jin Fang; Ruigang Yang; Qi Alfred Chen; Mingyan Liu; Bo Li
Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems. (82%)Giovanni Apruzzese; Mauro Andreolini; Luca Ferretti; Mirco Marchetti; Michele Colajanni
Poisoning and Backdooring Contrastive Learning. (70%)Nicholas Carlini; Andreas Terzis
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing. (69%)Fan Wu; Linyi Li; Zijian Huang; Yevgeniy Vorobeychik; Ding Zhao; Bo Li
CoCoFuzzing: Testing Neural Code Models with Coverage-Guided Fuzzing. (64%)Moshi Wei; Yuchao Huang; Jinqiu Yang; Junjie Wang; Song Wang
On Deep Neural Network Calibration by Regularization and its Impact on Refinement. (3%)Aditya Singh; Alessandro Bay; Biswa Sengupta; Andrea Mirabile
Effective Model Sparsification by Scheduled Grow-and-Prune Methods. (1%)Xiaolong Ma; Minghai Qin; Fei Sun; Zejiang Hou; Kun Yuan; Yi Xu; Yanzhi Wang; Yen-Kuang Chen; Rong Jin; Yuan Xie
2021-06-16
Real-time Adversarial Perturbations against Deep Reinforcement Learning Policies: Attacks and Defenses. (99%)Buse G. A. Tekgul; Shelly Wang; Samuel Marchal; N. Asokan
Localized Uncertainty Attacks. (99%)Ousmane Amadou Dia; Theofanis Karaletsos; Caner Hazirbas; Cristian Canton Ferrer; Ilknur Kaynar Kabul; Erik Meijer
Evaluating the Robustness of Bayesian Neural Networks Against Different Types of Attacks. (67%)Yutian Pang; Sheng Cheng; Jueming Hu; Yongming Liu
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. (38%)Hossein Souri; Liam Fowl; Rama Chellappa; Micah Goldblum; Tom Goldstein
Explainable AI for Natural Adversarial Images. (13%)Tomas Folke; ZhaoBin Li; Ravi B. Sojitra; Scott Cheng-Hsin Yang; Patrick Shafto
A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness. (2%)James Diffenderfer; Brian R. Bartoldson; Shreya Chaganti; Jize Zhang; Bhavya Kailkhura
Scaling-up Diverse Orthogonal Convolutional Networks with a Paraunitary Framework. (1%)Jiahao Su; Wonmin Byeon; Furong Huang
Loki: Hardening Code Obfuscation Against Automated Attacks. (1%)Moritz Schloegel; Tim Blazytko; Moritz Contag; Cornelius Aschermann; Julius Basler; Thorsten Holz; Ali Abbasi
2021-06-15
Adversarial Attacks on Deep Models for Financial Transaction Records. (99%)Ivan Fursov; Matvey Morozov; Nina Kaploukhaya; Elizaveta Kovtun; Rodrigo Rivera-Castro; Gleb Gusev; Dmitry Babaev; Ivan Kireev; Alexey Zaytsev; Evgeny Burnaev
Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information. (99%)Tommy Li; Cory Merkel
Towards Adversarial Robustness via Transductive Learning. (80%)Jiefeng Chen; Yang Guo; Xi Wu; Tianqi Li; Qicheng Lao; Yingyu Liang; Somesh Jha
Voting for the right answer: Adversarial defense for speaker verification. (78%)Haibin Wu; Yang Zhang; Zhiyong Wu; Dong Wang; Hung-yi Lee
Detect and remove watermark in deep neural networks via generative adversarial networks. (68%)Haoqi Wang; Mingfu Xue; Shichang Sun; Yushu Zhang; Jian Wang; Weiqiang Liu
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. (13%)Chulin Xie; Minghao Chen; Pin-Yu Chen; Bo Li
Securing Face Liveness Detection Using Unforgeable Lip Motion Patterns. (12%)Man Zhou; Qian Wang; Qi Li; Peipei Jiang; Jingxiao Yang; Chao Shen; Cong Wang; Shouhong Ding
Probabilistic Margins for Instance Reweighting in Adversarial Training. (8%)Qizhou Wang; Feng Liu; Bo Han; Tongliang Liu; Chen Gong; Gang Niu; Mingyuan Zhou; Masashi Sugiyama
CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals. (1%)Efrat Levy; Asaf Shabtai; Bogdan Groza; Pal-Stefan Murvay; Yuval Elovici
2021-06-14
PopSkipJump: Decision-Based Attack for Probabilistic Classifiers. (99%)Carl-Johann Simon-Gabriel; Noman Ahmed Sheikh; Andreas Krause
Now You See It, Now You Dont: Adversarial Vulnerabilities in Computational Pathology. (99%)Alex Foote; Amina Asif; Ayesha Azam; Tim Marshall-Cox; Nasir Rajpoot; Fayyaz Minhas
Audio Attacks and Defenses against AED Systems -- A Practical Study. (99%)Rodrigo dos Santos; Shirin Nilizadeh
Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions. (92%)Antonio Emanuele Cinà; Kathrin Grosse; Sebastiano Vascon; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery. (81%)John Boutsikas; Maksim E. Eren; Charles Varga; Edward Raff; Cynthia Matuszek; Charles Nicholas
On the Relationship between Heterophily and Robustness of Graph Neural Networks. (81%)Jiong Zhu; Junchen Jin; Donald Loveland; Michael T. Schaub; Danai Koutra
Partial success in closing the gap between human and machine vision. (15%)Robert Geirhos; Kantharaju Narayanappa; Benjamin Mitzkus; Tizian Thieringer; Matthias Bethge; Felix A. Wichmann; Wieland Brendel
Text Generation with Efficient (Soft) Q-Learning. (2%)Han Guo; Bowen Tan; Zhengzhong Liu; Eric P. Xing; Zhiting Hu
Resilient Control of Platooning Networked Robotic Systems via Dynamic Watermarking. (1%)Matthew Porter; Arnav Joshi; Sidhartha Dey; Qirui Wu; Pedro Hespanhol; Anil Aswani; Matthew Johnson-Roberson; Ram Vasudevan
Self-training Guided Adversarial Domain Adaptation For Thermal Imagery. (1%)Ibrahim Batuhan Akkaya; Fazil Altinel; Ugur Halici
Code Integrity Attestation for PLCs using Black Box Neural Network Predictions. (1%)Yuqi Chen; Christopher M. Poskitt; Jun Sun
2021-06-13
Target Model Agnostic Adversarial Attacks with Query Budgets on Language Understanding Models. (99%)Jatin Chauhan; Karan Bhukar; Manohar Kaul
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks. (99%)Utku Ozbulak; Esla Timothy Anzaku; Wesley De Neve; Arnout Van Messem
ATRAS: Adversarially Trained Robust Architecture Search. (96%)Yigit Alparslan; Edward Kim
Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles. (64%)R. Spencer Hallyburton; Yupei Liu; Miroslav Pajic
Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis. (1%)Kangning Liu; Yiqiu Shen; Nan Wu; Jakub Chłędowski; Carlos Fernandez-Granda; Krzysztof J. Geras
HistoTransfer: Understanding Transfer Learning for Histopathology. (1%)Yash Sharma; Lubaina Ehsan; Sana Syed; Donald E. Brown
2021-06-12
Adversarial Robustness via Fisher-Rao Regularization. (67%)Marine Picot; Francisco Messina; Malik Boudiaf; Fabrice Labeau; Ismail Ben Ayed; Pablo Piantanida
What can linearized neural networks actually say about generalization? (31%)Guillermo Ortiz-Jiménez; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack. (2%)Tolulope Odetola; Faiq Khalid; Travis Sandefur; Hawzhin Mohammed; Syed Rafay Hasan
2021-06-11
CausalAdv: Adversarial Robustness through the Lens of Causality. (99%)Yonggang Zhang; Mingming Gong; Tongliang Liu; Gang Niu; Xinmei Tian; Bo Han; Bernhard Schölkopf; Kun Zhang
Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks. (99%)Nezihe Merve Gürel; Xiangyu Qi; Luka Rimanic; Ce Zhang; Bo Li
Adversarial purification with Score-based generative models. (89%)Jongmin Yoon; Sung Ju Hwang; Juho Lee
Relaxing Local Robustness. (80%)Klas Leino; Matt Fredrikson
TDGIA: Effective Injection Attacks on Graph Neural Networks. (76%)Xu Zou; Qinkai Zheng; Yuxiao Dong; Xinyu Guan; Evgeny Kharlamov; Jialiang Lu; Jie Tang
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution. (56%)Fanchao Qi; Yuan Yao; Sophia Xu; Zhiyuan Liu; Maosong Sun
CARTL: Cooperative Adversarially-Robust Transfer Learning. (8%)Dian Chen; Hongxin Hu; Qian Wang; Yinli Li; Cong Wang; Chao Shen; Qi Li
A Shuffling Framework for Local Differential Privacy. (1%)Casey Meehan; Amrita Roy Chowdhury; Kamalika Chaudhuri; Somesh Jha
2021-06-10
Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm. (99%)Mingkang Zhu; Tianlong Chen; Zhangyang Wang
Deep neural network loses attention to adversarial images. (99%)Shashank Kotyan; Danilo Vasconcellos Vargas
Verifying Quantized Neural Networks using SMT-Based Model Checking. (92%)Luiz Sena; Xidan Song; Erickson Alves; Iury Bessa; Edoardo Manino; Lucas Cordeiro; Eddie de Lima Filho
Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation. (80%)Jiawei Zhang; Linyi Li; Huichen Li; Xiaolu Zhang; Shuang Yang; Bo Li
An Ensemble Approach Towards Adversarial Robustness. (41%)Haifeng Qian
Towards an Automated Pipeline for Detecting and Classifying Malware through Machine Learning. (1%)Nicola Loi; Claudio Borile; Daniele Ucci
Fair Classification with Adversarial Perturbations. (1%)L. Elisa Celis; Anay Mehrotra; Nisheeth K. Vishnoi
2021-06-09
HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks. (99%)Mohammad Hossein Samavatian; Saikat Majumdar; Kristin Barber; Radu Teodorescu
Towards Defending against Adversarial Examples via Attack-Invariant Features. (99%)Dawei Zhou; Tongliang Liu; Bo Han; Nannan Wang; Chunlei Peng; Xinbo Gao
Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training. (99%)Dawei Zhou; Nannan Wang; Xinbo Gao; Bo Han; Jun Yu; Xiaoyu Wang; Tongliang Liu
Attacking Adversarial Attacks as A Defense. (99%)Boxi Wu; Heng Pan; Li Shen; Jindong Gu; Shuai Zhao; Zhifeng Li; Deng Cai; Xiaofei He; Wei Liu
We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature. (98%)Bin Liang; Jiachun Li; Jianjun Huang
Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. (97%)Yanchao Sun; Ruijie Zheng; Yongyuan Liang; Furong Huang
URLTran: Improving Phishing URL Detection Using Transformers. (10%)Pranav Maneriker; Jack W. Stokes; Edir Garcia Lazo; Diana Carutasu; Farid Tajaddodianfar; Arun Gururajan
ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs. (5%)Christopher A. Strong; Sydney M. Katz; Anthony L. Corso; Mykel J. Kochenderfer
Practical Machine Learning Safety: A Survey and Primer. (4%)Sina Mohseni; Haotao Wang; Zhiding Yu; Chaowei Xiao; Zhangyang Wang; Jay Yadawa
Network insensitivity to parameter noise via adversarial regularization. (2%)Julian Büchel; Fynn Faber; Dylan R. Muir
2021-06-08
On Improving Adversarial Transferability of Vision Transformers. (99%)Muzammal Naseer; Kanchana Ranasinghe; Salman Khan; Fahad Shahbaz Khan; Fatih Porikli
Simulated Adversarial Testing of Face Recognition Models. (99%)Nataniel Ruiz; Adam Kortylewski; Weichao Qiu; Cihang Xie; Sarah Adel Bargal; Alan Yuille; Stan Sclaroff
Towards the Memorization Effect of Neural Networks in Adversarial Training. (93%)Han Xu; Xiaorui Liu; Wentao Wang; Wenbiao Ding; Zhongqin Wu; Zitao Liu; Anil Jain; Jiliang Tang
Handcrafted Backdoors in Deep Neural Networks. (