It can be hard to stay up-to-date on the published papers in
the field of adversarial examples, which has seen massive
growth in the number of papers written each year.
I have been somewhat religiously keeping track of these
papers for the last few years, and realized it might be
helpful to others if I released this list.
The only requirement I used for selecting papers for this list
is that each one is primarily about adversarial examples,
or makes extensive use of adversarial examples.
Due to the sheer quantity of papers, I can't guarantee
that I actually have found all of them.
But I did try.
I may also have included papers that don't match
these criteria (and are instead about something else),
or made inconsistent
judgement calls as to whether a given paper is
mainly an adversarial example paper.
Send me an email if something is wrong and I'll correct it.
Note also that this list is completely unfiltered:
everything that mainly presents itself as an adversarial
example paper is listed here, and I pass no judgement of quality.
For a curated list of papers that I think are excellent and
worth reading, see the
Adversarial Machine Learning Reading List.
One final note about the data.
This list automatically updates with new papers, even before I
get a chance to manually filter through them.
I do this filtering roughly twice a week, and it's
then that I'll remove the ones that aren't related to
adversarial examples.
As a result, there may be some
false positives among the most recent entries.
Each new, unverified entry is annotated with the probability
that my simplistic (but reasonably well calibrated)
bag-of-words classifier assigns to the paper actually being
about adversarial examples.
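For intuition, a bag-of-words filter of this kind can be sketched in a few lines. Everything below is illustrative only: the actual classifier's vocabulary, weights, and training procedure are not published here, and the keyword weights are invented for the example.

```python
import math
import re

# Invented keyword weights for illustration; a real classifier would learn
# these (and a much larger vocabulary) from labeled paper titles/abstracts.
WEIGHTS = {
    "adversarial": 2.0,
    "attack": 1.0,
    "robustness": 0.8,
    "perturbation": 1.2,
    "backdoor": 0.6,
}
BIAS = -3.0  # prior: most arXiv papers are not about adversarial examples

def adversarial_probability(text: str) -> float:
    """Score a title/abstract and return a probability via a logistic link."""
    tokens = re.findall(r"[a-z]+", text.lower())
    score = BIAS + sum(WEIGHTS.get(t, 0.0) for t in tokens)
    return 1.0 / (1.0 + math.exp(-score))
```

A paper titled "Adversarial perturbation attack on classifiers" scores well above 0.5 here, while an unrelated astronomy paper scores near zero; a calibrated version of this idea is what produces the percentages shown next to the unverified entries.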
The full paper list appears below. I've also released a
TXT file (and a TXT file
with abstracts) and a
JSON file
with the same data. If you do anything interesting with
this data, I'd be happy to hear about it.
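As a hypothetical example of working with the JSON release, the snippet below filters papers by posting date. The field names ("date", "title", "authors") are my assumptions for illustration; the actual schema of the released file may differ.

```python
import json

# Hypothetical example records; the released JSON file may use
# different field names than the ones assumed here.
sample = json.loads("""
[
  {"date": "2023-11-28", "title": "Example Attack Paper", "authors": ["A. Author"]},
  {"date": "2023-11-27", "title": "Example Defense Paper", "authors": ["B. Author"]}
]
""")

def papers_on(entries, date):
    """Return the titles of all papers posted on the given date."""
    return [e["title"] for e in entries if e["date"] == date]

print(papers_on(sample, "2023-11-28"))  # prints ['Example Attack Paper']
```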
Paper List
2023-11-28
Efficient Key-Based Adversarial Defense for ImageNet by Using Pre-trained Model. (98%)AprilPyone MaungMaung; Isao Echizen; Hitoshi Kiya
1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness. (13%)Bernd Prach; Fabio Brau; Giorgio Buttazzo; Christoph H. Lampert
Scalable Extraction of Training Data from (Production) Language Models. (10%)Milad Nasr; Nicholas Carlini; Jonathan Hayase; Matthew Jagielski; A. Feder Cooper; Daphne Ippolito; Christopher A. Choquette-Choo; Eric Wallace; Florian Tramèr; Katherine Lee
Cooperative Abnormal Node Detection with Adversary Resistance: A Probabilistic Approach. (10%)Yingying Huangfu; Tian Bai
On robust overfitting: adversarial training induced distribution matters. (1%)Runzhi Tian; Yongyi Mao
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations. (1%)Maximilian Dreyer; Reduan Achtibat; Wojciech Samek; Sebastian Lapuschkin
2023-11-27
RetouchUAA: Unconstrained Adversarial Attack via Image Retouching. (99%)Mengda Xie; Yiling He; Meie Fang
Adversarial Doodles: Interpretable and Human-drawable Attacks Provide Describable Insights. (99%)Ryoya Nara; Yusuke Matsui
Instruct2Attack: Language-Guided Semantic Adversarial Attacks. (98%)Jiang Liu; Chen Wei; Yuxiang Guo; Heng Yu; Alan Yuille; Soheil Feizi; Chun Pong Lau; Rama Chellappa
CLAP: Contrastive Learning with Augmented Prompts for Robustness on Pretrained Vision-Language Models. (95%)Yichao Cai; Yuhang Liu; Zhen Zhang; Javen Qinfeng Shi
A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective. (50%)Xianghua Xie; Chen Hu; Hanchi Ren; Jingjing Deng
Threshold Breaker: Can Counter-Based RowHammer Prevention Mechanisms Truly Safeguard DRAM? (31%)Ranyang Zhou; Jacqueline Liu; Sabbir Ahmed; Nakul Kochar; Adnan Siraj Rakin; Shaahin Angizi
Distributed Attacks over Federated Reinforcement Learning-enabled Cell Sleep Control. (22%)Han Zhang; Hao Zhou; Medhat Elsayed; Majid Bavand; Raimundas Gaigalas; Yigit Ozcan; Melike Erol-Kantarci
"Do Users fall for Real Adversarial Phishing?" Investigating the Human response to Evasive Webpages. (15%)Ajka Draganovic; Savino Dambra; Javier Aldana Iuit; Kevin Roundy; Giovanni Apruzzese
How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs. (12%)Haoqin Tu; Chenhang Cui; Zijun Wang; Yiyang Zhou; Bingchen Zhao; Junlin Han; Wangchunshu Zhou; Huaxiu Yao; Cihang Xie
Microarchitectural Security of AWS Firecracker VMM for Serverless Cloud Platforms. (1%)Zane Worcester Polytechnic Institute Weissman; Thore University of Lübeck Tiemann; Thomas University of Lübeck Eisenbarth; Berk Worcester Polytechnic Institute Sunar
2023-11-26
Adversarial Purification of Information Masking. (99%)Sitong Liu; Zhichao Lian; Shuangquan Zhang; Liang Xiao
Having Second Thoughts? Let's hear it. (56%)Jung H. Lee; Sujith Vijayan
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP. (13%)Jiawang Bai; Kuofeng Gao; Shaobo Min; Shu-Tao Xia; Zhifeng Li; Wei Liu
Confidence Is All You Need for MI Attacks. (2%)Abhishek Sinha; Himanshi Tibrewal; Mansi Gupta; Nikhar Waghela; Shivank Garg
2023-11-25
Mixing Classifiers to Alleviate the Accuracy-Robustness Trade-Off. (26%)Yatong Bai; Brendon G. Anderson; Somayeh Sojoudi
Robust Graph Neural Networks via Unbiased Aggregation. (10%)Ruiqi Feng; Zhichao Hou; Tyler Derr; Xiaorui Liu
Effective Backdoor Mitigation Depends on the Pre-training Objective. (10%)Sahil Verma; Gantavya Bhatt; Avi Schwarzschild; Soumye Singhal; Arnav Mohanty Das; Chirag Shah; John P Dickerson; Jeff Bilmes
2023-11-24
Trainwreck: A damaging adversarial attack on image classifiers. (99%)Jan Zahálka
Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on Segmentation Models. (96%)Francesco Croce; Matthias Hein
Universal Jailbreak Backdoors from Poisoned Human Feedback. (1%)Javier Rando; Florian Tramèr
2023-11-23
When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence. (99%)Benoit Coqueret; Mathieu Carbone; Olivier Sentieys; Gabriel Zaid
Adversarial defense based on distribution transfer. (99%)Jiahao Chen; Diqun Yan; Li Dong
Robust and Interpretable COVID-19 Diagnosis on Chest X-ray Images using Adversarial Training. (68%)Karina Yang; Alexis Bennett; Dominique Duncan
2023-11-22
A Survey of Adversarial CAPTCHAs on its History, Classification and Generation. (99%)Zisheng Xu; Qiao Yan; F. Richard Yu; Victor C. M. Leung
Transfer Attacks and Defenses for Large Language Models on Coding Tasks. (99%)Chi Zhang; Zifan Wang; Ravi Mangal; Matt Fredrikson; Limin Jia; Corina Pasareanu
Panda or not Panda? Understanding Adversarial Attacks with Interactive Visualization. (98%)Yuzhe You; Jarvis Tse; Jian Zhao
Hard Label Black Box Node Injection Attack on Graph Neural Networks. (93%)Yu Zhou; Zihao Dong; Guofeng Zhang; Jingchen Tang
Security and Privacy Challenges in Deep Learning Models. (74%)Gopichandh Golla
A Somewhat Robust Image Watermark against Diffusion-based Editing Models. (50%)Mingtian Tan; Tianhao Wang; Somesh Jha
OASIS: Offsetting Active Reconstruction Attacks in Federated Learning. (2%)Tre' R. Jeter; Truc Nguyen; Raed Alharbi; My T. Thai
2023-11-21
SD-NAE: Generating Natural Adversarial Examples with Stable Diffusion. (96%)Yueqian Lin; Jingyang Zhang; Yiran Chen; Hai Li
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise. (96%)Yixin Liu; Kaidi Xu; Xun Chen; Lichao Sun
Attacking Motion Planners Using Adversarial Perception Errors. (69%)Jonathan Sadeghi; Nicholas A. Lord; John Redford; Romain Mueller
Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis. (62%)Yixin Liu; Chenrui Fan; Yutong Dai; Xun Chen; Pan Zhou; Lichao Sun
Attention Deficit is Ordered! Fooling Deformable Vision Transformers with Collaborative Adversarial Patches. (47%)Quazi Mishkatul Alam; Bilel Tarchoun; Ihsen Alouani; Nael Abu-Ghazaleh
Iris Presentation Attack: Assessing the Impact of Combining Vanadium Dioxide Films with Artificial Eyes. (1%)Darshika Jauhari; Renu Sharma; Cunjian Chen; Nelson Sepulveda; Arun Ross
2023-11-20
ODDR: Outlier Detection & Dimension Reduction Based Defense Against Adversarial Patches. (99%)Nandish Chattopadhyay; Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammad Shafique
DefensiveDR: Defending against Adversarial Patches using Dimensionality Reduction. (99%)Nandish Chattopadhyay; Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammad Shafique
Generating Valid and Natural Adversarial Examples with Large Language Models. (99%)Zimu Wang; Wei Wang; Qi Chen; Qiufeng Wang; Anh Nguyen
AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems. (99%)Sai Amrit Patnaik; Shivali Chansoriya; Anil K. Jain; Anoop M. Namboodiri
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems. (50%)Guangjing Wang; Ce Zhou; Yuanda Wang; Bocheng Chen; Hanqing Guo; Qiben Yan
Understanding Variation in Subpopulation Susceptibility to Poisoning Attacks. (15%)Evan Rose; Fnu Suya; David Evans
Training robust and generalizable quantum models. (10%)Julian Berberich; Daniel Fink; Daniel Pranjić; Christian Tutschku; Christian Holm
BrainWash: A Poisoning Attack to Forget in Continual Learning. (4%)Ali Abbasi; Parsa Nooralinejad; Hamed Pirsiavash; Soheil Kolouri
2023-11-19
Adversarial Prompt Tuning for Vision-Language Models. (97%)Jiaming Zhang; Xingjun Ma; Xin Wang; Lingyu Qiu; Jiaqi Wang; Yu-Gang Jiang; Jitao Sang
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning. (69%)Siyuan Liang; Mingli Zhu; Aishan Liu; Baoyuan Wu; Xiaochun Cao; Ee-Chien Chang
EditShield: Protecting Unauthorized Image Editing by Instruction-guided Diffusion Models. (10%)Ruoxi Chen; Haibo Jin; Jinyin Chen; Lichao Sun
2023-11-18
Boost Adversarial Transferability by Uniform Scale and Mix Mask Method. (99%)Tao Wang; Zijian Ying; Qianmu Li; Zhichao Lian
Improving Adversarial Transferability by Stable Diffusion. (99%)Jiayang Liu; Siyu Zhu; Siyuan Liang; Jie Zhang; Han Fang; Weiming Zhang; Ee-Chien Chang
Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications. (92%)Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
TextGuard: Provable Defense against Backdoor Attacks on Text Classification. (82%)Hengzhi Pei; Jinyuan Jia; Wenbo Guo; Bo Li; Dawn Song
Robust Network Slicing: Multi-Agent Policies, Adversarial Attacks, and Defensive Strategies. (1%)Feng Wang; M. Cenk Gursoy; Senem Velipasalar
2023-11-17
Breaking Temporal Consistency: Generating Video Universal Adversarial Perturbations Using Image Models. (97%)Hee-Seon Kim; Minji Son; Minbeom Kim; Myung-Joon Kwon; Changick Kim
PACOL: Poisoning Attacks Against Continual Learners. (93%)Huayu Li; Gregory Ditzler
Two-Factor Authentication Approach Based on Behavior Patterns for Defeating Puppet Attacks. (1%)Wenhao Wang; Guyue Li; Zhiming Chu; Haobo Li; Daniele Faccio
2023-11-16
Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting. (99%)Ilbert Romain; V. Hoang Thai; Zhang Zonghua; Palpanas Themis
Hijacking Large Language Models via Adversarial In-Context Learning. (75%)Yao Qiang; Xiangyu Zhou; Dongxiao Zhu
Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking. (54%)Nan Xu; Fei Wang; Ben Zhou; Bang Zheng Li; Chaowei Xiao; Muhao Chen
Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations. (38%)Wenjie Mo; Jiashu Xu; Qin Liu; Jiongxiao Wang; Jun Yan; Chaowei Xiao; Muhao Chen
On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models. (13%)Jiongxiao Wang; Junlin Wu; Muhao Chen; Yevgeniy Vorobeychik; Chaowei Xiao
Towards Improving Robustness Against Common Corruptions using Mixture of Class Specific Experts. (2%)Shashank Kotyan; Danilo Vasconcellos Vargas
Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities. (2%)Avishree Khare; Saikat Dutta; Ziyang Li; Alaia Solko-Breslin; Rajeev Alur; Mayur Naik
Towards more Practical Threat Models in Artificial Intelligence Security. (2%)Kathrin Grosse; Lukas Bieringer; Tarek Richard Besold; Alexandre Alahi
You Cannot Escape Me: Detecting Evasions of SIEM Rules in Enterprise Networks. (1%)Rafael Uetz; Marco Herzog; Louis Hackländer; Simon Schwarz; Martin Henze
2023-11-15
Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts. (99%)Yuanwei Wu; Xiang Li; Yixin Liu; Pan Zhou; Lichao Sun
Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment. (74%)Haoran Wang; Kai Shu
Fast Certification of Vision-Language Models Using Incremental Randomized Smoothing. (64%)A K Iowa State University Nirala; A New York University Joshi; C New York University Hegde; S Iowa State University Sarkar
Adversarially Robust Spiking Neural Networks Through Conversion. (26%)Ozan Özdenizci; Robert Legenstein
Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization. (15%)Zhexin Zhang; Junxiao Yang; Pei Ke; Minlie Huang
Privacy Threats in Stable Diffusion Models. (13%)Thomas Cilloni; Charles Fleming; Charles Walter
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities. (9%)Lingbo Mo; Boshi Wang; Muhao Chen; Huan Sun
MirrorNet: A TEE-Friendly Framework for Secure On-device DNN Inference. (2%)Ziyu Liu; Yukui Luo; Shijin Duan; Tong Zhou; Xiaolin Xu
Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models. (1%)Yueqing Liang; Lu Cheng; Ali Payani; Kai Shu
JAB: Joint Adversarial Prompting and Belief Augmentation. (1%)Ninareh Mehrabi; Palash Goyal; Anil Ramakrishna; Jwala Dhamala; Shalini Ghosh; Richard Zemel; Kai-Wei Chang; Aram Galstyan; Rahul Gupta
2023-11-14
Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning. (99%)Shashank Kotyan; Danilo Vasconcellos Vargas
Physical Adversarial Examples for Multi-Camera Systems. (99%)Ana Răduţoiu; Jan-Philipp Schulze; Philip Sperl; Konstantin Böttinger
DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Pre-trained Language Models. (99%)Yibo Wang; Xiangjue Dong; James Caverlee; Philip S. Yu
On The Relationship Between Universal Adversarial Attacks And Sparse Representations. (98%)Dana Weitzner; Raja Giryes
A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily. (56%)Peng Ding; Jun Kuang; Dan Ma; Xuezhi Cao; Yunsen Xian; Jiajun Chen; Shujian Huang
Multi-Set Inoculation: Assessing Model Robustness Across Multiple Challenge Sets. (13%)Vatsal Gupta; Pranshu Pandya; Tushar Kataria; Vivek Gupta; Dan Roth
The Perception-Robustness Tradeoff in Deterministic Image Restoration. (1%)Guy Ohayon; Tomer Michaeli; Michael Elad
2023-11-13
Adversarial Purification for Data-Driven Power System Event Classifiers with Diffusion Models. (99%)Yuanbin Cheng; Koji Yamashita; Jim Follum; Nanpeng Yu
Parrot-Trained Adversarial Examples: Pushing the Practicality of Black-Box Audio Attacks against Speaker Recognition Models. (99%)Rui Duan; Zhe Qu; Leah Ding; Yao Liu; Zhuo Lu
An Extensive Study on Adversarial Attack against Pre-trained Models of Code. (99%)Xiaohu Du; Ming Wen; Zichao Wei; Shangwen Wang; Hai Jin
Untargeted Black-box Attacks for Social Recommendations. (96%)Wenqi Fan; Shijie Wang; Xiao-yong Wei; Xiaowei Mei; Qing Li
On the Robustness of Neural Collapse and the Neural Collapse of Robustness. (80%)Jingtong Su; Ya Shi Zhang; Nikolaos Tsilivis; Julia Kempe
Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data. (41%)Bart Pleiter; Behrad Tajalli; Stefanos Koffas; Gorka Abad; Jing Xu; Martha Larson; Stjepan Picek
2023-11-12
Learning Globally Optimized Language Structure via Adversarial Training. (83%)Xuwang Yin
Contractive Systems Improve Graph Neural Networks Against Adversarial Attacks. (70%)Moshe Eliasof; Davide Murari; Ferdia Sherry; Carola-Bibiane Schönlieb
Analytical Verification of Deep Neural Network Performance for Time-Synchronized Distribution System State Estimation. (5%)Behrouz Azimian; Shiva Moshtagh; Anamitra Pal; Shanshan Ma
DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training. (1%)Kanta Kaneda; Ryosuke Korekata; Yuiga Wada; Shunya Nagashima; Motonari Kambara; Yui Iioka; Haruka Matsuo; Yuto Imai; Takayuki Nishimura; Komei Sugiura
2023-11-10
Flatness-aware Adversarial Attack. (99%)Mingyuan Fan; Xiaodan Li; Cen Chen; Yinggui Wang
Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous. (99%)Ziwei Wang; Nabil Aouf; Jose Pizarro; Christophe Honvault
Fight Fire with Fire: Combating Adversarial Patch Attacks using Pattern-randomized Defensive Patches. (98%)Jianan Feng; Jiachun Li; Changqing Miao; Jianjun Huang; Wei You; Wenchang Shi; Bin Liang
Resilient and constrained consensus against adversarial attacks: A distributed MPC framework. (84%)Henglai Wei; Kunwu Zhang; Hui Zhang; Yang Shi
CALLOC: Curriculum Adversarial Learning for Secure and Robust Indoor Localization. (1%)Danish Gufran; Sudeep Pasricha
Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration. (1%)Wenjie Fu; Huandong Wang; Chen Gao; Guanghua Liu; Yong Li; Tao Jiang
2023-11-09
ABIGX: A Unified Framework for eXplainable Fault Detection and Classification. (68%)Yue Zhuo; Jinchuan Qian; Zhihuan Song; Zhiqiang Ge
Honest Score Client Selection Scheme: Preventing Federated Learning Label Flipping Attacks in Non-IID Scenarios. (50%)Yanli Li; Huaming Chen; Wei Bao; Zhengmeng Xu; Dong Yuan
Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction. (15%)Shanghao Shi; Ning Wang; Yang Xiao; Chaoyu Zhang; Yi Shi; Y. Thomas Hou; Wenjing Lou
FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts. (1%)Yichen Gong; Delong Ran; Jinyuan Liu; Conglei Wang; Tianshuo Cong; Anyu Wang; Sisi Duan; Xiaoyun Wang
FireMatch: A Semi-Supervised Video Fire Detection Network Based on Consistency and Distribution Alignment. (1%)Qinghua Lin; Zuoyong Li; Kun Zeng; Haoyi Fan; Wei Li; Xiaoguang Zhou
2023-11-08
Constrained Adaptive Attacks: Realistic Evaluation of Adversarial Examples and Robust Training of Deep Neural Networks for Tabular Data. (99%)Thibault Simonetto; Salah Ghamizi; Antoine Desjardins; Maxime Cordy; Yves Le Traon
Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection. (70%)Akshit Jindal; Vikram Goyal; Saket Anand; Chetan Arora
Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?" (61%)C. Daniel Freeman; Laura Culp; Aaron Parisi; Maxwell L Bileschi; Gamaleldin F Elsayed; Alex Rizkowsky; Isabelle Simpson; Alex Alemi; Azade Nova; Ben Adlam; Bernd Bohnet; Gaurav Mishra; Hanie Sedghi; Igor Mordatch; Izzeddin Gur; Jaehoon Lee; JD Co-Reyes; Jeffrey Pennington; Kelvin Xu; Kevin Swersky; Kshiteej Mahajan; Lechao Xiao; Rosanne Liu; Simon Kornblith; Noah Constant; Peter J. Liu; Roman Novak; Yundi Qian; Noah Fiedel; Jascha Sohl-Dickstein
SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training. (10%)Rui Xu; Wenkang Qin; Peixiang Huang; Haowang; Lin Luo
Counter-Empirical Attacking based on Adversarial Reinforcement Learning for Time-Relevant Scoring System. (1%)Xiangguo Sun; Hong Cheng; Hang Dong; Bo Qiao; Si Qin; Qingwei Lin
Domain Adaptive Object Detection via Balancing Between Self-Training and Adversarial Learning. (1%)Muhammad Akhtar Munir; Muhammad Haris Khan; M. Saquib Sarfraz; Mohsen Ali
2023-11-07
Unveiling Safety Vulnerabilities of Large Language Models. (61%)George Kour; Marcel Zalmanovici; Naama Zwerdling; Esther Goldbraich; Ora Nova Fandina; Ateret Anaby-Tavor; Orna Raz; Eitan Farchi
FD-MIA: Efficient Attacks on Fairness-enhanced Models. (10%)Huan Tian; Guangsheng Zhang; Bo Liu; Tianqing Zhu; Ming Ding; Wanlei Zhou
Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications. (2%)Fengqing Jiang; Zhangchen Xu; Luyao Niu; Boxin Wang; Jinyuan Jia; Bo Li; Radha Poovendran
SoK: Security Below the OS -- A Security Analysis of UEFI. (1%)Priyanka Prakash Surve; Oleg Brodt; Mark Yampolskiy; Yuval Elovici; Asaf Shabtai
2023-11-06
Measuring Adversarial Datasets. (92%)Yuanchen Bai; Raoyi Huang; Vijay Viswanathan; Tzu-Sheng Kuo; Tongshuang Wu
Can LLMs Follow Simple Rules? (68%)Norman Mu; Sarah Chen; Zifan Wang; Sizhe Chen; David Karamardian; Lulwa Aljeraisy; Dan Hendrycks; David Wagner
Preserving Privacy in GANs Against Membership Inference Attack. (33%)Mohammadhadi Shateri; Francisco Messina; Fabrice Labeau; Pablo Piantanida
Cal-DETR: Calibrated Detection Transformer. (4%)Muhammad Akhtar Munir; Salman Khan; Muhammad Haris Khan; Mohsen Ali; Fahad Shahbaz Khan
2023-11-05
ELEGANT: Certified Defense on the Fairness of Graph Neural Networks. (10%)Yushun Dong; Binchi Zhang; Hanghang Tong; Jundong Li
2023-11-04
From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models. (22%)Zhuoshi Pan; Yuguang Yao; Gaowen Liu; Bingquan Shen; H. Vicky Zhao; Ramana Rao Kompella; Sijia Liu
2023-11-03
Efficient Black-Box Adversarial Attacks on Neural Text Detectors. (22%)Vitalii Fishchuk; Daniel Braun
The Alignment Problem in Context. (2%)Raphaël Millière
2023-11-02
Adversary ML Resilience in Autonomous Driving Through Human Centered Perception Mechanisms. (99%)Aakriti Shah
Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly. (99%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game. (93%)Sam Toyer; Olivia Watkins; Ethan Adrian Mendes; Justin Svegliato; Luke Bailey; Tiffany Wang; Isaac Ong; Karim Elmaaroufi; Pieter Abbeel; Trevor Darrell; Alan Ritter; Stuart Russell
On the Lipschitz constant of random neural networks. (92%)Paul Geuchen; Thomas Heindl; Dominik Stöger; Felix Voigtlaender
Universal Perturbation-based Secret Key-Controlled Data Hiding. (80%)Donghua Wang; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models. (76%)Andy Zhou; Jindong Wang; Yu-Xiong Wang; Haohan Wang
Assist Is Just as Important as the Goal: Image Resurfacing to Aid Model's Robust Prediction. (13%)Abhijith Sharma; Phil Munz; Apurva Narayan
Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula. (12%)Aryaman Reddi; Maximilian Tölle; Jan Peters; Georgia Chalvatzaki; Carlo D'Eramo
Sequential Subset Matching for Dataset Distillation. (1%)Jiawei Du; Qin Shi; Joey Tianyi Zhou
E(2) Equivariant Neural Networks for Robust Galaxy Morphology Classification. (1%)Sneh Pandya; Purvik Patel; Franc O; Jonathan Blazek
2023-11-01
NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks. (99%)Seokil Ham; Jungwuk Park; Dong-Jun Han; Jaekyun Moon
Adversarial Examples in the Physical World: A Survey. (98%)Jiakai Wang; Donghua Wang; Jin Hu; Siyang Wu; Tingsong Jiang; Wen Yao; Aishan Liu; Xianglong Liu
Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems. (80%)Ziqing Lu; Guanlin Liu; Lifeng Cai; Weiyu Xu
Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation. (76%)Shashank Kotyan; Danilo Vasconcellos Vargas
MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training. (75%)Jiacheng Li; Ninghui Li; Bruno Ribeiro
Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks. (1%)Yichen Huang; Timothy Baldwin
Open-Set Face Recognition with Maximal Entropy and Objectosphere Loss. (1%)Rafael Henrique Vareto; Yu Linghu; Terrance E. Boult; William Robson Schwartz; Manuel Günther
2023-10-31
Amoeba: Circumventing ML-supported Network Censorship via Adversarial Reinforcement Learning. (99%)Haoyu Liu; Alec F. Diallo; Paul Patras
Robust Safety Classifier for Large Language Models: Adversarial Prompt Shield. (99%)Jinhwa Kim; Ali Derakhshan; Ian G. Harris
LFAA: Crafting Transferable Targeted Adversarial Examples with Low-Frequency Perturbations. (99%)Kunyu Wang; Juluan Shi; Wenxuan Wang
Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems. (98%)Jung-Woo Chang; Ke Sun; Nasimeh Heydaribeni; Seira Hidano; Xinyu Zhang; Farinaz Koushanfar
Is Robustness Transferable across Languages in Multilingual Neural Machine Translation? (26%)Leiyu Pan; Supryadi; Deyi Xiong
Dynamic Batch Norm Statistics Update for Natural Robustness. (22%)Shahbaz Rezaei; Mohammad Sadegh Norouzzadeh
2023-10-30
Label-Only Model Inversion Attacks via Knowledge Transfer. (83%)Ngoc-Bao Nguyen; Keshigeyan Chandrasegaran; Milad Abdollahzadeh; Ngai-Man Cheung
Exploring Geometry of Blind Spots in Vision Models. (83%)Sriram Balasubramanian; Gaurang Sriramanan; Vinu Sankar Sadasivan; Soheil Feizi
Adversarial Attacks and Defenses in Large Language Models: Old and New Threats. (74%)Leo Schwinn; David Dobre; Stephan Günnemann; Gauthier Gidel
Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models. (61%)Minxing Zhang; Ning Yu; Rui Wen; Michael Backes; Yang Zhang
Differentially Private Reward Estimation with Preference Feedback. (16%)Sayak Ray Chowdhury; Xingyu Zhou; Nagarajan Natarajan
Asymmetric Diffusion Based Channel-Adaptive Secure Wireless Semantic Communications. (10%)Xintian Ren; Jun Wu; Hansong Xu; Qianqian Pan
Privacy-Preserving Federated Learning over Vertically and Horizontally Partitioned Data for Financial Anomaly Detection. (1%)Swanand Ravindra Kadhe; Heiko Ludwig; Nathalie Baracaldo; Alan King; Yi Zhou; Keith Houck; Ambrish Rawat; Mark Purcell; Naoise Holohan; Mikio Takeuchi; Ryo Kawahara; Nir Drucker; Hayim Shaul; Eyal Kushnir; Omri Soceanu
2023-10-29
Blacksmith: Fast Adversarial Training of Vision Transformers via a Mixture of Single-step and Multi-step Methods. (99%)Mahdi Salmani; Alireza Dehghanpour Farashah; Mohammad Azizmalayeri; Mahdi Amiri; Navid Eslami; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
Boosting Decision-Based Black-Box Adversarial Attack with Gradient Priors. (98%)Han Liu; Xingshuo Huang; Xiaotong Zhang; Qimai Li; Fenglong Ma; Wei Wang; Hongyang Chen; Hong Yu; Xianchao Zhang
BERT Lost Patience Won't Be Robust to Adversarial Slowdown. (98%)Zachary Coalson; Gabriel Ritter; Rakesh Bobba; Sanghyun Hong
Adversarial Examples Are Not Real Features. (98%)Ang Li; Yifei Wang; Yiwen Guo; Yisen Wang
IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI. (82%)Bochuan Cao; Changjiang Li; Ting Wang; Jinyuan Jia; Bo Li; Jinghui Chen
Poisoning Retrieval Corpora by Injecting Adversarial Passages. (68%)Zexuan Zhong; Ziqing Huang; Alexander Wettig; Danqi Chen
Label Poisoning is All You Need. (54%)Rishi D. Jha; Jonathan Hayase; Sewoong Oh
Robustifying Language Models with Test-Time Adaptation. (47%)Noah Thomas McDermott; Junfeng Yang; Chengzhi Mao
Path Analysis for Effective Fault Localization in Deep Neural Networks. (1%)Soroush Hashemifar; Saeed Parsa; Akram Kalaee
2023-10-28
Assessing and Improving Syntactic Adversarial Robustness of Pre-trained Models for Code Translation. (92%)Guang Yang; Yu Zhou; Xiangyu Zhang; Xiang Chen; Tingting Han; Taolue Chen
Benchmark Generation Framework with Customizable Distortions for Image Classifier Robustness. (86%)Soumyendu Sarkar; Ashwin Ramesh Babu; Sajad Mousavi; Zachariah Carmichael; Vineet Gundecha; Sahand Ghorbanpour; Ricardo Luna; Gutierrez Antonio Guillen; Avisek Naug
Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness. (61%)Boya Zhang; Weijian Luo; Zhihua Zhang
Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers. (47%)Wencong You; Zayd Hammoudeh; Daniel Lowd
Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation. (8%)Kunlin Cai; Jinghuai Zhang; Will Shand; Zhiqing Hong; Guang Wang; Desheng Zhang; Jianfeng Chi; Yuan Tian
2023-10-27
DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification. (99%)Mintong Kang; Dawn Song; Bo Li
Understanding and Improving Ensemble Adversarial Defense. (99%)Yian Deng; Tingting Mu
LipSim: A Provably Robust Perceptual Similarity Metric. (45%)Sara Ghazanfari; Alexandre Araujo; Prashanth Krishnamurthy; Farshad Khorrami; Siddharth Garg
Elevating Code-mixed Text Handling through Auditory Information of Words. (5%)Mamta; Zishan Ahmad; Asif Ekbal
Understanding Parameter Saliency via Extreme Value Theory. (1%)Shuo Wang; Issei Sato
2023-10-26
Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks. (99%)Ehsan Nowroozi; Samaneh Ghelichkhani; Imran Haider; Ali Dehghantanha
A Survey on Transferability of Adversarial Examples across Deep Neural Networks. (99%)Jindong Gu; Xiaojun Jia; Jorge Pau de; Wenqain Yu; Xinwei Liu; Avery Ma; Yuan Xun; Anjun Hu; Ashkan Khakzar; Zhijiang Li; Xiaochun Cao; Philip Torr
Defending Against Transfer Attacks From Public Models. (99%)Chawin Sitawarin; Jaewon Chang; David Huang; Wesson Altoyan; David Wagner
Uncertainty-weighted Loss Functions for Improved Adversarial Attacks on Semantic Segmentation. (93%)Kira Maag; Asja Fischer
Detection Defenses: An Empty Promise against Adversarial Patch Attacks on Optical Flow. (93%)Erik Scheurer; Jenny Schmalfuss; Alexander Lis; Andrés Bruhn
SoK: Pitfalls in Evaluating Black-Box Attacks. (76%)Fnu Suya; Anshuman Suri; Tingwei Zhang; Jingtao Hong; Yuan Tian; David Evans
CBD: A Certified Backdoor Detector Based on Local Dominant Probability. (76%)Zhen Xiang; Zidi Xiong; Bo Li
Instability of computer vision models is a necessary result of the task itself. (26%)Oliver Turnbull; George Cevora
PAC-tuning:Fine-tuning Pretrained Language Models with PAC-driven Perturbed Gradient Descent. (1%)Guangliang Liu; Zhiyu Xue; Xitong Zhang; Kristen Marie Johnson; Rongrong Wang
A minimax optimal control approach for robust neural ODEs. (1%)Cristina Cipriani; Alessandro Scagliotti; Tobias Wöhrer
2023-10-25
Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks. (93%)Aradhana Sinha; Ananth Balashankar; Ahmad Beirami; Thi Avrahami; Jilin Chen; Alex Beutel
Trust, but Verify: Robust Image Segmentation using Deep Learning. (54%)Fahim Ahmed Zaman; Xiaodong Wu; Weiyu Xu; Milan Sonka; Raghuraman Mudumbai
Dual Defense: Adversarial, Traceable, and Invisible Robust Watermarking against Face Swapping. (26%)Yunming Zhang; Dengpan Ye; Caiyun Xie; Long Tang; Chuanxi Chen; Ziyi Liu; Jiacheng Deng
On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts. (22%)Yixin Wu; Ning Yu; Michael Backes; Yun Shen; Yang Zhang
Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs. (12%)Jianwei Fei; Zhihua Xia; Benedetta Tondi; Mauro Barni
Multi-scale Diffusion Denoised Smoothing. (1%)Jongheon Jeong; Jinwoo Shin
SparseDFF: Sparse-View Feature Distillation for One-Shot Dexterous Manipulation. (1%)Qianxu Wang; Haotong Zhang; Congyue Deng; Yang You; Hao Dong; Yixin Zhu; Leonidas Guibas
2023-10-24
Adversarial sample generation and training using geometric masks for accurate and resilient license plate character recognition. (99%)Bishal Shrestha; Griwan Khakurel; Kritika Simkhada; Badri Adhikari
RAEDiff: Denoising Diffusion Probabilistic Models Based Reversible Adversarial Examples Self-Generation and Self-Recovery. (92%)Fan Xing; Xiaoyi Zhou; Xuefeng Fan; Zhuo Tian; Yan Zhao
Defense Against Model Extraction Attacks on Recommender Systems. (92%)Sixiao Zhang; Hongzhi Yin; Hongxu Chen; Cheng Long
Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World. (89%)Zhiling Zhang; Jie Zhang; Kui Zhang; Wenbo Zhou; Weiming Zhang; Nenghai Yu
Hierarchical Randomized Smoothing. (75%)Yan Scholten; Jan Schuchardt; Aleksandar Bojchevski; Stephan Günnemann
Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks. (73%)Yang Chen; Stjepan Picek; Zhonglin Ye; Zhaoyang Wang; Haixing Zhao
Corrupting Neuron Explanations of Deep Visual Features. (41%)Divyansh Srivastava; Tuomas Oikarinen; Tsui-Wei Weng
Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers. (10%)Mosh Levy; Shauli Ravfogel; Yoav Goldberg
A Survey on Detection of LLMs-Generated Content. (1%)Xianjun Yang; Liangming Pan; Xuandong Zhao; Haifeng Chen; Linda Petzold; William Yang Wang; Wei Cheng
White-box Compiler Fuzzing Empowered by Large Language Models. (1%)Chenyuan Yang; Yinlin Deng; Runyu Lu; Jiayi Yao; Jiawei Liu; Reyhaneh Jabbarvand; Lingming Zhang
Enhancing Large Language Models for Secure Code Generation: A Dataset-driven Study on Vulnerability Mitigation. (1%)Jiexin Wang; Liuwen Cao; Xitong Luo; Zhiping Zhou; Jiayuan Xie; Adam Jatowt; Yi Cai
2023-10-23
Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval. (99%)Xu Yuan; Zheng Zhang; Xunguang Wang; Lin Wu
F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns. (99%)Yaguan Qian; Chenyu Zhao; Zhaoquan Gu; Bin Wang; Shouling Ji; Wei Wang; Boyang Zhou; Pan Zhou
Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks. (98%)Xiaojun Jia; Jianshu Li; Jindong Gu; Yang Bai; Xiaochun Cao
AutoDAN: Automatic and Interpretable Adversarial Attacks on Large Language Models. (92%)Sicheng Zhu; Ruiyi Zhang; Bang An; Gang Wu; Joe Barrow; Zichao Wang; Furong Huang; Ani Nenkova; Tong Sun
On the Detection of Image-Scaling Attacks in Machine Learning. (15%)Erwin Quiring; Andreas Müller; Konrad Rieck
RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions. (1%)Lingdong Kong; Shaoyuan Xie; Hanjiang Hu; Lai Xing Ng; Benoit R. Cottereau; Wei Tsang Ooi
2023-10-22
Diffusion-Based Adversarial Purification for Speaker Verification. (99%)Yibo Bai; Xiao-Lei Zhang
CT-GAT: Cross-Task Generative Adversarial Attack based on Transferability. (99%)Minxuan Lv; Chengwei Dai; Kun Li; Wei Zhou; Songlin Hu
Imperceptible CMOS camera dazzle for adversarial attacks on deep neural networks. (92%)Zvi Stein; Adrian Stern
ADoPT: LiDAR Spoofing Attack Detection Based on Point-Level Temporal Consistency. (26%)Minkyoung Cho; Yulong Cao; Zixiang Zhou; Z. Morley Mao
Attention-Enhancing Backdoor Attacks Against BERT-based Models. (13%)Weimin Lyu; Songzhu Zheng; Lu Pang; Haibin Ling; Chao Chen
MoPe: Model Perturbation-based Privacy Attacks on Language Models. (9%)Marvin Li; Jason Wang; Jeffrey Wang; Seth Neel
2023-10-21
Adversarial Image Generation by Spatial Transformation in Perceptual Colorspaces. (99%)Ayberk Aydin; Alptekin Temizel
Training Image Derivatives: Increased Accuracy and Universal Robustness. (5%)Vsevolod I. Avrutskiy
2023-10-20
Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting. (99%)Zecheng Tang; Kaifeng Qi; Juntao Li; Min Zhang
An LLM can Fool Itself: A Prompt-Based Adversarial Attack. (99%)Xilie Xu; Keyi Kong; Ning Liu; Lizhen Cui; Di Wang; Jingfeng Zhang; Mohan Kankanhalli
Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. (56%)Shawn Shan; Wenxin Ding; Josephine Passananti; Haitao Zheng; Ben Y. Zhao
The Hidden Adversarial Vulnerabilities of Medical Federated Learning. (45%)Erfan Darzi; Florian Dubost; Nanna M. Sijtsema; P. M. A. van Ooijen
Adversarial Attacks on Fairness of Graph Neural Networks. (26%)Binchi Zhang; Yushun Dong; Chen Chen; Yada Zhu; Minnan Luo; Jundong Li
FLTracer: Accurate Poisoning Attack Provenance in Federated Learning. (26%)Xinyu Zhang; Qingyu Liu; Zhongjie Ba; Yuan Hong; Tianhang Zheng; Feng Lin; Li Lu; Kui Ren
Can We Trust the Similarity Measurement in Federated Learning? (15%)Zhilin Wang; Qin Hu; Xukai Zou
Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images. (4%)Logan Frank; Jim Davis
VOICE-ZEUS: Impersonating Zoom's E2EE-Protected Static Media and Textual Communications via Simple Voice Manipulations. (4%)Mashari Alatawi; Nitesh Saxena
2023-10-19
Generating Robust Adversarial Examples against Online Social Networks (OSNs). (98%)Jun Liu; Jiantao Zhou; Haiwei Wu; Weiwei Sun; Jinyu Tian
Recoverable Privacy-Preserving Image Classification through Noise-like Adversarial Examples. (98%)Jun Liu; Jiantao Zhou; Jinyu Tian; Weiwei Sun
Learn from the Past: A Proxy based Adversarial Defense Framework to Boost Robustness. (98%)Yaohua Liu; Jiaxin Gao; Zhu Liu; Xianghao Jiao; Xin Fan; Risheng Liu
OODRobustBench: benchmarking and analyzing adversarial robustness under distribution shift. (97%)Lin Li; Yifei Wang; Chawin Sitawarin; Michael Spratling
Automatic Hallucination Assessment for Aligned Large Language Models via Transferable Adversarial Attacks. (97%)Xiaodong Yu; Hao Cheng; Xiaodong Liu; Dan Roth; Jianfeng Gao
PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses. (97%)Chong Xiang; Tong Wu; Sihui Dai; Jonathan Petit; Suman Jana; Prateek Mittal
Prompt Injection Attacks and Defenses in LLM-Integrated Applications. (47%)Yupei Liu; Yuqi Jia; Runpeng Geng; Jinyuan Jia; Neil Zhenqiang Gong
Attack Prompt Generation for Red Teaming and Defending Large Language Models. (15%)Boyi Deng; Wenjie Wang; Fuli Feng; Yang Deng; Qifan Wang; Xiangnan He
SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models. (5%)Boyang Zhang; Zheng Li; Ziqing Yang; Xinlei He; Michael Backes; Mario Fritz; Yang Zhang
To grok or not to grok: Disentangling generalization and memorization on corrupted algorithmic datasets. (1%)Darshil Doshi; Aritra Das; Tianyu He; Andrey Gromov
Detecting Shared Data Manipulation in Distributed Optimization Algorithms. (1%)Mohannad Alkhraijah; Rachel Harris; Samuel Litchfield; David Huggins; Daniel K. Molzahn
Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models. (1%)Jianwei Li; Qi Lei; Wei Cheng; Dongkuan Xu
2023-10-18
Exploring Decision-based Black-box Attacks on Face Forgery Detection. (99%)Zhaoyu Chen; Bo Li; Kaixun Jiang; Shuang Wu; Shouhong Ding; Wenqiang Zhang
Segment Anything Meets Universal Adversarial Perturbation. (99%)Dongshen Han; Sheng Zheng; Chaoning Zhang
IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks. (99%)Yue Cao; Tianlin Li; Xiaofeng Cao; Ivor Tsang; Yang Liu; Qing Guo
Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights. (99%)Zhengyu Zhao; Hanwei Zhang; Renjue Li; Ronan Sicre; Laurent Amsaleg; Michael Backes; Qi Li; Chao Shen
Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm. (99%)S. M. Fazle Rabby Labib; Joyanta Jyoti Mondal; Meem Arafat Manab
Malicious Agent Detection for Robust Multi-Agent Collaborative Perception. (87%)Yangheng Zhao; Zhen Xiang; Sheng Yin; Xianghe Pang; Siheng Chen; Yanfeng Wang
Black-Box Training Data Identification in GANs via Detector Networks. (82%)Lukman Olagoke; Salil Vadhan; Seth Neel
Adversarial Training for Physics-Informed Neural Networks. (81%)Yao Li; Shengzhu Shi; Zhichang Guo; Boying Wu
REVAMP: Automated Simulations of Adversarial Attacks on Arbitrary Objects in Realistic Scenes. (80%)Matthew Hull; Zijie J. Wang; Duen Horng Chau
Quantifying Privacy Risks of Prompts in Visual Prompt Learning. (76%)Yixin Wu; Rui Wen; Michael Backes; Pascal Berrang; Mathias Humbert; Yun Shen; Yang Zhang
To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now. (47%)Yimeng Zhang; Jinghan Jia; Xin Chen; Aochuan Chen; Yihua Zhang; Jiancheng Liu; Ke Ding; Sijia Liu
CAT: Closed-loop Adversarial Training for Safe End-to-End Driving. (2%)Linrui Zhang; Zhenghao Peng; Quanyi Li; Bolei Zhou
PrivInfer: Privacy-Preserving Inference for Black-box Large Language Model. (1%)Meng Tong; Kejiang Chen; Yuang Qi; Jie Zhang; Weiming Zhang; Nenghai Yu
2023-10-17
The Efficacy of Transformer-based Adversarial Attacks in Security Domains. (99%)Kunyang Li; Kyle Domico; Jean-Charles Noirot Ferrand; Patrick McDaniel
Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning. (93%)Taejin Kim; Jiarui Li; Shubhranshu Singh; Nikhil Madaan; Carlee Joe-Wong
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks. (15%)Jun Xia; Zhihao Yue; Yingbo Zhou; Zhiwei Ling; Xian Wei; Mingsong Chen
Generalizability of CNN Architectures for Face Morph Presentation Attack. (1%)Sherko R. HmaSalah; Aras Asaad
2023-10-16
Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks. (98%)Erfan Shayegani; Md Abdullah Al Mamun; Yu Fu; Pedram Zaree; Yue Dong; Nael Abu-Ghazaleh
Regularization properties of adversarially-trained linear regression. (92%)Antônio H. Ribeiro; Dave Zachariah; Francis Bach; Thomas B. Schön
Fast Adversarial Label-Flipping Attack on Tabular Data. (84%)Xinglong Chang; Gillian Dobbie; Jörg Wicker
A Non-monotonic Smooth Activation Function. (83%)Koushik Biswas; Meghana Karri; Ulaş Bağcı
Quantifying Assistive Robustness Via the Natural-Adversarial Frontier. (68%)Jerry Zhi-Yang He; Zackory Erickson; Daniel S. Brown; Anca D. Dragan
A Comprehensive Study of Privacy Risks in Curriculum Learning. (67%)Joann Qiongna Chen; Xinlei He; Zheng Li; Yang Zhang; Zhou Li
DANAA: Towards transferable attacks with double adversarial neuron attribution. (26%)Zhibo Jin; Zhiyu Zhu; Xinyi Wang; Jiayu Zhang; Jun Shen; Huaming Chen
Demystifying Poisoning Backdoor Attacks from a Statistical Perspective. (9%)Ganghua Wang; Xun Xian; Jayanth Srinivasa; Ashish Kundu; Xuan Bi; Mingyi Hong; Jie Ding
Prompt Packer: Deceiving LLMs through Compositional Instruction with Hidden Attacks. (4%)Shuyu Jiang; Xingshu Chen; Rui Tang
Passive Inference Attacks on Split Learning via Adversarial Regularization. (3%)Xiaochen Zhu; Xinjian Luo; Yuncheng Wu; Yangfan Jiang; Xiaokui Xiao; Beng Chin Ooi
Robust Multi-Agent Reinforcement Learning via Adversarial Regularization: Theoretical Foundation and Stable Algorithms. (3%)Alexander Bukharin; Yan Li; Yue Yu; Qingru Zhang; Zhehui Chen; Simiao Zuo; Chao Zhang; Songan Zhang; Tuo Zhao
On the Transferability of Learning Models for Semantic Segmentation for Remote Sensing Data. (2%)Rongjun Qin; Guixiang Zhang; Yang Tang
Orthogonal Uncertainty Representation of Data Manifold for Robust Long-Tailed Learning. (1%)Yanbiao Ma; Licheng Jiao; Fang Liu; Shuyuan Yang; Xu Liu; Lingling Li
Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts. (1%)Christina Chance; Da Yin; Dakuo Wang; Kai-Wei Chang
2023-10-15
Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning. (99%)Yulong Yang; Chenhao Lin; Xiang Ji; Qiwei Tian; Qian Li; Hongshan Yang; Zhibo Wang; Chao Shen
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack. (99%)Renyang Liu; Jinhong Zhang; Kwok-Yan Lam; Jun Zhao; Wei Zhou
AFLOW: Developing Adversarial Examples under Extremely Noise-limited Settings. (99%)Renyang Liu; Jinhong Zhang; Haoran Li; Jin Zhang; Yuanyu Wang; Wei Zhou
Black-box Targeted Adversarial Attack on Segment Anything (SAM). (99%)Sheng Zheng; Chaoning Zhang
Evading Detection Actively: Toward Anti-Forensics against Forgery Localization. (97%)Long Zhuo; Shenghai Luo; Shunquan Tan; Han Chen; Bin Li; Jiwu Huang
Explore the Effect of Data Selection on Poison Efficiency in Backdoor Attacks. (61%)Ziqiang Li; Pengfei Xia; Hong Sun; Yueqi Zeng; Wei Zhang; Bin Li
Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? (5%)Yu-Lin Tsai; Chia-Yi Hsu; Chulin Xie; Chih-Hsun Lin; Jia-You Chen; Bo Li; Pin-Yu Chen; Chia-Mu Yu; Chun-Ying Huang
VFLAIR: A Research Library and Benchmark for Vertical Federated Learning. (3%)Tianyuan Zou; Zixuan Gu; Yu He; Hideaki Takahashi; Yang Liu; Guangnan Ye; Ya-Qin Zhang
2023-10-14
BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries. (98%)Wenjie Lv; Zhen Wang; Yitao Zheng; Zhehua Zhong; Qi Xuan; Tianyi Chen
2023-10-13
Is Certifying $\ell_p$ Robustness Still Worthwhile? (99%)Ravi Mangal; Klas Leino; Zifan Wang; Kai Hu; Weicheng Yu; Corina Pasareanu; Anupam Datta; Matt Fredrikson
User Inference Attacks on Large Language Models. (16%)Nikhil Kandpal; Krishna Pillutla; Alina Oprea; Peter Kairouz; Christopher A. Choquette-Choo; Zheng Xu
On the Over-Memorization During Natural, Robust and Catastrophic Overfitting. (1%)Runqi Lin; Chaojian Yu; Bo Han; Tongliang Liu
2023-10-12
Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks. (99%)Giorgio Piras; Maura Pintor; Ambra Demontis; Battista Biggio
Concealed Electronic Countermeasures of Radar Signal with Adversarial Examples. (93%)Ruinan Ma; Canjie Zhu; Mingfeng Lu; Yunjie Li; Yu-an Tan; Ruibin Zhang; Ran Tao
Attacks Meet Interpretability (AmI) Evaluation and Findings. (92%)Qian Ma; Ziping Ye; Shagufta Mehnaz
Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization. (68%)Giuseppe Floris; Raffaele Mura; Luca Scionis; Giorgio Piras; Maura Pintor; Ambra Demontis; Battista Biggio
Fed-Safe: Securing Federated Learning in Healthcare Against Adversarial Attacks. (64%)Erfan Darzi; Nanna M. Sijtsema; P. M. A. van Ooijen
Provably Robust Cost-Sensitive Learning via Randomized Smoothing. (45%)Yuan Xin; Michael Backes; Xiao Zhang
Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders. (31%)Jan Dubiński; Stanisław Pawlak; Franziska Boenisch; Tomasz Trzciński; Adam Dziedzic
Sentinel: An Aggregation Function to Secure Decentralized Federated Learning. (11%)Chao Feng; Alberto Huertas Celdran; Janosch Baltensperger; Enrique Tomas Martinez Beltran; Gerome Bovet; Burkhard Stiller
Investigating the Robustness and Properties of Detection Transformers (DETR) Toward Difficult Images. (9%)Zhao Ning Zou; Yuhang Zhang; Robert Wijaya
Polynomial Time Cryptanalytic Extraction of Neural Network Models. (3%)Adi Shamir; Isaac Canales-Martinez; Anna Hambitzer; Jorge Chavez-Saab; Francisco Rodriguez-Henriquez; Nitin Satpute
Defending Our Privacy With Backdoors. (3%)Dominik Hintersdorf; Lukas Struppek; Daniel Neider; Kristian Kersting
SEE-OoD: Supervised Exploration For Enhanced Out-of-Distribution Detection. (1%)Xiaoyang Song; Wenbo Sun; Maher Nouiehed; Raed Al Kontar; Judy Jin
XAI Benchmark for Visual Explanation. (1%)Yifei Zhang; Siyi Gu; James Song; Bo Pan; Liang Zhao
Jailbreaking Black Box Large Language Models in Twenty Queries. (1%)Patrick Chao; Alexander Robey; Edgar Dobriban; Hamed Hassani; George J. Pappas; Eric Wong
2023-10-11
Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models. (99%)Renyang Liu; Wei Zhou; Tianwei Zhang; Kangjie Chen; Jun Zhao; Kwok-Yan Lam
Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches. (89%)Linbo Liu; Trong Nghia Hoang; Lam M. Nguyen; Tsui-Wei Weng
An Adversarial Example for Direct Logit Attribution: Memory Management in gelu-4l. (13%)James Dao; Yeu-Tong Lao; Can Rager; Jett Janiak
Prompt Backdoors in Visual Prompt Learning. (11%)Hai Huang; Zhengyu Zhao; Michael Backes; Yun Shen; Yang Zhang
Why Train More? Effective and Efficient Membership Inference via Memorization. (10%)Jihye Choi; Shruti Tople; Varun Chandrasekaran; Somesh Jha
Towards Causal Deep Learning for Vulnerability Detection. (4%)Md Mahbubur Rahman; Ira Ceka; Chengzhi Mao; Saikat Chakraborty; Baishakhi Ray; Wei Le
Deep Reinforcement Learning for Autonomous Cyber Operations: A Survey. (3%)Gregory Palmer; Chris Parry; Daniel J. B. Harrold; Chris Willis
2023-10-10
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks. (99%)Yang Wang; Bo Dong; Ke Xu; Haiyin Piao; Yufei Ding; Baocai Yin; Xin Yang
My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection. (98%)Lanjun Wang; Xinran Qiao; Yanwei Xie; Weizhi Nie; Yongdong Zhang; Anan Liu
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach. (83%)Kai Zhao; Qiyu Kang; Yang Song; Rui She; Sijie Wang; Wee Peng Tay
Adversarial optimization leads to over-optimistic security-constrained dispatch, but sampling can help. (76%)Charles Dawson; Chuchu Fan
No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML. (62%)Ziqi Zhang; Chen Gong; Yifeng Cai; Yuanyuan Yuan; Bingyan Liu; Ding Li; Yao Guo; Xiangqun Chen
Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks. (45%)Anastasia Antsiferova; Khaled Abud; Aleksandr Gushchin; Sergey Lavrushkin; Ekaterina Shumitskaya; Maksim Velikanov; Dmitriy Vatolin
GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation. (22%)Yixin Liu; Chenrui Fan; Xun Chen; Pan Zhou; Lichao Sun
Latent Diffusion Counterfactual Explanations. (5%)Karim Farid; Simon Schrodi; Max Argus; Thomas Brox
FTFT: efficient and robust Fine-Tuning by transFerring Training dynamics. (2%)Yupei Du; Albert Gatt; Dong Nguyen
Investigating the Adversarial Robustness of Density Estimation Using the Probability Flow ODE. (2%)Marius Arvinte; Cory Cornelius; Jason Martin; Nageen Himayat
Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations. (1%)Zeming Wei; Yifei Wang; Yisen Wang
2023-10-09
PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization. (92%)Jiancong Xiao; Ruoyu Sun; Zhi- Quan Luo
Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand. (22%)Junfeng Guo; Yiming Li; Lixu Wang; Shu-Tao Xia; Heng Huang; Cong Liu; Bo Li
Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach. (5%)Shaopeng Fu; Di Wang
Exploring adversarial attacks in federated learning for medical imaging. (2%)Erfan Darzi; Florian Dubost; N. M. Sijtsema; P. M. A. van Ooijen
2023-10-08
An Initial Investigation of Neural Replay Simulator for Over-the-Air Adversarial Perturbations to Automatic Speaker Verification. (99%)Jiaqi Li; Li Wang; Liumeng Xue; Lei Wang; Zhizheng Wu
BRAINTEASER: Lateral Thinking Puzzles for Large Language Models. (26%)Yifan Jiang; Filip Ilievski; Kaixin Ma; Zhivar Sourati
2023-10-07
IPMix: Label-Preserving Data Augmentation Method for Training Robust Classifiers. (76%)Zhenglin Huang; Xianan Bao; Na Zhang; Qingqi Zhang; Xiaomei Tu; Biao Wu; Xi Yang
2023-10-06
VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models. (98%)Ziyi Yin; Muchao Ye; Tianrong Zhang; Tianyu Du; Jinguo Zhu; Han Liu; Jinghui Chen; Ting Wang; Fenglong Ma
2023-10-05
OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks. (99%)Ofir Bar Tal; Adi Haviv; Amit H. Bermano
Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System. (99%)Khushnaseeb Roshan; Aasim Zafar; Sheikh Burhan Ul Haque
Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria. (99%)Nuoyan Zhou; Nannan Wang; Decheng Liu; Dawei Zhou; Xinbo Gao
An Integrated Algorithm for Robust and Imperceptible Audio Adversarial Examples. (98%)Armin Ettenhofer; Jan-Philipp Schulze; Karla Pizzi
Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally. (98%)Shawqi Al-Maliki; Adnan Qayyum; Hassan Ali; Mohamed Abdallah; Junaid Qadir; Dinh Thai Hoang; Dusit Niyato; Ala Al-Fuqaha
SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks. (87%)Alexander Robey; Eric Wong; Hamed Hassani; George J. Pappas
Targeted Adversarial Attacks on Generalizable Neural Radiance Fields. (56%)Andras Horvath; Csaba M. Jozsa
Certification of Deep Learning Models for Medical Image Segmentation. (15%)Othmane Laousy; Alexandre Araujo; Guillaume Chassagnon; Nikos Paragios; Marie-Pierre Revel; Maria Vakalopoulou
Certifiably Robust Graph Contrastive Learning. (5%)Minhua Lin; Teng Xiao; Enyan Dai; Xiang Zhang; Suhang Wang
Towards Robust and Generalizable Training: An Empirical Study of Noisy Slot Filling for Input Perturbations. (2%)Jiachi Liu; Liwen Wang; Guanting Dong; Xiaoshuai Song; Zechen Wang; Zhengyang Wang; Shanglin Lei; Jinzheng Zhao; Keqing He; Bo Xiao; Weiran Xu
2023-10-04
Optimizing Key-Selection for Face-based One-Time Biometrics via Morphing. (98%)Daile Osorio-Roig; Mahdi Ghafourian; Christian Rathgeb; Ruben Vera-Rodriguez; Christoph Busch; Julian Fierrez
Misusing Tools in Large Language Models With Visual Adversarial Examples. (97%)Xiaohan Fu; Zihan Wang; Shuheng Li; Rajesh K. Gupta; Niloofar Mireshghallah; Taylor Berg-Kirkpatrick; Earlence Fernandes
Burning the Adversarial Bridges: Robust Windows Malware Detection Against Binary-level Mutations. (82%)Ahmed Abusnaina; Yizhen Wang; Sunpreet Arora; Ke Wang; Mihai Christodorescu; David Mohaisen
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors. (81%)Biagio Montaruli; Luca Demetrio; Maura Pintor; Luca Compagna; Davide Balzarotti; Battista Biggio
Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation. (10%)Yihan Wu; Brandon Y. Feng; Heng Huang
2023-10-03
Splitting the Difference on Adversarial Training. (99%)Matan Levi; Aryeh Kontorovich
DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training. (96%)Aochuan Chen; Yimeng Zhang; Jinghan Jia; James Diffenderfer; Jiancheng Liu; Konstantinos Parasyris; Yihua Zhang; Zheng Zhang; Bhavya Kailkhura; Sijia Liu
SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers. (86%)KL Navaneet; Soroush Abbasi Koohpayegani; Essam Sleiman; Hamed Pirsiavash
Towards Stable Backdoor Purification through Feature Shift Tuning. (83%)Rui Min; Zeyu Qin; Li Shen; Minhao Cheng
Jailbreaker in Jail: Moving Target Defense for Large Language Models. (73%)Bocheng Chen; Advait Paliwal; Qiben Yan
Beyond Labeling Oracles: What does it mean to steal ML models? (47%)Avital Shafran; Ilia Shumailov; Murat A. Erdogdu; Nicolas Papernot
Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness. (13%)Yanzhao Wu; Ka-Ho Chow; Wenqi Wei; Ling Liu
FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks. (11%)Jorge Castillo; Phillip Rieger; Hossein Fereidooni; Qian Chen; Ahmad Sadeghi
AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework. (3%)Xilie Xu; Jingfeng Zhang; Mohan Kankanhalli
2023-10-02
Fooling the Textual Fooler via Randomizing Latent Representations. (99%)Duy C. Hoang; Quang H. Nguyen; Saurav Manchanda; MinLong Peng; Kok-Seng Wong; Khoa D. Doan
Adversarial Client Detection via Non-parametric Subspace Monitoring in the Internet of Federated Things. (92%)Xianjian Xie; Xiaochen Xian; Dan Li; Andi Wang
LoFT: Local Proxy Fine-tuning For Improving Transferability Of Adversarial Attacks Against Large Language Model. (87%)Muhammad Ahmed Shah; Roshan Sharma; Hira Dhamyal; Raphael Olivier; Ankit Shah; Joseph Konan; Dareen Alharthi; Hazim T Bukhari; Massa Baali; Soham Deshmukh; Michael Kuhlmann; Bhiksha Raj; Rita Singh
LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples. (87%)Jia-Yu Yao; Kun-Peng Ning; Zhen-Hui Liu; Mu-Nan Ning; Li Yuan
Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models. (13%)Zhou Yang; Zhipeng Zhao; Chenyu Wang; Jieke Shi; Dongsun Kim; Donggyun Han; David Lo
Toward effective protection against diffusion based mimicry through score distillation. (3%)Haotian Xue; Chumeng Liang; Xiaoyu Wu; Yongxin Chen
Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations. (1%)Yongshuo Zong; Tingyang Yu; Bingchen Zhao; Ruchika Chavhan; Timothy Hospedales
2023-10-01
A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks. (99%)Yanjie Li; Bin Xie; Songtao Guo; Yuanyuan Yang; Bin Xiao
Counterfactual Image Generation for adversarially robust and interpretable Classifiers. (96%)Rafael Bischof; Florian Scheidegger; Michael A. Kraus; A. Cristiano I. Malossi
On the Onset of Robust Overfitting in Adversarial Training. (64%)Chaojian Yu; Xiaolong Shi; Jun Yu; Bo Han; Tongliang Liu
Understanding Adversarial Transferability in Federated Learning. (64%)Yijiang Li; Ying Gao; Haohan Wang
GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to Pre-trained Encoders in Self-supervised Learning. (61%)Qiannan Wang; Changchun Yin; Zhe Liu; Liming Fang; Run Wang; Chenhao Lin
Fewer is More: Trojan Attacks on Parameter-Efficient Fine-Tuning. (9%)Lauren Hong; Ting Wang
Can Pre-trained Networks Detect Familiar Out-of-Distribution Data? (1%)Atsuyuki Miyai; Qing Yu; Go Irie; Kiyoharu Aizawa
How well does LLM generate security tests? (1%)Ying Zhang; Wenjia Song; Zhengjie Ji; Danfeng (Daphne) Yao; Na Meng
2023-09-30
Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks. (99%)Quang H. Nguyen; Yingjie Lao; Tung Pham; Kok-Seng Wong; Khoa D. Doan
Human-Producible Adversarial Examples. (98%)David Khachaturov; Yue Gao; Ilia Shumailov; Robert Mullins; Ross Anderson; Kassem Fawaz
Black-box Attacks on Image Activity Prediction and its Natural Language Explanations. (98%)Alina Elena Baia; Valentina Poggioni; Andrea Cavallaro
Horizontal Class Backdoor to Deep Learning. (56%)Hua Ma; Shang Wang; Yansong Gao
Refutation of Shapley Values for XAI -- Additional Evidence. (8%)Xuanxiang Huang; Joao Marques-Silva
2023-09-29
Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks. (99%)Mehrdad Saberi; Vinu Sankar Sadasivan; Keivan Rezaei; Aounon Kumar; Atoosa Chegini; Wenxiao Wang; Soheil Feizi
Efficient Biologically Plausible Adversarial Training. (98%)Matilde Tristany Farinha; Thomas Ortner; Giorgia Dellaferrera; Benjamin Grewe; Angeliki Pantazi
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks. (96%)Vaidehi Patil; Peter Hase; Mohit Bansal
On Continuity of Robust and Accurate Classifiers. (93%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Adversarial Machine Learning in Latent Representations of Neural Networks. (93%)Milin Zhang; Mohammad Abdi; Francesco Restuccia
Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization. (92%)Mahyar Fazlyab; Taha Entesari; Aniket Roy; Rama Chellappa
Toward Robust Recommendation via Real-time Vicinal Defense. (82%)Yichang Xu; Chenwang Wu; Defu Lian
Adversarial Explainability: Utilizing Explainable Machine Learning in Bypassing IoT Botnet Detection Systems. (31%)Mohammed M. Alani; Atefeh Mashatan; Ali Miri
Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study. (13%)Myeongseob Ko; Ming Jin; Chenguang Wang; Ruoxi Jia
Distributed Resilient Control of DC Microgrids Under Generally Unbounded FDI Attacks. (1%)Yichao Wang; Mohamadamin Rajabinezhad; Omar A. Beg; Shan Zuo
Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning. (1%)Hongsheng Hu; Xuyun Zhang; Zoran Salcic; Lichao Sun; Kim-Kwang Raymond Choo; Gillian Dobbie
2023-09-28
Investigating Human-Identifiable Features Hidden in Adversarial Perturbations. (98%)Dennis Y. Menn; Tzu-hsun Feng; Sriram Vishwanath; Hung-yi Lee
Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks. (98%)Huihui Gong; Minjing Dong; Siqi Ma; Seyit Camtepe; Surya Nepal; Chang Xu
Towards Poisoning Fair Representations. (70%)Tianci Liu; Haoyu Wang; Feijie Wu; Hengtong Zhang; Pan Li; Lu Su; Jing Gao
On the Trade-offs between Adversarial Robustness and Actionable Explanations. (68%)Satyapriya Krishna; Chirag Agarwal; Himabindu Lakkaraju
Post-Training Overfitting Mitigation in DNN Classifiers. (41%)Hang Wang; David J. Miller; George Kesidis
The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing. (26%)Blaise Delattre; Alexandre Araujo; Quentin Barthélemy; Alexandre Allauzen
Leveraging Optimization for Adaptive Attacks on Image Watermarks. (10%)Nils Lukas; Abdulrahman Diaa; Lucas Fenaux; Florian Kerschbaum
Random and Safe Cache Architecture to Defeat Cache Timing Attacks. (9%)Guangyuan Hu; Ruby B. Lee
Robust Offline Reinforcement Learning -- Certify the Confidence Interval. (4%)Jiarui Yao; Simon Shaolei Du
A Primer on Bayesian Neural Networks: Review and Debates. (2%)Julyan Arbel; Konstantinos Pitas; Mariia Vladimirova; Vincent Fortuin
2023-09-27
Adversarial Examples Might be Avoidable: The Role of Data Concentration in Adversarial Robustness. (95%)Ambar Pal; Jeremias Sulam; René Vidal
Defending Against Physical Adversarial Patch Attacks on Infrared Human Detection. (92%)Lukas Strack; Futa Waseda; Huy H. Nguyen; Yinqiang Zheng; Isao Echizen
On Computational Entanglement and Its Interpretation in Adversarial Machine Learning. (92%)YenLung Lai; Xingbo Dong; Zhe Jin
Automatic Feature Fairness in Recommendation via Adversaries. (33%)Hengchang Hu; Yiming Cao; Zhankui He; Samson Tan; Min-Yen Kan
Generating Transferable Adversarial Simulation Scenarios for Self-Driving via Neural Rendering. (11%)Yasasa Abeysirigoonawardena; Kevin Xie; Chuhan Chen; Salar Hosseini; Ruiting Chen; Ruiqi Wang; Florian Shkurti
Breaking NoC Anonymity using Flow Correlation Attack. (2%)Hansika Weerasena; Zhixin Pan; Khushboo Rani; Prabhat Mishra
Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification. (1%)Mahmoud Nazzal; Nura Aljaafari; Ahmed Sawalmeh; Abdallah Khreishah; Muhammad Anan; Abdulelah Algosaibi; Mohammed Alnaeem; Adel Aldalbahi; Abdulaziz Alhumam; Conrado P. Vizcarra; Shadan Alhamed
Towards the Vulnerability of Watermarking Artificial Intelligence Generated Content. (1%)Guanlin Li; Yifei Chen; Jie Zhang; Jiwei Li; Shangwei Guo; Tianwei Zhang
2023-09-26
Structure Invariant Transformation for better Adversarial Transferability. (99%)Xiaosen Wang; Zeliang Zhang; Jianping Zhang
Privacy-preserving and Privacy-attacking Approaches for Speech and Audio -- A Survey. (16%)Yuchen Liu; Apu Kapadia; Donald Williamson
Neural Stochastic Differential Equations for Robust and Explainable Analysis of Electromagnetic Unintended Radiated Emissions. (2%)Sumit Kumar Jha; Susmit Jha; Rickard Ewetz; Alvaro Velasquez
Collaborative Watermarking for Adversarial Speech Synthesis. (1%)Lauri Juvela (Aalto University, Finland); Xin Wang (National Institute of Informatics, Japan)
2023-09-25
DifAttack: Query-Efficient Black-Box Attack via Disentangled Feature Space. (99%)Jun Liu; Jiantao Zhou; Jiandian Zeng; Jinyu Tian
Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents. (98%)Foozhan Ataiefard; Hadi Hemmati
SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution. (1%)Zhongjie Ba; Jieming Zhong; Jiachen Lei; Peng Cheng; Qinglong Wang; Zhan Qin; Zhibo Wang; Kui Ren
2023-09-24
Adversarial Attacks on Video Object Segmentation with Hard Region Discovery. (99%)Ping Li; Yu Zhang; Li Yuan; Jian Zhao; Xianghua Xu; Xiaoqin Zhang
Vulnerabilities in Video Quality Assessment Models: The Challenge of Adversarial Attacks. (98%)Ao-Xiang Zhang; Yu Ran; Weixuan Tang; Yuan-Gen Wang
On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors. (86%)Trong-Nghia To; Danh Le Kim; Do Thi Thu Hien; Nghi Hoang Khoa; Hien Do Hoang; Phan The Duy; Van-Hau Pham
Benchmarking Local Robustness of High-Accuracy Binary Neural Networks for Enhanced Traffic Sign Recognition. (80%)Andreea Postovan; Mădălina Eraşcu
Projected Randomized Smoothing for Certified Adversarial Robustness. (76%)Samuel Pfrommer; Brendon G. Anderson; Somayeh Sojoudi
Combining Two Adversarial Attacks Against Person Re-Identification Systems. (73%)Eduardo de O. Andrade; Igor Garcia Ballhausen Sampaio; Joris Guérin; José Viterbo
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models. (2%)Minghang Deng; Zhong Zhang; Junming Shao
2023-09-23
Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. (61%)Zhaohan Xi; Tianyu Du; Changjiang Li; Ren Pang; Shouling Ji; Jinghui Chen; Fenglong Ma; Ting Wang
Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers. (1%)Aryaman Gupta; Kaustav Chakraborty; Somil Bansal
Moving Target Defense based Secured Network Slicing System in the O-RAN Architecture. (1%)Mojdeh Karbalaee Motalleb; Chafika Benzaïd; Tarik Taleb; Vahid Shah-Mansouri
2023-09-22
RBFormer: Improve Adversarial Robustness of Transformer by Robust Bias. (99%)Hao Cheng; Jinhao Duan; Hui Li; Lyutianyang Zhang; Jiahang Cao; Ping Wang; Jize Zhang; Kaidi Xu; Renjing Xu
Spatial-frequency channels, shape bias, and adversarial robustness. (69%)Ajay Subramanian; Elena Sizikova; Najib J. Majaj; Denis G. Pelli
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks. (69%)Heitor R. Guimarães; Arthur Pimentel; Anderson Avila; Tiago H. Falk
Understanding Deep Gradient Leakage via Inversion Influence Functions. (11%)Haobo Zhang; Junyuan Hong; Yuyang Deng; Mehrdad Mahdavi; Jiayu Zhou
Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations. (10%)Hanjiang Hu; Zuxin Liu; Linyi Li; Jiacheng Zhu; Ding Zhao
Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception? (5%)Xiaoxiao Sun; Nidham Gazagnadou; Vivek Sharma; Lingjuan Lyu; Hongdong Li; Liang Zheng
Expressive variational quantum circuits provide inherent privacy in federated learning. (1%)Niraj Kumar; Jamie Heredge; Changhao Li; Shaltiel Eloul; Shree Hari Sureshbabu; Marco Pistoia
On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures. (1%)Qingzhao Zhang; Shuowei Jin; Ruiyang Zhu; Jiachen Sun; Xumiao Zhang; Qi Alfred Chen; Z. Morley Mao
2023-09-21
Improving Machine Learning Robustness via Adversarial Training. (99%)Long Dang; Thushari Hapuarachchi; Kaiqi Xiong; Jing Lin
A Chinese Prompt Attack Dataset for LLMs with Evil Content. (62%)Chengyuan Liu; Fubang Zhao; Lizhi Qing; Yangyang Kang; Changlong Sun; Kun Kuang; Fei Wu
HANS, are you clever? Clever Hans Effect Analysis of Neural Systems. (45%)Leonardo Ranaldi; Fabio Massimo Zanzotto
On the Relationship between Skill Neurons and Robustness in Prompt Tuning. (10%)Leon Ackermann; Xenia Ohmer
DeepTheft: Stealing DNN Model Architectures through Power Side Channel. (1%)Yansong Gao; Huming Qiu; Zhi Zhang; Binghui Wang; Hua Ma; Alsharif Abuadbba; Minhui Xue; Anmin Fu; Surya Nepal
2023-09-20
How Robust is Google's Bard to Adversarial Image Attacks? (99%)Yinpeng Dong; Huanran Chen; Jiawei Chen; Zhengwei Fang; Xiao Yang; Yichi Zhang; Yu Tian; Hang Su; Jun Zhu
PRAT: PRofiling Adversarial aTtacks. (99%)Rahul Ambati; Naveed Akhtar; Ajmal Mian; Yogesh Singh Rawat
When to Trust AI: Advances and Challenges for Certification of Neural Networks. (64%)Marta Kwiatkowska; Xiyue Zhang
AudioFool: Fast, Universal and synchronization-free Cross-Domain Attack on Speech Recognition. (54%)Mohamad Fakih; Rouwaida Kanj; Fadi Kurdahi; Mohammed E. Fouda
Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation. (54%)Krishna Kanth Nakka; Mathieu Salzmann
Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection. (5%)Tran Duc Luong; Vuong Minh Tien; Nguyen Huu Quyen; Do Thi Thu Hien; Phan The Duy; Van-Hau Pham
Compilation as a Defense: Enhancing DL Model Attack Robustness via Tensor Optimization. (2%)Stefan Trawicki; William Hackett; Lewis Birch; Neeraj Suri; Peter Garraghan
2023-09-19
Language Guided Adversarial Purification. (99%)Himanshu Singh; A V Subramanyam
What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples. (99%)Shakila Mahjabin Tonni; Mark Dras
Adversarial Attacks Against Uncertainty Quantification. (99%)Emanuele Ledda; Daniele Angioni; Giorgio Piras; Giorgio Fumera; Battista Biggio; Fabio Roli
Model Leeching: An Extraction Attack Targeting LLMs. (76%)Lewis Birch; William Hackett; Stefan Trawicki; Neeraj Suri; Peter Garraghan
Information Leakage from Data Updates in Machine Learning Models. (16%)Tian Hui; Farhad Farokhi; Olga Ohrimenko
Robin: A Novel Method to Produce Robust Interpreters for Deep Learning-Based Code Classifiers. (16%)Zhen Li; Ruqian Zhang; Deqing Zou; Ning Wang; Yating Li; Shouhuai Xu; Chen Chen; Hai Jin
SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks. (12%)Zizhen Liu; Weiyang He; Chip-Hong Chang; Jing Ye; Huawei Li; Xiaowei Li
It's Simplex! Disaggregating Measures to Improve Certified Robustness. (11%)Andrew C. Cullen; Paul Montague; Shijie Liu; Sarah M. Erfani; Benjamin I. P. Rubinstein
Nebula: Self-Attention for Dynamic Malware Analysis. (5%)Dmitrijs Trizna; Luca Demetrio; Battista Biggio; Fabio Roli
Extreme Image Transformations Facilitate Robust Latent Object Representations. (1%)Girik Malik; Dakarai Crowder; Ennio Mingolla
2023-09-18
Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization. (99%)Huihui Gong; Minjing Dong; Siqi Ma; Seyit Camtepe; Surya Nepal; Chang Xu
Transferable Adversarial Attack on Image Tampering Localization. (99%)Yuqi Wang; Gang Cao; Zijie Lou; Haochen Zhu
Efficient Low-Rank GNN Defense Against Structural Attacks. (96%)Abdullah Alchihabi; Qing En; Yuhong Guo
Evaluating Adversarial Robustness with Expected Viable Performance. (45%)Ryan McCoppin; Colin Dawson; Sean M. Kennedy; Leslie M. Blaha
Dual Student Networks for Data-Free Model Stealing. (26%)James Beetham; Navid Kardan; Ajmal Mian; Mubarak Shah
Securing Fixed Neural Network Steganography. (5%)Zicong Luo; Sheng Li; Guobiao Li; Zhenxing Qian; Xinpeng Zhang
GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts. (4%)Jiahao Yu; Xingwei Lin; Zheng Yu; Xinyu Xing
Spoofing attack augmentation: can differently-trained attack models improve generalisation? (3%)Wanying Ge; Xin Wang; Junichi Yamagishi; Massimiliano Todisco; Nicholas Evans
Frame-to-Utterance Convergence: A Spectra-Temporal Approach for Unified Spoofing Detection. (1%)Awais Khan; Khalid Mahmood Malik; Shah Nawaz
2023-09-17
Reducing Adversarial Training Cost with Gradient Approximation. (99%)Huihui Gong; Shuo Yang; Siqi Ma; Seyit Camtepe; Surya Nepal; Chang Xu
Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM. (61%)Bochuan Cao; Yuanpu Cao; Lu Lin; Jinghui Chen
2023-09-16
Context-aware Adversarial Attack on Named Entity Recognition. (99%)Shuguang Chen; Leonardo Neves; Thamar Solorio
Inverse classification with logistic and softmax classifiers: efficient optimization. (56%)Miguel Á. Carreira-Perpiñán; Suryabhan Singh Hada
Robust Backdoor Attacks on Object Detection in Real World. (11%)Yaguan Qian; Boyuan Ji; Shuke He; Shenhui Huang; Xiang Ling; Bin Wang; Wei Wang
Conditional Mutual Information Constrained Deep Learning for Classification. (5%)En-Hui Yang; Shayan Mohajer Hamidi; Linfeng Ye; Renhao Tan; Beverly Yang
2023-09-15
Adversarial Attacks on Tables with Entity Swap. (92%)Aneta Koleva; Martin Ringsquandl; Volker Tresp
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks. (87%)Minh-Hao Van; Alycia N. Carey; Xintao Wu
A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services. (1%)Hongsheng Hu; Shuo Wang; Jiamin Chang; Haonan Zhong; Ruoxi Sun; Shuang Hao; Haojin Zhu; Minhui Xue
Distributionally Robust Post-hoc Classifiers under Prior Shifts. (1%)Jiaheng Wei; Harikrishna Narasimhan; Ehsan Amid; Wen-Sheng Chu; Yang Liu; Abhishek Kumar
2023-09-14
Unleashing the Adversarial Facet of Software Debloating. (98%)Do-Men Su; Mohannad Alhanahnah
SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems. (76%)Guangke Chen; Yedi Zhang; Fu Song
BAGEL: Backdoor Attacks against Federated Contrastive Learning. (16%)Yao Huang; Kongyang Chen; Jiannong Cao; Jiaxing Shen; Shaowei Wang; Yun Peng; Weilong Peng; Kechao Cai
What Matters to Enhance Traffic Rule Compliance of Imitation Learning for Automated Driving. (13%)Hongkuan Zhou; Aifen Sui; Wei Cao; Letian Shi
Physical Invisible Backdoor Based on Camera Imaging. (2%)Yusheng Guo; Nan Zhong; Zhenxing Qian; Xinpeng Zhang
M3Dsynth: A dataset of medical 3D images with AI-generated local manipulations. (1%)Giada Zingarini; Davide Cozzolino; Riccardo Corvi; Giovanni Poggi; Luisa Verdoliva
2023-09-13
Semantic Adversarial Attacks via Diffusion Models. (99%)Chenan Wang; Jinhao Duan; Chaowei Xiao; Edward Kim; Matthew Stamm; Kaidi Xu
Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks. (99%)Yang Zheng; Luca Demetrio; Antonio Emanuele Cinà; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Ambra Demontis; Battista Biggio; Fabio Roli
Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments. (99%)Simon Queyrut; Valerio Schiavoni; Pascal Felber
PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection. (99%)Hanqing Guo; Guangjing Wang; Yuanda Wang; Bocheng Chen; Qiben Yan; Li Xiao
APICom: Automatic API Completion via Prompt Learning and Adversarial Training-based Data Augmentation. (92%)Yafeng Gu; Yiheng Shen; Xiang Chen; Shaoyu Yang; Yiling Huang; Zhixiang Cao
RAIN: Your Language Models Can Align Themselves without Finetuning. (83%)Yuhui Li; Fangyun Wei; Jinjing Zhao; Chao Zhang; Hongyang Zhang
Differentiable JPEG: The Devil is in the Details. (70%)Christoph Reich; Biplob Debnath; Deep Patel; Srimat Chakradhar
Deep Nonparametric Convexified Filtering for Computational Photography, Image Synthesis and Adversarial Defense. (41%)Jianqiao Wangni
MASTERKEY: Practical Backdoor Attack Against Speaker Verification Systems. (38%)Hanqing Guo; Xun Chen; Junfeng Guo; Li Xiao; Qiben Yan
Client-side Gradient Inversion Against Federated Learning from Poisoning. (22%)Jiaheng Wei; Yanjun Zhang; Leo Yu Zhang; Chao Chen; Shirui Pan; Kok-Leong Ong; Jun Zhang; Yang Xiang
Safe Reinforcement Learning with Dual Robustness. (1%)Zeyang Li; Chuxiong Hu; Yunan Wang; Yujie Yang; Shengbo Eben Li
2023-09-12
Using Reed-Muller Codes for Classification with Rejection and Recovery. (99%)Daniel Fentham (University of Birmingham); David Parker (University of Oxford); Mark Ryan (University of Birmingham)
Certified Robust Models with Slack Control and Large Lipschitz Constants. (98%)Max Losch; David Stutz; Bernt Schiele; Mario Fritz
Exploring Non-additive Randomness on ViT against Query-Based Black-Box Attacks. (98%)Jindong Gu; Fangyun Wei; Philip Torr; Han Hu
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review. (61%)Pengzhou Cheng; Zongru Wu; Wei Du; Gongshen Liu
CToMP: A Cycle-task-oriented Memory Protection Scheme for Unmanned Systems. (8%)Chengyan Ma; Ning Xi; Di Lu; Yebo Feng; Jianfeng Ma
Language Models as Black-Box Optimizers for Vision-Language Models. (4%)Shihong Liu; Samuel Yu; Zhiqiu Lin; Deepak Pathak; Deva Ramanan
Unveiling Single-Bit-Flip Attacks on DNN Executables. (1%)Yanzuo Chen (The Hong Kong University of Science and Technology); Zhibo Liu (The Hong Kong University of Science and Technology); Yuanyuan Yuan (The Hong Kong University of Science and Technology); Sihang Hu (Huawei Technologies); Tianxiang Li (Huawei Technologies); Shuai Wang (The Hong Kong University of Science and Technology)
2023-09-11
Generalized Attacks on Face Verification Systems. (88%)Ehsan Nazari; Paula Branco; Guy-Vincent Jourdan
Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning. (76%)Gustavo Olague; Roberto Pineda; Gerardo Ibarra-Vazquez; Matthieu Olague; Axel Martinez; Sambit Bakshi; Jonathan Vargas; Isnardo Reducindo
Backdoor Attack through Machine Unlearning. (67%)Peixin Zhang; Jun Sun; Mingtian Tan; Xinyu Wang
Privacy Side Channels in Machine Learning Systems. (10%)Edoardo Debenedetti; Giorgio Severi; Nicholas Carlini; Christopher A. Choquette-Choo; Matthew Jagielski; Milad Nasr; Eric Wallace; Florian Tramèr
Divergences in Color Perception between Deep Neural Networks and Humans. (4%)Ethan O. Nadler; Elise Darragh-Ford; Bhargav Srinivasa Desikan; Christian Conaway; Mark Chu; Tasker Hull; Douglas Guilbeault
Catch You Everything Everywhere: Guarding Textual Inversion via Concept Watermarking. (1%)Weitao Feng; Jiyan He; Jie Zhang; Tianwei Zhang; Wenbo Zhou; Weiming Zhang; Nenghai Yu
Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs. (1%)Wenhua Cheng; Weiwei Zhang; Haihao Shen; Yiyang Cai; Xin He; Kaokao Lv
2023-09-10
Outlier Robust Adversarial Training. (98%)Shu Hu; Zhenhuan Yang; Xin Wang; Yiming Ying; Siwei Lyu
DAD++: Improved Data-free Test Time Adversarial Defense. (98%)Gaurav Kumar Nayak; Inder Khatri; Shubham Randive; Ruchit Rawal; Anirban Chakraborty
Machine Translation Models Stand Strong in the Face of Adversarial Attacks. (86%)Pavel Burnyshev; Elizaveta Kostenok; Alexey Zaytsev
Secure Set-Based State Estimation for Linear Systems under Adversarial Attacks on Sensors. (3%)Muhammad Umar B. Niazi; Michelle S. Chong; Amr Alanwar; Karl H. Johansson
2023-09-09
Towards Robust Model Watermark via Reducing Parametric Vulnerability. (8%)Guanhao Gan; Yiming Li; Dongxian Wu; Shu-Tao Xia
RecAD: Towards A Unified Library for Recommender Attack and Defense. (1%)Changsheng Wang; Jianbai Ye; Wenjie Wang; Chongming Gao; Fuli Feng; Xiangnan He
2023-09-08
Exploring Robust Features for Improving Adversarial Robustness. (99%)Hong Wang; Yuefan Deng; Shinjae Yoo; Yuewei Lin
ARRTOC: Adversarially Robust Real-Time Optimization and Control. (2%)Akhil Ahmed; Ehecatl Antonio del Rio-Chanona; Mehmet Mercangoz
Adversarial attacks on hybrid classical-quantum Deep Learning models for Histopathological Cancer Detection. (1%)Biswaraj Baral; Reek Majumdar; Bhavika Bhalgamiya; Taposh Dutta Roy
Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse. (1%)Edward A. Small; Jeffrey N. Clark; Christopher J. McWilliams; Kacper Sokol; Jeffrey Chan; Flora D. Salim; Raul Santos-Rodriguez
2023-09-07
How adversarial attacks can disrupt seemingly stable accurate classifiers. (99%)Oliver J. Sutton; Qinghua Zhou; Ivan Y. Tyukin; Alexander N. Gorban; Alexander Bastounis; Desmond J. Higham
Experimental Study of Adversarial Attacks on ML-based xApps in O-RAN. (99%)Naveen Naik Sapavath; Brian Kim; Kaushik Chowdhury; Vijay K Shah
Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences. (95%)Jeremiah Birrell; Mohammadreza Ebrahimi
DiffDefense: Defending against Adversarial Attacks via Diffusion Models. (80%)Hondamunige Prasanna Silva; Lorenzo Seidenari; Alberto Del Bimbo
One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning. (73%)Guohong Wang; Hua Ma; Yansong Gao; Alsharif Abuadbba; Zhi Zhang; Wei Kang; Said F. Al-Sarawib; Gongxuan Zhang; Derek Abbott
Promoting Fairness in GNNs: A Characterization of Stability. (1%)Yaning Jia; Chunhui Zhang
2023-09-06
Certifying LLM Safety against Adversarial Prompting. (86%)Aounon Kumar; Chirag Agarwal; Suraj Srinivas; Soheil Feizi; Hima Lakkaraju
SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time Series. (84%)Chang George Dong; Liangwei Nathan Zheng; Weitong Chen; Wei Emma Zhang; Lin Yue
Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy. (68%)Zikai Zhang; Rui Hu
J-Guard: Journalism Guided Adversarially Robust Detection of AI-generated News. (38%)Tharindu Kumarage; Amrita Bhattacharjee; Djordje Padejski; Kristy Roschke; Dan Gillmor; Scott Ruston; Huan Liu; Joshua Garland
MIRA: Cracking Black-box Watermarking on Deep Neural Networks via Model Inversion-based Removal Attacks. (22%)Yifan Lu; Wenxuan Li; Mi Zhang; Xudong Pan; Min Yang
My Art My Choice: Adversarial Protection Against Unruly AI. (2%)Anthony Rhodes; Ram Bhagat; Umur Aybars Ciftci; Ilke Demir
VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints. (1%)Aoting Hu; Zhigang Lu; Renjie Xie; Minhui Xue
A Theoretical Explanation of Activation Sparsity through Flat Minima and Adversarial Robustness. (1%)Ze Peng; Lei Qi; Yinghuan Shi; Yang Gao
2023-09-05
The Adversarial Implications of Variable-Time Inference. (99%)Dudi Biton; Aditi Misra; Efrat Levy; Jaidip Kotak; Ron Bitton; Roei Schuster; Nicolas Papernot; Yuval Elovici; Ben Nassi
Adaptive Adversarial Training Does Not Increase Recourse Costs. (92%)Ian Hardy; Jayanth Yetukuri; Yang Liu
Black-Box Attacks against Signed Graph Analysis via Balance Poisoning. (87%)Jialong Zhou; Yuni Lai; Jian Ren; Kai Zhou
RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems. (83%)Abhishek Moitra; Abhiroop Bhattacharjee; Youngeun Kim; Priyadarshini Panda
Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach. (4%)Vimal K B; Saketh Bachu; Tanmay Garg; Niveditha Lakshmi Narasimhan; Raghavan Konuru; Vineeth N Balasubramanian
Robust Recommender System: A Survey and Future Directions. (2%)Kaike Zhang; Qi Cao; Fei Sun; Yunfan Wu; Shuchang Tao; Huawei Shen; Xueqi Cheng
Dual Adversarial Alignment for Realistic Support-Query Shift Few-shot Learning. (1%)Siyang Jiang; Rui Fang; Hsi-Wen Chen; Wei Ding; Ming-Syan Chen
2023-09-04
Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings. (99%)AprilPyone MaungMaung; Isao Echizen; Hitoshi Kiya
Improving Visual Quality and Transferability of Adversarial Attacks on Face Recognition Simultaneously with Adversarial Restoration. (99%)Fengfan Zhou; Hefei Ling; Yuxuan Shi; Jiazhong Chen; Ping Li
Adv3D: Generating 3D Adversarial Examples in Driving Scenarios with NeRF. (99%)Leheng Li; Qing Lian; Ying-Cong Chen
Toward Defensive Letter Design. (98%)Rentaro Kataoka; Akisato Kimura; Seiichi Uchida
MathAttack: Attacking Large Language Models Towards Math Solving Ability. (97%)Zihao Zhou; Qiufeng Wang; Mingyu Jin; Jie Yao; Jianan Ye; Wei Liu; Wei Wang; Xiaowei Huang; Kaizhu Huang
Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks. (93%)Kacem Khaled; Mouna Dhaouadi; Felipe Gohring de Magalhães; Gabriela Nicolescu
Efficient Query-Based Attack against ML-Based Android Malware Detection under Zero Knowledge Setting. (92%)Ping He; Yifan Xia; Xuhong Zhang; Shouling Ji
Safe and Robust Watermark Injection with a Single OoD Image. (8%)Shuyang Yu; Junyuan Hong; Haobo Zhang; Haotao Wang; Zhangyang Wang; Jiayu Zhou
Dropout Attacks. (2%)Andrew Yuan; Alina Oprea; Cheng Tan
Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images. (2%)Jamiu Idowu; Ahmed Almasoud
2023-09-03
Robust and Efficient Interference Neural Networks for Defending Against Adversarial Attacks in ImageNet. (99%)Yunuo Xiong; Shujuan Liu; Hongwei Xiong
Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection. (98%)Weijie Wang; Zhengyu Zhao; Nicu Sebe; Bruno Lepri
AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training. (98%)Xingyuan Li; Jinyuan Liu; Long Ma; Xin Fan; Risheng Liu
Robust Adversarial Defense by Tensor Factorization. (89%)Manish Bhattarai; Mehmet Cagri Kaymak; Ryan Barron; Ben Nebgen; Kim Rasmussen; Boian Alexandrov
Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception. (13%)Zengxi Zhang; Zhiying Jiang; Zeru Shi; Jinyuan Liu; Risheng Liu
2023-09-02
Towards Certified Probabilistic Robustness with High Accuracy. (98%)Ruihan Zhang; Peixin Zhang; Jun Sun
Timbre-reserved Adversarial Attack in Speaker Identification. (98%)Qing Wang; Jixun Yao; Li Zhang; Pengcheng Guo; Lei Xie
Regularly Truncated M-estimators for Learning with Noisy Labels. (1%)Xiaobo Xia; Pengqian Lu; Chen Gong; Bo Han; Jun Yu; Jun Yu; Tongliang Liu
2023-09-01
Baseline Defenses for Adversarial Attacks Against Aligned Language Models. (99%)Neel Jain; Avi Schwarzschild; Yuxin Wen; Gowthami Somepalli; John Kirchenbauer; Ping-yeh Chiang; Micah Goldblum; Aniruddha Saha; Jonas Geiping; Tom Goldstein
Curating Naturally Adversarial Datasets for Trustworthy AI in Healthcare. (99%)Sydney Pugh; Ivan Ruchkin; Insup Lee; James Weimer
Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models. (89%)Changyu Liu; Yuling Jiao; Junhui Wang; Jian Huang
Why do universal adversarial attacks work on large language models?: Geometry might be the answer. (83%)Varshini Subhash; Anna Bialas; Weiwei Pan; Finale Doshi-Velez
RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model. (1%)Fengxiang Bie; Yibo Yang; Zhongzhu Zhou; Adam Ghanem; Minjia Zhang; Zhewei Yao; Xiaoxia Wu; Connor Holmes; Pareesa Golnari; David A. Clifton; Yuxiong He; Dacheng Tao; Shuaiwen Leon Song
Learned Visual Features to Textual Explanations. (1%)Saeid Asgari Taghanaki; Aliasghar Khani; Amir Khasahmadi; Aditya Sanghi; Karl D. D. Willis; Ali Mahdavi-Amiri
2023-08-31
Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff. (98%)Satoshi Suzuki; Shin'ya Yamaguchi; Shoichiro Takeda; Sekitoshi Kanai; Naoki Makishima; Atsushi Ando; Ryo Masumura
Image Hijacking: Adversarial Images can Control Generative Models at Runtime. (98%)Luke Bailey; Euan Ong; Stuart Russell; Scott Emmons
The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning. (93%)Maria Rigaki; Sebastian Garcia
Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models. (75%)Kevin Hector; Pierre-Alain Moellic; Mathieu Dumont; Jean-Max Dutertre
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack. (75%)Sze Jue Yang; Quang Nguyen; Chee Seng Chan; Khoa D. Doan
FTA: Stealthy and Robust Backdoor Attack with Flexible Trigger on Federated Learning. (45%)Yanqi Qiao; Congwen Chen; Rui Wang; Kaitai Liang
2023-08-30
Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach. (98%)Zahra Chaghazardi (University of Surrey); Saber Fallah (University of Surrey); Alireza Tamaddoni-Nezhad (University of Surrey)
Robust Principles: Architectural Design Principles for Adversarially Robust CNNs. (11%)ShengYun Peng; Weilin Xu; Cory Cornelius; Matthew Hull; Kevin Li; Rahul Duggal; Mansi Phute; Jason Martin; Duen Horng Chau
2023-08-29
Adaptive Attack Detection in Text Classification: Leveraging Space Exploration Features for Text Sentiment Classification. (99%)Atefeh Mahdavi; Neda Keivandarian; Marco Carvalho
Advancing Adversarial Robustness Through Adversarial Logit Update. (99%)Hao Xuan; Peican Zhu; Xingyu Li
Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary. (99%)Fahad Alrasheedi; Xin Zhong
A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
MDTD: A Multi Domain Trojan Detector for Deep Neural Networks. (97%)Arezoo Rajabi; Surudhi Asokraj; Fengqing Jiang; Luyao Niu; Bhaskar Ramasubramanian; Jim Ritcey; Radha Poovendran
3D Adversarial Augmentations for Robust Out-of-Domain Predictions. (87%)Alexander Lehner; Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Nassir Navab; Benjamin Busam; Federico Tombari
Everything Perturbed All at Once: Enabling Differentiable Graph Attacks. (84%)Haoran Liu; Bokun Wang; Jianling Wang; Xiangjue Dong; Tianbao Yang; James Caverlee
Adversarial Issue of Machine Learning Approaches Applied in Smart Grid: A Survey. (70%)Zhenyong Zhang; Mengxiang Liu
Intriguing Properties of Diffusion Models: A Large-Scale Dataset for Evaluating Natural Attack Capability in Text-to-Image Generative Models. (67%)Takami Sato; Justin Yue; Nanze Chen; Ningfei Wang; Qi Alfred Chen
Can We Rely on AI? (50%)Desmond J. Higham
Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images. (1%)Tareen Dawood; Chen Chen; Baldeep S. Sidhua; Bram Ruijsink; Justin Goulda; Bradley Porter; Mark K. Elliott; Vishal Mehta; Christopher A. Rinaldi; Esther Puyol-Anton; Reza Razavi; Andrew P. King
2023-08-28
Adversarial Attacks on Foundational Vision Models. (80%)Nathan Inkawhich; Gwendolyn McDonald; Ryan Luley
DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing. (45%)Jiawei Zhang; Zhongzhu Chen; Huan Zhang; Chaowei Xiao; Bo Li
Identifying and Mitigating the Security Risks of Generative AI. (45%)Clark Barrett; Brad Boyd; Elie Burzstein; Nicholas Carlini; Brad Chen; Jihye Choi; Amrita Roy Chowdhury; Mihai Christodorescu; Anupam Datta; Soheil Feizi; Kathleen Fisher; Tatsunori Hashimoto; Dan Hendrycks; Somesh Jha; Daniel Kang; Florian Kerschbaum; Eric Mitchell; John Mitchell; Zulfikar Ramzan; Khawaja Shams; Dawn Song; Ankur Taly; Diyi Yang
ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure Events. (2%)Aizaz Sharif; Dusica Marijan
Rep2wav: Noise Robust text-to-speech Using self-supervised representations. (1%)Qiushi Zhu; Yu Gu; Rilin Chen; Chao Weng; Yuchen Hu; Lirong Dai; Jie Zhang
Are Existing Out-Of-Distribution Techniques Suitable for Network Intrusion Detection? (1%)Andrea Corsini; Shanchieh Jay Yang
2023-08-27
Detecting Language Model Attacks with Perplexity. (1%)Gabriel Alon; Michael Kamfonas
2023-08-24
Exploring Transferability of Multimodal Adversarial Samples for Vision-Language Pre-training Models with Contrastive Learning. (99%)Youze Wang; Wenbo Hu; Yinpeng Dong; Richang Hong
Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers. (92%)Paul Gavrikov; Janis Keuper
Evaluating the Vulnerabilities in ML systems in terms of adversarial attacks. (82%)John Harshith; Mantej Singh Gill; Madhan Jothimani
Fast Adversarial Training with Smooth Convergence. (3%)Mengnan Zhao; Lihe Zhang; Yuqiu Kong; Baocai Yin
WavMark: Watermarking for Audio Generation. (2%)Guangyu Chen; Yu Wu; Shujie Liu; Tao Liu; Xiaoyong Du; Furu Wei
2023-08-23
On-Manifold Projected Gradient Descent. (99%)Aaron Mahler; Tyrus Berry; Tom Stephens; Harbir Antil; Michael Merritt; Jeanie Schreiber; Ioannis Kevrekidis
Sample Complexity of Robust Learning against Evasion Attacks. (98%)Pascale Gourdeau
LCANets++: Robust Audio Classification using Multi-layer Neural Networks with Lateral Competition. (92%)Sayanton V. Dibbo; Juston S. Moore; Garrett T. Kenyon; Michael A. Teti
BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection. (74%)Tinghao Xie; Xiangyu Qi; Ping He; Yiming Li; Jiachen T. Wang; Prateek Mittal
RemovalNet: DNN Fingerprint Removal Attacks. (69%)Hongwei Yao; Zheng Li; Kunzhe Huang; Jian Lou; Zhan Qin; Kui Ren
Graph Unlearning: A Review. (2%)Anwar Said; Tyler Derr; Mudassir Shabbir; Waseem Abbas; Xenofon Koutsoukos
Ensembling Uncertainty Measures to Improve Safety of Black-Box Classifiers. (1%)Tommaso Zoppi; Andrea Ceccarelli; Andrea Bondavalli
Aparecium: Revealing Secrets from Physical Photographs. (1%)Zhe Lei; Jie Zhang; Jingtao Li; Weiming Zhang; Nenghai Yu
2023-08-22
SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks. (99%)Yue Gao; Ilia Shumailov; Kassem Fawaz
Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection. (99%)Mahmoud Nazzal; Issa Khalil; Abdallah Khreishah; NhatHai Phan; Yao Ma
Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack. (98%)Ningfei Wang; Yunpeng Luo; Takami Sato; Kaidi Xu; Qi Alfred Chen
Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation. (86%)Yanxin Yang; Ming Hu; Yue Cao; Jun Xia; Yihao Huang; Yang Liu; Mingsong Chen
Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging. (76%)Xiaojun Jia; Yuefeng Chen; Xiaofeng Mao; Ranjie Duan; Jindong Gu; Rong Zhang; Hui Xue; Xiaochun Cao
Designing an attack-defense game: how to increase robustness of financial transaction models via a competition. (75%)Alexey Zaytsev; Alex Natekin; Evgeni Vorsin; Valerii Smirnov; Oleg Sidorshin; Alexander Senin; Alexander Dudin; Dmitry Berestnev
Adversarial Training Using Feedback Loops. (74%)Ali Haisam Muhammad Rafid; Adrian Sandu
LEAP: Efficient and Automated Test Method for NLP Software. (31%)Mingxuan Xiao; Yan Xiao; Hai Dong; Shunhui Ji; Pengcheng Zhang
PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification. (16%)Yizhen Yuan (Institute for AI Industry Research); Rui Kong (Shanghai Jiao Tong University); Shenghao Xie (Wuhan University); Yuanchun Li (Institute for AI Industry Research & Shanghai AI Laboratory); Yunxin Liu (Institute for AI Industry Research & Shanghai AI Laboratory)
2023-08-21
Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs. (99%)Dongjin Lee; Juho Lee; Kijung Shin
Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. (99%)Zhijin Ge; Fanhua Shang; Hongying Liu; Yuanyuan Liu; Liang Wan; Wei Feng; Xiaosen Wang
Enhancing Adversarial Attacks: The Similar Target Method. (99%)Shuo Zhang; Ziruo Wang; Zikai Zhou; Huanran Chen
Adversarial Attacks on Code Models with Discriminative Graph Patterns. (96%)Thanh-Dat Nguyen; Yang Zhou; Xuan Bach D. Le; Patanamon Thongtanunam; David Lo
Temporal-Distributed Backdoor Attack Against Video Based Action Recognition. (88%)Xi Li; Songhe Wang; Ruiquan Huang; Mahanth Gowda; George Kesidis
Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models. (76%)Preben M. Ness; Dusica Marijan; Sunanda Bose
Single-User Injection for Invisible Shilling Attack against Recommender Systems. (62%)Chengzhi Huang; Hui Li
On the Adversarial Robustness of Multi-Modal Foundation Models. (4%)Christian Schlarmann; Matthias Hein
Unlocking Accuracy and Fairness in Differentially Private Image Classification. (2%)Leonard Berrada; Soham De; Judy Hanwen Shen; Jamie Hayes; Robert Stanforth; David Stutz; Pushmeet Kohli; Samuel L. Smith; Borja Balle
2023-08-20
Boosting Adversarial Transferability by Block Shuffle and Rotation. (99%)Kunyu Wang; Xuanran He; Wenxuan Wang; Xiaosen Wang
Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting. (96%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Yinpeng Chen; Lu Yuan; Gang Hua; Weiming Zhang; Nenghai Yu
HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds. (96%)Hejia Geng; Peng Li
Hiding Backdoors within Event Sequence Data via Poisoning Attacks. (95%)Elizaveta Kovtun; Alina Ermilova; Dmitry Berestnev; Alexey Zaytsev
Adversarial Collaborative Filtering for Free. (61%)Huiyuan Chen; Xiaoting Li; Vivian Lai; Chin-Chia Michael Yeh; Yujie Fan; Yan Zheng; Mahashweta Das; Hao Yang
Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks. (1%)Kaixin Xu; Zhe Wang; Xue Geng; Jie Lin; Min Wu; Xiaoli Li; Weisi Lin
A Study on Robustness and Reliability of Large Language Model Code Generation. (1%)Li Zhong; Zilong Wang
2023-08-19
A Comparison of Adversarial Learning Techniques for Malware Detection. (99%)Pavla Louthánová; Matouš Kozák; Martin Jureček; Mark Stamp
Robust Mixture-of-Expert Training for Convolutional Neural Networks. (83%)Yihua Zhang; Ruisi Cai; Tianlong Chen; Guanhua Zhang; Huan Zhang; Pin-Yu Chen; Shiyu Chang; Zhangyang Wang; Sijia Liu
2023-08-18
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method. (99%)Yu-An Liu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Wei Chen; Yixing Fan; Xueqi Cheng
Attacking logo-based phishing website detectors with adversarial perturbations. (99%)Jehyun Lee; Zhe Xin; Melanie Ng Pei See; Kanav Sabharwal; Giovanni Apruzzese; Dinil Mon Divakaran
Compensating Removed Frequency Components: Thwarting Voice Spectrum Reduction Attacks. (92%)Shu Wang; Kun Sun; Qi Li
Poison Dart Frog: A Clean-Label Attack with Low Poisoning Rate and High Attack Success Rate in the Absence of Training Data. (54%)Binhao Ma; Jiahui Wang; Dejun Wang; Bo Meng
Backdoor Mitigation by Correcting the Distribution of Neural Activations. (11%)Xi Li; Zhen Xiang; David J. Miller; George Kesidis
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box. (9%)Yi Cai; Gerhard Wunder
Towards Attack-tolerant Federated Learning via Critical Parameter Analysis. (9%)Sungwon Han; Sungwon Park; Fangzhao Wu; Sundong Kim; Bin Zhu; Xing Xie; Meeyoung Cha
Defending Label Inference Attacks in Split Learning under Regression Setting. (4%)Haoze Qiu; Fei Zheng; Chaochao Chen; Xiaolin Zheng
An Image is Worth a Thousand Toxic Words: A Metamorphic Testing Framework for Content Moderation Software. (1%)Wenxuan Wang; Jingyuan Huang; Jen-tse Huang; Chang Chen; Jiazhen Gu; Pinjia He; Michael R. Lyu
Proceedings of the 2nd International Workshop on Adaptive Cyber Defense. (1%)Marco Carvalho; Damian Marriott; Mark Bilinski; Ahmad Ridley
2023-08-17
Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing. (99%)Daniel Gibert; Giulio Zizzo; Quan Le
AIR: Threats of Adversarial Attacks on Deep Learning-Based Information Recovery. (99%)Jinyin Chen; Jie Ge; Shilian Zheng; Linhui Ye; Haibin Zheng; Weiguo Shen; Keqiang Yue; Xiaoniu Yang
A White-Box False Positive Adversarial Attack Method on Contrastive Loss-Based Offline Handwritten Signature Verification Models. (98%)Zhongliang Guo; Yifei Qian; Ognjen Arandjelović; Lei Fang
Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces. (16%)Ahmad-Reza Ehyaei; Kiarash Mohammadi; Amir-Hossein Karimi; Samira Samadi; Golnoosh Farnadi
That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications. (10%)Carter Slocum; Yicheng Zhang; Erfan Shayegani; Pedram Zaree; Nael Abu-Ghazaleh; Jiasi Chen
Evaluating the Instruction-Following Robustness of Large Language Models to Prompt Injection. (10%)Zekun Li; Baolin Peng; Pengcheng He; Xifeng Yan
General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing. (3%)Dmitrii Korzh; Mikhail Pautov; Olga Tsymboi; Ivan Oseledets
2023-08-16
Benchmarking Adversarial Robustness of Compressed Deep Learning Models. (81%)Brijesh Vora; Kartik Patwari; Syed Mahbub Hafiz; Zubair Shafiq; Chen-Nee Chuah
Test-Time Poisoning Attacks Against Test-Time Adaptation Models. (73%)Tianshuo Cong; Xinlei He; Yun Shen; Yang Zhang
Self-Deception: Reverse Penetrating the Semantic Firewall of Large Language Models. (67%)Zhenhua Wang; Wei Xie; Kai Chen; Baosheng Wang; Zhiwen Gui; Enze Wang
Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks. (61%)Mirazul Haque; Wei Yang
Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness. (33%)Francesco Campi; Lukas Gosch; Tom Wollschläger; Yan Scholten; Stephan Günnemann
2023-08-15
SEDA: Self-Ensembling ViT with Defensive Distillation and Adversarial Training for robust Chest X-rays Classification. (99%)Raza Imam; Ibrahim Almakky; Salma Alrashdi; Baketah Alrashdi; Mohammad Yaqub
Backpropagation Path Search On Adversarial Transferability. (99%)Zhuoer Xu; Zhangxuan Gu; Jianping Zhang; Shiwen Cui; Changhua Meng; Weiqiang Wang
A Review of Adversarial Attacks in Computer Vision. (99%)Yutong Zhang; Yao Li; Yin Li; Zhichang Guo
Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models. (95%)Yugeng Liu; Tianshuo Cong; Zhengyu Zhao; Michael Backes; Yun Shen; Yang Zhang
Simple and Efficient Partial Graph Adversarial Attack: A New Perspective. (93%)Guanghui Zhu; Mengyu Chen; Chunfeng Yuan; Yihua Huang
2023-08-14
3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack. (99%)Yunbo Tao; Daizong Liu; Pan Zhou; Yulai Xie; Wei Du; Wei Hu
White-Box Adversarial Attacks on Deep Learning-Based Radio Frequency Fingerprint Identification. (99%)Jie Ma; Junqing Zhang; Guanxiong Shen; Alan Marshall; Chip-Hong Chang
AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning. (99%)Ziqi Zhou; Shengshan Hu; Minghui Li; Hangtao Zhang; Yechao Zhang; Hai Jin
Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks. (68%)Shijie Liu; Andrew C. Cullen; Paul Montague; Sarah M. Erfani; Benjamin I. P. Rubinstein
LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked. (54%)Alec Helbling; Mansi Phute; Matthew Hull; Duen Horng Chau
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks. (13%)Indu Joshi; Priyank Upadhya; Gaurav Kumar Nayak; Peter Schüffler; Nassir Navab
ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal and Robust Vehicle Evasion. (10%)Naufal Suryanto; Yongsu Kim; Harashta Tatimma Larasati; Hyoeun Kang; Thi-Thu-Huong Le; Yoonyoung Hong; Hunmin Yang; Se-Yoon Oh; Howon Kim
SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation. (1%)An Wang; Mobarakol Islam; Mengya Xu; Yang Zhang; Hongliang Ren
2023-08-13
SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection. (99%)João Vitorino; Isabel Praça; Eva Maia
Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods. (45%)Avery Ma; Yangchen Pan; Amir-massoud Farahmand
A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations. (1%)Hongrong Cheng; Miao Zhang; Javen Qinfeng Shi
Robustified ANNs Reveal Wormholes Between Human Category Percepts. (1%)Guy Gaziv; Michael J. Lee; James J. DiCarlo
Faithful to Whom? Questioning Interpretability Measures in NLP. (1%)Evan Crothers; Herna Viktor; Nathalie Japkowicz
2023-08-12
Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks. (99%)Roman Garaev; Bader Rasheed; Adil Khan
One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training. (13%)Jianshuo Dong; Han Qiu; Yiming Li; Tianwei Zhang; Yuanjie Li; Zeqi Lai; Chao Zhang; Shu-Tao Xia
2023-08-11
Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregation. (98%)Xuannan Liu; Yaoyao Zhong; Yuhang Zhang; Lixiong Qin; Weihong Deng
Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook. (98%)Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammed Shafique
Face Encryption via Frequency-Restricted Identity-Agnostic Attacks. (96%)Xin Dong; Rui Wang; Siyuan Liang; Aishan Liu; Lihua Jing
White-box Membership Inference Attacks against Diffusion Models. (68%)Yan Pang; Tianhao Wang; Xuhui Kang; Mengdi Huai; Yang Zhang
Test-Time Adaptation for Backdoor Defense. (10%)Jiyang Guan; Jian Liang; Ran He
Continual Face Forgery Detection via Historical Distribution Preserving. (2%)Ke Sun; Shen Chen; Taiping Yao; Xiaoshuai Sun; Shouhong Ding; Rongrong Ji
Fast and Accurate Transferability Measurement by Evaluating Intra-class Feature Variance. (1%)Huiwen Xu; U Kang
2023-08-10
Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient. (99%)Zhengzhi Lu; He Wang; Ziyi Chang; Guoan Yang; Hubert P. H. Shum
Symmetry Defense Against XGBoost Adversarial Perturbation Attacks. (96%)Blerta Lindqvist
Complex Network Effects on the Robustness of Graph Convolutional Networks. (92%)Benjamin A. Miller; Kevin Chan; Tina Eliassi-Rad
State Machine Frameworks for Website Fingerprinting Defenses: Maybe Not. (61%)Ethan Witwer
FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks. (45%)Ehsanul Kabir; Zeyu Song; Md Rafi Ur Rashid; Shagufta Mehnaz
Critical Points ++: An Agile Point Cloud Importance Measure for Robust Classification, Adversarial Defense and Explainable AI. (5%)Meir Yossef Levi; Guy Gilboa
Comprehensive Analysis of Network Robustness Evaluation Based on Convolutional Neural Networks with Spatial Pyramid Pooling. (1%)Wenjun Jiang; Tianlong Fan; Changhao Li; Chuanfu Zhang; Tao Zhang; Zong-fu Luo
2023-08-09
Adv-Inpainting: Generating Natural and Transferable Adversarial Patch via Attention-guided Feature Fusion. (98%)Yanjie Li; Mingxing Duan; Bin Xiao
Adversarial ModSecurity: Countering Adversarial SQL Injections with Robust Machine Learning. (93%)Biagio Montaruli; Luca Demetrio; Andrea Valenza; Battista Biggio; Luca Compagna; Davide Balzarotti; Davide Ariu; Luca Piras
Adversarial Deep Reinforcement Learning for Cyber Security in Software Defined Networks. (81%)Luke Borchjes; Clement Nyirenda; Louise Leenen
Data-Free Model Extraction Attacks in the Context of Object Detection. (41%)Harshit Shah; Aravindhan G; Pavan Kulkarni; Yuvaraj Govidarajulu; Manojkumar Parmar
2023-08-08
Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning. (99%)Simon Queyrut; Yérom-David Bromberg; Valerio Schiavoni
Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients. (81%)Yao Shu; Xiaoqiang Lin; Zhongxiang Dai; Bryan Kian Hsiang Low
The Model Inversion Eavesdropping Attack in Semantic Communication Systems. (67%)Yuhao Chen; Qianqian Yang; Zhiguo Shi; Jiming Chen
Comprehensive Assessment of the Performance of Deep Learning Classifiers Reveals a Surprising Lack of Robustness. (64%)Michael W. Spratling
XGBD: Explanation-Guided Graph Backdoor Detection. (54%)Zihan Guan; Mengnan Du; Ninghao Liu
Improved Activation Clipping for Universal Backdoor Mitigation and Test-Time Detection. (50%)Hang Wang; Zhen Xiang; David J. Miller; George Kesidis
Backdoor Federated Learning by Poisoning Backdoor-Critical Layers. (15%)Haomin Zhuang; Mingxian Yu; Hao Wang; Yang Hua; Jian Li; Xu Yuan
Evil Operation: Breaking Speaker Recognition with PaddingBack. (13%)Zhe Ye; Diqun Yan; Li Dong; Kailai Shen
2023-08-07
Fixed Inter-Neuron Covariability Induces Adversarial Robustness. (98%)Muhammad Ahmed Shah; Bhiksha Raj
Exploring the Physical World Adversarial Robustness of Vehicle Detection. (98%)Wei Jiang; Tianyuan Zhang; Shuangcheng Liu; Weiyu Ji; Zichao Zhang; Gang Xiao
PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation. (86%)Zhu Liu; Jinyuan Liu; Benzhuang Zhang; Long Ma; Xin Fan; Risheng Liu
A reading survey on adversarial machine learning: Adversarial attacks and their understanding. (81%)Shashank Kotyan
A Four-Pronged Defense Against Byzantine Attacks in Federated Learning. (54%)Wei Wan; Shengshan Hu; Minghui Li; Jianrong Lu; Longling Zhang; Leo Yu Zhang; Hai Jin
Improving Performance of Semi-Supervised Learning by Adversarial Attacks. (11%)Dongyoon Yang; Kunwoong Kim; Yongdai Kim
Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing. (10%)Wai Man Si; Michael Backes; Yang Zhang
2023-08-06
SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammad Shafique
CGBA: Curvature-aware Geometric Black-box Attack. (99%)Md Farhamdur Reza; Ali Rahmati; Tianfu Wu; Huaiyu Dai
APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses. (98%)Tianrui Qin; Xitong Gao; Juanjuan Zhao; Kejiang Ye; Cheng-Zhong Xu
Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change. (82%)Chien Cheng Chyou; Hung-Ting Su; Winston H. Hsu
Using Overlapping Methods to Counter Adversaries in Community Detection. (50%)Benjamin A. Miller; Kevin Chan; Tina Eliassi-Rad
2023-08-05
An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability. (99%)Bin Chen; Jia-Li Yin; Shukai Chen; Bo-Hao Chen; Ximeng Liu
An AI-Enabled Framework to Defend Ingenious MDT-based Attacks on the Emerging Zero Touch Cellular Networks. (92%)Aneeqa Ijaz; Waseem Raza; Hasan Farooq; Marvin Manalastas; Ali Imran
A Security and Usability Analysis of Local Attacks Against FIDO2. (1%)Tarun Kumar Yadav; Kent Seamons
Approximating Positive Homogeneous Functions with Scale Invariant Neural Networks. (1%)Stefan Bamberger; Reinhard Heckel; Felix Krahmer
2023-08-04
Multi-attacks: Many images $+$ the same adversarial attack $\to$ many target labels. (99%)Stanislav Fort
RobustMQ: Benchmarking Robustness of Quantized Models. (75%)Yisong Xiao; Aishan Liu; Tianyuan Zhang; Haotong Qin; Jinyang Guo; Xianglong Liu
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks. (67%)Domenico Cotroneo; Cristina Improta; Pietro Liguori; Roberto Natella
Universal Defensive Underpainting Patch: Making Your Text Invisible to Optical Character Recognition. (31%)JiaCheng Deng; Li Dong; Jiahao Chen; Diqun Yan; Rangding Wang; Dengpan Ye; Lingchen Zhao; Jinyu Tian
BlindSage: Label Inference Attacks against Node-level Vertical Federated Graph Neural Networks. (9%)Marco Arazzi; Mauro Conti; Stefanos Koffas; Marina Krcek; Antonino Nocera; Stjepan Picek; Jing Xu
2023-08-03
Hard Adversarial Example Mining for Improving Robust Fairness. (99%)Chenhao Lin; Xiang Ji; Yulong Yang; Qian Li; Chao Shen; Run Wang; Liming Fang
URET: Universal Robustness Evaluation Toolkit (for Evasion). (99%)Kevin Eykholt; Taesung Lee; Douglas Schales; Jiyong Jang; Ian Molloy; Masha Zorin
AdvFAS: A robust face anti-spoofing framework against adversarial examples. (98%)Jiawei Chen; Xiao Yang; Heng Yin; Mingzhi Ma; Bihui Chen; Jianteng Peng; Yandong Guo; Zhaoxia Yin; Hang Su
FROD: Robust Object Detection for Free. (67%)Muhammad Awais; Weiming Zhuang; Lingjuan Lyu; Sung-Ho Bae
ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP. (33%)Lu Yan; Zhuo Zhang; Guanhong Tao; Kaiyuan Zhang; Xuan Chen; Guangyu Shen; Xiangyu Zhang
From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application? (4%)Rodrigo Pedro; Daniel Castro; Paulo Carreira; Nuno Santos
2023-08-02
Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time. (99%)Xinfeng Li; Chen Yan; Xuancun Lu; Zihan Zeng; Xiaoyu Ji; Wenyuan Xu
Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks. (98%)Jun Guo; Aishan Liu; Xingyu Zheng; Siyuan Liang; Yisong Xiao; Yichao Wu; Xianglong Liu
Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator. (16%)Xiaobei Yan; Xiaoxuan Lou; Guowen Xu; Han Qiu; Shangwei Guo; Chip Hong Chang; Tianwei Zhang
TEASMA: A Practical Approach for the Test Assessment of Deep Neural Networks using Mutation Analysis. (2%)Amin Abbasishahkoo; Mahboubeh Dadkhah; Lionel Briand; Dayi Lin
LSF-IDM: Automotive Intrusion Detection Model with Lightweight Attribution and Semantic Fusion. (1%)Pengzhou Cheng; Lei Hua; Haobin Jiang; Mohammad Samie; Gongshen Liu
2023-08-01
Dynamic ensemble selection based on Deep Neural Network Uncertainty Estimation for Adversarial Robustness. (99%)Ruoxi Qin; Linyuan Wang; Xuehui Du; Xingyuan Chen; Bin Yan
LimeAttack: Local Explainable Method for Textual Hard-Label Adversarial Attack. (99%)Hai Zhu; Zhaoqing Yang; Weiwei Shang; Yuren Wu
Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning. (99%)Kaijie Zhu; Jindong Wang; Xixu Hu; Xing Xie; Ge Yang
Doubly Robust Instance-Reweighted Adversarial Training. (82%)Daouda Sow; Sen Lin; Zhangyang Wang; Yingbin Liang
Training on Foveated Images Improves Robustness to Adversarial Attacks. (82%)Muhammad A. Shah; Bhiksha Raj
Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches. (47%)Pia Hanfeld; Khaled Wahba; Marina M. -C. Höhne; Michael Bussmann; Wolfgang Hönig
Robust Linear Regression: Phase-Transitions and Precise Tradeoffs for General Norms. (22%)Elvis Dohmatob; Meyer Scetbon
Zero-Shot Learning by Harnessing Adversarial Samples. (1%)Zhi Chen; Pengfei Zhang; Jingjing Li; Sen Wang; Zi Huang
A Novel Cross-Perturbation for Single Domain Generalization. (1%)Dongjia Zhao; Lei Qi; Xiao Shi; Yinghuan Shi; Xin Geng
Learning to Generate Training Datasets for Robust Semantic Segmentation. (1%)Marwane Hariat; Olivier Laurent; Rémi Kazmierczak; Andrei Bursuc; Angela Yao; Gianni Franchi
2023-07-31
A Novel Deep Learning based Model to Defend Network Intrusion Detection System against Adversarial Attacks. (99%)Khushnaseeb Roshan; Aasim Zafar; Shiekh Burhan Ul Haque
Transferable Attack for Semantic Segmentation. (99%)Mengqi He; Jing Zhang; Zhaoyuan Yang; Mingyi He; Nick Barnes; Yuchao Dai
Universal Adversarial Defense in Remote Sensing Based on Pre-trained Denoising Diffusion Models. (99%)Weikang Yu; Yonghao Xu; Pedram Ghamisi
Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection. (97%)Xuanang Chen; Ben He; Le Sun; Yingfei Sun
Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks. (86%)Xinyu Zhang; Hanbin Hong; Yuan Hong; Peng Huang; Binghui Wang; Zhongjie Ba; Kui Ren
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models. (26%)Jordan Vice; Naveed Akhtar; Richard Hartley; Ajmal Mian
Adversarially Robust Neural Legal Judgement Systems. (11%)Rohit Raj; V Susheela Devi
Virtual Prompt Injection for Instruction-Tuned Large Language Models. (10%)Jun Yan; Vikas Yadav; Shiyang Li; Lichang Chen; Zheng Tang; Hai Wang; Vijay Srinivasan; Xiang Ren; Hongxia Jin
Noisy Self-Training with Data Augmentations for Offensive and Hate Speech Detection Tasks. (1%)João A. Leite; Carolina Scarton; Diego F. Silva
2023-07-30
Theoretically Principled Trade-off for Stateful Defenses against Query-Based Black-Box Attacks. (99%)Ashish Hooda; Neal Mangaokar; Ryan Feng; Kassem Fawaz; Somesh Jha; Atul Prakash
Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples. (99%)Qiufan Ji; Lin Wang; Cong Shi; Shengshan Hu; Yingying Chen; Lichao Sun
Probabilistically robust conformal prediction. (91%)Subhankar Ghosh; Yuanjie Shi; Taha Belkhouja; Yan Yan; Jana Doppa; Brian Jones
On Updating Static Output Feedback Controllers Under State-Space Perturbation. (1%)MirSaleh Bahavarnia; Ahmad F. Taha
2023-07-29
You Can Backdoor Personalized Federated Learning. (92%)Tiandi Ye; Cen Chen; Yinggui Wang; Xiang Li; Ming Gao
On Neural Network approximation of ideal adversarial attack and convergence of adversarial training. (92%)Rajdeep Haldar; Qifan Song
Exposing Hidden Attackers in Industrial Control Systems using Micro-distortions. (41%)Suman Sourav; Binbin Chen
2023-07-28
Beating Backdoor Attack at Its Own Game. (97%)Min Liu; Alberto Sangiovanni-Vincentelli; Xiangyu Yue
Adversarial training for tabular data with attack propagation. (67%)Tiago Leon Melo; João Bravo; Marco O. P. Sampaio; Paolo Romano; Hugo Ferreira; João Tiago Ascensão; Pedro Bizarro
Improving Realistic Worst-Case Performance of NVCiM DNN Accelerators through Training with Right-Censored Gaussian Noise. (10%)Zheyu Yan; Yifan Qin; Wujie Wen; Xiaobo Sharon Hu; Yiyu Shi
What can Discriminator do? Towards Box-free Ownership Verification of Generative Adversarial Network. (4%)Ziheng Huang; Boheng Li; Yan Cai; Run Wang; Shangwei Guo; Liming Fang; Jing Chen; Lina Wang
2023-07-27
Universal and Transferable Adversarial Attacks on Aligned Language Models. (99%)Andy Zou; Zifan Wang; J. Zico Kolter; Matt Fredrikson
R-LPIPS: An Adversarially Robust Perceptual Similarity Metric. (99%)Sara Ghazanfari; Siddharth Garg; Prashanth Krishnamurthy; Farshad Khorrami; Alexandre Araujo
When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning. (99%)Yuchen Sun; Qianqian Xu; Zitai Wang; Qingming Huang
Backdoor Attacks for In-Context Learning with Language Models. (97%)Nikhil Kandpal; Matthew Jagielski; Florian Tramèr; Nicholas Carlini
FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks. (93%)Buse G. A. Tekgul; N. Asokan
Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World. (92%)Xingxing Wei; Yao Huang; Yitong Sun; Jie Yu
NSA: Naturalistic Support Artifact to Boost Network Confidence. (62%)Abhijith Sharma; Phil Munz; Apurva Narayan
SEV-Step: A Single-Stepping Framework for AMD-SEV. (3%)Luca Wilke; Jan Wichelmann; Anja Rabich; Thomas Eisenbarth
Decoding the Secrets of Machine Learning in Malware Classification: A Deep Dive into Datasets, Feature Extraction, and Model Performance. (1%)Savino Dambra; Yufei Han; Simone Aonzo; Platon Kotzias; Antonino Vitale; Juan Caballero; Davide Balzarotti; Leyla Bilge
AC-Norm: Effective Tuning for Medical Image Analysis via Affine Collaborative Normalization. (1%)Chuyan Zhang; Yuncheng Yang; Hao Zheng; Yun Gu
2023-07-26
Enhanced Security against Adversarial Examples Using a Random Ensemble of Encrypted Vision Transformer Models. (99%)Ryota Iijima; Miki Tanaka; Sayaka Shiota; Hitoshi Kiya
Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. (99%)Dong Lu; Zhiqiang Wang; Teng Wang; Weili Guan; Hongchang Gao; Feng Zheng
Defending Adversarial Patches via Joint Region Localizing and Inpainting. (99%)Junwen Chen; Xingxing Wei
Lateral-Direction Localization Attack in High-Level Autonomous Driving: Domain-Specific Defense Opportunity via Lane Detection. (67%)Junjie Shen; Yunpeng Luo; Ziwen Wan; Qi Alfred Chen
Plug and Pray: Exploiting off-the-shelf components of Multi-Modal Models. (33%)Erfan Shayegani; Yue Dong; Nael Abu-Ghazaleh
Coupled-Space Attacks against Random-Walk-based Anomaly Detection. (11%)Yuni Lai; Marcin Waniek; Liying Li; Jingwen Wu; Yulin Zhu; Tomasz P. Michalak; Talal Rahwan; Kai Zhou
FakeTracer: Proactively Defending Against Face-swap DeepFakes via Implanting Traces in Training. (5%)Pu Sun; Honggang Qi; Yuezun Li; Siwei Lyu
Open Image Content Disarm And Reconstruction. (1%)Eli Belkind; Ran Dubin; Amit Dvir
2023-07-25
On the unreasonable vulnerability of transformers for image restoration -- and an easy fix. (99%)Shashank Agnihotri; Kanchana Vaishnavi Gandikota; Julia Grabinski; Paramanand Chandramouli; Margret Keuper
Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation. (99%)Junbin Fang; Canjian Jiang; You Jiang; Puxi Lin; Zhaojie Chen; Yujing Sun; Siu-Ming Yiu; Zoe L. Jiang
Foundational Models Defining a New Era in Vision: A Survey and Outlook. (10%)Muhammad Awais; Muzammal Naseer; Salman Khan; Rao Muhammad Anwer; Hisham Cholakkal; Mubarak Shah; Ming-Hsuan Yang; Fahad Shahbaz Khan
Efficient Estimation of Average-Case Robustness for Multi-Class Classification. (10%)Tessa Han; Suraj Srinivas; Himabindu Lakkaraju
2023-07-24
Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations. (99%)Yi Han; Matthew Chan; Eric Wengrowski; Zhuohuan Li; Nils Ole Tippenhauer; Mani Srivastava; Saman Zonouz; Luis Garcia
Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation. (99%)Neel Bhandari; Pin-Yu Chen
Data-free Black-box Attack based on Diffusion Model. (62%)Mingwen Shao; Lingzhuang Meng; Yuanjian Qiao; Lixu Zhang; Wangmeng Zuo
Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs. (56%)Zhakshylyk Nurlanov; Frank R. Schmidt; Florian Bernard
An Estimator for the Sensitivity to Perturbations of Deep Neural Networks. (31%)Naman Maheshwari; Nicholas Malaya; Scott Moe; Jaydeep P. Kulkarni; Sudhanva Gurumurthi
Cyber Deception against Zero-day Attacks: A Game Theoretic Approach. (12%)Md Abu Sayed; Ahmed H. Anwar; Christopher Kiekintveld; Branislav Bosansky; Charles Kamhoua
Malware Resistant Data Protection in Hyper-connected Networks: A survey. (10%)Jannatul Ferdous; Rafiqul Islam; Maumita Bhattacharya; Md Zahidul Islam
Digital Twins for Moving Target Defense Validation in AC Microgrids. (1%)Suman Rath; Subham Sahoo; Shamik Sengupta
Towards Bridging the FL Performance-Explainability Trade-Off: A Trustworthy 6G RAN Slicing Use-Case. (1%)Swastika Roy; Hatim Chergui; Christos Verikoukis
Learning Provably Robust Estimators for Inverse Problems via Jittering. (1%)Anselm Krainovic; Mahdi Soltanolkotabi; Reinhard Heckel
2023-07-23
AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models. (99%)Xuelong Dai; Kaisheng Liang; Bin Xiao
Towards Generic and Controllable Attacks Against Object Detection. (99%)Guopeng Li; Yue Xu; Jian Ding; Gui-Song Xia
Downstream-agnostic Adversarial Examples. (99%)Ziqi Zhou; Shengshan Hu; Ruizhi Zhao; Qian Wang; Leo Yu Zhang; Junhui Hou; Hai Jin
Gradient-Based Word Substitution for Obstinate Adversarial Examples Generation in Language Models. (98%)Yimu Wang; Peng Shi; Hongyang Zhang
A First Look at On-device Models in iOS Apps. (84%)Han Hu; Yujin Huang; Qiuyuan Chen; Terry Tue Zhuo; Chunyang Chen
Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training. (83%)Gege Qi; Yuefeng Chen; Xiaofeng Mao; Xiaojun Jia; Ranjie Duan; Rong Zhang; Hui Xue
Cross Contrastive Feature Perturbation for Domain Generalization. (1%)Chenming Li; Daoan Zhang; Wenjian Huang; Jianguo Zhang
2023-07-22
Backdoor Attacks against Voice Recognition Systems: A Survey. (13%)Baochen Yan; Jiahe Lan; Zheng Yan
2023-07-21
Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks. (99%)Eldor Abdukhamidov; Mohammed Abuhamad; Simon S. Woo; Eric Chan-Tin; Tamer Abuhmed
Fast Adaptive Test-Time Defense with Robust Features. (98%)Anurag Singh; Mahalakshmi Sabanayagam; Krikamol Muandet; Debarghya Ghoshdastidar
Improving Viewpoint Robustness for Visual Recognition via Adversarial Training. (80%)Shouwei Ruan; Yinpeng Dong; Hang Su; Jianteng Peng; Ning Chen; Xingxing Wei
FMT: Removing Backdoor Feature Maps via Feature Map Testing in Deep Neural Networks. (76%)Dong Huang; Qingwen Bu; Yahao Qing; Yichao Fu; Heming Cui
OUTFOX: LLM-generated Essay Detection through In-context Learning with Adversarially Generated Examples. (62%)Ryuto Koike; Masahiro Kaneko; Naoaki Okazaki
HybridAugment++: Unified Frequency Spectra Perturbations for Model Robustness. (26%)Mehmet Kerim Yucel; Ramazan Gokberk Cinbis; Pinar Duygulu
2023-07-20
A LLM Assisted Exploitation of AI-Guardian. (98%)Nicholas Carlini
Improving Transferability of Adversarial Examples via Bayesian Attacks. (98%)Qizhang Li; Yiwen Guo; Xiaochen Yang; Wangmeng Zuo; Hao Chen
Adversarial attacks for mixtures of classifiers. (54%)Lucas Gnecco Heredia; Benjamin Negrevergne; Yann Chevaleyre
PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks. (33%)Shiwei Ding; Lan Zhang; Miao Pan; Xiaoyong Yuan
A Holistic Assessment of the Reliability of Machine Learning Systems. (4%)Anthony Corso; David Karamadian; Romeo Valentin; Mary Cooper; Mykel J. Kochenderfer
Making Pre-trained Language Models both Task-solvers and Self-calibrators. (2%)Yangyi Chen; Xingyao Wang; Heng Ji
Boundary State Generation for Testing and Improvement of Autonomous Driving Systems. (1%)Matteo Biagiola; Paolo Tonella
2023-07-19
Backdoor Attack against Object Detection with Clean Annotation. (93%)Yize Cheng; Wenbin Hu; Minhao Cheng
Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples. (92%)Shaokui Wei; Mingda Zhang; Hongyuan Zha; Baoyuan Wu
Rethinking Backdoor Attacks. (83%)Alaa Khaddaj; Guillaume Leclerc; Aleksandar Makelov; Kristian Georgiev; Hadi Salman; Andrew Ilyas; Aleksander Madry
Towards Building More Robust Models with Frequency Bias. (81%)Qingwen Bu; Dong Huang; Heming Cui
Reinforcing POD based model reduction techniques in reaction-diffusion complex networks using stochastic filtering and pattern recognition. (26%)Abhishek Ajayakumar; Soumyendu Raha
2023-07-18
CertPri: Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space. (67%)Haibin Zheng; Jinyin Chen; Haibo Jin
FedDefender: Client-Side Attack-Tolerant Federated Learning. (50%)Sungwon Park; Sungwon Han; Fangzhao Wu; Sundong Kim; Bin Zhu; Xing Xie; Meeyoung Cha
Can Neural Network Memorization Be Localized? (4%)Pratyush Maini; Michael C. Mozer; Hanie Sedghi; Zachary C. Lipton; J. Zico Kolter; Chiyuan Zhang
2023-07-17
Analyzing the Impact of Adversarial Examples on Explainable Machine Learning. (99%)Prathyusha Devabhakthini; Sasmita Parida; Raj Mani Shukla; Suvendu Chandan Nayak
Adversarial Attacks on Traffic Sign Recognition: A Survey. (98%)Svetlana Pavlitska; Nico Lambing; J. Marius Zöllner
Discretization-based ensemble model for robust learning in IoT. (87%)Anahita Namvar; Chandra Thapa; Salil S. Kanhere
Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model. (83%)Rongke Liu
Experimental Security Analysis of DNN-based Adaptive Cruise Control under Context-Aware Perception Attacks. (11%)Xugui Zhou; Anqi Chen; Maxfield Kouzel; Haotian Ren; Morgan McCarty; Cristina Nita-Rotaru; Homa Alemzadeh
On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization. (2%)Akshay Mehra; Yunbei Zhang; Bhavya Kailkhura; Jihun Hamm
A Machine Learning based Empirical Evaluation of Cyber Threat Actors High Level Attack Patterns over Low level Attack Patterns in Attributing Attacks. (1%)Umara Noor; Sawera Shahid; Rimsha Kanwal; Zahid Rashid
2023-07-16
Towards Viewpoint-Invariant Visual Recognition via Adversarial Training. (83%)Shouwei Ruan; Yinpeng Dong; Hang Su; Jianteng Peng; Ning Chen; Xingxing Wei
Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound. (73%)Hanbo Cai; Pengcheng Zhang; Hai Dong; Yan Xiao; Stefanos Koffas; Yiming Li
Diffusion to Confusion: Naturalistic Adversarial Patch Generation Based on Diffusion Model for Object Detector. (10%)Shuo-Yen Lin; Ernie Chu; Che-Hsien Lin; Jun-Cheng Chen; Jia-Ching Wang
Lipschitz Continuous Algorithms for Covering Problems. (1%)Soh Kumabe; Yuichi Yoshida
2023-07-15
On the Robustness of Split Learning against Adversarial Attacks. (99%)Mingyuan Fan; Cen Chen; Chengyu Wang; Wenmeng Zhou; Jun Huang
Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training. (99%)Yechao Zhang; Shengshan Hu; Leo Yu Zhang; Junyu Shi; Minghui Li; Xiaogeng Liu; Wei Wan; Hai Jin
Unified Adversarial Patch for Cross-modal Attacks in the Physical World. (92%)Xingxing Wei; Yao Huang; Yitong Sun; Jie Yu
MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots. (2%)Gelei Deng; Yi Liu; Yuekang Li; Kailong Wang; Ying Zhang; Zefeng Li; Haoyu Wang; Tianwei Zhang; Yang Liu
2023-07-14
Vulnerability-Aware Instance Reweighting For Adversarial Training. (99%)Olukorede Fakorede; Ashutosh Kumar Nirala; Modeste Atsague; Jin Tian
Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning. (99%)Byung-Kwan Lee; Junho Kim; Yong Man Ro
On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks. (99%)Hafsa Bousbiat; Yassine Himeur; Abbes Amira; Wathiq Mansoor
RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World. (98%)Donghua Wang; Wen Yao; Tingsong Jiang; Chao Li; Xiaoqian Chen
Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation. (98%)Asif Hanif; Muzammal Naseer; Salman Khan; Mubarak Shah; Fahad Shahbaz Khan
Adversarial Training Over Long-Tailed Distribution. (84%)Guanlin Li; Guowen Xu; Tianwei Zhang
Structured Pruning of Neural Networks for Constraints Learning. (76%)Matteo Cacciola; Antonio Frangioni; Andrea Lodi
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy. (68%)Zihao Zhu; Mingda Zhang; Shaokui Wei; Li Shen; Yanbo Fan; Baoyuan Wu
Erasing, Transforming, and Noising Defense Network for Occluded Person Re-Identification. (31%)Neng Dong; Liyan Zhang; Shuanglin Yan; Hao Tang; Jinhui Tang
Certified Robustness for Large Language Models with Self-Denoising. (5%)Zhen Zhang; Guanhua Zhang; Bairu Hou; Wenqi Fan; Qing Li; Sijia Liu; Yang Zhang; Shiyu Chang
2023-07-13
Multi-objective Evolutionary Search of Variable-length Composite Semantic Perturbations. (99%)Jialiang Sun; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Introducing Foundation Models as Surrogate Models: Advancing Towards More Practical Adversarial Attacks. (99%)Jiaming Zhang; Jitao Sang; Qi Yi; Changsheng Xu
Defeating Proactive Jammers Using Deep Reinforcement Learning for Resource-Constrained IoT Networks. (1%)Abubakar Sani Ali; Shimaa Naser; Sami Muhaidat
Towards Traitor Tracing in Black-and-White-Box DNN Watermarking with Tardos-based Codes. (1%)Elena Rodriguez-Lois; Fernando Perez-Gonzalez
2023-07-12
Single-Class Target-Specific Attack against Interpretable Deep Learning Systems. (99%)Eldor Abdukhamidov; Mohammed Abuhamad; George K. Thiruvathukal; Hyoungshick Kim; Tamer Abuhmed
Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems. (99%)Eldor Abdukhamidov; Mohammed Abuhamad; Simon S. Woo; Eric Chan-Tin; Tamer Abuhmed
Rational Neural Network Controllers. (2%)Matthew Newton; Antonis Papachristodoulou
A Bayesian approach to quantifying uncertainties and improving generalizability in traffic prediction models. (1%)Agnimitra Sengupta; Sudeepta Mondal; Adway Das; S. Ilgin Guler
Misclassification in Automated Content Analysis Causes Bias in Regression. Can We Fix It? Yes We Can! (1%)Nathan TeBlunthuis; Valerie Hase; Chung-Hong Chan
2023-07-11
ATWM: Defense against adversarial malware based on adversarial training. (99%)Kun Li; Fan Zhang; Wei Guo
Membership Inference Attacks on DNNs using Adversarial Perturbations. (89%)Hassan Ali; Adnan Qayyum; Ala Al-Fuqaha; Junaid Qadir
On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models. (10%)Marija Ivanovska; Vitomir Štruc
Random-Set Convolutional Neural Network (RS-CNN) for Epistemic Deep Learning. (4%)Shireen Kudukkil Manchingal; Muhammad Mubashar; Kaizheng Wang; Keivan Shariatmadar; Fabio Cuzzolin
Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection. (2%)Hao Fu; Prashanth Krishnamurthy; Siddharth Garg; Farshad Khorrami
The Butterfly Effect in Artificial Intelligence Systems: Implications for AI Bias and Fairness. (1%)Emilio Ferrara
Memorization Through the Lens of Curvature of Loss Function Around Samples. (1%)Isha Garg; Deepak Ravikumar; Kaushik Roy
2023-07-10
Practical Trustworthiness Model for DNN in Dedicated 6G Application. (33%)Anouar Nechi; Ahmed Mahmoudi; Christoph Herold; Daniel Widmer; Thomas Kürner; Mladen Berekovic; Saleh Mulhem
Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology. (1%)Guillaume Vray; Devavrat Tomar; Behzad Bozorgtabar; Jean-Philippe Thiran
2023-07-09
GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty. (98%)Tao Wu; Tie Luo; Donald C. Wunsch
Enhancing Adversarial Robustness via Score-Based Optimization. (98%)Boya Zhang; Weijian Luo; Zhihua Zhang
2023-07-08
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification. (99%)Huafeng Li; Le Xu; Yafei Zhang; Dapeng Tao; Zhengtao Yu
Random Position Adversarial Patch for Vision Transformers. (83%)Mingzhen Shao
Robust Ranking Explanations. (38%)Chao Chen; Chenghua Guo; Guixiang Ma; Ming Zeng; Xi Zhang; Sihong Xie
2023-07-07
A Theoretical Perspective on Subnetwork Contributions to Adversarial Robustness. (81%)Jovon Craig; Josh Andle; Theodore S. Nowak; Salimeh Yasaei Sekeh
Scalable Membership Inference Attacks via Quantile Regression. (33%)Martin Bertran; Shuai Tang; Michael Kearns; Jamie Morgenstern; Aaron Roth; Zhiwei Steven Wu
RADAR: Robust AI-Text Detection via Adversarial Learning. (5%)Xiaomeng Hu; Pin-Yu Chen; Tsung-Yi Ho
Generation of Time-Varying Impedance Attacks Against Haptic Shared Control Steering Systems. (1%)Alireza Mohammadi; Hafiz Malik
2023-07-06
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks. (99%)Xu Han; Anmin Liu; Chenxuan Yao; Yanbo Fan; Kun He
NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic. (92%)Zi'ou Zheng; Xiaodan Zhu
Quantification of Uncertainty with Adversarial Models. (68%)Kajetan Schweighofer; Lukas Aichberger; Mykyta Ielanskyi; Günter Klambauer; Sepp Hochreiter
A Vulnerability of Attribution Methods Using Pre-Softmax Scores. (41%)Miguel Lerma; Mirtha Lucas
Probabilistic and Semantic Descriptions of Image Manifolds and Their Applications. (8%)Peter Tu; Zhaoyuan Yang; Richard Hartley; Zhiwei Xu; Jing Zhang; Yiwei Fu; Dylan Campbell; Jaskirat Singh; Tianyu Wang
T-MARS: Improving Visual Representations by Circumventing Text Feature Learning. (1%)Pratyush Maini; Sachin Goyal; Zachary C. Lipton; J. Zico Kolter; Aditi Raghunathan
2023-07-05
Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and their Impact. (98%)Jaydip Sen; Subhasis Dasgupta
DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications. (69%)Adam Ivankay; Mattia Rigotti; Pascal Frossard
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality. (67%)Peter Lorenz; Ricard Durall; Janis Keuper
GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations. (62%)Julia Lust; Alexandru P. Condurache
Securing Cloud FPGAs Against Power Side-Channel Attacks: A Case Study on Iterative AES. (5%)Nithyashankari Gummidipoondi Jayasankaran; Hao Guo; Satwik Patnaik; Jeyavijayan Rajendran; Jiang Hu
On the Adversarial Robustness of Generative Autoencoders in the Latent Space. (3%)Mingfei Lu; Badong Chen
2023-07-04
SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification. (99%)Junjie Wu; Dit-Yan Yeung
LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack. (83%)Joonkyo Shim; Hyunsoo Yoon
Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection. (68%)Delyan Boychev
Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction. (45%)Zitao Chen; Karthik Pattabiraman
Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling. (26%)Zhanhao Hu; Wenda Chu; Xiaopei Zhu; Hui Zhang; Bo Zhang; Xiaolin Hu
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems. (13%)Shuyi Wang; Guido Zuccon
Machine Learning-Based Intrusion Detection: Feature Selection versus Feature Extraction. (1%)Vu-Duc Ngo; Tuan-Cuong Vuong; Thien Van Luong; Hung Tran
Synthetic is all you need: removing the auxiliary data assumption for membership inference attacks against synthetic data. (1%)Florent Guépin; Matthieu Meeus; Ana-Maria Cretu; Yves-Alexandre de Montjoye
2023-07-03
Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems. (99%)Debopam Sanyal (Georgia Institute of Technology); Jui-Tse Hung (Georgia Institute of Technology); Manav Agrawal (Georgia Institute of Technology); Prahlad Jasti (Georgia Institute of Technology); Shahab Nikkhoo (University of California, Riverside); Somesh Jha (University of Wisconsin-Madison); Tianhao Wang (University of Virginia); Sibin Mohan (George Washington University); Alexey Tumanov (Georgia Institute of Technology)
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives. (83%)Yudong Gao; Honglong Chen; Peng Sun; Junjian Li; Anqing Zhang; Zhibo Wang
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks. (62%)Aysha Thahsin Zahir Ismail; Raj Mani Shukla
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? (62%)Fnu Suya; Xiao Zhang; Yuan Tian; David Evans
Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives. (45%)Danele Lunghi; Alkis Simitsis; Olivier Caelen; Gianluca Bontempi
Analysis of Task Transferability in Large Pre-trained Classifiers. (13%)Akshay Mehra; Yunbei Zhang; Jihun Hamm
Enhancing the Robustness of QMIX against State-adversarial Attacks. (4%)Weiran Guo; Guanjun Liu; Ziyuan Zhou; Ling Wang; Jiacun Wang
Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration. (1%)Kemal Oksuz; Tom Joy; Puneet K. Dokania
2023-07-02
Query-Efficient Decision-based Black-Box Patch Attack. (99%)Zhaoyu Chen; Bo Li; Shuang Wu; Shouhong Ding; Wenqiang Zhang
Interpretability and Transparency-Driven Detection and Transformation of Textual Adversarial Examples (IT-DT). (99%)Bushra Sabir; M. Ali Babar; Sharif Abuadbba
From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. (10%)Maanak Gupta; CharanKumar Akiri; Kshitiz Aryal; Eli Parker; Lopamudra Praharaj
CLIMAX: An exploration of Classifier-Based Contrastive Explanations. (2%)Praharsh Nanavati; Ranjitha Prasad
2023-07-01
Common Knowledge Learning for Generating Transferable Adversarial Examples. (99%)Ruijie Yang; Yuanfang Guo; Junfu Wang; Jiantao Zhou; Yunhong Wang
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey. (99%)Hanieh Naderi; Ivan V. Bajić
Brightness-Restricted Adversarial Attack Patch. (75%)Mingzhen Shao
Fedward: Flexible Federated Backdoor Defense Framework with Non-IID Data. (54%)Zekai Chen; Fuyi Wang; Zhiwei Zheng; Ximeng Liu; Yujie Lin
Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training. (26%)Dario Lazzaro; Antonio Emanuele Cinà; Maura Pintor; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency. (13%)Yan Wang; Yuhang Li; Ruihao Gong; Aishan Liu; Yanfei Wang; Jian Hu; Yongqiang Yao; Yunchen Zhang; Tianzi Xiao; Fengwei Yu; Xianglong Liu
Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD. (10%)Anvith Thudi; Hengrui Jia; Casey Meehan; Ilia Shumailov; Nicolas Papernot
CasTGAN: Cascaded Generative Adversarial Network for Realistic Tabular Data Synthesis. (2%)Abdallah Alshantti; Damiano Varagnolo; Adil Rasheed; Aria Rahmati; Frank Westad
FedDefender: Backdoor Attack Defense in Federated Learning. (2%)Waris Gill (Virginia Tech); Ali Anwar (University of Minnesota Twin Cities); Muhammad Ali Gulzar (Virginia Tech)
Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning. (1%)Md Tamjid Hossain; Hung La
2023-06-30
Defense against Adversarial Cloud Attack on Remote Sensing Salient Object Detection. (99%)Huiming Sun; Lan Fu; Jinlong Li; Qing Guo; Zibo Meng; Tianyun Zhang; Yuewei Lin; Hongkai Yu
Efficient Backdoor Removal Through Natural Gradient Fine-tuning. (8%)Nazmul Karim; Abdullah Al Arafat; Umar Khalid; Zhishan Guo; Naznin Rahnavard
Minimum-norm Sparse Perturbations for Opacity in Linear Systems. (1%)Varkey M John; Vaibhav Katewa
2023-06-29
Defending Black-box Classifiers by Bayesian Boundary Correction. (99%)He Wang; Yunfeng Diao
Towards Optimal Randomized Strategies in Adversarial Example Game. (96%)Jiahao Xie; Chao Zhang; Weijie Liu; Wensong Bai; Hui Qian
Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features. (13%)Mingli Zhu; Shaokui Wei; Hongyuan Zha; Baoyuan Wu
NeuralFuse: Learning to Improve the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes. (1%)Hao-Lun Sun; Lei Hsiung; Nandhini Chandramoorthy; Pin-Yu Chen; Tsung-Yi Ho
2023-06-28
Mitigating the Accuracy-Robustness Trade-off via Multi-Teacher Adversarial Distillation. (99%)Shiji Zhao; Xizhe Wang; Xingxing Wei
Boosting Adversarial Transferability with Learnable Patch-wise Masks. (99%)Xingxing Wei; Shiji Zhao
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack. (99%)Jie Ning; Yao Li; Zhichang Guo
Group-based Robustness: A General Framework for Customized Robustness in the Real World. (98%)Weiran Lin; Keane Lucas; Neo Eyal; Lujo Bauer; Michael K. Reiter; Mahmood Sharif
Distributional Modeling for Location-Aware Adversarial Patches. (98%)Xingxing Wei; Shouwei Ruan; Yinpeng Dong; Hang Su
Enrollment-stage Backdoor Attacks on Speaker Recognition Systems via Adversarial Ultrasound. (96%)Xinfeng Li; Junning Ze; Chen Yan; Yushi Cheng; Xiaoyu Ji; Wenyuan Xu
Does Saliency-Based Training bring Robustness for Deep Neural Networks in Image Classification? (93%)Ali Karkehabadi
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks. (50%)Wenxiao Wang; Soheil Feizi
On the Exploitability of Instruction Tuning. (13%)Manli Shu; Jiongxiao Wang; Chen Zhu; Jonas Geiping; Chaowei Xiao; Tom Goldstein
2023-06-27
Advancing Adversarial Training by Injecting Booster Signal. (98%)Hong Joo Lee; Youngjoon Yu; Yong Man Ro
IMPOSITION: Implicit Backdoor Attack through Scenario Injection. (96%)Mozhgan Pourkeshavarz; Mohammad Sabokrou; Amir Rasouli
Adversarial Training for Graph Neural Networks. (92%)Lukas Gosch; Simon Geisler; Daniel Sturm; Bertrand Charpentier; Daniel Zügner; Stephan Günnemann
Robust Proxy: Improving Adversarial Robustness by Robust Proxy Learning. (89%)Hong Joo Lee; Yong Man Ro
Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability. (87%)Marco Alecci; Mauro Conti; Francesco Marchiori; Luca Martinelli; Luca Pajola
[Re] Double Sampling Randomized Smoothing. (69%)Aryan Gupta; Sarthak Gupta; Abhay Kumar; Harsh Dugar
Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets. (68%)Yimu Wang; Dinghuai Zhang; Yihan Wu; Heng Huang; Hongyang Zhang
Catch Me If You Can: A New Low-Rate DDoS Attack Strategy Disguised by Feint. (26%)Tianyang Cai; Yuqi Li; Tao Jia; Leo Yu Zhang; Zheng Yang
Shilling Black-box Review-based Recommender Systems through Fake Review Generation. (1%)Hung-Yun Chiang; Yi-Syuan Chen; Yun-Zhu Song; Hong-Han Shuai; Jason S. Chang
2023-06-26
On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection. (99%)Songyang Gao; Shihan Dou; Qi Zhang; Xuanjing Huang; Jin Ma; Ying Shan
Are aligned neural networks adversarially aligned? (99%)Nicholas Carlini; Milad Nasr; Christopher A. Choquette-Choo; Matthew Jagielski; Irena Gao; Anas Awadalla; Pang Wei Koh; Daphne Ippolito; Katherine Lee; Florian Tramer; Ludwig Schmidt
The race to robustness: exploiting fragile models for urban camouflage and the imperative for machine learning security. (92%)Harriet Farlow; Matthew Garratt; Gavin Mount; Tim Lynar
3D-Aware Adversarial Makeup Generation for Facial Privacy Protection. (92%)Yueming Lyu; Yue Jiang; Ziwen He; Bo Peng; Yunfan Liu; Jing Dong
Towards Sybil Resilience in Decentralized Learning. (80%)Thomas Werthenbach; Johan Pouwelse
On the Resilience of Machine Learning-Based IDS for Automotive Networks. (78%)Ivo Zenden; Han Wang; Alfonso Iacovazzi; Arash Vahidi; Rolf Blom; Shahid Raza
DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization. (75%)Songyang Gao; Shihan Dou; Yan Liu; Xiao Wang; Qi Zhang; Zhongyu Wei; Jin Ma; Ying Shan
PWSHAP: A Path-Wise Explanation Model for Targeted Variables. (8%)Lucile Ter-Minassian; Oscar Clivio; Karla Diaz-Ordaz; Robin J. Evans; Chris Holmes
2023-06-25
A Spectral Perspective towards Understanding and Improving Adversarial Robustness. (99%)Binxiao Huang; Rui Lin; Chaofan Tao; Ngai Wong
On Evaluating the Adversarial Robustness of Semantic Segmentation Models. (99%)Levente Halmosi; Mark Jelasity
Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training. (98%)Fan Liu; Weijia Zhang; Hao Liu
Enhancing Adversarial Training via Reweighting Optimization Trajectory. (97%)Tianjin Huang; Shiwei Liu; Tianlong Chen; Meng Fang; Li Shen; Vlado Menkovski; Lu Yin; Yulong Pei; Mykola Pechenizkiy
RobuT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations. (87%)Yilun Zhao; Chen Zhao; Linyong Nan; Zhenting Qi; Wenlin Zhang; Xiangru Tang; Boyu Mi; Dragomir Radev
Computational Asymmetries in Robust Classification. (80%)Samuele Marro; Michele Lombardi
2023-06-24
Machine Learning needs its own Randomness Standard: Randomised Smoothing and PRNG-based attacks. (98%)Pranav Dahiya; Ilia Shumailov; Ross Anderson
Boosting Model Inversion Attacks with Adversarial Examples. (98%)Shuai Zhou; Tianqing Zhu; Dayong Ye; Xin Yu; Wanlei Zhou
Similarity Preserving Adversarial Graph Contrastive Learning. (96%)Yeonjun In; Kanghoon Yoon; Chanyoung Park
Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks. (70%)Zeming Wei; Xiyue Zhang; Yihao Zhang; Meng Sun
2023-06-23
Creating Valid Adversarial Examples of Malware. (99%)Matouš Kozák; Martin Jureček; Mark Stamp; Fabio Di Troia
Adversarial Robustness Certification for Bayesian Neural Networks. (92%)Matthew Wicker; Andrea Patane; Luca Laurenti; Marta Kwiatkowska
A First Order Meta Stackelberg Method for Robust Federated Learning. (10%)Yunian Pan; Tao Li; Henger Li; Tianyi Xu; Zizhan Zheng; Quanyan Zhu
2023-06-22
Visual Adversarial Examples Jailbreak Large Language Models. (99%)Xiangyu Qi; Kaixuan Huang; Ashwinee Panda; Mengdi Wang; Prateek Mittal
Towards quantum enhanced adversarial robustness in machine learning. (99%)Maxwell T. West; Shu-Lok Tsang; Jia S. Low; Charles D. Hill; Christopher Leckie; Lloyd C. L. Hollenberg; Sarah M. Erfani; Muhammad Usman
Rethinking the Backward Propagation for Adversarial Transferability. (99%)Xiaosen Wang; Kangheng Tong; Kun He
Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces. (96%)Fahad Shamshad; Koushik Srivatsan; Karthik Nandakumar
Adversarial Resilience in Sequential Prediction via Abstention. (93%)Surbhi Goel; Steve Hanneke; Shay Moran; Abhishek Shetty
Document Image Cleaning using Budget-Aware Black-Box Approximation. (92%)Ganesh Tata; Katyani Singh; Eric Van Oeveren; Nilanjan Ray
Anticipatory Thinking Challenges in Open Worlds: Risk Management. (81%)Adam Amos-Binks; Dustin Dannenhauer; Leilani H. Gilpin
Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models. (75%)Francesco Croce; Naman D Singh; Matthias Hein
A First Order Meta Stackelberg Method for Robust Federated Learning (Technical Report). (33%)Henger Li; Tianyi Xu; Tao Li; Yunian Pan; Quanyan Zhu; Zizhan Zheng
Impacts and Risk of Generative AI Technology on Cyber Defense. (4%)Subash Neupane; Ivan A. Fernandez; Sudip Mittal; Shahram Rahimi
2023-06-21
Adversarial Attacks Neutralization via Data Set Randomization. (99%)Mouna Rabhi; Roberto Di Pietro
A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking. (92%)Shaohui Mei; Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Lap-Pui Chau
Sample Attackability in Natural Language Adversarial Attacks. (92%)Vyas Raina; Mark Gales
Revisiting Image Classifier Training for Improved Certified Robust Defense against Adversarial Patches. (76%)Aniruddha Saha; Shuhua Yu; Arash Norouzzadeh; Wan-Yi Lin; Chaithanya Kumar Mummadi
DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum. (47%)Xiaolan Gu; Ming Li; Li Xiong
FFCV: Accelerating Training by Removing Data Bottlenecks. (3%)Guillaume Leclerc; Andrew Ilyas; Logan Engstrom; Sung Min Park; Hadi Salman; Aleksander Madry
2023-06-20
Evaluating Adversarial Robustness of Convolution-based Human Motion Prediction. (99%)Chengxu Duan; Zhicheng Zhang; Xiaoli Liu; Yonghao Dang; Jianqin Yin
Reversible Adversarial Examples with Beam Search Attack and Grayscale Invariance. (99%)Haodong Zhang; Chi Man Pun; Xia Du
Universal adversarial perturbations for multiple classification tasks with quantum classifiers. (99%)Yun-Zhong Qiu
FDInet: Protecting against DNN Model Extraction via Feature Distortion Index. (50%)Hongwei Yao; Zheng Li; Haiqin Weng; Feng Xue; Kui Ren; Zhan Qin
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. (26%)Boxin Wang; Weixin Chen; Hengzhi Pei; Chulin Xie; Mintong Kang; Chenhui Zhang; Chejian Xu; Zidi Xiong; Ritik Dutta; Rylan Schaeffer; Sang T. Truong; Simran Arora; Mantas Mazeika; Dan Hendrycks; Zinan Lin; Yu Cheng; Sanmi Koyejo; Dawn Song; Bo Li
Towards a robust and reliable deep learning approach for detection of compact binary mergers in gravitational wave data. (3%)Shreejit Jadhav; Mihir Shrivastava; Sanjit Mitra
Mitigating Speculation-based Attacks through Configurable Hardware/Software Co-design. (1%)Ali Hajiabadi; Archit Agarwal; Andreas Diavastos; Trevor E. Carlson
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching. (1%)Duy M. H. Nguyen; Hoang Nguyen; Nghiem T. Diep; Tan N. Pham; Tri Cao; Binh T. Nguyen; Paul Swoboda; Nhat Ho; Shadi Albarqouni; Pengtao Xie; Daniel Sonntag; Mathias Niepert
2023-06-19
Comparative Evaluation of Recent Universal Adversarial Perturbations in Image Classification. (99%)Juanjuan Weng; Zhiming Luo; Dazhen Lin; Shaozi Li
Adversarial Robustness of Prompt-based Few-Shot Learning for Natural Language Understanding. (75%)Venkata Prabhakara Sarath Nookala; Gaurav Verma; Subhabrata Mukherjee; Srijan Kumar
Adversarial Training Should Be Cast as a Non-Zero-Sum Game. (73%)Alexander Robey; Fabian Latorre; George J. Pappas; Hamed Hassani; Volkan Cevher
Eigenpatches -- Adversarial Patches from Principal Components. (38%)Jens Bayer; Stefan Becker; David Münch; Michael Arens
Practical and General Backdoor Attacks against Vertical Federated Learning. (13%)Yuexin Xuan; Xiaojun Chen; Zhendong Zhao; Bisheng Tang; Ye Dong
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming. (5%)Steven Adams; Andrea Patane; Morteza Lahijanian; Luca Laurenti
2023-06-17
Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses. (98%)Mohamed Amine Ferrag; Othmane Friha; Burak Kantarci; Norbert Tihanyi; Lucas Cordeiro; Merouane Debbah; Djallel Hamouda; Muna Al-Hawawreh; Kim-Kwang Raymond Choo
Understanding Certified Training with Interval Bound Propagation. (38%)Yuhao Mao; Mark Niklas Müller; Marc Fischer; Martin Vechev
GlyphNet: Homoglyph domains dataset and detection using attention-based Convolutional Neural Networks. (9%)Akshat Gupta; Laxman Singh Tomar; Ridhima Garg
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network. (1%)Fan Liu; Siqi Lai; Yansong Ning; Hao Liu
2023-06-16
Wasserstein distributional robustness of neural networks. (99%)Xingjian Bai; Guangyi He; Yifan Jiang; Jan Obloj
Query-Free Evasion Attacks Against Machine Learning-Based Malware Detectors with Generative Adversarial Networks. (99%)Daniel Gibert; Jordi Planes; Quan Le; Giulio Zizzo
You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks. (98%)Edward Raff; Michel Benaroch; Andrew L. Farris
Towards Better Certified Segmentation via Diffusion Models. (73%)Othmane Laousy; Alexandre Araujo; Guillaume Chassagnon; Marie-Pierre Revel; Siddharth Garg; Farshad Khorrami; Maria Vakalopoulou
Adversarially robust clustering with optimality guarantees. (4%)Soham Jana; Kun Yang; Sanjeev Kulkarni
CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search. (1%)Fahad Shamshad; Muzammal Naseer; Karthik Nandakumar
2023-06-15
DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in the Physical World. (99%)Caixin Kang; Yinpeng Dong; Zhengyi Wang; Shouwei Ruan; Hang Su; Xingxing Wei
OVLA: Neural Network Ownership Verification using Latent Watermarks. (64%)Feisi Fu; Wenchao Li
Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks. (62%)Hongcheng Gao; Hao Zhang; Yinpeng Dong; Zhijie Deng
On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation. (33%)Zhanke Zhou; Chenyu Zhou; Xuan Li; Jiangchao Yao; Quanming Yao; Bo Han
Robustness Analysis on Foundational Segmentation Models. (9%)Madeline Chantry Schiappa; Sachidanand VS; Yunhao Ge; Ondrej Miksik; Yogesh S. Rawat; Vibhav Vineet
Explore, Establish, Exploit: Red Teaming Language Models from Scratch. (1%)Stephen Casper; Jason Lin; Joe Kwon; Gatlen Culp; Dylan Hadfield-Menell
Community Detection Attack against Collaborative Learning-based Recommender Systems. (1%)Yacine Belal; Sonia Ben Mokhtar; Mohamed Maouche; Anthony Simonet-Boulogne
Concealing CAN Message Sequences to Prevent Schedule-based Bus-off Attacks. (1%)Sunandan Adhikary; Ipsita Koley; Arkaprava Sain; Soumyadeep Das; Shuvam Saha; Soumyajit Dey
2023-06-14
Reliable Evaluation of Adversarial Transferability. (99%)Wenqian Yu; Jindong Gu; Zhijiang Li; Philip Torr
A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models. (99%)Sahar Sadrizadeh; Clément Barbier; Ljiljana Dolamic; Pascal Frossard
X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail. (98%)Omer Hofman; Amit Giloni; Yarin Hayun; Ikuya Morikawa; Toshiya Shimizu; Yuval Elovici; Asaf Shabtai
Augment then Smooth: Reconciling Differential Privacy with Certified Robustness. (98%)Jiapeng Wu; Atiyeh Ashari Ghomi; David Glukhov; Jesse C. Cresswell; Franziska Boenisch; Nicolas Papernot
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios. (83%)Hong Sun; Ziqiang Li; Pengfei Xia; Heng Li; Beihao Xia; Yi Wu; Bin Li
A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy. (75%)Enyan Dai; Limeng Cui; Zhengyang Wang; Xianfeng Tang; Yinghan Wang; Monica Cheng; Bing Yin; Suhang Wang
On the Robustness of Latent Diffusion Models. (73%)Jianping Zhang; Zhuoer Xu; Shiwen Cui; Changhua Meng; Weibin Wu; Michael R. Lyu
Improving Selective Visual Question Answering by Learning from Your Peers. (1%)Corentin Dancette; Spencer Whitehead; Rishabh Maheshwary; Ramakrishna Vedantam; Stefan Scherer; Xinlei Chen; Matthieu Cord; Marcus Rohrbach
2023-06-13
Theoretical Foundations of Adversarially Robust Learning. (99%)Omar Montasser
Finite Gaussian Neurons: Defending against adversarial attacks by making neural networks say "I don't know". (99%)Felix Grezes
I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models. (99%)Raz Lapid; Moshe Sipper
Robustness of SAM: Segment Anything Under Corruptions and Beyond. (98%)Yu Qiao; Chaoning Zhang; Taegoo Kang; Donghun Kim; Chenshuang Zhang; Choong Seon Hong
Area is all you need: repeatable elements make stronger adversarial attacks. (98%)Dillon Niederhut
Malafide: a novel adversarial convolutive noise attack against deepfake and spoofing detection systems. (96%)Michele Panariello; Wanying Ge; Hemlata Tak; Massimiliano Todisco; Nicholas Evans
Revisiting and Advancing Adversarial Training Through A Simple Baseline. (87%)Hong Liu
Generative Watermarking Against Unauthorized Subject-Driven Image Synthesis. (78%)Yihan Ma; Zhengyu Zhao; Xinlei He; Zheng Li; Michael Backes; Yang Zhang
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios. (22%)Haochen Mei; Gaolei Li; Jun Wu; Longfei Zheng
DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation. (22%)Zhicong Yan; Shenghong Li; Ruijie Zhao; Yuan Tian; Yuanyuan Zhao
Temporal Gradient Inversion Attacks with Robust Optimization. (8%)Bowen Li; Hanlin Gu; Ruoxin Chen; Jie Li; Chentao Wu; Na Ruan; Xueming Si; Lixin Fan
Few-shot Multi-domain Knowledge Rearming for Context-aware Defence against Advanced Persistent Threats. (2%)Gaolei Li; Yuanyuan Zhao; Wenqi Wei; Yuchen Liu
2023-06-12
When Vision Fails: Text Attacks Against ViT and OCR. (99%)Nicholas Boucher; Jenny Blessing; Ilia Shumailov; Ross Anderson; Nicolas Papernot
AROID: Improving Adversarial Robustness through Online Instance-wise Data Augmentation. (99%)Lin Li; Jianing Qiu; Michael Spratling
How robust accuracy suffers from certified training with convex relaxations. (73%)Piersilvio De Bartolomeis; Jacob Clarysse; Amartya Sanyal; Fanny Yang
Graph Agent Network: Empowering Nodes with Decentralized Communications Capabilities for Adversarial Resilience. (54%)Ao Liu; Wenshan Li; Tao Li; Beibei Li; Hanyuan Huang; Guangquan Xu; Pan Zhou
Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions. (13%)Harshitha Machiraju; Michael H. Herzog; Pascal Frossard
On the Robustness of Removal-Based Feature Attributions. (11%)Chris Lin; Ian Covert; Su-In Lee
VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models. (1%)Sheng-Yen Chou; Pin-Yu Chen; Tsung-Yi Ho
2023-06-11
Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework. (99%)Minglei Yin; Bin Liu; Neil Zhenqiang Gong; Xin Li
Neural Architecture Design and Robustness: A Dataset. (76%)Steffen Jung; Jovita Lukasik; Margret Keuper
TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models. (68%)Jiaqi Xue; Mengxin Zheng; Ting Hua; Yilin Shen; Yepeng Liu; Ladislau Boloni; Qian Lou
2023-06-10
Boosting Adversarial Robustness using Feature Level Stochastic Smoothing. (92%)Sravanti Addepalli; Samyak Jain; Gaurang Sriramanan; R. Venkatesh Babu
NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations. (83%)Yonggan Fu; Ye Yuan; Souvik Kundu; Shang Wu; Shunyao Zhang; Yingyan Lin
The Defense of Networked Targets in General Lotto games. (13%)Adel Aghajan; Keith Paarporn; Jason R. Marden
2023-06-09
Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions. (84%)Ezgi Korkmaz; Jonah Brown-Cohen
GAN-CAN: A Novel Attack to Behavior-Based Driver Authentication Systems. (70%)Emad Efatinasab; Francesco Marchiori; Denis Donadel; Alessandro Brighente; Mauro Conti
Overcoming Adversarial Attacks for Human-in-the-Loop Applications. (45%)Ryan McCoppin; Marla Kennedy; Platon Lukyanenko; Sean Kennedy
2023-06-08
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning. (99%)Mohamed el Shehaby; Ashraf Matrawy
Boosting Adversarial Transferability by Achieving Flat Local Maxima. (99%)Zhijin Ge; Hongying Liu; Xiaosen Wang; Fanhua Shang; Yuanyuan Liu
COVER: A Heuristic Greedy Adversarial Attack on Prompt-based Learning in Language Models. (93%)Zihao Tan; Qingliang Chen; Wenbin Zhu; Yongjian Huang
Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations. (83%)Hyeonjeong Ha; Minseon Kim; Sung Ju Hwang
A Melting Pot of Evolution and Learning. (41%)Moshe Sipper; Achiya Elyasaf; Tomer Halperin; Zvika Haramaty; Raz Lapid; Eyal Segal; Itai Tzruia; Snir Vitrack Tamam
PriSampler: Mitigating Property Inference of Diffusion Models. (12%)Hailong Hu; Jun Pang
Robustness Testing for Multi-Agent Reinforcement Learning: State Perturbations on Critical Agents. (10%)Ziyuan Zhou; Guanjun Liu
G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering. (10%)Hao Yu; Chuan Ma; Meng Liu; Xinwang Liu; Zhe Liu; Ming Ding
Re-aligning Shadow Models can Improve White-box Membership Inference Attacks. (10%)Ana-Maria Cretu; Daniel Jones; Yves-Alexandre de Montjoye; Shruti Tople
Conservative Prediction via Data-Driven Confidence Minimization. (8%)Caroline Choi; Fahim Tajwar; Yoonho Lee; Huaxiu Yao; Ananya Kumar; Chelsea Finn
Robust Framework for Explanation Evaluation in Time Series Classification. (2%)Thu Trang Nguyen; Thach Le Nguyen; Georgiana Ifrim
Enhancing Robustness of AI Offensive Code Generators via Data Augmentation. (2%)Cristina Improta; Pietro Liguori; Roberto Natella; Bojan Cukic; Domenico Cotroneo
FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs. (1%)Shanshan Han; Baturalp Buyukates; Zijian Hu; Han Jin; Weizhao Jin; Lichao Sun; Xiaoyang Wang; Chulin Xie; Kai Zhang; Qifan Zhang; Yuhui Zhang; Chaoyang He; Salman Avestimehr
Open Set Relation Extraction via Unknown-Aware Training. (1%)Jun Zhao; Xin Zhao; Wenyu Zhan; Qi Zhang; Tao Gui; Zhongyu Wei; Yunwen Chen; Xiang Gao; Xuanjing Huang
2023-06-07
Extracting Cloud-based Model with Prior Knowledge. (99%)Shiqian Zhao; Kangjie Chen; Meng Hao; Jian Zhang; Guowen Xu; Hongwei Li; Tianwei Zhang
Expanding Scope: Adapting English Adversarial Attacks to Chinese. (99%)Hanyu Liu; Chengyuan Cai; Yanjun Qi
PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts. (92%)Xiangjue Dong; Yun He; Ziwei Zhu; James Caverlee
Optimal Transport Model Distributional Robustness. (83%)Van-Anh Nguyen; Trung Le; Anh Tuan Bui; Thanh-Toan Do; Dinh Phung
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. (76%)Kaijie Zhu; Jindong Wang; Jiaheng Zhou; Zichen Wang; Hao Chen; Yidong Wang; Linyi Yang; Wei Ye; Neil Zhenqiang Gong; Yue Zhang; Xing Xie
A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions. (45%)Darshan Thaker; Paris Giampouras; René Vidal
Faithful Knowledge Distillation. (41%)Tom A. Lamb; Rudy Brunel; Krishnamurthy DJ Dvijotham; M. Pawan Kumar; Philip H. S. Torr; Francisco Eiras
Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations. (16%)Prithviraj Dasgupta
Can current NLI systems handle German word order? Investigating language model performance on a new German challenge set of minimal pairs. (15%)Ines Reinig; Katja Markert
Adversarial Sample Detection Through Neural Network Transport Dynamics. (10%)Skander Karkar; Patrick Gallinari; Alain Rakotomamonjy
2023-06-06
Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters. (99%)Xingxing Wei; Shiji Zhao
Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations. (97%)Torsten Krauß; Alexandra Dmitrienko
Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings. (93%)Klim Kireev; Maksym Andriushchenko; Carmela Troncoso; Nicolas Flammarion
Adversarial Attacks and Defenses in Explainable Artificial Intelligence: A Survey. (64%)Hubert Baniecki; Przemyslaw Biecek
Exploring Model Dynamics for Accumulative Poisoning Discovery. (62%)Jianing Zhu; Xiawei Guo; Jiangchao Yao; Chao Du; Li He; Shuo Yuan; Tongliang Liu; Liang Wang; Bo Han
Membership inference attack with relative decision boundary distance. (33%)JiaCheng Xu; ChengXiang Tan
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex. (8%)Drew Linsley; Ivan F. Rodriguez; Thomas Fel; Michael Arcaro; Saloni Sharma; Margaret Livingstone; Thomas Serre
Adversarial Attacks and Defenses for Semantic Communication in Vehicular Metaverses. (1%)Jiawen Kang; Jiayi He; Hongyang Du; Zehui Xiong; Zhaohui Yang; Xumin Huang; Shengli Xie
2023-06-05
Evading Black-box Classifiers Without Breaking Eggs. (99%)Edoardo Debenedetti; Nicholas Carlini; Florian Tramèr
Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception. (99%)Drew Linsley; Pinyuan Feng; Thibaut Boissin; Alekh Karkada Ashok; Thomas Fel; Stephanie Olaiya; Thomas Serre
Evaluating robustness of support vector machines with the Lagrangian dual approach. (97%)Yuting Liu; Hong Gu; Pan Qin
A Robust Likelihood Model for Novelty Detection. (93%)Ranya Almohsen; Shivang Patel; Donald A. Adjeroh; Gianfranco Doretto
Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning. (86%)Lucas Beerens; Desmond J. Higham
Enhance Diffusion to Improve Robust Generalization. (76%)Jianhui Sun; Sanchit Sinha; Aidong Zhang
KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations. (68%)Myeongjun Jang; Bodhisattwa Prasad Majumder; Julian McAuley; Thomas Lukasiewicz; Oana-Maria Camburu
Stable Diffusion is Unstable. (45%)Chengbin Du; Yanxi Li; Zhongwei Qiu; Chang Xu
Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization. (1%)Yibing Liu; Chris Xing Tian; Haoliang Li; Lei Ma; Shiqi Wang
Security Knowledge-Guided Fuzzing of Deep Learning Libraries. (1%)Nima Shiri Harzevili; Hung Viet Pham; Song Wang
2023-06-04
Adversary for Social Good: Leveraging Adversarial Attacks to Protect Personal Attribute Privacy. (98%)Xiaoting Li; Lingwei Chen; Dinghao Wu
Aerial Swarm Defense using Interception and Herding Strategies. (1%)Vishnu S. Chipade; Dimitra Panagou
2023-06-03
Towards Black-box Adversarial Example Detection: A Data Reconstruction-based Method. (99%)Yifei Gao; Zhiyu Lin; Yunfan Yang; Jitao Sang
Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games. (67%)Tyler Malloy; Cleotilde Gonzalez
Can Directed Graph Neural Networks be Adversarially Robust? (56%)Zhichao Hou; Xitong Zhang; Wei Wang; Charu C. Aggarwal; Xiaorui Liu
Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training. (13%)Pucheng Dang; Xing Hu; Kaidi Xu; Jinhao Duan; Di Huang; Husheng Han; Rui Zhang; Zidong Du; Qi Guo; Yunji Chen
Benchmarking Robustness of Adaptation Methods on Pre-trained Vision-Language Models. (1%)Shuo Chen; Jindong Gu; Zhen Han; Yunpu Ma; Philip Torr; Volker Tresp
2023-06-02
Why Clean Generalization and Robust Overfitting Both Happen in Adversarial Training. (99%)Binghui Li; Yuanzhi Li
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models. (92%)Zonghan Yang; Tianyu Pang; Yang Liu
Adaptive Attractors: A Defense Strategy against ML Adversarial Collusion Attacks. (83%)Jiyi Zhang; Han Fang; Ee-Chien Chang
Poisoning Network Flow Classifiers. (61%)Giorgio Severi; Simona Boboila; Alina Oprea; John Holodnak; Kendra Kratkiewicz; Jason Matterer
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization. (54%)Javier Carnerero-Cano; Luis Muñoz-González; Phillippa Spencer; Emil C. Lupu
Robust low-rank training via approximate orthonormal constraints. (22%)Dayana Savostianova; Emanuele Zangrando; Gianluca Ceruti; Francesco Tudisco
Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations. (13%)Dou Hu; Yinan Bao; Lingwei Wei; Wei Zhou; Songlin Hu
Improving Adversarial Robustness of DEQs with Explicit Regulations Along the Neural Dynamics. (11%)Zonghan Yang; Peng Li; Tianyu Pang; Yang Liu
Covert Communication Based on the Poisoning Attack in Federated Learning. (10%)Junchuan Liang; Rong Wang
Invisible Image Watermarks Are Provably Removable Using Generative AI. (10%)Xuandong Zhao; Kexun Zhang; Zihao Su; Saastha Vasan; Ilya Grishchenko; Christopher Kruegel; Giovanni Vigna; Yu-Xiang Wang; Lei Li
VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations. (3%)Hoang-Quoc Nguyen-Son; Seira Hidano; Kazuhide Fukushima; Shinsaku Kiyomoto; Isao Echizen
Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation. (2%)Zhengyue Zhao; Jinhao Duan; Xing Hu; Kaidi Xu; Chenan Wang; Rui Zhang; Zidong Du; Qi Guo; Yunji Chen
Towards Robust GAN-generated Image Detection: a Multi-view Completion Representation. (1%)Chi Liu; Tianqing Zhu; Sheng Shen; Wanlei Zhou
Improving the generalizability and robustness of large-scale traffic signal control. (1%)Tianyu Shi; Francois-Xavier Devailly; Denis Larocque; Laurent Charlin
2023-06-01
Adversarial Attack Based on Prediction-Correction. (99%)Chen Wan; Fangjun Huang
Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations. (96%)Yang Sui; Zhuohang Li; Ding Ding; Xiang Pan; Xiaozhong Xu; Shan Liu; Zhenzhong Chen
Intriguing Properties of Text-guided Diffusion Models. (92%)Qihao Liu; Adam Kortylewski; Yutong Bai; Song Bai; Alan Yuille
Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective. (87%)Andi Zhang; Damon Wischik
Robust Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers. (82%)Ruotong Wang; Hongrui Chen; Zihao Zhu; Li Liu; Yong Zhang; Yanbo Fan; Baoyuan Wu
Improving the Robustness of Summarization Systems with Dual Augmentation. (76%)Xiuying Chen; Guodong Long; Chongyang Tao; Mingzhe Li; Xin Gao; Chengqi Zhang; Xiangliang Zhang
Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review. (38%)Mathias Lundteigen Mohus; Jinyue Li
Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk? (13%)Iyiola E. Olatunji; Anmar Hizber; Oliver Sihlovec; Megha Khosla
CALICO: Self-Supervised Camera-LiDAR Contrastive Pre-training for BEV Perception. (13%)Jiachen Sun; Haizhong Zheng; Qingzhao Zhang; Atul Prakash; Z. Morley Mao; Chaowei Xiao
ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-based Systems. (4%)Mingyi Zhou; Xiang Gao; Jing Wu; John Grundy; Xiao Chen; Chunyang Chen; Li Li
2023-05-31
Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis. (98%)Mst Shapna Akter; Hossain Shahriar; Iysa Iqbal; MD Hossain; M. A. Karim; Victor Clincy; Razvan Voicu
Graph-based methods coupled with specific distributional distances for adversarial attack detection. (98%)Dwight Nwaigwe; Lucrezia Carboni; Martial Mermillod; Sophie Achard; Michel Dojat
Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach. (98%)Mohammed Alkhowaiter; Hisham Kholidy; Mnassar Alyami; Abdulmajeed Alghamdi; Cliff Zou
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems. (54%)Ashim Gupta; Amrith Krishna
Deception by Omission: Using Adversarial Missingness to Poison Causal Structure Learning. (26%)Deniz Koyuncu; Alex Gittens; Bülent Yener; Moti Yung
Red Teaming Language Model Detectors with Language Models. (15%)Zhouxing Shi; Yihan Wang; Fan Yin; Xiangning Chen; Kai-Wei Chang; Cho-Jui Hsieh
Ambiguity in solving imaging inverse problems with deep learning based operators. (1%)Davide Evangelista; Elena Morotti; Elena Loli Piccolomini; James Nagy
2023-05-30
Pseudo-Siamese Network based Timbre-reserved Black-box Adversarial Attack in Speaker Identification. (99%)Qing Wang; Jixun Yao; Ziqian Wang; Pengcheng Guo; Lei Xie
Breeding Machine Translations: Evolutionary approach to survive and thrive in the world of automated evaluation. (64%)Josef Jon; Ondřej Bojar
Incremental Randomized Smoothing Certification. (33%)Shubham Ugare; Tarun Suresh; Debangshu Banerjee; Gagandeep Singh; Sasa Misailovic
Defense Against Shortest Path Attacks. (16%)Benjamin A. Miller; Zohair Shafi; Wheeler Ruml; Yevgeniy Vorobeychik; Tina Eliassi-Rad; Scott Alfeld
A Multilingual Evaluation of NER Robustness to Adversarial Inputs. (15%)Akshay Srinivasan; Sowmya Vajjala
Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness. (10%)Suraj Srinivas; Sebastian Bordt; Hima Lakkaraju
It begins with a boundary: A geometric view on probabilistically robust learning. (8%)Leon Bungert; Nicolás García Trillos; Matt Jacobs; Daniel McKenzie; Đorđe Nikolić; Qingsong Wang
Adversarial Attacks on Online Learning to Rank with Stochastic Click Models. (2%)Zichen Wang; Rishab Balasubramanian; Hui Yuan; Chenyu Song; Mengdi Wang; Huazheng Wang
Learning Perturbations to Explain Time Series Predictions. (1%)Joseph Enguehard
2023-05-29
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework. (99%)Yangyi Chen; Hongcheng Gao; Ganqu Cui; Lifan Yuan; Dehan Kong; Hanlu Wu; Ning Shi; Bo Yuan; Longtao Huang; Hui Xue; Zhiyuan Liu; Maosong Sun; Heng Ji
Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition. (92%)Nariki Tanaka; Hiroshi Kera; Kazuhiko Kawamoto
Exploiting Explainability to Design Adversarial Attacks and Evaluate Attack Resilience in Hate-Speech Detection Models. (92%)Pranath Reddy Kumbam; Sohaib Uddin Syed; Prashanth Thamminedi; Suhas Harish; Ian Perera; Bonnie J. Dorr
UMD: Unsupervised Model Detection for X2X Backdoor Attacks. (81%)Zhen Xiang; Zidi Xiong; Bo Li
Membership Inference Attacks against Language Models via Neighbourhood Comparison. (73%)Justus Mattern; Fatemehsadat Mireshghallah; Zhijing Jin; Bernhard Schölkopf; Mrinmaya Sachan; Taylor Berg-Kirkpatrick
Trustworthy Sensor Fusion against Inaudible Command Attacks in Advanced Driver-Assistance System. (41%)Jiwei Guan; Lei Pan; Chen Wang; Shui Yu; Longxiang Gao; Xi Zheng
Explainability in Simplicial Map Neural Networks. (38%)Eduardo Paluzo-Hidalgo; Miguel A. Gutiérrez-Naranjo; Rocio Gonzalez-Diaz
Robust Lipschitz Bandits to Adversarial Corruptions. (11%)Yue Kang; Cho-Jui Hsieh; Thomas C. M. Lee
Towards minimizing efforts for Morphing Attacks -- Deep embeddings for morphing pair selection and improved Morphing Attack Detection. (8%)Roman Kessler; Kiran Raja; Juan Tapia; Christoph Busch
2023-05-28
Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness. (99%)Zhanhao Hu; Jun Zhu; Bo Zhang; Xiaolin Hu
NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks. (92%)Kang Yang; Kunhao Lai
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study. (41%)Yiqi Zhong; Xianming Liu; Deming Zhai; Junjun Jiang; Xiangyang Ji
NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models. (38%)Kai Mei; Zheng Li; Zhenting Wang; Yang Zhang; Shiqing Ma
Choose your Data Wisely: A Framework for Semantic Counterfactuals. (13%)Edmund Dervakos; Konstantinos Thomas; Giorgos Filandrianos; Giorgos Stamou
BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning. (5%)Jingfeng Zhang; Bo Song; Haohan Wang; Bo Han; Tongliang Liu; Lei Liu; Masashi Sugiyama
Black-Box Anomaly Attribution. (1%)Tsuyoshi Idé; Naoki Abe
2023-05-27
Adversarial Attack On Yolov5 For Traffic And Road Sign Detection. (99%)Sanyam Jain
Pre-trained transformer for adversarial purification. (99%)Kai Wu; Yujian Betterest Li; Xiaoyu Zhang; Handing Wang; Jing Liu
Two Heads are Better than One: Towards Better Adversarial Robustness by Combining Transduction and Rejection. (98%)Nils Palumbo; Yang Guo; Xi Wu; Jiefeng Chen; Yingyu Liang; Somesh Jha
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making. (92%)Xuanjie Fang; Sijie Cheng; Yang Liu; Wei Wang
On the Importance of Backbone to the Adversarial Robustness of Object Detectors. (83%)Xiao Li; Hang Chen; Xiaolin Hu
No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. (2%)Tiancheng Jin; Junyan Liu; Chloé Rouyer; William Chang; Chen-Yu Wei; Haipeng Luo
2023-05-26
On Evaluating Adversarial Robustness of Large Vision-Language Models. (99%)Yunqing Zhao; Tianyu Pang; Chao Du; Xiao Yang; Chongxuan Li; Ngai-Man Cheung; Min Lin
Leveraging characteristics of the output probability distribution for identifying adversarial audio examples. (98%)Matías P. Pizarro B.; Dorothea Kolossa; Asja Fischer
Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in Multi-Agent RL. (96%)Xiangyu Liu; Souradip Chakraborty; Yanchao Sun; Furong Huang
A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation. (45%)Zhiyi Xue; Si Liu; Zhaodi Zhang; Yiting Wu; Min Zhang
Adversarial Attacks on Online Learning to Rank with Click Feedback. (38%)Jinhang Zuo; Zhiyao Zhang; Zhiyong Wang; Shuai Li; Mohammad Hajiesmaili; Adam Wierman
DeepSeaNet: Improving Underwater Object Detection using EfficientDet. (2%)Sanyam Jain
Trust-Aware Resilient Control and Coordination of Connected and Automated Vehicles. (1%)H M Sabbir Ahmad; Ehsan Sabouni; Wei Xiao; Christos G. Cassandras; Wenchao Li
Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model. (1%)Zhijie Deng; Hongcheng Gao; Yibo Miao; Hao Zhang
2023-05-25
IDEA: Invariant Causal Defense for Graph Adversarial Robustness. (99%)Shuchang Tao; Qi Cao; Huawei Shen; Yunfan Wu; Bingbing Xu; Xueqi Cheng
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text. (98%)Ashim Gupta; Carter Wood Blum; Temma Choji; Yingjie Fei; Shalin Shah; Alakananda Vempala; Vivek Srikumar
Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability. (98%)Haotian Xue; Alexandre Araujo; Bin Hu; Yongxin Chen
PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image Deraining for Semantic Segmentation. (96%)Xianghao Jiao; Yaohua Liu; Jiaxin Gao; Xinyuan Chu; Risheng Liu; Xin Fan
Adversarial Attacks on Leakage Detectors in Water Distribution Networks. (86%)Paul Stahlhofen; André Artelt; Luca Hermes; Barbara Hammer
CARSO: Counter-Adversarial Recall of Synthetic Observations. (86%)Emanuele Ballarin; Alessio Ansuini; Luca Bortolussi
On the Robustness of Segment Anything. (73%)Yihao Huang; Yue Cao; Tianlin Li; Felix Juefei-Xu; Di Lin; Ivor W. Tsang; Yang Liu; Qing Guo
Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score. (67%)Shuhai Zhang; Feng Liu; Jiahao Yang; Yifan Yang; Changsheng Li; Bo Han; Mingkui Tan
Rethink Diversity in Deep Learning Testing. (50%)Zi Wang; Jihye Choi; Somesh Jha
IMBERT: Making BERT Immune to Insertion-based Backdoor Attacks. (13%)Xuanli He; Jun Wang; Benjamin Rubinstein; Trevor Cohn
Securing Deep Generative Models with Universal Adversarial Signature. (2%)Yu Zeng; Mo Zhou; Yuan Xue; Vishal M. Patel
Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace. (1%)Jinyung Hong; Keun Hee Park; Theodore P. Pavlic
2023-05-24
How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks. (99%)Salijona Dyrmishi; Salah Ghamizi; Maxime Cordy
Robust Classification via a Single Diffusion Model. (99%)Huanran Chen; Yinpeng Dong; Zhengyi Wang; Xiao Yang; Chengqi Duan; Hang Su; Jun Zhu
Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup. (99%)Junyoung Byun; Myung-Joon Kwon; Seungju Cho; Yoonji Kim; Changick Kim
Fantastic DNN Classifiers and How to Identify them without Data. (91%)Nathaniel Dean; Dilip Sarkar
Adversarial Demonstration Attacks on Large Language Models. (88%)Jiongxiao Wang; Zichen Liu; Keun Hee Park; Muhao Chen; Chaowei Xiao
Relating Implicit Bias and Adversarial Attacks through Intrinsic Dimension. (86%)Lorenzo Basile; Nikos Karantzas; Alberto D'Onofrio; Luca Bortolussi; Alex Rodriguez; Fabio Anselmi
AdvFunMatch: When Consistent Teaching Meets Adversarial Robustness. (76%)Ziuhi Wu; Haichang Gao; Bingqian Zhou; Ping Wang
Reconstructive Neuron Pruning for Backdoor Defense. (75%)Yige Li; Xixiang Lyu; Xingjun Ma; Nodens Koren; Lingjuan Lyu; Bo Li; Yu-Gang Jiang
Another Dead End for Morphological Tags? Perturbed Inputs and Parsing. (74%)Alberto Muñoz-Ortiz; David Vilares
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE. (47%)Qin Liu; Fei Wang; Chaowei Xiao; Muhao Chen
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models. (31%)Jiashu Xu; Mingyu Derek Ma; Fei Wang; Chaowei Xiao; Muhao Chen
Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models. (22%)Natalie Shapira; Mosh Levy; Seyed Hossein Alavi; Xuhui Zhou; Yejin Choi; Yoav Goldberg; Maarten Sap; Vered Shwartz
Adversarial robustness of amortized Bayesian inference. (11%)Manuel Glöckler; Michael Deistler; Jakob H. Macke
Sharpness-Aware Data Poisoning Attack. (10%)Pengfei He; Han Xu; Jie Ren; Yingqian Cui; Hui Liu; Charu C. Aggarwal; Jiliang Tang
How to fix a broken confidence estimator: Evaluating post-hoc methods for selective classification with deep neural networks. (3%)Luís Felipe P. Cattelan; Danilo Silva
Ghostbuster: Detecting Text Ghostwritten by Large Language Models. (1%)Vivek Verma; Eve Fleisig; Nicholas Tomlin; Dan Klein
2023-05-23
The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks. (99%)Iuri Frosio; Jan Kautz
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning. (99%)Minchan Kwon; Kangil Kim
QFA2SR: Query-Free Adversarial Transfer Attacks to Speaker Recognition Systems. (98%)Guangke Chen; Yedi Zhang; Zhe Zhao; Fu Song
Expressive Losses for Verified Robustness via Convex Combinations. (95%)Alessandro De Palma; Rudy Bunel; Krishnamurthy Dvijotham; M. Pawan Kumar; Robert Stanforth; Alessio Lomuscio
Impact of Light and Shadow on Robustness of Deep Neural Networks. (87%)Chengyin Hu; Weiwen Shi; Chao Li; Jialiang Sun; Donghua Wang; Junqi Wu; Guijian Tang
A Causal View of Entity Bias in (Large) Language Models. (10%)Fei Wang; Wenjie Mo; Yiwei Wang; Wenxuan Zhou; Muhao Chen
2023-05-22
Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation. (99%)Kira Maag; Asja Fischer
Latent Magic: An Investigation into Adversarial Examples Crafted in the Semantic Latent Space. (99%)BoYang Zheng
FGAM:Fast Adversarial Malware Generation Method Based on Gradient Sign. (98%)Kun Li; Fan Zhang; Wei Guo
Attribute-Guided Encryption with Facial Texture Masking. (98%)Chun Pong Lau; Jiang Liu; Rama Chellappa
DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection. (98%)Jiang Liu; Chun Pong Lau; Rama Chellappa
Byzantine Robust Cooperative Multi-Agent Reinforcement Learning as a Bayesian Game. (93%)Simin Li; Jun Guo; Jingqiao Xiu; Xini Yu; Jiakai Wang; Aishan Liu; Yaodong Yang; Xianglong Liu
Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks. (88%)Simin Li; Shuing Zhang; Gujun Chen; Dong Wang; Pu Feng; Jiakai Wang; Aishan Liu; Xin Yi; Xianglong Liu
Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors. (54%)Pia Hanfeld; Marina M. -C. Höhne; Michael Bussmann; Wolfgang Hönig
DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation. (50%)Haitham Khedr; Yasser Shoukry
The defender's perspective on automatic speaker verification: An overview. (22%)Haibin Wu; Jiawen Kang; Lingwei Meng; Helen Meng; Hung-yi Lee
Model Stealing Attack against Multi-Exit Networks. (10%)Li Pan; Lv Peizhuo; Chen Kai; Cai Yuling; Xiang Fan; Zhang Shengzhi
Adversarial Defenses via Vector Quantization. (8%)Zhiyi Dong; Yongyi Mao
Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models. (2%)Alicia Parrish; Hannah Rose Kirk; Jessica Quaye; Charvi Rastogi; Max Bartolo; Oana Inel; Juan Ciro; Rafael Mosquera; Addison Howard; Will Cukierski; D. Sculley; Vijay Janapa Reddi; Lora Aroyo
Watermarking Classification Dataset for Copyright Protection. (1%)Yixin Liu; Hongsheng Hu; Xun Chen; Xuyun Zhang; Lichao Sun
Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals. (1%)Ananth Balashankar; Xuezhi Wang; Yao Qin; Ben Packer; Nithum Thain; Jilin Chen; Ed H. Chi; Alex Beutel
Tied-Augment: Controlling Representation Similarity Improves Data Augmentation. (1%)Emirhan Kurtulus; Zichao Li; Yann Dauphin; Ekin Dogus Cubuk
Adaptive Face Recognition Using Adversarial Information Network. (1%)Mei Wang; Weihong Deng
2023-05-21
Mist: Towards Improved Adversarial Examples for Diffusion Models. (99%)Chumeng Liang; Xiaoyu Wu
Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack. (81%)Christopher Burger; Lingwei Chen; Thai Le
FAQ: Mitigating the Impact of Faults in the Weight Memory of DNN Accelerators through Fault-Aware Quantization. (1%)Muhammad Abdullah Hanif; Muhammad Shafique
2023-05-20
Dynamic Transformers Provide a False Sense of Efficiency. (92%)Yiming Chen; Simin Chen; Zexin Li; Wei Yang; Cong Liu; Robby T. Tan; Haizhou Li
Annealing Self-Distillation Rectification Improves Adversarial Training. (76%)Yu-Yu Wu; Hung-Jui Wang; Shang-Tse Chen
Stability, Generalization and Privacy: Precise Analysis for Random and NTK Features. (8%)Simone Bombari; Marco Mondelli
2023-05-19
Dynamic Gradient Balancing for Enhanced Adversarial Attacks on Multi-Task Models. (98%)Lijun Zhang; Xiao Liu; Kaleel Mahmood; Caiwen Ding; Hui Guan
DAP: A Dynamic Adversarial Patch for Evading Person Detectors. (92%)Amira Guesmi; Ruitian Ding; Muhammad Abdullah Hanif; Ihsen Alouani; Muhammad Shafique
Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation. (8%)Xuanli He; Qiongkai Xu; Jun Wang; Benjamin Rubinstein; Trevor Cohn
Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment. (5%)Mengke Li; Yiu-ming Cheung; Yang Lu
SneakyPrompt: Evaluating Robustness of Text-to-image Generative Models' Safety Filters. (4%)Yuchen Yang; Bo Hui; Haolin Yuan; Neil Gong; Yinzhi Cao
Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing. (2%)Yisong Xiao; Aishan Liu; Tianlin Li; Xianglong Liu
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning. (1%)Mustafa Safa Ozdayi; Charith Peris; Jack FitzGerald; Christophe Dupuy; Jimit Majmudar; Haidar Khan; Rahil Parikh; Rahul Gupta
2023-05-18
Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial Network Packet Generation. (99%)Soumyadeep Hore; Jalal Ghadermazi; Diwas Paudel; Ankit Shah; Tapas K. Das; Nathaniel D. Bastian
Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend. (99%)Chong Yu; Tao Chen; Zhongxue Gan
Architecture-agnostic Iterative Black-box Certified Defense against Adversarial Patches. (99%)Di Yang; Yihao Huang; Qing Guo; Felix Juefei-Xu; Ming Hu; Yang Liu; Geguang Pu
Towards an Accurate and Secure Detector against Adversarial Perturbations. (99%)Chao Wang; Shuren Qi; Zhiqiu Huang; Yushu Zhang; Xiaochun Cao
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning. (99%)Elise Bishoff; Charles Godfrey; Myles McKay; Eleanor Byler
How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses. (98%)Joana C. Costa; Tiago Roxo; Hugo Proença; Pedro R. M. Inácio
RobustFair: Adversarial Evaluation through Fairness Confusion Directed Gradient Search. (93%)Xuran Li; Peng Wu; Kaixiang Dong; Zhen Zhang
Attacks on Online Learners: a Teacher-Student Analysis. (54%)Riccardo Giuseppe Margiotta; Sebastian Goldt; Guido Sanguinetti
Explaining V1 Properties with a Biologically Constrained Deep Learning Architecture. (47%)Galen Pogoncheff; Jacob Granley; Michael Beyeler
Zero-Day Backdoor Attack against Text-to-Image Diffusion Models via Personalization. (2%)Yihao Huang; Qing Guo; Felix Juefei-Xu
Large Language Models can be Guided to Evade AI-Generated Text Detection. (1%)Ning Lu; Shengcai Liu; Rui He; Ke Tang
Re-thinking Data Availablity Attacks Against Deep Neural Networks. (1%)Bin Fang; Bo Li; Shuang Wu; Ran Yi; Shouhong Ding; Lizhuang Ma
TrustSER: On the Trustworthiness of Fine-tuning Pre-trained Speech Embeddings For Speech Emotion Recognition. (1%)Tiantian Feng; Rajat Hebbar; Shrikanth Narayanan
2023-05-17
Content-based Unrestricted Adversarial Attack. (99%)Zhaoyu Chen; Bo Li; Shuang Wu; Kaixun Jiang; Shouhong Ding; Wenqiang Zhang
Raising the Bar for Certified Adversarial Robustness with Diffusion Models. (95%)Thomas Altstidl; David Dobre; Björn Eskofier; Gauthier Gidel; Leo Schwinn
The Adversarial Consistency of Surrogate Risks for Binary Classification. (10%)Natalie Frank; Jonathan Niles-Weed
Variational Classification. (1%)Shehzaad Dhuliawala; Mrinmaya Sachan; Carl Allen
Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt. (1%)Zhaozhuo Xu; Zirui Liu; Beidi Chen; Yuxin Tang; Jue Wang; Kaixiong Zhou; Xia Hu; Anshumali Shrivastava
PaLM 2 Technical Report. (1%)Rohan Anil; Andrew M. Dai; Orhan Firat; Melvin Johnson; Dmitry Lepikhin; Alexandre Passos; Siamak Shakeri; Emanuel Taropa; Paige Bailey; Zhifeng Chen; Eric Chu; Jonathan H. Clark; Laurent El Shafey; Yanping Huang; Kathy Meier-Hellstern; Gaurav Mishra; Erica Moreira; Mark Omernick; Kevin Robinson; Sebastian Ruder; Yi Tay; Kefan Xiao; Yuanzhong Xu; Yujing Zhang; Gustavo Hernandez Abrego; Junwhan Ahn; Jacob Austin; Paul Barham; Jan Botha; James Bradbury; Siddhartha Brahma; Kevin Brooks; Michele Catasta; Yong Cheng; Colin Cherry; Christopher A. Choquette-Choo; Aakanksha Chowdhery; Clément Crepy; Shachi Dave; Mostafa Dehghani; Sunipa Dev; Jacob Devlin; Mark Díaz; Nan Du; Ethan Dyer; Vlad Feinberg; Fangxiaoyu Feng; Vlad Fienber; Markus Freitag; Xavier Garcia; Sebastian Gehrmann; Lucas Gonzalez; Guy Gur-Ari; Steven Hand; Hadi Hashemi; Le Hou; Joshua Howland; Andrea Hu; Jeffrey Hui; Jeremy Hurwitz; Michael Isard; Abe Ittycheriah; Matthew Jagielski; Wenhao Jia; Kathleen Kenealy; Maxim Krikun; Sneha Kudugunta; Chang Lan; Katherine Lee; Benjamin Lee; Eric Li; Music Li; Wei Li; YaGuang Li; Jian Li; Hyeontaek Lim; Hanzhao Lin; Zhongtao Liu; Frederick Liu; Marcello Maggioni; Aroma Mahendru; Joshua Maynez; Vedant Misra; Maysam Moussalem; Zachary Nado; John Nham; Eric Ni; Andrew Nystrom; Alicia Parrish; Marie Pellat; Martin Polacek; Alex Polozov; Reiner Pope; Siyuan Qiao; Emily Reif; Bryan Richter; Parker Riley; Alex Castro Ros; Aurko Roy; Brennan Saeta; Rajkumar Samuel; Renee Shelby; Ambrose Slone; Daniel Smilkov; David R. So; Daniel Sohn; Simon Tokumine; Dasha Valter; Vijay Vasudevan; Kiran Vodrahalli; Xuezhi Wang; Pidong Wang; Zirui Wang; Tao Wang; John Wieting; Yuhuai Wu; Kelvin Xu; Yunhan Xu; Linting Xue; Pengcheng Yin; Jiahui Yu; Qiao Zhang; Steven Zheng; Ce Zheng; Weikang Zhou; Denny Zhou; Slav Petrov; Yonghui Wu
2023-05-16
Iterative Adversarial Attack on Image-guided Story Ending Generation. (99%)Youze Wang; Wenbo Hu; Richang Hong
Releasing Inequality Phenomena in $L_{\infty}$-Adversarial Training via Input Gradient Distillation. (98%)Junxi Chen; Junhao Dong; Xiaohua Xie
Ortho-ODE: Enhancing Robustness and of Neural ODEs against Adversarial Attacks. (54%)Vishal Purohit
Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples. (50%)Wan Jiang; Yunfeng Diao; He Wang; Jianxin Sun; Meng Wang; Richang Hong
2023-05-15
Attacking Perceptual Similarity Metrics. (99%)Abhijay Ghildyal; Feng Liu
Exploiting Frequency Spectrum of Adversarial Images for General Robustness. (96%)Chun Yang Tan; Kazuhiko Kawamoto; Hiroshi Kera
Training Neural Networks without Backpropagation: A Deeper Dive into the Likelihood Ratio Method. (4%)Jinyang Jiang; Zeliang Zhang; Chenliang Xu; Zhaofei Yu; Yijie Peng
Assessing Hidden Risks of LLMs: An Empirical Study on Robustness, Consistency, and Credibility. (1%)Wentao Ye; Mingfeng Ou; Tianyi Li; Yipeng Chen; Xuetao Ma; Yifan Yanggong; Sai Wu; Jie Fu; Gang Chen; Haobo Wang; Junbo Zhao
2023-05-14
Diffusion Models for Imperceptible and Transferable Adversarial Attack. (99%)Jianqi Chen; Hao Chen; Keyan Chen; Yilan Zhang; Zhengxia Zou; Zhenwei Shi
Improving Defensive Distillation using Teacher Assistant. (96%)Maniratnam Mandal; Suna Gao
Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures. (82%)Wei Yuan; Shilong Yuan; Chaoqun Yang; Quoc Viet Hung Nguyen; Hongzhi Yin
Watermarking Text Generated by Black-Box Language Models. (9%)Xi Yang; Kejiang Chen; Weiming Zhang; Chang Liu; Yuang Qi; Jie Zhang; Han Fang; Nenghai Yu
2023-05-13
DNN-Defender: An in-DRAM Deep Neural Network Defense Mechanism for Adversarial Weight Attack. (86%)Ranyang Zhou; Sabbir Ahmed; Adnan Siraj Rakin; Shaahin Angizi
On enhancing the robustness of Vision Transformers: Defensive Diffusion. (76%)Raza Imam; Muhammad Huzaifa; Mohammed El-Amine Azz
Decision-based iterative fragile watermarking for model integrity verification. (50%)Zhaoxia Yin; Heng Yin; Hang Su; Xinpeng Zhang; Zhenzhe Gao
2023-05-12
Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation. (73%)Jialiang Sun; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Adversarial Security and Differential Privacy in mmWave Beam Prediction in 6G networks. (68%)Ghanta Sai Krishna; Kundrapu Supriya; Sanskar Singh; Sabur Baidya
Mastering Percolation-like Games with Deep Learning. (1%)Michael M. Danziger; Omkar R. Gojala; Sean P. Cornelius
2023-05-11
Distracting Downpour: Adversarial Weather Attacks for Motion Estimation. (74%)Jenny Schmalfuss; Lukas Mehl; Andrés Bruhn
Backdoor Attack with Sparse and Invisible Trigger. (67%)Yinghua Gao; Yiming Li; Xueluan Gong; Shu-Tao Xia; Qian Wang
Watch This Space: Securing Satellite Communication through Resilient Transmitter Fingerprinting. (1%)Joshua Smailes; Sebastian Kohler; Simon Birnbach; Martin Strohmeier; Ivan Martinovic
2023-05-10
A Black-Box Attack on Code Models via Representation Nearest Neighbor Search. (99%)Jie Zhang; Wei Ma; Qiang Hu; Shangqing Liu; Xiaofei Xie; Yves Le Traon; Yang Liu
Inter-frame Accelerate Attack against Video Interpolation Models. (99%)Junpei Liao; Zhikai Chen; Liang Yi; Wenyuan Yang; Baoyuan Wu; Xiaochun Cao
Randomized Smoothing with Masked Inference for Adversarially Robust Text Classifications. (98%)Han Cheol Moon; Shafiq Joty; Ruochen Zhao; Megh Thakkar; Xu Chi
Stealthy Low-frequency Backdoor Attack against Deep Neural Networks. (80%)Xinrui Liu; Yu-an Tan; Yajie Wang; Kefan Qiu; Yuanzhang Li
Towards Invisible Backdoor Attacks in the Frequency Domain against Deep Neural Networks. (75%)Xinrui Liu; Yajie Wang; Yu-an Tan; Kefan Qiu; Yuanzhang Li
The Robustness of Computer Vision Models against Common Corruptions: a Survey. (50%)Shunxin Wang; Raymond Veldhuis; Nicola Strisciuglio
An Empirical Study on the Robustness of the Segment Anything Model (SAM). (22%)Yuqing Wang; Yun Zhao; Linda Petzold
Robust multi-agent coordination via evolutionary generation of auxiliary adversarial attackers. (12%)Lei Yuan; Zi-Qian Zhang; Ke Xue; Hao Yin; Feng Chen; Cong Guan; Li-He Li; Chao Qian; Yang Yu
2023-05-09
Quantization Aware Attack: Enhancing the Transferability of Adversarial Attacks across Target Models with Different Quantization Bitwidths. (99%)Yulong Yang; Chenhao Lin; Qian Li; Chao Shen; Dawei Zhou; Nannan Wang; Tongliang Liu
Attack Named Entity Recognition by Entity Boundary Interference. (98%)Yifei Yang; Hongqiu Wu; Hai Zhao
VSMask: Defending Against Voice Synthesis Attack via Real-Time Predictive Perturbation. (96%)Yuanda Wang; Hanqing Guo; Guangjing Wang; Bocheng Chen; Qiben Yan
Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions. (75%)Georg Siedel; Weijia Shao; Silvia Vock; Andrey Morozov
On the Relation between Sharpness-Aware Minimization and Adversarial Robustness. (56%)Zeming Wei; Jingyu Zhu; Yihao Zhang
Turning Privacy-preserving Mechanisms against Federated Learning. (9%)Marco Arazzi; Mauro Conti; Antonino Nocera; Stjepan Picek
BadCS: A Backdoor Attack Framework for Code search. (8%)Shiyi Qi; Yuanhang Yang; Shuzhzeng Gao; Cuiyun Gao; Zenglin Xu
Quantum Machine Learning for Malware Classification. (1%)Grégoire Barrué; Tony Quertier
2023-05-08
Toward Adversarial Training on Contextualized Language Representation. (93%)Hongqiu Wu; Yongxiang Liu; Hanwen Shi; Hai Zhao; Min Zhang
Understanding Noise-Augmented Training for Randomized Smoothing. (64%)Ambar Pal; Jeremias Sulam
TAPS: Connecting Certified and Adversarial Training. (41%)Yuhao Mao; Mark Niklas Müller; Marc Fischer; Martin Vechev
Privacy-preserving Adversarial Facial Features. (22%)Zhibo Wang; He Wang; Shuaifan Jin; Wenwen Zhang; Jiahui Hu; Yan Wang; Peng Sun; Wei Yuan; Kaixin Liu; Kui Ren
Communication-Robust Multi-Agent Learning by Adaptable Auxiliary Multi-Agent Adversary Generation. (1%)Lei Yuan; Feng Chen; Zhongzhang Zhang; Yang Yu
2023-05-07
Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization. (99%)Zhaoxia Yin; Shaowei Zhu; Hang Su; Jianteng Peng; Wanli Lyu; Bin Luo
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks against Deep Image Classification. (93%)Nils Lukas; Florian Kerschbaum
2023-05-06
Reactive Perturbation Defocusing for Textual Adversarial Defense. (99%)Heng Yang; Ke Li
Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps. (92%)Ye Sang; Yujin Huang; Shuo Huang; Helei Cui
Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework. (38%)Ruijia Wu; Yuhang Wang; Huafeng Shi; Zhipeng Yu; Yichao Wu; Ding Liang
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning. (2%)Shengfang Zhai; Yinpeng Dong; Qingni Shen; Shi Pu; Yuejian Fang; Hang Su
2023-05-05
White-Box Multi-Objective Adversarial Attack on Dialogue Generation. (99%)Yufei Li; Zexin Li; Yingfan Gao; Cong Liu
Evading Watermark based Detection of AI-Generated Content. (87%)Zhengyuan Jiang; Jinghuai Zhang; Neil Zhenqiang Gong
Verifiable Learning for Robust Tree Ensembles. (15%)Stefano Calzavara; Lorenzo Cazzaro; Giulio Ermanno Pibiri; Nicola Prezza
Repairing Deep Neural Networks Based on Behavior Imitation. (4%)Zhen Liang; Taoran Wu; Changyuan Zhao; Wanwei Liu; Bai Xue; Wenjing Yang; Ji Wang
2023-05-04
Madvex: Instrumentation-based Adversarial Attacks on Machine Learning Malware Detection. (99%)Nils Loose; Felix Mächtle; Claudius Pott; Volodymyr Bezsmertnyi; Thomas Eisenbarth
IMAP: Intrinsically Motivated Adversarial Policy. (99%)Xiang Zheng; Xingjun Ma; Shengjie Wang; Xinyu Wang; Chao Shen; Cong Wang
Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning. (78%)Dayuan Chen; Jian Zhang; Yuqian Lv; Jinhuan Wang; Hongjie Ni; Shanqing Yu; Zhen Wang; Qi Xuan
Faulting original McEliece's implementations is possible: How to mitigate this risk? (2%)Vincent Giraud; Guillaume Bouffard
2023-05-03
New Adversarial Image Detection Based on Sentiment Analysis. (99%)Yulong Wang; Tianxiang Li; Shenghong Li; Xin Yuan; Wei Ni
LearnDefend: Learning to Defend against Targeted Model-Poisoning Attacks on Federated Learning. (84%)Kiran Purohit; Soumi Das; Sourangshu Bhattacharya; Santu Rana
Defending against Insertion-based Textual Backdoor Attacks via Attribution. (61%)Jiazhao Li; Zhuofeng Wu; Wei Ping; Chaowei Xiao; V. G. Vinod Vydiswaran
On the Security Risks of Knowledge Graph Reasoning. (26%)Zhaohan Xi; Tianyu Du; Changjiang Li; Ren Pang; Shouling Ji; Xiapu Luo; Xusheng Xiao; Fenglong Ma; Ting Wang
Backdoor Learning on Sequence to Sequence Models. (5%)Lichang Chen; Minhao Cheng; Heng Huang
Rethinking Graph Lottery Tickets: Graph Sparsity Matters. (2%)Bo Hui; Da Yan; Xiaolong Ma; Wei-Shinn Ku
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer. (1%)Lichang Chen; Heng Huang; Minhao Cheng
2023-05-02
Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature. (99%)Juanjuan Weng; Zhiming Luo; Dazhen Lin; Shaozi Li; Zhun Zhong
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning. (73%)Wenqiang Sun; Sen Li; Yuchang Sun; Jun Zhang
Towards Imperceptible Document Manipulations against Neural Ranking Models. (67%)Xuanang Chen; Ben He; Zheng Ye; Le Sun; Yingfei Sun
Sentiment Perception Adversarial Attacks on Neural Machine Translation Systems. (50%)Vyas Raina; Mark Gales
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models. (8%)Shuai Zhao; Jinming Wen; Luu Anh Tuan; Junbo Zhao; Jie Fu
2023-05-01
Attack-SAM: Towards Evaluating Adversarial Robustness of Segment Anything Model. (99%)Chenshuang Zhang; Chaoning Zhang; Taegoo Kang; Donghun Kim; Sung-Ho Bae; In So Kweon
Physical Adversarial Attacks for Surveillance: A Survey. (98%)Kien Nguyen; Tharindu Fernando; Clinton Fookes; Sridha Sridharan
Revisiting Robustness in Graph Machine Learning. (98%)Lukas Gosch; Daniel Sturm; Simon Geisler; Stephan Günnemann
Stratified Adversarial Robustness with Rejection. (96%)Jiefeng Chen; Jayaram Raghuram; Jihye Choi; Xi Wu; Yingyu Liang; Somesh Jha
Poisoning Language Models During Instruction Tuning. (2%)Alexander Wan; Eric Wallace; Sheng Shen; Dan Klein
2023-04-30
Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks. (98%)Jingfeng Zhang; Bo Song; Bo Han; Lei Liu; Gang Niu; Masashi Sugiyama
2023-04-29
FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection. (81%)Thuy Dung Nguyen; Anh Duy Nguyen; Kok-Seng Wong; Huy Hieu Pham; Thanh Hung Nguyen; Phi Le Nguyen; Truong Thao Nguyen
Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization. (33%)Xilie Xu; Jingfeng Zhang; Feng Liu; Masashi Sugiyama; Mohan Kankanhalli
Adversarial Representation Learning for Robust Privacy Preservation in Audio. (1%)Shayan Gharib; Minh Tran; Diep Luong; Konstantinos Drossos; Tuomas Virtanen
2023-04-28
Topic-oriented Adversarial Attacks against Black-box Neural Ranking Models. (99%)Yu-An Liu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Wei Chen; Yixing Fan; Xueqi Cheng
On the existence of solutions to adversarial training in multiclass classification. (75%)Nicolas Garcia Trillos; Matt Jacobs; Jakwang Kim
The Power of Typed Affine Decision Structures: A Case Study. (3%)Gerrit Nolte; Maximilian Schlüter; Alnis Murtovi; Bernhard Steffen
faulTPM: Exposing AMD fTPMs' Deepest Secrets. (3%)Hans Niklas Jacob; Christian Werling; Robert Buhren; Jean-Pierre Seifert
SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective. (1%)An Wang; Mobarakol Islam; Mengya Xu; Yang Zhang; Hongliang Ren
2023-04-27
Adversary Aware Continual Learning. (80%)Muhammad Umer; Robi Polikar
Fusion is Not Enough: Single-Modal Attacks to Compromise Fusion Models in Autonomous Driving. (75%)Zhiyuan Cheng; Hongjun Choi; James Liang; Shiwei Feng; Guanhong Tao; Dongfang Liu; Michael Zuzak; Xiangyu Zhang
Boosting Big Brother: Attacking Search Engines with Encodings. (68%)Nicholas Boucher; Luca Pajola; Ilia Shumailov; Ross Anderson; Mauro Conti
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger. (62%)Jiazhao Li; Yijin Yang; Zhuofeng Wu; V. G. Vinod Vydiswaran; Chaowei Xiao
Improve Video Representation with Temporal Adversarial Augmentation. (26%)Jinhao Duan; Quanfu Fan; Hao Cheng; Xiaoshuang Shi; Kaidi Xu
Origin Tracing and Detecting of LLMs. (1%)Linyang Li; Pengyu Wang; Ke Ren; Tianxiang Sun; Xipeng Qiu
Deep Intellectual Property Protection: A Survey. (1%)Yuchen Sun; Tianpeng Liu; Panhe Hu; Qing Liao; Shaojing Fu; Nenghai Yu; Deke Guo; Yongxiang Liu; Li Liu
Interactive Greybox Penetration Testing for Cloud Access Control using IAM Modeling and Deep Reinforcement Learning. (1%)Yang Hu; Wenxi Wang; Sarfraz Khurshid; Mohit Tiwari
2023-04-26
Improving Adversarial Transferability via Intermediate-level Perturbation Decay. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Detection of Adversarial Physical Attacks in Time-Series Image Data. (92%)Ramneet Kaur; Yiannis Kantaros; Wenwen Si; James Weimer; Insup Lee
Blockchain-based Federated Learning with SMPC Model Verification Against Poisoning Attack for Healthcare Systems. (13%)Aditya Pribadi Kalapaaking; Ibrahim Khalil; Xun Yi
2023-04-25
Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks. (99%)Ferheen Ayaz; Idris Zakariyya; José Cano; Sye Loong Keoh; Jeremy Singer; Danilo Pau; Mounia Kharbouche-Harrari
Generating Adversarial Examples with Task Oriented Multi-Objective Optimization. (99%)Anh Bui; Trung Le; He Zhao; Quan Tran; Paul Montague; Dinh Phung
SHIELD: Thwarting Code Authorship Attribution. (98%)Mohammed Abuhamad; Changhun Jung; David Mohaisen; DaeHun Nyang
Learning Robust Deep Equilibrium Models. (82%)Haoyu Chu; Shikui Wei; Ting Liu; Yao Zhao
LSTM-based Load Forecasting Robustness Against Noise Injection Attack in Microgrid. (1%)Amirhossein Nazeri; Pierluigi Pisu
2023-04-24
Evaluating Adversarial Robustness on Document Image Classification. (99%)Timothée Fronteau; Arnaud Paran; Aymen Shabou
Combining Adversaries with Anti-adversaries in Training. (64%)Xiaoling Zhou; Nan Yang; Ou Wu
Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization. (41%)Mingli Zhu; Shaokui Wei; Li Shen; Yanbo Fan; Baoyuan Wu
Opinion Control under Adversarial Network Perturbation: A Stackelberg Game Approach. (10%)Yuejiang Li; Zhanjiang Chen; H. Vicky Zhao
Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning. (1%)Yonggan Fu; Ye Yuan; Shang Wu; Jiayi Yuan; Yingyan Lin
2023-04-23
StyLess: Boosting the Transferability of Adversarial Examples. (99%)Kaisheng Liang; Bin Xiao
Evading DeepFake Detectors via Adversarial Statistical Consistency. (98%)Yang Hou; Qing Guo; Yihao Huang; Xiaofei Xie; Lei Ma; Jianjun Zhao
2023-04-22
Detecting Adversarial Faces Using Only Real Face Self-Perturbations. (98%)Qian Wang; Yongqin Xian; Hefei Ling; Jinyuan Zhang; Xiaorui Lin; Ping Li; Jiazhong Chen; Ning Yu
Universal Adversarial Backdoor Attacks to Fool Vertical Federated Learning in Cloud-Edge Collaboration. (70%)Peng Chen; Xin Du; Zhihui Lu; Hongfeng Chai
2023-04-21
Launching a Robust Backdoor Attack under Capability Constrained Scenarios. (88%)Ming Yi; Yixiao Xu; Kangyi Ding; Mingyong Yin; Xiaolei Liu
Individual Fairness in Bayesian Neural Networks. (69%)Alice Doherty; Matthew Wicker; Luca Laurenti; Andrea Patane
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning. (64%)Hangtao Zhang; Zeming Yao; Leo Yu Zhang; Shengshan Hu; Chao Chen; Alan Liew; Zhetao Li
Interpretable and Robust AI in EEG Systems: A Survey. (12%)Xinliang Zhou; Chenyu Liu; Liming Zhai; Ziyu Jia; Cuntai Guan; Yang Liu
MAWSEO: Adversarial Wiki Search Poisoning for Illicit Online Promotion. (2%)Zilong Lin; Zhengyi Li; Xiaojing Liao; XiaoFeng Wang; Xiaozhong Liu
2023-04-20
Towards the Universal Defense for Query-Based Audio Adversarial Attacks. (99%)Feng Guo; Zheng Sun; Yuxuan Chen; Lei Ju
Diversifying the High-level Features for better Adversarial Transferability. (99%)Zhiyuan Wang; Zeliang Zhang; Siyuan Liang; Xiaosen Wang
Using Z3 for Formal Modeling and Verification of FNN Global Robustness. (98%)Yihao Zhang; Zeming Wei; Xiyue Zhang; Meng Sun
Certified Adversarial Robustness Within Multiple Perturbation Bounds. (96%)Soumalya Nandi; Sravanti Addepalli; Harsh Rangwani; R. Venkatesh Babu
Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock Recommendation via Split Variational Adversarial Training. (93%)Jiezhu Cheng; Kaizhu Huang; Zibin Zheng
Adversarial Infrared Blocks: A Black-box Attack to Thermal Infrared Detectors at Multiple Angles in Physical World. (89%)Chengyin Hu; Weiwen Shi; Tingsong Jiang; Wen Yao; Ling Tian; Xiaoqian Chen
An Analysis of the Completion Time of the BB84 Protocol. (22%)Sounak Kar; Jean-Yves Le Boudec
A Plug-and-Play Defensive Perturbation for Copyright Protection of DNN-based Applications. (13%)Donghua Wang; Wen Yao; Tingsong Jiang; Weien Zhou; Lang Lin; Xiaoqian Chen
Enhancing object detection robustness: A synthetic and natural perturbation approach. (12%)Nilantha Premakumara; Brian Jalaian; Niranjan Suri; Hooman Samani
RoCOCO: Robustness Benchmark of MS-COCO to Stress-test Image-Text Matching Models. (8%)Seulki Park; Daeho Um; Hajung Yoon; Sanghyuk Chun; Sangdoo Yun; Jin Young Choi
Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning. (2%)Manaar Alam; Hithem Lamri; Michail Maniatakos
Learning Sample Difficulty from Pre-trained Models for Reliable Prediction. (1%)Peng Cui; Dan Zhang; Zhijie Deng; Yinpeng Dong; Jun Zhu
2023-04-19
Jedi: Entropy-based Localization and Removal of Adversarial Patches. (84%)Bilel Tarchoun; Anouar Ben Khalifa; Mohamed Ali Mahjoub; Nael Abu-Ghazaleh; Ihsen Alouani
GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models. (81%)Zaitang Li; Pin-Yu Chen; Tsung-Yi Ho
Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks. (5%)Yunlong Mao; Zexi Xin; Zhenyu Li; Jue Hong; Qingyou Yang; Sheng Zhong
Density-Insensitive Unsupervised Domain Adaption on 3D Object Detection. (1%)Qianjiang Hu; Daizong Liu; Wei Hu
On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training. (1%)Hao Fei; Tat-Seng Chua; Chenliang Li; Donghong Ji; Meishan Zhang; Yafeng Ren
Fundamental Limitations of Alignment in Large Language Models. (1%)Yotam Wolf; Noam Wies; Oshri Avnery; Yoav Levine; Amnon Shashua
2023-04-18
Wavelets Beat Monkeys at Adversarial Robustness. (99%)Jingtong Su; Julia Kempe
Towards the Transferable Audio Adversarial Attack via Ensemble Methods. (99%)Feng Guo; Zheng Sun; Yuxuan Chen; Lei Ju
Masked Language Model Based Textual Adversarial Example Detection. (99%)Xiaomei Zhang; Zhaoxi Zhang; Qi Zhong; Xufei Zheng; Yanjun Zhang; Shengshan Hu; Leo Yu Zhang
In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT. (80%)Xinyue Shen; Zeyuan Chen; Michael Backes; Yang Zhang
Generative models improve fairness of medical classifiers under distribution shifts. (13%)Ira Ktena; Olivia Wiles; Isabela Albuquerque; Sylvestre-Alvise Rebuffi; Ryutaro Tanno; Abhijit Guha Roy; Shekoofeh Azizi; Danielle Belgrave; Pushmeet Kohli; Alan Karthikesalingam; Taylan Cemgil; Sven Gowal
2023-04-17
Evil from Within: Machine Learning Backdoors through Hardware Trojans. (15%)Alexander Warnecke; Julian Speith; Jan-Niklas Möller; Konrad Rieck; Christof Paar
GrOVe: Ownership Verification of Graph Neural Networks using Embeddings. (13%)Asim Waheed; Vasisht Duddu; N. Asokan
OOD-CV-v2: An extended Benchmark for Robustness to Out-of-Distribution Shifts of Individual Nuisances in Natural Images. (1%)Bingchen Zhao; Jiahao Wang; Wufei Ma; Artur Jesslen; Siwei Yang; Shaozuo Yu; Oliver Zendel; Christian Theobalt; Alan Yuille; Adam Kortylewski
2023-04-16
A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems. (98%)JiaHao Xie; Ye Luo; Jianwei Lu
RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks. (96%)Yunruo Zhang; Tianyu Du; Shouling Ji; Peng Tang; Shanqing Guo
JoB-VS: Joint Brain-Vessel Segmentation in TOF-MRA Images. (15%)Natalia Valderrama; Ioannis Pitsiorlas; Luisa Vargas; Pablo Arbeláez; Maria A. Zuluaga
2023-04-14
Interpretability is a Kind of Safety: An Interpreter-based Ensemble for Adversary Defense. (99%)Jingyuan Wang; Yufan Wu; Mingxuan Li; Xin Lin; Junjie Wu; Chao Li
Combining Generators of Adversarial Malware Examples to Increase Evasion Rate. (99%)Matouš Kozák; Martin Jureček
Cross-Entropy Loss Functions: Theoretical Analysis and Applications. (3%)Anqi Mao; Mehryar Mohri; Yutao Zhong
Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice. (2%)Andrea Gadotti; Florimond Houssiau; Meenatchi Sundaram Muthu Selva Annamalai; Yves-Alexandre de Montjoye
2023-04-13
Generating Adversarial Examples with Better Transferability via Masking Unimportant Parameters of Surrogate Model. (99%)Dingcheng Yang; Wenjian Yu; Zihao Xiao; Jiaqi Luo
Certified Zeroth-order Black-Box Defense with Robust UNet Denoiser. (96%)Astha Verma; Siddhesh Bangar; A V Subramanyam; Naman Lal; Rajiv Ratn Shah; Shin'ichi Satoh
False Claims against Model Ownership Resolution. (93%)Jian Liu; Rui Zhang; Sebastian Szyller; Kui Ren; N. Asokan
Adversarial Examples from Dimensional Invariance. (45%)Benjamin L. Badger
Understanding Overfitting in Adversarial Training in Kernel Regression. (1%)Teng Zhang; Kang Li
LSFSL: Leveraging Shape Information in Few-shot Learning. (1%)Deepan Chakravarthi Padmanabhan; Shruthi Gowda; Elahe Arani; Bahram Zonooz
2023-04-12
Generative Adversarial Networks-Driven Cyber Threat Intelligence Detection Framework for Securing Internet of Things. (92%)Mohamed Amine Ferrag; Djallel Hamouda; Merouane Debbah; Leandros Maglaras; Abderrahmane Lakas
Exploiting Logic Locking for a Neural Trojan Attack on Machine Learning Accelerators. (1%)Hongye Xu; Dongfang Liu; Cory Merkel; Michael Zuzak
2023-04-11
RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense. (99%)Yue Cui; Syed Irfan Ali Meerza; Zhuohang Li; Luyang Liu; Jiaxin Zhang; Jian Liu
Simultaneous Adversarial Attacks On Multiple Face Recognition System Components. (98%)Inderjeet Singh; Kazuya Kakizaki; Toshinori Araki
Boosting Cross-task Transferability of Adversarial Patches with Visual Relations. (98%)Tony Ma; Songze Li; Yisong Xiao; Shunchang Liu
Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection. (92%)Tianyuan Zhang; Yisong Xiao; Xiaoya Zhang; Hao Li; Lu Wang
On the Adversarial Inversion of Deep Biometric Representations. (67%)Gioacchino Tangari; Shreesh Keskar; Hassan Jameel Asghar; Dali Kaafar
Overload: Latency Attacks on Object Detection for Edge Devices. (33%)Erh-Chung Chen; Pin-Yu Chen; I-Hsin Chung; Che-rung Lee
Towards More Robust and Accurate Sequential Recommendation with Cascade-guided Adversarial Training. (9%)Juntao Tan; Shelby Heinecke; Zhiwei Liu; Yongjun Chen; Yongfeng Zhang; Huan Wang
2023-04-10
Generating Adversarial Attacks in the Latent Space. (98%)Nitish Shukla; Sudipta Banerjee
Reinforcement Learning-Based Black-Box Model Inversion Attacks. (67%)Gyojin Han; Jaehyun Choi; Haeil Lee; Junmo Kim
Defense-Prefix for Preventing Typographic Attacks on CLIP. (16%)Hiroki Azuma; Yusuke Matsui
Helix++: A platform for efficiently securing software. (1%)Jack W. Davidson; Jason D. Hiser; Anh Nguyen-Tuong
2023-04-09
Certifiable Black-Box Attack: Ensuring Provably Successful Attack for Adversarial Examples. (99%)Hanbin Hong; Yuan Hong
Adversarially Robust Neural Architecture Search for Graph Neural Networks. (80%)Beini Xie; Heng Chang; Ziwei Zhang; Xin Wang; Daixin Wang; Zhiqiang Zhang; Rex Ying; Wenwu Zhu
Unsupervised Multi-Criteria Adversarial Detection in Deep Image Retrieval. (68%)Yanru Xiao; Cong Wang; Xing Gao
2023-04-08
Robust Deep Learning Models Against Semantic-Preserving Adversarial Attack. (99%)Dashan Gao; Yunce Zhao; Yinghua Yao; Zeqi Zhang; Bifei Mao; Xin Yao
RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks. (98%)Alberto Marchisio; Antonio De Marco; Alessio Colucci; Maurizio Martina; Muhammad Shafique
Exploring the Connection between Robust and Generative Models. (67%)Senad Beadini; Iacopo Masi
Benchmarking the Robustness of Quantized Models. (47%)Yisong Xiao; Tianyuan Zhang; Shunchang Liu; Haotong Qin
Attack is Good Augmentation: Towards Skeleton-Contrastive Representation Learning. (13%)Binqian Xu; Xiangbo Shu; Rui Yan; Guo-Sen Xie; Yixiao Ge; Mike Zheng Shou
Deep Prototypical-Parts Ease Morphological Kidney Stone Identification and are Competitively Robust to Photometric Perturbations. (4%)Daniel Flores-Araiza; Francisco Lopez-Tiro; Jonathan El-Beze; Jacques Hubert; Miguel Gonzalez-Mendoza; Gilberto Ochoa-Ruiz; Christian Daul
EMP-SSL: Towards Self-Supervised Learning in One Training Epoch. (1%)Shengbang Tong; Yubei Chen; Yi Ma; Yann Lecun
2023-04-07
Architecture-Preserving Provable Repair of Deep Neural Networks. (1%)Zhe Tao; Stephanie Nawas; Jacqueline Mitchell; Aditya V. Thakur
ASPEST: Bridging the Gap Between Active Learning and Selective Prediction. (1%)Jiefeng Chen; Jinsung Yoon; Sayna Ebrahimi; Sercan Arik; Somesh Jha; Tomas Pfister
2023-04-06
Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding. (45%)Yuke Hu; Wei Liang; Ruofan Wu; Kai Xiao; Weiqiang Wang; Xiaochen Li; Jinfei Liu; Zhan Qin
Manipulating Federated Recommender Systems: Poisoning with Synthetic Users and Its Countermeasures. (45%)Wei Yuan; Quoc Viet Hung Nguyen; Tieke He; Liang Chen; Hongzhi Yin
Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions. (10%)Jia-Hong Huang; Modar Alfadly; Bernard Ghanem; Marcel Worring
EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles. (4%)Jonah O'Brien Weiss; Tiago Alves; Sandip Kundu
Evaluating the Robustness of Machine Reading Comprehension Models to Low Resource Entity Renaming. (2%)Clemencia Siro; Tunde Oluwaseyi Ajayi
Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning. (1%)Jae-Hun Lee; Doyoung Yoon; ByeongMoon Ji; Kyungyul Kim; Sangheum Hwang
Reliable Learning for Test-time Attacks and Distribution Shift. (1%)Maria-Florina Balcan; Steve Hanneke; Rattana Pukdee; Dravyansh Sharma
Benchmarking Robustness to Text-Guided Corruptions. (1%)Mohammadreza Mofayezi; Yasamin Medghalchi
2023-04-05
A Certified Radius-Guided Attack Framework to Image Segmentation Models. (99%)Wenjie Qu; Youqi Li; Binghui Wang
Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability. (99%)Martin Gubri; Maxime Cordy; Yves Le Traon
How to choose your best allies for a transferable attack? (99%)Thibault Maho; Seyed-Mohsen Moosavi-Dezfooli; Teddy Furon
Robust Neural Architecture Search. (92%)Xunyu Zhu; Jian Li; Yong Liu; Weiping Wang
Hyper-parameter Tuning for Adversarially Robust Models. (62%)Pedro Mendes; Paolo Romano; David Garlan
JPEG Compressed Images Can Bypass Protections Against AI Editing. (15%)Pedro Sandoval-Segura; Jonas Geiping; Tom Goldstein
FACE-AUDITOR: Data Auditing in Facial Recognition Systems. (1%)Min Chen; Zhikun Zhang; Tianhao Wang; Michael Backes; Yang Zhang
2023-04-04
CGDTest: A Constrained Gradient Descent Algorithm for Testing Neural Networks. (31%)Vineel Nagisetty; Laura Graves; Guanting Pan; Piyush Jha; Vijay Ganesh
Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher. (1%)Jiawei Shao; Fangzhao Wu; Jun Zhang
EGC: Image Generation and Classification via a Single Energy-Based Model. (1%)Qiushan Guo; Chuofan Ma; Yi Jiang; Zehuan Yuan; Yizhou Yu; Ping Luo
2023-04-03
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning. (76%)Ajinkya Tejankar; Maziar Sanjabi; Qifan Wang; Sinong Wang; Hamed Firooz; Hamed Pirsiavash; Liang Tan
Model-Agnostic Reachability Analysis on Deep Neural Networks. (75%)Chi Zhang; Wenjie Ruan; Fu Wang; Peipei Xu; Geyong Min; Xiaowei Huang
NetFlick: Adversarial Flickering Attacks on Deep Learning Based Video Compression. (69%)Jung-Woo Chang; Nojan Sheybani; Shehzeen Samarah Hussain; Mojan Javaheripi; Seira Hidano; Farinaz Koushanfar
Learning About Simulated Adversaries from Human Defenders using Interactive Cyber-Defense Games. (1%)Baptiste Prebot; Yinuo Du; Cleotilde Gonzalez
2023-04-01
GradMDM: Adversarial Attack on Dynamic Networks. (84%)Jianhong Pan; Lin Geng Foo; Qichen Zheng; Zhipeng Fan; Hossein Rahmani; Qiuhong Ke; Jun Liu
Instance-level Trojan Attacks on Visual Question Answering via Adversarial Learning in Neuron Activation Space. (61%)Yuwei Sun; Hideya Ochiai; Jun Sakuma
2023-03-31
Improving Fast Adversarial Training with Prior-Guided Knowledge. (99%)Xiaojun Jia; Yong Zhang; Xingxing Wei; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
To be Robust and to be Fair: Aligning Fairness with Robustness. (93%)Junyi Chai; Xiaoqian Wang
Fooling Polarization-based Vision using Locally Controllable Polarizing Projection. (91%)Zhuoxiao Li; Zhihang Zhong; Shohei Nobuhara; Ko Nishino; Yinqiang Zheng
Per-Example Gradient Regularization Improves Learning Signals from Noisy Data. (3%)Xuran Meng; Yuan Cao; Difan Zou
Secure Federated Learning against Model Poisoning Attacks via Client Filtering. (2%)Duygu Nur Yaldiz; Tuo Zhang; Salman Avestimehr
DIME-FM: DIstilling Multimodal and Efficient Foundation Models. (1%)Ximeng Sun; Pengchuan Zhang; Peizhao Zhang; Hardik Shah; Kate Saenko; Xide Xia
A Generative Framework for Low-Cost Result Validation of Outsourced Machine Learning Tasks. (1%)Abhinav Kumar; Miguel A. Guirao Aguilera; Reza Tourani; Satyajayant Misra
2023-03-30
Adversarial Attack and Defense for Dehazing Networks. (97%)Jie Gui; Xiaofeng Cong; Chengwei Peng; Yuan Yan Tang; James Tin-Yau Kwok
Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness. (96%)Timothy Redgrave; Colton Crum
Towards Adversarially Robust Continual Learning. (95%)Tao Bai; Chen Chen; Lingjuan Lyu; Jun Zhao; Bihan Wen
Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving. (81%)Zijian Zhu; Yichi Zhang; Hai Chen; Yinpeng Dong; Shu Zhao; Wenbo Ding; Jiachen Zhong; Shibao Zheng
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions. (2%)Lingdong Kong; Youquan Liu; Xin Li; Runnan Chen; Wenwei Zhang; Jiawei Ren; Liang Pan; Kai Chen; Ziwei Liu
Establishing baselines and introducing TernaryMixOE for fine-grained out-of-distribution detection. (1%)Noah Fleischmann; Walter Bennette; Nathan Inkawhich
Explainable Intrusion Detection Systems Using Competitive Learning Techniques. (1%)Jesse Ables; Thomas Kirby; Sudip Mittal; Ioana Banicescu; Shahram Rahimi; William Anderson; Maria Seale
Differential Area Analysis for Ransomware: Attacks, Countermeasures, and Limitations. (1%)Marco Venturini; Francesco Freda; Emanuele Miotto; Alberto Giaretta; Mauro Conti
2023-03-29
Latent Feature Relation Consistency for Adversarial Robustness. (99%)Xingbin Liu; Huafeng Kuang; Hong Liu; Xianming Lin; Yongjian Wu; Rongrong Ji
Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness. (99%)Wei Wei; Jiahuan Zhou; Ying Wu
Targeted Adversarial Attacks on Wind Power Forecasts. (88%)René Heinrich; Christoph Scholz; Stephan Vogt; Malte Lehna
Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias. (67%)Zihan Liu; Yun Luo; Lirong Wu; Zicheng Liu; Stan Z. Li
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing. (56%)Xiaodan Li; Yuefeng Chen; Yao Zhu; Shuhui Wang; Rong Zhang; Hui Xue
Graph Neural Networks for Hardware Vulnerability Analysis -- Can you Trust your GNN? (16%)Lilas Alrahis; Ozgur Sinanoglu
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling. (10%)Ethan Wisdom; Tejas Gokhale; Chaowei Xiao; Yezhou Yang
A Tensor-based Convolutional Neural Network for Small Dataset Classification. (2%)Zhenhua Chen; David Crandall
ALUM: Adversarial Data Uncertainty Modeling from Latent Model Uncertainty Compensation. (1%)Wei Wei; Jiahuan Zhou; Hongze Li; Ying Wu
2023-03-28
A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion. (99%)Haomin Zhuang; Yihua Zhang; Sijia Liu
Improving the Transferability of Adversarial Samples by Path-Augmented Method. (99%)Jianping Zhang; Jen-tse Huang; Wenxuan Wang; Yichen Li; Weibin Wu; Xiaosen Wang; Yuxin Su; Michael R. Lyu
Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition. (99%)Xiao Yang; Chang Liu; Longlong Xu; Yikai Wang; Yinpeng Dong; Ning Chen; Hang Su; Jun Zhu
Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization. (98%)Jianping Zhang; Yizhan Huang; Weibin Wu; Michael R. Lyu
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm. (98%)Bakary Badjie; José Cecílio; António Casimiro
TransAudio: Towards the Transferable Adversarial Audio Attack via Learning Contextualized Perturbations. (98%)Qi Gege; Yuefeng Chen; Xiaofeng Mao; Yao Zhu; Binyuan Hui; Xiaodan Li; Rong Zhang; Hui Xue
A Survey on Malware Detection with Graph Representation Learning. (41%)Tristan Bilot; Nour El Madhoun; Khaldoun Al Agha; Anis Zouaoui
Provable Robustness for Streaming Models with a Sliding Window. (15%)Aounon Kumar; Vinu Sankar Sadasivan; Soheil Feizi
Machine-learned Adversarial Attacks against Fault Prediction Systems in Smart Electrical Grids. (9%)Carmelo Ardito; Yashar Deldjoo; Tommaso Di Noia; Eugenio Di Sciascio; Fatemeh Nazary; Giovanni Servedio
On the Use of Reinforcement Learning for Attacking and Defending Load Frequency Control. (3%)Amr S. Mohamed; Deepa Kundur
A Universal Identity Backdoor Attack against Speaker Verification based on Siamese Network. (1%)Haodong Zhao; Wei Du; Junjie Guo; Gongshen Liu
2023-03-27
Classifier Robustness Enhancement Via Test-Time Transformation. (99%)Tsachi Blau; Roy Ganz; Chaim Baskin; Michael Elad; Alex Bronstein
Improving the Transferability of Adversarial Examples via Direction Tuning. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
EMShepherd: Detecting Adversarial Samples via Side-channel Leakage. (99%)Ruyi Ding; Cheng Gongye; Siyue Wang; Aidong Ding; Yunsi Fei
Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks. (97%)Tianrui Qin; Xitong Gao; Juanjuan Zhao; Kejiang Ye; Cheng-Zhong Xu
Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency. (76%)Xiaogeng Liu; Minghui Li; Haoyu Wang; Shengshan Hu; Dengpan Ye; Hai Jin; Libing Wu; Chaowei Xiao
CAT:Collaborative Adversarial Training. (69%)Xingbin Liu; Huafeng Kuang; Xianming Lin; Yongjian Wu; Rongrong Ji
Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection. (67%)Nicola Franco; Daniel Korth; Jeanette Miriam Lorenz; Karsten Roscher; Stephan Guennemann
Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation. (41%)Yang Lu; Pinxin Qian; Gang Huang; Hanzi Wang
Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder. (41%)Tao Sun; Lu Pang; Chao Chen; Haibin Ling
Sequential training of GANs against GAN-classifiers reveals correlated "knowledge gaps" present among independently trained GAN instances. (41%)Arkanath Pathak; Nicholas Dufour
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis. (5%)Thanh Van Le; Hao Phung; Thuan Hoang Nguyen; Quan Dao; Ngoc Tran; Anh Tran
2023-03-26
MGTBench: Benchmarking Machine-Generated Text Detection. (26%)Xinlei He; Xinyue Shen; Zeyuan Chen; Michael Backes; Yang Zhang
2023-03-25
AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking. (99%)Ruoxi Chen; Haibo Jin; Jinyin Chen; Haibin Zheng
CFA: Class-wise Calibrated Fair Adversarial Training. (98%)Zeming Wei; Yifei Wang; Yiwen Guo; Yisen Wang
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. (68%)Jinyuan Jia; Yupei Liu; Yuepeng Hu; Neil Zhenqiang Gong
Improving robustness of jet tagging algorithms with adversarial training: exploring the loss surface. (12%)Annika Stein
2023-03-24
PIAT: Parameter Interpolation based Adversarial Training for Image Classification. (99%)Kun He; Xin Liu; Yichen Yang; Zhou Qin; Weigao Wen; Hui Xue; John E. Hopcroft
How many dimensions are required to find an adversarial example? (99%)Charles Godfrey; Henry Kvinge; Elise Bishoff; Myles Mckay; Davis Brown; Tim Doster; Eleanor Byler
Effective black box adversarial attack with handcrafted kernels. (99%)Petr Dvořáček; Petr Hurtik; Petra Števuliáková
Adversarial Attack and Defense for Medical Image Analysis: Methods and Applications. (99%)Junhao Dong; Junxi Chen; Xiaohua Xie; Jianhuang Lai; Hao Chen
Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing. (99%)Lin Li; Michael Spratling
Feature Separation and Recalibration for Adversarial Robustness. (98%)Woo Jae Kim; Yoonki Cho; Junsik Jung; Sung-Eui Yoon
Physically Adversarial Infrared Patches with Learnable Shapes and Locations. (97%)Wei Xingxing; Yu Jie; Huang Yao
Generalist: Decoupling Natural and Robust Generalization. (96%)Hongjun Wang; Yisen Wang
Ensemble-based Blackbox Attacks on Dense Prediction. (86%)Zikui Cai; Yaoteng Tan; M. Salman Asif
Backdoor Attacks with Input-unique Triggers in NLP. (54%)Xukun Zhou; Jiwei Li; Tianwei Zhang; Lingjuan Lyu; Muqiao Yang; Jun He
PoisonedGNN: Backdoor Attack on Graph Neural Networks-based Hardware Security Systems. (22%)Lilas Alrahis; Satwik Patnaik; Muhammad Abdullah Hanif; Muhammad Shafique; Ozgur Sinanoglu
Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck. (5%)Jongheon Jeong; Sihyun Yu; Hankook Lee; Jinwoo Shin
Optimal Smoothing Distribution Exploration for Backdoor Neutralization in Deep Learning-based Traffic Systems. (2%)Yue Wang; Wending Li; Michail Maniatakos; Saif Eddin Jabari
TRAK: Attributing Model Behavior at Scale. (1%)Sung Min Park; Kristian Georgiev; Andrew Ilyas; Guillaume Leclerc; Aleksander Madry
2023-03-23
Watch Out for the Confusing Faces: Detecting Face Swapping with the Probability Distribution of Face Identification Models. (68%)Yuxuan Duan; Xuhong Zhang; Chuer Yu; Zonghui Wang; Shouling Ji; Wenzhi Chen
Quadratic Graph Attention Network (Q-GAT) for Robust Construction of Gene Regulatory Networks. (50%)Hui Zhang; Xuexin An; Qiang He; Yudong Yao; Feng-Lei Fan; Yueyang Teng
Optimization and Optimizers for Adversarial Robustness. (41%)Hengyue Liang; Buyun Liang; Le Peng; Ying Cui; Tim Mitchell; Ju Sun
Adversarial Robustness and Feature Impact Analysis for Driver Drowsiness Detection. (41%)João Vitorino; Lourenço Rodrigues; Eva Maia; Isabel Praça; André Lourenço
Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense. (15%)Kalpesh Krishna; Yixiao Song; Marzena Karpinska; John Wieting; Mohit Iyyer
Decentralized Adversarial Training over Graphs. (13%)Ying Cao; Elsa Rizk; Stefan Vlaski; Ali H. Sayed
Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs. (8%)Hasan Abed Al Kader Hammoud; Adel Bibi; Philip H. S. Torr; Bernard Ghanem
Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness. (1%)Huajie Chen; Tianqing Zhu; Yuan Zhao; Bo Liu; Xin Yu; Wanlei Zhou
Efficient Symbolic Reasoning for Neural-Network Verification. (1%)Zi Wang; Somesh Jha; Krishnamurthy Dvijotham
2023-03-22
Reliable and Efficient Evaluation of Adversarial Robustness for Deep Hashing-Based Retrieval. (99%)Xunguang Wang; Jiawang Bai; Xinyue Xu; Xiaomeng Li
Semantic Image Attack for Visual Model Diagnosis. (99%)Jinqi Luo; Zhaoning Wang; Chen Henry Wu; Dong Huang; Fernando De la Torre
Revisiting DeepFool: generalization and improvement. (99%)Alireza Abdollahpourrostam; Mahed Abroshan; Seyed-Mohsen Moosavi-Dezfooli
Wasserstein Adversarial Examples on Univariant Time Series Data. (99%)Wenjie Wang; Li Xiong; Jian Lou
Test-time Defense against Adversarial Attacks: Detection and Reconstruction of Adversarial Examples via Masked Autoencoder. (99%)Yun-Yun Tsai; Ju-Chin Chao; Albert Wen; Zhaoyuan Yang; Chengzhi Mao; Tapan Shah; Junfeng Yang
Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition. (78%)Zexin Li; Bangjie Yin; Taiping Yao; Juefeng Guo; Shouhong Ding; Simin Chen; Cong Liu
An Extended Study of Human-like Behavior under Adversarial Training. (76%)Paul Gavrikov; Janis Keuper; Margret Keuper
Distribution-restrained Softmax Loss for the Model Robustness. (38%)Hao Wang; Chen Li; Jinzhe Jiang; Xin Zhang; Yaqian Zhao; Weifeng Gong
Backdoor Defense via Adaptively Splitting Poisoned Dataset. (16%)Kuofeng Gao; Yang Bai; Jindong Gu; Yong Yang; Shu-Tao Xia
Edge Deep Learning Model Protection via Neuron Authorization. (11%)Jinyin Chen; Haibin Zheng; Tao Liu; Rongchang Li; Yao Cheng; Xuhong Zhang; Shouling Ji
2023-03-21
State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems. (99%)Junbin Fang; You Jiang; Canjian Jiang; Zoe L. Jiang; Siu-Ming Yiu; Chuanyi Liu
Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems. (99%)Yao Zhu; Yuefeng Chen; Xiaodan Li; Rong Zhang; Xiang Tian; Bolun Zheng; Yaowu Chen
OTJR: Optimal Transport Meets Optimal Jacobian Regularization for Adversarial Robustness. (99%)Binh M. Le; Shahroz Tariq; Simon S. Woo
Efficient Decision-based Black-box Patch Attacks on Video Recognition. (98%)Kaixun Jiang; Zhaoyu Chen; Hao Huang; Jiafeng Wang; Dingkang Yang; Bo Li; Yan Wang; Wenqiang Zhang
Black-box Backdoor Defense via Zero-shot Image Purification. (86%)Yucheng Shi; Mengnan Du; Xuansheng Wu; Zihan Guan; Jin Sun; Ninghao Liu
Model Robustness Meets Data Privacy: Adversarial Robustness Distillation without Original Data. (10%)Yuzheng Wang; Zhaoyu Chen; Dingkang Yang; Pinxue Guo; Kaixun Jiang; Wenqiang Zhang; Lizhe Qi
LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation. (9%)Joshua C. Zhao; Atul Sharma; Ahmed Roushdy Elkordy; Yahya H. Ezzeldin; Salman Avestimehr; Saurabh Bagchi
Influencer Backdoor Attack on Semantic Segmentation. (8%)Haoheng Lan; Jindong Gu; Philip Torr; Hengshuang Zhao
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-enabled IoTs: An Anticipatory Study. (1%)Mohamed Amine Ferrag; Burak Kantarci; Lucas C. Cordeiro; Merouane Debbah; Kim-Kwang Raymond Choo
2023-03-20
TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization. (99%)Ziquan Liu; Yi Xu; Xiangyang Ji; Antoni B. Chan
Adversarial Attacks against Binary Similarity Systems. (99%)Gianluca Capozzi; Daniele Cono D'Elia; Giuseppe Antonio Di Luna; Leonardo Querzoni
DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness. (99%)Shoumik Saha; Wenxiao Wang; Yigitcan Kaya; Soheil Feizi; Tudor Dumitras
Translate your gibberish: black-box adversarial attack on machine translation systems. (83%)Andrei Chertkov; Olga Tsymboi; Mikhail Pautov; Ivan Oseledets
GNN-Ensemble: Towards Random Decision Graph Neural Networks. (56%)Wenqi Wei; Mu Qiao; Divyesh Jadav
Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving. (41%)Yinpeng Dong; Caixin Kang; Jinlai Zhang; Zijian Zhu; Yikai Wang; Xiao Yang; Hang Su; Xingxing Wei; Jun Zhu
Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking. (9%)Ruixiang Tang; Qizhang Feng; Ninghao Liu; Fan Yang; Xia Hu
Boosting Semi-Supervised Learning by Exploiting All Unlabeled Data. (2%)Yuhao Chen; Xin Tan; Borui Zhao; Zhaowei Chen; Renjie Song; Jiajun Liang; Xuequan Lu
Make Landscape Flatter in Differentially Private Federated Learning. (1%)Yifan Shi; Yingqi Liu; Kang Wei; Li Shen; Xueqian Wang; Dacheng Tao
Robustifying Token Attention for Vision Transformers. (1%)Yong Guo; David Stutz; Bernt Schiele
2023-03-19
Randomized Adversarial Training via Taylor Expansion. (99%)Gaojie Jin; Xinping Yi; Dengyu Wu; Ronghui Mu; Xiaowei Huang
AdaptGuard: Defending Against Universal Attacks for Model Adaptation. (82%)Lijun Sheng; Jian Liang; Ran He; Zilei Wang; Tieniu Tan
2023-03-18
NoisyHate: Benchmarking Content Moderation Machine Learning Models with Human-Written Perturbations Online. (98%)Yiran Ye; Thai Le; Dongwon Lee
FedRight: An Effective Model Copyright Protection for Federated Learning. (96%)Jinyin Chen; Mingjun Li; Haibin Zheng
2023-03-17
Fuzziness-tuned: Improving the Transferability of Adversarial Examples. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness. (99%)Peiyu Xiong; Michael Tegegn; Jaskeerat Singh Sarin; Shubhraneel Pal; Julia Rubin
Robust Mode Connectivity-Oriented Adversarial Defense: Enhancing Neural Network Robustness Against Diversified $\ell_p$ Attacks. (99%)Ren Wang; Yuxuan Li; Sijia Liu
Detection of Uncertainty in Exceedance of Threshold (DUET): An Adversarial Patch Localizer. (83%)Terence Jie Chua; Wenhan Yu; Jun Zhao
Can AI-Generated Text be Reliably Detected? (33%)Vinu Sankar Sadasivan; Aounon Kumar; Sriram Balasubramanian; Wenxiao Wang; Soheil Feizi
Adversarial Counterfactual Visual Explanations. (31%)Guillaume Jeanneret; Loïc Simon; Frédéric Jurie
MedLocker: A Transferable Adversarial Watermarking for Preventing Unauthorized Analysis of Medical Image Dataset. (16%)Bangzheng Pu; Xingxing Wei; Shiji Zhao; Huazhu Fu
Mobile Edge Adversarial Detection for Digital Twinning to the Metaverse with Deep Reinforcement Learning. (9%)Terence Jie Chua; Wenhan Yu; Jun Zhao
Moving Target Defense for Service-oriented Mission-critical Networks. (1%)Doğanalp Ergenç; Florian Schneider; Peter Kling; Mathias Fischer
2023-03-16
Rethinking Model Ensemble in Transfer-based Adversarial Attacks. (99%)Huanran Chen; Yichi Zhang; Yinpeng Dong; Jun Zhu
Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations. (68%)Lukas Struppek; Dominik Hintersdorf; Felix Friedrich; Manuel Brack; Patrick Schramowski; Kristian Kersting
Among Us: Adversarially Robust Collaborative Perception by Consensus. (67%)Yiming Li; Qi Fang; Jiamu Bai; Siheng Chen; Felix Juefei-Xu; Chen Feng
Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks. (50%)Qifan Xiao; Xudong Pan; Yifan Lu; Mi Zhang; Jiarun Dai; Min Yang
Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation. (11%)Yifan Yan; Xudong Pan; Mi Zhang; Min Yang
2023-03-15
Black-box Adversarial Example Attack towards FCG Based Android Malware Detection under Incomplete Feature Information. (99%)Heng Li; Zhang Cheng; Bang Wu; Liheng Yuan; Cuiying Gao; Wei Yuan; Xiapu Luo
Robust Evaluation of Diffusion-Based Adversarial Purification. (83%)Minjong Lee; Dongwoo Kim
DeeBBAA: A benchmark Deep Black Box Adversarial Attack against Cyber-Physical Power Systems. (81%)Arnab Bhattacharjee; Tapan K. Saha; Ashu Verma; Sukumar Mishra
The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models. (62%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models. (45%)Ian E. Nielsen; Ravi P. Ramachandran; Nidhal Bouaynaya; Hassan M. Fathallah-Shaykh; Ghulam Rasool
Certifiable (Multi)Robustness Against Patch Attacks Using ERM. (10%)Saba Ahmadi; Avrim Blum; Omar Montasser; Kevin Stangl
Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement. (1%)Fartash Faghri; Hadi Pouransari; Sachin Mehta; Mehrdad Farajtabar; Ali Farhadi; Mohammad Rastegari; Oncel Tuzel
2023-03-14
Verifying the Robustness of Automatic Credibility Assessment. (99%)Piotr Przybyła; Alexander Shvets; Horacio Saggion
Resilient Dynamic Average Consensus based on Trusted agents. (69%)Shamik Bhattacharyya; Rachel Kalpana Kalaimani
Improving Adversarial Robustness with Hypersphere Embedding and Angular-based Regularizations. (31%)Olukorede Fakorede; Ashutosh Nirala; Modeste Atsague; Jin Tian
2023-03-13
Constrained Adversarial Learning and its applicability to Automated Software Testing: a systematic review. (99%)João Vitorino; Tiago Dias; Tiago Fonseca; Eva Maia; Isabel Praça
Can Adversarial Examples Be Parsed to Reveal Victim Model Information? (99%)Yuguang Yao; Jiancheng Liu; Yifan Gong; Xiaoming Liu; Yanzhi Wang; Xue Lin; Sijia Liu
Review on the Feasibility of Adversarial Evasion Attacks and Defenses for Network Intrusion Detection Systems. (99%)Islam Debicha; Benjamin Cochez; Tayeb Kenaza; Thibault Debatty; Jean-Michel Dricot; Wim Mees
SMUG: Towards robust MRI reconstruction by smoothed unrolling. (98%)Hui Li; Jinghan Jia; Shijun Liang; Yuguang Yao; Saiprasad Ravishankar; Sijia Liu
Model-tuning Via Prompts Makes NLP Models Adversarially Robust. (93%)Mrigank Raman; Pratyush Maini; J. Zico Kolter; Zachary C. Lipton; Danish Pruthi
Robust Contrastive Language-Image Pretraining against Adversarial Attacks. (76%)Wenhan Yang; Baharan Mirzasoleiman
Model Extraction Attacks on Split Federated Learning. (47%)Jingtao Li; Adnan Siraj Rakin; Xing Chen; Li Yang; Zhezhi He; Deliang Fan; Chaitali Chakrabarti
WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminative Analysis. (1%)Yiye Chen; Yunzhi Lin; Ruinian Xu; Patricio A. Vela
2023-03-12
Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems. (99%)Islam Debicha; Benjamin Cochez; Tayeb Kenaza; Thibault Debatty; Jean-Michel Dricot; Wim Mees
Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality. (99%)Weiquan Liu; Shijun Zheng; Cheng Wang
DNN-Alias: Deep Neural Network Protection Against Side-Channel Attacks via Layer Balancing. (96%)Mahya Morid Ahmadi; Lilas Alrahis; Ozgur Sinanoglu; Muhammad Shafique
Multi-metrics adaptively identifies backdoors in Federated learning. (92%)Siquan Huang; Yijiang Li; Chong Chen; Leyu Shi; Ying Gao
Adversarial Attacks to Direct Data-driven Control for Destabilization. (91%)Hampei Sasahara
Backdoor Defense via Deconfounded Representation Learning. (83%)Zaixi Zhang; Qi Liu; Zhicai Wang; Zepu Lu; Qingyong Hu
Interpreting Hidden Semantics in the Intermediate Layers of 3D Point Cloud Classification Neural Network. (76%)Weiquan Liu; Minghao Liu; Shijun Zheng; Cheng Wang
Boosting Source Code Learning with Data Augmentation: An Empirical Study. (11%)Zeming Dong; Qiang Hu; Yuejun Guo; Zhenya Zhang; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
2023-03-11
Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning. (99%)Jin Ding; Jie-Chao Zhao; Yong-Zhi Sun; Ping Tan; Ji-En Ma; You-Tong Fang
SHIELD: An Adaptive and Lightweight Defense against the Remote Power Side-Channel Attacks on Multi-tenant FPGAs. (8%)Mahya Morid Ahmadi; Faiq Khalid; Radha Vaidya; Florian Kriebel; Andreas Steininger; Muhammad Shafique
2023-03-10
Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks. (99%)Binghui Wang; Meng Pang; Yun Dong
Boosting Adversarial Attacks by Leveraging Decision Boundary Information. (99%)Boheng Zeng; LianLi Gao; QiLong Zhang; ChaoQun Li; JingKuan Song; ShuaiQi Jing
Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey. (99%)Yulong Wang; Tong Sun; Shenghong Li; Xin Yuan; Wei Ni; Ekram Hossain; H. Vincent Poor
Investigating Stateful Defenses Against Black-Box Adversarial Examples. (99%)Ryan Feng; Ashish Hooda; Neal Mangaokar; Kassem Fawaz; Somesh Jha; Atul Prakash
MIXPGD: Hybrid Adversarial Training for Speech Recognition Systems. (99%)Aminul Huq; Weiyi Zhang; Xiaolin Hu
Do we need entire training data for adversarial training? (99%)Vipul Gupta; Apurva Narayan
TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets. (61%)Weixin Chen; Dawn Song; Bo Li
Adapting Contrastive Language-Image Pretrained (CLIP) Models for Out-of-Distribution Detection. (13%)Nikolas Adaloglou; Felix Michels; Tim Kaiser; Markus Kollmann
2023-03-09
NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks. (99%)Wenkai Tan; Justus Renkhoff; Alvaro Velasquez; Ziyu Wang; Lusi Li; Jian Wang; Shuteng Niu; Fan Yang; Yongxin Liu; Houbing Song
Evaluating the Robustness of Conversational Recommender Systems by Adversarial Examples. (92%)Ali Montazeralghaem; James Allan
Identification of Systematic Errors of Image Classifiers on Rare Subgroups. (83%)Jan Hendrik Metzen; Robin Hutmacher; N. Grace Hua; Valentyn Boreiko; Dan Zhang
Learning the Legibility of Visual Text Perturbations. (78%)Dev Seth; Rickard Stureborg; Danish Pruthi; Bhuwan Dhingra
Efficient Certified Training and Robustness Verification of Neural ODEs. (75%)Mustafa Zeqiri; Mark Niklas Müller; Marc Fischer; Martin Vechev
Feature Unlearning for Pre-trained GANs and VAEs. (62%)Saemi Moon; Seunghyuk Cho; Dongwoo Kim
2023-03-08
Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples. (99%)Jinwei Wang; Hao Wu; Haihua Wang; Jiawei Zhang; Xiangyang Luo; Bin Ma
Decision-BADGE: Decision-based Adversarial Batch Attack with Directional Gradient Estimation. (99%)Geunhyeok Yu; Minwoo Jeon; Hyoseok Hwang
Exploring Adversarial Attacks on Neural Networks: An Explainable Approach. (99%)Justus Renkhoff; Wenkai Tan; Alvaro Velasquez; William Yichen Wang; Yongxin Liu; Jian Wang; Shuteng Niu; Lejla Begic Fazlic; Guido Dartmann; Houbing Song
BeamAttack: Generating High-quality Textual Adversarial Examples through Beam Search and Mixed Semantic Spaces. (99%)Hai Zhu; Qingyang Zhao; Yuren Wu
DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep Neural Networks. (3%)Zohreh Aghababaeyan; Manel Abdellatif; Mahboubeh Dadkhah; Lionel Briand
2023-03-07
Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration. (99%)Juanjuan Weng; Zhiming Luo; Zhun Zhong; Shaozi Li; Nicu Sebe
Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors. (98%)Raz Lapid; Moshe Sipper
Robustness-preserving Lifelong Learning via Dataset Condensation. (96%)Jinghan Jia; Yihua Zhang; Dogyoon Song; Sijia Liu; Alfred Hero
CUDA: Convolution-based Unlearnable Datasets. (82%)Vinu Sankar Sadasivan; Mahdi Soltanolkotabi; Soheil Feizi
EavesDroid: Eavesdropping User Behaviors via OS Side-Channels on Smartphones. (11%)Quancheng Wang; Ming Tang; Jianming Fu
Stabilized training of joint energy-based models and their practical applications. (2%)Martin Sustek; Samik Sadhu; Lukas Burget; Hynek Hermansky; Jesus Villalba; Laureano Moro-Velazquez; Najim Dehak
2023-03-06
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning. (41%)Hritik Bansal; Nishad Singhi; Yu Yang; Fan Yin; Aditya Grover; Kai-Wei Chang
Students Parrot Their Teachers: Membership Inference on Model Distillation. (31%)Matthew Jagielski; Milad Nasr; Christopher Choquette-Choo; Katherine Lee; Nicholas Carlini
On the Feasibility of Specialized Ability Extracting for Large Language Code Models. (22%)Zongjie Li; Chaozheng Wang; Pingchuan Ma; Chaowei Liu; Shuai Wang; Daoyuan Wu; Cuiyun Gao
A Unified Algebraic Perspective on Lipschitz Neural Networks. (15%)Alexandre Araujo; Aaron Havens; Blaise Delattre; Alexandre Allauzen; Bin Hu
Learning to Backdoor Federated Learning. (15%)Henger Li; Chen Wu; Sencun Zhu; Zizhan Zheng
Partial-Information, Longitudinal Cyber Attacks on LiDAR in Autonomous Vehicles. (10%)R. Spencer Hallyburton; Qingzhao Zhang; Z. Morley Mao; Miroslav Pajic
ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning. (1%)Animesh Basak Chowdhury; Lilas Alrahis; Luca Collini; Johann Knechtel; Ramesh Karri; Siddharth Garg; Ozgur Sinanoglu; Benjamin Tan
Rethinking Confidence Calibration for Failure Prediction. (1%)Fei Zhu; Zhen Cheng; Xu-Yao Zhang; Cheng-Lin Liu
2023-03-05
Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models. (99%)Hassan Ali; Muhammad Atif Butt; Fethi Filali; Ala Al-Fuqaha; Junaid Qadir
Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks. (99%)Yiran Li; Junpeng Wang; Takanori Fujiwara; Kwan-Liu Ma
Adversarial Sampling for Fairness Testing in Deep Neural Network. (98%)Tosin Ige; William Marfo; Justin Tonkinson; Sikiru Adewale; Bolanle Hafiz Matti
Local Environment Poisoning Attacks on Federated Reinforcement Learning. (12%)Evelyn Ma; Rasoul Etesami
Robustness, Evaluation and Adaptation of Machine Learning Models in the Wild. (10%)Vihari Piratla
Knowledge-Based Counterfactual Queries for Visual Question Answering. (3%)Theodoti Stoikou; Maria Lymperaiou; Giorgos Stamou
2023-03-04
Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes. (68%)Thomas Philippon; Christian Gagné
2023-03-03
PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. (91%)Jinghuai Zhang; Jinyuan Jia; Hongbin Liu; Neil Zhenqiang Gong
Certified Robust Neural Networks: Generalization and Corruption Resistance. (82%)Amine Bennouna; Ryan Lucas; Bart Van Parys
Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions. (47%)Thuy Dung Nguyen; Tuan Nguyen; Phi Le Nguyen; Hieu H. Pham; Khoa Doan; Kok-Seng Wong
Adversarial Attacks on Machine Learning in Embedded and IoT Platforms. (38%)Christian Westbrook; Sudeep Pasricha
Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models. (33%)Naman D Singh; Francesco Croce; Matthias Hein
Stealthy Perception-based Attacks on Unmanned Aerial Vehicles. (16%)Amir Khazraei; Haocheng Meng; Miroslav Pajic
AdvART: Adversarial Art for Camouflaged Object Detection Attacks. (15%)Amira Guesmi; Ioan Marius Bilasco; Muhammad Shafique; Ihsen Alouani
TrojText: Test-time Invisible Textual Trojan Insertion. (2%)Qian Lou; Yepeng Liu; Bo Feng
2023-03-02
Defending against Adversarial Audio via Diffusion Model. (99%)Shutong Wu; Jiongxiao Wang; Wei Ping; Weili Nie; Chaowei Xiao
Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression. (99%)Junho Kim; Byung-Kwan Lee; Yong Man Ro
AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Muhammad Shafique
APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Ihsen Alouani; Muhammad Shafique
Targeted Adversarial Attacks against Neural Machine Translation. (98%)Sahar Sadrizadeh; AmirHossein Dabiri Aghdam; Ljiljana Dolamic; Pascal Frossard
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks. (93%)Spencer Frei; Gal Vardi; Peter L. Bartlett; Nathan Srebro
Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks. (10%)Lennart Brocki; Neo Christopher Chung
D-Score: An Expert-Based Method for Assessing the Detectability of IoT-Related Cyber-Attacks. (3%)Yair Meidan; Daniel Benatar; Ron Bitton; Dan Avraham; Asaf Shabtai
Interpretable System Identification and Long-term Prediction on Time-Series Data. (1%)Xiaoyi Liu; Duxin Chen; Wenjia Wei; Xia Zhu; Wenwu Yu
Consistency Models. (1%)Yang Song; Prafulla Dhariwal; Mark Chen; Ilya Sutskever
CADeSH: Collaborative Anomaly Detection for Smart Homes. (1%)Yair Meidan; Dan Avraham; Hanan Libhaber; Asaf Shabtai
Conflict-Based Cross-View Consistency for Semi-Supervised Semantic Segmentation. (1%)Zicheng Wang; Zhen Zhao; Xiaoxia Xing; Dong Xu; Xiangyu Kong; Luping Zhou
2023-03-01
To Make Yourself Invisible with Adversarial Semantic Contours. (99%)Yichi Zhang; Zijian Zhu; Hang Su; Jun Zhu; Shibao Zheng; Yuan He; Hui Xue
Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Data Manifolds. (98%)Odelia Melamed; Gilad Yehudai; Gal Vardi
Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process. (95%)Mingze Ni; Zhensu Sun; Wei Liu
A Practical Upper Bound for the Worst-Case Attribution Deviations. (70%)Fan Wang; Adams Wai-Kin Kong
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning. (54%)Jianing Zhu; Jiangchao Yao; Tongliang Liu; Quanming Yao; Jianliang Xu; Bo Han
Poster: Sponge ML Model Attacks of Mobile Apps. (8%)Souvik Paul; Nicolas Kourtellis
DOLOS: A Novel Architecture for Moving Target Defense. (8%)Giulio Pagnotta; Fabio De Gaspari; Dorjan Hitaj; Mauro Andreolini; Michele Colajanni; Luigi V. Mancini
Mitigating Backdoors in Federated Learning with FLD. (2%)Yihang Lin; Pengyuan Zhou; Zhiqian Wu; Yong Liao
Competence-Based Analysis of Language Models. (1%)Adam Davies; Jize Jiang; ChengXiang Zhai
2023-02-28
A semantic backdoor attack against Graph Convolutional Networks. (98%)Jiazhu Dai; Zhipeng Xiong
Feature Extraction Matters More: Universal Deepfake Disruption through Attacking Ensemble Feature Extractors. (67%)Long Tang; Dengpan Ye; Zhenhao Lu; Yunming Zhang; Shengshan Hu; Yue Xu; Chuanxi Chen
Single Image Backdoor Inversion via Robust Smoothed Classifiers. (22%)Mingjie Sun; Zico Kolter
Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger. (11%)Yi Yu; Yufei Wang; Wenhan Yang; Shijian Lu; Yap-peng Tan; Alex C. Kot
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases. (1%)Chong Fu; Xuhong Zhang; Shouling Ji; Ting Wang; Peng Lin; Yanghe Feng; Jianwei Yin
2023-02-27
A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking. (99%)Chang Liu; Yinpeng Dong; Wenzhao Xiang; Xiao Yang; Hang Su; Jun Zhu; Yuefeng Chen; Yuan He; Hui Xue; Shibao Zheng
Adversarial Attack with Raindrops. (99%)Jiyuan Liu; Bingyi Lu; Mingkang Xiong; Tao Zhang; Huilin Xiong
Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign Recognition: A Feasibility Study. (99%)Fabian Woitschek; Georg Schneider
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks. (98%)Jialai Wang; Ziyuan Zhang; Meiqi Wang; Han Qiu; Tianwei Zhang; Qi Li; Zongpeng Li; Tao Wei; Chao Zhang
CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World. (98%)Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Shaohui Mei
Improving Model Generalization by On-manifold Adversarial Augmentation in the Frequency Domain. (96%)Chang Liu; Wenzhao Xiang; Yuan He; Hui Xue; Shibao Zheng; Hang Su
Efficient and Low Overhead Website Fingerprinting Attacks and Defenses based on TCP/IP Traffic. (83%)Guodong Huang; Chuan Ma; Ming Ding; Yuwen Qian; Chunpeng Ge; Liming Fang; Zhe Liu
GLOW: Global Layout Aware Attacks on Object Detection. (81%)Buyu Liu; BaoJun; Jianping Fan; Xi Peng; Kui Ren; Jun Yu
Online Black-Box Confidence Estimation of Deep Neural Networks. (16%)Fabian Woitschek; Georg Schneider
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks. (15%)Mohammad Mohammadi; Jonathan Nöther; Debmalya Mandal; Adish Singla; Goran Radanovic
Differentially Private Diffusion Models Generate Useful Synthetic Images. (10%)Sahra Ghalebikesabi; Leonard Berrada; Sven Gowal; Ira Ktena; Robert Stanforth; Jamie Hayes; Soham De; Samuel L. Smith; Olivia Wiles; Borja Balle
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation. (5%)Gaurav Patel; Konda Reddy Mopuri; Qiang Qiu
2023-02-26
Contextual adversarial attack against aerial detection in the physical world. (99%)Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Shaohui Mei
Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. (96%)Keane Lucas; Matthew Jagielski; Florian Tramèr; Lujo Bauer; Nicholas Carlini
2023-02-25
Deep Learning-based Multi-Organ CT Segmentation with Adversarial Data Augmentation. (99%)Shaoyan Pan; Shao-Yuan Lo; Min Huang; Chaoqiong Ma; Jacob Wynne; Tonghe Wang; Tian Liu; Xiaofeng Yang
Scalable Attribution of Adversarial Attacks via Multi-Task Learning. (99%)Zhongyi Guo; Keji Han; Yao Ge; Wei Ji; Yun Li
SATBA: An Invisible Backdoor Attack Based On Spatial Attention. (67%)Huasong Zhou; Xiaowei Xu; Xiaodong Wang; Leon Bevan Bullock
2023-02-24
Defending Against Backdoor Attacks by Layer-wise Feature Analysis. (68%)Najeeb Moharram Jebreel; Josep Domingo-Ferrer; Yiming Li
Chaotic Variational Auto encoder-based Adversarial Machine Learning. (54%)Pavan Venkata Sainadh Reddy; Yelleti Vivek; Gopi Pranay; Vadlamani Ravi
Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? (12%)Ruisi Cai; Zhenyu Zhang; Zhangyang Wang
2023-02-23
Less is More: Data Pruning for Faster Adversarial Training. (99%)Yize Li; Pu Zhao; Xue Lin; Bhavya Kailkhura; Ryan Goldhahn
A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots. (99%)Boyang Zhang; Xinlei He; Yun Shen; Tianhao Wang; Yang Zhang
Boosting Adversarial Transferability using Dynamic Cues. (99%)Muzammal Naseer; Ahmad Mahmood; Salman Khan; Fahad Khan
HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks. (98%)Chao Hu; Ruishi Yu; Binqi Zeng; Yu Zhan; Ying Fu; Quan Zhang; Rongkai Liu; Heyuan Shi
Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective. (84%)Zhengbao He; Tao Li; Sizhe Chen; Xiaolin Huang
More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models. (70%)Kai Greshake; Sahar Abdelnabi; Shailesh Mishra; Christoph Endres; Thorsten Holz; Mario Fritz
On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space. (68%)Yuyang Deng; Nidham Gazagnadou; Junyuan Hong; Mehrdad Mahdavi; Lingjuan Lyu
Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity. (2%)Khatoon Mohammed
2023-02-22
Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques. (99%)Saminder Dhesi; Laura Fontes; Pedro Machado; Isibor Kennedy Ihianle; Farhad Fassihi Tash; David Ada Adama
PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks. (98%)Deqiang Li; Shicheng Cui; Yun Li; Jia Xu; Fu Xiao; Shouhuai Xu
Feature Partition Aggregation: A Fast Certified Defense Against a Union of Sparse Adversarial Attacks. (97%)Zayd Hammoudeh; Daniel Lowd
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms. (33%)Minzhou Pan; Yi Zeng; Lingjuan Lyu; Xue Lin; Ruoxi Jia
On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective. (12%)Jindong Wang; Xixu Hu; Wenxin Hou; Hao Chen; Runkai Zheng; Yidong Wang; Linyi Yang; Haojun Huang; Wei Ye; Xiubo Geng; Binxin Jiao; Yue Zhang; Xing Xie
2023-02-21
MultiRobustBench: Benchmarking Robustness Against Multiple Attacks. (99%)Sihui Dai; Saeed Mahloujifar; Chong Xiang; Vikash Sehwag; Pin-Yu Chen; Prateek Mittal
MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection. (99%)Aqib Rashid; Jose Such
Interpretable Spectrum Transformation Attacks to Speaker Recognition. (98%)Jiadi Yao; Hong Luo; Xiao-Lei Zhang
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker. (97%)Sihui Dai; Wenxin Ding; Arjun Nitin Bhagoji; Daniel Cullina; Ben Y. Zhao; Haitao Zheng; Prateek Mittal
Generalization Bounds for Adversarial Contrastive Learning. (31%)Xin Zou; Weiwei Liu
2023-02-20
An Incremental Gray-box Physical Adversarial Attack on Neural Network Training. (98%)Rabiah Al-qudah; Moayad Aloqaily; Bassem Ouni; Mohsen Guizani; Thierry Lestable
Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System. (97%)Hao Lv; Bing Li; Lei Zhang; Cheng Liu; Ying Wang
Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts. (88%)Francesco Croce; Sylvestre-Alvise Rebuffi; Evan Shelhamer; Sven Gowal
Poisoning Web-Scale Training Datasets is Practical. (83%)Nicholas Carlini; Matthew Jagielski; Christopher A. Choquette-Choo; Daniel Paleka; Will Pearce; Hyrum Anderson; Andreas Terzis; Kurt Thomas; Florian Tramèr
Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network. (47%)Xiaojian Yuan; Kejiang Chen; Jie Zhang; Weiming Zhang; Nenghai Yu; Yang Zhang
Model-based feature selection for neural networks: A mixed-integer programming approach. (22%)Shudian Zhao; Calvin Tsay; Jan Kronqvist
Take Me Home: Reversing Distribution Shifts using Reinforcement Learning. (8%)Vivian Lin; Kuk Jin Jang; Souradeep Dutta; Michele Caprio; Oleg Sokolsky; Insup Lee
2023-02-19
X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection. (99%)Aishan Liu; Jun Guo; Jiakai Wang; Siyuan Liang; Renshuai Tao; Wenbo Zhou; Cong Liu; Xianglong Liu; Dacheng Tao
Stationary Point Losses for Robust Model. (93%)Weiwei Gao; Dazhi Zhang; Yao Li; Zhichang Guo; Ovanes Petrosian
On Feasibility of Server-side Backdoor Attacks on Split Learning. (76%)Behrad Tajalli; Oguzhan Ersoy; Stjepan Picek
2023-02-18
Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example. (99%)Baoyuan Wu; Li Liu; Zihao Zhu; Qingshan Liu; Zhaofeng He; Siwei Lyu
Delving into the Adversarial Robustness of Federated Learning. (98%)Jie Zhang; Bo Li; Chen Chen; Lingjuan Lyu; Shuang Wu; Shouhong Ding; Chao Wu
Meta Style Adversarial Training for Cross-Domain Few-Shot Learning. (83%)Yuqian Fu; Yu Xie; Yanwei Fu; Yu-Gang Jiang
MedViT: A Robust Vision Transformer for Generalized Medical Image Classification. (12%)Omid Nejati Manzari; Hamid Ahmadabadi; Hossein Kashiani; Shahriar B. Shokouhi; Ahmad Ayatollahi
RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks. (11%)Marwan Omar
Beyond Distribution Shift: Spurious Features Through the Lens of Training Dynamics. (2%)Nihal Murali; Aahlad Puli; Ke Yu; Rajesh Ranganath; Kayhan Batmanghelich
2023-02-17
Measuring Equality in Machine Learning Security Defenses. (96%)Luke E. Richards; Edward Raff; Cynthia Matuszek
Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions. (5%)Manish Nagireddy; Moninder Singh; Samuel C. Hoffman; Evaline Ju; Karthikeyan Natesan Ramamurthy; Kush R. Varshney
RetVec: Resilient and Efficient Text Vectorizer. (1%)Elie Bursztein; Marina Zhang; Owen Vallis; Xinyu Jia; Alexey Kurakin
2023-02-16
On the Effect of Adversarial Training Against Invariance-based Adversarial Examples. (99%)Roland Rauter; Martin Nocker; Florian Merkle; Pascal Schöttle
High-frequency Matters: An Overwriting Attack and defense for Image-processing Neural Network Watermarking. (67%)Huajie Chen; Tianqing Zhu; Chi Liu; Shui Yu; Wanlei Zhou
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data. (3%)Pratik Karmakar; Debabrota Basu
A Novel Noise Injection-based Training Scheme for Better Model Robustness. (2%)Zeliang Zhang; Jinyang Jiang; Minjie Chen; Zhiyuan Wang; Yijie Peng; Zhaofei Yu
2023-02-15
Masking and Mixing Adversarial Training. (99%)Hiroki Adachi; Tsubasa Hirakawa; Takayoshi Yamashita; Hironobu Fujiyoshi; Yasunori Ishii; Kazuki Kozuka
Robust Mid-Pass Filtering Graph Convolutional Networks. (98%)Jincheng Huang; Lun Du; Xu Chen; Qiang Fu; Shi Han; Dongmei Zhang
Graph Adversarial Immunization for Certifiable Robustness. (98%)Shuchang Tao; Huawei Shen; Qi Cao; Yunfan Wu; Liang Hou; Xueqi Cheng
XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars. (87%)Abhiroop Bhattacharjee; Abhishek Moitra; Priyadarshini Panda
Tight Auditing of Differentially Private Machine Learning. (41%)Milad Nasr; Jamie Hayes; Thomas Steinke; Borja Balle; Florian Tramèr; Matthew Jagielski; Nicholas Carlini; Andreas Terzis
Field-sensitive Data Flow Integrity. (1%)So Shizukuishi; Yoshitaka Arahori; Katsuhiko Gondow
Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection. (1%)Mouxiao Huang; Yu Qiao
2023-02-14
Regret-Based Optimization for Robust Reinforcement Learning. (99%)Roman Belaire; Pradeep Varakantham; David Lo
On the Role of Randomization in Adversarially Robust Classification. (99%)Lucas Gnecco-Heredia; Yann Chevaleyre; Benjamin Negrevergne; Laurent Meunier; Muni Sreenivas Pydi
Attacking Fake News Detectors via Manipulating News Social Engagement. (83%)Haoran Wang; Yingtong Dou; Canyu Chen; Lichao Sun; Philip S. Yu; Kai Shu
An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning. (31%)Shenghui Li; Edith C.-H. Ngai; Thiemo Voigt
A Modern Look at the Relationship between Sharpness and Generalization. (10%)Maksym Andriushchenko; Francesco Croce; Maximilian Müller; Matthias Hein; Nicolas Flammarion
Bounding Training Data Reconstruction in DP-SGD. (8%)Jamie Hayes; Saeed Mahloujifar; Borja Balle
Security Defense For Smart Contracts: A Comprehensive Survey. (1%)Nikolay Ivanov; Chenning Li; Qiben Yan; Zhiyuan Sun; Zhichao Cao; Xiapu Luo
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises. (1%)Chenglei Si; Zhengyan Zhang; Yingfa Chen; Xiaozhi Wang; Zhiyuan Liu; Maosong Sun
2023-02-13
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data. (98%)Gorka Abad; Oguzhan Ersoy; Stjepan Picek; Aitor Urbieta
Raising the Cost of Malicious AI-Powered Image Editing. (82%)Hadi Salman; Alaa Khaddaj; Guillaume Leclerc; Andrew Ilyas; Aleksander Madry
Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge. (8%)Ali Al-Kaswan; Maliheh Izadi; Arie van Deursen
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions. (1%)Marwan Omar
2023-02-12
TextDefense: Adversarial Text Detection based on Word Importance Entropy. (99%)Lujia Shen; Xuhong Zhang; Shouling Ji; Yuwen Pu; Chunpeng Ge; Xing Yang; Yanghe Feng
2023-02-11
Mutation-Based Adversarial Attacks on Neural Text Detectors. (69%)Gongbo Liang; Jesus Guerrero; Izzat Alsmadi
HateProof: Are Hateful Meme Detection Systems really Robust? (13%)Piush Aggarwal; Pranit Chawla; Mithun Das; Punyajoy Saha; Binny Mathew; Torsten Zesch; Animesh Mukherjee
MTTM: Metamorphic Testing for Textual Content Moderation Software. (2%)Wenxuan Wang; Jen-tse Huang; Weibin Wu; Jianping Zhang; Yizhan Huang; Shuqing Li; Pinjia He; Michael Lyu
Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play. (1%)Jeremiah Zhe Liu; Krishnamurthy Dj Dvijotham; Jihyeon Lee; Quan Yuan; Martin Strobel; Balaji Lakshminarayanan; Deepak Ramachandran
High Recovery with Fewer Injections: Practical Binary Volumetric Injection Attacks against Dynamic Searchable Encryption. (1%)Xianglong Zhang; Wei Wang; Peng Xu; Laurence T. Yang; Kaitai Liang
2023-02-10
Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Unnoticeable Backdoor Attacks on Graph Neural Networks. (80%)Enyan Dai; Minhua Lin; Xiang Zhang; Suhang Wang
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks. (73%)Piotr Gaiński; Klaudia Bałazy
2023-02-09
IB-RAR: Information Bottleneck as Regularizer for Adversarial Robustness. (98%)Xiaoyun Xu; Guilherme Perin; Stjepan Picek
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples. (98%)Chumeng Liang; Xiaoyu Wu; Yang Hua; Jiaru Zhang; Yiming Xue; Tao Song; Zhengui Xue; Ruhui Ma; Haibing Guan
Hyperparameter Search Is All You Need For Training-Agnostic Backdoor Robustness. (75%)Eugene Bagdasaryan; Vitaly Shmatikov
Imperceptible Sample-Specific Backdoor to DNN with Denoising Autoencoder. (62%)Jiliang Zhang; Jing Xu; Zhi Zhang; Yansong Gao
Better Diffusion Models Further Improve Adversarial Training. (22%)Zekai Wang; Tianyu Pang; Chao Du; Min Lin; Weiwei Liu; Shuicheng Yan
Augmenting NLP data to counter Annotation Artifacts for NLI Tasks. (16%)Armaan Singh Bhullar
Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks. (1%)Pengfei Yang; Zhiming Chi; Zongxin Liu; Mengyu Zhao; Cheng-Chao Huang; Shaowei Cai; Lijun Zhang
2023-02-08
WAT: Improve the Worst-class Robustness in Adversarial Training. (99%)Boqi Li; Weiwei Liu
Exploiting Certified Defences to Attack Randomised Smoothing. (99%)Andrew C. Cullen; Paul Montague; Shijie Liu; Sarah M. Erfani; Benjamin I. P. Rubinstein
Shortcut Detection with Variational Autoencoders. (13%)Nicolas M. Müller; Simon Roschmann; Shahbaz Khan; Philip Sperl; Konstantin Böttinger
Continuous Learning for Android Malware Detection. (13%)Yizheng Chen; Zhoujie Ding; David Wagner
Training-free Lexical Backdoor Attacks on Language Models. (8%)Yujin Huang; Terry Yue Zhuo; Qiongkai Xu; Han Hu; Xingliang Yuan; Chunyang Chen
On Function-Coupled Watermarks for Deep Neural Networks. (2%)Xiangyu Wen; Yu Li; Wei Jiang; Qiang Xu
Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy. (1%)Cheolhyoung Lee; Kyunghyun Cho
2023-02-07
Toward Face Biometric De-identification using Adversarial Examples. (98%)Mahdi Ghafourian; Julian Fierrez; Luis Felipe Gomez; Ruben Vera-Rodriguez; Aythami Morales; Zohra Rezgui; Raymond Veldhuis
Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence. (83%)Simin Li; Jun Guo; Jingqiao Xiu; Pu Feng; Xin Yu; Jiakai Wang; Aishan Liu; Wenjun Wu; Xianglong Liu
Membership Inference Attacks against Diffusion Models. (64%)Tomoya Matsumoto; Takayuki Miura; Naoto Yanai
Temporal Robustness against Data Poisoning. (12%)Wenxiao Wang; Soheil Feizi
Robustness Implies Fairness in Causal Algorithmic Recourse. (2%)Ahmad-Reza Ehyaei; Amir-Hossein Karimi; Bernhard Schölkopf; Setareh Maghsudi
Low-Latency Communication using Delay-Aware Relays Against Reactive Adversaries. (1%)Vivek Chaudhary; J. Harshan
2023-02-06
Less is More: Understanding Word-level Textual Adversarial Attack via n-gram Frequency Descend. (99%)Ning Lu; Shengcai Liu; Zhirui Zhang; Qi Wang; Haifeng Liu; Ke Tang
SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency. (92%)Junfeng Guo; Yiming Li; Xun Chen; Hanqing Guo; Lichao Sun; Cong Liu
Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness. (87%)Yuancheng Xu; Yanchao Sun; Micah Goldblum; Tom Goldstein; Furong Huang
Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks. (75%)Jan Schuchardt; Aleksandar Bojchevski; Johannes Gasteiger; Stephan Günnemann
GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks. (67%)Salah Ghamizi; Jingfeng Zhang; Maxime Cordy; Mike Papadakis; Masashi Sugiyama; Yves Le Traon
Target-based Surrogates for Stochastic Optimization. (1%)Jonathan Wilder Lavington; Sharan Vaswani; Reza Babanezhad; Mark Schmidt; Nicolas Le Roux
Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks. (1%)Emanuele Ledda; Giorgio Fumera; Fabio Roli
One-shot Empirical Privacy Estimation for Federated Learning. (1%)Galen Andrew; Peter Kairouz; Sewoong Oh; Alina Oprea; H. Brendan McMahan; Vinith Suriyakumar
2023-02-05
On the Role of Contrastive Representation Learning in Adversarial Robustness: An Empirical Study. (54%)Fatemeh Ghofrani; Mehdi Yaghouti; Pooyan Jamshidi
Leaving Reality to Imagination: Robust Classification via Generated Datasets. (2%)Hritik Bansal; Aditya Grover
2023-02-04
CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks. (99%)Shashank Agnihotri; Steffen Jung; Margret Keuper
A Minimax Approach Against Multi-Armed Adversarial Attacks Detection. (86%)Federica Granese; Marco Romanelli; Siddharth Garg; Pablo Piantanida
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks. (83%)Keivan Rezaei; Kiarash Banihashem; Atoosa Chegini; Soheil Feizi
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Decision Tree Models. (80%)Abdullah Caglar Oksuz; Anisa Halimi; Erman Ayday
Certified Robust Control under Adversarial Perturbations. (78%)Jinghan Yang; Hunmin Kim; Wenbin Wan; Naira Hovakimyan; Yevgeniy Vorobeychik
2023-02-03
TextShield: Beyond Successfully Detecting Adversarial Sentences in Text Classification. (96%)Lingfeng Shen; Ze Zhang; Haiyun Jiang; Ying Chen
DeTorrent: An Adversarial Padding-only Traffic Analysis Defense. (73%)James K Holland; Jason Carpenter; Se Eun Oh; Nicholas Hopper
SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification. (61%)Gorka Abad; Jing Xu; Stefanos Koffas; Behrad Tajalli; Stjepan Picek; Mauro Conti
Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels. (15%)Simone Bombari; Shayan Kiyani; Marco Mondelli
Asymmetric Certified Robustness via Feature-Convex Neural Networks. (8%)Samuel Pfrommer; Brendon G. Anderson; Julien Piet; Somayeh Sojoudi
Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks. (2%)Zeyu Qin; Liuyi Yao; Daoyuan Chen; Yaliang Li; Bolin Ding; Minhao Cheng
BarrierBypass: Out-of-Sight Clean Voice Command Injection Attacks through Physical Barriers. (2%)Payton Walker; Tianfang Zhang; Cong Shi; Nitesh Saxena; Yingying Chen
From Robustness to Privacy and Back. (2%)Hilal Asi; Jonathan Ullman; Lydia Zakynthinou
DCA: Delayed Charging Attack on the Electric Shared Mobility System. (1%)Shuocheng Guo; Hanlin Chen; Mizanur Rahman; Xinwu Qian
Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning. (1%)Jacob Alexander Markson Brown; Xi Jiang; Van Tran; Arjun Nitin Bhagoji; Nguyen Phong Hoang; Nick Feamster; Prateek Mittal; Vinod Yegneswaran
2023-02-02
TransFool: An Adversarial Attack against Neural Machine Translation Models. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense. (99%)Zunzhi You; Daochang Liu; Bohyung Han; Chang Xu
On the Robustness of Randomized Ensembles to Adversarial Perturbations. (75%)Hassan Dbouk; Naresh R. Shanbhag
A sliced-Wasserstein distance-based approach for out-of-class-distribution detection. (62%)Mohammad Shifat E Rabbi; Abu Hasnat Mohammad Rubaiyat; Yan Zhuang; Gustavo K Rohde
Effective Robustness against Natural Distribution Shifts for Models with Different Training Data. (13%)Zhouxing Shi; Nicholas Carlini; Ananth Balashankar; Ludwig Schmidt; Cho-Jui Hsieh; Alex Beutel; Yao Qin
SPECWANDS: An Efficient Priority-based Scheduler Against Speculation Contention Attacks. (10%)Bowen Tang; Chenggang Wu; Pen-Chung Yew; Yinqian Zhang; Mengyao Xie; Yuanming Lai; Yan Kang; Wei Wang; Qiang Wei; Zhe Wang
Provably Bounding Neural Network Preimages. (8%)Suhas Kotha; Christopher Brix; Zico Kolter; Krishnamurthy Dj Dvijotham; Huan Zhang
Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation. (2%)Hyoungwook Nam; Raghavendra Pradyumna Pothukuchi; Bo Li; Nam Sung Kim; Josep Torrellas
Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications. (1%)Chengyu Dong
Dataset Distillation Fixes Dataset Reconstruction Attacks. (1%)Noel Loo; Ramin Hasani; Mathias Lechner; Daniela Rus
2023-02-01
Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks. (99%)Xiaoyun Xu; Oguzhan Ersoy; Stjepan Picek
Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection. (92%)Aqib Rashid; Jose Such
Exploring Semantic Perturbations on Grover. (56%)Pranav Kulkarni; Ziqing Ji; Yan Xu; Marko Neskovic; Kevin Nolan
BackdoorBox: A Python Toolbox for Backdoor Learning. (10%)Yiming Li; Mengxi Ya; Yang Bai; Yong Jiang; Shu-Tao Xia
2023-01-31
Reverse engineering adversarial attacks with fingerprints from adversarial examples. (99%)David Aaron Nicholson; Vincent Emanuele
The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models. (97%)Son Quoc Tran; Phong Nguyen-Thuan Do; Uyen Le; Matt Kretchmar
Are Defenses for Graph Neural Networks Robust? (80%)Felix Mujkanovic; Simon Geisler; Stephan Günnemann; Aleksandar Bojchevski
Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks. (75%)Zhiyuan Cheng; James Liang; Guanhong Tao; Dongfang Liu; Xiangyu Zhang
Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond. (47%)Meyer Scetbon; Elvis Dohmatob
Fairness-aware Vision Transformer via Debiased Self-Attention. (47%)Yao Qiang; Chengyin Li; Prashant Khanduri; Dongxiao Zhu
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression. (12%)Zhuoran Liu; Zhengyu Zhao; Martha Larson
DRAINCLoG: Detecting Rogue Accounts with Illegally-obtained NFTs using Classifiers Learned on Graphs. (1%)Hanna Kim; Jian Cui; Eugene Jang; Chanhee Lee; Yongjae Lee; Jin-Woo Chung; Seungwon Shin
Identifying the Hazard Boundary of ML-enabled Autonomous Systems Using Cooperative Co-Evolutionary Search. (1%)Sepehr Sharifi; Donghwan Shin; Lionel C. Briand; Nathan Aschbacher
2023-01-30
Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness. (99%)Bao Gia Doan; Shuiqiao Yang; Paul Montague; Olivier De Vel; Tamas Abraham; Seyit Camtepe; Salil S. Kanhere; Ehsan Abbasnejad; Damith C. Ranasinghe
Improving Adversarial Transferability with Scheduled Step Size and Dual Example. (99%)Zeliang Zhang; Peihan Liu; Xiaosen Wang; Chenliang Xu
Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification. (99%)João Vitorino; Isabel Praça; Eva Maia
RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion. (99%)Zhuoqun Huang; Neil G. Marchant; Keane Lucas; Lujo Bauer; Olga Ohrimenko; Benjamin I. P. Rubinstein
Identifying Adversarially Attackable and Robust Samples. (99%)Vyas Raina; Mark Gales
On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex. (98%)Terry Yue Zhuo; Zhuang Li; Yujin Huang; Fatemeh Shiri; Weiqing Wang; Gholamreza Haffari; Yuan-Fang Li
Anchor-Based Adversarially Robust Zero-Shot Learning Driven by Language. (96%)Xiao Li; Wei Zhang; Yining Liu; Zhanhao Hu; Bo Zhang; Xiaolin Hu
Inference Time Evidences of Adversarial Attacks for Forensic on Transformers. (87%)Hugo Lemarchant; Liangzi Li; Yiming Qian; Yuta Nakashima; Hajime Nagahara
On the Efficacy of Metrics to Describe Adversarial Attacks. (82%)Tommaso Puccetti; Tommaso Zoppi; Andrea Ceccarelli
Benchmarking Robustness to Adversarial Image Obfuscations. (74%)Florian Stimberg; Ayan Chakrabarti; Chun-Ta Lu; Hussein Hazimeh; Otilia Stretcu; Wei Qiao; Yintao Liu; Merve Kaya; Cyrus Rashtchian; Ariel Fuxman; Mehmet Tek; Sven Gowal
Extracting Training Data from Diffusion Models. (5%)Nicholas Carlini; Jamie Hayes; Milad Nasr; Matthew Jagielski; Vikash Sehwag; Florian Tramèr; Borja Balle; Daphne Ippolito; Eric Wallace
M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System. (1%)Chenqi Kong; Kexin Zheng; Yibing Liu; Shiqi Wang; Anderson Rocha; Haoliang Li
2023-01-29
Unlocking Deterministic Robustness Certification on ImageNet. (98%)Kai Hu; Andy Zou; Zifan Wang; Klas Leino; Matt Fredrikson
Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid. (93%)Farhin Farhad Riya; Shahinul Hoque; Jinyuan Stella Sun; Jiangnan Li; Hairong Qi
Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing. (83%)Yatong Bai; Brendon G. Anderson; Aerin Kim; Somayeh Sojoudi
Uncovering Adversarial Risks of Test-Time Adaptation. (82%)Tong Wu; Feiran Jia; Xiangyu Qi; Jiachen T. Wang; Vikash Sehwag; Saeed Mahloujifar; Prateek Mittal
Adversarial Attacks on Adversarial Bandits. (69%)Yuzhe Ma; Zhijin Zhou
Towards Verifying the Geometric Robustness of Large-scale Neural Networks. (54%)Fu Wang; Peipei Xu; Wenjie Ruan; Xiaowei Huang
Lateralized Learning for Multi-Class Visual Classification Tasks. (13%)Abubakar Siddique; Will N. Browne; Gina M. Grimshaw
Diverse, Difficult, and Odd Instances (D2O): A New Test Set for Object Classification. (3%)Ali Borji
Adversarial Style Augmentation for Domain Generalization. (2%)Yabin Zhang; Bin Deng; Ruihuang Li; Kui Jia; Lei Zhang
Confidence-Aware Calibration and Scoring Functions for Curriculum Learning. (1%)Shuang Ao; Stefan Rueger; Advaith Siddharthan
2023-01-28
Node Injection for Class-specific Network Poisoning. (82%)Ansh Kumar Sharma; Rahul Kukreja; Mayank Kharbanda; Tanmoy Chakraborty
Out-of-distribution Detection with Energy-based Models. (82%)Sven Elflein
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering. (13%)Rui Zhu; Di Tang; Siyuan Tang; Guanhong Tao; Shiqing Ma; Xiaofeng Wang; Haixu Tang
Selecting Models based on the Risk of Damage Caused by Adversarial Attacks. (1%)Jona Klemenc; Holger Trittenbach
2023-01-27
Semantic Adversarial Attacks on Face Recognition through Significant Attributes. (99%)Yasmeen M. Khedr; Yifeng Xiong; Kun He
Targeted Attacks on Timeseries Forecasting. (99%)Yuvaraj Govindarajulu; Avinash Amballa; Pavan Kulkarni; Manojkumar Parmar
Adapting Step-size: A Unified Perspective to Analyze and Improve Gradient-based Methods for Adversarial Attacks. (98%)Wei Tao; Lei Bao; Long Sheng; Gaowei Wu; Qing Tao
PECAN: A Deterministic Certified Defense Against Backdoor Attacks. (97%)Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Vertex-based reachability analysis for verifying ReLU deep neural networks. (93%)João Zago; Eduardo Camponogara; Eric Antonelo
OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks. (92%)Xingwu Guo; Ziwei Zhou; Yueling Zhang; Guy Katz; Min Zhang
PCV: A Point Cloud-Based Network Verifier. (88%)Arup Kumar Sarker; Farzana Yasmin Ahmad; Matthew B. Dwyer
Robust Transformer with Locality Inductive Bias and Feature Normalization. (88%)Omid Nejati Manzari; Hossein Kashiani; Hojat Asgarian Dehkordi; Shahriar Baradaran Shokouhi
Analyzing Robustness of the Deep Reinforcement Learning Algorithm in Ramp Metering Applications Considering False Data Injection Attack and Defense. (87%)Diyi Liu; Lanmin Liu; Lee D Han
Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers. (80%)Sungmin Cha; Sungjun Cho; Dasol Hwang; Honglak Lee; Taesup Moon; Moontae Lee
Certified Invertibility in Neural Networks via Mixed-Integer Programming. (76%)Tianqi Cui; Thomas Bertalan; George J. Pappas; Manfred Morari; Ioannis G. Kevrekidis; Mahyar Fazlyab
2023-01-26
Attacking Important Pixels for Anchor-free Detectors. (99%)Yunxu Xie; Shu Hu; Xin Wang; Quanyu Liao; Bin Zhu; Xi Wu; Siwei Lyu
Certified Interpretability Robustness for Class Activation Mapping. (92%)Alex Gu; Tsui-Wei Weng; Pin-Yu Chen; Sijia Liu; Luca Daniel
Interaction-level Membership Inference Attack Against Federated Recommender Systems. (31%)Wei Yuan; Chaoqun Yang; Quoc Viet Hung Nguyen; Lizhen Cui; Tieke He; Hongzhi Yin
Minerva: A File-Based Ransomware Detector. (13%)Dorjan Hitaj; Giulio Pagnotta; Fabio De Gaspari; Lorenzo De Carli; Luigi V. Mancini
2023-01-25
RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks. (99%)Ayesha Siddique; Ripan Kumar Kundu; Gautam Raj Mode; Khaza Anuarul Hoque
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing. (98%)Jiali Wei; Ming Fan; Wenjing Jiao; Wuxia Jin; Ting Liu
A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection. (96%)Mohammad Azizmalayeri; Arman Zarei; Alireza Isavand; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
On the Adversarial Robustness of Camera-based 3D Object Detection. (81%)Shaoyuan Xie; Zichao Li; Zeyu Wang; Cihang Xie
A Study on FGSM Adversarial Training for Neural Retrieval. (75%)Simon Lupart; Stéphane Clinchant
Distilling Cognitive Backdoor Patterns within an Image. (5%)Hanxun Huang; Xingjun Ma; Sarah Erfani; James Bailey
Connecting metrics for shape-texture knowledge in computer vision. (1%)Tiago Oliveira; Tiago Marques; Arlindo L. Oliveira
2023-01-24
Blockchain-aided Secure Semantic Communication for AI-Generated Content in Metaverse. (13%)Yijing Lin; Hongyang Du; Dusit Niyato; Jiangtian Nie; Jiayi Zhang; Yanyu Cheng; Zhaohui Yang
Learning Effective Strategies for Moving Target Defense with Switching Costs. (1%)Vignesh Viswanathan; Megha Bose; Praveen Paruchuri
Data Augmentation Alone Can Improve Adversarial Training. (1%)Lin Li; Michael Spratling
2023-01-23
DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics. (99%)Onat Gungor; Tajana Rosing; Baris Aksanli
Practical Adversarial Attacks Against AI-Driven Power Allocation in a Distributed MIMO Network. (92%)Ömer Faruk Tuna; Fehmi Emre Kadan; Leyli Karaçay
BayBFed: Bayesian Backdoor Defense for Federated Learning. (78%)Kavita Kumari; Phillip Rieger; Hossein Fereidooni; Murtuza Jadliwala; Ahmad-Reza Sadeghi
Backdoor Attacks in Peer-to-Peer Federated Learning. (68%)Gokberk Yar; Cristina Nita-Rotaru; Alina Oprea
2023-01-22
Provable Unrestricted Adversarial Training without Compromise with Generalizability. (99%)Lilin Zhang; Ning Yang; Yanchao Sun; Philip S. Yu
ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning. (8%)Shangqing Liu; Bozhi Wu; Xiaofei Xie; Guozhu Meng; Yang Liu
2023-01-20
Limitations of Piecewise Linearity for Efficient Robustness Certification. (95%)Klas Leino
Towards Understanding How Self-training Tolerates Data Backdoor Poisoning. (16%)Soumyadeep Pal; Ren Wang; Yuguang Yao; Sijia Liu
Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness. (8%)Shuaichen Chang; Jun Wang; Mingwen Dong; Lin Pan; Henghui Zhu; Alexander Hanbo Li; Wuwei Lan; Sheng Zhang; Jiarong Jiang; Joseph Lilien; Steve Ash; William Yang Wang; Zhiguo Wang; Vittorio Castelli; Patrick Ng; Bing Xiang
Defending SDN against packet injection attacks using deep learning. (2%)Anh Tuan Phu; Bo Li; Faheem Ullah; Tanvir Ul Huque; Ranesh Naha; Ali Babar; Hung Nguyen
2023-01-19
On the Vulnerability of Backdoor Defenses for Federated Learning. (62%)Pei Fang; Jinghui Chen
On the Relationship Between Information-Theoretic Privacy Metrics And Probabilistic Information Privacy. (31%)Chong Xiao Wang; Wee Peng Tay
RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation. (16%)Utkarsh Nath; Yancheng Wang; Yingzhen Yang
Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study. (1%)Adiel Ashrov; Guy Katz
2023-01-17
Denoising Diffusion Probabilistic Models as a Defense against Adversarial Attacks. (98%)Lars Lien Ankile; Anna Midgley; Sebastian Weisshaar
Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness. (68%)Ezgi Korkmaz
Label Inference Attack against Split Learning under Regression Setting. (8%)Shangyu Xie; Xin Yang; Yuanshun Yao; Tianyi Liu; Taiqing Wang; Jiankai Sun
2023-01-16
$\beta$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search. (1%)Peng Ye; Tong He; Baopu Li; Tao Chen; Lei Bai; Wanli Ouyang
Modeling Uncertain Feature Representation for Domain Generalization. (1%)Xiaotong Li; Zixuan Hu; Jun Liu; Yixiao Ge; Yongxing Dai; Ling-Yu Duan
2023-01-15
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense. (4%)Siyuan Cheng; Guanhong Tao; Yingqi Liu; Shengwei An; Xiangzhe Xu; Shiwei Feng; Guangyu Shen; Kaiyuan Zhang; Qiuling Xu; Shiqing Ma; Xiangyu Zhang
2023-01-13
On the feasibility of attacking Thai LPR systems with adversarial examples. (99%)Chissanupong Jiamsuchon; Jakapan Suaboot; Norrathep Rattanavipanon
2023-01-12
Security-Aware Approximate Spiking Neural Networks. (87%)Syed Tihaam Ahmad; Ayesha Siddique; Khaza Anuarul Hoque
Jamming Attacks on Decentralized Federated Learning in General Multi-Hop Wireless Networks. (3%)Yi Shi; Yalin E. Sagduyu; Tugba Erpek
2023-01-11
Phase-shifted Adversarial Training. (82%)Yeachan Kim; Seongyeon Kim; Ihyeok Seo; Bonggun Shin
Universal Detection of Backdoor Attacks via Density-based Clustering and Centroids Analysis. (78%)Wei Guo; Benedetta Tondi; Mauro Barni
2023-01-10
On the Robustness of AlphaFold: A COVID-19 Case Study. (73%)Ismail Alkhouri; Sumit Jha; Andre Beckus; George Atia; Alvaro Velasquez; Rickard Ewetz; Arvind Ramanathan; Susmit Jha
CDA: Contrastive-adversarial Domain Adaptation. (38%)Nishant Yadav; Mahbubul Alam; Ahmed Farahat; Dipanjan Ghosh; Chetan Gupta; Auroop R. Ganguly
User-Centered Security in Natural Language Processing. (12%)Chris Emmery
Leveraging Diffusion For Strong and High Quality Face Morphing Attacks. (3%)Zander Blasingame; Chen Liu
2023-01-09
Over-The-Air Adversarial Attacks on Deep Learning Wi-Fi Fingerprinting. (99%)Fei Xiao; Yong Huang; Yingying Zuo; Wei Kuang; Wei Wang
On the Susceptibility and Robustness of Time Series Models through Adversarial Attack and Defense. (98%)Asadullah Hill Galib; Bidhan Bashyal
Is Federated Learning a Practical PET Yet? (13%)Franziska Boenisch; Adam Dziedzic; Roei Schuster; Ali Shahin Shamsabadi; Ilia Shumailov; Nicolas Papernot
SoK: Hardware Defenses Against Speculative Execution Attacks. (1%)Guangyuan Hu; Zecheng He; Ruby Lee
2023-01-08
RobArch: Designing Robust Architectures against Adversarial Attacks. (76%)ShengYun Peng; Weilin Xu; Cory Cornelius; Kevin Li; Rahul Duggal; Duen Horng Chau; Jason Martin
MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope. (1%)Jingwei Zhang; Farzan Farnia
2023-01-07
REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. (99%)Wenjie Qu; Jinyuan Jia; Neil Zhenqiang Gong
Adversarial training with informed data selection. (99%)Marcele O. K. Mendonça; Javier Maroto; Pascal Frossard; Paulo S. R. Diniz
2023-01-06
Code Difference Guided Adversarial Example Generation for Deep Code Models. (99%)Zhao Tian; Junjie Chen; Zhi Jin
Stealthy Backdoor Attack for Code Models. (98%)Zhou Yang; Bowen Xu; Jie M. Zhang; Hong Jin Kang; Jieke Shi; Junda He; David Lo
2023-01-05
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack. (98%)Tzvi Lederer; Gallil Maimon; Lior Rokach
gRoMA: a Tool for Measuring Deep Neural Networks Global Robustness. (96%)Natan Levy; Raz Yerushalmi; Guy Katz
Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks. (61%)Yan Scholten; Jan Schuchardt; Simon Geisler; Aleksandar Bojchevski; Stephan Günnemann
Can Large Language Models Change User Preference Adversarially? (1%)Varshini Subhash
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. (1%)Hojjat Aghakhani; Wei Dai; Andre Manoel; Xavier Fernandes; Anant Kharkar; Christopher Kruegel; Giovanni Vigna; David Evans; Ben Zorn; Robert Sim
2023-01-04
Availability Adversarial Attack and Countermeasures for Deep Learning-based Load Forecasting. (98%)Wangkun Xu; Fei Teng
Beckman Defense. (84%)A. V. Subramanyam
GUAP: Graph Universal Attack Through Adversarial Patching. (81%)Xiao Zang; Jie Chen; Bo Yuan
Enhancement attacks in biomedical machine learning. (1%)Matthew Rosenblatt; Javid Dadashkarimi; Dustin Scheinost
2023-01-03
Explainability and Robustness of Deep Visual Classification Models. (92%)Jindong Gu
Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition. (83%)Hasan Abed Al Kader Hammoud; Shuming Liu; Mohammed Alkhrashi; Fahad AlBalawi; Bernard Ghanem
Backdoor Attacks Against Dataset Distillation. (50%)Yugeng Liu; Zheng Li; Michael Backes; Yun Shen; Yang Zhang
Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector. (33%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam
2023-01-02
Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos. (92%)Wei Xingxing; Wang Songping; Yan Huanqian
2023-01-01
Generalizable Black-Box Adversarial Attack with Meta Learning. (99%)Fei Yin; Yong Zhang; Baoyuan Wu; Yan Feng; Jingyi Zhang; Yanbo Fan; Yujiu Yang
ExploreADV: Towards exploratory attack for Neural Networks. (99%)Tianzuo Luo; Yuyi Zhong; Siaucheng Khoo
Trojaning semi-supervised learning model via poisoning wild images on the web. (47%)Le Feng; Zhenxing Qian; Sheng Li; Xinpeng Zhang
2022-12-30
Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence. (99%)Han Fang; Jiyi Zhang; Yupeng Qiu; Ke Xu; Chengfang Fang; Ee-Chien Chang
Guidance Through Surrogate: Towards a Generic Diagnostic Attack. (99%)Muzammal Naseer; Salman Khan; Fatih Porikli; Fahad Shahbaz Khan
Defense Against Adversarial Attacks on Audio DeepFake Detection. (91%)Piotr Kawa; Marcin Plata; Piotr Syga
Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification. (82%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Gérôme Bovet; Gregorio Martínez Pérez
Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples. (22%)Jiaming Zhang; Xingjun Ma; Qi Yi; Jitao Sang; Yugang Jiang; Yaowei Wang; Changsheng Xu
Targeted k-node Collapse Problem: Towards Understanding the Robustness of Local k-core Structure. (1%)Yuqian Lv; Bo Zhou; Jinhuan Wang; Qi Xuan
2022-12-29
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. (68%)Giovanni Apruzzese; Hyrum S. Anderson; Savino Dambra; David Freeman; Fabio Pierazzi; Kevin A. Roundy
Detection of out-of-distribution samples using binary neuron activation patterns. (11%)Bartlomiej Olber; Krystian Radlak; Adam Popowicz; Michal Szczepankiewicz; Krystian Chachula
2022-12-28
Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions. (99%)Kamilya Smagulova; Mohammed E. Fouda; Ahmed Eltawil
Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks. (98%)Junlin Wu; Hussein Sibai; Yevgeniy Vorobeychik
Publishing Efficient On-device Models Increases Adversarial Vulnerability. (95%)Sanghyun Hong; Nicholas Carlini; Alexey Kurakin
Differentiable Search of Accurate and Robust Architectures. (92%)Yuwei Ou; Xiangning Xie; Shangce Gao; Yanan Sun; Kay Chen Tan; Jiancheng Lv
Robust Ranking Explanations. (76%)Chao Chen; Chenghua Guo; Guixiang Ma; Xi Zhang; Sihong Xie
Evaluating Generalizability of Deep Learning Models Using Indian-COVID-19 CT Dataset. (1%)Suba S; Nita Parekh; Ramesh Loganathan; Vikram Pudi; Chinnababu Sunkavalli
2022-12-27
EDoG: Adversarial Edge Detection For Graph Neural Networks. (98%)Xiaojun Xu; Yue Yu; Hanzhang Wang; Alok Lal; Carl A. Gunter; Bo Li
Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles. (86%)Hyung-Jin Yoon; Hamidreza Jafarnejadsani; Petros Voulgaris
Sparse Mixture Once-for-all Adversarial Training for Efficient In-Situ Trade-Off Between Accuracy and Robustness of DNNs. (62%)Souvik Kundu; Sairam Sundaresan; Sharath Nittur Sridhar; Shunlin Lu; Han Tang; Peter A. Beerel
XMAM:X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning. (56%)Jianyi Zhang; Fangjiao Zhang; Qichao Jin; Zhiqiang Wang; Xiaodong Lin; Xiali Hei
2022-12-25
Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks. (99%)Xingxing Wei; Ying Guo; Jie Yu; Bo Zhang
2022-12-24
Frequency Regularization for Improving Adversarial Robustness. (99%)Binxiao Huang; Chaofan Tao; Rui Lin; Ngai Wong
2022-12-23
Out-of-Distribution Detection with Reconstruction Error and Typicality-based Penalty. (61%)Genki Osada; Tsubasa Takahashi; Budrul Ahsan; Takashi Nishide
Towards Scalable Physically Consistent Neural Networks: an Application to Data-driven Multi-zone Thermal Building Models. (1%)Loris Di Natale; Bratislav Svetozarevic; Philipp Heer; Colin Neil Jones
2022-12-22
Adversarial Machine Learning and Defense Game for NextG Signal Classification with Deep Learning. (98%)Yalin E. Sagduyu
Aliasing is a Driver of Adversarial Attacks. (80%)Adrián Rodríguez-Muñoz; Antonio Torralba
GAN-based Domain Inference Attack. (2%)Yuechun Gu; Keke Chen
Hybrid Quantum-Classical Generative Adversarial Network for High Resolution Image Generation. (1%)Shu Lok Tsang; Maxwell T. West; Sarah M. Erfani; Muhammad Usman
2022-12-21
Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective. (80%)Shihua Huang; Zhichao Lu; Kalyanmoy Deb; Vishnu Naresh Boddeti
Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks. (67%)Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus; Aylin Yener
A Theoretical Study of The Effects of Adversarial Attacks on Sparse Regression. (13%)Deepak Maurya; Jean Honorio
2022-12-20
A Comprehensive Study and Comparison of the Robustness of 3D Object Detectors Against Adversarial Attacks. (98%)Yifan Zhang; Junhui Hou; Yixuan Yuan
Multi-head Uncertainty Inference for Adversarial Attack Detection. (98%)Yuqi Yang; Songyun Yang; Jiyang Xie; Zhongwei Si; Kai Guo; Ke Zhang; Kongming Liang
In and Out-of-Domain Text Adversarial Robustness via Label Smoothing. (98%)Yahan Yang; Soham Dan; Dan Roth; Insup Lee
Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks. (96%)Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus; Aylin Yener
Unleashing the Power of Visual Prompting At the Pixel Level. (92%)Junyang Wu; Xianhang Li; Chen Wei; Huiyu Wang; Alan Yuille; Yuyin Zhou; Cihang Xie
Learned Systems Security. (78%)Roei Schuster; Jin Peng Zhou; Paul Grubbs; Thorsten Eisenhofer; Nicolas Papernot
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks. (22%)Jimmy Z. Di; Jack Douglas; Jayadev Acharya; Gautam Kamath; Ayush Sekhari
ReCode: Robustness Evaluation of Code Generation Models. (10%)Shiqi Wang; Zheng Li; Haifeng Qian; Chenghao Yang; Zijian Wang; Mingyue Shang; Varun Kumar; Samson Tan; Baishakhi Ray; Parminder Bhatia; Ramesh Nallapati; Murali Krishna Ramanathan; Dan Roth; Bing Xiang
Defending Against Poisoning Attacks in Open-Domain Question Answering. (8%)Orion Weller; Aleem Khan; Nathaniel Weir; Dawn Lawrie; Benjamin Van Durme
SoK: Analysis of Root Causes and Defense Strategies for Attacks on Microarchitectural Optimizations. (5%)Nadja Ramhöj Holtryd; Madhavan Manivannan; Per Stenström
DISCO: Distilling Phrasal Counterfactuals with Large Language Models. (1%)Zeming Chen; Qiyue Gao; Kyle Richardson; Antoine Bosselut; Ashish Sabharwal
2022-12-19
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization. (99%)Bairu Hou; Jinghan Jia; Yihua Zhang; Guanhua Zhang; Yang Zhang; Sijia Liu; Shiyu Chang
Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation. (75%)Xinyu Pi; Bing Wang; Yan Gao; Jiaqi Guo; Zhoujun Li; Jian-Guang Lou
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends. (50%)Yonghao Xu; Tao Bai; Weikang Yu; Shizhen Chang; Peter M. Atkinson; Pedram Ghamisi
Task-Oriented Communications for NextG: End-to-End Deep Learning and AI Security Aspects. (26%)Yalin E. Sagduyu; Sennur Ulukus; Aylin Yener
Flareon: Stealthy any2any Backdoor Injection via Poisoned Augmentation. (2%)Tianrui Qin; Xianghuan He; Xitong Gao; Yiren Zhao; Kejiang Ye; Cheng-Zhong Xu
Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning. (1%)Yingchun Wang; Jingcai Guo; Song Guo; Weizhan Zhang; Jie Zhang
2022-12-18
Estimating the Adversarial Robustness of Attributions in Text with Transformers. (99%)Adam Ivankay; Mattia Rigotti; Ivan Girardi; Chiara Marchiori; Pascal Frossard
Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks. (99%)Anqi Zhao; Tong Chu; Yahao Liu; Wen Li; Jingjing Li; Lixin Duan
Discrete Point-wise Attack Is Not Enough: Generalized Manifold Adversarial Attack for Face Recognition. (99%)Qian Li; Yuxiao Hu; Ye Liu; Dongxiao Zhang; Xin Jin; Yuntian Chen
Fine-Tuning Is All You Need to Mitigate Backdoor Attacks. (4%)Zeyang Sha; Xinlei He; Pascal Berrang; Mathias Humbert; Yang Zhang
2022-12-17
Confidence-aware Training of Smoothed Classifiers for Certified Robustness. (86%)Jongheon Jeong; Seojin Kim; Jinwoo Shin
A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness. (2%)Tiantian Feng; Rajat Hebbar; Nicholas Mehlman; Xuan Shi; Aditya Kommineni; Shrikanth Narayanan
HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation. (1%)Hongyi Yuan; Zheng Yuan; Chuanqi Tan; Fei Huang; Songfang Huang
2022-12-16
Adversarial Example Defense via Perturbation Grading Strategy. (99%)Shaowei Zhu; Wanli Lyu; Bin Li; Zhaoxia Yin; Bin Luo
WebAssembly Diversification for Malware Evasion. (5%)Javier Cabrera-Arteaga; Martin Monperrus; Tim Toady; Benoit Baudry
Biomedical image analysis competitions: The state of current participation practice. (4%)Matthias Eisenmann; Annika Reinke; Vivienn Weru; Minu Dietlinde Tizabi; Fabian Isensee; Tim J. Adler; Patrick Godau; Veronika Cheplygina; Michal Kozubek; Sharib Ali; Anubha Gupta; Jan Kybic; Alison Noble; Solórzano Carlos Ortiz de; Samiksha Pachade; Caroline Petitjean; Daniel Sage; Donglai Wei; Elizabeth Wilden; Deepak Alapatt; Vincent Andrearczyk; Ujjwal Baid; Spyridon Bakas; Niranjan Balu; Sophia Bano; Vivek Singh Bawa; Jorge Bernal; Sebastian Bodenstedt; Alessandro Casella; Jinwook Choi; Olivier Commowick; Marie Daum; Adrien Depeursinge; Reuben Dorent; Jan Egger; Hannah Eichhorn; Sandy Engelhardt; Melanie Ganz; Gabriel Girard; Lasse Hansen; Mattias Heinrich; Nicholas Heller; Alessa Hering; Arnaud Huaulmé; Hyunjeong Kim; Bennett Landman; Hongwei Bran Li; Jianning Li; Jun Ma; Anne Martel; Carlos Martín-Isla; Bjoern Menze; Chinedu Innocent Nwoye; Valentin Oreiller; Nicolas Padoy; Sarthak Pati; Kelly Payette; Carole Sudre; Wijnen Kimberlin van; Armine Vardazaryan; Tom Vercauteren; Martin Wagner; Chuanbo Wang; Moi Hoon Yap; Zeyun Yu; Chun Yuan; Maximilian Zenk; Aneeq Zia; David Zimmerer; Rina Bao; Chanyeol Choi; Andrew Cohen; Oleh Dzyubachyk; Adrian Galdran; Tianyuan Gan; Tianqi Guo; Pradyumna Gupta; Mahmood Haithami; Edward Ho; Ikbeom Jang; Zhili Li; Zhengbo Luo; Filip Lux; Sokratis Makrogiannis; Dominik Müller; Young-tack Oh; Subeen Pang; Constantin Pape; Gorkem Polat; Charlotte Rosalie Reed; Kanghyun Ryu; Tim Scherr; Vajira Thambawita; Haoyu Wang; Xinliang Wang; Kele Xu; Hung Yeh; Doyeob Yeo; Yixuan Yuan; Yan Zeng; Xin Zhao; Julian Abbing; Jannes Adam; Nagesh Adluru; Niklas Agethen; Salman Ahmed; Yasmina Al Khalil; Mireia Alenyà; Esa Alhoniemi; Chengyang An; Talha Anwar; Tewodros Weldebirhan Arega; Netanell Avisdris; Dogu Baran Aydogan; Yingbin Bai; Maria Baldeon Calisto; Berke Doga Basaran; Marcel Beetz; Cheng Bian; Hao Bian; Kevin Blansit; Louise Bloch; Robert Bohnsack; Sara 
Bosticardo; Jack Breen; Mikael Brudfors; Raphael Brüngel; Mariano Cabezas; Alberto Cacciola; Zhiwei Chen; Yucong Chen; Daniel Tianming Chen; Minjeong Cho; Min-Kook Choi; Chuantao Xie Chuantao Xie; Dana Cobzas; Julien Cohen-Adad; Jorge Corral Acero; Sujit Kumar Das; Oliveira Marcela de; Hanqiu Deng; Guiming Dong; Lars Doorenbos; Cory Efird; Di Fan; Mehdi Fatan Serj; Alexandre Fenneteau; Lucas Fidon; Patryk Filipiak; René Finzel; Nuno R. Freitas; Christoph M. Friedrich; Mitchell Fulton; Finn Gaida; Francesco Galati; Christoforos Galazis; Chang Hee Gan; Zheyao Gao; Shengbo Gao; Matej Gazda; Beerend Gerats; Neil Getty; Adam Gibicar; Ryan Gifford; Sajan Gohil; Maria Grammatikopoulou; Daniel Grzech; Orhun Güley; Timo Günnemann; Chunxu Guo; Sylvain Guy; Heonjin Ha; Luyi Han; Il Song Han; Ali Hatamizadeh; Tian He; Jimin Heo; Sebastian Hitziger; SeulGi Hong; SeungBum Hong; Rian Huang; Ziyan Huang; Markus Huellebrand; Stephan Huschauer; Mustaffa Hussain; Tomoo Inubushi; Ece Isik Polat; Mojtaba Jafaritadi; SeongHun Jeong; Bailiang Jian; Yuanhong Jiang; Zhifan Jiang; Yueming Jin; Smriti Joshi; Abdolrahim Kadkhodamohammadi; Reda Abdellah Kamraoui; Inha Kang; Junghwa Kang; Davood Karimi; April Khademi; Muhammad Irfan Khan; Suleiman A. Khan; Rishab Khantwal; Kwang-Ju Kim; Timothy Kline; Satoshi Kondo; Elina Kontio; Adrian Krenzer; Artem Kroviakov; Hugo Kuijf; Satyadwyoom Kumar; Rosa Francesco La; Abhi Lad; Doohee Lee; Minho Lee; Chiara Lena; Hao Li; Ling Li; Xingyu Li; Fuyuan Liao; KuanLun Liao; Arlindo Limede Oliveira; Chaonan Lin; Shan Lin; Akis Linardos; Marius George Linguraru; Han Liu; Tao Liu; Di Liu; Yanling Liu; João Lourenço-Silva; Jingpei Lu; Jiangshan Lu; Imanol Luengo; Christina B. 
Lund; Huan Minh Luu; Yi Lv; Yi Lv; Uzay Macar; Leon Maechler; Sina Mansour L.; Kenji Marshall; Moona Mazher; Richard McKinley; Alfonso Medela; Felix Meissen; Mingyuan Meng; Dylan Miller; Seyed Hossein Mirjahanmardi; Arnab Mishra; Samir Mitha; Hassan Mohy-ud-Din; Tony Chi Wing Mok; Gowtham Krishnan Murugesan; Enamundram Naga Karthik; Sahil Nalawade; Jakub Nalepa; Mohamed Naser; Ramin Nateghi; Hammad Naveed; Quang-Minh Nguyen; Cuong Nguyen Quoc; Brennan Nichyporuk; Bruno Oliveira; David Owen; Jimut Bahan Pal; Junwen Pan; Wentao Pan; Winnie Pang; Bogyu Park; Vivek Pawar; Kamlesh Pawar; Michael Peven; Lena Philipp; Tomasz Pieciak; Szymon Plotka; Marcel Plutat; Fattaneh Pourakpour; Domen Preložnik; Kumaradevan Punithakumar; Abdul Qayyum; Sandro Queirós; Arman Rahmim; Salar Razavi; Jintao Ren; Mina Rezaei; Jonathan Adam Rico; ZunHyan Rieu; Markus Rink; Johannes Roth; Yusely Ruiz-Gonzalez; Numan Saeed; Anindo Saha; Mostafa Salem; Ricardo Sanchez-Matilla; Kurt Schilling; Wei Shao; Zhiqiang Shen; Ruize Shi; Pengcheng Shi; Daniel Sobotka; Théodore Soulier; Bella Specktor Fadida; Danail Stoyanov; Timothy Sum Hon Mun; Xiaowu Sun; Rong Tao; Franz Thaler; Antoine Théberge; Felix Thielke; Helena Torres; Kareem A. Wahid; Jiacheng Wang; YiFei Wang; Wei Wang; Xiong Wang; Jianhui Wen; Ning Wen; Marek Wodzinski; Ye Wu; Fangfang Xia; Tianqi Xiang; Chen Xiaofei; Lizhan Xu; Tingting Xue; Yuxuan Yang; Lin Yang; Kai Yao; Huifeng Yao; Amirsaeed Yazdani; Michael Yip; Hwanseung Yoo; Fereshteh Yousefirizi; Shunkai Yu; Lei Yu; Jonathan Zamora; Ramy Ashraf Zeineldin; Dewen Zeng; Jianpeng Zhang; Bokai Zhang; Jiapeng Zhang; Fan Zhang; Huahong Zhang; Zhongchen Zhao; Zixuan Zhao; Jiachen Zhao; Can Zhao; Qingshuo Zheng; Yuheng Zhi; Ziqi Zhou; Baosheng Zou; Klaus Maier-Hein; Paul F. Jäger; Annette Kopp-Schneider; Lena Maier-Hein
Better May Not Be Fairer: Can Data Augmentation Mitigate Subgroup Degradation? (1%)Ming-Chang Chiu; Pin-Yu Chen; Xuezhe Ma
On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study. (1%)Ming-Chang Chiu; Yingfei Wang; Derrick Eui Gyu Kim; Pin-Yu Chen; Xuezhe Ma
2022-12-15
Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks. (98%)Nikolaos Antoniou; Efthymios Georgiou; Alexandros Potamianos
On Evaluating Adversarial Robustness of Chest X-ray Classification: Pitfalls and Best Practices. (84%)Salah Ghamizi; Maxime Cordy; Michail Papadakis; Yves Le Traon
Are Multimodal Models Robust to Image and Text Perturbations? (5%)Jielin Qiu; Yi Zhu; Xingjian Shi; Florian Wenzel; Zhiqiang Tang; Ding Zhao; Bo Li; Mu Li
Holistic risk assessment of inference attacks in machine learning. (4%)Yang Yang
Defending against cybersecurity threats to the payments and banking system. (2%)Williams Haruna; Toyin Ajiboro Aremu; Yetunde Ajao Modupe
White-box Inference Attacks against Centralized Machine Learning and Federated Learning. (1%)Jingyi Ge
2022-12-14
SAIF: Sparse Adversarial and Interpretable Attack Framework. (98%)Tooba Imtiaz; Morgan Kohler; Jared Miller; Zifeng Wang; Mario Sznaier; Octavia Camps; Jennifer Dy
Dissecting Distribution Inference. (88%)Anshuman Suri; Yifu Lu; Yanjin Chen; David Evans
Generative Robust Classification. (11%)Xuwang Yin
Synthesis of Adversarial DDOS Attacks Using Tabular Generative Adversarial Networks. (8%)Abdelmageed Ahmed Hassan; Mohamed Sayed Hussein; Ahmed Shehata AboMoustafa; Sarah Hossam Elmowafy
DOC-NAD: A Hybrid Deep One-class Classifier for Network Anomaly Detection. (1%)Mohanad Sarhan; Gayan Kulatilleke; Wai Weng Lo; Siamak Layeghy; Marius Portmann
2022-12-13
Object-fabrication Targeted Attack for Object Detection. (99%)Xuchong Zhang; Changfeng Sun; Haoliang Han; Hang Wang; Hongbin Sun; Nanning Zheng
Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection. (99%)Peter Lorenz; Margret Keuper; Janis Keuper
Adversarial Attacks and Defences for Skin Cancer Classification. (99%)Vinay Jogani; Joy Purohit; Ishaan Shivhare; Samina Attari; Shraddha Surtkar
Towards Efficient and Domain-Agnostic Evasion Attack with High-dimensional Categorical Inputs. (80%)Hongyan Bao; Yufei Han; Yujun Zhou; Xin Gao; Xiangliang Zhang
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. (73%)Chengzhi Mao; Scott Geng; Junfeng Yang; Xin Wang; Carl Vondrick
Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection. (56%)Zhenyu Wu; Lin Wang; Wei Wang; Qing Xia; Chenglizhao Chen; Aimin Hao; Shuo Li
AdvCat: Domain-Agnostic Robustness Assessment for Cybersecurity-Critical Applications with Categorical Inputs. (56%)Helene Orsini; Hongyan Bao; Yujun Zhou; Xiangrui Xu; Yufei Han; Longyang Yi; Wei Wang; Xin Gao; Xiangliang Zhang
Privacy-preserving Security Inference Towards Cloud-Edge Collaborative Using Differential Privacy. (1%)Yulong Wang; Xingshu Chen; Qixu Wang
Boosting Semi-Supervised Learning with Contrastive Complementary Labeling. (1%)Qinyi Deng; Yong Guo; Zhibang Yang; Haolin Pan; Jian Chen
2022-12-12
SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation. (98%)Wanqing Zhu; Jia-Li Yin; Bo-Hao Chen; Ximeng Liu
Adversarially Robust Video Perception by Seeing Motion. (98%)Lingyu Zhang; Chengzhi Mao; Junfeng Yang; Carl Vondrick
A Survey on Reinforcement Learning Security with Application to Autonomous Driving. (96%)Ambra Demontis; Maura Pintor; Luca Demetrio; Kathrin Grosse; Hsiao-Ying Lin; Chengfang Fang; Battista Biggio; Fabio Roli
HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design. (96%)Hui Wei; Zhixiang Wang; Xuemei Jia; Yinqiang Zheng; Hao Tang; Shin'ichi Satoh; Zheng Wang
Robust Perception through Equivariance. (96%)Chengzhi Mao; Lingyu Zhang; Abhishek Joshi; Junfeng Yang; Hao Wang; Carl Vondrick
Despite "super-human" performance, current LLMs are unsuited for decisions about ethics and safety. (75%)Joshua Albrecht; Ellie Kitanidis; Abraham J. Fetterman
AFLGuard: Byzantine-robust Asynchronous Federated Learning. (15%)Minghong Fang; Jia Liu; Neil Zhenqiang Gong; Elizabeth S. Bentley
Carpet-bombing patch: attacking a deep network without usual requirements. (2%)Pol Labarbarie; Adrien Chan-Hon-Tong; Stéphane Herbin; Milad Leyli-Abadi
Numerical Stability of DeepGOPlus Inference. (1%)Inés Gonzalez Pepe; Yohan Chatelain; Gregory Kiar; Tristan Glatard
2022-12-11
DISCO: Adversarial Defense with Local Implicit Functions. (99%)Chih-Hui Ho; Nuno Vasconcelos
REAP: A Large-Scale Realistic Adversarial Patch Benchmark. (98%)Nabeel Hingun; Chawin Sitawarin; Jerry Li; David Wagner
2022-12-10
General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments. (99%)Xiaogang Xu; Hengshuang Zhao; Philip Torr; Jiaya Jia
Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense. (93%)Yang Yu; Qi Liu; Likang Wu; Runlong Yu; Sanshi Lei Yu; Zaixi Zhang
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking. (93%)Dennis Gross; Thiago D. Simao; Nils Jansen; Guillermo A. Perez
Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors. (54%)Giovanni Apruzzese; V. S. Subrahmanian
How to Backdoor Diffusion Models? (12%)Sheng-Yen Chou; Pin-Yu Chen; Tsung-Yi Ho
Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification. (1%)Ruixuan Tang; Hanjie Chen; Yangfeng Ji
2022-12-09
Understanding and Combating Robust Overfitting via Input Loss Landscape Analysis and Regularization. (98%)Lin Li; Michael Spratling
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding. (2%)Minh-Long Luu; Zeyi Huang; Eric P. Xing; Yong Jae Lee; Haohan Wang
Spurious Features Everywhere -- Large-Scale Detection of Harmful Spurious Features in ImageNet. (1%)Yannic Neuhaus; Maximilian Augustin; Valentyn Boreiko; Matthias Hein
Robustness Implies Privacy in Statistical Estimation. (1%)Samuel B. Hopkins; Gautam Kamath; Mahbod Majid; Shyam Narayanan
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models. (1%)Rui Zhu; Di Tang; Siyuan Tang; XiaoFeng Wang; Haixu Tang
QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks. (1%)Yedi Zhang; Zhe Zhao; Fu Song; Min Zhang; Taolue Chen; Jun Sun
2022-12-08
Targeted Adversarial Attacks against Neural Network Trajectory Predictors. (99%)Kaiyuan Tan; Jun Wang; Yiannis Kantaros
XRand: Differentially Private Defense against Explanation-Guided Attacks. (68%)Truc Nguyen; Phung Lai; NhatHai Phan; My T. Thai
Robust Graph Representation Learning via Predictive Coding. (22%)Billy Byiringiro; Tommaso Salvatori; Thomas Lukasiewicz
2022-12-07
Use of Cryptography in Malware Obfuscation. (1%)Hassan Jameel Asghar; Benjamin Zi Hao Zhao; Muhammad Ikram; Giang Nguyen; Dali Kaafar; Sean Lamont; Daniel Coscia
2022-12-06
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. (96%)Hongbin Liu; Wenjie Qu; Jinyuan Jia; Neil Zhenqiang Gong
2022-12-05
Enhancing Quantum Adversarial Robustness by Randomized Encodings. (99%)Weiyuan Gong; Dong Yuan; Weikang Li; Dong-Ling Deng
Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance. (99%)Ngoc N. Tran; Anh Tuan Bui; Dinh Phung; Trung Le
FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration. (92%)Žiga Babnik; Peter Peer; Vitomir Štruc
Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning. (76%)Mingyuan Fan; Cen Chen; Chengyu Wang; Wenmeng Zhou; Jun Huang; Ximeng Liu; Wenzhong Guo
Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria. (8%)Tengyuan Liang
What is the Solution for State-Adversarial Multi-Agent Reinforcement Learning? (3%)Songyang Han; Sanbao Su; Sihong He; Shuo Han; Haizhao Yang; Fei Miao
Spuriosity Rankings: Sorting Data for Spurious Correlation Robustness. (1%)Mazda Moayeri; Wenxiao Wang; Sahil Singla; Soheil Feizi
Efficient Malware Analysis Using Metric Embeddings. (1%)Ethan M. Rudd; David Krisiloff; Scott Coull; Daniel Olszewski; Edward Raff; James Holt
2022-12-04
Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense. (98%)Bao Gia Doan; Ehsan Abbasnejad; Javen Qinfeng Shi; Damith C. Ranasinghe
Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks. (88%)Xiao Li; Ziqi Wang; Bo Zhang; Fuchun Sun; Xiaolin Hu
CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness. (82%)Huy Phan; Miao Yin; Yang Sui; Bo Yuan; Saman Zonouz
FedCC: Robust Federated Learning against Model Poisoning Attacks. (45%)Hyejun Jeong; Hamin Son; Seohu Lee; Jayun Hyun; Tai-Myoung Chung
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder. (8%)Qi Tian; Kun Kuang; Kelu Jiang; Furui Liu; Zhihua Wang; Fei Wu
2022-12-03
LDL: A Defense for Label-Based Membership Inference Attacks. (83%)Arezoo Rajabi; Dinuka Sahabandu; Luyao Niu; Bhaskar Ramasubramanian; Radha Poovendran
Security Analysis of SplitFed Learning. (8%)Momin Ahmad Khan; Virat Shejwalkar; Amir Houmansadr; Fatima Muhammad Anwar
2022-12-02
Membership Inference Attacks Against Semantic Segmentation Models. (45%)Tomas Chobola; Dmitrii Usynin; Georgios Kaissis
Guaranteed Conformance of Neurosymbolic Models to Natural Constraints. (1%)Kaustubh Sridhar; Souradeep Dutta; James Weimer; Insup Lee
2022-12-01
Purifier: Defending Data Inference Attacks via Transforming Confidence Scores. (89%)Ziqi Yang; Lijin Wang; Da Yang; Jie Wan; Ziming Zhao; Ee-Chien Chang; Fan Zhang; Kui Ren
Pareto Regret Analyses in Multi-objective Multi-armed Bandit. (41%)Mengfan Xu; Diego Klabjan
All You Need Is Hashing: Defending Against Data Reconstruction Attack in Vertical Federated Learning. (2%)Pengyu Qiu; Xuhong Zhang; Shouling Ji; Yuwen Pu; Ting Wang
Generalizing and Improving Jacobian and Hessian Regularization. (1%)Chenwei Cui; Zehao Yan; Guangshen Liu; Liangfu Lu
On the Limit of Explaining Black-box Temporal Graph Neural Networks. (1%)Minh N. Vu; My T. Thai
SimpleMind adds thinking to deep neural networks. (1%)Youngwon Choi; M. Wasil Wahi-Anwar; Matthew S. Brown
2022-11-30
Towards Interpreting Vulnerability of Multi-Instance Learning via Customized and Universal Adversarial Perturbations. (97%)Yu-Xuan Zhang; Hua Meng; Xue-Mei Cao; Zhengchun Zhou; Mei Yang; Avik Ranjan Adhikary
Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations. (84%)Haniyeh Ehsani Oskouie; Farzan Farnia
Efficient Adversarial Input Generation via Neural Net Patching. (75%)Tooba Khan; Kumar Madhukar; Subodh Vishnu Sharma
Toward Robust Diagnosis: A Contour Attention Preserving Adversarial Defense for COVID-19 Detection. (69%)Kun Xiang; Xing Zhang; Jinwen She; Jinpeng Liu; Haohan Wang; Shiqi Deng; Shancheng Jiang
Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations. (38%)Hong-Ming Chiu; Richard Y. Zhang
Improved Smoothed Analysis of 2-Opt for the Euclidean TSP. (8%)Bodo Manthey; Jesse van Rhijn
2022-11-29
Understanding and Enhancing Robustness of Concept-based Models. (99%)Sanchit Sinha; Mengdi Huai; Jianhui Sun; Aidong Zhang
Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion. (99%)Kui Zhang; Hang Zhou; Jie Zhang; Qidong Huang; Weiming Zhang; Nenghai Yu
Advancing Deep Metric Learning Through Multiple Batch Norms And Multi-Targeted Adversarial Examples. (88%)Inderjeet Singh; Kazuya Kakizaki; Toshinori Araki
Penalizing Confident Predictions on Largely Perturbed Inputs Does Not Improve Out-of-Distribution Generalization in Question Answering. (83%)Kazutoshi Shinoda; Saku Sugawara; Akiko Aizawa
Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks. (73%)Mathias Lechner; Đorđe Žikelić; Krishnendu Chatterjee; Thomas A. Henzinger; Daniela Rus
AdvMask: A Sparse Adversarial Attack Based Data Augmentation Method for Image Classification. (54%)Suorong Yang; Jinqiao Li; Jian Zhao; Furao Shen
A3T: Accuracy Aware Adversarial Training. (10%)Enes Altinisik; Safa Messaoud; Husrev Taha Sencar; Sanjay Chawla
Building Resilience to Out-of-Distribution Visual Data via Input Optimization and Model Finetuning. (1%)Christopher J. Holder; Majid Khonji; Jorge Dias; Muhammad Shafique
2022-11-28
Adversarial Artifact Detection in EEG-Based Brain-Computer Interfaces. (99%)Xiaoqing Chen; Dongrui Wu
Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning. (95%)Eldor Abdukhamidov; Mohammed Abuhamad; Simon S. Woo; Eric Chan-Tin; Tamer Abuhmed
Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning. (83%)Gyojin Han; Jaehyun Choi; Hyeong Gwon Hong; Junmo Kim
Towards More Robust Interpretation via Local Gradient Alignment. (76%)Sunghwan Joo; Seokhyeon Jeong; Juyeon Heo; Adrian Weller; Taesup Moon
Understanding the Impact of Adversarial Robustness on Accuracy Disparity. (31%)Yuzheng Hu; Fan Wu; Hongyang Zhang; Han Zhao
How Important are Good Method Names in Neural Code Generation? A Model Robustness Perspective. (13%)Guang Yang; Yu Zhou; Wenhua Yang; Tao Yue; Xiang Chen; Taolue Chen
Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning. (8%)Xiaoyue Duan; Guoliang Kang; Runqi Wang; Shumin Han; Song Xue; Tian Wang; Baochang Zhang
Attack on Unfair ToS Clause Detection: A Case Study using Universal Adversarial Triggers. (8%)Shanshan Xu; Irina Broda; Rashid Haddad; Marco Negrini; Matthias Grabmair
Gamma-convergence of a nonlocal perimeter arising in adversarial machine learning. (3%)Leon Bungert; Kerrek Stinson
CoNAL: Anticipating Outliers with Large Language Models. (1%)Albert Xu; Xiang Ren; Robin Jia
Learning Antidote Data to Individual Unfairness. (1%)Peizhao Li; Ethan Xia; Hongfu Liu
2022-11-27
Imperceptible Adversarial Attack via Invertible Neural Networks. (99%)Zihan Chen; Ziyue Wang; Junjie Huang; Wentao Zhao; Xiao Liu; Dejian Guan
Foiling Explanations in Deep Neural Networks. (98%)Snir Vitrack Tamam; Raz Lapid; Moshe Sipper
Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning. (84%)Yunchao Zhang; Zonglin Di; Kaiwen Zhou; Cihang Xie; Xin Wang
Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs. (50%)Guangrun Wang; Philip H. S. Torr
Federated Learning Attacks and Defenses: A Survey. (47%)Yao Chen; Yijie Gui; Hong Lin; Wensheng Gan; Yongdong Wu
Adversarial Rademacher Complexity of Deep Neural Networks. (47%)Jiancong Xiao; Yanbo Fan; Ruoyu Sun; Zhi-Quan Luo
2022-11-26
Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning. (99%)Ethan Rathbun; Kaleel Mahmood; Sohaib Ahmad; Caiwen Ding; Marten van Dijk
2022-11-25
Boundary Adversarial Examples Against Adversarial Overfitting. (99%)Muhammad Zaid Hameed; Beat Buesser
Supervised Contrastive Prototype Learning: Augmentation Free Robust Neural Network. (98%)Iordanis Fostiropoulos; Laurent Itti
Beyond Smoothing: Unsupervised Graph Representation Learning with Edge Heterophily Discriminating. (3%)Yixin Liu; Yizhen Zheng; Daokun Zhang; Vincent CS Lee; Shirui Pan
TrustGAN: Training safe and trustworthy deep learning models through generative adversarial networks. (1%)Hélion du Mas des Bourboux
2022-11-24
SAGA: Spectral Adversarial Geometric Attack on 3D Meshes. (98%)Tomer Stolik; Itai Lang; Shai Avidan
Explainable and Safe Reinforcement Learning for Autonomous Air Mobility. (92%)Lei Wang; Hongyu Yang; Yi Lin; Suwan Yin; Yuankai Wu
Tracking Dataset IP Use in Deep Neural Networks. (76%)Seonhye Park; Alsharif Abuadbba; Shuo Wang; Kristen Moore; Yansong Gao; Hyoungshick Kim; Surya Nepal
Neural Network Complexity of Chaos and Turbulence. (41%)Tim Whittaker; Romuald A. Janik; Yaron Oz
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models. (8%)Jacob Shams; Ben Nassi; Ikuya Morikawa; Toshiya Shimizu; Asaf Shabtai; Yuval Elovici
Generative Joint Source-Channel Coding for Semantic Image Transmission. (1%)Ecenaz Erdemir; Tze-Yang Tung; Pier Luigi Dragotti; Deniz Gunduz
CycleGANWM: A CycleGAN watermarking method for ownership verification. (1%)Dongdong Lin; Benedetta Tondi; Bin Li; Mauro Barni
2022-11-23
Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition. (99%)Rohit Gupta; Naveed Akhtar; Gaurav Kumar Nayak; Ajmal Mian; Mubarak Shah
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners. (99%)Elre T. Oldewage; John Bronskill; Richard E. Turner
Reliable Robustness Evaluation via Automatically Constructed Attack Ensembles. (76%)Shengcai Liu; Fu Peng; Ke Tang
Dual Graphs of Polyhedral Decompositions for the Detection of Adversarial Attacks. (62%)Huma Jamil; Yajing Liu; Christina Cole; Nathaniel Blanchard; Emily J. King; Michael Kirby; Christopher Peterson
Privacy-Enhancing Optical Embeddings for Lensless Classification. (11%)Eric Bezzam; Martin Vetterli; Matthieu Simeoni
Principled Data-Driven Decision Support for Cyber-Forensic Investigations. (1%)Soodeh Atefi; Sakshyam Panda; Manos Panaousis; Aron Laszka
Data Provenance Inference in Machine Learning. (1%)Mingxue Xu; Xiang-Yang Li
2022-11-22
Benchmarking Adversarially Robust Quantum Machine Learning at Scale. (99%)Maxwell T. West; Sarah M. Erfani; Christopher Leckie; Martin Sevior; Lloyd C. L. Hollenberg; Muhammad Usman
PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples. (99%)Shengshan Hu; Junwei Zhang; Wei Liu; Junhui Hou; Minghui Li; Leo Yu Zhang; Hai Jin; Lichao Sun
Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces. (98%)Shengbang Fang; Matthew C Stamm
Backdoor Cleansing with Unlabeled Data. (75%)Lu Pang; Tao Sun; Haibin Ling; Chao Chen
Improving Robust Generalization by Direct PAC-Bayesian Bound Minimization. (70%)Zifan Wang; Nan Ding; Tomer Levinboim; Xi Chen; Radu Soricut
SoK: Inference Attacks and Defenses in Human-Centered Wireless Sensing. (69%)Wei Sun; Tingjun Chen; Neil Gong
2022-11-21
Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization. (99%)Jiafeng Wang; Zhaoyu Chen; Kaixun Jiang; Dingkang Yang; Lingyi Hong; Yan Wang; Wenqiang Zhang
Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack. (99%)Yunfeng Diao; He Wang; Tianjia Shao; Yong-Liang Yang; Kun Zhou; David Hogg
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors. (99%)Sizhe Chen; Geng Yuan; Xinwen Cheng; Yifan Gong; Minghai Qin; Yanzhi Wang; Xiaolin Huang
Addressing Mistake Severity in Neural Networks with Semantic Knowledge. (92%)Natalie Abreu; Nathan Vaska; Victoria Helus
Efficient Generalization Improvement Guided by Random Weight Perturbation. (68%)Tao Li; Weihao Yan; Zehao Lei; Yingwen Wu; Kun Fang; Ming Yang; Xiaolin Huang
CLAWSAT: Towards Both Robust and Accurate Code Models. (56%)Jinghan Jia; Shashank Srikant; Tamara Mitrovska; Chuang Gan; Shiyu Chang; Sijia Liu; Una-May O'Reilly
Fairness Increases Adversarial Vulnerability. (54%)Cuong Tran; Keyu Zhu; Ferdinando Fioretto; Hentenryck Pascal Van
Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving. (10%)Yinbo Yu; Jiajia Liu
SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks. (8%)Sunder Ali Khowaja; Parus Khuwaja; Kapal Dev; Angelos Antonopoulos
A Survey on Backdoor Attack and Defense in Natural Language Processing. (2%)Xuan Sheng; Zhaoyang Han; Piji Li; Xiangmao Chang
Understanding and Improving Visual Prompting: A Label-Mapping Perspective. (2%)Aochuan Chen; Yuguang Yao; Pin-Yu Chen; Yihua Zhang; Sijia Liu
Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text. (1%)Qianhui Wu; Huiqiang Jiang; Haonan Yin; Börje F. Karlsson; Chin-Yew Lin
Privacy in Practice: Private COVID-19 Detection in X-Ray Images. (1%)Lucas Lange; Maja Schneider; Erhard Rahm
A Tale of Frozen Clouds: Quantifying the Impact of Algorithmic Complexity Vulnerabilities in Popular Web Servers. (1%)Masudul Hasan Masud Bhuiyan; Cristian-Alexandru Staicu
2022-11-20
Spectral Adversarial Training for Robust Graph Neural Network. (99%)Jintang Li; Jiaying Peng; Liang Chen; Zibin Zheng; Tingting Liang; Qing Ling
Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification. (81%)Wenli Sun; Xinyang Jiang; Shuguang Dou; Dongsheng Li; Duoqian Miao; Cheng Deng; Cairong Zhao
Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training. (47%)Jiaxu Tian; Dapeng Zhi; Si Liu; Peixin Wang; Guy Katz; Min Zhang
Adversarial Cheap Talk. (8%)Chris Lu; Timon Willi; Alistair Letcher; Jakob Foerster
Deep Composite Face Image Attacks: Generation, Vulnerability and Detection. (2%)Jag Mohan Singh; Raghavendra Ramachandra
AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation. (2%)Hyungmin Kim; Sungho Suh; Sunghyun Baek; Daehwan Kim; Daun Jeong; Hansang Cho; Junmo Kim
2022-11-19
Towards Adversarial Robustness of Deep Vision Algorithms. (92%)Hanshu Yan
Phonemic Adversarial Attack against Audio Recognition in Real World. (87%)Jiakai Wang; Zhendong Chen; Zixin Yin; Qinghong Yang; Xianglong Liu
Towards Robust Dataset Learning. (82%)Yihan Wu; Xinda Li; Florian Kerschbaum; Heng Huang; Hongyang Zhang
Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning. (80%)Mingxuan Ju; Yujie Fan; Chuxu Zhang; Yanfang Ye
Mask Off: Analytic-based Malware Detection By Transfer Learning and Model Personalization. (9%)Amirmohammad Pasdar; Young Choon Lee; Seok-Hee Hong
Investigating the Security of EV Charging Mobile Applications As an Attack Surface. (1%)K. Sarieddine; M. A. Sayed; S. Torabi; R. Atallah; C. Assi
2022-11-18
Adversarial Stimuli: Attacking Brain-Computer Interfaces via Perturbed Sensory Events. (98%)Bibek Upadhayay; Vahid Behzadan
Adversarial Detection by Approximation of Ensemble Boundary. (75%)T. Windeatt
Leveraging Algorithmic Fairness to Mitigate Blackbox Attribute Inference Attacks. (68%)Jan Aalmoes; Vasisht Duddu; Antoine Boutet
Invariant Learning via Diffusion Dreamed Distribution Shifts. (10%)Priyatham Kattakinda; Alexander Levine; Soheil Feizi
Intrusion Detection in Internet of Things using Convolutional Neural Networks. (1%)Martin Kodys; Zhi Lu; Kar Wai Fok; Vrizlynn L. L. Thing
Improving Robustness of TCM-based Robust Steganography with Variable Robustness. (1%)Jimin Zhang; Xianfeng Zhao; Xiaolei He
Provable Defense against Backdoor Policies in Reinforcement Learning. (1%)Shubham Kumar Bharti; Xuezhou Zhang; Adish Singla; Xiaojin Zhu
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory. (1%)Justin Cui; Ruochen Wang; Si Si; Cho-Jui Hsieh
2022-11-17
Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks. (99%)Stephen Casper; Kaivalya Hariharan; Dylan Hadfield-Menell
Towards Good Practices in Evaluating Transfer Adversarial Attacks. (93%)Zhengyu Zhao; Hanwei Zhang; Renjue Li; Ronan Sicre; Laurent Amsaleg; Michael Backes
Assessing Neural Network Robustness via Adversarial Pivotal Tuning. (92%)Peter Ebert Christensen; Vésteinn Snæbjarnarson; Andrea Dittadi; Serge Belongie; Sagie Benaim
UPTON: Unattributable Authorship Text via Data Poisoning. (86%)Ziyao Wang; Thai Le; Dongwon Lee
Generalizable Deepfake Detection with Phase-Based Motion Analysis. (50%)Ekta Prashnani; Michael Goebel; B. S. Manjunath
More Effective Centrality-Based Attacks on Weighted Networks. (15%)Balume Mburano; Weisheng Si; Qing Cao; Wei Xing Zheng
Potential Auto-driving Threat: Universal Rain-removal Attack. (2%)Jincheng Hu; Jihao Li; Zhuoran Hou; Jingjing Jiang; Cunjia Liu; Yuanjian Zhang
Data-Centric Debugging: mitigating model failures via targeted data collection. (1%)Sahil Singla; Atoosa Malemir Chegini; Mazda Moayeri; Soheil Feizi
A Tale of Two Cities: Data and Configuration Variances in Robust Deep Learning. (1%)Guanqin Zhang; Jiankun Sun; Feng Xu; H. M. N. Dilum Bandara; Shiping Chen; Yulei Sui; Tim Menzies
VeriSparse: Training Verified Locally Robust Sparse Neural Networks from Scratch. (1%)Sawinder Kaur; Yi Xiao; Asif Salekin
2022-11-16
T-SEA: Transfer-based Self-Ensemble Attack on Object Detection. (99%)Hao Huang; Ziyan Chen; Huanran Chen; Yongtao Wang; Kevin Zhang
Efficiently Finding Adversarial Examples with DNN Preprocessing. (99%)Avriti Chauhan; Mohammad Afzal; Hrishikesh Karmarkar; Yizhak Elboher; Kumar Madhukar; Guy Katz
Improving Interpretability via Regularization of Neural Activation Sensitivity. (92%)Ofir Moshe; Gil Fidel; Ron Bitton; Asaf Shabtai
Attacking Object Detector Using A Universal Targeted Label-Switch Patch. (86%)Avishag Shapira; Ron Bitton; Dan Avraham; Alon Zolfi; Yuval Elovici; Asaf Shabtai
Differentially Private Optimizers Can Learn Adversarially Robust Models. (83%)Yuan Zhang; Zhiqi Bu
Interpretable Dimensionality Reduction by Feature Preserving Manifold Approximation and Projection. (56%)Yang Yang; Hongjian Sun; Jialei Gong; Yali Du; Di Yu
Privacy against Real-Time Speech Emotion Detection via Acoustic Adversarial Evasion of Machine Learning. (38%)Brian Testa; Yi Xiao; Avery Gump; Asif Salekin
Holistic Evaluation of Language Models. (2%)Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar; Benjamin Newman; Binhang Yuan; Bobby Yan; Ce Zhang; Christian Cosgrove; Christopher D. Manning; Christopher Ré; Diana Acosta-Navas; Drew A. Hudson; Eric Zelikman; Esin Durmus; Faisal Ladhak; Frieda Rong; Hongyu Ren; Huaxiu Yao; Jue Wang; Keshav Santhanam; Laurel Orr; Lucia Zheng; Mert Yuksekgonul; Mirac Suzgun; Nathan Kim; Neel Guha; Niladri Chatterji; Omar Khattab; Peter Henderson; Qian Huang; Ryan Chi; Sang Michael Xie; Shibani Santurkar; Surya Ganguli; Tatsunori Hashimoto; Thomas Icard; Tianyi Zhang; Vishrav Chaudhary; William Wang; Xuechen Li; Yifan Mai; Yuhui Zhang; Yuta Koreeda
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems. (1%)Alessio Russo; Alexandre Proutiere
2022-11-15
Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation. (99%)Zhihao Zhu; Chenwang Wu; Min Zhou; Hao Liao; Defu Lian; Enhong Chen
Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning. (99%)Yiran Huang; Yexu Zhou; Michael Hefenbrock; Till Riedel; Likun Fang; Michael Beigl
MORA: Improving Ensemble Robustness Evaluation with Model-Reweighing Attack. (99%)Yunrui Yu; Xitong Gao; Cheng-Zhong Xu
Person Text-Image Matching via Text-Feature Interpretability Embedding and External Attack Node Implantation. (92%)Fan Li; Hang Zhou; Huafeng Li; Yafei Zhang; Zhengtao Yu
Backdoor Attacks on Time Series: A Generative Approach. (70%)Yujing Jiang; Xingjun Ma; Sarah Monazam Erfani; James Bailey
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning. (61%)Jinghuai Zhang; Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
Improved techniques for deterministic l2 robustness. (22%)Sahil Singla; Soheil Feizi
Backdoor Attacks for Remote Sensing Data with Wavelet Transform. (12%)Nikolaus Dräger; Yonghao Xu; Pedram Ghamisi
2022-11-14
Efficient Adversarial Training with Robust Early-Bird Tickets. (92%)Zhiheng Xi; Rui Zheng; Tao Gui; Qi Zhang; Xuanjing Huang
Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection. (13%)M. Ibsen; C. Rathgeb; F. Brechtel; R. Klepp; K. Pöppelmann; A. George; S. Marcel; C. Busch
Towards Robust Numerical Question Answering: Diagnosing Numerical Capabilities of NLP Systems. (5%)Jialiang Xu; Mengyu Zhou; Xinyi He; Shi Han; Dongmei Zhang
Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations. (5%)Shea Cardozo; Gabriel Islas Montero; Dmitry Kazhdan; Botty Dimanov; Maleakhi Wijaya; Mateja Jamnik; Pietro Lio
Robustifying Deep Vision Models Through Shape Sensitization. (2%)Aditay Tripathi; Rishubh Singh; Anirban Chakraborty; Pradeep Shenoy
2022-11-13
Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation. (26%)Yuan Xiao; Tongtong Bai; Mingzheng Gu; Chunrong Fang; Zhenyu Chen
2022-11-12
Adversarial and Random Transformations for Robust Domain Adaptation and Generalization. (75%)Liang Xiao; Jiaolong Xu; Dawei Zhao; Erke Shang; Qi Zhu; Bin Dai
DriftRec: Adapting diffusion models to blind JPEG restoration. (1%)Simon Welker; Henry N. Chapman; Timo Gerkmann
2022-11-11
Generating Textual Adversaries with Minimal Perturbation. (98%)Xingyi Zhao; Lu Zhang; Depeng Xu; Shuhan Yuan
On the robustness of non-intrusive speech quality model by adversarial examples. (98%)Hsin-Yi Lin; Huan-Hsin Tseng; Yu Tsao
An investigation of security controls and MITRE ATT&CK techniques. (47%)Md Rayhanur Rahman; Laurie Williams
Investigating co-occurrences of MITRE ATT&CK Techniques. (12%)Md Rayhanur Rahman; Laurie Williams
Remapped Cache Layout: Thwarting Cache-Based Side-Channel Attacks with a Hardware Defense. (9%)Wei Song; Rui Hou; Peng Liu; Xiaoxin Li; Peinan Li; Lutan Zhao; Xiaofei Fu; Yifei Sun; Dan Meng
2022-11-10
Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses. (99%)Abhiram Kolli; Muhammad Jehanzeb Mirza; Horst Possegger; Horst Bischof
Impact of Adversarial Training on Robustness and Generalizability of Language Models. (99%)Enes Altinisik; Hassan Sajjad; Husrev Taha Sencar; Safa Messaoud; Sanjay Chawla
Privacy-Utility Balanced Voice De-Identification Using Adversarial Examples. (98%)Meng Chen; Li Lu; Jiadi Yu; Yingying Chen; Zhongjie Ba; Feng Lin; Kui Ren
Stay Home Safe with Starving Federated Data. (80%)Jaechul Roh; Yajun Fang
MSDT: Masked Language Model Scoring Defense in Text Domain. (38%)Jaechul Roh; Minhao Cheng; Yajun Fang
Robust DNN Surrogate Models with Uncertainty Quantification via Adversarial Training. (3%)Lixiang Zhang; Jia Li
Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations. (1%)Sheng-Feng Yu; Wei-Chen Chiu
2022-11-09
On the Robustness of Explanations of Deep Neural Network Models: A Survey. (50%)Amlan Jyoti; Karthik Balaji Ganesh; Manoj Gayala; Nandita Lakshmi Tunuguntla; Sandesh Kamath; Vineeth N Balasubramanian
Are All Edges Necessary? A Unified Framework for Graph Purification. (5%)Zishan Gu; Jintang Li; Liang Chen
QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems. (3%)Ana-Maria Cretu; Florimond Houssiau; Antoine Cully; Montjoye Yves-Alexandre de
Accountable and Explainable Methods for Complex Reasoning over Text. (2%)Pepa Atanasova
Directional Privacy for Deep Learning. (1%)Pedro Faustini; Natasha Fernandes; Shakila Tonni; Annabelle McIver; Mark Dras
2022-11-08
Preserving Semantics in Textual Adversarial Attacks. (99%)David Herel; Hugo Cisneros; Tomas Mikolov
NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries? (98%)Saadia Gabriel; Hamid Palangi; Yejin Choi
How Fraudster Detection Contributes to Robust Recommendation. (67%)Yuni Lai; Kai Zhou
Lipschitz Continuous Algorithms for Graph Problems. (16%)Soh Kumabe; Yuichi Yoshida
Learning advisor networks for noisy image classification. (1%)Simone Ricci; Tiberio Uricchio; Bimbo Alberto Del
2022-11-07
Are AlphaZero-like Agents Robust to Adversarial Perturbations? (99%)Li-Cheng Lan; Huan Zhang; Ti-Rong Wu; Meng-Yu Tsai; I-Chen Wu; Cho-Jui Hsieh
Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation. (82%)Zijie Lou; Gang Cao; Man Lin
Deviations in Representations Induced by Adversarial Attacks. (70%)Daniel Steinberg; Paul Munro
A Hypergraph-Based Machine Learning Ensemble Network Intrusion Detection System. (1%)Zong-Zhi Lin; Thomas D. Pike; Mark M. Bailey; Nathaniel D. Bastian
Interpreting deep learning output for out-of-distribution detection. (1%)Damian Matuszewski; Ida-Maria Sintorn
Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks. (1%)Naoya Tezuka; Hideya Ochiai; Yuwei Sun; Hiroshi Esaki
2022-11-06
Contrastive Weighted Learning for Near-Infrared Gaze Estimation. (31%)Adam Lee
2022-11-05
Textual Manifold-based Defense Against Natural Language Adversarial Examples. (99%)Dang Minh Nguyen; Luu Anh Tuan
Stateful Detection of Adversarial Reprogramming. (96%)Yang Zheng; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Maura Pintor; Ambra Demontis; Battista Biggio; Fabio Roli
Robust Lottery Tickets for Pre-trained Language Models. (83%)Rui Zheng; Rong Bao; Yuhao Zhou; Di Liang; Sirui Wang; Wei Wu; Tao Gui; Qi Zhang; Xuanjing Huang
2022-11-04
Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning. (99%)Anaelia Ovalle; Evan Czyzycki; Cho-Jui Hsieh
Logits are predictive of network type. (68%)Ali Borji
An Adversarial Robustness Perspective on the Topology of Neural Networks. (64%)Morgane Goibert; Thomas Ricatte; Elvis Dohmatob
Fairness-aware Regression Robust to Adversarial Attacks. (38%)Yulu Jin; Lifeng Lai
Extension of Simple Algorithms to the Matroid Secretary Problem. (9%)Simon Park
Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content Dilutions. (3%)Gaurav Verma; Vishwa Vinay; Ryan A. Rossi; Srijan Kumar
Data Models for Dataset Drift Controls in Machine Learning With Images. (1%)Luis Oala; Marco Aversa; Gabriel Nobis; Kurt Willis; Yoan Neuenschwander; Michèle Buck; Christian Matek; Jerome Extermann; Enrico Pomarico; Wojciech Samek; Roderick Murray-Smith; Christoph Clausen; Bruno Sanguinetti
2022-11-03
Physically Adversarial Attacks and Defenses in Computer Vision: A Survey. (99%)Xingxing Wei; Bangzheng Pu; Jiefan Lu; Baoyuan Wu
Adversarial Defense via Neural Oscillation inspired Gradient Masking. (98%)Chunming Jiang; Yilei Zhang
M-to-N Backdoor Paradigm: A Stealthy and Fuzzy Attack to Deep Learning Models. (98%)Linshan Hou; Zhongyun Hua; Yuhong Li; Leo Yu Zhang
Robust Few-shot Learning Without Using any Adversarial Samples. (89%)Gaurav Kumar Nayak; Ruchit Rawal; Inder Khatri; Anirban Chakraborty
Data-free Defense of Black Box Models Against Adversarial Attacks. (84%)Gaurav Kumar Nayak; Inder Khatri; Shubham Randive; Ruchit Rawal; Anirban Chakraborty
Leveraging Domain Features for Detecting Adversarial Attacks Against Deep Speech Recognition in Noise. (38%)Christian Heider Nielsen; Zheng-Hua Tan
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems. (33%)Chong Chen; Ying Gao; Leyu Shi; Siquan Huang
Unintended Memorization and Timing Attacks in Named Entity Recognition Models. (12%)Rana Salal Ali; Benjamin Zi Hao Zhao; Hassan Jameel Asghar; Tham Nguyen; Ian David Wood; Dali Kaafar
2022-11-02
Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks. (99%)Amira Guesmi; Ihsen Alouani; Khaled N. Khasawneh; Mouna Baklouti; Tarek Frikha; Mohamed Abid; Nael Abu-Ghazaleh
Improving transferability of 3D adversarial attacks with scale and shear transformations. (99%)Jinlai Zhang; Yinpeng Dong; Jun Zhu; Jihong Zhu; Minchi Kuang; Xiaming Yuan
Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise. (99%)Jhih-Cing Huang; Yu-Lin Tsai; Chao-Han Huck Yang; Cheng-Fang Su; Chia-Mu Yu; Pin-Yu Chen; Sy-Yen Kuo
Adversarial Attack on Radar-based Environment Perception Systems. (99%)Amira Guesmi; Ihsen Alouani
Isometric Representations in Neural Networks Improve Robustness. (62%)Kosio Beshkov; Jonas Verhellen; Mikkel Elle Lepperød
BATT: Backdoor Attack with Transformation-based Triggers. (56%)Tong Xu; Yiming Li; Yong Jiang; Shu-Tao Xia
Untargeted Backdoor Attack against Object Detection. (50%)Chengxiao Luo; Yiming Li; Yong Jiang; Shu-Tao Xia
Generative Adversarial Training Can Improve Neural Language Models. (33%)Sajad Movahedi; Azadeh Shakery
Backdoor Defense via Suppressing Model Shortcuts. (3%)Sheng Yang; Yiming Li; Yong Jiang; Shu-Tao Xia
Human-in-the-Loop Mixup. (1%)Katherine M. Collins; Umang Bhatt; Weiyang Liu; Vihari Piratla; Ilia Sucholutsky; Bradley Love; Adrian Weller
2022-11-01
The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training. (99%)Junhao Dong; Seyed-Mohsen Moosavi-Dezfooli; Jianhuang Lai; Xiaohua Xie
LMD: A Learnable Mask Network to Detect Adversarial Examples for Speaker Verification. (99%)Xing Chen; Jie Wang; Xiao-Lei Zhang; Wei-Qiang Zhang; Kunde Yang
DensePure: Understanding Diffusion Models towards Adversarial Robustness. (98%)Chaowei Xiao; Zhongzhu Chen; Kun Jin; Jiongxiao Wang; Weili Nie; Mingyan Liu; Anima Anandkumar; Bo Li; Dawn Song
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks. (87%)Jianan Zhou; Jianing Zhu; Jingfeng Zhang; Tongliang Liu; Gang Niu; Bo Han; Masashi Sugiyama
Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics. (82%)Ekaterina Shumitskaya; Anastasia Antsiferova; Dmitriy Vatolin
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning. (80%)Virat Shejwalkar; Lingjuan Lyu; Amir Houmansadr
Maximum Likelihood Distillation for Robust Modulation Classification. (69%)Javier Maroto; Gérôme Bovet; Pascal Frossard
FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness. (45%)Wenhao Wu; Wei Li; Jiachen Liu; Xinyan Xiao; Ziqiang Cao; Sujian Li; Hua Wu
Amplifying Membership Exposure via Data Poisoning. (22%)Yufei Chen; Chao Shen; Yun Shen; Cong Wang; Yang Zhang
ActGraph: Prioritization of Test Cases Based on Deep Neural Network Activation Graph. (13%)Jinyin Chen; Jie Ge; Haibin Zheng
2022-10-31
Scoring Black-Box Models for Adversarial Robustness. (98%)Jian Vora; Pranay Reddy Samala
ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation. (88%)Tomokatsu Takahashi; Masanori Yamada; Yuuki Yamanaka; Tomoya Yamashita
SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability. (33%)Dipkamal Bhusal; Rosalyn Shin; Ajay Ashok Shewale; Monish Kumar Manikya Veerabhadran; Michael Clifford; Sara Rampazzi; Nidhi Rastogi
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. (16%)Daphne Ippolito; Florian Tramèr; Milad Nasr; Chiyuan Zhang; Matthew Jagielski; Katherine Lee; Christopher A. Choquette-Choo; Nicholas Carlini
2022-10-30
Poison Attack and Defense on Deep Source Code Processing Models. (99%)Jia Li; Zhuo Li; Huangzhao Zhang; Ge Li; Zhi Jin; Xing Hu; Xin Xia
Character-level White-Box Adversarial Attacks against Transformers via Attachable Subwords Substitution. (99%)Aiwei Liu; Honghai Yu; Xuming Hu; Shu'ang Li; Li Lin; Fukun Ma; Yawen Yang; Lijie Wen
Benchmarking Adversarial Patch Against Aerial Detection. (99%)Jiawei Lian; Shaohui Mei; Shun Zhang; Mingyang Ma
Symmetric Saliency-based Adversarial Attack To Speaker Identification. (92%)Jiadi Yao; Xing Chen; Xiao-Lei Zhang; Wei-Qiang Zhang; Kunde Yang
FI-ODE: Certified and Robust Forward Invariance in Neural ODEs. (61%)Yujia Huang; Ivan Dario Jimenez Rodriguez; Huan Zhang; Yuanyuan Shi; Yisong Yue
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive Games. (9%)The Viet Bui; Tien Mai; Thanh H. Nguyen
2022-10-29
On the Need of Neuromorphic Twins to Detect Denial-of-Service Attacks on Communication Networks. (10%)Holger Boche; Rafael F. Schaefer; H. Vincent Poor; Frank H. P. Fitzek
2022-10-28
Universal Adversarial Directions. (99%)Ching Lam Choi; Farzan Farnia
Improving the Transferability of Adversarial Attacks on Face Recognition with Beneficial Perturbation Feature Augmentation. (99%)Fengfan Zhou; Hefei Ling; Yuxuan Shi; Jiazhong Chen; Zongyi Li; Ping Li
Improving Hyperspectral Adversarial Robustness Under Multiple Attacks. (98%)Nicholas Soucy; Salimeh Yasaei Sekeh
Distributed Black-box Attack against Image Classification Cloud Services. (95%)Han Wu; Sareh Rowlands; Johan Wahlstrom
RoChBert: Towards Robust BERT Fine-tuning for Chinese. (75%)Zihan Zhang; Jinfeng Li; Ning Shi; Bo Yuan; Xiangyu Liu; Rong Zhang; Hui Xue; Donghong Sun; Chao Zhang
Robust Boosting Forests with Richer Deep Feature Hierarchy. (56%)Jianqiao Wangni
Localized Randomized Smoothing for Collective Robustness Certification. (26%)Jan Schuchardt; Tom Wollschläger; Aleksandar Bojchevski; Stephan Günnemann
Towards Reliable Neural Specifications. (11%)Chuqin Geng; Nham Le; Xiaojie Xu; Zhaoyue Wang; Arie Gurfinkel; Xujie Si
On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models. (1%)Mauro Conti; Jiaxin Li; Stjepan Picek
2022-10-27
TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems. (99%)Islam Debicha; Richard Bauwens; Thibault Debatty; Jean-Michel Dricot; Tayeb Kenaza; Wim Mees
Isometric 3D Adversarial Examples in the Physical World. (99%)Yibo Miao; Yinpeng Dong; Jun Zhu; Xiao-Shan Gao
LeNo: Adversarial Robust Salient Object Detection Networks with Learnable Noise. (92%)He Tang; He Wang
TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack. (92%)Yu Cao; Dianqi Li; Meng Fang; Tianyi Zhou; Jun Gao; Yibing Zhan; Dacheng Tao
Efficient and Effective Augmentation Strategy for Adversarial Training. (56%)Sravanti Addepalli; Samyak Jain; R. Venkatesh Babu
Noise Injection Node Regularization for Robust Learning. (2%)Noam Levi; Itay M. Bloch; Marat Freytsis; Tomer Volansky
Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather. (1%)Jinlong Li; Runsheng Xu; Jin Ma; Qin Zou; Jiaqi Ma; Hongkai Yu
2022-10-26
Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting. (99%)Pengyue Hou; Jie Han; Xingyu Li
There is more than one kind of robustness: Fooling Whisper with adversarial examples. (98%)Raphael Olivier; Bhiksha Raj
Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness. (86%)Jiahao Zhao; Wenji Mao
BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic Constraints for Adversarial Examples. (75%)Mohaddeseh Bastan; Mihai Surdeanu; Niranjan Balasubramanian
EIPSIM: Modeling Secure IP Address Allocation at Cloud Scale. (11%)Eric Pauley; Kyle Domico; Blaine Hoak; Ryan Sheatsley; Quinn Burke; Yohan Beugin; Patrick McDaniel
V-Cloak: Intelligibility-, Naturalness- & Timbre-Preserving Real-Time Voice Anonymization. (10%)Jiangyi Deng; Fei Teng; Yanjiao Chen; Xiaofu Chen; Zhaohui Wang; Wenyuan Xu
Rethinking the Reverse-engineering of Trojan Triggers. (5%)Zhenting Wang; Kai Mei; Hailun Ding; Juan Zhai; Shiqing Ma
Cover Reproducible Steganography via Deep Generative Models. (1%)Kejiang Chen; Hang Zhou; Yaofei Wang; Menghan Li; Weiming Zhang; Nenghai Yu
DEMIS: A Threat Model for Selectively Encrypted Visual Surveillance Data. (1%)Ifeoluwapo Aribilola; Mamoona Naveed Asghar; Brian Lee
Privately Fine-Tuning Large Language Models with Differential Privacy. (1%)Rouzbeh Behnia; Mohamamdreza Ebrahimi; Jason Pacheco; Balaji Padmanabhan
2022-10-25
LP-BFGS attack: An adversarial attack based on the Hessian with limited pixels. (99%)Jiebao Zhang; Wenhua Qian; Rencan Nie; Jinde Cao; Dan Xu
Adversarially Robust Medical Classification via Attentive Convolutional Neural Networks. (99%)Isaac Wasserman
A White-Box Adversarial Attack Against a Digital Twin. (99%)Wilson Patterson; Ivan Fernandez; Subash Neupane; Milan Parmar; Sudip Mittal; Shahram Rahimi
Adaptive Test-Time Defense with the Manifold Hypothesis. (98%)Zhaoyuan Yang; Zhiwei Xu; Jing Zhang; Richard Hartley; Peter Tu
Multi-view Representation Learning from Malware to Defend Against Adversarial Variants. (98%)James Lee Hu; Mohammadreza Ebrahimi; Weifeng Li; Xin Li; Hsinchun Chen
Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes. (98%)Sina Baharlouei; Fatemeh Sheikholeslami; Meisam Razaviyayn; Zico Kolter
Accelerating Certified Robustness Training via Knowledge Transfer. (73%)Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network. (64%)Huan Hua; Jun Yan; Xi Fang; Weiquan Huang; Huilin Yin; Wancheng Ge
Towards Robust Recommender Systems via Triple Cooperative Defense. (61%)Qingyang Wang; Defu Lian; Chenwang Wu; Enhong Chen
Towards Formal Approximated Minimal Explanations of Neural Networks. (13%)Shahaf Bassan; Guy Katz
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification. (13%)Yulin Zhu; Liang Tong; Kai Zhou
A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks. (3%)M. Kuzlu; F. O. Catak; S. Sarp; U. Cali; O. Gueler
Robustness of Locally Differentially Private Graph Analysis Against Poisoning. (1%)Jacob Imola; Amrita Roy Chowdhury; Kamalika Chaudhuri
2022-10-24
Ares: A System-Oriented Wargame Framework for Adversarial ML. (99%)Farhan Ahmed; Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
SpacePhish: The Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning. (99%)Giovanni Apruzzese; Mauro Conti; Ying Yuan
Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs. (96%)Haibin Zheng; Haiyang Xiong; Jinyin Chen; Haonan Ma; Guohan Huang
On the Robustness of Dataset Inference. (88%)Sebastian Szyller; Rui Zhang; Jian Liu; N. Asokan
Flexible Android Malware Detection Model based on Generative Adversarial Networks with Code Tensor. (16%)Zhao Yang; Fengyang Deng; Linxi Han
Revisiting Sparse Convolutional Model for Visual Recognition. (11%)Xili Dai; Mingyang Li; Pengyuan Zhai; Shengbang Tong; Xingjian Gao; Shao-Lun Huang; Zhihui Zhu; Chong You; Yi Ma
2022-10-23
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. (68%)Kaiyuan Zhang; Guanhong Tao; Qiuling Xu; Siyuan Cheng; Shengwei An; Yingqi Liu; Shiwei Feng; Guangyu Shen; Pin-Yu Chen; Shiqing Ma; Xiangyu Zhang
Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future. (45%)Guo-Jun Qi; Mubarak Shah
2022-10-22
ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation. (99%)Fan Yin; Yao Li; Cho-Jui Hsieh; Kai-Wei Chang
Hindering Adversarial Attacks with Implicit Neural Representations. (92%)Andrei A. Rusu; Dan A. Calian; Sven Gowal; Raia Hadsell
GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections. (81%)Junyuan Fang; Haixian Wen; Jiajing Wu; Qi Xuan; Zibin Zheng; Chi K. Tse
Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games. (26%)Maria-Florina Balcan; Rattana Pukdee; Pradeep Ravikumar; Hongyang Zhang
Precisely the Point: Adversarial Augmentations for Faithful and Informative Text Generation. (4%)Wenhao Wu; Wei Li; Jiachen Liu; Xinyan Xiao; Sujian Li; Yajuan Lyu
2022-10-21
Evolution of Neural Tangent Kernels under Benign and Adversarial Training. (99%)Noel Loo; Ramin Hasani; Alexander Amini; Daniela Rus
The Dark Side of AutoML: Towards Architectural Backdoor Search. (68%)Ren Pang; Changjiang Li; Zhaohan Xi; Shouling Ji; Ting Wang
Diffusion Visual Counterfactual Explanations. (10%)Maximilian Augustin; Valentyn Boreiko; Francesco Croce; Matthias Hein
TCAB: A Large-Scale Text Classification Attack Benchmark. (10%)Kalyani Asthana; Zhouhang Xie; Wencong You; Adam Noack; Jonathan Brophy; Sameer Singh; Daniel Lowd
A critical review of cyber-physical security for building automation systems. (2%)Guowen Li; Lingyu Ren; Yangyang Fu; Zhiyao Yang; Veronica Adetola; Jin Wen; Qi Zhu; Teresa Wu; K. Selcuk Candan; Zheng O'Neill
Extracted BERT Model Leaks More Information than You Think! (1%)Xuanli He; Chen Chen; Lingjuan Lyu; Qiongkai Xu
2022-10-20
Identifying Human Strategies for Generating Word-Level Adversarial Examples. (98%)Maximilian Mozes; Bennett Kleinberg; Lewis D. Griffin
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks. (98%)Jiyang Guan; Jian Liang; Ran He
Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models. (98%)Hannah Chen; Yangfeng Ji; David Evans
Learning Sample Reweighting for Accuracy and Adversarial Robustness. (93%)Chester Holtz; Tsui-Wei Weng; Gal Mishne
Similarity of Neural Architectures Based on Input Gradient Transferability. (86%)Jaehui Hwang; Dongyoon Han; Byeongho Heo; Song Park; Sanghyuk Chun; Jong-Seok Lee
New data poison attacks on machine learning classifiers for mobile exfiltration. (80%)Miguel A. Ramirez; Sangyoung Yoon; Ernesto Damiani; Hussam Al Hamadi; Claudio Agostino Ardagna; Nicola Bena; Young-Ji Byon; Tae-Yeon Kim; Chung-Suk Cho; Chan Yeob Yeun
Attacking Motion Estimation with Adversarial Snow. (16%)Jenny Schmalfuss; Lukas Mehl; Andrés Bruhn
How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers. (13%)Guangsheng Zhang; Bo Liu; Huan Tian; Tianqing Zhu; Ming Ding; Wanlei Zhou
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario. (4%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Enrique Tomás Martínez Beltrán; Daniel Demeter; Gérôme Bovet; Gregorio Martínez Pérez; Burkhard Stiller
Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning. (3%)Xiaoyi Chen; Baisong Xin; Shengfang Zhai; Shiqing Ma; Qingni Shen; Zhonghai Wu
LOT: Layer-wise Orthogonal Training on Improving $\ell_2$ Certified Robustness. (3%)Xiaojun Xu; Linyi Li; Bo Li
2022-10-19
Learning Transferable Adversarial Robust Representations via Multi-view Consistency. (99%)Minseon Kim; Hyeonjeong Ha; Dong Bok Lee; Sung Ju Hwang
Effective Targeted Attacks for Adversarial Self-Supervised Learning. (99%)Minseon Kim; Hyeonjeong Ha; Sooel Son; Sung Ju Hwang
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis. (83%)Ruinan Jin; Xiaoxiao Li
Chaos Theory and Adversarial Robustness. (73%)Jonathan S. Kent
Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey. (69%)Hui Cao; Wenlong Zou; Yinkun Wang; Ting Song; Mengjun Liu
Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP. (64%)Yangyi Chen; Hongcheng Gao; Ganqu Cui; Fanchao Qi; Longtao Huang; Zhiyuan Liu; Maosong Sun
Model-Free Prediction of Adversarial Drop Points in 3D Point Clouds. (54%)Hanieh Naderi; Chinthaka Dinesh; Ivan V. Bajic; Shohreh Kasaei
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. (41%)Xiaoyu Cao; Jinyuan Jia; Zaixi Zhang; Neil Zhenqiang Gong
Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning. (16%)Ruihan Wu; Xiangyu Chen; Chuan Guo; Kilian Q. Weinberger
Variational Model Perturbation for Source-Free Domain Adaptation. (1%)Mengmeng Jing; Xiantong Zhen; Jingjing Li; Cees G. M. Snoek
2022-10-18
Scaling Adversarial Training to Large Perturbation Bounds. (98%)Sravanti Addepalli; Samyak Jain; Gaurang Sriramanan; R. Venkatesh Babu
Not All Poisons are Created Equal: Robust Training against Data Poisoning. (97%)Yu Yang; Tian Yu Liu; Baharan Mirzasoleiman
ROSE: Robust Selective Fine-tuning for Pre-trained Language Models. (73%)Lan Jiang; Hao Zhou; Yankai Lin; Peng Li; Jie Zhou; Rui Jiang
Analysis of Master Vein Attacks on Finger Vein Recognition Systems. (56%)Huy H. Nguyen; Trung-Nghia Le; Junichi Yamagishi; Isao Echizen
Training set cleansing of backdoor poisoning by self-supervised representation learning. (56%)H. Wang; S. Karami; O. Dia; H. Ritter; E. Emamjomeh-Zadeh; J. Chen; Z. Xiang; D. J. Miller; G. Kesidis
On the Adversarial Robustness of Mixture of Experts. (13%)Joan Puigcerver; Rodolphe Jenatton; Carlos Riquelme; Pranjal Awasthi; Srinadh Bhojanapalli
Transferable Unlearnable Examples. (8%)Jie Ren; Han Xu; Yuxuan Wan; Xingjun Ma; Lichao Sun; Jiliang Tang
Automatic Detection of Fake Key Attacks in Secure Messaging. (8%)Tarun Kumar Yadav; Devashish Gosain; Amir Herzberg; Daniel Zappala; Kent Seamons
Improving Adversarial Robustness by Contrastive Guided Diffusion Process. (2%)Yidong Ouyang; Liyan Xie; Guang Cheng
2022-10-17
Towards Generating Adversarial Examples on Mixed-type Data. (99%)Han Xu; Menghai Pan; Zhimeng Jiang; Huiyuan Chen; Xiaoting Li; Mahashweta Das; Hao Yang
Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors. (99%)Jialiang Sun; Tingsong Jiang; Wen Yao; Donghua Wang; Xiaoqian Chen
Probabilistic Categorical Adversarial Attack & Adversarial Training. (99%)Pengfei He; Han Xu; Jie Ren; Yuxuan Wan; Zitao Liu; Jiliang Tang
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class. (96%)Khoa D. Doan; Yingjie Lao; Ping Li
DE-CROP: Data-efficient Certified Robustness for Pretrained Classifiers. (87%)Gaurav Kumar Nayak; Ruchit Rawal; Anirban Chakraborty
Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations. (78%)Julia El Zini; Mariette Awad
Towards Fair Classification against Poisoning Attacks. (76%)Han Xu; Xiaorui Liu; Yuxuan Wan; Jiliang Tang
Deepfake Text Detection: Limitations and Opportunities. (41%)Jiameng Pu; Zain Sarwar; Sifat Muhammad Abdullah; Abdullah Rehman; Yoonjin Kim; Parantapa Bhattacharya; Mobin Javed; Bimal Viswanath
You Can't See Me: Physical Removal Attacks on LiDAR-based Autonomous Vehicles Driving Frameworks. (15%)Yulong Cao; S. Hrushikesh Bhupathiraju; Pirouz Naghavi; Takeshi Sugawara; Z. Morley Mao; Sara Rampazzi
Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models. (9%)Zhiyuan Zhang; Lingjuan Lyu; Xingjun Ma; Chenguang Wang; Xu Sun
Understanding CNN Fragility When Learning With Imbalanced Data. (1%)Damien Dablain; Kristen N. Jacobson; Colin Bellinger; Mark Roberts; Nitesh Chawla
2022-10-16
Object-Attentional Untargeted Adversarial Attack. (99%)Chao Zhou; Yuan-Gen Wang; Guopu Zhu
Nowhere to Hide: A Lightweight Unsupervised Detector against Adversarial Examples. (99%)Hui Liu; Bo Zhao; Kehuan Zhang; Peng Liu
ODG-Q: Robust Quantization via Online Domain Generalization. (83%)Chaofan Tao; Ngai Wong
Interpretable Machine Learning for Detection and Classification of Ransomware Families Based on API Calls. (1%)Rawshan Ara Mowri; Madhuri Siddula; Kaushik Roy
2022-10-15
RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging. (2%)Ajay Jaiswal; Kumar Ashutosh; Justin F Rousseau; Yifan Peng; Zhangyang Wang; Ying Ding
2022-10-14
Dynamics-aware Adversarial Attack of Adaptive Neural Networks. (89%)An Tao; Yueqi Duan; Yingqi Wang; Jiwen Lu; Jie Zhou
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture. (87%)Yichuan Mo; Dongxian Wu; Yifei Wang; Yiwen Guo; Yisen Wang
Is Face Recognition Safe from Realizable Attacks? (84%)Sanjay Saha; Terence Sim
Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks. (76%)Sishuo Chen; Wenkai Yang; Zhiyuan Zhang; Xiaohan Bi; Xu Sun
Close the Gate: Detecting Backdoored Models in Federated Learning based on Client-Side Deep Layer Output Analysis. (67%)Phillip Rieger (Technical University Darmstadt); Torsten Krauß (University of Würzburg); Markus Miettinen (Technical University Darmstadt); Alexandra Dmitrienko (University of Würzburg); Ahmad-Reza Sadeghi (Technical University Darmstadt)
2022-10-13
Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition. (99%)Shuai Jia; Bangjie Yin; Taiping Yao; Shouhong Ding; Chunhua Shen; Xiaokang Yang; Chao Ma
AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient. (99%)Farzad Nikfam; Alberto Marchisio; Maurizio Martina; Muhammad Shafique
Demystifying Self-supervised Trojan Attacks. (95%)Changjiang Li; Ren Pang; Zhaohan Xi; Tianyu Du; Shouling Ji; Yuan Yao; Ting Wang
Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors. (81%)Qixun Wang; Yifei Wang; Hong Zhu; Yisen Wang
Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation. (13%)Zhouxing Shi; Yihan Wang; Huan Zhang; Zico Kolter; Cho-Jui Hsieh
Large-Scale Open-Set Classification Protocols for ImageNet. (2%)Jesus Andres Palechor Anacona; Annesha Bhoumik; Manuel Günther
SoK: How Not to Architect Your Next-Generation TEE Malware? (1%)Kubilay Ahmet Küçük; Steve Moyle; Andrew Martin; Alexandru Mereacre; Nicholas Allott
Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning. (1%)Peng Ye; Zhifeng Jiang; Wei Wang; Bo Li; Baochun Li
Characterizing the Influence of Graph Elements. (1%)Zizhang Chen; Peizhao Li; Hongfu Liu; Pengyu Hong
2022-10-12
A Game Theoretical vulnerability analysis of Adversarial Attack. (99%)Khondker Fariha Hossain; Alireza Tavakkoli; Shamik Sengupta
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation. (99%)Zeyu Qin; Yanbo Fan; Yi Liu; Li Shen; Yong Zhang; Jue Wang; Baoyuan Wu
Visual Prompting for Adversarial Robustness. (99%)Aochuan Chen; Peter Lorenz; Yuguang Yao; Pin-Yu Chen; Sijia Liu
Robust Models are less Over-Confident. (96%)Julia Grabinski; Paul Gavrikov; Janis Keuper; Margret Keuper
Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity. (86%)Andrew C. Cullen; Paul Montague; Shijie Liu; Sarah M. Erfani; Benjamin I. P. Rubinstein
Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning. (82%)Yongyuan Liang; Yanchao Sun; Ruijie Zheng; Furong Huang
COLLIDER: A Robust Training Framework for Backdoor Data. (81%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. (76%)Haotao Wang; Junyuan Hong; Aston Zhang; Jiayu Zhou; Zhangyang Wang
Few-shot Backdoor Attacks via Neural Tangent Kernels. (62%)Jonathan Hayase; Sewoong Oh
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? (9%)Yi Zeng; Minzhou Pan; Himanshu Jahagirdar; Ming Jin; Lingjuan Lyu; Ruoxi Jia
Understanding Impacts of Task Similarity on Backdoor Attack and Detection. (2%)Di Tang; Rui Zhu; XiaoFeng Wang; Haixu Tang; Yi Chen
When are Local Queries Useful for Robust Learning? (1%)Pascale Gourdeau; Varun Kanade; Marta Kwiatkowska; James Worrell
2022-10-11
What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? (99%)Nikolaos Tsilivis; Julia Kempe
Stable and Efficient Adversarial Training through Local Linearization. (91%)Zhuorong Li; Daiwei Yu
RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks. (86%)Alberto Marchisio; Vojtech Mrazek; Andrea Massa; Beatrice Bussolino; Maurizio Martina; Muhammad Shafique
Adversarial Attack Against Image-Based Localization Neural Networks. (78%)Meir Brand; Itay Naeh; Daniel Teitelman
Detecting Backdoors in Deep Text Classifiers. (76%)You Guo; Jun Wang; Trevor Cohn
Human Body Measurement Estimation with Adversarial Augmentation. (33%)Nataniel Ruiz; Miriam Bellver; Timo Bolkart; Ambuj Arora; Ming C. Lin; Javier Romero; Raja Bala
Curved Representation Space of Vision Transformers. (10%)Juyeop Kim; Junha Park; Songkuk Kim; Jong-Seok Lee
Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity. (1%)William de Vazelhes; Hualin Zhang; Huimin Wu; Xiao-Tong Yuan; Bin Gu
Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach. (1%)Peng Mi; Li Shen; Tianhe Ren; Yiyi Zhou; Xiaoshuai Sun; Rongrong Ji; Dacheng Tao
2022-10-10
Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization. (92%)Ziquan Liu; Antoni B. Chan
Revisiting adapters with adversarial training. (88%)Sylvestre-Alvise Rebuffi; Francesco Croce; Sven Gowal
Universal Adversarial Perturbations: Efficiency on a small image dataset. (81%)Waris Radji (ENSEIRB-MATMECA, UB)
Certified Training: Small Boxes are All You Need. (22%)Mark Niklas Müller; Franziska Eckert; Marc Fischer; Martin Vechev
Denoising Masked AutoEncoders Help Robust Classification. (1%)Quanlin Wu; Hang Ye; Yuntian Gu; Huishuai Zhang; Liwei Wang; Di He
2022-10-09
Pruning Adversarially Robust Neural Networks without Adversarial Examples. (99%)Tong Jian; Zifeng Wang; Yanzhi Wang; Jennifer Dy; Stratis Ioannidis
Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective. (99%)Yao Zhu; Yuefeng Chen; Xiaodan Li; Kejiang Chen; Yuan He; Xiang Tian; Bolun Zheng; Yaowu Chen; Qingming Huang
Online Training Through Time for Spiking Neural Networks. (1%)Mingqing Xiao; Qingyan Meng; Zongpeng Zhang; Di He; Zhouchen Lin
2022-10-08
FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems. (99%)Jiahui Chen; Yi Zhao; Qi Li; Xuewei Feng; Ke Xu
Symmetry Defense Against CNN Adversarial Perturbation Attacks. (99%)Blerta Lindqvist
Robustness of Unsupervised Representation Learning without Labels. (54%)Aleksandar Petrov; Marta Kwiatkowska
2022-10-07
Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs. (99%)Prashant Pandey; Aleti Vardhan; Mustafa Chasmai; Tanuj Sur; Brejesh Lall
Pre-trained Adversarial Perturbations. (99%)Yuanhao Ban; Yinpeng Dong
ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints. (93%)Yinpeng Dong; Shouwei Ruan; Hang Su; Caixin Kang; Xingxing Wei; Jun Zhu
Game-Theoretic Understanding of Misclassification. (47%)Kosuke Sumiyasu; Kazuhiko Kawamoto; Hiroshi Kera
A2: Efficient Automated Attacker for Boosting Adversarial Training. (41%)Zhuoer Xu; Guanghui Zhu; Changhua Meng; Shiwen Cui; Zhenzhe Ying; Weiqiang Wang; Ming GU; Yihua Huang
NMTSloth: Understanding and Testing Efficiency Degradation of Neural Machine Translation Systems. (13%)Simin Chen; Cong Liu; Mirazul Haque; Zihe Song; Wei Yang
Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets. (9%)Chen Gong; Zhou Yang; Yunpeng Bai; Junda He; Jieke Shi; Arunesh Sinha; Bowen Xu; Xinwen Hou; Guoliang Fan; David Lo
A Wolf in Sheep's Clothing: Spreading Deadly Pathogens Under the Disguise of Popular Music. (2%)Anomadarshi Barua; Yonatan Gizachew Achamyeleh; Mohammad Abdullah Al Faruque
Improving Fine-Grain Segmentation via Interpretable Modifications: A Case Study in Fossil Segmentation. (1%)Indu Panigrahi; Ryan Manzuk; Adam Maloof; Ruth Fong
2022-10-06
Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. (99%)Chawin Sitawarin; Florian Tramèr; Nicholas Carlini
Enhancing Code Classification by Mixup-Based Data Augmentation. (96%)Zeming Dong; Qiang Hu; Yuejun Guo; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
Deep Reinforcement Learning based Evasion Generative Adversarial Network for Botnet Detection. (92%)Rizwan Hamid Randhawa; Nauman Aslam; Mohammad Alauthman; Muhammad Khalid; Husnain Rafiq
On Optimal Learning Under Targeted Data Poisoning. (82%)Steve Hanneke; Amin Karbasi; Mohammad Mahmoody; Idan Mehalel; Shay Moran
Towards Out-of-Distribution Adversarial Robustness. (73%)Adam Ibrahim; Charles Guille-Escuret; Ioannis Mitliagkas; Irina Rish; David Krueger; Pouya Bashivan
InferES : A Natural Language Inference Corpus for Spanish Featuring Negation-Based Contrastive and Adversarial Examples. (61%)Venelin Kovatchev; Mariona Taulé
Unsupervised Domain Adaptation for COVID-19 Information Service with Contrastive Adversarial Domain Mixup. (41%)Huimin Zeng; Zhenrui Yue; Ziyi Kou; Lanyu Shang; Yang Zhang; Dong Wang
Synthetic Dataset Generation for Privacy-Preserving Machine Learning. (2%)Efstathia Soufleri; Gobinda Saha; Kaushik Roy
Enhancing Mixup-Based Graph Learning for Language Processing via Hybrid Pooling. (1%)Zeming Dong; Qiang Hu; Yuejun Guo; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
Bad Citrus: Reducing Adversarial Costs with Model Distances. (1%)Giorgio Severi; Will Pearce; Alina Oprea
2022-10-05
Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks. (99%)Shengming Yuan; Qilong Zhang; Lianli Gao; Yaya Cheng; Jingkuan Song
Dynamic Stochastic Ensemble with Adversarial Robust Lottery Ticket Subnetworks. (98%)Qi Peng; Wenlin Liu; Ruoxi Qin; Libin Hou; Bin Yan; Linyuan Wang
On Adversarial Robustness of Deep Image Deblurring. (83%)Kanchana Vaishnavi Gandikota; Paramanand Chandramouli; Michael Moeller
A Closer Look at Robustness to L-infinity and Spatial Perturbations and their Composition. (81%)Luke Rowe; Benjamin Thérien; Krzysztof Czarnecki; Hongyang Zhang
Jitter Does Matter: Adapting Gaze Estimation to New Domains. (78%)Ruicong Liu; Yiwei Bao; Mingjie Xu; Haofei Wang; Yunfei Liu; Feng Lu
Image Masking for Robust Self-Supervised Monocular Depth Estimation. (38%)Hemang Chawla; Kishaan Jeeveswaran; Elahe Arani; Bahram Zonooz
Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations. (38%)Jialing Liao; Zheng Chen; Erik G. Larsson
2022-10-04
Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective. (97%)Bohang Zhang; Du Jiang; Di He; Liwei Wang
Robust Fair Clustering: A Novel Fairness Attack and Defense Framework. (93%)Anshuman Chhabra; Peizhao Li; Prasant Mohapatra; Hongfu Liu
A Study on the Efficiency and Generalization of Light Hybrid Retrievers. (86%)Man Luo; Shashank Jain; Anchit Gupta; Arash Einolghozati; Barlas Oguz; Debojeet Chatterjee; Xilun Chen; Chitta Baral; Peyman Heidari
Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models. (81%)Fan Liu; Hao Liu; Wenzhao Jiang
Invariant Aggregator for Defending Federated Backdoor Attacks. (80%)Xiaoyang Wang; Dimitrios Dimitriadis; Sanmi Koyejo; Shruti Tople
On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses. (75%)Anshuman Chhabra; Ashwin Sekhari; Prasant Mohapatra
Robustness Certification of Visual Perception Models via Camera Motion Smoothing. (70%)Hanjiang Hu; Zuxin Liu; Linyi Li; Jiacheng Zhu; Ding Zhao
Backdoor Attacks in the Supply Chain of Masked Image Modeling. (68%)Xinyue Shen; Xinlei He; Zheng Li; Yun Shen; Michael Backes; Yang Zhang
CADet: Fully Self-Supervised Anomaly Detection With Contrastive Learning. (67%)Charles Guille-Escuret; Pau Rodriguez; David Vazquez; Ioannis Mitliagkas; Joao Monteiro
2022-10-03
MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. (99%)Jinyuan Jia; Wenjie Qu; Neil Zhenqiang Gong
Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection. (97%)Xuanjun Chen; Haibin Wu; Helen Meng; Hung-yi Lee; Jyh-Shing Roger Jang
Stability Analysis and Generalization Bounds of Adversarial Training. (96%)Jiancong Xiao; Yanbo Fan; Ruoyu Sun; Jue Wang; Zhi-Quan Luo
On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks. (92%)Huimin Zeng; Zhenrui Yue; Yang Zhang; Ziyi Kou; Lanyu Shang; Dong Wang
Decompiling x86 Deep Neural Network Executables. (83%)Zhibo Liu; Yuanyuan Yuan; Shuai Wang; Xiaofei Xie; Lei Ma
Strength-Adaptive Adversarial Training. (80%)Chaojian Yu; Dawei Zhou; Li Shen; Jun Yu; Bo Han; Mingming Gong; Nannan Wang; Tongliang Liu
ASGNN: Graph Neural Networks with Adaptive Structure. (68%)Zepeng Zhang; Songtao Lu; Zengfeng Huang; Ziping Zhao
UnGANable: Defending Against GAN-based Face Manipulation. (2%)Zheng Li; Ning Yu; Ahmed Salem; Michael Backes; Mario Fritz; Yang Zhang
2022-10-02
Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis. (99%)Jiancong Xiao; Zeyu Qin; Yanbo Fan; Baoyuan Wu; Jue Wang; Zhi-Quan Luo
Understanding Adversarial Robustness Against On-manifold Adversarial Examples. (99%)Jiancong Xiao; Liusha Yang; Yanbo Fan; Jue Wang; Zhi-Quan Luo
FLCert: Provably Secure Federated Learning against Poisoning Attacks. (74%)Xiaoyu Cao; Zaixi Zhang; Jinyuan Jia; Neil Zhenqiang Gong
Optimization for Robustness Evaluation beyond $\ell_p$ Metrics. (16%)Hengyue Liang; Buyun Liang; Ying Cui; Tim Mitchell; Ju Sun
Automated Security Analysis of Exposure Notification Systems. (1%)Kevin Morio; Ilkan Esiyok; Dennis Jackson; Robert Künnemann
2022-10-01
DeltaBound Attack: Efficient decision-based attack in low queries regime. (96%)Lorenzo Rossi
Adversarial Attacks on Transformers-Based Malware Detectors. (91%)Yash Jakhotiya; Heramb Patil; Jugal Rawlani; Dr. Sunil B. Mane
Voice Spoofing Countermeasures: Taxonomy, State-of-the-art, experimental analysis of generalizability, open challenges, and the way forward. (5%)Awais Khan; Khalid Mahmood Malik; James Ryan; Mikul Saravanan
2022-09-30
Your Out-of-Distribution Detection Method is Not Robust! (99%)Mohammad Azizmalayeri; Arshia Soltani Moakhar; Arman Zarei; Reihaneh Zohrabi; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
Learning Robust Kernel Ensembles with Kernel Average Pooling. (99%)Pouya Bashivan; Adam Ibrahim; Amirozhan Dehghani; Yifei Ren
Adversarial Robustness of Representation Learning for Knowledge Graphs. (95%)Peru Bhardwaj
Hiding Visual Information via Obfuscating Adversarial Perturbations. (92%)Zhigang Su; Dawei Zhou; Nannan Wang; Decheng Li; Zhen Wang; Xinbo Gao
On the tightness of linear relaxation based robustness certification methods. (78%)Cheng Tang
Data Poisoning Attacks Against Multimodal Encoders. (73%)Ziqing Yang; Xinlei He; Zheng Li; Michael Backes; Mathias Humbert; Pascal Berrang; Yang Zhang
ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks. (70%)Tim Clifford; Ilia Shumailov; Yiren Zhao; Ross Anderson; Robert Mullins
2022-09-29
Physical Adversarial Attack meets Computer Vision: A Decade Survey. (99%)Hui Wei; Hao Tang; Xuemei Jia; Zhixiang Wang; Hanxun Yu; Zhubo Li; Shin'ichi Satoh; Luc Van Gool; Zheng Wang
Towards Lightweight Black-Box Attacks against Deep Neural Networks. (99%)Chenghao Sun; Yonggang Zhang; Chaoqun Wan; Qizhou Wang; Ya Li; Tongliang Liu; Bo Han; Xinmei Tian
Generalizability of Adversarial Robustness Under Distribution Shifts. (83%)Kumail Alhamoud; Hasan Abed Al Kader Hammoud; Motasem Alfarra; Bernard Ghanem
Digital and Physical Face Attacks: Reviewing and One Step Further. (2%)Chenqi Kong; Shiqi Wang; Haoliang Li
Chameleon Cache: Approximating Fully Associative Caches with Random Replacement to Prevent Contention-Based Cache Attacks. (1%)Thomas Unterluggauer; Austin Harris; Scott Constable; Fangfei Liu; Carlos Rozas
2022-09-28
A Survey on Physical Adversarial Attack in Computer Vision. (99%)Donghua Wang; Wen Yao; Tingsong Jiang; Guijian Tang; Xiaoqian Chen
Exploring the Relationship between Architecture and Adversarially Robust Generalization. (99%)Aishan Liu; Shiyu Tang; Siyuan Liang; Ruihao Gong; Boxi Wu; Xianglong Liu; Dacheng Tao
A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks. (67%)Kevin Hector; Mathieu Dumont; Pierre-Alain Moellic; Jean-Max Dutertre
Supervised Contrastive Learning as Multi-Objective Optimization for Fine-Tuning Large Pre-trained Language Models. (47%)Youness Moukafih; Mounir Ghogho; Kamel Smaili
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach. (31%)Marco Anisetti; Claudio A. Ardagna; Alessandro Balestrucci; Nicola Bena; Ernesto Damiani; Chan Yeob Yeun
CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention. (1%)Ziyu Guo; Renrui Zhang; Longtian Qiu; Xianzheng Ma; Xupeng Miao; Xuming He; Bin Cui
Improving alignment of dialogue agents via targeted human judgements. (1%)Amelia Glaese; Nat McAleese; Maja Trębacz; John Aslanides; Vlad Firoiu; Timo Ewalds; Maribeth Rauh; Laura Weidinger; Martin Chadwick; Phoebe Thacker; Lucy Campbell-Gillingham; Jonathan Uesato; Po-Sen Huang; Ramona Comanescu; Fan Yang; Abigail See; Sumanth Dathathri; Rory Greig; Charlie Chen; Doug Fritz; Jaume Sanchez Elias; Richard Green; Soňa Mokrá; Nicholas Fernando; Boxi Wu; Rachel Foley; Susannah Young; Iason Gabriel; William Isaac; John Mellor; Demis Hassabis; Koray Kavukcuoglu; Lisa Anne Hendricks; Geoffrey Irving
2022-09-27
Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection. (74%)Svetlana Pavlitskaya; Jonas Hendl; Sebastian Kleim; Leopold Müller; Fabian Wylczoch; J. Marius Zöllner
Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training. (33%)Saehyung Lee; Hyungyu Lee
Attacking Compressed Vision Transformers. (33%)Swapnil Parekh; Devansh Shah; Pratyush Shukla
Mitigating Attacks on Artificial Intelligence-based Spectrum Sensing for Cellular Network Signals. (8%)Ferhat Ozgur Catak; Murat Kuzlu; Salih Sarp; Evren Catak; Umit Cali
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. (5%)Yiming Li; Yang Bai; Yong Jiang; Yong Yang; Shu-Tao Xia; Bo Li
Reconstruction-guided attention improves the robustness and shape processing of neural networks. (2%)Seoyoung Ahn; Hossein Adeli; Gregory J. Zelinsky
A Learning-based Honeypot Game for Collaborative Defense in UAV Networks. (1%)Yuntao Wang; Zhou Su; Abderrahim Benslimane; Qichao Xu; Minghui Dai; Ruidong Li
Stability Via Adversarial Training of Neural Network Stochastic Control of Mean-Field Type. (1%)Julian Barreiro-Gomez; Salah Eddine Choutri; Boualem Djehiche
Measuring Overfitting in Convolutional Neural Networks using Adversarial Perturbations and Label Noise. (1%)Svetlana Pavlitskaya; Joël Oswald; J. Marius Zöllner
2022-09-26
FG-UAP: Feature-Gathering Universal Adversarial Perturbation. (99%)Zhixing Ye; Xinwen Cheng; Xiaolin Huang
Activation Learning by Local Competitions. (64%)Hongchao Zhou
Multi-Task Adversarial Training Algorithm for Multi-Speaker Neural Text-to-Speech. (1%)Yusuke Nakai; Yuki Saito; Kenta Udagawa; Hiroshi Saruwatari
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification. (1%)Adrien Bennetot; Gianni Franchi; Javier Del Ser; Raja Chatila; Natalia Diaz-Rodriguez
2022-09-25
SPRITZ-1.5C: Employing Deep Ensemble Learning for Improving the Security of Computer Networks against Adversarial Attacks. (81%)Ehsan Nowroozi; Mohammadreza Mohammadi; Erkay Savas; Mauro Conti; Yassine Mekdad
2022-09-24
Approximate better, Attack stronger: Adversarial Example Generation via Asymptotically Gaussian Mixture Distribution. (99%)Zhengwei Fang; Rui Wang; Tao Huang; Liping Jing
2022-09-23
The "Beatrix'' Resurrections: Robust Backdoor Detection via Gram Matrices. (13%)Wanlun Ma; Derui Wang; Ruoxi Sun; Minhui Xue; Sheng Wen; Yang Xiang
2022-09-22
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models. (50%)Sohaib Ahmad; Benjamin Fuller; Kaleel Mahmood
2022-09-21
Fair Robust Active Learning by Joint Inconsistency. (99%)Tsung-Han Wu; Shang-Tse Chen; Winston H. Hsu
Toy Models of Superposition. (45%)Nelson Elhage; Tristan Hume; Catherine Olsson; Nicholas Schiefer; Tom Henighan; Shauna Kravec; Zac Hatfield-Dodds; Robert Lasenby; Dawn Drain; Carol Chen; Roger Grosse; Sam McCandlish; Jared Kaplan; Dario Amodei; Martin Wattenberg; Christopher Olah
DARTSRepair: Core-failure-set Guided DARTS for Network Robustness to Common Corruptions. (13%)Xuhong Ren; Jianlang Chen; Felix Juefei-Xu; Wanli Xue; Qing Guo; Lei Ma; Jianjun Zhao; Shengyong Chen
Fairness Reprogramming. (1%)Guanhua Zhang; Yihua Zhang; Yang Zhang; Wenqi Fan; Qing Li; Sijia Liu; Shiyu Chang
2022-09-20
Understanding Real-world Threats to Deep Learning Models in Android Apps. (99%)Zizhuang Deng; Kai Chen; Guozhu Meng; Xiaodong Zhang; Ke Xu; Yao Cheng
Audit and Improve Robustness of Private Neural Networks on Encrypted Data. (99%)Jiaqi Xue; Lei Xu; Lin Chen; Weidong Shi; Kaidi Xu; Qian Lou
GAMA: Generative Adversarial Multi-Object Scene Attacks. (99%)Abhishek Aich; Calvin-Khang Ta; Akash Gupta; Chengyu Song; Srikanth V. Krishnamurthy; M. Salman Asif; Amit K. Roy-Chowdhury
Sparse Vicious Attacks on Graph Neural Networks. (98%)Giovanni Trappolini; Valentino Maiorca; Silvio Severino; Emanuele Rodolà; Fabrizio Silvestri; Gabriele Tolomei
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks. (98%)Abhishek Aich; Shasha Li; Chengyu Song; M. Salman Asif; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury
Rethinking Data Augmentation in Knowledge Distillation for Object Detection. (68%)Jiawei Liang; Siyuan Liang; Aishan Liu; Mingli Zhu; Danni Yuan; Chenye Xu; Xiaochun Cao
CANflict: Exploiting Peripheral Conflicts for Data-Link Layer Attacks on Automotive Networks. (1%)Alvise de Faveri Tron; Stefano Longari; Michele Carminati; Mario Polino; Stefano Zanero
EM-Fault It Yourself: Building a Replicable EMFI Setup for Desktop and Server Hardware. (1%)Niclas Kühnapfel; Robert Buhren; Hans Niklas Jacob; Thilo Krachenfels; Christian Werling; Jean-Pierre Seifert
2022-09-19
Adversarial Catoptric Light: An Effective, Stealthy and Robust Physical-World Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
Adversarial Color Projection: A Projector-Based Physical Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
2022-09-18
On the Adversarial Transferability of ConvMixer Models. (99%)Ryota Iijima; Miki Tanaka; Isao Echizen; Hitoshi Kiya
AdvDO: Realistic Adversarial Attacks for Trajectory Prediction. (96%)Yulong Cao; Chaowei Xiao; Anima Anandkumar; Danfei Xu; Marco Pavone
Distribution inference risks: Identifying and mitigating sources of leakage. (1%)Valentin Hartmann; Léo Meynent; Maxime Peyrard; Dimitrios Dimitriadis; Shruti Tople; Robert West
2022-09-17
Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models. (99%)Raphael Olivier; Hadi Abdullah; Bhiksha Raj
Characterizing Internal Evasion Attacks in Federated Learning. (98%)Taejin Kim; Shubhranshu Singh; Nikhil Madaan; Carlee Joe-Wong
A study on the deviations in performance of FNNs and CNNs in the realm of grayscale adversarial images. (4%)Durga Shree Nagabushanam; Steve Mathew; Chiranji Lal Chowdhary
2022-09-16
Robust Ensemble Morph Detection with Domain Generalization. (99%)Hossein Kashiani; Shoaib Meraj Sami; Sobhan Soleymani; Nasser M. Nasrabadi
A Large-scale Multiple-objective Method for Black-box Attack against Object Detection. (99%)Siyuan Liang; Longkang Li; Yanbo Fan; Xiaojun Jia; Jingzhi Li; Baoyuan Wu; Xiaochun Cao
Enhance the Visual Representation via Discrete Adversarial Training. (97%)Xiaofeng Mao; Yuefeng Chen; Ranjie Duan; Yao Zhu; Gege Qi; Shaokai Ye; Xiaodan Li; Rong Zhang; Hui Xue
Model Inversion Attacks against Graph Neural Networks. (92%)Zaixi Zhang; Qi Liu; Zhenya Huang; Hao Wang; Chee-Kong Lee; Enhong Chen
PointCAT: Contrastive Adversarial Training for Robust Point Cloud Recognition. (62%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Hang Zhou; Weiming Zhang; Kui Zhang; Gang Hua; Nenghai Yu
Cascading Failures in Power Grids. (33%)Rounak Meyur
Dataset Inference for Self-Supervised Models. (16%)Adam Dziedzic; Haonan Duan; Muhammad Ahmad Kaleem; Nikita Dhawan; Jonas Guan; Yannis Cattan; Franziska Boenisch; Nicolas Papernot
On the Robustness of Graph Neural Diffusion to Topology Perturbations. (15%)Yang Song; Qiyu Kang; Sijie Wang; Zhao Kai; Wee Peng Tay
A Systematic Evaluation of Node Embedding Robustness. (11%)Alexandru Mara; Jefrey Lijffijt; Stephan Günnemann; Bie Tijl De
2022-09-15
Improving Robust Fairness via Balance Adversarial Training. (99%)Chunyu Sun; Chenye Xu; Chengyuan Yao; Siyuan Liang; Yichao Wu; Ding Liang; XiangLong Liu; Aishan Liu
A Light Recipe to Train Robust Vision Transformers. (98%)Edoardo Debenedetti; Vikash Sehwag; Prateek Mittal
Part-Based Models Improve Adversarial Robustness. (92%)Chawin Sitawarin; Kornrapat Pongmala; Yizheng Chen; Nicholas Carlini; David Wagner
Explicit Tradeoffs between Adversarial and Natural Distributional Robustness. (80%)Mazda Moayeri; Kiarash Banihashem; Soheil Feizi
Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization. (80%)Omar Montasser; Steve Hanneke; Nathan Srebro
Defending Root DNS Servers Against DDoS Using Layered Defenses. (15%)A S M Rizvi; Jelena Mirkovic; John Heidemann; Wesley Hardaker; Robert Story
BadRes: Reveal the Backdoors through Residual Connection. (2%)Mingrui He; Tianyu Chen; Haoyi Zhou; Shanghang Zhang; Jianxin Li
Adversarial Cross-View Disentangled Graph Contrastive Learning. (1%)Qianlong Wen; Zhongyu Ouyang; Chunhui Zhang; Yiyue Qian; Yanfang Ye; Chuxu Zhang
Towards Improving Calibration in Object Detection Under Domain Shift. (1%)Muhammad Akhtar Munir; Muhammad Haris Khan; M. Saquib Sarfraz; Mohsen Ali
2022-09-14
Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries. (99%)Alexander Cann; Ian Colbert; Ihab Amer
PointACL:Adversarial Contrastive Learning for Robust Point Clouds Representation under Adversarial Attack. (99%)Junxuan Huang; Yatong An; Lu cheng; Bai Chen; Junsong Yuan; Chunming Qiao
Certified Robustness to Word Substitution Ranking Attack for Neural Ranking Models. (99%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Wei Chen; Yixing Fan; Rijke Maarten de; Xueqi Cheng
Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models. (97%)Jiawei Liu; Yangyang Kang; Di Tang; Kaisong Song; Changlong Sun; Xiaofeng Wang; Wei Lu; Xiaozhong Liu
On the interplay of adversarial robustness and architecture components: patches, convolution and attention. (67%)Francesco Croce; Matthias Hein
M^4I: Multi-modal Models Membership Inference. (54%)Pingyi Hu; Zihan Wang; Ruoxi Sun; Hu Wang; Minhui Xue
Finetuning Pretrained Vision-Language Models with Correlation Information Bottleneck for Robust Visual Question Answering. (12%)Jingjing Jiang; Ziyi Liu; Nanning Zheng
Robust Constrained Reinforcement Learning. (9%)Yue Wang; Fei Miao; Shaofeng Zou
2022-09-13
Adversarial Coreset Selection for Efficient Robust Training. (99%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
TSFool: Crafting High-quality Adversarial Time Series through Multi-objective Optimization to Fool Recurrent Neural Network Classifiers. (99%)Yanyun Wang; Dehui Du; Yuanhao Liu
PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models. (92%)William Hackett; Stefan Trawicki; Zhengxin Yu; Neeraj Suri; Peter Garraghan
Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation. (78%)Maksym Yatsura; Kaspar Sakmann; N. Grace Hua; Matthias Hein; Jan Hendrik Metzen
Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. (68%)Hussain Hussain; Meng Cao; Sandipan Sikdar; Denis Helic; Elisabeth Lex; Markus Strohmaier; Roman Kern
ADMM based Distributed State Observer Design under Sparse Sensor Attacks. (22%)Vinaya Mary Prinse; Rachel Kalpana Kalaimani
A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game. (15%)Ke Ma; Qianqian Xu; Jinshan Zeng; Guorong Li; Xiaochun Cao; Qingming Huang
Defense against Privacy Leakage in Federated Learning. (12%)Jing Wu; Munawar Hayat; Mingyi Zhou; Mehrtash Harandi
Federated Learning based on Defending Against Data Poisoning Attacks in IoT. (1%)Jiayin Li; Wenzhong Guo; Xingshuo Han; Jianping Cai; Ximeng Liu
2022-09-12
Adaptive Perturbation Generation for Multiple Backdoors Detection. (95%)Yuhang Wang; Huafeng Shi; Rui Min; Ruijia Wu; Siyuan Liang; Yichao Wu; Ding Liang; Aishan Liu
CARE: Certifiably Robust Learning with Reasoning via Variational Inference. (75%)Jiawei Zhang; Linyi Li; Ce Zhang; Bo Li
Sample Complexity of an Adversarial Attack on UCB-based Best-arm Identification Policy. (69%)Varsha Pendyala
Boosting Robustness Verification of Semantic Feature Neighborhoods. (54%)Anan Kabaha; Dana Drachsler-Cohen
Semantic-Preserving Adversarial Code Comprehension. (1%)Yiyang Li; Hongqiu Wu; Hai Zhao
Holistic Segmentation. (1%)Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Nassir Navab; Benjamin Busam; Federico Tombari
Class-Level Logit Perturbation. (1%)Mengyang Li; Fengguang Su; Ou Wu; Ji Zhang
2022-09-11
Resisting Deep Learning Models Against Adversarial Attack Transferability via Feature Randomization. (99%)Ehsan Nowroozi; Mohammadreza Mohammadi; Pargol Golmohammadi; Yassine Mekdad; Mauro Conti; Selcuk Uluagac
Generate novel and robust samples from data: accessible sharing without privacy concerns. (5%)David Banh; Alan Huang
2022-09-10
Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense. (99%)Bowen Peng; Bo Peng; Jie Zhou; Jianyue Xie; Li Liu
2022-09-09
The Space of Adversarial Strategies. (99%)Ryan Sheatsley; Blaine Hoak; Eric Pauley; Patrick McDaniel
Defend Data Poisoning Attacks on Voice Authentication. (54%)Ke Li; Cameron Baird; Dan Lin
Robust-by-Design Classification via Unitary-Gradient Neural Networks. (41%)Fabio Brau; Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled Membership Inference. (10%)Hanzhou Wu
Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System. (1%)Xin Li; Yao Qiang; Chengyin Li; Sijia Liu; Dongxiao Zhu
2022-09-08
Incorporating Locality of Images to Generate Targeted Transferable Adversarial Examples. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Evaluating the Security of Aircraft Systems. (92%)Edan Habler; Ron Bitton; Asaf Shabtai
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks. (62%)Chulin Xie; Yunhui Long; Pin-Yu Chen; Qinbin Li; Sanmi Koyejo; Bo Li
A Survey of Recent Advances in Deep Learning Models for Detecting Malware in Desktop and Mobile Platforms. (1%)Pascal Maniriho; Abdun Naser Mahmood; Mohammad Jabed Morshed Chowdhury
FADE: Enabling Large-Scale Federated Adversarial Training on Resource-Constrained Edge Devices. (1%)Minxue Tang; Jianyi Zhang; Mingyuan Ma; Louis DiValentin; Aolin Ding; Amin Hassanzadeh; Hai Li; Yiran Chen
2022-09-07
On the Transferability of Adversarial Examples between Encrypted Models. (99%)Miki Tanaka; Isao Echizen; Hitoshi Kiya
Securing the Spike: On the Transferabilty and Security of Spiking Neural Networks to Adversarial Examples. (99%)Nuo Xu; Kaleel Mahmood; Haowen Fang; Ethan Rathbun; Caiwen Ding; Wujie Wen
Reward Delay Attacks on Deep Reinforcement Learning. (70%)Anindya Sarkar; Jiarui Feng; Yevgeniy Vorobeychik; Christopher Gill; Ning Zhang
Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems. (47%)Sahar Abdelnabi; Mario Fritz
Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. (15%)Wai Man Si; Michael Backes; Jeremy Blackburn; Cristofaro Emiliano De; Gianluca Stringhini; Savvas Zannettou; Yang Zhang
Physics-Guided Adversarial Machine Learning for Aircraft Systems Simulation. (1%)Houssem Ben Braiek; Thomas Reid; Foutse Khomh
Hardware faults that matter: Understanding and Estimating the safety impact of hardware faults on object detection DNNs. (1%)Syed Qutub; Florian Geissler; Yang Peng; Ralf Grafe; Michael Paulitsch; Gereon Hinz; Alois Knoll
MalDetConv: Automated Behaviour-based Malware Detection Framework Based on Natural Language Processing and Deep Learning Techniques. (1%)Pascal Maniriho; Abdun Naser Mahmood; Mohammad Jabed Morshed Chowdhury
2022-09-06
Instance Attack:An Explanation-based Vulnerability Analysis Framework Against DNNs for Malware Detection. (99%)Sun RuiJin; Guo ShiZe; Guo JinHong; Xing ChangYou; Yang LuMing; Guo Xi; Pan ZhiSong
Bag of Tricks for FGSM Adversarial Training. (96%)Zichao Li; Li Liu; Zeyu Wang; Yuyin Zhou; Cihang Xie
Improving Robustness to Out-of-Distribution Data by Frequency-based Augmentation. (82%)Koki Mukai; Soichiro Kumano; Toshihiko Yamasaki
Defending Against Backdoor Attack on Graph Nerual Network by Explainability. (80%)Bingchen Jiang; Zhao Li
MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World. (56%)Hua Ma; Yinshan Li; Yansong Gao; Zhi Zhang; Alsharif Abuadbba; Anmin Fu; Said F. Al-Sarawi; Nepal Surya; Derek Abbott
Multimodal contrastive learning for remote sensing tasks. (1%)Umangi Jain; Alex Wilson; Varun Gulshan
Annealing Optimization for Progressive Learning with Stochastic Approximation. (1%)Christos Mavridis; John Baras
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps. (1%)Alireza Ganjdanesh; Shangqian Gao; Heng Huang
A Survey of Machine Unlearning. (1%)Thanh Tam Nguyen; Thanh Trung Huynh; Phi Le Nguyen; Alan Wee-Chung Liew; Hongzhi Yin; Quoc Viet Hung Nguyen
2022-09-05
Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples. (98%)Hezekiah J. Branch; Jonathan Rodriguez Cefalu; Jeremy McHugh; Leyla Hujer; Aditya Bahl; Daniel del Castillo Iglesias; Ron Heichman; Ramesh Darwishi
White-Box Adversarial Policies in Deep Reinforcement Learning. (98%)Stephen Casper; Taylor Killian; Gabriel Kreiman; Dylan Hadfield-Menell
"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution. (69%)Yuyou Gan; Yuhao Mao; Xuhong Zhang; Shouling Ji; Yuwen Pu; Meng Han; Jianwei Yin; Ting Wang
Adversarial Detection: Attacking Object Detection in Real Time. (64%)Han Wu; Syed Yunas; Sareh Rowlands; Wenjie Ruan; Johan Wahlstrom
PromptAttack: Prompt-based Attack for Language Models via Gradient Search. (16%)Yundi Shi; Piji Li; Changchun Yin; Zhaoyang Han; Lu Zhou; Zhe Liu
Federated Zero-Shot Learning for Visual Recognition. (2%)Zhi Chen; Yadan Luo; Sen Wang; Jingjing Li; Zi Huang
Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training. (2%)Derek Everett; Andre T. Nguyen; Luke E. Richards; Edward Raff
2022-09-04
An Adaptive Black-box Defense against Trojan Attacks (TrojDef). (98%)Guanxiong Liu; Abdallah Khreishah; Fatima Sharadgah; Issa Khalil
Hide & Seek: Seeking the (Un)-Hidden key in Provably-Secure Logic Locking Techniques. (11%)Satwik Patnaik; Nimisha Limaye; Ozgur Sinanoglu
Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles. (1%)Ayoosh Bansal; Simon Yu; Hunmin Kim; Bo Li; Naira Hovakimyan; Marco Caccamo; Lui Sha
2022-09-02
Adversarial Color Film: Effective Physical-World Attack to DNNs. (98%)Chengyin Hu; Weiwen Shi
Impact of Scaled Image on Robustness of Deep Neural Networks. (98%)Chengyin Hu; Weiwen Shi
Property inference attack; Graph neural networks; Privacy attacks and defense; Trustworthy machine learning. (95%)Xiuling Wang; Wendy Hui Wang
Impact of Colour Variation on Robustness of Deep Neural Networks. (92%)Chengyin Hu; Weiwen Shi
Scalable Adversarial Attack Algorithms on Influence Maximization. (68%)Lichao Sun; Xiaobin Rui; Wei Chen
Are Attribute Inference Attacks Just Imputation? (31%)Bargav Jayaraman; David Evans
Explainable AI for Android Malware Detection: Towards Understanding Why the Models Perform So Well? (9%)Yue Liu; Chakkrit Tantithamthavorn; Li Li; Yepang Liu
Revisiting Outer Optimization in Adversarial Training. (5%)Ali Dabouei; Fariborz Taherkhani; Sobhan Soleymani; Nasser M. Nasrabadi
2022-09-01
Adversarial for Social Privacy: A Poisoning Strategy to Degrade User Identity Linkage. (98%)Jiangli Shao; Yongqing Wang; Boshen Shi; Hao Gao; Huawei Shen; Xueqi Cheng
Universal Fourier Attack for Time Series. (12%)Elizabeth Coda; Brad Clymer; Chance DeSmet; Yijing Watkins; Michael Girard
2022-08-31
Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning. (99%)Zhiyuan He; Yijun Yang; Pin-Yu Chen; Qiang Xu; Tsung-Yi Ho
Unrestricted Adversarial Samples Based on Non-semantic Feature Clusters Substitution. (99%)MingWei Zhou; Xiaobing Pei
Membership Inference Attacks by Exploiting Loss Trajectory. (70%)Yiyong Liu; Zhengyu Zhao; Michael Backes; Yang Zhang
Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research. (13%)Zhibo Zhang; Hussam Al Hamadi; Ernesto Damiani; Chan Yeob Yeun; Fatma Taher
Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation. (1%)JoonHo Lee; Gyemin Lee
Vulnerability of Distributed Inverter VAR Control in PV Distributed Energy System. (1%)Bo Tu; Wen-Tai Li; Chau Yuen
MA-RECON: Mask-aware deep-neural-network for robust fast MRI k-space interpolation. (1%)Nitzan Avidan; Moti Freiman
2022-08-30
A Black-Box Attack on Optical Character Recognition Systems. (99%)Samet Bayram; Kenneth Barner
Robustness and invariance properties of image classifiers. (99%)Apostolos Modas
Solving the Capsulation Attack against Backdoor-based Deep Neural Network Watermarks by Reversing Triggers. (1%)Fangqi Li; Shilin Wang; Yun Zhu
Constraining Representations Yields Models That Know What They Don't Know. (1%)Joao Monteiro; Pau Rodriguez; Pierre-Andre Noel; Issam Laradji; David Vazquez
2022-08-29
Towards Adversarial Purification using Denoising AutoEncoders. (99%)Dvij Kalaria; Aritra Hazra; Partha Pratim Chakrabarti
Reducing Certified Regression to Certified Classification for General Poisoning Attacks. (54%)Zayd Hammoudeh; Daniel Lowd
Interpreting Black-box Machine Learning Models for High Dimensional Datasets. (1%)Md. Rezaul Karim; Md. Shajalal; Alex Graß; Till Döhmen; Sisay Adugna Chala; Christian Beecks; Stefan Decker
2022-08-28
Cross-domain Cross-architecture Black-box Attacks on Fine-tuned Models with Transferred Evolutionary Strategies. (99%)Yinghua Zhang; Yangqiu Song; Kun Bai; Qiang Yang
2022-08-27
Adversarial Robustness for Tabular Data through Cost and Utility Awareness. (99%)Klim Kireev; Bogdan Kulynych; Carmela Troncoso
SA: Sliding attack for synthetic speech detection with resistance to clipping and self-splicing. (99%)Deng JiaCheng; Dong Li; Yan Diqun; Wang Rangding; Zeng Jiaming
TrojViT: Trojan Insertion in Vision Transformers. (15%)Mengxin Zheng; Qian Lou; Lei Jiang
Overparameterized (robust) models from computational constraints. (13%)Sanjam Garg; Somesh Jha; Saeed Mahloujifar; Mohammad Mahmoody; Mingyuan Wang
RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems. (1%)Emna Baccour; Aiman Erbad; Amr Mohamed; Mounir Hamdi; Mohsen Guizani
2022-08-26
What Does the Gradient Tell When Attacking the Graph Structure. (69%)Zihan Liu; Ge Wang; Yun Luo; Stan Z. Li
Network-Level Adversaries in Federated Learning. (54%)Giorgio Severi; Matthew Jagielski; Gökberk Yar; Yuxuan Wang; Alina Oprea; Cristina Nita-Rotaru
ATTRITION: Attacking Static Hardware Trojan Detection Techniques Using Reinforcement Learning. (45%)Vasudev Gohil; Hao Guo; Satwik Patnaik; Jeyavijayan Rajendran
Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training. (4%)Zihui Wu; Haichang Gao; Bingqian Zhou; Xiaoyan Guo; Shudong Zhang
2022-08-25
Semantic Preserving Adversarial Attack Generation with Autoencoder and Genetic Algorithm. (99%)Xinyi Wang; Simon Yusuf Enoch; Dong Seong Kim
SNAP: Efficient Extraction of Private Properties with Poisoning. (89%)Harsh Chaudhari; John Abascal; Alina Oprea; Matthew Jagielski; Florian Tramèr; Jonathan Ullman
FuncFooler: A Practical Black-box Attack Against Learning-based Binary Code Similarity Detection Methods. (78%)Lichen Jia; Bowen Tang; Chenggang Wu; Zhe Wang; Zihan Jiang; Yuanming Lai; Yan Kang; Ning Liu; Jingfeng Zhang
Robust Prototypical Few-Shot Organ Segmentation with Regularized Neural-ODEs. (31%)Prashant Pandey; Mustafa Chasmai; Tanuj Sur; Brejesh Lall
Calibrated Selective Classification. (15%)Adam Fisch; Tommi Jaakkola; Regina Barzilay
XDRI Attacks - and - How to Enhance Resilience of Residential Routers. (4%)Philipp Jeitner; Haya Shulman; Lucas Teichmann; Michael Waidner
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning. (1%)Haodong Zhao; Wei Du; Fangqi Li; Peixuan Li; Gongshen Liu
2022-08-24
Attacking Neural Binary Function Detection. (99%)Joshua Bundt; Michael Davinroy; Ioannis Agadakos; Alina Oprea; William Robertson
Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries. (99%)Dongbin Na; Sangwoo Ji; Jong Kim
Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps. (98%)Mohammadreza Amirian; Friedhelm Schwenker; Thilo Stadelmann
A Perturbation Resistant Transformation and Classification System for Deep Neural Networks. (98%)Nathaniel Dean; Dilip Sarkar
Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation. (92%)Qiyuan Chen; Raed Al Kontar; Maher Nouiehed; Jessie Yang; Corey Lester
Bidirectional Contrastive Split Learning for Visual Question Answering. (38%)Yuwei Sun; Hideya Ochiai
2022-08-23
Towards an Awareness of Time Series Anomaly Detection Models' Adversarial Vulnerability. (99%)Shahroz Tariq; Binh M. Le; Simon S. Woo
Adversarial Vulnerability of Temporal Feature Networks for Object Detection. (99%)Svetlana Pavlitskaya; Nikolai Polley; Michael Weber; J. Marius Zöllner
Transferability Ranking of Adversarial Examples. (99%)Mosh Levy; Yuval Elovici; Yisroel Mirsky
Auditing Membership Leakages of Multi-Exit Networks. (76%)Zheng Li; Yiyong Liu; Xinlei He; Ning Yu; Michael Backes; Yang Zhang
A Comprehensive Study of Real-Time Object Detection Networks Across Multiple Domains: A Survey. (13%)Elahe Arani; Shruthi Gowda; Ratnajit Mukherjee; Omar Magdy; Senthilkumar Kathiresan; Bahram Zonooz
Robust DNN Watermarking via Fixed Embedding Weights with Optimized Distribution. (10%)Benedetta Tondi; Andrea Costanzo; Mauro Barni
2022-08-22
Fight Fire With Fire: Reversing Skin Adversarial Examples by Multiscale Diffusive and Denoising Aggregation Mechanism. (99%)Yongwei Wang; Yuan Li; Zhiqi Shen
Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection. (98%)Simin Li; Huangxinxin Xu; Jiakai Wang; Aishan Liu; Fazhi He; Xianglong Liu; Dacheng Tao
Different Spectral Representations in Optimized Artificial Neural Networks and Brains. (93%)Richard C. Gerum; Cassidy Pirlot; Alona Fyshe; Joel Zylberberg
Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models. (87%)Xinlei He; Zheng Li; Weilin Xu; Cory Cornelius; Yang Zhang
BARReL: Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning. (86%)Eugene Bykovets; Yannick Metz; Mennatallah El-Assady; Daniel A. Keim; Joachim M. Buhmann
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN. (62%)Huy Phan; Cong Shi; Yi Xie; Tianfang Zhang; Zhuohang Li; Tianming Zhao; Jian Liu; Yan Wang; Yingying Chen; Bo Yuan
Toward Better Target Representation for Source-Free and Black-Box Domain Adaptation. (31%)Qucheng Peng; Zhengming Ding; Lingjuan Lyu; Lichao Sun; Chen Chen
Optimal Bootstrapping of PoW Blockchains. (1%)Ranvir Rana; Dimitris Karakostas; Sreeram Kannan; Aggelos Kiayias; Pramod Viswanath
2022-08-21
PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition. (99%)Jiachen Sun; Weili Nie; Zhiding Yu; Z. Morley Mao; Chaowei Xiao
Inferring Sensitive Attributes from Model Explanations. (56%)Vasisht Duddu; Antoine Boutet
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning. (10%)Kerem Ozfatura; Emre Ozfatura; Alptekin Kupcu; Deniz Gunduz
MockingBERT: A Method for Retroactively Adding Resilience to NLP Models. (4%)Jan Jezabek; Akash Singh
NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs. (1%)Yijun Tian; Chuxu Zhang; Zhichun Guo; Xiangliang Zhang; Nitesh V. Chawla
A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective. (1%)Chanwoo Park; Sangdoo Yun; Sanghyuk Chun
2022-08-20
Analyzing Adversarial Robustness of Vision Transformers against Spatial and Spectral Attacks. (86%)Gihyun Kim; Jong-Seok Lee
GAIROSCOPE: Injecting Data from Air-Gapped Computers to Nearby Gyroscopes. (33%)Mordechai Guri
Sensor Security: Current Progress, Research Challenges, and Future Roadmap. (10%)Anomadarshi Barua; Mohammad Abdullah Al Faruque
Evaluating Out-of-Distribution Detectors Through Adversarial Generation of Outliers. (5%)Sangwoong Yoon; Jinwon Choi; Yonghyeon Lee; Yung-Kyun Noh; Frank Chongwoo Park
Adversarial contamination of networks in the setting of vertex nomination: a new trimming method. (1%)Sheyda Peyman; Minh Tang; Vince Lyzinski
2022-08-19
Real-Time Robust Video Object Detection System Against Physical-World Adversarial Attacks. (99%)Husheng Han; Xing Hu; Kaidi Xu; Pucheng Dang; Ying Wang; Yongwei Zhao; Zidong Du; Qi Guo; Yanzhi Yang; Tianshi Chen
Gender Bias and Universal Substitution Adversarial Attacks on Grammatical Error Correction Systems for Automated Assessment. (92%)Vyas Raina; Mark Gales
Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models. (76%)Yulong Wang; Minghui Zhao; Shenghong Li; Xin Yuan; Wei Ni
A Novel Plug-and-Play Approach for Adversarially Robust Generalization. (54%)Deepak Maurya; Adarsh Barik; Jean Honorio
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability. (8%)Wei Huang; Xingyu Zhao; Gaojie Jin; Xiaowei Huang
UKP-SQuARE v2 Explainability and Adversarial Attacks for Trustworthy QA. (1%)Rachneet Sachdeva; Haritz Puerto; Tim Baumgärtner; Sewin Tariverdian; Hao Zhang; Kexin Wang; Hossain Shaikh Saadi; Leonardo F. R. Ribeiro; Iryna Gurevych
2022-08-18
Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries. (99%)Manaar Alam; Shubhajit Datta; Debdeep Mukhopadhyay; Arijit Mondal; Partha Pratim Chakrabarti
Enhancing Targeted Attack Transferability via Diversified Weight Pruning. (99%)Hung-Jui Wang; Yu-Yu Wu; Shang-Tse Chen
Enhancing Diffusion-Based Image Synthesis with Robust Classifier Guidance. (45%)Bahjat Kawar; Roy Ganz; Michael Elad
Reverse Engineering of Integrated Circuits: Tools and Techniques. (33%)Abhijitt Dhavlle
DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization. (10%)Anshul Nasery; Sravanti Addepalli; Praneeth Netrapalli; Prateek Jain
Discovering Bugs in Vision Models using Off-the-shelf Image Generation and Captioning. (3%)Olivia Wiles; Isabela Albuquerque; Sven Gowal
Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy. (2%)Wenqiang Ruan; Mingxin Xu; Wenjing Fang; Li Wang; Lei Wang; Weili Han
Profiler: Profile-Based Model to Detect Phishing Emails. (1%)Mariya Shmalko; Alsharif Abuadbba; Raj Gaire; Tingmin Wu; Hye-Young Paik; Surya Nepal
2022-08-17
Two Heads are Better than One: Robust Learning Meets Multi-branch Models. (99%)Dong Huang; Qingwen Bu; Yuhao Qing; Haowen Pi; Sen Wang; Heming Cui
An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks. (99%)Raz Lapid; Zvika Haramaty; Moshe Sipper
Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against Shadow-Based Adversarial Attacks. (98%)Andrew Wang; Wyatt Mayor; Ryan Smith; Gopal Nookula; Gregory Ditzler
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System. (70%)Abdur R. Shahid; Ahmed Imteaj; Peter Y. Wu; Diane A. Igoche; Tauhidul Alam
An Efficient Multi-Step Framework for Malware Packing Identification. (41%)Jong-Wouk Kim; Yang-Sae Moon; Mi-Jung Choi
An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models. (26%)Jihyeon Hyeong; Jayoung Kim; Noseong Park; Sushil Jajodia
Efficient Detection and Filtering Systems for Distributed Training. (26%)Konstantinos Konstantinidis; Aditya Ramamoorthy
On the Privacy Effect of Data Enhancement via the Lens of Memorization. (10%)Xiao Li; Qiongxiu Li; Zhanhao Hu; Xiaolin Hu
ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach. (2%)Tong Zhou; Shaolei Ren; Xiaolin Xu
DF-Captcha: A Deepfake Captcha for Preventing Fake Calls. (1%)Yisroel Mirsky
Analyzing Robustness of End-to-End Neural Models for Automatic Speech Recognition. (1%)Goutham Rajendran; Wei Zou
2022-08-16
A Context-Aware Approach for Textual Adversarial Attack through Probability Difference Guided Beam Search. (82%)Huijun Liu; Jie Yu; Shasha Li; Jun Ma; Bin Ji
Imperceptible and Robust Backdoor Attack in 3D Point Cloud. (68%)Kuofeng Gao; Jiawang Bai; Baoyuan Wu; Mengxi Ya; Shu-Tao Xia
AutoCAT: Reinforcement Learning for Automated Exploration of Cache-Timing Attacks. (13%)Mulong Luo; Wenjie Xiong; Geunbae Lee; Yueying Li; Xiaomeng Yang; Amy Zhang; Yuandong Tian; Hsien-Hsin S. Lee; G. Edward Suh
Investigating the Impact of Model Width and Density on Generalization in Presence of Label Noise. (1%)Yihao Xue; Kyle Whitecross; Baharan Mirzasoleiman
2022-08-15
Man-in-the-Middle Attack against Object Detection Systems. (96%)Han Wu; Sareh Rowlands; Johan Wahlstrom
MENLI: Robust Evaluation Metrics from Natural Language Inference. (92%)Yanran Chen; Steffen Eger
Training-Time Attacks against k-Nearest Neighbors. (2%)Ara Vartanian; Will Rosenbaum; Scott Alfeld
CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models. (1%)Chuyen Nguyen; Caleb Morgan; Sudip Mittal
2022-08-14
A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design. (99%)Jialiang Sun; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection. (92%)Haibin Zheng; Haiyang Xiong; Haonan Ma; Guohan Huang; Jinyin Chen
InvisibiliTee: Angle-agnostic Cloaking from Person-Tracking Systems with a Tee. (92%)Yaxian Li; Bingqing Zhang; Guoping Zhao; Mingyu Zhang; Jiajun Liu; Ziwei Wang; Jirong Wen
Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning. (67%)Ashish Gupta; Tie Luo; Mao V. Ngo; Sajal K. Das
2022-08-13
Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification. (99%)Beini Xie; Heng Chang; Xin Wang; Tian Bian; Shiji Zhou; Daixin Wang; Zhiqiang Zhang; Wenwu Zhu
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks. (99%)Tian Yu Liu; Yu Yang; Baharan Mirzasoleiman
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer. (62%)Tong Wang; Yuan Yao; Feng Xu; Miao Xu; Shengwei An; Ting Wang
2022-08-12
Scale-free and Task-agnostic Attack: Generating Photo-realistic Adversarial Patterns with Patch Quilting Generator. (99%)Xiangbo Gao; Cheng Luo; Qinliang Lin; Weicheng Xie; Minmin Liu; Linlin Shen; Keerthy Kusumam; Siyang Song
MaskBlock: Transferable Adversarial Examples with Bayes Approach. (99%)Mingyuan Fan; Cen Chen; Ximeng Liu; Wenzhong Guo
Defensive Distillation based Adversarial Attacks Mitigation Method for Channel Estimation using Deep Learning Models in Next-Generation Wireless Networks. (98%)Ferhat Ozgur Catak; Murat Kuzlu; Evren Catak; Umit Cali; Ozgur Guler
Unifying Gradients to Improve Real-world Robustness for Deep Networks. (96%)Yingwen Wu; Sizhe Chen; Kun Fang; Xiaolin Huang
A Knowledge Distillation-Based Backdoor Attack in Federated Learning. (93%)Yifan Wang; Wei Fan; Keke Yang; Naji Alhusaini; Jing Li
Dropout is NOT All You Need to Prevent Gradient Leakage. (62%)Daniel Scheliga; Patrick Mäder; Marco Seeland
Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons. (2%)Mingyuan Fan; Yang Liu; Cen Chen; Ximeng Liu; Wenzhong Guo
PRIVEE: A Visual Analytic Workflow for Proactive Privacy Risk Inspection of Open Data. (2%)Kaustav Bhattacharjee; Akm Islam; Jaideep Vaidya; Aritra Dasgupta
2022-08-11
Diverse Generative Perturbations on Attention Space for Transferable Adversarial Attacks. (99%)Woo Jae Kim; Seunghoon Hong; Sung-Eui Yoon
General Cutting Planes for Bound-Propagation-Based Neural Network Verification. (68%)Huan Zhang; Shiqi Wang; Kaidi Xu; Linyi Li; Bo Li; Suman Jana; Cho-Jui Hsieh; J. Zico Kolter
On deceiving malware classification with section injection. (5%)Silva Adeilson Antonio da; Mauricio Pamplona Segundo
A Probabilistic Framework for Mutation Testing in Deep Neural Networks. (1%)Florian Tambon; Foutse Khomh; Giuliano Antoniol
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment. (1%)Jie Zhu; Leye Wang; Xiao Han
Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone. (1%)Aghiles Ait Messaoud; Sonia Ben Mokhtar; Vlad Nitu; Valerio Schiavoni
2022-08-10
Explaining Machine Learning DGA Detectors from DNS Traffic Data. (13%)Giorgio Piras; Maura Pintor; Luca Demetrio; Battista Biggio
A Sublinear Adversarial Training Algorithm. (3%)Yeqi Gao; Lianke Qin; Zhao Song; Yitan Wang
DVR: Micro-Video Recommendation Optimizing Watch-Time-Gain under Duration Bias. (1%)Yu Zheng; Chen Gao; Jingtao Ding; Lingling Yi; Depeng Jin; Yong Li; Meng Wang
2022-08-09
Adversarial Machine Learning-Based Anticipation of Threats Against Vehicle-to-Microgrid Services. (98%)Ahmed Omara; Burak Kantarci
Reducing Exploitability with Population Based Training. (67%)Pavel Czempin; Adam Gleave
Robust Machine Learning for Malware Detection over Time. (9%)Daniele Angioni; Luca Demetrio; Maura Pintor; Battista Biggio
2022-08-08
Robust and Imperceptible Black-box DNN Watermarking Based on Fourier Perturbation Analysis and Frequency Sensitivity Clustering. (75%)Yong Liu; Hanzhou Wu; Xinpeng Zhang
PerD: Perturbation Sensitivity-based Neural Trojan Detection Framework on NLP Applications. (67%)Diego Garcia-soto; Huili Chen; Farinaz Koushanfar
Adversarial robustness of VAEs through the lens of local geometry. (47%)Asif Khan; Amos Storkey
AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning. (26%)Tianxing Zhang; Hanzhou Wu; Xiaofeng Lu; Guangling Sun
Abutting Grating Illusion: Cognitive Challenge to Neural Network Models. (1%)Jinyu Fan; Yi Zeng
Testing of Machine Learning Models with Limited Samples: An Industrial Vacuum Pumping Application. (1%)Ayan Chatterjee; Bestoun S. Ahmed; Erik Hallin; Anton Engman
2022-08-07
Federated Adversarial Learning: A Framework with Convergence Analysis. (80%)Xiaoxiao Li; Zhao Song; Jiaming Yang
Are Gradients on Graph Structure Reliable in Gray-box Attacks? (13%)Zihan Liu; Yun Luo; Lirong Wu; Siyuan Li; Zicheng Liu; Stan Z. Li
2022-08-06
Blackbox Attacks via Surrogate Ensemble Search. (99%)Zikui Cai; Chengyu Song; Srikanth Krishnamurthy; Amit Roy-Chowdhury; M. Salman Asif
On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning. (22%)Congyu Fang; Hengrui Jia; Anvith Thudi; Mohammad Yaghini; Christopher A. Choquette-Choo; Natalie Dullerud; Varun Chandrasekaran; Nicolas Papernot
Preventing or Mitigating Adversarial Supply Chain Attacks; a legal analysis. (3%)Kaspar Rosager Ludvigsen; Shishir Nagaraja; Angela Daly
2022-08-05
Adversarial Robustness of MR Image Reconstruction under Realistic Perturbations. (73%)Jan Nikolas Morshuis; Sergios Gatidis; Matthias Hein; Christian F. Baumgartner
Data-free Backdoor Removal based on Channel Lipschitzness. (64%)Runkai Zheng; Rongjun Tang; Jianze Li; Li Liu
Lethal Dose Conjecture on Data Poisoning. (2%)Wenxiao Wang; Alexander Levine; Soheil Feizi
LCCDE: A Decision-Based Ensemble Framework for Intrusion Detection in The Internet of Vehicles. (1%)Li Yang; Abdallah Shami; Gary Stevens; Stephen De Rusett
Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks. (1%)Bernd Prach; Christoph H. Lampert
2022-08-04
Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification. (99%)Faris Almalik; Mohammad Yaqub; Karthik Nandakumar
2022-08-03
Spectrum Focused Frequency Adversarial Attacks for Automatic Modulation Classification. (99%)Sicheng Zhang; Jiarun Yu; Zhida Bao; Shiwen Mao; Yun Lin
Design of secure and robust cognitive system for malware detection. (99%)Sanket Shukla
A New Kind of Adversarial Example. (99%)Ali Borji
Adversarial Attacks on ASR Systems: An Overview. (98%)Xiao Zhang; Hao Tan; Xuan Huang; Denghui Zhang; Keke Tang; Zhaoquan Gu
Multiclass ASMA vs Targeted PGD Attack in Image Segmentation. (96%)Johnson Vo; Jiabao Xie; Sahil Patel
MOVE: Effective and Harmless Ownership Verification via Embedded External Features. (84%)Yiming Li; Linghui Zhu; Xiaojun Jia; Yang Bai; Yong Jiang; Shu-Tao Xia; Xiaochun Cao
Robust Graph Neural Networks using Weighted Graph Laplacian. (13%)Bharat Runwal; Vivek; Sandeep Kumar
2022-08-02
Adversarial Camouflage for Node Injection Attack on Graphs. (81%)Shuchang Tao; Qi Cao; Huawei Shen; Yunfan Wu; Liang Hou; Xueqi Cheng
Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry. (2%)Mark Penrod; Harrison Termotto; Varshini Reddy; Jiayu Yao; Finale Doshi-Velez; Weiwei Pan
SCFI: State Machine Control-Flow Hardening Against Fault Attacks. (1%)Pascal Nasahl; Martin Unterguggenberger; Rishub Nagpal; Robert Schilling; David Schrammel; Stefan Mangard
2022-08-01
GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction. (98%)Jiacheng Zhu; Jielin Qiu; Zhuolin Yang; Douglas Weber; Michael A. Rosenberg; Emerson Liu; Bo Li; Ding Zhao
Understanding Adversarial Robustness of Vision Transformers via Cauchy Problem. (81%)Zheng Wang; Wenjie Ruan
On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel. (75%)Shubhi Shukla; Manaar Alam; Sarani Bhattacharya; Debdeep Mukhopadhyay; Pabitra Mitra
Attacking Adversarial Defences by Smoothing the Loss Landscape. (26%)Panagiotis Eustratiadis; Henry Gouk; Da Li; Timothy Hospedales
2022-07-31
DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning. (99%)Mohammad Hossein Samavatian; Saikat Majumdar; Kristin Barber; Radu Teodorescu
Robust Real-World Image Super-Resolution against Adversarial Attacks. (99%)Jiutao Yue; Haofeng Li; Pengxu Wei; Guanbin Li; Liang Lin
Is current research on adversarial robustness addressing the right problem? (97%)Ali Borji
2022-07-30
enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks. (5%)Alessio Colucci; Andreas Steininger; Muhammad Shafique
CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for Energy-Efficient Low-precision Deep Convolutional Neural Networks. (2%)Muhammad Abdullah Hanif; Giuseppe Maria Sarda; Alberto Marchisio; Guido Masera; Maurizio Martina; Muhammad Shafique
2022-07-29
Robust Trajectory Prediction against Adversarial Attacks. (99%)Yulong Cao; Danfei Xu; Xinshuo Weng; Zhuoqing Mao; Anima Anandkumar; Chaowei Xiao; Marco Pavone
Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis. (56%)Tao Li; Haozhe Lei; Quanyan Zhu
2022-07-28
Pro-tuning: Unified Prompt Tuning for Vision Tasks. (1%)Xing Nie; Bolin Ni; Jianlong Chang; Gaomeng Meng; Chunlei Huo; Zhaoxiang Zhang; Shiming Xiang; Qi Tian; Chunhong Pan
2022-07-27
Point Cloud Attacks in Graph Spectral Domain: When 3D Geometry Meets Graph Signal Processing. (96%)Daizong Liu; Wei Hu; Xin Li
Look Closer to Your Enemy: Learning to Attack via Teacher-student Mimicking. (91%)Mingjie Wang; Zhiqing Tang; Sirui Li; Dingwen Xiao
Membership Inference Attacks via Adversarial Examples. (73%)Hamid Jalalzai; Elie Kadoche; Rémi Leluc; Vincent Plassier
Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips. (69%)Jiawang Bai; Kuofeng Gao; Dihong Gong; Shu-Tao Xia; Zhifeng Li; Wei Liu
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking. (47%)Abhishek Chakraborty; Daniel Xing; Yuntao Liu; Ankur Srivastava
Label-Only Membership Inference Attack against Node-Level Graph Neural Networks. (22%)Mauro Conti; Jiaxin Li; Stjepan Picek; Jing Xu
Generative Steganography Network. (1%)Ping Wei; Sheng Li; Xinpeng Zhang; Ge Luo; Zhenxing Qian; Qing Zhou
2022-07-26
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity. (99%)Martin Gubri; Maxime Cordy; Mike Papadakis; Yves Le Traon; Koushik Sen
Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception. (99%)Rui Duan; Zhe Qu; Shangqing Zhao; Leah Ding; Yao Liu; Zhuo Lu
Generative Extraction of Audio Classifiers for Speaker Identification. (73%)Tejumade Afonja; Lucas Bourtoule; Varun Chandrasekaran; Sageev Oore; Nicolas Papernot
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. (8%)Tilman Räuker; Anson Ho; Stephen Casper; Dylan Hadfield-Menell
2022-07-25
$p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations. (99%)Adam Dziedzic; Stephan Rabanser; Mohammad Yaghini; Armin Ale; Murat A. Erdogdu; Nicolas Papernot
Improving Adversarial Robustness via Mutual Information Estimation. (99%)Dawei Zhou; Nannan Wang; Xinbo Gao; Bo Han; Xiaoyu Wang; Yibing Zhan; Tongliang Liu
SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness. (99%)Jindong Gu; Hengshuang Zhao; Volker Tresp; Philip Torr
Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer. (75%)Yingyi Chen; Xi Shen; Yahui Liu; Qinghua Tao; Johan A. K. Suykens
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment. (9%)Tian Liu; Xueyang Hu; Tao Shu
Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning. (2%)Xinlei He; Hongbin Liu; Neil Zhenqiang Gong; Yang Zhang
2022-07-24
Versatile Weight Attack via Flipping Limited Bits. (86%)Jiawang Bai; Baoyuan Wu; Zhifeng Li; Shu-tao Xia
Can we achieve robustness from data alone? (82%)Nikolaos Tsilivis; Jingtong Su; Julia Kempe
Proving Common Mechanisms Shared by Twelve Methods of Boosting Adversarial Transferability. (69%)Quanshi Zhang; Xin Wang; Jie Ren; Xu Cheng; Shuyun Lin; Yisen Wang; Xiangming Zhu
Privacy Against Inference Attacks in Vertical Federated Learning. (2%)Borzoo Rassouli; Morteza Varasteh; Deniz Gunduz
Semantic-guided Multi-Mask Image Harmonization. (1%)Xuqian Ren; Yifan Liu
2022-07-22
Do Perceptually Aligned Gradients Imply Adversarial Robustness? (99%)Roy Ganz; Bahjat Kawar; Michael Elad
Provable Defense Against Geometric Transformations. (47%)Rem Yang; Jacob Laurel; Sasa Misailovic; Gagandeep Singh
Aries: Efficient Testing of Deep Neural Networks via Labeling-Free Accuracy Estimation. (41%)Qiang Hu; Yuejun Guo; Xiaofei Xie; Maxime Cordy; Lei Ma; Mike Papadakis; Yves Le Traon
Learning from Multiple Annotator Noisy Labels via Sample-wise Label Fusion. (1%)Zhengqi Gao; Fan-Keng Sun; Mingran Yang; Sucheng Ren; Zikai Xiong; Marc Engeler; Antonio Burazer; Linda Wildling; Luca Daniel; Duane S. Boning
2022-07-21
Synthetic Dataset Generation for Adversarial Machine Learning Research. (99%)Xiruo Liu; Shibani Singh; Cory Cornelius; Colin Busho; Mike Tan; Anindya Paul; Jason Martin
Careful What You Wish For: on the Extraction of Adversarially Trained Models. (99%)Kacem Khaled; Gabriela Nicolescu; Felipe Gohring de Magalhães
Rethinking Textual Adversarial Defense for Pre-trained Language Models. (99%)Jiayi Wang; Rongzhou Bao; Zhuosheng Zhang; Hai Zhao
AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance. (98%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
Knowledge-enhanced Black-box Attacks for Recommendations. (92%)Jingfan Chen; Wenqi Fan; Guanghui Zhu; Xiangyu Zhao; Chunfeng Yuan; Qing Li; Yihua Huang
Towards Efficient Adversarial Training on Vision Transformers. (92%)Boxi Wu; Jindong Gu; Zhifeng Li; Deng Cai; Xiaofei He; Wei Liu
Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation. (87%)Tong Wu; Tianhao Wang; Vikash Sehwag; Saeed Mahloujifar; Prateek Mittal
Contrastive Self-Supervised Learning Leads to Higher Adversarial Susceptibility. (83%)Rohit Gupta; Naveed Akhtar; Ajmal Mian; Mubarak Shah
Generating and Detecting True Ambiguity: A Forgotten Danger in DNN Supervision Testing. (22%)Michael Weiss; André García Gómez; Paolo Tonella
2022-07-20
Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness. (99%)Sekitoshi Kanai; Shin'ya Yamaguchi; Masanori Yamada; Hiroshi Takahashi; Kentaro Ohno; Yasutoshi Ida
Illusory Attacks: Detectability Matters in Adversarial Attacks on Sequential Decision-Makers. (98%)Tim Franzmeyer; Stephen McAleer; João F. Henriques; Jakob N. Foerster; Philip H. S. Torr; Adel Bibi; Christian Schroeder de Witt
Test-Time Adaptation via Conjugate Pseudo-labels. (10%)Sachin Goyal; Mingjie Sun; Aditi Raghunathan; Zico Kolter
Malware Triage Approach using a Task Memory based on Meta-Transfer Learning Framework. (9%)Jinting Zhu; Julian Jang-Jaccard; Ian Welch; Harith Al-Sahaf; Seyit Camtepe
A temporally and spatially local spike-based backpropagation algorithm to enable training in hardware. (1%)Anmol Biswas; Vivek Saraswat; Udayan Ganguly
2022-07-19
Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms. (99%)Linbo Liu; Youngsuk Park; Trong Nghia Hoang; Hilaf Hasson; Jun Huan
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. (41%)Zaixi Zhang; Xiaoyu Cao; Jinyuan Jia; Neil Zhenqiang Gong
Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond. (26%)Yuzheng Hu; Tianle Cai; Jinyong Shan; Shange Tang; Chaochao Cai; Ethan Song; Bo Li; Dawn Song
Assaying Out-Of-Distribution Generalization in Transfer Learning. (1%)Florian Wenzel; Andrea Dittadi; Peter Vincent Gehler; Carl-Johann Simon-Gabriel; Max Horn; Dominik Zietlow; David Kernert; Chris Russell; Thomas Brox; Bernt Schiele; Bernhard Schölkopf; Francesco Locatello
2022-07-18
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders. (99%)Zhenrui Yue; Huimin Zeng; Ziyi Kou; Lanyu Shang; Dong Wang
Prior-Guided Adversarial Initialization for Fast Adversarial Training. (99%)Xiaojun Jia; Yong Zhang; Xingxing Wei; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
Decorrelative Network Architecture for Robust Electrocardiogram Classification. (99%)Christopher Wiedeman; Ge Wang
Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence. (96%)Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov
Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations. (91%)Hashmat Shadab Malik; Shahina K Kunhimon; Muzammal Naseer; Salman Khan; Fahad Shahbaz Khan
Easy Batch Normalization. (69%)Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov
Adversarial Contrastive Learning via Asymmetric InfoNCE. (61%)Qiying Yu; Jieming Lou; Xianyuan Zhan; Qizhang Li; Wangmeng Zuo; Yang Liu; Jingjing Liu
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications. (22%)Ali Raza; Shujun Li; Kim-Phuc Tran; Ludovic Koehl
A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling. (10%)Xudong Pan; Qifan Xiao; Mi Zhang; Min Yang
Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification. (2%)Sarwan Ali; Bikram Sahoo; Alexander Zelikovskiy; Pin-Yu Chen; Murray Patterson
2022-07-17
Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal. (99%)Xinwei Liu; Jian Liu; Yang Bai; Jindong Gu; Tao Chen; Xiaojun Jia; Xiaochun Cao
Threat Model-Agnostic Adversarial Defense using Diffusion Models. (99%)Tsachi Blau; Roy Ganz; Bahjat Kawar; Alex Bronstein; Michael Elad
Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game. (96%)Xiao-Shan Gao; Shuang Liu; Lijia Yu
Automated Repair of Neural Networks. (16%)Dor Cohen; Ofer Strichman
2022-07-16
DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking. (99%)Xiangyu Yin; Wenjie Ruan; Jonathan Fieldsend
Certified Neural Network Watermarks with Randomized Smoothing. (1%)Arpit Bansal; Ping-yeh Chiang; Michael Curry; Rajiv Jain; Curtis Wigington; Varun Manjunatha; John P Dickerson; Tom Goldstein
Progress and limitations of deep networks to recognize objects in unusual poses. (1%)Amro Abbas; Stéphane Deny
Exploring The Resilience of Control Execution Skips against False Data Injection Attacks. (1%)Ipsita Koley; Sunandan Adhikary; Soumyajit Dey
MixTailor: Mixed Gradient Aggregation for Robust Learning Against Tailored Attacks. (1%)Ali Ramezani-Kebrya; Iman Tabrizian; Fartash Faghri; Petar Popovski
2022-07-15
Towards the Desirable Decision Boundary by Moderate-Margin Adversarial Training. (99%)Xiaoyu Liang; Yaguan Qian; Jianchang Huang; Xiang Ling; Bin Wang; Chunming Wu; Wassim Swaileh
CARBEN: Composite Adversarial Robustness Benchmark. (98%)Lei Hsiung; Yun-Yun Tsai; Pin-Yu Chen; Tsung-Yi Ho
Masked Spatial-Spectral Autoencoders Are Excellent Hyperspectral Defenders. (68%)Jiahao Qi; Zhiqiang Gong; Xingyue Liu; Kangcheng Bin; Chen Chen; Yongqian Li; Wei Xue; Yu Zhang; Ping Zhong
Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection. (10%)Svetlana Pavlitskaya; Bianca-Marina Codău; J. Marius Zöllner
PASS: Parameters Audit-based Secure and Fair Federated Learning Scheme against Free Rider. (5%)Jianhua Wang
3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models. (1%)Ronghui Mu; Wenjie Ruan; Leandro S. Marcolino; Qiang Ni
2022-07-14
Adversarial Examples for Model-Based Control: A Sensitivity Analysis. (98%)Po-han Li; Ufuk Topcu; Sandeep P. Chinchali
Adversarial Attacks on Monocular Pose Estimation. (98%)Hemang Chawla; Arnav Varma; Elahe Arani; Bahram Zonooz
Provably Adversarially Robust Nearest Prototype Classifiers. (83%)Václav Voráček; Matthias Hein
Improving Task-free Continual Learning by Distributionally Robust Memory Evolution. (70%)Zhenyi Wang; Li Shen; Le Fang; Qiuling Suo; Tiehang Duan; Mingchen Gao
RSD-GAN: Regularized Sobolev Defense GAN Against Speech-to-Text Adversarial Attacks. (67%)Mohammad Esmaeilpour; Nourhene Chaalia; Patrick Cardinal
Sound Randomized Smoothing in Floating-Point Arithmetics. (50%)Václav Voráček; Matthias Hein
Audio-guided Album Cover Art Generation with Genetic Algorithms. (38%)James Marien; Sam Leroux; Bart Dhoedt; Cedric De Boom
Distance Learner: Incorporating Manifold Prior to Model Training. (16%)Aditya Chetan; Nipun Kwatra
Active Data Pattern Extraction Attacks on Generative Language Models. (11%)Bargav Jayaraman; Esha Ghosh; Huseyin Inan; Melissa Chase; Sambuddha Roy; Wei Dai
Contrastive Adapters for Foundation Model Group Robustness. (1%)Michael Zhang; Christopher Ré
Lipschitz Bound Analysis of Neural Networks. (1%)Sarosij Bose
2022-07-13
Perturbation Inactivation Based Adversarial Defense for Face Recognition. (99%)Min Ren; Yuhao Zhu; Yunlong Wang; Zhenan Sun
On the Robustness of Bayesian Neural Networks to Adversarial Attacks. (93%)Luca Bortolussi; Ginevra Carbone; Luca Laurenti; Andrea Patane; Guido Sanguinetti; Matthew Wicker
Adversarially-Aware Robust Object Detector. (91%)Ziyi Dong; Pengxu Wei; Liang Lin
PIAT: Physics Informed Adversarial Training for Solving Partial Differential Equations. (15%)Simin Shekarpaz; Mohammad Azizmalayeri; Mohammad Hossein Rohban
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. (10%)Subash Neupane; Jesse Ables; William Anderson; Sudip Mittal; Shahram Rahimi; Ioana Banicescu; Maria Seale
Interactive Machine Learning: A State of the Art Review. (4%)Natnael A. Wondimu; Cédric Buche; Ubbo Visser
Sample-dependent Adaptive Temperature Scaling for Improved Calibration. (2%)Tom Joy; Francesco Pinto; Ser-Nam Lim; Philip H. S. Torr; Puneet K. Dokania
DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment. (1%)Ahmed Haj Yahmed; Houssem Ben Braiek; Foutse Khomh; Sonia Bouzidi; Rania Zaatour
2022-07-12
Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information. (99%)Jiebao Zhang; Wenhua Qian; Rencan Nie; Jinde Cao; Dan Xu
Adversarial Robustness Assessment of NeuroEvolution Approaches. (99%)Inês Valentim; Nuno Lourenço; Nuno Antunes
Frequency Domain Model Augmentation for Adversarial Attack. (99%)Yuyang Long; Qilong Zhang; Boheng Zeng; Lianli Gao; Xianglong Liu; Jian Zhang; Jingkuan Song
Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware. (92%)Luca Demetrio; Battista Biggio; Fabio Roli
Game of Trojans: A Submodular Byzantine Approach. (87%)Dinuka Sahabandu; Arezoo Rajabi; Luyao Niu; Bo Li; Bhaskar Ramasubramanian; Radha Poovendran
Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust Deep Neural Architectures. (84%)Jia Liu; Ran Cheng; Yaochu Jin
Certified Adversarial Robustness via Anisotropic Randomized Smoothing. (76%)Hanbin Hong; Yuan Hong
RelaxLoss: Defending Membership Inference Attacks without Losing Utility. (26%)Dingfan Chen; Ning Yu; Mario Fritz
Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations. (5%)Satoshi Munakata; Caterina Urban; Haruki Yokoyama; Koji Yamamoto; Kazuki Munakata
Markov Decision Process For Automatic Cyber Defense. (4%)Simon Yusuf Enoch; Simon Yusuf Enoch; Dong Seong Kim
Estimating Test Performance for AI Medical Devices under Distribution Shift with Conformal Prediction. (1%)Charles Lu; Syed Rakin Ahmed; Praveer Singh; Jayashree Kalpathy-Cramer
Backdoor Attacks on Crowd Counting. (1%)Yuhua Sun; Tailai Zhang; Xingjun Ma; Pan Zhou; Jian Lou; Zichuan Xu; Xing Di; Yu Cheng; Lichao
2022-07-11
Statistical Detection of Adversarial examples in Blockchain-based Federated Forest In-vehicle Network Intrusion Detection Systems. (99%)Ibrahim Aliyu; Selinde van Engelenburg; Muhammed Bashir Muazu; Jinsul Kim; Chang Gyoon Lim
RUSH: Robust Contrastive Learning via Randomized Smoothing. (98%)Yijiang Pang; Boyang Liu; Jiayu Zhou
Physical Passive Patch Adversarial Attacks on Visual Odometry Systems. (98%)Yaniv Nemcovsky; Matan Yaakoby; Alex M. Bronstein; Chaim Baskin
Towards Effective Multi-Label Recognition Attacks via Knowledge Graph Consistency. (83%)Hassan Mahmood; Ehsan Elhamifar
Susceptibility of Continual Learning Against Adversarial Attacks. (75%)Hikmat Khan; Pir Masoom Shah; Syed Farhan Alam Zaidi; Saif ul Islam
"Why do so?" -- A Practical Perspective on Machine Learning Security. (64%)Kathrin Grosse; Lukas Bieringer; Tarek Richard Besold; Battista Biggio; Katharina Krombholz
Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches. (22%)Zhiyuan Cheng; James Liang; Hongjun Choi; Guanhong Tao; Zhiwen Cao; Dongfang Liu; Xiangyu Zhang
Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation. (1%)Zhun Zhong; Yuyang Zhao; Gim Hee Lee; Nicu Sebe
2022-07-10
One-shot Neural Backdoor Erasing via Adversarial Weight Masking. (33%)Shuwen Chai; Jinghui Chen
Hiding Your Signals: A Security Analysis of PPG-based Biometric Authentication. (4%)Lin Li; Chao Chen; Lei Pan; Yonghang Tai; Jun Zhang; Yang Xiang
2022-07-09
Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features. (98%)Taha Belkhouja; Janardhan Rao Doppa
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain. (98%)Chang Yue; Peizhuo Lv; Ruigang Liang; Kai Chen
Dynamic Time Warping based Adversarial Framework for Time-Series Domain. (97%)Taha Belkhouja; Yan Yan; Janardhan Rao Doppa
Training Robust Deep Models for Time-Series Domain: Novel Algorithms and Theoretical Analysis. (67%)Taha Belkhouja; Yan Yan; Janardhan Rao Doppa
2022-07-08
Not all broken defenses are equal: The dead angles of adversarial accuracy. (99%)Raphael Olivier; Bhiksha Raj
Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization. (99%)Deyin Liu; Lin Wu; Lingqiao Liu; Haifeng Zhao; Farid Boussaid; Mohammed Bennamoun
Defense Against Multi-target Trojan Attacks. (80%)Haripriya Harikumar; Santu Rana; Kien Do; Sunil Gupta; Wei Zong; Willy Susilo; Svetha Venkatesh
Guiding the retraining of convolutional neural networks against adversarial inputs. (80%)Francisco Durán López; Silverio Martínez-Fernández; Michael Felderer; Xavier Franch
Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future. (68%)Byunggill Joe; Insik Shin; Jihun Hamm
Models Out of Line: A Fourier Lens on Distribution Shift Robustness. (10%)Sara Fridovich-Keil; Brian R. Bartoldson; James Diffenderfer; Bhavya Kailkhura; Peer-Timo Bremer
A law of adversarial risk, interpolation, and label noise. (1%)Daniel Paleka; Amartya Sanyal
2022-07-07
On the Relationship Between Adversarial Robustness and Decision Region in Deep Neural Network. (99%)Seongjin Park; Haedong Jeong; Giyoung Jeon; Jaesik Choi
Harnessing Out-Of-Distribution Examples via Augmenting Content and Style. (11%)Zhuo Huang; Xiaobo Xia; Li Shen; Bo Han; Mingming Gong; Chen Gong; Tongliang Liu
CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships. (5%)Rebecca Roelofs; Liting Sun; Ben Caine; Khaled S. Refaat; Ben Sapp; Scott Ettinger; Wei Chai
2022-07-06
The Weaknesses of Adversarial Camouflage in Overhead Imagery. (83%)Adam Van Etten
Adversarial Robustness of Visual Dialog. (64%)Lu Yu; Verena Rieser
Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural Networks with Power Consumption Information. (54%)Cory Merkel
When does Bias Transfer in Transfer Learning? (10%)Hadi Salman; Saachi Jain; Andrew Ilyas; Logan Engstrom; Eric Wong; Aleksander Madry
Privacy-preserving Reflection Rendering for Augmented Reality. (2%)Yiqin Zhao; Sheng Wei; Tian Guo
Not All Models Are Equal: Predicting Model Transferability in a Self-challenging Fisher Space. (1%)Wenqi Shao; Xun Zhao; Yixiao Ge; Zhaoyang Zhang; Lei Yang; Xiaogang Wang; Ying Shan; Ping Luo
2022-07-05
Query-Efficient Adversarial Attack Based on Latin Hypercube Sampling. (99%)Dan Wang; Jiayu Lin; Yuan-Gen Wang
Defending against the Label-flipping Attack in Federated Learning. (98%)Najeeb Moharram Jebreel; Josep Domingo-Ferrer; David Sánchez; Alberto Blanco-Justicia
UniCR: Universally Approximated Certified Robustness via Randomized Smoothing. (93%)Hanbin Hong; Binghui Wang; Yuan Hong
PRoA: A Probabilistic Robustness Assessment against Functional Perturbations. (92%)Tianle Zhang; Wenjie Ruan; Jonathan E. Fieldsend
Learning to Accelerate Approximate Methods for Solving Integer Programming via Early Fixing. (38%)Longkang Li; Baoyuan Wu
Robustness Analysis of Video-Language Models Against Visual and Language Perturbations. (1%)Madeline C. Schiappa; Shruti Vyas; Hamid Palangi; Yogesh S. Rawat; Vibhav Vineet
Conflicting Interactions Among Protection Mechanisms for Machine Learning Models. (1%)Sebastian Szyller; N. Asokan
PoF: Post-Training of Feature Extractor for Improving Generalization. (1%)Ikuro Sato; Ryota Yamada; Masayuki Tanaka; Nakamasa Inoue; Rei Kawakami
Class-Specific Semantic Reconstruction for Open Set Recognition. (1%)Hongzhi Huang; Yu Wang; Qinghua Hu; Ming-Ming Cheng
2022-07-04
Hessian-Free Second-Order Adversarial Examples for Adversarial Learning. (99%)Yaguan Qian; Yuqi Wang; Bin Wang; Zhaoquan Gu; Yuhan Guo; Wassim Swaileh
Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples. (98%)Giovanni Apruzzese; Rodion Vladimirov; Aliya Tastemirova; Pavel Laskov
Task-agnostic Defense against Adversarial Patch Attacks. (98%)Ke Xu; Yao Xiao; Zhaoheng Zheng; Kaijie Cai; Ram Nevatia
Large-scale Robustness Analysis of Video Action Recognition Models. (70%)Madeline C. Schiappa; Naman Biyani; Shruti Vyas; Hamid Palangi; Vibhav Vineet; Yogesh Rawat
Counterbalancing Teacher: Regularizing Batch Normalized Models for Robustness. (1%)Saeid Asgari Taghanaki; Ali Gholami; Fereshte Khani; Kristy Choi; Linh Tran; Ran Zhang; Aliasghar Khani
2022-07-03
RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries. (99%)Keshav Kasichainula; Hadi Mansourifar; Weidong Shi
Removing Batch Normalization Boosts Adversarial Training. (98%)Haotao Wang; Aston Zhang; Shuai Zheng; Xingjian Shi; Mu Li; Zhangyang Wang
Anomaly Detection with Adversarially Learned Perturbations of Latent Space. (13%)Vahid Reza Khazaie; Anthony Wong; John Taylor Jewell; Yalda Mohsenzadeh
Identifying the Context Shift between Test Benchmarks and Production Data. (1%)Matthew Groh
2022-07-02
FL-Defender: Combating Targeted Attacks in Federated Learning. (80%)Najeeb Jebreel; Josep Domingo-Ferrer
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis. (11%)Ruinan Jin; Xiaoxiao Li
PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection. (1%)Zhi Lu; Vrizlynn L. L. Thing
2022-07-01
Efficient Adversarial Training With Data Pruning. (99%)Maximilian Kaufmann; Yiren Zhao; Ilia Shumailov; Robert Mullins; Nicolas Papernot
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label. (99%)Shengshan Hu; Ziqi Zhou; Yechao Zhang; Leo Yu Zhang; Yifeng Zheng; Yuanyuan HE; Hai Jin
2022-06-30
Detecting and Recovering Adversarial Examples from Extracting Non-robust and Highly Predictive Adversarial Perturbations. (99%)Mingyu Dong; Jiahao Chen; Diqun Yan; Jingxing Gao; Li Dong; Rangding Wang
Measuring Forgetting of Memorized Training Examples. (83%)Matthew Jagielski; Om Thakkar; Florian Tramèr; Daphne Ippolito; Katherine Lee; Nicholas Carlini; Eric Wallace; Shuang Song; Abhradeep Thakurta; Nicolas Papernot; Chiyuan Zhang
MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors. (80%)Federica Granese; Marine Picot; Marco Romanelli; Francisco Messina; Pablo Piantanida
Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN. (16%)Kuan Li; Yang Liu; Xiang Ao; Jianfeng Chi; Jinghua Feng; Hao Yang; Qing He
Threat Assessment in Machine Learning based Systems. (13%)Lionel Nganyewou Tidjon; Foutse Khomh
Robustness of Epinets against Distributional Shifts. (1%)Xiuyuan Lu; Ian Osband; Seyed Mohammad Asghari; Sven Gowal; Vikranth Dwaracherla; Zheng Wen; Benjamin Van Roy
ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State. (1%)Xinshao Wang; Yang Hua; Elyor Kodirov; Sankha Subhra Mukherjee; David A. Clifton; Neil M. Robertson
No Reason for No Supervision: Improved Generalization in Supervised Models. (1%)Mert Bulent Sariyildiz; Yannis Kalantidis; Karteek Alahari; Diane Larlus
Augment like there's no tomorrow: Consistently performing neural networks for medical imaging. (1%)Joona Pohjonen; Carolin Stürenberg; Atte Föhr; Reija Randen-Brady; Lassi Luomala; Jouni Lohi; Esa Pitkänen; Antti Rannikko; Tuomas Mirtti
2022-06-29
IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound. (92%)Alessandro De Palma; Rudy Bunel; Krishnamurthy Dvijotham; M. Pawan Kumar; Robert Stanforth
Adversarial Ensemble Training by Jointly Learning Label Dependencies and Member Models. (33%)Lele Wang; Bin Liu
longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks. (10%)Venelin Kovatchev; Trina Chatterjee; Venkata S Govindarajan; Jifan Chen; Eunsol Choi; Gabriella Chronis; Anubrata Das; Katrin Erk; Matthew Lease; Junyi Jessy Li; Yating Wu; Kyle Mahowald
Private Graph Extraction via Feature Explanations. (10%)Iyiola E. Olatunji; Mandeep Rathee; Thorben Funke; Megha Khosla
RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness. (2%)Francesco Pinto; Harry Yang; Ser-Nam Lim; Philip H. S. Torr; Puneet K. Dokania
2022-06-28
Increasing Confidence in Adversarial Robustness Evaluations. (99%)Roland S. Zimmermann; Wieland Brendel; Florian Tramer; Nicholas Carlini
Rethinking Adversarial Examples for Location Privacy Protection. (93%)Trung-Nghia Le; Ta Gu; Huy H. Nguyen; Isao Echizen
A Deep Learning Approach to Create DNS Amplification Attacks. (92%)Jared Mathews; Prosenjit Chatterjee; Shankar Banik; Cory Nance
On the amplification of security and privacy risks by post-hoc explanations in machine learning models. (31%)Pengrui Quan; Supriyo Chakraborty; Jeya Vikranth Jeyakumar; Mani Srivastava
How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection. (12%)Mantas Mazeika; Bo Li; David Forsyth
An Empirical Study of Challenges in Converting Deep Learning Models. (5%)Moses Openja; Amin Nikanjam; Ahmed Haj Yahmed; Foutse Khomh; Zhen Ming (Jack) Jiang
Reasoning about Moving Target Defense in Attack Modeling Formalisms. (2%)Gabriel Ballot; Vadim Malvone; Jean Leneutre; Etienne Borde
AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE. (1%)Changjie Lu; Shen Zheng; Zirui Wang; Omar Dib; Gaurav Gupta
2022-06-27
Adversarial Example Detection in Deployed Tree Ensembles. (99%)Laurens Devos; Wannes Meert; Jesse Davis
Towards Secrecy-Aware Attacks Against Trust Prediction in Signed Graphs. (38%)Yulin Zhu; Tomasz Michalak; Xiapu Luo; Kai Zhou
Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers. (15%)Georg Siedel; Silvia Vock; Andrey Morozov; Stefan Voß
Cyber Network Resilience against Self-Propagating Malware Attacks. (13%)Alesia Chernikova; Nicolò Gozzi; Simona Boboila; Priyanka Angadi; John Loughner; Matthew Wilden; Nicola Perra; Tina Eliassi-Rad; Alina Oprea
Quantification of Deep Neural Network Prediction Uncertainties for VVUQ of Machine Learning Models. (4%)Mahmoud Yaseen; Xu Wu
2022-06-26
Self-Healing Robust Neural Networks via Closed-Loop Control. (45%)Zhuotong Chen; Qianxiao Li; Zheng Zhang
De-END: Decoder-driven Watermarking Network. (1%)Han Fang; Zhaoyang Jia; Yupeng Qiu; Jiyi Zhang; Weiming Zhang; Ee-Chien Chang
2022-06-25
Empirical Evaluation of Physical Adversarial Patch Attacks Against Overhead Object Detection Models. (99%)Gavin S. Hartnett; Li Ang Zhang; Caolionn O'Connell; Andrew J. Lohn; Jair Aguirre
Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising. (99%)Sandhya Aneja; Nagender Aneja; Pg Emeroylariffion Abas; Abdul Ghani Naim
RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer. (99%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
Defending Multimodal Fusion Models against Single-Source Adversaries. (81%)Karren Yang; Wan-Yi Lin; Manash Barman; Filipe Condessa; Zico Kolter
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning. (12%)Baoyuan Wu; Hongrui Chen; Mingda Zhang; Zihao Zhu; Shaokui Wei; Danni Yuan; Chao Shen; Hongyuan Zha
Cascading Failures in Smart Grids under Random, Targeted and Adaptive Attacks. (1%)Sushmita Ruj; Arindam Pal
2022-06-24
Defending Backdoor Attacks on Vision Transformer via Patch Processing. (99%)Khoa D. Doan; Yingjie Lao; Peng Yang; Ping Li
AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail Problems. (96%)Wenzheng Hou; Qianqian Xu; Zhiyong Yang; Shilong Bao; Yuan He; Qingming Huang
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective. (92%)Mark Huasong Meng; Guangdong Bai; Sin Gee Teo; Zhe Hou; Yan Xiao; Yun Lin; Jin Song Dong
Robustness of Explanation Methods for NLP Models. (82%)Shriya Atmakuri; Tejas Chheda; Dinesh Kandula; Nishant Yadav; Taesung Lee; Hessel Tuinhof
zPROBE: Zero Peek Robustness Checks for Federated Learning. (4%)Zahra Ghodsi; Mojan Javaheripi; Nojan Sheybani; Xinqiao Zhang; Ke Huang; Farinaz Koushanfar
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems. (2%)D'Jeff Kanda Nkashama; Arian Soltani; Jean-Charles Verdier; Marc Frappier; Pierre-Martin Tardif; Froduald Kabanza
2022-06-23
Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
A Framework for Understanding Model Extraction Attack and Defense. (98%)Xun Xian; Mingyi Hong; Jie Ding
Towards End-to-End Private Automatic Speaker Recognition. (76%)Francisco Teixeira; Alberto Abad; Bhiksha Raj; Isabel Trancoso
BERT Rankers are Brittle: a Study using Adversarial Document Perturbations. (75%)Yumeng Wang; Lijun Lyu; Avishek Anand
Never trust, always verify : a roadmap for Trustworthy AI? (1%)Lionel Nganyewou Tidjon; Foutse Khomh
Measuring Representational Robustness of Neural Networks Through Shared Invariances. (1%)Vedant Nanda; Till Speicher; Camila Kolling; John P. Dickerson; Krishna P. Gummadi; Adrian Weller
2022-06-22
AdvSmo: Black-box Adversarial Attack by Smoothing Linear Structure of Texture. (99%)Hui Xia; Rui Zhang; Shuliang Jiang; Zi Kang
InfoAT: Improving Adversarial Training Using the Information Bottleneck Principle. (98%)Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Robust Universal Adversarial Perturbations. (97%)Changming Xu; Gagandeep Singh
Guided Diffusion Model for Adversarial Purification from Random Noise. (68%)Quanlin Wu; Hang Ye; Yuntian Gu
Understanding the effect of sparsity on neural networks robustness. (61%)Lukas Timpl; Rahim Entezari; Hanie Sedghi; Behnam Neyshabur; Olga Saukh
Shilling Black-box Recommender Systems by Learning to Generate Fake User Profiles. (41%)Chen Lin; Si Chen; Meifang Zeng; Sheng Zhang; Min Gao; Hui Li
2022-06-21
SSMI: How to Make Objects of Interest Disappear without Accessing Object Detectors? (99%)Hui Xia; Rui Zhang; Zi Kang; Shuliang Jiang
Transferable Graph Backdoor Attack. (99%)Shuiqiao Yang; Bao Gia Doan; Paul Montague; Olivier De Vel; Tamas Abraham; Seyit Camtepe; Damith C. Ranasinghe; Salil S. Kanhere
(Certified!!) Adversarial Robustness for Free! (84%)Nicholas Carlini; Florian Tramer; Krishnamurthy Dvijotham; J. Zico Kolter
Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems. (81%)Yanchao Sun; Ruijie Zheng; Parisa Hassanzadeh; Yongyuan Liang; Soheil Feizi; Sumitra Ganesh; Furong Huang
FlashSyn: Flash Loan Attack Synthesis via Counter Example Driven Approximation. (68%)Zhiyang Chen; Sidi Mohamed Beillahi; Fan Long
Natural Backdoor Datasets. (33%)Emily Wenger; Roma Bhattacharjee; Arjun Nitin Bhagoji; Josephine Passananti; Emilio Andere; Haitao Zheng; Ben Y. Zhao
The Privacy Onion Effect: Memorization is Relative. (22%)Nicholas Carlini; Matthew Jagielski; Nicolas Papernot; Andreas Terzis; Florian Tramer; Chiyuan Zhang
ProML: A Decentralised Platform for Provenance Management of Machine Learning Software Systems. (1%)Nguyen Khoi Tran; Bushra Sabir; M. Ali Babar; Nini Cui; Mehran Abolhasan; Justin Lipman
2022-06-20
Understanding Robust Learning through the Lens of Representation Similarities. (99%)Christian Cianfarani; Arjun Nitin Bhagoji; Vikash Sehwag; Ben Zhao; Prateek Mittal
Diversified Adversarial Attacks based on Conjugate Gradient Method. (98%)Keiichiro Yamamura; Haruki Sato; Nariaki Tateiwa; Nozomi Hata; Toru Mitsutake; Issa Oe; Hiroki Ishikura; Katsuki Fujisawa
Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum. (76%)Junlin Wu; Yevgeniy Vorobeychik
SafeBench: A Benchmarking Platform for Safety Evaluation of Autonomous Vehicles. (5%)Chejian Xu; Wenhao Ding; Weijie Lyu; Zuxin Liu; Shuai Wang; Yihan He; Hanjiang Hu; Ding Zhao; Bo Li
Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities. (1%)Julian Bitterwolf; Alexander Meinke; Maximilian Augustin; Matthias Hein
2022-06-19
On the Limitations of Stochastic Pre-processing Defenses. (99%)Yue Gao; Ilia Shumailov; Kassem Fawaz; Nicolas Papernot
Towards Adversarial Attack on Vision-Language Pre-training Models. (98%)Jiaming Zhang; Qi Yi; Jitao Sang
A Universal Adversarial Policy for Text Classifiers. (98%)Gallil Maimon; Lior Rokach
JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System. (68%)Jiaming Zhang; Qi Yi; Jitao Sang
Adversarially trained neural representations may already be as robust as corresponding biological neural representations. (31%)Chong Guo; Michael J. Lee; Guillaume Leclerc; Joel Dapello; Yug Rao; Aleksander Madry; James J. DiCarlo
2022-06-18
Demystifying the Adversarial Robustness of Random Transformation Defenses. (99%)Chawin Sitawarin; Zachary Golan-Strieb; David Wagner
On the Role of Generalization in Transferability of Adversarial Examples. (99%)Yilin Wang; Farzan Farnia
DECK: Model Hardening for Defending Pervasive Backdoors. (98%)Guanhong Tao; Yingqi Liu; Siyuan Cheng; Shengwei An; Zhuo Zhang; Qiuling Xu; Guangyu Shen; Xiangyu Zhang
Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning. (10%)Marin Matsumoto; Tsubasa Takahashi; Seng Pei Liew; Masato Oguchi
Adversarial Scrutiny of Evidentiary Statistical Software. (2%)Rediet Abebe; Moritz Hardt; Angela Jin; John Miller; Ludwig Schmidt; Rebecca Wexler
2022-06-17
Detecting Adversarial Examples in Batches -- a geometrical approach. (99%)Danush Kumar Venkatesh; Peter Steinbach
Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation. (99%)Wen Sun; Jian Jin; Weisi Lin
Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization. (99%)Deokjae Lee; Seungyong Moon; Junhyeok Lee; Hyun Oh Song
Comment on Transferability and Input Transformation with Additive Noise. (99%)Hoki Kim; Jinseong Park; Jaewook Lee
Adversarial Robustness is at Odds with Lazy Training. (98%)Yunjuan Wang; Enayat Ullah; Poorya Mianjy; Raman Arora
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection. (83%)Jinyin Chen; Chengyu Jia; Haibin Zheng; Ruoxi Chen; Chenbo Fu
RetrievalGuard: Provably Robust 1-Nearest Neighbor Image Retrieval. (81%)Yihan Wu; Hongyang Zhang; Heng Huang
The Consistency of Adversarial Training for Binary Classification. (26%)Natalie S. Frank; Jonathan Niles-Weed
Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification. (15%)Natalie S. Frank
Understanding Robust Overfitting of Adversarial Training and Beyond. (8%)Chaojian Yu; Bo Han; Li Shen; Jun Yu; Chen Gong; Mingming Gong; Tongliang Liu
2022-06-16
Adversarial Privacy Protection on Speech Enhancement. (99%)Mingyu Dong; Diqun Yan; Rangding Wang
Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge. (99%)Dingcheng Yang; Zihao Xiao; Wenjian Yu
Analysis and Extensions of Adversarial Training for Video Classification. (93%)Kaleab A. Kinfu; René Vidal
Double Sampling Randomized Smoothing. (89%)Linyi Li; Jiawei Zhang; Tao Xie; Bo Li
Adversarial Robustness of Graph-based Anomaly Detection. (76%)Yulin Zhu; Yuni Lai; Kaifa Zhao; Xiapu Luo; Mingquan Yuan; Jian Ren; Kai Zhou
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. (68%)Ganqu Cui; Lifan Yuan; Bingxiang He; Yangyi Chen; Zhiyuan Liu; Maosong Sun
Backdoor Attacks on Vision Transformers. (31%)Akshayvarun Subramanya; Aniruddha Saha; Soroush Abbasi Koohpayegani; Ajinkya Tejankar; Hamed Pirsiavash
Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey. (22%)Abhijith Sharma; Yijun Bian; Phil Munz; Apurva Narayan
Catastrophic overfitting is a bug but also a feature. (16%)Guillermo Ortiz-Jiménez; Pau de Jorge; Amartya Sanyal; Adel Bibi; Puneet K. Dokania; Pascal Frossard; Grégory Rogez; Philip H. S. Torr
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences. (5%)Daryna Oliynyk; Rudolf Mayer; Andreas Rauber
Gradient-Based Adversarial and Out-of-Distribution Detection. (2%)Jinsol Lee; Mohit Prabhushankar; Ghassan AlRegib
"Understanding Robustness Lottery": A Comparative Visual Analysis of Neural Network Pruning Approaches. (1%)Zhimin Li; Shusen Liu; Xin Yu; Bhavya Kailkhura; Jie Cao; James Daniel Diffenderfer; Peer-Timo Bremer; Valerio Pascucci
2022-06-15
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack. (99%)Ruize Gao; Jiongxiao Wang; Kaiwen Zhou; Feng Liu; Binghui Xie; Gang Niu; Bo Han; James Cheng
Morphence-2.0: Evasion-Resilient Moving Target Defense Powered by Out-of-Distribution Detection. (99%)Abderrahmen Amich; Ata Kaboudi; Birhanu Eshete
Architectural Backdoors in Neural Networks. (83%)Mikel Bober-Irizar; Ilia Shumailov; Yiren Zhao; Robert Mullins; Nicolas Papernot
Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning. (75%)Jonah O'Brien Weiss; Tiago Alves; Sandip Kundu
Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. (74%)Tianlong Chen; Huan Zhang; Zhenyu Zhang; Shiyu Chang; Sijia Liu; Pin-Yu Chen; Zhangyang Wang
A Search-Based Testing Approach for Deep Reinforcement Learning Agents. (62%)Amirhossein Zolfagharian; Manel Abdellatif; Lionel Briand; Mojtaba Bagherzadeh; Ramesh S
Can pruning improve certified robustness of neural networks? (56%)Zhangheng Li; Tianlong Chen; Linyi Li; Bo Li; Zhangyang Wang
Improving Diversity with Adversarially Learned Transformations for Domain Generalization. (33%)Tejas Gokhale; Rushil Anirudh; Jayaraman J. Thiagarajan; Bhavya Kailkhura; Chitta Baral; Yezhou Yang
Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning. (11%)Tianlong Chen; Sijia Liu; Shiyu Chang; Lisa Amini; Zhangyang Wang
The Manifold Hypothesis for Gradient-Based Explanations. (2%)Sebastian Bordt; Uddeshya Upadhyay; Zeynep Akata; Ulrike von Luxburg
READ: Aggregating Reconstruction Error into Out-of-distribution Detection. (1%)Wenyu Jiang; Hao Cheng; Mingcai Chen; Shuai Feng; Yuxin Ge; Chongjun Wang
2022-06-14
Adversarial Vulnerability of Randomized Ensembles. (99%)Hassan Dbouk; Naresh R. Shanbhag
Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training. (99%)B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
Efficiently Training Low-Curvature Neural Networks. (92%)Suraj Srinivas; Kyle Matoba; Himabindu Lakkaraju; Francois Fleuret
Proximal Splitting Adversarial Attacks for Semantic Segmentation. (92%)Jérôme Rony; Jean-Christophe Pesquet; Ismail Ben Ayed
Defending Observation Attacks in Deep Reinforcement Learning via Detection and Denoising. (88%)Zikang Xiong; Joe Eappen; He Zhu; Suresh Jagannathan
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective. (88%)Mathieu Serrurier; Franck Mamalet; Thomas Fel; Louis Béthune; Thibaut Boissin
Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO. (86%)Javier Rando; Nasib Naimi; Thomas Baumann; Max Mathys
Turning a Curse Into a Blessing: Enabling Clean-Data-Free Defenses by Model Inversion. (68%)Si Chen; Yi Zeng; Won Park; Ruoxi Jia
Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises. (62%)Minkyu Choi; Yizhen Zhang; Kuan Han; Xiaokai Wang; Zhongming Liu
Attacks on Perception-Based Control Systems: Modeling and Fundamental Limits. (2%)Amir Khazraei; Henry Pfister; Miroslav Pajic
A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise. (1%)Jongwoo Ko; Bongsoo Yi; Se-Young Yun
A Survey on Gradient Inversion: Attacks, Defenses and Future Directions. (1%)Rui Zhang; Song Guo; Junxiao Wang; Xin Xie; Dacheng Tao
2022-06-13
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations. (99%)Kaustubh Sridhar; Souradeep Dutta; Ramneet Kaur; James Weimer; Oleg Sokolsky; Insup Lee
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale. (99%)Gaoyuan Zhang; Songtao Lu; Yihua Zhang; Xiangyi Chen; Pin-Yu Chen; Quanfu Fan; Lee Martie; Lior Horesh; Mingyi Hong; Sijia Liu
Pixel to Binary Embedding Towards Robustness for CNNs. (47%)Ikki Kishida; Hideki Nakayama
Towards Understanding Sharpness-Aware Minimization. (1%)Maksym Andriushchenko; Nicolas Flammarion
An adversarially robust data-market for spatial, crowd-sourced data. (1%)Aida Manzano Kharman; Christian Jursitzky; Quan Zhou; Pietro Ferraro; Jakub Marecek; Pierre Pinson; Robert Shorten
Efficient Human-in-the-loop System for Guiding DNNs Attention. (1%)Yi He; Xi Yang; Chia-Ming Chang; Haoran Xie; Takeo Igarashi
2022-06-12
Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation. (98%)Chengyang Ying; You Qiaoben; Xinning Zhou; Hang Su; Wenbo Ding; Jianyong Ai
Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems. (92%)Zahra Jadidi; Shantanu Pal; Nithesh Nayak K; Arawinkumaar Selvakkumar; Chih-Chia Chang; Maedeh Beheshti; Alireza Jolfaei
Darknet Traffic Classification and Adversarial Attacks. (81%)Nhien Rust-Nguyen; Mark Stamp
InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness. (26%)Shruthi Gowda; Bahram Zonooz; Elahe Arani
RSSD: Defend against Ransomware with Hardware-Isolated Network-Storage Codesign and Post-Attack Analysis. (9%)Benjamin Reidys; Peng Liu; Jian Huang
Neurotoxin: Durable Backdoors in Federated Learning. (5%)Zhengming Zhang; Ashwinee Panda; Linyue Song; Yaoqing Yang; Michael W. Mahoney; Joseph E. Gonzalez; Kannan Ramchandran; Prateek Mittal
An Efficient Method for Sample Adversarial Perturbations against Nonlinear Support Vector Machines. (4%)Wen Su; Qingna Li
2022-06-11
Improving the Adversarial Robustness of NLP Models by Information Bottleneck. (99%)Cenyuan Zhang; Xiang Zhou; Yixin Wan; Xiaoqing Zheng; Kai-Wei Chang; Cho-Jui Hsieh
Defending Adversarial Examples by Negative Correlation Ensemble. (99%)Wenjian Luo; Hongwei Zhang; Linghao Kong; Zhijian Chen; Ke Tang
NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks. (81%)Nuo Xu; Binghui Wang; Ran Ran; Wujie Wen; Parv Venkitasubramaniam
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks. (69%)Xiong Peng; Feng Liu; Jingfen Zhang; Long Lan; Junjie Ye; Tongliang Liu; Bo Han
2022-06-10
Localized adversarial artifacts for compressed sensing MRI. (76%)Rima Alaifari; Giovanni S. Alberti; Tandri Gauksson
Rethinking the Defense Against Free-rider Attack From the Perspective of Model Weight Evolving Frequency. (70%)Jinyin Chen; Mingjun Li; Tao Liu; Haibin Zheng; Yao Cheng; Changting Lin
Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning. (33%)Shenghui Li; Edith Ngai; Fanghua Ye; Li Ju; Tianru Zhang; Thiemo Voigt
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers. (9%)Nan Luo; Yuanzhang Li; Yajie Wang; Shangbo Wu; Yu-an Tan; Quanxin Zhang
Deep Leakage from Model in Federated Learning. (3%)Zihao Zhao; Mengen Luo; Wenbo Ding
Adversarial Counterfactual Environment Model Learning. (1%)Xiong-Hui Chen; Yang Yu; Zheng-Mao Zhu; Zhihua Yu; Zhenjun Chen; Chenghe Wang; Yinan Wu; Hongqiu Wu; Rong-Jun Qin; Ruijin Ding; Fangsheng Huang
2022-06-09
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models. (99%)Federico Nesti; Giulio Rossolini; Gianluca D'Amico; Alessandro Biondi; Giorgio Buttazzo
ReFace: Real-time Adversarial Attacks on Face Recognition Systems. (99%)Shehzeen Hussain; Todd Huster; Chris Mesterharm; Paarth Neekhara; Kevin An; Malhar Jere; Harshvardhan Sikka; Farinaz Koushanfar
Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks. (98%)Huishuai Zhang; Da Yu; Yiping Lu; Di He
Meet You Halfway: Explaining Deep Learning Mysteries. (92%)Oriel BenShmuel
Early Transferability of Adversarial Examples in Deep Neural Networks. (86%)Oriel BenShmuel
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing. (86%)Zhongkai Hao; Chengyang Ying; Yinpeng Dong; Hang Su; Jun Zhu; Jian Song
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. (84%)Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal Md Shoeb; Abubakar Abid; Adam Fisch; Adam R. Brown; Adam Santoro; Aditya Gupta; Adrià Garriga-Alonso; et al.
Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. (41%)Tianlong Chen; Zhenyu Zhang; Sijia Liu; Yang Zhang; Shiyu Chang; Zhangyang Wang
DORA: Exploring outlier representations in Deep Neural Networks. (1%)Kirill Bykov; Mayukh Deb; Dennis Grinwald; Klaus-Robert Müller; Marina M. -C. Höhne
Membership Inference via Backdooring. (1%)Hongsheng Hu; Zoran Salcic; Gillian Dobbie; Jinjun Chen; Lichao Sun; Xuyun Zhang
2022-06-08
Wavelet Regularization Benefits Adversarial Training. (99%)Jun Yan; Huilin Yin; Xiaoyang Deng; Ziming Zhao; Wancheng Ge; Hao Zhang; Gerhard Rigoll
Latent Boundary-guided Adversarial Training. (99%)Xiaowei Zhou; Ivor W. Tsang; Jie Yin
Adversarial Text Normalization. (73%)Joanna Bitton; Maya Pavlova; Ivan Evtimov
Autoregressive Perturbations for Data Poisoning. (70%)Pedro Sandoval-Segura; Vasu Singla; Jonas Geiping; Micah Goldblum; Tom Goldstein; David W. Jacobs
Toward Certified Robustness Against Real-World Distribution Shifts. (5%)Haoze Wu; Teruhiro Tagomori; Alexander Robey; Fengjun Yang; Nikolai Matni; George Pappas; Hamed Hassani; Corina Pasareanu; Clark Barrett
Generative Adversarial Networks and Image-Based Malware Classification. (1%)Huy Nguyen; Fabio Di Troia; Genya Ishigaki; Mark Stamp
Robust Deep Ensemble Method for Real-world Image Denoising. (1%)Pengju Liu; Hongzhi Zhang; Jinghui Wang; Yuzhi Wang; Dongwei Ren; Wangmeng Zuo
2022-06-07
Fooling Explanations in Text Classifiers. (99%)Adam Ivankay; Ivan Girardi; Chiara Marchiori; Pascal Frossard
AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Yang Liu
Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Feng Wang; Jiashui Wang
Adaptive Regularization for Adversarial Training. (98%)Dongyoon Yang; Insung Kong; Yongdai Kim
Building Robust Ensembles via Margin Boosting. (83%)Dinghuai Zhang; Hongyang Zhang; Aaron Courville; Yoshua Bengio; Pradeep Ravikumar; Arun Sai Suggala
On the Permanence of Backdoors in Evolving Models. (67%)Huiying Li; Arjun Nitin Bhagoji; Yuxin Chen; Haitao Zheng; Ben Y. Zhao
Subject Membership Inference Attacks in Federated Learning. (4%)Anshuman Suri; Pallika Kanani; Virendra J. Marathe; Daniel W. Peterson
Adversarial Reprogramming Revisited. (3%)Matthias Englert; Ranko Lazic
Certifying Data-Bias Robustness in Linear Regression. (1%)Anna P. Meyer; Aws Albarghouthi; Loris D'Antoni
Parametric Chordal Sparsity for SDP-based Neural Network Verification. (1%)Anton Xue; Lars Lindemann; Rajeev Alur
Can CNNs Be More Robust Than Transformers? (1%)Zeyu Wang; Yutong Bai; Yuyin Zhou; Cihang Xie
2022-06-06
Robust Adversarial Attacks Detection based on Explainable Deep Reinforcement Learning For UAV Guidance and Planning. (99%)Thomas Hickling; Nabil Aouf; Phillippa Spencer
Fast Adversarial Training with Adaptive Step Size. (98%)Zhichao Huang; Yanbo Fan; Chen Liu; Weizhong Zhang; Yong Zhang; Mathieu Salzmann; Sabine Süsstrunk; Jue Wang
Certified Robustness in Federated Learning. (87%)Motasem Alfarra; Juan C. Pérez; Egor Shulgin; Peter Richtárik; Bernard Ghanem
Robust Image Protection Countering Cropping Manipulation. (12%)Qichao Ying; Hang Zhou; Zhenxing Qian; Sheng Li; Xinpeng Zhang
PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN Model. (3%)Xuefeng Fan; Hangyu Gui; Xiaoyi Zhou
Tackling covariate shift with node-based Bayesian neural networks. (1%)Trung Trinh; Markus Heinonen; Luigi Acerbi; Samuel Kaski
Anomaly Detection with Test Time Augmentation and Consistency Evaluation. (1%)Haowei He; Jiaye Teng; Yang Yuan
2022-06-05
Federated Adversarial Training with Transformers. (98%)Ahmed Aldahdooh; Wassim Hamidouche; Olivier Déforges
Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training. (98%)Guodong Cao; Zhibo Wang; Xiaowei Dong; Zhifei Zhang; Hengchang Guo; Zhan Qin; Kui Ren
Which models are innately best at uncertainty estimation? (1%)Ido Galil; Mohammed Dabbah; Ran El-Yaniv
2022-06-04
Soft Adversarial Training Can Retain Natural Accuracy. (76%)Abhijith Sharma; Apurva Narayan
2022-06-03
Saliency Attack: Towards Imperceptible Black-box Adversarial Attack. (99%)Zeyu Dai; Shengcai Liu; Ke Tang; Qing Li
Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis. (96%)Raphael Ettedgui; Alexandre Araujo; Rafael Pinot; Yann Chevaleyre; Jamal Atif
Evaluating Transfer-based Targeted Adversarial Perturbations against Real-World Computer Vision Systems based on Human Judgments. (92%)Zhengyu Zhao; Nga Dang; Martha Larson
A Robust Backpropagation-Free Framework for Images. (80%)Timothy Zee; Alexander G. Ororbia; Ankur Mali; Ifeoma Nwogu
Gradient Obfuscation Checklist Test Gives a False Sense of Security. (73%)Nikola Popovic; Danda Pani Paudel; Thomas Probst; Luc Van Gool
Kallima: A Clean-label Framework for Textual Backdoor Attacks. (26%)Xiaoyi Chen; Yinpeng Dong; Zeyu Sun; Shengfang Zhai; Qingni Shen; Zhonghai Wu
2022-06-02
Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
FACM: Intermediate Layer Still Retain Effective Features against Adversarial Examples. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection. (99%)Linhai Ma; Liang Liang
Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline. (99%)Junjian Li; Honglong Chen
Adversarial Laser Spot: Robust and Covert Physical Adversarial Attack to DNNs. (98%)Chengyin Hu
Adversarial Unlearning: Reducing Confidence Along Adversarial Directions. (31%)Amrith Setlur; Benjamin Eysenbach; Virginia Smith; Sergey Levine
MaxStyle: Adversarial Style Composition for Robust Medical Image Segmentation. (8%)Chen Chen; Zeju Li; Cheng Ouyang; Matt Sinclair; Wenjia Bai; Daniel Rueckert
A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection. (4%)Wei Guo; Benedetta Tondi; Mauro Barni
Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling. (1%)Jian Hu; Haowen Zhong; Junchi Yan; Shaogang Gong; Guile Wu; Fei Yang
2022-06-01
On the reversibility of adversarial attacks. (99%)Chau Yi Li; Ricardo Sánchez-Matilla; Ali Shahin Shamsabadi; Riccardo Mazzon; Andrea Cavallaro
NeuroUnlock: Unlocking the Architecture of Obfuscated Deep Neural Networks. (99%)Mahya Morid Ahmadi; Lilas Alrahis; Alessio Colucci; Ozgur Sinanoglu; Muhammad Shafique
Attack-Agnostic Adversarial Detection. (99%)Jiaxin Cheng; Mohamed Hussein; Jay Billa; Wael AbdAlmageed
On the Perils of Cascading Robust Classifiers. (98%)Ravi Mangal; Zifan Wang; Chi Zhang; Klas Leino; Corina Pasareanu; Matt Fredrikson
Anti-Forgery: Towards a Stealthy and Robust DeepFake Disruption Attack via Adversarial Perceptual-aware Perturbations. (98%)Run Wang; Ziheng Huang; Zhikai Chen; Li Liu; Jing Chen; Lina Wang
Support Vector Machines under Adversarial Label Contamination. (97%)Huang Xiao; Battista Biggio; Blaine Nelson; Han Xiao; Claudia Eckert; Fabio Roli
Defense Against Gradient Leakage Attacks via Learning to Obscure Data. (80%)Yuxuan Wan; Han Xu; Xiaorui Liu; Jie Ren; Wenqi Fan; Jiliang Tang
The robust way to stack and bag: the local Lipschitz way. (70%)Thulasi Tholeti; Sheetal Kalyani
Robustness Evaluation and Adversarial Training of an Instance Segmentation Model. (54%)Jacob Bond; Andrew Lingg
RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model. (1%)Hangzhi Guo; Feiran Jia; Jinghui Chen; Anna Squicciarini; Amulya Yadav
2022-05-31
Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems. (99%)Zeyan Liu; Fengjun Li; Jingqiang Lin; Zhu Li; Bo Luo
Exact Feature Collisions in Neural Networks. (95%)Utku Ozbulak; Manvel Gasparyan; Shodhan Rao; Wesley De Neve; Arnout Van Messem
CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models. (93%)Akshita Jha; Chandan K. Reddy
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences. (83%)Shang Wang; Yansong Gao; Anmin Fu; Zhi Zhang; Yuqing Zhang; Willy Susilo
Semantic Autoencoder and Its Potential Usage for Adversarial Attack. (81%)Yurui Ming; Cuihuan Du; Chin-Teng Lin
An Effective Fusion Method to Enhance the Robustness of CNN. (80%)Yating Ma; Zhichao Lian
Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models. (64%)Kaiji Lu; Anupam Datta
Generative Models with Information-Theoretic Protection Against Membership Inference Attacks. (10%)Parisa Hassanzadeh; Robert E. Tillman
Likelihood-Free Inference with Generative Neural Networks via Scoring Rule Minimization. (1%)Lorenzo Pacchiardi; Ritabrata Dutta
2022-05-30
Domain Constraints in Feature Space: Strengthening Robustness of Android Malware Detection against Realizable Adversarial Examples. (99%)Hamid Bostani; Zhuoran Liu; Zhengyu Zhao; Veelasha Moonsamy
Searching for the Essence of Adversarial Perturbations. (99%)Dennis Y. Menn; Tzu-hsun Feng; Hung-yi Lee
Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models. (99%)Songlin Yang; Wei Wang; Chenye Xu; Ziwen He; Bo Peng; Jing Dong
Guided Diffusion Model for Adversarial Purification. (99%)Jinyi Wang; Zhaoyang Lyu; Dahua Lin; Bo Dai; Hongfei Fu
Why Adversarial Training of ReLU Networks Is Difficult? (68%)Xu Cheng; Hao Zhang; Yue Xin; Wen Shen; Jie Ren; Quanshi Zhang
CalFAT: Calibrated Federated Adversarial Training with Label Skewness. (67%)Chen Chen; Yuchen Liu; Xingjun Ma; Lingjuan Lyu
Securing AI-based Healthcare Systems using Blockchain Technology: A State-of-the-Art Systematic Literature Review and Future Research Directions. (15%)Rucha Shinde; Shruti Patil; Ketan Kotecha; Vidyasagar Potdar; Ganeshsree Selvachandran; Ajith Abraham
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning. (13%)Yinglun Xu; Qi Zeng; Gagandeep Singh
White-box Membership Attack Against Machine Learning Based Retinopathy Classification. (10%)Mounia Hamidouche; Reda Bellafqira; Gwenolé Quellec; Gouenou Coatrieux
Fool SHAP with Stealthily Biased Sampling. (2%)Gabriel Laberge; Ulrich Aïvodji; Satoshi Hara; Mario Marchand; Foutse Khomh
Snoopy: A Webpage Fingerprinting Framework with Finite Query Model for Mass-Surveillance. (2%)Gargi Mitra; Prasanna Karthik Vairam; Sandip Saha; Nitin Chandrachoodan; V. Kamakoti
2022-05-29
Robust Weight Perturbation for Adversarial Training. (99%)Chaojian Yu; Bo Han; Mingming Gong; Li Shen; Shiming Ge; Bo Du; Tongliang Liu
Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks. (99%)Eyad Shtaiwi; Ahmed El Ouadrhiri; Majid Moradikia; Salma Sultana; Ahmed Abdelhadi; Zhu Han
Unfooling Perturbation-Based Post Hoc Explainers. (98%)Zachariah Carmichael; Walter J Scheirer
On the Robustness of Safe Reinforcement Learning under Observational Perturbations. (93%)Zuxin Liu; Zijian Guo; Zhepeng Cen; Huan Zhang; Jie Tan; Bo Li; Ding Zhao
Superclass Adversarial Attack. (80%)Soichiro Kumano; Hiroshi Kera; Toshihiko Yamasaki
Problem-Space Evasion Attacks in the Android OS: a Survey. (50%)Harel Berger; Chen Hajaj; Amit Dvir
Context-based Virtual Adversarial Training for Text Classification with Noisy Labels. (11%)Do-Myoung Lee; Yeachan Kim; Chang-gyun Seo
A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks. (1%)Binyan Hu; Yu Sun; A. K. Qin
2022-05-28
Contributor-Aware Defenses Against Adversarial Backdoor Attacks. (98%)Glenn Dawson; Muhammad Umer; Robi Polikar
BadDet: Backdoor Attacks on Object Detection. (92%)Shih-Han Chan; Yinpeng Dong; Jun Zhu; Xiaolu Zhang; Jun Zhou
Syntax-Guided Program Reduction for Understanding Neural Code Intelligence Models. (62%)Md Rafiqul Islam Rabin; Aftab Hussain; Mohammad Amin Alipour
2022-05-27
fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating Weather Conditions on the Camera Lens of Autonomous Systems. (96%)Alberto Marchisio; Giovanni Caramia; Maurizio Martina; Muhammad Shafique
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power. (95%)Binghui Li; Jikai Jin; Han Zhong; John E. Hopcroft; Liwei Wang
Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction. (93%)Ruochen Jiao; Xiangguo Liu; Takami Sato; Qi Alfred Chen; Qi Zhu
Defending Against Stealthy Backdoor Attacks. (73%)Sangeet Sagar; Abhinav Bhatt; Abhijith Srinivas Bidaralli
EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks. (13%)Runlin Lei; Zhen Wang; Yaliang Li; Bolin Ding; Zhewei Wei
2022-05-26
A Physical-World Adversarial Attack Against 3D Face Recognition. (99%)Yanjie Li; Yiquan Li; Bin Xiao
Transferable Adversarial Attack based on Integrated Gradients. (99%)Yi Huang; Adams Wai-Kin Kong
MALICE: Manipulation Attacks on Learned Image ComprEssion. (99%)Kang Liu; Di Wu; Yiru Wang; Dan Feng; Benjamin Tan; Siddharth Garg
Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors. (98%)Avishag Shapira; Alon Zolfi; Luca Demetrio; Battista Biggio; Asaf Shabtai
Circumventing Backdoor Defenses That Are Based on Latent Separability. (96%)Xiangyu Qi; Tinghao Xie; Yiming Li; Saeed Mahloujifar; Prateek Mittal
An Analytic Framework for Robust Training of Artificial Neural Networks. (93%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Adversarial attacks and defenses in Speaker Recognition Systems: A survey. (81%)Jiahe Lan; Rui Zhang; Zheng Yan; Jie Wang; Yu Chen; Ronghui Hou
PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. (81%)Manaar Alam; Esha Sarkar; Michail Maniatakos
BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning. (81%)Zhenting Wang; Juan Zhai; Shiqing Ma
R-HTDetector: Robust Hardware-Trojan Detection Based on Adversarial Training. (80%)Kento Hasegawa; Seira Hidano; Kohei Nozawa; Shinsaku Kiyomoto; Nozomu Togawa
BagFlip: A Certified Defense against Data Poisoning. (75%)Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Towards A Proactive ML Approach for Detecting Backdoor Poison Samples. (67%)Xiangyu Qi; Tinghao Xie; Jiachen T. Wang; Tong Wu; Saeed Mahloujifar; Prateek Mittal
Membership Inference Attack Using Self Influence Functions. (45%)Gilad Cohen; Raja Giryes
MemeTector: Enforcing deep focus for meme detection. (1%)Christos Koutlis; Manos Schinas; Symeon Papadopoulos
ES-GNN: Generalizing Graph Neural Networks Beyond Homophily with Edge Splitting. (1%)Jingwei Guo; Kaizhu Huang; Rui Zhang; Xinping Yi
2022-05-25
Surprises in adversarially-trained linear regression. (87%)Antônio H. Ribeiro; Dave Zachariah; Thomas B. Schön
BITE: Textual Backdoor Attacks with Iterative Trigger Injection. (75%)Jun Yan; Vansh Gupta; Xiang Ren
Impartial Games: A Challenge for Reinforcement Learning. (10%)Bei Zhou; Søren Riis
How explainable are adversarially-robust CNNs? (8%)Mehdi Nourelahi; Lars Kotthoff; Peijie Chen; Anh Nguyen
2022-05-24
Defending a Music Recommender Against Hubness-Based Adversarial Attacks. (99%)Katharina Hoedt; Arthur Flexer; Gerhard Widmer
Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks. (99%)Sizhe Chen; Zhehao Huang; Qinghua Tao; Yingwen Wu; Cihang Xie; Xiaolin Huang
Certified Robustness Against Natural Language Attacks by Causal Intervention. (98%)Haiteng Zhao; Chang Ma; Xinshuai Dong; Anh Tuan Luu; Zhi-Hong Deng; Hanwang Zhang
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks. (92%)Shutong Wu; Sizhe Chen; Cihang Xie; Xiaolin Huang
Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation. (64%)Xiaoguang Li; Neil Zhenqiang Gong; Ninghui Li; Wenhai Sun; Hui Li
WeDef: Weakly Supervised Backdoor Defense for Text Classification. (56%)Lesheng Jin; Zihan Wang; Jingbo Shang
Recipe2Vec: Multi-modal Recipe Representation Learning with Graph Neural Networks. (50%)Yijun Tian; Chuxu Zhang; Zhichun Guo; Yihong Ma; Ronald Metoyer; Nitesh V. Chawla
EBM Life Cycle: MCMC Strategies for Synthesis, Defense, and Density Modeling. (10%)Mitch Hill; Jonathan Mitchell; Chu Chen; Yuan Du; Mubarak Shah; Song-Chun Zhu
Comprehensive Privacy Analysis on Federated Recommender System against Attribute Inference Attacks. (9%)Shijie Zhang; Hongzhi Yin
Fast & Furious: Modelling Malware Detection as Evolving Data Streams. (2%)Fabrício Ceschin; Marcus Botacin; Heitor Murilo Gomes; Felipe Pinagé; Luiz S. Oliveira; André Grégio
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. (2%)Tianlong Chen; Zhenyu Zhang; Yihua Zhang; Shiyu Chang; Sijia Liu; Zhangyang Wang
CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing. (1%)Zhiwei Hao; Yong Luo; Zhi Wang; Han Hu; Jianping An
2022-05-23
Collaborative Adversarial Training. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Alleviating Robust Overfitting of Adversarial Training With Consistency Regularization. (98%)Shudong Zhang; Haichang Gao; Tianwei Zhang; Yunyi Zhou; Zihui Wu
Learning to Ignore Adversarial Attacks. (95%)Yiming Zhang; Yangqiaoyu Zhou; Samuel Carton; Chenhao Tan
Towards a Defense against Backdoor Attacks in Continual Federated Learning. (50%)Shuaiqi Wang; Jonathan Hayase; Giulia Fanti; Sewoong Oh
Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation. (10%)Huarui He; Jie Wang; Zhanqiu Zhang; Feng Wu
RCC-GAN: Regularized Compound Conditional GAN for Large-Scale Tabular Data Synthesis. (1%)Mohammad Esmaeilpour; Nourhene Chaalia; Adel Abusitta; Francois-Xavier Devailly; Wissem Maazoun; Patrick Cardinal
2022-05-22
AutoJoin: Efficient Adversarial Training for Robust Maneuvering via Denoising Autoencoder and Joint Learning. (26%)Michael Villarreal; Bibek Poudel; Ryan Wickman; Yu Shen; Weizi Li
Robust Quantity-Aware Aggregation for Federated Learning. (13%)Jingwei Yi; Fangzhao Wu; Huishuai Zhang; Bin Zhu; Tao Qi; Guangzhong Sun; Xing Xie
Analysis of functional neural codes of deep learning models. (10%)Jung Hoon Lee; Sujith Vijayan
2022-05-21
Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models. (99%)Shawn Shan; Wenxin Ding; Emily Wenger; Haitao Zheng; Ben Y. Zhao
Gradient Concealment: Free Lunch for Defending Adversarial Attacks. (99%)Sen Pei; Jiaxi Sun; Xiaopeng Zhang; Gaofeng Meng
Phrase-level Textual Adversarial Attack with Label Preservation. (99%)Yibin Lei; Yu Cao; Dianqi Li; Tianyi Zhou; Meng Fang; Mykola Pechenizkiy
On the Feasibility and Generality of Patch-based Adversarial Attacks on Semantic Segmentation Problems. (16%)Soma Kontar; Andras Horvath
2022-05-20
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness. (99%)Jiankai Jin; Olga Ohrimenko; Benjamin I. P. Rubinstein
Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification. (98%)Jungeum Kim; Xiao Wang
Adversarial joint attacks on legged robots. (86%)Takuto Otomo; Hiroshi Kera; Kazuhiko Kawamoto
Towards Consistency in Adversarial Classification. (82%)Laurent Meunier; Raphaël Ettedgui; Rafael Pinot; Yann Chevaleyre; Jamal Atif
Adversarial Body Shape Search for Legged Robots. (80%)Takaaki Azakami; Hiroshi Kera; Kazuhiko Kawamoto
SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. (64%)Harsh Chaudhari; Matthew Jagielski; Alina Oprea
The developmental trajectory of object recognition robustness: children are like small adults but unlike big deep neural networks. (11%)Lukas S. Huber; Robert Geirhos; Felix A. Wichmann
Vulnerability Analysis and Performance Enhancement of Authentication Protocol in Dynamic Wireless Power Transfer Systems. (10%)Tommaso Bianchi; Surudhi Asokraj; Alessandro Brighente; Mauro Conti; Radha Poovendran
Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization. (4%)Javier Del Ser; Alejandro Barredo-Arrieta; Natalia Díaz-Rodríguez; Francisco Herrera; Andreas Holzinger
2022-05-19
Focused Adversarial Attacks. (99%)Thomas Cilloni; Charles Walter; Charles Fleming
Transferable Physical Attack against Object Detection with Separable Attention. (99%)Yu Zhang; Zhiqiang Gong; Yichuang Zhang; YongQian Li; Kangcheng Bin; Jiahao Qi; Wei Xue; Ping Zhong
Gradient Aligned Attacks via a Few Queries. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
On Trace of PGD-Like Adversarial Attacks. (99%)Mo Zhou; Vishal M. Patel
Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification. (98%)Leo Schwinn; Leon Bungert; An Nguyen; René Raab; Falk Pulsmeyer; Doina Precup; Björn Eskofier; Dario Zanca
Defending Against Adversarial Attacks by Energy Storage Facility. (96%)Jiawei Li; Jianxiao Wang; Lin Chen; Yang Yu
Sparse Adversarial Attack in Multi-agent Reinforcement Learning. (82%)Yizheng Hu; Zhihua Zhang
Data Valuation for Offline Reinforcement Learning. (1%)Amir Abolfazli; Gregory Palmer; Daniel Kudenko
2022-05-18
Passive Defense Against 3D Adversarial Point Clouds Through the Lens of 3D Steganalysis. (99%)Jiahao Zhu
Property Unlearning: A Defense Strategy Against Property Inference Attacks. (84%)Joshua Stock; Jens Wettlaufer; Daniel Demmler; Hannes Federrath
Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution. (56%)Zhixin Pan; Prabhat Mishra
Empirical Advocacy of Bio-inspired Models for Robust Image Recognition. (38%)Harshitha Machiraju; Oh-Hyeon Choung; Michael H. Herzog; Pascal Frossard
Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing. (1%)Ryan Feng; Somesh Jha; Atul Prakash
Mitigating Neural Network Overconfidence with Logit Normalization. (1%)Hongxin Wei; Renchunzi Xie; Hao Cheng; Lei Feng; Bo An; Yixuan Li
2022-05-17
Hierarchical Distribution-Aware Testing of Deep Learning. (99%)Wei Huang; Xingyu Zhao; Alec Banks; Victoria Cox; Xiaowei Huang
Bankrupting DoS Attackers Despite Uncertainty. (12%)Trisha Chakraborty; Abir Islam; Valerie King; Daniel Rayborn; Jared Saia; Maxwell Young
A two-steps approach to improve the performance of Android malware detectors. (10%)Nadia Daoudi; Kevin Allix; Tegawendé F. Bissyandé; Jacques Klein
Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability. (2%)Jinwei Xing; Takashi Nagata; Xinyun Zou; Emre Neftci; Jeffrey L. Krichmar
Recovering Private Text in Federated Learning of Language Models. (2%)Samyak Gupta; Yangsibo Huang; Zexuan Zhong; Tianyu Gao; Kai Li; Danqi Chen
Semi-Supervised Building Footprint Generation with Feature and Output Consistency Training. (1%)Qingyu Li; Yilei Shi; Xiao Xiang Zhu
2022-05-16
Attacking and Defending Deep Reinforcement Learning Policies. (99%)Chao Wang
Diffusion Models for Adversarial Purification. (99%)Weili Nie; Brandon Guo; Yujia Huang; Chaowei Xiao; Arash Vahdat; Anima Anandkumar
Robust Representation via Dynamic Feature Aggregation. (84%)Haozhe Liu; Haoqin Ji; Yuexiang Li; Nanjun He; Haoqian Wu; Feng Liu; Linlin Shen; Yefeng Zheng
Sparse Visual Counterfactual Explanations in Image Space. (83%)Valentyn Boreiko; Maximilian Augustin; Francesco Croce; Philipp Berens; Matthias Hein
On the Difficulty of Defending Self-Supervised Learning against Model Extraction. (67%)Adam Dziedzic; Nikita Dhawan; Muhammad Ahmad Kaleem; Jonas Guan; Nicolas Papernot
Transferability of Adversarial Attacks on Synthetic Speech Detection. (47%)Jiacheng Deng; Shunyi Chen; Li Dong; Diqun Yan; Rangding Wang
2022-05-15
Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks. (99%)Siddhartha Datta
Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection. (64%)Fan Wang; Adams Wai-Kin Kong
RoMFAC: A robust mean-field actor-critic reinforcement learning against adversarial perturbations on states. (62%)Ziyuan Zhou; Guanjun Liu
Automation Slicing and Testing for in-App Deep Learning Models. (1%)Hao Wu; Yuhang Gong; Xiaopeng Ke; Hanzhong Liang; Minghao Li; Fengyuan Xu; Yunxin Liu; Sheng Zhong
2022-05-14
Evaluating Membership Inference Through Adversarial Robustness. (98%)Zhaoxi Zhang; Leo Yu Zhang; Xufei Zheng; Bilal Hussain Abbasi; Shengshan Hu
Verifying Neural Networks Against Backdoor Attacks. (2%)Long H. Pham; Jun Sun
2022-05-13
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic. (98%)Hang Wang; Zhen Xiang; David J. Miller; George Kesidis
l-Leaks: Membership Inference Attacks with Logits. (41%)Shuhao Li; Yajie Wang; Yuanzhang Li; Yu-an Tan
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations. (26%)Yongjie Wang; Hangwei Qian; Chunyan Miao
Millimeter-Wave Automotive Radar Spoofing. (2%)Mihai Ordean; Flavio D. Garcia
2022-05-12
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks. (75%)Pascale Gourdeau; Varun Kanade; Marta Kwiatkowska; James Worrell
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. (61%)Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
How to Combine Membership-Inference Attacks on Multiple Updated Models. (11%)Matthew Jagielski; Stanley Wu; Alina Oprea; Jonathan Ullman; Roxana Geambasu
Infrared Invisible Clothing: Hiding from Infrared Detectors at Multiple Angles in Real World. (4%)Xiaopei Zhu; Zhanhao Hu; Siyuan Huang; Jianmin Li; Xiaolin Hu
Smooth-Reduce: Leveraging Patches for Improved Certified Robustness. (2%)Ameya Joshi; Minh Pham; Minsu Cho; Leonid Boytsov; Filipe Condessa; J. Zico Kolter; Chinmay Hegde
Stalloris: RPKI Downgrade Attack. (1%)Tomas Hlavacek; Philipp Jeitner; Donika Mirdita; Haya Shulman; Michael Waidner
2022-05-11
Injection Attacks Reloaded: Tunnelling Malicious Payloads over DNS. (1%)Philipp Jeitner; Haya Shulman
The Hijackers Guide To The Galaxy: Off-Path Taking Over Internet Resources. (1%)Tianxiang Dai; Philipp Jeitner; Haya Shulman; Michael Waidner
A Longitudinal Study of Cryptographic API: a Decade of Android Malware. (1%)Adam Janovsky; Davide Maiorca; Dominik Macko; Vashek Matyas; Giorgio Giacinto
2022-05-10
Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training. (1%)Cheng Xue; Lequan Yu; Pengfei Chen; Qi Dou; Pheng-Ann Heng
White-box Testing of NLP models with Mask Neuron Coverage. (1%)Arshdeep Sekhon; Yangfeng Ji; Matthew B. Dwyer; Yanjun Qi
2022-05-09
Btech thesis report on adversarial attack detection and purification of adverserially attacked images. (99%)Dvij Kalaria
Using Frequency Attention to Make Adversarial Patch Powerful Against Person Detector. (98%)Xiaochun Lei; Chang Lu; Zetao Jiang; Zhaoting Gong; Xiang Cai; Linjun Lu
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks. (97%)Harel Berger; Amit Dvir; Chen Hajaj; Rony Ronen
Model-Contrastive Learning for Backdoor Defense. (87%)Zhihao Yue; Jun Xia; Zhiwei Ling; Ming Hu; Ting Wang; Xian Wei; Mingsong Chen
How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? (61%)Alvin Chan; Yew-Soon Ong; Clement Tan
Federated Multi-Armed Bandits Under Byzantine Attacks. (2%)Ilker Demirel; Yigit Yildirim; Cem Tekin
Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples. (2%)Lina Lin; Hanzhou Wu
2022-05-08
Fingerprint Template Invertibility: Minutiae vs. Deep Templates. (68%)Kanishka P. Wijewardena; Steven A. Grosz; Kai Cao; Anil K. Jain
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. (22%)Jingtao Li; Adnan Siraj Rakin; Xing Chen; Zhezhi He; Deliang Fan; Chaitali Chakrabarti
VPN: Verification of Poisoning in Neural Networks. (9%)Youcheng Sun; Muhammad Usman; Divya Gopinath; Corina S. Păsăreanu
FOLPETTI: A Novel Multi-Armed Bandit Smart Attack for Wireless Networks. (4%)Emilie Bout; Alessandro Brighente; Mauro Conti; Valeria Loscri
PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning Under the Support-Query Shift. (1%)Siyang Jiang; Wei Ding; Hsi-Wen Chen; Ming-Syan Chen
2022-05-07
A Simple Yet Efficient Method for Adversarial Word-Substitute Attack. (99%)Tianle Li; Yi Yang
Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees. (92%)Binghui Wang; Youqi Li; Pan Zhou
2022-05-06
Imperceptible Backdoor Attack: From Input Space to Feature Representation. (68%)Nan Zhong; Zhenxing Qian; Xinpeng Zhang
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data. (26%)Joceline Ziegler; Bjarne Pfitzner; Heinrich Schulz; Axel Saalbach; Bert Arnrich
LPGNet: Link Private Graph Networks for Node Classification. (1%)Aashish Kolluri; Teodora Baluta; Bryan Hooi; Prateek Saxena
Unlimited Lives: Secure In-Process Rollback with Isolated Domains. (1%)Merve Gülmez; Thomas Nyman; Christoph Baumann; Jan Tobias Mühlberg
2022-05-05
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems. (99%)Gaurav Kumar Nayak; Ruchit Rawal; Rohit Lal; Himanshu Patil; Anirban Chakraborty
Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness. (78%)Dávid Szeghy; Mahmoud Aslan; Áron Fóthi; Balázs Mészáros; Zoltán Ádám Milacski; András Lőrincz
Can collaborative learning be private, robust and scalable? (61%)Dmitrii Usynin; Helena Klause; Daniel Rueckert; Georgios Kaissis
Large Scale Transfer Learning for Differentially Private Image Classification. (2%)Harsh Mehta; Abhradeep Thakurta; Alexey Kurakin; Ashok Cutkosky
Are GAN-based Morphs Threatening Face Recognition? (1%)Eklavya Sarkar; Pavel Korshunov; Laurent Colbois; Sébastien Marcel
Heterogeneous Domain Adaptation with Adversarial Neural Representation Learning: Experiments on E-Commerce and Cybersecurity. (1%)Mohammadreza Ebrahimi; Yidong Chai; Hao Helen Zhang; Hsinchun Chen
2022-05-04
Based-CE white-box adversarial attack will not work using super-fitting. (99%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Rethinking Classifier And Adversarial Attack. (98%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. (98%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Sebastiano Vascon; Werner Zellinger; Bernhard A. Moser; Alina Oprea; Battista Biggio; Marcello Pelillo; Fabio Roli
Robust Conversational Agents against Imperceptible Toxicity Triggers. (92%)Ninareh Mehrabi; Ahmad Beirami; Fred Morstatter; Aram Galstyan
Subverting Fair Image Search with Generative Adversarial Perturbations. (83%)Avijit Ghosh; Matthew Jagielski; Christo Wilson
2022-05-03
Adversarial Training for High-Stakes Reliability. (98%)Daniel M. Ziegler; Seraphina Nix; Lawrence Chan; Tim Bauman; Peter Schmidt-Nielsen; Tao Lin; Adam Scherlis; Noa Nabeshima; Ben Weinstein-Raun; Daniel de Haas; Buck Shlegeris; Nate Thomas
Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks. (96%)Jonathan Rusert; Padmini Srinivasan
On the uncertainty principle of neural networks. (3%)Jun-Jie Zhang; Dong-Xiao Zhang; Jian-Nan Chen; Long-Gang Pang
Meta-Cognition. An Inverse-Inverse Reinforcement Learning Approach for Cognitive Radars. (1%)Kunal Pattanayak; Vikram Krishnamurthy; Christopher Berry
2022-05-02
SemAttack: Natural Textual Attacks via Different Semantic Spaces. (96%)Boxin Wang; Chejian Xu; Xiangyu Liu; Yu Cheng; Bo Li
Deep-Attack over the Deep Reinforcement Learning. (93%)Yang Li; Quan Pan; Erik Cambria
Enhancing Adversarial Training with Feature Separability. (92%)Yaxin Li; Xiaorui Liu; Han Xu; Wentao Wang; Jiliang Tang
BERTops: Studying BERT Representations under a Topological Lens. (92%)Jatin Chauhan; Manohar Kaul
MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer. (83%)Shoukun Sun; Min Xian; Aleksandar Vakanski; Hossny Ghanem
Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters. (1%)Ole Christian Eidheim
2022-05-01
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction. (98%)Yong Xie; Dakuo Wang; Pin-Yu Chen; Jinjun Xiong; Sijia Liu; Sanmi Koyejo
DDDM: a Brain-Inspired Framework for Robust Classification. (76%)Xiyuan Chen; Xingyu Li; Yi Zhou; Tianming Yang
Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances. (9%)Shoujie Tong; Qingxiu Dong; Damai Dai; Yifan song; Tianyu Liu; Baobao Chang; Zhifang Sui
A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness. (3%)Jeremiah Zhe Liu; Shreyas Padhy; Jie Ren; Zi Lin; Yeming Wen; Ghassen Jerfel; Zack Nado; Jasper Snoek; Dustin Tran; Balaji Lakshminarayanan
Adversarial Plannning. (2%)Valentin Vie; Ryan Sheatsley; Sophia Beyda; Sushrut Shringarputale; Kevin Chan; Trent Jaeger; Patrick McDaniel
2022-04-30
Optimizing One-pixel Black-box Adversarial Attacks. (82%)Tianxun Zhou; Shubhankar Agrawal; Prateek Manocha
Cracking White-box DNN Watermarks via Invariant Neuron Transforms. (26%)Yifan Yan; Xudong Pan; Yining Wang; Mi Zhang; Min Yang
Loss Function Entropy Regularization for Diverse Decision Boundaries. (1%)Chong Sue Sin
Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees. (1%)Jonathan Brophy; Zayd Hammoudeh; Daniel Lowd
2022-04-29
Adversarial attacks on an optical neural network. (92%)Shuming Jiao; Ziwei Song; Shuiying Xiang
Logically Consistent Adversarial Attacks for Soft Theorem Provers. (2%)Alexander Gaskell; Yishu Miao; Lucia Specia; Francesca Toni
Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation. (1%)Heng Zhu; Qing Ling
2022-04-28
Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations. (99%)Na Liu; Mark Dras; Wei Emma Zhang
Formulating Robustness Against Unforeseen Attacks. (99%)Sihui Dai; Saeed Mahloujifar; Prateek Mittal
Randomized Smoothing under Attack: How Good is it in Pratice? (84%)Thibault Maho; Teddy Furon; Erwan Le Merrer
Improving robustness of language models from a geometry-aware perspective. (68%)Bin Zhu; Zhaoquan Gu; Le Wang; Jinyin Chen; Qi Xuan
Mixup-based Deep Metric Learning Approaches for Incomplete Supervision. (50%)Luiz H. Buris; Daniel C. G. Pedronette; Joao P. Papa; Jurandy Almeida; Gustavo Carneiro; Fabio A. Faria
AGIC: Approximate Gradient Inversion Attack on Federated Learning. (16%)Jin Xu; Chi Hong; Jiyue Huang; Lydia Y. Chen; Jérémie Decouchant
An Online Ensemble Learning Model for Detecting Attacks in Wireless Sensor Networks. (1%)Hiba Tabbaa; Samir Ifzarne; Imad Hafidi
2022-04-27
Adversarial Fine-tune with Dynamically Regulated Adversary. (99%)Pengyue Hou; Ming Zhou; Jie Han; Petr Musilek; Xingyu Li
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame. (98%)Youngjoon Yu; Hong Joo Lee; Hakmin Lee; Yong Man Ro
An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework. (81%)Ehsan Nowroozi; Abhishek; Mohammadreza Mohammadi; Mauro Conti
2022-04-26
Boosting Adversarial Transferability of MLP-Mixer. (99%)Haoran Lyu; Yajie Wang; Yu-an Tan; Huipeng Zhou; Yuhang Zhao; Quanxin Zhang
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping. (99%)Junhao Dong; Yuan Wang; Jianhuang Lai; Xiaohua Xie
Improving the Transferability of Adversarial Examples with Restructure Embedded Patches. (99%)Huipeng Zhou; Yu-an Tan; Yajie Wang; Haoran Lyu; Shangbo Wu; Yuanzhang Li
On Fragile Features and Batch Normalization in Adversarial Training. (97%)Nils Philipp Walter; David Stutz; Bernt Schiele
Mixed Strategies for Security Games with General Defending Requirements. (75%)Rufan Bai; Haoxing Lin; Xinyu Yang; Xiaowei Wu; Minming Li; Weijia Jia
Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios. (26%)Dazhong Rong; Qinming He; Jianhai Chen
Designing Perceptual Puzzles by Differentiating Probabilistic Programs. (13%)Kartik Chandra; Tzu-Mao Li; Joshua Tenenbaum; Jonathan Ragan-Kelley
Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies. (8%)Shaltiel Eloul; Fran Silavong; Sanket Kamthe; Antonios Georgiadis; Sean J. Moran
Performance Analysis of Out-of-Distribution Detection on Trained Neural Networks. (4%)Jens Henriksson; Christian Berger; Markus Borg; Lars Tornberg; Sankar Raman Sathyamoorthy; Cristofer Englund
2022-04-25
Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks. (99%)Jiawei Zhang; Jinwei Wang; Hao Wang; Xiangyang Luo
When adversarial examples are excusable. (89%)Pieter-Jan Kindermans; Charles Staats
A Simple Structure For Building A Robust Model. (81%)Xiao Tan; JingBo Gao; Ruolin Li
Real or Virtual: A Video Conferencing Background Manipulation-Detection System. (67%)Ehsan Nowroozi; Yassine Mekdad; Mauro Conti; Simone Milani; Selcuk Uluagac; Berrin Yanikoglu
Can Rationalization Improve Robustness? (12%)Howard Chen; Jacqueline He; Karthik Narasimhan; Danqi Chen
PhysioGAN: Training High Fidelity Generative Model for Physiological Sensor Readings. (1%)Moustafa Alzantot; Luis Garcia; Mani Srivastava
VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization. (1%)Minghui Chen; Cheng Wen; Feng Zheng; Fengxiang He; Ling Shao
Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications. (1%)Han Cai; Ji Lin; Yujun Lin; Zhijian Liu; Haotian Tang; Hanrui Wang; Ligeng Zhu; Song Han
2022-04-24
A Hybrid Defense Method against Adversarial Attacks on Traffic Sign Classifiers in Autonomous Vehicles. (99%)Zadid Khan; Mashrur Chowdhury; Sakib Mahmud Khan
Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity. (81%)Marco Marchetti; Edmond S. L. Ho
2022-04-23
Smart App Attack: Hacking Deep Learning Models in Android Apps. (98%)Yujin Huang; Chunyang Chen
Towards Data-Free Model Stealing in a Hard Label Setting. (13%)Sunandini Sanyal; Sravanti Addepalli; R. Venkatesh Babu
Reinforced Causal Explainer for Graph Neural Networks. (1%)Xiang Wang; Yingxin Wu; An Zhang; Fuli Feng; Xiangnan He; Tat-Seng Chua
2022-04-22
How Sampling Impacts the Robustness of Stochastic Neural Networks. (99%)Sina Däubener; Asja Fischer
A Tale of Two Models: Constructing Evasive Attacks on Edge Models. (83%)Wei Hao; Aahil Awatramani; Jiayang Hu; Chengzhi Mao; Pin-Chun Chen; Eyal Cidon; Asaf Cidon; Junfeng Yang
Enhancing the Transferability via Feature-Momentum Adversarial Attack. (82%)Xianglong; Yuezun Li; Haipeng Qu; Junyu Dong
Data-Efficient Backdoor Attacks. (76%)Pengfei Xia; Ziqiang Li; Wei Zhang; Bin Li
2022-04-21
A Mask-Based Adversarial Defense Scheme. (99%)Weizhen Xu; Chenyi Zhang; Fangzhen Zhao; Liangda Fang
Is Neuron Coverage Needed to Make Person Detection More Robust? (98%)Svetlana Pavlitskaya; Şiyar Yıkmış; J. Marius Zöllner
Testing robustness of predictions of trained classifiers against naturally occurring perturbations. (98%)Sebastian Scher; Andreas Trügler
Adversarial Contrastive Learning by Permuting Cluster Assignments. (15%)Muntasir Wahed; Afrina Tabassum; Ismini Lourentzou
Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation. (4%)Jun Xia; Ting Wang; Jiepin Ding; Xian Wei; Mingsong Chen
Detecting Topology Attacks against Graph Neural Networks. (1%)Senrong Xu; Yuan Yao; Liangyue Li; Wei Yang; Feng Xu; Hanghang Tong
2022-04-20
Adversarial Scratches: Deployable Attacks to CNN Classifiers. (99%)Loris Giulivi; Malhar Jere; Loris Rossi; Farinaz Koushanfar; Gabriela Ciocarlie; Briland Hitaj; Giacomo Boracchi
GUARD: Graph Universal Adversarial Defense. (99%)Jintang Li; Jie Liao; Ruofan Wu; Liang Chen; Zibin Zheng; Jiawang Dan; Changhua Meng; Weiqiang Wang
Fast AdvProp. (98%)Jieru Mei; Yucheng Han; Yutong Bai; Yixiao Zhang; Yingwei Li; Xianhang Li; Alan Yuille; Cihang Xie
Case-Aware Adversarial Training. (98%)Mingyuan Fan; Yang Liu; Wenzhong Guo; Ximeng Liu; Jianhua Li
Improved Worst-Group Robustness via Classifier Retraining on Independent Splits. (1%)Thien Hang Nguyen; Hongyang R. Zhang; Huy Le Nguyen
2022-04-19
Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks. (99%)Kenneth T. Co; David Martinez-Rego; Zhongyuan Hau; Emil C. Lupu
Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems. (86%)Xugui Zhou; Maxfield Kouzel; Homa Alemzadeh
Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation. (83%)Siyu Lai; Zhen Yang; Fandong Meng; Xue Zhang; Yufeng Chen; Jinan Xu; Jie Zhou
2022-04-18
UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples. (99%)Rahim Taheri
Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge. (99%)Qun Song; Zhenyu Yan; Wenjie Luo; Rui Tan
CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval. (99%)Xunguang Wang; Yiqun Lin; Xiaomeng Li
Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors. (98%)Nyee Thoang Lim; Meng Yi Kuan; Muxin Pu; Mei Kuan Lim; Chun Yong Chong
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. (75%)Enyan Dai; Tianxiang Zhao; Huaisheng Zhu; Junjie Xu; Zhimeng Guo; Hui Liu; Jiliang Tang; Suhang Wang
CorrGAN: Input Transformation Technique Against Natural Corruptions. (70%)Mirazul Haque; Christof J. Budnik; Wei Yang
Poisons that are learned faster are more effective. (64%)Pedro Sandoval-Segura; Vasu Singla; Liam Fowl; Jonas Geiping; Micah Goldblum; David Jacobs; Tom Goldstein
2022-04-17
Residue-Based Natural Language Adversarial Attack Detection. (99%)Vyas Raina; Mark Gales
Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning. (95%)Jun Guo; Yonghong Chen; Yihang Hao; Zixin Yin; Yin Yu; Simin Li
2022-04-16
SETTI: A Self-supervised Adversarial Malware Detection Architecture in an IoT Environment. (95%)Marjan Golmaryami; Rahim Taheri; Zahra Pooranian; Mohammad Shojafar; Pei Xiao
Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case. (67%)Febrianti Wibawa; Ferhat Ozgur Catak; Salih Sarp; Murat Kuzlu; Umit Cali
2022-04-15
Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning. (92%)Mathias Lechner; Alexander Amini; Daniela Rus; Thomas A. Henzinger
2022-04-14
From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Planting Undetectable Backdoors in Machine Learning Models. (99%)Shafi Goldwasser; Michael P. Kim; Vinod Vaikuntanathan; Or Zamir
Q-TART: Quickly Training for Adversarial Robustness and in-Transferability. (50%)Madan Ravi Ganesh; Salimeh Yasaei Sekeh; Jason J. Corso
Robotic and Generative Adversarial Attacks in Offline Writer-independent Signature Verification. (41%)Jordan J. Bird
2022-04-13
Task-Driven Data Augmentation for Vision-Based Robotic Control. (96%)Shubhankar Agarwal; Sandeep P. Chinchali
Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions. (87%)Maria Rigaki; Sebastian Garcia
Defensive Patches for Robust Recognition in the Physical World. (80%)Jiakai Wang; Zixin Yin; Pengfei Hu; Aishan Liu; Renshuai Tao; Haotong Qin; Xianglong Liu; Dacheng Tao
A Novel Approach to Train Diverse Types of Language Models for Health Mention Classification of Tweets. (78%)Pervaiz Iqbal Khan; Imran Razzak; Andreas Dengel; Sheraz Ahmed
Overparameterized Linear Regression under Adversarial Attacks. (76%)Antônio H. Ribeiro; Thomas B. Schön
Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. (38%)Huming Qiu; Hua Ma; Zhi Zhang; Alsharif Abuadbba; Wei Kang; Anmin Fu; Yansong Gao
A Natural Language Processing Approach for Instruction Set Architecture Identification. (1%)Dinuka Sahabandu; Sukarno Mertoguno; Radha Poovendran
2022-04-12
Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS. (99%)Ke He; Dan Dongseong Kim; Jing Sun; Jeong Do Yoo; Young Hun Lee; Huy Kang Kim
Examining the Proximity of Adversarial Examples to Class Manifolds in Deep Networks. (98%)Štefan Pócoš; Iveta Bečková; Igor Farkaš
Toward Robust Spiking Neural Network Against Adversarial Perturbation. (98%)Ling Liang; Kaidi Xu; Xing Hu; Lei Deng; Yuan Xie
Machine Learning Security against Data Poisoning: Are We There Yet? (92%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms. (11%)Saeed Mahloujifar; Alexandre Sablayrolles; Graham Cormode; Somesh Jha
3DeformRS: Certifying Spatial Deformations on Point Clouds. (9%)Gabriel Pérez S.; Juan C. Pérez; Motasem Alfarra; Silvio Giancola; Bernard Ghanem
2022-04-11
A Simple Approach to Adversarial Robustness in Few-shot Image Classification. (98%)Akshayvarun Subramanya; Hamed Pirsiavash
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. (92%)Yi Zeng; Minzhou Pan; Hoang Anh Just; Lingjuan Lyu; Meikang Qiu; Ruoxi Jia
Generalizing Adversarial Explanations with Grad-CAM. (84%)Tanmay Chakraborty; Utkarsh Trehan; Khawla Mallat; Jean-Luc Dugelay
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization. (83%)Jungbeom Lee; Eunji Kim; Jisoo Mok; Sungroh Yoon
Exploring the Universal Vulnerability of Prompt-based Learning Paradigm. (47%)Lei Xu; Yangyi Chen; Ganqu Cui; Hongcheng Gao; Zhiyuan Liu
medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space. (1%)Amil Dravid; Florian Schiffers; Boqing Gong; Aggelos K. Katsaggelos
2022-04-10
"That Is a Suspicious Reaction!": Interpreting Logits Variation to Detect NLP Adversarial Attacks. (88%)Edoardo Mosca; Shreyash Agarwal; Javier Rando-Ramirez; Georg Groh
Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks. (54%)Karthikeyan Nagarajan; Junde Li; Sina Sayyah Ensan; Mohammad Nasim Imtiaz Khan; Sachhidh Kannan; Swaroop Ghosh
Measuring the False Sense of Security. (26%)Carlos Gomes
2022-04-08
Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser. (99%)Sonal Joshi; Saurabh Kataria; Yiwen Shao; Piotr Zelasko; Jesus Villalba; Sanjeev Khudanpur; Najim Dehak
AdvEst: Adversarial Perturbation Estimation to Classify and Detect Adversarial Attacks against Speaker Identification. (99%)Sonal Joshi; Saurabh Kataria; Jesus Villalba; Najim Dehak
Evaluating the Adversarial Robustness for Fourier Neural Operators. (92%)Abolaji D. Adesoji; Pin-Yu Chen
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense. (87%)Shaik Mohammed Maqsood; Viveros Manuela Ceron; Addluri GowthamKrishna
An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks. (45%)Xinqiao Zhang; Huili Chen; Ke Huang; Farinaz Koushanfar
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment. (13%)Qiang Hu; Yuejun Guo; Maxime Cordy; Xiaofei Xie; Wei Ma; Mike Papadakis; Yves Le Traon
Neural Tangent Generalization Attacks. (12%)Chia-Hung Yuan; Shan-Hung Wu
Labeling-Free Comparison Testing of Deep Learning Models. (11%)Yuejun Guo; Qiang Hu; Maxime Cordy; Xiaofei Xie; Mike Papadakis; Yves Le Traon
Does Robustness on ImageNet Transfer to Downstream Tasks? (2%)Yutaro Yamada; Mayu Otani
The self-learning AI controller for adaptive power beaming with fiber-array laser transmitter system. (1%)A. M. Vorontsov; G. A. Filimonov
2022-04-07
Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings. (99%)Yuhao Mao; Chong Fu; Saizhuo Wang; Shouling Ji; Xuhong Zhang; Zhenguang Liu; Jun Zhou; Alex X. Liu; Raheem Beyah; Ting Wang
Adaptive-Gravity: A Defense Against Adversarial Samples. (99%)Ali Mirzaeian; Zhi Tian; Sai Manoj P D; Banafsheh S. Latibari; Ioannis Savidis; Houman Homayoun; Avesta Sasan
Using Multiple Self-Supervised Tasks Improves Model Robustness. (81%)Matthew Lawhon; Chengzhi Mao; Junfeng Yang
Transformer-Based Language Models for Software Vulnerability Detection: Performance, Model's Security and Platforms. (69%)Chandra Thapa; Seung Ick Jang; Muhammad Ejaz Ahmed; Seyit Camtepe; Josef Pieprzyk; Surya Nepal
Defending Active Directory by Combining Neural Network based Dynamic Program and Evolutionary Diversity Optimisation. (1%)Diksha Goel; Max Hector Ward-Graham; Aneta Neumann; Frank Neumann; Hung Nguyen; Mingyu Guo
2022-04-06
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks. (99%)Xu Han; Anmin Liu; Yifeng Xiong; Yanbo Fan; Kun He
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network. (95%)Byung-Kwan Lee; Junho Kim; Yong Man Ro
Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck. (93%)Junho Kim; Byung-Kwan Lee; Yong Man Ro
Optimization Models and Interpretations for Three Types of Adversarial Perturbations against Support Vector Machines. (68%)Wen Su; Qingna Li; Chunfeng Cui
Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems. (62%)Furkan Mumcu; Keval Doshi; Yasin Yilmaz
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures. (33%)Md Tamjid Hossain; Shahriar Badsha; Hung La; Haoting Shen; Shafkat Islam; Ibrahim Khalil; Xun Yi
2022-04-05
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning. (98%)Nilaksh Das; Duen Horng Chau