It can be hard to stay up-to-date on published papers in
the field of adversarial examples,
where the number of papers written each year
has grown massively.
I have been somewhat religiously keeping track of these
papers for the last few years, and realized it might be helpful
to others if I released this list.
The only requirement I used for selecting papers for this list
is that the paper is primarily about adversarial examples,
or uses adversarial examples extensively.
Due to the sheer quantity of papers, I can't guarantee
that I actually have found all of them.
But I did try.
I also may have included papers that don't match
these criteria (and are about something different instead),
or made inconsistent
judgement calls as to whether or not any given paper is
mainly an adversarial example paper.
Send me an email if something is wrong and I'll correct it.
That said, this list is completely unfiltered:
everything that mainly presents itself as an adversarial
example paper is listed here, and I pass no judgement on quality.
For a curated list of papers that I think are excellent and
worth reading, see the
Adversarial Machine Learning Reading List.
One final note about the data.
This list automatically updates with new papers, even before I
get a chance to manually filter through them.
I do this filtering roughly twice a week, and it's
then that I'll remove the ones that aren't related to
adversarial examples.
As a result, there may be some
false positives on the most recent few entries.
Each new, unverified entry is annotated with the probability
that my simplistic (but reasonably well-calibrated)
bag-of-words classifier assigns to the paper
actually being about adversarial examples.
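To make the idea concrete, here is a minimal sketch of the kind of bag-of-words classifier described above: a naive Bayes model trained on paper titles/abstracts, labeled 1 if about adversarial examples and 0 otherwise. This is an illustration only, not the actual classifier used for this list; the function names and the tiny training set are hypothetical.

```python
import math
from collections import Counter

def train_bow_classifier(docs, labels):
    """Train a naive Bayes bag-of-words model.
    docs: list of strings (e.g. paper titles); labels: 1 if the
    paper is about adversarial examples, 0 otherwise."""
    counts = {0: Counter(), 1: Counter()}  # per-class word counts
    priors = Counter(labels)               # per-class document counts
    for doc, y in zip(docs, labels):
        counts[y].update(doc.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict_proba(model, doc):
    """Return P(label = 1 | doc), with add-one smoothing."""
    counts, priors, vocab = model
    logp = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        lp = math.log(priors[y] / sum(priors.values()))
        for w in doc.lower().split():
            # Laplace smoothing so unseen words don't zero out the class
            lp += math.log((counts[y][w] + 1) / (total + len(vocab)))
        logp[y] = lp
    # convert the log-odds into a probability
    return 1 / (1 + math.exp(logp[0] - logp[1]))
```

A model like this produces exactly the kind of percentage shown next to the unverified entries below: a score near 1 for a title full of words like "adversarial" and "attack", and near 0 otherwise.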
The full paper list appears below. I've also released a
TXT file (and a TXT file
with abstracts) and a
JSON file
with the same data. If you do anything interesting with
this data, I'd be happy to hear what it was.
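If you want to work with the list programmatically, a small parser for the plain-text format shown on this page (date headings followed by `Title. (NN%)Author; Author` entries) might look like the sketch below. The exact layout of the released TXT files may differ, so treat this as a starting point rather than a guaranteed schema.

```python
import re

# One entry line: a title, a percentage score, then ';'-separated authors.
ENTRY = re.compile(r"^(?P<title>.+?)\s*\((?P<prob>\d+)%\)(?P<authors>.+)$")
DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def parse_paper_list(lines):
    """Parse the plain-text paper list into a list of dicts,
    attaching the most recent date heading to each entry."""
    papers, current_date = [], None
    for line in lines:
        line = line.strip()
        if DATE.match(line):
            current_date = line
            continue
        m = ENTRY.match(line)
        if m:
            papers.append({
                "date": current_date,
                "title": m.group("title"),
                "probability": int(m.group("prob")) / 100,
                "authors": m.group("authors").split("; "),
            })
    return papers
```

For example, feeding it the first few lines under a date heading yields one dict per paper, which makes it easy to filter by score or count papers per day.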
Paper List
2024-12-24
Robustness-aware Automatic Prompt Optimization. (98%)Zeru Shi; Zhenting Wang; Yongye Su; Weidi Luo; Fan Yang; Yongfeng Zhang
On the Effectiveness of Adversarial Training on Malware Classifiers. (96%)Hamid Bostani; Jacopo Cortellazzi; Daniel Arp; Fabio Pierazzi; Veelasha Moonsamy; Lorenzo Cavallaro
Efficient Contrastive Explanations on Demand. (82%)Yacine Izza; Joao Marques-Silva
An Empirical Analysis of Federated Learning Models Subject to Label-Flipping Adversarial Attack. (73%)Kunal Bhatnagar; Sagana Chattanathan; Angela Dang; Bhargav Eranki; Ronnit Rana; Charan Sridhar; Siddharth Vedam; Angie Yao; Mark Stamp
Unveiling the Threat of Fraud Gangs to Graph Neural Networks: Multi-Target Graph Injection Attacks against GNN-Based Fraud Detectors. (41%)Jinhyeok Choi; Heehyeon Kim; Joyce Jiyoung Whang
Token Highlighter: Inspecting and Mitigating Jailbreak Prompts for Large Language Models. (38%)Xiaomeng Hu; Pin-Yu Chen; Tsung-Yi Ho
Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges. (22%)Meixia He; Peican Zhu; Keke Tang; Yangming Guo
Re-assessing ImageNet: How aligned is its single-label assumption with its multi-label nature? (4%)Esla Timothy Anzaku; Seyed Amir Mousavi; Messem Arnout Van; Neve Wesley De
2024-12-23
Retention Score: Quantifying Jailbreak Risks for Vision Language Models. (99%)Zaitang Li; Pin-Yu Chen; Tsung-Yi Ho
Emerging Security Challenges of Large Language Models. (73%)Herve Debar; Sven Dietrich; Pavel Laskov; Emil C. Lupu; Eirini Ntoutsi
Stability Bounds for the Unfolded Forward-Backward Algorithm. (38%)Emilie Chouzenoux; Valle Cecile Della; Jean-Christophe Pesquet
Double Landmines: Invisible Textual Backdoor Attacks based on Dual-Trigger. (15%)Yang Hou; Qiuling Yue; Lujia Chai; Guozhao Liao; Wenbao Han; Wei Ou
AEIOU: A Unified Defense Framework against NSFW Prompts in Text-to-Image Models. (2%)Yiming Wang; Jiahao Chen; Qingming Li; Xing Yang; Shouling Ji
Sensitivity Curve Maximization: Attacking Robust Aggregators in Distributed Learning. (1%)Christian A. Schroth; Stefan Vlaski; Abdelhak M. Zoubir
DiffusionAttacker: Diffusion-Driven Prompt Manipulation for LLM Jailbreak. (1%)Hao Wang; Hao Li; Junda Zhu; Xinyuan Wang; Chengwei Pan; MinLie Huang; Lei Sha
2024-12-22
ErasableMask: A Robust and Erasable Privacy Protection Scheme against Black-box Face Recognition Models. (99%)Sipeng Shen; Yunming Zhang; Dengpan Ye; Xiuwen Shi; Long Tang; Haoran Duan; Ziyi Liu
NumbOD: A Spatial-Frequency Fusion Attack Against Object Detectors. (99%)Ziqi Zhou; Bowen Li; Yufei Song; Zhifei Yu; Shengshan Hu; Wei Wan; Leo Yu Zhang; Dezhong Yao; Hai Jin
Breaking Barriers in Physical-World Adversarial Examples: Improving Robustness and Transferability via Robust Feature. (99%)Yichen Wang; Yuxuan Chou; Ziqi Zhou; Hangtao Zhang; Wei Wan; Shengshan Hu; Minghui Li
Preventing Non-intrusive Load Monitoring Privacy Invasion: A Precise Adversarial Attack Scheme for Networked Smart Meters. (98%)Jialing He; Jiacheng Wang; Ning Wang; Shangwei Guo; Liehuang Zhu; Dusit Niyato; Tao Xiang
A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification. (45%)Yuan Ma; Xu Ma; Jiankang Wei; Jinmeng Tang; Xiaoyu Zhang; Yilun Lyu; Kehao Chen; Jingtong Huang
Attack by Yourself: Effective and Unnoticeable Multi-Category Graph Backdoor Attacks with Subgraph Triggers Pool. (22%)Jiangtong Li; Dungy Liu; Dawei Cheng; Changchun Jiang
Shaping the Safety Boundaries: Understanding and Defending Against Jailbreaks in Large Language Models. (22%)Lang Gao; Xiangliang Zhang; Preslav Nakov; Xiuying Chen
Robustness of Large Language Models Against Adversarial Attacks. (13%)Yiyi Tao; Yixian Shen; Hang Zhang; Yanxin Shen; Lun Wang; Chuanqi Shi; Shaoshuai Du
2024-12-21
PB-UAP: Hybrid Universal Adversarial Attack For Image Segmentation. (99%)Yufei Song; Ziqi Zhou; Minghui Li; Xianlong Wang; Menghao Deng; Wei Wan; Shengshan Hu; Leo Yu Zhang
Adversarial Attack Against Images Classification based on Generative Adversarial Networks. (98%)Yahe Yang
OpenAI o1 System Card. (81%)OpenAI; :; Aaron Jaech; Adam Kalai; Adam Lerer; Adam Richardson; Ahmed El-Kishky; Aiden Low; Alec Helyar; Aleksander Madry; Alex Beutel; Alex Carney; Alex Iftimie; Alex Karpenko; Alex Tachard Passos; Alexander Neitz; Alexander Prokofiev; Alexander Wei; Allison Tam; Ally Bennett; Ananya Kumar; Andre Saraiva; Andrea Vallone; Andrew Duberstein; Andrew Kondrich; Andrey Mishchenko; Andy Applebaum; Angela Jiang; Ashvin Nair; Barret Zoph; Behrooz Ghorbani; Ben Rossen; Benjamin Sokolowsky; Boaz Barak; Bob McGrew; Borys Minaiev; Botao Hao; Bowen Baker; Brandon Houghton; Brandon McKinzie; Brydon Eastman; Camillo Lugaresi; Cary Bassin; Cary Hudson; Chak Ming Li; Bourcy Charles de; Chelsea Voss; Chen Shen; Chong Zhang; Chris Koch; Chris Orsinger; Christopher Hesse; Claudia Fischer; Clive Chan; Dan Roberts; Daniel Kappler; Daniel Levy; Daniel Selsam; David Dohan; David Farhi; David Mely; David Robinson; Dimitris Tsipras; Doug Li; Dragos Oprica; Eben Freeman; Eddie Zhang; Edmund Wong; Elizabeth Proehl; Enoch Cheung; Eric Mitchell; Eric Wallace; Erik Ritter; Evan Mays; Fan Wang; Felipe Petroski Such; Filippo Raso; Florencia Leoni; Foivos Tsimpourlas; Francis Song; Lohmann Fred von; Freddie Sulit; Geoff Salmon; Giambattista Parascandolo; Gildas Chabot; Grace Zhao; Greg Brockman; Guillaume Leclerc; Hadi Salman; Haiming Bao; Hao Sheng; Hart Andrin; Hessam Bagherinezhad; Hongyu Ren; Hunter Lightman; Hyung Won Chung; Ian Kivlichan; Ian O'Connell; Ian Osband; Ignasi Clavera Gilaberte; Ilge Akkaya; Ilya Kostrikov; Ilya Sutskever; Irina Kofman; Jakub Pachocki; James Lennon; Jason Wei; Jean Harb; Jerry Twore; Jiacheng Feng; Jiahui Yu; Jiayi Weng; Jie Tang; Jieqi Yu; Joaquin Quiñonero Candela; Joe Palermo; Joel Parish; Johannes Heidecke; John Hallman; John Rizzo; Jonathan Gordon; Jonathan Uesato; Jonathan Uesato; Jonathan Ward; Joost Huizinga; Julie Wang; Kai Chen; Kai Xiao; Karan Singhal; Karina Nguyen; Karl Cobbe; Katy Shi; Kayla Wood; Kendra Rimbach; Keren 
Gu-Lemberg; Keren GuLemberg; Kevin Liu; Kevin Lu; Kevin Stone; Kevin Yu; Lama Ahmad; Lauren Yang; Leo Liu; Leon Maksin; Leyton Ho; Liam Fedus; Lilian Weng; Linden Li; Lindsay McCallum; Lindsey Held; Lorenz Kuhn; Lukas Kondraciuk; Lukasz Kaiser; Luke Metz; Madelaine Boyd; Maja Trebacz; Manas Joglekar; Mark Chen; Marko Tintor; Mason Meyer; Matt Jones; Matt Kaufer; Max Schwarzer; Meghan Shah; Mehmet Yatbaz; Melody Guan; Mengyuan Xu; Mengyuan Yan; Mia Glaese; Mianna Chen; Mianna Chen; Michael Lampe; Michael Malek; Michele Wang; Michelle Fradin; Mike McClay; Mikhail Pavlov; Miles Wang; Mingxuan Wang; Mira Murati; Mo Bavarian; Mostafa Rohaninejad; Nat McAleese; Neil Chowdhury; Neil Chowdhury; Nick Ryder; Nikolas Tezak; Noam Brown; Ofir Nachum; Oleg Boiko; Oleg Murk; Olivia Watkins; Patrick Chao; Paul Ashbourne; Pavel Izmailov; Peter Zhokhov; Rachel Dias; Rahul Arora; Randall Lin; Rapha Gontijo Lopes; Raz Gaon; Reah Miyara; Reimar Leike; Renny Hwang; Rhythm Garg; Robin Brown; Roshan James; Rui Shu; Ryan Cheu; Ryan Greene; Saachi Jain; Sam Altman; Sam Toizer; Sam Toyer; Samuel Miserendino; Sandhini Agarwal; Santiago Hernandez; Sasha Baker; Scott McKinney; Scottie Yan; Shengjia Zhao; Shengli Hu; Shibani Santurkar; Shraman Ray Chaudhuri; Shuyuan Zhang; Siyuan Fu; Spencer Papay; Steph Lin; Suchir Balaji; Suvansh Sanjeev; Szymon Sidor; Tal Broda; Aidan Clark; Tao Wang; Taylor Gordon; Ted Sanders; Tejal Patwardhan; Thibault Sottiaux; Thomas Degry; Thomas Dimson; Tianhao Zheng; Timur Garipov; Tom Stasi; Trapit Bansal; Trevor Creech; Troy Peterson; Tyna Eloundou; Valerie Qi; Vineet Kosaraju; Vinnie Monaco; Vitchyr Pong; Vlad Fomenko; Weiyi Zheng; Wenda Zhou; Wes McCabe; Wojciech Zaremba; Yann Dubois; Yinghai Lu; Yining Chen; Young Cha; Yu Bai; Yuchen He; Yuchen Zhang; Yunyun Wang; Zheng Shao; Zhuohan Li
TrojFlow: Flow Models are Natural Targets for Trojan Attacks. (78%)Zhengyang Qi; Xiaohua Xu
Towards More Robust Retrieval-Augmented Generation: Evaluating RAG Under Adversarial Poisoning Attacks. (76%)Jinyan Su; Jin Peng Zhou; Zhengxin Zhang; Preslav Nakov; Claire Cardie
POEX: Policy Executable Embodied AI Jailbreak Attacks. (9%)Xuancun Lu; Zhengxian Huang; Xinfeng Li; Xiaoyu ji; Wenyuan Xu
Forget Vectors at Play: Universal Input Perturbations Driving Machine Unlearning in Image Classification. (8%)Changchang Sun; Ren Wang; Yihua Zhang; Jinghan Jia; Jiancheng Liu; Gaowen Liu; Sijia Liu; Yan Yan
The Task Shield: Enforcing Task Alignment to Defend Against Indirect Prompt Injection in LLM Agents. (4%)Feiran Jia; Tong Wu; Xin Qin; Anna Squicciarini
2024-12-20
Adversarial Robustness through Dynamic Ensemble Learning. (99%)Hetvi Waghela; Jaydip Sen; Sneha Rakshit
EMPRA: Embedding Perturbation Rank Attack against Neural Ranking Models. (98%)Amin Bigdeli; Negar Arabzadeh; Ebrahim Bagheri; Charles L. A. Clarke
Watertox: The Art of Simplicity in Universal Attacks A Cross-Model Framework for Robust Adversarial Generation. (98%)Zhenghao Gao; Shengjie Xu; Meixi Chen; Fangyao Zhao
JailPO: A Novel Black-box Jailbreak Framework via Preference Optimization against Aligned LLMs. (41%)Hongyi Li; Jiawei Ye; Jie Wu; Tianjie Yan; Chu Wang; Zhixin Li
Technical Report for ICML 2024 TiFA Workshop MLLM Attack Challenge: Suffix Injection and Projected Gradient Descent Can Easily Fool An MLLM. (13%)Yangyang Guo; Ziwei Xu; Xilie Xu; YongKang Wong; Liqiang Nie; Mohan Kankanhalli
Texture- and Shape-based Adversarial Attacks for Vehicle Detection in Synthetic Overhead Imagery. (9%)Mikael Yeghiazaryan; Sai Abhishek Siddhartha Namburu; Emily Kim; Stanislav Panev; Melo Celso de; Brent Lance; la Torre Fernando De; Jessica K. Hodgins
PoisonCatcher: Revealing and Identifying LDP Poisoning Attacks in IIoT. (9%)Lisha Shuai; Shaofeng Tan; Nan Zhang; Jiamin Zhang; Min Zhang; Xiaolong Yang
Robust random graph matching in dense graphs via vector approximate message passing. (1%)Zhangsong Li
2024-12-19
AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving. (5%)Shuo Xing; Hongyuan Hua; Xiangbo Gao; Shenzhe Zhu; Renjie Li; Kexin Tian; Xiaopeng Li; Heng Huang; Tianbao Yang; Zhangyang Wang; Yang Zhou; Huaxiu Yao; Zhengzhong Tu
Boosting GNN Performance via Training Sample Selection Based on Adversarial Robustness Evaluation. (4%)Yongyu Wang
2024-12-18
A Review of the Duality of Adversarial Learning in Network Intrusion: Attacks and Countermeasures. (89%)Shalini Saini; Anitha Chennamaneni; Babatunde Sawyerr
Mitigating Adversarial Attacks in LLMs through Defensive Suffix Generation. (83%)Minkyoung Kim; Yunha Kim; Hyeram Seo; Heejung Choi; Jiye Han; Gaeun Kee; Soyoung Ko; HyoJe Jung; Byeolhee Kim; Young-Hak Kim; Sanghyun Park; Tae Joon Jun
Physics-Based Adversarial Attack on Near-Infrared Human Detector for Nighttime Surveillance Camera Systems. (78%)Muyao Niu; Zhuoxiao Li; Yifan Zhan; Huy H. Nguyen; Isao Echizen; Yinqiang Zheng
Crabs: Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings. (75%)Yuanhe Zhang; Zhenhong Zhou; Wei Zhang; Xinyue Wang; Xiaojun Jia; Yang Liu; Sen Su
Adversarial Hubness in Multi-Modal Retrieval. (74%)Tingwei Zhang; Fnu Suya; Rishi Jha; Collin Zhang; Vitaly Shmatikov
Cultivating Archipelago of Forests: Evolving Robust Decision Trees through Island Coevolution. (56%)Adam Żychowski; Andrew Perrault; Jacek Mańdziuk
A Black-Box Evaluation Framework for Semantic Robustness in Bird's Eye View Detection. (8%)Fu Wang; Yanghao Zhang; Xiangyu Yin; Guangliang Cheng; Zeyu Fu; Xiaowei Huang; Wenjie Ruan
Speech Watermarking with Discrete Intermediate Representations. (4%)Shengpeng Ji; Ziyue Jiang; Jialong Zuo; Minghui Fang; Yifu Chen; Tao Jin; Zhou Zhao
On the Robustness of Distributed Machine Learning against Transfer Attacks. (2%)Sébastien Andreina; Pascal Zimmer; Ghassan Karame
Mesoscopic Insights: Orchestrating Multi-scale & Hybrid Architecture for Image Manipulation Localization. (1%)Xuekang Zhu; Xiaochen Ma; Lei Su; Zhuohang Jiang; Bo Du; Xiwen Wang; Zeyu Lei; Wentao Feng; Chi-Man Pun; Jizhe Zhou
Hybrid Data-Free Knowledge Distillation. (1%)Jialiang Tang; Shuo Chen; Chen Gong
SHAP scores fail pervasively even when Lipschitz succeeds. (1%)Olivier Letoffe; Xuanxiang Huang; Joao Marques-Silva
Novel AI Camera Camouflage: Face Cloaking Without Full Disguise. (1%)David Noever; Forrest McKee
2024-12-17
Improving the Transferability of 3D Point Cloud Attack via Spectral-aware Admix and Optimization Designs. (99%)Shiyu Hu; Daizong Liu; Wei Hu
Targeted View-Invariant Adversarial Perturbations for 3D Object Recognition. (99%)Christian Green; Mehmet Ergezer; Abdurrahman Zeybey
AdvIRL: Reinforcement Learning-Based Adversarial Attacks on 3D NeRF Models. (98%)Tommy Nguyen; Mehmet Ergezer; Christian Green
Defending LVLMs Against Vision Attacks through Partial-Perception Supervision. (92%)Qi Zhou; Tianlin Li; Qing Guo; Dongxia Wang; Yun Lin; Yang Liu; Jin Song Dong
Exploring Query Efficient Data Generation towards Data-free Model Stealing in Hard Label Setting. (92%)Gaozheng Pei; Shaojie lyu; Ke Ma; Pinci Yang; Qianqian Xu; Yingfei Sun
Fooling LLM graders into giving better grades through neural activity guided adversarial prompting. (75%)Atsushi Yamamura; Surya Ganguli
Practicable Black-box Evasion Attacks on Link Prediction in Dynamic Graphs -- A Graph Sequential Embedding Method. (70%)Jiate Li; Meng Pang; Binghui Wang
Jailbreaking? One Step Is Enough! (64%)Weixiong Zheng; Peijian Zeng; Yiwei Li; Hongyan Wu; Nankai Lin; Junhao Chen; Aimin Yang; Yongmei Zhou
Toxicity Detection towards Adaptability to Changing Perturbations. (11%)Hankun Kang; Jianhao Chen; Yongqi Li; Xin Miao; Mayi Xu; Ming Zhong; Yuanyuan Zhu; Tieyun Qian
A New Adversarial Perspective for LiDAR-based 3D Object Detection. (9%)Shijun Zheng; Weiquan Liu; Yu Guo; Yu Zang; Siqi Shen; Cheng Wang
Accuracy Limits as a Barrier to Biometric System Security. (2%)Axel Durbet; Paul-Marie Grollemund; Pascal Lafourcade; Kevin Thiry-Atighehchi
Neural Control and Certificate Repair via Runtime Monitoring. (1%)Emily Yu; Đorđe Žikelić; Thomas A. Henzinger
Training Verification-Friendly Neural Networks via Neuron Behavior Consistency. (1%)Zongxin Liu; Zhe Zhao; Fu Song; Jun Sun; Pengfei Yang; Xiaowei Huang; Lijun Zhang
Distribution Shifts at Scale: Out-of-distribution Detection in Earth Observation. (1%)Burak Ekim; Girmaw Abebe Tadesse; Caleb Robinson; Gilles Hacheme; Michael Schmitt; Rahul Dodhia; Juan M. Lavista Ferres
2024-12-16
Adversarially robust generalization theory via Jacobian regularization for deep neural networks. (99%)Dongya Wu; Xin Li
Human-in-the-Loop Generation of Adversarial Texts: A Case Study on Tibetan Script. (99%)Xi Cao; Yuan Sun; Jiajun Li; Quzong Gesang; Nuo Qun; Tashi Nyima
Transferable Adversarial Face Attack with Text Controlled Attribute. (98%)Wenyun Li; Zheng Zhang; Xiangyuan Lan; Dongmei Jiang
Towards Adversarial Robustness of Model-Level Mixture-of-Experts Architectures for Semantic Segmentation. (86%)Svetlana Pavlitska; Enrico Eisen; J. Marius Zöllner
WFCAT: Augmenting Website Fingerprinting with Channel-wise Attention on Timing Features. (33%)Jiajun Gong; Wei Cai; Siyuan Liang; Zhong Guan; Tao Wang; Ee-Chien Chang
Red Pill and Blue Pill: Controllable Website Fingerprinting Defense via Dynamic Backdoor Learning. (22%)Siyuan Liang; Jiajun Gong; Tianmeng Fang; Aishan Liu; Tao Wang; Xianglong Liu; Xiaochun Cao; Dacheng Tao; Chang Ee-Chien
Sonar-based Deep Learning in Underwater Robotics: Overview, Robustness and Challenges. (1%)Martin Aubard; Ana Madureira; Luís Teixeira; José Pinto
2024-12-15
Comprehensive Survey on Adversarial Examples in Cybersecurity: Impacts, Challenges, and Mitigation Strategies. (99%)Li Li
UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models. (99%)Yuning Han; Bingyin Zhao; Rui Chu; Feng Luo; Biplab Sikdar; Yingjie Lao
Unpacking the Resilience of SNLI Contradiction Examples to Attacks. (99%)Chetan Verma; Archit Agarwal
Impact of Adversarial Attacks on Deep Learning Model Explainability. (99%)Gazi Nazia Nur; Mohammad Ahnaf Sadat
PGD-Imp: Rethinking and Unleashing Potential of Classic PGD with Dual Strategies for Imperceptible Adversarial Attacks. (98%)Jin Li; Zitong Yu; Ziqiang He; Z. Jane Wang; Xiangui Kang
Learning Robust and Privacy-Preserving Representations via Information Theory. (92%)Binghui Zhang; Sayedeh Leila Noorbakhsh; Yun Dong; Yuan Hong; Binghui Wang
A Comprehensive Review of Adversarial Attacks on Machine Learning. (75%)Syed Quiser Ahmed; Bharathi Vokkaliga Ganesh; Sathyanarayana Sampath Kumar; Prakhar Mishra; Ravi Anand; Bhanuteja Akurathi
Finding a Wolf in Sheep's Clothing: Combating Adversarial Text-To-Image Prompts with Text Summarization. (2%)Portia Cooper; Harshita Narnoli; Mihai Surdeanu
Set-Valued Sensitivity Analysis of Deep Neural Networks. (2%)Xin Jeff Wang; Feiling Jeff wang; Xuegang Jeff Ban
Accurate, Robust and Privacy-Preserving Brain-Computer Interface Decoding. (1%)Xiaoqing Chen; Tianwang Jia; Dongrui Wu
SpearBot: Leveraging Large Language Models in a Generative-Critique Framework for Spear-Phishing Email Generation. (1%)Qinglin Qi; Yun Luo; Yijia Xu; Wenbo Guo; Yong Fang
2024-12-14
RAT: Adversarial Attacks on Deep Reinforcement Agents for Targeted Behaviors. (98%)Fengshuo Bai; Runze Liu; Yali Du; Ying Wen; Yaodong Yang
One Pixel is All I Need. (80%)Deng Siqin; Zhou Xiaoyi
Are Language Models Agnostic to Linguistically Grounded Perturbations? A Case Study of Indic Languages. (1%)Poulami Ghosh; Raj Dabre; Pushpak Bhattacharyya
Unbiased General Annotated Dataset Generation. (1%)Dengyang Jiang; Haoyu Wang; Lei Zhang; Wei Wei; Guang Dai; Mengmeng Wang; Jingdong Wang; Yanning Zhang
2024-12-13
Robust image classification with multi-modal large language models. (99%)Francesco Villani; Igor Maljkovic; Dario Lazzaro; Angelo Sotgiu; Antonio Emanuele Cinà; Fabio Roli
Prompt2Perturb (P2P): Text-Guided Diffusion-Based Adversarial Attacks on Breast Ultrasound Images. (99%)Yasamin Medghalchi; Moein Heidari; Clayton Allard; Leonid Sigal; Ilker Hacihaliloglu
A2RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion. (92%)Jiawei Li; Hongwei Yu; Jiansheng Chen; Xinlong Ding; Jinlong Wang; Jinyuan Liu; Bochao Zou; Huimin Ma
Err on the Side of Texture: Texture Bias on Real Data. (82%)Blaine Hoak; Ryan Sheatsley; Patrick McDaniel
BinarySelect to Improve Accessibility of Black-Box Attack Research. (80%)Shatarupa Ghosh; Jonathan Rusert
On Adversarial Robustness and Out-of-Distribution Robustness of Large Language Models. (78%)April Yang; Jordan Tab; Parth Shah; Paul Kotchavong
FaceShield: Defending Facial Image against Deepfake Threats. (70%)Jaehwan Jeong; Sumin In; Sieun Kim; Hannie Shin; Jongheon Jeong; Sang Ho Yoon; Jaewook Chung; Sangpil Kim
SuperMark: Robust and Training-free Image Watermarking via Diffusion-based Super-Resolution. (67%)Runyi Hu; Jie Zhang; Yiming Li; Jiwei Li; Qing Guo; Han Qiu; Tianwei Zhang
Client-Side Patching against Backdoor Attacks in Federated Learning. (61%)Borja Molina-Coronado
No Free Lunch for Defending Against Prefilling Attack by In-Context Learning. (22%)Zhiyu Xue; Guangliang Liu; Bocheng Chen; Kristen Marie Johnson; Ramtin Pedarsani
Active Poisoning: Efficient Backdoor Attacks on Transfer Learning-Based Brain-Computer Interfaces. (13%)X. Jiang; L. Meng; S. Li; D. Wu
Adversarial Robustness of Bottleneck Injected Deep Neural Networks for Task-Oriented Communication. (10%)Alireza Furutanpey; Pantelis A. Frangoudis; Patrik Szabo; Schahram Dustdar
From Allies to Adversaries: Manipulating LLM Tool-Calling through Adversarial Injection. (1%)Haowei Wang; Rupeng Zhang; Junjie Wang; Mingyang Li; Yuekai Huang; Dandan Wang; Qing Wang
BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks. (1%)Tobias Lorenz; Marta Kwiatkowska; Mario Fritz
2024-12-12
Three-in-One: Robust Enhanced Universal Transferable Anti-Facial Retrieval in Online Social Networks. (99%)Yunna Lv; Long Tang; Dengpan Ye; Caiyun Xie; Jiacheng Deng; Yiheng He
On the Generation and Removal of Speaker Adversarial Perturbation for Voice-Privacy Protection. (95%)Chenyang Guo; Liping Chen; Zhuhai Li; Kong Aik Lee; Zhen-Hua Ling; Wu Guo
Real-time Identity Defenses against Malicious Personalization of Diffusion Models. (95%)Hanzhong Guo; Shen Nie; Chao Du; Tianyu Pang; Hao Sun; Chongxuan Li
Deep Learning Model Security: Threats and Defenses. (92%)Tianyang Wang; Ziqian Bi; Yichao Zhang; Ming Liu; Weiche Hsieh; Pohsun Feng; Lawrence K. Q. Yan; Yizhu Wen; Benji Peng; Junyu Liu; Keyu Chen; Sen Zhang; Ming Li; Chuanqi Jiang; Xinyuan Song; Junjie Yang; Bowen Jing; Jintao Ren; Junhao Song; Hong-Ming Tseng; Silin Chen; Yunze Wang; Chia Xin Liang; Jiawei Xu; Xuanhe Pan; Jinlang Wang; Qian Niu
A Semi Black-Box Adversarial Bit-Flip Attack with Limited DNN Model Information. (69%)Behnam Ghavami; Mani Sadati; Mohammad Shahidzadeh; Lesley Shannon; Steve Wilton
Evaluating Adversarial Attacks on Traffic Sign Classifiers beyond Standard Baselines. (45%)Svetlana Pavlitska; Leopold Müller; J. Marius Zöllner
SVasP: Self-Versatility Adversarial Style Perturbation for Cross-Domain Few-Shot Learning. (3%)Wenqian Li; Pengfei Fang; Hui Xue
Obfuscated Activations Bypass LLM Latent-Space Defenses. (2%)Luke Bailey; Alex Serrano; Abhay Sheshadri; Mikhail Seleznyov; Jordan Taylor; Erik Jenner; Jacob Hilton; Stephen Casper; Carlos Guestrin; Scott Emmons
Towards Understanding the Robustness of LLM-based Evaluations under Perturbations. (1%)Manav Chaudhary; Harshit Gupta; Savita Bhat; Vasudeva Varma
L-WISE: Boosting Human Image Category Learning Through Model-Based Image Selection And Enhancement. (1%)Morgan B. Talbot; Gabriel Kreiman; James J. DiCarlo; Guy Gaziv
2024-12-11
Adversarial Purification by Consistency-aware Latent Space Optimization on Data Manifolds. (99%)Shuhai Zhang; Jiahao Yang; Hui Luo; Jie Chen; Li Wang; Feng Liu; Bo Han; Mingkui Tan
Doubly-Universal Adversarial Perturbations: Deceiving Vision-Language Models Across Both Images and Text with a Single Perturbation. (98%)Hee-Seon Kim; Minbeom Kim; Changick Kim
Grimm: A Plug-and-Play Perturbation Rectifier for Graph Neural Networks Defending against Poisoning Attacks. (93%)Ao Liu; Wenshan Li; Beibei Li; Wengang Ma; Tao Li; Pan Zhou
Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models. (83%)Jiahui Li; Yongchang Hao; Haoyu Xu; Xing Wang; Yu Hong
AdvWave: Stealthy Adversarial Jailbreak Attack against Large Audio-Language Models. (82%)Mintong Kang; Chejian Xu; Bo Li
Proactive Adversarial Defense: Harnessing Prompt Tuning in Vision-Language Models to Detect Unseen Backdoored Images. (45%)Kyle Stein; Andrew Arash Mahyari; Guillermo Francia; Eman El-Sheikh
Backdoor attacks on DNN and GBDT -- A Case Study from the insurance domain. (16%)Robin Kühlem; Daniel Otten; Daniel Ludwig; Anselm Hudde; Alexander Rosenbaum; Andreas Mauthe
Antelope: Potent and Concealed Jailbreak Attack Strategy. (10%)Xin Zhao; Xiaojun Chen; Haoyu Gao
Model-Editing-Based Jailbreak against Safety-aligned Large Language Models. (1%)Yuxi Li; Zhibo Zhang; Kailong Wang; Ling Shi; Haoyu Wang
2024-12-10
AHSG: Adversarial Attacks on High-level Semantics in Graph Neural Networks. (99%)Kai Yuan; Xiaobing Pei; Haoran Yang
Addressing Key Challenges of Adversarial Attacks and Defenses in the Tabular Domain: A Methodological Framework for Coherence and Consistency. (99%)Yael Itzhakev; Amit Giloni; Yuval Elovici; Asaf Shabtai
Backdoor Attacks against No-Reference Image Quality Assessment Models via A Scalable Trigger. (99%)Yi Yu; Song Xia; Xun Lin; Wenhan Yang; Shijian Lu; Yap-peng Tan; Alex Kot
A Generative Victim Model for Segmentation. (99%)Aixuan Li; Jing Zhang; Jiawei Shi; Yiran Zhong; Yuchao Dai
Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting. (98%)Fuqiang Liu; Sicong Jiang; Luis Miranda-Moreno; Seongjin Choi; Lijun Sun
Defending Against Neural Network Model Inversion Attacks via Data Poisoning. (98%)Shuai Zhou; Dayong Ye; Tianqing Zhu; Wanlei Zhou
DynamicPAE: Generating Scene-Aware Physical Adversarial Examples in Real-Time. (92%)Jin Hu; Xianglong Liu; Jiakai Wang; Junkai Zhang; Xianqi Yang; Haotong Qin; Yuqing Ma; Ke Xu
Adaptive Epsilon Adversarial Training for Robust Gravitational Wave Parameter Estimation Using Normalizing Flows. (86%)Yiqian Yang; Xihua Zhu; Fan Zhang
What You See Is Not Always What You Get: An Empirical Study of Code Comprehension by Large Language Models. (83%)Bangshuo Zhu; Jiawen Wen; Huaming Chen
MAGIC: Mastering Physical Adversarial Generation in Context through Collaborative LLM Agents. (82%)Yun Xing; Nhat Chung; Jie Zhang; Yue Cao; Ivor Tsang; Yang Liu; Lei Ma; Qing Guo
FlexLLM: Exploring LLM Customization for Moving Target Defense on Black-Box LLMs Against Jailbreak Attacks. (81%)Bocheng Chen; Hanqing Guo; Qiben Yan
Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features. (76%)Xiaoyang Ning; Qing Xie; Jinyu Xu; Wenbo Jiang; Jiachen Li; Yanchun Ma
Adversarial Filtering Based Evasion and Backdoor Attacks to EEG-Based Brain-Computer Interfaces. (68%)Lubin Meng; Xue Jiang; Xiaoqing Chen; Wenzhong Liu; Hanbin Luo; Dongrui Wu
A Parametric Approach to Adversarial Augmentation for Cross-Domain Iris Presentation Attack Detection. (61%)Debasmita Pal; Redwan Sony; Arun Ross
Na'vi or Knave: Jailbreaking Language Models via Metaphorical Avatars. (50%)Yu Yan; Sheng Sun; Junqi Tong; Min Liu; Qi Li
CapGen:An Environment-Adaptive Generator of Adversarial Patches. (13%)Chaoqun Li; Zhuodong Liu; Huanqian Yan; Hang Su
PrisonBreak: Jailbreaking Large Language Models with Fewer Than Twenty-Five Targeted Bit-flips. (9%)Zachary Coalson; Jeonghyun Woo; Shiyang Chen; Yu Sun; Lishan Yang; Prashant Nair; Bo Fang; Sanghyun Hong
2024-12-09
Take Fake as Real: Realistic-like Robust Black-box Adversarial Attack to Evade AIGC Detection. (99%)Caiyun Xie; Dengpan Ye; Yunming Zhang; Long Tang; Yunna Lv; Jiacheng Deng; Jiawei Song
Defensive Dual Masking for Robust Adversarial Defense. (99%)Wangli Yang; Jie Yang; Yi Guo; Johan Barthelemy
A Real-Time Defense Against Object Vanishing Adversarial Patch Attacks for Object Detection in Autonomous Vehicles. (97%)Jaden Mu
Data Free Backdoor Attacks. (64%)Bochuan Cao; Jinyuan Jia; Chuxuan Hu; Wenbo Guo; Zhen Xiang; Jinghui Chen; Bo Li; Dawn Song
On Evaluating the Durability of Safeguards for Open-Weight LLMs. (38%)Xiangyu Qi; Boyi Wei; Nicholas Carlini; Yangsibo Huang; Tinghao Xie; Luxi He; Matthew Jagielski; Milad Nasr; Prateek Mittal; Peter Henderson
Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice. (3%)A. Feder Cooper; Christopher A. Choquette-Choo; Miranda Bogen; Matthew Jagielski; Katja Filippova; Ken Ziyu Liu; Alexandra Chouldechova; Jamie Hayes; Yangsibo Huang; Niloofar Mireshghallah; Ilia Shumailov; Eleni Triantafillou; Peter Kairouz; Nicole Mitchell; Percy Liang; Daniel E. Ho; Yejin Choi; Sanmi Koyejo; Fernando Delgado; James Grimmelmann; Vitaly Shmatikov; Sa Christopher De; Solon Barocas; Amy Cyphert; Mark Lemley; danah boyd; Jennifer Wortman Vaughan; Miles Brundage; David Bau; Seth Neel; Abigail Z. Jacobs; Andreas Terzis; Hanna Wallach; Nicolas Papernot; Katherine Lee
StyleMark: A Robust Watermarking Method for Art Style Images Against Black-Box Arbitrary Style Transfer. (2%)Yunming Zhang; Dengpan Ye; Sipeng Shen; Jun Wang
Understanding Gradient Descent through the Training Jacobian. (1%)Nora Belrose; Adam Scherlis
Vulnerability, Where Art Thou? An Investigation of Vulnerability Management in Android Smartphone Chipsets. (1%)Daniel Klischies; Philipp Mackensen; Veelasha Moonsamy
2024-12-08
Adversarial Transferability in Deep Denoising Models: Theoretical Insights and Robustness Enhancement via Out-of-Distribution Typical Set Sampling. (99%)Jie Ning; Jiebao Sun; Shengzhu Shi; Zhichang Guo; Yao Li; Hongwei Li; Boying Wu
An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers. (89%)Xueluan Gong; Bowei Tian; Meng Xue; Yuan Wu; Yanjiao Chen; Qian Wang
Understanding the Impact of Graph Reduction on Adversarial Robustness in Graph Neural Networks. (78%)Kerui Wu; Ka-Ho Chow; Wenqi Wei; Lei Yu
Anti-Reference: Universal and Immediate Defense Against Reference-Based Generation. (22%)Yiren Song; Shengtao Lou; Xiaokang Liu; Hai Ci; Pei Yang; Jiaming Liu; Mike Zheng Shou
PBI-Attack: Prior-Guided Bimodal Interactive Black-Box Jailbreak Attack for Toxicity Maximization. (15%)Ruoxi Cheng; Yizhong Ding; Shuirong Cao; Ranjie Duan; Xiaoshuang Jia; Shaowei Yuan; Zhiqiang Wang; Xiaojun Jia
SABER: Model-agnostic Backdoor Attack on Chain-of-Thought in Neural Code Generation. (4%)Naizhu Jin; Zhong Li; Yinggang Guo; Chao Su; Tian Zhang; Qingkai Zeng
Enhancing Adversarial Resistance in LLMs with Recursion. (1%)Bryan Li; Sounak Bagchi; Zizhan Wang
Membership Inference Attacks and Defenses in Federated Learning: A Survey. (1%)Li Bai; Haibo Hu; Qingqing Ye; Haoyang Li; Leixia Wang; Jianliang Xu
2024-12-07
PrivAgent: Agentic-based Red-teaming for LLM Privacy Leakage. (92%)Yuzhou Nie; Zhun Wang; Ye Yu; Xian Wu; Xuandong Zhao; Wenbo Guo; Dawn Song
DeMem: Privacy-Enhanced Robust Adversarial Learning via De-Memorization. (76%)Xiaoyu Luo; Qiongxiu Li
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation. (47%)Kristoffer Wickstrøm; Marina Marie-Claire Höhne; Anna Hedström
Nearly Solved? Robust Deepfake Detection Requires More than Visual Forensics. (33%)Guy Levy; Nathan Liebmann
2024-12-06
Uncovering Vision Modality Threats in Image-to-Image Tasks. (8%)Hao Cheng; Erjia Xiao; Jiayan Yang; Jiahang Cao; Qiang Zhang; Jize Zhang; Kaidi Xu; Jindong Gu; Renjing Xu
Towards Predicting the Success of Transfer-based Attacks by Quantifying Shared Feature Representations. (2%)Ashley S. Dale; Mei Qiu; Foo Bin Che; Thomas Bsaibes; Lauren Christopher; Paul Salama
Backdooring Outlier Detection Methods: A Novel Attack Approach. (2%)ZeinabSadat Taghavi; Hossein Mirzaei
LIAR: Leveraging Alignment (Best-of-N) to Jailbreak LLMs in Seconds. (1%)James Beetham; Souradip Chakraborty; Mengdi Wang; Furong Huang; Amrit Singh Bedi; Mubarak Shah
2024-12-05
Intriguing Properties of Robust Classification. (96%)Bernd Prach; Christoph H. Lampert
On the Lack of Robustness of Binary Function Similarity Systems. (92%)Gianluca Capozzi; Tong Tang; Jie Wan; Ziqi Yang; Daniele Cono D'Elia; Giuseppe Antonio Di Luna; Lorenzo Cavallaro; Leonardo Querzoni
Megatron: Evasive Clean-Label Backdoor Attacks against Vision Transformer. (76%)Xueluan Gong; Bowei Tian; Meng Xue; Shuike Li; Yanjiao Chen; Qian Wang
Can Targeted Clean-Label Poisoning Attacks Generalize? (13%)Zhizhen Chen; Subrat Kishore Dutta; Zhengyu Zhao; Chenhao Lin; Chao Shen; Xiao Zhang
LaserGuider: A Laser Based Physical Backdoor Attack against Deep Neural Networks. (8%)Yongjie Xu; Guangke Chen; Fu Song; Yuqi Chen
Safeguarding Text-to-Image Generation via Inference-Time Prompt-Noise Optimization. (3%)Jiangweizhi Peng; Zhiwei Tang; Gaowen Liu; Charles Fleming; Mingyi Hong
Targeting the Core: A Simple and Effective Method to Attack RAG-based Agents via Direct LLM Manipulation. (2%)Xuying Li; Zhuo Li; Yuji Kosuga; Yasuhiro Yoshida; Victor Bian
2024-12-04
Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts? (99%)Sravanti Addepalli; Yerram Varun; Arun Suggala; Karthikeyan Shanmugam; Prateek Jain
NODE-AdvGAN: Improving the transferability and perceptual similarity of adversarial examples by dynamic-system-driven adversarial generative model. (99%)Xinheng Xie; Yue Wu; Cuiyu He
Less is More: A Stealthy and Efficient Adversarial Attack Method for DRL-based Autonomous Driving Policies. (98%)Junchao Fan; Xuyang Lei; Xiaolin Chang; Jelena Mišić; Vojislav B. Mišić
A Taxonomy of System-Level Attacks on Deep Learning Models in Autonomous Vehicles. (76%)Masoud Jamshidiyan Tehrani; Jinhan Kim; Rosmael Zidane Lekeufack Foulefack; Alessandro Marchetto; Paolo Tonella
PBP: Post-training Backdoor Purification for Malware Classifiers. (76%)Dung Thuy Nguyen; Ngoc N. Tran; Taylor T. Johnson; Kevin Leach
Testing Neural Network Verifiers: A Soundness Benchmark with Hidden Counterexamples. (13%)Xingjian Zhou; Hongji Xu; Andy Xu; Zhouxing Shi; Cho-Jui Hsieh; Huan Zhang
Black-Box Forgery Attacks on Semantic Watermarks for Diffusion Models. (12%)Andreas Müller; Denis Lukovnikov; Jonas Thietke; Asja Fischer; Erwin Quiring
Designing DNNs for a trade-off between robustness and processing performance in embedded devices. (11%)Jon Gutiérrez-Zaballa; Koldo Basterretxea; Javier Echanobe
Pre-trained Multiple Latent Variable Generative Models are good defenders against Adversarial Attacks. (4%)Dario Serez; Marco Cristani; Alessio Del Bue; Vittorio Murino; Pietro Morerio
Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective. (1%)Jon Gutiérrez-Zaballa; Koldo Basterretxea; Javier Echanobe
2024-12-03
Sustainable Self-evolution Adversarial Training. (99%)Wenxuan Wang; Chenglei Wang; Huihui Qi; Menghao Ye; Xuelin Qian; Peng Wang; Yanning Zhang
Gaussian Splatting Under Attack: Investigating Adversarial Noise in 3D Objects. (99%)Abdurrahman Zeybey; Mehmet Ergezer; Tommy Nguyen
Multi-Granularity Tibetan Textual Adversarial Attack Method Based on Masked Language Model. (98%)Xi Cao; Nuo Qun; Quzong Gesang; Yulei Zhu; Trashi Nyima
Pay Attention to the Robustness of Chinese Minority Language Models! Syllable-level Textual Adversarial Attack on Tibetan Script. (98%)Xi Cao; Dolma Dawa; Nuo Qun; Trashi Nyima
Underload: Defending against Latency Attacks for Object Detectors on Edge Devices. (93%)Tianyi Wang; Zichen Wang; Cong Wang; Yuanchao Shu; Ruilong Deng; Peng Cheng; Jiming Chen (Zhejiang University, Hangzhou, China)
Hijacking Vision-and-Language Navigation Agents with Adversarial Environmental Attacks. (80%)Zijiao Yang; Xiangxi Shi; Eric Slyman; Stefan Lee
TSCheater: Generating High-Quality Tibetan Adversarial Texts via Visual Similarity. (76%)Xi Cao; Quzong Gesang; Yuan Sun; Nuo Qun; Tashi Nyima
The Efficacy of Transfer-based No-box Attacks on Image Watermarking: A Pragmatic Analysis. (61%)Qilong Wu; Varun Chandrasekaran
Defending Against Diverse Attacks in Federated Learning Through Consensus-Based Bi-Level Optimization. (22%)Nicolás García Trillos; Aditya Kumar Akash; Sixu Li; Konstantin Riedl; Yuhua Zhu
OODFace: Benchmarking Robustness of Face Recognition under Common Corruptions and Appearance Variations. (11%)Caixin Kang; Yubo Chen; Shouwei Ruan; Shiji Zhao; Ruochen Zhang; Jiayi Wang; Shan Fu; Xingxing Wei
AdvDreamer Unveils: Are Vision-Language Models Truly Ready for Real-World 3D Variations? (3%)Shouwei Ruan; Hanqing Liu; Yao Huang; Xiaoqi Wang; Caixin Kang; Hang Su; Yinpeng Dong; Xingxing Wei
Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining. (2%)Zongru Wu; Pengzhou Cheng; Lingyong Fang; Zhuosheng Zhang; Gongshen Liu
GenMix: Effective Data Augmentation with Generative Diffusion Model Image Editing. (1%)Khawar Islam; Muhammad Zaigham Zaheer; Arif Mahmood; Karthik Nandakumar; Naveed Akhtar
2024-12-02
Traversing the Subspace of Adversarial Patches. (83%)Jens Bayer; Stefan Becker; David Münch; Michael Arens; Jürgen Beyerer
DiffPatch: Generating Customizable Adversarial Patches using Diffusion Model. (82%)Zhixiang Wang; Guangnan Ye; Xiaosen Wang; Siheng Chen; Zhibo Wang; Xingjun Ma; Yu-Gang Jiang
Exploring the Robustness of AI-Driven Tools in Digital Forensics: A Preliminary Study. (74%)Silvia Lucia Sanna; Leonardo Regano; Davide Maiorca; Giorgio Giacinto
Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios. (69%)Sangyeon Yoon; Wonje Jeung; Albert No
Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior. (67%)Yi Yu; Yufei Wang; Wenhan Yang; Lanqing Guo; Shijian Lu; Ling-Yu Duan; Yap-Peng Tan; Alex C. Kot
Adversarial Attacks on Hyperbolic Networks. (26%)Max van Spengler; Jan Zahálka; Pascal Mettes
Compromising the Intelligence of Modern DNNs: On the Effectiveness of Targeted RowPress. (13%)Ranyang Zhou; Jacqueline T. Liu; Sabbir Ahmed; Shaahin Angizi; Adnan Siraj Rakin
CopyrightShield: Spatial Similarity Guided Backdoor Defense against Copyright Infringement in Diffusion Models. (10%)Zhixiang Guo; Siyuan Liang; Aishan Liu; Dacheng Tao
R.I.P.: A Simple Black-box Attack on Continual Test-time Adaptation. (5%)Trung-Hieu Hoang; Duc Minh Vo; Minh N. Do
Reactive Synthesis of Sensor Revealing Strategies in Hypergames on Graphs. (1%)Sumukha Udupa; Ahmed Hemida; Charles A. Kamhoua; Jie Fu
Precision Profile Pollution Attack on Sequential Recommenders via Influence Function. (1%)Xiaoyu Du; Yingying Chen; Yang Zhang; Jinhui Tang
2024-12-01
Intermediate Outputs Are More Sensitive Than You Think. (61%)Tao Huang; Qingyu Huang; Jiayang Meng
Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection. (16%)Delong Zhu; Yuezun Li; Baoyuan Wu; Jiaran Zhou; Zhibo Wang; Siwei Lyu
Online Poisoning Attack Against Reinforcement Learning under Black-box Environments. (11%)Jianhui Li; Bokang Zhang; Junfeng Wu
2024-11-30
Hard-Label Black-Box Attacks on 3D Point Clouds. (99%)Daizong Liu; Yunbo Tao; Pan Zhou; Wei Hu
Exposing LLM Vulnerabilities: Adversarial Scam Detection and Performance. (69%)Chen-Wei Chang; Shailik Sarkar; Shutonu Mitra; Qi Zhang; Hossein Salemi; Hemant Purohit; Fengxiu Zhang; Michin Hong; Jin-Hee Cho; Chang-Tien Lu
Exact Certification of (Graph) Neural Networks Against Label Poisoning. (22%)Mahalakshmi Sabanayagam; Lukas Gosch; Stephan Günnemann; Debarghya Ghoshdastidar
Jailbreak Large Vision-Language Models Through Multi-Modal Linkage. (12%)Yu Wang; Xiaofei Zhou; Yichen Wang; Geyuan Zhang; Tianxing He
2024-11-29
Towards Class-wise Robustness Analysis. (99%)Tejaswini Medi; Julia Grabinski; Margret Keuper
FLARE: Towards Universal Dataset Purification against Backdoor Attacks. (81%)Linshan Hou; Wei Luo; Zhongyun Hua; Songhua Chen; Leo Yu Zhang; Yiming Li
Robust Table Integration in Data Lakes. (56%)Daomin Ji; Hui Luo; Zhifeng Bao; Shane Culpepper
On the Adversarial Robustness of Instruction-Tuned Large Language Models for Code. (38%)Md Imran Hossen; Xiali Hei
Parallel Stacked Aggregated Network for Voice Authentication in IoT-Enabled Smart Devices. (10%)Awais Khan; Ijaz Ul Haq; Khalid Mahmood Malik
Fusing Physics-Driven Strategies and Cross-Modal Adversarial Learning: Toward Multi-Domain Applications. (1%)Hana Satou; Alan Mitkiy
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks. (1%)Kim-Celine Kahl; Selen Erkan; Jeremias Traub; Carsten T. Lüth; Klaus Maier-Hein; Lena Maier-Hein; Paul F. Jaeger
2024-11-28
SceneTAP: Scene-Coherent Typographic Adversarial Planner against Vision-Language Models in Real-World Environments. (84%)Yue Cao; Yun Xing; Jie Zhang; Di Lin; Tianwei Zhang; Ivor Tsang; Yang Liu; Qing Guo
PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning. (69%)Shenghui Li; Edith C. -H. Ngai; Fanghua Ye; Thiemo Voigt
Random Sampling for Diffusion-based Adversarial Purification. (26%)Jiancheng Zhang; Peiran Dong; Yongyong Chen; Yin-Ping Zhao; Song Guo
Artificial intelligence and cybersecurity in banking sector: opportunities and risks. (12%)Ana Kovacevic; Sonja D. Radenkovic; Dragana Nikolic
Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models. (11%)Chung-Ting Tsai; Ching-Yun Ko; I-Hsin Chung; Yu-Chiang Frank Wang; Pin-Yu Chen
LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm. (2%)Dazhuang Liu; Yanqi Qiao; Rui Wang; Kaitai Liang; Georgios Smaragdakis
Enhancing Neural Network Robustness Against Fault Injection Through Non-linear Weight Transformations. (2%)Ninnart Fuengfusin; Hakaru Tamukoh
2024-11-27
Visual Adversarial Attack on Vision-Language Models for Autonomous Driving. (99%)Tianyuan Zhang; Lu Wang; Xinwei Zhang; Yitong Zhang; Boyi Jia; Siyuan Liang; Shengshan Hu; Qiang Fu; Aishan Liu; Xianglong Liu
Fall Leaf Adversarial Attack on Traffic Sign Classification. (99%)Anthony Etim; Jakub Szefer
Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment. (67%)Soumya Suvra Ghosal; Souradip Chakraborty; Vaibhav Singh; Tianrui Guan; Mengdi Wang; Ahmad Beirami; Furong Huang; Alvaro Velasquez; Dinesh Manocha; Amrit Singh Bedi
Neutralizing Backdoors through Information Conflicts for Large Language Models. (26%)Chen Chen; Yuchen Sun; Xueluan Gong; Jiaxin Gao; Kwok-Yan Lam
Hidden Data Privacy Breaches in Federated Learning. (22%)Xueluan Gong; Yuji Wang; Shuaike Li; Mengyuan Sun; Songze Li; Qian Wang; Kwok-Yan Lam; Chen Chen
SoK: Watermarking for AI-Generated Content. (3%)Xuandong Zhao; Sam Gunn; Miranda Christ; Jaiden Fairoze; Andres Fabrega; Nicholas Carlini; Sanjam Garg; Sanghyun Hong; Milad Nasr; Florian Tramer; Somesh Jha; Lei Li; Yu-Xiang Wang; Dawn Song
From Open Vocabulary to Open World: Teaching Vision Language Models to Detect Novel Objects. (1%)Zizhao Li; Zhengkang Xiang; Joseph West; Kourosh Khoshelham
2024-11-26
Adversarial Training in Low-Label Regimes with Margin-Based Interpolation. (99%)Tian Ye; Rajgopal Kannan; Viktor Prasanna
BadScan: An Architectural Backdoor Attack on Visual State Space Models. (98%)Om Suhas Deshmukh; Sankalp Nagaonkar; Achyut Mani Tripathi; Ashish Mishra
Stealthy Multi-Task Adversarial Attacks. (92%)Jiacheng Guo; Tianyun Zhang; Lei Li; Haochen Yang; Hongkai Yu; Minghai Qin
Adversarial Bounding Boxes Generation (ABBG) Attack against Visual Object Trackers. (82%)Fatemeh Nourilenjan Nokabadi; Jean-Francois Lalonde; Christian Gagné
MADE: Graph Backdoor Defense with Masked Unlearning. (82%)Xiao Lin; Mingjie Li; Yisen Wang
Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models. (75%)Shuyang Hao; Bryan Hooi; Jun Liu; Kai-Wei Chang; Zi Huang; Yujun Cai
Privacy-preserving Robotic-based Multi-factor Authentication Scheme for Secure Automated Delivery System. (9%)Yang Yang; Aryan Mohammadi Pasikhani; Prosanta Gope; Biplab Sikdar
PEFTGuard: Detecting Backdoor Attacks Against Parameter-Efficient Fine-Tuning. (2%)Zhen Sun; Tianshuo Cong; Yule Liu; Chenhao Lin; Xinlei He; Rongmao Chen; Xingshuo Han; Xinyi Huang
Multi-Objective Reinforcement Learning for Automated Resilient Cyber Defence. (1%)Ross O'Driscoll; Claudia Hagen; Joe Bater; James M. Adams
Improved Parallel Derandomization via Finite Automata with Applications. (1%)Jeff Giliberti; David G. Harris
2024-11-25
Unlocking The Potential of Adaptive Attacks on Diffusion-Based Purification. (99%)Andre Kassis; Urs Hengartner; Yaoliang Yu
Imperceptible Adversarial Examples in the Physical World. (99%)Weilin Xu; Sebastian Szyller; Cory Cornelius; Luis Murillo Rojas; Marius Arvinte; Alvaro Velasquez; Jason Martin; Nageen Himayat
Scaling Laws for Black box Adversarial Attacks. (99%)Chuan Liu; Huanran Chen; Yichi Zhang; Yinpeng Dong; Jun Zhu
Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack. (81%)Xide Xu; Muhammad Atif Butt; Sandesh Kamath; Bogdan Raducanu
UVCG: Leveraging Temporal Consistency for Universal Video Protection. (54%)KaiZhou Li; Jindong Gu; Xinchun Yu; Junjie Cao; Yansong Tang; Xiao-Ping Zhang
Guarding the Gate: ConceptGuard Battles Concept-Level Backdoors in Concept Bottleneck Models. (50%)Songning Lai; Yu Huang; Jiayu Yang; Gaoxiang Huang; Wenshuo Chen; Yutao Yue
Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing. (50%)Hanhui Wang; Yihua Zhang; Ruizheng Bai; Yue Zhao; Sijia Liu; Zhengzhong Tu
Sparse patches adversarial attacks via extrapolating point-wise information. (47%)Yaniv Nemcovsky; Avi Mendelson; Chaim Baskin
DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders. (10%)Sizai Hou; Songze Li; Duanyi Yao
RED: Robust Environmental Design. (10%)Jinghan Yang
BadSFL: Backdoor Attack against Scaffold Federated Learning. (3%)Xingshuo Han; Xuanye Zhang; Xiang Lan; Haozhao Wang; Shengmin Xu; Shen Ren; Jason Zeng; Ming Wu; Michael Heinrich; Tianwei Zhang
Why the Agent Made that Decision: Explaining Deep Reinforcement Learning with Vision Masks. (2%)Rui Zuo; Zifan Wang; Simon Khan; Garrett Ethan Katz; Qinru Qiu
XAI and Android Malware Models. (2%)Maithili Kulkarni; Mark Stamp
Revisiting Marr in Face: The Building of 2D--2.5D--3D Representations in Deep Neural Networks. (1%)Xiangyu Zhu; Chang Yu; Jiankuo Zhao; Zhaoxiang Zhang; Stan Z. Li; Zhen Lei
2024-11-24
Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks. (99%)Peng Xie; Yequan Bie; Jianda Mao; Yangqiu Song; Yang Wang; Hao Chen; Kani Chen
ExAL: An Exploration Enhanced Adversarial Learning Algorithm. (92%)A Vinil; Aneesh Sreevallabh Chivukula; Pranav Chintareddy
A Tunable Despeckling Neural Network Stabilized via Diffusion Equation. (64%)Yi Ran; Zhichang Guo; Jia Li; Yao Li; Martin Burger; Boying Wu
Hide in Plain Sight: Clean-Label Backdoor for Auditing Membership Inference. (10%)Depeng Chen; Hao Chen; Hulin Jin; Jie Cui; Hong Zhong
Stealth Attacks Against Moving Target Defense for Smart Grid. (2%)Ke Sun; Iñaki Esnaola; H. Vincent Poor
DRIVE: Dual-Robustness via Information Variability and Entropic Consistency in Source-Free Unsupervised Domain Adaptation. (2%)Ruiqiang Xiao; Songning Lai; Yijun Yang; Jiemin Wu; Yutao Yue; Lei Zhu
2024-11-23
Improving Transferable Targeted Attacks with Feature Tuning Mixup. (99%)Kaisheng Liang; Xuelong Dai; Yanjie Li; Dong Wang; Bin Xiao
Enhancing the Transferability of Adversarial Attacks on Face Recognition with Diverse Parameters Augmentation. (99%)Fengfan Zhou; Bangjie Yin; Hefei Ling; Qianyu Zhou; Wenxuan Wang
Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-grained Knowledge Alignment. (4%)Alvi Md Ishmam; Christopher Thomas
LoBAM: LoRA-Based Backdoor Attack on Model Merging. (2%)Ming Yin; Jingyang Zhang; Jingwei Sun; Minghong Fang; Hai Li; Yiran Chen
2024-11-22
Exploring the Robustness and Transferability of Patch-Based Adversarial Attacks in Quantized Neural Networks. (99%)Amira Guesmi; Bassem Ouni; Muhammad Shafique
Gradient Masking All-at-Once: Ensemble Everything Everywhere Is Not Robust. (99%)Jie Zhang; Kristina Nikolić; Nicholas Carlini; Florian Tramèr
Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbreaks. (98%)Han Wang; Gang Wang; Huan Zhang
Derivative-Free Diffusion Manifold-Constrained Gradient for Unified XAI. (45%)Won Jun Kim; Hyungjin Chung; Jaemin Kim; Sangmin Lee; Byeongsu Sim; Jong Chul Ye
Who Can Withstand Chat-Audio Attacks? An Evaluation Benchmark for Large Language Models. (41%)Wanqi Yang; Yanda Li; Meng Fang; Yunchao Wei; Tianyi Zhou; Ling Chen
Universal and Context-Independent Triggers for Precise Control of LLM Outputs. (31%)Jiashuo Liang; Guancheng Li; Yang Yu
Benchmarking the Robustness of Optical Flow Estimation to Corruptions. (13%)Zhonghua Yi; Hao Shi; Qi Jiang; Yao Gao; Ze Wang; Yufan Zhang; Kailun Yang; Kaiwei Wang
Twin Trigger Generative Networks for Backdoor Attacks against Object Detection. (4%)Zhiying Li; Zhi Liu; Guanggang Geng; Shreyank N Gowda; Shuyuan Lin; Jian Weng; Xiaobo Jin
Geminio: Language-Guided Gradient Inversion Attacks in Federated Learning. (2%)Junjie Shan; Ziqi Zhao; Jialin Lu; Rui Zhang; Siu Ming Yiu; Ka-Ho Chow
Heavy-tailed Contamination is Easier than Adversarial Contamination. (1%)Yeshwanth Cherapanamjeri; Daniel Lee
Exploiting Watermark-Based Defense Mechanisms in Text-to-Image Diffusion Models for Unauthorized Data Usage. (1%)Soumil Datta; Shih-Chieh Dai; Leo Yu; Guanhong Tao
Reliable Evaluation of Attribution Maps in CNNs: A Perturbation-Based Approach. (1%)Lars Nieradzik; Henrike Stephani; Janis Keuper
2024-11-21
Generating Realistic Adversarial Examples for Business Processes using Variational Autoencoders. (99%)Alexander Stevens; Jari Peeperkorn; Johannes De Smedt; Jochen De Weerdt
Learning Fair Robustness via Domain Mixup. (81%)Meiyu Zhong; Ravi Tandon
GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs. (78%)Advik Raj Basani; Xiao Zhang
Adversarial Prompt Distillation for Vision-Language Models. (75%)Lin Luo; Xin Wang; Bojia Zi; Shihao Zhao; Xingjun Ma
AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection. (74%)Jialin Lu; Junjie Shan; Ziqi Zhao; Ka-Ho Chow
Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians. (50%)William N. Caballero; Matthew LaRosa; Alexander Fisher; Vahid Tarokh
GraphTheft: Quantifying Privacy Risks in Graph Prompt Learning. (4%)Jiani Zhu; Xi Lin; Yuxin Qi; Qinghua Mao
Global Challenge for Safe and Secure LLMs Track 1. (4%)Xiaojun Jia; Yihao Huang; Yang Liu; Peng Yan Tan; Weng Kuan Yau; Mun-Thye Mak; Xin Ming Sim; Wee Siong Ng; See Kiong Ng; Hanqing Liu; Lifeng Zhou; Huanqian Yan; Xiaobing Sun; Wei Liu; Long Wang; Yiming Qian; Yong Liu; Junxiao Yang; Zhexin Zhang; Leqi Lei; Renmiao Chen; Yida Lu; Shiyao Cui; Zizhou Wang; Shaohua Li; Yan Wang; Rick Siow Mong Goh; Liangli Zhen; Yingjie Zhang; Zhe Zhao
TrojanEdit: Backdooring Text-Based Image Editing Models. (3%)Ji Guo; Peihong Chen; Wenbo Jiang; Guoming Lu
Evaluating the Robustness of Analogical Reasoning in Large Language Models. (1%)Martha Lewis; Melanie Mitchell
Memory Backdoor Attacks on Neural Networks. (1%)Eden Luzon; Guy Amit; Roy Weiss; Yisroel Mirsky
2024-11-20
Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks. (98%)Yong Xie; Weijie Zheng; Hanxun Huang; Guangnan Ye; Xingjun Ma
TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models. (96%)Xin Wang; Kai Chen; Jiaming Zhang; Jingjing Chen; Xingjun Ma
Provably Efficient Action-Manipulation Attack Against Continuous Reinforcement Learning. (86%)Zhi Luo; Xiyuan Yang; Pan Zhou; Di Wang
A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles. (86%)Junae Kim; Amardeep Kaur
Rethinking the Intermediate Features in Adversarial Attacks: Misleading Robotic Models via Adversarial Distillation. (68%)Ke Zhao; Huayang Huang; Miao Li; Yu Wu (Wuhan University)
AI-generated Image Detection: Passive or Watermark? (22%)Moyang Guo; Yuepeng Hu; Zhengyuan Jiang; Zeyu Li; Amir Sadovnik; Arka Daw; Neil Gong
SoK: A Systems Perspective on Compound AI Threats and Countermeasures. (12%)Sarbartha Banerjee; Prateek Sahu; Mulong Luo; Anjo Vahldiek-Oberwagner; Neeraja J. Yadwadkar; Mohit Tiwari
CopyrightMeter: Revisiting Copyright Protection in Text-to-image Models. (12%)Naen Xu; Changjiang Li; Tianyu Du; Minxi Li; Wenjie Luo; Jiacheng Liang; Yuyuan Li; Xuhong Zhang; Meng Han; Jianwei Yin; Ting Wang
Bounding-box Watermarking: Defense against Model Extraction Attacks on Object Detectors. (5%)Satoru Koda; Ikuya Morikawa
WaterPark: A Robustness Assessment of Language Model Watermarking. (1%)Jiacheng Liang; Zian Wang; Lauren Hong; Shouling Ji; Ting Wang
2024-11-19
NMT-Obfuscator Attack: Ignore a sentence in translation with only one word. (99%)Sahar Sadrizadeh; César Descalzo; Ljiljana Dolamic; Pascal Frossard
Stochastic BIQA: Median Randomized Smoothing for Certified Blind Image Quality Assessment. (75%)Ekaterina Shumitskaya; Mikhail Pautov; Dmitriy Vatolin; Anastasia Antsiferova
When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations. (3%)Huaizhi Ge; Yiming Li; Qifan Wang; Yongfeng Zhang; Ruixiang Tang
2024-11-18
Theoretical Corrections and the Leveraging of Reinforcement Learning to Enhance Triangle Attack. (99%)Nicole Meng; Caleb Manicke; David Chen; Yingjie Lao; Caiwen Ding; Pengyu Hong; Kaleel Mahmood
Adapting to Cyber Threats: A Phishing Evolution Network (PEN) Framework for Phishing Generation and Analyzing Evolution Patterns using Large Language Models. (87%)Fengchao Chen; Tingmin Wu; Van Nguyen; Shuo Wang; Hongsheng Hu; Alsharif Abuadbba; Carsten Rudolph
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning. (75%)Kichang Lee; Yujin Shin; Jonghyuk Yun; Jun Han; JeongGil Ko
CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization. (67%)Nay Myat Min; Long H. Pham; Yige Li; Jun Sun
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization. (50%)Mingda Zhang; Mingli Zhu; Zihao Zhu; Baoyuan Wu
Few-shot Model Extraction Attacks against Sequential Recommender Systems. (38%)Hui Zhang; Fu Liu
CLUE-MARK: Watermarking Diffusion Models using CLWE. (26%)Kareem Shehata; Aashish Kolluri; Prateek Saxena
The Dark Side of Trust: Authority Citation-Driven Jailbreak Attacks on Large Language Models. (13%)Xikang Yang; Xuehai Tang; Jizhong Han; Songlin Hu
Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods. (8%)Egor Kovalev; Georgii Bychkov; Khaled Abud; Aleksandr Gushchin; Anna Chistyakova; Sergey Lavrushkin; Dmitriy Vatolin; Anastasia Antsiferova
2024-11-17
Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics. (86%)Taowen Wang; Dongfang Liu; James Chenhao Liang; Wenhao Yang; Qifan Wang; Cheng Han; Jiebo Luo; Ruixiang Tang
JailbreakLens: Interpreting Jailbreak Mechanism in the Lens of Representation and Circuit. (47%)Zeqing He; Zhibo Wang; Zhixuan Chu; Huiyu Xu; Rui Zheng; Kui Ren; Chun Chen
Countering Backdoor Attacks in Image Recognition: A Survey and Evaluation of Mitigation Strategies. (22%)Kealan Dunnett; Reza Arablouei; Dimity Miller; Volkan Dedeoglu; Raja Jurdak
SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach. (9%)Ruoxi Sun; Jiamin Chang; Hammond Pearce; Chaowei Xiao; Bo Li; Qi Wu; Surya Nepal; Minhui Xue
CLMIA: Membership Inference Attacks via Unsupervised Contrastive Learning. (2%)Depeng Chen; Xiao Liu; Jie Cui; Hong Zhong (School of Computer Science and Technology, Anhui University)
2024-11-15
A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks. (98%)Benoit Coqueret; Mathieu Carbone; Olivier Sentieys; Gabriel Zaid
Edge-Only Universal Adversarial Attacks in Distributed Learning. (98%)Giulio Rossolini; Tommaso Baldi; Alessandro Biondi; Giorgio Buttazzo
Prompt-Guided Environmentally Consistent Adversarial Patch. (82%)Chaoqun Li; Huanqian Yan; Lifeng Zhou; Tairan Chen; Zhuodong Liu; Hang Su
Continual Adversarial Reinforcement Learning (CARL) of False Data Injection detection: forgetting and explainability. (81%)Pooja Aslami; Kejun Chen; Timothy M. Hansen; Malik Hassanaly
EveGuard: Defeating Vibration-based Side-Channel Eavesdropping with Audio Adversarial Perturbations. (68%)Jung-Woo Chang; Ke Sun; David Xia; Xinyu Zhang; Farinaz Koushanfar
Comparing Robustness Against Adversarial Attacks in Code Generation: LLM-Generated vs. Human-Written. (68%)Md Abdul Awal; Mrigank Rochan; Chanchal K. Roy
Safe Text-to-Image Generation: Simply Sanitize the Prompt Embedding. (11%)Huming Qiu; Guanxu Chen; Mi Zhang; Min Yang
Measuring Non-Adversarial Reproduction of Training Data in Large Language Models. (9%)Michael Aerni; Javier Rando; Edoardo Debenedetti; Nicholas Carlini; Daphne Ippolito; Florian Tramèr
Toward Robust and Accurate Adversarial Camouflage Generation against Vehicle Detectors. (1%)Jiawei Zhou; Linye Lyu; Daojing He; Yu Li
RedTest: Towards Measuring Redundancy in Deep Neural Networks Effectively. (1%)Yao Lu; Peixin Zhang; Jingyi Wang; Lei Ma; Xiaoniu Yang; Qi Xuan
2024-11-14
BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation. (99%)Zheng Zhou; Wenquan Feng; Shuchang Lyu; Guangliang Cheng; Xiaowei Huang; Qi Zhao
Transferable Adversarial Attacks against ASR. (89%)Xiaoxue Gao; Zexin Li; Yiming Chen; Cong Liu; Haizhou Li
Adversarial Attacks Using Differentiable Rendering: A Survey. (83%)Matthew Hull; Chao Zhang; Zsolt Kira; Duen Horng Chau
Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey. (69%)Xuannan Liu; Xing Cui; Peipei Li; Zekun Li; Huaibo Huang; Shuhan Xia; Miaoxuan Zhang; Yueying Zou; Ran He
Your Fixed Watermark is Fragile: Towards Semantic-Aware Watermark for EaaS Copyright Protection. (11%)Zekun Fei; Biao Yi; Jianing Geng; Ruiqi He; Lihai Nie; Zheli Liu
Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology. (1%)Dhananjay Tomar; Alexander Binder; Andreas Kleppe
2024-11-13
Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness. (95%)Suhyeok Jang; Seojin Kim; Jinwoo Shin; Jongheon Jeong
Trap-MID: Trapdoor-based Defense against Model Inversion Attacks. (81%)Zhen-Ting Liu; Shang-Tse Chen
Robust Optimal Power Flow Against Adversarial Attacks: A Tri-Level Optimization Approach. (81%)Saman Mazaheri Khamaneh; Tong Wu
The VLLM Safety Paradox: Dual Ease in Jailbreak Attack and Defense. (22%)Yangyang Guo; Fangkai Jiao; Liqiang Nie; Mohan Kankanhalli
LLMStinger: Jailbreaking LLMs using RL fine-tuned LLMs. (8%)Piyush Jha; Arnav Arora; Vijay Ganesh
2024-11-12
Chain Association-based Attacking and Shielding Natural Language Processing Systems. (99%)Jiacheng Huang; Long Chen
IAE: Irony-based Adversarial Examples for Sentiment Analysis Systems. (99%)Xiaoyin Yi; Jiacheng Huang
Zer0-Jack: A Memory-efficient Gradient-based Jailbreaking Method for Black-box Multi-modal Large Language Models. (78%)Tiejin Chen; Kaishen Wang; Hua Wei
Deceiving Question-Answering Models: A Hybrid Word-Level Adversarial Approach. (67%)Jiyao Li; Mingze Ni; Yongshun Gong; Wei Liu
A Survey on Adversarial Machine Learning for Code Data: Realistic Threats, Countermeasures, and Interpretations. (64%)Yulong Yang; Haoran Fan; Chenhao Lin; Qian Li; Zhengyu Zhao; Chao Shen; Xiaohong Guan
New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook. (13%)Meng Yang; Tianqing Zhu; Chi Liu; WanLei Zhou; Shui Yu; Philip S. Yu
Adaptive Meta-Learning for Robust Deepfake Detection: A Multi-Agent Framework to Data Drift and Model Generalization. (1%)Dinesh Srivasthav P; Badri Narayan Subudhi
2024-11-11
Boosting the Targeted Transferability of Adversarial Examples via Salient Region & Weighted Feature Drop. (99%)Shanjun Xu; Linghui Li; Kaiguo Yuan; Bingyu Li
Computable Model-Independent Bounds for Adversarial Quantum Machine Learning. (69%)Bacui Li; Tansu Alpcan; Chandra Thapa; Udaya Parampalli
The Inherent Adversarial Robustness of Analog In-Memory Computing. (61%)Corey Lammie; Julian Büchel; Athanasios Vasilopoulos; Manuel Le Gallo; Abu Sebastian
Rapid Response: Mitigating LLM Jailbreaks with a Few Examples. (54%)Alwin Peng; Julian Michael; Henry Sleight; Ethan Perez; Mrinank Sharma
Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image detectors. (1%)Anisha Pal; Julia Kruk; Mansi Phute; Manognya Bhattaram; Diyi Yang; Duen Horng Chau; Judy Hoffman
2024-11-10
Adversarial Detection with a Dynamically Stable System. (99%)Xiaowei Long; Jie Lin; Xiangyuan Yang
Deferred Backdoor Functionality Attacks on Deep Learning Models. (82%)Jeongjin Shin; Sangdon Park
SequentialBreak: Large Language Models Can be Fooled by Embedding Jailbreak Prompts into Sequential Prompt Chains. (70%)Bijoy Ahmed Saiem; MD Sadik Hossain Shanto; Rakib Ahsan; Md Rafi ur Rashid
2024-11-09
Target-driven Attack for Large Language Models. (73%)Chong Zhang; Mingyu Jin; Dong Shu; Taowen Wang; Dongfang Liu; Xiaobo Jin
AI-Compass: A Comprehensive and Effective Multi-module Testing Tool for AI Systems. (33%)Zhiyu Zhu; Zhibo Jin; Hongsheng Hu; Minhui Xue; Ruoxi Sun; Seyit Camtepe; Praveen Gauravaram; Huaming Chen
2024-11-08
Post-Hoc Robustness Enhancement in Graph Neural Networks with Conditional Random Fields. (41%)Yassine Abbahaddou; Sofiane Ennadir; Johannes F. Lutzeyer; Fragkiskos D. Malliaros; Michalis Vazirgiannis
Reasoning Robustness of LLMs to Adversarial Typographical Errors. (13%)Esther Gan; Yiran Zhao; Liying Cheng; Yancan Mao; Anirudh Goyal; Kenji Kawaguchi; Min-Yen Kan; Michael Shieh
Towards a Re-evaluation of Data Forging Attacks in Practice. (2%)Mohamed Suliman; Anisa Halimi; Swanand Kadhe; Nathalie Baracaldo; Douglas Leith
2024-11-07
Neural Fingerprints for Adversarial Attack Detection. (99%)Haim Fisher; Moni Shahar; Yehezkel S. Resheff
Adversarial Robustness of In-Context Learning in Transformers for Linear Regression. (98%)Usman Anwar; Johannes von Oswald; Louis Kirsch; David Krueger; Spencer Frei
Seeing is Deceiving: Exploitation of Visual Pathways in Multi-Modal Language Models. (97%)Pete Janowczyk; Linda Laurier; Ave Giulietta; Arlo Octavia; Meade Cleti
Attention Masks Help Adversarial Attacks to Bypass Safety Detectors. (97%)Yunfan Shi
Defending Deep Regression Models against Backdoor Attacks. (78%)Lingyu Du; Yupei Liu; Jinyuan Jia; Guohao Lan
Hardware and Software Platform Inference. (5%)Cheng Zhang; Hanna Foerster; Robert D. Mullins; Yiren Zhao; Ilia Shumailov
Saliency Assisted Quantization for Neural Networks. (1%)Elmira Mousa Rezabeyk; Salar Beigzad; Yasin Hamzavi; Mohsen Bagheritabar; Seyedeh Sogol Mirikhoozani
MISGUIDE: Security-Aware Attack Analytics for Smart Grid Load Frequency Control. (1%)Nur Imtiazul Haque; Prabin Mali; Mohammad Zakaria Haider; Mohammad Ashiqur Rahman; Sumit Paudyal
2024-11-06
Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging. (95%)Rui Luo; Jie Bao; Zhixin Zhou; Chuangyin Dang
Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization. (86%)Yuhao He; Jinyu Tian; Xianwei Zheng; Li Dong; Yuanman Li; Leo Yu Zhang; Jiantao Zhou
FedRISE: Rating Induced Sign Election of Gradients for Byzantine Tolerant Federated Aggregation. (41%)Joseph Geo Benjamin; Mothilal Asokan; Mohammad Yaqub; Karthik Nandakumar
MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue. (10%)Fengxiang Wang; Ranjie Duan; Peng Xiao; Xiaojun Jia; YueFeng Chen; Chongwen Wang; Jialing Tao; Hang Su; Jun Zhu; Hui Xue
Towards Secured Smart Grid 2.0: Exploring Security Threats, Protection Models, and Challenges. (4%)Lan-Huong Nguyen; Van-Linh Nguyen; Ren-Hung Hwang; Jian-Jhih Kuo; Yu-Wen Chen; Chien-Chung Huang; Ping-I Pan
Mitigating Privacy Risks in LLM Embeddings from Embedding Inversion. (1%)Tiantian Liu; Hongwei Yao; Tong Wu; Zhan Qin; Feng Lin; Kui Ren; Chun Chen
2024-11-05
Region-Guided Attack on the Segment Anything Model (SAM). (99%)Xiaoliang Liu; Furao Shen; Jian Zhao
Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training. (99%)Junhao Dong; Xinghua Qu; Z. Jane Wang; Yew-Soon Ong
Double Whammy: Stealthy Data Manipulation aided Reconstruction Attack on Graph Federated Learning. (91%)Jinyin Chen; Minying Ma; Haibin Zheng; Qi Xuan
Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras. (75%)Roberto Riaño; Gorka Abad; Stjepan Picek; Aitor Urbieta
Formal Logic-guided Robust Federated Learning against Poisoning Attacks. (68%)Dung Thuy Nguyen; Ziyan An; Taylor T. Johnson; Meiyi Ma; Kevin Leach
Oblivious Defense in ML Models: Backdoor Removal without Detection. (15%)Shafi Goldwasser; Jonathan Shafer; Neekon Vafa; Vinod Vaikuntanathan
DM4Steal: Diffusion Model For Link Stealing Attack On Graph Neural Networks. (13%)Jinyin Chen; Haonan Ma; Haibin Zheng
FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses. (9%)Isaac Baglin; Xiatian Zhu; Simon Hadfield
Lost in Context: The Influence of Context on Feature Attribution Methods for Object Recognition. (3%)Sayanta Adhikari; Rishav Kumar; Konda Reddy Mopuri; Rajalakshmi Pachamuthu
Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset. (1%)Yingzi Ma; Jiongxiao Wang; Fei Wang; Siyuan Ma; Jiazhao Li; Xiujun Li; Furong Huang; Lichao Sun; Bo Li; Yejin Choi; Muhao Chen; Chaowei Xiao
2024-11-04
Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning. (99%)Jinyin Chen; Wenbo Mu; Luxin Zhang; Guohan Huang; Haibin Zheng; Yao Cheng
Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack. (99%)Xiaojun Jia; Sensen Gao; Qing Guo; Ke Ma; Yihao Huang; Simeng Qin; Yang Liu; Ivor Tsang; Xiaochun Cao
LiDAttack: Robust Black-box Attack on LiDAR-based Object Detection. (99%)Jinyin Chen; Danxin Liao; Sheng Xiang; Haibin Zheng
Alignment-Based Adversarial Training (ABAT) for Improving the Robustness and Accuracy of EEG-Based BCIs. (91%)Xiaoqing Chen; Ziwei Wang; Dongrui Wu
Attacking Vision-Language Computer Agents via Pop-ups. (9%)Yanzhe Zhang; Tao Yu; Diyi Yang
Stochastic Monkeys at Play: Random Augmentations Cheaply Break LLM Safety Alignment. (2%)Jason Vega; Junsheng Huang; Gaokai Zhang; Hangoo Kang; Minjia Zhang; Gagandeep Singh
FactTest: Factuality Testing in Large Language Models with Statistical Guarantees. (1%)Fan Nie; Xiaotian Hou; Shuhang Lin; James Zou; Huaxiu Yao; Linjun Zhang
Differentially Private Integrated Decision Gradients (IDG-DP) for Radar-based Human Activity Recognition. (1%)Idris Zakariyya; Linda Tran; Kaushik Bhargav Sivangi; Paul Henderson; Fani Deligianni
2024-11-03
Undermining Image and Text Classification Algorithms Using Adversarial Attacks. (98%)Langalibalele Lunga; Suhas Sreehari
SQL Injection Jailbreak: a structural disaster of large language models. (78%)Jiawei Zhao; Kejiang Chen; Weiming Zhang; Nenghai Yu
Rotation Perturbation Robustness in Point Cloud Analysis: A Perspective of Manifold Distillation. (2%)Xinyu Xu; Huazhen Liu; Feiming Wei; Huilin Xiong; Wenxian Yu; Tao Zhang
Poison Attacks and Adversarial Prompts Against an Informed University Virtual Assistant. (1%)Ivan A. Fernandez; Subash Neupane; Sudip Mittal; Shahram Rahimi
TabSec: A Collaborative Framework for Novel Insider Threat Detection. (1%)Zilin Huang; Xiangyan Tang; Hongyu Li; Xinyi Cao; Jieren Cheng
Learning predictable and robust neural representations by straightening image sequences. (1%)Xueyan Niu; Cristina Savin; Eero P. Simoncelli
2024-11-02
$B^4$: A Black-Box Scrubbing Attack on LLM Watermarks. (75%)Baizhou Huang; Xiao Pu; Xiaojun Wan
What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks. (1%)Nathalie Maria Kirch; Severin Field; Stephen Casper
2024-11-01
Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models. (99%)Jonggyu Jang; Hyeonsu Lyu; Jungyeon Koh; Hyun Jong Yang
Defense Against Prompt Injection Attack by Leveraging Attack Techniques. (81%)Yulin Chen; Haoran Li; Zihao Zheng; Yangqiu Song; Dekai Wu; Bryan Hooi
Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing. (68%)Weizhi Gao; Zhichao Hou; Han Xu; Xiaorui Liu
Attention Tracker: Detecting Prompt Injection Attacks in LLMs. (26%)Kuo-Han Hung; Ching-Yun Ko; Ambrish Rawat; I-Hsin Chung; Winston H. Hsu; Pin-Yu Chen
Emoji Attack: A Method for Misleading Judge LLMs in Safety Risk Detection. (22%)Zhipeng Wei; Yuqi Liu; N. Benjamin Erichson
Outlier-Oriented Poisoning Attack: A Grey-box Approach to Disturb Decision Boundaries by Perturbing Outliers in Multiclass Learning. (13%)Anum Paracha; Junaid Arshad; Mohamed Ben Farah; Khalid Ismail
Identify Backdoored Model in Federated Learning via Individual Unlearning. (5%)Jiahao Xu; Zikai Zhang; Rui Hu
Plentiful Jailbreaks with String Compositions. (2%)Brian R. Y. Huang
Uncertainty-based Offline Variational Bayesian Reinforcement Learning for Robustness under Diverse Data Corruptions. (2%)Rui Yang; Jie Wang; Guoping Wu; Bin Li
Examining Attacks on Consensus and Incentive Systems in Proof-of-Work Blockchains: A Systematic Literature Review. (1%)Dinitha Wijewardhana; Sugandima Vidanagamachchi; Nalin Arachchilage
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. (1%)Shreyash Arya; Sukrut Rao; Moritz Böhle; Bernt Schiele
Towards Building Secure UAV Navigation with FHE-aware Knowledge Distillation. (1%)Arjun Ramesh Kaushik; Charanjit Jutla; Nalini Ratha
2024-10-31
Noise as a Double-Edged Sword: Reinforcement Learning Exploits Randomized Defenses in Neural Networks. (99%)Steve Bakos; Pooria Madani; Heidar Davoudi
Protecting Feed-Forward Networks from Adversarial Attacks Using Predictive Coding. (99%)Ehsan Ganjidoost; Jeff Orchard
Wide Two-Layer Networks can Learn from Adversarial Perturbations. (98%)Soichiro Kumano; Hiroshi Kera; Toshihiko Yamasaki
DiffPAD: Denoising Diffusion-based Adversarial Patch Decontamination. (93%)Jia Fu; Xiao Zhang; Sepideh Pashami; Fatemeh Rahimian; Anders Holst
I Can Hear You: Selective Robust Training for Deepfake Audio Detection. (86%)Zirui Zhang; Wei Hao; Aroon Sankoh; William Lin; Emanuel Mendiola-Ortiz; Junfeng Yang; Chengzhi Mao
Pseudo-Conversation Injection for LLM Goal Hijacking. (75%)Zheng Chen; Buhui Yao
ARQ: A Mixed-Precision Quantization Framework for Accurate and Certifiably Robust DNNs. (41%)Yuchen Yang; Shubham Ugare; Yifan Zhao; Gagandeep Singh; Sasa Misailovic
Optical Lens Attack on Monocular Depth Estimation for Autonomous Driving. (5%)Ce Zhou; Qiben Yan; Daniel Kent; Guangjing Wang; Weikang Ding; Ziqi Zhang; Hayder Radha
Adversarial Attacks of Vision Tasks in the Past 10 Years: A Survey. (2%)Chiyu Zhang; Xiaogang Xu; Jiafei Wu; Zhe Liu; Lu Zhou
2024-10-30
FAIR-TAT: Improving Model Fairness Using Targeted Adversarial Training. (99%)Tejaswini Medi; Steffen Jung; Margret Keuper
Keep on Swimming: Real Attackers Only Need Partial Knowledge of a Multi-Model System. (99%)Julian Collado; Kevin Stangl
CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense. (99%)Mingkun Zhang; Keping Bi; Wei Chen; Quanrun Chen; Jiafeng Guo; Xueqi Cheng
One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks. (98%)Ji Guo; Wenbo Jiang; Rui Zhang; Guoming Lu; Hongwei Li
Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector. (87%)Youcheng Huang; Fengbin Zhu; Jingkun Tang; Pan Zhou; Wenqiang Lei; Jiancheng Lv; Tat-Seng Chua
HijackRAG: Hijacking Attacks against Retrieval-Augmented Large Language Models. (82%)Yucheng Zhang; Qinfeng Li; Tianyu Du; Xuhong Zhang; Xinkui Zhao; Zhengwen Feng; Jianwei Yin
Understanding and Improving Adversarial Collaborative Filtering for Robust Recommendation. (67%)Kaike Zhang; Qi Cao; Yunfan Wu; Fei Sun; Huawei Shen; Xueqi Cheng
Teaching a Language Model to Distinguish Between Similar Details using a Small Adversarial Training Set. (64%)Chris Achard
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion. (62%)Ji Guo; Hongwei Li; Wenbo Jiang; Guoming Lu
Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images. (2%)Qi Song; Ziyuan Luo; Ka Chun Cheung; Simon See; Renjie Wan
Byzantine-Robust Federated Learning: An Overview With Focus on Developing Sybil-based Attacks to Backdoor Augmented Secure Aggregation Protocols. (1%)Atharv Deshmukh
ProTransformer: Robustify Transformers via Plug-and-Play Paradigm. (1%)Zhichao Hou; Weizhi Gao; Yuchen Shen; Feiyi Wang; Xiaorui Liu
Attribute-to-Delete: Machine Unlearning via Datamodel Matching. (1%)Kristian Georgiev; Roy Rinberg; Sung Min Park; Shivam Garg; Andrew Ilyas; Aleksander Madry; Seth Neel
Stealing User Prompts from Mixture of Experts. (1%)Itay Yona; Ilia Shumailov; Jamie Hayes; Nicholas Carlini
InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection Guardrail Models. (1%)Hao Li; Xiaogeng Liu; Chaowei Xiao
2024-10-29
On the Robustness of Adversarial Training Against Uncertainty Attacks. (99%)Emanuele Ledda; Giovanni Scodeller; Daniele Angioni; Giorgio Piras; Antonio Emanuele Cinà; Giorgio Fumera; Battista Biggio; Fabio Roli
CausAdv: A Causal-based Framework for Detecting Adversarial Examples. (99%)Hichem Debbi
Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models. (98%)Lu Yu; Haiyang Zhang; Changsheng Xu
IDEATOR: Jailbreaking Large Vision-Language Models Using Themselves. (83%)Ruofan Wang; Bo Wang; Xiaosen Wang; Xingjun Ma; Yu-Gang Jiang
Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers. (82%)Lam Nguyen Tung; Steven Cho; Xiaoning Du; Neelofar Neelofar; Valerio Terragni; Stefano Ruberto; Aldeida Aleti
Longitudinal Mammogram Exam-based Breast Cancer Diagnosis Models: Vulnerability to Adversarial Attacks. (81%)Zhengbo Zhou; Degan Hao; Dooman Arefan; Margarita Zuley; Jules Sumkin; Shandong Wu
AmpleGCG-Plus: A Strong Generative Model of Adversarial Suffixes to Jailbreak LLMs with Higher Success Rates in Fewer Attempts. (78%)Vishal Kumar; Zeyi Liao; Jaylen Jones; Huan Sun
Embedding-based classifiers can detect prompt injection attacks. (64%)Md. Ahsan Ayub; Subhabrata Majumdar
Enhancing Adversarial Attacks through Chain of Thought. (54%)Jingbo Su
Power side-channel leakage localization through adversarial training of deep neural networks. (11%)Jimmy Gammell; Anand Raghunathan; Kaushik Roy
Enhancing Safety and Robustness of Vision-Based Controllers via Reachability Analysis. (1%)Kaustav Chakraborty; Aryaman Gupta; Somil Bansal
DynaMath: A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models. (1%)Chengke Zou; Xingang Guo; Rui Yang; Junyu Zhang; Bin Hu; Huan Zhang
2024-10-28
Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack. (99%)Shengjing Tian; Yinan Han; Xiantong Zhao; Bin Liu; Xiuping Liu
AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion models. (96%)Yaopei Zeng; Yuanpu Cao; Bochuan Cao; Yurui Chang; Jinghui Chen; Lu Lin
BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks. (93%)Yunhan Zhao; Xiang Zheng; Lin Luo; Yige Li; Xingjun Ma; Yu-Gang Jiang
FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks. (91%)Jiongxiao Wang; Fangzhou Wu; Wendi Li; Jinsheng Pan; Edward Suh; Z. Morley Mao; Muhao Chen; Chaowei Xiao
TACO: Adversarial Camouflage Optimization on Trucks to Fool Object Detectors. (88%)Adonisz Dimitriu; Tamás Michaletzky; Viktor Remeli
Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models. (83%)Piotr Przybyła
Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks. (80%)Dario Pasquini; Evgenios M. Kornaropoulos; Giuseppe Ateniese
Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring. (50%)Honglin Mu; Han He; Yuxin Zhou; Yunlong Feng; Yang Xu; Libo Qin; Xiaoming Shi; Zeming Liu; Xudong Han; Qi Shi; Qingfu Zhu; Wanxiang Che
Mitigating Unauthorized Speech Synthesis for Voice Protection. (9%)Zhisheng Zhang; Qianyi Yang; Derui Wang; Pengyang Huang; Yuxin Cao; Kai Ye; Jie Hao
Palisade -- Prompt Injection Detection Framework. (1%)Sahasra Kokkula; Somanathan R; Nandavardhan R; Aashishkumar; G Divya
2024-10-27
Integrating uncertainty quantification into randomized smoothing based robustness guarantees. (98%)Sina Däubener; Kira Maag; David Krueger; Asja Fischer
LLM Robustness Against Misinformation in Biomedical Question Answering. (80%)Alexander Bondarenko; Adrian Viehweger
Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection. (1%)Md Abdur Rahman; Fan Wu; Alfredo Cuzzocrea; Sheikh Iqbal Ahamed
2024-10-26
Adversarial Attacks Against Double RIS-Assisted MIMO Systems-based Autoencoder in Finite-Scattering Environments. (99%)Bui Duc Son; Ngo Nam Khanh; Chien Trinh Van; Dong In Kim
Transferable Adversarial Attacks on SAM and Its Downstream Models. (99%)Song Xia; Wenhan Yang; Yi Yu; Xun Lin; Henghui Ding; Lingyu Duan; Xudong Jiang
Generative Adversarial Patches for Physical Attacks on Cross-Modal Pedestrian Re-Identification. (98%)Yue Su; Hao Li; Maoguo Gong
CodePurify: Defend Backdoor Attacks on Neural Code Models via Entropy-based Purification. (76%)Fangwen Mu; Junjie Wang; Zhuohao Yu; Lin Shi; Song Wang; Mingyang Li; Qing Wang
Robust Model Evaluation over Large-scale Federated Networks. (2%)Amir Najafi; Samin Mahdizadeh Sani; Farzan Farnia
2024-10-25
GPT-4o System Card. (76%)OpenAI; Aaron Hurst; Adam Lerer; Adam P. Goucher; Adam Perelman; Aditya Ramesh; Aidan Clark; AJ Ostrow; Akila Welihinda; Alan Hayes; Alec Radford; Aleksander Mądry; Alex Baker-Whitcomb; Alex Beutel; Alex Borzunov; Alex Carney; Alex Chow; Alex Kirillov; Alex Nichol; Alex Paino; Alex Renzin; Alex Tachard Passos; Alexander Kirillov; Alexi Christakis; Alexis Conneau; Ali Kamali; Allan Jabri; Allison Moyer; Allison Tam; Amadou Crookes; Amin Tootoochian; Amin Tootoonchian; Ananya Kumar; Andrea Vallone; Andrej Karpathy; Andrew Braunstein; Andrew Cann; Andrew Codispoti; Andrew Galu; Andrew Kondrich; Andrew Tulloch; Andrey Mishchenko; Angela Baek; Angela Jiang; Antoine Pelisse; Antonia Woodford; Anuj Gosalia; Arka Dhar; Ashley Pantuliano; Avi Nayak; Avital Oliver; Barret Zoph; Behrooz Ghorbani; Ben Leimberger; Ben Rossen; Ben Sokolowsky; Ben Wang; Benjamin Zweig; Beth Hoover; Blake Samic; Bob McGrew; Bobby Spero; Bogo Giertler; Bowen Cheng; Brad Lightcap; Brandon Walkin; Brendan Quinn; Brian Guarraci; Brian Hsu; Bright Kellogg; Brydon Eastman; Camillo Lugaresi; Carroll Wainwright; Cary Bassin; Cary Hudson; Casey Chu; Chad Nelson; Chak Li; Chan Jun Shern; Channing Conger; Charlotte Barette; Chelsea Voss; Chen Ding; Cheng Lu; Chong Zhang; Chris Beaumont; Chris Hallacy; Chris Koch; Christian Gibson; Christina Kim; Christine Choi; Christine McLeavey; Christopher Hesse; Claudia Fischer; Clemens Winter; Coley Czarnecki; Colin Jarvis; Colin Wei; Constantin Koumouzelis; Dane Sherburn; Daniel Kappler; Daniel Levin; Daniel Levy; David Carr; David Farhi; David Mely; David Robinson; David Sasaki; Denny Jin; Dev Valladares; Dimitris Tsipras; Doug Li; Duc Phong Nguyen; Duncan Findlay; Edede Oiwoh; Edmund Wong; Ehsan Asdar; Elizabeth Proehl; Elizabeth Yang; Eric Antonow; Eric Kramer; Eric Peterson; Eric Sigler; Eric Wallace; Eugene Brevdo; Evan Mays; Farzad Khorasani; Felipe Petroski Such; Filippo Raso; Francis Zhang; Lohmann Fred von; Freddie Sulit; Gabriel Goh; Gene Oden; Geoff Salmon; Giulio Starace; Greg Brockman; Hadi Salman; Haiming Bao; Haitang Hu; Hannah Wong; Haoyu Wang; Heather Schmidt; Heather Whitney; Heewoo Jun; Hendrik Kirchner; Henrique Ponde de Oliveira Pinto; Hongyu Ren; Huiwen Chang; Hyung Won Chung; Ian Kivlichan; Ian O'Connell; Ian Osband; Ian Silber; Ian Sohl; Ibrahim Okuyucu; Ikai Lan; Ilya Kostrikov; Ilya Sutskever; Ingmar Kanitscheider; Ishaan Gulrajani; Jacob Coxon; Jacob Menick; Jakub Pachocki; James Aung; James Betker; James Crooks; James Lennon; Jamie Kiros; Jan Leike; Jane Park; Jason Kwon; Jason Phang; Jason Teplitz; Jason Wei; Jason Wolfe; Jay Chen; Jeff Harris; Jenia Varavva; Jessica Gan Lee; Jessica Shieh; Ji Lin; Jiahui Yu; Jiayi Weng; Jie Tang; Jieqi Yu; Joanne Jang; Joaquin Quinonero Candela; Joe Beutler; Joe Landers; Joel Parish; Johannes Heidecke; John Schulman; Jonathan Lachman; Jonathan McKay; Jonathan Uesato; Jonathan Ward; Jong Wook Kim; Joost Huizinga; Jordan Sitkin; Jos Kraaijeveld; Josh Gross; Josh Kaplan; Josh Snyder; Joshua Achiam; Joy Jiao; Joyce Lee; Juntang Zhuang; Justyn Harriman; Kai Fricke; Kai Hayashi; Karan Singhal; Katy Shi; Kavin Karthik; Kayla Wood; Kendra Rimbach; Kenny Hsu; Kenny Nguyen; Keren Gu-Lemberg; Kevin Button; Kevin Liu; Kiel Howe; Krithika Muthukumar; Kyle Luther; Lama Ahmad; Larry Kai; Lauren Itow; Lauren Workman; Leher Pathak; Leo Chen; Li Jing; Lia Guy; Liam Fedus; Liang Zhou; Lien Mamitsuka; Lilian Weng; Lindsay McCallum; Lindsey Held; Long Ouyang; Louis Feuvrier; Lu Zhang; Lukas Kondraciuk; Lukasz Kaiser; Luke Hewitt; Luke Metz; Lyric Doshi; Mada Aflak; Maddie Simens; Madelaine Boyd; Madeleine Thompson; Marat Dukhan; Mark Chen; Mark Gray; Mark Hudnall; Marvin Zhang; Marwan Aljubeh; Mateusz Litwin; Matthew Zeng; Max Johnson; Maya Shetty; Mayank Gupta; Meghan Shah; Mehmet Yatbaz; Meng Jia Yang; Mengchao Zhong; Mia Glaese; Mianna Chen; Michael Janner; Michael Lampe; Michael Petrov; Michael Wu; Michele Wang; Michelle Fradin; Michelle Pokrass; Miguel Castro; Castro Miguel Oom Temudo de; Mikhail Pavlov; Miles Brundage; Miles Wang; Minal Khan; Mira Murati; Mo Bavarian; Molly Lin; Murat Yesildal; Nacho Soto; Natalia Gimelshein; Natalie Cone; Natalie Staudacher; Natalie Summers; Natan LaFontaine; Neil Chowdhury; Nick Ryder; Nick Stathas; Nick Turley; Nik Tezak; Niko Felix; Nithanth Kudige; Nitish Keskar; Noah Deutsch; Noel Bundick; Nora Puckett; Ofir Nachum; Ola Okelola; Oleg Boiko; Oleg Murk; Oliver Jaffe; Olivia Watkins; Olivier Godement; Owen Campbell-Moore; Patrick Chao; Paul McMillan; Pavel Belov; Peng Su; Peter Bak; Peter Bakkum; Peter Deng; Peter Dolan; Peter Hoeschele; Peter Welinder; Phil Tillet; Philip Pronin; Philippe Tillet; Prafulla Dhariwal; Qiming Yuan; Rachel Dias; Rachel Lim; Rahul Arora; Rajan Troll; Randall Lin; Rapha Gontijo Lopes; Raul Puri; Reah Miyara; Reimar Leike; Renaud Gaubert; Reza Zamani; Ricky Wang; Rob Donnelly; Rob Honsby; Rocky Smith; Rohan Sahai; Rohit Ramchandani; Romain Huet; Rory Carmichael; Rowan Zellers; Roy Chen; Ruby Chen; Ruslan Nigmatullin; Ryan Cheu; Saachi Jain; Sam Altman; Sam Schoenholz; Sam Toizer; Samuel Miserendino; Sandhini Agarwal; Sara Culver; Scott Ethersmith; Scott Gray; Sean Grove; Sean Metzger; Shamez Hermani; Shantanu Jain; Shengjia Zhao; Sherwin Wu; Shino Jomoto; Shirong Wu; Shuaiqi Xia; Sonia Phene; Spencer Papay; Srinivas Narayanan; Steve Coffey; Steve Lee; Stewart Hall; Suchir Balaji; Tal Broda; Tal Stramer; Tao Xu; Tarun Gogineni; Taya Christianson; Ted Sanders; Tejal Patwardhan; Thomas Cunninghman; Thomas Degry; Thomas Dimson; Thomas Raoux; Thomas Shadwell; Tianhao Zheng; Todd Underwood; Todor Markov; Toki Sherbakov; Tom Rubin; Tom Stasi; Tomer Kaftan; Tristan Heywood; Troy Peterson; Tyce Walters; Tyna Eloundou; Valerie Qi; Veit Moeller; Vinnie Monaco; Vishal Kuo; Vlad Fomenko; Wayne Chang; Weiyi Zheng; Wenda Zhou; Wesam Manassra; Will Sheu; Wojciech Zaremba; Yash Patil; Yilei Qian; Yongjik Kim; Youlong Cheng; Yu Zhang; Yuchen He; Yuchen Zhang; Yujia Jin; Yunxing Dai; Yury Malkov
RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction. (64%)Tanqiu Jiang; Zian Wang; Jiacheng Liang; Changjiang Li; Yuhui Wang; Ting Wang
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions. (62%)Poojitha Thota; Shirin Nilizadeh
Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models. (56%)Yige Li; Hanxun Huang; Jiaming Zhang; Xingjun Ma; Yu-Gang Jiang
Towards Robust Algorithms for Surgical Phase Recognition via Digital Twin-based Scene Representation. (2%)Hao Ding; Yuqian Zhang; Hongchao Shu; Xu Lian; Ji Woong Kim; Axel Krieger; Mathias Unberath
2024-10-24
GADT: Enhancing Transferable Adversarial Attacks through Gradient-guided Adversarial Data Transformation. (99%)Yating Ma; Xiaogang Xu; Liming Fang; Zhe Liu
Adversarial Attacks on Large Language Models Using Regularized Relaxation. (98%)Samuel Jacob Chacko; Sajib Biswas; Chashi Mahiul Islam; Fatema Tabassum Liza; Xiuwen Liu
Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities. (88%)Chung-En Sun; Xiaodong Liu; Weiwei Yang; Tsui-Wei Weng; Hao Cheng; Aidan San; Michel Galley; Jianfeng Gao
Humanizing the Machine: Proxy Attacks to Mislead LLM Detectors. (68%)Tianchun Wang; Yuanzhou Chen; Zichuan Liu; Zhanwen Chen; Haifeng Chen; Xiang Zhang; Wei Cheng
Complexity Matters: Effective Dimensionality as a Measure for Adversarial Robustness. (33%)David Khachaturov; Robert Mullins
Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances. (11%)Shilin Lu; Zihan Zhou; Jiayou Lu; Yuanzhi Zhu; Adams Wai-Kin Kong
2024-10-23
Advancing NLP Security by Leveraging LLMs as Adversarial Engines. (98%)Sudarshan Srinivasan; Maria Mahbub; Amir Sadovnik
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing. (93%)Dongliang Guo; Mengxuan Hu; Zihan Guan; Junfeng Guo; Thomas Hartvigsen; Sheng Li
Breaking the Illusion: Real-world Challenges for Adversarial Patches in Object Detection. (70%)Jakob Shack; Katarina Petrovic; Olga Saukh
Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning. (16%)Wei Qiao; Yebo Feng; Teng Li; Zijian Zhang; Zhengzi Xu; Zhuo Ma; Yulong Shen; JianFeng Ma; Yang Liu
Guide for Defense (G4D): Dynamic Guidance for Robust and Balanced Defense in Large Language Models. (9%)He Cao; Weidi Luo; Yu Wang; Zijing Liu; Bing Feng; Yuan Yao; Yu Li
Towards Understanding the Fragility of Multilingual LLMs against Fine-Tuning Attacks. (2%)Samuele Poppi; Zheng-Xin Yong; Yifei He; Bobbie Chern; Han Zhao; Aobo Yang; Jianfeng Chi
Countering Autonomous Cyber Threats. (2%)Kade M. Heckel; Adrian Weller
Is Smoothness the Key to Robustness? A Comparison of Attention and Convolution Models Using a Novel Metric. (1%)Baiyuan Chen
2024-10-22
Detecting Adversarial Examples. (99%)Furkan Mumcu; Yasin Yilmaz
Test-time Adversarial Defense with Opposite Adversarial Path and High Attack Time Cost. (98%)Cheng-Han Yeh; Kuanchun Yu; Chun-Shien Lu
AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents. (97%)Chejian Xu; Mintong Kang; Jiawei Zhang; Zeyi Liao; Lingbo Mo; Mengqi Yuan; Huan Sun; Bo Li
Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks. (67%)Tao Li; Henger Li; Yunian Pan; Tianyi Xu; Zizhan Zheng; Quanyan Zhu
Hierarchical Multi-agent Reinforcement Learning for Cyber Network Defense. (41%)Aditya Vikram Singh; Ethan Rathbun; Emma Graham; Lisa Oakley; Simona Boboila; Alina Oprea; Peter Chin
On the Vulnerability of Text Sanitization. (8%)Meng Tong; Kejiang Chen; Xiaojian Yuang; Jiayang Liu; Weiming Zhang; Nenghai Yu; Jie Zhang
Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods. (5%)Tsachi Blau; Moshe Kimhi; Yonatan Belinkov; Alexander Bronstein; Chaim Baskin
Evaluating the Effectiveness of Attack-Agnostic Features for Morphing Attack Detection. (4%)Laurent Colbois; Sébastien Marcel
BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers. (2%)Jiaqi Xue; Qian Lou; Mengxin Zheng
Invisible Manipulation Deep Reinforcement Learning Enhanced Stealthy Attacks on Battery Energy Management Systems. (1%)Qi Xiao; Lidong Song; Jongha Woo; Rongxing Hu; Bei Xu; Ning Lu
A Hybrid Simulation of DNN-based Gray Box Models. (1%)Aayushya Agarwal; Yihan Ruan; Larry Pileggi
2024-10-21
Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples. (99%)Kirill Lukyanov; Andrew Perminov; Denis Turdakov; Mikhail Pautov
Conflict-Aware Adversarial Training. (70%)Zhiyu Xue; Haohan Wang; Yao Qin; Ramtin Pedarsani
Robust Feature Learning for Multi-Index Models in High Dimensions. (68%)Alireza Mousavi-Hosseini; Adel Javanmard; Murat A. Erdogdu
Dual-Model Defense: Safeguarding Diffusion Models from Membership Inference Attacks through Disjoint Data Splitting. (16%)Bao Q. Tran; Viet Nguyen; Anh Tran; Toan Tran
Metric as Transform: Exploring beyond Affine Transform for Interpretable Neural Network. (13%)Suman Sapkota
A Realistic Threat Model for Large Language Model Jailbreaks. (11%)Valentyn Boreiko; Alexander Panfilov; Vaclav Voracek; Matthias Hein; Jonas Geiping
Vulnerabilities in Machine Learning-Based Voice Disorder Detection Systems. (11%)Gianpaolo Perelli; Andrea Panzino; Roberto Casula; Marco Micheletto; Giulia Orrù; Gian Luca Marcialis
On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds. (5%)Matteo Vilucchio; Nikolaos Tsilivis; Bruno Loureiro; Julia Kempe
Boosting Jailbreak Transferability for Large Language Models. (1%)Hanqing Liu; Lifeng Zhou; Huanqian Yan
Extracting Spatiotemporal Data from Gradients with Large Language Models. (1%)Lele Zheng; Yang Cao; Renhe Jiang; Kenjiro Taura; Yulong Shen; Sheng Li; Masatoshi Yoshikawa
2024-10-20
PEAS: A Strategy for Crafting Transferable Adversarial Examples. (99%)Bar Avraham; Yisroel Mirsky
Efficient Model Extraction via Boundary Sampling. (96%)Maor Biton Dor; Yisroel Mirsky
The Best Defense is a Good Offense: Countering LLM-Powered Cyberattacks. (76%)Daniel Ayzenshteyn; Roy Weiss; Yisroel Mirsky
Faster-GCG: Efficient Discrete Optimization Jailbreak Attacks against Aligned Large Language Models. (45%)Xiao Li; Zhuhong Li; Qiongxiu Li; Bingze Lee; Jinghao Cui; Xiaolin Hu
Bayesian Concept Bottleneck Models with LLM Priors. (1%)Jean Feng; Avni Kothari; Luke Zier; Chandan Singh; Yan Shuo Tan
2024-10-19
Adversarial Training: A Survey. (97%)Mengnan Zhao; Lihe Zhang; Jingwen Ye; Huchuan Lu; Baocai Yin; Xinchao Wang
Toward Robust RALMs: Revealing the Impact of Imperfect Retrieval on Retrieval-Augmented Language Models. (92%)Seong-Il Park; Jay-Yoon Lee
Beyond Pruning Criteria: The Dominant Role of Fine-Tuning and Adaptive Ratios in Neural Network Robustness. (76%)Lincen Bai; Hedi Tabia; Raúl Santos-Rodríguez
Jailbreaking and Mitigation of Vulnerabilities in Large Language Models. (50%)Benji Peng; Ziqian Bi; Qian Niu; Ming Liu; Pohsun Feng; Tianyang Wang; Lawrence K. Q. Yan; Yizhu Wen; Yichao Zhang; Caitlyn Heqi Yin
SLIC: Secure Learned Image Codec through Compressed Domain Watermarking to Defend Image Manipulation. (11%)Chen-Hsiu Huang; Ja-Ling Wu
DynaMO: Protecting Mobile DL Models through Coupling Obfuscated DL Operators. (2%)Mingyi Zhou; Xiang Gao; Xiao Chen; Chunyang Chen; John Grundy; Li Li
2024-10-18
A Hybrid Defense Strategy for Boosting Adversarial Robustness in Vision-Language Models. (99%)Yuhan Liang; Yijun Li; Yumeng Niu; Qianhe Shen; Hangyu Liu
Class-RAG: Content Moderation with Retrieval Augmented Generation. (76%)Jianfa Chen; Emily Shen; Trupti Bavalatti; Xiaowen Lin; Yongkai Wang; Shuming Hu; Harihar Subramanyam; Ksheeraj Sai Vepuri; Ming Jiang; Ji Qi; Li Chen; Nan Jiang; Ankit Jain
Attack as Defense: Run-time Backdoor Implantation for Image Content Protection. (61%)Haichuan Zhang; Meiyu Lin; Zhaoyi Liu; Renyuan Li; Zhiyuan Cheng; Carl Yang; Mingjie Tang
Feint and Attack: Attention-Based Strategies for Jailbreaking and Protecting LLMs. (13%)Rui Pu; Chaozhuo Li; Rui Ha; Zejian Chen; Litian Zhang; Zheng Liu; Lirong Qiu; Xi Zhang
Stochastic Gradient Descent Jittering for Inverse Problems: Alleviating the Accuracy-Robustness Tradeoff. (10%)Peimeng Guan; Mark A. Davenport
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation. (5%)Shuai Zhao; Xiaobao Wu; Cong-Duy Nguyen; Meihuizi Jia; Yichao Feng; Luu Anh Tuan
Real-time Fake News from Adversarial Feedback. (3%)Sanxing Chen; Yukun Huang; Bhuwan Dhingra
Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step. (1%)Mingyuan Zhou; Huangjie Zheng; Yi Gu; Zhendong Wang; Hai Huang
2024-10-17
MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks. (99%)Xinxin Liu; Zhongliang Guo; Siyuan Huang; Chun Pong Lau
DMGNN: Detecting and Mitigating Backdoor Attacks in Graph Neural Networks. (95%)Hao Sui; Bing Chen; Jiale Zhang; Chengcheng Zhu; Di Wu; Qinghua Lu; Guodong Long
Adversarial Inception for Bounded Backdoor Poisoning in Deep Reinforcement Learning. (67%)Ethan Rathbun; Christopher Amato; Alina Oprea
SPIN: Self-Supervised Prompt INjection. (67%)Leon Zhou; Junfeng Yang; Chengzhi Mao
Jailbreaking LLM-Controlled Robots. (56%)Alexander Robey; Zachary Ravichandran; Vijay Kumar; Hamed Hassani; George J. Pappas
Persistent Pre-Training Poisoning of LLMs. (33%)Yiming Zhang; Javier Rando; Ivan Evtimov; Jianfeng Chi; Eric Michael Smith; Nicholas Carlini; Florian Tramèr; Daphne Ippolito
Trojan Prompt Attacks on Graph Neural Networks. (4%)Minhua Lin; Zhiwei Zhang; Enyan Dai; Zongyu Wu; Yilong Wang; Xiang Zhang; Suhang Wang
Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems. (2%)Isack Lee; Haebin Seong
2024-10-16
Golyadkin's Torment: Doppelgängers and Adversarial Vulnerability. (99%)George I. Kamberov
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain. (99%)Fengpeng Li; Kemou Li; Haiwei Wu; Jinyu Tian; Jiantao Zhou
Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum. (99%)Nashrah Haque; Xiang Li; Zhehui Chen; Yanzhao Wu; Lei Yu; Arun Iyengar; Wenqi Wei
New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes. (98%)Yanyun Wang; Li Liu; Zi Liang; Qingqing Ye; Haibo Hu
Perseus: Leveraging Common Data Patterns with Curriculum Learning for More Robust Graph Neural Networks. (92%)Kaiwen Xia; Huijun Wu; Duanyu Li; Min Xie; Ruibo Wang; Wenzhe Zhang
Low-Rank Adversarial PGD Attack. (84%)Dayana Savostianova; Emanuele Zangrando; Francesco Tudisco
Data Defenses Against Large Language Models. (76%)William Agnew; Harry H. Jiang; Cella Sum; Maarten Sap; Sauvik Das
Hiding-in-Plain-Sight (HiPS) Attack on CLIP for Targetted Object Removal from Images. (61%)Arka Daw; Megan Hong-Thanh Chung; Maria Mahbub; Amir Sadovnik
NSmark: Null Space Based Black-box Watermarking Defense Framework for Pre-trained Language Models. (16%)Haodong Zhao; Jinming Hu; Peixuan Li; Fangqi Li; Jinrui Sha; Peixuan Chen; Zhuosheng Zhang; Gongshen Liu
Reconstruction of Differentially Private Text Sanitization via Large Language Models. (4%)Shuchao Pang; Zhigang Lu; Haichen Wang; Peng Fu; Yongbin Zhou; Minhui Xue; Bo Li
Unitary Multi-Margin BERT for Robust Natural Language Processing. (4%)Hao-Yuan Chang; Kang L. Wang
Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace. (2%)Jinluan Yang; Anke Tang; Didi Zhu; Zhengyu Chen; Li Shen; Fei Wu
FedGTST: Boosting Global Transferability of Federated Models via Statistics Tuning. (2%)Evelyn Ma; Chao Pan; Rasoul Etesami; Han Zhao; Olgica Milenkovic
Consistency Calibration: Improving Uncertainty Calibration via Consistency among Perturbed Neighbors. (2%)Linwei Tao; Haolan Guo; Minjing Dong; Chang Xu
Efficient Optimization Algorithms for Linear Adversarial Training. (1%)Antônio H. Ribeiro; Thomas B. Schön; Dave Zachariah; Francis Bach
PromptExp: Multi-granularity Prompt Explanation of Large Language Models. (1%)Ximing Dong; Shaowei Wang; Dayi Lin; Gopi Krishnan Rajbahadur; Boquan Zhou; Shichao Liu; Ahmed E. Hassan
Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations. (1%)Lu Pang; Tao Sun; Weimin Lyu; Haibin Ling; Chao Chen
2024-10-15
Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks. (99%)Kevin Eykholt; Farhan Ahmed; Pratik Vaishnavi; Amir Rahmati
Efficient and Effective Universal Adversarial Attack against Vision-Language Pre-training Models. (98%)Fan Yang; Yihao Huang; Kailong Wang; Ling Shi; Geguang Pu; Yang Liu; Haoyu Wang
Deciphering the Chaos: Enhancing Jailbreak Attacks via Adversarial Prompt Translation. (83%)Qizhang Li; Xiaochen Yang; Wangmeng Zuo; Yiwen Guo
BeniFul: Backdoor Defense via Middle Feature Analysis for Deep Neural Networks. (82%)Xinfu Li; Junying Zhang; Xindi Ma
Cognitive Overload Attack: Prompt Injection for Long Context. (62%)Bibek Upadhayay; Vahid Behzadan; Amin Karbasi
AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment. (31%)Pankayaraj Pathmanathan; Udari Madhushani Sehwag; Michael-Andrei Panaitescu-Liess; Furong Huang
Backdoor Attack on Vertical Federated Graph Neural Network Learning. (10%)Jirui Yang; Peng Chen; Zhihui Lu; Ruijun Deng; Qiang Duan; Jianping Zeng
DiffGAN: A Test Generation Approach for Differential Testing of Deep Neural Networks. (10%)Zohreh Aghababaeyan; Manel Abdellatif; Lionel Briand; Ramesh S
Multi-round jailbreak attack on large language models. (4%)Yihua Zhou; Xiaochuan Shi
Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture. (3%)Sajad Movahedi; Antonio Orvieto; Seyed-Mohsen Moosavi-Dezfooli
G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks. (2%)Guibin Zhang; Yanwei Yue; Xiangguo Sun; Guancheng Wan; Miao Yu; Junfeng Fang; Kun Wang; Dawei Cheng
2024-10-14
Denial-of-Service Poisoning Attacks against Large Language Models. (92%)Kuofeng Gao; Tianyu Pang; Chao Du; Yong Yang; Shu-Tao Xia; Min Lin
Towards Calibrated Losses for Adversarial Robust Reject Option Classification. (86%)Vrund Shah; Tejas Chaudhari; Naresh Manwani
Adversarially Robust Out-of-Distribution Detection Using Lyapunov-Stabilized Embeddings. (86%)Hossein Mirzaei; Mackenzie W. Mathis
Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning. (81%)Hassan Ali; Surya Nepal; Salil S. Kanhere; Sanjay Jha
Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks. (68%)Binghui Li; Zhixuan Pan; Kaifeng Lyu; Jian Li
ROSAR: An Adversarial Re-Training Framework for Robust Side-Scan Sonar Object Detection. (67%)Martin Aubard; László Antal; Ana Madureira; Luis F. Teixeira; Erika Ábrahám
Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion. (15%)Karl Rubel; Maximilian Noppel; Christian Wressnegger
Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach. (13%)Rory Young; Nicolas Pugeault
How to Backdoor Consistency Models? (12%)Chengen Wang; Murat Kantarcioglu
The Implicit Bias of Structured State Space Models Can Be Poisoned With Clean Labels. (2%)Yonatan Slutzky; Yotam Alexander; Noam Razin; Nadav Cohen
Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance. (1%)Sachin Goyal; Christina Baek; J. Zico Kolter; Aditi Raghunathan
On Calibration of LLM-based Guard Models for Reliable Content Moderation. (1%)Hongfu Liu; Hengguan Huang; Hao Wang; Xiangming Gu; Ye Wang
Regularized Robustly Reliable Learners and Instance Targeted Attacks. (1%)Avrim Blum; Donya Saless
Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models. (1%)Zhongye Liu; Hongbin Liu; Yuepeng Hu; Zedian Shao; Neil Zhenqiang Gong
2024-10-13
S$^4$ST: A Strong, Self-transferable, faSt, and Simple Scale Transformation for Transferable Targeted Attack. (99%)Yongxiang Liu; Bowen Peng; Li Liu; Xiang Li
Understanding Robustness of Parameter-Efficient Tuning for Image Classification. (98%)Jiacheng Ruan; Xian Gao; Suncheng Xiang; Mingye Xie; Ting Liu; Yuzhuo Fu
Out-of-Bounding-Box Triggers: A Stealthy Approach to Cheat Object Detectors. (75%)Tao Lin; Lijia Yu; Gaojie Jin; Renjue Li; Peng Wu; Lijun Zhang
Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense. (67%)Rui Min; Zeyu Qin; Nevin L. Zhang; Li Shen; Minhao Cheng
BlackDAN: A Black-Box Multi-Objective Approach for Effective and Contextual Jailbreaking of Large Language Models. (13%)Xinyuan Wang; Victor Shea-Jay Huang; Renmiao Chen; Hao Wang; Chengwei Pan; Lei Sha; Minlie Huang
Targeted Vaccine: Safety Alignment for Large Language Models against Harmful Fine-Tuning via Layer-wise Perturbation. (1%)Guozhi Liu; Weiwei Lin; Tiansheng Huang; Ruichao Mo; Qi Mu; Li Shen
2024-10-12
Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy. (91%)Yangsibo Huang; Daogao Liu; Lynn Chua; Badih Ghazi; Pritish Kamath; Ravi Kumar; Pasin Manurangsi; Milad Nasr; Amer Sinha; Chiyuan Zhang
Robust 3D Point Clouds Classification based on Declarative Defenders. (2%)Kaidong Li; Tianxiao Zhang; Cuncong Zhong; Ziming Zhang; Guanghui Wang
2024-10-11
On the Adversarial Transferability of Generalized "Skip Connections". (99%)Yisen Wang; Yichuan Mo; Dongxian Wu; Mingjie Li; Xingjun Ma; Zhouchen Lin
Natural Language Induced Adversarial Images. (99%)Xiaopei Zhu; Peiyang Xu; Guanning Zeng; Yingpeng Dong; Xiaolin Hu
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks. (70%)Isha Gupta; Hidde Lycklama; Emanuel Opel; Evan Rose; Anwar Hithnawi
AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents. (69%)Maksym Andriushchenko; Alexandra Souly; Mateusz Dziemian; Derek Duenas; Maxwell Lin; Justin Wang; Dan Hendrycks; Andy Zou; Zico Kolter; Matt Fredrikson; Eric Winsor; Jerome Wynne; Yarin Gal; Xander Davies
Training on Fake Labels: Mitigating Label Leakage in Split Learning via Secure Dimension Transformation. (62%)Yukun Jiang; Peiran Wang; Chengguo Lin; Ziyue Huang; Yong Cheng
PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning. (31%)Tingchen Fu; Mrinank Sharma; Philip Torr; Shay B. Cohen; David Krueger; Fazl Barez
AttnGCG: Enhancing Jailbreaking Attacks on LLMs with Attention Manipulation. (31%)Zijun Wang; Haoqin Tu; Jieru Mei; Bingchen Zhao; Yisen Wang; Cihang Xie
The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses. (16%)Grzegorz Głuch; Berkant Turan; Sai Ganesh Nagarajan; Sebastian Pokutta
Impeding LLM-assisted Cheating in Introductory Programming Assignments via Adversarial Perturbation. (4%)Saiful Islam Salim; Rubin Yuchan Yang; Alexander Cooper; Suryashree Ray; Saumya Debray; Sazzadur Rahaman
F2A: An Innovative Approach for Prompt Injection by Utilizing Feign Security Detection Agents. (1%)Yupeng Ren
RePD: Defending Jailbreak Attack through a Retrieval-based Prompt Decomposition Process. (1%)Peiran Wang; Xiaogeng Liu; Chaowei Xiao
JAILJUDGE: A Comprehensive Jailbreak Judge Benchmark with Multi-Agent Enhanced Explanation Evaluation Framework. (1%)Fan Liu; Yue Feng; Zhao Xu; Lixin Su; Xinyu Ma; Dawei Yin; Hao Liu
2024-10-10
Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data. (99%)Binghui Li; Yuanzhi Li
Time Traveling to Defend Against Adversarial Example Attacks in Image Classification. (99%)Anthony Etim; Jakub Szefer
Understanding Adversarially Robust Generalization via Weight-Curvature Index. (98%)Yuelin Xu; Xiao Zhang
Invisibility Cloak: Disappearance under Human Pose Estimation via Backdoor Attacks. (92%)Minxing Zhang; Michael Backes; Xiao Zhang
A Survey on Physical Adversarial Attacks against Face Recognition Systems. (91%)Mingsi Wang; Jiachen Zhou; Tianlin Li; Guozhu Meng; Kai Chen
Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation. (74%)Tomas Bueno Momcilovic; Beat Buesser; Giulio Zizzo; Mark Purcell; Dian Balta
Bilinear MLPs enable weight-based mechanistic interpretability. (70%)Michael T. Pearce; Thomas Dooms; Alice Rigg; Jose M. Oramas; Lee Sharkey
Adversarial Robustness Overestimation and Instability in TRADES. (67%)Jonathan Weiping Li; Ren-Wei Liang; Cheng-Han Yeh; Cheng-Chang Tsai; Kuanchun Yu; Chun-Shien Lu; Shang-Tse Chen
RAB$^2$-DEF: Dynamic and explainable defense against adversarial attacks in Federated Learning to fair poor clients. (61%)Nuria Rodríguez-Barroso; M. Victoria Luzón; Francisco Herrera
Poison-splat: Computation Cost Attack on 3D Gaussian Splatting. (10%)Jiahao Lu; Yifan Zhang; Qiuhong Shen; Xinchao Wang; Shuicheng Yan
A Closer Look at Machine Unlearning for Large Language Models. (4%)Xiaojian Yuan; Tianyu Pang; Chao Du; Kejiang Chen; Weiming Zhang; Min Lin
2024-10-09
Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models. (99%)Yubo Wang; Chaohu Liu; Yanqiu Qu; Haoyu Cao; Deqiang Jiang; Linli Xu
Understanding Model Ensemble in Transferable Adversarial Attack. (99%)Wei Yao; Zeliang Zhang; Huayi Tang; Yong Liu
Secure Video Quality Assessment Resisting Adversarial Attacks. (75%)Ao-Xiang Zhang; Yu Ran; Weixuan Tang; Yuan-Gen Wang; Qingxiao Guan; Chunsheng Yang
Can DeepFake Speech be Reliably Detected? (62%)Hongbin Liu; Youzheng Chen; Arun Narayanan; Athula Balachandran; Pedro J. Moreno; Lun Wang
Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning. (15%)Wassim Bouaziz; El-Mahdi El-Mhamdi; Nicolas Usunier
Average Certified Radius is a Poor Metric for Randomized Smoothing. (11%)Chenhao Sun; Yuhao Mao; Mark Niklas Müller; Martin Vechev
JPEG Inspired Deep Learning. (11%)Ahmed H. Salamah; Kaixiang Zheng; Yiwen Liu; En-Hui Yang
Adversarial Vulnerability as a Consequence of On-Manifold Inseparibility. (2%)Rajdeep Haldar; Yue Xing; Qifan Song; Guang Lin
Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates. (2%)Xiaosen Zheng; Tianyu Pang; Chao Du; Qian Liu; Jing Jiang; Min Lin
Mind Your Questions! Towards Backdoor Attacks on Text-to-Visualization Models. (2%)Shuaimin Li; Yuanfeng Song; Xuanang Chen; Anni Peng; Zhuoyue Wan; Chen Jason Zhang; Raymond Chi-Wing Wong
AdaRC: Mitigating Graph Structure Shifts during Test-Time. (1%)Wenxuan Bao; Zhichen Zeng; Zhining Liu; Hanghang Tong; Jingrui He
PII-Scope: A Benchmark for Training Data PII Leakage Assessment in LLMs. (1%)Krishna Kanth Nakka; Ahmed Frikha; Ricardo Mendes; Xue Jiang; Xuebing Zhou
Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning. (1%)Runchuan Zhu; Zhipeng Ma; Jiang Wu; Junyuan Gao; Jiaqi Wang; Dahua Lin; Conghui He
2024-10-08
Hyper Adversarial Tuning for Boosting Adversarial Robustness of Pretrained Large Vision Models. (99%)Kangtao Lv; Huangsen Cao; Kainan Tu; Yihuai Xu; Zhimeng Zhang; Xin Ding; Yongwei Wang
DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing. (98%)June Suk Choi; Kyungmin Lee; Jongheon Jeong; Saining Xie; Jinwoo Shin; Kimin Lee
Filtered Randomized Smoothing: A New Defense for Robust Modulation Classification. (98%)Wenhan Zhang; Meiyu Zhong; Ravi Tandon; Marwan Krunz
CALoR: Towards Comprehensive Model Inversion Defense. (76%)Hongyao Yu; Yixiang Qiu; Hao Fang; Bin Chen; Sijin Yu; Bin Wang; Shu-Tao Xia; Ke Xu
Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting. (74%)Nicholas Carlini; Jorge Chávez-Saab; Anna Hambitzer; Francisco Rodríguez-Henríquez; Adi Shamir
Training-free LLM-generated Text Detection by Mining Token Probability Sequences. (26%)Yihuai Xu; Yongwei Wang; Yifei Bi; Huangsen Cao; Zhouhan Lin; Yu Zhao; Fei Wu
PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning. (10%)Jiashi Gao; Ziwei Wang; Xiangyu Zhao; Xin Yao; Xuetao Wei
Recent advancements in LLM Red-Teaming: Techniques, Defenses, and Ethical Considerations. (10%)Tarun Raheja; Nilay Pochhi
2024-10-07
TaeBench: Improving Quality of Toxic Adversarial Examples. (99%)Xuan Zhu; Dmitriy Bespalov; Liwen You; Ninad Kulkarni; Yanjun Qi
AnyAttack: Towards Large-scale Self-supervised Generation of Targeted Adversarial Examples for Vision-Language Models. (99%)Jiaming Zhang; Junhong Ye; Xingjun Ma; Yige Li; Yunfan Yang; Jitao Sang; Dit-Yan Yeung
LOTOS: Layer-wise Orthogonalization for Training Robust Ensembles. (99%)Ali Ebrahimpour-Boroojeny; Hari Sundaram; Varun Chandrasekaran
Patch is Enough: Naturalistic Adversarial Patch against Vision-Language Pre-training Models. (95%)Dehong Kong; Siyuan Liang; Xiaopeng Zhu; Yuansheng Zhong; Wenqi Ren
MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense. (86%)Yixiang Qiu; Hongyao Yu; Hao Fang; Wenbo Yu; Bin Chen; Xuan Wang; Shu-Tao Xia; Ke Xu
STOP! Camera Spoofing via the in-Vehicle IP Network. (83%)Dror Peri; Avishai Wool
Double Oracle Neural Architecture Search for Game Theoretic Deep Learning Models. (76%)Aye Phyu Phyu Aung; Xinrun Wang; Ruiyu Wang; Hau Chan; Bo An; Xiaoli Li; J. Senthilnath
Collaboration! Towards Robust Neural Methods for Routing Problems. (70%)Jianan Zhou; Yaoxin Wu; Zhiguang Cao; Wen Song; Jie Zhang; Zhiqi Shen
Aligning LLMs to Be Robust Against Prompt Injection. (47%)Sizhe Chen; Arman Zharmagambetov; Saeed Mahloujifar; Kamalika Chaudhuri; Chuan Guo
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models. (11%)Songning Lai; Jiayu Yang; Yu Huang; Lijie Hu; Tianlang Xue; Zhangyi Hu; Jiaxu Li; Haicheng Liao; Yutao Yue
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models. (8%)Xiao Yang; Kai Zhou; Yuni Lai; Gaolei Li
Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation. (1%)Fanqing Meng; Jiaqi Liao; Xinyu Tan; Wenqi Shao; Quanfeng Lu; Kaipeng Zhang; Yu Cheng; Dianqi Li; Yu Qiao; Ping Luo
2024-10-06
Suspiciousness of Adversarial Texts to Human. (99%)Shakila Mahjabin Tonni; Pedro Faustini; Mark Dras
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning. (99%)Yongyi Su; Yushu Li; Nanqing Liu; Kui Jia; Xulei Yang; Chuan-Sheng Foo; Xun Xu
TA3: Testing Against Adversarial Attacks on Machine Learning Models. (67%)Yuanzhe Jin; Min Chen
Robustness Reprogramming for Representation Learning. (56%)Zhichao Hou; MohamadAli Torkamani; Hamid Krim; Xiaorui Liu
Towards Understanding and Enhancing Security of Proof-of-Training for DNN Model Ownership Verification. (2%)Yijia Chang; Hanrui Jiang; Chao Lin; Xinyi Huang; Jian Weng
Federated Learning Nodes Can Reconstruct Peers' Image Data. (1%)Ethan Wilson; Kai Yue; Chau-Wai Wong; Huaiyu Dai
2024-10-05
Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models. (38%)Yiting Dong; Guobin Shen; Dongcheng Zhao; Xiang He; Yi Zeng
ConDa: Fast Federated Unlearning with Contribution Dampening. (1%)Vikram S Chundawat; Pushkar Niroula; Prasanna Dhungana; Stefan Schoepf; Murari Mandal; Alexandra Brintrup
2024-10-04
Mitigating Adversarial Perturbations for Deep Reinforcement Learning via Vector Quantization. (98%)Tung M. Luu; Thanh Nguyen; Tee Joshua Tian Jin; Sungwoon Kim; Chang D. Yoo
RAFT: Realistic Attacks to Fool Text Detectors. (96%)James Wang; Ran Li; Junfeng Yang; Chengzhi Mao
A Brain-Inspired Regularizer for Adversarial Robustness. (92%)Elie Attias; Cengiz Pehlevan; Dina Obeid
Gradient-based Jailbreak Images for Multimodal Fusion Models. (16%)Javier Rando; Hannah Korevaar; Erik Brinkman; Ivan Evtimov; Florian Tramèr
You Know What I'm Saying -- Jailbreak Attack via Implicit Reference. (16%)Tianyu Wu; Lingrui Mei; Ruibin Yuan; Lujun Li; Wei Xue; Yike Guo
Impact of Regularization on Calibration and Robustness: from the Representation Space Perspective. (13%)Jonghyun Park; Juyeop Kim; Jong-Seok Lee
Make Interval Bound Propagation great again. (9%)Patryk Krukowski; Daniel Wilczak; Jacek Tabor; Anna Bielawska; Przemysław Spurek
Classification-Denoising Networks. (9%)Louis Thiry; Florentin Guth
Knowledge-Augmented Reasoning for EUAIA Compliance and Adversarial Robustness of LLMs. (2%)Tomas Bueno Momcilovic; Dian Balta; Beat Buesser; Giulio Zizzo; Mark Purcell
BN-SCAFFOLD: controlling the drift of Batch Normalization statistics in Federated Learning. (1%)Gonzalo Iñaki Quintana; Laurence Vancamberg; Vincent Jugnon; Mathilde Mougeot; Agnès Desolneux
Chain-of-Jailbreak Attack for Image Generation Models via Editing Step by Step. (1%)Wenxuan Wang; Kuiyi Gao; Zihan Jia; Youliang Yuan; Jen-tse Huang; Qiuzhi Liu; Shuai Wang; Wenxiang Jiao; Zhaopeng Tu
2024-10-03
SCA: Highly Efficient Semantic-Consistent Unrestricted Adversarial Attack. (99%)Zihao Pan; Weibin Wu; Yuhang Cao; Zibin Zheng
Towards Universal Certified Robustness with Multi-Norm Training. (26%)Enyi Jiang; Gagandeep Singh
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents. (15%)Hanrong Zhang; Jingyuan Huang; Kai Mei; Yifei Yao; Zhenting Wang; Chenlu Zhan; Hongwei Wang; Yongfeng Zhang
AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs. (11%)Xiaogeng Liu; Peiran Li; Edward Suh; Yevgeniy Vorobeychik; Zhuoqing Mao; Somesh Jha; Patrick McDaniel; Huan Sun; Bo Li; Chaowei Xiao
Demonstration Attack against In-Context Learning for Code Intelligence. (10%)Yifei Ge; Weisong Sun; Yihang Lou; Chunrong Fang; Yiran Zhang; Yiming Li; Xiaofang Zhang; Yang Liu; Zhihong Zhao; Zhenyu Chen
Unveiling AI's Blind Spots: An Oracle for In-Domain, Out-of-Domain, and Adversarial Errors. (3%)Shuangpeng Han; Mengmi Zhang
Jailbreak Antidote: Runtime Safety-Utility Balance via Sparse Representation Adjustment in Large Language Models. (3%)Guobin Shen; Dongcheng Zhao; Yiting Dong; Xiang He; Yi Zeng
MTDNS: Moving Target Defense for Resilient DNS Infrastructure. (2%)Abdullah Aydeger; Pei Zhou; Sanzida Hoque; Marco Carvalho; Engin Zeydan
Cut the Crap: An Economical Communication Pipeline for LLM-based Multi-Agent Systems. (1%)Guibin Zhang; Yanwei Yue; Zhixun Li; Sukwon Yun; Guancheng Wan; Kun Wang; Dawei Cheng; Jeffrey Xu Yu; Tianlong Chen
IndicSentEval: How Effectively do Multilingual Transformer Models encode Linguistic Properties for Indic Languages? (1%)Akhilesh Aravapalli; Mounika Marreddy; Subba Reddy Oota; Radhika Mamidi; Manish Gupta
BACKTIME: Backdoor Attacks on Multivariate Time Series Forecasting. (1%)Xiao Lin; Zhining Liu; Dongqi Fu; Ruizhong Qiu; Hanghang Tong
Optimizing Adaptive Attacks against Content Watermarks for Language Models. (1%)Abdulrahman Diaa; Toluwani Aremu; Nils Lukas
Universally Optimal Watermarking Schemes for LLMs: from Theory to Practice. (1%)Haiyun He; Yepeng Liu; Ziqiao Wang; Yongyi Mao; Yuheng Bu
Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation. (1%)Xiaoqun Liu; Jiacheng Liang; Luoxi Tang; Chenyu You; Muchao Ye; Zhaohan Xi
2024-10-02
On Using Certified Training towards Empirical Robustness. (99%)Alessandro De Palma; Serge Durand; Zakaria Chihani; François Terrier; Caterina Urban
Impact of White-Box Adversarial Attacks on Convolutional Neural Networks. (99%)Rakesh Podder; Sudipto Ghosh
Signal Adversarial Examples Generation for Signal Detection Network via White-Box Attack. (99%)Dongyang Li; Linyuan Wang; Guangwei Xiong; Bin Yan; Dekui Ma; Jinxian Peng
MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning. (99%)Sedjro Salomon Hotegni; Sebastian Peitz
Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors. (98%)Sina Mavali; Jonas Ricker; David Pape; Yash Sharma; Asja Fischer; Lea Schönherr
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning. (93%)Jiale Zhang; Chengcheng Zhu; Bosen Rao; Hao Sui; Xiaobing Sun; Bing Chen; Chunyi Zhou; Shouling Ji
Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking. (82%)Aakash Varma Nadimpalli; Ajita Rattani
The Unlikely Hero: Nonideality in Analog Photonic Neural Networks as Built-in Defender Against Adversarial Attacks. (76%)Haotian Lu; Ziang Yin; Partho Bhoumik; Sanmitra Banerjee; Krishnendu Chakrabarty; Jiaqi Gu
Endless Jailbreaks with Bijection Learning. (16%)Brian R. Y. Huang; Maximilian Li; Leonard Tang
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning. (13%)Zheng Zhang; Xu Yuan; Lei Zhu; Jingkuan Song; Liqiang Nie
Controlled Generation of Natural Adversarial Documents for Stealthy Retrieval Poisoning. (13%)Collin Zhang; Tingwei Zhang; Vitaly Shmatikov
Automated Red Teaming with GOAT: the Generative Offensive Agent Tester. (11%)Maya Pavlova; Erik Brinkman; Krithika Iyer; Vitor Albiero; Joanna Bitton; Hailey Nguyen; Joe Li; Cristian Canton Ferrer; Ivan Evtimov; Aaron Grattafiori
Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models. (8%)Ching-Chia Kao; Chia-Mu Yu; Chun-Shien Lu; Chu-Song Chen
One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability. (1%)Gabriel Kasmi; Amandine Brunetto; Thomas Fel; Jayneel Parekh
2024-10-01
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective. (54%)Yixin Liu; Arielle Carr; Lichao Sun
Adversarial Suffixes May Be Features Too! (45%)Wei Zhao; Zhe Li; Yige Li; Jun Sun
2024-09-30
Characterizing Model Robustness via Natural Input Gradients. (92%)Adrián Rodríguez-Muñoz; Tongzhou Wang; Antonio Torralba
Robust LLM safeguarding via refusal feature adversarial training. (80%)Lei Yu; Virginie Do; Karen Hambardzumyan; Nicola Cancedda
Resonance Reduction Against Adversarial Attacks in Dynamic Networks via Eigenspectrum Optimization. (76%)Alp Sahin; Nicolas Kozachuk; Rick S. Blum; Subhrajit Bhattacharya
Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles. (45%)Amira Guesmi; Muhammad Shafique
VLMGuard: Defending VLMs against Malicious Prompts via Unlabeled Data. (8%)Xuefeng Du; Reshmi Ghosh; Robert Sim; Ahmed Salem; Vitor Carvalho; Emily Lawton; Yixuan Li; Jack W. Stokes
2024-09-29
MASKDROID: Robust Android Malware Detection with Masked Graph Representations. (99%)Jingnan Zheng; Jiaohao Liu; An Zhang; Jun Zeng; Ziqi Yang; Zhenkai Liang; Tat-Seng Chua
Adversarial Examples for DNA Classification. (98%)Hyunwoo Yoo
Discerning the Chaos: Detecting Adversarial Perturbations while Disentangling Intentional from Unintentional Noises. (86%)Anubhooti Jain; Susim Roy; Kwanit Gupta; Mayank Vatsa; Richa Singh
BadHMP: Backdoor Attack against Human Motion Prediction. (61%)Chaohui Xu; Si Wang; Chip-Hong Chang
Nonideality-aware training makes memristive networks more robust to adversarial attacks. (38%)Dovydas Joksas; Luis Muñoz-González; Emil Lupu; Adnan Mehonic
Infighting in the Dark: Multi-Labels Backdoor Attack in Federated Learning. (33%)Ye Li; Yanchao Zhao; Chengcheng Zhu; Jiale Zhang
Towards Robust Extractive Question Answering Models: Rethinking the Training Methodology. (10%)Son Quoc Tran; Matt Kretchmar
Learning Robust Policies via Interpretable Hamilton-Jacobi Reachability-Guided Disturbances. (5%)Hanyang Hu; Xilun Zhang; Xubo Lyu; Mo Chen
IDEAW: Robust Neural Audio Watermarking with Invertible Dual-Embedding. (1%)Pengcheng Li; Xulong Zhang; Jing Xiao; Jianzong Wang
Can Models Learn Skill Composition from Examples? (1%)Haoyu Zhao; Simran Kaur; Dingli Yu; Anirudh Goyal; Sanjeev Arora
2024-09-28
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats. (74%)Kuanrong Liu; Siyuan Liang; Jiawei Liang; Pengwen Dai; Xiaochun Cao
GenTel-Safe: A Unified Benchmark and Shielding Framework for Defending Against Prompt Injection Attacks. (13%)Rongchang Li; Minjie Chen; Chang Hu; Han Chen; Wenpeng Xing; Meng Han
Leveraging MTD to Mitigate Poisoning Attacks in Decentralized FL with Non-IID Data. (11%)Chao Feng; Alberto Huertas Celdrán; Zien Zeng; Zi Ye; Jan von der Assen; Gerome Bovet; Burkhard Stiller
Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce. (2%)Carl E. J. Brodzinski
Privacy Attack in Federated Learning is Not Easy: An Experimental Study. (1%)Hangyu Zhu; Liyuan Huang; Zhenping Xie
2024-09-27
Adversarial Challenges in Network Intrusion Detection Systems: Research Insights and Future Prospects. (96%)Sabrine Ennaji; Fabio De Gaspari; Dorjan Hitaj; Alicia K/Bidi; Luigi V. Mancini
Enhancing Robustness of Graph Neural Networks through p-Laplacian. (12%)Anuj Kumar Sirohi; Subhanu Halder; Kabir Kumar; Sandeep Kumar
Efficient Noise Mitigation for Enhancing Inference Accuracy in DNNs on Mixed-Signal Accelerators. (1%)Seyedarmin Azizi; Mohammad Erfan Sadeghi; Mehdi Kamal; Massoud Pedram
In-depth Analysis of Privacy Threats in Federated Learning for Medical Data. (1%)Badhan Chandra Das; M. Hadi Amini; Yanzhao Wu
2024-09-26
Showing Many Labels in Multi-label Classification Models: An Empirical Study of Adversarial Examples. (98%)Yujiang Liu; Wenjian Luo; Zhijian Chen; Muhammad Luqman Naseem
Cross-Modality Attack Boosted by Gradient-Evolutionary Multiform Optimization. (98%)Yunpeng Gong; Qingyuan Zeng; Dejun Xu; Zhenzhong Wang; Min Jiang
Discovering New Shadow Patterns for Black-Box Attacks on Lane Detection of Autonomous Vehicles. (97%)Pedram MohajerAnsari; Alkim Domeke; Jan de Voor; Arkajyoti Mitra; Grace Johnson; Amir Salarpour; Habeeb Olufowobi; Mohammad Hamad; Mert D. Pesé
Improving Fast Adversarial Training via Self-Knowledge Guidance. (82%)Chengze Jiang; Junkai Wang; Minjing Dong; Jie Gui; Xinli Shi; Yuan Cao; Yuan Yan Tang; James Tin-Yau Kwok
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations. (69%)Supriya Manna; Niladri Sett
CleanerCLIP: Fine-grained Counterfactual Semantic Augmentation for Backdoor Defense in Contrastive Learning. (69%)Yuan Xun; Siyuan Liang; Xiaojun Jia; Xinwei Liu; Xiaochun Cao
DarkSAM: Fooling Segment Anything Model to Segment Nothing. (68%)Ziqi Zhou; Yufei Song; Minghui Li; Shengshan Hu; Xianlong Wang; Leo Yu Zhang; Dezhong Yao; Hai Jin
Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense. (56%)Filippo Bartolucci; Iacopo Masi; Giuseppe Lisanti
Development of an Edge Resilient ML Ensemble to Tolerate ICS Adversarial Attacks. (54%)Likai Yao; Qinxuan Shi; Zhanglong Yang; Sicong Shao; Salim Hariri
Backdoor Attacks for LLMs with Weak-To-Strong Knowledge Distillation. (15%)Shuai Zhao; Leilei Gan; Zhongliang Guo; Xiaobao Wu; Luwei Xiao; Xiaoyu Xu; Cong-Duy Nguyen; Luu Anh Tuan
Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey. (15%)Tiansheng Huang; Sihao Hu; Fatih Ilhan; Selim Furkan Tekin; Ling Liu
Dark Miner: Defend against unsafe generation for text-to-image diffusion models. (5%)Zheling Meng; Bo Peng; Xiaochuan Jin; Yue Jiang; Jing Dong; Wei Wang; Tieniu Tan
An Adversarial Perspective on Machine Unlearning for AI Safety. (2%)Jakub Łucki; Boyi Wei; Yangsibo Huang; Peter Henderson; Florian Tramèr; Javier Rando
Revolutionizing Payload Inspection: A Self-Supervised Journey to Precision with Few Shots. (2%)Kyle Stein; Arash Mahyari; Guillermo Francia III; Eman El-Sheikh
2024-09-25
Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System. (93%)Kaike Zhang; Qi Cao; Yunfan Wu; Fei Sun; Huawei Shen; Xueqi Cheng
A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System. (82%)M Sabbir Salek; Shaozhi Li; Mashrur Chowdhury
RED QUEEN: Safeguarding Large Language Models against Concealed Multi-Turn Jailbreaking. (75%)Yifan Jiang; Kriti Aggarwal; Tanmay Laud; Kashif Munir; Jay Pujara; Subhabrata Mukherjee
Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving. (67%)Ce Zhou; Qiben Yan; Sijia Liu
Examining the Rat in the Tunnel: Interpretable Multi-Label Classification of Tor-based Malware. (45%)Ishan Karunanayake; Mashael AlSabah; Nadeem Ahmed; Sanjay Jha
SWE2: SubWord Enriched and Significant Word Emphasized Framework for Hate Speech Detection. (38%)Guanyi Mou; Pengyi Ye; Kyumin Lee
SHEATH: Defending Horizontal Collaboration for Distributed CNNs against Adversarial Noise. (22%)Muneeba Asif; Mohammad Kumail Kazmi; Mohammad Ashiqur Rahman; Syed Rafay Hasan; Soamar Homsi
Claim-Guided Textual Backdoor Attack for Practical Applications. (10%)Minkyoo Song; Hanna Kim; Jaehan Kim; Youngjin Jin; Seungwon Shin
Cat-and-Mouse Satellite Dynamics: Divergent Adversarial Reinforcement Learning for Contested Multi-Agent Space Operations. (1%)Cameron Mehlman; Joseph Abramov; Gregory Falco
2024-09-24
Adversarial Backdoor Defense in CLIP. (99%)Junhao Kuang; Siyuan Liang; Jiawei Liang; Kuanrong Liu; Xiaochun Cao
Revisiting Acoustic Features for Robust ASR. (84%)Muhammad A. Shah; Bhiksha Raj
Adversarial Watermarking for Face Recognition. (80%)Yuguang Yao; Anil Jain; Sijia Liu
Proactive Schemes: A Survey of Adversarial Attacks for Social Good. (54%)Vishal Asnani; Xi Yin; Xiaoming Liu
Privacy Evaluation Benchmarks for NLP Models. (45%)Wei Huang; Yinggui Wang; Cen Chen
Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis. (33%)Xianda Zhang; Siyuan Liang
PACE: Poisoning Attacks on Learned Cardinality Estimation. (4%)Jintao Zhang; Chao Zhang; Guoliang Li; Chengliang Chai
2024-09-23
Improving Adversarial Robustness for 3D Point Cloud Recognition at Test-Time through Purified Self-Training. (96%)Jinpeng Lin; Xulei Yang; Tianrui Li; Xun Xu
Interpretability-Guided Test-Time Adversarial Defense. (87%)Akshay Kulkarni; Tsui-Wei Weng
Effective and Evasive Fuzz Testing-Driven Jailbreaking Attacks against LLMs. (87%)Xueluan Gong; Mingzhe Li; Yilin Zhang; Fengyuan Ran; Chen Chen; Yanjiao Chen; Qian Wang; Kwok-Yan Lam
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks. (68%)Lingxin Jin; Meiyu Lin; Wei Jiang; Jinyu Zhan
Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI. (47%)Ambrish Rawat; Stefan Schoepf; Giulio Zizzo; Giandomenico Cornacchia; Muhammad Zaid Hameed; Kieran Fraser; Erik Miehling; Beat Buesser; Elizabeth M. Daly; Mark Purcell; Prasanna Sattigeri; Pin-Yu Chen; Kush R. Varshney
PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs. (33%)Jiahao Yu; Yangguang Shao; Hanwen Miao; Junzheng Shi; Xinyu Xing
Log-normal Mutations and their Use in Detecting Surreptitious Fake Images. (13%)Ismail Labiad; Thomas Bäck; Pierre Fernandez; Laurent Najman; Tom Sander; Furong Ye; Mariia Zameshina; Olivier Teytaud
Curb Your Attention: Causal Attention Gating for Robust Trajectory Prediction in Autonomous Driving. (12%)Ehsan Ahmadi; Ray Mercurius; Soheil Alizadeh; Kasra Rezaee; Amir Rasouli
Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks. (5%)Jiayi He; Xiaofeng Luo; Jiawen Kang; Hongyang Du; Zehui Xiong; Ci Chen; Dusit Niyato; Xuemin Shen
Room Impulse Responses help attackers to evade Deep Fake Detection. (1%)Hieu-Thi Luong; Duc-Tuan Truong; Kong Aik Lee; Eng Siong Chng
AIM 2024 Sparse Neural Rendering Challenge: Dataset and Benchmark. (1%)Michal Nazarczuk; Thomas Tanay; Sibi Catley-Chandar; Richard Shaw; Radu Timofte; Eduardo Pérez-Pellitero
UTrace: Poisoning Forensics for Private Collaborative Learning. (1%)Evan Rose; Hidde Lycklama; Harsh Chaudhari; Anwar Hithnawi; Alina Oprea
SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning. (1%)Minyeong Choe; Cheolhee Park; Changho Seo; Hyunil Kim
2024-09-22
Enhancing LLM-based Autonomous Driving Agents to Mitigate Perception Attacks. (10%)Ruoyu Song; Muslum Ozgur Ozmen; Hyungsub Kim; Antonio Bianchi; Z. Berkay Celik
Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions. (1%)Hongchen Wang; Kangming Li; Scott Ramsay; Yao Fehlis; Edward Kim; Jason Hattrick-Simpers
2024-09-21
Cloud Adversarial Example Generation for Remote Sensing Image Classification. (99%)Fei Ma; Yuqiang Feng; Fan Zhang; Yongsheng Zhou
Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation. (98%)G M Shahariar; Jia Chen; Jiachen Li; Yue Dong
When Witnesses Defend: A Witness Graph Topological Layer for Adversarial Graph Learning. (69%)Naheed Anjum Arafat; Debabrota Basu; Yulia Gel; Yuzhou Chen
PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach. (62%)Zhihao Lin; Wei Ma; Mingyi Zhou; Yanjie Zhao; Haoyu Wang; Yang Liu; Jun Wang; Li Li
ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination. (10%)Navid Ayoobi; Lily Knab; Wen Cheng; David Pantoja; Hamidreza Alikhani; Sylvain Flamant; Jin Kim; Arjun Mukherjee
Perfect Gradient Inversion in Federated Learning: A New Paradigm from the Hidden Subset Sum Problem. (8%)Qiongxiu Li; Lixia Luo; Agnese Gini; Changlong Ji; Zhanhao Hu; Xiao Li; Chengfang Fang; Jie Shi; Xiaolin Hu
Data-centric NLP Backdoor Defense from the Lens of Memorization. (4%)Zhenting Wang; Zhizhi Wang; Mingyu Jin; Mengnan Du; Juan Zhai; Shiqing Ma
2024-09-20
Efficient Visualization of Neural Networks with Generative Models and Adversarial Perturbations. (99%)Athanasios Karagounis
ViTGuard: Attention-aware Detection against Adversarial Examples for Vision Transformer. (99%)Shihua Sun; Kenechukwu Nwodo; Shridatt Sugrim; Angelos Stavrou; Haining Wang
Certified Adversarial Robustness via Partition-based Randomized Smoothing. (81%)Hossein Goli; Farzan Farnia
ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification. (76%)Zuomin Qu; Wei Lu; Xiangyang Luo; Qian Wang; Xiaochun Cao
Persistent Backdoor Attacks in Continual Learning. (73%)Zhen Guo; Abhinav Kumar; Reza Tourani
Relationship between Uncertainty in DNNs and Adversarial Attacks. (70%)Abigail Adeniran; Adewale Adeyemo
PureDiffusion: Using Backdoor to Counter Backdoor in Generative Diffusion Models. (61%)Vu Tuan Truong; Long Bao Le
On the Feasibility of Fully AI-automated Vishing Attacks. (1%)João Figueiredo; Afonso Carvalho; Daniel Castro; Daniel Gonçalves; Nuno Santos
2024-09-19
Deep generative models as an adversarial attack strategy for tabular machine learning. (99%)Salijona Dyrmishi; Mihaela Cătălina Stoian; Eleonora Giunchiglia; Maxime Cordy
TEAM: Temporal Adversarial Examples Attack Model against Network Intrusion Detection System Applied to RNN. (99%)Ziyi Liu; Dengpan Ye; Long Tang; Yunming Zhang; Jiacheng Deng
Hidden Activations Are Not Enough: A General Approach to Neural Network Predictions. (98%)Samuel Leblanc; Aiky Rasolomanana; Marco Armenta
Defending against Reverse Preference Attacks is Difficult. (83%)Domenic Rosati; Giles Edkins; Harsh Raj; David Atanasov; Subhabrata Majumdar; Janarthanan Rajendran; Frank Rudzicz; Hassan Sajjad
Revisiting Semi-supervised Adversarial Robustness via Noise-aware Online Robust Distillation. (45%)Tsung-Han Wu; Hung-Ting Su; Shang-Tse Chen; Winston H. Hsu
VCAT: Vulnerability-aware and Curiosity-driven Adversarial Training for Enhancing Autonomous Vehicle Robustness. (26%)Xuan Cai; Zhiyong Cui; Xuesong Bai; Ruimin Ke; Zhenshu Ma; Haiyang Yu; Yilong Ren
Data Poisoning and Leakage Analysis in Federated Learning. (11%)Wenqi Wei; Tiansheng Huang; Zachary Yahn; Anoop Singhal; Margaret Loper; Ling Liu
Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models. (2%)Hao Cheng; Erjia Xiao; Chengyuan Yu; Zhao Yao; Jiahang Cao; Qiang Zhang; Jiaxu Wang; Mengshu Sun; Kaidi Xu; Jindong Gu; Renjing Xu
Hidden in Plain Sound: Environmental Backdoor Poisoning Attacks on Whisper, and Mitigations. (2%)Jonatan Bartolini; Todor Stoyanov; Alberto Giaretta
2024-09-18
Enhancing 3D Robotic Vision Robustness by Minimizing Adversarial Mutual Information through a Curriculum Training Approach. (99%)Nastaran Darabi; Dinithi Jayasuriya; Devashri Naik; Theja Tulabandhula; Amit Ranjan Trivedi
ITPatch: An Invisible and Triggered Physical Adversarial Patch against Traffic Sign Recognition. (99%)Shuai Yuan; Hongwei Li; Xingshuo Han; Guowen Xu; Wenbo Jiang; Tao Ni; Qingchuan Zhao; Yuguang Fang
NPAT Null-Space Projected Adversarial Training Towards Zero Deterioration. (96%)Hanyi Hu; Qiao Han; Kui Chen; Yao Yang
LLM-Powered Text Simulation Attack Against ID-Free Recommender Systems. (76%)Zongwei Wang; Min Gao; Junliang Yu; Xinyi Gao; Quoc Viet Hung Nguyen; Shazia Sadiq; Hongzhi Yin
PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning. (68%)Yukai Xu; Yujie Gu; Kouichi Sakurai
A constrained optimization approach to improve robustness of neural networks. (54%)Shudian Zhao; Jan Kronqvist
Understanding Implosion in Text-to-Image Generative Models. (2%)Wenxin Ding; Cathy Y. Li; Shawn Shan; Ben Y. Zhao; Haitao Zheng
2024-09-17
Golden Ratio Search: A Low-Power Adversarial Attack for Deep Learning based Modulation Classification. (98%)Deepsayan Sadhukhan; Nitin Priyadarshini Shankar; Sheetal Kalyani
EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage. (76%)Zeyi Liao; Lingbo Mo; Chejian Xu; Mintong Kang; Jiawei Zhang; Chaowei Xiao; Yuan Tian; Bo Li; Huan Sun
Contextual Breach: Assessing the Robustness of Transformer-based QA Models. (56%)Asir Saadat; Nahian Ibn Asad; Md Farhan Ishmam
Hard-Label Cryptanalytic Extraction of Neural Network Models. (2%)Yi Chen; Xiaoyang Dong; Jian Guo; Yantian Shen; Anyu Wang; Xiaoyun Wang
2024-09-16
Towards Physically-Realizable Adversarial Attacks in Embodied Vision Navigation. (82%)Meng Chen; Jiawei Tu; Chao Qi; Yonghao Dang; Feng Zhou; Wei Wei; Jianqin Yin
CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble. (2%)Jonathan Rosenthal; Shanchao Liang; Kevin Zhang; Lin Tan
Realistic Extreme Behavior Generation for Improved AV Testing. (1%)Robert Dyro; Matthew Foutter; Ruolin Li; Luigi Di Lillo; Edward Schmerling; Xilin Zhou; Marco Pavone
Jailbreaking Large Language Models with Symbolic Mathematics. (1%)Emet Bethany; Mazal Bethany; Juan Arturo Nolazco Flores; Sumit Kumar Jha; Peyman Najafirad
Speaker Contrastive Learning for Source Speaker Tracing. (1%)Qing Wang; Hongmei Guo; Jian Kang; Mengjie Du; Jie Li; Xiao-Lei Zhang; Lei Xie
2024-09-15
Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective. (98%)Ningfei Wang; Shaoyuan Xie; Takami Sato; Yunpeng Luo; Kaidi Xu; Qi Alfred Chen
Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity. (8%)Hao Jian Huang; Bekzod Iskandarov; Mizanur Rahman; Hakan T. Otal; M. Abdullah Canbaz
2024-09-14
Real-world Adversarial Defense against Patch Attacks based on Diffusion Model. (99%)Xingxing Wei; Caixin Kang; Yinpeng Dong; Zhengyi Wang; Shouwei Ruan; Yubo Chen; Hang Su
2024-09-13
XSub: Explanation-Driven Adversarial Attack against Blackbox Classifiers via Feature Substitution. (95%)Kiana Vu; Phung Lai; Truc Nguyen
Are Existing Road Design Guidelines Suitable for Autonomous Vehicles? (41%)Yang Sun; Christopher M. Poskitt; Jun Sun
Clean Label Attacks against SLU Systems. (31%)Henry Li Xinyuan; Sonal Joshi; Thomas Thebaud; Jesus Villalba; Najim Dehak; Sanjeev Khudanpur
FAST: Boosting Uncertainty-based Test Prioritization Methods for Neural Networks via Feature Selection. (15%)Jialuo Chen; Jingyi Wang; Xiyue Zhang; Youcheng Sun; Marta Kwiatkowska; Jiming Chen; Peng Cheng
2024-09-12
LoRID: Low-Rank Iterative Diffusion for Adversarial Purification. (99%)Geigh Zollicoffer; Minh Vu; Ben Nebgen; Juan Castorena; Boian Alexandrov; Manish Bhattarai
High-Frequency Anti-DreamBooth: Robust Defense against Personalized Image Synthesis. (93%)Takuto Onikubo; Yusuke Matsui
FedProphet: Memory-Efficient Federated Adversarial Training via Theoretic-Robustness and Low-Inconsistency Cascade Learning. (92%)Minxue Tang; Yitu Wang; Jingyang Zhang; Louis DiValentin; Aolin Ding; Amin Hass; Yiran Chen; Hai "Helen" Li
Exploiting Supervised Poison Vulnerability to Strengthen Self-Supervised Defense. (73%)Jeremy Styborski; Mingzhi Lyu; Yi Huang; Adams Kong
Sub-graph Based Diffusion Model for Link Prediction. (9%)Hang Li; Wei Jin; Geri Skenderi; Harry Shomer; Wenzhuo Tang; Wenqi Fan; Jiliang Tang
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking. (1%)Stav Cohen; Ron Bitton; Ben Nassi
Risks When Sharing LoRA Fine-Tuned Diffusion Model Weights. (1%)Dixi Yao
2024-09-11
Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving. (99%)Tianyuan Zhang; Lu Wang; Jiaqi Kang; Xinwei Zhang; Siyuan Liang; Yuwei Chen; Aishan Liu; Xianglong Liu
Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks. (98%)Md Zarif Hossain; Ahmed Imteaj
Introducing Perturb-ability Score (PS) to Enhance Robustness Against Evasion Adversarial Attacks on ML-NIDS. (97%)Mohamed elShehaby; Ashraf Matrawy
D-CAPTCHA++: A Study of Resilience of Deepfake CAPTCHA under Transferable Imperceptible Adversarial Attack. (93%)Hong-Hanh Nguyen-Le; Van-Tuan Tran; Dinh-Thuc Nguyen; Nhien-An Le-Khac
A Cost-Aware Approach to Adversarial Robustness in Neural Networks. (84%)Charles Meyers; Mohammad Reza Saleh Sedghpour; Tommy Löfstedt; Erik Elmroth
Attack End-to-End Autonomous Driving through Module-Wise Noise. (74%)Lu Wang; Tianyuan Zhang; Yikai Han; Muyang Fang; Ting Jin; Jiaqi Kang
On the Vulnerability of Applying Retrieval-Augmented Generation within Knowledge-Intensive Application Domains. (67%)Xun Xian; Ganghua Wang; Xuan Bi; Jayanth Srinivasa; Ashish Kundu; Charles Fleming; Mingyi Hong; Jie Ding
Enhancing adversarial robustness in Natural Language Inference using explanations. (67%)Alexandros Koulakos; Maria Lymperaiou; Giorgos Filandrianos; Giorgos Stamou
AdvLogo: Adversarial Patch Attack against Object Detectors based on Diffusion Models. (64%)Boming Miao; Chunxiao Li; Yao Zhu; Weixiang Sun; Zizhe Wang; Xiaoyi Wang; Chuanlong Xie
Understanding Knowledge Drift in LLMs through Misinformation. (1%)Alina Fastowski; Gjergji Kasneci
2024-09-10
Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models. (99%)Siyu Zhai; Zhibo He; Xiaofeng Cong; Junming Hou; Jie Gui; Jian Wei You; Xin Gong; James Tin-Yau Kwok; Yuan Yan Tang
Advancing Hybrid Defense for Byzantine Attacks in Federated Learning. (84%)Kai Yue; Richeng Jin; Chau-Wai Wong; Huaiyu Dai
Adversarial Attacks to Multi-Modal Models. (76%)Zhihao Dou; Xin Hu; Haibo Yang; Zhuqing Liu; Minghong Fang
DV-FSR: A Dual-View Target Attack Framework for Federated Sequential Recommendation. (67%)Qitao Qin; Yucong Luo; Mingyue Cheng; Qingyang Mao; Chenyi Lei
2024-09-09
Seeing Through the Mask: Rethinking Adversarial Examples for CAPTCHAs. (99%)Yahya Jabary; Andreas Plesner; Turlan Kuzhagaliyev; Roger Wattenhofer
Adversarial Attacks on Data Attribution. (99%)Xinhe Wang; Pingbang Hu; Junwei Deng; Jiaqi W. Ma
Unlearning or Concealment? A Critical Analysis and Evaluation Metrics for Unlearning in Diffusion Models. (84%)Aakash Sen Sharma; Niladri Sarkar; Vikram Chundawat; Ankur A Mali; Murari Mandal
Input Space Mode Connectivity in Deep Neural Networks. (83%)Jakub Vrabel; Ori Shem-Ur; Yaron Oz; David Krueger
On the Weaknesses of Backdoor-based Model Watermarking: An Information-theoretic Perspective. (33%)Aoting Hu; Yanzhi Chen; Renjie Xie; Adrian Weller
2024-09-08
PIP: Detecting Adversarial Examples in Large Vision-Language Models via Attention Patterns of Irrelevant Probe Questions. (99%)Yudong Zhang; Ruobing Xie; Jiansheng Chen; Xingwu Sun; Yu Wang
2DSig-Detect: a semi-supervised framework for anomaly detection on image data using 2D-signatures. (87%)Xinheng Xie; Kureha Yamaguchi; Margaux Leblanc; Simon Malzard; Varun Chhabra; Victoria Nockles; Yue Wu
Vision-fused Attack: Advancing Aggressive and Stealthy Adversarial Text against Neural Machine Translation. (67%)Yanni Xue; Haojie Hao; Jiakai Wang; Qiang Sheng; Renshuai Tao; Yu Liang; Pu Feng; Xianglong Liu
Natias: Neuron Attribution based Transferable Image Adversarial Steganography. (67%)Zexin Fan; Kejiang Chen; Kai Zeng; Jiansong Zhang; Weiming Zhang; Nenghai Yu
2024-09-07
Phrase-Level Adversarial Training for Mitigating Bias in Neural Network-based Automatic Essay Scoring. (86%)Haddad Philip; Tsegaye Misikir Tashu
PIXHELL Attack: Leaking Sensitive Information from Air-Gap Computers via 'Singing Pixels'. (80%)Mordechai Guri
Top-GAP: Integrating Size Priors in CNNs for more Interpretability, Robustness, and Bias Mitigation. (12%)Lars Nieradzik; Henrike Stephani; Janis Keuper
2024-09-06
Learning to Learn Transferable Generative Attack for Person Re-Identification. (99%)Yuan Bian; Min Liu; Xueping Wang; Yunfeng Ma; Yaonan Wang
PANTS: Practical Adversarial Network Traffic Samples against ML-powered Networking Classifiers. (99%)Minhao Jin; Maria Apostolaki
Secure Traffic Sign Recognition: An Attention-Enabled Universal Image Inpainting Mechanism against Light Patch Attacks. (83%)Hangcheng Cao; Longzhi Yuan; Guowen Xu; Ziyang He; Zhengru Fang; Yuguang Fang
Mind The Gap: Can Air-Gaps Keep Your Private Data Secure? (74%)Mordechai Guri
Exploiting the Data Gap: Utilizing Non-ignorable Missingness to Manipulate Model Learning. (38%)Deniz Koyuncu; Alex Gittens; Bülent Yener; Moti Yung
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers. (8%)Gorka Abad; Stjepan Picek; Lorenzo Cavallaro; Aitor Urbieta
Dual-stream Feature Augmentation for Domain Generalization. (8%)Shanshan Wang; ALuSi; Xun Yang; Ke Xu; Huibin Tan; Xingyi Zhang
2024-09-05
A practical approach to evaluating the adversarial distance for machine learning classifiers. (98%)Georg Siedel; Ekagra Gupta; Andrey Morozov
Non-Uniform Illumination Attack for Fooling Convolutional Neural Networks. (92%)Akshay Jain; Shiv Ram Dubey; Satish Kumar Singh; KC Santosh; Bidyut Baran Chaudhuri
Limited but consistent gains in adversarial robustness by co-training object recognition models with human EEG. (31%)Manshan Guo; Bhavin Choksi; Sari Sadiya; Alessandro T. Gifford; Martina G. Vilas; Radoslaw M. Cichy; Gemma Roig
Recent Advances in Attack and Defense Approaches of Large Language Models. (4%)Jing Cui; Yishi Xu; Zhewei Huang; Shuchang Zhou; Jianbin Jiao; Junge Zhang
WaterMAS: Sharpness-Aware Maximization for Neural Network Watermarking. (3%)Carl De Sousa Trias; Mihai Mitrea; Attilio Fiandrotti; Marco Cagnazzo; Sumanta Chaudhuri; Enzo Tartaglione
Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm? (1%)Rui Wen; Michael Backes; Yang Zhang
2024-09-04
Bypassing DARCY Defense: Indistinguishable Universal Adversarial Triggers. (99%)Zuquan Peng; Yuanyuan He; Jianbing Ni; Ben Niu
OpenFact at CheckThat! 2024: Combining Multiple Attack Methods for Effective Adversarial Text Generation. (99%)Włodzimierz Lewoniewski; Piotr Stolarski; Milena Stróżyna; Elzbieta Lewańska; Aleksandra Wojewoda; Ewelina Księżniak; Marcin Sawiński
TASAR: Transferable Attack on Skeletal Action Recognition. (92%)Yunfeng Diao; Baiqi Wu; Ruixuan Zhang; Ajian Liu; Xingxing Wei; Meng Wang; He Wang
Adversarial Attacks on Machine Learning-Aided Visualizations. (83%)Takanori Fujiwara; Kostiantyn Kucher; Junpeng Wang; Rafael M. Martins; Andreas Kerren; Anders Ynnerman
Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receviers. (76%)Kunze Wu; Weiheng Jiang; Dusit Niyato; Yinghuan Li; Chuang Luo
Boosting Certificate Robustness for Time Series Classification with Efficient Self-Ensemble. (70%)Chang Dong; Zhengyang Li; Liangwei Zheng; Weitong Chen; Wei Emma Zhang
AdvSecureNet: A Python Toolkit for Adversarial Machine Learning. (33%)Melih Catal; Manuel Günther
Active Fake: DeepFake Camouflage. (13%)Pu Sun; Honggang Qi; Yuezun Li
Well, that escalated quickly: The Single-Turn Crescendo Attack (STCA). (2%)Alan Aqrawi
2024-09-03
Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor. (97%)Abdullah Arafat Miah; Yu Bi
Dynamic Guidance Adversarial Distillation with Enhanced Teacher Knowledge. (92%)Hyejin Park; Dongbo Min
NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise. (16%)Abdullah Arafat Miah; Kaan Icer; Resit Sendag; Yu Bi
Reassessing Noise Augmentation Methods in the Context of Adversarial Speech. (5%)Karla Pizzi; Matías Pizarro; Asja Fischer
On the Vulnerability of Skip Connections to Model Inversion Attacks. (3%)Jun Hao Koh; Sy-Tuyen Ho; Ngoc-Bao Nguyen; Ngai-man Cheung
2024-09-02
One-Index Vector Quantization Based Adversarial Attack on Image Classification. (99%)Haiju Fan; Xiaona Qin; Shuang Chen; Hubert P. H. Shum; Ming Li
Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness. (99%)Giorgio Piras; Maura Pintor; Ambra Demontis; Battista Biggio; Giorgio Giacinto; Fabio Roli
Phantom: Untargeted Poisoning Attacks on Semi-Supervised Learning (Full Version). (68%)Jonathan Knauer; Phillip Rieger; Hossein Fereidooni; Ahmad-Reza Sadeghi
Defending against Model Inversion Attacks via Random Erasing. (64%)Viet-Hung Tran; Ngoc-Bao Nguyen; Son T. Mai; Hans Vandierendonck; Ngai-man Cheung
CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models. (62%)Rui Zeng; Xi Chen; Yuwen Pu; Xuhong Zhang; Tianyu Du; Shouling Ji
Purification-Agnostic Proxy Learning for Agentic Copyright Watermarking against Adversarial Evidence Forgery. (26%)Erjin Bao; Ching-Chun Chang; Hanrui Wang; Isao Echizen
Unveiling the Vulnerability of Private Fine-Tuning in Split-Based Frameworks for Large Language Models: A Bidirectionally Enhanced Attack. (26%)Guanzhong Chen; Zhenghan Qin; Mingxin Yang; Yajie Zhou; Tao Fan; Tianyu Du; Zenglin Xu
A Review of Image Retrieval Techniques: Data Augmentation and Adversarial Learning Approaches. (16%)Kim Jinwoo
Spatial-Aware Conformal Prediction for Trustworthy Hyperspectral Image Classification. (1%)Kangdao Liu; Tianhao Sun; Hao Zeng; Yongshan Zhang; Chi-Man Pun; Chi-Man Vong
2024-09-01
Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers. (99%)Rahul Yumlembam; Biju Issac; Seibu Mary Jacob; Longzhi Yang
Accurate Forgetting for All-in-One Image Restoration Model. (83%)Xin Su; Zhuoran Zheng
The Dark Side of Human Feedback: Poisoning Large Language Models via User Inputs. (26%)Bocheng Chen; Hanqing Guo; Guangjing Wang; Yuanda Wang; Qiben Yan
Fisher Information guided Purification against Backdoor Attacks. (12%)Nazmul Karim; Abdullah Al Arafat; Adnan Siraj Rakin; Zhishan Guo; Nazanin Rahnavard
2024-08-31
HSF: Defending against Jailbreak Attacks with Hidden State Filtering. (75%)Cheng Qian; Hainan Zhang; Lei Sha; Zhiming Zheng
Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks. (15%)Yu He; Boheng Li; Yao Wang; Mengda Yang; Juan Wang; Hongxin Hu; Xingyu Zhao
Robust off-policy Reinforcement Learning via Soft Constrained Adversary. (4%)Kosuke Nakanishi; Akihiro Kubo; Yuji Yasui; Shin Ishii
2024-08-30
LightPure: Realtime Adversarial Image Purification for Mobile Devices Using Diffusion Models. (92%)Hossein Khalili; Seongbin Park; Vincent Li; Brandan Bright; Ali Payani; Ramana Rao Kompella; Nader Sehatbakhsh
Instant Adversarial Purification with Adversarial Consistency Distillation. (33%)Chun Tong Lei; Hon Ming Yam; Zhongliang Guo; Chun Pong Lau
PRADA: Proactive Risk Assessment and Mitigation of Misinformed Demand Attacks on Navigational Route Recommendations. (8%)Ya-Ting Yang; Haozhe Lei; Quanyan Zhu
Evaluating Reliability in Medical DNNs: A Critical Analysis of Feature and Confidence-Based OOD Detection. (1%)Harry Anthony; Konstantinos Kamnitsas
2024-08-29
PromptSmooth: Certifying Robustness of Medical Vision-Language Models via Prompt Learning. (92%)Noor Hussein; Fahad Shamshad; Muzammal Naseer; Karthik Nandakumar
STEREO: Towards Adversarially Robust Concept Erasing from Text-to-Image Generation Models. (83%)Koushik Srivatsan; Fahad Shamshad; Muzammal Naseer; Karthik Nandakumar
SFR-GNN: Simple and Fast Robust GNNs against Structural Attacks. (67%)Xing Ai; Guanyu Zhu; Yulin Zhu; Yu Zheng; Gaolei Li; Jianhua Li; Kai Zhou
Analyzing Inference Privacy Risks Through Gradients in Machine Learning. (54%)Zhuohang Li; Andrew Lowy; Jing Liu; Toshiaki Koike-Akino; Kieran Parsons; Bradley Malin; Ye Wang
Tex-ViT: A Generalizable, Robust, Texture-based dual-branch cross-attention deepfake detector. (12%)Deepak Dagar; Dinesh Kumar Vishwakarma
2024-08-28
Evaluating Model Robustness Using Adaptive Sparse L0 Regularization. (99%)Weiyou Liu; Zhenyang Li; Weitong Chen
Network transferability of adversarial patches in real-time object detection. (83%)Jens Bayer; Stefan Becker; David Münch; Michael Arens
Defending Text-to-image Diffusion Models: Surprising Efficacy of Textual Perturbations Against Backdoor Attacks. (83%)Oscar Chew; Po-Yi Lu; Jayden Lin; Hsuan-Tien Lin
Fusing Pruned and Backdoored Models: Optimal Transport-based Data-free Backdoor Mitigation. (47%)Weilin Lin; Li Liu; Jianze Li; Hui Xiong
VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification. (2%)Yungi Cho; Woorim Han; Miseon Yu; Younghan Lee; Ho Bae; Yunheung Paek
FRACTURED-SORRY-Bench: Framework for Revealing Attacks in Conversational Turns Undermining Refusal Efficacy and Defenses over SORRY-Bench (Automated Multi-shot Jailbreaks). (1%)Aman Priyanshu; Supriti Vijay
2024-08-27
Adversarial Attacks and Defenses in Multivariate Time-Series Forecasting for Smart and Connected Infrastructures. (99%)Pooja Krishan; Rohan Mohapatra; Saptarshi Sengupta
Certified Causal Defense with Generalizable Robustness. (99%)Yiran Qiao; Yu Yin; Chen Chen; Jing Ma
Improving Adversarial Robustness in Android Malware Detection by Reducing the Impact of Spurious Correlations. (99%)Hamid Bostani; Zhengyu Zhao; Veelasha Moonsamy
Adversarial Manhole: Challenging Monocular Depth Estimation and Semantic Segmentation Models with Patch Attack. (98%)Naufal Suryanto; Andro Aprila Adiputra; Ahmada Yusril Kadiptya; Yongsu Kim; Howon Kim
LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet. (12%)Nathaniel Li; Ziwen Han; Ian Steneker; Willow Primack; Riley Goodside; Hugh Zhang; Zifan Wang; Cristina Menghini; Summer Yue
Investigating Coverage Criteria in Large Language Models: An In-Depth Study Through Jailbreak Attacks. (11%)Shide Zhou; Tianlin Li; Kailong Wang; Yihao Huang; Ling Shi; Yang Liu; Haoyu Wang
Detecting AI Flaws: Target-Driven Attacks on Internal Faults in Language Models. (8%)Yuhao Du; Zhuo Li; Pengyu Cheng; Xiang Wan; Anningzhe Gao
SpecGuard: Specification Aware Recovery for Robotic Autonomous Vehicles from Physical Attacks. (3%)Pritam Dash; Ethan Chan; Karthik Pattabiraman
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models. (2%)Wenhan Yao; Zedong Xing; Xiarun Chen; Jia Liu; Yongqiang He; Weiping Wen
2024-08-26
TART: Boosting Clean Accuracy Through Tangent Direction Guided Adversarial Training. (99%)Bongsoo Yi; Rongjie Lai; Yao Li
2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems. (99%)Chiara Galdi; Michele Panariello; Massimiliano Todisco; Nicholas Evans
Feedback-based Modal Mutual Search for Attacking Vision-Language Pre-training Models. (99%)Renhua Ding; Xinze Zhang; Xiao Yang; Kun He
Celtibero: Robust Layered Aggregation for Federated Learning. (92%)Borja Molina-Coronado
Dual Adversarial Perturbators Generate rich Views for Recommendation. (5%)Lijun Zhang; Yuan Yao; Haibo Ye
Investigating the Effectiveness of Bayesian Spam Filters in Detecting LLM-modified Spam Mails. (1%)Malte Josten; Torben Weis
Surprisingly Fragile: Assessing and Addressing Prompt Instability in Multimodal Foundation Models. (1%)Ian Stewart; Sameera Horawalavithana; Brendan Kennedy; Sai Munikoti; Karl Pazdernik
2024-08-25
On the Robustness of Kolmogorov-Arnold Networks: An Adversarial Perspective. (98%)Tal Alter; Raz Lapid; Moshe Sipper
HTS-Attack: Heuristic Token Search for Jailbreaking Text-to-Image Models. (97%)Sensen Gao; Xiaojun Jia; Yihao Huang; Ranjie Duan; Jindong Gu; Yang Bai; Yang Liu; Qing Guo
TF-Attack: Transferable and Fast Adversarial Attacks on Large Language Models. (96%)Zelin Li; Kehai Chen; Lemao Liu; Xuefeng Bai; Mingming Yang; Yang Xiang; Min Zhang
Generalization of Graph Neural Networks is Robust to Model Mismatch. (1%)Zhiyang Wang; Juan Cervino; Alejandro Ribeiro
2024-08-24
Probing the Robustness of Vision-Language Pretrained Models: A Multimodal Adversarial Attack Approach. (99%)Jiwei Guan; Tianyu Ding; Longbing Cao; Lei Pan; Chen Wang; Xi Zheng
Evaluating the Robustness of LiDAR-based 3D Obstacles Detection and Its Impacts on Autonomous Driving Systems. (1%)Tri Minh Triet Pham; Bo Yang; Jinqiu Yang
2024-08-23
Dynamic Label Adversarial Training for Deep Learning Robustness Against Adversarial Attacks. (99%)Zhenyu Liu; Haoran Duan; Huizhi Liang; Yang Long; Vaclav Snasel; Giuseppe Nicosia; Rajiv Ranjan; Varun Ojha
Toward Improving Synthetic Audio Spoofing Detection Robustness via Meta-Learning and Disentangled Training With Adversarial Examples. (98%)Zhenyu Wang; John H. L. Hansen
Disentangled Training with Adversarial Examples For Robust Small-footprint Keyword Spotting. (83%)Zhenyu Wang; Li Wan; Biqiao Zhang; Yiteng Huang; Shang-Wen Li; Ming Sun; Xin Lei; Zhaojun Yang
Protecting against simultaneous data poisoning attacks. (54%)Neel Alex; Shoaib Ahmed Siddiqui; Amartya Sanyal; David Krueger
2024-08-22
Enhancing Transferability of Adversarial Attacks with GE-AdvGAN+: A Comprehensive Framework for Gradient Editing. (99%)Zhibo Jin; Jiayu Zhang; Zhiyu Zhu; Yuchen Zhang; Jiahao Huang; Jianlong Zhou; Fang Chen
Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks. (99%)Zhibo Jin; Jiayu Zhang; Zhiyu Zhu; Xinyi Wang; Yiyun Huang; Huaming Chen
MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer. (98%)Ming Sun; Lihua Jing; Zixuan Zhu; Rui Wang
BankTweak: Adversarial Attack against Multi-Object Trackers by Manipulating Feature Banks. (80%)Woojin Shin; Donghwa Kang; Daejin Choi; Brent Kang; Jinkyu Lee; Hyeongboo Baek
On the Credibility of Backdoor Attacks Against Object Detectors in the Physical World. (75%)Bao Gia Doan; Dang Quang Nguyen; Callum Lindquist; Paul Montague; Tamas Abraham; Olivier De Vel; Seyit Camtepe; Salil S. Kanhere; Ehsan Abbasnejad; Damith C. Ranasinghe
Quantifying Psychological Sophistication of Malicious Emails. (2%)Theodore Longtchi; Rosana Montañez Rodriguez; Kora Gwartney; Ekzhin Ear; David P. Azari; Christopher P. Kelley; Shouhuai Xu
Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks. (2%)Yusuf Usman; Aadesh Upadhyay; Prashnna Gyawali; Robin Chataut
VALE: A Multimodal Visual and Language Explanation Framework for Image Classifiers using eXplainable AI and Language Models. (2%)Purushothaman Natarajan; Athira Nambiar
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models. (2%)Yige Li; Hanxun Huang; Yunhan Zhao; Xingjun Ma; Jun Sun
2024-08-21
Query-Efficient Video Adversarial Attack with Stylized Logo. (99%)Duoxun Tang; Yuxin Cao; Xi Xiao; Derui Wang; Sheng Wen; Tianqing Zhu
Pixel Is Not A Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models. (74%)Chun-Yen Shih; Li-Xuan Peng; Jia-Wei Liao; Ernie Chu; Cheng-Fu Chou; Jun-Cheng Chen
A Practical Trigger-Free Backdoor Attack on Neural Networks. (67%)Jiahao Wang; Xianglong Zhang; Xiuzhen Cheng; Pengfei Hu; Guoming Zhang
First line of defense: A robust first layer mitigates adversarial attacks. (54%)Janani Suresh; Nancy Nayak; Sheetal Kalyani
Exploring Robustness of Visual State Space model against Backdoor Attacks. (45%)Cheng-Yi Lee; Cheng-Chang Tsai; Chia-Mu Yu; Chun-Shien Lu
Against All Odds: Overcoming Typology, Script, and Language Confusion in Multilingual Embedding Inversion Attacks. (26%)Yiyi Chen; Russa Biswas; Heather Lent; Johannes Bjerva
Latent Feature and Attention Dual Erasure Attack against Multi-View Diffusion Models for 3D Assets Protection. (12%)Jingwei Sun; Xuchong Zhang; Changfeng Sun; Qicheng Bai; Hongbin Sun
Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks. (10%)Ziqiang Li; Yueqi Zeng; Pengfei Xia; Lei Liu; Zhangjie Fu; Bin Li
2024-08-20
GAIM: Attacking Graph Neural Networks via Adversarial Influence Maximization. (99%)Xiaodong Yang; Xiaoting Li; Huiyuan Chen; Yiwei Cai
Correlation Analysis of Adversarial Attack in Time Series Classification. (99%)Zhengyang Li; Wenhao Liang; Chang Dong; Weitong Chen; Dong Huang
Privacy-preserving Universal Adversarial Defense for Black-box Models. (99%)Qiao Li; Cong Wu; Jing Chen; Zijun Zhang; Kun He; Ruiying Du; Xinxin Wang; Qingchuang Zhao; Yang Liu
MsMemoryGAN: A Multi-scale Memory GAN for Palm-vein Adversarial Purification. (99%)Huafeng Qin; Yuming Fu; Huiyan Zhang; Mounim A. El-Yacoubi; Xinbo Gao; Qun Song; Jun Wang
Revisiting Min-Max Optimization Problem in Adversarial Training. (97%)Sina Hajer Ahmadi; Hassan Bahrami
Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models. (97%)Cong Wan; Yuhang He; Xiang Song; Yihong Gong
Iterative Window Mean Filter: Thwarting Diffusion-based Adversarial Purification. (87%)Hanrui Wang; Ruoxi Sun; Cunjian Chen; Minhui Xue; Lay-Ki Soon; Shuo Wang; Zhe Jin
Adversarial Attack for Explanation Robustness of Rationalization Models. (82%)Yuankai Zhang; Lingxiao Kong; Haozhao Wang; Ruixuan Li; Jun Wang; Yuhua Li; Wei Liu
Towards Robust Knowledge Unlearning: An Adversarial Framework for Assessing and Improving Unlearning Robustness in Large Language Models. (73%)Hongbang Yuan; Zhuoran Jin; Pengfei Cao; Yubo Chen; Kang Liu; Jun Zhao
A Grey-box Attack against Latent Diffusion Model-based Image Editing by Posterior Collapse. (68%)Zhongliang Guo; Lei Fang; Jingyu Lin; Yifei Qian; Shuai Zhao; Zeyu Wang; Junhao Dong; Cunjian Chen; Ognjen Arandjelović; Chun Pong Lau
Security Assessment of Hierarchical Federated Deep Learning. (67%)D Alqattan; R Sun; H Liang; G Nicosia; V Snasel; R Ranjan; V Ojha
Improving Out-of-Distribution Data Handling and Corruption Resistance via Modern Hopfield Networks. (54%)Saleh Sargolzaei; Luis Rueda
Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique. (50%)Tej Deep Pala; Vernon Y. H. Toh; Rishabh Bhardwaj; Soujanya Poria
Makeup-Guided Facial Privacy Protection via Untrained Neural Network Priors. (33%)Fahad Shamshad; Muzammal Naseer; Karthik Nandakumar
EEG-Defender: Defending against Jailbreak through Early Exit Generation of Large Language Models. (31%)Chongwen Zhao; Zhihao Dou; Kaizhu Huang
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems. (26%)Yunfan Wu; Qi Cao; Shuchang Tao; Kaike Zhang; Fei Sun; Huawei Shen
Unlocking Adversarial Suffix Optimization Without Affirmative Phrases: Efficient Black-box Jailbreaking via LLM as Optimizer. (10%)Weipeng Jiang; Zhenting Wang; Juan Zhai; Shiqing Ma; Zhengyu Zhao; Chao Shen
Security Attacks on LLM-based Code Completion Tools. (8%)Wen Cheng; Ke Sun; Xinyu Zhang; Wei Wang
MEGen: Generative Backdoor in Large Language Models via Model Editing. (2%)Jiyang Qiu; Xinbei Ma; Zhuosheng Zhang; Hai Zhao
Learning Randomized Algorithms with Transformers. (1%)Johannes von Oswald; Seijin Kobayashi; Yassir Akram; Angelika Steger
2024-08-19
Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks. (99%)Hetvi Waghela; Jaydip Sen; Sneha Rakshit
Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis. (99%)Kira Maag; Roman Resner; Asja Fischer
Segment-Anything Models Achieve Zero-shot Robustness in Autonomous Driving. (98%)Jun Yan; Pengyu Wang; Danni Wang; Weiquan Huang; Daniel Watzenig; Huilin Yin
Criticality Leveraged Adversarial Training (CLAT) for Boosted Performance via Parameter Efficiency. (31%)Bhavna Gopal; Huanrui Yang; Jingyang Zhang; Mark Horton; Yiran Chen
The Brittleness of AI-Generated Image Watermarking Techniques: Examining Their Robustness Against Visual Paraphrasing Attacks. (5%)Niyar R Barman; Krish Sharma; Ashhar Aziz; Shashwat Bajpai; Shwetangshu Biswas; Vasu Sharma; Vinija Jain; Aman Chadha; Amit Sheth; Amitava Das
Transferring Backdoors between Large Language Models by Knowledge Distillation. (2%)Pengzhou Cheng; Zongru Wu; Tianjie Ju; Wei Du; Zhuosheng Zhang; Gongshen Liu
Enhance Modality Robustness in Text-Centric Multimodal Alignment with Adversarial Prompting. (1%)Yun-Da Tsai; Ting-Yu Yen; Keng-Te Liao; Shou-De Lin
Perfectly Undetectable Reflection and Scaling False Data Injection Attacks via Affine Transformation on Mobile Robot Trajectory Tracking Control. (1%)Jun Ueda; Hyukbin Kwon
2024-08-18
Enhancing Adversarial Transferability with Adversarial Weight Tuning. (99%)Jiahao Chen; Zhou Feng; Rui Zeng; Yuwen Pu; Chunyi Zhou; Yi Jiang; Yuyou Gan; Jinbao Li; Shouling Ji
Regularization for Adversarial Robust Learning. (41%)Jie Wang; Rui Gao; Yao Xie
Adversarial Attacked Teacher for Unsupervised Domain Adaptive Object Detection. (31%)Kaiwen Wang; Yinzhe Shen; Martin Lauer
GANPrompt: Enhancing Robustness in LLM-Based Recommendations with GAN-Enhanced Diversity Prompts. (1%)Xinyu Li; Chuang Zhao; Hongke Zhao; Likang Wu; Ming He
Global BGP Attacks that Evade Route Monitoring. (1%)Henry Birge-Lee; Maria Apostolaki; Jennifer Rexford
2024-08-17
Attack Anything: Blind DNNs via Universal Background Adversarial Attack. (99%)Jiawei Lian; Shaohui Mei; Xiaofei Wang; Yi Wang; Lefan Wang; Yingjie Lu; Mingyang Ma; Lap-Pui Chau
Training Verifiably Robust Agents Using Set-Based Reinforcement Learning. (75%)Manuel Wendl; Lukas Koller; Tobias Ladner; Matthias Althoff
DiffZOO: A Purely Query-Based Black-Box Attack for Red-teaming Text-to-Image Generative Model via Zeroth Order Optimization. (67%)Pucheng Dang; Xing Hu; Dong Li; Rui Zhang; Qi Guo; Kaidi Xu
PADetBench: Towards Benchmarking Physical Attacks against Object Detection. (62%)Jiawei Lian; Jianhong Pan; Lefan Wang; Yi Wang; Lap-Pui Chau; Shaohui Mei
Malacopula: adversarial automatic speaker verification attacks using a neural-based generalised Hammerstein model. (31%)Massimiliano Todisco; Michele Panariello; Xin Wang; Héctor Delgado; Kong Aik Lee; Nicholas Evans
BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger. (10%)Yulin Chen; Haoran Li; Zihao Zheng; Yangqiu Song
Characterizing and Evaluating the Reliability of LLMs against Jailbreak Attacks. (5%)Kexin Chen; Yi Liu; Dongxia Wang; Jiaying Chen; Wenhai Wang
PREMAP: A Unifying PREiMage APproximation Framework for Neural Networks. (2%)Xiyue Zhang; Benjie Wang; Marta Kwiatkowska; Huan Zhang
Out-of-distribution materials property prediction using adversarial learning based fine-tuning. (1%)Qinyang Li; Nicholas Miklaucic; Jianjun Hu
2024-08-16
Ask, Attend, Attack: A Effective Decision-Based Black-Box Targeted Attack for Image-to-Text Models. (98%)Qingyuan Zeng; Zhenzhong Wang; Yiu-ming Cheung; Min Jiang
Towards Physical World Backdoor Attacks against Skeleton Action Recognition. (93%)Qichen Zheng; Yi Yu; Siyuan Yang; Jun Liu; Kwok-Yan Lam; Alex Kot
LEVIS: Large Exact Verifiable Input Spaces for Neural Networks. (87%)Mohamad Fares El Hajj Chehade; Brian Wesley Bell; Russell Bent; Hao Zhu; Wenting Li
Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks? (83%)Zhongjian Zhang; Xiao Wang; Huichi Zhou; Yue Yu; Mengmei Zhang; Cheng Yang; Chuan Shi
Visual-Friendly Concept Protection via Selective Adversarial Perturbations. (75%)Xiaoyue Mi; Fan Tang; Juan Cao; Peng Li; Yang Liu
Mitigating Backdoor Attacks in Federated Learning via Flipping Weight Updates of Low-Activation Input Neurons. (1%)Binbin Ding; Penghui Yang; Zeqing Ge; Shengjun Huang
2024-08-15
DFT-Based Adversarial Attack Detection in MRI Brain Imaging: Enhancing Diagnostic Accuracy in Alzheimer's Case Studies. (99%)Mohammad Hossein Najafi; Mohammad Morsali; Mohammadmahdi Vahediahmar; Saeed Bagheri Shouraki
A Multi-task Adversarial Attack Against Face Authentication. (98%)Hanrui Wang; Shuo Wang; Cunjian Chen; Massimo Tistarelli; Zhe Jin
Evaluating Text Classification Robustness to Part-of-Speech Adversarial Examples. (98%)Anahita Samadi; Allison Sullivan
Unlearnable Examples Detection via Iterative Filtering. (88%)Yi Yu; Qichen Zheng; Siyuan Yang; Wenhan Yang; Jun Liu; Shijian Lu; Yap-Peng Tan; Kwok-Yan Lam; Alex Kot
A Survey of Trojan Attacks and Defenses to Deep Neural Networks. (78%)Lingxin Jin; Xianyu Wen; Wei Jiang; Jinyu Zhan
Efficient Image-to-Image Diffusion Classifier for Adversarial Robustness. (76%)Hefei Mei; Minjing Dong; Chang Xu
Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks. (74%)Jiawei Zhao; Kejiang Chen; Xiaojian Yuan; Weiming Zhang
$\textit{MMJ-Bench}$: A Comprehensive Study on Jailbreak Attacks and Defenses for Multimodal Large Language Models. (70%)Fenghua Weng; Yue Xu; Chengyan Fu; Wenjie Wang
Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning. (8%)Joon Kim; Sejin Park
A Robust Multi-Stage Intrusion Detection System for In-Vehicle Network Security using Hierarchical Federated Learning. (2%)Muzun Althunayyan; Amir Javed; Omer Rana
2024-08-14
Enhancing Adversarial Attacks via Parameter Adaptive Adversarial Attack. (99%)Zhibo Jin; Jiayu Zhang; Zhiyu Zhu; Chenyu Zhang; Jiahao Huang; Jianlong Zhou; Fang Chen
TabularBench: Benchmarking Adversarial Robustness for Tabular Deep Learning in Real-world Use-cases. (98%)Thibault Simonetto; Salah Ghamizi; Maxime Cordy
Robust Active Learning (RoAL): Countering Dynamic Adversaries in Active Learning with Elastic Weight Consolidation. (80%)Ricky Maulana Fajri; Yulong Pei; Lu Yin; Mykola Pechenizkiy
Achieving Data Efficient Neural Networks with Hybrid Concept-based Models. (70%)Tobias A. Opsahl; Vegard Antun
Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms. (67%)Francesco Villani; Dario Lazzaro; Antonio Emanuele Cinà; Matteo Dell'Amico; Battista Biggio; Fabio Roli
BadMerging: Backdoor Attacks Against Model Merging. (47%)Jinghuai Zhang; Jianfeng Chi; Zheng Li; Kunlin Cai; Yang Zhang; Yuan Tian
BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning. (38%)Asif Hanif; Fahad Shamshad; Muhammad Awais; Muzammal Naseer; Fahad Shahbaz Khan; Karthik Nandakumar; Salman Khan; Rao Muhammad Anwer
Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models. (1%)Murat Kucukosmanoglu; Javier O. Garcia; Justin Brooks; Kanika Bansal
2024-08-13
DePatch: Towards Robust Adversarial Patch for Evading Person Detectors in the Real World. (92%)Jikang Cheng; Ying Zhang; Zhongyuan Wang; Zou Qin; Chen Li
Robust Black-box Testing of Deep Neural Networks using Co-Domain Coverage. (12%)Aishwarya Gupta; Indranil Saha; Piyush Rai
Imagen 3. (11%)Imagen-Team-Google; :; Jason Baldridge; Jakob Bauer; Mukul Bhutani; Nicole Brichtova; Andrew Bunner; Lluis Castrejon; Kelvin Chan; Yichang Chen; Sander Dieleman; Yuqing Du; Zach Eaton-Rosen; Hongliang Fei; Freitas Nando de; Yilin Gao; Evgeny Gladchenko; Sergio Gómez Colmenarejo; Mandy Guo; Alex Haig; Will Hawkins; Hexiang Hu; Huilian Huang; Tobenna Peter Igwe; Christos Kaplanis; Siavash Khodadadeh; Yelin Kim; Ksenia Konyushkova; Karol Langner; Eric Lau; Rory Lawton; Shixin Luo; Soňa Mokrá; Henna Nandwani; Yasumasa Onoe; Aäron van den Oord; Zarana Parekh; Jordi Pont-Tuset; Hang Qi; Rui Qian; Deepak Ramachandran; Poorva Rane; Abdullah Rashwan; Ali Razavi; Robert Riachi; Hansa Srinivasan; Srivatsan Srinivasan; Robin Strudel; Benigno Uria; Oliver Wang; Su Wang; Austin Waters; Chris Wolff; Auriel Wright; Zhisheng Xiao; Hao Xiong; Keyang Xu; Zee Marc van; Junlin Zhang; Katie Zhang; Wenlei Zhou; Konrad Zolna; Ola Aboubakar; Canfer Akbulut; Oscar Akerlund; Isabela Albuquerque; Nina Anderson; Marco Andreetto; Lora Aroyo; Ben Bariach; David Barker; Sherry Ben; Dana Berman; Courtney Biles; Irina Blok; Pankil Botadra; Jenny Brennan; Karla Brown; John Buckley; Rudy Bunel; Elie Bursztein; Christina Butterfield; Ben Caine; Viral Carpenter; Norman Casagrande; Ming-Wei Chang; Solomon Chang; Shamik Chaudhuri; Tony Chen; John Choi; Dmitry Churbanau; Nathan Clement; Matan Cohen; Forrester Cole; Mikhail Dektiarev; Vincent Du; Praneet Dutta; Tom Eccles; Ndidi Elue; Ashley Feden; Shlomi Fruchter; Frankie Garcia; Roopal Garg; Weina Ge; Ahmed Ghazy; Bryant Gipson; Andrew Goodman; Dawid Górny; Sven Gowal; Khyatti Gupta; Yoni Halpern; Yena Han; Susan Hao; Jamie Hayes; Jonathan Heek; Amir Hertz; Ed Hirst; Emiel Hoogeboom; Tingbo Hou; Heidi Howard; Mohamed Ibrahim; Dirichi Ike-Njoku; Joana Iljazi; Vlad Ionescu; William Isaac; Reena Jana; Gemma Jennings; Donovon Jenson; Xuhui Jia; Kerry Jones; Xiaoen Ju; Ivana Kajic; Christos Kaplanis; Burcu Karagol Ayan; Jacob Kelly; Suraj 
Kothawade; Christina Kouridi; Ira Ktena; Jolanda Kumakaw; Dana Kurniawan; Dmitry Lagun; Lily Lavitas; Jason Lee; Tao Li; Marco Liang; Maggie Li-Calis; Yuchi Liu; Javier Lopez Alberca; Matthieu Kim Lorrain; Peggy Lu; Kristian Lum; Yukun Ma; Chase Malik; John Mellor; Thomas Mensink; Inbar Mosseri; Tom Murray; Aida Nematzadeh; Paul Nicholas; Signe Nørly; João Gabriel Oliveira; Guillermo Ortiz-Jimenez; Michela Paganini; Tom Le Paine; Roni Paiss; Alicia Parrish; Anne Peckham; Vikas Peswani; Igor Petrovski; Tobias Pfaff; Alex Pirozhenko; Ryan Poplin; Utsav Prabhu; Yuan Qi; Matthew Rahtz; Cyrus Rashtchian; Charvi Rastogi; Amit Raul; Ali Razavi; Sylvestre-Alvise Rebuffi; Susanna Ricco; Felix Riedel; Dirk Robinson; Pankaj Rohatgi; Bill Rosgen; Sarah Rumbley; Moonkyung Ryu; Anthony Salgado; Tim Salimans; Sahil Singla; Florian Schroff; Candice Schumann; Tanmay Shah; Eleni Shaw; Gregory Shaw; Brendan Shillingford; Kaushik Shivakumar; Dennis Shtatnov; Zach Singer; Evgeny Sluzhaev; Valerii Sokolov; Thibault Sottiaux; Florian Stimberg; Brad Stone; David Stutz; Yu-Chuan Su; Eric Tabellion; Shuai Tang; David Tao; Kurt Thomas; Gregory Thornton; Andeep Toor; Cristian Udrescu; Aayush Upadhyay; Cristina Vasconcelos; Alex Vasiloff; Andrey Voynov; Amanda Walker; Luyu Wang; Miaosen Wang; Simon Wang; Stanley Wang; Qifei Wang; Yuxiao Wang; Ágoston Weisz; Olivia Wiles; Chenxia Wu; Xingyu Federico Xu; Andrew Xue; Jianbo Yang; Luo Yu; Mete Yurtoglu; Ali Zand; Han Zhang; Jiageng Zhang; Catherine Zhao; Adilet Zhaxybay; Miao Zhou; Shengqi Zhu; Zhenkai Zhu; Dawn Bloxwich; Mahyar Bordbar; Luis C. Cobo; Eli Collins; Shengyang Dai; Tulsee Doshi; Anca Dragan; Douglas Eck; Demis Hassabis; Sissie Hsiao; Tom Hume; Koray Kavukcuoglu; Helen King; Jack Krawczyk; Yeqing Li; Kathy Meier-Hellstern; Andras Orban; Yury Pinsky; Amar Subramanya; Oriol Vinyals; Ting Yu; Yori Zwols
2024-08-12
Towards Adversarial Robustness via Debiased High-Confidence Logit Alignment. (99%)Kejia Zhang; Juanjuan Weng; Zhiming Luo; Shaozi Li
Fooling SHAP with Output Shuffling Attacks. (81%)Jun Yuan; Aritra Dasgupta
Understanding Byzantine Robustness in Federated Learning with A Black-box Server. (13%)Fangyuan Zhao; Yuexiang Xie; Xuebin Ren; Bolin Ding; Shusen Yang; Yaliang Li
2024-08-11
Improving Adversarial Transferability with Neighbourhood Gradient Information. (99%)Haijing Guo; Jiafeng Wang; Zhaoyu Chen; Kaixun Jiang; Lingyi Hong; Pinxue Guo; Jinglun Li; Wenqiang Zhang
Classifier Guidance Enhances Diffusion-based Adversarial Purification by Preserving Predictive Information. (98%)Mingkun Zhang; Jianing Li; Wei Chen; Jiafeng Guo; Xueqi Cheng
Kov: Transferable and Naturalistic Black-Box LLM Attacks using Markov Decision Processes and Tree Search. (9%)Robert J. Moss
2024-08-10
ReToMe-VA: Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack. (99%)Ziyi Gao; Kai Chen; Zhipeng Wei; Tingshu Mou; Jingjing Chen; Zhiyu Tan; Hao Li; Yu-Gang Jiang
StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model. (99%)Ziyin Zhou; Ke Sun; Zhongxi Chen; Huafeng Kuang; Xiaoshuai Sun; Rongrong Ji
PointNCBW: Towards Dataset Ownership Verification for Point Clouds via Negative Clean-label Backdoor Watermark. (13%)Cheng Wei; Yang Wang; Kuofeng Gao; Shuo Shao; Yiming Li; Zhibo Wang; Zhan Qin
2024-08-09
Modeling Electromagnetic Signal Injection Attacks on Camera-based Smart Systems: Applications and Mitigation. (84%)Youqian Zhang; Michael Cheung; Chunxi Yang; Xinwei Zhai; Zitong Shen; Xinyu Ji; Eugene Y. Fu; Sze-Yiu Chau; Xiapu Luo
A Jailbroken GenAI Model Can Cause Substantial Harm: GenAI-powered Applications are Vulnerable to PromptWares. (2%)Stav Cohen; Ron Bitton; Ben Nassi
Rag and Roll: An End-to-End Evaluation of Indirect Prompt Manipulations in LLM-based Application Frameworks. (2%)Gianluca De Stefano; Lea Schönherr; Giancarlo Pellegrino
TrajFM: A Vehicle Trajectory Foundation Model for Region and Task Transferability. (1%)Yan Lin; Tonglong Wei; Zeyu Zhou; Haomin Wen; Jilin Hu; Shengnan Guo; Youfang Lin; Huaiyu Wan
2024-08-08
Constructing Adversarial Examples for Vertical Federated Learning: Optimal Client Corruption through Multi-Armed Bandit. (99%)Duanyi Yao; Songze Li; Ye Xue; Jin Liu
Adversarially Robust Industrial Anomaly Detection Through Diffusion Model. (99%)Yuanpu Cao; Lu Lin; Jinghui Chen
Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness. (99%)Stanislav Fort; Balaji Lakshminarayanan
Eliminating Backdoors in Neural Code Models via Trigger Inversion. (92%)Weisong Sun; Yuchen Chen; Chunrong Fang; Yebo Feng; Yuan Xiao; An Guo; Quanjun Zhang; Yang Liu; Baowen Xu; Zhenyu Chen
Improving Network Interpretability via Explanation Consistency Evaluation. (81%)Hefeng Wu; Hao Jiang; Keze Wang; Ziyi Tang; Xianghuan He; Liang Lin
Unveiling Hidden Visual Information: A Reconstruction Attack Against Adversarial Visual Information Hiding. (80%)Jonggyu Jang; Hyeonsu Lyu; Seongjin Hwang; Hyun Jong Yang
Towards Resilient and Efficient LLMs: A Comparative Study of Efficiency, Performance, and Adversarial Robustness. (67%)Xiaojing Fan; Chunliang Tao
Stability Analysis of Equivariant Convolutional Representations Through The Lens of Equivariant Multi-layered CKNs. (61%)Soutrik Roy Chowdhury
h4rm3l: A Dynamic Benchmark of Composable Jailbreak Attacks for LLM Safety Assessment. (15%)Moussa Koulako Bala Doumbouya; Ananjan Nandi; Gabriel Poesia; Davide Ghilardi; Anna Goldie; Federico Bianchi; Dan Jurafsky; Christopher D. Manning
VideoQA in the Era of LLMs: An Empirical Study. (1%)Junbin Xiao; Nanxin Huang; Hangyu Qin; Dongyang Li; Yicong Li; Fengbin Zhu; Zhulin Tao; Jianxing Yu; Liang Lin; Tat-Seng Chua; Angela Yao
2024-08-07
Investigating Adversarial Attacks in Software Analytics via Machine Learning Explainability. (99%)MD Abdul Awal; Mrigank Rochan; Chanchal K. Roy
Enhancing Output Diversity Improves Conjugate Gradient-based Adversarial Attacks. (98%)Keiichiro Yamamura; Issa Oe; Hiroki Ishikura; Katsuki Fujisawa
EdgeShield: A Universal and Efficient Edge Computing Framework for Robust AI. (83%)Duo Zhong; Bojing Li; Xiang Chen; Chenchen Liu
EnJa: Ensemble Jailbreak on Large Language Models. (83%)Jiahao Zhang; Zilong Wang; Ruofan Wang; Xingjun Ma; Yu-Gang Jiang
MORTAR: A Model-based Runtime Action Repair Framework for AI-enabled Cyber-Physical Systems. (76%)Renzhi Wang; Zhehua Zhou; Jiayang Song; Xuan Xie; Xiaofei Xie; Lei Ma
LaFA: Latent Feature Attacks on Non-negative Matrix Factorization. (38%)Minh Vu; Ben Nebgen; Erik Skau; Geigh Zollicoffer; Juan Castorena; Kim Rasmussen; Boian Alexandrov; Manish Bhattarai
MTDSense: AI-Based Fingerprinting of Moving Target Defense Techniques in Software-Defined Networking. (26%)Tina Moghaddam; Guowei Yang; Chandra Thapa; Seyit Camtepe; Dan Dongseong Kim
FDI: Attack Neural Code Generation Systems through User Feedback Channel. (5%)Zhensu Sun; Xiaoning Du; Xiapu Luo; Fu Song; David Lo; Li Li
Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation. (1%)Guy Amir; Shahaf Bassan; Guy Katz
Decoding Biases: Automated Methods and LLM Judges for Gender Bias Detection in Language Models. (1%)Shachi H Kumar; Saurav Sahay; Sahisnu Mazumder; Eda Okur; Ramesh Manuvinakurike; Nicole Beckage; Hsuan Su; Hung-yi Lee; Lama Nachman
2024-08-06
Adversarial Robustness of Open-source Text Classification Models and Fine-Tuning Chains. (98%)Hao Qin; Mingyang Li; Junjie Wang; Qing Wang
Sample-agnostic Adversarial Perturbation for Vision-Language Pre-training Models. (98%)Haonan Zheng; Wen Jiang; Xinyang Deng; Wenrui Li
Simple Perturbations Subvert Ethereum Phishing Transactions Detection: An Empirical Analysis. (92%)Ahod Alghureid; David Mohaisen
Attacks and Defenses for Generative Diffusion Models: A Comprehensive Survey. (64%)Vu Tuan Truong; Luan Ba Dang; Long Bao Le
A Study on Prompt Injection Attack Against LLM-Integrated Mobile Robotic Systems. (2%)Wenxiao Zhang; Xiangrui Kong; Conan Dewitt; Thomas Braunl; Jin B. Hong
2024-08-05
On the Robustness of Malware Detectors to Adversarial Samples. (99%)Muhammad Salman; Benjamin Zi Hao Zhao; Hassan Jameel Asghar; Muhammad Ikram; Sidharth Kaushik; Mohamed Ali Kaafar
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense. (84%)Qilei Li; Ahmed M. Abdelmoniem
SEAS: Self-Evolving Adversarial Safety Optimization for Large Language Models. (38%)Muxi Diao; Rumei Li; Shiyang Liu; Guogang Liao; Jingang Wang; Xunliang Cai; Weiran Xu
Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models. (13%)Zi Liang; Haibo Hu; Qingqing Ye; Yaxin Xiao; Haoyang Li
Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services. (13%)Shaopeng Fu; Xuexue Sun; Ke Qing; Tianhang Zheng; Di Wang
Can Reinforcement Learning Unlock the Hidden Dangers in Aligned Large Language Models? (8%)Mohammad Bahrami Karkevandi; Nishant Vishwamitra; Peyman Najafirad
RCDM: Enabling Robustness for Conditional Diffusion Model. (4%)Weifeng Xu; Xiang Zhu; Xiaoyong Li
Compromising Embodied Agents with Contextual Backdoor Attacks. (4%)Aishan Liu; Yuguang Zhou; Xianglong Liu; Tianyuan Zhang; Siyuan Liang; Jiakai Wang; Yanjun Pu; Tianlin Li; Junqi Zhang; Wenbo Zhou; Qing Guo; Dacheng Tao
Practical Attacks against Black-box Code Completion Engines. (4%)Slobodan Jenko; Jingxuan He; Niels Mündler; Mark Vero; Martin Vechev
2024-08-04
A Survey and Evaluation of Adversarial Attacks for Object Detection. (99%)Khoi Nguyen Tiet Nguyen; Wenyu Zhang; Kangkang Lu; Yuhuan Wu; Xingjian Zheng; Hui Li Tan; Liangli Zhen
AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning. (99%)Xin Wang; Kai Chen; Xingjun Ma; Zhineng Chen; Jingjing Chen; Yu-Gang Jiang
Label Augmentation for Neural Networks Robustness. (98%)Fatemeh Amerehi; Patrick Healy
Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification. (76%)Honglin Gao; Gaoxi Xiao
Model Hijacking Attack in Federated Learning. (75%)Zheng Li; Siyuan Wu; Ruichuan Chen; Paarijaat Aditya; Istemi Ekin Akkus; Manohar Vanga; Min Zhang; Hao Li; Yang Zhang
Robustness of Watermarking on Text-to-Image Diffusion Models. (22%)Xiaodong Wu; Xiangman Li; Jianbing Ni
FovEx: Human-inspired Explanations for Vision Transformers and Convolutional Neural Networks. (1%)Mahadev Prasad Panda; Matteo Tiezzi; Martina Vilas; Gemma Roig; Bjoern M. Eskofier; Dario Zanca
2024-08-03
ALIF: Low-Cost Adversarial Audio Attacks on Black-Box Speech Platforms using Linguistic Features. (99%)Peng Cheng; Yuwei Wang; Peng Huang; Zhongjie Ba; Xiaodong Lin; Feng Lin; Li Lu; Kui Ren
Joint Universal Adversarial Perturbations with Interpretations. (99%)Liang-bo Ning; Zeyu Dai; Wenqi Fan; Jingran Su; Chao Pan; Luning Wang; Qing Li
Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers. (99%)Weijie Zheng; Xingjun Ma; Hanxun Huang; Zuxuan Wu; Yu-Gang Jiang
2024-08-02
Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics. (98%)Alexander Gushchin; Khaled Abud; Georgii Bychkov; Ekaterina Shumitskaya; Anna Chistyakova; Sergey Lavrushkin; Bader Rasheed; Kirill Malyshev; Dmitriy Vatolin; Anastasia Antsiferova
Trustworthy Machine Learning under Social and Adversarial Data Sources. (83%)Han Shao
EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody. (80%)Coen Schoof; Stefanos Koffas; Mauro Conti; Stjepan Picek
Interpreting Global Perturbation Robustness of Image Models using Axiomatic Spectral Importance Decomposition. (61%)Róisín Luo; James McDermott; Colm O'Riordan
Assessing Robustness of Machine Learning Models using Covariate Perturbations. (33%)Arun Prakash R; Anwesha Bhattacharyya; Joel Vaughan; Vijayan N. Nair
Certifiably Robust Encoding Schemes. (31%)Aman Saxena; Tom Wollschläger; Nicola Franco; Jeanette Miriam Lorenz; Stephan Günnemann
Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs. (2%)Peng Ding; Jingyu Wu; Jun Kuang; Dan Ma; Xuezhi Cao; Xunliang Cai; Shi Chen; Jiajun Chen; Shujian Huang
2024-08-01
Autonomous LLM-Enhanced Adversarial Attack for Text-to-Motion. (99%)Honglei Miao; Fan Ma; Ruijie Quan; Kun Zhan; Yi Yang
OTAD: An Optimal Transport-Induced Robust Model for Agnostic Adversarial Attack. (99%)Kuo Gai; Sicong Wang; Shihua Zhang
Securing the Diagnosis of Medical Imaging: An In-depth Analysis of AI-Resistant Attacks. (99%)Angona Biswas; MD Abdullah Al Nasim; Kishor Datta Gupta; Roy George; Abdur Rashid
CERT-ED: Certifiably Robust Text Classification for Edit Distance. (98%)Zhuoqun Huang; Neil G Marchant; Olga Ohrimenko; Benjamin I. P. Rubinstein
ADBM: Adversarial diffusion bridge model for reliable adversarial purification. (96%)Xiao Li; Wenxuan Sun; Huanran Chen; Qiongxiu Li; Yining Liu; Yingzhe He; Jie Shi; Xiaolin Hu
Discrete Randomized Smoothing Meets Quantum Computing. (41%)Tom Wollschläger; Aman Saxena; Nicola Franco; Jeanette Miriam Lorenz; Stephan Günnemann
Adversarial Text Rewriting for Text-aware Recommender Systems. (13%)Sejoon Oh; Gaurav Verma; Srijan Kumar
MAARS: Multi-Rate Attack-Aware Randomized Scheduling for Securing Real-time Systems. (1%)Arkaprava Sain; Sunandan Adhikary; Ipsita Koley; Soumyajit Dey
Pathway to Secure and Trustworthy 6G for LLMs: Attacks, Defense, and Opportunities. (1%)Sunder Ali Khowaja; Parus Khuwaja; Kapal Dev; Hussam Al Hamadi; Engin Zeydan
2024-07-31
Cross-modality Information Check for Detecting Jailbreaking in Multimodal Large Language Models. (98%)Yue Xu; Xiuyuan Qi; Zhan Qin; Wenjie Wang
On the Perturbed States for Transformed Input-robust Reinforcement Learning. (92%)Tung M. Luu; Haeyong Kang; Tri Ton; Thanh Nguyen; Chang D. Yoo
The Llama 3 Herd of Models. (62%)Abhimanyu Jack Dubey; Abhinav Jack Jauhri; Abhinav Jack Pandey; Abhishek Jack Kadian; Ahmad Jack Al-Dahle; Aiesha Jack Letman; Akhil Jack Mathur; Alan Jack Schelten; Amy Jack Yang; Angela Jack Fan; Anirudh Jack Goyal; Anthony Jack Hartshorn; Aobo Jack Yang; Archi Jack Mitra; Archie Jack Sravankumar; Artem Jack Korenev; Arthur Jack Hinsvark; Arun Jack Rao; Aston Jack Zhang; Aurelien Jack Rodriguez; Austen Jack Gregerson; Ava Jack Spataru; Baptiste Jack Roziere; Bethany Jack Biron; Binh Jack Tang; Bobbie Jack Chern; Charlotte Jack Caucheteux; Chaya Jack Nayak; Chloe Jack Bi; Chris Jack Marra; Chris Jack McConnell; Christian Jack Keller; Christophe Jack Touret; Chunyang Jack Wu; Corinne Jack Wong; Cristian Canton Jack Ferrer; Cyrus Jack Nikolaidis; Damien Jack Allonsius; Daniel Jack Song; Danielle Jack Pintz; Danny Jack Livshits; David Jack Esiobu; Dhruv Jack Choudhary; Dhruv Jack Mahajan; Diego Jack Garcia-Olano; Diego Jack Perino; Dieuwke Jack Hupkes; Egor Jack Lakomkin; Ehab Jack AlBadawy; Elina Jack Lobanova; Emily Jack Dinan; Eric Michael Jack Smith; Filip Jack Radenovic; Frank Jack Zhang; Gabriel Jack Synnaeve; Gabrielle Jack Lee; Georgia Lewis Jack Anderson; Graeme Jack Nail; Gregoire Jack Mialon; Guan Jack Pang; Guillem Jack Cucurell; Hailey Jack Nguyen; Hannah Jack Korevaar; Hu Jack Xu; Hugo Jack Touvron; Iliyan Jack Zarov; Imanol Arrieta Jack Ibarra; Isabel Jack Kloumann; Ishan Jack Misra; Ivan Jack Evtimov; Jade Jack Copet; Jaewon Jack Lee; Jan Jack Geffert; Jana Jack Vranes; Jason Jack Park; Jay Jack Mahadeokar; Jeet Jack Shah; der Linde Jelmer Jack van; Jennifer Jack Billock; Jenny Jack Hong; Jenya Jack Lee; Jeremy Jack Fu; Jianfeng Jack Chi; Jianyu Jack Huang; Jiawen Jack Liu; Jie Jack Wang; Jiecao Jack Yu; Joanna Jack Bitton; Joe Jack Spisak; Jongsoo Jack Park; Joseph Jack Rocca; Joshua Jack Johnstun; Joshua Jack Saxe; Junteng Jack Jia; Kalyan Vasuden Jack Alwala; Kartikeya Jack Upasani; Kate Jack Plawiak; Ke Jack Li; 
Kenneth Jack Heafield; Kevin Jack Stone; Khalid Jack El-Arini; Krithika Jack Iyer; Kshitiz Jack Malik; Kuenley Jack Chiu; Kunal Jack Bhalla; Lauren Jack Rantala-Yeary; der Maaten Laurens Jack van; Lawrence Jack Chen; Liang Jack Tan; Liz Jack Jenkins; Louis Jack Martin; Lovish Jack Madaan; Lubo Jack Malo; Lukas Jack Blecher; Lukas Jack Landzaat; Oliveira Luke Jack de; Madeline Jack Muzzi; Mahesh Jack Pasupuleti; Mannat Jack Singh; Manohar Jack Paluri; Marcin Jack Kardas; Mathew Jack Oldham; Mathieu Jack Rita; Maya Jack Pavlova; Melanie Jack Kambadur; Mike Jack Lewis; Min Jack Si; Mitesh Kumar Jack Singh; Mona Jack Hassan; Naman Jack Goyal; Narjes Jack Torabi; Nikolay Jack Bashlykov; Nikolay Jack Bogoychev; Niladri Jack Chatterji; Olivier Jack Duchenne; Onur Jack Çelebi; Patrick Jack Alrassy; Pengchuan Jack Zhang; Pengwei Jack Li; Petar Jack Vasic; Peter Jack Weng; Prajjwal Jack Bhargava; Pratik Jack Dubal; Praveen Jack Krishnan; Punit Singh Jack Koura; Puxin Jack Xu; Qing Jack He; Qingxiao Jack Dong; Ragavan Jack Srinivasan; Raj Jack Ganapathy; Ramon Jack Calderer; Ricardo Silveira Jack Cabral; Robert Jack Stojnic; Roberta Jack Raileanu; Rohit Jack Girdhar; Rohit Jack Patel; Romain Jack Sauvestre; Ronnie Jack Polidoro; Roshan Jack Sumbaly; Ross Jack Taylor; Ruan Jack Silva; Rui Jack Hou; Rui Jack Wang; Saghar Jack Hosseini; Sahana Jack Chennabasappa; Sanjay Jack Singh; Sean Jack Bell; Seohyun Sonia Jack Kim; Sergey Jack Edunov; Shaoliang Jack Nie; Sharan Jack Narang; Sharath Jack Raparthy; Sheng Jack Shen; Shengye Jack Wan; Shruti Jack Bhosale; Shun Jack Zhang; Simon Jack Vandenhende; Soumya Jack Batra; Spencer Jack Whitman; Sten Jack Sootla; Stephane Jack Collot; Suchin Jack Gururangan; Sydney Jack Borodinsky; Tamar Jack Herman; Tara Jack Fowler; Tarek Jack Sheasha; Thomas Jack Georgiou; Thomas Jack Scialom; Tobias Jack Speckbacher; Todor Jack Mihaylov; Tong Jack Xiao; Ujjwal Jack Karn; Vedanuj Jack Goswami; Vibhor Jack Gupta; Vignesh Jack Ramanathan; Viktor Jack 
Kerkez; Vincent Gonguet; Virginie Do; Vish Vogeti; Vladan Petrovic; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Whitney Meers; Xavier Martinet; Xiaodong Wang; Xiaoqing Ellen Tan; Xinfeng Xie; Xuchao Jia; Xuewei Wang; Yaelle Goldschlag; Yashesh Gaur; Yasmine Babaei; Yi Wen; Yiwen Song; Yuchen Zhang; Yue Li; Yuning Mao; Zacharie Delpierre Coudert; Zheng Yan; Zhengxing Chen; Zoe Papakipos; Aaditya Singh; Aaron Grattafiori; Abha Jain; Adam Kelsey; Adam Shajnfeld; Adithya Gangidi; Adolfo Victoria; Ahuva Goldstand; Ajay Menon; Ajay Sharma; Alex Boesenberg; Alex Vaughan; Alexei Baevski; Allie Feinstein; Amanda Kallet; Amit Sangani; Anam Yunus; Andrei Lupu; Andres Alvarado; Andrew Caples; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Ankit Ramchandani; Annie Franco; Aparajita Saraf; Arkabandhu Chowdhury; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Azadeh Yazdan; Beau James; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Beth Loyd; Beto De Paola; Bhargavi Paranjape; Bing Liu; Bo Wu; Boyu Ni; Braden Hancock; Bram Wasti; Brandon Spence; Brani Stojkovic; Brian Gamido; Britt Montalvo; Carl Parker; Carly Burton; Catalina Mejia; Changhan Wang; Changkyu Kim; Chao Zhou; Chester Hu; Ching-Hsiang Chu; Chris Cai; Chris Tindal; Christoph Feichtenhofer; Damon Civin; Dana Beaty; Daniel Kreymer; Daniel Li; Danny Wyatt; David Adkins; David Xu; Davide Testuggine; Delia David; Devi Parikh; Diana Liskovich; Didem Foss; Dingkang Wang; Duc Le; Dustin Holland; Edward Dowling; Eissa Jamil; 
Elaine Montgomery; Eleonora Presani; Emily Hahn; Emily Wood; Erik Brinkman; Esteban Arcaute; Evan Dunbar; Evan Smothers; Fei Sun; Felix Kreuk; Feng Tian; Firat Ozgenel; Francesco Caggioni; Francisco Guzmán; Frank Kanayet; Frank Seide; Gabriela Medina Florez; Gabriella Schwarz; Gada Badeer; Georgia Swee; Gil Halpern; Govind Thattai; Grant Herman; Grigory Sizov; Guangyi (Jack) Zhang; Guna Lakshminarayanan; Hamid Shojanazeri; Han Zou; Hannah Wang; Hanwen Zha; Haroun Habeeb; Harrison Rudolph; Helen Suk; Henry Aspegren; Hunter Goldman; Igor Molybog; Igor Tufanov; Irina-Elena Veliche; Itai Gat; Jake Weissman; James Geboski; James Kohli; Japhet Asher; Jean-Baptiste Gaya; Jeff Marcus; Jeff Tang; Jennifer Chan; Jenny Zhen; Jeremy Reizenstein; Jeremy Teboul; Jessica Zhong; Jian Jin; Jingyi Yang; Joe Cummings; Jon Carvill; Jon Shepard; Jonathan McPhie; Jonathan Torres; Josh Ginsburg; Junjie Wang; Kai Wu; Kam Hou U; Karan Saxena; Karthik Prasad; Kartikay Khandelwal; Katayoun Zand; Kathy Matosich; Kaushik Veeraraghavan; Kelly Michelena; Keqian Li; Kun Huang; Kunal Chawla; Kushal Lakhotia; Kyle Huang; Lailin Chen; Lakshya Garg; Lavender A; Leandro Silva; Lee Bell; Lei Zhang; Liangpeng Guo; Licheng Yu; Liron Moshkovich; Luca Wehrstedt; Madian Khabsa; Manav Avalani; Manish Bhatt; Maria Tsimpoukelli; Martynas Mankus; Matan Hasson; Matthew Lennie; Matthias Reso; Maxim Groshev; Maxim Naumov; Maya Lathi; Meghan Keneally; Michael L. 
Seltzer; Michal Valko; Michelle Restrepo; Mihir Patel; Mik Vyatskov; Mikayel Samvelyan; Mike Clark; Mike Macey; Mike Wang; Miquel Jubert Hermoso; Mo Metanat; Mohammad Rastegari; Munish Bansal; Nandhini Santhanam; Natascha Parks; Natasha White; Navyata Bawa; Nayan Singhal; Nick Egebo; Nicolas Usunier; Nikolay Pavlovich Laptev; Ning Dong; Ning Zhang; Norman Cheng; Oleg Chernoguz; Olivia Hart; Omkar Salpekar; Ozlem Kalinli; Parkin Kent; Parth Parekh; Paul Saab; Pavan Balaji; Pedro Rittner; Philip Bontrager; Pierre Roux; Piotr Dollar; Polina Zvyagina; Prashant Ratanchandani; Pritish Yuvraj; Qian Liang; Rachad Alao; Rachel Rodriguez; Rafi Ayub; Raghotham Murthy; Raghu Nayani; Rahul Mitra; Raymond Li; Rebekkah Hogan; Robin Battey; Rocky Wang; Rohan Maheswari; Russ Howes; Ruty Rinott; Sai Jayesh Bondu; Samyak Datta; Sara Chugh; Sara Hunt; Sargun Dhillon; Sasha Sidorov; Satadru Pan; Saurabh Verma; Seiji Yamamoto; Sharadh Ramaswamy; Shaun Lindsay; Shaun Lindsay; Sheng Feng; Shenghao Lin; Shengxin Cindy Zha; Shiva Shankar; Shuqiang Zhang; Shuqiang Zhang; Sinong Wang; Sneha Agarwal; Soji Sajuyigbe; Soumith Chintala; Stephanie Max; Stephen Chen; Steve Kehoe; Steve Satterfield; Sudarshan Govindaprasad; Sumit Gupta; Sungmin Cho; Sunny Virk; Suraj Subramanian; Sy Choudhury; Sydney Goldman; Tal Remez; Tamar Glaser; Tamara Best; Thilo Kohler; Thomas Robinson; Tianhe Li; Tianjun Zhang; Tim Matthews; Timothy Chou; Tzook Shaked; Varun Vontimitta; Victoria Ajayi; Victoria Montanez; Vijai Mohan; Vinay Satish Kumar; Vishal Mangla; Vlad Ionescu; Vlad Poenaru; Vlad Tiberiu 
Mihailescu; Vladimir Ivanov; Wei Li; Wenchen Wang; Wenwen Jiang; Wes Bouaziz; Will Constable; Xiaocheng Tang; Xiaofang Wang; Xiaojian Wu; Xiaolan Wang; Xide Xia; Xilun Wu; Xinbo Gao; Yanjun Chen; Ye Hu; Ye Jia; Ye Qi; Yenda Li; Yilin Zhang; Ying Zhang; Yossi Adi; Youngjin Nam; Yu (Sid) Wang; Yuchen Hao; Yundi Qian; Yuzi He; Zach Rait; Zachary DeVito; Zef Rosnbrick; Zhaoduo Wen; Zhenyu Yang; Zhiwei Zhao
Certifying Robustness of Learning-Based Keypoint Detection and Pose Estimation Methods. (22%)Xusheng Luo; Tianhao Wei; Simin Liu; Ziwei Wang; Luis Mattei-Mendez; Taylor Loper; Joshua Neighbor; Casidhe Hutchison; Changliu Liu
Vera Verto: Multimodal Hijacking Attack. (9%)Minxing Zhang; Ahmed Salem; Michael Backes; Yang Zhang
2024-07-30
Prompt-Driven Contrastive Learning for Transferable Adversarial Attacks. (99%)Hunmin Yang; Jongoh Jeong; Kuk-Jin Yoon
AI Safety in Practice: Enhancing Adversarial Robustness in Multimodal Image Captioning. (99%)Maisha Binte Rashid; Pablo Rivas
FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks. (99%)Hunmin Yang; Jongoh Jeong; Kuk-Jin Yoon
Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks. (99%)Yunfeng Diao; Naixin Zhai; Changtao Miao; Xun Yang; Meng Wang
Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models. (62%)Jiang Hao; Xiao Jin; Hu Xiaoguang; Chen Tianyou
DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers. (47%)C. A. Martínez-Mejía; J. Solano; J. Breier; D. Bucko; X. Hou
Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification. (16%)Boyang Zhang; Yicong Tan; Yun Shen; Ahmed Salem; Michael Backes; Savvas Zannettou; Yang Zhang
Bayesian Low-Rank LeArning (Bella): A Practical Approach to Bayesian Neural Networks. (1%)Bao Gia Doan; Afshar Shamsi; Xiao-Yu Guo; Arash Mohammadi; Hamid Alinejad-Rokny; Dino Sejdinovic; Damith C. Ranasinghe; Ehsan Abbasnejad
2024-07-29
Adversarial Robustness in RGB-Skeleton Action Recognition: Leveraging Attention Modality Reweighter. (99%)Chao Liu; Xin Liu; Zitong Yu; Yonghong Hou; Huanjing Yue; Jingyu Yang
Enhancing Adversarial Text Attacks on BERT Models with Projected Gradient Descent. (99%)Hetvi Waghela; Jaydip Sen; Sneha Rakshit
Detecting and Understanding Vulnerabilities in Language Models via Mechanistic Interpretability. (92%)Jorge García-Carrasco; Alejandro Maté; Juan Trujillo
From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks. (84%)Aditya Kulkarni; Vivek Balachandran; Dinil Mon Divakaran; Tamal Das
DDAP: Dual-Domain Anti-Personalization against Text-to-Image Diffusion Models. (68%)Jing Yang; Runping Xi; Yingxin Lai; Xun Lin; Zitong Yu
RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding. (50%)Keming Wu; Man Yao; Yuhong Chou; Xuerui Qiu; Rui Yang; Bo Xu; Guoqi Li
Can Editing LLMs Inject Harm? (9%)Canyu Chen; Baixiang Huang; Zekun Li; Zhaorun Chen; Shiyang Lai; Xiongxiao Xu; Jia-Chen Gu; Jindong Gu; Huaxiu Yao; Chaowei Xiao; Xifeng Yan; William Yang Wang; Philip Torr; Dawn Song; Kai Shu
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning. (3%)Baoyuan Wu; Hongrui Chen; Mingda Zhang; Zihao Zhu; Shaokui Wei; Danni Yuan; Mingli Zhu; Ruotong Wang; Li Liu; Chao Shen
ImagiNet: A Multi-Content Dataset for Generalizable Synthetic Image Detection via Contrastive Learning. (1%)Delyan Boychev; Radostin Cholakov
2024-07-28
Exploring the Adversarial Robustness of CLIP for AI-generated Image Detection. (80%)Vincenzo De Rosa; Fabrizio Guillaro; Giovanni Poggi; Davide Cozzolino; Luisa Verdoliva
2024-07-27
EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection. (99%)Shigang Liu; Di Cao; Junae Kim; Tamas Abraham; Paul Montague; Seyit Camtepe; Jun Zhang; Yang Xiang
Towards Clean-Label Backdoor Attacks in the Physical World. (98%)Thinh Dao; Cuong Chi Le; Khoa D Doan; Kok-Seng Wong
2024-07-26
Debiased Graph Poisoning Attack via Contrastive Surrogate Objective. (93%)Kanghoon Yoon; Yeonjun In; Namkyeong Lee; Kibum Kim; Chanyoung Park
Robust VAEs via Generating Process of Noise Augmented Data. (87%)Hiroo Irobe; Wataru Aoki; Kimihiro Yamazaki; Yuhui Zhang; Takumi Nakagawa; Hiroki Waida; Yuichiro Wada; Takafumi Kanamori
Adversarial Robustification via Text-to-Image Diffusion Models. (64%)Daewon Choi; Jongheon Jeong; Huiwon Jang; Jinwoo Shin
A Survey of Malware Detection Using Deep Learning. (5%)Ahmed Bensaoud; Jugal Kalita; Mahmoud Bensaoud
Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data. (1%)Hanyang Yuan; Jiarong Xu; Cong Wang; Ziqi Yang; Chunping Wang; Keting Yin; Yang Yang
UniForensics: Face Forgery Detection via General Facial Representation. (1%)Ziyuan Fang; Hanqing Zhao; Tianyi Wei; Wenbo Zhou; Ming Wan; Zhanyi Wang; Weiming Zhang; Nenghai Yu
2024-07-25
Sparse vs Contiguous Adversarial Pixel Perturbations in Multimodal Models: An Empirical Analysis. (99%)Cristian-Alexandru Botocan; Raphael Meier; Ljiljana Dolamic
Effects of Scale on Language Model Robustness. (96%)Nikolaus Howe; Ian McKenzie; Oskar Hollinsworth; Michał Zajac; Tom Tseng; Aaron Tucker; Pierre-Luc Bacon; Adam Gleave
A Unified Understanding of Adversarial Vulnerability Regarding Unimodal Models and Vision-Language Pre-training Models. (95%)Haonan Zheng; Xinyang Deng; Wen Jiang; Wenrui Li
RIDA: A Robust Attack Framework on Incomplete Graphs. (31%)Jianke Yu; Hanchen Wang; Chen Chen; Xiaoyang Wang; Wenjie Zhang; Ying Zhang
Adversarially Robust Decision Transformer. (22%)Xiaohang Tang; Afonso Marques; Parameswaran Kamalaruban; Ilija Bogunovic
Peak-Controlled Logits Poisoning Attack in Federated Distillation. (4%)Yuhan Tang; Aoxu Zhang; Zhiyuan Wu; Bo Gao; Tian Wen; Yuwei Wang; Sheng Sun
Network Inversion of Convolutional Neural Nets. (3%)Pirzada Suhail; Amit Sethi
Regret-Optimal Defense Against Stealthy Adversaries: A System Level Approach. (1%)Hiroyasu Tsukamoto; Joudi Hajar; Soon-Jo Chung; Fred Y. Hadaegh
2024-07-24
Physical Adversarial Attack on Monocular Depth Estimation via Shape-Varying Patches. (92%)Chenxing Zhao; Yang Li; Shihao Wu; Wenyi Tan; Shuangju Zhou; Quan Pan
FLRT: Fluent Student-Teacher Redteaming. (13%)T. Ben Thompson (Confirm Labs); Michael Sklar (Confirm Labs)
2024-07-23
S-E Pipeline: A Vision Transformer (ViT) based Resilient Classification Pipeline for Medical Imaging Against Adversarial Attacks. (87%)Neha A S; Vivek Chaturvedi; Muhammad Shafique
Algebraic Adversarial Attacks on Integrated Gradients. (86%)Lachlan Simpson; Federico Costanza; Kyle Millar; Adriel Cheng; Cheng-Chew Lim; Hong Gunn Chew
Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning. (41%)Xinwei Liu; Xiaojun Jia; Yuan Xun; Siyuan Liang; Xiaochun Cao
When AI Defeats Password Deception! A Deep Learning Framework to Distinguish Passwords and Honeywords. (13%)Jimmy Dani; Brandon McCulloh; Nitesh Saxena
Figure it Out: Analyzing-based Jailbreak Attack on Large Language Models. (8%)Shi Lin; Rongchang Li; Xun Wang; Changting Lin; Wenpeng Xing; Meng Han
RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent. (5%)Huiyu Xu; Wenhui Zhang; Zhibo Wang; Feng Xiao; Rui Zheng; Yunhe Feng; Zhongjie Ba; Kui Ren
2024-07-22
Enhancing Transferability of Targeted Adversarial Examples: A Self-Universal Perspective. (99%)Bowen Peng; Li Liu; Tianpeng Liu; Zhen Liu; Yongxiang Liu
Towards Robust Vision Transformer via Masked Adaptive Ensemble. (99%)Fudong Lin; Jiadong Lou; Xu Yuan; Nian-Feng Tzeng
Towards Efficient Transferable Preemptive Adversarial Defense. (99%)Hanrui Wang; Ching-Chun Chang; Chun-Shien Lu; Isao Echizen
On Feasibility of Intent Obfuscating Attacks. (98%)Zhaobin Li; Patrick Shafto
Poisoning with A Pill: Circumventing Detection in Federated Learning. (92%)Hanxi Guo; Hao Wang; Tao Song; Tianhang Zheng; Yang Hua; Haibing Guan; Xiangyu Zhang
Revisiting the Robust Alignment of Circuit Breakers. (70%)Leo Schwinn; Simon Geisler
Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs. (56%)Abhay Sheshadri; Aidan Ewart; Phillip Guo; Aengus Lynch; Cindy Wu; Vivek Hebbar; Henry Sleight; Asa Cooper Stickland; Ethan Perez; Dylan Hadfield-Menell; Stephen Casper
Imposter.AI: Adversarial Attacks with Hidden Intentions towards Aligned Large Language Models. (11%)Xiao Liu; Liangzhi Li; Tong Xiang; Fuying Ye; Lu Wei; Wangyue Li; Noa Garcia
Virtual Reality and Augmented Reality Security: A Reconnaissance and Vulnerability Assessment Approach. (1%)Sarina Dastgerdy
2024-07-21
Taxonomy Driven Fast Adversarial Training. (99%)Kun Tong; Chengze Jiang; Jie Gui; Yuan Cao
Failures to Find Transferable Image Jailbreaks Between Vision-Language Models. (74%)Rylan Schaeffer; Dan Valentine; Luke Bailey; James Chua; Cristóbal Eyzaguirre; Zane Durante; Joe Benton; Brando Miranda; Henry Sleight; John Hughes; Rajashree Agrawal; Mrinank Sharma; Scott Emmons; Sanmi Koyejo; Ethan Perez
A Learning-Based Attack Framework to Break SOTA Poisoning Defenses in Federated Learning. (73%)Yuxin Yang (College of Computer Science and Technology, Jilin University; Illinois Institute of Technology); Qiang Li (College of Computer Science and Technology, Jilin University); Chenfei Nie (College of Computer Science and Technology, Jilin University); Yuan Hong (University of Connecticut); Meng Pang (Nanchang University); Binghui Wang (Illinois Institute of Technology)
SeqMIA: Sequential-Metric Based Membership Inference Attack. (22%)Hao Li; Zheng Li; Siyuan Wu; Chengrui Hu; Yutong Ye; Min Zhang; Dengguo Feng; Yang Zhang
Explainable AI-based Intrusion Detection System for Industry 5.0: An Overview of the Literature, associated Challenges, the existing Solutions, and Potential Research Directions. (5%)Naseem Khan; Kashif Ahmad; Aref Al Tamimi; Mohammed M. Alani; Amine Bermak; Issa Khalil
Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective. (2%)Mariya Hendriksen; Shuo Zhang; Ridho Reinanda; Mohamed Yahya; Edgar Meij; Maarten de Rijke
2024-07-20
Sim-CLIP: Unsupervised Siamese Adversarial Fine-Tuning for Robust and Semantically-Rich Vision-Language Models. (68%)Md Zarif Hossain; Ahmed Imteaj
2024-07-19
Data Poisoning: An Overlooked Threat to Power Grid Resilience. (68%)Nora Agah; Javad Mohammadi; Alex Aved; David Ferris; Erika Ardiles Cruz; Philip Morrone
Human-Interpretable Adversarial Prompt Attack on Large Language Models with Situational Context. (4%)Nilanjana Das; Edward Raff; Manas Gaur
Adversarial Databases Improve Success in Retrieval-based Large Language Models. (1%)Sean Wu; Michael Koo; Li Yo Kao; Andy Black; Lesley Blum; Fabien Scalzo; Ira Kurtz
On the Robustness of Fully-Spiking Neural Networks in Open-World Scenarios using Forward-Only Learning Algorithms. (1%)Erik B. Terres-Escudero; Javier Del Ser; Aitor Martínez-Seras; Pablo Garcia-Bringas
2024-07-18
Cross-Task Attack: A Self-Supervision Generative Framework Based on Attention Shift. (99%)Qingyuan Zeng; Yunpeng Gong; Min Jiang
Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking. (98%)Yunpeng Gong; Chuangliang Zhang; Yongjie Hou; Lifei Chen; Min Jiang
Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models. (75%)Zhuo Chen; Jiawei Liu; Haotan Liu; Qikai Cheng; Fan Zhang; Wei Lu; Xiaozhong Liu
Prover-Verifier Games improve legibility of LLM outputs. (61%)Jan Hendrik Kirchner; Yining Chen; Harri Edwards; Jan Leike; Nat McAleese; Yuri Burda
Compressed models are NOT miniature versions of large models. (47%)Rohit Raj Rai; Rishant Pal; Amit Awekar
Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls. (16%)Aras Selvi; Eleonora Kreacic; Mohsen Ghassemi; Vamsi Potluru; Tucker Balch; Manuela Veloso
A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks. (10%)Yixiang Qiu; Hao Fang; Hongyao Yu; Bin Chen; MeiKang Qiu; Shu-Tao Xia
2024-07-17
PG-Attack: A Precision-Guided Adversarial Attack Framework Against Vision Foundation Models for Autonomous Driving. (98%)Jiyuan Fu; Zhaoyu Chen; Kaixun Jiang; Haijing Guo; Shuyong Gao; Wenqiang Zhang
Preventing Catastrophic Overfitting in Fast Adversarial Training: A Bi-level Optimization Perspective. (98%)Zhaoxin Wang; Handing Wang; Cong Tian; Yaochu Jin
Transferable Adversarial Facial Images for Privacy Protection. (96%)Minghui Li; Jiangxiong Wang; Hao Zhang; Ziqi Zhou; Shengshan Hu; Xiaobing Pei
Context-Aware Fuzzing for Robustness Enhancement of Deep Learning Models. (86%)Haipeng Wang; Zhengyuan Wei; Qilin Zhou; Wing-Kwong Chan
Krait: A Backdoor Attack Against Graph Prompt Tuning. (83%)Ying Song; Rita Singh; Balaji Palanisamy
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases. (61%)Zhaorun Chen; Zhen Xiang; Chaowei Xiao; Dawn Song; Bo Li
Benchmarking Robust Self-Supervised Learning Across Diverse Downstream Tasks. (12%)Antoni Kowalczuk; Jan Dubiński; Atiyeh Ashari Ghomi; Yi Sui; George Stein; Jiapeng Wu; Jesse C. Cresswell; Franziska Boenisch; Adam Dziedzic
Direct Unlearning Optimization for Robust and Safe Text-to-Image Models. (12%)Yong-Hyun Park; Sangdoo Yun; Jin-Hwa Kim; Junho Kim; Geonhui Jang; Yonghyun Jeong; Junghyo Jo; Gayoung Lee
Contrastive Adversarial Training for Unsupervised Domain Adaptation. (2%)Jiahong Chen; Zhilin Zhang; Lucy Li; Behzad Shahrasbi; Arjun Mishra
Rethinking Video-Text Understanding: Retrieval from Counterfactually Augmented Data. (1%)Wufei Ma; Kai Li; Zhongshi Jiang; Moustafa Meshry; Qihao Liu; Huiyu Wang; Christian Häne; Alan Yuille
2024-07-16
Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection. (99%)Youheng Sun; Shengming Yuan; Xuanhan Wang; Lianli Gao; Jingkuan Song
Variational Randomized Smoothing for Sample-Wise Adversarial Robustness. (99%)Ryo Hase; Ye Wang; Toshiaki Koike-Akino; Jing Liu; Kieran Parsons
AEMIM: Adversarial Examples Meet Masked Image Modeling. (99%)Wenzhao Xiang; Chang Liu; Hang Su; Hongyang Yu
Investigating Imperceptibility of Adversarial Attacks on Tabular Data: An Empirical Analysis. (99%)Zhipeng He; Chun Ouyang; Laith Alzubaidi; Alistair Barros; Catarina Moreira
Enhancing TinyML Security: Study of Adversarial Attack Transferability. (96%)Parin Shah; Yuvaraj Govindarajulu; Pavan Kulkarni; Manojkumar Parmar
UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening. (82%)Siyuan Cheng; Guangyu Shen; Kaiyuan Zhang; Guanhong Tao; Shengwei An; Hanxi Guo; Shiqing Ma; Xiangyu Zhang
Relaxing Graph Transformers for Adversarial Attacks. (81%)Philipp Foth; Lukas Gosch; Simon Geisler; Leo Schwinn; Stephan Günnemann
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks. (76%)Shuli Jiang; Swanand Ravindra Kadhe; Yi Zhou; Farhan Ahmed; Ling Cai; Nathalie Baracaldo
Learning on Graphs with Large Language Models(LLMs): A Deep Dive into Model Robustness. (33%)Kai Guo; Zewen Liu; Zhikai Chen; Hongzhi Wen; Wei Jin; Jiliang Tang; Yi Chang
SegSTRONG-C: Segmenting Surgical Tools Robustly On Non-adversarial Generated Corruptions -- An EndoVis'24 Challenge. (33%)Hao Ding; Tuxun Lu; Yuqian Zhang; Ruixing Liang; Hongchao Shu; Lalithkumar Seenivasan; Yonghao Long; Qi Dou; Cong Gao; Mathias Unberath
Does Refusal Training in LLMs Generalize to the Past Tense? (15%)Maksym Andriushchenko; Nicolas Flammarion
Cover-separable Fixed Neural Network Steganography via Deep Generative Models. (8%)Guobiao Li; Sheng Li; Zhenxing Qian; Xinpeng Zhang
Model Inversion Attacks Through Target-Specific Conditional Diffusion Models. (4%)Ouxiang Li; Yanbin Hao; Zhicai Wang; Bin Zhu; Shuo Wang; Zaixi Zhang; Fuli Feng
IPA-NeRF: Illusory Poisoning Attack Against Neural Radiance Fields. (1%)Wenxiang Jiang (Ocean University of China); Hanwei Zhang (Saarland University; Institute of Intelligent Software, Guangzhou); Shuo Zhao (Ocean University of China); Zhongwen Guo (Ocean University of China); Hao Wang (Xidian University, China)
2024-07-15
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks. (99%)Quang H. Nguyen; Nguyen Ngoc-Hieu; The-Anh Ta; Thanh Nguyen-Tang; Kok-Seng Wong; Hoang Thanh-Tung; Khoa D. Doan
Backdoor Attacks against Image-to-Image Networks. (88%)Wenbo Jiang; Hongwei Li; Jiaming He; Rui Zhang; Guowen Xu; Tianwei Zhang; Rongxing Lu
Towards Adversarially Robust Vision-Language Models: Insights from Design Choices and Prompt Formatting Techniques. (88%)Rishika Bhagwatkar; Shravan Nayak; Reza Bayat; Alexis Roger; Daniel Z Kaplan; Pouya Bashivan; Irina Rish
PartImageNet++ Dataset: Scaling up Part-based Models for Robust Recognition. (80%)Xiao Li; Yining Liu; Na Dong; Sitian Qin; Xiaolin Hu
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks. (67%)Lukas Gosch; Mahalakshmi Sabanayagam; Debarghya Ghoshdastidar; Stephan Günnemann
Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models. (41%)Qingcheng Zeng; Mingyu Jin; Qinkai Yu; Zhenting Wang; Wenyue Hua; Zihao Zhou; Guangyan Sun; Yanda Meng; Shiqing Ma; Qifan Wang; Felix Juefei-Xu; Kaize Ding; Fan Yang; Ruixiang Tang; Yongfeng Zhang
Feature Inference Attack on Shapley Values. (12%)Xinjian Luo; Yangfan Jiang; Xiaokui Xiao
2024-07-14
Transferable 3D Adversarial Shape Completion using Diffusion Models. (99%)Xuelong Dai; Bin Xiao
Towards Robust Recommendation via Decision Boundary-aware Graph Contrastive Learning. (92%)Jiakai Tang; Sunhao Dai; Zexu Sun; Xu Chen; Jun Xu; Wenhui Yu; Lantao Hu; Peng Jiang; Han Li
Defending Against Repetitive-based Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off. (76%)Cheng-Yi Lee; Ching-Chia Kao; Cheng-Han Yeh; Chun-Shien Lu; Chia-Mu Yu; Chu-Song Chen
CLIP-Guided Networks for Transferable Targeted Attacks. (76%)Hao Fang; Jiawei Kong; Bin Chen; Tao Dai; Hao Wu; Shu-Tao Xia
SENTINEL: Securing Indoor Localization against Adversarial Attacks with Capsule Neural Networks. (10%)Danish Gufran; Pooja Anandathirtha; Sudeep Pasricha
2024-07-13
Augmented Neural Fine-Tuning for Efficient Backdoor Purification. (68%)Nazmul Karim; Abdullah Al Arafat; Umar Khalid; Zhishan Guo; Nazanin Rahnavard
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning. (67%)Shihua Sun; Shridatt Sugrim; Angelos Stavrou; Haining Wang
Team up GBDTs and DNNs: Advancing Efficient and Effective Tabular Prediction with Tree-hybrid MLPs. (1%)Jiahuan Yan; Jintai Chen; Qianxing Wang; Danny Z. Chen; Jian Wu
2024-07-12
SemiAdv: Query-Efficient Black-Box Adversarial Attack with Unlabeled Images. (99%)Mingyuan Fan; Yang Liu; Cen Chen; Ximeng Liu
Evaluating the Adversarial Robustness of Semantic Segmentation: Trying Harder Pays Off. (97%)Levente Halmosi; Bálint Mohos; Márk Jelasity
TAPI: Towards Target-Specific and Adversarial Prompt Injection against Code LLMs. (93%)Yuchen Yang; Hongwei Yao; Bingrun Yang; Yiling He; Yiming Li; Tianwei Zhang; Zhan Qin; Kui Ren
Deep Adversarial Defense Against Multilevel-Lp Attacks. (87%)Ren Wang; Yuxuan Li; Alfred Hero
Robust Yet Efficient Conformal Prediction Sets. (61%)Soroush H. Zargarbashi; Mohammad Sadegh Akhondzadeh; Aleksandar Bojchevski
Refusing Safe Prompts for Multi-modal Large Language Models. (16%)Zedian Shao; Hongbin Liu; Yuepeng Hu; Neil Zhenqiang Gong
Security Matrix for Multimodal Agents on Mobile Devices: A Systematic and Proof of Concept Study. (15%)Yulong Yang; Xinshan Yang; Shuaidong Li; Chenhao Lin; Zhengyu Zhao; Chao Shen; Tianwei Zhang
MaPPing Your Model: Assessing the Impact of Adversarial Attacks on LLM-based Programming Assistants. (13%)John Heibel; Daniel Lowd
BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning. (5%)Ning Wang; Shanghao Shi; Yang Xiao; Yimin Chen; Y. Thomas Hou; Wenjing Lou
Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training. (1%)Youliang Yuan; Wenxiang Jiao; Wenxuan Wang; Jen-tse Huang; Jiahao Xu; Tian Liang; Pinjia He; Zhaopeng Tu
2024-07-11
Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems. (99%)Yuxin Cao; Yumeng Zhu; Derui Wang; Sheng Wen; Minhui Xue; Jin Lu; Hao Ge
Boosting Adversarial Transferability for Skeleton-based Action Recognition via Exploring the Model Posterior Space. (99%)Yunfeng Diao; Baiqi Wu; Ruixuan Zhang; Xun Yang; Meng Wang; He Wang
Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses. (98%)Yuxin Yang (College of Computer Science and Technology, Jilin University; Illinois Institute of Technology); Qiang Li (College of Computer Science and Technology, Jilin University); Jinyuan Jia (The Pennsylvania State University); Yuan Hong (University of Connecticut); Binghui Wang (Illinois Institute of Technology)
HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks. (98%)Raffaele Mura; Giuseppe Floris; Luca Scionis; Giorgio Piras; Maura Pintor; Ambra Demontis; Giorgio Giacinto; Battista Biggio; Fabio Roli
DeCE: Deceptive Cross-Entropy Loss Designed for Defending Backdoor Attacks. (87%)Guang Yang; Yu Zhou; Xiang Chen; Xiangyu Zhang; Terry Yue Zhuo; David Lo; Taolue Chen
How to beat a Bayesian adversary. (81%)Zihan Ding; Kexin Jin; Jonas Latz; Chenguang Liu
Soft Prompts Go Hard: Steering Visual Language Models with Hidden Meta-Instructions. (74%)Tingwei Zhang; Collin Zhang; John X. Morris; Eugene Bagdasarian; Vitaly Shmatikov
DART: A Solution for Decentralized Federated Learning Model Robustness Analysis. (47%)Chao Feng; Alberto Huertas Celdrán; Jan von der Assen; Enrique Tomás Martínez Beltrán; Gérôme Bovet; Burkhard Stiller
Quantitative Evaluation of the Saliency Map for Alzheimer's Disease Classifier with Anatomical Segmentation. (8%)Yihan Zhang; Xuanshuo Zhang; Wei Wu; Haohan Wang
Enhancing Privacy of Spatiotemporal Federated Learning against Gradient Inversion Attacks. (8%)Lele Zheng; Yang Cao; Renhe Jiang; Kenjiro Taura; Yulong Shen; Sheng Li; Masatoshi Yoshikawa
Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation. (1%)Riccardo Cantini; Giada Cosenza; Alessio Orsino; Domenico Talia
Deep Learning for Network Anomaly Detection under Data Contamination: Evaluating Robustness and Mitigating Performance Degradation. (1%)D'Jeff K. Nkashama; Jordan Masakuna Félicien; Arian Soltani; Jean-Charles Verdier; Pierre-Martin Tardif; Marc Frappier; Froduald Kabanza
2024-07-10
Adversarial Attacks and Defenses on Text-to-Image Diffusion Models: A Survey. (99%)Chenyu Zhang; Mingwang Hu; Wenhui Li; Lanjun Wang
A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends. (38%)Daizong Liu; Mingyu Yang; Xiaoye Qu; Pan Zhou; Wei Hu; Yu Cheng
Model-agnostic clean-label backdoor mitigation in cybersecurity environments. (31%)Giorgio Severi; Simona Boboila; John Holodnak; Kendra Kratkiewicz; Rauf Izmailov; Michael J. De Lucia; Alina Oprea
Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities. (11%)Tianjie Ju; Yiting Wang; Xinbei Ma; Pengzhou Cheng; Haodong Zhao; Yulong Wang; Lifeng Liu; Jian Xie; Zhuosheng Zhang; Gongshen Liu
Invisible Optical Adversarial Stripes on Traffic Sign against Autonomous Vehicles. (8%)Dongfang Guo; Yuting Wu; Yimin Dai; Pengfei Zhou; Xin Lou; Rui Tan
A Comprehensive Survey on the Security of Smart Grid: Challenges, Mitigations, and Future Research Opportunities. (2%)Arastoo Zibaeirad; Farnoosh Koleini; Shengping Bi; Tao Hou; Tao Wang
Was it Slander? Towards Exact Inversion of Generative Language Models. (2%)Adrians Skapars; Edoardo Manino; Youcheng Sun; Lucas C. Cordeiro
CHILLI: A data context-aware perturbation method for XAI. (1%)Saif Anwar; Nathan Griffiths; Abhir Bhalerao; Thomas Popham
2024-07-09
A Hybrid Training-time and Run-time Defense Against Adversarial Attacks in Modulation Classification. (99%)Lu Zhang; Sangarapillai Lambotharan; Gan Zheng; Guisheng Liao; Ambra Demontis; Fabio Roli
Universal Multi-view Black-box Attack against Object Detectors via Layout Optimization. (99%)Donghua Wang; Wen Yao; Tingsong Jiang; Chao Li; Xiaoqian Chen
DLOVE: A new Security Evaluation Tool for Deep Learning Based Watermarking Techniques. (98%)Sudev Kumar Padhi; Sk. Subidh Ali
Countermeasures Against Adversarial Examples in Radio Signal Classification. (97%)Lu Zhang; Sangarapillai Lambotharan; Gan Zheng; Basil AsSadhan; Fabio Roli
Improving the Transferability of Adversarial Examples by Feature Augmentation. (93%)Donghua Wang; Wen Yao; Tingsong Jiang; Xiaohu Zheng; Junqi Wu; Xiaoqian Chen
Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning. (26%)Yuqi Jia; Minghong Fang; Hongbin Liu; Jinghuai Zhang; Neil Zhenqiang Gong
The Quantum Imitation Game: Reverse Engineering of Quantum Machine Learning Models. (15%)Archisman Ghosh; Swaroop Ghosh
Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective. (13%)Yu-An Liu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Yixing Fan; Xueqi Cheng
Attack GAN (AGAN ): A new Security Evaluation Tool for Perceptual Encryption. (10%)Umesh Kashyap; Sudev Kumar Padhi; Sk. Subidh Ali
Performance Evaluation of Knowledge Graph Embedding Approaches under Non-adversarial Attacks. (8%)Sourabh Kapoor; Arnab Sharma; Michael Röder; Caglar Demir; Axel-Cyrille Ngonga Ngomo
Exploring the Causality of End-to-End Autonomous Driving. (1%)Jiankun Li; Hao Li; Jiangjiang Liu; Zhikang Zou; Xiaoqing Ye; Fan Wang; Jizhou Huang; Hua Wu; Haifeng Wang
Distribution System Reconfiguration to Mitigate Load Altering Attacks via Stackelberg Games. (1%)Sajjad Maleki; Subhash Lakshminarayana; Charalambos Konstantinou; E. Veronica Belmaga
2024-07-08
Shedding More Light on Robust Classifiers under the lens of Energy-based Models. (98%)Mujtaba Hussain Mirza; Maria Rosaria Briglia; Senad Beadini; Iacopo Masi
Non-Robust Features are Not Always Useful in One-Class Classification. (92%)Matthew Lau; Haoran Wang; Alec Helbling; Matthew Hull; ShengYun Peng; Martin Andreoni; Willian T. Lunardi; Wenke Lee
Exposing Privacy Gaps: Membership Inference Attack on Preference Data for LLM Alignment. (1%)Qizhang Feng; Siva Rajesh Kasa; Hyokun Yun; Choon Hui Teo; Sravan Babu Bodapati
2024-07-07
Rethinking Targeted Adversarial Attacks For Neural Machine Translation. (99%)Junjie Wu; Lemao Liu; Wei Bi; Dit-Yan Yeung
Mjolnir: Breaking the Shield of Perturbation-Protected Gradients via Adaptive Diffusion. (64%)Xuan Liu; Siqi Cai; Qihua Zhou; Song Guo; Ruibin Li; Kaiwei Lin
An accurate detection is not all you need to combat label noise in web-noisy datasets. (1%)Paul Albert; Jack Valmadre; Eric Arazo; Tarun Krishna; Noel E. O'Connor; Kevin McGuinness
Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach. (1%)Oladipo A. Madamidola; Felix Ngobigha; Adnane Ez-zizi
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense. (1%)Qi Zhou; Zipeng Ye; Yubo Tang; Wenjian Luo; Yuhui Shi; Yan Jia
2024-07-06
A Novel Bifurcation Method for Observation Perturbation Attacks on Reinforcement Learning Agents: Load Altering Attacks on a Cyber Physical Power System. (99%)Kiernan Broda-Milian; Ranwa Al-Mallah; Hanane Dagdougui
Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning. (13%)Binhao Ma; Tianhang Zheng; Hongsheng Hu; Di Wang; Shuo Wang; Zhongjie Ba; Zhan Qin; Kui Ren
GCON: Differentially Private Graph Convolutional Network via Objective Perturbation. (12%)Jianxin Wei; Yizheng Zhu; Xiaokui Xiao; Ergute Bao; Yin Yang; Kuntai Cai; Beng Chin Ooi
2024-07-05
Remembering Everything Makes You Vulnerable: A Limelight on Machine Unlearning for Personalized Healthcare Sector. (98%)Ahan Chatterjee; Sai Anirudh Aryasomayajula; Rajat Chaudhari; Subhajit Paul; Vishwa Mohan Singh
Jailbreak Attacks and Defenses Against Large Language Models: A Survey. (92%)Sibo Yi; Yule Liu; Zhen Sun; Tianshuo Cong; Xinlei He; Jiaxing Song; Ke Xu; Qi Li
Controlling Whisper: Universal Acoustic Adversarial Attacks to Control Speech Foundation Models. (91%)Vyas Raina; Mark Gales
Self-Supervised Representation Learning for Adversarial Attack Detection. (68%)Yi Li; Plamen Angelov; Neeraj Suri
On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks. (61%)Zesen Liu; Tianshuo Cong; Xinlei He; Qi Li
Late Breaking Results: Fortifying Neural Networks: Safeguarding Against Adversarial Attacks with Stochastic Computing. (54%)Faeze S. Banitaba; Sercan Aygun; M. Hassan Najafi
Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density. (38%)Peiyu Yang; Naveed Akhtar; Mubarak Shah; Ajmal Mian
Data Poisoning Attacks in Intelligent Transportation Systems: A Survey. (2%)Feilong Wang; Xin Wang; Xuegang Ban
Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape. (2%)Tuan Nguyen; Dung Thuy Nguyen; Khoa D Doan; Kok-Seng Wong
2024-07-04
TrackPGD: A White-box Attack using Binary Masks against Robust Transformer Trackers. (99%)Fatemeh Nourilenjan Nokabadi; Yann Batiste Pequignot; Jean-Francois Lalonde; Christian Gagné
Protecting Deep Learning Model Copyrights with Adversarial Example-Free Reuse Detection. (99%)Xiaokun Luan; Xiyue Zhang; Jingyi Wang; Meng Sun
Adversarial Robustness of VAEs across Intersectional Subgroups. (99%)Chethan Krishnamurthy Ramanaik; Arjun Roy; Eirini Ntoutsi
Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness. (92%)Kejia Zhang; Juanjuan Weng; Yuanzheng Cai; Zhiming Luo; Shaozi Li
Securing Multi-turn Conversational Language Models Against Distributed Backdoor Triggers. (68%)Terry Tong; Jiashu Xu; Qin Liu; Muhao Chen
Charging Ahead: A Hierarchical Adversarial Framework for Counteracting Advanced Cyber Threats in EV Charging Stations. (15%)Mohammed Al-Mehdhar; Abdullatif Albaseer; Mohamed Abdallah; Ala Al-Fuqaha
T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models. (13%)Zhongqi Wang; Jie Zhang; Shiguang Shan; Xilin Chen
Automated Progressive Red Teaming. (2%)Bojian Jiang; Yi Jing; Tianhao Shen; Tong Wu; Qing Yang; Deyi Xiong
Quantifying Prediction Consistency Under Model Multiplicity in Tabular LLMs. (1%)Faisal Hamman; Pasan Dissanayake; Saumitra Mishra; Freddy Lecue; Sanghamitra Dutta
Certifiably Robust Image Watermark. (1%)Zhengyuan Jiang; Moyang Guo; Yuepeng Hu; Jinyuan Jia; Neil Zhenqiang Gong
2024-07-03
A Wolf in Sheep's Clothing: Practical Black-box Adversarial Attacks for Evading Learning-based Windows Malware Detection in the Wild. (99%)Xiang Ling; Zhiyu Wu; Bin Wang; Wei Deng; Jingzheng Wu; Shouling Ji; Tianyue Luo; Yanjun Wu
$L_p$-norm Distortion-Efficient Adversarial Attack. (99%)Chao Zhou; Yuan-Gen Wang; Zi-jia Wang; Xiangui Kang
SPLITZ: Certifiable Robustness via Split Lipschitz Randomized Smoothing. (98%)Meiyu Zhong; Ravi Tandon
JailbreakHunter: A Visual Analytics Approach for Jailbreak Prompts Discovery from Large-Scale Human-LLM Conversational Datasets. (83%)Zhihua Jin; Shiyi Liu; Haotian Li; Xun Zhao; Huamin Qu
Venomancer: Towards Imperceptible and Target-on-Demand Backdoor Attacks in Federated Learning. (74%)Son Nguyen; Thinh Nguyen; Khoa Doan; Kok-Seng Wong
A Geometric Framework for Adversarial Vulnerability in Machine Learning. (70%)Brian Bell
Self-Evaluation as a Defense Against Adversarial Attacks on LLMs. (41%)Hannah Brown; Leon Lin; Kenji Kawaguchi; Michael Shieh
Backdoor Graph Condensation. (16%)Jiahao Wu; Ning Lu; Zeiyu Dai; Wenqi Fan; Shengcai Liu; Qing Li; Ke Tang
Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks. (10%)Zhexin Zhang; Junxiao Yang; Pei Ke; Shiyao Cui; Chujie Zheng; Hongning Wang; Minlie Huang
Federated Learning for Zero-Day Attack Detection in 5G and Beyond V2X Networks. (2%)Abdelaziz Amara Korba; Abdelwahab Boualouache; Bouziane Brik; Rabah Rahal; Yacine Ghamri-Doudane; Sidi Mohammed Senouci
An Empirical Study on Capability of Large Language Models in Understanding Code Semantics. (1%)Thu-Trang Nguyen; Thanh Trong Vu; Hieu Dinh Vo; Son Nguyen
On Large Language Models in National Security Applications. (1%)William N. Caballero; Phillip R. Jenkins
Purification Of Contaminated Convolutional Neural Networks Via Robust Recovery: An Approach with Theoretical Guarantee in One-Hidden-Layer Case. (1%)Hanxiao Lu; Zeyu Huang; Ren Wang
2024-07-02
Secure Semantic Communication via Paired Adversarial Residual Networks. (99%)Boxiang He; Fanggang Wang; Tony Q. S. Quek
EvolBA: Evolutionary Boundary Attack under Hard-label Black Box condition. (99%)Ayane Tajima; Satoshi Ono
Adversarial Magnification to Deceive Deepfake Detection through Super Resolution. (98%)Davide Alessandro Coccomini; Roberto Caldelli; Giuseppe Amato; Fabrizio Falchi; Claudio Gennaro
Breach By A Thousand Leaks: Unsafe Information Leakage in 'Safe' AI Responses. (80%)David Glukhov; Ziwen Han; Ilia Shumailov; Vardan Papyan; Nicolas Papernot
Light-weight Fine-tuning Method for Defending Adversarial Noise in Pre-trained Medical Vision-Language Models. (76%)Xu Han; Linghao Jin; Xuezhe Ma; Xiaofeng Liu
Parameter Matching Attack: Enhancing Practical Applicability of Availability Attacks. (50%)Yu Zhe; Jun Sakuma
Towards More Realistic Extraction Attacks: An Adversarial Perspective. (22%)Yash More; Prakhar Ganesh; Golnoosh Farnadi
On the Robustness of Graph Reduction Against GNN Backdoor. (13%)Yuxuan Zhu; Michael Mandulak; Kerui Wu; George Slota; Yuseok Jeon; Ka-Ho Chow; Lei Yu
MALT Powers Up Adversarial Attacks. (13%)Odelia Melamed; Gilad Yehudai; Adi Shamir
Face Reconstruction Transfer Attack as Out-of-Distribution Generalization. (2%)Yoon Gyo Jung; Jaewoo Park; Xingbo Dong; Hojin Park; Andrew Beng Jin Teoh; Octavia Camps
Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather. (1%)Muhammad Zaeem Shahzad; Muhammad Abdullah Hanif; Muhammad Shafique
2024-07-01
Multi-View Black-Box Physical Attacks on Infrared Pedestrian Detectors Using Adversarial Infrared Grid. (98%)Kalibinuer Tiliwalidi; Chengyin Hu; Weiwen Shi
DeepiSign-G: Generic Watermark to Stamp Hidden DNN Parameters for Self-contained Tracking. (82%)Alsharif Abuadbba; Nicholas Rhodes; Kristen Moore; Bushra Sabir; Shuo Wang; Yansong Gao
Looking From the Future: Multi-order Iterations Can Enhance Adversarial Attack Transferability. (81%)Zijian Ying; Qianmu Li; Tao Wang; Zhichao Lian; Shunmei Meng; Xuyun Zhang
QUEEN: Query Unlearning against Model Extraction. (75%)Huajie Chen; Tianqing Zhu; Lefeng Zhang; Bo Liu; Derui Wang; Wanlei Zhou; Minhui Xue
Formal Verification of Object Detection. (56%)Avraham Raviv; Yizhak Y. Elboher; Michelle Aluf-Medina; Yael Leibovich Weiss; Omer Cohen; Roy Assa; Guy Katz; Hillel Kugler
SoP: Unlock the Power of Social Facilitation for Automatic Jailbreak Attack. (13%)Yan Yang; Zeguan Xiao; Xin Lu; Hongru Wang; Hailiang Huang; Guanhua Chen; Yun Chen
Securing Distributed Network Digital Twin Systems Against Model Poisoning Attacks. (8%)Zifan Zhang; Minghong Fang; Mingzhe Chen; Gaolei Li; Xi Lin; Yuchen Liu
A Fingerprint for Large Language Models. (2%)Zhiguang Yang; Hanzhou Wu
Unveiling the Unseen: Exploring Whitebox Membership Inference through the Lens of Explainability. (1%)Chenxi Li; Abhinav Kumar; Zhen Guo; Jie Hou; Reza Tourani
Unaligning Everything: Or Aligning Any Text to Any Image in Multimodal Models. (1%)Shaeke Salman; Md Montasir Bin Shams; Xiuwen Liu
2024-06-30
Learning Robust 3D Representation from CLIP via Dual Denoising. (67%)Shuqing Luo; Bowen Qu; Wei Gao
Consistency Purification: Effective and Efficient Diffusion Purification towards Certified Robustness. (13%)Yiquan Li; Zhongzhu Chen; Kun Jin; Jiongxiao Wang; Bo Li; Chaowei Xiao
UWBAD: Towards Effective and Imperceptible Jamming Attacks Against UWB Ranging Systems with COTS Chips. (2%)Yuqiao Yang; Zhongjie Wu; Yongzhao Zhang; Ting Chen; Jun Li; Jie Yang; Wenhao Liu; Xiaosong Zhang; Ruicong Shi; Jingwei Li; Yu Jiang; Zhuo Su
2024-06-29
Query-Efficient Hard-Label Black-Box Attack against Vision Transformers. (99%)Chao Zhou; Xiaowen Shi; Yuan-Gen Wang
2024-06-28
Deceptive Diffusion: Generating Synthetic Adversarial Examples. (99%)Lucas Beerens; Catherine F. Higham; Desmond J. Higham
DiffuseDef: Improved Robustness to Adversarial Attacks. (95%)Zhenhao Li; Marek Rei; Lucia Specia
Emotion Loss Attacking: Adversarial Attack Perception for Skeleton based on Multi-dimensional Features. (92%)Feng Liu; Qing Xu; Qijian Zheng
Steering cooperation: Adversarial attacks on prisoner's dilemma in complex networks. (92%)Kazuhiro Takemoto
IDT: Dual-Task Adversarial Attacks for Privacy Protection. (88%)Pedro Faustini; Shakila Mahjabin Tonni; Annabelle McIver; Qiongkai Xu; Mark Dras
Backdoor Attack in Prompt-Based Continual Learning. (22%)Trang Nguyen; Anh Tran; Nhat Ho
Virtual Context: Enhancing Jailbreak Attacks with Special Token Injection. (11%)Yuqi Zhou; Lin Lu; Hanchi Sun; Pan Zhou; Lichao Sun
GRACE: Graph-Regularized Attentive Convolutional Entanglement with Laplacian Smoothing for Robust DeepFake Video Detection. (1%)Chih-Chung Hsu; Shao-Ning Chen; Mei-Hsuan Wu; Yi-Fang Wang; Chia-Ming Lee; Yi-Shiuan Chou
2024-06-27
Zero-Query Adversarial Attack on Black-box Automatic Speech Recognition Systems. (99%)Zheng Fang; Tao Wang; Lingchen Zhao; Shenyi Zhang; Bowen Li; Yunjie Ge; Qi Li; Chao Shen; Qian Wang
Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness. (98%)Erh-Chung Chen; Pin-Yu Chen; I-Hsin Chung; Che-Rung Lee
Investigating and Defending Shortcut Learning in Personalized Diffusion Models. (87%)Yixin Liu; Ruoxi Chen; Lichao Sun
Data Poisoning Attacks to Locally Differentially Private Frequent Itemset Mining Protocols. (2%)Wei Tong; Haoyu Chen; Jiacheng Niu; Sheng Zhong
Context Matters: An Empirical Study of the Impact of Contextual Information in Temporal Question Answering Systems. (1%)Dan Schumacher; Fatemeh Haji; Tara Grey; Niharika Bandlamudi; Nupoor Karnik; Gagana Uday Kumar; Jason Cho-Yu Chiang; Paul Rad; Nishant Vishwamitra; Anthony Rios
2024-06-26
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers. (98%)Jonas Ngnawé; Sabyasachi Sahoo; Yann Pequignot; Frédéric Precioso; Christian Gagné
Revisiting Backdoor Attacks against Large Vision-Language Models. (62%)Siyuan Liang; Jiawei Liang; Tianyu Pang; Chao Du; Aishan Liu; Ee-Chien Chang; Xiaochun Cao
On Discrete Prompt Optimization for Diffusion Models. (62%)Ruochen Wang; Ting Liu; Cho-Jui Hsieh; Boqing Gong
Breaking the Barrier: Enhanced Utility and Robustness in Smoothed DRL Agents. (54%)Chung-En Sun; Sicun Gao; Tsui-Wei Weng
Poisoned LangChain: Jailbreak LLMs by LangChain. (26%)Ziqiu Wang; Jun Liu; Shengkai Zhang; Yang Yang
WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models. (12%)Liwei Jiang; Kavel Rao; Seungju Han; Allyson Ettinger; Faeze Brahman; Sachin Kumar; Niloofar Mireshghallah; Ximing Lu; Maarten Sap; Yejin Choi; Nouha Dziri
Adversarial Search Engine Optimization for Large Language Models. (9%)Fredrik Nestaas; Edoardo Debenedetti; Florian Tramèr
2024-06-25
CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems. (99%)Zhen Chen; Yong Liao; Youpeng Zhao; Zipeng Dai; Jian Zhao
Diffusion-based Adversarial Purification for Intrusion Detection. (98%)Mohamed Amine Merzouk; Erwan Beurier; Reda Yaich; Nora Boulahia-Cuppens; Frédéric Cuppens
Semantic Deep Hiding for Robust Unlearnable Examples. (76%)Ruohan Meng; Chenyu Yi; Yi Yu; Siyuan Yang; Bingquan Shen; Alex C. Kot
Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness. (67%)Vaclav Voracek
Robustly Optimized Deep Feature Decoupling Network for Fatty Liver Diseases Detection. (13%)Peng Huang; Shu Hu; Bo Peng; Jiashu Zhang; Xi Wu; Xin Wang
2024-06-24
Evaluating the Robustness of Deep-Learning Algorithm-Selection Models by Evolving Adversarial Instances. (98%)Emma Hart; Quentin Renau; Kevin Sim; Mohamad Alissa
UNICAD: A Unified Approach for Attack Detection, Noise Reduction and Novel Class Identification. (96%)Alvaro Lopez Pellicer; Kittipos Giatgong; Yi Li; Neeraj Suri; Plamen Angelov
ADVSCORE: A Metric for the Evaluation and Creation of Adversarial Benchmarks. (92%)Yoo Yeon Sung; Eve Fleisig; Ishani Mondal; Jordan Lee Boyd-Graber
Automated Adversarial Discovery for Safety Classifiers. (92%)Yash Kumar Lal; Preethi Lahoti; Aradhana Sinha; Yao Qin; Ananth Balashankar
Improving robustness to corruptions with multiplicative weight perturbations. (74%)Trung Trinh; Markus Heinonen; Luigi Acerbi; Samuel Kaski
BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models. (38%)Yi Zeng; Weiyu Sun; Tran Ngoc Huynh; Dawn Song; Bo Li; Ruoxi Jia
From Perfect to Noisy World Simulation: Customizable Embodied Multi-modal Perturbations for SLAM Robustness Benchmarking. (5%)Xiaohao Xu; Tianyi Zhang; Sibo Wang; Xiang Li; Yongqi Chen; Ye Li; Bhiksha Raj; Matthew Johnson-Roberson; Xiaonan Huang
Machine Unlearning Fails to Remove Data Poisoning Attacks. (2%)Martin Pawelczyk; Jimmy Z. Di; Yiwei Lu; Gautam Kamath; Ayush Sekhari; Seth Neel
2024-06-23
Towards unlocking the mystery of adversarial fragility of neural networks. (64%)Jingchao Gao; Raghu Mudumbai; Xiaodong Wu; Jirong Yi; Catherine Xu; Hui Xie; Weiyu Xu
CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack. (13%)Hanfeng Xia; Haibo Hong; Ruili Wang
Investigating the Influence of Prompt-Specific Shortcuts in AI Generated Text Detection. (8%)Choonghyun Park; Hyuhng Joon Kim; Junyeob Kim; Youna Kim; Taeuk Kim; Hyunsoo Cho; Hwiyeol Jo; Sang-goo Lee; Kang Min Yoo
On Instabilities of Unsupervised Denoising Diffusion Models in Magnetic Resonance Imaging Reconstruction. (2%)Tianyu Han; Sven Nebelung; Firas Khader; Jakob Nikolas Kather; Daniel Truhn
Understanding and Diagnosing Deep Reinforcement Learning. (1%)Ezgi Korkmaz
2024-06-22
The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI. (97%)Christopher Burger; Charles Walter; Thai Le
Federated Adversarial Learning for Robust Autonomous Landing Runway Detection. (2%)Yi Li; Plamen Angelov; Zhengxin Yu; Alvaro Lopez Pellicer; Neeraj Suri
Privacy Implications of Explainable AI in Data-Driven Systems. (1%)Fatima Ezzeddine
2024-06-21
ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification. (99%)Xianlong Wang; Shengshan Hu; Yechao Zhang; Ziqi Zhou; Leo Yu Zhang; Peng Xu; Wei Wan; Hai Jin
Deciphering the Definition of Adversarial Robustness for post-hoc OOD Detectors. (99%)Peter Lorenz; Mario Fernandez; Jens Müller; Ullrich Köthe
DataFreeShield: Defending Adversarial Attacks without Training Data. (45%)Hyeyoon Lee; Kanghyun Choi; Dain Kwon; Sunjong Park; Mayoore Selvarasa Jaiswal; Noseong Park; Jonghyun Choi; Jinho Lee
Large Language Models for Link Stealing Attacks Against Graph Neural Networks. (38%)Faqian Guan; Tianqing Zhu; Hui Sun; Wanlei Zhou; Philip S. Yu
Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference. (2%)Anton Xue; Avishree Khare; Rajeev Alur; Surbhi Goel; Eric Wong
MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations. (1%)Parikshit Solunke; Vitoria Guardieiro; Joao Rulff; Peter Xenopoulos; Gromit Yeuk-Yin Chan; Brian Barr; Luis Gustavo Nonato; Claudio Silva
2024-06-20
Enhancing robustness of data-driven SHM models: adversarial training with circle loss. (99%)Xiangli Yang; Xijie Deng; Hanwei Zhang; Yang Zou; Jianxi Yang
Exploring Layerwise Adversarial Robustness Through the Lens of t-SNE. (87%)Inês Valentim; Nuno Antunes; Nuno Lourenço
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning. (81%)Yujing Wang; Hainan Zhang; Sijia Wen; Wangjie Qiu; Binghui Guo
Jailbreaking as a Reward Misspecification Problem. (78%)Zhihui Xie; Jiahui Gao; Lei Li; Zhenguo Li; Qi Liu; Lingpeng Kong
Uniform Convergence of Adversarially Robust Classifiers. (68%)Rachel Morris; Ryan Murray
Prompt Injection Attacks in Defended Systems. (47%)Daniil Khomsky; Narek Maloyan; Bulat Nutfullin
MEAT: Median-Ensemble Adversarial Training for Improving Robustness and Generalization. (41%)Zhaozhe Hu; Jia-Li Yin; Bin Chen; Luojun Lin; Bo-Hao Chen; Ximeng Liu
Countering adversarial perturbations in graphs using error correcting codes. (22%)Saif Eddin Jabari
Steering Without Side Effects: Improving Post-Deployment Control of Language Models. (15%)Asa Cooper Stickland; Alexander Lyzhov; Jacob Pfau; Salsabila Mahdi; Samuel R. Bowman
Evaluating Implicit Bias in Large Language Models by Attacking From a Psychometric Perspective. (8%)Yuchen Wen; Keping Bi; Wei Chen; Jiafeng Guo; Xueqi Cheng
PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions. (5%)Sihan Ma; Jing Zhang; Qiong Cao; Dacheng Tao
Can you trust your explanations? A robustness test for feature attribution methods. (2%)Ilaria Vascotto; Alex Rodriguez; Alessandro Bonaita; Luca Bortolussi
SeCTIS: A Framework to Secure CTI Sharing. (1%)Dincy R. Arikkat; Mert Cihangiroglu; Mauro Conti; Rafidha Rehiman K. A.; Serena Nicolazzo; Antonino Nocera; Vinod P
2024-06-19
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning. (99%)Tao Wu; Xinwen Cao; Chao Wang; Shaojie Qiao; Xingping Xian; Lin Yuan; Canyixing Cui; Yanbing Liu
AGSOA: Graph Neural Network Targeted Attack Based on Average Gradient and Structure Optimization. (99%)Yang Chen; Bin Zhou
Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks. (99%)Tao Wu; Canyixing Cui; Xingping Xian; Shaojie Qiao; Chao Wang; Lin Yuan; Shui Yu
AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. (83%)Edoardo Debenedetti; Jie Zhang; Mislav Balunović; Luca Beurer-Kellner; Marc Fischer; Florian Tramèr
Enhancing Cross-Prompt Transferability in Vision-Language Models through Contextual Injection of Target Tokens. (62%)Xikang Yang; Xuehai Tang; Fuqing Zhu; Jizhong Han; Songlin Hu
Textual Unlearning Gives a False Sense of Unlearning. (16%)Jiacheng Du; Zhibo Wang; Kui Ren
Large-Scale Dataset Pruning in Adversarial Training through Data Importance Extrapolation. (9%)Björn Nieth; Thomas Altstidl; Leo Schwinn; Björn Eskofier
DPO: Dual-Perturbation Optimization for Test-time Adaptation in 3D Object Detection. (3%)Zhuoxiao Chen; Zixin Wang; Sen Wang; Zi Huang; Yadan Luo
ModSec-Learn: Boosting ModSecurity with Machine Learning. (2%)Christian Scano; Giuseppe Floris; Biagio Montaruli; Luca Demetrio; Andrea Valenza; Luca Compagna; Davide Ariu; Luca Piras; Davide Balzarotti; Battista Biggio
RobGC: Towards Robust Graph Condensation. (1%)Xinyi Gao; Hongzhi Yin; Tong Chen; Guanhua Ye; Wentao Zhang; Bin Cui
2024-06-18
Saliency Attention and Semantic Similarity-Driven Adversarial Perturbation. (99%)Hetvi Waghela; Jaydip Sen; Sneha Rakshit
NoiSec: Harnessing Noise for Security against Adversarial and Backdoor Attacks. (97%)Md Hasan Shahriar; Ning Wang; Y. Thomas Hou; Wenjing Lou
Adversarial Attacks on Multimodal Agents. (96%)Chen Henry Wu; Jing Yu Koh; Ruslan Salakhutdinov; Daniel Fried; Aditi Raghunathan
MaskPure: Improving Defense Against Text Adversaries with Stochastic Purification. (95%)Harrison Gietz; Jugal Kalita
Towards Trustworthy Unsupervised Domain Adaptation: A Representation Learning Perspective for Enhancing Robustness, Discrimination, and Generalization. (76%)Jia-Li Yin; Haoyuan Zheng; Ximeng Liu
Adversarial Attacks on Large Language Models in Medicine. (70%)Yifan Yang; Qiao Jin; Furong Huang; Zhiyong Lu
Can Go AIs be adversarially robust? (61%)Tom Tseng; Euan McLean; Kellin Pelrine; Tony T. Wang; Adam Gleave
DLP: towards active defense against backdoor attacks with decoupled learning process. (31%)Zonghao Ying; Bin Wu
Attack and Defense of Deep Learning Models in the Field of Web Attack Detection. (10%)Lijia Shi; Shihao Dong
SHIELD: Evaluation and Defense Strategies for Copyright Compliance in LLM Text Generation. (10%)Xiaoze Liu; Ting Sun; Tianyang Xu; Feijie Wu; Cunxiang Wang; Xiaoqian Wang; Jing Gao
CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models. (8%)Yuetai Li; Zhangchen Xu; Fengqing Jiang; Luyao Niu; Dinuka Sahabandu; Bhaskar Ramasubramanian; Radha Poovendran
Stealth edits for provably fixing or attacking large language models. (2%)Oliver J. Sutton; Qinghua Zhou; Wei Wang; Desmond J. Higham; Alexander N. Gorban; Alexander Bastounis; Ivan Y. Tyukin
PRePair: Pointwise Reasoning Enhance Pairwise Evaluating for Robust Instruction-Following Assessments. (1%)Hawon Jeong; ChaeHun Park; Jimin Hong; Jaegul Choo
2024-06-17
Obfuscating IoT Device Scanning Activity via Adversarial Example Generation. (99%)Haocong Li; Yaxin Zhang; Long Cheng; Wenjia Niu; Haining Wang; Qiang Li
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks. (98%)Tobias Lorenz; Marta Kwiatkowska; Mario Fritz
Harmonizing Feature Maps: A Graph Convolutional Approach for Enhancing Adversarial Robustness. (93%)Kejia Zhang; Juanjuan Weng; Junwei Wu; Guoqing Yang; Shaozi Li; Zhiming Luo
A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving. (82%)Yang Lou; Yi Zhu; Qun Song; Rui Tan; Chunming Qiao; Wei-Bin Lee; Jianping Wang
Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI. (76%)Robert Hönig; Javier Rando; Nicholas Carlini; Florian Tramèr
ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations. (22%)Yunze Xiao; Yujia Hu; Kenny Tsu Wei Choo; Roy Ka-wei Lee
Evading AI-Generated Content Detectors using Homoglyphs. (3%)Aldan Creo; Shushanta Pudasaini
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning. (2%)Yi Liu; Cong Wang; Xingliang Yuan
SoK: A Literature and Engineering Review of Regular Expression Denial of Service. (2%)Masudul Hasan Masud Bhuiyan; Berk Çakar; Ethan H Burmane; James C Davis; Cristian-Alexandru Staicu
Do Parameters Reveal More than Loss for Membership Inference? (1%)Anshuman Suri; Xiao Zhang; David Evans
Adversaries With Incentives: A Strategic Alternative to Adversarial Robustness. (1%)Maayan Ehrenberg; Roy Ganz; Nir Rosenfeld
2024-06-16
Improving Adversarial Robustness via Decoupled Visual Representation Masking. (99%)Decheng Liu; Tao Chen; Chunlei Peng; Nannan Wang; Ruimin Hu; Xinbo Gao
Imperceptible Face Forgery Attack via Adversarial Semantic Mask. (99%)Decheng Liu; Qixuan Su; Chunlei Peng; Nannan Wang; Xinbo Gao
KGPA: Robustness Evaluation for Large Language Models via Cross-Domain Knowledge Graphs. (92%)Aihua Pei; Zehua Yang; Shunan Zhu; Ruoxi Cheng; Ju Jia; Lina Wang
NBA: defensive distillation for backdoor removal via neural behavior alignment. (80%)Zonghao Ying; Bin Wu
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. (62%)Zhuoran Jin; Pengfei Cao; Chenhao Wang; Zhitao He; Hongbang Yuan; Jiachun Li; Yubo Chen; Kang Liu; Jun Zhao
ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates. (61%)Fengqing Jiang; Zhangchen Xu; Luyao Niu; Bill Yuchen Lin; Radha Poovendran
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition. (10%)Wenhan Yao; Jiangkun Yang; Yongqiang He; Jia Liu; Weiping Wen
RUPBench: Benchmarking Reasoning Under Perturbations for Robustness Evaluation in Large Language Models. (9%)Yuqing Wang; Yun Zhao
2024-06-15
Robust Image Classification in the Presence of Out-of-Distribution and Adversarial Samples Using Attractors in Neural Networks. (98%)Nasrin Alipour; Seyyed Ali SeyyedSalehi
E-SAGE: Explainability-based Defense Against Backdoor Attacks on Graph Neural Networks. (81%)Dingqiang Yuan; Xiaohua Xu; Lei Yu; Tongchang Han; Rongchang Li; Meng Han
Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models. (68%)Rui Ye; Jingyi Chai; Xiangrui Liu; Yaodong Yang; Yanfeng Wang; Siheng Chen
Enhancing Anomaly Detection Generalization through Knowledge Exposure: The Dual Effects of Augmentation. (1%)Mohammad Akhavan Anvari; Rojina Kashefi; Vahid Reza Khazaie; Mohammad Khalooei; Mohammad Sabokrou
2024-06-14
Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis. (99%)Zhang Chen; Luca Demetrio; Srishti Gupta; Xiaoyi Feng; Zhaoqiang Xia; Antonio Emanuele Cinà; Maura Pintor; Luca Oneto; Ambra Demontis; Battista Biggio; Fabio Roli
Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences. (93%)Saiyue Lyu; Shadab Shaikh; Frederick Shpilevskiy; Evan Shelhamer; Mathias Lécuyer
Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks. (75%)Zhiwei Zhang; Minhua Lin; Junjie Xu; Zongyu Wu; Enyan Dai; Suhang Wang
Automated Design of Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks. (67%)Matthias König; Xiyue Zhang; Holger H. Hoos; Marta Kwiatkowska; Jan N. van Rijn
Beyond Slow Signs in High-fidelity Model Extraction. (10%)Hanna Foerster; Robert Mullins; Ilia Shumailov; Jamie Hayes
Byzantine-Robust Decentralized Federated Learning. (8%)Minghong Fang; Zifan Zhang; Hairi; Prashant Khanduri; Jia Liu; Songtao Lu; Yuchen Liu; Neil Gong
2024-06-13
Improving Adversarial Robustness via Feature Pattern Consistency Constraint. (99%)Jiacong Hu; Jingwen Ye; Zunlei Feng; Jiazhen Yang; Shunyu Liu; Xiaotian Yu; Lingxiang Jia; Mingli Song
Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models. (98%)Changjiang Li; Ren Pang; Bochuan Cao; Jinghui Chen; Fenglong Ma; Shouling Ji; Ting Wang
MirrorCheck: Efficient Adversarial Defense for Vision-Language Models. (95%)Samar Fares; Klea Ziu; Toluwani Aremu; Nikita Durasov; Martin Takáč; Pascal Fua; Karthik Nandakumar; Ivan Laptev
Towards Evaluating the Robustness of Visual State Space Models. (89%)Hashmat Shadab Malik; Fahad Shamshad; Muzammal Naseer; Karthik Nandakumar; Fahad Shahbaz Khan; Salman Khan
Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. (11%)Zhao Xu; Fan Liu; Hao Liu
Steganalysis on Digital Watermarking: Is Your Defense Truly Impervious? (4%)Pei Yang; Hai Ci; Yiren Song; Mike Zheng Shou
Opening the Black Box: predicting the trainability of deep neural networks with reconstruction entropy. (2%)Yanick Thurn; Ro Jefferson; Johanna Erdmenger
Validation of human benchmark models for Automated Driving System approval: How competent and careful are they really? (1%)Pierluigi Olleja; Gustav Markkula; Jonas Bärgman
An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records. (1%)Joakim Edin; Maria Maistro; Lars Maaløe; Lasse Borgholt; Jakob D. Havtorn; Tuukka Ruotsalo
Large-Scale Evaluation of Open-Set Image Classification Techniques. (1%)Halil Bisgin; Andres Palechor; Mike Suter; Manuel Günther
Understanding Hallucinations in Diffusion Models through Mode Interpolation. (1%)Sumukh K Aithal; Pratyush Maini; Zachary C. Lipton; J. Zico Kolter
2024-06-12
I Don't Know You, But I Can Catch You: Real-Time Defense against Diverse Adversarial Patches for Object Detectors. (99%)Zijin Lin; Yue Zhao; Kai Chen; Jinwen He
On Evaluating Adversarial Robustness of Volumetric Medical Segmentation Models. (99%)Hashmat Shadab Malik; Numan Saeed; Asif Hanif; Muzammal Naseer; Mohammad Yaqub; Salman Khan; Fahad Shahbaz Khan
Adversarial Evasion Attack Efficiency against Large Language Models. (98%)João Vitorino; Eva Maia; Isabel Praça
Transformation-Dependent Adversarial Attacks. (89%)Yaoteng Tan; Zikui Cai; M. Salman Asif
When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search. (64%)Xuan Chen; Yuzhou Nie; Wenbo Guo; Xiangyu Zhang
RL-JACK: Reinforcement Learning-powered Black-box Jailbreaking Attack against LLMs. (62%)Xuan Chen; Yuzhou Nie; Lu Yan; Yunshu Mao; Wenbo Guo; Xiangyu Zhang
AdaNCA: Neural Cellular Automata As Adaptors For More Robust Vision Transformer. (22%)Yitao Xu; Tong Zhang; Sabine Süsstrunk
Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks. (13%)Peizhi Niu; Chao Pan; Siheng Chen; Olgica Milenkovic
On Security Weaknesses and Vulnerabilities in Deep Learning Systems. (8%)Zhongzheng Lai; Huaming Chen; Ruoxi Sun; Yu Zhang; Minhui Xue; Dong Yuan
Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition. (4%)Edoardo Debenedetti; Javier Rando; Daniel Paleka; Silaghi Fineas Florin; Dragos Albastroiu; Niv Cohen; Yuval Lemberg; Reshmi Ghosh; Rui Wen; Ahmed Salem; Giovanni Cherubin; Santiago Zanella-Beguelin; Robin Schmid; Victor Klemm; Takahiro Miki; Chenhao Li; Stefan Kraft; Mario Fritz; Florian Tramèr; Sahar Abdelnabi; Lea Schönherr
Improving Noise Robustness through Abstractions and its Impact on Machine Learning. (4%)Alfredo Ibias; Karol Capala; Varun Ravi Varma; Anna Drozdz; Jose Sousa
Exploiting Uncommon Text-Encoded Structures for Automated Jailbreaks in LLMs. (1%)Bangxin Li; Hengrui Xing; Chao Huang; Jin Qian; Huangqing Xiao; Linfeng Feng; Cong Tian
Adversarial Patch for 3D Local Feature Extractor. (1%)Yu Wen Pao; Li Chang Lai; Hong-Yi Lin
2024-06-11
Erasing Radio Frequency Fingerprints via Active Adversarial Perturbation. (86%)Zhaoyi Lu; Wenchao Xu; Ming Tu; Xin Xie; Cunqing Hua; Nan Cheng
AudioMarkBench: Benchmarking Robustness of Audio Watermarking. (83%)Hongbin Liu; Moyang Guo; Zhengyuan Jiang; Lun Wang; Neil Zhenqiang Gong
On the Hölder Stability of Multiset and Graph Neural Networks. (69%)Yair Davidson; Nadav Dym
A Study of Backdoors in Instruction Fine-tuned Language Models. (31%)Jayaram Raghuram; George Kesidis; David J. Miller
Merging Improves Self-Critique Against Jailbreak Attacks. (26%)Victor Gallego
Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study. (15%)Yichi Zhang; Yao Huang; Yitong Sun; Chang Liu; Zhe Zhao; Zhengwei Fang; Yifan Wang; Huanran Chen; Xiao Yang; Xingxing Wei; Hang Su; Yinpeng Dong; Jun Zhu
Dual Thinking and Perceptual Analysis of Deep Learning Models using Human Adversarial Examples. (15%)Kailas Dayanandan; Anand Sinha; Brejesh Lall
MoreauPruner: Robust Pruning of Large Language Models against Weight Perturbations. (5%)Zixiao Wang; Jingwei Zhang; Wenqian Zhao; Farzan Farnia; Bei Yu
Rethinking the impact of noisy labels in graph classification: A utility and privacy perspective. (1%)De Li; Xianxian Li; Zeming Gan; Qiyu Li; Bin Qu; Jinyan Wang
Agnostic Sharpness-Aware Minimization. (1%)Van-Anh Nguyen; Quyen Tran; Tuan Truong; Thanh-Toan Do; Dinh Phung; Trung Le
2024-06-10
Texture Re-scalable Universal Adversarial Perturbation. (99%)Yihao Huang; Qing Guo; Felix Juefei-Xu; Ming Hu; Xiaojun Jia; Xiaochun Cao; Geguang Pu; Yang Liu
Explainable Graph Neural Networks Under Fire. (99%)Zhong Li; Simon Geisler; Yuhang Wang; Stephan Günnemann; Matthijs van Leeuwen
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning. (81%)Xiaoting Lyu; Yufei Han; Wei Wang; Jingkai Liu; Yongsheng Zhu; Guangquan Xu; Jiqiang Liu; Xiangliang Zhang
Reinforced Compressive Neural Architecture Search for Versatile Adversarial Robustness. (56%)Dingrong Wang; Hitesh Sapkota; Zhiqiang Tao; Qi Yu
Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications. (56%)Junlin Wang; Tianyi Yang; Roy Xie; Bhuwan Dhingra
A Survey of Backdoor Attacks and Defenses on Large Language Models: Implications for Security Measures. (13%)Shuai Zhao; Meihuizi Jia; Zhongliang Guo; Leilei Gan; Jie Fu; Yichao Feng; Fengjun Pan; Luu Anh Tuan
An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection. (8%)Shenao Yan; Shen Wang; Yue Duan; Hanbin Hong; Kiho Lee; Doowon Kim; Yuan Hong
Fast White-Box Adversarial Streaming Without a Random Oracle. (3%)Ying Feng; Aayush Jain; David P. Woodruff
Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks. (2%)Zonghao Ying; Aishan Liu; Xianglong Liu; Dacheng Tao
2024-06-09
DMS: Addressing Information Loss with More Steps for Pragmatic Adversarial Attacks. (99%)Zhiyu Zhu; Jiayu Zhang; Xinyi Wang; Zhibo Jin; Huaming Chen
MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification. (97%)Sajjad Amini; Mohammadreza Teymoorianfard; Shiqing Ma; Amir Houmansadr
Stealthy Targeted Backdoor Attacks against Image Captioning. (82%)Wenshu Fan; Hongwei Li; Wenbo Jiang; Meng Hao; Shui Yu; Xiao Zhang
ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving. (80%)Chen Ma; Ningfei Wang; Zhengyu Zhao; Qian Wang; Qi Alfred Chen; Chao Shen
Self-supervised Adversarial Training of Monocular Depth Estimation against Physical-World Attacks. (67%)Zhiyuan Cheng; Cheng Han; James Liang; Qifan Wang; Xiangyu Zhang; Dongfang Liu
SlowPerception: Physical-World Latency Attack against Visual Perception in Autonomous Driving. (64%)Chen Ma; Ningfei Wang; Zhengyu Zhao; Qi Alfred Chen; Chao Shen
ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations. (38%)Sravanti Addepalli; Priyam Dey; R. Venkatesh Babu
Certified Robustness to Data Poisoning in Gradient-Based Training. (22%)Philip Sosnin; Mark N. Müller; Maximilian Baader; Calvin Tsay; Matthew Wicker
Machine Against the RAG: Jamming Retrieval-Augmented Generation with Blocker Documents. (4%)Avital Shafran; Roei Schuster; Vitaly Shmatikov
PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection. (2%)Wei Li; Pin-Yu Chen; Sijia Liu; Ren Wang
Chain-of-Scrutiny: Detecting Backdoor Attacks for Large Language Models. (2%)Xi Li; Yusen Zhang; Renze Lou; Chen Wu; Jiaqi Wang
A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities. (1%)Corren McCoy; Ross Gore; Michael L. Nelson; Michele C. Weigle
Safety Alignment Should Be Made More Than Just a Few Tokens Deep. (1%)Xiangyu Qi; Ashwinee Panda; Kaifeng Lyu; Xiao Ma; Subhrajit Roy; Ahmad Beirami; Prateek Mittal; Peter Henderson
Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models. (1%)Alkis Kalavasis; Amin Karbasi; Argyris Oikonomou; Katerina Sotiraki; Grigoris Velegkas; Manolis Zampetakis
2024-06-08
SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner. (99%)Xunguang Wang; Daoyuan Wu; Zhenlan Ji; Zongjie Li; Pingchuan Ma; Shuai Wang; Yingjiu Li; Yang Liu; Ning Liu; Juergen Rahmel
One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models. (99%)Hao Fang; Jiawei Kong; Wenbo Yu; Bin Chen; Jiawei Li; Shutao Xia; Ke Xu
Bridging the Gap: Rademacher Complexity in Robust and Standard Generalization. (98%)Jiancong Xiao; Ruoyu Sun; Qi Long; Weijie J. Su
Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability. (96%)Junqi Gao; Biqing Qi; Yao Li; Zhichang Guo; Dong Li; Yuming Xing; Dazhi Zhang
Enhancing Adversarial Transferability via Information Bottleneck Constraints. (68%)Biqing Qi; Junqi Gao; Jianxing Liu; Ligang Wu; Bowen Zhou
Exploring Adversarial Robustness of Deep State Space Models. (56%)Biqing Qi; Yang Luo; Junqi Gao; Pengfei Li; Kai Tian; Zhiyuan Ma; Bowen Zhou
Adversarial flows: A gradient flow characterization of adversarial attacks. (13%)Lukas Weigand; Tim Roith; Martin Burger
2024-06-07
ADBA: Approximation Decision Boundary Approach for Black-Box Adversarial Attacks. (99%)Feiyang Wang; Xingquan Zuo; Hai Huang; Gang Chen
Probabilistic Perspectives on Error Minimization in Adversarial Reinforcement Learning. (98%)Roman Belaire; Arunesh Sinha; Pradeep Varakantham
Corpus Poisoning via Approximate Greedy Gradient Descent. (86%)Jinyan Su; Preslav Nakov; Claire Cardie
Compositional Curvature Bounds for Deep Neural Networks. (84%)Taha Entesari; Sina Sharifi; Mahyar Fazlyab
Adversarial Tuning: Defending Against Jailbreak Attacks for LLMs. (41%)Fan Liu; Zhao Xu; Hao Liu
Clarifying Myths About the Relationship Between Shape Bias, Accuracy, and Robustness. (22%)Zahra Golpayegani; Patrick St-Amant; Nizar Bouguila
GENIE: Watermarking Graph Neural Networks for Link Prediction. (15%)Venkata Sai Pranav Bachina; Ankit Gangwal; Aaryan Ajay Sharma; Charu Sharma
The Price of Implicit Bias in Adversarially Robust Generalization. (5%)Nikolaos Tsilivis; Natalie Frank; Nathan Srebro; Julia Kempe
Contextual fusion enhances robustness to image blurring. (5%)Shruti Joshi; Aiswarya Akumalla; Seth Haney; Maxim Bazhenov
LLM Whisperer: An Inconspicuous Attack to Bias LLM Responses. (1%)Weiran Lin; Anna Gerchanovsky; Omer Akgul; Lujo Bauer; Matt Fredrikson; Zifan Wang
2024-06-06
Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection. (99%)Yinting Wu; Pai Peng; Bo Cai; Le Li
Talos: A More Effective and Efficient Adversarial Defense for GNN Models Based on the Global Homophily of Graphs. (98%)Duanyu Li; Huijun Wu; Min Xie; Xugang Wu; Zhenwei Wu; Wenzhe Zhang
Improving Alignment and Robustness with Circuit Breakers. (98%)Andy Zou; Long Phan; Justin Wang; Derek Duenas; Maxwell Lin; Maksym Andriushchenko; Rowan Wang; Zico Kolter; Matt Fredrikson; Dan Hendrycks
Behavior-Targeted Attack on Reinforcement Learning with Limited Access to Victim's Policy. (76%)Shojiro Yamabe; Kazuto Fukuchi; Ryoma Senda; Jun Sakuma
AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens. (69%)Lin Lu; Hai Yan; Zenghui Yuan; Jiawen Shi; Wenqi Wei; Pin-Yu Chen; Pan Zhou
Neural Codec-based Adversarial Sample Detection for Speaker Verification. (68%)Xuanjun Chen; Jiawei Du; Haibin Wu; Jyh-Shing Roger Jang; Hung-yi Lee
Interpreting the Second-Order Effects of Neurons in CLIP. (67%)Yossi Gandelsman; Alexei A. Efros; Jacob Steinhardt
Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt. (56%)Zonghao Ying; Aishan Liu; Tianyuan Zhang; Zhengmin Yu; Siyuan Liang; Xianglong Liu; Dacheng Tao
Memorization in deep learning: A survey. (1%)Jiaheng Wei; Yanjun Zhang; Leo Yu Zhang; Ming Ding; Chao Chen; Kok-Leong Ong; Jun Zhang; Yang Xiang
2024-06-05
ZeroPur: Succinct Training-Free Adversarial Purification. (99%)Xiuli Bi; Zonglin Yang; Bo Liu; Xiaodong Cun; Chi-Man Pun; Pietro Lio; Bin Xiao
VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise. (99%)Zhixun He; Mukesh Singhal
DifAttack++: Query-Efficient Black-Box Adversarial Attack via Hierarchical Disentangled Feature Space in Cross-Domain. (99%)Jun Liu; Jiantao Zhou; Jiandian Zeng; Jinyu Tian; Zheng Li
Distributional Adversarial Loss. (96%)Saba Ahmadi; Siddharth Bhandari; Avrim Blum; Chen Dan; Prabhav Jain
Defending Large Language Models Against Attacks With Residual Stream Activation Analysis. (83%)Amelia Kawasaki; Andrew Davis; Houssam Abbas
Graph Neural Network Explanations are Fragile. (80%)Jiate Li; Meng Pang; Yun Dong; Jinyuan Jia; Binghui Wang
A Geometric View of Data Complexity: Efficient Local Intrinsic Dimension Estimation with Diffusion Models. (68%)Hamidreza Kamkari; Brendan Leigh Ross; Rasa Hosseinzadeh; Jesse C. Cresswell; Gabriel Loaiza-Ganem
Principles of Designing Robust Remote Face Anti-Spoofing Systems. (13%)Xiang Xu; Tianchen Zhao; Zheng Zhang; Zhihua Li; Jon Wu; Alessandro Achille; Mani Srivastava
Mutual Information Guided Backdoor Mitigation for Pre-trained Encoders. (13%)Tingxu Han; Weisong Sun; Ziqi Ding; Chunrong Fang; Hanwei Qian; Jiaxun Li; Zhenyu Chen; Xiangyu Zhang
JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits. (10%)Minzhou Pan; Yi Zeng; Xue Lin; Ning Yu; Cho-Jui Hsieh; Peter Henderson; Ruoxi Jia
Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections. (10%)Zihan Luo; Hong Huang; Yongkang Zhou; Jiping Zhang; Nuo Chen; Hai Jin
Enhancing the Resilience of Graph Neural Networks to Topological Perturbations in Sparse Graphs. (8%)Shuqi He; Jun Zhuang; Ding Wang; Luyao Peng; Jun Song
Reconstructing training data from document understanding models. (1%)Jérémie Dentan; Arnaud Paran; Aymen Shabou
FREA: Feasibility-Guided Generation of Safety-Critical Scenarios with Reasonable Adversariality. (1%)Keyu Chen; Yuheng Lei; Hao Cheng; Haoran Wu; Wenchao Sun; Sifa Zheng
2024-06-04
Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation. (99%)Yaohua Liu; Jiaxin Gao; Xuan Liu; Xianghao Jiao; Xin Fan; Risheng Liu
Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing. (98%)Youwei Shu; Xi Xiao; Derui Wang; Yuxin Cao; Siji Chen; Jason Xue; Linyi Li; Bo Li
PuFace: Defending against Facial Cloaking Attacks for Facial Recognition Models. (54%)Jing Wen
A Risk Estimation Study of Native Code Vulnerabilities in Android Applications. (5%)Silvia Lucia Sanna; Diego Soi; Davide Maiorca; Giorgio Fumera; Giorgio Giacinto
Verifying the Generalization of Deep Learning to Out-of-Distribution Domains. (3%)Guy Amir; Osher Maayan; Tom Zelazny; Guy Katz; Michael Schapira
Large Language Models as Carriers of Hidden Messages. (2%)Jakub Hoscilowicz; Pawel Popiolek; Jan Rudkowski; Jedrzej Bieniasz; Artur Janicki
Nonlinear Transformations Against Unlearnable Datasets. (2%)Thushari Hapuarachchi; Jing Lin; Kaiqi Xiong; Mohamed Rahouti; Gitte Ost
Inference Attacks: A Taxonomy, Survey, and Promising Directions. (1%)Feng Wu; Lei Cui; Shaowen Yao; Shui Yu
QROA: A Black-Box Query-Response Optimization Attack on LLMs. (1%)Hussein Jawad; Nicolas J.-B. Brunel
The Crystal Ball Hypothesis in diffusion models: Anticipating object positions from initial noise. (1%)Yuanhao Ban; Ruochen Wang; Tianyi Zhou; Boqing Gong; Cho-Jui Hsieh; Minhao Cheng
Can Dense Connectivity Benefit Outlier Detection? An Odyssey with NAS. (1%)Hao Fu; Tunhou Zhang; Hai Li; Yiran Chen
2024-06-03
Constraint-based Adversarial Example Synthesis. (99%)Fang Yu; Ya-Yu Chi; Yu-Fang Chen
SVASTIN: Sparse Video Adversarial Attack via Spatio-Temporal Invertible Neural Networks. (99%)Yi Pan; Jun-Jie Huang; Zihan Chen; Wentao Zhao; Ziyue Wang
Reproducibility Study on Adversarial Attacks Against Robust Transformer Trackers. (93%)Fatemeh Nourilenjan Nokabadi; Jean-François Lalonde; Christian Gagné
CR-UTP: Certified Robustness against Universal Text Perturbations on Large Language Models. (83%)Qian Lou; Xin Liang; Jiaqi Xue; Yancheng Zhang; Rui Xie; Mengxin Zheng
Are AI-Generated Text Detectors Robust to Adversarial Perturbations? (80%)Guanhua Huang; Yuchen Zhang; Zhe Li; Yongjian You; Mingze Wang; Zhouwang Yang
Model for Peanuts: Hijacking ML Models without Training Access is Possible. (62%)Mahmoud Ghorbel; Halima Bouzidi; Ioan Marius Bilasco; Ihsen Alouani
SLANT: Spurious Logo ANalysis Toolkit. (47%)Maan Qraitem; Piotr Teterwak; Kate Saenko; Bryan A. Plummer
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering. (16%)Robert Osazuwa Ness; Katie Matton; Hayden Helm; Sheng Zhang; Junaid Bajwa; Carey E. Priebe; Eric Horvitz
From Feature Visualization to Visual Circuits: Effect of Adversarial Model Manipulation. (12%)Geraldin Nanfack; Michael Eickenberg; Eugene Belilovsky
A Game-Theoretic Approach to Privacy-Utility Tradeoff in Sharing Genomic Summary Statistics. (10%)Tao Zhang; Rajagopal Venkatesaramani; Rajat K. De; Bradley A. Malin; Yevgeniy Vorobeychik
Poisoning Attacks and Defenses in Recommender Systems: A Survey. (10%)Zongwei Wang; Junliang Yu; Min Gao; Wei Yuan; Guanhua Ye; Shazia Sadiq; Hongzhi Yin
Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits. (4%)Andis Draguns; Andrew Gritsevskiy; Sumeet Ramesh Motwani; Charlie Rogers-Smith; Jeffrey Ladish; Christian Schroeder de Witt
PRICE: A Pretrained Model for Cross-Database Cardinality Estimation. (1%)Tianjing Zeng; Junwei Lan; Jiahong Ma; Wenqing Wei; Rong Zhu; Pengfei Li; Bolin Ding; Defu Lian; Zhewei Wei; Jingren Zhou
2024-06-02
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data. (99%)Thibault Simonetto; Salah Ghamizi; Maxime Cordy
Improving Accuracy-robustness Trade-off via Pixel Reweighted Adversarial Training. (98%)Jiacheng Zhang; Feng Liu; Dawei Zhou; Jingfeng Zhang; Tongliang Liu
Assessing the Adversarial Security of Perceptual Hashing Algorithms. (31%)Jordan Madden; Moxanki Bhavsar; Lhamo Dorje; Xiaohua Li
A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder. (31%)Jingjing Zheng; Xin Yuan; Kai Li; Wei Ni; Eduardo Tovar; Jon Crowcroft
Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation. (13%)Yuan Xiao; Shiqing Ma; Juan Zhai; Chunrong Fang; Jinyuan Jia; Zhenyu Chen
Invisible Backdoor Attacks on Diffusion Models. (2%)Sen Li; Junchi Ma; Minhao Cheng
2024-06-01
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model. (3%)Jinyin Chen; Xiaoming Zhao; Haibin Zheng; Xiao Li; Sheng Xiang; Haifeng Guo
2024-05-31
Query Provenance Analysis: Efficient and Robust Defense against Query-based Black-box Attacks. (99%)Shaofei Li; Ziqi Zhang; Haomin Jia; Ding Li; Yao Guo; Xiangqun Chen
Investigating and unmasking feature-level vulnerabilities of CNNs to adversarial perturbations. (95%)Davide Coppola; Hwee Kuan Lee
Robust Stable Spiking Neural Networks. (38%)Jianhao Ding; Zhiyu Pan; Yujia Liu; Zhaofei Yu; Tiejun Huang
Improved Techniques for Optimization-Based Jailbreaking on Large Language Models. (26%)Xiaojun Jia; Tianyu Pang; Chao Du; Yihao Huang; Jindong Gu; Yang Liu; Xiaochun Cao; Min Lin
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning. (22%)Zhangchen Xu; Fengqing Jiang; Luyao Niu; Jinyuan Jia; Bo Li; Radha Poovendran
Enhancing Jailbreak Attack Against Large Language Models through Silent Tokens. (13%)Jiahao Yu; Haozheng Luo; Jerry Yao-Chieh Hu; Wenbo Guo; Han Liu; Xinyu Xing
Exploring Vulnerabilities and Protections in Large Language Models: A Survey. (10%)Frank Weizhen Liu; Chenhui Hu
GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning. (9%)Xiaoyun Gan; Shanyu Gan; Taizhi Su; Peng Liu
Neural Network Verification with Branch-and-Bound for General Nonlinearities. (9%)Zhouxing Shi; Qirui Jin; Zico Kolter; Suman Jana; Cho-Jui Hsieh; Huan Zhang
StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization. (4%)Songhua Liu; Xin Jin; Xingyi Yang; Jingwen Ye; Xinchao Wang
GI-NAS: Boosting Gradient Inversion Attacks through Adaptive Neural Architecture Search. (1%)Wenbo Yu; Hao Fang; Bin Chen; Xiaohang Sui; Chuan Chen; Hao Wu; Shu-Tao Xia; Ke Xu
2024-05-30
Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization. (99%)Yisu Liu; Jinyang An; Wanqian Zhang; Dayan Wu; Jingzi Gu; Zheng Lin; Weiping Wang
HOLMES: to Detect Adversarial Examples with Multiple Detectors. (99%)Jing Wen
Typography Leads Semantic Diversifying: Amplifying Adversarial Transferability across Multimodal Large Language Models. (99%)Hao Cheng; Erjia Xiao; Jiayan Yang; Jiahang Cao; Qiang Zhang; Le Yang; Jize Zhang; Kaidi Xu; Jindong Gu; Renjing Xu
Enhancing Adversarial Robustness in SNNs with Sparse Gradients. (92%)Yujia Liu; Tong Bu; Jianhao Ding; Zecheng Hao; Tiejun Huang; Zhaofei Yu
Exploring the Robustness of Decision-Level Through Adversarial Attacks on LLM-Based Embodied Models. (89%)Shuyuan Liu; Jiawei Chen; Shouwei Ruan; Hang Su; Zhaoxia Yin
Phantom: General Trigger Attacks on Retrieval Augmented Language Generation. (83%)Harsh Chaudhari; Giorgio Severi; John Abascal; Matthew Jagielski; Christopher A. Choquette-Choo; Milad Nasr; Cristina Nita-Rotaru; Alina Oprea
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents. (75%)Ethan Rathbun; Christopher Amato; Alina Oprea
Deep Learning Approaches for Detecting Adversarial Cyberbullying and Hate Speech in Social Networks. (73%)Sylvia Worlali Azumah; Nelly Elsayed; Zag ElSayed; Murat Ozer; Amanda La Guardia
BAN: Detecting Backdoors Activated by Adversarial Neuron Noise. (68%)Xiaoyun Xu; Zhuoran Liu; Stefanos Koffas; Shujian Yu; Stjepan Picek
Is My Data in Your Retrieval Database? Membership Inference Attacks Against Retrieval Augmented Generation. (45%)Maya Anderson; Guy Amit; Abigail Goldsteen
Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks. (38%)Chen Xiong; Xiangyu Qi; Pin-Yu Chen; Tsung-Yi Ho
Large Language Model Watermark Stealing With Mixed Integer Programming. (33%)Zhaoxi Zhang; Xiaomei Zhang; Yanjun Zhang; Leo Yu Zhang; Chao Chen; Shengshan Hu; Asif Gill; Shirui Pan
DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World. (22%)Wenli Sun; Xinyang Jiang; Dongsheng Li; Cairong Zhao
Investigating the Robustness of LLMs on Math Word Problems. (16%)Ujjwala Anantheswaran; Himanshu Gupta; Kevin Scaria; Shreyas Verma; Chitta Baral; Swaroop Mishra
Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness. (5%)Weilin Lin; Li Liu; Shaokui Wei; Jianze Li; Hui Xiong
Certifying Global Robustness for Deep Neural Networks. (2%)You Li; Guannan Zhao; Shuyu Kong; Yunqi He; Hai Zhou
Breaking Indistinguishability with Transfer Learning: A First Look at SPECK32/64 Lightweight Block Ciphers. (1%)Jimmy Dani; Kalyan Nakka; Nitesh Saxena
Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable. (1%)Martin Bertran; Shuai Tang; Michael Kearns; Jamie Morgenstern; Aaron Roth; Zhiwei Steven Wu
2024-05-29
Efficient Black-box Adversarial Attacks via Bayesian Optimization Guided by a Function Prior. (99%)Shuyu Cheng; Yibo Miao; Yinpeng Dong; Xiao Yang; Xiao-Shan Gao; Jun Zhu
Leveraging Many-To-Many Relationships for Defending Against Visual-Language Adversarial Attacks. (96%)Futa Waseda; Antonio Tejero-de-Pablos
Model Agnostic Defense against Adversarial Patch Attacks on Object Detection in Unmanned Aerial Vehicles. (92%)Saurabh Pathak; Samridha Shrestha; Abdelrahman AlMahmoud
Diffusion Policy Attacker: Crafting Adversarial Attacks for Diffusion-based Policies. (92%)Yipu Chen; Haotian Xue; Yongxin Chen
Evaluating the Effectiveness and Robustness of Visual Similarity-based Phishing Detection Models. (91%)Fujiao Ji; Kiho Lee; Hyungjoon Koo; Wenhao You; Euijin Choo; Hyoungshick Kim; Doowon Kim
Verifiably Robust Conformal Prediction. (82%)Linus Jeary; Tom Kuipers; Mehran Hosseini; Nicola Paoletti
AI Risk Management Should Incorporate Both Safety and Security. (67%)Xiangyu Qi; Yangsibo Huang; Yi Zeng; Edoardo Debenedetti; Jonas Geiping; Luxi He; Kaixuan Huang; Udari Madhushani; Vikash Sehwag; Weijia Shi; Boyi Wei; Tinghao Xie; Danqi Chen; Pin-Yu Chen; Jeffrey Ding; Ruoxi Jia; Jiaqi Ma; Arvind Narayanan; Weijie J Su; Mengdi Wang; Chaowei Xiao; Bo Li; Dawn Song; Peter Henderson; Prateek Mittal
AutoBreach: Universal and Adaptive Jailbreaking with Efficient Wordplay-Guided Optimization. (61%)Jiawei Chen; Xiao Yang; Zhengwei Fang; Yu Tian; Yinpeng Dong; Zhaoxia Yin; Hang Su
EntProp: High Entropy Propagation for Improving Accuracy and Robustness. (50%)Shohei Enomoto
ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning. (26%)Ruchika Chavhan; Da Li; Timothy Hospedales
Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning. (22%)Dipam Goswami; Albin Soutif--Cormerais; Yuyang Liu; Sandesh Kamath; Bartłomiej Twardowski; Joost van de Weijer
Node Injection Attack Based on Label Propagation Against Graph Neural Network. (12%)Peican Zhu; Zechen Pan; Keke Tang; Xiaodong Cui; Jinhuan Wang; Qi Xuan
Genshin: General Shield for Natural Language Processing with Large Language Models. (5%)Xiao Peng; Tao Liu; Ying Wang
Confronting the Reproducibility Crisis: A Case Study of Challenges in Cybersecurity AI. (2%)Richard H. Moulton; Gary A. McCully; John D. Hastings
Enhancing Security and Privacy in Federated Learning using Update Digests and Voting-Based Defense. (1%)Wenjie Li; Kai Fan; Jingyuan Zhang; Hui Li; Wei Yang Bryan Lim; Qiang Yang
Gone but Not Forgotten: Improved Benchmarks for Machine Unlearning. (1%)Keltin Grimes; Collin Abidi; Cole Frank; Shannon Gallagher
MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection. (1%)Raman Dutt; Ondrej Bohdal; Pedro Sanchez; Sotirios A. Tsaftaris; Timothy Hospedales
2024-05-28
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks. (99%)Zhenxing Niu; Yuyao Sun; Qiguang Miao; Rong Jin; Gang Hua
Improved Generation of Adversarial Examples Against Safety-aligned LLMs. (99%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics. (98%)Sunay Bhat; Jeffrey Jiang; Omead Pooladzandi; Alexander Branch; Gregory Pottie
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models. (98%)Omead Pooladzandi; Jeffrey Jiang; Sunay Bhat; Gregory Pottie
White-box Multimodal Jailbreaks Against Large Vision-Language Models. (96%)Ruofan Wang; Xingjun Ma; Hanxu Zhou; Chuanjun Ji; Guangnan Ye; Yu-Gang Jiang
Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing. (92%)Wei Zhao; Zhe Li; Yige Li; Ye Zhang; Jun Sun
Wavelet-Based Image Tokenizer for Vision Transformers. (64%)Zhenhai Zhu; Radu Soricut
Cross-Context Backdoor Attacks against Graph Prompt Learning. (13%)Xiaoting Lyu; Yufei Han; Wei Wang; Hangwei Qian; Ivor Tsang; Xiangliang Zhang
BlueSWAT: A Lightweight State-Aware Security Framework for Bluetooth Low Energy. (1%)Xijia Che; Yi He; Xuewei Feng; Kun Sun; Ke Xu; Qi Li
Watermarking Counterfactual Explanations. (1%)Hangzhi Guo; Firdaus Ahmed Choudhury; Tinghua Chen; Amulya Yadav
Black-Box Detection of Language Model Watermarks. (1%)Thibaud Gloaguen; Nikola Jovanović; Robin Staab; Martin Vechev
2024-05-27
Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models. (99%)Fengfan Zhou; Qianyu Zhou; Xiangtai Li; Xuequan Lu; Lizhuang Ma; Hefei Ling
The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective. (99%)Nils Philipp Walter; Linara Adilova; Jilles Vreeken; Michael Kamp
Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training. (98%)Enes Altinisik; Safa Messaoud; Husrev Taha Sencar; Hassan Sajjad; Sanjay Chawla
Spectral regularization for adversarially-robust representation learning. (86%)Sheng Yang; Jacob A. Zavatone-Veth; Cengiz Pehlevan
TIMA: Text-Image Mutual Awareness for Balancing Zero-Shot Adversarial Robustness and Generalization Ability. (83%)Fengji Ma; Li Liu; Hei Victor Cheng
OSLO: One-Shot Label-Only Membership Inference Attacks. (81%)Yuefeng Peng; Jaechul Roh; Subhransu Maji; Amir Houmansadr
Verifying Properties of Binary Neural Networks Using Sparse Polynomial Optimization. (33%)Jianting Yang; Srećko Ðurašinović; Jean-Bernard Lasserre; Victor Magron; Jun Zhao
Rethinking Pruning for Backdoor Mitigation: An Optimization Perspective. (26%)Nan Li; Haiyang Yu; Ping Yi
Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models. (8%)ShengYun Peng; Pin-Yu Chen; Matthew Hull; Duen Horng Chau
Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems. (5%)Ruochen Jiao; Shaoyuan Xie; Justin Yue; Takami Sato; Lixu Wang; Yixuan Wang; Qi Alfred Chen; Qi Zhu
LabObf: A Label Protection Scheme for Vertical Federated Learning Through Label Obfuscation. (1%)Ying He; Mingyang Niu; Jingyu Hua; Yunlong Mao; Xu Huang; Chen Li; Sheng Zhong
Magnitude-based Neuron Pruning for Backdoor Defense. (1%)Nan Li; Haoyu Jiang; Ping Yi
2024-05-26
Medical MLLM is Vulnerable: Cross-Modality Jailbreak and Mismatched Attacks on Medical Multimodal Large Language Models. (67%)Xijie Huang; Xinyuan Wang; Hantao Zhang; Yinghao Zhu; Jiawen Xi; Jingkun An; Hao Wang; Hao Liang; Chengwei Pan
Pruning for Robust Concept Erasing in Diffusion Models. (38%)Tianyun Yang; Juan Cao; Chang Xu
TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models. (31%)Yuzhou Nie; Yanting Wang; Jinyuan Jia; Michael J. De Lucia; Nathaniel D. Bastian; Wenbo Guo; Dawn Song
Partial train and isolate, mitigate backdoor attack. (1%)Yong Li; Han Gao
Automatic Jailbreaking of the Text-to-Image Generative AI Systems. (1%)Minseon Kim; Hyomin Lee; Boqing Gong; Huishuai Zhang; Sung Ju Hwang
2024-05-25
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack. (99%)Mingli Zhu; Siyuan Liang; Baoyuan Wu
Detecting Adversarial Data via Perturbation Forgery. (99%)Qian Wang; Chen Li; Yuchen Luo; Hefei Ling; Ping Li; Jiazhong Chen; Shijuan Huang; Ning Yu
Enhancing Adversarial Transferability Through Neighborhood Conditional Sampling. (98%)Chunlin Qiu; Yiheng Duan; Lingchen Zhao; Qian Wang
R.A.C.E.: Robust Adversarial Concept Erasure for Secure Text-to-Image Diffusion Model. (97%)Changhoon Kim; Kyle Min; Yezhou Yang
Uncertainty Measurement of Deep Learning System based on the Convex Hull of Training Sets. (89%)Hyekyoung Hwang; Jitae Shin
Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency. (81%)Runqi Lin; Chaojian Yu; Bo Han; Hang Su; Tongliang Liu
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor. (70%)Shaokui Wei; Hongyuan Zha; Baoyuan Wu
Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character. (56%)Siyuan Ma; Weidi Luo; Yu Wang; Xiaogeng Liu
Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level. (8%)Runlin Lei; Yuwei Hu; Yuchen Ren; Zhewei Wei
No Two Devils Alike: Unveiling Distinct Mechanisms of Fine-tuning Attacks. (4%)Chak Tou Leong; Yi Cheng; Kaishuai Xu; Jian Wang; Hanlin Wang; Wenjie Li
Robust Message Embedding via Attention Flow-Based Steganography. (1%)Huayuan Ye; Shenzhuo Zhang; Shiqi Jiang; Jing Liao; Shuhang Gu; Dejun Zheng; Changbo Wang; Chenhui Li
2024-05-24
Robust width: A lightweight and certifiable adversarial defense. (99%)Jonathan Peck; Bart Goossens
Large Language Model Sentinel: LLM Agent for Adversarial Purification. (99%)Guang Lin; Qibin Zhao
Adversarial Attacks on Hidden Tasks in Multi-Task Learning. (98%)Yu Zhe; Rei Nagaike; Daiki Nishiyama; Kazuto Fukuchi; Jun Sakuma
Evaluating and Safeguarding the Adversarial Robustness of Retrieval-Based In-Context Learning. (95%)Simon Yu; Jie He; Pasquale Minervini; Jeff Z. Pan
Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness. (92%)Jieren Deng; Hanbin Hong; Aaron Palmer; Xin Zhou; Jinbo Bi; Kaleel Mahmood; Yuan Hong; Derek Aguiar
Efficient Adversarial Training in LLMs with Continuous Attacks. (92%)Sophie Xhonneux; Alessandro Sordoni; Stephan Günnemann; Gauthier Gidel; Leo Schwinn
Rethinking Independent Cross-Entropy Loss For Graph-Structured Data. (76%)Rui Miao; Kaixiong Zhou; Yili Wang; Ninghao Liu; Ying Wang; Xin Wang
BDetCLIP: Multimodal Prompting Contrastive Test-Time Backdoor Detection. (61%)Yuwei Niu; Shuo He; Qi Wei; Zongyu Wu; Feng Liu; Lei Feng
Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models. (47%)Yimeng Zhang; Xin Chen; Jinghan Jia; Yihua Zhang; Chongyu Fan; Jiancheng Liu; Mingyi Hong; Ke Ding; Sijia Liu
Robustifying Safety-Aligned Large Language Models through Clean Data Curation. (15%)Xiaoqun Liu; Jiacheng Liang; Muchao Ye; Zhaohan Xi
HiddenSpeaker: Generate Imperceptible Unlearnable Audios for Speaker Verification System. (15%)Zhisheng Zhang; Pengyang Huang
Can Implicit Bias Imply Adversarial Robustness? (11%)Hancheng Min; René Vidal
Certifiably Robust RAG against Retrieval Corruption. (10%)Chong Xiang; Tong Wu; Zexuan Zhong; David Wagner; Danqi Chen; Prateek Mittal
BadGD: A unified data-centric framework to identify gradient descent vulnerabilities. (8%)Chi-Hua Wang; Guang Cheng
AuthNet: Neural Network with Integrated Authentication Logic. (5%)Yuling Cai; Fan Xiang; Guozhu Meng; Yinzhi Cao; Kai Chen
Revisit, Extend, and Enhance Hessian-Free Influence Functions. (2%)Ziao Yang; Han Yue; Jian Chen; Hongfu Liu
2024-05-23
Eidos: Efficient, Imperceptible Adversarial 3D Point Clouds. (98%)Hanwei Zhang; Luo Cheng; Qisong He; Wei Huang; Renjue Li; Ronan Sicre; Xiaowei Huang; Holger Hermanns; Lijun Zhang
Certified Robustness against Sparse Adversarial Perturbations via Data Localization. (92%)Ambar Pal; René Vidal; Jeremias Sulam
A New Formulation for Zeroth-Order Optimization of Adversarial EXEmples in Malware Detection. (91%)Marco Rando; Luca Demetrio; Lorenzo Rosasco; Fabio Roli
SLIFER: Investigating Performance and Robustness of Malware Detection Pipelines. (89%)Andrea Ponte; Dmitrijs Trizna; Luca Demetrio; Battista Biggio; Ivan Tesfai Ogbu; Fabio Roli
Generating camera failures as a class of physics-based adversarial examples. (87%)Manav Prabhakar; Jwalandhar Girnar; Arpan Kusari
TrojanForge: Generating Adversarial Hardware Trojan Examples with Reinforcement Learning. (84%)Amin Sarihi; Peter Jamieson; Ahmad Patooghy; Abdel-Hameed A. Badawy
Towards Transferable Attacks Against Vision-LLMs in Autonomous Driving with Typography. (83%)Nhat Chung; Sensen Gao; Tuan-Anh Vu; Jie Zhang; Aishan Liu; Yun Lin; Jin Song Dong; Qing Guo
How Does Bayes Error Limit Probabilistic Robust Accuracy. (76%)Ruihan Zhang; Jun Sun
Universal Robustness via Median Randomized Smoothing for Real-World Super-Resolution. (67%)Zakariya Chaouai; Mohamed Tamaazousti
Towards Imperceptible Backdoor Attack in Self-supervised Learning. (61%)Hanrong Zhang; Zhenting Wang; Tingxu Han; Mingyu Jin; Chenlu Zhan; Mengnan Du; Hongwei Wang; Shiqing Ma
Unveiling the Achilles' Heel of NLG Evaluators: A Unified Adversarial Framework Driven by Large Language Models. (33%)Yiming Chen; Chen Zhang; Danqing Luo; Luis Fernando D'Haro; Robby T. Tan; Haizhou Li
AdjointDEIS: Efficient Gradients for Diffusion Models. (15%)Zander W. Blasingame; Chen Liu
What Variables Affect Out-of-Distribution Generalization in Pretrained Models? (9%)Md Yousuf Harun; Kyungbok Lee; Jhair Gallardo; Giri Krishnan; Christopher Kanan
Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model. (8%)Tudor Cebere; Aurélien Bellet; Nicolas Papernot
RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation. (1%)Peihua Mai; Ran Yan; Yan Pang
Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark. (1%)Huali Ren; Anli Yan; Chong-zhi Gao; Hongyang Yan; Zhenxin Zhang; Jin Li
2024-05-22
Learning to Transform Dynamically for Better Adversarial Transferability. (99%)Rongyi Zhu; Zeliang Zhang; Susan Liang; Zhuo Liu; Chenliang Xu
Adversarial Training of Two-Layer Polynomial and ReLU Activation Networks via Convex Optimization. (80%)Daniel Kuelbs; Sanjay Lall; Mert Pilanci
Towards Certification of Uncertainty Calibration under Adversarial Attacks. (75%)Cornelius Emde; Francesco Pinto; Thomas Lukasiewicz; Philip H. S. Torr; Adel Bibi
LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate. (67%)Anthony Fuller; Daniel G. Kyrollos; Yousef Yassin; James R. Green
Remote Keylogging Attacks in Multi-user VR Applications. (13%)Zihao Su; Kunlin Cai; Reuben Beeler; Lukas Dresel; Allan Garcia; Ilya Grishchenko; Yuan Tian; Christopher Kruegel; Giovanni Vigna
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning. (5%)Meenatchi Sundaram Muthu Selva Annamalai; Emiliano De Cristofaro
WordGame: Efficient & Effective LLM Jailbreak via Simultaneous Obfuscation in Query and Response. (1%)Tianrong Zhang; Bochuan Cao; Yuanpu Cao; Lu Lin; Prasenjit Mitra; Jinghui Chen
2024-05-21
Mellivora Capensis: A Backdoor-Free Training Framework on the Poisoned Dataset without Auxiliary Data. (92%)Yuwen Pu; Jiahao Chen; Chunyi Zhou; Zhou Feng; Qingming Li; Chunqiang Hu; Shouling Ji
Adversarial Training via Adaptive Knowledge Amalgamation of an Ensemble of Teachers. (87%)Shayan Mohajer Hamidi; Linfeng Ye
Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective. (78%)Jiahao Chen; Zhiqiang Shen; Yuwen Pu; Chunyi Zhou; Shouling Ji
EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection. (47%)Yuwen Qian; Shuchi Wu; Kang Wei; Ming Ding; Di Xiao; Tao Xiang; Chuan Ma; Song Guo
Fully Randomized Pointers. (15%)Gregory J. Duck; Sai Dhawal Phaye; Roland H. C. Yap; Trevor E. Carlson
A novel reliability attack of Physical Unclonable Functions. (13%)Gaoxiang Li; Yu Zhuang
Nearest is Not Dearest: Towards Practical Defense against Quantization-conditioned Backdoor Attacks. (8%)Boheng Li; Yishuo Cai; Haowei Li; Feng Xue; Zhifeng Li; Yiming Li
Dullahan: Stealthy Backdoor Attack against Without-Label-Sharing Split Learning. (4%)Yuwen Pu; Zhuoyuan Ding; Jiahao Chen; Chunyi Zhou; Qingming Li; Chunqiang Hu; Shouling Ji
Tiny Refinements Elicit Resilience: Toward Efficient Prefix-Model Against LLM Red-Teaming. (1%)Jiaxu Liu; Xiangyu Yin; Sihao Wu; Jianhong Wang; Meng Fang; Xinping Yi; Xiaowei Huang
2024-05-20
A Constraint-Enforcing Reward for Adversarial Attacks on Text Classifiers. (99%)Tom Roth; Inigo Jauregi Unanue; Alsharif Abuadbba; Massimo Piccardi
GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction. (98%)Emad Efatinasab; Alessandro Brighente; Mirco Rampazzo; Nahal Azadi; Mauro Conti
Robust Deep Reinforcement Learning with Adaptive Adversarial Perturbations in Action Space. (76%)Qianmei Liu; Yufei Kuang; Jie Wang
EGAN: Evolutional GAN for Ransomware Evasion. (74%)Daniel Commey; Benjamin Appiah; Bill K. Frimpong; Isaac Osei; Ebenezer N. A. Hammond; Garth V. Crosby
Rethinking Robustness Assessment: Adversarial Attacks on Learning-based Quadrupedal Locomotion Controllers. (31%)Fan Shi; Chong Zhang; Takahiro Miki; Joonho Lee; Marco Hutter; Stelian Coros
Adversarially Diversified Rehearsal Memory (ADRM): Mitigating Memory Overfitting Challenge in Continual Learning. (8%)Hikmat Khan; Ghulam Rasool; Nidhal Carla Bouaynaya
Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks. (3%)Marcin Podhajski; Jan Dubiński; Franziska Boenisch; Adam Dziedzic; Agnieszka Pregowska; Tomasz Michalak
DispaRisk: Auditing Fairness Through Usable Information. (1%)Jonathan Vasquez; Carlotta Domeniconi; Huzefa Rangwala
2024-05-19
Adaptive Batch Normalization Networks for Adversarial Robustness. (99%)Shao-Yuan Lo; Vishal M. Patel
An Invisible Backdoor Attack Based On Semantic Feature. (96%)Yangming Chen
Certified Robust Accuracy of Neural Networks Are Bounded due to Bayes Errors. (81%)Ruihan Zhang; Jun Sun
A GAN-Based Data Poisoning Attack Against Federated Learning Systems and Its Countermeasure. (68%)Wei Sun; Bo Gao; Ke Xiong; Yuwei Wang
SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks. (62%)Xuanli He; Qiongkai Xu; Jun Wang; Benjamin I. P. Rubinstein; Trevor Cohn
Fed-Credit: Robust Federated Learning with Credibility Management. (13%)Jiayan Chen; Zhirong Qian; Tianhui Meng; Xitong Gao; Tian Wang; Weijia Jia
BOSC: A Backdoor-based Framework for Open Set Synthetic Image Attribution. (5%)Jun Wang; Benedetta Tondi; Mauro Barni
2024-05-18
Towards Robust Policy: Enhancing Offline Reinforcement Learning with Adversarial Attacks and Defenses. (84%)Thanh Nguyen; Tung M. Luu; Tri Ton; Chang D. Yoo
Trustworthy Actionable Perturbations. (82%)Jesse Friedbaum; Sudarshan Adiga; Ravi Tandon
Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing. (13%)Yunlong Zhao; Xiaoheng Deng; Yijing Liu; Xinjun Pei; Jiazhi Xia; Wei Chen
UPAM: Unified Prompt Attack in Text-to-Image Generation Models Against Both Textual Filters and Visual Checkers. (12%)Duo Peng; Qiuhong Ke; Jun Liu
BadActs: A Universal Backdoor Defense in the Activation Space. (10%)Biao Yi; Sishuo Chen; Yiming Li; Tong Li; Baolei Zhang; Zheli Liu
On Robust Reinforcement Learning with Lipschitz-Bounded Policy Networks. (8%)Nicholas H. Barbara; Ruigang Wang; Ian R. Manchester
Diffusion Model Driven Test-Time Image Adaptation for Robust Skin Lesion Classification. (3%)Ming Hu; Siyuan Yan; Peng Xia; Feilong Tang; Wenxue Li; Peibo Duan; Lin Zhang; Zongyuan Ge
2024-05-17
Revisiting the Robust Generalization of Adversarial Prompt Tuning. (99%)Fan Yang; Mingxuan Xia; Sangzhou Xia; Chicheng Ma; Hui Hui
Safeguarding Vision-Language Models Against Patched Visual Prompt Injectors. (99%)Jiachen Sun; Changsheng Wang; Jiongxiao Wang; Yiwei Zhang; Chaowei Xiao
Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective. (83%)Zhiwei Zhang; Minhua Lin; Enyan Dai; Suhang Wang
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers. (67%)Sheng Yang; Jiawang Bai; Kuofeng Gao; Yong Yang; Yiming Li; Shu-tao Xia
Boosting Few-Pixel Robustness Verification via Covering Verification Designs. (1%)Yuval Shapira; Naor Wiesel; Shahar Shabelman; Dana Drachsler-Cohen
2024-05-16
DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection. (99%)Yuhao Sun; Lingyun Yu; Hongtao Xie; Jiaming Li; Yongdong Zhang
Infrared Adversarial Car Stickers. (98%)Xiaopei Zhu; Yuqiu Liu; Zhanhao Hu; Jianmin Li; Xiaolin Hu
Adversarial Robustness for Visual Grounding of Multimodal Large Language Models. (95%)Kuofeng Gao; Yang Bai; Jiawang Bai; Yong Yang; Shu-Tao Xia
Adversarial Robustness Guarantees for Quantum Classifiers. (81%)Neil Dowling; Maxwell T. West; Angus Southwell; Azar C. Nakhl; Martin Sevior; Muhammad Usman; Kavan Modi
Box-Free Model Watermarks Are Prone to Black-Box Removal Attacks. (13%)Haonan An; Guang Hua; Zhiping Lin; Yuguang Fang
Relational DNN Verification With Cross Executional Bound Refinement. (8%)Debangshu Banerjee; Gagandeep Singh
Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution. (1%)Eslam Zaher; Maciej Trzaskowski; Quan Nguyen; Fred Roosta
Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy. (1%)Yichuan Shi; Olivera Kotevska; Viktor Reshniak; Abhishek Singh; Ramesh Raskar
2024-05-15
Properties that allow or prohibit transferability of adversarial attacks among quantized networks. (99%)Abhishek Shrestha; Jürgen Großmann
Towards Evaluating the Robustness of Automatic Speech Recognition Systems via Audio Style Transfer. (99%)Weifei Jin; Yuxin Cao; Junjie Su; Qi Shen; Kai Ye; Derui Wang; Jie Hao; Ziyao Liu
Cross-Input Certified Training for Universal Perturbations. (98%)Changming Xu; Gagandeep Singh
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency. (4%)Linshan Hou; Ruili Feng; Zhongyun Hua; Wei Luo; Leo Yu Zhang; Yiming Li
Themis: Automatic and Efficient Deep Learning System Testing with Strong Fault Detection Capability. (4%)Tsz On Li; Dong Huang; Xiaofei Xie; Heming Cui
Optimizing Sensor Network Design for Multiple Coverage. (1%)Lukas Taus; Yen-Hsi Richard Tsai
2024-05-14
SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models. (99%)Raghuveer Peri; Sai Muralidhar Jayanthi; Srikanth Ronanki; Anshu Bhatia; Karel Mundnich; Saket Dingliwal; Nilaksh Das; Zejiang Hou; Goeric Huybrechts; Srikanth Vishnubhotla; Daniel Garcia-Romero; Sundararajan Srinivasan; Kyu J Han; Katrin Kirchhoff
Certifying Robustness of Graph Convolutional Networks for Node Perturbation with Polyhedra Abstract Interpretation. (92%)Boqi Chen; Kristóf Marussy; Oszkár Semeráth; Gunter Mussbacher; Dániel Varró
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks. (92%)Ziquan Liu; Yufei Cui; Yan Yan; Yi Xu; Xiangyang Ji; Xue Liu; Antoni B. Chan
The RoboDrive Challenge: Drive Anytime Anywhere in Any Condition. (11%)Lingdong Kong; Shaoyuan Xie; Hanjiang Hu; Yaru Niu; Wei Tsang Ooi; Benoit R. Cottereau; Lai Xing Ng; Yuexin Ma; Wenwei Zhang; Liang Pan; Kai Chen; Ziwei Liu; Weichao Qiu; Wei Zhang; Xu Cao; Hao Lu; Ying-Cong Chen; Caixin Kang; Xinning Zhou; Chengyang Ying; Wentao Shang; Xingxing Wei; Yinpeng Dong; Bo Yang; Shengyin Jiang; Zeliang Ma; Dengyi Ji; Haiwen Li; Xingliang Huang; Yu Tian; Genghua Kou; Fan Jia; Yingfei Liu; Tiancai Wang; Ying Li; Xiaoshuai Hao; Yifan Yang; Hui Zhang; Mengchuan Wei; Yi Zhou; Haimei Zhao; Jing Zhang; Jinke Li; Xiao He; Xiaoqiang Cheng; Bingyang Zhang; Lirong Zhao; Dianlei Ding; Fangsheng Liu; Yixiang Yan; Hongming Wang; Nanfei Ye; Lun Luo; Yubo Tian; Yiwei Zuo; Zhe Cao; Yi Ren; Yunfan Li; Wenjie Liu; Xun Wu; Yifan Mao; Ming Li; Jian Liu; Jiayang Liu; Zihan Qin; Cunxi Chu; Jialei Xu; Wenbo Zhao; Junjun Jiang; Xianming Liu; Ziyan Wang; Chiwei Li; Shilong Li; Chendong Yuan; Songyue Yang; Wentao Liu; Peng Chen; Bin Zhou; Yubo Wang; Chi Zhang; Jianhang Sun; Hai Chen; Xiao Yang; Lizhong Wang; Dongyi Fu; Yongchun Lin; Huitong Yang; Haoang Li; Yadan Luo; Xianjing Cheng; Yong Xu
Pointwise Lipschitz Continuous Graph Algorithms via Proximal Gradient Analysis. (1%)Quanquan C. Liu; Grigoris Velegkas; Yuichi Yoshida; Felix Zhou
Achieving Resolution-Agnostic DNN-based Image Watermarking: A Novel Perspective of Implicit Neural Representation. (1%)Yuchen Wang; Xingyu Zhu; Guanhui Ye; Shiyao Zhang; Xuetao Wei
Neural Collapse Meets Differential Privacy: Curious Behaviors of NoisyGD with Near-perfect Representation Learning. (1%)Chendi Wang; Yuqing Zhu; Weijie J. Su; Yu-Xiang Wang
UnMarker: A Universal Attack on Defensive Watermarking. (1%)Andre Kassis; Urs Hengartner
RS-Reg: Probabilistic and Robust Certified Regression Through Randomized Smoothing. (1%)Aref Miri Rekavandi; Olga Ohrimenko; Benjamin I. P. Rubinstein
2024-05-13
Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection. (96%)Dehong Kong; Siyuan Liang; Wenqi Ren
CrossCert: A Cross-Checking Detection Approach to Patch Robustness Certification for Deep Learning Models. (82%)Qilin Zhou; Zhengyuan Wei; Haipeng Wang; Bo Jiang; W. K. Chan
RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors. (15%)Liam Dugan; Alyssa Hwang; Filip Trhlik; Josh Magnus Ludan; Andrew Zhu; Hainiu Xu; Daphne Ippolito; Chris Callison-Burch
GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation. (11%)Andrey V. Galichin; Mikhail Pautov; Alexey Zhavoronkin; Oleg Y. Rogov; Ivan Oseledets
Backdoor Removal for Generative Large Language Models. (1%)Haoran Li; Yulin Chen; Zihao Zheng; Qi Hu; Chunkit Chan; Heshan Liu; Yangqiu Song
2024-05-11
Stealthy Imitation: Reward-guided Environment-free Policy Stealing. (1%)Zhixiong Zhuang; Maria-Irina Nicolae; Mario Fritz
2024-05-10
Improving Transferable Targeted Adversarial Attack via Normalized Logit Calibration and Truncated Feature Mixing. (99%)Juanjuan Weng; Zhiming Luo; Shaozi Li
Disttack: Graph Adversarial Attacks Toward Distributed GNN Training. (98%)Yuxiang Zhang; Xin Liu; Meng Wu; Wei Yan; Mingyu Yan; Xiaochun Ye; Dongrui Fan
Exploring the Interplay of Interpretability and Robustness in Deep Neural Networks: A Saliency-guided Approach. (98%)Amira Guesmi; Nishant Suresh Aswani; Muhammad Shafique
Evaluating Adversarial Robustness in the Spatial Frequency Domain. (96%)Keng-Hsin Liao; Chin-Yuan Yeh; Hsi-Wen Chen; Ming-Syan Chen
Certified $\ell_2$ Attribution Robustness via Uniformly Smoothed Attributions. (96%)Fan Wang; Adams Wai-Kin Kong
PUMA: margin-based data pruning. (80%)Javier Maroto; Pascal Frossard
2024-05-09
BB-Patch: BlackBox Adversarial Patch-Attack using Zeroth-Order Optimization. (99%)Satyadwyoom Kumar; Saurabh Gupta; Arun Balaji Buduru
Muting Whisper: A Universal Acoustic Adversarial Attack on Speech Foundation Models. (97%)Vyas Raina; Rao Ma; Charles McGhee; Kate Knill; Mark Gales
Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers. (80%)Binxiao Huang; Jason Chun Lok; Chang Liu; Ngai Wong
Link Stealing Attacks Against Inductive Graph Neural Networks. (75%)Yixin Wu; Xinlei He; Pascal Berrang; Mathias Humbert; Michael Backes; Neil Zhenqiang Gong; Yang Zhang
Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search. (68%)Zachary Coalson; Huazheng Wang; Qingyun Wu; Sanghyun Hong
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning. (62%)Yujie Zhang; Neil Gong; Michael K. Reiter
Towards Robust Physical-world Backdoor Attacks on Lane Detection. (50%)Xinwei Zhang; Aishan Liu; Tianyuan Zhang; Siyuan Liang; Xianglong Liu
Model Inversion Robustness: Can Transfer Learning Help? (45%)Sy-Tuyen Ho; Koh Jun Hao; Keshigeyan Chandrasegaran; Ngoc-Bao Nguyen; Ngai-Man Cheung
Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM. (3%)Xikang Yang; Xuehai Tang; Songlin Hu; Jizhong Han
Demystifying Behavior-Based Malware Detection at Endpoints. (2%)Yigitcan Kaya; Yizheng Chen; Shoumik Saha; Fabio Pierazzi; Lorenzo Cavallaro; David Wagner; Tudor Dumitras
2024-05-08
Universal Adversarial Perturbations for Vision-Language Pre-trained Models. (99%)Peng-Fei Zhang; Zi Huang; Guangdong Bai
Adversarial Threats to Automatic Modulation Open Set Recognition in Wireless Networks. (99%)Yandie Yang; Sicheng Zhang; Kuixian Li; Qiao Tian; Yun Lin
Untargeted Adversarial Attack on Knowledge Graph Embeddings. (98%)Tianzhe Zhao; Jiaoyan Chen; Yanchi Ru; Qika Lin; Yuxia Geng; Jun Liu
Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations. (98%)Xuyang Zhong; Yixiao Huang; Chen Liu
Towards Accurate and Robust Architectures via Neural Architecture Search. (96%)Yuwei Ou; Yuqi Feng; Yanan Sun
Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution. (1%)Shuo Shao; Yiming Li; Hongwei Yao; Yiling He; Zhan Qin; Kui Ren
2024-05-07
Revisiting character-level adversarial attacks. (99%)Elias Abad Rocamora; Yongtao Wu; Fanghui Liu; Grigorios G. Chrysos; Volkan Cevher
Explainability-Informed Targeted Malware Misclassification. (99%)Quincy Card; Kshitiz Aryal; Maanak Gupta
Effective and Robust Adversarial Training against Data and Label Corruptions. (70%)Peng-Fei Zhang; Zi Huang; Xin-Shun Xu; Guangdong Bai
Going Proactive and Explanatory Against Malware Concept Drift. (1%)Yiling He; Junchi Lei; Zhan Qin; Kui Ren
Verified Neural Compressed Sensing. (1%)Rudy Bunel; Krishnamurthy Dvijotham; M. Pawan Kumar; Alessandro De Palma; Robert Stanforth
2024-05-06
Exploring Frequencies via Feature Mixing and Meta-Learning for Improving Adversarial Transferability. (99%)Juanjuan Weng; Zhiming Luo; Shaozi Li
Cutting through buggy adversarial example defenses: fixing 1 line of code breaks Sabre. (99%)Nicholas Carlini
On Adversarial Examples for Text Classification by Perturbing Latent Representations. (99%)Korn Sooksatra; Bikram Khanal; Pablo Rivas
Is ReLU Adversarially Robust? (98%)Korn Sooksatra; Greg Hamerly; Pablo Rivas
Enhancing O-RAN Security: Evasion Attacks and Robust Defenses for Graph Reinforcement Learning-based Connection Management. (91%)Ravikumar Balakrishnan; Marius Arvinte; Nageen Himayat; Hosein Nikopour; Hassnaa Moustafa
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection. (75%)Saket S. Chaturvedi; Lan Zhang; Wenbin Zhang; Pan He; Xiaoyong Yuan
Provably Unlearnable Data Examples. (64%)Derui Wang; Minhui Xue; Bo Li; Seyit Camtepe; Liming Zhu
DarkFed: A Data-Free Backdoor Attack in Federated Learning. (33%)Minghui Li; Wei Wan; Yuxuan Ning; Shengshan Hu; Lulu Xue; Leo Yu Zhang; Yichen Wang
UnsafeBench: Benchmarking Image Safety Classifiers on Real-World and AI-Generated Images. (1%)Yiting Qu; Xinyue Shen; Yixin Wu; Michael Backes; Savvas Zannettou; Yang Zhang
Why is SAM Robust to Label Noise? (1%)Christina Baek; Zico Kolter; Aditi Raghunathan
Detecting Android Malware: From Neural Embeddings to Hands-On Validation with BERTroid. (1%)Meryam Chaieb; Mostafa Anouar Ghorab; Mohamed Aymen Saied
LaserEscape: Detecting and Mitigating Optical Probing Attacks. (1%)Saleh Khalaj Monfared; Kyle Mitard; Andrew Cannon; Domenic Forte; Shahin Tajik
2024-05-05
Defense against Joint Poison and Evasion Attacks: A Case Study of DERMS. (88%)Zain ul Abdeen; Padmaksha Roy; Ahmad Al-Tawaha; Rouxi Jia; Laura Freeman; Peter Beling; Chen-Ching Liu; Alberto Sangiovanni-Vincentelli; Ming Jin
To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models. (15%)George-Octavian Barbulescu; Peter Triantafillou
Explainable Malware Detection with Tailored Logic Explained Networks. (2%)Peter Anthony; Francesco Giannini; Michelangelo Diligenti; Martin Homola; Marco Gori; Stefan Balogh; Jan Mojzis
2024-05-04
Leveraging the Human Ventral Visual Stream to Improve Neural Network Robustness. (92%)Zhenan Shao; Linjian Ma; Bo Li; Diane M. Beck
Updating Windows Malware Detectors: Balancing Robustness and Regression against Adversarial EXEmples. (83%)Matous Kozak; Luca Demetrio; Dmitrijs Trizna; Fabio Roli
Assessing Adversarial Robustness of Large Language Models: An Empirical Study. (76%)Zeyu Yang; Zhao Meng; Xiaochen Zheng; Roger Wattenhofer
2024-05-03
A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion. (99%)Trinath Sai Subhash Reddy Pittala; Uma Maheswara Rao Meleti; Geethakrishna Puligundla
From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings. (99%)Firuz Juraev; Mohammed Abuhamad; Eric Chan-Tin; George K. Thiruvathukal; Tamer Abuhmed
ProFLingo: A Fingerprinting-based Copyright Protection Scheme for Large Language Models. (97%)Heng Jin; Chaoyu Zhang; Shanghao Shi; Wenjing Lou; Y. Thomas Hou
Impact of Architectural Modifications on Deep Learning Adversarial Robustness. (88%)Firuz Juraev; Mohammed Abuhamad; Simon S. Woo; George K Thiruvathukal; Tamer Abuhmed
Adaptive and robust watermark against model extraction attack. (38%)Kaiyi Pang; Tao Qi; Chuhan Wu; Minhao Bai
Robust Explainable Recommendation. (9%)Sairamvinay Vijayaraghavan; Prasant Mohapatra
Adversarial Botometer: Adversarial Analysis for Social Bot Detection. (1%)Shaghayegh Najari; Davood Rafiee; Mostafa Salehi; Reza Farahbakhsh
2024-05-02
Position Paper: Beyond Robustness Against Single Attack Types. (99%)Sihui Dai; Chong Xiang; Tong Wu; Prateek Mittal
Explainability Guided Adversarial Evasion Attacks on Malware Detectors. (98%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam; Moustafa Saleh
Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders. (88%)Yi Yu; Yufei Wang; Song Xia; Wenhan Yang; Shijian Lu; Yap-Peng Tan; Alex C. Kot
Poisoning Attacks on Federated Learning for Autonomous Driving. (75%)Sonakshi Garg; Hugo Jönsson; Gustav Kalander; Axel Nilsson; Bhhaanu Pirange; Viktor Valadi; Johan Östman
Adversarial Attacks on Reinforcement Learning Agents for Command and Control. (75%)Ahaan Dabholkar; James Z. Hare; Mark Mittrick; John Richardson; Nicholas Waytowich; Priya Narayanan; Saurabh Bagchi
Boosting Jailbreak Attack with Momentum. (41%)Yihao Zhang; Zeming Wei
Uniformly Stable Algorithms for Adversarial Training and Beyond. (10%)Jiancong Xiao; Jiawei Zhang; Zhi-Quan Luo; Asuman Ozdaglar
ATTAXONOMY: Unpacking Differential Privacy Guarantees Against Practical Adversaries. (2%)Rachel Cummings; Shlomi Hod; Jayshree Sarathy; Marika Swanberg
2024-05-01
Certified Adversarial Robustness of Machine Learning-based Malware Detectors via (De)Randomized Smoothing. (99%)Daniel Gibert; Luca Demetrio; Giulio Zizzo; Quan Le; Jordi Planes; Battista Biggio
JNI Global References Are Still Vulnerable: Attacks and Defenses. (12%)Yi He; Yuan Zhou; Yacong Gu; Purui Su; Qi Li; Yajin Zhou; Yong Jiang
Robustness of graph embedding methods for community detection. (2%)Zhi-Feng Wei; Pablo Moriano; Ramakrishnan Kannan
Exploiting Positional Bias for Query-Agnostic Generative Content in Search. (1%)Andrew Parry; Sean MacAvaney; Debasis Ganguly
2024-04-30
Revisiting the Adversarial Robustness of Vision Language Models: a Multimodal Perspective. (99%)Wanqi Zhou; Shuanghao Bai; Qibin Zhao; Badong Chen
Probing Unlearned Diffusion Models: A Transferable Adversarial Attack Perspective. (99%)Xiaoxuan Han; Songlin Yang; Wei Wang; Yang Li; Jing Dong
AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples. (99%)Antonio Emanuele Cinà; Jérôme Rony; Maura Pintor; Luca Demetrio; Ambra Demontis; Battista Biggio; Ismail Ben Ayed; Fabio Roli
Provably Robust Conformal Prediction with Improved Efficiency. (98%)Ge Yan; Yaniv Romano; Tsui-Wei Weng
ASAM: Boosting Segment Anything Model with Adversarial Tuning. (98%)Bo Li; Haoke Xiao; Lv Tang
Adversarial Attacks and Defense for Conversation Entailment Task. (98%)Zhenning Yang; Ryan Krawec; Liang-Yuan Wu
Causal Perception Inspired Representation Learning for Trustworthy Image Quality Assessment. (92%)Lei Wang; Desen Yuan
Transferring Troubles: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning. (81%)Xuanli He; Jun Wang; Qiongkai Xu; Pasquale Minervini; Pontus Stenetorp; Benjamin I. P. Rubinstein; Trevor Cohn
Let's Focus: Focused Backdoor Attack against Federated Transfer Learning. (75%)Marco Arazzi; Stefanos Koffas; Antonino Nocera; Stjepan Picek
VeriFence: Lightweight and Precise Spectre Defenses for Untrusted Linux Kernel Extensions. (1%)Luis Gerhorst; Henriette Herzog; Peter Wägemann; Maximilian Ott; Rüdiger Kapitza; Timo Hönig
URVFL: Undetectable Data Reconstruction Attack on Vertical Federated Learning. (1%)Duanyi Yao; Songze Li; Xueluan Gong; Sizai Hou; Gaoning Pan
Physical Backdoor: Towards Temperature-based Backdoor Attacks in the Physical World. (1%)Wen Yin; Jian Lou; Pan Zhou; Yulai Xie; Dan Feng; Yuhua Sun; Tailai Zhang; Lichao Sun
2024-04-29
Assessing Cybersecurity Vulnerabilities in Code Large Language Models. (99%)Md Imran Hossen; Jianyi Zhang; Yinzhi Cao; Xiali Hei
A Systematic Evaluation of Adversarial Attacks against Speech Emotion Recognition Models. (99%)Nicolas Facchinetti; Federico Simonetta; Stavros Ntalampiras
Certification of Speaker Recognition Models to Additive Perturbations. (54%)Dmitrii Korzh; Elvir Karimov; Mikhail Pautov; Oleg Y. Rogov; Ivan Oseledets
Espresso: Robust Concept Filtering in Text-to-Image Models. (15%)Anudeep Das; Vasisht Duddu; Rui Zhang; N. Asokan
Why You Should Not Trust Interpretations in Machine Learning: Adversarial Attacks on Partial Dependence Plots. (13%)Xi Xin; Giles Hooker; Fei Huang
Machine Learning for Windows Malware Detection and Classification: Methods, Challenges and Ongoing Research. (3%)Daniel Gibert
Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection. (1%)Konstantinos Tsigos; Evlampios Apostolidis; Spyridon Baxevanakis; Symeon Papadopoulos; Vasileios Mezaris
Harmonic Machine Learning Models are Robust. (1%)Nicholas S. Kersting; Yi Li; Aman Mohanty; Oyindamola Obisesan; Raphael Okochu
Enhancing IoT Security: A Novel Feature Engineering Approach for ML-Based Intrusion Detection Systems. (1%)Afsaneh Mahanipour; Hana Khamfroush
2024-04-28
Learnable Linguistic Watermarks for Tracing Model Extraction Attacks on Large Language Models. (1%)Minhao Bai; Kaiyi Pang; Yongfeng Huang
2024-04-27
Towards Robust Recommendation: A Review and an Adversarial Robustness Evaluation Library. (92%)Lei Cheng; Xiaowen Huang; Jitao Sang; Jian Yu
Privacy-Preserving Aggregation for Decentralized Learning with Byzantine-Robustness. (70%)Ali Reza Ghavamipour; Benjamin Zi Hao Zhao; Oguzhan Ersoy; Fatih Turkmen
Bounding the Expected Robustness of Graph Neural Networks Subject to Node Feature Attacks. (67%)Yassine Abbahaddou; Sofiane Ennadir; Johannes F. Lutzeyer; Michalis Vazirgiannis; Henrik Boström
Are Watermarks Bugs for Deepfake Detectors? Rethinking Proactive Forensics. (2%)Xiaoshuai Wu; Xin Liao; Bo Ou; Yuling Liu; Zheng Qin
2024-04-26
Attacking Bayes: On the Adversarial Robustness of Bayesian Neural Networks. (99%)Yunzhen Feng; Tim G. J. Rudner; Nikolaos Tsilivis; Julia Kempe
Adversarial Examples: Generation Proposal in the Context of Facial Recognition Systems. (92%)Marina Fuster; Ignacio Vidaurreta
Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications. (54%)Quan Zhang; Binqi Zeng; Chijin Zhou; Gwihwan Go; Heyuan Shi; Yu Jiang
Evaluations of Machine Learning Privacy Defenses are Misleading. (3%)Michael Aerni; Jie Zhang; Florian Tramèr
Enhancing Privacy and Security of Autonomous UAV Navigation. (2%)Vatsal Aggarwal; Arjun Ramesh Kaushik; Charanjit Jutla; Nalini Ratha
Adversarial Reweighting with $\alpha$-Power Maximization for Domain Adaptation. (1%)Xiang Gu; Xi Yu; Yan Yang; Jian Sun; Zongben Xu
Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization. (1%)Dang Nguyen; Paymon Haddad; Eric Gan; Baharan Mirzasoleiman
Adversarial Consistency and the Uniqueness of the Adversarial Bayes Classifier. (1%)Natalie S. Frank
2024-04-25
Generating Minimalist Adversarial Perturbations to Test Object-Detection Models: An Adaptive Multi-Metric Evolutionary Search Approach. (98%)Cristopher McIntyre-Garcia; Adrien Heymans; Beril Borali; Won-Sook Lee; Shiva Nejati
PAD: Patch-Agnostic Defense against Adversarial Patch Attacks. (92%)Lihua Jing; Rui Wang; Wenqi Ren; Xin Dong; Cong Zou
Defending Spiking Neural Networks against Adversarial Attacks through Image Purification. (84%)Weiran Chen; Qi Sun; Qi Xu
Don't Say No: Jailbreaking LLM by Suppressing Refusal. (67%)Yukai Zhou; Zhijie Huang; Feiyang Lu; Zhan Qin; Wenjie Wang
A Self-Organizing Clustering System for Unsupervised Distribution Shift Detection. (12%)Sebastián Basterrech; Line Clemmensen; Gerardo Rubino
Constructing Optimal Noise Channels for Enhanced Robustness in Quantum Machine Learning. (2%)David Winderl; Nicola Franco; Jeanette Miriam Lorenz
Energy-Latency Manipulation of Multi-modal Large Language Models via Verbose Samples. (2%)Kuofeng Gao; Jindong Gu; Yang Bai; Shu-Tao Xia; Philip Torr; Wei Liu; Zhifeng Li
Talking Nonsense: Probing Large Language Models' Understanding of Adversarial Gibberish Inputs. (1%)Valeriia Cherepanova; James Zou
2024-04-24
Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks. (99%)Erh-Chung Chen; Pin-Yu Chen; I-Hsin Chung; Che-Rung Lee
An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape. (99%)Sifat Muhammad Abdullah; Aravind Cheruvu; Shravya Kanchi; Taejoong Chung; Peng Gao; Murtuza Jadliwala; Bimal Viswanath
An Empirical Study of Aegis. (98%)Daniel Saragih; Paridhi Goel; Tejas Balaji; Alyssa Li
A General Black-box Adversarial Attack on Graph-based Fake News Detectors. (96%)Peican Zhu; Zechen Pan; Yang Liu; Jiwei Tian; Keke Tang; Zhen Wang
A Comparative Analysis of Adversarial Robustness for Quantum and Classical Machine Learning Models. (83%)Maximilian Wendlinger; Kilian Tscharke; Pascal Debus
MISLEAD: Manipulating Importance of Selected features for Learning Epsilon in Evasion Attack Deception. (83%)Vidit Khazanchi; Pavan Kulkarni; Yuvaraj Govindarajulu; Manojkumar Parmar
Investigating the prompt leakage effect and black-box defenses for multi-turn LLM interactions. (45%)Divyansh Agarwal; Alexander R. Fabbri; Philippe Laban; Ben Risher; Shafiq Joty; Caiming Xiong; Chien-Sheng Wu
Universal Adversarial Triggers Are Not Universal. (16%)Nicholas Meade; Arkil Patel; Siva Reddy
CLAD: Robust Audio Deepfake Detection Against Manipulation Attacks with Contrastive Learning. (2%)Haolin Wu; Jing Chen; Ruiying Du; Cong Wu; Kun He; Xingcan Shang; Hao Ren; Guowen Xu
2024-04-23
Security Analysis of WiFi-based Sensing Systems: Threats from Perturbation Attacks. (61%)Hangcheng Cao; Wenbin Huang; Guowen Xu; Xianhao Chen; Ziyang He; Jingyang Hu; Hongbo Jiang; Yuguang Fang
Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures. (61%)Thanh Toan Nguyen; Quoc Viet Hung Nguyen; Thanh Tam Nguyen; Thanh Trung Huynh; Thanh Thi Nguyen; Matthias Weidlich; Hongzhi Yin
PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency. (54%)Yueqi Xie; Minghong Fang; Neil Zhenqiang Gong
Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models. (47%)Jingyao Xu; Yuetong Lu; Yandong Li; Siyang Lu; Dongdong Wang; Xiang Wei
Talk Too Much: Poisoning Large Language Models under Token Limit. (38%)Jiaming He; Wenbo Jiang; Guanyu Hou; Wenshu Fan; Rui Zhang; Hongwei Li
Leverage Variational Graph Representation For Model Poisoning on Federated Learning. (10%)Kai Li; Xin Yuan; Jingjing Zheng; Wei Ni; Falko Dressler; Abbas Jamalipour
Formal Verification of Graph Convolutional Networks with Uncertain Node Features and Uncertain Graph Structure. (2%)Tobias Ladner; Michael Eichelbeck; Matthias Althoff
Does It Make Sense to Explain a Black Box With Another Black Box? (1%)Julien Delaunay; Luis Galárraga; Christine Largouët
Graph Machine Learning in the Era of Large Language Models (LLMs). (1%)Wenqi Fan; Shijie Wang; Jiani Huang; Zhikai Chen; Yu Song; Wenzhuo Tang; Haitao Mao; Hui Liu; Xiaorui Liu; Dawei Yin; Qing Li
2024-04-22
Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective. (98%)Yiming Liu; Kezhao Liu; Yao Xiao; Ziyi Dong; Xiaogang Xu; Pengxu Wei; Liang Lin
Double Privacy Guard: Robust Traceable Adversarial Watermarking against Face Recognition. (93%)Yunming Zhang; Dengpan Ye; Sipeng Shen; Caiyun Xie; Ziyi Liu; Jiacheng Deng; Long Tang
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction. (74%)Wenhao Lan; Yijun Yang; Haihua Shen; Shan Li
Explicit Lipschitz Value Estimation Enhances Policy Robustness Against Perturbation. (67%)Xulin Chen; Ruipeng Liu; Garrett E. Katz
Audio Anti-Spoofing Detection: A Survey. (62%)Menglu Li; Yasaman Ahmadiadli; Xiao-Ping Zhang
Dual Model Replacement: Invisible Multi-target Backdoor Attack based on Federal Learning. (41%)Rong Wang; Guichen Zhou; Mingjun Gao; Yunpeng Xiao
Protecting Your LLMs with Information Bottleneck. (26%)Zichuan Liu; Zefan Wang; Linjie Xu; Jinyu Wang; Lei Song; Tianchun Wang; Chunlin Chen; Wei Cheng; Jiang Bian
Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs. (13%)Javier Rando; Francesco Croce; Kryštof Mitka; Stepan Shabalin; Maksym Andriushchenko; Nicolas Flammarion; Florian Tramèr
Deep Learning as Ricci Flow. (2%)Anthony Baptista; Alessandro Barp; Tapabrata Chakraborti; Chris Harbron; Ben D. MacArthur; Christopher R. S. Banerji
Hyp-OC: Hyperbolic One Class Classification for Face Anti-Spoofing. (1%)Kartik Narayan; Vishal M. Patel
Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction. (1%)Zifan Zhang; Minghong Fang; Jiayuan Huang; Yuchen Liu
Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations. (1%)Sukmin Cho; Soyeong Jeong; Jeongyeon Seo; Taeho Hwang; Jong C. Park
2024-04-21
Attack on Scene Flow using Point Clouds. (98%)Haniyeh Ehsani Oskouie; Mohammad-Shahram Moin; Shohreh Kasaei
Fermi-Bose Machine. (96%)Mingshan Xie; Yuchen Wang; Haiping Huang
Robust EEG-based Emotion Recognition Using an Inception and Two-sided Perturbation Model. (50%)Shadi Sartipi; Mujdat Cetin
AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs. (47%)Anselm Paulus; Arman Zharmagambetov; Chuan Guo; Brandon Amos; Yuandong Tian
Swap It Like Its Hot: Segmentation-based spoof attacks on eye-tracking images. (26%)Anish S. Narkar; Brendan David-John
Trojan Detection in Large Language Models: Insights from The Trojan Detection Challenge. (1%)Narek Maloyan; Ekansh Verma; Bulat Nutfullin; Bislan Ashinov
2024-04-20
Reliable Model Watermarking: Defending Against Theft without Compromising on Evasion. (99%)Hongyu Zhu; Sichu Liang; Wentao Hu; Fangqi Li; Ju Jia; Shilin Wang
Beyond Score Changes: Adversarial Attack on No-Reference Image Quality Assessment from Two Perspectives. (99%)Chenxi Yang; Yujia Liu; Dingquan Li; Yan Zhong; Tingting Jiang
Pixel is a Barrier: Diffusion Models Are More Adversarially Robust Than We Think. (99%)Haotian Xue; Yongxin Chen
Backdoor Attacks and Defenses on Semantic-Symbol Reconstruction in Semantic Communications. (41%)Yuan Zhou; Rose Qingyang Hu; Yi Qian
2024-04-19
How Real Is Real? A Human Evaluation Framework for Unrestricted Adversarial Examples. (99%)Dren Fazlija; Arkadij Orlov; Johanna Schrader; Monty-Maximilian Zühlke; Michael Rohs; Daniel Kudenko
AED-PADA: Improving Generalizability of Adversarial Example Detection via Principal Adversarial Domain Adaptation. (99%)Heqi Peng; Yunhong Wang; Ruijie Yang; Beichen Li; Rui Wang; Yuanfang Guo
A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only. (75%)Jiazhu Dai; Haoyu Sun
Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models. (5%)Zhenyang Ni; Rui Ye; Yuxi Wei; Zhen Xiang; Yanfeng Wang; Siheng Chen
MLSD-GAN -- Generating Strong High Quality Face Morphing Attacks using Latent Semantic Disentanglement. (3%)Aravinda Reddy PN; Raghavendra Ramachandra; Krothapalli Sreenivasa Rao; Pabitra Mitra
Model-Based Counterfactual Explanations Incorporating Feature Space Attributes for Tabular Data. (1%)Yuta Sumiya; Hayaru shouno
LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning. (1%)Beichen Li; Yuanfang Guo; Heqi Peng; Yangxi Li; Yunhong Wang
2024-04-18
Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors. (99%)Raz Lapid; Almog Dubin; Moshe Sipper
Advancing the Robustness of Large Language Models through Self-Denoised Smoothing. (98%)Jiabao Ji; Bairu Hou; Zhen Zhang; Guanhua Zhang; Wenqi Fan; Qing Li; Yang Zhang; Gaowen Liu; Sijia Liu; Shiyu Chang
SA-Attack: Speed-adaptive stealthy adversarial attack on trajectory prediction. (98%)Huilin Yin; Jiaxiang Li; Pengju Zhen; Jun Yan
Enhance Robustness of Language Models Against Variation Attack through Graph Integration. (33%)Zi Xiong; Lizhi Qing; Yangyang Kang; Jiawei Liu; Hongsong Li; Changlong Sun; Xiaozhong Liu; Wei Lu
Uncovering Safety Risks of Large Language Models through Concept Activation Vector. (22%)Zhihao Xu; Ruixuan Huang; Changyu Chen; Shuai Wang; Xiting Wang
Proteus: Preserving Model Confidentiality during Graph Optimizations. (15%)Yubo Gao; Maryam Haghifam; Christina Giannoula; Renbo Tu; Gennady Pekhimenko; Nandita Vijaykumar
Omniview-Tuning: Boosting Viewpoint Invariance of Vision-Language Pre-training Models. (2%)Shouwei Ruan; Yinpeng Dong; Hanqing Liu; Yao Huang; Hang Su; Xingxing Wei
Is There No Such Thing as a Bad Question? H4R: HalluciBot For Ratiocination, Rewriting, Ranking, and Routing. (1%)William Watson; Nicole Cho; Nishan Srishankar
2024-04-17
The Victim and The Beneficiary: Exploiting a Poisoned Model to Train a Clean Model on Poisoned Data. (83%)Zixuan Zhu; Rui Wang; Cong Zou; Lihua Jing
GenFighter: A Generative and Evolutive Textual Attack Removal. (82%)Md Athikul Islam; Edoardo Serra; Sushil Jajodia
Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement. (80%)Pushkar Shukla; Dhruv Srikanth; Lee Cohen; Matthew Turk
Exploring DNN Robustness Against Adversarial Attacks Using Approximate Multipliers. (75%)Mohammad Javad Askarizadeh; Ebrahim Farahmand; Jorge Castro-Godinez; Ali Mahani; Laura Cabrera-Quiros; Carlos Salazar-Garcia
Exploring the Transferability of Visual Prompting for Multimodal Large Language Models. (2%)Yichi Zhang; Yinpeng Dong; Siyuan Zhang; Tianzan Min; Hang Su; Jun Zhu
Toward Understanding the Disagreement Problem in Neural Network Feature Attribution. (1%)Niklas Koenen; Marvin N. Wright
Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness. (1%)Hangtao Zhang; Shengshan Hu; Yichen Wang; Leo Yu Zhang; Ziqi Zhou; Xianlong Wang; Yanjun Zhang; Chao Chen
Towards Robust and Interpretable EMG-based Hand Gesture Recognition using Deep Metric Meta Learning. (1%)Simon Tam; Shriram Tallam Puranam Raghu; Étienne Buteau; Erik Scheme; Mounir Boukadoum; Alexandre Campeau-Lecours; Benoit Gosselin
2024-04-16
Efficiently Adversarial Examples Generation for Visual-Language Models under Targeted Transfer Scenarios using Diffusion Models. (99%)Qi Guo; Shanmin Pang; Xiaojun Jia; Qing Guo
Adversarial Identity Injection for Semantic Face Image Synthesis. (38%)Giuseppe Tarollo; Tomaso Fontanini; Claudio Ferrari; Guido Borghi; Andrea Prati
Robust Noisy Label Learning via Two-Stream Sample Distillation. (1%)Sihan Bai; Sanping Zhou; Zheng Qin; Le Wang; Nanning Zheng
2024-04-15
Black-box Adversarial Transferability: An Empirical Study in Cybersecurity Perspective. (99%)Khushnaseeb Roshan; Aasim Zafar
Towards a Novel Perspective on Adversarial Examples Driven by Frequency. (99%)Zhun Zhang; Yi Zeng; Qihe Liu; Shijie Zhou
Ti-Patch: Tiled Physical Adversarial Patch for no-reference video quality metrics. (83%)Victoria Leonenkova; Ekaterina Shumitskaya; Anastasia Antsiferova; Dmitriy Vatolin
Improving Weakly-Supervised Object Localization Using Adversarial Erasing and Pseudo Label. (1%)Byeongkeun Kang; Sinhae Cha; Yeejin Lee
Enhancing Code Vulnerability Detection via Vulnerability-Preserving Data Augmentation. (1%)Shangqing Liu; Wei Ma; Jian Wang; Xiaofei Xie; Ruitao Feng; Yang Liu
Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering. (1%)Zaid Khan; Yun Fu
2024-04-14
Counteracting Concept Drift by Learning with Future Malware Predictions. (96%)Branislav Bosansky; Lada Hospodkova; Michal Najman; Maria Rigaki; Elnaz Babayeva; Viliam Lisy
Watermark-embedded Adversarial Examples for Copyright Protection against Diffusion Models. (96%)Peifei Zhu; Tsubasa Takahashi; Hirokatsu Kataoka
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies. (76%)Brian R. Bartoldson; James Diffenderfer; Konstantinos Parasyris; Bhavya Kailkhura
FaceCat: Enhancing Face Recognition Security with a Unified Generative Model Framework. (22%)Jiawei Chen; Xiao Yang; Yinpeng Dong; Hang Su; Jianteng Peng; Zhaoxia Yin
2024-04-13
Stability and Generalization in Free Adversarial Training. (96%)Xiwei Cheng; Kexin Fu; Farzan Farnia
Proof-of-Learning with Incentive Security. (2%)Zishuo Zhao; Zhixuan Fang; Xuechao Wang; Xi Chen; Yuan Zhou
2024-04-12
PASA: Attack Agnostic Unsupervised Adversarial Detection using Prediction & Attribution Sensitivity Analysis. (99%)Dipkamal Bhusal; Md Tanvirul Alam; Monish K. Veerabhadran; Michael Clifford; Sara Rampazzi; Nidhi Rastogi
Counterfactual Explanations for Face Forgery Detection via Adversarial Removal of Artifacts. (99%)Yang Li; Songlin Yang; Wei Wang; Ziwen He; Bo Peng; Jing Dong
Struggle with Adversarial Defense? Try Diffusion. (99%)Yujie Li; Yanbin Wang; Haitao Xu; Bin Liu; Jianguo Sun; Zhenhao Guo; Wenrui Ma
Multimodal Attack Detection for Action Recognition Models. (83%)Furkan Mumcu; Yasin Yilmaz
A Survey of Neural Network Robustness Assessment in Image Recognition. (83%)Jie Wang; Jun Ai; Minyan Lu; Haoran Su; Dan Yu; Yutao Zhang; Junda Zhu; Jingyu Liu
Practical Region-level Attack against Segment Anything Models. (81%)Yifan Shen; Zhengyuan Li; Gang Wang
FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models. (69%)Yanting Wang; Wei Zou; Jinyuan Jia
Mitigating Cascading Effects in Large Adversarial Graph Environments. (2%)James D. Cunningham; Conrad S. Tucker
On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation. (1%)Agneet Chatterjee; Tejas Gokhale; Chitta Baral; Yezhou Yang
Empowering Malware Detection Efficiency within Processing-in-Memory Architecture. (1%)Sreenitha Kasarapu; Sathwika Bavikadi; Sai Manoj Pudukotai Dinakarrao
2024-04-11
Persistent Classification: A New Approach to Stability of Data and Adversarial Examples. (98%)Brian Bell; Michael Geyer; David Glickenstein; Keaton Hamm; Carlos Scheidegger; Amanda Fernandez; Juston Moore
Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization. (98%)Runqi Lin; Chaojian Yu; Tongliang Liu
Backdoor Contrastive Learning via Bi-level Trigger Optimization. (96%)Weiyu Sun; Xinyu Zhang; Hao Lu; Yingcong Chen; Ting Wang; Jinghui Chen; Lu Lin
Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers. (92%)Nayan Moni Baishya; B. R. Manoj
CodeFort: Robust Training for Code Generation Models. (33%)Yuhao Zhang; Shiqi Wang; Haifeng Qian; Zijian Wang; Mingyue Shang; Linbo Liu; Sanjay Krishna Gouda; Baishakhi Ray; Murali Krishna Ramanathan; Xiaofei Ma; Anoop Deoras
AmpleGCG: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs. (12%)Zeyi Liao; Huan Sun
LeapFrog: The Rowhammer Instruction Skip Attack. (8%)Andrew Adiletta; M. Caner Tol; Kemal Derya; Berk Sunar; Saad Islam
Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies. (1%)Zichao Li; Cihang Xie; Ekin Dogus Cubuk
2024-04-10
Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data. (99%)Yu Qiao; Chaoning Zhang; Apurba Adhikary; Choong Seon Hong
Lost in Translation: Modern Neural Networks Still Struggle With Small Realistic Image Transformations. (82%)Ofir Shifman; Yair Weiss
Adversarial purification for no-reference image-quality metrics: applicability study and new methods. (26%)Aleksandr Gushchin; Anna Chistyakova; Vladislav Minashkin; Anastasia Antsiferova; Dmitriy Vatolin
Simpler becomes Harder: Do LLMs Exhibit a Coherent Behavior on Simplified Corpora? (2%)Miriam Anschütz; Edoardo Mosca; Georg Groh
TrajPRed: Trajectory Prediction with Region-based Relation Learning. (1%)Chen Zhou; Ghassan AlRegib; Armin Parchami; Kunjan Singh
2024-04-09
Towards Building a Robust Toxicity Predictor. (99%)Dmitriy Bespalov; Sourav Bhabesh; Yi Xiang; Liutong Zhou; Yanjun Qi
On adversarial training and the 1 Nearest Neighbor classifier. (99%)Amir Hagai; Yair Weiss
LRR: Language-Driven Resamplable Continuous Representation against Adversarial Tracking Attacks. (80%)Jianlang Chen; Xuhong Ren; Qing Guo; Felix Juefei-Xu; Di Lin; Wei Feng; Lei Ma; Jianjun Zhao
Towards Robust Domain Generation Algorithm Classification. (80%)Arthur Drichel; Marc Meyer; Ulrike Meyer
SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models. (41%)Xinfeng Li; Yuchen Yang; Jiangyi Deng; Chen Yan; Yanjiao Chen; Xiaoyu Ji; Wenyuan Xu
Sandwich attack: Multi-language Mixture Adaptive Attack on LLMs. (31%)Bibek Upadhayay; Vahid Behzadan
Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning. (26%)Emre Ozfatura; Kerem Ozfatura; Alptekin Kupcu; Deniz Gunduz
How to Craft Backdoors with Unlabeled Data Alone? (1%)Yifei Wang; Wenhan Ma; Yisen Wang
2024-04-08
Certified PEFTSmoothing: Parameter-Efficient Fine-Tuning with Randomized Smoothing. (99%)Chengyan Fu; Wenjie Wang
David and Goliath: An Empirical Evaluation of Attacks and Defenses for QNNs at the Deep Edge. (99%)Miguel Costa; Sandro Pinto
BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack. (99%)Viet Quoc Vo; Ehsan Abbasnejad; Damith C. Ranasinghe
Case Study: Neural Network Malware Detection Verification for Feature and Image Datasets. (98%)Preston K. Robinette; Diego Manzanas Lopez; Serena Serbinowska; Kevin Leach; Taylor T. Johnson
Out-of-Distribution Data: An Acquaintance of Adversarial Examples -- A Survey. (98%)Naveen Karunanayake; Ravin Gunawardena; Suranga Seneviratne; Sanjay Chawla
Quantum Adversarial Learning for Kernel Methods. (75%)Giuseppe Montalbano; Leonardo Banchi
Investigating the Impact of Quantization on Adversarial Robustness. (50%)Qun Li; Yuan Meng; Chen Tang; Jiacheng Jiang; Zhi Wang
SphereHead: Stable 3D Full-head Synthesis with Spherical Tri-plane Representation. (1%)Heyuan Li; Ce Chen; Tianhao Shi; Yuda Qiu; Sizhe An; Guanying Chen; Xiaoguang Han
2024-04-07
Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods. (99%)Roopkatha Dey; Aivy Debnath; Sayak Kumar Dutta; Kaustav Ghosh; Arijit Mitra; Arghya Roy Chowdhury; Jaydip Sen
Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning. (15%)Yu Bi; Yekai Li; Xuan Feng; Xianghang Mi
How much reliable is ChatGPT's prediction on Information Extraction under Input Perturbations? (5%)Ishani Mondal; Abhilasha Sancheti
SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. (1%)Mael Jullien; Marco Valentino; André Freitas
2024-04-06
CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems. (86%)Francesco Marchiori; Mauro Conti
Learning Minimal NAP Specifications for Neural Network Verification. (80%)Chuqin Geng; Zhaoyue Wang; Haolin Ye; Saifei Liao; Xujie Si
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods. (67%)Elita Lobo; Harvineet Singh; Marek Petrik; Cynthia Rudin; Himabindu Lakkaraju
Goal-guided Generative Prompt Injection Attack on Large Language Models. (67%)Chong Zhang; Mingyu Jin; Qinkai Yu; Chengzhi Liu; Haochen Xue; Xiaobo Jin
Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training. (61%)Shizhan Gong; Qi Dou; Farzan Farnia
Exploiting Sequence Number Leakage: TCP Hijacking in NAT-Enabled Wi-Fi Networks. (3%)Yuxiang Yang; Xuewei Feng; Qi Li; Kun Sun; Ziqiang Wang; Ke Xu
2024-04-05
Evaluating Adversarial Robustness: A Comparison Of FGSM, Carlini-Wagner Attacks, And The Role of Distillation as Defense Mechanism. (99%)Trilokesh Ranjan Sarkar; Nilanjan Das; Pralay Sankar Maitra; Bijoy Some; Ritwik Saha; Orijita Adhikary; Bishal Bose; Jaydip Sen
Reliable Feature Selection for Adversarially Robust Cyber-Attack Detection. (98%)João Vitorino; Miguel Silva; Eva Maia; Isabel Praça
DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models. (15%)Khawar Islam; Muhammad Zaigham Zaheer; Arif Mahmood; Karthik Nandakumar
Compositional Estimation of Lipschitz Constants for Deep Neural Networks. (13%)Yuezhu Xu; S. Sivaranjani
Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning. (12%)K Naveen Kumar; C Krishna Mohan; Aravind Machiry
2024-04-04
Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks. (99%)Lei Zhang; Yuhang Zhou; Yi Yang; Xinbo Gao
FACTUAL: A Novel Framework for Contrastive Learning Based Robust SAR Image Classification. (98%)Xu Wang; Tian Ye; Rajgopal Kannan; Viktor Prasanna
Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning. (16%)Hongsheng Hu; Shuo Wang; Tian Dong; Minhui Xue
Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations. (8%)Fatima Ezzeddine; Omran Ayoub; Silvia Giordano
Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? (2%)Shuo Chen; Zhen Han; Bailan He; Zifeng Ding; Wenqian Yu; Philip Torr; Volker Tresp; Jindong Gu
2024-04-03
Adversarial Attacks and Dimensionality in Text Classifiers. (99%)Nandish Chattopadhyay; Atreya Goswami; Anupam Chattopadhyay
Unsegment Anything by Simulating Deformation. (97%)Jiahao Lu; Xingyi Yang; Xinchao Wang
"Are Adversarial Phishing Webpages a Threat in Reality?" Understanding the Users' Perception of Adversarial Webpages. (81%)Ying Yuan; Qingying Hao; Giovanni Apruzzese; Mauro Conti; Gang Wang
JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks. (75%)Weidi Luo; Siyuan Ma; Xiaogeng Liu; Xiaoyu Guo; Chaowei Xiao
Learn to Disguise: Avoid Refusal Responses in LLM's Defense via a Multi-agent Attacker-Disguiser Game. (11%)Qianqiao Xu; Zhiliang Tian; Hongyan Wu; Zhen Huang; Yiping Song; Feng Liu; Dongsheng Li
A Unified Membership Inference Method for Visual Self-supervised Encoder via Part-aware Capability. (9%)Jie Zhu; Jirong Zha; Ding Li; Leye Wang
Steganographic Passport: An Owner and User Verifiable Credential for Deep Model IP Protection Without Retraining. (1%)Qi Cui; Ruohan Meng; Chaohui Xu; Chip-Hong Chang
2024-04-02
Humanizing Machine-Generated Content: Evading AI-Text Detection through Adversarial Attack. (99%)Ying Zhou; Ben He; Le Sun
ADVREPAIR: Provable Repair of Adversarial Attack. (99%)Zhiming Chi; Jianan Ma; Pengfei Yang; Cheng-Chao Huang; Renjue Li; Xiaowei Huang; Lijun Zhang
Jailbreaking Prompt Attack: A Controllable Adversarial Attack against Diffusion Models. (97%)Jiachen Ma; Anda Cao; Zhiqing Xiao; Yijiang Li; Jie Zhang; Chao Ye; Junbo Zhao
One Noise to Rule Them All: Multi-View Adversarial Attacks with Universal Perturbation. (92%)Mehmet Ergezer; Phat Duong; Christian Green; Tommy Nguyen; Abdurrahman Zeybey
Defense without Forgetting: Continual Adversarial Defense with Anisotropic & Isotropic Pseudo Replay. (88%)Yuhang Zhou; Zhongyun Hua
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks. (83%)Maksym Andriushchenko; Francesco Croce; Nicolas Flammarion
READ: Improving Relation Extraction from an ADversarial Perspective. (81%)Dawei Li; William Hogan; Jingbo Shang
Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors. (64%)Victoria Graf; Qin Liu; Muhao Chen
Red-Teaming Segment Anything Model. (45%)Krzysztof Jankowski; Bartlomiej Sobieski; Mateusz Kwiatkowski; Jakub Szulc; Michal Janik; Hubert Baniecki; Przemyslaw Biecek
Towards Robust 3D Pose Transfer with Adversarial Learning. (31%)Haoyu Chen; Hao Tang; Ehsan Adeli; Guoying Zhao
Designing a Photonic Physically Unclonable Function Having Resilience to Machine Learning Attacks. (12%)Elena R. Henderson; Jessie M. Henderson; Hiva Shahoei; William V. Oxford; Eric C. Larson; Duncan L. MacFarlane; Mitchell A. Thornton
Exploring Backdoor Vulnerabilities of Chat Models. (2%)Yunzhuo Hao; Wenkai Yang; Yankai Lin
CAPE: CAM as a Probabilistic Ensemble for Enhanced DNN Interpretation. (1%)Townim Faisal Chowdhury; Kewen Liao; Vu Minh Hieu Phan; Minh-Son To; Yutong Xie; Kevin Hung; David Ross; Anton van den Hengel; Johan W. Verjans; Zhibin Liao
2024-04-01
The Double-Edged Sword of Input Perturbations to Robust Accurate Fairness. (99%)Xuran Li; Peng Wu; Yanting Chen; Xingjun Ma; Zhen Zhang; Kaixiang Dong
Multi-granular Adversarial Attacks against Black-box Neural Ranking Models. (99%)Yu-An Liu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Yixing Fan; Xueqi Cheng
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks. (93%)Zhiyuan Cheng; Zhaoyi Liu; Tengda Guo; Shiwei Feng; Dongfang Liu; Mingjie Tang; Xiangyu Zhang
Poisoning Decentralized Collaborative Recommender System and Its Countermeasures. (33%)Ruiqi Zheng; Liang Qu; Tong Chen; Kai Zheng; Yuhui Shi; Hongzhi Yin
Can Biases in ImageNet Models Explain Generalization? (10%)Paul Gavrikov; Janis Keuper
UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models. (10%)Zihan Guan; Mengxuan Hu; Sheng Li; Anil Vullikanti
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models. (2%)Yuxin Wen; Leo Marchyok; Sanghyun Hong; Jonas Geiping; Tom Goldstein; Nicholas Carlini
An incremental hybrid adaptive network-based IDS in Software Defined Networks to detect stealth attacks. (1%)Abdullah H Alqahtani
2024-03-31
PID Control-Based Self-Healing to Improve the Robustness of Large Language Models. (75%)Zhuotong Chen; Zihu Wang; Yifan Yang; Qianxiao Li; Zheng Zhang
Machine Learning Robustness: A Primer. (62%)Houssem Ben Braiek; Foutse Khomh
2024-03-30
STBA: Towards Evaluating the Robustness of DNNs for Query-Limited Black-box Scenario. (99%)Renyang Liu; Kwok-Yan Lam; Wei Zhou; Sixing Wu; Jun Zhao; Dongting Hu; Mingming Gong
Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches. (98%)Lingxuan Wu; Xiao Yang; Yinpeng Dong; Liuwei Xie; Hang Su; Jun Zhu
Shortcuts Arising from Contrast: Effective and Covert Clean-Label Attacks in Prompt-Based Learning. (5%)Xiaopeng Xie; Ming Yan; Xiwen Zhou; Chenlong Zhao; Suli Wang; Yong Zhang; Joey Tianyi Zhou
2024-03-29
On Inherent Adversarial Robustness of Active Vision Systems. (99%)Amitangshu Mukherjee; Timur Ibrayev; Kaushik Roy
Benchmarking the Robustness of Temporal Action Detection Models Against Temporal Corruptions. (68%)Runhao Zeng; Xiaoyong Chen; Jiaming Liang; Huisi Wu; Guangzhong Cao; Yong Guo
Deepfake Sentry: Harnessing Ensemble Intelligence for Resilient Detection and Generalisation. (8%)Liviu-Daniel Ştefan; Dan-Cristian Stanciu; Mihai Dogariu; Mihai Gabriel Constantin; Andrei Cosmin Jitaru; Bogdan Ionescu (University "Politehnica" of Bucharest, Romania)
The Impact of Prompts on Zero-Shot Detection of AI-Generated Text. (2%)Kaito Taguchi; Yujie Gu; Kouichi Sakurai
GDA: Generalized Diffusion for Robust Test-time Adaptation. (1%)Yun-Yun Tsai; Fu-Chen Chen; Albert Y. C. Chen; Junfeng Yang; Che-Chun Su; Min Sun; Cheng-Hao Kuo
Efficient Data-Free Model Stealing with Label Diversity. (1%)Yiyong Liu; Rui Wen; Michael Backes; Yang Zhang
Cross-Lingual Transfer Robustness to Lower-Resource Languages on Adversarial Datasets. (1%)Shadi Manafi; Nikhil Krishnaswamy
2024-03-28
Towards Understanding Dual BN In Hybrid Adversarial Training. (82%)Chenshuang Zhang; Chaoning Zhang; Kang Zhang; Axi Niu; Junmo Kim; In So Kweon
Improving Adversarial Data Collection by Supporting Annotators: Lessons from GAHD, a German Hate Speech Dataset. (82%)Janis Goldzycher; Paul Röttger; Gerold Schneider
On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks. (41%)Xiaoguang Li; Zitao Li; Ninghui Li; Wenhai Sun
MedBN: Robust Test-Time Adaptation against Malicious Test Samples. (10%)Hyejin Park; Jeongyeon Hwang; Sunung Mun; Sangdon Park; Jungseul Ok
Imperceptible Protection against Style Imitation from Diffusion Models. (2%)Namhyuk Ahn; Wonhyuk Ahn; KiYoon Yoo; Daesik Kim; Seung-Hun Nam
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks. (1%)Orson Mengara
2024-03-27
Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks. (99%)Tian Ye; Rajgopal Kannan; Viktor Prasanna; Carl Busart
CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection. (99%)Jiayi Zhu; Qing Guo; Felix Juefei-Xu; Yihao Huang; Yang Liu; Geguang Pu
MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models. (98%)Yanting Wang; Hongye Fu; Wei Zou; Jinyuan Jia
Bayesian Learned Models Can Detect Adversarial Malware For Free. (97%)Bao Gia Doan; Dang Quang Nguyen; Paul Montague; Tamas Abraham; Olivier De Vel; Seyit Camtepe; Salil S. Kanhere; Ehsan Abbasnejad; Damith C. Ranasinghe
MisGUIDE : Defense Against Data-Free Deep Learning Model Extraction. (95%)Mahendra Gurve; Sankar Behera; Satyadev Ahlawat; Yamuna Prasad
Towards Sustainable SecureML: Quantifying Carbon Footprint of Adversarial Machine Learning. (83%)Syed Mhamudul Hasan; Abdur R. Shahid; Ahmed Imteaj
Deep Learning for Robust and Explainable Models in Computer Vision. (82%)Mohammadreza Amirian
SemRoDe: Macro Adversarial Training to Learn Representations That are Robust to Word-Level Attacks. (81%)Brian Formento; Wenjie Feng; Chuan Sheng Foo; Luu Anh Tuan; See-Kiong Ng
JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models. (54%)Patrick Chao; Edoardo Debenedetti; Alexander Robey; Maksym Andriushchenko; Francesco Croce; Vikash Sehwag; Edgar Dobriban; Nicolas Flammarion; George J. Pappas; Florian Tramer; Hamed Hassani; Eric Wong
Vulnerability Detection with Code Language Models: How Far Are We? (26%)Yangruibo Ding; Yanjun Fu; Omniyyah Ibrahim; Chawin Sitawarin; Xinyun Chen; Basel Alomair; David Wagner; Baishakhi Ray; Yizheng Chen
Spikewhisper: Temporal Spike Backdoor Attacks on Federated Neuromorphic Learning over Low-power Devices. (15%)Hanqing Fu; Gaolei Li; Jun Wu; Jianhua Li; Xi Lin; Kai Zhou; Yuchen Liu
Robustness and Visual Explanation for Black Box Image, Video, and ECG Signal Classification with Reinforcement Learning. (15%)Soumyendu Sarkar; Ashwin Ramesh Babu; Sajad Mousavi; Vineet Gundecha; Avisek Naug; Sahand Ghorbanpour
The Impact of Uniform Inputs on Activation Sparsity and Energy-Latency Attacks in Computer Vision. (11%)Andreas Müller; Erwin Quiring
Fact Checking Beyond Training Set. (1%)Payam Karisani; Heng Ji
BAM: Box Abstraction Monitors for Real-time OoD Detection in Object Detection. (1%)Changshun Wu; Weicheng He; Chih-Hong Cheng; Xiaowei Huang; Saddek Bensalem
2024-03-26
DataCook: Crafting Anti-Adversarial Examples for Healthcare Data Copyright Protection. (92%)Sihan Shang; Jiancheng Yang; Zhenglong Sun; Pascal Fua
FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids. (78%)Emad Efatinasab; Francesco Marchiori; Alessandro Brighente; Mirco Rampazzo; Mauro Conti
Boosting Adversarial Training via Fisher-Rao Norm-based Regularization. (69%)Xiangyu Yin; Wenjie Ruan
Optimization-based Prompt Injection Attack to LLM-as-a-Judge. (45%)Jiawen Shi; Zenghui Yuan; Yinuo Liu; Yue Huang; Pan Zhou; Lichao Sun; Neil Zhenqiang Gong
Targeted Visualization of the Backbone of Encoder LLMs. (9%)Isaac Roberts; Alexander Schulz; Luca Hermes; Barbara Hammer
Leak and Learn: An Attacker's Cookbook to Train Using Leaked Data from Federated Learning. (1%)Joshua C. Zhao; Ahaan Dabholkar; Atul Sharma; Saurabh Bagchi
Exploring LLMs as a Source of Targeted Synthetic Textual Data to Minimize High Confidence Misclassifications. (1%)Philip Lippmann; Matthijs Spaan; Jie Yang
2024-03-25
$\textit{LinkPrompt}$: Natural and Universal Adversarial Attacks on Prompt-based Language Models. (99%)Yue Xu; Wenjie Wang
Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving. (98%)Junhao Zheng; Chenhao Lin; Jiahao Sun; Zhengyu Zhao; Qian Li; Chao Shen
The Anatomy of Adversarial Attacks: Concept-based XAI Dissection. (87%)Georgii Mikriukov; Gesina Schwalbe; Franz Motzkus; Korinna Bade
DeepKnowledge: Generalisation-Driven Deep Learning Testing. (82%)Sondess Missaoui; Simos Gerasimou; Nikolaos Matragkas
Revealing Vulnerabilities of Neural Networks in Parameter Learning and Defense Against Explanation-Aware Backdoors. (70%)Md Abdul Kadir; GowthamKrishna Addluri; Daniel Sonntag
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning. (69%)Siyuan Cheng; Guanhong Tao; Yingqi Liu; Guangyu Shen; Shengwei An; Shiwei Feng; Xiangzhe Xu; Kaiyuan Zhang; Shiqing Ma; Xiangyu Zhang
Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models. (1%)Mingyi Zhou; Xiang Gao; Pei Liu; John Grundy; Chunyang Chen; Xiao Chen; Li Li
2024-03-24
Subspace Defense: Discarding Adversarial Perturbations by Learning a Subspace for Clean Signals. (99%)Rui Zheng; Yuhao Zhou; Zhiheng Xi; Tao Gui; Qi Zhang; Xuanjing Huang
Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models. (98%)Kaikang Zhao; Xi Chen; Wei Huang; Liuxin Ding; Xianglong Kong; Fan Zhang
Robust Diffusion Models for Adversarial Purification. (83%)Guang Lin; Zerui Tao; Jianhai Zhang; Toshihisa Tanaka; Qibin Zhao
Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning. (5%)Siyuan Liang; Kuanrong Liu; Jiajun Gong; Jiawei Liang; Yuan Xun; Ee-Chien Chang; Xiaochun Cao
Rumor Detection with a novel graph neural network approach. (4%)Tianrui Liu; Qi Cai; Changxin Xu; Bo Hong; Fanghao Ni; Yuxin Qiao; Tsungwei Yang
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion. (2%)Hossein Souri; Arpit Bansal; Hamid Kazemi; Liam Fowl; Aniruddha Saha; Jonas Geiping; Andrew Gordon Wilson; Rama Chellappa; Tom Goldstein; Micah Goldblum
A General and Efficient Federated Split Learning with Pre-trained Image Transformers for Heterogeneous Data. (1%)Yifan Shi; Yuhui Zhang; Ziyue Huang; Xiaofeng Yang; Li Shen; Wei Chen; Xueqian Wang
2024-03-23
Towards Adversarial Robustness And Backdoor Mitigation in SSL. (76%)Aryan Satpathy; Nilaksh Singh; Dhruva Rajwade; Somesh Kumar
Adversarial Defense Teacher for Cross-Domain Object Detection under Poor Visibility Conditions. (64%)Kaiwen Wang; Yinzhe Shen; Martin Lauer
2024-03-22
Robust optimization for adversarial learning with finite sample complexity guarantees. (96%)André Bertolace; Konstatinos Gatsis; Kostas Margellos
A Transfer Attack to Image Watermarks. (95%)Yuepeng Hu; Zhengyuan Jiang; Moyang Guo; Neil Gong
From Hardware Fingerprint to Access Token: Enhancing the Authentication on IoT Devices. (26%)Yue Xiao; Yi He; Xiaoli Zhang; Qian Wang; Renjie Xie; Kun Sun; Ke Xu; Qi Li
Clean-image Backdoor Attacks. (12%)Dazhong Rong; Guoyao Yu; Shuheng Shen; Xinyi Fu; Peng Qian; Jianhai Chen; Qinming He; Xing Fu; Weiqiang Wang
Forward Learning for Gradient-based Black-box Saliency Map Generation. (1%)Zeliang Zhang; Mingqian Feng; Jinyang Jiang; Rongyi Zhu; Yijie Peng; Chenliang Xu
2024-03-21
Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking. (99%)Qianyu Guo; Jiaming Fu; Yawen Lu; Dongming Gan
Few-Shot Adversarial Prompt Learning on Vision-Language Models. (98%)Yiwei Zhou; Xiaobo Xia; Zhiwei Lin; Bo Han; Tongliang Liu
Reversible Jump Attack to Textual Classifiers with Modification Reduction. (98%)Mingze Ni; Zhensu Sun; Wei Liu
Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures. (82%)Sayanton V. Dibbo; Adam Breuer; Juston Moore; Michael Teti
Adversary-Robust Graph-Based Learning of WSIs. (45%)Saba Heidari Gheshlaghi; Milan Aryal; Nasim Yahyasoltani; Masoud Ganji
Safeguarding Medical Image Segmentation Datasets against Unauthorized Training via Contour- and Texture-Aware Perturbations. (4%)Xun Lin; Yi Yu; Song Xia; Jue Jiang; Haoran Wang; Zitong Yu; Yizhong Liu; Ying Fu; Shuai Wang; Wenzhong Tang; Alex Kot
2024-03-20
FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs. (97%)Jinmin Li; Kuofeng Gao; Yang Bai; Jingyun Zhang; Shu-tao Xia; Yisen Wang
DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation. (96%)Yifan Wu; Jiawei Du; Ping Liu; Yuewei Lin; Wenqing Cheng; Wei Xu
Capsule Neural Networks as Noise Stabilizer for Time Series Data. (93%)Soyeon Kim; Jihyeon Seong; Hyunkyung Han; Jaesik Choi
Adversarial Attacks and Defenses in Automated Control Systems: A Comprehensive Benchmark. (70%)Vitaliy Pozdnyakov; Aleksandr Kovalenko; Ilya Makarov; Mikhail Drobyshevskiy; Kirill Lukyanov
Certified Human Trajectory Prediction. (61%)Mohammadhossein Bahari; Saeed Saadatnejad; Amirhossein Asgari Farsangi; Seyed-Mohsen Moosavi-Dezfooli; Alexandre Alahi
Have You Poisoned My Data? Defending Neural Networks against Data Poisoning. (54%)Fabio De Gaspari; Dorjan Hitaj; Luigi V. Mancini
Mask-based Invisible Backdoor Attacks on Object Detection. (50%)Jeongjin Shin
Defending Against Indirect Prompt Injection Attacks With Spotlighting. (31%)Keegan Hines; Gary Lopez; Matthew Hall; Federico Zarfati; Yonatan Zunger; Emre Kiciman
Don't be a Fool: Pooling Strategies in Offensive Language Detection from User-Intended Adversarial Attacks. (11%)Seunguk Yu; Juhwan Choi; Youngbin Kim
BadEdit: Backdooring large language models by model editing. (1%)Yanzhou Li; Tianlin Li; Kangjie Chen; Jian Zhang; Shangqing Liu; Wenhan Wang; Tianwei Zhang; Yang Liu
Teacher-Student Training for Debiasing: General Permutation Debiasing for Large Language Models. (1%)Adian Liusie; Yassir Fathullah; Mark J. F. Gales
Threats, Attacks, and Defenses in Machine Unlearning: A Survey. (1%)Ziyao Liu; Huanyi Ye; Chen Chen; Kwok-Yan Lam
2024-03-19
As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? (99%)Anjun Hu; Jindong Gu; Francesco Pinto; Konstantinos Kamnitsas; Philip Torr
Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory. (99%)Sensen Gao; Xiaojun Jia; Xuhong Ren; Ivor Tsang; Qing Guo
ADAPT to Robustify Prompt Tuning Vision Transformers. (98%)Masih Eskandar; Tooba Imtiaz; Zifeng Wang; Jennifer Dy
Marlin: Knowledge-Driven Analysis of Provenance Graphs for Efficient and Robust Detection of Cyber Attacks. (75%)Zhenyuan Li; Yangyang Wei; Xiangmin Shen; Lingzhi Wang; Yan Chen; Haitao Xu; Shouling Ji; Fan Zhang; Liang Hou; Wenmao Liu; Xuhong Zhang; Jianwei Ying
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing. (9%)Ehsan Lari; Reza Arablouei; Vinay Chakravarthi Gogineni; Stefan Werner
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content. (8%)Zhuowen Yuan; Zidi Xiong; Yi Zeng; Ning Yu; Ruoxi Jia; Dawn Song; Bo Li
Robust NAS under adversarial training: benchmark, theory, and beyond. (2%)Yongtao Wu; Fanghui Liu; Carl-Johann Simon-Gabriel; Grigorios G Chrysos; Volkan Cevher
Discover and Mitigate Multiple Biased Subgroups in Image Classifiers. (1%)Zeliang Zhang; Mingqian Feng; Zhiheng Li; Chenliang Xu
2024-03-18
Diffusion Denoising as a Certified Defense against Clean-label Poisoning. (99%)Sanghyun Hong; Nicholas Carlini; Alexey Kurakin
SSCAE -- Semantic, Syntactic, and Context-aware natural language Adversarial Examples generator. (99%)Javad Rafiei Asl; Mohammad H. Rafiei; Manar Alohaly; Daniel Takabi
LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model. (99%)Yuxin Cao; Jinghao Li; Xi Xiao; Derui Wang; Minhui Xue; Hao Ge; Wei Liu; Guangwu Hu
Invisible Backdoor Attack Through Singular Value Decomposition. (96%)Wenmin Chen; Xiaowei Xu
Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks. (88%)Andrea Venturi; Dario Stabili; Mirco Marchetti
Impart: An Imperceptible and Effective Label-Specific Backdoor Attack. (83%)Jingke Zhao; Zan Wang; Yongwei Wang; Lanjun Wang
SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications. (78%)Amira Guesmi; Muhammad Abdullah Hanif; Ihsen Alouani; Bassem Ouni; Muhammad Shafique
Electioneering the Network: Dynamic Multi-Step Adversarial Attacks for Community Canvassing. (61%)Saurabh Sharma; Ambuj Singh
Advancing Time Series Classification with Multimodal Language Modeling. (1%)Mingyue Cheng; Yiheng Chen; Qi Liu; Zhiding Liu; Yucong Luo
2024-03-17
Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization. (99%)Yujia Liu; Chenxi Yang; Dingquan Li; Jianhao Ding; Tingting Jiang
A Modified Word Saliency-Based Adversarial Attack on Text Classification Models. (99%)Hetvi Waghela; Sneha Rakshit; Jaydip Sen
Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM. (99%)Linyu Tang; Lei Zhang
Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation. (76%)Silvia Corbara; Alejandro Moreo
RobustSentEmbed: Robust Sentence Embeddings Using Adversarial Self-Supervised Contrastive Learning. (50%)Javad Rafiei Asl; Prajwal Panzade; Eduardo Blanco; Daniel Takabi; Zhipeng Cai
COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits. (22%)Mintong Kang; Nezihe Merve Gürel; Linyi Li; Bo Li
A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats. (9%)Md. Ashraf Uddin; Sunil Aryal; Mohamed Reda Bouadjenek; Muna Al-Hawawreh; Md. Alamin Talukder
Hierarchical Classification for Intrusion Detection System: Effective Design and Empirical Analysis. (2%)Md. Ashraf Uddin; Sunil Aryal; Mohamed Reda Bouadjenek; Muna Al-Hawawreh; Md. Alamin Talukder
CBR - Boosting Adaptive Classification By Retrieval of Encrypted Network Traffic with Out-of-distribution. (1%)Amir Lukach; Ran Dubin; Amit Dvir; Chen Hajaj
Pencil: Private and Extensible Collaborative Learning without the Non-Colluding Assumption. (1%)Xuanqi Liu; Zhuotao Liu; Qi Li; Ke Xu; Mingwei Xu
2024-03-16
Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples. (98%)Ziqi Zhou; Minghui Li; Wei Liu; Shengshan Hu; Yechao Zhang; Wei Wan; Lulu Xue; Leo Yu Zhang; Dezhong Yang; Hai Jin
Understanding Robustness of Visual State Space Models for Image Classification. (98%)Chengbin Du; Yanxi Li; Chang Xu
Improving Adversarial Transferability of Visual-Language Pre-training Models through Collaborative Multimodal Interaction. (92%)Jiyuan Fu; Zhaoyu Chen; Kaixun Jiang; Haijing Guo; Jiafeng Wang; Shuyong Gao; Wenqiang Zhang
Edge Private Graph Neural Networks with Singular Value Perturbation. (11%)Tingting Tang; Yue Niu; Salman Avestimehr; Murali Annavaram
2024-03-15
Benchmarking Adversarial Robustness of Image Shadow Removal with Shadow-adaptive Attacks. (99%)Chong Wang; Yi Yu; Lanqing Guo; Bihan Wen
Towards Non-Adversarial Algorithmic Recourse. (99%)Tobias Leemann; Martin Pawelczyk; Bardh Prenkaj; Gjergji Kasneci
Time-Frequency Jointed Imperceptible Adversarial Attack to Brainprint Recognition with Deep Learning Models. (99%)Hangjie Yi; Yuhang Ming; Dongjun Liu; Wanzeng Kong
Introducing Adaptive Continuous Adversarial Training (ACAT) to Enhance ML Robustness. (87%)Mohamed elShehaby; Aditya Kotha; Ashraf Matrawy
Revisiting Adversarial Training under Long-Tailed Distributions. (80%)Xinli Yue; Ningping Mou; Qian Wang; Lingchen Zhao
Towards Adversarially Robust Dataset Distillation by Curvature Regularization. (54%)Eric Xue; Yijiang Li; Haoyang Liu; Peiran Wang; Yifan Shen; Haohan Wang
Interactive Trimming against Evasive Online Data Manipulation Attacks: A Game-Theoretic Approach. (50%)Yue Fu; Qingqing Ye; Rong Du; Haibo Hu
Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks. (12%)Zahir Alsulaimawi
Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study. (11%)Chenguang Wang; Ruoxi Jia; Xin Liu; Dawn Song
Not Just Change the Labels, Learn the Features: Watermarking Deep Neural Networks with Multi-View Data. (8%)Yuxuan Li; Sarthak Kumar Maharana; Yunhui Guo
Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency. (3%)Soumyadeep Pal; Yuguang Yao; Ren Wang; Bingquan Shen; Sijia Liu
Robust Influence-based Training Methods for Noisy Brain MRI. (1%)Minh-Hao Van; Alycia N. Carey; Xintao Wu
2024-03-14
An Image Is Worth 1000 Lies: Adversarial Transferability across Prompts on Vision-Language Models. (99%)Haochen Luo; Jindong Gu; Fengyuan Liu; Philip Torr
Counter-Samples: A Stateless Strategy to Neutralize Black Box Adversarial Attacks. (99%)Roey Bokobza; Yisroel Mirsky
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency. (98%)Hallgrimur Thorsteinsson; Valdemar J Henriksen; Tong Chen; Raghavendra Selvan
Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement. (83%)Daiwei Yu; Zhuorong Li; Lina Wei; Canghong Jin; Yun Zhang; Sixian Chan
Robust Subgraph Learning by Monitoring Early Training Representations. (80%)Sepideh Neshatfar; Salimeh Yasaei Sekeh
LDPRecover: Recovering Frequencies from Poisoning Attacks against Local Differential Privacy. (76%)Xinyue Sun; Qingqing Ye; Haibo Hu; Jiawei Duan; Tianyu Wo; Jie Xu; Renyu Yang
AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting. (74%)Yu Wang; Xiaogeng Liu; Yu Li; Muhao Chen; Chaowei Xiao
Towards White Box Deep Learning. (15%)Maciej Satkiewicz
Symbiotic Game and Foundation Models for Cyber Deception Operations in Strategic Cyber Warfare. (13%)Tao Li; Quanyan Zhu
PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps. (12%)Ruixuan Liu; Tianhao Wang; Yang Cao; Li Xiong
AVIBench: Towards Evaluating the Robustness of Large Vision-Language Model on Adversarial Visual-Instructions. (2%)Hao Zhang; Wenqi Shao; Hong Liu; Yongqiang Ma; Ping Luo; Yu Qiao; Kaipeng Zhang
Optimistic Verifiable Training by Controlling Hardware Nondeterminism. (1%)Megha Srivastava; Simran Arora; Dan Boneh
Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking. (1%)Weixiang Sun; Yixin Liu; Zhiling Yan; Kaidi Xu; Lichao Sun
ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks. (1%)Zhaoliang Chen; Zhihao Wu; Ylli Sadikaj; Claudia Plant; Hong-Ning Dai; Shiping Wang; Yiu-Ming Cheung; Wenzhong Guo
2024-03-13
Attack Deterministic Conditional Image Generative Models for Diverse and Controllable Generation. (92%)Tianyi Chu; Wei Xing; Jiafu Chen; Zhizhong Wang; Jiakai Sun; Lei Zhao; Haibo Chen; Huaizhong Lin
Fast Inference of Removal-Based Node Influence. (54%)Weikai Li; Zhiping Xiao; Xiao Luo; Yizhou Sun
Tastle: Distract Large Language Models for Automatic Jailbreak Attack. (31%)Zeguan Xiao; Yan Yang; Guanhua Chen; Yun Chen
Adaptive Hybrid Masking Strategy for Privacy-Preserving Face Recognition Against Model Inversion Attack. (8%)Yinggui Wang; Yuanqing Huang; Jianshu Li; Le Yang; Kai Song; Lei Wang
RAF-GI: Towards Robust, Accurate and Fast-Convergent Gradient Inversion Attack in Federated Learning. (2%)Can Liu; Jin Wang; Dongyang Yu
Verifix: Post-Training Correction to Improve Label Noise Robustness with Verified Samples. (1%)Sangamesh Kodge; Deepak Ravikumar; Gobinda Saha; Kaushik Roy
2024-03-12
Versatile Defense Against Adversarial Attacks on Image Recognition. (99%)Haibo Zhang; Zhihua Yao; Kouichi Sakurai
Towards Model Extraction Attacks in GAN-Based Image Translation via Domain Shift Mitigation. (61%)Di Mi; Yanjun Zhang; Leo Yu Zhang; Shengshan Hu; Qi Zhong; Haizhuan Yuan; Shirui Pan
Backdoor Attack with Mode Mixture Latent Modification. (8%)Hongwei Zhang; Xiaoyin Xu; Dongsheng An; Xianfeng Gu; Min Zhang
Towards a Framework for Deep Learning Certification in Safety-Critical Applications Using Inherently Safe Design and Run-Time Error Detection. (2%)Romeo Valentin
Duwak: Dual Watermarks in Large Language Models. (2%)Chaoyi Zhu; Jeroen Galjaard; Pin-Yu Chen; Lydia Y. Chen
Visual Privacy Auditing with Diffusion Models. (1%)Kristian Schwethelm; Johannes Kaiser; Moritz Knolle; Daniel Rueckert; Georgios Kaissis; Alexander Ziller
2024-03-11
Intra-Section Code Cave Injection for Adversarial Evasion Attacks on Windows PE Malware File. (99%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam; Moustafa Saleh
epsilon-Mesh Attack: A Surface-based Adversarial Point Cloud Attack for Facial Expression Recognition. (99%)Batuhan Cengiz; Mert Gulsen; Yusuf H. Sahin; Gozde Unal
PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor. (98%)Jaewon Jung; Hongsun Jang; Jaeyong Song; Jinho Lee
Dynamic Perturbation-Adaptive Adversarial Training on Medical Image Classification. (97%)Shuai Li; Xiaoguang Ma; Shancheng Jiang; Lu Meng
Disentangling Policy from Offline Task Representation Learning via Adversarial Data Augmentation. (96%)Chengxing Jia; Fuxiang Zhang; Yi-Chen Li; Chen-Xiao Gao; Xu-Hui Liu; Lei Yuan; Zongzhang Zhang; Yang Yu
PCLD: Point Cloud Layerwise Diffusion for Adversarial Purification. (86%)Mert Gulsen; Batuhan Cengiz; Yusuf H. Sahin; Gozde Unal
Overcoming the Paradox of Certified Training with Gaussian Smoothing. (83%)Stefan Balauca; Mark Niklas Müller; Yuhao Mao; Maximilian Baader; Marc Fischer; Martin Vechev
Real is not True: Backdoor Attacks Against Deepfake Detection. (78%)Hong Sun; Ziqiang Li; Lei Liu; Bin Li
Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning. (61%)Fuseinin Mumuni; Alhassan Mumuni
Stealing Part of a Production Language Model. (38%)Nicholas Carlini; Daniel Paleka; Krishnamurthy Dj Dvijotham; Thomas Steinke; Jonathan Hayase; A. Feder Cooper; Katherine Lee; Matthew Jagielski; Milad Nasr; Arthur Conmy; Eric Wallace; David Rolnick; Florian Tramèr
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration. (9%)Zhenbo Song; Wenhao Gao; Kaihao Zhang; Wenhan Luo; Zhaoxin Fan; Jianfeng Lu
A novel interface for adversarial trivia question-writing. (3%)Jason Liu
Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation. (2%)Xiaoyang Wang; Huihui Bai; Limin Yu; Yao Zhao; Jimin Xiao
Learning with Noisy Foundation Models. (1%)Hao Chen; Jindong Wang; Zihan Wang; Ran Tao; Hongxin Wei; Xing Xie; Masashi Sugiyama; Bhiksha Raj
DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification. (1%)Jasper Stang; Torsten Krauß; Alexandra Dmitrienko
2024-03-10
A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid. (22%)Md. Shirajum Munir; Sravanthi Proddatoori; Manjushree Muralidhara; Walid Saad; Zhu Han; Sachin Shetty
2024-03-09
Hard-label based Small Query Black-box Adversarial Attack. (99%)Jeonghwan Park; Paul Miller; Niall McLaughlin
IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics. (83%)Ekaterina Shumitskaya; Anastasia Antsiferova; Dmitriy Vatolin
iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself. (82%)Yuhao Bian; Shengjing Tian; Xiuping Liu
Attacking Transformers with Feature Diversity Adversarial Perturbation. (70%)Chenxing Gao; Hang Zhou; Junqing Yu; YuTeng Ye; Jiale Cai; Junle Wang; Wei Yang
2024-03-08
Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds. (99%)Tianrui Lou; Xiaojun Jia; Jindong Gu; Li Liu; Siyuan Liang; Bangyan He; Xiaochun Cao
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. (99%)Team Gemini; Petko Georgiev; Ving Ian Lei; Ryan Burnell; Libin Bai; Anmol Gulati; Garrett Tanzer; Damien Vincent; Zhufeng Pan; Shibo Wang; Soroosh Mariooryad; Yifan Ding; Xinyang Geng; Fred Alcober; Roy Frostig; Mark Omernick; Lexi Walker; Cosmin Paduraru; Christina Sorokin; Andrea Tacchetti; Colin Gaffney; Samira Daruki; Olcan Sercinoglu; Zach Gleicher; Juliette Love; Paul Voigtlaender; Rohan Jain; Gabriela Surita; Kareem Mohamed; Rory Blevins; Junwhan Ahn; Tao Zhu; Kornraphop Kawintiranon; Orhan Firat; Yiming Gu; Yujing Zhang; Matthew Rahtz; Manaal Faruqui; Natalie Clay; Justin Gilmer; JD Co-Reyes; Ivo Penchev; Rui Zhu; Nobuyuki Morioka; Kevin Hui; Krishna Haridasan; Victor Campos; Mahdis Mahdieh; Mandy Guo; Samer Hassan; Kevin Kilgour; Arpi Vezer; Heng-Tze Cheng; Liedekerke Raoul de; Siddharth Goyal; Paul Barham; DJ Strouse; Seb Noury; Jonas Adler; Mukund Sundararajan; Sharad Vikram; Dmitry Lepikhin; Michela Paganini; Xavier Garcia; Fan Yang; Dasha Valter; Maja Trebacz; Kiran Vodrahalli; Chulayuth Asawaroengchai; Roman Ring; Norbert Kalb; Livio Baldini Soares; Siddhartha Brahma; David Steiner; Tianhe Yu; Fabian Mentzer; Antoine He; Lucas Gonzalez; Bibo Xu; Raphael Lopez Kaufman; Laurent El Shafey; Junhyuk Oh; Tom Hennigan; George van den Driessche; Seth Odoom; Mario Lucic; Becca Roelofs; Sid Lall; Amit Marathe; Betty Chan; Santiago Ontanon; Luheng He; Denis Teplyashin; Jonathan Lai; Phil Crone; Bogdan Damoc; Lewis Ho; Sebastian Riedel; Karel Lenc; Chih-Kuan Yeh; Aakanksha Chowdhery; Yang Xu; Mehran Kazemi; Ehsan Amid; Anastasia Petrushkina; Kevin Swersky; Ali Khodaei; Gowoon Chen; Chris Larkin; Mario Pinto; Geng Yan; Adria Puigdomenech Badia; Piyush Patil; Steven Hansen; Dave Orr; Sebastien M. R. 
Arnold; Jordan Grimstad; Andrew Dai; Sholto Douglas; Rishika Sinha; Vikas Yadav; Xi Chen; Elena Gribovskaya; Jacob Austin; Jeffrey Zhao; Kaushal Patel; Paul Komarek; Sophia Austin; Sebastian Borgeaud; Linda Friso; Abhimanyu Goyal; Ben Caine; Kris Cao; Da-Woon Chung; Matthew Lamm; Gabe Barth-Maron; Thais Kagohara; Kate Olszewska; Mia Chen; Kaushik Shivakumar; Rishabh Agarwal; Harshal Godhia; Ravi Rajwar; Javier Snaider; Xerxes Dotiwalla; Yuan Liu; Aditya Barua; Victor Ungureanu; Yuan Zhang; Bat-Orgil Batsaikhan; Mateo Wirth; James Qin; Ivo Danihelka; Tulsee Doshi; Martin Chadwick; Jilin Chen; Sanil Jain; Quoc Le; Arjun Kar; Madhu Gurumurthy; Cheng Li; Ruoxin Sang; Fangyu Liu; Lampros Lamprou; Rich Munoz; Nathan Lintz; Harsh Mehta; Heidi Howard; Malcolm Reynolds; Lora Aroyo; Quan Wang; Lorenzo Blanco; Albin Cassirer; Jordan Griffith; Dipanjan Das; Stephan Lee; Jakub Sygnowski; Zach Fisher; James Besley; Richard Powell; Zafarali Ahmed; Dominik Paulus; David Reitter; Zalan Borsos; Rishabh Joshi; Aedan Pope; Steven Hand; Vittorio Selo; Vihan Jain; Nikhil Sethi; Megha Goel; Takaki Makino; Rhys May; Zhen Yang; Johan Schalkwyk; Christina Butterfield; Anja Hauth; Alex Goldin; Will Hawkins; Evan Senter; Sergey Brin; Oliver Woodman; Marvin Ritter; Eric Noland; Minh Giang; Vijay Bolina; Lisa Lee; Tim Blyth; Ian Mackinnon; Machel Reid; Obaid Sarvana; David Silver; Alexander Chen; Lily Wang; Loren Maggiore; Oscar Chang; Nithya Attaluri; Gregory Thornton; Chung-Cheng Chiu; Oskar Bunyan; Nir Levine; Timothy Chung; Evgenii Eltyshev; Xiance Si; Timothy Lillicrap; Demetra Brady; Vaibhav Aggarwal; Boxi Wu; Yuanzhong Xu; Ross McIlroy; Kartikeya Badola; Paramjit Sandhu; Erica Moreira; Wojciech Stokowiec; Ross Hemsley; Dong Li; Alex Tudor; Pranav Shyam; Elahe Rahimtoroghi; Salem Haykal; Pablo Sprechmann; Xiang Zhou; Diana Mincu; Yujia Li; Ravi Addanki; Kalpesh Krishna; Xiao Wu; Alexandre Frechette; Matan Eyal; Allan Dafoe; Dave Lacey; Jay Whang; Thi Avrahami; Ye Zhang; Emanuel Taropa; 
Hanzhao Lin; Daniel Toyama; Eliza Rutherford; Motoki Sano; HyunJeong Choe; Alex Tomala; Chalence Safranek-Shrader; Nora Kassner; Mantas Pajarskas; Matt Harvey; Sean Sechrist; Meire Fortunato; Christina Lyu; Gamaleldin Elsayed; Chenkai Kuang; James Lottes; Eric Chu; Chao Jia; Chih-Wei Chen; Peter Humphreys; Kate Baumli; Connie Tao; Rajkumar Samuel; Cicero Nogueira dos Santos; Anders Andreassen; Nemanja Rakićević; Dominik Grewe; Aviral Kumar; Stephanie Winkler; Jonathan Caton; Andrew Brock; Sid Dalmia; Hannah Sheahan; Iain Barr; Yingjie Miao; Paul Natsev; Jacob Devlin; Feryal Behbahani; Flavien Prost; Yanhua Sun; Artiom Myaskovsky; Thanumalayan Sankaranarayana Pillai; Dan Hurt; Angeliki Lazaridou; Xi Xiong; Ce Zheng; Fabio Pardo; Xiaowei Li; Dan Horgan; Joe Stanton; Moran Ambar; Fei Xia; Alejandro Lince; Mingqiu Wang; Basil Mustafa; Albert Webson; Hyo Lee; Rohan Anil; Martin Wicke; Timothy Dozat; Abhishek Sinha; Enrique Piqueras; Elahe Dabir; Shyam Upadhyay; Anudhyan Boral; Lisa Anne Hendricks; Corey Fry; Josip Djolonga; Yi Su; Jake Walker; Jane Labanowski; Ronny Huang; Vedant Misra; Jeremy Chen; RJ Skerry-Ryan; Avi Singh; Shruti Rijhwani; Dian Yu; Alex Castro-Ros; Beer Changpinyo; Romina Datta; Sumit Bagri; Arnar Mar Hrafnkelsson; Marcello Maggioni; Daniel Zheng; Yury Sulsky; Shaobo Hou; Tom Le Paine; Antoine Yang; Jason Riesa; Dominika Rogozinska; Dror Marcus; Dalia El Badawy; Qiao Zhang; Luyu Wang; Helen Miller; Jeremy Greer; Lars Lowe Sjos; Azade Nova; Heiga Zen; Rahma Chaabouni; Mihaela Rosca; Jiepu Jiang; Charlie Chen; Ruibo Liu; Tara Sainath; Maxim Krikun; Alex Polozov; Jean-Baptiste Lespiau; Josh Newlan; Zeyncep Cankara; Soo Kwak; Yunhan Xu; Phil Chen; Andy Coenen; Clemens Meyer; Katerina Tsihlas; Ada Ma; Juraj Gottweis; Jinwei Xing; Chenjie Gu; Jin Miao; Christian Frank; Zeynep Cankara; Sanjay Ganapathy; Ishita Dasgupta; Steph Hughes-Fitt; Heng Chen; David Reid; Keran Rong; Hongmin Fan; Amersfoort Joost van; Vincent Zhuang; Aaron Cohen; Shixiang Shane Gu; 
Anhad Mohananey; Anastasija Ilic; Taylor Tobin; John Wieting; Anna Bortsova; Phoebe Thacker; Emma Wang; Emily Caveness; Justin Chiu; Eren Sezener; Alex Kaskasoli; Steven Baker; Katie Millican; Mohamed Elhawaty; Kostas Aisopos; Carl Lebsack; Nathan Byrd; Hanjun Dai; Wenhao Jia; Matthew Wiethoff; Elnaz Davoodi; Albert Weston; Lakshman Yagati; Arun Ahuja; Isabel Gao; Golan Pundak; Susan Zhang; Michael Azzam; Khe Chai Sim; Sergi Caelles; James Keeling; Abhanshu Sharma; Andy Swing; YaGuang Li; Chenxi Liu; Carrie Grimes Bostock; Yamini Bansal; Zachary Nado; Ankesh Anand; Josh Lipschultz; Abhijit Karmarkar; Lev Proleev; Abe Ittycheriah; Soheil Hassas Yeganeh; George Polovets; Aleksandra Faust; Jiao Sun; Alban Rrustemi; Pen Li; Rakesh Shivanna; Jeremiah Liu; Chris Welty; Federico Lebron; Anirudh Baddepudi; Sebastian Krause; Emilio Parisotto; Radu Soricut; Zheng Xu; Dawn Bloxwich; Melvin Johnson; Behnam Neyshabur; Justin Mao-Jones; Renshen Wang; Vinay Ramasesh; Zaheer Abbas; Arthur Guez; Constant Segal; Duc Dung Nguyen; James Svensson; Le Hou; Sarah York; Kieran Milan; Sophie Bridgers; Wiktor Gworek; Marco Tagliasacchi; James Lee-Thorp; Michael Chang; Alexey Guseynov; Ale Jakse Hartman; Michael Kwong; Ruizhe Zhao; Sheleem Kashem; Elizabeth Cole; Antoine Miech; Richard Tanburn; Mary Phuong; Filip Pavetic; Sebastien Cevey; Ramona Comanescu; Richard Ives; Sherry Yang; Cosmo Du; Bo Li; Zizhao Zhang; Mariko Iinuma; Clara Huiyi Hu; Aurko Roy; Shaan Bijwadia; Zhenkai Zhu; Danilo Martins; Rachel Saputro; Anita Gergely; Steven Zheng; Dawei Jia; Ioannis Antonoglou; Adam Sadovsky; Shane Gu; Yingying Bi; Alek Andreev; Sina Samangooei; Mina Khan; Tomas Kocisky; Angelos Filos; Chintu Kumar; Colton Bishop; Adams Yu; Sarah Hodkinson; Sid Mittal; Premal Shah; Alexandre Moufarek; Yong Cheng; Adam Bloniarz; Jaehoon Lee; Pedram Pejman; Paul Michel; Stephen Spencer; Vladimir Feinberg; Xuehan Xiong; Nikolay Savinov; Charlotte Smith; Siamak Shakeri; Dustin Tran; Mary Chesus; Bernd Bohnet; George 
Tucker; Glehn Tamara von; Carrie Muir; Yiran Mao; Hideto Kazawa; Ambrose Slone; Kedar Soparkar; Disha Shrivastava; James Cobon-Kerr; Michael Sharman; Jay Pavagadhi; Carlos Araya; Karolis Misiunas; Nimesh Ghelani; Michael Laskin; David Barker; Qiujia Li; Anton Briukhov; Neil Houlsby; Mia Glaese; Balaji Lakshminarayanan; Nathan Schucher; Yunhao Tang; Eli Collins; Hyeontaek Lim; Fangxiaoyu Feng; Adria Recasens; Guangda Lai; Alberto Magni; Cao Nicola De; Aditya Siddhant; Zoe Ashwood; Jordi Orbay; Mostafa Dehghani; Jenny Brennan; Yifan He; Kelvin Xu; Yang Gao; Carl Saroufim; James Molloy; Xinyi Wu; Seb Arnold; Solomon Chang; Julian Schrittwieser; Elena Buchatskaya; Soroush Radpour; Martin Polacek; Skye Giordano; Ankur Bapna; Simon Tokumine; Vincent Hellendoorn; Thibault Sottiaux; Sarah Cogan; Aliaksei Severyn; Mohammad Saleh; Shantanu Thakoor; Laurent Shefey; Siyuan Qiao; Meenu Gaba; Shuo-yiin Chang; Craig Swanson; Biao Zhang; Benjamin Lee; Paul Kishan Rubenstein; Gan Song; Tom Kwiatkowski; Anna Koop; Ajay Kannan; David Kao; Parker Schuh; Axel Stjerngren; Golnaz Ghiasi; Gena Gibson; Luke Vilnis; Ye Yuan; Felipe Tiengo Ferreira; Aishwarya Kamath; Ted Klimenko; Ken Franko; Kefan Xiao; Indro Bhattacharya; Miteyan Patel; Rui Wang; Alex Morris; Robin Strudel; Vivek Sharma; Peter Choy; Sayed Hadi Hashemi; Jessica Landon; Mara Finkelstein; Priya Jhakra; Justin Frye; Megan Barnes; Matthew Mauger; Dennis Daun; Khuslen Baatarsukh; Matthew Tung; Wael Farhan; Henryk Michalewski; Fabio Viola; Felix de Chaumont Quitry; Charline Le Lan; Tom Hudson; Qingze Wang; Felix Fischer; Ivy Zheng; Elspeth White; Anca Dragan; Jean-baptiste Alayrac; Eric Ni; Alexander Pritzel; Adam Iwanicki; Michael Isard; Anna Bulanova; Lukas Zilka; Ethan Dyer; Devendra Sachan; Srivatsan Srinivasan; Hannah Muckenhirn; Honglong Cai; Amol Mandhane; Mukarram Tariq; Jack W. 
Rae; Gary Wang; Kareem Ayoub; Nicholas FitzGerald; Yao Zhao; Woohyun Han; Chris Alberti; Dan Garrette; Kashyap Krishnakumar; Mai Gimenez; Anselm Levskaya; Daniel Sohn; Josip Matak; Inaki Iturrate; Michael B. Chang; Jackie Xiang; Yuan Cao; Nishant Ranka; Geoff Brown; Adrian Hutter; Vahab Mirrokni; Nanxin Chen; Kaisheng Yao; Zoltan Egyed; Francois Galilee; Tyler Liechty; Praveen Kallakuri; Evan Palmer; Sanjay Ghemawat; Jasmine Liu; David Tao; Chloe Thornton; Tim Green; Mimi Jasarevic; Sharon Lin; Victor Cotruta; Yi-Xuan Tan; Noah Fiedel; Hongkun Yu; Ed Chi; Alexander Neitz; Jens Heitkaemper; Anu Sinha; Denny Zhou; Yi Sun; Charbel Kaed; Brice Hulse; Swaroop Mishra; Maria Georgaki; Sneha Kudugunta; Clement Farabet; Izhak Shafran; Daniel Vlasic; Anton Tsitsulin; Rajagopal Ananthanarayanan; Alen Carin; Guolong Su; Pei Sun; Shashank V; Gabriel Carvajal; Josef Broder; Iulia Comsa; Alena Repina; William Wong; Warren Weilun Chen; Peter Hawkins; Egor Filonov; Lucia Loher; Christoph Hirnschall; Weiyi Wang; Jingchen Ye; Andrea Burns; Hardie Cate; Diana Gage Wright; Federico Piccinini; Lei Zhang; Chu-Cheng Lin; Ionel Gog; Yana Kulizhskaya; Ashwin Sreevatsa; Shuang Song; Luis C. Cobo; Anand Iyer; Chetan Tekur; Guillermo Garrido; Zhuyun Xiao; Rupert Kemp; Huaixiu Steven Zheng; Hui Li; Ananth Agarwal; Christel Ngani; Kati Goshvadi; Rebeca Santamaria-Fernandez; Wojciech Fica; Xinyun Chen; Chris Gorgolewski; Sean Sun; Roopal Garg; Xinyu Ye; S. M. 
Ali Eslami; Nan Hua; Jon Simon; Pratik Joshi; Yelin Kim; Ian Tenney; Sahitya Potluri; Lam Nguyen Thiet; Quan Yuan; Florian Luisier; Alexandra Chronopoulou; Salvatore Scellato; Praveen Srinivasan; Minmin Chen; Vinod Koverkathu; Valentin Dalibard; Yaming Xu; Brennan Saeta; Keith Anderson; Thibault Sellam; Nick Fernando; Fantine Huot; Junehyuk Jung; Mani Varadarajan; Michael Quinn; Amit Raul; Maigo Le; Ruslan Habalov; Jon Clark; Komal Jalan; Kalesha Bullard; Achintya Singhal; Thang Luong; Boyu Wang; Sujeevan Rajayogam; Julian Eisenschlos; Johnson Jia; Daniel Finchelstein; Alex Yakubovich; Daniel Balle; Michael Fink; Sameer Agarwal; Jing Li; Dj Dvijotham; Shalini Pal; Kai Kang; Jaclyn Konzelmann; Jennifer Beattie; Olivier Dousse; Diane Wu; Remi Crocker; Chen Elkind; Siddhartha Reddy Jonnalagadda; Jong Lee; Dan Holtmann-Rice; Krystal Kallarackal; Rosanne Liu; Denis Vnukov; Neera Vats; Luca Invernizzi; Mohsen Jafari; Huanjie Zhou; Lilly Taylor; Jennifer Prendki; Marcus Wu; Tom Eccles; Tianqi Liu; Kavya Kopparapu; Francoise Beaufays; Christof Angermueller; Andreea Marzoca; Shourya Sarcar; Hilal Dib; Jeff Stanway; Frank Perbet; Nejc Trdin; Rachel Sterneck; Andrey Khorlin; Dinghua Li; Xihui Wu; Sonam Goenka; David Madras; Sasha Goldshtein; Willi Gierke; Tong Zhou; Yaxin Liu; Yannie Liang; Anais White; Yunjie Li; Shreya Singh; Sanaz Bahargam; Mark Epstein; Sujoy Basu; Li Lao; Adnan Ozturel; Carl Crous; Alex Zhai; Han Lu; Zora Tung; Neeraj Gaur; Alanna Walton; Lucas Dixon; Ming Zhang; Amir Globerson; Grant Uy; Andrew Bolt; Olivia Wiles; Milad Nasr; Ilia Shumailov; Marco Selvi; Francesco Piccinno; Ricardo Aguilar; Sara McCarthy; Misha Khalman; Mrinal Shukla; Vlado Galic; John Carpenter; Kevin Villela; Haibin Zhang; Harry Richardson; James Martens; Matko Bosnjak; Shreyas Rammohan Belle; Jeff Seibert; Mahmoud Alnahlawi; Brian McWilliams; Sankalp Singh; Annie Louis; Wen Ding; Dan Popovici; Lenin Simicich; Laura Knight; Pulkit Mehta; Nishesh Gupta; Chongyang Shi; Saaber Fatehi; 
Jovana Mitrovic; Alex Grills; Joseph Pagadora; Dessie Petrova; Danielle Eisenbud; Zhishuai Zhang; Damion Yates; Bhavishya Mittal; Nilesh Tripuraneni; Yannis Assael; Thomas Brovelli; Prateek Jain; Mihajlo Velimirovic; Canfer Akbulut; Jiaqi Mu; Wolfgang Macherey; Ravin Kumar; Jun Xu; Haroon Qureshi; Gheorghe Comanici; Jeremy Wiesner; Zhitao Gong; Anton Ruddock; Matthias Bauer; Nick Felt; Anirudh GP; Anurag Arnab; Dustin Zelle; Jonas Rothfuss; Bill Rosgen; Ashish Shenoy; Bryan Seybold; Xinjian Li; Jayaram Mudigonda; Goker Erdogan; Jiawei Xia; Jiri Simsa; Andrea Michi; Yi Yao; Christopher Yew; Steven Kan; Isaac Caswell; Carey Radebaugh; Andre Elisseeff; Pedro Valenzuela; Kay McKinney; Kim Paterson; Albert Cui; Eri Latorre-Chimoto; Solomon Kim; William Zeng; Ken Durden; Priya Ponnapalli; Tiberiu Sosea; Christopher A. Choquette-Choo; James Manyika; Brona Robenek; Harsha Vashisht; Sebastien Pereira; Hoi Lam; Marko Velic; Denese Owusu-Afriyie; Katherine Lee; Tolga Bolukbasi; Alicia Parrish; Shawn Lu; Jane Park; Balaji Venkatraman; Alice Talbert; Lambert Rosique; Yuchung Cheng; Andrei Sozanschi; Adam Paszke; Praveen Kumar; Jessica Austin; Lu Li; Khalid Salama; Wooyeol Kim; Nandita Dukkipati; Anthony Baryshnikov; Christos Kaplanis; XiangHai Sheng; Yuri Chervonyi; Caglar Unlu; Diego de Las Casas; Harry Askham; Kathryn Tunyasuvunakool; Felix Gimeno; Siim Poder; Chester Kwak; Matt Miecnikowski; Vahab Mirrokni; Alek Dimitriev; Aaron Parisi; Dangyi Liu; Tomy Tsai; Toby Shevlane; Christina Kouridi; Drew Garmon; Adrian Goedeckemeyer; Adam R. 
Brown; Anitha Vijayakumar; Ali Elqursh; Sadegh Jazayeri; Jin Huang; Sara Mc Carthy; Jay Hoover; Lucy Kim; Sandeep Kumar; Wei Chen; Courtney Biles; Garrett Bingham; Evan Rosen; Lisa Wang; Qijun Tan; David Engel; Francesco Pongetti; Cesare Dario de; Dongseong Hwang; Lily Yu; Jennifer Pullman; Srini Narayanan; Kyle Levin; Siddharth Gopal; Megan Li; Asaf Aharoni; Trieu Trinh; Jessica Lo; Norman Casagrande; Roopali Vij; Loic Matthey; Bramandia Ramadhana; Austin Matthews; CJ Carey; Matthew Johnson; Kremena Goranova; Rohin Shah; Shereen Ashraf; Kingshuk Dasgupta; Rasmus Larsen; Yicheng Wang; Manish Reddy Vuyyuru; Chong Jiang; Joana Ijazi; Kazuki Osawa; Celine Smith; Ramya Sree Boppana; Taylan Bilal; Yuma Koizumi; Ying Xu; Yasemin Altun; Nir Shabat; Ben Bariach; Alex Korchemniy; Kiam Choo; Olaf Ronneberger; Chimezie Iwuanyanwu; Shubin Zhao; David Soergel; Cho-Jui Hsieh; Irene Cai; Shariq Iqbal; Martin Sundermeyer; Zhe Chen; Elie Bursztein; Chaitanya Malaviya; Fadi Biadsy; Prakash Shroff; Inderjit Dhillon; Tejasi Latkar; Chris Dyer; Hannah Forbes; Massimo Nicosia; Vitaly Nikolaev; Somer Greene; Marin Georgiev; Pidong Wang; Nina Martin; Hanie Sedghi; John Zhang; Praseem Banzal; Doug Fritz; Vikram Rao; Xuezhi Wang; Jiageng Zhang; Viorica Patraucean; Dayou Du; Igor Mordatch; Ivan Jurin; Lewis Liu; Ayush Dubey; Abhi Mohan; Janek Nowakowski; Vlad-Doru Ion; Nan Wei; Reiko Tojo; Maria Abi Raad; Drew A. Hudson; Vaishakh Keshava; Shubham Agrawal; Kevin Ramirez; Zhichun Wu; Hoang Nguyen; Ji Liu; Madhavi Sewak; Bryce Petrini; DongHyun Choi; Ivan Philips; Ziyue Wang; Ioana Bica; Ankush Garg; Jarek Wilkiewicz; Priyanka Agrawal; Xiaowei Li; Danhao Guo; Emily Xue; Naseer Shaik; Andrew Leach; Sadh MNM Khan; Julia Wiesinger; Sammy Jerome; Abhishek Chakladar; Alek Wenjiao Wang; Tina Ornduff; Folake Abu; Alireza Ghaffarkhah; Marcus Wainwright; Mario Cortes; Frederick Liu; Joshua Maynez; Slav Petrov; Yonghui Wu; Demis Hassabis; Koray Kavukcuoglu; Jeffrey Dean; Oriol Vinyals
Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume. (98%)Ping Guo; Cheng Gong; Xi Lin; Zhiyuan Yang; Qingfu Zhang
Prepared for the Worst: A Learning-Based Adversarial Attack for Resilience Analysis of the ICP Algorithm. (93%)Ziyu Zhang; Johann Laconte; Daniil Lisus; Timothy D. Barfoot
Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples. (93%)Eda Yilmaz; Hacer Yalim Keles
EVD4UAV: An Altitude-Sensitive Benchmark to Evade Vehicle Detection in UAV. (81%)Huiming Sun; Jiacheng Guo; Zibo Meng; Tianyun Zhang; Jianwu Fang; Yuewei Lin; Hongkai Yu
The Impact of Quantization on the Robustness of Transformer-based Text Classifiers. (45%)Seyed Parsa Neshaei; Yasaman Boreshban; Gholamreza Ghassem-Sani; Seyed Abolghasem Mirroshandel
EdgeLeakage: Membership Information Leakage in Distributed Edge Intelligence Systems. (38%)Kongyang Chen; Yi Lin; Hui Luo; Bing Mi; Yatie Xiao; Chao Ma; Jorge Sá Silva
Speech Robust Bench: A Robustness Benchmark For Speech Recognition. (1%)Muhammad A. Shah; David Solans Noguero; Mikko A. Heikkila; Bhiksha Raj; Nicolas Kourtellis
2024-03-07
Defending Against Unforeseen Failure Modes with Latent Adversarial Training. (83%)Stephen Casper; Lennart Schulze; Oam Patel; Dylan Hadfield-Menell
Fooling Neural Networks for Motion Forecasting via Adversarial Attacks. (33%)Edgar Medina; Leyong Loh
Automatic and Universal Prompt Injection Attacks against Large Language Models. (31%)Xiaogeng Liu; Zhiyuan Yu; Yizhe Zhang; Ning Zhang; Chaowei Xiao
ObjectCompose: Evaluating Resilience of Vision-Based Models on Object-to-Background Compositional Changes. (31%)Hashmat Shadab Malik; Muhammad Huzaifa; Muzammal Naseer; Salman Khan; Fahad Shahbaz Khan
Cell reprogramming design by transfer learning of functional transcriptional networks. (1%)Thomas P. Wytock; Adilson E. Motter
Towards Robustness Analysis of E-Commerce Ranking System. (1%)Ningfei Wang; Yupin Huang; Han Cheng; Jiri Gesi; Xiaojie Wang; Vivek Mittal
2024-03-06
Adversarial Infrared Geometry: Using Geometry to Perform Adversarial Attack against Infrared Pedestrian Detectors. (99%)Kalibinuer Tiliwalidi
Improving Adversarial Training using Vulnerability-Aware Perturbation Budget. (99%)Olukorede Fakorede; Modeste Atsague; Jin Tian
Effect of Ambient-Intrinsic Dimension Gap on Adversarial Vulnerability. (92%)Rajdeep Haldar; Yue Xing; Qifan Song
Belief-Enriched Pessimistic Q-Learning against Adversarial State Perturbations. (16%)Xiaolin Sun; Zizhan Zheng
On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder. (2%)Tingxu Han; Shenghan Huang; Ziqi Ding; Weisong Sun; Yebo Feng; Chunrong Fang; Jun Li; Hanwei Qian; Cong Wu; Quanjun Zhang; Yang Liu; Zhenyu Chen
Verified Training for Counterfactual Explanation Robustness under Data Shift. (2%)Anna P. Meyer; Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
2024-03-05
Towards Robust Federated Learning via Logits Calibration on Non-IID Data. (99%)Yu Qiao; Apurba Adhikary; Chaoning Zhang; Choong Seon Hong
Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees. (96%)Ehsan Nowroozi; Nada Jadalla; Samaneh Ghelichkhani; Alireza Jolfaei
Minimum Topology Attacks for Graph Neural Networks. (83%)Mengmei Zhang; Xiao Wang; Chuan Shi; Lingjuan Lyu; Tianchi Yang; Junping Du
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks. (82%)Ehsan Nowroozi; Imran Haider; Rahim Taheri; Mauro Conti
A general approach to enhance the survivability of backdoor attacks by decision path coupling. (68%)Yufei Zhao; Dingji Wang; Bihuan Chen; Ziqian Chen; Xin Peng
Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks. (61%)Yichang Xu; Ming Yin; Minghong Fang; Neil Zhenqiang Gong
Uplift Modeling for Target User Attacks on Recommender Systems. (12%)Wenjie Wang; Changsheng Wang; Fuli Feng; Wentao Shi; Daizong Ding; Tat-Seng Chua
FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models. (11%)Younghan Lee; Yungi Cho; Woorim Han; Ho Bae; Yunheung Paek
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents. (11%)Qiusi Zhan; Zhixiang Liang; Zifan Ying; Daniel Kang
XAI-Based Detection of Adversarial Attacks on Deepfake Detectors. (8%)Ben Pinhasov; Raz Lapid; Rony Ohayon; Moshe Sipper; Yehudit Aperstein
2024-03-04
Robustness Bounds on the Successful Adversarial Examples: Theory and Practice. (99%)Hiroaki Maeshima; Akira Otsuka
One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models. (99%)Lin Li; Haoyan Guan; Jianing Qiu; Michael Spratling
Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks. (99%)Roie Kazoom; Raz Birman; Ofer Hadar
COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against Semantic Attacks. (96%)Zijian Huang; Wenda Chu; Linyi Li; Chejian Xu; Bo Li
Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks. (26%)Sayedeh Leila Noorbakhsh; Binghui Zhang; Yuan Hong; Binghui Wang
BSDP: Brain-inspired Streaming Dual-level Perturbations for Online Open World Object Detection. (16%)Yu Chen; Liyan Ma; Liping Jing; Jian Yu
Mirage: Defense against CrossPath Attacks in Software Defined Networks. (3%)Shariq Murtuza; Krishna Asawa
Bayesian Uncertainty Estimation by Hamiltonian Monte Carlo: Applications to Cardiac MRI Segmentation. (1%)Yidong Zhao; Joao Tourais; Iain Pierce; Christian Nitsche; Thomas A. Treibel; Sebastian Weingärtner; Artur M. Schweidtmann; Qian Tao
2024-03-03
GuardT2I: Defending Text-to-Image Models from Adversarial Prompts. (10%)Yijun Yang; Ruiyuan Gao; Xiao Yang; Jianyuan Zhong; Qiang Xu
2024-03-02
SAR-AE-SFP: SAR Imagery Adversarial Example in Real Physics domain with Target Scattering Feature Parameters. (99%)Jiahao Cui; Jiale Duan; Binyan Luo; Hang Cao; Wang Guo; Haifeng Li
Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. (68%)Jamie Hayes; Ilia Shumailov; Eleni Triantafillou; Amr Khalifa; Nicolas Papernot
Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models. (56%)Arijit Ghosh Chowdhury; Md Mofijul Islam; Vaibhav Kumar; Faysal Hossain Shezan; Vaibhav Kumar; Vinija Jain; Aman Chadha
AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks. (31%)Yifan Zeng; Yiran Wu; Xiao Zhang; Huazheng Wang; Qingyun Wu
Adversarial Testing for Visual Grounding via Image-Aware Property Reduction. (11%)Zhiyuan Chang; Mingyang Li; Junjie Wang; Cheng Li; Boyu Wu; Fanjiang Xu; Qing Wang
Query Recovery from Easy to Hard: Jigsaw Attack against SSE. (2%)Hao Nie; Wei Wang; Peng Xu; Xianglong Zhang; Laurence T. Yang; Kaitai Liang
Accelerating Greedy Coordinate Gradient via Probe Sampling. (1%)Yiran Zhao; Wenyue Zheng; Tianle Cai; Xuan Long Do; Kenji Kawaguchi; Anirudh Goyal; Michael Shieh
2024-03-01
Robust Deep Reinforcement Learning Through Adversarial Attacks and Training : A Survey. (91%)Lucas Schott; Josephine Delas; Hatem Hajri; Elies Gherbi; Reda Yaich; Nora Boulahia-Cuppens; Frederic Cuppens; Sylvain Lamprier
Resilience of Entropy Model in Distributed Neural Networks. (67%)Milin Zhang; Mohammad Abdi; Shahriar Rifat; Francesco Restuccia
Attacking Delay-based PUFs with Minimal Adversary Model. (45%)Hongming Fei; Owen Millwood; Prosanta Gope; Jack Miskelly; Biplab Sikdar
2024-02-29
Unraveling Adversarial Examples against Speaker Identification -- Techniques for Attack Detection and Victim Model Classification. (99%)Sonal Joshi; Thomas Thebaud; Jesús Villalba; Najim Dehak
How to Train your Antivirus: RL-based Hardening through the Problem-Space. (99%)Jacopo Cortellazzi; Ilias Tsingenopoulos; Branislav Bošanský; Simone Aonzo; Davy Preuveneers; Wouter Joosen; Fabio Pierazzi; Lorenzo Cavallaro
On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations. (88%)Chester Holtz; Yucheng Wang; Chung-Kuan Cheng; Bill Lin
Pointing out the Shortcomings of Relation Extraction Models with Semantically Motivated Adversarials. (76%)Gennaro Nolano; Moritz Blum; Basil Ell; Philipp Cimiano
Assessing Visually-Continuous Corruption Robustness of Neural Networks Relative to Human Performance. (38%)Huakun Shen; Boyue Caroline Hu; Krzysztof Czarnecki; Lina Marsso; Marsha Chechik
Verification of Neural Networks' Global Robustness. (38%)Anan Kabaha; Dana Drachsler-Cohen
SynGhost: Imperceptible and Universal Task-agnostic Backdoor Attack in Pre-trained Language Models. (16%)Pengzhou Cheng; Wei Du; Zongru Wu; Fengwei Zhang; Libo Chen; Gongshen Liu
Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge. (2%)Ansh Arora; Xuanli He; Maximilian Mozes; Srinibas Swain; Mark Dras; Qiongkai Xu
Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes. (1%)Xiaomeng Hu; Pin-Yu Chen; Tsung-Yi Ho
Unveiling Typographic Deceptions: Insights of the Typographic Vulnerability in Large Vision-Language Model. (1%)Hao Cheng; Erjia Xiao; Jindong Gu; Le Yang; Jinhao Duan; Jize Zhang; Jiahang Cao; Kaidi Xu; Renjing Xu
2024-02-28
Enhancing the "Immunity" of Mixture-of-Experts Networks for Adversarial Defense. (99%)Qiao Han; Yong Huang; Xinling Guo; Yiteng Zhai; Yu Qin; Yao Yang
MPAT: Building Robust Deep Neural Networks against Textual Adversarial Attacks. (99%)Fangyuan Zhang; Huichi Zhou; Shuangjiao Li; Hongtao Wang
Catastrophic Overfitting: A Potential Blessing in Disguise. (98%)Mengnan Zhao; Lihe Zhang; Yuqiu Kong; Baocai Yin
Living-off-The-Land Reverse-Shell Detection by Informed Data Augmentation. (86%)Dmitrijs Trizna; Luca Demetrio; Battista Biggio; Fabio Roli
A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems. (64%)Fangzhou Wu; Ning Zhang; Somesh Jha; Patrick McDaniel; Chaowei Xiao
Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction. (33%)Tong Liu; Yingjie Zhang; Zhe Zhao; Yinpeng Dong; Guozhu Meng; Kai Chen
Out-of-Distribution Detection using Neural Activation Prior. (1%)Weilin Wan; Weizhong Zhang; Cheng Jin
2024-02-27
Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates. (99%)Daniele Angioni; Luca Demetrio; Maura Pintor; Luca Oneto; Davide Anguita; Battista Biggio; Fabio Roli
Extreme Miscalibration and the Illusion of Adversarial Robustness. (99%)Vyas Raina; Samson Tan; Volkan Cevher; Aditya Rawal; Sheng Zha; George Karypis
Black-box Adversarial Attacks Against Image Quality Assessment Models. (99%)Yu Ran; Ao-Xiang Zhang; Mingjie Li; Weixuan Tang; Yuan-Gen Wang
Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks. (99%)Zhewei Wu; Ruilong Yu; Qihe Liu; Shuying Cheng; Shilin Qiu; Shijie Zhou
LLM-Resistant Math Word Problem Generation via Adversarial Attacks. (87%)Roy Xie; Chengxuan Huang; Junlin Wang; Bhuwan Dhingra
Breaking the Black-Box: Confidence-Guided Model Inversion Attack for Distribution Shift. (83%)Xinhao Liu; Yingzhao Jiang; Zetao Lin
Model X-ray: Detecting Backdoored Models via Decision Boundary. (67%)Yanghao Su; Jie Zhang; Ting Xu; Tianwei Zhang; Weiming Zhang; Nenghai Yu
Towards Fairness-Aware Adversarial Learning. (11%)Yanghao Zhang; Tianle Zhang; Ronghui Mu; Xiaowei Huang; Wenjie Ruan
Time-Restricted Double-Spending Attack on PoW-based Blockchains. (1%)Yiming Jiang; Jiangfan Zhang
2024-02-26
Improving the JPEG-resistance of Adversarial Attacks on Face Recognition by Interpolation Smoothing. (99%)Kefu Guo; Fengfan Zhou; Hefei Ling; Ping Li; Hui Liu
Improving behavior based authentication against adversarial attack using XAI. (99%)Dong Qin; George Amariucai; Daji Qiao; Yong Guan
Adversarial example soups: averaging multiple adversarial examples improves transferability without increasing additional generation time. (99%)Bo Yang; Hengwei Zhang; Chenwei Li; Jindong Wang
A Curious Case of Remarkable Resilience to Gradient Attacks via Fully Convolutional and Differentiable Front End with a Skip Connection. (98%)Leonid Boytsov; Ameya Joshi; Filipe Condessa
Adversarial Perturbations of Physical Signals. (92%)Robert L. Bassett; Austin Van Dellen; Anthony P. Austin
Unveiling Vulnerability of Self-Attention. (87%)Khai Jiet Liong; Hongqiu Wu; Hai Zhao
Edge Detectors Can Make Deep Convolutional Neural Networks More Robust. (83%)Jin Ding; Jie-Chao Zhao; Yong-Zhi Sun; Ping Tan; Jia-Wei Wang; Ji-En Ma; You-Tong Fang
Investigating Deep Watermark Security: An Adversarial Transferability Perspective. (64%)Biqing Qi; Junqi Gao; Yiang Luo; Jianxing Liu; Ligang Wu; Bowen Zhou
Defending LLMs against Jailbreaking Attacks via Backtranslation. (64%)Yihan Wang; Zhouxing Shi; Andrew Bai; Cho-Jui Hsieh
Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts. (62%)Mikayel Samvelyan; Sharath Chandra Raparthy; Andrei Lupu; Eric Hambro; Aram H. Markosyan; Manish Bhatt; Yuning Mao; Minqi Jiang; Jack Parker-Holder; Jakob Foerster; Tim Rocktäschel; Roberta Raileanu
Pandora's White-Box: Precise Training Data Detection and Extraction in Large Language Models. (50%)Jeffrey G. Wang; Jason Wang; Marvin Li; Seth Neel
WIPI: A New Web Threat for LLM-Driven Web Agents. (8%)Fangzhou Wu; Shutong Wu; Yulong Cao; Chaowei Xiao
RoCoIns: Enhancing Robustness of Large Language Models through Code-Style Instructions. (4%)Yuansen Zhang; Xiao Wang; Zhiheng Xi; Han Xia; Tao Gui; Qi Zhang; Xuanjing Huang
An Innovative Information Theory-based Approach to Tackle and Enhance The Transparency in Phishing Detection. (1%)Van Nguyen; Tingmin Wu; Xingliang Yuan; Marthie Grobler; Surya Nepal; Carsten Rudolph
2024-02-25
From Noise to Clarity: Unraveling the Adversarial Suffix of Large Language Model Attacks via Translation of Text Embeddings. (98%)Hao Wang; Hao Li; Minlie Huang; Lei Sha
An Adversarial Robustness Benchmark for Enterprise Network Intrusion Detection. (92%)João Vitorino; Miguel Silva; Eva Maia; Isabel Praça
Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing. (76%)Jiabao Ji; Bairu Hou; Alexander Robey; George J. Pappas; Hamed Hassani; Yang Zhang; Eric Wong; Shiyu Chang
Adversarial-Robust Transfer Learning for Medical Imaging via Domain Assimilation. (73%)Xiaohui Chen; Tie Luo
Evaluating Robustness of Generative Search Engine on Adversarial Factual Questions. (13%)Xuming Hu; Xiaochuan Li; Junzhe Chen; Yinghui Li; Yangning Li; Xiaoguang Li; Yasheng Wang; Qun Liu; Lijie Wen; Philip S. Yu; Zhijiang Guo
DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers. (2%)Xirui Li; Ruochen Wang; Minhao Cheng; Tianyi Zhou; Cho-Jui Hsieh
State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey. (1%)Chaoyu Zhang
m2mKD: Module-to-Module Knowledge Distillation for Modular Transformers. (1%)Ka Man Lo; Yiming Liang; Wenyu Du; Yuantao Fan; Zili Wang; Wenhao Huang; Lei Ma; Jie Fu
2024-02-24
PRP: Propagating Universal Perturbations to Attack Large Language Model Guard-Rails. (87%)Neal Mangaokar; Ashish Hooda; Jihye Choi; Shreyas Chandrashekaran; Kassem Fawaz; Somesh Jha; Atul Prakash
LLMs Can Defend Themselves Against Jailbreaking in a Practical Manner: A Vision Paper. (86%)Daoyuan Wu; Shuai Wang; Yang Liu; Ning Liu
RAUCA: A Novel Physical Adversarial Attack on Vehicle Detectors via Robust and Accurate Camouflage Generation. (82%)Jiawei Zhou; Linye Lyu; Daojing He; Yu Li
Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks. (76%)Zhiying Jiang; Xingyuan Li; Jinyuan Liu; Xin Fan; Risheng Liu
Optimal Zero-Shot Detector for Multi-Armed Attacks. (50%)Federica Granese; Marco Romanelli; Pablo Piantanida
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning. (1%)Yong Liu; Zirui Zhu; Chaoyu Gong; Minhao Cheng; Cho-Jui Hsieh; Yang You
2024-02-23
Distilling Adversarial Robustness Using Heterogeneous Teachers. (99%)Jieren Deng; Aaron Palmer; Rigel Mahmood; Ethan Rathbun; Jinbo Bi; Kaleel Mahmood; Derek Aguiar
Fast Adversarial Attacks on Language Models In One GPU Minute. (98%)Vinu Sankar Sadasivan; Shoumik Saha; Gaurang Sriramanan; Priyatham Kattakinda; Atoosa Chegini; Soheil Feizi
A Robust Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via (De)Randomized Smoothing. (98%)Daniel Gibert; Giulio Zizzo; Quan Le; Jordi Planes
ProTIP: Probabilistic Robustness Verification on Text-to-Image Diffusion Models against Stochastic Perturbation. (93%)Yi Zhang; Yun Tang; Wenjie Ruan; Xiaowei Huang; Siddartha Khastgir; Paul Jennings; Xingyu Zhao
On the Duality Between Sharpness-Aware Minimization and Adversarial Training. (92%)Yihao Zhang; Hangzhou He; Jingyu Zhu; Huanran Chen; Yifei Wang; Zeming Wei
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm. (87%)Yanqi Qiao; Dazhuang Liu; Rui Wang; Kaitai Liang
Deep Networks Always Grok and Here is Why. (76%)Ahmed Imtiaz Humayun; Randall Balestriero; Richard Baraniuk
BSPA: Exploring Black-box Stealthy Prompt Attacks against Image Generators. (67%)Yu Tian; Xiao Yang; Yinpeng Dong; Heming Yang; Hang Su; Jun Zhu
Reinforcement Learning-Based Approaches for Enhancing Security and Resilience in Smart Control: A Survey on Attack and Defense Methods. (61%)Zheyu Zhang
Break the Breakout: Reinventing LM Defense Against Jailbreak Attacks with Self-Refinement. (5%)Heegyu Kim; Sehyun Yuk; Hyunsouk Cho
Prime+Retouch: When Cache is Locked and Leaked. (2%)Jaehyuk Lee; Fan Sang; Taesoo Kim
TREC: APT Tactic / Technique Recognition via Few-Shot Provenance Subgraph Learning. (1%)Mingqi Lv; HongZhe Gao; Xuebo Qiu; Tieming Chen; Tiantian Zhu; Jinyin Chen; Shouling Ji
2024-02-22
SoK: Analyzing Adversarial Examples: A Framework to Study Adversary Knowledge. (99%)Lucas Fenaux; Florian Kerschbaum
Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off. (98%)Futa Waseda; Ching-Chun Chang; Isao Echizen
Stop Reasoning! When Multimodal LLMs with Chain-of-Thought Reasoning Meets Adversarial Images. (93%)Zefeng Wang; Zhen Han; Shuo Chen; Fan Xue; Zifeng Ding; Xun Xiao; Volker Tresp; Philip Torr; Jindong Gu
Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment Pre-training for Noisy Slot Filling Task. (83%)Jinxu Zhao; Guanting Dong; Yueyan Qiu; Tingfeng Hui; Xiaoshuai Song; Daichi Guo; Weiran Xu
Mitigating Fine-tuning Jailbreak Attack with Backdoor Enhanced Alignment. (75%)Jiongxiao Wang; Jiazhao Li; Yiquan Li; Xiangyu Qi; Junjie Hu; Yixuan Li; Patrick McDaniel; Muhao Chen; Bo Li; Chaowei Xiao
Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large Language Models. (26%)Zachary Horvitz; Jingru Chen; Rahul Aditya; Harshvardhan Srivastava; Robert West; Zhou Yu; Kathleen McKeown
2024-02-21
AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning. (99%)Vasudev Gohil; Satwik Patnaik; Dileep Kalathil; Jeyavijayan Rajendran
A Simple and Yet Fairly Effective Defense for Graph Neural Networks. (98%)Sofiane Ennadir; Yassine Abbahaddou; Johannes F. Lutzeyer; Michalis Vazirgiannis; Henrik Boström
Adversarial Purification and Fine-tuning for Robust UDC Image Restoration. (98%)Zhenbo Song; Zhenyuan Zhang; Kaihao Zhang; Wenhan Luo; Zhaoxin Fan; Jianfeng Lu
Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment. (83%)Vyas Raina; Adian Liusie; Mark Gales
Robustness of Deep Neural Networks for Micro-Doppler Radar Classification. (80%)Mikolaj Czerkawski; Carmine Clemente; Craig Michie; Christos Tachtatzis
Whispers in Grammars: Injecting Covert Backdoors to Compromise Dense Retrieval Systems. (76%)Quanyu Long; Yue Deng; LeiLei Gan; Wenya Wang; Sinno Jialin Pan
Flexible Physical Camouflage Generation Based on a Differential Approach. (38%)Yang Li; Wenyi Tan; Chenxing Zhao; Shuangju Zhou; Xinkai Liang; Quan Pan
VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models. (10%)Jiawei Liang; Siyuan Liang; Man Luo; Aishan Liu; Dongchen Han; Ee-Chien Chang; Xiaochun Cao
Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs. (8%)Xiaoxia Li; Siyuan Liang; Jiyi Zhang; Han Fang; Aishan Liu; Ee-Chien Chang
Coercing LLMs to do and reveal (almost) anything. (4%)Jonas Geiping; Alex Stein; Manli Shu; Khalid Saifullah; Yuxin Wen; Tom Goldstein
T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching. (1%)Zizheng Pan; Bohan Zhuang; De-An Huang; Weili Nie; Zhiding Yu; Chaowei Xiao; Jianfei Cai; Anima Anandkumar
2024-02-20
QuanTest: Entanglement-Guided Testing of Quantum Neural Network Systems. (92%)Jinjing Shi; Zimeng Xiao; Heyuan Shi; Yu Jiang; Xuelong Li
Defending Jailbreak Prompts via In-Context Adversarial Game. (76%)Yujun Zhou; Yufei Han; Haomin Zhuang; Taicheng Guo; Kehan Guo; Zhenwen Liang; Hongyan Bao; Xiangliang Zhang
Round Trip Translation Defence against Large Language Model Jailbreaking Attacks. (74%)Canaan Yung; Hadi Mohaghegh Dolatabadi; Sarah Erfani; Christopher Leckie
Investigating the Impact of Model Instability on Explanations and Uncertainty. (69%)Sara Vera Marjanović; Isabelle Augenstein; Christina Lioma
A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models. (68%)Zihao Xu; Yi Liu; Gelei Deng; Yuekang Li; Stjepan Picek
Learning to Poison Large Language Models During Instruction Tuning. (13%)Yao Qiang; Xiangyu Zhou; Saleh Zare Zade; Mohammad Amin Roshani; Douglas Zytko; Dongxiao Zhu
Stealthy Adversarial Attacks on Stochastic Multi-Armed Bandits. (3%)Zhiwei Wang; Huazheng Wang; Hongning Wang
The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative. (1%)Zhen Tan; Chengshuai Zhao; Raha Moraffah; Yifan Li; Yu Kong; Tianlong Chen; Huan Liu
RITFIS: Robust input testing framework for LLMs-based intelligent software. (1%)Mingxuan Xiao; Yan Xiao; Hai Dong; Shunhui Ji; Pengcheng Zhang
2024-02-19
Query-Based Adversarial Prompt Generation. (99%)Jonathan Hayase; Ema Borevkovic; Nicholas Carlini; Florian Tramèr; Milad Nasr
Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training. (99%)Leo Hyun Park; Jaeuk Kim; Myung Gyo Oh; Jaewoo Park; Taekyoung Kwon
AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization. (99%)Jiyao Li; Mingze Ni; Yifei Dong; Tianqing Zhu; Wei Liu
An Adversarial Approach to Evaluating the Robustness of Event Identification Models. (98%)Obai Bahwal; Oliver Kosut; Lalitha Sankar
Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies. (97%)Xiangyu Liu; Chenghao Deng; Yanchao Sun; Yongyuan Liang; Furong Huang
Stealing the Invisible: Unveiling Pre-Trained CNN Models through Adversarial Examples and Timing Side-Channels. (92%)Shubhi Shukla; Manaar Alam; Pabitra Mitra; Debdeep Mukhopadhyay
Attacks on Node Attributes in Graph Neural Networks. (83%)Ying Xu; Michael Lanier; Anindya Sarkar; Yevgeniy Vorobeychik
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors. (68%)Yiwei Lu; Matthew Y. R. Yang; Gautam Kamath; Yaoliang Yu
Self-Guided Robust Graph Structure Refinement. (67%)Yeonjun In; Kanghoon Yoon; Kibum Kim; Kijung Shin; Chanyoung Park
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models. (50%)Christian Schlarmann; Naman Deep Singh; Francesco Croce; Matthias Hein
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning. (15%)Shuai Zhao; Leilei Gan; Luu Anh Tuan; Jie Fu; Lingjuan Lyu; Meihuizi Jia; Jinming Wen
Robustness and Exploration of Variational and Machine Learning Approaches to Inverse Problems: An Overview. (1%)Alexander Auras; Kanchana Vaishnavi Gandikota; Hannah Droege; Michael Moeller
Amplifying Training Data Exposure through Fine-Tuning with Pseudo-Labeled Memberships. (1%)Myung Gyo Oh; Hong Eun Ahn; Leo Hyun Park; Taekyoung Kwon
Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models. (1%)Zihao Luo; Xilie Xu; Feng Liu; Yun Sing Koh; Di Wang; Jingfeng Zhang
2024-02-18
A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of Transformer Textual Models. (93%)Cuong Dang; Dung D. Le; Thai Le
Evaluating Adversarial Robustness of Low dose CT Recovery. (92%)Kanchana Vaishnavi Gandikota; Paramanand Chandramouli; Hannah Droege; Michael Moeller
Evaluating Efficacy of Model Stealing Attacks and Defenses on Quantum Neural Networks. (83%)Satwik Kundu; Debarshi Kundu; Swaroop Ghosh
The Effectiveness of Random Forgetting for Robust Generalization. (75%)Vijaya Raghavan T Ramkumar; Bahram Zonooz; Elahe Arani
Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection. (26%)Jiawei Liang; Siyuan Liang; Aishan Liu; Xiaojun Jia; Junhao Kuang; Xiaochun Cao
Poisoning Federated Recommender Systems with Fake Users. (5%)Ming Yin; Yichang Xu; Minghong Fang; Neil Zhenqiang Gong
SPML: A DSL for Defending Language Models Against Prompt Attacks. (1%)Reshabh K Sharma; Vinayak Gupta; Dan Grossman
Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation. (1%)Hyunjune Shin; Dong-Wan Choi
2024-02-17
Maintaining Adversarial Robustness in Continuous Learning. (75%)Xiaolei Ru; Xiaowei Cao; Zijia Liu; Jack Murdoch Moore; Xin-Ya Zhang; Xia Zhu; Wenjia Wei; Gang Yan
Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in Deep Learning. (22%)Hadi M. Dolatabadi; Sarah M. Erfani; Christopher Leckie
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents. (8%)Wenkai Yang; Xiaohan Bi; Yankai Lin; Sishuo Chen; Jie Zhou; Xu Sun
VoltSchemer: Use Voltage Noise to Manipulate Your Wireless Charger. (2%)Zihao Zhan; Yirui Yang; Haoqi Shan; Hanqiu Wang; Yier Jin; Shuo Wang
2024-02-16
DART: A Principled Approach to Adversarially Robust Unsupervised Domain Adaptation. (99%)Yunjuan Wang; Hussein Hazimeh; Natalia Ponomareva; Alexey Kurakin; Ibrahim Hammoud; Raman Arora
Theoretical Understanding of Learning from Adversarial Perturbations. (98%)Soichiro Kumano; Hiroshi Kera; Toshihiko Yamasaki
Assessing biomedical knowledge robustness in large language models by query-efficient sampling attacks. (98%)R. Patrick Xian; Alex J. Lee; Satvik Lolla; Vincent Wang; Qiming Cui; Russell Ro; Reza Abbasi-Asl
VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models. (92%)Ziyi Yin; Muchao Ye; Tianrong Zhang; Jiaqi Wang; Han Liu; Jinghui Chen; Ting Wang; Fenglong Ma
The AI Security Pyramid of Pain. (47%)Chris M. Ward; Josh Harguess; Julia Tao; Daniel Christman; Paul Spicer; Mike Tan
AIM: Automated Input Set Minimization for Metamorphic Security Testing. (2%)Nazanin Bayati Chaleshtari; Yoann Marquer; Fabrizio Pastore; Lionel C. Briand
Universal Prompt Optimizer for Safe Text-to-Image Generation. (1%)Zongyu Wu; Hongcheng Gao; Yueze Wang; Xiang Zhang; Suhang Wang
ToBlend: Token-Level Blending With an Ensemble of LLMs to Attack AI-Generated Text Detection. (1%)Fan Huang; Haewoon Kwak; Jisun An
2024-02-15
Camouflage is all you need: Evaluating and Enhancing Language Model Robustness Against Camouflage Adversarial Attacks. (62%)Álvaro Huertas-García; Alejandro Martín; Javier Huertas-Tato; David Camacho
On the Safety Concerns of Deploying LLMs/VLMs in Robotics: Highlighting the Risks and Vulnerabilities. (31%)Xiyang Wu; Ruiqi Xian; Tianrui Guan; Jing Liang; Souradip Chakraborty; Fuxiao Liu; Brian Sadler; Dinesh Manocha; Amrit Singh Bedi
Backdoor Attack against One-Class Sequential Anomaly Detection Models. (9%)He Cheng; Shuhan Yuan
A Trembling House of Cards? Mapping Adversarial Attacks against Language Agents. (5%)Lingbo Mo; Zeyi Liao; Boyuan Zheng; Yu Su; Chaowei Xiao; Huan Sun
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning. (3%)Enrique Mármol Campos; Aurora González Vidal; José Luis Hernández Ramos; Antonio Skarmeta
Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks. (1%)Jun-Jie Zhang; Deyu Meng
2024-02-14
Exploring the Adversarial Capabilities of Large Language Models. (98%)Lukas Struppek; Minh Hieu Le; Dominik Hintersdorf; Kristian Kersting
PAL: Proxy-Guided Black-Box Attack on Large Language Models. (92%)Chawin Sitawarin; Norman Mu; David Wagner; Alexandre Araujo
Only My Model On My Data: A Privacy Preserving Approach Protecting one Model and Deceiving Unauthorized Black-Box Models. (92%)Weiheng Chai; Brian Testa; Huantao Ren; Asif Salekin; Senem Velipasalar
How Secure Are Large Language Models (LLMs) for Navigation in Urban Environments? (80%)Congcong Wen; Jiazhao Liang; Shuaihang Yuan; Hao Huang; Yi Fang
Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems. (76%)Shiyi Yang; Lina Yao; Chen Wang; Xiwei Xu; Liming Zhu
Attacking Large Language Models with Projected Gradient Descent. (67%)Simon Geisler; Tom Wollschläger; M. H. I. Abdalla; Johannes Gasteiger; Stephan Günnemann
Detecting Adversarial Spectrum Attacks via Distance to Decision Boundary Statistics. (47%)Wenwei Zhao; Xiaowen Li; Shangqing Zhao; Jie Xu; Yao Liu; Zhuo Lu
SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding. (38%)Zhangchen Xu; Fengqing Jiang; Luyao Niu; Jinyuan Jia; Bill Yuchen Lin; Radha Poovendran
Reward Poisoning Attack Against Offline Reinforcement Learning. (12%)Yinglun Xu; Rohan Gumaste; Gagandeep Singh
Rapid Adoption, Hidden Risks: The Dual Impact of Large Language Model Customization. (9%)Rui Zhang; Hongwei Li; Rui Wen; Wenbo Jiang; Yuan Zhang; Michael Backes; Yun Shen; Yang Zhang
Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation. (3%)Jessica Quaye; Alicia Parrish; Oana Inel; Charvi Rastogi; Hannah Rose Kirk; Minsuk Kahng; Erin van Liemt; Max Bartolo; Jess Tsang; Justin White; Nathan Clement; Rafael Mosquera; Juan Ciro; Vijay Janapa Reddi; Lora Aroyo
Ten Words Only Still Help: Improving Black-Box AI-Generated Text Detection via Proxy-Guided Efficient Re-Sampling. (2%)Yuhui Shi; Qiang Sheng; Juan Cao; Hao Mi; Beizhe Hu; Danding Wang
Leveraging the Context through Multi-Round Interactions for Jailbreaking Attacks. (1%)Yixin Cheng; Markos Georgopoulos; Volkan Cevher; Grigorios G. Chrysos
Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption. (1%)Chenlu Ye; Jiafan He; Quanquan Gu; Tong Zhang
Immediate generalisation in humans but a generalisation lag in deep neural networks — evidence for representational divergence? (1%)Lukas S. Huber; Fred W. Mast; Felix A. Wichmann
Play Guessing Game with LLM: Indirect Jailbreak Attack with Implicit Clues. (1%)Zhiyuan Chang; Mingyang Li; Yi Liu; Junjie Wang; Qing Wang; Yang Liu
2024-02-13
Faster Repeated Evasion Attacks in Tree Ensembles. (96%)Lorenzo Cascioli; Laurens Devos; Ondřej Kuželka; Jesse Davis
Generating Universal Adversarial Perturbations for Quantum Classifiers. (93%)Gautham Anil; Vishnu Vinod; Apurva Narayan
Enhancing Robustness of Indoor Robotic Navigation with Free-Space Segmentation Models Against Adversarial Attacks. (83%)Qiyuan An; Christos Sevastopoulos; Fillia Makedon
Data Reconstruction Attacks and Defenses: A Systematic Evaluation. (76%)Sheng Liu; Zihan Wang; Qi Lei
COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability. (62%)Xingang Guo; Fangxu Yu; Huan Zhang; Lianhui Qin; Bin Hu
Test-Time Backdoor Attacks on Multimodal Large Language Models. (56%)Dong Lu; Tianyu Pang; Chao Du; Qian Liu; Xianjun Yang; Min Lin
Adversarially Robust Feature Learning for Breast Cancer Diagnosis. (33%)Degan Hao; Dooman Arefan; Margarita Zuley; Wendie Berg; Shandong Wu
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast. (31%)Xiangming Gu; Xiaosen Zheng; Tianyu Pang; Chao Du; Qian Liu; Ye Wang; Jing Jiang; Min Lin
Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing. (10%)Alaa Anani; Tobias Lorenz; Bernt Schiele; Mario Fritz
Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation. (1%)Xuexin Chen; Ruichu Cai; Zhengting Huang; Yuxuan Zhu; Julien Horwood; Zhifeng Hao; Zijian Li; Jose Miguel Hernandez-Lobato
2024-02-12
Understanding Deep Learning defenses Against Adversarial Examples Through Visualizations for Dynamic Risk Assessment. (99%)Xabier Echeberria-Barrio; Amaia Gil-Lerchundi; Jon Egana-Zubia; Raul Orduna-Urrutia
Topological safeguard for evasion attack interpreting the neural networks' behavior. (89%)Xabier Echeberria-Barrio; Amaia Gil-Lerchundi; Iñigo Mendialdua; Raul Orduna-Urrutia
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models. (83%)Wei Zou; Runpeng Geng; Binghui Wang; Jinyuan Jia
Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience. (33%)Ethan Wilson; Azim Ibragimov; Michael J. Proulx; Sai Deep Tetali; Kevin Butler; Eakta Jain
OrderBkd: Textual backdoor attack through repositioning. (13%)Irina Alekseevskaia; Konstantin Arkhipenko
Tighter Bounds on the Information Bottleneck with Application to Deep Learning. (10%)Nir Weingarten; Zohar Yakhini; Moshe Butman; Ran Gilad-Bachrach
Multi-Attribute Vision Transformers are Efficient and Robust Learners. (9%)Hanan Gani; Nada Saadi; Noor Hussein; Karthik Nandakumar
Customizable Perturbation Synthesis for Robust SLAM Benchmarking. (9%)Xiaohao Xu; Tianyi Zhang; Sibo Wang; Xiang Li; Yongqi Chen; Ye Li; Bhiksha Raj; Matthew Johnson-Roberson; Xiaonan Huang
THE COLOSSEUM: A Benchmark for Evaluating Generalization for Robotic Manipulation. (5%)Wilbert Pumacay; Ishika Singh; Jiafei Duan; Ranjay Krishna; Jesse Thomason; Dieter Fox
Accelerated Smoothing: A Scalable Approach to Randomized Smoothing. (3%)Devansh Bhardwaj; Kshitiz Kaushik; Sarthak Gupta
Game of Trojans: Adaptive Adversaries Against Output-based Trojaned-Model Detectors. (3%)Dinuka Sahabandu; Xiaojun Xu; Arezoo Rajabi; Luyao Niu; Bhaskar Ramasubramanian; Bo Li; Radha Poovendran
Local Centrality Minimization with Quality Guarantees. (1%)Atsushi Miyauchi; Lorenzo Severini; Francesco Bonchi
NeuralSentinel: Safeguarding Neural Network Reliability and Trustworthiness. (1%)Xabier Echeberria-Barrio; Mikel Gorricho; Selene Valencia; Francesco Zola
Do Membership Inference Attacks Work on Large Language Models? (1%)Michael Duan; Anshuman Suri; Niloofar Mireshghallah; Sewon Min; Weijia Shi; Luke Zettlemoyer; Yulia Tsvetkov; Yejin Choi; David Evans; Hannaneh Hajishirzi
Pixel Sentence Representation Learning. (1%)Chenghao Xiao; Zhuoxu Huang; Danlu Chen; G Thomas Hudson; Yizhi Li; Haoran Duan; Chenghua Lin; Jie Fu; Jungong Han; Noura Al Moubayed
2024-02-11
A Random Ensemble of Encrypted Vision Transformers for Adversarially Robust Defense. (99%)Ryota Iijima; Sayaka Shiota; Hitoshi Kiya
Accuracy of TextFooler black box adversarial attacks on 01 loss sign activation neural network ensemble. (98%)Yunzhe Xue; Usman Roshan
2024-02-10
Whispers in the Machine: Confidentiality in LLM-integrated Systems. (26%)Jonathan Evertz; Merlin Chlosta; Lea Schönherr; Thorsten Eisenhofer
Architectural Neural Backdoors from First Principles. (26%)Harry Langford; Ilia Shumailov; Yiren Zhao; Robert Mullins; Nicolas Papernot
2024-02-09
Anomaly Unveiled: Securing Image Classification against Adversarial Patch Attacks. (98%)Nandish Chattopadhyay; Amira Guesmi; Muhammad Shafique
Fight Back Against Jailbreaking via Prompt Adversarial Tuning. (95%)Yichuan Mo; Yuji Wang; Zeming Wei; Yisen Wang
RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations. (84%)Enyi Jiang; Gagandeep Singh
System-level Analysis of Adversarial Attacks and Defenses on Intelligence in O-RAN based Cellular Networks. (82%)Azuka Chiejina; Brian Kim; Kaushik Chowhdury; Vijay K. Shah
The SkipSponge Attack: Sponge Weight Poisoning of Deep Neural Networks. (70%)Jona te Lintelo; Stefanos Koffas; Stjepan Picek
Corruption Robust Offline Reinforcement Learning with Human Feedback. (67%)Debmalya Mandal; Andi Nika; Parameswaran Kamalaruban; Adish Singla; Goran Radanović
Quantifying and Enhancing Multi-modal Robustness with Modality Preference. (56%)Zequn Yang; Yake Wei; Ce Liang; Di Hu
StruQ: Defending Against Prompt Injection with Structured Queries. (45%)Sizhe Chen; Julien Piet; Chawin Sitawarin; David Wagner
Evaluating Membership Inference Attacks and Defenses in Federated Learning. (4%)Gongxi Zhu; Donghao Li; Hanlin Gu; Yuxing Han; Yuan Yao; Lixin Fan; Qiang Yang
Blockchain Bribing Attacks and the Efficacy of Counterincentives. (1%)Dimitris Karakostas; Aggelos Kiayias; Thomas Zacharias
For Better or For Worse? Learning Minimum Variance Features With Label Augmentation. (1%)Muthu Chidambaram; Rong Ge
2024-02-08
Comprehensive Assessment of Jailbreak Attacks Against LLMs. (99%)Junjie Chu; Yugeng Liu; Ziqing Yang; Xinyue Shen; Michael Backes; Yang Zhang
Investigating White-Box Attacks for On-Device Models. (93%)Mingyi Zhou; Xiang Gao; Jing Wu; Kui Liu; Hailong Sun; Li Li
TETRIS: Towards Exploring the Robustness of Interactive Segmentation. (81%)Andrey Moskalenko; Vlad Shakhuro; Anna Vorontsova; Anton Konushin; Anton Antonov; Alexander Krapukhin; Denis Shepelev; Konstantin Soshin
Linearizing Models for Efficient yet Robust Private Inference. (68%)Sreetama Sarkar; Souvik Kundu; Peter A. Beerel
A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs. (26%)Kasimir Tanner; Matteo Vilucchio; Bruno Loureiro; Florent Krzakala
Is Adversarial Training with Compressed Datasets Effective? (10%)Tong Chen; Raghavendra Selvan
FedAA: A Reinforcement Learning Perspective on Adaptive Aggregation for Fair and Robust Federated Learning. (9%)Jialuo He; Wei Chen; Xiaojin Zhang
2024-02-07
Adversarial Robustness Through Artifact Design. (99%)Tsufit Shua; Liron David; Mahmood Sharif
Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! (98%)Shashank Kotyan; Po-Yuan Mao; Pin-Yu Chen; Danilo Vasconcellos Vargas
Analyzing Adversarial Inputs in Deep Reinforcement Learning. (96%)Davide Corsi; Guy Amir; Guy Katz; Alessandro Farinelli
Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications. (1%)Boyi Wei; Kaixuan Huang; Yangsibo Huang; Tinghao Xie; Xiangyu Qi; Mengzhou Xia; Prateek Mittal; Mengdi Wang; Peter Henderson
2024-02-06
Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping. (98%)Qinliang Lin; Cheng Luo; Zenghao Niu; Xilin He; Weicheng Xie; Yuanbo Hou; Linlin Shen; Siyang Song
Adversarially Robust Deepfake Detection via Adversarial Feature Similarity Learning. (98%)Sarwar Khan
SUB-PLAY: Adversarial Policies against Partially Observed Multi-Agent Reinforcement Learning Systems. (76%)Oubo Ma; Yuwen Pu; Linkang Du; Yang Dai; Ruo Wang; Xiaolei Liu; Yingcai Wu; Shouling Ji
PAC-Bayesian Adversarially Robust Generalization Bounds for Graph Neural Network. (75%)Tan Sun; Junhong Lin
Enhance DNN Adversarial Robustness and Efficiency via Injecting Noise to Non-Essential Neurons. (74%)Zhenyu Liu; Garrett Gagnon; Swagath Venkataramani; Liu Liu
BotSSCL: Social Bot Detection with Self-Supervised Contrastive Learning. (64%)Mohammad Majid Akhtar; Navid Shadman Bhuiyan; Rahat Masood; Muhammad Ikram; Salil S. Kanhere
Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses. (26%)Hao Fang; Yixiang Qiu; Hongyao Yu; Wenbo Yu; Jiawei Kong; Baoli Chong; Bin Chen; Xuan Wang; Shu-Tao Xia; Ke Xu
Studying Vulnerable Code Entities in R. (10%)Zixiao Zhao; Millon Madhur Das; Fatemeh H. Fard
DeMarking: A Defense for Network Flow Watermarking in Real-Time. (10%)Yali Yuan; Jian Ge; Guang Cheng
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal. (2%)Mantas Mazeika; Long Phan; Xuwang Yin; Andy Zou; Zifan Wang; Norman Mu; Elham Sakhaee; Nathaniel Li; Steven Basart; Bo Li; David Forsyth; Dan Hendrycks
2024-02-05
A Generative Approach to Surrogate-based Black-box Attacks. (99%)Raha Moraffah; Huan Liu
Transcending Adversarial Perturbations: Manifold-Aided Adversarial Examples with Legitimate Semantics. (99%)Shuai Li; Xiaoyu Jiang; Xiaoguang Ma
Arabic Synonym BERT-based Adversarial Examples for Text Classification. (99%)Norah Alshahrani; Saied Alshahrani; Esma Wali; Jeanna Matthews
Generalization Properties of Adversarial Training for $\ell_0$-Bounded Adversarial Attacks. (92%)Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
FoolSDEdit: Deceptively Steering Your Edits Towards Targeted Attribute-aware Distribution. (89%)Qi Zhou; Dongxia Wang; Tianlin Li; Zhihong Xu; Yang Liu; Kui Ren; Wenhai Wang; Qing Guo
Time-Distributed Backdoor Attacks on Federated Spiking Learning. (83%)Gorka Abad; Stjepan Picek; Aitor Urbieta
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models. (83%)Yuancheng Xu; Jiarui Yao; Manli Shu; Yanchao Sun; Zichu Wu; Ning Yu; Tom Goldstein; Furong Huang
Partially Recentralization Softmax Loss for Vision-Language Models Robustness. (81%)Hao Wang; Xin Zhang; Jinzhe Jiang; Yaqian Zhao; Chen Li
Organic or Diffused: Can We Distinguish Human Art from AI-generated Images? (31%)Anna Yoo Jeong Ha; Josephine Passananti; Ronik Bhaskar; Shawn Shan; Reid Southen; Haitao Zheng; Ben Y. Zhao
DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models. (12%)Yang Sui; Huy Phan; Jinqi Xiao; Tianfang Zhang; Zijie Tang; Cong Shi; Yan Wang; Yingying Chen; Bo Yuan
FINEST: Stabilizing Recommendations by Rank-Preserving Fine-Tuning. (1%)Sejoon Oh; Berk Ustun; Julian McAuley; Srijan Kumar
Exploring mechanisms of Neural Robustness: probing the bridge between geometry and spectrum. (1%)Konstantin Holzhausen; Mia Merlid; Håkon Olav Torvik; Anders Malthe-Sørenssen; Mikkel Elle Lepperød
2024-02-04
PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks. (99%)Ziquan Liu; Zhuo Zhi; Ilija Bogunovic; Carsten Gerner-Beuerle; Miguel Rodrigues
Adversarial Text Purification: A Large Language Model Approach for Defense. (99%)Raha Moraffah; Shubh Khandelwal; Amrita Bhattacharjee; Huan Liu
DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers. (99%)Oryan Yehezkel; Alon Zolfi; Amit Baras; Yuval Elovici; Asaf Shabtai
Exploiting Class Probabilities for Black-box Sentence-level Attacks. (75%)Raha Moraffah; Huan Liu
Evading Deep Learning-Based Malware Detectors via Obfuscation: A Deep Reinforcement Learning Approach. (41%)Brian Etter; James Lee Hu; Mohammedreza Ebrahimi; Weifeng Li; Xin Li; Hsinchun Chen
Adversarial Data Augmentation for Robust Speaker Verification. (1%)Zhenyu Zhou; Junhui Chen; Namin Wang; Lantian Li; Dong Wang
2024-02-03
Contrasting Adversarial Perturbations: The Space of Harmless Perturbations. (99%)Lu Chen; Shaofeng Li; Benhao Huang; Fan Yang; Zheng Li; Jie Li; Yuan Luo
Evaluating the Robustness of Off-Road Autonomous Driving Segmentation against Adversarial Attacks: A Dataset-Centric analysis. (96%)Pankaj Deoli; Rohit Kumar; Axel Vierling; Karsten Berns
Your Diffusion Model is Secretly a Certifiably Robust Classifier. (91%)Huanran Chen; Yinpeng Dong; Shitong Shao; Zhongkai Hao; Xiao Yang; Hang Su; Jun Zhu
MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers. (76%)Yatong Bai; Mo Zhou; Vishal M. Patel; Somayeh Sojoudi
Analyzing Sentiment Polarity Reduction in News Presentation through Contextual Perturbation and Large Language Models. (68%)Alapan Kuila; Somnath Jena; Sudeshna Sarkar; Partha Pratim Chakrabarti
Universal Post-Training Reverse-Engineering Defense Against Backdoors in Deep Neural Networks. (31%)Xi Li; Hang Wang; David J. Miller; George Kesidis
Towards Optimal Adversarial Robust Q-learning with Bellman Infinity-error. (10%)Haoran Li; Zicheng Zhang; Wang Luo; Congying Han; Yudong Hu; Tiande Guo; Shichen Liao
Invisible Finger: Practical Electromagnetic Interference Attack on Touchscreen-based Electronic Devices. (9%)Haoqi Shan; Boyi Zhang; Zihao Zhan; Dean Sullivan; Shuo Wang; Yier Jin
Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. (5%)Yongshuo Zong; Ondrej Bohdal; Tingyang Yu; Yongxin Yang; Timothy Hospedales
Data Poisoning for In-context Learning. (5%)Pengfei He; Han Xu; Yue Xing; Hui Liu; Makoto Yamada; Jiliang Tang
2024-02-02
HQA-Attack: Toward High Quality Black-Box Hard-Label Adversarial Attack on Text. (99%)Han Liu; Zhi Xu; Xiaotong Zhang; Feng Zhang; Fenglong Ma; Hongyang Chen; Hong Yu; Xianchao Zhang
$\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples. (99%)Antonio Emanuele Cinà; Francesco Villani; Maura Pintor; Lea Schönherr; Battista Biggio; Marcello Pelillo
STAA-Net: A Sparse and Transferable Adversarial Attack for Speech Emotion Recognition. (99%)Yi Chang; Zhao Ren; Zixing Zhang; Xin Jing; Kun Qian; Xi Shao; Bin Hu; Tanja Schultz; Björn W. Schuller
Delving into Decision-based Black-box Attacks on Semantic Segmentation. (93%)Zhaoyu Chen; Zhengyang Shan; Jingwen Chang; Kaixun Jiang; Dingkang Yang; Yiting Cheng; Wenqiang Zhang
SignSGD with Federated Defense: Harnessing Adversarial Attacks through Gradient Sign Decoding. (92%)Chanho Park; Namyoon Lee
Unlearnable Examples For Time Series. (86%)Yujing Jiang; Xingjun Ma; Sarah Monazam Erfani; James Bailey
Preference Poisoning Attacks on Reward Model Learning. (83%)Junlin Wu; Jiongxiao Wang; Chaowei Xiao; Chenguang Wang; Ning Zhang; Yevgeniy Vorobeychik
Privacy-Preserving Distributed Learning for Residential Short-Term Load Forecasting. (3%)Yi Dong; Yingjie Wang; Mariana Gama; Mustafa A. Mustafa; Geert Deconinck; Xiaowei Huang
S2malloc: Statistically Secure Allocator for Use-After-Free Protection And More. (3%)Ruizhe Wang; Meng Xu; N. Asokan
Cheating Suffix: Targeted Attack to Text-To-Image Diffusion Models with Multi-Modal Priors. (2%)Dingcheng Yang; Yang Bai; Xiaojun Jia; Yang Liu; Xiaochun Cao; Wenjian Yu
Fundamental Challenges in Cybersecurity and a Philosophy of Vulnerability-Guided Hardening. (1%)Marcel Böhme
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement. (1%)Xisen Jin; Xiang Ren
2024-02-01
Benchmarking Transferable Adversarial Attacks. (98%)Zhibo Jin; Jiayu Zhang; Zhiyu Zhu; Huaming Chen
Hidding the Ghostwriters: An Adversarial Evaluation of AI-Generated Student Essay Detection. (70%)Xinlin Peng; Ying Zhou; Ben He; Le Sun; Yingfei Sun
Double-Dip: Thwarting Label-Only Membership Inference Attacks with Transfer Learning and Randomization. (64%)Arezoo Rajabi; Reeya Pimple; Aiswarya Janardhanan; Surudhi Asokraj; Bhaskar Ramasubramanian; Radha Poovendran
Vision-LLMs Can Fool Themselves with Self-Generated Typographic Attacks. (45%)Maan Qraitem; Nazia Tasnim; Piotr Teterwak; Kate Saenko; Bryan A. Plummer
Approximating Optimal Morphing Attacks using Template Inversion. (9%)Laurent Colbois; Hatef Otroshi Shahreza; Sébastien Marcel
Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance. (8%)Wenqi Wei; Ling Liu
Vaccine: Perturbation-aware Alignment for Large Language Models against Harmful Fine-tuning Attack. (1%)Tiansheng Huang; Sihao Hu; Ling Liu
algoXSSF: Detection and analysis of cross-site request forgery (XSRF) and cross-site scripting (XSS) attacks via Machine learning algorithms. (1%)Naresh Kshetri; Dilip Kumar; James Hutson; Navneet Kaur; Omar Faruq Osama
2024-01-31
Adversarial Quantum Machine Learning: An Information-Theoretic Generalization Analysis. (95%)Petros Georgiou; Sharu Theresa Jose; Osvaldo Simeone
Invariance-powered Trustworthy Defense via Remove Then Restore. (70%)Xiaowei Fu; Yuhang Zhou; Lina Ma; Lei Zhang
BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks. (13%)Hamed Poursiami; Ihsen Alouani; Maryam Parsa
LoRec: Large Language Model for Robust Sequential Recommendation against Poisoning Attacks. (9%)Kaike Zhang; Qi Cao; Yunfan Wu; Fei Sun; Huawei Shen; Xueqi Cheng
Logit Poisoning Attack in Distillation-based Federated Learning and its Countermeasures. (4%)Yonghao Yu; Shunan Zhu; Jinglu Hu
Manipulating Predictions over Discrete Inputs in Machine Teaching. (1%)Xiaodong Wu; Yufei Han; Hayssam Dahrouj; Jianbing Ni; Zhenwen Liang; Xiangliang Zhang
Ambush from All Sides: Understanding Security Threats in Open-Source Software CI/CD Pipelines. (1%)Ziyue Pan; Wenbo Shen; Xingkai Wang; Yutian Yang; Rui Chang; Yao Liu; Chengwei Liu; Yang Liu; Kui Ren
2024-01-30
Single Word Change is All You Need: Designing Attacks and Defenses for Text Classifiers. (99%)Lei Xu; Sarah Alnegheimish; Laure Berti-Equille; Alfredo Cuesta-Infante; Kalyan Veeramachaneni
Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks. (98%)Andy Zhou; Bo Li; Haohan Wang
Towards Assessing the Synthetic-to-Measured Adversarial Vulnerability of SAR ATR. (98%)Bowen Peng; Bo Peng; Jingyuan Xia; Tianpeng Liu; Yongxiang Liu; Li Liu
AdvGPS: Adversarial GPS for Multi-Agent Perception Attack. (95%)Jinlong Li; Baolu Li; Xinyu Liu; Jianwu Fang; Felix Juefei-Xu; Qing Guo; Hongkai Yu
Game-Theoretic Unlearnable Example Generator. (92%)Shuang Liu; Yihan Wang; Xiao-Shan Gao
Camouflage Adversarial Attacks on Multiple Agent Systems. (87%)Ziqing Lu; Guanlin Liu; Lifeng Lai; Weiyu Xu
Weak-to-Strong Jailbreaking on Large Language Models. (76%)Xuandong Zhao; Xianjun Yang; Tianyu Pang; Chao Du; Lei Li; Yu-Xiang Wang; William Yang Wang
A Proactive and Dual Prevention Mechanism against Illegal Song Covers empowered by Singing Voice Conversion. (75%)Guangke Chen; Yedi Zhang; Fu Song; Ting Wang; Xiaoning Du; Yang Liu
Improving QA Model Performance with Cartographic Inoculation. (26%)Allen Chen; Okan Tanrikulu
Towards Visual Syntactical Understanding. (4%)Sayeed Shafayet Chowdhury; Soumyadeep Chandra; Kaushik Roy
Provably Robust Multi-bit Watermarking for AI-generated Text via Error Correction Code. (2%)Wenjie Qu; Dong Yin; Zixin He; Wei Zou; Tianyang Tao; Jinyuan Jia; Jiaheng Zhang
2024-01-29
LESSON: Multi-Label Adversarial False Data Injection Attack for Deep Learning Locational Detection. (99%)Jiwei Tian; Chao Shen; Buhong Wang; Xiaofang Xia; Meng Zhang; Chenhao Lin; Qian Li
Adversarial Training on Purification (AToP): Advancing Both Robustness and Generalization. (92%)Guang Lin; Chao Li; Jianhai Zhang; Toshihisa Tanaka; Qibin Zhao
Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks. (68%)Lulu Xue; Shengshan Hu; Ruizhi Zhao; Leo Yu Zhang; Shengqing Hu; Lichao Sun; Dezhong Yao
GPS: Graph Contrastive Learning via Multi-scale Augmented Views from Adversarial Pooling. (5%)Wei Ju; Yiyang Gu; Zhengyang Mao; Ziyue Qiao; Yifang Qin; Xiao Luo; Hui Xiong; Ming Zhang
Security and Privacy Challenges of Large Language Models: A Survey. (1%)Badhan Chandra Das; M. Hadi Amini; Yanzhao Wu
2024-01-28
Addressing Noise and Efficiency Issues in Graph-Based Machine Learning Models From the Perspective of Adversarial Attack. (83%)Yongyu Wang
Transparency Attacks: How Imperceptible Image Layers Can Fool AI Perception. (75%)Forrest McKee; David Noever
Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability. (26%)Hao Wang; Shangwei Guo; Jialing He; Hangcheng Liu; Tianwei Zhang; Tao Xiang
2024-01-27
L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks. (98%)Ping Guo; Fei Liu; Xi Lin; Qingchuan Zhao; Qingfu Zhang
2024-01-26
Set-Based Training for Neural Network Verification. (99%)Lukas Koller; Tobias Ladner; Matthias Althoff
Mitigating Feature Gap for Adversarial Robustness by Feature Disentanglement. (91%)Nuoyan Zhou; Dawei Zhou; Decheng Liu; Xinbo Gao; Nannan Wang
Multi-Trigger Backdoor Attacks: More Triggers, More Threats. (82%)Yige Li; Xingjun Ma; Jiabo He; Hanxun Huang; Yu-Gang Jiang
Adversarial Attacks and Defenses in 6G Network-Assisted IoT Systems. (81%)Bui Duc Son; Nguyen Tien Hoa; Chien Trinh Van; Waqas Khalid; Mohamed Amine Ferrag; Wan Choi; Merouane Debbah
Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training. (62%)Shruthi Gowda; Bahram Zonooz; Elahe Arani
Asymptotic Behavior of Adversarial Training Estimator under $\ell_\infty$-Perturbation. (22%)Yiling Xie; Xiaoming Huo
Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective. (22%)Yue Xing; Xiaofeng Lin; Qifan Song; Yi Xu; Belinda Zeng; Guang Cheng
MEA-Defender: A Robust Watermark against Model Extraction Attack. (13%)Peizhuo Lv; Hualong Ma; Kai Chen; Jiachen Zhou; Shengzhi Zhang; Ruigang Liang; Shenchen Zhu; Pan Li; Yingjun Zhang
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning. (2%)Baoyuan Wu; Hongrui Chen; Mingda Zhang; Zihao Zhu; Shaokui Wei; Danni Yuan; Mingli Zhu; Ruotong Wang; Li Liu; Chao Shen
2024-01-25
Sparse and Transferable Universal Singular Vectors Attack. (99%)Kseniia Kuvshinova; Olga Tsymboi; Ivan Oseledets
Friendly Attacks to Improve Channel Coding Reliability. (54%)Anastasiia Kurmukova; Deniz Gunduz
Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. (16%)Erik Arakelyan; Zhaoqi Liu; Isabelle Augenstein
The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness. (2%)Mengyao Du; Miao Zhang; Yuwen Pu; Kai Xu; Shouling Ji; Quanjun Yin
Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted Activations. (1%)Patricia Pauli; Aaron Havens; Alexandre Araujo; Siddharth Garg; Farshad Khorrami; Frank Allgöwer; Bin Hu
Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation. (1%)Jing Long; Tong Chen; Guanhua Ye; Kai Zheng; Nguyen Quoc Viet Hung; Hongzhi Yin
2024-01-24
A Training Rate and Survival Heuristic for Inference and Robustness Evaluation (TRASHFIRE). (92%)Charles Meyers; Mohammad Reza Saleh Sedghpour; Tommy Löfstedt; Erik Elmroth
Can overfitted deep neural networks in adversarial training generalize? -- An approximation viewpoint. (86%)Zhongjie Shi; Fanghui Liu; Yuan Cao; Johan A. K. Suykens
WPDA: Frequency-based Backdoor Attack with Wavelet Packet Decomposition. (76%)Zhengyao Song; Yongqiang Li; Danni Yuan; Li Liu; Shaokui Wei; Baoyuan Wu
Exploring Adversarial Threat Models in Cyber Physical Battery Systems. (76%)Shanthan Kumar Padisala; Shashank Dhananjay Vyas; Satadru Dey
Fluent dreaming for language models. (64%)T. Ben Thompson; Zygimantas Straznickas; Michael Sklar
2024-01-23
Boosting the Transferability of Adversarial Examples via Local Mixup and Adaptive Step Size. (99%)Junlin Liu; Xinchen Lyu
Securing Recommender System via Cooperative Training. (80%)Qingyang Wang; Chenwang Wu; Defu Lian; Enhong Chen
Compositional Generative Inverse Design. (56%)Tailin Wu; Takashi Maruyama; Long Wei; Tao Zhang; Yilun Du; Gianluca Iaccarino; Jure Leskovec
AdCorDA: Classifier Refinement via Adversarial Correction and Domain Adaptation. (33%)Lulan Shen; Ali Edalati; Brett Meyer; Warren Gross; James J. Clark
ToDA: Target-oriented Diffusion Attacker against Recommendation System. (13%)Xiaohao Liu; Zhulin Tao; Ting Jiang; He Chang; Yunshan Ma; Xianglin Huang; Xiang Wang
DAFA: Distance-Aware Fair Adversarial Training. (2%)Hyungyu Lee; Saehyung Lee; Hyemi Jang; Junsung Park; Ho Bae; Sungroh Yoon
The twin peaks of learning neural networks. (2%)Elizaveta Demyanenko; Christoph Feinauer; Enrico M. Malatesta; Luca Saglietti
2024-01-22
Fast Adversarial Training against Textual Adversarial Attacks. (99%)Yichen Yang; Xin Liu; Kun He
A Training-Free Defense Framework for Robust Learned Image Compression. (74%)Myungseo Song; Jinyoung Choi; Bohyung Han
Adversarial speech for voice privacy protection from Personalized Speech generation. (73%)Shihao Chen; Liping Chen; Jie Zhang; KongAik Lee; Zhenhua Ling; Lirong Dai
NEUROSEC: FPGA-Based Neuromorphic Audio Security. (13%)Murat Isik; Hiruna Vishwamith; Yusuf Sur; Kayode Inadagbo; I. Can Dikmen
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them. (13%)Chao Liu; Boxi Chen; Wei Shao; Chris Zhang; Kelvin Wong; Yi Zhang
Robustness to distribution shifts of compressed networks for edge devices. (8%)Lulan Shen; Ali Edalati; Brett Meyer; Warren Gross; James J. Clark
Text Embedding Inversion Security for Multilingual Language Models. (2%)Yiyi Chen; Heather Lent; Johannes Bjerva
Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy. (1%)Will LeVine; Benjamin Pikus; Jacob Phillips; Berk Norman; Fernando Amat Gil; Sean Hendryx
2024-01-21
How Robust Are Energy-Based Models Trained With Equilibrium Propagation? (99%)Siddharth Mansingh; Michal Kucer; Garrett Kenyon; Juston Moore; Michael Teti
Analyzing the Quality Attributes of AI Vision Models in Open Repositories Under Adversarial Attacks. (56%)Zerui Wang; Yan Liu
Adversarial Augmentation Training Makes Action Recognition Models More Robust to Realistic Video Distribution Shifts. (11%)Kiyoon Kim; Shreyank N Gowda; Panagiotis Eustratiadis; Antreas Antoniou; Robert B Fisher
Efficient local linearity regularization to overcome catastrophic overfitting. (8%)Elias Abad Rocamora; Fanghui Liu; Grigorios G. Chrysos; Pablo M. Olmos; Volkan Cevher
2024-01-20
Susceptibility of Adversarial Attack on Medical Image Segmentation Models. (99%)Zhongxuan Wang; Leo Xu
Finding a Needle in the Adversarial Haystack: A Targeted Paraphrasing Approach For Uncovering Edge Cases with Minimal Distribution Distortion. (96%)Aly M. Kassem; Sherif Saad
CARE: Ensemble Adversarial Robustness Evaluation Against Adaptive Attackers for Security Applications. (80%)Hangsheng Zhang; Jiqiang Liu; Jinsong Dong
Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images. (33%)Kuofeng Gao; Yang Bai; Jindong Gu; Shu-Tao Xia; Philip Torr; Zhifeng Li; Wei Liu
2024-01-19
PuriDefense: Randomized Local Implicit Adversarial Purification for Defending Black-box Query-based Attacks. (99%)Ping Guo; Zhiyuan Yang; Xi Lin; Qingchuan Zhao; Qingfu Zhang
Explainable and Transferable Adversarial Attack for ML-Based Network Intrusion Detectors. (99%)Hangsheng Zhang; Dongqi Han; Yinlong Liu; Zhiliang Wang; Jiyan Sun; Shangyuan Zhuang; Jiqiang Liu; Jinsong Dong
The Surprising Harmfulness of Benign Overfitting for Adversarial Robustness. (98%)Yifan Hao; Tong Zhang
FIMBA: Evaluating the Robustness of AI in Genomics via Feature Importance Adversarial Attacks. (56%)Heorhii Skovorodnikov; Hoda Alkhzaimi
Adversarial Robustness of Link Sign Prediction in Signed Graphs. (26%)Jialong Zhou; Xing Ai; Yuni Lai; Tomasz Michalak; Gaolei Li; Jianhua Li; Kai Zhou
BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models. (3%)Zhen Xiang; Fengqing Jiang; Zidi Xiong; Bhaskar Ramasubramanian; Radha Poovendran; Bo Li
Image Safeguarding: Reasoning with Conditional Vision Language Model and Obfuscating Unsafe Content Counterfactually. (1%)Mazal Bethany; Brandon Wherry; Nishant Vishwamitra; Peyman Najafirad
2024-01-18
HGAttack: Transferable Heterogeneous Graph Adversarial Attack. (99%)He Zhao; Zhiwei Zeng; Yongwei Wang; Deheng Ye; Chunyan Miao
Hijacking Attacks against Neural Networks by Analyzing Training Data. (99%)Yunjie Ge; Qian Wang; Huayang Huang; Qi Li; Cong Wang; Chao Shen; Lingchen Zhao; Peipei Jiang; Zheng Fang; Shenyi Zhang
Adapters Mixup: Mixing Parameter-Efficient Adapters to Enhance the Adversarial Robustness of Fine-tuned Pre-trained Text Classifiers. (99%)Tuc Nguyen; Thai Le
Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security. (92%)Marsalis Gibson; David Babazadeh; Claire Tomlin; Shankar Sastry
Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation. (80%)Janvi Thakkar; Giulio Zizzo; Sergio Maffeis
Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition. (15%)Yu Yu; Chao-Han Huck Yang; Tuan Dinh; Sungho Ryu; Jari Kolehmainen; Roger Ren; Denis Filimonov; Prashanth G. Shivakumar; Ankur Gandhe; Ariya Rastow; Jia Xu; Ivan Bulyko; Andreas Stolcke
Power in Numbers: Robust reading comprehension by finetuning with four adversarial sentences per example. (13%)Ariel Marcus
Cross-Modality Perturbation Synergy Attack for Person Re-identification. (3%)Yunpeng Gong; Zhun Zhong; Yansong Qu; Zhiming Luo; Rongrong Ji; Min Jiang
Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats. (2%)Chen Wu; Xi Li; Jiaqi Wang
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition. (1%)Yuchen Hu; Chen Chen; Chao-Han Huck Yang; Ruizhe Li; Chao Zhang; Pin-Yu Chen; EnSiong Chng
2024-01-17
Towards Scalable and Robust Model Versioning. (93%)Wenxin Ding; Arjun Nitin Bhagoji; Ben Y. Zhao; Haitao Zheng
Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack. (93%)Zhongliang Guo; Junhao Dong; Yifei Qian; Kaixuan Wang; Weiye Li; Ziheng Guo; Yuheng Wang; Yanli Li; Ognjen Arandjelović; Lei Fang
MITS-GAN: Safeguarding Medical Imaging from Tampering with Generative Adversarial Networks. (26%)Giovanni Pasqualino; Luca Guarnera; Alessandro Ortis; Sebastiano Battiato
A GAN-based data poisoning framework against anomaly detection in vertical federated learning. (3%)Xiaolin Chen; Daoguang Zan; Wei Li; Bei Guan; Yongji Wang
An Optimal Transport Approach for Computing Adversarial Training Lower Bounds in Multiclass Classification. (3%)Nicolas Garcia Trillos; Matt Jacobs; Jakwang Kim; Matthew Werenski
Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization. (1%)Yoonhwa Jung; Ikhyun Cho; Shun-Hsiang Hsu; Julia Hockenmaier
Caught in the Quicksand of Reasoning, Far from AGI Summit: Evaluating LLMs' Mathematical and Coding Competency through Ontology-guided Interventions. (1%)Pengfei Hong; Deepanway Ghosal; Navonil Majumder; Somak Aditya; Rada Mihalcea; Soujanya Poria
2024-01-16
Revealing Vulnerabilities in Stable Diffusion via Targeted Attacks. (99%)Chenyu Zhang; Lanjun Wang; Anan Liu
Bag of Tricks to Boost Adversarial Transferability. (99%)Zeliang Zhang; Rongyi Zhu; Wei Yao; Xiaosen Wang; Chenliang Xu
A Generative Adversarial Attack for Multilingual Text Classifiers. (99%)Tom Roth; Inigo Jauregi Unanue; Alsharif Abuadbba; Massimo Piccardi
PPR: Enhancing Dodging Attacks while Maintaining Impersonation Attacks on Face Recognition Systems. (99%)Fengfan Zhou; Heifei Ling
Robust Localization of Key Fob Using Channel Impulse Response of Ultra Wide Band Sensors for Keyless Entry Systems. (92%)Abhiram Kolli; Filippo Casamassima; Horst Possegger; Horst Bischof
The Effect of Intrinsic Dataset Properties on Generalization: Unraveling Learning Differences Between Natural and Medical Images. (87%)Nicholas Konz; Maciej A. Mazurowski
RandOhm: Mitigating Impedance Side-channel Attacks using Randomized Circuit Configurations. (9%)Saleh Khalaj Monfared; Domenic Forte; Shahin Tajik
Towards Efficient and Certified Recovery from Poisoning Attacks in Federated Learning. (8%)Yu Jiang; Jiyuan Shen; Ziyao Liu; Chee Wei Tan; Kwok-Yan Lam
IPR-NeRF: Ownership Verification meets Neural Radiance Field. (3%)Win Kent Ong; Kam Woh Ng; Chee Seng Chan; Yi Zhe Song; Tao Xiang
IoTWarden: A Deep Reinforcement Learning Based Real-time Defense System to Mitigate Trigger-action IoT Attacks. (1%)Md Morshed Alam; Israt Jahan; Weichao Wang
2024-01-15
Robustness Against Adversarial Attacks via Learning Confined Adversarial Polytopes. (99%)Shayan Mohajer Hamidi; Linfeng Ye
Authorship Obfuscation in Multilingual Machine-Generated Text Detection. (13%)Dominik Macko; Robert Moro; Adaku Uchendu; Ivan Srba; Jason Samuel Lucas; Michiharu Yamashita; Nafis Irtiza Tripto; Dongwon Lee; Jakub Simko; Maria Bielikova
2024-01-14
LookAhead: Preventing DeFi Attacks via Unveiling Adversarial Contracts. (80%)Shoupeng Ren; Lipeng He; Tianyu Tu; Di Wu; Jian Liu; Kui Ren; Chun Chen
Crafter: Facial Feature Crafting against Inversion-based Identity Theft on Deep Models. (70%)Shiming Wang; Zhe Ji; Liyao Xiang; Hao Zhang; Xinbing Wang; Chenghu Zhou; Bo Li
2024-01-13
Exploring Adversarial Attacks against Latent Diffusion Model from the Perspective of Adversarial Transferability. (99%)Junxi Chen; Junhao Dong; Xiaohua Xie
Left-right Discrepancy for Adversarial Attack on Stereo Networks. (98%)Pengfei Wang; Xiaofei Hui; Beijia Lu; Nimrod Lilith; Jun Liu; Sameer Alam
2024-01-12
Adversarial Examples are Misaligned in Diffusion Model Manifolds. (98%)Peter Lorenz; Ricard Durall; Janis Keuper
How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs. (2%)Yi Zeng; Hongpeng Lin; Jingwen Zhang; Diyi Yang; Ruoxi Jia; Weiyan Shi
Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning. (1%)Chenyang Wang; Junjun Jiang; Xingyu Hu; Xianming Liu; Xiangyang Ji
An Analytical Framework for Modeling and Synthesizing Malicious Attacks on ACC Vehicles. (1%)Shian Wang
Intention Analysis Makes LLMs A Good Jailbreak Defender. (1%)Yuqi Zhang; Liang Ding; Lefei Zhang; Dacheng Tao
2024-01-11
GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model. (99%)Zhiyu Zhu; Huaming Chen; Xinyi Wang; Jiayu Zhang; Zhibo Jin; Kim-Kwang Raymond Choo; Jun Shen; Dong Yuan
Universal Vulnerabilities in Large Language Models: In-context Learning Backdoor Attacks. (61%)Shuai Zhao; Meihuizi Jia; Luu Anh Tuan; Jinming Wen
Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation Engineering. (22%)Tianlong Li; Shihan Dou; Wenhao Liu; Muling Wu; Changze Lv; Xiaoqing Zheng; Xuanjing Huang
Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation. (8%)Lijun Sheng; Jian Liang; Ran He; Zilei Wang; Tieniu Tan
Manipulating Feature Visualizations with Gradient Slingshots. (3%)Dilyara Bareeva; Marina M. -C. Höhne; Alexander Warnecke; Lukas Pirch; Klaus-Robert Müller; Konrad Rieck; Kirill Bykov
Combating Adversarial Attacks with Multi-Agent Debate. (3%)Steffi Chern; Zhen Fan; Andy Liu
2024-01-10
Exploring Vulnerabilities of No-Reference Image Quality Assessment Models: A Query-Based Black-Box Method. (83%)Chenxi Yang; Yujia Liu; Dingquan Li; Tingting Jiang
TrustLLM: Trustworthiness in Large Language Models. (75%)Lichao Sun; Yue Huang; Haoran Wang; Siyuan Wu; Qihui Zhang; Chujie Gao; Yixin Huang; Wenhan Lyu; Yixuan Zhang; Xiner Li; Zhengliang Liu; Yixin Liu; Yijue Wang; Zhikun Zhang; Bhavya Kailkhura; Caiming Xiong; Chaowei Xiao; Chunyuan Li; Eric Xing; Furong Huang; Hao Liu; Heng Ji; Hongyi Wang; Huan Zhang; Huaxiu Yao; Manolis Kellis; Marinka Zitnik; Meng Jiang; Mohit Bansal; James Zou; Jian Pei; Jian Liu; Jianfeng Gao; Jiawei Han; Jieyu Zhao; Jiliang Tang; Jindong Wang; John Mitchell; Kai Shu; Kaidi Xu; Kai-Wei Chang; Lifang He; Lifu Huang; Michael Backes; Neil Zhenqiang Gong; Philip S. Yu; Pin-Yu Chen; Quanquan Gu; Ran Xu; Rex Ying; Shuiwang Ji; Suman Jana; Tianlong Chen; Tianming Liu; Tianyi Zhou; Willian Wang; Xiang Li; Xiangliang Zhang; Xiao Wang; Xing Xie; Xun Chen; Xuyu Wang; Yan Liu; Yanfang Ye; Yinzhi Cao; Yong Chen; Yue Zhao
SENet: Visual Detection of Online Social Engineering Attack Campaigns. (4%)Irfan Ozen; Karthika Subramani; Phani Vadrevu; Roberto Perdisci
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training. (2%)Evan Hubinger; Carson Denison; Jesse Mu; Mike Lambert; Meg Tong; Monte MacDiarmid; Tamera Lanham; Daniel M. Ziegler; Tim Maxwell; Newton Cheng; Adam Jermyn; Amanda Askell; Ansh Radhakrishnan; Cem Anil; David Duvenaud; Deep Ganguli; Fazl Barez; Jack Clark; Kamal Ndousse; Kshitij Sachan; Michael Sellitto; Mrinank Sharma; Nova DasSarma; Roger Grosse; Shauna Kravec; Yuntao Bai; Zachary Witten; Marina Favaro; Jan Brauner; Holden Karnofsky; Paul Christiano; Samuel R. Bowman; Logan Graham; Jared Kaplan; Sören Mindermann; Ryan Greenblatt; Buck Shlegeris; Nicholas Schiefer; Ethan Perez
CoLafier: Collaborative Noisy Label Purifier With Local Intrinsic Dimensionality Guidance. (1%)Dongyu Zhang; Ruofan Hu; Elke Rundensteiner
Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning. (1%)Zhangchen Xu; Fengqing Jiang; Luyao Niu; Jinyuan Jia; Radha Poovendran
FBSDetector: Fake Base Station and Multi Step Attack Detection in Cellular Networks using Machine Learning. (1%)Kazi Samin Mubasshir; Imtiaz Karim; Elisa Bertino
2024-01-09
Revisiting Adversarial Training at Scale. (26%)Zeyu Wang; Xianhang Li; Hongru Zhu; Cihang Xie
SoK: Facial Deepfake Detectors. (11%)Binh M. Le; Jiwon Kim; Shahroz Tariq; Kristen Moore; Alsharif Abuadbba; Simon S. Woo
Advancing Ante-Hoc Explainable Models through Generative Adversarial Networks. (3%)Tanmay Garg; Deepika Vemuri; Vineeth N Balasubramanian
2024-01-08
Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness. (99%)Sibo Wang; Jie Zhang; Zheng Yuan; Shiguang Shan
Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing. (54%)Yizhak Elboher; Raya Elsaleh; Omri Isac; Mélanie Ducoffe; Audrey Galametz; Guillaume Povéda; Ryma Boumazouza; Noémie Cohen; Guy Katz
Coupling Graph Neural Networks with Fractional Order Continuous Dynamics: A Robustness Study. (45%)Qiyu Kang; Kai Zhao; Yang Song; Yihang Xie; Yanan Zhao; Sijie Wang; Rui She; Wee Peng Tay
Logits Poisoning Attack in Federated Distillation. (12%)Yuhan Tang; Zhiyuan Wu; Bo Gao; Tian Wen; Yuwei Wang; Sheng Sun
Attack-Resilient Image Watermarking Using Stable Diffusion. (3%)Lijun Zhang; Xiao Liu; Antoni Viros Martin; Cindy Xiong Bearfield; Yuriy Brun; Hui Guan
Dense Hopfield Networks in the Teacher-Student Setting. (1%)Robin Thériault; Daniele Tantari
2024-01-07
Invisible Reflections: Leveraging Infrared Laser Reflections to Target Traffic Sign Perception. (87%)Takami Sato; Sri Hrushikesh Varma Bhupathiraju; Michael Clifford; Takeshi Sugawara; Qi Alfred Chen; Sara Rampazzi
Data-Driven Subsampling in the Presence of an Adversarial Actor. (86%)Abu Shafin Mohammad Mahdee Jameel; Ahmed P. Mohamed; Jinho Yi; Aly El Gamal; Akshay Malhotra
ROIC-DM: Robust Text Inference and Classification via Diffusion Model. (33%)Shilong Yuan; Wei Yuan; Hongzhi Yin; Tieke He
2024-01-06
Data-Dependent Stability Analysis of Adversarial Training. (98%)Yihan Wang; Shuang Liu; Xiao-Shan Gao
End-to-End Anti-Backdoor Learning on Images and Time Series. (61%)Yujing Jiang; Xingjun Ma; Sarah Monazam Erfani; Yige Li; James Bailey
2024-01-05
Transferable Learned Image Compression-Resistant Adversarial Perturbations. (99%)Yang Sui; Zhuohang Li; Ding Ding; Xiang Pan; Xiaozhong Xu; Shan Liu; Zhenzhong Chen
Enhancing targeted transferability via feature space fine-tuning. (98%)Hui Zeng; Biwei Chen; Anjie Peng
Calibration Attack: A Framework For Adversarial Attacks Targeting Calibration. (76%)Stephen Obadinma; Xiaodan Zhu; Hongyu Guo
A backdoor attack against link prediction tasks with graph neural networks. (38%)Jiazhu Dai; Haoyu Sun
TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks. (1%)Khondoker Murad Hossain; Tim Oates
MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance. (1%)Renjie Pi; Tianyang Han; Yueqi Xie; Rui Pan; Qing Lian; Hanze Dong; Jipeng Zhang; Tong Zhang
2024-01-04
Vulnerabilities Unveiled: Adversarially Attacking a Multimodal Vision Langauge Model for Pathology Imaging. (99%)Jai Prakash Veerla; Poojitha Thota; Partha Sai Guttikonda; Shirin Nilizadeh; Jacob M. Luber
A Random Ensemble of Encrypted models for Enhancing Robustness against Adversarial Examples. (99%)Ryota Iijima; Sayaka Shiota; Hitoshi Kiya
AdvSQLi: Generating Adversarial SQL Injections against Real-world WAF-as-a-service. (95%)Zhenqing Qu; Xiang Ling; Ting Wang; Xiang Chen; Shouling Ji; Chunming Wu
Evasive Hardware Trojan through Adversarial Power Trace. (92%)Behnam Omidi; Khaled N. Khasawneh; Ihsen Alouani
Object-oriented backdoor attack against image captioning. (76%)Meiling Li; Nan Zhong; Xinpeng Zhang; Zhenxing Qian; Sheng Li
DEM: A Method for Certifying Deep Neural Network Classifier Outputs in Aerospace. (2%)Guy Katz; Natan Levy; Idan Refaeli; Raz Yerushalmi
Secure Control of Connected and Automated Vehicles Using Trust-Aware Robust Event-Triggered Control Barrier Functions. (2%)H M Sabbir Ahmad; Ehsan Sabouni; Akua Dickson; Wei Xiao; Christos G. Cassandras; Wenchao Li
A Survey Analyzing Generalization in Deep Reinforcement Learning. (1%)Ezgi Korkmaz
2024-01-03
Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement. (92%)Zheng Yuan; Jie Zhang; Yude Wang; Shiguang Shan; Xilin Chen
Spy-Watermark: Robust Invisible Watermarking for Backdoor Attack. (62%)Ruofei Wang; Renjie Wan; Zongyu Guo; Qing Guo; Rui Huang
FullLoRA-AT: Efficiently Boosting the Robustness of Pretrained Vision Transformers. (33%)Zheng Yuan; Jie Zhang; Shiguang Shan
Integrated Cyber-Physical Resiliency for Power Grids under IoT-Enabled Dynamic Botnet Attacks. (22%)Yuhan Zhao; Juntao Chen; Quanyan Zhu
Enhancing Generalization of Invisible Facial Privacy Cloak via Gradient Accumulation. (1%)Xuannan Liu; Yaoyao Zhong; Weihong Deng; Hongzhi Shi; Xingchen Cui; Yunfeng Yin; Dongchao Wen
2024-01-02
JMA: a General Algorithm to Craft Nearly Optimal Targeted Adversarial Example. (99%)Benedetta Tondi; Wei Guo; Mauro Barni
Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing. (92%)Zhe Kong; Wentian Zhang; Tao Wang; Kaihao Zhang; Yuexiang Li; Xiaoying Tang; Wenhan Luo
SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization. (75%)Xixu Hu; Runkai Zheng; Jindong Wang; Cheuk Hang Leung; Qi Wu; Xing Xie
Unveiling the Stealthy Threat: Analyzing Slow Drift GPS Spoofing Attacks for Autonomous Vehicles in Urban Environments and Enabling the Resilience. (10%)Sagar Dasgupta; Abdullah Ahmed; Mizanur Rahman; Thejesh N. Bandi
Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control. (4%)Ka-Ho Chow; Wenqi Wei; Lei Yu
Will 6G be Semantic Communications? Opportunities and Challenges from Task Oriented and Secure Communications to Integrated Sensing. (2%)Yalin E. Sagduyu; Tugba Erpek; Aylin Yener; Sennur Ulukus
2024-01-01
Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment. (12%)Jie Zhu; Leye Wang; Xiao Han; Anmin Liu; Tao Xie
Detection and Defense Against Prominent Attacks on Preconditioned LLM-Integrated Virtual Assistants. (8%)Chun Fai Chan; Daniel Wankit Yip; Aysan Esmradi
A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models. (2%)Daniel Wankit Yip; Aysan Esmradi; Chun Fai Chan
2023-12-31
AR-GAN: Generative Adversarial Network-Based Defense Method Against Adversarial Attacks on the Traffic Sign Classification System of Autonomous Vehicles. (99%)M Sabbir Salek; Abdullah Al Mamun; Mashrur Chowdhury
Does Few-shot Learning Suffer from Backdoor Attacks? (98%)Xinwei Liu; Xiaojun Jia; Jindong Gu; Yuan Xun; Siyuan Liang; Xiaochun Cao
Is It Possible to Backdoor Face Forgery Detection with Natural Triggers? (68%)Xiaoxuan Han; Songlin Yang; Wei Wang; Ziwen He; Jing Dong
2023-12-30
Explainability-Driven Leaf Disease Classification using Adversarial Training and Knowledge Distillation. (84%)Sebastian-Vasile Echim; Iulian-Marius Tăiatu; Dumitru-Clementin Cercel; Florin Pop
CamPro: Camera-based Anti-Facial Recognition. (81%)Wenjun Zhu; Yuan Sun; Jiani Liu; Yushi Cheng; Xiaoyu Ji; Wenyuan Xu
TPatch: A Triggered Physical Adversarial Patch. (76%)Wenjun Zhu; Xiaoyu Ji; Yushi Cheng; Shibo Zhang; Wenyuan Xu
A clean-label graph backdoor attack method in node classification task. (9%)Xiaogang Xing; Ming Xu; Yujing Bai; Dongdong Yang
2023-12-29
Jatmo: Prompt Injection Defense by Task-Specific Finetuning. (54%)Julien Piet; Maha Alrashed; Chawin Sitawarin; Sizhe Chen; Zeming Wei; Elizabeth Sun; Basel Alomair; David Wagner
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection. (11%)Qiannan Wang; Changchun Yin; Lu Zhou; Liming Fang
Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training. (9%)Dongfang Li; Baotian Hu; Qingcai Chen; Shan He
2023-12-28
Adversarial Attacks on Image Classification Models: Analysis and Defense. (99%)Jaydip Sen; Abhiraj Sen; Ananda Chatterjee
BlackboxBench: A Comprehensive Benchmark of Black-box Adversarial Attacks. (99%)Meixi Zheng; Xuanchen Yan; Zihao Zhu; Hongrui Chen; Baoyuan Wu
Attack Tree Analysis for Adversarial Evasion Attacks. (99%)Yuki Yamaguchi; Toshiaki Aoki
Can you See me? On the Visibility of NOPs against Android Malware Detectors. (98%)Diego Soi; Davide Maiorca; Giorgio Giacinto; Harel Berger
MVPatch: More Vivid Patch for Adversarial Camouflaged Attacks on Object Detectors in the Physical World. (98%)Zheng Zhou; Hongbo Zhao; Ju Liu; Qiaosheng Zhang; Liwei Geng; Shuchang Lyu; Wenquan Feng
Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation. (92%)Dibaloke Chanda; Saba Heidari Gheshlaghi; Nasim Yahya Soltani
DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation. (83%)Wenyi Tan; Yang Li; Chenxing Zhao; Zhunga Liu; Quan Pan
Securing NextG Systems against Poisoning Attacks on Federated Learning: A Game-Theoretic Solution. (64%)Yalin E. Sagduyu; Tugba Erpek; Yi Shi
Timeliness: A New Design Metric and a New Attack Surface. (1%)Priyanka Kaswan; Sennur Ulukus
2023-12-27
Adversarial Attacks on LoRa Device Identification and Rogue Signal Detection with Deep Learning. (98%)Yalin E. Sagduyu; Tugba Erpek
Domain Generalization with Vital Phase Augmentation. (3%)Ingyun Lee; Wooju Lee; Hyun Myung
2023-12-26
From text to multimodal: a survey of adversarial example generation in question answering systems. (92%)Gulsum Yigit; Mehmet Fatih Amasyali
Natural Adversarial Patch Generation Method Based on Latent Diffusion Model. (76%)Xianyi Chen; Fazhan Liu; Dong Jiang; Kai Yan
Robust Survival Analysis with Adversarial Regularization. (61%)Michael Potter; Stefano Maxenti; Michael Everett
Universal Pyramid Adversarial Training for Improved ViT Performance. (5%)Ping-yeh Chiang; Yipin Zhou; Omid Poursaeed; Satya Narayan Shukla; Ashish Shah; Tom Goldstein; Ser-Nam Lim
2023-12-25
GanFinger: GAN-Based Fingerprint Generation for Deep Neural Network Ownership Verification. (96%)Huali Ren; Anli Yan; Xiaojun Ren; Pei-Gen Ye; Chong-zhi Gao; Zhili Zhou; Jin Li
Adversarial Item Promotion on Visually-Aware Recommender Systems by Guided Diffusion. (84%)Lijian Chen; Wei Yuan; Tong Chen; Guanhua Ye; Quoc Viet Hung Nguyen; Hongzhi Yin
Punctuation Matters! Stealthy Backdoor Attack for Language Models. (11%)Xuan Sheng; Zhicheng Li; Zhaoyang Han; Xiangmao Chang; Piji Li
2023-12-23
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It. (10%)Federico Siciliano; Luca Maiano; Lorenzo Papa; Federica Baccin; Irene Amerini; Fabrizio Silvestri
Pre-trained Trojan Attacks for Visual Recognition. (1%)Aishan Liu; Xinwei Zhang; Yisong Xiao; Yuguang Zhou; Siyuan Liang; Jiakai Wang; Xianglong Liu; Xiaochun Cao; Dacheng Tao
TVE: Learning Meta-attribution for Transferable Vision Explainer. (1%)Guanchu Wang; Yu-Neng Chuang; Fan Yang; Mengnan Du; Chia-Yuan Chang; Shaochen Zhong; Zirui Liu; Zhaozhuo Xu; Kaixiong Zhou; Xuanting Cai; Xia Hu
2023-12-22
MEAOD: Model Extraction Attack against Object Detectors. (83%)Zeyu Li; Chenghui Shi; Yuwen Pu; Xuhong Zhang; Yu Li; Jinbao Li; Shouling Ji
Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks. (82%)Haz Sameen Shahgir; Xianghao Kong; Greg Ver Steeg; Yue Dong
Understanding the Regularity of Self-Attention with Optimal Transport. (31%)Valérie Castin; Pierre Ablin; Gabriel Peyré
Attacking Byzantine Robust Aggregation in High Dimensions. (22%)Sarthak Choudhary; Aashish Kolluri; Prateek Saxena
SODA: Protecting Proprietary Information in On-Device Machine Learning Models. (4%)Akanksha Atrey; Ritwik Sinha; Saayan Mitra; Prashant Shenoy
Energy-based learning algorithms for analog computing: a comparative study. (2%)Benjamin Scellier; Maxence Ernoult; Jack Kendall; Suhas Kumar
Adaptive Domain Inference Attack. (1%)Yuechun Gu; Keke Chen
2023-12-21
AutoAugment Input Transformation for Highly Transferable Targeted Attacks. (99%)Haobo Lu; Xin Liu; Kun He
Where and How to Attack? A Causality-Inspired Recipe for Generating Counterfactual Adversarial Examples. (98%)Ruichu Cai; Yuxuan Zhu; Jie Qiao; Zefeng Liang; Furui Liu; Zhifeng Hao
Elevating Defenses: Bridging Adversarial Training and Watermarking for Model Resilience. (86%)Janvi Thakkar; Giulio Zizzo; Sergio Maffeis
Adversarial Infrared Curves: An Attack on Infrared Pedestrian Detectors in the Physical World. (74%)Chengyin Hu; Weiwen Shi
Exploiting Novel GPT-4 APIs. (8%)Kellin Pelrine; Mohammad Taufeeque; Michał Zając; Euan McLean; Adam Gleave
2023-12-20
Mutual-modality Adversarial Attack with Semantic Perturbation. (99%)Jingwen Ye; Ruonan Yu; Songhua Liu; Xinchao Wang
LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate. (99%)Tao Wu; Tie Luo; Donald C. Wunsch
Adversarial Markov Games: On Adaptive Decision-Based Attacks and Defenses. (98%)Ilias Tsingenopoulos; Vera Rimmer; Davy Preuveneers; Fabio Pierazzi; Lorenzo Cavallaro; Wouter Joosen
Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models. (98%)Jingwei Yi; Yueqi Xie; Bin Zhu; Emre Kiciman; Guangzhong Sun; Xing Xie; Fangzhao Wu
PGN: A perturbation generation network against deep reinforcement learning. (96%)Xiangjuan Li; Feifan Li; Yang Li; Quan Pan
ARBiBench: Benchmarking Adversarial Robustness of Binarized Neural Networks. (96%)Peng Zhao; Jiehua Zhang; Bowen Peng; Longguang Wang; YingMei Wei; Yu Liu; Li Liu
Scaling Compute Is Not All You Need for Adversarial Robustness. (93%)Edoardo Debenedetti; Zishen Wan; Maksym Andriushchenko; Vikash Sehwag; Kshitij Bhardwaj; Bhavya Kailkhura
Doubly Perturbed Task Free Continual Learning. (9%)Byung Hyun Lee; Min-hwan Oh; Se Young Chun
Interactive Visualization of Time-Varying Flow Fields Using Particle Tracing Neural Networks. (1%)Mengjiao Han; Jixian Li; Sudhanshu Sane; Shubham Gupta; Bei Wang; Steve Petruzza; Chris R. Johnson
2023-12-19
Tensor Train Decomposition for Adversarial Attacks on Computer Vision Models. (96%)Andrei Chertkov; Ivan Oseledets
Rethinking Randomized Smoothing from the Perspective of Scalability. (86%)Anupriya Kumari; Devansh Bhardwaj; Sukrit Jindal
SkyMask: Attack-agnostic Robust Federated Learning with Fine-grained Learnable Masks. (74%)Peishen Yan; Hao Wang; Tao Song; Yang Hua; Ruhui Ma; Ningxin Hu; Mohammad R. Haghighat; Haibing Guan
Progressive Poisoned Data Isolation for Training-time Backdoor Defense. (61%)Yiming Chen; Haiwei Wu; Jiantao Zhou
Adversarial AutoMixup. (11%)Huafeng Qin; Xin Jin; Yun Jiang; Mounim A. El-Yacoubi; Xinbo Gao
Shaping Up SHAP: Enhancing Stability through Layer-Wise Neighbor Selection. (1%)Gwladys Kelodjou; Laurence Rozé; Véronique Masson; Luis Galárraga; Romaric Gaudel; Maurice Tchuente; Alexandre Termier
I-CEE: Tailoring Explanations of Image Classifications Models to User Expertise. (1%)Yao Rong; Peizhu Qian; Vaibhav Unhelkar; Enkelejda Kasneci
2023-12-18
Gemini: A Family of Highly Capable Multimodal Models. (99%)Team Gemini; Rohan Anil; Sebastian Borgeaud; Yonghui Wu; Jean-Baptiste Alayrac; Jiahui Yu; Radu Soricut; Johan Schalkwyk; Andrew M. Dai; Anja Hauth; Katie Millican; David Silver; Slav Petrov; Melvin Johnson; Ioannis Antonoglou; Julian Schrittwieser; Amelia Glaese; Jilin Chen; Emily Pitler; Timothy Lillicrap; Angeliki Lazaridou; Orhan Firat; James Molloy; Michael Isard; Paul R. Barham; Tom Hennigan; Benjamin Lee; Fabio Viola; Malcolm Reynolds; Yuanzhong Xu; Ryan Doherty; Eli Collins; Clemens Meyer; Eliza Rutherford; Erica Moreira; Kareem Ayoub; Megha Goel; George Tucker; Enrique Piqueras; Maxim Krikun; Iain Barr; Nikolay Savinov; Ivo Danihelka; Becca Roelofs; Anaïs White; Anders Andreassen; Glehn Tamara von; Lakshman Yagati; Mehran Kazemi; Lucas Gonzalez; Misha Khalman; Jakub Sygnowski; Alexandre Frechette; Charlotte Smith; Laura Culp; Lev Proleev; Yi Luan; Xi Chen; James Lottes; Nathan Schucher; Federico Lebron; Alban Rrustemi; Natalie Clay; Phil Crone; Tomas Kocisky; Jeffrey Zhao; Bartek Perz; Dian Yu; Heidi Howard; Adam Bloniarz; Jack W. Rae; Han Lu; Laurent Sifre; Marcello Maggioni; Fred Alcober; Dan Garrette; Megan Barnes; Shantanu Thakoor; Jacob Austin; Gabriel Barth-Maron; William Wong; Rishabh Joshi; Rahma Chaabouni; Deeni Fatiha; Arun Ahuja; Ruibo Liu; Yunxuan Li; Sarah Cogan; Jeremy Chen; Chao Jia; Chenjie Gu; Qiao Zhang; Jordan Grimstad; Ale Jakse Hartman; Martin Chadwick; Gaurav Singh Tomar; Xavier Garcia; Evan Senter; Emanuel Taropa; Thanumalayan Sankaranarayana Pillai; Jacob Devlin; Michael Laskin; Diego de Las Casas; Dasha Valter; Connie Tao; Lorenzo Blanco; Adrià Puigdomènech Badia; David Reitter; Mianna Chen; Jenny Brennan; Clara Rivera; Sergey Brin; Shariq Iqbal; Gabriela Surita; Jane Labanowski; Abhi Rao; Stephanie Winkler; Emilio Parisotto; Yiming Gu; Kate Olszewska; Yujing Zhang; Ravi Addanki; Antoine Miech; Annie Louis; Laurent El Shafey; Denis Teplyashin; Geoff Brown; Elliot Catt; Nithya Attaluri; Jan Balaguer; Jackie Xiang; Pidong Wang; Zoe Ashwood; Anton Briukhov; Albert Webson; Sanjay Ganapathy; Smit Sanghavi; Ajay Kannan; Ming-Wei Chang; Axel Stjerngren; Josip Djolonga; Yuting Sun; Ankur Bapna; Matthew Aitchison; Pedram Pejman; Henryk Michalewski; Tianhe Yu; Cindy Wang; Juliette Love; Junwhan Ahn; Dawn Bloxwich; Kehang Han; Peter Humphreys; Thibault Sellam; James Bradbury; Varun Godbole; Sina Samangooei; Bogdan Damoc; Alex Kaskasoli; Sébastien M. R. Arnold; Vijay Vasudevan; Shubham Agrawal; Jason Riesa; Dmitry Lepikhin; Richard Tanburn; Srivatsan Srinivasan; Hyeontaek Lim; Sarah Hodkinson; Pranav Shyam; Johan Ferret; Steven Hand; Ankush Garg; Tom Le Paine; Jian Li; Yujia Li; Minh Giang; Alexander Neitz; Zaheer Abbas; Sarah York; Machel Reid; Elizabeth Cole; Aakanksha Chowdhery; Dipanjan Das; Dominika Rogozińska; Vitaly Nikolaev; Pablo Sprechmann; Zachary Nado; Lukas Zilka; Flavien Prost; Luheng He; Marianne Monteiro; Gaurav Mishra; Chris Welty; Josh Newlan; Dawei Jia; Miltiadis Allamanis; Clara Huiyi Hu; Liedekerke Raoul de; Justin Gilmer; Carl Saroufim; Shruti Rijhwani; Shaobo Hou; Disha Shrivastava; Anirudh Baddepudi; Alex Goldin; Adnan Ozturel; Albin Cassirer; Yunhan Xu; Daniel Sohn; Devendra Sachan; Reinald Kim Amplayo; Craig Swanson; Dessie Petrova; Shashi Narayan; Arthur Guez; Siddhartha Brahma; Jessica Landon; Miteyan Patel; Ruizhe Zhao; Kevin Villela; Luyu Wang; Wenhao Jia; Matthew Rahtz; Mai Giménez; Legg Yeung; Hanzhao Lin; James Keeling; Petko Georgiev; Diana Mincu; Boxi Wu; Salem Haykal; Rachel Saputro; Kiran Vodrahalli; James Qin; Zeynep Cankara; Abhanshu Sharma; Nick Fernando; Will Hawkins; Behnam Neyshabur; Solomon Kim; Adrian Hutter; Priyanka Agrawal; Alex Castro-Ros; George van den Driessche; Tao Wang; Fan Yang; Shuo-yiin Chang; Paul Komarek; Ross McIlroy; Mario Lučić; Guodong Zhang; Wael Farhan; Michael Sharman; Paul Natsev; Paul Michel; Yong Cheng; Yamini Bansal; Siyuan Qiao; Kris Cao; Siamak Shakeri; Christina Butterfield; Justin Chung; Paul Kishan Rubenstein; Shivani Agrawal; Arthur Mensch; Kedar Soparkar; Karel Lenc; Timothy Chung; Aedan Pope; Loren Maggiore; Jackie Kay; Priya Jhakra; Shibo Wang; Joshua Maynez; Mary Phuong; Taylor Tobin; Andrea Tacchetti; Maja Trebacz; Kevin Robinson; Yash Katariya; Sebastian Riedel; Paige Bailey; Kefan Xiao; Nimesh Ghelani; Lora Aroyo; Ambrose Slone; Neil Houlsby; Xuehan Xiong; Zhen Yang; Elena Gribovskaya; Jonas Adler; Mateo Wirth; Lisa Lee; Music Li; Thais Kagohara; Jay Pavagadhi; Sophie Bridgers; Anna Bortsova; Sanjay Ghemawat; Zafarali Ahmed; Tianqi Liu; Richard Powell; Vijay Bolina; Mariko Iinuma; Polina Zablotskaia; James Besley; Da-Woon Chung; Timothy Dozat; Ramona Comanescu; Xiance Si; Jeremy Greer; Guolong Su; Martin Polacek; Raphaël Lopez Kaufman; Simon Tokumine; Hexiang Hu; Elena Buchatskaya; Yingjie Miao; Mohamed Elhawaty; Aditya Siddhant; Nenad Tomasev; Jinwei Xing; Christina Greer; Helen Miller; Shereen Ashraf; Aurko Roy; Zizhao Zhang; Ada Ma; Angelos Filos; Milos Besta; Rory Blevins; Ted Klimenko; Chih-Kuan Yeh; Soravit Changpinyo; Jiaqi Mu; Oscar Chang; Mantas Pajarskas; Carrie Muir; Vered Cohen; Charline Le Lan; Krishna Haridasan; Amit Marathe; Steven Hansen; Sholto Douglas; Rajkumar Samuel; Mingqiu Wang; Sophia Austin; Chang Lan; Jiepu Jiang; Justin Chiu; Jaime Alonso Lorenzo; Lars Lowe Sjösund; Sébastien Cevey; Zach Gleicher; Thi Avrahami; Anudhyan Boral; Hansa Srinivasan; Vittorio Selo; Rhys May; Konstantinos Aisopos; Léonard Hussenot; Livio Baldini Soares; Kate Baumli; Michael B. Chang; Adrià Recasens; Ben Caine; Alexander Pritzel; Filip Pavetic; Fabio Pardo; Anita Gergely; Justin Frye; Vinay Ramasesh; Dan Horgan; Kartikeya Badola; Nora Kassner; Subhrajit Roy; Ethan Dyer; Víctor Campos; Alex Tomala; Yunhao Tang; Dalia El Badawy; Elspeth White; Basil Mustafa; Oran Lang; Abhishek Jindal; Sharad Vikram; Zhitao Gong; Sergi Caelles; Ross Hemsley; Gregory Thornton; Fangxiaoyu Feng; Wojciech Stokowiec; Ce Zheng; Phoebe Thacker; Çağlar Ünlü; Zhishuai Zhang; Mohammad Saleh; James Svensson; Max Bileschi; Piyush Patil; Ankesh Anand; Roman Ring; Katerina Tsihlas; Arpi Vezer; Marco Selvi; Toby Shevlane; Mikel Rodriguez; Tom Kwiatkowski; Samira Daruki; Keran Rong; Allan Dafoe; Nicholas FitzGerald; Keren Gu-Lemberg; Mina Khan; Lisa Anne Hendricks; Marie Pellat; Vladimir Feinberg; James Cobon-Kerr; Tara Sainath; Maribeth Rauh; Sayed Hadi Hashemi; Richard Ives; Yana Hasson; YaGuang Li; Eric Noland; Yuan Cao; Nathan Byrd; Le Hou; Qingze Wang; Thibault Sottiaux; Michela Paganini; Jean-Baptiste Lespiau; Alexandre Moufarek; Samer Hassan; Kaushik Shivakumar; Amersfoort Joost van; Amol Mandhane; Pratik Joshi; Anirudh Goyal; Matthew Tung; Andrew Brock; Hannah Sheahan; Vedant Misra; Cheng Li; Nemanja Rakićević; Mostafa Dehghani; Fangyu Liu; Sid Mittal; Junhyuk Oh; Seb Noury; Eren Sezener; Fantine Huot; Matthew Lamm; Cao Nicola De; Charlie Chen; Gamaleldin Elsayed; Ed Chi; Mahdis Mahdieh; Ian Tenney; Nan Hua; Ivan Petrychenko; Patrick Kane; Dylan Scandinaro; Rishub Jain; Jonathan Uesato; Romina Datta; Adam Sadovsky; Oskar Bunyan; Dominik Rabiej; Shimu Wu; John Zhang; Gautam Vasudevan; Edouard Leurent; Mahmoud Alnahlawi; Ionut Georgescu; Nan Wei; Ivy Zheng; Betty Chan; Pam G Rabinovitch; Piotr Stanczyk; Ye Zhang; David Steiner; Subhajit Naskar; Michael Azzam; Matthew Johnson; Adam Paszke; Chung-Cheng Chiu; Jaume Sanchez Elias; Afroz Mohiuddin; Faizan Muhammad; Jin Miao; Andrew Lee; Nino Vieillard; Sahitya Potluri; Jane Park; Elnaz Davoodi; Jiageng Zhang; Jeff Stanway; Drew Garmon; Abhijit Karmarkar; Zhe Dong; Jong Lee; Aviral Kumar; Luowei Zhou; Jonathan Evens; William Isaac; Zhe Chen; Johnson Jia; Anselm Levskaya; Zhenkai Zhu; Chris Gorgolewski; Peter Grabowski; Yu Mao; Alberto Magni; Kaisheng Yao; Javier Snaider; Norman Casagrande; Paul Suganthan; Evan Palmer; Geoffrey Irving; Edward Loper; Manaal Faruqui; Isha Arkatkar; Nanxin Chen; Izhak Shafran; Michael Fink; Alfonso Castaño; Irene Giannoumis; Wooyeol Kim; Mikołaj Rybiński; Ashwin Sreevatsa; Jennifer Prendki; David Soergel; Adrian Goedeckemeyer; Willi Gierke; Mohsen Jafari; Meenu Gaba; Jeremy Wiesner; Diana Gage Wright; Yawen Wei; Harsha Vashisht; Yana Kulizhskaya; Jay Hoover; Maigo Le; Lu Li; Chimezie Iwuanyanwu; Lu Liu; Kevin Ramirez; Andrey Khorlin; Albert Cui; Tian LIN; Marin Georgiev; Marcus Wu; Ricardo Aguilar; Keith Pallo; Abhishek Chakladar; Alena Repina; Xihui Wu; der Weide Tom van; Priya Ponnapalli; Caroline Kaplan; Jiri Simsa; Shuangfeng Li; Olivier Dousse; Fan Yang; Jeff Piper; Nathan Ie; Minnie Lui; Rama Pasumarthi; Nathan Lintz; Anitha Vijayakumar; Lam Nguyen Thiet; Daniel Andor; Pedro Valenzuela; Cosmin Paduraru; Daiyi Peng; Katherine Lee; Shuyuan Zhang; Somer Greene; Duc Dung Nguyen; Paula Kurylowicz; Sarmishta Velury; Sebastian Krause; Cassidy Hardin; Lucas Dixon; Lili Janzer; Kiam Choo; Ziqiang Feng; Biao Zhang; Achintya Singhal; Tejasi Latkar; Mingyang Zhang; Quoc Le; Elena Allica Abellan; Dayou Du; Dan McKinnon; Natasha Antropova; Tolga Bolukbasi; Orgad Keller; David Reid; Daniel Finchelstein; Maria Abi Raad; Remi Crocker; Peter Hawkins; Robert Dadashi; Colin Gaffney; Sid Lall; Ken Franko; Egor Filonov; Anna Bulanova; Rémi Leblond; Vikas Yadav; Shirley Chung; Harry Askham; Luis C. Cobo; Kelvin Xu; Felix Fischer; Jun Xu; Christina Sorokin; Chris Alberti; Chu-Cheng Lin; Colin Evans; Hao Zhou; Alek Dimitriev; Hannah Forbes; Dylan Banarse; Zora Tung; Jeremiah Liu; Mark Omernick; Colton Bishop; Chintu Kumar; Rachel Sterneck; Ryan Foley; Rohan Jain; Swaroop Mishra; Jiawei Xia; Taylor Bos; Geoffrey Cideron; Ehsan Amid; Francesco Piccinno; Xingyu Wang; Praseem Banzal; Petru Gurita; Hila Noga; Premal Shah; Daniel J. Mankowitz; Alex Polozov; Nate Kushman; Victoria Krakovna; Sasha Brown; MohammadHossein Bateni; Dennis Duan; Vlad Firoiu; Meghana Thotakuri; Tom Natan; Anhad Mohananey; Matthieu Geist; Sidharth Mudgal; Sertan Girgin; Hui Li; Jiayu Ye; Ofir Roval; Reiko Tojo; Michael Kwong; James Lee-Thorp; Christopher Yew; Quan Yuan; Sumit Bagri; Danila Sinopalnikov; Sabela Ramos; John Mellor; Abhishek Sharma; Aliaksei Severyn; Jonathan Lai; Kathy Wu; Heng-Tze Cheng; David Miller; Nicolas Sonnerat; Denis Vnukov; Rory Greig; Jennifer Beattie; Emily Caveness; Libin Bai; Julian Eisenschlos; Alex Korchemniy; Tomy Tsai; Mimi Jasarevic; Weize Kong; Phuong Dao; Zeyu Zheng; Frederick Liu; Fan Yang; Rui Zhu; Mark Geller; Tian Huey Teh; Jason Sanmiya; Evgeny Gladchenko; Nejc Trdin; Andrei Sozanschi; Daniel Toyama; Evan Rosen; Sasan Tavakkol; Linting Xue; Chen Elkind; Oliver Woodman; John Carpenter; George Papamakarios; Rupert Kemp; Sushant Kafle; Tanya Grunina; Rishika Sinha; Alice Talbert; Abhimanyu Goyal; Diane Wu; Denese Owusu-Afriyie; Cosmo Du; Chloe Thornton; Jordi Pont-Tuset; Pradyumna Narayana; Jing Li; Sabaer Fatehi; John Wieting; Omar Ajmeri; Benigno Uria; Tao Zhu; Yeongil Ko; Laura Knight; Amélie Héliou; Ning Niu; Shane Gu; Chenxi Pang; Dustin Tran; Yeqing Li; Nir Levine; Ariel Stolovich; Norbert Kalb; Rebeca Santamaria-Fernandez; Sonam Goenka; Wenny Yustalim; Robin Strudel; Ali Elqursh; Balaji Lakshminarayanan; Charlie Deck; Shyam Upadhyay; Hyo Lee; Mike Dusenberry; Zonglin Li; Xuezhi Wang; Kyle Levin; Raphael Hoffmann; Dan Holtmann-Rice; Olivier Bachem; Summer Yue; Sho Arora; Eric Malmi; Daniil Mirylenka; Qijun Tan; Christy Koh; Soheil Hassas Yeganeh; Siim Põder; Steven Zheng; Francesco Pongetti; Mukarram Tariq; Yanhua Sun; Lucian Ionita; Mojtaba Seyedhosseini; Pouya Tafti; Ragha Kotikalapudi; Zhiyu Liu; Anmol Gulati; Jasmine Liu; Xinyu Ye; Bart Chrzaszcz; Lily Wang; Nikhil Sethi; Tianrun Li; Ben Brown; Shreya Singh; Wei Fan; Aaron Parisi; Joe Stanton; Chenkai Kuang; Vinod Koverkathu; Christopher A. Choquette-Choo; Yunjie Li; TJ Lu; Abe Ittycheriah; Prakash Shroff; Pei Sun; Mani Varadarajan; Sanaz Bahargam; Rob Willoughby; David Gaddy; Ishita Dasgupta; Guillaume Desjardins; Marco Cornero; Brona Robenek; Bhavishya Mittal; Ben Albrecht; Ashish Shenoy; Fedor Moiseev; Henrik Jacobsson; Alireza Ghaffarkhah; Morgane Rivière; Alanna Walton; Clément Crepy; Alicia Parrish; Yuan Liu; Zongwei Zhou; Clement Farabet; Carey Radebaugh; Praveen Srinivasan; der Salm Claudia van; Andreas Fidjeland; Salvatore Scellato; Eri Latorre-Chimoto; Hanna Klimczak-Plucińska; David Bridson; Cesare Dario de; Tom Hudson; Piermaria Mendolicchio; Lexi Walker; Alex Morris; Ivo Penchev; Matthew Mauger; Alexey Guseynov; Alison Reid; Seth Odoom; Lucia Loher; Victor Cotruta; Madhavi Yenugula; Dominik Grewe; Anastasia Petrushkina; Tom Duerig; Antonio Sanchez; Steve Yadlowsky; Amy Shen; Amir Globerson; Adam Kurzrok; Lynette Webb; Sahil Dua; Dong Li; Preethi Lahoti; Surya Bhupatiraju; Dan Hurt; Haroon Qureshi; Ananth Agarwal; Tomer Shani; Matan Eyal; Anuj Khare; Shreyas Rammohan Belle; Lei Wang; Chetan Tekur; Mihir Sanjay Kale; Jinliang Wei; Ruoxin Sang; Brennan Saeta; Tyler Liechty; Yi Sun; Yao Zhao; Stephan Lee; Pandu Nayak; Doug Fritz; Manish Reddy Vuyyuru; John Aslanides; Nidhi Vyas; Martin Wicke; Xiao Ma; Taylan Bilal; Evgenii Eltyshev; Daniel Balle; Nina Martin; Hardie Cate; James Manyika; Keyvan Amiri; Yelin Kim; Xi Xiong; Kai Kang; Florian Luisier; Nilesh Tripuraneni; David Madras; Mandy Guo; Austin Waters; Oliver Wang; Joshua Ainslie; Jason Baldridge; Han Zhang; Garima Pruthi; Jakob Bauer; Feng Yang; Riham Mansour; Jason Gelman; Yang Xu; George Polovets; Ji Liu; Honglong Cai; Warren Chen; XiangHai Sheng; Emily Xue; Sherjil Ozair; Adams Yu; Christof Angermueller; Xiaowei Li; Weiren Wang; Julia Wiesinger; Emmanouil Koukoumidis; Yuan Tian; Anand Iyer; Madhu Gurumurthy; Mark Goldenson; Parashar Shah; MK Blake; Hongkun Yu; Anthony Urbanowicz; Jennimaria Palomaki; Chrisantha Fernando; Kevin Brooks; Ken Durden; Harsh Mehta; Nikola Momchev; Elahe Rahimtoroghi; Maria Georgaki; Amit Raul; Sebastian Ruder; Morgan Redshaw; Jinhyuk Lee; Komal Jalan; Dinghua Li; Ginger Perng; Blake Hechtman; Parker Schuh; Milad Nasr; Mia Chen; Kieran Milan; Vladimir Mikulik; Trevor Strohman; Juliana Franco; Tim Green; Demis Hassabis; Koray Kavukcuoglu; Jeffrey Dean; Oriol Vinyals
Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model. (99%)Decheng Liu; Xijun Wang; Chunlei Peng; Nannan Wang; Ruiming Hu; Xinbo Gao
The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations. (99%)Zebin Yun; Achi-Or Weingarten; Eyal Ronen; Mahmood Sharif
DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models. (16%)Jiachen Zhou; Peizhuo Lv; Yibing Lan; Guozhu Meng; Kai Chen; Hualong Ma
A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models. (10%)Aysan Esmradi; Daniel Wankit Yip; Chun Fai Chan
Model Stealing Attack against Recommender System. (10%)Zhihao Zhu; Rui Fan; Chenwang Wu; Yi Yang; Defu Lian; Enhong Chen
Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity. (4%)Zhihao Zhu; Chenwang Wu; Rui Fan; Yi Yang; Defu Lian; Enhong Chen
MISA: Unveiling the Vulnerabilities in Split Federated Learning. (1%)Wei Wan; Yuxuan Ning; Shengshan Hu; Lulu Xue; Minghui Li; Leo Yu Zhang; Hai Jin
A Survey of Side-Channel Attacks in Context of Cache -- Taxonomies, Analysis and Mitigation. (1%)Ankit Pulkit; Smita Naval; Vijay Laxmi
2023-12-17
UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks. (98%)Bingyin Zhao; Yingjie Lao
The Pros and Cons of Adversarial Robustness. (92%)Yacine Izza; Joao Marques-Silva
A Mutation-Based Method for Multi-Modal Jailbreaking Attack Detection. (80%)Xiaoyu Zhang; Cen Zhang; Tianlin Li; Yihao Huang; Xiaojun Jia; Xiaofei Xie; Yang Liu; Chao Shen
Robust Node Representation Learning via Graph Variational Diffusion Networks. (11%)Jun Zhuang; Mohammad Al Hasan
A Study on Transferability of Deep Learning Models for Network Intrusion Detection. (4%)Shreya Ghosh; Abu Shafin Mohammad Mahdee Jameel; Aly El Gamal
2023-12-16
Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off. (99%)Yu-An Liu; Ruqing Zhang; Mingkun Zhang; Wei Chen; Maarten de Rijke; Jiafeng Guo; Xueqi Cheng
Rethinking Robustness of Model Attributions. (80%)Sandesh Kamath; Sankalp Mittal; Amit Deshpande; Vineeth N Balasubramanian
SAME: Sample Reconstruction Against Model Extraction Attacks. (13%)Yi Xie; Jie Zhang; Shiqian Zhao; Tianwei Zhang; Xiaofeng Chen
TrojFair: Trojan Fairness Attacks. (8%)Mengxin Zheng; Jiaqi Xue; Yi Sheng; Lei Yang; Qian Lou; Lei Jiang
Transformers in Unsupervised Structure-from-Motion. (3%)Hemang Chawla; Arnav Varma; Elahe Arani; Bahram Zonooz
TrojFSP: Trojan Insertion in Few-shot Prompt Tuning. (2%)Mengxin Zheng; Jiaqi Xue; Xun Chen; YanShan Wang; Qian Lou; Lei Jiang
2023-12-15
LogoStyleFool: Vitiating Video Recognition Systems via Logo Style Transfer. (99%)Yuxin Cao; Ziyu Zhao; Xi Xiao; Derui Wang; Minhui Xue; Jin Lu
Embodied Adversarial Attack: A Dynamic Robust Physical Attack in Autonomous Driving. (99%)Yitong Sun; Yao Huang; Xingxing Wei
Towards Transferable Targeted 3D Adversarial Attack in the Physical World. (99%)Yao Huang; Yinpeng Dong; Shouwei Ruan; Xiao Yang; Hang Su; Xingxing Wei
A Malware Classification Survey on Adversarial Attacks and Defences. (98%)Mahesh Datta Sai Ponnuru; Likhitha Amasala; Tanu Sree Bhimavarapu; Guna Chaitanya Garikipati
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge. (76%)Jiahe Lan; Jie Wang; Baochen Yan; Zheng Yan; Elisa Bertino
Closing the Gap: Achieving Better Accuracy-Robustness Tradeoffs Against Query-Based Attacks. (74%)Pascal Zimmer; Sébastien Andreina; Giorgia Azzurra Marson; Ghassan Karame
Fragility, Robustness and Antifragility in Deep Learning. (67%)Chandresh Pravin; Ivan Martino; Giuseppe Nicosia; Varun Ojha
VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees. (67%)Anahita Baninajjar; Ahmed Rezine; Amir Aminifar
Silent Guardian: Protecting Text from Malicious Exploitation by Large Language Models. (10%)Jiawei Zhao; Kejiang Chen; Xiaojian Yuan; Yuang Qi; Weiming Zhang; Nenghai Yu
2023-12-14
AVA: Inconspicuous Attribute Variation-based Adversarial Attack bypassing DeepFake Detection. (99%)Xiangtao Meng; Li Wang; Shanqing Guo; Lei Ju; Qingchuan Zhao
Continual Adversarial Defense. (95%)Qian Wang; Yaoyao Liu; Hefei Ling; Yingwei Li; Qihao Liu; Ping Li; Jiazhong Chen; Alan Yuille; Ning Yu
SlowTrack: Increasing the Latency of Camera-based Perception in Autonomous Driving Using Adversarial Examples. (92%)Chen Ma; Ningfei Wang; Qi Alfred Chen; Chao Shen
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks. (84%)Changjiang Li; Ren Pang; Bochuan Cao; Zhaohan Xi; Jinghui Chen; Shouling Ji; Ting Wang
Detection and Defense of Unlearnable Examples. (81%)Yifan Zhu; Lijia Yu; Xiao-Shan Gao
Improve Robustness of Reinforcement Learning against Observation Perturbations via $l_\infty$ Lipschitz Policy Networks. (81%)Buqing Nie; Jingtian Ji; Yangqing Fu; Yue Gao
Adversarial Robustness on Image Classification with $k$-means. (81%)Rollin Omari; Junae Kim; Paul Montague
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey. (76%)Yichen Wan; Youyang Qu; Wei Ni; Yong Xiang; Longxiang Gao; Ekram Hossain
DRAM-Locker: A General-Purpose DRAM Protection Mechanism against Adversarial DNN Weight Attacks. (45%)Ranyang Zhou; Sabbir Ahmed; Arman Roohi; Adnan Siraj Rakin; Shaahin Angizi
No-Skim: Towards Efficiency Robustness Evaluation on Skimming-based Language Models. (45%)Shengyao Zhang; Mi Zhang; Xudong Pan; Min Yang
Forbidden Facts: An Investigation of Competing Objectives in Llama-2. (45%)Tony T. Wang; Miles Wang; Kaivalya Hariharan; Nir Shavit
Coevolutionary Algorithm for Building Robust Decision Trees under Minimax Regret. (13%)Adam Żychowski; Andrew Perrault; Jacek Mańdziuk
Exploring Transferability for Randomized Smoothing. (5%)Kai Qiu; Huishuai Zhang; Zhirong Wu; Stephen Lin
Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting. (1%)Anthony Chen; Huanrui Yang; Yulu Gan; Denis A Gudovskiy; Zhen Dong; Haofan Wang; Tomoyuki Okuno; Yohei Nakata; Shanghang Zhang; Kurt Keutzer
2023-12-13
Defenses in Adversarial Machine Learning: A Survey. (99%)Baoyuan Wu; Shaokui Wei; Mingli Zhu; Meixi Zheng; Zihao Zhu; Mingda Zhang; Hongrui Chen; Danni Yuan; Li Liu; Qingshan Liu
Robust Few-Shot Named Entity Recognition with Boundary Discrimination and Correlation Purification. (99%)Xiaojun Xue; Chunxia Zhang; Tianxiang Xu; Zhendong Niu
Universal Adversarial Framework to Improve Adversarial Robustness for Diabetic Retinopathy Detection. (98%)Samrat Mukherjee; Dibyanayan Bandyopadhyay; Baban Gain; Asif Ekbal
Towards Inductive Robustness: Distilling and Fostering Wave-induced Resonance in Transductive GCNs Against Graph Adversarial Attacks. (83%)Ao Liu; Wenshan Li; Tao Li; Beibei Li; Hanyuan Huang; Pan Zhou
Scalable Ensemble-based Detection Method against Adversarial Attacks for speaker verification. (64%)Haibin Wu; Heng-Cheng Kuo; Yu Tsao; Hung-yi Lee
Accelerating the Global Aggregation of Local Explanations. (47%)Alon Mor; Yonatan Belinkov; Benny Kimelfeld
Erasing Self-Supervised Learning Backdoor by Cluster Activation Masking. (22%)Shengsheng Qian; Dizhan Xue; Yifei Wang; Shengjie Zhang; Huaiwen Zhang; Changsheng Xu
Efficient Representation of the Activation Space in Deep Neural Networks. (11%)Tanya Akumu; Celia Cintas; Girmaw Abebe Tadesse; Adebayo Oshingbesan; Skyler Speakman; Edward McFowland III
Efficient Toxic Content Detection by Bootstrapping and Distilling Large Language Models. (1%)Jiang Zhang; Qiong Wu; Yiming Xu; Cheng Cao; Zheng Du; Konstantinos Psounis
2023-12-12
Radio Signal Classification by Adversarially Robust Quantum Machine Learning. (99%)Yanqiu Wu; Eromanga Adermann; Chandra Thapa; Seyit Camtepe; Hajime Suzuki; Muhammad Usman
SSTA: Salient Spatially Transformed Attack. (99%)Renyang Liu; Wei Zhou; Sixin Wu; Jun Zhao; Kwok-Yan Lam
DTA: Distribution Transform-based Attack for Query-Limited Scenario. (99%)Renyang Liu; Wei Zhou; Xin Jin; Song Gao; Yuanyu Wang; Ruxin Wang
May the Noise be with you: Adversarial Training without Adversarial Examples. (98%)Ayoub Arous; Andres F Lopez-Lopera; Nael Abu-Ghazaleh; Ihsen Alouani
Collapse-Oriented Adversarial Training with Triplet Decoupling for Robust Image Retrieval. (98%)Qiwei Tian; Chenhao Lin; Qian Li; Zhengyu Zhao; Chao Shen
Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial Training. (98%)Qian Li; Yuxiao Hu; Yinpeng Dong; Dongxiao Zhang; Yuntian Chen
QuadAttack: A Quadratic Programming Approach to Ordered Top-K Attacks. (97%)Thomas Paniagua; Ryan Grainger; Tianfu Wu
Attacking the Loop: Adversarial Attacks on Graph-based Loop Closure Detection. (92%)Jonathan J. Y. Kim; Martin Urschler; Patricia J. Riddle; Jorg S. Wicker
ReRoGCRL: Representation-based Robustness in Goal-Conditioned Reinforcement Learning. (86%)Xiangyu Yin; Sihao Wu; Jiaxu Liu; Meng Fang; Xingyu Zhao; Xiaowei Huang; Wenjie Ruan
Robust MRI Reconstruction by Smoothed Unrolling (SMUG). (82%)Shijun Liang; Van Hoang Minh Nguyen; Jinghan Jia; Ismail Alkhouri; Sijia Liu; Saiprasad Ravishankar
Cost Aware Untargeted Poisoning Attack against Graph Neural Networks. (70%)Yuwei Han; Yuni Lai; Yulin Zhu; Kai Zhou
EdgePruner: Poisoned Edge Pruning in Graph Contrastive Learning. (47%)Hiroya Kato; Kento Hasegawa; Seira Hidano; Kazuhide Fukushima
Causality Analysis for Evaluating the Security of Large Language Models. (22%)Wei Zhao; Zhe Li; Jun Sun
SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models. (13%)Feifei Wang; Zhentao Tan; Tianyi Wei; Yue Wu; Qidong Huang
Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass Safety Filters of Text-to-Image Models. (8%)Yimo Deng; Huangxun Chen
Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation Of Adversarial Attacks In Geospatial Systems. (5%)Michael Lanier; Aayush Dhakal; Zhexiao Xiong; Arthur Li; Nathan Jacobs; Yevgeniy Vorobeychik
Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification. (2%)Bang Wu; Xingliang Yuan; Shuo Wang; Qi Li; Minhui Xue; Shirui Pan
Majority is Not Required: A Rational Analysis of the Private Double-Spend Attack from a Sub-Majority Adversary. (1%)Yanni Georghiades; Rajesh Mishra; Karl Kreder; Sriram Vishwanath
Rethinking Model Inversion Attacks With Patch-Wise Reconstruction. (1%)Jonggyu Jang; Hyeonsu Lyu; Hyun Jong Yang
2023-12-11
Towards Transferable Adversarial Attacks with Centralized Perturbation. (99%)Shangbo Wu; Yu-an Tan; Yajie Wang; Ruinan Ma; Wencong Ma; Yuanzhang Li
MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks. (98%)Yuyang Zhou; Guang Cheng; Zongyao Chen; Shui Yu
Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets. (83%)Subhajit Dutta Chowdhury; Zhiyu Ni; Qingyuan Peng; Souvik Kundu; Pierluigi Nuzzo
Reward Certification for Policy Smoothed Reinforcement Learning. (78%)Ronghui Mu; Leandro Soriano Marcolino; Tianle Zhang; Yanghao Zhang; Xiaowei Huang; Wenjie Ruan
Activation Gradient based Poisoned Sample Detection Against Backdoor Attacks. (31%)Danni Yuan; Shaokui Wei; Mingda Zhang; Li Liu; Baoyuan Wu
Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models. (22%)Sanghak Oh; Kiho Lee; Seonhye Park; Doowon Kim; Hyoungshick Kim
Promoting Counterfactual Robustness through Diversity. (13%)Francesco Leofante; Nico Potyka
Resilient Path Planning for UAVs in Data Collection under Adversarial Attacks. (10%)Xueyuan Wang; M. Cenk Gursoy
Adversarial Camera Patch: An Effective and Robust Physical-World Attack on Object Detectors. (1%)Kalibinuer Tiliwalidi
Robust Graph Neural Network based on Graph Denoising. (1%)Victor M. Tenorio; Samuel Rey; Antonio G. Marques
2023-12-10
Data-Free Hard-Label Robustness Stealing Attack. (86%)Xiaojian Yuan; Kejiang Chen; Wen Huang; Jie Zhang; Weiming Zhang; Nenghai Yu
A Practical Survey on Emerging Threats from AI-driven Voice Attacks: How Vulnerable are Commercial Voice Control Systems? (76%)Yuanda Wang; Qiben Yan; Nikolay Ivanov; Xun Chen
An Ambiguity Measure for Recognizing the Unknowns in Deep Learning. (12%)Roozbeh Yousefzadeh
METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities. (2%)Sangwon Hyun; Mingyu Guo; M. Ali Babar
2023-12-09
Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks. (99%)Ege Erdogan; Simon Geisler; Stephan Günnemann
Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation. (98%)Shiji Zhao; Ranjie Duan; Xizhe Wang; Xingxing Wei
Dynamic Adversarial Attacks on Autonomous Driving Systems. (98%)Amirhosein Chahe; Chenan Wang; Abhishek Jeyapratap; Kaidi Xu; Lifeng Zhou
Initialization Matters for Adversarial Transfer Learning. (76%)Andong Hua; Jindong Gu; Zhiyu Xue; Nicholas Carlini; Eric Wong; Yao Qin
2023-12-08
HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs. (99%)Xiaobing Pei; Haoran Yang; Gang Shen
SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation. (99%)Bangyan He; Xiaojun Jia; Siyuan Liang; Tianrui Lou; Yang Liu; Xiaochun Cao
MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness. (99%)Xiaoyun Xu; Shujian Yu; Jingzheng Wu; Stjepan Picek
BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting. (96%)Huming Qiu; Junjie Sun; Mi Zhang; Xudong Pan; Min Yang
An adversarial attack approach for eXplainable AI evaluation on deepfake detection models. (38%)Balachandar Gowrisankar; Vrizlynn L. L. Thing
A Red Teaming Framework for Securing AI in Maritime Autonomous Systems. (3%)Mathew J. Walter; Aaron Barrett; Kimberly Tam
Annotation-Free Group Robustness via Loss-Based Resampling. (2%)Mahdi Ghaznavi; Hesam Asadollahzadeh; HamidReza Yaghoubi Araghi; Fahimeh Hosseini Noohdani; Mohammad Hossein Rohban; Mahdieh Soleymani Baghshah
HuRef: HUman-REadable Fingerprint for Large Language Models. (2%)Boyi Zeng; Lizheng Wang; Yuncong Hu; Yi Xu; Chenghu Zhou; Xinbing Wang; Yu Yu; Zhouhan Lin
Topology-Based Reconstruction Prevention for Decentralised Learning. (1%)Florine W. Dekker (Delft University of Technology, the Netherlands); Zekeriya Erkin (Delft University of Technology, the Netherlands); Mauro Conti (Università di Padova, Italy and Delft University of Technology, the Netherlands)
2023-12-07
MimicDiffusion: Purifying Adversarial Perturbation via Mimicking Clean Diffusion Model. (99%)Kaiyu Song; Hanjiang Lai
OT-Attack: Enhancing Adversarial Transferability of Vision-Language Models via Optimal Transport Optimization. (99%)Dongchen Han; Xiaojun Jia; Yang Bai; Jindong Gu; Yang Liu; Xiaochun Cao
Diffence: Fencing Membership Privacy With Diffusion Models. (97%)Yuefeng Peng; Ali Naseh; Amir Houmansadr
FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning. (70%)Hossein Fereidooni; Alessandro Pegoraro; Phillip Rieger; Alexandra Dmitrienko; Ahmad-Reza Sadeghi
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks. (64%)Shuli Jiang; Swanand Ravindra Kadhe; Yi Zhou; Ling Cai; Nathalie Baracaldo
DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions. (15%)Fangzhou Wu; Xiaogeng Liu; Chaowei Xiao
2023-12-06
Defense against ML-based Power Side-channel Attacks on DNN Accelerators with Adversarial Attacks. (98%)Xiaobei Yan; Chip Hong Chang; Tianwei Zhang
Defense Against Adversarial Attacks using Convolutional Auto-Encoders. (97%)Shreyasi Mandal
Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks. (88%)Yuni Lai; Yulin Zhu; Bailin Pan; Kai Zhou
RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training. (54%)Jaehyung Kim; Yuning Mao; Rui Hou; Hanchao Yu; Davis Liang; Pascale Fung; Qifan Wang; Fuli Feng; Lifu Huang; Madian Khabsa
Detecting Voice Cloning Attacks via Timbre Watermarking. (13%)Chang Liu; Jie Zhang; Tianwei Zhang; Xi Yang; Weiming Zhang; Nenghai Yu
Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models. (11%)Sze Jue Yang; Chinh D. La; Quang H. Nguyen; Eugene Bagdasaryan; Kok-Seng Wong; Anh Tuan Tran; Chee Seng Chan; Khoa D. Doan
Dr. Jekyll and Mr. Hyde: Two Faces of LLMs. (4%)Matteo Gioele Collu; Tom Janssen-Groesbeek; Stefanos Koffas; Mauro Conti; Stjepan Picek
MICRO: Model-Based Offline Reinforcement Learning with a Conservative Bellman Operator. (2%)Xiao-Yin Liu; Xiao-Hu Zhou; Guo-Tao Li; Hao Li; Mei-Jiang Gui; Tian-Yu Xiang; De-Xing Huang; Zeng-Guang Hou
2023-12-05
Generating Visually Realistic Adversarial Patch. (99%)Xiaosen Wang; Kunyu Wang
A Simple Framework to Enhance the Adversarial Robustness of Deep Learning-based Intrusion Detection System. (99%)Xinwei Yuan; Shu Han; Wei Huang; Hongliang Ye; Xianglong Kong; Fan Zhang
Realistic Scatterer Based Adversarial Attacks on SAR Image Classifiers. (99%)Tian Ye; Rajgopal Kannan; Viktor Prasanna; Carl Busart; Lance Kaplan
ScAR: Scaling Adversarial Robustness for LiDAR Object Detection. (99%)Xiaohu Lu; Hayder Radha
Class Incremental Learning for Adversarial Robustness. (98%)Seungju Cho; Hongsin Lee; Changick Kim
Provable Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More. (89%)Jan Schuchardt; Yan Scholten; Stephan Günnemann
On the Robustness of Large Multimodal Models Against Image Adversarial Attacks. (69%)Xuanming Cui; Alejandro Aparcedo; Young Kyun Jang; Ser-Nam Lim
Scaling Laws for Adversarial Attacks on Language Model Activations. (50%)Stanislav Fort
Indirect Gradient Matching for Adversarial Robust Distillation. (13%)Hongsin Lee; Seungju Cho; Changick Kim
Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics. (3%)Xiaoxing Mo; Yechao Zhang; Leo Yu Zhang; Wei Luo; Nan Sun; Shengshan Hu; Shang Gao; Yang Xiang
Prompt Optimization via Adversarial In-Context Learning. (3%)Xuan Long Do; Yiran Zhao; Hannah Brown; Yuxi Xie; James Xu Zhao; Nancy F. Chen; Kenji Kawaguchi; Michael Qizhe Xie; Junxian He
Privacy-Preserving Task-Oriented Semantic Communications Against Model Inversion Attacks. (2%)Yanhu Wang; Shuaishuai Guo; Yiqin Deng; Haixia Zhang; Yuguang Fang
Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning. (2%)Zhuo Huang; Chang Liu; Yinpeng Dong; Hang Su; Shibao Zheng; Tongliang Liu
2023-12-04
Adversarial Medical Image with Hierarchical Feature Hiding. (99%)Qingsong Yao; Zecheng He; Yuexiang Li; Yi Lin; Kai Ma; Yefeng Zheng; S. Kevin Zhou
InstructTA: Instruction-Tuned Targeted Attack for Large Vision-Language Models. (99%)Xunguang Wang; Zhenlan Ji; Pingchuan Ma; Zongjie Li; Shuai Wang
Singular Regularization with Information Bottleneck Improves Model's Adversarial Robustness. (98%)Guanlin Li; Naishan Zheng; Man Zhou; Jie Zhang; Tianwei Zhang
Two-stage optimized unified adversarial patch for attacking visible-infrared cross-modal detectors in the physical world. (12%)Chengyin Hu; Weiwen Shi
Auto DP-SGD: Dual Improvements of Privacy and Accuracy via Automatic Clipping Threshold and Noise Multiplier Estimation. (1%)Sai Venkatesh Chilukoti; Md Imran Hossen; Liqun Shan; Vijay Srinivas Tida; Xiali Hei
Rejuvenating image-GPT as Strong Visual Representation Learners. (1%)Sucheng Ren; Zeyu Wang; Hongru Zhu; Junfei Xiao; Alan Yuille; Cihang Xie
2023-12-03
QuantAttack: Exploiting Dynamic Quantization to Attack Vision Transformers. (99%)Amit Baras; Alon Zolfi; Yuval Elovici; Asaf Shabtai
OCGEC: One-class Graph Embedding Classification for DNN Backdoor Detection. (61%)Haoyu Jiang; Haiyang Yu; Nan Li; Ping Yi
Evaluating the Security of Satellite Systems. (16%)Roy Peled; Eran Aizikovich; Edan Habler; Yuval Elovici; Asaf Shabtai
Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in Autonomous Driving. (13%)Bo Yang; Xiaoyu Ji
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger. (2%)Yiming Li; Mingyan Zhu; Junfeng Guo; Tao Wei; Shu-Tao Xia; Zhan Qin
2023-12-02
TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation. (99%)Xiaojun Jia; Jindong Gu; Yihao Huang; Simeng Qin; Qing Guo; Yang Liu; Xiaochun Cao
Rethinking PGD Attack: Is Sign Function Necessary? (98%)Junjie Yang; Tianlong Chen; Xuxi Chen; Zhangyang Wang; Yingbin Liang
PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks. (61%)Yisheng Zhong; Li-Ping Wang
Mendata: A Framework to Purify Manipulated Training Data. (2%)Zonghao Huang; Neil Gong; Michael K. Reiter
2023-12-01
PyraTrans: Learning Attention-Enriched Multi-Scale Pyramid Network from Pre-Trained Transformers for Effective Malicious URL Detection. (69%)Ruitong Liu; Yanbin Wang; Zhenhao Guo; Haitao Xu; Zhan Qin; Wenrui Ma; Fan Zhang
Survey of Security Issues in Memristor-based Machine Learning Accelerators for RF Analysis. (22%)William Lillis; Max Cohen Hoffing; Wayne Burleson
Deep Generative Attacks and Countermeasures for Data-Driven Offline Signature Verification. (10%)An Ngo; MinhPhuong Cao; Rajesh Kumar
The Philosopher's Stone: Trojaning Plugins of Large Language Models. (4%)Tian Dong; Minhui Xue; Guoxing Chen; Rayne Holland; Yan Meng; Shaofeng Li; Zhen Liu; Haojin Zhu
Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training. (1%)Yefan Zhou; Tianyu Pang; Keqin Liu; Charles H. Martin; Michael W. Mahoney; Yaoqing Yang
Crystal: Enhancing Blockchain Mining Transparency with Quorum Certificate. (1%)Jianyu Niu; Fangyu Gai; Runchao Han; Ren Zhang; Yinqian Zhang; Chen Feng
2023-11-30
Improving the Robustness of Quantized Deep Neural Networks to White-Box Attacks using Stochastic Quantization and Information-Theoretic Ensemble Training. (98%)Saurabh Farkya; Aswin Raghavan; Avi Ziskind
Adversarial Attacks and Defenses for Wireless Signal Classifiers using CDI-aware GANs. (98%)Sujata Sinha; Alkan Soysal
Universal Backdoor Attacks. (97%)Benjamin Schneider; Nils Lukas; Florian Kerschbaum
Fool the Hydra: Adversarial Attacks against Multi-view Object Detection Systems. (97%)Bilel Tarchoun; Quazi Mishkatul Alam; Nael Abu-Ghazaleh; Ihsen Alouani
Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations. (88%)Xianlong Wang; Shengshan Hu; Minghui Li; Zhifei Yu; Ziqi Zhou; Leo Yu Zhang; Hai Jin
Optimal Attack and Defense for Reinforcement Learning. (76%)Jeremy McMahan; Young Wu; Xiaojin Zhu; Qiaomin Xie
Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion? (74%)Zhengyue Zhao; Jinhao Duan; Kaidi Xu; Chenan Wang; Rui Zhang; Zidong Du; Qi Guo; Xing Hu
Improving Adversarial Transferability via Model Alignment. (68%)Avery Ma; Amir-massoud Farahmand; Yangchen Pan; Philip Torr; Jindong Gu
Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach. (62%)Kai Li; Jingjing Zheng; Xin Yuan; Wei Ni; Ozgur B. Akan; H. Vincent Poor
Mark My Words: Analyzing and Evaluating Language Model Watermarks. (9%)Julien Piet; Chawin Sitawarin; Vivian Fang; Norman Mu; David Wagner
2023-11-29
Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention. (98%)Lujia Shen; Yuwen Pu; Shouling Ji; Changjiang Li; Xuhong Zhang; Chunpeng Ge; Ting Wang
Group-wise Sparse and Explainable Adversarial Attacks. (96%)Shpresim Sadiku; Moritz Wagner; Sebastian Pokutta
Quantum Neural Networks under Depolarization Noise: Exploring White-Box Attacks and Defenses. (88%)David Winderl; Nicola Franco; Jeanette Miriam Lorenz
On the Adversarial Robustness of Graph Contrastive Learning Methods. (83%)Filippo Guerranti; Zinuo Yi; Anna Starovoit; Rafiq Kamel; Simon Geisler; Stephan Günnemann
Adversarial Robust Memory-Based Continual Learner. (81%)Xiaoyue Mi; Fan Tang; Zonghan Yang; Danding Wang; Juan Cao; Peng Li; Yang Liu
Improving Faithfulness for Vision Transformers. (80%)Lijie Hu; Yixin Liu; Ninghao Liu; Mengdi Huai; Lichao Sun; Di Wang
TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4. (68%)Zihao Tan; Qingliang Chen; Yongjian Huang; Chen Liang
Topology-Preserving Adversarial Training. (10%)Xiaoyue Mi; Fan Tang; Yepeng Weng; Danding Wang; Juan Cao; Sheng Tang; Peng Li; Yang Liu
Query-Relevant Images Jailbreak Large Multi-Modal Models. (9%)Xin Liu; Yichen Zhu; Yunshi Lan; Chao Yang; Yu Qiao
Analyzing and Explaining Image Classifiers via Diffusion Guidance. (8%)Maximilian Augustin; Yannic Neuhaus; Matthias Hein
Poisoning Attacks Against Contrastive Recommender Systems. (2%)Zongwei Wang; Junliang Yu; Min Gao; Hongzhi Yin; Bin Cui; Shazia Sadiq
SenTest: Evaluating Robustness of Sentence Encoders. (2%)Tanmay Chavan; Shantanu Patankar; Aditya Kane; Omkar Gokhale; Geetanjali Kale; Raviraj Joshi
Critical Influence of Overparameterization on Sharpness-aware Minimization. (1%)Sungbin Shin; Dongyeop Lee; Maksym Andriushchenko; Namhoon Lee
CLIPC8: Face liveness detection algorithm based on image-text pairs and contrastive learning. (1%)Xu Liu; Shu Zhou; Yurong Song; Wenzhe Luo; Xin Zhang
Unveiling the Implicit Toxicity in Large Language Models. (1%)Jiaxin Wen; Pei Ke; Hao Sun; Zhexin Zhang; Chengfei Li; Jinfeng Bai; Minlie Huang
2023-11-28
Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks. (99%)Lucas Beerens; Desmond J. Higham
NeRFTAP: Enhancing Transferability of Adversarial Patches on Face Recognition using Neural Radiance Fields. (99%)Xiaoliang Liu; Furao Shen; Feng Han; Jian Zhao; Changhai Nie
Efficient Key-Based Adversarial Defense for ImageNet by Using Pre-trained Model. (98%)AprilPyone MaungMaung; Isao Echizen; Hitoshi Kiya
RADAP: A Robust and Adaptive Defense Against Diverse Adversarial Patches on Face Recognition. (92%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
STR-Cert: Robustness Certification for Deep Text Recognition on Deep Learning Pipelines and Vision Transformers. (26%)Daqian Shao; Lukas Fesser; Marta Kwiatkowska
1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness. (13%)Bernd Prach; Fabio Brau; Giorgio Buttazzo; Christoph H. Lampert
Scalable Extraction of Training Data from (Production) Language Models. (10%)Milad Nasr; Nicholas Carlini; Jonathan Hayase; Matthew Jagielski; A. Feder Cooper; Daphne Ippolito; Christopher A. Choquette-Choo; Eric Wallace; Florian Tramèr; Katherine Lee
Cooperative Abnormal Node Detection with Adversary Resistance. (10%)Yingying Huangfu; Tian Bai
On robust overfitting: adversarial training induced distribution matters. (1%)Runzhi Tian; Yongyi Mao
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations. (1%)Maximilian Dreyer; Reduan Achtibat; Wojciech Samek; Sebastian Lapuschkin
Shadows Don't Lie and Lines Can't Bend! Generative Models don't know Projective Geometry...for now. (1%)Ayush Sarkar; Hanlin Mai; Amitabh Mahapatra; Svetlana Lazebnik; D. A. Forsyth; Anand Bhattad
Enhancing Cyber-Resilience in Integrated Energy System Scheduling with Demand Response Using Deep Reinforcement Learning. (1%)Yang Li; Wenjie Ma; Yuanzheng Li; Sen Li; Zhe Chen; Mohammad Shahidehpor
2023-11-27
RetouchUAA: Unconstrained Adversarial Attack via Image Retouching. (99%)Mengda Xie; Yiling He; Meie Fang
Adversaral Doodles: Interpretable and Human-drawable Attacks Provide Describable Insights. (99%)Ryoya Nara; Yusuke Matsui
Rethinking Mixup for Improving the Adversarial Transferability. (98%)Xiaosen Wang; Zeyuan Yin
Instruct2Attack: Language-Guided Semantic Adversarial Attacks. (98%)Jiang Liu; Chen Wei; Yuxiang Guo; Heng Yu; Alan Yuille; Soheil Feizi; Chun Pong Lau; Rama Chellappa
CLAP: Contrastive Learning with Augmented Prompts for Robustness on Pretrained Vision-Language Models. (95%)Yichao Cai; Yuhang Liu; Zhen Zhang; Javen Qinfeng Shi
A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective. (50%)Xianghua Xie; Chen Hu; Hanchi Ren; Jingjing Deng
Threshold Breaker: Can Counter-Based RowHammer Prevention Mechanisms Truly Safeguard DRAM? (31%)Ranyang Zhou; Jacqueline Liu; Sabbir Ahmed; Nakul Kochar; Adnan Siraj Rakin; Shaahin Angizi
Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift. (31%)Shengwei An; Sheng-Yen Chou; Kaiyuan Zhang; Qiuling Xu; Guanhong Tao; Guangyu Shen; Siyuan Cheng; Shiqing Ma; Pin-Yu Chen; Tsung-Yi Ho; Xiangyu Zhang
Distributed Attacks over Federated Reinforcement Learning-enabled Cell Sleep Control. (22%)Han Zhang; Hao Zhou; Medhat Elsayed; Majid Bavand; Raimundas Gaigalas; Yigit Ozcan; Melike Erol-Kantarci
"Do Users fall for Real Adversarial Phishing?" Investigating the Human response to Evasive Webpages. (15%)Ajka Draganovic; Savino Dambra; Javier Aldana Iuit; Kevin Roundy; Giovanni Apruzzese
How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs. (12%)Haoqin Tu; Chenhang Cui; Zijun Wang; Yiyang Zhou; Bingchen Zhao; Junlin Han; Wangchunshu Zhou; Huaxiu Yao; Cihang Xie
Microarchitectural Security of AWS Firecracker VMM for Serverless Cloud Platforms. (1%)Zane Weissman (Worcester Polytechnic Institute); Thore Tiemann (University of Lübeck); Thomas Eisenbarth (University of Lübeck); Berk Sunar (Worcester Polytechnic Institute)
2023-11-26
Adversarial Purification of Information Masking. (99%)Sitong Liu; Zhichao Lian; Shuangquan Zhang; Liang Xiao
Having Second Thoughts? Let's hear it. (56%)Jung H. Lee; Sujith Vijayan
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP. (13%)Jiawang Bai; Kuofeng Gao; Shaobo Min; Shu-Tao Xia; Zhifeng Li; Wei Liu
Confidence Is All You Need for MI Attacks. (2%)Abhishek Sinha; Himanshi Tibrewal; Mansi Gupta; Nikhar Waghela; Shivank Garg
2023-11-25
Mixing Classifiers to Alleviate the Accuracy-Robustness Trade-Off. (68%)Yatong Bai; Brendon G. Anderson; Somayeh Sojoudi
Robust Graph Neural Networks via Unbiased Aggregation. (12%)Ruiqi Feng; Zhichao Hou; Tyler Derr; Xiaorui Liu
Effective Backdoor Mitigation Depends on the Pre-training Objective. (10%)Sahil Verma; Gantavya Bhatt; Avi Schwarzschild; Soumye Singhal; Arnav Mohanty Das; Chirag Shah; John P Dickerson; Jeff Bilmes
2023-11-24
Trainwreck: A damaging adversarial attack on image classifiers. (99%)Jan Zahálka
Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on Segmentation Models. (96%)Francesco Croce; Matthias Hein
Universal Jailbreak Backdoors from Poisoned Human Feedback. (1%)Javier Rando; Florian Tramèr
2023-11-23
When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence. (99%)Benoit Coqueret; Mathieu Carbone; Olivier Sentieys; Gabriel Zaid
Adversarial defense based on distribution transfer. (99%)Jiahao Chen; Diqun Yan; Li Dong
Robust and Interpretable COVID-19 Diagnosis on Chest X-ray Images using Adversarial Training. (68%)Karina Yang; Alexis Bennett; Dominique Duncan
Presentation Attack Detection using Convolutional Neural Networks and Local Binary Patterns. (1%)Justin Spencer; Deborah Lawrence; Prosenjit Chatterjee; Kaushik Roy; Albert Esterline; Jung-Hee Kim
2023-11-22
A Survey of Adversarial CAPTCHAs on its History, Classification and Generation. (99%)Zisheng Xu; Qiao Yan; F. Richard Yu; Victor C. M. Leung
Transfer Attacks and Defenses for Large Language Models on Coding Tasks. (99%)Chi Zhang; Zifan Wang; Ravi Mangal; Matt Fredrikson; Limin Jia; Corina Pasareanu
Panda or not Panda? Understanding Adversarial Attacks with Interactive Visualization. (98%)Yuzhe You; Jarvis Tse; Jian Zhao
Hard Label Black Box Node Injection Attack on Graph Neural Networks. (93%)Yu Zhou; Zihao Dong; Guofeng Zhang; Jingchen Tang
Security and Privacy Challenges in Deep Learning Models. (74%)Gopichandh Golla
A Somewhat Robust Image Watermark against Diffusion-based Editing Models. (50%)Mingtian Tan; Tianhao Wang; Somesh Jha
OASIS: Offsetting Active Reconstruction Attacks in Federated Learning. (15%)Tre' R. Jeter; Truc Nguyen; Raed Alharbi; My T. Thai
Unified Classification and Rejection: A One-versus-All Framework. (1%)Zhen Cheng; Xu-Yao Zhang; Cheng-Lin Liu
2023-11-21
SD-NAE: Generating Natural Adversarial Examples with Stable Diffusion. (96%)Yueqian Lin; Jingyang Zhang; Yiran Chen; Hai Li
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise. (96%)Yixin Liu; Kaidi Xu; Xun Chen; Lichao Sun
Attention Deficit is Ordered! Fooling Deformable Vision Transformers with Collaborative Adversarial Patches. (75%)Quazi Mishkatul Alam; Bilel Tarchoun; Ihsen Alouani; Nael Abu-Ghazaleh
Attacking Motion Planners Using Adversarial Perception Errors. (69%)Jonathan Sadeghi; Nicholas A. Lord; John Redford; Romain Mueller
Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis. (62%)Yixin Liu; Chenrui Fan; Yutong Dai; Xun Chen; Pan Zhou; Lichao Sun
Iris Presentation Attack: Assessing the Impact of Combining Vanadium Dioxide Films with Artificial Eyes. (1%)Darshika Jauhari; Renu Sharma; Cunjian Chen; Nelson Sepulveda; Arun Ross
2023-11-20
ODDR: Outlier Detection & Dimension Reduction Based Defense Against Adversarial Patches. (99%)Nandish Chattopadhyay; Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammad Shafique
DefensiveDR: Defending against Adversarial Patches using Dimensionality Reduction. (99%)Nandish Chattopadhyay; Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammad Shafique
Generating Valid and Natural Adversarial Examples with Large Language Models. (99%)Zimu Wang; Wei Wang; Qi Chen; Qiufeng Wang; Anh Nguyen
AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems. (99%)Sai Amrit Patnaik; Shivali Chansoriya; Anil K. Jain; Anoop M. Namboodiri
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems. (50%)Guangjing Wang; Ce Zhou; Yuanda Wang; Bocheng Chen; Hanqing Guo; Qiben Yan
Understanding Variation in Subpopulation Susceptibility to Poisoning Attacks. (15%)Evan Rose; Fnu Suya; David Evans
Training robust and generalizable quantum models. (10%)Julian Berberich; Daniel Fink; Daniel Pranjić; Christian Tutschku; Christian Holm
BrainWash: A Poisoning Attack to Forget in Continual Learning. (4%)Ali Abbasi; Parsa Nooralinejad; Hamed Pirsiavash; Soheil Kolouri
2023-11-19
Adversarial Prompt Tuning for Vision-Language Models. (98%)Jiaming Zhang; Xingjun Ma; Xin Wang; Lingyu Qiu; Jiaqi Wang; Yu-Gang Jiang; Jitao Sang
Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information. (78%)Zhengmian Hu; Gang Wu; Saayan Mitra; Ruiyi Zhang; Tong Sun; Heng Huang; Viswanathan Swaminathan
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning. (69%)Siyuan Liang; Mingli Zhu; Aishan Liu; Baoyuan Wu; Xiaochun Cao; Ee-Chien Chang
EditShield: Protecting Unauthorized Image Editing by Instruction-guided Diffusion Models. (10%)Ruoxi Chen; Haibo Jin; Jinyin Chen; Lichao Sun
2023-11-18
Boost Adversarial Transferability by Uniform Scale and Mix Mask Method. (99%)Tao Wang; Zijian Ying; Qianmu Li; Zhichao Lian
Improving Adversarial Transferability by Stable Diffusion. (99%)Jiayang Liu; Siyu Zhu; Siyuan Liang; Jie Zhang; Han Fang; Weiming Zhang; Ee-Chien Chang
Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications. (92%)Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
TextGuard: Provable Defense against Backdoor Attacks on Text Classification. (82%)Hengzhi Pei; Jinyuan Jia; Wenbo Guo; Bo Li; Dawn Song
Robust Network Slicing: Multi-Agent Policies, Adversarial Attacks, and Defensive Strategies. (1%)Feng Wang; M. Cenk Gursoy; Senem Velipasalar
2023-11-17
Breaking Temporal Consistency: Generating Video Universal Adversarial Perturbations Using Image Models. (97%)Hee-Seon Kim; Minji Son; Minbeom Kim; Myung-Joon Kwon; Changick Kim
PACOL: Poisoning Attacks Against Continual Learners. (93%)Huayu Li; Gregory Ditzler
Two-Factor Authentication Approach Based on Behavior Patterns for Defeating Puppet Attacks. (1%)Wenhao Wang; Guyue Li; Zhiming Chu; Haobo Li; Daniele Faccio
2023-11-16
Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting. (99%)Romain Ilbert; Thai V. Hoang; Zonghua Zhang; Themis Palpanas
Hijacking Large Language Models via Adversarial In-Context Learning. (92%)Yao Qiang; Xiangyu Zhou; Dongxiao Zhu
Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking. (54%)Nan Xu; Fei Wang; Ben Zhou; Bang Zheng Li; Chaowei Xiao; Muhao Chen
Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations. (38%)Wenjie Mo; Jiashu Xu; Qin Liu; Jiongxiao Wang; Jun Yan; Chaowei Xiao; Muhao Chen
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models. (16%)Jiongxiao Wang; Junlin Wu; Muhao Chen; Yevgeniy Vorobeychik; Chaowei Xiao
Towards Improving Robustness Against Common Corruptions using Mixture of Class Specific Experts. (2%)Shashank Kotyan; Danilo Vasconcellos Vargas
Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities. (2%)Avishree Khare; Saikat Dutta; Ziyang Li; Alaia Solko-Breslin; Rajeev Alur; Mayur Naik
Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework. (2%)Matthew Pisano; Peter Ly; Abraham Sanders; Bingsheng Yao; Dakuo Wang; Tomek Strzalkowski; Mei Si
Towards more Practical Threat Models in Artificial Intelligence Security. (2%)Kathrin Grosse; Lukas Bieringer; Tarek Richard Besold; Alexandre Alahi
You Cannot Escape Me: Detecting Evasions of SIEM Rules in Enterprise Networks. (1%)Rafael Uetz; Marco Herzog; Louis Hackländer; Simon Schwarz; Martin Henze
2023-11-15
Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts. (99%)Yuanwei Wu; Xiang Li; Yixin Liu; Pan Zhou; Lichao Sun
Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment. (74%)Haoran Wang; Kai Shu
Fast Certification of Vision-Language Models Using Incremental Randomized Smoothing. (64%)A. K. Nirala (Iowa State University); A. Joshi (New York University); C. Hegde (New York University); S. Sarkar (Iowa State University)
Adversarially Robust Spiking Neural Networks Through Conversion. (61%)Ozan Özdenizci; Robert Legenstein
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities. (16%)Lingbo Mo; Boshi Wang; Muhao Chen; Huan Sun
Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization. (16%)Zhexin Zhang; Junxiao Yang; Pei Ke; Fei Mi; Hongning Wang; Minlie Huang
Privacy Threats in Stable Diffusion Models. (13%)Thomas Cilloni; Charles Fleming; Charles Walter
MirrorNet: A TEE-Friendly Framework for Secure On-device DNN Inference. (2%)Ziyu Liu; Yukui Luo; Shijin Duan; Tong Zhou; Xiaolin Xu
JAB: Joint Adversarial Prompting and Belief Augmentation. (1%)Ninareh Mehrabi; Palash Goyal; Anil Ramakrishna; Jwala Dhamala; Shalini Ghosh; Richard Zemel; Kai-Wei Chang; Aram Galstyan; Rahul Gupta
Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models. (1%)Yueqing Liang; Lu Cheng; Ali Payani; Kai Shu
2023-11-14
Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning. (99%)Shashank Kotyan; Danilo Vasconcellos Vargas
Physical Adversarial Examples for Multi-Camera Systems. (99%)Ana Răduţoiu; Jan-Philipp Schulze; Philip Sperl; Konstantin Böttinger
DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models. (99%)Yibo Wang; Xiangjue Dong; James Caverlee; Philip S. Yu
On The Relationship Between Universal Adversarial Attacks And Sparse Representations. (98%)Dana Weitzner; Raja Giryes
A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily. (62%)Peng Ding; Jun Kuang; Dan Ma; Xuezhi Cao; Yunsen Xian; Jiajun Chen; Shujian Huang
Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets. (26%)Vatsal Gupta; Pranshu Pandya; Tushar Kataria; Vivek Gupta; Dan Roth
The Perception-Robustness Tradeoff in Deterministic Image Restoration. (1%)Guy Ohayon; Tomer Michaeli; Michael Elad
2023-11-13
Adversarial Purification for Data-Driven Power System Event Classifiers with Diffusion Models. (99%)Yuanbin Cheng; Koji Yamashita; Jim Follum; Nanpeng Yu
Parrot-Trained Adversarial Examples: Pushing the Practicality of Black-Box Audio Attacks against Speaker Recognition Models. (99%)Rui Duan; Zhe Qu; Leah Ding; Yao Liu; Zhuo Lu
An Extensive Study on Adversarial Attack against Pre-trained Models of Code. (99%)Xiaohu Du; Ming Wen; Zichao Wei; Shangwen Wang; Hai Jin
Multi-agent Attacks for Black-box Social Recommendations. (96%)Wenqi Fan; Shijie Wang; Xiao-yong Wei; Xiaowei Mei; Shanru Lin; Qing Li
On the Robustness of Neural Collapse and the Neural Collapse of Robustness. (87%)Jingtong Su; Ya Shi Zhang; Nikolaos Tsilivis; Julia Kempe
Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data. (70%)Bart Pleiter; Behrad Tajalli; Stefanos Koffas; Gorka Abad; Jing Xu; Martha Larson; Stjepan Picek
2023-11-12
Learning Globally Optimized Language Structure via Adversarial Training. (83%)Xuwang Yin
Resilient Graph Neural Networks: A Coupled Dynamical Systems Approach. (70%)Moshe Eliasof; Davide Murari; Ferdia Sherry; Carola-Bibiane Schönlieb
Analytical Verification of Deep Neural Network Performance for Time-Synchronized Distribution System State Estimation. (5%)Behrouz Azimian; Shiva Moshtagh; Anamitra Pal; Shanshan Ma
DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training. (1%)Kanta Kaneda; Ryosuke Korekata; Yuiga Wada; Shunya Nagashima; Motonari Kambara; Yui Iioka; Haruka Matsuo; Yuto Imai; Takayuki Nishimura; Komei Sugiura
2023-11-11
Robust Text Classification: Analyzing Prototype-Based Networks. (97%)Zhivar Sourati; Darshan Deshpande; Filip Ilievski; Kiril Gashteovski; Sascha Saralajew
2023-11-10
Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous. (99%)Ziwei Wang; Nabil Aouf; Jose Pizarro; Christophe Honvault
Fight Fire with Fire: Combating Adversarial Patch Attacks using Pattern-randomized Defensive Patches. (99%)Jianan Feng; Jiachun Li; Changqing Miao; Jianjun Huang; Wei You; Wenchang Shi; Bin Liang
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness. (99%)Mingyuan Fan; Xiaodan Li; Cen Chen; Wenmeng Zhou; Yaliang Li
Resilient and constrained consensus against adversarial attacks: A distributed MPC framework. (84%)Henglai Wei; Kunwu Zhang; Hui Zhang; Yang Shi
CALLOC: Curriculum Adversarial Learning for Secure and Robust Indoor Localization. (1%)Danish Gufran; Sudeep Pasricha
Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration. (1%)Wenjie Fu; Huandong Wang; Chen Gao; Guanghua Liu; Yong Li; Tao Jiang
2023-11-09
ABIGX: A Unified Framework for eXplainable Fault Detection and Classification. (68%)Yue Zhuo; Jinchuan Qian; Zhihuan Song; Zhiqiang Ge
Honest Score Client Selection Scheme: Preventing Federated Learning Label Flipping Attacks in Non-IID Scenarios. (50%)Yanli Li; Huaming Chen; Wei Bao; Zhengmeng Xu; Dong Yuan
Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction. (15%)Shanghao Shi; Ning Wang; Yang Xiao; Chaoyu Zhang; Yi Shi; Y. Thomas Hou; Wenjing Lou
FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts. (1%)Yichen Gong; Delong Ran; Jinyuan Liu; Conglei Wang; Tianshuo Cong; Anyu Wang; Sisi Duan; Xiaoyun Wang
FireMatch: A Semi-Supervised Video Fire Detection Network Based on Consistency and Distribution Alignment. (1%)Qinghua Lin; Zuoyong Li; Kun Zeng; Haoyi Fan; Wei Li; Xiaoguang Zhou
2023-11-08
Constrained Adaptive Attacks: Realistic Evaluation of Adversarial Examples and Robust Training of Deep Neural Networks for Tabular Data. (99%)Thibault Simonetto; Salah Ghamizi; Antoine Desjardins; Maxime Cordy; Yves Le Traon
Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection. (70%)Akshit Jindal; Vikram Goyal; Saket Anand; Chetan Arora
Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5? (61%)C. Daniel Freeman; Laura Culp; Aaron Parisi; Maxwell L Bileschi; Gamaleldin F Elsayed; Alex Rizkowsky; Isabelle Simpson; Alex Alemi; Azade Nova; Ben Adlam; Bernd Bohnet; Gaurav Mishra; Hanie Sedghi; Igor Mordatch; Izzeddin Gur; Jaehoon Lee; JD Co-Reyes; Jeffrey Pennington; Kelvin Xu; Kevin Swersky; Kshiteej Mahajan; Lechao Xiao; Rosanne Liu; Simon Kornblith; Noah Constant; Peter J. Liu; Roman Novak; Yundi Qian; Noah Fiedel; Jascha Sohl-Dickstein
SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training. (10%)Rui Xu; Wenkang Qin; Peixiang Huang; Hao Wang; Lin Luo
Domain Adaptive Object Detection via Balancing Between Self-Training and Adversarial Learning. (1%)Muhammad Akhtar Munir; Muhammad Haris Khan; M. Saquib Sarfraz; Mohsen Ali
Counter-Empirical Attacking based on Adversarial Reinforcement Learning for Time-Relevant Scoring System. (1%)Xiangguo Sun; Hong Cheng; Hang Dong; Bo Qiao; Si Qin; Qingwei Lin
2023-11-07
Unveiling Safety Vulnerabilities of Large Language Models. (61%)George Kour; Marcel Zalmanovici; Naama Zwerdling; Esther Goldbraich; Ora Nova Fandina; Ateret Anaby-Tavor; Orna Raz; Eitan Farchi
When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers through Membership Inference Attacks. (10%)Huan Tian; Guangsheng Zhang; Bo Liu; Tianqing Zhu; Ming Ding; Wanlei Zhou
Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications. (2%)Fengqing Jiang; Zhangchen Xu; Luyao Niu; Boxin Wang; Jinyuan Jia; Bo Li; Radha Poovendran
SoK: Security Below the OS -- A Security Analysis of UEFI. (1%)Priyanka Prakash Surve; Oleg Brodt; Mark Yampolskiy; Yuval Elovici; Asaf Shabtai
Do LLMs exhibit human-like response biases? A case study in survey design. (1%)Lindia Tjuatja; Valerie Chen; Sherry Tongshuang Wu; Ameet Talwalkar; Graham Neubig
2023-11-06
Measuring Adversarial Datasets. (92%)Yuanchen Bai; Raoyi Huang; Vijay Viswanathan; Tzu-Sheng Kuo; Tongshuang Wu
Can LLMs Follow Simple Rules? (68%)Norman Mu; Sarah Chen; Zifan Wang; Sizhe Chen; David Karamardian; Lulwa Aljeraisy; Basel Alomair; Dan Hendrycks; David Wagner
Preserving Privacy in GANs Against Membership Inference Attack. (33%)Mohammadhadi Shateri; Francisco Messina; Fabrice Labeau; Pablo Piantanida
Cal-DETR: Calibrated Detection Transformer. (4%)Muhammad Akhtar Munir; Salman Khan; Muhammad Haris Khan; Mohsen Ali; Fahad Shahbaz Khan
2023-11-05
ELEGANT: Certified Defense on the Fairness of Graph Neural Networks. (10%)Yushun Dong; Binchi Zhang; Hanghang Tong; Jundong Li
2023-11-04
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models. (74%)Zhuoshi Pan; Yuguang Yao; Gaowen Liu; Bingquan Shen; H. Vicky Zhao; Ramana Rao Kompella; Sijia Liu
2023-11-03
Efficient Black-Box Adversarial Attacks on Neural Text Detectors. (22%)Vitalii Fishchuk; Daniel Braun
The Alignment Problem in Context. (2%)Raphaël Millière
2023-11-02
Adversary ML Resilience in Autonomous Driving Through Human Centered Perception Mechanisms. (99%)Aakriti Shah
Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly. (99%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game. (93%)Sam Toyer; Olivia Watkins; Ethan Adrian Mendes; Justin Svegliato; Luke Bailey; Tiffany Wang; Isaac Ong; Karim Elmaaroufi; Pieter Abbeel; Trevor Darrell; Alan Ritter; Stuart Russell
On the Lipschitz constant of random neural networks. (92%)Paul Geuchen; Thomas Heindl; Dominik Stöger; Felix Voigtlaender
Universal Perturbation-based Secret Key-Controlled Data Hiding. (80%)Donghua Wang; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models. (76%)Andy Zhou; Jindong Wang; Yu-Xiong Wang; Haohan Wang
Assist Is Just as Important as the Goal: Image Resurfacing to Aid Model's Robust Prediction. (13%)Abhijith Sharma; Phil Munz; Apurva Narayan
Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula. (12%)Aryaman Reddi; Maximilian Tölle; Jan Peters; Georgia Chalvatzaki; Carlo D'Eramo
Sequential Subset Matching for Dataset Distillation. (1%)Jiawei Du; Qin Shi; Joey Tianyi Zhou
E(2) Equivariant Neural Networks for Robust Galaxy Morphology Classification. (1%)Sneh Pandya; Purvik Patel; Franc O; Jonathan Blazek
Robust Identity Perceptual Watermark Against Deepfake Face Swapping. (1%)Tianyi Wang; Mengxiao Huang; Harry Cheng; Bin Ma; Yinglong Wang
2023-11-01
NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks. (99%)Seokil Ham; Jungwuk Park; Dong-Jun Han; Jaekyun Moon
Adversarial Examples in the Physical World: A Survey. (98%)Jiakai Wang; Donghua Wang; Jin Hu; Siyang Wu; Tingsong Jiang; Wen Yao; Aishan Liu; Xianglong Liu
Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems. (80%)Ziqing Lu; Guanlin Liu; Lifeng Lai; Weiyu Xu
Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation. (76%)Shashank Kotyan; Danilo Vasconcellos Vargas
MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training. (75%)Jiacheng Li; Ninghui Li; Bruno Ribeiro
Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks. (1%)Yichen Huang; Timothy Baldwin
Open-Set Face Recognition with Maximal Entropy and Objectosphere Loss. (1%)Rafael Henrique Vareto; Yu Linghu; Terrance E. Boult; William Robson Schwartz; Manuel Günther
2023-10-31
Amoeba: Circumventing ML-supported Network Censorship via Adversarial Reinforcement Learning. (99%)Haoyu Liu; Alec F. Diallo; Paul Patras
Robust Safety Classifier for Large Language Models: Adversarial Prompt Shield. (99%)Jinhwa Kim; Ali Derakhshan; Ian G. Harris
LFAA: Crafting Transferable Targeted Adversarial Examples with Low-Frequency Perturbations. (99%)Kunyu Wang; Juluan Shi; Wenxuan Wang
Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems. (98%)Jung-Woo Chang; Ke Sun; Nasimeh Heydaribeni; Seira Hidano; Xinyu Zhang; Farinaz Koushanfar
Is Robustness Transferable across Languages in Multilingual Neural Machine Translation? (26%)Leiyu Pan; Supryadi; Deyi Xiong
Dynamic Batch Norm Statistics Update for Natural Robustness. (22%)Shahbaz Rezaei; Mohammad Sadegh Norouzzadeh
In Search of Lost Online Test-time Adaptation: A Survey. (1%)Zixin Wang; Yadan Luo; Liang Zheng; Zhuoxiao Chen; Sen Wang; Zi Huang
2023-10-30
Label-Only Model Inversion Attacks via Knowledge Transfer. (83%)Ngoc-Bao Nguyen; Keshigeyan Chandrasegaran; Milad Abdollahzadeh; Ngai-Man Cheung
Exploring Geometry of Blind Spots in Vision Models. (83%)Sriram Balasubramanian; Gaurang Sriramanan; Vinu Sankar Sadasivan; Soheil Feizi
Adversarial Attacks and Defenses in Large Language Models: Old and New Threats. (74%)Leo Schwinn; David Dobre; Stephan Günnemann; Gauthier Gidel
Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models. (61%)Minxing Zhang; Ning Yu; Rui Wen; Michael Backes; Yang Zhang
Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness. (33%)Ahmad-Reza Ehyaei; Golnoosh Farnadi; Samira Samadi
Differentially Private Reward Estimation with Preference Feedback. (16%)Sayak Ray Chowdhury; Xingyu Zhou; Nagarajan Natarajan
Asymmetric Diffusion Based Channel-Adaptive Secure Wireless Semantic Communications. (10%)Xintian Ren; Jun Wu; Hansong Xu; Qianqian Pan
Privacy-Preserving Federated Learning over Vertically and Horizontally Partitioned Data for Financial Anomaly Detection. (1%)Swanand Ravindra Kadhe; Heiko Ludwig; Nathalie Baracaldo; Alan King; Yi Zhou; Keith Houck; Ambrish Rawat; Mark Purcell; Naoise Holohan; Mikio Takeuchi; Ryo Kawahara; Nir Drucker; Hayim Shaul; Eyal Kushnir; Omri Soceanu
2023-10-29
Blacksmith: Fast Adversarial Training of Vision Transformers via a Mixture of Single-step and Multi-step Methods. (99%)Mahdi Salmani; Alireza Dehghanpour Farashah; Mohammad Azizmalayeri; Mahdi Amiri; Navid Eslami; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
Boosting Decision-Based Black-Box Adversarial Attack with Gradient Priors. (98%)Han Liu; Xingshuo Huang; Xiaotong Zhang; Qimai Li; Fenglong Ma; Wei Wang; Hongyang Chen; Hong Yu; Xianchao Zhang
BERT Lost Patience Won't Be Robust to Adversarial Slowdown. (98%)Zachary Coalson; Gabriel Ritter; Rakesh Bobba; Sanghyun Hong
Adversarial Examples Are Not Real Features. (98%)Ang Li; Yifei Wang; Yiwen Guo; Yisen Wang
IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI. (82%)Bochuan Cao; Changjiang Li; Ting Wang; Jinyuan Jia; Bo Li; Jinghui Chen
Poisoning Retrieval Corpora by Injecting Adversarial Passages. (68%)Zexuan Zhong; Ziqing Huang; Alexander Wettig; Danqi Chen
Label Poisoning is All You Need. (54%)Rishi D. Jha; Jonathan Hayase; Sewoong Oh
Robustifying Language Models with Test-Time Adaptation. (47%)Noah Thomas McDermott; Junfeng Yang; Chengzhi Mao
Path Analysis for Effective Fault Localization in Deep Neural Networks. (1%)Soroush Hashemifar; Saeed Parsa; Akram Kalaee
From Chatbots to PhishBots? -- Preventing Phishing scams created using ChatGPT, Google Bard and Claude. (1%)Sayak Saha Roy; Poojitha Thota; Krishna Vamsi Naragam; Shirin Nilizadeh
2023-10-28
Assessing and Improving Syntactic Adversarial Robustness of Pre-trained Models for Code Translation. (92%)Guang Yang; Yu Zhou; Xiangyu Zhang; Xiang Chen; Tingting Han; Taolue Chen
Benchmark Generation Framework with Customizable Distortions for Image Classifier Robustness. (86%)Soumyendu Sarkar; Ashwin Ramesh Babu; Sajad Mousavi; Zachariah Carmichael; Vineet Gundecha; Sahand Ghorbanpour; Ricardo Luna Gutierrez; Antonio Guillen; Avisek Naug
Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness. (61%)Boya Zhang; Weijian Luo; Zhihua Zhang
Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers. (47%)Wencong You; Zayd Hammoudeh; Daniel Lowd
Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation. (10%)Kunlin Cai; Jinghuai Zhang; Will Shand; Zhiqing Hong; Guang Wang; Desheng Zhang; Jianfeng Chi; Yuan Tian
2023-10-27
DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification. (99%)Mintong Kang; Dawn Song; Bo Li
Understanding and Improving Ensemble Adversarial Defense. (99%)Yian Deng; Tingting Mu
LipSim: A Provably Robust Perceptual Similarity Metric. (45%)Sara Ghazanfari; Alexandre Araujo; Prashanth Krishnamurthy; Farshad Khorrami; Siddharth Garg
Elevating Code-mixed Text Handling through Auditory Information of Words. (5%)Mamta; Zishan Ahmad; Asif Ekbal
Understanding Parameter Saliency via Extreme Value Theory. (1%)Shuo Wang; Issei Sato
2023-10-26
Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks. (99%)Ehsan Nowroozi; Samaneh Ghelichkhani; Imran Haider; Ali Dehghantanha
A Survey on Transferability of Adversarial Examples across Deep Neural Networks. (99%)Jindong Gu; Xiaojun Jia; Pau de Jorge; Wenqain Yu; Xinwei Liu; Avery Ma; Yuan Xun; Anjun Hu; Ashkan Khakzar; Zhijiang Li; Xiaochun Cao; Philip Torr
Defending Against Transfer Attacks From Public Models. (99%)Chawin Sitawarin; Jaewon Chang; David Huang; Wesson Altoyan; David Wagner
Uncertainty-weighted Loss Functions for Improved Adversarial Attacks on Semantic Segmentation. (93%)Kira Maag; Asja Fischer
Detection Defenses: An Empty Promise against Adversarial Patch Attacks on Optical Flow. (93%)Erik Scheurer; Jenny Schmalfuss; Alexander Lis; Andrés Bruhn
CBD: A Certified Backdoor Detector Based on Local Dominant Probability. (76%)Zhen Xiang; Zidi Xiong; Bo Li
SoK: Pitfalls in Evaluating Black-Box Attacks. (76%)Fnu Suya; Anshuman Suri; Tingwei Zhang; Jingtao Hong; Yuan Tian; David Evans
Instability of computer vision models is a necessary result of the task itself. (26%)Oliver Turnbull; George Cevora
PAC-tuning:Fine-tuning Pretrained Language Models with PAC-driven Perturbed Gradient Descent. (1%)Guangliang Liu; Zhiyu Xue; Xitong Zhang; Kristen Marie Johnson; Rongrong Wang
A minimax optimal control approach for robust neural ODEs. (1%)Cristina Cipriani; Alessandro Scagliotti; Tobias Wöhrer
2023-10-25
Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks. (93%)Aradhana Sinha; Ananth Balashankar; Ahmad Beirami; Thi Avrahami; Jilin Chen; Alex Beutel
Trust, but Verify: Robust Image Segmentation using Deep Learning. (54%)Fahim Ahmed Zaman; Xiaodong Wu; Weiyu Xu; Milan Sonka; Raghuraman Mudumbai
Dual Defense: Adversarial, Traceable, and Invisible Robust Watermarking against Face Swapping. (26%)Yunming Zhang; Dengpan Ye; Caiyun Xie; Long Tang; Chuanxi Chen; Ziyi Liu; Jiacheng Deng
On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts. (22%)Yixin Wu; Ning Yu; Michael Backes; Yun Shen; Yang Zhang
Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs. (12%)Jianwei Fei; Zhihua Xia; Benedetta Tondi; Mauro Barni
Multi-scale Diffusion Denoised Smoothing. (1%)Jongheon Jeong; Jinwoo Shin
SparseDFF: Sparse-View Feature Distillation for One-Shot Dexterous Manipulation. (1%)Qianxu Wang; Haotong Zhang; Congyue Deng; Yang You; Hao Dong; Yixin Zhu; Leonidas Guibas
2023-10-24
Adversarial sample generation and training using geometric masks for accurate and resilient license plate character recognition. (99%)Bishal Shrestha; Griwan Khakurel; Kritika Simkhada; Badri Adhikari
RAEDiff: Denoising Diffusion Probabilistic Models Based Reversible Adversarial Examples Self-Generation and Self-Recovery. (92%)Fan Xing; Xiaoyi Zhou; Xuefeng Fan; Zhuo Tian; Yan Zhao
Defense Against Model Extraction Attacks on Recommender Systems. (92%)Sixiao Zhang; Hongzhi Yin; Hongxu Chen; Cheng Long
Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World. (89%)Zhiling Zhang; Jie Zhang; Kui Zhang; Wenbo Zhou; Weiming Zhang; Nenghai Yu
Hierarchical Randomized Smoothing. (75%)Yan Scholten; Jan Schuchardt; Aleksandar Bojchevski; Stephan Günnemann
Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks. (73%)Yang Chen; Stjepan Picek; Zhonglin Ye; Zhaoyang Wang; Haixing Zhao
Corrupting Neuron Explanations of Deep Visual Features. (41%)Divyansh Srivastava; Tuomas Oikarinen; Tsui-Wei Weng
Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles. (13%)Xing Shen; Hengguan Huang; Brennan Nichyporuk; Tal Arbel
Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers. (10%)Mosh Levy; Shauli Ravfogel; Yoav Goldberg
A Survey on Detection of LLMs-Generated Content. (1%)Xianjun Yang; Liangming Pan; Xuandong Zhao; Haifeng Chen; Linda Petzold; William Yang Wang; Wei Cheng
White-box Compiler Fuzzing Empowered by Large Language Models. (1%)Chenyuan Yang; Yinlin Deng; Runyu Lu; Jiayi Yao; Jiawei Liu; Reyhaneh Jabbarvand; Lingming Zhang
Enhancing Large Language Models for Secure Code Generation: A Dataset-driven Study on Vulnerability Mitigation. (1%)Jiexin Wang; Liuwen Cao; Xitong Luo; Zhiping Zhou; Jiayuan Xie; Adam Jatowt; Yi Cai
2023-10-23
Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval. (99%)Xu Yuan; Zheng Zhang; Xunguang Wang; Lin Wu
F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns. (99%)Yaguan Qian; Chenyu Zhao; Zhaoquan Gu; Bin Wang; Shouling Ji; Wei Wang; Boyang Zhou; Pan Zhou
AutoDAN: Automatic and Interpretable Adversarial Attacks on Large Language Models. (98%)Sicheng Zhu; Ruiyi Zhang; Bang An; Gang Wu; Joe Barrow; Zichao Wang; Furong Huang; Ani Nenkova; Tong Sun
Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks. (98%)Xiaojun Jia; Jianshu Li; Jindong Gu; Yang Bai; Xiaochun Cao
On the Detection of Image-Scaling Attacks in Machine Learning. (15%)Erwin Quiring; Andreas Müller; Konrad Rieck
Unleashing the potential of prompt engineering: a comprehensive review. (1%)Banghao Chen; Zhaofeng Zhang; Nicolas Langrené; Shengxin Zhu
RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions. (1%)Lingdong Kong; Shaoyuan Xie; Hanjiang Hu; Lai Xing Ng; Benoit R. Cottereau; Wei Tsang Ooi
Calibration of Time-Series Forecasting: Detecting and Adapting Context-Driven Distribution Shift. (1%)Mouxiang Chen; Lefei Shen; Han Fu; Zhuo Li; Jianling Sun; Chenghao Liu
The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks. (1%)Xiaoyi Chen; Siyuan Tang; Rui Zhu; Shijun Yan; Lei Jin; Zihao Wang; Liya Su; Zhikun Zhang; XiaoFeng Wang; Haixu Tang
2023-10-22
Diffusion-Based Adversarial Purification for Speaker Verification. (99%)Yibo Bai; Xiao-Lei Zhang
CT-GAT: Cross-Task Generative Adversarial Attack based on Transferability. (99%)Minxuan Lv; Chengwei Dai; Kun Li; Wei Zhou; Songlin Hu
Imperceptible CMOS camera dazzle for adversarial attacks on deep neural networks. (92%)Zvi Stein; Adrian Stern
ADoPT: LiDAR Spoofing Attack Detection Based on Point-Level Temporal Consistency. (26%)Minkyoung Cho; Yulong Cao; Zixiang Zhou; Z. Morley Mao
Attention-Enhancing Backdoor Attacks Against BERT-based Models. (13%)Weimin Lyu; Songzhu Zheng; Lu Pang; Haibin Ling; Chao Chen
MoPe: Model Perturbation-based Privacy Attacks on Language Models. (9%)Marvin Li; Jason Wang; Jeffrey Wang; Seth Neel
Reputation-Based Federated Learning Defense to Mitigate Threats in EEG Signal Classification. (1%)Zhibo Zhang; Pengfei Li; Ahmed Y. Al Hammadi; Fusen Guo; Ernesto Damiani; Chan Yeob Yeun
2023-10-21
Adversarial Image Generation by Spatial Transformation in Perceptual Colorspaces. (99%)Ayberk Aydin; Alptekin Temizel
Training Image Derivatives: Increased Accuracy and Universal Robustness. (5%)Vsevolod I. Avrutskiy
2023-10-20
Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting. (99%)Zecheng Tang; Kaifeng Qi; Juntao Li; Min Zhang
An LLM can Fool Itself: A Prompt-Based Adversarial Attack. (99%)Xilie Xu; Keyi Kong; Ning Liu; Lizhen Cui; Di Wang; Jingfeng Zhang; Mohan Kankanhalli
Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. (61%)Shawn Shan; Wenxin Ding; Josephine Passananti; Haitao Zheng; Ben Y. Zhao
The Hidden Adversarial Vulnerabilities of Medical Federated Learning. (45%)Erfan Darzi; Florian Dubost; Nanna M. Sijtsema; P. M. A. van Ooijen
Adversarial Attacks on Fairness of Graph Neural Networks. (26%)Binchi Zhang; Yushun Dong; Chen Chen; Yada Zhu; Minnan Luo; Jundong Li
FLTracer: Accurate Poisoning Attack Provenance in Federated Learning. (26%)Xinyu Zhang; Qingyu Liu; Zhongjie Ba; Yuan Hong; Tianhang Zheng; Feng Lin; Li Lu; Kui Ren
Can We Trust the Similarity Measurement in Federated Learning? (15%)Zhilin Wang; Qin Hu; Xukai Zou
Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images. (4%)Logan Frank; Jim Davis
VOICE-ZEUS: Impersonating Zoom's E2EE-Protected Static Media and Textual Communications via Simple Voice Manipulations. (4%)Mashari Alatawi; Nitesh Saxena
2023-10-19
Automatic Hallucination Assessment for Aligned Large Language Models via Transferable Adversarial Attacks. (98%)Xiaodong Yu; Hao Cheng; Xiaodong Liu; Dan Roth; Jianfeng Gao
Generating Robust Adversarial Examples against Online Social Networks (OSNs). (98%)Jun Liu; Jiantao Zhou; Haiwei Wu; Weiwei Sun; Jinyu Tian
Recoverable Privacy-Preserving Image Classification through Noise-like Adversarial Examples. (98%)Jun Liu; Jiantao Zhou; Jinyu Tian; Weiwei Sun
Learn from the Past: A Proxy based Adversarial Defense Framework to Boost Robustness. (98%)Yaohua Liu; Jiaxin Gao; Zhu Liu; Xianghao Jiao; Xin Fan; Risheng Liu
OODRobustBench: benchmarking and analyzing adversarial robustness under distribution shift. (97%)Lin Li; Yifei Wang; Chawin Sitawarin; Michael Spratling
PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses. (97%)Chong Xiang; Tong Wu; Sihui Dai; Jonathan Petit; Suman Jana; Prateek Mittal
Prompt Injection Attacks and Defenses in LLM-Integrated Applications. (47%)Yupei Liu; Yuqi Jia; Runpeng Geng; Jinyuan Jia; Neil Zhenqiang Gong
Attack Prompt Generation for Red Teaming and Defending Large Language Models. (15%)Boyi Deng; Wenjie Wang; Fuli Feng; Yang Deng; Qifan Wang; Xiangnan He
SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models. (5%)Boyang Zhang; Zheng Li; Ziqing Yang; Xinlei He; Michael Backes; Mario Fritz; Yang Zhang
To grok or not to grok: Disentangling generalization and memorization on corrupted algorithmic datasets. (1%)Darshil Doshi; Aritra Das; Tianyu He; Andrey Gromov
Detecting Shared Data Manipulation in Distributed Optimization Algorithms. (1%)Mohannad Alkhraijah; Rachel Harris; Samuel Litchfield; David Huggins; Daniel K. Molzahn
Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models. (1%)Jianwei Li; Qi Lei; Wei Cheng; Dongkuan Xu
2023-10-18
Exploring Decision-based Black-box Attacks on Face Forgery Detection. (99%)Zhaoyu Chen; Bo Li; Kaixun Jiang; Shuang Wu; Shouhong Ding; Wenqiang Zhang
Segment Anything Meets Universal Adversarial Perturbation. (99%)Dongshen Han; Sheng Zheng; Chaoning Zhang
IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks. (99%)Yue Cao; Tianlin Li; Xiaofeng Cao; Ivor Tsang; Yang Liu; Qing Guo
Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights. (99%)Zhengyu Zhao; Hanwei Zhang; Renjue Li; Ronan Sicre; Laurent Amsaleg; Michael Backes; Qi Li; Chao Shen
Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm. (99%)S. M. Fazle Rabby Labib; Joyanta Jyoti Mondal; Meem Arafat Manab; Sarfaraz Newaz; Xi Xiao
Malicious Agent Detection for Robust Multi-Agent Collaborative Perception. (87%)Yangheng Zhao; Zhen Xiang; Sheng Yin; Xianghe Pang; Siheng Chen; Yanfeng Wang
Black-Box Training Data Identification in GANs via Detector Networks. (82%)Lukman Olagoke; Salil Vadhan; Seth Neel
Adversarial Training for Physics-Informed Neural Networks. (81%)Yao Li; Shengzhu Shi; Zhichang Guo; Boying Wu
REVAMP: Automated Simulations of Adversarial Attacks on Arbitrary Objects in Realistic Scenes. (80%)Matthew Hull; Zijie J. Wang; Duen Horng Chau
Quantifying Privacy Risks of Prompts in Visual Prompt Learning. (76%)Yixin Wu; Rui Wen; Michael Backes; Pascal Berrang; Mathias Humbert; Yun Shen; Yang Zhang
To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now. (47%)Yimeng Zhang; Jinghan Jia; Xin Chen; Aochuan Chen; Yihua Zhang; Jiancheng Liu; Ke Ding; Sijia Liu
CAT: Closed-loop Adversarial Training for Safe End-to-End Driving. (2%)Linrui Zhang; Zhenghao Peng; Quanyi Li; Bolei Zhou
PrivInfer: Privacy-Preserving Inference for Black-box Large Language Model. (1%)Meng Tong; Kejiang Chen; Yuang Qi; Jie Zhang; Weiming Zhang; Nenghai Yu
2023-10-17
The Efficacy of Transformer-based Adversarial Attacks in Security Domains. (99%)Kunyang Li; Kyle Domico; Jean-Charles Noirot Ferrand; Patrick McDaniel
Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning. (93%)Taejin Kim; Jiarui Li; Shubhranshu Singh; Nikhil Madaan; Carlee Joe-Wong
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks. (15%)Jun Xia; Zhihao Yue; Yingbo Zhou; Zhiwei Ling; Xian Wei; Mingsong Chen
Generalizability of CNN Architectures for Face Morph Presentation Attack. (1%)Sherko R. HmaSalah; Aras Asaad
2023-10-16
Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks. (98%)Erfan Shayegani; Md Abdullah Al Mamun; Yu Fu; Pedram Zaree; Yue Dong; Nael Abu-Ghazaleh
Regularization properties of adversarially-trained linear regression. (92%)Antônio H. Ribeiro; Dave Zachariah; Francis Bach; Thomas B. Schön
Fast Adversarial Label-Flipping Attack on Tabular Data. (84%)Xinglong Chang; Gillian Dobbie; Jörg Wicker
A Non-monotonic Smooth Activation Function. (83%)Koushik Biswas; Meghana Karri; Ulaş Bağcı
Quantifying Assistive Robustness Via the Natural-Adversarial Frontier. (68%)Jerry Zhi-Yang He; Zackory Erickson; Daniel S. Brown; Anca D. Dragan
A Comprehensive Study of Privacy Risks in Curriculum Learning. (67%)Joann Qiongna Chen; Xinlei He; Zheng Li; Yang Zhang; Zhou Li
DANAA: Towards transferable attacks with double adversarial neuron attribution. (26%)Zhibo Jin; Zhiyu Zhu; Xinyi Wang; Jiayu Zhang; Jun Shen; Huaming Chen
Demystifying Poisoning Backdoor Attacks from a Statistical Perspective. (9%)Ganghua Wang; Xun Xian; Jayanth Srinivasa; Ashish Kundu; Xuan Bi; Mingyi Hong; Jie Ding
Prompt Packer: Deceiving LLMs through Compositional Instruction with Hidden Attacks. (4%)Shuyu Jiang; Xingshu Chen; Rui Tang
Robust Multi-Agent Reinforcement Learning via Adversarial Regularization: Theoretical Foundation and Stable Algorithms. (3%)Alexander Bukharin; Yan Li; Yue Yu; Qingru Zhang; Zhehui Chen; Simiao Zuo; Chao Zhang; Songan Zhang; Tuo Zhao
Passive Inference Attacks on Split Learning via Adversarial Regularization. (3%)Xiaochen Zhu; Xinjian Luo; Yuncheng Wu; Yangfan Jiang; Xiaokui Xiao; Beng Chin Ooi
On the Transferability of Learning Models for Semantic Segmentation for Remote Sensing Data. (2%)Rongjun Qin; Guixiang Zhang; Yang Tang
Orthogonal Uncertainty Representation of Data Manifold for Robust Long-Tailed Learning. (1%)Yanbiao Ma; Licheng Jiao; Fang Liu; Shuyuan Yang; Xu Liu; Lingling Li
Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts. (1%)Christina Chance; Da Yin; Dakuo Wang; Kai-Wei Chang
2023-10-15
Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning. (99%)Yulong Yang; Chenhao Lin; Xiang Ji; Qiwei Tian; Qian Li; Hongshan Yang; Zhibo Wang; Chao Shen
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack. (99%)Renyang Liu; Jinhong Zhang; Kwok-Yan Lam; Jun Zhao; Wei Zhou
AFLOW: Developing Adversarial Examples under Extremely Noise-limited Settings. (99%)Renyang Liu; Jinhong Zhang; Haoran Li; Jin Zhang; Yuanyu Wang; Wei Zhou
Black-box Targeted Adversarial Attack on Segment Anything (SAM). (99%)Sheng Zheng; Chaoning Zhang; Xinhong Hao
Evading Detection Actively: Toward Anti-Forensics against Forgery Localization. (97%)Long Zhuo; Shenghai Luo; Shunquan Tan; Han Chen; Bin Li; Jiwu Huang
Explore the Effect of Data Selection on Poison Efficiency in Backdoor Attacks. (61%)Ziqiang Li; Pengfei Xia; Hong Sun; Yueqi Zeng; Wei Zhang; Bin Li
Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? (9%)Yu-Lin Tsai; Chia-Yi Hsu; Chulin Xie; Chih-Hsun Lin; Jia-You Chen; Bo Li; Pin-Yu Chen; Chia-Mu Yu; Chun-Ying Huang
VFLAIR: A Research Library and Benchmark for Vertical Federated Learning. (3%)Tianyuan Zou; Zixuan Gu; Yu He; Hideaki Takahashi; Yang Liu; Ya-Qin Zhang
2023-10-14
BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries. (98%)Wenjie Lv; Zhen Wang; Yitao Zheng; Zhehua Zhong; Qi Xuan; Tianyi Chen
2023-10-13
Is Certifying $\ell_p$ Robustness Still Worthwhile? (99%)Ravi Mangal; Klas Leino; Zifan Wang; Kai Hu; Weicheng Yu; Corina Pasareanu; Anupam Datta; Matt Fredrikson
User Inference Attacks on Large Language Models. (41%)Nikhil Kandpal; Krishna Pillutla; Alina Oprea; Peter Kairouz; Christopher A. Choquette-Choo; Zheng Xu
On the Over-Memorization During Natural, Robust and Catastrophic Overfitting. (1%)Runqi Lin; Chaojian Yu; Bo Han; Tongliang Liu
2023-10-12
Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks. (99%)Giorgio Piras; Maura Pintor; Ambra Demontis; Battista Biggio
Concealed Electronic Countermeasures of Radar Signal with Adversarial Examples. (93%)Ruinan Ma; Canjie Zhu; Mingfeng Lu; Yunjie Li; Yu-an Tan; Ruibin Zhang; Ran Tao
Attacks Meet Interpretability (AmI) Evaluation and Findings. (92%)Qian Ma; Ziping Ye; Shagufta Mehnaz
Provably Robust Cost-Sensitive Learning via Randomized Smoothing. (73%)Yuan Xin; Michael Backes; Xiao Zhang
Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization. (68%)Giuseppe Floris; Raffaele Mura; Luca Scionis; Giorgio Piras; Maura Pintor; Ambra Demontis; Battista Biggio
Fed-Safe: Securing Federated Learning in Healthcare Against Adversarial Attacks. (64%)Erfan Darzi; Nanna M. Sijtsema; P. M. A. van Ooijen
Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders. (31%)Jan Dubiński; Stanisław Pawlak; Franziska Boenisch; Tomasz Trzciński; Adam Dziedzic
Sentinel: An Aggregation Function to Secure Decentralized Federated Learning. (13%)Chao Feng; Alberto Huertas Celdran; Janosch Baltensperger; Enrique Tomas Martinez Beltran; Gerome Bovet; Burkhard Stiller
Defending Our Privacy With Backdoors. (10%)Dominik Hintersdorf; Lukas Struppek; Daniel Neider; Kristian Kersting
Investigating the Robustness and Properties of Detection Transformers (DETR) Toward Difficult Images. (9%)Zhao Ning Zou; Yuhang Zhang; Robert Wijaya
Polynomial Time Cryptanalytic Extraction of Neural Network Models. (3%)Adi Shamir; Isaac Canales-Martinez; Anna Hambitzer; Jorge Chavez-Saab; Francisco Rodriguez-Henriquez; Nitin Satpute
SEE-OoD: Supervised Exploration For Enhanced Out-of-Distribution Detection. (1%)Xiaoyang Song; Wenbo Sun; Maher Nouiehed; Raed Al Kontar; Judy Jin
XAI Benchmark for Visual Explanation. (1%)Yifei Zhang; Siyi Gu; James Song; Bo Pan; Liang Zhao
Jailbreaking Black Box Large Language Models in Twenty Queries. (1%)Patrick Chao; Alexander Robey; Edgar Dobriban; Hamed Hassani; George J. Pappas; Eric Wong
Voyager: MTD-Based Aggregation Protocol for Mitigating Poisoning Attacks on DFL. (1%)Chao Feng; Alberto Huertas Celdran; Michael Vuong; Gerome Bovet; Burkhard Stiller
2023-10-11
Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models. (99%)Renyang Liu; Wei Zhou; Tianwei Zhang; Kangjie Chen; Jun Zhao; Kwok-Yan Lam
Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches. (89%)Linbo Liu; Trong Nghia Hoang; Lam M. Nguyen; Tsui-Wei Weng
An Adversarial Example for Direct Logit Attribution: Memory Management in GELU-4L. (13%)Jett Janiak; Can Rager; James Dao; Yeu-Tong Lau
Prompt Backdoors in Visual Prompt Learning. (11%)Hai Huang; Zhengyu Zhao; Michael Backes; Yun Shen; Yang Zhang
Why Train More? Effective and Efficient Membership Inference via Memorization. (10%)Jihye Choi; Shruti Tople; Varun Chandrasekaran; Somesh Jha
Towards Causal Deep Learning for Vulnerability Detection. (4%)Md Mahbubur Rahman; Ira Ceka; Chengzhi Mao; Saikat Chakraborty; Baishakhi Ray; Wei Le
Deep Reinforcement Learning for Autonomous Cyber Defence: A Survey. (4%)Gregory Palmer; Chris Parry; Daniel J. B. Harrold; Chris Willis
2023-10-10
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks. (99%)Yang Wang; Bo Dong; Ke Xu; Haiyin Piao; Yufei Ding; Baocai Yin; Xin Yang
My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection. (98%)Lanjun Wang; Xinran Qiao; Yanwei Xie; Weizhi Nie; Yongdong Zhang; Anan Liu
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach. (83%)Kai Zhao; Qiyu Kang; Yang Song; Rui She; Sijie Wang; Wee Peng Tay
Adversarial optimization leads to over-optimistic security-constrained dispatch, but sampling can help. (76%)Charles Dawson; Chuchu Fan
No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML. (62%)Ziqi Zhang; Chen Gong; Yifeng Cai; Yuanyuan Yuan; Bingyan Liu; Ding Li; Yao Guo; Xiangqun Chen
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks. (47%)Anastasia Antsiferova; Khaled Abud; Aleksandr Gushchin; Ekaterina Shumitskaya; Sergey Lavrushkin; Dmitriy Vatolin
GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation. (22%)Yixin Liu; Chenrui Fan; Xun Chen; Pan Zhou; Lichao Sun
Latent Diffusion Counterfactual Explanations. (5%)Karim Farid; Simon Schrodi; Max Argus; Thomas Brox
FTFT: efficient and robust Fine-Tuning by transFerring Training dynamics. (2%)Yupei Du; Albert Gatt; Dong Nguyen
Investigating the Adversarial Robustness of Density Estimation Using the Probability Flow ODE. (2%)Marius Arvinte; Cory Cornelius; Jason Martin; Nageen Himayat
Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations. (1%)Zeming Wei; Yifei Wang; Yisen Wang
2023-10-09
PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization. (92%)Jiancong Xiao; Ruoyu Sun; Zhi-Quan Luo
Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand. (22%)Junfeng Guo; Yiming Li; Lixu Wang; Shu-Tao Xia; Heng Huang; Cong Liu; Bo Li
Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach. (5%)Shaopeng Fu; Di Wang
Exploring adversarial attacks in federated learning for medical imaging. (2%)Erfan Darzi; Florian Dubost; N. M. Sijtsema; P. M. A. van Ooijen
2023-10-08
GReAT: A Graph Regularized Adversarial Training Method. (99%)Samet Bayram; Kenneth Barner
An Initial Investigation of Neural Replay Simulator for Over-the-Air Adversarial Perturbations to Automatic Speaker Verification. (99%)Jiaqi Li; Li Wang; Liumeng Xue; Lei Wang; Zhizheng Wu
AdvSV: An Over-the-Air Adversarial Attack Dataset for Speaker Verification. (96%)Li Wang; Jiaqi Li; Yuhao Luo; Jiahao Zheng; Lei Wang; Hao Li; Ke Xu; Chengfang Fang; Jie Shi; Zhizheng Wu
Transferable Availability Poisoning Attacks. (83%)Yiyong Liu; Michael Backes; Xiao Zhang
BRAINTEASER: Lateral Thinking Puzzles for Large Language Models. (26%)Yifan Jiang; Filip Ilievski; Kaixin Ma; Zhivar Sourati
Stealthy Backdoor Attack via Confidence-driven Sampling. (10%)Pengfei He; Yue Xing; Han Xu; Jie Ren; Yingqian Cui; Shenglai Zeng; Jiliang Tang; Makoto Yamada; Mohammad Sabokrou
Adversarial Attacks on Combinatorial Multi-Armed Bandits. (5%)Rishab Balasubramanian; Jiawei Li; Prasad Tadepalli; Huazheng Wang; Qingyun Wu; Haoyu Zhao
2023-10-07
Improving Adversarial Attacks on Latent Diffusion Model. (99%)Boyang Zheng; Chumeng Liang; Xiaoyu Wu; Yan Liu
IPMix: Label-Preserving Data Augmentation Method for Training Robust Classifiers. (76%)Zhenglin Huang; Xiaoan Bao; Na Zhang; Qingqi Zhang; Xiaomei Tu; Biao Wu; Xi Yang
Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line. (1%)Eungyeup Kim; Mingjie Sun; Christina Baek; Aditi Raghunathan; J. Zico Kolter
2023-10-06
VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models. (98%)Ziyi Yin; Muchao Ye; Tianrong Zhang; Tianyu Du; Jinguo Zhu; Han Liu; Jinghui Chen; Ting Wang; Fenglong Ma
Generating Less Certain Adversarial Examples Improves Robust Generalization. (98%)Minxing Zhang; Michael Backes; Xiao Zhang
Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification. (84%)Shanshan Han; Wenxuan Wu; Baturalp Buyukates; Weizhao Jin; Qifan Zhang; Yuhang Yao; Salman Avestimehr; Chaoyang He
2023-10-05
OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks. (99%)Ofir Bar Tal; Adi Haviv; Amit H. Bermano
Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System. (99%)Khushnaseeb Roshan; Aasim Zafar; Sheikh Burhan Ul Haque
Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria. (99%)Nuoyan Zhou; Nannan Wang; Decheng Liu; Dawei Zhou; Xinbo Gao
An Integrated Algorithm for Robust and Imperceptible Audio Adversarial Examples. (98%)Armin Ettenhofer; Jan-Philipp Schulze; Karla Pizzi
Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally. (98%)Shawqi Al-Maliki; Adnan Qayyum; Hassan Ali; Mohamed Abdallah; Junaid Qadir; Dinh Thai Hoang; Dusit Niyato; Ala Al-Fuqaha
SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks. (92%)Alexander Robey; Eric Wong; Hamed Hassani; George J. Pappas
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks. (64%)Wenhan Yang; Jingdong Gao; Baharan Mirzasoleiman
Targeted Adversarial Attacks on Generalizable Neural Radiance Fields. (56%)Andras Horvath; Csaba M. Jozsa
Certification of Deep Learning Models for Medical Image Segmentation. (15%)Othmane Laousy; Alexandre Araujo; Guillaume Chassagnon; Nikos Paragios; Marie-Pierre Revel; Maria Vakalopoulou
Certifiably Robust Graph Contrastive Learning. (5%)Minhua Lin; Teng Xiao; Enyan Dai; Xiang Zhang; Suhang Wang
Towards Robust and Generalizable Training: An Empirical Study of Noisy Slot Filling for Input Perturbations. (2%)Jiachi Liu; Liwen Wang; Guanting Dong; Xiaoshuai Song; Zechen Wang; Zhengyang Wang; Shanglin Lei; Jinzheng Zhao; Keqing He; Bo Xiao; Weiran Xu
2023-10-04
Optimizing Key-Selection for Face-based One-Time Biometrics via Morphing. (98%)Daile Osorio-Roig; Mahdi Ghafourian; Christian Rathgeb; Ruben Vera-Rodriguez; Christoph Busch; Julian Fierrez
Misusing Tools in Large Language Models With Visual Adversarial Examples. (97%)Xiaohan Fu; Zihan Wang; Shuheng Li; Rajesh K. Gupta; Niloofar Mireshghallah; Taylor Berg-Kirkpatrick; Earlence Fernandes
Burning the Adversarial Bridges: Robust Windows Malware Detection Against Binary-level Mutations. (82%)Ahmed Abusnaina; Yizhen Wang; Sunpreet Arora; Ke Wang; Mihai Christodorescu; David Mohaisen
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors. (81%)Biagio Montaruli; Luca Demetrio; Maura Pintor; Luca Compagna; Davide Balzarotti; Battista Biggio
Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation. (10%)Yihan Wu; Brandon Y. Feng; Heng Huang
2023-10-03
Splitting the Difference on Adversarial Training. (99%)Matan Levi; Aryeh Kontorovich
DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training. (97%)Aochuan Chen; Yimeng Zhang; Jinghan Jia; James Diffenderfer; Jiancheng Liu; Konstantinos Parasyris; Yihua Zhang; Zheng Zhang; Bhavya Kailkhura; Sijia Liu
SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers. (86%)KL Navaneet; Soroush Abbasi Koohpayegani; Essam Sleiman; Hamed Pirsiavash
Towards Stable Backdoor Purification through Feature Shift Tuning. (83%)Rui Min; Zeyu Qin; Li Shen; Minhao Cheng
Jailbreaker in Jail: Moving Target Defense for Large Language Models. (73%)Bocheng Chen; Advait Paliwal; Qiben Yan
AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models. (56%)Xiaogeng Liu; Nan Xu; Muhao Chen; Chaowei Xiao
Beyond Labeling Oracles: What does it mean to steal ML models? (47%)Avital Shafran; Ilia Shumailov; Murat A. Erdogdu; Nicolas Papernot
A Recipe for Improved Certifiable Robustness. (22%)Kai Hu; Klas Leino; Zifan Wang; Matt Fredrikson
Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness. (13%)Yanzhao Wu; Ka-Ho Chow; Wenqi Wei; Ling Liu
FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks. (11%)Jorge Castillo; Phillip Rieger; Hossein Fereidooni; Qian Chen; Ahmad Sadeghi
AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework. (3%)Xilie Xu; Jingfeng Zhang; Mohan Kankanhalli
2023-10-02
Fooling the Textual Fooler via Randomizing Latent Representations. (99%)Duy C. Hoang; Quang H. Nguyen; Saurav Manchanda; MinLong Peng; Kok-Seng Wong; Khoa D. Doan
LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples. (93%)Jia-Yu Yao; Kun-Peng Ning; Zhen-Hui Liu; Mu-Nan Ning; Li Yuan
Adversarial Client Detection via Non-parametric Subspace Monitoring in the Internet of Federated Things. (92%)Xianjian Xie; Xiaochen Xian; Dan Li; Andi Wang
LoFT: Local Proxy Fine-tuning For Improving Transferability Of Adversarial Attacks Against Large Language Model. (87%)Muhammad Ahmed Shah; Roshan Sharma; Hira Dhamyal; Raphael Olivier; Ankit Shah; Joseph Konan; Dareen Alharthi; Hazim T Bukhari; Massa Baali; Soham Deshmukh; Michael Kuhlmann; Bhiksha Raj; Rita Singh
Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models. (13%)Zhou Yang; Zhipeng Zhao; Chenyu Wang; Jieke Shi; Dongsun Kim; Donggyun Han; David Lo
Toward effective protection against diffusion based mimicry through score distillation. (3%)Haotian Xue; Chumeng Liang; Xiaoyu Wu; Yongxin Chen
Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations. (1%)Yongshuo Zong; Tingyang Yu; Bingchen Zhao; Ruchika Chavhan; Timothy Hospedales
2023-10-01
A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks. (99%)Yanjie Li; Bin Xie; Songtao Guo; Yuanyuan Yang; Bin Xiao
Counterfactual Image Generation for adversarially robust and interpretable Classifiers. (96%)Rafael Bischof; Florian Scheidegger; Michael A. Kraus; A. Cristiano I. Malossi
Understanding Adversarial Transferability in Federated Learning. (93%)Yijiang Li; Ying Gao; Haohan Wang
On the Onset of Robust Overfitting in Adversarial Training. (87%)Chaojian Yu; Xiaolong Shi; Jun Yu; Bo Han; Tongliang Liu
GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to Pre-trained Encoders in Self-supervised Learning. (61%)Qiannan Wang; Changchun Yin; Zhe Liu; Liming Fang; Run Wang; Chenhao Lin
Fewer is More: Trojan Attacks on Parameter-Efficient Fine-Tuning. (9%)Lauren Hong; Ting Wang
Can Pre-trained Networks Detect Familiar Out-of-Distribution Data? (1%)Atsuyuki Miyai; Qing Yu; Go Irie; Kiyoharu Aizawa
How well does LLM generate security tests? (1%)Ying Zhang; Wenjia Song; Zhengjie Ji; Danfeng (Daphne) Yao; Na Meng
2023-09-30
Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks. (99%)Quang H. Nguyen; Yingjie Lao; Tung Pham; Kok-Seng Wong; Khoa D. Doan
Human-Producible Adversarial Examples. (98%)David Khachaturov; Yue Gao; Ilia Shumailov; Robert Mullins; Ross Anderson; Kassem Fawaz
Black-box Attacks on Image Activity Prediction and its Natural Language Explanations. (98%)Alina Elena Baia; Valentina Poggioni; Andrea Cavallaro
Horizontal Class Backdoor to Deep Learning. (84%)Hua Ma; Shang Wang; Yansong Gao
Refutation of Shapley Values for XAI -- Additional Evidence. (8%)Xuanxiang Huang; Joao Marques-Silva
2023-09-29
Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks. (99%)Mehrdad Saberi; Vinu Sankar Sadasivan; Keivan Rezaei; Aounon Kumar; Atoosa Chegini; Wenxiao Wang; Soheil Feizi
Efficient Biologically Plausible Adversarial Training. (98%)Matilde Tristany Farinha; Thomas Ortner; Giorgia Dellaferrera; Benjamin Grewe; Angeliki Pantazi
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks. (96%)Vaidehi Patil; Peter Hase; Mohit Bansal
On Continuity of Robust and Accurate Classifiers. (93%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Adversarial Machine Learning in Latent Representations of Neural Networks. (93%)Milin Zhang; Mohammad Abdi; Francesco Restuccia
Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization. (92%)Mahyar Fazlyab; Taha Entesari; Aniket Roy; Rama Chellappa
Toward Robust Recommendation via Real-time Vicinal Defense. (82%)Yichang Xu; Chenwang Wu; Defu Lian
Adversarial Explainability: Utilizing Explainable Machine Learning in Bypassing IoT Botnet Detection Systems. (31%)Mohammed M. Alani; Atefeh Mashatan; Ali Miri
Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study. (13%)Myeongseob Ko; Ming Jin; Chenguang Wang; Ruoxi Jia
Distributed Resilient Control of DC Microgrids Under Generally Unbounded FDI Attacks. (1%)Yichao Wang; Mohamadamin Rajabinezhad; Omar A. Beg; Shan Zuo
Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning. (1%)Hongsheng Hu; Xuyun Zhang; Zoran Salcic; Lichao Sun; Kim-Kwang Raymond Choo; Gillian Dobbie
2023-09-28
Investigating Human-Identifiable Features Hidden in Adversarial Perturbations. (98%)Dennis Y. Menn; Tzu-hsun Feng; Sriram Vishwanath; Hung-yi Lee
Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks. (98%)Huihui Gong; Minjing Dong; Siqi Ma; Seyit Camtepe; Surya Nepal; Chang Xu
Towards Poisoning Fair Representations. (70%)Tianci Liu; Haoyu Wang; Feijie Wu; Hengtong Zhang; Pan Li; Lu Su; Jing Gao
On the Trade-offs between Adversarial Robustness and Actionable Explanations. (68%)Satyapriya Krishna; Chirag Agarwal; Himabindu Lakkaraju
The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing. (56%)Blaise Delattre; Alexandre Araujo; Quentin Barthélemy; Alexandre Allauzen
Post-Training Overfitting Mitigation in DNN Classifiers. (41%)Hang Wang; David J. Miller; George Kesidis
Leveraging Optimization for Adaptive Attacks on Image Watermarks. (13%)Nils Lukas; Abdulrahman Diaa; Lucas Fenaux; Florian Kerschbaum
Random and Safe Cache Architecture to Defeat Cache Timing Attacks. (9%)Guangyuan Hu; Ruby B. Lee
Robust Offline Reinforcement Learning -- Certify the Confidence Interval. (4%)Jiarui Yao; Simon Shaolei Du
A Primer on Bayesian Neural Networks: Review and Debates. (2%)Julyan Arbel; Konstantinos Pitas; Mariia Vladimirova; Vincent Fortuin
2023-09-27
On the Computational Entanglement of Distant Features in Adversarial Machine Learning. (99%)YenLung Lai; Xingbo Dong; Zhe Jin
Adversarial Examples Might be Avoidable: The Role of Data Concentration in Adversarial Robustness. (95%)Ambar Pal; Jeremias Sulam; René Vidal
Defending Against Physical Adversarial Patch Attacks on Infrared Human Detection. (95%)Lukas Strack; Futa Waseda; Huy H. Nguyen; Yinqiang Zheng; Isao Echizen
Automatic Feature Fairness in Recommendation via Adversaries. (33%)Hengchang Hu; Yiming Cao; Zhankui He; Samson Tan; Min-Yen Kan
Warfare:Breaking the Watermark Protection of AI-Generated Content. (12%)Guanlin Li; Yifei Chen; Jie Zhang; Jiwei Li; Shangwei Guo; Tianwei Zhang
Generating Transferable Adversarial Simulation Scenarios for Self-Driving via Neural Rendering. (11%)Yasasa Abeysirigoonawardena; Kevin Xie; Chuhan Chen; Salar Hosseini; Ruiting Chen; Ruiqi Wang; Florian Shkurti
Breaking On-Chip Communication Anonymity using Flow Correlation Attacks. (4%)Hansika Weerasena; Prabhat Mishra
Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification. (1%)Mahmoud Nazzal; Nura Aljaafari; Ahmed Sawalmeh; Abdallah Khreishah; Muhammad Anan; Abdulelah Algosaibi; Mohammed Alnaeem; Adel Aldalbahi; Abdulaziz Alhumam; Conrado P. Vizcarra; Shadan Alhamed
2023-09-26
Structure Invariant Transformation for better Adversarial Transferability. (99%)Xiaosen Wang; Zeliang Zhang; Jianping Zhang
Privacy-preserving and Privacy-attacking Approaches for Speech and Audio -- A Survey. (16%)Yuchen Liu; Apu Kapadia; Donald Williamson
Neural Stochastic Differential Equations for Robust and Explainable Analysis of Electromagnetic Unintended Radiated Emissions. (2%)Sumit Kumar Jha; Susmit Jha; Rickard Ewetz; Alvaro Velasquez
Collaborative Watermarking for Adversarial Speech Synthesis. (1%)Lauri Juvela; Xin Wang
2023-09-25
DifAttack: Query-Efficient Black-Box Attack via Disentangled Feature Space. (99%)Jun Liu; Jiantao Zhou; Jiandian Zeng; Jinyu Tian
Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents. (98%)Foozhan Ataiefard; Hadi Hemmati
SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution. (1%)Zhongjie Ba; Jieming Zhong; Jiachen Lei; Peng Cheng; Qinglong Wang; Zhan Qin; Zhibo Wang; Kui Ren
2023-09-24
Adversarial Attacks on Video Object Segmentation with Hard Region Discovery. (99%)Ping Li; Yu Zhang; Li Yuan; Jian Zhao; Xianghua Xu; Xiaoqin Zhang
Vulnerabilities in Video Quality Assessment Models: The Challenge of Adversarial Attacks. (98%)Ao-Xiang Zhang; Yu Ran; Weixuan Tang; Yuan-Gen Wang
On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors. (86%)Trong-Nghia To; Danh Le Kim; Do Thi Thu Hien; Nghi Hoang Khoa; Hien Do Hoang; Phan The Duy; Van-Hau Pham
Benchmarking Local Robustness of High-Accuracy Binary Neural Networks for Enhanced Traffic Sign Recognition. (80%)Andreea Postovan; Mădălina Eraşcu
Projected Randomized Smoothing for Certified Adversarial Robustness. (76%)Samuel Pfrommer; Brendon G. Anderson; Somayeh Sojoudi
Combining Two Adversarial Attacks Against Person Re-Identification Systems. (73%)Eduardo de O. Andrade; Igor Garcia Ballhausen Sampaio; Joris Guérin; José Viterbo
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models. (2%)Minghang Deng; Zhong Zhang; Junming Shao
2023-09-23
Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. (61%)Zhaohan Xi; Tianyu Du; Changjiang Li; Ren Pang; Shouling Ji; Jinghui Chen; Fenglong Ma; Ting Wang
Moving Target Defense based Secured Network Slicing System in the O-RAN Architecture. (1%)Mojdeh Karbalaee Motalleb; Chafika Benzaïd; Tarik Taleb; Vahid Shah-Mansouri
Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers. (1%)Aryaman Gupta; Kaustav Chakraborty; Somil Bansal
2023-09-22
RBFormer: Improve Adversarial Robustness of Transformer by Robust Bias. (99%)Hao Cheng; Jinhao Duan; Hui Li; Lyutianyang Zhang; Jiahang Cao; Ping Wang; Jize Zhang; Kaidi Xu; Renjing Xu
Spatial-frequency channels, shape bias, and adversarial robustness. (69%)Ajay Subramanian; Elena Sizikova; Najib J. Majaj; Denis G. Pelli
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks. (69%)Heitor R. Guimarães; Arthur Pimentel; Anderson Avila; Tiago H. Falk
Understanding Deep Gradient Leakage via Inversion Influence Functions. (15%)Haobo Zhang; Junyuan Hong; Yuyang Deng; Mehrdad Mahdavi; Jiayu Zhou
Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations. (10%)Hanjiang Hu; Zuxin Liu; Linyi Li; Jiacheng Zhu; Ding Zhao
Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception? (5%)Xiaoxiao Sun; Nidham Gazagnadou; Vivek Sharma; Lingjuan Lyu; Hongdong Li; Liang Zheng
Expressive variational quantum circuits provide inherent privacy in federated learning. (1%)Niraj Kumar; Jamie Heredge; Changhao Li; Shaltiel Eloul; Shree Hari Sureshbabu; Marco Pistoia
On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures. (1%)Qingzhao Zhang; Shuowei Jin; Ruiyang Zhu; Jiachen Sun; Xumiao Zhang; Qi Alfred Chen; Z. Morley Mao
2023-09-21
Improving Machine Learning Robustness via Adversarial Training. (99%)Long Dang; Thushari Hapuarachchi; Kaiqi Xiong; Jing Lin
Goal-Oriented Prompt Attack and Safety Evaluation for LLMs. (69%)Chengyuan Liu; Fubang Zhao; Lizhi Qing; Yangyang Kang; Changlong Sun; Kun Kuang; Fei Wu
HANS, are you clever? Clever Hans Effect Analysis of Neural Systems. (45%)Leonardo Ranaldi; Fabio Massimo Zanzotto
On the Relationship between Skill Neurons and Robustness in Prompt Tuning. (12%)Leon Ackermann; Xenia Ohmer
DeepTheft: Stealing DNN Model Architectures through Power Side Channel. (1%)Yansong Gao; Huming Qiu; Zhi Zhang; Binghui Wang; Hua Ma; Alsharif Abuadbba; Minhui Xue; Anmin Fu; Surya Nepal
2023-09-20
How Robust is Google's Bard to Adversarial Image Attacks? (99%)Yinpeng Dong; Huanran Chen; Jiawei Chen; Zhengwei Fang; Xiao Yang; Yichi Zhang; Yu Tian; Hang Su; Jun Zhu
PRAT: PRofiling Adversarial aTtacks. (99%)Rahul Ambati; Naveed Akhtar; Ajmal Mian; Yogesh Singh Rawat
When to Trust AI: Advances and Challenges for Certification of Neural Networks. (64%)Marta Kwiatkowska; Xiyue Zhang
AudioFool: Fast, Universal and synchronization-free Cross-Domain Attack on Speech Recognition. (54%)Mohamad Fakih; Rouwaida Kanj; Fadi Kurdahi; Mohammed E. Fouda
Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation. (54%)Krishna Kanth Nakka; Mathieu Salzmann
Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection. (5%)Tran Duc Luong; Vuong Minh Tien; Nguyen Huu Quyen; Do Thi Thu Hien; Phan The Duy; Van-Hau Pham
Compilation as a Defense: Enhancing DL Model Attack Robustness via Tensor Optimization. (2%)Stefan Trawicki; William Hackett; Lewis Birch; Neeraj Suri; Peter Garraghan
Generalized Face Forgery Detection via Adaptive Learning for Pre-trained Vision Transformer. (1%)Anwei Luo; Rizhao Cai; Chenqi Kong; Yakun Ju; Xiangui Kang; Jiwu Huang; Alex C. Kot
2023-09-19
Language Guided Adversarial Purification. (99%)Himanshu Singh; A V Subramanyam
What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples. (99%)Shakila Mahjabin Tonni; Mark Dras
Adversarial Attacks Against Uncertainty Quantification. (99%)Emanuele Ledda; Daniele Angioni; Giorgio Piras; Giorgio Fumera; Battista Biggio; Fabio Roli
Model Leeching: An Extraction Attack Targeting LLMs. (76%)Lewis Birch; William Hackett; Stefan Trawicki; Neeraj Suri; Peter Garraghan
Information Leakage from Data Updates in Machine Learning Models. (16%)Tian Hui; Farhad Farokhi; Olga Ohrimenko
Robin: A Novel Method to Produce Robust Interpreters for Deep Learning-Based Code Classifiers. (16%)Zhen Li; Ruqian Zhang; Deqing Zou; Ning Wang; Yating Li; Shouhuai Xu; Chen Chen; Hai Jin
SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks. (12%)Zizhen Liu; Weiyang He; Chip-Hong Chang; Jing Ye; Huawei Li; Xiaowei Li
It's Simplex! Disaggregating Measures to Improve Certified Robustness. (11%)Andrew C. Cullen; Paul Montague; Shijie Liu; Sarah M. Erfani; Benjamin I. P. Rubinstein
Nebula: Self-Attention for Dynamic Malware Analysis. (5%)Dmitrijs Trizna; Luca Demetrio; Battista Biggio; Fabio Roli
Extreme Image Transformations Facilitate Robust Latent Object Representations. (1%)Girik Malik; Dakarai Crowder; Ennio Mingolla
2023-09-18
Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization. (99%)Huihui Gong; Minjing Dong; Siqi Ma; Seyit Camtepe; Surya Nepal; Chang Xu
Transferable Adversarial Attack on Image Tampering Localization. (99%)Yuqi Wang; Gang Cao; Zijie Lou; Haochen Zhu
Efficient Low-Rank GNN Defense Against Structural Attacks. (96%)Abdullah Alchihabi; Qing En; Yuhong Guo
Evaluating Adversarial Robustness with Expected Viable Performance. (45%)Ryan McCoppin; Colin Dawson; Sean M. Kennedy; Leslie M. Blaha
Dual Student Networks for Data-Free Model Stealing. (26%)James Beetham; Navid Kardan; Ajmal Mian; Mubarak Shah
Securing Fixed Neural Network Steganography. (5%)Zicong Luo; Sheng Li; Guobiao Li; Zhenxing Qian; Xinpeng Zhang
GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts. (4%)Jiahao Yu; Xingwei Lin; Zheng Yu; Xinyu Xing
Spoofing attack augmentation: can differently-trained attack models improve generalisation? (3%)Wanying Ge; Xin Wang; Junichi Yamagishi; Massimiliano Todisco; Nicholas Evans
Frame-to-Utterance Convergence: A Spectra-Temporal Approach for Unified Spoofing Detection. (1%)Awais Khan; Khalid Mahmood Malik; Shah Nawaz
2023-09-17
Reducing Adversarial Training Cost with Gradient Approximation. (99%)Huihui Gong; Shuo Yang; Siqi Ma; Seyit Camtepe; Surya Nepal; Chang Xu
Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM. (61%)Bochuan Cao; Yuanpu Cao; Lu Lin; Jinghui Chen
2023-09-16
Context-aware Adversarial Attack on Named Entity Recognition. (99%)Shuguang Chen; Leonardo Neves; Thamar Solorio
Inverse classification with logistic and softmax classifiers: efficient optimization. (56%)Miguel Á. Carreira-Perpiñán; Suryabhan Singh Hada
Robust Backdoor Attacks on Object Detection in Real World. (11%)Yaguan Qian; Boyuan Ji; Shuke He; Shenhui Huang; Xiang Ling; Bin Wang; Wei Wang
Conditional Mutual Information Constrained Deep Learning for Classification. (5%)En-Hui Yang; Shayan Mohajer Hamidi; Linfeng Ye; Renhao Tan; Beverly Yang
2023-09-15
Adversarial Attacks on Tables with Entity Swap. (92%)Aneta Koleva; Martin Ringsquandl; Volker Tresp
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks. (87%)Minh-Hao Van; Alycia N. Carey; Xintao Wu
Distributionally Robust Post-hoc Classifiers under Prior Shifts. (1%)Jiaheng Wei; Harikrishna Narasimhan; Ehsan Amid; Wen-Sheng Chu; Yang Liu; Abhishek Kumar
A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services. (1%)Hongsheng Hu; Shuo Wang; Jiamin Chang; Haonan Zhong; Ruoxi Sun; Shuang Hao; Haojin Zhu; Minhui Xue
2023-09-14
Unleashing the Adversarial Facet of Software Debloating. (98%)Do-Men Su; Mohannad Alhanahnah
SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems. (76%)Guangke Chen; Yedi Zhang; Fu Song
What Matters to Enhance Traffic Rule Compliance of Imitation Learning for Automated Driving. (50%)Hongkuan Zhou; Aifen Sui; Wei Cao; Zhenshan Bing
BAGEL: Backdoor Attacks against Federated Contrastive Learning. (16%)Yao Huang; Kongyang Chen; Jiannong Cao; Jiaxing Shen; Shaowei Wang; Yun Peng; Weilong Peng; Kechao Cai
Physical Invisible Backdoor Based on Camera Imaging. (2%)Yusheng Guo; Nan Zhong; Zhenxing Qian; Xinpeng Zhang
M3Dsynth: A dataset of medical 3D images with AI-generated local manipulations. (1%)Giada Zingarini; Davide Cozzolino; Riccardo Corvi; Giovanni Poggi; Luisa Verdoliva
2023-09-13
Semantic Adversarial Attacks via Diffusion Models. (99%)Chenan Wang; Jinhao Duan; Chaowei Xiao; Edward Kim; Matthew Stamm; Kaidi Xu
Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks. (99%)Yang Zheng; Luca Demetrio; Antonio Emanuele Cinà; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Ambra Demontis; Battista Biggio; Fabio Roli
Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments. (99%)Simon Queyrut; Valerio Schiavoni; Pascal Felber
PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection. (99%)Hanqing Guo; Guangjing Wang; Yuanda Wang; Bocheng Chen; Qiben Yan; Li Xiao
APICom: Automatic API Completion via Prompt Learning and Adversarial Training-based Data Augmentation. (92%)Yafeng Gu; Yiheng Shen; Xiang Chen; Shaoyu Yang; Yiling Huang; Zhixiang Cao
RAIN: Your Language Models Can Align Themselves without Finetuning. (83%)Yuhui Li; Fangyun Wei; Jinjing Zhao; Chao Zhang; Hongyang Zhang
Differentiable JPEG: The Devil is in the Details. (70%)Christoph Reich; Biplob Debnath; Deep Patel; Srimat Chakradhar
Deep Nonparametric Convexified Filtering for Computational Photography, Image Synthesis and Adversarial Defense. (41%)Jianqiao Wangni
MASTERKEY: Practical Backdoor Attack Against Speaker Verification Systems. (38%)Hanqing Guo; Xun Chen; Junfeng Guo; Li Xiao; Qiben Yan
Client-side Gradient Inversion Against Federated Learning from Poisoning. (22%)Jiaheng Wei; Yanjun Zhang; Leo Yu Zhang; Chao Chen; Shirui Pan; Kok-Leong Ong; Jun Zhang; Yang Xiang
Safe Reinforcement Learning with Dual Robustness. (1%)Zeyang Li; Chuxiong Hu; Yunan Wang; Yujie Yang; Shengbo Eben Li
2023-09-12
Using Reed-Muller Codes for Classification with Rejection and Recovery. (99%)Daniel Fentham (University of Birmingham); David Parker (University of Oxford); Mark Ryan (University of Birmingham)
Certified Robust Models with Slack Control and Large Lipschitz Constants. (98%)Max Losch; David Stutz; Bernt Schiele; Mario Fritz
Exploring Non-additive Randomness on ViT against Query-Based Black-Box Attacks. (98%)Jindong Gu; Fangyun Wei; Philip Torr; Han Hu
Compiled Models, Built-In Exploits: Uncovering Pervasive Bit-Flip Attack Surfaces in DNN Executables. (83%)Yanzuo Chen (The Hong Kong University of Science and Technology); Zhibo Liu (The Hong Kong University of Science and Technology); Yuanyuan Yuan (The Hong Kong University of Science and Technology); Sihang Hu (Huawei Technologies); Tianxiang Li (Huawei Technologies); Shuai Wang (The Hong Kong University of Science and Technology)
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review. (61%)Pengzhou Cheng; Zongru Wu; Wei Du; Gongshen Liu
CToMP: A Cycle-task-oriented Memory Protection Scheme for Unmanned Systems. (8%)Chengyan Ma; Ning Xi; Di Lu; Yebo Feng; Jianfeng Ma
Language Models as Black-Box Optimizers for Vision-Language Models. (4%)Shihong Liu; Zhiqiu Lin; Samuel Yu; Ryan Lee; Tiffany Ling; Deepak Pathak; Deva Ramanan
2023-09-11
Generalized Attacks on Face Verification Systems. (88%)Ehsan Nazari; Paula Branco; Guy-Vincent Jourdan
Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning. (76%)Gustavo Olague; Roberto Pineda; Gerardo Ibarra-Vazquez; Matthieu Olague; Axel Martinez; Sambit Bakshi; Jonathan Vargas; Isnardo Reducindo
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System. (68%)Peixin Zhang; Jun Sun; Mingtian Tan; Xinyu Wang
Privacy Side Channels in Machine Learning Systems. (10%)Edoardo Debenedetti; Giorgio Severi; Nicholas Carlini; Christopher A. Choquette-Choo; Matthew Jagielski; Milad Nasr; Eric Wallace; Florian Tramèr
Divergences in Color Perception between Deep Neural Networks and Humans. (4%)Ethan O. Nadler; Elise Darragh-Ford; Bhargav Srinivasa Desikan; Christian Conaway; Mark Chu; Tasker Hull; Douglas Guilbeault
Catch You Everything Everywhere: Guarding Textual Inversion via Concept Watermarking. (1%)Weitao Feng; Jiyan He; Jie Zhang; Tianwei Zhang; Wenbo Zhou; Weiming Zhang; Nenghai Yu
Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs. (1%)Wenhua Cheng; Weiwei Zhang; Haihao Shen; Yiyang Cai; Xin He; Kaokao Lv
2023-09-10
Outlier Robust Adversarial Training. (98%)Shu Hu; Zhenhuan Yang; Xin Wang; Yiming Ying; Siwei Lyu
DAD++: Improved Data-free Test Time Adversarial Defense. (98%)Gaurav Kumar Nayak; Inder Khatri; Shubham Randive; Ruchit Rawal; Anirban Chakraborty
Machine Translation Models Stand Strong in the Face of Adversarial Attacks. (86%)Pavel Burnyshev; Elizaveta Kostenok; Alexey Zaytsev
Secure Set-Based State Estimation for Linear Systems under Adversarial Attacks on Sensors. (12%)M. Umar B. Niazi; Michelle S. Chong; Amr Alanwar; Karl H. Johansson
2023-09-09
Towards Robust Model Watermark via Reducing Parametric Vulnerability. (8%)Guanhao Gan; Yiming Li; Dongxian Wu; Shu-Tao Xia
RecAD: Towards A Unified Library for Recommender Attack and Defense. (1%)Changsheng Wang; Jianbai Ye; Wenjie Wang; Chongming Gao; Fuli Feng; Xiangnan He
2023-09-08
Exploring Robust Features for Improving Adversarial Robustness. (99%)Hong Wang; Yuefan Deng; Shinjae Yoo; Yuewei Lin
ARRTOC: Adversarially Robust Real-Time Optimization and Control. (2%)Akhil Ahmed; Ehecatl Antonio del Rio-Chanona; Mehmet Mercangoz
Adversarial attacks on hybrid classical-quantum Deep Learning models for Histopathological Cancer Detection. (1%)Biswaraj Baral; Reek Majumdar; Bhavika Bhalgamiya; Taposh Dutta Roy
Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse. (1%)Edward A. Small; Jeffrey N. Clark; Christopher J. McWilliams; Kacper Sokol; Jeffrey Chan; Flora D. Salim; Raul Santos-Rodriguez
2023-09-07
Experimental Study of Adversarial Attacks on ML-based xApps in O-RAN. (99%)Naveen Naik Sapavath; Brian Kim; Kaushik Chowdhury; Vijay K Shah
How adversarial attacks can disrupt seemingly stable accurate classifiers. (99%)Oliver J. Sutton; Qinghua Zhou; Ivan Y. Tyukin; Alexander N. Gorban; Alexander Bastounis; Desmond J. Higham
Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences. (95%)Jeremiah Birrell; Mohammadreza Ebrahimi
DiffDefense: Defending against Adversarial Attacks via Diffusion Models. (80%)Hondamunige Prasanna Silva; Lorenzo Seidenari; Alberto Del Bimbo
One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning. (73%)Guohong Wang; Hua Ma; Yansong Gao; Alsharif Abuadbba; Zhi Zhang; Wei Kang; Said F. Al-Sarawib; Gongxuan Zhang; Derek Abbott
Promoting Fairness in GNNs: A Characterization of Stability. (1%)Yaning Jia; Chunhui Zhang
2023-09-06
Certifying LLM Safety against Adversarial Prompting. (99%)Aounon Kumar; Chirag Agarwal; Suraj Srinivas; Aaron Jiaxun Li; Soheil Feizi; Himabindu Lakkaraju
SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time Series. (84%)Chang George Dong; Liangwei Nathan Zheng; Weitong Chen; Wei Emma Zhang; Lin Yue
Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy. (68%)Zikai Zhang; Rui Hu
J-Guard: Journalism Guided Adversarially Robust Detection of AI-generated News. (38%)Tharindu Kumarage; Amrita Bhattacharjee; Djordje Padejski; Kristy Roschke; Dan Gillmor; Scott Ruston; Huan Liu; Joshua Garland
MIRA: Cracking Black-box Watermarking on Deep Neural Networks via Model Inversion-based Removal Attacks. (22%)Yifan Lu; Wenxuan Li; Mi Zhang; Xudong Pan; Min Yang
My Art My Choice: Adversarial Protection Against Unruly AI. (2%)Anthony Rhodes; Ram Bhagat; Umur Aybars Ciftci; Ilke Demir
VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints. (1%)Aoting Hu; Zhigang Lu; Renjie Xie; Minhui Xue
A Theoretical Explanation of Activation Sparsity through Flat Minima and Adversarial Robustness. (1%)Ze Peng; Lei Qi; Yinghuan Shi; Yang Gao
2023-09-05
The Adversarial Implications of Variable-Time Inference. (99%)Dudi Biton; Aditi Misra; Efrat Levy; Jaidip Kotak; Ron Bitton; Roei Schuster; Nicolas Papernot; Yuval Elovici; Ben Nassi
Adaptive Adversarial Training Does Not Increase Recourse Costs. (92%)Ian Hardy; Jayanth Yetukuri; Yang Liu
Black-Box Attacks against Signed Graph Analysis via Balance Poisoning. (87%)Jialong Zhou; Yuni Lai; Jian Ren; Kai Zhou
RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems. (83%)Abhishek Moitra; Abhiroop Bhattacharjee; Youngeun Kim; Priyadarshini Panda
Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach. (4%)Vimal K B; Saketh Bachu; Tanmay Garg; Niveditha Lakshmi Narasimhan; Raghavan Konuru; Vineeth N Balasubramanian
Robust Recommender System: A Survey and Future Directions. (2%)Kaike Zhang; Qi Cao; Fei Sun; Yunfan Wu; Shuchang Tao; Huawei Shen; Xueqi Cheng
Dual Adversarial Alignment for Realistic Support-Query Shift Few-shot Learning. (1%)Siyang Jiang; Rui Fang; Hsi-Wen Chen; Wei Ding; Ming-Syan Chen
2023-09-04
Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings. (99%)AprilPyone MaungMaung; Isao Echizen; Hitoshi Kiya
Improving Visual Quality and Transferability of Adversarial Attacks on Face Recognition Simultaneously with Adversarial Restoration. (99%)Fengfan Zhou; Hefei Ling; Yuxuan Shi; Jiazhong Chen; Ping Li
Adv3D: Generating 3D Adversarial Examples in Driving Scenarios with NeRF. (99%)Leheng Li; Qing Lian; Ying-Cong Chen
Toward Defensive Letter Design. (98%)Rentaro Kataoka; Akisato Kimura; Seiichi Uchida
MathAttack: Attacking Large Language Models Towards Math Solving Ability. (97%)Zihao Zhou; Qiufeng Wang; Mingyu Jin; Jie Yao; Jianan Ye; Wei Liu; Wei Wang; Xiaowei Huang; Kaizhu Huang
Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks. (93%)Kacem Khaled; Mouna Dhaouadi; Felipe Gohring de Magalhães; Gabriela Nicolescu
Efficient Query-Based Attack against ML-Based Android Malware Detection under Zero Knowledge Setting. (92%)Ping He; Yifan Xia; Xuhong Zhang; Shouling Ji
EventTrojan: Manipulating Non-Intrusive Speech Quality Assessment via Imperceptible Events. (15%)Ying Ren; Kailai Shen; Zhe Ye; Diqun Yan
Safe and Robust Watermark Injection with a Single OoD Image. (8%)Shuyang Yu; Junyuan Hong; Haobo Zhang; Haotao Wang; Zhangyang Wang; Jiayu Zhou
Dropout Attacks. (2%)Andrew Yuan; Alina Oprea; Cheng Tan
Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images. (2%)Jamiu Idowu; Ahmed Almasoud
2023-09-03
Robust and Efficient Interference Neural Networks for Defending Against Adversarial Attacks in ImageNet. (99%)Yunuo Xiong; Shujuan Liu; Hongwei Xiong
Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection. (98%)Weijie Wang; Zhengyu Zhao; Nicu Sebe; Bruno Lepri
AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training. (98%)Xingyuan Li; Jinyuan Liu; Long Ma; Xin Fan; Risheng Liu
Robust Adversarial Defense by Tensor Factorization. (89%)Manish Bhattarai; Mehmet Cagri Kaymak; Ryan Barron; Ben Nebgen; Kim Rasmussen; Boian Alexandrov
Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception. (13%)Zengxi Zhang; Zhiying Jiang; Zeru Shi; Jinyuan Liu; Risheng Liu
2023-09-02
Towards Certified Probabilistic Robustness with High Accuracy. (98%)Ruihan Zhang; Peixin Zhang; Jun Sun
Timbre-reserved Adversarial Attack in Speaker Identification. (98%)Qing Wang; Jixun Yao; Li Zhang; Pengcheng Guo; Lei Xie
Regularly Truncated M-estimators for Learning with Noisy Labels. (1%)Xiaobo Xia; Pengqian Lu; Chen Gong; Bo Han; Jun Yu; Jun Yu; Tongliang Liu
2023-09-01
Baseline Defenses for Adversarial Attacks Against Aligned Language Models. (99%)Neel Jain; Avi Schwarzschild; Yuxin Wen; Gowthami Somepalli; John Kirchenbauer; Ping-yeh Chiang; Micah Goldblum; Aniruddha Saha; Jonas Geiping; Tom Goldstein
Curating Naturally Adversarial Datasets for Trustworthy AI in Healthcare. (99%)Sydney Pugh; Ivan Ruchkin; Insup Lee; James Weimer
Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models. (89%)Changyu Liu; Yuling Jiao; Junhui Wang; Jian Huang
Why do universal adversarial attacks work on large language models?: Geometry might be the answer. (83%)Varshini Subhash; Anna Bialas; Weiwei Pan; Finale Doshi-Velez
RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model. (1%)Fengxiang Bie; Yibo Yang; Zhongzhu Zhou; Adam Ghanem; Minjia Zhang; Zhewei Yao; Xiaoxia Wu; Connor Holmes; Pareesa Golnari; David A. Clifton; Yuxiong He; Dacheng Tao; Shuaiwen Leon Song
Learned Visual Features to Textual Explanations. (1%)Saeid Asgari Taghanaki; Aliasghar Khani; Amir Khasahmadi; Aditya Sanghi; Karl D. D. Willis; Ali Mahdavi-Amiri
2023-08-31
Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff. (98%)Satoshi Suzuki; Shin'ya Yamaguchi; Shoichiro Takeda; Sekitoshi Kanai; Naoki Makishima; Atsushi Ando; Ryo Masumura
Image Hijacking: Adversarial Images can Control Generative Models at Runtime. (98%)Luke Bailey; Euan Ong; Stuart Russell; Scott Emmons
The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning. (93%)Maria Rigaki; Sebastian Garcia
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack. (75%)Sze Jue Yang; Quang Nguyen; Chee Seng Chan; Khoa D. Doan
Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models. (75%)Kevin Hector; Pierre-Alain Moellic; Mathieu Dumont; Jean-Max Dutertre
FTA: Stealthy and Robust Backdoor Attack with Flexible Trigger on Federated Learning. (45%)Yanqi Qiao; Congwen Chen; Rui Wang; Kaitai Liang
2023-08-30
Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach. (98%)Zahra Chaghazardi (University of Surrey); Saber Fallah (University of Surrey); Alireza Tamaddoni-Nezhad (University of Surrey)
Robust Principles: Architectural Design Principles for Adversarially Robust CNNs. (11%)ShengYun Peng; Weilin Xu; Cory Cornelius; Matthew Hull; Kevin Li; Rahul Duggal; Mansi Phute; Jason Martin; Duen Horng Chau
2023-08-29
Adaptive Attack Detection in Text Classification: Leveraging Space Exploration Features for Text Sentiment Classification. (99%)Atefeh Mahdavi; Neda Keivandarian; Marco Carvalho
Advancing Adversarial Robustness Through Adversarial Logit Update. (99%)Hao Xuan; Peican Zhu; Xingyu Li
Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary. (99%)Fahad Alrasheedi; Xin Zhong
A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
MDTD: A Multi Domain Trojan Detector for Deep Neural Networks. (97%)Arezoo Rajabi; Surudhi Asokraj; Fengqing Jiang; Luyao Niu; Bhaskar Ramasubramanian; Jim Ritcey; Radha Poovendran
3D Adversarial Augmentations for Robust Out-of-Domain Predictions. (87%)Alexander Lehner; Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Nassir Navab; Benjamin Busam; Federico Tombari
Everything Perturbed All at Once: Enabling Differentiable Graph Attacks. (84%)Haoran Liu; Bokun Wang; Jianling Wang; Xiangjue Dong; Tianbao Yang; James Caverlee
Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models. (75%)Takami Sato; Justin Yue; Nanze Chen; Ningfei Wang; Qi Alfred Chen
Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review. (75%)Zhenyong Zhang; Mengxiang Liu; Mingyang Sun; Ruilong Deng; Peng Cheng; Dusit Niyato; Mo-Yuen Chow; Jiming Chen
Can We Rely on AI? (50%)Desmond J. Higham
Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images. (1%)Tareen Dawood; Chen Chen; Baldeep S. Sidhua; Bram Ruijsink; Justin Goulda; Bradley Porter; Mark K. Elliott; Vishal Mehta; Christopher A. Rinaldi; Esther Puyol-Anton; Reza Razavi; Andrew P. King
2023-08-28
Adversarial Attacks on Foundational Vision Models. (80%)Nathan Inkawhich; Gwendolyn McDonald; Ryan Luley
DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing. (45%)Jiawei Zhang; Zhongzhu Chen; Huan Zhang; Chaowei Xiao; Bo Li
Identifying and Mitigating the Security Risks of Generative AI. (45%)Clark Barrett; Brad Boyd; Elie Burzstein; Nicholas Carlini; Brad Chen; Jihye Choi; Amrita Roy Chowdhury; Mihai Christodorescu; Anupam Datta; Soheil Feizi; Kathleen Fisher; Tatsunori Hashimoto; Dan Hendrycks; Somesh Jha; Daniel Kang; Florian Kerschbaum; Eric Mitchell; John Mitchell; Zulfikar Ramzan; Khawaja Shams; Dawn Song; Ankur Taly; Diyi Yang
ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure Events. (13%)Aizaz Sharif; Dusica Marijan
Rep2wav: Noise Robust text-to-speech Using self-supervised representations. (1%)Qiushi Zhu; Yu Gu; Rilin Chen; Chao Weng; Yuchen Hu; Lirong Dai; Jie Zhang
Are Existing Out-Of-Distribution Techniques Suitable for Network Intrusion Detection? (1%)Andrea Corsini; Shanchieh Jay Yang
2023-08-27
FaceChain: A Playground for Human-centric Artificial Intelligence Generated Content. (1%)Yang Liu; Cheng Yu; Lei Shang; Yongyi He; Ziheng Wu; Xingjun Wang; Chao Xu; Haoyu Xie; Weida Wang; Yuze Zhao; Lin Zhu; Chen Cheng; Weitao Chen; Yuan Yao; Wenmeng Zhou; Jiaqi Xu; Qiang Wang; Yingda Chen; Xuansong Xie; Baigui Sun
Detecting Language Model Attacks with Perplexity. (1%)Gabriel Alon; Michael Kamfonas
2023-08-24
Exploring Transferability of Multimodal Adversarial Samples for Vision-Language Pre-training Models with Contrastive Learning. (99%)Youze Wang; Wenbo Hu; Yinpeng Dong; Richang Hong
Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers. (92%)Paul Gavrikov; Janis Keuper
Evaluating the Vulnerabilities in ML systems in terms of adversarial attacks. (82%)John Harshith; Mantej Singh Gill; Madhan Jothimani
Fast Adversarial Training with Smooth Convergence. (3%)Mengnan Zhao; Lihe Zhang; Yuqiu Kong; Baocai Yin
WavMark: Watermarking for Audio Generation. (2%)Guangyu Chen; Yu Wu; Shujie Liu; Tao Liu; Xiaoyong Du; Furu Wei
Prediction without Preclusion: Recourse Verification with Reachable Sets. (1%)Avni Kothari; Bogdan Kulynych; Tsui-Wei Weng; Berk Ustun
2023-08-23
On-Manifold Projected Gradient Descent. (99%)Aaron Mahler; Tyrus Berry; Tom Stephens; Harbir Antil; Michael Merritt; Jeanie Schreiber; Ioannis Kevrekidis
Sample Complexity of Robust Learning against Evasion Attacks. (98%)Pascale Gourdeau
LCANets++: Robust Audio Classification using Multi-layer Neural Networks with Lateral Competition. (92%)Sayanton V. Dibbo; Juston S. Moore; Garrett T. Kenyon; Michael A. Teti
BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection. (74%)Tinghao Xie; Xiangyu Qi; Ping He; Yiming Li; Jiachen T. Wang; Prateek Mittal
RemovalNet: DNN Fingerprint Removal Attacks. (69%)Hongwei Yao; Zheng Li; Kunzhe Huang; Jian Lou; Zhan Qin; Kui Ren
A Survey of Graph Unlearning. (2%)Anwar Said; Yuying Zhao; Tyler Derr; Mudassir Shabbir; Waseem Abbas; Xenofon Koutsoukos
Ensembling Uncertainty Measures to Improve Safety of Black-Box Classifiers. (1%)Tommaso Zoppi; Andrea Ceccarelli; Andrea Bondavalli
Aparecium: Revealing Secrets from Physical Photographs. (1%)Zhe Lei; Jie Zhang; Jingtao Li; Weiming Zhang; Nenghai Yu
2023-08-22
SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks. (99%)Yue Gao; Ilia Shumailov; Kassem Fawaz
Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection. (99%)Mahmoud Nazzal; Issa Khalil; Abdallah Khreishah; NhatHai Phan; Yao Ma
Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack. (98%)Ningfei Wang; Yunpeng Luo; Takami Sato; Kaidi Xu; Qi Alfred Chen
Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation. (86%)Yanxin Yang; Ming Hu; Yue Cao; Jun Xia; Yihao Huang; Yang Liu; Mingsong Chen
Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging. (76%)Xiaojun Jia; Yuefeng Chen; Xiaofeng Mao; Ranjie Duan; Jindong Gu; Rong Zhang; Hui Xue; Xiaochun Cao
Adversarial Illusions in Multi-Modal Embeddings. (75%)Tingwei Zhang; Rishi Jha; Eugene Bagdasaryan; Vitaly Shmatikov
Designing an attack-defense game: how to increase robustness of financial transaction models via a competition. (75%)Alexey Zaytsev; Maria Kovaleva; Alex Natekin; Evgeni Vorsin; Valerii Smirnov; Georgii Smirnov; Oleg Sidorshin; Alexander Senin; Alexander Dudin; Dmitry Berestnev
Adversarial Training Using Feedback Loops. (74%)Ali Haisam Muhammad Rafid; Adrian Sandu
LEAP: Efficient and Automated Test Method for NLP Software. (31%)Mingxuan Xiao; Yan Xiao; Hai Dong; Shunhui Ji; Pengcheng Zhang
PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification. (16%)Yizhen Yuan (Institute for AI Industry Research); Rui Kong (Shanghai Jiao Tong University); Shenghao Xie (Wuhan University); Yuanchun Li (Institute for AI Industry Research, Shanghai AI Laboratory); Yunxin Liu (Institute for AI Industry Research, Shanghai AI Laboratory)
2023-08-21
Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. (99%)Zhijin Ge; Fanhua Shang; Hongying Liu; Yuanyuan Liu; Liang Wan; Wei Feng; Xiaosen Wang
Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs. (99%)Dongjin Lee; Juho Lee; Kijung Shin
Enhancing Adversarial Attacks: The Similar Target Method. (99%)Shuo Zhang; Ziruo Wang; Zikai Zhou; Huanran Chen
Adversarial Attacks on Code Models with Discriminative Graph Patterns. (96%)Thanh-Dat Nguyen; Yang Zhou; Xuan Bach D. Le; Patanamon Thongtanunam; David Lo
Temporal-Distributed Backdoor Attack Against Video Based Action Recognition. (88%)Xi Li; Songhe Wang; Ruiquan Huang; Mahanth Gowda; George Kesidis
Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models. (76%)Preben M. Ness; Dusica Marijan; Sunanda Bose
Single-User Injection for Invisible Shilling Attack against Recommender Systems. (62%)Chengzhi Huang; Hui Li
On the Adversarial Robustness of Multi-Modal Foundation Models. (4%)Christian Schlarmann; Matthias Hein
Unlocking Accuracy and Fairness in Differentially Private Image Classification. (2%)Leonard Berrada; Soham De; Judy Hanwen Shen; Jamie Hayes; Robert Stanforth; David Stutz; Pushmeet Kohli; Samuel L. Smith; Borja Balle
2023-08-20
Boosting Adversarial Transferability by Block Shuffle and Rotation. (99%)Kunyu Wang; Xuanran He; Wenxuan Wang; Xiaosen Wang
Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting. (96%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Yinpeng Chen; Lu Yuan; Gang Hua; Weiming Zhang; Nenghai Yu
HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds. (96%)Hejia Geng; Peng Li
Hiding Backdoors within Event Sequence Data via Poisoning Attacks. (95%)Elizaveta Kovtun; Alina Ermilova; Dmitry Berestnev; Alexey Zaytsev
Adversarial Collaborative Filtering for Free. (61%)Huiyuan Chen; Xiaoting Li; Vivian Lai; Chin-Chia Michael Yeh; Yujie Fan; Yan Zheng; Mahashweta Das; Hao Yang
Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks. (1%)Kaixin Xu; Zhe Wang; Xue Geng; Jie Lin; Min Wu; Xiaoli Li; Weisi Lin
A Study on Robustness and Reliability of Large Language Model Code Generation. (1%)Li Zhong; Zilong Wang
2023-08-19
A Comparison of Adversarial Learning Techniques for Malware Detection. (99%)Pavla Louthánová; Matouš Kozák; Martin Jureček; Mark Stamp
Robust Mixture-of-Expert Training for Convolutional Neural Networks. (83%)Yihua Zhang; Ruisi Cai; Tianlong Chen; Guanhua Zhang; Huan Zhang; Pin-Yu Chen; Shiyu Chang; Zhangyang Wang; Sijia Liu
2023-08-18
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method. (99%)Yu-An Liu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Wei Chen; Yixing Fan; Xueqi Cheng
Attacking logo-based phishing website detectors with adversarial perturbations. (99%)Jehyun Lee; Zhe Xin; Melanie Ng Pei See; Kanav Sabharwal; Giovanni Apruzzese; Dinil Mon Divakaran
Compensating Removed Frequency Components: Thwarting Voice Spectrum Reduction Attacks. (92%)Shu Wang; Kun Sun; Qi Li
DFB: A Data-Free, Low-Budget, and High-Efficacy Clean-Label Backdoor Attack. (54%)Binhao Ma; Jiahui Wang; Dejun Wang; Bo Meng
Backdoor Mitigation by Correcting the Distribution of Neural Activations. (11%)Xi Li; Zhen Xiang; David J. Miller; George Kesidis
Towards Attack-tolerant Federated Learning via Critical Parameter Analysis. (9%)Sungwon Han; Sungwon Park; Fangzhao Wu; Sundong Kim; Bin Zhu; Xing Xie; Meeyoung Cha
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box. (9%)Yi Cai; Gerhard Wunder
Defending Label Inference Attacks in Split Learning under Regression Setting. (4%)Haoze Qiu; Fei Zheng; Chaochao Chen; Xiaolin Zheng
An Image is Worth a Thousand Toxic Words: A Metamorphic Testing Framework for Content Moderation Software. (1%)Wenxuan Wang; Jingyuan Huang; Jen-tse Huang; Chang Chen; Jiazhen Gu; Pinjia He; Michael R. Lyu
Proceedings of the 2nd International Workshop on Adaptive Cyber Defense. (1%)Marco Carvalho; Damian Marriott; Mark Bilinski; Ahmad Ridley
2023-08-17
AIR: Threats of Adversarial Attacks on Deep Learning-Based Information Recovery. (99%)Jinyin Chen; Jie Ge; Shilian Zheng; Linhui Ye; Haibin Zheng; Weiguo Shen; Keqiang Yue; Xiaoniu Yang
Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing. (99%)Daniel Gibert; Giulio Zizzo; Quan Le
A White-Box False Positive Adversarial Attack Method on Contrastive Loss Based Offline Handwritten Signature Verification Models. (98%)Zhongliang Guo; Weiye Li; Yifei Qian; Ognjen Arandjelović; Lei Fang
Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces. (16%)Ahmad-Reza Ehyaei; Kiarash Mohammadi; Amir-Hossein Karimi; Samira Samadi; Golnoosh Farnadi
That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications. (10%)Carter Slocum; Yicheng Zhang; Erfan Shayegani; Pedram Zaree; Nael Abu-Ghazaleh; Jiasi Chen
Evaluating the Instruction-Following Robustness of Large Language Models to Prompt Injection. (10%)Zekun Li; Baolin Peng; Pengcheng He; Xifeng Yan
General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing. (3%)Dmitrii Korzh; Mikhail Pautov; Olga Tsymboi; Ivan Oseledets
2023-08-16
Benchmarking Adversarial Robustness of Compressed Deep Learning Models. (81%)Brijesh Vora; Kartik Patwari; Syed Mahbub Hafiz; Zubair Shafiq; Chen-Nee Chuah
Test-Time Poisoning Attacks Against Test-Time Adaptation Models. (73%)Tianshuo Cong; Xinlei He; Yun Shen; Yang Zhang
Self-Deception: Reverse Penetrating the Semantic Firewall of Large Language Models. (67%)Zhenhua Wang; Wei Xie; Kai Chen; Baosheng Wang; Zhiwen Gui; Enze Wang
Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks. (61%)Mirazul Haque; Wei Yang
Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness. (33%)Francesco Campi; Lukas Gosch; Tom Wollschläger; Yan Scholten; Stephan Günnemann
2023-08-15
SEDA: Self-Ensembling ViT with Defensive Distillation and Adversarial Training for robust Chest X-rays Classification. (99%)Raza Imam; Ibrahim Almakky; Salma Alrashdi; Baketah Alrashdi; Mohammad Yaqub
Backpropagation Path Search On Adversarial Transferability. (99%)Zhuoer Xu; Zhangxuan Gu; Jianping Zhang; Shiwen Cui; Changhua Meng; Weiqiang Wang
A Review of Adversarial Attacks in Computer Vision. (99%)Yutong Zhang; Yao Li; Yin Li; Zhichang Guo
Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models. (95%)Yugeng Liu; Tianshuo Cong; Zhengyu Zhao; Michael Backes; Yun Shen; Yang Zhang
Simple and Efficient Partial Graph Adversarial Attack: A New Perspective. (93%)Guanghui Zhu; Mengyu Chen; Chunfeng Yuan; Yihua Huang
2023-08-14
3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack. (99%)Yunbo Tao; Daizong Liu; Pan Zhou; Yulai Xie; Wei Du; Wei Hu
White-Box Adversarial Attacks on Deep Learning-Based Radio Frequency Fingerprint Identification. (99%)Jie Ma; Junqing Zhang; Guanxiong Shen; Alan Marshall; Chip-Hong Chang
AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning. (99%)Ziqi Zhou; Shengshan Hu; Minghui Li; Hangtao Zhang; Yechao Zhang; Hai Jin
Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks. (68%)Shijie Liu; Andrew C. Cullen; Paul Montague; Sarah M. Erfani; Benjamin I. P. Rubinstein
LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked. (61%)Alec Helbling; Mansi Phute; Matthew Hull; Duen Horng Chau
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks. (13%)Indu Joshi; Priyank Upadhya; Gaurav Kumar Nayak; Peter Schüffler; Nassir Navab
ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal and Robust Vehicle Evasion. (10%)Naufal Suryanto; Yongsu Kim; Harashta Tatimma Larasati; Hyoeun Kang; Thi-Thu-Huong Le; Yoonyoung Hong; Hunmin Yang; Se-Yoon Oh; Howon Kim
SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation. (1%)An Wang; Mobarakol Islam; Mengya Xu; Yang Zhang; Hongliang Ren
2023-08-13
SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection. (99%)João Vitorino; Isabel Praça; Eva Maia
Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods. (45%)Avery Ma; Yangchen Pan; Amir-massoud Farahmand
A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations. (1%)Hongrong Cheng; Miao Zhang; Javen Qinfeng Shi
Robustified ANNs Reveal Wormholes Between Human Category Percepts. (1%)Guy Gaziv; Michael J. Lee; James J. DiCarlo
Faithful to Whom? Questioning Interpretability Measures in NLP. (1%)Evan Crothers; Herna Viktor; Nathalie Japkowicz
2023-08-12
Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks. (99%)Roman Garaev; Bader Rasheed; Adil Khan
One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training. (13%)Jianshuo Dong; Han Qiu; Yiming Li; Tianwei Zhang; Yuanjie Li; Zeqi Lai; Chao Zhang; Shu-Tao Xia
2023-08-11
Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregation. (98%)Xuannan Liu; Yaoyao Zhong; Yuhang Zhang; Lixiong Qin; Weihong Deng
Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook. (98%)Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammad Shafique
Face Encryption via Frequency-Restricted Identity-Agnostic Attacks. (96%)Xin Dong; Rui Wang; Siyuan Liang; Aishan Liu; Lihua Jing
White-box Membership Inference Attacks against Diffusion Models. (68%)Yan Pang; Tianhao Wang; Xuhui Kang; Mengdi Huai; Yang Zhang
Test-Time Backdoor Defense via Detecting and Repairing. (10%)Jiyang Guan; Jian Liang; Ran He
Continual Face Forgery Detection via Historical Distribution Preserving. (2%)Ke Sun; Shen Chen; Taiping Yao; Xiaoshuai Sun; Shouhong Ding; Rongrong Ji
Fast and Accurate Transferability Measurement by Evaluating Intra-class Feature Variance. (1%)Huiwen Xu; U Kang
2023-08-10
Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient. (99%)Zhengzhi Lu; He Wang; Ziyi Chang; Guoan Yang; Hubert P. H. Shum
Symmetry Defense Against XGBoost Adversarial Perturbation Attacks. (96%)Blerta Lindqvist
Complex Network Effects on the Robustness of Graph Convolutional Networks. (92%)Benjamin A. Miller; Kevin Chan; Tina Eliassi-Rad
Critical Points ++: An Agile Point Cloud Importance Measure for Robust Classification, Adversarial Defense and Explainable AI. (80%)Meir Yossef Levi; Guy Gilboa
State Machine Frameworks for Website Fingerprinting Defenses: Maybe Not. (61%)Ethan Witwer
FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks. (45%)Ehsanul Kabir; Zeyu Song; Md Rafi Ur Rashid; Shagufta Mehnaz
Comprehensive Analysis of Network Robustness Evaluation Based on Convolutional Neural Networks with Spatial Pyramid Pooling. (1%)Wenjun Jiang; Tianlong Fan; Changhao Li; Chuanfu Zhang; Tao Zhang; Zong-fu Luo
2023-08-09
Adv-Inpainting: Generating Natural and Transferable Adversarial Patch via Attention-guided Feature Fusion. (98%)Yanjie Li; Mingxing Duan; Bin Xiao
Adversarial ModSecurity: Countering Adversarial SQL Injections with Robust Machine Learning. (93%)Biagio Montaruli; Luca Demetrio; Andrea Valenza; Battista Biggio; Luca Compagna; Davide Balzarotti; Davide Ariu; Luca Piras
Adversarial Deep Reinforcement Learning for Cyber Security in Software Defined Networks. (81%)Luke Borchjes; Clement Nyirenda; Louise Leenen
Data-Free Model Extraction Attacks in the Context of Object Detection. (41%)Harshit Shah; Aravindhan G; Pavan Kulkarni; Yuvaraj Govidarajulu; Manojkumar Parmar
2023-08-08
Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning. (99%)Simon Queyrut; Yérom-David Bromberg; Valerio Schiavoni
Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients. (81%)Yao Shu; Xiaoqiang Lin; Zhongxiang Dai; Bryan Kian Hsiang Low
Comprehensive Assessment of the Performance of Deep Learning Classifiers Reveals a Surprising Lack of Robustness. (67%)Michael W. Spratling
The Model Inversion Eavesdropping Attack in Semantic Communication Systems. (67%)Yuhao Chen; Qianqian Yang; Zhiguo Shi; Jiming Chen
XGBD: Explanation-Guided Graph Backdoor Detection. (54%)Zihan Guan; Mengnan Du; Ninghao Liu
Improved Activation Clipping for Universal Backdoor Mitigation and Test-Time Detection. (50%)Hang Wang; Zhen Xiang; David J. Miller; George Kesidis
Evil Operation: Breaking Speaker Recognition with PaddingBack. (31%)Zhe Ye; Diqun Yan; Li Dong; Kailai Shen
Backdoor Federated Learning by Poisoning Backdoor-Critical Layers. (15%)Haomin Zhuang; Mingxian Yu; Hao Wang; Yang Hua; Jian Li; Xu Yuan
2023-08-07
Fixed Inter-Neuron Covariability Induces Adversarial Robustness. (98%)Muhammad Ahmed Shah; Bhiksha Raj
Exploring the Physical World Adversarial Robustness of Vehicle Detection. (98%)Wei Jiang; Tianyuan Zhang; Shuangcheng Liu; Weiyu Ji; Zichao Zhang; Gang Xiao
PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation. (86%)Zhu Liu; Jinyuan Liu; Benzhuang Zhang; Long Ma; Xin Fan; Risheng Liu
A reading survey on adversarial machine learning: Adversarial attacks and their understanding. (81%)Shashank Kotyan
A Four-Pronged Defense Against Byzantine Attacks in Federated Learning. (54%)Wei Wan; Shengshan Hu; Minghui Li; Jianrong Lu; Longling Zhang; Leo Yu Zhang; Hai Jin
Improving Performance of Semi-Supervised Learning by Adversarial Attacks. (11%)Dongyoon Yang; Kunwoong Kim; Yongdai Kim
Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing. (10%)Wai Man Si; Michael Backes; Yang Zhang
2023-08-06
SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Bassem Ouni; Muhammad Shafique
CGBA: Curvature-aware Geometric Black-box Attack. (99%)Md Farhamdur Reza; Ali Rahmati; Tianfu Wu; Huaiyu Dai
APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses. (98%)Tianrui Qin; Xitong Gao; Juanjuan Zhao; Kejiang Ye; Cheng-Zhong Xu
Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change. (82%)Chien Cheng Chyou; Hung-Ting Su; Winston H. Hsu
Using Overlapping Methods to Counter Adversaries in Community Detection. (50%)Benjamin A. Miller; Kevin Chan; Tina Eliassi-Rad
2023-08-05
An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability. (99%)Bin Chen; Jia-Li Yin; Shukai Chen; Bo-Hao Chen; Ximeng Liu
An AI-Enabled Framework to Defend Ingenious MDT-based Attacks on the Emerging Zero Touch Cellular Networks. (92%)Aneeqa Ijaz; Waseem Raza; Hasan Farooq; Marvin Manalastas; Ali Imran
A Security and Usability Analysis of Local Attacks Against FIDO2. (1%)Tarun Kumar Yadav; Kent Seamons
Approximating Positive Homogeneous Functions with Scale Invariant Neural Networks. (1%)Stefan Bamberger; Reinhard Heckel; Felix Krahmer
2023-08-04
Multi-attacks: Many images $+$ the same adversarial attack $\to$ many target labels. (99%)Stanislav Fort
RobustMQ: Benchmarking Robustness of Quantized Models. (75%)Yisong Xiao; Aishan Liu; Tianyuan Zhang; Haotong Qin; Jinyang Guo; Xianglong Liu
SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection. (67%)Nasimeh Heydaribeni; Ruisi Zhang; Tara Javidi; Cristina Nita-Rotaru; Farinaz Koushanfar
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks. (67%)Domenico Cotroneo; Cristina Improta; Pietro Liguori; Roberto Natella
Universal Defensive Underpainting Patch: Making Your Text Invisible to Optical Character Recognition. (31%)JiaCheng Deng; Li Dong; Jiahao Chen; Diqun Yan; Rangding Wang; Dengpan Ye; Lingchen Zhao; Jinyu Tian
BlindSage: Label Inference Attacks against Node-level Vertical Federated Graph Neural Networks. (9%)Marco Arazzi; Mauro Conti; Stefanos Koffas; Marina Krcek; Antonino Nocera; Stjepan Picek; Jing Xu
2023-08-03
Hard Adversarial Example Mining for Improving Robust Fairness. (99%)Chenhao Lin; Xiang Ji; Yulong Yang; Qian Li; Chao Shen; Run Wang; Liming Fang
URET: Universal Robustness Evaluation Toolkit (for Evasion). (99%)Kevin Eykholt; Taesung Lee; Douglas Schales; Jiyong Jang; Ian Molloy; Masha Zorin
AdvFAS: A robust face anti-spoofing framework against adversarial examples. (98%)Jiawei Chen; Xiao Yang; Heng Yin; Mingzhi Ma; Bihui Chen; Jianteng Peng; Yandong Guo; Zhaoxia Yin; Hang Su
FROD: Robust Object Detection for Free. (67%)Muhammad Awais; Weiming Zhuang; Lingjuan Lyu; Sung-Ho Bae
ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP. (33%)Lu Yan; Zhuo Zhang; Guanhong Tao; Kaiyuan Zhang; Xuan Chen; Guangyu Shen; Xiangyu Zhang
From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application? (4%)Rodrigo Pedro; Daniel Castro; Paulo Carreira; Nuno Santos
2023-08-02
Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time. (99%)Xinfeng Li; Chen Yan; Xuancun Lu; Zihan Zeng; Xiaoyu Ji; Wenyuan Xu
Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks. (98%)Jun Guo; Aishan Liu; Xingyu Zheng; Siyuan Liang; Yisong Xiao; Yichao Wu; Xianglong Liu
Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator. (16%)Xiaobei Yan; Xiaoxuan Lou; Guowen Xu; Han Qiu; Shangwei Guo; Chip Hong Chang; Tianwei Zhang
TEASMA: A Practical Approach for the Test Assessment of Deep Neural Networks using Mutation Analysis. (2%)Amin Abbasishahkoo; Mahboubeh Dadkhah; Lionel Briand; Dayi Lin
LSF-IDM: Automotive Intrusion Detection Model with Lightweight Attribution and Semantic Fusion. (1%)Pengzhou Cheng; Lei Hua; Haobin Jiang; Mohammad Samie; Gongshen Liu
2023-08-01
Dynamic ensemble selection based on Deep Neural Network Uncertainty Estimation for Adversarial Robustness. (99%)Ruoxi Qin; Linyuan Wang; Xuehui Du; Xingyuan Chen; Bin Yan
Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning. (99%)Kaijie Zhu; Jindong Wang; Xixu Hu; Xing Xie; Ge Yang
LimeAttack: Local Explainable Method for Textual Hard-Label Adversarial Attack. (99%)Hai Zhu; Zhaoqing Yang; Weiwei Shang; Yuren Wu
Doubly Robust Instance-Reweighted Adversarial Training. (82%)Daouda Sow; Sen Lin; Zhangyang Wang; Yingbin Liang
Training on Foveated Images Improves Robustness to Adversarial Attacks. (82%)Muhammad A. Shah; Bhiksha Raj
Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches. (47%)Pia Hanfeld; Khaled Wahba; Marina M. -C. Höhne; Michael Bussmann; Wolfgang Hönig
Robust Linear Regression: Phase-Transitions and Precise Tradeoffs for General Norms. (22%)Elvis Dohmatob; Meyer Scetbon
Learning to Generate Training Datasets for Robust Semantic Segmentation. (9%)Marwane Hariat; Olivier Laurent; Rémi Kazmierczak; Shihao Zhang; Andrei Bursuc; Angela Yao; Gianni Franchi
Zero-Shot Learning by Harnessing Adversarial Samples. (1%)Zhi Chen; Pengfei Zhang; Jingjing Li; Sen Wang; Zi Huang
A Novel Cross-Perturbation for Single Domain Generalization. (1%)Dongjia Zhao; Lei Qi; Xiao Shi; Yinghuan Shi; Xin Geng
2023-07-31
A Novel Deep Learning based Model to Defend Network Intrusion Detection System against Adversarial Attacks. (99%)Khushnaseeb Roshan; Aasim Zafar; Shiekh Burhan Ul Haque
Transferable Attack for Semantic Segmentation. (99%)Mengqi He; Jing Zhang; Zhaoyuan Yang; Mingyi He; Nick Barnes; Yuchao Dai
Universal Adversarial Defense in Remote Sensing Based on Pre-trained Denoising Diffusion Models. (99%)Weikang Yu; Yonghao Xu; Pedram Ghamisi
Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection. (97%)Xuanang Chen; Ben He; Le Sun; Yingfei Sun
Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks. (86%)Xinyu Zhang; Hanbin Hong; Yuan Hong; Peng Huang; Binghui Wang; Zhongjie Ba; Kui Ren
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models. (26%)Jordan Vice; Naveed Akhtar; Richard Hartley; Ajmal Mian
Adversarially Robust Neural Legal Judgement Systems. (11%)Rohit Raj; V Susheela Devi
Virtual Prompt Injection for Instruction-Tuned Large Language Models. (10%)Jun Yan; Vikas Yadav; Shiyang Li; Lichang Chen; Zheng Tang; Hai Wang; Vijay Srinivasan; Xiang Ren; Hongxia Jin
Noisy Self-Training with Data Augmentations for Offensive and Hate Speech Detection Tasks. (1%)João A. Leite; Carolina Scarton; Diego F. Silva
2023-07-30
Theoretically Principled Trade-off for Stateful Defenses against Query-Based Black-Box Attacks. (99%)Ashish Hooda; Neal Mangaokar; Ryan Feng; Kassem Fawaz; Somesh Jha; Atul Prakash
Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples. (99%)Qiufan Ji; Lin Wang; Cong Shi; Shengshan Hu; Yingying Chen; Lichao Sun
Probabilistically robust conformal prediction. (91%)Subhankar Ghosh; Yuanjie Shi; Taha Belkhouja; Yan Yan; Jana Doppa; Brian Jones
On Updating Static Output Feedback Controllers Under State-Space Perturbation. (1%)MirSaleh Bahavarnia; Ahmad F. Taha
2023-07-29
You Can Backdoor Personalized Federated Learning. (92%)Tiandi Ye; Cen Chen; Yinggui Wang; Xiang Li; Ming Gao
On Neural Network approximation of ideal adversarial attack and convergence of adversarial training. (92%)Rajdeep Haldar; Qifan Song
Exposing Hidden Attackers in Industrial Control Systems using Micro-distortions. (41%)Suman Sourav; Binbin Chen
2023-07-28
Beating Backdoor Attack at Its Own Game. (97%)Min Liu; Alberto Sangiovanni-Vincentelli; Xiangyu Yue
Adversarial training for tabular data with attack propagation. (67%)Tiago Leon Melo; João Bravo; Marco O. P. Sampaio; Paolo Romano; Hugo Ferreira; João Tiago Ascensão; Pedro Bizarro
Improving Realistic Worst-Case Performance of NVCiM DNN Accelerators through Training with Right-Censored Gaussian Noise. (10%)Zheyu Yan; Yifan Qin; Wujie Wen; Xiaobo Sharon Hu; Yiyu Shi
What can Discriminator do? Towards Box-free Ownership Verification of Generative Adversarial Network. (4%)Ziheng Huang; Boheng Li; Yan Cai; Run Wang; Shangwei Guo; Liming Fang; Jing Chen; Lina Wang
2023-07-27
R-LPIPS: An Adversarially Robust Perceptual Similarity Metric. (99%)Sara Ghazanfari; Siddharth Garg; Prashanth Krishnamurthy; Farshad Khorrami; Alexandre Araujo
Universal and Transferable Adversarial Attacks on Aligned Language Models. (99%)Andy Zou; Zifan Wang; Nicholas Carlini; Milad Nasr; J. Zico Kolter; Matt Fredrikson
When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning. (99%)Yuchen Sun; Qianqian Xu; Zitai Wang; Qingming Huang
Backdoor Attacks for In-Context Learning with Language Models. (97%)Nikhil Kandpal; Matthew Jagielski; Florian Tramèr; Nicholas Carlini
FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks. (93%)Buse G. A. Tekgul; N. Asokan
Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World. (92%)Xingxing Wei; Yao Huang; Yitong Sun; Jie Yu
NSA: Naturalistic Support Artifact to Boost Network Confidence. (62%)Abhijith Sharma; Phil Munz; Apurva Narayan
SEV-Step: A Single-Stepping Framework for AMD-SEV. (3%)Luca Wilke; Jan Wichelmann; Anja Rabich; Thomas Eisenbarth
Decoding the Secrets of Machine Learning in Malware Classification: A Deep Dive into Datasets, Feature Extraction, and Model Performance. (1%)Savino Dambra; Yufei Han; Simone Aonzo; Platon Kotzias; Antonino Vitale; Juan Caballero; Davide Balzarotti; Leyla Bilge
AC-Norm: Effective Tuning for Medical Image Analysis via Affine Collaborative Normalization. (1%)Chuyan Zhang; Yuncheng Yang; Hao Zheng; Yun Gu
2023-07-26
Enhanced Security against Adversarial Examples Using a Random Ensemble of Encrypted Vision Transformer Models. (99%)Ryota Iijima; Miki Tanaka; Sayaka Shiota; Hitoshi Kiya
Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. (99%)Dong Lu; Zhiqiang Wang; Teng Wang; Weili Guan; Hongchang Gao; Feng Zheng
Defending Adversarial Patches via Joint Region Localizing and Inpainting. (99%)Junwen Chen; Xingxing Wei
Lateral-Direction Localization Attack in High-Level Autonomous Driving: Domain-Specific Defense Opportunity via Lane Detection. (67%)Junjie Shen; Yunpeng Luo; Ziwen Wan; Qi Alfred Chen
Plug and Pray: Exploiting off-the-shelf components of Multi-Modal Models. (33%)Erfan Shayegani; Yue Dong; Nael Abu-Ghazaleh
Coupled-Space Attacks against Random-Walk-based Anomaly Detection. (11%)Yuni Lai; Marcin Waniek; Liying Li; Jingwen Wu; Yulin Zhu; Tomasz P. Michalak; Talal Rahwan; Kai Zhou
FakeTracer: Proactively Defending Against Face-swap DeepFakes via Implanting Traces in Training. (5%)Pu Sun; Honggang Qi; Yuezun Li; Siwei Lyu
Open Image Content Disarm And Reconstruction. (1%)Eli Belkind; Ran Dubin; Amit Dvir
2023-07-25
On the unreasonable vulnerability of transformers for image restoration -- and an easy fix. (99%)Shashank Agnihotri; Kanchana Vaishnavi Gandikota; Julia Grabinski; Paramanand Chandramouli; Margret Keuper
Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation. (99%)Junbin Fang; Canjian Jiang; You Jiang; Puxi Lin; Zhaojie Chen; Yujing Sun; Siu-Ming Yiu; Zoe L. Jiang
Efficient Estimation of Average-Case Robustness for Multi-Class Classification. (13%)Tessa Han; Suraj Srinivas; Himabindu Lakkaraju
Foundational Models Defining a New Era in Vision: A Survey and Outlook. (10%)Muhammad Awais; Muzammal Naseer; Salman Khan; Rao Muhammad Anwer; Hisham Cholakkal; Mubarak Shah; Ming-Hsuan Yang; Fahad Shahbaz Khan
2023-07-24
Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations. (99%)Yi Han; Matthew Chan; Eric Wengrowski; Zhuohuan Li; Nils Ole Tippenhauer; Mani Srivastava; Saman Zonouz; Luis Garcia
Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation. (99%)Neel Bhandari; Pin-Yu Chen
Data-free Black-box Attack based on Diffusion Model. (62%)Mingwen Shao; Lingzhuang Meng; Yuanjian Qiao; Lixu Zhang; Wangmeng Zuo
Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs. (56%)Zhakshylyk Nurlanov; Frank R. Schmidt; Florian Bernard
An Estimator for the Sensitivity to Perturbations of Deep Neural Networks. (31%)Naman Maheshwari; Nicholas Malaya; Scott Moe; Jaydeep P. Kulkarni; Sudhanva Gurumurthi
Cyber Deception against Zero-day Attacks: A Game Theoretic Approach. (12%)Md Abu Sayed; Ahmed H. Anwar; Christopher Kiekintveld; Branislav Bosansky; Charles Kamhoua
Malware Resistant Data Protection in Hyper-connected Networks: A survey. (10%)Jannatul Ferdous; Rafiqul Islam; Maumita Bhattacharya; Md Zahidul Islam
Investigating the Robustness of Sequential Recommender Systems Against Training Data Perturbations. (9%)Filippo Betello; Federico Siciliano; Pushkar Mishra; Fabrizio Silvestri
Digital Twins for Moving Target Defense Validation in AC Microgrids. (1%)Suman Rath; Subham Sahoo; Shamik Sengupta
Towards Bridging the FL Performance-Explainability Trade-Off: A Trustworthy 6G RAN Slicing Use-Case. (1%)Swastika Roy; Hatim Chergui; Christos Verikoukis
Learning Provably Robust Estimators for Inverse Problems via Jittering. (1%)Anselm Krainovic; Mahdi Soltanolkotabi; Reinhard Heckel
2023-07-23
Towards Generic and Controllable Attacks Against Object Detection. (99%)Guopeng Li; Yue Xu; Jian Ding; Gui-Song Xia
Downstream-agnostic Adversarial Examples. (99%)Ziqi Zhou; Shengshan Hu; Ruizhi Zhao; Qian Wang; Leo Yu Zhang; Junhui Hou; Hai Jin
AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models. (99%)Xuelong Dai; Kaisheng Liang; Bin Xiao
Gradient-Based Word Substitution for Obstinate Adversarial Examples Generation in Language Models. (98%)Yimu Wang; Peng Shi; Hongyang Zhang
A First Look at On-device Models in iOS Apps. (84%)Han Hu; Yujin Huang; Qiuyuan Chen; Terry Tue Zhuo; Chunyang Chen
Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training. (83%)Gege Qi; Yuefeng Chen; Xiaofeng Mao; Xiaojun Jia; Ranjie Duan; Rong Zhang; Hui Xue
Cross Contrastive Feature Perturbation for Domain Generalization. (1%)Chenming Li; Daoan Zhang; Wenjian Huang; Jianguo Zhang
2023-07-22
Backdoor Attacks against Voice Recognition Systems: A Survey. (13%)Baochen Yan; Jiahe Lan; Zheng Yan
2023-07-21
Fast Adaptive Test-Time Defense with Robust Features. (99%)Anurag Singh; Mahalakshmi Sabanayagam; Krikamol Muandet; Debarghya Ghoshdastidar
Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks. (99%)Eldor Abdukhamidov; Mohammed Abuhamad; Simon S. Woo; Eric Chan-Tin; Tamer Abuhmed
FMT: Removing Backdoor Feature Maps via Feature Map Testing in Deep Neural Networks. (81%)Dong Huang; Qingwen Bu; Yahao Qing; Yichao Fu; Heming Cui
Improving Viewpoint Robustness for Visual Recognition via Adversarial Training. (80%)Shouwei Ruan; Yinpeng Dong; Hang Su; Jianteng Peng; Ning Chen; Xingxing Wei
OUTFOX: LLM-generated Essay Detection through In-context Learning with Adversarially Generated Examples. (62%)Ryuto Koike; Masahiro Kaneko; Naoaki Okazaki
HybridAugment++: Unified Frequency Spectra Perturbations for Model Robustness. (26%)Mehmet Kerim Yucel; Ramazan Gokberk Cinbis; Pinar Duygulu
Mitigating Communications Threats in Decentralized Federated Learning through Moving Target Defense. (1%)Enrique Tomás Martínez Beltrán; Pedro Miguel Sánchez Sánchez; Sergio López Bernal; Gérôme Bovet; Manuel Gil Pérez; Gregorio Martínez Pérez; Alberto Huertas Celdrán
2023-07-20
A LLM Assisted Exploitation of AI-Guardian. (98%)Nicholas Carlini
Improving Transferability of Adversarial Examples via Bayesian Attacks. (98%)Qizhang Li; Yiwen Guo; Xiaochen Yang; Wangmeng Zuo; Hao Chen
Adversarial attacks for mixtures of classifiers. (54%)Lucas Gnecco Heredia; Benjamin Negrevergne; Yann Chevaleyre
PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks. (33%)Shiwei Ding; Lan Zhang; Miao Pan; Xiaoyong Yuan
A Holistic Assessment of the Reliability of Machine Learning Systems. (4%)Anthony Corso; David Karamadian; Romeo Valentin; Mary Cooper; Mykel J. Kochenderfer
Making Pre-trained Language Models both Task-solvers and Self-calibrators. (2%)Yangyi Chen; Xingyao Wang; Heng Ji
Boundary State Generation for Testing and Improvement of Autonomous Driving Systems. (1%)Matteo Biagiola; Paolo Tonella
A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency. (1%)Jiawei Shao; Zijian Li; Wenqiang Sun; Tailin Zhou; Yuchang Sun; Lumin Liu; Zehong Lin; Yuyi Mao; Jun Zhang
2023-07-19
Backdoor Attack against Object Detection with Clean Annotation. (93%)Yize Cheng; Wenbin Hu; Minhao Cheng
Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples. (92%)Shaokui Wei; Mingda Zhang; Hongyuan Zha; Baoyuan Wu
Rethinking Backdoor Attacks. (83%)Alaa Khaddaj; Guillaume Leclerc; Aleksandar Makelov; Kristian Georgiev; Hadi Salman; Andrew Ilyas; Aleksander Madry
Towards Building More Robust Models with Frequency Bias. (81%)Qingwen Bu; Dong Huang; Heming Cui
Reinforcing POD based model reduction techniques in reaction-diffusion complex networks using stochastic filtering and pattern recognition. (26%)Abhishek Ajayakumar; Soumyendu Raha
2023-07-18
CertPri: Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space. (67%)Haibin Zheng; Jinyin Chen; Haibo Jin
FedDefender: Client-Side Attack-Tolerant Federated Learning. (50%)Sungwon Park; Sungwon Han; Fangzhao Wu; Sundong Kim; Bin Zhu; Xing Xie; Meeyoung Cha
Can Neural Network Memorization Be Localized? (4%)Pratyush Maini; Michael C. Mozer; Hanie Sedghi; Zachary C. Lipton; J. Zico Kolter; Chiyuan Zhang
2023-07-17
Analyzing the Impact of Adversarial Examples on Explainable Machine Learning. (99%)Prathyusha Devabhakthini; Sasmita Parida; Raj Mani Shukla; Suvendu Chandan Nayak
Adversarial Attacks on Traffic Sign Recognition: A Survey. (98%)Svetlana Pavlitska; Nico Lambing; J. Marius Zöllner
Discretization-based ensemble model for robust learning in IoT. (87%)Anahita Namvar; Chandra Thapa; Salil S. Kanhere
Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model. (83%)Rongke Liu; Dong Wang; Yizhi Ren; Zhen Wang; Kaitian Guo; Qianqian Qin; Xiaolei Liu
Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems. (22%)Xugui Zhou; Anqi Chen; Maxfield Kouzel; Haotian Ren; Morgan McCarty; Cristina Nita-Rotaru; Homa Alemzadeh
On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization. (2%)Akshay Mehra; Yunbei Zhang; Bhavya Kailkhura; Jihun Hamm
A Machine Learning based Empirical Evaluation of Cyber Threat Actors High Level Attack Patterns over Low level Attack Patterns in Attributing Attacks. (1%)Umara Noor; Sawera Shahid; Rimsha Kanwal; Zahid Rashid
2023-07-16
Towards Viewpoint-Invariant Visual Recognition via Adversarial Training. (83%)Shouwei Ruan; Yinpeng Dong; Hang Su; Jianteng Peng; Ning Chen; Xingxing Wei
Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound. (73%)Hanbo Cai; Pengcheng Zhang; Hai Dong; Yan Xiao; Stefanos Koffas; Yiming Li
Diffusion to Confusion: Naturalistic Adversarial Patch Generation Based on Diffusion Model for Object Detector. (10%)Shuo-Yen Lin; Ernie Chu; Che-Hsien Lin; Jun-Cheng Chen; Jia-Ching Wang
Lipschitz Continuous Algorithms for Covering Problems. (1%)Soh Kumabe; Yuichi Yoshida
2023-07-15
On the Robustness of Split Learning against Adversarial Attacks. (99%)Mingyuan Fan; Cen Chen; Chengyu Wang; Wenmeng Zhou; Jun Huang
Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training. (99%)Yechao Zhang; Shengshan Hu; Leo Yu Zhang; Junyu Shi; Minghui Li; Xiaogeng Liu; Wei Wan; Hai Jin
Unified Adversarial Patch for Cross-modal Attacks in the Physical World. (92%)Xingxing Wei; Yao Huang; Yitong Sun; Jie Yu
MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots. (2%)Gelei Deng; Yi Liu; Yuekang Li; Kailong Wang; Ying Zhang; Zefeng Li; Haoyu Wang; Tianwei Zhang; Yang Liu
2023-07-14
Vulnerability-Aware Instance Reweighting For Adversarial Training. (99%)Olukorede Fakorede; Ashutosh Kumar Nirala; Modeste Atsague; Jin Tian
Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning. (99%)Byung-Kwan Lee; Junho Kim; Yong Man Ro
On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks. (99%)Hafsa Bousbiat; Yassine Himeur; Abbes Amira; Wathiq Mansoor
RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World. (98%)Donghua Wang; Wen Yao; Tingsong Jiang; Chao Li; Xiaoqian Chen
Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation. (98%)Asif Hanif; Muzammal Naseer; Salman Khan; Mubarak Shah; Fahad Shahbaz Khan
Alleviating the Effect of Data Imbalance on Adversarial Training. (92%)Guanlin Li; Guowen Xu; Tianwei Zhang
Structured Pruning of Neural Networks for Constraints Learning. (76%)Matteo Cacciola; Antonio Frangioni; Andrea Lodi
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy. (68%)Zihao Zhu; Mingda Zhang; Shaokui Wei; Li Shen; Yanbo Fan; Baoyuan Wu
Erasing, Transforming, and Noising Defense Network for Occluded Person Re-Identification. (31%)Neng Dong; Liyan Zhang; Shuanglin Yan; Hao Tang; Jinhui Tang
Omnipotent Adversarial Training in the Wild. (9%)Guanlin Li; Kangjie Chen; Yuan Xu; Han Qiu; Tianwei Zhang
Certified Robustness for Large Language Models with Self-Denoising. (5%)Zhen Zhang; Guanhua Zhang; Bairu Hou; Wenqi Fan; Qing Li; Sijia Liu; Yang Zhang; Shiyu Chang
2023-07-13
Multi-objective Evolutionary Search of Variable-length Composite Semantic Perturbations. (99%)Jialiang Sun; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Introducing Foundation Models as Surrogate Models: Advancing Towards More Practical Adversarial Attacks. (99%)Jiaming Zhang; Jitao Sang; Qi Yi; Changsheng Xu
Effective Prompt Extraction from Language Models. (4%)Yiming Zhang; Nicholas Carlini; Daphne Ippolito
Layer-wise Linear Mode Connectivity. (1%)Linara Adilova; Maksym Andriushchenko; Michael Kamp; Asja Fischer; Martin Jaggi
Defeating Proactive Jammers Using Deep Reinforcement Learning for Resource-Constrained IoT Networks. (1%)Abubakar Sani Ali; Shimaa Naser; Sami Muhaidat
Towards Traitor Tracing in Black-and-White-Box DNN Watermarking with Tardos-based Codes. (1%)Elena Rodriguez-Lois; Fernando Perez-Gonzalez
2023-07-12
Single-Class Target-Specific Attack against Interpretable Deep Learning Systems. (99%)Eldor Abdukhamidov; Mohammed Abuhamad; George K. Thiruvathukal; Hyoungshick Kim; Tamer Abuhmed
Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems. (99%)Eldor Abdukhamidov; Mohammed Abuhamad; Simon S. Woo; Eric Chan-Tin; Tamer Abuhmed
Rational Neural Network Controllers. (2%)Matthew Newton; Antonis Papachristodoulou
Misclassification in Automated Content Analysis Causes Bias in Regression. Can We Fix It? Yes We Can! (1%)Nathan TeBlunthuis; Valerie Hase; Chung-Hong Chan
A Bayesian approach to quantifying uncertainties and improving generalizability in traffic prediction models. (1%)Agnimitra Sengupta; Sudeepta Mondal; Adway Das; S. Ilgin Guler
2023-07-11
ATWM: Defense against adversarial malware based on adversarial training. (99%)Kun Li; Fan Zhang; Wei Guo
Membership Inference Attacks on DNNs using Adversarial Perturbations. (89%)Hassan Ali; Adnan Qayyum; Ala Al-Fuqaha; Junaid Qadir
Random-Set Convolutional Neural Network (RS-CNN) for Epistemic Deep Learning. (12%)Shireen Kudukkil Manchingal; Muhammad Mubashar; Kaizheng Wang; Keivan Shariatmadar; Fabio Cuzzolin
On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models. (10%)Marija Ivanovska; Vitomir Štruc
Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection. (2%)Hao Fu; Prashanth Krishnamurthy; Siddharth Garg; Farshad Khorrami
Scale Alone Does not Improve Mechanistic Interpretability in Vision Models. (1%)Roland S. Zimmermann; Thomas Klein; Wieland Brendel
Memorization Through the Lens of Curvature of Loss Function Around Samples. (1%)Isha Garg; Deepak Ravikumar; Kaushik Roy
The Butterfly Effect in Artificial Intelligence Systems: Implications for AI Bias and Fairness. (1%)Emilio Ferrara
2023-07-10
Practical Trustworthiness Model for DNN in Dedicated 6G Application. (33%)Anouar Nechi; Ahmed Mahmoudi; Christoph Herold; Daniel Widmer; Thomas Kürner; Mladen Berekovic; Saleh Mulhem
Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology. (1%)Guillaume Vray; Devavrat Tomar; Jean-Philippe Thiran; Behzad Bozorgtabar
2023-07-09
GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty. (98%)Tao Wu; Tie Luo; Donald C. Wunsch
Enhancing Adversarial Robustness via Score-Based Optimization. (98%)Boya Zhang; Weijian Luo; Zhihua Zhang
2023-07-08
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification. (99%)Huafeng Li; Le Xu; Yafei Zhang; Dapeng Tao; Zhengtao Yu
Random Position Adversarial Patch for Vision Transformers. (83%)Mingzhen Shao
Robust Ranking Explanations. (38%)Chao Chen; Chenghua Guo; Guixiang Ma; Ming Zeng; Xi Zhang; Sihong Xie
2023-07-07
A Theoretical Perspective on Subnetwork Contributions to Adversarial Robustness. (81%)Jovon Craig; Josh Andle; Theodore S. Nowak; Salimeh Yasaei Sekeh
Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints. (68%)Matthias Freiberger; Peter Kun; Christian Igel; Anders Sundnes Løvlie; Sebastian Risi
Scalable Membership Inference Attacks via Quantile Regression. (33%)Martin Bertran; Shuai Tang; Michael Kearns; Jamie Morgenstern; Aaron Roth; Zhiwei Steven Wu
RADAR: Robust AI-Text Detection via Adversarial Learning. (5%)Xiaomeng Hu; Pin-Yu Chen; Tsung-Yi Ho
Generation of Time-Varying Impedance Attacks Against Haptic Shared Control Steering Systems. (1%)Alireza Mohammadi; Hafiz Malik
2023-07-06
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks. (99%)Xu Han; Anmin Liu; Chenxuan Yao; Yanbo Fan; Kun He
NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic. (92%)Zi'ou Zheng; Xiaodan Zhu
Quantification of Uncertainty with Adversarial Models. (68%)Kajetan Schweighofer; Lukas Aichberger; Mykyta Ielanskyi; Günter Klambauer; Sepp Hochreiter
A Vulnerability of Attribution Methods Using Pre-Softmax Scores. (41%)Miguel Lerma; Mirtha Lucas
Probabilistic and Semantic Descriptions of Image Manifolds and Their Applications. (8%)Peter Tu; Zhaoyuan Yang; Richard Hartley; Zhiwei Xu; Jing Zhang; Yiwei Fu; Dylan Campbell; Jaskirat Singh; Tianyu Wang
T-MARS: Improving Visual Representations by Circumventing Text Feature Learning. (1%)Pratyush Maini; Sachin Goyal; Zachary C. Lipton; J. Zico Kolter; Aditi Raghunathan
2023-07-05
Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and their Impact. (98%)Jaydip Sen; Subhasis Dasgupta
DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications. (69%)Adam Ivankay; Mattia Rigotti; Pascal Frossard
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality. (67%)Peter Lorenz; Ricard Durall; Janis Keuper
GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations. (62%)Julia Lust; Alexandru P. Condurache
Securing Cloud FPGAs Against Power Side-Channel Attacks: A Case Study on Iterative AES. (5%)Nithyashankari Gummidipoondi Jayasankaran; Hao Guo; Satwik Patnaik; Jeyavijayan Rajendran; Jiang Hu
On the Adversarial Robustness of Generative Autoencoders in the Latent Space. (3%)Mingfei Lu; Badong Chen
2023-07-04
SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification. (99%)Junjie Wu; Dit-Yan Yeung
LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack. (83%)Joonkyo Shim; Hyunsoo Yoon
Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection. (68%)Delyan Boychev
Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction. (45%)Zitao Chen; Karthik Pattabiraman
Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling. (26%)Zhanhao Hu; Wenda Chu; Xiaopei Zhu; Hui Zhang; Bo Zhang; Xiaolin Hu
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems. (13%)Shuyi Wang; Guido Zuccon
Machine Learning-Based Intrusion Detection: Feature Selection versus Feature Extraction. (1%)Vu-Duc Ngo; Tuan-Cuong Vuong; Thien Van Luong; Hung Tran
Synthetic is all you need: removing the auxiliary data assumption for membership inference attacks against synthetic data. (1%)Florent Guépin; Matthieu Meeus; Ana-Maria Cretu; Yves-Alexandre de Montjoye
2023-07-03
Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems. (99%)Debopam Sanyal; Jui-Tse Hung; Manav Agrawal; Prahlad Jasti; Shahab Nikkhoo; Somesh Jha; Tianhao Wang; Sibin Mohan; Alexey Tumanov
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives. (83%)Yudong Gao; Honglong Chen; Peng Sun; Junjian Li; Anqing Zhang; Zhibo Wang
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks. (62%)Aysha Thahsin Zahir Ismail; Raj Mani Shukla
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? (62%)Fnu Suya; Xiao Zhang; Yuan Tian; David Evans
Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives. (45%)Daniele Lunghi; Alkis Simitsis; Olivier Caelen; Gianluca Bontempi
Understanding the Transferability of Representations via Task-Relatedness. (13%)Akshay Mehra; Yunbei Zhang; Jihun Hamm
Enhancing the Robustness of QMIX against State-adversarial Attacks. (4%)Weiran Guo; Guanjun Liu; Ziyuan Zhou; Ling Wang; Jiacun Wang
Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration. (1%)Kemal Oksuz; Tom Joy; Puneet K. Dokania
2023-07-02
Query-Efficient Decision-based Black-Box Patch Attack. (99%)Zhaoyu Chen; Bo Li; Shuang Wu; Shouhong Ding; Wenqiang Zhang
Interpretability and Transparency-Driven Detection and Transformation of Textual Adversarial Examples (IT-DT). (99%)Bushra Sabir; M. Ali Babar; Sharif Abuadbba
From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. (10%)Maanak Gupta; CharanKumar Akiri; Kshitiz Aryal; Eli Parker; Lopamudra Praharaj
CLIMAX: An exploration of Classifier-Based Contrastive Explanations. (2%)Praharsh Nanavati; Ranjitha Prasad
2023-07-01
Common Knowledge Learning for Generating Transferable Adversarial Examples. (99%)Ruijie Yang; Yuanfang Guo; Junfu Wang; Jiantao Zhou; Yunhong Wang
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey. (99%)Hanieh Naderi; Ivan V. Bajić
Brightness-Restricted Adversarial Attack Patch. (75%)Mingzhen Shao
Fedward: Flexible Federated Backdoor Defense Framework with Non-IID Data. (54%)Zekai Chen; Fuyi Wang; Zhiwei Zheng; Ximeng Liu; Yujie Lin
Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training. (26%)Dario Lazzaro; Antonio Emanuele Cinà; Maura Pintor; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency. (13%)Yan Wang; Yuhang Li; Ruihao Gong; Aishan Liu; Yanfei Wang; Jian Hu; Yongqiang Yao; Yunchen Zhang; Tianzi Xiao; Fengwei Yu; Xianglong Liu
Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD. (10%)Anvith Thudi; Hengrui Jia; Casey Meehan; Ilia Shumailov; Nicolas Papernot
CasTGAN: Cascaded Generative Adversarial Network for Realistic Tabular Data Synthesis. (5%)Abdallah Alshantti; Damiano Varagnolo; Adil Rasheed; Aria Rahmati; Frank Westad
FedDefender: Backdoor Attack Defense in Federated Learning. (2%)Waris Gill; Ali Anwar; Muhammad Ali Gulzar
Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning. (1%)Md Tamjid Hossain; Hung La
2023-06-30
Defense against Adversarial Cloud Attack on Remote Sensing Salient Object Detection. (99%)Huiming Sun; Lan Fu; Jinlong Li; Qing Guo; Zibo Meng; Tianyun Zhang; Yuewei Lin; Hongkai Yu
Efficient Backdoor Removal Through Natural Gradient Fine-tuning. (8%)Nazmul Karim; Abdullah Al Arafat; Umar Khalid; Zhishan Guo; Naznin Rahnavard
Minimum-norm Sparse Perturbations for Opacity in Linear Systems. (1%)Varkey M John; Vaibhav Katewa
2023-06-29
Post-train Black-box Defense via Bayesian Boundary Correction. (99%)He Wang; Yunfeng Diao
Towards Optimal Randomized Strategies in Adversarial Example Game. (96%)Jiahao Xie; Chao Zhang; Weijie Liu; Wensong Bai; Hui Qian
Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features. (13%)Mingli Zhu; Shaokui Wei; Hongyuan Zha; Baoyuan Wu
NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes. (1%)Hao-Lun Sun; Lei Hsiung; Nandhini Chandramoorthy; Pin-Yu Chen; Tsung-Yi Ho
2023-06-28
Boosting Adversarial Transferability with Learnable Patch-wise Masks. (99%)Xingxing Wei; Shiji Zhao
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack. (99%)Jie Ning; Yao Li; Zhichang Guo
Mitigating Accuracy-Robustness Trade-off via Balanced Multi-Teacher Adversarial Distillation. (99%)Shiji Zhao; Xizhe Wang; Xingxing Wei
Group-based Robustness: A General Framework for Customized Robustness in the Real World. (98%)Weiran Lin; Keane Lucas; Neo Eyal; Lujo Bauer; Michael K. Reiter; Mahmood Sharif
Distributional Modeling for Location-Aware Adversarial Patches. (98%)Xingxing Wei; Shouwei Ruan; Yinpeng Dong; Hang Su
Enrollment-stage Backdoor Attacks on Speaker Recognition Systems via Adversarial Ultrasound. (98%)Xinfeng Li; Junning Ze; Chen Yan; Yushi Cheng; Xiaoyu Ji; Wenyuan Xu
Does Saliency-Based Training bring Robustness for Deep Neural Networks in Image Classification? (93%)Ali Karkehabadi
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks. (50%)Wenxiao Wang; Soheil Feizi
On the Exploitability of Instruction Tuning. (13%)Manli Shu; Jiongxiao Wang; Chen Zhu; Jonas Geiping; Chaowei Xiao; Tom Goldstein
2023-06-27
Advancing Adversarial Training by Injecting Booster Signal. (98%)Hong Joo Lee; Youngjoon Yu; Yong Man Ro
IMPOSITION: Implicit Backdoor Attack through Scenario Injection. (96%)Mozhgan Pourkeshavarz; Mohammad Sabokrou; Amir Rasouli
Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions. (92%)Lukas Gosch; Simon Geisler; Daniel Sturm; Bertrand Charpentier; Daniel Zügner; Stephan Günnemann
Robust Proxy: Improving Adversarial Robustness by Robust Proxy Learning. (89%)Hong Joo Lee; Yong Man Ro
Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability. (87%)Marco Alecci; Mauro Conti; Francesco Marchiori; Luca Martinelli; Luca Pajola
[Re] Double Sampling Randomized Smoothing. (69%)Aryan Gupta; Sarthak Gupta; Abhay Kumar; Harsh Dugar
Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets. (68%)Yimu Wang; Dinghuai Zhang; Yihan Wu; Heng Huang; Hongyang Zhang
Catch Me If You Can: A New Low-Rate DDoS Attack Strategy Disguised by Feint. (26%)Tianyang Cai; Yuqi Li; Tao Jia; Leo Yu Zhang; Zheng Yang
Shilling Black-box Review-based Recommender Systems through Fake Review Generation. (1%)Hung-Yun Chiang; Yi-Syuan Chen; Yun-Zhu Song; Hong-Han Shuai; Jason S. Chang
2023-06-26
On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection. (99%)Songyang Gao; Shihan Dou; Qi Zhang; Xuanjing Huang; Jin Ma; Ying Shan
Are aligned neural networks adversarially aligned? (99%)Nicholas Carlini; Milad Nasr; Christopher A. Choquette-Choo; Matthew Jagielski; Irena Gao; Anas Awadalla; Pang Wei Koh; Daphne Ippolito; Katherine Lee; Florian Tramer; Ludwig Schmidt
The race to robustness: exploiting fragile models for urban camouflage and the imperative for machine learning security. (92%)Harriet Farlow; Matthew Garratt; Gavin Mount; Tim Lynar
3D-Aware Adversarial Makeup Generation for Facial Privacy Protection. (92%)Yueming Lyu; Yue Jiang; Ziwen He; Bo Peng; Yunfan Liu; Jing Dong
Towards Sybil Resilience in Decentralized Learning. (80%)Thomas Werthenbach; Johan Pouwelse
On the Resilience of Machine Learning-Based IDS for Automotive Networks. (78%)Ivo Zenden; Han Wang; Alfonso Iacovazzi; Arash Vahidi; Rolf Blom; Shahid Raza
DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization. (75%)Songyang Gao; Shihan Dou; Yan Liu; Xiao Wang; Qi Zhang; Zhongyu Wei; Jin Ma; Ying Shan
PWSHAP: A Path-Wise Explanation Model for Targeted Variables. (8%)Lucile Ter-Minassian; Oscar Clivio; Karla Diaz-Ordaz; Robin J. Evans; Chris Holmes
2023-06-25
A Spectral Perspective towards Understanding and Improving Adversarial Robustness. (99%)Binxiao Huang; Rui Lin; Chaofan Tao; Ngai Wong
On Evaluating the Adversarial Robustness of Semantic Segmentation Models. (99%)Levente Halmosi; Mark Jelasity
Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training. (98%)Fan Liu; Weijia Zhang; Hao Liu
Enhancing Adversarial Training via Reweighting Optimization Trajectory. (97%)Tianjin Huang; Shiwei Liu; Tianlong Chen; Meng Fang; Li Shen; Vlaod Menkovski; Lu Yin; Yulong Pei; Mykola Pechenizkiy
RobuT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations. (87%)Yilun Zhao; Chen Zhao; Linyong Nan; Zhenting Qi; Wenlin Zhang; Xiangru Tang; Boyu Mi; Dragomir Radev
Computational Asymmetries in Robust Classification. (80%)Samuele Marro; Michele Lombardi
2023-06-24
Boosting Model Inversion Attacks with Adversarial Examples. (98%)Shuai Zhou; Tianqing Zhu; Dayong Ye; Xin Yu; Wanlei Zhou
Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks. (98%)Pranav Dahiya; Ilia Shumailov; Ross Anderson
Similarity Preserving Adversarial Graph Contrastive Learning. (96%)Yeonjun In; Kanghoon Yoon; Chanyoung Park
Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks. (70%)Zeming Wei; Xiyue Zhang; Yihao Zhang; Meng Sun
2023-06-23
Creating Valid Adversarial Examples of Malware. (99%)Matouš Kozák; Martin Jureček; Mark Stamp; Fabio Di Troia
Adversarial Robustness Certification for Bayesian Neural Networks. (92%)Matthew Wicker; Andrea Patane; Luca Laurenti; Marta Kwiatkowska
A First Order Meta Stackelberg Method for Robust Federated Learning. (10%)Yunian Pan; Tao Li; Henger Li; Tianyi Xu; Zizhan Zheng; Quanyan Zhu
2023-06-22
Visual Adversarial Examples Jailbreak Large Language Models. (99%)Xiangyu Qi; Kaixuan Huang; Ashwinee Panda; Mengdi Wang; Prateek Mittal
Towards quantum enhanced adversarial robustness in machine learning. (99%)Maxwell T. West; Shu-Lok Tsang; Jia S. Low; Charles D. Hill; Christopher Leckie; Lloyd C. L. Hollenberg; Sarah M. Erfani; Muhammad Usman
Rethinking the Backward Propagation for Adversarial Transferability. (99%)Xiaosen Wang; Kangheng Tong; Kun He
Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces. (96%)Fahad Shamshad; Koushik Srivatsan; Karthik Nandakumar
Adversarial Resilience in Sequential Prediction via Abstention. (93%)Surbhi Goel; Steve Hanneke; Shay Moran; Abhishek Shetty
Document Image Cleaning using Budget-Aware Black-Box Approximation. (92%)Ganesh Tata; Katyani Singh; Eric Van Oeveren; Nilanjan Ray
Anticipatory Thinking Challenges in Open Worlds: Risk Management. (81%)Adam Amos-Binks; Dustin Dannenhauer; Leilani H. Gilpin
Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models. (80%)Francesco Croce; Naman D Singh; Matthias Hein
Cross-lingual Cross-temporal Summarization: Dataset, Models, Evaluation. (45%)Ran Zhang; Jihed Ouni; Steffen Eger
A First Order Meta Stackelberg Method for Robust Federated Learning (Technical Report). (33%)Henger Li; Tianyi Xu; Tao Li; Yunian Pan; Quanyan Zhu; Zizhan Zheng
Impacts and Risk of Generative AI Technology on Cyber Defense. (4%)Subash Neupane; Ivan A. Fernandez; Sudip Mittal; Shahram Rahimi
2023-06-21
Adversarial Attacks Neutralization via Data Set Randomization. (99%)Mouna Rabhi; Roberto Di Pietro
A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking. (92%)Shaohui Mei; Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Lap-Pui Chau
Sample Attackability in Natural Language Adversarial Attacks. (92%)Vyas Raina; Mark Gales
Revisiting Image Classifier Training for Improved Certified Robust Defense against Adversarial Patches. (76%)Aniruddha Saha; Shuhua Yu; Arash Norouzzadeh; Wan-Yi Lin; Chaithanya Kumar Mummadi
DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum. (47%)Xiaolan Gu; Ming Li; Li Xiong
FFCV: Accelerating Training by Removing Data Bottlenecks. (3%)Guillaume Leclerc; Andrew Ilyas; Logan Engstrom; Sung Min Park; Hadi Salman; Aleksander Madry
2023-06-20
Reversible Adversarial Examples with Beam Search Attack and Grayscale Invariance. (99%)Haodong Zhang; Chi Man Pun; Xia Du
Universal adversarial perturbations for multiple classification tasks with quantum classifiers. (99%)Yun-Zhong Qiu
Physics-constrained Attack against Convolution-based Human Motion Prediction. (99%)Chengxu Duan; Zhicheng Zhang; Xiaoli Liu; Yonghao Dang; Jianqin Yin
FDINet: Protecting against DNN Model Extraction via Feature Distortion Index. (50%)Hongwei Yao; Zheng Li; Haiqin Weng; Feng Xue; Zhan Qin; Kui Ren
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. (33%)Boxin Wang; Weixin Chen; Hengzhi Pei; Chulin Xie; Mintong Kang; Chenhui Zhang; Chejian Xu; Zidi Xiong; Ritik Dutta; Rylan Schaeffer; Sang T. Truong; Simran Arora; Mantas Mazeika; Dan Hendrycks; Zinan Lin; Yu Cheng; Sanmi Koyejo; Dawn Song; Bo Li
Towards a robust and reliable deep learning approach for detection of compact binary mergers in gravitational wave data. (3%)Shreejit Jadhav; Mihir Shrivastava; Sanjit Mitra
Mitigating Speculation-based Attacks through Configurable Hardware/Software Co-design. (1%)Ali Hajiabadi; Archit Agarwal; Andreas Diavastos; Trevor E. Carlson
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching. (1%)Duy M. H. Nguyen; Hoang Nguyen; Nghiem T. Diep; Tan N. Pham; Tri Cao; Binh T. Nguyen; Paul Swoboda; Nhat Ho; Shadi Albarqouni; Pengtao Xie; Daniel Sonntag; Mathias Niepert
2023-06-19
Comparative Evaluation of Recent Universal Adversarial Perturbations in Image Classification. (99%)Juanjuan Weng; Zhiming Luo; Dazhen Lin; Shaozi Li
Adversarial Robustness of Prompt-based Few-Shot Learning for Natural Language Understanding. (75%)Venkata Prabhakara Sarath Nookala; Gaurav Verma; Subhabrata Mukherjee; Srijan Kumar
Adversarial Training Should Be Cast as a Non-Zero-Sum Game. (73%)Alexander Robey; Fabian Latorre; George J. Pappas; Hamed Hassani; Volkan Cevher
Eigenpatches -- Adversarial Patches from Principal Components. (38%)Jens Bayer; Stefan Becker; David Münch; Michael Arens
Practical and General Backdoor Attacks against Vertical Federated Learning. (13%)Yuexin Xuan; Xiaojun Chen; Zhendong Zhao; Bisheng Tang; Ye Dong
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming. (5%)Steven Adams; Andrea Patane; Morteza Lahijanian; Luca Laurenti
2023-06-17
Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses. (98%)Mohamed Amine Ferrag; Othmane Friha; Burak Kantarci; Norbert Tihanyi; Lucas Cordeiro; Merouane Debbah; Djallel Hamouda; Muna Al-Hawawreh; Kim-Kwang Raymond Choo
Understanding Certified Training with Interval Bound Propagation. (38%)Yuhao Mao; Mark Niklas Müller; Marc Fischer; Martin Vechev
GlyphNet: Homoglyph domains dataset and detection using attention-based Convolutional Neural Networks. (9%)Akshat Gupta; Laxman Singh Tomar; Ridhima Garg
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network. (1%)Fan Liu; Siqi Lai; Yansong Ning; Hao Liu
2023-06-16
Wasserstein distributional robustness of neural networks. (99%)Xingjian Bai; Guangyi He; Yifan Jiang; Jan Obloj
Query-Free Evasion Attacks Against Machine Learning-Based Malware Detectors with Generative Adversarial Networks. (99%)Daniel Gibert; Jordi Planes; Quan Le; Giulio Zizzo
You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks. (98%)Edward Raff; Michel Benaroch; Andrew L. Farris
Towards Better Certified Segmentation via Diffusion Models. (73%)Othmane Laousy; Alexandre Araujo; Guillaume Chassagnon; Marie-Pierre Revel; Siddharth Garg; Farshad Khorrami; Maria Vakalopoulou
Adversarially robust clustering with optimality guarantees. (5%)Soham Jana; Kun Yang; Sanjeev Kulkarni
CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search. (1%)Fahad Shamshad; Muzammal Naseer; Karthik Nandakumar
2023-06-15
DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in the Physical World. (99%)Caixin Kang; Yinpeng Dong; Zhengyi Wang; Shouwei Ruan; Hang Su; Xingxing Wei
OVLA: Neural Network Ownership Verification using Latent Watermarks. (64%)Feisi Fu; Wenchao Li
Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks. (62%)Hongcheng Gao; Hao Zhang; Yinpeng Dong; Zhijie Deng
On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation. (33%)Zhanke Zhou; Chenyu Zhou; Xuan Li; Jiangchao Yao; Quanming Yao; Bo Han
Robustness Analysis on Foundational Segmentation Models. (11%)Madeline Chantry Schiappa; Sachidanand VS; Yunhao Ge; Ondrej Miksik; Yogesh S. Rawat; Vibhav Vineet
DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers. (3%)Chandramouli Sastry; Sri Harsha Dumpala; Sageev Oore
Explore, Establish, Exploit: Red Teaming Language Models from Scratch. (1%)Stephen Casper; Jason Lin; Joe Kwon; Gatlen Culp; Dylan Hadfield-Menell
Community Detection Attack against Collaborative Learning-based Recommender Systems. (1%)Yacine Belal; Sonia Ben Mokhtar; Mohamed Maouche; Anthony Simonet-Boulogne
Concealing CAN Message Sequences to Prevent Schedule-based Bus-off Attacks. (1%)Sunandan Adhikary; Ipsita Koley; Arkaprava Sain; Soumyadeep Das; Shuvam Saha; Soumyajit Dey
2023-06-14
Reliable Evaluation of Adversarial Transferability. (99%)Wenqian Yu; Jindong Gu; Zhijiang Li; Philip Torr
A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models. (99%)Sahar Sadrizadeh; Clément Barbier; Ljiljana Dolamic; Pascal Frossard
X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail. (98%)Omer Hofman; Amit Giloni; Yarin Hayun; Ikuya Morikawa; Toshiya Shimizu; Yuval Elovici; Asaf Shabtai
Augment then Smooth: Reconciling Differential Privacy with Certified Robustness. (98%)Jiapeng Wu; Atiyeh Ashari Ghomi; David Glukhov; Jesse C. Cresswell; Franziska Boenisch; Nicolas Papernot
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios. (83%)Ziqiang Li; Hong Sun; Pengfei Xia; Heng Li; Beihao Xia; Yi Wu; Bin Li
A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy. (75%)Enyan Dai; Limeng Cui; Zhengyang Wang; Xianfeng Tang; Yinghan Wang; Monica Cheng; Bing Yin; Suhang Wang
On the Robustness of Latent Diffusion Models. (73%)Jianping Zhang; Zhuoer Xu; Shiwen Cui; Changhua Meng; Weibin Wu; Michael R. Lyu
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks. (38%)Ziqiang Li; Hong Sun; Pengfei Xia; Beihao Xia; Xue Rui; Wei Zhang; Qinglang Guo; Bin Li
Improving Selective Visual Question Answering by Learning from Your Peers. (1%)Corentin Dancette; Spencer Whitehead; Rishabh Maheshwary; Ramakrishna Vedantam; Stefan Scherer; Xinlei Chen; Matthieu Cord; Marcus Rohrbach
2023-06-13
Theoretical Foundations of Adversarially Robust Learning. (99%)Omar Montasser
Finite Gaussian Neurons: Defending against adversarial attacks by making neural networks say "I don't know". (99%)Felix Grezes
I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models. (99%)Raz Lapid; Moshe Sipper
Robustness of SAM: Segment Anything Under Corruptions and Beyond. (98%)Yu Qiao; Chaoning Zhang; Taegoo Kang; Donghun Kim; Chenshuang Zhang; Choong Seon Hong
Area is all you need: repeatable elements make stronger adversarial attacks. (98%)Dillon Niederhut
Malafide: a novel adversarial convolutive noise attack against deepfake and spoofing detection systems. (96%)Michele Panariello; Wanying Ge; Hemlata Tak; Massimiliano Todisco; Nicholas Evans
Revisiting and Advancing Adversarial Training Through A Simple Baseline. (87%)Hong Liu
Generative Watermarking Against Unauthorized Subject-Driven Image Synthesis. (78%)Yihan Ma; Zhengyu Zhao; Xinlei He; Zheng Li; Michael Backes; Yang Zhang
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios. (22%)Haochen Mei; Gaolei Li; Jun Wu; Longfei Zheng
DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation. (22%)Zhicong Yan; Shenghong Li; Ruijie Zhao; Yuan Tian; Yuanyuan Zhao
Temporal Gradient Inversion Attacks with Robust Optimization. (8%)Bowen Li; Hanlin Gu; Ruoxin Chen; Jie Li; Chentao Wu; Na Ruan; Xueming Si; Lixin Fan
Few-shot Multi-domain Knowledge Rearming for Context-aware Defence against Advanced Persistent Threats. (2%)Gaolei Li; Yuanyuan Zhao; Wenqi Wei; Yuchen Liu
2023-06-12
When Vision Fails: Text Attacks Against ViT and OCR. (99%)Nicholas Boucher; Jenny Blessing; Ilia Shumailov; Ross Anderson; Nicolas Papernot
AROID: Improving Adversarial Robustness Through Online Instance-Wise Data Augmentation. (99%)Lin Li; Jianing Qiu; Michael Spratling
How robust accuracy suffers from certified training with convex relaxations. (73%)Piersilvio De Bartolomeis; Jacob Clarysse; Amartya Sanyal; Fanny Yang
Graph Agent Network: Empowering Nodes with Decentralized Communications Capabilities for Adversarial Resilience. (54%)Ao Liu; Wenshan Li; Tao Li; Beibei Li; Guangquan Xu; Pan Zhou; Wengang Ma; Hanyuan Huang
Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions. (13%)Harshitha Machiraju; Michael H. Herzog; Pascal Frossard
On the Robustness of Removal-Based Feature Attributions. (11%)Chris Lin; Ian Covert; Su-In Lee
VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models. (1%)Sheng-Yen Chou; Pin-Yu Chen; Tsung-Yi Ho
2023-06-11
Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework. (99%)Minglei Yin; Bin Liu; Neil Zhenqiang Gong; Xin Li
Neural Architecture Design and Robustness: A Dataset. (76%)Steffen Jung; Jovita Lukasik; Margret Keuper
TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models. (68%)Jiaqi Xue; Mengxin Zheng; Ting Hua; Yilin Shen; Yepeng Liu; Ladislau Boloni; Qian Lou
2023-06-10
Boosting Adversarial Robustness using Feature Level Stochastic Smoothing. (92%)Sravanti Addepalli; Samyak Jain; Gaurang Sriramanan; R. Venkatesh Babu
NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations. (83%)Yonggan Fu; Ye Yuan; Souvik Kundu; Shang Wu; Shunyao Zhang; Yingyan Lin
The Defense of Networked Targets in General Lotto games. (13%)Adel Aghajan; Keith Paarporn; Jason R. Marden
2023-06-09
Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions. (84%)Ezgi Korkmaz; Jonah Brown-Cohen
When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems. (78%)Emad Efatinasab; Francesco Marchiori; Denis Donadel; Alessandro Brighente; Mauro Conti
Overcoming Adversarial Attacks for Human-in-the-Loop Applications. (45%)Ryan McCoppin; Marla Kennedy; Platon Lukyanenko; Sean Kennedy
2023-06-08
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning. (99%)Mohamed el Shehaby; Ashraf Matrawy
Boosting Adversarial Transferability by Achieving Flat Local Maxima. (99%)Zhijin Ge; Hongying Liu; Xiaosen Wang; Fanhua Shang; Yuanyuan Liu
COVER: A Heuristic Greedy Adversarial Attack on Prompt-based Learning in Language Models. (93%)Zihao Tan; Qingliang Chen; Wenbin Zhu; Yongjian Huang
Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations. (83%)Hyeonjeong Ha; Minseon Kim; Sung Ju Hwang
G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering. (62%)Hao Yu; Chuan Ma; Meng Liu; Xinwang Liu; Zhe Liu; Ming Ding
A Melting Pot of Evolution and Learning. (41%)Moshe Sipper; Achiya Elyasaf; Tomer Halperin; Zvika Haramaty; Raz Lapid; Eyal Segal; Itai Tzruia; Snir Vitrack Tamam
FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs. (22%)Shanshan Han; Baturalp Buyukates; Zijian Hu; Han Jin; Weizhao Jin; Lichao Sun; Xiaoyang Wang; Chulin Xie; Kai Zhang; Qifan Zhang; Yuhui Zhang; Chaoyang He; Salman Avestimehr
PriSampler: Mitigating Property Inference of Diffusion Models. (13%)Hailong Hu; Jun Pang
Investigating the Effect of Misalignment on Membership Privacy in the White-box Setting. (12%)Ana-Maria Cretu; Daniel Jones; Yves-Alexandre de Montjoye; Shruti Tople
Robustness Testing for Multi-Agent Reinforcement Learning: State Perturbations on Critical Agents. (10%)Ziyuan Zhou; Guanjun Liu
Enhancing Robustness of AI Offensive Code Generators via Data Augmentation. (10%)Cristina Improta; Pietro Liguori; Roberto Natella; Bojan Cukic; Domenico Cotroneo
Conservative Prediction via Data-Driven Confidence Minimization. (8%)Caroline Choi; Fahim Tajwar; Yoonho Lee; Huaxiu Yao; Ananya Kumar; Chelsea Finn
Robust Framework for Explanation Evaluation in Time Series Classification. (2%)Thu Trang Nguyen; Thach Le Nguyen; Georgiana Ifrim
Open Set Relation Extraction via Unknown-Aware Training. (1%)Jun Zhao; Xin Zhao; Wenyu Zhan; Qi Zhang; Tao Gui; Zhongyu Wei; Yunwen Chen; Xiang Gao; Xuanjing Huang
2023-06-07
Extracting Cloud-based Model with Prior Knowledge. (99%)Shiqian Zhao; Kangjie Chen; Meng Hao; Jian Zhang; Guowen Xu; Hongwei Li; Tianwei Zhang
Expanding Scope: Adapting English Adversarial Attacks to Chinese. (99%)Hanyu Liu; Chengyuan Cai; Yanjun Qi
PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts. (92%)Xiangjue Dong; Yun He; Ziwei Zhu; James Caverlee
Optimal Transport Model Distributional Robustness. (83%)Van-Anh Nguyen; Trung Le; Anh Tuan Bui; Thanh-Toan Do; Dinh Phung
PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. (76%)Kaijie Zhu; Jindong Wang; Jiaheng Zhou; Zichen Wang; Hao Chen; Yidong Wang; Linyi Yang; Wei Ye; Yue Zhang; Neil Zhenqiang Gong; Xing Xie
A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions. (45%)Darshan Thaker; Paris Giampouras; René Vidal
Faithful Knowledge Distillation. (41%)Tom A. Lamb; Rudy Brunel; Krishnamurthy DJ Dvijotham; M. Pawan Kumar; Philip H. S. Torr; Francisco Eiras
Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations. (16%)Prithviraj Dasgupta
Can current NLI systems handle German word order? Investigating language model performance on a new German challenge set of minimal pairs. (15%)Ines Reinig; Katja Markert
Adversarial Sample Detection Through Neural Network Transport Dynamics. (10%)Skander Karkar; Patrick Gallinari; Alain Rakotomamonjy
2023-06-06
Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters. (99%)Xingxing Wei; Shiji Zhao
Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations. (97%)Torsten Krauß (University of Würzburg); Alexandra Dmitrienko (University of Würzburg)
Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings. (93%)Klim Kireev; Maksym Andriushchenko; Carmela Troncoso; Nicolas Flammarion
Adversarial attacks and defenses in explainable artificial intelligence: A survey. (64%)Hubert Baniecki; Przemyslaw Biecek
Exploring Model Dynamics for Accumulative Poisoning Discovery. (62%)Jianing Zhu; Xiawei Guo; Jiangchao Yao; Chao Du; Li He; Shuo Yuan; Tongliang Liu; Liang Wang; Bo Han
Membership inference attack with relative decision boundary distance. (33%)JiaCheng Xu; ChengXiang Tan
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex. (8%)Drew Linsley; Ivan F. Rodriguez; Thomas Fel; Michael Arcaro; Saloni Sharma; Margaret Livingstone; Thomas Serre
Adversarial Attacks and Defenses for Semantic Communication in Vehicular Metaverses. (1%)Jiawen Kang; Jiayi He; Hongyang Du; Zehui Xiong; Zhaohui Yang; Xumin Huang; Shengli Xie
2023-06-05
Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception. (99%)Drew Linsley; Pinyuan Feng; Thibaut Boissin; Alekh Karkada Ashok; Thomas Fel; Stephanie Olaiya; Thomas Serre
Evading Black-box Classifiers Without Breaking Eggs. (99%)Edoardo Debenedetti; Nicholas Carlini; Florian Tramèr
Evaluating robustness of support vector machines with the Lagrangian dual approach. (97%)Yuting Liu; Hong Gu; Pan Qin
A Robust Likelihood Model for Novelty Detection. (93%)Ranya Almohsen; Shivang Patel; Donald A. Adjeroh; Gianfranco Doretto
Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning. (86%)Lucas Beerens; Desmond J. Higham
Enhance Diffusion to Improve Robust Generalization. (76%)Jianhui Sun; Sanchit Sinha; Aidong Zhang
KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations. (68%)Myeongjun Jang; Bodhisattwa Prasad Majumder; Julian McAuley; Thomas Lukasiewicz; Oana-Maria Camburu
Stable Diffusion is Unstable. (45%)Chengbin Du; Yanxi Li; Zhongwei Qiu; Chang Xu
Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization. (1%)Yibing Liu; Chris Xing Tian; Haoliang Li; Lei Ma; Shiqi Wang
Security Knowledge-Guided Fuzzing of Deep Learning Libraries. (1%)Nima Shiri Harzevili; Hung Viet Pham; Song Wang
Input-gradient space particle inference for neural network ensembles. (1%)Trung Trinh; Markus Heinonen; Luigi Acerbi; Samuel Kaski
2023-06-04
Adversary for Social Good: Leveraging Adversarial Attacks to Protect Personal Attribute Privacy. (98%)Xiaoting Li; Lingwei Chen; Dinghao Wu
Aerial Swarm Defense using Interception and Herding Strategies. (1%)Vishnu S. Chipade; Dimitra Panagou
2023-06-03
Towards Black-box Adversarial Example Detection: A Data Reconstruction-based Method. (99%)Yifei Gao; Zhiyu Lin; Yunfan Yang; Jitao Sang
Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games. (67%)Tyler Malloy; Cleotilde Gonzalez
Can Directed Graph Neural Networks be Adversarially Robust? (56%)Zhichao Hou; Xitong Zhang; Wei Wang; Charu C. Aggarwal; Xiaorui Liu
Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training. (13%)Pucheng Dang; Xing Hu; Kaidi Xu; Jinhao Duan; Di Huang; Husheng Han; Rui Zhang; Zidong Du; Qi Guo; Yunji Chen
Benchmarking Robustness of Adaptation Methods on Pre-trained Vision-Language Models. (1%)Shuo Chen; Jindong Gu; Zhen Han; Yunpu Ma; Philip Torr; Volker Tresp
2023-06-02
Towards Understanding Clean Generalization and Robust Overfitting in Adversarial Training. (99%)Binghui Li; Yuanzhi Li
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models. (92%)Zonghan Yang; Tianyu Pang; Yang Liu
Adaptive Attractors: A Defense Strategy against ML Adversarial Collusion Attacks. (83%)Jiyi Zhang; Han Fang; Ee-Chien Chang
Poisoning Network Flow Classifiers. (61%)Giorgio Severi; Simona Boboila; Alina Oprea; John Holodnak; Kendra Kratkiewicz; Jason Matterer
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization. (54%)Javier Carnerero-Cano; Luis Muñoz-González; Phillippa Spencer; Emil C. Lupu
Invisible Image Watermarks Are Provably Removable Using Generative AI. (33%)Xuandong Zhao; Kexun Zhang; Zihao Su; Saastha Vasan; Ilya Grishchenko; Christopher Kruegel; Giovanni Vigna; Yu-Xiang Wang; Lei Li
Robust low-rank training via approximate orthonormal constraints. (22%)Dayana Savostianova; Emanuele Zangrando; Gianluca Ceruti; Francesco Tudisco
Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations. (13%)Dou Hu; Yinan Bao; Lingwei Wei; Wei Zhou; Songlin Hu
Improving Adversarial Robustness of DEQs with Explicit Regulations Along the Neural Dynamics. (11%)Zonghan Yang; Peng Li; Tianyu Pang; Yang Liu
Covert Communication Based on the Poisoning Attack in Federated Learning. (10%)Junchuan Liang; Rong Wang
VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations. (3%)Hoang-Quoc Nguyen-Son; Seira Hidano; Kazuhide Fukushima; Shinsaku Kiyomoto; Isao Echizen
Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation. (2%)Zhengyue Zhao; Jinhao Duan; Xing Hu; Kaidi Xu; Chenan Wang; Rui Zhang; Zidong Du; Qi Guo; Yunji Chen
MutateNN: Mutation Testing of Image Recognition Models Deployed on Hardware Accelerators. (1%)Nikolaos Louloudakis; Perry Gibson; José Cano; Ajitha Rajan
Towards Robust GAN-generated Image Detection: a Multi-view Completion Representation. (1%)Chi Liu; Tianqing Zhu; Sheng Shen; Wanlei Zhou
Improving the generalizability and robustness of large-scale traffic signal control. (1%)Tianyu Shi; Francois-Xavier Devailly; Denis Larocque; Laurent Charlin
2023-06-01
Adversarial Attack Based on Prediction-Correction. (99%)Chen Wan; Fangjun Huang
Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective. (98%)Andi Zhang; Damon Wischik
Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations. (96%)Yang Sui; Zhuohang Li; Ding Ding; Xiang Pan; Xiaozhong Xu; Shan Liu; Zhenzhong Chen
Intriguing Properties of Text-guided Diffusion Models. (92%)Qihao Liu; Adam Kortylewski; Yutong Bai; Song Bai; Alan Yuille
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers. (82%)Ruotong Wang; Hongrui Chen; Zihao Zhu; Li Liu; Baoyuan Wu
Improving the Robustness of Summarization Systems with Dual Augmentation. (76%)Xiuying Chen; Guodong Long; Chongyang Tao; Mingzhe Li; Xin Gao; Chengqi Zhang; Xiangliang Zhang
Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review. (38%)Mathias Lundteigen Mohus; Jinyue Li
Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk? (13%)Iyiola E. Olatunji; Anmar Hizber; Oliver Sihlovec; Megha Khosla
CALICO: Self-Supervised Camera-LiDAR Contrastive Pre-training for BEV Perception. (13%)Jiachen Sun; Haizhong Zheng; Qingzhao Zhang; Atul Prakash; Z. Morley Mao; Chaowei Xiao
ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-based Systems. (4%)Mingyi Zhou; Xiang Gao; Jing Wu; John Grundy; Xiao Chen; Chunyang Chen; Li Li
2023-05-31
Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis. (98%)Mst Shapna Akter; Hossain Shahriar; Iysa Iqbal; MD Hossain; M. A. Karim; Victor Clincy; Razvan Voicu
Graph-based methods coupled with specific distributional distances for adversarial attack detection. (98%)Dwight Nwaigwe; Lucrezia Carboni; Martial Mermillod; Sophie Achard; Michel Dojat
Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach. (98%)Mohammed Alkhowaiter; Hisham Kholidy; Mnassar Alyami; Abdulmajeed Alghamdi; Cliff Zou
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems. (54%)Ashim Gupta; Amrith Krishna
Deception by Omission: Using Adversarial Missingness to Poison Causal Structure Learning. (26%)Deniz Koyuncu; Alex Gittens; Bülent Yener; Moti Yung
Red Teaming Language Model Detectors with Language Models. (15%)Zhouxing Shi; Yihan Wang; Fan Yin; Xiangning Chen; Kai-Wei Chang; Cho-Jui Hsieh
Ambiguity in solving imaging inverse problems with deep learning based operators. (1%)Davide Evangelista; Elena Morotti; Elena Loli Piccolomini; James Nagy
2023-05-30
Pseudo-Siamese Network based Timbre-reserved Black-box Adversarial Attack in Speaker Identification. (99%)Qing Wang; Jixun Yao; Ziqian Wang; Pengcheng Guo; Lei Xie
Breeding Machine Translations: Evolutionary approach to survive and thrive in the world of automated evaluation. (64%)Josef Jon; Ondřej Bojar
Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness. (56%)Suraj Srinivas; Sebastian Bordt; Hima Lakkaraju
Incremental Randomized Smoothing Certification. (33%)Shubham Ugare; Tarun Suresh; Debangshu Banerjee; Gagandeep Singh; Sasa Misailovic
Defense Against Shortest Path Attacks. (16%)Benjamin A. Miller; Zohair Shafi; Wheeler Ruml; Yevgeniy Vorobeychik; Tina Eliassi-Rad; Scott Alfeld
A Multilingual Evaluation of NER Robustness to Adversarial Inputs. (15%)Akshay Srinivasan; Sowmya Vajjala
It begins with a boundary: A geometric view on probabilistically robust learning. (10%)Leon Bungert; Nicolás García Trillos; Matt Jacobs; Daniel McKenzie; Đorđe Nikolić; Qingsong Wang
Adversarial Attacks on Online Learning to Rank with Stochastic Click Models. (2%)Zichen Wang; Rishab Balasubramanian; Hui Yuan; Chenyu Song; Mengdi Wang; Huazheng Wang
Learning Perturbations to Explain Time Series Predictions. (1%)Joseph Enguehard
2023-05-29
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework. (99%)Yangyi Chen; Hongcheng Gao; Ganqu Cui; Lifan Yuan; Dehan Kong; Hanlu Wu; Ning Shi; Bo Yuan; Longtao Huang; Hui Xue; Zhiyuan Liu; Maosong Sun; Heng Ji
Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition. (92%)Nariki Tanaka; Hiroshi Kera; Kazuhiko Kawamoto
Exploiting Explainability to Design Adversarial Attacks and Evaluate Attack Resilience in Hate-Speech Detection Models. (92%)Pranath Reddy Kumbam; Sohaib Uddin Syed; Prashanth Thamminedi; Suhas Harish; Ian Perera; Bonnie J. Dorr
UMD: Unsupervised Model Detection for X2X Backdoor Attacks. (81%)Zhen Xiang; Zidi Xiong; Bo Li
Membership Inference Attacks against Language Models via Neighbourhood Comparison. (73%)Justus Mattern; Fatemehsadat Mireshghallah; Zhijing Jin; Bernhard Schölkopf; Mrinmaya Sachan; Taylor Berg-Kirkpatrick
Trustworthy Sensor Fusion against Inaudible Command Attacks in Advanced Driver-Assistance System. (41%)Jiwei Guan; Lei Pan; Chen Wang; Shui Yu; Longxiang Gao; Xi Zheng
Trainable and Explainable Simplicial Map Neural Networks. (41%)Eduardo Paluzo-Hidalgo; Miguel A. Gutiérrez-Naranjo; Rocio Gonzalez-Diaz
Robust Lipschitz Bandits to Adversarial Corruptions. (11%)Yue Kang; Cho-Jui Hsieh; Thomas C. M. Lee
Towards minimizing efforts for Morphing Attacks -- Deep embeddings for morphing pair selection and improved Morphing Attack Detection. (8%)Roman Kessler; Kiran Raja; Juan Tapia; Christoph Busch
2023-05-28
Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness. (99%)Zhanhao Hu; Jun Zhu; Bo Zhang; Xiaolin Hu
NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks. (92%)Kang Yang; Kunhao Lai
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study. (41%)Yiqi Zhong; Xianming Liu; Deming Zhai; Junjun Jiang; Xiangyang Ji
NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models. (38%)Kai Mei; Zheng Li; Zhenting Wang; Yang Zhang; Shiqing Ma
Choose your Data Wisely: A Framework for Semantic Counterfactuals. (13%)Edmund Dervakos; Konstantinos Thomas; Giorgos Filandrianos; Giorgos Stamou
BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning. (5%)Jingfeng Zhang; Bo Song; Haohan Wang; Bo Han; Tongliang Liu; Lei Liu; Masashi Sugiyama
Black-Box Anomaly Attribution. (1%)Tsuyoshi Idé; Naoki Abe
2023-05-27
Adversarial Attack On Yolov5 For Traffic And Road Sign Detection. (99%)Sanyam Jain
Rapid Plug-in Defenders. (99%)Kai Wu; Yujian Betterest Li; Jian Lou; Xiaoyu Zhang; Handing Wang; Jing Liu
Two Heads are Better than One: Towards Better Adversarial Robustness by Combining Transduction and Rejection. (98%)Nils Palumbo; Yang Guo; Xi Wu; Jiefeng Chen; Yingyu Liang; Somesh Jha
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making. (92%)Xuanjie Fang; Sijie Cheng; Yang Liu; Wei Wang
On the Importance of Backbone to the Adversarial Robustness of Object Detectors. (83%)Xiao Li; Hang Chen; Xiaolin Hu
No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. (2%)Tiancheng Jin; Junyan Liu; Chloé Rouyer; William Chang; Chen-Yu Wei; Haipeng Luo
FoPro-KD: Fourier Prompted Effective Knowledge Distillation for Long-Tailed Medical Image Recognition. (1%)Marawan Elbatel; Robert Martí; Xiaomeng Li
2023-05-26
On Evaluating Adversarial Robustness of Large Vision-Language Models. (99%)Yunqing Zhao; Tianyu Pang; Chao Du; Xiao Yang; Chongxuan Li; Ngai-Man Cheung; Min Lin
DistriBlock: Identifying adversarial audio samples by leveraging characteristics of the output distribution. (98%)Matías Pizarro; Dorothea Kolossa; Asja Fischer
Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in Multi-Agent RL. (96%)Xiangyu Liu; Souradip Chakraborty; Yanchao Sun; Furong Huang
A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation. (45%)Zhiyi Xue; Si Liu; Zhaodi Zhang; Yiting Wu; Min Zhang
Adversarial Attacks on Online Learning to Rank with Click Feedback. (38%)Jinhang Zuo; Zhiyao Zhang; Zhiyong Wang; Shuai Li; Mohammad Hajiesmaili; Adam Wierman
DeepSeaNet: Improving Underwater Object Detection using EfficientDet. (2%)Sanyam Jain
Trust-Aware Resilient Control and Coordination of Connected and Automated Vehicles. (1%)H M Sabbir Ahmad; Ehsan Sabouni; Wei Xiao; Christos G. Cassandras; Wenchao Li
Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model. (1%)Zhijie Deng; Hongcheng Gao; Yibo Miao; Hao Zhang
2023-05-25
IDEA: Invariant Defense for Graph Adversarial Robustness. (99%)Shuchang Tao; Qi Cao; Huawei Shen; Yunfan Wu; Bingbing Xu; Xueqi Cheng
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text. (98%)Ashim Gupta; Carter Wood Blum; Temma Choji; Yingjie Fei; Shalin Shah; Alakananda Vempala; Vivek Srikumar
Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability. (98%)Haotian Xue; Alexandre Araujo; Bin Hu; Yongxin Chen
PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image Deraining for Semantic Segmentation. (96%)Xianghao Jiao; Yaohua Liu; Jiaxin Gao; Xinyuan Chu; Risheng Liu; Xin Fan
CARSO: Counter-Adversarial Recall of Synthetic Observations. (93%)Emanuele Ballarin; Alessio Ansuini; Luca Bortolussi
Adversarial Attacks on Leakage Detectors in Water Distribution Networks. (86%)Paul Stahlhofen; André Artelt; Luca Hermes; Barbara Hammer
On the Robustness of Segment Anything. (73%)Yihao Huang; Yue Cao; Tianlin Li; Felix Juefei-Xu; Di Lin; Ivor W. Tsang; Yang Liu; Qing Guo
Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score. (67%)Shuhai Zhang; Feng Liu; Jiahao Yang; Yifan Yang; Changsheng Li; Bo Han; Mingkui Tan
Rethinking Diversity in Deep Neural Network Testing. (50%)Zi Wang; Jihye Choi; Ke Wang; Somesh Jha
IMBERT: Making BERT Immune to Insertion-based Backdoor Attacks. (13%)Xuanli He; Jun Wang; Benjamin Rubinstein; Trevor Cohn
Securing Deep Generative Models with Universal Adversarial Signature. (2%)Yu Zeng; Mo Zhou; Yuan Xue; Vishal M. Patel
Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace. (1%)Jinyung Hong; Keun Hee Park; Theodore P. Pavlic
2023-05-24
How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks. (99%)Salijona Dyrmishi; Salah Ghamizi; Maxime Cordy
Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup. (99%)Junyoung Byun; Myung-Joon Kwon; Seungju Cho; Yoonji Kim; Changick Kim
Robust Classification via a Single Diffusion Model. (99%)Huanran Chen; Yinpeng Dong; Zhengyi Wang; Xiao Yang; Chengqi Duan; Hang Su; Jun Zhu
Investigating Adversarial Vulnerability and Implicit Bias through Frequency Analysis. (92%)Lorenzo Basile; Nikos Karantzas; Alberto D'Onofrio; Luca Bortolussi; Alex Rodriguez; Fabio Anselmi
Fantastic DNN Classifiers and How to Identify them without Data. (91%)Nathaniel Dean; Dilip Sarkar
Adversarial Demonstration Attacks on Large Language Models. (88%)Jiongxiao Wang; Zichen Liu; Keun Hee Park; Muhao Chen; Chaowei Xiao
AdvFunMatch: When Consistent Teaching Meets Adversarial Robustness. (76%)Ziuhi Wu; Haichang Gao; Bingqian Zhou; Ping Wang
Reconstructive Neuron Pruning for Backdoor Defense. (75%)Yige Li; Xixiang Lyu; Xingjun Ma; Nodens Koren; Lingjuan Lyu; Bo Li; Yu-Gang Jiang
Another Dead End for Morphological Tags? Perturbed Inputs and Parsing. (74%)Alberto Muñoz-Ortiz; David Vilares
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models. (50%)Jiashu Xu; Mingyu Derek Ma; Fei Wang; Chaowei Xiao; Muhao Chen
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE. (47%)Qin Liu; Fei Wang; Chaowei Xiao; Muhao Chen
Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models. (22%)Natalie Shapira; Mosh Levy; Seyed Hossein Alavi; Xuhui Zhou; Yejin Choi; Yoav Goldberg; Maarten Sap; Vered Shwartz
Adversarial robustness of amortized Bayesian inference. (11%)Manuel Glöckler; Michael Deistler; Jakob H. Macke
Sharpness-Aware Data Poisoning Attack. (10%)Pengfei He; Han Xu; Jie Ren; Yingqian Cui; Hui Liu; Charu C. Aggarwal; Jiliang Tang
How to fix a broken confidence estimator: Evaluating post-hoc methods for selective classification with deep neural networks. (3%)Luís Felipe P. Cattelan; Danilo Silva
M4: Multi-generator, Multi-domain, and Multi-lingual Black-Box Machine-Generated Text Detection. (1%)Yuxia Wang; Jonibek Mansurov; Petar Ivanov; Jinyan Su; Artem Shelmanov; Akim Tsvigun; Chenxi Whitehouse; Osama Mohammed Afzal; Tarek Mahmoud; Toru Sasaki; Thomas Arnold; Alham Fikri Aji; Nizar Habash; Iryna Gurevych; Preslav Nakov
Ghostbuster: Detecting Text Ghostwritten by Large Language Models. (1%)Vivek Verma; Eve Fleisig; Nicholas Tomlin; Dan Klein
2023-05-23
The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks. (99%)Iuri Frosio; Jan Kautz
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning. (99%)Minchan Kwon; Kangil Kim
QFA2SR: Query-Free Adversarial Transfer Attacks to Speaker Recognition Systems. (98%)Guangke Chen; Yedi Zhang; Zhe Zhao; Fu Song
Expressive Losses for Verified Robustness via Convex Combinations. (95%)Alessandro De Palma; Rudy Bunel; Krishnamurthy Dvijotham; M. Pawan Kumar; Robert Stanforth; Alessio Lomuscio
Impact of Light and Shadow on Robustness of Deep Neural Networks. (87%)Chengyin Hu; Weiwen Shi; Chao Li; Jialiang Sun; Donghua Wang; Junqi Wu; Guijian Tang
A Causal View of Entity Bias in (Large) Language Models. (10%)Fei Wang; Wenjie Mo; Yiwei Wang; Wenxuan Zhou; Muhao Chen
Decoupled Kullback-Leibler Divergence Loss. (1%)Jiequan Cui; Zhuotao Tian; Zhisheng Zhong; Xiaojuan Qi; Bei Yu; Hanwang Zhang
2023-05-22
Latent Magic: An Investigation into Adversarial Examples Crafted in the Semantic Latent Space. (99%)BoYang Zheng
Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation. (99%)Kira Maag; Asja Fischer
FGAM:Fast Adversarial Malware Generation Method Based on Gradient Sign. (98%)Kun Li; Fan Zhang; Wei Guo
Attribute-Guided Encryption with Facial Texture Masking. (98%)Chun Pong Lau; Jiang Liu; Rama Chellappa
DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection. (98%)Jiang Liu; Chun Pong Lau; Rama Chellappa
Byzantine Robust Cooperative Multi-Agent Reinforcement Learning as a Bayesian Game. (93%)Simin Li; Jun Guo; Jingqiao Xiu; Ruixiao Xu; Xin Yu; Jiakai Wang; Aishan Liu; Yaodong Yang; Xianglong Liu
Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks. (88%)Simin Li; Shuing Zhang; Gujun Chen; Dong Wang; Pu Feng; Jiakai Wang; Aishan Liu; Xin Yi; Xianglong Liu
Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors. (54%)Pia Hanfeld; Marina M. -C. Höhne; Michael Bussmann; Wolfgang Hönig
DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation. (50%)Haitham Khedr; Yasser Shoukry
The defender's perspective on automatic speaker verification: An overview. (22%)Haibin Wu; Jiawen Kang; Lingwei Meng; Helen Meng; Hung-yi Lee
Model Stealing Attack against Multi-Exit Networks. (10%)Li Pan; Lv Peizhuo; Chen Kai; Cai Yuling; Xiang Fan; Zhang Shengzhi
Adversarial Defenses via Vector Quantization. (8%)Zhiyi Dong; Yongyi Mao
Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models. (2%)Alicia Parrish; Hannah Rose Kirk; Jessica Quaye; Charvi Rastogi; Max Bartolo; Oana Inel; Juan Ciro; Rafael Mosquera; Addison Howard; Will Cukierski; D. Sculley; Vijay Janapa Reddi; Lora Aroyo
Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals. (1%)Ananth Balashankar; Xuezhi Wang; Yao Qin; Ben Packer; Nithum Thain; Jilin Chen; Ed H. Chi; Alex Beutel
Tied-Augment: Controlling Representation Similarity Improves Data Augmentation. (1%)Emirhan Kurtulus; Zichao Li; Yann Dauphin; Ekin Dogus Cubuk
Adaptive Face Recognition Using Adversarial Information Network. (1%)Mei Wang; Weihong Deng
Watermarking Text Data on Large Language Models for Dataset Copyright. (1%)Yixin Liu; Hongsheng Hu; Xun Chen; Xuyun Zhang; Lichao Sun
2023-05-21
Mist: Towards Improved Adversarial Examples for Diffusion Models. (99%)Chumeng Liang; Xiaoyu Wu
Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack. (81%)Christopher Burger; Lingwei Chen; Thai Le
FAQ: Mitigating the Impact of Faults in the Weight Memory of DNN Accelerators through Fault-Aware Quantization. (1%)Muhammad Abdullah Hanif; Muhammad Shafique
2023-05-20
Dynamic Transformers Provide a False Sense of Efficiency. (92%)Yiming Chen; Simin Chen; Zexin Li; Wei Yang; Cong Liu; Robby T. Tan; Haizhou Li
Annealing Self-Distillation Rectification Improves Adversarial Training. (76%)Yu-Yu Wu; Hung-Jui Wang; Shang-Tse Chen
Stability, Generalization and Privacy: Precise Analysis for Random and NTK Features. (8%)Simone Bombari; Marco Mondelli
2023-05-19
Multi-Task Models Adversarial Attacks. (98%)Lijun Zhang; Xiao Liu; Kaleel Mahmood; Caiwen Ding; Hui Guan
DAP: A Dynamic Adversarial Patch for Evading Person Detectors. (92%)Amira Guesmi; Ruitian Ding; Muhammad Abdullah Hanif; Ihsen Alouani; Muhammad Shafique
Efficient ConvBN Blocks for Transfer Learning and Beyond. (67%)Kaichao You; Guo Qin; Anchang Bao; Meng Cao; Ping Huang; Jiulong Shan; Mingsheng Long
Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation. (8%)Xuanli He; Qiongkai Xu; Jun Wang; Benjamin Rubinstein; Trevor Cohn
Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment. (5%)Mengke Li; Yiu-ming Cheung; Yang Lu
SneakyPrompt: Evaluating Robustness of Text-to-image Generative Models' Safety Filters. (4%)Yuchen Yang; Bo Hui; Haolin Yuan; Neil Gong; Yinzhi Cao
Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing. (2%)Yisong Xiao; Aishan Liu; Tianlin Li; Xianglong Liu
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning. (1%)Mustafa Safa Ozdayi; Charith Peris; Jack FitzGerald; Christophe Dupuy; Jimit Majmudar; Haidar Khan; Rahil Parikh; Rahul Gupta
2023-05-18
Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial Network Packet Generation. (99%)Soumyadeep Hore; Jalal Ghadermazi; Diwas Paudel; Ankit Shah; Tapas K. Das; Nathaniel D. Bastian
Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend. (99%)Chong Yu; Tao Chen; Zhongxue Gan
Architecture-agnostic Iterative Black-box Certified Defense against Adversarial Patches. (99%)Di Yang; Yihao Huang; Qing Guo; Felix Juefei-Xu; Ming Hu; Yang Liu; Geguang Pu
Towards an Accurate and Secure Detector against Adversarial Perturbations. (99%)Chao Wang; Shuren Qi; Zhiqiu Huang; Yushu Zhang; Xiaochun Cao
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning. (99%)Elise Bishoff; Charles Godfrey; Myles McKay; Eleanor Byler
How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses. (98%)Joana C. Costa; Tiago Roxo; Hugo Proença; Pedro R. M. Inácio
RobustFair: Adversarial Evaluation through Fairness Confusion Directed Gradient Search. (93%)Xuran Li; Peng Wu; Kaixiang Dong; Zhen Zhang
Attacks on Online Learners: a Teacher-Student Analysis. (54%)Riccardo Giuseppe Margiotta; Sebastian Goldt; Guido Sanguinetti
Explaining V1 Properties with a Biologically Constrained Deep Learning Architecture. (47%)Galen Pogoncheff; Jacob Granley; Michael Beyeler
Large Language Models can be Guided to Evade AI-Generated Text Detection. (3%)Ning Lu; Shengcai Liu; Rui He; Qi Wang; Yew-Soon Ong; Ke Tang
Zero-Day Backdoor Attack against Text-to-Image Diffusion Models via Personalization. (2%)Yihao Huang; Qing Guo; Felix Juefei-Xu
Re-thinking Data Availablity Attacks Against Deep Neural Networks. (1%)Bin Fang; Bo Li; Shuang Wu; Ran Yi; Shouhong Ding; Lizhuang Ma
TrustSER: On the Trustworthiness of Fine-tuning Pre-trained Speech Embeddings For Speech Emotion Recognition. (1%)Tiantian Feng; Rajat Hebbar; Shrikanth Narayanan
2023-05-17
Content-based Unrestricted Adversarial Attack. (99%)Zhaoyu Chen; Bo Li; Shuang Wu; Kaixun Jiang; Shouhong Ding; Wenqiang Zhang
Raising the Bar for Certified Adversarial Robustness with Diffusion Models. (95%)Thomas Altstidl; David Dobre; Björn Eskofier; Gauthier Gidel; Leo Schwinn
The Adversarial Consistency of Surrogate Risks for Binary Classification. (10%)Natalie Frank; Jonathan Niles-Weed
Variational Classification. (1%)Shehzaad Dhuliawala; Mrinmaya Sachan; Carl Allen
Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt. (1%)Zhaozhuo Xu; Zirui Liu; Beidi Chen; Yuxin Tang; Jue Wang; Kaixiong Zhou; Xia Hu; Anshumali Shrivastava
PaLM 2 Technical Report. (1%)Rohan Anil; Andrew M. Dai; Orhan Firat; Melvin Johnson; Dmitry Lepikhin; Alexandre Passos; Siamak Shakeri; Emanuel Taropa; Paige Bailey; Zhifeng Chen; Eric Chu; Jonathan H. Clark; Laurent El Shafey; Yanping Huang; Kathy Meier-Hellstern; Gaurav Mishra; Erica Moreira; Mark Omernick; Kevin Robinson; Sebastian Ruder; Yi Tay; Kefan Xiao; Yuanzhong Xu; Yujing Zhang; Gustavo Hernandez Abrego; Junwhan Ahn; Jacob Austin; Paul Barham; Jan Botha; James Bradbury; Siddhartha Brahma; Kevin Brooks; Michele Catasta; Yong Cheng; Colin Cherry; Christopher A. Choquette-Choo; Aakanksha Chowdhery; Clément Crepy; Shachi Dave; Mostafa Dehghani; Sunipa Dev; Jacob Devlin; Mark Díaz; Nan Du; Ethan Dyer; Vlad Feinberg; Fangxiaoyu Feng; Vlad Fienber; Markus Freitag; Xavier Garcia; Sebastian Gehrmann; Lucas Gonzalez; Guy Gur-Ari; Steven Hand; Hadi Hashemi; Le Hou; Joshua Howland; Andrea Hu; Jeffrey Hui; Jeremy Hurwitz; Michael Isard; Abe Ittycheriah; Matthew Jagielski; Wenhao Jia; Kathleen Kenealy; Maxim Krikun; Sneha Kudugunta; Chang Lan; Katherine Lee; Benjamin Lee; Eric Li; Music Li; Wei Li; YaGuang Li; Jian Li; Hyeontaek Lim; Hanzhao Lin; Zhongtao Liu; Frederick Liu; Marcello Maggioni; Aroma Mahendru; Joshua Maynez; Vedant Misra; Maysam Moussalem; Zachary Nado; John Nham; Eric Ni; Andrew Nystrom; Alicia Parrish; Marie Pellat; Martin Polacek; Alex Polozov; Reiner Pope; Siyuan Qiao; Emily Reif; Bryan Richter; Parker Riley; Alex Castro Ros; Aurko Roy; Brennan Saeta; Rajkumar Samuel; Renee Shelby; Ambrose Slone; Daniel Smilkov; David R. So; Daniel Sohn; Simon Tokumine; Dasha Valter; Vijay Vasudevan; Kiran Vodrahalli; Xuezhi Wang; Pidong Wang; Zirui Wang; Tao Wang; John Wieting; Yuhuai Wu; Kelvin Xu; Yunhan Xu; Linting Xue; Pengcheng Yin; Jiahui Yu; Qiao Zhang; Steven Zheng; Ce Zheng; Weikang Zhou; Denny Zhou; Slav Petrov; Yonghui Wu
2023-05-16
Iterative Adversarial Attack on Image-guided Story Ending Generation. (99%)Youze Wang; Wenbo Hu; Richang Hong
Releasing Inequality Phenomena in $L_{\infty}$-Adversarial Training via Input Gradient Distillation. (98%)Junxi Chen; Junhao Dong; Xiaohua Xie
Ortho-ODE: Enhancing Robustness and of Neural ODEs against Adversarial Attacks. (54%)Vishal Purohit
Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples. (50%)Wan Jiang; Yunfeng Diao; He Wang; Jianxin Sun; Meng Wang; Richang Hong
2023-05-15
Attacking Perceptual Similarity Metrics. (99%)Abhijay Ghildyal; Feng Liu
Exploiting Frequency Spectrum of Adversarial Images for General Robustness. (96%)Chun Yang Tan; Kazuhiko Kawamoto; Hiroshi Kera
Training Neural Networks without Backpropagation: A Deeper Dive into the Likelihood Ratio Method. (4%)Jinyang Jiang; Zeliang Zhang; Chenliang Xu; Zhaofei Yu; Yijie Peng
Assessing Hidden Risks of LLMs: An Empirical Study on Robustness, Consistency, and Credibility. (1%)Wentao Ye; Mingfeng Ou; Tianyi Li; Yipeng Chen; Xuetao Ma; Yifan Yanggong; Sai Wu; Jie Fu; Gang Chen; Haobo Wang; Junbo Zhao
2023-05-14
Diffusion Models for Imperceptible and Transferable Adversarial Attack. (99%)Jianqi Chen; Hao Chen; Keyan Chen; Yilan Zhang; Zhengxia Zou; Zhenwei Shi
Improving Defensive Distillation using Teacher Assistant. (96%)Maniratnam Mandal; Suna Gao
Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures. (82%)Wei Yuan; Shilong Yuan; Chaoqun Yang; Quoc Viet Hung Nguyen; Hongzhi Yin
Watermarking Text Generated by Black-Box Language Models. (9%)Xi Yang; Kejiang Chen; Weiming Zhang; Chang Liu; Yuang Qi; Jie Zhang; Han Fang; Nenghai Yu
2023-05-13
DNN-Defender: A Victim-Focused In-DRAM Defense Mechanism for Taming Adversarial Weight Attack on DNNs. (86%)Ranyang Zhou; Sabbir Ahmed; Adnan Siraj Rakin; Shaahin Angizi
On enhancing the robustness of Vision Transformers: Defensive Diffusion. (76%)Raza Imam; Muhammad Huzaifa; Mohammed El-Amine Azz
Decision-based iterative fragile watermarking for model integrity verification. (50%)Zhaoxia Yin; Heng Yin; Hang Su; Xinpeng Zhang; Zhenzhe Gao
2023-05-12
Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation. (73%)Jialiang Sun; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Adversarial Security and Differential Privacy in mmWave Beam Prediction in 6G networks. (68%)Ghanta Sai Krishna; Kundrapu Supriya; Sanskar Singh; Sabur Baidya
Mastering Percolation-like Games with Deep Learning. (1%)Michael M. Danziger; Omkar R. Gojala; Sean P. Cornelius
2023-05-11
Distracting Downpour: Adversarial Weather Attacks for Motion Estimation. (74%)Jenny Schmalfuss; Lukas Mehl; Andrés Bruhn
Backdoor Attack with Sparse and Invisible Trigger. (68%)Yinghua Gao; Yiming Li; Xueluan Gong; Shu-Tao Xia; Qian Wang
Watch This Space: Securing Satellite Communication through Resilient Transmitter Fingerprinting. (1%)Joshua Smailes; Sebastian Kohler; Simon Birnbach; Martin Strohmeier; Ivan Martinovic
2023-05-10
A Black-Box Attack on Code Models via Representation Nearest Neighbor Search. (99%)Jie Zhang; Wei Ma; Qiang Hu; Shangqing Liu; Xiaofei Xie; Yves Le Traon; Yang Liu
Inter-frame Accelerate Attack against Video Interpolation Models. (99%)Junpei Liao; Zhikai Chen; Liang Yi; Wenyuan Yang; Baoyuan Wu; Xiaochun Cao
Randomized Smoothing with Masked Inference for Adversarially Robust Text Classifications. (98%)Han Cheol Moon; Shafiq Joty; Ruochen Zhao; Megh Thakkar; Xu Chi
Stealthy Low-frequency Backdoor Attack against Deep Neural Networks. (80%)Xinrui Liu; Yu-an Tan; Yajie Wang; Kefan Qiu; Yuanzhang Li
Towards Invisible Backdoor Attacks in the Frequency Domain against Deep Neural Networks. (75%)Xinrui Liu; Yajie Wang; Yu-an Tan; Kefan Qiu; Yuanzhang Li
The Robustness of Computer Vision Models against Common Corruptions: a Survey. (50%)Shunxin Wang; Raymond Veldhuis; Nicola Strisciuglio
An Empirical Study on the Robustness of the Segment Anything Model (SAM). (22%)Yuqing Wang; Yun Zhao; Linda Petzold
Robust multi-agent coordination via evolutionary generation of auxiliary adversarial attackers. (12%)Lei Yuan; Zi-Qian Zhang; Ke Xue; Hao Yin; Feng Chen; Cong Guan; Li-He Li; Chao Qian; Yang Yu
2023-05-09
Quantization Aware Attack: Enhancing the Transferability of Adversarial Attacks across Target Models with Different Quantization Bitwidths. (99%)Yulong Yang; Chenhao Lin; Qian Li; Chao Shen; Dawei Zhou; Nannan Wang; Tongliang Liu
Attack Named Entity Recognition by Entity Boundary Interference. (98%)Yifei Yang; Hongqiu Wu; Hai Zhao
VSMask: Defending Against Voice Synthesis Attack via Real-Time Predictive Perturbation. (96%)Yuanda Wang; Hanqing Guo; Guangjing Wang; Bocheng Chen; Qiben Yan
Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions. (75%)Georg Siedel; Weijia Shao; Silvia Vock; Andrey Morozov
On the Relation between Sharpness-Aware Minimization and Adversarial Robustness. (56%)Zeming Wei; Jingyu Zhu; Yihao Zhang
Effects of Real-Life Traffic Sign Alteration on YOLOv7- an Object Recognition Model. (13%)Farhin Farhad Riya; Shahinul Hoque; Md Saif Hassan Onim; Edward Michaud; Edmon Begoli; Jinyuan Stella Sun
Turning Privacy-preserving Mechanisms against Federated Learning. (9%)Marco Arazzi; Mauro Conti; Antonino Nocera; Stjepan Picek
BadCS: A Backdoor Attack Framework for Code search. (8%)Shiyi Qi; Yuanhang Yang; Shuzheng Gao; Cuiyun Gao; Zenglin Xu
Quantum Machine Learning for Malware Classification. (1%)Grégoire Barrué; Tony Quertier
2023-05-08
Toward Adversarial Training on Contextualized Language Representation. (93%)Hongqiu Wu; Yongxiang Liu; Hanwen Shi; Hai Zhao; Min Zhang
Understanding Noise-Augmented Training for Randomized Smoothing. (64%)Ambar Pal; Jeremias Sulam
TAPS: Connecting Certified and Adversarial Training. (41%)Yuhao Mao; Mark Niklas Müller; Marc Fischer; Martin Vechev
Privacy-preserving Adversarial Facial Features. (22%)Zhibo Wang; He Wang; Shuaifan Jin; Wenwen Zhang; Jiahui Hu; Yan Wang; Peng Sun; Wei Yuan; Kaixin Liu; Kui Ren
Communication-Robust Multi-Agent Learning by Adaptable Auxiliary Multi-Agent Adversary Generation. (1%)Lei Yuan; Feng Chen; Zhongzhang Zhang; Yang Yu
2023-05-07
Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization. (99%)Zhaoxia Yin; Shaowei Zhu; Hang Su; Jianteng Peng; Wanli Lyu; Bin Luo
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks against Deep Image Classification. (93%)Nils Lukas; Florian Kerschbaum
2023-05-06
The Best Defense is Attack: Repairing Semantics in Textual Adversarial Examples. (99%)Heng Yang; Ke Li
Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps. (92%)Ye Sang; Yujin Huang; Shuo Huang; Helei Cui
Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework. (38%)Ruijia Wu; Yuhang Wang; Huafeng Shi; Zhipeng Yu; Yichao Wu; Ding Liang
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning. (2%)Shengfang Zhai; Yinpeng Dong; Qingni Shen; Shi Pu; Yuejian Fang; Hang Su
2023-05-05
White-Box Multi-Objective Adversarial Attack on Dialogue Generation. (99%)Yufei Li; Zexin Li; Yingfan Gao; Cong Liu
Evading Watermark based Detection of AI-Generated Content. (87%)Zhengyuan Jiang; Jinghuai Zhang; Neil Zhenqiang Gong
Verifiable Learning for Robust Tree Ensembles. (15%)Stefano Calzavara; Lorenzo Cazzaro; Giulio Ermanno Pibiri; Nicola Prezza
Repairing Deep Neural Networks Based on Behavior Imitation. (4%)Zhen Liang; Taoran Wu; Changyuan Zhao; Wanwei Liu; Bai Xue; Wenjing Yang; Ji Wang
2023-05-04
Madvex: Instrumentation-based Adversarial Attacks on Machine Learning Malware Detection. (99%)Nils Loose; Felix Mächtle; Claudius Pott; Volodymyr Bezsmertnyi; Thomas Eisenbarth
IMAP: Intrinsically Motivated Adversarial Policy. (99%)Xiang Zheng; Xingjun Ma; Shengjie Wang; Xinyu Wang; Chao Shen; Cong Wang
Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning. (78%)Dayuan Chen; Jian Zhang; Yuqian Lv; Jinhuan Wang; Hongjie Ni; Shanqing Yu; Zhen Wang; Qi Xuan
Faulting original McEliece's implementations is possible: How to mitigate this risk? (2%)Vincent Giraud; Guillaume Bouffard
2023-05-03
New Adversarial Image Detection Based on Sentiment Analysis. (99%)Yulong Wang; Tianxiang Li; Shenghong Li; Xin Yuan; Wei Ni
A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning. (86%)Kiran Purohit; Soumi Das; Sourangshu Bhattacharya; Santu Rana
Defending against Insertion-based Textual Backdoor Attacks via Attribution. (61%)Jiazhao Li; Zhuofeng Wu; Wei Ping; Chaowei Xiao; V. G. Vinod Vydiswaran
On the Security Risks of Knowledge Graph Reasoning. (26%)Zhaohan Xi; Tianyu Du; Changjiang Li; Ren Pang; Shouling Ji; Xiapu Luo; Xusheng Xiao; Fenglong Ma; Ting Wang
Backdoor Learning on Sequence to Sequence Models. (5%)Lichang Chen; Minhao Cheng; Heng Huang
Rethinking Graph Lottery Tickets: Graph Sparsity Matters. (2%)Bo Hui; Da Yan; Xiaolong Ma; Wei-Shinn Ku
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer. (1%)Lichang Chen; Heng Huang; Minhao Cheng
2023-05-02
Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature. (99%)Juanjuan Weng; Zhiming Luo; Dazhen Lin; Shaozi Li; Zhun Zhong
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning. (73%)Wenqiang Sun; Sen Li; Yuchang Sun; Jun Zhang
Towards Imperceptible Document Manipulations against Neural Ranking Models. (67%)Xuanang Chen; Ben He; Zheng Ye; Le Sun; Yingfei Sun
Sentiment Perception Adversarial Attacks on Neural Machine Translation Systems. (50%)Vyas Raina; Mark Gales
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models. (8%)Shuai Zhao; Jinming Wen; Luu Anh Tuan; Junbo Zhao; Jie Fu
2023-05-01
Attack-SAM: Towards Evaluating Adversarial Robustness of Segment Anything Model. (99%)Chenshuang Zhang; Chaoning Zhang; Taegoo Kang; Donghun Kim; Sung-Ho Bae; In So Kweon
Physical Adversarial Attacks for Surveillance: A Survey. (98%)Kien Nguyen; Tharindu Fernando; Clinton Fookes; Sridha Sridharan
Revisiting Robustness in Graph Machine Learning. (98%)Lukas Gosch; Daniel Sturm; Simon Geisler; Stephan Günnemann
Stratified Adversarial Robustness with Rejection. (96%)Jiefeng Chen; Jayaram Raghuram; Jihye Choi; Xi Wu; Yingyu Liang; Somesh Jha
Poisoning Language Models During Instruction Tuning. (2%)Alexander Wan; Eric Wallace; Sheng Shen; Dan Klein
2023-04-30
Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks. (98%)Jingfeng Zhang; Bo Song; Bo Han; Lei Liu; Gang Niu; Masashi Sugiyama
2023-04-29
FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection. (81%)Thuy Dung Nguyen; Anh Duy Nguyen; Kok-Seng Wong; Huy Hieu Pham; Thanh Hung Nguyen; Phi Le Nguyen; Truong Thao Nguyen
Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization. (33%)Xilie Xu; Jingfeng Zhang; Feng Liu; Masashi Sugiyama; Mohan Kankanhalli
Adversarial Representation Learning for Robust Privacy Preservation in Audio. (1%)Shayan Gharib; Minh Tran; Diep Luong; Konstantinos Drossos; Tuomas Virtanen
2023-04-28
Topic-oriented Adversarial Attacks against Black-box Neural Ranking Models. (99%)Yu-An Liu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Wei Chen; Yixing Fan; Xueqi Cheng
On the existence of solutions to adversarial training in multiclass classification. (75%)Nicolas Garcia Trillos; Matt Jacobs; Jakwang Kim
The Power of Typed Affine Decision Structures: A Case Study. (3%)Gerrit Nolte; Maximilian Schlüter; Alnis Murtovi; Bernhard Steffen
faulTPM: Exposing AMD fTPMs' Deepest Secrets. (3%)Hans Niklas Jacob; Christian Werling; Robert Buhren; Jean-Pierre Seifert
SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective. (1%)An Wang; Mobarakol Islam; Mengya Xu; Yang Zhang; Hongliang Ren
2023-04-27
Adversary Aware Continual Learning. (80%)Muhammad Umer; Robi Polikar
Fusion is Not Enough: Single-Modal Attacks to Compromise Fusion Models in Autonomous Driving. (75%)Zhiyuan Cheng; Hongjun Choi; James Liang; Shiwei Feng; Guanhong Tao; Dongfang Liu; Michael Zuzak; Xiangyu Zhang
Boosting Big Brother: Attacking Search Engines with Encodings. (68%)Nicholas Boucher; Luca Pajola; Ilia Shumailov; Ross Anderson; Mauro Conti
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger. (62%)Jiazhao Li; Yijin Yang; Zhuofeng Wu; V. G. Vinod Vydiswaran; Chaowei Xiao
Improve Video Representation with Temporal Adversarial Augmentation. (26%)Jinhao Duan; Quanfu Fan; Hao Cheng; Xiaoshuang Shi; Kaidi Xu
Origin Tracing and Detecting of LLMs. (1%)Linyang Li; Pengyu Wang; Ke Ren; Tianxiang Sun; Xipeng Qiu
Deep Intellectual Property Protection: A Survey. (1%)Yuchen Sun; Tianpeng Liu; Panhe Hu; Qing Liao; Shaojing Fu; Nenghai Yu; Deke Guo; Yongxiang Liu; Li Liu
Interactive Greybox Penetration Testing for Cloud Access Control using IAM Modeling and Deep Reinforcement Learning. (1%)Yang Hu; Wenxi Wang; Sarfraz Khurshid; Mohit Tiwari
2023-04-26
Improving Adversarial Transferability via Intermediate-level Perturbation Decay. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Detection of Adversarial Physical Attacks in Time-Series Image Data. (92%)Ramneet Kaur; Yiannis Kantaros; Wenwen Si; James Weimer; Insup Lee
Blockchain-based Federated Learning with SMPC Model Verification Against Poisoning Attack for Healthcare Systems. (13%)Aditya Pribadi Kalapaaking; Ibrahim Khalil; Xun Yi
2023-04-25
Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks. (99%)Ferheen Ayaz; Idris Zakariyya; José Cano; Sye Loong Keoh; Jeremy Singer; Danilo Pau; Mounia Kharbouche-Harrari
Generating Adversarial Examples with Task Oriented Multi-Objective Optimization. (99%)Anh Bui; Trung Le; He Zhao; Quan Tran; Paul Montague; Dinh Phung
SHIELD: Thwarting Code Authorship Attribution. (98%)Mohammed Abuhamad; Changhun Jung; David Mohaisen; DaeHun Nyang
Lyapunov-Stable Deep Equilibrium Models. (82%)Haoyu Chu; Shikui Wei; Ting Liu; Yao Zhao; Yuto Miyatake
LSTM-based Load Forecasting Robustness Against Noise Injection Attack in Microgrid. (1%)Amirhossein Nazeri; Pierluigi Pisu
2023-04-24
Evaluating Adversarial Robustness on Document Image Classification. (99%)Timothée Fronteau; Arnaud Paran; Aymen Shabou
Combining Adversaries with Anti-adversaries in Training. (64%)Xiaoling Zhou; Nan Yang; Ou Wu
Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization. (41%)Mingli Zhu; Shaokui Wei; Li Shen; Yanbo Fan; Baoyuan Wu
Opinion Control under Adversarial Network Perturbation: A Stackelberg Game Approach. (10%)Yuejiang Li; Zhanjiang Chen; H. Vicky Zhao
Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning. (1%)Yonggan Fu; Ye Yuan; Shang Wu; Jiayi Yuan; Yingyan Lin
2023-04-23
StyLess: Boosting the Transferability of Adversarial Examples. (99%)Kaisheng Liang; Bin Xiao
Evading DeepFake Detectors via Adversarial Statistical Consistency. (98%)Yang Hou; Qing Guo; Yihao Huang; Xiaofei Xie; Lei Ma; Jianjun Zhao
2023-04-22
Detecting Adversarial Faces Using Only Real Face Self-Perturbations. (98%)Qian Wang; Yongqin Xian; Hefei Ling; Jinyuan Zhang; Xiaorui Lin; Ping Li; Jiazhong Chen; Ning Yu
Universal Adversarial Backdoor Attacks to Fool Vertical Federated Learning in Cloud-Edge Collaboration. (70%)Peng Chen; Xin Du; Zhihui Lu; Hongfeng Chai
2023-04-21
INK: Inheritable Natural Backdoor Attack Against Model Distillation. (97%)Xiaolei Liu; Ming Yi; Kangyi Ding; Bangzhou Xin; Yixiao Xu; Li Yan; Chao Shen
Individual Fairness in Bayesian Neural Networks. (69%)Alice Doherty; Matthew Wicker; Luca Laurenti; Andrea Patane
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning. (64%)Hangtao Zhang; Zeming Yao; Leo Yu Zhang; Shengshan Hu; Chao Chen; Alan Liew; Zhetao Li
Interpretable and Robust AI in EEG Systems: A Survey. (12%)Xinliang Zhou; Chenyu Liu; Liming Zhai; Ziyu Jia; Cuntai Guan; Yang Liu
MAWSEO: Adversarial Wiki Search Poisoning for Illicit Online Promotion. (2%)Zilong Lin; Zhengyi Li; Xiaojing Liao; XiaoFeng Wang; Xiaozhong Liu
2023-04-20
Towards the Universal Defense for Query-Based Audio Adversarial Attacks. (99%)Feng Guo; Zheng Sun; Yuxuan Chen; Lei Ju
Diversifying the High-level Features for better Adversarial Transferability. (99%)Zhiyuan Wang; Zeliang Zhang; Siyuan Liang; Xiaosen Wang
Using Z3 for Formal Modeling and Verification of FNN Global Robustness. (98%)Yihao Zhang; Zeming Wei; Xiyue Zhang; Meng Sun
Certified Adversarial Robustness Within Multiple Perturbation Bounds. (96%)Soumalya Nandi; Sravanti Addepalli; Harsh Rangwani; R. Venkatesh Babu
Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock Recommendation via Split Variational Adversarial Training. (93%)Jiezhu Cheng; Kaizhu Huang; Zibin Zheng
Adversarial Infrared Blocks: A Black-box Attack to Thermal Infrared Detectors at Multiple Angles in Physical World. (89%)Chengyin Hu; Weiwen Shi; Tingsong Jiang; Wen Yao; Ling Tian; Xiaoqian Chen
An Analysis of the Completion Time of the BB84 Protocol. (22%)Sounak Kar; Jean-Yves Le Boudec
A Plug-and-Play Defensive Perturbation for Copyright Protection of DNN-based Applications. (13%)Donghua Wang; Wen Yao; Tingsong Jiang; Weien Zhou; Lang Lin; Xiaoqian Chen
Enhancing object detection robustness: A synthetic and natural perturbation approach. (12%)Nilantha Premakumara; Brian Jalaian; Niranjan Suri; Hooman Samani
RoCOCO: Robustness Benchmark of MS-COCO to Stress-test Image-Text Matching Models. (8%)Seulki Park; Daeho Um; Hajung Yoon; Sanghyuk Chun; Sangdoo Yun; Jin Young Choi
Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning. (2%)Manaar Alam; Hithem Lamri; Michail Maniatakos
Learning Sample Difficulty from Pre-trained Models for Reliable Prediction. (1%)Peng Cui; Dan Zhang; Zhijie Deng; Yinpeng Dong; Jun Zhu
2023-04-19
Jedi: Entropy-based Localization and Removal of Adversarial Patches. (84%)Bilel Tarchoun; Anouar Ben Khalifa; Mohamed Ali Mahjoub; Nael Abu-Ghazaleh; Ihsen Alouani
GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models. (81%)Zaitang Li; Pin-Yu Chen; Tsung-Yi Ho
Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks. (5%)Yunlong Mao; Zexi Xin; Zhenyu Li; Jue Hong; Qingyou Yang; Sheng Zhong
Density-Insensitive Unsupervised Domain Adaption on 3D Object Detection. (1%)Qianjiang Hu; Daizong Liu; Wei Hu
On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training. (1%)Hao Fei; Tat-Seng Chua; Chenliang Li; Donghong Ji; Meishan Zhang; Yafeng Ren
Fundamental Limitations of Alignment in Large Language Models. (1%)Yotam Wolf; Noam Wies; Oshri Avnery; Yoav Levine; Amnon Shashua
2023-04-18
Wavelets Beat Monkeys at Adversarial Robustness. (99%)Jingtong Su; Julia Kempe
Towards the Transferable Audio Adversarial Attack via Ensemble Methods. (99%)Feng Guo; Zheng Sun; Yuxuan Chen; Lei Ju
Masked Language Model Based Textual Adversarial Example Detection. (99%)Xiaomei Zhang; Zhaoxi Zhang; Qi Zhong; Xufei Zheng; Yanjun Zhang; Shengshan Hu; Leo Yu Zhang
In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT. (80%)Xinyue Shen; Zeyuan Chen; Michael Backes; Yang Zhang
Generative models improve fairness of medical classifiers under distribution shifts. (13%)Ira Ktena; Olivia Wiles; Isabela Albuquerque; Sylvestre-Alvise Rebuffi; Ryutaro Tanno; Abhijit Guha Roy; Shekoofeh Azizi; Danielle Belgrave; Pushmeet Kohli; Alan Karthikesalingam; Taylan Cemgil; Sven Gowal
2023-04-17
Evil from Within: Machine Learning Backdoors through Hardware Trojans. (15%)Alexander Warnecke; Julian Speith; Jan-Niklas Möller; Konrad Rieck; Christof Paar
GrOVe: Ownership Verification of Graph Neural Networks using Embeddings. (13%)Asim Waheed; Vasisht Duddu; N. Asokan
OOD-CV-v2: An extended Benchmark for Robustness to Out-of-Distribution Shifts of Individual Nuisances in Natural Images. (1%)Bingchen Zhao; Jiahao Wang; Wufei Ma; Artur Jesslen; Siwei Yang; Shaozuo Yu; Oliver Zendel; Christian Theobalt; Alan Yuille; Adam Kortylewski
2023-04-16
A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems. (98%)JiaHao Xie; Ye Luo; Jianwei Lu
RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks. (96%)Yunruo Zhang; Tianyu Du; Shouling Ji; Peng Tang; Shanqing Guo
JoB-VS: Joint Brain-Vessel Segmentation in TOF-MRA Images. (15%)Natalia Valderrama; Ioannis Pitsiorlas; Luisa Vargas; Pablo Arbeláez; Maria A. Zuluaga
2023-04-14
Interpretability is a Kind of Safety: An Interpreter-based Ensemble for Adversary Defense. (99%)Jingyuan Wang; Yufan Wu; Mingxuan Li; Xin Lin; Junjie Wu; Chao Li
Combining Generators of Adversarial Malware Examples to Increase Evasion Rate. (99%)Matouš Kozák; Martin Jureček
Cross-Entropy Loss Functions: Theoretical Analysis and Applications. (3%)Anqi Mao; Mehryar Mohri; Yutao Zhong
Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice. (2%)Andrea Gadotti; Florimond Houssiau; Meenatchi Sundaram Muthu Selva Annamalai; Yves-Alexandre de Montjoye
2023-04-13
Generating Adversarial Examples with Better Transferability via Masking Unimportant Parameters of Surrogate Model. (99%)Dingcheng Yang; Wenjian Yu; Zihao Xiao; Jiaqi Luo
Certified Zeroth-order Black-Box Defense with Robust UNet Denoiser. (96%)Astha Verma; Siddhesh Bangar; A V Subramanyam; Naman Lal; Rajiv Ratn Shah; Shin'ichi Satoh
False Claims against Model Ownership Resolution. (93%)Jian Liu; Rui Zhang; Sebastian Szyller; Kui Ren; N. Asokan
Adversarial Examples from Dimensional Invariance. (45%)Benjamin L. Badger
Understanding Overfitting in Adversarial Training in Kernel Regression. (1%)Teng Zhang; Kang Li
LSFSL: Leveraging Shape Information in Few-shot Learning. (1%)Deepan Chakravarthi Padmanabhan; Shruthi Gowda; Elahe Arani; Bahram Zonooz
2023-04-12
Generative Adversarial Networks-Driven Cyber Threat Intelligence Detection Framework for Securing Internet of Things. (92%)Mohamed Amine Ferrag; Djallel Hamouda; Merouane Debbah; Leandros Maglaras; Abderrahmane Lakas
Exploiting Logic Locking for a Neural Trojan Attack on Machine Learning Accelerators. (1%)Hongye Xu; Dongfang Liu; Cory Merkel; Michael Zuzak
2023-04-11
RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense. (99%)Yue Cui; Syed Irfan Ali Meerza; Zhuohang Li; Luyang Liu; Jiaxin Zhang; Jian Liu
Simultaneous Adversarial Attacks On Multiple Face Recognition System Components. (98%)Inderjeet Singh; Kazuya Kakizaki; Toshinori Araki
Boosting Cross-task Transferability of Adversarial Patches with Visual Relations. (98%)Tony Ma; Songze Li; Yisong Xiao; Shunchang Liu
Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection. (92%)Tianyuan Zhang; Yisong Xiao; Xiaoya Zhang; Hao Li; Lu Wang
On the Adversarial Inversion of Deep Biometric Representations. (67%)Gioacchino Tangari; Shreesh Keskar; Hassan Jameel Asghar; Dali Kaafar
Overload: Latency Attacks on Object Detection for Edge Devices. (33%)Erh-Chung Chen; Pin-Yu Chen; I-Hsin Chung; Che-rung Lee
Towards More Robust and Accurate Sequential Recommendation with Cascade-guided Adversarial Training. (9%)Juntao Tan; Shelby Heinecke; Zhiwei Liu; Yongjun Chen; Yongfeng Zhang; Huan Wang
2023-04-10
Generating Adversarial Attacks in the Latent Space. (98%)Nitish Shukla; Sudipta Banerjee
Reinforcement Learning-Based Black-Box Model Inversion Attacks. (67%)Gyojin Han; Jaehyun Choi; Haeil Lee; Junmo Kim
Defense-Prefix for Preventing Typographic Attacks on CLIP. (16%)Hiroki Azuma; Yusuke Matsui
Helix++: A platform for efficiently securing software. (1%)Jack W. Davidson; Jason D. Hiser; Anh Nguyen-Tuong
2023-04-09
Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence. (99%)Hanbin Hong; Xinyu Zhang; Binghui Wang; Zhongjie Ba; Yuan Hong
Adversarially Robust Neural Architecture Search for Graph Neural Networks. (80%)Beini Xie; Heng Chang; Ziwei Zhang; Xin Wang; Daixin Wang; Zhiqiang Zhang; Rex Ying; Wenwu Zhu
Unsupervised Multi-Criteria Adversarial Detection in Deep Image Retrieval. (68%)Yanru Xiao; Cong Wang; Xing Gao
2023-04-08
Robust Deep Learning Models Against Semantic-Preserving Adversarial Attack. (99%)Dashan Gao; Yunce Zhao; Yinghua Yao; Zeqi Zhang; Bifei Mao; Xin Yao
RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks. (98%)Alberto Marchisio; Antonio De Marco; Alessio Colucci; Maurizio Martina; Muhammad Shafique
Exploring the Connection between Robust and Generative Models. (67%)Senad Beadini; Iacopo Masi
Benchmarking the Robustness of Quantized Models. (47%)Yisong Xiao; Tianyuan Zhang; Shunchang Liu; Haotong Qin
Attack-Augmentation Mixing-Contrastive Skeletal Representation Learning. (15%)Binqian Xu; Xiangbo Shu; Jiachao Zhang; Rui Yan; Guo-Sen Xie
Deep Prototypical-Parts Ease Morphological Kidney Stone Identification and are Competitively Robust to Photometric Perturbations. (4%)Daniel Flores-Araiza; Francisco Lopez-Tiro; Jonathan El-Beze; Jacques Hubert; Miguel Gonzalez-Mendoza; Gilberto Ochoa-Ruiz; Christian Daul
EMP-SSL: Towards Self-Supervised Learning in One Training Epoch. (1%)Shengbang Tong; Yubei Chen; Yi Ma; Yann Lecun
2023-04-07
Architecture-Preserving Provable Repair of Deep Neural Networks. (1%)Zhe Tao; Stephanie Nawas; Jacqueline Mitchell; Aditya V. Thakur
ASPEST: Bridging the Gap Between Active Learning and Selective Prediction. (1%)Jiefeng Chen; Jinsung Yoon; Sayna Ebrahimi; Sercan Arik; Somesh Jha; Tomas Pfister
2023-04-06
Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding. (45%)Yuke Hu; Wei Liang; Ruofan Wu; Kai Xiao; Weiqiang Wang; Xiaochen Li; Jinfei Liu; Zhan Qin
Manipulating Federated Recommender Systems: Poisoning with Synthetic Users and Its Countermeasures. (45%)Wei Yuan; Quoc Viet Hung Nguyen; Tieke He; Liang Chen; Hongzhi Yin
Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions. (10%)Jia-Hong Huang; Modar Alfadly; Bernard Ghanem; Marcel Worring
EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles. (4%)Jonah O'Brien Weiss; Tiago Alves; Sandip Kundu
Evaluating the Robustness of Machine Reading Comprehension Models to Low Resource Entity Renaming. (2%)Clemencia Siro; Tunde Oluwaseyi Ajayi
Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning. (1%)Jae-Hun Lee; Doyoung Yoon; ByeongMoon Ji; Kyungyul Kim; Sangheum Hwang
Reliable Learning for Test-time Attacks and Distribution Shift. (1%)Maria-Florina Balcan; Steve Hanneke; Rattana Pukdee; Dravyansh Sharma
Benchmarking Robustness to Text-Guided Corruptions. (1%)Mohammadreza Mofayezi; Yasamin Medghalchi
2023-04-05
A Certified Radius-Guided Attack Framework to Image Segmentation Models. (99%)Wenjie Qu; Youqi Li; Binghui Wang
How to choose your best allies for a transferable attack? (99%)Thibault Maho; Seyed-Mohsen Moosavi-Dezfooli; Teddy Furon
Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability. (99%)Martin Gubri; Maxime Cordy; Yves Le Traon
Robust Neural Architecture Search. (92%)Xunyu Zhu; Jian Li; Yong Liu; Weiping Wang
Hyper-parameter Tuning for Adversarially Robust Models. (62%)Pedro Mendes; Paolo Romano; David Garlan
JPEG Compressed Images Can Bypass Protections Against AI Editing. (15%)Pedro Sandoval-Segura; Jonas Geiping; Tom Goldstein
FACE-AUDITOR: Data Auditing in Facial Recognition Systems. (1%)Min Chen; Zhikun Zhang; Tianhao Wang; Michael Backes; Yang Zhang
2023-04-04
CGDTest: A Constrained Gradient Descent Algorithm for Testing Neural Networks. (31%)Vineel Nagisetty; Laura Graves; Guanting Pan; Piyush Jha; Vijay Ganesh
Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher. (1%)Jiawei Shao; Fangzhao Wu; Jun Zhang
EGC: Image Generation and Classification via a Single Energy-Based Model. (1%)Qiushan Guo; Chuofan Ma; Yi Jiang; Zehuan Yuan; Yizhou Yu; Ping Luo
2023-04-03
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning. (76%)Ajinkya Tejankar; Maziar Sanjabi; Qifan Wang; Sinong Wang; Hamed Firooz; Hamed Pirsiavash; Liang Tan
Model-Agnostic Reachability Analysis on Deep Neural Networks. (75%)Chi Zhang; Wenjie Ruan; Fu Wang; Peipei Xu; Geyong Min; Xiaowei Huang
NetFlick: Adversarial Flickering Attacks on Deep Learning Based Video Compression. (69%)Jung-Woo Chang; Nojan Sheybani; Shehzeen Samarah Hussain; Mojan Javaheripi; Seira Hidano; Farinaz Koushanfar
Learning About Simulated Adversaries from Human Defenders using Interactive Cyber-Defense Games. (1%)Baptiste Prebot; Yinuo Du; Cleotilde Gonzalez
2023-04-01
GradMDM: Adversarial Attack on Dynamic Networks. (84%)Jianhong Pan; Lin Geng Foo; Qichen Zheng; Zhipeng Fan; Hossein Rahmani; Qiuhong Ke; Jun Liu
Instance-level Trojan Attacks on Visual Question Answering via Adversarial Learning in Neuron Activation Space. (67%)Yuwei Sun; Hideya Ochiai; Jun Sakuma
2023-03-31
Improving Fast Adversarial Training with Prior-Guided Knowledge. (99%)Xiaojun Jia; Yong Zhang; Xingxing Wei; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
To be Robust and to be Fair: Aligning Fairness with Robustness. (93%)Junyi Chai; Xiaoqian Wang
Fooling Polarization-based Vision using Locally Controllable Polarizing Projection. (91%)Zhuoxiao Li; Zhihang Zhong; Shohei Nobuhara; Ko Nishino; Yinqiang Zheng
Per-Example Gradient Regularization Improves Learning Signals from Noisy Data. (3%)Xuran Meng; Yuan Cao; Difan Zou
Secure Federated Learning against Model Poisoning Attacks via Client Filtering. (2%)Duygu Nur Yaldiz; Tuo Zhang; Salman Avestimehr
DIME-FM: DIstilling Multimodal and Efficient Foundation Models. (1%)Ximeng Sun; Pengchuan Zhang; Peizhao Zhang; Hardik Shah; Kate Saenko; Xide Xia
A Generative Framework for Low-Cost Result Validation of Outsourced Machine Learning Tasks. (1%)Abhinav Kumar; Miguel A. Guirao Aguilera; Reza Tourani; Satyajayant Misra
2023-03-30
Adversarial Attack and Defense for Dehazing Networks. (97%)Jie Gui; Xiaofeng Cong; Chengwei Peng; Yuan Yan Tang; James Tin-Yau Kwok
Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness. (96%)Timothy Redgrave; Colton Crum
Towards Adversarially Robust Continual Learning. (95%)Tao Bai; Chen Chen; Lingjuan Lyu; Jun Zhao; Bihan Wen
Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving. (81%)Zijian Zhu; Yichi Zhang; Hai Chen; Yinpeng Dong; Shu Zhao; Wenbo Ding; Jiachen Zhong; Shibao Zheng
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions. (2%)Lingdong Kong; Youquan Liu; Xin Li; Runnan Chen; Wenwei Zhang; Jiawei Ren; Liang Pan; Kai Chen; Ziwei Liu
Model-agnostic explainable artificial intelligence for object detection in image data. (1%)Milad Moradi; Ke Yan; David Colwell; Matthias Samwald; Rhona Asgari
Establishing baselines and introducing TernaryMixOE for fine-grained out-of-distribution detection. (1%)Noah Fleischmann; Walter Bennette; Nathan Inkawhich
Explainable Intrusion Detection Systems Using Competitive Learning Techniques. (1%)Jesse Ables; Thomas Kirby; Sudip Mittal; Ioana Banicescu; Shahram Rahimi; William Anderson; Maria Seale
Differential Area Analysis for Ransomware: Attacks, Countermeasures, and Limitations. (1%)Marco Venturini; Francesco Freda; Emanuele Miotto; Alberto Giaretta; Mauro Conti
2023-03-29
Latent Feature Relation Consistency for Adversarial Robustness. (99%)Xingbin Liu; Huafeng Kuang; Hong Liu; Xianming Lin; Yongjian Wu; Rongrong Ji
Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness. (99%)Wei Wei; Jiahuan Zhou; Ying Wu
Targeted Adversarial Attacks on Wind Power Forecasts. (88%)René Heinrich; Christoph Scholz; Stephan Vogt; Malte Lehna
Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias. (67%)Zihan Liu; Yun Luo; Lirong Wu; Zicheng Liu; Stan Z. Li
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing. (56%)Xiaodan Li; Yuefeng Chen; Yao Zhu; Shuhui Wang; Rong Zhang; Hui Xue
Graph Neural Networks for Hardware Vulnerability Analysis -- Can you Trust your GNN? (16%)Lilas Alrahis; Ozgur Sinanoglu
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling. (10%)Ethan Wisdom; Tejas Gokhale; Chaowei Xiao; Yezhou Yang
A Tensor-based Convolutional Neural Network for Small Dataset Classification. (2%)Zhenhua Chen; David Crandall
ALUM: Adversarial Data Uncertainty Modeling from Latent Model Uncertainty Compensation. (1%)Wei Wei; Jiahuan Zhou; Hongze Li; Ying Wu
2023-03-28
A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion. (99%)Haomin Zhuang; Yihua Zhang; Sijia Liu
Improving the Transferability of Adversarial Samples by Path-Augmented Method. (99%)Jianping Zhang; Jen-tse Huang; Wenxuan Wang; Yichen Li; Weibin Wu; Xiaosen Wang; Yuxin Su; Michael R. Lyu
Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition. (99%)Xiao Yang; Chang Liu; Longlong Xu; Yikai Wang; Yinpeng Dong; Ning Chen; Hang Su; Jun Zhu
Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization. (98%)Jianping Zhang; Yizhan Huang; Weibin Wu; Michael R. Lyu
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm. (98%)Bakary Badjie; José Cecílio; António Casimiro
TransAudio: Towards the Transferable Adversarial Audio Attack via Learning Contextualized Perturbations. (98%)Gege Qi; Yuefeng Chen; Xiaofeng Mao; Yao Zhu; Binyuan Hui; Xiaodan Li; Rong Zhang; Hui Xue
A Survey on Malware Detection with Graph Representation Learning. (41%)Tristan Bilot; Nour El Madhoun; Khaldoun Al Agha; Anis Zouaoui
Provable Robustness for Streaming Models with a Sliding Window. (15%)Aounon Kumar; Vinu Sankar Sadasivan; Soheil Feizi
Machine-learned Adversarial Attacks against Fault Prediction Systems in Smart Electrical Grids. (9%)Carmelo Ardito; Yashar Deldjoo; Tommaso Di Noia; Eugenio Di Sciascio; Fatemeh Nazary; Giovanni Servedio
On the Use of Reinforcement Learning for Attacking and Defending Load Frequency Control. (3%)Amr S. Mohamed; Deepa Kundur
Hard-normal Example-aware Template Mutual Matching for Industrial Anomaly Detection. (1%)Zixuan Chen; Xiaohua Xie; Lingxiao Yang; Jianhuang Lai
A Universal Identity Backdoor Attack against Speaker Verification based on Siamese Network. (1%)Haodong Zhao; Wei Du; Junjie Guo; Gongshen Liu
2023-03-27
Classifier Robustness Enhancement Via Test-Time Transformation. (99%)Tsachi Blau; Roy Ganz; Chaim Baskin; Michael Elad; Alex Bronstein
Improving the Transferability of Adversarial Examples via Direction Tuning. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
EMShepherd: Detecting Adversarial Samples via Side-channel Leakage. (99%)Ruyi Ding; Cheng Gongye; Siyue Wang; Aidong Ding; Yunsi Fei
Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks. (97%)Tianrui Qin; Xitong Gao; Juanjuan Zhao; Kejiang Ye; Cheng-Zhong Xu
Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency. (76%)Xiaogeng Liu; Minghui Li; Haoyu Wang; Shengshan Hu; Dengpan Ye; Hai Jin; Libing Wu; Chaowei Xiao
CAT: Collaborative Adversarial Training. (69%)Xingbin Liu; Huafeng Kuang; Xianming Lin; Yongjian Wu; Rongrong Ji
Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection. (67%)Nicola Franco; Daniel Korth; Jeanette Miriam Lorenz; Karsten Roscher; Stephan Guennemann
Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation. (41%)Yang Lu; Pinxin Qian; Gang Huang; Hanzi Wang
Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder. (41%)Tao Sun; Lu Pang; Chao Chen; Haibin Ling
Sequential training of GANs against GAN-classifiers reveals correlated "knowledge gaps" present among independently trained GAN instances. (41%)Arkanath Pathak; Nicholas Dufour
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis. (5%)Thanh Van Le; Hao Phung; Thuan Hoang Nguyen; Quan Dao; Ngoc Tran; Anh Tran
2023-03-26
MGTBench: Benchmarking Machine-Generated Text Detection. (61%)Xinlei He; Xinyue Shen; Zeyuan Chen; Michael Backes; Yang Zhang
2023-03-25
AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking. (99%)Ruoxi Chen; Haibo Jin; Jinyin Chen; Haibin Zheng
CFA: Class-wise Calibrated Fair Adversarial Training. (98%)Zeming Wei; Yifei Wang; Yiwen Guo; Yisen Wang
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. (68%)Jinyuan Jia; Yupei Liu; Yuepeng Hu; Neil Zhenqiang Gong
Improving robustness of jet tagging algorithms with adversarial training: exploring the loss surface. (12%)Annika Stein
2023-03-24
PIAT: Parameter Interpolation based Adversarial Training for Image Classification. (99%)Kun He; Xin Liu; Yichen Yang; Zhou Qin; Weigao Wen; Hui Xue; John E. Hopcroft
How many dimensions are required to find an adversarial example? (99%)Charles Godfrey; Henry Kvinge; Elise Bishoff; Myles Mckay; Davis Brown; Tim Doster; Eleanor Byler
Effective black box adversarial attack with handcrafted kernels. (99%)Petr Dvořáček; Petr Hurtik; Petra Števuliáková
Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges. (99%)Junhao Dong; Junxi Chen; Xiaohua Xie; Jianhuang Lai; Hao Chen
Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing. (99%)Lin Li; Michael Spratling
Feature Separation and Recalibration for Adversarial Robustness. (98%)Woo Jae Kim; Yoonki Cho; Junsik Jung; Sung-Eui Yoon
Physically Adversarial Infrared Patches with Learnable Shapes and Locations. (97%)Xingxing Wei; Jie Yu; Yao Huang
Generalist: Decoupling Natural and Robust Generalization. (96%)Hongjun Wang; Yisen Wang
Ensemble-based Blackbox Attacks on Dense Prediction. (86%)Zikui Cai; Yaoteng Tan; M. Salman Asif
Backdoor Attacks with Input-unique Triggers in NLP. (54%)Xukun Zhou; Jiwei Li; Tianwei Zhang; Lingjuan Lyu; Muqiao Yang; Jun He
PoisonedGNN: Backdoor Attack on Graph Neural Networks-based Hardware Security Systems. (22%)Lilas Alrahis; Satwik Patnaik; Muhammad Abdullah Hanif; Muhammad Shafique; Ozgur Sinanoglu
Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck. (5%)Jongheon Jeong; Sihyun Yu; Hankook Lee; Jinwoo Shin
Optimal Smoothing Distribution Exploration for Backdoor Neutralization in Deep Learning-based Traffic Systems. (2%)Yue Wang; Wending Li; Michail Maniatakos; Saif Eddin Jabari
TRAK: Attributing Model Behavior at Scale. (1%)Sung Min Park; Kristian Georgiev; Andrew Ilyas; Guillaume Leclerc; Aleksander Madry
2023-03-23
Watch Out for the Confusing Faces: Detecting Face Swapping with the Probability Distribution of Face Identification Models. (68%)Yuxuan Duan; Xuhong Zhang; Chuer Yu; Zonghui Wang; Shouling Ji; Wenzhi Chen
Quadratic Graph Attention Network (Q-GAT) for Robust Construction of Gene Regulatory Networks. (50%)Hui Zhang; Xuexin An; Qiang He; Yudong Yao; Feng-Lei Fan; Yueyang Teng
Optimization and Optimizers for Adversarial Robustness. (41%)Hengyue Liang; Buyun Liang; Le Peng; Ying Cui; Tim Mitchell; Ju Sun
Adversarial Robustness and Feature Impact Analysis for Driver Drowsiness Detection. (41%)João Vitorino; Lourenço Rodrigues; Eva Maia; Isabel Praça; André Lourenço
Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense. (15%)Kalpesh Krishna; Yixiao Song; Marzena Karpinska; John Wieting; Mohit Iyyer
Decentralized Adversarial Training over Graphs. (13%)Ying Cao; Elsa Rizk; Stefan Vlaski; Ali H. Sayed
Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs. (8%)Hasan Abed Al Kader Hammoud; Adel Bibi; Philip H. S. Torr; Bernard Ghanem
Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness. (1%)Huajie Chen; Tianqing Zhu; Yuan Zhao; Bo Liu; Xin Yu; Wanlei Zhou
Efficient Symbolic Reasoning for Neural-Network Verification. (1%)Zi Wang; Somesh Jha; Krishnamurthy Dvijotham
2023-03-22
Reliable and Efficient Evaluation of Adversarial Robustness for Deep Hashing-Based Retrieval. (99%)Xunguang Wang; Jiawang Bai; Xinyue Xu; Xiaomeng Li
Semantic Image Attack for Visual Model Diagnosis. (99%)Jinqi Luo; Zhaoning Wang; Chen Henry Wu; Dong Huang; Fernando De la Torre
Revisiting DeepFool: generalization and improvement. (99%)Alireza Abdollahpourrostam; Mahed Abroshan; Seyed-Mohsen Moosavi-Dezfooli
Wasserstein Adversarial Examples on Univariant Time Series Data. (99%)Wenjie Wang; Li Xiong; Jian Lou
Test-time Defense against Adversarial Attacks: Detection and Reconstruction of Adversarial Examples via Masked Autoencoder. (99%)Yun-Yun Tsai; Ju-Chin Chao; Albert Wen; Zhaoyuan Yang; Chengzhi Mao; Tapan Shah; Junfeng Yang
Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition. (78%)Zexin Li; Bangjie Yin; Taiping Yao; Juefeng Guo; Shouhong Ding; Simin Chen; Cong Liu
An Extended Study of Human-like Behavior under Adversarial Training. (76%)Paul Gavrikov; Janis Keuper; Margret Keuper
Distribution-restrained Softmax Loss for the Model Robustness. (38%)Hao Wang; Chen Li; Jinzhe Jiang; Xin Zhang; Yaqian Zhao; Weifeng Gong
Backdoor Defense via Adaptively Splitting Poisoned Dataset. (16%)Kuofeng Gao; Yang Bai; Jindong Gu; Yong Yang; Shu-Tao Xia
Edge Deep Learning Model Protection via Neuron Authorization. (11%)Jinyin Chen; Haibin Zheng; Tao Liu; Rongchang Li; Yao Cheng; Xuhong Zhang; Shouling Ji
2023-03-21
Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems. (99%)Yao Zhu; Yuefeng Chen; Xiaodan Li; Rong Zhang; Xiang Tian; Bolun Zheng; Yaowu Chen
State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems. (99%)Junbin Fang; You Jiang; Canjian Jiang; Zoe L. Jiang; Siu-Ming Yiu; Chuanyi Liu
Bridging Optimal Transport and Jacobian Regularization by Optimal Trajectory for Enhanced Adversarial Defense. (99%)Binh M. Le; Shahroz Tariq; Simon S. Woo
Efficient Decision-based Black-box Patch Attacks on Video Recognition. (98%)Kaixun Jiang; Zhaoyu Chen; Hao Huang; Jiafeng Wang; Dingkang Yang; Bo Li; Yan Wang; Wenqiang Zhang
Black-box Backdoor Defense via Zero-shot Image Purification. (86%)Yucheng Shi; Mengnan Du; Xuansheng Wu; Zihan Guan; Jin Sun; Ninghao Liu
Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation. (10%)Yuzheng Wang; Zhaoyu Chen; Dingkang Yang; Pinxue Guo; Kaixun Jiang; Wenqiang Zhang; Lizhe Qi
Influencer Backdoor Attack on Semantic Segmentation. (10%)Haoheng Lan; Jindong Gu; Philip Torr; Hengshuang Zhao
LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation. (9%)Joshua C. Zhao; Atul Sharma; Ahmed Roushdy Elkordy; Yahya H. Ezzeldin; Salman Avestimehr; Saurabh Bagchi
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-enabled IoTs: An Anticipatory Study. (1%)Mohamed Amine Ferrag; Burak Kantarci; Lucas C. Cordeiro; Merouane Debbah; Kim-Kwang Raymond Choo
2023-03-20
TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization. (99%)Ziquan Liu; Yi Xu; Xiangyang Ji; Antoni B. Chan
Adversarial Attacks against Binary Similarity Systems. (99%)Gianluca Capozzi; Daniele Cono D'Elia; Giuseppe Antonio Di Luna; Leonardo Querzoni
DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness. (99%)Shoumik Saha; Wenxiao Wang; Yigitcan Kaya; Soheil Feizi; Tudor Dumitras
Translate your gibberish: black-box adversarial attack on machine translation systems. (83%)Andrei Chertkov; Olga Tsymboi; Mikhail Pautov; Ivan Oseledets
GNN-Ensemble: Towards Random Decision Graph Neural Networks. (56%)Wenqi Wei; Mu Qiao; Divyesh Jadav
Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving. (41%)Yinpeng Dong; Caixin Kang; Jinlai Zhang; Zijian Zhu; Yikai Wang; Xiao Yang; Hang Su; Xingxing Wei; Jun Zhu
Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking. (9%)Ruixiang Tang; Qizhang Feng; Ninghao Liu; Fan Yang; Xia Hu
Boosting Semi-Supervised Learning by Exploiting All Unlabeled Data. (2%)Yuhao Chen; Xin Tan; Borui Zhao; Zhaowei Chen; Renjie Song; Jiajun Liang; Xuequan Lu
Make Landscape Flatter in Differentially Private Federated Learning. (1%)Yifan Shi; Yingqi Liu; Kang Wei; Li Shen; Xueqian Wang; Dacheng Tao
Robustifying Token Attention for Vision Transformers. (1%)Yong Guo; David Stutz; Bernt Schiele
2023-03-19
Randomized Adversarial Training via Taylor Expansion. (99%)Gaojie Jin; Xinping Yi; Dengyu Wu; Ronghui Mu; Xiaowei Huang
AdaptGuard: Defending Against Universal Attacks for Model Adaptation. (82%)Lijun Sheng; Jian Liang; Ran He; Zilei Wang; Tieniu Tan
2023-03-18
NoisyHate: Benchmarking Content Moderation Machine Learning Models with Human-Written Perturbations Online. (98%)Yiran Ye; Thai Le; Dongwon Lee
FedRight: An Effective Model Copyright Protection for Federated Learning. (96%)Jinyin Chen; Mingjun Li; Haibin Zheng
2023-03-17
Fuzziness-tuned: Improving the Transferability of Adversarial Examples. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness. (99%)Peiyu Xiong; Michael Tegegn; Jaskeerat Singh Sarin; Shubhraneel Pal; Julia Rubin
Robust Mode Connectivity-Oriented Adversarial Defense: Enhancing Neural Network Robustness Against Diversified $\ell_p$ Attacks. (99%)Ren Wang; Yuxuan Li; Sijia Liu
Detection of Uncertainty in Exceedance of Threshold (DUET): An Adversarial Patch Localizer. (83%)Terence Jie Chua; Wenhan Yu; Jun Zhao
Can AI-Generated Text be Reliably Detected? (45%)Vinu Sankar Sadasivan; Aounon Kumar; Sriram Balasubramanian; Wenxiao Wang; Soheil Feizi
Adversarial Counterfactual Visual Explanations. (31%)Guillaume Jeanneret; Loïc Simon; Frédéric Jurie
MedLocker: A Transferable Adversarial Watermarking for Preventing Unauthorized Analysis of Medical Image Dataset. (16%)Bangzheng Pu; Xingxing Wei; Shiji Zhao; Huazhu Fu
Mobile Edge Adversarial Detection for Digital Twinning to the Metaverse with Deep Reinforcement Learning. (9%)Terence Jie Chua; Wenhan Yu; Jun Zhao
Moving Target Defense for Service-oriented Mission-critical Networks. (1%)Doğanalp Ergenç; Florian Schneider; Peter Kling; Mathias Fischer
2023-03-16
Rethinking Model Ensemble in Transfer-based Adversarial Attacks. (99%)Huanran Chen; Yichi Zhang; Yinpeng Dong; Jun Zhu
Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations. (68%)Lukas Struppek; Dominik Hintersdorf; Felix Friedrich; Manuel Brack; Patrick Schramowski; Kristian Kersting
Among Us: Adversarially Robust Collaborative Perception by Consensus. (67%)Yiming Li; Qi Fang; Jiamu Bai; Siheng Chen; Felix Juefei-Xu; Chen Feng
Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks. (50%)Qifan Xiao; Xudong Pan; Yifan Lu; Mi Zhang; Jiarun Dai; Min Yang
Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation. (11%)Yifan Yan; Xudong Pan; Mi Zhang; Min Yang
2023-03-15
Black-box Adversarial Example Attack towards FCG Based Android Malware Detection under Incomplete Feature Information. (99%)Heng Li; Zhang Cheng; Bang Wu; Liheng Yuan; Cuiying Gao; Wei Yuan; Xiapu Luo
Robust Evaluation of Diffusion-Based Adversarial Purification. (83%)Minjong Lee; Dongwoo Kim
DeeBBAA: A benchmark Deep Black Box Adversarial Attack against Cyber-Physical Power Systems. (81%)Arnab Bhattacharjee; Tapan K. Saha; Ashu Verma; Sukumar Mishra
The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models. (67%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models. (45%)Ian E. Nielsen; Ravi P. Ramachandran; Nidhal Bouaynaya; Hassan M. Fathallah-Shaykh; Ghulam Rasool
Agnostic Multi-Robust Learning Using ERM. (12%)Saba Ahmadi; Avrim Blum; Omar Montasser; Kevin Stangl
Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement. (1%)Fartash Faghri; Hadi Pouransari; Sachin Mehta; Mehrdad Farajtabar; Ali Farhadi; Mohammad Rastegari; Oncel Tuzel
GPT-4 Technical Report. (1%)OpenAI; Josh Achiam; Steven Adler; Sandhini Agarwal; Lama Ahmad; Ilge Akkaya; Florencia Leoni Aleman; Diogo Almeida; Janko Altenschmidt; Sam Altman; Shyamal Anadkat; Red Avila; Igor Babuschkin; Suchir Balaji; Valerie Balcom; Paul Baltescu; Haiming Bao; Mohammad Bavarian; Jeff Belgum; Irwan Bello; Jake Berdine; Gabriel Bernadett-Shapiro; Christopher Berner; Lenny Bogdonoff; Oleg Boiko; Madelaine Boyd; Anna-Luisa Brakman; Greg Brockman; Tim Brooks; Miles Brundage; Kevin Button; Trevor Cai; Rosie Campbell; Andrew Cann; Brittany Carey; Chelsea Carlson; Rory Carmichael; Brooke Chan; Che Chang; Fotis Chantzis; Derek Chen; Sully Chen; Ruby Chen; Jason Chen; Mark Chen; Ben Chess; Chester Cho; Casey Chu; Hyung Won Chung; Dave Cummings; Jeremiah Currier; Yunxing Dai; Cory Decareaux; Thomas Degry; Noah Deutsch; Damien Deville; Arka Dhar; David Dohan; Steve Dowling; Sheila Dunning; Adrien Ecoffet; Atty Eleti; Tyna Eloundou; David Farhi; Liam Fedus; Niko Felix; Simón Posada Fishman; Juston Forte; Isabella Fulford; Leo Gao; Elie Georges; Christian Gibson; Vik Goel; Tarun Gogineni; Gabriel Goh; Rapha Gontijo-Lopes; Jonathan Gordon; Morgan Grafstein; Scott Gray; Ryan Greene; Joshua Gross; Shixiang Shane Gu; Yufei Guo; Chris Hallacy; Jesse Han; Jeff Harris; Yuchen He; Mike Heaton; Johannes Heidecke; Chris Hesse; Alan Hickey; Wade Hickey; Peter Hoeschele; Brandon Houghton; Kenny Hsu; Shengli Hu; Xin Hu; Joost Huizinga; Shantanu Jain; Shawn Jain; Joanne Jang; Angela Jiang; Roger Jiang; Haozhun Jin; Denny Jin; Shino Jomoto; Billie Jonn; Heewoo Jun; Tomer Kaftan; Łukasz Kaiser; Ali Kamali; Ingmar Kanitscheider; Nitish Shirish Keskar; Tabarak Khan; Logan Kilpatrick; Jong Wook Kim; Christina Kim; Yongjik Kim; Jan Hendrik Kirchner; Jamie Kiros; Matt Knight; Daniel Kokotajlo; Łukasz Kondraciuk; Andrew Kondrich; Aris Konstantinidis; Kyle Kosic; Gretchen Krueger; Vishal Kuo; Michael Lampe; Ikai Lan; Teddy Lee; Jan Leike; Jade Leung; Daniel Levy; Chak Ming Li; Rachel Lim; Molly Lin; Stephanie Lin; Mateusz Litwin; Theresa Lopez; Ryan Lowe; Patricia Lue; Anna Makanju; Kim Malfacini; Sam Manning; Todor Markov; Yaniv Markovski; Bianca Martin; Katie Mayer; Andrew Mayne; Bob McGrew; Scott Mayer McKinney; Christine McLeavey; Paul McMillan; Jake McNeil; David Medina; Aalok Mehta; Jacob Menick; Luke Metz; Andrey Mishchenko; Pamela Mishkin; Vinnie Monaco; Evan Morikawa; Daniel Mossing; Tong Mu; Mira Murati; Oleg Murk; David Mély; Ashvin Nair; Reiichiro Nakano; Rajeev Nayak; Arvind Neelakantan; Richard Ngo; Hyeonwoo Noh; Long Ouyang; Cullen O'Keefe; Jakub Pachocki; Alex Paino; Joe Palermo; Ashley Pantuliano; Giambattista Parascandolo; Joel Parish; Emy Parparita; Alex Passos; Mikhail Pavlov; Andrew Peng; Adam Perelman; Filipe de Avila Belbute Peres; Michael Petrov; Henrique Ponde de Oliveira Pinto; Michael Pokorny; Michelle Pokrass; Vitchyr H. Pong; Tolly Powell; Alethea Power; Boris Power; Elizabeth Proehl; Raul Puri; Alec Radford; Jack Rae; Aditya Ramesh; Cameron Raymond; Francis Real; Kendra Rimbach; Carl Ross; Bob Rotsted; Henri Roussez; Nick Ryder; Mario Saltarelli; Ted Sanders; Shibani Santurkar; Girish Sastry; Heather Schmidt; David Schnurr; John Schulman; Daniel Selsam; Kyla Sheppard; Toki Sherbakov; Jessica Shieh; Sarah Shoker; Pranav Shyam; Szymon Sidor; Eric Sigler; Maddie Simens; Jordan Sitkin; Katarina Slama; Ian Sohl; Benjamin Sokolowsky; Yang Song; Natalie Staudacher; Felipe Petroski Such; Natalie Summers; Ilya Sutskever; Jie Tang; Nikolas Tezak; Madeleine B. Thompson; Phil Tillet; Amin Tootoonchian; Elizabeth Tseng; Preston Tuggle; Nick Turley; Jerry Tworek; Juan Felipe Cerón Uribe; Andrea Vallone; Arun Vijayvergiya; Chelsea Voss; Carroll Wainwright; Justin Jay Wang; Alvin Wang; Ben Wang; Jonathan Ward; Jason Wei; CJ Weinmann; Akila Welihinda; Peter Welinder; Jiayi Weng; Lilian Weng; Matt Wiethoff; Dave Willner; Clemens Winter; Samuel Wolrich; Hannah Wong; Lauren Workman; Sherwin Wu; Jeff Wu; Michael Wu; Kai Xiao; Tao Xu; Sarah Yoo; Kevin Yu; Qiming Yuan; Wojciech Zaremba; Rowan Zellers; Chong Zhang; Marvin Zhang; Shengjia Zhao; Tianhao Zheng; Juntang Zhuang; William Zhuk; Barret Zoph
2023-03-14
Verifying the Robustness of Automatic Credibility Assessment. (99%)Piotr Przybyła; Alexander Shvets; Horacio Saggion
Resilient Dynamic Average Consensus based on Trusted agents. (69%)Shamik Bhattacharyya; Rachel Kalpana Kalaimani
Improving Adversarial Robustness with Hypersphere Embedding and Angular-based Regularizations. (31%)Olukorede Fakorede; Ashutosh Nirala; Modeste Atsague; Jin Tian
2023-03-13
Constrained Adversarial Learning and its applicability to Automated Software Testing: a systematic review. (99%)João Vitorino; Tiago Dias; Tiago Fonseca; Eva Maia; Isabel Praça
Review on the Feasibility of Adversarial Evasion Attacks and Defenses for Network Intrusion Detection Systems. (99%)Islam Debicha; Benjamin Cochez; Tayeb Kenaza; Thibault Debatty; Jean-Michel Dricot; Wim Mees
Can Adversarial Examples Be Parsed to Reveal Victim Model Information? (99%)Yuguang Yao; Jiancheng Liu; Yifan Gong; Xiaoming Liu; Yanzhi Wang; Xue Lin; Sijia Liu
SMUG: Towards robust MRI reconstruction by smoothed unrolling. (98%)Hui Li; Jinghan Jia; Shijun Liang; Yuguang Yao; Saiprasad Ravishankar; Sijia Liu
Model-tuning Via Prompts Makes NLP Models Adversarially Robust. (96%)Mrigank Raman; Pratyush Maini; J. Zico Kolter; Zachary C. Lipton; Danish Pruthi
Robust Contrastive Language-Image Pretraining against Adversarial Attacks. (83%)Wenhan Yang; Baharan Mirzasoleiman
Model Extraction Attacks on Split Federated Learning. (47%)Jingtao Li; Adnan Siraj Rakin; Xing Chen; Li Yang; Zhezhi He; Deliang Fan; Chaitali Chakrabarti
WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminative Analysis. (1%)Yiye Chen; Yunzhi Lin; Ruinian Xu; Patricio A. Vela
Pixel-wise Gradient Uncertainty for Convolutional Neural Networks applied to Out-of-Distribution Segmentation. (1%)Kira Maag; Tobias Riedlinger
2023-03-12
Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems. (99%)Islam Debicha; Benjamin Cochez; Tayeb Kenaza; Thibault Debatty; Jean-Michel Dricot; Wim Mees
Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality. (99%)Weiquan Liu; Shijun Zheng; Cheng Wang
DNN-Alias: Deep Neural Network Protection Against Side-Channel Attacks via Layer Balancing. (96%)Mahya Morid Ahmadi; Lilas Alrahis; Ozgur Sinanoglu; Muhammad Shafique
Multi-metrics adaptively identifies backdoors in Federated learning. (92%)Siquan Huang; Yijiang Li; Chong Chen; Leyu Shi; Ying Gao
Adversarial Attacks to Direct Data-driven Control for Destabilization. (91%)Hampei Sasahara
Backdoor Defense via Deconfounded Representation Learning. (83%)Zaixi Zhang; Qi Liu; Zhicai Wang; Zepu Lu; Qingyong Hu
Interpreting Hidden Semantics in the Intermediate Layers of 3D Point Cloud Classification Neural Network. (76%)Weiquan Liu; Minghao Liu; Shijun Zheng; Cheng Wang
Boosting Source Code Learning with Data Augmentation: An Empirical Study. (11%)Zeming Dong; Qiang Hu; Yuejun Guo; Zhenya Zhang; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
2023-03-11
Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning. (99%)Jin Ding; Jie-Chao Zhao; Yong-Zhi Sun; Ping Tan; Ji-En Ma; You-Tong Fang
SHIELD: An Adaptive and Lightweight Defense against the Remote Power Side-Channel Attacks on Multi-tenant FPGAs. (8%)Mahya Morid Ahmadi; Faiq Khalid; Radha Vaidya; Florian Kriebel; Andreas Steininger; Muhammad Shafique
2023-03-10
Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks. (99%)Binghui Wang; Meng Pang; Yun Dong
Boosting Adversarial Attacks by Leveraging Decision Boundary Information. (99%)Boheng Zeng; LianLi Gao; QiLong Zhang; ChaoQun Li; JingKuan Song; ShuaiQi Jing
Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey. (99%)Yulong Wang; Tong Sun; Shenghong Li; Xin Yuan; Wei Ni; Ekram Hossain; H. Vincent Poor
Investigating Stateful Defenses Against Black-Box Adversarial Examples. (99%)Ryan Feng; Ashish Hooda; Neal Mangaokar; Kassem Fawaz; Somesh Jha; Atul Prakash
MIXPGD: Hybrid Adversarial Training for Speech Recognition Systems. (99%)Aminul Huq; Weiyi Zhang; Xiaolin Hu
Do we need entire training data for adversarial training? (99%)Vipul Gupta; Apurva Narayan
TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets. (61%)Weixin Chen; Dawn Song; Bo Li
Adapting Contrastive Language-Image Pretrained (CLIP) Models for Out-of-Distribution Detection. (13%)Nikolas Adaloglou; Felix Michels; Tim Kaiser; Markus Kollmann
2023-03-09
NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks. (99%)Wenkai Tan; Justus Renkhoff; Alvaro Velasquez; Ziyu Wang; Lusi Li; Jian Wang; Shuteng Niu; Fan Yang; Yongxin Liu; Houbing Song
Evaluating the Robustness of Conversational Recommender Systems by Adversarial Examples. (92%)Ali Montazeralghaem; James Allan
Identification of Systematic Errors of Image Classifiers on Rare Subgroups. (83%)Jan Hendrik Metzen; Robin Hutmacher; N. Grace Hua; Valentyn Boreiko; Dan Zhang
Learning the Legibility of Visual Text Perturbations. (78%)Dev Seth; Rickard Stureborg; Danish Pruthi; Bhuwan Dhingra
Efficient Certified Training and Robustness Verification of Neural ODEs. (75%)Mustafa Zeqiri; Mark Niklas Müller; Marc Fischer; Martin Vechev
Feature Unlearning for Pre-trained GANs and VAEs. (68%)Saemi Moon; Seunghyuk Cho; Dongwoo Kim
2023-03-08
Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples. (99%)Jinwei Wang; Hao Wu; Haihua Wang; Jiawei Zhang; Xiangyang Luo; Bin Ma
Decision-BADGE: Decision-based Adversarial Batch Attack with Directional Gradient Estimation. (99%)Geunhyeok Yu; Minwoo Jeon; Hyoseok Hwang
Exploring Adversarial Attacks on Neural Networks: An Explainable Approach. (99%)Justus Renkhoff; Wenkai Tan; Alvaro Velasquez; William Yichen Wang; Yongxin Liu; Jian Wang; Shuteng Niu; Lejla Begic Fazlic; Guido Dartmann; Houbing Song
BeamAttack: Generating High-quality Textual Adversarial Examples through Beam Search and Mixed Semantic Spaces. (99%)Hai Zhu; Qingyang Zhao; Yuren Wu
DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep Neural Networks. (3%)Zohreh Aghababaeyan; Manel Abdellatif; Mahboubeh Dadkhah; Lionel Briand
2023-03-07
Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration. (99%)Juanjuan Weng; Zhiming Luo; Zhun Zhong; Shaozi Li; Nicu Sebe
Patch of Invisibility: Naturalistic Physical Black-Box Adversarial Attacks on Object Detectors. (98%)Raz Lapid; Eylon Mizrahi; Moshe Sipper
Robustness-preserving Lifelong Learning via Dataset Condensation. (96%)Jinghan Jia; Yihua Zhang; Dogyoon Song; Sijia Liu; Alfred Hero
CUDA: Convolution-based Unlearnable Datasets. (82%)Vinu Sankar Sadasivan; Mahdi Soltanolkotabi; Soheil Feizi
EavesDroid: Eavesdropping User Behaviors via OS Side-Channels on Smartphones. (11%)Quancheng Wang; Ming Tang; Jianming Fu
Stabilized training of joint energy-based models and their practical applications. (2%)Martin Sustek; Samik Sadhu; Lukas Burget; Hynek Hermansky; Jesus Villalba; Laureano Moro-Velazquez; Najim Dehak
2023-03-06
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning. (41%)Hritik Bansal; Nishad Singhi; Yu Yang; Fan Yin; Aditya Grover; Kai-Wei Chang
Students Parrot Their Teachers: Membership Inference on Model Distillation. (31%)Matthew Jagielski; Milad Nasr; Christopher Choquette-Choo; Katherine Lee; Nicholas Carlini
On the Feasibility of Specialized Ability Extracting for Large Language Code Models. (22%)Zongjie Li; Chaozheng Wang; Pingchuan Ma; Chaowei Liu; Shuai Wang; Daoyuan Wu; Cuiyun Gao
A Unified Algebraic Perspective on Lipschitz Neural Networks. (15%)Alexandre Araujo; Aaron Havens; Blaise Delattre; Alexandre Allauzen; Bin Hu
Learning to Backdoor Federated Learning. (15%)Henger Li; Chen Wu; Sencun Zhu; Zizhan Zheng
Partial-Information, Longitudinal Cyber Attacks on LiDAR in Autonomous Vehicles. (10%)R. Spencer Hallyburton; Qingzhao Zhang; Z. Morley Mao; Miroslav Pajic
ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning. (1%)Animesh Basak Chowdhury; Lilas Alrahis; Luca Collini; Johann Knechtel; Ramesh Karri; Siddharth Garg; Ozgur Sinanoglu; Benjamin Tan
Rethinking Confidence Calibration for Failure Prediction. (1%)Fei Zhu; Zhen Cheng; Xu-Yao Zhang; Cheng-Lin Liu
2023-03-05
Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks. (99%)Yiran Li; Junpeng Wang; Takanori Fujiwara; Kwan-Liu Ma
Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models. (99%)Hassan Ali; Muhammad Atif Butt; Fethi Filali; Ala Al-Fuqaha; Junaid Qadir
Adversarial Sampling for Fairness Testing in Deep Neural Network. (98%)Tosin Ige; William Marfo; Justin Tonkinson; Sikiru Adewale; Bolanle Hafiz Matti
Local Environment Poisoning Attacks on Federated Reinforcement Learning. (12%)Evelyn Ma; Rasoul Etesami
Robustness, Evaluation and Adaptation of Machine Learning Models in the Wild. (10%)Vihari Piratla
Knowledge-Based Counterfactual Queries for Visual Question Answering. (3%)Theodoti Stoikou; Maria Lymperaiou; Giorgos Stamou
2023-03-04
Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes. (68%)Thomas Philippon; Christian Gagné
2023-03-03
PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. (91%)Jinghuai Zhang; Jinyuan Jia; Hongbin Liu; Neil Zhenqiang Gong
Certified Robust Neural Networks: Generalization and Corruption Resistance. (82%)Amine Bennouna; Ryan Lucas; Bart Van Parys
AdvART: Adversarial Art for Camouflaged Object Detection Attacks. (75%)Amira Guesmi; Ioan Marius Bilasco; Muhammad Shafique; Ihsen Alouani
Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions. (47%)Thuy Dung Nguyen; Tuan Nguyen; Phi Le Nguyen; Hieu H. Pham; Khoa Doan; Kok-Seng Wong
Adversarial Attacks on Machine Learning in Embedded and IoT Platforms. (38%)Christian Westbrook; Sudeep Pasricha
Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models. (33%)Naman D Singh; Francesco Croce; Matthias Hein
Stealthy Perception-based Attacks on Unmanned Aerial Vehicles. (16%)Amir Khazraei; Haocheng Meng; Miroslav Pajic
TrojText: Test-time Invisible Textual Trojan Insertion. (2%)Qian Lou; Yepeng Liu; Bo Feng
2023-03-02
Defending against Adversarial Audio via Diffusion Model. (99%)Shutong Wu; Jiongxiao Wang; Wei Ping; Weili Nie; Chaowei Xiao
Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression. (99%)Junho Kim; Byung-Kwan Lee; Yong Man Ro
AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Muhammad Shafique
APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation. (99%)Amira Guesmi; Muhammad Abdullah Hanif; Ihsen Alouani; Muhammad Shafique
Targeted Adversarial Attacks against Neural Machine Translation. (98%)Sahar Sadrizadeh; AmirHossein Dabiri Aghdam; Ljiljana Dolamic; Pascal Frossard
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks. (93%)Spencer Frei; Gal Vardi; Peter L. Bartlett; Nathan Srebro
Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks. (10%)Lennart Brocki; Neo Christopher Chung
D-Score: An Expert-Based Method for Assessing the Detectability of IoT-Related Cyber-Attacks. (3%)Yair Meidan; Daniel Benatar; Ron Bitton; Dan Avraham; Asaf Shabtai
Interpretable System Identification and Long-term Prediction on Time-Series Data. (1%)Xiaoyi Liu; Duxin Chen; Wenjia Wei; Xia Zhu; Wenwu Yu
Consistency Models. (1%)Yang Song; Prafulla Dhariwal; Mark Chen; Ilya Sutskever
CADeSH: Collaborative Anomaly Detection for Smart Homes. (1%)Yair Meidan; Dan Avraham; Hanan Libhaber; Asaf Shabtai
Conflict-Based Cross-View Consistency for Semi-Supervised Semantic Segmentation. (1%)Zicheng Wang; Zhen Zhao; Xiaoxia Xing; Dong Xu; Xiangyu Kong; Luping Zhou
2023-03-01
To Make Yourself Invisible with Adversarial Semantic Contours. (99%)Yichi Zhang; Zijian Zhu; Hang Su; Jun Zhu; Shibao Zheng; Yuan He; Hui Xue
Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Data Manifolds. (98%)Odelia Melamed; Gilad Yehudai; Gal Vardi
Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process. (97%)Mingze Ni; Zhensu Sun; Wei Liu
A Practical Upper Bound for the Worst-Case Attribution Deviations. (70%)Fan Wang; Adams Wai-Kin Kong
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning. (54%)Jianing Zhu; Jiangchao Yao; Tongliang Liu; Quanming Yao; Jianliang Xu; Bo Han
Poster: Sponge ML Model Attacks of Mobile Apps. (8%)Souvik Paul; Nicolas Kourtellis
DOLOS: A Novel Architecture for Moving Target Defense. (8%)Giulio Pagnotta; Fabio De Gaspari; Dorjan Hitaj; Mauro Andreolini; Michele Colajanni; Luigi V. Mancini
Mitigating Backdoors in Federated Learning with FLD. (2%)Yihang Lin; Pengyuan Zhou; Zhiqian Wu; Yong Liao
Competence-Based Analysis of Language Models. (1%)Adam Davies; Jize Jiang; ChengXiang Zhai
2023-02-28
A semantic backdoor attack against Graph Convolutional Networks. (98%)Jiazhu Dai; Zhipeng Xiong
Single Image Backdoor Inversion via Robust Smoothed Classifiers. (88%)Mingjie Sun; J. Zico Kolter
Feature Extraction Matters More: Universal Deepfake Disruption through Attacking Ensemble Feature Extractors. (67%)Long Tang; Dengpan Ye; Zhenhao Lu; Yunming Zhang; Shengshan Hu; Yue Xu; Chuanxi Chen
Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger. (11%)Yi Yu; Yufei Wang; Wenhan Yang; Shijian Lu; Yap-peng Tan; Alex C. Kot
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases. (1%)Chong Fu; Xuhong Zhang; Shouling Ji; Ting Wang; Peng Lin; Yanghe Feng; Jianwei Yin
2023-02-27
A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking. (99%)Chang Liu; Yinpeng Dong; Wenzhao Xiang; Xiao Yang; Hang Su; Jun Zhu; Yuefeng Chen; Yuan He; Hui Xue; Shibao Zheng
Adversarial Attack with Raindrops. (99%)Jiyuan Liu; Bingyi Lu; Mingkang Xiong; Tao Zhang; Huilin Xiong
Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign Recognition: A Feasibility Study. (99%)Fabian Woitschek; Georg Schneider
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks. (98%)Jialai Wang; Ziyuan Zhang; Meiqi Wang; Han Qiu; Tianwei Zhang; Qi Li; Zongpeng Li; Tao Wei; Chao Zhang
CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World. (98%)Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Shaohui Mei
Improving Model Generalization by On-manifold Adversarial Augmentation in the Frequency Domain. (96%)Chang Liu; Wenzhao Xiang; Yuan He; Hui Xue; Shibao Zheng; Hang Su
Efficient and Low Overhead Website Fingerprinting Attacks and Defenses based on TCP/IP Traffic. (83%)Guodong Huang; Chuan Ma; Ming Ding; Yuwen Qian; Chunpeng Ge; Liming Fang; Zhe Liu
GLOW: Global Layout Aware Attacks on Object Detection. (81%)Buyu Liu; BaoJun; Jianping Fan; Xi Peng; Kui Ren; Jun Yu
Online Black-Box Confidence Estimation of Deep Neural Networks. (16%)Fabian Woitschek; Georg Schneider
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks. (15%)Mohammad Mohammadi; Jonathan Nöther; Debmalya Mandal; Adish Singla; Goran Radanovic
Differentially Private Diffusion Models Generate Useful Synthetic Images. (10%)Sahra Ghalebikesabi; Leonard Berrada; Sven Gowal; Ira Ktena; Robert Stanforth; Jamie Hayes; Soham De; Samuel L. Smith; Olivia Wiles; Borja Balle
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation. (5%)Gaurav Patel; Konda Reddy Mopuri; Qiang Qiu
2023-02-26
Contextual adversarial attack against aerial detection in the physical world. (99%)Jiawei Lian; Xiaofei Wang; Yuru Su; Mingyang Ma; Shaohui Mei
Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. (96%)Keane Lucas; Matthew Jagielski; Florian Tramèr; Lujo Bauer; Nicholas Carlini
2023-02-25
Deep Learning-based Multi-Organ CT Segmentation with Adversarial Data Augmentation. (99%)Shaoyan Pan; Shao-Yuan Lo; Min Huang; Chaoqiong Ma; Jacob Wynne; Tonghe Wang; Tian Liu; Xiaofeng Yang
Scalable Attribution of Adversarial Attacks via Multi-Task Learning. (99%)Zhongyi Guo; Keji Han; Yao Ge; Wei Ji; Yun Li
SATBA: An Invisible Backdoor Attack Based On Spatial Attention. (74%)Huasong Zhou; Xiaowei Xu; Xiaodong Wang; Leon Bevan Bullock
Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts. (1%)Qihan Ren; Huiqi Deng; Yunuo Chen; Siyu Lou; Quanshi Zhang
2023-02-24
Defending Against Backdoor Attacks by Layer-wise Feature Analysis. (68%)Najeeb Moharram Jebreel; Josep Domingo-Ferrer; Yiming Li
Chaotic Variational Auto encoder-based Adversarial Machine Learning. (54%)Pavan Venkata Sainadh Reddy; Yelleti Vivek; Gopi Pranay; Vadlamani Ravi
Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? (12%)Ruisi Cai; Zhenyu Zhang; Zhangyang Wang
2023-02-23
Less is More: Data Pruning for Faster Adversarial Training. (99%)Yize Li; Pu Zhao; Xue Lin; Bhavya Kailkhura; Ryan Goldhahn
A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots. (99%)Boyang Zhang; Xinlei He; Yun Shen; Tianhao Wang; Yang Zhang
Boosting Adversarial Transferability using Dynamic Cues. (99%)Muzammal Naseer; Ahmad Mahmood; Salman Khan; Fahad Khan
HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks. (98%)Chao Hu; Ruishi Yu; Binqi Zeng; Yu Zhan; Ying Fu; Quan Zhang; Rongkai Liu; Heyuan Shi
Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective. (84%)Zhengbao He; Tao Li; Sizhe Chen; Xiaolin Huang
More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models. (70%)Kai Greshake; Sahar Abdelnabi; Shailesh Mishra; Christoph Endres; Thorsten Holz; Mario Fritz
On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space. (68%)Yuyang Deng; Nidham Gazagnadou; Junyuan Hong; Mehrdad Mahdavi; Lingjuan Lyu
Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity. (2%)Khatoon Mohammed
2023-02-22
Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques. (99%)Saminder Dhesi; Laura Fontes; Pedro Machado; Isibor Kennedy Ihianle; Farhad Fassihi Tash; David Ada Adama
PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks. (98%)Deqiang Li; Shicheng Cui; Yun Li; Jia Xu; Fu Xiao; Shouhuai Xu
Provable Robustness Against a Union of $\ell_0$ Adversarial Attacks. (97%)Zayd Hammoudeh; Daniel Lowd
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms. (33%)Minzhou Pan; Yi Zeng; Lingjuan Lyu; Xue Lin; Ruoxi Jia
On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective. (12%)Jindong Wang; Xixu Hu; Wenxin Hou; Hao Chen; Runkai Zheng; Yidong Wang; Linyi Yang; Haojun Huang; Wei Ye; Xiubo Geng; Binxin Jiao; Yue Zhang; Xing Xie
2023-02-21
MultiRobustBench: Benchmarking Robustness Against Multiple Attacks. (99%)Sihui Dai; Saeed Mahloujifar; Chong Xiang; Vikash Sehwag; Pin-Yu Chen; Prateek Mittal
MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection. (99%)Aqib Rashid; Jose Such
Interpretable Spectrum Transformation Attacks to Speaker Recognition. (98%)Jiadi Yao; Hong Luo; Xiao-Lei Zhang
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker. (97%)Sihui Dai; Wenxin Ding; Arjun Nitin Bhagoji; Daniel Cullina; Ben Y. Zhao; Haitao Zheng; Prateek Mittal
Generalization Bounds for Adversarial Contrastive Learning. (31%)Xin Zou; Weiwei Liu
2023-02-20
An Incremental Gray-box Physical Adversarial Attack on Neural Network Training. (98%)Rabiah Al-qudah; Moayad Aloqaily; Bassem Ouni; Mohsen Guizani; Thierry Lestable
Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System. (97%)Hao Lv; Bing Li; Lei Zhang; Cheng Liu; Ying Wang
Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts. (88%)Francesco Croce; Sylvestre-Alvise Rebuffi; Evan Shelhamer; Sven Gowal
Poisoning Web-Scale Training Datasets is Practical. (83%)Nicholas Carlini; Matthew Jagielski; Christopher A. Choquette-Choo; Daniel Paleka; Will Pearce; Hyrum Anderson; Andreas Terzis; Kurt Thomas; Florian Tramèr
Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network. (47%)Xiaojian Yuan; Kejiang Chen; Jie Zhang; Weiming Zhang; Nenghai Yu; Yang Zhang
Take Me Home: Reversing Distribution Shifts using Reinforcement Learning. (26%)Vivian Lin; Kuk Jin Jang; Souradeep Dutta; Michele Caprio; Oleg Sokolsky; Insup Lee
Model-based feature selection for neural networks: A mixed-integer programming approach. (22%)Shudian Zhao; Calvin Tsay; Jan Kronqvist
Prompt Stealing Attacks Against Text-to-Image Generation Models. (1%)Xinyue Shen; Yiting Qu; Michael Backes; Yang Zhang
2023-02-19
X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection. (99%)Aishan Liu; Jun Guo; Jiakai Wang; Siyuan Liang; Renshuai Tao; Wenbo Zhou; Cong Liu; Xianglong Liu; Dacheng Tao
Stationary Point Losses for Robust Model. (93%)Weiwei Gao; Dazhi Zhang; Yao Li; Zhichang Guo; Ovanes Petrosian
On Feasibility of Server-side Backdoor Attacks on Split Learning. (76%)Behrad Tajalli; Oguzhan Ersoy; Stjepan Picek
2023-02-18
Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example. (99%)Baoyuan Wu; Li Liu; Zihao Zhu; Qingshan Liu; Zhaofeng He; Siwei Lyu
Delving into the Adversarial Robustness of Federated Learning. (98%)Jie Zhang; Bo Li; Chen Chen; Lingjuan Lyu; Shuang Wu; Shouhong Ding; Chao Wu
Meta Style Adversarial Training for Cross-Domain Few-Shot Learning. (83%)Yuqian Fu; Yu Xie; Yanwei Fu; Yu-Gang Jiang
Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements. (67%)Jiawen Deng; Jiale Cheng; Hao Sun; Zhexin Zhang; Minlie Huang
MedViT: A Robust Vision Transformer for Generalized Medical Image Classification. (12%)Omid Nejati Manzari; Hamid Ahmadabadi; Hossein Kashiani; Shahriar B. Shokouhi; Ahmad Ayatollahi
RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks. (11%)Marwan Omar
Beyond Distribution Shift: Spurious Features Through the Lens of Training Dynamics. (2%)Nihal Murali; Aahlad Puli; Ke Yu; Rajesh Ranganath; Kayhan Batmanghelich
2023-02-17
Measuring Equality in Machine Learning Security Defenses. (96%)Luke E. Richards; Edward Raff; Cynthia Matuszek
Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions. (5%)Manish Nagireddy; Moninder Singh; Samuel C. Hoffman; Evaline Ju; Karthikeyan Natesan Ramamurthy; Kush R. Varshney
RetVec: Resilient and Efficient Text Vectorizer. (4%)Elie Bursztein; Marina Zhang; Owen Vallis; Xinyu Jia; Alexey Kurakin
2023-02-16
On the Effect of Adversarial Training Against Invariance-based Adversarial Examples. (99%)Roland Rauter; Martin Nocker; Florian Merkle; Pascal Schöttle
High-frequency Matters: An Overwriting Attack and defense for Image-processing Neural Network Watermarking. (67%)Huajie Chen; Tianqing Zhu; Chi Liu; Shui Yu; Wanlei Zhou
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data. (3%)Pratik Karmakar; Debabrota Basu
A Novel Noise Injection-based Training Scheme for Better Model Robustness. (2%)Zeliang Zhang; Jinyang Jiang; Minjie Chen; Zhiyuan Wang; Yijie Peng; Zhaofei Yu
2023-02-15
Masking and Mixing Adversarial Training. (99%)Hiroki Adachi; Tsubasa Hirakawa; Takayoshi Yamashita; Hironobu Fujiyoshi; Yasunori Ishii; Kazuki Kozuka
Robust Mid-Pass Filtering Graph Convolutional Networks. (98%)Jincheng Huang; Lun Du; Xu Chen; Qiang Fu; Shi Han; Dongmei Zhang
Graph Adversarial Immunization for Certifiable Robustness. (98%)Shuchang Tao; Huawei Shen; Qi Cao; Yunfan Wu; Liang Hou; Xueqi Cheng
XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars. (87%)Abhiroop Bhattacharjee; Abhishek Moitra; Priyadarshini Panda
Tight Auditing of Differentially Private Machine Learning. (41%)Milad Nasr; Jamie Hayes; Thomas Steinke; Borja Balle; Florian Tramèr; Matthew Jagielski; Nicholas Carlini; Andreas Terzis
Field-sensitive Data Flow Integrity. (1%)So Shizukuishi; Yoshitaka Arahori; Katsuhiko Gondow
Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection. (1%)Mouxiao Huang; Yu Qiao
2023-02-14
Regret-Based Defense in Adversarial Reinforcement Learning. (99%)Roman Belaire; Pradeep Varakantham; Thanh Nguyen; David Lo
On the Role of Randomization in Adversarially Robust Classification. (99%)Lucas Gnecco-Heredia; Yann Chevaleyre; Benjamin Negrevergne; Laurent Meunier; Muni Sreenivas Pydi
Attacking Fake News Detectors via Manipulating News Social Engagement. (83%)Haoran Wang; Yingtong Dou; Canyu Chen; Lichao Sun; Philip S. Yu; Kai Shu
An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning. (31%)Shenghui Li; Edith C.-H. Ngai; Thiemo Voigt
A Modern Look at the Relationship between Sharpness and Generalization. (10%)Maksym Andriushchenko; Francesco Croce; Maximilian Müller; Matthias Hein; Nicolas Flammarion
Bounding Training Data Reconstruction in DP-SGD. (8%)Jamie Hayes; Saeed Mahloujifar; Borja Balle
Security Defense For Smart Contracts: A Comprehensive Survey. (1%)Nikolay Ivanov; Chenning Li; Qiben Yan; Zhiyuan Sun; Zhichao Cao; Xiapu Luo
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises. (1%)Chenglei Si; Zhengyan Zhang; Yingfa Chen; Xiaozhi Wang; Zhiyuan Liu; Maosong Sun
2023-02-13
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data. (98%)Gorka Abad; Oguzhan Ersoy; Stjepan Picek; Aitor Urbieta
Raising the Cost of Malicious AI-Powered Image Editing. (82%)Hadi Salman; Alaa Khaddaj; Guillaume Leclerc; Andrew Ilyas; Aleksander Madry
Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge. (8%)Ali Al-Kaswan; Maliheh Izadi; Arie van Deursen
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions. (1%)Marwan Omar
2023-02-12
TextDefense: Adversarial Text Detection based on Word Importance Entropy. (99%)Lujia Shen; Xuhong Zhang; Shouling Ji; Yuwen Pu; Chunpeng Ge; Xing Yang; Yanghe Feng
2023-02-11
Mutation-Based Adversarial Attacks on Neural Text Detectors. (69%)Gongbo Liang; Jesus Guerrero; Izzat Alsmadi
HateProof: Are Hateful Meme Detection Systems really Robust? (13%)Piush Aggarwal; Pranit Chawla; Mithun Das; Punyajoy Saha; Binny Mathew; Torsten Zesch; Animesh Mukherjee
MTTM: Metamorphic Testing for Textual Content Moderation Software. (2%)Wenxuan Wang; Jen-tse Huang; Weibin Wu; Jianping Zhang; Yizhan Huang; Shuqing Li; Pinjia He; Michael Lyu
Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play. (1%)Jeremiah Zhe Liu; Krishnamurthy Dj Dvijotham; Jihyeon Lee; Quan Yuan; Martin Strobel; Balaji Lakshminarayanan; Deepak Ramachandran
High Recovery with Fewer Injections: Practical Binary Volumetric Injection Attacks against Dynamic Searchable Encryption. (1%)Xianglong Zhang; Wei Wang; Peng Xu; Laurence T. Yang; Kaitai Liang
2023-02-10
Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Unnoticeable Backdoor Attacks on Graph Neural Networks. (80%)Enyan Dai; Minhua Lin; Xiang Zhang; Suhang Wang
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks. (73%)Piotr Gaiński; Klaudia Bałazy
2023-02-09
IB-RAR: Information Bottleneck as Regularizer for Adversarial Robustness. (98%)Xiaoyun Xu; Guilherme Perin; Stjepan Picek
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples. (98%)Chumeng Liang; Xiaoyu Wu; Yang Hua; Jiaru Zhang; Yiming Xue; Tao Song; Zhengui Xue; Ruhui Ma; Haibing Guan
Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines. (81%)Eugene Bagdasaryan; Vitaly Shmatikov
Imperceptible Sample-Specific Backdoor to DNN with Denoising Autoencoder. (62%)Xiangqi Wang; Mingfu Xue; Kewei Chen; Jing Xu; Wenmao Liu; Leo Yu Zhang; Yushu Zhang
Better Diffusion Models Further Improve Adversarial Training. (22%)Zekai Wang; Tianyu Pang; Chao Du; Min Lin; Weiwei Liu; Shuicheng Yan
Augmenting NLP data to counter Annotation Artifacts for NLI Tasks. (16%)Armaan Singh Bhullar
Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks. (1%)Pengfei Yang; Zhiming Chi; Zongxin Liu; Mengyu Zhao; Cheng-Chao Huang; Shaowei Cai; Lijun Zhang
2023-02-08
WAT: Improve the Worst-class Robustness in Adversarial Training. (99%)Boqi Li; Weiwei Liu
Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples. (99%)Andrew C. Cullen; Shijie Liu; Paul Montague; Sarah M. Erfani; Benjamin I. P. Rubinstein
Shortcut Detection with Variational Autoencoders. (13%)Nicolas M. Müller; Simon Roschmann; Shahbaz Khan; Philip Sperl; Konstantin Böttinger
Continuous Learning for Android Malware Detection. (13%)Yizheng Chen; Zhoujie Ding; David Wagner
Training-free Lexical Backdoor Attacks on Language Models. (8%)Yujin Huang; Terry Yue Zhuo; Qiongkai Xu; Han Hu; Xingliang Yuan; Chunyang Chen
On Function-Coupled Watermarks for Deep Neural Networks. (2%)Xiangyu Wen; Yu Li; Wei Jiang; Qiang Xu
Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy. (1%)Cheolhyoung Lee; Kyunghyun Cho
2023-02-07
Toward Face Biometric De-identification using Adversarial Examples. (98%)Mahdi Ghafourian; Julian Fierrez; Luis Felipe Gomez; Ruben Vera-Rodriguez; Aythami Morales; Zohra Rezgui; Raymond Veldhuis
Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence. (83%)Simin Li; Jun Guo; Jingqiao Xiu; Yuwei Zheng; Pu Feng; Xin Yu; Aishan Liu; Yaodong Yang; Bo An; Wenjun Wu; Xianglong Liu
Membership Inference Attacks against Diffusion Models. (64%)Tomoya Matsumoto; Takayuki Miura; Naoto Yanai
Temporal Robustness against Data Poisoning. (12%)Wenxiao Wang; Soheil Feizi
Robustness Implies Fairness in Casual Algorithmic Recourse. (2%)Ahmad-Reza Ehyaei; Amir-Hossein Karimi; Bernhard Schölkopf; Setareh Maghsudi
Low-Latency Communication using Delay-Aware Relays Against Reactive Adversaries. (1%)Vivek Chaudhary; J. Harshan
2023-02-06
Less is More: Understanding Word-level Textual Adversarial Attack via n-gram Frequency Descend. (99%)Ning Lu; Shengcai Liu; Zhirui Zhang; Qi Wang; Haifeng Liu; Ke Tang
SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency. (92%)Junfeng Guo; Yiming Li; Xun Chen; Hanqing Guo; Lichao Sun; Cong Liu
Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness. (87%)Yuancheng Xu; Yanchao Sun; Micah Goldblum; Tom Goldstein; Furong Huang
Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks. (75%)Jan Schuchardt; Aleksandar Bojchevski; Johannes Gasteiger; Stephan Günnemann
GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks. (67%)Salah Ghamizi; Jingfeng Zhang; Maxime Cordy; Mike Papadakis; Masashi Sugiyama; Yves Le Traon
Target-based Surrogates for Stochastic Optimization. (1%)Jonathan Wilder Lavington; Sharan Vaswani; Reza Babanezhad; Mark Schmidt; Nicolas Le Roux
Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks. (1%)Emanuele Ledda; Giorgio Fumera; Fabio Roli
One-shot Empirical Privacy Estimation for Federated Learning. (1%)Galen Andrew; Peter Kairouz; Sewoong Oh; Alina Oprea; H. Brendan McMahan; Vinith Suriyakumar
2023-02-05
On the Role of Contrastive Representation Learning in Adversarial Robustness: An Empirical Study. (54%)Fatemeh Ghofrani; Mehdi Yaghouti; Pooyan Jamshidi
Leaving Reality to Imagination: Robust Classification via Generated Datasets. (2%)Hritik Bansal; Aditya Grover
2023-02-04
CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks. (99%)Shashank Agnihotri; Steffen Jung; Margret Keuper
A Minimax Approach Against Multi-Armed Adversarial Attacks Detection. (86%)Federica Granese; Marco Romanelli; Siddharth Garg; Pablo Piantanida
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Decision Tree Models. (84%)Abdullah Caglar Oksuz; Anisa Halimi; Erman Ayday
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks. (83%)Keivan Rezaei; Kiarash Banihashem; Atoosa Chegini; Soheil Feizi
Certified Robust Control under Adversarial Perturbations. (78%)Jinghan Yang; Hunmin Kim; Wenbin Wan; Naira Hovakimyan; Yevgeniy Vorobeychik
2023-02-03
TextShield: Beyond Successfully Detecting Adversarial Sentences in Text Classification. (96%)Lingfeng Shen; Ze Zhang; Haiyun Jiang; Ying Chen
DeTorrent: An Adversarial Padding-only Traffic Analysis Defense. (73%)James K Holland; Jason Carpenter; Se Eun Oh; Nicholas Hopper
SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification. (61%)Gorka Abad; Jing Xu; Stefanos Koffas; Behrad Tajalli; Stjepan Picek; Mauro Conti
Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels. (15%)Simone Bombari; Shayan Kiyani; Marco Mondelli
Asymmetric Certified Robustness via Feature-Convex Neural Networks. (8%)Samuel Pfrommer; Brendon G. Anderson; Julien Piet; Somayeh Sojoudi
Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks. (2%)Zeyu Qin; Liuyi Yao; Daoyuan Chen; Yaliang Li; Bolin Ding; Minhao Cheng
BarrierBypass: Out-of-Sight Clean Voice Command Injection Attacks through Physical Barriers. (2%)Payton Walker; Tianfang Zhang; Cong Shi; Nitesh Saxena; Yingying Chen
From Robustness to Privacy and Back. (2%)Hilal Asi; Jonathan Ullman; Lydia Zakynthinou
DCA: Delayed Charging Attack on the Electric Shared Mobility System. (1%)Shuocheng Guo; Hanlin Chen; Mizanur Rahman; Xinwu Qian
Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning. (1%)Jacob Alexander Markson Brown; Xi Jiang; Van Tran; Arjun Nitin Bhagoji; Nguyen Phong Hoang; Nick Feamster; Prateek Mittal; Vinod Yegneswaran
2023-02-02
TransFool: An Adversarial Attack against Neural Machine Translation Models. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense. (99%)Zunzhi You; Daochang Liu; Bohyung Han; Chang Xu
On the Robustness of Randomized Ensembles to Adversarial Perturbations. (75%)Hassan Dbouk; Naresh R. Shanbhag
Provably Bounding Neural Network Preimages. (64%)Suhas Kotha; Christopher Brix; Zico Kolter; Krishnamurthy Dvijotham; Huan Zhang
A sliced-Wasserstein distance-based approach for out-of-class-distribution detection. (62%)Mohammad Shifat E Rabbi; Abu Hasnat Mohammad Rubaiyat; Yan Zhuang; Gustavo K Rohde
Effective Robustness against Natural Distribution Shifts for Models with Different Training Data. (13%)Zhouxing Shi; Nicholas Carlini; Ananth Balashankar; Ludwig Schmidt; Cho-Jui Hsieh; Alex Beutel; Yao Qin
SPECWANDS: An Efficient Priority-based Scheduler Against Speculation Contention Attacks. (10%)Bowen Tang; Chenggang Wu; Pen-Chung Yew; Yinqian Zhang; Mengyao Xie; Yuanming Lai; Yan Kang; Wei Wang; Qiang Wei; Zhe Wang
Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation. (2%)Hyoungwook Nam; Raghavendra Pradyumna Pothukuchi; Bo Li; Nam Sung Kim; Josep Torrellas
Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications. (1%)Chengyu Dong
Dataset Distillation Fixes Dataset Reconstruction Attacks. (1%)Noel Loo; Ramin Hasani; Mathias Lechner; Daniela Rus
2023-02-01
Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks. (99%)Xiaoyun Xu; Oguzhan Ersoy; Stjepan Picek
Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection. (92%)Aqib Rashid; Jose Such
Exploring Semantic Perturbations on Grover. (56%)Ziqing Ji; Pranav Kulkarni; Marko Neskovic; Kevin Nolan; Yan Xu
BackdoorBox: A Python Toolbox for Backdoor Learning. (10%)Yiming Li; Mengxi Ya; Yang Bai; Yong Jiang; Shu-Tao Xia
2023-01-31
Reverse engineering adversarial attacks with fingerprints from adversarial examples. (99%)David Aaron Nicholson; Vincent Emanuele
The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models. (97%)Son Quoc Tran; Phong Nguyen-Thuan Do; Uyen Le; Matt Kretchmar
Are Defenses for Graph Neural Networks Robust? (80%)Felix Mujkanovic; Simon Geisler; Stephan Günnemann; Aleksandar Bojchevski
Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks. (75%)Zhiyuan Cheng; James Liang; Guanhong Tao; Dongfang Liu; Xiangyu Zhang
Fairness-aware Vision Transformer via Debiased Self-Attention. (50%)Yao Qiang; Chengyin Li; Prashant Khanduri; Dongxiao Zhu
Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond. (47%)Meyer Scetbon; Elvis Dohmatob
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression. (12%)Zhuoran Liu; Zhengyu Zhao; Martha Larson
DRAINCLoG: Detecting Rogue Accounts with Illegally-obtained NFTs using Classifiers Learned on Graphs. (1%)Hanna Kim; Jian Cui; Eugene Jang; Chanhee Lee; Yongjae Lee; Jin-Woo Chung; Seungwon Shin
Identifying the Hazard Boundary of ML-enabled Autonomous Systems Using Cooperative Co-Evolutionary Search. (1%)Sepehr Sharifi; Donghwan Shin; Lionel C. Briand; Nathan Aschbacher
2023-01-30
Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness. (99%)Bao Gia Doan; Shuiqiao Yang; Paul Montague; Olivier De Vel; Tamas Abraham; Seyit Camtepe; Salil S. Kanhere; Ehsan Abbasnejad; Damith C. Ranasinghe
Improving Adversarial Transferability with Scheduled Step Size and Dual Example. (99%)Zeliang Zhang; Peihan Liu; Xiaosen Wang; Chenliang Xu
Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification. (99%)João Vitorino; Isabel Praça; Eva Maia
RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion. (99%)Zhuoqun Huang; Neil G. Marchant; Keane Lucas; Lujo Bauer; Olga Ohrimenko; Benjamin I. P. Rubinstein
Identifying Adversarially Attackable and Robust Samples. (99%)Vyas Raina; Mark Gales
On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex. (98%)Terry Yue Zhuo; Zhuang Li; Yujin Huang; Fatemeh Shiri; Weiqing Wang; Gholamreza Haffari; Yuan-Fang Li
Anchor-Based Adversarially Robust Zero-Shot Learning Driven by Language. (96%)Xiao Li; Wei Zhang; Yining Liu; Zhanhao Hu; Bo Zhang; Xiaolin Hu
Inference Time Evidences of Adversarial Attacks for Forensic on Transformers. (87%)Hugo Lemarchant; Liangzi Li; Yiming Qian; Yuta Nakashima; Hajime Nagahara
On the Efficacy of Metrics to Describe Adversarial Attacks. (82%)Tommaso Puccetti; Tommaso Zoppi; Andrea Ceccarelli
Benchmarking Robustness to Adversarial Image Obfuscations. (74%)Florian Stimberg; Ayan Chakrabarti; Chun-Ta Lu; Hussein Hazimeh; Otilia Stretcu; Wei Qiao; Yintao Liu; Merve Kaya; Cyrus Rashtchian; Ariel Fuxman; Mehmet Tek; Sven Gowal
Extracting Training Data from Diffusion Models. (5%)Nicholas Carlini; Jamie Hayes; Milad Nasr; Matthew Jagielski; Vikash Sehwag; Florian Tramèr; Borja Balle; Daphne Ippolito; Eric Wallace
Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive Learning. (2%)Chaoxi Niu; Guansong Pang; Ling Chen
M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System. (1%)Chenqi Kong; Kexin Zheng; Yibing Liu; Shiqi Wang; Anderson Rocha; Haoliang Li
2023-01-29
Unlocking Deterministic Robustness Certification on ImageNet. (98%)Kai Hu; Andy Zou; Zifan Wang; Klas Leino; Matt Fredrikson
Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid. (93%)Farhin Farhad Riya; Shahinul Hoque; Jinyuan Stella Sun; Jiangnan Li; Hairong Qi
Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing. (83%)Yatong Bai; Brendon G. Anderson; Aerin Kim; Somayeh Sojoudi
Uncovering Adversarial Risks of Test-Time Adaptation. (82%)Tong Wu; Feiran Jia; Xiangyu Qi; Jiachen T. Wang; Vikash Sehwag; Saeed Mahloujifar; Prateek Mittal
Adversarial Attacks on Adversarial Bandits. (69%)Yuzhe Ma; Zhijin Zhou
Towards Verifying the Geometric Robustness of Large-scale Neural Networks. (54%)Fu Wang; Peipei Xu; Wenjie Ruan; Xiaowei Huang
Lateralized Learning for Multi-Class Visual Classification Tasks. (13%)Abubakar Siddique; Will N. Browne; Gina M. Grimshaw
Diverse, Difficult, and Odd Instances (D2O): A New Test Set for Object Classification. (3%)Ali Borji
Adversarial Style Augmentation for Domain Generalization. (2%)Yabin Zhang; Bin Deng; Ruihuang Li; Kui Jia; Lei Zhang
Confidence-Aware Calibration and Scoring Functions for Curriculum Learning. (1%)Shuang Ao; Stefan Rueger; Advaith Siddharthan
2023-01-28
Node Injection for Class-specific Network Poisoning. (82%)Ansh Kumar Sharma; Rahul Kukreja; Mayank Kharbanda; Tanmoy Chakraborty
Out-of-distribution Detection with Energy-based Models. (82%)Sven Elflein
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering. (13%)Rui Zhu; Di Tang; Siyuan Tang; Guanhong Tao; Shiqing Ma; Xiaofeng Wang; Haixu Tang
Selecting Models based on the Risk of Damage Caused by Adversarial Attacks. (1%)Jona Klemenc; Holger Trittenbach
2023-01-27
Semantic Adversarial Attacks on Face Recognition through Significant Attributes. (99%)Yasmeen M. Khedr; Yifeng Xiong; Kun He
Targeted Attacks on Timeseries Forecasting. (99%)Yuvaraj Govindarajulu; Avinash Amballa; Pavan Kulkarni; Manojkumar Parmar
Adapting Step-size: A Unified Perspective to Analyze and Improve Gradient-based Methods for Adversarial Attacks. (98%)Wei Tao; Lei Bao; Long Sheng; Gaowei Wu; Qing Tao
PECAN: A Deterministic Certified Defense Against Backdoor Attacks. (97%)Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Vertex-based reachability analysis for verifying ReLU deep neural networks. (93%)João Zago; Eduardo Camponogara; Eric Antonelo
OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks. (92%)Xingwu Guo; Ziwei Zhou; Yueling Zhang; Guy Katz; Min Zhang
PCV: A Point Cloud-Based Network Verifier. (88%)Arup Kumar Sarker; Farzana Yasmin Ahmad; Matthew B. Dwyer
Robust Transformer with Locality Inductive Bias and Feature Normalization. (88%)Omid Nejati Manzari; Hossein Kashiani; Hojat Asgarian Dehkordi; Shahriar Baradaran Shokouhi
Analyzing Robustness of the Deep Reinforcement Learning Algorithm in Ramp Metering Applications Considering False Data Injection Attack and Defense. (87%)Diyi Liu; Lanmin Liu; Lee D Han
Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers. (80%)Sungmin Cha; Sungjun Cho; Dasol Hwang; Honglak Lee; Taesup Moon; Moontae Lee
Certified Invertibility in Neural Networks via Mixed-Integer Programming. (76%)Tianqi Cui; Thomas Bertalan; George J. Pappas; Manfred Morari; Ioannis G. Kevrekidis; Mahyar Fazlyab
2023-01-26
Attacking Important Pixels for Anchor-free Detectors. (99%)Yunxu Xie; Shu Hu; Xin Wang; Quanyu Liao; Bin Zhu; Xi Wu; Siwei Lyu
Certified Interpretability Robustness for Class Activation Mapping. (92%)Alex Gu; Tsui-Wei Weng; Pin-Yu Chen; Sijia Liu; Luca Daniel
Minerva: A File-Based Ransomware Detector. (68%)Dorjan Hitaj; Giulio Pagnotta; Fabio De Gaspari; Lorenzo De Carli; Luigi V. Mancini
Interaction-level Membership Inference Attack Against Federated Recommender Systems. (31%)Wei Yuan; Chaoqun Yang; Quoc Viet Hung Nguyen; Lizhen Cui; Tieke He; Hongzhi Yin
2023-01-25
On the Adversarial Robustness of Camera-based 3D Object Detection. (99%)Shaoyuan Xie; Zichao Li; Zeyu Wang; Cihang Xie
RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks. (99%)Ayesha Siddique; Ripan Kumar Kundu; Gautam Raj Mode; Khaza Anuarul Hoque
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing. (98%)Jiali Wei; Ming Fan; Wenjing Jiao; Wuxia Jin; Ting Liu
A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection. (96%)Mohammad Azizmalayeri; Arman Zarei; Alireza Isavand; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
A Study on FGSM Adversarial Training for Neural Retrieval. (75%)Simon Lupart; Stéphane Clinchant
Distilling Cognitive Backdoor Patterns within an Image. (5%)Hanxun Huang; Xingjun Ma; Sarah Erfani; James Bailey
Connecting metrics for shape-texture knowledge in computer vision. (1%)Tiago Oliveira; Tiago Marques; Arlindo L. Oliveira
2023-01-24
Blockchain-aided Secure Semantic Communication for AI-Generated Content in Metaverse. (13%)Yijing Lin; Hongyang Du; Dusit Niyato; Jiangtian Nie; Jiayi Zhang; Yanyu Cheng; Zhaohui Yang
Learning Effective Strategies for Moving Target Defense with Switching Costs. (1%)Vignesh Viswanathan; Megha Bose; Praveen Paruchuri
Data Augmentation Alone Can Improve Adversarial Training. (1%)Lin Li; Michael Spratling
2023-01-23
DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics. (99%)Onat Gungor; Tajana Rosing; Baris Aksanli
Practical Adversarial Attacks Against AI-Driven Power Allocation in a Distributed MIMO Network. (92%)Ömer Faruk Tuna; Fehmi Emre Kadan; Leyli Karaçay
BayBFed: Bayesian Backdoor Defense for Federated Learning. (78%)Kavita Kumari; Phillip Rieger; Hossein Fereidooni; Murtuza Jadliwala; Ahmad-Reza Sadeghi
Backdoor Attacks in Peer-to-Peer Federated Learning. (68%)Gokberk Yar; Cristina Nita-Rotaru; Alina Oprea
2023-01-22
Provable Unrestricted Adversarial Training without Compromise with Generalizability. (99%)Lilin Zhang; Ning Yang; Yanchao Sun; Philip S. Yu
ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning. (8%)Shangqing Liu; Bozhi Wu; Xiaofei Xie; Guozhu Meng; Yang Liu
2023-01-20
Limitations of Piecewise Linearity for Efficient Robustness Certification. (95%)Klas Leino
Towards Understanding How Self-training Tolerates Data Backdoor Poisoning. (16%)Soumyadeep Pal; Ren Wang; Yuguang Yao; Sijia Liu
Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness. (8%)Shuaichen Chang; Jun Wang; Mingwen Dong; Lin Pan; Henghui Zhu; Alexander Hanbo Li; Wuwei Lan; Sheng Zhang; Jiarong Jiang; Joseph Lilien; Steve Ash; William Yang Wang; Zhiguo Wang; Vittorio Castelli; Patrick Ng; Bing Xiang
Defending SDN against packet injection attacks using deep learning. (2%)Anh Tuan Phu; Bo Li; Faheem Ullah; Tanvir Ul Huque; Ranesh Naha; Ali Babar; Hung Nguyen
2023-01-19
On the Vulnerability of Backdoor Defenses for Federated Learning. (62%)Pei Fang; Jinghui Chen
On the Relationship Between Information-Theoretic Privacy Metrics And Probabilistic Information Privacy. (31%)Chong Xiao Wang; Wee Peng Tay
RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation. (16%)Utkarsh Nath; Yancheng Wang; Yingzhen Yang
Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study. (1%)Adiel Ashrov; Guy Katz
2023-01-17
Denoising Diffusion Probabilistic Models as a Defense against Adversarial Attacks. (98%)Lars Lien Ankile; Anna Midgley; Sebastian Weisshaar
Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness. (68%)Ezgi Korkmaz
Label Inference Attack against Split Learning under Regression Setting. (8%)Shangyu Xie; Xin Yang; Yuanshun Yao; Tianyi Liu; Taiqing Wang; Jiankai Sun
2023-01-16
$\beta$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search. (1%)Peng Ye; Tong He; Baopu Li; Tao Chen; Lei Bai; Wanli Ouyang
Modeling Uncertain Feature Representation for Domain Generalization. (1%)Xiaotong Li; Zixuan Hu; Jun Liu; Yixiao Ge; Yongxing Dai; Ling-Yu Duan
2023-01-15
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense. (4%)Siyuan Cheng; Guanhong Tao; Yingqi Liu; Shengwei An; Xiangzhe Xu; Shiwei Feng; Guangyu Shen; Kaiyuan Zhang; Qiuling Xu; Shiqing Ma; Xiangyu Zhang
2023-01-14
Adaptive Deep Neural Network Inference Optimization with EENet. (1%)Fatih Ilhan; Ka-Ho Chow; Sihao Hu; Tiansheng Huang; Selim Tekin; Wenqi Wei; Yanzhao Wu; Myungjin Lee; Ramana Kompella; Hugo Latapie; Gaowen Liu; Ling Liu
2023-01-13
On the feasibility of attacking Thai LPR systems with adversarial examples. (99%)Chissanupong Jiamsuchon; Jakapan Suaboot; Norrathep Rattanavipanon
2023-01-12
Security-Aware Approximate Spiking Neural Networks. (87%)Syed Tihaam Ahmad; Ayesha Siddique; Khaza Anuarul Hoque
Jamming Attacks on Decentralized Federated Learning in General Multi-Hop Wireless Networks. (3%)Yi Shi; Yalin E. Sagduyu; Tugba Erpek
2023-01-11
Phase-shifted Adversarial Training. (82%)Yeachan Kim; Seongyeon Kim; Ihyeok Seo; Bonggun Shin
Universal Detection of Backdoor Attacks via Density-based Clustering and Centroids Analysis. (78%)Wei Guo; Benedetta Tondi; Mauro Barni
2023-01-10
On the Robustness of AlphaFold: A COVID-19 Case Study. (73%)Ismail Alkhouri; Sumit Jha; Andre Beckus; George Atia; Alvaro Velasquez; Rickard Ewetz; Arvind Ramanathan; Susmit Jha
CDA: Contrastive-adversarial Domain Adaptation. (38%)Nishant Yadav; Mahbubul Alam; Ahmed Farahat; Dipanjan Ghosh; Chetan Gupta; Auroop R. Ganguly
User-Centered Security in Natural Language Processing. (12%)Chris Emmery
Leveraging Diffusion For Strong and High Quality Face Morphing Attacks. (3%)Zander W. Blasingame; Chen Liu
2023-01-09
Over-The-Air Adversarial Attacks on Deep Learning Wi-Fi Fingerprinting. (99%)Fei Xiao; Yong Huang; Yingying Zuo; Wei Kuang; Wei Wang
On the Susceptibility and Robustness of Time Series Models through Adversarial Attack and Defense. (98%)Asadullah Hill Galib; Bidhan Bashyal
Is Federated Learning a Practical PET Yet? (13%)Franziska Boenisch; Adam Dziedzic; Roei Schuster; Ali Shahin Shamsabadi; Ilia Shumailov; Nicolas Papernot
SoK: Hardware Defenses Against Speculative Execution Attacks. (1%)Guangyuan Hu; Zecheng He; Ruby Lee
2023-01-08
RobArch: Designing Robust Architectures against Adversarial Attacks. (76%)ShengYun Peng; Weilin Xu; Cory Cornelius; Kevin Li; Rahul Duggal; Duen Horng Chau; Jason Martin
Facial Misrecognition Systems: Simple Weight Manipulations Force DNNs to Err Only on Specific Persons. (2%)Irad Zehavi; Roee Nitzan; Adi Shamir
MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope. (1%)Jingwei Zhang; Farzan Farnia
2023-01-07
REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. (99%)Wenjie Qu; Jinyuan Jia; Neil Zhenqiang Gong
Adversarial training with informed data selection. (99%)Marcele O. K. Mendonça; Javier Maroto; Pascal Frossard; Paulo S. R. Diniz
2023-01-06
Code Difference Guided Adversarial Example Generation for Deep Code Models. (99%)Zhao Tian; Junjie Chen; Zhi Jin
Stealthy Backdoor Attack for Code Models. (98%)Zhou Yang; Bowen Xu; Jie M. Zhang; Hong Jin Kang; Jieke Shi; Junda He; David Lo
2023-01-05
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack. (98%)Tzvi Lederer; Gallil Maimon; Lior Rokach
gRoMA: a Tool for Measuring the Global Robustness of Deep Neural Networks. (96%)Natan Levy; Raz Yerushalmi; Guy Katz
Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks. (61%)Yan Scholten; Jan Schuchardt; Simon Geisler; Aleksandar Bojchevski; Stephan Günnemann
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. (4%)Hojjat Aghakhani; Wei Dai; Andre Manoel; Xavier Fernandes; Anant Kharkar; Christopher Kruegel; Giovanni Vigna; David Evans; Ben Zorn; Robert Sim
Can Large Language Models Change User Preference Adversarially? (1%)Varshini Subhash
2023-01-04
Availability Adversarial Attack and Countermeasures for Deep Learning-based Load Forecasting. (98%)Wangkun Xu; Fei Teng
Beckman Defense. (84%)A. V. Subramanyam
GUAP: Graph Universal Attack Through Adversarial Patching. (81%)Xiao Zang; Jie Chen; Bo Yuan
Enhancement attacks in biomedical machine learning. (1%)Matthew Rosenblatt; Javid Dadashkarimi; Dustin Scheinost
2023-01-03
Explainability and Robustness of Deep Visual Classification Models. (92%)Jindong Gu
Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition. (83%)Hasan Abed Al Kader Hammoud; Shuming Liu; Mohammed Alkhrashi; Fahad AlBalawi; Bernard Ghanem
Backdoor Attacks Against Dataset Distillation. (50%)Yugeng Liu; Zheng Li; Michael Backes; Yun Shen; Yang Zhang
Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector. (33%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam
2023-01-02
Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos. (92%)Wei Xingxing; Wang Songping; Yan Huanqian
2023-01-01
Generalizable Black-Box Adversarial Attack with Meta Learning. (99%)Fei Yin; Yong Zhang; Baoyuan Wu; Yan Feng; Jingyi Zhang; Yanbo Fan; Yujiu Yang
ExploreADV: Towards exploratory attack for Neural Networks. (99%)Tianzuo Luo; Yuyi Zhong; Siaucheng Khoo
Trojaning semi-supervised learning model via poisoning wild images on the web. (47%)Le Feng; Zhenxing Qian; Sheng Li; Xinpeng Zhang
2022-12-30
Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence. (99%)Han Fang; Jiyi Zhang; Yupeng Qiu; Ke Xu; Chengfang Fang; Ee-Chien Chang
Guidance Through Surrogate: Towards a Generic Diagnostic Attack. (99%)Muzammal Naseer; Salman Khan; Fatih Porikli; Fahad Shahbaz Khan
Defense Against Adversarial Attacks on Audio DeepFake Detection. (91%)Piotr Kawa; Marcin Plata; Piotr Syga
Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification. (82%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Gérôme Bovet; Gregorio Martínez Pérez
Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples. (22%)Jiaming Zhang; Xingjun Ma; Qi Yi; Jitao Sang; Yugang Jiang; Yaowei Wang; Changsheng Xu
Targeted k-node Collapse Problem: Towards Understanding the Robustness of Local k-core Structure. (1%)Yuqian Lv; Bo Zhou; Jinhuan Wang; Qi Xuan
2022-12-29
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. (68%)Giovanni Apruzzese; Hyrum S. Anderson; Savino Dambra; David Freeman; Fabio Pierazzi; Kevin A. Roundy
Detection of out-of-distribution samples using binary neuron activation patterns. (11%)Bartlomiej Olber; Krystian Radlak; Adam Popowicz; Michal Szczepankiewicz; Krystian Chachula
2022-12-28
Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions. (99%)Kamilya Smagulova; Mohammed E. Fouda; Ahmed Eltawil
Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks. (98%)Junlin Wu; Hussein Sibai; Yevgeniy Vorobeychik
Publishing Efficient On-device Models Increases Adversarial Vulnerability. (95%)Sanghyun Hong; Nicholas Carlini; Alexey Kurakin
Differentiable Search of Accurate and Robust Architectures. (92%)Yuwei Ou; Xiangning Xie; Shangce Gao; Yanan Sun; Kay Chen Tan; Jiancheng Lv
Robust Ranking Explanations. (76%)Chao Chen; Chenghua Guo; Guixiang Ma; Xi Zhang; Sihong Xie
Evaluating Generalizability of Deep Learning Models Using Indian-COVID-19 CT Dataset. (1%)Suba S; Nita Parekh; Ramesh Loganathan; Vikram Pudi; Chinnababu Sunkavalli
2022-12-27
EDoG: Adversarial Edge Detection For Graph Neural Networks. (98%)Xiaojun Xu; Yue Yu; Hanzhang Wang; Alok Lal; Carl A. Gunter; Bo Li
Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles. (86%)Hyung-Jin Yoon; Hamidreza Jafarnejadsani; Petros Voulgaris
Sparse Mixture Once-for-all Adversarial Training for Efficient In-Situ Trade-Off Between Accuracy and Robustness of DNNs. (62%)Souvik Kundu; Sairam Sundaresan; Sharath Nittur Sridhar; Shunlin Lu; Han Tang; Peter A. Beerel
XMAM:X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning. (56%)Jianyi Zhang; Fangjiao Zhang; Qichao Jin; Zhiqiang Wang; Xiaodong Lin; Xiali Hei
2022-12-25
Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks. (99%)Xingxing Wei; Ying Guo; Jie Yu; Bo Zhang
2022-12-24
Frequency Regularization for Improving Adversarial Robustness. (99%)Binxiao Huang; Chaofan Tao; Rui Lin; Ngai Wong
2022-12-23
Out-of-Distribution Detection with Reconstruction Error and Typicality-based Penalty. (61%)Genki Osada; Takahashi Tsubasa; Budrul Ahsan; Takashi Nishide
Towards Scalable Physically Consistent Neural Networks: an Application to Data-driven Multi-zone Thermal Building Models. (1%)Natale Loris Di; Bratislav Svetozarevic; Philipp Heer; Colin Neil Jones
2022-12-22
Adversarial Machine Learning and Defense Game for NextG Signal Classification with Deep Learning. (98%)Yalin E. Sagduyu
Aliasing is a Driver of Adversarial Attacks. (80%)Adrián Rodríguez-Muñoz; Antonio Torralba
GAN-based Domain Inference Attack. (2%)Yuechun Gu; Keke Chen
Hybrid Quantum-Classical Generative Adversarial Network for High Resolution Image Generation. (1%)Shu Lok Tsang; Maxwell T. West; Sarah M. Erfani; Muhammad Usman
2022-12-21
Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective. (80%)Shihua Huang; Zhichao Lu; Kalyanmoy Deb; Vishnu Naresh Boddeti
Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks. (67%)Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus; Aylin Yener
A Theoretical Study of The Effects of Adversarial Attacks on Sparse Regression. (13%)Deepak Maurya; Jean Honorio
2022-12-20
A Comprehensive Study and Comparison of the Robustness of 3D Object Detectors Against Adversarial Attacks. (98%)Yifan Zhang; Junhui Hou; Yixuan Yuan
Multi-head Uncertainty Inference for Adversarial Attack Detection. (98%)Yuqi Yang; Songyun Yang; Jiyang Xie; Zhongwei Si; Kai Guo; Ke Zhang; Kongming Liang
In and Out-of-Domain Text Adversarial Robustness via Label Smoothing. (98%)Yahan Yang; Soham Dan; Dan Roth; Insup Lee
Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks. (96%)Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus; Aylin Yener
Unleashing the Power of Visual Prompting At the Pixel Level. (92%)Junyang Wu; Xianhang Li; Chen Wei; Huiyu Wang; Alan Yuille; Yuyin Zhou; Cihang Xie
Learned Systems Security. (78%)Roei Schuster; Jin Peng Zhou; Paul Grubbs; Thorsten Eisenhofer; Nicolas Papernot
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks. (22%)Jimmy Z. Di; Jack Douglas; Jayadev Acharya; Gautam Kamath; Ayush Sekhari
ReCode: Robustness Evaluation of Code Generation Models. (10%)Shiqi Wang; Zheng Li; Haifeng Qian; Chenghao Yang; Zijian Wang; Mingyue Shang; Varun Kumar; Samson Tan; Baishakhi Ray; Parminder Bhatia; Ramesh Nallapati; Murali Krishna Ramanathan; Dan Roth; Bing Xiang
Defending Against Poisoning Attacks in Open-Domain Question Answering. (8%)Orion Weller; Aleem Khan; Nathaniel Weir; Dawn Lawrie; Durme Benjamin Van
SoK: Analysis of Root Causes and Defense Strategies for Attacks on Microarchitectural Optimizations. (5%)Nadja Ramhöj Holtryd; Madhavan Manivannan; Per Stenström
Walking Noise: On Layer-Specific Robustness of Neural Architectures against Noisy Computations and Associated Characteristic Learning Dynamics. (1%)Hendrik Borras; Bernhard Klein; Holger Fröning
DISCO: Distilling Phrasal Counterfactuals with Large Language Models. (1%)Zeming Chen; Qiyue Gao; Kyle Richardson; Antoine Bosselut; Ashish Sabharwal
2022-12-19
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization. (99%)Bairu Hou; Jinghan Jia; Yihua Zhang; Guanhua Zhang; Yang Zhang; Sijia Liu; Shiyu Chang
Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation. (75%)Xinyu Pi; Bing Wang; Yan Gao; Jiaqi Guo; Zhoujun Li; Jian-Guang Lou
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends. (50%)Yonghao Xu; Tao Bai; Weikang Yu; Shizhen Chang; Peter M. Atkinson; Pedram Ghamisi
Task-Oriented Communications for NextG: End-to-End Deep Learning and AI Security Aspects. (26%)Yalin E. Sagduyu; Sennur Ulukus; Aylin Yener
Flareon: Stealthy any2any Backdoor Injection via Poisoned Augmentation. (2%)Tianrui Qin; Xianghuan He; Xitong Gao; Yiren Zhao; Kejiang Ye; Cheng-Zhong Xu
Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning. (1%)Yingchun Wang; Jingcai Guo; Song Guo; Weizhan Zhang; Jie Zhang
2022-12-18
Estimating the Adversarial Robustness of Attributions in Text with Transformers. (99%)Adam Ivankay; Mattia Rigotti; Ivan Girardi; Chiara Marchiori; Pascal Frossard
Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks. (99%)Anqi Zhao; Tong Chu; Yahao Liu; Wen Li; Jingjing Li; Lixin Duan
Discrete Point-wise Attack Is Not Enough: Generalized Manifold Adversarial Attack for Face Recognition. (99%)Qian Li; Yuxiao Hu; Ye Liu; Dongxiao Zhang; Xin Jin; Yuntian Chen
Fine-Tuning Is All You Need to Mitigate Backdoor Attacks. (4%)Zeyang Sha; Xinlei He; Pascal Berrang; Mathias Humbert; Yang Zhang
2022-12-17
Confidence-aware Training of Smoothed Classifiers for Certified Robustness. (86%)Jongheon Jeong; Seojin Kim; Jinwoo Shin
A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness. (2%)Tiantian Feng; Rajat Hebbar; Nicholas Mehlman; Xuan Shi; Aditya Kommineni; Shrikanth Narayanan
HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation. (1%)Hongyi Yuan; Zheng Yuan; Chuanqi Tan; Fei Huang; Songfang Huang
2022-12-16
Adversarial Example Defense via Perturbation Grading Strategy. (99%)Shaowei Zhu; Wanli Lyu; Bin Li; Zhaoxia Yin; Bin Luo
WebAssembly Diversification for Malware Evasion. (5%)Javier Cabrera-Arteaga; Martin Monperrus; Tim Toady; Benoit Baudry
Biomedical image analysis competitions: The state of current participation practice. (4%)Matthias Eisenmann; Annika Reinke; Vivienn Weru; Minu Dietlinde Tizabi; Fabian Isensee; Tim J. Adler; Patrick Godau; Veronika Cheplygina; Michal Kozubek; Sharib Ali; Anubha Gupta; Jan Kybic; Alison Noble; Solórzano Carlos Ortiz de; Samiksha Pachade; Caroline Petitjean; Daniel Sage; Donglai Wei; Elizabeth Wilden; Deepak Alapatt; Vincent Andrearczyk; Ujjwal Baid; Spyridon Bakas; Niranjan Balu; Sophia Bano; Vivek Singh Bawa; Jorge Bernal; Sebastian Bodenstedt; Alessandro Casella; Jinwook Choi; Olivier Commowick; Marie Daum; Adrien Depeursinge; Reuben Dorent; Jan Egger; Hannah Eichhorn; Sandy Engelhardt; Melanie Ganz; Gabriel Girard; Lasse Hansen; Mattias Heinrich; Nicholas Heller; Alessa Hering; Arnaud Huaulmé; Hyunjeong Kim; Bennett Landman; Hongwei Bran Li; Jianning Li; Jun Ma; Anne Martel; Carlos Martín-Isla; Bjoern Menze; Chinedu Innocent Nwoye; Valentin Oreiller; Nicolas Padoy; Sarthak Pati; Kelly Payette; Carole Sudre; Wijnen Kimberlin van; Armine Vardazaryan; Tom Vercauteren; Martin Wagner; Chuanbo Wang; Moi Hoon Yap; Zeyun Yu; Chun Yuan; Maximilian Zenk; Aneeq Zia; David Zimmerer; Rina Bao; Chanyeol Choi; Andrew Cohen; Oleh Dzyubachyk; Adrian Galdran; Tianyuan Gan; Tianqi Guo; Pradyumna Gupta; Mahmood Haithami; Edward Ho; Ikbeom Jang; Zhili Li; Zhengbo Luo; Filip Lux; Sokratis Makrogiannis; Dominik Müller; Young-tack Oh; Subeen Pang; Constantin Pape; Gorkem Polat; Charlotte Rosalie Reed; Kanghyun Ryu; Tim Scherr; Vajira Thambawita; Haoyu Wang; Xinliang Wang; Kele Xu; Hung Yeh; Doyeob Yeo; Yixuan Yuan; Yan Zeng; Xin Zhao; Julian Abbing; Jannes Adam; Nagesh Adluru; Niklas Agethen; Salman Ahmed; Yasmina Al Khalil; Mireia Alenyà; Esa Alhoniemi; Chengyang An; Talha Anwar; Tewodros Weldebirhan Arega; Netanell Avisdris; Dogu Baran Aydogan; Yingbin Bai; Maria Baldeon Calisto; Berke Doga Basaran; Marcel Beetz; Cheng Bian; Hao Bian; Kevin Blansit; Louise Bloch; Robert Bohnsack; Sara Bosticardo; Jack Breen; Mikael Brudfors; Raphael Brüngel; Mariano Cabezas; Alberto Cacciola; Zhiwei Chen; Yucong Chen; Daniel Tianming Chen; Minjeong Cho; Min-Kook Choi; Chuantao Xie Chuantao Xie; Dana Cobzas; Julien Cohen-Adad; Jorge Corral Acero; Sujit Kumar Das; Oliveira Marcela de; Hanqiu Deng; Guiming Dong; Lars Doorenbos; Cory Efird; Di Fan; Mehdi Fatan Serj; Alexandre Fenneteau; Lucas Fidon; Patryk Filipiak; René Finzel; Nuno R. Freitas; Christoph M. Friedrich; Mitchell Fulton; Finn Gaida; Francesco Galati; Christoforos Galazis; Chang Hee Gan; Zheyao Gao; Shengbo Gao; Matej Gazda; Beerend Gerats; Neil Getty; Adam Gibicar; Ryan Gifford; Sajan Gohil; Maria Grammatikopoulou; Daniel Grzech; Orhun Güley; Timo Günnemann; Chunxu Guo; Sylvain Guy; Heonjin Ha; Luyi Han; Il Song Han; Ali Hatamizadeh; Tian He; Jimin Heo; Sebastian Hitziger; SeulGi Hong; SeungBum Hong; Rian Huang; Ziyan Huang; Markus Huellebrand; Stephan Huschauer; Mustaffa Hussain; Tomoo Inubushi; Ece Isik Polat; Mojtaba Jafaritadi; SeongHun Jeong; Bailiang Jian; Yuanhong Jiang; Zhifan Jiang; Yueming Jin; Smriti Joshi; Abdolrahim Kadkhodamohammadi; Reda Abdellah Kamraoui; Inha Kang; Junghwa Kang; Davood Karimi; April Khademi; Muhammad Irfan Khan; Suleiman A. Khan; Rishab Khantwal; Kwang-Ju Kim; Timothy Kline; Satoshi Kondo; Elina Kontio; Adrian Krenzer; Artem Kroviakov; Hugo Kuijf; Satyadwyoom Kumar; Rosa Francesco La; Abhi Lad; Doohee Lee; Minho Lee; Chiara Lena; Hao Li; Ling Li; Xingyu Li; Fuyuan Liao; KuanLun Liao; Arlindo Limede Oliveira; Chaonan Lin; Shan Lin; Akis Linardos; Marius George Linguraru; Han Liu; Tao Liu; Di Liu; Yanling Liu; João Lourenço-Silva; Jingpei Lu; Jiangshan Lu; Imanol Luengo; Christina B. Lund; Huan Minh Luu; Yi Lv; Yi Lv; Uzay Macar; Leon Maechler; Sina Mansour L.; Kenji Marshall; Moona Mazher; Richard McKinley; Alfonso Medela; Felix Meissen; Mingyuan Meng; Dylan Miller; Seyed Hossein Mirjahanmardi; Arnab Mishra; Samir Mitha; Hassan Mohy-ud-Din; Tony Chi Wing Mok; Gowtham Krishnan Murugesan; Enamundram Naga Karthik; Sahil Nalawade; Jakub Nalepa; Mohamed Naser; Ramin Nateghi; Hammad Naveed; Quang-Minh Nguyen; Cuong Nguyen Quoc; Brennan Nichyporuk; Bruno Oliveira; David Owen; Jimut Bahan Pal; Junwen Pan; Wentao Pan; Winnie Pang; Bogyu Park; Vivek Pawar; Kamlesh Pawar; Michael Peven; Lena Philipp; Tomasz Pieciak; Szymon Plotka; Marcel Plutat; Fattaneh Pourakpour; Domen Preložnik; Kumaradevan Punithakumar; Abdul Qayyum; Sandro Queirós; Arman Rahmim; Salar Razavi; Jintao Ren; Mina Rezaei; Jonathan Adam Rico; ZunHyan Rieu; Markus Rink; Johannes Roth; Yusely Ruiz-Gonzalez; Numan Saeed; Anindo Saha; Mostafa Salem; Ricardo Sanchez-Matilla; Kurt Schilling; Wei Shao; Zhiqiang Shen; Ruize Shi; Pengcheng Shi; Daniel Sobotka; Théodore Soulier; Bella Specktor Fadida; Danail Stoyanov; Timothy Sum Hon Mun; Xiaowu Sun; Rong Tao; Franz Thaler; Antoine Théberge; Felix Thielke; Helena Torres; Kareem A. Wahid; Jiacheng Wang; YiFei Wang; Wei Wang; Xiong Wang; Jianhui Wen; Ning Wen; Marek Wodzinski; Ye Wu; Fangfang Xia; Tianqi Xiang; Chen Xiaofei; Lizhan Xu; Tingting Xue; Yuxuan Yang; Lin Yang; Kai Yao; Huifeng Yao; Amirsaeed Yazdani; Michael Yip; Hwanseung Yoo; Fereshteh Yousefirizi; Shunkai Yu; Lei Yu; Jonathan Zamora; Ramy Ashraf Zeineldin; Dewen Zeng; Jianpeng Zhang; Bokai Zhang; Jiapeng Zhang; Fan Zhang; Huahong Zhang; Zhongchen Zhao; Zixuan Zhao; Jiachen Zhao; Can Zhao; Qingshuo Zheng; Yuheng Zhi; Ziqi Zhou; Baosheng Zou; Klaus Maier-Hein; Paul F. Jäger; Annette Kopp-Schneider; Lena Maier-Hein
Better May Not Be Fairer: Can Data Augmentation Mitigate Subgroup Degradation? (1%)Ming-Chang Chiu; Pin-Yu Chen; Xuezhe Ma
On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study. (1%)Ming-Chang Chiu; Yingfei Wang; Derrick Eui Gyu Kim; Pin-Yu Chen; Xuezhe Ma
2022-12-15
Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks. (98%)Nikolaos Antoniou; Efthymios Georgiou; Alexandros Potamianos
On Evaluating Adversarial Robustness of Chest X-ray Classification: Pitfalls and Best Practices. (84%)Salah Ghamizi; Maxime Cordy; Michail Papadakis; Yves Le Traon
Are Multimodal Models Robust to Image and Text Perturbations? (5%)Jielin Qiu; Yi Zhu; Xingjian Shi; Florian Wenzel; Zhiqiang Tang; Ding Zhao; Bo Li; Mu Li
Holistic risk assessment of inference attacks in machine learning. (4%)Yang Yang
Defending against cybersecurity threats to the payments and banking system. (2%)Williams Haruna; Toyin Ajiboro Aremu; Yetunde Ajao Modupe
White-box Inference Attacks against Centralized Machine Learning and Federated Learning. (1%)Jingyi Ge
2022-12-14
SAIF: Sparse Adversarial and Interpretable Attack Framework. (99%)Tooba Imtiaz; Morgan Kohler; Jared Miller; Zifeng Wang; Mario Sznaier; Octavia Camps; Jennifer Dy
Dissecting Distribution Inference. (88%)Anshuman Suri; Yifu Lu; Yanjin Chen; David Evans
Generative Robust Classification. (11%)Xuwang Yin
Synthesis of Adversarial DDOS Attacks Using Tabular Generative Adversarial Networks. (8%)Abdelmageed Ahmed Hassan; Mohamed Sayed Hussein; Ahmed Shehata AboMoustafa; Sarah Hossam Elmowafy
DOC-NAD: A Hybrid Deep One-class Classifier for Network Anomaly Detection. (1%)Mohanad Sarhan; Gayan Kulatilleke; Wai Weng Lo; Siamak Layeghy; Marius Portmann
2022-12-13
Object-fabrication Targeted Attack for Object Detection. (99%)Xuchong Zhang; Changfeng Sun; Haoliang Han; Hongbin Sun
Adversarial Attacks and Defences for Skin Cancer Classification. (99%)Vinay Jogani; Joy Purohit; Ishaan Shivhare; Samina Attari; Shraddha Surtkar
Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection. (99%)Peter Lorenz; Margret Keuper; Janis Keuper
Towards Efficient and Domain-Agnostic Evasion Attack with High-dimensional Categorical Inputs. (80%)Hongyan Bao; Yufei Han; Yujun Zhou; Xin Gao; Xiangliang Zhang
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. (73%)Chengzhi Mao; Scott Geng; Junfeng Yang; Xin Wang; Carl Vondrick
Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection. (56%)Zhenyu Wu; Lin Wang; Wei Wang; Qing Xia; Chenglizhao Chen; Aimin Hao; Shuo Li
AdvCat: Domain-Agnostic Robustness Assessment for Cybersecurity-Critical Applications with Categorical Inputs. (56%)Helene Orsini; Hongyan Bao; Yujun Zhou; Xiangrui Xu; Yufei Han; Longyang Yi; Wei Wang; Xin Gao; Xiangliang Zhang
Privacy-preserving Security Inference Towards Cloud-Edge Collaborative Using Differential Privacy. (1%)Yulong Wang; Xingshu Chen; Qixu Wang
Boosting Semi-Supervised Learning with Contrastive Complementary Labeling. (1%)Qinyi Deng; Yong Guo; Zhibang Yang; Haolin Pan; Jian Chen
2022-12-12
SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation. (98%)Wanqing Zhu; Jia-Li Yin; Bo-Hao Chen; Ximeng Liu
Adversarially Robust Video Perception by Seeing Motion. (98%)Lingyu Zhang; Chengzhi Mao; Junfeng Yang; Carl Vondrick
A Survey on Reinforcement Learning Security with Application to Autonomous Driving. (96%)Ambra Demontis; Maura Pintor; Luca Demetrio; Kathrin Grosse; Hsiao-Ying Lin; Chengfang Fang; Battista Biggio; Fabio Roli
HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design. (96%)Hui Wei; Zhixiang Wang; Xuemei Jia; Yinqiang Zheng; Hao Tang; Shin'ichi Satoh; Zheng Wang
Robust Perception through Equivariance. (96%)Chengzhi Mao; Lingyu Zhang; Abhishek Joshi; Junfeng Yang; Hao Wang; Carl Vondrick
Despite "super-human" performance, current LLMs are unsuited for decisions about ethics and safety. (75%)Joshua Albrecht; Ellie Kitanidis; Abraham J. Fetterman
AFLGuard: Byzantine-robust Asynchronous Federated Learning. (15%)Minghong Fang; Jia Liu; Neil Zhenqiang Gong; Elizabeth S. Bentley
Carpet-bombing patch: attacking a deep network without usual requirements. (2%)Pol Labarbarie; Adrien Chan-Hon-Tong; Stéphane Herbin; Milad Leyli-Abadi
Numerical Stability of DeepGOPlus Inference. (1%)Inés Gonzalez Pepe; Yohan Chatelain; Gregory Kiar; Tristan Glatard
2022-12-11
DISCO: Adversarial Defense with Local Implicit Functions. (99%)Chih-Hui Ho; Nuno Vasconcelos
REAP: A Large-Scale Realistic Adversarial Patch Benchmark. (98%)Nabeel Hingun; Chawin Sitawarin; Jerry Li; David Wagner
2022-12-10
General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments. (99%)Xiaogang Xu; Hengshuang Zhao; Philip Torr; Jiaya Jia
Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense. (93%)Yang Yu; Qi Liu; Likang Wu; Runlong Yu; Sanshi Lei Yu; Zaixi Zhang
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking. (93%)Dennis Gross; Thiago D. Simao; Nils Jansen; Guillermo A. Perez
Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors. (54%)Giovanni Apruzzese; V. S. Subrahmanian
How to Backdoor Diffusion Models? (12%)Sheng-Yen Chou; Pin-Yu Chen; Tsung-Yi Ho
Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification. (1%)Ruixuan Tang; Hanjie Chen; Yangfeng Ji
2022-12-09
Understanding and Combating Robust Overfitting via Input Loss Landscape Analysis and Regularization. (98%)Lin Li; Michael Spratling
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding. (2%)Minh-Long Luu; Zeyi Huang; Eric P. Xing; Yong Jae Lee; Haohan Wang
Dynamic Test-Time Augmentation via Differentiable Functions. (2%)Shohei Enomoto; Monikka Roslianna Busto; Takeharu Eda
Spurious Features Everywhere -- Large-Scale Detection of Harmful Spurious Features in ImageNet. (1%)Yannic Neuhaus; Maximilian Augustin; Valentyn Boreiko; Matthias Hein
Robustness Implies Privacy in Statistical Estimation. (1%)Samuel B. Hopkins; Gautam Kamath; Mahbod Majid; Shyam Narayanan
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models. (1%)Rui Zhu; Di Tang; Siyuan Tang; XiaoFeng Wang; Haixu Tang
QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks. (1%)Yedi Zhang; Zhe Zhao; Fu Song; Min Zhang; Taolue Chen; Jun Sun
2022-12-08
Targeted Adversarial Attacks against Neural Network Trajectory Predictors. (99%)Kaiyuan Tan; Jun Wang; Yiannis Kantaros
XRand: Differentially Private Defense against Explanation-Guided Attacks. (68%)Truc Nguyen; Phung Lai; NhatHai Phan; My T. Thai
Robust Graph Representation Learning via Predictive Coding. (22%)Billy Byiringiro; Tommaso Salvatori; Thomas Lukasiewicz
2022-12-07
Multi-Objective Linear Ensembles for Robust and Sparse Training of Few-Bit Neural Networks. (2%)Ambrogio Maria Bernardelli; Stefano Gualandi; Hoong Chuin Lau; Simone Milanesi; Neil Yorke-Smith
Use of Cryptography in Malware Obfuscation. (1%)Hassan Jameel Asghar; Benjamin Zi Hao Zhao; Muhammad Ikram; Giang Nguyen; Dali Kaafar; Sean Lamont; Daniel Coscia
2022-12-06
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. (96%)Hongbin Liu; Wenjie Qu; Jinyuan Jia; Neil Zhenqiang Gong
2022-12-05
Enhancing Quantum Adversarial Robustness by Randomized Encodings. (99%)Weiyuan Gong; Dong Yuan; Weikang Li; Dong-Ling Deng
Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance. (99%)Ngoc N. Tran; Anh Tuan Bui; Dinh Phung; Trung Le
FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration. (92%)Žiga Babnik; Peter Peer; Vitomir Štruc
Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning. (76%)Mingyuan Fan; Cen Chen; Chengyu Wang; Wenmeng Zhou; Jun Huang; Ximeng Liu; Wenzhong Guo
Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria. (8%)Tengyuan Liang
What is the Solution for State-Adversarial Multi-Agent Reinforcement Learning? (3%)Songyang Han; Sanbao Su; Sihong He; Shuo Han; Haizhao Yang; Fei Miao
Spuriosity Rankings: Sorting Data for Spurious Correlation Robustness. (1%)Mazda Moayeri; Wenxiao Wang; Sahil Singla; Soheil Feizi
Efficient Malware Analysis Using Metric Embeddings. (1%)Ethan M. Rudd; David Krisiloff; Scott Coull; Daniel Olszewski; Edward Raff; James Holt
2022-12-04
Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense. (98%)Bao Gia Doan; Ehsan Abbasnejad; Javen Qinfeng Shi; Damith C. Ranasinghe
Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks. (88%)Xiao Li; Ziqi Wang; Bo Zhang; Fuchun Sun; Xiaolin Hu
CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness. (82%)Huy Phan; Miao Yin; Yang Sui; Bo Yuan; Saman Zonouz
FedCC: Robust Federated Learning against Model Poisoning Attacks. (45%)Hyejun Jeong; Hamin Son; Seohu Lee; Jayun Hyun; Tai-Myoung Chung
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder. (8%)Qi Tian; Kun Kuang; Kelu Jiang; Furui Liu; Zhihua Wang; Fei Wu
2022-12-03
LDL: A Defense for Label-Based Membership Inference Attacks. (83%)Arezoo Rajabi; Dinuka Sahabandu; Luyao Niu; Bhaskar Ramasubramanian; Radha Poovendran
Security Analysis of SplitFed Learning. (8%)Momin Ahmad Khan; Virat Shejwalkar; Amir Houmansadr; Fatima Muhammad Anwar
2022-12-02
Membership Inference Attacks Against Semantic Segmentation Models. (45%)Tomas Chobola; Dmitrii Usynin; Georgios Kaissis
Guaranteed Conformance of Neurosymbolic Models to Natural Constraints. (1%)Kaustubh Sridhar; Souradeep Dutta; James Weimer; Insup Lee
2022-12-01
Purifier: Defending Data Inference Attacks via Transforming Confidence Scores. (89%)Ziqi Yang; Lijin Wang; Da Yang; Jie Wan; Ziming Zhao; Ee-Chien Chang; Fan Zhang; Kui Ren
Pareto Regret Analyses in Multi-objective Multi-armed Bandit. (41%)Mengfan Xu; Diego Klabjan
All You Need Is Hashing: Defending Against Data Reconstruction Attack in Vertical Federated Learning. (3%)Pengyu Qiu; Xuhong Zhang; Shouling Ji; Yuwen Pu; Ting Wang
Generalizing and Improving Jacobian and Hessian Regularization. (1%)Chenwei Cui; Zehao Yan; Guangshen Liu; Liangfu Lu
On the Limit of Explaining Black-box Temporal Graph Neural Networks. (1%)Minh N. Vu; My T. Thai
SimpleMind adds thinking to deep neural networks. (1%)Youngwon Choi; M. Wasil Wahi-Anwar; Matthew S. Brown
2022-11-30
Towards Interpreting Vulnerability of Multi-Instance Learning via Customized and Universal Adversarial Perturbations. (97%)Yu-Xuan Zhang; Hua Meng; Xue-Mei Cao; Zhengchun Zhou; Mei Yang; Avik Ranjan Adhikary
Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations. (84%)Haniyeh Ehsani Oskouie; Farzan Farnia
Efficient Adversarial Input Generation via Neural Net Patching. (75%)Tooba Khan; Kumar Madhukar; Subodh Vishnu Sharma
Toward Robust Diagnosis: A Contour Attention Preserving Adversarial Defense for COVID-19 Detection. (69%)Kun Xiang; Xing Zhang; Jinwen She; Jinpeng Liu; Haohan Wang; Shiqi Deng; Shancheng Jiang
Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations. (38%)Hong-Ming Chiu; Richard Y. Zhang
Improved Smoothed Analysis of 2-Opt for the Euclidean TSP. (8%)Bodo Manthey; Jesse van Rhijn
2022-11-29
Understanding and Enhancing Robustness of Concept-based Models. (99%)Sanchit Sinha; Mengdi Huai; Jianhui Sun; Aidong Zhang
Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion. (99%)Kui Zhang; Hang Zhou; Jie Zhang; Qidong Huang; Weiming Zhang; Nenghai Yu
Advancing Deep Metric Learning Through Multiple Batch Norms And Multi-Targeted Adversarial Examples. (88%)Inderjeet Singh; Kazuya Kakizaki; Toshinori Araki
Penalizing Confident Predictions on Largely Perturbed Inputs Does Not Improve Out-of-Distribution Generalization in Question Answering. (83%)Kazutoshi Shinoda; Saku Sugawara; Akiko Aizawa
Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks. (73%)Mathias Lechner; Đorđe Žikelić; Krishnendu Chatterjee; Thomas A. Henzinger; Daniela Rus
AdvMask: A Sparse Adversarial Attack Based Data Augmentation Method for Image Classification. (54%)Suorong Yang; Jinqiao Li; Jian Zhao; Furao Shen
A3T: Accuracy Aware Adversarial Training. (10%)Enes Altinisik; Safa Messaoud; Husrev Taha Sencar; Sanjay Chawla
Building Resilience to Out-of-Distribution Visual Data via Input Optimization and Model Finetuning. (1%)Christopher J. Holder; Majid Khonji; Jorge Dias; Muhammad Shafique
2022-11-28
Adversarial Artifact Detection in EEG-Based Brain-Computer Interfaces. (99%)Xiaoqing Chen; Dongrui Wu
Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning. (95%)Eldor Abdukhamidov; Mohammed Abuhamad; Simon S. Woo; Eric Chan-Tin; Tamer Abuhmed
Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning. (83%)Gyojin Han; Jaehyun Choi; Hyeong Gwon Hong; Junmo Kim
Towards More Robust Interpretation via Local Gradient Alignment. (76%)Sunghwan Joo; Seokhyeon Jeong; Juyeon Heo; Adrian Weller; Taesup Moon
Understanding the Impact of Adversarial Robustness on Accuracy Disparity. (31%)Yuzheng Hu; Fan Wu; Hongyang Zhang; Han Zhao
How Important are Good Method Names in Neural Code Generation? A Model Robustness Perspective. (13%)Guang Yang; Yu Zhou; Wenhua Yang; Tao Yue; Xiang Chen; Taolue Chen
Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning. (8%)Xiaoyue Duan; Guoliang Kang; Runqi Wang; Shumin Han; Song Xue; Tian Wang; Baochang Zhang
Attack on Unfair ToS Clause Detection: A Case Study using Universal Adversarial Triggers. (8%)Shanshan Xu; Irina Broda; Rashid Haddad; Marco Negrini; Matthias Grabmair
Gamma-convergence of a nonlocal perimeter arising in adversarial machine learning. (3%)Leon Bungert; Kerrek Stinson
CoNAL: Anticipating Outliers with Large Language Models. (1%)Albert Xu; Xiang Ren; Robin Jia
Learning Antidote Data to Individual Unfairness. (1%)Peizhao Li; Ethan Xia; Hongfu Liu
2022-11-27
Imperceptible Adversarial Attack via Invertible Neural Networks. (99%)Zihan Chen; Ziyue Wang; Junjie Huang; Wentao Zhao; Xiao Liu; Dejian Guan
Foiling Explanations in Deep Neural Networks. (98%)Snir Vitrack Tamam; Raz Lapid; Moshe Sipper
Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning. (84%)Yunchao Zhang; Zonglin Di; Kaiwen Zhou; Cihang Xie; Xin Wang
Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs. (50%)Guangrun Wang; Philip H. S. Torr
Federated Learning Attacks and Defenses: A Survey. (47%)Yao Chen; Yijie Gui; Hong Lin; Wensheng Gan; Yongdong Wu
Adversarial Rademacher Complexity of Deep Neural Networks. (47%)Jiancong Xiao; Yanbo Fan; Ruoyu Sun; Zhi-Quan Luo
2022-11-26
Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning. (99%)Ethan Rathbun; Kaleel Mahmood; Sohaib Ahmad; Caiwen Ding; Marten van Dijk
2022-11-25
Boundary Adversarial Examples Against Adversarial Overfitting. (99%)Muhammad Zaid Hameed; Beat Buesser
Supervised Contrastive Prototype Learning: Augmentation Free Robust Neural Network. (98%)Iordanis Fostiropoulos; Laurent Itti
Beyond Smoothing: Unsupervised Graph Representation Learning with Edge Heterophily Discriminating. (3%)Yixin Liu; Yizhen Zheng; Daokun Zhang; Vincent CS Lee; Shirui Pan
TrustGAN: Training safe and trustworthy deep learning models through generative adversarial networks. (1%)Hélion du Mas des Bourboux
2022-11-24
SAGA: Spectral Adversarial Geometric Attack on 3D Meshes. (98%)Tomer Stolik; Itai Lang; Shai Avidan
Tracking Dataset IP Use in Deep Neural Networks. (96%)Seonhye Park; Alsharif Abuadbba; Shuo Wang; Kristen Moore; Yansong Gao; Hyoungshick Kim; Surya Nepal
Explainable and Safe Reinforcement Learning for Autonomous Air Mobility. (92%)Lei Wang; Hongyu Yang; Yi Lin; Suwan Yin; Yuankai Wu
Neural Network Complexity of Chaos and Turbulence. (41%)Tim Whittaker; Romuald A. Janik; Yaron Oz
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models. (8%)Jacob Shams; Ben Nassi; Ikuya Morikawa; Toshiya Shimizu; Asaf Shabtai; Yuval Elovici
Generative Joint Source-Channel Coding for Semantic Image Transmission. (1%)Ecenaz Erdemir; Tze-Yang Tung; Pier Luigi Dragotti; Deniz Gunduz
CycleGANWM: A CycleGAN watermarking method for ownership verification. (1%)Dongdong Lin; Benedetta Tondi; Bin Li; Mauro Barni
2022-11-23
Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition. (99%)Rohit Gupta; Naveed Akhtar; Gaurav Kumar Nayak; Ajmal Mian; Mubarak Shah
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners. (99%)Elre T. Oldewage; John Bronskill; Richard E. Turner
Reliable Robustness Evaluation via Automatically Constructed Attack Ensembles. (76%)Shengcai Liu; Fu Peng; Ke Tang
Dual Graphs of Polyhedral Decompositions for the Detection of Adversarial Attacks. (62%)Huma Jamil; Yajing Liu; Christina Cole; Nathaniel Blanchard; Emily J. King; Michael Kirby; Christopher Peterson
Privacy-Enhancing Optical Embeddings for Lensless Classification. (11%)Eric Bezzam; Martin Vetterli; Matthieu Simeoni
Principled Data-Driven Decision Support for Cyber-Forensic Investigations. (1%)Soodeh Atefi; Sakshyam Panda; Manos Panaousis; Aron Laszka
Data Provenance Inference in Machine Learning. (1%)Mingxue Xu; Xiang-Yang Li
2022-11-22
Benchmarking Adversarially Robust Quantum Machine Learning at Scale. (99%)Maxwell T. West; Sarah M. Erfani; Christopher Leckie; Martin Sevior; Lloyd C. L. Hollenberg; Muhammad Usman
PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples. (99%)Shengshan Hu; Junwei Zhang; Wei Liu; Junhui Hou; Minghui Li; Leo Yu Zhang; Hai Jin; Lichao Sun
Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces. (98%)Shengbang Fang; Matthew C Stamm
Backdoor Cleansing with Unlabeled Data. (75%)Lu Pang; Tao Sun; Haibin Ling; Chao Chen
Improving Robust Generalization by Direct PAC-Bayesian Bound Minimization. (70%)Zifan Wang; Nan Ding; Tomer Levinboim; Xi Chen; Radu Soricut
SoK: Inference Attacks and Defenses in Human-Centered Wireless Sensing. (69%)Wei Sun; Tingjun Chen; Neil Gong
2022-11-21
Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack. (99%)Yunfeng Diao; He Wang; Tianjia Shao; Yong-Liang Yang; Kun Zhou; David Hogg
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors. (99%)Sizhe Chen; Geng Yuan; Xinwen Cheng; Yifan Gong; Minghai Qin; Yanzhi Wang; Xiaolin Huang
Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization. (99%)Jiafeng Wang; Zhaoyu Chen; Kaixun Jiang; Dingkang Yang; Lingyi Hong; Pinxue Guo; Haijing Guo; Wenqiang Zhang
Addressing Mistake Severity in Neural Networks with Semantic Knowledge. (92%)Natalie Abreu; Nathan Vaska; Victoria Helus
Efficient Generalization Improvement Guided by Random Weight Perturbation. (68%)Tao Li; Weihao Yan; Zehao Lei; Yingwen Wu; Kun Fang; Ming Yang; Xiaolin Huang
CLAWSAT: Towards Both Robust and Accurate Code Models. (56%)Jinghan Jia; Shashank Srikant; Tamara Mitrovska; Chuang Gan; Shiyu Chang; Sijia Liu; Una-May O'Reilly
Fairness Increases Adversarial Vulnerability. (54%)Cuong Tran; Keyu Zhu; Ferdinando Fioretto; Pascal Van Hentenryck
Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving. (10%)Yinbo Yu; Jiajia Liu
SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks. (8%)Sunder Ali Khowaja; Parus Khuwaja; Kapal Dev; Angelos Antonopoulos
A Survey on Backdoor Attack and Defense in Natural Language Processing. (2%)Xuan Sheng; Zhaoyang Han; Piji Li; Xiangmao Chang
Understanding and Improving Visual Prompting: A Label-Mapping Perspective. (2%)Aochuan Chen; Yuguang Yao; Pin-Yu Chen; Yihua Zhang; Sijia Liu
Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text. (1%)Qianhui Wu; Huiqiang Jiang; Haonan Yin; Börje F. Karlsson; Chin-Yew Lin
Privacy in Practice: Private COVID-19 Detection in X-Ray Images. (1%)Lucas Lange; Maja Schneider; Erhard Rahm
A Tale of Frozen Clouds: Quantifying the Impact of Algorithmic Complexity Vulnerabilities in Popular Web Servers. (1%)Masudul Hasan Masud Bhuiyan; Cristian-Alexandru Staicu
2022-11-20
Spectral Adversarial Training for Robust Graph Neural Network. (99%)Jintang Li; Jiaying Peng; Liang Chen; Zibin Zheng; Tingting Liang; Qing Ling
Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification. (81%)Wenli Sun; Xinyang Jiang; Shuguang Dou; Dongsheng Li; Duoqian Miao; Cheng Deng; Cairong Zhao
Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training. (47%)Jiaxu Tian; Dapeng Zhi; Si Liu; Peixin Wang; Guy Katz; Min Zhang
Adversarial Cheap Talk. (8%)Chris Lu; Timon Willi; Alistair Letcher; Jakob Foerster
Deep Composite Face Image Attacks: Generation, Vulnerability and Detection. (2%)Jag Mohan Singh; Raghavendra Ramachandra
AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation. (2%)Hyungmin Kim; Sungho Suh; Sunghyun Baek; Daehwan Kim; Daun Jeong; Hansang Cho; Junmo Kim
2022-11-19
Towards Adversarial Robustness of Deep Vision Algorithms. (92%)Hanshu Yan
Phonemic Adversarial Attack against Audio Recognition in Real World. (87%)Jiakai Wang; Zhendong Chen; Zixin Yin; Qinghong Yang; Xianglong Liu
Towards Robust Dataset Learning. (82%)Yihan Wu; Xinda Li; Florian Kerschbaum; Heng Huang; Hongyang Zhang
Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning. (80%)Mingxuan Ju; Yujie Fan; Chuxu Zhang; Yanfang Ye
Exploring validation metrics for offline model-based optimisation with diffusion models. (75%)Christopher Beckham; Alexandre Piche; David Vazquez; Christopher Pal
Mask Off: Analytic-based Malware Detection By Transfer Learning and Model Personalization. (9%)Amirmohammad Pasdar; Young Choon Lee; Seok-Hee Hong
Investigating the Security of EV Charging Mobile Applications As an Attack Surface. (1%)K. Sarieddine; M. A. Sayed; S. Torabi; R. Atallah; C. Assi
2022-11-18
Adversarial Stimuli: Attacking Brain-Computer Interfaces via Perturbed Sensory Events. (98%)Bibek Upadhayay; Vahid Behzadan
Adversarial Detection by Approximation of Ensemble Boundary. (91%)T. Windeatt
Leveraging Algorithmic Fairness to Mitigate Blackbox Attribute Inference Attacks. (68%)Jan Aalmoes; Vasisht Duddu; Antoine Boutet
Invariant Learning via Diffusion Dreamed Distribution Shifts. (10%)Priyatham Kattakinda; Alexander Levine; Soheil Feizi
Intrusion Detection in Internet of Things using Convolutional Neural Networks. (1%)Martin Kodys; Zhi Lu; Kar Wai Fok; Vrizlynn L. L. Thing
Improving Robustness of TCM-based Robust Steganography with Variable Robustness. (1%)Jimin Zhang; Xianfeng Zhao; Xiaolei He
Provable Defense against Backdoor Policies in Reinforcement Learning. (1%)Shubham Kumar Bharti; Xuezhou Zhang; Adish Singla; Xiaojin Zhu
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory. (1%)Justin Cui; Ruochen Wang; Si Si; Cho-Jui Hsieh
2022-11-17
Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks. (99%)Stephen Casper; Kaivalya Hariharan; Dylan Hadfield-Menell
Towards Good Practices in Evaluating Transfer Adversarial Attacks. (93%)Zhengyu Zhao; Hanwei Zhang; Renjue Li; Ronan Sicre; Laurent Amsaleg; Michael Backes
Assessing Neural Network Robustness via Adversarial Pivotal Tuning. (92%)Peter Ebert Christensen; Vésteinn Snæbjarnarson; Andrea Dittadi; Serge Belongie; Sagie Benaim
UPTON: Unattributable Authorship Text via Data Poisoning. (86%)Ziyao Wang; Thai Le; Dongwon Lee
Generalizable Deepfake Detection with Phase-Based Motion Analysis. (50%)Ekta Prashnani; Michael Goebel; B. S. Manjunath
More Effective Centrality-Based Attacks on Weighted Networks. (15%)Balume Mburano; Weisheng Si; Qing Cao; Wei Xing Zheng
Potential Auto-driving Threat: Universal Rain-removal Attack. (2%)Jincheng Hu; Jihao Li; Zhuoran Hou; Jingjing Jiang; Cunjia Liu; Yuanjian Zhang
Data-Centric Debugging: mitigating model failures via targeted data collection. (1%)Sahil Singla; Atoosa Malemir Chegini; Mazda Moayeri; Soheil Feizi
A Tale of Two Cities: Data and Configuration Variances in Robust Deep Learning. (1%)Guanqin Zhang; Jiankun Sun; Feng Xu; H. M. N. Dilum Bandara; Shiping Chen; Yulei Sui; Tim Menzies
VeriSparse: Training Verified Locally Robust Sparse Neural Networks from Scratch. (1%)Sawinder Kaur; Yi Xiao; Asif Salekin
2022-11-16
T-SEA: Transfer-based Self-Ensemble Attack on Object Detection. (99%)Hao Huang; Ziyan Chen; Huanran Chen; Yongtao Wang; Kevin Zhang
Efficiently Finding Adversarial Examples with DNN Preprocessing. (99%)Avriti Chauhan; Mohammad Afzal; Hrishikesh Karmarkar; Yizhak Elboher; Kumar Madhukar; Guy Katz
Improving Interpretability via Regularization of Neural Activation Sensitivity. (92%)Ofir Moshe; Gil Fidel; Ron Bitton; Asaf Shabtai
Attacking Object Detector Using A Universal Targeted Label-Switch Patch. (86%)Avishag Shapira; Ron Bitton; Dan Avraham; Alon Zolfi; Yuval Elovici; Asaf Shabtai
Differentially Private Optimizers Can Learn Adversarially Robust Models. (83%)Yuan Zhang; Zhiqi Bu
Interpretable Dimensionality Reduction by Feature Preserving Manifold Approximation and Projection. (56%)Yang Yang; Hongjian Sun; Jialei Gong; Di Yu
Privacy against Real-Time Speech Emotion Detection via Acoustic Adversarial Evasion of Machine Learning. (38%)Brian Testa; Yi Xiao; Harshit Sharma; Avery Gump; Asif Salekin
Holistic Evaluation of Language Models. (2%)Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar; Benjamin Newman; Binhang Yuan; Bobby Yan; Ce Zhang; Christian Cosgrove; Christopher D. Manning; Christopher Ré; Diana Acosta-Navas; Drew A. Hudson; Eric Zelikman; Esin Durmus; Faisal Ladhak; Frieda Rong; Hongyu Ren; Huaxiu Yao; Jue Wang; Keshav Santhanam; Laurel Orr; Lucia Zheng; Mert Yuksekgonul; Mirac Suzgun; Nathan Kim; Neel Guha; Niladri Chatterji; Omar Khattab; Peter Henderson; Qian Huang; Ryan Chi; Sang Michael Xie; Shibani Santurkar; Surya Ganguli; Tatsunori Hashimoto; Thomas Icard; Tianyi Zhang; Vishrav Chaudhary; William Wang; Xuechen Li; Yifan Mai; Yuhui Zhang; Yuta Koreeda
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems. (1%)Alessio Russo; Alexandre Proutiere
2022-11-15
Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation. (99%)Zhihao Zhu; Chenwang Wu; Min Zhou; Hao Liao; Defu Lian; Enhong Chen
Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning. (99%)Yiran Huang; Yexu Zhou; Michael Hefenbrock; Till Riedel; Likun Fang; Michael Beigl
MORA: Improving Ensemble Robustness Evaluation with Model-Reweighing Attack. (99%)Yunrui Yu; Xitong Gao; Cheng-Zhong Xu
Person Text-Image Matching via Text-Feature Interpretability Embedding and External Attack Node Implantation. (92%)Fan Li; Hang Zhou; Huafeng Li; Yafei Zhang; Zhengtao Yu
Backdoor Attacks on Time Series: A Generative Approach. (70%)Yujing Jiang; Xingjun Ma; Sarah Monazam Erfani; James Bailey
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning. (61%)Jinghuai Zhang; Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
Improved techniques for deterministic l2 robustness. (22%)Sahil Singla; Soheil Feizi
Backdoor Attacks for Remote Sensing Data with Wavelet Transform. (12%)Nikolaus Dräger; Yonghao Xu; Pedram Ghamisi
2022-11-14
Efficient Adversarial Training with Robust Early-Bird Tickets. (92%)Zhiheng Xi; Rui Zheng; Tao Gui; Qi Zhang; Xuanjing Huang
Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection. (13%)M. Ibsen; C. Rathgeb; F. Brechtel; R. Klepp; K. Pöppelmann; A. George; S. Marcel; C. Busch
Towards Robust Numerical Question Answering: Diagnosing Numerical Capabilities of NLP Systems. (5%)Jialiang Xu; Mengyu Zhou; Xinyi He; Shi Han; Dongmei Zhang
Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations. (5%)Shea Cardozo; Gabriel Islas Montero; Dmitry Kazhdan; Botty Dimanov; Maleakhi Wijaya; Mateja Jamnik; Pietro Lio
Robustifying Deep Vision Models Through Shape Sensitization. (2%)Aditay Tripathi; Rishubh Singh; Anirban Chakraborty; Pradeep Shenoy
2022-11-13
Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation. (26%)Yuan Xiao; Tongtong Bai; Mingzheng Gu; Chunrong Fang; Zhenyu Chen
2022-11-12
Adversarial and Random Transformations for Robust Domain Adaptation and Generalization. (75%)Liang Xiao; Jiaolong Xu; Dawei Zhao; Erke Shang; Qi Zhu; Bin Dai
DriftRec: Adapting diffusion models to blind JPEG restoration. (1%)Simon Welker; Henry N. Chapman; Timo Gerkmann
2022-11-11
Generating Textual Adversaries with Minimal Perturbation. (98%)Xingyi Zhao; Lu Zhang; Depeng Xu; Shuhan Yuan
On the robustness of non-intrusive speech quality model by adversarial examples. (98%)Hsin-Yi Lin; Huan-Hsin Tseng; Yu Tsao
An investigation of security controls and MITRE ATT&CK techniques. (47%)Md Rayhanur Rahman; Laurie Williams
Investigating co-occurrences of MITRE ATT&CK Techniques. (12%)Md Rayhanur Rahman; Laurie Williams
Remapped Cache Layout: Thwarting Cache-Based Side-Channel Attacks with a Hardware Defense. (9%)Wei Song; Rui Hou; Peng Liu; Xiaoxin Li; Peinan Li; Lutan Zhao; Xiaofei Fu; Yifei Sun; Dan Meng
2022-11-10
Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses. (99%)Abhiram Kolli; Muhammad Jehanzeb Mirza; Horst Possegger; Horst Bischof
Impact of Adversarial Training on Robustness and Generalizability of Language Models. (99%)Enes Altinisik; Hassan Sajjad; Husrev Taha Sencar; Safa Messaoud; Sanjay Chawla
Privacy-Utility Balanced Voice De-Identification Using Adversarial Examples. (98%)Meng Chen; Li Lu; Jiadi Yu; Yingying Chen; Zhongjie Ba; Feng Lin; Kui Ren
Stay Home Safe with Starving Federated Data. (80%)Jaechul Roh; Yajun Fang
MSDT: Masked Language Model Scoring Defense in Text Domain. (38%)Jaechul Roh; Minhao Cheng; Yajun Fang
Robust DNN Surrogate Models with Uncertainty Quantification via Adversarial Training. (3%)Lixiang Zhang; Jia Li
Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations. (1%)Sheng-Feng Yu; Wei-Chen Chiu
2022-11-09
On the Robustness of Explanations of Deep Neural Network Models: A Survey. (50%)Amlan Jyoti; Karthik Balaji Ganesh; Manoj Gayala; Nandita Lakshmi Tunuguntla; Sandesh Kamath; Vineeth N Balasubramanian
Are All Edges Necessary? A Unified Framework for Graph Purification. (5%)Zishan Gu; Jintang Li; Liang Chen
QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems. (3%)Ana-Maria Cretu; Florimond Houssiau; Antoine Cully; Yves-Alexandre de Montjoye
Accountable and Explainable Methods for Complex Reasoning over Text. (2%)Pepa Atanasova
Directional Privacy for Deep Learning. (1%)Pedro Faustini; Natasha Fernandes; Shakila Tonni; Annabelle McIver; Mark Dras
2022-11-08
Preserving Semantics in Textual Adversarial Attacks. (99%)David Herel; Hugo Cisneros; Tomas Mikolov
NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries? (98%)Saadia Gabriel; Hamid Palangi; Yejin Choi
How Fraudster Detection Contributes to Robust Recommendation. (67%)Yuni Lai; Kai Zhou
Lipschitz Continuous Algorithms for Graph Problems. (16%)Soh Kumabe; Yuichi Yoshida
Learning advisor networks for noisy image classification. (1%)Simone Ricci; Tiberio Uricchio; Alberto Del Bimbo
2022-11-07
Are AlphaZero-like Agents Robust to Adversarial Perturbations? (99%)Li-Cheng Lan; Huan Zhang; Ti-Rong Wu; Meng-Yu Tsai; I-Chen Wu; Cho-Jui Hsieh
Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation. (82%)Zijie Lou; Gang Cao; Man Lin
Deviations in Representations Induced by Adversarial Attacks. (70%)Daniel Steinberg; Paul Munro
Interpreting deep learning output for out-of-distribution detection. (1%)Damian Matuszewski; Ida-Maria Sintorn
Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks. (1%)Naoya Tezuka; Hideya Ochiai; Yuwei Sun; Hiroshi Esaki
A Hypergraph-Based Machine Learning Ensemble Network Intrusion Detection System. (1%)Zong-Zhi Lin; Thomas D. Pike; Mark M. Bailey; Nathaniel D. Bastian
2022-11-06
Contrastive Weighted Learning for Near-Infrared Gaze Estimation. (31%)Adam Lee
2022-11-05
Textual Manifold-based Defense Against Natural Language Adversarial Examples. (99%)Dang Minh Nguyen; Luu Anh Tuan
Stateful Detection of Adversarial Reprogramming. (96%)Yang Zheng; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Maura Pintor; Ambra Demontis; Battista Biggio; Fabio Roli
Robust Lottery Tickets for Pre-trained Language Models. (83%)Rui Zheng; Rong Bao; Yuhao Zhou; Di Liang; Sirui Wang; Wei Wu; Tao Gui; Qi Zhang; Xuanjing Huang
2022-11-04
Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning. (99%)Anaelia Ovalle; Evan Czyzycki; Cho-Jui Hsieh
Logits are predictive of network type. (68%)Ali Borji
An Adversarial Robustness Perspective on the Topology of Neural Networks. (64%)Morgane Goibert; Thomas Ricatte; Elvis Dohmatob
Fairness-aware Regression Robust to Adversarial Attacks. (38%)Yulu Jin; Lifeng Lai
Extension of Simple Algorithms to the Matroid Secretary Problem. (9%)Simon Park
Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content Dilutions. (3%)Gaurav Verma; Vishwa Vinay; Ryan A. Rossi; Srijan Kumar
Data Models for Dataset Drift Controls in Machine Learning With Images. (1%)Luis Oala; Marco Aversa; Gabriel Nobis; Kurt Willis; Yoan Neuenschwander; Michèle Buck; Christian Matek; Jerome Extermann; Enrico Pomarico; Wojciech Samek; Roderick Murray-Smith; Christoph Clausen; Bruno Sanguinetti
2022-11-03
Physically Adversarial Attacks and Defenses in Computer Vision: A Survey. (99%)Xingxing Wei; Bangzheng Pu; Jiefan Lu; Baoyuan Wu
Adversarial Defense via Neural Oscillation inspired Gradient Masking. (98%)Chunming Jiang; Yilei Zhang
M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models. (98%)Linshan Hou; Zhongyun Hua; Yuhong Li; Yifeng Zheng; Leo Yu Zhang
Robust Few-shot Learning Without Using any Adversarial Samples. (89%)Gaurav Kumar Nayak; Ruchit Rawal; Inder Khatri; Anirban Chakraborty
Data-free Defense of Black Box Models Against Adversarial Attacks. (84%)Gaurav Kumar Nayak; Inder Khatri; Shubham Randive; Ruchit Rawal; Anirban Chakraborty
Leveraging Domain Features for Detecting Adversarial Attacks Against Deep Speech Recognition in Noise. (38%)Christian Heider Nielsen; Zheng-Hua Tan
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems. (33%)Chong Chen; Ying Gao; Leyu Shi; Siquan Huang
Unintended Memorization and Timing Attacks in Named Entity Recognition Models. (12%)Rana Salal Ali; Benjamin Zi Hao Zhao; Hassan Jameel Asghar; Tham Nguyen; Ian David Wood; Dali Kaafar
2022-11-02
Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks. (99%)Amira Guesmi; Ihsen Alouani; Khaled N. Khasawneh; Mouna Baklouti; Tarek Frikha; Mohamed Abid; Nael Abu-Ghazaleh
Improving transferability of 3D adversarial attacks with scale and shear transformations. (99%)Jinlai Zhang; Yinpeng Dong; Jun Zhu; Jihong Zhu; Minchi Kuang; Xiaming Yuan
Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise. (99%)Jhih-Cing Huang; Yu-Lin Tsai; Chao-Han Huck Yang; Cheng-Fang Su; Chia-Mu Yu; Pin-Yu Chen; Sy-Yen Kuo
Adversarial Attack on Radar-based Environment Perception Systems. (99%)Amira Guesmi; Ihsen Alouani
Isometric Representations in Neural Networks Improve Robustness. (62%)Kosio Beshkov; Jonas Verhellen; Mikkel Elle Lepperød
BATT: Backdoor Attack with Transformation-based Triggers. (56%)Tong Xu; Yiming Li; Yong Jiang; Shu-Tao Xia
Untargeted Backdoor Attack against Object Detection. (50%)Chengxiao Luo; Yiming Li; Yong Jiang; Shu-Tao Xia
Generative Adversarial Training Can Improve Neural Language Models. (33%)Sajad Movahedi; Azadeh Shakery
Backdoor Defense via Suppressing Model Shortcuts. (3%)Sheng Yang; Yiming Li; Yong Jiang; Shu-Tao Xia
Human-in-the-Loop Mixup. (1%)Katherine M. Collins; Umang Bhatt; Weiyang Liu; Vihari Piratla; Ilia Sucholutsky; Bradley Love; Adrian Weller
2022-11-01
The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training. (99%)Junhao Dong; Seyed-Mohsen Moosavi-Dezfooli; Jianhuang Lai; Xiaohua Xie
LMD: A Learnable Mask Network to Detect Adversarial Examples for Speaker Verification. (99%)Xing Chen; Jie Wang; Xiao-Lei Zhang; Wei-Qiang Zhang; Kunde Yang
DensePure: Understanding Diffusion Models towards Adversarial Robustness. (98%)Chaowei Xiao; Zhongzhu Chen; Kun Jin; Jiongxiao Wang; Weili Nie; Mingyan Liu; Anima Anandkumar; Bo Li; Dawn Song
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks. (87%)Jianan Zhou; Jianing Zhu; Jingfeng Zhang; Tongliang Liu; Gang Niu; Bo Han; Masashi Sugiyama
Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics. (82%)Ekaterina Shumitskaya; Anastasia Antsiferova; Dmitriy Vatolin
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning. (80%)Virat Shejwalkar; Lingjuan Lyu; Amir Houmansadr
Maximum Likelihood Distillation for Robust Modulation Classification. (69%)Javier Maroto; Gérôme Bovet; Pascal Frossard
FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness. (45%)Wenhao Wu; Wei Li; Jiachen Liu; Xinyan Xiao; Ziqiang Cao; Sujian Li; Hua Wu
Amplifying Membership Exposure via Data Poisoning. (22%)Yufei Chen; Chao Shen; Yun Shen; Cong Wang; Yang Zhang
ActGraph: Prioritization of Test Cases Based on Deep Neural Network Activation Graph. (13%)Jinyin Chen; Jie Ge; Haibin Zheng
2022-10-31
Scoring Black-Box Models for Adversarial Robustness. (98%)Jian Vora; Pranay Reddy Samala
ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation. (88%)Tomokatsu Takahashi; Masanori Yamada; Yuuki Yamanaka; Tomoya Yamashita
SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability. (33%)Dipkamal Bhusal; Rosalyn Shin; Ajay Ashok Shewale; Monish Kumar Manikya Veerabhadran; Michael Clifford; Sara Rampazzi; Nidhi Rastogi
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. (16%)Daphne Ippolito; Florian Tramèr; Milad Nasr; Chiyuan Zhang; Matthew Jagielski; Katherine Lee; Christopher A. Choquette-Choo; Nicholas Carlini
2022-10-30
Poison Attack and Defense on Deep Source Code Processing Models. (99%)Jia Li; Zhuo Li; Huangzhao Zhang; Ge Li; Zhi Jin; Xing Hu; Xin Xia
Character-level White-Box Adversarial Attacks against Transformers via Attachable Subwords Substitution. (99%)Aiwei Liu; Honghai Yu; Xuming Hu; Shu'ang Li; Li Lin; Fukun Ma; Yawen Yang; Lijie Wen
Benchmarking Adversarial Patch Against Aerial Detection. (99%)Jiawei Lian; Shaohui Mei; Shun Zhang; Mingyang Ma
Symmetric Saliency-based Adversarial Attack To Speaker Identification. (92%)Jiadi Yao; Xing Chen; Xiao-Lei Zhang; Wei-Qiang Zhang; Kunde Yang
FI-ODE: Certified and Robust Forward Invariance in Neural ODEs. (61%)Yujia Huang; Ivan Dario Jimenez Rodriguez; Huan Zhang; Yuanyuan Shi; Yisong Yue
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive Games. (9%)The Viet Bui; Tien Mai; Thanh H. Nguyen
2022-10-29
On the Need of Neuromorphic Twins to Detect Denial-of-Service Attacks on Communication Networks. (10%)Holger Boche; Rafael F. Schaefer; H. Vincent Poor; Frank H. P. Fitzek
2022-10-28
Universal Adversarial Directions. (99%)Ching Lam Choi; Farzan Farnia
Improving the Transferability of Adversarial Attacks on Face Recognition with Beneficial Perturbation Feature Augmentation. (99%)Fengfan Zhou; Hefei Ling; Yuxuan Shi; Jiazhong Chen; Zongyi Li; Ping Li
Improving Hyperspectral Adversarial Robustness Under Multiple Attacks. (98%)Nicholas Soucy; Salimeh Yasaei Sekeh
Distributed Black-box Attack against Image Classification Cloud Services. (95%)Han Wu; Sareh Rowlands; Johan Wahlstrom
RoChBert: Towards Robust BERT Fine-tuning for Chinese. (75%)Zihan Zhang; Jinfeng Li; Ning Shi; Bo Yuan; Xiangyu Liu; Rong Zhang; Hui Xue; Donghong Sun; Chao Zhang
Robust Boosting Forests with Richer Deep Feature Hierarchy. (56%)Jianqiao Wangni
Localized Randomized Smoothing for Collective Robustness Certification. (26%)Jan Schuchardt; Tom Wollschläger; Aleksandar Bojchevski; Stephan Günnemann
Towards Reliable Neural Specifications. (11%)Chuqin Geng; Nham Le; Xiaojie Xu; Zhaoyue Wang; Arie Gurfinkel; Xujie Si
On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models. (1%)Mauro Conti; Jiaxin Li; Stjepan Picek
2022-10-27
TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems. (99%)Islam Debicha; Richard Bauwens; Thibault Debatty; Jean-Michel Dricot; Tayeb Kenaza; Wim Mees
Isometric 3D Adversarial Examples in the Physical World. (99%)Yibo Miao; Yinpeng Dong; Jun Zhu; Xiao-Shan Gao
LeNo: Adversarial Robust Salient Object Detection Networks with Learnable Noise. (92%)He Tang; He Wang
TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack. (92%)Yu Cao; Dianqi Li; Meng Fang; Tianyi Zhou; Jun Gao; Yibing Zhan; Dacheng Tao
Efficient and Effective Augmentation Strategy for Adversarial Training. (56%)Sravanti Addepalli; Samyak Jain; R. Venkatesh Babu
Noise Injection Node Regularization for Robust Learning. (2%)Noam Levi; Itay M. Bloch; Marat Freytsis; Tomer Volansky
Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather. (1%)Jinlong Li; Runsheng Xu; Jin Ma; Qin Zou; Jiaqi Ma; Hongkai Yu
2022-10-26
Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting. (99%)Pengyue Hou; Jie Han; Xingyu Li
There is more than one kind of robustness: Fooling Whisper with adversarial examples. (98%)Raphael Olivier; Bhiksha Raj
Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness. (86%)Jiahao Zhao; Wenji Mao
BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic Constraints for Adversarial Examples. (75%)Mohaddeseh Bastan; Mihai Surdeanu; Niranjan Balasubramanian
Secure IP Address Allocation at Cloud Scale. (47%)Eric University of Wisconsin-Madison Pauley; Kyle Pennsylvania State University Domico; Blaine University of Wisconsin-Madison Hoak; Ryan University of Wisconsin-Madison Sheatsley; Quinn University of Wisconsin-Madison Burke; Yohan University of Wisconsin-Madison Beugin; Engin Northeastern University Kirda; Patrick University of Wisconsin-Madison McDaniel
V-Cloak: Intelligibility-, Naturalness- & Timbre-Preserving Real-Time Voice Anonymization. (10%)Jiangyi Zhejiang University Deng; Fei Zhejiang University Teng; Yanjiao Zhejiang University Chen; Xiaofu Wuhan University Chen; Zhaohui Wuhan University Wang; Wenyuan Zhejiang University Xu
Rethinking the Reverse-engineering of Trojan Triggers. (5%)Zhenting Wang; Kai Mei; Hailun Ding; Juan Zhai; Shiqing Ma
Cover Reproducible Steganography via Deep Generative Models. (1%)Kejiang Chen; Hang Zhou; Yaofei Wang; Menghan Li; Weiming Zhang; Nenghai Yu
DEMIS: A Threat Model for Selectively Encrypted Visual Surveillance Data. (1%)Ifeoluwapo Aribilola; Mamoona Naveed Asghar; Brian Lee
Privately Fine-Tuning Large Language Models with Differential Privacy. (1%)Rouzbeh Behnia; Mohamamdreza Ebrahimi; Jason Pacheco; Balaji Padmanabhan
2022-10-25
LP-BFGS attack: An adversarial attack based on the Hessian with limited pixels. (99%)Jiebao Zhang; Wenhua Qian; Rencan Nie; Jinde Cao; Dan Xu
Adversarially Robust Medical Classification via Attentive Convolutional Neural Networks. (99%)Isaac Wasserman
A White-Box Adversarial Attack Against a Digital Twin. (99%)Wilson Patterson; Ivan Fernandez; Subash Neupane; Milan Parmar; Sudip Mittal; Shahram Rahimi
Multi-view Representation Learning from Malware to Defend Against Adversarial Variants. (98%)James Lee Hu; Mohammadreza Ebrahimi; Weifeng Li; Xin Li; Hsinchun Chen
Adversarial Purification with the Manifold Hypothesis. (98%)Zhaoyuan Yang; Zhiwei Xu; Jing Zhang; Richard Hartley; Peter Tu
Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes. (98%)Sina Baharlouei; Fatemeh Sheikholeslami; Meisam Razaviyayn; Zico Kolter
Accelerating Certified Robustness Training via Knowledge Transfer. (73%)Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network. (64%)Huan Hua; Jun Yan; Xi Fang; Weiquan Huang; Huilin Yin; Wancheng Ge
Towards Robust Recommender Systems via Triple Cooperative Defense. (61%)Qingyang Wang; Defu Lian; Chenwang Wu; Enhong Chen
Towards Formal Approximated Minimal Explanations of Neural Networks. (13%)Shahaf Bassan; Guy Katz
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification. (13%)Yulin Zhu; Liang Tong; Kai Zhou
A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks. (3%)M. Kuzlu; F. O. Catak; S. Sarp; U. Cali; O Gueler
Robustness of Locally Differentially Private Graph Analysis Against Poisoning. (1%)Jacob Imola; Amrita Roy Chowdhury; Kamalika Chaudhuri
2022-10-24
Ares: A System-Oriented Wargame Framework for Adversarial ML. (99%)Farhan Ahmed; Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
SpacePhish: The Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning. (99%)Giovanni Apruzzese; Mauro Conti; Ying Yuan
Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs. (96%)Haibin Zheng; Haiyang Xiong; Jinyin Chen; Haonan Ma; Guohan Huang
On the Robustness of Dataset Inference. (88%)Sebastian Szyller; Rui Zhang; Jian Liu; N. Asokan
Flexible Android Malware Detection Model based on Generative Adversarial Networks with Code Tensor. (16%)Zhao Yang; Fengyang Deng; Linxi Han
Revisiting Sparse Convolutional Model for Visual Recognition. (11%)Xili Dai; Mingyang Li; Pengyuan Zhai; Shengbang Tong; Xingjian Gao; Shao-Lun Huang; Zhihui Zhu; Chong You; Yi Ma
2022-10-23
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. (68%)Kaiyuan Zhang; Guanhong Tao; Qiuling Xu; Siyuan Cheng; Shengwei An; Yingqi Liu; Shiwei Feng; Guangyu Shen; Pin-Yu Chen; Shiqing Ma; Xiangyu Zhang
Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future. (45%)Guo-Jun Qi; Mubarak Shah
2022-10-22
ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation. (99%)Fan Yin; Yao Li; Cho-Jui Hsieh; Kai-Wei Chang
Hindering Adversarial Attacks with Implicit Neural Representations. (92%)Andrei A. Rusu; Dan A. Calian; Sven Gowal; Raia Hadsell
GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections. (81%)Junyuan Fang; Haixian Wen; Jiajing Wu; Qi Xuan; Zibin Zheng; Chi K. Tse
Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games. (26%)Maria-Florina Balcan; Rattana Pukdee; Pradeep Ravikumar; Hongyang Zhang
Precisely the Point: Adversarial Augmentations for Faithful and Informative Text Generation. (4%)Wenhao Wu; Wei Li; Jiachen Liu; Xinyan Xiao; Sujian Li; Yajuan Lyu
2022-10-21
Evolution of Neural Tangent Kernels under Benign and Adversarial Training. (99%)Noel Loo; Ramin Hasani; Alexander Amini; Daniela Rus
The Dark Side of AutoML: Towards Architectural Backdoor Search. (68%)Ren Pang; Changjiang Li; Zhaohan Xi; Shouling Ji; Ting Wang
Diffusion Visual Counterfactual Explanations. (10%)Maximilian Augustin; Valentyn Boreiko; Francesco Croce; Matthias Hein
TCAB: A Large-Scale Text Classification Attack Benchmark. (10%)Kalyani Asthana; Zhouhang Xie; Wencong You; Adam Noack; Jonathan Brophy; Sameer Singh; Daniel Lowd
A critical review of cyber-physical security for building automation systems. (2%)Guowen Li; Lingyu Ren; Yangyang Fu; Zhiyao Yang; Veronica Adetola; Jin Wen; Qi Zhu; Teresa Wu; K. Selcuk Candanf; Zheng O'Neill
Extracted BERT Model Leaks More Information than You Think! (1%)Xuanli He; Chen Chen; Lingjuan Lyu; Qiongkai Xu
2022-10-20
Identifying Human Strategies for Generating Word-Level Adversarial Examples. (98%)Maximilian Mozes; Bennett Kleinberg; Lewis D. Griffin
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks. (98%)Jiyang Guan; Jian Liang; Ran He
Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models. (98%)Hannah Chen; Yangfeng Ji; David Evans
Learning Sample Reweighting for Accuracy and Adversarial Robustness. (93%)Chester Holtz; Tsui-Wei Weng; Gal Mishne
Similarity of Neural Architectures using Adversarial Attack Transferability. (86%)Jaehui Hwang; Dongyoon Han; Byeongho Heo; Song Park; Sanghyuk Chun; Jong-Seok Lee
New data poison attacks on machine learning classifiers for mobile exfiltration. (80%)Miguel A. Ramirez; Sangyoung Yoon; Ernesto Damiani; Hussam Al Hamadi; Claudio Agostino Ardagna; Nicola Bena; Young-Ji Byon; Tae-Yeon Kim; Chung-Suk Cho; Chan Yeob Yeun
Attacking Motion Estimation with Adversarial Snow. (16%)Jenny Schmalfuss; Lukas Mehl; Andrés Bruhn
How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers. (13%)Guangsheng Zhang; Bo Liu; Huan Tian; Tianqing Zhu; Ming Ding; Wanlei Zhou
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario. (4%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Enrique Tomás Martínez Beltrán; Daniel Demeter; Gérôme Bovet; Gregorio Martínez Pérez; Burkhard Stiller
Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning. (3%)Xiaoyi Chen; Baisong Xin; Shengfang Zhai; Shiqing Ma; Qingni Shen; Zhonghai Wu
LOT: Layer-wise Orthogonal Training on Improving $\ell_2$ Certified Robustness. (3%)Xiaojun Xu; Linyi Li; Bo Li
2022-10-19
Learning Transferable Adversarial Robust Representations via Multi-view Consistency. (99%)Minseon Kim; Hyeonjeong Ha; Dong Bok Lee; Sung Ju Hwang
Effective Targeted Attacks for Adversarial Self-Supervised Learning. (99%)Minseon Kim; Hyeonjeong Ha; Sooel Son; Sung Ju Hwang
No-Box Attacks on 3D Point Cloud Classification. (93%)Hanieh Naderi; Chinthaka Dinesh; Ivan V. Bajic; Shohreh Kasaei
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis. (83%)Ruinan Jin; Xiaoxiao Li
Chaos Theory and Adversarial Robustness. (73%)Jonathan S. Kent
Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey. (69%)Hui Cao; Wenlong Zou; Yinkun Wang; Ting Song; Mengjun Liu
Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP. (64%)Yangyi Chen; Hongcheng Gao; Ganqu Cui; Fanchao Qi; Longtao Huang; Zhiyuan Liu; Maosong Sun
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. (41%)Xiaoyu Cao; Jinyuan Jia; Zaixi Zhang; Neil Zhenqiang Gong
Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning. (16%)Ruihan Wu; Xiangyu Chen; Chuan Guo; Kilian Q. Weinberger
Variational Model Perturbation for Source-Free Domain Adaptation. (1%)Mengmeng Jing; Xiantong Zhen; Jingjing Li; Cees G. M. Snoek
2022-10-18
Scaling Adversarial Training to Large Perturbation Bounds. (98%)Sravanti Addepalli; Samyak Jain; Gaurang Sriramanan; R. Venkatesh Babu
Not All Poisons are Created Equal: Robust Training against Data Poisoning. (97%)Yu Yang; Tian Yu Liu; Baharan Mirzasoleiman
ROSE: Robust Selective Fine-tuning for Pre-trained Language Models. (73%)Lan Jiang; Hao Zhou; Yankai Lin; Peng Li; Jie Zhou; Rui Jiang
Analysis of Master Vein Attacks on Finger Vein Recognition Systems. (56%)Huy H. Nguyen; Trung-Nghia Le; Junichi Yamagishi; Isao Echizen
Training set cleansing of backdoor poisoning by self-supervised representation learning. (56%)H. Wang; S. Karami; O. Dia; H. Ritter; E. Emamjomeh-Zadeh; J. Chen; Z. Xiang; D. J. Miller; G. Kesidis
On the Adversarial Robustness of Mixture of Experts. (13%)Joan Puigcerver; Rodolphe Jenatton; Carlos Riquelme; Pranjal Awasthi; Srinadh Bhojanapalli
Transferable Unlearnable Examples. (8%)Jie Ren; Han Xu; Yuxuan Wan; Xingjun Ma; Lichao Sun; Jiliang Tang
Automatic Detection of Fake Key Attacks in Secure Messaging. (8%)Tarun Kumar Yadav; Devashish Gosain; Amir Herzberg; Daniel Zappala; Kent Seamons
Improving Adversarial Robustness by Contrastive Guided Diffusion Process. (2%)Yidong Ouyang; Liyan Xie; Guang Cheng
2022-10-17
Towards Generating Adversarial Examples on Mixed-type Data. (99%)Han Xu; Menghai Pan; Zhimeng Jiang; Huiyuan Chen; Xiaoting Li; Mahashweta Das; Hao Yang
Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors. (99%)Jialiang Sun; Tingsong Jiang; Wen Yao; Donghua Wang; Xiaoqian Chen
Probabilistic Categorical Adversarial Attack & Adversarial Training. (99%)Pengfei He; Han Xu; Jie Ren; Yuxuan Wan; Zitao Liu; Jiliang Tang
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class. (96%)Khoa D. Doan; Yingjie Lao; Ping Li
DE-CROP: Data-efficient Certified Robustness for Pretrained Classifiers. (87%)Gaurav Kumar Nayak; Ruchit Rawal; Anirban Chakraborty
Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations. (78%)Julia El Zini; Mariette Awad
Towards Fair Classification against Poisoning Attacks. (76%)Han Xu; Xiaorui Liu; Yuxuan Wan; Jiliang Tang
Deepfake Text Detection: Limitations and Opportunities. (41%)Jiameng Pu; Zain Sarwar; Sifat Muhammad Abdullah; Abdullah Rehman; Yoonjin Kim; Parantapa Bhattacharya; Mobin Javed; Bimal Viswanath
You Can't See Me: Physical Removal Attacks on LiDAR-based Autonomous Vehicles Driving Frameworks. (15%)Yulong Cao; S. Hrushikesh Bhupathiraju; Pirouz Naghavi; Takeshi Sugawara; Z. Morley Mao; Sara Rampazzi
Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models. (9%)Zhiyuan Zhang; Lingjuan Lyu; Xingjun Ma; Chenguang Wang; Xu Sun
Understanding CNN Fragility When Learning With Imbalanced Data. (1%)Damien Dablain; Kristen N. Jacobson; Colin Bellinger; Mark Roberts; Nitesh Chawla
2022-10-16
Object-Attentional Untargeted Adversarial Attack. (99%)Chao Zhou; Yuan-Gen Wang; Guopu Zhu
Nowhere to Hide: A Lightweight Unsupervised Detector against Adversarial Examples. (99%)Hui Liu; Bo Zhao; Kehuan Zhang; Peng Liu
ODG-Q: Robust Quantization via Online Domain Generalization. (83%)Chaofan Tao; Ngai Wong
Interpretable Machine Learning for Detection and Classification of Ransomware Families Based on API Calls. (1%)Rawshan Ara Mowri; Madhuri Siddula; Kaushik Roy
2022-10-15
RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging. (2%)Ajay Jaiswal; Kumar Ashutosh; Justin F Rousseau; Yifan Peng; Zhangyang Wang; Ying Ding
2022-10-14
Dynamics-aware Adversarial Attack of Adaptive Neural Networks. (89%)An Tao; Yueqi Duan; Yingqi Wang; Jiwen Lu; Jie Zhou
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture. (87%)Yichuan Mo; Dongxian Wu; Yifei Wang; Yiwen Guo; Yisen Wang
Is Face Recognition Safe from Realizable Attacks? (84%)Sanjay Saha; Terence Sim
Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks. (76%)Sishuo Chen; Wenkai Yang; Zhiyuan Zhang; Xiaohan Bi; Xu Sun
Close the Gate: Detecting Backdoored Models in Federated Learning based on Client-Side Deep Layer Output Analysis. (67%)Phillip Technical University Darmstadt Rieger; Torsten University of Würzburg Krauß; Markus Technical University Darmstadt Miettinen; Alexandra University of Würzburg Dmitrienko; Ahmad-Reza Technical University Darmstadt Sadeghi
2022-10-13
Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition. (99%)Shuai Jia; Bangjie Yin; Taiping Yao; Shouhong Ding; Chunhua Shen; Xiaokang Yang; Chao Ma
AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient. (99%)Farzad Nikfam; Alberto Marchisio; Maurizio Martina; Muhammad Shafique
Demystifying Self-supervised Trojan Attacks. (95%)Changjiang Li; Ren Pang; Zhaohan Xi; Tianyu Du; Shouling Ji; Yuan Yao; Ting Wang
Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors. (81%)Qixun Wang; Yifei Wang; Hong Zhu; Yisen Wang
Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation. (13%)Zhouxing Shi; Yihan Wang; Huan Zhang; Zico Kolter; Cho-Jui Hsieh
Large-Scale Open-Set Classification Protocols for ImageNet. (2%)Jesus Andres Palechor Anacona; Annesha Bhoumik; Manuel Günther
SoK: How Not to Architect Your Next-Generation TEE Malware? (1%)Kubilay Ahmet Küçük; Steve Moyle; Andrew Martin; Alexandru Mereacre; Nicholas Allott
Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning. (1%)Peng Ye; Zhifeng Jiang; Wei Wang; Bo Li; Baochun Li
Characterizing the Influence of Graph Elements. (1%)Zizhang Chen; Peizhao Li; Hongfu Liu; Pengyu Hong
2022-10-12
A Game Theoretical vulnerability analysis of Adversarial Attack. (99%)Khondker Fariha Hossain; Alireza Tavakkoli; Shamik Sengupta
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation. (99%)Zeyu Qin; Yanbo Fan; Yi Liu; Li Shen; Yong Zhang; Jue Wang; Baoyuan Wu
Visual Prompting for Adversarial Robustness. (99%)Aochuan Chen; Peter Lorenz; Yuguang Yao; Pin-Yu Chen; Sijia Liu
Robust Models are less Over-Confident. (96%)Julia Grabinski; Paul Gavrikov; Janis Keuper; Margret Keuper
Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity. (86%)Andrew C. Cullen; Paul Montague; Shijie Liu; Sarah M. Erfani; Benjamin I. P. Rubinstein
Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning. (82%)Yongyuan Liang; Yanchao Sun; Ruijie Zheng; Furong Huang
COLLIDER: A Robust Training Framework for Backdoor Data. (81%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. (76%)Haotao Wang; Junyuan Hong; Aston Zhang; Jiayu Zhou; Zhangyang Wang
Few-shot Backdoor Attacks via Neural Tangent Kernels. (62%)Jonathan Hayase; Sewoong Oh
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? (9%)Yi Zeng; Minzhou Pan; Himanshu Jahagirdar; Ming Jin; Lingjuan Lyu; Ruoxi Jia
Understanding Impacts of Task Similarity on Backdoor Attack and Detection. (2%)Di Tang; Rui Zhu; XiaoFeng Wang; Haixu Tang; Yi Chen
When are Local Queries Useful for Robust Learning? (1%)Pascale Gourdeau; Varun Kanade; Marta Kwiatkowska; James Worrell
2022-10-11
What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? (99%)Nikolaos Tsilivis; Julia Kempe
Stable and Efficient Adversarial Training through Local Linearization. (91%)Zhuorong Li; Daiwei Yu
RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks. (86%)Alberto Marchisio; Vojtech Mrazek; Andrea Massa; Beatrice Bussolino; Maurizio Martina; Muhammad Shafique
Adversarial Attack Against Image-Based Localization Neural Networks. (78%)Meir Brand; Itay Naeh; Daniel Teitelman
Detecting Backdoors in Deep Text Classifiers. (76%)You Guo; Jun Wang; Trevor Cohn
Human Body Measurement Estimation with Adversarial Augmentation. (33%)Nataniel Ruiz; Miriam Bellver; Timo Bolkart; Ambuj Arora; Ming C. Lin; Javier Romero; Raja Bala
Curved Representation Space of Vision Transformers. (10%)Juyeop Kim; Junha Park; Songkuk Kim; Jong-Seok Lee
Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity. (1%)William de Vazelhes; Hualin Zhang; Huimin Wu; Xiao-Tong Yuan; Bin Gu
Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach. (1%)Peng Mi; Li Shen; Tianhe Ren; Yiyi Zhou; Xiaoshuai Sun; Rongrong Ji; Dacheng Tao
2022-10-10
Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization. (92%)Ziquan Liu; Antoni B. Chan
Revisiting adapters with adversarial training. (88%)Sylvestre-Alvise Rebuffi; Francesco Croce; Sven Gowal
Universal Adversarial Perturbations: Efficiency on a small image dataset. (81%)Waris Radji (ENSEIRB-MATMECA, UB)
Certified Training: Small Boxes are All You Need. (22%)Mark Niklas Müller; Franziska Eckert; Marc Fischer; Martin Vechev
Denoising Masked AutoEncoders Help Robust Classification. (1%)Quanlin Wu; Hang Ye; Yuntian Gu; Huishuai Zhang; Liwei Wang; Di He
2022-10-09
Pruning Adversarially Robust Neural Networks without Adversarial Examples. (99%)Tong Jian; Zifeng Wang; Yanzhi Wang; Jennifer Dy; Stratis Ioannidis
Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective. (99%)Yao Zhu; Yuefeng Chen; Xiaodan Li; Kejiang Chen; Yuan He; Xiang Tian; Bolun Zheng; Yaowu Chen; Qingming Huang
Online Training Through Time for Spiking Neural Networks. (1%)Mingqing Xiao; Qingyan Meng; Zongpeng Zhang; Di He; Zhouchen Lin
2022-10-08
FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems. (99%)Jiahui Chen; Yi Zhao; Qi Li; Xuewei Feng; Ke Xu
Symmetry Defense Against CNN Adversarial Perturbation Attacks. (99%)Blerta Lindqvist
Robustness of Unsupervised Representation Learning without Labels. (54%)Aleksandar Petrov; Marta Kwiatkowska
2022-10-07
Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs. (99%)Prashant Pandey; Aleti Vardhan; Mustafa Chasmai; Tanuj Sur; Brejesh Lall
Pre-trained Adversarial Perturbations. (99%)Yuanhao Ban; Yinpeng Dong
NMTSloth: Understanding and Testing Efficiency Degradation of Neural Machine Translation Systems. (97%)Simin Chen; Cong Liu; Mirazul Haque; Zihe Song; Wei Yang
ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints. (93%)Yinpeng Dong; Shouwei Ruan; Hang Su; Caixin Kang; Xingxing Wei; Jun Zhu
Game-Theoretic Understanding of Misclassification. (47%)Kosuke Sumiyasu; Kazuhiko Kawamoto; Hiroshi Kera
A2: Efficient Automated Attacker for Boosting Adversarial Training. (41%)Zhuoer Xu; Guanghui Zhu; Changhua Meng; Shiwen Cui; Zhenzhe Ying; Weiqiang Wang; Ming GU; Yihua Huang
BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets. (9%)Chen Gong; Zhou Yang; Yunpeng Bai; Junda He; Jieke Shi; Kecen Li; Arunesh Sinha; Bowen Xu; Xinwen Hou; David Lo; Tianhao Wang
A Wolf in Sheep's Clothing: Spreading Deadly Pathogens Under the Disguise of Popular Music. (2%)Anomadarshi Barua; Yonatan Gizachew Achamyeleh; Mohammad Abdullah Al Faruque
Improving Fine-Grain Segmentation via Interpretable Modifications: A Case Study in Fossil Segmentation. (1%)Indu Panigrahi; Ryan Manzuk; Adam Maloof; Ruth Fong
2022-10-06
Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. (99%)Chawin Sitawarin; Florian Tramèr; Nicholas Carlini
Enhancing Code Classification by Mixup-Based Data Augmentation. (96%)Zeming Dong; Qiang Hu; Yuejun Guo; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
Deep Reinforcement Learning based Evasion Generative Adversarial Network for Botnet Detection. (92%)Rizwan Hamid Randhawa; Nauman Aslam; Mohammad Alauthman; Muhammad Khalid; Husnain Rafiq
On Optimal Learning Under Targeted Data Poisoning. (82%)Steve Hanneke; Amin Karbasi; Mohammad Mahmoody; Idan Mehalel; Shay Moran
Towards Out-of-Distribution Adversarial Robustness. (73%)Adam Ibrahim; Charles Guille-Escuret; Ioannis Mitliagkas; Irina Rish; David Krueger; Pouya Bashivan
InferES : A Natural Language Inference Corpus for Spanish Featuring Negation-Based Contrastive and Adversarial Examples. (61%)Venelin Kovatchev; Mariona Taulé
Unsupervised Domain Adaptation for COVID-19 Information Service with Contrastive Adversarial Domain Mixup. (41%)Huimin Zeng; Zhenrui Yue; Ziyi Kou; Lanyu Shang; Yang Zhang; Dong Wang
Synthetic Dataset Generation for Privacy-Preserving Machine Learning. (2%)Efstathia Soufleri; Gobinda Saha; Kaushik Roy
Enhancing Mixup-Based Graph Learning for Language Processing via Hybrid Pooling. (1%)Zeming Dong; Qiang Hu; Yuejun Guo; Maxime Cordy; Mike Papadakis; Yves Le Traon; Jianjun Zhao
Bad Citrus: Reducing Adversarial Costs with Model Distances. (1%)Giorgio Severi; Will Pearce; Alina Oprea
2022-10-05
Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks. (99%)Shengming Yuan; Qilong Zhang; Lianli Gao; Yaya Cheng; Jingkuan Song
Dynamic Stochastic Ensemble with Adversarial Robust Lottery Ticket Subnetworks. (98%)Qi Peng; Wenlin Liu; Ruoxi Qin; Libin Hou; Bin Yan; Linyuan Wang
On Adversarial Robustness of Deep Image Deblurring. (83%)Kanchana Vaishnavi Gandikota; Paramanand Chandramouli; Michael Moeller
A Closer Look at Robustness to L-infinity and Spatial Perturbations and their Composition. (81%)Luke Rowe; Benjamin Thérien; Krzysztof Czarnecki; Hongyang Zhang
Jitter Does Matter: Adapting Gaze Estimation to New Domains. (78%)Ruicong Liu; Yiwei Bao; Mingjie Xu; Haofei Wang; Yunfei Liu; Feng Lu
Image Masking for Robust Self-Supervised Monocular Depth Estimation. (38%)Hemang Chawla; Kishaan Jeeveswaran; Elahe Arani; Bahram Zonooz
Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations. (38%)Jialing Liao; Zheng Chen; Erik G. Larsson
2022-10-04
Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective. (97%)Bohang Zhang; Du Jiang; Di He; Liwei Wang
Robust Fair Clustering: A Novel Fairness Attack and Defense Framework. (93%)Anshuman Chhabra; Peizhao Li; Prasant Mohapatra; Hongfu Liu
A Study on the Efficiency and Generalization of Light Hybrid Retrievers. (86%)Man Luo; Shashank Jain; Anchit Gupta; Arash Einolghozati; Barlas Oguz; Debojeet Chatterjee; Xilun Chen; Chitta Baral; Peyman Heidari
Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models. (81%)Fan Liu; Hao Liu; Wenzhao Jiang
Invariant Aggregator for Defending against Federated Backdoor Attacks. (80%)Xiaoyang Wang; Dimitrios Dimitriadis; Sanmi Koyejo; Shruti Tople
On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses. (75%)Anshuman Chhabra; Ashwin Sekhari; Prasant Mohapatra
Robustness Certification of Visual Perception Models via Camera Motion Smoothing. (70%)Hanjiang Hu; Zuxin Liu; Linyi Li; Jiacheng Zhu; Ding Zhao
Backdoor Attacks in the Supply Chain of Masked Image Modeling. (68%)Xinyue Shen; Xinlei He; Zheng Li; Yun Shen; Michael Backes; Yang Zhang
CADet: Fully Self-Supervised Anomaly Detection With Contrastive Learning. (67%)Charles Guille-Escuret; Pau Rodriguez; David Vazquez; Ioannis Mitliagkas; Joao Monteiro
2022-10-03
MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. (99%)Jinyuan Jia; Wenjie Qu; Neil Zhenqiang Gong
Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection. (97%)Xuanjun Chen; Haibin Wu; Helen Meng; Hung-yi Lee; Jyh-Shing Roger Jang
Stability Analysis and Generalization Bounds of Adversarial Training. (96%)Jiancong Xiao; Yanbo Fan; Ruoyu Sun; Jue Wang; Zhi-Quan Luo
On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks. (92%)Huimin Zeng; Zhenrui Yue; Yang Zhang; Ziyi Kou; Lanyu Shang; Dong Wang
Decompiling x86 Deep Neural Network Executables. (83%)Zhibo Liu; Yuanyuan Yuan; Shuai Wang; Xiaofei Xie; Lei Ma
Strength-Adaptive Adversarial Training. (80%)Chaojian Yu; Dawei Zhou; Li Shen; Jun Yu; Bo Han; Mingming Gong; Nannan Wang; Tongliang Liu
ASGNN: Graph Neural Networks with Adaptive Structure. (68%)Zepeng Zhang; Songtao Lu; Zengfeng Huang; Ziping Zhao
UnGANable: Defending Against GAN-based Face Manipulation. (2%)Zheng Li; Ning Yu; Ahmed Salem; Michael Backes; Mario Fritz; Yang Zhang
2022-10-02
Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis. (99%)Jiancong Xiao; Zeyu Qin; Yanbo Fan; Baoyuan Wu; Jue Wang; Zhi-Quan Luo
Understanding Adversarial Robustness Against On-manifold Adversarial Examples. (99%)Jiancong Xiao; Liusha Yang; Yanbo Fan; Jue Wang; Zhi-Quan Luo
FLCert: Provably Secure Federated Learning against Poisoning Attacks. (74%)Xiaoyu Cao; Zaixi Zhang; Jinyuan Jia; Neil Zhenqiang Gong
Optimization for Robustness Evaluation beyond $\ell_p$ Metrics. (16%)Hengyue Liang; Buyun Liang; Ying Cui; Tim Mitchell; Ju Sun
Automated Security Analysis of Exposure Notification Systems. (1%)Kevin Morio; Ilkan Esiyok; Dennis Jackson; Robert Künnemann
2022-10-01
DeltaBound Attack: Efficient decision-based attack in low queries regime. (96%)Lorenzo Rossi
Adversarial Attacks on Transformers-Based Malware Detectors. (91%)Yash Jakhotiya; Heramb Patil; Jugal Rawlani; Sunil B. Mane
Voice Spoofing Countermeasures: Taxonomy, State-of-the-art, experimental analysis of generalizability, open challenges, and the way forward. (5%)Awais Khan; Khalid Mahmood Malik; James Ryan; Mikul Saravanan
2022-09-30
Your Out-of-Distribution Detection Method is Not Robust! (99%)Mohammad Azizmalayeri; Arshia Soltani Moakhar; Arman Zarei; Reihaneh Zohrabi; Mohammad Taghi Manzuri; Mohammad Hossein Rohban
Learning Robust Kernel Ensembles with Kernel Average Pooling. (99%)Pouya Bashivan; Adam Ibrahim; Amirozhan Dehghani; Yifei Ren
Adversarial Robustness of Representation Learning for Knowledge Graphs. (95%)Peru Bhardwaj
Hiding Visual Information via Obfuscating Adversarial Perturbations. (92%)Zhigang Su; Dawei Zhou; Nannan Wangu; Decheng Li; Zhen Wang; Xinbo Gao
On the tightness of linear relaxation based robustness certification methods. (78%)Cheng Tang
Data Poisoning Attacks Against Multimodal Encoders. (73%)Ziqing Yang; Xinlei He; Zheng Li; Michael Backes; Mathias Humbert; Pascal Berrang; Yang Zhang
ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks. (73%)Tim Clifford; Ilia Shumailov; Yiren Zhao; Ross Anderson; Robert Mullins
2022-09-29
Physical Adversarial Attack meets Computer Vision: A Decade Survey. (99%)Hui Wei; Hao Tang; Xuemei Jia; Zhixiang Wang; Hanxun Yu; Zhubo Li; Shin'ichi Satoh; Luc Van Gool; Zheng Wang
Towards Lightweight Black-Box Attacks against Deep Neural Networks. (99%)Chenghao Sun; Yonggang Zhang; Wan Chaoqun; Qizhou Wang; Ya Li; Tongliang Liu; Bo Han; Xinmei Tian
Generalizability of Adversarial Robustness Under Distribution Shifts. (83%)Kumail Alhamoud; Hasan Abed Al Kader Hammoud; Motasem Alfarra; Bernard Ghanem
Digital and Physical Face Attacks: Reviewing and One Step Further. (2%)Chenqi Kong; Shiqi Wang; Haoliang Li
Chameleon Cache: Approximating Fully Associative Caches with Random Replacement to Prevent Contention-Based Cache Attacks. (1%)Thomas Unterluggauer; Austin Harris; Scott Constable; Fangfei Liu; Carlos Rozas
2022-09-28
A Survey on Physical Adversarial Attack in Computer Vision. (99%)Donghua Wang; Wen Yao; Tingsong Jiang; Guijian Tang; Xiaoqian Chen
Exploring the Relationship between Architecture and Adversarially Robust Generalization. (99%)Aishan Liu; Shiyu Tang; Siyuan Liang; Ruihao Gong; Boxi Wu; Xianglong Liu; Dacheng Tao
A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks. (67%)Kevin Hector; Mathieu Dumont; Pierre-Alain Moellic; Jean-Max Dutertre
Supervised Contrastive Learning as Multi-Objective Optimization for Fine-Tuning Large Pre-trained Language Models. (47%)Youness Moukafih; Mounir Ghogho; Kamel Smaili
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach. (31%)Marco Anisetti; Claudio A. Ardagna; Alessandro Balestrucci; Nicola Bena; Ernesto Damiani; Chan Yeob Yeun
CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention. (1%)Ziyu Guo; Renrui Zhang; Longtian Qiu; Xianzheng Ma; Xupeng Miao; Xuming He; Bin Cui
Improving alignment of dialogue agents via targeted human judgements. (1%)Amelia Glaese; Nat McAleese; Maja Trębacz; John Aslanides; Vlad Firoiu; Timo Ewalds; Maribeth Rauh; Laura Weidinger; Martin Chadwick; Phoebe Thacker; Lucy Campbell-Gillingham; Jonathan Uesato; Po-Sen Huang; Ramona Comanescu; Fan Yang; Abigail See; Sumanth Dathathri; Rory Greig; Charlie Chen; Doug Fritz; Jaume Sanchez Elias; Richard Green; Soňa Mokrá; Nicholas Fernando; Boxi Wu; Rachel Foley; Susannah Young; Iason Gabriel; William Isaac; John Mellor; Demis Hassabis; Koray Kavukcuoglu; Lisa Anne Hendricks; Geoffrey Irving
2022-09-27
Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection. (74%)Svetlana Pavlitskaya; Jonas Hendl; Sebastian Kleim; Leopold Müller; Fabian Wylczoch; J. Marius Zöllner
Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training. (33%)Saehyung Lee; Hyungyu Lee
Attacking Compressed Vision Transformers. (33%)Swapnil Parekh; Devansh Shah; Pratyush Shukla
Mitigating Attacks on Artificial Intelligence-based Spectrum Sensing for Cellular Network Signals. (8%)Ferhat Ozgur Catak; Murat Kuzlu; Salih Sarp; Evren Catak; Umit Cali
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. (5%)Yiming Li; Yang Bai; Yong Jiang; Yong Yang; Shu-Tao Xia; Bo Li
Reconstruction-guided attention improves the robustness and shape processing of neural networks. (2%)Seoyoung Ahn; Hossein Adeli; Gregory J. Zelinsky
A Learning-based Honeypot Game for Collaborative Defense in UAV Networks. (1%)Yuntao Wang; Zhou Su; Abderrahim Benslimane; Qichao Xu; Minghui Dai; Ruidong Li
Stability Via Adversarial Training of Neural Network Stochastic Control of Mean-Field Type. (1%)Julian Barreiro-Gomez; Salah Eddine Choutri; Boualem Djehiche
Measuring Overfitting in Convolutional Neural Networks using Adversarial Perturbations and Label Noise. (1%)Svetlana Pavlitskaya; Joël Oswald; J. Marius Zöllner
2022-09-26
FG-UAP: Feature-Gathering Universal Adversarial Perturbation. (99%)Zhixing Ye; Xinwen Cheng; Xiaolin Huang
Activation Learning by Local Competitions. (64%)Hongchao Zhou
Multi-Task Adversarial Training Algorithm for Multi-Speaker Neural Text-to-Speech. (1%)Yusuke Nakai; Yuki Saito; Kenta Udagawa; Hiroshi Saruwatari
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification. (1%)Adrien Bennetot; Gianni Franchi; Ser Javier Del; Raja Chatila; Natalia Diaz-Rodriguez
2022-09-25
SPRITZ-1.5C: Employing Deep Ensemble Learning for Improving the Security of Computer Networks against Adversarial Attacks. (81%)Ehsan Nowroozi; Mohammadreza Mohammadi; Erkay Savas; Mauro Conti; Yassine Mekdad
2022-09-24
Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning. (99%)Zhengwei Fang; Rui Wang; Tao Huang; Liping Jing
2022-09-23
The "Beatrix" Resurrections: Robust Backdoor Detection via Gram Matrices. (13%)Wanlun Ma; Derui Wang; Ruoxi Sun; Minhui Xue; Sheng Wen; Yang Xiang
2022-09-22
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models. (50%)Sohaib Ahmad; Benjamin Fuller; Kaleel Mahmood
2022-09-21
Fair Robust Active Learning by Joint Inconsistency. (99%)Tsung-Han Wu; Shang-Tse Chen; Winston H. Hsu
Toy Models of Superposition. (45%)Nelson Elhage; Tristan Hume; Catherine Olsson; Nicholas Schiefer; Tom Henighan; Shauna Kravec; Zac Hatfield-Dodds; Robert Lasenby; Dawn Drain; Carol Chen; Roger Grosse; Sam McCandlish; Jared Kaplan; Dario Amodei; Martin Wattenberg; Christopher Olah
DARTSRepair: Core-failure-set Guided DARTS for Network Robustness to Common Corruptions. (13%)Xuhong Ren; Jianlang Chen; Felix Juefei-Xu; Wanli Xue; Qing Guo; Lei Ma; Jianjun Zhao; Shengyong Chen
Fairness Reprogramming. (1%)Guanhua Zhang; Yihua Zhang; Yang Zhang; Wenqi Fan; Qing Li; Sijia Liu; Shiyu Chang
2022-09-20
Understanding Real-world Threats to Deep Learning Models in Android Apps. (99%)Zizhuang Deng; Kai Chen; Guozhu Meng; Xiaodong Zhang; Ke Xu; Yao Cheng
Audit and Improve Robustness of Private Neural Networks on Encrypted Data. (99%)Jiaqi Xue; Lei Xu; Lin Chen; Weidong Shi; Kaidi Xu; Qian Lou
GAMA: Generative Adversarial Multi-Object Scene Attacks. (99%)Abhishek Aich; Calvin-Khang Ta; Akash Gupta; Chengyu Song; Srikanth V. Krishnamurthy; M. Salman Asif; Amit K. Roy-Chowdhury
Sparse Vicious Attacks on Graph Neural Networks. (98%)Giovanni Trappolini; Valentino Maiorca; Silvio Severino; Emanuele Rodolà; Fabrizio Silvestri; Gabriele Tolomei
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks. (98%)Abhishek Aich; Shasha Li; Chengyu Song; M. Salman Asif; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury
Rethinking Data Augmentation in Knowledge Distillation for Object Detection. (68%)Jiawei Liang; Siyuan Liang; Aishan Liu; Mingli Zhu; Danni Yuan; Chenye Xu; Xiaochun Cao
CANflict: Exploiting Peripheral Conflicts for Data-Link Layer Attacks on Automotive Networks. (1%)Alvise de Faveri Tron; Stefano Longari; Michele Carminati; Mario Polino; Stefano Zanero
EM-Fault It Yourself: Building a Replicable EMFI Setup for Desktop and Server Hardware. (1%)Niclas Kühnapfel; Robert Buhren; Hans Niklas Jacob; Thilo Krachenfels; Christian Werling; Jean-Pierre Seifert
2022-09-19
Adversarial Catoptric Light: An Effective, Stealthy and Robust Physical-World Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
Adversarial Color Projection: A Projector-Based Physical Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
2022-09-18
On the Adversarial Transferability of ConvMixer Models. (99%)Ryota Iijima; Miki Tanaka; Isao Echizen; Hitoshi Kiya
AdvDO: Realistic Adversarial Attacks for Trajectory Prediction. (96%)Yulong Cao; Chaowei Xiao; Anima Anandkumar; Danfei Xu; Marco Pavone
Distribution inference risks: Identifying and mitigating sources of leakage. (1%)Valentin Hartmann; Léo Meynent; Maxime Peyrard; Dimitrios Dimitriadis; Shruti Tople; Robert West
2022-09-17
Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models. (99%)Raphael Olivier; Hadi Abdullah; Bhiksha Raj
Characterizing Internal Evasion Attacks in Federated Learning. (98%)Taejin Kim; Shubhranshu Singh; Nikhil Madaan; Carlee Joe-Wong
A study on the deviations in performance of FNNs and CNNs in the realm of grayscale adversarial images. (4%)Durga Shree Nagabushanam; Steve Mathew; Chiranji Lal Chowdhary
2022-09-16
Robust Ensemble Morph Detection with Domain Generalization. (99%)Hossein Kashiani; Shoaib Meraj Sami; Sobhan Soleymani; Nasser M. Nasrabadi
A Large-scale Multiple-objective Method for Black-box Attack against Object Detection. (99%)Siyuan Liang; Longkang Li; Yanbo Fan; Xiaojun Jia; Jingzhi Li; Baoyuan Wu; Xiaochun Cao
Enhance the Visual Representation via Discrete Adversarial Training. (97%)Xiaofeng Mao; Yuefeng Chen; Ranjie Duan; Yao Zhu; Gege Qi; Shaokai Ye; Xiaodan Li; Rong Zhang; Hui Xue
Model Inversion Attacks against Graph Neural Networks. (92%)Zaixi Zhang; Qi Liu; Zhenya Huang; Hao Wang; Chee-Kong Lee; Enhong Chen
PointCAT: Contrastive Adversarial Training for Robust Point Cloud Recognition. (62%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Hang Zhou; Weiming Zhang; Kui Zhang; Gang Hua; Nenghai Yu
Cascading Failures in Power Grids. (33%)Rounak Meyur
Dataset Inference for Self-Supervised Models. (16%)Adam Dziedzic; Haonan Duan; Muhammad Ahmad Kaleem; Nikita Dhawan; Jonas Guan; Yannis Cattan; Franziska Boenisch; Nicolas Papernot
On the Robustness of Graph Neural Diffusion to Topology Perturbations. (15%)Yang Song; Qiyu Kang; Sijie Wang; Zhao Kai; Wee Peng Tay
A Systematic Evaluation of Node Embedding Robustness. (11%)Alexandru Mara; Jefrey Lijffijt; Stephan Günnemann; Bie Tijl De
PA-Boot: A Formally Verified Authentication Protocol for Multiprocessor Secure Boot. (1%)Zhuoruo Zhang; Chenyang Yu; Rui Chang; Mingshuai Chen; Bo Feng; He Huang; Qinming Dai; Wenbo Shen; Yongwang Zhao
2022-09-15
Improving Robust Fairness via Balance Adversarial Training. (99%)Chunyu Sun; Chenye Xu; Chengyuan Yao; Siyuan Liang; Yichao Wu; Ding Liang; XiangLong Liu; Aishan Liu
A Light Recipe to Train Robust Vision Transformers. (98%)Edoardo Debenedetti; Vikash Sehwag; Prateek Mittal
Part-Based Models Improve Adversarial Robustness. (92%)Chawin Sitawarin; Kornrapat Pongmala; Yizheng Chen; Nicholas Carlini; David Wagner
Explicit Tradeoffs between Adversarial and Natural Distributional Robustness. (80%)Mazda Moayeri; Kiarash Banihashem; Soheil Feizi
Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization. (80%)Omar Montasser; Steve Hanneke; Nathan Srebro
Defending Root DNS Servers Against DDoS Using Layered Defenses. (15%)A S M Rizvi; Jelena Mirkovic; John Heidemann; Wesley Hardaker; Robert Story
BadRes: Reveal the Backdoors through Residual Connection. (2%)Mingrui He; Tianyu Chen; Haoyi Zhou; Shanghang Zhang; Jianxin Li
Adversarial Cross-View Disentangled Graph Contrastive Learning. (1%)Qianlong Wen; Zhongyu Ouyang; Chunhui Zhang; Yiyue Qian; Yanfang Ye; Chuxu Zhang
Towards Improving Calibration in Object Detection Under Domain Shift. (1%)Muhammad Akhtar Munir; Muhammad Haris Khan; M. Saquib Sarfraz; Mohsen Ali
2022-09-14
Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries. (99%)Alexander Cann; Ian Colbert; Ihab Amer
PointACL: Adversarial Contrastive Learning for Robust Point Clouds Representation under Adversarial Attack. (99%)Junxuan Huang; Yatong An; Lu Cheng; Bai Chen; Junsong Yuan; Chunming Qiao
Certified Robustness to Word Substitution Ranking Attack for Neural Ranking Models. (99%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Wei Chen; Yixing Fan; Rijke Maarten de; Xueqi Cheng
Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models. (97%)Jiawei Liu; Yangyang Kang; Di Tang; Kaisong Song; Changlong Sun; Xiaofeng Wang; Wei Lu; Xiaozhong Liu
On the interplay of adversarial robustness and architecture components: patches, convolution and attention. (67%)Francesco Croce; Matthias Hein
M^4I: Multi-modal Models Membership Inference. (54%)Pingyi Hu; Zihan Wang; Ruoxi Sun; Hu Wang; Minhui Xue
Finetuning Pretrained Vision-Language Models with Correlation Information Bottleneck for Robust Visual Question Answering. (12%)Jingjing Jiang; Ziyi Liu; Nanning Zheng
Robust Constrained Reinforcement Learning. (9%)Yue Wang; Fei Miao; Shaofeng Zou
2022-09-13
Adversarial Coreset Selection for Efficient Robust Training. (99%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Attack. (99%)Yanyun Wang; Dehui Du; Haibo Hu; Zi Liang; Yuanhao Liu
PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models. (92%)William Hackett; Stefan Trawicki; Zhengxin Yu; Neeraj Suri; Peter Garraghan
Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation. (78%)Maksym Yatsura; Kaspar Sakmann; N. Grace Hua; Matthias Hein; Jan Hendrik Metzen
Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. (68%)Hussain Hussain; Meng Cao; Sandipan Sikdar; Denis Helic; Elisabeth Lex; Markus Strohmaier; Roman Kern
ADMM based Distributed State Observer Design under Sparse Sensor Attacks. (22%)Vinaya Mary Prinse; Rachel Kalpana Kalaimani
A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game. (15%)Ke Ma; Qianqian Xu; Jinshan Zeng; Guorong Li; Xiaochun Cao; Qingming Huang
Defense against Privacy Leakage in Federated Learning. (12%)Jing Wu; Munawar Hayat; Mingyi Zhou; Mehrtash Harandi
Federated Learning based on Defending Against Data Poisoning Attacks in IoT. (1%)Jiayin Li; Wenzhong Guo; Xingshuo Han; Jianping Cai; Ximeng Liu
2022-09-12
Adaptive Perturbation Generation for Multiple Backdoors Detection. (95%)Yuhang Wang; Huafeng Shi; Rui Min; Ruijia Wu; Siyuan Liang; Yichao Wu; Ding Liang; Aishan Liu
CARE: Certifiably Robust Learning with Reasoning via Variational Inference. (75%)Jiawei Zhang; Linyi Li; Ce Zhang; Bo Li
Sample Complexity of an Adversarial Attack on UCB-based Best-arm Identification Policy. (69%)Varsha Pendyala
Boosting Robustness Verification of Semantic Feature Neighborhoods. (54%)Anan Kabaha; Dana Drachsler-Cohen
Semantic-Preserving Adversarial Code Comprehension. (1%)Yiyang Li; Hongqiu Wu; Hai Zhao
Holistic Segmentation. (1%)Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Nassir Navab; Benjamin Busam; Federico Tombari
Class-Level Logit Perturbation. (1%)Mengyang Li; Fengguang Su; Ou Wu; Ji Zhang
2022-09-11
Resisting Deep Learning Models Against Adversarial Attack Transferability via Feature Randomization. (99%)Ehsan Nowroozi; Mohammadreza Mohammadi; Pargol Golmohammadi; Yassine Mekdad; Mauro Conti; Selcuk Uluagac
Generate novel and robust samples from data: accessible sharing without privacy concerns. (5%)David Banh; Alan Huang
2022-09-10
Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense. (99%)Bowen Peng; Bo Peng; Jie Zhou; Jianyue Xie; Li Liu
2022-09-09
The Space of Adversarial Strategies. (99%)Ryan Sheatsley; Blaine Hoak; Eric Pauley; Patrick McDaniel
Defend Data Poisoning Attacks on Voice Authentication. (54%)Ke Li; Cameron Baird; Dan Lin
Robust-by-Design Classification via Unitary-Gradient Neural Networks. (41%)Fabio Brau; Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled Membership Inference. (10%)Hanzhou Wu
Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System. (1%)Xin Li; Yao Qiang; Chengyin Li; Sijia Liu; Dongxiao Zhu
2022-09-08
Incorporating Locality of Images to Generate Targeted Transferable Adversarial Examples. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Evaluating the Security of Aircraft Systems. (92%)Edan Habler; Ron Bitton; Asaf Shabtai
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks. (64%)Chulin Xie; Yunhui Long; Pin-Yu Chen; Qinbin Li; Arash Nourian; Sanmi Koyejo; Bo Li
A Survey of Recent Advances in Deep Learning Models for Detecting Malware in Desktop and Mobile Platforms. (1%)Pascal Maniriho; Abdun Naser Mahmood; Mohammad Jabed Morshed Chowdhury
FADE: Enabling Large-Scale Federated Adversarial Training on Resource-Constrained Edge Devices. (1%)Minxue Tang; Jianyi Zhang; Mingyuan Ma; Louis DiValentin; Aolin Ding; Amin Hassanzadeh; Hai Li; Yiran Chen
2022-09-07
On the Transferability of Adversarial Examples between Encrypted Models. (99%)Miki Tanaka; Isao Echizen; Hitoshi Kiya
Securing the Spike: On the Transferabilty and Security of Spiking Neural Networks to Adversarial Examples. (99%)Nuo Xu; Kaleel Mahmood; Haowen Fang; Ethan Rathbun; Caiwen Ding; Wujie Wen
Reward Delay Attacks on Deep Reinforcement Learning. (70%)Anindya Sarkar; Jiarui Feng; Yevgeniy Vorobeychik; Christopher Gill; Ning Zhang
Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems. (47%)Sahar Abdelnabi; Mario Fritz
Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. (15%)Wai Man Si; Michael Backes; Jeremy Blackburn; Cristofaro Emiliano De; Gianluca Stringhini; Savvas Zannettou; Yang Zhang
Physics-Guided Adversarial Machine Learning for Aircraft Systems Simulation. (1%)Houssem Ben Braiek; Thomas Reid; Foutse Khomh
Hardware faults that matter: Understanding and Estimating the safety impact of hardware faults on object detection DNNs. (1%)Syed Qutub; Florian Geissler; Yang Peng; Ralf Grafe; Michael Paulitsch; Gereon Hinz; Alois Knoll
MalDetConv: Automated Behaviour-based Malware Detection Framework Based on Natural Language Processing and Deep Learning Techniques. (1%)Pascal Maniriho; Abdun Naser Mahmood; Mohammad Jabed Morshed Chowdhury
2022-09-06
Instance Attack: An Explanation-based Vulnerability Analysis Framework Against DNNs for Malware Detection. (99%)Sun RuiJin; Guo ShiZe; Guo JinHong; Xing ChangYou; Yang LuMing; Guo Xi; Pan ZhiSong
Bag of Tricks for FGSM Adversarial Training. (96%)Zichao Li; Li Liu; Zeyu Wang; Yuyin Zhou; Cihang Xie
Improving Robustness to Out-of-Distribution Data by Frequency-based Augmentation. (82%)Koki Mukai; Soichiro Kumano; Toshihiko Yamasaki
Defending Against Backdoor Attack on Graph Nerual Network by Explainability. (80%)Bingchen Jiang; Zhao Li
MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World. (56%)Hua Ma; Yinshan Li; Yansong Gao; Zhi Zhang; Alsharif Abuadbba; Anmin Fu; Said F. Al-Sarawi; Nepal Surya; Derek Abbott
Multimodal contrastive learning for remote sensing tasks. (1%)Umangi Jain; Alex Wilson; Varun Gulshan
Annealing Optimization for Progressive Learning with Stochastic Approximation. (1%)Christos Mavridis; John Baras
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps. (1%)Alireza Ganjdanesh; Shangqian Gao; Heng Huang
A Survey of Machine Unlearning. (1%)Thanh Tam Nguyen; Thanh Trung Huynh; Phi Le Nguyen; Alan Wee-Chung Liew; Hongzhi Yin; Quoc Viet Hung Nguyen
2022-09-05
Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples. (98%)Hezekiah J. Branch; Jonathan Rodriguez Cefalu; Jeremy McHugh; Leyla Hujer; Aditya Bahl; Daniel del Castillo Iglesias; Ron Heichman; Ramesh Darwishi
White-Box Adversarial Policies in Deep Reinforcement Learning. (98%)Stephen Casper; Taylor Killian; Gabriel Kreiman; Dylan Hadfield-Menell
"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution. (69%)Yuyou Gan; Yuhao Mao; Xuhong Zhang; Shouling Ji; Yuwen Pu; Meng Han; Jianwei Yin; Ting Wang
Adversarial Detection: Attacking Object Detection in Real Time. (64%)Han Wu; Syed Yunas; Sareh Rowlands; Wenjie Ruan; Johan Wahlstrom
PromptAttack: Prompt-based Attack for Language Models via Gradient Search. (16%)Yundi Shi; Piji Li; Changchun Yin; Zhaoyang Han; Lu Zhou; Zhe Liu
Federated Zero-Shot Learning for Visual Recognition. (2%)Zhi Chen; Yadan Luo; Sen Wang; Jingjing Li; Zi Huang
Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training. (2%)Derek Everett; Andre T. Nguyen; Luke E. Richards; Edward Raff
2022-09-04
An Adaptive Black-box Defense against Trojan Attacks (TrojDef). (98%)Guanxiong Liu; Abdallah Khreishah; Fatima Sharadgah; Issa Khalil
Hide & Seek: Seeking the (Un)-Hidden key in Provably-Secure Logic Locking Techniques. (11%)Satwik Patnaik; Nimisha Limaye; Ozgur Sinanoglu
Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles. (1%)Ayoosh Bansal; Simon Yu; Hunmin Kim; Bo Li; Naira Hovakimyan; Marco Caccamo; Lui Sha
2022-09-02
Adversarial Color Film: Effective Physical-World Attack to DNNs. (98%)Chengyin Hu; Weiwen Shi
Impact of Scaled Image on Robustness of Deep Neural Networks. (98%)Chengyin Hu; Weiwen Shi
Property inference attack; Graph neural networks; Privacy attacks and defense; Trustworthy machine learning. (95%)Xiuling Wang; Wendy Hui Wang
Impact of Colour Variation on Robustness of Deep Neural Networks. (92%)Chengyin Hu; Weiwen Shi
Scalable Adversarial Attack Algorithms on Influence Maximization. (68%)Lichao Sun; Xiaobin Rui; Wei Chen
Are Attribute Inference Attacks Just Imputation? (31%)Bargav Jayaraman; David Evans
Explainable AI for Android Malware Detection: Towards Understanding Why the Models Perform So Well? (9%)Yue Liu; Chakkrit Tantithamthavorn; Li Li; Yepang Liu
Revisiting Outer Optimization in Adversarial Training. (5%)Ali Dabouei; Fariborz Taherkhani; Sobhan Soleymani; Nasser M. Nasrabadi
2022-09-01
Adversarial for Social Privacy: A Poisoning Strategy to Degrade User Identity Linkage. (98%)Jiangli Shao; Yongqing Wang; Boshen Shi; Hao Gao; Huawei Shen; Xueqi Cheng
Universal Fourier Attack for Time Series. (12%)Elizabeth Coda; Brad Clymer; Chance DeSmet; Yijing Watkins; Michael Girard
2022-08-31
Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning. (99%)Zhiyuan He; Yijun Yang; Pin-Yu Chen; Qiang Xu; Tsung-Yi Ho
Unrestricted Adversarial Samples Based on Non-semantic Feature Clusters Substitution. (99%)MingWei Zhou; Xiaobing Pei
Membership Inference Attacks by Exploiting Loss Trajectory. (70%)Yiyong Liu; Zhengyu Zhao; Michael Backes; Yang Zhang
Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research. (13%)Zhibo Zhang; Hussam Al Hamadi; Ernesto Damiani; Chan Yeob Yeun; Fatma Taher
Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation. (1%)JoonHo Lee; Gyemin Lee
Vulnerability of Distributed Inverter VAR Control in PV Distributed Energy System. (1%)Bo Tu; Wen-Tai Li; Chau Yuen
MA-RECON: Mask-aware deep-neural-network for robust fast MRI k-space interpolation. (1%)Nitzan Avidan; Moti Freiman
2022-08-30
A Black-Box Attack on Optical Character Recognition Systems. (99%)Samet Bayram; Kenneth Barner
Robustness and invariance properties of image classifiers. (99%)Apostolos Modas
Solving the Capsulation Attack against Backdoor-based Deep Neural Network Watermarks by Reversing Triggers. (1%)Fangqi Li; Shilin Wang; Yun Zhu
Constraining Representations Yields Models That Know What They Don't Know. (1%)Joao Monteiro; Pau Rodriguez; Pierre-Andre Noel; Issam Laradji; David Vazquez
2022-08-29
Towards Adversarial Purification using Denoising AutoEncoders. (99%)Dvij Kalaria; Aritra Hazra; Partha Pratim Chakrabarti
Reducing Certified Regression to Certified Classification for General Poisoning Attacks. (54%)Zayd Hammoudeh; Daniel Lowd
Interpreting Black-box Machine Learning Models for High Dimensional Datasets. (1%)Md. Rezaul Karim; Md. Shajalal; Alex Graß; Till Döhmen; Sisay Adugna Chala; Christian Beecks; Stefan Decker
2022-08-28
Cross-domain Cross-architecture Black-box Attacks on Fine-tuned Models with Transferred Evolutionary Strategies. (99%)Yinghua Zhang; Yangqiu Song; Kun Bai; Qiang Yang
2022-08-27
Adversarial Robustness for Tabular Data through Cost and Utility Awareness. (99%)Klim Kireev; Bogdan Kulynych; Carmela Troncoso
SA: Sliding attack for synthetic speech detection with resistance to clipping and self-splicing. (99%)Deng JiaCheng; Dong Li; Yan Diqun; Wang Rangding; Zeng Jiaming
TrojViT: Trojan Insertion in Vision Transformers. (15%)Mengxin Zheng; Qian Lou; Lei Jiang
Overparameterized (robust) models from computational constraints. (13%)Sanjam Garg; Somesh Jha; Saeed Mahloujifar; Mohammad Mahmoody; Mingyuan Wang
RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems. (1%)Emna Baccour; Aiman Erbad; Amr Mohamed; Mounir Hamdi; Mohsen Guizani
2022-08-26
What Does the Gradient Tell When Attacking the Graph Structure. (69%)Zihan Liu; Ge Wang; Yun Luo; Stan Z. Li
Network-Level Adversaries in Federated Learning. (54%)Giorgio Severi; Matthew Jagielski; Gökberk Yar; Yuxuan Wang; Alina Oprea; Cristina Nita-Rotaru
ATTRITION: Attacking Static Hardware Trojan Detection Techniques Using Reinforcement Learning. (45%)Vasudev Gohil; Hao Guo; Satwik Patnaik; Jeyavijayan Rajendran
Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training. (4%)Zihui Wu; Haichang Gao; Bingqian Zhou; Xiaoyan Guo; Shudong Zhang
2022-08-25
Semantic Preserving Adversarial Attack Generation with Autoencoder and Genetic Algorithm. (99%)Xinyi Wang; Simon Yusuf Enoch; Dong Seong Kim
SNAP: Efficient Extraction of Private Properties with Poisoning. (89%)Harsh Chaudhari; John Abascal; Alina Oprea; Matthew Jagielski; Florian Tramèr; Jonathan Ullman
FuncFooler: A Practical Black-box Attack Against Learning-based Binary Code Similarity Detection Methods. (78%)Lichen Jia; Bowen Tang; Chenggang Wu; Zhe Wang; Zihan Jiang; Yuanming Lai; Yan Kang; Ning Liu; Jingfeng Zhang
Robust Prototypical Few-Shot Organ Segmentation with Regularized Neural-ODEs. (31%)Prashant Pandey; Mustafa Chasmai; Tanuj Sur; Brejesh Lall
Calibrated Selective Classification. (15%)Adam Fisch; Tommi Jaakkola; Regina Barzilay
XDRI Attacks - and - How to Enhance Resilience of Residential Routers. (4%)Philipp Jeitner; Haya Shulman; Lucas Teichmann; Michael Waidner
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning. (1%)Haodong Zhao; Wei Du; Fangqi Li; Peixuan Li; Gongshen Liu
2022-08-24
Attacking Neural Binary Function Detection. (99%)Joshua Bundt; Michael Davinroy; Ioannis Agadakos; Alina Oprea; William Robertson
Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries. (99%)Dongbin Na; Sangwoo Ji; Jong Kim
Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps. (98%)Mohammadreza Amirian; Friedhelm Schwenker; Thilo Stadelmann
A Perturbation Resistant Transformation and Classification System for Deep Neural Networks. (98%)Nathaniel Dean; Dilip Sarkar
Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation. (92%)Qiyuan Chen; Raed Al Kontar; Maher Nouiehed; Jessie Yang; Corey Lester
Bidirectional Contrastive Split Learning for Visual Question Answering. (38%)Yuwei Sun; Hideya Ochiai
2022-08-23
Towards an Awareness of Time Series Anomaly Detection Models' Adversarial Vulnerability. (99%)Shahroz Tariq; Binh M. Le; Simon S. Woo
Adversarial Vulnerability of Temporal Feature Networks for Object Detection. (99%)Svetlana Pavlitskaya; Nikolai Polley; Michael Weber; J. Marius Zöllner
Transferability Ranking of Adversarial Examples. (99%)Mosh Levy; Yuval Elovici; Yisroel Mirsky
Auditing Membership Leakages of Multi-Exit Networks. (76%)Zheng Li; Yiyong Liu; Xinlei He; Ning Yu; Michael Backes; Yang Zhang
A Comprehensive Study of Real-Time Object Detection Networks Across Multiple Domains: A Survey. (13%)Elahe Arani; Shruthi Gowda; Ratnajit Mukherjee; Omar Magdy; Senthilkumar Kathiresan; Bahram Zonooz
Robust DNN Watermarking via Fixed Embedding Weights with Optimized Distribution. (10%)Benedetta Tondi; Andrea Costanzo; Mauro Barni
2022-08-22
Fight Fire With Fire: Reversing Skin Adversarial Examples by Multiscale Diffusive and Denoising Aggregation Mechanism. (99%)Yongwei Wang; Yuan Li; Zhiqi Shen
Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection. (98%)Simin Li; Huangxinxin Xu; Jiakai Wang; Aishan Liu; Fazhi He; Xianglong Liu; Dacheng Tao
Different Spectral Representations in Optimized Artificial Neural Networks and Brains. (93%)Richard C. Gerum; Cassidy Pirlot; Alona Fyshe; Joel Zylberberg
Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models. (87%)Xinlei He; Zheng Li; Weilin Xu; Cory Cornelius; Yang Zhang
BARReL: Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning. (86%)Eugene Bykovets; Yannick Metz; Mennatallah El-Assady; Daniel A. Keim; Joachim M. Buhmann
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN. (62%)Huy Phan; Cong Shi; Yi Xie; Tianfang Zhang; Zhuohang Li; Tianming Zhao; Jian Liu; Yan Wang; Yingying Chen; Bo Yuan
Toward Better Target Representation for Source-Free and Black-Box Domain Adaptation. (31%)Qucheng Peng; Zhengming Ding; Lingjuan Lyu; Lichao Sun; Chen Chen
Optimal Bootstrapping of PoW Blockchains. (1%)Ranvir Rana; Dimitris Karakostas; Sreeram Kannan; Aggelos Kiayias; Pramod Viswanath
2022-08-21
PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition. (99%)Jiachen Sun; Weili Nie; Zhiding Yu; Z. Morley Mao; Chaowei Xiao
Inferring Sensitive Attributes from Model Explanations. (56%)Vasisht Duddu; Antoine Boutet
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning. (10%)Kerem Ozfatura; Emre Ozfatura; Alptekin Kupcu; Deniz Gunduz
MockingBERT: A Method for Retroactively Adding Resilience to NLP Models. (4%)Jan Jezabek; Akash Singh
NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs. (1%)Yijun Tian; Chuxu Zhang; Zhichun Guo; Xiangliang Zhang; Nitesh V. Chawla
A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective. (1%)Chanwoo Park; Sangdoo Yun; Sanghyuk Chun
2022-08-20
Analyzing Adversarial Robustness of Vision Transformers against Spatial and Spectral Attacks. (86%)Gihyun Kim; Jong-Seok Lee
GAIROSCOPE: Injecting Data from Air-Gapped Computers to Nearby Gyroscopes. (33%)Mordechai Guri
Sensor Security: Current Progress, Research Challenges, and Future Roadmap. (10%)Anomadarshi Barua; Mohammad Abdullah Al Faruque
Evaluating Out-of-Distribution Detectors Through Adversarial Generation of Outliers. (5%)Sangwoong Yoon; Jinwon Choi; Yonghyeon Lee; Yung-Kyun Noh; Frank Chongwoo Park
Adversarial contamination of networks in the setting of vertex nomination: a new trimming method. (1%)Sheyda Peyman; Minh Tang; Vince Lyzinski
2022-08-19
Real-Time Robust Video Object Detection System Against Physical-World Adversarial Attacks. (99%)Husheng Han; Xing Hu; Kaidi Xu; Pucheng Dang; Ying Wang; Yongwei Zhao; Zidong Du; Qi Guo; Yanzhi Yang; Tianshi Chen
Gender Bias and Universal Substitution Adversarial Attacks on Grammatical Error Correction Systems for Automated Assessment. (92%)Vyas Raina; Mark Gales
Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models. (76%)Yulong Wang; Minghui Zhao; Shenghong Li; Xin Yuan; Wei Ni
A Novel Plug-and-Play Approach for Adversarially Robust Generalization. (61%)Deepak Maurya; Adarsh Barik; Jean Honorio
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability. (8%)Wei Huang; Xingyu Zhao; Gaojie Jin; Xiaowei Huang
UKP-SQuARE v2 Explainability and Adversarial Attacks for Trustworthy QA. (1%)Rachneet Sachdeva; Haritz Puerto; Tim Baumgärtner; Sewin Tariverdian; Hao Zhang; Kexin Wang; Hossain Shaikh Saadi; Leonardo F. R. Ribeiro; Iryna Gurevych
2022-08-18
Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries. (99%)Manaar Alam; Shubhajit Datta; Debdeep Mukhopadhyay; Arijit Mondal; Partha Pratim Chakrabarti
Enhancing Targeted Attack Transferability via Diversified Weight Pruning. (99%)Hung-Jui Wang; Yu-Yu Wu; Shang-Tse Chen
Enhancing Diffusion-Based Image Synthesis with Robust Classifier Guidance. (45%)Bahjat Kawar; Roy Ganz; Michael Elad
Reverse Engineering of Integrated Circuits: Tools and Techniques. (33%)Abhijitt Dhavlle
DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization. (10%)Anshul Nasery; Sravanti Addepalli; Praneeth Netrapalli; Prateek Jain
Discovering Bugs in Vision Models using Off-the-shelf Image Generation and Captioning. (3%)Olivia Wiles; Isabela Albuquerque; Sven Gowal
Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy. (2%)Wenqiang Ruan; Mingxin Xu; Wenjing Fang; Li Wang; Lei Wang; Weili Han
Profiler: Profile-Based Model to Detect Phishing Emails. (1%)Mariya Shmalko; Alsharif Abuadbba; Raj Gaire; Tingmin Wu; Hye-Young Paik; Surya Nepal
2022-08-17
Two Heads are Better than One: Robust Learning Meets Multi-branch Models. (99%)Dong Huang; Qingwen Bu; Yuhao Qing; Haowen Pi; Sen Wang; Heming Cui
An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks. (99%)Raz Lapid; Zvika Haramaty; Moshe Sipper
Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against Shadow-Based Adversarial Attacks. (98%)Andrew Wang; Wyatt Mayor; Ryan Smith; Gopal Nookula; Gregory Ditzler
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System. (70%)Abdur R. Shahid; Ahmed Imteaj; Peter Y. Wu; Diane A. Igoche; Tauhidul Alam
An Efficient Multi-Step Framework for Malware Packing Identification. (41%)Jong-Wouk Kim; Yang-Sae Moon; Mi-Jung Choi
On the Privacy Effect of Data Enhancement via the Lens of Memorization. (31%)Xiao Li; Qiongxiu Li; Zhanhao Hu; Xiaolin Hu
An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models. (26%)Jihyeon Hyeong; Jayoung Kim; Noseong Park; Sushil Jajodia
Efficient Detection and Filtering Systems for Distributed Training. (26%)Konstantinos Konstantinidis; Aditya Ramamoorthy
ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach. (2%)Tong Zhou; Shaolei Ren; Xiaolin Xu
DF-Captcha: A Deepfake Captcha for Preventing Fake Calls. (1%)Yisroel Mirsky
Analyzing Robustness of End-to-End Neural Models for Automatic Speech Recognition. (1%)Goutham Rajendran; Wei Zou
2022-08-16
A Context-Aware Approach for Textual Adversarial Attack through Probability Difference Guided Beam Search. (82%)Huijun Liu; Jie Yu; Shasha Li; Jun Ma; Bin Ji
Imperceptible and Robust Backdoor Attack in 3D Point Cloud. (68%)Kuofeng Gao; Jiawang Bai; Baoyuan Wu; Mengxi Ya; Shu-Tao Xia
AutoCAT: Reinforcement Learning for Automated Exploration of Cache-Timing Attacks. (13%)Mulong Luo; Wenjie Xiong; Geunbae Lee; Yueying Li; Xiaomeng Yang; Amy Zhang; Yuandong Tian; Hsien-Hsin S. Lee; G. Edward Suh
Investigating the Impact of Model Width and Density on Generalization in Presence of Label Noise. (1%)Yihao Xue; Kyle Whitecross; Baharan Mirzasoleiman
2022-08-15
Man-in-the-Middle Attack against Object Detection Systems. (96%)Han Wu; Sareh Rowlands; Johan Wahlstrom
MENLI: Robust Evaluation Metrics from Natural Language Inference. (92%)Yanran Chen; Steffen Eger
Training-Time Attacks against k-Nearest Neighbors. (2%)Ara Vartanian; Will Rosenbaum; Scott Alfeld
CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models. (1%)Chuyen Nguyen; Caleb Morgan; Sudip Mittal
2022-08-14
A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design. (99%)Jialiang Sun; Wen Yao; Tingsong Jiang; Xiaoqian Chen
Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection. (92%)Haibin Zheng; Haiyang Xiong; Haonan Ma; Guohan Huang; Jinyin Chen
InvisibiliTee: Angle-agnostic Cloaking from Person-Tracking Systems with a Tee. (92%)Yaxian Li; Bingqing Zhang; Guoping Zhao; Mingyu Zhang; Jiajun Liu; Ziwei Wang; Jirong Wen
Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning. (67%)Ashish Gupta; Tie Luo; Mao V. Ngo; Sajal K. Das
2022-08-13
Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification. (99%)Beini Xie; Heng Chang; Xin Wang; Tian Bian; Shiji Zhou; Daixin Wang; Zhiqiang Zhang; Wenwu Zhu
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks. (99%)Tian Yu Liu; Yu Yang; Baharan Mirzasoleiman
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer. (62%)Tong Wang; Yuan Yao; Feng Xu; Miao Xu; Shengwei An; Ting Wang
2022-08-12
MaskBlock: Transferable Adversarial Examples with Bayes Approach. (99%)Mingyuan Fan; Cen Chen; Ximeng Liu; Wenzhong Guo
Scale-free and Task-agnostic Attack: Generating Photo-realistic Adversarial Patterns with Patch Quilting Generator. (99%)Xiangbo Gao; Cheng Luo; Qinliang Lin; Weicheng Xie; Minmin Liu; Linlin Shen; Keerthy Kusumam; Siyang Song
Defensive Distillation based Adversarial Attacks Mitigation Method for Channel Estimation using Deep Learning Models in Next-Generation Wireless Networks. (98%)Ferhat Ozgur Catak; Murat Kuzlu; Evren Catak; Umit Cali; Ozgur Guler
Unifying Gradients to Improve Real-world Robustness for Deep Networks. (96%)Yingwen Wu; Sizhe Chen; Kun Fang; Xiaolin Huang
A Knowledge Distillation-Based Backdoor Attack in Federated Learning. (93%)Yifan Wang; Wei Fan; Keke Yang; Naji Alhusaini; Jing Li
Dropout is NOT All You Need to Prevent Gradient Leakage. (62%)Daniel Scheliga; Patrick Mäder; Marco Seeland
Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons. (2%)Mingyuan Fan; Yang Liu; Cen Chen; Ximeng Liu; Wenzhong Guo
PRIVEE: A Visual Analytic Workflow for Proactive Privacy Risk Inspection of Open Data. (2%)Kaustav Bhattacharjee; Akm Islam; Jaideep Vaidya; Aritra Dasgupta
2022-08-11
Diverse Generative Perturbations on Attention Space for Transferable Adversarial Attacks. (99%)Woo Jae Kim; Seunghoon Hong; Sung-Eui Yoon
General Cutting Planes for Bound-Propagation-Based Neural Network Verification. (68%)Huan Zhang; Shiqi Wang; Kaidi Xu; Linyi Li; Bo Li; Suman Jana; Cho-Jui Hsieh; J. Zico Kolter
On deceiving malware classification with section injection. (5%)Adeilson Antonio da Silva; Mauricio Pamplona Segundo
A Probabilistic Framework for Mutation Testing in Deep Neural Networks. (1%)Florian Tambon; Foutse Khomh; Giuliano Antoniol
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment. (1%)Jie Zhu; Leye Wang; Xiao Han
Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone. (1%)Aghiles Ait Messaoud; Sonia Ben Mokhtar; Vlad Nitu; Valerio Schiavoni
2022-08-10
Explaining Machine Learning DGA Detectors from DNS Traffic Data. (13%)Giorgio Piras; Maura Pintor; Luca Demetrio; Battista Biggio
A Sublinear Adversarial Training Algorithm. (3%)Yeqi Gao; Lianke Qin; Zhao Song; Yitan Wang
DVR: Micro-Video Recommendation Optimizing Watch-Time-Gain under Duration Bias. (1%)Yu Zheng; Chen Gao; Jingtao Ding; Lingling Yi; Depeng Jin; Yong Li; Meng Wang
2022-08-09
Adversarial Machine Learning-Based Anticipation of Threats Against Vehicle-to-Microgrid Services. (98%)Ahmed Omara; Burak Kantarci
Reducing Exploitability with Population Based Training. (67%)Pavel Czempin; Adam Gleave
Combining Stochastic Defenses to Resist Gradient Inversion: An Ablation Study. (50%)Daniel Scheliga; Patrick Mäder; Marco Seeland
Robust Machine Learning for Malware Detection over Time. (9%)Daniele Angioni; Luca Demetrio; Maura Pintor; Battista Biggio
2022-08-08
Robust and Imperceptible Black-box DNN Watermarking Based on Fourier Perturbation Analysis and Frequency Sensitivity Clustering. (75%)Yong Liu; Hanzhou Wu; Xinpeng Zhang
PerD: Perturbation Sensitivity-based Neural Trojan Detection Framework on NLP Applications. (67%)Diego Garcia-soto; Huili Chen; Farinaz Koushanfar
Adversarial robustness of VAEs through the lens of local geometry. (47%)Asif Khan; Amos Storkey
AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning. (26%)Tianxing Zhang; Hanzhou Wu; Xiaofeng Lu; Guangling Sun
Abutting Grating Illusion: Cognitive Challenge to Neural Network Models. (1%)Jinyu Fan; Yi Zeng
Testing of Machine Learning Models with Limited Samples: An Industrial Vacuum Pumping Application. (1%)Ayan Chatterjee; Bestoun S. Ahmed; Erik Hallin; Anton Engman
2022-08-07
Federated Adversarial Learning: A Framework with Convergence Analysis. (80%)Xiaoxiao Li; Zhao Song; Jiaming Yang
Are Gradients on Graph Structure Reliable in Gray-box Attacks? (13%)Zihan Liu; Yun Luo; Lirong Wu; Siyuan Li; Zicheng Liu; Stan Z. Li
2022-08-06
Blackbox Attacks via Surrogate Ensemble Search. (99%)Zikui Cai; Chengyu Song; Srikanth Krishnamurthy; Amit Roy-Chowdhury; M. Salman Asif
On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning. (22%)Congyu Fang; Hengrui Jia; Anvith Thudi; Mohammad Yaghini; Christopher A. Choquette-Choo; Natalie Dullerud; Varun Chandrasekaran; Nicolas Papernot
Preventing or Mitigating Adversarial Supply Chain Attacks; a legal analysis. (3%)Kaspar Rosager Ludvigsen; Shishir Nagaraja; Angela Daly
2022-08-05
Adversarial Robustness of MR Image Reconstruction under Realistic Perturbations. (73%)Jan Nikolas Morshuis; Sergios Gatidis; Matthias Hein; Christian F. Baumgartner
Data-free Backdoor Removal based on Channel Lipschitzness. (64%)Runkai Zheng; Rongjun Tang; Jianze Li; Li Liu
Lethal Dose Conjecture on Data Poisoning. (2%)Wenxiao Wang; Alexander Levine; Soheil Feizi
LCCDE: A Decision-Based Ensemble Framework for Intrusion Detection in The Internet of Vehicles. (1%)Li Yang; Abdallah Shami; Gary Stevens; Stephen De Rusett
Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks. (1%)Bernd Prach; Christoph H. Lampert
2022-08-04
Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification. (99%)Faris Almalik; Mohammad Yaqub; Karthik Nandakumar
2022-08-03
Spectrum Focused Frequency Adversarial Attacks for Automatic Modulation Classification. (99%)Sicheng Zhang; Jiarun Yu; Zhida Bao; Shiwen Mao; Yun Lin
Design of secure and robust cognitive system for malware detection. (99%)Sanket Shukla
A New Kind of Adversarial Example. (99%)Ali Borji
Adversarial Attacks on ASR Systems: An Overview. (98%)Xiao Zhang; Hao Tan; Xuan Huang; Denghui Zhang; Keke Tang; Zhaoquan Gu
Multiclass ASMA vs Targeted PGD Attack in Image Segmentation. (96%)Johnson Vo; Jiabao Xie; Sahil Patel
MOVE: Effective and Harmless Ownership Verification via Embedded External Features. (84%)Yiming Li; Linghui Zhu; Xiaojun Jia; Yang Bai; Yong Jiang; Shu-Tao Xia; Xiaochun Cao
Robust Graph Neural Networks using Weighted Graph Laplacian. (13%)Bharat Runwal; Vivek; Sandeep Kumar
2022-08-02
Adversarial Camouflage for Node Injection Attack on Graphs. (81%)Shuchang Tao; Qi Cao; Huawei Shen; Yunfan Wu; Liang Hou; Xueqi Cheng
Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry. (2%)Mark Penrod; Harrison Termotto; Varshini Reddy; Jiayu Yao; Finale Doshi-Velez; Weiwei Pan
SCFI: State Machine Control-Flow Hardening Against Fault Attacks. (1%)Pascal Nasahl; Martin Unterguggenberger; Rishub Nagpal; Robert Schilling; David Schrammel; Stefan Mangard
2022-08-01
GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction. (98%)Jiacheng Zhu; Jielin Qiu; Zhuolin Yang; Douglas Weber; Michael A. Rosenberg; Emerson Liu; Bo Li; Ding Zhao
Understanding Adversarial Robustness of Vision Transformers via Cauchy Problem. (81%)Zheng Wang; Wenjie Ruan
On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel. (75%)Shubhi Shukla; Manaar Alam; Sarani Bhattacharya; Debdeep Mukhopadhyay; Pabitra Mitra
Attacking Adversarial Defences by Smoothing the Loss Landscape. (26%)Panagiotis Eustratiadis; Henry Gouk; Da Li; Timothy Hospedales
2022-07-31
DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning. (99%)Mohammad Hossein Samavatian; Saikat Majumdar; Kristin Barber; Radu Teodorescu
Robust Real-World Image Super-Resolution against Adversarial Attacks. (99%)Jiutao Yue; Haofeng Li; Pengxu Wei; Guanbin Li; Liang Lin
Is current research on adversarial robustness addressing the right problem? (97%)Ali Borji
2022-07-30
enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks. (5%)Alessio Colucci; Andreas Steininger; Muhammad Shafique
CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for Energy-Efficient Low-precision Deep Convolutional Neural Networks. (2%)Muhammad Abdullah Hanif; Giuseppe Maria Sarda; Alberto Marchisio; Guido Masera; Maurizio Martina; Muhammad Shafique
2022-07-29
Robust Trajectory Prediction against Adversarial Attacks. (99%)Yulong Cao; Danfei Xu; Xinshuo Weng; Zhuoqing Mao; Anima Anandkumar; Chaowei Xiao; Marco Pavone
Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis. (56%)Tao Li; Haozhe Lei; Quanyan Zhu
2022-07-28
Pro-tuning: Unified Prompt Tuning for Vision Tasks. (1%)Xing Nie; Bolin Ni; Jianlong Chang; Gaomeng Meng; Chunlei Huo; Zhaoxiang Zhang; Shiming Xiang; Qi Tian; Chunhong Pan
2022-07-27
Look Closer to Your Enemy: Learning to Attack via Teacher-student Mimicking. (99%)Mingjie Wang; Zhiqing Tang; Sirui Li; Dingwen Xiao
Point Cloud Attacks in Graph Spectral Domain: When 3D Geometry Meets Graph Signal Processing. (96%)Daizong Liu; Wei Hu; Xin Li
Membership Inference Attacks via Adversarial Examples. (73%)Hamid Jalalzai; Elie Kadoche; Rémi Leluc; Vincent Plassier
Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips. (69%)Jiawang Bai; Kuofeng Gao; Dihong Gong; Shu-Tao Xia; Zhifeng Li; Wei Liu
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking. (47%)Abhishek Chakraborty; Daniel Xing; Yuntao Liu; Ankur Srivastava
Label-Only Membership Inference Attack against Node-Level Graph Neural Networks. (22%)Mauro Conti; Jiaxin Li; Stjepan Picek; Jing Xu
Generative Steganography Network. (1%)Ping Wei; Sheng Li; Xinpeng Zhang; Ge Luo; Zhenxing Qian; Qing Zhou
2022-07-26
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity. (99%)Martin Gubri; Maxime Cordy; Mike Papadakis; Yves Le Traon; Koushik Sen
Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception. (99%)Rui Duan; Zhe Qu; Shangqing Zhao; Leah Ding; Yao Liu; Zhuo Lu
Generative Extraction of Audio Classifiers for Speaker Identification. (73%)Tejumade Afonja; Lucas Bourtoule; Varun Chandrasekaran; Sageev Oore; Nicolas Papernot
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. (8%)Tilman Räuker; Anson Ho; Stephen Casper; Dylan Hadfield-Menell
2022-07-25
$p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations. (99%)Adam Dziedzic; Stephan Rabanser; Mohammad Yaghini; Armin Ale; Murat A. Erdogdu; Nicolas Papernot
Improving Adversarial Robustness via Mutual Information Estimation. (99%)Dawei Zhou; Nannan Wang; Xinbo Gao; Bo Han; Xiaoyu Wang; Yibing Zhan; Tongliang Liu
SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness. (99%)Jindong Gu; Hengshuang Zhao; Volker Tresp; Philip Torr
Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer. (75%)Yingyi Chen; Xi Shen; Yahui Liu; Qinghua Tao; Johan A. K. Suykens
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment. (9%)Tian Liu; Xueyang Hu; Tao Shu
Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning. (2%)Xinlei He; Hongbin Liu; Neil Zhenqiang Gong; Yang Zhang
2022-07-24
Versatile Weight Attack via Flipping Limited Bits. (86%)Jiawang Bai; Baoyuan Wu; Zhifeng Li; Shu-tao Xia
Can we achieve robustness from data alone? (82%)Nikolaos Tsilivis; Jingtong Su; Julia Kempe
Proving Common Mechanisms Shared by Twelve Methods of Boosting Adversarial Transferability. (69%)Quanshi Zhang; Xin Wang; Jie Ren; Xu Cheng; Shuyun Lin; Yisen Wang; Xiangming Zhu
Privacy Against Inference Attacks in Vertical Federated Learning. (2%)Borzoo Rassouli; Morteza Varasteh; Deniz Gunduz
Semantic-guided Multi-Mask Image Harmonization. (1%)Xuqian Ren; Yifan Liu
2022-07-22
Do Perceptually Aligned Gradients Imply Adversarial Robustness? (99%)Roy Ganz; Bahjat Kawar; Michael Elad
Provable Defense Against Geometric Transformations. (47%)Rem Yang; Jacob Laurel; Sasa Misailovic; Gagandeep Singh
Aries: Efficient Testing of Deep Neural Networks via Labeling-Free Accuracy Estimation. (41%)Qiang Hu; Yuejun Guo; Xiaofei Xie; Maxime Cordy; Lei Ma; Mike Papadakis; Yves Le Traon
Learning from Multiple Annotator Noisy Labels via Sample-wise Label Fusion. (1%)Zhengqi Gao; Fan-Keng Sun; Mingran Yang; Sucheng Ren; Zikai Xiong; Marc Engeler; Antonio Burazer; Linda Wildling; Luca Daniel; Duane S. Boning
2022-07-21
Synthetic Dataset Generation for Adversarial Machine Learning Research. (99%)Xiruo Liu; Shibani Singh; Cory Cornelius; Colin Busho; Mike Tan; Anindya Paul; Jason Martin
Careful What You Wish For: on the Extraction of Adversarially Trained Models. (99%)Kacem Khaled; Gabriela Nicolescu; Felipe Gohring de Magalhães
Rethinking Textual Adversarial Defense for Pre-trained Language Models. (99%)Jiayi Wang; Rongzhou Bao; Zhuosheng Zhang; Hai Zhao
AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance. (98%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
Knowledge-enhanced Black-box Attacks for Recommendations. (92%)Jingfan Chen; Wenqi Fan; Guanghui Zhu; Xiangyu Zhao; Chunfeng Yuan; Qing Li; Yihua Huang
Towards Efficient Adversarial Training on Vision Transformers. (92%)Boxi Wu; Jindong Gu; Zhifeng Li; Deng Cai; Xiaofei He; Wei Liu
Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation. (87%)Tong Wu; Tianhao Wang; Vikash Sehwag; Saeed Mahloujifar; Prateek Mittal
Contrastive Self-Supervised Learning Leads to Higher Adversarial Susceptibility. (83%)Rohit Gupta; Naveed Akhtar; Ajmal Mian; Mubarak Shah
Generating and Detecting True Ambiguity: A Forgotten Danger in DNN Supervision Testing. (22%)Michael Weiss; André García Gómez; Paolo Tonella
2022-07-20
Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness. (99%)Sekitoshi Kanai; Shin'ya Yamaguchi; Masanori Yamada; Hiroshi Takahashi; Kentaro Ohno; Yasutoshi Ida
Illusory Attacks: Detectability Matters in Adversarial Attacks on Sequential Decision-Makers. (98%)Tim Franzmeyer; Stephen McAleer; João F. Henriques; Jakob N. Foerster; Philip H. S. Torr; Adel Bibi; Christian Schroeder de Witt
Test-Time Adaptation via Conjugate Pseudo-labels. (10%)Sachin Goyal; Mingjie Sun; Aditi Raghunathan; Zico Kolter
Malware Triage Approach using a Task Memory based on Meta-Transfer Learning Framework. (9%)Jinting Zhu; Julian Jang-Jaccard; Ian Welch; Harith Al-Sahaf; Seyit Camtepe
A temporally and spatially local spike-based backpropagation algorithm to enable training in hardware. (1%)Anmol Biswas; Vivek Saraswat; Udayan Ganguly
2022-07-19
Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms. (99%)Linbo Liu; Youngsuk Park; Trong Nghia Hoang; Hilaf Hasson; Jun Huan
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. (41%)Zaixi Zhang; Xiaoyu Cao; Jinyuan Jia; Neil Zhenqiang Gong
Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond. (26%)Yuzheng Hu; Tianle Cai; Jinyong Shan; Shange Tang; Chaochao Cai; Ethan Song; Bo Li; Dawn Song
Assaying Out-Of-Distribution Generalization in Transfer Learning. (1%)Florian Wenzel; Andrea Dittadi; Peter Vincent Gehler; Carl-Johann Simon-Gabriel; Max Horn; Dominik Zietlow; David Kernert; Chris Russell; Thomas Brox; Bernt Schiele; Bernhard Schölkopf; Francesco Locatello
2022-07-18
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders. (99%)Zhenrui Yue; Huimin Zeng; Ziyi Kou; Lanyu Shang; Dong Wang
Prior-Guided Adversarial Initialization for Fast Adversarial Training. (99%)Xiaojun Jia; Yong Zhang; Xingxing Wei; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
Decorrelative Network Architecture for Robust Electrocardiogram Classification. (99%)Christopher Wiedeman; Ge Wang
Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence. (96%)Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov
Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations. (91%)Hashmat Shadab Malik; Shahina K Kunhimon; Muzammal Naseer; Salman Khan; Fahad Shahbaz Khan
Easy Batch Normalization. (69%)Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov
Adversarial Contrastive Learning via Asymmetric InfoNCE. (61%)Qiying Yu; Jieming Lou; Xianyuan Zhan; Qizhang Li; Wangmeng Zuo; Yang Liu; Jingjing Liu
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications. (22%)Ali Raza; Shujun Li; Kim-Phuc Tran; Ludovic Koehl
A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling. (10%)Xudong Pan; Qifan Xiao; Mi Zhang; Min Yang
Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification. (2%)Sarwan Ali; Bikram Sahoo; Alexander Zelikovskiy; Pin-Yu Chen; Murray Patterson
2022-07-17
Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal. (99%)Xinwei Liu; Jian Liu; Yang Bai; Jindong Gu; Tao Chen; Xiaojun Jia; Xiaochun Cao
Threat Model-Agnostic Adversarial Defense using Diffusion Models. (99%)Tsachi Blau; Roy Ganz; Bahjat Kawar; Alex Bronstein; Michael Elad
Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game. (96%)Xiao-Shan Gao; Shuang Liu; Lijia Yu
Automated Repair of Neural Networks. (16%)Dor Cohen; Ofer Strichman
2022-07-16
DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking. (99%)Xiangyu Yin; Wenjie Ruan; Jonathan Fieldsend
Certified Neural Network Watermarks with Randomized Smoothing. (1%)Arpit Bansal; Ping-yeh Chiang; Michael Curry; Rajiv Jain; Curtis Wigington; Varun Manjunatha; John P Dickerson; Tom Goldstein
Progress and limitations of deep networks to recognize objects in unusual poses. (1%)Amro Abbas; Stéphane Deny
MixTailor: Mixed Gradient Aggregation for Robust Learning Against Tailored Attacks. (1%)Ali Ramezani-Kebrya; Iman Tabrizian; Fartash Faghri; Petar Popovski
Exploring The Resilience of Control Execution Skips against False Data Injection Attacks. (1%)Ipsita Koley; Sunandan Adhikary; Soumyajit Dey
2022-07-15
Towards the Desirable Decision Boundary by Moderate-Margin Adversarial Training. (99%)Xiaoyu Liang; Yaguan Qian; Jianchang Huang; Xiang Ling; Bin Wang; Chunming Wu; Wassim Swaileh
CARBEN: Composite Adversarial Robustness Benchmark. (98%)Lei Hsiung; Yun-Yun Tsai; Pin-Yu Chen; Tsung-Yi Ho
Masked Spatial-Spectral Autoencoders Are Excellent Hyperspectral Defenders. (68%)Jiahao Qi; Zhiqiang Gong; Xingyue Liu; Kangcheng Bin; Chen Chen; Yongqian Li; Wei Xue; Yu Zhang; Ping Zhong
Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection. (10%)Svetlana Pavlitskaya; Bianca-Marina Codău; J. Marius Zöllner
PASS: Parameters Audit-based Secure and Fair Federated Learning Scheme against Free Rider. (5%)Jianhua Wang
3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models. (1%)Ronghui Mu; Wenjie Ruan; Leandro S. Marcolino; Qiang Ni
2022-07-14
Adversarial Examples for Model-Based Control: A Sensitivity Analysis. (98%)Po-han Li; Ufuk Topcu; Sandeep P. Chinchali
Adversarial Attacks on Monocular Pose Estimation. (98%)Hemang Chawla; Arnav Varma; Elahe Arani; Bahram Zonooz
Provably Adversarially Robust Nearest Prototype Classifiers. (83%)Václav Voráček; Matthias Hein
Improving Task-free Continual Learning by Distributionally Robust Memory Evolution. (70%)Zhenyi Wang; Li Shen; Le Fang; Qiuling Suo; Tiehang Duan; Mingchen Gao
RSD-GAN: Regularized Sobolev Defense GAN Against Speech-to-Text Adversarial Attacks. (67%)Mohammad Esmaeilpour; Nourhene Chaalia; Patrick Cardinal
Sound Randomized Smoothing in Floating-Point Arithmetics. (50%)Václav Voráček; Matthias Hein
Audio-guided Album Cover Art Generation with Genetic Algorithms. (38%)James Marien; Sam Leroux; Bart Dhoedt; Cedric De Boom
Distance Learner: Incorporating Manifold Prior to Model Training. (16%)Aditya Chetan; Nipun Kwatra
Active Data Pattern Extraction Attacks on Generative Language Models. (11%)Bargav Jayaraman; Esha Ghosh; Huseyin Inan; Melissa Chase; Sambuddha Roy; Wei Dai
Contrastive Adapters for Foundation Model Group Robustness. (1%)Michael Zhang; Christopher Ré
Lipschitz Bound Analysis of Neural Networks. (1%)Sarosij Bose
2022-07-13
Perturbation Inactivation Based Adversarial Defense for Face Recognition. (99%)Min Ren; Yuhao Zhu; Yunlong Wang; Zhenan Sun
On the Robustness of Bayesian Neural Networks to Adversarial Attacks. (93%)Luca Bortolussi; Ginevra Carbone; Luca Laurenti; Andrea Patane; Guido Sanguinetti; Matthew Wicker
Adversarially-Aware Robust Object Detector. (91%)Ziyi Dong; Pengxu Wei; Liang Lin
PIAT: Physics Informed Adversarial Training for Solving Partial Differential Equations. (15%)Simin Shekarpaz; Mohammad Azizmalayeri; Mohammad Hossein Rohban
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. (10%)Subash Neupane; Jesse Ables; William Anderson; Sudip Mittal; Shahram Rahimi; Ioana Banicescu; Maria Seale
Interactive Machine Learning: A State of the Art Review. (4%)Natnael A. Wondimu; Cédric Buche; Ubbo Visser
Sample-dependent Adaptive Temperature Scaling for Improved Calibration. (2%)Tom Joy; Francesco Pinto; Ser-Nam Lim; Philip H. S. Torr; Puneet K. Dokania
DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment. (1%)Ahmed Haj Yahmed; Houssem Ben Braiek; Foutse Khomh; Sonia Bouzidi; Rania Zaatour
2022-07-12
Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information. (99%)Jiebao Zhang; Wenhua Qian; Rencan Nie; Jinde Cao; Dan Xu
Adversarial Robustness Assessment of NeuroEvolution Approaches. (99%)Inês Valentim; Nuno Lourenço; Nuno Antunes
Frequency Domain Model Augmentation for Adversarial Attack. (99%)Yuyang Long; Qilong Zhang; Boheng Zeng; Lianli Gao; Xianglong Liu; Jian Zhang; Jingkuan Song
Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware. (92%)Luca Demetrio; Battista Biggio; Fabio Roli
Game of Trojans: A Submodular Byzantine Approach. (87%)Dinuka Sahabandu; Arezoo Rajabi; Luyao Niu; Bo Li; Bhaskar Ramasubramanian; Radha Poovendran
Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust Deep Neural Architectures. (84%)Jia Liu; Ran Cheng; Yaochu Jin
Certified Adversarial Robustness via Anisotropic Randomized Smoothing. (76%)Hanbin Hong; Yuan Hong
RelaxLoss: Defending Membership Inference Attacks without Losing Utility. (26%)Dingfan Chen; Ning Yu; Mario Fritz
Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations. (5%)Satoshi Munakata; Caterina Urban; Haruki Yokoyama; Koji Yamamoto; Kazuki Munakata
Markov Decision Process For Automatic Cyber Defense. (4%)Simon Yusuf Enoch; Dong Seong Kim
Estimating Test Performance for AI Medical Devices under Distribution Shift with Conformal Prediction. (1%)Charles Lu; Syed Rakin Ahmed; Praveer Singh; Jayashree Kalpathy-Cramer
Backdoor Attacks on Crowd Counting. (1%)Yuhua Sun; Tailai Zhang; Xingjun Ma; Pan Zhou; Jian Lou; Zichuan Xu; Xing Di; Yu Cheng; Lichao
2022-07-11
Statistical Detection of Adversarial examples in Blockchain-based Federated Forest In-vehicle Network Intrusion Detection Systems. (99%)Ibrahim Aliyu; Selinde van Engelenburg; Muhammed Bashir Muazu; Jinsul Kim; Chang Gyoon Lim
RUSH: Robust Contrastive Learning via Randomized Smoothing. (98%)Yijiang Pang; Boyang Liu; Jiayu Zhou
Physical Passive Patch Adversarial Attacks on Visual Odometry Systems. (98%)Yaniv Nemcovsky; Matan Yaakoby; Alex M. Bronstein; Chaim Baskin
Towards Effective Multi-Label Recognition Attacks via Knowledge Graph Consistency. (83%)Hassan Mahmood; Ehsan Elhamifar
Susceptibility of Continual Learning Against Adversarial Attacks. (75%)Hikmat Khan; Pir Masoom Shah; Syed Farhan Alam Zaidi; Saif ul Islam
"Why do so?" -- A Practical Perspective on Machine Learning Security. (64%)Kathrin Grosse; Lukas Bieringer; Tarek Richard Besold; Battista Biggio; Katharina Krombholz
Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches. (22%)Zhiyuan Cheng; James Liang; Hongjun Choi; Guanhong Tao; Zhiwen Cao; Dongfang Liu; Xiangyu Zhang
Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation. (1%)Zhun Zhong; Yuyang Zhao; Gim Hee Lee; Nicu Sebe
2022-07-10
One-shot Neural Backdoor Erasing via Adversarial Weight Masking. (33%)Shuwen Chai; Jinghui Chen
Hiding Your Signals: A Security Analysis of PPG-based Biometric Authentication. (4%)Lin Li; Chao Chen; Lei Pan; Yonghang Tai; Jun Zhang; Yang Xiang
2022-07-09
Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features. (98%)Taha Belkhouja; Janardhan Rao Doppa
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain. (98%)Chang Yue; Peizhuo Lv; Ruigang Liang; Kai Chen
Dynamic Time Warping based Adversarial Framework for Time-Series Domain. (97%)Taha Belkhouja; Yan Yan; Janardhan Rao Doppa
Training Robust Deep Models for Time-Series Domain: Novel Algorithms and Theoretical Analysis. (67%)Taha Belkhouja; Yan Yan; Janardhan Rao Doppa
2022-07-08
Not all broken defenses are equal: The dead angles of adversarial accuracy. (99%)Raphael Olivier; Bhiksha Raj
Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization. (99%)Deyin Liu; Lin Wu; Lingqiao Liu; Haifeng Zhao; Farid Boussaid; Mohammed Bennamoun
Defense Against Multi-target Trojan Attacks. (80%)Haripriya Harikumar; Santu Rana; Kien Do; Sunil Gupta; Wei Zong; Willy Susilo; Svetha Venkastesh
Guiding the retraining of convolutional neural networks against adversarial inputs. (80%)Francisco Durán López; Silverio Martínez-Fernández; Michael Felderer; Xavier Franch
Online Evasion Attacks on Recurrent Models:The Power of Hallucinating the Future. (68%)Byunggill Joe; Insik Shin; Jihun Hamm
Models Out of Line: A Fourier Lens on Distribution Shift Robustness. (10%)Sara Fridovich-Keil; Brian R. Bartoldson; James Diffenderfer; Bhavya Kailkhura; Peer-Timo Bremer
A law of adversarial risk, interpolation, and label noise. (1%)Daniel Paleka; Amartya Sanyal
2022-07-07
On the Relationship Between Adversarial Robustness and Decision Region in Deep Neural Network. (99%)Seongjin Park; Haedong Jeong; Giyoung Jeon; Jaesik Choi
Harnessing Out-Of-Distribution Examples via Augmenting Content and Style. (11%)Zhuo Huang; Xiaobo Xia; Li Shen; Bo Han; Mingming Gong; Chen Gong; Tongliang Liu
CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships. (5%)Rebecca Roelofs; Liting Sun; Ben Caine; Khaled S. Refaat; Ben Sapp; Scott Ettinger; Wei Chai
2022-07-06
The Weaknesses of Adversarial Camouflage in Overhead Imagery. (83%)Adam Van Etten
Adversarial Robustness of Visual Dialog. (64%)Lu Yu; Verena Rieser
Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural Networks with Power Consumption Information. (54%)Cory Merkel
When does Bias Transfer in Transfer Learning? (10%)Hadi Salman; Saachi Jain; Andrew Ilyas; Logan Engstrom; Eric Wong; Aleksander Madry
Privacy-preserving Reflection Rendering for Augmented Reality. (2%)Yiqin Zhao; Sheng Wei; Tian Guo
Not All Models Are Equal: Predicting Model Transferability in a Self-challenging Fisher Space. (1%)Wenqi Shao; Xun Zhao; Yixiao Ge; Zhaoyang Zhang; Lei Yang; Xiaogang Wang; Ying Shan; Ping Luo
2022-07-05
Query-Efficient Adversarial Attack Based on Latin Hypercube Sampling. (99%)Dan Wang; Jiayu Lin; Yuan-Gen Wang
Defending against the Label-flipping Attack in Federated Learning. (98%)Najeeb Moharram Jebreel; Josep Domingo-Ferrer; David Sánchez; Alberto Blanco-Justicia
UniCR: Universally Approximated Certified Robustness via Randomized Smoothing. (93%)Hanbin Hong; Binghui Wang; Yuan Hong
PRoA: A Probabilistic Robustness Assessment against Functional Perturbations. (92%)Tianle Zhang; Wenjie Ruan; Jonathan E. Fieldsend
Learning to Accelerate Approximate Methods for Solving Integer Programming via Early Fixing. (38%)Longkang Li; Baoyuan Wu
Robustness Analysis of Video-Language Models Against Visual and Language Perturbations. (1%)Madeline C. Schiappa; Shruti Vyas; Hamid Palangi; Yogesh S. Rawat; Vibhav Vineet
Conflicting Interactions Among Protection Mechanisms for Machine Learning Models. (1%)Sebastian Szyller; N. Asokan
PoF: Post-Training of Feature Extractor for Improving Generalization. (1%)Ikuro Sato; Ryota Yamada; Masayuki Tanaka; Nakamasa Inoue; Rei Kawakami
Class-Specific Semantic Reconstruction for Open Set Recognition. (1%)Hongzhi Huang; Yu Wang; Qinghua Hu; Ming-Ming Cheng
2022-07-04
Hessian-Free Second-Order Adversarial Examples for Adversarial Learning. (99%)Yaguan Qian; Yuqi Wang; Bin Wang; Zhaoquan Gu; Yuhan Guo; Wassim Swaileh
Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples. (98%)Giovanni Apruzzese; Rodion Vladimirov; Aliya Tastemirova; Pavel Laskov
Task-agnostic Defense against Adversarial Patch Attacks. (98%)Ke Xu; Yao Xiao; Zhaoheng Zheng; Kaijie Cai; Ram Nevatia
Large-scale Robustness Analysis of Video Action Recognition Models. (70%)Madeline C. Schiappa; Naman Biyani; Shruti Vyas; Hamid Palangi; Vibhav Vineet; Yogesh Rawat
Counterbalancing Teacher: Regularizing Batch Normalized Models for Robustness. (1%)Saeid Asgari Taghanaki; Ali Gholami; Fereshte Khani; Kristy Choi; Linh Tran; Ran Zhang; Aliasghar Khani
2022-07-03
RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries. (99%)Keshav Kasichainula; Hadi Mansourifar; Weidong Shi
Removing Batch Normalization Boosts Adversarial Training. (98%)Haotao Wang; Aston Zhang; Shuai Zheng; Xingjian Shi; Mu Li; Zhangyang Wang
Anomaly Detection with Adversarially Learned Perturbations of Latent Space. (13%)Vahid Reza Khazaie; Anthony Wong; John Taylor Jewell; Yalda Mohsenzadeh
Identifying the Context Shift between Test Benchmarks and Production Data. (1%)Matthew Groh
2022-07-02
FL-Defender: Combating Targeted Attacks in Federated Learning. (80%)Najeeb Jebreel; Josep Domingo-Ferrer
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis. (11%)Ruinan Jin; Xiaoxiao Li
PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection. (1%)Zhi Lu; Vrizlynn L. L. Thing
2022-07-01
Efficient Adversarial Training With Data Pruning. (99%)Maximilian Kaufmann; Yiren Zhao; Ilia Shumailov; Robert Mullins; Nicolas Papernot
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label. (99%)Shengshan Hu; Ziqi Zhou; Yechao Zhang; Leo Yu Zhang; Yifeng Zheng; Yuanyuan He; Hai Jin
2022-06-30
Detecting and Recovering Adversarial Examples from Extracting Non-robust and Highly Predictive Adversarial Perturbations. (99%)Mingyu Dong; Jiahao Chen; Diqun Yan; Jingxing Gao; Li Dong; Rangding Wang
Measuring Forgetting of Memorized Training Examples. (83%)Matthew Jagielski; Om Thakkar; Florian Tramèr; Daphne Ippolito; Katherine Lee; Nicholas Carlini; Eric Wallace; Shuang Song; Abhradeep Thakurta; Nicolas Papernot; Chiyuan Zhang
MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors. (80%)Federica Granese; Marine Picot; Marco Romanelli; Francisco Messina; Pablo Piantanida
Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN. (16%)Kuan Li; Yang Liu; Xiang Ao; Jianfeng Chi; Jinghua Feng; Hao Yang; Qing He
Threat Assessment in Machine Learning based Systems. (13%)Lionel Nganyewou Tidjon; Foutse Khomh
Robustness of Epinets against Distributional Shifts. (1%)Xiuyuan Lu; Ian Osband; Seyed Mohammad Asghari; Sven Gowal; Vikranth Dwaracherla; Zheng Wen; Benjamin Van Roy
ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State. (1%)Xinshao Wang; Yang Hua; Elyor Kodirov; Sankha Subhra Mukherjee; David A. Clifton; Neil M. Robertson
No Reason for No Supervision: Improved Generalization in Supervised Models. (1%)Mert Bulent Sariyildiz; Yannis Kalantidis; Karteek Alahari; Diane Larlus
Augment like there's no tomorrow: Consistently performing neural networks for medical imaging. (1%)Joona Pohjonen; Carolin Stürenberg; Atte Föhr; Reija Randen-Brady; Lassi Luomala; Jouni Lohi; Esa Pitkänen; Antti Rannikko; Tuomas Mirtti
2022-06-29
IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound. (92%)Alessandro De Palma; Rudy Bunel; Krishnamurthy Dvijotham; M. Pawan Kumar; Robert Stanforth
Adversarial Ensemble Training by Jointly Learning Label Dependencies and Member Models. (33%)Lele Wang; Bin Liu
longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks. (10%)Venelin Kovatchev; Trina Chatterjee; Venkata S Govindarajan; Jifan Chen; Eunsol Choi; Gabriella Chronis; Anubrata Das; Katrin Erk; Matthew Lease; Junyi Jessy Li; Yating Wu; Kyle Mahowald
Private Graph Extraction via Feature Explanations. (10%)Iyiola E. Olatunji; Mandeep Rathee; Thorben Funke; Megha Khosla
RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness. (2%)Francesco Pinto; Harry Yang; Ser-Nam Lim; Philip H. S. Torr; Puneet K. Dokania
2022-06-28
Increasing Confidence in Adversarial Robustness Evaluations. (99%)Roland S. Zimmermann; Wieland Brendel; Florian Tramer; Nicholas Carlini
Rethinking Adversarial Examples for Location Privacy Protection. (93%)Trung-Nghia Le; Ta Gu; Huy H. Nguyen; Isao Echizen
A Deep Learning Approach to Create DNS Amplification Attacks. (92%)Jared Mathews; Prosenjit Chatterjee; Shankar Banik; Cory Nance
On the amplification of security and privacy risks by post-hoc explanations in machine learning models. (31%)Pengrui Quan; Supriyo Chakraborty; Jeya Vikranth Jeyakumar; Mani Srivastava
How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection. (12%)Mantas Mazeika; Bo Li; David Forsyth
An Empirical Study of Challenges in Converting Deep Learning Models. (5%)Moses Openja; Amin Nikanjam; Ahmed Haj Yahmed; Foutse Khomh; Zhen Ming (Jack) Jiang
Reasoning about Moving Target Defense in Attack Modeling Formalisms. (2%)Gabriel Ballot; Vadim Malvone; Jean Leneutre; Etienne Borde
AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE. (1%)Changjie Lu; Shen Zheng; Zirui Wang; Omar Dib; Gaurav Gupta
2022-06-27
Adversarial Example Detection in Deployed Tree Ensembles. (99%)Laurens Devos; Wannes Meert; Jesse Davis
Towards Secrecy-Aware Attacks Against Trust Prediction in Signed Graphs. (38%)Yulin Zhu; Tomasz Michalak; Xiapu Luo; Kai Zhou
Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers. (15%)Georg Siedel; Silvia Vock; Andrey Morozov; Stefan Voß
Cyber Network Resilience against Self-Propagating Malware Attacks. (13%)Alesia Chernikova; Nicolò Gozzi; Simona Boboila; Priyanka Angadi; John Loughner; Matthew Wilden; Nicola Perra; Tina Eliassi-Rad; Alina Oprea
Quantification of Deep Neural Network Prediction Uncertainties for VVUQ of Machine Learning Models. (4%)Mahmoud Yaseen; Xu Wu
2022-06-26
Self-Healing Robust Neural Networks via Closed-Loop Control. (45%)Zhuotong Chen; Qianxiao Li; Zheng Zhang
De-END: Decoder-driven Watermarking Network. (1%)Han Fang; Zhaoyang Jia; Yupeng Qiu; Jiyi Zhang; Weiming Zhang; Ee-Chien Chang
2022-06-25
Empirical Evaluation of Physical Adversarial Patch Attacks Against Overhead Object Detection Models. (99%)Gavin S. Hartnett; Li Ang Zhang; Caolionn O'Connell; Andrew J. Lohn; Jair Aguirre
Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising. (99%)Sandhya Aneja; Nagender Aneja; Pg Emeroylariffion Abas; Abdul Ghani Naim
RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer. (99%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
Defending Multimodal Fusion Models against Single-Source Adversaries. (81%)Karren Yang; Wan-Yi Lin; Manash Barman; Filipe Condessa; Zico Kolter
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning. (12%)Baoyuan Wu; Hongrui Chen; Mingda Zhang; Zihao Zhu; Shaokui Wei; Danni Yuan; Chao Shen; Hongyuan Zha
Cascading Failures in Smart Grids under Random, Targeted and Adaptive Attacks. (1%)Sushmita Ruj; Arindam Pal
2022-06-24
Defending Backdoor Attacks on Vision Transformer via Patch Processing. (99%)Khoa D. Doan; Yingjie Lao; Peng Yang; Ping Li
AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail Problems. (96%)Wenzheng Hou; Qianqian Xu; Zhiyong Yang; Shilong Bao; Yuan He; Qingming Huang
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective. (92%)Mark Huasong Meng; Guangdong Bai; Sin Gee Teo; Zhe Hou; Yan Xiao; Yun Lin; Jin Song Dong
Robustness of Explanation Methods for NLP Models. (82%)Shriya Atmakuri; Tejas Chheda; Dinesh Kandula; Nishant Yadav; Taesung Lee; Hessel Tuinhof
zPROBE: Zero Peek Robustness Checks for Federated Learning. (4%)Zahra Ghodsi; Mojan Javaheripi; Nojan Sheybani; Xinqiao Zhang; Ke Huang; Farinaz Koushanfar
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems. (2%)D'Jeff Kanda Nkashama; Arian Soltani; Jean-Charles Verdier; Marc Frappier; Pierre-Martin Tardif; Froduald Kabanza
2022-06-23
Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs. (99%)Chengyin Hu; Weiwen Shi
A Framework for Understanding Model Extraction Attack and Defense. (98%)Xun Xian; Mingyi Hong; Jie Ding
Towards End-to-End Private Automatic Speaker Recognition. (76%)Francisco Teixeira; Alberto Abad; Bhiksha Raj; Isabel Trancoso
BERT Rankers are Brittle: a Study using Adversarial Document Perturbations. (75%)Yumeng Wang; Lijun Lyu; Avishek Anand
Never trust, always verify : a roadmap for Trustworthy AI? (1%)Lionel Nganyewou Tidjon; Foutse Khomh
Measuring Representational Robustness of Neural Networks Through Shared Invariances. (1%)Vedant Nanda; Till Speicher; Camila Kolling; John P. Dickerson; Krishna P. Gummadi; Adrian Weller
2022-06-22
AdvSmo: Black-box Adversarial Attack by Smoothing Linear Structure of Texture. (99%)Hui Xia; Rui Zhang; Shuliang Jiang; Zi Kang
InfoAT: Improving Adversarial Training Using the Information Bottleneck Principle. (98%)Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Robust Universal Adversarial Perturbations. (97%)Changming Xu; Gagandeep Singh
Guided Diffusion Model for Adversarial Purification from Random Noise. (68%)Quanlin Wu; Hang Ye; Yuntian Gu
Understanding the effect of sparsity on neural networks robustness. (61%)Lukas Timpl; Rahim Entezari; Hanie Sedghi; Behnam Neyshabur; Olga Saukh
Shilling Black-box Recommender Systems by Learning to Generate Fake User Profiles. (41%)Chen Lin; Si Chen; Meifang Zeng; Sheng Zhang; Min Gao; Hui Li
2022-06-21
SSMI: How to Make Objects of Interest Disappear without Accessing Object Detectors? (99%)Hui Xia; Rui Zhang; Zi Kang; Shuliang Jiang
Transferable Graph Backdoor Attack. (99%)Shuiqiao Yang; Bao Gia Doan; Paul Montague; Olivier De Vel; Tamas Abraham; Seyit Camtepe; Damith C. Ranasinghe; Salil S. Kanhere
(Certified!!) Adversarial Robustness for Free! (84%)Nicholas Carlini; Florian Tramer; Krishnamurthy (Dj) Dvijotham; J. Zico Kolter
Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems. (81%)Yanchao Sun; Ruijie Zheng; Parisa Hassanzadeh; Yongyuan Liang; Soheil Feizi; Sumitra Ganesh; Furong Huang
FlashSyn: Flash Loan Attack Synthesis via Counter Example Driven Approximation. (68%)Zhiyang Chen; Sidi Mohamed Beillahi; Fan Long
Natural Backdoor Datasets. (33%)Emily Wenger; Roma Bhattacharjee; Arjun Nitin Bhagoji; Josephine Passananti; Emilio Andere; Haitao Zheng; Ben Y. Zhao
The Privacy Onion Effect: Memorization is Relative. (22%)Nicholas Carlini; Matthew Jagielski; Nicolas Papernot; Andreas Terzis; Florian Tramer; Chiyuan Zhang
ProML: A Decentralised Platform for Provenance Management of Machine Learning Software Systems. (1%)Nguyen Khoi Tran; Bushra Sabir; M. Ali Babar; Nini Cui; Mehran Abolhasan; Justin Lipman
2022-06-20
Understanding Robust Learning through the Lens of Representation Similarities. (99%)Christian Cianfarani; Arjun Nitin Bhagoji; Vikash Sehwag; Ben Zhao; Prateek Mittal
Diversified Adversarial Attacks based on Conjugate Gradient Method. (98%)Keiichiro Yamamura; Haruki Sato; Nariaki Tateiwa; Nozomi Hata; Toru Mitsutake; Issa Oe; Hiroki Ishikura; Katsuki Fujisawa
Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum. (76%)Junlin Wu; Yevgeniy Vorobeychik
SafeBench: A Benchmarking Platform for Safety Evaluation of Autonomous Vehicles. (5%)Chejian Xu; Wenhao Ding; Weijie Lyu; Zuxin Liu; Shuai Wang; Yihan He; Hanjiang Hu; Ding Zhao; Bo Li
Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities. (1%)Julian Bitterwolf; Alexander Meinke; Maximilian Augustin; Matthias Hein
2022-06-19
On the Limitations of Stochastic Pre-processing Defenses. (99%)Yue Gao; Ilia Shumailov; Kassem Fawaz; Nicolas Papernot
Towards Adversarial Attack on Vision-Language Pre-training Models. (98%)Jiaming Zhang; Qi Yi; Jitao Sang
A Universal Adversarial Policy for Text Classifiers. (98%)Gallil Maimon; Lior Rokach
JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System. (68%)Jiaming Zhang; Qi Yi; Jitao Sang
Adversarially trained neural representations may already be as robust as corresponding biological neural representations. (31%)Chong Guo; Michael J. Lee; Guillaume Leclerc; Joel Dapello; Yug Rao; Aleksander Madry; James J. DiCarlo
2022-06-18
Demystifying the Adversarial Robustness of Random Transformation Defenses. (99%)Chawin Sitawarin; Zachary Golan-Strieb; David Wagner
On the Role of Generalization in Transferability of Adversarial Examples. (99%)Yilin Wang; Farzan Farnia
DECK: Model Hardening for Defending Pervasive Backdoors. (98%)Guanhong Tao; Yingqi Liu; Siyuan Cheng; Shengwei An; Zhuo Zhang; Qiuling Xu; Guangyu Shen; Xiangyu Zhang
Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning. (10%)Marin Matsumoto; Tsubasa Takahashi; Seng Pei Liew; Masato Oguchi
Adversarial Scrutiny of Evidentiary Statistical Software. (2%)Rediet Abebe; Moritz Hardt; Angela Jin; John Miller; Ludwig Schmidt; Rebecca Wexler
2022-06-17
Detecting Adversarial Examples in Batches -- a geometrical approach. (99%)Danush Kumar Venkatesh; Peter Steinbach
Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation. (99%)Wen Sun; Jian Jin; Weisi Lin
Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization. (99%)Deokjae Lee; Seungyong Moon; Junhyeok Lee; Hyun Oh Song
Comment on Transferability and Input Transformation with Additive Noise. (99%)Hoki Kim; Jinseong Park; Jaewook Lee
Adversarial Robustness is at Odds with Lazy Training. (98%)Yunjuan Wang; Enayat Ullah; Poorya Mianjy; Raman Arora
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection. (83%)Jinyin Chen; Chengyu Jia; Haibin Zheng; Ruoxi Chen; Chenbo Fu
RetrievalGuard: Provably Robust 1-Nearest Neighbor Image Retrieval. (81%)Yihan Wu; Hongyang Zhang; Heng Huang
The Consistency of Adversarial Training for Binary Classification. (26%)Natalie S. Frank; Jonathan Niles-Weed
Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification. (15%)Natalie S. Frank
Understanding Robust Overfitting of Adversarial Training and Beyond. (8%)Chaojian Yu; Bo Han; Li Shen; Jun Yu; Chen Gong; Mingming Gong; Tongliang Liu
2022-06-16
Adversarial Privacy Protection on Speech Enhancement. (99%)Mingyu Dong; Diqun Yan; Rangding Wang
Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge. (99%)Dingcheng Yang; Zihao Xiao; Wenjian Yu
Analysis and Extensions of Adversarial Training for Video Classification. (93%)Kaleab A. Kinfu; René Vidal
Double Sampling Randomized Smoothing. (89%)Linyi Li; Jiawei Zhang; Tao Xie; Bo Li
Adversarial Robustness of Graph-based Anomaly Detection. (76%)Yulin Zhu; Yuni Lai; Kaifa Zhao; Xiapu Luo; Mingquan Yuan; Jian Ren; Kai Zhou
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. (68%)Ganqu Cui; Lifan Yuan; Bingxiang He; Yangyi Chen; Zhiyuan Liu; Maosong Sun
Backdoor Attacks on Vision Transformers. (31%)Akshayvarun Subramanya; Aniruddha Saha; Soroush Abbasi Koohpayegani; Ajinkya Tejankar; Hamed Pirsiavash
Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey. (22%)Abhijith Sharma; Yijun Bian; Phil Munz; Apurva Narayan
Catastrophic overfitting is a bug but also a feature. (16%)Guillermo Ortiz-Jiménez; Pau de Jorge; Amartya Sanyal; Adel Bibi; Puneet K. Dokania; Pascal Frossard; Gregory Rogéz; Philip H. S. Torr
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences. (5%)Daryna Oliynyk; Rudolf Mayer; Andreas Rauber
Gradient-Based Adversarial and Out-of-Distribution Detection. (2%)Jinsol Lee; Mohit Prabhushankar; Ghassan AlRegib
"Understanding Robustness Lottery": A Comparative Visual Analysis of Neural Network Pruning Approaches. (1%)Zhimin Li; Shusen Liu; Xin Yu; Bhavya Kailkhura; Jie Cao; James Daniel Diffenderfer; Peer-Timo Bremer; Valerio Pascucci
2022-06-15
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack. (99%)Ruize Gao; Jiongxiao Wang; Kaiwen Zhou; Feng Liu; Binghui Xie; Gang Niu; Bo Han; James Cheng
Morphence-2.0: Evasion-Resilient Moving Target Defense Powered by Out-of-Distribution Detection. (99%)Abderrahmen Amich; Ata Kaboudi; Birhanu Eshete
Architectural Backdoors in Neural Networks. (83%)Mikel Bober-Irizar; Ilia Shumailov; Yiren Zhao; Robert Mullins; Nicolas Papernot
Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning. (75%)Jonah O'Brien Weiss; Tiago Alves; Sandip Kundu
Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. (74%)Tianlong Chen; Huan Zhang; Zhenyu Zhang; Shiyu Chang; Sijia Liu; Pin-Yu Chen; Zhangyang Wang
A Search-Based Testing Approach for Deep Reinforcement Learning Agents. (62%)Amirhossein Zolfagharian; Manel Abdellatif; Lionel Briand; Mojtaba Bagherzadeh; Ramesh S
Can pruning improve certified robustness of neural networks? (56%)Zhangheng Li; Tianlong Chen; Linyi Li; Bo Li; Zhangyang Wang
Improving Diversity with Adversarially Learned Transformations for Domain Generalization. (33%)Tejas Gokhale; Rushil Anirudh; Jayaraman J. Thiagarajan; Bhavya Kailkhura; Chitta Baral; Yezhou Yang
Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning. (11%)Tianlong Chen; Sijia Liu; Shiyu Chang; Lisa Amini; Zhangyang Wang
The Manifold Hypothesis for Gradient-Based Explanations. (2%)Sebastian Bordt; Uddeshya Upadhyay; Zeynep Akata; Ulrike von Luxburg
READ: Aggregating Reconstruction Error into Out-of-distribution Detection. (1%)Wenyu Jiang; Hao Cheng; Mingcai Chen; Shuai Feng; Yuxin Ge; Chongjun Wang
2022-06-14
Adversarial Vulnerability of Randomized Ensembles. (99%)Hassan Dbouk; Naresh R. Shanbhag
Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training. (99%)B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
Efficiently Training Low-Curvature Neural Networks. (92%)Suraj Srinivas; Kyle Matoba; Himabindu Lakkaraju; Francois Fleuret
Proximal Splitting Adversarial Attacks for Semantic Segmentation. (92%)Jérôme Rony; Jean-Christophe Pesquet; Ismail Ben Ayed
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective. (89%)Mathieu Serrurier; Franck Mamalet; Thomas Fel; Louis Béthune; Thibaut Boissin
Defending Observation Attacks in Deep Reinforcement Learning via Detection and Denoising. (88%)Zikang Xiong; Joe Eappen; He Zhu; Suresh Jagannathan
Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO. (86%)Javier Rando; Nasib Naimi; Thomas Baumann; Max Mathys
Turning a Curse Into a Blessing: Enabling Clean-Data-Free Defenses by Model Inversion. (68%)Si Chen; Yi Zeng; Won Park; Ruoxi Jia
Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises. (67%)Minkyu Choi; Yizhen Zhang; Kuan Han; Xiaokai Wang; Zhongming Liu
Attacks on Perception-Based Control Systems: Modeling and Fundamental Limits. (2%)Amir Khazraei; Henry Pfister; Miroslav Pajic
A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise. (1%)Jongwoo Ko; Bongsoo Yi; Se-Young Yun
A Survey on Gradient Inversion: Attacks, Defenses and Future Directions. (1%)Rui Zhang; Song Guo; Junxiao Wang; Xin Xie; Dacheng Tao
2022-06-13
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations. (99%)Kaustubh Sridhar; Souradeep Dutta; Ramneet Kaur; James Weimer; Oleg Sokolsky; Insup Lee
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale. (99%)Gaoyuan Zhang; Songtao Lu; Yihua Zhang; Xiangyi Chen; Pin-Yu Chen; Quanfu Fan; Lee Martie; Lior Horesh; Mingyi Hong; Sijia Liu
Pixel to Binary Embedding Towards Robustness for CNNs. (47%)Ikki Kishida; Hideki Nakayama
Towards Understanding Sharpness-Aware Minimization. (1%)Maksym Andriushchenko; Nicolas Flammarion
An adversarially robust data-market for spatial, crowd-sourced data. (1%)Aida Manzano Kharman; Christian Jursitzky; Quan Zhou; Pietro Ferraro; Jakub Marecek; Pierre Pinson; Robert Shorten
Efficient Human-in-the-loop System for Guiding DNNs Attention. (1%)Yi He; Xi Yang; Chia-Ming Chang; Haoran Xie; Takeo Igarashi
2022-06-12
Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation. (98%)Chengyang Ying; You Qiaoben; Xinning Zhou; Hang Su; Wenbo Ding; Jianyong Ai
Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems. (92%)Zahra Jadidi; Shantanu Pal; Nithesh Nayak K; Arawinkumaar Selvakkumar; Chih-Chia Chang; Maedeh Beheshti; Alireza Jolfaei
Darknet Traffic Classification and Adversarial Attacks. (81%)Nhien Rust-Nguyen; Mark Stamp
InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness. (26%)Shruthi Gowda; Bahram Zonooz; Elahe Arani
RSSD: Defend against Ransomware with Hardware-Isolated Network-Storage Codesign and Post-Attack Analysis. (9%)Benjamin Reidys; Peng Liu; Jian Huang
Neurotoxin: Durable Backdoors in Federated Learning. (5%)Zhengming Zhang; Ashwinee Panda; Linyue Song; Yaoqing Yang; Michael W. Mahoney; Joseph E. Gonzalez; Kannan Ramchandran; Prateek Mittal
An Efficient Method for Sample Adversarial Perturbations against Nonlinear Support Vector Machines. (4%)Wen Su; Qingna Li
2022-06-11
Improving the Adversarial Robustness of NLP Models by Information Bottleneck. (99%)Cenyuan Zhang; Xiang Zhou; Yixin Wan; Xiaoqing Zheng; Kai-Wei Chang; Cho-Jui Hsieh
Defending Adversarial Examples by Negative Correlation Ensemble. (99%)Wenjian Luo; Hongwei Zhang; Linghao Kong; Zhijian Chen; Ke Tang
NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks. (81%)Nuo Xu; Binghui Wang; Ran Ran; Wujie Wen; Parv Venkitasubramaniam
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks. (69%)Xiong Peng; Feng Liu; Jingfen Zhang; Long Lan; Junjie Ye; Tongliang Liu; Bo Han
2022-06-10
Localized adversarial artifacts for compressed sensing MRI. (76%)Rima Alaifari; Giovanni S. Alberti; Tandri Gauksson
Rethinking the Defense Against Free-rider Attack From the Perspective of Model Weight Evolving Frequency. (70%)Jinyin Chen; Mingjun Li; Tao Liu; Haibin Zheng; Yao Cheng; Changting Lin
Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning. (33%)Shenghui Li; Edith Ngai; Fanghua Ye; Li Ju; Tianru Zhang; Thiemo Voigt
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers. (9%)Nan Luo; Yuanzhang Li; Yajie Wang; Shangbo Wu; Yu-an Tan; Quanxin Zhang
Deep Leakage from Model in Federated Learning. (3%)Zihao Zhao; Mengen Luo; Wenbo Ding
Adversarial Counterfactual Environment Model Learning. (1%)Xiong-Hui Chen; Yang Yu; Zheng-Mao Zhu; Zhihua Yu; Zhenjun Chen; Chenghe Wang; Yinan Wu; Hongqiu Wu; Rong-Jun Qin; Ruijin Ding; Fangsheng Huang
2022-06-09
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models. (99%)Federico Nesti; Giulio Rossolini; Gianluca D'Amico; Alessandro Biondi; Giorgio Buttazzo
ReFace: Real-time Adversarial Attacks on Face Recognition Systems. (99%)Shehzeen Hussain; Todd Huster; Chris Mesterharm; Paarth Neekhara; Kevin An; Malhar Jere; Harshvardhan Sikka; Farinaz Koushanfar
Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks. (98%)Huishuai Zhang; Da Yu; Yiping Lu; Di He
Meet You Halfway: Explaining Deep Learning Mysteries. (92%)Oriel BenShmuel
Early Transferability of Adversarial Examples in Deep Neural Networks. (86%)Oriel BenShmuel
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing. (86%)Zhongkai Hao; Chengyang Ying; Yinpeng Dong; Hang Su; Jun Zhu; Jian Song
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. (84%)Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal Md Shoeb; Abubakar Abid; Adam Fisch; Adam R. Brown; Adam Santoro; Aditya Gupta; Adrià Garriga-Alonso; Agnieszka Kluska; Aitor Lewkowycz; Akshat Agarwal; Alethea Power; Alex Ray; Alex Warstadt; Alexander W. Kocurek; Ali Safaya; Ali Tazarv; Alice Xiang; Alicia Parrish; Allen Nie; Aman Hussain; Amanda Askell; Amanda Dsouza; Ambrose Slone; Ameet Rahane; Anantharaman S. Iyer; Anders Andreassen; Andrea Madotto; Andrea Santilli; Andreas Stuhlmüller; Andrew Dai; Andrew La; Andrew Lampinen; Andy Zou; Angela Jiang; Angelica Chen; Anh Vuong; Animesh Gupta; Anna Gottardi; Antonio Norelli; Anu Venkatesh; Arash Gholamidavoodi; Arfa Tabassum; Arul Menezes; Arun Kirubarajan; Asher Mullokandov; Ashish Sabharwal; Austin Herrick; Avia Efrat; Aykut Erdem; Ayla Karakaş; B. Ryan Roberts; Bao Sheng Loe; Barret Zoph; Bartłomiej Bojanowski; Batuhan Özyurt; Behnam Hedayatnia; Behnam Neyshabur; Benjamin Inden; Benno Stein; Berk Ekmekci; Bill Yuchen Lin; Blake Howald; Cameron Diao; Cameron Dour; Catherine Stinson; Cedrick Argueta; César Ferri Ramírez; Chandan Singh; Charles Rathkopf; Chenlin Meng; Chitta Baral; Chiyu Wu; Chris Callison-Burch; Chris Waites; Christian Voigt; Christopher D. Manning; Christopher Potts; Cindy Ramirez; Clara E. Rivera; Clemencia Siro; Colin Raffel; Courtney Ashcraft; Cristina Garbacea; Damien Sileo; Dan Garrette; Dan Hendrycks; Dan Kilman; Dan Roth; Daniel Freeman; Daniel Khashabi; Daniel Levy; Daniel Moseguí González; Danielle Perszyk; Danny Hernandez; Danqi Chen; Daphne Ippolito; Dar Gilboa; David Dohan; David Drakard; David Jurgens; Debajyoti Datta; Deep Ganguli; Denis Emelin; Denis Kleyko; Deniz Yuret; Derek Chen; Derek Tam; Dieuwke Hupkes; Diganta Misra; Dilyar Buzan; Dimitri Coelho Mollo; Diyi Yang; Dong-Ho Lee; Ekaterina Shutova; Ekin Dogus Cubuk; Elad Segal; Eleanor Hagerman; Elizabeth Barnes; Elizabeth Donoway; Ellie Pavlick; Emanuele Rodola; Emma Lam; Eric Chu; Eric Tang; Erkut Erdem; Ernie Chang; Ethan A. Chi; Ethan Dyer; Ethan Jerzak; Ethan Kim; Eunice Engefu Manyasi; Evgenii Zheltonozhskii; Fanyue Xia; Fatemeh Siar; Fernando Martínez-Plumed; Francesca Happé; Francois Chollet; Frieda Rong; Gaurav Mishra; Genta Indra Winata; Gerard de Melo; Germán Kruszewski; Giambattista Parascandolo; Giorgio Mariani; Gloria Wang; Gonzalo Jaimovitch-López; Gregor Betz; Guy Gur-Ari; Hana Galijasevic; Hannah Kim; Hannah Rashkin; Hannaneh Hajishirzi; Harsh Mehta; Hayden Bogar; Henry Shevlin; Hinrich Schütze; Hiromu Yakura; Hongming Zhang; Hugh Mee Wong; Ian Ng; Isaac Noble; Jaap Jumelet; Jack Geissinger; Jackson Kernion; Jacob Hilton; Jaehoon Lee; Jaime Fernández Fisac; James B. Simon; James Koppel; James Zheng; James Zou; Jan Kocoń; Jana Thompson; Jared Kaplan; Jarema Radom; Jascha Sohl-Dickstein; Jason Phang; Jason Wei; Jason Yosinski; Jekaterina Novikova; Jelle Bosscher; Jennifer Marsh; Jeremy Kim; Jeroen Taal; Jesse Engel; Jesujoba Alabi; Jiacheng Xu; Jiaming Song; Jillian Tang; Joan Waweru; John Burden; John Miller; John U. Balis; Jonathan Berant; Jörg Frohberg; Jos Rozen; Jose Hernandez-Orallo; Joseph Boudeman; Joseph Jones; Joshua B. Tenenbaum; Joshua S. Rule; Joyce Chua; Kamil Kanclerz; Karen Livescu; Karl Krauth; Karthik Gopalakrishnan; Katerina Ignatyeva; Katja Markert; Kaustubh D. Dhole; Kevin Gimpel; Kevin Omondi; Kory Mathewson; Kristen Chiafullo; Ksenia Shkaruta; Kumar Shridhar; Kyle McDonell; Kyle Richardson; Laria Reynolds; Leo Gao; Li Zhang; Liam Dugan; Lianhui Qin; Lidia Contreras-Ochando; Louis-Philippe Morency; Luca Moschella; Lucas Lam; Lucy Noble; Ludwig Schmidt; Luheng He; Luis Oliveros Colón; Luke Metz; Lütfi Kerem Şenel; Maarten Bosma; Maarten Sap; Maartje ter Hoeve; Maheen Farooqi; Manaal Faruqui; Mantas Mazeika; Marco Baturan; Marco Marelli; Marco Maru; Maria Jose Ramírez Quintana; Marie Tolkiehn; Mario Giulianelli; Martha Lewis; Martin Potthast; Matthew L. Leavitt; Matthias Hagen; Mátyás Schubert; Medina Orduna Baitemirova; Melody Arnaud; Melvin McElrath; Michael A. Yee; Michael Cohen; Michael Gu; Michael Ivanitskiy; Michael Starritt; Michael Strube; Michał Swędrowski; Michele Bevilacqua; Michihiro Yasunaga; Mihir Kale; Mike Cain; Mimee Xu; Mirac Suzgun; Mo Tiwari; Mohit Bansal; Moin Aminnaseri; Mor Geva; Mozhdeh Gheini; Mukund Varma T; Nanyun Peng; Nathan Chi; Nayeon Lee; Neta Gur-Ari Krakover; Nicholas Cameron; Nicholas Roberts; Nick Doiron; Nikita Nangia; Niklas Deckers; Niklas Muennighoff; Nitish Shirish Keskar; Niveditha S. Iyer; Noah Constant; Noah Fiedel; Nuan Wen; Oliver Zhang; Omar Agha; Omar Elbaghdadi; Omer Levy; Owain Evans; Pablo Antonio Moreno Casares; Parth Doshi; Pascale Fung; Paul Pu Liang; Paul Vicol; Pegah Alipoormolabashi; Peiyuan Liao; Percy Liang; Peter Chang; Peter Eckersley; Phu Mon Htut; Pinyu Hwang; Piotr Miłkowski; Piyush Patil; Pouya Pezeshkpour; Priti Oli; Qiaozhu Mei; Qing Lyu; Qinlang Chen; Rabin Banjade; Rachel Etta Rudolph; Raefer Gabriel; Rahel Habacker; Ramón Risco Delgado; Raphaël Millière; Rhythm Garg; Richard Barnes; Rif A. Saurous; Riku Arakawa; Robbe Raymaekers; Robert Frank; Rohan Sikand; Roman Novak; Roman Sitelew; Ronan LeBras; Rosanne Liu; Rowan Jacobs; Rui Zhang; Ruslan Salakhutdinov; Ryan Chi; Ryan Lee; Ryan Stovall; Ryan Teehan; Rylan Yang; Sahib Singh; Saif M. Mohammad; Sajant Anand; Sam Dillavou; Sam Shleifer; Sam Wiseman; Samuel Gruetter; Samuel R. Bowman; Samuel S. Schoenholz; Sanghyun Han; Sanjeev Kwatra; Sarah A. Rous; Sarik Ghazarian; Sayan Ghosh; Sean Casey; Sebastian Bischoff; Sebastian Gehrmann; Sebastian Schuster; Sepideh Sadeghi; Shadi Hamdan; Sharon Zhou; Shashank Srivastava; Sherry Shi; Shikhar Singh; Shima Asaadi; Shixiang Shane Gu; Shubh Pachchigar; Shubham Toshniwal; Shyam Upadhyay; Shyamolima (Shammie) Debnath; Siamak Shakeri; Simon Thormeyer; Simone Melzi; Siva Reddy; Sneha Priscilla Makini; Soo-Hwan Lee; Spencer Torene; Sriharsha Hatwar; Stanislas Dehaene; Stefan Divic; Stefano Ermon; Stella Biderman; Stephanie Lin; Stephen Prasad; Steven T. Piantadosi; Stuart M. Shieber; Summer Misherghi; Svetlana Kiritchenko; Swaroop Mishra; Tal Linzen; Tal Schuster; Tao Li; Tao Yu; Tariq Ali; Tatsu Hashimoto; Te-Lin Wu; Théo Desbordes; Theodore Rothschild; Thomas Phan; Tianle Wang; Tiberius Nkinyili; Timo Schick; Timofei Kornev; Timothy Telleen-Lawton; Titus Tunduny; Tobias Gerstenberg; Trenton Chang; Trishala Neeraj; Tushar Khot; Tyler Shultz; Uri Shaham; Vedant Misra; Vera Demberg; Victoria Nyamai; Vikas Raunak; Vinay Ramasesh; Vinay Uday Prabhu; Vishakh Padmakumar; Vivek Srikumar; William Fedus; William Saunders; William Zhang; Wout Vossen; Xiang Ren; Xiaoyu Tong; Xinran Zhao; Xinyi Wu; Xudong Shen; Yadollah Yaghoobzadeh; Yair Lakretz; Yangqiu Song; Yasaman Bahri; Yejin Choi; Yichi Yang; Yiding Hao; Yifu Chen; Yonatan Belinkov; Yu Hou; Yufang Hou; Yuntao Bai; Zachary Seid; Zhuoye Zhao; Zijian Wang; Zijie J. Wang; Zirui Wang; Ziyi Wu
Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. (41%)Tianlong Chen; Zhenyu Zhang; Sijia Liu; Yang Zhang; Shiyu Chang; Zhangyang Wang
DORA: Exploring outlier representations in Deep Neural Networks. (1%)Kirill Bykov; Mayukh Deb; Dennis Grinwald; Klaus-Robert Müller; Marina M.-C. Höhne
Membership Inference via Backdooring. (1%)Hongsheng Hu; Zoran Salcic; Gillian Dobbie; Jinjun Chen; Lichao Sun; Xuyun Zhang
2022-06-08
Wavelet Regularization Benefits Adversarial Training. (99%)Jun Yan; Huilin Yin; Xiaoyang Deng; Ziming Zhao; Wancheng Ge; Hao Zhang; Gerhard Rigoll
Latent Boundary-guided Adversarial Training. (99%)Xiaowei Zhou; Ivor W. Tsang; Jie Yin
Adversarial Text Normalization. (73%)Joanna Bitton; Maya Pavlova; Ivan Evtimov
Autoregressive Perturbations for Data Poisoning. (70%)Pedro Sandoval-Segura; Vasu Singla; Jonas Geiping; Micah Goldblum; Tom Goldstein; David W. Jacobs
Toward Certified Robustness Against Real-World Distribution Shifts. (5%)Haoze Wu; Teruhiro Tagomori; Alexander Robey; Fengjun Yang; Nikolai Matni; George Pappas; Hamed Hassani; Corina Pasareanu; Clark Barrett
Generative Adversarial Networks and Image-Based Malware Classification. (1%)Huy Nguyen; Fabio Di Troia; Genya Ishigaki; Mark Stamp
Robust Deep Ensemble Method for Real-world Image Denoising. (1%)Pengju Liu; Hongzhi Zhang; Jinghui Wang; Yuzhi Wang; Dongwei Ren; Wangmeng Zuo
2022-06-07
Fooling Explanations in Text Classifiers. (99%)Adam Ivankay; Ivan Girardi; Chiara Marchiori; Pascal Frossard
AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Yang Liu
Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Feng Wang; Jiashui Wang
Adaptive Regularization for Adversarial Training. (98%)Dongyoon Yang; Insung Kong; Yongdai Kim
Building Robust Ensembles via Margin Boosting. (83%)Dinghuai Zhang; Hongyang Zhang; Aaron Courville; Yoshua Bengio; Pradeep Ravikumar; Arun Sai Suggala
On the Permanence of Backdoors in Evolving Models. (67%)Huiying Li; Arjun Nitin Bhagoji; Yuxin Chen; Haitao Zheng; Ben Y. Zhao
Subject Membership Inference Attacks in Federated Learning. (4%)Anshuman Suri; Pallika Kanani; Virendra J. Marathe; Daniel W. Peterson
Adversarial Reprogramming Revisited. (3%)Matthias Englert; Ranko Lazic
Certifying Data-Bias Robustness in Linear Regression. (1%)Anna P. Meyer; Aws Albarghouthi; Loris D'Antoni
Parametric Chordal Sparsity for SDP-based Neural Network Verification. (1%)Anton Xue; Lars Lindemann; Rajeev Alur
Can CNNs Be More Robust Than Transformers? (1%)Zeyu Wang; Yutong Bai; Yuyin Zhou; Cihang Xie
2022-06-06
Robust Adversarial Attacks Detection based on Explainable Deep Reinforcement Learning For UAV Guidance and Planning. (99%)Thomas Hickling; Nabil Aouf; Phillippa Spencer
Fast Adversarial Training with Adaptive Step Size. (98%)Zhichao Huang; Yanbo Fan; Chen Liu; Weizhong Zhang; Yong Zhang; Mathieu Salzmann; Sabine Süsstrunk; Jue Wang
Certified Robustness in Federated Learning. (87%)Motasem Alfarra; Juan C. Pérez; Egor Shulgin; Peter Richtárik; Bernard Ghanem
Robust Image Protection Countering Cropping Manipulation. (12%)Qichao Ying; Hang Zhou; Zhenxing Qian; Sheng Li; Xinpeng Zhang
PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN Model. (3%)Xuefeng Fan; Hangyu Gui; Xiaoyi Zhou
Tackling covariate shift with node-based Bayesian neural networks. (1%)Trung Trinh; Markus Heinonen; Luigi Acerbi; Samuel Kaski
Anomaly Detection with Test Time Augmentation and Consistency Evaluation. (1%)Haowei He; Jiaye Teng; Yang Yuan
2022-06-05
Federated Adversarial Training with Transformers. (98%)Ahmed Aldahdooh; Wassim Hamidouche; Olivier Déforges
Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training. (98%)Guodong Cao; Zhibo Wang; Xiaowei Dong; Zhifei Zhang; Hengchang Guo; Zhan Qin; Kui Ren
Which models are innately best at uncertainty estimation? (1%)Ido Galil; Mohammed Dabbah; Ran El-Yaniv
2022-06-04
Soft Adversarial Training Can Retain Natural Accuracy. (76%)Abhijith Sharma; Apurva Narayan
2022-06-03
Saliency Attack: Towards Imperceptible Black-box Adversarial Attack. (99%)Zeyu Dai; Shengcai Liu; Ke Tang; Qing Li
Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis. (96%)Raphael Ettedgui; Alexandre Araujo; Rafael Pinot; Yann Chevaleyre; Jamal Atif
Evaluating Transfer-based Targeted Adversarial Perturbations against Real-World Computer Vision Systems based on Human Judgments. (92%)Zhengyu Zhao; Nga Dang; Martha Larson
A Robust Backpropagation-Free Framework for Images. (80%)Timothy Zee; Alexander G. Ororbia; Ankur Mali; Ifeoma Nwogu
Gradient Obfuscation Checklist Test Gives a False Sense of Security. (73%)Nikola Popovic; Danda Pani Paudel; Thomas Probst; Luc Van Gool
Kallima: A Clean-label Framework for Textual Backdoor Attacks. (26%)Xiaoyi Chen; Yinpeng Dong; Zeyu Sun; Shengfang Zhai; Qingni Shen; Zhonghai Wu
2022-06-02
Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
FACM: Intermediate Layer Still Retain Effective Features against Adversarial Examples. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection. (99%)Linhai Ma; Liang Liang
Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline. (99%)Junjian Li; Honglong Chen
Adversarial Laser Spot: Robust and Covert Physical Adversarial Attack to DNNs. (98%)Chengyin Hu
Adversarial Unlearning: Reducing Confidence Along Adversarial Directions. (31%)Amrith Setlur; Benjamin Eysenbach; Virginia Smith; Sergey Levine
MaxStyle: Adversarial Style Composition for Robust Medical Image Segmentation. (8%)Chen Chen; Zeju Li; Cheng Ouyang; Matt Sinclair; Wenjia Bai; Daniel Rueckert
A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection. (4%)Wei Guo; Benedetta Tondi; Mauro Barni
Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling. (1%)Jian Hu; Haowen Zhong; Junchi Yan; Shaogang Gong; Guile Wu; Fei Yang
2022-06-01
On the reversibility of adversarial attacks. (99%)Chau Yi Li; Ricardo Sánchez-Matilla; Ali Shahin Shamsabadi; Riccardo Mazzon; Andrea Cavallaro
NeuroUnlock: Unlocking the Architecture of Obfuscated Deep Neural Networks. (99%)Mahya Morid Ahmadi; Lilas Alrahis; Alessio Colucci; Ozgur Sinanoglu; Muhammad Shafique
Attack-Agnostic Adversarial Detection. (99%)Jiaxin Cheng; Mohamed Hussein; Jay Billa; Wael AbdAlmageed
On the Perils of Cascading Robust Classifiers. (98%)Ravi Mangal; Zifan Wang; Chi Zhang; Klas Leino; Corina Pasareanu; Matt Fredrikson
Anti-Forgery: Towards a Stealthy and Robust DeepFake Disruption Attack via Adversarial Perceptual-aware Perturbations. (98%)Run Wang; Ziheng Huang; Zhikai Chen; Li Liu; Jing Chen; Lina Wang
Support Vector Machines under Adversarial Label Contamination. (97%)Huang Xiao; Battista Biggio; Blaine Nelson; Han Xiao; Claudia Eckert; Fabio Roli
Defense Against Gradient Leakage Attacks via Learning to Obscure Data. (80%)Yuxuan Wan; Han Xu; Xiaorui Liu; Jie Ren; Wenqi Fan; Jiliang Tang
The robust way to stack and bag: the local Lipschitz way. (70%)Thulasi Tholeti; Sheetal Kalyani
Robustness Evaluation and Adversarial Training of an Instance Segmentation Model. (54%)Jacob Bond; Andrew Lingg
Sequential Bayesian Neural Subnetwork Ensembles. (2%)Sanket Jantre; Shrijita Bhattacharya; Nathan M. Urban; Byung-Jun Yoon; Tapabrata Maiti; Prasanna Balaprakash; Sandeep Madireddy
RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model. (1%)Hangzhi Guo; Feiran Jia; Jinghui Chen; Anna Squicciarini; Amulya Yadav
2022-05-31
Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems. (99%)Zeyan Liu; Fengjun Li; Jingqiang Lin; Zhu Li; Bo Luo
Exact Feature Collisions in Neural Networks. (95%)Utku Ozbulak; Manvel Gasparyan; Shodhan Rao; Wesley De Neve; Arnout Van Messem
CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models. (93%)Akshita Jha; Chandan K. Reddy
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences. (83%)Shang Wang; Yansong Gao; Anmin Fu; Zhi Zhang; Yuqing Zhang; Willy Susilo
Semantic Autoencoder and Its Potential Usage for Adversarial Attack. (81%)Yurui Ming; Cuihuan Du; Chin-Teng Lin
An Effective Fusion Method to Enhance the Robustness of CNN. (80%)Yating Ma; Zhichao Lian
Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models. (64%)Kaiji Lu; Anupam Datta
Generative Models with Information-Theoretic Protection Against Membership Inference Attacks. (10%)Parisa Hassanzadeh; Robert E. Tillman
Likelihood-Free Inference with Generative Neural Networks via Scoring Rule Minimization. (1%)Lorenzo Pacchiardi; Ritabrata Dutta
2022-05-30
Domain Constraints in Feature Space: Strengthening Robustness of Android Malware Detection against Realizable Adversarial Examples. (99%)Hamid Bostani; Zhuoran Liu; Zhengyu Zhao; Veelasha Moonsamy
Searching for the Essence of Adversarial Perturbations. (99%)Dennis Y. Menn; Tzu-hsun Feng; Hung-yi Lee
Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models. (99%)Songlin Yang; Wei Wang; Chenye Xu; Ziwen He; Bo Peng; Jing Dong
Guided Diffusion Model for Adversarial Purification. (99%)Jinyi Wang; Zhaoyang Lyu; Dahua Lin; Bo Dai; Hongfei Fu
Why Adversarial Training of ReLU Networks Is Difficult? (68%)Xu Cheng; Hao Zhang; Yue Xin; Wen Shen; Jie Ren; Quanshi Zhang
CalFAT: Calibrated Federated Adversarial Training with Label Skewness. (67%)Chen Chen; Yuchen Liu; Xingjun Ma; Lingjuan Lyu
Securing AI-based Healthcare Systems using Blockchain Technology: A State-of-the-Art Systematic Literature Review and Future Research Directions. (15%)Rucha Shinde; Shruti Patil; Ketan Kotecha; Vidyasagar Potdar; Ganeshsree Selvachandran; Ajith Abraham
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning. (13%)Yinglun Xu; Qi Zeng; Gagandeep Singh
White-box Membership Attack Against Machine Learning Based Retinopathy Classification. (10%)Mounia Hamidouche; Reda Bellafqira; Gwenolé Quellec; Gouenou Coatrieux
Fool SHAP with Stealthily Biased Sampling. (2%)Gabriel Laberge; Ulrich Aïvodji; Satoshi Hara; Mario Marchand; Foutse Khomh
Snoopy: A Webpage Fingerprinting Framework with Finite Query Model for Mass-Surveillance. (2%)Gargi Mitra; Prasanna Karthik Vairam; Sandip Saha; Nitin Chandrachoodan; V. Kamakoti
2022-05-29
Robust Weight Perturbation for Adversarial Training. (99%)Chaojian Yu; Bo Han; Mingming Gong; Li Shen; Shiming Ge; Bo Du; Tongliang Liu
Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks. (99%)Eyad Shtaiwi; Ahmed El Ouadrhiri; Majid Moradikia; Salma Sultana; Ahmed Abdelhadi; Zhu Han
Unfooling Perturbation-Based Post Hoc Explainers. (98%)Zachariah Carmichael; Walter J Scheirer
On the Robustness of Safe Reinforcement Learning under Observational Perturbations. (93%)Zuxin Liu; Zijian Guo; Zhepeng Cen; Huan Zhang; Jie Tan; Bo Li; Ding Zhao
Superclass Adversarial Attack. (80%)Soichiro Kumano; Hiroshi Kera; Toshihiko Yamasaki
Problem-Space Evasion Attacks in the Android OS: a Survey. (50%)Harel Berger; Chen Hajaj; Amit Dvir
Context-based Virtual Adversarial Training for Text Classification with Noisy Labels. (11%)Do-Myoung Lee; Yeachan Kim; Chang-gyun Seo
A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks. (1%)Binyan Hu; Yu Sun; A. K. Qin
2022-05-28
Contributor-Aware Defenses Against Adversarial Backdoor Attacks. (98%)Glenn Dawson; Muhammad Umer; Robi Polikar
BadDet: Backdoor Attacks on Object Detection. (92%)Shih-Han Chan; Yinpeng Dong; Jun Zhu; Xiaolu Zhang; Jun Zhou
Syntax-Guided Program Reduction for Understanding Neural Code Intelligence Models. (62%)Md Rafiqul Islam Rabin; Aftab Hussain; Mohammad Amin Alipour
2022-05-27
fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating Weather Conditions on the Camera Lens of Autonomous Systems. (96%)Alberto Marchisio; Giovanni Caramia; Maurizio Martina; Muhammad Shafique
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power. (95%)Binghui Li; Jikai Jin; Han Zhong; John E. Hopcroft; Liwei Wang
Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction. (93%)Ruochen Jiao; Xiangguo Liu; Takami Sato; Qi Alfred Chen; Qi Zhu
Defending Against Stealthy Backdoor Attacks. (73%)Sangeet Sagar; Abhinav Bhatt; Abhijith Srinivas Bidaralli
EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks. (13%)Runlin Lei; Zhen Wang; Yaliang Li; Bolin Ding; Zhewei Wei
2022-05-26
A Physical-World Adversarial Attack Against 3D Face Recognition. (99%)Yanjie Li; Yiquan Li; Bin Xiao
Transferable Adversarial Attack based on Integrated Gradients. (99%)Yi Huang; Adams Wai-Kin Kong
MALICE: Manipulation Attacks on Learned Image ComprEssion. (99%)Kang Liu; Di Wu; Yiru Wang; Dan Feng; Benjamin Tan; Siddharth Garg
Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors. (98%)Avishag Shapira; Alon Zolfi; Luca Demetrio; Battista Biggio; Asaf Shabtai
Circumventing Backdoor Defenses That Are Based on Latent Separability. (96%)Xiangyu Qi; Tinghao Xie; Yiming Li; Saeed Mahloujifar; Prateek Mittal
An Analytic Framework for Robust Training of Artificial Neural Networks. (93%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Adversarial attacks and defenses in Speaker Recognition Systems: A survey. (81%)Jiahe Lan; Rui Zhang; Zheng Yan; Jie Wang; Yu Chen; Ronghui Hou
PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. (81%)Manaar Alam; Esha Sarkar; Michail Maniatakos
BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning. (81%)Zhenting Wang; Juan Zhai; Shiqing Ma
R-HTDetector: Robust Hardware-Trojan Detection Based on Adversarial Training. (80%)Kento Hasegawa; Seira Hidano; Kohei Nozawa; Shinsaku Kiyomoto; Nozomu Togawa
BagFlip: A Certified Defense against Data Poisoning. (75%)Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Towards A Proactive ML Approach for Detecting Backdoor Poison Samples. (67%)Xiangyu Qi; Tinghao Xie; Jiachen T. Wang; Tong Wu; Saeed Mahloujifar; Prateek Mittal
Membership Inference Attack Using Self Influence Functions. (45%)Gilad Cohen; Raja Giryes
MemeTector: Enforcing deep focus for meme detection. (1%)Christos Koutlis; Manos Schinas; Symeon Papadopoulos
ES-GNN: Generalizing Graph Neural Networks Beyond Homophily with Edge Splitting. (1%)Jingwei Guo; Kaizhu Huang; Rui Zhang; Xinping Yi
2022-05-25
Surprises in adversarially-trained linear regression. (87%)Antônio H. Ribeiro; Dave Zachariah; Thomas B. Schön
BITE: Textual Backdoor Attacks with Iterative Trigger Injection. (75%)Jun Yan; Vansh Gupta; Xiang Ren
Impartial Games: A Challenge for Reinforcement Learning. (13%)Bei Zhou; Søren Riis
How explainable are adversarially-robust CNNs? (8%)Mehdi Nourelahi; Lars Kotthoff; Peijie Chen; Anh Nguyen
2022-05-24
Defending a Music Recommender Against Hubness-Based Adversarial Attacks. (99%)Katharina Hoedt; Arthur Flexer; Gerhard Widmer
Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks. (99%)Sizhe Chen; Zhehao Huang; Qinghua Tao; Yingwen Wu; Cihang Xie; Xiaolin Huang
Certified Robustness Against Natural Language Attacks by Causal Intervention. (98%)Haiteng Zhao; Chang Ma; Xinshuai Dong; Anh Tuan Luu; Zhi-Hong Deng; Hanwang Zhang
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks. (92%)Shutong Wu; Sizhe Chen; Cihang Xie; Xiaolin Huang
Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation. (64%)Xiaoguang Li; Neil Zhenqiang Gong; Ninghui Li; Wenhai Sun; Hui Li
WeDef: Weakly Supervised Backdoor Defense for Text Classification. (56%)Lesheng Jin; Zihan Wang; Jingbo Shang
Recipe2Vec: Multi-modal Recipe Representation Learning with Graph Neural Networks. (50%)Yijun Tian; Chuxu Zhang; Zhichun Guo; Yihong Ma; Ronald Metoyer; Nitesh V. Chawla
EBM Life Cycle: MCMC Strategies for Synthesis, Defense, and Density Modeling. (10%)Mitch Hill; Jonathan Mitchell; Chu Chen; Yuan Du; Mubarak Shah; Song-Chun Zhu
Comprehensive Privacy Analysis on Federated Recommender System against Attribute Inference Attacks. (9%)Shijie Zhang; Hongzhi Yin
Fast & Furious: Modelling Malware Detection as Evolving Data Streams. (2%)Fabrício Ceschin; Marcus Botacin; Heitor Murilo Gomes; Felipe Pinagé; Luiz S. Oliveira; André Grégio
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. (2%)Tianlong Chen; Zhenyu Zhang; Yihua Zhang; Shiyu Chang; Sijia Liu; Zhangyang Wang
CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing. (1%)Zhiwei Hao; Yong Luo; Zhi Wang; Han Hu; Jianping An
2022-05-23
Collaborative Adversarial Training. (98%)Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen
Alleviating Robust Overfitting of Adversarial Training With Consistency Regularization. (98%)Shudong Zhang; Haichang Gao; Tianwei Zhang; Yunyi Zhou; Zihui Wu
Learning to Ignore Adversarial Attacks. (95%)Yiming Zhang; Yangqiaoyu Zhou; Samuel Carton; Chenhao Tan
Towards a Defense against Backdoor Attacks in Continual Federated Learning. (50%)Shuaiqi Wang; Jonathan Hayase; Giulia Fanti; Sewoong Oh
Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation. (10%)Huarui He; Jie Wang; Zhanqiu Zhang; Feng Wu
RCC-GAN: Regularized Compound Conditional GAN for Large-Scale Tabular Data Synthesis. (1%)Mohammad Esmaeilpour; Nourhene Chaalia; Adel Abusitta; Francois-Xavier Devailly; Wissem Maazoun; Patrick Cardinal
2022-05-22
AutoJoin: Efficient Adversarial Training for Robust Maneuvering via Denoising Autoencoder and Joint Learning. (26%)Michael Villarreal; Bibek Poudel; Ryan Wickman; Yu Shen; Weizi Li
Robust Quantity-Aware Aggregation for Federated Learning. (13%)Jingwei Yi; Fangzhao Wu; Huishuai Zhang; Bin Zhu; Tao Qi; Guangzhong Sun; Xing Xie
Generalization ability and Vulnerabilities to adversarial perturbations: Two sides of the same coin. (10%)Jung Hoon Lee; Sujith Vijayan
2022-05-21
Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models. (99%)Shawn Shan; Wenxin Ding; Emily Wenger; Haitao Zheng; Ben Y. Zhao
Gradient Concealment: Free Lunch for Defending Adversarial Attacks. (99%)Sen Pei; Jiaxi Sun; Xiaopeng Zhang; Gaofeng Meng
Phrase-level Textual Adversarial Attack with Label Preservation. (99%)Yibin Lei; Yu Cao; Dianqi Li; Tianyi Zhou; Meng Fang; Mykola Pechenizkiy
On the Feasibility and Generality of Patch-based Adversarial Attacks on Semantic Segmentation Problems. (16%)Soma Kontar; Andras Horvath
2022-05-20
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness. (99%)Jiankai Jin; Olga Ohrimenko; Benjamin I. P. Rubinstein
Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification. (98%)Jungeum Kim; Xiao Wang
Adversarial joint attacks on legged robots. (86%)Takuto Otomo; Hiroshi Kera; Kazuhiko Kawamoto
Towards Consistency in Adversarial Classification. (82%)Laurent Meunier; Raphaël Ettedgui; Rafael Pinot; Yann Chevaleyre; Jamal Atif
Adversarial Body Shape Search for Legged Robots. (80%)Takaaki Azakami; Hiroshi Kera; Kazuhiko Kawamoto
SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. (64%)Harsh Chaudhari; Matthew Jagielski; Alina Oprea
The developmental trajectory of object recognition robustness: children are like small adults but unlike big deep neural networks. (11%)Lukas S. Huber; Robert Geirhos; Felix A. Wichmann
Vulnerability Analysis and Performance Enhancement of Authentication Protocol in Dynamic Wireless Power Transfer Systems. (10%)Tommaso Bianchi; Surudhi Asokraj; Alessandro Brighente; Mauro Conti; Radha Poovendran
Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization. (4%)Javier Del Ser; Alejandro Barredo-Arrieta; Natalia Díaz-Rodríguez; Francisco Herrera; Andreas Holzinger
2022-05-19
Focused Adversarial Attacks. (99%)Thomas Cilloni; Charles Walter; Charles Fleming
Transferable Physical Attack against Object Detection with Separable Attention. (99%)Yu Zhang; Zhiqiang Gong; Yichuang Zhang; YongQian Li; Kangcheng Bin; Jiahao Qi; Wei Xue; Ping Zhong
Gradient Aligned Attacks via a Few Queries. (99%)Xiangyuan Yang; Jie Lin; Hanlin Zhang; Xinyu Yang; Peng Zhao
On Trace of PGD-Like Adversarial Attacks. (99%)Mo Zhou; Vishal M. Patel
Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification. (98%)Leo Schwinn; Leon Bungert; An Nguyen; René Raab; Falk Pulsmeyer; Doina Precup; Björn Eskofier; Dario Zanca
Defending Against Adversarial Attacks by Energy Storage Facility. (96%)Jiawei Li; Jianxiao Wang; Lin Chen; Yang Yu
Sparse Adversarial Attack in Multi-agent Reinforcement Learning. (82%)Yizheng Hu; Zhihua Zhang
Data Valuation for Offline Reinforcement Learning. (1%)Amir Abolfazli; Gregory Palmer; Daniel Kudenko
2022-05-18
Passive Defense Against 3D Adversarial Point Clouds Through the Lens of 3D Steganalysis. (99%)Jiahao Zhu
Property Unlearning: A Defense Strategy Against Property Inference Attacks. (84%)Joshua Stock; Jens Wettlaufer; Daniel Demmler; Hannes Federrath
Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing. (81%)Ryan Feng; Somesh Jha; Atul Prakash
Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution. (56%)Zhixin Pan; Prabhat Mishra
Empirical Advocacy of Bio-inspired Models for Robust Image Recognition. (38%)Harshitha Machiraju; Oh-Hyeon Choung; Michael H. Herzog; Pascal Frossard
Mitigating Neural Network Overconfidence with Logit Normalization. (1%)Hongxin Wei; Renchunzi Xie; Hao Cheng; Lei Feng; Bo An; Yixuan Li
RandoMix: A mixed sample data augmentation method with multiple mixed modes. (1%)Xiaoliang Liu; Furao Shen; Jian Zhao; Changhai Nie
2022-05-17
Hierarchical Distribution-Aware Testing of Deep Learning. (99%)Wei Huang; Xingyu Zhao; Alec Banks; Victoria Cox; Xiaowei Huang
Bankrupting DoS Attackers Despite Uncertainty. (12%)Trisha Chakraborty; Abir Islam; Valerie King; Daniel Rayborn; Jared Saia; Maxwell Young
A two-steps approach to improve the performance of Android malware detectors. (10%)Nadia Daoudi; Kevin Allix; Tegawendé F. Bissyandé; Jacques Klein
Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability. (2%)Jinwei Xing; Takashi Nagata; Xinyun Zou; Emre Neftci; Jeffrey L. Krichmar
Recovering Private Text in Federated Learning of Language Models. (2%)Samyak Gupta; Yangsibo Huang; Zexuan Zhong; Tianyu Gao; Kai Li; Danqi Chen
Semi-Supervised Building Footprint Generation with Feature and Output Consistency Training. (1%)Qingyu Li; Yilei Shi; Xiao Xiang Zhu
2022-05-16
Attacking and Defending Deep Reinforcement Learning Policies. (99%)Chao Wang
Diffusion Models for Adversarial Purification. (99%)Weili Nie; Brandon Guo; Yujia Huang; Chaowei Xiao; Arash Vahdat; Anima Anandkumar
Robust Representation via Dynamic Feature Aggregation. (84%)Haozhe Liu; Haoqin Ji; Yuexiang Li; Nanjun He; Haoqian Wu; Feng Liu; Linlin Shen; Yefeng Zheng
Sparse Visual Counterfactual Explanations in Image Space. (83%)Valentyn Boreiko; Maximilian Augustin; Francesco Croce; Philipp Berens; Matthias Hein
On the Difficulty of Defending Self-Supervised Learning against Model Extraction. (67%)Adam Dziedzic; Nikita Dhawan; Muhammad Ahmad Kaleem; Jonas Guan; Nicolas Papernot
Transferability of Adversarial Attacks on Synthetic Speech Detection. (47%)Jiacheng Deng; Shunyi Chen; Li Dong; Diqun Yan; Rangding Wang
2022-05-15
Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks. (99%)Siddhartha Datta
Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection. (64%)Fan Wang; Adams Wai-Kin Kong
RoMFAC: A robust mean-field actor-critic reinforcement learning against adversarial perturbations on states. (62%)Ziyuan Zhou; Guanjun Liu
Automation Slicing and Testing for in-App Deep Learning Models. (1%)Hao Wu; Yuhang Gong; Xiaopeng Ke; Hanzhong Liang; Minghao Li; Fengyuan Xu; Yunxin Liu; Sheng Zhong
2022-05-14
Evaluating Membership Inference Through Adversarial Robustness. (98%)Zhaoxi Zhang; Leo Yu Zhang; Xufei Zheng; Bilal Hussain Abbasi; Shengshan Hu
Verifying Neural Networks Against Backdoor Attacks. (2%)Long H. Pham; Jun Sun
2022-05-13
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic. (98%)Hang Wang; Zhen Xiang; David J. Miller; George Kesidis
l-Leaks: Membership Inference Attacks with Logits. (41%)Shuhao Li; Yajie Wang; Yuanzhang Li; Yu-an Tan
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations. (26%)Yongjie Wang; Hangwei Qian; Chunyan Miao
Millimeter-Wave Automotive Radar Spoofing. (2%)Mihai Ordean; Flavio D. Garcia
2022-05-12
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks. (75%)Pascale Gourdeau; Varun Kanade; Marta Kwiatkowska; James Worrell
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. (61%)Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
How to Combine Membership-Inference Attacks on Multiple Updated Models. (11%)Matthew Jagielski; Stanley Wu; Alina Oprea; Jonathan Ullman; Roxana Geambasu
Infrared Invisible Clothing: Hiding from Infrared Detectors at Multiple Angles in Real World. (4%)Xiaopei Zhu; Zhanhao Hu; Siyuan Huang; Jianmin Li; Xiaolin Hu
Smooth-Reduce: Leveraging Patches for Improved Certified Robustness. (2%)Ameya Joshi; Minh Pham; Minsu Cho; Leonid Boytsov; Filipe Condessa; J. Zico Kolter; Chinmay Hegde
Stalloris: RPKI Downgrade Attack. (1%)Tomas Hlavacek; Philipp Jeitner; Donika Mirdita; Haya Shulman; Michael Waidner
2022-05-11
Injection Attacks Reloaded: Tunnelling Malicious Payloads over DNS. (1%)Philipp Jeitner; Haya Shulman
The Hijackers Guide To The Galaxy: Off-Path Taking Over Internet Resources. (1%)Tianxiang Dai; Philipp Jeitner; Haya Shulman; Michael Waidner
A Longitudinal Study of Cryptographic API: a Decade of Android Malware. (1%)Adam Janovsky; Davide Maiorca; Dominik Macko; Vashek Matyas; Giorgio Giacinto
2022-05-10
Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training. (1%)Cheng Xue; Lequan Yu; Pengfei Chen; Qi Dou; Pheng-Ann Heng
White-box Testing of NLP models with Mask Neuron Coverage. (1%)Arshdeep Sekhon; Yangfeng Ji; Matthew B. Dwyer; Yanjun Qi
2022-05-09
Btech thesis report on adversarial attack detection and purification of adverserially attacked images. (99%)Dvij Kalaria
Using Frequency Attention to Make Adversarial Patch Powerful Against Person Detector. (98%)Xiaochun Lei; Chang Lu; Zetao Jiang; Zhaoting Gong; Xiang Cai; Linjun Lu
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks. (97%)Harel Berger; Amit Dvir; Chen Hajaj; Rony Ronen
Model-Contrastive Learning for Backdoor Defense. (87%)Zhihao Yue; Jun Xia; Zhiwei Ling; Ming Hu; Ting Wang; Xian Wei; Mingsong Chen
How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? (61%)Alvin Chan; Yew-Soon Ong; Clement Tan
Federated Multi-Armed Bandits Under Byzantine Attacks. (2%)Ilker Demirel; Yigit Yildirim; Cem Tekin
Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples. (2%)Lina Lin; Hanzhou Wu
2022-05-08
Fingerprint Template Invertibility: Minutiae vs. Deep Templates. (68%)Kanishka P. Wijewardena; Steven A. Grosz; Kai Cao; Anil K. Jain
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. (22%)Jingtao Li; Adnan Siraj Rakin; Xing Chen; Zhezhi He; Deliang Fan; Chaitali Chakrabarti
VPN: Verification of Poisoning in Neural Networks. (9%)Youcheng Sun; Muhammad Usman; Divya Gopinath; Corina S. Păsăreanu
FOLPETTI: A Novel Multi-Armed Bandit Smart Attack for Wireless Networks. (4%)Emilie Bout; Alessandro Brighente; Mauro Conti; Valeria Loscri
PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning Under the Support-Query Shift. (1%)Siyang Jiang; Wei Ding; Hsi-Wen Chen; Ming-Syan Chen
2022-05-07
A Simple Yet Efficient Method for Adversarial Word-Substitute Attack. (99%)Tianle Li; Yi Yang
Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees. (92%)Binghui Wang; Youqi Li; Pan Zhou
2022-05-06
Imperceptible Backdoor Attack: From Input Space to Feature Representation. (68%)Nan Zhong; Zhenxing Qian; Xinpeng Zhang
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data. (26%)Joceline Ziegler; Bjarne Pfitzner; Heinrich Schulz; Axel Saalbach; Bert Arnrich
LPGNet: Link Private Graph Networks for Node Classification. (1%)Aashish Kolluri; Teodora Baluta; Bryan Hooi; Prateek Saxena
Unlimited Lives: Secure In-Process Rollback with Isolated Domains. (1%)Merve Gülmez; Thomas Nyman; Christoph Baumann; Jan Tobias Mühlberg
2022-05-05
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems. (99%)Gaurav Kumar Nayak; Ruchit Rawal; Rohit Lal; Himanshu Patil; Anirban Chakraborty
Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness. (78%)Dávid Szeghy; Mahmoud Aslan; Áron Fóthi; Balázs Mészáros; Zoltán Ádám Milacski; András Lőrincz
Can collaborative learning be private, robust and scalable? (61%)Dmitrii Usynin; Helena Klause; Daniel Rueckert; Georgios Kaissis
Large Scale Transfer Learning for Differentially Private Image Classification. (2%)Harsh Mehta; Abhradeep Thakurta; Alexey Kurakin; Ashok Cutkosky
Are GAN-based Morphs Threatening Face Recognition? (1%)Eklavya Sarkar; Pavel Korshunov; Laurent Colbois; Sébastien Marcel
Heterogeneous Domain Adaptation with Adversarial Neural Representation Learning: Experiments on E-Commerce and Cybersecurity. (1%)Mohammadreza Ebrahimi; Yidong Chai; Hao Helen Zhang; Hsinchun Chen
2022-05-04
Based-CE white-box adversarial attack will not work using super-fitting. (99%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Rethinking Classifier And Adversarial Attack. (98%)Youhuan Yang; Lei Sun; Leyu Dai; Song Guo; Xiuqing Mao; Xiaoqin Wang; Bayi Xu
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. (98%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Sebastiano Vascon; Werner Zellinger; Bernhard A. Moser; Alina Oprea; Battista Biggio; Marcello Pelillo; Fabio Roli
Robust Conversational Agents against Imperceptible Toxicity Triggers. (92%)Ninareh Mehrabi; Ahmad Beirami; Fred Morstatter; Aram Galstyan
Subverting Fair Image Search with Generative Adversarial Perturbations. (83%)Avijit Ghosh; Matthew Jagielski; Christo Wilson
2022-05-03
Adversarial Training for High-Stakes Reliability. (98%)Daniel M. Ziegler; Seraphina Nix; Lawrence Chan; Tim Bauman; Peter Schmidt-Nielsen; Tao Lin; Adam Scherlis; Noa Nabeshima; Ben Weinstein-Raun; Daniel de Haas; Buck Shlegeris; Nate Thomas
Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks. (96%)Jonathan Rusert; Padmini Srinivasan
On the uncertainty principle of neural networks. (3%)Jun-Jie Zhang; Dong-Xiao Zhang; Jian-Nan Chen; Long-Gang Pang
Meta-Cognition. An Inverse-Inverse Reinforcement Learning Approach for Cognitive Radars. (1%)Kunal Pattanayak; Vikram Krishnamurthy; Christopher Berry
2022-05-02
SemAttack: Natural Textual Attacks via Different Semantic Spaces. (96%)Boxin Wang; Chejian Xu; Xiangyu Liu; Yu Cheng; Bo Li
Deep-Attack over the Deep Reinforcement Learning. (93%)Yang Li; Quan Pan; Erik Cambria
Enhancing Adversarial Training with Feature Separability. (92%)Yaxin Li; Xiaorui Liu; Han Xu; Wentao Wang; Jiliang Tang
BERTops: Studying BERT Representations under a Topological Lens. (92%)Jatin Chauhan; Manohar Kaul
MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer. (83%)Shoukun Sun; Min Xian; Aleksandar Vakanski; Hossny Ghanem
Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters. (1%)Ole Christian Eidheim
2022-05-01
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction. (98%)Yong Xie; Dakuo Wang; Pin-Yu Chen; Jinjun Xiong; Sijia Liu; Sanmi Koyejo
DDDM: a Brain-Inspired Framework for Robust Classification. (76%)Xiyuan Chen; Xingyu Li; Yi Zhou; Tianming Yang
Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances. (9%)Shoujie Tong; Qingxiu Dong; Damai Dai; Yifan Song; Tianyu Liu; Baobao Chang; Zhifang Sui
A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness. (3%)Jeremiah Zhe Liu; Shreyas Padhy; Jie Ren; Zi Lin; Yeming Wen; Ghassen Jerfel; Zack Nado; Jasper Snoek; Dustin Tran; Balaji Lakshminarayanan
Adversarial Plannning. (2%)Valentin Vie; Ryan Sheatsley; Sophia Beyda; Sushrut Shringarputale; Kevin Chan; Trent Jaeger; Patrick McDaniel
2022-04-30
Optimizing One-pixel Black-box Adversarial Attacks. (82%)Tianxun Zhou; Shubhankar Agrawal; Prateek Manocha
Cracking White-box DNN Watermarks via Invariant Neuron Transforms. (26%)Yifan Yan; Xudong Pan; Yining Wang; Mi Zhang; Min Yang
Loss Function Entropy Regularization for Diverse Decision Boundaries. (1%)Chong Sue Sin
Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees. (1%)Jonathan Brophy; Zayd Hammoudeh; Daniel Lowd
2022-04-29
Adversarial attacks on an optical neural network. (92%)Shuming Jiao; Ziwei Song; Shuiying Xiang
Logically Consistent Adversarial Attacks for Soft Theorem Provers. (2%)Alexander Gaskell; Yishu Miao; Lucia Specia; Francesca Toni
Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation. (1%)Heng Zhu; Qing Ling
2022-04-28
Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations. (99%)Na Liu; Mark Dras; Wei Emma Zhang
Formulating Robustness Against Unforeseen Attacks. (99%)Sihui Dai; Saeed Mahloujifar; Prateek Mittal
Randomized Smoothing under Attack: How Good is it in Pratice? (84%)Thibault Maho; Teddy Furon; Erwan Le Merrer
Improving robustness of language models from a geometry-aware perspective. (68%)Bin Zhu; Zhaoquan Gu; Le Wang; Jinyin Chen; Qi Xuan
Mixup-based Deep Metric Learning Approaches for Incomplete Supervision. (50%)Luiz H. Buris; Daniel C. G. Pedronette; Joao P. Papa; Jurandy Almeida; Gustavo Carneiro; Fabio A. Faria
AGIC: Approximate Gradient Inversion Attack on Federated Learning. (16%)Jin Xu; Chi Hong; Jiyue Huang; Lydia Y. Chen; Jérémie Decouchant
An Online Ensemble Learning Model for Detecting Attacks in Wireless Sensor Networks. (1%)Hiba Tabbaa; Samir Ifzarne; Imad Hafidi
2022-04-27
Adversarial Fine-tune with Dynamically Regulated Adversary. (99%)Pengyue Hou; Ming Zhou; Jie Han; Petr Musilek; Xingyu Li
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame. (98%)Youngjoon Yu; Hong Joo Lee; Hakmin Lee; Yong Man Ro
An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework. (81%)Ehsan Nowroozi; Abhishek; Mohammadreza Mohammadi; Mauro Conti
2022-04-26
Boosting Adversarial Transferability of MLP-Mixer. (99%)Haoran Lyu; Yajie Wang; Yu-an Tan; Huipeng Zhou; Yuhang Zhao; Quanxin Zhang
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping. (99%)Junhao Dong; Yuan Wang; Jianhuang Lai; Xiaohua Xie
Improving the Transferability of Adversarial Examples with Restructure Embedded Patches. (99%)Huipeng Zhou; Yu-an Tan; Yajie Wang; Haoran Lyu; Shangbo Wu; Yuanzhang Li
On Fragile Features and Batch Normalization in Adversarial Training. (97%)Nils Philipp Walter; David Stutz; Bernt Schiele
Mixed Strategies for Security Games with General Defending Requirements. (75%)Rufan Bai; Haoxing Lin; Xinyu Yang; Xiaowei Wu; Minming Li; Weijia Jia
Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios. (26%)Dazhong Rong; Qinming He; Jianhai Chen
Designing Perceptual Puzzles by Differentiating Probabilistic Programs. (13%)Kartik Chandra; Tzu-Mao Li; Joshua Tenenbaum; Jonathan Ragan-Kelley
Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies. (8%)Shaltiel Eloul; Fran Silavong; Sanket Kamthe; Antonios Georgiadis; Sean J. Moran
Performance Analysis of Out-of-Distribution Detection on Trained Neural Networks. (4%)Jens Henriksson; Christian Berger; Markus Borg; Lars Tornberg; Sankar Raman Sathyamoorthy; Cristofer Englund
2022-04-25
Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks. (99%)Jiawei Zhang; Jinwei Wang; Hao Wang; Xiangyang Luo
When adversarial examples are excusable. (89%)Pieter-Jan Kindermans; Charles Staats
A Simple Structure For Building A Robust Model. (81%)Xiao Tan; JingBo Gao; Ruolin Li
Real or Virtual: A Video Conferencing Background Manipulation-Detection System. (67%)Ehsan Nowroozi; Yassine Mekdad; Mauro Conti; Simone Milani; Selcuk Uluagac; Berrin Yanikoglu
Can Rationalization Improve Robustness? (12%)Howard Chen; Jacqueline He; Karthik Narasimhan; Danqi Chen
PhysioGAN: Training High Fidelity Generative Model for Physiological Sensor Readings. (1%)Moustafa Alzantot; Luis Garcia; Mani Srivastava
VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization. (1%)Minghui Chen; Cheng Wen; Feng Zheng; Fengxiang He; Ling Shao
Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications. (1%)Han Cai; Ji Lin; Yujun Lin; Zhijian Liu; Haotian Tang; Hanrui Wang; Ligeng Zhu; Song Han
2022-04-24
A Hybrid Defense Method against Adversarial Attacks on Traffic Sign Classifiers in Autonomous Vehicles. (99%)Zadid Khan; Mashrur Chowdhury; Sakib Mahmud Khan
Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity. (81%)Marco Marchetti; Edmond S. L. Ho
2022-04-23
Smart App Attack: Hacking Deep Learning Models in Android Apps. (98%)Yujin Huang; Chunyang Chen
Towards Data-Free Model Stealing in a Hard Label Setting. (13%)Sunandini Sanyal; Sravanti Addepalli; R. Venkatesh Babu
Reinforced Causal Explainer for Graph Neural Networks. (1%)Xiang Wang; Yingxin Wu; An Zhang; Fuli Feng; Xiangnan He; Tat-Seng Chua
2022-04-22
How Sampling Impacts the Robustness of Stochastic Neural Networks. (99%)Sina Däubener; Asja Fischer
A Tale of Two Models: Constructing Evasive Attacks on Edge Models. (83%)Wei Hao; Aahil Awatramani; Jiayang Hu; Chengzhi Mao; Pin-Chun Chen; Eyal Cidon; Asaf Cidon; Junfeng Yang
Enhancing the Transferability via Feature-Momentum Adversarial Attack. (82%)Xianglong; Yuezun Li; Haipeng Qu; Junyu Dong
Data-Efficient Backdoor Attacks. (76%)Pengfei Xia; Ziqiang Li; Wei Zhang; Bin Li
2022-04-21
A Mask-Based Adversarial Defense Scheme. (99%)Weizhen Xu; Chenyi Zhang; Fangzhen Zhao; Liangda Fang
Is Neuron Coverage Needed to Make Person Detection More Robust? (98%)Svetlana Pavlitskaya; Şiyar Yıkmış; J. Marius Zöllner
Testing robustness of predictions of trained classifiers against naturally occurring perturbations. (98%)Sebastian Scher; Andreas Trügler
Adversarial Contrastive Learning by Permuting Cluster Assignments. (15%)Muntasir Wahed; Afrina Tabassum; Ismini Lourentzou
Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation. (4%)Jun Xia; Ting Wang; Jiepin Ding; Xian Wei; Mingsong Chen
Detecting Topology Attacks against Graph Neural Networks. (1%)Senrong Xu; Yuan Yao; Liangyue Li; Wei Yang; Feng Xu; Hanghang Tong
2022-04-20
Adversarial Scratches: Deployable Attacks to CNN Classifiers. (99%)Loris Giulivi; Malhar Jere; Loris Rossi; Farinaz Koushanfar; Gabriela Ciocarlie; Briland Hitaj; Giacomo Boracchi
GUARD: Graph Universal Adversarial Defense. (99%)Jintang Li; Jie Liao; Ruofan Wu; Liang Chen; Zibin Zheng; Jiawang Dan; Changhua Meng; Weiqiang Wang
Fast AdvProp. (98%)Jieru Mei; Yucheng Han; Yutong Bai; Yixiao Zhang; Yingwei Li; Xianhang Li; Alan Yuille; Cihang Xie
Case-Aware Adversarial Training. (98%)Mingyuan Fan; Yang Liu; Wenzhong Guo; Ximeng Liu; Jianhua Li
Improved Worst-Group Robustness via Classifier Retraining on Independent Splits. (1%)Thien Hang Nguyen; Hongyang R. Zhang; Huy Le Nguyen
2022-04-19
Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks. (99%)Kenneth T. Co; David Martinez-Rego; Zhongyuan Hau; Emil C. Lupu
Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems. (86%)Xugui Zhou; Maxfield Kouzel; Homa Alemzadeh
Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation. (83%)Siyu Lai; Zhen Yang; Fandong Meng; Xue Zhang; Yufeng Chen; Jinan Xu; Jie Zhou
2022-04-18
UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples. (99%)Rahim Taheri
Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge. (99%)Qun Song; Zhenyu Yan; Wenjie Luo; Rui Tan
CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval. (99%)Xunguang Wang; Yiqun Lin; Xiaomeng Li
Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors. (98%)Nyee Thoang Lim; Meng Yi Kuan; Muxin Pu; Mei Kuan Lim; Chun Yong Chong
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. (75%)Enyan Dai; Tianxiang Zhao; Huaisheng Zhu; Junjie Xu; Zhimeng Guo; Hui Liu; Jiliang Tang; Suhang Wang
CorrGAN: Input Transformation Technique Against Natural Corruptions. (70%)Mirazul Haque; Christof J. Budnik; Wei Yang
Poisons that are learned faster are more effective. (64%)Pedro Sandoval-Segura; Vasu Singla; Liam Fowl; Jonas Geiping; Micah Goldblum; David Jacobs; Tom Goldstein
2022-04-17
Residue-Based Natural Language Adversarial Attack Detection. (99%)Vyas Raina; Mark Gales
Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning. (95%)Jun Guo; Yonghong Chen; Yihang Hao; Zixin Yin; Yin Yu; Simin Li
2022-04-16
SETTI: A Self-supervised Adversarial Malware Detection Architecture in an IoT Environment. (95%)Marjan Golmaryami; Rahim Taheri; Zahra Pooranian; Mohammad Shojafar; Pei Xiao
Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case. (67%)Febrianti Wibawa; Ferhat Ozgur Catak; Salih Sarp; Murat Kuzlu; Umit Cali
2022-04-15
Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning. (92%)Mathias Lechner; Alexander Amini; Daniela Rus; Thomas A. Henzinger
2022-04-14
From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Planting Undetectable Backdoors in Machine Learning Models. (99%)Shafi Goldwasser; Michael P. Kim; Vinod Vaikuntanathan; Or Zamir
Q-TART: Quickly Training for Adversarial Robustness and in-Transferability. (50%)Madan Ravi Ganesh; Salimeh Yasaei Sekeh; Jason J. Corso
Robotic and Generative Adversarial Attacks in Offline Writer-independent Signature Verification. (41%)Jordan J. Bird
2022-04-13
Task-Driven Data Augmentation for Vision-Based Robotic Control. (96%)Shubhankar Agarwal; Sandeep P. Chinchali
Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions. (87%)Maria Rigaki; Sebastian Garcia
Defensive Patches for Robust Recognition in the Physical World. (80%)Jiakai Wang; Zixin Yin; Pengfei Hu; Aishan Liu; Renshuai Tao; Haotong Qin; Xianglong Liu; Dacheng Tao
A Novel Approach to Train Diverse Types of Language Models for Health Mention Classification of Tweets. (78%)Pervaiz Iqbal Khan; Imran Razzak; Andreas Dengel; Sheraz Ahmed
Overparameterized Linear Regression under Adversarial Attacks. (76%)Antônio H. Ribeiro; Thomas B. Schön
Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. (38%)Huming Qiu; Hua Ma; Zhi Zhang; Alsharif Abuadbba; Wei Kang; Anmin Fu; Yansong Gao
A Natural Language Processing Approach for Instruction Set Architecture Identification. (1%)Dinuka Sahabandu; Sukarno Mertoguno; Radha Poovendran
2022-04-12
Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS. (99%)Ke He; Dan Dongseong Kim; Jing Sun; Jeong Do Yoo; Young Hun Lee; Huy Kang Kim
Examining the Proximity of Adversarial Examples to Class Manifolds in Deep Networks. (98%)Štefan Pócoš; Iveta Bečková; Igor Farkaš
Toward Robust Spiking Neural Network Against Adversarial Perturbation. (98%)Ling Liang; Kaidi Xu; Xing Hu; Lei Deng; Yuan Xie
Machine Learning Security against Data Poisoning: Are We There Yet? (92%)Antonio Emanuele Cinà; Kathrin Grosse; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms. (11%)Saeed Mahloujifar; Alexandre Sablayrolles; Graham Cormode; Somesh Jha
3DeformRS: Certifying Spatial Deformations on Point Clouds. (9%)Gabriel Pérez S.; Juan C. Pérez; Motasem Alfarra; Silvio Giancola; Bernard Ghanem
2022-04-11
A Simple Approach to Adversarial Robustness in Few-shot Image Classification. (98%)Akshayvarun Subramanya; Hamed Pirsiavash
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. (92%)Yi Zeng; Minzhou Pan; Hoang Anh Just; Lingjuan Lyu; Meikang Qiu; Ruoxi Jia
Generalizing Adversarial Explanations with Grad-CAM. (84%)Tanmay Chakraborty; Utkarsh Trehan; Khawla Mallat; Jean-Luc Dugelay
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization. (83%)Jungbeom Lee; Eunji Kim; Jisoo Mok; Sungroh Yoon
Exploring the Universal Vulnerability of Prompt-based Learning Paradigm. (47%)Lei Xu; Yangyi Chen; Ganqu Cui; Hongcheng Gao; Zhiyuan Liu
medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space. (1%)Amil Dravid; Florian Schiffers; Boqing Gong; Aggelos K. Katsaggelos
2022-04-10
"That Is a Suspicious Reaction!": Interpreting Logits Variation to Detect NLP Adversarial Attacks. (88%)Edoardo Mosca; Shreyash Agarwal; Javier Rando-Ramirez; Georg Groh
Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks. (54%)Karthikeyan Nagarajan; Junde Li; Sina Sayyah Ensan; Mohammad Nasim Imtiaz Khan; Sachhidh Kannan; Swaroop Ghosh
Measuring the False Sense of Security. (26%)Carlos Gomes
2022-04-08
Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser. (99%)Sonal Joshi; Saurabh Kataria; Yiwen Shao; Piotr Zelasko; Jesus Villalba; Sanjeev Khudanpur; Najim Dehak
AdvEst: Adversarial Perturbation Estimation to Classify and Detect Adversarial Attacks against Speaker Identification. (99%)Sonal Joshi; Saurabh Kataria; Jesus Villalba; Najim Dehak
Evaluating the Adversarial Robustness for Fourier Neural Operators. (92%)Abolaji D. Adesoji; Pin-Yu Chen
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense. (87%)Shaik Mohammed Maqsood; Viveros Manuela Ceron; Addluri GowthamKrishna
An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks. (45%)Xinqiao Zhang; Huili Chen; Ke Huang; Farinaz Koushanfar
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment. (13%)Qiang Hu; Yuejun Guo; Maxime Cordy; Xiaofei Xie; Wei Ma; Mike Papadakis; Yves Le Traon
Neural Tangent Generalization Attacks. (12%)Chia-Hung Yuan; Shan-Hung Wu
Labeling-Free Comparison Testing of Deep Learning Models. (11%)Yuejun Guo; Qiang Hu; Maxime Cordy; Xiaofei Xie; Mike Papadakis; Yves Le Traon
Does Robustness on ImageNet Transfer to Downstream Tasks? (2%)Yutaro Yamada; Mayu Otani
The self-learning AI controller for adaptive power beaming with fiber-array laser transmitter system. (1%)A. M. Vorontsov; G. A. Filimonov
2022-04-07
Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings. (99%)Yuhao Mao; Chong Fu; Saizhuo Wang; Shouling Ji; Xuhong Zhang; Zhenguang Liu; Jun Zhou; Alex X. Liu; Raheem Beyah; Ting Wang
Adaptive-Gravity: A Defense Against Adversarial Samples. (99%)Ali Mirzaeian; Zhi Tian; Sai Manoj P D; Banafsheh S. Latibari; Ioannis Savidis; Houman Homayoun; Avesta Sasan
Using Multiple Self-Supervised Tasks Improves Model Robustness. (81%)Matthew Lawhon; Chengzhi Mao; Junfeng Yang
Transformer-Based Language Models for Software Vulnerability Detection: Performance, Model's Security and Platforms. (69%)Chandra Thapa; Seung Ick Jang; Muhammad Ejaz Ahmed; Seyit Camtepe; Josef Pieprzyk; Surya Nepal
Defending Active Directory by Combining Neural Network based Dynamic Program and Evolutionary Diversity Optimisation. (1%)Diksha Goel; Max Hector Ward-Graham; Aneta Neumann; Frank Neumann; Hung Nguyen; Mingyu Guo
2022-04-06
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks. (99%)Xu Han; Anmin Liu; Yifeng Xiong; Yanbo Fan; Kun He
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network. (95%)Byung-Kwan Lee; Junho Kim; Yong Man Ro
Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck. (93%)Junho Kim; Byung-Kwan Lee; Yong Man Ro
Optimization Models and Interpretations for Three Types of Adversarial Perturbations against Support Vector Machines. (68%)Wen Su; Qingna Li; Chunfeng Cui
Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems. (62%)Furkan Mumcu; Keval Doshi; Yasin Yilmaz
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures. (33%)Md Tamjid Hossain; Shahriar Badsha; Hung La; Haoting Shen; Shafkat Islam; Ibrahim Khalil; Xun Yi
2022-04-05
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning. (98%)Nilaksh Das; Duen Horng Chau
Adversarial Robustness through the Lens of Convolutional Filters. (87%)Paul Gavrikov; Janis Keuper
User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning. (2%)Tiantian Feng; Raghuveer Peri; Shrikanth Narayanan
SwapMix: Diagnosing and Regularizing the Over-Reliance on Visual Context in Visual Question Answering. (1%)Vipul Gupta; Zhuowan Li; Adam Kortylewski; Chenyu Zhang; Yingwei Li; Alan Yuille
GAIL-PT: A Generic Intelligent Penetration Testing Framework with Generative Adversarial Imitation Learning. (1%)Jinyin Chen; Shulong Hu; Haibin Zheng; Changyou Xing; Guomin Zhang
2022-04-04
DAD: Data-free Adversarial Defense at Test Time. (99%)Gaurav Kumar Nayak; Ruchit Rawal; Anirban Chakraborty
SecureSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition. (99%)Jianfei Yang; Han Zou; Lihua Xie
Experimental quantum adversarial learning with programmable superconducting qubits. (99%)Wenhui Ren; Weikang Li; Shibo Xu; Ke Wang; Wenjie Jiang; Feitong Jin; Xuhao Zhu; Jiachen Chen; Zixuan Song; Pengfei Zhang; Hang Dong; Xu Zhang; Jinfeng Deng; Yu Gao; Chuanyu Zhang; Yaozu Wu; Bing Zhang; Qiujiang Guo; Hekang Li; Zhen Wang; Jacob Biamonte; Chao Song; Dong-Ling Deng; H. Wang
PRADA: Practical Black-Box Adversarial Attacks against Neural Ranking Models. (99%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Maarten de Rijke; Yixing Fan; Xueqi Cheng
FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and Countering Deepfakes. (98%)Paarth Neekhara; Shehzeen Hussain; Xinqiao Zhang; Ke Huang; Julian McAuley; Farinaz Koushanfar
2022-04-03
Breaking the De-Pois Poisoning Defense. (98%)Alaa Anani; Mohamed Ghanem; Lotfy Abdel Khaliq
Adversarially robust segmentation models learn perceptually-aligned gradients. (16%)Pedro Sandoval-Segura
Detecting In-vehicle Intrusion via Semi-supervised Learning-based Convolutional Adversarial Autoencoders. (1%)Thien-Nu Hoang; Daehee Kim
Improving Vision Transformers by Revisiting High-frequency Components. (1%)Jiawang Bai; Li Yuan; Shu-Tao Xia; Shuicheng Yan; Zhifeng Li; Wei Liu
2022-04-02
DST: Dynamic Substitute Training for Data-free Black-box Attack. (98%)Wenxuan Wang; Xuelin Qian; Yanwei Fu; Xiangyang Xue
Adversarial Neon Beam: Robust Physical-World Adversarial Attack to DNNs. (98%)Chengyin Hu; Kalibinuer Tiliwalidi
2022-04-01
SkeleVision: Towards Adversarial Resiliency of Person Tracking with Multi-Task Learning. (47%)Nilaksh Das; Sheng-Yun Peng; Duen Horng Chau
Robust and Accurate -- Compositional Architectures for Randomized Smoothing. (31%)Miklós Z. Horváth; Mark Niklas Müller; Marc Fischer; Martin Vechev
FrequencyLowCut Pooling -- Plug & Play against Catastrophic Overfitting. (16%)Julia Grabinski; Steffen Jung; Janis Keuper; Margret Keuper
Preventing Distillation-based Attacks on Neural Network IP. (2%)Mahdieh Grailoo; Zain Ul Abideen; Mairo Leier; Samuel Pagliarini
FedRecAttack: Model Poisoning Attack to Federated Recommendation. (1%)Dazhong Rong; Shuai Ye; Ruoyan Zhao; Hon Ning Yuen; Jianhai Chen; Qinming He
2022-03-31
Improving Adversarial Transferability via Neuron Attribution-Based Attacks. (99%)Jianping Zhang; Weibin Wu; Jen-tse Huang; Yizhan Huang; Wenxuan Wang; Yuxin Su; Michael R. Lyu
Adversarial Examples in Random Neural Networks with General Activations. (98%)Andrea Montanari; Yuchen Wu
Scalable Whitebox Attacks on Tree-based Models. (96%)Giuseppe Castiglione; Gavin Ding; Masoud Hashemi; Christopher Srinivasa; Ga Wu
Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond. (86%)Yi Yu; Wenhan Yang; Yap-Peng Tan; Alex C. Kot
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. (81%)Florian Tramèr; Reza Shokri; Ayrton San Joaquin; Hoang Le; Matthew Jagielski; Sanghyun Hong; Nicholas Carlini
2022-03-30
Investigating Top-$k$ White-Box and Transferable Black-box Attack. (87%)Chaoning Zhang; Philipp Benz; Adil Karjauv; Jae Won Cho; Kang Zhang; In So Kweon
Sensor Data Validation and Driving Safety in Autonomous Driving Systems. (83%)Jindi Zhang
Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis. (56%)Yi Chang; Zhao Ren; Thanh Tam Nguyen; Wolfgang Nejdl; Björn W. Schuller
2022-03-29
Mel Frequency Spectral Domain Defenses against Adversarial Attacks on Speech Recognition Systems. (99%)Nicholas Mehlman; Anirudh Sreeram; Raghuveer Peri; Shrikanth Narayanan
Zero-Query Transfer Attacks on Context-Aware Object Detectors. (99%)Zikui Cai; Shantanu Rane; Alejandro E. Brito; Chengyu Song; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury; M. Salman Asif
Exploring Frequency Adversarial Attacks for Face Forgery Detection. (99%)Shuai Jia; Chao Ma; Taiping Yao; Bangjie Yin; Shouhong Ding; Xiaokang Yang
StyleFool: Fooling Video Classification Systems via Style Transfer. (99%)Yuxin Cao; Xi Xiao; Ruoxi Sun; Derui Wang; Minhui Xue; Sheng Wen
Recent improvements of ASR models in the face of adversarial attacks. (98%)Raphael Olivier; Bhiksha Raj
Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients. (83%)Kaidong Li; Ziming Zhang; Cuncong Zhong; Guanghui Wang
Treatment Learning Causal Transformer for Noisy Image Classification. (26%)Chao-Han Huck Yang; I-Te Danny Hung; Yi-Chieh Liu; Pin-Yu Chen
Can NMT Understand Me? Towards Perturbation-based Evaluation of NMT Models for Code Generation. (11%)Pietro Liguori; Cristina Improta; Simona De Vivo; Roberto Natella; Bojan Cukic; Domenico Cotroneo
2022-03-28
Boosting Black-Box Adversarial Attacks with Meta Learning. (99%)Junjie Fu; Jian Sun; Gang Wang
A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness. (62%)Souvik Kundu; Sairam Sundaresan; Massoud Pedram; Peter A. Beerel
Robust Unlearnable Examples: Protecting Data Against Adversarial Learning. (16%)Shaopeng Fu; Fengxiang He; Yang Liu; Li Shen; Dacheng Tao
Neurosymbolic hybrid approach to driver collision warning. (15%)Kyongsik Yun; Thomas Lu; Alexander Huyen; Patrick Hammer; Pei Wang
Attacker Attribution of Audio Deepfakes. (1%)Nicolas M. Müller; Franziska Dieckmann; Jennifer Williams
2022-03-27
Text Adversarial Purification as Defense against Adversarial Attacks. (99%)Linyang Li; Demin Song; Xipeng Qiu
Adversarial Representation Sharing: A Quantitative and Secure Collaborative Learning Framework. (8%)Jikun Chen; Feng Qiang; Na Ruan
2022-03-26
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective. (99%)Yimeng Zhang; Yuguang Yao; Jinghan Jia; Jinfeng Yi; Mingyi Hong; Shiyu Chang; Sijia Liu
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies. (99%)Zhuang Qian; Kaizhu Huang; Qiu-Feng Wang; Xu-Yao Zhang
Reverse Engineering of Imperceptible Adversarial Image Perturbations. (99%)Yifan Gong; Yuguang Yao; Yize Li; Yimeng Zhang; Xiaoming Liu; Xue Lin; Sijia Liu
Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding. (33%)Zhilu Wang; Chao Huang; Qi Zhu
A Systematic Survey of Attack Detection and Prevention in Connected and Autonomous Vehicles. (1%)Trupil Limbasiya; Ko Zheng Teng; Sudipta Chattopadhyay; Jianying Zhou
A Roadmap for Big Model. (1%)Sha Yuan; Hanyu Zhao; Shuai Zhao; Jiahong Leng; Yangxiao Liang; Xiaozhi Wang; Jifan Yu; Xin Lv; Zhou Shao; Jiaao He; Yankai Lin; Xu Han; Zhenghao Liu; Ning Ding; Yongming Rao; Yizhao Gao; Liang Zhang; Ming Ding; Cong Fang; Yisen Wang; Mingsheng Long; Jing Zhang; Yinpeng Dong; Tianyu Pang; Peng Cui; Lingxiao Huang; Zheng Liang; Huawei Shen; Hui Zhang; Quanshi Zhang; Qingxiu Dong; Zhixing Tan; Mingxuan Wang; Shuo Wang; Long Zhou; Haoran Li; Junwei Bao; Yingwei Pan; Weinan Zhang; Zhou Yu; Rui Yan; Chence Shi; Minghao Xu; Zuobai Zhang; Guoqiang Wang; Xiang Pan; Mengjie Li; Xiaoyu Chu; Zijun Yao; Fangwei Zhu; Shulin Cao; Weicheng Xue; Zixuan Ma; Zhengyan Zhang; Shengding Hu; Yujia Qin; Chaojun Xiao; Zheni Zeng; Ganqu Cui; Weize Chen; Weilin Zhao; Yuan Yao; Peng Li; Wenzhao Zheng; Wenliang Zhao; Ziyi Wang; Borui Zhang; Nanyi Fei; Anwen Hu; Zenan Ling; Haoyang Li; Boxi Cao; Xianpei Han; Weidong Zhan; Baobao Chang; Hao Sun; Jiawen Deng; Chujie Zheng; Juanzi Li; Lei Hou; Xigang Cao; Jidong Zhai; Zhiyuan Liu; Maosong Sun; Jiwen Lu; Zhiwu Lu; Qin Jin; Ruihua Song; Ji-Rong Wen; Zhouchen Lin; Liwei Wang; Hang Su; Jun Zhu; Zhifang Sui; Jiajun Zhang; Yang Liu; Xiaodong He; Minlie Huang; Jian Tang; Jie Tang
2022-03-25
Enhancing Transferability of Adversarial Examples with Spatial Momentum. (99%)Guoqiu Wang; Huanqian Yan; Xingxing Wei
Origins of Low-dimensional Adversarial Perturbations. (98%)Elvis Dohmatob; Chuan Guo; Morgane Goibert
Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness. (89%)Giulio Lovisotto; Nicole Finnie; Mauricio Munoz; Chaithanya Kumar Mummadi; Jan Hendrik Metzen
Improving Robustness of Jet Tagging Algorithms with Adversarial Training. (10%)Annika Stein; Xavier Coubez; Spandan Mondal; Andrzej Novak; Alexander Schmidt
A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training. (5%)Yifei Wang; Yisen Wang; Jiansheng Yang; Zhouchen Lin
A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration. (1%)Ramya Hebbalaguppe; Jatin Prakash; Neelabh Madan; Chetan Arora
2022-03-24
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning. (99%)Arezoo Rajabi; Bhaskar Ramasubramanian; Radha Poovendran
A Perturbation Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow. (99%)Jenny Schmalfuss; Philipp Scholze; Andrés Bruhn
NPC: Neuron Path Coverage via Characterizing Decision Logic of Deep Neural Networks. (93%)Xiaofei Xie; Tianlin Li; Jian Wang; Lei Ma; Qing Guo; Felix Juefei-Xu; Yang Liu
MERLIN -- Malware Evasion with Reinforcement LearnINg. (56%)Tony Quertier; Benjamin Marais; Stéphane Morucci; Bertrand Fournel
Repairing Group-Level Errors for DNNs Using Weighted Regularization. (13%)Ziyuan Zhong; Yuchi Tian; Conor J. Sweeney; Vicente Ordonez-Roman; Baishakhi Ray
A Manifold View of Adversarial Risk. (11%)Wenjia Zhang; Yikai Zhang; Xiaoling Hu; Mayank Goswami; Chao Chen; Dimitris Metaxas
2022-03-23
Powerful Physical Adversarial Examples Against Practical Face Recognition Systems. (99%)Inderjeet Singh; Toshinori Araki; Kazuya Kakizaki
Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation. (99%)Hanjie Chen; Yangfeng Ji
Input-specific Attention Subnetworks for Adversarial Detection. (99%)Emil Biju; Anirudh Sriram; Pratyush Kumar; Mitesh M Khapra
Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection. (69%)Liang Chen; Yong Zhang; Yibing Song; Lingqiao Liu; Jue Wang
Distort to Detect, not Affect: Detecting Stealthy Sensor Attacks with Micro-distortion. (3%)Suman Sourav; Binbin Chen
On the (Limited) Generalization of MasterFace Attacks and Its Relation to the Capacity of Face Representations. (3%)Philipp Terhörst; Florian Bierbaum; Marco Huber; Naser Damer; Florian Kirchbuchner; Kiran Raja; Arjan Kuijper
2022-03-22
Exploring High-Order Structure for Robust Graph Structure Learning. (99%)Guangqian Yang; Yibing Zhan; Jinlong Li; Baosheng Yu; Liu Liu; Fengxiang He
On Adversarial Robustness of Large-scale Audio Visual Learning. (93%)Juncheng B Li; Shuhui Qu; Xinjian Li; Po-Yao Huang; Florian Metze
On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes. (86%)Elvis Dohmatob; Alberto Bietti
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis. (78%)Yuwei Sun; Hideya Ochiai; Jun Sakuma
A Girl Has A Name, And It's ... Adversarial Authorship Attribution for Deobfuscation. (2%)Wanyue Zhai; Jonathan Rusert; Zubair Shafiq; Padmini Srinivasan
GradViT: Gradient Inversion of Vision Transformers. (1%)Ali Hatamizadeh; Hongxu Yin; Holger Roth; Wenqi Li; Jan Kautz; Daguang Xu; Pavlo Molchanov
On Robust Classification using Contractive Hamiltonian Neural ODEs. (1%)Muhammad Zakwan; Liang Xu; Giancarlo Ferrari-Trecate
2022-03-21
Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack. (92%)Chi Liu; Huajie Chen; Tianqing Zhu; Jun Zhang; Wanlei Zhou
Integrity Fingerprinting of DNN with Double Black-box Design and Verification. (10%)Shuo Wang; Sidharth Agarwal; Sharif Abuadbba; Kristen Moore; Surya Nepal; Salil Kanhere
On The Robustness of Offensive Language Classifiers. (2%)Jonathan Rusert; Zubair Shafiq; Padmini Srinivasan
Defending against Co-residence Attack in Energy-Efficient Cloud: An Optimization based Real-time Secure VM Allocation Strategy. (1%)Lu Cao; Ruiwen Li; Xiaojun Ruan; Yuhong Liu
2022-03-20
An Intermediate-level Attack Framework on The Basis of Linear Regression. (99%)Yiwen Guo; Qizhang Li; Wangmeng Zuo; Hao Chen
A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement. (99%)Yuting Yang; Pei Huang; Juan Cao; Jintao Li; Yun Lin; Jin Song Dong; Feifei Ma; Jian Zhang
Leveraging Expert Guided Adversarial Augmentation For Improving Generalization in Named Entity Recognition. (82%)Aaron Reich; Jiaao Chen; Aastha Agrawal; Yanzhe Zhang; Diyi Yang
Adversarial Parameter Attack on Deep Neural Networks. (62%)Lijia Yu; Yihan Wang; Xiao-Shan Gao
2022-03-19
Adversarial Defense via Image Denoising with Chaotic Encryption. (99%)Shi Hu; Eric Nalisnick; Max Welling
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense. (98%)Thai Le; Jooyoung Lee; Kevin Yen; Yifan Hu; Dongwon Lee
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model. (84%)Jiayi Wang; Rongzhou Bao; Zhuosheng Zhang; Hai Zhao
Efficient Neural Network Analysis with Sum-of-Infeasibilities. (74%)Haoze Wu; Aleksandar Zeljić; Guy Katz; Clark Barrett
Deep Learning Generalization, Extrapolation, and Over-parameterization. (68%)Roozbeh Yousefzadeh
On Robust Prefix-Tuning for Text Classification. (10%)Zonghan Yang; Yang Liu
2022-03-18
Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike. (99%)Johannes Schneider; Giovanni Apruzzese
Adversarial Attacks on Deep Learning-based Video Compression and Classification Systems. (99%)Jung-Woo Chang; Mojan Javaheripi; Seira Hidano; Farinaz Koushanfar
Neural Predictor for Black-Box Adversarial Attacks on Speech Recognition. (99%)Marie Biolková; Bac Nguyen
AutoAdversary: A Pixel Pruning Method for Sparse Adversarial Attack. (99%)Jinqiao Li; Xiaotao Liu; Jian Zhao; Furao Shen
Alleviating Adversarial Attacks on Variational Autoencoders with MCMC. (96%)Anna Kuzina; Max Welling; Jakub M. Tomczak
DTA: Physical Camouflage Attacks using Differentiable Transformation Network. (83%)Naufal Suryanto; Yongsu Kim; Hyoeun Kang; Harashta Tatimma Larasati; Youngyeo Yun; Thi-Thu-Huong Le; Hunmin Yang; Se-Yoon Oh; Howon Kim
AdIoTack: Quantifying and Refining Resilience of Decision Tree Ensemble Inference Models against Adversarial Volumetric Attacks on IoT Networks. (78%)Arman Pashamokhtari; Gustavo Batista; Hassan Habibi Gharakheili
Towards Robust 2D Convolution for Reliable Visual Recognition. (9%)Lida Li; Shuai Li; Kun Wang; Xiangchu Feng; Lei Zhang
2022-03-17
Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input. (99%)Junyoung Byun; Seungju Cho; Myung-Joon Kwon; Hee-Seon Kim; Changick Kim
Self-Ensemble Adversarial Training for Improved Robustness. (99%)Hongjun Wang; Yisen Wang
Leveraging Adversarial Examples to Quantify Membership Information Leakage. (98%)Ganesh Del Grosso; Hamid Jalalzai; Georg Pichler; Catuscia Palamidessi; Pablo Piantanida
On the Properties of Adversarially-Trained CNNs. (93%)Mattia Carletti; Matteo Terzi; Gian Antonio Susto
PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks. (89%)Yue Wang; Wenqing Li; Esha Sarkar; Muhammad Shafique; Michail Maniatakos; Saif Eddin Jabari
HDLock: Exploiting Privileged Encoding to Protect Hyperdimensional Computing Models against IP Stealing. (1%)Shijin Duan; Shaolei Ren; Xiaolin Xu
2022-03-16
Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training. (99%)Adir Rahamim; Itay Naeh
Towards Practical Certifiable Patch Defense with Vision Transformer. (98%)Zhaoyu Chen; Bo Li; Jianghe Xu; Shuang Wu; Shouhong Ding; Wenqiang Zhang
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations? (97%)Yonggan Fu; Shunyao Zhang; Shang Wu; Cheng Wan; Yingyan Lin
Provable Adversarial Robustness for Fractional Lp Threat Models. (87%)Alexander Levine; Soheil Feizi
What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study. (83%)Binxiao Huang; Chaofan Tao; Rui Lin; Ngai Wong
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks. (82%)Fan Wu; Linyi Li; Chejian Xu; Huan Zhang; Bhavya Kailkhura; Krishnaram Kenthapadi; Ding Zhao; Bo Li
Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning. (70%)Gorka Abad; Servio Paguada; Oguzhan Ersoy; Stjepan Picek; Víctor Julio Ramírez-Durán; Aitor Urbieta
Reducing Flipping Errors in Deep Neural Networks. (68%)Xiang Deng; Yun Xiao; Bo Long; Zhongfei Zhang
Attacking deep networks with surrogate-based adversarial black-box methods is easy. (45%)Nicholas A. Lord; Romain Mueller; Luca Bertinetto
On the Convergence of Certified Robust Training with Interval Bound Propagation. (15%)Yihan Wang; Zhouxing Shi; Quanquan Gu; Cho-Jui Hsieh
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients. (15%)Xiaoyu Cao; Neil Zhenqiang Gong
Understanding robustness and generalization of artificial neural networks through Fourier masks. (2%)Nikos Karantzas; Emma Besier; Josue Ortega Caro; Xaq Pitkow; Andreas S. Tolias; Ankit B. Patel; Fabio Anselmi
2022-03-15
Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness. (76%)Tejas Gokhale; Swaroop Mishra; Man Luo; Bhavdeep Singh Sachdeva; Chitta Baral
Internet-based Social Engineering Attacks, Defenses and Psychology: A Survey. (13%)Theodore Longtchi; Rosana Montañez Rodriguez; Laith Al-Shawaf; Adham Atyabi; Shouhuai Xu
Towards Adversarial Control Loops in Sensor Attacks: A Case Study to Control the Kinematics and Actuation of Embedded Systems. (10%)Yazhou Tu; Sara Rampazzi; Xiali Hei
LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference. (1%)Zhongzhi Yu; Yonggan Fu; Shang Wu; Mengquan Li; Haoran You; Yingyan Lin
Adversarial Counterfactual Augmentation: Application in Alzheimer's Disease Classification. (1%)Tian Xia; Pedro Sanchez; Chen Qin; Sotirios A. Tsaftaris
2022-03-14
Efficient universal shuffle attack for visual object tracking. (99%)Siao Liu; Zhaoyu Chen; Wei Li; Jiwei Zhu; Jiafeng Wang; Wenqiang Zhang; Zhongxue Gan
Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training. (99%)Jiahao Shao; Shijia Geng; Zhaoji Fu; Weilun Xu; Tong Liu; Shenda Hong
Task-Agnostic Robust Representation Learning. (98%)A. Tuan Nguyen; Ser Nam Lim; Philip Torr
Energy-Latency Attacks via Sponge Poisoning. (91%)Antonio Emanuele Cinà; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Adversarial amplitude swap towards robust image classifiers. (83%)Chun Yang Tan; Hiroshi Kera; Kazuhiko Kawamoto
On the benefits of knowledge distillation for adversarial robustness. (82%)Javier Maroto; Guillermo Ortiz-Jiménez; Pascal Frossard
RES-HD: Resilient Intelligent Fault Diagnosis Against Adversarial Attacks Using Hyper-Dimensional Computing. (82%)Onat Gungor; Tajana Rosing; Baris Aksanli
Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis. (54%)Giulio Rossolini; Federico Nesti; Fabio Brau; Alessandro Biondi; Giorgio Buttazzo
2022-03-13
LAS-AT: Adversarial Training with Learnable Attack Strategy. (99%)Xiaojun Jia; Yong Zhang; Baoyuan Wu; Ke Ma; Jue Wang; Xiaochun Cao
Generating Practical Adversarial Network Traffic Flows Using NIDSGAN. (99%)Bolor-Erdene Zolbayar; Ryan Sheatsley; Patrick McDaniel; Michael J. Weisman; Sencun Zhu; Shitong Zhu; Srikanth Krishnamurthy
Model Inversion Attack against Transfer Learning: Inverting a Model without Accessing It. (92%)Dayong Ye; Huiqiang Chen; Shuai Zhou; Tianqing Zhu; Wanlei Zhou; Shouling Ji
One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy. (67%)Dayong Ye; Sheng Shen; Tianqing Zhu; Bo Liu; Wanlei Zhou
Policy Learning for Robust Markov Decision Process with a Mismatched Generative Model. (3%)Jialian Li; Tongzheng Ren; Dong Yan; Hang Su; Jun Zhu
2022-03-12
Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior. (99%)Yinpeng Dong; Shuyu Cheng; Tianyu Pang; Hang Su; Jun Zhu
A Survey of Adversarial Defences and Robustness in NLP. (99%)Shreya Goyal; Sumanth Doddapaneni; Mitesh M. Khapra; Balaraman Ravindran
Label-only Model Inversion Attack: The Attack that Requires the Least Information. (47%)Dayong Ye; Tianqing Zhu; Shuai Zhou; Bo Liu; Wanlei Zhou
2022-03-11
Block-Sparse Adversarial Attack to Fool Transformer-Based Text Classifiers. (99%)Sahar Sadrizadeh; Ljiljana Dolamic; Pascal Frossard
Learning from Attacks: Attacking Variational Autoencoder for Improving Image Classification. (98%)Jianzhang Zheng; Fan Yang; Hao Shen; Xuan Tang; Mingsong Chen; Liang Song; Xian Wei
An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks. (96%)Anirudh Yadav; Ashutosh Upadhyay; S. Sharanya
Enhancing Adversarial Training with Second-Order Statistics of Weights. (38%)Gaojie Jin; Xinping Yi; Wei Huang; Sven Schewe; Xiaowei Huang
ROOD-MRI: Benchmarking the robustness of deep learning segmentation models to out-of-distribution and corrupted data in MRI. (33%)Lyndon Boone; Mahdi Biparva; Parisa Mojiri Forooshani; Joel Ramirez; Mario Masellis; Robert Bartha; Sean Symons; Stephen Strother; Sandra E. Black; Chris Heyn; Anne L. Martel; Richard H. Swartz; Maged Goubran
Perception Over Time: Temporal Dynamics for Robust Image Understanding. (16%)Maryam Daniali; Edward Kim
Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation. (15%)Yunhan Huang; Quanyan Zhu
2022-03-10
Exploiting the Potential of Datasets: A Data-Centric Approach for Model Robustness. (92%)Yiqi Zhong; Lei Wu; Xianming Liu; Junjun Jiang
Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation. (75%)Saeed Ranjbar Alvar; Lanjun Wang; Jian Pei; Yong Zhang
Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method. (69%)Arbena Musa; Kamer Vishi; Blerim Rexha
Attacks as Defenses: Designing Robust Audio CAPTCHAs Using Attacks on Automatic Speech Recognition Systems. (64%)Hadi Abdullah; Aditya Karlekar; Saurabh Prasad; Muhammad Sajidur Rahman; Logan Blue; Luke A. Bauer; Vincent Bindschaedler; Patrick Traynor
SoK: On the Semantic AI Security in Autonomous Driving. (10%)Junjie Shen; Ningfei Wang; Ziwen Wan; Yunpeng Luo; Takami Sato; Zhisheng Hu; Xinyang Zhang; Shengjian Guo; Zhenyu Zhong; Kang Li; Ziming Zhao; Chunming Qiao; Qi Alfred Chen
2022-03-09
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation. (99%)Qilong Zhang; Chaoning Zhang; Chaoqun Li; Jingkuan Song; Lianli Gao; Heng Tao Shen
Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack. (99%)Ye Liu; Yaya Cheng; Lianli Gao; Xianglong Liu; Qilong Zhang; Jingkuan Song
Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity. (99%)Cheng Luo; Qinliang Lin; Weicheng Xie; Bizhu Wu; Jinheng Xie; Linlin Shen
Binary Classification Under $\ell_0$ Attacks for General Noise Distribution. (98%)Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition. (97%)Xiao Yang; Yinpeng Dong; Tianyu Pang; Zihao Xiao; Hang Su; Jun Zhu
Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees. (92%)Darshan Thaker; Paris Giampouras; René Vidal
Defending Black-box Skeleton-based Human Activity Classifiers. (92%)He Wang; Yunfeng Diao; Zichang Tan; Guodong Guo
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition. (81%)Yi Chang; Sofiane Laridi; Zhao Ren; Gregory Palmer; Björn W. Schuller; Marco Fisichella
Improving Neural ODEs via Knowledge Distillation. (80%)Haoyu Chu; Shikui Wei; Qiming Lu; Yao Zhao
Physics-aware Complex-valued Adversarial Machine Learning in Reconfigurable Diffractive All-optical Neural Network. (22%)Ruiyang Chen; Yingjie Li; Minhan Lou; Jichao Fan; Yingheng Tang; Berardi Sensale-Rodriguez; Cunxi Yu; Weilu Gao
On the surprising tradeoff between ImageNet accuracy and perceptual similarity. (1%)Manoj Kumar; Neil Houlsby; Nal Kalchbrenner; Ekin D. Cubuk
2022-03-08
Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust NIDS. (99%)João Vitorino; Nuno Oliveira; Isabel Praça
Shape-invariant 3D Adversarial Point Clouds. (99%)Qidong Huang; Xiaoyi Dong; Dongdong Chen; Hang Zhou; Weiming Zhang; Nenghai Yu
ART-Point: Improving Rotation Robustness of Point Cloud Classifiers via Adversarial Rotation. (92%)Robin Wang; Yibo Yang; Dacheng Tao
Robustly-reliable learners under poisoning attacks. (13%)Maria-Florina Balcan; Avrim Blum; Steve Hanneke; Dravyansh Sharma
DeepSE-WF: Unified Security Estimation for Website Fingerprinting Defenses. (2%)Alexander Veicht; Cedric Renggli; Diogo Barradas
Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4. (1%)William Berrios; Arturo Deza
Harmonicity Plays a Critical Role in DNN Based Versus in Biologically-Inspired Monaural Speech Segregation Systems. (1%)Rahil Parikh; Ilya Kavalerov; Carol Espy-Wilson; Shihab Shamma
2022-03-07
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches. (99%)Maura Pintor; Daniele Angioni; Angelo Sotgiu; Luca Demetrio; Ambra Demontis; Battista Biggio; Fabio Roli
Art-Attack: Black-Box Adversarial Attack via Evolutionary Art. (99%)Phoenix Williams; Ke Li
Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon. (99%)Yiqi Zhong; Xianming Liu; Deming Zhai; Junjun Jiang; Xiangyang Ji
Adversarial Texture for Fooling Person Detectors in the Physical World. (98%)Zhanhao Hu; Siyuan Huang; Xiaopei Zhu; Xiaolin Hu; Fuchun Sun; Bo Zhang
Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision. (83%)Jun Zhuang; Mohammad Al Hasan
Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation. (31%)Xiaogeng Liu; Haoyu Wang; Yechao Zhang; Fangzhou Wu; Shengshan Hu
2022-03-06
$A^{3}D$: A Platform of Searching for Robust Neural Architectures and Efficient Adversarial Attacks. (99%)Jialiang Sun; Wen Yao; Tingsong Jiang; Chao Li; Xiaoqian Chen
Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer. (98%)Shengshan Hu; Xiaogeng Liu; Yechao Zhang; Minghui Li; Leo Yu Zhang; Hai Jin; Libing Wu
Scalable Uncertainty Quantification for Deep Operator Networks using Randomized Priors. (45%)Yibo Yang; Georgios Kissas; Paris Perdikaris
Evaluation of Interpretability Methods and Perturbation Artifacts in Deep Neural Networks. (2%)Lennart Brocki; Neo Christopher Chung
2022-03-05
aaeCAPTCHA: The Design and Implementation of Audio Adversarial CAPTCHA. (92%)Md Imran Hossen; Xiali Hei
2022-03-04
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation. (82%)Xudong Zhang; Zan Wang; Jingke Zhao; Lanjun Wang
Concept-based Explanations for Out-Of-Distribution Detectors. (1%)Jihye Choi; Jayaram Raghuram; Ryan Feng; Jiefeng Chen; Somesh Jha; Atul Prakash
2022-03-03
Ad2Attack: Adaptive Adversarial Attack on Real-Time UAV Tracking. (99%)Changhong Fu; Sihang Li; Xinnan Yuan; Junjie Ye; Ziang Cao; Fangqiang Ding
Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation. (98%)KiYoon Yoo; Jangho Kim; Jiho Jang; Nojun Kwak
Adversarial Patterns: Building Robust Android Malware Classifiers. (98%)Dipkamal Bhusal; Nidhi Rastogi
Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training. (84%)Pervaiz Iqbal Khan; Shoaib Ahmed Siddiqui; Imran Razzak; Andreas Dengel; Sheraz Ahmed
Label-Only Model Inversion Attacks via Boundary Repulsion. (74%)Mostafa Kahla; Si Chen; Hoang Anh Just; Ruoxi Jia
Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models. (56%)Zhibo Wang; Xiaowei Dong; Henry Xue; Zhifei Zhang; Weifeng Chiu; Tao Wei; Kui Ren
Why adversarial training can hurt robust accuracy. (22%)Jacob Clarysse; Julia Hörmann; Fanny Yang
Understanding Failure Modes of Self-Supervised Learning. (4%)Neha Mukund Kalibhat; Kanika Narang; Liang Tan; Hamed Firooz; Maziar Sanjabi; Soheil Feizi
Ensemble Methods for Robust Support Vector Machines using Integer Programming. (2%)Jannis Kurtz
Autonomous and Resilient Control for Optimal LEO Satellite Constellation Coverage Against Space Threats. (1%)Yuhan Zhao; Quanyan Zhu
2022-03-02
Enhancing Adversarial Robustness for Deep Metric Learning. (99%)Mo Zhou; Vishal M. Patel
Adversarial attacks on neural networks through canonical Riemannian foliations. (99%)Eliot Tron; Nicolas Couellan; Stéphane Puechmorel
Detecting Adversarial Perturbations in Multi-Task Perception. (98%)Marvin Klingner; Varun Ravi Kumar; Senthil Yogamani; Andreas Bär; Tim Fingscheidt
Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers. (69%)Evan Crothers; Nathalie Japkowicz; Herna Viktor; Paula Branco
Video is All You Need: Attacking PPG-based Biometric Authentication. (13%)Lin Li; Chao Chen; Lei Pan; Jun Zhang; Yang Xiang
MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members. (2%)Ismat Jarin; Birhanu Eshete
A Quantitative Geometric Approach to Neural-Network Smoothness. (2%)Zi Wang; Gautam Prakriya; Somesh Jha
2022-03-01
Adversarial samples for deep monocular 6D object pose estimation. (99%)Jinlai Zhang; Weiming Li; Shuang Liang; Hao Wang; Jihong Zhu
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving. (87%)Xingshuo Han; Guowen Xu; Yuan Zhou; Xuehuan Yang; Jiwei Li; Tianwei Zhang
Global-Local Regularization Via Distributional Robustness. (86%)Hoang Phan; Trung Le; Trung Phung; Tuan Anh Bui; Nhat Ho; Dinh Phung
Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. (11%)Wei Dai; Daniel Berleant
Signature Correction Attack on Dilithium Signature Scheme. (1%)Saad Islam; Koksal Mus; Richa Singh; Patrick Schaumont; Berk Sunar
2022-02-28
Enhance transferability of adversarial examples with model architecture. (99%)Mingyuan Fan; Wenzhong Guo; Shengxing Yu; Zuobin Ying; Ximeng Liu
Towards Robust Stacked Capsule Autoencoder with Hybrid Adversarial Training. (99%)Jiazhu Dai; Siwei Xiong
Evaluating the Adversarial Robustness of Adaptive Test-time Defenses. (98%)Francesco Croce; Sven Gowal; Thomas Brunner; Evan Shelhamer; Matthias Hein; Taylan Cemgil
MaMaDroid2.0 -- The Holes of Control Flow Graphs. (88%)Harel Berger; Chen Hajaj; Enrico Mariconti; Amit Dvir
Improving Lexical Embeddings for Robust Question Answering. (67%)Weiwen Xu; Bowei Zou; Wai Lam; Ai Ti Aw
Robust Textual Embedding against Word-level Adversarial Attacks. (26%)Yichen Yang; Xiaosen Wang; Kun He
Artificial Intelligence for Cyber Security (AICS). (1%)James Holt; Edward Raff; Ahmad Ridley; Dennis Ross; Arunesh Sinha; Diane Staheli; William Streilen; Milind Tambe; Yevgeniy Vorobeychik; Allan Wollaber
Explaining RADAR features for detecting spoofing attacks in Connected Autonomous Vehicles. (1%)Nidhi Rastogi; Sara Rampazzi; Michael Clifford; Miriam Heller; Matthew Bishop; Karl Levitt
2022-02-27
A Unified Wasserstein Distributional Robustness Framework for Adversarial Training. (99%)Tuan Anh Bui; Trung Le; Quan Tran; He Zhao; Dinh Phung
Robust Control of Partially Specified Boolean Networks. (1%)Luboš Brim; Samuel Pastva; David Šafránek; Eva Šmijáková
2022-02-26
Adversarial robustness of sparse local Lipschitz predictors. (87%)Ramchandran Muthukumar; Jeremias Sulam
Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations. (45%)Metehan Cekic; Can Bakiskan; Upamanyu Madhow
Automation of reversible steganographic coding with nonlinear discrete optimisation. (1%)Ching-Chun Chang
2022-02-25
ARIA: Adversarially Robust Image Attribution for Content Provenance. (99%)Maksym Andriushchenko; Xiaoyang Rebecca Li; Geoffrey Oxholm; Thomas Gittings; Tu Bui; Nicolas Flammarion; John Collomosse
Projective Ranking-based GNN Evasion Attacks. (97%)He Zhang; Xingliang Yuan; Chuan Zhou; Shirui Pan
On the Effectiveness of Dataset Watermarking in Adversarial Settings. (56%)Buse Gul Atli Tekgul; N. Asokan
2022-02-24
Towards Effective and Robust Neural Trojan Defenses via Input Filtering. (92%)Kien Do; Haripriya Harikumar; Hung Le; Dung Nguyen; Truyen Tran; Santu Rana; Dang Nguyen; Willy Susilo; Svetha Venkatesh
Robust Probabilistic Time Series Forecasting. (76%)TaeHo Yoon; Youngsuk Park; Ernest K. Ryu; Yuyang Wang
Understanding Adversarial Robustness from Feature Maps of Convolutional Layers. (70%)Cong Xu; Min Yang
Measuring CLEVRness: Blackbox testing of Visual Reasoning Models. (16%)Spyridon Mouselinos; Henryk Michalewski; Mateusz Malinowski
Bounding Membership Inference. (11%)Anvith Thudi; Ilia Shumailov; Franziska Boenisch; Nicolas Papernot
Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration. (3%)Ryan Soklaski; Michael Yee; Theodoros Tsiligkaridis
Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations. (2%)Chih-Kuan Yeh; Kuan-Yun Lee; Frederick Liu; Pradeep Ravikumar
Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning. (1%)Xihong Yang; Xiaochang Hu; Sihang Zhou; Xinwang Liu; En Zhu
2022-02-23
Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling. (96%)Zhi-Yuan Zhang; Di Liu
Using calibrator to improve robustness in Machine Reading Comprehension. (13%)Jing Jin; Houfeng Wang
2022-02-22
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis. (99%)Hanieh Naderi; Kimia Noorbakhsh; Arian Etemadi; Shohreh Kasaei
Universal adversarial perturbation for remote sensing images. (95%)Zhaoxia Yin; Qingyu Wang; Jin Tang; Bin Luo
Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era. (84%)Changjiang Li; Li Wang; Shouling Ji; Xuhong Zhang; Zhaohan Xi; Shanqing Guo; Ting Wang
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning. (1%)Hao He; Kaiwen Zha; Dina Katabi
2022-02-21
Adversarial Attacks on Speech Recognition Systems for Mission-Critical Applications: A Survey. (99%)Ngoc Dung Huynh; Mohamed Reda Bouadjenek; Imran Razzak; Kevin Lee; Chetan Arora; Ali Hassani; Arkady Zaslavsky
Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness. (99%)Beomsu Kim; Junghoon Seo
HoneyModels: Machine Learning Honeypots. (99%)Ahmed Abdou; Ryan Sheatsley; Yohan Beugin; Tyler Shipp; Patrick McDaniel
Transferring Adversarial Robustness Through Robust Representation Matching. (99%)Pratik Vaishnavi; Kevin Eykholt; Amir Rahmati
On the Effectiveness of Adversarial Training against Backdoor Attacks. (96%)Yinghua Gao; Dongxian Wu; Jingfeng Zhang; Guanhao Gan; Shu-Tao Xia; Gang Niu; Masashi Sugiyama
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey. (83%)Miguel A. Ramirez; Song-Kyoo Kim; Hussam Al Hamadi; Ernesto Damiani; Young-Ji Byon; Tae-Yeon Kim; Chung-Suk Cho; Chan Yeob Yeun
A Tutorial on Adversarial Learning Attacks and Countermeasures. (75%)Cato Pauling; Michael Gimson; Muhammed Qaid; Ahmad Kida; Basel Halak
Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection. (41%)Yein Kim; Huili Chen; Farinaz Koushanfar
Privacy Leakage of Adversarial Training Models in Federated Learning Systems. (38%)Jingyang Zhang; Yiran Chen; Hai Li
Robustness and Accuracy Could Be Reconcilable by (Proper) Definition. (11%)Tianyu Pang; Min Lin; Xiao Yang; Jun Zhu; Shuicheng Yan
Cyber-Physical Defense in the Quantum Era. (2%)Michel Barbeau; Joaquin Garcia-Alfaro
2022-02-20
Real-time Over-the-air Adversarial Perturbations for Digital Communications using Deep Neural Networks. (93%)Roman A. Sandler; Peter K. Relich; Cloud Cho; Sean Holloway
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. (26%)Tianlong Chen; Zhenyu Zhang; Pengjun Wang; Santosh Balachandra; Haoyu Ma; Zehao Wang; Zhangyang Wang
Overparametrization improves robustness against adversarial attacks: A replication study. (3%)Ali Borji
2022-02-18
Exploring Adversarially Robust Training for Unsupervised Domain Adaptation. (99%)Shao-Yuan Lo; Vishal M. Patel
Learning Representations Robust to Group Shifts and Adversarial Examples. (93%)Ming-Chang Chiu; Xuezhe Ma
Critical Checkpoints for Evaluating Defence Models Against Adversarial Attack and Robustness. (92%)Kanak Tekwani; Manojkumar Parmar
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches. (80%)Reena Zelenkova; Jack Swallow; M. A. P. Chamikara; Dongxi Liu; Mohan Baruwal Chhetri; Seyit Camtepe; Marthie Grobler; Mahathir Almashor
Data-Driven Mitigation of Adversarial Text Perturbation. (75%)Rasika Bhalerao; Mohammad Al-Rubaie; Anand Bhaskar; Igor Markov
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias. (68%)Shangxi Wu; Qiuyang He; Yi Zhang; Jitao Sang
Label-Smoothed Backdoor Attack. (38%)Minlong Peng; Zidi Xiong; Mingming Sun; Ping Li
Stochastic Perturbations of Tabular Features for Non-Deterministic Inference with Automunge. (38%)Nicholas J. Teague
Black-box Node Injection Attack for Graph Neural Networks. (33%)Mingxuan Ju; Yujie Fan; Yanfang Ye; Liang Zhao
Robust Reinforcement Learning as a Stackelberg Game via Adaptively-Regularized Adversarial Training. (9%)Peide Huang; Mengdi Xu; Fei Fang; Ding Zhao
Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML Systems. (4%)Mohamad Fazelnia; Igor Khokhlov; Mehdi Mirakhorli
Synthetic Disinformation Attacks on Automated Fact Verification Systems. (1%)Yibing Du; Antoine Bosselut; Christopher D. Manning
2022-02-17
Rethinking Machine Learning Robustness via its Link with the Out-of-Distribution Problem. (99%)Abderrahmen Amich; Birhanu Eshete
Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition. (98%)Chao-Han Huck Yang; Zeeshan Ahmed; Yile Gu; Joseph Szurley; Roger Ren; Linda Liu; Andreas Stolcke; Ivan Bulyko
Developing Imperceptible Adversarial Patches to Camouflage Military Assets From Computer Vision Enabled Technologies. (98%)Chris Wise; Jo Plested
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations. (78%)Zirui Peng; Shaofeng Li; Guoxing Chen; Cheng Zhang; Haojin Zhu; Minhui Xue
2022-02-16
The Adversarial Security Mitigations of mmWave Beamforming Prediction Models using Defensive Distillation and Adversarial Retraining. (99%)Murat Kuzlu; Ferhat Ozgur Catak; Umit Cali; Evren Catak; Ozgur Guler
Understanding and Improving Graph Injection Attack by Promoting Unnoticeability. (10%)Yongqiang Chen; Han Yang; Yonggang Zhang; Kaili Ma; Tongliang Liu; Bo Han; James Cheng
Gradient Based Activations for Accurate Bias-Free Learning. (1%)Vinod K Kurmi; Rishabh Sharma; Yash Vardhan Sharma; Vinay P. Namboodiri
2022-02-15
Unreasonable Effectiveness of Last Hidden Layer Activations. (99%)Omer Faruk Tuna; Ferhat Ozgur Catak; M. Taner Eskil
Exploring the Devil in Graph Spectral Domain for 3D Point Cloud Attacks. (99%)Qianjiang Hu; Daizong Liu; Wei Hu
StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection. (99%)Aqib Rashid; Jose Such
Random Walks for Adversarial Meshes. (97%)Amir Belder; Gal Yefet; Ran Ben Izhak; Ayellet Tal
Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing. (93%)Zhiyan Chen; Burak Kantarci
Applying adversarial networks to increase the data efficiency and reliability of Self-Driving Cars. (89%)Aakash Kumar
Improving the repeatability of deep learning models with Monte Carlo dropout. (1%)Andreanne Lemay; Katharina Hoebel; Christopher P. Bridge; Brian Befano; Silvia De Sanjosé; Diden Egemen; Ana Cecilia Rodriguez; Mark Schiffman; John Peter Campbell; Jayashree Kalpathy-Cramer
Holistic Adversarial Robustness of Deep Learning Models. (1%)Pin-Yu Chen; Sijia Liu
Taking a Step Back with KCal: Multi-Class Kernel-Based Calibration for Deep Neural Networks. (1%)Zhen Lin; Shubhendu Trivedi; Jimeng Sun
2022-02-14
Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark. (99%)Yonghao Xu; Pedram Ghamisi
Finding Dynamics Preserving Adversarial Winning Tickets. (86%)Xupeng Shi; Pengfei Zheng; A. Adam Ding; Yuan Gao; Weizhong Zhang
Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack. (83%)Jintang Li; Bingzhe Wu; Chengbin Hou; Guoji Fu; Yatao Bian; Liang Chen; Junzhou Huang; Zibin Zheng
PFGE: Parsimonious Fast Geometric Ensembling of DNNs. (1%)Hao Guo; Jiyong Jin; Bin Liu
UA-FedRec: Untargeted Attack on Federated News Recommendation. (1%)Jingwei Yi; Fangzhao Wu; Bin Zhu; Jing Yao; Zhulin Tao; Guangzhong Sun; Xing Xie
2022-02-13
Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks. (99%)Bingxu Mu; Zhenxing Niu; Le Wang; Xue Wang; Rong Jin; Gang Hua
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training. (92%)Zhenting Wang; Hailun Ding; Juan Zhai; Shiqing Ma
Extracting Label-specific Key Input Features for Neural Code Intelligence Models. (9%)Md Rafiqul Islam Rabin
Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey. (2%)Zhilin Wang; Qiao Kang; Xinyi Zhang; Qin Hu
SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation. (1%)Cong Guo; Yuxian Qiu; Jingwen Leng; Xiaotian Gao; Chen Zhang; Yunxin Liu; Fan Yang; Yuhao Zhu; Minyi Guo
2022-02-12
RoPGen: Towards Robust Code Authorship Attribution via Automatic Coding Style Transformation. (98%)Zhen Li; Guenevere Qian Chen; Chen Chen; Yayi Zou; Shouhuai Xu
Excitement Surfeited Turns to Errors: Deep Learning Testing Framework Based on Excitable Neurons. (98%)Haibo Jin; Ruoxi Chen; Haibin Zheng; Jinyin Chen; Yao Cheng; Yue Yu; Xianglong Liu
2022-02-11
Adversarial Attacks and Defense Methods for Power Quality Recognition. (99%)Jiwei Tian; Buhong Wang; Jing Li; Zhen Wang; Mete Ozay
Towards Adversarially Robust Deepfake Detection: An Ensemble Approach. (99%)Ashish Hooda; Neal Mangaokar; Ryan Feng; Kassem Fawaz; Somesh Jha; Atul Prakash
Open-set Adversarial Defense with Clean-Adversarial Mutual Learning. (98%)Rui Shao; Pramuditha Perera; Pong C. Yuen; Vishal M. Patel
Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models. (92%)Abigail Swenor; Jugal Kalita
Fast Adversarial Training with Noise Augmentation: A Unified Perspective on RandStart and GradAlign. (74%)Axi Niu; Kang Zhang; Chaoning Zhang; Chenshuang Zhang; In So Kweon; Chang D. Yoo; Yanning Zhang
Predicting Out-of-Distribution Error with the Projection Norm. (62%)Yaodong Yu; Zitong Yang; Alexander Wei; Yi Ma; Jacob Steinhardt
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers. (62%)Limin Yang; Zhi Chen; Jacopo Cortellazzi; Feargus Pendlebury; Kevin Tu; Fabio Pierazzi; Lorenzo Cavallaro; Gang Wang
White-Box Attacks on Hate-speech BERT Classifiers in German with Explicit and Implicit Character Level Defense. (12%)Shahrukh Khan; Mahnoor Shahid; Navdeeppal Singh
On the Detection of Adaptive Adversarial Attacks in Speaker Verification Systems. (10%)Zesheng Chen
Improving Generalization via Uncertainty Driven Perturbations. (2%)Matteo Pagliardini; Gilberto Manunza; Martin Jaggi; Michael I. Jordan; Tatjana Chavdarova
CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning. (1%)Jun Shu; Xiang Yuan; Deyu Meng; Zongben Xu
2022-02-10
FAAG: Fast Adversarial Audio Generation through Interactive Attack Optimisation. (99%)Yuantian Miao; Chao Chen; Lei Pan; Jun Zhang; Yang Xiang
Towards Assessing and Characterizing the Semantic Robustness of Face Recognition. (76%)Juan C. Pérez; Motasem Alfarra; Ali Thabet; Pablo Arbeláez; Bernard Ghanem
Controlling the Complexity and Lipschitz Constant improves polynomial nets. (12%)Zhenyu Zhu; Fabian Latorre; Grigorios G Chrysos; Volkan Cevher
FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling. (8%)Chuhan Wu; Fangzhao Wu; Tao Qi; Yongfeng Huang; Xing Xie
A Field of Experts Prior for Adapting Neural Networks at Test Time. (1%)Neerav Karani; Georg Brunner; Ertunc Erdil; Simin Fei; Kerem Tezcan; Krishna Chaitanya; Ender Konukoglu
2022-02-09
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios. (99%)Jung Im Choi; Qing Tian
Gradient Methods Provably Converge to Non-Robust Networks. (82%)Gal Vardi; Gilad Yehudai; Ohad Shamir
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger. (22%)Muhammad Umer; Robi Polikar
ARIBA: Towards Accurate and Robust Identification of Backdoor Attacks in Federated Learning. (10%)Yuxi Mi; Jihong Guan; Shuigeng Zhou
L2B: Learning to Bootstrap Robust Models for Combating Label Noise. (2%)Yuyin Zhou; Xianhang Li; Fengze Liu; Qingyue Wei; Xuxi Chen; Lequan Yu; Cihang Xie; Matthew P. Lungren; Lei Xing
Model Architecture Adaption for Bayesian Neural Networks. (1%)Duo Wang; Yiren Zhao; Ilia Shumailov; Robert Mullins
2022-02-08
Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations. (99%)Lei Hsiung; Yun-Yun Tsai; Pin-Yu Chen; Tsung-Yi Ho
Verification-Aided Deep Ensemble Selection. (96%)Guy Amir; Guy Katz; Michael Schapira
Adversarial Detection without Model Information. (87%)Abhishek Moitra; Youngeun Kim; Priyadarshini Panda
Towards Making a Trojan-horse Attack on Text-to-Image Retrieval. (68%)Fan Hu; Aozhu Chen; Xirong Li
Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks. (1%)Alireza Sadeghi
2022-02-07
Blind leads Blind: A Zero-Knowledge Attack on Federated Learning. (99%)Jiyue Huang; Zilong Zhao; Lydia Y. Chen; Stefanie Roos
On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks. (99%)Salijona Dyrmishi; Salah Ghamizi; Thibault Simonetto; Yves Le Traon; Maxime Cordy
Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests. (98%)Xilie Xu; Jingfeng Zhang; Feng Liu; Masashi Sugiyama; Mohan Kankanhalli
Evaluating Robustness of Cooperative MARL: A Model-based Approach. (98%)Nhan H. Pham; Lam M. Nguyen; Jie Chen; Hoang Thanh Lam; Subhro Das; Tsui-Wei Weng
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks. (68%)Jing Xu; Rui Wang; Kaitai Liang; Stjepan Picek
Membership Inference Attacks and Defenses in Neural Network Pruning. (50%)Xiaoyong Yuan; Lan Zhang
SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation. (4%)Jun Xia; Lirong Wu; Jintao Chen; Bozhen Hu; Stan Z. Li
Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning. (3%)Ji Gao; Sanjam Garg; Mohammad Mahmoody; Prashant Nalini Vasudevan
2022-02-06
Tubes Among Us: Analog Attack on Automatic Speaker Identification. (99%)Shimaa Ahmed; Yash Wani; Ali Shahin Shamsabadi; Mohammad Yaghini; Ilia Shumailov; Nicolas Papernot; Kassem Fawaz
Redactor: A Data-centric and Individualized Defense Against Inference Attacks. (8%)Geon Heo; Steven Euijong Whang
2022-02-05
Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework. (99%)Mohammad Khalooei; Mohammad Mehdi Homayounpour; Maryam Amirmazlaghani
Adversarial Detector with Robust Classifier. (93%)Takayuki Osakabe; Maungmaung Aprilpyone; Sayaka Shiota; Hitoshi Kiya
Memory Defense: More Robust Classification via a Memory-Masking Autoencoder. (76%)Eashan Adhikarla; Dan Luo; Brian D. Davison
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation. (75%)Wenxiao Wang; Alexander Levine; Soheil Feizi
2022-02-04
Pixle: a fast and effective black-box attack based on rearranging pixels. (98%)Jary Pomponi; Simone Scardapane; Aurelio Uncini
Backdoor Defense via Decoupling the Training Process. (80%)Kunzhe Huang; Yiming Li; Baoyuan Wu; Zhan Qin; Kui Ren
LTU Attacker for Membership Inference. (67%)Joseph Pedersen; Rafael Muñoz-Gómez; Jiangnan Huang; Haozhe Sun; Wei-Wei Tu; Isabelle Guyon
A Survey on Safety-Critical Driving Scenario Generation -- A Methodological Perspective. (1%)Wenhao Ding; Chejian Xu; Mansur Arief; Haohong Lin; Bo Li; Ding Zhao
2022-02-03
ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking. (93%)Chong Xiang; Alexander Valtchanov; Saeed Mahloujifar; Prateek Mittal
Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization. (75%)Xiaojun Xu; Jacky Yibo Zhang; Evelyn Ma; Danny Son; Oluwasanmi Koyejo; Bo Li
2022-02-02
An Eye for an Eye: Defending against Gradient-based Attacks with Gradients. (99%)Hanbin Hong; Yuan Hong; Yu Kong
Smoothed Embeddings for Certified Few-Shot Learning. (76%)Mikhail Pautov; Olesya Kuznetsova; Nurislam Tursynbek; Aleksandr Petiushko; Ivan Oseledets
Probabilistically Robust Learning: Balancing Average- and Worst-case Performance. (75%)Alexander Robey; Luiz F. O. Chamon; George J. Pappas; Hamed Hassani
Make Some Noise: Reliable and Efficient Single-Step Adversarial Training. (70%)Pau de Jorge; Adel Bibi; Riccardo Volpi; Amartya Sanyal; Philip H. S. Torr; Grégory Rogez; Puneet K. Dokania
Robust Binary Models by Pruning Randomly-initialized Networks. (10%)Chen Liu; Ziqi Zhao; Sabine Süsstrunk; Mathieu Salzmann
NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections. (10%)N. Benjamin Erichson; Soon Hoe Lim; Francisco Utrera; Winnie Xu; Ziang Cao; Michael W. Mahoney
2022-02-01
Language Dependencies in Adversarial Attacks on Speech Recognition Systems. (98%)Karla Markert; Donika Mirdita; Konstantin Böttinger
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks. (80%)Anne Harrington; Arturo Deza
Visualizing Automatic Speech Recognition -- Means for a Better Understanding? (64%)Karla Markert; Romain Parracone; Mykhailo Kulakov; Philip Sperl; Ching-Yu Kao; Konstantin Böttinger
Datamodels: Predicting Predictions from Training Data. (2%)Andrew Ilyas; Sung Min Park; Logan Engstrom; Guillaume Leclerc; Aleksander Madry
2022-01-31
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons. (99%)Chandresh Pravin; Ivan Martino; Giuseppe Nicosia; Varun Ojha
Boundary Defense Against Black-box Adversarial Attacks. (99%)Manjushree B. Aithal; Xiaohua Li
Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models. (99%)Viet Quoc Vo; Ehsan Abbasnejad; Damith C. Ranasinghe
Can Adversarial Training Be Manipulated By Non-Robust Features? (98%)Lue Tao; Lei Feng; Hongxin Wei; Jinfeng Yi; Sheng-Jun Huang; Songcan Chen
GADoT: GAN-based Adversarial Training for Robust DDoS Attack Detection. (96%)Maged Abdelaty; Sandra Scott-Hayward; Roberto Doriguzzi-Corin; Domenico Siracusa
Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? (93%)Youngeun Kim; Hyoungseob Park; Abhishek Moitra; Abhiroop Bhattacharjee; Yeshwanth Venkatesha; Priyadarshini Panda
AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks. (89%)Muhammad Usman; Youcheng Sun; Divya Gopinath; Corina S. Pasareanu
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks. (81%)Mingfu Xue; Shifeng Ni; Yinghao Wu; Yushu Zhang; Jian Wang; Weiqiang Liu
On the Robustness of Quality Measures for GANs. (80%)Motasem Alfarra; Juan C. Pérez; Anna Frühstück; Philip H. S. Torr; Peter Wonka; Bernard Ghanem
MEGA: Model Stealing via Collaborative Generator-Substitute Networks. (76%)Chi Hong; Jiyue Huang; Lydia Y. Chen
Learning Robust Representation through Graph Adversarial Contrastive Learning. (26%)Jiayan Guo; Shangyang Li; Yue Zhao; Yan Zhang
UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs. (16%)Philipp Oberdiek; Gernot A. Fink; Matthias Rottmann
Few-Shot Backdoor Attacks on Visual Object Tracking. (10%)Yiming Li; Haoxiang Zhong; Xingjun Ma; Yong Jiang; Shu-Tao Xia
Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors. (5%)Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Timo Schenk; Adrian Lars Benjamin Iten; Gérôme Bovet; Gregorio Martínez Pérez; Burkhard Stiller
Securing Federated Sensitive Topic Classification against Poisoning Attacks. (1%)Tianyue Chu; Alvaro Garcia-Recuero; Costas Iordanou; Georgios Smaragdakis; Nikolaos Laoutaris
2022-01-30
Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets. (92%)Yong Guo; David Stutz; Bernt Schiele
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks. (84%)Chenhui Deng; Xiuyu Li; Zhuo Feng; Zhiru Zhang
TPC: Transformation-Specific Smoothing for Point Cloud Models. (75%)Wenda Chu; Linyi Li; Bo Li
2022-01-29
Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses. (99%)Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Robustness of Deep Recommendation Systems to Untargeted Interaction Perturbations. (82%)Sejoon Oh; Srijan Kumar
Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms. (1%)Jeongyeol Kwon; Yonathan Efroni; Constantine Caramanis; Shie Mannor
2022-01-28
Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning. (87%)Jie Zhang; Lei Zhang; Gang Li; Chao Wu
Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods. (81%)Raoul Schönhof; Artem Werner; Jannes Elstner; Boldizsar Zopcsak; Ramez Awad; Marco Huber
Certifying Model Accuracy under Distribution Shifts. (74%)Aounon Kumar; Alexander Levine; Tom Goldstein; Soheil Feizi
Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions. (13%)Jiachen Sun; Qingzhao Zhang; Bhavya Kailkhura; Zhiding Yu; Chaowei Xiao; Z. Morley Mao
Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. (8%)Lukas Struppek; Dominik Hintersdorf; Antonio De Almeida Correia; Antonia Adler; Kristian Kersting
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire. (3%)Siddhartha Datta; Nigel Shadbolt
Toward Training at ImageNet Scale with Differential Privacy. (1%)Alexey Kurakin; Shuang Song; Steve Chien; Roxana Geambasu; Andreas Terzis; Abhradeep Thakurta
2022-01-27
Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains. (99%)Qilong Zhang; Xiaodan Li; Yuefeng Chen; Jingkuan Song; Lianli Gao; Yuan He; Hui Xue
Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities. (10%)Xin Du; Benedicte Legastelois; Bhargavi Ganesh; Ajitha Rajan; Hana Chockler; Vaishak Belle; Stuart Anderson; Subramanian Ramamoorthy
SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders. (2%)Tianshuo Cong; Xinlei He; Yang Zhang
CacheFX: A Framework for Evaluating Cache Security. (1%)Daniel Genkin; William Kosasih; Fangfei Liu; Anna Trikalinou; Thomas Unterluggauer; Yuval Yarom
2022-01-26
Boosting 3D Adversarial Attacks with Attacking On Frequency. (98%)Binbin Liu; Jinlai Zhang; Lyujie Chen; Jihong Zhu
How Robust are Discriminatively Trained Zero-Shot Learning Models? (98%)Mehmet Kerim Yucel; Ramazan Gokberk Cinbis; Pinar Duygulu
Autonomous Cyber Defense Introduces Risk: Can We Manage the Risk? (2%)Alexandre K. Ligo; Alexander Kott; Igor Linkov
Automatic detection of access control vulnerabilities via API specification processing. (1%)Alexander Barabanov; Denis Dergunov; Denis Makrushin; Aleksey Teplov
2022-01-25
Virtual Adversarial Training for Semi-supervised Breast Mass Classification. (3%)Xuxin Chen; Ximin Wang; Ke Zhang; Kar-Ming Fung; Theresa C. Thai; Kathleen Moore; Robert S. Mannel; Hong Liu; Bin Zheng; Yuchen Qiu
Class-Aware Adversarial Transformers for Medical Image Segmentation. (1%)Chenyu You; Ruihan Zhao; Fenglin Liu; Siyuan Dong; Sandeep Chinchali; Ufuk Topcu; Lawrence Staib; James S. Duncan
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training. (1%)Wenyong Huang; Zhenhe Zhang; Yu Ting Yeung; Xin Jiang; Qun Liu
2022-01-24
What You See is Not What the Network Infers: Detecting Adversarial Examples Based on Semantic Contradiction. (99%)Yijun Yang; Ruiyuan Gao; Yu Li; Qiuxia Lai; Qiang Xu
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation. (95%)Zayd Hammoudeh; Daniel Lowd
Attacks and Defenses for Free-Riders in Multi-Discriminator GAN. (76%)Zilong Zhao; Jiyue Huang; Stefanie Roos; Lydia Y. Chen
Backdoor Defense with Machine Unlearning. (33%)Yang Liu; Mingyuan Fan; Cen Chen; Ximeng Liu; Zhuo Ma; Li Wang; Jianfeng Ma
On the Complexity of Attacking Elliptic Curve Based Authentication Chips. (1%)Ievgen Kabin; Zoya Dyka; Dan Klann; Jan Schaeffner; Peter Langendoerfer
2022-01-23
Efficient and Robust Classification for Sparse Attacks. (83%)Mark Beliaev; Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning. (78%)Chenghao Fan; Ziao Li; Wei Wei
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models. (56%)Shagufta Mehnaz; Sayanton V. Dibbo; Ehsanul Kabir; Ninghui Li; Elisa Bertino
Increasing the Cost of Model Extraction with Calibrated Proof of Work. (22%)Adam Dziedzic; Muhammad Ahmad Kaleem; Yu Shen Lu; Nicolas Papernot
2022-01-22
Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection. (99%)Siyuan Liang; Baoyuan Wu; Yanbo Fan; Xingxing Wei; Xiaochun Cao
Robust Unpaired Single Image Super-Resolution of Faces. (98%)Saurabh Goswami; A. N. Rajagopalan
On the Robustness of Counterfactual Explanations to Adverse Perturbations. (10%)Marco Virgolin; Saverio Fracaros
2022-01-21
Natural Attack for Pre-trained Models of Code. (99%)Zhou Yang; Jieke Shi; Junda He; David Lo
Toward Enhanced Robustness in Unsupervised Graph Representation Learning: A Graph Information Bottleneck Perspective. (99%)Jihong Wang; Minnan Luo; Jundong Li; Ziqi Liu; Jun Zhou; Qinghua Zheng
The Security of Deep Learning Defences for Medical Imaging. (80%)Moshe Levy; Guy Amit; Yuval Elovici; Yisroel Mirsky
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World. (75%)Hua Ma; Yinshan Li; Yansong Gao; Alsharif Abuadbba; Zhi Zhang; Anmin Fu; Hyoungshick Kim; Said F. Al-Sarawi; Surya Nepal; Derek Abbott
Identifying Adversarial Attacks on Text Classifiers. (73%)Zhouhang Xie; Jonathan Brophy; Adam Noack; Wencong You; Kalyani Asthana; Carter Perkins; Sabrina Reis; Sameer Singh; Daniel Lowd
The Many Faces of Adversarial Risk. (47%)Muni Sreenivas Pydi; Varun Jog
2022-01-20
TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack. (99%)Zhen Yu; Xiaosen Wang; Wanxiang Che; Kun He
Cheating Automatic Short Answer Grading: On the Adversarial Usage of Adjectives and Adverbs. (95%)Anna Filighera; Sebastian Ochs; Tim Steuer; Thomas Tregel
Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges. (93%)Nuria Rodríguez-Barroso; Daniel Jiménez López; M. Victoria Luzón; Francisco Herrera; Eugenio Martínez-Cámara
Low-Interception Waveform: To Prevent the Recognition of Spectrum Waveform Modulation via Adversarial Examples. (83%)Haidong Xie; Jia Tan; Xiaoying Zhang; Nan Ji; Haihua Liao; Zuguo Yu; Xueshuang Xiang; Naijin Liu
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios. (70%)Zhen Xiang; David J. Miller; George Kesidis
Adversarial Jamming for a More Effective Constellation Attack. (56%)Haidong Xie; Yizhou Xu; Yuanqing Chen; Nan Ji; Shuai Yuan; Naijin Liu; Xueshuang Xiang
Steerable Pyramid Transform Enables Robust Left Ventricle Quantification. (38%)Xiangyang Zhu; Kede Ma; Wufeng Xue
Black-box Prompt Learning for Pre-trained Language Models. (13%)Shizhe Diao; Zhichao Huang; Ruijia Xu; Xuechun Li; Yong Lin; Xiao Zhou; Tong Zhang
DeepGalaxy: Testing Neural Network Verifiers via Two-Dimensional Input Space Exploration. (1%)Xuan Xie; Fuyuan Zhang
2022-01-19
Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation. (96%)Sixiao Zhang; Hongxu Chen; Xiangguo Sun; Yicong Li; Guandong Xu
Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. (8%)Zeyang Sha; Xinlei He; Ning Yu; Michael Backes; Yang Zhang
2022-01-18
MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting. (99%)Xudong Pan; Yifan Yan; Mi Zhang; Min Yang
Adversarial vulnerability of powerful near out-of-distribution detection. (78%)Stanislav Fort
How to Backdoor HyperNetwork in Personalized Federated Learning? (13%)Phung Lai; NhatHai Phan; Issa Khalil; Abdallah Khreishah; Xintao Wu
Secure IoT Routing: Selective Forwarding Attacks and Trust-based Defenses in RPL Network. (2%)Jun Jiang; Yuhong Liu
Unveiling Project-Specific Bias in Neural Code Models. (1%)Zhiming Li; Yanzhou Li; Tianlin Li; Mengnan Du; Bozhi Wu; Yushi Cao; Junzhe Jiang; Yang Liu
Lung Swapping Autoencoder: Learning a Disentangled Structure-texture Representation of Chest Radiographs. (1%)Lei Zhou; Joseph Bae; Huidong Liu; Gagandeep Singh; Jeremy Green; Amit Gupta; Dimitris Samaras; Prateek Prasanna
2022-01-17
Masked Faces with Faced Masks. (81%)Jiayi Zhu; Qing Guo; Felix Juefei-Xu; Yihao Huang; Yang Liu; Geguang Pu
Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations. (56%)Chris Emmery; Ákos Kádár; Grzegorz Chrupała; Walter Daelemans
AugLy: Data Augmentations for Robustness. (3%)Zoe Papakipos; Joanna Bitton
2022-01-16
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems. (99%)Wei Jia; Zhaojun Lu; Haichun Zhang; Zhenglin Liu; Jie Wang; Gang Qu
ALA: Naturalness-aware Adversarial Lightness Attack. (99%)Yihao Huang; Liangru Sun; Qing Guo; Felix Juefei-Xu; Jiayi Zhu; Jincao Feng; Yang Liu; Geguang Pu
Adversarial Machine Learning Threat Analysis in Open Radio Access Networks. (64%)Ron Bitton; Dan Avraham; Eitan Klevansky; Dudu Mimran; Oleg Brodt; Heiko Lehmann; Yuval Elovici; Asaf Shabtai
Neighboring Backdoor Attacks on Graph Convolutional Network. (22%)Liang Chen; Qibiao Peng; Jintang Li; Yang Liu; Jiawei Chen; Yong Li; Zibin Zheng
2022-01-15
Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection. (26%)Yuefei Lyu; Xiaoyu Yang; Jiaxin Liu; Philip S. Yu; Sihong Xie; Xi Zhang
StolenEncoder: Stealing Pre-trained Encoders. (13%)Yupei Liu; Jinyuan Jia; Hongbin Liu; Neil Zhenqiang Gong
2022-01-14
CommonsenseQA 2.0: Exposing the Limits of AI through Gamification. (56%)Alon Talmor; Ori Yoran; Ronan Le Bras; Chandra Bhagavatula; Yoav Goldberg; Yejin Choi; Jonathan Berant
Security Orchestration, Automation, and Response Engine for Deployment of Behavioural Honeypots. (1%)Upendra Bartwal; Subhasis Mukhopadhyay; Rohit Negi; Sandeep Shukla
2022-01-13
Evaluation of Four Black-box Adversarial Attacks and Some Query-efficient Improvement Analysis. (96%)Rui Wang
The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression. (93%)Hamed Hassani; Adel Javanmard
On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles. (83%)Qingzhao Zhang; Shengtuo Hu; Jiachen Sun; Qi Alfred Chen; Z. Morley Mao
Reconstructing Training Data with Informed Adversaries. (54%)Borja Balle; Giovanni Cherubin; Jamie Hayes
Jamming Attacks on Federated Learning in Wireless Networks. (2%)Yi Shi; Yalin E. Sagduyu
2022-01-12
Adversarially Robust Classification by Conditional Generative Model Inversion. (99%)Mitra Alirezaei; Tolga Tasdizen
Towards Adversarially Robust Deep Image Denoising. (99%)Hanshu Yan; Jingfeng Zhang; Jiashi Feng; Masashi Sugiyama; Vincent Y. F. Tan
Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data. (70%)Sunder Ali Khowaja; Ik Hyun Lee; Kapal Dev; Muhammad Aslam Jarwar; Nawab Muhammad Faseeh Qureshi
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges. (1%)Huaming Chen; M. Ali Babar
2022-01-11
Quantifying Robustness to Adversarial Word Substitutions. (99%)Yuting Yang; Pei Huang; FeiFei Ma; Juan Cao; Meishan Zhang; Jian Zhang; Jintao Li
Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition. (99%)Hanrui Wang; Shuo Wang; Zhe Jin; Yandan Wang; Cunjian Chen; Massimo Tistarelli
2022-01-10
Evaluation of Neural Networks Defenses and Attacks using NDCG and Reciprocal Rank Metrics. (98%)Haya Brama; Lihi Dery; Tal Grinshpoun
IoTGAN: GAN Powered Camouflage Against Machine Learning Based IoT Device Identification. (89%)Tao Hou; Tao Wang; Zhuo Lu; Yao Liu; Yalin Sagduyu
Reciprocal Adversarial Learning for Brain Tumor Segmentation: A Solution to BraTS Challenge 2021 Segmentation Task. (73%)Himashi Peiris; Zhaolin Chen; Gary Egan; Mehrtash Harandi
GMFIM: A Generative Mask-guided Facial Image Manipulation Model for Privacy Preservation. (3%)Mohammad Hossein Khojaste; Nastaran Moradzadeh Farid; Ahmad Nickabadi
Towards Group Robustness in the presence of Partial Group Labels. (1%)Vishnu Suresh Lokhande; Kihyuk Sohn; Jinsung Yoon; Madeleine Udell; Chen-Yu Lee; Tomas Pfister
2022-01-09
Rethink Stealthy Backdoor Attacks in Natural Language Processing. (89%)Lingfeng Shen; Haiyun Jiang; Lemao Liu; Shuming Shi
A Retrospective and Futurespective of Rowhammer Attacks and Defenses on DRAM. (76%)Zhi Zhang; Jiahao Qi; Yueqiang Cheng; Shijie Jiang; Yiyang Lin; Yansong Gao; Surya Nepal; Yi Zou
Privacy-aware Early Detection of COVID-19 through Adversarial Training. (10%)Omid Rohanian; Samaneh Kouchaki; Andrew Soltan; Jenny Yang; Morteza Rohanian; Yang Yang; David Clifton
2022-01-08
LoMar: A Local Defense Against Poisoning Attack on Federated Learning. (9%)Xingyu Li; Zhe Qu; Shangqing Zhao; Bo Tang; Zhuo Lu; Yao Liu
PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++. (1%)Jaewoo Song; Fangzhen Lin
2022-01-07
iDECODe: In-distribution Equivariance for Conformal Out-of-distribution Detection. (93%)Ramneet Kaur; Susmit Jha; Anirban Roy; Sangdon Park; Edgar Dobriban; Oleg Sokolsky; Insup Lee
Asymptotic Security using Bayesian Defense Mechanisms with Application to Cyber Deception. (11%)Hampei Sasahara; Henrik Sandberg
Negative Evidence Matters in Interpretable Histology Image Classification. (1%)Soufiane Belharbi; Marco Pedersoli; Ismail Ben Ayed; Luke McCaffrey; Eric Granger
2022-01-06
PAEG: Phrase-level Adversarial Example Generation for Neural Machine Translation. (98%)Juncheng Wan; Jian Yang; Shuming Ma; Dongdong Zhang; Weinan Zhang; Yong Yu; Zhoujun Li
Learning to be adversarially robust and differentially private. (31%)Jamie Hayes; Borja Balle; M. Pawan Kumar
Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time Algorithms and Adversarial Training. (2%)Yatong Bai; Tanmay Gautam; Somayeh Sojoudi
2022-01-05
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving. (99%)Giulio Rossolini; Federico Nesti; Gianluca D'Amico; Saasha Nair; Alessandro Biondi; Giorgio Buttazzo
ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints. (99%)Amira Guesmi; Khaled N. Khasawneh; Nael Abu-Ghazaleh; Ihsen Alouani
Adversarial Robustness in Cognitive Radio Networks. (1%)Makan Zamanipour
2022-01-04
Towards Transferable Unrestricted Adversarial Examples with Minimum Changes. (99%)Fangcheng Liu; Chao Zhang; Hongyang Zhang
Towards Understanding and Harnessing the Effect of Image Transformation in Adversarial Detection. (99%)Hui Liu; Bo Zhao; Yuefeng Peng; Weidong Li; Peng Liu
On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error. (86%)Fabio Brau; Giulio Rossolini; Alessandro Biondi; Giorgio Buttazzo
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness. (31%)Amin Eslami Abyane; Derui Zhu; Roberto Souza; Lei Ma; Hadi Hemmati
Corrupting Data to Remove Deceptive Perturbation: Using Preprocessing Method to Improve System Robustness. (10%)Hieu Le; Hans Walker; Dung Tran; Peter Chin
2022-01-03
Compression-Resistant Backdoor Attack against Deep Neural Networks. (75%)Mingfu Xue; Xin Wang; Shichang Sun; Yushu Zhang; Jian Wang; Weiqiang Liu
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. (68%)Phillip Rieger; Thien Duc Nguyen; Markus Miettinen; Ahmad-Reza Sadeghi
Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control. (11%)Aaron Havens; Darioush Keivan; Peter Seiler; Geir Dullerud; Bin Hu
2022-01-02
Actor-Critic Network for Q&A in an Adversarial Environment. (33%)Bejan Sadeghian
On Sensitivity of Deep Learning Based Text Classification Algorithms to Practical Input Perturbations. (12%)Aamir Miyajiwala; Arnav Ladkat; Samiksha Jagadale; Raviraj Joshi
2022-01-01
Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness. (87%)Hao Yang; Min Wang; Zhengfei Yu; Yun Zhou
Revisiting Neuron Coverage Metrics and Quality of Deep Neural Networks. (41%)Zhou Yang; Jieke Shi; Muhammad Hilmi Asyrofi; David Lo
Generating Adversarial Samples For Training Wake-up Word Detection Systems Against Confusing Words. (1%)Haoxu Wang; Yan Jia; Zeqing Zhao; Xuyang Wang; Junjie Wang; Ming Li
2021-12-31
Adversarial Attack via Dual-Stage Network Erosion. (99%)Yexin Duan; Junhua Zou; Xingyu Zhou; Wu Zhang; Jin Zhang; Zhisong Pan
On Distinctive Properties of Universal Perturbations. (83%)Sung Min Park; Kuo-An Wei; Kai Xiao; Jerry Li; Aleksander Madry
2021-12-30
Benign Overfitting in Adversarially Robust Linear Classification. (99%)Jinghui Chen; Yuan Cao; Quanquan Gu
Causal Attention for Interpretable and Generalizable Graph Classification. (1%)Yongduo Sui; Xiang Wang; Jiancan Wu; Min Lin; Xiangnan He; Tat-Seng Chua
2021-12-29
Invertible Image Dataset Protection. (92%)Kejiang Chen; Xianhan Zeng; Qichao Ying; Sheng Li; Zhenxing Qian; Xinpeng Zhang
Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning. (4%)Junyu Shi; Wei Wan; Shengshan Hu; Jianrong Lu; Leo Yu Zhang
2021-12-28
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. (99%)Weiran Lin; Keane Lucas; Lujo Bauer; Michael K. Reiter; Mahmood Sharif
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently. (99%)Futa Waseda; Sosuke Nishikawa; Trung-Nghia Le; Huy H. Nguyen; Isao Echizen
Repairing Adversarial Texts through Perturbation. (99%)Guoliang Dong; Jingyi Wang; Jun Sun; Sudipta Chattopadhyay; Xinyu Wang; Ting Dai; Jie Shi; Jin Song Dong
DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification. (91%)Aleksandra Ćiprijanović; Diana Kafkes; Gregory Snyder; F. Javier Sánchez; Gabriel Nathan Perdue; Kevin Pedro; Brian Nord; Sandeep Madireddy; Stefan M. Wild
Super-Efficient Super Resolution for Fast Adversarial Defense at the Edge. (88%)Kartikeya Bhardwaj; Dibakar Gope; James Ward; Paul Whatmough; Danny Loh
A General Framework for Evaluating Robustness of Combinatorial Optimization Solvers on Graphs. (86%)Han Lu; Zenan Li; Runzhong Wang; Qibing Ren; Junchi Yan; Xiaokang Yang
Gas Gauge: A Security Analysis Tool for Smart Contract Out-of-Gas Vulnerabilities. (1%)Behkish Nassirzadeh; Huaiying Sun; Sebastian Banescu; Vijay Ganesh
2021-12-27
Adversarial Attack for Asynchronous Event-based Data. (99%)Wooju Lee; Hyun Myung
PRIME: A Few Primitives Can Boost Robustness to Common Corruptions. (81%)Apostolos Modas; Rahul Rade; Guillermo Ortiz-Jiménez; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Associative Adversarial Learning Based on Selective Attack. (26%)Runqi Wang; Xiaoyue Duan; Baochang Zhang; Song Xue; Wentao Zhu; David Doermann; Guodong Guo
Learning Robust and Lightweight Model through Separable Structured Transformations. (8%)Yanhui Huang; Yangyu Xu; Xian Wei
2021-12-26
Perlin Noise Improve Adversarial Robustness. (99%)Chengjun Tang; Kun Zhang; Chunfang Xing; Yong Ding; Zengmin Xu
2021-12-25
Task and Model Agnostic Adversarial Attack on Graph Neural Networks. (99%)Kartik Sharma; Samidha Verma; Sourav Medya; Sayan Ranu; Arnab Bhattacharya
NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification. (50%)Haibin Zheng; Zhiqing Chen; Tianyu Du; Xuhong Zhang; Yao Cheng; Shouling Ji; Jingyi Wang; Yue Yu; Jinyin Chen
2021-12-24
Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping. (99%)Behnam Ghavami; Seyd Movi; Zhenman Fang; Lesley Shannon
Fight Perturbations with Perturbations: Defending Adversarial Attacks via Neuron Influence. (99%)Ruoxi Chen; Haibo Jin; Haibin Zheng; Jinyin Chen; Zhenguang Liu
CatchBackdoor: Backdoor Testing by Critical Trojan Neural Path Identification via Differential Fuzzing. (86%)Haibo Jin; Ruoxi Chen; Jinyin Chen; Yao Cheng; Chong Fu; Ting Wang; Yue Yu; Zhaoyan Ming
SoK: A Study of the Security on Voice Processing Systems. (9%)Robert Chang; Logan Kuo; Arthur Liu; Nader Sehatbakhsh
DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning. (1%)Ismat Jarin; Birhanu Eshete
Gradient Leakage Attack Resilient Deep Learning. (1%)Wenqi Wei; Ling Liu
2021-12-23
Adaptive Modeling Against Adversarial Attacks. (99%)Zhiwen Yan; Teck Khim Ng
Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization. (99%)Yihua Zhang; Guanhua Zhang; Prashant Khanduri; Mingyi Hong; Shiyu Chang; Sijia Liu
Robust Secretary and Prophet Algorithms for Packing Integer Programs. (2%)C. J. Argue; Anupam Gupta; Marco Molinaro; Sahil Singla
Counterfactual Memorization in Neural Language Models. (2%)Chiyuan Zhang; Daphne Ippolito; Katherine Lee; Matthew Jagielski; Florian Tramèr; Nicholas Carlini
2021-12-22
Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art. (99%)Xiang Ling; Lingfei Wu; Jiangyu Zhang; Zhenqing Qu; Wei Deng; Xiang Chen; Yaguan Qian; Chunming Wu; Shouling Ji; Tianyue Luo; Jingzheng Wu; Yanjun Wu
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? (98%)Xinshuai Dong; Luu Anh Tuan; Min Lin; Shuicheng Yan; Hanwang Zhang
Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems. (98%)Islam Debicha; Thibault Debatty; Jean-Michel Dricot; Wim Mees; Tayeb Kenaza
Adversarial Deep Reinforcement Learning for Improving the Robustness of Multi-agent Autonomous Driving Policies. (96%)Aizaz Sharif; Dusica Marijan
Understanding and Measuring Robustness of Multimodal Learning. (69%)Nishant Vishwamitra; Hongxin Hu; Ziming Zhao; Long Cheng; Feng Luo
Evaluating the Robustness of Deep Reinforcement Learning for Autonomous and Adversarial Policies in a Multi-agent Urban Driving Environment. (41%)Aizaz Sharif; Dusica Marijan
2021-12-21
A Theoretical View of Linear Backpropagation and Its Convergence. (99%)Ziang Li; Yiwen Guo; Haodi Liu; Changshui Zhang
AED: An black-box NLP classifier model attacker. (99%)Yueyang Liu; Yan Huang; Zhipeng Cai
Covert Communications via Adversarial Machine Learning and Reconfigurable Intelligent Surfaces. (81%)Brian Kim; Tugba Erpek; Yalin E. Sagduyu; Sennur Ulukus
Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems. (76%)Andrea Stocco; Brian Pulfer; Paolo Tonella
Input-Specific Robustness Certification for Randomized Smoothing. (68%)Ruoxin Chen; Jie Li; Junchi Yan; Ping Li; Bin Sheng
Improving Robustness with Image Filtering. (68%)Matteo Terzi; Mattia Carletti; Gian Antonio Susto
On the Adversarial Robustness of Causal Algorithmic Recourse. (10%)Ricardo Dominguez-Olmedo; Amir-Hossein Karimi; Bernhard Schölkopf
MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation. (4%)Zhongzhi Yu; Yonggan Fu; Sicheng Li; Chaojian Li; Yingyan Lin
Exploring Credibility Scoring Metrics of Perception Systems for Autonomous Driving. (2%)Viren Khandal; Arth Vidyarthi
Adversarial Gradient Driven Exploration for Deep Click-Through Rate Prediction. (2%)Kailun Wu; Zhangming Chan; Weijie Bian; Lejian Ren; Shiming Xiang; Shuguang Han; Hongbo Deng; Bo Zheng
Longitudinal Study of the Prevalence of Malware Evasive Techniques. (1%)Lorenzo Maffia; Dario Nisi; Platon Kotzias; Giovanni Lagorio; Simone Aonzo; Davide Balzarotti
2021-12-20
Certified Federated Adversarial Training. (98%)Giulio Zizzo; Ambrish Rawat; Mathieu Sinn; Sergio Maffeis; Chris Hankin
Energy-bounded Learning for Robust Models of Code. (83%)Nghi D. Q. Bui; Yijun Yu
Black-Box Testing of Deep Neural Networks through Test Case Diversity. (82%)Zohreh Aghababaeyan; Manel Abdellatif; Lionel Briand; Ramesh S; Mojtaba Bagherzadeh
Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction. (80%)Dongfang Li; Baotian Hu; Qingcai Chen; Tujie Xu; Jingcong Tao; Yunan Zhang
Adversarially Robust Stability Certificates can be Sample-Efficient. (2%)Thomas T. C. K. Zhang; Stephen Tu; Nicholas M. Boffi; Jean-Jacques E. Slotine; Nikolai Matni
2021-12-19
Initiative Defense against Facial Manipulation. (67%)Qidong Huang; Jie Zhang; Wenbo Zhou; Weiming Zhang; Nenghai Yu
2021-12-18
Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks. (12%)Simone Marullo; Matteo Tiezzi; Marco Gori; Stefano Melacci
Android-COCO: Android Malware Detection with Graph Neural Network for Byte- and Native-Code. (1%)Peng Xu
2021-12-17
Reasoning Chain Based Adversarial Attack for Multi-hop Question Answering. (92%)Jiayu Ding; Siyuan Wang; Qin Chen; Zhongyu Wei
Deep Bayesian Learning for Car Hacking Detection. (81%)Laha Ale; Scott A. King; Ning Zhang
Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations. (81%)Siddhant Arora; Danish Pruthi; Norman Sadeh; William W. Cohen; Zachary C. Lipton; Graham Neubig
Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network. (80%)An Tao; Yueqi Duan; He Wang; Ziyi Wu; Pengliang Ji; Haowen Sun; Jie Zhou; Jiwen Lu
Provable Adversarial Robustness in the Quantum Model. (62%)Khashayar Barooti; Grzegorz Głuch; Ruediger Urbanke
Domain Adaptation on Point Clouds via Geometry-Aware Implicits. (1%)Yuefan Shen; Yanchao Yang; Mi Yan; He Wang; Youyi Zheng; Leonidas Guibas
2021-12-16
Addressing Adversarial Machine Learning Attacks in Smart Healthcare Perspectives. (99%)Arawinkumaar Selvakkumar; Shantanu Pal; Zahra Jadidi
Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning. (99%)Tong Chen; Zhan Ma
All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines. (99%)Yuxuan Zhang; Bo Dong; Felix Heide
Robust Upper Bounds for Adversarial Training. (75%)Dimitris Bertsimas; Xavier Boix; Kimberly Villalobos Carballo; Dick den Hertog
TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations. (64%)Shivangi Aneja; Lev Markhasin; Matthias Niessner
Sharpness-Aware Minimization with Dynamic Reweighting. (31%)Wenxuan Zhou; Fangyu Liu; Huan Zhang; Muhao Chen
APTSHIELD: A Stable, Efficient and Real-time APT Detection System for Linux Hosts. (16%)Tiantian Zhu; Jinkai Yu; Tieming Chen; Jiayu Wang; Jie Ying; Ye Tian; Mingqi Lv; Yan Chen; Yuan Fan; Ting Wang
Correlation inference attacks against machine learning models. (13%)Ana-Maria Creţu; Florent Guépin; Yves-Alexandre de Montjoye
Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants. (2%)Max Bartolo; Tristan Thrush; Sebastian Riedel; Pontus Stenetorp; Robin Jia; Douwe Kiela
Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images. (2%)Shiran Zada; Itay Benou; Michal Irani
2021-12-15
On the Convergence and Robustness of Adversarial Training. (99%)Yisen Wang; Xingjun Ma; James Bailey; Jinfeng Yi; Bowen Zhou; Quanquan Gu
Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks. (98%)Jaehui Hwang; Huan Zhang; Jun-Ho Choi; Cho-Jui Hsieh; Jong-Seok Lee
DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models. (75%)Hongyu Zhu; Yan Chen; Jing Yan; Jing Liu; Yu Hong; Ying Chen; Hua Wu; Haifeng Wang
Robust Neural Network Classification via Double Regularization. (1%)Olof Zetterqvist; Rebecka Jörnsten; Johan Jonasson
2021-12-14
Adversarial Examples for Extreme Multilabel Text Classification. (99%)Mohammadreza Qaraei; Rohit Babbar
Robustifying automatic speech recognition by extracting slowly varying features. (99%)Matías Pizarro; Dorothea Kolossa; Asja Fischer
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training. (81%)Chen Liu; Zhichao Huang; Mathieu Salzmann; Tong Zhang; Sabine Süsstrunk
Dual-Key Multimodal Backdoors for Visual Question Answering. (81%)Matthew Walmer; Karan Sikka; Indranil Sur; Abhinav Shrivastava; Susmit Jha
MuxLink: Circumventing Learning-Resilient MUX-Locking Using Graph Neural Network-based Link Prediction. (4%)Lilas Alrahis; Satwik Patnaik; Muhammad Shafique; Ozgur Sinanoglu
2021-12-13
Detecting Audio Adversarial Examples with Logit Noising. (99%)Namgyu Park; Sangwoo Ji; Jong Kim
Triangle Attack: A Query-efficient Decision-based Adversarial Attack. (99%)Xiaosen Wang; Zeliang Zhang; Kangheng Tong; Dihong Gong; Kun He; Zhifeng Li; Wei Liu
2021-12-12
Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses. (98%)Chun Pong Lau; Jiang Liu; Hossein Souri; Wei-An Lin; Soheil Feizi; Rama Chellappa
Quantifying and Understanding Adversarial Examples in Discrete Input Spaces. (91%)Volodymyr Kuleshov; Evgenii Nikishin; Shantanu Thakoor; Tingfung Lau; Stefano Ermon
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. (91%)Ashwinee Panda; Saeed Mahloujifar; Arjun N. Bhagoji; Supriyo Chakraborty; Prateek Mittal
WOOD: Wasserstein-based Out-of-Distribution Detection. (12%)Yinan Wang; Wenbo Sun; Jionghua "Judy" Jin; Zhenyu "James" Kong; Xiaowei Yue
2021-12-11
MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare. (99%)Muchao Ye; Junyu Luo; Guanjie Zheng; Cao Xiao; Ting Wang; Fenglong Ma
Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting. (98%)Junhua Zou; Zhisong Pan; Junyang Qiu; Xin Liu; Ting Rui; Wei Li
Stereoscopic Universal Perturbations across Different Architectures and Datasets. (98%)Zachary Berger; Parth Agrawal; Tian Yu Liu; Stefano Soatto; Alex Wong
2021-12-10
Learning to Learn Transferable Attack. (99%)Shuman Fang; Jie Li; Xianming Lin; Rongrong Ji
Cross-Modal Transferable Adversarial Attacks from Images to Videos. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Attacking Point Cloud Segmentation with Color-only Perturbation. (99%)Jiacen Xu; Zhe Zhou; Boyuan Feng; Yufei Ding; Zhou Li
Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks. (92%)Seungyong Moon; Gaon An; Hyun Oh Song
Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning. (75%)Yang Liu; Tianyuan Zou; Yan Kang; Wenhan Liu; Yuanqin He; Zhihao Yi; Qiang Yang
Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models. (68%)Jialuo Chen; Jingyi Wang; Tinglan Peng; Youcheng Sun; Peng Cheng; Shouling Ji; Xingjun Ma; Bo Li; Dawn Song
Efficient Action Poisoning Attacks on Linear Contextual Bandits. (67%)Guanlin Liu; Lifeng Lai
How Private Is Your RL Policy? An Inverse RL Based Analysis Framework. (41%)Kritika Prakash; Fiza Husain; Praveen Paruchuri; Sujit P. Gujar
SoK: On the Security & Privacy in Federated Learning. (5%)Gorka Abad; Stjepan Picek; Aitor Urbieta
2021-12-09
Amicable Aid: Turning Adversarial Attack to Benefit Classification. (99%)Juyeop Kim; Jun-Ho Choi; Soobeom Jang; Jong-Seok Lee
Mutual Adversarial Training: Learning together is better than going alone. (99%)Jiang Liu; Chun Pong Lau; Hossein Souri; Soheil Feizi; Rama Chellappa
PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function. (99%)Manaar Alam; Shubhajit Datta; Debdeep Mukhopadhyay; Arijit Mondal; Partha Pratim Chakrabarti
RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit. (99%)Viet Quoc Vo; Ehsan Abbasnejad; Damith C. Ranasinghe
Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures. (69%)Eugene Bagdasaryan; Vitaly Shmatikov
Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach. (38%)Saber Jafarpour; Matthew Abate; Alexander Davydov; Francesco Bullo; Samuel Coogan
PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures. (10%)Dan Hendrycks; Andy Zou; Mantas Mazeika; Leonard Tang; Dawn Song; Jacob Steinhardt
Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems. (2%)Jiankai Jin; Eleanor McMurtry; Benjamin I. P. Rubinstein; Olga Ohrimenko
3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection. (1%)Alexander Lehner; Stefano Gasperini; Alvaro Marcos-Ramiro; Michael Schmidt; Mohammad-Ali Nikouei Mahani; Nassir Navab; Benjamin Busam; Federico Tombari
2021-12-08
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection. (99%)Jiang Liu; Alexander Levine; Chun Pong Lau; Rama Chellappa; Soheil Feizi
On visual self-supervision and its effect on model robustness. (99%)Michal Kucer; Diane Oyen; Garrett Kenyon
SNEAK: Synonymous Sentences-Aware Adversarial Attack on Natural Language Video Localization. (93%)Wenbo Gou; Wen Shi; Jian Lou; Lijie Huang; Pan Zhou; Ruixuan Li
Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework. (8%)Ching-Yun Ko; Jeet Mohapatra; Sijia Liu; Pin-Yu Chen; Luca Daniel; Lily Weng
2021-12-07
Saliency Diversified Deep Ensemble for Robustness to Adversaries. (99%)Alex Bogun; Dimche Kostadinov; Damian Borth
Vehicle trajectory prediction works, but not everywhere. (50%)Mohammadhossein Bahari; Saeed Saadatnejad; Ahmad Rahimi; Mohammad Shaverdikondori; Mohammad Shahidzadeh; Seyed-Mohsen Moosavi-Dezfooli; Alexandre Alahi
Lightning: Striking the Secure Isolation on GPU Clouds with Transient Hardware Faults. (11%)Rihui Sun; Pefei Qiu; Yongqiang Lyu; Donsheng Wang; Jiang Dong; Gang Qu
Membership Inference Attacks From First Principles. (2%)Nicholas Carlini; Steve Chien; Milad Nasr; Shuang Song; Andreas Terzis; Florian Tramer
Training Deep Models to be Explained with Fewer Examples. (1%)Tomoharu Iwata; Yuya Yoshikawa
Presentation Attack Detection Methods based on Gaze Tracking and Pupil Dynamic: A Comprehensive Survey. (1%)Jalil Nourmohammadi Khiarak
2021-12-06
Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review. (99%)Huda Ali Alatwi; Charles Morisset
Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal. (84%)Yucheng Shi; Yahong Han; Yu-an Tan; Xiaohui Kuang
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks. (82%)Jing Lin; Long Dang; Mohamed Rahouti; Kaiqi Xiong
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks. (82%)Xi Li; Zhen Xiang; David J. Miller; George Kesidis
When the Curious Abandon Honesty: Federated Learning Is Not Private. (68%)Franziska Boenisch; Adam Dziedzic; Roei Schuster; Ali Shahin Shamsabadi; Ilia Shumailov; Nicolas Papernot
Defending against Model Stealing via Verifying Embedded External Features. (33%)Yiming Li; Linghui Zhu; Xiaojun Jia; Yong Jiang; Shu-Tao Xia; Xiaochun Cao
Context-Aware Transfer Attacks for Object Detection. (1%)Zikui Cai; Xinxin Xie; Shasha Li; Mingjun Yin; Chengyu Song; Srikanth V. Krishnamurthy; Amit K. Roy-Chowdhury; M. Salman Asif
2021-12-05
Robust Active Learning: Sample-Efficient Training of Robust Deep Learning Models. (96%)Yuejun Guo; Qiang Hu; Maxime Cordy; Mike Papadakis; Yves Le Traon
Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness. (88%)Konstantinos P. Panousis; Sotirios Chatzis; Sergios Theodoridis
Beyond Robustness: Resilience Verification of Tree-Based Classifiers. (2%)Stefano Calzavara; Lorenzo Cazzaro; Claudio Lucchese; Federico Marcuzzi; Salvatore Orlando
On Impact of Semantically Similar Apps in Android Malware Datasets. (1%)Roopak Surendran
2021-12-04
RADA: Robust Adversarial Data Augmentation for Camera Localization in Challenging Weather. (10%)Jialu Wang; Muhamad Risqi U. Saputra; Chris Xiaoxuan Lu; Niki Trigon; Andrew Markham
2021-12-03
Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach. (99%)James Lee Hu; Mohammadreza Ebrahimi; Hsinchun Chen
Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing. (99%)Bhagyashree Puranik; Upamanyu Madhow; Ramtin Pedarsani
Blackbox Untargeted Adversarial Testing of Automatic Speech Recognition Systems. (98%)Xiaoliang Wu; Ajitha Rajan
Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models. (54%)Tochukwu Idika; Ismail Akturk
Adversarial Attacks against a Satellite-borne Multispectral Cloud Detector. (13%)Andrew Du; Yee Wei Law; Michele Sasdelli; Bo Chen; Ken Clarke; Michael Brown; Tat-Jun Chin
A Game-Theoretic Approach for AI-based Botnet Attack Defence. (9%)Hooman Alavizadeh; Julian Jang-Jaccard; Tansu Alpcan; Seyit A. Camtepe
2021-12-02
A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space. (99%)Thibault Simonetto; Salijona Dyrmishi; Salah Ghamizi; Maxime Cordy; Yves Le Traon
Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? (93%)Ayesha Siddique; Khaza Anuarul Hoque
Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness? (75%)Peter Lorenz; Dominik Strassel; Margret Keuper; Janis Keuper
Training Efficiency and Robustness in Deep Learning. (41%)Fartash Faghri
FedRAD: Federated Robust Adaptive Distillation. (10%)Stefán Páll Sturluson; Samuel Trew; Luis Muñoz-González; Matei Grama; Jonathan Passerat-Palmbach; Daniel Rueckert; Amir Alansary
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis. (3%)Yu Feng; Benteng Ma; Jing Zhang; Shanshan Zhao; Yong Xia; Dacheng Tao
On the Existence of the Adversarial Bayes Classifier (Extended Version). (2%)Pranjal Awasthi; Natalie S. Frank; Mehryar Mohri
Editing a classifier by rewriting its prediction rules. (1%)Shibani Santurkar; Dimitris Tsipras; Mahalaxmi Elango; David Bau; Antonio Torralba; Aleksander Madry
2021-12-01
Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems. (99%)Siyu Wang; Yuanjiang Cao; Xiaocong Chen; Lina Yao; Xianzhi Wang; Quan Z. Sheng
Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness. (99%)Jia-Li Yin; Lehui Xie; Wanqing Zhu; Ximeng Liu; Bo-Hao Chen
$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training. (99%)Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines. (96%)Jiachen Sun; Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Dan Hendrycks; Jihun Hamm; Z. Morley Mao
Adv-4-Adv: Thwarting Changing Adversarial Perturbations via Adversarial Domain Adaptation. (95%)Tianyue Zheng; Zhe Chen; Shuya Ding; Chao Cai; Jun Luo
Robustness in Deep Learning for Computer Vision: Mind the gap? (31%)Nathan Drenkow; Numair Sani; Ilya Shpitser; Mathias Unberath
CYBORG: Blending Human Saliency Into the Loss Improves Deep Learning. (1%)Aidan Boyd; Patrick Tinsley; Kevin Bowyer; Adam Czajka
2021-11-30
Using a GAN to Generate Adversarial Examples to Facial Image Recognition. (99%)Andrew Merrigan; Alan F. Smeaton
Mitigating Adversarial Attacks by Distributing Different Copies to Different Users. (96%)Jiyi Zhang; Wesley Joon-Wie Tann; Ee-Chien Chang
Human Imperceptible Attacks and Applications to Improve Fairness. (83%)Xinru Hua; Huanzhong Xu; Jose Blanchet; Viet Nguyen
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. (81%)Yangsibo Huang; Samyak Gupta; Zhao Song; Kai Li; Sanjeev Arora
FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection. (78%)Nikolaos Dionelis
COREATTACK: Breaking Up the Core Structure of Graphs. (78%)Bo Zhou; Yuqian Lv; Jinhuan Wang; Jian Zhang; Qi Xuan
Adversarial Attacks Against Deep Generative Models on Data: A Survey. (12%)Hui Sun; Tianqing Zhu; Zhiqiu Zhang; Dawei Jin; Ping Xiong; Wanlei Zhou
A Face Recognition System's Worst Morph Nightmare, Theoretically. (1%)Una M. Kelly; Raymond Veldhuis; Luuk Spreeuwers
New Datasets for Dynamic Malware Classification. (1%)Berkant Düzgün; Aykut Çayır; Ferhat Demirkıran; Ceyda Nur Kayha; Buket Gençaydın; Hasan Dağ
Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems. (1%)Xingyu Zhao; Wei Huang; Vibhav Bharti; Yi Dong; Victoria Cox; Alec Banks; Sen Wang; Sven Schewe; Xiaowei Huang
2021-11-29
MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack. (99%)Mengting Xu; Tao Zhang; Daoqiang Zhang
Adversarial Attacks in Cooperative AI. (82%)Ted Fujimoto; Arthur Paul Pedersen
Living-Off-The-Land Command Detection Using Active Learning. (10%)Talha Ongun; Jack W. Stokes; Jonathan Bar Or; Ke Tian; Farid Tajaddodianfar; Joshua Neil; Christian Seifert; Alina Oprea; John C. Platt
Do Invariances in Deep Neural Networks Align with Human Perception? (9%)Vedant Nanda; Ayan Majumdar; Camila Kolling; John P. Dickerson; Krishna P. Gummadi; Bradley C. Love; Adrian Weller
A Simple Long-Tailed Recognition Baseline via Vision-Language Model. (1%)Teli Ma; Shijie Geng; Mengmeng Wang; Jing Shao; Jiasen Lu; Hongsheng Li; Peng Gao; Yu Qiao
ROBIN : A Benchmark for Robustness to Individual Nuisances in Real-World Out-of-Distribution Shifts. (1%)Bingchen Zhao; Shaozuo Yu; Wufei Ma; Mingxin Yu; Shenxiao Mei; Angtian Wang; Ju He; Alan Yuille; Adam Kortylewski
Pyramid Adversarial Training Improves ViT Performance. (1%)Charles Herrmann; Kyle Sargent; Lu Jiang; Ramin Zabih; Huiwen Chang; Ce Liu; Dilip Krishnan; Deqing Sun
2021-11-28
Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images. (96%)Dvij Kalaria; Aritra Hazra; Partha Pratim Chakrabarti
MALIGN: Explainable Static Raw-byte Based Malware Family Classification using Sequence Alignment. (68%)Shoumik Saha; Sadia Afroz; Atif Rahman
Automated Runtime-Aware Scheduling for Multi-Tenant DNN Inference on GPU. (1%)Fuxun Yu; Shawn Bray; Di Wang; Longfei Shangguan; Xulong Tang; Chenchen Liu; Xiang Chen
ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification. (1%)Zhibo Zhang; Jongseong Jang; Chiheb Trabelsi; Ruiwen Li; Scott Sanner; Yeonjeong Jeong; Dongsub Shim
2021-11-27
Adaptive Image Transformations for Transfer-based Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Shiguang Shan
Adaptive Perturbation for Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Zhaoyan Jiang; Liangliang Li; Shiguang Shan
Statically Detecting Adversarial Malware through Randomised Chaining. (98%)Matthew Crawford; Wei Wang; Ruoxi Sun; Minhui Xue
Dissecting Malware in the Wild. (1%)Hamish Spencer; Wei Wang; Ruoxi Sun; Minhui Xue
2021-11-26
ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural Networks. (50%)Hua Qi; Zhijie Wang; Qing Guo; Jianlang Chen; Felix Juefei-Xu; Lei Ma; Jianjun Zhao
2021-11-25
Natural & Adversarial Bokeh Rendering via Circle-of-Confusion Predictive Network. (99%)Yihao Huang; Felix Juefei-Xu; Qing Guo; Geguang Pu; Yang Liu
Clustering Effect of (Linearized) Adversarial Robust Models. (97%)Yang Bai; Xin Yan; Yong Jiang; Shu-Tao Xia; Yisen Wang
Simple Contrastive Representation Adversarial Learning for NLP Tasks. (93%)Deshui Miao; Jiaqi Zhang; Wenbo Xie; Jian Song; Xin Li; Lijuan Jia; Ning Guo
Going Grayscale: The Road to Understanding and Improving Unlearnable Examples. (92%)Zhuoran Liu; Zhengyu Zhao; Alex Kolmus; Tijn Berns; Twan van Laarhoven; Tom Heskes; Martha Larson
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. (92%)Xiangyu Qi; Tinghao Xie; Ruizhe Pan; Jifeng Zhu; Yong Yang; Kai Bu
Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning. (3%)Sanjay Kariyappa; Moinuddin K Qureshi
Joint inference and input optimization in equilibrium networks. (1%)Swaminathan Gurumurthy; Shaojie Bai; Zachary Manchester; J. Zico Kolter
2021-11-24
Unity is strength: Improving the Detection of Adversarial Examples with Ensemble Approaches. (99%)Francesco Craighero; Fabrizio Angaroni; Fabio Stella; Chiara Damiani; Marco Antoniotti; Alex Graudenzi
Thundernna: a white box adversarial attack. (99%)Linfeng Ye; Shayan Mohajer Hamidi
Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity. (92%)Bernardo Aquino; Arash Rahnama; Peter Seiler; Lizhen Lin; Vijay Gupta
WFDefProxy: Modularly Implementing and Empirically Evaluating Website Fingerprinting Defenses. (15%)Jiajun Gong; Wuqi Zhang; Charles Zhang; Tao Wang
Sharpness-aware Quantization for Deep Neural Networks. (10%)Jing Liu; Jianfei Cai; Bohan Zhuang
SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation. (5%)Yizhou Wang; Can Qin; Rongzhe Wei; Yi Xu; Yue Bai; Yun Fu
An Attack on Facial Soft-biometric Privacy Enhancement. (2%)Dailé Osorio-Roig; Christian Rathgeb; Pawel Drozdowski; Philipp Terhörst; Vitomir Štruc; Christoph Busch
Accelerating Deep Learning with Dynamic Data Pruning. (1%)Ravi S Raju; Kyle Daruwalla; Mikko Lipasti
2021-11-23
Adversarial machine learning for protecting against online manipulation. (92%)Stefano Cresci; Marinella Petrocchi; Angelo Spognardi; Stefano Tognazzi
Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS. (84%)Christian Schroeder de Witt; Yongchao Huang; Philip H. S. Torr; Martin Strohmeier
Subspace Adversarial Training. (69%)Tao Li; Yingwen Wu; Sizhe Chen; Kun Fang; Xiaolin Huang
HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance. (1%)Huanrui Yang; Xiaoxuan Yang; Neil Zhenqiang Gong; Yiran Chen
2021-11-22
Adversarial Examples on Segmentation Models Can be Easy to Transfer. (99%)Jindong Gu; Hengshuang Zhao; Volker Tresp; Philip Torr
Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes. (99%)Utku Ozbulak; Maura Pintor; Arnout Van Messem; Wesley De Neve
Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification. (99%)Daizong Liu; Wei Hu
Backdoor Attack through Frequency Domain. (92%)Tong Wang; Yuan Yao; Feng Xu; Shengwei An; Hanghang Tong; Ting Wang
NTD: Non-Transferability Enabled Backdoor Detection. (69%)Yinshan Li; Hua Ma; Zhi Zhang; Yansong Gao; Alsharif Abuadbba; Anmin Fu; Yifeng Zheng; Said F. Al-Sarawi; Derek Abbott
A Comparison of State-of-the-Art Techniques for Generating Adversarial Malware Binaries. (33%)Prithviraj Dasgupta; Zachariah Osman
Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. (13%)Yongji Wu; Xiaoyu Cao; Jinyuan Jia; Neil Zhenqiang Gong
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration. (1%)Yifan Gong; Geng Yuan; Zheng Zhan; Wei Niu; Zhengang Li; Pu Zhao; Yuxuan Cai; Sijia Liu; Bin Ren; Xue Lin; Xulong Tang; Yanzhi Wang
Electric Vehicle Attack Impact on Power Grid Operation. (1%)Mohammad Ali Sayed; Ribal Atallah; Chadi Assi; Mourad Debbabi
2021-11-21
Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability. (99%)Yifeng Xiong; Jiadong Lin; Min Zhang; John E. Hopcroft; Kun He
Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Model. (99%)Alon Zolfi; Shai Avidan; Yuval Elovici; Asaf Shabtai
Medical Aegis: Robust adversarial protectors for medical images. (99%)Qingsong Yao; Zecheng He; S. Kevin Zhou
Local Linearity and Double Descent in Catastrophic Overfitting. (73%)Varun Sivashankar; Nikil Selvam
Denoised Internal Models: a Brain-Inspired Autoencoder against Adversarial Attacks. (62%)Kaiyuan Liu; Xingyu Li; Yi Zhou; Jisong Guan; Yurui Lai; Ge Zhang; Hang Su; Jiachen Wang; Chunxu Guo
2021-11-20
Are Vision Transformers Robust to Patch Perturbations? (98%)Jindong Gu; Volker Tresp; Yao Qin
2021-11-19
Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method. (99%)Tao Bai; Jun Zhao; Jinlin Zhu; Shoudong Han; Jiefeng Chen; Bo Li; Alex Kot
Meta Adversarial Perturbations. (99%)Chia-Hung Yuan; Pin-Yu Chen; Chia-Mu Yu
Resilience from Diversity: Population-based approach to harden models against adversarial attacks. (99%)Jasser Jasser; Ivan Garibay
Enhanced countering adversarial attacks via input denoising and feature restoring. (99%)Yanni Li; Wenhui Zhang; Jiawei Liu; Xiaoli Kou; Hui Li; Jiangtao Cui
PatchCensor: Patch Robustness Certification for Transformers via Exhaustive Testing. (99%)Yuheng Huang; Lei Ma; Yuanchun Li
Fooling Adversarial Training with Inducing Noise. (98%)Zhirui Wang; Yifei Wang; Yisen Wang
Exposing Weaknesses of Malware Detectors with Explainability-Guided Evasion Attacks. (86%)Wei Wang; Ruoxi Sun; Tian Dong; Shaofeng Li; Minhui Xue; Gareth Tyson; Haojin Zhu
2021-11-18
TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems. (99%)Bao Gia Doan; Minhui Xue; Shiqing Ma; Ehsan Abbasnejad; Damith C. Ranasinghe
A Review of Adversarial Attack and Defense for Classification Methods. (99%)Yao Li; Minhao Cheng; Cho-Jui Hsieh; Thomas C. M. Lee
Robust Person Re-identification with Multi-Modal Joint Defence. (98%)Yunpeng Gong; Lifei Chen
Enhancing the Insertion of NOP Instructions to Obfuscate Malware via Deep Reinforcement Learning. (96%)Daniel Gibert; Matt Fredrikson; Carles Mateu; Jordi Planes; Quan Le
How to Build Robust FAQ Chatbot with Controllable Question Generator? (80%)Yan Pan; Mingyang Ma; Bernhard Pflugfelder; Georg Groh
Adversarial attacks on voter model dynamics in complex networks. (76%)Katsumi Chiyomaru; Kazuhiro Takemoto
Enhanced Membership Inference Attacks against Machine Learning Models. (12%)Jiayuan Ye; Aadyaa Maddi; Sasi Kumar Murakonda; Reza Shokri
Wiggling Weights to Improve the Robustness of Classifiers. (2%)Sadaf Gulshad; Ivan Sosnovik; Arnold Smeulders
Improving Transferability of Representations via Augmentation-Aware Self-Supervision. (1%)Hankook Lee; Kibok Lee; Kimin Lee; Honglak Lee; Jinwoo Shin
2021-11-17
TraSw: Tracklet-Switch Adversarial Attacks against Multi-Object Tracking. (99%)Delv Lin; Qi Chen; Chengyu Zhou; Kun He
Generating Unrestricted 3D Adversarial Point Clouds. (99%)Xuelong Dai; Yanjie Li; Hua Dai; Bin Xiao
SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness. (93%)Jongheon Jeong; Sejun Park; Minkyu Kim; Heung-Chang Lee; Doguk Kim; Jinwoo Shin
Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation. (92%)Mehdi Sadi; B. M. S. Bahar Talukder; Kaniz Mishty; Md Tauhidur Rahman
Do Not Trust Prediction Scores for Membership Inference Attacks. (33%)Dominik Hintersdorf; Lukas Struppek; Kristian Kersting
2021-11-16
Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks. (99%)Adaku Uchendu; Daniel Campoy; Christopher Menart; Alexandra Hildenbrandt
Improving the robustness and accuracy of biomedical language models through adversarial training. (99%)Milad Moradi; Matthias Samwald
Detecting AutoAttack Perturbations in the Frequency Domain. (99%)Peter Lorenz; Paula Harder; Dominik Strassel; Margret Keuper; Janis Keuper
Adversarial Tradeoffs in Linear Inverse Problems and Robust State Estimation. (92%)Bruce D. Lee; Thomas T. C. K. Zhang; Hamed Hassani; Nikolai Matni
Consistent Semantic Attacks on Optical Flow. (81%)Tom Koren; Lior Talker; Michael Dinerstein; Roy J Jevnisek
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences. (54%)Wei Guo; Benedetta Tondi; Mauro Barni
Enabling equivariance for arbitrary Lie groups. (1%)Lachlan Ewen MacDonald; Sameera Ramasinghe; Simon Lucey
2021-11-15
A Survey on Adversarial Attacks for Malware Analysis. (98%)Kshitiz Aryal; Maanak Gupta; Mahmoud Abdelsalam
Triggerless Backdoor Attack for NLP Tasks with Clean Labels. (68%)Leilei Gan; Jiwei Li; Tianwei Zhang; Xiaoya Li; Yuxian Meng; Fei Wu; Shangwei Guo; Chun Fan
Property Inference Attacks Against GANs. (67%)Junhao Zhou; Yufei Chen; Chao Shen; Yang Zhang
FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning. (1%)Yuezhou Wu; Yan Kang; Jiahuan Luo; Yuanqin He; Qiang Yang
2021-11-14
Generating Band-Limited Adversarial Surfaces Using Neural Networks. (99%)Roee Ben-Shlomo; Yevgeniy Men; Ido Imanuel
Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks. (76%)Chen Ma; Xiangyu Guo; Li Chen; Jun-Hai Yong; Yisen Wang
Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning. (75%)Youxiang Zhu; Bang Tran; Xiaohui Liang; John A. Batsis; Robert M. Roth
Improving Compound Activity Classification via Deep Transfer and Representation Learning. (1%)Vishal Dey; Raghu Machiraju; Xia Ning
2021-11-13
Robust and Accurate Object Detection via Self-Knowledge Distillation. (62%)Weipeng Xu; Pengzhi Chu; Renhao Xie; Xiongziyan Xiao; Hongcheng Huang
UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction. (2%)Lilas Alrahis; Satwik Patnaik; Muhammad Abdullah Hanif; Muhammad Shafique; Ozgur Sinanoglu
2021-11-12
Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception. (99%)Joel Dapello; Jenelle Feather; Hang Le; Tiago Marques; David D. Cox; Josh H. McDermott; James J. DiCarlo; SueYeon Chung
Measuring the Contribution of Multiple Model Representations in Detecting Adversarial Instances. (98%)Daniel Steinberg; Paul Munro
Adversarially Robust Learning for Security-Constrained Optimal Power Flow. (10%)Priya L. Donti; Aayushya Agarwal; Neeraj Vijay Bedmutha; Larry Pileggi; J. Zico Kolter
On Transferability of Prompt Tuning for Natural Language Processing. (8%)Yusheng Su; Xiaozhi Wang; Yujia Qin; Chi-Min Chan; Yankai Lin; Huadong Wang; Kaiyue Wen; Zhiyuan Liu; Peng Li; Juanzi Li; Lei Hou; Maosong Sun; Jie Zhou
A Bayesian Nash equilibrium-based moving target defense against stealthy sensor attacks. (1%)David Umsonst; Serkan Sarıtaş; György Dán; Henrik Sandberg
Resilient Consensus-based Multi-agent Reinforcement Learning. (1%)Martin Figura; Yixuan Lin; Ji Liu; Vijay Gupta
2021-11-11
On the Equivalence between Neural Network and Support Vector Machine. (1%)Yilan Chen; Wei Huang; Lam M. Nguyen; Tsui-Wei Weng
2021-11-10
Trustworthy Medical Segmentation with Uncertainty Estimation. (93%)Giuseppina Carannante; Dimah Dera; Nidhal C. Bouaynaya; Ghulam Rasool; Hassan M. Fathallah-Shaykh
Robust Learning via Ensemble Density Propagation in Deep Neural Networks. (2%)Giuseppina Carannante; Dimah Dera; Ghulam Rasool; Nidhal C. Bouaynaya; Lyudmila Mihaylova
2021-11-09
Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search. (99%)Pengfei Xia; Ziqiang Li; Bin Li
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps. (99%)Muhammad Awais; Fengwei Zhou; Chuanlong Xie; Jiawei Li; Sung-Ho Bae; Zhenguo Li
Sparse Adversarial Video Attacks with Spatial Transformations. (98%)Ronghui Mu; Wenjie Ruan; Leandro Soriano Marcolino; Qiang Ni
A Statistical Difference Reduction Method for Escaping Backdoor Detection. (97%)Pengfei Xia; Hongjing Niu; Ziqiang Li; Bin Li
Data Augmentation Can Improve Robustness. (73%)Sylvestre-Alvise Rebuffi; Sven Gowal; Dan A. Calian; Florian Stimberg; Olivia Wiles; Timothy Mann
Are Transformers More Robust Than CNNs? (67%)Yutong Bai; Jieru Mei; Alan Yuille; Cihang Xie
2021-11-08
Geometrically Adaptive Dictionary Attack on Face Recognition. (99%)Junyoung Byun; Hyojun Go; Changick Kim
Defense Against Explanation Manipulation. (98%)Ruixiang Tang; Ninghao Liu; Fan Yang; Na Zou; Xia Hu
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories. (98%)Adnan Siraj Rakin; Md Hafizul Islam Chowdhuryy; Fan Yao; Deliang Fan
On Assessing The Safety of Reinforcement Learning algorithms Using Formal Methods. (75%)Paulina Stevia Nouwou Mindom; Amin Nikanjam; Foutse Khomh; John Mullins
Get a Model! Model Hijacking Attack Against Machine Learning Models. (69%)Ahmed Salem; Michael Backes; Yang Zhang
Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks. (69%)Lijia Yu; Xiao-Shan Gao
Characterizing the adversarial vulnerability of speech self-supervised learning. (68%)Haibin Wu; Bo Zheng; Xu Li; Xixin Wu; Hung-yi Lee; Helen Meng
HAPSSA: Holistic Approach to PDF Malware Detection Using Signal and Statistical Analysis. (67%)Tajuddin Manhar Mohammed; Lakshmanan Nataraj; Satish Chikkagoudar; Shivkumar Chandrasekaran; B. S. Manjunath
Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning. (67%)Qinkai Zheng; Xu Zou; Yuxiao Dong; Yukuo Cen; Da Yin; Jiarong Xu; Yang Yang; Jie Tang
BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination. (45%)Ece Isik-Polat; Gorkem Polat; Altan Kocyigit
2021-11-07
Generative Dynamic Patch Attack. (99%)Xiang Li; Shihao Ji
Natural Adversarial Objects. (81%)Felix Lau; Nishant Subramani; Sasha Harrison; Aerin Kim; Elliot Branson; Rosanne Liu
2021-11-06
"How Does It Detect A Malicious App?" Explaining the Predictions of AI-based Android Malware Detector. (11%)Zhi Lu; Vrizlynn L. L. Thing
2021-11-05
A Unified Game-Theoretic Interpretation of Adversarial Robustness. (98%)Jie Ren; Die Zhang; Yisen Wang; Lu Chen; Zhanpeng Zhou; Yiting Chen; Xu Cheng; Xin Wang; Meng Zhou; Jie Shi; Quanshi Zhang
Sequential Randomized Smoothing for Adversarially Robust Speech Recognition. (96%)Raphael Olivier; Bhiksha Raj
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups. (2%)Aidmar Wainakh; Ephraim Zimmer; Sandeep Subedi; Jens Keim; Tim Grube; Shankar Karuppayah; Alejandro Sanchez Guinea; Max Mühlhäuser
2021-11-04
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models. (99%)Boxin Wang; Chejian Xu; Shuohang Wang; Zhe Gan; Yu Cheng; Jianfeng Gao; Ahmed Hassan Awadallah; Bo Li
Adversarial Attacks on Graph Classification via Bayesian Optimisation. (87%)Xingchen Wan; Henry Kenlay; Binxin Ru; Arno Blaas; Michael A. Osborne; Xiaowen Dong
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods. (47%)Peru Bhardwaj; John Kelleher; Luca Costabello; Declan O'Sullivan
Attacking Deep Reinforcement Learning-Based Traffic Signal Control Systems with Colluding Vehicles. (3%)Ao Qu; Yihong Tang; Wei Ma
2021-11-03
LTD: Low Temperature Distillation for Robust Adversarial Training. (88%)Erh-Chung Chen; Che-Rung Lee
Multi-Glimpse Network: A Robust and Efficient Classification Architecture based on Recurrent Downsampled Attention. (41%)Sia Huat Tan; Runpei Dong; Kaisheng Ma
2021-11-02
Effective and Imperceptible Adversarial Textual Attack via Multi-objectivization. (99%)Shengcai Liu; Ning Lu; Wenjing Hong; Chao Qian; Ke Tang
Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks. (96%)Maksym Yatsura; Jan Hendrik Metzen; Matthias Hein
Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds. (70%)Yujia Huang; Huan Zhang; Yuanyuan Shi; J Zico Kolter; Anima Anandkumar
Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness. (68%)Ke Sun; Mingjie Li; Zhouchen Lin
Knowledge Cross-Distillation for Membership Privacy. (38%)Rishav Chourasia; Batnyam Enkhtaivan; Kunihiro Ito; Junki Mori; Isamu Teranishi; Hikaru Tsuchida
Adversarially Perturbed Wavelet-based Morphed Face Generation. (9%)Kelsey O'Haire; Sobhan Soleymani; Baaria Chaudhary; Poorya Aghdaie; Jeremy Dawson; Nasser M. Nasrabadi
2021-11-01
Graph Structural Attack by Spectral Distance. (93%)Lu Lin; Ethan Blaser; Hongning Wang
Availability Attacks Create Shortcuts. (89%)Da Yu; Huishuai Zhang; Wei Chen; Jian Yin; Tie-Yan Liu
Robustness of deep learning algorithms in astronomy -- galaxy morphology studies. (83%)A. Ćiprijanović; D. Kafkes; G. N. Perdue; K. Pedro; G. Snyder; F. J. Sánchez; S. Madireddy; S. Wild; B. Nord
When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? (69%)Lijie Fan; Sijia Liu; Pin-Yu Chen; Gaoyuan Zhang; Chuang Gan
ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack. (9%)Dahoon Park; Kon-Woo Kwon; Sunghoon Im; Jaeha Kung
2021-10-31
An Actor-Critic Method for Simulation-Based Optimization. (56%)Kuo Li; Qing-Shan Jia; Jiaqi Yan
2021-10-30
Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach. (97%)Anindya Sarkar; Anirban Sarkar; Sowrya Gali; Vineeth N Balasubramanian
AdvCodeMix: Adversarial Attack on Code-Mixed Data. (93%)Sourya Dipta Das; Ayan Basak; Soumil Mandal; Dipankar Das
Backdoor Pre-trained Models Can Transfer to All. (3%)Lujia Shen; Shouling Ji; Xuhong Zhang; Jinfeng Li; Jing Chen; Jie Shi; Chengfang Fang; Jianwei Yin; Ting Wang
Trojan Source: Invisible Vulnerabilities. (1%)Nicholas Boucher; Ross Anderson
2021-10-29
Attacking Video Recognition Models with Bullet-Screen Comments. (99%)Kai Chen; Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Adversarial Robustness with Semi-Infinite Constrained Learning. (92%)Alexander Robey; Luiz F. O. Chamon; George J. Pappas; Hamed Hassani; Alejandro Ribeiro
ε-weakened Robustness of Deep Neural Networks. (62%)Pei Huang; Yuting Yang; Minghao Liu; Fuqi Jia; Feifei Ma; Jian Zhang
You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership. (11%)Xuxi Chen; Tianlong Chen; Zhenyu Zhang; Zhangyang Wang
2021-10-28
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework. (99%)Lifan Yuan; Yichi Zhang; Yangyi Chen; Wei Wei
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis. (92%)Junfeng Guo; Ang Li; Cong Liu
The magnitude vector of images. (1%)Michael F. Adamer; Leslie O'Bray; Brouwer Edward De; Bastian Rieck; Karsten Borgwardt
2021-10-27
Towards Evaluating the Robustness of Neural Networks Learned by Transduction. (98%)Jiefeng Chen; Xi Wu; Yang Guo; Yingyu Liang; Somesh Jha
CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks. (98%)Haotian Xue; Kaixiong Zhou; Tianlong Chen; Kai Guo; Xia Hu; Yi Chang; Xin Wang
Towards Robust Reasoning over Knowledge Graphs. (83%)Zhaohan Xi; Ren Pang; Changjiang Li; Shouling Ji; Xiapu Luo; Xusheng Xiao; Ting Wang
Binarized ResNet: Enabling Robust Automatic Modulation Classification at the resource-constrained Edge. (80%)Deepsayan Sadhukhan; Nitin Priyadarshini Shankar; Nancy Nayak; Thulasi Tholeti; Sheetal Kalyani
Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks. (74%)Hassan Dbouk; Naresh R. Shanbhag
Adversarial Neuron Pruning Purifies Backdoored Deep Models. (15%)Dongxian Wu; Yisen Wang
From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems. (5%)Yao Zhou; Haonan Wang; Jingrui He; Haixun Wang
Robust Contrastive Learning Using Negative Samples with Diminished Semantics. (1%)Songwei Ge; Shlok Mishra; Haohan Wang; Chun-Liang Li; David Jacobs
RoMA: Robust Model Adaptation for Offline Model-based Optimization. (1%)Sihyun Yu; Sungsoo Ahn; Le Song; Jinwoo Shin
2021-10-26
Can't Fool Me: Adversarially Robust Transformer for Video Understanding. (99%)Divya Choudhary; Palash Goyal; Saurabh Sahu
Frequency Centric Defense Mechanisms against Adversarial Examples. (99%)Sanket B. Shah; Param Raval; Harin Khakhi; Mehul S. Raval
ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers. (99%)Husheng Han; Kaidi Xu; Xing Hu; Xiaobing Chen; Ling Liang; Zidong Du; Qi Guo; Yanzhi Wang; Yunji Chen
Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks. (99%)Yonggan Fu; Qixuan Yu; Yang Zhang; Shang Wu; Xu Ouyang; David Cox; Yingyan Lin
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective. (98%)Jingwei Sun; Ang Li; Louis DiValentin; Amin Hassanzadeh; Yiran Chen; Hai Li
A Frequency Perspective of Adversarial Robustness. (98%)Shishira R Maiya; Max Ehrlich; Vatsal Agarwal; Ser-Nam Lim; Tom Goldstein; Abhinav Shrivastava
Disrupting Deep Uncertainty Estimation Without Harming Accuracy. (86%)Ido Galil; Ran El-Yaniv
Improving Local Effectiveness for Global robust training. (83%)Jingyue Lu; M. Pawan Kumar
Robustness of Graph Neural Networks at Scale. (76%)Simon Geisler; Tobias Schmidt; Hakan Şirin; Daniel Zügner; Aleksandar Bojchevski; Stephan Günnemann
Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions. (75%)Izzat Alsmadi; Kashif Ahmad; Mahmoud Nazzal; Firoj Alam; Ala Al-Fuqaha; Abdallah Khreishah; Abdulelah Algosaibi
Adversarial Robustness in Multi-Task Learning: Promises and Illusions. (64%)Salah Ghamizi; Maxime Cordy; Mike Papadakis; Yves Le Traon
AugMax: Adversarial Composition of Random Augmentations for Robust Training. (56%)Haotao Wang; Chaowei Xiao; Jean Kossaifi; Zhiding Yu; Anima Anandkumar; Zhangyang Wang
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes. (50%)Sanghyun Hong; Michael-Andrei Panaitescu-Liess; Yiğitcan Kaya; Tudor Dumitraş
Semantic Host-free Trojan Attack. (10%)Haripriya Harikumar; Kien Do; Santu Rana; Sunil Gupta; Svetha Venkatesh
CAFE: Catastrophic Data Leakage in Vertical Federated Learning. (3%)Xiao Jin; Pin-Yu Chen; Chia-Yi Hsu; Chia-Mu Yu; Tianyi Chen
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge. (1%)Geng Yuan; Xiaolong Ma; Wei Niu; Zhengang Li; Zhenglun Kong; Ning Liu; Yifan Gong; Zheng Zhan; Chaoyang He; Qing Jin; Siyue Wang; Minghai Qin; Bin Ren; Yanzhi Wang; Sijia Liu; Xue Lin
Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection. (1%)Chunjong Park; Anas Awadalla; Tadayoshi Kohno; Shwetak Patel
Defensive Tensorization. (1%)Adrian Bulat; Jean Kossaifi; Sourav Bhattacharya; Yannis Panagakis; Timothy Hospedales; Georgios Tzimiropoulos; Nicholas D Lane; Maja Pantic
Task-Aware Meta Learning-based Siamese Neural Network for Classifying Obfuscated Malware. (1%)Jinting Zhu; Julian Jang-Jaccard; Amardeep Singh; Paul A. Watters; Seyit Camtepe
2021-10-25
Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks. (99%)Qiyu Kang; Yang Song; Qinxu Ding; Wee Peng Tay
Generating Watermarked Adversarial Texts. (99%)Mingjie Li; Hanzhou Wu; Xinpeng Zhang
Beyond $L_p$ clipping: Equalization-based Psychoacoustic Attacks against ASRs. (92%)Hadi Abdullah; Muhammad Sajidur Rahman; Christian Peeters; Cassidy Gibson; Washington Garcia; Vincent Bindschaedler; Thomas Shrimpton; Patrick Traynor
Fast Gradient Non-sign Methods. (92%)Yaya Cheng; Jingkuan Song; Xiaosu Zhu; Qilong Zhang; Lianli Gao; Heng Tao Shen
Ensemble Federated Adversarial Training with Non-IID data. (87%)Shuang Luo; Didi Zhu; Zexi Li; Chao Wu
GANash -- A GAN approach to steganography. (81%)Venkatesh Subramaniyan; Vignesh Sivakumar; A. K. Vagheesan; S. Sakthivelan; K. J. Jegadish Kumar; K. K. Nagarajan
A Dynamical System Perspective for Lipschitz Neural Networks. (81%)Laurent Meunier; Blaise Delattre; Alexandre Araujo; Alexandre Allauzen
An Adaptive Structural Learning of Deep Belief Network for Image-based Crack Detection in Concrete Structures Using SDNET2018. (13%)Shin Kamada; Takumi Ichimura; Takashi Iwasaki
2021-10-24
Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples. (80%)Yi Xiang Marcus Tan; Penny Chong; Jiamei Sun; Ngai-man Cheung; Yuval Elovici; Alexander Binder
2021-10-23
ADC: Adversarial attacks against object Detection that evade Context consistency checks. (99%)Mingjun Yin; Shasha Li; Chengyu Song; M. Salman Asif; Amit K. Roy-Chowdhury; Srikanth V. Krishnamurthy
A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness. (81%)Chang Song; Riya Ranjan; Hai Li
2021-10-22
Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations. (99%)Marek Galovic; Branislav Bosansky; Viliam Lisy
How and When Adversarial Robustness Transfers in Knowledge Distillation? (91%)Rulin Shao; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh
Fairness Degrading Adversarial Attacks Against Clustering Algorithms. (86%)Anshuman Chhabra; Adish Singla; Prasant Mohapatra
Adversarial robustness for latent models: Revisiting the robust-standard accuracies tradeoff. (80%)Adel Javanmard; Mohammad Mehrabi
PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy. (15%)Xiaolan Gu; Ming Li; Li Xiong
ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI. (15%)Samuel Hess; Gregory Ditzler
Spoofing Detection on Hand Images Using Quality Assessment. (1%)Asish Bera; Ratnadeep Dey; Debotosh Bhattacharjee; Mita Nasipuri; Hubert P. H. Shum
Text Counterfactuals via Latent Optimization and Shapley-Guided Search. (1%)Quintin Pope; Xiaoli Z. Fern
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. (1%)Anvith Thudi; Hengrui Jia; Ilia Shumailov; Nicolas Papernot
MANDERA: Malicious Node Detection in Federated Learning via Ranking. (1%)Wanchuang Zhu; Benjamin Zi Hao Zhao; Simon Luo; Tongliang Liu; Ke Deng
2021-10-21
CAPTIVE: Constrained Adversarial Perturbations to Thwart IC Reverse Engineering. (98%)Amir Hosein Afandizadeh Zargari; Marzieh AshrafiAmiri; Minjun Seo; Sai Manoj Pudukotai Dinakarrao; Mohammed E. Fouda; Fadi Kurdahi
PROVES: Establishing Image Provenance using Semantic Signatures. (93%)Mingyang Xie; Manav Kulshrestha; Shaojie Wang; Jinghan Yang; Ayan Chakrabarti; Ning Zhang; Yevgeniy Vorobeychik
RoMA: a Method for Neural Network Robustness Measurement and Assessment. (92%)Natan Levy; Guy Katz
Anti-Backdoor Learning: Training Clean Models on Poisoned Data. (83%)Yige Li; Xixiang Lyu; Nodens Koren; Lingjuan Lyu; Bo Li; Xingjun Ma
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion. (68%)Shijie Zhang; Hongzhi Yin; Tong Chen; Zi Huang; Quoc Viet Hung Nguyen; Lizhen Cui
Robustness through Data Augmentation Loss Consistency. (61%)Tianjian Huang; Shaunak Halbe; Chinnadhurai Sankar; Pooyan Amini; Satwik Kottur; Alborz Geramifard; Meisam Razaviyayn; Ahmad Beirami
Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness. (61%)Simon Geisler; Johanna Sommer; Jan Schuchardt; Aleksandar Bojchevski; Stephan Günnemann
Watermarking Graph Neural Networks based on Backdoor Attacks. (31%)Jing Xu; Stjepan Picek
Physical Side-Channel Attacks on Embedded Neural Networks: A Survey. (8%)Maria Méndez Real; Rubén Salvador
2021-10-20
Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning. (83%)Thai Le; Long Tran-Thanh; Dongwon Lee
Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks. (62%)Zihan Liu; Yun Luo; Zelin Zang; Stan Z. Li
Moiré Attack (MA): A New Potential Risk of Screen Photos. (56%)Dantong Niu; Ruohao Guo; Yisen Wang
Adversarial attacks against Bayesian forecasting dynamic models. (13%)Roi Naveiro
No One Representation to Rule Them All: Overlapping Features of Training Methods. (1%)Raphael Gontijo-Lopes; Yann Dauphin; Ekin D. Cubuk
2021-10-19
Multi-concept adversarial attacks. (99%)Vibha Belavadi; Yan Zhou; Murat Kantarcioglu; Bhavani M. Thuraisingham
A Regularization Method to Improve Adversarial Robustness of Neural Networks for ECG Signal Classification. (96%)Linhai Ma; Liang Liang
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks. (69%)Atul Sharma; Wei Chen; Joshua Zhao; Qiang Qiu; Somali Chaterji; Saurabh Bagchi
Understanding Convolutional Neural Networks from Theoretical Perspective via Volterra Convolution. (61%)Tenghui Li; Guoxu Zhou; Yuning Qiu; Qibin Zhao
Detecting Backdoor Attacks Against Point Cloud Classifiers. (26%)Zhen Xiang; David J. Miller; Siheng Chen; Xi Li; George Kesidis
Speech Pattern based Black-box Model Watermarking for Automatic Speech Recognition. (13%)Haozhe Chen; Weiming Zhang; Kunlin Liu; Kejiang Chen; Han Fang; Nenghai Yu
A Deeper Look into RowHammer's Sensitivities: Experimental Analysis of Real DRAM Chips and Implications on Future Attacks and Defenses. (5%)Lois Orosa; Abdullah Giray Yağlıkçı; Haocong Luo; Ataberk Olgun; Jisung Park; Hasan Hassan; Minesh Patel; Jeremie S. Kim; Onur Mutlu
2021-10-18
Boosting the Transferability of Video Adversarial Examples via Temporal Translation. (99%)Zhipeng Wei; Jingjing Chen; Zuxuan Wu; Yu-Gang Jiang
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information. (99%)Baolin Zheng; Peipei Jiang; Qian Wang; Qi Li; Chao Shen; Cong Wang; Yunjie Ge; Qingyang Teng; Shenyi Zhang
Improving Robustness using Generated Data. (97%)Sven Gowal; Sylvestre-Alvise Rebuffi; Olivia Wiles; Florian Stimberg; Dan Andrei Calian; Timothy Mann
MEMO: Test Time Robustness via Adaptation and Augmentation. (13%)Marvin Zhang; Sergey Levine; Chelsea Finn
Minimal Multi-Layer Modifications of Deep Neural Networks. (4%)Idan Refaeli; Guy Katz
2021-10-17
Unrestricted Adversarial Attacks on ImageNet Competition. (99%)Yuefeng Chen; Xiaofeng Mao; Yuan He; Hui Xue; Chao Li; Yinpeng Dong; Qi-An Fu; Xiao Yang; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu; Fangcheng Liu; Chao Zhang; Hongyang Zhang; Yichi Zhang; Shilong Liu; Chang Liu; Wenzhao Xiang; Yajie Wang; Huipeng Zhou; Haoran Lyu; Yidan Xu; Zixuan Xu; Taoyu Zhu; Wenjun Li; Xianfeng Gao; Guoqiu Wang; Huanqian Yan; Ying Guo; Chaoning Zhang; Zheng Fang; Yang Wang; Bingyang Fu; Yunfei Zheng; Yekui Wang; Haorong Luo; Zhen Yang
Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training. (99%)Alexander Pan; Yongkyun Lee; Huan Zhang; Yize Chen; Yuanyuan Shi
ECG-ATK-GAN: Robustness against Adversarial Attacks on ECGs using Conditional Generative Adversarial Networks. (99%)Khondker Fariha Hossain; Sharif Amit Kamran; Alireza Tavakkoli; Xingjun Ma
Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications. (22%)Bang Wu; Xiangwen Yang; Shirui Pan; Xingliang Yuan
Poisoning Attacks on Fair Machine Learning. (12%)Minh-Hao Van; Wei Du; Xintao Wu; Aidong Lu
2021-10-16
Black-box Adversarial Attacks on Network-wide Multi-step Traffic State Prediction Models. (99%)Bibek Poudel; Weizi Li
Analyzing Dynamic Adversarial Training Data in the Limit. (82%)Eric Wallace; Adina Williams; Robin Jia; Douwe Kiela
Characterizing Improper Input Validation Vulnerabilities of Mobile Crowdsourcing Services. (5%)Sojhal Ismail Khan; Dominika Woszczyk; Chengzeng You; Soteris Demetriou; Muhammad Naveed
Tackling the Imbalance for GNNs. (4%)Rui Wang; Weixuan Xiong; Qinghu Hou; Ou Wu
2021-10-15
Adversarial Attacks on Gaussian Process Bandits. (99%)Eric Han; Jonathan Scarlett
Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm. (99%)Tengfei Zhao; Zhaocheng Ge; Hanping Hu; Dingmeng Shi
Adversarial Attacks on ML Defense Models Competition. (99%)Yinpeng Dong; Qi-An Fu; Xiao Yang; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu; Jiayu Tang; Yuefeng Chen; XiaoFeng Mao; Yuan He; Hui Xue; Chao Li; Ye Liu; Qilong Zhang; Lianli Gao; Yunrui Yu; Xitong Gao; Zhe Zhao; Daquan Lin; Jiadong Lin; Chuanbiao Song; Zihao Wang; Zhennan Wu; Yang Guo; Jiequan Cui; Xiaogang Xu; Pengguang Chen
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture. (76%)Xinyu Tang; Saeed Mahloujifar; Liwei Song; Virat Shejwalkar; Milad Nasr; Amir Houmansadr; Prateek Mittal
Robustness of different loss functions and their impact on networks learning capability. (76%)Vishal Rajput
Chunked-Cache: On-Demand and Scalable Cache Isolation for Security Architectures. (22%)Ghada Dessouky; Alexander Gruler; Pouya Mahmoody; Ahmad-Reza Sadeghi; Emmanuel Stapf
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks. (10%)Yangyi Chen; Fanchao Qi; Zhiyuan Liu; Maosong Sun
Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation. (8%)Yao Qin; Chiyuan Zhang; Ting Chen; Balaji Lakshminarayanan; Alex Beutel; Xuezhi Wang
Hand Me Your PIN! Inferring ATM PINs of Users Typing with a Covered Hand. (1%)Matteo Cardaioli; Stefano Cecconello; Mauro Conti; Simone Milani; Stjepan Picek; Eugen Saraci
2021-10-14
Adversarial examples by perturbing high-level features in intermediate decoder layers. (99%)Vojtěch Čermák; Lukáš Adam
DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks. (99%)Yixiang Wang; Jiqiang Liu; Xiaolin Chang; Jianhua Wang; Ricardo J. Rodríguez
Adversarial Purification through Representation Disentanglement. (99%)Tao Bai; Jun Zhao; Lanqing Guo; Bihan Wen
RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models. (93%)Wenkai Yang; Yankai Lin; Peng Li; Jie Zhou; Xu Sun
An Optimization Perspective on Realizing Backdoor Injection Attacks on Deep Neural Networks in Hardware. (87%)M. Caner Tol; Saad Islam; Berk Sunar; Ziming Zhang
Interactive Analysis of CNN Robustness. (80%)Stefan Sietzen; Mathias Lechner; Judy Borowski; Ramin Hasani; Manuela Waldner
On Adversarial Vulnerability of PHM algorithms: An Initial Study. (69%)Weizhong Yan; Zhaoyuan Yang; Jianwei Qiu
Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models. (61%)Tianlu Wang; Diyi Yang; Xuezhi Wang
Toward Degradation-Robust Voice Conversion. (9%)Chien-yu Huang; Kai-Wei Chang; Hung-yi Lee
Interpreting the Robustness of Neural NLP Models to Textual Perturbations. (9%)Yunxiang Zhang; Liangming Pan; Samson Tan; Min-Yen Kan
Retrieval-guided Counterfactual Generation for QA. (2%)Bhargavi Paranjape; Matthew Lamm; Ian Tenney
Effective Certification of Monotone Deep Equilibrium Models. (1%)Mark Niklas Müller; Robin Staab; Marc Fischer; Martin Vechev
2021-10-13
A Framework for Verification of Wasserstein Adversarial Robustness. (99%)Tobias Wegel; Felix Assion; David Mickisch; Florens Greßner
Identification of Attack-Specific Signatures in Adversarial Examples. (99%)Hossein Souri; Pirazh Khorramshahi; Chun Pong Lau; Micah Goldblum; Rama Chellappa
Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness. (99%)Xiao Yang; Yinpeng Dong; Wenzhao Xiang; Tianyu Pang; Hang Su; Jun Zhu
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer. (98%)Fanchao Qi; Yangyi Chen; Xurui Zhang; Mukai Li; Zhiyuan Liu; Maosong Sun
Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack. (93%)Davis Brown; Henry Kvinge
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks. (92%)Shawn Shan; Arjun Nitin Bhagoji; Haitao Zheng; Ben Y. Zhao
Boosting the Certified Robustness of L-infinity Distance Nets. (1%)Bohang Zhang; Du Jiang; Di He; Liwei Wang
Benchmarking the Robustness of Spatial-Temporal Models Against Corruptions. (1%)Chenyu Yi; Siyuan Yang; Haoliang Li; Yap-peng Tan; Alex Kot
2021-10-12
Adversarial Attack across Datasets. (99%)Yunxiao Qin; Yuanhao Xiong; Jinfeng Yi; Cho-Jui Hsieh
Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning. (99%)Jinyin Chen; Guohan Huang; Haibin Zheng; Shanqing Yu; Wenrong Jiang; Chen Cui
SEPP: Similarity Estimation of Predicted Probabilities for Defending and Detecting Adversarial Text. (92%)Hoang-Quoc Nguyen-Son; Seira Hidano; Kazuhide Fukushima; Shinsaku Kiyomoto
On the Security Risks of AutoML. (45%)Ren Pang; Zhaohan Xi; Shouling Ji; Xiapu Luo; Ting Wang
Zero-bias Deep Neural Network for Quickest RF Signal Surveillance. (1%)Yongxin Liu; Yingjie Chen; Jian Wang; Shuteng Niu; Dahai Liu; Houbing Song
2021-10-11
Boosting Fast Adversarial Training with Learnable Adversarial Initialization. (99%)Xiaojun Jia; Yong Zhang; Baoyuan Wu; Jue Wang; Xiaochun Cao
Parameterizing Activation Functions for Adversarial Robustness. (98%)Sihui Dai; Saeed Mahloujifar; Prateek Mittal
Amicable examples for informed source separation. (86%)Naoya Takahashi; Yuki Mitsufuji
Doubly-Trained Adversarial Data Augmentation for Neural Machine Translation. (12%)Weiting Tan; Shuoyang Ding; Huda Khayrallah; Philipp Koehn
Intriguing Properties of Input-dependent Randomized Smoothing. (1%)Peter Súkeník; Aleksei Kuvshinov; Stephan Günnemann
Hiding Images into Images with Real-world Robustness. (1%)Qichao Ying; Hang Zhou; Xianhan Zeng; Haisheng Xu; Zhenxing Qian; Xinpeng Zhang
Source Mixing and Separation Robust Audio Steganography. (1%)Naoya Takahashi; Mayank Kumar Singh; Yuki Mitsufuji
Homogeneous Learning: Self-Attention Decentralized Deep Learning. (1%)Yuwei Sun; Hideya Ochiai
Large Language Models Can Be Strong Differentially Private Learners. (1%)Xuechen Li; Florian Tramèr; Percy Liang; Tatsunori Hashimoto
A Closer Look at Prototype Classifier for Few-shot Image Classification. (1%)Mingcheng Hou; Issei Sato
Certified Patch Robustness via Smoothed Vision Transformers. (1%)Hadi Salman; Saachi Jain; Eric Wong; Aleksander Mądry
2021-10-10
Adversarial Attacks in a Multi-view Setting: An Empirical Study of the Adversarial Patches Inter-view Transferability. (98%)Bilel Tarchoun; Ihsen Alouani; Anouar Ben Khalifa; Mohamed Ali Mahjoub
Universal Adversarial Attacks on Neural Networks for Power Allocation in a Massive MIMO System. (92%)Pablo Millán Santos; B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
2021-10-09
Demystifying the Transferability of Adversarial Attacks in Computer Networks. (99%)Ehsan Nowroozi; Yassine Mekdad; Mohammad Hajian Berenjestanaki; Mauro Conti; Abdeslam EL Fergougui
Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning. (93%)Guanlin Liu; Lifeng Lai
Widen The Backdoor To Let More Attackers In. (13%)Siddhartha Datta; Giulio Lovisotto; Ivan Martinovic; Nigel Shadbolt
2021-10-08
Explainability-Aware One Point Attack for Point Cloud Neural Networks. (99%)Hanxiao Tan; Helena Kotthaus
Game Theory for Adversarial Attacks and Defenses. (98%)Shorya Sharma
Graphs as Tools to Improve Deep Learning Methods. (10%)Carlos Lassance; Myriam Bontonou; Mounia Hamidouche; Bastien Pasdeloup; Lucas Drumetz; Vincent Gripon
IHOP: Improved Statistical Query Recovery against Searchable Symmetric Encryption through Quadratic Optimization. (3%)Simon Oya; Florian Kerschbaum
A Wireless Intrusion Detection System for 802.11 WPA3 Networks. (1%)Neil Dalal; Nadeem Akhtar; Anubhav Gupta; Nikhil Karamchandani; Gaurav S. Kasbekar; Jatin Parekh
Salient ImageNet: How to discover spurious features in Deep Learning? (1%)Sahil Singla; Soheil Feizi
2021-10-07
Robust Feature-Level Adversaries are Interpretability Tools. (99%)Stephen Casper; Max Nadeau; Dylan Hadfield-Menell; Gabriel Kreiman
EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection. (99%)Hamid Bostani; Veelasha Moonsamy
Adversarial Attack by Limited Point Cloud Surface Modifications. (98%)Atrin Arya; Hanieh Naderi; Shohreh Kasaei
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. (98%)Hanxun Huang; Yisen Wang; Sarah Monazam Erfani; Quanquan Gu; James Bailey; Xingjun Ma
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction. (80%)Jinyin Chen; Haiyang Xiong; Haibin Zheng; Jian Zhang; Guodong Jiang; Yi Liu
Fingerprinting Multi-exit Deep Neural Network Models via Inference Time. (62%)Tian Dong; Han Qiu; Tianwei Zhang; Jiwei Li; Hewu Li; Jialiang Lu
Adversarial Unlearning of Backdoors via Implicit Hypergradient. (56%)Yi Zeng; Si Chen; Won Park; Z. Morley Mao; Ming Jin; Ruoxi Jia
MPSN: Motion-aware Pseudo Siamese Network for Indoor Video Head Detection in Buildings. (1%)Kailai Sun; Xiaoteng Ma; Peng Liu; Qianchuan Zhao
2021-10-06
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise. (99%)Souvik Kundu; Massoud Pedram; Peter A. Beerel
Reversible adversarial examples against local visual perturbation. (99%)Zhaoxia Yin; Li Chen; Shaowei Zhu
Attack as the Best Defense: Nullifying Image-to-image Translation GANs via Limit-aware Adversarial Attack. (99%)Chin-Yuan Yeh; Hsi-Wen Chen; Hong-Han Shuai; De-Nian Yang; Ming-Syan Chen
Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs. (99%)Philipp Benz; Soomin Ham; Chaoning Zhang; Adil Karjauv; In So Kweon
Adversarial Attacks on Machinery Fault Diagnosis. (99%)Jiahao Chen; Diqun Yan
Adversarial Attacks on Spiking Convolutional Networks for Event-based Vision. (98%)Julian Büchel; Gregor Lenz; Yalun Hu; Sadique Sheik; Martino Sorbaro
A Uniform Framework for Anomaly Detection in Deep Neural Networks. (97%)Fangzhen Zhao; Chenyi Zhang; Naipeng Dong; Zefeng You; Zhenxin Wu
Double Descent in Adversarial Training: An Implicit Label Noise Perspective. (88%)Chengyu Dong; Liyuan Liu; Jingbo Shang
Improving Adversarial Robustness for Free with Snapshot Ensemble. (83%)Yihao Wang
DoubleStar: Long-Range Attack Towards Depth Estimation based Obstacle Avoidance in Autonomous Systems. (45%)Ce Zhou; Qiben Yan; Yan Shi; Lichao Sun
Inference Attacks Against Graph Neural Networks. (2%)Zhikun Zhang; Min Chen; Michael Backes; Yun Shen; Yang Zhang
Data-driven behavioural biometrics for continuous and adaptive user verification using Smartphone and Smartwatch. (1%)Akriti Verma; Valeh Moghaddam; Adnan Anwar
On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks. (1%)Yunhao Yang; Parham Gohari; Ufuk Topcu
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks. (1%)Jiawei Du; Hanshu Yan; Jiashi Feng; Joey Tianyi Zhou; Liangli Zhen; Rick Siow Mong Goh; Vincent Y. F. Tan
Stegomalware: A Systematic Survey of Malware Hiding and Detection in Images, Machine Learning Models and Research Challenges. (1%)Rajasekhar Chaganti; Vinayakumar Ravi; Mamoun Alazab; Tuan D. Pham
Exploring the Common Principal Subspace of Deep Features in Neural Networks. (1%)Haoran Liu; Haoyi Xiong; Yaqing Wang; Haozhe An; Dongrui Wu; Dejing Dou
Generalizing Neural Networks by Reflecting Deviating Data in Production. (1%)Yan Xiao; Yun Lin; Ivan Beschastnikh; Changsheng Sun; David S. Rosenblum; Jin Song Dong
2021-10-05
Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. (99%)Lisa Oakley; Alina Oprea; Stavros Tripakis
Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations. (99%)Shasha Li; Abhishek Aich; Shitong Zhu; M. Salman Asif; Chengyu Song; Amit K. Roy-Chowdhury; Srikanth Krishnamurthy
Adversarial defenses via a mixture of generators. (99%)Maciej Żelaszczyk; Jacek Mańdziuk
Neural Network Adversarial Attack Method Based on Improved Genetic Algorithm. (92%)Dingming Yang; Yanrong Cui; Hongqiang Yuan
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. (33%)Kangjie Chen; Yuxian Meng; Xiaofei Sun; Shangwei Guo; Tianwei Zhang; Jiwei Li; Chun Fan
Spectral Bias in Practice: The Role of Function Frequency in Generalization. (1%)Sara Fridovich-Keil; Raphael Gontijo-Lopes; Rebecca Roelofs
CADA: Multi-scale Collaborative Adversarial Domain Adaptation for Unsupervised Optic Disc and Cup Segmentation. (1%)Peng Liu; Charlie T. Tran; Bin Kong; Ruogu Fang
Noisy Feature Mixup. (1%)Soon Hoe Lim; N. Benjamin Erichson; Francisco Utrera; Winnie Xu; Michael W. Mahoney
2021-10-04
Benchmarking Safety Monitors for Image Classifiers with Machine Learning. (1%)Raul Sena Ferreira; Jean Arlat; Jeremie Guiochet; Hélène Waeselynck
2021-10-03
Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models. (82%)Wenqian Ye; Fei Xu; Yaojia Huang; Cassie Huang; Ji A
2021-10-02
Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication. (98%)Elliu Huang; Fabio Di Troia; Mark Stamp
Anti-aliasing Deep Image Classifiers using Novel Depth Adaptive Blurring and Activation Function. (13%)Md Tahmid Hossain; Shyh Wei Teng; Ferdous Sohel; Guojun Lu
2021-10-01
Calibrated Adversarial Training. (98%)Tianjin Huang; Vlado Menkovski; Yulong Pei; Mykola Pechenizkiy
Universal Adversarial Spoofing Attacks against Face Recognition. (87%)Takuma Amada; Seng Pei Liew; Kazuya Kakizaki; Toshinori Araki
Score-Based Generative Classifiers. (84%)Roland S. Zimmermann; Lukas Schott; Yang Song; Benjamin A. Dunn; David A. Klindt
One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency. (1%)Sayeed Shafayet Chowdhury; Nitin Rathi; Kaushik Roy
2021-09-30
Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation. (98%)Manjushree B. Aithal; Xiaohua Li
You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors. (95%)Zijian Zhu; Hang Su; Chang Liu; Wenzhao Xiang; Shibao Zheng
Adversarial Semantic Contour for Object Detection. (92%)Yichi Zhang; Zijian Zhu; Xiao Yang; Jun Zhu
From Zero-Shot Machine Learning to Zero-Day Attack Detection. (10%)Mohanad Sarhan; Siamak Layeghy; Marcus Gallagher; Marius Portmann
2021-09-29
On Brightness Agnostic Adversarial Examples Against Face Recognition Systems. (99%)Inderjeet Singh; Satoru Momiyama; Kazuya Kakizaki; Toshinori Araki
Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks. (70%)Kaleel Mahmood; Rigel Mahmood; Ethan Rathbun; Marten van Dijk
BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining. (41%)Weizhe Hua; Yichi Zhang; Chuan Guo; Zhiru Zhang; G. Edward Suh
Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP). (10%)Nancirose Piazza; Vahid Behzadan
2021-09-28
slimTrain -- A Stochastic Approximation Method for Training Separable Deep Neural Networks. (1%)Elizabeth Newman; Julianne Chung; Matthias Chung; Lars Ruthotto
2021-09-27
MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles. (99%)Yuejun Guo; Qiang Hu; Maxime Cordy; Michail Papadakis; Yves Le Traon
Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors. (99%)Zhengyi Wang; Zhongkai Hao; Ziqiao Wang; Hang Su; Jun Zhu
Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective. (98%)Adhyyan Narang; Vidya Muthukumar; Anant Sahai
GANG-MAM: GAN based enGine for Modifying Android Malware. (64%)Renjith G; Sonia Laudanna; Aji S; Corrado Aaron Visaggio; Vinod P
Distributionally Robust Multi-Output Regression Ranking. (3%)Shahabeddin Sotudian; Ruidi Chen; Ioannis Paschalidis
Improving Uncertainty of Deep Learning-based Object Classification on Radar Spectra using Label Smoothing. (1%)Kanil Patel; William Beluch; Kilian Rambach; Michael Pfeiffer; Bin Yang
Federated Deep Learning with Bayesian Privacy. (1%)Hanlin Gu; Lixin Fan; Bowen Li; Yan Kang; Yuan Yao; Qiang Yang
2021-09-26
Distributionally Robust Multiclass Classification and Applications in Deep CNN Image Classifiers. (11%)Ruidi Chen; Boran Hao; Ioannis Paschalidis
2021-09-25
Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency. (99%)Sohaib Kiani; Sana Awan; Chao Lan; Fengjun Li; Bo Luo
Contributions to Large Scale Bayesian Inference and Adversarial Machine Learning. (98%)Víctor Gallego
MINIMAL: Mining Models for Data Free Universal Adversarial Triggers. (93%)Swapnil Parekh; Yaman Kumar Singla; Somesh Singh; Changyou Chen; Balaji Krishnamurthy; Rajiv Ratn Shah
2021-09-24
Local Intrinsic Dimensionality Signals Adversarial Perturbations. (98%)Sandamal Weerasinghe; Tansu Alpcan; Sarah M. Erfani; Christopher Leckie; Benjamin I. P. Rubinstein
2021-09-23
Breaking BERT: Understanding its Vulnerabilities for Biomedical Named Entity Recognition through Adversarial Attack. (98%)Anne Dirkson; Suzan Verberne; Wessel Kraaij
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training. (88%)Jakub Breier; Xiaolu Hou; Martín Ochoa; Jesus Solano
AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses. (68%)Yaman Kumar Singla; Swapnil Parekh; Somesh Singh; Junyi Jessy Li; Rajiv Ratn Shah; Changyou Chen
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. (1%)Dongqi Han; Zhiliang Wang; Wenqi Chen; Ying Zhong; Su Wang; Han Zhang; Jiahai Yang; Xingang Shi; Xia Yin
2021-09-22
Exploring Adversarial Examples for Efficient Active Learning in Machine Learning Classifiers. (99%)Honggang Yu; Shihfeng Zeng; Teng Zhang; Ing-Chao Lin; Yier Jin
CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks. (81%)Mikhail Pautov; Nurislam Tursynbek; Marina Munkhoeva; Nikita Muravev; Aleksandr Petiushko; Ivan Oseledets
Security Analysis of Capsule Network Inference using Horizontal Collaboration. (69%)Adewale Adeyemo; Faiq Khalid; Tolulope A. Odetola; Syed Rafay Hasan
Adversarial Transfer Attacks With Unknown Data and Class Overlap. (62%)Luke E. Richards; André Nguyen; Ryan Capps; Steven Forsythe; Cynthia Matuszek; Edward Raff
Pushing the Right Buttons: Adversarial Evaluation of Quality Estimation. (1%)Diptesh Kanojia; Marina Fomicheva; Tharindu Ranasinghe; Frédéric Blain; Constantin Orăsan; Lucia Specia
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis. (1%)Zeyuan Yin; Ye Yuan; Panfeng Guo; Pan Zhou
2021-09-21
Attacks on Visualization-Based Malware Detection: Balancing Effectiveness and Executability. (99%)Hadjer Benkraouda; Jingyu Qian; Hung Quoc Tran; Berkay Kaplan
3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation. (93%)Mengxi Wu; Hao Huang; Yi Fang
DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning. (76%)Md Tamjid Hossain; Shafkat Islam; Shahriar Badsha; Haoting Shen
Privacy, Security, and Utility Analysis of Differentially Private CPES Data. (13%)Md Tamjid Hossain; Shahriar Badsha; Haoting Shen
2021-09-20
Robust Physical-World Attacks on Face Recognition. (99%)Xin Zheng; Yanbo Fan; Baoyuan Wu; Yong Zhang; Jue Wang; Shirui Pan
Modeling Adversarial Noise for Adversarial Defense. (99%)Dawei Zhou; Nannan Wang; Bo Han; Tongliang Liu
Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection? (99%)Deqiang Li; Tian Qiu; Shuo Chen; Qianmu Li; Shouhuai Xu
Robustness Analysis of Deep Learning Frameworks on Mobile Platforms. (10%)Amin Eslami Abyane; Hadi Hemmati
"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World. (2%)Emily Wenger; Max Bronckers; Christian Cianfarani; Jenna Cryan; Angela Sha; Haitao Zheng; Ben Y. Zhao
Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework. (1%)Muhammad Shafique; Alberto Marchisio; Rachmad Vidya Wicaksana Putra; Muhammad Abdullah Hanif
2021-09-19
On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars. (99%)Deboleena Roy; Chun Tao; Indranil Chakraborty; Kaushik Roy
Adversarial Training with Contrastive Learning in NLP. (16%)Daniela N. Rim; DongNyeong Heo; Heeyoul Choi
2021-09-18
Clean-label Backdoor Attack against Deep Hashing based Retrieval. (98%)Kuofeng Gao; Jiawang Bai; Bin Chen; Dongxian Wu; Shu-Tao Xia
2021-09-17
Messing Up 3D Virtual Environments: Transferable Adversarial 3D Objects. (98%)Enrico Meloni; Matteo Tiezzi; Luca Pasqualini; Marco Gori; Stefano Melacci
Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations. (8%)Ke Sun; Yingnan Zhao; Shangling Jui; Linglong Kong
2021-09-16
Harnessing Perceptual Adversarial Patches for Crowd Counting. (99%)Shunchang Liu; Jiakai Wang; Aishan Liu; Yingwei Li; Yijie Gao; Xianglong Liu; Dacheng Tao
KATANA: Simple Post-Training Robustness Using Test Time Augmentations. (98%)Gilad Cohen; Raja Giryes
Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns. (96%)Prasanth Buddareddygari; Travis Zhang; Yezhou Yang; Yi Ren
Adversarial Attacks against Deep Learning Based Power Control in Wireless Communications. (95%)Brian Kim; Yi Shi; Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus
Don't Search for a Search Method -- Simple Heuristics Suffice for Adversarial Text Attacks. (68%)Nathaniel Berger; Stefan Riezler; Artem Sokolov; Sebastian Ebert
Membership Inference Attacks Against Recommender Systems. (3%)Minxing Zhang; Zhaochun Ren; Zihan Wang; Pengjie Ren; Zhumin Chen; Pengfei Hu; Yang Zhang
2021-09-15
Universal Adversarial Attack on Deep Learning Based Prognostics. (99%)Arghya Basak; Pradeep Rathore; Sri Harsha Nistala; Sagar Srinivas; Venkataramana Runkana
Balancing detectability and performance of attacks on the control channel of Markov Decision Processes. (98%)Alessio Russo; Alexandre Proutiere
FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack. (95%)Donghua Wang; Tingsong Jiang; Jialiang Sun; Weien Zhou; Xiaoya Zhang; Zhiqiang Gong; Wen Yao; Xiaoqian Chen
BERT is Robust! A Case Against Synonym-Based Adversarial Examples in Text Classification. (92%)Jens Hauser; Zhao Meng; Damián Pascual; Roger Wattenhofer
Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup. (13%)Guang Liu; Yuzhao Mao; Hailong Huang; Weiguo Gao; Xuan Li
Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel. (10%)Henrique Teles Maia; Chang Xiao; Dingzeyu Li; Eitan Grinspun; Changxi Zheng
2021-09-14
A Novel Data Encryption Method Inspired by Adversarial Attacks. (99%)Praveen Fernando; Jin Wei-Kocsis
Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder. (99%)Yao Qiu; Jinchao Zhang; Jie Zhou
PETGEN: Personalized Text Generation Attack on Deep Sequence Embedding-based Classification Models. (99%)Bing He; Mustaque Ahamad; Srijan Kumar
EVAGAN: Evasion Generative Adversarial Network for Low Data Regimes. (76%)Rizwan Hamid Randhawa; Nauman Aslam; Muhammad Alauthman; Husnain Rafiq; Muhammad Khalid
Dodging Attack Using Carefully Crafted Natural Makeup. (47%)Nitzan Guetta; Asaf Shabtai; Inderjeet Singh; Satoru Momiyama; Yuval Elovici
Avengers Ensemble! Improving Transferability of Authorship Obfuscation. (12%)Muhammad Haroon; Muhammad Fareed Zaffar; Padmini Srinivasan; Zubair Shafiq
ARCH: Efficient Adversarial Regularized Training with Caching. (8%)Simiao Zuo; Chen Liang; Haoming Jiang; Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen; Tuo Zhao
2021-09-13
Adversarial Bone Length Attack on Action Recognition. (99%)Nariki Tanaka; Hiroshi Kera; Kazuhiko Kawamoto
Randomized Substitution and Vote for Textual Adversarial Example Detection. (99%)Xiaosen Wang; Yifeng Xiong; Kun He
Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator. (99%)Wenzhao Xiang; Hang Su; Chang Liu; Yandong Guo; Shibao Zheng
Evolving Architectures with Gradient Misalignment toward Low Adversarial Transferability. (98%)Kevin Richard G. Operiano; Wanchalerm Pora; Hitoshi Iba; Hiroshi Kera
A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems. (98%)Moein Sabounchi; Jin Wei-Kocsis
Adversarial Examples for Evaluating Math Word Problem Solvers. (96%)Vivek Kumar; Rishabh Maheshwary; Vikram Pudi
PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos. (86%)Nupur Thakur; Baoxin Li
Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering. (81%)Jian Xu; Shao-Lun Huang; Linqi Song; Tian Lan
Formalizing and Estimating Distribution Inference Risks. (62%)Anshuman Suri; David Evans
Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models. (50%)Kun Zhou; Wayne Xin Zhao; Sirui Wang; Fuzheng Zhang; Wei Wu; Ji-Rong Wen
Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models. (16%)Won Park; Nan Li; Qi Alfred Chen; Z. Morley Mao
Adversarially Trained Object Detector for Unsupervised Domain Adaptation. (3%)Kazuma Fujii; Hiroshi Kera; Kazuhiko Kawamoto
Perturbation CheckLists for Evaluating NLG Evaluation Metrics. (1%)Ananya B. Sai; Tanay Dixit; Dev Yashpal Sheth; Sreyas Mohan; Mitesh M. Khapra
How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding. (1%)Tianda Li; Ahmad Rashid; Aref Jafari; Pranav Sharma; Ali Ghodsi; Mehdi Rezagholizadeh
Detecting Safety Problems of Multi-Sensor Fusion in Autonomous Driving. (1%)Ziyuan Zhong; Zhisheng Hu; Shengjian Guo; Xinyang Zhang; Zhenyu Zhong; Baishakhi Ray
2021-09-12
TREATED: Towards Universal Defense against Textual Adversarial Attacks. (99%)Bin Zhu; Zhaoquan Gu; Le Wang; Zhihong Tian
CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph. (98%)Xugang Wu; Huijun Wu; Xu Zhou; Kai Lu
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain. (93%)Hasan Abed Al Kader Hammoud; Bernard Ghanem
RockNER: A Simple Method to Create Adversarial Examples for Evaluating the Robustness of Named Entity Recognition Models. (84%)Bill Yuchen Lin; Wenyang Gao; Jun Yan; Ryan Moreno; Xiang Ren
Shape-Biased Domain Generalization via Shock Graph Embeddings. (2%)Maruthi Narayanan; Vickram Rajendran; Benjamin Kimia
Source Inference Attacks in Federated Learning. (1%)Hongsheng Hu; Zoran Salcic; Lichao Sun; Gillian Dobbie; Xuyun Zhang
2021-09-11
RobustART: Benchmarking Robustness on Architecture Design and Training Techniques. (98%)Shiyu Tang; Ruihao Gong; Yan Wang; Aishan Liu; Jiakai Wang; Xinyun Chen; Fengwei Yu; Xianglong Liu; Dawn Song; Alan Yuille; Philip H. S. Torr; Dacheng Tao
2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency. (81%)Yonggan Fu; Yang Zhao; Qixuan Yu; Chaojian Li; Yingyan Lin
2021-09-10
A Strong Baseline for Query Efficient Attacks in a Black Box Setting. (99%)Rishabh Maheshwary; Saket Maheshwary; Vikram Pudi
2021-09-09
Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification. (99%)Maximilian Mozes; Max Bartolo; Pontus Stenetorp; Bennett Kleinberg; Lewis D. Griffin
Energy Attack: On Transferring Adversarial Examples. (99%)Ruoxi Shi; Borui Yang; Yangzhou Jiang; Chenglong Zhao; Bingbing Ni
Protein Folding Neural Networks Are Not Robust. (99%)Sumit Kumar Jha; Arvind Ramanathan; Rickard Ewetz; Alvaro Velasquez; Susmit Jha
Towards Transferable Adversarial Attacks on Vision Transformers. (99%)Zhipeng Wei; Jingjing Chen; Micah Goldblum; Zuxuan Wu; Tom Goldstein; Yu-Gang Jiang
Multi-granularity Textual Adversarial Attack with Behavior Cloning. (98%)Yangyi Chen; Jin Su; Wei Wei
Spatially Focused Attack against Spatiotemporal Graph Neural Networks. (81%)Fuqiang Liu; Luis Miranda-Moreno; Lijun Sun
Differential Privacy in Personalized Pricing with Nonparametric Demand Models. (26%)Xi Chen; Sentao Miao; Yining Wang
EvilModel 2.0: Bringing Neural Network Models into Malware Attacks. (5%)Zhi Wang; Chaoge Liu; Xiang Cui; Jie Yin; Xutong Wang
2021-09-08
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning. (89%)Maziar Gomrokchi; Susan Amin; Hossein Aboutalebi; Alexander Wong; Doina Precup
Robust Optimal Classification Trees Against Adversarial Examples. (80%)Daniël Vos; Sicco Verwer
2021-09-07
Adversarial Parameter Defense by Multi-Step Risk Minimization. (98%)Zhiyuan Zhang; Ruixuan Luo; Xuancheng Ren; Qi Su; Liangyou Li; Xu Sun
POW-HOW: An enduring timing side-channel to evade online malware sandboxes. (12%)Antonio Nappa; Panagiotis Papadopoulos; Matteo Varvello; Daniel Aceituno Gomez; Juan Tapiador; Andrea Lanzi
Unpaired Adversarial Learning for Single Image Deraining with Rain-Space Contrastive Constraints. (1%)Xiang Chen; Jinshan Pan; Kui Jiang; Yufeng Huang; Caihua Kong; Longgang Dai; Yufeng Li
2021-09-06
Robustness and Generalization via Generative Adversarial Training. (82%)Omid Poursaeed; Tianxing Jiang; Harry Yang; Serge Belongie; SerNam Lim
Trojan Signatures in DNN Weights. (33%)Greg Fields; Mohammad Samragh; Mojan Javaheripi; Farinaz Koushanfar; Tara Javidi
Automated Robustness with Adversarial Training as a Post-Processing Step. (4%)Ambrish Rawat; Mathieu Sinn; Beat Buesser
Exposing Length Divergence Bias of Textual Matching Models. (2%)Lan Jiang; Tianshu Lyu; Chong Meng; Xiaoyong Lyu; Dawei Yin
2021-09-05
Efficient Combinatorial Optimization for Word-level Adversarial Textual Attack. (98%)Shengcai Liu; Ning Lu; Cheng Chen; Ke Tang
Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning. (2%)Yusen Wu; Hao Chen; Xin Wang; Chao Liu; Phuong Nguyen; Yelena Yesha
DexRay: A Simple, yet Effective Deep Learning Approach to Android Malware Detection based on Image Representation of Bytecode. (1%)Nadia Daoudi; Jordan Samhi; Abdoul Kader Kabore; Kevin Allix; Tegawendé F. Bissyandé; Jacques Klein
2021-09-04
Real-World Adversarial Examples involving Makeup Application. (99%)Chang-Sheng Lin; Chia-Yi Hsu; Pin-Yu Chen; Chia-Mu Yu
Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness. (99%)Uriya Pesso; Koby Bibas; Meir Feder
Training Meta-Surrogate Model for Transferable Adversarial Attack. (99%)Yunxiao Qin; Yuanhao Xiong; Jinfeng Yi; Cho-Jui Hsieh
2021-09-03
SEC4SR: A Security Analysis Platform for Speaker Recognition. (99%)Guangke Chen; Zhe Zhao; Fu Song; Sen Chen; Lingling Fan; Yang Liu
Risk Assessment for Connected Vehicles under Stealthy Attacks on Vehicle-to-Vehicle Networks. (1%)Tianci Yang; Carlos Murguia; Chen Lv
2021-09-02
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples. (99%)Guanxiong Liu; Issa Khalil; Abdallah Khreishah; NhatHai Phan
Impact of Attention on Adversarial Robustness of Image Classification Models. (99%)Prachi Agrawal; Narinder Singh Punn; Sanjay Kumar Sonbhadra; Sonali Agarwal
Adversarial Robustness for Unsupervised Domain Adaptation. (98%)Muhammad Awais; Fengwei Zhou; Hang Xu; Lanqing Hong; Ping Luo; Sung-Ho Bae; Zhenguo Li
Real World Robustness from Systematic Noise. (91%)Yan Wang; Yuhang Li; Ruihao Gong
Building Compact and Robust Deep Neural Networks with Toeplitz Matrices. (61%)Alexandre Araujo
2021-09-01
Towards Improving Adversarial Training of NLP Models. (98%)Jin Yong Yoo; Yanjun Qi
Excess Capacity and Backdoor Poisoning. (97%)Naren Sarayu Manoj; Avrim Blum
Regional Adversarial Training for Better Robust Generalization. (96%)Chuanbiao Song; Yanbo Fan; Yicheng Yang; Baoyuan Wu; Yiming Li; Zhifeng Li; Kun He
R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors. (86%)Alberto Marchisio; Giacomo Pira; Maurizio Martina; Guido Masera; Muhammad Shafique
Proof Transfer for Neural Network Verification. (9%)Christian Sprecher; Marc Fischer; Dimitar I. Dimitrov; Gagandeep Singh; Martin Vechev
Guarding Machine Learning Hardware Against Physical Side-Channel Attacks. (2%)Anuj Dubey; Rosario Cammarota; Vikram Suresh; Aydin Aysu
2021-08-31
EG-Booster: Explanation-Guided Booster of ML Evasion Attacks. (99%)Abderrahmen Amich; Birhanu Eshete
Morphence: Moving Target Defense Against Adversarial Examples. (99%)Abderrahmen Amich; Birhanu Eshete
DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors. (93%)Yexin Duan; Jialin Chen; Xingyu Zhou; Junhua Zou; Zhengyun He; Wu Zhang; Jin Zhang; Zhisong Pan
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction. (83%)Zhenrui Yue; Zhankui He; Huimin Zeng; Julian McAuley
Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning. (75%)Doha Al Bared; Mohamed Nassar
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. (4%)Linyang Li; Demin Song; Xiaonan Li; Jiehang Zeng; Ruotian Ma; Xipeng Qiu
2021-08-30
Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings. (99%)Mazda Moayeri; Soheil Feizi
Investigating Vulnerabilities of Deep Neural Policies. (99%)Ezgi Korkmaz
Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise. (99%)Mingyu Dong; Diqun Yan; Yongkang Gong; Rangding Wang
Single Node Injection Attack against Graph Neural Networks. (68%)Shuchang Tao; Qi Cao; Huawei Shen; Junjie Huang; Yunfan Wu; Xueqi Cheng
Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms. (41%)Albert Jiménez Sanfiz; Mohamed Akrout
Adaptive perturbation adversarial training: based on reinforcement learning. (41%)Zhishen Nie; Ying Lin; Sp Ren; Lan Zhang
How Does Adversarial Fine-Tuning Benefit BERT? (33%)Javid Ebrahimi; Hao Yang; Wei Zhang
ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation. (26%)Ahmed Abusnaina; Afsah Anwar; Sultan Alshamrani; Abdulrahman Alabduljabbar; RhongHo Jang; Daehun Nyang; David Mohaisen
DuTrust: A Sentiment Analysis Dataset for Trustworthiness Evaluation. (1%)Lijie Wang; Hao Liu; Shuyuan Peng; Hongxuan Tang; Xinyan Xiao; Ying Chen; Hua Wu; Haifeng Wang
2021-08-29
Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution. (99%)Zongyi Li; Jianhan Xu; Jiehang Zeng; Linyang Li; Xiaoqing Zheng; Qi Zhang; Kai-Wei Chang; Cho-Jui Hsieh
Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models. (98%)Zeyuan Wang; Chaofeng Sha; Su Yang
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks. (82%)Shiwen Ni; Jiawen Li; Hung-Yu Kao
HAT4RD: Hierarchical Adversarial Training for Rumor Detection on Social Media. (81%)Shiwen Ni; Jiawen Li; Hung-Yu Kao
2021-08-27
Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights. (99%)Omid Kargarnovin; Amir Mahdi Sadeghzadeh; Rasool Jalili
Disrupting Adversarial Transferability in Deep Neural Networks. (98%)Christopher Wiedeman; Ge Wang
Evaluating the Robustness of Neural Language Models to Input Perturbations. (16%)Milad Moradi; Matthias Samwald
Deep learning models are not robust against noise in clinical text. (1%)Milad Moradi; Kathrin Blagec; Matthias Samwald
2021-08-26
Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks. (99%)Landan Seguin; Anthony Ndirango; Neeli Mishra; SueYeon Chung; Tyler Lee
A Hierarchical Assessment of Adversarial Severity. (98%)Guillaume Jeanneret; Juan C Perez; Pablo Arbelaez
Physical Adversarial Attacks on an Aerial Imagery Object Detector. (96%)Andrew Du; Bo Chen; Tat-Jun Chin; Yee Wei Law; Michele Sasdelli; Ramesh Rajasegaran; Dillon Campbell
Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference. (80%)Yang Zheng; Xiaoyi Feng; Zhaoqiang Xia; Xiaoyue Jiang; Ambra Demontis; Maura Pintor; Battista Biggio; Fabio Roli
Detection and Continual Learning of Novel Face Presentation Attacks. (2%)Mohammad Rostami; Leonidas Spinoulas; Mohamed Hussein; Joe Mathai; Wael Abd-Almageed
2021-08-25
Adversarially Robust One-class Novelty Detection. (99%)Shao-Yuan Lo; Poojan Oza; Vishal M. Patel
Certifiers Make Neural Networks Vulnerable to Availability Attacks. (99%)Tobias Lorenz; Marta Kwiatkowska; Mario Fritz
Bridged Adversarial Training. (93%)Hoki Kim; Woojin Lee; Sungyoon Lee; Jaewook Lee
Generalized Real-World Super-Resolution through Adversarial Robustness. (93%)Angela Castillo; María Escobar; Juan C. Pérez; Andrés Romero; Radu Timofte; Luc Van Gool; Pablo Arbeláez
2021-08-24
Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE. (99%)Wenzhao Xiang; Chang Liu; Shibao Zheng
Are socially-aware trajectory prediction models really socially-aware? (92%)Saeed Saadatnejad; Mohammadhossein Bahari; Pedram Khorsandi; Mohammad Saneian; Seyed-Mohsen Moosavi-Dezfooli; Alexandre Alahi
OOWL500: Overcoming Dataset Collection Bias in the Wild. (76%)Brandon Leung; Chih-Hui Ho; Amir Persekian; David Orozco; Yen Chang; Erik Sandstrom; Bo Liu; Nuno Vasconcelos
StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures. (1%)Sanghyuk Chun; Song Park
2021-08-23
Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications. (99%)Wenjie Ruan; Xinping Yi; Xiaowei Huang
Semantic-Preserving Adversarial Text Attacks. (99%)Xinghao Yang; Weifeng Liu; James Bailey; Tianqing Zhu; Dacheng Tao; Wei Liu
Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks. (99%)Nima Mirnateghi; Syed Afaq Ali Shah; Mohammed Bennamoun
Kryptonite: An Adversarial Attack Using Regional Focus. (99%)Yogesh Kulkarni; Krisha Bhambani
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning. (73%)Virat Shejwalkar; Amir Houmansadr; Peter Kairouz; Daniel Ramage
SegMix: Co-occurrence Driven Mixup for Semantic Segmentation and Adversarial Robustness. (4%)Md Amirul Islam; Matthew Kowal; Konstantinos G. Derpanis; Neil D. B. Bruce
2021-08-22
Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations. (99%)Inci M. Baytas; Debayan Deb
Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency. (98%)Xueping Wang; Shasha Li; Min Liu; Yaonan Wang; Amit K. Roy-Chowdhury
Relating CNNs with brain: Challenges and findings. (10%)Reem Abdel-Salam
2021-08-21
A Hard Label Black-box Adversarial Attack Against Graph Neural Networks. (99%)Jiaming Mu; Binghui Wang; Qi Li; Kun Sun; Mingwei Xu; Zhuotao Liu
"Adversarial Examples" for Proof-of-Learning. (98%)Rui Zhang; Jian Liu; Yuan Ding; Qingbiao Wu; Kui Ren
Regularizing Instabilities in Image Reconstruction Arising from Learned Denoisers. (2%)Abinash Nayak
2021-08-20
AdvDrop: Adversarial Attack to DNNs by Dropping Information. (99%)Ranjie Duan; Yuefeng Chen; Dantong Niu; Yun Yang; A. K. Qin; Yuan He
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier. (99%)Chong Xiang; Saeed Mahloujifar; Prateek Mittal
Integer-arithmetic-only Certified Robustness for Quantized Neural Networks. (98%)Haowen Lin; Jian Lou; Li Xiong; Cyrus Shahabi
Towards Understanding the Generative Capability of Adversarially Robust Classifiers. (98%)Yao Zhu; Jiacheng Ma; Jiacheng Sun; Zewei Chen; Rongxin Jiang; Zhenguo Li
Detecting and Segmenting Adversarial Graphics Patterns from Images. (93%)Xiangyu Qu; Stanley H. Chan
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning. (1%)Ege Erdogan; Alptekin Kupcu; A. Ercument Cicek
Early-exit deep neural networks for distorted images: providing an efficient edge offloading. (1%)Roberto G. Pacheco; Fernanda D. V. R. Oliveira; Rodrigo S. Couto
2021-08-19
Application of Adversarial Examples to Physical ECG Signals. (99%)Taiga Ono; Takeshi Sugawara; Jun Sakuma; Tatsuya Mori
Pruning in the Face of Adversaries. (99%)Florian Merkle; Maximilian Samsinger; Pascal Schöttle
ASAT: Adaptively Scaled Adversarial Training in Time Series. (98%)Zhiyuan Zhang; Wei Li; Ruihan Bao; Keiko Harimoto; Yunfang Wu; Xu Sun
Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain. (80%)Guangyao Chen; Peixi Peng; Li Ma; Jia Li; Lin Du; Yonghong Tian
2021-08-18
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better. (99%)Bojia Zi; Shihao Zhao; Xingjun Ma; Yu-Gang Jiang
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes. (98%)Mingjun Yin; Shasha Li; Zikui Cai; Chengyu Song; M. Salman Asif; Amit K. Roy-Chowdhury; Srikanth V. Krishnamurthy
MBRS: Enhancing Robustness of DNN-based Watermarking by Mini-Batch of Real and Simulated JPEG Compression. (45%)Zhaoyang Jia; Han Fang; Weiming Zhang
Proceedings of the 1st International Workshop on Adaptive Cyber Defense. (1%)Damian Marriott; Kimberly Ferguson-Walter; Sunny Fugate; Marco Carvalho
2021-08-17
When Should You Defend Your Classifier -- A Game-theoretical Analysis of Countermeasures against Adversarial Examples. (98%)Maximilian Samsinger; Florian Merkle; Pascal Schöttle; Tomas Pevny
Adversarial Relighting Against Face Recognition. (98%)Qian Zhang; Qing Guo; Ruijun Gao; Felix Juefei-Xu; Hongkai Yu; Wei Feng
Semantic Perturbations with Normalizing Flows for Improved Generalization. (13%)Oguz Kaan Yuksel; Sebastian U. Stich; Martin Jaggi; Tatjana Chavdarova
Coalesced Multi-Output Tsetlin Machines with Clause Sharing. (1%)Sondre Glimsdal; Ole-Christoffer Granmo
Appearance Based Deep Domain Adaptation for the Classification of Aerial Images. (1%)Dennis Wittich; Franz Rottensteiner
2021-08-16
Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy. (99%)Ruikui Wang; Yuanfang Guo; Ruijie Yang; Yunhong Wang
Interpreting Attributions and Interactions of Adversarial Attacks. (83%)Xin Wang; Shuyun Lin; Hao Zhang; Yufei Zhu; Quanshi Zhang
Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? (62%)Max Lennon; Nathan Drenkow; Philippe Burlina
NeuraCrypt is not private. (10%)Nicholas Carlini; Sanjam Garg; Somesh Jha; Saeed Mahloujifar; Mohammad Mahmoody; Florian Tramer
Identifying and Exploiting Structures for Reliable Deep Learning. (2%)Amartya Sanyal
On the Opportunities and Risks of Foundation Models. (2%)Rishi Bommasani; Drew A. Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney von Arx; Michael S. Bernstein; Jeannette Bohg; Antoine Bosselut; Emma Brunskill; Erik Brynjolfsson; Shyamal Buch; Dallas Card; Rodrigo Castellon; Niladri Chatterji; Annie Chen; Kathleen Creel; Jared Quincy Davis; Dora Demszky; Chris Donahue; Moussa Doumbouya; Esin Durmus; Stefano Ermon; John Etchemendy; Kawin Ethayarajh; Li Fei-Fei; Chelsea Finn; Trevor Gale; Lauren Gillespie; Karan Goel; Noah Goodman; Shelby Grossman; Neel Guha; Tatsunori Hashimoto; Peter Henderson; John Hewitt; Daniel E. Ho; Jenny Hong; Kyle Hsu; Jing Huang; Thomas Icard; Saahil Jain; Dan Jurafsky; Pratyusha Kalluri; Siddharth Karamcheti; Geoff Keeling; Fereshte Khani; Omar Khattab; Pang Wei Koh; Mark Krass; Ranjay Krishna; Rohith Kuditipudi; Ananya Kumar; Faisal Ladhak; Mina Lee; Tony Lee; Jure Leskovec; Isabelle Levent; Xiang Lisa Li; Xuechen Li; Tengyu Ma; Ali Malik; Christopher D. Manning; Suvir Mirchandani; Eric Mitchell; Zanele Munyikwa; Suraj Nair; Avanika Narayan; Deepak Narayanan; Ben Newman; Allen Nie; Juan Carlos Niebles; Hamed Nilforoshan; Julian Nyarko; Giray Ogut; Laurel Orr; Isabel Papadimitriou; Joon Sung Park; Chris Piech; Eva Portelance; Christopher Potts; Aditi Raghunathan; Rob Reich; Hongyu Ren; Frieda Rong; Yusuf Roohani; Camilo Ruiz; Jack Ryan; Christopher Ré; Dorsa Sadigh; Shiori Sagawa; Keshav Santhanam; Andy Shih; Krishnan Srinivasan; Alex Tamkin; Rohan Taori; Armin W. Thomas; Florian Tramèr; Rose E. Wang; William Wang; Bohan Wu; Jiajun Wu; Yuhuai Wu; Sang Michael Xie; Michihiro Yasunaga; Jiaxuan You; Matei Zaharia; Michael Zhang; Tianyi Zhang; Xikun Zhang; Yuhui Zhang; Lucia Zheng; Kaitlyn Zhou; Percy Liang
2021-08-15
Neural Architecture Dilation for Adversarial Robustness. (81%)Yanxi Li; Zhaohui Yang; Yunhe Wang; Chang Xu
Deep Adversarially-Enhanced k-Nearest Neighbors. (74%)Ren Wang; Tianqi Chen
IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance. (1%)Ruixuan Liu; Changliu Liu
2021-08-14
LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis. (1%)Fan Wu; Yunhui Long; Ce Zhang; Bo Li
2021-08-13
Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks. (99%)Federico Nesti; Giulio Rossolini; Saasha Nair; Alessandro Biondi; Giorgio Buttazzo
Optical Adversarial Attack. (98%)Abhiram Gnanasambandam; Alex M. Sherman; Stanley H. Chan
Understanding Structural Vulnerability in Graph Convolutional Networks. (96%)Liang Chen; Jintang Li; Qibiao Peng; Yang Liu; Zibin Zheng; Carl Yang
The Forgotten Threat of Voltage Glitching: A Case Study on Nvidia Tegra X2 SoCs. (1%)Otto Bittner; Thilo Krachenfels; Andreas Galauner; Jean-Pierre Seifert
2021-08-12
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning. (99%)Hong Wang; Yuefan Deng; Shinjae Yoo; Haibin Ling; Yuewei Lin
Deep adversarial attack on target detection systems. (99%)Uche M. Osahor; Nasser M. Nasrabadi
Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate. (69%)Hannah Rose Kirk; Bertram Vidgen; Paul Röttger; Tristan Thrush; Scott A. Hale
2021-08-11
Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attacks. (99%)Zitao Chen; Pritam Dash; Karthik Pattabiraman
Attacks against Ranking Algorithms with Text Embeddings: a Case Study on Recruitment Algorithms. (78%)Anahita Samadi; Debapriya Banerjee; Shirin Nilizadeh
Are Neural Ranking Models Robust? (4%)Chen Wu; Ruqing Zhang; Jiafeng Guo; Yixing Fan; Xueqi Cheng
Logic Explained Networks. (1%)Gabriele Ciravegna; Pietro Barbiero; Francesco Giannini; Marco Gori; Pietro Lió; Marco Maggini; Stefano Melacci
2021-08-10
Simple black-box universal adversarial attacks on medical image classification based on deep neural networks. (99%)Kazuki Koga; Kazuhiro Takemoto
On the Effect of Pruning on Adversarial Robustness. (81%)Artur Jordao; Helio Pedrini
SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version). (68%)Nils Lukas; Edward Jiang; Xinda Li; Florian Kerschbaum
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. (64%)Sanchit Sinha; Hanjie Chen; Arshdeep Sekhon; Yangfeng Ji; Yanjun Qi
UniNet: A Unified Scene Understanding Network and Exploring Multi-Task Relationships through the Lens of Adversarial Attacks. (2%)NareshKumar Gurulingan; Elahe Arani; Bahram Zonooz
Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation. (1%)Weilun Wang; Wengang Zhou; Jianmin Bao; Dong Chen; Houqiang Li
2021-08-09
Meta Gradient Adversarial Attack. (99%)Zheng Yuan; Jie Zhang; Yunpei Jia; Chuanqi Tan; Tao Xue; Shiguang Shan
On Procedural Adversarial Noise Attack And Defense. (99%)Jun Yan; Xiaoyang Deng; Huilin Yin; Wancheng Ge
Enhancing Knowledge Tracing via Adversarial Training. (98%)Xiaopeng Guo; Zhijie Huang; Jie Gao; Mingyu Shang; Maojing Shu; Jun Sun
Neural Network Repair with Reachability Analysis. (96%)Xiaodong Yang; Tom Yamaguchi; Hoang-Dung Tran; Bardh Hoxha; Taylor T Johnson; Danil Prokhorov
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks. (92%)Fereshteh Razmi; Li Xiong
Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning. (82%)Wanqi Xue; Wei Qiu; Bo An; Zinovi Rabinovich; Svetlana Obraztsova; Chai Kiat Yeo
Privacy-Preserving Machine Learning: Methods, Challenges and Directions. (16%)Runhua Xu; Nathalie Baracaldo; James Joshi
Explainable AI and susceptibility to adversarial attacks: a case study in classification of breast ultrasound images. (15%)Hamza Rasaee; Hassan Rivaz
2021-08-07
Jointly Attacking Graph Neural Network and its Explanations. (96%)Wenqi Fan; Wei Jin; Xiaorui Liu; Han Xu; Xianfeng Tang; Suhang Wang; Qing Li; Jiliang Tang; Jianping Wang; Charu Aggarwal
Membership Inference Attacks on Lottery Ticket Networks. (33%)Aadesh Bagmar; Shishira R Maiya; Shruti Bidwalka; Amol Deshpande
Information Bottleneck Approach to Spatial Attention Learning. (1%)Qiuxia Lai; Yu Li; Ailing Zeng; Minhao Liu; Hanqiu Sun; Qiang Xu
2021-08-06
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles. (80%)Jindi Zhang; Yang Lou; Jianping Wang; Kui Wu; Kejie Lu; Xiaohua Jia
Ensemble Augmentation for Deep Neural Networks Using 1-D Time Series Vibration Data. (2%)Atik Faysal; Ngui Wai Keng; M. H. Lim
2021-08-05
BOSS: Bidirectional One-Shot Synthesis of Adversarial Examples. (99%)Ismail Alkhouri; Alvaro Velasquez; George Atia
Poison Ink: Robust and Invisible Backdoor Attack. (99%)Jie Zhang; Dongdong Chen; Jing Liao; Qidong Huang; Gang Hua; Weiming Zhang; Nenghai Yu
Imperceptible Adversarial Examples by Spatial Chroma-Shift. (99%)Ayberk Aydin; Deniz Sen; Berat Tuna Karli; Oguz Hanoglu; Alptekin Temizel
Householder Activations for Provable Robustness against Adversarial Attacks. (83%)Sahil Singla; Surbhi Singla; Soheil Feizi
Fairness Properties of Face Recognition and Obfuscation Systems. (68%)Harrison Rosenberg; Brian Tang; Kassem Fawaz; Somesh Jha
Exploring Structure Consistency for Deep Model Watermarking. (10%)Jie Zhang; Dongdong Chen; Jing Liao; Han Fang; Zehua Ma; Weiming Zhang; Gang Hua; Nenghai Yu
Locally Interpretable One-Class Anomaly Detection for Credit Card Fraud Detection. (1%)Tungyu Wu; Youting Wang
2021-08-04
Robust Transfer Learning with Pretrained Language Models through Adapters. (82%)Wenjuan Han; Bo Pang; Yingnian Wu
Semi-supervised Conditional GAN for Simultaneous Generation and Detection of Phishing URLs: A Game theoretic Perspective. (31%)Sharif Amit Kamran; Shamik Sengupta; Alireza Tavakkoli
2021-08-03
On the Robustness of Domain Adaption to Adversarial Attacks. (99%)Liyuan Zhang; Yuhang Zhou; Lei Zhang
On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples. (99%)Adelin Travers; Lorna Licollari; Guanghan Wang; Varun Chandrasekaran; Adam Dziedzic; David Lie; Nicolas Papernot
AdvRush: Searching for Adversarially Robust Neural Architectures. (99%)Jisoo Mok; Byunggook Na; Hyeokjun Choe; Sungroh Yoon
The Devil is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models. (88%)Ambrish Rawat; Killian Levacher; Mathieu Sinn
DeepFreeze: Cold Boot Attacks and High Fidelity Model Recovery on Commercial EdgeML Device. (69%)Yoo-Seung Won; Soham Chatterjee; Dirmanto Jap; Arindam Basu; Shivam Bhasin
Tutorials on Testing Neural Networks. (1%)Nicolas Berthier; Youcheng Sun; Wei Huang; Yanghao Zhang; Wenjie Ruan; Xiaowei Huang
2021-08-02
Hybrid Classical-Quantum Deep Learning Models for Autonomous Vehicle Traffic Image Classification Under Adversarial Attack. (98%)Reek Majumder; Sakib Mahmud Khan; Fahim Ahmed; Zadid Khan; Frank Ngeni; Gurcan Comert; Judith Mwakalonge; Dimitra Michalaka; Mashrur Chowdhury
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles. (10%)Anum Talpur; Mohan Gurusamy
Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks. (9%)Yuwei Sun; Ng Chong; Hideya Ochiai
Efficacy of Statistical and Artificial Intelligence-based False Information Cyberattack Detection Models for Connected Vehicles. (1%)Sakib Mahmud Khan; Gurcan Comert; Mashrur Chowdhury
2021-08-01
Advances in adversarial attacks and defenses in computer vision: A survey. (92%)Naveed Akhtar; Ajmal Mian; Navid Kardan; Mubarak Shah
Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders. (80%)Huimin Zeng; Jiahao Su; Furong Huang
An Effective and Robust Detector for Logo Detection. (70%)Xiaojun Jia; Huanqian Yan; Yonglin Wu; Xingxing Wei; Xiaochun Cao; Yong Zhang
Style Curriculum Learning for Robust Medical Image Segmentation. (2%)Zhendong Liu; Van Manh; Xin Yang; Xiaoqiong Huang; Karim Lekadir; Víctor Campello; Nishant Ravikumar; Alejandro F Frangi; Dong Ni
2021-07-31
Delving into Deep Image Prior for Adversarial Defense: A Novel Reconstruction-based Defense Framework. (99%)Li Ding; Yongwei Wang; Xin Ding; Kaiwen Yuan; Ping Wang; Hua Huang; Z. Jane Wang
Adversarial Robustness of Deep Code Comment Generation. (99%)Yu Zhou; Xiaoqing Zhang; Juanjuan Shen; Tingting Han; Taolue Chen; Harald Gall
Towards Adversarially Robust and Domain Generalizable Stereo Matching by Rethinking DNN Feature Backbones. (93%)Kelvin Cheng; Christopher Healey; Tianfu Wu
T$_k$ML-AP: Adversarial Attacks to Top-$k$ Multi-Label Learning. (81%)Shu Hu; Lipeng Ke; Xin Wang; Siwei Lyu
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. (67%)Jinyuan Jia; Yupei Liu; Neil Zhenqiang Gong
Fair Representation Learning using Interpolation Enabled Disentanglement. (1%)Akshita Jha; Bhanukiran Vinzamuri; Chandan K. Reddy
2021-07-30
Who's Afraid of Thomas Bayes? (92%)Erick Galinkin
Practical Attacks on Voice Spoofing Countermeasures. (86%)Andre Kassis; Urs Hengartner
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers. (50%)Stefanos Koffas; Jing Xu; Mauro Conti; Stjepan Picek
Unveiling the potential of Graph Neural Networks for robust Intrusion Detection. (13%)David Pujol-Perich; José Suárez-Varela; Albert Cabellos-Aparicio; Pere Barlet-Ros
2021-07-29
Feature Importance-aware Transferable Adversarial Attacks. (99%)Zhibo Wang; Hengchang Guo; Zhifei Zhang; Wenxin Liu; Zhan Qin; Kui Ren
Enhancing Adversarial Robustness via Test-time Transformation Ensembling. (98%)Juan C. Pérez; Motasem Alfarra; Guillaume Jeanneret; Laura Rueda; Ali Thabet; Bernard Ghanem; Pablo Arbeláez
The Robustness of Graph k-shell Structure under Adversarial Attacks. (93%)B. Zhou; Y. Q. Lv; Y. C. Mao; J. H. Wang; S. Q. Yu; Q. Xuan
Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality. (31%)Vito Walter Anelli; Yashar Deldjoo; Tommaso Di Noia; Felice Antonio Merra
Towards robust vision by multi-task learning on monkey visual cortex. (3%)Shahd Safarani; Arne Nix; Konstantin Willeke; Santiago A. Cadena; Kelli Restivo; George Denfield; Andreas S. Tolias; Fabian H. Sinz
2021-07-28
Imbalanced Adversarial Training with Reweighting. (86%)Wentao Wang; Han Xu; Xiaorui Liu; Yaxin Li; Bhavani Thuraisingham; Jiliang Tang
Towards Robustness Against Natural Language Word Substitutions. (73%)Xinshuai Dong; Anh Tuan Luu; Rongrong Ji; Hong Liu
Models of Computational Profiles to Study the Likelihood of DNN Metamorphic Test Cases. (67%)Ettore Merlo; Mira Marhaba; Foutse Khomh; Houssem Ben Braiek; Giuliano Antoniol
WaveCNet: Wavelet Integrated CNNs to Suppress Aliasing Effect for Noise-Robust Image Classification. (15%)Qiufu Li; Linlin Shen; Sheng Guo; Zhihui Lai
TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing. (2%)Aoting Hu; Renjie Xie; Zhigang Lu; Aiqun Hu; Minhui Xue
2021-07-27
Towards Black-box Attacks on Deep Learning Apps. (89%)Hongchen Cao; Shuai Li; Yuming Zhou; Ming Fan; Xuejiao Zhao; Yutian Tang
Poisoning Online Learning Filters: DDoS Attacks and Countermeasures. (50%)Wesley Joon-Wie Tann; Ee-Chien Chang
PDF-Malware: An Overview on Threats, Detection and Evasion Attacks. (8%)Nicolas Fleury; Theo Dubrunquez; Ihsen Alouani
2021-07-26
Benign Adversarial Attack: Tricking Models for Goodness. (99%)Jitao Sang; Xian Zhao; Jiaming Zhang; Zhiyu Lin
Learning to Adversarially Blur Visual Object Tracking. (98%)Qing Guo; Ziyi Cheng; Felix Juefei-Xu; Lei Ma; Xiaofei Xie; Yang Liu; Jianjun Zhao
Adversarial Attacks with Time-Scale Representations. (96%)Alberto Santamaria-Pang; Jianwei Qiu; Aritra Chowdhury; James Kubricht; Peter Tu; Naresh Iyer; Nurali Virani
2021-07-24
Adversarial training may be a double-edged sword. (99%)Ali Rahmati; Seyed-Mohsen Moosavi-Dezfooli; Huaiyu Dai
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. (98%)Florian Tramèr
Stress Test Evaluation of Biomedical Word Embeddings. (73%)Vladimir Araujo; Andrés Carvallo; Carlos Aspillaga; Camilo Thorne; Denis Parra
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering. (1%)Jingjing Jiang; Ziyi Liu; Yifan Liu; Zhixiong Nan; Nanning Zheng
2021-07-23
A Differentiable Language Model Adversarial Attack on Text Classifiers. (99%)Ivan Fursov; Alexey Zaytsev; Pavel Burnyshev; Ekaterina Dmitrieva; Nikita Klyuchnikov; Andrey Kravchenko; Ekaterina Artemova; Evgeny Burnaev
Structack: Structure-based Adversarial Attacks on Graph Neural Networks. (86%)Hussain Hussain; Tomislav Duricic; Elisabeth Lex; Denis Helic; Markus Strohmaier; Roman Kern
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation. (45%)Bingqian Lin; Yi Zhu; Yanxin Long; Xiaodan Liang; Qixiang Ye; Liang Lin
Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers. (8%)Yunhui Guo; Xudong Wang; Yubei Chen; Stella X. Yu
2021-07-22
On the Certified Robustness for Ensemble Models and Beyond. (99%)Zhuolin Yang; Linyi Li; Xiaojun Xu; Bhavya Kailkhura; Tao Xie; Bo Li
Unsupervised Detection of Adversarial Examples with Model Explanations. (99%)Gihyuk Ko; Gyumin Lim
Membership Inference Attack and Defense for Wireless Signal Classifiers with Deep Learning. (83%)Yi Shi; Yalin E. Sagduyu
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks. (75%)Ramin Barati; Reza Safabakhsh; Mohammad Rahmati
Estimating Predictive Uncertainty Under Program Data Distribution Shift. (1%)Yufei Li; Simin Chen; Wei Yang
Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack. (1%)Fan Wu; Min Gao; Junliang Yu; Zongwei Wang; Kecheng Liu; Xu Wange
2021-07-21
Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients. (98%)Huimin Wu; Zhengmian Hu; Bin Gu
Improved Text Classification via Contrastive Adversarial Training. (84%)Lin Pan; Chung-Wei Hang; Avirup Sil; Saloni Potdar
Black-box Probe for Unsupervised Domain Adaptation without Model Transferring. (81%)Kunhong Wu; Yucheng Shi; Yahong Han; Yunfeng Shao; Bingshuai Li
Defending against Reconstruction Attack in Vertical Federated Learning. (10%)Jiankai Sun; Yuanshun Yao; Weihao Gao; Junyuan Xie; Chong Wang
Generative Models for Security: Attacks, Defenses, and Opportunities. (10%)Luke A. Bauer; Vincent Bindschaedler
A Tandem Framework Balancing Privacy and Security for Voice User Interfaces. (5%)Ranya Aloufi; Hamed Haddadi; David Boyle
Spinning Sequence-to-Sequence Models with Meta-Backdoors. (4%)Eugene Bagdasaryan; Vitaly Shmatikov
On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms. (2%)Shuyu Cheng; Guoqiang Wu; Jun Zhu
2021-07-20
Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks. (99%)Saikat Majumdar; Mohammad Hossein Samavatian; Kristin Barber; Radu Teodorescu
A Markov Game Model for AI-based Cyber Security Attack Mitigation. (10%)Hooman Alavizadeh; Julian Jang-Jaccard; Tansu Alpcan; Seyit A. Camtepe
Leaking Secrets through Modern Branch Predictor in the Speculative World. (1%)Md Hafizul Islam Chowdhuryy; Fan Yao
2021-07-19
Discriminator-Free Generative Adversarial Attack. (99%)Shaohao Lu; Yuqiao Xian; Ke Yan; Yi Hu; Xing Sun; Xiaowei Guo; Feiyue Huang; Wei-Shi Zheng
Feature-Filter: Detecting Adversarial Examples through Filtering off Recessive Features. (99%)Hui Liu; Bo Zhao; Yuefeng Peng; Jiabao Guo; Peng Liu
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition. (98%)Benjamin Spetter-Goldstein; Nataniel Ruiz; Sarah Adel Bargal
On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples. (80%)Verena Praher; Katharina Prinz; Arthur Flexer; Gerhard Widmer
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI. (33%)Takayuki Miura; Satoshi Hasegawa; Toshiki Shibahara
Structural Watermarking to Deep Neural Networks via Network Channel Pruning. (11%)Xiangyu Zhao; Yinzhe Yao; Hanzhou Wu; Xinpeng Zhang
Generative Adversarial Neural Cellular Automata. (1%)Maximilian Otte; Quentin Delfosse; Johannes Czech; Kristian Kersting
Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units. (1%)Woo-Jeoung Nam; Seong-Whan Lee
Just Train Twice: Improving Group Robustness without Training Group Information. (1%)Evan Zheran Liu; Behzad Haghgoo; Annie S. Chen; Aditi Raghunathan; Pang Wei Koh; Shiori Sagawa; Percy Liang; Chelsea Finn
2021-07-18
RobustFed: A Truth Inference Approach for Robust Federated Learning. (1%)Farnaz Tahmasebian; Jian Lou; Li Xiong
2021-07-17
BEDS-Bench: Behavior of EHR-models under Distributional Shift--A Benchmark. (9%)Anand Avati; Martin Seneviratne; Emily Xue; Zhen Xu; Balaji Lakshminarayanan; Andrew M. Dai
2021-07-16
EGC2: Enhanced Graph Classification with Easy Graph Compression. (89%)Jinyin Chen; Haiyang Xiong; Haibin Zheng; Dunjie Zhang; Jian Zhang; Mingwei Jia; Yi Liu
Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI. (1%)Quanshi Zhang; Tian Han; Lixin Fan; Zhanxing Zhu; Hang Su; Ying Nian Wu; Jie Ren; Hao Zhang
2021-07-15
Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks. (99%)Zhao Meng; Yihan Dong; Mrinmaya Sachan; Roger Wattenhofer
Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving. (98%)Ibrahim Sobh; Ahmed Hamed; Varun Ravi Kumar; Senthil Yogamani
ECG-Adv-GAN: Detecting ECG Adversarial Examples with Conditional Generative Adversarial Networks. (92%)Khondker Fariha Hossain; Sharif Amit Kamran; Alireza Tavakkoli; Lei Pan; Xingjun Ma; Sutharshan Rajasegarar; Chandan Karmaker
Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks. (80%)Ismail Alarab; Simant Prakoonwit
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting. (16%)Xiangyu Qi; Jifeng Zhu; Chulin Xie; Yong Yang
Tailor: Generating and Perturbing Text with Semantic Controls. (3%)Alexis Ross; Tongshuang Wu; Hao Peng; Matthew E. Peters; Matt Gardner
Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks. (1%)Andrey Malinin; Neil Band; Alexander Ganshin; German Chesnokov; Yarin Gal; Mark J. F. Gales; Alexey Noskov; Andrey Ploskonosov; Liudmila Prokhorenkova; Ivan Provilkov; Vatsal Raina; Vyas Raina; Denis Roginskiy; Mariya Shmatova; Panos Tigas; Boris Yangel
2021-07-14
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning. (99%)Yihao Huang; Qing Guo; Felix Juefei-Xu; Lei Ma; Weikai Miao; Yang Liu; Geguang Pu
Conservative Objective Models for Effective Offline Model-Based Optimization. (67%)Brandon Trabucco; Aviral Kumar; Xinyang Geng; Sergey Levine
2021-07-13
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense. (88%)Duhun Hwang; Eunjung Lee; Wonjong Rhee
Using BERT Encoding to Tackle the Mad-lib Attack in SMS Spam Detection. (69%)Sergio Rojas-Galeano
Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors. (41%)M. Ben Amor; J. Stier; M. Granitzer
What classifiers know what they don't? (1%)Mohamed Ishmael Belghazi; David Lopez-Paz
2021-07-12
EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks. (99%)Andrei Ilie; Marius Popescu; Alin Stefanescu
Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising. (99%)Anouar Kherchouche; Sid Ahmed Fezza; Wassim Hamidouche
Perceptual-based deep-learning denoiser as a defense against adversarial attacks on ASR systems. (96%)Anirudh Sreeram; Nicholas Mehlman; Raghuveer Peri; Dillon Knox; Shrikanth Narayanan
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning. (81%)Jun Wang; Chang Xu; Francisco Guzman; Ahmed El-Kishky; Yuqing Tang; Benjamin I. P. Rubinstein; Trevor Cohn
A Closer Look at the Adversarial Robustness of Information Bottleneck Models. (70%)Iryna Korshunova; David Stutz; Alexander A. Alemi; Olivia Wiles; Sven Gowal
SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks. (56%)Timoleon Moraitis; Dmitry Toichkin; Yansong Chua; Qinghai Guo
2021-07-11
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks. (76%)Kendra Albert; Maggie Delano; Bogdan Kulynych; Ram Shankar Siva Kumar
Stateful Detection of Model Extraction Attacks. (2%)Soham Pal; Yash Gupta; Aditya Kanade; Shirish Shevade
Attack Rules: An Adversarial Approach to Generate Attacks for Industrial Control Systems using Machine Learning. (1%)Muhammad Azmi Umer; Chuadhry Mujeeb Ahmed; Muhammad Taha Jilani; Aditya P. Mathur
2021-07-10
Hack The Box: Fooling Deep Learning Abstraction-Based Monitors. (91%)Sara Hajj Ibrahim; Mohamed Nassar
HOMRS: High Order Metamorphic Relations Selector for Deep Neural Networks. (88%)Florian Tambon; Giulio Antoniol; Foutse Khomh
Identifying Layers Susceptible to Adversarial Attacks. (83%)Shoaib Ahmed Siddiqui; Thomas Breuel
Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis. (22%)Anisie Uwimana; Ransalu Senanayake
Cyber-Security Challenges in Aviation Industry: A Review of Current and Future Trends. (1%)Elochukwu Ukwandu; Mohamed Amine Ben Farah; Hanan Hindy; Miroslav Bures; Robert Atkinson; Christos Tachtatzis; Xavier Bellekens
2021-07-09
Learning to Detect Adversarial Examples Based on Class Scores. (99%)Tobias Uelwer; Felix Michels; Oliver De Candido
Resilience of Autonomous Vehicle Object Category Detection to Universal Adversarial Perturbations. (99%)Mohammad Nayeem Teli; Seungwon Oh
Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems. (99%)Shangyu Xie; Han Wang; Yu Kong; Yuan Hong
GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network. (98%)Zuohui Chen; Renxuan Wang; Jingyang Xiang; Yue Yu; Xin Xia; Shouling Ji; Qi Xuan; Xiaoniu Yang
Towards Robust General Medical Image Segmentation. (83%)Laura Daza; Juan C. Pérez; Pablo Arbeláez
ARC: Adversarially Robust Control Policies for Autonomous Vehicles. (38%)Sampo Kuutti; Saber Fallah; Richard Bowden
2021-07-08
Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models. (99%)Daniel Park; Haidar Khan; Azer Khan; Alex Gittens; Bülent Yener
Improving Model Robustness with Latent Distribution Locally and Globally. (99%)Zhuang Qian; Shufei Zhang; Kaizhu Huang; Qiufeng Wang; Rui Zhang; Xinping Yi
Analytically Tractable Hidden-States Inference in Bayesian Neural Networks. (50%)Luong-Ha Nguyen; James-A. Goulet
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning. (33%)Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Jihun Hamm
2021-07-07
Controlled Caption Generation for Images Through Adversarial Attacks. (99%)Nayyer Aafaq; Naveed Akhtar; Wei Liu; Mubarak Shah; Ajmal Mian
Incorporating Label Uncertainty in Understanding Adversarial Robustness. (38%)Xiao Zhang; David Evans
RoFL: Attestable Robustness for Secure Federated Learning. (2%)Lukas Burkhalter; Hidde Lycklama; Alexander Viand; Nicolas Küchler; Anwar Hithnawi
2021-07-06
GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization. (99%)Sungyoon Lee; Hoki Kim; Jaewook Lee
Self-Adversarial Training incorporating Forgery Attention for Image Forgery Localization. (95%)Long Zhuo; Shunquan Tan; Bin Li; Jiwu Huang
ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients. (76%)Alessandro Cappelli; Julien Launay; Laurent Meunier; Ruben Ohana; Iacopo Poli
On Generalization of Graph Autoencoders with Adversarial Training. (12%)Tianjin Huang; Yulong Pei; Vlado Menkovski; Mykola Pechenizkiy
On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving. (1%)Takami Sato; Qi Alfred Chen
2021-07-05
When and How to Fool Explainable Models (and Humans) with Adversarial Examples. (99%)Jon Vadillo; Roberto Santana; Jose A. Lozano
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks. (99%)Xiao Yang; Yinpeng Dong; Tianyu Pang; Hang Su; Jun Zhu
Adversarial Robustness of Probabilistic Network Embedding for Link Prediction. (87%)Xi Chen; Bo Kang; Jefrey Lijffijt; Tijl De Bie
Dealing with Adversarial Player Strategies in the Neural Network Game iNNk through Ensemble Learning. (69%)Mathias Löwe; Jennifer Villareale; Evan Freed; Aleksanteri Sladek; Jichen Zhu; Sebastian Risi
Understanding the Security of Deepfake Detection. (33%)Xiaoyu Cao; Neil Zhenqiang Gong
Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems. (15%)Ron Bitton; Nadav Maman; Inderjeet Singh; Satoru Momiyama; Yuval Elovici; Asaf Shabtai
Poisoning Attack against Estimating from Pairwise Comparisons. (15%)Ke Ma; Qianqian Xu; Jinshan Zeng; Xiaochun Cao; Qingming Huang
Confidence Conditioned Knowledge Distillation. (10%)Sourav Mishra; Suresh Sundaram
2021-07-04
Certifiably Robust Interpretation via Renyi Differential Privacy. (67%)Ao Liu; Xiaoyu Chen; Sijia Liu; Lirong Xia; Chuang Gan
Mirror Mirror on the Wall: Next-Generation Wireless Jamming Attacks Based on Software-Controlled Surfaces. (1%)Paul Staat; Harald Elders-Boll; Christian Zenger; Christof Paar
2021-07-03
Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity. (99%)Yajie Wang; Shangbo Wu; Wenyi Jiang; Shengang Hao; Yu-an Tan; Quanxin Zhang
2021-07-01
Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples. (99%)Nelson Manohar-Alers; Ryan Feng; Sahib Singh; Jiguo Song; Atul Prakash
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks. (99%)Alberto Marchisio; Giacomo Pira; Maurizio Martina; Guido Masera; Muhammad Shafique
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding. (68%)Dong Wang; Ning Ding; Piji Li; Hai-Tao Zheng
Adversarial Sample Detection for Speaker Verification by Neural Vocoders. (41%)Haibin Wu; Po-chun Hsu; Ji Gao; Shanshan Zhang; Shen Huang; Jian Kang; Zhiyong Wu; Helen Meng; Hung-yi Lee
The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification. (16%)Alireza Mousavi Hosseini; Amir Mohammad Abouei; Mohammad Hossein Rohban
Reinforcement Learning for Feedback-Enabled Cyber Resilience. (10%)Yunhan Huang; Linan Huang; Quanyan Zhu
2021-06-30
Single-Step Adversarial Training for Semantic Segmentation. (96%)Daniel Wiens; Barbara Hammer
Adversarial examples within the training distribution: A widespread challenge. (93%)Spandan Madan; Tomotake Sasaki; Hanspeter Pfister; Tzu-Mao Li; Xavier Boix
Understanding Adversarial Attacks on Observations in Deep Reinforcement Learning. (84%)You Qiaoben; Chengyang Ying; Xinning Zhou; Hang Su; Jun Zhu; Bo Zhang
Explanation-Guided Diagnosis of Machine Learning Evasion Attacks. (82%)Abderrahmen Amich; Birhanu Eshete
Bi-Level Poisoning Attack Model and Countermeasure for Appliance Consumption Data of Smart Homes. (8%)Mustain Billah; Adnan Anwar; Ziaur Rahman; Syed Md. Galib
Exploring Robustness of Neural Networks through Graph Measures. (8%)Asim Waqas; Ghulam Rasool; Hamza Farooq; Nidhal C. Bouaynaya
A Context-Aware Information-Based Clone Node Attack Detection Scheme in Internet of Things. (1%)Khizar Hameed; Saurabh Garg; Muhammad Bilal Amin; Byeong Kang; Abid Khan
Understanding and Improving Early Stopping for Learning with Noisy Labels. (1%)Yingbin Bai; Erkun Yang; Bo Han; Yanhua Yang; Jiatong Li; Yinian Mao; Gang Niu; Tongliang Liu
2021-06-29
Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges. (99%)Bowei Xi
Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions. (99%)Juan Shu; Bowei Xi; Charles Kamhoua
Attack Transferability Characterization for Adversarially Robust Multi-label Classification. (99%)Zhuo Yang; Yufei Han; Xiangliang Zhang
Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices. (99%)Tao Bai; Jinqi Luo; Jun Zhao
Bio-Inspired Adversarial Attack Against Deep Neural Networks. (98%)Bowei Xi; Yujie Chen; Fan Fei; Zhan Tu; Xinyan Deng
Do Not Deceive Your Employer with a Virtual Background: A Video Conferencing Manipulation-Detection System. (62%)Mauro Conti; Simone Milani; Ehsan Nowroozi; Gabriele Orazi
The Threat of Offensive AI to Organizations. (54%)Yisroel Mirsky; Ambra Demontis; Jaidip Kotak; Ram Shankar; Deng Gelei; Liu Yang; Xiangyu Zhang; Wenke Lee; Yuval Elovici; Battista Biggio
Local Reweighting for Adversarial Training. (22%)Ruize Gao; Feng Liu; Kaiwen Zhou; Gang Niu; Bo Han; James Cheng
On the Interaction of Belief Bias and Explanations. (15%)Ana Valeria Gonzalez; Anna Rogers; Anders Søgaard
2021-06-28
Feature Importance Guided Attack: A Model Agnostic Adversarial Attack. (99%)Gilad Gressel; Niranjan Hegde; Archana Sreekumar; Michael Darling
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. (99%)Oliver Bryniarski; Nabeel Hingun; Pedro Pachuca; Vincent Wang; Nicholas Carlini
Improving Transferability of Adversarial Patches on Face Recognition with Generative Models. (99%)Zihao Xiao; Xianfeng Gao; Chilin Fu; Yinpeng Dong; Wei Gao; Xiaolu Zhang; Jun Zhou; Jun Zhu
Data Poisoning Won't Save You From Facial Recognition. (97%)Evani Radiya-Dixit; Florian Tramèr
Adversarial Robustness of Streaming Algorithms through Importance Sampling. (61%)Vladimir Braverman; Avinatan Hassidim; Yossi Matias; Mariano Schain; Sandeep Silwal; Samson Zhou
Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation. (2%)Chaithanya Kumar Mummadi; Robin Hutmacher; Kilian Rambach; Evgeny Levinkov; Thomas Brox; Jan Hendrik Metzen
Certified Robustness via Randomized Smoothing over Multiplicative Parameters. (1%)Nikita Muravev; Aleksandr Petiushko
Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. (1%)Chuanpu Fu; Qi Li; Meng Shen; Ke Xu
2021-06-27
RAILS: A Robust Adversarial Immune-inspired Learning System. (98%)Ren Wang; Tianqi Chen; Stephen Lindsly; Cooper Stansbury; Alnawaz Rehemtulla; Indika Rajapakse; Alfred Hero
Who is Responsible for Adversarial Defense? (93%)Kishor Datta Gupta; Dipankar Dasgupta
ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense. (82%)Ren Wang; Tianqi Chen; Philip Yao; Sijia Liu; Indika Rajapakse; Alfred Hero
Immuno-mimetic Deep Neural Networks (Immuno-Net). (64%)Ren Wang; Tianqi Chen; Stephen Lindsly; Cooper Stansbury; Indika Rajapakse; Alfred Hero
Stabilizing Equilibrium Models by Jacobian Regularization. (1%)Shaojie Bai; Vladlen Koltun; J. Zico Kolter
2021-06-26
Multi-stage Optimization based Adversarial Training. (99%)Xiaosen Wang; Chuanbiao Song; Liwei Wang; Kun He
The Feasibility and Inevitability of Stealth Attacks. (69%)Ivan Y. Tyukin; Desmond J. Higham; Eliyas Woldegeorgis; Alexander N. Gorban
2021-06-24
On the (Un-)Avoidability of Adversarial Examples. (99%)Sadia Chowdhury; Ruth Urner
Countering Adversarial Examples: Combining Input Transformation and Noisy Training. (99%)Cheng Zhang; Pan Gao
Break it, Fix it: Attack and Defense for "Add-on" Access Control Solutions in Distributed Data Analytics Platforms. (8%)Fahad Shaon; Sazzadur Rahaman; Murat Kantarcioglu
2021-06-23
Adversarial Examples in Multi-Layer Random ReLU Networks. (81%)Peter L. Bartlett; Sébastien Bubeck; Yeshwanth Cherapanamjeri
Teacher Model Fingerprinting Attacks Against Transfer Learning. (2%)Yufei Chen; Chao Shen; Cong Wang; Yang Zhang
Meaningfully Explaining Model Mistakes Using Conceptual Counterfactuals. (1%)Abubakar Abid; Mert Yuksekgonul; James Zou
Feature Attributions and Counterfactual Explanations Can Be Manipulated. (1%)Dylan Slack; Sophie Hilgard; Sameer Singh; Himabindu Lakkaraju
2021-06-22
DetectX -- Adversarial Input Detection using Current Signatures in Memristive XBar Arrays. (99%)Abhishek Moitra; Priyadarshini Panda
Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack. (99%)Sungmin Cha; Naeun Ko; Youngjoon Yoo; Taesup Moon
Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks. (83%)Fan Liu; Shuyu Zhao; Xuelong Dai; Bin Xiao
On Adversarial Robustness of Synthetic Code Generation. (81%)Mrinal Anand; Pratik Kayal; Mayank Singh
NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data. (67%)I-Chung Hsieh; Cheng-Te Li
FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data. (1%)Eugenia Iofinova; Nikola Konstantinov; Christoph H. Lampert
2021-06-21
Policy Smoothing for Provably Robust Reinforcement Learning. (99%)Aounon Kumar; Alexander Levine; Soheil Feizi
Delving into the pixels of adversarial samples. (98%)Blerta Lindqvist
HODA: Hardness-Oriented Detection of Model Extraction Attacks. (98%)Amir Mahdi Sadeghzadeh; Amir Mohammad Sobhanian; Faezeh Dehghan; Rasool Jalili
Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier. (91%)Simone Marullo; Matteo Tiezzi; Marco Gori; Stefano Melacci
Membership Inference on Word Embedding and Beyond. (38%)Saeed Mahloujifar; Huseyin A. Inan; Melissa Chase; Esha Ghosh; Marcello Hasegawa
An Alternative Auxiliary Task for Enhancing Image Classification. (11%)Chen Liu
Zero-shot learning approach to adaptive Cybersecurity using Explainable AI. (1%)Dattaraj Rao; Shraddha Mane
2021-06-20
Adversarial Examples Make Strong Poisons. (98%)Liam Fowl; Micah Goldblum; Ping-yeh Chiang; Jonas Geiping; Wojtek Czaja; Tom Goldstein
Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem. (95%)Jiaqi Ma; Junwei Deng; Qiaozhu Mei
Generative Model Adversarial Training for Deep Compressed Sensing. (8%)Ashkan Esmaeili
2021-06-19
Attack to Fool and Explain Deep Networks. (99%)Naveed Akhtar; Muhammad A. A. K. Jalwana; Mohammed Bennamoun; Ajmal Mian
A Stealthy and Robust Fingerprinting Scheme for Generative Models. (47%)Li Guanlin; Guo Shangwei; Wang Run; Xu Guowen; Zhang Tianwei
2021-06-18
Residual Error: a New Performance Measure for Adversarial Robustness. (99%)Hossein Aboutalebi; Mohammad Javad Shafiee; Michelle Karg; Christian Scharfenberger; Alexander Wong
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. (99%)Maura Pintor; Luca Demetrio; Angelo Sotgiu; Ambra Demontis; Nicholas Carlini; Battista Biggio; Fabio Roli
The Dimpled Manifold Model of Adversarial Examples in Machine Learning. (99%)Adi Shamir; Odelia Melamed; Oriel BenShmuel
Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis. (99%)Martin Pawelczyk; Chirag Agarwal; Shalmali Joshi; Sohini Upadhyay; Himabindu Lakkaraju
Light Lies: Optical Adversarial Attack. (92%)Kyulim Kim; JeongSoo Kim; Seungri Song; Jun-Ho Choi; Chulmin Joo; Jong-Seok Lee
BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection. (82%)Yulin Zhu; Yuni Lai; Kaifa Zhao; Xiapu Luo; Mingquan Yuan; Jian Ren; Kai Zhou
Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks. (80%)Emre Ozfatura; Muhammad Zaid Hameed; Kerem Ozfatura; Deniz Gunduz
Group-Structured Adversarial Training. (68%)Farzan Farnia; Amirali Aghazadeh; James Zou; David Tse
Accumulative Poisoning Attacks on Real-time Data. (45%)Tianyu Pang; Xiao Yang; Yinpeng Dong; Hang Su; Jun Zhu
Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks. (45%)Suyoung Lee; Wonho Song; Suman Jana; Meeyoung Cha; Sooel Son
Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. (5%)Junyuan Hong; Haotao Wang; Zhangyang Wang; Jiayu Zhou
2021-06-17
Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective. (99%)Lina Wang; Xingshu Chen; Yulong Wang; Yawei Yue; Yi Zhu; Xuemei Zeng; Wei Wang
Bad Characters: Imperceptible NLP Attacks. (99%)Nicholas Boucher; Ilia Shumailov; Ross Anderson; Nicolas Papernot
DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs. (99%)Junhao Zhu; Yalu Shan; Jinhuan Wang; Shanqing Yu; Guanrong Chen; Qi Xuan
Adversarial Visual Robustness by Causal Intervention. (99%)Kaihua Tang; Mingyuan Tao; Hanwang Zhang
Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning. (92%)Shubham Jain; Ana-Maria Cretu; Yves-Alexandre de Montjoye
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks. (91%)Yulong Cao; Ningfei Wang; Chaowei Xiao; Dawei Yang; Jin Fang; Ruigang Yang; Qi Alfred Chen; Mingyan Liu; Bo Li
Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems. (82%)Giovanni Apruzzese; Mauro Andreolini; Luca Ferretti; Mirco Marchetti; Michele Colajanni
Poisoning and Backdooring Contrastive Learning. (70%)Nicholas Carlini; Andreas Terzis
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing. (69%)Fan Wu; Linyi Li; Zijian Huang; Yevgeniy Vorobeychik; Ding Zhao; Bo Li
CoCoFuzzing: Testing Neural Code Models with Coverage-Guided Fuzzing. (64%)Moshi Wei; Yuchao Huang; Jinqiu Yang; Junjie Wang; Song Wang
On Deep Neural Network Calibration by Regularization and its Impact on Refinement. (3%)Aditya Singh; Alessandro Bay; Biswa Sengupta; Andrea Mirabile
Effective Model Sparsification by Scheduled Grow-and-Prune Methods. (1%)Xiaolong Ma; Minghai Qin; Fei Sun; Zejiang Hou; Kun Yuan; Yi Xu; Yanzhi Wang; Yen-Kuang Chen; Rong Jin; Yuan Xie
2021-06-16
Real-time Adversarial Perturbations against Deep Reinforcement Learning Policies: Attacks and Defenses. (99%)Buse G. A. Tekgul; Shelly Wang; Samuel Marchal; N. Asokan
Localized Uncertainty Attacks. (99%)Ousmane Amadou Dia; Theofanis Karaletsos; Caner Hazirbas; Cristian Canton Ferrer; Ilknur Kaynar Kabul; Erik Meijer
Evaluating the Robustness of Bayesian Neural Networks Against Different Types of Attacks. (67%)Yutian Pang; Sheng Cheng; Jueming Hu; Yongming Liu
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. (38%)Hossein Souri; Liam Fowl; Rama Chellappa; Micah Goldblum; Tom Goldstein
Explainable AI for Natural Adversarial Images. (13%)Tomas Folke; ZhaoBin Li; Ravi B. Sojitra; Scott Cheng-Hsin Yang; Patrick Shafto
A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness. (2%)James Diffenderfer; Brian R. Bartoldson; Shreya Chaganti; Jize Zhang; Bhavya Kailkhura
Scaling-up Diverse Orthogonal Convolutional Networks with a Paraunitary Framework. (1%)Jiahao Su; Wonmin Byeon; Furong Huang
Loki: Hardening Code Obfuscation Against Automated Attacks. (1%)Moritz Schloegel; Tim Blazytko; Moritz Contag; Cornelius Aschermann; Julius Basler; Thorsten Holz; Ali Abbasi
2021-06-15
Adversarial Attacks on Deep Models for Financial Transaction Records. (99%)Ivan Fursov; Matvey Morozov; Nina Kaploukhaya; Elizaveta Kovtun; Rodrigo Rivera-Castro; Gleb Gusev; Dmitry Babaev; Ivan Kireev; Alexey Zaytsev; Evgeny Burnaev
Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information. (99%)Tommy Li; Cory Merkel
Towards Adversarial Robustness via Transductive Learning. (80%)Jiefeng Chen; Yang Guo; Xi Wu; Tianqi Li; Qicheng Lao; Yingyu Liang; Somesh Jha
Voting for the right answer: Adversarial defense for speaker verification. (78%)Haibin Wu; Yang Zhang; Zhiyong Wu; Dong Wang; Hung-yi Lee
Detect and remove watermark in deep neural networks via generative adversarial networks. (68%)Haoqi Wang; Mingfu Xue; Shichang Sun; Yushu Zhang; Jian Wang; Weiqiang Liu
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. (13%)Chulin Xie; Minghao Chen; Pin-Yu Chen; Bo Li
Securing Face Liveness Detection Using Unforgeable Lip Motion Patterns. (12%)Man Zhou; Qian Wang; Qi Li; Peipei Jiang; Jingxiao Yang; Chao Shen; Cong Wang; Shouhong Ding
Probabilistic Margins for Instance Reweighting in Adversarial Training. (8%)Qizhou Wang; Feng Liu; Bo Han; Tongliang Liu; Chen Gong; Gang Niu; Mingyuan Zhou; Masashi Sugiyama
CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals. (1%)Efrat Levy; Asaf Shabtai; Bogdan Groza; Pal-Stefan Murvay; Yuval Elovici
2021-06-14
PopSkipJump: Decision-Based Attack for Probabilistic Classifiers. (99%)Carl-Johann Simon-Gabriel; Noman Ahmed Sheikh; Andreas Krause
Now You See It, Now You Dont: Adversarial Vulnerabilities in Computational Pathology. (99%)Alex Foote; Amina Asif; Ayesha Azam; Tim Marshall-Cox; Nasir Rajpoot; Fayyaz Minhas
Audio Attacks and Defenses against AED Systems -- A Practical Study. (99%)Rodrigo dos Santos; Shirin Nilizadeh
Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions. (92%)Antonio Emanuele Cinà; Kathrin Grosse; Sebastiano Vascon; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery. (81%)John Boutsikas; Maksim E. Eren; Charles Varga; Edward Raff; Cynthia Matuszek; Charles Nicholas
On the Relationship between Heterophily and Robustness of Graph Neural Networks. (81%)Jiong Zhu; Junchen Jin; Donald Loveland; Michael T. Schaub; Danai Koutra
Partial success in closing the gap between human and machine vision. (15%)Robert Geirhos; Kantharaju Narayanappa; Benjamin Mitzkus; Tizian Thieringer; Matthias Bethge; Felix A. Wichmann; Wieland Brendel
Text Generation with Efficient (Soft) Q-Learning. (2%)Han Guo; Bowen Tan; Zhengzhong Liu; Eric P. Xing; Zhiting Hu
Resilient Control of Platooning Networked Robotic Systems via Dynamic Watermarking. (1%)Matthew Porter; Arnav Joshi; Sidhartha Dey; Qirui Wu; Pedro Hespanhol; Anil Aswani; Matthew Johnson-Roberson; Ram Vasudevan
Self-training Guided Adversarial Domain Adaptation For Thermal Imagery. (1%)Ibrahim Batuhan Akkaya; Fazil Altinel; Ugur Halici
Code Integrity Attestation for PLCs using Black Box Neural Network Predictions. (1%)Yuqi Chen; Christopher M. Poskitt; Jun Sun
2021-06-13
Target Model Agnostic Adversarial Attacks with Query Budgets on Language Understanding Models. (99%)Jatin Chauhan; Karan Bhukar; Manohar Kaul
Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks. (99%)Utku Ozbulak; Esla Timothy Anzaku; Wesley De Neve; Arnout Van Messem
ATRAS: Adversarially Trained Robust Architecture Search. (96%)Yigit Alparslan; Edward Kim
Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles. (64%)R. Spencer Hallyburton; Yupei Liu; Miroslav Pajic
Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis. (1%)Kangning Liu; Yiqiu Shen; Nan Wu; Jakub Chłędowski; Carlos Fernandez-Granda; Krzysztof J. Geras
HistoTransfer: Understanding Transfer Learning for Histopathology. (1%)Yash Sharma; Lubaina Ehsan; Sana Syed; Donald E. Brown
2021-06-12
Adversarial Robustness via Fisher-Rao Regularization. (67%)Marine Picot; Francisco Messina; Malik Boudiaf; Fabrice Labeau; Ismail Ben Ayed; Pablo Piantanida
What can linearized neural networks actually say about generalization? (31%)Guillermo Ortiz-Jiménez; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack. (2%)Tolulope Odetola; Faiq Khalid; Travis Sandefur; Hawzhin Mohammed; Syed Rafay Hasan
2021-06-11
CausalAdv: Adversarial Robustness through the Lens of Causality. (99%)Yonggang Zhang; Mingming Gong; Tongliang Liu; Gang Niu; Xinmei Tian; Bo Han; Bernhard Schölkopf; Kun Zhang
Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks. (99%)Nezihe Merve Gürel; Xiangyu Qi; Luka Rimanic; Ce Zhang; Bo Li
Adversarial purification with Score-based generative models. (89%)Jongmin Yoon; Sung Ju Hwang; Juho Lee
Relaxing Local Robustness. (80%)Klas Leino; Matt Fredrikson
TDGIA: Effective Injection Attacks on Graph Neural Networks. (76%)Xu Zou; Qinkai Zheng; Yuxiao Dong; Xinyu Guan; Evgeny Kharlamov; Jialiang Lu; Jie Tang
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution. (56%)Fanchao Qi; Yuan Yao; Sophia Xu; Zhiyuan Liu; Maosong Sun
CARTL: Cooperative Adversarially-Robust Transfer Learning. (8%)Dian Chen; Hongxin Hu; Qian Wang; Yinli Li; Cong Wang; Chao Shen; Qi Li
A Shuffling Framework for Local Differential Privacy. (1%)Casey Meehan; Amrita Roy Chowdhury; Kamalika Chaudhuri; Somesh Jha
2021-06-10
Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm. (99%)Mingkang Zhu; Tianlong Chen; Zhangyang Wang
Deep neural network loses attention to adversarial images. (99%)Shashank Kotyan; Danilo Vasconcellos Vargas
Verifying Quantized Neural Networks using SMT-Based Model Checking. (92%)Luiz Sena; Xidan Song; Erickson Alves; Iury Bessa; Edoardo Manino; Lucas Cordeiro; Eddie de Lima Filho
Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation. (80%)Jiawei Zhang; Linyi Li; Huichen Li; Xiaolu Zhang; Shuang Yang; Bo Li
An Ensemble Approach Towards Adversarial Robustness. (41%)Haifeng Qian
Towards an Automated Pipeline for Detecting and Classifying Malware through Machine Learning. (1%)Nicola Loi; Claudio Borile; Daniele Ucci
Fair Classification with Adversarial Perturbations. (1%)L. Elisa Celis; Anay Mehrotra; Nisheeth K. Vishnoi
2021-06-09
HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks. (99%)Mohammad Hossein Samavatian; Saikat Majumdar; Kristin Barber; Radu Teodorescu
Towards Defending against Adversarial Examples via Attack-Invariant Features. (99%)Dawei Zhou; Tongliang Liu; Bo Han; Nannan Wang; Chunlei Peng; Xinbo Gao
Attacking Adversarial Attacks as A Defense. (99%)Boxi Wu; Heng Pan; Li Shen; Jindong Gu; Shuai Zhao; Zhifeng Li; Deng Cai; Xiaofei He; Wei Liu
Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training. (99%)Dawei Zhou; Nannan Wang; Xinbo Gao; Bo Han; Jun Yu; Xiaoyu Wang; Tongliang Liu
We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature. (98%)Bin Liang; Jiachun Li; Jianjun Huang
Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. (97%)Yanchao Sun; Ruijie Zheng; Yongyuan Liang; Furong Huang
URLTran: Improving Phishing URL Detection Using Transformers. (10%)Pranav Maneriker; Jack W. Stokes; Edir Garcia Lazo; Diana Carutasu; Farid Tajaddodianfar; Arun Gururajan
ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs. (5%)Christopher A. Strong; Sydney M. Katz; Anthony L. Corso; Mykel J. Kochenderfer
Practical Machine Learning Safety: A Survey and Primer. (4%)Sina Mohseni; Haotao Wang; Zhiding Yu; Chaowei Xiao; Zhangyang Wang; Jay Yadawa
Network insensitivity to parameter noise via adversarial regularization. (2%)Julian Büchel; Fynn Faber; Dylan R. Muir
2021-06-08
On Improving Adversarial Transferability of Vision Transformers. (99%)Muzammal Naseer; Kanchana Ranasinghe; Salman Khan; Fahad Shahbaz Khan; Fatih Porikli
Simulated Adversarial Testing of Face Recognition Models. (99%)Nataniel Ruiz; Adam Kortylewski; Weichao Qiu; Cihang Xie; Sarah Adel Bargal; Alan Yuille; Stan Sclaroff
Towards the Memorization Effect of Neural Networks in Adversarial Training. (93%)Han Xu; Xiaorui Liu; Wentao Wang; Wenbiao Ding; Zhongqin Wu; Zitao Liu; Anil Jain; Jiliang Tang
Handcrafted Backdoors in Deep Neural Networks. (92%)Sanghyun Hong; Nicholas Carlini; Alexey Kurakin
Enhancing Robustness of Neural Networks through Fourier Stabilization. (73%)Netanel Raviv; Aidan Kelley; Michael Guo; Yevgeny Vorobeychik
Provably Robust Detection of Out-of-distribution Data (almost) for free. (26%)Alexander Meinke; Julian Bitterwolf; Matthias Hein
2021-06-07
Adversarial Attack and Defense in Deep Ranking. (99%)Mo Zhou; Le Wang; Zhenxing Niu; Qilin Zhang; Nanning Zheng; Gang Hua
Reveal of Vision Transformers Robustness against Adversarial Attacks. (99%)Ahmed Aldahdooh; Wassim Hamidouche; Olivier Deforges
Position Bias Mitigation: A Knowledge-Aware Graph Model for Emotion Cause Extraction. (89%)Hanqi Yan; Lin Gui; Gabriele Pergola; Yulan He
3DB: A Framework for Debugging Computer Vision Models. (45%)Guillaume Leclerc; Hadi Salman; Andrew Ilyas; Sai Vemprala; Logan Engstrom; Vibhav Vineet; Kai Xiao; Pengchuan Zhang; Shibani Santurkar; Greg Yang; Ashish Kapoor; Aleksander Madry
RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models. (11%)Xin Guo; Jianlei Yang; Haoyi Zhou; Xucheng Ye; Jianxin Li
Semantically Adversarial Scenario Generation with Explicit Knowledge Guidance. (1%)Wenhao Ding; Haohong Lin; Bo Li; Ding Zhao
2021-06-06
A Primer on Multi-Neuron Relaxation-based Adversarial Robustness Certification. (98%)Kevin Roth
Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model. (4%)Zi Wang
2021-06-05
Ensemble Defense with Data Diversity: Weak Correlation Implies Strong Robustness. (92%)Renjue Li; Hanwei Zhang; Pengfei Yang; Cheng-Chao Huang; Aimin Zhou; Bai Xue; Lijun Zhang
Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks. (69%)Qin Ding; Cho-Jui Hsieh; James Sharpnack
RDA: Robust Domain Adaptation via Fourier Adversarial Attacking. (2%)Jiaxing Huang; Dayan Guan; Aoran Xiao; Shijian Lu
2021-06-04
Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness. (99%)Zifeng Wang; Tong Jian; Aria Masoomi; Stratis Ioannidis; Jennifer Dy
BO-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization. (99%)Zhuosheng Zhang; Shucheng Yu
Human-Adversarial Visual Question Answering. (31%)Sasha Sheng; Amanpreet Singh; Vedanuj Goswami; Jose Alberto Lopez Magana; Wojciech Galuba; Devi Parikh; Douwe Kiela
Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics. (15%)Bhavin Choksi; Milad Mozafari; Callum Biggs O'May; Benjamin Ador; Andrea Alamia; Rufin VanRullen
DOCTOR: A Simple Method for Detecting Misclassification Errors. (1%)Federica Granese; Marco Romanelli; Daniele Gorla; Catuscia Palamidessi; Pablo Piantanida
Teaching keyword spotters to spot new keywords with limited examples. (1%)Abhijeet Awasthi; Kevin Kilgour; Hassan Rom
2021-06-03
Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout. (99%)Pengfei Xie; Linyuan Wang; Ruoxi Qin; Kai Qiao; Shuhao Shi; Guoen Hu; Bin Yan
Imperceptible Adversarial Examples for Fake Image Detection. (99%)Quanyu Liao; Yuezun Li; Xin Wang; Bin Kong; Bin Zhu; Siwei Lyu; Youbing Yin; Qi Song; Xi Wu
A Little Robustness Goes a Long Way: Leveraging Universal Features for Targeted Transfer Attacks. (99%)Jacob M. Springer; Melanie Mitchell; Garrett T. Kenyon
Transferable Adversarial Examples for Anchor Free Object Detection. (99%)Quanyu Liao; Xin Wang; Bin Kong; Siwei Lyu; Bin Zhu; Youbing Yin; Qi Song; Xi Wu
Exploring Memorization in Adversarial Training. (98%)Yinpeng Dong; Ke Xu; Xiao Yang; Tianyu Pang; Zhijie Deng; Hang Su; Jun Zhu
Improving Neural Network Robustness via Persistency of Excitation. (68%)Kaustubh Sridhar; Oleg Sokolsky; Insup Lee; James Weimer
Defending against Backdoor Attacks in Natural Language Generation. (38%)Chun Fan; Xiaoya Li; Yuxian Meng; Xiaofei Sun; Xiang Ao; Fei Wu; Jiwei Li; Tianwei Zhang
Sneak Attack against Mobile Robotic Networks under Formation Control. (1%)Yushan Li; Jianping He; Xuda Ding; Lin Cai; Xinping Guan
2021-06-02
PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack. (99%)Alexander Matyasko; Lap-Pui Chau
Towards Robustness of Text-to-SQL Models against Synonym Substitution. (75%)Yujian Gan; Xinyun Chen; Qiuping Huang; Matthew Purver; John R. Woodward; Jinxia Xie; Pengsheng Huang
BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively Inspired Orthographic Adversarial Attacks. (62%)Yannik Keller; Jan Mackensen; Steffen Eger
2021-06-01
Adversarial Defense for Automatic Speaker Verification by Self-Supervised Learning. (99%)Haibin Wu; Xu Li; Andy T. Liu; Zhiyong Wu; Helen Meng; Hung-yi Lee
Improving Compositionality of Neural Networks by Decoding Representations to Inputs. (68%)Mike Wu; Noah Goodman; Stefano Ermon
Markpainting: Adversarial Machine Learning meets Inpainting. (12%)David Khachaturov; Ilia Shumailov; Yiren Zhao; Nicolas Papernot; Ross Anderson
On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study. (9%)Divyansh Kaushik; Douwe Kiela; Zachary C. Lipton; Wen-tau Yih
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models. (5%)Linjie Li; Jie Lei; Zhe Gan; Jingjing Liu
Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models. (1%)Biagio La Rosa; Roberto Capobianco; Daniele Nardi
Concurrent Adversarial Learning for Large-Batch Training. (1%)Yong Liu; Xiangning Chen; Minhao Cheng; Cho-Jui Hsieh; Yang You
2021-05-31
Adaptive Feature Alignment for Adversarial Training. (99%)Tao Wang; Ruixin Zhang; Xingyu Chen; Kai Zhao; Xiaolin Huang; Yuge Huang; Shaoxin Li; Jilin Li; Feiyue Huang
QueryNet: An Efficient Attack Framework with Surrogates Carrying Multiple Identities. (99%)Sizhe Chen; Zhehao Huang; Qinghua Tao; Xiaolin Huang
Transferable Sparse Adversarial Attack. (99%)Ziwen He; Wei Wang; Jing Dong; Tieniu Tan
Adversarial Training with Rectified Rejection. (99%)Tianyu Pang; Huishuai Zhang; Di He; Yinpeng Dong; Hang Su; Wei Chen; Jun Zhu; Tie-Yan Liu
Robustifying $\ell_\infty$ Adversarial Training to the Union of Perturbation Models. (82%)Ameya D. Patil; Michael Tuttle; Alexander G. Schwing; Naresh R. Shanbhag
Dominant Patterns: Critical Features Hidden in Deep Neural Networks. (80%)Zhixing Ye; Shaofei Qin; Sizhe Chen; Xiaolin Huang
Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models. (75%)Chong Li; Cenyuan Zhang; Xiaoqing Zheng; Xuanjing Huang
Gradient-based Data Subversion Attack Against Binary Classifiers. (73%)Rosni K Vasu; Sanjay Seetharaman; Shubham Malaviya; Manish Shukla; Sachin Lodha
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals. (1%)Asma Ghandeharioun; Been Kim; Chun-Liang Li; Brendan Jou; Brian Eoff; Rosalind W. Picard
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores. (1%)Giang Nguyen; Daeyoung Kim; Anh Nguyen
2021-05-30
Generating Adversarial Examples with Graph Neural Networks. (99%)Florian Jaeckle; M. Pawan Kumar
Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice. (98%)Rongzhou Bao; Jiayi Wang; Hai Zhao
NoiLIn: Do Noisy Labels Always Hurt Adversarial Training? (62%)Jingfeng Zhang; Xilie Xu; Bo Han; Tongliang Liu; Gang Niu; Lizhen Cui; Masashi Sugiyama
Evaluating Resilience of Encrypted Traffic Classification Against Adversarial Evasion Attacks. (62%)Ramy Maarouf; Danish Sattar; Ashraf Matrawy
DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows. (12%)Samuel von Baußnern; Johannes Otterbach; Adrian Loy; Mathieu Salzmann; Thomas Wollmann
EEG-based Cross-Subject Driver Drowsiness Recognition with an Interpretable Convolutional Neural Network. (1%)Jian Cui; Zirui Lan; Olga Sourina; Wolfgang Müller-Wittig
2021-05-29
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations. (99%)Mingfu Xue; Yinghao Wu; Zhiyu Wu; Jian Wang; Yushu Zhang; Weiqiang Liu
Analysis and Applications of Class-wise Robustness in Adversarial Training. (99%)Qi Tian; Kun Kuang; Kelu Jiang; Fei Wu; Yisen Wang
A Measurement Study on the (In)security of End-of-Life (EoL) Embedded Devices. (2%)Dingding Wang; Muhui Jiang; Rui Chang; Yajin Zhou; Baolei Hou; Xiapu Luo; Lei Wu; Kui Ren
2021-05-28
Demotivate adversarial defense in remote sensing. (99%)Adrien Chan-Hon-Tong; Gaston Lenczner; Aurelien Plyer
AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption. (92%)Mingfu Xue; Zhiyu Wu; Jian Wang; Yushu Zhang; Weiqiang Liu
Robust Regularization with Adversarial Labelling of Perturbed Samples. (83%)Xiaohui Guo; Richong Zhang; Yaowei Zheng; Yongyi Mao
SafeAMC: Adversarial training for robust modulation recognition models. (83%)Javier Maroto; Gérôme Bovet; Pascal Frossard
Towards optimally abstaining from prediction. (81%)Adam Tauman Kalai; Varun Kanade
Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness. (76%)Glenn Dawson; Robi Polikar
Visualizing Representations of Adversarially Perturbed Inputs. (68%)Daniel Steinberg; Paul Munro
Chromatic and spatial analysis of one-pixel attacks against an image classifier. (15%)Janne Alatalo; Joni Korpihalkola; Tuomo Sipola; Tero Kokkonen
FoveaTer: Foveated Transformer for Image Classification. (10%)Aditya Jonnalagadda; William Yang Wang; B. S. Manjunath; Miguel P. Eckstein
DeepMoM: Robust Deep Learning With Median-of-Means. (1%)Shih-Ting Huang; Johannes Lederer
2021-05-27
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers. (84%)Xi Li; David J. Miller; Zhen Xiang; George Kesidis
2021-05-26
Deep Repulsive Prototypes for Adversarial Robustness. (99%)Alex Serban; Erik Poll; Joost Visser
Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge. (98%)Heng Chang; Yu Rong; Tingyang Xu; Wenbing Huang; Honglei Zhang; Peng Cui; Xin Wang; Wenwu Zhu; Junzhou Huang
Adversarial robustness against multiple $l_p$-threat models at the price of one and how to quickly fine-tune robust models to another threat model. (93%)Francesco Croce; Matthias Hein
Can Linear Programs Have Adversarial Examples? A Causal Perspective. (83%)Matej Zečević; Devendra Singh Dhami; Kristian Kersting
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger. (61%)Fanchao Qi; Mukai Li; Yangyi Chen; Zhengyan Zhang; Zhiyuan Liu; Yasheng Wang; Maosong Sun
Fooling Partial Dependence via Data Poisoning. (13%)Hubert Baniecki; Wojciech Kretowicz; Przemyslaw Biecek
2021-05-25
Practical Convex Formulation of Robust One-hidden-layer Neural Network Training. (98%)Yatong Bai; Tanmay Gautam; Yu Gai; Somayeh Sojoudi
Adversarial Attack Driven Data Augmentation for Accurate And Robust Medical Image Segmentation. (98%)Mst. Tasnim Pervin; Linmi Tao; Aminul Huq; Zuoxiang He; Li Huo
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs. (67%)Mohammad Malekzadeh; Anastasia Borovykh; Deniz Gündüz
Robust Value Iteration for Continuous Control Tasks. (9%)Michael Lutter; Shie Mannor; Jan Peters; Dieter Fox; Animesh Garg
2021-05-24
OFEI: A Semi-black-box Android Adversarial Sample Attack Framework Against DLaaS. (99%)Guangquan Xu; GuoHua Xin; Litao Jiao; Jian Liu; Shaoying Liu; Meiqi Feng; Xi Zheng
Learning Security Classifiers with Verified Global Robustness Properties. (92%)Yizheng Chen; Shiqi Wang; Yue Qin; Xiaojing Liao; Suman Jana; David Wagner
Feature Space Targeted Attacks by Statistic Alignment. (82%)Lianli Gao; Yaya Cheng; Qilong Zhang; Xing Xu; Jingkuan Song
Improved OOD Generalization via Adversarial Training and Pre-training. (12%)Mingyang Yi; Lu Hou; Jiacheng Sun; Lifeng Shang; Xin Jiang; Qun Liu; Zhi-Ming Ma
Out-of-Distribution Detection in Dermatology using Input Perturbation and Subset Scanning. (5%)Hannah Kim; Girmaw Abebe Tadesse; Celia Cintas; Skyler Speakman; Kush Varshney
AirNet: Neural Network Transmission over the Air. (1%)Mikolaj Jankowski; Deniz Gunduz; Krystian Mikolajczyk
Every Byte Matters: Traffic Analysis of Bluetooth Wearable Devices. (1%)Ludovic Barman; Alexandre Dumur; Apostolos Pyrgelis; Jean-Pierre Hubaux
Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models. (1%)Jieyu Lin; Jiajie Zou; Nai Ding
Dissecting Click Fraud Autonomy in the Wild. (1%)Tong Zhu; Yan Meng; Haotian Hu; Xiaokuan Zhang; Minhui Xue; Haojin Zhu
2021-05-23
Killing Two Birds with One Stone: Stealing Model and Inferring Attribute from BERT-based APIs. (99%)Lingjuan Lyu; Xuanli He; Fangzhao Wu; Lichao Sun
CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes. (92%)Hao Huang; Yongtao Wang; Zhaoyu Chen; Yuheng Li; Zhi Tang; Wei Chu; Jingdong Chen; Weisi Lin; Kai-Kuang Ma
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters. (12%)Javier Carnerero-Cano; Luis Muñoz-González; Phillippa Spencer; Emil C. Lupu
2021-05-22
Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems. (99%)Yifan Jia; Jingyi Wang; Christopher M. Poskitt; Sudipta Chattopadhyay; Jun Sun; Yuqi Chen
Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation. (98%)Jinyu Yang; Chunyuan Li; Weizhi An; Hehuan Ma; Yuzhi Guo; Yu Rong; Peilin Zhao; Junzhou Huang
Securing Optical Networks using Quantum-secured Blockchain: An Overview. (1%)Purva Sharma; Vimal Bhatia; Shashi Prakash
2021-05-21
ReLUSyn: Synthesizing Stealthy Attacks for Deep Neural Network Based Cyber-Physical Systems. (81%)Aarti Kashyap; Syed Mubashir Iqbal; Karthik Pattabiraman; Margo Seltzer
Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks. (76%)Leo Schwinn; René Raab; An Nguyen; Dario Zanca; Bjoern Eskofier
Backdoor Attacks on Self-Supervised Learning. (68%)Aniruddha Saha; Ajinkya Tejankar; Soroush Abbasi Koohpayegani; Hamed Pirsiavash
Intriguing Properties of Vision Transformers. (8%)Muzammal Naseer; Kanchana Ranasinghe; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Ming-Hsuan Yang
Explainable Enterprise Credit Rating via Deep Feature Crossing Network. (1%)Weiyu Guo; Zhijiang Yang; Shu Wu; Fu Chen
2021-05-20
Simple Transparent Adversarial Examples. (99%)Jaydeep Borkar; Pin-Yu Chen
Anomaly Detection of Adversarial Examples using Class-conditional Generative Adversarial Networks. (99%)Hang Wang; David J. Miller; George Kesidis
Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance. (11%)Jack W. Stokes; Paul England; Kevin Kane
TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks. (1%)Yu Li; Min Li; Qiuxia Lai; Yannan Liu; Qiang Xu
2021-05-19
Attack on practical speaker verification system using universal adversarial perturbations. (99%)Weiyi Zhang; Shuning Zhao; Le Liu; Jianmin Li; Xingliang Cheng; Thomas Fang Zheng; Xiaolin Hu
Local Aggressive Adversarial Attacks on 3D Point Cloud. (99%)Yiming Sun; Feng Chen; Zhiyu Chen; Mingjie Wang
An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks. (76%)Cong Xu; Xiang Li; Min Yang
Balancing Robustness and Sensitivity using Feature Contrastive Learning. (15%)Seungyeon Kim; Daniel Glasner; Srikumar Ramalingam; Cho-Jui Hsieh; Kishore Papineni; Sanjiv Kumar
DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator in Cloud-FPGA. (1%)Yukui Luo; Cheng Gongye; Yunsi Fei; Xiaolin Xu
User Label Leakage from Gradients in Federated Learning. (1%)Aidmar Wainakh; Fabrizio Ventola; Till Müßig; Jens Keim; Carlos Garcia Cordero; Ephraim Zimmer; Tim Grube; Kristian Kersting; Max Mühlhäuser
Hunter in the Dark: Deep Ensemble Networks for Discovering Anomalous Activity from Smart Networks. (1%)Shiyi Yang; Nour Moustafa; Hui Guo
2021-05-18
Sparta: Spatially Attentive and Adversarially Robust Activation. (99%)Qing Guo; Felix Juefei-Xu; Changqing Zhou; Wei Feng; Yang Liu; Song Wang
Detecting Adversarial Examples with Bayesian Neural Network. (99%)Yao Li; Tongyi Tang; Cho-Jui Hsieh; Thomas C. M. Lee
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks. (98%)Dequan Wang; An Ju; Evan Shelhamer; David Wagner; Trevor Darrell
On the Robustness of Domain Constraints. (98%)Ryan Sheatsley; Blaine Hoak; Eric Pauley; Yohan Beugin; Michael J. Weisman; Patrick McDaniel
Learning and Certification under Instance-targeted Poisoning. (82%)Ji Gao; Amin Karbasi; Mohammad Mahmoody
2021-05-17
Towards Robust Vision Transformer. (95%)Xiaofeng Mao; Gege Qi; Yuefeng Chen; Xiaodan Li; Ranjie Duan; Shaokai Ye; Yuan He; Hui Xue
Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning. (93%)Franziska Boenisch; Philip Sperl; Konstantin Böttinger
An SDE Framework for Adversarial Training, with Convergence and Robustness Analysis. (69%)Haotian Gu; Xin Guo
A Fusion-Denoising Attack on InstaHide with Data Augmentation. (1%)Xinjian Luo; Xiaokui Xiao; Yuncheng Wu; Juncheng Liu; Beng Chin Ooi
2021-05-16
Vision Transformers are Robust Learners. (99%)Sayak Paul; Pin-Yu Chen
Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing. (99%)Xunguang Wang; Zheng Zhang; Baoyuan Wu; Fumin Shen; Guangming Lu
SoundFence: Securing Ultrasonic Sensors in Vehicles Using Physical-Layer Defense. (2%)Jianzhi Lou; Qiben Yan; Qing Hui; Huacheng Zeng
2021-05-15
Real-time Detection of Practical Universal Adversarial Perturbations. (99%)Kenneth T. Co; Luis Muñoz-González; Leslie Kanthan; Emil C. Lupu
2021-05-14
Salient Feature Extractor for Adversarial Defense on Deep Neural Networks. (99%)Jinyin Chen; Ruoxi Chen; Haibin Zheng; Zhaoyan Ming; Wenrong Jiang; Chen Cui
High-Robustness, Low-Transferability Fingerprinting of Neural Networks. (9%)Siyue Wang; Xiao Wang; Pin-Yu Chen; Pu Zhao; Xue Lin
Information-theoretic Evolution of Model Agnostic Global Explanations. (1%)Sukriti Verma; Nikaash Puri; Piyush Gupta; Balaji Krishnamurthy
Iterative Algorithms for Assessing Network Resilience Against Structured Perturbations. (1%)Shenyu Liu; Sonia Martinez; Jorge Cortes
2021-05-13
Stochastic-Shield: A Probabilistic Approach Towards Training-Free Adversarial Defense in Quantized CNNs. (98%)Lorena Qendro; Sangwon Ha; René de Jong; Partha Maji
When Human Pose Estimation Meets Robustness: Adversarial Algorithms and Benchmarks. (5%)Jiahang Wang; Sheng Jin; Wentao Liu; Weizhong Liu; Chen Qian; Ping Luo
DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks. (1%)Yingzhe He; Guozhu Meng; Kai Chen; Jinwen He; Xingbo Hu
Biometrics: Trust, but Verify. (1%)Anil K. Jain; Debayan Deb; Joshua J. Engelsma
2021-05-12
AVA: Adversarial Vignetting Attack against Visual Recognition. (99%)Binyu Tian; Felix Juefei-Xu; Qing Guo; Xiaofei Xie; Xiaohong Li; Yang Liu
OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack. (70%)DongHyun Choi; Myeong Cheol Shin; EungGyun Kim; Dong Ryeol Shin
Adversarial Reinforcement Learning in Dynamic Channel Access and Power Control. (2%)Feng Wang; M. Cenk Gursoy; Senem Velipasalar
A Statistical Threshold for Adversarial Classification in Laplace Mechanisms. (1%)Ayşe Ünsal; Melek Önen
2021-05-11
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds. (99%)Guiyu Tian; Wenhao Jiang; Wei Liu; Yadong Mu
Improving Adversarial Transferability with Gradient Refining. (99%)Guoqiu Wang; Huanqian Yan; Ying Guo; Xingxing Wei
Accuracy-Privacy Trade-off in Deep Ensemble: A Membership Inference Perspective. (16%)Shahbaz Rezaei; Zubair Shafiq; Xin Liu
2021-05-10
Adversarial examples attack based on random warm restart mechanism and improved Nesterov momentum. (99%)Tiangang Li
Examining and Mitigating Kernel Saturation in Convolutional Neural Networks using Negative Images. (1%)Nidhi Gowdra; Roopak Sinha; Stephen MacDonell
2021-05-09
Automated Decision-based Adversarial Attacks. (99%)Qi-An Fu; Yinpeng Dong; Hang Su; Jun Zhu
Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks. (88%)Abhiroop Bhattacharjee; Abhishek Moitra; Priyadarshini Panda
Security Concerns on Machine Learning Solutions for 6G Networks in mmWave Beam Prediction. (81%)Ferhat Ozgur Catak; Evren Catak; Murat Kuzlu; Umit Cali
Robust Training Using Natural Transformation. (13%)Shuo Wang; Lingjuan Lyu; Surya Nepal; Carsten Rudolph; Marthie Grobler; Kristen Moore
Learning Image Attacks toward Vision Guided Autonomous Vehicles. (4%)Hyung-Jin Yoon; Hamidreza Jafarnejadsani; Petros Voulgaris
Combining Time-Dependent Force Perturbations in Robot-Assisted Surgery Training. (1%)Yarden Sharon; Daniel Naftalovich; Lidor Bahar; Yael Refaely; Ilana Nisky
2021-05-08
Self-Supervised Adversarial Example Detection by Disentangled Representation. (99%)Zhaoxi Zhang; Leo Yu Zhang; Xufei Zheng; Shengshan Hu; Jinyu Tian; Jiantao Zhou
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks. (96%)Jian Chen; Xuxin Zhang; Rui Zhang; Chen Wang; Ling Liu
Certified Robustness to Text Adversarial Attacks by Randomized [MASK]. (93%)Jiehang Zeng; Xiaoqing Zheng; Jianhan Xu; Linyang Li; Liping Yuan; Xuanjing Huang
Provable Guarantees against Data Poisoning Using Self-Expansion and Compatibility. (81%)Charles Jin; Melinda Sun; Martin Rinard
Mental Models of Adversarial Machine Learning. (16%)Lukas Bieringer; Kathrin Grosse; Michael Backes; Battista Biggio; Katharina Krombholz
2021-05-07
Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition. (99%)Bangjie Yin; Wenxuan Wang; Taiping Yao; Junfeng Guo; Zelun Kong; Shouhong Ding; Jilin Li; Cong Liu
Uniform Convergence, Adversarial Spheres and a Simple Remedy. (15%)Gregor Bachmann; Seyed-Mohsen Moosavi-Dezfooli; Thomas Hofmann
2021-05-06
Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model. (99%)Ruoxi Qin; Linyuan Wang; Xingyuan Chen; Xuehui Du; Bin Yan
A Simple and Strong Baseline for Universal Targeted Attacks on Siamese Visual Tracking. (99%)Zhenbang Li; Yaya Shi; Jin Gao; Shaoru Wang; Bing Li; Pengpeng Liang; Weiming Hu
Understanding Catastrophic Overfitting in Adversarial Training. (92%)Peilin Kang; Seyed-Mohsen Moosavi-Dezfooli
Attestation Waves: Platform Trust via Remote Power Analysis. (1%)Ignacio M. Delgado-Lozano; Macarena C. Martínez-Rodríguez; Alexandros Bakas; Billy Bob Brumley; Antonis Michalas
2021-05-05
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning. (99%)Matthew Watson (Durham University, Durham, UK); Noura Al Moubayed (Durham University, Durham, UK)
Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks. (97%)Faiq Khalid; Muhammad Abdullah Hanif; Muhammad Shafique
Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation. (1%)Robert A. Marsden; Alexander Bartler; Mario Döbler; Bin Yang
A Theoretical-Empirical Approach to Estimating Sample Complexity of DNNs. (1%)Devansh Bisla; Apoorva Nandini Saridena; Anna Choromanska
2021-05-04
Poisoning the Unlabeled Dataset of Semi-Supervised Learning. (92%)Nicholas Carlini
Broadly Applicable Targeted Data Sample Omission Attacks. (68%)Guy Barash; Eitan Farchi; Sarit Kraus; Onn Shehory
An Overview of Laser Injection against Embedded Neural Network Models. (2%)Mathieu Dumont; Pierre-Alain Moellic; Raphael Viera; Jean-Max Dutertre; Rémi Bernhard
2021-05-03
Physical world assistive signals for deep neural network classifiers -- neither defense nor attack. (83%)Camilo Pestana; Wei Liu; David Glance; Robyn Owens; Ajmal Mian
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack. (73%)Yixu Wang; Jie Li; Hong Liu; Yan Wang; Yongjian Wu; Feiyue Huang; Rongrong Ji
2021-05-02
BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability. (99%)Xinglong Chang; Katharina Dost; Kaiqi Zhao; Ambra Demontis; Fabio Roli; Gill Dobbie; Jörg Wicker
Who's Afraid of Adversarial Transferability? (99%)Ziv Katzir; Yuval Elovici
Multi-Robot Coordination and Planning in Uncertain and Adversarial Environments. (10%)Lifeng Zhou; Pratap Tokekar
GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning. (2%)Hanchi Ren; Jingjing Deng; Xianghua Xie
Spinner: Automated Dynamic Command Subsystem Perturbation. (1%)Meng Wang; Chijung Jung; Ali Ahad; Yonghwi Kwon
2021-05-01
Adversarial Example Detection for DNN Models: A Review and Experimental Comparison. (99%)Ahmed Aldahdooh; Wassim Hamidouche; Sid Ahmed Fezza; Olivier Deforges
A Perceptual Distortion Reduction Framework: Towards Generating Adversarial Examples with High Perceptual Quality and Attack Success Rate. (98%)Ruijie Yang; Yunhong Wang; Ruikui Wang; Yuanfang Guo
On the Adversarial Robustness of Quantized Neural Networks. (75%)Micah Gorsline; James Smith; Cory Merkel
Hidden Backdoors in Human-Centric Language Models. (73%)Shaofeng Li; Hui Liu; Tian Dong; Benjamin Zi Hao Zhao; Minhui Xue; Haojin Zhu; Jialiang Lu
One Detector to Rule Them All: Towards a General Deepfake Attack Detection Framework. (62%)Shahroz Tariq; Sangyup Lee; Simon S. Woo
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification. (62%)Wei Guo; Benedetta Tondi; Mauro Barni
Load Oscillating Attacks of Smart Grids: Demand Strategies and Vulnerability Analysis. (2%)Falah Alanazi; Jinsub Kim; Eduardo Cotilla-Sanchez
RATT: Leveraging Unlabeled Data to Guarantee Generalization. (1%)Saurabh Garg; Sivaraman Balakrishnan; J. Zico Kolter; Zachary C. Lipton
2021-04-30
Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks. (99%)Jun-Ho Choi; Huan Zhang; Jun-Hyuk Kim; Cho-Jui Hsieh; Jong-Seok Lee
Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense. (99%)Haoxi Zhan; Xiaobing Pei
Black-box adversarial attacks using Evolution Strategies. (98%)Hao Qiu; Leonardo Lucio Custode; Giovanni Iacca
IPatch: A Remote Adversarial Patch. (97%)Yisroel Mirsky
DeFiRanger: Detecting Price Manipulation Attacks on DeFi Applications. (10%)Siwei Wu; Dabao Wang; Jianting He; Yajin Zhou; Lei Wu; Xingliang Yuan; Qinming He; Kui Ren
FIPAC: Thwarting Fault- and Software-Induced Control-Flow Attacks with ARM Pointer Authentication. (2%)Robert Schilling; Pascal Nasahl; Stefan Mangard
2021-04-29
GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathology Image Classification. (67%)Haoyuan Chen; Chen Li; Xiaoyan Li; Ge Wang; Weiming Hu; Yixin Li; Wanli Liu; Changhao Sun; Yudong Yao; Yueyang Teng; Marcin Grzegorzek
A neural anisotropic view of underspecification in deep learning. (26%)Guillermo Ortiz-Jimenez; Itamar Franco Salazar-Reque; Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Analytical bounds on the local Lipschitz constants of ReLU networks. (12%)Trevor Avant; Kristi A. Morgansen
Learning Robust Variational Information Bottleneck with Reference. (5%)Weizhu Qian; Bowei Chen; Xiaowei Huang
2021-04-28
AdvHaze: Adversarial Haze Attack. (99%)Ruijun Gao; Qing Guo; Felix Juefei-Xu; Hongkai Yu; Wei Feng
2021-04-27
Improved and Efficient Text Adversarial Attacks using Target Information. (97%)Mahmoud Hossam; Trung Le; He Zhao; Viet Huynh; Dinh Phung
Metamorphic Detection of Repackaged Malware. (91%)Shirish Singh; Gail Kaiser
Structure-Aware Hierarchical Graph Pooling using Information Bottleneck. (2%)Kashob Kumar Roy; Amit Roy; A K M Mahbubur Rahman; M Ashraful Amin; Amin Ahsan Ali
Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity. (1%)Mathias P. M. Parisot; Balazs Pejo; Dayana Spagnuelo
2021-04-26
Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT. (99%)Pavlos Papadopoulos; Oliver Thornewill von Essen; Nikolaos Pitropakis; Christos Chrysoulas; Alexios Mylonas; William J. Buchanan
Delving into Data: Effectively Substitute Training for Black-box Attack. (99%)Wenxuan Wang; Bangjie Yin; Taiping Yao; Li Zhang; Yanwei Fu; Shouhong Ding; Jilin Li; Feiyue Huang; Xiangyang Xue
secml-malware: Pentesting Windows Malware Classifiers with Adversarial EXEmples in Python. (99%)Luca Demetrio; Battista Biggio
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks. (98%)Sebastian Szyller; Vasisht Duddu; Tommi Gröndahl; N. Asokan
Impact of Spatial Frequency Based Constraints on Adversarial Robustness. (98%)Rémi Bernhard; Pierre-Alain Moellic; Martial Mermillod; Yannick Bourrier; Romain Cohendet; Miguel Solinas; Marina Reyboz
PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches. (87%)Chong Xiang; Prateek Mittal
2021-04-25
3D Adversarial Attacks Beyond Point Cloud. (99%)Jinlai Zhang; Lyujie Chen; Binbin Liu; Bo Ouyang; Qizhi Xie; Jihong Zhu; Weiming Li; Yanmei Meng
Making Generated Images Hard To Spot: A Transferable Attack On Synthetic Image Detectors. (81%)Xinwei Zhao; Matthew C. Stamm
2021-04-24
Influence Based Defense Against Data Poisoning Attacks in Online Learning. (99%)Sanjay Seetharaman; Shubham Malaviya; Rosni KV; Manish Shukla; Sachin Lodha
2021-04-23
Theoretical Study of Random Noise Defense against Query-Based Black-Box Attacks. (98%)Zeyu Qin; Yanbo Fan; Hongyuan Zha; Baoyuan Wu
Evaluating Deception Detection Model Robustness To Linguistic Variation. (82%)Maria Glenski; Ellyn Ayton; Robin Cosbey; Dustin Arendt; Svitlana Volkova
Lightweight Detection of Out-of-Distribution and Adversarial Samples via Channel Mean Discrepancy. (3%)Xin Dong; Junfeng Guo; Wei-Te Ting; H. T. Kung
Improving Neural Silent Speech Interface Models by Adversarial Training. (1%)Amin Honarmandi Shandiz; László Tóth; Gábor Gosztolya; Alexandra Markó; Tamás Gábor Csapó
2021-04-22
Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting. (99%)Qiming Wu; Zhikang Zou; Pan Zhou; Xiaoqing Ye; Binghui Wang; Ang Li
Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors. (98%)Arman Maesumi; Mingkang Zhu; Yi Wang; Tianlong Chen; Zhangyang Wang; Chandrajit Bajaj
Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions. (86%)Jing Wu; Mingyi Zhou; Ce Zhu; Yipeng Liu; Mehrtash Harandi; Li Li
Operator Shifting for General Noisy Matrix Systems. (56%)Philip Etter; Lexing Ying
SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics. (22%)Jonathan Hayase; Weihao Kong; Raghav Somani; Sewoong Oh
2021-04-21
Dual Head Adversarial Training. (99%)Yujing Jiang; Xingjun Ma; Sarah Monazam Erfani; James Bailey
Mixture of Robust Experts (MoRE): A Flexible Defense Against Multiple Perturbations. (99%)Kaidi Xu; Chenan Wang; Xue Lin; Bhavya Kailkhura; Ryan Goldhahn
Robust Certification for Laplace Learning on Geometric Graphs. (96%)Matthew Thorpe; Bao Wang
Jacobian Regularization for Mitigating Universal Adversarial Perturbations. (95%)Kenneth T. Co; David Martinez Rego; Emil C. Lupu
Dataset Inference: Ownership Resolution in Machine Learning. (83%)Pratyush Maini; Mohammad Yaghini; Nicolas Papernot
2021-04-20
Adversarial Training for Deep Learning-based Intrusion Detection Systems. (99%)Islam Debicha; Thibault Debatty; Jean-Michel Dricot; Wim Mees
MixDefense: A Defense-in-Depth Framework for Adversarial Example Detection Based on Statistical and Semantic Analysis. (99%)Yijun Yang; Ruiyuan Gao; Yu Li; Qiuxia Lai; Qiang Xu
MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training. (64%)Jian Ma; Shuyi Xie; Haiqin Yang; Lianxin Jiang; Mengyuan Zhou; Xiaoyi Ruan; Yang Mo
Does enhanced shape bias improve neural network robustness to common corruptions? (26%)Chaithanya Kumar Mummadi; Ranjitha Subramaniam; Robin Hutmacher; Julien Vitay; Volker Fischer; Jan Hendrik Metzen
Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles. (9%)Jiwei Guan; Xi Zheng; Chen Wang; Yipeng Zhou; Alireza Jolfaei
Network Defense is Not a Game. (1%)Andres Molina-Markham; Ransom K. Winder; Ahmad Ridley
2021-04-19
Staircase Sign Method for Boosting Adversarial Attacks. (99%)Qilong Zhang; Xiaosu Zhu; Jingkuan Song; Lianli Gao; Heng Tao Shen
Improving Adversarial Robustness Using Proxy Distributions. (99%)Vikash Sehwag; Saeed Mahloujifar; Tinashe Handina; Sihui Dai; Chong Xiang; Mung Chiang; Prateek Mittal
Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models. (99%)Lyuyi Zhu; Kairui Feng; Ziyuan Pu; Wei Ma
LAFEAT: Piercing Through Adversarial Defenses with Latent Features. (99%)Yunrui Yu; Xitong Gao; Cheng-Zhong Xu
Removing Adversarial Noise in Class Activation Feature Space. (99%)Dawei Zhou; Nannan Wang; Chunlei Peng; Xinbo Gao; Xiaoyu Wang; Jun Yu; Tongliang Liu
Direction-Aggregated Attack for Transferable Adversarial Examples. (99%)Tianjin Huang; Vlado Menkovski; Yulong Pei; YuHao Wang; Mykola Pechenizkiy
Manipulating SGD with Data Ordering Attacks. (95%)Ilia Shumailov; Zakhar Shumaylov; Dmitry Kazhdan; Yiren Zhao; Nicolas Papernot; Murat A. Erdogdu; Ross Anderson
Provable Robustness of Adversarial Training for Learning Halfspaces with Noise. (22%)Difan Zou; Spencer Frei; Quanquan Gu
Protecting the Intellectual Properties of Deep Neural Networks with an Additional Class and Steganographic Images. (11%)Shichang Sun; Mingfu Xue; Jian Wang; Weiqiang Liu
Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning. (1%)Kai Li; Chang Liu; Handong Zhao; Yulun Zhang; Yun Fu
2021-04-18
Best Practices for Noise-Based Augmentation to Improve the Performance of Emotion Recognition "In the Wild". (83%)Mimansa Jaiswal; Emily Mower Provost
Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training. (68%)Shunsuke Kitada; Hitoshi Iyatomi
On the Sensitivity and Stability of Model Interpretations in NLP. (1%)Fan Yin; Zhouxing Shi; Cho-Jui Hsieh; Kai-Wei Chang
2021-04-17
Attacking Text Classifiers via Sentence Rewriting Sampler. (99%)Lei Xu; Kalyan Veeramachaneni
Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems. (99%)Yue Gao; Ilia Shumailov; Kassem Fawaz
Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation. (98%)Max Bartolo; Tristan Thrush; Robin Jia; Sebastian Riedel; Pontus Stenetorp; Douwe Kiela
Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training. (87%)Kuan-Hao Huang; Wasi Uddin Ahmad; Nanyun Peng; Kai-Wei Chang
AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples. (15%)Qianchu Liu; Edoardo M. Ponti; Diana McCarthy; Ivan Vulić; Anna Korhonen
2021-04-16
Fashion-Guided Adversarial Attack on Person Segmentation. (99%)Marc Treu; Trung-Nghia Le; Huy H. Nguyen; Junichi Yamagishi; Isao Echizen
Towards Variable-Length Textual Adversarial Attacks. (99%)Junliang Guo; Zhirui Zhang; Linlin Zhang; Linli Xu; Boxing Chen; Enhong Chen; Weihua Luo
An Adversarially-Learned Turing Test for Dialog Generation Models. (96%)Xiang Gao; Yizhe Zhang; Michel Galley; Bill Dolan
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. (83%)David Stutz; Nandhini Chandramoorthy; Matthias Hein; Bernt Schiele
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries. (2%)Arjun Nitin Bhagoji; Daniel Cullina; Vikash Sehwag; Prateek Mittal
2021-04-15
Gradient-based Adversarial Attacks against Text Transformers. (99%)Chuan Guo; Alexandre Sablayrolles; Hervé Jégou; Douwe Kiela
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World. (86%)Mingfu Xue; Can He; Shichang Sun; Jian Wang; Weiqiang Liu
Are Multilingual BERT models robust? A Case Study on Adversarial Attacks for Multilingual Question Answering. (12%)Sara Rosenthal; Mihaela Bornea; Avirup Sil
Federated Learning for Malware Detection in IoT Devices. (10%)Valerian Rey; Pedro Miguel Sánchez Sánchez; Alberto Huertas Celdrán; Gérôme Bovet; Martin Jaggi
2021-04-14
Meaningful Adversarial Stickers for Face Recognition in Physical World. (98%)Ying Guo; Xingxing Wei; Guoqiu Wang; Bo Zhang
Orthogonalizing Convolutional Layers with the Cayley Transform. (80%)Asher Trockman; J. Zico Kolter
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks. (38%)Nicolas M. Müller; Simon Roschmann; Konstantin Böttinger
Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition. (1%)Alessandro De Palma; Rudy Bunel; Alban Desmaison; Krishnamurthy Dvijotham; Pushmeet Kohli; Philip H. S. Torr; M. Pawan Kumar
2021-04-13
Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune. (99%)Shanshi Huang; Hongwu Jiang; Shimeng Yu
Detecting Operational Adversarial Examples for Reliable Deep Learning. (82%)Xingyu Zhao; Wei Huang; Sven Schewe; Yi Dong; Xiaowei Huang
Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack. (75%)Luca Pajola; Mauro Conti
2021-04-12
Sparse Coding Frontend for Robust Neural Networks. (99%)Can Bakiskan; Metehan Cekic; Ahmet Dundar Sezer; Upamanyu Madhow
A Backdoor Attack against 3D Point Cloud Classifiers. (96%)Zhen Xiang; David J. Miller; Siheng Chen; Xi Li; George Kesidis
Plot-guided Adversarial Example Construction for Evaluating Open-domain Story Generation. (56%)Sarik Ghazarian; Zixi Liu; Akash SM; Ralph Weischedel; Aram Galstyan; Nanyun Peng
Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation. (50%)Chong Zhang; Jieyu Zhao; Huan Zhang; Kai-Wei Chang; Cho-Jui Hsieh
Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack. (1%)Xinyi Zhang; Chengfang Fang; Jie Shi
2021-04-11
Achieving Model Robustness through Discrete Adversarial Training. (99%)Maor Ivgi; Jonathan Berant
Pay attention to your loss: understanding misconceptions about 1-Lipschitz neural networks. (1%)Louis Béthune; Thibaut Boissin; Mathieu Serrurier; Franck Mamalet; Corentin Friedrich; Alberto González-Sanz
2021-04-10
Distributed Estimation over Directed Graphs Resilient to Sensor Spoofing. (69%)Shamik Bhattacharyya; Kiran Rokade; Rachel Kalpana Kalaimani
Fool Me Twice: Entailment from Wikipedia Gamification. (61%)Julian Martin Eisenschlos; Bhuwan Dhingra; Jannis Bulian; Benjamin Börschinger; Jordan Boyd-Graber
Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach. (15%)Simiao Zuo; Chen Liang; Haoming Jiang; Xiaodong Liu; Pengcheng He; Jianfeng Gao; Weizhu Chen; Tuo Zhao
Disentangled Contrastive Learning for Learning Robust Textual Representations. (11%)Xiang Chen; Xin Xie; Zhen Bi; Hongbin Ye; Shumin Deng; Ningyu Zhang; Huajun Chen
2021-04-09
Relating Adversarially Robust Generalization to Flat Minima. (99%)David Stutz; Matthias Hein; Bernt Schiele
SPoTKD: A Protocol for Symmetric Key Distribution over Public Channels Using Self-Powered Timekeeping Devices. (1%)Mustafizur Rahman; Liang Zhou; Shantanu Chakrabartty
Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication. (1%)Xiquan Guan; Huamin Feng; Weiming Zhang; Hang Zhou; Jie Zhang; Nenghai Yu
Learning Sampling Policy for Faster Derivative Free Optimization. (1%)Zhou Zhai; Bin Gu; Heng Huang
2021-04-08
FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems. (98%)Liang Tong; Zhengzhang Chen; Jingchao Ni; Wei Cheng; Dongjin Song; Haifeng Chen; Yevgeniy Vorobeychik
Explainability-based Backdoor Attacks Against Graph Neural Networks. (15%)Jing Xu; Minhui Xue; Stjepan Picek
A single gradient step finds adversarial examples on random two-layers neural networks. (10%)Sébastien Bubeck; Yeshwanth Cherapanamjeri; Gauthier Gidel; Rémi Tachet des Combes
Adversarial Learning Inspired Emerging Side-Channel Attacks and Defenses. (8%)Abhijitt Dhavlle
2021-04-07
Universal Adversarial Training with Class-Wise Perturbations. (99%)Philipp Benz; Chaoning Zhang; Adil Karjauv; In So Kweon
The art of defense: letting networks fool the attacker. (98%)Jinlai Zhang; Yinpeng Dong; Binbin Liu; Bo Ouyang; Jihong Zhu; Minchi Kuang; Houqing Wang; Yanmei Meng
Universal Spectral Adversarial Attacks for Deformable Shapes. (81%)Arianna Rampini; Franco Pestarini; Luca Cosmo; Simone Melzi; Emanuele Rodolà
Adversarial Robustness Guarantees for Gaussian Processes. (68%)Andrea Patane; Arno Blaas; Luca Laurenti; Luca Cardelli; Stephen Roberts; Marta Kwiatkowska
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. (61%)Yi Zeng; Won Park; Z. Morley Mao; Ruoxi Jia
Improving Robustness of Deep Reinforcement Learning Agents: Environment Attacks based on Critic Networks. (10%)Lucas Schott; Manon Césaire; Hatem Hajri; Sylvain Lamprier
Sparse Oblique Decision Trees: A Tool to Understand and Manipulate Neural Net Features. (3%)Suryabhan Singh Hada; Miguel Á. Carreira-Perpiñán; Arman Zharmagambetov
An Object Detection based Solver for Google's Image reCAPTCHA v2. (1%)Md Imran Hossen; Yazhou Tu; Md Fazle Rabby; Md Nazmul Islam; Hui Cao; Xiali Hei
2021-04-06
Exploring Targeted Universal Adversarial Perturbations to End-to-end ASR Models. (93%)Zhiyun Lu; Wei Han; Yu Zhang; Liangliang Cao
Adversarial Robustness under Long-Tailed Distribution. (89%)Tong Wu; Ziwei Liu; Qingqiu Huang; Yu Wang; Dahua Lin
Robust Adversarial Classification via Abstaining. (75%)Abed AlRahman Al Makdah; Vaibhav Katewa; Fabio Pasqualetti
Backdoor Attack in the Physical World. (2%)Yiming Li; Tongqing Zhai; Yong Jiang; Zhifeng Li; Shu-Tao Xia
2021-04-05
Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model. (99%)Payam Delgosha; Hamed Hassani; Ramtin Pedarsani
Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification. (98%)Samuel Henrique Silva; Arun Das; Ian Scarff; Peyman Najafirad
BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for Text Classification. (96%)Ishani Mondal
Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses. (74%)Yao Deng; Tiehua Zhang; Guannan Lou; Xi Zheng; Jiong Jin; Qing-Long Han
Can audio-visual integration strengthen robustness under multimodal attacks? (68%)Yapeng Tian; Chenliang Xu
Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. (33%)Neal Mangaokar; Jiameng Pu; Parantapa Bhattacharya; Chandan K. Reddy; Bimal Viswanath
Unified Detection of Digital and Physical Face Attacks. (8%)Debayan Deb; Xiaoming Liu; Anil K. Jain
Beyond Categorical Label Representations for Image Classification. (2%)Boyuan Chen; Yu Li; Sunand Raghupathi; Hod Lipson
Rethinking Perturbations in Encoder-Decoders for Fast Training. (1%)Sho Takase; Shun Kiyono
2021-04-04
Semantically Stealthy Adversarial Attacks against Segmentation Models. (99%)Zhenhua Chen; Chuhua Wang; David J. Crandall
Reliably fast adversarial training via latent adversarial perturbation. (93%)Geon Yeong Park; Sang Wan Lee
2021-04-03
Mitigating Gradient-based Adversarial Attacks via Denoising and Compression. (99%)Rehana Mahfuz; Rajeev Sahay; Aly El Gamal
Gradient-based Adversarial Deep Modulation Classification with Data-driven Subsampling. (93%)Jinho Yi; Aly El Gamal
Property-driven Training: All You (N)Ever Wanted to Know About. (38%)Marco Casadio; Matthew Daggitt; Ekaterina Komendantskaya; Wen Kokke; Daniel Kienitz; Rob Stewart
2021-04-02
Defending Against Image Corruptions Through Adversarial Augmentations. (92%)Dan A. Calian; Florian Stimberg; Olivia Wiles; Sylvestre-Alvise Rebuffi; Andras Gyorgy; Timothy Mann; Sven Gowal
RABA: A Robust Avatar Backdoor Attack on Deep Neural Network. (83%)Ying He; Zhili Shen; Chang Xia; Jingyu Hua; Wei Tong; Sheng Zhong
Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration under Noise Domain Shifts. (2%)Athanasios Tsiligkaridis; Theodoros Tsiligkaridis
Fast-adapting and Privacy-preserving Federated Recommender System. (1%)Qinyong Wang; Hongzhi Yin; Tong Chen; Junliang Yu; Alexander Zhou; Xiangliang Zhang
2021-04-01
TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness. (99%)Zhuolin Yang; Linyi Li; Xiaojun Xu; Shiliang Zuo; Qian Chen; Benjamin Rubinstein; Pan Zhou; Ce Zhang; Bo Li
Domain Invariant Adversarial Learning. (98%)Matan Levi; Idan Attias; Aryeh Kontorovich
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction. (93%)Luoqiu Li; Xiang Chen; Ningyu Zhang; Shumin Deng; Xin Xie; Chuanqi Tan; Mosha Chen; Fei Huang; Huajun Chen
Towards Evaluating and Training Verifiably Robust Neural Networks. (45%)Zhaoyang Lyu; Minghao Guo; Tong Wu; Guodong Xu; Kehuan Zhang; Dahua Lin
Augmenting Zero Trust Architecture to Endpoints Using Blockchain: A Systematic Review. (3%)Lampis Alevizos; Vinh Thong Ta; Max Hashem Eiza
Learning from Noisy Labels via Dynamic Loss Thresholding. (1%)Hao Yang; Youzhi Jin; Ziyin Li; Deng-Bao Wang; Lei Miao; Xin Geng; Min-Ling Zhang
2021-03-31
Adversarial Heart Attack: Neural Networks Fooled to Segment Heart Symbols in Chest X-Ray Images. (99%)Gerda Bortsova; Florian Dubost; Laurens Hogeweg; Ioannis Katramados; Marleen de Bruijne
Adversarial Attacks and Defenses for Speech Recognition Systems. (99%)Piotr Żelasko; Sonal Joshi; Yiwen Shao; Jesus Villalba; Jan Trmal; Najim Dehak; Sanjeev Khudanpur
Fast Certified Robust Training with Short Warmup. (86%)Zhouxing Shi; Yihan Wang; Huan Zhang; Jinfeng Yi; Cho-Jui Hsieh
Fast Jacobian-Vector Product for Deep Networks. (22%)Randall Balestriero; Richard Baraniuk
Too Expensive to Attack: A Joint Defense Framework to Mitigate Distributed Attacks for the Internet of Things Grid. (2%)Jianhua Li; Ximeng Liu; Jiong Jin; Shui Yu
Digital Forensics vs. Anti-Digital Forensics: Techniques, Limitations and Recommendations. (1%)Jean-Paul A. Yaacoub; Hassan N. Noura; Ola Salman; Ali Chehab
2021-03-30
On the Robustness of Vision Transformers to Adversarial Examples. (99%)Kaleel Mahmood; Rigel Mahmood; Marten van Dijk
Class-Aware Robust Adversarial Training for Object Detection. (96%)Pin-Chun Chen; Bo-Han Kung; Jun-Cheng Chen
PointBA: Towards Backdoor Attacks in 3D Point Cloud. (92%)Xinke Li; Zhiru Chen; Yue Zhao; Zekun Tong; Yabang Zhao; Andrew Lim; Joey Tianyi Zhou
What Causes Optical Flow Networks to be Vulnerable to Physical Adversarial Attacks. (91%)Simon Schrodi; Tonmoy Saikia; Thomas Brox
Statistical inference for individual fairness. (67%)Subha Maity; Songkai Xue; Mikhail Yurochkin; Yuekai Sun
Learning Lipschitz Feedback Policies from Expert Demonstrations: Closed-Loop Guarantees, Generalization and Robustness. (47%)Abed AlRahman Al Makdah; Vishaal Krishnan; Fabio Pasqualetti
Improving robustness against common corruptions with frequency biased models. (1%)Tonmoy Saikia; Cordelia Schmid; Thomas Brox
2021-03-29
Lagrangian Objective Function Leads to Improved Unforeseen Attack Generalization in Adversarial Training. (99%)Mohammad Azizmalayeri; Mohammad Hossein Rohban
Enhancing the Transferability of Adversarial Attacks through Variance Tuning. (99%)Xiaosen Wang; Kun He
On the Adversarial Robustness of Vision Transformers. (99%)Rulin Shao; Zhouxing Shi; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh
ZeroGrad : Mitigating and Explaining Catastrophic Overfitting in FGSM Adversarial Training. (95%)Zeinab Golgooni; Mehrdad Saberi; Masih Eskandar; Mohammad Hossein Rohban
Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing. (93%)Cheng Chen; Bhavya Kailkhura; Ryan Goldhahn; Yi Zhou
Fooling LiDAR Perception via Adversarial Trajectory Perturbation. (83%)Yiming Li; Congcong Wen; Felix Juefei-Xu; Chen Feng
Robust Reinforcement Learning under model misspecification. (31%)Lebin Yu; Jian Wang; Xudong Zhang
Automating Defense Against Adversarial Attacks: Discovery of Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed Models. (16%)Josh Kalin; David Noever; Matthew Ciolino; Dominick Hambrick; Gerry Dozier
MISA: Online Defense of Trojaned Models using Misattributions. (10%)Panagiota Kiourti; Wenchao Li; Anirban Roy; Karan Sikka; Susmit Jha
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models. (9%)Wenkai Yang; Lei Li; Zhiyuan Zhang; Xuancheng Ren; Xu Sun; Bin He
Selective Output Smoothing Regularization: Regularize Neural Networks by Softening Output Distributions. (1%)Xuan Cheng; Tianshu Xie; Xiaomin Wang; Qifeng Weng; Minghui Liu; Jiali Deng; Ming Liu
2021-03-28
Improved Autoregressive Modeling with Distribution Smoothing. (86%)Chenlin Meng; Jiaming Song; Yang Song; Shengjia Zhao; Stefano Ermon
2021-03-27
On the benefits of robust models in modulation recognition. (99%)Javier Maroto; Gérôme Bovet; Pascal Frossard
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking. (99%)Shuai Jia; Yibing Song; Chao Ma; Xiaokang Yang
LiBRe: A Practical Bayesian Approach to Adversarial Detection. (99%)Zhijie Deng; Xiao Yang; Shizhen Xu; Hang Su; Jun Zhu
2021-03-26
Cyclic Defense GAN Against Speech Adversarial Attacks. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Combating Adversaries with Anti-Adversaries. (93%)Motasem Alfarra; Juan C. Pérez; Ali Thabet; Adel Bibi; Philip H. S. Torr; Bernard Ghanem
On Generating Transferable Targeted Perturbations. (93%)Muzammal Naseer; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Fatih Porikli
Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation. (86%)Dohun Lim; Hyeonseok Lee; Sungchan Kim
Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness. (83%)Yi Cai; Xuefei Ning; Huazhong Yang; Yu Wang
Visual Explanations from Spiking Neural Networks using Interspike Intervals. (62%)Youngeun Kim; Priyadarshini Panda
Unsupervised Robust Domain Adaptation without Source Data. (13%)Peshal Agarwal; Danda Pani Paudel; Jan-Nico Zaech; Luc Van Gool
2021-03-25
Adversarial Attacks are Reversible with Natural Supervision. (99%)Chengzhi Mao; Mia Chiquier; Hao Wang; Junfeng Yang; Carl Vondrick
Adversarial Attacks on Deep Learning Based mmWave Beam Prediction in 5G and Beyond. (98%)Brian Kim; Yalin E. Sagduyu; Tugba Erpek; Sennur Ulukus
MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes. (81%)Zhikai Chen; Lingxi Xie; Shanmin Pang; Yong He; Bo Zhang
Deep-RBF Networks for Anomaly Detection in Automotive Cyber-Physical Systems. (70%)Matthew Burruss; Shreyas Ramakrishna; Abhishek Dubey
Orthogonal Projection Loss. (45%)Kanchana Ranasinghe; Muzammal Naseer; Munawar Hayat; Salman Khan; Fahad Shahbaz Khan
THAT: Two Head Adversarial Training for Improving Robustness at Scale. (26%)Zuxuan Wu; Tom Goldstein; Larry S. Davis; Ser-Nam Lim
A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography. (11%)Xiaoxuan Lou; Tianwei Zhang; Jun Jiang; Yinqian Zhang
HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks. (10%)Peizhuo Lv; Pan Li; Shengzhi Zhang; Kai Chen; Ruigang Liang; Yue Zhao; Yingjiu Li
The Geometry of Over-parameterized Regression and Adversarial Perturbations. (2%)Jason W. Rocks; Pankaj Mehta
Synthesize-It-Classifier: Learning a Generative Classifier through Recurrent Self-analysis. (1%)Arghya Pal; Rapha Phan; KokSheik Wong
Spirit Distillation: Precise Real-time Prediction with Insufficient Data. (1%)Zhiyuan Wu; Hong Qi; Yu Jiang; Chupeng Cui; Zongmin Yang; Xinhui Xue
Recent Advances in Large Margin Learning. (1%)Yiwen Guo; Changshui Zhang
2021-03-24
Towards Both Accurate and Robust Neural Networks without Extra Data. (99%)Faqiang Liu; Rong Zhao
Vulnerability of Appearance-based Gaze Estimation. (97%)Mingjie Xu; Haofei Wang; Yunfei Liu; Feng Lu
Black-box Detection of Backdoor Attacks with Limited Information and Data. (96%)Yinpeng Dong; Xiao Yang; Zhijie Deng; Tianyu Pang; Zihao Xiao; Hang Su; Jun Zhu
Deepfake Forensics via An Adversarial Game. (10%)Zhi Wang; Yiwen Guo; Wangmeng Zuo
2021-03-23
Robust and Accurate Object Detection via Adversarial Learning. (98%)Xiangning Chen; Cihang Xie; Mingxing Tan; Li Zhang; Cho-Jui Hsieh; Boqing Gong
CLIP: Cheap Lipschitz Training of Neural Networks. (96%)Leon Bungert; René Raab; Tim Roith; Leo Schwinn; Daniel Tenbrinck
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? (92%)Antonio Emanuele Cinà; Sebastiano Vascon; Ambra Demontis; Battista Biggio; Fabio Roli; Marcello Pelillo
Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations. (87%)Chaitanya K. Ryali; David J. Schwab; Ari S. Morcos
RPATTACK: Refined Patch Attack on General Object Detectors. (76%)Hao Huang; Yongtao Wang; Zhaoyu Chen; Zhi Tang; Wenqiang Zhang; Kai-Kuang Ma
NNrepair: Constraint-based Repair of Neural Network Classifiers. (50%)Muhammad Usman; Divya Gopinath; Youcheng Sun; Yannic Noller; Corina Pasareanu
Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs. (31%)Ramneet Kaur; Susmit Jha; Anirban Roy; Oleg Sokolsky; Insup Lee
Improved Estimation of Concentration Under $\ell_p$-Norm Distance Metrics Using Half Spaces. (22%)Jack Prescott; Xiao Zhang; David Evans
ESCORT: Ethereum Smart COntRacTs Vulnerability Detection using Deep Neural Network and Transfer Learning. (1%)Oliver Lutz; Huili Chen; Hossein Fereidooni; Christoph Sendner; Alexandra Dmitrienko; Ahmad Reza Sadeghi; Farinaz Koushanfar
2021-03-22
Grey-box Adversarial Attack And Defence For Sentiment Classification. (99%)Ying Xu; Xu Zhong; Antonio Jimeno Yepes; Jey Han Lau
Fast Approximate Spectral Normalization for Robust Deep Neural Networks. (98%)Zhixin Pan; Prabhat Mishra
Spatio-Temporal Sparsification for General Robust Graph Convolution Networks. (87%)Mingming Lu; Ya Zhang
RA-BNN: Constructing Robust & Accurate Binary Neural Network to Simultaneously Defend Adversarial Bit-Flip Attack and Improve Accuracy. (75%)Adnan Siraj Rakin; Li Yang; Jingtao Li; Fan Yao; Chaitali Chakrabarti; Yu Cao; Jae-sun Seo; Deliang Fan
Adversarial Feature Augmentation and Normalization for Visual Recognition. (13%)Tianlong Chen; Yu Cheng; Zhe Gan; Jianfeng Wang; Lijuan Wang; Zhangyang Wang; Jingjing Liu
Adversarially Optimized Mixup for Robust Classification. (13%)Jason Bunk; Srinjoy Chattopadhyay; B. S. Manjunath; Shivkumar Chandrasekaran
2021-03-21
ExAD: An Ensemble Approach for Explanation-based Adversarial Detection. (99%)Raj Vardhan; Ninghao Liu; Phakpoom Chinprutthiwong; Weijie Fu; Zhenyu Hu; Xia Ben Hu; Guofei Gu
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing. (75%)Tao Gui; Xiao Wang; Qi Zhang; Qin Liu; Yicheng Zou; Xin Zhou; Rui Zheng; Chong Zhang; Qinzhuo Wu; Jiacheng Ye; Zexiong Pang; Yongxin Zhang; Zhengyan Li; Ruotian Ma; Zichu Fei; Ruijian Cai; Jun Zhao; Xinwu Hu; Zhiheng Yan; Yiding Tan; Yuan Hu; Qiyuan Bian; Zhihua Liu; Bolin Zhu; Shan Qin; Xiaoyu Xing; Jinlan Fu; Yue Zhang; Minlong Peng; Xiaoqing Zheng; Yaqian Zhou; Zhongyu Wei; Xipeng Qiu; Xuanjing Huang
Natural Perturbed Training for General Robustness of Neural Network Classifiers. (38%)Sadaf Gulshad; Arnold Smeulders
Self adversarial attack as an augmentation method for immunohistochemical stainings. (33%)Jelica Vasiljević; Friedrich Feuerhake; Cédric Wemmert; Thomas Lampert
2021-03-20
Robust Models Are More Interpretable Because Attributions Look Normal. (15%)Zifan Wang; Matt Fredrikson; Anupam Datta
2021-03-19
LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack. (99%)Ashkan Esmaeili; Marzieh Edraki; Nazanin Rahnavard; Mubarak Shah; Ajmal Mian
SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems. (93%)Yuxuan Chen; Jiangshan Zhang; Xuejing Yuan; Shengzhi Zhang; Kai Chen; Xiaofeng Wang; Shanqing Guo
Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions. (86%)Michael Goebel; Jason Bunk; Srinjoy Chattopadhyay; Lakshmanan Nataraj; Shivkumar Chandrasekaran; B. S. Manjunath
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond. (2%)Xuhong Li; Haoyi Xiong; Xingjian Li; Xuanyu Wu; Xiao Zhang; Ji Liu; Jiang Bian; Dejing Dou
2021-03-18
Generating Adversarial Computer Programs using Optimized Obfuscations. (99%)Shashank Srikant; Sijia Liu; Tamara Mitrovska; Shiyu Chang; Quanfu Fan; Gaoyuan Zhang; Una-May O'Reilly
Boosting Adversarial Transferability through Enhanced Momentum. (99%)Xiaosen Wang; Jiadong Lin; Han Hu; Jingdong Wang; Kun He
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles. (98%)Gabriel D. Cantareira; Rodrigo F. Mello; Fernando V. Paulovich
Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training. (76%)Saurabh Sahu; Palash Goyal
Model Extraction and Adversarial Transferability, Your BERT is Vulnerable! (69%)Xuanli He; Lingjuan Lyu; Qiongkai Xu; Lichao Sun
TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation. (61%)Todd Huster; Emmanuel Ekwedike
Noise Modulation: Let Your Model Interpret Itself. (54%)Haoyang Li; Xinggang Wang
KoDF: A Large-scale Korean DeepFake Detection Dataset. (16%)Patrick Kwon; Jaeseong You; Gyuhyeon Nam; Sungwoo Park; Gyeongsu Chae
Reading Isn't Believing: Adversarial Attacks On Multi-Modal Neurons. (9%)David A. Noever; Samantha E. Miller Noever
2021-03-17
Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap? (99%)Nathan Inkawhich; Kevin J Liang; Jingyang Zhang; Huanrui Yang; Hai Li; Yiran Chen
Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection. (98%)Mazen Abdelfattah; Kaiwen Yuan; Z. Jane Wang; Rabab Ward
Improved, Deterministic Smoothing for L1 Certified Robustness. (82%)Alexander Levine; Soheil Feizi
Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition. (41%)Yaodong Yu; Zitong Yang; Edgar Dobriban; Jacob Steinhardt; Yi Ma
Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots. (38%)Samson Tan; Shafiq Joty
Cyber Intrusion Detection by Using Deep Neural Networks with Attack-sharing Loss. (13%)Boxiang Dong; Wendy Hui Wang; Aparna S. Varde; Dawei Li; Bharath K. Samanthula; Weifeng Sun; Liang Zhao
2021-03-16
Adversarial Driving: Attacking End-to-End Autonomous Driving. (93%)Han Wu; Syed Yunas; Sareh Rowlands; Wenjie Ruan; Johan Wahlstrom
Adversarial YOLO: Defense Human Detection Patch Attacks via Detecting Adversarial Patches. (92%)Nan Ji; YanFei Feng; Haidong Xie; Xueshuang Xiang; Naijin Liu
Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation. (75%)Jungbeom Lee; Eunji Kim; Sungroh Yoon
Bio-inspired Robustness: A Review. (70%)Harshitha Machiraju; Oh-Hyeon Choung; Pascal Frossard; Michael. H Herzog
2021-03-15
Constant Random Perturbations Provide Adversarial Robustness with Minimal Effect on Accuracy. (83%)Bronya Roni Chernyak; Bhiksha Raj; Tamir Hazan; Joseph Keshet
Adversarial Training is Not Ready for Robot Learning. (67%)Mathias Lechner; Ramin Hasani; Radu Grosu; Daniela Rus; Thomas A. Henzinger
HDTest: Differential Fuzz Testing of Brain-Inspired Hyperdimensional Computing. (64%)Dongning Ma; Jianmin Guo; Yu Jiang; Xun Jiao
Understanding invariance via feedforward inversion of discriminatively trained classifiers. (10%)Piotr Teterwak; Chiyuan Zhang; Dilip Krishnan; Michael C. Mozer
Meta-Solver for Neural Ordinary Differential Equations. (2%)Julia Gusak; Alexandr Katrutsa; Talgat Daulbaev; Andrzej Cichocki; Ivan Oseledets
2021-03-14
Towards Robust Speech-to-Text Adversarial Attack. (99%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks. (98%)Manoj Rohit Vemparala; Alexander Frickenstein; Nael Fasfous; Lukas Frickenstein; Qi Zhao; Sabine Kuhn; Daniel Ehrhardt; Yuankai Wu; Christian Unger; Naveen Shankar Nagaraja; Walter Stechele
Multi-Discriminator Sobolev Defense-GAN Against Adversarial Attacks for End-to-End Speech Systems. (82%)Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Membership Inference Attacks on Machine Learning: A Survey. (68%)Hongsheng Hu; Zoran Salcic; Lichao Sun; Gillian Dobbie; Philip S. Yu; Xuyun Zhang
2021-03-13
Attack as Defense: Characterizing Adversarial Examples using Robustness. (99%)Zhe Zhao; Guangke Chen; Jingyi Wang; Yiwei Yang; Fu Song; Jun Sun
Generating Unrestricted Adversarial Examples via Three Parameters. (99%)Hanieh Naderi; Leili Goli; Shohreh Kasaei
Simeon -- Secure Federated Machine Learning Through Iterative Filtering. (12%)Nicholas Malecki; Hye-young Paik; Aleksandar Ignjatovic; Alan Blair; Elisa Bertino
2021-03-12
Learning Defense Transformers for Counterattacking Adversarial Examples. (99%)Jincheng Li; Jiezhang Cao; Yifan Zhang; Jian Chen; Mingkui Tan
Internal Wasserstein Distance for Adversarial Attack and Defense. (99%)Mingkui Tan; Shuhai Zhang; Jiezhang Cao; Jincheng Li; Yanwu Xu
A Unified Game-Theoretic Interpretation of Adversarial Robustness. (98%)Jie Ren; Die Zhang; Yisen Wang; Lu Chen; Zhanpeng Zhou; Yiting Chen; Xu Cheng; Xin Wang; Meng Zhou; Jie Shi; Quanshi Zhang
Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case. (82%)Evren Catak; Ferhat Ozgur Catak; Arild Moldsvor
Network Environment Design for Autonomous Cyberdefense. (1%)Andres Molina-Markham; Cory Miniter; Becky Powell; Ahmad Ridley
2021-03-11
Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling. (99%)Md Shohidul Islam; Ihsen Alouani; Khaled N. Khasawneh
Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification. (99%)Shiqi Wang; Huan Zhang; Kaidi Xu; Xue Lin; Suman Jana; Cho-Jui Hsieh; J. Zico Kolter
Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink. (99%)Ranjie Duan; Xiaofeng Mao; A. K. Qin; Yun Yang; Yuefeng Chen; Shaokai Ye; Yuan He
DAFAR: Detecting Adversaries by Feedback-Autoencoder Reconstruction. (99%)Haowen Liu; Ping Yi; Hsiao-Ying Lin; Jie Shi
ReinforceBug: A Framework to Generate Adversarial Textual Examples. (97%)Bushra Sabir; M. Ali Babar; Raj Gaire
Multi-Task Federated Reinforcement Learning with Adversaries. (15%)Aqeel Anwar; Arijit Raychowdhury
BODAME: Bilevel Optimization for Defense Against Model Extraction. (8%)Yuto Mori; Atsushi Nitanda; Akiko Takeda
2021-03-10
Improving Adversarial Robustness via Channel-wise Activation Suppressing. (99%)Yang Bai; Yuyuan Zeng; Yong Jiang; Shu-Tao Xia; Xingjun Ma; Yisen Wang
TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack. (92%)Yam Sharon; David Berend; Yang Liu; Asaf Shabtai; Yuval Elovici
VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples. (67%)Tian Pan; Yibing Song; Tianyu Yang; Wenhao Jiang; Wei Liu
Fine-tuning of Pre-trained End-to-end Speech Recognition with Generative Adversarial Networks. (1%)Md Akmal Haidar; Mehdi Rezagholizadeh
2021-03-09
Stabilized Medical Image Attacks. (99%)Gege Qi; Lijun Gong; Yibing Song; Kai Ma; Yefeng Zheng
Revisiting Model's Uncertainty and Confidences for Adversarial Example Detection. (99%)Ahmed Aldahdooh; Wassim Hamidouche; Olivier Déforges
Practical Relative Order Attack in Deep Ranking. (99%)Mo Zhou; Le Wang; Zhenxing Niu; Qilin Zhang; Yinghui Xu; Nanning Zheng; Gang Hua
BASAR: Black-box Attack on Skeletal Action Recognition. (99%)Yunfeng Diao; Tianjia Shao; Yong-Liang Yang; Kun Zhou; He Wang
Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack. (98%)He Wang; Feixiang He; Zhexi Peng; Tianjia Shao; Yong-Liang Yang; Kun Zhou; David Hogg
Deep Learning for Android Malware Defenses: a Systematic Literature Review. (11%)Yue Liu; Chakkrit Tantithamthavorn; Li Li; Yepang Liu
Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency. (10%)Mohammad Mehdi Yadollahi; Farzaneh Shoeleh; Sajjad Dadkhah; Ali A. Ghorbani
Towards Strengthening Deep Learning-based Side Channel Attacks with Mixup. (2%)Zhimin Luo; Mengce Zheng; Ping Wang; Minhui Jin; Jiajia Zhang; Honggang Hu; Nenghai Yu
2021-03-08
Packet-Level Adversarial Network Traffic Crafting using Sequence Generative Adversarial Networks. (99%)Qiumei Cheng; Shiying Zhou; Yi Shen; Dezhang Kong; Chunming Wu
Improving Transformation-based Defenses against Adversarial Examples with First-order Perturbations. (99%)Haimin Zhang; Min Xu
Contemplating real-world object classification. (81%)Ali Borji
Consistency Regularization for Adversarial Robustness. (50%)Jihoon Tack; Sihyun Yu; Jongheon Jeong; Minseon Kim; Sung Ju Hwang; Jinwoo Shin
Prime+Probe 1, JavaScript 0: Overcoming Browser-based Side-Channel Defenses. (2%)Anatoly Shusterman; Ayush Agarwal; Sioli O'Connell; Daniel Genkin; Yossi Oren; Yuval Yarom
Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors. (1%)Jian Ding; Enze Xie; Hang Xu; Chenhan Jiang; Zhenguo Li; Ping Luo; Gui-Song Xia
Deep Model Intellectual Property Protection via Deep Watermarking. (1%)Jie Zhang; Dongdong Chen; Jing Liao; Weiming Zhang; Huamin Feng; Gang Hua; Nenghai Yu
2021-03-07
Universal Adversarial Perturbations and Image Spam Classifiers. (99%)Andy Phung; Mark Stamp
Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain. (99%)Jinyu Tian; Jiantao Zhou; Yuanman Li; Jia Duan
Improving Global Adversarial Robustness Generalization With Adversarially Trained GAN. (99%)Desheng Wang; Weidong Jin; Yunpu Wu; Aamir Khan
Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and Accuracy. (76%)Chen Chen; Kezhi Kong; Peihong Yu; Juan Luque; Tom Goldstein; Furong Huang
2021-03-06
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification. (98%)Ahmadreza Azizi; Ibrahim Asadullah Tahmid; Asim Waheed; Neal Mangaokar; Jiameng Pu; Mobin Javed; Chandan K. Reddy; Bimal Viswanath
Hidden Backdoor Attack against Semantic Segmentation Models. (93%)Yiming Li; Yanjie Li; Yalei Lv; Yong Jiang; Shu-Tao Xia
2021-03-05
Cyber Threat Intelligence Model: An Evaluation of Taxonomies, Sharing Standards, and Ontologies within Cyber Threat Intelligence. (13%)Vasileios Mavroeidis; Siri Bromander
Don't Forget to Sign the Gradients! (10%)Omid Aramoon; Pin-Yu Chen; Gang Qu
Tor circuit fingerprinting defenses using adaptive padding. (1%)George Kadianakis; Theodoros Polyzos; Mike Perry; Kostas Chatzikokolakis
2021-03-04
Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples. (99%)Washington Garcia; Pin-Yu Chen; Somesh Jha; Scott Clouse; Kevin R. B. Butler
WaveGuard: Understanding and Mitigating Audio Adversarial Examples. (99%)Shehzeen Hussain; Paarth Neekhara; Shlomo Dubnov; Julian McAuley; Farinaz Koushanfar
Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack. (99%)Mengting Xu; Tao Zhang; Zhongnian Li; Mingxia Liu; Daoqiang Zhang
QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval. (99%)Xiaodan Li; Jinfeng Li; Yuefeng Chen; Shaokai Ye; Yuan He; Shuhui Wang; Hang Su; Hui Xue
SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain. (99%)Paula Harder; Franz-Josef Pfreundt; Margret Keuper; Janis Keuper
Gradient-Guided Dynamic Efficient Adversarial Training. (96%)Fu Wang; Yanghao Zhang; Yanbin Zheng; Wenjie Ruan
PointGuard: Provably Robust 3D Point Cloud Classification. (92%)Hongbin Liu; Jinyuan Jia; Neil Zhenqiang Gong
Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods. (12%)William Paul; Yinzhi Cao; Miaomiao Zhang; Phil Burlina
A Novel Framework for Threat Analysis of Machine Learning-based Smart Healthcare Systems. (1%)Nur Imtiazul Haque; Mohammad Ashiqur Rahman; Md Hasan Shahriar; Alvi Ataur Khalil; Selcuk Uluagac
On the privacy-utility trade-off in differentially private hierarchical text classification. (1%)Dominik Wunderlich; Daniel Bernau; Francesco Aldà; Javier Parra-Arnau; Thorsten Strufe
2021-03-03
Structure-Preserving Progressive Low-rank Image Completion for Defending Adversarial Attacks. (99%)Zhiqun Zhao; Hengyou Wang; Hao Sun; Zhihai He
A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models. (89%)Josh Kalin; David Noever; Matthew Ciolino
Shift Invariance Can Reduce Adversarial Robustness. (87%)Songwei Ge; Vasu Singla; Ronen Basri; David Jacobs
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks. (81%)Yudi Dong; Huaxia Wang; Yu-Dong Yao
On the effectiveness of adversarial training against common corruptions. (67%)Klim Kireev; Maksym Andriushchenko; Nicolas Flammarion
Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations. (64%)Yu-Lin Tsai; Chia-Yi Hsu; Chia-Mu Yu; Pin-Yu Chen
2021-03-02
Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training. (99%)Dorjan Hitaj; Giulio Pagnotta; Iacopo Masi; Luigi V. Mancini
A Survey On Universal Adversarial Attack. (99%)Chaoning Zhang; Philipp Benz; Chenguo Lin; Adil Karjauv; Jing Wu; In So Kweon
Online Adversarial Attacks. (99%)Andjela Mladenovic; Avishek Joey Bose; Hugo Berard; William L. Hamilton; Simon Lacoste-Julien; Pascal Vincent; Gauthier Gidel
Adversarial Examples for Unsupervised Machine Learning Models. (98%)Chia-Yi Hsu; Pin-Yu Chen; Songtao Lu; Sijia Liu; Chia-Mu Yu
DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers. (97%)Colin Paterson; Haoze Wu; John Grese; Radu Calinescu; Corina S. Pasareanu; Clark Barrett
ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples. (97%)Mingfu Xue; Shichang Sun; Can He; Yushu Zhang; Jian Wang; Weiqiang Liu
Fixing Data Augmentation to Improve Adversarial Robustness. (69%)Sylvestre-Alvise Rebuffi; Sven Gowal; Dan A. Calian; Florian Stimberg; Olivia Wiles; Timothy Mann
A Brief Survey on Deep Learning Based Data Hiding. (54%)Chaoning Zhang; Chenguo Lin; Philipp Benz; Kejiang Chen; Weiming Zhang; In So Kweon
Group-wise Inhibition based Feature Regularization for Robust Classification. (16%)Haozhe Liu; Haoqian Wu; Weicheng Xie; Feng Liu; Linlin Shen
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations. (1%)Eitan Borgnia; Jonas Geiping; Valeriia Cherepanova; Liam Fowl; Arjun Gupta; Amin Ghiasi; Furong Huang; Micah Goldblum; Tom Goldstein
2021-03-01
Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World. (99%)Jiakai Wang; Aishan Liu; Zixin Yin; Shunchang Liu; Shiyu Tang; Xianglong Liu
Brain Programming is Immune to Adversarial Attacks: Towards Accurate and Robust Image Classification using Symbolic Learning. (99%)Gerardo Ibarra-Vazquez; Gustavo Olague; Mariana Chan-Ley; Cesar Puente; Carlos Soubervielle-Montalvo
Smoothness Analysis of Adversarial Training. (98%)Sekitoshi Kanai; Masanori Yamada; Hiroshi Takahashi; Yuki Yamanaka; Yasutoshi Ida
Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis. (96%)Mahsa Paknezhad; Cuong Phuc Ngo; Amadeus Aristo Winarto; Alistair Cheong; Beh Chuen Yang; Wu Jiayang; Lee Hwee Kuan
Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers. (93%)Francesco Croce; Matthias Hein
Adversarial training in communication constrained federated learning. (87%)Devansh Shah; Parijat Dube; Supriyo Chakraborty; Ashish Verma
Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms. (82%)Miguel Á. Carreira-Perpiñán; Suryabhan Singh Hada
Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack. (70%)Shahroz Tariq; Sowon Jeon; Simon S. Woo
A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness. (64%)Jacob Abernethy; Pranjal Awasthi; Satyen Kale
Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. (62%)Wei Dai; Daniel Berleant
2021-02-28
Model-Agnostic Defense for Lane Detection against Adversarial Attack. (98%)Henry Xu; An Ju; David Wagner
Robust learning under clean-label attack. (22%)Avrim Blum; Steve Hanneke; Jian Qian; Han Shao
2021-02-27
Effective Universal Unrestricted Adversarial Attacks using a MOE Approach. (98%)A. E. Baia; G. Di Bari; V. Poggioni
Tiny Adversarial Multi-Objective Oneshot Neural Architecture Search. (93%)Guoyang Xie; Jinbao Wang; Guo Yu; Feng Zheng; Yaochu Jin
End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering. (73%)Ruochen Jiao; Hengyi Liang; Takami Sato; Junjie Shen; Qi Alfred Chen; Qi Zhu
Adversarial Information Bottleneck. (33%)Penglong Zhai; Shihua Zhang
Neuron Coverage-Guided Domain Generalization. (2%)Chris Xing Tian; Haoliang Li; Xiaofei Xie; Yang Liu; Shiqi Wang
2021-02-26
What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors.Jonas Geiping; Liam Fowl; Gowthami Somepalli; Micah Goldblum; Michael Moeller; Tom Goldstein
NEUROSPF: A tool for the Symbolic Analysis of Neural Networks. (68%)Muhammad Usman; Yannic Noller; Corina Pasareanu; Youcheng Sun; Divya Gopinath
2021-02-25
On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adversarial Perturbations.Chi Zhang; Jinghan Jia; Burhaneddin Yaman; Steen Moeller; Sijia Liu; Mingyi Hong; Mehmet Akçakaya
Do Input Gradients Highlight Discriminative Features?Harshay Shah; Prateek Jain; Praneeth Netrapalli
Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks.Huichen Li; Linyi Li; Xiaojun Xu; Xiaolu Zhang; Shuang Yang; Bo Li
Understanding Robustness in Teacher-Student Setting: A New Perspective.Zhuolin Yang; Zhaoxi Chen; Tiffany Cai; Xinyun Chen; Bo Li; Yuandong Tian
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints.Maura Pintor; Fabio Roli; Wieland Brendel; Battista Biggio
Cybersecurity Threats in Connected and Automated Vehicles based Federated Learning Systems.Ranwa Al Mallah; Godwin Badu-Marfo; Bilal Farooq
A statistical framework for efficient out of distribution detection in deep neural networks. (1%)Matan Haroush; Tzviel Frostig; Ruth Heller; Daniel Soudry
2021-02-24
Confidence Calibration with Bounded Error Using Transformations.Sooyong Jang; Radoslav Ivanov; Insup Lee; James Weimer
Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks.Apoorva Sharma; Navid Azizan; Marco Pavone
Robust SleepNets.Yigit Alparslan; Edward Kim
Multiplicative Reweighting for Robust Neural Network Optimization.Noga Bar; Tomer Koren; Raja Giryes
Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis.Leo Schwinn; An Nguyen; René Raab; Leon Bungert; Daniel Tenbrinck; Dario Zanca; Martin Burger; Bjoern Eskofier
Graphfool: Targeted Label Adversarial Attack on Graph Embedding.Jinyin Chen; Xiang Lin; Dunjie Zhang; Wenrong Jiang; Guohan Huang; Hui Xiong; Yun Xiang
2021-02-23
The Sensitivity of Word Embeddings-based Author Detection Models to Semantic-preserving Adversarial Perturbations.Jeremiah Duncan; Fabian Fallas; Chris Gropp; Emily Herron; Maria Mahbub; Paula Olaya; Eduardo Ponce; Tabitha K. Samuel; Daniel Schultz; Sudarshan Srinivasan; Maofeng Tang; Viktor Zenkov; Quan Zhou; Edmon Begoli
Rethinking Natural Adversarial Examples for Classification Models.Xiao Li; Jianmin Li; Ting Dai; Jie Shi; Jun Zhu; Xiaolin Hu
Automated Discovery of Adaptive Attacks on Adversarial Defenses.Chengyuan Yao; Pavol Bielik; Petar Tsankov; Martin Vechev
Adversarial Robustness with Non-uniform Perturbations.Ecenaz Erdemir; Jeffrey Bickford; Luca Melis; Sergul Aydore
Non-Singular Adversarial Robustness of Neural Networks.Yu-Lin Tsai; Chia-Yi Hsu; Chia-Mu Yu; Pin-Yu Chen
Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation.Jinfeng Li; Tianyu Du; Xiangyu Liu; Rong Zhang; Hui Xue; Shouling Ji
Adversarial Examples Detection beyond Image Space.Kejiang Chen; Yuefeng Chen; Hang Zhou; Chuan Qin; Xiaofeng Mao; Weiming Zhang; Nenghai Yu
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models.Liuqiao Chen; Hu Wang; Benjamin Zi Hao Zhao; Minhui Xue; Haifeng Qian
2021-02-22
On the robustness of randomized classifiers to adversarial examples.Rafael Pinot; Laurent Meunier; Florian Yger; Cédric Gouy-Pailler; Yann Chevaleyre; Jamal Atif
Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks.Ginevra Carbone; Guido Sanguinetti; Luca Bortolussi
Man-in-The-Middle Attacks and Defense in a Power System Cyber-Physical Testbed.Patrick Wlazlo; Abhijeet Sahu; Zeyu Mao; Hao Huang; Ana Goulart; Katherine Davis; Saman Zonouz
Sandwich Batch Normalization: A Drop-In Replacement for Feature Distribution Heterogeneity.Xinyu Gong; Wuyang Chen; Tianlong Chen; Zhangyang Wang
2021-02-21
The Effects of Image Distribution and Task on Adversarial Robustness.Owen Kunhardt; Arturo Deza; Tomaso Poggio
A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization.HanQin Cai; Yuchen Lou; Daniel McKenzie; Wotao Yin
Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes. (1%)Sara Sangalli; Ertunc Erdil; Andreas Hoetker; Olivio Donati; Ender Konukoglu
2021-02-20
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning.Ren Wang; Kaidi Xu; Sijia Liu; Pin-Yu Chen; Tsui-Wei Weng; Chuang Gan; Meng Wang
Measuring $\ell_\infty$ Attacks by the $\ell_2$ Norm.Sizhe Chen; Qinghua Tao; Zhixing Ye; Xiaolin Huang
2021-02-19
A PAC-Bayes Analysis of Adversarial Robustness.Guillaume Vidot; Paul Viallard; Amaury Habrard; Emilie Morvant
Effective and Efficient Vote Attack on Capsule Networks.Jindong Gu; Baoyuan Wu; Volker Tresp
2021-02-18
Random Projections for Improved Adversarial Robustness.Ginevra Carbone; Guido Sanguinetti; Luca Bortolussi
Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks.Matthew Ciolino; Josh Kalin; David Noever
Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications.Leonard Berrada; Sumanth Dathathri; Krishnamurthy Dvijotham; Robert Stanforth; Rudy Bunel; Jonathan Uesato; Sven Gowal; M. Pawan Kumar
Center Smoothing: Provable Robustness for Functions with Metric-Space Outputs.Aounon Kumar; Tom Goldstein
2021-02-17
Improving Hierarchical Adversarial Robustness of Deep Neural Networks.Avery Ma; Aladin Virmaux; Kevin Scaman; Juwei Lu
Consistent Non-Parametric Methods for Maximizing Robustness.Robi Bhattacharjee; Kamalika Chaudhuri
Bridging the Gap Between Adversarial Robustness and Optimization Bias.Fartash Faghri; Sven Gowal; Cristina Vasconcelos; David J. Fleet; Fabian Pedregosa; Nicolas Le Roux
Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids.Jiangnan Li; Yingyuan Yang; Jinyuan Stella Sun; Kevin Tomsovic; Hairong Qi
2021-02-16
Globally-Robust Neural Networks.Klas Leino; Zifan Wang; Matt Fredrikson
A Law of Robustness for Weight-bounded Neural Networks.Hisham Husain; Borja Balle
Just Noticeable Difference for Machine Perception and Generation of Regularized Adversarial Images with Minimal Perturbation.Adil Kaan Akan; Emre Akbas; Fatos T. Yarman Vural
2021-02-15
Data Profiling for Adversarial Training: On the Ruin of Problematic Data.Chengyu Dong; Liyuan Liu; Jingbo Shang
Certified Robustness to Programmable Transformations in LSTMs.Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Generating Structured Adversarial Attacks Using Frank-Wolfe Method.Ehsan Kazemi; Thomas Kerdreux; Liquang Wang
Universal Adversarial Examples and Perturbations for Quantum Classifiers.Weiyuan Gong; Dong-Ling Deng
Low Curvature Activations Reduce Overfitting in Adversarial Training.Vasu Singla; Sahil Singla; David Jacobs; Soheil Feizi
And/or trade-off in artificial neurons: impact on adversarial robustness.Alessandro Fontana
Certifiably Robust Variational Autoencoders.Ben Barrett; Alexander Camuto; Matthew Willetts; Tom Rainforth
2021-02-14
Guided Interpolation for Adversarial Training.Chen Chen; Jingfeng Zhang; Xilie Xu; Tianlei Hu; Gang Niu; Gang Chen; Masashi Sugiyama
Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS.Felix Olowononi; Danda B. Rawat; Chunmei Liu
Exploring Adversarial Robustness of Deep Metric Learning.Thomas Kobber Panum; Zi Wang; Pengyu Kan; Earlence Fernandes; Somesh Jha
Adversarial Attack on Network Embeddings via Supervised Network Poisoning.Viresh Gupta; Tanmoy Chakraborty
Perceptually Constrained Adversarial Attacks.Muhammad Zaid Hameed; Andras Gyorgy
CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification.Mingu Kang; Trung Quang Tran; Seungju Cho; Daeyoung Kim
Cross-modal Adversarial Reprogramming.Paarth Neekhara; Shehzeen Hussain; Jinglong Du; Shlomo Dubnov; Farinaz Koushanfar; Julian McAuley
2021-02-13
Mixed Nash Equilibria in the Adversarial Examples Game.Laurent Meunier; Meyer Scetbon; Rafael Pinot; Jamal Atif; Yann Chevaleyre
Adversarial defense for automatic speaker verification by cascaded self-supervised learning models.Haibin Wu; Xu Li; Andy T. Liu; Zhiyong Wu; Helen Meng; Hung-yi Lee
2021-02-12
UAVs Path Deviation Attacks: Survey and Research Challenges.Francesco Betti Sorbelli; Mauro Conti; Cristina M. Pinotti; Giulio Rigoni
Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective.Chaoning Zhang; Philipp Benz; Adil Karjauv; In So Kweon
Universal Adversarial Perturbations for Malware.Raphael Labaca-Castro; Luis Muñoz-González; Feargus Pendlebury; Gabi Dreo Rodosek; Fabio Pierazzi; Lorenzo Cavallaro
On the Paradox of Certified Training. (13%)Nikola Jovanović; Mislav Balunović; Maximilian Baader; Martin Vechev
2021-02-11
Adversarially robust deepfake media detection using fused convolutional neural network predictions.Sohail Ahmed Khan; Alessandro Artusi; Hang Dai
Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy.Dylan Slack; Nathalie Rauschmayr; Krishnaram Kenthapadi
RobOT: Robustness-Oriented Testing for Deep Learning Systems.Jingyi Wang; Jialuo Chen; Youcheng Sun; Xingjun Ma; Dongxia Wang; Jun Sun; Peng Cheng
2021-02-10
Meta Federated Learning.Omid Aramoon; Pin-Yu Chen; Gang Qu; Yuan Tian
Adversarial Robustness: What fools you makes you stronger.Grzegorz Głuch; Rüdiger Urbanke
CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection.Hanshu Yan; Jingfeng Zhang; Gang Niu; Jiashi Feng; Vincent Y. F. Tan; Masashi Sugiyama
Dompteur: Taming Audio Adversarial Examples.Thorsten Eisenhofer; Lea Schönherr; Joel Frank; Lars Speckemeier; Dorothea Kolossa; Thorsten Holz
Enhancing Real-World Adversarial Patches through 3D Modeling of Complex Target Scenes.Yael Mathov; Lior Rokach; Yuval Elovici
Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons.Bohang Zhang; Tianle Cai; Zhou Lu; Di He; Liwei Wang
RoBIC: A benchmark suite for assessing classifiers robustness.Thibault Maho; Benoît Bonnet; Teddy Furon; Erwan Le Merrer
Bayesian Inference with Certifiable Adversarial Robustness.Matthew Wicker; Luca Laurenti; Andrea Patane; Zhoutong Chen; Zheng Zhang; Marta Kwiatkowska
2021-02-09
Target Training Does Adversarial Training Without Adversarial Samples.Blerta Lindqvist
Security and Privacy for Artificial Intelligence: Opportunities and Challenges.Ayodeji Oseni; Nour Moustafa; Helge Janicke; Peng Liu; Zahir Tari; Athanasios Vasilakos
"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models.Sahar Abdelnabi; Mario Fritz
Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers.Jacob M. Springer; Melanie Mitchell; Garrett T. Kenyon
Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis.Fengting Li; Xuankai Liu; Xiaoli Zhang; Qi Li; Kun Sun; Kang Li
Making Paper Reviewing Robust to Bid Manipulation Attacks.Ruihan Wu; Chuan Guo; Felix Wu; Rahul Kidambi; Laurens van der Maaten; Kilian Q. Weinberger
Towards Bridging the gap between Empirical and Certified Robustness against Adversarial Examples.Jay Nandy; Sudipan Saha; Wynne Hsu; Mong Li Lee; Xiao Xiang Zhu
2021-02-08
Efficient Certified Defenses Against Patch Attacks on Image Classifiers.Jan Hendrik Metzen; Maksym Yatsura
A Real-time Defense against Website Fingerprinting Attacks.Shawn Shan; Arjun Nitin Bhagoji; Haitao Zheng; Ben Y. Zhao
Benford's law: what does it say on adversarial images?João G. Zago; Fabio L. Baldissera; Eric A. Antonelo; Rodrigo T. Saad
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.Omer Faruk Tuna; Ferhat Ozgur Catak; M. Taner Eskil
2021-02-07
Adversarial example generation with AdaBelief Optimizer and Crop Invariance.Bo Yang; Hengwei Zhang; Yuchen Zhang; Kaiyong Xu; Jindong Wang
Adversarial Imaging Pipelines.Buu Phan; Fahim Mannan; Felix Heide
2021-02-06
SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation.Wuxinlin Cheng; Chenhui Deng; Zhiqiang Zhao; Yaohui Cai; Zhiru Zhang; Zhuo Feng
2021-02-05
Corner Case Generation and Analysis for Safety Assessment of Autonomous Vehicles.Haowei Sun; Shuo Feng; Xintao Yan; Henry X. Liu
Model Agnostic Answer Reranking System for Adversarial Question Answering.Sagnik Majumder; Chinmoy Samant; Greg Durrett
Robust Single-step Adversarial Training with Regularizer.Lehui Xie; Yaopeng Wang; Jia-Li Yin; Ximeng Liu
Understanding the Interaction of Adversarial Training with Noisy Labels.Jianing Zhu; Jingfeng Zhang; Bo Han; Tongliang Liu; Gang Niu; Hongxia Yang; Mohan Kankanhalli; Masashi Sugiyama
Optimal Transport as a Defense Against Adversarial Attacks.Quentin Bouniot; Romaric Audigier; Angélique Loesch
2021-02-04
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks.Chong Xiang; Prateek Mittal
Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression.Masanori Yamada; Sekitoshi Kanai; Tomoharu Iwata; Tomokatsu Takahashi; Yuki Yamanaka; Hiroshi Takahashi; Atsutoshi Kumagai
Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images.Jiasong Chen; Linchen Qian; Timur Urakov; Weiyong Gu; Liang Liang
PredCoin: Defense against Query-based Hard-label Attack.Junfeng Guo; Yaswanth Yadlapalli; Lothar Thiele; Ang Li; Cong Liu
Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review.Dongrui Wu; Weili Fang; Yi Zhang; Liuqing Yang; Hanbin Luo; Lieyun Ding; Xiaodong Xu; Xiang Yu
Audio Adversarial Examples: Attacks Using Vocal Masks.Lynnette Ng; Kai Yuan Tay; Wei Han Chua; Lucerne Loke; Danqi Ye; Melissa Chua
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models.Yugeng Liu; Rui Wen; Xinlei He; Ahmed Salem; Zhikun Zhang; Michael Backes; Emiliano De Cristofaro; Mario Fritz; Yang Zhang
2021-02-03
Adversarially Robust Learning with Unknown Perturbation Sets.Omar Montasser; Steve Hanneke; Nathan Srebro
IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks.Yixiang Wang; Jiqiang Liu; Xiaolin Chang; Jelena Mišić; Vojislav B. Mišić
2021-02-02
On Robustness of Neural Semantic Parsers.Shuo Huang; Zhuang Li; Lizhen Qu; Lei Pan
Towards Robust Neural Networks via Close-loop Control.Zhuotong Chen; Qianxiao Li; Zheng Zhang
Recent Advances in Adversarial Training for Adversarial Robustness.Tao Bai; Jinqi Luo; Jun Zhao; Bihan Wen; Qian Wang
Probabilistic Trust Intervals for Out of Distribution Detection. (68%)Gagandeep Singh; Deepak Mishra
2021-02-01
Fast Training of Provably Robust Neural Networks by SingleProp.Akhilan Boopathy; Tsui-Wei Weng; Sijia Liu; Pin-Yu Chen; Gaoyuan Zhang; Luca Daniel
Towards Speeding up Adversarial Training in Latent Spaces.Yaguan Qian; Qiqi Shao; Tengteng Yao; Bin Wang; Shaoning Zeng; Zhaoquan Gu; Wassim Swaileh
Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems.Alireza Bahramali; Milad Nasr; Amir Houmansadr; Dennis Goeckel; Don Towsley
2021-01-31
Deep Deterministic Information Bottleneck with Matrix-based Entropy Functional.Xi Yu; Shujian Yu; Jose C. Principe
Towards Imperceptible Query-limited Adversarial Attacks with Perceptual Feature Fidelity Loss.Pengrui Quan; Ruiming Guo; Mani Srivastava
Admix: Enhancing the Transferability of Adversarial Attacks.Xiaosen Wang; Xuanran He; Jingdong Wang; Kun He
2021-01-30
Cortical Features for Defense Against Adversarial Audio Attacks.Ilya Kavalerov; Ruijie Zheng; Wojciech Czaja; Rama Chellappa
2021-01-29
You Only Query Once: Effective Black Box Adversarial Attacks with Minimal Repeated Queries.Devin Willmott; Anit Kumar Sahu; Fatemeh Sheikholeslami; Filipe Condessa; Zico Kolter
2021-01-28
Adversarial Machine Learning Attacks on Condition-Based Maintenance Capabilities.Hamidreza Habibollahi Najaf Abadi
Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network.B. R. Manoj; Meysam Sadeghi; Erik G. Larsson
Increasing the Confidence of Deep Neural Networks by Coverage Analysis.Giulio Rossolini; Alessandro Biondi; Giorgio Carlo Buttazzo
Adversarial Learning with Cost-Sensitive Classes.Haojing Shen; Sihong Chen; Ran Wang; Xizhao Wang
2021-01-27
Robust Android Malware Detection System against Adversarial Attacks using Q-Learning.Hemant Rathore; Sanjay K. Sahay; Piyush Nikam; Mohit Sewak
Adversaries in Online Learning Revisited: with applications in Robust Optimization and Adversarial training.Sebastian Pokutta; Huan Xu
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling.Chris Emmery; Ákos Kádár; Grzegorz Chrupała
Meta Adversarial Training against Universal Patches.Jan Hendrik Metzen; Nicole Finnie; Robin Hutmacher
Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting.Federico Nesti; Alessandro Biondi; Giorgio Buttazzo
Improving Neural Network Robustness through Neighborhood Preserving Layers.Bingyuan Liu; Christopher Malon; Lingzhou Xue; Erik Kruus
2021-01-26
Blind Image Denoising and Inpainting Using Robust Hadamard Autoencoders.Rasika Karkare; Randy Paffenroth; Gunjan Mahindre
Property Inference From Poisoning.Melissa Chase; Esha Ghosh; Saeed Mahloujifar
Adversarial Vulnerability of Active Transfer Learning.Nicolas M. Müller; Konstantin Böttinger
SkeletonVis: Interactive Visualization for Understanding Adversarial Attacks on Human Action Recognition Models.Haekyu Park; Zijie J. Wang; Nilaksh Das; Anindya S. Paul; Pruthvi Perumalla; Zhiyan Zhou; Duen Horng Chau
The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs.Xinwei Zhao; Matthew C. Stamm
Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers.Xinwei Zhao; Matthew C. Stamm
Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems.Utku Ozbulak; Baptist Vandersmissen; Azarakhsh Jalalvand; Ivo Couckuyt; Arnout Van Messem; Wesley De Neve
Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method.Satya M. Muddamsetty; Mohammad N. S. Jahromi; Andreea E. Ciontos; Laura M. Fenoy; Thomas B. Moeslund
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models.Mazen Abdelfattah; Kaiwen Yuan; Z. Jane Wang; Rabab Ward
2021-01-25
Diverse Adversaries for Mitigating Bias in Training.Xudong Han; Timothy Baldwin; Trevor Cohn
They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors.Sebastian Köhler; Giulio Lovisotto; Simon Birnbach; Richard Baker; Ivan Martinovic
Generalizing Adversarial Examples by AdaBelief Optimizer.Yixiang Wang; Jiqiang Liu; Xiaolin Chang
Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning.Renjue Li; Pengfei Yang; Cheng-Chao Huang; Youcheng Sun; Bai Xue; Lijun Zhang
Few-Shot Website Fingerprinting Attack.Mantun Chen; Yongjun Wang; Zhiquan Qin; Xiatian Zhu
Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning.Anh Bui; Trung Le; He Zhao; Paul Montague; Seyit Camtepe; Dinh Phung
2021-01-23
A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network.Xinwei Zhao; Chen Chen; Matthew C. Stamm
Error Diffusion Halftoning Against Adversarial Examples.Shao-Yuan Lo; Vishal M. Patel
A Comprehensive Evaluation Framework for Deep Model Robustness.Jun Guo; Wei Bao; Jiakai Wang; Yuqing Ma; Xinghai Gao; Gang Xiao; Aishan Liu; Jian Dong; Xianglong Liu; Wenjun Wu
2021-01-22
Online Adversarial Purification based on Self-Supervision.Changhao Shi; Chester Holtz; Gal Mishne
Towards Optimal Branching of Linear and Semidefinite Relaxations for Neural Network Robustness Certification.Brendon G. Anderson; Ziye Ma; Jingqi Li; Somayeh Sojoudi
Generating Black-Box Adversarial Examples in Sparse Domain.Hadi Zanddizari; Behnam Zeinali; J. Morris Chang
Adaptive Neighbourhoods for the Discovery of Adversarial Examples.Jay Morgan; Adeline Paiement; Arno Pauly; Monika Seisenberger
2021-01-21
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary.Huan Zhang; Hongge Chen; Duane Boning; Cho-Jui Hsieh
Adv-OLM: Generating Textual Adversaries via OLM.Vijit Malik; Ashwani Bhat; Ashutosh Modi
Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning.Lang Huang; Chao Zhang; Hongyang Zhang
A Person Re-identification Data Augmentation Method with Adversarial Defense Effect.Yunpeng Gong; Zhiyong Zeng; Liwen Chen; Yifan Luo; Bin Weng; Feng Ye
Adversarial Attacks and Defenses for Speaker Identification Systems.Sonal Joshi; Jesús Villalba; Piotr Żelasko; Laureano Moro-Velázquez; Najim Dehak
A general multi-modal data learning method for Person Re-identification. (78%)Yunpeng Gong
2021-01-20
Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data.Francesco Cartella; Orlando Anunciacao; Yuki Funabiki; Daisuke Yamaguchi; Toru Akishita; Olivier Elshocht
Invariance, encodings, and generalization: learning identity effects with neural networks.S. Brugiapaglia; M. Liu; P. Tupper
Fooling thermal infrared pedestrian detectors in real world using small bulbs.Xiaopei Zhu; Xiao Li; Jianmin Li; Zheyao Wang; Xiaolin Hu
2021-01-19
LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition.Valeriia Cherepanova; Micah Goldblum; Harrison Foley; Shiyuan Duan; John Dickerson; Gavin Taylor; Tom Goldstein
A Search-Based Testing Framework for Deep Neural Networks of Source Code Embedding.Maryam Vahdat Pour; Zhuo Li; Lei Ma; Hadi Hemmati
PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack.Jie Wang; Zhaoxia Yin; Jin Tang; Jing Jiang; Bin Luo
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization.Jie Wang; Zhaoxia Yin; Jing Jiang; Yang Du
2021-01-18
What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space.Shihao Zhao; Xingjun Ma; Yisen Wang; James Bailey; Bo Li; Yu-Gang Jiang
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks. (1%)Zhengyan Zhang; Guangxuan Xiao; Yongwei Li; Tian Lv; Fanchao Qi; Zhiyuan Liu; Yasheng Wang; Xin Jiang; Maosong Sun
2021-01-17
Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions.Nodens Koren; Qiuhong Ke; Yisen Wang; James Bailey; Xingjun Ma
GraphAttacker: A General Multi-Task GraphAttack Framework.Jinyin Chen; Dunjie Zhang; Zhaoyan Ming; Kejie Huang; Wenrong Jiang; Chen Cui
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving.James Tu; Huichen Li; Xinchen Yan; Mengye Ren; Yun Chen; Ming Liang; Eilyan Bitar; Ersin Yumer; Raquel Urtasun
2021-01-16
Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks.Jia Liu; Yaochu Jin
Adversarial Attacks On Multi-Agent Communication.James Tu; Tsunhsuan Wang; Jingkang Wang; Sivabalan Manivasagam; Mengye Ren; Raquel Urtasun
2021-01-15
Fundamental Tradeoffs in Distributionally Adversarial Training.Mohammad Mehrabi; Adel Javanmard; Ryan A. Rossi; Anup Rao; Tung Mai
Black-box Adversarial Attacks in Autonomous Vehicle Technology.K Naveen Kumar; C Vishnu; Reshmi Mitra; C Krishna Mohan
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds.Bogdan Georgiev; Lukas Franken; Mayukh Mukherjee
Mining Data Impressions from Deep Models as Substitute for the Unavailable Training Data.Gaurav Kumar Nayak; Konda Reddy Mopuri; Saksham Jain; Anirban Chakraborty
2021-01-14
Context-Aware Image Denoising with Auto-Threshold Canny Edge Detection to Suppress Adversarial Perturbation.Li-Yun Wang; Yeganeh Jalalpour; Wu-chi Feng
Robusta: Robust AutoML for Feature Selection via Reinforcement Learning.Xiaoyang Wang; Bo Li; Yibo Zhang; Bhavya Kailkhura; Klara Nahrstedt
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks.Yige Li; Xixiang Lyu; Nodens Koren; Lingjuan Lyu; Bo Li; Xingjun Ma
2021-01-13
Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series.Pradeep Rathore; Arghya Basak; Sri Harsha Nistala; Venkataramana Runkana
Image Steganography based on Iteratively Adversarial Samples of A Synchronized-directions Sub-image.Xinghong Qin; Shunquan Tan; Bin Li; Weixuan Tang; Jiwu Huang
2021-01-12
Robustness Gym: Unifying the NLP Evaluation Landscape.Karan Goel; Nazneen Rajani; Jesse Vig; Samson Tan; Jason Wu; Stephan Zheng; Caiming Xiong; Mohit Bansal; Christopher Ré
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps.Yujin Huang; Han Hu; Chunyang Chen
Random Transformation of Image Brightness for Adversarial Attack.Bo Yang; Kaiyong Xu; Hengjun Wang; Hengwei Zhang
On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks.Junyoung Byun; Hyojun Go; Changick Kim
2021-01-11
The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing.Andreas Bär; Jonas Löhdefink; Nikhil Kapoor; Serin J. Varghese; Fabian Hüger; Peter Schlicht; Tim Fingscheidt
2021-01-10
Adversarially Robust and Explainable Model Compression with On-Device Personalization for Text Classification.Yao Qiang; Supriya Tumkur Suresh Kumar; Marco Brocanelli; Dongxiao Zhu
2021-01-08
Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks.Marissa Dotter; Sherry Xie; Keith Manville; Josh Harguess; Colin Busho; Mikel Rodriguez
DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning.Olakunle Ibitoye; M. Omair Shafiq; Ashraf Matrawy
Exploring Adversarial Fake Images on Face Manifold.Dongze Li; Wei Wang; Hongxing Fan; Jing Dong
2021-01-07
The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks.Arno Blaas; Stephen J. Roberts
Robust Text CAPTCHAs Using Adversarial Examples.Rulin Shao; Zhouxing Shi; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh
2021-01-06
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients.Alessandro Cappelli; Ruben Ohana; Julien Launay; Laurent Meunier; Iacopo Poli; Florent Krzakala
Understanding the Error in Evaluating Adversarial Robustness.Pengfei Xia; Ziqiang Li; Hongjing Niu; Bin Li
2021-01-05
Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks.Rachel Sterneck; Abhishek Moitra; Priyadarshini Panda
2021-01-04
Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks.Yanghao Zhang; Fu Wang; Wenjie Ruan
Local Competition and Stochasticity for Adversarial Robustness in Deep Learning.Konstantinos P. Panousis; Sotirios Chatzis; Antonios Alexos; Sergios Theodoridis
Local Black-box Adversarial Attacks: A Query Efficient Approach.Tao Xiang; Hangcheng Liu; Shangwei Guo; Tianwei Zhang; Xiaofeng Liao
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead.Muhammad Shafique; Mahum Naseer; Theocharis Theocharides; Christos Kyrkou; Onur Mutlu; Lois Orosa; Jungwook Choi
2021-01-02
Improving DGA-Based Malicious Domain Classifiers for Malware Defense with Adversarial Machine Learning.Ibrahim Yilmaz; Ambareen Siraj; Denis Ulybyshev
2020-12-31
Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning.Chenglei Si; Zhengyan Zhang; Fanchao Qi; Zhiyuan Liu; Yasheng Wang; Qun Liu; Maosong Sun
Patch-wise++ Perturbation for Adversarial Targeted Attacks.Lianli Gao; Qilong Zhang; Jingkuan Song; Heng Tao Shen
2020-12-30
Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers.Krishna Kanth Nakka; Mathieu Salzmann
Beating Attackers At Their Own Games: Adversarial Example Detection Using Adversarial Gradient Directions.Yuhang Wu; Sunpreet S. Arora; Yanhong Wu; Hao Yang
2020-12-29
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization.Renya Daimo; Satoshi Ono; Takahiro Suzuki
Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces.Linyang Li; Yunfan Shao; Demin Song; Xipeng Qiu; Xuanjing Huang
Improving Adversarial Robustness in Weight-quantized Neural Networks.Chang Song; Elias Fallon; Hai Li
With False Friends Like These, Who Can Have Self-Knowledge?Lue Tao; Songcan Chen
Generating Natural Language Attacks in a Hard Label Black Box Setting.Rishabh Maheshwary; Saket Maheshwary; Vikram Pudi
2020-12-28
Enhanced Regularizers for Attributional Robustness.Anindya Sarkar; Anirban Sarkar; Vineeth N Balasubramanian
Analysis of Dominant Classes in Universal Adversarial Perturbations.Jon Vadillo; Roberto Santana; Jose A. Lozano
2020-12-27
Person Re-identification with Adversarial Triplet Embedding.Xinglu Wang
My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism.Swapnil Parekh; Yaman Kumar Singla; Changyou Chen; Junyi Jessy Li; Rajiv Ratn Shah
2020-12-26
Sparse Adversarial Attack to Object Detection.Jiayu Bao
Assessment of the Relative Importance of different hyper-parameters of LSTM for an IDS.Mohit Sewak; Sanjay K. Sahay; Hemant Rathore
2020-12-25
Robustness, Privacy, and Generalization of Adversarial Training.Fengxiang He; Shaopeng Fu; Bohan Wang; Dacheng Tao
A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning.Ahmadreza Jeddi; Mohammad Javad Shafiee; Alexander Wong
2020-12-24
A Context Aware Approach for Generating Natural Language Attacks.Rishabh Maheshwary; Saket Maheshwary; Vikram Pudi
Exploring Adversarial Examples via Invertible Neural Networks.Ruqi Bai; Saurabh Bagchi; David I. Inouye
Improving the Certified Robustness of Neural Networks via Consistency Regularization.Mengting Xu; Tao Zhang; Zhongnian Li; Daoqiang Zhang
Adversarial Momentum-Contrastive Pre-Training.Cong Xu; Min Yang
Learning Robust Representation for Clustering through Locality Preserving Variational Discriminative Network.Ruixuan Luo; Wei Li; Zhiyuan Zhang; Ruihan Bao; Keiko Harimoto; Xu Sun
2020-12-23
The Translucent Patch: A Physical and Universal Attack on Object Detectors.Alon Zolfi; Moshe Kravchik; Yuval Elovici; Asaf Shabtai
Gradient-Free Adversarial Attacks for Bayesian Neural Networks.Matthew Yuan; Matthew Wicker; Luca Laurenti
SCOPE CPS: Secure Compiling of PLCs in Cyber-Physical Systems.Eyasu Getahun Chekole; Martin Ochoa; Sudipta Chattopadhyay
Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems.Moshe Kravchik; Battista Biggio; Asaf Shabtai
2020-12-22
Learning to Initialize Gradient Descent Using Gradient Descent.Kartik Ahuja; Amit Dhurandhar; Kush R. Varshney
Unadversarial Examples: Designing Objects for Robust Vision.Hadi Salman; Andrew Ilyas; Logan Engstrom; Sai Vemprala; Aleksander Madry; Ashish Kapoor
Multi-shot NAS for Discovering Adversarially Robust Convolutional Neural Architectures at Targeted Capacities.Xuefei Ning; Junbo Zhao; Wenshuo Li; Tianchen Zhao; Huazhong Yang; Yu Wang
On Frank-Wolfe Optimization for Adversarial Robustness and Interpretability.Theodoros Tsiligkaridis; Jay Roberts
2020-12-21
Genetic Adversarial Training of Decision Trees.Francesco Ranzato; Marco Zanella
Incremental Verification of Fixed-Point Implementations of Neural Networks.Luiz Sena; Erickson Alves; Iury Bessa; Eddie Filho; Lucas Cordeiro
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring.Chenchen Zhao; Hao Li
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks.Chenchen Zhao; Hao Li
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification.Siyuan Cheng; Yingqi Liu; Shiqing Ma; Xiangyu Zhang
Self-Progressing Robust Training.Minhao Cheng; Pin-Yu Chen; Sijia Liu; Shiyu Chang; Cho-Jui Hsieh; Payel Das
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition.Shoma Ishida; Satoshi Ono
Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines.Aidan Kehoe; Peter Wittek; Yanbo Xue; Alejandro Pozas-Kerstjens
On Success and Simplicity: A Second Look at Transferable Targeted Attacks.Zhengyu Zhao; Zhuoran Liu; Martha Larson
Learning from What We Know: How to Perform Vulnerability Prediction using Noisy Historical Data. (1%)Aayush Garg; Renzo Degiovanni; Matthieu Jimenez; Maxime Cordy; Mike Papadakis; Yves Le Traon
2020-12-20
Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks.Jayendra Kantipudi; Shiv Ram Dubey; Soumendu Chakraborty
2020-12-19
Sample Complexity of Adversarially Robust Linear Classification on Separated Data.Robi Bhattacharjee; Somesh Jha; Kamalika Chaudhuri
2020-12-18
Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks.Kieran Browne; Ben Swift
ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries.Jinyin Chen; Zhen Wang; Haibin Zheng; Jun Xiao; Zhaoyan Ming
AdvExpander: Generating Natural Language Adversarial Examples by Expanding Text.Zhihong Shao; Zitao Liu; Jiyong Zhang; Zhongqin Wu; Minlie Huang
Adversarially Robust Estimate and Risk Analysis in Linear Regression.Yue Xing; Ruizhi Zhang; Guang Cheng
RAILS: A Robust Adversarial Immune-inspired Learning System.Ren Wang; Tianqi Chen; Stephen Lindsly; Alnawaz Rehemtulla; Alfred Hero; Indika Rajapakse
Efficient Training of Robust Decision Trees Against Adversarial Examples.Daniël Vos; Sicco Verwer
On the human-recognizability phenomenon of adversarially trained deep image classifiers.Jonathan Helland; Nathan VanHoudnos
2020-12-17
Characterizing the Evasion Attackability of Multi-label Classifiers.Zhuo Yang; Yufei Han; Xiangliang Zhang
A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks.Qingsong Yao; Zecheng He; Yi Lin; Kai Ma; Yefeng Zheng; S. Kevin Zhou
2020-12-16
On the Limitations of Denoising Strategies as Adversarial Defenses.Zhonghan Niu; Zhaoxi Chen; Linyi Li; Yubin Yang; Bo Li; Jinfeng Yi
2020-12-15
FoggySight: A Scheme for Facial Lookup Privacy.Ivan Evtimov; Pascal Sturmfels; Tadayoshi Kohno
FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems.Lu Chen; Jiao Sun; Wei Xu
Amata: An Annealing Mechanism for Adversarial Training Acceleration.Nanyang Ye; Qianxiao Li; Xiao-Yun Zhou; Zhanxing Zhu
2020-12-14
Disentangled Information Bottleneck.Ziqi Pan; Li Niu; Jianfu Zhang; Liqing Zhang
Adaptive Verifiable Training Using Pairwise Class Similarity.Shiqi Wang; Kevin Eykholt; Taesung Lee; Jiyong Jang; Ian Molloy
Robustness Threats of Differential Privacy.Nurislam Tursynbek; Aleksandr Petiushko; Ivan Oseledets
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios.Hassan Ali; Surya Nepal; Salil S. Kanhere; Sanjay Jha
Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints.Xin Li; Xiangrui Li; Deng Pan; Dongxiao Zhu
Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model.Mohammadreza Ebrahimi; Ning Zhang; James Hu; Muhammad Taqi Raza; Hsinchun Chen
Contrastive Learning with Adversarial Perturbations for Conditional Text Generation.Seanie Lee; Dong Bok Lee; Sung Ju Hwang
2020-12-13
Achieving Adversarial Robustness Requires An Active Teacher.Chao Ma; Lexing Ying
2020-12-12
Query-free Black-box Adversarial Attacks on Graphs.Jiarong Xu; Yizhou Sun; Xin Jiang; Yanhao Wang; Yang Yang; Chunping Wang; Jiangang Lu
2020-12-11
Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning.Omer Faruk Tuna; Ferhat Ozgur Catak; M. Taner Eskil
Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis.Nathan Drenkow; Neil Fendley; Philippe Burlina
Analyzing and Improving Adversarial Training for Generative Modeling. (86%)Xuwang Yin; Shiying Li; Gustavo K. Rohde
2020-12-10
GNNUnlock: Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking.Lilas Alrahis; Satwik Patnaik; Faiq Khalid; Muhammad Abdullah Hanif; Hani Saleh; Muhammad Shafique; Ozgur Sinanoglu
Next Wave Artificial Intelligence: Robust, Explainable, Adaptable, Ethical, and Accountable.Odest Chadwicke Jenkins; Daniel Lopresti; Melanie Mitchell
DSRNA: Differentiable Search of Robust Neural Architectures.Ramtin Hosseini; Xingyi Yang; Pengtao Xie
I-GCN: Robust Graph Convolutional Network via Influence Mechanism.Haoxi Zhan; Xiaobing Pei
An Empirical Review of Adversarial Defenses.Ayush Goel
Robustness and Transferability of Universal Attacks on Compressed Models.Alberto G. Matachana; Kenneth T. Co; Luis Muñoz-González; David Martinez; Emil C. Lupu
Geometric Adversarial Attacks and Defenses on 3D Point Clouds.Itai Lang; Uriel Kotlicki; Shai Avidan
SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers.Bingyao Huang; Haibin Ling
2020-12-09
Generating Out of Distribution Adversarial Attack using Latent Space Poisoning.Ujjwal Upadhyay; Prerana Mukherjee
Detection of Adversarial Supports in Few-shot Classifiers Using Self-Similarity and Filtering.Yi Xiang Marcus Tan; Penny Chong; Jiamei Sun; Ngai-Man Cheung; Yuval Elovici; Alexander Binder
Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters.Rida El-Allami; Alberto Marchisio; Muhammad Shafique; Ihsen Alouani
Composite Adversarial Attacks.Xiaofeng Mao; Yuefeng Chen; Shuhui Wang; Hang Su; Yuan He; Hui Xue
2020-12-08
Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective.Jingwei Sun; Ang Li; Binghui Wang; Huanrui Yang; Hai Li; Yiran Chen
On 1/n neural representation and robustness.Josue Nassar; Piotr Aleksander Sokol; SueYeon Chung; Kenneth D. Harris; Il Memming Park
Locally optimal detection of stochastic targeted universal adversarial perturbations.Amish Goel; Pierre Moulin
A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models.Mohammed Hassanin; Nour Moustafa; Murat Tahtali
Using Feature Alignment can Improve Clean Average Precision and Adversarial Robustness in Object Detection.Weipeng Xu; Hongcheng Huang
EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation.Qi Zhou; Haipeng Chen; Yitao Zheng; Zhen Wang
Overcomplete Representations Against Adversarial Videos.Shao-Yuan Lo; Jeya Maria Jose Valanarasu; Vishal M. Patel
Mitigating the Impact of Adversarial Attacks in Very Deep Networks.Mohammed Hassanin; Ibrahim Radwan; Nour Moustafa; Murat Tahtali; Neeraj Kumar
Reinforcement Based Learning on Classification Task Could Yield Better Generalization and Adversarial Accuracy.Shashi Kant Gupta
Poisoning Semi-supervised Federated Learning via Unlabeled Data: Attacks and Defenses. (95%)Yi Liu; Xingliang Yuan; Ruihui Zhao; Cong Wang; Dusit Niyato; Yefeng Zheng
Data Dependent Randomized Smoothing. (1%)Motasem Alfarra; Adel Bibi; Philip H. S. Torr; Bernard Ghanem
2020-12-07
A Singular Value Perspective on Model Robustness.Malhar Jere; Maghav Kumar; Farinaz Koushanfar
Backpropagating Linearly Improves Transferability of Adversarial Examples.Yiwen Guo; Qizhang Li; Hao Chen
Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection.Byunggill Joe; Jihun Hamm; Sung Ju Hwang; Sooel Son; Insik Shin
Are DNNs fooled by extremely unrecognizable images?Soichiro Kumano; Hiroshi Kera; Toshihiko Yamasaki
Reprogramming Language Models for Molecular Representation Learning.Ria Vinod; Pin-Yu Chen; Payel Das
2020-12-06
Black-box Model Inversion Attribute Inference Attacks on Classification Models.Shagufta Mehnaz; Ninghui Li; Elisa Bertino
PAC-Learning for Strategic Classification.Ravi Sundaram; Anil Vullikanti; Haifeng Xu; Fan Yao
2020-12-05
Evaluating adversarial robustness in simulated cerebellum.Liu Yuezhang; Bo Li; Qifeng Chen
2020-12-04
Advocating for Multiple Defense Strategies against Adversarial Examples.Alexandre Araujo; Laurent Meunier; Rafael Pinot; Benjamin Negrevergne
Practical No-box Adversarial Attacks against DNNs.Qizhang Li; Yiwen Guo; Hao Chen
Towards Natural Robustness Against Adversarial Examples.Haoyu Chu; Shikui Wei; Yao Zhao
Unsupervised Adversarially-Robust Representation Learning on Graphs.Jiarong Xu; Yang Yang; Junru Chen; Chunping Wang; Xin Jiang; Jiangang Lu; Yizhou Sun
Kernel-convoluted Deep Neural Networks with Data Augmentation.Minjin Kim; Young-geun Kim; Dongha Kim; Yongdai Kim; Myunghee Cho Paik
2020-12-03
Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning.Kendra Albert; Maggie Delano; Jonathon Penney; Afsaneh Rigot; Ram Shankar Siva Kumar
FAT: Federated Adversarial Training.Giulio Zizzo; Ambrish Rawat; Mathieu Sinn; Beat Buesser
An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks.Giuseppe Ughi; Vinayak Abrol; Jared Tanner
Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers.Brian Kim; Yalin E. Sagduyu; Tugba Erpek; Kemal Davaslioglu; Sennur Ulukus
Attribute-Guided Adversarial Training for Robustness to Natural Perturbations.Tejas Gokhale; Rushil Anirudh; Bhavya Kailkhura; Jayaraman J. Thiagarajan; Chitta Baral; Yezhou Yang
2020-12-02
From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation.Nikhil Kapoor; Andreas Bär; Serin Varghese; Jan David Schneider; Fabian Hüger; Peter Schlicht; Tim Fingscheidt
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques.Han Qiu; Yi Zeng; Tianwei Zhang; Yong Jiang; Meikang Qiu
Towards Defending Multiple $\ell_p$-norm Bounded Adversarial Perturbations via Gated Batch Normalization.Aishan Liu; Shiyu Tang; Xinyun Chen; Lei Huang; Haotong Qin; Xianglong Liu; Dacheng Tao
Content-Adaptive Pixel Discretization to Improve Model Robustness.Ryan Feng; Wu-chi Feng; Atul Prakash
How Robust are Randomized Smoothing based Defenses to Data Poisoning?Akshay Mehra; Bhavya Kailkhura; Pin-Yu Chen; Jihun Hamm
2020-12-01
Adversarial Robustness Across Representation Spaces.Pranjal Awasthi; George Yu; Chun-Sung Ferng; Andrew Tomkins; Da-Cheng Juan
Robustness Out of the Box: Compositional Representations Naturally Defend Against Black-Box Patch Attacks.Christian Cosgrove; Adam Kortylewski; Chenglin Yang; Alan Yuille
Boosting Adversarial Attacks on Neural Networks with Better Optimizer.Heng Yin; Hengwei Zhang; Jindong Wang; Ruiyu Dou
One-Pixel Attack Deceives Computer-Assisted Diagnosis of Cancer.Joni Korpihalkola; Tuomo Sipola; Samir Puuska; Tero Kokkonen
Towards Imperceptible Adversarial Image Patches Based on Network Explanations.Yaguan Qian; Jiamin Wang; Bin Wang; Zhaoquan Gu; Xiang Ling; Chunming Wu
2020-11-30
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses.Gaurang Sriramanan; Sravanti Addepalli; Arya Baburaj; R. Venkatesh Babu
Just One Moment: Structural Vulnerability of Deep Action Recognition against One Frame Attack.Jaehui Hwang; Jun-Hyuk Kim; Jun-Ho Choi; Jong-Seok Lee
2020-11-29
Architectural Adversarial Robustness: The Case for Deep Pursuit.George Cazenavette; Calvin Murdock; Simon Lucey
A Targeted Universal Attack on Graph Convolutional Network.Jiazhu Dai; Weifeng Zhu; Xiangfeng Luo
SwitchX: Gmin-Gmax Switching for Energy-Efficient and Robust Implementation of Binary Neural Networks on ReRAM Xbars.Abhiroop Bhattacharjee; Priyadarshini Panda
2020-11-28
Cyberbiosecurity: DNA Injection Attack in Synthetic Biology.Dor Farbiash; Rami Puzis
Deterministic Certification to Adversarial Attacks via Bernstein Polynomial Approximation.Ching-Chia Kao; Jhe-Bang Ko; Chun-Shien Lu
FaceGuard: A Self-Supervised Defense Against Adversarial Face Images.Debayan Deb; Xiaoming Liu; Anil K. Jain
2020-11-27
3D Invisible Cloak.Mingfu Xue; Can He; Zhiyu Wu; Jian Wang; Zhe Liu; Weiqiang Liu
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers.Kaidi Xu; Huan Zhang; Shiqi Wang; Yihan Wang; Suman Jana; Xue Lin; Cho-Jui Hsieh
Voting based ensemble improves robustness of defensive models.Devvrit; Minhao Cheng; Cho-Jui Hsieh; Inderjit Dhillon
Generalized Adversarial Examples: Attacks and Defenses.Haojing Shen; Sihong Chen; Ran Wang; Xizhao Wang
Robust and Natural Physical Adversarial Examples for Object Detectors.Mingfu Xue; Chengxiang Yuan; Can He; Jian Wang; Weiqiang Liu
SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images.Mingfu Xue; Shichang Sun; Zhiyu Wu; Can He; Jian Wang; Weiqiang Liu
Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks.Mingfu Xue; Chengxiang Yuan; Can He; Zhiyu Wu; Yushu Zhang; Zhe Liu; Weiqiang Liu
2020-11-26
Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness.Yilun Jin; Lixin Fan; Kam Woh Ng; Ce Ju; Qiang Yang
Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks.Abhishek Moitra; Priyadarshini Panda
Robust Attacks on Deep Learning Face Recognition in the Physical World.Meng Shen; Hao Yu; Liehuang Zhu; Ke Xu; Qi Li; Xiaojiang Du
Regularization with Latent Space Virtual Adversarial Training.Genki Osada; Budrul Ahsan; Revoti Prasad Bora; Takashi Nishide
Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect.Athena Sayles; Ashish Hooda; Mohit Gupta; Rahul Chatterjee; Earlence Fernandes
2020-11-25
Adversarial Attack on Facial Recognition using Visible Light.Morgan Frearson; Kien Nguyen
Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption.Ivan Evtimov; Russel Howes; Brian Dolhansky; Hamed Firooz; Cristian Canton Ferrer
SurFree: a fast surrogate-free black-box attack.Thibault Maho; Teddy Furon; Erwan Le Merrer
Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization.Tianyu Han; Sven Nebelung; Federico Pedersoli; Markus Zimmermann; Maximilian Schulze-Hagen; Michael Ho; Christoph Haarburger; Fabian Kiessling; Christiane Kuhl; Volkmar Schulz; Daniel Truhn
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization. (80%)Sahil Suneja; Yunhui Zheng; Yufan Zhuang; Jim Laredo; Alessandro Morari
2020-11-24
Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning.Luiz F. O. Chamon; Santiago Paternain; Alejandro Ribeiro
Stochastic sparse adversarial attacks.Manon Césaire; Hatem Hajri; Sylvain Lamprier; Patrick Gallinari
On the Adversarial Robustness of 3D Point Cloud Classification.Jiachen Sun; Karl Koenig; Yulong Cao; Qi Alfred Chen; Z. Morley Mao
Towards Imperceptible Universal Attacks on Texture Recognition.Yingpeng Deng; Lina J. Karam
2020-11-23
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack.Rui Shu; Tianpei Xia; Laurie Williams; Tim Menzies
Augmented Lagrangian Adversarial Attacks.Jérôme Rony; Eric Granger; Marco Pedersoli; Ismail Ben Ayed
2020-11-22
Learnable Boundary Guided Adversarial Training.Jiequan Cui; Shu Liu; Liwei Wang; Jiaya Jia
Nudge Attacks on Point-Cloud DNNs.Yiren Zhao; Ilia Shumailov; Robert Mullins; Ross Anderson
2020-11-21
Spatially Correlated Patterns in Adversarial Images.Nandish Chattopadhyay; Lionell Yip En Zhi; Bryan Tan Bing Xing; Anupam Chattopadhyay
A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations.Can Bakiskan; Metehan Cekic; Ahmet Dundar Sezer; Upamanyu Madhow
Robust Data Hiding Using Inverse Gradient Attention. (2%)Honglei Zhang; Hu Wang; Yuanzhouhan Cao; Chunhua Shen; Yidong Li
2020-11-20
Are Chess Discussions Racist? An Adversarial Hate Speech Data Set.Rupak Sarkar; Ashiqur R. KhudaBukhsh
Detecting Universal Trigger's Adversarial Attack with Honeypot.Thai Le; Noseong Park; Dongwon Lee
2020-11-19
An Experimental Study of Semantic Continuity for Deep Learning Models.Shangxi Wu; Jitao Sang; Xian Zhao; Lizhang Chen
Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams.Chawin Sitawarin; Evgenios M. Kornaropoulos; Dawn Song; David Wagner
Adversarial Threats to DeepFake Detection: A Practical Perspective.Paarth Neekhara; Brian Dolhansky; Joanna Bitton; Cristian Canton Ferrer
Multi-Task Adversarial Attack.Pengxin Guo; Yuancheng Xu; Baijiong Lin; Yu Zhang
Latent Adversarial Debiasing: Mitigating Collider Bias in Deep Neural Networks.Luke Darlow; Stanisław Jastrzębski; Amos Storkey
2020-11-18
Robustified Domain Adaptation.Jiajin Zhang; Hanqing Chao; Pingkun Yan
Adversarial collision attacks on image hashing functions.Brian Dolhansky; Cristian Canton Ferrer
Contextual Fusion For Adversarial Robustness.Aiswarya Akumalla; Seth Haney; Maksim Bazhenov
Adversarial Turing Patterns from Cellular Automata.Nurislam Tursynbek; Ilya Vilkoviskiy; Maria Sindeeva; Ivan Oseledets
Self-Gradient Networks.Hossein Aboutalebi; Mohammad Javad Shafiee; Alexander Wong
Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs.Arezoo Rajabi; Rakesh B. Bobba
2020-11-17
FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances.Ali Shahin Shamsabadi; Francisco Sepúlveda Teixeira; Alberto Abad; Bhiksha Raj; Andrea Cavallaro; Isabel Trancoso
SIENA: Stochastic Multi-Expert Neural Patcher.Thai Le; Noseong Park; Dongwon Lee
Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification.Weitao Wan; Jiansheng Chen; Cheng Yu; Tong Wu; Yuanyi Zhong; Ming-Hsuan Yang
Generating universal language adversarial examples by understanding and enhancing the transferability across neural models.Liping Yuan; Xiaoqing Zheng; Yi Zhou; Cho-Jui Hsieh; Kai-wei Chang; Xuanjing Huang
Probing Predictions on OOD Images via Nearest Categories. (75%)Yao-Yuan Yang; Cyrus Rashtchian; Ruslan Salakhutdinov; Kamalika Chaudhuri
2020-11-16
MAAC: Novel Alert Correlation Method To Detect Multi-step Attack.Xiaoyu Wang; Lei Yu; Houhua He; Xiaorui Gong
Enforcing robust control guarantees within neural network policies.Priya L. Donti; Melrose Roderick; Mahyar Fazlyab; J. Zico Kolter
Adversarially Robust Classification based on GLRT.Bhagyashree Puranik; Upamanyu Madhow; Ramtin Pedarsani
Combining GANs and AutoEncoders for Efficient Anomaly Detection.Fabio Carrara; Giuseppe Amato; Luca Brombin; Fabrizio Falchi; Claudio Gennaro
Extreme Value Preserving Networks.Mingjie Sun; Jianguo Li; Changshui Zhang
2020-11-15
Towards Understanding the Regularization of Adversarial Robustness on Neural Networks.Yuxin Wen; Shuai Li; Kui Jia
Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks.MaungMaung AprilPyone; Hitoshi Kiya
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations.Jinyuan Jia; Binghui Wang; Xiaoyu Cao; Hongbin Liu; Neil Zhenqiang Gong
Power Side-Channel Attacks on BNN Accelerators in Remote FPGAs. (1%)Shayan Moini; Shanquan Tian; Jakub Szefer; Daniel Holcomb; Russell Tessier
2020-11-14
Audio-Visual Event Recognition through the lens of Adversary.Juncheng B Li; Kaixin Ma; Shuhui Qu; Po-Yao Huang; Florian Metze
2020-11-13
Transformer-Encoder Detector Module: Using Context to Improve Robustness to Adversarial Attacks on Object Detection.Faisal Alamri; Sinan Kalkan; Nicolas Pugeault
Query-based Targeted Action-Space Adversarial Policies on Deep Reinforcement Learning Agents.Xian Yeow Lee; Yasaman Esfandiari; Kai Liang Tan; Soumik Sarkar
2020-11-12
Adversarial Robustness Against Image Color Transformation within Parametric Filter Space.Zhengyu Zhao; Zhuoran Liu; Martha Larson
Sparse PCA: Algorithms, Adversarial Perturbations and Certificates.Tommaso d'Orsi; Pravesh K. Kothari; Gleb Novikov; David Steurer
2020-11-11
Adversarial images for the primate brain.Li Yuan; Will Xiao; Gabriel Kreiman; Francis E. H. Tay; Jiashi Feng; Margaret S. Livingstone
Detecting Adversarial Patches with Class Conditional Reconstruction Networks.Perry Deng; Mohammad Saidur Rahman; Matthew Wright
2020-11-10
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks.Martin Gubri; Maxime Cordy; Mike Papadakis; Yves Le Traon; Koushik Sen
2020-11-09
Solving Inverse Problems With Deep Neural Networks -- Robustness Included?Martin Genzel; Jan Macdonald; Maximilian März
2020-11-07
Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks.Alex Mathai; Shreya Khare; Srikanth Tamilselvam; Senthil Mani
Bridging the Performance Gap between FGSM and PGD Adversarial Training.Tianjin Huang; Vlado Menkovski; Yulong Pei; Mykola Pechenizkiy
2020-11-06
Single-Node Attacks for Fooling Graph Neural Networks.Ben Finkelshtein; Chaim Baskin; Evgenii Zheltonozhskii; Uri Alon
A survey on practical adversarial examples for malware classifiers.Daniel Park; Bülent Yener
2020-11-05
A Black-Box Attack Model for Visually-Aware Recommender Systems.Rami Cohen; Oren Sar Shalom; Dietmar Jannach; Amihood Amir
Data Augmentation via Structured Adversarial Perturbations.Calvin Luo; Hossein Mobahi; Samy Bengio
Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty.Camilo Pestana; Wei Liu; David Glance; Ajmal Mian
Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks.Leo Schwinn; An Nguyen; René Raab; Dario Zanca; Bjoern Eskofier; Daniel Tenbrinck; Martin Burger
2020-11-03
You Do (Not) Belong Here: Detecting DPI Evasion Attacks with Context Learning.Shitong Zhu; Shasha Li; Zhongjie Wang; Xun Chen; Zhiyun Qian; Srikanth V. Krishnamurthy; Kevin S. Chan; Ananthram Swami
Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks.Denis Emelin; Ivan Titov; Rico Sennrich
Penetrating RF Fingerprinting-based Authentication with a Generative Adversarial Attack.Samurdhi Karunaratne; Enes Krijestorac; Danijela Cabric
Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks.Tao Bai; Jinqi Luo; Jun Zhao
A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs.Souvik Kundu; Mahdi Nazemi; Peter A. Beerel; Massoud Pedram
MalFox: Camouflaged Adversarial Malware Example Generation Based on Conv-GANs Against Black-Box Detectors.Fangtian Zhong; Xiuzhen Cheng; Dongxiao Yu; Bei Gong; Shuaiwen Song; Jiguo Yu
2020-11-02
Adversarial Examples in Constrained Domains.Ryan Sheatsley; Nicolas Papernot; Michael Weisman; Gunjan Verma; Patrick McDaniel
Frequency-based Automated Modulation Classification in the Presence of Adversaries.Rajeev Sahay; Christopher G. Brinton; David J. Love
Robust Algorithms for Online Convex Problems via Primal-Dual.Marco Molinaro
Trustworthy AI.Richa Singh; Mayank Vatsa; Nalini Ratha
2020-11-01
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks.Hang Zhou; Dongdong Chen; Jing Liao; Weiming Zhang; Kejiang Chen; Xiaoyi Dong; Kunlin Liu; Gang Hua; Nenghai Yu
Vulnerability of the Neural Networks Against Adversarial Examples: A Survey.Rui Zhao
2020-10-31
MAD-VAE: Manifold Awareness Defense Variational Autoencoder.Frederick Morlock; Dingsu Wang
2020-10-30
Integer Programming-based Error-Correcting Output Code Design for Robust Classification.Samarth Gupta; Saurabh Amin
Leveraging Extracted Model Adversaries for Improved Black Box Attacks.Naveen Jafer Nizar; Ari Kobren
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks.Lubin Meng; Jian Huang; Zhigang Zeng; Xue Jiang; Shan Yu; Tzyy-Ping Jung; Chin-Teng Lin; Ricardo Chavarriaga; Dongrui Wu
Adversarial Attacks on Optimization based Planners.Sai Vemprala; Ashish Kapoor
Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks.Dorjan Hitaj; Briland Hitaj; Sushil Jajodia; Luigi V. Mancini
Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification.Yongwei Wang; Mingquan Feng; Rabab Ward; Z. Jane Wang; Lanjun Wang
Adversarial Robust Training of Deep Learning MRI Reconstruction Models.Francesco Calivá; Kaiyang Cheng; Rutwik Shah; Valentina Pedoia
2020-10-29
Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine Framework and Its Adversarial Examples.Yingwei Li; Zhuotun Zhu; Yuyin Zhou; Yingda Xia; Wei Shen; Elliot K. Fishman; Alan L. Yuille
Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection.Yongwei Wang; Xin Ding; Li Ding; Rabab Ward; Z. Jane Wang
Can the state of relevant neurons in a deep neural networks serve as indicators for detecting adversarial attacks?Roger Granda; Tinne Tuytelaars; Jose Oramas
Reliable Graph Neural Networks via Robust Aggregation.Simon Geisler; Daniel Zügner; Stephan Günnemann
Passport-aware Normalization for Deep Model Protection.Jie Zhang; Dongdong Chen; Jing Liao; Weiming Zhang; Gang Hua; Nenghai Yu
Robustifying Binary Classification to Adversarial Perturbation.Fariborz Salehi; Babak Hassibi
Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification.Arslan Ali; Andrea Migliorati; Tiziano Bianchi; Enrico Magli
WaveTransform: Crafting Adversarial Examples via Input Decomposition.Divyam Anshumaan; Akshay Agarwal; Mayank Vatsa; Richa Singh
Machine Learning (In) Security: A Stream of Problems. (8%)Fabrício Ceschin; Marcus Botacin; Albert Bifet; Bernhard Pfahringer; Luiz S. Oliveira; Heitor Murilo Gomes; André Grégio
2020-10-28
Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations.Amit Daniely; Hadas Schacham
Object Hider: Adversarial Patch Attack Against Object Detectors.Yusheng Zhao; Huanqian Yan; Xingxing Wei
Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?Anna-Kathrin Kopetzki; Bertrand Charpentier; Daniel Zügner; Sandhya Giri; Stephan Günnemann
Transferable Universal Adversarial Perturbations Using Generative Models.Atiye Sadat Hashemi; Andreas Bär; Saeed Mozaffari; Tim Fingscheidt
2020-10-27
Fast Local Attack: Generating Local Adversarial Examples for Object Detectors.Quanyu Liao; Xin Wang; Bin Kong; Siwei Lyu; Youbing Yin; Qi Song; Xi Wu
Anti-perturbation of Online Social Networks by Graph Label Transition.Jun Zhuang; Mohammad Al Hasan
2020-10-26
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes.Jinyuan Jia; Binghui Wang; Neil Zhenqiang Gong
GreedyFool: Distortion-Aware Sparse Adversarial Attack.Xiaoyi Dong; Dongdong Chen; Jianmin Bao; Chuan Qin; Lu Yuan; Weiming Zhang; Nenghai Yu; Dong Chen
Robust Pre-Training by Adversarial Contrastive Learning.Ziyu Jiang; Tianlong Chen; Ting Chen; Zhangyang Wang
Versatile Verification of Tree Ensembles.Laurens Devos; Wannes Meert; Jesse Davis
Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy.Philipp Benz; Chaoning Zhang; Adil Karjauv; In So Kweon
Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis. (16%)Xudong Pan; Mi Zhang; Yifan Yan; Jiaming Zhu; Min Yang
2020-10-25
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound.Saheb Chhabra; Akshay Agarwal; Richa Singh; Mayank Vatsa
Dynamic Adversarial Patch for Evading Object Detection Models.Shahar Hoory; Tzvika Shapira; Asaf Shabtai; Yuval Elovici
Asymptotic Behavior of Adversarial Training in Binary Classification.Hossein Taheri; Ramtin Pedarsani; Christos Thrampoulidis
2020-10-24
ATRO: Adversarial Training with a Rejection Option.Masahiro Kato; Zhenghang Cui; Yoshihiro Fukuhara
Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks.Huimin Zeng; Chen Zhu; Tom Goldstein; Furong Huang
Stop Bugging Me! Evading Modern-Day Wiretapping Using Adversarial Perturbations.Yael Mathov; Tal Ben Senior; Asaf Shabtai; Yuval Elovici
2020-10-23
Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures.Nafise Sadat Moosavi; Marcel de Boer; Prasetya Ajie Utama; Iryna Gurevych
Towards Robust Neural Networks via Orthogonal Diversity.Kun Fang; Qinghua Tao; Yingwen Wu; Tao Li; Jia Cai; Feipeng Cai; Xiaolin Huang; Jie Yang
2020-10-22
Contrastive Learning with Adversarial Examples.Chih-Hui Ho; Nuno Vasconcelos
Adversarial Attacks on Binary Image Recognition Systems.Eric Balkanski; Harrison Chase; Kojin Oshiba; Alexander Rilee; Yaron Singer; Richard Wang
Rewriting Meaningful Sentences via Conditional BERT Sampling and an application on fooling text classifiers.Lei Xu; Ivan Ramirez; Kalyan Veeramachaneni
An Efficient Adversarial Attack for Tree Ensembles.Chong Zhang; Huan Zhang; Cho-Jui Hsieh
Adversarial Robustness of Supervised Sparse Coding.Jeremias Sulam; Ramchandran Muthukumar; Raman Arora
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming.Sumanth Dathathri; Krishnamurthy Dvijotham; Alexey Kurakin; Aditi Raghunathan; Jonathan Uesato; Rudy Bunel; Shreya Shankar; Jacob Steinhardt; Ian Goodfellow; Percy Liang; Pushmeet Kohli
Defense-guided Transferable Adversarial Attacks.Zifei Zhang; Kai Qiao; Jian Chen; Ningning Liang
Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free.Haotao Wang; Tianlong Chen; Shupeng Gui; Ting-Kuei Hu; Ji Liu; Zhangyang Wang
2020-10-21
Adversarial Attacks on Deep Algorithmic Trading Policies.Yaser Faghan; Nancirose Piazza; Vahid Behzadan; Ali Fathi
Maximum Mean Discrepancy is Aware of Adversarial Attacks.Ruize Gao; Feng Liu; Jingfeng Zhang; Bo Han; Tongliang Liu; Gang Niu; Masashi Sugiyama
Precise Statistical Analysis of Classification Accuracies for Adversarial Training.Adel Javanmard; Mahdi Soltanolkotabi
Learning Black-Box Attackers with Transferable Priors and Query Feedback.Jiancheng Yang; Yangzhou Jiang; Xiaoyang Huang; Bingbing Ni; Chenglong Zhao
Class-Conditional Defense GAN Against End-to-End Speech Attacks.Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
A Distributional Robustness Certificate by Randomized Smoothing.Jungang Yang; Liyao Xiang; Ruidong Chen; Yukun Wang; Wei Wang; Xinbing Wang
2020-10-20
Preventing Personal Data Theft in Images with Adversarial ML.Thomas Cilloni; Wei Wang; Charles Walter; Charles Fleming
Towards Understanding the Dynamics of the First-Order Adversaries.Zhun Deng; Hangfeng He; Jiaoyang Huang; Weijie J. Su
Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods.Byungjoo Kim; Bryce Chudomelka; Jinyoung Park; Jaewoo Kang; Youngjoon Hong; Hyunwoo J. Kim
Boosting Gradient for White-Box Adversarial Attacks.Hongying Liu; Zhenyu Zhou; Fanhua Shang; Xiaoyu Qi; Yuanyuan Liu; Licheng Jiao
Tight Second-Order Certificates for Randomized Smoothing.Alexander Levine; Aounon Kumar; Thomas Goldstein; Soheil Feizi
2020-10-19
A Survey of Machine Learning Techniques in Adversarial Image Forensics.Ehsan Nowroozi; Ali Dehghantanha; Reza M. Parizi; Kim-Kwang Raymond Choo
Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification.Erwin Quiring; Lukas Pirch; Michael Reimsbach; Daniel Arp; Konrad Rieck
RobustBench: a standardized adversarial robustness benchmark.Francesco Croce; Maksym Andriushchenko; Vikash Sehwag; Nicolas Flammarion; Mung Chiang; Prateek Mittal; Matthias Hein
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness.Guillermo Ortiz-Jimenez; Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Verifying the Causes of Adversarial Examples.Honglin Li; Yifei Fan; Frieder Ganz; Anthony Yezzi; Payam Barnaghi
When Bots Take Over the Stock Market: Evasion Attacks Against Algorithmic Traders.Elior Nehemya; Yael Mathov; Asaf Shabtai; Yuval Elovici
FLAG: Adversarial Data Augmentation for Graph Neural Networks.Kezhi Kong; Guohao Li; Mucong Ding; Zuxuan Wu; Chen Zhu; Bernard Ghanem; Gavin Taylor; Tom Goldstein
2020-10-18
FADER: Fast Adversarial Example Rejection.Francesco Crecchi; Marco Melis; Angelo Sotgiu; Davide Bacciu; Battista Biggio
Poisoned classifiers are not only backdoored, they are fundamentally broken.Mingjie Sun; Siddhant Agarwal; J. Zico Kolter
2020-10-17
A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models.Ferhat Ozgur Catak; Samed Sivaslioglu; Kevser Sahinbas
Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing.Jinghan Yang; Adith Boloor; Ayan Chakrabarti; Xuan Zhang; Yevgeniy Vorobeychik
Weight-Covariance Alignment for Adversarially Robust Neural Networks.Panagiotis Eustratiadis; Henry Gouk; Da Li; Timothy Hospedales
2020-10-16
DPAttack: Diffused Patch Attacks against Universal Object Detection.Shudeng Wu; Tao Dai; Shu-Tao Xia
Mischief: A Simple Black-Box Attack Against Transformer Architectures.Adrian de Wynter
Learning Robust Algorithms for Online Allocation Problems Using Adversarial Training.Goran Zuzic; Di Wang; Aranyak Mehta; D. Sivakumar
2020-10-15
Adversarial Images through Stega Glasses.Benoît Bonnet; Teddy Furon; Patrick Bas
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning.Hongjun Wang; Guanbin Li; Xiaobai Liu; Liang Lin
Generalizing Universal Adversarial Attacks Beyond Additive Perturbations.Yanghao Zhang; Wenjie Ruan; Fu Wang; Xiaowei Huang
Certifying Neural Network Robustness to Random Input Noise from Samples.Brendon G. Anderson; Somayeh Sojoudi
Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training.Zichao Li; Liyuan Liu; Chengyu Dong; Jingbo Shang
Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness.Long Zhao; Ting Liu; Xi Peng; Dimitris Metaxas
Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks.Jiangnan Li; Yingyuan Yang; Jinyuan Stella Sun
Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things.Ling Wang; Cheng Zhang; Zejian Luo; Chenguang Liu; Jie Liu; Xi Zheng; Athanasios Vasilakos
2020-10-14
Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability.Yuxian Meng; Chun Fan; Zijun Sun; Eduard Hovy; Fei Wu; Jiwei Li
Towards Resistant Audio Adversarial Examples.Tom Dörr; Karla Markert; Nicolas M. Müller; Konstantin Böttinger
An Adversarial Attack against Stacked Capsule Autoencoder.Jiazhu Dai; Siwei Xiong
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability.Mahmoud Hossam; Trung Le; He Zhao; Dinh Phung
GreedyFool: Multi-Factor Imperceptibility and Its Application to Designing Black-box Adversarial Example Attack.Hui Liu; Bo Zhao; Jiabao Guo; Yang An; Peng Liu
2020-10-13
Toward Few-step Adversarial Training from a Frequency Perspective.Hans Shih-Han Wang; Cory Cornelius; Brandon Edwards; Jason Martin
Higher-Order Certification for Randomized Smoothing.Jeet Mohapatra; Ching-Yun Ko; Tsui-Wei Weng; Pin-Yu Chen; Sijia Liu; Luca Daniel
Linking average- and worst-case perturbation robustness via class selectivity and dimensionality.Matthew L. Leavitt; Ari Morcos
2020-10-12
Universal Model for 3D Medical Image Analysis.Xiaoman Zhang; Ya Zhang; Xiaoyun Zhang; Yanfeng Wang
To be Robust or to be Fair: Towards Fairness in Adversarial Training.Han Xu; Xiaorui Liu; Yaxin Li; Jiliang Tang
Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks.He Zhao; Thanh Nguyen; Trung Le; Paul Montague; Olivier De Vel; Tamas Abraham; Dinh Phung
Shape-Texture Debiased Neural Network Training.Yingwei Li; Qihang Yu; Mingxing Tan; Jieru Mei; Peng Tang; Wei Shen; Alan Yuille; Cihang Xie
On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness.Maria-Florina Balcan; Avrim Blum; Dravyansh Sharma; Hongyang Zhang
From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks.Steffen Eger; Yannik Benz
EFSG: Evolutionary Fooling Sentences Generator.Marco Di Giovanni; Marco Brambilla
Contrast and Classify: Training Robust VQA Models. (2%)Yash Kant; Abhinav Moudgil; Dhruv Batra; Devi Parikh; Harsh Agrawal
2020-10-11
Gradient-based Analysis of NLP Models is Manipulable.Junlin Wang; Jens Tuyls; Eric Wallace; Sameer Singh
IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration.Ziyi Wu; Yueqi Duan; He Wang; Qingnan Fan; Leonidas J. Guibas
2020-10-10
Is It Time to Redefine the Classification Task for Deep Neural Networks?Keji Han; Yun Li
Regularizing Neural Networks via Adversarial Model Perturbation. (1%)Yaowei Zheng; Richong Zhang; Yongyi Mao
2020-10-09
Understanding Spatial Robustness of Deep Neural Networks.Ziyuan Zhong; Yuchi Tian; Baishakhi Ray
How Does Mixup Help With Robustness and Generalization?Linjun Zhang; Zhun Deng; Kenji Kawaguchi; Amirata Ghorbani; James Zou
2020-10-08
Transcending Transcend: Revisiting Malware Classification with Conformal Evaluation.Federico Barbero; Feargus Pendlebury; Fabio Pierazzi; Lorenzo Cavallaro
Improve Adversarial Robustness via Weight Penalization on Classification Layer.Cong Xu; Dan Li; Min Yang
A Unified Approach to Interpreting and Boosting Adversarial Transferability.Xin Wang; Jie Ren; Shuyun Lin; Xiangming Zhu; Yisen Wang; Quanshi Zhang
Improved Techniques for Model Inversion Attacks.Si Chen; Ruoxi Jia; Guo-Jun Qi
Affine-Invariant Robust Training.Oriol Barbany Mayor
Targeted Attention Attack on Deep Learning Models in Road Sign Recognition.Xinghao Yang; Weifeng Liu; Shengli Zhang; Wei Liu; Dacheng Tao
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks.Anit Kumar Sahu; Satya Narayan Shukla; J. Zico Kolter
2020-10-07
Hiding the Access Pattern is Not Enough: Exploiting Search Pattern Leakage in Searchable Encryption.Simon Oya; Florian Kerschbaum
Learning Clusterable Visual Features for Zero-Shot Recognition.Jingyi Xu; Zhixin Shu; Dimitris Samaras
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks.Ahmed Salem; Michael Backes; Yang Zhang
Revisiting Batch Normalization for Improving Corruption Robustness.Philipp Benz; Chaoning Zhang; Adil Karjauv; In So Kweon
Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features.Philipp Benz; Chaoning Zhang; In So Kweon
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks.Bedeuro Kim; Alsharif Abuadbba; Yansong Gao; Yifeng Zheng; Muhammad Ejaz Ahmed; Hyoungshick Kim; Surya Nepal
Global Optimization of Objective Functions Represented by ReLU Networks.Christopher A. Strong; Haoze Wu; Aleksandar Zeljić; Kyle D. Julian; Guy Katz; Clark Barrett; Mykel J. Kochenderfer
CD-UAP: Class Discriminative Universal Adversarial Perturbation.Chaoning Zhang; Philipp Benz; Tooba Imtiaz; In So Kweon
Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples.Eden Levy; Yael Mathov; Ziv Katzir; Asaf Shabtai; Yuval Elovici
Double Targeted Universal Adversarial Perturbations.Philipp Benz; Chaoning Zhang; Tooba Imtiaz; In So Kweon
Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples.Sven Gowal; Chongli Qin; Jonathan Uesato; Timothy Mann; Pushmeet Kohli
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems.AKM Iqtidar Newaz; Nur Imtiazul Haque; Amit Kumar Sikder; Mohammad Ashiqur Rahman; A. Selcuk Uluagac
Adversarial attacks on audio source separation.Naoya Takahashi; Shota Inoue; Yuki Mitsufuji
2020-10-06
Visualizing Color-wise Saliency of Black-Box Image Classification Models.Yuhki Hatakeyama; Hiroki Sakuma; Yoshinori Konishi; Kohei Suenaga
Constraining Logits by Bounded Function for Adversarial Robustness.Sekitoshi Kanai; Masanori Yamada; Shin'ya Yamaguchi; Hiroshi Takahashi; Yasutoshi Ida
Adversarial Patch Attacks on Monocular Depth Estimation Networks.Koichiro Yamanaka; Ryutaroh Matsumoto; Keita Takahashi; Toshiaki Fujii
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models.Ahmed Salem; Yannick Sautter; Michael Backes; Mathias Humbert; Yang Zhang
2020-10-05
Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model.Xin Qiu; Risto Miikkulainen
Adversarial Boot Camp: label free certified robustness in one epoch.Ryan Campbell; Chris Finlay; Adam M Oberman
Understanding Classifier Mistakes with Generative Models.Laëtitia Shao; Yang Song; Stefano Ermon
CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation.Tianlu Wang; Xuezhi Wang; Yao Qin; Ben Packer; Kang Li; Jilin Chen; Alex Beutel; Ed Chi
Second-Order NLP Adversarial Examples.John X. Morris
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference.Sanghyun Hong; Yiğitcan Kaya; Ionuţ-Vlad Modoranu; Tudor Dumitraş
InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective.Boxin Wang; Shuohang Wang; Yu Cheng; Zhe Gan; Ruoxi Jia; Bo Li; Jingjing Liu
Understanding Catastrophic Overfitting in Single-step Adversarial Training.Hoki Kim; Woojin Lee; Jaewook Lee
Downscaling Attack and Defense: Turning What You See Back Into What You Get.Andrew J. Lohn
Metadata-Based Detection of Child Sexual Abuse Material. (1%)Mayana Pereira; Rahul Dodhia; Hyrum Anderson; Richard Brown
2020-10-04
TextAttack: Lessons learned in designing Python frameworks for NLP.John X. Morris; Jin Yong Yoo; Yanjun Qi
A Study for Universal Adversarial Attacks on Texture Recognition.Yingpeng Deng; Lina J. Karam
Adversarial Attack and Defense of Structured Prediction Models.Wenjuan Han; Liwen Zhang; Yong Jiang; Kewei Tu
Geometry-aware Instance-reweighted Adversarial Training.Jingfeng Zhang; Jianing Zhu; Gang Niu; Bo Han; Masashi Sugiyama; Mohan Kankanhalli
Unknown Presentation Attack Detection against Rational Attackers.Ali Khodabakhsh; Zahid Akhtar
2020-10-03
Adversarial and Natural Perturbations for General Robustness.Sadaf Gulshad; Jan Hendrik Metzen; Arnold Smeulders
Multi-Step Adversarial Perturbations on Recommender Systems Embeddings.Vito Walter Anelli; Alejandro Bellogín; Yashar Deldjoo; Tommaso Di Noia; Felice Antonio Merra
A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples.Zhao Meng; Roger Wattenhofer
Efficient Robust Training via Backward Smoothing.Jinghui Chen; Yu Cheng; Zhe Gan; Quanquan Gu; Jingjing Liu
Do Wider Neural Networks Really Help Adversarial Robustness?Boxi Wu; Jinghui Chen; Deng Cai; Xiaofei He; Quanquan Gu
2020-10-02
Note: An alternative proof of the vulnerability of $k$-NN classifiers in high intrinsic dimensionality regions.Teddy Furon
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders.Vito Walter Anelli; Tommaso Di Noia; Daniele Malitesta; Felice Antonio Merra
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense.MaungMaung AprilPyone; Hitoshi Kiya
Query complexity of adversarial attacks.Grzegorz Głuch; Rüdiger Urbanke
CorrAttack: Black-box Adversarial Attack with Structured Search.Zhichao Huang; Yaowei Huang; Tong Zhang
A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations.Gustavo Olague; Gerardo Ibarra-Vazquez; Mariana Chan-Ley; Cesar Puente; Carlos Soubervielle-Montalvo; Axel Martinez
Data-Driven Certification of Neural Networks with Random Input Noise. (16%)Brendon G. Anderson; Somayeh Sojoudi
2020-10-01
Assessing Robustness of Text Classification through Maximal Safe Radius Computation.Emanuele La Malfa; Min Wu; Luca Laurenti; Benjie Wang; Anthony Hartshorn; Marta Kwiatkowska
Bag of Tricks for Adversarial Training.Tianyu Pang; Xiao Yang; Yinpeng Dong; Hang Su; Jun Zhu
2020-09-30
Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning.Guneet S. Dhillon; Nicholas Carlini
Accurate and Robust Feature Importance Estimation under Distribution Shifts.Jayaraman J. Thiagarajan; Vivek Narayanaswamy; Rushil Anirudh; Peer-Timo Bremer; Andreas Spanias
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks.Uday Shankar Shanthamallu; Jayaraman J. Thiagarajan; Andreas Spanias
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles.Huanrui Yang; Jingyang Zhang; Hongliang Dong; Nathan Inkawhich; Andrew Gardner; Andrew Touchet; Wesley Wilkes; Heath Berry; Hai Li
2020-09-29
Neural Topic Modeling with Cycle-Consistent Adversarial Training.Xuemeng Hu; Rui Wang; Deyu Zhou; Yuxuan Xiong
Fast Fréchet Inception Distance.Alexander Mathiasen; Frederik Hvilshøj
2020-09-28
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment.Sharan Raja; Rudraksh Tuwani
STRATA: Building Robustness with a Simple Method for Generating Black-box Adversarial Attacks for Models of Code.Jacob M. Springer; Bryn Marie Reinstadler; Una-May O'Reilly
Graph Adversarial Networks: Protecting Information against Adversarial Attacks.Peiyuan Liao; Han Zhao; Keyulu Xu; Tommi Jaakkola; Geoffrey Gordon; Stefanie Jegelka; Ruslan Salakhutdinov
Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients.Yifei Huang; Yaodong Yu; Hongyang Zhang; Yi Ma; Yuan Yao
Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability.Ishai Rosenberg; Shai Meir; Jonathan Berrebi; Ilay Gordon; Guillaume Sicard; Eli David
Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations. (92%)Pu Zhao; Parikshit Ram; Songtao Lu; Yuguang Yao; Djallel Bouneffouf; Xue Lin; Sijia Liu
2020-09-27
Learning to Improve Image Compression without Changing the Standard Decoder.Yannick Strümpler; Ren Yang; Radu Timofte
RoGAT: a robust GNN combined revised GAT with adjusted graphs.Xianchen Zhou; Yaoyun Zeng; Hongxia Wang
Where Does the Robustness Come from? A Study of the Transformation-based Ensemble Defence.Chang Liao; Yao Cheng; Chengfang Fang; Jie Shi
2020-09-26
Differentially Private Adversarial Robustness Through Randomized Perturbations.Nan Xu; Oluwaseyi Feyisetan; Abhinav Aggarwal; Zekun Xu; Nathanael Teissier
Beneficial Perturbations Network for Defending Adversarial Examples.Shixian Wen; Amanda Rios; Laurent Itti
2020-09-25
Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision.Sara Mandelli; Nicolò Bonettini; Paolo Bestagini; Stefano Tubaro
Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training.Shunsuke Kitada; Hitoshi Iyatomi
2020-09-24
Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities.Tyler J. Shipp; Daniel J. Clouse; Michael J. De Lucia; Metin B. Ahiskali; Kai Steverson; Jonathan M. Mullin; Nathaniel D. Bastian
Adversarial Examples in Deep Learning for Multivariate Time Series Regression.Gautam Raj Mode; Khaza Anuarul Hoque
Improving Query Efficiency of Black-box Adversarial Attack.Yang Bai; Yuyuan Zeng; Yong Jiang; Yisen Wang; Shu-Tao Xia; Weiwei Guo
2020-09-23
Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization.Prashnna Kumar Gyawali; Sandesh Ghimire; Linwei Wang
Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining.Ananya B. Sai; Akash Kumar Mohankumar; Siddhartha Arora; Mitesh M. Khapra
Adversarial robustness via stochastic regularization of neural activation sensitivity.Gil Fidel; Ron Bitton; Ziv Katzir; Asaf Shabtai
A Partial Break of the Honeypots Defense to Catch Adversarial Attacks.Nicholas Carlini
Semantics-Preserving Adversarial Training.Wonseok Lee; Hanbit Lee; Sang-goo Lee
Robustification of Segmentation Models Against Adversarial Perturbations In Medical Imaging.Hanwool Park; Amirhossein Bayat; Mohammad Sabokrou; Jan S. Kirschke; Bjoern H. Menze
Detection of Iterative Adversarial Attacks via Counter Attack.Matthias Rottmann; Kira Maag; Mathis Peyron; Natasa Krejic; Hanno Gottschalk
Torchattacks: A PyTorch Repository for Adversarial Attacks.Hoki Kim
2020-09-22
What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors.Yi-Shan Lin; Wen-Chuan Lee; Z. Berkay Celik
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time.Ferran Alet; Kenji Kawaguchi; Tomas Lozano-Perez; Leslie Pack Kaelbling
Adversarial Attack Based Countermeasures against Deep Learning Side-Channel Attacks.Ruizhe Gu; Ping Wang; Mengce Zheng; Honggang Hu; Nenghai Yu
2020-09-21
Uncertainty-aware Attention Graph Neural Network for Defending Adversarial Attacks.Boyuan Feng; Yuke Wang; Zheng Wang; Yufei Ding
Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers.Boyuan Feng; Yuke Wang; Xu Li; Yufei Ding
Generating Adversarial yet Inconspicuous Patches with a Single Image.Jinqi Luo; Tao Bai; Jun Zhao; Bo Li
Adversarial Training with Stochastic Weight Average.Joong-Won Hwang; Youngwan Lee; Sungchan Oh; Yuseok Bae
Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness.Anh Bui; Trung Le; He Zhao; Paul Montague; Olivier De Vel; Tamas Abraham; Dinh Phung
DeepDyve: Dynamic Verification for Deep Neural Networks.Yu Li; Min Li; Bo Luo; Ye Tian; Qiang Xu
Feature Distillation With Guided Adversarial Contrastive Learning.Tao Bai; Jinnan Chen; Jun Zhao; Bihan Wen; Xudong Jiang; Alex Kot
Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version).Gautam Raj Mode; Khaza Anuarul Hoque
Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations.Alex Wong; Mukund Mundhra; Stefano Soatto
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing.Maurice Weber; Nana Liu; Bo Li; Ce Zhang; Zhikuan Zhao
Password Strength Signaling: A Counter-Intuitive Defense Against Password Cracking. (1%)Wenjie Bai; Jeremiah Blocki; Ben Harsha
2020-09-20
Improving Robustness and Generality of NLP Models Using Disentangled Representations.Jiawei Wu; Xiaoya Li; Xiang Ao; Yuxian Meng; Fei Wu; Jiwei Li
2020-09-19
Efficient Certification of Spatial Robustness.Anian Ruoss; Maximilian Baader; Mislav Balunović; Martin Vechev
OpenAttack: An Open-source Textual Adversarial Attack Toolkit.Guoyang Zeng; Fanchao Qi; Qianrui Zhou; Tingji Zhang; Bairu Hou; Yuan Zang; Zhiyuan Liu; Maosong Sun
Making Images Undiscoverable from Co-Saliency Detection.Ruijun Gao; Qing Guo; Felix Juefei-Xu; Hongkai Yu; Xuhong Ren; Wei Feng; Song Wang
Bias Field Poses a Threat to DNN-based X-Ray Recognition.Binyu Tian; Qing Guo; Felix Juefei-Xu; Wen Le Chan; Yupeng Cheng; Xiaohong Li; Xiaofei Xie; Shengchao Qin
Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations.Yuan Zang; Bairu Hou; Fanchao Qi; Zhiyuan Liu; Xiaojun Meng; Maosong Sun
Adversarial Rain Attack and Defensive Deraining for DNN Perception.Liming Zhai; Felix Juefei-Xu; Qing Guo; Xiaofei Xie; Lei Ma; Wei Feng; Shengchao Qin; Yang Liu
Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading.Yupeng Cheng; Qing Guo; Felix Juefei-Xu; Huazhu Fu; Shang-Wei Lin; Weisi Lin
EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks.Yaguan Qian; Qiqi Shao; Jiamin Wang; Xiang Lin; Yankai Guo; Zhaoquan Gu; Bin Wang; Chunming Wu
2020-09-18
Robust Decentralized Learning for Neural Networks.Yao Zhou; Jun Wu; Jingrui He
MIRAGE: Mitigating Conflict-Based Cache Attacks with a Practical Fully-Associative Design. (1%)Gururaj Saileshwar; Moinuddin Qureshi
2020-09-17
Certifying Confidence via Randomized Smoothing.Aounon Kumar; Alexander Levine; Soheil Feizi; Tom Goldstein
Generating Label Cohesive and Well-Formed Adversarial Claims.Pepa Atanasova; Dustin Wright; Isabelle Augenstein
Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks.T. Gittings; S. Schneider; J. Collomosse
Label Smoothing and Adversarial Robustness.Chaohao Fu; Hongbin Chen; Na Ruan; Weijia Jia
Online Alternate Generator against Adversarial Attacks.Haofeng Li; Yirui Zeng; Guanbin Li; Liang Lin; Yizhou Yu
MultAV: Multiplicative Adversarial Videos.Shao-Yuan Lo; Vishal M. Patel
On the Transferability of Minimal Prediction Preserving Inputs in Question Answering.Shayne Longpre; Yi Lu; Christopher DuBois
Large Norms of CNN Layers Do Not Hurt Adversarial Robustness.Youwei Liang; Dong Huang
2020-09-16
Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation.Wenhao Ding; Baiming Chen; Bo Li; Kim Ji Eun; Ding Zhao
Analysis of Generalizability of Deep Neural Networks Based on the Complexity of Decision Boundary.Shuyue Guan; Murray Loew
Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View.Erick Galinkin
Contextualized Perturbation for Textual Adversarial Attack.Dianqi Li; Yizhe Zhang; Hao Peng; Liqun Chen; Chris Brockett; Ming-Ting Sun; Bill Dolan
2020-09-15
Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup.Jang-Hyun Kim; Wonho Choo; Hyun Oh Song
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems.Haoliang Li; Yufei Wang; Xiaofei Xie; Yang Liu; Shiqi Wang; Renjie Wan; Lap-Pui Chau; Alex C. Kot
Switching Gradient Directions for Query-Efficient Black-Box Adversarial Attacks.Chen Ma; Shuyu Cheng; Li Chen; Junhai Yong
Decision-based Universal Adversarial Attack.Jing Wu; Mingyi Zhou; Shuaicheng Liu; Yipeng Liu; Ce Zhu
2020-09-14
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses.Ambar Pal; René Vidal
Input Hessian Regularization of Neural Networks.Waleed Mustafa; Robert A. Vandermeulen; Marius Kloft
Robust Deep Learning Ensemble against Deception.Wenqi Wei; Ling Liu
Hold Tight and Never Let Go: Security of Deep Learning based Automated Lane Centering under Physical-World Attack.Takami Sato; Junjie Shen; Ningfei Wang; Yunhan Jack Jia; Xue Lin; Qi Alfred Chen
2020-09-13
Manifold attack.Khanh-Hung Tran; Fred-Maurice Ngole-Mboula; Jean-Luc Starck
Towards the Quantification of Safety Risks in Deep Neural Networks.Peipei Xu; Wenjie Ruan; Xiaowei Huang
2020-09-12
Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing.Zhidong Gao; Rui Hu; Yanmin Gong
2020-09-11
Defending Against Multiple and Unforeseen Adversarial Videos.Shao-Yuan Lo; Vishal M. Patel
Robust Neural Machine Translation: Modeling Orthographic and Interpunctual Variation.Toms Bergmanis; Artūrs Stafanovičs; Mārcis Pinnis
Achieving Adversarial Robustness via Sparsity.Shufan Wang; Ningyi Liao; Liyao Xiang; Nanyang Ye; Quanshi Zhang
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples.Timo Freiesleben
Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection.Lan Zhang; Peng Liu; Yoon-Ho Choi
2020-09-10
Second Order Optimization for Adversarial Robustness and Interpretability.Theodoros Tsiligkaridis; Jay Roberts
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent.Ricardo Bigolin Lanfredi; Joyce D. Schroeder; Tolga Tasdizen
2020-09-09
End-to-end Kernel Learning via Generative Random Fourier Features.Kun Fang; Xiaolin Huang; Fanghui Liu; Jie Yang
Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples.Jin Yong Yoo; John X. Morris; Eli Lifland; Yanjun Qi
A black-box adversarial attack for poisoning clustering.Antonio Emanuele Cinà; Alessandro Torcinovich; Marcello Pelillo
SoK: Certified Robustness for Deep Neural Networks.Linyi Li; Tao Xie; Bo Li
2020-09-08
Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models.Achyut Mani Tripathi; Ashish Mishra
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective.Gabriel Resende Machado; Eugênio Silva; Ronaldo Ribeiro Goldschmidt
2020-09-07
Adversarial attacks on deep learning models for fatty liver disease classification by modification of ultrasound image reconstruction method.Michal Byra; Grzegorz Styczynski; Cezary Szmigielski; Piotr Kalinowski; Lukasz Michalowski; Rafal Paluszkiewicz; Bogna Ziarkiewicz-Wroblewska; Krzysztof Zieniewicz; Andrzej Nowicki
Adversarial Attack on Large Scale Graph.Jintang Li; Tao Xie; Liang Chen; Fenfang Xie; Xiangnan He; Zibin Zheng
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing.Josh Kalin; Matthew Ciolino; David Noever; Gerry Dozier
2020-09-06
A Game Theoretic Analysis of LQG Control under Adversarial Attack.Zuxing Li; György Dán; Dong Liu
Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks.Shankar A. Deka; Dušan M. Stipanović; Claire J. Tomlin
Detection Defense Against Adversarial Attacks with Saliency Map.Dengpan Ye; Chuanxi Chen; Changrui Liu; Hao Wang; Shunzhi Jiang
2020-09-05
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks.Nilaksh Das; Haekyu Park; Zijie J. Wang; Fred Hohman; Robert Firstman; Emily Rogers; Duen Horng Chau
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks.Wei-An Lin; Chun Pong Lau; Alexander Levine; Rama Chellappa; Soheil Feizi
2020-09-03
MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN. (10%)Haoyu Zhang; Sushma Venkatesh; Raghavendra Ramachandra; Kiran Raja; Naser Damer; Christoph Busch
2020-09-02
Yet Meta Learning Can Adapt Fast, It Can Also Break Easily.Han Xu; Yaxin Li; Xiaorui Liu; Hui Liu; Jiliang Tang
Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation.Danilo Vasconcellos Vargas; Bingli Liao; Takahiro Kanzaki
Open-set Adversarial Defense.Rui Shao; Pramuditha Perera; Pong C. Yuen; Vishal M. Patel
Adversarially Robust Neural Architectures.Minjing Dong; Yanxi Li; Yunhe Wang; Chang Xu
Flow-based detection and proxy-based evasion of encrypted malware C2 traffic.Carlos Novo; Ricardo Morla
Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors.Cezara Benegui; Radu Tudor Ionescu
Simulating Unknown Target Models for Query-Efficient Black-box Attacks.Chen Ma; Li Chen; Jun-Hai Yong
2020-09-01
Defending against substitute model black box adversarial attacks with the 01 loss.Yunzhe Xue; Meiyan Xie; Usman Roshan
2020-08-31
Adversarial Patch Camouflage against Aerial Detection.Ajaya Adhikari; Richard den Hollander; Ioannis Tolios; Michael van Bekkum; Anneloes Bal; Stijn Hendriks; Maarten Kruithof; Dennis Gross; Nils Jansen; Guillermo Pérez; Kit Buurman; Stephan Raaijmakers
MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models.Thai Le; Suhang Wang; Dongwon Lee
Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function.Binghui Wang; Tianxiang Zhou; Minhua Lin; Pan Zhou; Ang Li; Meng Pang; Hai Li; Yiran Chen
2020-08-30
Benchmarking adversarial attacks and defenses for time-series data.Shoaib Ahmed Siddiqui; Andreas Dengel; Sheraz Ahmed
An Integrated Approach to Produce Robust Models with High Efficiency.Zhijian Li; Bao Wang; Jack Xin
Shape Defense Against Adversarial Attacks.Ali Borji
2020-08-29
Improving Resistance to Adversarial Deformations by Regularizing Gradients.Pengfei Xia; Bin Li
2020-08-27
A Scene-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video.Mariana-Iuliana Georgescu; Radu Tudor Ionescu; Fahad Shahbaz Khan; Marius Popescu; Mubarak Shah
GhostBuster: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing.Zhongyuan Hau; Soteris Demetriou; Luis Muñoz-González; Emil C. Lupu
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds.Jaeyeon Kim; Binh-Son Hua; Duc Thanh Nguyen; Sai-Kit Yeung
On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks.Deboleena Roy; Indranil Chakraborty; Timur Ibrayev; Kaushik Roy
Adversarial Eigen Attack on Black-Box Models.Linjun Zhou; Peng Cui; Yinan Jiang; Shiqiang Yang
Color and Edge-Aware Adversarial Image Perturbations.Robert Bassett; Mitchell Graves; Patrick Reilly
Adversarially Robust Learning via Entropic Regularization.Gauri Jagatap; Ameya Joshi; Animesh Basak Chowdhury; Siddharth Garg; Chinmay Hegde
2020-08-26
Adversarially Training for Audio Classifiers.Raymel Alfonso Sallo; Mohammad Esmaeilpour; Patrick Cardinal
2020-08-25
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses.Fu Lin; Rohit Mittapalli; Prithvijit Chattopadhyay; Daniel Bolya; Judy Hoffman
Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning.Yinghua Zhang; Yangqiu Song; Jian Liang; Kun Bai; Qiang Yang
Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks.Abhiroop Bhattacharjee; Priyadarshini Panda
An Adversarial Attack Defending System for Securing In-Vehicle Networks.Yi Li; Jing Lin; Kaiqi Xiong
2020-08-24
Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation.Binghui Wang; Jinyuan Jia; Xiaoyu Cao; Neil Zhenqiang Gong
2020-08-23
Developing and Defeating Adversarial Examples.Ian McDiarmid-Sterling; Allan Moser
Ptolemy: Architecture Support for Robust Deep Learning.Yiming Gan; Yuxian Qiu; Jingwen Leng; Minyi Guo; Yuhao Zhu
PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards.Masoud Hashemi; Ali Fathi
2020-08-22
Self-Competitive Neural Networks.Iman Saberi; Fathiyeh Faghih
2020-08-21
A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples.Julia Lust; Alexandru Paul Condurache
2020-08-20
Towards adversarial robustness with 01 loss neural networks.Yunzhe Xue; Meiyan Xie; Usman Roshan
On Attribution of Deepfakes.Baiwu Zhang; Jin Peng Zhou; Ilia Shumailov; Nicolas Papernot
β-Variational Classifiers Under Attack.Marco Maggipinto; Matteo Terzi; Gian Antonio Susto
Yet Another Intermediate-Level Attack.Qizhang Li; Yiwen Guo; Hao Chen
2020-08-19
Prototype-based interpretation of the functionality of neurons in winner-take-all neural networks.Ramin Zarei Sabzevar; Kamaledin Ghiasi-Shirazi; Ahad Harati
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training.Alfred Laugros; Alice Caplier; Matthieu Ospici
On ℓ_p-norm Robustness of Ensemble Stumps and Trees.Yihan Wang; Huan Zhang; Hongge Chen; Duane Boning; Cho-Jui Hsieh
2020-08-18
Improving adversarial robustness of deep neural networks by using semantic information.Lina Wang; Rui Tang; Yawei Yue; Xingshu Chen; Wei Wang; Yi Zhu; Xuemei Zeng
Direct Adversarial Training for GANs.Ziqiang Li
Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization.Feihu Huang; Shangqian Gao; Jian Pei; Heng Huang
2020-08-17
A Deep Dive into Adversarial Robustness in Zero-Shot Learning.Mehmet Kerim Yucel; Ramazan Gokberk Cinbis; Pinar Duygulu
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems.Arindam Jati; Chin-Cheng Hsu; Monisankha Pal; Raghuveer Peri; Wael AbdAlmageed; Shrikanth Narayanan
Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection.Luca Demetrio; Scott E. Coull; Battista Biggio; Giovanni Lagorio; Alessandro Armando; Fabio Roli
Robustness Verification of Quantum Classifiers. (81%)Ji Guan; Wang Fang; Mingsheng Ying
2020-08-16
TextDecepter: Hard Label Black Box Attack on Text Classifiers.Sachin Saxena
Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks.Elahe Arani; Fahad Sarfraz; Bahram Zonooz
2020-08-15
Relevance Attack on Detectors.Sizhe Chen; Fan He; Xiaolin Huang; Kun Zhang
2020-08-14
Defending Adversarial Attacks without Adversarial Attacks in Deep Reinforcement Learning.Xinghua Qu; Yew-Soon Ong; Abhishek Gupta; Zhu Sun
On the Generalization Properties of Adversarial Training.Yue Xing; Qifan Song; Guang Cheng
Generating Image Adversarial Examples by Embedding Digital Watermarks.Yuexin Xiang; Tiantian Li; Wei Ren; Tianqing Zhu; Kim-Kwang Raymond Choo
2020-08-13
Adversarial Training and Provable Robustness: A Tale of Two Objectives.Jiameng Fan; Wenchao Li
Semantically Adversarial Learnable Filters.Ali Shahin Shamsabadi; Changjae Oh; Andrea Cavallaro
Continuous Patrolling Games. (45%)Steve Alpern; Thuy Bui; Thomas Lidbetter; Katerina Papadaki
2020-08-12
Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise.Alex Serban; Erik Poll; Joost Visser
Defending Adversarial Examples via DNN Bottleneck Reinforcement.Wenqing Liu; Miaojing Shi; Teddy Furon; Li Li
Feature Binding with Category-Dependant MixUp for Semantic Segmentation and Adversarial Robustness.Md Amirul Islam; Matthew Kowal; Konstantinos G. Derpanis; Neil D. B. Bruce
Semantics-preserving adversarial attacks in NLP.Rahul Singh; Tarun Joshi; Vijayan N. Nair; Agus Sudjianto
2020-08-11
Revisiting Adversarially Learned Injection Attacks Against Recommender Systems.Jiaxi Tang; Hongyi Wen; Ke Wang
2020-08-10
Informative Dropout for Robust Representation Learning: A Shape-bias Perspective.Baifeng Shi; Dinghuai Zhang; Qi Dai; Zhanxing Zhu; Yadong Mu; Jingdong Wang
FireBERT: Hardening BERT-based classifiers against adversarial attack.Gunnar Mein; Kevin Hartman; Andrew Morris
2020-08-09
Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems.Mohammad J. Hashemi; Eric Keller
Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks.Xiaosen Wang; Yichen Yang; Yihe Deng; Kun He
2020-08-08
Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length.Linhai Ma; Liang Liang
2020-08-07
Visual Attack and Defense on Text.Shengjun Liu; Ningkang Jiang; Yuanbin Wu
Optimizing Information Loss Towards Robust Neural Networks.Philip Sperl; Konstantin Böttinger
Adversarial Examples on Object Recognition: A Comprehensive Survey.Alex Serban; Erik Poll; Joost Visser
2020-08-06
Stronger and Faster Wasserstein Adversarial Attacks.Kaiwen Wu; Allen Houze Wang; Yaoliang Yu
Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations.Ziquan Liu; Yufei Cui; Antoni B. Chan
2020-08-05
One word at a time: adversarial attacks on retrieval models.Nisarg Raval; Manisha Verma
Robust Deep Reinforcement Learning through Adversarial Loss.Tuomas Oikarinen; Wang Zhang; Alexandre Megretski; Luca Daniel; Tsui-Wei Weng
2020-08-04
Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples.Xiaojun Jia; Xingxing Wei; Xiaochun Cao; Xiaoguang Han
TREND: Transferability based Robust ENsemble Design.Deepak Ravikumar; Sangamesh Kodge; Isha Garg; Kaushik Roy
Can Adversarial Weight Perturbations Inject Neural Backdoors?Siddhant Garg; Adarsh Kumar; Vibhor Goel; Yingyu Liang
Entropy Guided Adversarial Model for Weakly Supervised Object Localization.Sabrina Narimene Benassou; Wuzhen Shi; Feng Jiang
2020-08-03
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks.Haoqiang Guo; Lu Peng; Jian Zhang; Fang Qi; Lide Duan
Anti-Bandit Neural Architecture Search for Model Defense.Hanlin Chen; Baochang Zhang; Song Xue; Xuan Gong; Hong Liu; Rongrong Ji; David Doermann
2020-08-01
Efficient Adversarial Attacks for Visual Object Tracking.Siyuan Liang; Xingxing Wei; Siyuan Yao; Xiaochun Cao
Trojaning Language Models for Fun and Profit.Xinyang Zhang; Zheng Zhang; Shouling Ji; Ting Wang
2020-07-31
Vulnerability Under Adversarial Machine Learning: Bias or Variance?Hossein Aboutalebi; Mohammad Javad Shafiee; Michelle Karg; Christian Scharfenberger; Alexander Wong
Physical Adversarial Attack on Vehicle Detector in the Carla Simulator.Tong Wu; Xuefei Ning; Wenshuo Li; Ranran Huang; Huazhong Yang; Yu Wang
Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers.Brian Kim; Yalin E. Sagduyu; Tugba Erpek; Kemal Davaslioglu; Sennur Ulukus
TEAM: We Need More Powerful Adversarial Examples for DNNs.Yaguan Qian; Ximin Zhang; Bin Wang; Wei Li; Zhaoquan Gu; Haijiang Wang; Wassim Swaileh
2020-07-30
Black-box Adversarial Sample Generation Based on Differential Evolution.Junyu Lin; Lei Xu; Yingqi Liu; Xiangyu Zhang
A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks.Yi Zeng; Han Qiu; Gerard Memmi; Meikang Qiu
vWitness: Certifying Web Page Interactions with Computer Vision. (83%)He Shuang; Lianying Zhao; David Lie
2020-07-29
End-to-End Adversarial White Box Attacks on Music Instrument Classification.Katharina Prinz (Johannes Kepler University Linz); Arthur Flexer (Johannes Kepler University Linz)
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data.Kai Steverson; Jonathan Mullin; Metin Ahiskali
Generative Classifiers as a Basis for Trustworthy Computer Vision.Radek Mackowiak; Lynton Ardizzone; Ullrich Köthe; Carsten Rother
Stylized Adversarial Defense.Muzammal Naseer; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Fatih Porikli
Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers.Jayaram Raghuram; Varun Chandrasekaran; Somesh Jha; Suman Banerjee
2020-07-28
Cassandra: Detecting Trojaned Networks from Adversarial Perturbations.Xiaoyu Zhang; Ajmal Mian; Rohit Gupta; Nazanin Rahnavard; Mubarak Shah
Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning.Jirong Yi; Raghu Mudumbai; Weiyu Xu
Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training.Anna-Kathrin Kopetzki; Stephan Günnemann
Label-Only Membership Inference Attacks.Christopher A. Choquette-Choo; Florian Tramer; Nicholas Carlini; Nicolas Papernot
2020-07-27
Attacking and Defending Machine Learning Applications of Public Cloud.Dou Goodman; Hao Xin
KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real Transfer for Robotics Manipulation.En Yen Puang; Keng Peng Tee; Wei Jing
From Sound Representation to Model Robustness.Mohamad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing.Yi Zhang; Jitao Sang
2020-07-26
RANDOM MASK: Towards Robust Convolutional Neural Networks.Tiange Luo; Tianle Cai; Mengxiao Zhang; Siyu Chen; Liwei Wang
Robust Collective Classification against Structural Attacks.Kai Zhou; Yevgeniy Vorobeychik
Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection. (1%)Elizabeth Newman; Lars Ruthotto; Joseph Hart; Bart van Bloemen Waanders
2020-07-25
MirrorNet: Bio-Inspired Adversarial Attack for Camouflaged Object Segmentation.Jinnan Yan; Trung-Nghia Le; Khanh-Duy Nguyen; Minh-Triet Tran; Thanh-Toan Do; Tam V. Nguyen
Adversarial Privacy-preserving Filter.Jiaming Zhang; Jitao Sang; Xian Zhao; Xiaowen Huang; Yanfeng Sun; Yongli Hu
MP3 Compression To Diminish Adversarial Noise in End-to-End Speech Recognition.Iustina Andronic; Ludwig Kürzinger; Edgar Ricardo Chavez Rosas; Gerhard Rigoll; Bernhard U. Seeber
2020-07-24
Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation. (1%)Luyu Yang; Yan Wang; Mingfei Gao; Abhinav Shrivastava; Kilian Q. Weinberger; Wei-Lun Chao; Ser-Nam Lim
2020-07-23
Provably Robust Adversarial Examples.Dimitar I. Dimitrov; Gagandeep Singh; Timon Gehr; Martin Vechev
2020-07-22
SOCRATES: Towards a Unified Platform for Neural Network Verification.Long H. Pham; Jiaying Li; Jun Sun
Adversarial Training Reduces Information and Improves Transferability.Matteo Terzi; Alessandro Achille; Marco Maggipinto; Gian Antonio Susto
Robust Machine Learning via Privacy/Rate-Distortion Theory.Ye Wang; Shuchin Aeron; Adnan Siraj Rakin; Toshiaki Koike-Akino; Pierre Moulin
Threat of Adversarial Attacks on Face Recognition: A Comprehensive Survey.Fatemeh Vakhshiteh; Raghavendra Ramachandra; Ahmad Nickabadi
2020-07-21
Audio Adversarial Examples for Robust Hybrid CTC/Attention Speech Recognition.Ludwig Kürzinger; Edgar Ricardo Chavez Rosas; Lujun Li; Tobias Watzel; Gerhard Rigoll
Towards Visual Distortion in Black-Box Attacks.Nannan Li; Zhenzhong Chen
2020-07-20
DeepNNK: Explaining deep models and their generalization using polytope interpolation.Sarath Shekkizhar; Antonio Ortega
Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks.Nupur Thakur; Yuzhen Ding; Baoxin Li
Robust Tracking against Adversarial Attacks.Shuai Jia; Chao Ma; Yibing Song; Xiaokang Yang
Scaling Polyhedral Neural Network Verification on GPUs.Christoph Müller; François Serre; Gagandeep Singh; Markus Püschel; Martin Vechev
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers.Yuzhen Ding; Nupur Thakur; Baoxin Li
2020-07-19
Semantic Equivalent Adversarial Data Augmentation for Visual Question Answering.Ruixue Tang; Chao Ma; Wei Emma Zhang; Qi Wu; Xiaokang Yang
Exploiting vulnerabilities of deep neural networks for privacy protection.Ricardo Sanchez-Matilla; Chau Yi Li; Ali Shahin Shamsabadi; Riccardo Mazzon; Andrea Cavallaro
Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency.Shasha Li; Shitong Zhu; Sudipta Paul; Amit Roy-Chowdhury; Chengyu Song; Srikanth Krishnamurthy; Ananthram Swami; Kevin S Chan
Adversarial Immunization for Improving Certifiable Robustness on Graphs.Shuchang Tao; Huawei Shen; Qi Cao; Liang Hou; Xueqi Cheng
2020-07-18
DDR-ID: Dual Deep Reconstruction Networks Based Image Decomposition for Anomaly Detection.Dongyun Lin; Yiqun Li; Shudong Xie; Tin Lay Nwe; Sheng Dong
Towards Quantum-Secure Authentication and Key Agreement via Abstract Multi-Agent Interaction. (1%)Ibrahim H. Ahmed; Josiah P. Hanna; Elliot Fosong; Stefano V. Albrecht
2020-07-17
Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of Multimodal Data with Adversarial Defense.Sayeed Shafayet Chowdhury; Kaji Mejbaul Islam; Rouhan Noor
Neural Networks with Recurrent Generative Feedback.Yujia Huang; James Gornet; Sihui Dai; Zhiding Yu; Tan Nguyen; Doris Y. Tsao; Anima Anandkumar
2020-07-16
Understanding and Diagnosing Vulnerability under Adversarial Attacks.Haizhong Zheng; Ziqi Zhang; Honglak Lee; Atul Prakash
Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources.Yun-Yun Tsai; Pin-Yu Chen; Tsung-Yi Ho
Accelerated Stochastic Gradient-free and Projection-free Methods.Feihu Huang; Lue Tao; Songcan Chen
Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data.Julian Bitterwolf; Alexander Meinke; Matthias Hein
An Empirical Study on the Robustness of NAS based Architectures.Chaitanya Devaguptapu; Devansh Agarwal; Gaurav Mittal; Vineeth N Balasubramanian
Do Adversarially Robust ImageNet Models Transfer Better?Hadi Salman; Andrew Ilyas; Logan Engstrom; Ashish Kapoor; Aleksander Madry
Learning perturbation sets for robust machine learning.Eric Wong; J. Zico Kolter
On Robustness and Transferability of Convolutional Neural Networks. (1%)Josip Djolonga; Jessica Yung; Michael Tschannen; Rob Romijnders; Lucas Beyer; Alexander Kolesnikov; Joan Puigcerver; Matthias Minderer; Alexander D'Amour; Dan Moldovan; Sylvain Gelly; Neil Houlsby; Xiaohua Zhai; Mario Lucic
Less is More: A privacy-respecting Android malware classifier using Federated Learning. (1%)Rafa Gálvez; Veelasha Moonsamy; Claudia Diaz
2020-07-15
A Survey of Privacy Attacks in Machine Learning.Maria Rigaki; Sebastian Garcia
Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels.Wenjie Wan; Zhaodi Zhang; Yiwei Zhu; Min Zhang; Fu Song
A Survey on Security Attacks and Defense Techniques for Connected and Autonomous Vehicles.Minh Pham; Kaiqi Xiong
2020-07-14
Towards robust sensing for Autonomous Vehicles: An adversarial perspective.Apostolos Modas; Ricardo Sanchez-Matilla; Pascal Frossard; Andrea Cavallaro
Robustifying Reinforcement Learning Agents via Action Space Adversarial Training.Kai Liang Tan; Yasaman Esfandiari; Xian Yeow Lee; Aakanksha; Soumik Sarkar
Bounding The Number of Linear Regions in Local Area for Neural Networks with ReLU Activations.Rui Zhu; Bo Lin; Haixu Tang
Multitask Learning Strengthens Adversarial Robustness.Chengzhi Mao; Amogh Gupta; Vikram Nitin; Baishakhi Ray; Shuran Song; Junfeng Yang; Carl Vondrick
Adversarial Examples and Metrics.Nico Döttling; Kathrin Grosse; Michael Backes; Ian Molloy
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows.Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack.Yupeng Cheng; Qing Guo; Felix Juefei-Xu; Wei Feng; Shang-Wei Lin; Weisi Lin; Yang Liu
Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components.Ken Alparslan; Yigit Alparslan; Matthew Burlick
Towards a Theoretical Understanding of the Robustness of Variational Autoencoders.Alexander Camuto; Matthew Willetts; Stephen Roberts; Chris Holmes; Tom Rainforth
2020-07-13
A simple defense against adversarial attacks on heatmap explanations.Laura Rieger; Lars Kai Hansen
Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations.Chaoning Zhang; Philipp Benz; Tooba Imtiaz; In-So Kweon
Adversarial robustness via robust low rank representations.Pranjal Awasthi; Himanshu Jain; Ankit Singh Rawat; Aravindan Vijayaraghavan
Security and Machine Learning in the Real World.Ivan Evtimov; Weidong Cui; Ece Kamar; Emre Kiciman; Tadayoshi Kohno; Jerry Li
Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes.Satya Narayan Shukla; Anit Kumar Sahu; Devin Willmott; J. Zico Kolter
Calling Out Bluff: Attacking the Robustness of Automatic Scoring Systems with Simple Adversarial Testing.Yaman Kumar; Mehar Bhatia; Anubha Kabra; Jessy Junyi Li; Di Jin; Rajiv Ratn Shah
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems.Hadi Abdullah; Kevin Warren; Vincent Bindschaedler; Nicolas Papernot; Patrick Traynor
Patch-wise Attack for Fooling Deep Neural Network.Lianli Gao; Qilong Zhang; Jingkuan Song; Xianglong Liu; Heng Tao Shen
2020-07-12
Generating Fluent Adversarial Examples for Natural Languages.Huangzhao Zhang; Hao Zhou; Ning Miao; Lei Li
Adversarial jamming attacks and defense strategies via adaptive deep reinforcement learning.Feng Wang; Chen Zhong; M. Cenk Gursoy; Senem Velipasalar
Probabilistic Jacobian-based Saliency Maps Attacks.Théo Combey; António Loison; Maxime Faucher; Hatem Hajri
2020-07-11
Understanding Object Detection Through An Adversarial Lens.Ka-Ho Chow; Ling Liu; Mehmet Emre Gursoy; Stacey Truex; Wenqi Wei; Yanzhao Wu
ManiGen: A Manifold Aided Black-box Generator of Adversarial Examples.Guanxiong Liu; Issa Khalil; Abdallah Khreishah; Abdulelah Algosaibi; Adel Aldalbahi; Mohammed Alaneem; Abdulaziz Alhumam; Mohammed Anan
Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification. (15%)Francisco Utrera; Evan Kravitz; N. Benjamin Erichson; Rajiv Khanna; Michael W. Mahoney
2020-07-10
Improved Detection of Adversarial Images Using Deep Neural Networks.Yutong Gao; Yi Pan
Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection.Qingsong Yao; Zecheng He; Hu Han; S. Kevin Zhou
Generating Adversarial Inputs Using A Black-box Differential Technique.João Batista Pereira Matos Júnior; Lucas Carvalho Cordeiro; Marcelo d'Amorim; Xiaowei Huang
2020-07-09
Improving Adversarial Robustness by Enforcing Local and Global Compactness.Anh Bui; Trung Le; He Zhao; Paul Montague; Olivier deVel; Tamas Abraham; Dinh Phung
Boundary thickness and robustness in learning models.Yaoqing Yang; Rajiv Khanna; Yaodong Yu; Amir Gholami; Kurt Keutzer; Joseph E. Gonzalez; Kannan Ramchandran; Michael W. Mahoney
Node Copying for Protection Against Graph Neural Network Topology Attacks.Florence Regol; Soumyasundar Pal; Mark Coates
Efficient detection of adversarial images.Darpan Kumar Yadav; Kartik Mundra; Rahul Modpur; Arpan Chattopadhyay; Indra Narayan Kar
2020-07-08
How benign is benign overfitting?Amartya Sanyal; Puneet K Dokania; Varun Kanade; Philip H. S. Torr
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations.Giulio Lovisotto; Henry Turner; Ivo Sluganovic; Martin Strohmeier; Ivan Martinovic
RobFR: Benchmarking Adversarial Robustness on Face Recognition.Xiao Yang; Dingcheng Yang; Yinpeng Dong; Hang Su; Wenjian Yu; Jun Zhu
A Critical Evaluation of Open-World Machine Learning.Liwei Song; Vikash Sehwag; Arjun Nitin Bhagoji; Prateek Mittal
On the relationship between class selectivity, dimensionality, and robustness.Matthew L. Leavitt; Ari S. Morcos
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs.Rana Abou Khamis; Ashraf Matrawy
2020-07-07
Robust Learning with Frequency Domain Regularization.Weiyu Guo; Yidong Ouyang
Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability.Utku Ozbulak; Jonathan Peck; Wesley De Neve; Bart Goossens; Yvan Saeys; Arnout Van Messem
Fast Training of Deep Neural Networks Robust to Adversarial Perturbations.Justin Goodwin; Olivia Brown; Victoria Helus
Making Adversarial Examples More Transferable and Indistinguishable.Junhua Zou; Yexin Duan; Boyu Li; Wu Zhang; Yu Pan; Zhisong Pan
Detection as Regression: Certified Object Detection by Median Smoothing.Ping-yeh Chiang; Michael J. Curry; Ahmed Abdelkader; Aounon Kumar; John Dickerson; Tom Goldstein
2020-07-06
Certifying Decision Trees Against Evasion Attacks by Program Analysis.Stefano Calzavara; Pietro Ferrara; Claudio Lucchese
On Data Augmentation and Adversarial Risk: An Empirical Analysis.Hamid Eghbal-zadeh; Khaled Koutini; Paul Primus; Verena Haunschmid; Michal Lewandowski; Werner Zellinger; Bernhard A. Moser; Gerhard Widmer
Understanding and Improving Fast Adversarial Training.Maksym Andriushchenko; Nicolas Flammarion
Black-box Adversarial Example Generation with Normalizing Flows.Hadi M. Dolatabadi; Sarah Erfani; Christopher Leckie
2020-07-05
Adversarial Learning in the Cyber Security Domain.Ihai Rosenberg; Asaf Shabtai; Yuval Elovici; Lior Rokach
2020-07-04
On Connections between Regularizations for Improving DNN Robustness.Yiwen Guo; Long Chen; Yurong Chen; Changshui Zhang
Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors.Zijian Jiang; Jianwen Zhou; Haiping Huang
Deep Active Learning via Open Set Recognition. (1%)Jaya Krishna Mandivarapu; Blake Camp; Rolando Estrada
2020-07-03
Towards Robust Deep Learning with Ensemble Networks and Noisy Layers.Yuting Liang; Reza Samavi
2020-07-02
Efficient Proximal Mapping of the 1-path-norm of Shallow Networks.Fabian Latorre; Paul Rolland; Nadav Hallak; Volkan Cevher
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment.Xabier Echeberria-Barrio; Amaia Gil-Lerchundi; Ines Goicoechea-Telleria; Raul Orduna-Urrutia
Decoder-free Robustness Disentanglement without (Additional) Supervision.Yifei Wang; Dan Peng; Furui Liu; Zhenguo Li; Zhitang Chen; Jiansheng Yang
Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring.Zhihui Shao; Jianyi Yang; Shaolei Ren
Trace-Norm Adversarial Examples.Ehsan Kazemi; Thomas Kerdreux; Liqiang Wang
Generating Adversarial Examples with Controllable Non-transferability.Renzhi Wang; Tianwei Zhang; Xiaofei Xie; Lei Ma; Cong Tian; Felix Juefei-Xu; Yang Liu
2020-07-01
Unifying Model Explainability and Robustness via Machine-Checkable Concepts.Vedant Nanda; Till Speicher; John P. Dickerson; Krishna P. Gummadi; Muhammad Bilal Zafar
Measuring Robustness to Natural Distribution Shifts in Image Classification.Rohan Taori; Achal Dave; Vaishaal Shankar; Nicholas Carlini; Benjamin Recht; Ludwig Schmidt
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks.Kishor Datta Gupta; Dipankar Dasgupta; Zahid Akhtar
Query-Free Adversarial Transfer via Undertrained Surrogates.Chris Miller; Soroush Vosoughi
Adversarial Example Games.Avishek Joey Bose; Gauthier Gidel; Hugo Berard; Andre Cianflone; Pascal Vincent; Simon Lacoste-Julien; William L. Hamilton
Robustness against Relational Adversary.Yizhen Wang; Xiaozhu Meng; Ke Wang; Mihai Christodorescu; Somesh Jha
A Le Cam Type Bound for Adversarial Learning and Applications.Qiuling Xu; Kevin Bello; Jean Honorio
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey.Samuel Henrique Silva; Peyman Najafirad
2020-06-30
Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures.Jiachen Sun; Yulong Cao; Qi Alfred Chen; Z. Morley Mao
Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection.Deqiang Li; Qianmu Li
Black-box Certification and Learning under Adversarial Perturbations.Hassan Ashtiani; Vinayak Pathak; Ruth Urner
Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications.Eric Wong; Tim Schneider; Joerg Schmitt; Frank R. Schmidt; J. Zico Kolter
Generating Adversarial Examples with an Optimized Quality.Aminollah Khormali; DaeHun Nyang; David Mohaisen
2020-06-29
Harnessing Adversarial Distances to Discover High-Confidence Errors.Walter Bennette; Karsten Maurer; Sean Sisti
Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification.Chen Dan; Yuting Wei; Pradeep Ravikumar
Legal Risks of Adversarial Machine Learning Research.Ram Shankar Siva Kumar; Jonathon Penney; Bruce Schneier; Kendra Albert
Biologically Inspired Mechanisms for Adversarial Robustness.Manish V. Reddy; Andrzej Banburski; Nishka Pant; Tomaso Poggio
Improving Uncertainty Estimates through the Relationship with Adversarial Robustness.Yao Qin; Xuezhi Wang; Alex Beutel; Ed H. Chi
2020-06-28
FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications.Yunfei Song; Tian Liu; Tongquan Wei; Xiangfeng Wang; Zhe Tao; Mingsong Chen
Geometry-Inspired Top-k Adversarial Perturbations.Nurislam Tursynbek; Aleksandr Petiushko; Ivan Oseledets
2020-06-26
Orthogonal Deep Models As Defense Against Black-Box Attacks.Mohammad A. A. K. Jalwana; Naveed Akhtar; Mohammed Bennamoun; Ajmal Mian
Informative Outlier Matters: Robustifying Out-of-distribution Detection Using Outlier Mining.Jiefeng Chen; Yixuan Li; Xi Wu; Yingyu Liang; Somesh Jha
Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks.Ali Mirzaeian; Jana Kosecka; Houman Homayoun; Tinoosh Mohsenin; Avesta Sasan
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?Kaidi Jin; Tianwei Zhang; Chao Shen; Yufei Chen; Ming Fan; Chenhao Lin; Ting Liu
2020-06-25
Smooth Adversarial Training.Cihang Xie; Mingxing Tan; Boqing Gong; Alan Yuille; Quoc V. Le
Proper Network Interpretability Helps Adversarial Robustness in Classification.Akhilan Boopathy; Sijia Liu; Gaoyuan Zhang; Cynthia Liu; Pin-Yu Chen; Shiyu Chang; Luca Daniel
Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability.Kaizhao Liang; Jacky Y. Zhang; Boxin Wang; Zhuolin Yang; Oluwasanmi Koyejo; Bo Li
Can 3D Adversarial Logos Cloak Humans?Yi Wang; Jingyang Zhou; Tianlong Chen; Sijia Liu; Shiyu Chang; Chandrajit Bajaj; Zhangyang Wang
2020-06-24
Defending against adversarial attacks on medical imaging AI system, classification or detection?Xin Li; Deng Pan; Dongxiao Zhu
Compositional Explanations of Neurons.Jesse Mu; Jacob Andreas
Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks.Huiying Li; Shawn Shan; Emily Wenger; Jiayun Zhang; Haitao Zheng; Ben Y. Zhao
Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness.Xingjun Ma; Linxi Jiang; Hanxun Huang; Zejia Weng; James Bailey; Yu-Gang Jiang
2020-06-23
RayS: A Ray Searching Method for Hard-label Adversarial Attack.Jinghui Chen; Quanquan Gu
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks.Francesco Croce; Maksym Andriushchenko; Naman D. Singh; Nicolas Flammarion; Matthias Hein
Adversarial Robustness of Deep Sensor Fusion Models.Shaojie Wang; Tong Wu; Ayan Chakrabarti; Yevgeniy Vorobeychik
2020-06-22
Learning to Generate Noise for Multi-Attack Robustness.Divyam Madaan; Jinwoo Shin; Sung Ju Hwang
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models.Cassidy Laidlaw; Sahil Singla; Soheil Feizi
2020-06-21
Network Moments: Extensions and Sparse-Smooth Attacks.Modar Alfadly; Adel Bibi; Emilio Botero; Salman Alsubaihi; Bernard Ghanem
2020-06-20
How do SGD hyperparameters in natural training affect adversarial robustness?Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble.Yi Zhou; Xiaoqing Zheng; Cho-Jui Hsieh; Kai-wei Chang; Xuanjing Huang
Stochastic Shortest Path with Adversarially Changing Costs. (1%)Aviv Rosenberg; Yishay Mansour
2020-06-19
Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples.Josue Ortega Caro; Yilong Ju; Ryan Pyle; Sourav Dey; Wieland Brendel; Fabio Anselmi; Ankit Patel
A general framework for defining and optimizing robustness.Alessandro Tibo; Manfred Jaeger; Kim G. Larsen
Analyzing the Real-World Applicability of DGA Classifiers.Arthur Drichel; Ulrike Meyer; Samuel Schüppen; Dominik Teubert
Towards an Adversarially Robust Normalization Approach.Muhammad Awais; Fahad Shamshad; Sung-Ho Bae
Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers.I. Fursov; A. Zaytsev; N. Kluchnikov; A. Kravchenko; E. Burnaev
Adversarial Attacks for Multi-view Deep Models.Xuli Sun; Shiliang Sun
2020-06-18
Local Competition and Uncertainty for Adversarial Robustness in Deep Learning.Antonios Alexos; Konstantinos P. Panousis; Sotirios Chatzis
Dissecting Deep Networks into an Ensemble of Generative Classifiers for Robust Predictions.Lokender Tiwari; Anish Madan; Saket Anand; Subhashis Banerjee
The Dilemma Between Dimensionality Reduction and Adversarial Robustness.Sheila Alemany; Niki Pissinou
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples.Kaleel Mahmood; Deniz Gurevin; Marten van Dijk; Phuong Ha Nguyen
2020-06-17
Noise or Signal: The Role of Image Backgrounds in Object Recognition.Kai Xiao; Logan Engstrom; Andrew Ilyas; Aleksander Madry
Adversarial Examples Detection and Analysis with Layer-wise Autoencoders.Bartosz Wójcik; Paweł Morawiecki; Marek Śmieja; Tomasz Krzyżek; Przemysław Spurek; Jacek Tabor
Adversarial Defense by Latent Style Transformations.Shuo Wang; Surya Nepal; Alsharif Abuadbba; Carsten Rudolph; Marthie Grobler
Disrupting Deepfakes with an Adversarial Attack that Survives Training.Eran Segalis
Universal Lower-Bounds on Classification Error under Adversarial Attacks and Random Corruption.Elvis Dohmatob
Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning.Vedant Nanda; Samuel Dooley; Sahil Singla; Soheil Feizi; John P. Dickerson
2020-06-16
Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets.Zhihui Shao; Jianyi Yang; Shaolei Ren
SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness.Mohammadamin Tavakoli; Forest Agostinelli; Pierre Baldi
Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs.Christopher Brix; Thomas Noll
On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron.Sergey Bochkanov
AdvMind: Inferring Adversary Intent of Black-Box Attacks.Ren Pang; Xinyang Zhang; Shouling Ji; Xiapu Luo; Ting Wang
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs.Peijie Chen; Chirag Agarwal; Anh Nguyen
2020-06-15
Total Deep Variation: A Stable Regularizer for Inverse Problems.Erich Kobler; Alexander Effland; Karl Kunisch; Thomas Pock
DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder.Ao Zhang; Jinwen Ma
Improving Adversarial Robustness via Unlabeled Out-of-Domain Data.Zhun Deng; Linjun Zhang; Amirata Ghorbani; James Zou
Fast & Accurate Method for Bounding the Singular Values of Convolutional Layers with Application to Lipschitz Regularization.Alexandre Araujo; Benjamin Negrevergne; Yann Chevaleyre; Jamal Atif
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks.Xiang Zhang; Marinka Zitnik
CG-ATTACK: Modeling the Conditional Distribution of Adversarial Perturbations to Boost Black-Box Attack.Yan Feng; Baoyuan Wu; Yanbo Fan; Li Liu; Zhifeng Li; Shutao Xia
Multiscale Deep Equilibrium Models.Shaojie Bai; Vladlen Koltun; J. Zico Kolter
2020-06-14
GradAug: A New Regularization Method for Deep Neural Networks.Taojiannan Yang; Sijie Zhu; Chen Chen
PatchUp: A Regularization Technique for Convolutional Neural Networks.Mojtaba Faramarzi; Mohammad Amini; Akilesh Badrinaaraayanan; Vikas Verma; Sarath Chandar
On Saliency Maps and Adversarial Robustness.Puneet Mangla; Vedant Singh; Vineeth N Balasubramanian
On the transferability of adversarial examples between convex and 01 loss models.Yunzhe Xue; Meiyan Xie; Usman Roshan
Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems.Yuanjiang Cao; Xiaocong Chen; Lina Yao; Xianzhi Wang; Wei Emma Zhang
Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks.Sarada Krithivasan; Sanchari Sen; Anand Raghunathan
Duplicity Games for Deception Design with an Application to Insider Threat Mitigation. (11%)Linan Huang; Quanyan Zhu
2020-06-13
The Pitfalls of Simplicity Bias in Neural Networks.Harshay Shah; Kaustav Tamuly; Aditi Raghunathan; Prateek Jain; Praneeth Netrapalli
Adversarial Self-Supervised Contrastive Learning.Minseon Kim; Jihoon Tack; Sung Ju Hwang
Rethinking Clustering for Robustness.Motasem Alfarra; Juan C. Pérez; Adel Bibi; Ali Thabet; Pablo Arbeláez; Bernard Ghanem
Defensive Approximation: Securing CNNs using Approximate Computing.Amira Guesmi; Ihsen Alouani; Khaled Khasawneh; Mouna Baklouti; Tarek Frikha; Mohamed Abid; Nael Abu-Ghazaleh
2020-06-12
Provably Robust Metric Learning.Lu Wang; Xuanqing Liu; Jinfeng Yi; Yuan Jiang; Cho-Jui Hsieh
Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces.Chaofei Yang; Lei Ding; Yiran Chen; Hai Li
D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack.Qiuling Xu; Guanhong Tao; Xiangyu Zhang
Targeted Adversarial Perturbations for Monocular Depth Prediction.Alex Wong; Safa Cicek; Stefano Soatto
2020-06-11
Large-Scale Adversarial Training for Vision-and-Language Representation Learning.Zhe Gan; Yen-Chun Chen; Linjie Li; Chen Zhu; Yu Cheng; Jingjing Liu
Smoothed Geometry for Robust Attribution.Zifan Wang; Haofan Wang; Shakul Ramkumar; Matt Fredrikson; Piotr Mardziel; Anupam Datta
Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks.Nataniel Ruiz; Sarah Adel Bargal; Stan Sclaroff
Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification.Xu Li; Na Li; Jinghua Zhong; Xixin Wu; Xunying Liu; Dan Su; Dong Yu; Helen Meng
Robustness to Adversarial Attacks in Learning-Enabled Controllers.Zikang Xiong; Joe Eappen; He Zhu; Suresh Jagannathan
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples.Richard Y. Zhang
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors.Suzanne C. Wetstein; Cristina González-Gonzalo; Gerda Bortsova; Bart Liefers; Florian Dubost; Ioannis Katramados; Laurens Hogeweg; Bram van Ginneken; Josien P. W. Pluim; Marleen de Bruijne; Clara I. Sánchez; Mitko Veta
Achieving robustness in classification using optimal transport with hinge regularization.Mathieu Serrurier; Franck Mamalet; Alberto González-Sanz; Thibaut Boissin; Jean-Michel Loubes; Eustasio del Barrio
Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks. (96%)Kathrin Grosse; Taesung Lee; Battista Biggio; Youngja Park; Michael Backes; Ian Molloy
2020-06-10
Evaluating Graph Vulnerability and Robustness using TIGER.Scott Freitas; Duen Horng Chau
Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features.Krishna Kanth Nakka; Mathieu Salzmann
Deterministic Gaussian Averaged Neural Networks.Ryan Campbell; Chris Finlay; Adam M Oberman
Interpolation between Residual and Non-Residual Networks.Zonghan Yang; Yang Liu; Chenglong Bao; Zuoqiang Shi
Towards Certified Robustness of Metric Learning.Xiaochen Yang; Yiwen Guo; Mingzhi Dong; Jing-Hao Xue
2020-06-09
Towards an Intrinsic Definition of Robustness for a Classifier.Théo Giraudon; Vincent Gripon; Matthias Löwe; Franck Vermet
Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access.Jiaqi Ma; Shuangrui Ding; Qiaozhu Mei
GAP++: Learning to generate target-conditioned adversarial examples.Xiaofeng Mao; Yuefeng Chen; Yuhong Li; Yuan He; Hui Xue
Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers.Fangfang Yang; Shaolei Ren
Provable tradeoffs in adversarially robust classification.Edgar Dobriban; Hamed Hassani; David Hong; Alexander Robey
Distributional Robust Batch Contextual Bandits. (1%)Nian Si; Fan Zhang; Zhengyuan Zhou; Jose Blanchet
2020-06-08
Calibrated neighborhood aware confidence measure for deep metric learning.Maryna Karpusha; Sunghee Yun; Istvan Fehervari
A Self-supervised Approach for Adversarial Robustness.Muzammal Naseer; Salman Khan; Munawar Hayat; Fahad Shahbaz Khan; Fatih Porikli
Distributional Robustness with IPMs and links to Regularization and GANs.Hisham Husain
On Universalized Adversarial and Invariant Perturbations.Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
Tricking Adversarial Attacks To Fail.Blerta Lindqvist
Global Robustness Verification Networks.Weidi Sun; Yuteng Lu; Xiyue Zhang; Zhanxing Zhu; Meng Sun
Trade-offs between membership privacy & adversarially robust learning.Jamie Hayes
Adversarial Feature Desensitization.Pouya Bashivan; Reza Bayat; Adam Ibrahim; Kartik Ahuja; Mojtaba Faramarzi; Touraj Laleh; Blake Aaron Richards; Irina Rish
2020-06-07
Extensions and limitations of randomized smoothing for robustness guarantees.Jamie Hayes
Uncertainty-Aware Deep Classifiers using Generative Models.Murat Sensoy; Lance Kaplan; Federico Cerutti; Maryam Saleki
2020-06-06
Unique properties of adversarially trained linear classifiers on Gaussian data.Jamie Hayes
Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?Stefano Melacci; Gabriele Ciravegna; Angelo Sotgiu; Ambra Demontis; Battista Biggio; Marco Gori; Fabio Roli
2020-06-05
Adversarial Image Generation and Training for Deep Convolutional Neural Networks.Ronghua Shi; Hai Shu; Hongtu Zhu; Ziqi Chen
Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing.Vishaal Krishnan; Abed AlRahman Al Makdah; Fabio Pasqualetti
Sponge Examples: Energy-Latency Attacks on Neural Networks.Ilia Shumailov; Yiren Zhao; Daniel Bates; Nicolas Papernot; Robert Mullins; Ross Anderson
2020-06-04
Characterizing the Weight Space for Different Learning Models.Saurav Musunuru; Jay N. Paranjape; Rahul Kumar Dubey; Vijendran G. Venkoparao
Towards Understanding Fast Adversarial Training.Bai Li; Shiqi Wang; Suman Jana; Lawrence Carin
Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning.Haibin Wu; Andy T. Liu; Hung-yi Lee
Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection.Omid Mohamad Nezami; Akshay Chaturvedi; Mark Dras; Utpal Garain
2020-06-02
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization.A. F. M. Shahab Uddin; Mst. Sirazam Monira; Wheemyung Shin; TaeChoong Chung; Sung-Ho Bae
Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense.Jay N. Paranjape; Rahul Kumar Dubey; Vijendran V Gopalan
Perturbation Analysis of Gradient-based Adversarial Attacks.Utku Ozbulak; Manvel Gasparyan; Wesley De Neve; Arnout Van Messem
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start.Zhuoran Liu; Martha Larson
Detecting Audio Attacks on ASR Systems with Dropout Uncertainty.Tejas Jayashankar; Jonathan Le Roux; Pierre Moulin
2020-06-01
Second-Order Provable Defenses against Adversarial Attacks.Sahil Singla; Soheil Feizi
Adversarial Attacks on Reinforcement Learning based Energy Management Systems of Extended Range Electric Delivery Vehicles.Pengyue Wang; Yan Li; Shashi Shekhar; William F. Northrop
Adversarial Attacks on Classifiers for Eye-based User Modelling.Inken Hagestedt (CISPA Helmholtz Center for Information Security); Michael Backes (CISPA Helmholtz Center for Information Security); Andreas Bulling (University of Stuttgart)
Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods.Kyungmi Lee; Anantha P. Chandrakasan
2020-05-31
Evaluations and Methods for Explanation through Robustness Analysis.Cheng-Yu Hsieh; Chih-Kuan Yeh; Xuanqing Liu; Pradeep Ravikumar; Seungyeon Kim; Sanjiv Kumar; Cho-Jui Hsieh
Estimating Principal Components under Adversarial Perturbations.Pranjal Awasthi; Xue Chen; Aravindan Vijayaraghavan
2020-05-30
Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training.Zheng Xu; Ali Shafahi; Tom Goldstein
2020-05-29
SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions.Mao Ye; Chengyue Gong; Qiang Liu
2020-05-28
Monocular Depth Estimators: Vulnerabilities and Attacks.Alwyn Mathew; Aditya Prakash Patra; Jimson Mathew
QEBA: Query-Efficient Boundary-Based Blackbox Attack.Huichen Li; Xiaojun Xu; Xiaolu Zhang; Shuang Yang; Bo Li
Adversarial Attacks and Defense on Texts: A Survey.Aminul Huq; Mst. Tasnim Pervin
Adversarial Robustness of Deep Convolutional Candlestick Learner.Jun-Hao Chen; Samuel Yen-Chi Chen; Yun-Cheng Tsai; Chih-Shiang Shur
2020-05-27
Enhancing Resilience of Deep Learning Networks by Means of Transferable Adversaries.Moritz Seiler; Heike Trautmann; Pascal Kerschke
Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques.Han Qiu; Yi Zeng; Qinkai Zheng; Tianwei Zhang; Meikang Qiu; Gerard Memmi
Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models.Mitch Hill; Jonathan Mitchell; Song-Chun Zhu
Calibrated Surrogate Losses for Adversarially Robust Classification.Han Bao; Clayton Scott; Masashi Sugiyama
2020-05-26
Effects of Forward Error Correction on Communications Aware Evasion Attacks.Matthew DelVecchio; Bryse Flowers; William C. Headley
Investigating a Spectral Deception Loss Metric for Training Machine Learning-based Evasion Attacks.Matthew DelVecchio; Vanessa Arndorfer; William C. Headley
Generating Semantically Valid Adversarial Questions for TableQA.Yi Zhu; Menglin Xia; Yiwei Zhou
2020-05-25
Adversarial Feature Selection against Evasion Attacks.Fei Zhang; Patrick P. K. Chan; Battista Biggio; Daniel S. Yeung; Fabio Roli
2020-05-24
Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification.Sina Däubener; Lea Schönherr; Asja Fischer; Dorothea Kolossa
SoK: Arms Race in Adversarial Malware Detection.Deqiang Li; Qianmu Li; Yanfang Ye; Shouhuai Xu
Adaptive Adversarial Logits Pairing.Shangxi Wu; Jitao Sang; Kaiyuan Xu; Guanhua Zheng; Changsheng Xu
2020-05-23
ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds.Kibok Lee; Zhuoyuan Chen; Xinchen Yan; Raquel Urtasun; Ersin Yumer
Adversarial Attack on Hierarchical Graph Pooling Neural Networks.Haoteng Tang; Guixiang Ma; Yurong Chen; Lei Guo; Wei Wang; Bo Zeng; Liang Zhan
Frontal Attack: Leaking Control-Flow in SGX via the CPU Frontend. (1%)Ivan Puddu; Moritz Schneider; Miro Haller; Srdjan Čapkun
2020-05-22
Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks.Hokuto Hirano; Kazuki Koga; Kazuhiro Takemoto
2020-05-21
Revisiting Role of Autoencoders in Adversarial Settings.Byeong Cheon Kim; Jung Uk Kim; Hakmin Lee; Yong Man Ro
Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack.Hakmin Lee; Hong Joo Lee; Seong Tae Kim; Yong Man Ro
Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition.Qing Wang; Pengcheng Guo; Lei Xie
Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning.Youngjoon Yu; Hong Joo Lee; Byeong Cheon Kim; Jung Uk Kim; Yong Man Ro
2020-05-20
Graph Structure Learning for Robust Graph Neural Networks.Wei Jin; Yao Ma; Xiaorui Liu; Xianfeng Tang; Suhang Wang; Jiliang Tang
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data.Alexander Robey; Hamed Hassani; George J. Pappas
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks.Arash Rahnama; Andrew Tseng
A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks.Yashar Deldjoo; Tommaso Di Noia; Felice Antonio Merra
Feature Purification: How Adversarial Training Performs Robust Deep Learning.Zeyuan Allen-Zhu; Yuanzhi Li
2020-05-19
Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models.Martin Kotuliak; Sandro E. Schoenborn; Andrei Dan
Bias-based Universal Adversarial Patch Attack for Automatic Check-out.Aishan Liu; Jiakai Wang; Xianglong Liu; Bowen Cao; Chongzhi Zhang; Hang Yu
2020-05-18
Universalization of any adversarial attack using very few test examples.Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
On Intrinsic Dataset Properties for Adversarial Machine Learning.Jeffrey Z. Pan; Nicholas Zufelt
Defending Your Voice: Adversarial Attack on Voice Conversion.Chien-yu Huang; Yist Y. Lin; Hung-yi Lee; Lin-shan Lee
Reliability and Robustness analysis of Machine Learning based Phishing URL Detectors.Bushra Sabir (University of Adelaide, CREST - The Centre for Research on Engineering Software Technologies, CSIRO's Data61); M. Ali Babar (University of Adelaide, CREST - The Centre for Research on Engineering Software Technologies); Raj Gaire (CSIRO's Data61); Alsharif Abuadbba (CSIRO's Data61)
Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective.Linhai Ma; Liang Liang
Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks.Linhai Ma; Liang Liang
Spatiotemporal Attacks for Embodied Agents.Aishan Liu; Tairan Huang; Xianglong Liu; Yitao Xu; Yuqing Ma; Xinyun Chen; Stephen J. Maybank; Dacheng Tao
2020-05-17
Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks.Mahdieh Abbasi; Arezoo Rajabi; Christian Gagne; Rakesh B. Bobba
2020-05-16
Universal Adversarial Perturbations: A Survey.Ashutosh Chaubey; Nikhil Agrawal; Kavya Barnwal; Keerat K. Guliani; Pramod Mehta
Encryption Inspired Adversarial Defense for Visual Classification.MaungMaung AprilPyone; Hitoshi Kiya
PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields.Chong Xiang; Arjun Nitin Bhagoji; Vikash Sehwag; Prateek Mittal
2020-05-15
How to Make 5G Communications "Invisible": Adversarial Machine Learning for Wireless Privacy.Brian Kim; Yalin E. Sagduyu; Kemal Davaslioglu; Tugba Erpek; Sennur Ulukus
Practical Traffic-space Adversarial Attacks on Learning-based NIDSs.Dongqi Han; Zhiliang Wang; Ying Zhong; Wenqi Chen; Jiahai Yang; Shuqiang Lu; Xingang Shi; Xia Yin
Initializing Perturbations in Multiple Directions for Fast Adversarial Training.Xunguang Wang; Ship Peng Xu; Eric Ke Wang
2020-05-14
Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning.Jianwen Sun; Tianwei Zhang; Xiaofei Xie; Lei Ma; Yan Zheng; Kangjie Chen; Yang Liu
Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness.Tianhang Zheng; Di Wang; Baochun Li; Jinhui Xu
A Deep Learning-based Fine-grained Hierarchical Learning Approach for Robust Malware Classification.Ahmed Abusnaina; Mohammed Abuhamad; Hisham Alasmary; Afsah Anwar; Rhongho Jang; Saeed Salem; DaeHun Nyang; David Mohaisen
2020-05-13
DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses.Yaxin Li; Wei Jin; Han Xu; Jiliang Tang
2020-05-12
Evaluating Ensemble Robustness Against Adversarial Attacks.George Adam; Romain Speciel
Increased-confidence adversarial examples for improved transferability of Counter-Forensic attacks.Wenjie Li; Benedetta Tondi; Rongrong Ni; Mauro Barni
Adversarial examples are useful too!Ali Borji
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients.Chengcheng Ma; Baoyuan Wu; Shibiao Xu; Yanbo Fan; Yong Zhang; Xiaopeng Zhang; Zhifeng Li
2020-05-11
Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers.Brian Kim; Yalin E. Sagduyu; Kemal Davaslioglu; Tugba Erpek; Sennur Ulukus
Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data.Lu Wang; Huan Zhang; Jinfeng Yi; Cho-Jui Hsieh; Yuan Jiang
2020-05-09
It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations.Samson Tan; Shafiq Joty; Min-Yen Kan; Richard Socher
Class-Aware Domain Adaptation for Improving Adversarial Robustness.Xianxu Hou; Jingxin Liu; Bolei Xu; Xiaolong Wang; Bozhi Liu; Guoping Qiu
2020-05-08
Towards Robustness against Unsuspicious Adversarial Examples.Liang Tong; Minzhe Guo; Atul Prakash; Yevgeniy Vorobeychik
2020-05-07
Efficient Exact Verification of Binarized Neural Networks.Kai Jia; Martin Rinard
Projection & Probability-Driven Black-Box Attack.Jie Li; Rongrong Ji; Hong Liu; Jianzhuang Liu; Bineng Zhong; Cheng Deng; Qi Tian
Defending Hardware-based Malware Detectors against Adversarial Attacks.Abraham Peedikayil Kuruvila; Shamik Kundu; Kanad Basu
2020-05-06
GraCIAS: Grassmannian of Corrupted Images for Adversarial Security.Ankita Shukla; Pavan Turaga; Saket Anand
Training robust neural networks using Lipschitz bounds.Patricia Pauli; Anne Koch; Julian Berberich; Paul Kohler; Frank Allgöwer
2020-05-05
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder.Guanlin Li; Shuya Ding; Jun Luo; Chang Liu
Hacking the Waveform: Generalized Wireless Adversarial Deep Learning.Francesco Restuccia; Salvatore D'Oro; Amani Al-Shawabka; Bruno Costa Rendon; Kaushik Chowdhury; Stratis Ioannidis; Tommaso Melodia
Adversarial Training against Location-Optimized Adversarial Patches.Sukrut Rao; David Stutz; Bernt Schiele
Measuring Adversarial Robustness using a Voronoi-Epsilon Adversary.Hyeongji Kim; Pekka Parviainen; Ketil Malde
2020-05-04
On the Benefits of Models with Perceptually-Aligned Gradients.Gunjan Aggarwal; Abhishek Sinha; Nupur Kumari; Mayank Singh
Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?Marco Melis; Michele Scalas; Ambra Demontis; Davide Maiorca; Battista Biggio; Giorgio Giacinto; Fabio Roli
2020-05-03
Robust Encodings: A Framework for Combating Adversarial Typos.Erik Jones; Robin Jia; Aditi Raghunathan; Percy Liang
2020-05-02
On the Generalization Effects of Linear Transformations in Data Augmentation. (1%)Sen Wu; Hongyang R. Zhang; Gregory Valiant; Christopher Ré
2020-05-01
Jacks of All Trades, Masters Of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks.Neil Fendley; Max Lennon; I-Jeng Wang; Philippe Burlina; Nathan Drenkow
Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models.Bill Yuchen Lin; Seyeon Lee; Rahul Khanna; Xiang Ren
Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees.Jacob H. Seidman; Mahyar Fazlyab; Victor M. Preciado; George J. Pappas
Defense of Word-level Adversarial Attacks via Random Substitution Encoding.Zhaoyang Wang; Hongtao Wang
2020-04-30
Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks.Winston Wu; Dustin Arendt; Svitlana Volkova
Imitation Attacks and Defenses for Black-box Machine Translation Systems.Eric Wallace; Mitchell Stern; Dawn Song
Universal Adversarial Attacks with Natural Triggers for Text Classification.Liwei Song; Xinwei Yu; Hsuan-Tung Peng; Karthik Narasimhan
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness.Pu Zhao; Pin-Yu Chen; Payel Das; Karthikeyan Natesan Ramamurthy; Xue Lin
2020-04-29
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability.Nathan Inkawhich; Kevin J Liang; Binghui Wang; Matthew Inkawhich; Lawrence Carin; Yiran Chen
TAVAT: Token-Aware Virtual Adversarial Training for Language Understanding.Linyang Li; Xipeng Qiu
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP.John X. Morris; Eli Lifland; Jin Yong Yoo; Jake Grigsby; Di Jin; Yanjun Qi
2020-04-28
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks.Pranjal Awasthi; Natalie Frank; Mehryar Mohri
Minority Reports Defense: Defending Against Adversarial Patches.Michael McCoyd; Won Park; Steven Chen; Neil Shah; Ryan Roggenkemper; Minjune Hwang; Jason Xinyu Liu; David Wagner
2020-04-27
DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking.Christopher Hidey; Tuhin Chakrabarty; Tariq Alhindi; Siddharth Varia; Kriste Krstovski; Mona Diab; Smaranda Muresan
Adversarial Fooling Beyond "Flipping the Label".Konda Reddy Mopuri; Vaisakh Shaj; R. Venkatesh Babu
"Call me sexist, but...": Revisiting Sexism Detection Using Psychological Scales and Adversarial Samples. (81%)Mattia Samory; Indira Sen; Julian Kohne; Fabian Floeck; Claudia Wagner
2020-04-26
Transferable Perturbations of Deep Feature Distributions.Nathan Inkawhich; Kevin J Liang; Lawrence Carin; Yiran Chen
Towards Feature Space Adversarial Attack.Qiuling Xu; Guanhong Tao; Siyuan Cheng; Xiangyu Zhang
Printing and Scanning Attack for Image Counter Forensics.Hailey James; Otkrist Gupta; Dan Raviv
Improved Image Wasserstein Attacks and Defenses.Edward J. Hu; Adith Swaminathan; Hadi Salman; Greg Yang
2020-04-25
Improved Adversarial Training via Learned Optimizer.Yuanhao Xiong; Cho-Jui Hsieh
Enabling Fast and Universal Audio Adversarial Attack Using Generative Model.Yi Xie; Zhuohang Li; Cong Shi; Jian Liu; Yingying Chen; Bo Yuan
Harnessing adversarial examples with a surprisingly simple defense.Ali Borji
2020-04-24
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty.Xiyue Zhang; Xiaofei Xie; Lei Ma; Xiaoning Du; Qiang Hu; Yang Liu; Jianjun Zhao; Meng Sun
A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers.Arka Ghosh; Sankha Subhra Mullick; Shounak Datta; Swagatam Das; Rammohan Mallipeddi; Asit Kr. Das
Reevaluating Adversarial Examples in Natural Language.John X. Morris; Eli Lifland; Jack Lanchantin; Yangfeng Ji; Yanjun Qi
2020-04-23
Adversarial Machine Learning in Network Intrusion Detection Systems.Elie Alhajjar; Paul Maxwell; Nathaniel D. Bastian
Adversarial Attacks and Defenses: An Interpretation Perspective.Ninghao Liu; Mengnan Du; Ruocheng Guo; Huan Liu; Xia Hu
Evaluating Adversarial Robustness for Deep Neural Network Interpretability using fMRI Decoding.Patrick McClure; Dustin Moraczewski; Ka Chun Lam; Adam Thomas; Francisco Pereira
On Adversarial Examples for Biomedical NLP Tasks.Vladimir Araujo; Andres Carvallo; Carlos Aspillaga; Denis Parra
Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks.Jianhe Yuan; Zhihai He
Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation.Marvin Klingner; Andreas Bär; Tim Fingscheidt
RAIN: A Simple Approach for Robust and Accurate Image Classification Networks.Jiawei Du; Hanshu Yan; Vincent Y. F. Tan; Joey Tianyi Zhou; Rick Siow Mong Goh; Jiashi Feng
2020-04-22
CodNN -- Robust Neural Networks From Coded Classification.Netanel Raviv; Siddharth Jain; Pulakesh Upadhyaya; Jehoshua Bruck; Anxiao (Andrew) Jiang
Provably robust deep generative models.Filipe Condessa; Zico Kolter
QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks.Priyadarshini Panda
Adversarial examples and where to find them.Niklas Risse; Christina Göpfert; Jan Philip Göpfert
2020-04-21
Scalable Attack on Graph Data by Injecting Vicious Nodes.Jihong Wang; Minnan Luo; Fnu Suya; Jundong Li; Zijiang Yang; Qinghua Zheng
Certifying Joint Adversarial Robustness for Model Ensembles.Mainuddin Ahmad Jonas; David Evans
Probabilistic Safety for Bayesian Neural Networks.Matthew Wicker; Luca Laurenti; Andrea Patane; Marta Kwiatkowska
BERT-ATTACK: Adversarial Attack Against BERT Using BERT.Linyang Li; Ruotian Ma; Qipeng Guo; Xiangyang Xue; Xipeng Qiu
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks.Sanchari Sen; Balaraman Ravindran; Anand Raghunathan
2020-04-20
GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples.Julia Lust; Alexandru Paul Condurache
Approximate exploitability: Learning a best response in large games. (74%)Finbarr Timbers; Nolan Bard; Edward Lockhart; Marc Lanctot; Martin Schmid; Neil Burch; Julian Schrittwieser; Thomas Hubert; Michael Bowling
2020-04-19
Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning.Hongcai Xu; Junpeng Bao; Gaojie Zhang
Adversarial Training for Large Neural Language Models.Xiaodong Liu; Hao Cheng; Pengcheng He; Weizhu Chen; Yu Wang; Hoifung Poon; Jianfeng Gao
Headless Horseman: Adversarial Attacks on Transfer Learning Models.Ahmed Abdelkader; Michael J. Curry; Liam Fowl; Tom Goldstein; Avi Schwarzschild; Manli Shu; Christoph Studer; Chen Zhu
2020-04-18
Protecting Classifiers From Attacks. A Bayesian Approach.Victor Gallego; Roi Naveiro; Alberto Redondo; David Rios Insua; Fabrizio Ruggeri
Single-step Adversarial training with Dropout Scheduling.Vivek B. S.; R. Venkatesh Babu
2020-04-17
Adversarial Attack on Deep Learning-Based Splice Localization.Andras Rozsa; Zheng Zhong; Terrance E. Boult
2020-04-16
Shortcut Learning in Deep Neural Networks.Robert Geirhos; Jörn-Henrik Jacobsen; Claudio Michaelis; Richard Zemel; Wieland Brendel; Matthias Bethge; Felix A. Wichmann
2020-04-15
Targeted Attack for Deep Hashing based Retrieval.Jiawang Bai; Bin Chen; Yiming Li; Dongxian Wu; Weiwei Guo; Shu-tao Xia; En-hui Yang
A Framework for Enhancing Deep Neural Networks Against Adversarial Malware.Deqiang Li; Qianmu Li; Yanfang Ye; Shouhuai Xu
Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers.Yusi Lei; Sen Chen; Lingling Fan; Fu Song; Yang Liu
2020-04-14
On the Optimal Interaction Range for Multi-Agent Systems Under Adversarial Attack.Saad J Saleh
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions.Jon Vadillo; Roberto Santana; Jose A. Lozano
2020-04-13
Adversarial Robustness Guarantees for Random Deep Neural Networks.Giacomo De Palma; Bobak T. Kiani; Seth Lloyd
Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples.Maximilian Mozes; Pontus Stenetorp; Bennett Kleinberg; Lewis D. Griffin
Adversarial Weight Perturbation Helps Robust Generalization.Dongxian Wu; Shu-tao Xia; Yisen Wang
Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension.Adyasha Maharana; Mohit Bansal
Towards Robust Classification with Image Quality Assessment.Yeli Feng; Yiyu Cai
Towards Transferable Adversarial Attack against Deep Face Recognition.Yaoyao Zhong; Weihong Deng
2020-04-12
PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning.Chenglin Yang; Adam Kortylewski; Cihang Xie; Yinzhi Cao; Alan Yuille
2020-04-11
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images.Younghwan Na; Jun Hee Kim; Kyungsu Lee; Juhum Park; Jae Youn Hwang; Jihwan P. Choi
Certified Adversarial Robustness for Deep Reinforcement Learning.Michael Everett; Bjorn Lutjens; Jonathan P. How
Robust Large-Margin Learning in Hyperbolic Space.Melanie Weber; Manzil Zaheer; Ankit Singh Rawat; Aditya Menon; Sanjiv Kumar
Verification of Deep Convolutional Neural Networks Using ImageStars.Hoang-Dung Tran; Stanley Bak; Weiming Xiang; Taylor T. Johnson
2020-04-10
Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems.Eirini Anthi; Lowri Williams; Matilda Rhode; Pete Burnap; Adam Wedgbury
Luring of transferable adversarial perturbations in the black-box paradigm.Rémi Bernhard; Pierre-Alain Moellic; Jean-Max Dutertre
2020-04-09
Blind Adversarial Training: Balance Accuracy and Robustness.Haidong Xie; Xueshuang Xiang; Naijin Liu; Bin Dong
Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness.Haidong Xie; Lixin Qian; Xueshuang Xiang; Naijin Liu
On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems.Ivan Y. Tyukin; Desmond J. Higham; Alexander N. Gorban
2020-04-08
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking.Hongjun Wang; Guangrun Wang; Ya Li; Dongyu Zhang; Liang Lin
2020-04-07
Towards Evaluating the Robustness of Chinese BERT Classifiers.Boxin Wang; Boyuan Pan; Xin Li; Bo Li
Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios.Stefano Calzavara; Claudio Lucchese; Federico Marcuzzi; Salvatore Orlando
Learning to fool the speaker recognition.Jiguo Li; Xinfeng Zhang; Jizheng Xu; Li Zhang; Yue Wang; Siwei Ma; Wen Gao
Universal Adversarial Perturbations Generative Network for Speaker Recognition.Jiguo Li; Xinfeng Zhang; Chuanmin Jia; Jizheng Xu; Li Zhang; Yue Wang; Siwei Ma; Wen Gao
2020-04-05
Approximate Manifold Defense Against Multiple Adversarial Perturbations.Jay Nandy; Wynne Hsu; Mong Li Lee
2020-04-04
Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks.Zuowen Wang; Leo Horne
BAE: BERT-based Adversarial Examples for Text Classification.Siddhant Garg; Goutham Ramakrishnan
2020-04-03
Adversarial Robustness through Regularization: A Second-Order Approach.Avery Ma; Fartash Faghri; Amir-massoud Farahmand
2020-04-01
Evading Deepfake-Image Detectors with White- and Black-Box Attacks.Nicholas Carlini; Hany Farid
Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes.Sravanti Addepalli; Vivek B. S.; Arya Baburaj; Gaurang Sriramanan; R. Venkatesh Babu
Physically Realizable Adversarial Examples for LiDAR Object Detection.James Tu; Mengye Ren; Siva Manivasagam; Ming Liang; Bin Yang; Richard Du; Frank Cheng; Raquel Urtasun
2020-03-31
A Thorough Comparison Study on Adversarial Attacks and Defenses for Common Thorax Disease Classification in Chest X-rays.Chendi Rao; Jiezhang Cao; Runhao Zeng; Qi Chen; Huazhu Fu; Yanwu Xu; Mingkui Tan
2020-03-30
Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement.Chao-Han Huck Yang; Jun Qi; Pin-Yu Chen; Xiaoli Ma; Chin-Hui Lee
Adversarial Attacks on Multivariate Time Series.Samuel Harford; Fazle Karim; Houshang Darabi
Improved Gradient based Adversarial Attacks for Quantized Networks.Kartik Gupta; Thalaiyasingam Ajanthan
Towards Deep Learning Models Resistant to Large Perturbations.Amirreza Shaeiri; Rozhin Nobahari; Mohammad Hossein Rohban
Efficient Black-box Optimization of Adversarial Windows Malware with Constrained Manipulations.Luca Demetrio; Battista Biggio; Giovanni Lagorio; Fabio Roli; Alessandro Armando
2020-03-28
Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning.Tianlong Chen; Sijia Liu; Shiyu Chang; Yu Cheng; Lisa Amini; Zhangyang Wang
DaST: Data-free Substitute Training for Adversarial Attacks.Mingyi Zhou; Jing Wu; Yipeng Liu; Shuaicheng Liu; Ce Zhu
Adversarial Imitation Attack.Mingyi Zhou; Jing Wu; Yipeng Liu; Shuaicheng Liu; Xiang Zhang; Ce Zhu
2020-03-26
Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks.Zain Khan; Jirong Yi; Raghu Mudumbai; Xiaodong Wu; Weiyu Xu
Challenging the adversarial robustness of DNNs based on error-correcting output codes.Bowen Zhang; Benedetta Tondi; Xixiang Lv; Mauro Barni
2020-03-25
Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples.Alejandro Barredo-Arrieta; Javier Del Ser
2020-03-24
Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study.Luan Nguyen; Sunpreet S. Arora; Yuhang Wu; Hao Yang
2020-03-23
Defense Through Diverse Directions.Christopher M. Bender; Yang Li; Yifeng Shi; Michael K. Reiter; Junier B. Oliva
Adversarial Attacks on Monocular Depth Estimation.Ziqi Zhang; Xinge Zhu; Yingwei Li; Xiangqun Chen; Yao Guo
Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations.Saima Sharmin; Nitin Rathi; Priyadarshini Panda; Kaushik Roy
Adversarial Perturbations Fool Deepfake Detectors.Apurva Gandhi; Shomik Jain
2020-03-22
Understanding the robustness of deep neural network classifiers for breast cancer screening.Witold Oleszkiewicz; Taro Makino; Stanisław Jastrzębski; Tomasz Trzciński; Linda Moy; Kyunghyun Cho; Laura Heacock; Krzysztof J. Geras
Architectural Resilience to Foreground-and-Background Adversarial Noise.Carl Cheng; Evan Hu
2020-03-21
Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression.Feiyang Cai; Jiani Li; Xenofon Koutsoukos
Robust Out-of-distribution Detection in Neural Networks.Jiefeng Chen; Yixuan Li; Xi Wu; Yingyu Liang; Somesh Jha
Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises.Bin Yan; Dong Wang; Huchuan Lu; Xiaoyun Yang
2020-03-20
Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning.Cameron Buckner
Investigating Image Applications Based on Spatial-Frequency Transform and Deep Learning Techniques.Qinkai Zheng; Han Qiu; Gerard Memmi; Isabelle Bloch
Quantum noise protects quantum classifiers against adversaries.Yuxuan Du; Min-Hsiu Hsieh; Tongliang Liu; Dacheng Tao; Nana Liu
One Neuron to Fool Them All.Anshuman Suri; David Evans
Adversarial Robustness on In- and Out-Distribution Improves Explainability.Maximilian Augustin; Alexander Meinke; Matthias Hein
2020-03-19
Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates.Amin Ghiasi; Ali Shafahi; Tom Goldstein
Face-Off: Adversarial Face Obfuscation.Varun Chandrasekaran; Chuhan Gao; Brian Tang; Kassem Fawaz; Somesh Jha; Suman Banerjee
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations.Huan Zhang; Hongge Chen; Chaowei Xiao; Bo Li; Mingyan Liu; Duane Boning; Cho-Jui Hsieh
Overinterpretation reveals image classification model pathologies. (81%)Brandon Carter; Siddhartha Jain; Jonas Mueller; David Gifford
2020-03-18
Vulnerabilities of Connectionist AI Applications: Evaluation and Defence.Christian Berghoff; Matthias Neu; Arndt von Twickel
Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles.Songan Zhang; Huei Peng; Subramanya Nageshrao; H. Eric Tseng
Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method.Babak Barazandeh; Meisam Razaviyayn
SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing.Chawin Sitawarin; Supriyo Chakraborty; David Wagner
2020-03-17
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior.Hu Zhang; Linchao Zhu; Yi Zhu; Yi Yang
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples.Haya Brama; Tal Grinshpoun
Adversarial Transferability in Wearable Sensor Systems.Ramesh Kumar Sah; Hassan Ghasemzadeh
2020-03-15
Output Diversified Initialization for Adversarial Attacks.Yusuke Tashiro; Yang Song; Stefano Ermon
Anomalous Example Detection in Deep Learning: A Survey.Saikiran Bulusu; Bhavya Kailkhura; Bo Li; Pramod K. Varshney; Dawn Song
Towards Face Encryption by Generating Adversarial Identity Masks.Xiao Yang; Yinpeng Dong; Tianyu Pang; Hang Su; Jun Zhu; Yuefeng Chen; Hui Xue
Toward Adversarial Robustness via Semi-supervised Robust Training.Yiming Li; Baoyuan Wu; Yan Feng; Yanbo Fan; Yong Jiang; Zhifeng Li; Shutao Xia
2020-03-14
Minimum-Norm Adversarial Examples on KNN and KNN-Based Models.Chawin Sitawarin; David Wagner
Certified Defenses for Adversarial Patches.Ping-Yeh Chiang; Renkun Ni; Ahmed Abdelkader; Chen Zhu; Christoph Studer; Tom Goldstein
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation.Xiaogang Xu; Hengshuang Zhao; Jiaya Jia
On the benefits of defining vicinal distributions in latent space.Puneet Mangla; Vedant Singh; Shreyas Jayant Havaldar; Vineeth N Balasubramanian
2020-03-13
Towards a Resilient Machine Learning Classifier -- a Case Study of Ransomware Detection.Chih-Yuan Yang; Ravi Sahita
GeoDA: a geometric framework for black-box adversarial attacks.Ali Rahmati; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard; Huaiyu Dai
When are Non-Parametric Methods Robust?Robi Bhattacharjee; Kamalika Chaudhuri
2020-03-12
Topological Effects on Attacks Against Vertex Classification.Benjamin A. Miller; Mustafa Çamurcu; Alexander J. Gomez; Kevin Chan; Tina Eliassi-Rad
Inline Detection of DGA Domains Using Side Information.Raaghavi Sivaguru; Jonathan Peck; Femi Olumofin; Anderson Nascimento; Martine De Cock
ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection.Mohammadreza Salehi; Atrin Arya; Barbod Pajoum; Mohammad Otoofi; Amirreza Shaeiri; Mohammad Hossein Rohban; Hamid R. Rabiee
ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems.Jiangnan Li; Yingyuan Yang; Jinyuan Stella Sun; Kevin Tomsovic; Hairong Qi
2020-03-11
Frequency-Tuned Universal Adversarial Attacks.Yingpeng Deng; Lina J. Karam
2020-03-10
SAD: Saliency-based Defenses Against Adversarial Examples.Richard Tran; David Patrick; Michael Geyer; Amanda Fernandez
Using an ensemble color space model to tackle adversarial examples.Shreyank N Gowda; Chun Yuan
Cryptanalytic Extraction of Neural Network Models.Nicholas Carlini; Matthew Jagielski; Ilya Mironov
A Survey of Adversarial Learning on Graphs.Liang Chen; Jintang Li; Jiaying Peng; Tao Xie; Zengxu Cao; Kun Xu; Xiangnan He; Zibin Zheng
2020-03-09
Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift.Remi Tachet des Combes; Han Zhao; Yu-Xiang Wang; Geoff Gordon
Towards Probabilistic Verification of Machine Unlearning.David Marco Sommer; Liwei Song; Sameer Wagh; Prateek Mittal
Manifold Regularization for Locally Stable Deep Neural Networks.Charles Jin; Martin Rinard
Generating Natural Language Adversarial Examples on a Large Scale with Generative Models.Yankun Ren; Jianbin Lin; Siliang Tang; Jun Zhou; Shuang Yang; Yuan Qi; Xiang Ren
Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world.Ivan Fursov; Alexey Zaytsev; Nikita Kluchnikov; Andrey Kravchenko; Evgeny Burnaev
2020-03-08
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM.Rui Zhang; Quanyan Zhu
An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods.Sanghyuk Chun; Seong Joon Oh; Sangdoo Yun; Dongyoon Han; Junsuk Choe; Youngjoon Yoo
On the Robustness of Cooperative Multi-Agent Reinforcement Learning.Jieyu Lin; Kristina Dzeparoska; Sai Qian Zhang; Alberto Leon-Garcia; Nicolas Papernot
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models.Raphaël Dang-Nhu; Gagandeep Singh; Pavol Bielik; Martin Vechev
Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles.Ranjie Duan; Xingjun Ma; Yisen Wang; James Bailey; A. K. Qin; Yun Yang
No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks.Siqi Liu; Arnaud Arindra Adiyoso Setio; Florin C. Ghesu; Eli Gibson; Sasa Grbic; Bogdan Georgescu; Dorin Comaniciu
2020-03-07
Dynamic Backdoor Attacks Against Machine Learning Models.Ahmed Salem; Rui Wen; Michael Backes; Shiqing Ma; Yang Zhang
Adversarial Machine Learning: Bayesian Perspectives. (26%)David Rios Insua; Roi Naveiro; Victor Gallego; Jason Poulos
2020-03-06
Defense against adversarial attacks on spoofing countermeasures of ASV.Haibin Wu; Songxiang Liu; Helen Meng; Hung-yi Lee
Triple Memory Networks: a Brain-Inspired Method for Continual Learning.Liyuan Wang; Bo Lei; Qian Li; Hang Su; Jun Zhu; Yi Zhong
MAB-Malware: A Reinforcement Learning Framework for Attacking Static Malware Classifiers.Wei Song; Xuezixiang Li; Sadia Afroz; Deepali Garg; Dmitry Kuznetsov; Heng Yin
2020-03-05
Towards Practical Lottery Ticket Hypothesis for Adversarial Training.Bai Li; Shiqi Wang; Yunhan Jia; Yantao Lu; Zhenyu Zhong; Lawrence Carin; Suman Jana
Exploiting Verified Neural Networks via Floating Point Numerical Error.Kai Jia; Martin Rinard
Detection and Recovery of Adversarial Attacks with Injected Attractors.Jiyi Zhang; Ee-Chien Chang; Hwee Kuan Lee
Adversarial Robustness Through Local Lipschitzness.Yao-Yuan Yang; Cyrus Rashtchian; Hongyang Zhang; Ruslan Salakhutdinov; Kamalika Chaudhuri
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization.Saehyung Lee; Hyungyu Lee; Sungroh Yoon
Search Space of Adversarial Perturbations against Image Filters.Dang Duy Thang; Toshihiro Matsui
2020-03-04
Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems.Yi Xie; Cong Shi; Zhuohang Li; Jian Liu; Yingying Chen; Bo Yuan
Colored Noise Injection for Training Adversarially Robust Neural Networks.Evgenii Zheltonozhskii; Chaim Baskin; Yaniv Nemcovsky; Brian Chmiel; Avi Mendelson; Alex M. Bronstein
Double Backpropagation for Training Autoencoders against Adversarial Attack.Chengjin Sun; Sizhe Chen; Xiaolin Huang
Black-box Smoothing: A Provable Defense for Pretrained Classifiers.Hadi Salman; Mingjie Sun; Greg Yang; Ashish Kapoor; J. Zico Kolter
Metrics and methods for robustness evaluation of neural networks with generative models.Igor Buzhinsky; Arseny Nerinovsky; Stavros Tripakis
2020-03-03
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks.Francesco Croce; Matthias Hein
Analyzing Accuracy Loss in Randomized Smoothing Defenses.Yue Gao; Harrison Rosenberg; Kassem Fawaz; Somesh Jha; Justin Hsu
Discriminative Multi-level Reconstruction under Compact Latent Space for One-Class Novelty Detection.Jaewoo Park; Yoon Gyo Jung; Andrew Beng Jin Teoh
Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack.Takami Sato; Junjie Shen; Ningfei Wang; Yunhan Jack Jia; Xue Lin; Qi Alfred Chen
Type I Attack for Generative Models.Chengjin Sun; Sizhe Chen; Jia Cai; Xiaolin Huang
2020-03-02
Data-Free Adversarial Perturbations for Practical Black-Box Attack.ZhaoXin Huan; Yulong Wang; Xiaolu Zhang; Lin Shang; Chilin Fu; Jun Zhou
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness.Ahmadreza Jeddi; Mohammad Javad Shafiee; Michelle Karg; Christian Scharfenberger; Alexander Wong
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.Nataniel Ruiz; Sarah Adel Bargal; Stan Sclaroff
Hidden Cost of Randomized Smoothing.Jeet Mohapatra; Ching-Yun Ko; Tsui-Wei Weng; Sijia Liu; Pin-Yu Chen; Luca Daniel
Adversarial Network Traffic: Towards Evaluating the Robustness of Deep Learning-Based Network Traffic Classification.Amir Mahdi Sadeghzadeh; Saeed Shiravi; Rasool Jalili
2020-03-01
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies.Wei Jin; Yaxin Li; Han Xu; Yiqi Wang; Shuiwang Ji; Charu Aggarwal; Jiliang Tang
2020-02-29
Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models.Xiao Zhang; Jinghui Chen; Quanquan Gu; David Evans
Why is the Mahalanobis Distance Effective for Anomaly Detection?Ryo Kamoi; Kei Kobayashi
2020-02-28
Improving Certified Robustness via Statistical Learning with Logical Reasoning.Zhuolin Yang; Zhikuan Zhao; Boxin Wang; Jiawei Zhang; Linyi Li; Hengzhi Pei; Bojan Karlas; Ji Liu; Heng Guo; Ce Zhang; Bo Li
Applying Tensor Decomposition to image for Robustness against Adversarial Attack.Seungju Cho; Tae Joon Jun; Mingu Kang; Daeyoung Kim
2020-02-27
Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT.Lichao Sun; Kazuma Hashimoto; Wenpeng Yin; Akari Asai; Jia Li; Philip Yu; Caiming Xiong
Detecting Patch Adversarial Attacks with Image Residuals.Marius Arvinte; Ahmed Tewfik; Sriram Vishwanath
Certified Defense to Image Transformations via Randomized Smoothing.Marc Fischer; Maximilian Baader; Martin Vechev
Are L2 adversarial examples intrinsically different?Mingxuan Li; Jingyuan Wang; Yufan Wu
TSS: Transformation-Specific Smoothing for Robustness Certification.Linyi Li; Maurice Weber; Xiaojun Xu; Luka Rimanic; Bhavya Kailkhura; Tao Xie; Ce Zhang; Bo Li
On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks.Yue Zhao; Yuwei Wu; Caihua Chen; Andrew Lim
Utilizing Network Properties to Detect Erroneous Inputs.Matt Gorbett; Nathaniel Blanchard
FMix: Enhancing Mixed Sample Data Augmentation. (22%)Ethan Harris; Antonia Marcu; Matthew Painter; Mahesan Niranjan; Adam Prügel-Bennett; Jonathon Hare
2020-02-26
Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy.Aditya Saligrama; Guillaume Leclerc
Invariance vs. Robustness of Neural Networks.Sandesh Kamath; Amit Deshpande; K V Subrahmanyam
Overfitting in adversarially robust deep learning.Leslie Rice; Eric Wong; J. Zico Kolter
MGA: Momentum Gradient Attack on Network.Jinyin Chen; Yixian Chen; Haibin Zheng; Shijing Shen; Shanqing Yu; Dan Zhang; Qi Xuan
Improving Robustness of Deep-Learning-Based Image Reconstruction.Ankit Raj; Yoram Bresler; Bo Li
Defense-PointNet: Protecting PointNet Against Adversarial Attacks.Yu Zhang; Gongbo Liang; Tawfiq Salem; Nathan Jacobs
Adversarial Attack on Deep Product Quantization Network for Image Retrieval.Yan Feng; Bin Chen; Tao Dai; Shutao Xia
Randomization matters. How to defend against strong adversarial attacks.Rafael Pinot; Raphael Ettedgui; Geovani Rizk; Yann Chevaleyre; Jamal Atif
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization.Sicheng Zhu; Xiao Zhang; David Evans
2020-02-25
Understanding and Mitigating the Tradeoff Between Robustness and Accuracy.Aditi Raghunathan; Sang Michael Xie; Fanny Yang; John Duchi; Percy Liang
The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization.Yifei Min; Lin Chen; Amin Karbasi
Gödel's Sentence Is An Adversarial Example But Unsolvable.Xiaodong Qi; Lansheng Han
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks.Kaidi Xu; Sijia Liu; Pin-Yu Chen; Mengshu Sun; Caiwen Ding; Bhavya Kailkhura; Xue Lin
(De)Randomized Smoothing for Certifiable Defense against Patch Attacks.Alexander Levine; Soheil Feizi
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.Jingfeng Zhang; Xilie Xu; Bo Han; Gang Niu; Lizhen Cui; Masashi Sugiyama; Mohan Kankanhalli
Adversarial Ranking Attack and Defense.Mo Zhou; Zhenxing Niu; Le Wang; Qilin Zhang; Gang Hua
2020-02-24
A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA.Giuseppe Ughi; Vinayak Abrol; Jared Tanner
Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks.Matthew J. Roos
Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space.Camilo Pestana; Naveed Akhtar; Wei Liu; David Glance; Ajmal Mian
Towards Rapid and Robust Adversarial Training with One-Step Attacks.Leo Schwinn; René Raab; Björn Eskofier
Precise Tradeoffs in Adversarial Training for Linear Regression.Adel Javanmard; Mahdi Soltanolkotabi; Hamed Hassani
HYDRA: Pruning Adversarially Robust Neural Networks.Vikash Sehwag; Shiqi Wang; Prateek Mittal; Suman Jana
2020-02-23
Adversarial Attack on DL-based Massive MIMO CSI Feedback.Qing Liu; Jiajia Guo; Chao-Kai Wen; Shi Jin
Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference.Ting-Kuei Hu; Tianlong Chen; Haotao Wang; Zhangyang Wang
2020-02-22
Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks.Kirthi Shankar Sivamani; Rajeev Sahay; Aly El Gamal
Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition.Ziwen He; Wei Wang; Jing Dong; Tieniu Tan
Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems.Yiannis Kantaros; Taylor Carpenter; Kaustubh Sridhar; Yahan Yang; Insup Lee; James Weimer
Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples.Guanxiong Liu; Issa Khalil; Abdallah Khreishah
2020-02-21
Polarizing Front Ends for Robust CNNs.Can Bakiskan; Soorya Gopalakrishnan; Metehan Cekic; Upamanyu Madhow; Ramtin Pedarsani
Robustness from Simple Classifiers.Sharon Qian; Dimitris Kalimeris; Gal Kaplun; Yaron Singer
Adversarial Detection and Correction by Matching Prediction Distributions.Giovanni Vacanti; Arnaud Van Looveren
UnMask: Adversarial Detection and Defense Through Robust Feature Alignment.Scott Freitas; Shang-Tse Chen; Zijie J. Wang; Duen Horng Chau
Robustness to Programmable String Transformations via Augmented Abstract Training.Yuhao Zhang; Aws Albarghouthi; Loris D'Antoni
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework.Dinghuai Zhang; Mao Ye; Chengyue Gong; Zhanxing Zhu; Qiang Liu
Adversarial Attacks on Machine Learning Systems for High-Frequency Trading.Micah Goldblum; Avi Schwarzschild; Ankit B. Patel; Tom Goldstein
2020-02-20
Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning.Chao-Han Huck Yang; Jun Qi; Pin-Yu Chen; Yi Ouyang; I-Te Danny Hung; Chin-Hui Lee; Xiaoli Ma
On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective.Motasem Alfarra; Adel Bibi; Hasan Hammoud; Mohamed Gaafar; Bernard Ghanem
A Bayes-Optimal View on Adversarial Examples.Eitan Richardson; Yair Weiss
Towards Certifiable Adversarial Sample Detection.Ilia Shumailov; Yiren Zhao; Robert Mullins; Ross Anderson
Boosting Adversarial Training with Hypersphere Embedding.Tianyu Pang; Xiao Yang; Yinpeng Dong; Kun Xu; Hang Su; Jun Zhu
Byzantine-resilient Decentralized Stochastic Gradient Descent. (5%)Shangwei Guo; Tianwei Zhang; Han Yu; Xiaofei Xie; Lei Ma; Tao Xiang; Yang Liu
2020-02-19
Bayes-TrEx: Model Transparency by Example.Serena Booth; Yilun Zhou; Ankit Shah; Julie Shah
AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks.Xiao Wang; Siyue Wang; Pin-Yu Chen; Xue Lin; Peter Chin
NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion.Aritran Piplai; Sai Sree Laya Chukkapalli; Anupam Joshi
On Adaptive Attacks to Adversarial Example Defenses.Florian Tramer; Nicholas Carlini; Wieland Brendel; Aleksander Madry
Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks.Tsubasa Takahashi
Randomized Smoothing of All Shapes and Sizes.Greg Yang; Tony Duan; J. Edward Hu; Hadi Salman; Ilya Razenshteyn; Jerry Li
2020-02-18
Action-Manipulation Attacks Against Stochastic Bandits: Attacks and Defense.Guanlin Liu; Lifeng Lai
Deflecting Adversarial Attacks.Yao Qin; Nicholas Frosst; Colin Raffel; Garrison Cottrell; Geoffrey Hinton
Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent.Pu Zhao; Pin-Yu Chen; Siyue Wang; Xue Lin
Block Switching: A Stochastic Approach for Deep Learning Security.Xiao Wang; Siyue Wang; Pin-Yu Chen; Xue Lin; Peter Chin
2020-02-17
TensorShield: Tensor-based Defense Against Adversarial Attacks on Images.Negin Entezari; Evangelos E. Papalexakis
On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples.Pamela K. Douglas; Farzad Vasheghani Farahani
Scalable Quantitative Verification For Deep Neural Networks.Teodora Baluta; Zheng Leong Chua; Kuldeep S. Meel; Prateek Saxena
CAT: Customized Adversarial Training for Improved Robustness.Minhao Cheng; Qi Lei; Pin-Yu Chen; Inderjit Dhillon; Cho-Jui Hsieh
On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks.Hisaichi Shibata; Shouhei Hanaoka; Yukihiro Nomura; Naoto Hayashi; Osamu Abe
Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack.Ziwei Guan; Kaiyi Ji; Donald J. Bucci Jr; Timothy Y. Hu; Joseph Palombo; Michael Liston; Yingbin Liang
Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness.Huijie Feng; Chunpeng Wu; Guoyang Chen; Weifeng Zhang; Yang Ning
GRAPHITE: A Practical Framework for Generating Automatic Physical Adversarial Machine Learning Attacks.Ryan Feng; Neal Mangaokar; Jiefeng Chen; Earlence Fernandes; Somesh Jha; Atul Prakash
2020-02-16
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality.Yi Zhang; Orestis Plevrakis; Simon S. Du; Xingguo Li; Zhao Song; Sanjeev Arora
2020-02-15
Undersensitivity in Neural Reading Comprehension.Johannes Welbl; Pasquale Minervini; Max Bartolo; Pontus Stenetorp; Sebastian Riedel
Hold me tight! Influence of discriminative features on deep network boundaries.Guillermo Ortiz-Jimenez; Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
Blind Adversarial Network Perturbations.Milad Nasr; Alireza Bahramali; Amir Houmansadr
2020-02-14
Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets.Dongxian Wu; Yisen Wang; Shu-Tao Xia; James Bailey; Xingjun Ma
Adversarial Distributional Training for Robust Deep Learning.Yinpeng Dong; Zhijie Deng; Tianyu Pang; Hang Su; Jun Zhu
2020-02-13
Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks.Taro Kiritani; Koji Ono
The Conditional Entropy Bottleneck.Ian Fischer
Identifying Audio Adversarial Examples via Anomalous Pattern Detection.Victor Akinwande; Celia Cintas; Skyler Speakman; Srihari Sridharan
2020-02-12
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization.Xiangning Chen; Cho-Jui Hsieh
Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks.Roi Pony; Itay Naeh; Shie Mannor
2020-02-11
Adversarial Robustness for Code.Pavol Bielik; Martin Vechev
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations.Florian Tramèr; Jens Behrmann; Nicholas Carlini; Nicolas Papernot; Jörn-Henrik Jacobsen
Robustness of Bayesian Neural Networks to Gradient-Based Attacks.Ginevra Carbone; Matthew Wicker; Luca Laurenti; Andrea Patane; Luca Bortolussi; Guido Sanguinetti
Improving the affordability of robustness training for DNNs.Sidharth Gupta; Parijat Dube; Ashish Verma
Fast Geometric Projections for Local Robustness Certification.Aymeric Fromherz; Klas Leino; Matt Fredrikson; Bryan Parno; Corina Păsăreanu
Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models.Xiao Zang; Yi Xie; Jie Chen; Bo Yuan
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models.Lin Chen; Yifei Min; Mingrui Zhang; Amin Karbasi
2020-02-10
Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers.Prithviraj Dasgupta; Joseph B. Collins; Michael McCarrick
Adversarial Data Encryption.Yingdong Hu; Liang Zhang; Wei Shan; Xiaoxiao Qin; Jing Qi; Zhenzhou Wu; Yang Yuan
Generalised Lipschitz Regularisation Equals Distributional Robustness.Zac Cranko; Zhan Shi; Xinhua Zhang; Richard Nock; Simon Kornblith
2020-02-09
MDEA: Malware Detection with Evolutionary Adversarial Learning.Xiruo Wang; Risto Miikkulainen
Robust binary classification with the 01 loss.Yunzhe Xue; Meiyan Xie; Usman Roshan
Watch out! Motion is Blurring the Vision of Your Deep Neural Networks.Qing Guo; Felix Juefei-Xu; Xiaofei Xie; Lei Ma; Jian Wang; Bing Yu; Wei Feng; Yang Liu
Feature-level Malware Obfuscation in Deep Learning.Keith Dillon
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples.Paarth Neekhara; Shehzeen Hussain; Malhar Jere; Farinaz Koushanfar; Julian McAuley
Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection.Quanyu Liao; Xin Wang; Bin Kong; Siwei Lyu; Youbing Yin; Qi Song; Xi Wu
Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing.Jinyuan Jia; Binghui Wang; Xiaoyu Cao; Neil Zhenqiang Gong
Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images.Avrim Blum; Travis Dick; Naren Manoj; Hongyang Zhang
Input Validation for Neural Networks via Runtime Local Robustness Verification.Jiangchao Liu; Liqian Chen; Antoine Mine; Ji Wang
2020-02-08
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks.Lu Chen; Wei Xu
Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness.Aounon Kumar; Alexander Levine; Tom Goldstein; Soheil Feizi
2020-02-07
Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness.Ting-Wu Chin; Cha Zhang; Diana Marculescu
Analysis of Random Perturbations for Robust Convolutional Neural Networks.Adam Dziedzic; Sanjay Krishnan
RAID: Randomized Adversarial-Input Detection for Neural Networks.Hasan Ferit Eniser; Maria Christakis; Valentin Wüstholz
Assessing the Adversarial Robustness of Monte Carlo and Distillation Methods for Deep Bayesian Neural Network Classification.Meet P. Vadera; Satya Narayan Shukla; Brian Jalaian; Benjamin M. Marlin
Semantic Robustness of Models of Source Code.Goutham Ramakrishnan; Jordan Henkel; Zi Wang; Aws Albarghouthi; Somesh Jha; Thomas Reps
2020-02-06
Reliability Validation of Learning Enabled Vehicle Tracking.Youcheng Sun; Yifan Zhou; Simon Maskell; James Sharp; Xiaowei Huang
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models.Yao Deng; Xi Zheng; Tianyi Zhang; Chen Chen; Guannan Lou; Miryung Kim
AI-GAN: Attack-Inspired Generation of Adversarial Examples.Tao Bai; Jun Zhao; Jinlin Zhu; Shoudong Han; Jiefeng Chen; Bo Li; Alex Kot
2020-02-05
Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels.Brian Kim; Yalin E. Sagduyu; Kemal Davaslioglu; Tugba Erpek; Sennur Ulukus
Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study.David Mickisch; Felix Assion; Florens Greßner; Wiebke Günther; Mariele Motta
2020-02-04
Adversarially Robust Frame Sampling with Bounded Irregularities.Hanhan Li; Pin Wang
Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria.Qi Xuan; Yalu Shan; Jinhuan Wang; Zhongyuan Ruan; Guanrong Chen
Minimax Defense against Gradient-based Adversarial Attacks.Blerta Lindqvist; Rauf Izmailov
2020-02-03
A Differentiable Color Filter for Generating Unrestricted Adversarial Images.Zhengyu Zhao; Zhuoran Liu; Martha Larson
Regularizers for Single-step Adversarial Training.B. S. Vivek; R. Venkatesh Babu
Defending Adversarial Attacks via Semantic Feature Manipulation.Shuo Wang; Tianle Chen; Surya Nepal; Carsten Rudolph; Marthie Grobler; Shangyu Chen
2020-02-02
Robust saliency maps with decoy-enhanced saliency score.Yang Lu; Wenbo Guo; Xinyu Xing; William Stafford Noble
2020-02-01
Towards Sharper First-Order Adversary with Quantized Gradients.Zhuanghua Liu; Ivor W. Tsang
AdvJND: Generating Adversarial Examples with Just Noticeable Difference.Zifei Zhang; Kai Qiao; Lingyun Jiang; Linyuan Wang; Bin Yan
2020-01-31
Additive Tree Ensembles: Reasoning About Potential Instances.Laurens Devos; Wannes Meert; Jesse Davis
Politics of Adversarial Machine Learning.Kendra Albert; Jonathon Penney; Bruce Schneier; Ram Shankar Siva Kumar
FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications.Dou Goodman; Lv Zhonghou; Wang Minghua
2020-01-30
Tiny Noise Can Make an EEG-Based Brain-Computer Interface Speller Output Anything.Xiao Zhang; Dongrui Wu; Lieyun Ding; Hanbin Luo; Chin-Teng Lin; Tzyy-Ping Jung; Ricardo Chavarriaga
2020-01-29
A4 : Evading Learning-based Adblockers.Shitong Zhu; Zhongjie Wang; Xun Chen; Shasha Li; Umar Iqbal; Zhiyun Qian; Kevin S. Chan; Srikanth V. Krishnamurthy; Zubair Shafiq
D2M: Dynamic Defense and Modeling of Adversarial Movement in Networks.Scott Freitas; Andrew Wicker; Duen Horng Chau; Joshua Neil
Just Noticeable Difference for Machines to Generate Adversarial Images.Adil Kaan Akan; Mehmet Ali Genc; Fatos T. Yarman Vural
Semantic Adversarial Perturbations using Learnt Representations.Isaac Dunn; Tom Melham; Daniel Kroening
Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain.Yigit Alparslan; Ken Alparslan; Jeremy Keim-Shenk; Shweta Khade; Rachel Greenstadt
2020-01-28
Modelling and Quantifying Membership Information Leakage in Machine Learning.Farhad Farokhi; Mohamed Ali Kaafar
2020-01-27
Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis.William Briguglio; Sherif Saad
Generating Natural Adversarial Hyperspectral examples with a modified Wasserstein GAN.Jean-Christophe Burnel; Kilian Fatras; Nicolas Courty
FakeLocator: Robust Localization of GAN-Based Face Manipulations via Semantic Segmentation Networks with Bells and Whistles.Yihao Huang; Felix Juefei-Xu; Run Wang; Xiaofei Xie; Lei Ma; Jianwen Li; Weikai Miao; Yang Liu; Geguang Pu
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning.Inaam Ilahi; Muhammad Usama; Junaid Qadir; Muhammad Umar Janjua; Ala Al-Fuqaha; Dinh Thai Hoang; Dusit Niyato
Practical Fast Gradient Sign Attack against Mammographic Image Classifier.Ibrahim Yilmaz
2020-01-26
Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks.Rehana Mahfuz; Rajeev Sahay; Aly El Gamal
2020-01-25
Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles.Yilan Li; Senem Velipasalar
AI-Powered GUI Attack and Its Defensive Methods.Ning Yu; Zachary Tuttle; Carl Jake Thurnau; Emmanuel Mireku
Analyzing the Noise Robustness of Deep Neural Networks.Kelei Cao; Mengchen Liu; Hang Su; Jing Wu; Jun Zhu; Shixia Liu
2020-01-24
When Wireless Security Meets Machine Learning: Motivation, Challenges, and Research Directions.Yalin E. Sagduyu; Yi Shi; Tugba Erpek; William Headley; Bryse Flowers; George Stantchev; Zhuo Lu
2020-01-23
Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack.Bo Zhang; Ruotong Yu; Haipei Sun; Yanying Li; Jun Xu; Hui Wang
Towards Robust DNNs: An Taylor Expansion-Based Method for Generating Powerful Adversarial Examples.Ya-guan Qian; Xi-Ming Zhang; Bin Wang; Wei Li; Jian-Hai Chen; Wu-Jie Zhou; Jing-Sheng Lei
On the human evaluation of audio adversarial examples.Jon Vadillo; Roberto Santana
2020-01-22
Adversarial Attack on Community Detection by Hiding Individuals.Jia Li; Honglei Zhang; Zhichao Han; Yu Rong; Hong Cheng; Junzhou Huang
2020-01-21
SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation.Jesse Sun; Fatemeh Darbeha; Mark Zaidi; Bo Wang
Secure and Robust Machine Learning for Healthcare: A Survey.Adnan Qayyum; Junaid Qadir; Muhammad Bilal; Ala Al-Fuqaha
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence.Kihyuk Sohn; David Berthelot; Chun-Liang Li; Zizhao Zhang; Nicholas Carlini; Ekin D. Cubuk; Alex Kurakin; Han Zhang; Colin Raffel
GhostImage: Perception Domain Attacks against Vision-based Object Classification Systems.Yanmao Man; Ming Li; Ryan Gerdes
Generate High-Resolution Adversarial Samples by Identifying Effective Features.Sizhe Chen; Peidong Zhang; Chengjin Sun; Jia Cai; Xiaolin Huang
Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning.Nilaksh Das; Haekyu Park; Zijie J. Wang; Fred Hohman; Robert Firstman; Emily Rogers; Duen Horng Chau
Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP.Ying Xu; Xu Zhong; Antonio Jose Jimeno Yepes; Jey Han Lau
2020-01-17
Cyber Attack Detection thanks to Machine Learning Algorithms.Antoine Delplace; Sheryl Hermoso; Kristofer Anandita
2020-01-16
Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks.Farnaz Behnia; Ali Mirzaeian; Mohammad Sabokrou; Sai Manoj; Tinoosh Mohsenin; Khaled N. Khasawneh; Liang Zhao; Houman Homayoun; Avesta Sasan
A Little Fog for a Large Turn.Harshitha Machiraju; Vineeth N Balasubramanian
The gap between theory and practice in function approximation with deep neural networks.Ben Adcock; Nick Dexter
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.Sizhe Chen; Zhengbao He; Chengjin Sun; Jie Yang; Xiaolin Huang
Increasing the robustness of DNNs against image corruptions by playing the Game of Noise.Evgenia Rusak; Lukas Schott; Roland S. Zimmermann; Julian Bitterwolf; Oliver Bringmann; Matthias Bethge; Wieland Brendel
2020-01-14
Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation.Chuteng Zhou; Prad Kadambi; Matthew Mattina; Paul N. Whatmough
2020-01-13
Advbox: a toolbox to generate adversarial examples that fool neural networks.Dou Goodman; Hao Xin; Wang Yang; Wu Yuesheng; Xiong Junfeng; Zhang Huan
2020-01-12
Membership Inference Attacks Against Object Detection Models.Yeachan Park; Myungjoo Kang
An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs.Joseph D. Janizek; Gabriel Erion; Alex J. DeGrave; Su-In Lee
Fast is better than free: Revisiting adversarial training.Eric Wong; Leslie Rice; J. Zico Kolter
2020-01-11
Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses.Kashyap Coimbatore Murali
Sparse Black-box Video Attack with Reinforcement Learning.Huanqian Yan; Xingxing Wei; Bo Li
2020-01-10
ReluDiff: Differential Verification of Deep Neural Networks.Brandon Paulsen; Jingbo Wang; Chao Wang
Guess First to Enable Better Compression and Adversarial Robustness.Sicheng Zhu; Bang An; Shiyu Niu
2020-01-08
To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers.Bijeeta Pal; Shruti Tople
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius.Runtian Zhai; Chen Dan; Di He; Huan Zhang; Boqing Gong; Pradeep Ravikumar; Cho-Jui Hsieh; Liwei Wang
Transferability of Adversarial Examples to Attack Cloud-based Image Classifier Service.Dou Goodman
2020-01-07
Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring.Sibylle Hess; Wouter Duivesteijn; Decebal Mocanu
2020-01-06
Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations.Lin Wang; Wonjune Cho; Kuk-Jin Yoon
Generating Semantic Adversarial Examples via Feature Manipulation.Shuo Wang; Surya Nepal; Carsten Rudolph; Marthie Grobler; Shangyu Chen; Tianle Chen
2020-01-05
The Human Visual System and Adversarial AI.Yaoshiang Ho; Samuel Wookey
2020-01-02
Reject Illegal Inputs with Generative Classifier Derived from Any Discriminative Classifier.Xin Wang
2020-01-01
Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient.Ling Liang; Xing Hu; Lei Deng; Yujie Wu; Guoqi Li; Yufei Ding; Peng Li; Yuan Xie
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks Against Adversarial Attacks.Ying Meng; Jianhai Su; Jason O'Kane; Pooyan Jamshidi
2019-12-31
Automated Testing for Deep Learning Systems with Differential Behavior Criteria.Yuan Gao; Yiqiang Han
Protecting GANs against privacy attacks by preventing overfitting.Sumit Mukherjee; Yixi Xu; Anusua Trivedi; Juan Lavista Ferres
Erase and Restore: Simple, Accurate and Resilient Detection of $L_2$ Adversarial Examples.Fei Zuo; Qiang Zeng
Quantum Adversarial Machine Learning.Sirui Lu; Lu-Ming Duan; Dong-Ling Deng
2019-12-30
Adversarial Example Generation using Evolutionary Multi-objective Optimization.Takahiro Suzuki; Shingo Takeshita; Satoshi Ono
Defending from adversarial examples with a two-stream architecture.Hao Ge; Xiaoguang Tu; Mei Xie; Zheng Ma
2019-12-28
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices.Chandramouli Shama Sastry; Sageev Oore
Search Based Repair of Deep Neural Networks.Jeongju Sohn; Sungmin Kang; Shin Yoo
2019-12-26
Benchmarking Adversarial Robustness.Yinpeng Dong; Qi-An Fu; Xiao Yang; Tianyu Pang; Hang Su; Zihao Xiao; Jun Zhu
Efficient Adversarial Training with Transferable Adversarial Examples.Haizhong Zheng; Ziqi Zhang; Juncheng Gu; Honglak Lee; Atul Prakash
2019-12-24
Attack-Resistant Federated Learning with Residual-based Reweighting.Shuhao Fu; Chulin Xie; Bo Li; Qifeng Chen
Analysis of Moving Target Defense Against False Data Injection Attacks on Power Grid.Zhenyong Zhang; Ruilong Deng; David K. Y. Yau; Peng Cheng; Jiming Chen
Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer.Hongyan Chang; Virat Shejwalkar; Reza Shokri; Amir Houmansadr
Characterizing the Decision Boundary of Deep Neural Networks.Hamid Karimi; Tyler Derr; Jiliang Tang
2019-12-23
White Noise Analysis of Neural Networks.Ali Borji; Sikun Lin
Adversarial AutoAugment.Xinyu Zhang; Qiang Wang; Jian Zhang; Zhao Zhong
Geometry-aware Generation of Adversarial and Cooperative Point Clouds.Yuxin Wen; Jiehong Lin; Ke Chen; Kui Jia
2019-12-21
T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack.Boxin Wang; Hengzhi Pei; Boyuan Pan; Qian Chen; Shuohang Wang; Bo Li
2019-12-20
Measuring Dataset Granularity.Yin Cui; Zeqi Gu; Dhruv Mahajan; Laurens van der Maaten; Serge Belongie; Ser-Nam Lim
Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing.Jinyuan Jia; Xiaoyu Cao; Binghui Wang; Neil Zhenqiang Gong
secml: A Python Library for Secure and Explainable Machine Learning.Maura Pintor; Luca Demetrio; Angelo Sotgiu; Marco Melis; Ambra Demontis; Battista Biggio
Jacobian Adversarially Regularized Networks for Robustness.Alvin Chan; Yi Tay; Yew Soon Ong; Jie Fu
Explainability and Adversarial Robustness for RNNs.Alexander Hartl; Maximilian Bachl; Joachim Fabini; Tanja Zseby
Adversarial symmetric GANs: bridging adversarial samples and adversarial networks.Faqiang Liu; Mingkun Xu; Guoqi Li; Jing Pei; Luping Shi; Rong Zhao
2019-12-19
Does Symbolic Knowledge Prevent Adversarial Fooling?Stefano Teso
A New Ensemble Method for Concessively Targeted Multi-model Attack.Ziwen He; Wei Wang; Xinsheng Xuan; Jing Dong; Tieniu Tan
Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples).Woohyung Chun; Sung-Min Hong; Junho Huh; Inyup Kang
Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection.Mahmood Sharif; Keane Lucas; Lujo Bauer; Michael K. Reiter; Saurabh Shintre
$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers.Mahmood Sharif; Lujo Bauer; Michael K. Reiter
Towards Verifying Robustness of Neural Networks Against Semantic Perturbations.Jeet Mohapatra; Tsui-Wei Weng; Pin-Yu Chen; Sijia Liu; Luca Daniel
Perturbations on the Perceptual Ball.Andrew Elliott; Stephen Law; Chris Russell
2019-12-18
Identifying Adversarial Sentences by Analyzing Text Complexity.Hoang-Quoc Nguyen-Son; Tran Phuong Thao; Seira Hidano; Shinsaku Kiyomoto
An Adversarial Perturbation Oriented Domain Adaptation Approach for Semantic Segmentation.Jihan Yang; Ruijia Xu; Ruiyu Li; Xiaojuan Qi; Xiaoyong Shen; Guanbin Li; Liang Lin
Adversarial VC-dimension and Sample Complexity of Neural Networks.Zetong Qi; T. J. Wilder
SIGMA : Strengthening IDS with GAN and Metaheuristics Attacks.Simon Msika; Alejandro Quintero; Foutse Khomh
Detecting Adversarial Attacks On Audio-Visual Speech Recognition.Pingchuan Ma; Stavros Petridis; Maja Pantic
2019-12-17
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection.A. Braunegg; Amartya Chakraborty; Michael Krumdick; Nicole Lape; Sara Leary; Keith Manville; Elizabeth Merkhofer; Laura Strickhart; Matthew Walmer
2019-12-16
CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator.Huy Phan; Yi Xie; Siyu Liao; Jie Chen; Bo Yuan
MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking.Rushil Anirudh; Jayaraman J. Thiagarajan; Bhavya Kailkhura; Timo Bremer
On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration.Kanil Patel; William Beluch; Dan Zhang; Michael Pfeiffer; Bin Yang
Constructing a provably adversarially-robust classifier from a high accuracy one.Grzegorz Głuch; Rüdiger Urbanke
2019-12-15
DAmageNet: A Universal Adversarial Dataset.Sizhe Chen; Xiaolin Huang; Zhengbao He; Chengjin Sun
2019-12-14
What Else Can Fool Deep Learning? Addressing Color Constancy Errors on Deep Neural Network Performance.Mahmoud Afifi; Michael S Brown
Towards Robust Toxic Content Classification.Keita Kurita; Anna Belova; Antonios Anastasopoulos
2019-12-13
Potential adversarial samples for white-box attacks.Amir Nazemi; Paul Fieguth
2019-12-11
Learning to Model Aspects of Hearing Perception Using Neural Loss Functions.Prateek Verma; Jonathan Berger
Gabor Layers Enhance Network Robustness.Juan C. Pérez; Motasem Alfarra; Guillaume Jeanneret; Adel Bibi; Ali Thabet; Bernard Ghanem; Pablo Arbeláez
An Efficient Approach for Using Expectation Maximization Algorithm in Capsule Networks.Moein Hasani; Amin Nasim Saravi; Hassan Khotanlou
Detecting and Correcting Adversarial Images Using Image Processing Operations and Convolutional Neural Networks.Huy H. Nguyen; Minoru Kuribayashi; Junichi Yamagishi; Isao Echizen
What it Thinks is Important is Important: Robustness Transfers through Input Gradients.Alvin Chan; Yi Tay; Yew-Soon Ong
2019-12-10
Towards a Robust Classifier: An MDL-Based Method for Generating Adversarial Examples.Behzad Asadi; Vijay Varadharajan
Appending Adversarial Frames for Universal Video Attack.Zhikai Chen; Lingxi Xie; Shanmin Pang; Yong He; Qi Tian
Training Provably Robust Models by Polyhedral Envelope Regularization.Chen Liu; Mathieu Salzmann; Sabine Süsstrunk
Statistically Robust Neural Network Classification. (22%)Benjie Wang; Stefan Webb; Tom Rainforth
2019-12-09
Feature Losses for Adversarial Robustness.Kirthi Shankar Sivamani
2019-12-08
Hardening Random Forest Cyber Detectors Against Adversarial Attacks.Giovanni Apruzzese; Mauro Andreolini; Michele Colajanni; Mirco Marchetti
Amora: Black-box Adversarial Morphing Attack.Run Wang; Felix Juefei-Xu; Xiaofei Xie; Lei Ma; Yihao Huang; Yang Liu
2019-12-07
Exploring the Back Alleys: Analysing The Robustness of Alternative Neural Network Architectures against Adversarial Attacks.Yi Xiang Marcus Tan; Yuval Elovici; Alexander Binder
2019-12-06
Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations.Sven Gowal; Chongli Qin; Po-Sen Huang; Taylan Cemgil; Krishnamurthy Dvijotham; Timothy Mann; Pushmeet Kohli
Principal Component Properties of Adversarial Samples.Malhar Jere; Sandro Herbig; Christine Lind; Farinaz Koushanfar
Training Deep Neural Networks for Interpretability and Adversarial Robustness.Adam Noack; Isaac Ahern; Dejing Dou; Boyang Li
2019-12-05
Detection of Face Recognition Adversarial Attacks.Fabio Valerio Massoli; Fabio Carrara; Giuseppe Amato; Fabrizio Falchi
The Search for Sparse, Robust Neural Networks.Justin Cosentino; Federico Zaiter; Dan Pei; Jun Zhu
Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples.Bo Luo; Qiang Xu
2019-12-04
Learning with Multiplicative Perturbations.Xiulong Yang; Shihao Ji
A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks.Prithviraj Dasgupta; Joseph B. Collins
Walking on the Edge: Fast, Low-Distortion Adversarial Examples.Hanwei Zhang; Yannis Avrithis; Teddy Furon; Laurent Amsaleg
Towards Robust Image Classification Using Sequential Attention Models.Daniel Zoran; Mike Chrzanowski; Po-Sen Huang; Sven Gowal; Alex Mott; Pushmeet Kohli
Scratch that! An Evolution-based Adversarial Attack against Neural Networks.Malhar Jere; Briland Hitaj; Gabriela Ciocarlie; Farinaz Koushanfar
2019-12-03
A Survey of Black-Box Adversarial Attacks on Computer Vision Models.Siddhant Bhambri; Sumanyu Muku; Avinash Tulasi; Arun Balaji Buduru
FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks.Mahum Naseer; Mishal Fatima Minhas; Faiq Khalid; Muhammad Abdullah Hanif; Osman Hasan; Muhammad Shafique
2019-12-02
Cost-Aware Robust Tree Ensembles for Security Applications.Yizheng Chen; Shiqi Wang; Weifan Jiang; Asaf Cidon; Suman Jana
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples.Nils Lukas; Yuxuan Zhang; Florian Kerschbaum
Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs.Zihan Liu; Xiao Zhang; Lubin Meng; Dongrui Wu
2019-12-01
Adversary A3C for Robust Reinforcement Learning.Zhaoyuan Gu; Zhenzhong Jia; Howie Choset
A Method for Computing Class-wise Universal Adversarial Perturbations.Tejus Gupta; Abhishek Sinha; Nupur Kumari; Mayank Singh; Balaji Krishnamurthy
AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds.Abdullah Hamdi; Sara Rojas; Ali Thabet; Bernard Ghanem
2019-11-30
Design and Interpretation of Universal Adversarial Patches in Face Detection.Xiao Yang; Fangyun Wei; Hongyang Zhang; Jun Zhu
Error-Correcting Neural Network.Yang Song; Qiyu Kang; Wee Peng Tay
2019-11-29
Square Attack: a query-efficient black-box adversarial attack via random search.Maksym Andriushchenko; Francesco Croce; Nicolas Flammarion; Matthias Hein
2019-11-28
Towards Privacy and Security of Deep Learning Systems: A Survey.Yingzhe He; Guozhu Meng; Kai Chen; Xingbo Hu; Jinwen He
2019-11-26
Survey of Attacks and Defenses on Edge-Deployed Neural Networks.Mihailo Isakov; Vijay Gadepally; Karen M. Gettings; Michel A. Kinsy
An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense.Chao Tang; Yifei Fan; Anthony Yezzi
Can Attention Masks Improve Adversarial Robustness?Pratik Vaishnavi; Tianji Cong; Kevin Eykholt; Atul Prakash; Amir Rahmati
Defending Against Adversarial Machine Learning.Alison Jenkins
Using Depth for Pixel-Wise Detection of Adversarial Attacks in Crowd Counting.Weizhe Liu; Mathieu Salzmann; Pascal Fua
2019-11-25
Playing it Safe: Adversarial Robustness with an Abstain Option.Cassidy Laidlaw; Soheil Feizi
ColorFool: Semantic Adversarial Colorization.Ali Shahin Shamsabadi; Ricardo Sanchez-Matilla; Andrea Cavallaro
Adversarial Attack with Pattern Replacement.Ziang Dong; Liang Mao; Shiliang Sun
One Man's Trash is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples.Chang Xiao; Changxi Zheng
2019-11-24
When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks.Minghao Guo; Yuzhe Yang; Rui Xu; Ziwei Liu; Dahua Lin
Time-aware Gradient Attack on Dynamic Network Link Prediction.Jinyin Chen; Jian Zhang; Zhi Chen; Min Du; Feifei Li; Qi Xuan
2019-11-23
Robust Assessment of Real-World Adversarial Examples.Brett Jefferson; Carlos Ortiz Marrero
Universal Adversarial Robustness of Texture and Shape-Biased Models.Kenneth T. Co; Luis Muñoz-González; Leslie Kanthan; Ben Glocker; Emil C. Lupu
2019-11-22
Bounding Singular Values of Convolution Layers.Sahil Singla; Soheil Feizi
Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction.Yantao Lu; Yunhan Jia; Jianyu Wang; Bai Li; Weiheng Chai; Lawrence Carin; Senem Velipasalar
Attack Agnostic Statistical Method for Adversarial Detection.Sambuddha Saha; Aashish Kumar; Pratyush Sahay; George Jose; Srinivas Kruthiventi; Harikrishna Muralidhara
Universal adversarial examples in speech command classification.Jon Vadillo; Roberto Santana
Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Networks for Secure Inference.Wei-An Lin; Yogesh Balaji; Pouya Samangouei; Rama Chellappa
2019-11-21
Heuristic Black-box Adversarial Attacks on Video Recognition Models.Zhipeng Wei; Jingjing Chen; Xingxing Wei; Linxi Jiang; Tat-Seng Chua; Fengfeng Zhou; Yu-Gang Jiang
Adversarial Examples Improve Image Recognition.Cihang Xie; Mingxing Tan; Boqing Gong; Jiang Wang; Alan Yuille; Quoc V. Le
Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy. (1%)Ke Sun; Bing Yu; Zhouchen Lin; Zhanxing Zhu
2019-11-20
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation.Alexander Levine; Soheil Feizi
Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method.Junjie Hu; Takayuki Okatani
Fine-grained Synthesis of Unrestricted Adversarial Examples.Omid Poursaeed; Tianxing Jiang; Harry Yang; Serge Belongie; Ser-Nam Lim
Deep Minimax Probability Machine.Lirong He; Ziyi Guo; Kaizhu Huang; Zenglin Xu
2019-11-19
Logic-inspired Deep Neural Networks.Minh Le
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?Jingfeng Zhang; Bo Han; Gang Niu; Tongliang Liu; Masashi Sugiyama
Adversarial Robustness of Flow-Based Generative Models.Phillip Pope; Yogesh Balaji; Soheil Feizi
Defective Convolutional Layers Learn Robust CNNs.Tiange Luo; Tianle Cai; Mengxiao Zhang; Siyu Chen; Di He; Liwei Wang
Generate (non-software) Bugs to Fool Classifiers.Hiromu Yakura; Youhei Akimoto; Jun Sakuma
2019-11-18
A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories.Zhaohui Che; Ali Borji; Guangtao Zhai; Suiyi Ling; Jing Li; Patrick Le Callet
A novel method for identifying the deep neural network model with the Serial Number.XiangRui Xu; YaQin Li; Cao Yuan
Adversarial Attacks on Grid Events Classification: An Adversarial Machine Learning Approach.Iman Niazazari; Hanif Livani
WITCHcraft: Efficient PGD attacks with random step size.Ping-Yeh Chiang; Jonas Geiping; Micah Goldblum; Tom Goldstein; Renkun Ni; Steven Reich; Ali Shafahi
Deep Detector Health Management under Adversarial Campaigns.Javier Echauz; Keith Kenemer; Sarfaraz Hussein; Jay Dhaliwal; Saurabh Shintre; Slawomir Grzonkowski; Andrew Gardner
2019-11-17
Countering Inconsistent Labelling by Google's Vision API for Rotated Images.Aman Apte; Aritra Bandyopadhyay; K Akhilesh Shenoy; Jason Peter Andrews; Aditya Rathod; Manish Agnihotri; Aditya Jajodia
Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models.Tong Che; Xiaofeng Liu; Site Li; Yubin Ge; Ruixiang Zhang; Caiming Xiong; Yoshua Bengio
Smoothed Inference for Adversarially-Trained Models.Yaniv Nemcovsky; Evgenii Zheltonozhskii; Chaim Baskin; Brian Chmiel; Maxim Fishman; Alex M. Bronstein; Avi Mendelson
2019-11-16
SMART: Skeletal Motion Action Recognition aTtack.He Wang; Feixiang He; Zexi Peng; Yongliang Yang; Tianjia Shao; Kun Zhou; David Hogg
Suspicion-Free Adversarial Attacks on Clustering Algorithms.Anshuman Chhabra; Abhishek Roy; Prasant Mohapatra
Black-Box Adversarial Attack with Transferable Model-based Embedding.Zhichao Huang; Tong Zhang
Defensive Few-shot Learning.Wenbin Li; Lei Wang; Xingxing Zhang; Lei Qi; Jing Huo; Yang Gao; Jiebo Luo
2019-11-15
Learning To Characterize Adversarial Subspaces.Xiaofeng Mao; Yuefeng Chen; Yuhong Li; Yuan He; Hui Xue
On Model Robustness Against Adversarial Examples.Shufei Zhang; Kaizhu Huang; Zenglin Xu
Simple iterative method for generating targeted universal adversarial perturbations.Hokuto Hirano; Kazuhiro Takemoto
AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients.Xiaodan Li; Yuefeng Chen; Yuan He; Hui Xue
2019-11-14
Adversarial Embedding: A robust and elusive Steganography and Watermarking technique.Salah Ghamizi; Maxime Cordy; Mike Papadakis; Yves Le Traon
Self-supervised Adversarial Training.Kejiang Chen; Hang Zhou; Yuefeng Chen; Xiaofeng Mao; Yuhong Li; Yuan He; Hui Xue; Weiming Zhang; Nenghai Yu
DomainGAN: Generating Adversarial Examples to Attack Domain Generation Algorithm Classifiers.Isaac Corley; Jonathan Lwowski; Justin Hoffman
CAGFuzz: Coverage-Guided Adversarial Generative Fuzzing Testing of Deep Learning Systems.Pengcheng Zhang; Qiyin Dai; Patrizio Pelliccione
2019-11-13
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks.Yizhen Dong; Peixin Zhang; Jingyi Wang; Shuang Liu; Jun Sun; Jianye Hao; Xinyu Wang; Li Wang; Jin Song Dong; Ting Dai
Adversarial Margin Maximization Networks.Ziang Yan; Yiwen Guo; Changshui Zhang
2019-11-12
Improving Robustness of Task Oriented Dialog Systems.Arash Einolghozati; Sonal Gupta; Mrinal Mohit; Rushin Shah
On Robustness to Adversarial Examples and Polynomial Optimization.Pranjal Awasthi; Abhratanu Dutta; Aravindan Vijayaraghavan
Adversarial Examples in Modern Machine Learning: A Review.Rey Reza Wiyatno; Anqi Xu; Ousmane Dia; Archy de Berker
2019-11-11
Few-Features Attack to Fool Machine Learning Models through Mask-Based GAN.Feng Chen; Yunkai Shang; Bo Xu; Jincheng Hu
RNN-Test: Towards Adversarial Testing for Recurrent Neural Network Systems.Jianmin Guo; Yue Zhao; Quan Zhang; Yu Jiang
Learning From Brains How to Regularize Machines.Zhe Li; Wieland Brendel; Edgar Y. Walker; Erick Cobos; Taliah Muhammad; Jacob Reimer; Matthias Bethge; Fabian H. Sinz; Xaq Pitkow; Andreas S. Tolias
Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory.Arash Rahnama; Andre T. Nguyen; Edward Raff
CALPA-NET: Channel-pruning-assisted Deep Residual Network for Steganalysis of Digital Images.Shunquan Tan; Weilong Wu; Zilong Shao; Qiushi Li; Bin Li; Jiwu Huang
GraphDefense: Towards Robust Graph Convolutional Networks.Xiaoyun Wang; Xuanqing Liu; Cho-Jui Hsieh
2019-11-09
A Reinforced Generation of Adversarial Samples for Neural Machine Translation.Wei Zou; Shujian Huang; Jun Xie; Xinyu Dai; Jiajun Chen
Improving Machine Reading Comprehension via Adversarial Training.Ziqing Yang; Yiming Cui; Wanxiang Che; Ting Liu; Shijin Wang; Guoping Hu
Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples.Marc Khoury
Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy.Xinghua Qu; Zhu Sun; Yew-Soon Ong; Abhishek Gupta; Pengfei Wei
2019-11-08
Adversarial Attacks on Time-Series Intrusion Detection for Industrial Control Systems.Giulio Zizzo; Chris Hankin; Sergio Maffeis; Kevin Jones
Patch augmentation: Towards efficient decision boundaries for neural networks.Marcus D. Bloice; Andreas Holzinger
Domain Robustness in Neural Machine Translation.Mathias Müller; Annette Rios; Rico Sennrich
Adversarial Attacks on GMM i-vector based Speaker Verification Systems.Xu Li; Jinghua Zhong; Xixin Wu; Jianwei Yu; Xunying Liu; Helen Meng
Imperceptible Adversarial Attacks on Tabular Data.Vincent Ballet; Xavier Renard; Jonathan Aigrain; Thibault Laugel; Pascal Frossard; Marcin Detyniecki
2019-11-07
White-Box Target Attack for EEG-Based BCI Regression Problems.Lubin Meng; Chin-Teng Lin; Tzyy-Ping Jung; Dongrui Wu
Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces.Xue Jiang; Xiao Zhang; Dongrui Wu
2019-11-06
Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance.Zhengyu Zhao; Zhuoran Liu; Martha Larson
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods.Dylan Slack; Sophie Hilgard; Emily Jia; Sameer Singh; Himabindu Lakkaraju
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey.Olakunle Ibitoye; Rana Abou-Khamis; Ashraf Matrawy; M. Omair Shafiq
Reversible Adversarial Example based on Reversible Image Transformation.Zhaoxia Yin; Hua Wang; Weiming Zhang
2019-11-05
Adversarial Enhancement for Community Detection in Complex Networks.Jiajun Zhou; Zhi Chen; Min Du; Lihong Chen; Shanqing Yu; Feifei Li; Guanrong Chen; Qi Xuan
DLA: Dense-Layer-Analysis for Adversarial Example Detection.Philip Sperl; Ching-Yu Kao; Peng Chen; Konstantin Böttinger
Intriguing Properties of Adversarial ML Attacks in the Problem Space.Fabio Pierazzi; Feargus Pendlebury; Jacopo Cortellazzi; Lorenzo Cavallaro
Coverage Guided Testing for Recurrent Neural Networks.Wei Huang; Youcheng Sun; Xingyu Zhao; James Sharp; Wenjie Ruan; Jie Meng; Xiaowei Huang
2019-11-04
Persistency of Excitation for Robustness of Neural Networks.Kamil Nar; S. Shankar Sastry
Fast-UAP: An Algorithm for Speeding up Universal Adversarial Perturbation Generation with Orientation of Perturbation Vectors.Jiazhu Dai; Le Shu
A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models.Ren Pang; Hua Shen; Xinyang Zhang; Shouling Ji; Yevgeniy Vorobeychik; Xiapu Luo; Alex Liu; Ting Wang
2019-11-03
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems.Guangke Chen; Sen Chen; Lingling Fan; Xiaoning Du; Zhe Zhao; Fu Song; Yang Liu
MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks.Shai Rozenberg; Gal Elidan; Ran El-Yaniv
2019-11-02
Automatic Detection of Generated Text is Easiest when Humans are Fooled.Daphne Ippolito; Daniel Duckworth; Chris Callison-Burch; Douglas Eck
Security of Facial Forensics Models Against Adversarial Attacks.Rong Huang; Fuming Fang; Huy H. Nguyen; Junichi Yamagishi; Isao Echizen
2019-10-31
Enhancing Certifiable Robustness via a Deep Model Ensemble.Huan Zhang; Minhao Cheng; Cho-Jui Hsieh
Certifiable Robustness to Graph Perturbations.Aleksandar Bojchevski; Stephan Günnemann
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System.Juncheng B. Li; Shuhui Qu; Xinjian Li; Joseph Szurley; J. Zico Kolter; Florian Metze
2019-10-30
Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization.Rana Abou Khamis; Omair Shafiq; Ashraf Matrawy
Beyond Universal Person Re-ID Attack.Wenjie Ding; Xing Wei; Rongrong Ji; Xiaopeng Hong; Qi Tian; Yihong Gong
2019-10-29
Adversarial Example in Remote Sensing Image Recognition.Li Chen; Guowei Zhu; Qi Li; Haifeng Li
2019-10-28
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks.Chunfeng Cui; Kaiqi Zhang; Talgat Daulbaev; Julia Gusak; Ivan Oseledets; Zheng Zhang
Certified Adversarial Robustness for Deep Reinforcement Learning.Björn Lütjens; Michael Everett; Jonathan P. How
2019-10-27
Word-level Textual Adversarial Attacking as Combinatorial Optimization.Yuan Zang; Fanchao Qi; Chenghao Yang; Zhiyuan Liu; Meng Zhang; Qun Liu; Maosong Sun
EdgeFool: An Adversarial Image Enhancement Filter.Ali Shahin Shamsabadi; Changjae Oh; Andrea Cavallaro
Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolutional Neural Networks.Ya-guan Qian; Dan-feng Ma; Bin Wang; Jun Pan; Jia-min Wang; Jian-hai Chen; Wu-jie Zhou; Jing-sheng Lei
2019-10-26
Detection of Adversarial Attacks and Characterization of Adversarial Subspace.Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Understanding and Quantifying Adversarial Examples Existence in Linear Classification.Xupeng Shi; A. Adam Ding
Adversarial Defense Via Local Flatness Regularization.Jia Xu; Yiming Li; Yong Jiang; Shu-Tao Xia
2019-10-25
Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples.Mauro Barni; Ehsan Nowroozi; Benedetta Tondi; Bowen Zhang
MediaEval 2019: Concealed FGSM Perturbations for Privacy Preservation.Panagiotis Linardos; Suzanne Little; Kevin McGuinness
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?Ali Shafahi; Amin Ghiasi; Furong Huang; Tom Goldstein
2019-10-24
ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries.Xingxing Zhang; Shupeng Gui; Zhenfeng Zhu; Yao Zhao; Ji Liu
2019-10-23
A Useful Taxonomy for Adversarial Robustness of Neural Networks.Leslie N. Smith
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks.Alexander Levine; Soheil Feizi
2019-10-22
Attacking Optical Flow.Anurag Ranjan; Joel Janai; Andreas Geiger; Michael J. Black
Adversarial Example Detection by Classification for Deep Speech Recognition.Saeid Samizade; Zheng-Hua Tan; Chao Shen; Xiaohong Guan
Cross-Representation Transferability of Adversarial Attacks: From Spectrograms to Audio Waveforms.Karl M. Koerich; Mohammad Esmaeilpour; Sajjad Abdoli; Alceu S. Britto Jr.; Alessandro L. Koerich
Structure Matters: Towards Generating Transferable Adversarial Images.Dan Peng; Zizhan Zheng; Linhao Luo; Xiaofeng Zhang
2019-10-21
Recovering Localized Adversarial Attacks.Jan Philip Göpfert; Heiko Wersing; Barbara Hammer
Learning to Learn by Zeroth-Order Oracle.Yangjun Ruan; Yuanhao Xiong; Sashank Reddi; Sanjiv Kumar; Cho-Jui Hsieh
An Alternative Surrogate Loss for PGD-based Adversarial Testing.Sven Gowal; Jonathan Uesato; Chongli Qin; Po-Sen Huang; Timothy Mann; Pushmeet Kohli
2019-10-20
Enhancing Recurrent Neural Networks with Sememes.Yujia Qin; Fanchao Qi; Sicong Ouyang; Zhiyuan Liu; Cheng Yang; Yasheng Wang; Qun Liu; Maosong Sun
2019-10-19
Adversarial Attacks on Spoofing Countermeasures of automatic speaker verification.Songxiang Liu; Haibin Wu; Hung-yi Lee; Helen Meng
2019-10-18
Toward Metrics for Differentiating Out-of-Distribution Sets.Mahdieh Abbasi; Changjian Shui; Arezoo Rajabi; Christian Gagne; Rakesh Bobba
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?Simran Kaur; Jeremy Cohen; Zachary C. Lipton
Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking.Qing Guo; Xiaofei Xie; Lei Ma; Zhongguo Li; Wei Feng; Yang Liu
A Fast Saddle-Point Dynamical System Approach to Robust Deep Learning.Yasaman Esfandiari; Aditya Balu; Keivan Ebrahimi; Umesh Vaidya; Nicola Elia; Soumik Sarkar
2019-10-17
Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets.Yogesh Balaji; Tom Goldstein; Judy Hoffman
Enforcing Linearity in DNN succours Robustness and Adversarial Image Generation.Anindya Sarkar; Nikhil Kumar Gupta; Raghu Iyengar
LanCe: A Comprehensive and Lightweight CNN Defense Methodology against Physical Adversarial Attacks on Embedded Multimedia Applications.Zirui Xu; Fuxun Yu; Xiang Chen
Adversarial T-shirt! Evading Person Detectors in A Physical World.Kaidi Xu; Gaoyuan Zhang; Sijia Liu; Quanfu Fan; Mengshu Sun; Hongge Chen; Pin-Yu Chen; Yanzhi Wang; Xue Lin
2019-10-16
A New Defense Against Adversarial Images: Turning a Weakness into a Strength.Tao Yu; Shengyuan Hu; Chuan Guo; Wei-Lun Chao; Kilian Q. Weinberger
2019-10-15
Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation.Anindya Sarkar; Anirudh Sunder Raj; Raghu Sesha Iyengar
Understanding Misclassifications by Attributes.Sadaf Gulshad; Zeynep Akata; Jan Hendrik Metzen; Arnold Smeulders
Adversarial Examples for Models of Code.Noam Yefet; Uri Alon; Eran Yahav
On adversarial patches: real-world attack on ArcFace-100 face recognition system.Mikhail Pautov; Grigorii Melnikov; Edgar Kaziakhmedov; Klim Kireev; Aleksandr Petiushko
2019-10-14
DeepSearch: Simple and Effective Blackbox Fuzzing of Deep Neural Networks.Fuyuan Zhang; Sankalan Pal Chowdhury; Maria Christakis
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.David Stutz; Matthias Hein; Bernt Schiele
ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization.Xiangyi Chen; Sijia Liu; Kaidi Xu; Xingguo Li; Xue Lin; Mingyi Hong; David Cox
Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models.Derui Wang; Chaoran Li; Sheng Wen; Surya Nepal; Yang Xiang
Real-world adversarial attack on MTCNN face detection system.Edgar Kaziakhmedov; Klim Kireev; Grigorii Melnikov; Mikhail Pautov; Aleksandr Petiushko
2019-10-12
On Robustness of Neural Ordinary Differential Equations.Hanshu Yan; Jiawei Du; Vincent Y. F. Tan; Jiashi Feng
2019-10-11
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems.Hadi Abdullah; Muhammad Sajidur Rahman; Washington Garcia; Logan Blue; Kevin Warren; Anurag Swarnim Yadav; Tom Shrimpton; Patrick Traynor
Verification of Neural Networks: Specifying Global Robustness using Generative Models.Nathanaël Fijalkow; Mohit Kumar Gupta
2019-10-10
Universal Adversarial Perturbation for Text Classification.Hang Gao; Tim Oates
Information Aware Max-Norm Dirichlet Networks for Predictive Uncertainty Estimation.Theodoros Tsiligkaridis
2019-10-09
Learning deep forest with multi-scale Local Binary Pattern features for face anti-spoofing.Rizhao Cai; Changsheng Chen
Adversarial Learning of Deepfakes in Accounting.Marco Schreyer; Timur Sattarov; Bernd Reimer; Damian Borth
Deep Latent Defence.Giulio Zizzo; Chris Hankin; Sergio Maffeis; Kevin Jones
Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system.Shixian Wen; Laurent Itti
2019-10-08
Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications.Matteo Terzi; Gian Antonio Susto; Pratik Chaudhari
SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations.Ali Dabouei; Sobhan Soleymani; Fariborz Taherkhani; Jeremy Dawson; Nasser M. Nasrabadi
2019-10-07
Interpretable Disentanglement of Neural Networks by Extracting Class-Specific Subnetwork.Yulong Wang; Xiaolin Hu; Hang Su
2019-10-05
Unrestricted Adversarial Attacks for Semantic Segmentation.Guangyu Shen; Chengzhi Mao; Junfeng Yang; Baishakhi Ray
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies.Laurent Meunier; Jamal Atif; Olivier Teytaud
2019-10-04
Requirements for Developing Robust Neural Networks.John S. Hyatt; Michael S. Lee
Adversarial Examples for Cost-Sensitive Classifiers.Gavin S. Hartnett; Andrew J. Lohn; Alexander P. Sedlack
2019-10-03
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions.He Zhao; Trung Le; Paul Montague; Olivier De Vel; Tamas Abraham; Dinh Phung
BUZz: BUffer Zones for defending adversarial examples in image classification.Kaleel Mahmood; Phuong Ha Nguyen; Lam M. Nguyen; Thanh Nguyen; Marten van Dijk
Verification of Neural Network Behaviour: Formal Guarantees for Power System Applications.Andreas Venzke; Spyros Chatzivasileiadis
2019-10-02
Attacking Vision-based Perception in End-to-End Autonomous Driving Models.Adith Boloor; Karthik Garimella; Xin He; Christopher Gill; Yevgeniy Vorobeychik; Xuan Zhang
Adversarially Robust Few-Shot Learning: A Meta-Learning Approach.Micah Goldblum; Liam Fowl; Tom Goldstein
2019-10-01
Boosting Image Recognition with Non-differentiable Constraints.Xuan Li; Yuchen Lu; Peng Xu; Jizong Peng; Christian Desrosiers; Xue Liu
Generating Semantic Adversarial Examples with Differentiable Rendering.Lakshya Jain; Wilson Wu; Steven Chen; Uyeong Jang; Varun Chandrasekaran; Sanjit Seshia; Somesh Jha
Attacking CNN-based anti-spoofing face authentication in the physical domain.Bowen Zhang; Benedetta Tondi; Mauro Barni
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack.Yang Zhang; Shiyu Chang; Mo Yu; Kaizhi Qian
Cross-Layer Strategic Ensemble Defense Against Adversarial Examples.Wenqi Wei; Ling Liu; Margaret Loper; Ka-Ho Chow; Emre Gursoy; Stacey Truex; Yanzhao Wu
Deep Neural Rejection against Adversarial Examples.Angelo Sotgiu; Ambra Demontis; Marco Melis; Battista Biggio; Giorgio Fumera; Xiaoyi Feng; Fabio Roli
2019-09-30
Black-box Adversarial Attacks with Bayesian Optimization.Satya Narayan Shukla; Anit Kumar Sahu; Devin Willmott; J. Zico Kolter
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML.Sijia Liu; Songtao Lu; Xiangyi Chen; Yao Feng; Kaidi Xu; Abdullah Al-Dujaili; Mingyi Hong; Una-May O'Reilly
Role of Spatial Context in Adversarial Robustness for Object Detection.Aniruddha Saha; Akshayvarun Subramanya; Koninika Patil; Hamed Pirsiavash
2019-09-29
Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems.Utku Kose
2019-09-27
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest.Indu Ilanchezian; Praneeth Vepakomma; Abhishek Singh; Otkrist Gupta; G. N. Srinivasa Prasanna; Ramesh Raskar
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks.Rémi Bernhard; Pierre-Alain Moellic; Jean-Max Dutertre
Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate. (1%)Lu Mi; Hao Wang; Yonglong Tian; Hao He; Nir Shavit
2019-09-26
Towards Understanding the Transferability of Deep Representations.Hong Liu; Mingsheng Long; Jianmin Wang; Michael I. Jordan
Adversarial Machine Learning Attack on Modulation Classification.Muhammad Usama; Muhammad Asim; Junaid Qadir; Ala Al-Fuqaha; Muhammad Ali Imran
Adversarial ML Attack on Self Organizing Cellular Networks.Salah-ud-din Farooq; Muhammad Usama; Junaid Qadir; Muhammad Ali Imran
Towards neural networks that provably know when they don't know.Alexander Meinke; Matthias Hein
Lower Bounds on Adversarial Robustness from Optimal Transport.Arjun Nitin Bhagoji; Daniel Cullina; Prateek Mittal
2019-09-25
Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection.Nilesh A. Ahuja; Ibrahima Ndiour; Trushant Kalyanpur; Omesh Tickoo
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks.Tianyu Pang; Kun Xu; Jun Zhu
FreeLB: Enhanced Adversarial Training for Natural Language Understanding.Chen Zhu; Yu Cheng; Zhe Gan; Siqi Sun; Tom Goldstein; Jingjing Liu
2019-09-24
A Visual Analytics Framework for Adversarial Text Generation.Brandon Laughlin; Christopher Collins; Karthik Sankaranarayanan; Khalil El-Khatib
Intelligent image synthesis to attack a segmentation CNN using adversarial learning.Liang Chen; Paul Bentley; Kensaku Mori; Kazunari Misawa; Michitaka Fujiwara; Daniel Rueckert
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack.Minhao Cheng; Simranjit Singh; Patrick Chen; Pin-Yu Chen; Sijia Liu; Cho-Jui Hsieh
Matrix Sketching for Secure Collaborative Machine Learning. (1%)Mengjiao Zhang; Shusen Wang
2019-09-23
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.Jinyuan Jia; Ahmed Salem; Michael Backes; Yang Zhang; Neil Zhenqiang Gong
Robust Local Features for Improving the Generalization of Adversarial Training.Chuanbiao Song; Kun He; Jiadong Lin; Liwei Wang; John E. Hopcroft
FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments.Alesia Chernikova; Alina Oprea
2019-09-22
HAWKEYE: Adversarial Example Detector for Deep Neural Networks.Jinkyu Koo; Michael Roth; Saurabh Bagchi
Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction.Guoliang Dong; Jingyi Wang; Jun Sun; Yang Zhang; Xinyu Wang; Ting Dai; Jin Song Dong; Xingen Wang
2019-09-20
Adversarial Learning with Margin-based Triplet Embedding Regularization.Yaoyao Zhong; Weihong Deng
COPYCAT: Practical Adversarial Attacks on Visualization-Based Malware Detection.Aminollah Khormali; Ahmed Abusnaina; Songqing Chen; DaeHun Nyang; Aziz Mohaisen
Defending Against Physically Realizable Attacks on Image Classification.Tong Wu; Liang Tong; Yevgeniy Vorobeychik
2019-09-19
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation.Jihyeun Yoon; Kyungyul Kim; Jongseong Jang
Adversarial Vulnerability Bounds for Gaussian Process Classification.Michael Thomas Smith; Kathrin Grosse; Michael Backes; Mauricio A Alvarez
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks.Sekitoshi Kanai; Yasutoshi Ida; Yasuhiro Fujiwara; Masanori Yamada; Shuichi Adachi
Toward Robust Image Classification.Basemah Alshemali; Alta Graham; Jugal Kalita
Training Robust Deep Neural Networks via Adversarial Noise Propagation.Aishan Liu; Xianglong Liu; Chongzhi Zhang; Hang Yu; Qiang Liu; Dacheng Tao
2019-09-17
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review.Han Xu; Yao Ma; Haochen Liu; Debayan Deb; Hui Liu; Jiliang Tang; Anil Jain
Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model.Prashanth Vijayaraghavan; Deb Roy
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges.Jinyuan Jia; Neil Zhenqiang Gong
2019-09-16
They Might NOT Be Giants: Crafting Black-Box Adversarial Examples with Fewer Queries Using Particle Swarm Optimization.Rayan Mosli; Matthew Wright; Bo Yuan; Yin Pan
HAD-GAN: A Human-perception Auxiliary Defense GAN to Defend Adversarial Examples.Wanting Yu; Hongyi Yu; Lingyun Jiang; Mengli Zhang; Kai Qiao; Linyuan Wang; Bin Yan
Towards Quality Assurance of Software Product Lines with Adversarial Configurations.Paul Temple; Mathieu Acher; Gilles Perrouin; Battista Biggio; Jean-marc Jezequel; Fabio Roli
Interpreting and Improving Adversarial Robustness with Neuron Sensitivity.Chongzhi Zhang; Aishan Liu; Xianglong Liu; Yitao Xu; Hang Yu; Yuqing Ma; Tianlin Li
2019-09-15
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms.Qianyu Guo; Sen Chen; Xiaofei Xie; Lei Ma; Qiang Hu; Hongtao Liu; Yang Liu; Jianjun Zhao; Xiaohong Li
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors.Gilad Cohen; Guillermo Sapiro; Raja Giryes
2019-09-14
Natural Language Adversarial Attacks and Defenses in Word Level.Xiaosen Wang; Hao Jin; Kun He
2019-09-13
Adversarial Attack on Skeleton-based Human Action Recognition.Jian Liu; Naveed Akhtar; Ajmal Mian
Say What I Want: Towards the Dark Side of Neural Dialogue Models.Haochen Liu; Tyler Derr; Zitao Liu; Jiliang Tang
White-Box Adversarial Defense via Self-Supervised Data Estimation.Zudi Lin; Hanspeter Pfister; Ziming Zhang
Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix.Chaomin Shen; Yaxin Peng; Guixu Zhang; Jinsong Fan
2019-09-12
Inspecting adversarial examples using the Fisher information.Jörg Martin; Clemens Elster
An Empirical Investigation of Randomized Defenses against Adversarial Attacks.Yannik Potdevin; Dirk Nowotka; Vijay Ganesh
Transferable Adversarial Robustness using Adversarially Trained Autoencoders.Pratik Vaishnavi; Kevin Eykholt; Atul Prakash; Amir Rahmati
2019-09-11
Feedback Learning for Improving the Robustness of Neural Networks.Chang Song; Zuoguan Wang; Hai Li
Sparse and Imperceivable Adversarial Attacks.Francesco Croce; Matthias Hein
2019-09-10
Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification.Eitan Rothberg; Tingting Chen; Luo Jie; Hao Ji
Identifying and Resisting Adversarial Videos Using Temporal Consistency.Xiaojun Jia; Xingxing Wei; Xiaochun Cao
Effectiveness of Adversarial Examples and Defenses for Malware Classification.Robert Podschwadt; Hassan Takabi
Towards Noise-Robust Neural Networks via Progressive Adversarial Training.Hang Yu; Aishan Liu; Xianglong Liu; Jichen Yang; Chongzhi Zhang
UPC: Learning Universal Physical Camouflage Attacks on Object Detectors.Lifeng Huang; Chengying Gao; Yuyin Zhou; Changqing Zou; Cihang Xie; Alan Yuille; Ning Liu
FDA: Feature Disruptive Attack.Aditya Ganeshan; B. S. Vivek; R. Venkatesh Babu
Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection.Byunggill Joe; Sung Ju Hwang; Insik Shin
Toward Finding The Global Optimal of Adversarial Examples.Zhenxin Xiao; Kai-Wei Chang; Cho-Jui Hsieh
2019-09-09
Adversarial Robustness Against the Union of Multiple Perturbation Models.Pratyush Maini; Eric Wong; J. Zico Kolter
DeepObfuscator: Obfuscating Intermediate Representations with Privacy-Preserving Adversarial Learning on Smartphones. (1%)Ang Li; Jiayi Guo; Huanrui Yang; Flora D. Salim; Yiran Chen
2019-09-08
STA: Adversarial Attacks on Siamese Trackers.Xugang Wu; Xiaoping Wang; Xu Zhou; Songlei Jian
When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures.Gil Fidel; Ron Bitton; Asaf Shabtai
2019-09-06
Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification.Yichao Zhou; Jyun-Yu Jiang; Kai-Wei Chang; Wei Wang
Natural Adversarial Sentence Generation with Gradient-based Perturbation.Yu-Lun Hsieh; Minhao Cheng; Da-Cheng Juan; Wei Wei; Wen-Lian Hsu; Cho-Jui Hsieh
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information.Yiren Zhao; Ilia Shumailov; Han Cui; Xitong Gao; Robert Mullins; Ross Anderson
2019-09-05
Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents.Xian Yeow Lee; Sambit Ghadai; Kai Liang Tan; Chinmay Hegde; Soumik Sarkar
Adversarial Examples with Difficult Common Words for Paraphrase Identification.Zhouxing Shi; Minlie Huang; Ting Yao; Jingfang Xu
2019-09-04
Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?Alfred Laugros; Alice Caplier; Matthieu Ospici
2019-09-03
Certified Robustness to Adversarial Word Substitutions.Robin Jia; Aditi Raghunathan; Kerem Göksel; Percy Liang
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation.Po-Sen Huang; Robert Stanforth; Johannes Welbl; Chris Dyer; Dani Yogatama; Sven Gowal; Krishnamurthy Dvijotham; Pushmeet Kohli
2019-09-02
Metric Learning for Adversarial Robustness.Chengzhi Mao; Ziyuan Zhong; Junfeng Yang; Carl Vondrick; Baishakhi Ray
2019-08-29
Adversarial Training Methods for Network Embedding.Quanyu Dai; Xiao Shen; Liang Zhang; Qiang Li; Dan Wang
Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness.Ling Liu; Wenqi Wei; Ka-Ho Chow; Margaret Loper; Emre Gursoy; Stacey Truex; Yanzhao Wu
Defeating Misclassification Attacks Against Transfer Learning.Bang Wu; Shuo Wang; Xingliang Yuan; Cong Wang; Carsten Rudolph; Xiangwen Yang
Universal, transferable and targeted adversarial attacks.Junde Wu; Rao Fu
2019-08-26
A Statistical Defense Approach for Detecting Adversarial Examples.Alessandro Cennamo; Ido Freeman; Anton Kummert
Gated Convolutional Networks with Hybrid Connectivity for Image Classification.Chuanguang Yang; Zhulin An; Hui Zhu; Xiaolong Hu; Kun Zhang; Kaiqiang Xu; Chao Li; Yongjun Xu
2019-08-25
Adversarial Edit Attacks for Tree Data.Benjamin Paaßen
advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns.Zhibo Wang; Siyan Zheng; Mengkai Song; Qian Wang; Alireza Rahimpour; Hairong Qi
2019-08-24
Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower.Giorgos Tolias; Filip Radenovic; Ondřej Chum
2019-08-23
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.Dou Goodman; Xingjian Li; Jun Huan; Tao Wei
AdvHat: Real-world adversarial attack on ArcFace Face ID system.Stepan Komkov; Aleksandr Petiushko
2019-08-22
Saliency Methods for Explaining Adversarial Attacks.Jindong Gu; Volker Tresp
2019-08-21
Testing Robustness Against Unforeseen Adversaries.Daniel Kang; Yi Sun; Dan Hendrycks; Tom Brown; Jacob Steinhardt
Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples.Marcus Soll; Tobias Hinz; Sven Magg; Stefan Wermter
2019-08-20
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks.Ka-Ho Chow; Wenqi Wei; Yanzhao Wu; Ling Liu
Transferring Robustness for Graph Neural Network Against Poisoning Attacks.Xianfeng Tang; Yandong Li; Yiwei Sun; Huaxiu Yao; Prasenjit Mitra; Suhang Wang
2019-08-19
Universal Adversarial Triggers for NLP.Eric Wallace; Shi Feng; Nikhil Kandpal; Matt Gardner; Sameer Singh
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses.Xiao Wang; Siyue Wang; Pin-Yu Chen; Yanzhi Wang; Brian Kulis; Xue Lin; Peter Chin
Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries.Fnu Suya; Jianfeng Chi; David Evans; Yuan Tian
2019-08-18
On the Robustness of Human Pose Estimation.Sahil Shah; Naman Jain; Abhishek Sharma; Arjun Jain
Adversarial Defense by Suppressing High-frequency Components.Zhendong Zhang; Cheolkon Jung; Xiaolong Liang
2019-08-17
Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation.Yuh-Shyang Wang; Tsui-Wei Weng; Luca Daniel
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks.Jiadong Lin; Chuanbiao Song; Kun He; Liwei Wang; John E. Hopcroft
2019-08-16
Adversarial point perturbations on 3D objects.Daniel Liu; Ronald Yu; Hao Su
2019-08-14
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once.Jiangfan Han; Xiaoyi Dong; Ruimao Zhang; Dongdong Chen; Weiming Zhang; Nenghai Yu; Ping Luo; Xiaogang Wang
AdvFaces: Adversarial Face Synthesis.Debayan Deb; Jianbang Zhang; Anil K. Jain
DAPAS: Denoising Autoencoder to Prevent Adversarial attack in Semantic Segmentation.Seungju Cho; Tae Joon Jun; Byungsoo Oh; Daeyoung Kim
2019-08-12
On Defending Against Label Flipping Attacks on Malware Detection Systems.Rahim Taheri; Reza Javidan; Mohammad Shojafar; Zahra Pooranian; Ali Miri; Mauro Conti
Adversarial Neural Pruning with Latent Vulnerability Suppression.Divyam Madaan; Jinwoo Shin; Sung Ju Hwang
2019-08-09
On the Adversarial Robustness of Neural Networks without Weight Transport.Mohamed Akrout
2019-08-08
Defending Against Adversarial Iris Examples Using Wavelet Decomposition.Sobhan Soleymani; Ali Dabouei; Jeremy Dawson; Nasser M. Nasrabadi
Universal Adversarial Audio Perturbations.Sajjad Abdoli; Luiz G. Hafemann; Jerome Rony; Ismail Ben Ayed; Patrick Cardinal; Alessandro L. Koerich
2019-08-07
Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations.Andras Rozsa; Terrance E. Boult
Investigating Decision Boundaries of Trained Neural Networks.Roozbeh Yousefzadeh; Dianne P O'Leary
2019-08-06
Explaining Deep Neural Networks Using Spectrum-Based Fault Localization.Youcheng Sun; Hana Chockler; Xiaowei Huang; Daniel Kroening
MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks.Chen Ma; Chenxu Zhao; Hailin Shi; Li Chen; Junhai Yong; Dan Zeng
BlurNet: Defense by Filtering the Feature Maps.Ravi Raju; Mikko Lipasti
2019-08-05
Random Directional Attack for Fooling Deep Neural Networks.Wenjian Luo; Chenwang Wu; Nan Zhou; Li Ni
Adversarial Self-Defense for Cycle-Consistent GANs.Dina Bashkirova; Ben Usman; Kate Saenko
Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve.Dang Duy Thang; Toshihiro Matsui
A principled approach for generating adversarial images under non-smooth dissimilarity metrics.Aram-Alexandre Pooladian; Chris Finlay; Tim Hoheisel; Adam Oberman
Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems.Lea Schönherr; Thorsten Eisenhofer; Steffen Zeiler; Thorsten Holz; Dorothea Kolossa
2019-08-04
A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models.Heng Chang; Yu Rong; Tingyang Xu; Wenbing Huang; Honglei Zhang; Peng Cui; Wenwu Zhu; Junzhou Huang
2019-08-03
Exploring the Robustness of NMT Systems to Nonsensical Inputs.Akshay Chaturvedi; Abijith KP; Utpal Garain
2019-08-02
AdvGAN++: Harnessing latent layers for adversary generation.Puneet Mangla; Surgan Jandial; Sakshi Varshney; Vineeth N Balasubramanian
2019-08-01
Black-box Adversarial ML Attack on Modulation Classification.Muhammad Usama; Junaid Qadir; Ala Al-Fuqaha
Robustifying deep networks for image segmentation.Zheng Liu; Jinnian Zhang; Varun Jog; Po-Ling Loh; Alan B McMillan
2019-07-31
Adversarial Robustness Curves.Christina Göpfert; Jan Philip Göpfert; Barbara Hammer
Optimal Attacks on Reinforcement Learning Policies.Alessio Russo; Alexandre Proutiere
2019-07-30
Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation.Utku Ozbulak; Arnout Van Messem; Wesley De Neve
Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding.Utku Ozbulak; Arnout Van Messem; Wesley De Neve
2019-07-28
Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples.Hossein Hosseini; Sreeram Kannan; Radha Poovendran
2019-07-27
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment.Di Jin; Zhijing Jin; Joey Tianyi Zhou; Peter Szolovits
2019-07-26
Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin.Kaiwen Wu; Yaoliang Yu
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method.Pu Zhao; Sijia Liu; Pin-Yu Chen; Nghia Hoang; Kaidi Xu; Bhavya Kailkhura; Xue Lin
2019-07-24
Towards Adversarially Robust Object Detection.Haichao Zhang; Jianyu Wang
Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks.Haichao Zhang; Jianyu Wang
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training.Haichao Zhang; Jianyu Wang
Weakly Supervised Localization using Min-Max Entropy: an Interpretable Framework.Soufiane Belharbi; Jérôme Rony; Jose Dolz; Ismail Ben Ayed; Luke McCaffrey; Eric Granger
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems.Xingjun Ma; Yuhao Niu; Lin Gu; Yisen Wang; Yitian Zhao; James Bailey; Feng Lu
2019-07-23
Enhancing Adversarial Example Transferability with an Intermediate Level Attack.Qian Huang; Isay Katsman; Horace He; Zeqi Gu; Serge Belongie; Ser-Nam Lim
2019-07-21
Characterizing Attacks on Deep Reinforcement Learning.Xinlei Pan; Chaowei Xiao; Warren He; Shuang Yang; Jian Peng; Mingjie Sun; Jinfeng Yi; Zijiang Yang; Mingyan Liu; Bo Li; Dawn Song
2019-07-17
Connecting Lyapunov Control Theory to Adversarial Attacks.Arash Rahnama; Andre T. Nguyen; Edward Raff
Robustness properties of Facebook's ResNeXt WSL models.A. Emin Orhan
Constrained Concealment Attacks against Reconstruction-based Anomaly Detectors in Industrial Control Systems.Alessandro Erba; Riccardo Taormina; Stefano Galelli; Marcello Pogliani; Michele Carminati; Stefano Zanero; Nils Ole Tippenhauer
2019-07-16
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods.Arif Siddiqi
Latent Adversarial Defence with Boundary-guided Generation.Xiaowei Zhou; Ivor W. Tsang; Jie Yin
Natural Adversarial Examples.Dan Hendrycks; Kevin Zhao; Steven Basart; Jacob Steinhardt; Dawn Song
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving.Yulong Cao; Chaowei Xiao; Benjamin Cyr; Yimeng Zhou; Won Park; Sara Rampazzi; Qi Alfred Chen; Kevin Fu; Z. Morley Mao
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics.Yuxin Ma; Tiankai Xie; Jundong Li; Ross Maciejewski
2019-07-15
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning.Bao Wang; Stanley J. Osher
Recovery Guarantees for Compressible Signals with Adversarial Noise.Jasjeet Dhaliwal; Kyle Hambrook
2019-07-14
Measuring the Transferability of Adversarial Examples.Deyan Petrov; Timothy M. Hospedales
2019-07-12
Unsupervised Adversarial Attacks on Deep Feature-based Retrieval with GAN.Guoping Zhao; Mingyu Zhang; Jiajun Liu; Ji-Rong Wen
Stateful Detection of Black-Box Adversarial Attacks.Steven Chen; Nicholas Carlini; David Wagner
Generative Modeling by Estimating Gradients of the Data Distribution.Yang Song; Stefano Ermon
2019-07-11
Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn.Ziv Katzir; Yuval Elovici
Adversarial Objects Against LiDAR-Based Autonomous Driving Systems.Yulong Cao; Chaowei Xiao; Dawei Yang; Jing Fang; Ruigang Yang; Mingyan Liu; Bo Li
2019-07-10
Metamorphic Detection of Adversarial Examples in Deep Learning Models With Affine Transformations.Rohan Reddy Mekala; Gudjon Einar Magnusson; Adam Porter; Mikael Lindvall; Madeline Diep
2019-07-09
PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving.Zelun Kong; Junfeng Guo; Ang Li; Cong Liu
2019-07-06
Affine Disentangled GAN for Interpretable and Robust AV Perception.Letao Liu; Martin Saerbeck; Justin Dauwels
2019-07-05
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions.Yao Qin; Nicholas Frosst; Sara Sabour; Colin Raffel; Garrison Cottrell; Geoffrey Hinton
2019-07-04
Adversarial Robustness through Local Linearization.Chongli Qin; James Martens; Sven Gowal; Dilip Krishnan; Krishnamurthy Dvijotham; Alhussein Fawzi; Soham De; Robert Stanforth; Pushmeet Kohli
Adversarial Attacks in Sound Event Classification.Vinod Subramanian; Emmanouil Benetos; Ning Xu; SKoT McDonald; Mark Sandler
2019-07-03
Robust Synthesis of Adversarial Visual Examples Using a Deep Image Prior.Thomas Gittings; Steve Schneider; John Collomosse
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack.Francesco Croce; Matthias Hein
2019-07-02
Efficient Cyber Attacks Detection in Industrial Control Systems Using Lightweight Neural Networks and PCA.Moshe Kravchik; Asaf Shabtai
Treant: Training Evasion-Aware Decision Trees.Stefano Calzavara; Claudio Lucchese; Gabriele Tolomei; Seyum Assefa Abebe; Salvatore Orlando
2019-07-01
Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network".Roland S. Zimmermann
Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation.Nader Asadi; AmirMohammad Sarfi; Sahba Tahsini; Mahdi Eftekhari
Accurate, reliable and fast robustness evaluation.Wieland Brendel; Jonas Rauber; Matthias Kümmerer; Ivan Ustyuzhaninov; Matthias Bethge
2019-06-30
Fooling a Real Car with Adversarial Traffic Signs.Nir Morgulis; Alexander Kreines; Shachar Mendelowitz; Yuval Weisglass
2019-06-28
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty.Dan Hendrycks; Mantas Mazeika; Saurav Kadavath; Dawn Song
Certifiable Robustness and Robust Training for Graph Convolutional Networks.Daniel Zügner; Stephan Günnemann
Learning to Cope with Adversarial Attacks.Xian Yeow Lee; Aaron Havens; Girish Chowdhary; Soumik Sarkar
Robustness Guarantees for Deep Neural Networks on Videos.Min Wu; Marta Kwiatkowska
2019-06-27
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense.Guanxiong Liu; Issa Khalil; Abdallah Khreishah
Adversarial Robustness via Label-Smoothing.Morgane Goibert; Elvis Dohmatob
Evolving Robust Neural Architectures to Defend from Adversarial Attacks.Shashank Kotyan; Danilo Vasconcellos Vargas
2019-06-26
The Adversarial Robustness of Sampling.Omri Ben-Eliezer; Eylon Yogev
Defending Adversarial Attacks by Correcting logits.Yifeng Li; Lingxi Xie; Ya Zhang; Rui Zhang; Yanfeng Wang; Qi Tian
2019-06-25
Quantitative Verification of Neural Networks And its Security Applications.Teodora Baluta; Shiqi Shen; Shweta Shinde; Kuldeep S. Meel; Prateek Saxena
Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection.Kang Liu; Haoyu Yang; Yuzhe Ma; Benjamin Tan; Bei Yu; Evangeline F. Y. Young; Ramesh Karri; Siddharth Garg
2019-06-24
Deceptive Reinforcement Learning Under Adversarial Manipulations on Cost Signals.Yunhan Huang; Quanyan Zhu
2019-06-22
Defending Against Adversarial Examples with K-Nearest Neighbor.Chawin Sitawarin; David Wagner
2019-06-21
Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations.Yuezun Li; Xin Yang; Baoyuan Wu; Siwei Lyu
A Fourier Perspective on Model Robustness in Computer Vision.Dong Yin; Raphael Gontijo Lopes; Jonathon Shlens; Ekin D. Cubuk; Justin Gilmer
Evolution Attack On Neural Networks.YiGui Luo; RuiJia Yang; Wei Sha; WeiYi Ding; YouTeng Sun; YiSi Wang
Adversarial Examples to Fool Iris Recognition Systems.Sobhan Soleymani; Ali Dabouei; Jeremy Dawson; Nasser M. Nasrabadi
A Cyclically-Trained Adversarial Network for Invariant Representation Learning.Jiawei Chen; Janusz Konrad; Prakash Ishwar
2019-06-20
On Physical Adversarial Patches for Object Detection.Mark Lee; Zico Kolter
2019-06-19
Catfish Effect Between Internal and External Attackers: Being Semi-honest is Helpful.Hanqing Liu; Na Ruan; Joseph K. Liu
Improving the robustness of ImageNet classifiers using elements of human visual cognition.A. Emin Orhan; Brenden M. Lake
A unified view on differential privacy and robustness to adversarial examples.Rafael Pinot; Florian Yger; Cédric Gouy-Pailler; Jamal Atif
Convergence of Adversarial Training in Overparametrized Networks.Ruiqi Gao; Tianle Cai; Haochuan Li; Liwei Wang; Cho-Jui Hsieh; Jason D. Lee
Global Adversarial Attacks for Assessing Deep Learning Robustness.Hanbin Hu; Mit Shah; Jianhua Z. Huang; Peng Li
Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield.Dou Goodman; Tao Wei
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing.Haonan Qiu; Chaowei Xiao; Lei Yang; Xinchen Yan; Honglak Lee; Bo Li
2019-06-17
Adversarial attacks on Copyright Detection Systems.Parsa Saadatpanah; Ali Shafahi; Tom Goldstein
Improving Black-box Adversarial Attacks with a Transfer-based Prior.Shuyu Cheng; Yinpeng Dong; Tianyu Pang; Hang Su; Jun Zhu
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks.Felix Assion; Peter Schlicht; Florens Greßner; Wiebke Günther; Fabian Hüger; Nico Schmidt; Umair Rasheed
2019-06-16
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Accuracy.Alex Lamb; Vikas Verma; Juho Kannala; Yoshua Bengio
Defending Against Adversarial Attacks Using Random Forests.Yifan Ding; Liqiang Wang; Huan Zhang; Jinfeng Yi; Deliang Fan; Boqing Gong
2019-06-15
Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences.Shashank Kotyan; Danilo Vasconcellos Vargas; Moe Matsuki
2019-06-14
Adversarial Training Can Hurt Generalization.Aditi Raghunathan; Sang Michael Xie; Fanny Yang; John C. Duchi; Percy Liang
Towards Compact and Robust Deep Neural Networks.Vikash Sehwag; Shiqi Wang; Prateek Mittal; Suman Jana
Perceptual Based Adversarial Audio Attacks.Joseph Szurley; J. Zico Kolter
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks.Thomas Brunner; Frederik Diehl; Alois Knoll
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks.Felipe A. Mejia; Paul Gamble; Zigfried Hampel-Arias; Michael Lomnitz; Nina Lopatina; Lucas Tindall; Maria Alejandra Barrios
Towards Stable and Efficient Training of Verifiably Robust Neural Networks.Huan Zhang; Hongge Chen; Chaowei Xiao; Bo Li; Duane Boning; Cho-Jui Hsieh
Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary.Shashank Kotyan; Danilo Vasconcellos Vargas
2019-06-13
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks.Rajeev Sahay; Rehana Mahfuz; Aly El Gamal
Lower Bounds for Adversarially Robust PAC Learning.Dimitrios I. Diochnos; Saeed Mahloujifar; Mohammad Mahmoody
2019-06-12
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers.Guang-He Lee; Yang Yuan; Shiyu Chang; Tommi S. Jaakkola
2019-06-11
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks.Ziang Yan; Yiwen Guo; Changshui Zhang
Mimic and Fool: A Task Agnostic Adversarial Attack.Akshay Chaturvedi; Utpal Garain
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks.Mahyar Fazlyab; Alexander Robey; Hamed Hassani; Manfred Morari; George J. Pappas
2019-06-10
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles.Markus Kettunen; Erik Härkönen; Jaakko Lehtinen
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective.Lu Wang; Xuanqing Liu; Jinfeng Yi; Zhi-Hua Zhou; Cho-Jui Hsieh
Robustness Verification of Tree-based Models.Hongge Chen; Huan Zhang; Si Si; Yang Li; Duane Boning; Cho-Jui Hsieh
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective.Kaidi Xu; Hongge Chen; Sijia Liu; Pin-Yu Chen; Tsui-Wei Weng; Mingyi Hong; Xue Lin
2019-06-09
On the Vulnerability of Capsule Networks to Adversarial Attacks.Felix Michels; Tobias Uelwer; Eric Upschulte; Stefan Harmeling
Intriguing properties of adversarial training.Cihang Xie; Alan Yuille
Improved Adversarial Robustness via Logit Regularization Methods.Cecilia Summers; Michael J. Dinneen
Attacking Graph Convolutional Networks via Rewiring.Yao Ma; Suhang Wang; Tyler Derr; Lingfei Wu; Jiliang Tang
Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness.Jingkang Wang; Tianyun Zhang; Sijia Liu; Pin-Yu Chen; Jiacen Xu; Makan Fardad; Bo Li
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.Hadi Salman; Greg Yang; Jerry Li; Pengchuan Zhang; Huan Zhang; Ilya Razenshteyn; Sebastien Bubeck
2019-06-08
Strategies to architect AI Safety: Defense to guard AI from Adversaries.Rajagopal. A; Nirmala. V
Sensitivity of Deep Convolutional Networks to Gabor Noise.Kenneth T. Co; Luis Muñoz-González; Emil C. Lupu
ML-LOO: Detecting Adversarial Examples with Feature Attribution.Puyudi Yang; Jianbo Chen; Cho-Jui Hsieh; Jane-Ling Wang; Michael I. Jordan
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks.Maksym Andriushchenko; Matthias Hein
Making targeted black-box evasion attacks effective and efficient.Mika Juuti; Buse Gul Atli; N. Asokan
Defending Against Universal Attacks Through Selective Feature Regeneration.Tejas Borkar; Felix Heide; Lina Karam
2019-06-07
A cryptographic approach to black box adversarial machine learning.Kevin Shi; Daniel Hsu; Allison Bishop
Using learned optimizers to make models robust to input noise.Luke Metz; Niru Maheswaranathan; Jonathon Shlens; Jascha Sohl-Dickstein; Ekin D. Cubuk
Efficient Project Gradient Descent for Ensemble Adversarial Attack.Fanyou Wu; Rado Gazo; Eva Haviarova; Bedrich Benes
Inductive Bias of Gradient Descent based Adversarial Training on Separable Data.Yan Li; Ethan X. Fang; Huan Xu; Tuo Zhao
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness.Walt Woods; Jack Chen; Christof Teuscher
Robustness for Non-Parametric Classification: A Generic Attack and Defense.Yao-Yuan Yang; Cyrus Rashtchian; Yizhen Wang; Kamalika Chaudhuri
2019-06-06
Robust Attacks against Multiple Classifiers.Juan C. Perdomo; Yaron Singer
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation.Raphael Gontijo Lopes; Dong Yin; Ben Poole; Justin Gilmer; Ekin D. Cubuk
Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric.Yujun Shi; Benben Liao; Guangyong Chen; Yun Liu; Ming-Ming Cheng; Jiashi Feng
Should Adversarial Attacks Use Pixel p-Norm?Ayon Sen; Xiaojin Zhu; Liam Marshall; Robert Nowak
Image Synthesis with a Single (Robust) Classifier.Shibani Santurkar; Dimitris Tsipras; Brandon Tran; Andrew Ilyas; Logan Engstrom; Aleksander Madry
2019-06-05
MNIST-C: A Robustness Benchmark for Computer Vision.Norman Mu; Justin Gilmer
Enhancing Gradient-based Attacks with Symbolic Intervals.Shiqi Wang; Yizheng Chen; Ahmed Abdou; Suman Jana
Query-efficient Meta Attack to Deep Neural Networks.Jiawei Du; Hu Zhang; Joey Tianyi Zhou; Yi Yang; Jiashi Feng
c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation.Minh N. Vu; Truc D. Nguyen; NhatHai Phan; Ralucca Gera; My T. Thai
Multi-way Encoding for Robustness.Donghyun Kim; Sarah Adel Bargal; Jianming Zhang; Stan Sclaroff
2019-06-04
Adversarial Training is a Form of Data-dependent Operator Norm Regularization.Kevin Roth; Yannic Kilcher; Thomas Hofmann
2019-06-03
Adversarial Exploitation of Policy Imitation.Vahid Behzadan; William Hsu
Adversarial Risk Bounds for Neural Networks through Sparsity based Compression.Emilio Rafael Balda; Arash Behboodi; Niklas Koep; Rudolf Mathar
The Adversarial Machine Learning Conundrum: Can The Insecurity of ML Become The Achilles' Heel of Cognitive Networks?Muhammad Usama; Junaid Qadir; Ala Al-Fuqaha; Mounir Hamdi
Adversarial Robustness as a Prior for Learned Representations.Logan Engstrom; Andrew Ilyas; Shibani Santurkar; Dimitris Tsipras; Brandon Tran; Aleksander Madry
RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies.Vahid Behzadan; William Hsu
Achieving Generalizable Robustness of Deep Neural Networks by Stability Training.Jan Laermann; Wojciech Samek; Nils Strodthoff
A Surprising Density of Illusionable Natural Speech.Melody Y. Guan; Gregory Valiant
Fast and Stable Interval Bounds Propagation for Training Verifiably Robust Models.Paweł Morawiecki; Przemysław Spurek; Marek Śmieja; Jacek Tabor
Understanding the Limitations of Conditional Generative Models.Ethan Fetaya; Jörn-Henrik Jacobsen; Will Grathwohl; Richard Zemel
2019-06-02
Adversarially Robust Generalization Just Requires More Unlabeled Data.Runtian Zhai; Tianle Cai; Di He; Chen Dan; Kun He; John Hopcroft; Liwei Wang
2019-06-01
Adversarial Examples for Edge Detection: They Exist, and They Transfer.Christian Cosgrove; Alan L. Yuille
Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification.Sid Ahmed Fezza; Yassine Bakhti; Wassim Hamidouche; Olivier Déforges
Enhancing Transformation-based Defenses using a Distribution Classifier.Connie Kou; Hwee Kuan Lee; Ee-Chien Chang; Teck Khim Ng
2019-05-31
Unlabeled Data Improves Adversarial Robustness.Yair Carmon; Aditi Raghunathan; Ludwig Schmidt; Percy Liang; John C. Duchi
Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness.Andrey Malinin; Mark Gales
Are Labels Required for Improving Adversarial Robustness?Jonathan Uesato; Jean-Baptiste Alayrac; Po-Sen Huang; Robert Stanforth; Alhussein Fawzi; Pushmeet Kohli
2019-05-30
Real-Time Adversarial Attacks.Yuan Gong; Boyang Li; Christian Poellabauer; Yiyu Shi
Residual Networks as Nonlinear Systems: Stability Analysis using Linearization.Kai Rothauge; Zhewei Yao; Zixi Hu; Michael W. Mahoney
Identifying Classes Susceptible to Adversarial Attacks.Rangeet Pan; Md Johirul Islam; Shibbir Ahmed; Hridesh Rajan
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness.Adnan Siraj Rakin; Zhezhi He; Li Yang; Yanzhi Wang; Liqiang Wang; Deliang Fan
Interpretable Adversarial Training for Text.Samuel Barham; Soheil Feizi
2019-05-29
Bandlimiting Neural Networks Against Adversarial Attacks.Yuping Lin; Kasra Ahmadi K. A.; Hui Jiang
Misleading Authorship Attribution of Source Code using Adversarial Learning.Erwin Quiring; Alwin Maier; Konrad Rieck
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward.Adnan Qayyum; Muhammad Usama; Junaid Qadir; Ala Al-Fuqaha
Functional Adversarial Attacks.Cassidy Laidlaw; Soheil Feizi
CopyCAT: Taking Control of Neural Policies with Constant Attacks.Léonard Hussenot; Matthieu Geist; Olivier Pietquin
2019-05-28
ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation.Yuzhe Yang; Guo Zhang; Dina Katabi; Zhi Xu
Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics.Yi Xiang Marcus Tan; Alfonso Iacovazzi; Ivan Homoliak; Yuval Elovici; Alexander Binder
Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss.Pengcheng Li; Jinfeng Yi; Bowen Zhou; Lijun Zhang
Snooping Attacks on Deep Reinforcement Learning.Matthew Inkawhich; Yiran Chen; Hai Li
High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks.Haohan Wang; Xindi Wu; Zeyi Huang; Eric P. Xing
Expected Tight Bounds for Robust Training.Salman Alsubaihi; Adel Bibi; Modar Alfadly; Abdullah Hamdi; Bernard Ghanem
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness.Saeed Mahloujifar; Xiao Zhang; Mohammad Mahmoody; David Evans
Cross-Domain Transferability of Adversarial Perturbations.Muzammal Naseer; Salman H. Khan; Harris Khan; Fahad Shahbaz Khan; Fatih Porikli
Certifiably Robust Interpretation in Deep Learning.Alexander Levine; Sahil Singla; Soheil Feizi
2019-05-27
Brain-inspired reverse adversarial examples.Shaokai Ye; Sia Huat Tan; Kaidi Xu; Yanzhi Wang; Chenglong Bao; Kaisheng Ma
Label Universal Targeted Attack.Naveed Akhtar; Mohammad A. A. K. Jalwana; Mohammed Bennamoun; Ajmal Mian
Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking.Yunhan Jia; Yantao Lu; Junjie Shen; Qi Alfred Chen; Zhenyu Zhong; Tao Wei
Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$.Francesco Croce; Matthias Hein
Scaleable input gradient regularization for adversarial robustness.Chris Finlay; Adam M Oberman
Combating Adversarial Misspellings with Robust Word Recognition.Danish Pruthi; Bhuwan Dhingra; Zachary C. Lipton
Analyzing the Interpretability Robustness of Self-Explaining Models.Haizhong Zheng; Earlence Fernandes; Atul Prakash
Adversarially Robust Learning Could Leverage Computational Hardness.Sanjam Garg; Somesh Jha; Saeed Mahloujifar; Mohammad Mahmoody
Unsupervised Euclidean Distance Attack on Network Embedding.Shanqing Yu; Jun Zheng; Jinhuan Wang; Jian Zhang; Lihong Chen; Qi Xuan; Jinyin Chen; Dan Zhang; Qingpeng Zhang
GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification.Xuwang Yin; Soheil Kolouri; Gustavo K. Rohde
2019-05-26
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations.Alex Lamb; Jonathan Binas; Anirudh Goyal; Sandeep Subramanian; Ioannis Mitliagkas; Denis Kazakov; Yoshua Bengio; Michael C. Mozer
Non-Determinism in Neural Networks for Adversarial Robustness.Daanish Ali Khan; Linhong Li; Ninghao Sha; Zhuoran Liu; Abelino Jimenez; Bhiksha Raj; Rita Singh
Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders.Hebi Li; Qi Xiao; Shixin Tian; Jin Tian
Rearchitecting Classification Frameworks For Increased Robustness.Varun Chandrasekaran; Brian Tang; Nicolas Papernot; Kassem Fawaz; Somesh Jha; Xi Wu
Robust Classification using Robust Feature Augmentation.Kevin Eykholt; Swati Gupta; Atul Prakash; Amir Rahmati; Pratik Vaishnavi; Haizhong Zheng
Generalizable Adversarial Attacks Using Generative Models.Avishek Joey Bose; Andre Cianflone; William L. Hamilton
2019-05-25
Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks.Jirong Yi; Hui Xie; Leixin Zhou; Xiaodong Wu; Weiyu Xu; Raghuraman Mudumbai
Adversarial Distillation for Ordered Top-k Attacks.Zekun Zhang; Tianfu Wu
Adversarial Policies: Attacking Deep Reinforcement Learning.Adam Gleave; Michael Dennis; Cody Wild; Neel Kant; Sergey Levine; Stuart Russell
Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness.Tianyu Pang; Kun Xu; Yinpeng Dong; Chao Du; Ning Chen; Jun Zhu
2019-05-24
Robustness to Adversarial Perturbations in Learning from Incomplete Data.Amir Najafi; Shin-ichi Maeda; Masanori Koyama; Takeru Miyato
Enhancing Adversarial Defense by k-Winners-Take-All.Chang Xiao; Peilin Zhong; Changxi Zheng
Power up! Robust Graph Convolutional Network via Graph Powering.Ming Jin; Heng Chang; Wenwu Zhu; Somayeh Sojoudi
2019-05-23
A Direct Approach to Robust Deep Learning Using Adversarial Networks.Huaxia Wang; Chun-Nam Yu
PHom-GeM: Persistent Homology for Generative Models.Jeremy Charlier; Radu State; Jean Hilger
Thwarting finite difference adversarial attacks with output randomization.Haidar Khan; Daniel Park; Azer Khan; Bülent Yener
Interpreting Adversarially Trained Convolutional Neural Networks.Tianyuan Zhang; Zhanxing Zhu
Adversarially Robust Distillation.Micah Goldblum; Liam Fowl; Soheil Feizi; Tom Goldstein
2019-05-22
Convergence and Margin of Adversarial Training on Separable Data.Zachary Charles; Shashank Rajput; Stephen Wright; Dimitris Papailiopoulos
Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection.Jonathan Aigrain; Marcin Detyniecki
2019-05-21
DoPa: A Fast and Comprehensive CNN Defense Methodology against Physical Adversarial Attacks.Zirui Xu; Fuxun Yu; Xiang Chen
2019-05-20
Adversarially robust transfer learning.Ali Shafahi; Parsa Saadatpanah; Chen Zhu; Amin Ghiasi; Christoph Studer; David Jacobs; Tom Goldstein
2019-05-19
Testing DNN Image Classifiers for Confusion & Bias Errors.Yuchi Tian; Ziyuan Zhong; Vicente Ordonez; Gail Kaiser; Baishakhi Ray
2019-05-18
What Do Adversarially Robust Models Look At?Takahiro Itazuri; Yoshihiro Fukuhara; Hirokatsu Kataoka; Shigeo Morishima
Taking Care of The Discretization Problem: A Black-Box Adversarial Image Attack in Discrete Integer Domain.Yuchao Duan; Zhe Zhao; Lei Bu; Fu Song
2019-05-17
POPQORN: Quantifying Robustness of Recurrent Neural Networks.Ching-Yun Ko; Zhaoyang Lyu; Tsui-Wei Weng; Luca Daniel; Ngai Wong; Dahua Lin
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models.Nicholas Carlini
Simple Black-box Adversarial Attacks.Chuan Guo; Jacob R. Gardner; Yurong You; Andrew Gordon Wilson; Kilian Q. Weinberger
2019-05-16
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization.Seungyong Moon; Gaon An; Hyun Oh Song
2019-05-15
On Norm-Agnostic Robustness of Adversarial Training.Bai Li; Changyou Chen; Wenlin Wang; Lawrence Carin
An Efficient Pre-processing Method to Eliminate Adversarial Effects.Hua Wang; Jie Wang; Zhaoxia Yin
2019-05-14
Robustification of deep net classifiers by key based diversified aggregation with pre-filtering.Olga Taran; Shideh Rezaeifar; Taras Holotyak; Slava Voloshynovskiy
2019-05-13
Adversarial Examples for Electrocardiograms.Xintian Han; Yuxuan Hu; Luca Foschini; Larry Chinitz; Lior Jankelson; Rajesh Ranganath
Analyzing Adversarial Attacks Against Deep Learning for Intrusion Detection in IoT Networks.Olakunle Ibitoye; Omair Shafiq; Ashraf Matrawy
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models.Mayank Singh; Abhishek Sinha; Nupur Kumari; Harshitha Machiraju; Balaji Krishnamurthy; Vineeth N Balasubramanian
2019-05-11
Moving Target Defense for Deep Visual Sensing against Adversarial Examples.Qun Song; Zhenyu Yan; Rui Tan
2019-05-10
Interpreting and Evaluating Neural Network Robustness.Fuxun Yu; Zhuwei Qin; Chenchen Liu; Liang Zhao; Yanzhi Wang; Xiang Chen
On the Connection Between Adversarial Robustness and Saliency Map Interpretability.Christian Etmann; Sebastian Lunz; Peter Maass; Carola-Bibiane Schönlieb
Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables.Yan Xu; Baoyuan Wu; Fumin Shen; Yanbo Fan; Yong Zhang; Heng Tao Shen; Wei Liu
2019-05-09
Adversarial Defense Framework for Graph Neural Network.Shen Wang; Zhengzhang Chen; Jingchao Ni; Xiao Yu; Zhichun Li; Haifeng Chen; Philip S. Yu
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain.Chris Einar San Agustin
Exploring the Hyperparameter Landscape of Adversarial Robustness.Evelyn Duesterwald; Anupama Murthi; Ganesh Venkataraman; Mathieu Sinn; Deepak Vijaykeerthy
Learning Interpretable Features via Adversarially Robust Optimization.Ashkan Khakzar; Shadi Albarqouni; Nassir Navab
Universal Adversarial Perturbations for Speech Recognition Systems.Paarth Neekhara; Shehzeen Hussain; Prakhar Pandey; Shlomo Dubnov; Julian McAuley; Farinaz Koushanfar
2019-05-08
ROSA: Robust Salient Object Detection against Adversarial Attacks.Haofeng Li; Guanbin Li; Yizhou Yu
Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction.Yunhan Jia; Yantao Lu; Senem Velipasalar; Zhenyu Zhong; Tao Wei
Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems.Kazuya Kakizaki; Kosuke Yoshida
2019-05-07
A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks.Saima Sharmin; Priyadarshini Panda; Syed Shakib Sarwar; Chankyu Lee; Wachirawit Ponghiran; Kaushik Roy
Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study.Chihye Han; Wonjun Yoon; Gihyun Kwon; Seungkyu Nam; Daeshik Kim
An Empirical Evaluation of Adversarial Robustness under Transfer Learning.Todor Davchev; Timos Korres; Stathi Fotiadis; Nick Antonopoulos; Subramanian Ramamoorthy
Adaptive Generation of Unrestricted Adversarial Inputs.Isaac Dunn; Hadrien Pouget; Tom Melham; Daniel Kroening
2019-05-06
Batch Normalization is a Cause of Adversarial Vulnerability.Angus Galloway; Anna Golubeva; Thomas Tanay; Medhat Moussa; Graham W. Taylor
Adversarial Examples Are Not Bugs, They Are Features.Andrew Ilyas; Shibani Santurkar; Dimitris Tsipras; Logan Engstrom; Brandon Tran; Aleksander Madry
2019-05-05
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples.Vikash Sehwag; Arjun Nitin Bhagoji; Liwei Song; Chawin Sitawarin; Daniel Cullina; Mung Chiang; Prateek Mittal
2019-05-03
Transfer of Adversarial Robustness Between Perturbation Types.Daniel Kang; Yi Sun; Tom Brown; Dan Hendrycks; Jacob Steinhardt
2019-05-02
Adversarial Training with Voronoi Constraints.Marc Khoury; Dylan Hadfield-Menell
Weight Map Layer for Noise and Adversarial Attack Robustness.Mohammed Amer; Tomás Maul
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle.Dinghuai Zhang; Tianyuan Zhang; Yiping Lu; Zhanxing Zhu; Bin Dong
2019-05-01
POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm.Jinyin Chen; Mengmeng Su; Shijing Shen; Hui Xiong; Haibin Zheng
Dropping Pixels for Adversarial Robustness.Hossein Hosseini; Sreeram Kannan; Radha Poovendran
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks.Yandong Li; Lijun Li; Liqiang Wang; Tong Zhang; Boqing Gong
2019-04-30
Test Selection for Deep Learning Systems.Wei Ma; Mike Papadakis; Anestis Tsakmalis; Maxime Cordy; Yves Le Traon
Detecting Adversarial Examples through Nonlinear Dimensionality Reduction.Francesco Crecchi; Davide Bacciu; Battista Biggio
2019-04-29
Adversarial Training for Free!Ali Shafahi; Mahyar Najibi; Amin Ghiasi; Zheng Xu; John Dickerson; Christoph Studer; Larry S. Davis; Gavin Taylor; Tom Goldstein
Adversarial Training and Robustness for Multiple Perturbations.Florian Tramèr; Dan Boneh
2019-04-27
Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks.Xiang He; Sibei Yang; Guanbin Li; Haofeng Li; Huiyou Chang; Yizhou Yu
2019-04-26
Robustness Verification of Support Vector Machines.Francesco Ranzato; Marco Zanella
2019-04-24
A Robust Approach for Securing Audio Classification Against Adversarial Attacks.Mohammad Esmaeilpour; Patrick Cardinal; Alessandro Lameiras Koerich
Physical Adversarial Textures that Fool Visual Object Tracking.Rey Reza Wiyatno; Anqi Xu
2019-04-23
Minimizing Perceived Image Quality Loss Through Adversarial Attack Scoping.Kostiantyn Khabarlak; Larysa Koriashkina
2019-04-22
A blessing in disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness.Jiaming Zhang; Jitao Sang; Kaiyuan Xu; Shangxi Wu; Yongli Hu; Yanfeng Sun; Jian Yu
Using Videos to Evaluate Image Model Robustness.Keren Gu; Brandon Yang; Jiquan Ngiam; Quoc Le; Jonathon Shlens
2019-04-21
Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning.Devinder Kumar; Ibrahim Ben-Daya; Kanav Vats; Jeffery Feng; Graham Taylor and; Alexander Wong
2019-04-20
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach.Rahim Taheri; Reza Javidan; Mohammad Shojafar; Vinod P; Mauro Conti
2019-04-19
Salient Object Detection in the Deep Learning Era: An In-Depth Survey.Wenguan Wang; Qiuxia Lai; Huazhu Fu; Jianbing Shen; Haibin Ling; Ruigang Yang
2019-04-18
Fooling automated surveillance cameras: adversarial patches to attack person detection.Simen Thys; Wiebe Van Ranst; Toon Goedemé
2019-04-17
ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks.Guanxiong Liu; Issa Khalil; Abdallah Khreishah
Defensive Quantization: When Efficiency Meets Robustness.Ji Lin; Chuang Gan; Song Han
Interpreting Adversarial Examples with Attributes.Sadaf Gulshad; Jan Hendrik Metzen; Arnold Smeulders; Zeynep Akata
Adversarial Defense Through Network Profiling Based Path Extraction.Yuxian Qiu; Jingwen Leng; Cong Guo; Quan Chen; Chao Li; Minyi Guo; Yuhao Zhu
Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks.Shawn Shan; Emily Willson; Bolun Wang; Bo Li; Haitao Zheng; Ben Y. Zhao
Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers.Ameya Joshi; Amitangshu Mukherjee; Soumik Sarkar; Chinmay Hegde
2019-04-16
Reducing Adversarial Example Transferability Using Gradient Regularization.George Adam; Petr Smirnov; Benjamin Haibe-Kains; Anna Goldenberg
AT-GAN: An Adversarial Generator Model for Non-constrained Adversarial Examples.Xiaosen Wang; Kun He; Chuanbiao Song; Liwei Wang; John E. Hopcroft
2019-04-15
Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction.Alesia Chernikova; Alina Oprea; Cristina Nita-Rotaru; BaekGyu Kim
Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks.Vassili Kovalev; Dmitry Voynov
2019-04-13
Exploiting Vulnerabilities of Load Forecasting Through Adversarial Attacks.Yize Chen; Yushi Tan; Baosen Zhang
2019-04-12
Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense.Lingyun Jiang; Kai Qiao; Ruoxi Qin; Linyuan Wang; Jian Chen; Haibing Bu; Bin Yan
Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients.Yatie Xiao; Chi-Man Pun
Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks.Jun-Ho Choi; Huan Zhang; Jun-Hyuk Kim; Cho-Jui Hsieh; Jong-Seok Lee
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks.David J. Miller; Zhen Xiang; George Kesidis
Unrestricted Adversarial Examples via Semantic Manipulation.Anand Bhattad; Min Jin Chong; Kaizhao Liang; Bo Li; D. A. Forsyth
2019-04-11
Black-Box Decision based Adversarial Attack with Symmetric $\alpha$-stable Distribution.Vignesh Srinivasan; Ercan E. Kuruoglu; Klaus-Robert Müller; Wojciech Samek; Shinichi Nakajima
2019-04-10
Learning to Generate Synthetic Data via Compositing.Shashank Tripathi; Siddhartha Chandra; Amit Agrawal; Ambrish Tyagi; James M. Rehg; Visesh Chari
Black-box Adversarial Attacks on Video Recognition Models.Linxi Jiang; Xingjun Ma; Shaoxiang Chen; James Bailey; Yu-Gang Jiang
2019-04-09
Generation & Evaluation of Adversarial Examples for Malware Obfuscation.Daniel Park; Haidar Khan; Bülent Yener
2019-04-08
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition.Yinpeng Dong; Hang Su; Baoyuan Wu; Zhifeng Li; Wei Liu; Tong Zhang; Jun Zhu
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning.Shahbaz Rezaei; Xin Liu
2019-04-07
JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks.N. Benjamin Erichson; Zhewei Yao; Michael W. Mahoney
Malware Evasion Attack and Defense.Yonghong Huang; Utkarsh Verma; Celeste Fralick; Gabriel Infante-Lopez; Brajesh Kumarz; Carl Woodward
2019-04-06
On Training Robust PDF Malware Classifiers.Yizheng Chen; Shiqi Wang; Dongdong She; Suman Jana
2019-04-05
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks.Yinpeng Dong; Tianyu Pang; Hang Su; Jun Zhu
2019-04-04
White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks.Yotam Gil; Yoav Chai; Or Gorodissky; Jonathan Berant
Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks.Fatemeh Sheikholeslami; Swayambhoo Jain; Georgios B. Giannakis
2019-04-03
Understanding the efficacy, reliability and resiliency of computer vision techniques for malware detection and future research directions.Li Chen
Interpreting Adversarial Examples by Activation Promotion and Suppression.Kaidi Xu; Sijia Liu; Gaoyuan Zhang; Mengshu Sun; Pu Zhao; Quanfu Fan; Chuang Gan; Xue Lin
HopSkipJumpAttack: A Query-Efficient Decision-Based Attack.Jianbo Chen; Michael I. Jordan; Martin J. Wainwright
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.Fred Hohman; Haekyu Park; Caleb Robinson; Duen Horng Chau
2019-04-02
Adversarial Attacks against Deep Saliency Models.Zhaohui Che; Ali Borji; Guangtao Zhai; Suiyi Ling; Guodong Guo; Patrick Le Callet
2019-04-01
Curls & Whey: Boosting Black-Box Adversarial Attacks.Yucheng Shi; Siyu Wang; Yahong Han
Robustness of 3D Deep Learning in an Adversarial Setting.Matthew Wicker; Marta Kwiatkowska
Defending against adversarial attacks by randomized diversification.Olga Taran; Shideh Rezaeifar; Taras Holotyak; Slava Voloshynovskiy
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks.Aamir Mustafa; Salman Khan; Munawar Hayat; Roland Goecke; Jianbing Shen; Ling Shao
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses.Yingwei Li; Song Bai; Cihang Xie; Zhenyu Liao; Xiaohui Shen; Alan L. Yuille
2019-03-31
On the Vulnerability of CNN Classifiers in EEG-Based BCIs.Xiao Zhang; Dongrui Wu
2019-03-29
Adversarial Robustness vs Model Compression, or Both?Shaokai Ye; Kaidi Xu; Sijia Liu; Hao Cheng; Jan-Henrik Lambrechts; Huan Zhang; Aojun Zhou; Kaisheng Ma; Yanzhi Wang; Xue Lin
2019-03-28
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.Dan Hendrycks; Thomas Dietterich
Smooth Adversarial Examples.Hanwei Zhang; Yannis Avrithis; Teddy Furon; Laurent Amsaleg
2019-03-27
Bridging Adversarial Robustness and Gradient Interpretability.Beomsu Kim; Junghoon Seo; Taegyun Jeon
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks.Francesco Croce; Jonas Rauber; Matthias Hein
Rallying Adversarial Techniques against Deep Learning for Network Security.Joseph Clements; Yuzhe Yang; Ankur Sharma; Hongxin Hu; Yingjie Lao
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems.Steffen Eger; Gözde Gül Şahin; Andreas Rücklé; Ji-Ung Lee; Claudia Schulz; Mohsen Mesgar; Krishnkant Swarnkar; Edwin Simpson; Iryna Gurevych
2019-03-26
On the Adversarial Robustness of Multivariate Robust Estimation.Erhan Bayraktar; Lifeng Lai
A geometry-inspired decision-based attack.Yujia Liu; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
2019-03-25
Defending against Whitebox Adversarial Attacks via Randomized Discretization.Yuchen Zhang; Percy Liang
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness.Jörn-Henrik Jacobsen; Jens Behrmann; Nicholas Carlini; Florian Tramèr; Nicolas Papernot
The LogBarrier adversarial attack: making effective use of decision boundary information.Chris Finlay; Aram-Alexandre Pooladian; Adam M. Oberman
Robust Neural Networks using Randomized Adversarial Training.Alexandre Araujo; Laurent Meunier; Rafael Pinot; Benjamin Negrevergne
2019-03-24
A Formalization of Robustness for Deep Neural Networks.Tommaso Dreossi; Shromona Ghosh; Alberto Sangiovanni-Vincentelli; Sanjit A. Seshia
Variational Inference with Latent Space Quantization for Adversarial Resilience.Vinay Kyatham; Mayank Mishra; Tarun Kumar Yadav; Deepak Mishra; Prathosh AP
2019-03-23
Improving Adversarial Robustness via Guided Complement Entropy.Hao-Yun Chen; Jhao-Hong Liang; Shih-Chieh Chang; Jia-Yu Pan; Yu-Ting Chen; Wei Wei; Da-Cheng Juan
2019-03-22
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition.Yao Qin; Nicholas Carlini; Ian Goodfellow; Garrison Cottrell; Colin Raffel
Fast Bayesian Uncertainty Estimation and Reduction of Batch Normalized Single Image Super-Resolution Network. (45%)Aupendu Kar; Prabir Kumar Biswas
2019-03-21
Adversarial camera stickers: A physical camera-based attack on deep learning systems.Juncheng Li; Frank R. Schmidt; J. Zico Kolter
2019-03-20
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes.Matt Jordan; Justin Lewis; Alexandros G. Dimakis
2019-03-19
On the Robustness of Deep K-Nearest Neighbors.Chawin Sitawarin; David Wagner
2019-03-18
Generating Adversarial Examples With Conditional Generative Adversarial Net.Ping Yu; Kaitao Song; Jianfeng Lu
Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems.Hadi Abdullah; Washington Garcia; Christian Peeters; Patrick Traynor; Kevin R. B. Butler; Joseph Wilson
2019-03-17
Adversarial Attacks on Deep Neural Networks for Time Series Classification.Hassan Ismail Fawaz; Germain Forestier; Jonathan Weber; Lhassane Idoumghar; Pierre-Alain Muller
2019-03-15
On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models.Paul Michel; Xian Li; Graham Neubig; Juan Miguel Pino
On Certifying Non-uniform Bound against Adversarial Attacks.Chen Liu; Ryota Tomioka; Volkan Cevher
2019-03-14
A Research Agenda: Dynamic Models to Defend Against Correlated Attacks.Ian Goodfellow
Attribution-driven Causal Analysis for Detection of Adversarial Examples.Susmit Jha; Sunny Raj; Steven Lawrence Fernandes; Sumit Kumar Jha; Somesh Jha; Gunjan Verma; Brian Jalaian; Ananthram Swami
2019-03-13
Adversarial attacks against Fact Extraction and VERification.James Thorne; Andreas Vlachos
2019-03-12
Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models.Adith Boloor; Xin He; Christopher Gill; Yevgeniy Vorobeychik; Xuan Zhang
2019-03-11
Can Adversarial Network Attack be Defended?Jinyin Chen; Yangyang Wu; Xiang Lin; Qi Xuan
2019-03-09
Manifold Preserving Adversarial Learning.Ousmane Amadou Dia; Elnaz Barshan; Reza Babanezhad
2019-03-07
Attack Type Agnostic Perceptual Enhancement of Adversarial Images.Bilgin Aksoy; Alptekin Temizel
Out-domain examples for generative models.Dario Pasquini; Marco Mingione; Massimo Bernaschi
2019-03-06
GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier.Guanxiong Liu; Issa Khalil; Abdallah Khreishah
2019-03-05
Statistical Guarantees for the Robustness of Bayesian Neural Networks.Luca Cardelli; Marta Kwiatkowska; Luca Laurenti; Nicola Paoletti; Andrea Patane; Matthew Wicker
$L_1$-norm double backpropagation adversarial defense.Ismaïla Seck; Gaëlle Loosli; Stephane Canu
2019-03-04
Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search.Abhimanyu Dubey; Laurens van der Maaten; Zeki Yalniz; Yixuan Li; Dhruv Mahajan
The Vulnerabilities of Graph Convolutional Networks: Stronger Attacks and Defensive Techniques.Huijun Wu; Chen Wang; Yuriy Tyshetskiy; Andrew Docherty; Kai Lu; Liming Zhu
Complement Objective Training.Hao-Yun Chen; Pei-Hsin Wang; Chun-Hao Liu; Shih-Chieh Chang; Jia-Yu Pan; Yu-Ting Chen; Wei Wei; Da-Cheng Juan
Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming.Mahyar Fazlyab; Manfred Morari; George J. Pappas
2019-03-03
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations.Saeid Asgari Taghanaki; Kumar Abhishek; Shekoofeh Azizi; Ghassan Hamarneh
2019-03-01
Evaluating Adversarial Evasion Attacks in the Context of Wireless Communications.Bryse Flowers; R. Michael Buehrer; William C. Headley
PuVAE: A Variational Autoencoder to Purify Adversarial Examples.Uiwon Hwang; Jaewoo Park; Hyemi Jang; Sungroh Yoon; Nam Ik Cho
Attacking Graph-based Classification via Manipulating the Graph Structure.Binghui Wang; Neil Zhenqiang Gong
2019-02-28
On the Effectiveness of Low Frequency Perturbations.Yash Sharma; Gavin Weiguang Ding; Marcus Brubaker
Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN.Ke Sun; Zhanxing Zhu; Zhouchen Lin
Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors.Ke Sun; Zhanxing Zhu; Zhouchen Lin
Adversarial Attack and Defense on Point Sets.Qiang Zhang; Jiancheng Yang; Rongyao Fang; Bingbing Ni; Jinxian Liu; Qi Tian
2019-02-27
Adversarial Attacks on Time Series.Fazle Karim; Somshubra Majumdar; Houshang Darabi
Robust Decision Trees Against Adversarial Examples.Hongge Chen; Huan Zhang; Duane Boning; Cho-Jui Hsieh
Tensor Dropout for Robust Learning.Arinbjörn Kolbeinsson; Jean Kossaifi; Yannis Panagakis; Adrian Bulat; Anima Anandkumar; Ioanna Tzoulaki; Paul Matthews
The Best Defense Is a Good Offense: Adversarial Attacks to Avoid Modulation Detection.Muhammad Zaid Hameed; Andras Gyorgy; Deniz Gunduz
A Distributionally Robust Optimization Method for Adversarial Multiple Kernel Learning. (76%)Masoud Badiei Khuzani; Hongyi Ren; Md Tauhidul Islam; Lei Xing
AutoGAN-based Dimension Reduction for Privacy Preservation. (1%)Hung Nguyen; Di Zhuang; Pei-Yuan Wu; Morris Chang
2019-02-26
Disentangled Deep Autoencoding Regularization for Robust Image Classification.Zhenyu Duan; Martin Renqiang Min; Li Erran Li; Mingbo Cai; Yi Xu; Bingbing Ni
Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification.Jianlin Li; Pengfei Yang; Jiangchao Liu; Liqian Chen; Xiaowei Huang; Lijun Zhang
2019-02-25
Verification of Non-Linear Specifications for Neural Networks.Chongli Qin; Krishnamurthy Dvijotham; Brendan O'Donoghue; Rudy Bunel; Robert Stanforth; Sven Gowal; Jonathan Uesato; Grzegorz Swirszcz; Pushmeet Kohli
Adversarial attacks hidden in plain sight.Jan Philip Göpfert; André Artelt; Heiko Wersing; Barbara Hammer
2019-02-24
MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses.Lior Sidi; Asaf Nadler; Asaf Shabtai
Adversarial Reinforcement Learning under Partial Observability in Software-Defined Networking.Yi Han; David Hubczenko; Paul Montague; Olivier De Vel; Tamas Abraham; Benjamin I. P. Rubinstein; Christopher Leckie; Tansu Alpcan; Sarah Erfani
2019-02-23
Re-evaluating ADEM: A Deeper Look at Scoring Dialogue Responses.Ananya B. Sai; Mithun Das Gupta; Mitesh M. Khapra; Mukundhan Srinivasan
A Deep, Information-theoretic Framework for Robust Biometric Recognition.Renjie Xie; Yanzhi Chen; Yan Wo; Qiao Wang
2019-02-22
Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems.Meysam Sadeghi; Erik G. Larsson
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks.Hadi Salman; Greg Yang; Huan Zhang; Cho-Jui Hsieh; Pengchuan Zhang
Adversarial Attacks on Graph Neural Networks via Meta Learning.Daniel Zügner; Stephan Günnemann
2019-02-21
On the Sensitivity of Adversarial Robustness to Input Data Distributions.Gavin Weiguang Ding; Kry Yik Chau Lui; Xiaomeng Jin; Luyu Wang; Ruitong Huang
Quantifying Perceptual Distortion of Adversarial Examples.Matt Jordan; Naren Manoj; Surbhi Goel; Alexandros G. Dimakis
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations.Eric Wong; Frank R. Schmidt; J. Zico Kolter
2019-02-20
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch.Gavin Weiguang Ding; Luyu Wang; Xiaomeng Jin
Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers.Diego Gragnaniello; Francesco Marra; Giovanni Poggi; Luisa Verdoliva
Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure.Fuli Feng; Xiangnan He; Jie Tang; Tat-Seng Chua
2019-02-19
There are No Bit Parts for Sign Bits in Black-Box Attacks.Abdullah Al-Dujaili; Una-May O'Reilly
2019-02-18
On Evaluating Adversarial Robustness.Nicholas Carlini; Anish Athalye; Nicolas Papernot; Wieland Brendel; Jonas Rauber; Dimitris Tsipras; Ian Goodfellow; Aleksander Madry; Alexey Kurakin
AuxBlocks: Defense Adversarial Example via Auxiliary Blocks.Yueyao Yu; Pengfei Yu; Wenye Li
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces.Mohsen Imani; Mohammad Saidur Rahman; Nate Mathews; Matthew Wright
2019-02-16
Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training.Silvija Kokalj-Filipovic; Rob Miller; Nicholas Chang; Chi Leung Lau
Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness.Silvija Kokalj-Filipovic; Rob Miller
2019-02-15
DeepFault: Fault Localization for Deep Neural Networks.Hasan Ferit Eniser; Simos Gerasimou; Alper Sen
2019-02-14
Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?Cody Burkard; Brent Lagesse
2019-02-13
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples.Kevin Roth; Yannic Kilcher; Thomas Hofmann
2019-02-12
Examining Adversarial Learning against Graph-based IoT Malware Detection Systems.Ahmed Abusnaina; Aminollah Khormali; Hisham Alasmary; Jeman Park; Afsah Anwar; Ulku Meteriz; Aziz Mohaisen
2019-02-11
Adversarial Samples on Android Malware Detection Systems for IoT Systems.Xiaolei Liu; Xiaojiang Du; Xiaosong Zhang; Qingxin Zhu; Mohsen Guizani
A Survey: Towards a Robust Deep Neural Network in Text Domain.Wenqi Wang; Lina Wang; Benxiao Tang; Run Wang; Aoshuang Ye
2019-02-09
Model Compression with Adversarial Robustness: A Unified Optimization Framework.Shupeng Gui; Haotao Wang; Chen Yu; Haichuan Yang; Zhangyang Wang; Ji Liu
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks.Chao-Han Huck Yang; Yi-Chieh Liu; Pin-Yu Chen; Xiaoli Ma; Yi-Chang James Tsai
2019-02-08
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images.Sanjana Srivastava; Guy Ben-Yosef; Xavier Boix
Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis.Danilo Vasconcellos Vargas; Jiawei Su
Discretization based Solutions for Secure Machine Learning against Adversarial Attacks.Priyadarshini Panda; Indranil Chakraborty; Kaushik Roy
2019-02-07
Robustness Of Saak Transform Against Adversarial Attacks.Thiyagarajan Ramanathan; Abinaya Manimaran; Suya You; C-C Jay Kuo
Certified Adversarial Robustness via Randomized Smoothing.Jeremy M Cohen; Elan Rosenfeld; J. Zico Kolter
2019-02-06
Fooling Neural Network Interpretations via Adversarial Model Manipulation.Juyeon Heo; Sunghwan Joo; Taesup Moon
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples.Derui Wang; Chaoran Li; Sheng Wen; Xiaojun Chang; Surya Nepal; Yang Xiang
2019-02-05
Fatal Brain Damage.El Mahdi El Mhamdi; Rachid Guerraoui; Sergei Volodin
2019-02-04
Theoretical evidence for adversarial robustness through randomization.Rafael Pinot; Laurent Meunier; Alexandre Araujo; Hisashi Kashima; Florian Yger; Cédric Gouy-Pailler; Jamal Atif
Predictive Uncertainty Quantification with Compound Density Networks.Agustinus Kristiadi; Sina Däubener; Asja Fischer
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks.Alberto Marchisio; Giorgio Nanfa; Faiq Khalid; Muhammad Abdullah Hanif; Maurizio Martina; Muhammad Shafique
2019-02-01
Robustness Certificates Against Adversarial Examples for ReLU Networks.Sahil Singla; Soheil Feizi
Natural and Adversarial Error Detection using Invariance to Image Transformations.Yuval Bahat; Michal Irani; Gregory Shakhnarovich
Adaptive Gradient for Adversarial Perturbations Generation.Yatie Xiao; Chi-Man Pun
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks.Sascha Saralajew; Lars Holdijk; Maike Rees; Thomas Villmann
The Efficacy of SHIELD under Different Threat Models.Cory Cornelius; Nilaksh Das; Shang-Tse Chen; Li Chen; Michael E. Kounavis; Duen Horng Chau
2019-01-31
A New Family of Neural Networks Provably Resistant to Adversarial Attacks.Rakshit Agrawal; Luca de Alfaro; David Helmbold
Training Artificial Neural Networks by Generalized Likelihood Ratio Method: Exploring Brain-like Learning to Improve Robustness.Li Xiao; Yijie Peng; Jeff Hong; Zewu Ke; Shuhuai Yang
2019-01-30
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance.Adi Shamir; Itay Safran; Eyal Ronen; Orr Dunkelman
Augmenting Model Robustness with Transformation-Invariant Attacks.Houpu Yao; Zhe Wang; Guangyu Nie; Yassine Mazboudi; Yezhou Yang; Yi Ren
2019-01-29
Adversarial Examples Are a Natural Consequence of Test Error in Noise.Nic Ford; Justin Gilmer; Nicolas Carlini; Dogus Cubuk
RED-Attack: Resource Efficient Decision based Attack for Machine Learning.Faiq Khalid; Hassan Ali; Muhammad Abdullah Hanif; Semeen Rehman; Rehan Ahmed; Muhammad Shafique
Reliable Smart Road Signs.Muhammed O. Sayin; Chung-Wei Lin; Eunsuk Kang; Shinichi Shiraishi; Tamer Basar
On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks.Peter Langenberg; Emilio Rafael Balda; Arash Behboodi; Rudolf Mathar
Adversarial Metric Attack and Defense for Person Re-identification.Song Bai; Yingwei Li; Yuyin Zhou; Qizhu Li; Philip H. S. Torr
2019-01-28
Improving Adversarial Robustness of Ensembles with Diversity Training.Sanjay Kariyappa; Moinuddin K. Qureshi
CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks.Alberto Marchisio; Giorgio Nanfa; Faiq Khalid; Muhammad Abdullah Hanif; Maurizio Martina; Muhammad Shafique
Defense Methods Against Adversarial Examples for Recurrent Neural Networks.Ishai Rosenberg; Asaf Shabtai; Yuval Elovici; Lior Rokach
Using Pre-Training Can Improve Model Robustness and Uncertainty.Dan Hendrycks; Kimin Lee; Mantas Mazeika
Efficient Multiparty Interactive Coding for Insertions, Deletions and Substitutions. (1%)Ran Gelles; Yael T. Kalai; Govind Ramnarayan
2019-01-27
An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers.Hui Xie; Jirong Yi; Weiyu Xu; Raghu Mudumbai
Characterizing the Shape of Activation Space in Deep Neural Networks.Thomas Gebhart; Paul Schrater; Alan Hylton
Strong Black-box Adversarial Attacks on Unsupervised Machine Learning Models.Anshuman Chhabra; Abhishek Roy; Prasant Mohapatra
2019-01-26
A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm.Xiaolei Liu; Yuheng Luo; Xiaosong Zhang; Qingxin Zhu
Weighted-Sampling Audio Adversarial Example Attack.Xiaolei Liu; Xiaosong Zhang; Kun Wan; Qingxin Zhu; Yufei Ding
2019-01-25
Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data.Yi Shi; Yalin E. Sagduyu; Kemal Davaslioglu; Jason H. Li
Improving Adversarial Robustness via Promoting Ensemble Diversity.Tianyu Pang; Kun Xu; Chao Du; Ning Chen; Jun Zhu
Chapter: Vulnerability of Quantum Information Systems to Collective Manipulation. (1%)Fernando J. Gómez-Ruiz; Ferney J. Rodríguez; Luis Quiroga; Neil F. Johnson
2019-01-24
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.Yinpeng Dong; Fan Bao; Hang Su; Jun Zhu
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples.Kamil Nar; Orhan Ocal; S. Shankar Sastry; Kannan Ramchandran
Theoretically Principled Trade-off between Robustness and Accuracy.Hongyang Zhang; Yaodong Yu; Jiantao Jiao; Eric P. Xing; Laurent El Ghaoui; Michael I. Jordan
2019-01-23
SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems.Tianyu Du; Shouling Ji; Jinfeng Li; Qinchen Gu; Ting Wang; Raheem Beyah
Sitatapatra: Blocking the Transfer of Adversarial Samples.Ilia Shumailov; Xitong Gao; Yiren Zhao; Robert Mullins; Ross Anderson; Cheng-Zhong Xu
2019-01-21
Universal Rules for Fooling Deep Neural Networks based Text Classification.Di Li; Danilo Vasconcellos Vargas; Sakurai Kouichi
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey.Wei Emma Zhang; Quan Z. Sheng; Ahoud Alhazmi; Chenliang Li
Sensitivity Analysis of Deep Neural Networks.Hai Shu; Hongtu Zhu
Perception-in-the-Loop Adversarial Examples.Mahmoud Salamati; Sadegh Soudjani; Rupak Majumdar
2019-01-17
Easy to Fool? Testing the Anti-evasion Capabilities of PDF Malware Scanners.Saeed Ehteshamifar (TU Darmstadt); Antonio Barresi (xorlab); Thomas R. Gross (ETH Zurich); Michael Pradel (TU Darmstadt)
2019-01-15
The Limitations of Adversarial Training and the Blind-Spot Attack.Huan Zhang; Hongge Chen; Zhao Song; Duane Boning; Inderjit S. Dhillon; Cho-Jui Hsieh
2019-01-13
Generating Adversarial Perturbation with Root Mean Square Gradient.Yatie Xiao; Chi-Man Pun; Jizhe Zhou
2019-01-12
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System.Huangxun Chen; Chenyu Huang; Qianyi Huang; Qian Zhang; Wei Wang
2019-01-11
Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries.Luca Demetrio; Battista Biggio; Giovanni Lagorio; Fabio Roli; Alessandro Armando
2019-01-10
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification.Luiz G. Hafemann; Robert Sabourin; Luiz S. Oliveira
Image Transformation can make Neural Networks more robust against Adversarial Examples.Dang Duy Thang; Toshihiro Matsui
2019-01-09
Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers.Daniel Liu; Ronald Yu; Hao Su
2019-01-08
Interpretable BoW Networks for Adversarial Example Detection.Krishna Kanth Nakka; Mathieu Salzmann
2019-01-07
Image Super-Resolution as a Defense Against Adversarial Attacks.Aamir Mustafa; Salman H. Khan; Munawar Hayat; Jianbing Shen; Ling Shao
2019-01-05
Fake News Detection via NLP is Vulnerable to Adversarial Attacks.Zhixuan Zhou; Huankang Guan; Meghana Moorthy Bhat; Justin Hsu
2019-01-04
Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study.Xurong Li; Shouling Ji; Meng Han; Juntao Ji; Zhenyu Ren; Yushan Liu; Chunming Wu
2019-01-02
Multi-Label Adversarial Perturbations.Qingquan Song; Haifeng Jin; Xiao Huang; Xia Hu
Adversarial Robustness May Be at Odds With Simplicity.Preetum Nakkiran
2019-01-01
A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks.Long Zhang; Xuechao Sun; Yong Li; Zhenyu Zhang
2018-12-27
DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems.Husheng Zhou; Wei Li; Yuankun Zhu; Yuqun Zhang; Bei Yu; Lingming Zhang; Cong Liu
2018-12-26
Adversarial Attack and Defense on Graph Data: A Survey.Lichao Sun; Yingtong Dou; Carl Yang; Ji Wang; Yixin Liu; Philip S. Yu; Lifang He; Bo Li
2018-12-25
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition.Krishan Rajaratnam; Jugal Kalita
PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning.Mehdi Jafarnia-Jahromi; Tasmin Chowdhury; Hsin-Tai Wu; Sayandev Mukherjee
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples.Qiang Zeng; Jianhai Su; Chenglong Fu; Golam Kayas; Lannan Luo
A Data-driven Adversarial Examples Recognition Framework via Adversarial Feature Genome.Li Chen; Qi Li; Weiye Chen; Zeyu Wang; Haifeng Li
Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors.Yue Zhao; Hong Zhu; Ruigang Liang; Qintao Shen; Shengzhi Zhang; Kai Chen
2018-12-24
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense.Hang Zhou; Kejiang Chen; Weiming Zhang; Han Fang; Wenbo Zhou; Nenghai Yu
2018-12-23
Markov Game Modeling of Moving Target Defense for Strategic Detection of Threats in Cloud Networks.Ankur Chowdhary; Sailik Sengupta; Dijiang Huang; Subbarao Kambhampati
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks.Thomas Brunner; Frederik Diehl; Michael Truong Le; Alois Knoll
2018-12-22
Exploiting the Inherent Limitation of L0 Adversarial Examples.Fei Zuo; Bokai Yang; Xiaopeng Li; Lannan Luo; Qiang Zeng
2018-12-21
Dissociable neural representations of adversarially perturbed images in convolutional neural networks and the human brain.Chi Zhang; Xiaohan Duan; Linyuan Wang; Yongli Li; Bin Yan; Guoen Hu; Ruyuan Zhang; Li Tong
2018-12-19
Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge.Deqiang Li; Qianmu Li; Yanfang Ye; Shouhuai Xu
2018-12-18
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach.Tsui-Wei Weng; Pin-Yu Chen; Lam M. Nguyen; Mark S. Squillante; Ivan Oseledets; Luca Daniel
2018-12-17
Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings.François Menet; Paul Berthier; José M. Fernandez; Michel Gagnon
Designing Adversarially Resilient Classifiers using Resilient Feature Engineering.Kevin Eykholt; Atul Prakash
A Survey of Safety and Trustworthiness of Deep Neural Networks.Xiaowei Huang; Daniel Kroening; Wenjie Ruan; James Sharp; Youcheng Sun; Emese Thamo; Min Wu; Xinping Yi
2018-12-16
Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks.Xiang Li; Shihao Ji
2018-12-15
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples.Emilio Rafael Balda; Arash Behboodi; Rudolf Mathar
Trust Region Based Adversarial Attack on Neural Networks.Zhewei Yao; Amir Gholami; Peng Xu; Kurt Keutzer; Michael Mahoney
2018-12-14
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing.Jingyi Wang; Guoliang Dong; Jun Sun; Xinyu Wang; Peixin Zhang
2018-12-13
TextBugger: Generating Adversarial Text Against Real-world Applications.Jinfeng Li; Shouling Ji; Tianyu Du; Bo Li; Ting Wang
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem.Matthias Hein; Maksym Andriushchenko; Julian Bitterwolf
Generating Hard Examples for Pixel-wise Classification. (4%)Hyungtae Lee; Heesung Kwon; Wonkook Kim
2018-12-12
Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform.Mitali Bafna; Jack Murtagh; Nikhil Vyas
2018-12-11
On the Security of Randomized Defenses Against Adversarial Samples.Kumar Sharad; Giorgia Azzurra Marson; Hien Thi Thu Truong; Ghassan Karame
Adversarial Framing for Image and Video Classification.Konrad Zolna; Michal Zajac; Negar Rostamzadeh; Pedro O. Pinheiro
2018-12-10
Defending Against Universal Perturbations With Shared Adversarial Training.Chaithanya Kumar Mummadi; Thomas Brox; Jan Hendrik Metzen
2018-12-08
Feature Denoising for Improving Adversarial Robustness.Cihang Xie; Yuxin Wu; Laurens van der Maaten; Alan Yuille; Kaiming He
AutoGAN: Robust Classifier Against Adversarial Attacks.Blerta Lindqvist; Shridatt Sugrim; Rauf Izmailov
Detecting Adversarial Examples in Convolutional Neural Networks.Stefanos Pertigkiozoglou; Petros Maragos
Learning Transferable Adversarial Examples via Ghost Networks.Yingwei Li; Song Bai; Yuyin Zhou; Cihang Xie; Zhishuai Zhang; Alan Yuille
2018-12-07
Deep-RBF Networks Revisited: Robust Classification with Rejection.Pourya Habib Zadeh; Reshad Hosseini; Suvrit Sra
Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach.Rajeev Sahay; Rehana Mahfuz; Aly El Gamal
2018-12-06
Adversarial Defense of Image Classification Using a Variational Auto-Encoder.Yi Luo; Henry Pfister
Adversarial Attacks, Regression, and Numerical Stability Regularization.Andre T. Nguyen; Edward Raff
Prior Networks for Detection of Adversarial Attacks.Andrey Malinin; Mark Gales
Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack.Jingyang Zhang; Hsin-Pai Cheng; Chunpeng Wu; Hai Li; Yiran Chen
Fooling Network Interpretation in Image Classification.Akshayvarun Subramanya; Vipin Pillai; Hamed Pirsiavash
The Limitations of Model Uncertainty in Adversarial Settings.Kathrin Grosse; David Pfaff; Michael Thomas Smith; Michael Backes
MMA Training: Direct Input Space Margin Maximization through Adversarial Training.Gavin Weiguang Ding; Yash Sharma; Kry Yik Chau Lui; Ruitong Huang
2018-12-05
On Configurable Defense against Adversarial Example Attacks.Bo Luo; Min Li; Yu Li; Qiang Xu
Regularized Ensembles and Transferability in Adversarial Learning.Yifan Chen; Yevgeniy Vorobeychik
SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications.Abdullah Hamdi; Matthias Müller; Bernard Ghanem
2018-12-04
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures.Jonathan Uesato; Ananya Kumar; Csaba Szepesvari; Tom Erez; Avraham Ruderman; Keith Anderson; Krishnamurthy Dvijotham; Nicolas Heess; Pushmeet Kohli
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples.Huangyi Ge; Sze Yiu Chau; Bruno Ribeiro; Ninghui Li
2018-12-03
Disentangling Adversarial Robustness and Generalization.David Stutz; Matthias Hein; Bernt Schiele
Interpretable Deep Learning under Fire.Xinyang Zhang; Ningfei Wang; Hua Shen; Shouling Ji; Xiapu Luo; Ting Wang
Adversarial Example Decomposition.Horace He; Aaron Lou; Qingxuan Jiang; Isay Katsman; Serge Belongie; Ser-Nam Lim
2018-12-02
Model-Reuse Attacks on Deep Learning Systems.Yujie Ji; Xinyang Zhang; Shouling Ji; Xiapu Luo; Ting Wang
Universal Perturbation Attack Against Image Retrieval.Jie Li; Rongrong Ji; Hong Liu; Xiaopeng Hong; Yue Gao; Qi Tian
2018-12-01
FineFool: Fine Object Contour Attack via Attention.Jinyin Chen; Haibin Zheng; Hui Xiong; Mengmeng Su
Building robust classifiers through generation of confident out of distribution examples.Kumar Sricharan; Ashok Srivastava
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification.Qi Lei; Lingfei Wu; Pin-Yu Chen; Alexandros G. Dimakis; Inderjit S. Dhillon; Michael Witbrock
Effects of Loss Functions And Target Representations on Adversarial Robustness.Sean Saito; Sujoy Roy
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems.Edward Chou; Florian Tramèr; Giancarlo Pellegrino
2018-11-30
Transferable Adversarial Attacks for Image and Video Object Detection.Xingxing Wei; Siyuan Liang; Xiaochun Cao; Jun Zhu
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples.Xiaojun Jia; Xingxing Wei; Xiaochun Cao; Hassan Foroosh
Adversarial Defense by Stratified Convolutional Sparse Coding.Bo Sun; Nian-hsuan Tsai; Fangchen Liu; Ronald Yu; Hao Su
2018-11-29
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks.Akhilan Boopathy; Tsui-Wei Weng; Pin-Yu Chen; Sijia Liu; Luca Daniel
Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting.Artur Bekasov; Iain Murray
Adversarial Examples as an Input-Fault Tolerance Problem.Angus Galloway; Anna Golubeva; Graham W. Taylor
Analyzing Federated Learning through an Adversarial Lens.Arjun Nitin Bhagoji; Supriyo Chakraborty; Prateek Mittal; Seraphin Calo
2018-11-28
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers.Nathan Inkawhich; Matthew Inkawhich; Yiran Chen; Hai Li
Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects.Michael A. Alcorn; Qi Li; Zhitao Gong; Chengfei Wang; Long Mai; Wei-Shinn Ku; Anh Nguyen
A randomized gradient-free attack on ReLU networks.Francesco Croce; Matthias Hein
Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness.Siddique Latif; Rajib Rana; Junaid Qadir
2018-11-27
Robust Classification of Financial Risk.Suproteem K. Sarkar; Kojin Oshiba; Daniel Giebisch; Yaron Singer
Universal Adversarial Training.Ali Shafahi; Mahyar Najibi; Zheng Xu; John Dickerson; Larry S. Davis; Tom Goldstein
Using Attribution to Decode Dataset Bias in Neural Network Models for Chemistry.Kevin McCloskey; Ankur Taly; Federico Monti; Michael P. Brenner; Lucy Colwell
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks.Jinghui Chen; Dongruo Zhou; Jinfeng Yi; Quanquan Gu
2018-11-26
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies.Bao Wang; Binjie Yuan; Zuoqiang Shi; Stanley J. Osher
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks.Jianyu Wang; Haichao Zhang
2018-11-25
Is Data Clustering in Adversarial Settings Secure?Battista Biggio; Ignazio Pillai; Samuel Rota Bulò; Davide Ariu; Marcello Pelillo; Fabio Roli
2018-11-24
Attention, Please! Adversarial Defense via Activation Rectification and Preservation.Shangxi Wu; Jitao Sang; Kaiyuan Xu; Jiaming Zhang; Yanfeng Sun; Liping Jing; Jian Yu
2018-11-23
Robustness via curvature regularization, and vice versa.Seyed-Mohsen Moosavi-Dezfooli; Alhussein Fawzi; Jonathan Uesato; Pascal Frossard
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses.Jérôme Rony; Luiz G. Hafemann; Luiz S. Oliveira; Ismail Ben Ayed; Robert Sabourin; Eric Granger
2018-11-22
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack.Adnan Siraj Rakin; Zhezhi He; Deliang Fan
Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles.Edward Grefenstette; Robert Stanforth; Brendan O'Donoghue; Jonathan Uesato; Grzegorz Swirszcz; Pushmeet Kohli
Detecting Adversarial Perturbations Through Spatial Behavior in Activation Spaces.Ziv Katzir; Yuval Elovici
2018-11-21
Task-generalizable Adversarial Attack based on Perceptual Metric.Muzammal Naseer; Salman H. Khan; Shafin Rahman; Fatih Porikli
Towards Robust Neural Networks with Lipschitz Continuity.Muhammad Usama; Dong Eui Chang
2018-11-20
How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples.Utku Ozbulak; Wesley De Neve; Arnout Van Messem
MimicGAN: Corruption-Mimicking for Blind Image Recovery & Adversarial Defense.Rushil Anirudh; Jayaraman J. Thiagarajan; Bhavya Kailkhura; Timo Bremer
Intermediate Level Adversarial Attack for Enhanced Transferability.Qian Huang; Zeqi Gu; Isay Katsman; Horace He; Pian Pawakapan; Zhiqiu Lin; Serge Belongie; Ser-Nam Lim
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples.Hajime Ono; Tsubasa Takahashi; Kazuya Kakizaki
Convolutional Neural Networks with Transformed Input based on Robust Tensor Network Decomposition.Jenn-Bing Ong; Wee-Keong Ng; C. -C. Jay Kuo
2018-11-19
Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding.Yao Li; Martin Renqiang Min; Wenchao Yu; Cho-Jui Hsieh; Thomas C. M. Lee; Erik Kruus
2018-11-18
Generalizable Adversarial Training via Spectral Normalization.Farzan Farnia; Jesse M. Zhang; David Tse
Regularized adversarial examples for model interpretability.Yoel Shoshan; Vadim Ratner
The Taboo Trap: Behavioural Detection of Adversarial Samples.Ilia Shumailov; Yiren Zhao; Robert Mullins; Ross Anderson
2018-11-17
DeepConsensus: using the consensus of features from multiple layers to attain robust image classification.Yuchen Li; Safwan Hossain; Kiarash Jamali; Frank Rudzicz
Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples.Jacob M. Springer; Charles S. Strauss; Austin M. Thresher; Edward Kim; Garrett T. Kenyon
Boosting the Robustness Verification of DNN by Identifying the Achilles's Heel.Chengdong Feng; Zhenbang Chen; Weijiang Hong; Hengbiao Yu; Wei Dong; Ji Wang
2018-11-16
Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues.Yuan Gong; Christian Poellabauer
DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules.Nicholas Frosst; Sara Sabour; Geoffrey Hinton
2018-11-15
A note on hyperparameters in black-box adversarial examples.Jamie Hayes
Mathematical Analysis of Adversarial Attacks.Zehao Dou; Stanley J. Osher; Bao Wang
Adversarial Examples from Cryptographic Pseudo-Random Generators.Sébastien Bubeck; Yin Tat Lee; Eric Price; Ilya Razenshteyn
A Spectral View of Adversarially Robust Features.Shivam Garg; Vatsal Sharan; Brian Hu Zhang; Gregory Valiant
2018-11-14
Verification of Recurrent Neural Networks Through Rule Extraction.Qinglong Wang; Kaixuan Zhang; Xue Liu; C. Lee Giles
Robustness of spectral methods for community detection.Ludovic Stephan; Laurent Massoulié
2018-11-13
Deep Q learning for fooling neural networks.Mandar Kulkarni
2018-11-08
Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses.Thomas A. Hogan; Bhavya Kailkhura
New CleverHans Feature: Better Adversarial Robustness Evaluations with Attack Bundling.Ian Goodfellow
A Geometric Perspective on the Transferability of Adversarial Directions.Zachary Charles; Harrison Rosenberg; Dimitris Papailiopoulos
2018-11-07
CAAD 2018: Iterative Ensemble Adversarial Attack.Jiayang Liu; Weiming Zhang; Nenghai Yu
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning.Florian Tramèr; Pascal Dupré; Gili Rusak; Giancarlo Pellegrino; Dan Boneh
2018-11-06
MixTrain: Scalable Training of Verifiably Robust Neural Networks.Shiqi Wang; Yizheng Chen; Ahmed Abdou; Suman Jana
SparseFool: a few pixels make a big difference.Apostolos Modas; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
2018-11-05
Active Deep Learning Attacks under Strict Rate Limitations for Online API Calls.Yi Shi; Yalin E. Sagduyu; Kemal Davaslioglu; Jason H. Li
FUNN: Flexible Unsupervised Neural Network.David Vigouroux; Sylvain Picard
On the Transferability of Adversarial Examples Against CNN-Based Image Forensics.Mauro Barni; Kassem Kallas; Ehsan Nowroozi; Benedetta Tondi
2018-11-04
FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning.Faiq Khalid; Muhammmad Abdullah Hanif; Semeen Rehman; Junaid Qadir; Muhammad Shafique
QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks.Faiq Khalid; Hassan Ali; Hammad Tariq; Muhammad Abdullah Hanif; Semeen Rehman; Rehan Ahmed; Muhammad Shafique
SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters.Hassan Ali; Faiq Khalid; Hammad Tariq; Muhammad Abdullah Hanif; Semeen Rehman; Rehan Ahmed; Muhammad Shafique
2018-11-03
Adversarial Gain.Peter Henderson; Koustuv Sinha; Rosemary Nan Ke; Joelle Pineau
CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network.Xiaoyi Dong; Weiming Zhang; Nenghai Yu
Adversarial Black-Box Attacks on Automatic Speech Recognition Systems using Multi-Objective Evolutionary Optimization.Shreya Khare; Rahul Aralikatte; Senthil Mani
Learning to Defense by Learning to Attack.Haoming Jiang; Zhehui Chen; Yuyang Shi; Bo Dai; Tuo Zhao
2018-11-02
A Marauder's Map of Security and Privacy in Machine Learning.Nicolas Papernot
Semidefinite relaxations for certifying robustness to adversarial examples.Aditi Raghunathan; Jacob Steinhardt; Percy Liang
Efficient Neural Network Robustness Certification with General Activation Functions.Huan Zhang; Tsui-Wei Weng; Pin-Yu Chen; Cho-Jui Hsieh; Luca Daniel
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks.Davide Maiorca; Battista Biggio; Giorgio Giacinto
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks.Faiq Khalid; Muhammad Abdullah Hanif; Semeen Rehman; Rehan Ahmed; Muhammad Shafique
2018-11-01
Improving Adversarial Robustness by Encouraging Discriminative Features.Chirag Agarwal; Anh Nguyen; Dan Schonfeld
On the Geometry of Adversarial Examples.Marc Khoury; Dylan Hadfield-Menell
Excessive Invariance Causes Adversarial Vulnerability.Jörn-Henrik Jacobsen; Jens Behrmann; Richard Zemel; Matthias Bethge
2018-10-31
When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers.Yujia Wang; David J. Miller; George Kesidis
Unauthorized AI cannot Recognize Me: Reversible Adversarial Example.Jiayang Liu; Weiming Zhang; Kazuto Fukuchi; Youhei Akimoto; Jun Sakuma
2018-10-30
Improved Network Robustness with Adversary Critic.Alexander Matyasko; Lap-Pui Chau
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models.Sven Gowal; Krishnamurthy Dvijotham; Robert Stanforth; Rudy Bunel; Chongli Qin; Jonathan Uesato; Relja Arandjelovic; Timothy Mann; Pushmeet Kohli
2018-10-29
Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution.Dimitrios I. Diochnos; Saeed Mahloujifar; Mohammad Mahmoody
Logit Pairing Methods Can Fool Gradient-Based Attacks.Marius Mosbach; Maksym Andriushchenko; Thomas Trost; Matthias Hein; Dietrich Klakow
2018-10-28
RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications.Huan Zhang; Pengchuan Zhang; Cho-Jui Hsieh
Rademacher Complexity for Adversarially Robust Generalization.Dong Yin; Kannan Ramchandran; Peter Bartlett
Robust Audio Adversarial Example for a Physical Attack.Hiromu Yakura; Jun Sakuma
2018-10-27
Towards Robust Deep Neural Networks.Timothy E. Wang; Yiming Gu; Dhagash Mehta; Xiaojun Zhao; Edgar A. Bernal
Regularization Effect of Fast Gradient Sign Method and its Generalization.Chandler Zuo
2018-10-26
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples.Guanhong Tao; Shiqing Ma; Yingqi Liu; Xiangyu Zhang
2018-10-25
Law and Adversarial Machine Learning.Ram Shankar Siva Kumar; David R. O'Brien; Kendra Albert; Salome Vilojen
Attack Graph Convolutional Networks by Adding Fake Nodes.Xiaoyun Wang; Minhao Cheng; Joe Eaton; Cho-Jui Hsieh; Felix Wu
Evading classifiers in discrete domains with provable optimality guarantees.Bogdan Kulynych; Jamie Hayes; Nikita Samarin; Carmela Troncoso
2018-10-24
Robust Adversarial Learning via Sparsifying Front Ends.Soorya Gopalakrishnan; Zhinus Marzi; Metehan Cekic; Upamanyu Madhow; Ramtin Pedarsani
2018-10-23
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses.Mohammad Hashemi; Greg Cusack; Eric Keller
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy.Jingkang Wang; Ruoxi Jia; Gerald Friedland; Bo Li; Costas Spanos
Et Tu Alexa? When Commodity WiFi Devices Turn into Adversarial Motion Sensors.Yanzi Zhu; Zhujun Xiao; Yuxin Chen; Zhijing Li; Max Liu; Ben Y. Zhao; Haitao Zheng
2018-10-22
Adversarial Risk Bounds via Function Transformation.Justin Khim; Po-Ling Loh
Cost-Sensitive Robustness against Adversarial Examples.Xiao Zhang; David Evans
Sparse DNNs with Improved Adversarial Robustness.Yiwen Guo; Chao Zhang; Changshui Zhang; Yurong Chen
2018-10-19
On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm.Tsui-Wei Weng; Huan Zhang; Pin-Yu Chen; Aurelie Lozano; Cho-Jui Hsieh; Luca Daniel
2018-10-18
Exploring Adversarial Examples in Malware Detection.Octavian Suciu; Scott E. Coull; Jeffrey Johns
A Training-based Identification Approach to VIN Adversarial Examples.Yingdi Wang; Wenjia Niu; Tong Chen; Yingxiao Xiang; Jingjing Liu; Gang Li; Jiqiang Liu
2018-10-17
Provable Robustness of ReLU networks via Maximization of Linear Regions.Francesco Croce (University of Tübingen); Maksym Andriushchenko (Saarland University); Matthias Hein (University of Tübingen)
2018-10-16
Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers.Nicole Nichols; Robert Jasper
Security Matters: A Survey on Adversarial Machine Learning.Guofu Li; Pengjia Zhu; Jin Li; Zhemin Yang; Ning Cao; Zhiyi Chen
2018-10-15
Concise Explanations of Neural Networks using Adversarial Training.Prasad Chalasani; Jiefeng Chen; Amrita Roy Chowdhury; Somesh Jha; Xi Wu
2018-10-11
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation.Chaowei Xiao; Ruizhi Deng; Bo Li; Fisher Yu; Mingyan Liu; Dawn Song
MeshAdv: Adversarial Meshes for Visual Recognition.Chaowei Xiao; Dawei Yang; Bo Li; Jia Deng; Mingyan Liu
2018-10-09
Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only.Tianhang Zheng; Changyou Chen; Kui Ren
Analyzing the Noise Robustness of Deep Neural Networks.Mengchen Liu; Shixia Liu; Hang Su; Kelei Cao; Jun Zhu
The Adversarial Attack and Detection under the Fisher Information Metric.Chenxiao Zhao; P. Thomas Fletcher; Mixue Yu; Yaxin Peng; Guixu Zhang; Chaomin Shen
2018-10-08
Limitations of adversarial robustness: strong No Free Lunch Theorem.Elvis Dohmatob
Efficient Two-Step Adversarial Defense for Deep Neural Networks.Ting-Jui Chang; Yukun He; Peng Li
Combinatorial Attacks on Binarized Neural Networks.Elias B. Khalil; Amrita Gupta; Bistra Dilkina
Average Margin Regularization for Classifiers.Matt Olfat; Anil Aswani
2018-10-04
Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness.Chihuang Liu; Joseph JaJa
Improved Generalization Bounds for Robust Learning.Idan Attias; Aryeh Kontorovich; Yishay Mansour
2018-10-02
Can Adversarially Robust Learning Leverage Computational Hardness?Saeed Mahloujifar; Mohammad Mahmoody
Adversarial Examples - A Complete Characterisation of the Phenomenon.Alexandru Constantin Serban; Erik Poll; Joost Visser
Link Prediction Adversarial Attack.Jinyin Chen; Ziqiang Shi; Yangyang Wu; Xuanheng Xu; Haibin Zheng
2018-10-01
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network.Xuanqing Liu; Yao Li; Chongruo Wu; Cho-Jui Hsieh
Improving the Generalization of Adversarial Training with Domain Adaptation.Chuanbiao Song; Kun He; Liwei Wang; John E. Hopcroft
Large batch size training of neural networks with adversarial training and second-order information.Zhewei Yao; Amir Gholami; Daiyaan Arfeen; Richard Liaw; Joseph Gonzalez; Kurt Keutzer; Michael Mahoney
Improved robustness to adversarial examples using Lipschitz regularization of the loss.Chris Finlay; Adam Oberman; Bilal Abbasi
2018-09-30
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks.Kenneth T. Co; Luis Muñoz-González; Sixte de Maupeou; Emil C. Lupu
2018-09-29
CAAD 2018: Generating Transferable Adversarial Examples.Yash Sharma; Tien-Dung Le; Moustafa Alzantot
Interpreting Adversarial Robustness: A View from Decision Surface in Input Space.Fuxun Yu; Chenchen Liu; Yanzhi Wang; Liang Zhao; Xiang Chen
To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression.Yiren Zhao; Ilia Shumailov; Robert Mullins; Ross Anderson
2018-09-28
Characterizing Audio Adversarial Examples Using Temporal Dependency.Zhuolin Yang; Bo Li; Pin-Yu Chen; Dawn Song
Adversarial Attacks and Defences: A Survey.Anirban Chakraborty; Manaar Alam; Vishal Dey; Anupam Chattopadhyay; Debdeep Mukhopadhyay
Explainable Black-Box Attacks Against Model-based Authentication.Washington Garcia; Joseph I. Choi; Suman K. Adari; Somesh Jha; Kevin R. B. Butler
2018-09-26
Adversarial Attacks on Cognitive Self-Organizing Networks: The Challenge and the Way Forward.Muhammad Usama; Junaid Qadir; Ala Al-Fuqaha
2018-09-24
Neural Networks with Structural Resistance to Adversarial Attacks.Luca de Alfaro
Fast Geometrically-Perturbed Adversarial Faces.Ali Dabouei; Sobhan Soleymani; Jeremy Dawson; Nasser M. Nasrabadi
On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces.Chia-Yi Hsu; Pei-Hsuan Lu; Pin-Yu Chen; Chia-Mu Yu
Low Frequency Adversarial Perturbation.Chuan Guo; Jared S. Frank; Kilian Q. Weinberger
2018-09-23
Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR.Pin-Yu Chen; Bhanukiran Vinzamuri; Sijia Liu
2018-09-22
Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization.Bao Wang; Alex T. Lin; Wei Zhu; Penghang Yin; Andrea L. Bertozzi; Stanley J. Osher
2018-09-21
Unrestricted Adversarial Examples.Tom B. Brown; Nicholas Carlini; Chiyuan Zhang; Catherine Olsson; Paul Christiano; Ian Goodfellow
Adversarial Binaries for Authorship Identification.Xiaozhu Meng; Barton P. Miller; Somesh Jha
2018-09-20
Playing the Game of Universal Adversarial Perturbations.Julien Perolat; Mateusz Malinowski; Bilal Piot; Olivier Pietquin
2018-09-19
Efficient Formal Safety Analysis of Neural Networks.Shiqi Wang; Kexin Pei; Justin Whitehouse; Junfeng Yang; Suman Jana
Adversarial Training Towards Robust Multimedia Recommender System.Jinhui Tang; Xiaoyu Du; Xiangnan He; Fajie Yuan; Qi Tian; Tat-Seng Chua
Generating 3D Adversarial Point Clouds.Chong Xiang; Charles R. Qi; Bo Li
2018-09-17
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples.Deqiang Li; Ramesh Baral; Tao Li; Han Wang; Qianmu Li; Shouhuai Xu
Robustness Guarantees for Bayesian Inference with Gaussian Processes.Luca Cardelli; Marta Kwiatkowska; Luca Laurenti; Andrea Patane
2018-09-16
Exploring the Vulnerability of Single Shot Module in Object Detectors via Imperceptible Background Patches.Yuezun Li; Xiao Bian; Ming-ching Chang; Siwei Lyu
Robust Adversarial Perturbation on Deep Proposal-based Models.Yuezun Li; Daniel Tian; Ming-Ching Chang; Xiao Bian; Siwei Lyu
2018-09-13
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks.Siyue Wang; Xiao Wang; Pu Zhao; Wujie Wen; David Kaeli; Peter Chin; Xue Lin
Query-Efficient Black-Box Attack by Active Learning.Pengcheng Li; Jinfeng Yi; Lijun Zhang
Adversarial Examples: Opportunities and Challenges.Jiliang Zhang; Chen Li
2018-09-11
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions.Yusuke Tsuzuku; Issei Sato
Isolated and Ensemble Audio Preprocessing Methods for Detecting Adversarial Examples against Automatic Speech Recognition.Krishan Rajaratnam; Kunal Shah; Jugal Kalita
Humans can decipher adversarial images.Zhenglong Zhou; Chaz Firestone
Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System. (22%)Barbara Rychalska; Dominika Basaj; Przemyslaw Biecek; Anna Wroblewska
2018-09-09
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure.Saeed Mahloujifar; Dimitrios I. Diochnos; Mohammad Mahmoody
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability.Kai Y. Xiao; Vincent Tjeng; Nur Muhammad Shafiullah; Aleksander Madry
Certified Adversarial Robustness with Additive Noise.Bai Li; Changyou Chen; Wenlin Wang; Lawrence Carin
2018-09-08
Towards Query Efficient Black-box Attacks: An Input-free Perspective.Yali Du; Meng Fang; Jinfeng Yi; Jun Cheng; Dacheng Tao
Fast Gradient Attack on Network Embedding.Jinyin Chen; Yangyang Wu; Xuanheng Xu; Yixian Chen; Haibin Zheng; Qi Xuan
Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples.Dan Peng; Zizhan Zheng; Xiaofeng Zhang
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.Ambra Demontis; Marco Melis; Maura Pintor; Matthew Jagielski; Battista Biggio; Alina Oprea; Cristina Nita-Rotaru; Fabio Roli
2018-09-07
A Deeper Look at 3D Shape Classifiers.Jong-Chyi Su; Matheus Gadelha; Rui Wang; Subhransu Maji
Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer.Alvin Chan; Lei Ma; Felix Juefei-Xu; Xiaofei Xie; Yang Liu; Yew Soon Ong
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Examples for Question Answering.Eric Wallace; Pedro Rodriguez; Shi Feng; Ikuya Yamada; Jordan Boyd-Graber
Query Attack via Opposite-Direction Feature: Towards Robust Image Retrieval.Zhedong Zheng; Liang Zheng; Yi Yang; Fei Wu
2018-09-06
Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models.Tong Niu; Mohit Bansal
Are adversarial examples inevitable?Ali Shafahi; W. Ronny Huang; Christoph Studer; Soheil Feizi; Tom Goldstein
IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection.Zilong Lin; Yong Shi; Zhi Xue
Adversarial Reprogramming of Text Classification Neural Networks.Paarth Neekhara; Shehzeen Hussain; Shlomo Dubnov; Farinaz Koushanfar
2018-09-05
Bridging machine learning and cryptography in defence against adversarial attacks.Olga Taran; Shideh Rezaeifar; Slava Voloshynovskiy
2018-09-04
Adversarial Attacks on Node Embeddings.Aleksandar Bojchevski; Stephan Günnemann
2018-09-03
HASP: A High-Performance Adaptive Mobile Security Enhancement Against Malicious Speech Recognition.Zirui Xu; Fuxun Yu; Chenchen Liu; Xiang Chen
Adversarial Attack Type I: Cheat Classifiers by Significant Changes.Sanli Tang; Xiaolin Huang; Mingjian Chen; Chengjin Sun; Jie Yang
2018-08-31
MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks.Siwakorn Srisakaokul; Yuhao Zhang; Zexuan Zhong; Wei Yang; Tao Xie; Bo Li
2018-08-28
DLFuzz: Differential Fuzzing Testing of Deep Learning Systems.Jianmin Guo; Yu Jiang; Yue Zhao; Quan Chen; Jiaguang Sun
All You Need is "Love": Evading Hate-speech Detection.Tommi Gröndahl; Luca Pajola; Mika Juuti; Mauro Conti; N. Asokan
Lipschitz regularized Deep Neural Networks generalize and are adversarially robust.Chris Finlay; Jeff Calder; Bilal Abbasi; Adam Oberman
2018-08-27
Targeted Nonlinear Adversarial Perturbations in Images and Videos.Roberto Rey-de-Castro; Herschel Rabitz
Generalisation in humans and deep neural networks.Robert Geirhos; Carlos R. Medina Temme; Jonas Rauber; Heiko H. Schütt; Matthias Bethge; Felix A. Wichmann
2018-08-26
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge.Pasquale Minervini; Sebastian Riedel
2018-08-25
Analysis of adversarial attacks against CNN-based image forgery detectors.Diego Gragnaniello; Francesco Marra; Giovanni Poggi; Luisa Verdoliva
Guiding Deep Learning System Testing using Surprise Adequacy.Jinhan Kim; Robert Feldt; Shin Yoo
2018-08-24
Is Machine Learning in Power Systems Vulnerable?Yize Chen; Yushi Tan; Deepjyoti Deka
2018-08-23
Maximal Jacobian-based Saliency Map Attack.Rey Wiyatno; Anqi Xu
Adversarial Attacks on Deep-Learning Based Radio Signal Classification.Meysam Sadeghi; Erik G. Larsson
2018-08-20
Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection.Mahdieh Abbasi; Arezoo Rajabi; Azadeh Sadat Mozafari; Rakesh B. Bobba; Christian Gagne
Stochastic Combinatorial Ensembles for Defending Against Adversarial Examples.George A. Adam; Petr Smirnov; David Duvenaud; Benjamin Haibe-Kains; Anna Goldenberg
2018-08-17
Reinforcement Learning for Autonomous Defence in Software-Defined Networking.Yi Han; Benjamin I. P. Rubinstein; Tamas Abraham; Tansu Alpcan; Olivier De Vel; Sarah Erfani; David Hubczenko; Christopher Leckie; Paul Montague
2018-08-16
Mitigation of Adversarial Attacks through Embedded Feature Selection.Ziyi Bao; Luis Muñoz-González; Emil C. Lupu
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding.Lea Schönherr; Katharina Kohls; Steffen Zeiler; Thorsten Holz; Dorothea Kolossa
Distributionally Adversarial Attack.Tianhang Zheng; Changyou Chen; Kui Ren
2018-08-10
Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks.Fan Yang; Zhiyuan Chen
Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection.Xiao Chen; Chaoran Li; Derui Wang; Sheng Wen; Jun Zhang; Surya Nepal; Yang Xiang; Kui Ren
2018-08-08
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer.Hsueh-Ti Derek Liu; Michael Tao; Chun-Liang Li; Derek Nowrouzezahrai; Alec Jacobson
2018-08-07
Data augmentation using synthetic data for time series classification with deep residual networks.Hassan Ismail Fawaz; Germain Forestier; Jonathan Weber; Lhassane Idoumghar; Pierre-Alain Muller
2018-08-06
Adversarial Vision Challenge.Wieland Brendel; Jonas Rauber; Alexey Kurakin; Nicolas Papernot; Behar Veliqi; Marcel Salathé; Sharada P. Mohanty; Matthias Bethge
Defense Against Adversarial Attacks with Saak Transform.Sibo Song; Yueru Chen; Ngai-Man Cheung; C. -C. Jay Kuo
Gray-box Adversarial Training.Vivek B. S.; Konda Reddy Mopuri; R. Venkatesh Babu
2018-08-05
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models.Dong Su; Huan Zhang; Hongge Chen; Jinfeng Yi; Pin-Yu Chen; Yupeng Gao
Structured Adversarial Attack: Towards General Implementation and Better Interpretability.Kaidi Xu; Sijia Liu; Pu Zhao; Pin-Yu Chen; Huan Zhang; Quanfu Fan; Deniz Erdogmus; Yanzhi Wang; Xue Lin
2018-08-04
Traits & Transferability of Adversarial Examples against Instance Segmentation & Object Detection.Raghav Gurbaxani; Shivank Mishra
ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples.Xinbo Liu; Jiliang Zhang; Yaping Lin; He Li
2018-08-03
Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions.Konda Reddy Mopuri; Phani Krishna Uppala; R. Venkatesh Babu
DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes.Mehmet Sinan Inci; Thomas Eisenbarth; Berk Sunar
2018-07-31
EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report).Yujie Ji; Xinyang Zhang; Ting Wang
2018-07-27
Rob-GAN: Generator, Discriminator, and Adversarial Attacker.Xuanqing Liu; Cho-Jui Hsieh
2018-07-26
A general metric for identifying adversarial images.Siddharth Krishna Kumar
Evaluating and Understanding the Robustness of Adversarial Logit Pairing.Logan Engstrom; Andrew Ilyas; Anish Athalye
2018-07-25
HiDDeN: Hiding Data With Deep Networks.Jiren Zhu; Russell Kaplan; Justin Johnson; Li Fei-Fei
Limitations of the Lipschitz constant as a defense against adversarial examples.Todd Huster; Cho-Yu Jason Chiang; Ritu Chadha
Unbounded Output Networks for Classification.Stefan Elfwing; Eiji Uchibe; Kenji Doya
2018-07-24
Contrastive Video Representation Learning via Adversarial Perturbations.Jue Wang; Anoop Cherian
2018-07-21
Simultaneous Adversarial Training - Learn from Others Mistakes.Zukang Liao
2018-07-20
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors.Andrew Ilyas; Logan Engstrom; Aleksander Madry
Physical Adversarial Examples for Object Detectors.Kevin Eykholt; Ivan Evtimov; Earlence Fernandes; Bo Li; Amir Rahmati; Florian Tramer; Atul Prakash; Tadayoshi Kohno; Dawn Song
2018-07-18
Harmonic Adversarial Attack Method.Wen Heng; Shuchang Zhou; Tingting Jiang
2018-07-17
Gradient Band-based Adversarial Training for Generalized Attack Immunity of A3C Path Finding.Tong Chen; Wenjia Niu; Yingxiao Xiang; Xiaoxuan Bai; Jiqiang Liu; Zhen Han; Gang Li
Motivating the Rules of the Game for Adversarial Example Research.Justin Gilmer; Ryan P. Adams; Ian Goodfellow; David Andersen; George E. Dahl
Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions.Adnan Siraj Rakin; Jinfeng Yi; Boqing Gong; Deliang Fan
2018-07-16
Online Robust Policy Learning in the Presence of Unknown Adversaries.Aaron J. Havens; Zhanhong Jiang; Soumik Sarkar
Manifold Adversarial Learning.Shufei Zhang; Kaizhu Huang; Jianke Zhu; Yang Liu
2018-07-12
Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach.Minhao Cheng; Thong Le; Pin-Yu Chen; Jinfeng Yi; Huan Zhang; Cho-Jui Hsieh
2018-07-11
With Friends Like These, Who Needs Adversaries?Saumya Jetley; Nicholas A. Lord; Philip H. S. Torr
2018-07-10
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks.Kimin Lee; Kibok Lee; Honglak Lee; Jinwoo Shin
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees.Min Wu; Matthew Wicker; Wenjie Ruan; Xiaowei Huang; Marta Kwiatkowska
Attack and defence in cellular decision-making: lessons from machine learning.Thomas J. Rademaker; Emmanuel Bengio; Paul François
2018-07-09
Adaptive Adversarial Attack on Scene Text Recognition.Xiaoyong Yuan; Pan He; Xiaolin Andy Li; Dapeng Oliver Wu
2018-07-08
Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks.Saeid Asgari Taghanaki; Arkadeep Das; Ghassan Hamarneh
2018-07-05
Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness.Priyadarshini Panda; Kaushik Roy
2018-07-04
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations.Dan Hendrycks; Thomas G. Dietterich
2018-07-03
Local Gradients Smoothing: Defense against localized adversarial attacks.Muzammal Naseer; Salman H. Khan; Fatih Porikli
Adversarial Robustness Toolbox v1.0.0.Maria-Irina Nicolae; Mathieu Sinn; Minh Ngoc Tran; Beat Buesser; Ambrish Rawat; Martin Wistuba; Valentina Zantedeschi; Nathalie Baracaldo; Bryant Chen; Heiko Ludwig; Ian M. Molloy; Ben Edwards
2018-07-02
Adversarial Perturbations Against Real-Time Video Classification Systems.Shasha Li; Ajaya Neupane; Sujoy Paul; Chengyu Song; Srikanth V. Krishnamurthy; Amit K. Roy Chowdhury; Ananthram Swami
2018-07-01
Towards Adversarial Training with Moderate Performance Improvement for Neural Network Classification.Xinhan Di; Pengqian Yu; Meng Tian
2018-06-29
Adversarial Examples in Deep Learning: Characterization and Divergence.Wenqi Wei; Ling Liu; Margaret Loper; Stacey Truex; Lei Yu; Mehmet Emre Gursoy; Yanzhao Wu
2018-06-28
Adversarial Reprogramming of Neural Networks.Gamaleldin F. Elsayed; Ian Goodfellow; Jascha Sohl-Dickstein
2018-06-27
Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning.Jasjeet Dhaliwal; Saurabh Shintre
Customizing an Adversarial Example Generator with Class-Conditional GANs.Shih-hong Tsai
2018-06-25
Exploring Adversarial Examples: Patterns of One-Pixel Attacks.David Kügler; Alexander Distergoft; Arjan Kuijper; Anirban Mukhopadhyay
2018-06-23
Defending Malware Classification Networks Against Adversarial Perturbations with Non-Negative Weight Restrictions.Alex Kouzemtchenko
On Adversarial Examples for Character-Level Neural Machine Translation.Javid Ebrahimi; Daniel Lowd; Dejing Dou
Evaluation of Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) Based Attack Method on MCS 2018 Adversarial Attacks on Black Box Face Recognition System.Md Ashraful Alam Milton
2018-06-21
Detection based Defense against Adversarial Examples from the Steganalysis Point of View.Jiayang Liu; Weiming Zhang; Yiwei Zhang; Dongdong Hou; Yujia Liu; Hongyue Zha; Nenghai Yu
2018-06-20
Gradient Adversarial Training of Neural Networks.Ayan Sinha; Zhao Chen; Vijay Badrinarayanan; Andrew Rabinovich
Combinatorial Testing for Deep Learning Systems.Lei Ma; Fuyuan Zhang; Minhui Xue; Bo Li; Yang Liu; Jianjun Zhao; Yadong Wang
2018-06-19
On the Learning of Deep Local Features for Robust Face Spoofing Detection.Gustavo Botelho de Souza; João Paulo Papa; Aparecido Nilceu Marana
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations.Thomas Tanay; Jerone T. A. Andrews; Lewis D. Griffin
2018-06-15
Non-Negative Networks Against Adversarial Attacks.William Fleshman; Edward Raff; Jared Sylvester; Steven Forsyth; Mark McLean
2018-06-14
Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data.Jacson Rodrigues Correia-Silva; Rodrigo F. Berriel; Claudine Badue; Alberto F. de Souza; Thiago Oliveira-Santos
2018-06-13
Hierarchical interpretations for neural network predictions.Chandan Singh; W. James Murdoch; Bin Yu
Manifold Mixup: Better Representations by Interpolating Hidden States.Vikas Verma; Alex Lamb; Christopher Beckham; Amir Najafi; Ioannis Mitliagkas; Aaron Courville; David Lopez-Paz; Yoshua Bengio
2018-06-12
Adversarial Attacks on Variational Autoencoders.George Gondim-Ribeiro; Pedro Tabacof; Eduardo Valle
Ranking Robustness Under Adversarial Document Manipulations.Gregory Goren; Oren Kurland; Moshe Tennenholtz; Fiana Raiber
2018-06-11
Defense Against the Dark Arts: An overview of adversarial example security research and future research directions.Ian Goodfellow
2018-06-08
Monge blunts Bayes: Hardness Results for Adversarial Training.Zac Cranko; Aditya Krishna Menon; Richard Nock; Cheng Soon Ong; Zhan Shi; Christian Walder
2018-06-07
Revisiting Adversarial Risk.Arun Sai Suggala; Adarsh Prasad; Vaishnavh Nagarajan; Pradeep Ravikumar
Training Augmentation with Adversarial Examples for Robust Speech Recognition.Sining Sun; Ching-Feng Yeh; Mari Ostendorf; Mei-Yuh Hwang; Lei Xie
2018-06-06
Adversarial Attack on Graph Structured Data.Hanjun Dai; Hui Li; Tian Tian; Xin Huang; Lin Wang; Jun Zhu; Le Song
Adversarial Regression with Multiple Learners.Liang Tong; Sixie Yu; Scott Alfeld; Yevgeniy Vorobeychik
Killing four birds with one Gaussian process: the relation between different test-time attacks.Kathrin Grosse; Michael T. Smith; Michael Backes
2018-06-05
DPatch: An Adversarial Patch Attack on Object Detectors.Xin Liu; Huanrui Yang; Ziwei Liu; Linghao Song; Hai Li; Yiran Chen
2018-06-04
Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise.Vahid Behzadan; Arslan Munir
An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks.Chirag Agarwal; Bo Dong; Dan Schonfeld; Anthony Hoogs
PAC-learning in the presence of evasion adversaries.Daniel Cullina; Arjun Nitin Bhagoji; Prateek Mittal
2018-06-02
Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks.Yarin Gal; Lewis Smith
Detecting Adversarial Examples via Key-based Network.Pinlong Zhao; Zhouyu Fu; Ou Wu; Qinghua Hu; Jun Wang
2018-05-31
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks.Jan Svoboda; Jonathan Masci; Federico Monti; Michael M. Bronstein; Leonidas Guibas
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders.Partha Ghosh; Arpan Losalka; Michael J Black
Scaling provable adversarial defenses.Eric Wong; Frank R. Schmidt; Jan Hendrik Metzen; J. Zico Kolter
Sequential Attacks on Agents for Long-Term Adversarial Goals.Edgar Tretschk; Seong Joon Oh; Mario Fritz
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data.Puyudi Yang; Jianbo Chen; Cho-Jui Hsieh; Jane-Ling Wang; Michael I. Jordan
2018-05-30
Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization.Avishek Joey Bose; Parham Aarabi
ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio.Nilaksh Das; Madhuri Shanbhogue; Shang-Tse Chen; Li Chen; Michael E. Kounavis; Duen Horng Chau
Robustifying Models Against Adversarial Attacks by Langevin Dynamics.Vignesh Srinivasan; Arturo Marban; Klaus-Robert Müller; Wojciech Samek; Shinichi Nakajima
Robustness May Be at Odds with Accuracy.Dimitris Tsipras; Shibani Santurkar; Logan Engstrom; Alexander Turner; Aleksander Madry
2018-05-29
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks.Chun-Chen Tu; Paishun Ting; Pin-Yu Chen; Sijia Liu; Huan Zhang; Jinfeng Yi; Cho-Jui Hsieh; Shin-Ming Cheng
Adversarial Noise Attacks of Deep Learning Architectures -- Stability Analysis via Sparse Modeled Signals.Yaniv Romano; Aviad Aberdam; Jeremias Sulam; Michael Elad
Why Botnets Work: Distributed Brute-Force Attacks Need No Synchronization.Salman Salamatian; Wasim Huleihel; Ahmad Beirami; Asaf Cohen; Muriel Médard
2018-05-28
Adversarial Examples in Remote Sensing.Wojciech Czaja; Neil Fendley; Michael Pekala; Christopher Ratto; I-Jeng Wang
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization.Moustafa Alzantot; Yash Sharma; Supriyo Chakraborty; Huan Zhang; Cho-Jui Hsieh; Mani Srivastava
2018-05-27
Defending Against Adversarial Attacks by Leveraging an Entire GAN.Gokula Krishnan Santhanam; Paulina Grnarova
2018-05-25
Training verified learners with learned verifiers.Krishnamurthy Dvijotham; Sven Gowal; Robert Stanforth; Relja Arandjelovic; Brendan O'Donoghue; Jonathan Uesato; Pushmeet Kohli
Adversarial examples from computational constraints.Sébastien Bubeck; Eric Price; Ilya Razenshteyn
2018-05-24
Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness.Carlos Eduardo Rosar Kos Lassance; Vincent Gripon; Antonio Ortega
2018-05-23
Anonymizing k-Facial Attributes via Adversarial Perturbations.Saheb Chhabra; Richa Singh; Mayank Vatsa; Gaurav Gupta
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients.Fuxun Yu; Zirui Xu; Yanzhi Wang; Chenchen Liu; Xiang Chen
Towards the first adversarially robust neural network model on MNIST.Lukas Schott; Jonas Rauber; Matthias Bethge; Wieland Brendel
2018-05-22
Adversarially Robust Training through Structured Gradient Regularization.Kevin Roth; Aurelien Lucchi; Sebastian Nowozin; Thomas Hofmann
2018-05-21
Adversarial Noise Layer: Regularize Neural Network By Adding Noise.Zhonghui You; Jinmian Ye; Kunming Li; Zenglin Xu; Ping Wang
Constructing Unrestricted Adversarial Examples with Generative Models.Yang Song; Rui Shu; Nate Kushman; Stefano Ermon
Bidirectional Learning for Robust Neural Networks.Sidney Pontes-Filho; Marcus Liwicki
Adversarial Attacks on Neural Networks for Graph Data.Daniel Zügner; Amir Akbarnejad; Stephan Günnemann
2018-05-20
Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference.Ruying Bao; Sihang Liang; Qingcan Wang
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks.Jiefeng Chen; Xi Wu; Vaibhav Rastogi; Yingyu Liang; Somesh Jha
Targeted Adversarial Examples for Black Box Audio Systems.Rohan Taori; Amog Kamsetty; Brenton Chu; Nikita Vemuri
2018-05-17
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models.Pouya Samangouei; Maya Kabkab; Rama Chellappa
2018-05-16
Towards Robust Neural Machine Translation.Yong Cheng; Zhaopeng Tu; Fandong Meng; Junjie Zhai; Yang Liu
2018-05-14
Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing.Jingyi Wang; Jun Sun; Peixin Zhang; Xinyu Wang
2018-05-12
Curriculum Adversarial Training.Qi-Zhi Cai; Min Du; Chang Liu; Dawn Song
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning.Jinyuan Jia; Neil Zhenqiang Gong
2018-05-11
Breaking Transferability of Adversarial Samples with Randomness.Yan Zhou; Murat Kantarcioglu; Bowei Xi
2018-05-09
On Visual Hallmarks of Robustness to Adversarial Malware.Alex Huang; Abdullah Al-Dujaili; Erik Hemberg; Una-May O'Reilly
Robust Classification with Convolutional Prototype Learning.Hong-Ming Yang; Xu-Yao Zhang; Fei Yin; Cheng-Lin Liu
2018-05-08
Interpretable Adversarial Perturbation in Input Embedding Space for Text.Motoki Sato; Jun Suzuki; Hiroyuki Shindo; Yuji Matsumoto
2018-05-05
A Counter-Forensic Method for CNN-Based Camera Model Identification.David Güera; Yu Wang; Luca Bondi; Paolo Bestagini; Stefano Tubaro; Edward J. Delp
2018-05-03
Siamese networks for generating adversarial examples.Mandar Kulkarni; Aria Abubakar
2018-04-30
Concolic Testing for Deep Neural Networks.Youcheng Sun; Min Wu; Wenjie Ruan; Xiaowei Huang; Marta Kwiatkowska; Daniel Kroening
How Robust are Deep Neural Networks?Biswa Sengupta; Karl J. Friston
Adversarially Robust Generalization Requires More Data.Ludwig Schmidt; Shibani Santurkar; Dimitris Tsipras; Kunal Talwar; Aleksander Mądry
2018-04-29
Adversarial Regression for Detecting Attacks in Cyber-Physical Systems.Amin Ghafouri; Yevgeniy Vorobeychik; Xenofon Koutsoukos
2018-04-28
Formal Security Analysis of Neural Networks using Symbolic Intervals.Shiqi Wang; Kexin Pei; Justin Whitehouse; Junfeng Yang; Suman Jana
2018-04-25
Towards Fast Computation of Certified Robustness for ReLU Networks.Tsui-Wei Weng; Huan Zhang; Hongge Chen; Zhao Song; Cho-Jui Hsieh; Duane Boning; Inderjit S. Dhillon; Luca Daniel
2018-04-23
Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning.Mahdieh Abbasi; Arezoo Rajabi; Christian Gagné; Rakesh B. Bobba
Siamese Generative Adversarial Privatizer for Biometric Data.Witold Oleszkiewicz; Peter Kairouz; Karol Piczak; Ram Rajagopal; Tomasz Trzcinski
Black-box Adversarial Attacks with Limited Queries and Information.Andrew Ilyas; Logan Engstrom; Anish Athalye; Jessy Lin
VectorDefense: Vectorization as a Defense to Adversarial Examples.Vishaal Munusamy Kabilan; Brandon Morris; Anh Nguyen
Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers.Ishai Rosenberg; Asaf Shabtai; Yuval Elovici; Lior Rokach
2018-04-21
Generating Natural Language Adversarial Examples.Moustafa Alzantot; Yash Sharma; Ahmed Elgohary; Bo-Jhang Ho; Mani Srivastava; Kai-Wei Chang
2018-04-20
Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size.Ian Goodfellow
Learning More Robust Features with Adversarial Training.Shuangtao Li; Yuanke Chen; Yanlin Peng; Lin Bai
ADef: an Iterative Algorithm to Construct Adversarial Deformations.Rima Alaifari; Giovanni S. Alberti; Tandri Gauksson
2018-04-19
Attacking Convolutional Neural Network using Differential Evolution.Jiawei Su; Danilo Vasconcellos Vargas; Kouichi Sakurai
Semantic Adversarial Deep Learning.Tommaso Dreossi; Somesh Jha; Sanjit A. Seshia
2018-04-18
Simulation-based Adversarial Test Generation for Autonomous Vehicles with Machine Learning Components.Cumhur Erkan Tuncali; Georgios Fainekos; Hisahiro Ito; James Kapinski
Neural Automated Essay Scoring and Coherence Modeling for Adversarially Crafted Input.Youmna Farag; Helen Yannakoudakis; Ted Briscoe
2018-04-17
Robust Machine Comprehension Models via Adversarial Training.Yicheng Wang; Mohit Bansal
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks.Mohit Iyyer; John Wieting; Kevin Gimpel; Luke Zettlemoyer
2018-04-16
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm.Wenjie Ruan; Min Wu; Youcheng Sun; Xiaowei Huang; Daniel Kroening; Marta Kwiatkowska
ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector.Shang-Tse Chen; Cory Cornelius; Jason Martin; Duen Horng Chau
2018-04-14
On the Limitation of MagNet Defense against $L_1$-based Adversarial Examples.Pei-Hsuan Lu; Pin-Yu Chen; Kang-Cheng Chen; Chia-Mu Yu
Adversarial Attacks Against Medical Deep Learning Systems.Samuel G. Finlayson; Hyung Won Chung; Isaac S. Kohane; Andrew L. Beam
2018-04-11
Detecting Malicious PowerShell Commands using Deep Neural Networks.Danny Hendler; Shay Kels; Amir Rubin
2018-04-10
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses.Anish Athalye; Nicholas Carlini
2018-04-09
Adversarial Training Versus Weight Decay.Angus Galloway; Thomas Tanay; Graham W. Taylor
An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks.Pu Zhao; Sijia Liu; Yanzhi Wang; Xue Lin
2018-04-08
Adaptive Spatial Steganography Based on Probability-Controlled Adversarial Examples.Sai Ma; Qingxiao Guan; Xianfeng Zhao; Yaqi Liu
2018-04-06
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations.Alex Lamb; Jonathan Binas; Anirudh Goyal; Dmitriy Serdyuk; Sandeep Subramanian; Ioannis Mitliagkas; Yoshua Bengio
2018-04-04
Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks.Neale Ratzlaff; Li Fuxin
2018-03-30
Adversarial Attacks and Defences Competition.Alexey Kurakin; Ian Goodfellow; Samy Bengio; Yinpeng Dong; Fangzhou Liao; Ming Liang; Tianyu Pang; Jun Zhu; Xiaolin Hu; Cihang Xie; Jianyu Wang; Zhishuai Zhang; Zhou Ren; Alan Yuille; Sangxia Huang; Yao Zhao; Yuzhe Zhao; Zhonglin Han; Junjiajia Long; Yerkebulan Berdibekov; Takuya Akiba; Seiya Tokui; Motoki Abe
2018-03-29
Security Consideration For Deep Learning-Based Image Forensics.Wei Zhao; Pengpeng Yang; Rongrong Ni; Yao Zhao; Haorui Wu
2018-03-28
Defending against Adversarial Images using Basis Functions Transformations.Uri Shaham; James Garritano; Yutaro Yamada; Ethan Weinberger; Alex Cloninger; Xiuyuan Cheng; Kelly Stanton; Yuval Kluger
The Effects of JPEG and JPEG2000 Compression on Attacks using Adversarial Examples.Ayse Elvan Aydemir; Alptekin Temizel; Tugba Taskaya Temizel
2018-03-26
Bypassing Feature Squeezing by Increasing Adversary Strength.Yash Sharma; Pin-Yu Chen
On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples.Pei-Hsuan Lu; Pin-Yu Chen; Chia-Mu Yu
Clipping free attacks against artificial neural networks.Boussad Addad; Jerome Kodjabachian; Christophe Meyer
2018-03-24
Security Theater: On the Vulnerability of Classifiers to Exploratory Attacks.Tegjyot Singh Sethi; Mehmed Kantardzic; Joung Woo Ryu
A Dynamic-Adversarial Mining Approach to the Security of Machine Learning.Tegjyot Singh Sethi; Mehmed Kantardzic; Lingyu Lyua; Jiashun Chen
An Overview of Vulnerabilities of Voice Controlled Systems.Yuan Gong; Christian Poellabauer
2018-03-23
Generalizability vs. Robustness: Adversarial Examples for Medical Imaging.Magdalini Paschali; Sailesh Conjeti; Fernando Navarro; Nassir Navab
CNN Based Adversarial Embedding with Minimum Alteration for Image Steganography.Weixuan Tang; Bin Li; Shunquan Tan; Mauro Barni; Jiwu Huang
Detecting Adversarial Perturbations with Saliency.Chiliang Zhang; Zhimou Yang; Zuochang Ye
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization.Daniel Jakubovitz; Raja Giryes
2018-03-22
Understanding Measures of Uncertainty for Adversarial Example Detection.Lewis Smith; Yarin Gal
2018-03-21
Adversarial Defense based on Structure-to-Signal Autoencoders.Joachim Folz; Sebastian Palacio; Joern Hees; Damian Borth; Andreas Dengel
Task dependent Deep LDA pruning of neural networks.Qing Tian; Tal Arbel; James J. Clark
2018-03-20
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems.Lei Ma; Felix Juefei-Xu; Fuyuan Zhang; Jiyuan Sun; Minhui Xue; Bo Li; Chunyang Chen; Ting Su; Li Li; Yang Liu; Jianjun Zhao; Yadong Wang
2018-03-19
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks.Octavian Suciu; Radu Mărginean; Yiğitcan Kaya; Hal Daumé III; Tudor Dumitraş
Improving Transferability of Adversarial Examples with Input Diversity.Cihang Xie; Zhishuai Zhang; Yuyin Zhou; Song Bai; Jianyu Wang; Zhou Ren; Alan Yuille
2018-03-17
A Dual Approach to Scalable Verification of Deep Networks.Krishnamurthy (Dj) Dvijotham; Robert Stanforth; Sven Gowal; Timothy Mann; Pushmeet Kohli
2018-03-16
Adversarial Logit Pairing.Harini Kannan; Alexey Kurakin; Ian Goodfellow
Semantic Adversarial Examples.Hossein Hosseini; Radha Poovendran
2018-03-15
Large Margin Deep Networks for Classification.Gamaleldin F. Elsayed; Dilip Krishnan; Hossein Mobahi; Kevin Regan; Samy Bengio
2018-03-13
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples.Zihao Liu; Qi Liu; Tao Liu; Nuo Xu; Xue Lin; Yanzhi Wang; Wujie Wen
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning.Nicolas Papernot; Patrick McDaniel
Invisible Mask: Practical Attacks on Face Recognition with Infrared.Zhe Zhou; Di Tang; Xiaofeng Wang; Weili Han; Xiangyu Liu; Kehuan Zhang
Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training.Derek Wang; Chaoran Li; Sheng Wen; Surya Nepal; Yang Xiang
2018-03-12
Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.Bojan Kolosnjaji; Ambra Demontis; Battista Biggio; Davide Maiorca; Giorgio Giacinto; Claudia Eckert; Fabio Roli
2018-03-10
Combating Adversarial Attacks Using Sparse Representations.Soorya Gopalakrishnan; Zhinus Marzi; Upamanyu Madhow; Ramtin Pedarsani
Detecting Adversarial Examples via Neural Fingerprinting.Sumanth Dathathri; Stephan Zheng; Tianwei Yin; Richard M. Murray; Yisong Yue
2018-03-09
Detecting Adversarial Examples - A Lesson from Multimedia Forensics.Pascal Schöttle; Alexander Schlögl; Cecilia Pasquini; Rainer Böhme
On Generation of Adversarial Examples using Convex Programming.Emilio Rafael Balda; Arash Behboodi; Rudolf Mathar
Explaining Black-box Android Malware Detection.Marco Melis; Davide Maiorca; Battista Biggio; Giorgio Giacinto; Fabio Roli
2018-03-08
Rethinking Feature Distribution for Loss Functions in Image Classification.Weitao Wan; Yuanyi Zhong; Tianpeng Li; Jiansheng Chen
2018-03-07
Sparse Adversarial Perturbations for Videos.Xingxing Wei; Jun Zhu; Hang Su
2018-03-04
Stochastic Activation Pruning for Robust Adversarial Defense.Guneet S. Dhillon; Kamyar Azizzadenesheli; Zachary C. Lipton; Jeremy Bernstein; Jean Kossaifi; Aran Khanna; Anima Anandkumar
2018-03-03
Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples.Minhao Cheng; Jinfeng Yi; Pin-Yu Chen; Huan Zhang; Cho-Jui Hsieh
2018-03-02
Protecting JPEG Images Against Adversarial Attacks.Aaditya Prakash; Nick Moran; Solomon Garber; Antonella DiLillo; James Storer
2018-02-26
Understanding and Enhancing the Transferability of Adversarial Examples.Lei Wu; Zhanxing Zhu; Cheng Tai; Weinan E
On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples.Mahmood Sharif; Lujo Bauer; Michael K. Reiter
Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples.Jake Zhao; Kyunghyun Cho
Max-Mahalanobis Linear Discriminant Analysis Networks.Tianyu Pang; Chao Du; Jun Zhu
2018-02-23
Deep Defense: Training DNNs with Improved Adversarial Robustness.Ziang Yan; Yiwen Guo; Changshui Zhang
Sensitivity and Generalization in Neural Networks: an Empirical Study.Roman Novak; Yasaman Bahri; Daniel A. Abolafia; Jeffrey Pennington; Jascha Sohl-Dickstein
Adversarial vulnerability for any classifier.Alhussein Fawzi; Hamza Fawzi; Omar Fawzi
Verifying Controllers Against Adversarial Examples with Bayesian Optimization.Shromona Ghosh; Felix Berkenkamp; Gireeja Ranade; Shaz Qadeer; Ashish Kapoor
2018-02-22
Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks.Gaurav Goswami; Nalini Ratha; Akshay Agarwal; Richa Singh; Mayank Vatsa
Hessian-based Analysis of Large Batch Training and Robustness to Adversaries.Zhewei Yao; Amir Gholami; Qi Lei; Kurt Keutzer; Michael W. Mahoney
Adversarial Examples that Fool both Computer Vision and Time-Limited Humans.Gamaleldin F. Elsayed; Shreya Shankar; Brian Cheung; Nicolas Papernot; Alex Kurakin; Ian Goodfellow; Jascha Sohl-Dickstein
2018-02-21
Adversarial Training for Probabilistic Spiking Neural Networks.Alireza Bagheri; Osvaldo Simeone; Bipin Rajendran
L2-Nonexpansive Neural Networks.Haifeng Qian; Mark N. Wegman
Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch.João Monteiro; Isabela Albuquerque; Zahid Akhtar; Tiago H. Falk
2018-02-20
Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning.Christopher Frederickson; Michael Moore; Glenn Dawson; Robi Polikar
Out-distribution training confers robustness to deep neural networks.Mahdieh Abbasi; Christian Gagné
2018-02-19
On Lyapunov exponents and adversarial perturbation.Vinay Uday Prabhu; Nishant Desai; John Whaley
Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression.Nilaksh Das; Madhuri Shanbhogue; Shang-Tse Chen; Fred Hohman; Siwei Li; Li Chen; Michael E. Kounavis; Duen Horng Chau
Divide, Denoise, and Defend against Adversarial Attacks.Seyed-Mohsen Moosavi-Dezfooli; Ashish Shrivastava; Oncel Tuzel
Robustness of Rotation-Equivariant Networks to Adversarial Perturbations.Beranger Dumont; Simona Maggio; Pablo Montalvo
Are Generative Classifiers More Robust to Adversarial Attacks?Yingzhen Li; John Bradshaw; Yash Sharma
2018-02-18
DARTS: Deceiving Autonomous Cars with Toxic Signs.Chawin Sitawarin; Arjun Nitin Bhagoji; Arsalan Mosenia; Mung Chiang; Prateek Mittal
2018-02-15
ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction.Fuxun Yu; Qide Dong; Xiang Chen
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks.Jonathan Uesato; Brendan O'Donoghue; Aaron van den Oord; Pushmeet Kohli
2018-02-14
Fooling OCR Systems with Adversarial Text Images.Congzheng Song; Vitaly Shmatikov
Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks.Qi Liu; Tao Liu; Zihao Liu; Yanzhi Wang; Yier Jin; Wujie Wen
2018-02-13
Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints.Di Tang; XiaoFeng Wang; Kehuan Zhang
Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models.Mengying Sun; Fengyi Tang; Jinfeng Yi; Fei Wang; Jiayu Zhou
Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples.Felix Kreuk; Assi Barak; Shir Aviv-Reuven; Moran Baruch; Benny Pinkas; Joseph Keshet
2018-02-12
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks.Yusuke Tsuzuku; Issei Sato; Masashi Sugiyama
Predicting Adversarial Examples with High Confidence.Angus Galloway; Graham W. Taylor; Medhat Moussa
2018-02-09
Certified Robustness to Adversarial Examples with Differential Privacy.Mathias Lecuyer; Vaggelis Atlidakis; Roxana Geambasu; Daniel Hsu; Suman Jana
2018-02-08
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection.Andrea Paudice; Luis Muñoz-González; Andras Gyorgy; Emil C. Lupu
2018-02-05
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples.Adnan Siraj Rakin; Zhezhi He; Boqing Gong; Deliang Fan
First-order Adversarial Vulnerability of Neural Networks and Input Dimension.Carl-Johann Simon-Gabriel; Yann Ollivier; Léon Bottou; Bernhard Schölkopf; David Lopez-Paz
2018-02-02
Secure Detection of Image Manipulation by means of Random Feature Selection.Zhipeng Chen; Benedetta Tondi; Xiaolong Li; Rongrong Ni; Yao Zhao; Mauro Barni
Hardening Deep Neural Networks via Adversarial Model Cascades.Deepak Vijaykeerthy; Anshuman Suri; Sameep Mehta; Ponnurangam Kumaraguru
2018-02-01
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.Anish Athalye; Nicholas Carlini; David Wagner
2018-01-31
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach.Tsui-Wei Weng; Huan Zhang; Pin-Yu Chen; Jinfeng Yi; Dong Su; Yupeng Gao; Cho-Jui Hsieh; Luca Daniel
2018-01-29
Robustness of classification ability of spiking neural networks.Jie Yang; Pingping Zhang; Yan Liu
2018-01-28
Certified Defenses against Adversarial Examples.Aditi Raghunathan; Jacob Steinhardt; Percy Liang
2018-01-27
Towards an Understanding of Neural Networks in Natural-Image Spaces.Yifei Fan; Anthony Yezzi
2018-01-26
Deflecting Adversarial Attacks with Pixel Deflection.Aaditya Prakash; Nick Moran; Solomon Garber; Antonella DiLillo; James Storer
Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning.Hyrum S. Anderson; Anant Kharkar; Bobby Filar; David Evans; Phil Roth
2018-01-24
CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition.Xuejing Yuan; Yuxuan Chen; Yue Zhao; Yunhui Long; Xiaokang Liu; Kai Chen; Shengzhi Zhang; Heqing Huang; Xiaofeng Wang; Carl A. Gunter
Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations.Konda Reddy Mopuri; Aditya Ganeshan; R. Venkatesh Babu
2018-01-22
Adversarial Texts with Gradient Methods.Zhitao Gong; Wenlu Wang; Bo Li; Dawn Song; Wei-Shinn Ku
2018-01-15
A Comparative Study of Rule Extraction for Recurrent Neural Networks.Qinglong Wang; Kaixuan Zhang; Alexander G. Ororbia II; Xinyu Xing; Xue Liu; C. Lee Giles
Sparsity-based Defense against Adversarial Attacks on Linear Classifiers.Zhinus Marzi; Soorya Gopalakrishnan; Upamanyu Madhow; Ramtin Pedarsani
Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks.Bo Luo; Yannan Liu; Lingxiao Wei; Qiang Xu
2018-01-12
Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers.Ji Gao; Jack Lanchantin; Mary Lou Soffa; Yanjun Qi
2018-01-11
A3T: Adversarially Augmented Adversarial Training.Akram Erraqabi; Aristide Baratin; Yoshua Bengio; Simon Lacoste-Julien
2018-01-10
Fooling End-to-end Speaker Verification by Adversarial Examples.Felix Kreuk; Yossi Adi; Moustapha Cisse; Joseph Keshet
2018-01-09
Adversarial Deep Learning for Robust Detection of Binary Encoded Malware.Abdullah Al-Dujaili; Alex Huang; Erik Hemberg; Una-May O'Reilly
Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks.Yongshuai Liu; Jiyu Chen; Hao Chen
2018-01-08
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos.Chawin Sitawarin; Arjun Nitin Bhagoji; Arsalan Mosenia; Prateek Mittal; Mung Chiang
Adversarial Spheres.Justin Gilmer; Luke Metz; Fartash Faghri; Samuel S. Schoenholz; Maithra Raghu; Martin Wattenberg; Ian Goodfellow
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality.Xingjun Ma; Bo Li; Yisen Wang; Sarah M. Erfani; Sudanthi Wijewickrema; Grant Schoenebeck; Dawn Song; Michael E. Houle; James Bailey
Spatially Transformed Adversarial Examples.Chaowei Xiao; Jun-Yan Zhu; Bo Li; Warren He; Mingyan Liu; Dawn Song
Generating Adversarial Examples with Adversarial Networks.Chaowei Xiao; Bo Li; Jun-Yan Zhu; Warren He; Mingyan Liu; Dawn Song
LaVAN: Localized and Visible Adversarial Noise.Danny Karmon; Daniel Zoran; Yoav Goldberg
Attacking Speaker Recognition With Deep Generative Models.Wilson Cai; Anish Doshi; Rafael Valle
HeNet: A Deep Learning Approach on Intel® Processor Trace for Effective Exploit Detection.Li Chen; Salmin Sultana; Ravi Sahita
2018-01-07
Denoising Dictionary Learning Against Adversarial Perturbations.John Mitro; Derek Bridge; Steven Prestwich
2018-01-05
Adversarial Perturbation Intensity Achieving Chosen Intra-Technique Transferability Level for Logistic Regression.Martin Gubri
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text.Nicholas Carlini; David Wagner
Shielding Google's language toxicity model against adversarial attacks.Nestor Rodriguez; Sergio Rojas-Galeano
2018-01-03
Facial Attributes: Accuracy and Adversarial Robustness.Andras Rozsa; Manuel Günther; Ethan M. Rudd; Terrance E. Boult
Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space.Mayank Singh; Abhishek Sinha; Balaji Krishnamurthy
2018-01-02
High Dimensional Spaces, Deep Learning and Adversarial Examples.Simant Dube
Did you hear that? Adversarial Examples Against Automatic Speech Recognition.Moustafa Alzantot; Bharathan Balaji; Mani Srivastava
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey.Naveed Akhtar; Ajmal Mian
2017-12-31
A General Framework for Adversarial Examples with Objectives.Mahmood Sharif; Sruti Bhagavatula; Lujo Bauer; Michael K. Reiter
2017-12-28
Gradient Regularization Improves Accuracy of Discriminative Models.Dániel Varga; Adrián Csiszárik; Zsolt Zombori
2017-12-27
Adversarial Patch.Tom B. Brown; Dandelion Mané; Aurko Roy; Martín Abadi; Justin Gilmer
2017-12-26
Exploring the Space of Black-box Attacks on Deep Neural Networks.Arjun Nitin Bhagoji; Warren He; Bo Li; Dawn Song
Building Robust Deep Neural Networks for Road Sign Detection.Arkar Min Aung; Yousef Fadila; Radian Gondokaryono; Luis Gonzalez
The Robust Manifold Defense: Adversarial Training using Generative Models.Ajil Jalal; Andrew Ilyas; Constantinos Daskalakis; Alexandros G. Dimakis
2017-12-24
Android Malware Detection using Deep Learning on API Method Sequences.ElMouatez Billah Karbab; Mourad Debbabi; Abdelouahid Derhab; Djedjiga Mouheb
2017-12-23
Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger.Vahid Behzadan; Arslan Munir
2017-12-22
Query-limited Black-box Attacks to Classifiers.Fnu Suya; Yuan Tian; David Evans; Paolo Papotti
2017-12-21
Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks.Siqi Yang; Arnold Wiliem; Shaokang Chen; Brian C. Lovell
ReabsNet: Detecting and Revising Adversarial Examples.Jiefeng Chen; Zihang Meng; Changtian Sun; Wei Tang; Yinglun Zhu
Note on Attacking Object Detectors with Adversarial Stickers.Kevin Eykholt; Ivan Evtimov; Earlence Fernandes; Bo Li; Dawn Song; Tadayoshi Kohno; Amir Rahmati; Atul Prakash; Florian Tramer
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications.Qixue Xiao; Kang Li; Deyue Zhang; Yier Jin
2017-12-19
Query-Efficient Black-box Adversarial Examples (superceded).Andrew Ilyas; Logan Engstrom; Anish Athalye; Jessy Lin
Adversarial Examples: Attacks and Defenses for Deep Learning.Xiaoyong Yuan; Pan He; Qile Zhu; Xiaolin Li
2017-12-18
HotFlip: White-Box Adversarial Examples for Text Classification.Javid Ebrahimi; Anyi Rao; Daniel Lowd; Dejing Dou
When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.David J. Miller; Yulia Wang; George Kesidis
2017-12-17
Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study.Matteo Fischetti; Jason Jo
Super-sparse Learning in Similarity Spaces.Ambra Demontis; Marco Melis; Battista Biggio; Giorgio Fumera; Fabio Roli
2017-12-16
Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models.Jack W. Stokes; De Wang; Mady Marinescu; Marc Marino; Brian Bussone
2017-12-14
DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation.Catherine Wong
2017-12-12
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models.Wieland Brendel; Jonas Rauber; Matthias Bethge
2017-12-11
Training Ensembles to Detect Adversarial Examples.Alexander Bagnall; Razvan Bunescu; Gordon Stewart
2017-12-10
Robust Deep Reinforcement Learning with Adversarial Attacks.Anay Pattanaik; Zhenyi Tang; Shuijing Liu; Gautham Bommannan; Girish Chowdhary
2017-12-09
NAG: Network for Adversary Generation.Konda Reddy Mopuri; Utkarsh Ojha; Utsav Garg; R. Venkatesh Babu
2017-12-08
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning.Battista Biggio; Fabio Roli
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser.Fangzhou Liao; Ming Liang; Yinpeng Dong; Tianyu Pang; Xiaolin Hu; Jun Zhu
2017-12-07
Adversarial Examples that Fool Detectors.Jiajun Lu; Hussein Sibai; Evan Fabry
Exploring the Landscape of Spatial Robustness.Logan Engstrom; Brandon Tran; Dimitris Tsipras; Ludwig Schmidt; Aleksander Madry
2017-12-06
Generative Adversarial Perturbations.Omid Poursaeed; Isay Katsman; Bicheng Gao; Serge Belongie
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning.Hongge Chen; Huan Zhang; Pin-Yu Chen; Jinfeng Yi; Cho-Jui Hsieh
2017-12-05
Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems.Kexin Pei; Linjie Zhu; Yinzhi Cao; Junfeng Yang; Carl Vondrick; Suman Jana
2017-12-02
Improving Network Robustness against Adversarial Attacks with Compact Convolution.Rajeev Ranjan; Swami Sankaranarayanan; Carlos D. Castillo; Rama Chellappa
Towards Robust Neural Networks via Random Self-ensemble.Xuanqing Liu; Minhao Cheng; Huan Zhang; Cho-Jui Hsieh
Where Classification Fails, Interpretation Rises.Chanh Nguyen; Georgi Georgiev; Yujie Ji; Ting Wang
2017-11-30
Measuring the tendency of CNNs to Learn Surface Statistical Regularities.Jason Jo; Yoshua Bengio
2017-11-27
Adversary Detection in Neural Networks via Persistent Homology.Thomas Gebhart; Paul Schrater
On the Robustness of Semantic Segmentation Models to Adversarial Attacks.Anurag Arnab; Ondrej Miksik; Philip H. S. Torr
Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation.YoungJoon Yoo; Seonguk Park; Junyoung Choi; Sangdoo Yun; Nojun Kwak
2017-11-26
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients.Andrew Slavin Ross; Finale Doshi-Velez
2017-11-24
Geometric robustness of deep networks: analysis and improvement.Can Kanbak; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
2017-11-22
Safer Classification by Synthesis.William Wang; Angelina Wang; Aviv Tamar; Xi Chen; Pieter Abbeel
MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples.Nicholas Carlini; David Wagner
Adversarial Phenomenon in the Eyes of Bayesian Deep Learning.Ambrish Rawat; Martin Wistuba; Maria-Irina Nicolae
2017-11-21
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training.Xi Wu; Uyeong Jang; Jiefeng Chen; Lingjiao Chen; Somesh Jha
2017-11-20
Evaluating Robustness of Neural Networks with Mixed Integer Programming.Vincent Tjeng; Kai Xiao; Russ Tedrake
Adversarial Attacks Beyond the Image Space.Xiaohui Zeng; Chenxi Liu; Yu-Siang Wang; Weichao Qiu; Lingxi Xie; Yu-Wing Tai; Chi Keung Tang; Alan L. Yuille
2017-11-17
How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models.Kathrin Grosse; David Pfaff; Michael Thomas Smith; Michael Backes
2017-11-16
Enhanced Attacks on Defensively Distilled Deep Neural Networks.Yujia Liu; Weiming Zhang; Shaohua Li; Nenghai Yu
Defense against Universal Adversarial Perturbations.Naveed Akhtar; Jian Liu; Ajmal Mian
2017-11-15
The best defense is a good offense: Countering black box attacks by predicting slightly wrong labels.Yannic Kilcher; Thomas Hofmann
2017-11-12
Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples.Jihun Hamm; Akshay Mehra
2017-11-09
Crafting Adversarial Examples For Speech Paralinguistics Applications.Yuan Gong; Christian Poellabauer
2017-11-08
Intriguing Properties of Adversarial Examples.Ekin D. Cubuk; Barret Zoph; Samuel S. Schoenholz; Quoc V. Le
2017-11-06
Mitigating Adversarial Effects Through Randomization.Cihang Xie; Jianyu Wang; Zhishuai Zhang; Zhou Ren; Alan Yuille
HyperNetworks with statistical filtering for defending adversarial examples.Zhun Sun; Mete Ozay; Takayuki Okatani
Towards Reverse-Engineering Black-Box Neural Networks.Seong Joon Oh; Max Augustin; Bernt Schiele; Mario Fritz
2017-11-02
The (Un)reliability of saliency methods.Pieter-Jan Kindermans; Sara Hooker; Julius Adebayo; Maximilian Alber; Kristof T. Schütt; Sven Dähne; Dumitru Erhan; Been Kim
Provable defenses against adversarial examples via the convex outer adversarial polytope.Eric Wong; J. Zico Kolter
2017-11-01
Attacking Binarized Neural Networks.Angus Galloway; Graham W. Taylor; Medhat Moussa
2017-10-31
Countering Adversarial Images using Input Transformations.Chuan Guo; Mayank Rana; Moustapha Cisse; Laurens van der Maaten
Conditional Variance Penalties and Domain Shift Robustness.Christina Heinze-Deml; Nicolai Meinshausen
Generating Natural Adversarial Examples.Zhengli Zhao; Dheeru Dua; Sameer Singh
2017-10-30
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples.Yang Song; Taesup Kim; Sebastian Nowozin; Stefano Ermon; Nate Kushman
2017-10-29
Attacking the Madry Defense Model with $L_1$-based Adversarial Examples.Yash Sharma; Pin-Yu Chen
Certifying Some Distributional Robustness with Principled Adversarial Training.Aman Sinha; Hongseok Namkoong; Riccardo Volpi; John Duchi
2017-10-28
Interpretation of Neural Networks is Fragile.Amirata Ghorbani; Abubakar Abid; James Zou
2017-10-27
Adversarial Detection of Flash Malware: Limitations and Open Issues.Davide Maiorca; Ambra Demontis; Battista Biggio; Fabio Roli; Giorgio Giacinto
2017-10-25
mixup: Beyond Empirical Risk Minimization.Hongyi Zhang; Moustapha Cisse; Yann N. Dauphin; David Lopez-Paz
2017-10-24
One pixel attack for fooling deep neural networks.Jiawei Su; Danilo Vasconcellos Vargas; Kouichi Sakurai
2017-10-21
Feature-Guided Black-Box Safety Testing of Deep Neural Networks.Matthew Wicker; Xiaowei Huang; Marta Kwiatkowska
2017-10-17
Boosting Adversarial Attacks with Momentum.Yinpeng Dong; Fangzhou Liao; Tianyu Pang; Hang Su; Jun Zhu; Xiaolin Hu; Jianguo Li
2017-10-12
Game-Theoretic Design of Secure and Resilient Distributed Support Vector Machines with Adversaries.Rui Zhang; Quanyan Zhu
2017-10-09
Standard detectors aren't (currently) fooled by physical adversarial stop signs.Jiajun Lu; Hussein Sibai; Evan Fabry; David Forsyth
Verification of Binarized Neural Networks via Inter-Neuron Factoring.Chih-Hong Cheng; Georg Nührenberg; Chung-Hao Huang; Harald Ruess
2017-10-02
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight.Yen-Chen Lin; Ming-Yu Liu; Min Sun; Jia-Bin Huang
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks.Divya Gopinath; Guy Katz; Corina S. Pasareanu; Clark Barrett
2017-09-28
Provably Minimally-Distorted Adversarial Examples.Nicholas Carlini; Guy Katz; Clark Barrett; David L. Dill
DR.SGX: Hardening SGX Enclaves against Cache Attacks with Data Location Randomization.Ferdinand Brasser; Srdjan Capkun; Alexandra Dmitrienko; Tommaso Frassetto; Kari Kostiainen; Ahmad-Reza Sadeghi
2017-09-26
Output Range Analysis for Deep Neural Networks.Souradeep Dutta; Susmit Jha; Sriram Sankaranarayanan; Ashish Tiwari
2017-09-25
Fooling Vision and Language Models Despite Localization and Attention Mechanism.Xiaojun Xu; Xinyun Chen; Chang Liu; Anna Rohrbach; Trevor Darrell; Dawn Song
2017-09-19
Verifying Properties of Binarized Deep Neural Networks.Nina Narodytska; Shiva Prasad Kasiviswanathan; Leonid Ryzhyk; Mooly Sagiv; Toby Walsh
2017-09-16
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification.Xiaoyu Cao; Neil Zhenqiang Gong
2017-09-13
A Learning and Masking Approach to Secure Learning.Linh Nguyen; Sky Wang; Arunesh Sinha
Models and Framework for Adversarial Attacks on Complex Adaptive Systems.Vahid Behzadan; Arslan Munir
2017-09-12
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples.Pin-Yu Chen; Yash Sharma; Huan Zhang; Jinfeng Yi; Cho-Jui Hsieh
2017-09-11
Art of singular vectors and universal adversarial perturbations.Valentin Khrulkov; Ivan Oseledets
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks.Thilo Strauss; Markus Hanselmann; Andrej Junginger; Holger Ulmer
2017-09-08
Towards Proving the Adversarial Robustness of Deep Neural Networks.Guy Katz; Clark Barrett; David L. Dill; Kyle Julian; Mykel J. Kochenderfer
DeepFense: Online Accelerated Defense Against Adversarial Deep Learning.Bita Darvish Rouhani; Mohammad Samragh; Mojan Javaheripi; Tara Javidi; Farinaz Koushanfar
2017-09-02
Security Evaluation of Pattern Classifiers under Attack.Battista Biggio; Giorgio Fumera; Fabio Roli
2017-08-31
On Security and Sparsity of Linear Classifiers for Adversarial Settings.Ambra Demontis; Paolo Russu; Battista Biggio; Giorgio Fumera; Fabio Roli
Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin.Yujin Kwon; Dohyun Kim; Yunmok Son; Eugene Vasserman; Yongdae Kim
2017-08-29
Practical Attacks Against Graph-based Clustering.Yizheng Chen; Yacin Nadji; Athanasios Kountouras; Fabian Monrose; Roberto Perdisci; Manos Antonakakis; Nikolaos Vasiloglou
2017-08-28
DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars.Yuchi Tian; Kexin Pei; Suman Jana; Baishakhi Ray
Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features.Liang Tong; Bo Li; Chen Hajaj; Chaowei Xiao; Ning Zhang; Yevgeniy Vorobeychik
2017-08-23
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid.Marco Melis; Ambra Demontis; Battista Biggio; Gavin Brown; Giorgio Fumera; Fabio Roli
2017-08-22
CNN Fixations: An unraveling approach to visualize the discriminative image regions.Konda Reddy Mopuri; Utsav Garg; R. Venkatesh Babu
2017-08-21
Evasion Attacks against Machine Learning at Test Time.Battista Biggio; Igino Corona; Davide Maiorca; Blaine Nelson; Nedim Srndic; Pavel Laskov; Giorgio Giacinto; Fabio Roli
2017-08-17
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples.Yinpeng Dong; Hang Su; Jun Zhu; Fan Bao
Learning Universal Adversarial Perturbations with Generative Models.Jamie Hayes; George Danezis
2017-08-14
Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API.Hossein Hosseini; Baicen Xiao; Andrew Clark; Radha Poovendran
2017-08-13
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models.Pin-Yu Chen; Huan Zhang; Yash Sharma; Jinfeng Yi; Cho-Jui Hsieh
2017-08-08
Cascade Adversarial Machine Learning Regularized with a Unified Embedding.Taesik Na; Jong Hwan Ko; Saibal Mukhopadhyay
2017-08-04
Adversarial Robustness: Softmax versus Openmax.Andras Rozsa; Manuel Günther; Terrance E. Boult
2017-08-01
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning.Andrew P. Norton; Yanjun Qi
2017-07-27
Robust Physical-World Attacks on Deep Learning Models.Kevin Eykholt; Ivan Evtimov; Earlence Fernandes; Bo Li; Amir Rahmati; Chaowei Xiao; Atul Prakash; Tadayoshi Kohno; Dawn Song
2017-07-24
Synthesizing Robust Adversarial Examples.Anish Athalye; Logan Engstrom; Andrew Ilyas; Kevin Kwok
2017-07-23
Adversarial Examples for Evaluating Reading Comprehension Systems.Robin Jia; Percy Liang
2017-07-21
Confidence estimation in Deep Neural networks via density modelling.Akshayvarun Subramanya; Suraj Srinivas; R. Venkatesh Babu
2017-07-20
Efficient Defenses Against Adversarial Attacks.Valentina Zantedeschi; Maria-Irina Nicolae; Ambrish Rawat
2017-07-19
Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers.Ishai Rosenberg; Asaf Shabtai; Lior Rokach; Yuval Elovici
2017-07-18
Fast Feature Fool: A data independent approach to universal adversarial perturbations.Konda Reddy Mopuri; Utsav Garg; R. Venkatesh Babu
APE-GAN: Adversarial Perturbation Elimination with GAN.Shiwei Shen; Guoqing Jin; Ke Gao; Yongdong Zhang
2017-07-17
Houdini: Fooling Deep Structured Prediction Models.Moustapha Cisse; Yossi Adi; Natalia Neverova; Joseph Keshet
2017-07-13
Foolbox: A Python toolbox to benchmark the robustness of machine learning models.Jonas Rauber; Wieland Brendel; Matthias Bethge
2017-07-11
NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles.Jiajun Lu; Hussein Sibai; Evan Fabry; David Forsyth
A Survey on Resilient Machine Learning.Atul Kumar; Sameep Mehta
2017-07-10
Towards Crafting Text Adversarial Samples.Suranjana Samanta; Sameep Mehta
2017-07-04
UPSET and ANGRI : Breaking High Performance Image Classifiers.Sayantan Sarkar; Ankan Bansal; Upal Mahbub; Rama Chellappa
2017-06-21
Comparing deep neural networks against humans: object recognition when the signal gets weaker.Robert Geirhos; David H. J. Janssen; Heiko H. Schütt; Jonas Rauber; Matthias Bethge; Felix A. Wichmann
2017-06-19
Towards Deep Learning Models Resistant to Adversarial Attacks.Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu
2017-06-14
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong.Warren He; James Wei; Xinyun Chen; Nicholas Carlini; Dawn Song
2017-06-13
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples.Yizhen Wang; Somesh Jha; Kamalika Chaudhuri
2017-06-06
Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation.Andrew Norton; Yanjun Qi
2017-06-02
Towards Robust Detection of Adversarial Examples.Tianyu Pang; Chao Du; Yinpeng Dong; Jun Zhu
2017-05-30
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples.Weilin Xu; David Evans; Yanjun Qi
2017-05-27
MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks.Chang Song; Hsin-Pai Cheng; Huanrui Yang; Sicheng Li; Chunpeng Wu; Qing Wu; Hai Li; Yiran Chen
2017-05-26
Classification regions of deep neural networks.Alhussein Fawzi; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard; Stefano Soatto
Robustness of classifiers to universal perturbations: a geometric perspective.Seyed-Mohsen Moosavi-Dezfooli; Alhussein Fawzi; Omar Fawzi; Pascal Frossard; Stefano Soatto
2017-05-25
MagNet: a Two-Pronged Defense against Adversarial Examples.Dongyu Meng; Hao Chen
2017-05-23
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation.Matthias Hein; Maksym Andriushchenko
Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction.Bin Liang; Hongcheng Li; Miaoqiang Su; Xirong Li; Wenchang Shi; Xiaofeng Wang
Black-Box Attacks against RNN based Malware Detection Algorithms.Weiwei Hu; Ying Tan
2017-05-22
Regularizing deep networks using efficient layerwise adversarial training.Swami Sankaranarayanan; Arpit Jain; Rama Chellappa; Ser Nam Lim
2017-05-21
Evading Classifiers by Morphing in the Dark.Hung Dang; Yue Huang; Ee-Chien Chang
2017-05-20
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods.Nicholas Carlini; David Wagner
2017-05-19
Ensemble Adversarial Training: Attacks and Defenses.Florian Tramèr; Alexey Kurakin; Nicolas Papernot; Ian Goodfellow; Dan Boneh; Patrick McDaniel
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense.Sailik Sengupta; Tathagata Chakraborti; Subbarao Kambhampati
2017-05-18
DeepXplore: Automated Whitebox Testing of Deep Learning Systems.Kexin Pei; Yinzhi Cao; Junfeng Yang; Suman Jana
Delving into adversarial attacks on deep policies.Jernej Kos; Dawn Song
2017-05-15
Extending Defensive Distillation.Nicolas Papernot; Patrick McDaniel
2017-05-09
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN.Hyeungill Lee; Sungyeob Han; Jungwoo Lee
2017-05-08
Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression.Nilaksh Das; Madhuri Shanbhogue; Shang-Tse Chen; Fred Hohman; Li Chen; Michael E. Kounavis; Duen Horng Chau
2017-05-05
Detecting Adversarial Samples Using Density Ratio Estimates.Lovedeep Gondara
2017-04-28
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection.Ambra Demontis; Marco Melis; Battista Biggio; Davide Maiorca; Daniel Arp; Konrad Rieck; Igino Corona; Giorgio Giacinto; Fabio Roli
Parseval Networks: Improving Robustness to Adversarial Examples.Moustapha Cisse; Piotr Bojanowski; Edouard Grave; Yann Dauphin; Nicolas Usunier
2017-04-26
Deep Text Classification Can be Fooled.Bin Liang; Hongcheng Li; Miaoqiang Su; Pan Bian; Xirong Li; Wenchang Shi
2017-04-19
Universal Adversarial Perturbations Against Semantic Image Segmentation.Jan Hendrik Metzen; Mummadi Chaithanya Kumar; Thomas Brox; Volker Fischer
2017-04-17
Adversarial and Clean Data Are Not Twins.Zhitao Gong; Wenlu Wang; Wei-Shinn Ku
2017-04-16
Google's Cloud Vision API Is Not Robust To Noise.Hossein Hosseini; Baicen Xiao; Radha Poovendran
2017-04-11
The Space of Transferable Adversarial Examples.Florian Tramèr; Nicolas Papernot; Ian Goodfellow; Dan Boneh; Patrick McDaniel
Interpretable Explanations of Black Boxes by Meaningful Perturbation. (1%)Ruth Fong; Andrea Vedaldi
2017-04-09
Enhancing Robustness of Machine Learning Systems via Data Transformations.Arjun Nitin Bhagoji; Daniel Cullina; Chawin Sitawarin; Prateek Mittal
2017-04-06
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks.Yi Han; Benjamin I. P. Rubinstein
2017-04-05
Comment on "Biologically inspired protection of deep networks from adversarial attacks".Wieland Brendel; Matthias Bethge
2017-04-04
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks.Weilin Xu; David Evans; Yanjun Qi
2017-03-31
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly.Jiajun Lu; Theerasit Issaranon; David Forsyth
2017-03-27
Adversarial Transformation Networks: Learning to Generate Adversarial Examples.Shumeet Baluja; Ian Fischer
Biologically inspired protection of deep networks from adversarial attacks.Aran Nayebi; Surya Ganguli
2017-03-26
Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos.Hossein Hosseini; Baicen Xiao; Radha Poovendran
2017-03-24
Adversarial Examples for Semantic Segmentation and Object Detection.Cihang Xie; Jianyu Wang; Zhishuai Zhang; Yuyin Zhou; Lingxi Xie; Alan Yuille
2017-03-23
Self corrective Perturbations for Semantic Segmentation and Classification.Swami Sankaranarayanan; Arpit Jain; Ser Nam Lim
2017-03-22
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains.Tegjyot Singh Sethi; Mehmed Kantardzic
2017-03-20
On the Limitation of Convolutional Neural Networks in Recognizing Negative Images.Hossein Hosseini; Baicen Xiao; Mayoore Jaiswal; Radha Poovendran
2017-03-16
Fraternal Twins: Unifying Attacks on Machine Learning and Digital Watermarking.Erwin Quiring; Daniel Arp; Konrad Rieck
2017-03-13
Blocking Transferability of Adversarial Examples in Black-Box Learning Systems.Hossein Hosseini; Yize Chen; Sreeram Kannan; Baosen Zhang; Radha Poovendran
2017-03-07
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents.Yen-Chen Lin; Zhang-Wei Hong; Yuan-Hong Liao; Meng-Li Shih; Ming-Yu Liu; Min Sun
2017-03-03
Adversarial Examples for Semantic Image Segmentation.Volker Fischer; Mummadi Chaithanya Kumar; Jan Hendrik Metzen; Thomas Brox
2017-03-02
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components.Tommaso Dreossi; Alexandre Donzé; Sanjit A. Seshia
2017-03-01
Detecting Adversarial Samples from Artifacts.Reuben Feinman; Ryan R. Curtin; Saurabh Shintre; Andrew B. Gardner
2017-02-26
Deceiving Google's Perspective API Built for Detecting Toxic Comments.Hossein Hosseini; Sreeram Kannan; Baosen Zhang; Radha Poovendran
2017-02-22
Robustness to Adversarial Examples through an Ensemble of Specialists.Mahdieh Abbasi; Christian Gagné
Adversarial examples for generative models.Jernej Kos; Ian Fischer; Dawn Song
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples.Ji Gao; Beilun Wang; Zeming Lin; Weilin Xu; Yanjun Qi
2017-02-21
On the (Statistical) Detection of Adversarial Examples.Kathrin Grosse; Praveen Manoharan; Nicolas Papernot; Michael Backes; Patrick McDaniel
2017-02-20
Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN.Weiwei Hu; Ying Tan
2017-02-14
On Detecting Adversarial Perturbations.Jan Hendrik Metzen; Tim Genewein; Volker Fischer; Bastian Bischoff
2017-02-07
Adversarial Attacks on Neural Network Policies.Sandy Huang; Nicolas Papernot; Ian Goodfellow; Yan Duan; Pieter Abbeel
2017-02-03
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks.Guy Katz; Clark Barrett; David Dill; Kyle Julian; Mykel Kochenderfer
2017-01-15
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks.Vahid Behzadan; Arslan Munir
2017-01-04
Dense Associative Memory is Robust to Adversarial Inputs.Dmitry Krotov; John J Hopfield
2016-12-22
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics.Xin Li; Fuxin Li
2016-12-19
Simple Black-Box Adversarial Perturbations for Deep Networks.Nina Narodytska; Shiva Prasad Kasiviswanathan
2016-12-05
Learning Adversary-Resistant Deep Neural Networks.Qinglong Wang; Wenbo Guo; Kaixuan Zhang; Alexander G. Ororbia II; Xinyu Xing; Xue Liu; C. Lee Giles
2016-12-01
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples.Beilun Wang; Ji Gao; Yanjun Qi
Adversarial Images for Variational Autoencoders.Pedro Tabacof; Julia Tavares; Eduardo Valle
Deep Variational Information Bottleneck.Alexander A. Alemi; Ian Fischer; Joshua V. Dillon; Kevin Murphy
2016-11-30
Towards Robust Deep Neural Networks with BANG.Andras Rozsa; Manuel Gunther; Terrance E. Boult
2016-11-18
LOTS about Attacking Deep Features.Andras Rozsa; Manuel Günther; Terrance E. Boult
2016-11-15
AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack.Igino Corona; Battista Biggio; Davide Maiorca
2016-11-11
Towards the Science of Security and Privacy in Machine Learning.Nicolas Papernot; Patrick McDaniel; Arunesh Sinha; Michael Wellman
2016-11-08
Delving into Transferable Adversarial Examples and Black-box Attacks.Yanpei Liu; Xinyun Chen; Chang Liu; Dawn Song
2016-11-03
Adversarial Machine Learning at Scale.Alexey Kurakin; Ian Goodfellow; Samy Bengio
2016-10-26
Universal adversarial perturbations.Seyed-Mohsen Moosavi-Dezfooli; Alhussein Fawzi; Omar Fawzi; Pascal Frossard
2016-10-21
Safety Verification of Deep Neural Networks.Xiaowei Huang; Marta Kwiatkowska; Sen Wang; Min Wu
2016-10-14
Are Accuracy and Robustness Correlated?Andras Rozsa; Manuel Günther; Terrance E. Boult
2016-10-13
Assessing Threat of Adversarial Examples on Deep Neural Networks.Abigail Graese; Andras Rozsa; Terrance E. Boult
2016-10-06
Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks.Qinglong Wang; Wenbo Guo; Alexander G. Ororbia II; Xinyu Xing; Lin Lin; C. Lee Giles; Xue Liu; Peng Liu; Gang Xiong
2016-10-04
Adversary Resistant Deep Neural Networks with an Application to Malware Detection.Qinglong Wang; Wenbo Guo; Kaixuan Zhang; Alexander G. Ororbia II; Xinyu Xing; C. Lee Giles; Xue Liu
2016-10-03
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library.Nicolas Papernot; Fartash Faghri; Nicholas Carlini; Ian Goodfellow; Reuben Feinman; Alexey Kurakin; Cihang Xie; Yash Sharma; Tom Brown; Aurko Roy; Alexander Matyasko; Vahid Behzadan; Karen Hambardzumyan; Zhishuai Zhang; Yi-Lin Juang; Zhi Li; Ryan Sheatsley; Abhibhav Garg; Jonathan Uesato; Willi Gierke; Yinpeng Dong; David Berthelot; Paul Hendricks; Jonas Rauber; Rujun Long; Patrick McDaniel
2016-09-06
Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems.Battista Biggio; Giorgio Fumera; Gian Luca Marcialis; Fabio Roli
2016-09-03
Randomized Prediction Games for Adversarial Machine Learning.Samuel Rota Bulò; Battista Biggio; Ignazio Pillai; Marcello Pelillo; Fabio Roli
2016-08-31
Robustness of classifiers: from adversarial to random noise.Alhussein Fawzi; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard
2016-08-27
A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples.Thomas Tanay; Lewis Griffin
2016-08-16
Towards Evaluating the Robustness of Neural Networks.Nicholas Carlini; David Wagner
2016-08-02
A study of the effect of JPG compression on adversarial images.Gintare Karolina Dziugaite; Zoubin Ghahramani; Daniel M. Roy
2016-08-01
Early Methods for Detecting Adversarial Images.Dan Hendrycks; Kevin Gimpel
2016-07-18
On the Effectiveness of Defensive Distillation.Nicolas Papernot; Patrick McDaniel
2016-07-14
Defensive Distillation is Not Robust to Adversarial Examples.Nicholas Carlini; David Wagner
2016-07-08
Adversarial examples in the physical world.Alexey Kurakin; Ian Goodfellow; Samy Bengio
2016-06-14
Adversarial Perturbations Against Deep Neural Networks for Malware Classification.Kathrin Grosse; Nicolas Papernot; Praveen Manoharan; Michael Backes; Patrick McDaniel
2016-05-23
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.Nicolas Papernot; Patrick McDaniel; Ian Goodfellow
Measuring Neural Net Robustness with Constraints.Osbert Bastani; Yani Ioannou; Leonidas Lampropoulos; Dimitrios Vytiniotis; Aditya Nori; Antonio Criminisi
2016-05-17
Are Facial Attributes Adversarially Robust?Andras Rozsa; Manuel Günther; Ethan M. Rudd; Terrance E. Boult
2016-05-05
Adversarial Diversity and Hard Positive Generation.Andras Rozsa; Ethan M. Rudd; Terrance E. Boult
2016-04-27
Crafting Adversarial Input Sequences for Recurrent Neural Networks.Nicolas Papernot; Patrick McDaniel; Ananthram Swami; Richard Harang
2016-04-14
Improving the Robustness of Deep Neural Networks via Stability Training.Stephan Zheng; Yang Song; Thomas Leung; Ian Goodfellow
2016-04-09
A General Retraining Framework for Scalable Adversarial Classification.Bo Li; Yevgeniy Vorobeychik; Xinyun Chen
2016-03-16
Suppressing the Unusual: towards Robust CNNs using Symmetric Activation Functions.Qiyang Zhao; Lewis D Griffin
2016-02-18
Breaking Symmetric Cryptosystems using Quantum Period Finding. (1%)Marc Kaplan; Gaëtan Leurent; Anthony Leverrier; María Naya-Plasencia
2016-02-08
Practical Black-Box Attacks against Machine Learning.Nicolas Papernot; Patrick McDaniel; Ian Goodfellow; Somesh Jha; Z. Berkay Celik; Ananthram Swami
2016-02-07
Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms.Tom Zahavy; Bingyi Kang; Alex Sivak; Jiashi Feng; Huan Xu; Shie Mannor
2016-01-26
Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization.Alexander G. Ororbia II; C. Lee Giles; Daniel Kifer
2015-11-23
The Limitations of Deep Learning in Adversarial Settings.Nicolas Papernot; Patrick McDaniel; Somesh Jha; Matt Fredrikson; Z. Berkay Celik; Ananthram Swami
2015-11-19
A Unified Gradient Regularization Family for Adversarial Examples.Chunchuan Lyu; Kaizhu Huang; Hai-Ning Liang
Manifold Regularized Deep Neural Networks using Adversarial Examples.Taehoon Lee; Minsuk Choi; Sungroh Yoon
Robust Convolutional Neural Networks under Adversarial Noise.Jonghoon Jin; Aysegul Dundar; Eugenio Culurciello
Foveation-based Mechanisms Alleviate Adversarial Examples.Yan Luo; Xavier Boix; Gemma Roig; Tomaso Poggio; Qi Zhao
Towards Open Set Deep Networks.Abhijit Bendale; Terrance Boult
2015-11-17
Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization.Uri Shaham; Yutaro Yamada; Sahand Negahban
2015-11-16
Adversarial Manipulation of Deep Representations.Sara Sabour; Yanshuai Cao; Fartash Faghri; David J. Fleet
2015-11-14
DeepFool: a simple and accurate method to fool deep neural networks.Seyed-Mohsen Moosavi-Dezfooli; Alhussein Fawzi; Pascal Frossard
2015-11-13
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks.Nicolas Papernot; Patrick McDaniel; Xi Wu; Somesh Jha; Ananthram Swami
2015-11-10
Learning with a Strong Adversary.Ruitong Huang; Bing Xu; Dale Schuurmans; Csaba Szepesvari
2015-10-18
Exploring the Space of Adversarial Images.Pedro Tabacof; Eduardo Valle
2015-10-14
Improving Back-Propagation by Adding an Adversarial Gradient.Arild Nøkland
2015-07-16
Deep Learning and Music Adversaries.Corey Kereliuk; Bob L. Sturm; Jan Larsen
2015-02-09
Analysis of classifiers' robustness to adversarial perturbations.Alhussein Fawzi; Omar Fawzi; Pascal Frossard
2014-12-19
Explaining and Harnessing Adversarial Examples.Ian J. Goodfellow; Jonathon Shlens; Christian Szegedy
2014-12-11
Towards Deep Neural Network Architectures Robust to Adversarial Examples.Shixiang Gu; Luca Rigazio
2014-12-05
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.Anh Nguyen; Jason Yosinski; Jeff Clune
2014-01-29
Security Evaluation of Support Vector Machines in Adversarial Environments.Battista Biggio; Igino Corona; Blaine Nelson; Benjamin I. P. Rubinstein; Davide Maiorca; Giorgio Fumera; Giorgio Giacinto; Fabio Roli
2013-12-20
Intriguing properties of neural networks.Christian Szegedy; Wojciech Zaremba; Ilya Sutskever; Joan Bruna; Dumitru Erhan; Ian Goodfellow; Rob Fergus