New Frontiers

in Adversarial Machine Learning

(AdvML Frontiers @ ICML 2022)

July 22nd, 2022

Room 343-344

Baltimore, MD, USA

About AdvML Frontiers 2022

Adversarial machine learning (AdvML), which aims to trick ML models with deceptive inputs, has been identified as a powerful means to improve various trustworthiness metrics (e.g., adversarial robustness, explainability, and fairness) and to advance versatile ML paradigms (e.g., supervised and self-supervised learning, and static and continual learning). In light of the proliferation of AdvML-inspired research, this workshop, New Frontiers in AdvML, aims to identify the challenges and limitations of current AdvML methods and to explore new perspectives and constructive views of AdvML across the full theory/algorithm/application stack.

News

  • The Best Paper Award goes to the accepted paper Model Transferability With Responsive Decision Subjects! Congratulations to Yang Liu, Yatong Chen, Zeyu Tang, and Kun Zhang!

Location

Room 343-344
ICML 2022 Workshop
Baltimore, MD, USA

Time

July 22, 2022
Friday

Keynote Speakers

Aleksander Madry

Massachusetts Institute of Technology, USA

Somesh Jha

University of Wisconsin, Madison, USA

Atul Prakash

University of Michigan, USA

Changliu Liu

Carnegie Mellon University, USA

Ajmal Mian

The University of Western Australia, Australia

Battista Biggio

University of Cagliari, Italy

Celia Cintas

IBM Research Africa, Kenya

Joel Dapello

Harvard University, USA

Schedule

Opening Remarks

Keynote (Virtual)
Ajmal Mian
Adversarial attacks on deep learning: Model explanation & transfer to the physical world

Keynote (Virtual)
Celia Cintas
A tale of adversarial attacks & out-of-distribution detection stories in the activation space

Oral 1 Model Transferability With Responsive Decision Subjects

Yang Liu; Yatong Chen; Zeyu Tang; Kun Zhang

Oral 2 What is a Good Metric to Study Generalization of Minimax Learners?

Asuman Ozdaglar; Sarath Pattathil; Jiawei Zhang; Kaiqing Zhang

Oral 3 Toward Efficient Robust Training against Union of Lp Threat Models

Gaurang Sriramanan; Maharshi Gor; Soheil Feizi

Oral 4 On the Interplay of Adversarial Robustness and Architecture Components: Patches, Convolution and Attention

Francesco Croce; Matthias Hein

Keynote (Virtual)
Battista Biggio
Machine Learning Security: Lessons Learned and Future Challenges

Keynote (In Person)
Joel Dapello
What Can the Primate Brain Teach Us about Robust Object Recognition?

Poster Session

(for all accepted papers)

Lunch + Poster

Keynote (Virtual)
Changliu Liu
New adversarial ML applications on safety-critical human-robot systems

Keynote (In Person)
Aleksander Madry
Topic: TBD

Blue Sky Idea 1 Overcoming Adversarial Attacks for Human-in-the-Loop Applications

Ryan McCoppin; Sean Kennedy; Platon Lukyanenko; Marla Kennedy

Blue Sky Idea 2 Ad Hoc Teamwork in the Presence of Adversaries

Ted Fujimoto; Samrat Chatterjee; Auroop R Ganguly

Blue Sky Idea 3 Learner Knowledge Levels in Adversarial Machine Learning

Sihui Dai; Prateek Mittal

Blue Sky Idea 4 Putting Adversarial Machine Learning to the Test: Towards AI Threat Modelling

Henrik Junklewitz; Ronan Hamon

Blue Sky Idea 5 Easy Batch Normalization

Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov

Blue Sky Idea 6 Adversarial Training Improve Joint Energy-Based Generative Modelling

Rostislav Korst; Arip Asadulaev

Blue Sky Idea 7 Multi-step domain adaptation by adversarial attack to HΔH-divergence

Arip Asadulaev; Alexander Panfilov; Andrey Filchenkov

Keynote (Virtual)
Atul Prakash
Robust physical perturbation attacks and defenses for deep learning visual classifiers

Keynote (Virtual)
Somesh Jha
Adversarial Robustness and Cryptography

Poster Session

(for all accepted papers)

Closing Remarks



Call For Papers

Submission Instructions

We welcome paper submissions in two separate tracks. The full-paper track accepts papers of up to 6 pages, with unlimited references and supplementary material. The Blue Sky Ideas track solicits papers of up to 2 pages. Papers should follow the ICML 2022 template. Since accepted papers are considered non-archival, concurrent submission is allowed, but it is the authors' responsibility to verify compliance with the other venue's policy. Based on the program committee's recommendations, accepted papers will be allocated either a spotlight talk or a poster presentation, and will be made publicly available on the workshop website.

About the Blue Sky Ideas Track: This special track targets the future of adversarial ML research. We encourage short submissions focusing on visionary ideas, long-term challenges, and new research opportunities. It serves as an incubator for innovative and provocative ideas, and is dedicated to providing a forum for presenting and exchanging far-sighted ideas without the constraints of result-oriented standards. We especially encourage ideas on novel, overlooked, and under-represented areas related to adversarial ML. Selected acceptances in this track will be invited to the Blue Sky presentation sessions.

Accepted papers will be publicly available on the workshop website, and a Best Paper Award will be selected.

Important Dates

Submission deadline: May 23, 2022, 23:59 AoE (extended to May 27, 2022, 23:59 AoE)
Notification to authors: June 13, 2022, AoE
Camera-ready deadline: July 8, 2022, 23:59 AoE
Please submit at the AdvML Frontiers 2022 @ ICML 2022 CMT website.

Contacts

Please contact Yihua Zhang (zhan1908@msu.edu) and Yuguang Yao (yaoyugua@msu.edu) for website and paper submission questions. Please contact Dongxiao Zhu (dzhu@wayne.edu), Kathrin Grosse (kathrin.grosse@unica.it), Pin-Yu Chen (pin-yu.chen@ibm.com), and Sijia Liu (liusiji5@msu.edu) for general workshop questions.


Topics

The topics for AdvML Frontiers 2022 include, but are not limited to:

  • Mathematical foundations of AdvML (e.g., geometries of learning, causality, information theory)
  • Adversarial ML metrics and their interconnections
  • Neurobiology-inspired adversarial ML foundations, and others beyond machine-centric design
  • New optimization methods for adversarial ML
  • Theoretical understanding of adversarial ML
  • Data foundations of adversarial ML (e.g., new datasets and new data-driven algorithms)
  • Scalable adversarial ML algorithms and implementations
  • Adversarial ML in the real world (e.g., physical attacks and lifelong defenses)
  • Provably robust machine learning methods and systems
  • Robustness certification and property verification techniques
  • Generative models and their applications in adversarial ML (e.g., Deepfakes)
  • Representation learning, knowledge discovery and model generalizability
  • Distributed adversarial ML
  • New adversarial ML applications
  • Explainable, transparent, or interpretable ML systems via adversarial learning techniques
  • Fairness and bias reduction algorithms in ML
  • Transfer learning, multi-agent adaptation, self-paced learning
  • Risk assessment and risk-aware decision making
  • Adversarial ML for good (e.g., privacy protection, education, healthcare, and scientific discovery)




Accepted Papers


Best Paper Award


53 Model Transferability With Responsive Decision Subjects [Paper]
(Yang Liu; Yatong Chen; Zeyu Tang; Kun Zhang)


Oral Acceptance


56 What is a Good Metric to Study Generalization of Minimax Learners? [Paper]
(Asuman Ozdaglar; Sarath Pattathil; Jiawei Zhang; Kaiqing Zhang)

65 Toward Efficient Robust Training against Union of Lp Threat Models [Paper]
(Gaurang Sriramanan; Maharshi Gor; Soheil Feizi)

87 On the interplay of adversarial robustness and architecture components: patches, convolution and attention [Paper]
(Francesco Croce; Matthias Hein)


Blue Sky Acceptance


9 Overcoming Adversarial Attacks for Human-in-the-Loop Applications [Paper]
(McCoppin, Ryan R; Kennedy, Sean M; Lukyanenko, Platon; Kennedy, Marla)

10 Ad Hoc Teamwork in the Presence of Adversaries [Paper]
(Fujimoto, Ted; Chatterjee, Samrat; Ganguly, Auroop)

34 Easy Batch Normalization [Paper]
(Asadulaev, Arip; Panfilov, Alexander; Filchenkov, Andrey)

35 Learner Knowledge Levels in Adversarial Machine Learning [Paper]
(Dai, Sihui; Mittal, Prateek)

38 Adversarial Training Improve Joint Energy-Based Generative Modelling [Paper]
(Korst, Rostislav; Asadulaev, Arip)

42 Putting Adversarial Machine Learning to the Test: Towards AI Threat Modelling [Paper]
(Junklewitz, Henrik; Hamon, Ronan)

47 Multi-step domain adaptation by adversarial attack to HΔH-divergence [Paper]
(Asadulaev, Arip; Panfilov, Alexander; Filchenkov, Andrey)


Poster Acceptance


4 Rethinking Multidimensional Discriminator Output for Generative Adversarial Networks [Paper]
(Dai, Mengyu; Hang, Haibin; Srivastava, Anuj)

5 Generative Models with Information-Theoretic Protection Against Membership Inference Attacks [Paper]
(Hassanzadeh, Parisa; Tillman, Robert E)

7 Availability Attacks on Graph Neural Networks [Paper]
(Tailor, Shyam A; Tairum Cruz, Miguel; Azevedo, Tiago; Lane, Nicholas; Maji, Partha)

12 Robust Models are less Over-Confident [Paper]
(Grabinski, Julia; Gavrikov, Paul; Keuper, Janis; Keuper, Margret)

13 Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO [Paper]
(Rando, Javier; Baumann, Thomas; Naimi, Nasib A; Mathys, Max)

14 Distributionally Robust counterfactual Explanations via an End-to-End Training Approach [Paper]
(Guo, Hangzhi; Jia, Feiran; Chen, Jinghui; Squicciarini, Anna Cinzia; Yadav, Amulya)

15 Meta-Learning Adversarial Bandits [Paper]
(Balcan, Maria-Florina; Harris, Keegan; Khodak, Mikhail; Wu, Steven)

17 BIGRoC: Boosting Image Generation via a Robust Classifier [Paper]
(Ganz, Roy; Elad, Michael)

18 Why adversarial training can hurt robust accuracy [Paper]
(Clarysse, Jacob; Hörrmann, Julia; Yang, Fanny)

20 Superclass Adversarial Attack [Paper]
(Kumano, Soichiro; Kera, Hiroshi; Yamasaki, Toshihiko)

21 Individually Fair Learning with One-Sided Feedback [Paper]
(Bechavod, Yahav; Roth, Aaron)

24 Multi-Task Federated Reinforcement Learning with Adversaries [Paper]
(Anwar, Aqeel; Wan, Zishen; Raychowdhury, Arijit)

30 Adversarial Cheap Talk [Paper]
(Lu, Christopher; Willi, Timon; Letcher, Alistair HP; Foerster, Jakob)

31 Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning [Paper]
(Wen, Yuxin; Geiping, Jonas A.; Fowl, Liam; Souri, Hossein; Chellappa, Rama; Goldblum, Micah; Goldstein, Tom)

32 Synthetic Dataset Generation for Adversarial Machine Learning Research [Paper]
(Liu, Xiruo; Singh, Shibani; Cornelius, Cory; Busho, Colin; Tan, Mike; Paul, Anindya; Martin, Jason)

39 Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools [Paper]
(Brown, Davis; Kvinge, Henry)

43 Do Perceptually Aligned Gradients Imply Adversarial Robustness? [Paper]
(Ganz, Roy; Kawar, Bahjat; Elad, Michael)

44 Make Some Noise: Reliable and Efficient Single-Step Adversarial Training [Paper]
(de Jorge Aranda, Pau; Bibi, Adel; Volpi, Riccardo; Sanyal, Amartya; Torr, Philip; Rogez, Gregory; Dokania, Puneet)

46 Catastrophic overfitting is a bug but also a feature [Paper]
(Ortiz-Jimenez, Guillermo; de Jorge Aranda, Pau; Sanyal, Amartya; Bibi, Adel; Dokania, Puneet; Frossard, Pascal; Rogez, Gregory; Torr, Philip)

49 Fair Universal Representations using Adversarial Models [Paper]
(Kairouz, Peter; Liao, Jiachun; Huang, Chong; Welfert, Monica; Sankar, Lalitha)

50 Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch [Paper]
(Souri, Hossein; Fowl, Liam; Chellappa, Rama; Goldblum, Micah; Goldstein, Tom)

51 Early Layers Are More Important For Adversarial Robustness [Paper]
(Bakiskan, Can; Cekic, Metehan; Madhow, Upamanyu)

54 Provably Adversarially Robust Detection of Out-of-Distribution Data (Almost) for Free [Paper]
(Meinke, Alexander; Bitterwolf, Julian; Hein, Matthias)

55 Attacking Adversarial Defences by Smoothing the Loss Landscape [Paper]
(Eustratiadis, Panagiotis; Gouk, Henry; Li, Da; Hospedales, Timothy)

57 Sound Randomized Smoothing in Floating-Point Arithmetics [Paper]
(Voráček, Václav; Hein, Matthias)

58 Robustness in deep learning: The width (good), the depth (bad), and the initialization (ugly) [Paper]
(Zhu, Zhenyu; Liu, Fanghui; Chrysos, Grigorios; Cevher, Volkan)

59 Riemannian data-dependent randomized smoothing for neural network certification [Paper]
(Labarbarie, Pol; Hajri, Hatem; Arnaudon, Marc)

61 Adversarial robustness of β-VAE through the lens of local geometry [Paper]
(Khan, Asif; Storkey, Amos)

64 "Why do so?" - A practical perspective on adversarial machine learning [Paper]
(Grosse, Kathrin; Bieringer, Lukas; Besold, Tarek R.; Biggio, Battista; Krombholz, Katharina)

66 Adversarial Estimation of Riesz Representers [Paper]
(Chernozhukov, Victor; Newey, Whitney; Singh, Rahul; Syrgkanis, Vasilis)

67 Saliency Guided Adversarial Training for Tackling Generalization Gap with Applications to Medical Imaging Classification System [Paper]
(Li, Xin; Qiang, Yao; Li, Chengyin; Liu, Sijia; Zhu, Dongxiao)

68 Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models [Paper]
(Mitchell, Eric; Henderson, Peter; Manning, Christopher D; Jurafsky, Dan; Finn, Chelsea)

69 Illusionary Attacks on Sequential Decision Makers and Countermeasures [Paper]
(Franzmeyer, Tim; Henriques, Joao F; Foerster, Jakob; Torr, Philip; Bibi, Adel; Schroeder de Witt, Christian)

70 Can we achieve robustness from data alone? [Paper]
(Kempe, Julia; Tsilivis, Nikolaos; Su, Jingtong)

75 Gradient-Based Adversarial and Out-of-Distribution Detection [Paper]
(Lee, Jinsol; Prabhushankar, Mohit; AlRegib, Ghassan)

76 Investigating Why Contrastive Learning Benefits Robustness against Label Noise [Paper]
(Xue, Yihao; Whitecross, Kyle; Mirzasoleiman, Baharan)

77 Layerwise Hebbian/anti-Hebbian (HaH) Learning In Deep Networks: A Neuro-inspired Approach To Robustness [Paper]
(Cekic, Metehan; Bakiskan, Can; Madhow, Upamanyu)

81 Efficient and Effective Augmentation Strategy for Adversarial Training [Paper]
(Addepalli, Sravanti; Jain, Samyak; Radhakrishnan, Venkatesh Babu)

82 Robust Empirical Risk Minimization with Tolerance [Paper]
(Bhattacharjee, Robi; Hopkins, Max; Kumar, Akash; Yu, Hantao; Chaudhuri, Kamalika)

84 Towards Out-of-Distribution Adversarial Robustness [Paper]
(Ibrahim, Adam; Guille-Escuret, Charles; Mitliagkas, Ioannis; Rish, Irina; Krueger, David; bashivan, pouya)

85 Reducing Exploitability with Population Based Training [Paper]
(Czempin, Pavel; Gleave, Adam)

91 RUSH: Robust Contrastive Learning via Randomized Smoothing [Paper]
(Pang, Yijiang; Liu, Boyang; Zhou, Jiayu)

AdvML Frontiers 2022 Venue


ICML 2022 Workshop (Room 343-344)
Physical Conference

AdvML Frontiers 2022 will be held in person, with possible online components, co-located with ICML 2022 in the beautiful city of Baltimore, MD, USA.

Organizers

Sijia Liu

Michigan State University, USA

Pin-Yu Chen

IBM Research, USA

Dongxiao Zhu

Wayne State University, USA

Eric Wong

MIT, USA

Kathrin Grosse

University of Cagliari, Italy

Hima Lakkaraju

Harvard University, USA

Sanmi Koyejo

UIUC & Google, USA



Program Committee Members

Xue Lin (Northeastern University)
Bhavya Kailkhura (Lawrence Livermore National Laboratory)
Parikshit Ram (IBM Research)
Ren Wang (University of Michigan)
Pranay Sharma (Carnegie Mellon University)
Prashant Khanduri (University of Minnesota)
Rhongho Jang (Wayne State University)
Yao Qiang (Wayne State University)
Luca Demetrio (University of Cagliari)
Maura Pintor (University of Cagliari)
Antonio E. Cina (Università Ca' Foscari Venezia)
Eugene Bagdasaryan (Cornell University)
Tianlong Chen (The University of Texas at Austin)
Chia-Yi Hsu (National Yang Ming Chiao Tung University)
Chulin Xie (University of Illinois Urbana-Champaign)
Akshay Mehra (Tulane University)
Jiayu Zhou (Michigan State University)
Jiefeng Chen (University of Wisconsin-Madison)
Shashank Srikant (Massachusetts Institute of Technology)
Maksym Andriushchenko (EPFL)
Pratyush Maini (Carnegie Mellon University)
Kristen Marie Johnson (Michigan State University)
Mengmei Ye (IBM Research)
Xin Li (Bosch AI)
Yigitcan Kaya (University of Maryland College Park)
Gaurang Sriramanan (University of Maryland College Park)
Qiucheng Wu (University of California, Santa Barbara)
Guanhua Zhang (University of California, Santa Barbara)
Bairu Hou (University of California, Santa Barbara)
Yize Li (Northeastern University)
Yifan Gong (Northeastern University)
Aochuan Chen (Michigan State University)
Soumyadeep Pal (University of Alberta)
Chao-Han Huck Yang
Yiming Li (Tsinghua University)
Andrew Geng (IBM Research)
Lei Hsiung (National Tsing Hua University)
Rulin Shao (Carnegie Mellon University)
Kaiyuan Zhang (Purdue University)
Zhiyuan He (The Chinese University of Hong Kong)
Ruofei Shen (Meta)
David Stutz (DeepMind)
Marius Mosbach (Saarland University, Saarland Informatics Campus)
Alexander Robey (University of Pennsylvania)


Workshop Activity Student Chairs