Speaker Details

Aleksander Madry

Massachusetts Institute of Technology, USA

Bio:
TBD

Keynote Title:
TBD

Keynote Abstract:
TBD

[Talk Video]

Somesh Jha

University of Wisconsin, Madison, USA

Bio:
Somesh Jha received his B.Tech in Electrical Engineering from the Indian Institute of Technology, New Delhi. He received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing Award winner). Currently, Somesh Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin-Madison. His work focuses on the analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and analyzing malicious code. Recently, he has focused his interests on privacy and adversarial ML (AML). Somesh Jha has published several articles in highly refereed conferences and prominent journals, and he has won numerous best-paper and distinguished-paper awards. Prof. Jha is a fellow of the ACM, IEEE, and AAAS.

Keynote Title:
Adversarial Robustness and Cryptography

Keynote Abstract:
Over recent years, devising classification algorithms that are robust to adversarial perturbations has emerged as a challenging problem. In particular, deep neural nets (DNNs) seem to be susceptible to small, imperceptible changes to test instances. So far, however, the line of work on provable robustness has focused on information-theoretic robustness, ruling out even the existence of any adversarial examples. In this work, we study whether there is hope to benefit from the algorithmic nature of an attacker that searches for adversarial examples, and we ask whether there is any learning task for which it is possible to design classifiers that are robust only against polynomial-time adversaries. Indeed, numerous cryptographic tasks (e.g., encryption of long messages) can only be secure against computationally bounded adversaries and are impossible to secure against computationally unbounded attackers. It is therefore natural to ask whether the same strategy could help robust learning. We show that the computational limitations of attackers can indeed be useful in robust learning by demonstrating the possibility of a classifier for some learning task for which computationally bounded and information-theoretic adversaries of bounded perturbations have very different power: while computationally unbounded adversaries can attack successfully and find adversarial examples with small perturbations, polynomial-time adversaries are unable to do so unless they can break standard cryptographic hardness assumptions.
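
The contrast between the two threat models can be sketched informally as follows (an illustrative formalization for the reader, not taken from the talk): for a classifier h, a labeled example (x, y), and a perturbation budget epsilon,

% Information-theoretic robustness: no adversarial example exists at all.
\forall x' :\; \|x' - x\| \le \epsilon \;\Rightarrow\; h(x') = y

% Computational robustness: adversarial examples may exist, but no
% polynomial-time adversary A finds one except with negligible probability.
\Pr\big[\, \|A(x) - x\| \le \epsilon \;\wedge\; h(A(x)) \ne y \,\big] \le \mathrm{negl}(n)

The result described in the abstract separates these two notions for some learning task under standard cryptographic hardness assumptions.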

[Talk Video]

Atul Prakash

University of Michigan, USA

Bio:
Atul Prakash is a Professor in Computer Science and Engineering at the University of Michigan, Ann Arbor, with research interests in computer security and privacy. He received a Bachelor of Technology in Electrical Engineering from IIT Delhi, India, and a Ph.D. in Computer Science from the University of California, Berkeley. His recent research includes security analysis of emerging IoT software stacks, mobile payment infrastructure in India, and the vulnerability of deep learning classifiers to physical perturbations. At the University of Michigan, he has served as Director of the Software Systems Lab, led the creation of the new Data Science undergraduate program, and is currently serving as the Associate Chair of the CSE Division.

Keynote Title:
Robust physical perturbation attacks and defenses for deep learning visual classifiers

Keynote Abstract:
Deep neural networks are increasingly used in safety-critical settings such as autonomous driving. Our prior work at CVPR 2018 showed that robust physical adversarial examples can be crafted that fool state-of-the-art vision classifiers for domains such as traffic signs. Unfortunately, crafting those attacks still required manual selection of appropriate masks and white-box access to the model being tested for robustness. We describe a recently developed system called GRAPHITE that can be a useful aid in automatically generating candidates for robust physical perturbation attacks. GRAPHITE can generate attacks not only in white-box but also in black-box hard-label scenarios. In hard-label black-box scenarios, GRAPHITE is able to find successful small-patch attacks with an average of only 566 queries for 92.2% of victim-target pairs on the GTSRB dataset, about one to three orders of magnitude fewer queries than previously reported hard-label black-box attacks on similar datasets. We discuss the potential implications of GRAPHITE as a helpful tool for developing and evaluating defenses against robust physical perturbation attacks. For instance, GRAPHITE is also able to find successful attacks, using perturbations that modify small areas of the input image, against PatchGuard, a recently proposed defense against patch-based attacks.
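
For readers unfamiliar with the hard-label black-box setting mentioned above, the toy sketch below (a generic random-search loop, not GRAPHITE itself; all names and parameters are illustrative) shows the query model such attacks operate under: the attacker only observes the predicted label for each query and must count every query against its budget.

import numpy as np

def hard_label_query(model_predict, x):
    # The only feedback available in this setting: the model's top-1 label.
    return model_predict(x)

def toy_hard_label_attack(model_predict, x_victim, x_target_init, target_label,
                          steps=500, step_size=0.05, sigma=0.05, seed=0):
    # Toy targeted hard-label attack: start from an image already classified as
    # the target class and shrink its distance to the victim image, accepting a
    # proposal only if the hard label stays the target class.
    rng = np.random.default_rng(seed)
    x_adv = x_target_init.copy()
    queries = 0
    for _ in range(steps):
        proposal = x_adv + step_size * (x_victim - x_adv) \
                   + sigma * rng.standard_normal(x_adv.shape)
        proposal = np.clip(proposal, 0.0, 1.0)
        queries += 1
        if hard_label_query(model_predict, proposal) == target_label:
            x_adv = proposal
    return x_adv, queries

Query efficiency is the central difficulty in this setting, which is why the query counts reported above matter.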

[Talk Video]

Changliu Liu

Carnegie Mellon University, USA

Bio:
Dr. Changliu Liu is an assistant professor in the Robotics Institute, School of Computer Science, Carnegie Mellon University (CMU), where she leads the Intelligent Control Lab. Prior to joining CMU, Dr. Liu was a postdoc at the Stanford Intelligent Systems Laboratory. She received her Ph.D. from the University of California at Berkeley and her bachelor's degrees from Tsinghua University. Her research interests lie in the design and verification of intelligent systems with applications to manufacturing and transportation. She published the book “Designing Robot Behavior in Human-Robot Interactions” with CRC Press in 2019. She initiated and has been organizing the international Verification of Neural Networks Competition (VNN-COMP) since 2020. Her work has been recognized by an NSF CAREER Award, an Amazon Research Award, and a Ford URP Award.

Keynote Title:
New adversarial ML applications on safety-critical human-robot systems

Keynote Abstract:
In this talk, I will discuss several applications of adversarial ML to enhance the safety of human-robot systems. All of these applications fall under a general framework of minimax optimization over neural networks, where the inner loop computes the worst-case performance and the outer loop optimizes the NN parameters to improve that worst-case performance. We have applied this approach to develop robust models for human prediction, to learn safety certificates for robot control, and to jointly synthesize the robot policy and the safety certificate.
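
A minimal sketch of the minimax structure described above, assuming a PyTorch model, a differentiable loss, and a simple norm-bounded perturbation set (illustrative only; in the applications mentioned in the abstract, the inner maximization would range over the relevant worst case, such as human behavior or safety-certificate violations, rather than pixel perturbations):

import torch

def minimax_step(model, loss_fn, optimizer, x, y,
                 eps=0.1, inner_steps=10, inner_lr=0.02):
    # Inner loop: gradient ascent over a bounded perturbation delta to
    # approximate the worst-case loss.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(inner_steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += inner_lr * grad.sign()
            delta.clamp_(-eps, eps)          # stay inside the perturbation budget
    # Outer loop: update the NN parameters against the approximate worst case.
    optimizer.zero_grad()
    worst_case_loss = loss_fn(model(x + delta.detach()), y)
    worst_case_loss.backward()
    optimizer.step()
    return worst_case_loss.item()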

[Talk Video]

Ajmal Mian

The University of Western Australia, Australia

Bio:
TBD

Keynote Title:
Adversarial attacks on deep learning: Model explanation & transfer to the physical world

Keynote Abstract:
Despite their remarkable success, deep models are brittle and can be easily manipulated by corrupting data with carefully crafted perturbations that are largely imperceptible to human observers. In this talk, I will give a brief background on the three stages of attacks on deep models, including adversarial perturbations, data poisoning, and Trojan models. I will then discuss universal perturbations, including our work on the detection and removal of such perturbations. Next, I will present the Label Universal Targeted Attack (LUTA), which is image-agnostic but optimized for a particular input class and output class. LUTA has interesting properties beyond model fooling and can be extended to explain deep models and to perform image generation/manipulation. Universal perturbations, being image-agnostic, fingerprint the deep model itself; we show that they can be used to detect Trojaned models. In the last part of my talk, I will present our work on transferring adversarial attacks to the physical world, simulated using graphics. I will discuss attacks on action recognition where the perturbations are computed on human skeletons and then transferred to videos. Finally, I will present our work on 3D adversarial textures computed using neural rendering to fool models in a pure black-box setting where both the target model and the training data are unknown. I will conclude my talk with some interesting insights into adversarial machine learning.
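
To make the notion of an image-agnostic (universal) perturbation concrete, here is a heavily simplified PyTorch-style sketch that optimizes a single perturbation over a set of images so that it degrades the classifier on most of them (an illustration of the general idea only, not the speaker's method; names and hyperparameters are made up):

import torch

def universal_perturbation(model, loss_fn, images, labels,
                           eps=0.04, epochs=5, lr=0.01):
    # Learn one perturbation v, kept inside an L-infinity ball of radius eps,
    # that raises the classification loss across all given images.
    v = torch.zeros_like(images[0], requires_grad=True)
    opt = torch.optim.Adam([v], lr=lr)
    for _ in range(epochs):
        for x, y in zip(images, labels):
            opt.zero_grad()
            # Maximize the loss (minimize its negative) on the perturbed image.
            loss = -loss_fn(model((x + v).unsqueeze(0)), y.unsqueeze(0))
            loss.backward()
            opt.step()
            with torch.no_grad():
                v.clamp_(-eps, eps)          # keep the perturbation small
    return v.detach()

Because the same v must work across many images, it ends up encoding properties of the model rather than of any single input, which is the sense in which universal perturbations "fingerprint" the model.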

[Talk Video]

Battista Biggio

University of Cagliari, Italy

Bio:
Battista Biggio (MSc 2006, PhD 2010) is an Assistant Professor at the University of Cagliari, Italy, and co-founder of Pluribus One (pluribus-one.it). His research interests include machine learning and cybersecurity. He has provided pioneering contributions in the area of ML security, demonstrating the first gradient-based evasion and poisoning attacks and how to mitigate them, and playing a leading role in the establishment and advancement of this research field. He has managed six research projects and has served as a PC member for the most prestigious conferences and journals in the areas of ML and computer security (ICML, NeurIPS, ICLR, IEEE SP, USENIX Security). He chaired the IAPR TC on Statistical Pattern Recognition Techniques (2016-2020), co-organized S+SSPR, AISec, and DLS, and has served as Associate Editor for IEEE TNNLS, IEEE CIM, and Pattern Recognition. He is a senior member of the IEEE and ACM, and a member of the IAPR and ELLIS.

Keynote Title:
Machine Learning Security: Lessons Learned and Future Challenges

Keynote Abstract:
In this talk, I will briefly review some recent advancements in the area of machine learning security, with a critical focus on the main factors that are hindering progress in this field. These include the lack of an underlying, systematic, and scalable framework for properly evaluating machine-learning models under adversarial and out-of-distribution scenarios, along with the lack of suitable tools for easing their debugging. Such tools may help unveil flaws in the evaluation process, as well as the presence of potential dataset biases and spurious features learned during training. I will finally report concrete examples of what our laboratory has recently been working on to enable a first step towards overcoming these limitations, in the context of Android and Windows malware detection.

[Talk Video]

Celia Cintas

IBM Research Africa, Kenya

Bio:
Celia Cintas is a Research Scientist at IBM Research Africa - Nairobi, where she is a member of the AI Science team at the Kenya Lab. Her current research focuses on improving ML techniques to address challenges in global health and on exploring subset scanning for anomalous pattern detection under generative models. Previously, she was a grantee of the National Scientific and Technical Research Council (CONICET), working on deep learning techniques for population studies at LCI-UNS and IPCSH-CONICET as part of the Consortium for Analysis of the Diversity and Evolution of Latin America (CANDELA). She holds a Ph.D. in Computer Science from Universidad del Sur (Argentina). https://celiacintas.github.io/about/

Keynote Title:
A tale of adversarial attacks & out-of-distribution detection stories in the activation space

Keynote Abstract:
Most deep learning models assume ideal conditions and rely on the assumption that test/production data is drawn from the same distribution as the training data. However, this assumption is not satisfied in most real-world applications. Test data can differ from the training data due to adversarial perturbations, new classes, generated content, noise, or other distribution shifts. These shifts in the input data can cause unknown classes, i.e., classes that do not appear during training, to be classified as known classes with high confidence. On the other hand, adversarial perturbations in the input data can cause a sample to be incorrectly classified. In this talk, we will discuss approaches based on group and individual subset scanning methods from the anomalous pattern detection domain, and how they can be applied to off-the-shelf DL models.
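
As a rough illustration of the activation-space idea (a simplified sketch under our own assumptions, not the exact method from the talk), one can compute empirical p-values for a test sample's activations against activations collected from clean "background" data, and then score how anomalous the most extreme subset of nodes is with a Berk-Jones-style scan statistic:

import numpy as np

def empirical_pvalues(test_act, background_act):
    # p-value per node: fraction of background activations at least as large
    # as the test activation (one-sided convention: larger = more anomalous).
    # background_act has shape (n_background, n_nodes); test_act has shape (n_nodes,).
    ge = (background_act >= test_act[None, :]).sum(axis=0)
    return (ge + 1) / (background_act.shape[0] + 1)

def berk_jones_score(pvalues, alpha_grid=None):
    # Scan over significance thresholds: how many p-values fall below alpha
    # versus how many would be expected under the null (uniform p-values)?
    alpha_grid = np.linspace(0.01, 0.5, 50) if alpha_grid is None else alpha_grid
    n = len(pvalues)
    best = 0.0
    for alpha in alpha_grid:
        p_hat = np.sum(pvalues <= alpha) / n
        if p_hat <= alpha:                   # no excess of small p-values
            continue
        kl = p_hat * np.log(p_hat / alpha)
        if p_hat < 1.0:
            kl += (1 - p_hat) * np.log((1 - p_hat) / (1 - alpha))
        best = max(best, n * kl)
    return best                              # higher = more anomalous activations

Samples whose score exceeds a threshold calibrated on clean data would then be flagged as potentially adversarial or out-of-distribution.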

[Talk Video]

Joel Dapello

Harvard University, USA

Bio:
Joel Dapello is a PhD candidate in Applied Math at the Harvard School of Engineering and Applied Sciences, currently working with Jim DiCarlo and David Cox at the intersection of machine learning and primate cognitive neuroscience. Prior to this, Joel was the founding engineer at BioBright and received his bachelor's degree in neuroscience from Hampshire College. Joel’s interests center on neural computation and information processing in both biological and artificial neural systems.

Keynote Title:
What Can the Primate Brain Teach Us about Robust Object Recognition?

Keynote Abstract:
Many of the current state-of-the-art object recognition models, such as convolutional neural networks (CNNs), are loosely inspired by the primate visual system. However, there still exist many discrepancies between these models and primates, both in their internal processing mechanisms and in their behavior on object recognition tasks. Of particular concern, many current models suffer from a remarkable sensitivity to adversarial attacks, a phenomenon that does not appear to plague the primate visual system. Recent work has demonstrated that adding more biologically inspired components, or otherwise driving models to use representations more similar to those in the primate brain, is one way to improve their robustness to adversarial attacks. In this talk, I will review some of these insights and successes, such as the relationship between the primary visual cortex and robustness, discuss recent findings about how neural recordings from later regions of the primate ventral stream might help align model and human behavior, and conclude with recent neurophysiological results questioning exactly how robust representations in the primate brain truly are.

[Talk Video]