AAAI 2024 Tutorial

Presented by: Ghassan AlRegib and Mohit Prabhushankar
Georgia Institute of Technology

https://alregib.ece.gatech.edu 

alregib@gatech.edu, mohit.p@gatech.edu    

Duration: Half Day (3 hours, 30 mins)

Title: Formalizing Robustness in Neural Networks: Explainability, Uncertainty, and Intervenability

Goal

Deep learning has shown a high degree of applicability in fields that provide access to big data. As deep learning-based AI systems transition from academia to everyday life, their vulnerabilities must be understood before the public can accept them. A key vulnerability is the lack of knowledge regarding a neural network’s operational limits, which manifests as a lack of robustness. In the past, simple measures of robustness such as input noise analysis and out-of-distribution recognition have served the research community well. However, large vision models with billions of parameters are vulnerable to other engineered and adversarial perturbations, and their prediction probabilities are often uncalibrated. Recently, prompt-based architectures that accept limited inputs from users at inference have gained prominence. While these prompts act as interventions, their goal is not to understand causality or measure robustness, but to extract limited knowledge. This timely and relevant tutorial emphasizes the robustness of neural networks in terms of human-centric measures, which aids their widespread applicability.

The goal of the tutorial is threefold: 1) Decompose modern large-scale neural network robustness into three manageable and human-centric measures, 2) Probabilistically define post-hoc explainability, uncertainty, and intervenability measures, and 3) Compute the three measures as a function of the input, network, and output, with a focus on real-life applications in the medical and seismic domains. The detailed subtopics are provided in the Outline section below.

Speakers

Ghassan AlRegib is currently the John and McCarty Chair Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received the ECE Outstanding Junior Faculty Member Award in 2008 and the 2017 Denning Faculty Award for Global Engagement. His research group, the Omni Lab for Intelligent Visual Engineering and Science (OLIVES), works on research projects related to machine learning, image and video processing, image and video understanding, seismic interpretation, machine learning for ophthalmology, and video analytics. He has participated in several service activities within the IEEE, including serving as the Technical Program co-Chair for ICIP 2020. He is an IEEE Fellow.

Mohit Prabhushankar received his Ph.D. degree in electrical engineering from the Georgia Institute of Technology (Georgia Tech), Atlanta, Georgia, USA, in 2021. He is currently a Postdoctoral Research Fellow in the School of Electrical and Computer Engineering at Georgia Tech in the Omni Lab for Intelligent Visual Engineering and Science (OLIVES). He works in the fields of image processing, machine learning, active learning, healthcare, and robust and explainable AI. He is the recipient of the Best Paper Award at ICIP 2019 and the Top Viewed Special Session Paper Award at ICIP 2020. He is also the recipient of the ECE Outstanding Graduate Teaching Award, the CSIP Research Award, and the Roger P. Webb ECE Graduate Research Excellence Award, all in 2022.

Program Schedule and Content

AAAI Website for the tutorial: https://aaai.org/aaai-conference/aaai-24-tutorial-and-lab-list/#th18
Date and Time: Wednesday, February 21, 2:00 – 6:00 PM
Venue: Vancouver Convention Centre – West Building | Vancouver, BC, Canada

Target Audience

This tutorial is intended for graduate students, researchers and engineers working in different topics related to visual information processing and robust machine learning.

Outline

The tutorial is composed of four major parts. Part 1 discusses some recent surprising results regarding training neural networks with out-of-distribution (OOD) data, the conclusion of which is that it is not always clear when and how to use OOD data during training. This motivates the need for formal and human-centric measures of robustness at inference. Part 2 introduces the basic mathematical framework for each of Explainability, Uncertainty, and Intervenability. We consider the relationships between them and show that interventions on the variance decomposition of predictive uncertainty serve as an evaluation measure for explainability. Part 3 illustrates the relationship between Explainability and Uncertainty in real-world applications of retinal and seismic image analysis. Part 4 discusses intervenability as a function of uncertainty for the case of prompting and non-causal interventions. Specifically, interventional uncertainty analysis in seismic domains is discussed.
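As a concrete illustration of the decomposition of predictive uncertainty discussed in Part 2, the sketch below (our own, not code from the tutorial; function and variable names are illustrative) splits the predictive uncertainty of a stochastic ensemble, e.g. Monte Carlo dropout passes, into aleatoric and epistemic components using the standard entropy-based decomposition.

```python
import numpy as np

def decompose_uncertainty(probs):
    """Entropy-based decomposition of predictive uncertainty.

    probs: array of shape (T, N, C) -- softmax outputs of T stochastic
    forward passes (e.g., MC dropout) over N samples with C classes.
    Returns (total, aleatoric, epistemic), each of shape (N,).
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                   # (N, C)
    # Total: entropy of the averaged predictive distribution
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)
    # Aleatoric: average entropy of the individual stochastic passes
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=0)
    # Epistemic: the gap, i.e., mutual information between the
    # prediction and the model draw
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Toy usage: 8 stochastic passes over 4 samples with 5 classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=(8, 4))
total, aleatoric, epistemic = decompose_uncertainty(probs)
```

Because entropy is concave, the total term is never below the aleatoric term, so the epistemic component is non-negative; when all stochastic passes agree, it collapses to zero.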

Neural networks provide generalizable and task-independent representation spaces that have garnered widespread applicability in image understanding applications. The complicated semantics of feature interactions within image data have been broken down into sets of non-linear functions, convolution parameters, attention mechanisms, and multi-modal inputs, among others. The complexity of these operations has introduced multiple vulnerabilities within neural network architectures, including adversarial samples, confidence calibration issues, and catastrophic forgetting. Given that AI promises to herald the fourth industrial revolution, it is critical to understand and overcome these vulnerabilities. Doing so requires creating robust neural networks to drive the AI systems. Defining robustness, however, is not trivial. Simple measurements of invariance to noise and perturbations are not applicable in real-life settings. In this tutorial, we provide a human-centric approach to understanding robustness in the neural networks that allow AI to function in society. Doing so allows us to state the following: 1) All neural networks must be equipped to provide contextual and relevant explanations to humans, 2) Neural networks must know when and what they don’t know, 3) Neural networks must allow humans to intervene at any stage of their decision-making process. These three statements call for robust neural networks to be explainable, equipped with uncertainty quantification, and intervenable.

We provide a probabilistic post-hoc analysis of explainability, uncertainty, and intervenability. Post-hoc implies that a decision has already been made. A simple example of post-hoc contextual and relevant explanations is shown in Fig. 1. Given a well-trained neural network, regular explanations answer the question ‘Why spoonbill?’ by highlighting the body of the bird. However, a more relevant question can be ‘Why spoonbill, rather than a flamingo?’. Such a question requires the questioner to be aware of the features of a flamingo. If the network shows that the difference is in the lack of an S-shaped neck, then the questioner will be satisfied with the contextual answer provided. Contextual explanations build trust and assess the neural network, apart from explaining its decisions. In a larger context, the goal of explainability must be to satisfy multiple stakeholders at various levels of expertise, including researchers, engineers, policymakers, and everyday users. In this tutorial, we expound on a gradient-based methodology that provides all the above explanations without requiring any retraining. Once a neural network is trained, it acts as a knowledge base through which different types of gradients can be used to traverse adversarial, contrastive, explanatory, and counterfactual representation spaces. Apart from explanations, we demonstrate the utility of these gradients in defining uncertainty and intervenability. Several image understanding and robustness applications will be discussed, including anomaly, novelty, adversarial, and out-of-distribution image detection, image quality assessment, and noise recognition experiments. In this tutorial, we examine the types, visual meanings, and interpretations of robustness as a human-centric measure of the utility of large-scale neural networks.
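To make the ‘Why P, rather than Q?’ idea concrete, the following toy sketch (our own construction, using a linear softmax classifier rather than a trained deep network; all names are illustrative) compares the input gradient obtained by backpropagating the loss against the predicted class with the gradient obtained against a contrast class. Their difference isolates exactly the features that separate the two classes.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_gradient(W, b, x, target):
    """Gradient of cross-entropy(softmax(W @ x + b), target) w.r.t. x."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    return W.T @ (p - onehot)      # chain rule for a linear classifier

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 6))        # 4 classes, 6 input features
b = rng.normal(size=4)
x = rng.normal(size=6)

pred = int(np.argmax(W @ x + b))   # 'Why pred?'
contrast = (pred + 1) % 4          # 'Why pred, rather than contrast?'

# Contrastive map: difference of the two gradient-based explanations.
# For this linear model it reduces to W[pred] - W[contrast].
contrast_map = input_gradient(W, b, x, contrast) - input_gradient(W, b, x, pred)
```

In a deep network the same two backward passes traverse the nonlinear feature hierarchy, so the contrastive map is input-dependent rather than a fixed weight difference; the linear case merely makes the separation between the two classes explicit.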

History

Both tutorial presenters are from the Georgia Institute of Technology and have presented the content on multiple occasions.

The tutorial is accepted and will be presented at the following venues:

Prerequisite Knowledge

The audience is expected to have a basic understanding of neural networks and of robustness applications, including image recognition and detection.

Expected number of Participants

In previous offerings, albeit at smaller venues than AAAI, we had 20 participants in our tutorial and 30 participants in a workshop. We expect larger participation at AAAI 2024.

Recent Relevant Publications

  1. AlRegib, Ghassan, and Mohit Prabhushankar. “Explanatory Paradigms in Neural Networks: Towards relevant and contextual explanations.” IEEE Signal Processing Magazine 39.4 (2022): 59-72.
  2. M. Prabhushankar and G. AlRegib, “Introspective Learning: A Two-Stage Approach for Inference in Neural Networks,” in Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA, Nov. 29 – Dec. 1 2022.
  3. J. Lee, M. Prabhushankar, and G. AlRegib, “Gradient-Based Adversarial and Out-of-Distribution Detection,” in International Conference on Machine Learning (ICML) Workshop on New Frontiers in Adversarial Machine Learning, Baltimore, MD, Jul. 2022.
  4. M. Prabhushankar and G. AlRegib, “Extracting Causal Visual Features for Limited Label Classification,” IEEE International Conference on Image Processing (ICIP), Anchorage, AK, Sept 2021.
  5. J. Lee and G. AlRegib, “Open-Set Recognition with Gradient-Based Representations,” IEEE International Conference on Image Processing (ICIP), Anchorage, AK, Sept 2021.
  6. G. Kwon, M. Prabhushankar, D. Temel, and G. AlRegib, “Backpropagated Gradient Representations for Anomaly Detection,” in Proceedings of the European Conference on Computer Vision (ECCV), SEC, Glasgow, Aug. 23-28 2020.
  7. M. Prabhushankar, G. Kwon, D. Temel, and G. AlRegib, “Contrastive Explanations in Neural Networks,” in IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, Oct. 2020. (Top Viewed Special Session Paper Award)
  8. Y. Sun, M. Prabhushankar, and G. AlRegib, “Implicit Saliency in Deep Neural Networks,” in IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, Oct. 2020.
  9. G. Kwon, M. Prabhushankar, D. Temel, and G. AlRegib, “Novelty Detection Through Model-Based Characterization of Neural Networks,” in IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, Oct. 2020.
  10. J. Lee and G. AlRegib, “Gradients as a Measure of Uncertainty in Neural Networks,” in IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, Oct. 2020.
  11. M. Prabhushankar*, G. Kwon*, D. Temel, and G. AlRegib, “Distorted Representation Space Characterization Through Backpropagated Gradients,” IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019, pp. 2651-2655. (*: equal contribution; Best Paper Award (top 0.1%))