SPS Education Short Course 

Title: Visual Explainability in Machine Learning

Presented by: Ghassan AlRegib and Mohit Prabhushankar

Omni Lab for Intelligent Visual Engineering and Science (OLIVES) 

School of Electrical and Computer Engineering  

Georgia Institute of Technology, Atlanta, USA 

https://alregib.ece.gatech.edu/

Watch the tutorial recording: HERE

Course Overview 


Visual explanations have traditionally acted as rationales that justify the decisions made by machine learning systems. With the advent of large-scale neural networks, the role of visual explanations has been to lend interpretability to black-box models. We view this role as the process by which a network answers the question 'Why P?', where P is a trained network's prediction. Recently, however, with increasingly capable models, the role of explainability has expanded. Neural networks are asked to justify 'What if?' counterfactual and 'Why P, rather than Q?' contrastive question modalities that they were not explicitly trained to answer. This allows explanations to act as reasons for making further predictions. The short course provides a principled and rational introduction to Explainability within machine learning and justifies explanations as reasons to make decisions. Such a reasoning framework allows robust machine learning as well as trustworthy AI to be accepted in everyday life. Applications including robust recognition, image quality assessment, visual saliency, anomaly detection, out-of-distribution detection, adversarial image detection, seismic interpretation, semantic segmentation, and machine teaching, among others, will be discussed.
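As a concrete illustration of the 'Why P?' modality, the sketch below outlines Grad-CAM (Selvaraju et al., 2017; see the suggested reading), which weights convolutional feature maps by the gradients of the predicted class score. This is a minimal sketch, not the course's reference implementation: the pretrained ResNet-50, the choice of layer4 as the hooked layer, and the grad_cam helper are illustrative assumptions (torchvision >= 0.13 is assumed for the weights argument).

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Illustrative model and layer choice; any CNN with a final conv block works.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        # Cache the feature maps and register a hook to cache their gradients.
        activations["feat"] = out.detach()
        out.register_hook(lambda grad: gradients.update(feat=grad.detach()))

    # Hook the last convolutional block (a common, but not the only, choice).
    model.layer4.register_forward_hook(fwd_hook)

    def grad_cam(x, class_idx=None):
        """x: (1, 3, H, W) normalized image tensor; returns an HxW heatmap."""
        scores = model(x)
        if class_idx is None:
            class_idx = scores.argmax(dim=1).item()   # P = the network's prediction
        model.zero_grad()
        scores[0, class_idx].backward()               # d(score_P) / d(feature maps)
        feats, grads = activations["feat"], gradients["feat"]
        weights = grads.mean(dim=(2, 3), keepdim=True)            # pool gradients per channel
        cam = F.relu((weights * feats).sum(dim=1, keepdim=True))  # weighted sum of maps
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
        return cam[0, 0], class_idx

    # Usage: heatmap, pred = grad_cam(preprocessed_image)

The same machinery hints at the expanded roles discussed above: backpropagating from a different class score Q, rather than the predicted P, is one route toward contrastive 'Why P, rather than Q?' explanations.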

Learning Outcomes

  • Motivate the importance of Explainability in AI systems through the humans (users, engineers, researchers, and policymakers) who require it
  • Define Explainability and characterize it based on its required properties, its methodologies, and the intended audience it serves
  • Detail popular visual explanatory techniques across multiple data modalities, including natural images, biomedical and seismic images, and videos
  • Expand on subjective and objective techniques to evaluate explanations
  • Discuss accepted proxies for Explainability: robustness and uncertainty
  • Contrast against data-specific instantiations of Explainability
  • Consider alternative data-centric and explanation-centric training regimens
  • Debate the role of Visual Explainability through the lens of causality and Generative AI

Schedule

Day 1 (Tuesday, 12/05)

Day 2 (Wednesday, 12/06)

Day 3 (Thursday, 12/07)

Target audience

The target audience includes senior-year undergraduate students, postgraduate students, engineers, and practitioners with some background in Python and machine learning.

Suggested reading

  • AlRegib, G., & Prabhushankar, M. (2022). Explanatory Paradigms in Neural Networks: Towards relevant and contextual explanations. IEEE Signal Processing Magazine, 39(4), 59-72  
  • Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision (pp. 618-626) 
  • Prabhushankar, M., & AlRegib, G. (2021). Contrastive Reasoning in Neural Networks. arXiv preprint arXiv:2103.12329
  • Goyal, Y., Wu, Z., Ernst, J., Batra, D., Parikh, D., & Lee, S. (2019, May). Counterfactual visual explanations. In International Conference on Machine Learning (pp. 2376-2384). PMLR 
  • Kwon, G.*, Prabhushankar, M.*, Temel, D., & AlRegib, G. (2019, September). Distorted representation space characterization through backpropagated gradients. In IEEE International Conference on Image Processing (ICIP). Taipei, Taiwan
  • Kwon, G., Prabhushankar, M., Temel, D., & AlRegib, G. (2020, August). Backpropagated gradient representations for anomaly detection. In Proceedings of the European Conference on Computer Vision (ECCV). Glasgow, UK
  • Temel, D., Lee, J., & AlRegib, G. (2018, December). Cure-or: Challenging unreal and real environments for object recognition. In 2018 17th IEEE international conference on machine learning and applications (ICMLA) (pp. 137-144). IEEE 
  • Prabhushankar, M., & AlRegib, G. (2022, November). Introspective learning: A two-stage approach for inference in neural networks. In Advances in Neural Information Processing Systems (NeurIPS). New Orleans, LA

Speakers

Ghassan AlRegib is currently the John and Marilu McCarty Chair Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received the ECE Outstanding Junior Faculty Member Award in 2008 and the Denning Faculty Award for Global Engagement in 2017. His research group, the Omni Lab for Intelligent Visual Engineering and Science (OLIVES), works on research projects related to machine learning, image and video processing, image and video understanding, seismic interpretation, machine learning for ophthalmology, and video analytics. He has participated in several service activities within the IEEE and served as the Technical Program co-Chair for ICIP 2020. He is an IEEE Fellow.

Mohit Prabhushankar received his Ph.D. degree in electrical engineering from the Georgia Institute of Technology (Georgia Tech), Atlanta, Georgia, USA, in 2021. He is currently a Postdoctoral Research Fellow in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, in the Omni Lab for Intelligent Visual Engineering and Science (OLIVES). He works in the fields of image processing, machine learning, active learning, healthcare, and robust and explainable AI. He is the recipient of the Best Paper Award at ICIP 2019 and the Top Viewed Special Session Paper Award at ICIP 2020. He is also the recipient of the ECE Outstanding Graduate Teaching Award, the CSIP Research Award, and the Roger P. Webb ECE Graduate Research Excellence Award, all in 2022.
