Personnel: Mohit Prabhushankar, Jinsol Lee, Gukyeong Kwon, Dogancan Temel, Charles Lehman
Goal: To provide an intuitive definition of explainability for ML algorithms, one that keeps humans in the loop to give context and relevance to all explanations

Challenges: Explainability in machine learning is inherently aimed at human beings. Explainability can be used to validate an ML algorithm, to learn from the model, or to tweak the model. In all cases, the context and relevance come from humans. The role of explainability is therefore contextually dependent, which makes it inherently complicated. The challenge lies in connecting the explanations produced by ML models to the context and relevance supplied by humans.
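As a concrete illustration of the kind of explanation an ML model can produce on its own, and which a human must then place in context, the sketch below computes a simple gradient saliency map in PyTorch. This is a minimal sketch under assumed placeholders: the tiny classifier and the random input tensor stand in for a real trained network and a real image, and are not part of any specific method described here.

```python
# Minimal sketch of a gradient-based saliency explanation.
# Assumptions: the tiny model and random input are hypothetical
# stand-ins for a real trained classifier and a real image.
import torch
import torch.nn as nn

# Placeholder classifier (stands in for any trained model).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Placeholder input (batch of one 3x32x32 "image"); requires_grad lets
# us ask which pixels most influenced the predicted class.
x = torch.rand(1, 3, 32, 32, requires_grad=True)

logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the input pixels.
score = logits[0, predicted_class]
score.backward()

# The saliency map is the gradient magnitude, reduced over color
# channels; whether it is a relevant, contextually meaningful
# explanation is still a judgment left to the human in the loop.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([32, 32])
```

Such a map answers "which inputs mattered to the model", but validating, learning from, or tweaking the model based on it still depends on the human interpreting it.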
Our Work:
Extensions:
References: