Explainable AI (XAI) aims to help users understand and interpret the predictions made by AI techniques and tools. Interest in XAI has grown recently, as many users want explanations of the results produced by AI models. For deep neural networks, the saliency of a neuron indicates how strongly the features it has learned contribute to a prediction, which lets us identify which inputs drive model behavior. In this talk, I present an approach to explainable deep neural networks using saliency backpropagation, which propagates the saliency of output-layer nodes back to the inputs. I highlight existing saliency backpropagation methods and their applications to biomarker discovery from functional MRI (fMRI) data and to drug mechanism prediction.
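As a concrete illustration (a minimal sketch, not the speaker's specific method), the following shows the simplest form of saliency backpropagation in PyTorch: the gradient of a class score with respect to the inputs. The model architecture, input dimensions, and variable names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained classifier (64 input features, 10 classes).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

# One input sample; requires_grad lets us backpropagate saliency to the inputs.
x = torch.randn(1, 64, requires_grad=True)
logits = model(x)
pred = logits.argmax(dim=1).item()  # predicted class index

# Propagate the saliency of the predicted output node back to the inputs:
# the gradient magnitude of the class score w.r.t. each input feature.
logits[0, pred].backward()
saliency = x.grad.abs().squeeze()

# The highest-scoring features are the inputs most salient for this prediction.
top = saliency.topk(5)
print(top.indices, top.values)
```

In an fMRI setting, each input feature would correspond to a voxel or region, so the resulting saliency scores can be read as candidate biomarkers for the model's prediction.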
Learning Objectives:
1. Discuss the importance of Explainable AI (XAI) and approaches to XAI.
2. Explain the method of saliency backpropagation for XAI.
3. Describe applications of XAI to drug discovery.
4. Discuss biomarker discovery from functional MRI analysis as an XAI application.