Towards Useful AI Interpretability for Humans via Interactive AI Explanations
Open Access
- Author:
- Shen, Hua
- Graduate Program:
- Informatics
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- July 18, 2023
- Committee Members:
- Jeffrey Bardzell, Program Head/Chair
- C. Lee Giles, Major Field Member
- Sherry Tongshuang Wu, Special Member
- Kenneth Huang, Chair & Dissertation Advisor
- Mary Beth Rosson, Major Field Member
- S. Shyam Sundar, Outside Unit & Field Member
- Keywords:
- Interactive AI Explanations
- Conversational AI Explanations
- Human-centered Explainable AI
- XAI Human Evaluation
- Useful AI Explanations
- XAI Usefulness
- Abstract:
- Advancements in deep learning have revolutionized AI systems, enabling collaboration between humans and AI to enhance performance on specific tasks. AI explanations play a crucial role in helping humans understand, control, and improve AI systems with respect to criteria such as fairness, safety, and trustworthiness. Despite the proliferation of eXplainable AI (XAI) approaches, the practical usefulness of AI explanations in human-AI collaborative systems remains underexplored. This doctoral research aims to evaluate and enhance the usefulness of AI explanations for humans in practical human-AI collaboration. I break down this goal into three research questions: RQ1: Are cutting-edge AI explanations useful for humans in practice (Part I)? RQ2: What is the disparity between AI explanations and practical user demands (Part II)? RQ3: How can we empower useful AI explanations with human-AI interaction (Part III)? We examined these three research questions by conducting four projects. To answer RQ1, we deployed two real-world human evaluation studies: one on analyzing computer vision model errors with post-hoc explanations, and one on simulating NLP model predictions with inherent explanations. The two studies reveal that, surprisingly, AI explanations are not always useful for humans analyzing AI predictions in practice. This motivates our research for RQ2 – gaining insights into the disparities between the status quo of AI explanations and practical user needs. By surveying over 200 AI explanation papers and comparing them with summarized real-world user demands, we arrive at two dominant findings: i) humans pose diverse XAI questions across the AI pipeline to gain a global view of the AI system, whereas existing XAI approaches commonly display a single AI explanation that cannot satisfy these diverse needs; ii) humans are widely interested in understanding what AI systems cannot achieve, which points to the need for interactive AI explanations that let humans specify counterfactual predictions. In light of these findings, we contend that, instead of having XAI researchers designate user demands during AI system development, empowering users to communicate their practical XAI demands to AI systems is critical to unleashing useful AI explanations (RQ3). To this end, we developed a conversational interactive XAI system that improved the usefulness of AI explanations, in terms of human-perceived performance, in AI-assisted writing tasks. We conclude this doctoral research by discussing the limitations of and open challenges for human-centered useful AI explanations.