Explanation in AI systems

Abstract

In this chapter, we consider recent work aimed at guiding the design of algorithmically generated explanations. The chapter proceeds in four parts. Firstly, we introduce the general problem of machine-generated explanation and illustrate different notions of explanation with the help of Bayesian belief networks. Secondly, we introduce key theoretical perspectives from the philosophy literature on what constitutes an explanation, and more specifically a ‘good’ explanation. We compare these theoretical perspectives and the criteria they propose with a case study on explaining reasoning in Bayesian belief networks and present implications for AI. Thirdly, we consider the pragmatic nature of explanation, with a focus on its communicative aspects, which are manifested in considerations of trust. Finally, we present conclusions.

Publication
Human-Like Machine Intelligence
Marko Tešić
Research Associate