Generating explanations based on Markov decision processes
Chapter in Scopus
In this paper we address the problem of explaining the recommendations generated by a Markov decision process (MDP). We propose an automatic explanation generation mechanism composed of two main stages. In the first stage, the most relevant variable given the current state is identified, based on a factored representation of the MDP. The relevant variable is defined as the factor that has the greatest impact on the utility for a given state and action, and it is a key element of the explanation generation mechanism. In the second stage, an explanation is generated from a general template by combining the information obtained from the MDP with domain knowledge represented as a frame system. The state and action given by the MDP, together with the relevant variable, serve as pointers into the knowledge base to extract the relevant information and fill in the explanation template. In this way, explanations of the recommendations given by the MDP can be generated on-line and incorporated into an intelligent assistant. We have evaluated this mechanism in an intelligent assistant for power plant operator training. The experimental results show that the automatically generated explanations are similar to those given by a domain expert. © 2009 Springer-Verlag Berlin Heidelberg.
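The two-stage mechanism summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function names (`relevant_variable`, `explain`), the dictionary-based factored state, the `utility` callable, and the frame representation are all hypothetical stand-ins for the components the abstract describes.

```python
# Minimal sketch of the two-stage explanation mechanism.
# All names (relevant_variable, explain, utility, frames, ...) are
# hypothetical illustrations, not the chapter's actual implementation.

def relevant_variable(state, action, utility, domains):
    """Stage 1: return the factor with the greatest impact on utility.

    state   -- dict mapping factor name -> current value (factored state)
    action  -- action recommended by the MDP policy
    utility -- assumed callable utility(state, action) -> float
    domains -- dict mapping factor name -> iterable of possible values
    """
    base = utility(state, action)
    impact = {}
    for factor, values in domains.items():
        # Impact: largest utility deviation when only this factor varies.
        impact[factor] = max(
            (abs(utility(dict(state, **{factor: v}), action) - base)
             for v in values),
            default=0.0,
        )
    return max(impact, key=impact.get)


def explain(state, action, utility, domains, frames, template):
    """Stage 2: fill a general template with frame-system knowledge.

    frames   -- dict mapping factor name -> dict of slot/value pairs
    template -- str with placeholders for the action and frame slots
    """
    factor = relevant_variable(state, action, utility, domains)
    # The state, action and relevant variable point into the knowledge
    # base; the retrieved slots fill the explanation template.
    return template.format(action=action, variable=factor, **frames[factor])
```

For example, with a toy utility in which one factor (say, drum pressure in a power plant scenario) dominates, stage 1 selects that factor and stage 2 produces an explanation sentence naming the recommended action and the associated frame slots.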