Report Number: CS-TR-77-593
Institution: Stanford University, Department of Computer Science
Title: Explanation capabilities of production-based consultation systems
Author: Scott, A. Carlisle
Author: Clancey, William J.
Author: Davis, Randall
Author: Shortliffe, Edward H.
Date: February 1977
Abstract: A computer program that models an expert in a given domain is more likely to be accepted by experts in that domain, and by non-experts seeking its advice, if it can explain its actions. An explanation capability not only adds to the system's credibility, but also enables the non-expert user to learn from it. Furthermore, clear explanations allow an expert to check the system's "reasoning", possibly discovering the need for refinements and additions to the system's knowledge base. In a developing system, an explanation capability can be used as a debugging aid to verify that additions to the system are working as they should. This paper discusses the general characteristics of explanation systems: what types of explanations they should be able to give, what types of knowledge are needed to give these explanations, and how this knowledge might be organized. The explanation facility in MYCIN is discussed as an illustration of how these problems might be approached.
URL: http://i.stanford.edu/pub/cstr/reports/cs/tr/77/593/CS-TR-77-593.pdf
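
Note: The abstract refers to MYCIN's explanation facility, which answered WHY ("why are you asking me this?") and HOW ("how was this concluded?") questions by tracing the production rules applied during a consultation. As a loose illustration only (none of the following is taken from the report), the Python sketch below shows one way a production-rule consultant might record fired rules so it can answer such questions; the Rule and Consultant classes, rule names, and facts are all invented for this example.

# Minimal illustrative sketch, not code from the report: a backward-chaining
# production-rule consultant that keeps enough bookkeeping to answer
# MYCIN-style WHY and HOW questions.  All rules and facts are invented.

from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    premises: list      # facts that must all hold for the rule to fire
    conclusion: str     # fact asserted when the rule fires

@dataclass
class Consultant:
    rules: list
    known: set = field(default_factory=set)      # facts established so far
    support: dict = field(default_factory=dict)  # fact -> rule that derived it
    goal_stack: list = field(default_factory=list)

    def prove(self, goal):
        """Backward-chain on `goal`, asking the user about primitive facts."""
        if goal in self.known:
            return True
        applicable = [r for r in self.rules if r.conclusion == goal]
        for rule in applicable:
            self.goal_stack.append((rule.name, goal))
            if all(self.prove(p) for p in rule.premises):
                self.known.add(goal)
                self.support[goal] = rule
                self.goal_stack.pop()
                return True
            self.goal_stack.pop()
        if applicable:
            return False
        # Primitive fact: ask the user; "why" shows the pending rule chain.
        while True:
            reply = input(f"Is {goal!r} true? (yes/no/why) ").strip().lower()
            if reply == "why":
                for name, g in reversed(self.goal_stack):
                    print(f"  [WHY] {name} needs this to establish {g!r}")
            elif reply.startswith("y"):
                self.known.add(goal)
                return True
            else:
                return False

    def how(self, fact, depth=0):
        """Explain HOW `fact` was established by replaying recorded rules."""
        rule = self.support.get(fact)
        pad = "  " * depth
        if rule is None:
            print(f"{pad}{fact!r} was supplied by the user.")
        else:
            print(f"{pad}{fact!r} concluded by {rule.name} from {rule.premises}")
            for p in rule.premises:
                self.how(p, depth + 1)

if __name__ == "__main__":
    kb = [Rule("RULE050",
               ["organism is gram-negative", "organism is rod-shaped"],
               "organism may be enterobacteriaceae")]
    c = Consultant(kb)
    if c.prove("organism may be enterobacteriaceae"):
        c.how("organism may be enterobacteriaceae")

Running the sketch asks the user the two primitive questions; typing "why" at a prompt prints the chain of rules currently being pursued, and after a successful consultation the HOW trace replays the rules recorded in support of the conclusion.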