Report Number: CS-TR-87-1181
Institution: Stanford University, Department of Computer Science
Title: On Debugging Rule Sets When Reasoning Under Uncertainty
Author: Wilkins, D. C.
Author: Buchanan, B. G.
Date: May 1987
Abstract: Heuristic inference rules with a measure of strength less
than certainty have an unusual property: better individual
rules do not necessarily lead to a better overall rule set.
All less-than-certain rules contribute evidence towards
erroneous conclusions for some problem instances, and the
distribution of these erroneous conclusions over the
instances is not necessarily related to individual rule
quality. This has important consequences for automatic
machine learning of rules, since rule selection is usually
based on measures of quality of individual rules.
In this paper, we explain why the most obvious and
intuitively reasonable solution to this problem, incremental
modification and deletion of rules responsible for wrong
conclusions a la Teiresias, is not always appropriate. In our
experience, it usually fails to converge to an optimal set of
rules. Given a set of heuristic rules, we explain why the
best rule set should be considered to be the element of the
power set of rules that yields a global minimum error with
respect to generating erroneous positive and negative
conclusions. This selection process is modeled as a bipartite
graph minimization problem and shown to be NP-complete. A
solution method is described, the Antidote Algorithm, that
performs a model-directed search of the rule space. On an
example from medical diagnosis, the Antidote Algorithm
significantly reduced the number of misdiagnoses when applied
to a rule set generated from 104 training instances.
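The selection criterion described in the abstract, choosing the element of the power set of rules that yields the global minimum of erroneous positive and negative conclusions, can be illustrated with a small brute-force sketch. This is not the paper's Antidote Algorithm (which performs a model-directed search); it is a hypothetical toy encoding in which each rule casts a +1/-1 evidence vote per instance and a conclusion is positive when the summed evidence is positive. All names and the voting scheme are illustrative assumptions.

```python
from itertools import combinations

def errors(rule_subset, instances):
    """Count erroneous positive and negative conclusions over the
    training instances for a given subset of rules (toy model)."""
    wrong = 0
    for inst, label in instances:
        # Sum the evidence votes of the active rules for this instance.
        evidence = sum(rule.get(inst, 0) for rule in rule_subset)
        conclusion = evidence > 0
        wrong += (conclusion != label)
    return wrong

def best_rule_set(rules, instances):
    """Exhaustive search of the power set for the subset with global
    minimum error. Exponential in the number of rules, consistent
    with the problem being NP-complete."""
    best, best_err = (), float("inf")
    for r in range(len(rules) + 1):
        for subset in combinations(rules, r):
            e = errors(subset, instances)
            if e < best_err:
                best, best_err = subset, e
    return best, best_err
```

Note that the exhaustive search makes the abstract's point concrete: a subset excluding an individually "good" rule can beat the full set, because what matters is the joint distribution of erroneous conclusions, not per-rule quality.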