On Repairing Reasoning Reversals via Representational Refinements
As Pólya has taught us, the right representation is often the key to successful problem solving. Most automated problem solving systems, however, use a fixed, hand-crafted representation of the problem to be solved. This suffices for a problem solver facing an unchanging set of problems in an unchanging world, but if we want to build more general-purpose and robust problem solvers, then they must also be able to adapt their problem representation to suit both the problem and the environment.
As an initial foray into this almost virgin territory, we describe the Ontology Repair System (ORS), where an ontology is a first-order logic theory, in this case representing ORS's world model. ORS builds plans to achieve its goals with the help of other agents in a networked environment. However, its model of the conditions under which these other agents will assist it is faulty, so that these plans frequently fail when executed. ORS diagnoses these failures in plan execution and then repairs its faulty ontology. Our automated approach to dynamic ontology repair has been designed specifically to address real issues in multi-agent systems, for instance, as envisaged in the Semantic Web.
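The plan–execute–diagnose–repair cycle described above can be illustrated with a minimal sketch. This is not ORS's actual algorithm (which operates on first-order ontologies, not Python sets); all names here are hypothetical, preconditions are modelled as simple sets of atoms, and diagnosis is simplified to comparing the planner's expectation against the agent's true requirements, standing in for the failure feedback a real agent would return.

```python
# Hypothetical sketch of an ORS-style diagnose-and-repair loop.
# Preconditions are modelled as sets of atoms; in ORS they are
# formulae in a first-order ontology.

def execute(step, agent_model, agent_truth):
    """A plan step succeeds only if the planner's believed preconditions
    for the helper agent match the agent's actual requirements."""
    return agent_model[step] == agent_truth[step]

def diagnose(step, agent_model, agent_truth):
    """Compare expectation with observation to locate the fault.
    (A real system would infer this from the agent's failure response.)"""
    missing = agent_truth[step] - agent_model[step]   # preconditions the model lacks
    spurious = agent_model[step] - agent_truth[step]  # preconditions wrongly assumed
    return missing, spurious

def repair(step, agent_model, missing, spurious):
    """Refine the world model: add missing preconditions, drop spurious ones."""
    agent_model[step] = (agent_model[step] | missing) - spurious

# The planner's (faulty) model of a helper agent vs. the agent's true requirements.
agent_model = {"book_flight": {"has_itinerary"}}
agent_truth = {"book_flight": {"has_itinerary", "has_payment"}}

if not execute("book_flight", agent_model, agent_truth):
    missing, spurious = diagnose("book_flight", agent_model, agent_truth)
    repair("book_flight", agent_model, missing, spurious)
```

After the repair, re-executing the step with the refined model succeeds: the model now records that the agent also requires `has_payment`.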