Classical Approaches: Deliberative Architectures

The foundation upon which the symbolic AI paradigm rests is the physical-symbol system hypothesis, formulated by Newell and Simon [Newell and Simon, 1976]. A physical symbol system is defined to be a physically realizable set of physical entities (symbols) that can be combined to form structures, and which is capable of running processes that operate on those symbols according to symbolically coded sets of instructions. The physical-symbol system hypothesis then says that such a system is capable of general intelligent action.
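As a toy illustration of this definition, the sketch below (in Python, with all names invented for the purpose) treats symbols as atomic tokens, symbol structures as tuples of symbols, and a process as a function that produces new structures from old ones according to a symbolically coded rule table. It is a minimal reading of Newell and Simon's informal definition, not a reconstruction of any system of theirs.

    # Symbols are atomic tokens; symbol structures combine them into
    # expressions, e.g. ("on", "a", "b") for the structure on(a, b).
    # The rule table is itself made of symbol structures: a "symbolically
    # coded set of instructions" that the process follows.
    RULES = {
        ("on", "a", "b"): ("above", "a", "b"),
    }

    def process(structures):
        """One process of the system: produce new expressions from old."""
        out = set(structures)
        for s in structures:
            if s in RULES:
                out.add(RULES[s])
        return out

    print(process({("on", "a", "b")}))
    # {('on', 'a', 'b'), ('above', 'a', 'b')}  (set order may vary)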

It is a short step from the notion of a physical symbol system to McCarthy's dream of a sentential processing automaton, or deliberative agent. (The term `deliberative agent' seems to have derived from Genesereth's use of the term `deliberate agent' to mean a specific type of symbolic architecture [Genesereth and Nilsson, 1987].) We define a deliberative agent or agent architecture to be one that contains an explicitly represented, symbolic model of the world, and in which decisions (for example, about what actions to perform) are made via logical (or at least pseudo-logical) reasoning, based on pattern matching and symbolic manipulation. The idea of deliberative agents based on purely logical reasoning is highly seductive: to get an agent to realise some theory of agency, one might naively suppose that it is enough simply to give the agent a logical representation of this theory and `get it to do a bit of theorem proving' [Shardlow, 1990]. If one aims to build an agent in this way, then there are at least two important problems to be solved (a sketch of the resulting control loop follows the list below):

  1. The transduction problem: that of translating the real world into an accurate, adequate symbolic description, in time for that description to be useful.

  2. The representation/reasoning problem: that of how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful.
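To make the naive `theorem proving' view concrete, here is a minimal sketch of the control loop it suggests. The Horn-clause theory, the battery/recharge atoms, and the perceive/execute primitives are all hypothetical, invented for illustration; unification and pattern matching over variables, which a real symbolic agent would need, are omitted to keep the sketch short.

    # A deliberative agent in miniature: an explicit symbolic world model
    # (a set of ground facts) plus a hand-coded "theory of agency" (Horn
    # rules), with action selection done by proving atoms of the form
    # ("do", action). All rules and atoms here are invented examples.

    FACTS = set()  # the explicitly represented, symbolic model of the world

    # Horn rules: (head, body) -- head is derivable once every body atom is.
    RULES = [
        (("do", "recharge"), [("battery", "low")]),
        (("do", "explore"),  [("battery", "ok")]),
    ]

    def forward_chain(facts, rules):
        """Naive theorem proving: close the fact set under the rules."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return derived

    def deliberate(facts):
        """Return an action a such that ("do", a) is provable, if any."""
        for atom in forward_chain(facts, RULES):
            if atom[0] == "do":
                return atom[1]
        return None

    def agent_loop(perceive, execute):
        """Sense, update the symbolic model, prove, act -- repeatedly."""
        while True:
            FACTS.update(perceive())    # problem 1: transduction
            action = deliberate(FACTS)  # problem 2: representation/reasoning
            if action is not None:
                execute(action)

    # One tick of the loop, with stubbed perception:
    FACTS.update({("battery", "low")})
    print(deliberate(FACTS))  # -> recharge

Even in this toy setting the two problems above are visible: perceive must deliver facts in exactly the vocabulary the rules can match (transduction), and forward_chain must terminate soon enough for the chosen action still to be appropriate (representation/reasoning). With first-order rules and unification in place of ground atoms, neither is guaranteed.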

The former problem has led to work on vision, speech understanding, learning, etc. The latter has led to work on knowledge representation, automated reasoning, automatic planning, etc. Despite the immense volume of work that these problems have generated, most researchers would accept that neither problem is anywhere near solved. Even seemingly trivial problems, such as commonsense reasoning, have turned out to be extremely difficult. The underlying problem seems to be the difficulty of theorem proving in even very simple logics, and the complexity of symbol manipulation in general: recall that first-order logic is not even decidable (theoremhood is only semi-decidable), and that modal extensions to it (including representations of belief, desire, time, and so on) tend to be highly undecidable. It is because of these problems that some researchers have looked to alternative techniques for building agents; such alternatives are discussed in section 3.2. First, however, we consider efforts made within the symbolic AI community to construct agents.



