The foundation upon which the symbolic AI paradigm rests is the physical-symbol system hypothesis, formulated by Newell and Simon [Newell and Simon, 1976]. A physical symbol system is defined to be a physically realizable set of physical entities (symbols) that can be combined to form structures, and which is capable of running processes that operate on those symbols according to symbolically coded sets of instructions. The physical-symbol system hypothesis then says that such a system is capable of general intelligent action.
It is a short step from the notion of a physical symbol system to McCarthy's dream of a sentential processing automaton, or deliberative agent. (The term `deliberative agent' seems to have derived from Genesereth's use of the term `deliberate agent' to mean a specific type of symbolic architecture [Genesereth and Nilsson, 1987].) We define a deliberative agent or agent architecture to be one that contains an explicitly represented, symbolic model of the world, and in which decisions (for example about what actions to perform) are made via logical (or at least pseudo-logical) reasoning, based on pattern matching and symbolic manipulation. The idea of deliberative agents based on purely logical reasoning is highly seductive: to get an agent to realise some theory of agency one might naively suppose that it is enough to simply give it a logical representation of this theory and `get it to do a bit of theorem proving' [Shardlow, 1990]. If one aims to build an agent in this way, then there are at least two important problems to be solved: