
Agents as Intentional Systems

When explaining human activity, it is often useful to make statements such as the following:

Janine took her umbrella because she believed it was going to rain.
Michael worked hard because he wanted to possess a PhD.

These statements make use of a folk psychology, by which human behaviour is predicted and explained through the attribution of attitudes, such as believing and wanting (as in the above examples), hoping, fearing, and so on. This folk psychology is well established: most people reading the above statements would say they found their meaning entirely clear, and would not give them a second glance.

The attitudes employed in such folk psychological descriptions are called the intentional notions. The philosopher Daniel Dennett has coined the term intentional system to describe entities `whose behaviour can be predicted by the method of attributing belief, desires and rational acumen' [Dennett, 1987]. Dennett identifies different `grades' of intentional system:

`A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires. ... A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) - both those of others and its own'. [Dennett, 1987]
One can carry on this hierarchy of intentionality as far as required.
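Dennett's grades can be made concrete as a simple nested structure: a first-order system holds attitudes whose content is a plain proposition about the world, while a second-order system holds attitudes whose content is itself an attitude. The short Python sketch below is purely illustrative; the Attitude class and its field names are assumptions introduced here, not drawn from Dennett or any of the other works cited, and the order of intentionality falls out as the nesting depth of the content field.

  # Illustrative sketch only: Dennett's grades of intentional system
  # modelled as nested attitudes.  All names here are assumptions.
  from dataclasses import dataclass
  from typing import Union

  @dataclass(frozen=True)
  class Attitude:
      agent: str                       # who holds the attitude
      kind: str                        # e.g. "believes", "desires"
      content: Union[str, "Attitude"]  # a proposition, or another attitude

      def order(self) -> int:
          """Nesting depth: 1 for first-order, 2 for second-order, ..."""
          if isinstance(self.content, Attitude):
              return 1 + self.content.order()
          return 1

  # First-order: a belief about the world.
  first = Attitude("janine", "believes", "it is going to rain")

  # Second-order: a belief about another agent's desire.
  second = Attitude("michael", "believes",
                    Attitude("janine", "desires", "to stay dry"))

  assert first.order() == 1
  assert second.order() == 2

Carrying the hierarchy further is simply a matter of nesting more deeply.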

An obvious question is whether it is legitimate or useful to attribute beliefs, desires, and so on, to artificial agents. Isn't this just anthropomorphism? McCarthy, among others, has argued that there are occasions when the intentional stance is appropriate:

`To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known'. [McCarthy, 1978], (quoted in [Shoham, 1990])
What objects can be described by the intentional stance? As it turns out, more or less anything can. In his doctoral thesis, Seel showed that even very simple, automata-like objects can be consistently ascribed intentional descriptions [Seel, 1989]; work by Rosenschein and Kaelbling (albeit with a different motivation) reached the same conclusion [Rosenschein and Kaelbling, 1986]. For example, consider a light switch:

`It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires'. [Shoham, 1990]
And yet most adults would find such a description absurd - perhaps even infantile. Why is this? The answer seems to be that while the intentional stance description is perfectly consistent with the observed behaviour of a light switch, and is internally consistent,

`... it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler, mechanistic description of its behaviour'. [Shoham, 1990]
Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behaviour. With very complex systems, however, even if a complete, accurate picture of the system's architecture and workings is available, a mechanistic, design stance explanation of its behaviour may not be practicable. Consider a computer: although we might have a complete technical description of it available, it is hardly practicable to appeal to that description when explaining why a menu appears when we click on an icon with the mouse. In such situations it may be more appropriate to adopt an intentional stance description, provided that description is consistent and simpler than the alternatives. The intentional notions are thus abstraction tools, which provide us with a convenient and familiar way of describing, explaining, and predicting the behaviour of complex systems.

Being an intentional system seems to be a necessary condition for agenthood, but is it a sufficient condition? In his Master's thesis, Shardlow trawled through the literature of cognitive science and its component disciplines in an attempt to find a unifying concept that underlies the notion of agenthood. He was forced to the following conclusion:

`Perhaps there is something more to an agent than its capacity for beliefs and desires, but whatever that thing is, it admits no unified account within cognitive science'. [Shardlow, 1990]
So, an agent is a system that is most conveniently described by the intentional stance; one whose simplest consistent description requires the intentional stance. Before proceeding, it is worth considering exactly which attitudes are appropriate for representing agents. For the purposes of this survey, the two most important categories are information attitudes and pro-attitudes:

information attitudes: belief, knowledge
pro-attitudes: desire, intention, obligation, commitment, choice, ...

Thus information attitudes are related to the information that an agent has about the world it occupies, whereas pro-attitudes are those that in some way guide the agent's actions. Precisely which combination of attitudes is most appropriate to characterise an agent is, as we shall see later, an issue of some debate. However, it seems reasonable to suggest that an agent must be represented in terms of at least one information attitude, and at least one pro-attitude. Note that pro- and information attitudes are closely linked, as a rational agent will make choices and form intentions, etc., on the basis of the information it has about the world. Much work in agent theory is concerned with sorting out exactly what the relationship between the different attitudes is.
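To make the two categories concrete, the following Python sketch pairs a single information attitude (a set of beliefs) with a single pro-attitude (a set of desires), and forms intentions only for those desires the agent believes it can achieve. It is a minimal illustration under assumed names (SimpleIntentionalAgent, perceive, deliberate and the like), not a model prescribed by any of the theories surveyed here.

  # Minimal illustrative sketch: one information attitude (beliefs) plus
  # one pro-attitude (desires), with intentions formed from the two.
  class SimpleIntentionalAgent:
      def __init__(self, beliefs, desires):
          self.beliefs = set(beliefs)    # information attitude: what the agent takes to be true
          self.desires = set(desires)    # pro-attitude: states of affairs the agent wants
          self.intentions = set()        # pro-attitude derived from beliefs and desires

      def perceive(self, fact):
          """Update the information attitude with a new observation."""
          self.beliefs.add(fact)

      def deliberate(self, achievable_if):
          """Commit to those desires whose enabling belief currently holds.

          `achievable_if` maps each desire to the belief that must hold
          for the agent to regard that desire as achievable.
          """
          self.intentions = {d for d in self.desires
                             if achievable_if.get(d) in self.beliefs}
          return self.intentions

  # Usage: the agent desires to stay dry and believes it has an umbrella,
  # so it forms the corresponding intention.
  agent = SimpleIntentionalAgent(beliefs={"it is raining", "I have an umbrella"},
                                 desires={"stay dry"})
  print(agent.deliberate({"stay dry": "I have an umbrella"}))   # {'stay dry'}

The point is only that at least one attitude of each kind is needed: strip out the beliefs and the agent has no basis for choosing; strip out the desires and it has nothing to choose.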

The next step is to investigate methods for representing and reasoning about intentional notions.




