For a detailed discussion of intentionality and the intentional stance, see [Dennett, 1987; Dennett, 1978]. A number of papers on AI treatments of agency may be found in [Allen et al., 1990]. For an introduction to modal logic, see [Chellas, 1980]; a slightly older, though more wide-ranging, introduction may be found in [Hughes and Cresswell, 1968]. On the use of modal logics to model knowledge and belief, see [Halpern and Moses, 1992], which includes complexity results and proof procedures. Related work on modelling knowledge has been done by the distributed systems community, who give the worlds in possible worlds semantics a precise interpretation; for an introduction and further references, see [Fagin et al., 1992; Halpern, 1987]. Overviews of formalisms for modelling belief and knowledge may be found in [Wooldridge, 1992; Reichgelt, 1989a; Konolige, 1986a; Halpern, 1986]. A variant on the possible worlds framework, called the recursive modelling method, is described in [Gmytrasiewicz and Durfee, 1993]; a deep theory of belief may be found in [Mack, 1994]. Situation semantics, developed in the early 1980s and recently the subject of renewed interest, represent a fundamentally new approach to modelling the world and cognitive systems [Devlin, 1991; Barwise and Perry, 1983]. However, situation semantics are not (yet) in the mainstream of (D)AI, and it is not obvious what impact the paradigm will ultimately have.
Logics which integrate time with mental states are discussed in [Wooldridge and Fisher, 1994; Halpern and Vardi, 1989; Kraus and Lehmann, 1988]; the last of these presents a tableau-based proof method for a temporal belief logic. Two other important references on temporal aspects are [Shoham, 1989; Shoham, 1988]. Thomas has developed some logics for representing agent theories as part of her framework for agent programming languages; see [Thomas, 1993; Thomas et al., 1991] and section 4. For an introduction to temporal logics and related topics, see [Emerson, 1990; Goldblatt, 1987]. A non-formal discussion of intention may be found in [Bratman, 1987], or, more briefly, in [Bratman, 1990]. Further work on modelling intention may be found in [Konolige and Pollack, 1993; Goldman and Lang, 1991; Sadek, 1992; Grosz and Sidner, 1990]. Related work, focussing less on single-agent attitudes and more on social aspects, includes [Wooldridge and Jennings, 1994; Wooldridge, 1994; Jennings, 1993a].
Finally, although we have not discussed formalisms for reasoning about action here, we suggested above that an agent logic would need to incorporate some mechanism for representing agents' actions. Our reason for avoiding the topic is simply that the field is so large that it deserves a whole review in its own right. Good starting points for AI treatments of action are [Allen et al., 1991; Allen et al., 1990; Allen, 1984]. Other treatments of action in agent logics are based on formalisms borrowed from mainstream computer science, notably dynamic logic, originally developed to reason about computer programs [Harel, 1984]. The logic of 'seeing to it that' has been discussed in the formal philosophy literature, but has yet to have an impact on (D)AI [Segerberg, 1989; Belnap, 1991; Perloff, 1991; Belnap and Perloff, 1988].