As we observed earlier, there is no clear consensus in either the AI or philosophy communities about precisely which combination of information and pro-attitudes is best suited to characterising rational agents. In the work of Cohen and Levesque, described above, just two basic attitudes were used: beliefs and goals. Further attitudes, such as intention, were defined in terms of these. In related work, Rao and Georgeff have developed a logical framework for agent theory based on three primitive modalities: beliefs, desires, and intentions [Rao and Georgeff, 1993; Rao and Georgeff, 1991a; Rao and Georgeff, 1991b]. Their formalism is based on a branching model of time (cf. [Emerson and Halpern, 1986]), in which belief-, desire-, and intention-accessible worlds are themselves branching-time structures. They are particularly concerned with the notion of realism - the question of how an agent's beliefs about the future affect its desires and intentions. In other work, they also consider the potential for adding (social) plans to their formalism [Kinny et al., 1992; Rao and Georgeff, 1992].
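To make the three primitive attitudes concrete, the following is a minimal sketch of how beliefs, desires, and intentions might be represented as distinct components of an agent's state, with a crude deliberation step that commits only to desires consistent with current beliefs. All names here (`Agent`, `deliberate`, the string-based belief encoding) are illustrative assumptions, not part of Rao and Georgeff's formalism, which is a modal logic over branching-time structures rather than a program.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # The three BDI attitudes, kept as separate components of agent state.
    beliefs: set = field(default_factory=set)       # what the agent takes to be true
    desires: set = field(default_factory=set)       # states it would like to bring about
    intentions: list = field(default_factory=list)  # desires it has committed to pursue

    def deliberate(self):
        # Crude filter: adopt as an intention any desire whose negation
        # is not currently believed and which is not already adopted.
        for d in sorted(self.desires):
            if ("not " + d) not in self.beliefs and d not in self.intentions:
                self.intentions.append(d)

agent = Agent(beliefs={"not travel", "budget approved"},
              desires={"travel", "publish paper"})
agent.deliberate()
print(agent.intentions)  # only desires consistent with beliefs survive
```

The point of the separation is that intentions, unlike desires, carry commitment: once adopted, an agent persists with them, which is exactly the behaviour the realism constraints are meant to regulate.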