In section 2, we saw that some researchers have considered frameworks for agent theory based on beliefs, desires, and intentions [Rao and Georgeff, 1991b]. Some researchers have also developed agent architectures based on these attitudes. One example is the Intelligent Resource-bounded Machine Architecture (IRMA) [Bratman et al., 1988]. IRMA has four key symbolic data structures: a plan library, and explicit representations of beliefs, desires, and intentions. In addition, the architecture has: a reasoner, for reasoning about the world; a means-ends analyser, for determining which plans might be used to achieve the agent's intentions; an opportunity analyser, which monitors the environment for further options the agent might pursue; a filtering process; and a deliberation process. The filtering process determines which of the agent's potential courses of action are consistent with its current intentions; the deliberation process then chooses between the competing options that survive filtering. The IRMA architecture has been evaluated in an experimental scenario known as the Tileworld [Pollack and Ringuette, 1990].
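The interaction of these components can be sketched as a simple control loop. The following Python sketch is purely illustrative: all class, method, and attribute names are assumptions chosen for clarity, not identifiers from the IRMA implementation, and the filtering and deliberation strategies shown (string-based incompatibility and a fixed preference order) are deliberately trivial stand-ins for the mechanisms described in [Bratman et al., 1988].

```python
from dataclasses import dataclass, field

@dataclass
class IRMAStyleAgent:
    """Illustrative sketch of an IRMA-style agent; names are hypothetical."""
    beliefs: set = field(default_factory=set)
    intentions: list = field(default_factory=list)
    # Plan library: maps an option (a goal) to a plan (a list of actions).
    plan_library: dict = field(default_factory=dict)

    def opportunity_analyser(self, percepts):
        # Monitor the environment: update beliefs and surface new options.
        self.beliefs |= set(percepts)
        return [p for p in percepts if p.startswith("opportunity:")]

    def means_ends_analyser(self):
        # Determine which library plans might achieve current intentions.
        return [opt for opt in self.plan_library if opt in self.intentions]

    def filter_options(self, options):
        # Keep only options consistent with current intentions
        # (here, trivially: options not explicitly negated by an intention).
        return [o for o in options if f"not:{o}" not in self.intentions]

    def deliberate(self, options):
        # Choose between competing options (here: a fixed preference order).
        return sorted(options)[:1]

    def step(self, percepts):
        # One pass of the architecture's control cycle.
        options = self.opportunity_analyser(percepts) + self.means_ends_analyser()
        compatible = self.filter_options(options)
        chosen = self.deliberate(compatible)
        self.intentions.extend(o for o in chosen if o not in self.intentions)
        return chosen
```

For example, an agent whose plan library covers a `deliver` goal it already intends will, on one step, propose `deliver` via means-ends analysis alongside any environmental opportunities, filter out incompatible options, and commit to the deliberation process's choice.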