Belief-Desire-Intention software model
The Belief-Desire-Intention (BDI) software model (usually referred to simply, but ambiguously, as BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's "beliefs", "desires" and "intentions", it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.

In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman. For Bratman, intention and desire are both pro-attitudes (mental attitudes concerned with action), but intention is distinguished as a conduct-controlling pro-attitude. He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which it is already committed. The BDI software model partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed.

An important aspect of the BDI software model (in terms of its research relevance) is the existence of logical models through which it is possible to define and reason about BDI agents. Research in this area has led, for example, to the axiomatization of some BDI implementations, as well as to formal logical descriptions such as Anand Rao and Michael Georgeff's BDICTL. The latter combines a multiple-modal logic (with modalities representing beliefs, desires and intentions) with the temporal logic CTL*.
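For flavour only, here is a schematic formula in a BDICTL-style language. The notation is illustrative rather than quoted from Rao and Georgeff (operator names and exact syntax vary between presentations); it simply shows how belief, desire and intention modalities combine with CTL* path quantifiers and temporal operators:

```latex
% Schematic BDICTL-style formula (illustrative notation only):
% agent i believes it is always safe on every future path, desires that on some
% path it eventually becomes rich, and intends to eventually reach the party.
\mathrm{Bel}_i(\mathsf{A}\,\Box\,\mathit{safe})
  \;\wedge\; \mathrm{Des}_i(\mathsf{E}\,\Diamond\,\mathit{rich})
  \;\wedge\; \mathrm{Int}_i(\mathsf{A}\,\Diamond\,\mathit{at\_party})
```

Here A and E are the CTL* "on all paths" and "on some path" quantifiers, and the box and diamond are the usual "always" and "eventually" temporal operators.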
More recently, Michael Wooldridge has extended BDICTL to define LORA (the Logic of Rational Agents), by incorporating an action logic. In principle, LORA allows reasoning not only about individual agents, but also about communication and other interaction in a multi-agent system.

The BDI software model is closely associated with intelligent agents, but does not, of itself, ensure all the characteristics associated with such agents. For example, it allows agents to have private beliefs, but does not force them to be private. It also has nothing to say about agent communication. Ultimately, the BDI software model is an attempt to solve a problem that has more to do with plans and planning (the choice and execution thereof) than it has to do with the programming of intelligent agents.
BDI Agents
A BDI agent is a particular type of bounded rational software agent, imbued with particular "mental attitudes", viz: Beliefs, Desires and Intentions (BDI).
Wooldridge lists four characteristics of intelligent agents which naturally fit the purpose and design of the BDI model (a minimal interface sketch follows the list):
* Situated - they are embedded in their environment
* Goal directed - they have goals that they try to achieve
* Reactive - they react to changes in their environment
* Social - they can communicate with other agents (including humans)
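As a rough, hypothetical illustration (the interface and method names below are invented here and do not come from Wooldridge or any BDI platform), these four characteristics can be read as the minimal operations such an agent exposes:

```java
// Hypothetical interface (names invented for this sketch) mapping the four
// characteristics onto operations a BDI-style agent typically exposes.
public interface SituatedAgent {
    void perceive(String percept);                // situated + reactive: sense changes in the environment
    void adoptGoal(String goal);                  // goal directed: take on an objective to achieve
    void act();                                   // goal directed: make progress on current intentions
    void tell(String otherAgent, String message); // social: communicate with other agents (or humans)
}
```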
Beliefs
Beliefs represent the informational state of the agent - in other words, its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Typically, this information will be stored in a database (sometimes called a "belief base"), although that is an implementation decision.

Using the term "belief" - rather than "knowledge" - recognises that what an agent believes may not necessarily be true (and in fact may change in the future).
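To make the belief-base idea concrete, here is a minimal sketch. The class, method names and rule representation are invented for this example and are not the API of any system listed below; beliefs are plain facts, and rules are applied by naive forward chaining whenever a new fact arrives:

```java
import java.util.*;

// Illustrative belief base: beliefs are string facts, and a Rule derives its
// conclusion once all of its premises are believed (naive forward chaining).
public class BeliefBase {
    public record Rule(Set<String> premises, String conclusion) {}

    private final Set<String> beliefs = new HashSet<>();
    private final List<Rule> rules = new ArrayList<>();

    public void addRule(Rule rule) { rules.add(rule); }

    public boolean believes(String fact) { return beliefs.contains(fact); }

    // Adding a belief re-runs the rules until no new beliefs can be derived.
    public void add(String fact) {
        beliefs.add(fact);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule rule : rules) {
                if (beliefs.containsAll(rule.premises()) && beliefs.add(rule.conclusion())) {
                    changed = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        BeliefBase beliefBase = new BeliefBase();
        beliefBase.addRule(new Rule(Set.of("raining"), "ground_wet"));
        beliefBase.add("raining");
        System.out.println(beliefBase.believes("ground_wet")); // prints true
    }
}
```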
Desires
Desires (or goals) represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: "find the best price", "go to the party" or "become rich".
Usage of the term "goals" adds the further restriction that the set of goals must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home - even though they could both be desirable.
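The consistency restriction can be pictured with a small, hedged sketch (all identifiers are invented here): conflicts between goals are declared by the designer, and a goal is adopted only if it does not clash with one already held.

```java
import java.util.*;

// Illustrative goal store: a goal is only adopted if it is consistent with
// (i.e. not declared to conflict with) the goals already held.
public class GoalSet {
    private final Set<String> goals = new HashSet<>();
    private final Map<String, Set<String>> conflicts = new HashMap<>();

    public void declareConflict(String a, String b) {
        conflicts.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        conflicts.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    public boolean adopt(String goal) {
        Set<String> clashes = conflicts.getOrDefault(goal, Set.of());
        for (String held : goals) {
            if (clashes.contains(held)) return false; // would be inconsistent
        }
        return goals.add(goal);
    }

    public static void main(String[] args) {
        GoalSet desires = new GoalSet();
        desires.declareConflict("go_to_party", "stay_home");
        System.out.println(desires.adopt("go_to_party")); // true
        System.out.println(desires.adopt("stay_home"));   // false - conflicts with an adopted goal
    }
}
```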
Intentions
Intentions represent the deliberative state of the agent: what the agent "has chosen" to do. Intentions are desires to which the agent has to some extent committed (in implemented systems, this means the agent has begun executing a plan).
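In implementation terms, this commitment can be represented as little more than a desire paired with the plan chosen for it and an execution status, roughly as in the following sketch (names invented for illustration):

```java
// Illustrative intention: a desire the agent has committed to, bound to the
// plan selected from the plan library and tracked through execution.
public class Intention {
    public enum Status { ADOPTED, EXECUTING, ACHIEVED, DROPPED }

    private final String desire;   // the goal being pursued
    private final String planName; // the plan chosen to achieve it
    private Status status = Status.ADOPTED;

    public Intention(String desire, String planName) {
        this.desire = desire;
        this.planName = planName;
    }

    public void startExecuting() { status = Status.EXECUTING; }

    public Status status() { return status; }

    @Override
    public String toString() {
        return "intend(" + desire + ") via " + planName + " [" + status + "]";
    }
}
```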
Plans
Plans are sequences of actions that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.
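The hierarchical structure can be sketched as follows (types and names are invented for this example): a plan is a sequence of steps, and each step is either a primitive action or a subsidiary plan, so the overarching plan remains in effect while its subplans run.

```java
import java.util.*;

// Illustrative hierarchical plan: executing a plan walks its steps, descending
// into subsidiary plans where they occur.
public class Plan {
    public interface Step {}
    public record Action(String name) implements Step {}
    public record SubPlan(Plan plan) implements Step {}

    private final String goal;
    private final List<Step> steps;

    public Plan(String goal, List<Step> steps) {
        this.goal = goal;
        this.steps = List.copyOf(steps);
    }

    public void execute() {
        System.out.println("executing plan for: " + goal);
        for (Step step : steps) {
            if (step instanceof Action action) {
                System.out.println("  do: " + action.name());
            } else if (step instanceof SubPlan sub) {
                sub.plan().execute(); // the subsidiary plan runs while this plan stays in effect
            }
        }
    }

    public static void main(String[] args) {
        Plan findKeys = new Plan("have_keys", List.of(new Action("search_desk")));
        Plan goForDrive = new Plan("go_for_drive",
                List.of(new SubPlan(findKeys), new Action("drive_car")));
        goForDrive.execute();
    }
}
```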
BDI Agent Implementations
'Pure' BDI
* PRS
* IRMA (not implemented but can be considered as PRS with non-reconsideration)
* [http://www.marcush.net/IRS/irs_downloads.html UM-PRS]
* dMARS
* AgentSpeak(L)
* [http://www.marcush.net/IRS/irs_downloads.html JAM]
* [http://www.agent-software.com.au/ JACK]
* [http://vsis-www.informatik.uni-hamburg.de/projects/jadex/ JADEX]
* [http://jason.sourceforge.net/ Jason]
* [http://www.ai.sri.com/~spark/ SPARK]
* 3APL
* [http://www.cogniteam.com TAO (Think As One)]
* [http://www.whitestein.com/ls-ts LS/TS - Living Systems Technology Suite] [Rimassa, G., Greenwood, D. and Kernland, M. E. (2006). [http://www.whitestein.com/library/WhitesteinTechnologies_Paper_ICAS2006-gri.pdf The Living Systems Technology Suite: An Autonomous Middleware for Autonomic Computing]. International Conference on Autonomic and Autonomous Systems (ICAS).]

Extensions and Hybrid Systems
* JACK Teams
* [http://www.cogniteam.com/ TAO (Think-As-One)]
* [http://www.whitestein.com/ls-ts LS/TS - Living Systems Technology Suite] [Rimassa, G., Greenwood, D. and Kernland, M. E. (2006). [http://www.whitestein.com/library/WhitesteinTechnologies_Paper_ICAS2006-gri.pdf The Living Systems Technology Suite: An Autonomous Middleware for Autonomic Computing]. International Conference on Autonomic and Autonomous Systems (ICAS).]

BDI Agent Architectures
Strictly speaking, there is no single software architecture that represents BDI. A diagram from Georgeff and Ingrand ([http://www.laas.fr/~felix/publis-pdf/ijcai89.pdf Decision-Making in an Embedded Reasoning System], IJCAI, 1989) shows a very generic model, which does not address any issues of design or implementation. In fact, Wooldridge states that implemented systems since PRS have followed the PRS model, and so there should be a closer relationship between them than that diagram describes. Indeed, the core BDI engines in dMARS (written in C++) and JACK (written in Java) are virtually identical in design.
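The control cycle common to PRS-style systems can be sketched in a few lines. The following is illustrative only: the class and method names are invented here and do not correspond to the API of PRS, dMARS, JACK, or any other system named above. Each cycle updates beliefs from incoming events, selects a relevant plan from the plan library (deliberation), and then executes a single step of an active intention.

```java
import java.util.*;

// Illustrative, very generic BDI interpreter loop: update beliefs, deliberate
// by selecting a plan for a pending event, then execute one intention step.
public class BdiInterpreter {
    public interface Plan {
        boolean relevantTo(String event, Set<String> beliefs);
        void step();
        boolean finished();
    }

    private final Set<String> beliefs = new HashSet<>();
    private final Deque<String> events = new ArrayDeque<>();   // percepts and new goals
    private final List<Plan> planLibrary = new ArrayList<>();
    private final Deque<Plan> intentions = new ArrayDeque<>(); // currently active plans

    public void post(String event) { events.add(event); }

    public void addPlan(Plan plan) { planLibrary.add(plan); }

    public void cycle() {
        // 1. Belief update: here each event is simply recorded as a new belief.
        if (!events.isEmpty()) {
            String event = events.poll();
            beliefs.add(event);
            // 2. Deliberation: adopt the first relevant plan as an intention.
            for (Plan plan : planLibrary) {
                if (plan.relevantTo(event, beliefs)) {
                    intentions.push(plan);
                    break;
                }
            }
        }
        // 3. Execution: advance the most recently adopted intention by one step.
        Plan current = intentions.peek();
        if (current != null) {
            current.step();
            if (current.finished()) intentions.pop();
        }
    }
}
```

Because each cycle performs only one plan selection and one execution step, how often cycle() is run relative to the environment determines the balance between deliberating and acting described earlier.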
See also
* Artificial intelligence
* Action selection
* Software agent
* Intelligent agent
* Reasoning
* Belief revision
External links
* [http://citeseer.ist.psu.edu/dinverno97formal.html A Formal Specification of dMARS] - Mark d'Inverno, David Kinny, Michael Luck, Michael Wooldridge