Belief-Desire-Intention software model

The Belief-Desire-Intention (BDI) software model (usually referred to simply, but ambiguously, as BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's "beliefs", "desires" and "intentions", it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
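The interleaving of deliberation and execution described above is usually realised as an interpreter loop. The following is a minimal sketch, not the design of any particular BDI platform; names such as `Plan`, `trigger` and `context` are illustrative, though they echo the structure of PRS-style plans:

```python
# Minimal sketch of a BDI-style interpreter loop: deliberation (choosing
# a plan from the library) is interleaved with execution (running one
# step of an adopted plan per cycle). Illustrative only; real BDI
# systems (PRS, dMARS, JACK, ...) are far richer.

class Plan:
    def __init__(self, trigger, context, steps):
        self.trigger = trigger    # event that makes the plan relevant
        self.context = context    # beliefs -> bool: is it applicable now?
        self.steps = steps        # list of actions (callables on beliefs)

def bdi_loop(beliefs, plan_library, events, cycles):
    intentions = []               # stack of [plan, next-step-index]
    for _ in range(cycles):
        if events:                # perceive at most one event per cycle
            event = events.pop(0)
            beliefs.add(event)    # (naive) belief revision
            for plan in plan_library:          # deliberation
                if plan.trigger == event and plan.context(beliefs):
                    intentions.append([plan, 0])
                    break
        if intentions:            # execution: one step per cycle
            plan, i = intentions[-1]
            if i < len(plan.steps):
                plan.steps[i](beliefs)
                intentions[-1][1] += 1
            else:
                intentions.pop()  # plan complete: drop the intention
    return beliefs

# Example: the event "thirsty" triggers a plan that fills a cup, then drinks.
plans = [Plan("thirsty",
              lambda b: "cup" in b,
              [lambda b: b.add("filled"), lambda b: b.add("drank")])]
result = bdi_loop({"cup"}, plans, ["thirsty"], cycles=5)
```

Note that the loop never constructs a plan: it only selects from `plan_library` and executes, which is exactly the division of labour the BDI model prescribes.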

In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman. For Bratman, intention and desire are both pro-attitudes (mental attitudes concerned with action), but intention is distinguished as a conduct-controlling pro-attitude. He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which the agent is already committed. The BDI software model only partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed.

An important aspect of the BDI software model (in terms of its research relevance) is the existence of logical models through which it is possible to define and reason about BDI agents. Research in this area has led, for example, to the axiomatization of some BDI implementations, as well as to formal logical descriptions such as Anand Rao and Michael Georgeff's BDICTL. The latter combines a multiple-modal logic (with modalities representing beliefs, desires and intentions) with the temporal logic CTL*. More recently, Michael Wooldridge has extended BDICTL to define LORA (the Logic Of Rational Agents), by incorporating an action logic. In principle, LORA allows reasoning not only about individual agents, but also about communication and other interaction in a multi-agent system.

The BDI software model is closely associated with intelligent agents, but does not, of itself, ensure all the characteristics associated with such agents. For example, it allows agents to have private beliefs, but does not force them to be private. It also has nothing to say about agent communication. Ultimately, the BDI software model is an attempt to solve a problem that has more to do with plans and planning (the choice and execution thereof) than it has to do with the programming of intelligent agents.

BDI Agents

A BDI agent is a particular type of bounded rational software agent, imbued with particular mental attitudes, namely beliefs, desires and intentions (BDI).

Wooldridge lists four characteristics of intelligent agents which naturally fit the purpose and design of the BDI model:
* Situated - they are embedded in their environment
* Goal directed - they have goals that they try to achieve
* Reactive - they react to changes in their environment
* Social - they can communicate with other agents (including humans)

Beliefs

Beliefs represent the informational state of the agent - in other words its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Typically, this information will be stored in a database (sometimes called a "belief base"), although that is an implementation decision.

Using the term "belief" - rather than "knowledge" - recognises that what an agent believes may not necessarily be true (and in fact may change in the future).
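A belief base with inference rules can be sketched as forward chaining to a fixed point. The rule representation below (a set of premises plus a conclusion) is an assumption for illustration, not any particular platform's syntax:

```python
# Sketch of a belief base with forward-chaining inference rules.
# A rule is (premises, conclusion): when every premise is believed,
# the conclusion is added as a new belief. Purely illustrative.

def forward_chain(beliefs, rules):
    beliefs = set(beliefs)
    changed = True
    while changed:                       # repeat until a fixed point
        changed = False
        for premises, conclusion in rules:
            if conclusion not in beliefs and premises <= beliefs:
                beliefs.add(conclusion)  # derive a new belief
                changed = True
    return beliefs

rules = [({"raining"}, "ground_wet"),
         ({"ground_wet", "no_umbrella"}, "will_get_wet")]
derived = forward_chain({"raining", "no_umbrella"}, rules)
# derived now also contains "ground_wet" and "will_get_wet"
```

Whether beliefs live in a set, a relational database or a logic-programming engine is, as noted above, an implementation decision.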

Desires

Desires (or goals) represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: "find the best price", "go to the party" or "become rich".

Usage of the term "goals" adds the further restriction that the set of goals must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home - even though they could both be desirable.

Intentions

Intentions represent the deliberative state of the agent: what the agent "has chosen" to do. Intentions are desires to which the agent has to some extent committed (in implemented systems, this means the agent has begun executing a plan).

Plans

Plans are sequences of actions that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.
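The hierarchical structure of plans, where a step may itself be a sub-plan, can be sketched as a recursive expansion. The representation (a plan as a list whose elements are either primitive actions or nested plans) is illustrative only:

```python
# Sketch of hierarchical plans: a step is either a primitive action
# (a string) or a sub-plan (a nested list of steps). The enclosing
# plan "remains in effect" while its sub-plans run, which is the kind
# of temporal persistence discussed above. Names are illustrative.

def execute(plan, trace):
    for step in plan:
        if isinstance(step, list):   # a sub-plan: expand it in place
            execute(step, trace)
        else:
            trace.append(step)       # a primitive action: perform it

find_keys = ["check_pockets", "check_table"]
go_for_drive = [find_keys, "unlock_car", "drive"]
trace = []
execute(go_for_drive, trace)
# trace is now ["check_pockets", "check_table", "unlock_car", "drive"]
```

In a real system the sub-plan would be chosen from the plan library at the moment the step is reached, which is how details are "filled in as they progress".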

BDI Agent Implementations

'Pure' BDI

* IRMA (not implemented, but can be considered as PRS with non-reconsideration)
* UM-PRS
* AgentSpeak(L)
* JAM
* JACK
* JADEX
* Jason
* SPARK
* 3APL
* TAO (Think As One)
* LS/TS - Living Systems Technology Suite (Rimassa, G., Greenwood, D. and Kernland, M. E. (2006). "The Living Systems Technology Suite: An Autonomous Middleware for Autonomic Computing". International Conference on Autonomic and Autonomous Systems (ICAS))

Extensions and Hybrid Systems

* JACK Teams
* TAO (Think-As-One)
* LS/TS - Living Systems Technology Suite

BDI Agent Architectures

Strictly speaking, there is no single software architecture that represents BDI. A very generic model is given by Georgeff and Ingrand ("Decision-Making in an Embedded Reasoning System", 11th IJCAI, 1989), but it does not address any issues of design or implementation. In fact, Wooldridge states that implemented systems since PRS have followed the PRS model, so actual implementations are more closely related to one another than such a generic model suggests. Indeed, the core BDI engines in dMARS (written in C++) and JACK (written in Java) are virtually identical in design.

See also

* Artificial Intelligence
* Action selection
* Software agent
* Intelligent agent
* Reasoning
* Belief revision


External links

* A Formal Specification of dMARS - Mark d'Inverno, David Kinny, Michael Luck, Michael Wooldridge

Wikimedia Foundation. 2010.
