BDI Architecture: Building Agents That Reason About Mental States
What is BDI Architecture?
The Belief-Desire-Intention (BDI) architecture is a software model for building intelligent agents that mimics human practical reasoning. Unlike purely reactive systems, BDI agents maintain explicit representations of:
- Beliefs: What the agent knows about the world (may be incomplete or incorrect)
- Desires: Goals or objectives the agent wants to achieve
- Intentions: Plans the agent has committed to executing
In simple terms: Beliefs represent your map of the world, Desires are where you want to go, and Intentions are the route you’ve chosen and committed to following.
For practitioners: BDI provides a principled way to build agents that handle dynamic environments, pursue multiple goals, and make rational decisions about what to do next—moving beyond simple if-then rules to genuine deliberation.
Historical & Theoretical Context
Origins
BDI architecture emerged from philosophical work on practical reasoning by Michael Bratman in the 1980s, who studied how humans commit to plans and balance multiple competing goals. Computer scientists Anand Rao and Michael Georgeff formalized this into a computational model in the early 1990s.
Key insight from philosophy: Humans don’t constantly re-deliberate every action. We form intentions (commitments to plans), which filter our perception and focus our effort. This balance between deliberation and action is critical for resource-bounded agents.
Relation to AI Principles
BDI belongs to the “symbolic AI” tradition, representing knowledge explicitly rather than learning implicit patterns. It connects to:
- Planning systems: Intentions are essentially partially-instantiated plans
- Knowledge representation: Beliefs are structured knowledge about the world
- Multi-agent systems: BDI agents can reason about other agents’ beliefs and intentions
- Cognitive architectures: BDI models human-like deliberation processes
How BDI Agents Work: The Control Loop
The Deliberation Cycle
1. Observe environment → Update Beliefs
2. Generate Options (potential desires) based on Beliefs and Events
3. Filter Options → Select Desires (what to pursue)
4. Plan → Generate Intentions (how to achieve desires)
5. Execute → Perform next action from Intention
6. Repeat
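As a minimal sketch of this cycle in Python (every name here is a placeholder you would implement for your own agent and environment, not an existing API):

# Minimal BDI control-loop sketch; all helpers and the environment object are placeholders.
def bdi_control_loop(agent, environment):
    while environment.running():
        percepts = environment.perceive()                    # 1. observe environment
        agent.beliefs = update_beliefs(agent.beliefs, percepts)
        options = generate_options(agent.beliefs, percepts)  # 2. generate options
        agent.desires = filter_options(options, agent.beliefs, agent.intentions)  # 3. select desires
        agent.intentions = plan_for(agent.desires, agent.beliefs)                 # 4. commit to plans
        action = next_action(agent.intentions)               # 5. execute one step
        environment.execute(action)                          # 6. repeat

The full Python implementation later in this article fills in each of these steps.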
Key Algorithms
Belief Revision
When new information arrives that contradicts existing beliefs, the agent must update its world model coherently:
function updateBeliefs(currentBeliefs, newPercepts):
    for each percept in newPercepts:
        if percept contradicts currentBeliefs:
            # Use belief revision logic (AGM postulates)
            resolveContradiction(currentBeliefs, percept)
        else:
            add percept to currentBeliefs
    # Infer logical consequences
    return deductiveClosure(currentBeliefs)
Option Generation
Based on current beliefs and events, generate possible desires:
function generateOptions(beliefs, events):
    options = []
    for each event in events:
        relevantPlans = getPlansThatHandle(event, beliefs)
        for each plan in relevantPlans:
            if checkContext(plan.context, beliefs):
                options.append(plan.goal)
    return options
Deliberation & Means-Ends Reasoning
Select which desires to pursue (deliberation) and how to achieve them (means-ends reasoning):
function deliberate(beliefs, desires, intentions):
    # Filter desires based on consistency and priorities
    validDesires = filter(desires, beliefs, intentions)
    selectedDesires = prioritize(validDesires)
    newIntentions = []
    for each desire in selectedDesires:
        plan = meansEndsReasoning(desire, beliefs)
        if plan is not empty:
            newIntentions.append(createIntention(desire, plan))
    return newIntentions

function meansEndsReasoning(goal, beliefs):
    # Find or construct a plan to achieve the goal
    existingPlan = planLibrary.find(goal, beliefs)
    if existingPlan:
        return instantiate(existingPlan, beliefs)
    else:
        return planFromScratch(goal, beliefs)  # Use a planner
Design Patterns in BDI Systems
The Plan Library Pattern
BDI agents typically use a plan library: a collection of pre-defined plan templates (recipes) for achieving goals in various contexts.
Plan Template:
Goal: what this plan achieves
Context: preconditions (beliefs required)
Body: sequence of actions/subgoals
Failure handling: what to do if plan fails
This resembles the Strategy pattern in software design: multiple ways to achieve the same goal, with the choice made from context.
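A minimal sketch of such a plan template as a Python dataclass (the field names mirror the template above; this is illustrative, not any specific framework's API):

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PlanTemplate:
    goal: str                         # what this plan achieves
    context: List[str]                # beliefs that must hold before adopting the plan
    body: List[str]                   # ordered actions and/or subgoals
    on_failure: Optional[str] = None  # fallback goal or recovery action, if any

# A tiny plan library keyed by goal, mirroring the pattern described above
plan_library = {
    "get_coffee": [
        PlanTemplate(goal="get_coffee",
                     context=["at_kitchen"],
                     body=["open_cupboard", "grab_mug", "pour_coffee"],
                     on_failure="buy_coffee"),
    ]
}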
Intention Structures
Intentions are typically organized as a stack or tree:
- Stack: Linear plan execution with backtracking
- Tree: Parallel pursuit of multiple goals with priorities
When a plan step fails, the agent can:
- Replan: Find an alternative plan for the same goal
- Drop intention: Abandon the goal (if no longer relevant)
- Backtrack: Try a different branch in the plan tree
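A hedged sketch of a stack-style intention with these three failure responses (the execute, replan, and still_relevant callables are placeholders for whatever mechanisms the agent actually uses):

class IntentionStack:
    """Linear intention execution with simple failure handling."""
    def __init__(self, goal, plan):
        self.goal = goal
        self.plan = list(plan)            # remaining action sequence

    def step(self, execute, beliefs, replan, still_relevant):
        if not self.plan:
            return "done"
        action = self.plan[0]
        if execute(action):               # action succeeded
            self.plan.pop(0)
            return "progress"
        if not still_relevant(self.goal, beliefs):
            return "dropped"              # drop intention: goal no longer relevant
        alternative = replan(self.goal, beliefs)
        if alternative:
            self.plan = list(alternative) # replan: new plan, same goal
            return "replanned"
        return "failed"                   # caller may backtrack in a plan tree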
Practical BDI Implementation in Python
Here’s a simplified BDI agent framework:
from typing import Set, List, Optional
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes Belief hashable, so it can live in a set
class Belief:
    predicate: str
    args: tuple

@dataclass
class Desire:
    goal: str
    priority: int

@dataclass
class Intention:
    desire: Desire
    plan: List[str]  # Action sequence
    current_step: int = 0
class BDIAgent:
    def __init__(self):
        self.beliefs: Set[Belief] = set()
        self.desires: List[Desire] = []
        self.intentions: List[Intention] = []
        self.plan_library = {}  # goal -> list of plans

    def perceive(self, percepts: List[Belief]):
        """Update beliefs based on observations"""
        # Belief revision: add new percepts, remove contradictions
        for percept in percepts:
            # Remove contradicting beliefs
            self.beliefs = {b for b in self.beliefs
                            if not self._contradicts(b, percept)}
            self.beliefs.add(percept)

    def deliberate(self):
        """Select desires to pursue"""
        # Keep current desires in play and add options from the current context
        options = self.desires + self._generate_options()
        # Filter based on consistency and resources
        valid_desires = self._filter_desires(options)
        # Prioritize (e.g., by utility, urgency)
        self.desires = sorted(valid_desires,
                              key=lambda d: d.priority,
                              reverse=True)

    def plan(self):
        """Create intentions from desires"""
        for desire in self.desires:
            if not self._has_intention_for(desire):
                plan = self._find_plan(desire)
                if plan:
                    self.intentions.append(
                        Intention(desire=desire, plan=plan)
                    )

    def execute(self):
        """Execute next step of current intention"""
        if not self.intentions:
            return None
        # Execute highest priority intention
        intention = self.intentions[0]
        if intention.current_step >= len(intention.plan):
            # Plan completed: drop the intention and its desire
            self.intentions.pop(0)
            if intention.desire in self.desires:
                self.desires.remove(intention.desire)
            return None
        action = intention.plan[intention.current_step]
        intention.current_step += 1
        return action

    def run_cycle(self, percepts: List[Belief]):
        """One BDI reasoning cycle"""
        self.perceive(percepts)
        self.deliberate()
        self.plan()
        action = self.execute()
        return action
    def _find_plan(self, desire: Desire) -> Optional[List[str]]:
        """Find a plan from library that achieves the goal"""
        plans = self.plan_library.get(desire.goal, [])
        for plan_template in plans:
            if self._check_context(plan_template['context']):
                return plan_template['actions']
        return None

    def _check_context(self, context: List[Belief]) -> bool:
        """Check if all context conditions are in beliefs"""
        return all(c in self.beliefs for c in context)

    def _generate_options(self) -> List[Desire]:
        # Simplified: could use rules or event handlers
        return []

    def _filter_desires(self, options: List[Desire]) -> List[Desire]:
        # Filter based on consistency, resources, etc.
        return options

    def _has_intention_for(self, desire: Desire) -> bool:
        return any(i.desire.goal == desire.goal
                   for i in self.intentions)

    def _contradicts(self, b1: Belief, b2: Belief) -> bool:
        # Simplified: check if same predicate with different truth value
        return b1.predicate == f"not_{b2.predicate}" or \
               b2.predicate == f"not_{b1.predicate}"
# Example usage
agent = BDIAgent()

# Define a plan library
agent.plan_library = {
    "get_coffee": [
        {
            'context': [Belief("location", ("kitchen",))],
            'actions': ["open_cupboard", "grab_mug", "pour_coffee"]
        },
        {
            'context': [Belief("location", ("cafe",))],
            'actions': ["wait_in_line", "order_coffee", "pay"]
        }
    ]
}

# Run a cycle
percepts = [Belief("location", ("kitchen",))]
agent.desires = [Desire(goal="get_coffee", priority=10)]
action = agent.run_cycle(percepts)
print(f"Action: {action}")  # Output: Action: open_cupboard
Integration with Modern LLM Frameworks
BDI concepts map naturally onto LLM-based agent frameworks:
LangGraph Integration
from langgraph.graph import StateGraph
from typing import TypedDict

class AgentState(TypedDict):
    beliefs: dict
    desires: list
    intentions: list
    current_action: str

# `llm` and `get_environment_state()` are placeholders for your model client
# and environment API; substitute whatever you actually use.

def perceive_node(state: AgentState) -> AgentState:
    # LLM extracts beliefs from observations
    observations = get_environment_state()
    new_beliefs = llm.extract_beliefs(observations)
    state['beliefs'].update(new_beliefs)
    return state

def deliberate_node(state: AgentState) -> AgentState:
    # LLM generates and prioritizes desires
    prompt = f"Given beliefs: {state['beliefs']}, what goals should we pursue?"
    desires = llm.generate_desires(prompt)
    state['desires'] = desires
    return state

def plan_node(state: AgentState) -> AgentState:
    # LLM creates plans for desires
    for desire in state['desires']:
        plan = llm.create_plan(desire, state['beliefs'])
        state['intentions'].append({'desire': desire, 'plan': plan})
    return state

def execute_node(state: AgentState) -> AgentState:
    # Execute next action from intention
    if state['intentions']:
        intention = state['intentions'][0]
        action = intention['plan'].pop(0)
        state['current_action'] = action
        if not intention['plan']:
            state['intentions'].pop(0)
    return state

# Build graph
workflow = StateGraph(AgentState)
workflow.add_node("perceive", perceive_node)
workflow.add_node("deliberate", deliberate_node)
workflow.add_node("plan", plan_node)
workflow.add_node("execute", execute_node)

workflow.set_entry_point("perceive")
workflow.add_edge("perceive", "deliberate")
workflow.add_edge("deliberate", "plan")
workflow.add_edge("plan", "execute")
workflow.add_edge("execute", "perceive")  # Loop back

app = workflow.compile()
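A brief usage sketch: invoke the compiled graph with an initial state and a recursion limit. Because the execute node loops back to perceive with no terminating edge, the graph cycles until LangGraph's recursion limit stops it, so the limit acts as a crude step budget here.

initial_state = {"beliefs": {}, "desires": [], "intentions": [], "current_action": ""}
try:
    final_state = app.invoke(initial_state, config={"recursion_limit": 10})
except Exception as err:  # LangGraph raises a recursion error once the limit is reached
    print(f"Stopped after hitting the step budget: {err}")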
Comparisons & Tradeoffs
BDI vs. Reactive Agents
Reactive: Stimulus → Response (no planning, no memory)
- Pros: Fast, simple, robust in predictable environments
- Cons: Can’t handle complex multi-step tasks, no learning
BDI: Observe → Deliberate → Plan → Act
- Pros: Handles complex goals, adapts plans, reasons about future
- Cons: Computationally expensive, requires knowledge engineering
BDI vs. Reinforcement Learning
RL: Learn policy from trial and error
- Pros: No need to hand-code plans, adapts to new environments
- Cons: Requires lots of training data, black-box decisions
BDI: Explicit plans and reasoning
- Pros: Explainable decisions, works with few examples, leverages human knowledge
- Cons: Requires domain expertise to build plan library
Hybrid approach: Use BDI for high-level reasoning and RL for low-level skills.
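One way to picture that hybrid, as a non-authoritative sketch: the BDI layer commits to a symbolic plan, and each abstract action in that plan is delegated to a learned low-level policy. The rl_policies dict below is a stand-in for trained controllers.

# Hypothetical hybrid executor: BDI decides *what* to do, RL decides *how*.
rl_policies = {
    "navigate_to_kitchen": lambda obs: "turn_left",   # placeholders for trained policies
    "grasp_mug": lambda obs: "close_gripper",
}

def execute_intention_step(intention, observation):
    abstract_action = intention.plan[intention.current_step]
    policy = rl_policies.get(abstract_action)
    if policy is None:
        return abstract_action       # primitive action: execute directly
    return policy(observation)       # learned skill produces the low-level command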
Limitations of BDI
- Computational complexity: Belief revision and planning can be expensive
- Knowledge engineering: Building plan libraries requires significant effort
- Uncertainty handling: Classical BDI struggles with probabilistic beliefs
- Learning: Traditional BDI doesn’t learn from experience (though extensions exist)
Latest Developments & Research
BDI with LLMs (2024-2025)
Recent work integrates LLMs into BDI agents:
- Belief extraction: LLMs parse natural language observations into structured beliefs
- Plan generation: Instead of fixed plan libraries, LLMs generate plans on-demand
- Explanation: LLMs verbalize the agent’s beliefs and intentions for interpretability
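A sketch of the on-demand plan-generation idea, assuming only a generic llm_complete(prompt) -> str helper rather than any particular API: the agent tries its plan library first and falls back to the LLM when no recipe applies.

import json

def find_or_generate_plan(agent, desire, llm_complete):
    """Try the plan library first; otherwise ask an LLM (hypothetical helper) for a plan."""
    plan = agent._find_plan(desire)  # reuse the BDIAgent lookup from earlier
    if plan is not None:
        return plan
    prompt = (f"Beliefs: {[b.predicate for b in agent.beliefs]}\n"
              f"Goal: {desire.goal}\n"
              "Return a JSON list of action names that achieves the goal.")
    try:
        return json.loads(llm_complete(prompt))  # e.g. ["walk_to_cafe", "order_coffee"]
    except (ValueError, TypeError):
        return None  # keep the agent robust to malformed LLM output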
Paper: “LLM-BDI: Large Language Models for Adaptive Agent Reasoning” (2024)
- Uses GPT-4 for dynamic plan generation while maintaining BDI structure
- Shows improved adaptability compared to fixed plan libraries
- Maintains explainability through explicit belief/desire/intention representation
Multi-Agent BDI Systems
BDI is particularly powerful in multi-agent systems where agents must reason about each other:
- Shared beliefs: Agents maintain common ground
- Intention recognition: Inferring other agents’ goals from observed actions
- Cooperative planning: Coordinating plans to achieve shared goals
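A minimal sketch of intention recognition against a plan library, using the same goal-to-plans dict format as the earlier example (the prefix-matching score is purely illustrative; real systems use probabilistic plan recognition):

def recognize_intention(observed_actions, plan_library):
    """Score each goal by how much of one of its plans matches the observed action prefix."""
    best_goal, best_score = None, 0.0
    for goal, plans in plan_library.items():
        for template in plans:
            actions = template['actions']
            matched = sum(1 for a, b in zip(observed_actions, actions) if a == b)
            score = matched / len(actions) if actions else 0.0
            if score > best_score:
                best_goal, best_score = goal, score
    return best_goal, best_score

# Example: observing "wait_in_line", "order_coffee" suggests the other agent intends get_coffee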
Recent benchmark: AgentChangeBench (2025) evaluates how well BDI-style agents handle changing goals in conversational contexts.
Open Problems
- Efficient belief revision at scale: How to update large belief bases quickly?
- Plan learning: Can agents learn new plan templates from experience?
- Emotion and personality: How to integrate affective factors into deliberation?
- BDI for embodied agents: Adapting BDI for robots with continuous perception/action
Cross-Disciplinary Insight: BDI and Cognitive Science
BDI architecture draws heavily from cognitive science models of human practical reasoning:
Kahneman’s System 1 vs System 2:
- Reactive layers (stimulus-response) ≈ System 1 (fast, automatic)
- BDI deliberation ≈ System 2 (slow, deliberate, goal-directed)
Working memory limitations: Humans maintain a small number of active intentions, consistent with BDI’s focus on committed intentions rather than continuously re-evaluating all possibilities.
Cognitive load: BDI’s commitment to intentions reduces cognitive load—once you’ve decided on a plan, you don’t reconsider every alternative at each step. This mirrors human behavior and suggests why BDI is effective for resource-bounded agents.
Daily Challenge: Build a Simple BDI Shopping Agent
Task: Create a BDI agent that manages a shopping list with changing priorities.
Requirements:
- Beliefs: Current location, items at home, store inventory
- Desires: Buy milk, buy bread, minimize cost
- Intentions: Plans to visit stores and purchase items
Starter code structure:
class ShoppingAgent(BDIAgent):
    def __init__(self):
        super().__init__()
        self.plan_library = {
            "buy_item": [
                # Add plans here
            ],
            "minimize_cost": [
                # Add plans here
            ]
        }

    # Implement custom option generation
    def _generate_options(self):
        # Generate desires based on beliefs
        # e.g., if "low_milk" in beliefs, add desire "buy_milk"
        pass
# Test scenarios:
# 1. Agent at home, believes milk is low → should form intention to buy milk
# 2. Agent at store, sees milk is expensive → might decide to skip or go to another store
# 3. Agent acquires new belief that bread is also needed → should add to intentions
Extension: Add replanning when plans fail (e.g., item out of stock).
References & Further Reading
Foundational Papers
- Rao, A. S., & Georgeff, M. P. (1995). “BDI Agents: From Theory to Practice.” Proceedings of the First International Conference on Multi-Agent Systems.
- Bratman, M. (1987). “Intention, Plans, and Practical Reason.” Harvard University Press.
Modern Implementations
- Jason: A Java-based BDI agent programming language - http://jason.sourceforge.net/
- JACK Intelligent Agents: Commercial BDI platform - https://www.aosgrp.com/products/jack/
Recent Research
- “Large Language Models Meet BDI Agents” (2024) - Explores LLM integration with BDI frameworks
- “Probabilistic BDI Agents” - Extends BDI to handle uncertainty with probability distributions
Code Resources
- AgentSpeak(Python): Python implementation of BDI language - https://github.com/niklasf/python-agentspeak
- BSPL (Blindingly Simple Protocol Language): Protocol modeling for multi-agent systems