The Agent Spectrum: From Simple Reflexes to Belief-Desire-Intention

Welcome to our series on mastering AI agent programming. To build complex, intelligent agents, we first need to understand their fundamental blueprints. An agent’s architecture is its internal structure—the design that dictates how it perceives the world, makes decisions, and takes action.

This article explores the spectrum of agent architectures, from the simplest reactive machines to sophisticated models of practical reasoning. Understanding this spectrum is key to choosing the right design for your task, whether you’re building a simple chatbot, a game AI, or a complex, autonomous system.

1. Concept Introduction: The Four Architectures

At its core, an AI agent is anything that can perceive its environment through sensors and act upon that environment through actuators. The complexity arises from the “brain” that connects perception to action. We can classify agent architectures into four main types, each building upon the last.
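
Before walking through the types, here is the bare contract every one of them implements, as a minimal Python sketch (the interface name and shape are illustrative, not from any particular library):

from abc import ABC, abstractmethod

class Agent(ABC):
    """The minimal agent contract: a percept goes in, an action comes out."""

    @abstractmethod
    def perceive_and_act(self, percept):
        ...

Everything that follows is a different strategy for implementing that single method.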

The four classic architectures, in increasing order of sophistication, are:

  1. Simple Reflex: acts purely on the current percept via condition-action rules, with no memory.
  2. Model-Based Reflex: maintains an internal model of the world to handle aspects of the environment it cannot currently perceive.
  3. Goal-Based: chooses actions by reasoning about how they advance an explicit goal, typically via search or planning.
  4. Utility-Based: weighs competing outcomes with a utility function and chooses the action with the highest expected value.

Finally, we arrive at a specific, powerful architecture often used for autonomous agents: the Belief-Desire-Intention (BDI) model.

2. Historical & Theoretical Context

The idea of agent architectures dates back to the early days of AI and cybernetics. The progression from simple reflexes to goal-oriented behavior mirrors the evolution of AI itself.

The BDI model, however, has a more specific origin. It was developed in the 1980s by Michael Bratman, a philosopher studying human practical reasoning. He argued that intentions are a critical component of planning. An intention is more than just a goal; it’s a commitment that guides future actions and persists over time. AI researchers like Anand Rao and Michael Georgeff formalized this into the BDI software model, creating a blueprint for building rational agents.

3. Algorithms & Pseudocode: The BDI Loop

A BDI agent operates in a continuous reasoning cycle. This loop is what allows the agent to react to new information while still working towards its long-term goals.

function BDI_Agent_Loop(agent):
  while True:
    // 1. Update Beliefs: fold new percepts into the agent's world model
    percepts = agent.sensors.get_percepts()
    agent.beliefs.update(percepts)

    // 2. Generate Options (Desires): what could the agent pursue right now?
    options = agent.planner.generate_options(agent.beliefs, agent.intentions)
    agent.desires = options

    // 3. Filter and Commit (Intentions): deliberate, then commit to a subset
    agent.intentions = agent.deliberator.filter(agent.beliefs, agent.desires, agent.intentions)

    // 4. Execute Plan: act on the committed intention one step at a time
    plan = agent.planner.find_plan(agent.beliefs, agent.intentions)
    if plan is not empty:
      agent.actuators.execute(plan.next_step())

This loop shows the agent constantly observing, updating its worldview, reconsidering its goals, committing to a course of action, and executing it one step at a time.
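
To make the cycle concrete, here is a runnable Python skeleton of the same loop. The sensor, option generator, and deliberator are toy stand-ins (all names and rules are illustrative), but the four phases map one-to-one onto the pseudocode above:

class ToyBDIAgent:
    def __init__(self):
        self.beliefs = {"location": "A", "dirty": {"A": True, "B": False}}
        self.desires = []
        self.intentions = []

    def get_percepts(self):
        # Toy sensor: report the current room and whether it is dirty.
        loc = self.beliefs["location"]
        return {"location": loc, "dirty_here": self.beliefs["dirty"][loc]}

    def generate_options(self):
        # Desires: clean every room currently believed to be dirty.
        return [f"clean_{room}" for room, is_dirty in self.beliefs["dirty"].items() if is_dirty]

    def filter_intentions(self, desires):
        # Deliberation: keep existing commitments; otherwise commit to one option.
        return self.intentions if self.intentions else desires[:1]

    def step(self):
        percepts = self.get_percepts()                           # 1. update beliefs
        self.beliefs["location"] = percepts["location"]
        self.desires = self.generate_options()                   # 2. generate options
        self.intentions = self.filter_intentions(self.desires)   # 3. filter and commit
        if self.intentions:                                      # 4. execute one step
            intention = self.intentions.pop(0)
            room = intention.removeprefix("clean_")
            self.beliefs["dirty"][room] = False  # the cleaning action succeeds
            return intention
        return "idle"

agent = ToyBDIAgent()
print(agent.step())  # clean_A
print(agent.step())  # idle

Note how the deliberator keeps existing commitments instead of re-planning from scratch each cycle; that persistence is exactly Bratman's notion of intention.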

4. Design Patterns & Architectures

These agent architectures map cleanly to common software design patterns:

  1. Simple Reflex: a stateless rule table or Strategy pattern, a pure percept-to-action lookup.
  2. Model-Based Reflex: a state machine (the State pattern), where the internal model is the tracked state.
  3. Goal-Based: a planner plus the Command pattern, with goals decomposed into a queue of executable steps.
  4. Utility/BDI: an event loop with a deliberation step, much like a scheduler choosing among competing jobs.
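
To ground the first mapping, here is a simple reflex agent reduced to its pattern essence, a stateless condition-action lookup table (the rules are hypothetical):

# Simple reflex as a design pattern: a stateless percept-to-action lookup.
RULES = {
    ("RoomA", "Dirty"): "Suck",
    ("RoomA", "Clean"): "GoToB",
    ("RoomB", "Dirty"): "Suck",
    ("RoomB", "Clean"): "GoToA",
}

def simple_reflex_agent(percept):
    # No memory, no model: the current percept alone selects the action.
    return RULES.get(percept, "NoOp")

print(simple_reflex_agent(("RoomA", "Dirty")))  # Suck

Contrast this with the model-based version in the next section, which adds exactly one thing: remembered state.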

5. Practical Application: Python Example

Let’s write a simple model-based reflex agent, VacuumAgent, for a two-room vacuum-cleaner world.

class VacuumAgent:
    def __init__(self):
        # The model of the world: a dictionary of room locations and their status
        self.model = {"RoomA": "Unknown", "RoomB": "Unknown"}
        self.location = "RoomA"

    def perceive_and_act(self, percept):
        # Percept is a tuple: (location, status), e.g., ("RoomA", "Dirty")
        current_location, room_status = percept

        # Update internal model
        self.location = current_location
        self.model[current_location] = room_status

        # Decide and act
        if room_status == "Dirty":
            return "Suck"
        elif self.location == "RoomA" and self.model["RoomB"] == "Unknown":
            return "GoToB"
        elif self.location == "RoomB" and self.model["RoomA"] == "Unknown":
            return "GoToA"
        else:
            return "NoOp"

# --- Simulation ---
agent = VacuumAgent()
# Agent starts in Room A, which is clean. It doesn't know about Room B.
action = agent.perceive_and_act(("RoomA", "Clean"))
print(f"Model: {agent.model}, Action: {action}") # Action: GoToB

# Agent moves to Room B and finds dirt.
action = agent.perceive_and_act(("RoomB", "Dirty"))
print(f"Model: {agent.model}, Action: {action}") # Action: Suck

In a modern framework like CrewAI, you can approximate a BDI structure by creating different agents for each part of the loop. One agent could be a “BeliefsUpdater” (researching and summarizing the current state), another a “Planner” (defining goals and tasks), and a third an “Executor” (carrying out the tasks). The “manager” agent in CrewAI acts as the deliberator, choosing which intention to pursue next.
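
Here is a minimal sketch of that decomposition, assuming CrewAI's Agent/Task/Crew interface (role/goal/backstory fields and a hierarchical process). Exact parameters vary across CrewAI versions, so treat this as illustrative rather than canonical:

from crewai import Agent, Task, Crew, Process

beliefs_updater = Agent(
    role="BeliefsUpdater",
    goal="Research and summarize the current state of the world",
    backstory="Keeps the crew's shared picture of the environment up to date.",
)
planner = Agent(
    role="Planner",
    goal="Turn the state summary into prioritized goals and tasks",
    backstory="Deliberates over options and commits to a course of action.",
)
executor = Agent(
    role="Executor",
    goal="Carry out the committed tasks and report results",
    backstory="Acts on the plan one step at a time.",
)

update_beliefs = Task(
    description="Summarize everything currently known about the task environment.",
    expected_output="A concise state summary.",
    agent=beliefs_updater,
)
make_plan = Task(
    description="Propose and prioritize next actions based on the state summary.",
    expected_output="An ordered list of tasks.",
    agent=planner,
)
execute_plan = Task(
    description="Execute the highest-priority task and report the outcome.",
    expected_output="An execution report.",
    agent=executor,
)

crew = Crew(
    agents=[beliefs_updater, planner, executor],
    tasks=[update_beliefs, make_plan, execute_plan],
    process=Process.hierarchical,  # the manager agent plays the deliberator role
    # Note: hierarchical mode typically also requires a manager LLM to be configured.
)
# result = crew.kickoff()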

6. Comparisons & Tradeoffs

| Architecture  | Speed     | Rationality | Complexity | Adaptability |
| ------------- | --------- | ----------- | ---------- | ------------ |
| Simple Reflex | Very Fast | Low         | Very Low   | Low          |
| Model-Based   | Fast      | Moderate    | Low        | Moderate     |
| Goal-Based    | Slow      | High        | High       | Moderate     |
| BDI / Utility | Slowest   | Very High   | Very High  | High         |

The tradeoff is clear: the “smarter” and more rational you want your agent to be, the more computationally expensive and complex it becomes. A simple reflex agent is perfect for a thermostat; a BDI agent is better suited for a Mars rover.

7. Latest Developments & Research

Modern LLM-based agents are a fascinating hybrid. The LLM itself can be seen as an implicit BDI engine: the context window holds its beliefs, the prompt supplies its desires, and the plan it generates and commits to functions as an intention.

A major area of research is making this implicit process explicit, reliable, and steerable. Projects like LangGraph let developers define the reasoning cycle as an explicit graph, giving fine-grained control over the agent's "thought" process and turning the implicit BDI-like loop into an inspectable one.
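
Here is a minimal sketch of that idea, assuming LangGraph's StateGraph API. The node functions are plain-Python stand-ins rather than LLM calls:

from typing import List, TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    beliefs: dict
    desires: List[str]
    intention: str
    done: bool

def update_beliefs(state: AgentState) -> AgentState:
    # Perception stand-in: fold a new observation into the belief store.
    state["beliefs"]["observed"] = True
    return state

def deliberate(state: AgentState) -> AgentState:
    # Commit to one desire as the current intention.
    state["intention"] = state["desires"][0] if state["desires"] else ""
    return state

def act(state: AgentState) -> AgentState:
    # Execute one step of the committed intention; finish after one pass (toy).
    state["done"] = True
    return state

graph = StateGraph(AgentState)
graph.add_node("update_beliefs", update_beliefs)
graph.add_node("deliberate", deliberate)
graph.add_node("act", act)
graph.set_entry_point("update_beliefs")
graph.add_edge("update_beliefs", "deliberate")
graph.add_edge("deliberate", "act")
# Loop back for another BDI cycle until the agent reports it is done.
graph.add_conditional_edges("act", lambda s: END if s["done"] else "update_beliefs")

app = graph.compile()
final = app.invoke({"beliefs": {}, "desires": ["make tea"], "intention": "", "done": False})
print(final["intention"])  # make tea

Each node is one phase of the reasoning cycle, and the conditional edge is the loop condition, so the entire "thought" process is visible and debuggable.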

8. Cross-Disciplinary Insight

The BDI model is a beautiful example of cross-pollination between philosophy, cognitive psychology, and computer science. It attempts to model the very human process of practical reasoning. Think about your own day: you have beliefs about the world (e.g., “it’s sunny”), desires (e.g., “I want to go for a run”), and you form intentions (“I will go for a run after this meeting”). This commitment helps you filter out distractions and organize your behavior over time.

9. Daily Challenge / Thought Exercise

Take a simple, everyday task you perform, like making a cup of tea.

Spend 15 minutes modeling this task using each of the four main architectures:

  1. Simple Reflex: What are the if-then rules? (e.g., if cup is empty, then add teabag).
  2. Model-Based: What state do you need to track? (e.g., is_kettle_boiled, has_milk_been_added).
  3. Goal-Based: Your goal is have_hot_tea. What is the plan? What are the steps?
  4. Utility/BDI-Based: What if you have multiple desires? (“I want tea, but I also want to leave for work in 10 minutes”). How do you weigh the utility of a quick-but-mediocre tea vs. a slow-but-perfect tea? What is your final intention?

Write down the logic for each. This exercise will solidify your understanding of how design choices create vastly different behaviors. If step 4 feels abstract, the sketch below shows one way to weigh competing desires numerically.
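
A tiny sketch of step 4, with two competing plans scored by a weighted utility function (all utilities and weights are hypothetical):

# Weighing two competing plans by utility (numbers are hypothetical).
options = {
    "quick_mediocre_tea": {"tea_quality": 0.4, "on_time": 1.0},
    "slow_perfect_tea":   {"tea_quality": 1.0, "on_time": 0.2},
}
weights = {"tea_quality": 0.4, "on_time": 0.6}  # leaving on time matters more today

def utility(outcome):
    return sum(weights[k] * v for k, v in outcome.items())

intention = max(options, key=lambda name: utility(options[name]))
print(intention)  # quick_mediocre_tea

With the morning-rush weights above, the agent's intention becomes the quick cup; shift the weights and the commitment flips.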
