AI agents are autonomous systems that perceive their environment, make decisions, and take actions to achieve specific goals. In this guide, we'll explore what makes an agent "intelligent" and walk through building your first AI agent.
What is an AI Agent?
An AI agent consists of four key components, tied together in the short sketch that follows this list:
- Sensors - How the agent perceives its environment
- Decision Making - The logic that determines what action to take
- Actuators - How the agent affects its environment
- Goals - The objectives the agent is trying to achieve
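To see how these pieces fit together, here is a minimal, hypothetical skeleton of the perceive–decide–act loop. The environment methods (get_observation, apply) are illustrative assumptions rather than a real API:

class Agent:
    def __init__(self, goal):
        self.goal = goal  # Goals: the objective the agent is pursuing

    def perceive(self, environment):
        # Sensors: observe the relevant state of the environment
        # (get_observation is an assumed environment method)
        return environment.get_observation()

    def decide(self, percept):
        # Decision making: map the percept (and the goal) to an action
        raise NotImplementedError

    def act(self, action, environment):
        # Actuators: apply the chosen action back to the environment
        # (apply is an assumed environment method)
        environment.apply(action)

Every agent we build below follows this same loop, differing only in how decide is implemented.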
Types of AI Agents
Simple Reflex Agents
These agents choose an action based purely on the current percept, ignoring any history of past observations:
def reflex_agent(percept):
    if percept == "obstacle_ahead":
        return "turn_left"
    elif percept == "goal_visible":
        return "move_forward"
    else:
        return "explore"
Model-Based Agents
These agents maintain an internal model of the world:
class ModelBasedAgent:
    def __init__(self):
        self.world_model = {}
        self.current_state = None

    def update_model(self, percept):
        # Update the internal world model with the latest observation
        self.world_model[percept.location] = percept.data

    def decide_action(self):
        # Make decisions based on the world model
        # (plan_next_action is a placeholder for the agent's planning logic)
        return self.plan_next_action(self.world_model)
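The percept object isn't defined above. As a sketch, you might represent it as a small namedtuple exposing the location and data fields the class expects; the Percept type here is a hypothetical stand-in:

from collections import namedtuple

# Hypothetical percept type matching the attributes used in update_model
Percept = namedtuple("Percept", ["location", "data"])

agent = ModelBasedAgent()
agent.update_model(Percept(location=(2, 3), data="wall"))
print(agent.world_model)  # {(2, 3): 'wall'}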
Goal-Based Agents
These agents plan sequences of actions to achieve goals:
class GoalBasedAgent:
    def __init__(self, goal):
        self.goal = goal
        self.plan = []

    def search_for_plan(self, current_state):
        # Use a search algorithm (A*, BFS, etc.) to find a sequence of actions;
        # find_path_to_goal is a placeholder for that search
        return self.find_path_to_goal(current_state, self.goal)
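find_path_to_goal is left undefined above. As one possible implementation for a grid world, here is a breadth-first search sketch; it assumes states are (x, y) tuples and that neighbors(state) yields the legal adjacent cells:

from collections import deque

def find_path_to_goal(start, goal, neighbors):
    """Breadth-first search: return the list of states from start to goal, or None."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            # Reconstruct the path by walking back through came_from
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return list(reversed(path))
        for next_state in neighbors(state):
            if next_state not in came_from:
                came_from[next_state] = state
                frontier.append(next_state)
    return None  # No path exists

A goal-based agent would store the returned path as its plan and execute it one step at a time.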
Building Your First Agent
Let's create a simple agent for a grid world:
import random

class GridWorldAgent:
    def __init__(self, grid_size=10):
        self.position = (0, 0)
        self.grid_size = grid_size
        self.goal = (grid_size - 1, grid_size - 1)
        self.visited = set()

    def perceive(self, environment):
        """Get information about the current state."""
        x, y = self.position
        neighbors = []
        # Check all four directions
        for dx, dy in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
            new_x, new_y = x + dx, y + dy
            if 0 <= new_x < self.grid_size and 0 <= new_y < self.grid_size:
                neighbors.append((new_x, new_y))
        return {
            'position': self.position,
            'neighbors': neighbors,
            'at_goal': self.position == self.goal
        }

    def decide(self, percept):
        """Choose the next action based on the percept."""
        if percept['at_goal']:
            return 'stop'
        # Simple strategy: move toward the goal while avoiding visited cells
        best_move = None
        best_distance = float('inf')
        for neighbor in percept['neighbors']:
            if neighbor not in self.visited:
                # Manhattan distance from this neighbor to the goal
                distance = abs(neighbor[0] - self.goal[0]) + \
                           abs(neighbor[1] - self.goal[1])
                if distance < best_distance:
                    best_distance = distance
                    best_move = neighbor
        # If every neighbor has been visited, fall back to a random
        # (possibly visited) neighbor rather than getting stuck
        if best_move is None:
            best_move = random.choice(percept['neighbors'])
        return best_move

    def act(self, action):
        """Execute the chosen action."""
        if action != 'stop':
            self.visited.add(self.position)
            self.position = action
            return f"Moved to {self.position}"
        return "Goal reached!"

# Run the agent
agent = GridWorldAgent(grid_size=5)
environment = {}  # Simplified environment (unused in this example)

for step in range(100):
    percept = agent.perceive(environment)
    action = agent.decide(percept)
    result = agent.act(action)
    print(f"Step {step}: {result}")
    if action == 'stop':
        break
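Because the grid is empty, the greedy Manhattan-distance strategy heads straight for the bottom-right corner, so on the 5×5 grid above the agent should stop after eight moves. On grids with obstacles the same strategy can get trapped in dead ends, which is exactly where search-based planning (as in the goal-based agent) or learning becomes useful.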
Next Steps
Now that you understand the basics, you can:
- Add Learning - Implement reinforcement learning so the agent improves over time (see the sketch after this list)
- Multi-Agent Systems - Create multiple agents that cooperate or compete
- Complex Environments - Work with more sophisticated state spaces
- Real-World Applications - Apply agents to actual problems
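As a taste of the first item, here is a minimal sketch of tabular Q-learning that could be bolted onto the grid world above. The hyperparameters (alpha, gamma, epsilon) and the helper names are illustrative assumptions, not a prescribed setup:

import random
from collections import defaultdict

# Q-table mapping (state, action) pairs to estimated long-term value
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def choose_action(state, actions):
    # Epsilon-greedy: usually exploit the best known action, sometimes explore
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    # Standard one-step Q-learning update
    best_next = max(Q[(next_state, a)] for a in next_actions) if next_actions else 0.0
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])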
Try AgentArena
Want to experiment with agents in interactive environments? Check out AgentArena, where you can build, test, and deploy agents through game-based challenges.
In our next post, we'll dive into reinforcement learning and how to train agents to optimize their behavior.