AI Agent — Interactive Knowledge Map
What exactly is an AI agent, how is it different from a chatbot, and why did protocols like MCP and A2A emerge?
Explore AI Agent through an interactive 3D knowledge map. This visual guide covers 10 key concepts and 11 relationships, helping you understand the topic structurally.
Key Concepts in AI Agent
AI Agent
An autonomous system that uses LLMs to plan, use tools, and complete tasks
Unlike simple chatbots, AI agents set goals, create plans, call external tools, and verify results. The LLM acts as the 'brain'; combined with tool-calling, memory, and planning capabilities, it can carry out complex tasks autonomously.
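The goal-plan-act-verify cycle above can be sketched as a minimal loop. This is an illustrative skeleton, not a real implementation: the `plan`, `act`, and `verify` functions are stubs standing in for LLM and tool calls.

```python
# A minimal plan-act-verify agent loop. Every function here is a stub;
# in a real agent each would call an LLM (GPT, Claude, Gemini) or a tool.

def plan(goal: str) -> list[str]:
    # Stub: a real agent would ask the LLM to decompose the goal.
    return [f"research {goal}", f"summarize {goal}"]

def act(step: str) -> str:
    # Stub: a real agent would call a tool or the LLM here.
    return f"done: {step}"

def verify(results: list[str]) -> bool:
    # Stub: a real agent would ask the LLM to check the results.
    return all(r.startswith("done") for r in results)

def run_agent(goal: str) -> list[str]:
    steps = plan(goal)                  # 1. set a goal, create a plan
    results = [act(s) for s in steps]   # 2. execute each step
    assert verify(results)              # 3. verify before finishing
    return results
```

A chatbot stops after one LLM call; an agent runs this whole loop, possibly many times, before answering.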
LLM (Large Language Model)
The core brain of an agent — GPT, Claude, Gemini, etc.
LLMs serve as the reasoning engine for agents. They interpret user intent, decide which tools to use, and synthesize results. The model's capability sets the upper bound for the agent's performance.
Tool Use (Function Calling)
The ability for agents to call external APIs, databases, and file systems
Also known as Function Calling. When an LLM outputs structured function calls instead of text, the system executes them and feeds results back to the LLM. This is what makes agents 'act' rather than just 'talk.'
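The execute-and-feed-back loop can be shown in a few lines. The shape of `llm_output` below is a hypothetical simplification modeled on typical function-calling APIs; real providers wrap it in richer message objects.

```python
import json

# Hypothetical structured output from the LLM: a tool name plus
# JSON-encoded arguments, instead of free-form text.
llm_output = {"tool": "get_weather", "arguments": json.dumps({"city": "Seoul"})}

# Tool registry: the system, not the model, actually runs the function.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub for a real weather API

TOOLS = {"get_weather": get_weather}

def execute_tool_call(call: dict) -> str:
    fn = TOOLS[call["tool"]]
    args = json.loads(call["arguments"])
    # The return value would be appended to the conversation so the
    # LLM can continue reasoning with the observed result.
    return fn(**args)

result = execute_tool_call(llm_output)
```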
MCP (Model Context Protocol)
Anthropic's open standard for connecting agents to tools
MCP is an open protocol that lets AI agents access diverse data sources and tools in a standardized way. Like USB-C for AI — one standard to connect everything. It uses a client-server architecture: MCP servers expose tools and data, and agent applications connect to them as clients.
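Under the hood, MCP messages are JSON-RPC 2.0. The sketch below shows the rough shape of a `tools/call` exchange; it follows the published spec's field names but is illustrative, not a working MCP client, and the `read_file` tool is a made-up example.

```python
import json

# Client asks an MCP server to run a tool (hypothetical "read_file" tool).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "README.md"}},
}

# The server executes the tool and replies with content blocks,
# matched to the request by id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "# My Project"}]},
}

wire = json.dumps(request)  # what actually travels over stdio or HTTP
```

The same request/response shape works regardless of what the tool does, which is the "USB-C" point: one wire format for every integration.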
A2A (Agent-to-Agent)
Google's protocol for inter-agent communication
A2A enables different agents to collaborate. If MCP is agent-to-tool, A2A is agent-to-agent. Agents can delegate tasks to each other and exchange results.
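The delegation idea can be sketched in plain Python. Note this is NOT the real A2A wire format (which, like MCP, is JSON-RPC based); the classes below only illustrate one agent handing a task to a peer and collecting the result.

```python
# Toy agent-to-agent delegation. Class and method names are illustrative,
# not part of the A2A specification.

class Agent:
    def __init__(self, name: str, skill: str):
        self.name, self.skill = name, skill

    def handle(self, task: str) -> str:
        return f"{self.name} completed '{task}' using {self.skill}"

class ManagerAgent(Agent):
    def delegate(self, task: str, worker: Agent) -> str:
        # MCP connects an agent to tools; A2A connects it to peer agents.
        return worker.handle(task)

manager = ManagerAgent("manager", "planning")
translator = Agent("translator", "translation")
outcome = manager.delegate("translate the report", translator)
```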
Multi-Agent Systems
Multiple agents collaborating by dividing roles
Complex tasks that are hard for a single agent are split among specialized agents. A manager agent distributes work, and each agent contributes to the final output. CrewAI and AutoGen are notable frameworks.
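The divide-and-delegate pattern can be sketched as a manager routing one task through a pipeline of specialists. The specialist roles below are hypothetical, and the lambdas stand in for full agents; frameworks like CrewAI and AutoGen wrap the same idea in richer abstractions (their actual APIs differ).

```python
# Manager-style work division: each "specialist" is a stub agent that
# contributes one piece of the final output.

SPECIALISTS = {
    "research": lambda task: f"notes on {task}",
    "writing": lambda task: f"draft about {task}",
    "review": lambda task: f"approved: {task}",
}

def run_crew(task: str) -> list[str]:
    # The manager routes the task to each specialist in order and
    # collects their contributions.
    return [worker(task) for worker in SPECIALISTS.values()]

output = run_crew("AI agents")
```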
ReAct Pattern
Reasoning + Acting — a think-then-act loop
Agents repeat 'Thought → Action → Observation' cycles to solve problems. This loop is more capable than simple prompt chaining because it interleaves reasoning with tool use: each observation feeds into the next thought.
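The Thought → Action → Observation cycle can be sketched as a loop. The "LLM" here is a stub policy that searches once and then finishes; a real agent would prompt the model at each step.

```python
# Minimal ReAct loop with a stubbed LLM and a stubbed search tool.

def llm_decide(question: str, observations: list[str]) -> tuple[str, str]:
    # Stub policy: look something up once, then finish.
    if not observations:
        return ("Thought: I should look this up", "search")
    return ("Thought: I have enough information", "finish")

def search(query: str) -> str:
    return f"results for {query}"  # stub tool

def react(question: str, max_steps: int = 5) -> list[str]:
    trace, observations = [], []
    for _ in range(max_steps):
        thought, action = llm_decide(question, observations)
        trace.append(thought)                  # Thought
        if action == "finish":
            break
        obs = search(question)                 # Action → Observation
        observations.append(obs)
        trace.append(f"Observation: {obs}")
    return trace

trace = react("capital of France")
```

The `max_steps` cap matters in practice: without it, a confused model can loop forever.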
Agent Memory
Mechanisms for maintaining conversation context and long-term recall
Agent memory is split into short-term memory (the current conversation context) and long-term memory (past information stored in vector databases). Without memory, agents start from scratch every time.
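A toy version of this split: short-term memory is just the running message list, while long-term memory stores facts and retrieves the best match. Word overlap here stands in for the embedding similarity a real vector DB would compute; the class and its methods are illustrative.

```python
# Toy agent memory: short-term is the live conversation; long-term
# retrieves stored facts by word overlap (a stand-in for vector search).

class Memory:
    def __init__(self):
        self.short_term: list[str] = []   # current conversation turns
        self.long_term: list[str] = []    # persisted facts

    def remember(self, fact: str):
        self.long_term.append(fact)

    def recall(self, query: str):
        q = set(query.lower().split())
        # Return the stored fact sharing the most words with the query.
        return max(
            self.long_term,
            key=lambda f: len(q & set(f.lower().split())),
            default=None,
        )

mem = Memory()
mem.remember("user's favorite language is Python")
mem.remember("the user lives in Seoul")
hit = mem.recall("favorite language")
```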
Chatbot vs Agent
Key differences between traditional chatbots and AI agents
Chatbots only answer questions, but agents act autonomously toward goals. Chatbots generate one response at a time; agents create multi-step plans and execute them.
Claude Code / Cursor
Coding agents — AI that reads, edits, and runs code
Representative examples of AI agents integrated into developer tools. They understand codebases, find bugs, and implement features, with access to the file system, terminal, and Git; MCP lets them connect to additional external tools and data sources.
How Concepts Connect
- AI Agent → LLM (Large Language Model) (child)
- AI Agent → Tool Use (Function Calling) (child)
- Tool Use (Function Calling) → MCP (Model Context Protocol) (child)
- Tool Use (Function Calling) → A2A (Agent-to-Agent) (derived)
- AI Agent → Multi-Agent Systems (child)
- AI Agent → ReAct Pattern (child)
- LLM (Large Language Model) → Agent Memory (derived)
- AI Agent → Chatbot vs Agent (derived)
- Multi-Agent Systems → A2A (Agent-to-Agent) (prerequisite)
- MCP (Model Context Protocol) → Claude Code / Cursor (causal)
- ReAct Pattern → Multi-Agent Systems (prerequisite)