
MCP Protocol Overview: Bridging AI Agents with the External World

Introduction

MCP (Model Context Protocol) is an open protocol released by Anthropic that standardizes communication between AI models and external tools/data sources. Through MCP, AI Agents can securely and consistently invoke external system capabilities.

This article provides a systematic introduction to MCP's core principles, interaction mechanisms, and deployment modes.

MCP Protocol Core Principles

Overall Architecture

The MCP protocol defines the communication specification between AI clients (Agents) and MCP Servers, with MCP Servers then interacting with the actual business systems.

Working Mechanism

The MCP protocol workflow includes the following key stages:

| Stage | Description |
| --- | --- |
| Tool Registration | When the MCP Server starts, it declares its available tools, resources, and prompts to clients |
| Message Format | Uses the JSON-RPC 2.0 protocol for communication |
| Invocation Flow | Agent identifies user intent → selects an appropriate MCP tool → sends an invocation request → MCP Server executes business logic → returns the result |
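To make the message format concrete, here is a sketch of a `tools/call` request and its matching result as JSON-RPC 2.0 messages. The tool name `create_order` and its arguments are hypothetical; the exact field shapes follow the JSON-RPC 2.0 envelope (`jsonrpc`, `id`, `method`, `params` / `result`):

```python
import json

# Hypothetical JSON-RPC 2.0 request the Agent sends to invoke a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_order",                   # hypothetical tool name
        "arguments": {"sku": "A-100", "qty": 2},  # hypothetical arguments
    },
}

# A matching result the MCP Server could return after executing business logic.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # must echo the request id so the client can pair request/response
    "result": {"content": [{"type": "text", "text": "order created"}]},
}

wire = json.dumps(request)  # messages are serialized as JSON on the wire
print(wire)
```

The `id` echo is what lets a client correlate each result with the request that produced it, even when multiple calls are in flight.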

Agent-MCP Interaction Mechanism

Capability Discovery Flow

Agents need to "know" what tools are available and their parameter definitions. This process is completed during the MCP protocol initialization and capability discovery phase.

The entire flow can be divided into three phases:

  1. Initialization Handshake: Agent and MCP Server establish connection, exchanging capability information and protocol version
  2. Tool Discovery: Agent retrieves all tool information provided by MCP Server through tools/list request
  3. Actual Invocation: Agent selects appropriate tools based on user intent and initiates invocation requests

tools/list Response Structure

When MCP Server responds to tools/list, the core fields returned are:

| Field | Description |
| --- | --- |
| name | Unique tool identifier, e.g., create_order |
| description | Tool functionality description, used by the LLM to determine when to invoke it |
| inputSchema | Parameter definition in JSON Schema format (types, required fields, descriptions, etc.) |

After obtaining this information, the Agent injects it into the LLM's context, enabling the LLM to:

  • Understand what tools are available
  • Comprehend each tool's purpose
  • Know what parameters to pass when invoking
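A minimal sketch of what one `tools/list` entry might look like, and how a client could use its `inputSchema` to check required parameters before invoking. The tool definition is hypothetical; the `required` check mirrors the JSON Schema `required` keyword:

```python
# A hypothetical entry as it might appear in a tools/list result.
tool = {
    "name": "create_order",
    "description": "Create an order in the shop backend. "
                   "Use when the user asks to buy something.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sku": {"type": "string", "description": "Product SKU"},
            "qty": {"type": "integer", "description": "Quantity, defaults to 1"},
        },
        "required": ["sku"],
    },
}

def missing_required(tool, arguments):
    """Return required parameters (per the tool's inputSchema) absent from arguments."""
    required = tool["inputSchema"].get("required", [])
    return [p for p in required if p not in arguments]

print(missing_required(tool, {"qty": 2}))  # "sku" is required but not supplied
```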

Transport Protocols: stdio vs SSE

The MCP Server and client support two transport-layer protocols: stdio and SSE.

stdio (Standard Input/Output)

The stdio mode communicates through process standard input/output:

| Feature | Description |
| --- | --- |
| Communication Method | Through the process's stdin and stdout |
| Deployment Location | Must run locally |
| Startup Method | Client directly starts the MCP Server process |
| Connection Mode | One-to-one; each client starts an independent process |
| Typical Scenarios | Local development, desktop applications, private deployment |
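The stdio transport can be illustrated with a few lines of Python: the client spawns the server process and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. A real client would launch an actual MCP Server binary; here `cat` stands in as a trivial "server" that echoes each line back, purely to show the transport mechanics:

```python
import json
import subprocess

# The client owns the server process and talks to it over pipes.
proc = subprocess.Popen(
    ["cat"],  # stand-in for a real MCP Server executable
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")  # one JSON message per line
proc.stdin.flush()

echoed = json.loads(proc.stdout.readline())   # read the "server" reply
proc.stdin.close()
proc.wait()
print(echoed["method"])
```

This one-to-one, process-per-client shape is exactly why stdio fits local and desktop scenarios but not multi-tenant services.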

SSE (Server-Sent Events)

SSE mode is based on HTTP protocol and supports remote deployment:

| Feature | Description |
| --- | --- |
| Communication Method | HTTP-based server push technology |
| Deployment Location | Can be on remote servers |
| Startup Method | Independently deployed as an HTTP service |
| Connection Mode | One-to-many; can serve multiple clients |
| Typical Scenarios | Cloud deployment, SaaS services, public APIs |
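On the wire, SSE delivers events as text frames separated by blank lines, with each `data:` line carrying a payload chunk. A minimal parser of that framing (following the standard SSE wire format; the JSON payload shown is a hypothetical JSON-RPC result) looks like this:

```python
def parse_sse(stream_text):
    """Extract the data payloads from an SSE stream.

    Events are separated by blank lines; each `data:` line contributes one
    payload chunk, and multi-line data is joined with newlines.
    """
    events, data_lines = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:  # flush a trailing event with no final blank line
        events.append("\n".join(data_lines))
    return events

stream = 'data: {"jsonrpc": "2.0", "id": 1, "result": {}}\n\n'
print(parse_sse(stream))
```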

Architecture Design Options

Option 1: stdio Mode

Users deploy the MCP Server locally and configure business-system credentials on their own machine. The Agent talks to the MCP Server over stdio, while the MCP Server calls the business system's APIs over HTTP; authentication relies on the business system's own mechanism.

Characteristics:

  • Users need to install MCP Server locally
  • Business credentials are configured on user's local machine
  • MCP Server internally uses HTTP to call business system APIs
  • Authentication is handled by the business system
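As a concrete illustration, many MCP-aware desktop clients launch a local stdio server from a JSON configuration along these lines. The `mcpServers` key, server name, command, and environment variable are all assumptions modeled on common client configs, not part of the protocol itself:

```json
{
  "mcpServers": {
    "order-system": {
      "command": "python",
      "args": ["order_mcp_server.py"],
      "env": { "ORDER_API_TOKEN": "<business-system credential>" }
    }
  }
}
```

Note how the credential lives in the user's local config file, which is precisely the credential-scattering trade-off this option carries.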

Option 2: SSE Mode

The MCP Server is deployed on the server side; a gateway can be added between the client and the MCP Server to handle API-key authentication.

Characteristics:

  • MCP Server is deployed on business system servers (cloud)
  • Users don't need local installation, just configure connection URL
  • Gateway can be added for unified authentication (API Key)
  • Better suited for SaaS and multi-tenant scenarios
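From the client's perspective, connecting to a gateway-protected SSE endpoint amounts to attaching a credential header to the HTTP request. The endpoint URL and the `X-API-Key` header name below are assumptions for illustration; gateways commonly accept either a custom header like this or a bearer token:

```python
import urllib.request

endpoint = "https://mcp.example.com/sse"  # hypothetical SSE endpoint
req = urllib.request.Request(
    endpoint,
    headers={
        "Accept": "text/event-stream",     # SSE content type
        "X-API-Key": "user-provided-key",  # checked by the gateway, not the MCP Server
    },
)
print(req.get_header("X-api-key"))
```

Because the gateway validates the key before traffic reaches the MCP Server, authentication, rate limiting, and auditing can all be centralized there.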

Comparison of Two Modes

| Dimension | stdio Mode | SSE Mode |
| --- | --- | --- |
| Deployment Complexity | Users need local installation | Zero installation for users |
| Credential Management | Scattered across user machines | Centralized management |
| Security | Credentials on user side, risk of leakage | Unified control through gateway |
| Updates & Maintenance | Users must update manually | Server-side updates only |
| Network Requirements | Local process communication, no latency | Network dependent, some latency |
| Observability | Difficult to monitor uniformly | Centralized monitoring and auditing |
| Scalability | Limited by user's local resources | Elastic scaling |
| Suitable Scenarios | Development/testing, private deployment | Production, public services |

Summary

The MCP protocol provides a standardized solution for AI Agent interaction with external systems:

  1. Protocol Level: Uses JSON-RPC 2.0 to standardize communication format, defining standard flows for initialization, tool discovery, and tool invocation
  2. Transport Level: Supports both stdio and SSE modes, suitable for local and cloud deployment scenarios respectively
  3. Architecture Level: MCP Server serves as a middle layer, decoupling Agents from business systems, enabling tool capabilities to be reused across multiple Agents

As AI Agent technology evolves, the MCP protocol is becoming the industry standard for tool invocation, laying the foundation for building a rich AI application ecosystem.

License

This article is licensed under CC BY-NC-SA 4.0. You are free to:

  • Share — copy and redistribute the material in any medium or format
  • Adapt — remix, transform, and build upon the material

Under the following terms:

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • NonCommercial — You may not use the material for commercial purposes.
  • ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
