Protocol Overview

The Model Context Protocol (MCP) is a standardized way for AI assistants to interact with external tools and services. Understanding MCP helps you understand how your tools work with AI assistants and why the platform is designed the way it is.

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is a communication standard that allows AI assistants to discover, understand, and use external tools. Think of it as a universal language that AI assistants and tools use to communicate with each other.

Before MCP existed, if you wanted to connect an AI assistant to your tools, you had to build custom integrations for each AI platform. This meant:

  • Creating different code for ChatGPT, Claude, and other AI assistants

  • Maintaining multiple integrations that could break when AI platforms updated

  • Building authentication, error handling, and security from scratch each time

  • Spending weeks or months on integration work instead of building actual business value

MCP solves this by providing a single, standardized protocol that any AI assistant can use. Once you build tools using MCP, they work with all MCP-compatible AI assistants automatically. This means you write your tool once, and it works everywhere.

What MCP Enables

MCP enables AI models to:

  • Discover available tools dynamically - When an AI assistant connects to an MCP server, it can ask "what tools do you have?" and receive a complete list. The AI doesn't need to know about your tools in advance - it discovers them automatically.

  • Invoke tools with parameters - The AI can call your tools with specific inputs. For example, if you have a "Get Weather" tool, the AI can call it with "New York" as the city parameter.

  • Receive structured responses - Your tools return data in a standardized format that AI assistants understand. This makes it easy for the AI to use the results to answer user questions or perform further actions.

  • Handle errors gracefully - If something goes wrong, the protocol ensures errors are communicated clearly. The AI can understand what went wrong and either try again, use a different tool, or explain the error to the user.

Why MCP Matters

MCP is revolutionary because it:

  • Standardizes communication - Everyone uses the same protocol, so tools work with any compatible AI assistant

  • Eliminates custom integrations - You don't need to build separate integrations for each AI platform

  • Enables tool discovery - AI assistants can automatically discover what tools are available

  • Provides structure - Standardized formats for requests, responses, and errors make everything predictable

  • Future-proofs your tools - As new AI platforms adopt MCP, your tools work with them automatically

MCP Functions implements the MCP protocol for you, so you don't need to understand all the details. But understanding the basics helps you build better tools and understand how everything works together.

Supported JSON-RPC methods

The platform supports the following MCP methods:

  • initialize — Handshake and server capabilities

  • tools/list — List available tools

  • tools/call — Invoke a tool with arguments

  • resources/list — List resources (returns empty list)

  • resources/read — Read a resource (supported, for compatibility)

  • prompts/list — List prompts (returns empty list)

  • prompts/get — Get a prompt (supported, for compatibility)

  • logging/setLevel — Set log level (validation only)

Notifications (methods starting with notifications/) are also accepted.
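As a rough sketch of how this method list behaves, the dispatcher below is purely illustrative (it is not the platform's actual implementation); it shows notifications being accepted silently, the always-empty list methods, and the standard JSON-RPC "method not found" error:

```python
# Hypothetical dispatcher mirroring the method list above; the handler
# shapes are illustrative, not the platform's actual code.
def handle(request):
    method = request.get("method", "")
    if method.startswith("notifications/"):
        return None  # notifications are accepted and produce no response
    results = {
        "initialize": {"capabilities": {"tools": {}}},
        "tools/list": {"tools": []},
        "resources/list": {"resources": []},  # empty list, per the note above
        "prompts/list": {"prompts": []},      # empty list, per the note above
        # tools/call, resources/read, prompts/get, and logging/setLevel
        # would be handled along the same lines.
    }
    if method not in results:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found: " + method}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": results[method]}
```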

Server authentication

Each MCP server can enforce one or more authentication methods (all required when multiple are enabled):

  • API key in header — Bearer token in the Authorization header

  • API key in query — apiKey query parameter

  • Custom headers — Server-defined header name and value (stored as hash)

Clients must satisfy every enabled method to connect.
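A minimal sketch of what a client might send when all three methods are enabled; the URL, key value, and custom header name and value below are made up for illustration:

```python
from urllib.parse import urlencode

# Illustrative values only; the header and parameter names come from the
# list above, everything else is invented for this sketch.
api_key = "sk-example-key"

headers = {
    "Authorization": "Bearer " + api_key,     # API key in header
    "X-Custom-Auth": "shared-secret-value",   # custom header (name is server-defined)
}
url = "https://example.com/mcp?" + urlencode({"apiKey": api_key})  # API key in query
```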

Understanding Protocol Features

MCP uses several key technologies and patterns to enable communication between AI assistants and tools. Understanding these features helps you understand how the protocol works.

JSON-RPC Format

What it is: JSON-RPC is a standardized format for remote procedure calls. It defines how messages are structured when one system wants to call a function in another system.

Why it's used: JSON-RPC provides a consistent, well-understood format that both AI assistants and tools can use. It's language-agnostic (works with any programming language), human-readable (easy to debug), and widely supported.

What it means for you: When an AI assistant calls your tool, it sends a JSON-RPC message. Your tool receives this message, executes, and returns a JSON-RPC response. MCP Functions handles all the JSON-RPC details for you - you just write your tool code.

Example: When an AI assistant wants to call your "Get Weather" tool, it sends a JSON-RPC message like:
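Here is a minimal sketch of that exchange, assuming a hypothetical get_weather tool with a city parameter (names, arguments, and the result text are invented for this example):

```python
# Illustrative JSON-RPC 2.0 request; tool name and arguments are made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "New York"}},
}

# The response carries the same id so the caller can match it to the request.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "22°C and sunny"}]},
}
```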

Your tool receives this, executes, and returns a JSON-RPC response. MCP Functions handles all of this automatically.

Server-Sent Events (SSE) for Real-Time Streaming

What it is: Server-Sent Events (SSE) is a technology that allows a server to send data to a client in real-time over a single HTTP connection. Unlike traditional request-response patterns, SSE allows continuous communication.

Why it's used: SSE enables real-time communication between AI assistants and tools. This is important because:

  • Tools can send progress updates as they execute

  • Long-running operations can stream results as they become available

  • AI assistants can receive updates immediately rather than waiting for completion

  • Multiple messages can be sent over a single connection

What it means for you: When your tool executes, results can be streamed back to the AI assistant in real-time. If your tool takes a few seconds to complete, the AI can see progress updates. MCP Functions handles SSE automatically - you don't need to do anything special.

Example: If your tool is fetching data from a slow API, it might send progress updates like "Connecting to API...", "Fetching data...", "Processing results...", and finally the actual data. The AI assistant receives these updates in real-time.
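On the wire, each of those updates travels as one SSE "data:" frame. The sketch below shows the framing only; the message shapes are illustrative, and MCP Functions produces the real frames for you:

```python
import json

def sse_frame(payload):
    # Each SSE event is a "data:" line followed by a blank line.
    return "data: " + json.dumps(payload) + "\n\n"

# Illustrative progress updates followed by a made-up final result.
frames = [
    sse_frame({"progress": "Connecting to API..."}),
    sse_frame({"progress": "Fetching data..."}),
    sse_frame({"progress": "Processing results..."}),
    sse_frame({"result": {"temperature_c": 22, "condition": "sunny"}}),
]
stream = "".join(frames)  # all sent over the same open HTTP connection
```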

Tool Discovery via Protocol Handshake

What it is: Tool discovery is the process by which an AI assistant learns what tools are available on an MCP server. This happens automatically when the AI connects.

Why it's used: Instead of requiring AI assistants to be pre-configured with tool information, MCP allows them to discover tools dynamically. This means:

  • You can add new tools without updating AI assistant configurations

  • AI assistants automatically know what tools are available

  • Tool descriptions help AI assistants understand when to use each tool

  • No manual configuration is needed

What it means for you: When you create a tool in MCP Functions, it's automatically included in the tool discovery process. The AI assistant will see your tool's name, description, and parameters automatically. You just need to write good descriptions so the AI knows when to use your tool.

How it works: When an AI assistant connects to your MCP server, it sends an "initialize" message. Your server responds with a list of all available tools, including their names, descriptions, and parameter schemas. The AI assistant then knows what tools it can use.

Secure Execution in Sandboxed Environment

What it is: All tool execution happens in isolated, secure sandbox environments that prevent tools from accessing system resources or other tools.

Why it's used: Security is critical when running code from various sources. Sandboxing ensures that:

  • Tools can't access sensitive system resources

  • Tools can't interfere with each other

  • Malicious code is contained and can't cause harm

  • Resource limits prevent runaway processes

What it means for you: Your tools run in secure sandboxes automatically. You don't need to worry about security - it's built into the platform. However, you do need to work within the sandbox's constraints (no file system access, limited APIs, etc.).

For more details on the sandbox environment, see the Sandbox Environment documentation.

Understanding the Message Flow

The MCP protocol defines a specific sequence of messages that occur when an AI assistant connects to and uses your tools. Understanding this flow helps you understand how everything works together.

The Complete Connection and Usage Flow

Here's the step-by-step process that happens when an AI assistant wants to use your tools:

Step 1: Client Connects to Server (SSE Stream)

The AI assistant (client) establishes a connection to your MCP server using Server-Sent Events (SSE). This creates a persistent connection over which the server can stream messages to the client in real time.

  • What happens: The AI assistant makes an HTTP request to your MCP server's endpoint

  • Connection type: Server-Sent Events (SSE) stream

  • Authentication: The connection is authenticated using an API key

  • Result: A persistent connection is established that stays open for the duration of the session

This connection remains open, allowing multiple tool calls without reconnecting each time.

Step 2: Client Sends Initialize Message

Once connected, the AI assistant sends an "initialize" message to start the handshake process. This message tells the server that the client is ready to begin communication.

  • Message type: JSON-RPC request with method "initialize"

  • Purpose: Begins the protocol handshake

  • Contains: Client capabilities and protocol version information

  • What it means: The AI assistant is saying "Hello, I'm ready to communicate. What can you do?"

This is the first step in the discovery process - the AI assistant is asking the server what it can do.
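A sketch of such an initialize message (the protocol version string and client name below are invented for this example):

```python
# Illustrative initialize request; values are made up for this sketch.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "clientInfo": {"name": "example-assistant", "version": "1.0.0"},
        "capabilities": {},
    },
}
```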

Step 3: Server Responds with Server Capabilities (Includes Tools)

The server responds to the initialize message with its capabilities, including a complete list of all available tools.

  • Response type: JSON-RPC response with server capabilities

  • Contains:

  • List of all available tools

  • Tool names and descriptions

  • Tool parameter schemas (what inputs each tool needs)

  • Server capabilities and features

  • What it means: The server is saying "Here's what I can do. Here are all my tools, what they do, and what parameters they need."

This is where tool discovery happens. The AI assistant now knows exactly what tools are available and how to use them. The AI uses tool descriptions to understand when each tool should be used.

Example: The server might respond with:
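The sketch below shows a tools listing with two made-up tools; the names, descriptions, and schemas are illustrative:

```python
# Illustrative response carrying the server's tool list.
response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
            {
                "name": "get_customer_by_email",
                "description": "Look up a customer record by email address",
                "inputSchema": {
                    "type": "object",
                    "properties": {"email": {"type": "string"}},
                    "required": ["email"],
                },
            },
        ]
    },
}
```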

The AI assistant now knows it has two tools available and understands what each one does and what parameters they need.

Step 4: Client Invokes Tools via callTool Messages

When a user asks the AI assistant something that requires a tool, the AI sends a "callTool" message to invoke the appropriate tool.

  • Message type: JSON-RPC request with method "tools/call"

  • Contains:

  • Tool name (which tool to call)

  • Tool arguments (the parameters to pass)

  • What it means: The AI assistant is saying "Please run this tool with these parameters."

Example: If a user asks "What's the weather in New York?", the AI might send:
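A sketch of that request, assuming the hypothetical get_weather tool from the discovery example:

```python
# Illustrative tools/call request; tool name and arguments are made up.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "New York"}},
}
```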

The server receives this message, identifies the tool, and begins execution.

Step 5: Server Streams Execution Results via SSE

As the tool executes, the server streams results back to the AI assistant in real-time using Server-Sent Events.

  • Response type: SSE stream with JSON-RPC messages

  • Contains:

  • Progress updates (optional, for long-running operations)

  • Final result with tool output

  • Error information (if something went wrong)

  • What it means: The server is sending back the tool's results as they become available.

Example: The server might stream:
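A sketch of the final frame (the payload shape and weather values are invented for this example):

```python
import json

# Illustrative final result, wrapped in an SSE "data:" frame. The id
# matches the earlier tools/call request.
result_message = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [
            {"type": "text",
             "text": json.dumps({"city": "New York",
                                 "temperature_c": 22,
                                 "condition": "sunny"})}
        ]
    },
}
frame = "data: " + json.dumps(result_message) + "\n\n"
```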

The AI assistant receives the result, extracts the data, and uses it to answer the user's question: "The weather in New York is 22°C and sunny."

Complete Flow Example

Here's a complete example of the flow when a user asks "Get customer info for [email protected]":

  • Connection: AI assistant connects to MCP server via SSE

  • Initialize: AI sends initialize message

  • Discovery: Server responds with list of tools, including "get_customer_by_email"

  • User question: User asks "Get customer info for [email protected]"

  • Tool call: AI sends callTool message: { "method": "tools/call", "params": { "name": "get_customer_by_email", "arguments": { "email": "[email protected]" } } }

  • Execution: Server executes your tool code with email="[email protected]"

  • Result: Server streams result back: { "result": { "content": [{ "type": "text", "text": "{\"success\":true,\"data\":{\"name\":\"John Doe\",\"email\":\"[email protected]\",\"status\":\"active\"}}" }] } }

  • Response: AI assistant uses the data to tell the user: "John Doe ([email protected]) has an active account."

Multiple Tool Calls

The connection stays open, so the AI assistant can call multiple tools in sequence or even in parallel. For example:

  • User asks: "What's the weather in New York and what's the customer status for [email protected]?"

  • AI calls "get_weather" tool with city="New York"

  • AI calls "get_customer_by_email" tool with email="[email protected]"

  • AI receives both results

  • AI combines the information and responds to the user

All of this happens over the same SSE connection, making it efficient and fast.
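Sketched as requests (the tool names are the hypothetical ones used above, and the email address is invented), the two calls share one connection and are told apart by their ids:

```python
# Two illustrative requests sent over the same SSE session; responses can
# arrive in either order and are matched back to requests by "id".
calls = [
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "get_weather", "arguments": {"city": "New York"}}},
    {"jsonrpc": "2.0", "id": 3, "method": "tools/call",
     "params": {"name": "get_customer_by_email",
                "arguments": {"email": "jane@example.com"}}},
]
```

Distinct ids are what make parallel calls safe: the assistant never has to wait for one response before sending the next request.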

What This Means for You

As a tool creator using MCP Functions, you don't need to understand all the protocol details. The platform handles all of this for you automatically. However, understanding the basics helps you:

  • Write better tool descriptions - Good descriptions help AI assistants understand when to use your tools

  • Design better parameters - Understanding how parameters flow helps you design intuitive interfaces

  • Debug issues - When something doesn't work, understanding the flow helps you figure out why

  • Understand tool discovery - Knowing that AI assistants discover tools automatically helps you understand why descriptions matter

  • Appreciate the architecture - Understanding the protocol helps you understand why the platform is designed the way it is

The key takeaway is that MCP provides a standardized way for AI assistants to discover and use your tools. You write your tool code, and MCP Functions handles all the protocol communication automatically. Your tools work with any MCP-compatible AI assistant without any additional work on your part.

Protocol Benefits

Using MCP provides several important benefits:

  • Standardization - Everyone uses the same protocol, so tools work everywhere

  • Automatic discovery - AI assistants automatically discover your tools

  • Real-time communication - SSE enables real-time updates and streaming

  • Error handling - Standardized error formats make debugging easier

  • Future compatibility - As new AI platforms adopt MCP, your tools work with them automatically

  • No custom integrations - You don't need to build separate integrations for each AI platform

MCP Functions implements the MCP protocol completely, so you get all these benefits automatically. You can focus on building great tools, and the platform handles all the protocol complexity for you.
