Tool Invocation
Tool invocation is the process by which AI assistants call your tools to perform actions or retrieve data. Understanding how invocation works helps you build better tools and debug how AI assistants use them.
What is Tool Invocation?
Tool invocation is the act of an AI assistant calling one of your tools to perform a task. When a user asks an AI assistant something that requires external action or data, the AI recognizes this need and calls the appropriate tool.
For example, if a user asks "What's the weather in New York?", the AI assistant might:
Recognize that it needs weather data
Look at available tools and find a "get_weather" tool
Call that tool with "New York" as the city parameter
Receive the weather data
Use that data to answer the user's question
This entire process happens automatically. The AI assistant handles all the complexity of deciding when to use tools, which tool to use, and what parameters to provide. You just need to build the tools and write good descriptions so the AI knows when to use them.
Why Tool Invocation Matters
Tool invocation is what makes your tools useful. Without it, your tools would just sit there unused. Understanding how invocation works helps you:
Write better tool descriptions - Good descriptions help AI assistants understand when to invoke your tools
Design better parameters - Understanding how parameters flow helps you design intuitive interfaces
Handle errors better - Knowing how invocation works helps you return errors that AI assistants can understand
Debug issues - When tools aren't being invoked, understanding the process helps you figure out why
How AI Assistants Decide to Invoke Tools
AI assistants use sophisticated reasoning to decide when and how to invoke tools. Understanding this process helps you build tools that get used effectively.
The Decision Process
When a user asks a question, the AI assistant goes through a decision process:
Analyzes the user's question - The AI understands what the user is asking for
Reviews available tools - The AI looks at all tools it has access to, including their names and descriptions
Determines if a tool is needed - The AI decides if answering the question requires a tool or if it can answer from its training data
Selects the appropriate tool - If a tool is needed, the AI chooses which tool to use based on tool descriptions
Determines parameters - The AI figures out what parameters to pass to the tool based on the user's question
Invokes the tool - The AI sends a tool invocation request
Uses the result - The AI receives the tool's response and uses it to answer the user
What Makes a Tool Get Invoked
Several factors influence whether and how often your tools get invoked:
Tool description quality - Clear, descriptive tool descriptions help AI assistants understand when to use your tools. A good description explains what the tool does, when it should be used, and what it returns.
Tool name clarity - Descriptive tool names help AI assistants quickly identify relevant tools. Names like "get_customer_by_email" are better than "tool1".
Parameter clarity - Clear parameter descriptions help AI assistants understand what data to provide. Good parameter descriptions explain what each parameter is for and what format it should be in.
Tool relevance - Tools that solve common problems get invoked more often. If your tool addresses a specific need that users frequently have, it will be used more.
Tool reliability - If a tool frequently returns errors, AI assistants may learn to avoid it or use alternatives.
Example: Tool Selection
Let's say a user asks "Get me the customer information for [email protected]". The AI assistant might:
See available tools: "get_customer_by_email", "update_customer_status", "create_support_ticket"
Read descriptions:
"get_customer_by_email" - "Fetches customer information using an email address" - "update_customer_status" - "Updates a customer's account status" - "create_support_ticket" - "Creates a new support ticket"
- Match the user's request to "get_customer_by_email" based on the description - Extract the email "[email protected]" from the user's question - Invoke the tool with email="[email protected]"
This matching happens automatically based on tool descriptions. The better your descriptions, the more accurately the AI will select your tools.
Understanding the Invocation Message Format
When an AI assistant invokes a tool, it sends a JSON-RPC message following the MCP protocol. Understanding this format helps you understand how your tools receive data.
The Standard Invocation Message
Tool invocations use the JSON-RPC format with the method "tools/call". Here's the structure:
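The code sample for this structure is not preserved here; based on the fields explained below, a minimal sketch of the JSON-RPC 2.0 request shape (all values are placeholders) is:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "tool_name",
    "arguments": {}
  }
}
```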
Message Components Explained
jsonrpc - Always "2.0" for JSON-RPC version 2.0. This identifies the protocol version being used.
method - Always "tools/call" for tool invocations. This tells the server that this is a tool invocation request.
params - Contains the tool invocation details:
name - The name of the tool to invoke (must match a tool name in your MCP server)
arguments - An object containing the parameters to pass to the tool. The structure matches your tool's parameter schema.
id - A unique identifier for this request. The server uses this to match responses to requests.
Real-World Example
Here's a complete example of a tool invocation for a weather tool:
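The original sample is not preserved here; a sketch of such an invocation, using the tool name and arguments described in the surrounding text, would be:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "city": "New York",
      "units": "celsius"
    }
  }
}
```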
This message tells the server to invoke the "get_weather" tool with city="New York" and units="celsius". The server receives this, finds the tool, and executes it with these parameters.
How Your Tool Receives This
MCP Functions handles all the JSON-RPC protocol details for you. When the server receives this message, it:
Extracts the tool name ("get_weather")
Finds that tool in your MCP server
Extracts the arguments ({ city: "New York", units: "celsius" })
Calls your tool's execute function with these arguments as the params parameter
In your tool code, you receive the arguments like this:
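A minimal sketch of an execute function (the function body and return shape are illustrative, not the platform's exact template):

```javascript
// Illustrative execute function for the weather tool.
// `params` carries the arguments extracted from the JSON-RPC message;
// `config` carries workspace secrets (unused in this sketch).
async function execute(params, config) {
  const city = params.city;   // e.g. "New York"
  const units = params.units; // e.g. "celsius"
  return {
    content: [{ type: "text", text: `Weather requested for ${city} (${units})` }]
  };
}
```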
You don't need to worry about JSON-RPC - MCP Functions handles all of that automatically.
Parameter Passing and Validation
Parameters are the primary way data flows from the AI assistant into your tool. Understanding how parameters work helps you build tools that receive data correctly.
How Parameters Are Passed
When an AI assistant invokes your tool, it provides parameters based on your tool's parameter schema. The AI assistant:
Reads your tool's parameter schema (which parameters are required, what types they are, etc.)
Extracts relevant information from the user's question
Formats that information according to your parameter schema
Passes it to your tool
For example, if your tool has this parameter schema:
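A hypothetical schema for such a tool (the property names `email` and `includeOrders` are illustrative):

```json
{
  "type": "object",
  "properties": {
    "email": {
      "type": "string",
      "format": "email",
      "description": "The customer's email address"
    },
    "includeOrders": {
      "type": "boolean",
      "description": "Whether to include the customer's order history"
    }
  },
  "required": ["email"]
}
```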
And a user asks "Get customer info for [email protected] and include their orders", the AI might invoke your tool with:
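Assuming illustrative parameter names like `email` and `includeOrders`, the arguments object might look like (keeping the redacted email placeholder from the example):

```json
{
  "email": "[email protected]",
  "includeOrders": true
}
```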
Parameter Validation
MCP Functions validates parameters before executing your tool:
Required parameters - If a required parameter is missing, the invocation fails with a clear error
Type validation - Parameters are checked to ensure they match the expected types (string, number, boolean, etc.)
Format validation - If you specify formats (like "email"), the platform validates that parameters match those formats
Enum validation - If you specify allowed values, the platform ensures parameters are one of those values
If validation fails, your tool never executes - the error is returned immediately. This prevents your tool from receiving invalid data.
Handling Optional Parameters
Optional parameters work as expected:
If an optional parameter is provided, your tool receives it
If an optional parameter is not provided, it will be undefined in your params object
You can use default values in your code:
const units = params.units || 'celsius';
Best Practices for Parameters
Use clear parameter names - Names like "email" or "customerId" are better than "p1" or "data"
Write good descriptions - Help AI assistants understand what each parameter is for
Mark required parameters - Clearly indicate which parameters are required
Use appropriate types - Choose the right type (string, number, boolean) for each parameter
Add validation rules - Use format, enum, min/max to guide AI assistants
Validate in your code too - Even though the platform validates, add validation in your tool code for extra safety
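The last point can be sketched as a defensive check inside your tool code (the helper and its error shape are hypothetical, not a platform API):

```javascript
// In-code validation, in addition to the platform's schema validation.
// Returning a structured error lets the AI assistant see what went wrong.
function validateParams(params) {
  if (typeof params.email !== "string" || !params.email.includes("@")) {
    return { valid: false, error: "Parameter 'email' must be a valid email address" };
  }
  return { valid: true };
}
```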
Understanding Tool Execution
Once an invocation request is received and validated, your tool executes. Understanding this process helps you write tools that work correctly.
The Execution Sequence
When your tool is invoked, here's what happens:
Request received - The MCP server receives the tool invocation request
Tool lookup - The server finds your tool by name
Validation - The server validates parameters and tool code
Sandbox creation - A fresh sandbox environment is created
Parameter injection - Parameters are passed to your tool's execute function as the params object
Config injection - Workspace secrets are passed as the config object
Code execution - Your tool's code runs in the sandbox
Result capture - The return value is captured
Response sent - The result is sent back to the AI assistant
What Your Tool Receives
Your tool's execute function receives two parameters:
params - An object containing the parameters provided by the AI assistant. This matches your tool's parameter schema.
config - An object containing workspace secrets. These are the secrets you stored in your workspace, accessible by their names.
For example, if the AI invokes your tool with { city: "New York" } and your workspace has a secret API_KEY, your function receives:
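A sketch of the two arguments in that scenario (the secret value is a placeholder):

```javascript
// What the execute function receives in this example (values illustrative):
const params = { city: "New York" };    // extracted by the AI assistant
const config = { API_KEY: "sk-..." };   // from your workspace secrets
```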
What Your Tool Should Return
Your tool must return a value in a specific format:
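The exact format is not preserved in this page's source; a plausible sketch, following the standard MCP tool-result shape (the platform's format may differ), is:

```javascript
// A plausible return value: a content array of typed parts,
// following the standard MCP tool-result convention.
const result = {
  content: [
    { type: "text", text: JSON.stringify({ temperature: 18, units: "celsius" }) }
  ]
};
```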
This standardized format ensures AI assistants can understand and use your tool's results consistently.
Understanding Tool Responses
After your tool executes, the result is sent back to the AI assistant. Understanding how responses work helps you return data that AI assistants can use effectively.
Response Format
Tool responses are sent back to the AI assistant in JSON-RPC format:
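The original sample is not preserved here; a sketch of the JSON-RPC 2.0 response envelope (the result payload shape is illustrative) is:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [
      { "type": "text", "text": "..." }
    ]
  }
}
```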
The AI assistant receives this, extracts the data, and uses it to answer the user's question.
Success Responses
When your tool returns successfully, the response contains your tool's return value:
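For the weather example, the result payload might carry something like (values illustrative, shape following the MCP content convention):

```json
{
  "content": [
    {
      "type": "text",
      "text": "{\"city\":\"New York\",\"temperature\":18,\"units\":\"celsius\"}"
    }
  ]
}
```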
The AI assistant receives this data and can use it to answer questions, perform calculations, or make decisions.
Error Responses
When your tool returns an error, the response includes error information:
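One plausible shape, following the MCP convention of flagging tool errors with an isError field (the platform's exact format may differ):

```json
{
  "content": [
    { "type": "text", "text": "Customer not found for the given email address" }
  ],
  "isError": true
}
```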
The AI assistant receives this error and can either try again with different parameters, use a different tool, or explain the error to the user.
Streaming Responses
For long-running operations, responses can be streamed in real-time using Server-Sent Events (SSE). This allows:
Progress updates during execution
Partial results as they become available
Better user experience for slow operations
MCP Functions handles streaming automatically - you don't need to do anything special in your tool code.
Best Practices for Responses
Return structured data - Use consistent data structures that are easy for AI assistants to understand
Include relevant context - Return all the data the AI might need, not just the minimum
Use clear error messages - Error messages should explain what went wrong and ideally how to fix it
Format data appropriately - Use appropriate data types and formats (dates as ISO strings, numbers as numbers, etc.)
Don't expose secrets - Never include API keys, passwords, or other secrets in responses
Error Handling During Invocation
Errors can occur at various stages of tool invocation. Understanding how errors are handled helps you build robust tools and debug issues.
Types of Invocation Errors
Errors can occur at different stages:
1. Tool Not Found
When: The AI assistant tries to invoke a tool that doesn't exist in your MCP server
Cause: Tool name mismatch, tool was deleted, or tool is in a different server
Response: The server returns an error indicating the tool wasn't found
What the AI sees: "Tool 'tool_name' not found"
2. Tool Inactive
When: The AI assistant tries to invoke a tool that exists but is set to inactive
Cause: Tool was deactivated for maintenance or testing
Response: The server returns an error indicating the tool is inactive
What the AI sees: "Tool 'tool_name' is not active"
3. Invalid Parameters
When: The parameters provided don't match the tool's parameter schema
Cause: Missing required parameters, wrong parameter types, or invalid values
Response: The server returns an error with details about what's wrong
What the AI sees: "Invalid parameters: email is required" or similar
4. Execution Errors
When: Your tool code throws an error or returns an error response
Cause: Runtime errors in your code, API failures, or your tool explicitly returning an error
Response: The error from your tool is passed back to the AI assistant
What the AI sees: The error message from your tool's return value
5. Timeout Errors
When: Your tool takes longer than 30 seconds to execute
Cause: Slow operations, infinite loops, or waiting for slow APIs
Response: Execution is terminated and a timeout error is returned
What the AI sees: "Execution timeout: Tool exceeded maximum execution time"
How AI Assistants Handle Errors
When an error occurs, AI assistants can:
Retry with different parameters - If parameters seem wrong, the AI might try again
Use a different tool - If one tool fails, the AI might try an alternative
Explain the error to the user - The AI can tell the user what went wrong
Ask for clarification - The AI might ask the user for more information
This is why clear error messages are important - they help AI assistants understand what went wrong and how to proceed.
Best Practices for Error Handling
Return clear error messages - Explain what went wrong in terms users can understand
Handle errors in your code - Use try-catch blocks to catch and handle errors gracefully
Validate inputs - Check parameters in your code even though the platform also validates
Provide actionable errors - When possible, tell the AI or user how to fix the error
Don't expose sensitive information - Error messages should never include API keys, passwords, or internal details
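The practices above can be sketched together in a hypothetical execute function (the API URL and secret name are illustrative assumptions):

```javascript
// Hypothetical customer-lookup tool: catch failures and return clear,
// non-sensitive error messages instead of letting exceptions escape.
async function execute(params, config) {
  try {
    const url = `https://api.example.com/customers?email=${encodeURIComponent(params.email)}`;
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${config.API_KEY}` }
    });
    if (!response.ok) {
      // Actionable, and does not leak the API key or raw response internals.
      return {
        content: [{ type: "text", text: `Lookup failed (HTTP ${response.status}). Check that the email is correct.` }],
        isError: true
      };
    }
    return { content: [{ type: "text", text: JSON.stringify(await response.json()) }] };
  } catch (err) {
    return {
      content: [{ type: "text", text: "Customer lookup failed: the upstream API was unreachable." }],
      isError: true
    };
  }
}
```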
Concurrent Tool Invocations
Multiple tool invocations can happen simultaneously. Understanding how this works helps you build tools that handle concurrent usage correctly.
How Concurrency Works
MCP Functions supports concurrent tool invocations:
Multiple users - Different users can trigger tool invocations at the same time
Multiple AI assistants - Different AI assistants can call your tools simultaneously
Multiple tools - The same AI assistant can call multiple tools in parallel
Isolated execution - Each invocation runs in its own isolated sandbox
Isolation Between Invocations
Each tool invocation is completely isolated:
Separate sandboxes - Each invocation gets its own sandbox environment
No shared state - Invocations can't access each other's data or variables
Independent execution - One slow invocation doesn't block others
Separate resources - Each invocation has its own memory and CPU limits
This means you don't need to worry about thread safety or concurrent access - each invocation is independent.
What This Means for Your Tools
Because invocations are isolated, your tools should:
Not rely on global state - Don't expect data to persist between invocations
Be stateless - Each invocation should work independently
Handle their own data - Get all needed data from parameters, config, or external APIs
Not assume execution order - Invocations can happen in any order
This is actually a good thing - it makes your tools more reliable and easier to reason about.
Tool Invocation Patterns
Understanding common invocation patterns helps you design tools that work well with AI assistants.
Single Tool Invocation
The most common pattern - the AI calls one tool to answer a question:
User asks: "What's the weather in New York?"
AI invokes: get_weather tool with city="New York"
AI uses result: To answer the user's question
This is the simplest and most common pattern.
Sequential Tool Invocations
Sometimes the AI needs to call multiple tools in sequence:
User asks: "Get customer info for [email protected] and create a support ticket"
AI invokes: get_customer_by_email with email="[email protected]"
AI receives: Customer data including customer ID
AI invokes: create_support_ticket with customerId from previous result
AI uses results: To confirm the ticket was created
The AI uses results from one tool to call another tool.
Parallel Tool Invocations
Sometimes the AI calls multiple tools in parallel:
User asks: "What's the weather in New York and London?"
AI invokes: get_weather for both cities simultaneously
AI receives: Results from both invocations
AI combines: Results to answer the user
This is more efficient than calling tools sequentially when they don't depend on each other.
Conditional Tool Invocations
Sometimes the AI decides whether to call a tool based on conditions:
User asks: "If customer [email protected] exists, update their status to active"
AI invokes: get_customer_by_email to check if customer exists
If customer exists: AI invokes update_customer_status
If customer doesn't exist: AI tells the user the customer wasn't found
The AI uses tool results to make decisions about what to do next.
Monitoring Tool Invocations
Understanding how your tools are being invoked helps you improve them and debug issues. MCP Functions provides comprehensive logging and monitoring.
What Gets Logged
Every tool invocation is logged with detailed information:
Invocation metadata:
Tool name that was invoked
Timestamp of invocation
Parameters that were provided
Which AI assistant or client invoked it
Execution details:
Execution time
Memory usage
Console output from your tool
Result or error returned
Performance metrics:
How long the tool took to execute
Resource usage during execution
Success/failure rate
Viewing Invocation Logs
You can view invocation logs in the MCP Functions dashboard:
Navigate to your tool or MCP server
Open the "Execution Logs" or "History" section
View a list of all invocations
Click on any invocation to see detailed logs
Using Logs to Improve Tools
Invocation logs help you:
Understand usage patterns - See which tools are used most, what parameters are common, etc.
Debug issues - When a tool fails, logs show exactly what happened
Optimize performance - Identify slow tools and optimize them
Improve descriptions - If tools aren't being invoked, you might need better descriptions
Fix parameter issues - See what parameters are being provided and adjust your schema if needed
Best Practices for Tool Invocation
Following best practices helps ensure your tools get invoked correctly and work well with AI assistants:
1. Write Excellent Tool Descriptions
Clearly explain what your tool does
Describe when it should be used
Explain what it returns
Use natural language that AI assistants can understand
2. Design Clear Parameter Schemas
Use descriptive parameter names
Write clear parameter descriptions
Mark required vs optional parameters clearly
Use appropriate types and validation rules
3. Return Structured Data
Always return data in the standard format
Include all relevant information
Use consistent data structures
Format data appropriately (dates, numbers, etc.)
4. Handle Errors Gracefully
Return clear, actionable error messages
Validate inputs in your code
Handle edge cases
Don't expose sensitive information in errors
5. Test Your Tools
Test with various parameter combinations
Test error scenarios
Test with realistic data
Verify AI assistants can discover and use your tools
6. Monitor Usage
Regularly check invocation logs
Look for patterns in how tools are used
Identify tools that aren't being invoked
Optimize based on usage patterns
Summary: Key Concepts
Here are the key concepts to remember about tool invocation:
Invocation is automatic - AI assistants automatically decide when to invoke your tools based on user questions and tool descriptions
Parameters flow from AI to tool - The AI extracts information from user questions and passes it as parameters
Results flow from tool to AI - Your tool returns data that the AI uses to answer questions
Everything is logged - All invocations are logged for debugging and monitoring
Invocations are isolated - Each invocation runs independently in its own sandbox
Errors are handled gracefully - Errors are returned to the AI, which can handle them appropriately
Good descriptions matter - Clear tool descriptions help AI assistants know when to use your tools
Understanding tool invocation helps you build better tools that work effectively with AI assistants. Focus on writing clear descriptions, designing good parameter schemas, and returning structured data, and the AI assistants will handle the rest automatically.