Testing Tools

Testing your tools before deploying them to production is essential for catching bugs while they are still cheap to fix. This page explains how to test your tools effectively, validate your code, and debug issues when they arise.

Why Testing Matters

Testing your tools before deploying them provides several important benefits:

  • Catches bugs early - Find and fix issues before they affect users

  • Validates functionality - Ensure your tool works as expected with different inputs

  • Verifies error handling - Make sure your tool handles errors gracefully

  • Confirms security - Ensure your code doesn't use blocked features

  • Builds confidence - Know that your tool works before making it available

The platform provides built-in testing tools that make it easy to test your tools without deploying them. You can test as many times as you want, with different inputs, until you're confident your tool works correctly.

Using the Built-In Testing Interface

The easiest way to test your tools is the built-in testing interface in the platform dashboard, which provides a user-friendly way to run tests without writing any code.

How to Access the Testing Interface

  • Navigate to your tool in the dashboard

  • Click the "Test" button or "Test Tool" option

  • The testing interface will open, showing your tool's code and parameter fields

Using the Testing Interface

The testing interface allows you to:

  • Enter test parameters - Fill in the parameter fields with test values

  • Run the tool - Click "Run Test" to execute your tool

  • View results - See the return value, execution time, and any console output

  • See errors - If something goes wrong, see detailed error messages

  • Test multiple times - Run the tool multiple times with different inputs

Example: Testing a Weather Tool

Let's say you have a weather tool that takes a city name. To test it:

  • Open the testing interface for your weather tool

  • Enter "New York" in the "city" parameter field

  • Click "Run Test"

  • View the results - you should see the weather data for New York

  • Try testing with different cities to ensure it works with various inputs

  • Try testing with invalid inputs (like an empty city) to verify error handling
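To ground this walkthrough, here is a minimal sketch of what such a weather tool's execute function might look like. Everything here is illustrative: a real tool would call a weather API, which is stubbed out below so the sketch stays self-contained.

```javascript
// Hypothetical weather tool. The API call is stubbed out so the sketch is
// self-contained; a real tool would fetch from a weather service instead.
async function execute(params) {
  if (typeof params.city !== "string" || params.city.trim() === "") {
    // Invalid input: return a clear error instead of failing silently.
    return { error: "city is required" };
  }

  // Stubbed lookup standing in for the real API call.
  const stubbedData = { "New York": { temperature: 72, conditions: "Sunny" } };
  const weather = stubbedData[params.city] || { temperature: 65, conditions: "Unknown" };

  return { city: params.city, ...weather };
}
```

With this sketch, entering "New York" in the testing interface returns the stubbed sunny forecast, while an empty city returns the error object, matching the valid and invalid cases above.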

Using the Test Execution API

For programmatic testing, or for integration with CI/CD pipelines, you can use the test execution API endpoint.

The Test Endpoint

The test execution endpoint allows you to execute tool code with test parameters:

Request Body

  • code - The JavaScript code for your tool (the execute function and any helper code)

  • params - An object containing test parameter values. These should match your tool's parameter schema.

  • config - An object containing test secret values. Use this to simulate workspace secrets during testing.

Response Format

The API returns a response containing the execution result (or error), any captured console output, and execution metadata such as timing.
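The exact shape depends on your platform, but a successful test response might look something like this (the field names here are assumptions for illustration):

```json
{
  "success": true,
  "result": { "city": "New York", "temperature": 72 },
  "logs": [
    { "level": "log", "message": "Execution started" }
  ],
  "executionTimeMs": 42
}
```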

Example: Testing via API

Here's an example of testing a tool via the API:
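The sketch below builds a request body with the code, params, and config fields described above. The endpoint URL is a placeholder (check your platform's API reference for the real path), and the fetch call is left commented out so the snippet runs without network access.

```javascript
// Hypothetical endpoint path; check your platform's API reference for the real one.
const TEST_ENDPOINT = "https://api.example.com/v1/tools/test";

// Build the request from the fields described above: code, params, and config.
function buildTestRequest(code, params, config) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code, params, config }),
  };
}

const request = buildTestRequest(
  'async function execute(params) { return { greeting: "Hello, " + params.name }; }',
  { name: "Ada" },        // test parameter values
  { API_KEY: "test-key" } // simulated workspace secrets
);

// In a real script you would then send it:
// const response = await fetch(TEST_ENDPOINT, request);
// const results = await response.json();
```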

When to Use the API

Use the API endpoint when:

  • You want to automate testing (CI/CD pipelines)

  • You're building testing tools or scripts

  • You need to test tools programmatically

  • You want to integrate testing into your development workflow

For most use cases, the built-in testing interface is easier and more convenient.

Validating Code Before Execution

Before executing your tool, you can validate the code to check for syntax errors and security violations. This is useful for catching issues early without actually running the code.

The Validation Endpoint

The validation endpoint checks your code without executing it and reports any problems it finds.

What Validation Checks

Validation checks for:

  • Syntax errors - Ensures your JavaScript code is syntactically correct

  • Blocked features - Detects attempts to use features that aren't allowed (like require(), file system access, etc.)

  • Code structure - Verifies that your code has the required execute function

  • Security violations - Identifies potential security issues

Validation Response

The validation endpoint returns detailed results, including whether the code is valid and a list of any errors found.

When to Use Validation

Use validation when:

  • You want to check code before saving it

  • You're writing code manually and want to catch errors early

  • You want to verify code doesn't use blocked features

  • You're building a code editor and want real-time validation

Validation is automatically performed when you save a tool, but you can also validate manually to check code before saving.

Example: Validating Code
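As a rough illustration of the checks described above, here is a local pre-flight validator. This is not the platform's validator (which is authoritative and likely parses the code properly); it is only a sketch of the same ideas using a syntax parse and a crude pattern scan.

```javascript
// A local pre-flight sketch of the checks above. The platform's validator is
// authoritative; this crude version only approximates it.
const BLOCKED_PATTERNS = [/\brequire\s*\(/, /\bprocess\./, /\bfs\./];

function preValidate(code) {
  const errors = [];

  // Syntax check: try to parse the code without running it.
  try {
    new Function(code);
  } catch (err) {
    errors.push(`Syntax error: ${err.message}`);
  }

  // Blocked-feature scan (a real validator would inspect the parsed AST,
  // not match patterns, so this can misfire on strings and comments).
  for (const pattern of BLOCKED_PATTERNS) {
    if (pattern.test(code)) {
      errors.push(`Blocked feature detected: ${pattern}`);
    }
  }

  // Structure check: the code must define an execute function.
  if (!/function\s+execute\s*\(|execute\s*=/.test(code)) {
    errors.push("Missing required execute function");
  }

  return { valid: errors.length === 0, errors };
}
```

For example, preValidate("const x = require('fs');") reports both a blocked require() call and a missing execute function.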

Debugging Your Tools

Debugging is an essential skill for building reliable tools. The platform provides several tools to help you understand what's happening during execution and fix issues.

Using Console Logging

The primary debugging tool is console logging. You can use console.log, console.error, console.warn, and console.info to output information during execution:
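For example, a hypothetical temperature-conversion tool might log its progress like this (the tool itself is just an illustration):

```javascript
async function execute(params) {
  // Log when execution starts, including the inputs received.
  console.log("Execution started with params:", JSON.stringify(params));

  if (typeof params.celsius !== "number") {
    // Use console.error so failures stand out in the captured output.
    console.error("Invalid input: 'celsius' must be a number");
    return { error: "celsius must be a number" };
  }

  const fahrenheit = params.celsius * 9 / 5 + 32;
  console.info("Converted", params.celsius, "C to", fahrenheit, "F");

  // Log just before returning so the final result appears in the logs.
  console.log("Returning result:", fahrenheit);
  return { fahrenheit };
}
```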

Best Practices for Console Logging

  • Log at key points - Log when execution starts, at important decision points, and before returning

  • Log variable values - Log important variables to see their values during execution

  • Use descriptive messages - Make log messages clear so you understand what's happening

  • Log errors - Use console.error for errors to make them stand out

  • Don't log secrets - Never log API keys, passwords, or other sensitive data

  • Remove debug logs in production - Consider removing excessive logging once your tool is working

Example: Comprehensive Debugging
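The sketch below is a hypothetical data-processing tool instrumented along these lines: it logs its inputs, warns on unexpected shapes, times itself, and catches and logs unexpected errors before returning them.

```javascript
// Hypothetical data-processing tool instrumented for debugging.
async function execute(params) {
  const start = Date.now();
  console.log("Starting execution, params:", JSON.stringify(params));

  try {
    if (!Array.isArray(params.items)) {
      // Warn about the unexpected shape, then return a clear error.
      console.warn("Expected 'items' to be an array, got:", typeof params.items);
      return { error: "items must be an array of numbers" };
    }

    console.log(`Processing ${params.items.length} items`);
    const total = params.items.reduce((sum, n) => sum + n, 0);

    console.log(`Finished in ${Date.now() - start}ms`);
    return { total, count: params.items.length };
  } catch (err) {
    // Log the failure with context before returning it to the caller.
    console.error("Execution failed:", err.message);
    return { error: err.message };
  }
}
```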

Understanding Console Output

All console output is captured and available in:

  • Test results - When testing, you'll see all console output in the test results

  • Execution logs - In production, console output is saved to execution logs

  • Real-time during testing - When testing in the interface, you can see console output as it happens

Understanding Console Output Capture

The platform captures all console output from your tool execution. This makes it easy to debug issues and understand what your code is doing.

Available Console Methods

All standard console methods are available and captured:

  • console.log() - General informational messages. Use this for most debugging output.

  • console.error() - Error messages. Use this for errors and failures. These are highlighted in logs.

  • console.warn() - Warning messages. Use this for warnings about potential issues.

  • console.info() - Informational messages. Similar to console.log but semantically indicates informational content.

Console Output Format

Console output is captured along with metadata such as the log level and a timestamp.
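A captured entry might look something like the following; the exact field names are an assumption, so check your platform's API reference for the authoritative shape:

```json
{
  "level": "error",
  "message": "API request failed with status 401",
  "timestamp": "2024-01-15T10:30:00.000Z"
}
```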

Viewing Console Output

You can view console output in several places:

  • Testing interface - See console output in real-time when testing

  • Execution logs - View console output for all executions in the dashboard

  • API responses - Console output is included in test execution API responses

Best Practices

  • Use appropriate log levels - Use console.error for errors, console.warn for warnings, console.log for general info

  • Structure your logs - Use consistent formatting to make logs easier to read

  • Log important events - Log key decision points, API calls, and results

  • Don't over-log - Too much logging can make it hard to find important information

  • Never log secrets - Never log API keys, passwords, or other sensitive data

Understanding Execution Logs

Every tool execution is logged with detailed information. These logs are invaluable for debugging, monitoring, and understanding how your tools are being used.

What's Logged

Execution logs include comprehensive information:

  • Execution metadata - Execution ID (unique identifier), tool name and version, start and end times, execution duration, and status (success, error, timeout)

  • Parameters received - The parameters passed to your tool. Sensitive values may be sanitized (e.g., passwords might be shown as "***")

  • Console output - All console.log, console.error, console.warn, and console.info output from your code

  • Result or error - What your tool returned or what error occurred

  • Resource usage - Memory used (in bytes), CPU time used, and execution time

  • Error details - If an error occurred, detailed error information including the error type, error message, and stack trace (if available)

Accessing Execution Logs

You can access execution logs in the platform dashboard:

  • Navigate to your tool

  • Click on "Execution Logs" or "History"

  • View a list of all executions

  • Click on any execution to see detailed logs

Using Logs for Debugging

Execution logs are your primary tool for debugging production issues:

  • See what parameters were received - Understand what data the AI assistant provided

  • Review console output - See what your code logged during execution

  • Check execution time - See if your tool is running slowly

  • Review errors - See detailed error information when things go wrong

  • Compare executions - Compare successful and failed executions to identify patterns

Example: Using Logs to Debug

Let's say a tool is failing in production. Here's how you'd use logs to debug:

  • Open the execution logs for your tool

  • Find a failed execution

  • Check the error message - it might say "API returned error: 401"

  • Check the console output - you might see "API key found (length: 0 characters)" indicating the API key is empty

  • Check the parameters - verify the AI assistant is providing the expected data

  • Compare with a successful execution to see what's different

Log Retention

Execution logs are retained for a period of time (typically 30-90 days depending on your plan). This allows you to:

  • Debug issues that occurred in the past

  • Analyze usage patterns over time

  • Audit tool executions for compliance

  • Identify trends and patterns

Testing Best Practices

Following best practices helps you build reliable, well-tested tools:

1. Test with Realistic Data

  • Test with data similar to what you'll receive in production

  • Test with edge cases (empty strings, very long strings, special characters)

  • Test with different data types and formats

2. Test Error Scenarios

  • Test with missing required parameters

  • Test with invalid parameter values

  • Test with missing secrets

  • Test API failure scenarios (simulate 404, 500 errors)

  • Test timeout scenarios (if possible)

3. Test Multiple Times

  • Run the same test multiple times to ensure consistency

  • Test with different parameter combinations

  • Test both success and failure paths

4. Review Console Output

  • Check that console output makes sense

  • Verify that important steps are being logged

  • Ensure no sensitive data is being logged

5. Check Execution Time

  • Monitor how long executions take

  • Optimize if executions are slow

  • Ensure executions complete well within the 30-second timeout

6. Test Before Deploying

  • Always test thoroughly before making a tool active

  • Test after making changes to existing tools

  • Re-test if you update dependencies or change configuration

Common Testing Scenarios

Here are some common scenarios you should test:

Happy Path Testing

Test that your tool works correctly with valid inputs:

  • Provide all required parameters with valid values

  • Ensure all required secrets are configured

  • Verify the tool returns the expected result

  • Check that the return format is correct

Validation Testing

Test that your tool validates inputs correctly:

  • Test with missing required parameters

  • Test with invalid parameter types

  • Test with invalid parameter values (e.g., invalid email format)

  • Verify that clear error messages are returned
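These cases can be scripted against a tool directly. The hypothetical tool below requires a valid-looking email, and the test table walks through the missing, wrong-type, bad-format, and valid cases.

```javascript
// Hypothetical tool under test: requires a valid-looking email address.
async function execute(params) {
  if (!params.email) return { error: "email is required" };
  if (typeof params.email !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(params.email)) {
    return { error: "invalid email format" };
  }
  return { ok: true, email: params.email.toLowerCase() };
}

// Each invalid input should yield a clear error; the valid one should not.
const cases = [
  { params: {}, expectError: true },                        // missing required parameter
  { params: { email: 42 }, expectError: true },             // invalid type
  { params: { email: "not-an-email" }, expectError: true }, // invalid value
  { params: { email: "User@Example.com" }, expectError: false },
];

for (const c of cases) {
  execute(c.params).then((result) => {
    if (Boolean(result.error) !== c.expectError) {
      throw new Error(`Unexpected result for ${JSON.stringify(c.params)}`);
    }
  });
}
```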

Error Handling Testing

Test that your tool handles errors gracefully:

  • Test with missing secrets

  • Test API failure scenarios

  • Test network error scenarios

  • Verify that errors are caught and returned properly
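One way to exercise API failures without a real network is to stub the HTTP call. The sketch below uses a fake fetch whose failure can be triggered from the test input; the error-handling pattern inside execute is the part that matters.

```javascript
// Stand-in for fetch so failures can be simulated without a network.
async function fakeFetch(url) {
  if (url.includes("fail")) return { ok: false, status: 500 };
  return { ok: true, status: 200, json: async () => ({ data: "ok" }) };
}

async function execute(params) {
  try {
    const response = await fakeFetch(params.url);
    if (!response.ok) {
      // Return a clear, structured error instead of throwing.
      return { error: `API returned error: ${response.status}` };
    }
    return await response.json();
  } catch (err) {
    // Network-level failures (DNS errors, timeouts) end up here.
    return { error: `Network error: ${err.message}` };
  }
}
```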

Edge Case Testing

Test edge cases that might cause issues:

  • Empty strings

  • Very long strings

  • Special characters

  • Null or undefined values

  • Empty arrays or objects

Summary: Testing Workflow

Here's a recommended testing workflow:

  • Write your tool code - Create or update your tool

  • Validate the code - Use the validation endpoint or interface to check for syntax errors and blocked features

  • Test with valid inputs - Test the happy path with realistic data

  • Test error scenarios - Test with invalid inputs, missing parameters, etc.

  • Review console output - Check logs to ensure everything is working as expected

  • Check execution time - Ensure your tool completes quickly

  • Deploy when ready - Once testing passes, deploy your tool

  • Monitor production logs - After deploying, monitor execution logs to catch any issues

Following this workflow helps ensure your tools are reliable, secure, and ready for production use.
