# AI Agents

Enable your AI agents to send and manage images with the IMG Processing API.
## Introduction

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. IMG Processing provides an MCP server that enables AI agents to send and manage images using the IMG Processing API.
This page describes the tools available in IMG Processing’s MCP server, which allows AI agents to interact with the IMG Processing platform.
## Currently available tools
Tools are individual operations exposed by the IMG Processing MCP server. Each tool corresponds to a specific API capability, such as uploading an image, resizing it, or removing its background. These tools are grouped by resource type and can be invoked by LLM agents through the MCP protocol.
| Tool | Operation | Maps to |
| --- | --- | --- |
| `retrieve_images` | read | Get image |
| `list_images` | read | List images |
| `delete_images` | write | Delete image |
| `add_watermark_images` | write | Add watermarks |
| `blur_images` | write | Blur image |
| `classify_images` | write | Classify image |
| `convert_images` | write | Convert image |
| `create_from_url_images` | write | Create from URL |
| `crop_images` | write | Crop image |
| `download_images` | read | Download image |
| `extract_formatted_text_images` | write | Extract formatted text |
| `imagine_images` | write | Imagine image |
| `mirror_images` | write | Mirror image |
| `modulate_images` | write | Modulate image |
| `publish_images` | write | Publish image |
| `remove_background_images` | write | Remove background |
| `resize_images` | write | Resize image |
| `rotate_images` | write | Rotate image |
| `unpublish_images` | write | Unpublish image |
| `upload_images` | write | Upload image |
| `visualize_images` | write | Visualize image |
## Installation

### Direct invocation

You can run the MCP server directly via `npx`:
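The exact command depends on how the server is distributed; the sketch below assumes a hypothetical package name (`img-processing-mcp`) and a hypothetical `IMG_PROCESSING_API_KEY` environment variable, so substitute the names from the IMG Processing documentation:

```sh
# Hypothetical package and environment variable names
export IMG_PROCESSING_API_KEY="YOUR_API_KEY"
npx -y img-processing-mcp
```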
### Via MCP Client

There is a partial list of existing clients at modelcontextprotocol.io. If you already have a client, consult its documentation to install the MCP server.
For clients with a configuration JSON, it might look something like this:
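As a sketch, assuming the same hypothetical `img-processing-mcp` package and `IMG_PROCESSING_API_KEY` variable as above, the entry could look like:

```json
{
  "mcpServers": {
    "img_processing_api": {
      "command": "npx",
      "args": ["-y", "img-processing-mcp"],
      "env": {
        "IMG_PROCESSING_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```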
## Exposing endpoints to your MCP Client
There are two ways to expose endpoints as tools in the MCP server:
- Exposing one tool per endpoint, and filtering as necessary
- Exposing a set of tools to dynamically discover and invoke endpoints from the API
### Filtering endpoints and tools
You can run the package on the command line to discover and filter the set of tools that are exposed by the MCP Server. This can be helpful for large APIs where including all endpoints at once is too much for your AI’s context window.
You can filter by multiple aspects:
- `--tool` includes a specific tool by name
- `--resource` includes all tools under a specific resource, and can have wildcards, e.g. `my.resource*`
- `--operation` includes just read (get/list) or just write operations
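For instance, a sketch invocation that exposes only two specific tools (again using the hypothetical `img-processing-mcp` package name) might be:

```sh
# Hypothetical package name; --tool can be repeated to include several tools
npx -y img-processing-mcp --tool upload_images --tool resize_images
```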
### Dynamic tools

If you specify `--tools=dynamic` to the MCP server, instead of exposing one tool per endpoint in the API, it will expose the following tools:
- `list_api_endpoints` - Discovers available endpoints, with optional filtering by search query
- `get_api_endpoint_schema` - Gets detailed schema information for a specific endpoint
- `invoke_api_endpoint` - Executes any endpoint with the appropriate parameters
This allows you to have the full set of API endpoints available to your MCP Client, while not requiring that all of their schemas be loaded into context at once. Instead, the LLM will automatically use these tools together to search for, look up, and invoke endpoints dynamically. However, because the schemas are loaded indirectly, the LLM can struggle a bit more to provide the correct properties than when tools are imported explicitly. Therefore, you can opt in to explicit tools, the dynamic tools, or both.
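A sketch of enabling dynamic tools, with the same hypothetical package name as above:

```sh
# Hypothetical package name; exposes the three discovery tools
# instead of one tool per endpoint
npx -y img-processing-mcp --tools=dynamic
```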
See more information with `--help`.

All of these command-line options can be repeated, combined together, and have corresponding exclusion versions (e.g. `--no-tool`).

Use `--list` to see the list of available tools, or see the table above.
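For example (hypothetical package name):

```sh
# Show all command-line options
npx -y img-processing-mcp --help

# List the tools that would be exposed with the current filters
npx -y img-processing-mcp --list
```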
### Specifying the MCP Client
Different clients have varying abilities to handle arbitrary tools and schemas. You can specify the client you are using with the `--client` argument, and the MCP server will automatically serve tools and schemas that are more compatible with that client.
- `--client=<type>`: Set all capabilities based on a known MCP client
  - Valid values: `openai-agents`, `claude`, `claude-code`, `cursor`
  - Example: `--client=cursor`
Additionally, if you have a client not on the above list, or the client has gotten better over time, you can manually enable or disable certain capabilities:
- `--capability=<name>`: Specify individual client capabilities
  - Available capabilities:
    - `top-level-unions`: Enable support for top-level unions in tool schemas
    - `valid-json`: Enable JSON string parsing for arguments
    - `refs`: Enable support for $ref pointers in schemas
    - `unions`: Enable support for union types (anyOf) in schemas
    - `formats`: Enable support for format validations in schemas (e.g. date-time, email)
    - `tool-name-length=N`: Set maximum tool name length to N characters
  - Example: `--capability=top-level-unions --capability=tool-name-length=40`
  - Example: `--capability=top-level-unions,tool-name-length=40`
### Examples
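The invocations below are sketches: the `img-processing-mcp` package name is hypothetical, and the `images` resource name is inferred from the tool names in the table above.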
- Filter for read operations on images:
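  ```sh
  # Hypothetical package name
  npx -y img-processing-mcp --resource images --operation read
  ```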
- Exclude specific tools while including others:
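  ```sh
  # Hypothetical package name
  npx -y img-processing-mcp --resource images --no-tool delete_images
  ```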
- Configure for Cursor client with custom max tool name length:
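  ```sh
  # Hypothetical package name
  npx -y img-processing-mcp --client=cursor --capability=tool-name-length=40
  ```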
- Complex filtering with multiple criteria:
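  ```sh
  # Hypothetical package name
  npx -y img-processing-mcp --resource images --operation write --tool list_images --no-tool delete_images
  ```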