Configuration

MCP Compose is configured using a TOML file, typically named mcp_compose.toml. This page provides a comprehensive reference for all configuration options.

Configuration File Location

MCP Compose searches for configuration in the following order:

  1. Path specified via --config CLI argument
  2. mcp_compose.toml in the current directory
  3. mcp_compose.toml in parent directories (walking up to root)

Variable Substitution

Keep sensitive data like API keys out of your configuration files by referencing environment variables instead. MCP Compose automatically replaces variable placeholders with their actual values when the configuration is loaded.

Environment Variables

Environment variables let you externalize configuration values that may differ between environments (development, staging, production) or contain secrets that shouldn't be committed to version control. Use ${VAR_NAME} or $VAR_NAME syntax to reference them. If a referenced variable is not set, MCP Compose will raise an error at startup, helping you catch misconfigurations early.

[[servers.proxied.stdio]]
name = "my-server"
command = ["python", "server.py"]
env = { API_KEY = "${MY_API_KEY}" }
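
The substitution pass can be pictured as a small function applied to each string value in the config (an illustrative sketch, not MCP Compose's actual implementation):

```python
import os
import re

# Matches ${VAR_NAME} or bare $VAR_NAME references.
_VAR_PATTERN = re.compile(r"\$\{(\w+)\}|\$(\w+)")

def substitute_env_vars(value: str) -> str:
    """Replace ${VAR} / $VAR with os.environ[VAR]; error if unset."""
    def replace(match: re.Match) -> str:
        name = match.group(1) or match.group(2)
        if name not in os.environ:
            # Failing fast at load time surfaces misconfigurations early.
            raise ValueError(f"environment variable {name!r} is not set")
        return os.environ[name]
    return _VAR_PATTERN.sub(replace, value)

os.environ["MY_API_KEY"] = "secret-123"
print(substitute_env_vars("${MY_API_KEY}"))  # secret-123
```

Both reference syntaxes resolve to the same value; the error branch mirrors the startup failure described above.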

Special Variables

In addition to environment variables, MCP Compose provides built-in special variables that are automatically resolved based on context. These are particularly useful for creating portable configurations that can be moved between machines or shared with team members without modification.

Variable | Description
-------- | -----------
${MCP_COMPOSE_CONFIG_DIR} | Absolute path to the directory containing the config file

The MCP_COMPOSE_CONFIG_DIR variable is particularly useful for portable configurations with relative paths:

[[servers.proxied.stdio]]
name = "calculator"
command = ["python", "server.py"]
working_dir = "${MCP_COMPOSE_CONFIG_DIR}"
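
MCP Compose resolves this variable at load time; conceptually it is just a string replacement anchored to the config file's location (a minimal sketch under that assumption):

```python
from pathlib import Path

def resolve_special_vars(value: str, config_path: str) -> str:
    """Expand ${MCP_COMPOSE_CONFIG_DIR} to the config file's directory."""
    config_dir = str(Path(config_path).resolve().parent)
    return value.replace("${MCP_COMPOSE_CONFIG_DIR}", config_dir)

print(resolve_special_vars("${MCP_COMPOSE_CONFIG_DIR}/data",
                           "/etc/mcp/mcp_compose.toml"))  # /etc/mcp/data
```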

Composer Section

This is the main section where you give your unified MCP server a name and configure how it behaves when multiple downstream servers expose tools with the same name. Think of it as the identity and personality of your composed server.

[composer]
name = "my-unified-server" # Server name (required)
conflict_resolution = "prefix" # Tool name conflict strategy
log_level = "INFO" # Logging level
port = 8080 # Default port for HTTP transports

Conflict Resolution Strategies

When composing multiple MCP servers, it's common for different servers to expose tools with identical names (e.g., both a filesystem server and a database server might have a read tool). The conflict resolution strategy determines how MCP Compose handles these collisions. The prefix strategy is recommended as it preserves all tools while making their origin clear to clients.

Strategy | Description | Example Result
-------- | ----------- | --------------
prefix | Prefix tool name with server name | calculator_add
suffix | Suffix tool name with server name | add_calculator
error | Fail on conflict | Error raised
override | Last server wins | add
ignore | Skip conflicting tools | Tool skipped
custom | Use custom template | See tool_manager section
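
The strategies in the table can be sketched as a single dispatch function (illustrative only; MCP Compose's internal registration logic is not documented here):

```python
def resolve_conflict(strategy, server_name, tool_name, existing):
    """Return the composed name for tool_name from server_name, given
    the set of tool names already registered by earlier servers."""
    if tool_name not in existing:
        return tool_name                      # no collision, keep as-is
    if strategy == "prefix":
        return f"{server_name}_{tool_name}"
    if strategy == "suffix":
        return f"{tool_name}_{server_name}"
    if strategy == "override":
        return tool_name                      # replaces the earlier tool
    if strategy == "ignore":
        return None                           # caller skips this tool
    if strategy == "error":
        raise ValueError(f"tool name conflict: {tool_name!r}")
    raise ValueError(f"unknown strategy: {strategy!r}")

existing = {"add"}  # e.g. another server already registered "add"
print(resolve_conflict("prefix", "calculator", "add", existing))  # calculator_add
```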

Log Levels

The log level controls how much information MCP Compose outputs during operation. Use DEBUG during development to see detailed information about tool discovery, message routing, and server lifecycle. In production, INFO provides a good balance between visibility and noise, while ERROR minimizes output to only critical issues.

Level | Description
----- | -----------
DEBUG | Verbose debugging information
INFO | General operational messages
WARNING | Warning messages
ERROR | Error messages only

Transport Section

Choose how MCP clients (like Claude Desktop or VS Code) connect to your composed server. STDIO is the simplest option where the client launches MCP Compose as a subprocess, while Streamable HTTP runs as a standalone server that multiple clients can connect to over the network.

[transport]
# STDIO transport - for subprocess communication
stdio_enabled = true

# Streamable HTTP transport (recommended)
streamable_http_enabled = true
streamable_http_path = "/mcp"
streamable_http_cors_enabled = true

# SSE transport (deprecated - use streamable_http instead)
sse_enabled = false
sse_path = "/sse"
sse_cors_enabled = true

Option | Type | Default | Description
------ | ---- | ------- | -----------
stdio_enabled | bool | true | Enable STDIO transport
streamable_http_enabled | bool | false | Enable Streamable HTTP transport
streamable_http_path | string | "/mcp" | HTTP endpoint path
streamable_http_cors_enabled | bool | true | Enable CORS for HTTP
sse_enabled | bool | false | Enable SSE transport (deprecated)
sse_path | string | "/sse" | SSE endpoint path
sse_cors_enabled | bool | true | Enable CORS for SSE

Authentication Section

Protect your MCP server from unauthorized access by requiring clients to prove their identity. Choose from simple API keys for internal use, JWT tokens for stateless authentication, or OAuth2 for integration with existing identity providers like GitHub or Auth0.

[authentication]
enabled = false
providers = ["api_key"]
default_provider = "api_key"

API Key Authentication

API keys are the simplest form of authentication, ideal for internal services or development environments. Clients include the key in a request header, and MCP Compose validates it against the configured list. You can configure multiple keys to support key rotation or different clients with separate keys.

[authentication.api_key]
header_name = "X-API-Key"
keys = ["${MCP_API_KEY_1}", "${MCP_API_KEY_2}"]
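
On each request the configured header is checked against the key list. A minimal sketch of that check (the constant-time comparison is a general best practice, not a documented MCP Compose detail):

```python
import hmac

def check_api_key(headers, valid_keys, header_name="X-API-Key"):
    """Validate the API key header against the configured key list,
    using a constant-time comparison to resist timing attacks."""
    presented = headers.get(header_name)
    if presented is None:
        return False
    return any(hmac.compare_digest(presented, key) for key in valid_keys)

print(check_api_key({"X-API-Key": "k1"}, ["k1", "k2"]))  # True
```

Multiple keys let you rotate credentials: add the new key, migrate clients, then remove the old one.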

Basic Authentication

HTTP Basic Authentication uses a username and password combination, transmitted as a Base64-encoded header. While simple to implement, it should only be used over HTTPS to prevent credential interception. This method is suitable for simple setups or when integrating with systems that only support Basic auth.

[authentication.basic]
username = "${MCP_USERNAME}"
password = "${MCP_PASSWORD}"

JWT Authentication

JSON Web Tokens (JWT) provide stateless authentication where the token itself contains encoded claims about the user. MCP Compose validates the token's signature using the configured secret and checks the issuer and audience claims. JWTs are ideal for microservices architectures where you don't want the MCP server to maintain session state or query an external auth service for every request.

[authentication.jwt]
secret = "${JWT_SECRET}"
algorithm = "HS256"
issuer = "mcp-compose"
audience = "mcp-clients"
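
The signature and claim checks can be hand-rolled with the standard library to show what validation involves (a sketch only; production code should use a vetted JWT library and also verify expiry, which this omits):

```python
import base64
import hashlib
import hmac
import json

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def verify_hs256(token: str, secret: str, issuer: str, audience: str) -> dict:
    """Check an HS256 JWT's signature, then its iss/aud claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("iss") != issuer:
        raise ValueError("issuer mismatch")
    if claims.get("aud") != audience:
        raise ValueError("audience mismatch")
    return claims

# Build a token signed with the shared secret, then verify it.
secret = "s3cret"
header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url_encode(json.dumps(
    {"iss": "mcp-compose", "aud": "mcp-clients", "sub": "alice"}).encode())
sig = b64url_encode(hmac.new(secret.encode(), f"{header}.{payload}".encode(),
                             hashlib.sha256).digest())
token = f"{header}.{payload}.{sig}"
print(verify_hs256(token, secret, "mcp-compose", "mcp-clients")["sub"])  # alice
```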

Anaconda Authentication

Anaconda authentication validates bearer tokens using the Anaconda.org API. Clients must provide a valid Anaconda access token in the Authorization header. This is ideal for teams that already use Anaconda for package management and want to leverage their existing Anaconda accounts for MCP access control.

[authentication]
enabled = true
providers = ["anaconda"]
default_provider = "anaconda"

[authentication.anaconda]
domain = "anaconda.com"

Option | Type | Default | Description
------ | ---- | ------- | -----------
domain | string | "anaconda.com" | Anaconda domain. Use your custom domain for Anaconda Enterprise.

Clients authenticate by including their Anaconda token in the Authorization header:

Authorization: Bearer <your_anaconda_token>

Fallback Mode

For local development, you can enable fallback mode, which allows MCP Compose to use your locally stored Anaconda credentials (from anaconda login) when the client provides no Bearer token. This is useful for testing without explicitly passing tokens.

Set the environment variable before starting MCP Compose:

export MCP_COMPOSE_ANACONDA_TOKEN="fallback"
mcp-compose serve --config mcp_compose.toml

In fallback mode, the authenticator:

  1. First tries to use the Bearer token if provided by the client
  2. If no token is provided, attempts to retrieve the token from the local anaconda-auth library (your anaconda login session)
  3. Raises an authentication error if neither method succeeds

Warning: Fallback mode is intended for local development only. In production, always require clients to provide explicit Bearer tokens.

OAuth2 Authentication

OAuth2 enables integration with enterprise identity providers like GitHub, Auth0, Okta, or any OpenID Connect-compatible service. Users authenticate through the identity provider's login flow, and MCP Compose receives tokens that can be validated and used to determine user identity. This is the recommended approach for production deployments where you want centralized user management and single sign-on (SSO) capabilities.

[authentication.oauth2]
provider = "generic" # github, auth0, okta, or generic
authorization_endpoint = "https://id.example.com/oauth2/authorize"
token_endpoint = "https://id.example.com/oauth2/token"
userinfo_endpoint = "https://id.example.com/oauth2/userinfo"
scopes = ["openid", "profile"]
client_id = "${OAUTH_CLIENT_ID}"
client_secret = "${OAUTH_CLIENT_SECRET}"
discovery_url = "${OAUTH_DISCOVERY_URL}" # Optional OIDC discovery

mTLS Authentication

Mutual TLS (mTLS) requires both the server and client to present certificates, providing strong cryptographic authentication. This is commonly used in zero-trust environments or service mesh architectures where certificate-based identity is the standard. The client must present a certificate signed by the configured CA for the connection to be accepted.

[authentication.mtls]
ca_cert = "/path/to/ca.crt"
client_cert = "/path/to/client.crt"
client_key = "/path/to/client.key"

Authorization Section

Control what authenticated users are allowed to do. Define roles with specific permissions so that administrators can manage servers while regular users can only execute tools. Rate limiting prevents any single user from overwhelming the system.

[authorization]
enabled = false
model = "rbac"

Role-Based Access Control (RBAC)

RBAC lets you define named roles with specific permissions, then assign users to those roles. Permissions follow a resource:action format (e.g., tools:execute, servers:read). Use wildcards like tools:* to grant all actions on a resource, or * for full administrative access. This model scales well as your team grows—just assign new users to existing roles rather than managing individual permissions.

[[authorization.roles]]
name = "admin"
permissions = ["*"]

[[authorization.roles]]
name = "developer"
permissions = ["tools:*", "servers:read", "logs:read"]

[[authorization.roles]]
name = "user"
permissions = ["tools:execute", "tools:list"]

Rate Limiting

Rate limiting protects your MCP servers from being overwhelmed by too many requests, whether from a misconfigured client, a runaway script, or a denial-of-service attempt. Limits are specified as requests per minute and can be customized per role, allowing power users or automated systems higher throughput while protecting against abuse.

[authorization.rate_limiting]
enabled = false
default_limit = 100
per_role_limits = { admin = 1000, developer = 500, user = 100 }
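
The per-role limits can be pictured as a sliding 60-second window per user (a sketch of the semantics above; MCP Compose's exact windowing algorithm is not documented here):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per rolling 60 seconds per user,
    where the limit depends on the user's role."""
    def __init__(self, default_limit=100, per_role_limits=None):
        self.default_limit = default_limit
        self.per_role_limits = per_role_limits or {}
        self.windows = defaultdict(deque)  # user -> recent request timestamps

    def allow(self, user, role, now=None):
        now = time.monotonic() if now is None else now
        limit = self.per_role_limits.get(role, self.default_limit)
        window = self.windows[user]
        while window and now - window[0] >= 60:
            window.popleft()          # drop requests older than one minute
        if len(window) >= limit:
            return False              # over the limit: reject this request
        window.append(now)
        return True

rl = RateLimiter(default_limit=2, per_role_limits={"admin": 1000})
print([rl.allow("bob", "user", now=t) for t in (0, 1, 2)])  # [True, True, False]
```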

Servers Section

This is where you define the MCP servers that MCP Compose will unify. You can embed Python-based servers directly, launch local servers as subprocesses (STDIO), or connect to remote servers over HTTP. Each downstream server's tools become available through the single composed endpoint.

Embedded Servers

Embedded servers are Python packages that implement the MCP protocol, loaded directly into the MCP Compose process. This approach has the lowest latency since there's no inter-process communication, but it means all servers share the same Python environment. Use embedded servers for tightly-coupled functionality or when you need maximum performance. The tool_mappings option lets you rename tools at load time to avoid conflicts or provide clearer names.

[servers.embedded]

[[servers.embedded.servers]]
name = "jupyter-mcp-server"
package = "jupyter_mcp_server"
enabled = true
tool_mappings = { "create" = "jupyter_create", "run" = "jupyter_run" }

Option | Type | Required | Description
------ | ---- | -------- | -----------
name | string | Yes | Server identifier
package | string | Yes | Python package name
enabled | bool | No | Enable/disable server (default: true)
tool_mappings | table | No | Custom tool name mappings

STDIO Proxied Servers

STDIO servers run as separate subprocesses, communicating with MCP Compose via stdin/stdout using the MCP JSON-RPC protocol. This is the most common deployment pattern as it provides process isolation—a crashing server won't take down MCP Compose or other servers. You can run servers written in any language, use different Python environments, or apply resource limits. The working_dir option is crucial when your server scripts use relative paths for imports or data files.

[[servers.proxied.stdio]]
name = "weather-server"
command = ["uvx", "mcp-server-weather"]
working_dir = "${MCP_COMPOSE_CONFIG_DIR}"
env = { WEATHER_API_KEY = "${WEATHER_API_KEY}" }

# Lifecycle management
restart_policy = "on-failure" # never, on-failure, always
max_restarts = 3
restart_delay = 5

# Health checks
health_check_enabled = true
health_check_interval = 30
health_check_timeout = 5
health_check_method = "tool"
health_check_tool = "health"

# Logging
log_stdout = true
log_stderr = true
log_file = "/var/log/mcp-compose/weather-server.log"

Option | Type | Required | Default | Description
------ | ---- | -------- | ------- | -----------
name | string | Yes | - | Server identifier
command | array | Yes | - | Command and arguments
working_dir | string | No | - | Working directory for subprocess
env | table | No | {} | Environment variables
restart_policy | string | No | "never" | Restart behavior
max_restarts | int | No | 3 | Maximum restart attempts
restart_delay | int | No | 5 | Seconds between restarts
health_check_enabled | bool | No | false | Enable health checking
health_check_interval | int | No | 30 | Seconds between checks
health_check_timeout | int | No | 5 | Health check timeout
health_check_method | string | No | "tool" | Check method
health_check_tool | string | No | "health" | Tool to call for health
log_stdout | bool | No | true | Log server stdout
log_stderr | bool | No | true | Log server stderr
log_file | string | No | - | File path for logs

Resource Limits

Prevent a single misbehaving server from consuming all system resources by setting memory and CPU limits. When a server exceeds its memory limit, it is terminated; CPU limits throttle the process to the specified percentage. This is especially important in multi-tenant environments or when running untrusted code.

[[servers.proxied.stdio]]
name = "resource-limited-server"
command = ["python", "server.py"]

[servers.proxied.stdio.resource_limits]
max_memory_mb = 512
max_cpu_percent = 50
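
MCP Compose's enforcement mechanism is not documented here, but on POSIX systems a supervisor can impose a memory cap by setting an rlimit in the child before exec. A sketch of that technique (POSIX-only; the CPU throttle is not shown):

```python
import resource  # POSIX only; not available on Windows
import subprocess
import sys

def limit_memory(max_memory_mb: int):
    """Return a preexec_fn that caps the child's address space before exec."""
    def set_limits():
        limit_bytes = max_memory_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
    return set_limits

# A child capped at 512 MB cannot allocate a 1 GiB buffer.
proc = subprocess.run(
    [sys.executable, "-c", "x = bytearray(1024 ** 3)"],
    preexec_fn=limit_memory(512),
    capture_output=True,
)
print(proc.returncode != 0)  # True: the allocation failed under the cap
```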

Streamable HTTP Proxied Servers

Connect to MCP servers running remotely over HTTP with streaming support for real-time responses. This is ideal for servers that are deployed as independent services, run in containers, or are hosted by third parties. MCP Compose handles connection management, automatic reconnection on failures, and authentication. The protocol option determines how streaming responses are delivered: lines for newline-delimited JSON, chunked for HTTP chunked encoding, or poll for environments where streaming isn't available.

[[servers.proxied.http]]
name = "remote-api-server"
url = "https://api.example.com/mcp"
protocol = "lines" # chunked, lines, poll
auth_token = "${REMOTE_SERVER_TOKEN}"
auth_type = "bearer" # bearer, basic
timeout = 30
retry_interval = 5
keep_alive = true
reconnect_on_failure = true
max_reconnect_attempts = 10
poll_interval = 2 # Only for poll protocol
mode = "proxy" # proxy or translator

# Health checks
health_check_enabled = true
health_check_interval = 60
health_check_endpoint = "/health"

Option | Type | Required | Default | Description
------ | ---- | -------- | ------- | -----------
name | string | Yes | - | Server identifier
url | string | Yes | - | Server URL
protocol | string | No | "lines" | HTTP streaming protocol
auth_token | string | No | - | Authentication token
auth_type | string | No | "bearer" | Token type
timeout | int | No | 30 | Request timeout in seconds
retry_interval | int | No | 5 | Retry interval in seconds
keep_alive | bool | No | true | Maintain persistent connection
reconnect_on_failure | bool | No | true | Auto-reconnect on failure
max_reconnect_attempts | int | No | 10 | Maximum reconnection attempts
poll_interval | int | No | 2 | Polling interval (poll protocol only)
mode | string | No | "proxy" | Operating mode

SSE Proxied Servers (Deprecated)

Warning: SSE transport is deprecated. Use Streamable HTTP ([[servers.proxied.http]]) instead.

[[servers.proxied.sse]]
name = "legacy-server"
url = "http://localhost:8080/sse"
auth_token = "${AUTH_TOKEN}"
timeout = 30
retry_interval = 5
reconnect_on_failure = true
max_reconnect_attempts = 10

Tool Manager Section

Fine-tune how tools from different servers are named and organized. Create user-friendly aliases, apply different naming strategies to specific tools, or enable versioning when you need to support multiple versions of the same tool simultaneously.

[tool_manager]
conflict_resolution = "prefix"

Tool-Specific Overrides

While the global conflict resolution strategy works for most cases, sometimes you need different handling for specific tools. Use glob patterns to match tool names and apply a different strategy. For example, you might want notebook-related tools prefixed for clarity, but search tools from a single authoritative server to remain unprefixed.

[[tool_manager.tool_overrides]]
tool_pattern = "notebook_*"
resolution = "prefix"

[[tool_manager.tool_overrides]]
tool_pattern = "search_*"
resolution = "suffix"

Custom Naming Template

When the built-in strategies don't fit your naming conventions, define a custom template using placeholders. This gives you complete control over how composed tool names are formed—useful when integrating with existing systems that expect specific naming patterns.

[tool_manager.custom_template]
template = "{server_name}_{tool_name}"

Available template variables:

  • {server_name} - The server's name
  • {tool_name} - The original tool name
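
Template rendering amounts to straightforward placeholder substitution; a sketch (the second template string is a hypothetical example, not a documented default):

```python
def apply_template(template: str, server_name: str, tool_name: str) -> str:
    """Render a composed tool name from the two supported placeholders."""
    return template.format(server_name=server_name, tool_name=tool_name)

print(apply_template("{server_name}_{tool_name}", "calculator", "add"))  # calculator_add
print(apply_template("mcp.{server_name}.{tool_name}", "fs", "read"))     # mcp.fs.read
```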

Tool Aliases

Aliases let you expose tools under different names without modifying the underlying servers. Use them to provide shorter, more intuitive names, maintain backward compatibility when renaming tools, or create domain-specific vocabularies for different user groups.

[tool_manager.aliases]
jupyter_create = "create_notebook"
fs_read = "read_file"

Tool Versioning

When you need to support multiple versions of a tool simultaneously (e.g., during a migration or A/B testing), enable versioning to expose both versions with distinct names. The version suffix is appended to tool names, allowing clients to explicitly choose which version to invoke.

[tool_manager.versioning]
enabled = false
allow_multiple_versions = false
version_suffix_format = "_v{version}"

REST API Section

Expose a REST API for programmatic management of your MCP Compose instance. Use it to start/stop servers, invoke tools, or integrate with external systems. The built-in OpenAPI documentation makes it easy to explore available endpoints.

[api]
enabled = true
path_prefix = "/api/v1"
host = "0.0.0.0"
port = 8080
cors_enabled = true
cors_origins = ["http://localhost:3000"]
cors_methods = ["GET", "POST", "PUT", "DELETE"]
docs_enabled = true
docs_path = "/docs"
openapi_path = "/openapi.json"

Option | Type | Default | Description
------ | ---- | ------- | -----------
enabled | bool | true | Enable REST API
path_prefix | string | "/api/v1" | API path prefix
host | string | "0.0.0.0" | Bind address
port | int | 8080 | Listen port
cors_enabled | bool | true | Enable CORS
cors_origins | array | ["*"] | Allowed origins
cors_methods | array | ["GET", "POST", ...] | Allowed HTTP methods
docs_enabled | bool | true | Enable API documentation
docs_path | string | "/docs" | Documentation path
openapi_path | string | "/openapi.json" | OpenAPI spec path

Web UI Section

Enable a browser-based dashboard for managing your MCP Compose instance without command-line tools. Test tools interactively, view logs in real-time, monitor metrics, and even edit configuration—all from a visual interface.

[ui]
enabled = true
framework = "react"
mode = "embedded" # embedded or separate
path = "/ui"
port = 9456 # Used when mode is 'separate'
static_dir = "/var/www/mcp-compose/ui"
features = [
"server_management",
"tool_testing",
"logs_viewing",
"metrics_dashboard",
"configuration_editor"
]

Option | Type | Default | Description
------ | ---- | ------- | -----------
enabled | bool | true | Enable web UI
framework | string | "react" | UI framework
mode | string | "embedded" | Deployment mode
path | string | "/ui" | UI path
port | int | 9456 | Separate mode port
static_dir | string | - | Static files directory
features | array | all | Enabled UI features

Available Features

Feature | Description
------- | -----------
server_management | Start, stop, restart servers
tool_testing | Interactive tool testing
logs_viewing | View server logs
metrics_dashboard | Metrics and monitoring
configuration_editor | Edit configuration

Monitoring Section

Gain visibility into how your MCP servers are performing. Collect metrics for dashboards and alerts, configure structured logging for troubleshooting, and enable distributed tracing to follow requests across multiple servers.

[monitoring]
enabled = true

Metrics

Export operational metrics in Prometheus format for integration with monitoring systems like Grafana. Track tool invocation counts and durations to understand usage patterns, monitor error rates to detect problems early, and observe resource consumption to plan capacity. The collection_interval controls how frequently metrics are sampled.

[monitoring.metrics]
enabled = true
provider = "prometheus"
endpoint = "/metrics"
collection_interval = 15
collect = [
"tool_invocation_count",
"tool_invocation_duration",
"tool_error_rate",
"server_health_status",
"process_cpu_usage",
"process_memory_usage",
"request_rate",
"response_time"
]

Logging

Configure structured logging for operational visibility and troubleshooting. JSON format is recommended for production as it's easily parsed by log aggregation systems like ELK or Splunk. Log rotation prevents disk space exhaustion—choose daily for time-based rotation or size to rotate when files reach the specified limit. Enable aggregate_managed_logs to include stdout/stderr from downstream servers in the main log stream.

[monitoring.logging]
level = "INFO"
format = "json" # json or text
output = "stdout" # stdout or file
log_file = "/var/log/mcp-compose/composer.log"
rotation = "daily" # daily, size, or none
max_size_mb = 100
max_files = 7
aggregate_managed_logs = true

Distributed Tracing

Distributed tracing helps you understand the flow of requests across your composed MCP servers. When enabled, MCP Compose generates trace spans for each tool invocation and propagates trace context to downstream servers. View traces in systems like Jaeger or Zipkin to identify bottlenecks, debug failures, and understand dependencies. The sample_rate controls what percentage of requests are traced—use 1.0 for full tracing or lower values in high-traffic production environments.

[monitoring.tracing]
enabled = false
provider = "opentelemetry"
endpoint = "http://localhost:4317"
sample_rate = 1.0

Health Endpoints

Health endpoints enable integration with load balancers, orchestrators like Kubernetes, and monitoring systems. The basic /health endpoint returns a simple status, while /health/detailed provides comprehensive information about each downstream server's health, making it easy to diagnose which component is causing issues.

[monitoring.health]
endpoint = "/health"
detailed_endpoint = "/health/detailed"

Validating Configuration

Validate your configuration file without starting the server:

mcp-compose validate --config mcp_compose.toml

Best Practices

  1. Use environment variables for sensitive data like API keys, tokens, and passwords
  2. Use ${MCP_COMPOSE_CONFIG_DIR} for portable relative paths in working_dir
  3. Set appropriate restart policies for production deployments (on-failure recommended)
  4. Enable health checks for critical servers to detect failures early
  5. Use prefix conflict resolution to avoid tool name collisions
  6. Enable monitoring in production for observability
  7. Use Streamable HTTP instead of SSE for new deployments
  8. Validate configuration before deploying changes

Next Steps

See the Examples page for complete configuration examples including local development, authentication patterns, and production deployments.