Why Docker for MCP?
Running MCP servers in containers provides several key benefits:
🔄 Reproducibility
Same environment everywhere—dev, staging, production. No "works on my machine" issues.
🔒 Isolation
MCP servers run in isolated containers with controlled access to host resources.
📦 Dependency Management
Bundle all dependencies in the image. No Python version conflicts or missing packages.
🚀 Easy Deployment
Deploy to any Docker host, Kubernetes, or container service with one command.
Note: MCP servers communicate over stdio by default, which does not map cleanly onto long-running containers. For production Docker deployments, an HTTP-based transport such as SSE works best.
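One way to keep a single image usable both locally (stdio) and in a container (SSE) is to pick the transport at startup. A minimal sketch, assuming an illustrative MCP_TRANSPORT override variable; the helper names here are not part of any SDK:

```python
import os
from pathlib import Path


def running_in_container() -> bool:
    """Heuristic: Docker creates /.dockerenv; an explicit env override wins."""
    return os.environ.get("MCP_TRANSPORT") == "sse" or Path("/.dockerenv").exists()


def choose_transport() -> str:
    """Pick SSE when containerized, stdio for local CLI use."""
    return "sse" if running_in_container() else "stdio"
```

A server entry point could then call something like `mcp.run(transport=choose_transport(), ...)` so the same image runs everywhere.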
Python MCP Server Dockerfile
Here's a production-ready Dockerfile for a Python MCP server using FastMCP:
# Build stage
FROM python:3.11-slim AS builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Create virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Production stage
FROM python:3.11-slim

WORKDIR /app

# Copy virtual environment from builder
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Create non-root user
RUN useradd --create-home --shell /bin/bash mcp
USER mcp

# Copy application code
COPY --chown=mcp:mcp . .

# Expose port for SSE transport
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1

# Run the MCP server
CMD ["python", "server.py"]
And the corresponding requirements.txt:
fastmcp>=0.1.0
httpx>=0.25.0
uvicorn>=0.24.0
python-dotenv>=1.0.0
Here's a minimal server that works with Docker:
import os
import sys

from fastmcp import FastMCP
from starlette.requests import Request
from starlette.responses import JSONResponse

mcp = FastMCP("docker-example")


@mcp.tool()
def get_environment() -> dict:
    """Get current environment info"""
    return {
        "container_id": os.environ.get("HOSTNAME", "unknown"),
        "python_version": sys.version,
        "environment": os.environ.get("ENV", "development"),
    }


@mcp.tool()
def process_data(data: str) -> str:
    """Process input data"""
    return f"Processed: {data.upper()}"


# Health endpoint for Docker health checks
@mcp.custom_route("/health", methods=["GET"])
async def health_check(request: Request) -> JSONResponse:
    return JSONResponse({"status": "healthy"})


if __name__ == "__main__":
    # Use SSE transport for Docker
    mcp.run(transport="sse", host="0.0.0.0", port=8000)
Build and run:
# Build the image
docker build -t mcp-server:latest .
# Run the container
docker run -d \
--name mcp-server \
-p 8000:8000 \
-e ENV=production \
mcp-server:latest
# Check logs
docker logs -f mcp-server
TypeScript MCP Server Dockerfile
For TypeScript MCP servers using the official SDK:
# Build stage
FROM node:20-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./
COPY tsconfig.json ./

# Install dependencies
RUN npm ci

# Copy source code
COPY src/ ./src/

# Build TypeScript
RUN npm run build

# Production stage
FROM node:20-alpine

WORKDIR /app

# Create non-root user
RUN addgroup -S mcp && adduser -S mcp -G mcp
USER mcp

# Copy built application
COPY --from=builder --chown=mcp:mcp /app/dist ./dist
COPY --from=builder --chown=mcp:mcp /app/node_modules ./node_modules
COPY --from=builder --chown=mcp:mcp /app/package.json ./

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

CMD ["node", "dist/index.js"]
The TypeScript server with HTTP transport:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import express from "express";

const app = express();
const server = new Server(
  { name: "docker-mcp", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Register tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_info",
    description: "Get container info",
    inputSchema: { type: "object", properties: {} }
  }]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_info") {
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          hostname: process.env.HOSTNAME,
          nodeVersion: process.version,
          uptime: process.uptime()
        })
      }]
    };
  }
  throw new Error("Unknown tool");
});

// Health endpoint
app.get("/health", (req, res) => {
  res.json({ status: "healthy", uptime: process.uptime() });
});

// SSE endpoint for MCP; keep a reference to the transport so
// POSTed messages can be routed back to it (single-client example)
let transport: SSEServerTransport | undefined;

app.get("/sse", async (req, res) => {
  transport = new SSEServerTransport("/message", res);
  await server.connect(transport);
});

// Note: no express.json() here — the transport reads the raw body itself
app.post("/message", async (req, res) => {
  if (transport) {
    await transport.handlePostMessage(req, res);
  } else {
    res.status(400).send("No active SSE connection");
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`MCP server running on port ${PORT}`);
});
Docker Compose for Development
Docker Compose makes it easy to run multiple MCP servers together with shared services:
version: '3.8'

services:
  # Python MCP server
  mcp-python:
    build:
      context: ./python-server
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    environment:
      - ENV=development
      - DATABASE_URL=postgres://postgres:password@db:5432/mcp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    volumes:
      - ./python-server:/app:ro  # Read-only mount for dev
    restart: unless-stopped
    networks:
      - mcp-network

  # TypeScript MCP server
  mcp-typescript:
    build:
      context: ./typescript-server
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    restart: unless-stopped
    networks:
      - mcp-network

  # Shared PostgreSQL database
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mcp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - mcp-network

  # Redis for caching/sessions
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    networks:
      - mcp-network

  # MCP Gateway/Router (optional)
  gateway:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - mcp-python
      - mcp-typescript
    networks:
      - mcp-network

volumes:
  postgres_data:
  redis_data:

networks:
  mcp-network:
    driver: bridge
Start everything with:
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop everything
docker-compose down
# Rebuild after changes
docker-compose up -d --build
Health Checks and Monitoring
Proper health checks are essential for container orchestration:
import time
from datetime import datetime, timezone

from fastmcp import FastMCP
from starlette.requests import Request
from starlette.responses import JSONResponse, PlainTextResponse

mcp = FastMCP("monitored-server")

# Track server state
start_time = time.time()
request_count = 0


@mcp.tool()
def example_tool(text: str) -> str:
    global request_count
    request_count += 1
    return f"Processed: {text}"


@mcp.custom_route("/health", methods=["GET"])
async def health(request: Request) -> JSONResponse:
    """Basic health check"""
    return JSONResponse({"status": "healthy"})


@mcp.custom_route("/health/live", methods=["GET"])
async def liveness(request: Request) -> JSONResponse:
    """Kubernetes liveness probe"""
    return JSONResponse({
        "status": "alive",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })


@mcp.custom_route("/health/ready", methods=["GET"])
async def readiness(request: Request) -> JSONResponse:
    """Kubernetes readiness probe"""
    # Add your readiness checks here
    # e.g., database connection, external service availability
    return JSONResponse({
        "status": "ready",
        "uptime_seconds": time.time() - start_time,
        "request_count": request_count,
    })


@mcp.custom_route("/metrics", methods=["GET"])
async def metrics(request: Request) -> PlainTextResponse:
    """Prometheus-compatible metrics"""
    uptime = time.time() - start_time
    return PlainTextResponse(
        f"""# HELP mcp_uptime_seconds Server uptime in seconds
# TYPE mcp_uptime_seconds gauge
mcp_uptime_seconds {uptime}
# HELP mcp_requests_total Total requests processed
# TYPE mcp_requests_total counter
mcp_requests_total {request_count}
"""
    )


if __name__ == "__main__":
    mcp.run(transport="sse", host="0.0.0.0", port=8000)
Multi-Server Architecture
For complex applications, use an Nginx gateway to route requests to multiple MCP servers:
events {
    worker_connections 1024;
}

http {
    upstream mcp_python {
        server mcp-python:8000;
    }

    upstream mcp_typescript {
        server mcp-typescript:3000;
    }

    server {
        listen 80;

        # Python MCP server - data processing tools
        location /api/data/ {
            proxy_pass http://mcp_python/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_buffering off;
            proxy_cache off;
        }

        # TypeScript MCP server - code analysis tools
        location /api/code/ {
            proxy_pass http://mcp_typescript/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_buffering off;
            proxy_cache off;
        }

        # SSE requires special handling
        location ~ /sse$ {
            proxy_pass http://mcp_python;
            proxy_http_version 1.1;
            proxy_set_header Connection '';
            proxy_buffering off;
            proxy_cache off;
            chunked_transfer_encoding off;
        }

        # Health check endpoint
        location /health {
            return 200 'OK';
            add_header Content-Type text/plain;
        }
    }
}
Production Deployment
Production Docker Compose with security hardening:
version: '3.8'

services:
  mcp-server:
    image: your-registry.com/mcp-server:v1.0.0
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
    environment:
      - ENV=production
    secrets:
      - db_password
      - api_key
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

secrets:
  db_password:
    external: true
  api_key:
    external: true
Deploy with:
# Create secrets
echo "your-db-password" | docker secret create db_password -
echo "your-api-key" | docker secret create api_key -
# Deploy stack
docker stack deploy -c docker-compose.prod.yml mcp
# Check status
docker stack services mcp
docker stack ps mcp
Kubernetes Orchestration
For larger deployments, Kubernetes provides advanced orchestration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
  labels:
    app: mcp-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
        - name: mcp-server
          image: your-registry.com/mcp-server:v1.0.0
          ports:
            - containerPort: 8000
          env:
            - name: ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: mcp-secrets
                  key: database-url
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8000
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 10
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
---
apiVersion: v1
kind: Service
metadata:
  name: mcp-server
spec:
  selector:
    app: mcp-server
  ports:
    - port: 80
      targetPort: 8000
  type: ClusterIP
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mcp-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mcp-server
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
Deploy to Kubernetes:
# Create namespace
kubectl create namespace mcp
# Create secrets
kubectl create secret generic mcp-secrets \
--from-literal=database-url='postgres://...' \
-n mcp
# Apply manifests
kubectl apply -f k8s/ -n mcp
# Check status
kubectl get pods -n mcp
kubectl get hpa -n mcp
Best Practices
Security
- ✓ Run containers as non-root user
- ✓ Use read-only root filesystem where possible
- ✓ Never store secrets in images—use Docker secrets or environment variables
- ✓ Scan images for vulnerabilities (Trivy, Snyk)
- ✓ Use specific image tags, not :latest
Performance
- ✓ Use multi-stage builds to minimize image size
- ✓ Layer Dockerfile commands efficiently (dependencies before code)
- ✓ Set appropriate resource limits
- ✓ Use connection pooling for databases
- ✓ Enable container health checks
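On the connection-pooling point, production code should normally use the database driver's own pool (e.g. psycopg_pool or SQLAlchemy). As a sketch of the idea only, a generic fixed-size pool can be built on queue.Queue:

```python
import queue
from contextlib import contextmanager


class ConnectionPool:
    """Illustrative fixed-size pool: hand out at most `size` connections,
    blocking callers until one is returned."""

    def __init__(self, factory, size: int = 5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    @contextmanager
    def connection(self, timeout: float = 5.0):
        conn = self._pool.get(timeout=timeout)  # block until one is free
        try:
            yield conn
        finally:
            self._pool.put(conn)  # always return it, even on error
```

Because the pool size is fixed, it pairs naturally with the container resource limits above: each replica opens a bounded number of database connections.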
Observability
- ✓ Expose Prometheus metrics endpoint
- ✓ Use structured JSON logging
- ✓ Include correlation IDs for request tracing
- ✓ Separate liveness and readiness probes
- ✓ Monitor container resource usage
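The structured-logging and correlation-ID points can be combined using only the standard library. A minimal sketch; JsonFormatter and get_logger are illustrative names, not part of any SDK:

```python
import json
import logging
import uuid


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, including a correlation id if set."""

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        }
        return json.dumps(entry)


def get_logger(name: str = "mcp") -> logging.Logger:
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(JsonFormatter())
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger


# Attach a per-request correlation id via the `extra` mapping
log = get_logger()
log.info("tool invoked", extra={"correlation_id": str(uuid.uuid4())})
```

JSON lines like these are what the json-file logging driver and most log aggregators expect, and the correlation id lets you stitch together all log entries for one request across replicas.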
CI/CD
- ✓ Automate image builds on git push
- ✓ Tag images with git SHA and semantic version
- ✓ Run tests inside containers before pushing
- ✓ Use image registries with vulnerability scanning
- ✓ Implement rolling deployments with rollback
Summary
Docker provides a robust way to deploy MCP servers in production:
- • Multi-stage builds keep images small and secure
- • Docker Compose simplifies multi-service development
- • Health checks enable proper orchestration
- • Kubernetes provides enterprise-scale deployment
- • SSE transport works better than stdio in containers