We covered how MCP became the universal standard back in February. That post was about the what and why. This one is about the how -- specifically, how to build MCP servers that work reliably in production, not just in demos.
At CODERCOPS, we have built MCP servers for client projects ranging from internal knowledge bases to e-commerce inventory systems. Along the way, we have accumulated a collection of patterns, mistakes, and hard-won lessons that we think are worth sharing. This is the practical guide we wish had existed when we started.
Building MCP servers that survive contact with production traffic
What MCP Servers Actually Do
Before we dive into code, let us be precise about terminology. An MCP server is a process that exposes tools, resources, and prompts to AI clients via the Model Context Protocol. It does not "run" an AI model. It provides structured access to external capabilities that an AI model can invoke.
Think of it this way:
Without MCP Server:
User → "What's the status of order #4521?"
AI → "I don't have access to your order system."
With MCP Server:
User → "What's the status of order #4521?"
AI → [calls get_order_status tool via MCP]
→ "Order #4521 shipped on March 10. Tracking: 1Z999AA10..."The MCP server is the bridge. The AI client (Claude, ChatGPT, Cursor, Claude Code) speaks JSON-RPC to the server, the server does the actual work (database queries, API calls, file reads), and the result flows back.
The Three Primitives
Every MCP server exposes some combination of three primitives:
| Primitive | Purpose | Example | Who Initiates |
|---|---|---|---|
| Tools | Actions the AI can invoke | query_database, send_email, create_ticket | AI model (via client) |
| Resources | Data the AI can read | file://config.json, db://users/schema | Client application |
| Prompts | Reusable prompt templates | summarize_document, code_review | User |
Tools are by far the most common. In our experience, 90% of MCP servers you will build are primarily tool servers.
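Since the walkthrough below only registers tools and a resource, here is what the third primitive looks like in practice -- a minimal sketch of a prompt registration with the TypeScript SDK, assuming an McpServer instance named server (created later in this post) and a hypothetical summarize_document template:

```typescript
import { z } from "zod";

// Hypothetical prompt template. Unlike tools, prompts are user-initiated:
// the client surfaces them (e.g., as slash commands) for the user to pick.
server.prompt(
  "summarize_document",
  "Summarize a document at a chosen level of detail",
  {
    doc_uri: z.string().describe("URI of the document to summarize"),
    detail: z.enum(["brief", "full"]).optional(),
  },
  ({ doc_uri, detail }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Read ${doc_uri} and write a ${detail ?? "brief"} summary.`,
        },
      },
    ],
  })
);
```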
Architecture: How the Pieces Fit
Here is the full picture of how an MCP integration works:
┌─────────────────────────────────────────────────────────┐
│ MCP Host Application │
│ (Claude Desktop, Cursor, VS Code, your custom app) │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ MCP Client │ │ MCP Client │ │ MCP Client │ │
│ │ (instance) │ │ (instance) │ │ (instance) │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
└─────────┼──────────────────┼──────────────────┼──────────┘
│ JSON-RPC │ JSON-RPC │ JSON-RPC
│ (stdio/SSE) │ (stdio/SSE) │ (stdio/SSE)
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ MCP Server │ │ MCP Server │ │ MCP Server │
│ (Database) │ │ (Slack API) │ │ (File Sys.) │
└──────────────┘  └──────────────┘  └──────────────┘

Key architectural details:
- One client per server. Each MCP client instance maintains a 1:1 connection with a single server. A host application can create multiple clients.
- Transport options. Two official transports: stdio (local, process-based) and Streamable HTTP (remote, network-based), which uses SSE for server-to-client streaming. Stdio is simpler for local tools; Streamable HTTP is required for remote/shared servers.
- JSON-RPC 2.0. All communication uses JSON-RPC 2.0. Requests have `method`, `params`, and `id`. Responses have `result` or `error` (see the example after this list).
- Stateful sessions. MCP connections are stateful. After the initial handshake (capability negotiation), both sides know what the other supports.
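For instance, here is roughly what a tools/call exchange looks like on the wire (illustrative payloads, reusing the hypothetical get_order_status tool from the earlier example):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_order_status",
    "arguments": { "order_id": "4521" }
  }
}
```

And the matching response:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [
      { "type": "text", "text": "Order #4521 shipped on March 10." }
    ]
  }
}
```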
Building Your First MCP Server
Let us build a real MCP server from scratch. We will create a server that provides tools for interacting with a PostgreSQL database -- a pattern we have used in multiple client projects.
Project Setup
mkdir mcp-postgres-server
cd mcp-postgres-server
npm init -y
npm install @modelcontextprotocol/sdk pg zod
npm install -D typescript @types/node @types/pg tsx

Create tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"declaration": true
},
"include": ["src/**/*"]
}

The Server Code
Create src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";
// --- Configuration ---
const pool = new pg.Pool({
connectionString: process.env.DATABASE_URL,
max: 5,
idleTimeoutMillis: 30000,
});
// --- Server Setup ---
const server = new McpServer({
name: "postgres-tools",
version: "1.0.0",
});
// --- Tool: List Tables ---
server.tool(
"list_tables",
"List all tables in the database with row counts",
{},
async () => {
const result = await pool.query(`
SELECT
schemaname,
tablename,
n_tup_ins - n_tup_del AS estimated_rows
FROM pg_stat_user_tables
ORDER BY estimated_rows DESC
`);
return {
content: [
{
type: "text",
text: JSON.stringify(result.rows, null, 2),
},
],
};
}
);
// --- Tool: Describe Table ---
server.tool(
"describe_table",
"Get column names, types, and constraints for a table",
{
table_name: z
.string()
.regex(/^[a-zA-Z_][a-zA-Z0-9_]*$/)
.describe("Name of the table to describe"),
},
async ({ table_name }) => {
const result = await pool.query(
`
SELECT
column_name,
data_type,
is_nullable,
column_default,
character_maximum_length
FROM information_schema.columns
WHERE table_name = $1
ORDER BY ordinal_position
`,
[table_name]
);
if (result.rows.length === 0) {
return {
content: [
{
type: "text",
text: `Table '${table_name}' not found.`,
},
],
isError: true,
};
}
return {
content: [
{
type: "text",
text: JSON.stringify(result.rows, null, 2),
},
],
};
}
);
// --- Tool: Run Read-Only Query ---
server.tool(
"query",
"Execute a read-only SQL query against the database",
{
sql: z.string().describe("SQL query to execute (SELECT only)"),
params: z
.array(z.union([z.string(), z.number(), z.boolean(), z.null()]))
.optional()
.describe("Query parameters for parameterized queries"),
},
async ({ sql, params }) => {
// Security: reject non-SELECT queries
const normalized = sql.trim().toUpperCase();
if (
!normalized.startsWith("SELECT") &&
!normalized.startsWith("WITH")
) {
return {
content: [
{
type: "text",
text: "Only SELECT and WITH (CTE) queries are allowed.",
},
],
isError: true,
};
}
// Security: reject dangerous patterns
const dangerous = [
"DROP",
"DELETE",
"UPDATE",
"INSERT",
"ALTER",
"TRUNCATE",
"CREATE",
"GRANT",
"REVOKE",
];
    for (const keyword of dangerous) {
      // Match whole words only, so column names like "created_at"
      // don't trip the CREATE check
      if (new RegExp(`\\b${keyword}\\b`).test(normalized)) {
return {
content: [
{
type: "text",
text: `Query contains forbidden keyword: ${keyword}`,
},
],
isError: true,
};
}
}
try {
const result = await pool.query(sql, params || []);
return {
content: [
{
type: "text",
text: JSON.stringify(
{
rows: result.rows.slice(0, 100), // Limit results
rowCount: result.rowCount,
truncated: (result.rowCount || 0) > 100,
},
null,
2
),
},
],
};
} catch (err) {
return {
content: [
{
type: "text",
text: `Query error: ${(err as Error).message}`,
},
],
isError: true,
};
}
}
);
// --- Resource: Database Schema ---
server.resource(
"schema",
"db://schema",
async (uri) => {
const result = await pool.query(`
SELECT
t.table_name,
json_agg(
json_build_object(
'column', c.column_name,
'type', c.data_type,
'nullable', c.is_nullable
) ORDER BY c.ordinal_position
) AS columns
FROM information_schema.tables t
JOIN information_schema.columns c
ON t.table_name = c.table_name
AND t.table_schema = c.table_schema
WHERE t.table_schema = 'public'
GROUP BY t.table_name
ORDER BY t.table_name
`);
return {
contents: [
{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify(result.rows, null, 2),
},
],
};
}
);
// --- Start Server ---
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Postgres MCP server running on stdio");
}
main().catch((err) => {
console.error("Fatal error:", err);
process.exit(1);
});

Registering with Claude Desktop
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"postgres": {
"command": "npx",
"args": ["tsx", "/path/to/mcp-postgres-server/src/index.ts"],
"env": {
"DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
}
}
}
}

Restart Claude Desktop. You should now see the database tools available.
Registering with Claude Code
For Claude Code, you can add servers at the project level or globally:
# Project-level (creates .mcp.json in project root)
claude mcp add postgres -- npx tsx /path/to/mcp-postgres-server/src/index.ts
# Or directly edit .mcp.json:

{
"mcpServers": {
"postgres": {
"command": "npx",
"args": ["tsx", "/path/to/mcp-postgres-server/src/index.ts"],
"env": {
"DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
}
}
}
}

Common MCP Server Patterns
After building a dozen MCP servers, we have found that most fall into a few categories. Here are the patterns we use most frequently.
Pattern 1: API Wrapper
Wrapping a REST API as MCP tools. This is the most common pattern.
server.tool(
"search_tickets",
"Search support tickets by status, assignee, or keyword",
{
status: z.enum(["open", "closed", "pending"]).optional(),
assignee: z.string().optional(),
keyword: z.string().optional(),
limit: z.number().min(1).max(50).default(10),
},
async ({ status, assignee, keyword, limit }) => {
const params = new URLSearchParams();
if (status) params.set("status", status);
if (assignee) params.set("assignee", assignee);
if (keyword) params.set("q", keyword);
params.set("limit", String(limit));
const response = await fetch(
`${TICKETING_API}/tickets?${params}`,
{
headers: {
Authorization: `Bearer ${process.env.TICKETING_API_KEY}`,
},
}
);
if (!response.ok) {
return {
content: [
{
type: "text",
text: `API error: ${response.status} ${response.statusText}`,
},
],
isError: true,
};
}
const data = await response.json();
return {
content: [
{
type: "text",
text: JSON.stringify(data, null, 2),
},
],
};
}
);

Pattern 2: File System Access with Guardrails
Providing controlled file access -- never the full filesystem.
import path from "path";
import fs from "fs/promises";
const ALLOWED_ROOT = path.resolve(process.env.DOCS_DIR || "/app/docs");

function validatePath(requestedPath: string): string {
  const resolved = path.resolve(ALLOWED_ROOT, requestedPath);
  // Compare against the root with a trailing separator so that
  // sibling directories like /app/docs-private don't pass the check
  if (resolved !== ALLOWED_ROOT && !resolved.startsWith(ALLOWED_ROOT + path.sep)) {
    throw new Error("Path traversal detected");
  }
  return resolved;
}
server.tool(
"read_document",
"Read a document from the knowledge base",
{
filepath: z
.string()
.describe("Relative path within the docs directory"),
},
async ({ filepath }) => {
try {
const safePath = validatePath(filepath);
const content = await fs.readFile(safePath, "utf-8");
return {
content: [{ type: "text", text: content }],
};
} catch (err) {
return {
content: [
{
type: "text",
text: `Error reading file: ${(err as Error).message}`,
},
],
isError: true,
};
}
}
);
server.tool(
"list_documents",
"List available documents in the knowledge base",
{
directory: z
.string()
.default(".")
.describe("Subdirectory to list"),
},
async ({ directory }) => {
const safePath = validatePath(directory);
const entries = await fs.readdir(safePath, {
withFileTypes: true,
});
const files = entries.map((e) => ({
name: e.name,
type: e.isDirectory() ? "directory" : "file",
}));
return {
content: [
{
type: "text",
text: JSON.stringify(files, null, 2),
},
],
};
}
);

Pattern 3: Stateful Workflow Server
For multi-step workflows where operations depend on prior state:
// In-memory state for active workflows
const workflows = new Map<
string,
{
step: number;
data: Record<string, unknown>;
createdAt: Date;
}
>();
server.tool(
"start_deployment",
"Start a deployment workflow for a service",
{
service: z.string(),
version: z.string(),
environment: z.enum(["staging", "production"]),
},
async ({ service, version, environment }) => {
const workflowId = `deploy-${Date.now()}`;
    // Step 1: Validate (validateDeployment is an app-specific helper, not shown)
    const isValid = await validateDeployment(
service,
version,
environment
);
if (!isValid.ok) {
return {
content: [
{
type: "text",
text: `Validation failed: ${isValid.reason}`,
},
],
isError: true,
};
}
workflows.set(workflowId, {
step: 1,
data: { service, version, environment },
createdAt: new Date(),
});
return {
content: [
{
type: "text",
text: JSON.stringify({
workflowId,
status: "awaiting_confirmation",
message: `Ready to deploy ${service}@${version} to ${environment}. Call confirm_deployment with this workflowId to proceed.`,
checks: isValid.checks,
}),
},
],
};
}
);
server.tool(
"confirm_deployment",
"Confirm and execute a pending deployment",
{
workflowId: z.string(),
},
async ({ workflowId }) => {
const workflow = workflows.get(workflowId);
if (!workflow) {
return {
content: [
{
type: "text",
text: "Workflow not found or expired.",
},
],
isError: true,
};
}
    // Execute the deployment (executeDeployment is an app-specific helper, not shown)
    const result = await executeDeployment(workflow.data);
workflows.delete(workflowId);
return {
content: [
{
type: "text",
text: JSON.stringify(result),
},
],
};
}
);
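One thing this sketch leaves open is expiry: confirm_deployment's "not found or expired" message implies stale workflows get cleaned up, but nothing above removes them. A minimal sweep, assuming a ten-minute TTL is acceptable for your workflows:

```typescript
// Drop pending workflows older than the TTL (assumed: 10 minutes)
const WORKFLOW_TTL_MS = 10 * 60 * 1000;

setInterval(() => {
  const now = Date.now();
  for (const [id, wf] of workflows) {
    if (now - wf.createdAt.getTime() > WORKFLOW_TTL_MS) {
      workflows.delete(id);
    }
  }
}, 60_000).unref(); // unref() so the sweep timer never keeps the process alive
```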
Deployment Considerations

Local (stdio) vs Remote (SSE)
| Aspect | stdio (Local) | Streamable HTTP + SSE (Remote) |
|---|---|---|
| Latency | Minimal (IPC) | Network-dependent |
| Setup | Zero config | Need HTTP server, auth, TLS |
| Scaling | One instance per client | Shared across clients |
| Use case | Dev tools, local DBs | Team/org-wide services |
| Security | Process isolation | Need auth middleware |
| State | Per-process | Need external state store |
For most development-time tools, stdio is the right choice. For shared infrastructure tools (company knowledge base, deployment system, monitoring), remote SSE servers make more sense.
Remote Server with Express
Here is how we structure remote MCP servers:
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();

// Track live transports by session ID so POSTed messages reach the right connection
const transports = new Map<string, SSEServerTransport>();

// Auth middleware (validateToken is your auth check, not shown)
app.use("/mcp", (req, res, next) => {
  const token = req.headers.authorization?.replace("Bearer ", "");
  if (!token || !validateToken(token)) {
    res.status(401).json({ error: "Unauthorized" });
    return;
  }
  next();
});

// SSE endpoint for MCP: one long-lived connection per client
app.get("/mcp/sse", async (req, res) => {
  const transport = new SSEServerTransport("/mcp/messages", res);
  transports.set(transport.sessionId, transport);
  res.on("close", () => transports.delete(transport.sessionId));
  const server = createMcpServer(); // Your server factory
  await server.connect(transport);
});

// Message endpoint for MCP: incoming JSON-RPC, routed by the sessionId query param
app.post("/mcp/messages", async (req, res) => {
  const transport = transports.get(String(req.query.sessionId));
  if (!transport) {
    res.status(404).json({ error: "Unknown session" });
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3001, () => {
  console.log("MCP server listening on :3001");
});

Docker Deployment
We containerize all our production MCP servers:
FROM node:22-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:22-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
# Non-root user
RUN addgroup --system mcp && adduser --system --group mcp
USER mcp
EXPOSE 3001
CMD ["node", "dist/index.js"]Debugging MCP Servers
Debugging MCP Servers

Debugging MCP servers is harder than debugging a typical API because the communication goes through a JSON-RPC layer and the client is an AI model. Here are the techniques we rely on.
Technique 1: MCP Inspector
The official MCP Inspector is the single most useful debugging tool:
npx @modelcontextprotocol/inspector npx tsx src/index.ts

This launches a web UI that lets you:
- See all registered tools, resources, and prompts
- Invoke tools manually with custom parameters
- Inspect the full JSON-RPC request/response cycle
- Test without needing an AI client
Technique 2: Structured Logging
Always log to stderr (stdout is reserved for JSON-RPC in stdio transport):
function log(level: string, message: string, data?: unknown) {
const entry = {
timestamp: new Date().toISOString(),
level,
message,
...(data ? { data } : {}),
};
console.error(JSON.stringify(entry));
}
// Use in tool handlers
server.tool("my_tool", "...", {}, async () => {
log("info", "my_tool invoked", { timestamp: Date.now() });
try {
const result = await doWork();
log("info", "my_tool completed", {
resultSize: JSON.stringify(result).length,
});
return { content: [{ type: "text", text: JSON.stringify(result) }] };
} catch (err) {
log("error", "my_tool failed", {
error: (err as Error).message,
});
throw err;
}
});

Technique 3: Claude Code Verbose Mode
When running MCP servers via Claude Code, use the --mcp-debug flag to see all JSON-RPC traffic:
claude --mcp-debug

This prints every request and response between Claude Code and your MCP server, which is invaluable for understanding what the model is actually sending.
Technique 4: Unit Testing Tool Handlers
Extract tool logic into testable functions:
// src/tools/query.ts
import pg from "pg";

export async function executeQuery(
pool: pg.Pool,
sql: string,
params?: unknown[]
) {
// Validation logic
// Query execution
// Result formatting
}
// src/tools/query.test.ts
import { describe, it, expect, beforeAll, afterAll } from "vitest";
import pg from "pg";
import { executeQuery } from "./query.js";
describe("executeQuery", () => {
let pool: pg.Pool;
beforeAll(async () => {
pool = new pg.Pool({
connectionString: process.env.TEST_DATABASE_URL,
});
});
afterAll(async () => {
await pool.end();
});
it("rejects non-SELECT queries", async () => {
const result = await executeQuery(pool, "DROP TABLE users");
expect(result.isError).toBe(true);
});
it("limits results to 100 rows", async () => {
const result = await executeQuery(
pool,
"SELECT generate_series(1, 200) AS n"
);
expect(JSON.parse(result.content[0].text).truncated).toBe(true);
});
it("parameterizes queries correctly", async () => {
const result = await executeQuery(
pool,
"SELECT $1::text AS name",
["test"]
);
expect(JSON.parse(result.content[0].text).rows[0].name).toBe(
"test"
);
});
});

Security: The Part Everyone Skips
MCP servers are code that runs with your credentials and accesses your infrastructure. The security surface is real. Here is our checklist.
Tool Authorization
Not every tool should be available to every user:
interface ToolPermissions {
readOnly: string[];
readWrite: string[];
admin: string[];
}
const permissions: ToolPermissions = {
readOnly: ["list_tables", "describe_table", "query"],
readWrite: ["create_record", "update_record"],
admin: ["drop_table", "run_migration"],
};
function createServerForRole(role: keyof ToolPermissions) {
const server = new McpServer({
name: "postgres-tools",
version: "1.0.0",
});
const allowedTools = permissions[role];
// Only register tools this role is allowed to access
if (allowedTools.includes("list_tables")) {
server.tool("list_tables", "...", {}, listTablesHandler);
}
if (allowedTools.includes("query")) {
server.tool("query", "...", { sql: z.string() }, queryHandler);
}
// ... etc
return server;
}

Data Access Control
Restrict what data can flow back to the AI model:
// Redact sensitive fields before returning
function redactSensitive(
rows: Record<string, unknown>[]
): Record<string, unknown>[] {
const sensitiveFields = [
"password",
"ssn",
"credit_card",
"api_key",
"secret",
"token",
];
return rows.map((row) => {
const cleaned = { ...row };
for (const field of sensitiveFields) {
if (field in cleaned) {
cleaned[field] = "[REDACTED]";
}
}
return cleaned;
});
}
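Wiring this into the query tool from earlier is a one-line change at the return site -- a sketch:

```typescript
// Inside the query tool handler: redact rows before they reach the model
const safeRows = redactSensitive(
  result.rows.slice(0, 100) as Record<string, unknown>[]
);
return {
  content: [{ type: "text", text: JSON.stringify(safeRows, null, 2) }],
};
```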
Input Validation

Always validate and sanitize inputs. The AI model can (and will) send unexpected inputs:
server.tool(
"query",
"Execute a read-only SQL query",
{
sql: z
.string()
.max(5000) // Limit query length
.refine(
(sql) => !sql.includes(";"), // No multi-statement
"Multi-statement queries are not allowed"
),
},
async ({ sql }) => {
// Additional runtime validation
// ...
}
);

Rate Limiting
Especially important for remote MCP servers:
const rateLimits = new Map<string, { count: number; resetAt: number }>();
function checkRateLimit(
clientId: string,
maxPerMinute: number = 60
): boolean {
const now = Date.now();
const entry = rateLimits.get(clientId);
if (!entry || entry.resetAt < now) {
rateLimits.set(clientId, {
count: 1,
resetAt: now + 60000,
});
return true;
}
entry.count++;
return entry.count <= maxPerMinute;
}
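To actually enforce the limit on a remote server, wire it into the Express auth middleware from earlier -- a sketch, assuming the bearer token doubles as the client identifier:

```typescript
// Reject over-limit clients before the request reaches the MCP transport
app.use("/mcp", (req, res, next) => {
  const clientId =
    req.headers.authorization?.replace("Bearer ", "") ?? "anonymous";
  if (!checkRateLimit(clientId, 60)) {
    res.status(429).json({ error: "Rate limit exceeded. Retry in a minute." });
    return;
  }
  next();
});
```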
Production Gotchas

These are the things that bit us in production. Learn from our pain.
Gotcha 1: Tool Descriptions Are Prompts
The AI model reads your tool descriptions to decide when and how to call tools. Vague descriptions lead to incorrect tool usage.
// Bad: The model will call this for everything
server.tool("search", "Search for things", { query: z.string() }, handler);
// Good: The model knows exactly when to use this
server.tool(
"search_customer_orders",
"Search for customer orders by order ID, customer email, or date range. Returns order status, items, and shipping info. Use this when the user asks about order status, shipping, or purchase history.",
{
order_id: z.string().optional().describe("Exact order ID, e.g., ORD-12345"),
email: z.string().email().optional().describe("Customer email address"),
date_from: z.string().optional().describe("Start date, ISO format"),
date_to: z.string().optional().describe("End date, ISO format"),
},
handler
);

Gotcha 2: Response Size Matters
AI models have context limits. Returning a 50,000-row query result is worse than useless -- it blows up the context and degrades model performance.
// Always paginate and summarize
const MAX_ROWS = 50;
const MAX_RESPONSE_CHARS = 10000;
function formatResponse(rows: unknown[], totalCount: number): string {
const limited = rows.slice(0, MAX_ROWS);
let text = JSON.stringify(limited, null, 2);
if (text.length > MAX_RESPONSE_CHARS) {
text = text.slice(0, MAX_RESPONSE_CHARS) + "\n... [truncated]";
}
if (totalCount > MAX_ROWS) {
text += `\n\nShowing ${MAX_ROWS} of ${totalCount} results. Refine your query for more specific results.`;
}
return text;
}

Gotcha 3: Error Messages Are Part of the UX
When a tool fails, the error message goes back to the AI model, which uses it to decide what to do next. Write error messages for AI consumption:
// Bad: The model has no idea what to do with this
return { content: [{ type: "text", text: "Error: ECONNREFUSED" }], isError: true };
// Good: The model can explain the issue and suggest next steps
return {
content: [
{
type: "text",
text: "Database connection failed. The PostgreSQL server at localhost:5432 is not responding. This usually means the database server is down or the connection string is incorrect. Ask the user to verify their database is running.",
},
],
isError: true,
};

Gotcha 4: Timeouts Kill Sessions
MCP clients expect responses within a reasonable time. A tool that hangs for 60 seconds is a terrible experience.
function withTimeout<T>(
  promise: Promise<T>,
  ms: number,
  toolName: string
): Promise<T> {
  let timer: NodeJS.Timeout | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () =>
        reject(
          new Error(
            `${toolName} timed out after ${ms}ms. The operation took too long. Try a more specific query or smaller dataset.`
          )
        ),
      ms
    );
  });
  // Clear the timer either way so a fast result doesn't leave a pending timeout
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
// Usage in tool handler
server.tool("query", "...", { sql: z.string() }, async ({ sql }) => {
const result = await withTimeout(
pool.query(sql),
10000, // 10 second timeout
"query"
);
// ...
});

Gotcha 5: Stdio Transport and Process Lifecycle
With stdio transport, the MCP server process is spawned and managed by the host application. When the host closes, the server process gets killed. Handle cleanup:
process.on("SIGINT", async () => {
console.error("Shutting down...");
await pool.end();
process.exit(0);
});
process.on("SIGTERM", async () => {
console.error("Shutting down...");
await pool.end();
process.exit(0);
});
// Also handle unexpected disconnects
process.stdin.on("end", async () => {
console.error("stdin closed, shutting down...");
await pool.end();
process.exit(0);
});Testing MCP Servers End-to-End
Beyond unit tests, we run integration tests that simulate the full MCP protocol:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
describe("MCP Postgres Server E2E", () => {
let client: Client;
beforeAll(async () => {
const transport = new StdioClientTransport({
command: "npx",
args: ["tsx", "src/index.ts"],
env: {
...process.env,
DATABASE_URL: process.env.TEST_DATABASE_URL!,
},
});
client = new Client(
{ name: "test-client", version: "1.0.0" },
{}
);
await client.connect(transport);
});
afterAll(async () => {
await client.close();
});
it("lists available tools", async () => {
const result = await client.listTools();
const toolNames = result.tools.map((t) => t.name);
expect(toolNames).toContain("list_tables");
expect(toolNames).toContain("describe_table");
expect(toolNames).toContain("query");
});
it("executes a query", async () => {
const result = await client.callTool("query", {
sql: "SELECT 1 + 1 AS sum",
});
const text = (result.content as Array<{ text: string }>)[0].text;
const parsed = JSON.parse(text);
expect(parsed.rows[0].sum).toBe(2);
});
it("rejects dangerous queries", async () => {
const result = await client.callTool("query", {
sql: "DROP TABLE users",
});
expect(result.isError).toBe(true);
});
});

Our MCP Server Checklist
Before shipping any MCP server to production, we run through this list:
| Check | Why |
|---|---|
| Tool descriptions are detailed and specific | Model needs clear guidance on when to use each tool |
| All inputs validated with Zod schemas | Prevents injection and unexpected behavior |
| Response sizes capped | Avoids context overflow |
| Sensitive data redacted | PII should never reach the model |
| Timeouts on all external calls | Prevents hung sessions |
| Structured logging to stderr | Debugging without breaking JSON-RPC |
| Graceful shutdown handlers | Clean resource cleanup |
| Rate limiting (remote servers) | Prevents abuse |
| E2E tests with MCP client SDK | Catches protocol-level issues |
| Error messages written for AI consumption | Enables graceful error recovery |
Where MCP Is Heading
The MCP ecosystem is moving fast. A few things we are watching:
MCP Apps (shipped January 2026) let MCP servers render interactive UI within AI conversations. This opens up approval workflows, data visualization, and form-based inputs directly in Claude.
OAuth 2.1 integration is coming to the spec, which will standardize authentication for remote MCP servers. Right now, auth is DIY.
Server discovery via a registry is in early stages. Imagine npm install but for MCP capabilities.
At CODERCOPS, we are betting heavily on MCP as the integration layer for AI-powered applications. Every project we build now has MCP servers as part of the architecture. If you are building AI integrations and not using MCP yet, you are writing glue code that will need to be rewritten.
The protocol is stable, the tooling is mature, and the ecosystem is enormous. The best time to start building MCP servers was six months ago. The second best time is today.
Building AI integrations for your product? CODERCOPS specializes in MCP server development, AI agent architecture, and production-grade tool integrations. Get in touch to discuss your project.