# Building MCP Servers: Extending AI Tools with Custom Integrations

Model Context Protocol (MCP) is how you give AI tools like Claude access to your own systems — databases, APIs, documentation, file systems. Instead of copying context into a chat window, you build a server that the AI queries directly. Once you understand the pattern, it takes about an hour to build one.
## What MCP Actually Is
MCP is a standardized protocol for AI tools to communicate with external data sources. Think of it like a USB port for AI — a common interface that any tool can plug into. An MCP server exposes “tools” (actions the AI can call) and “resources” (data the AI can read). The AI client (Claude Code, for example) discovers available tools at connection time and can invoke them during a conversation.
The protocol uses JSON-RPC over stdio or HTTP. The server declares its capabilities, and the client calls them as needed. No polling, no webhooks — it’s a direct request-response pattern.
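To make the request-response pattern concrete, here is a sketch of what a tool invocation looks like on the wire. The method and field names follow the MCP specification; the tool name and arguments are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_docs",
    "arguments": { "query": "authentication" }
  }
}
```

The server replies with a message carrying the matching `id` and a `result` containing the tool's `content` array.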
## Architecture Overview
An MCP server has three main parts:
- Tool definitions — what the AI can do (query a database, search docs, create a ticket)
- Resource definitions — what the AI can read (project context, config files, schemas)
- Transport layer — how the server communicates (stdio for local tools, SSE/HTTP for remote)
## Building a Documentation Server
Here’s a practical example: an MCP server that lets AI search and read your project’s documentation. Install the SDK first:
```bash
npm init -y
npm install @modelcontextprotocol/sdk
```
Create the server:
```typescript
// src/index.ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
import { readdir, readFile } from 'fs/promises';
import { join, extname, resolve } from 'path';

const DOCS_DIR = process.env.DOCS_DIR || './docs';

const server = new McpServer({
  name: 'docs-server',
  version: '1.0.0',
});

// Tool: search documentation files
server.tool(
  'search_docs',
  'Search documentation files by keyword',
  { query: z.string().describe('Search keyword or phrase') },
  async ({ query }) => {
    const files = await getAllMarkdownFiles(DOCS_DIR);
    const results: { file: string; matches: string[] }[] = [];
    for (const file of files) {
      const content = await readFile(file, 'utf-8');
      if (content.toLowerCase().includes(query.toLowerCase())) {
        const matchingLines = content
          .split('\n')
          .filter((line) => line.toLowerCase().includes(query.toLowerCase()))
          .slice(0, 3);
        results.push({
          file: file.replace(DOCS_DIR, ''),
          matches: matchingLines,
        });
      }
    }
    return {
      content: [{
        type: 'text',
        text: JSON.stringify(results, null, 2),
      }],
    };
  }
);

// Tool: read a specific doc file
server.tool(
  'read_doc',
  'Read a documentation file by path',
  { path: z.string().describe('Relative path to the doc file') },
  async ({ path }) => {
    const fullPath = join(DOCS_DIR, path);
    // Reject paths that resolve outside the docs directory
    if (!resolve(fullPath).startsWith(resolve(DOCS_DIR))) {
      throw new Error('Path is outside the docs directory');
    }
    const content = await readFile(fullPath, 'utf-8');
    return {
      content: [{ type: 'text', text: content }],
    };
  }
);

async function getAllMarkdownFiles(dir: string): Promise<string[]> {
  const entries = await readdir(dir, { withFileTypes: true });
  const files: string[] = [];
  for (const entry of entries) {
    const fullPath = join(dir, entry.name);
    if (entry.isDirectory()) {
      files.push(...(await getAllMarkdownFiles(fullPath)));
    } else if (extname(entry.name) === '.md') {
      files.push(fullPath);
    }
  }
  return files;
}

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);
```
## Connecting to Claude Code
Register your server in the Claude Code MCP config file. Create or edit `.mcp.json` in your project root:
```json
{
  "mcpServers": {
    "docs": {
      "command": "npx",
      "args": ["tsx", "./mcp-docs/src/index.ts"],
      "env": {
        "DOCS_DIR": "./docs"
      }
    }
  }
}
```
Restart Claude Code, and it will automatically discover and connect to your server. You can then ask Claude to search your docs or read specific files, and it will use your MCP tools to do so.
## Real-World Use Cases
- Database query tool — expose read-only SQL queries so the AI can check data while debugging. Much safer than giving it direct database access.
- Deployment status — query your CI/CD pipeline’s API to check build statuses, recent deployments, and error logs.
- Internal API explorer — let the AI query your staging API with pre-configured authentication, so it can test endpoints and understand response shapes.
- Project context — expose package.json, environment configs, and architecture decision records so the AI always has up-to-date project context.
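For the database case, the "read-only" guarantee is worth enforcing in code, not just in the tool description. A minimal sketch of such a guard (`isReadOnlyQuery` is a hypothetical helper, not part of the MCP SDK) might reject anything that isn't a single plain SELECT:

```typescript
// Hypothetical guard for a read-only SQL tool: accept a single SELECT
// statement and reject anything that could modify data.
function isReadOnlyQuery(sql: string): boolean {
  const trimmed = sql.trim().replace(/;\s*$/, ''); // drop one trailing semicolon
  if (trimmed.includes(';')) return false; // no stacked statements
  if (!/^select\b/i.test(trimmed)) return false; // must start with SELECT
  // Block keywords that can write or alter state even inside a SELECT
  return !/\b(insert|update|delete|drop|alter|create|grant|truncate)\b/i.test(trimmed);
}

console.log(isReadOnlyQuery('SELECT * FROM users WHERE id = 1')); // true
console.log(isReadOnlyQuery('DROP TABLE users')); // false
console.log(isReadOnlyQuery('SELECT 1; DELETE FROM users')); // false
```

A keyword filter like this is a second line of defense, not a substitute for the real fix: connect with a database role that only has read permissions.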
## Tips from Building Several
- Keep tools focused. One tool per action. Don’t build a “do everything” tool — the AI picks the right tool better when each has a clear, narrow purpose.
- Validate inputs. Use Zod schemas for every parameter. The AI will sometimes pass unexpected types.
- Return structured data. JSON responses are easier for the AI to parse and reason about than free-form text.
- Add descriptions to everything. Tool descriptions, parameter descriptions — they’re the AI’s documentation for understanding what your tools do.
- Test with the MCP Inspector. Run `npx @modelcontextprotocol/inspector` to test your server interactively before connecting it to an AI client.
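The "return structured data" tip is easy to centralize. A small helper (hypothetical, not part of the SDK) keeps every tool response in the same `content` shape the handlers above return:

```typescript
// Hypothetical helper: wrap any JSON-serializable value in the
// { content: [{ type: 'text', text }] } shape used for MCP tool results.
function toToolResult(data: unknown) {
  return {
    content: [{ type: 'text' as const, text: JSON.stringify(data, null, 2) }],
  };
}

const result = toToolResult({ file: '/guide.md', matches: 2 });
console.log(result.content[0].text);
```

With a helper like this, each tool handler reduces to `return toToolResult(whateverItComputed)`, and the AI always receives consistently formatted JSON.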
Written by Adrian Saycon
A developer with a passion for emerging technologies, Adrian Saycon focuses on transforming the latest tech trends into great, functional products.


