I Built a Real MCP Server — Install It in Claude Desktop Right Now
There's a difference between understanding MCP and implementing it. I built a working Node.js MCP server using the official @modelcontextprotocol/sdk — stdio transport, 6 tools with Zod schemas, full JSON-RPC 2.0 protocol. Install it in Claude Desktop with one config line and ask Claude about my portfolio.
I've written about MCP before — how the protocol works, what the JSON-RPC messages look like. But there's a difference between understanding a protocol and implementing it. Last night I built a real, working MCP server from scratch: a Node.js package that you can install in Claude Desktop right now and use to query my portfolio.
The server is at github.com/matua-agent/harrison-mcp-server. Install it in Claude Desktop with one config line, restart, and ask Claude anything about my background.
Simulation vs. implementation
My MCP demo shows the protocol. It runs in a browser, emits protocol-shaped JSON over SSE, and makes the message flow visible. That's a useful educational tool. But it's not what Clio's team actually builds.
A real MCP server:
- Runs as a process, not a web server — spawned by the client (Claude Desktop, Claude Code, etc.)
- Uses stdio transport — reads from stdin, writes to stdout, newline-delimited JSON-RPC 2.0
- Handles the initialize → tools/list → tools/call lifecycle precisely
- Returns tool results in the exact format the client expects — or returns isError: true on failure
The client manages the process lifecycle. Your server binary is listed in claude_desktop_config.json as a command to run. Claude Desktop spawns it, connects stdio, and handles the protocol. You don't deal with HTTP, ports, TLS, or request routing.
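Under the hood, the transport is just newline-delimited JSON-RPC 2.0 on stdin/stdout. A minimal sketch of that framing, as the SDK implements it internally (frame and parseFrames are illustrative helpers, not SDK API):

```typescript
// Illustrative sketch of stdio framing: one JSON-RPC 2.0 message per line.
// The SDK does this for you; these helpers exist only to show the format.
type JsonRpcMsg = {
  jsonrpc: "2.0";
  id?: number;
  method?: string;
  params?: unknown;
  result?: unknown;
};

// Serialize a message for the wire: JSON followed by a newline delimiter.
function frame(msg: JsonRpcMsg): string {
  return JSON.stringify(msg) + "\n";
}

// Split an incoming chunk on newlines and parse each non-empty line.
function parseFrames(chunk: string): JsonRpcMsg[] {
  return chunk
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
}
```

Because framing is this simple, a stdio MCP server needs no networking code at all; the client owns the pipes.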
The implementation
The official SDK is @modelcontextprotocol/sdk. The high-level API is McpServer:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "harrison-mcp-server",
version: "1.0.0",
});
server.registerTool(
"get_profile",
{
title: "Get Harrison's Profile",
description:
"Returns Harrison's full background: current roles, summary, contact info. " +
"Use for any question about who Harrison is or how to contact him.",
inputSchema: {}, // no required inputs
},
async () => {
return {
content: [{ type: "text", text: JSON.stringify(PROFILE, null, 2) }],
};
}
);
const transport = new StdioServerTransport();
await server.connect(transport);

That's it. The SDK handles framing, buffering, the initialize handshake, and routing tool calls to the right handler. You register tools, connect the transport, and your server is live.
The six tools
I built six tools, each with a specific purpose and Zod-validated input schema:
- get_profile — No args. Returns name, location, email, current roles, status. Zero-argument tool, always available.
- search_knowledge_base — Accepts a query string. Searches across projects, research, skills, and job targets using keyword relevance scoring. This is the primary discovery tool — call it first for anything not clearly covered by the more specific tools.
- get_project_details — Accepts a project ID or name. Returns full description, live URL, GitHub link, and tags. Call get_project_details('list') to see all available projects.
- list_projects — Optional tag filter. Returns a summary of all 23 portfolio projects. Useful for browsing by technology.
- get_research — No args. Returns 3 peer-reviewed papers with DOIs, journal names, years, and methodology summaries.
- get_job_targets — No args. Returns target companies, fit scores, salary range, and application status. Includes "strengths for interviewers" — things worth knowing before a call.
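The keyword relevance scoring behind search_knowledge_base can be sketched as simple term overlap. This is an illustrative version only; ENTRIES, the section names, and the scoring rule are invented here and are not the server's actual data or tuning:

```typescript
// Illustrative keyword relevance scoring for a knowledge-base search tool.
// The corpus and weights below are invented for this sketch.
type Entry = { section: string; text: string };

const ENTRIES: Entry[] = [
  { section: "projects", text: "MCP server built with the official SDK and stdio transport" },
  { section: "research", text: "Peer-reviewed paper on survey methodology" },
  { section: "skills", text: "Node.js TypeScript JSON-RPC protocol design" },
];

// +1 for each query term that appears anywhere in the entry's text.
function score(query: string, entry: Entry): number {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  const haystack = entry.text.toLowerCase();
  return terms.reduce((s, t) => s + (haystack.includes(t) ? 1 : 0), 0);
}

// Score every entry, drop non-matches, and return highest relevance first.
function searchKnowledgeBase(query: string): Entry[] {
  return ENTRIES.map((e) => ({ e, s: score(query, e) }))
    .filter((r) => r.s > 0)
    .sort((a, b) => b.s - a.s)
    .map((r) => r.e);
}
```

Even this naive scoring works well for a small, curated corpus; the model's query terms tend to be specific enough that term overlap is a strong signal.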
Tool description quality drives behavior
The single most important thing I learned building tool-using agents: description quality determines when tools get called. The input schema tells the model how to call a tool; the description tells it when.
Compare these for search_knowledge_base:
- ❌ "Search Harrison's knowledge base" — when does Claude use this vs. answer from its context?
- ✅ "Search Harrison's curated knowledge base. Returns relevant information about his background, projects, research, skills, or job search. Searches across all sections. Use for any question about Harrison." — explicit scope, explicit trigger condition
The phrase "Use for any question about Harrison" is the key signal. Without it, Claude might try to answer from its training data (which has nothing specific about me) instead of calling the tool. With it, the tool is reliably called for on-topic questions.
Wire format
When you send a tools/call request, it looks like this:
// Client → Server (stdin)
{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "search_knowledge_base",
"arguments": {
"query": "Clio MCP server"
}
}
}
// Server → Client (stdout)
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"content": [
{
"type": "text",
"text": "{ \"query\": \"Clio MCP server\", \"results\": [...] }"
}
],
"isError": false
}
}

The content array can hold text, images, and embedded resources. For most tools, you return a single text block with JSON — Claude can parse and reason over structured JSON natively.
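Failures use the same result shape with isError: true rather than a JSON-RPC error object, so the model can read the error text and recover. A representative example (the error text here is illustrative):

```json
// Server → Client (stdout) — tool-level failure
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      { "type": "text", "text": "Unknown project: 'foo'" }
    ],
    "isError": true
  }
}
```

JSON-RPC error objects are reserved for protocol-level problems (malformed requests, unknown methods); tool-level failures stay inside result so they reach the model.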
How to install in Claude Desktop
Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
"mcpServers": {
"harrison": {
"command": "npx",
"args": ["-y", "harrison-mcp-server"]
}
}
}

Restart Claude Desktop. The -y flag lets npx download and run the package on first use without prompting (no separate install step). After restart, you'll see a 🔌 tools indicator in the UI. Ask Claude: "What research has Harrison published?" and watch it call get_research.
What this demonstrates
Clio's Enterprise AI team builds MCP servers to connect Claude to their legal databases, matter management systems, and billing APIs. The same transport (stdio), protocol (MCP JSON-RPC 2.0), and SDK (@modelcontextprotocol/sdk) I used here are what they use.
There's a qualitative difference between "I understand how MCP servers work" and "I can build one you can install right now." This server is the latter.
The next step would be building a server that connects to real external APIs — a Clio-adjacent version might expose search_matters, get_document, create_time_entry. The architecture is identical; only the tool implementations change.
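A minimal sketch of what a search_matters handler could look like, keeping the same content/isError result shape. Everything here is invented for illustration — MATTERS, the field names, and the matching logic stand in for a real Clio API call:

```typescript
// Hypothetical Clio-adjacent tool handler. In a real server, MATTERS would
// be replaced by an authenticated API call; the result shape stays the same.
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

const MATTERS = [
  { id: "M-001", client: "Acme Corp", status: "open", description: "Contract dispute" },
  { id: "M-002", client: "Globex", status: "closed", description: "IP licensing" },
];

function searchMatters(query: string): ToolResult {
  const q = query.toLowerCase();
  const hits = MATTERS.filter(
    (m) => m.client.toLowerCase().includes(q) || m.description.toLowerCase().includes(q)
  );
  if (hits.length === 0) {
    // Tool-level failure: isError true, readable message the model can act on.
    return { content: [{ type: "text", text: `No matters matched "${query}"` }], isError: true };
  }
  return { content: [{ type: "text", text: JSON.stringify(hits, null, 2) }] };
}
```

Swapping the in-memory array for an HTTP client is the whole difference between this demo and a production server; the protocol surface doesn't change.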