
Can the Model Context Protocol Finally Give Your AI Features the Context They Crave?

Explore why Anthropic’s new open standard is the missing piece for developers trying to build AI tools that interact seamlessly with local databases, APIs, and file systems.

· 4 min read

I spent three hours yesterday manually copying logs from a local Postgres instance into Claude just to find a single syntax error in a migration script. It’s 2024, and for all the "AGI is coming" hype, we’re still mostly glorified copy-pasters, shuttling text from one window to another.

The dream of AI is that it just *knows*—that it has the context of your codebase, your database schema, and your internal Slack channels without you having to feed it snippets like a nervous bird. Anthropic’s Model Context Protocol (MCP) is basically an attempt to stop the copy-paste madness by creating a universal "USB-C port" for AI models.

Why we’re all drowning in glue code

Usually, if you want an LLM to talk to your data, you write a custom integration. You write a FastAPI wrapper, handle the authentication, format the JSON specifically for OpenAI or Anthropic, and then pray the model doesn't hallucinate the schema. If you want to switch models or add a new data source, you’re back to square one, writing more glue code.

MCP flips the script. Instead of the AI being at the center of a spiderweb of custom integrations, you build an MCP Server that exposes your data in a standardized way. Any MCP Client (like Claude Desktop or a custom IDE) can then plug into that server and instantly understand how to read files, query databases, or trigger API calls.

Building a simple MCP Server

Let's look at what this actually looks like in practice. We’ll build a small Python server that lets an AI inspect a local SQLite database. No more "I'll paste the schema here" messages.

First, you’ll need the mcp Python SDK:

pip install mcp

Here’s a basic server that exposes a tool to list table names.

from mcp.server.fastmcp import FastMCP
import sqlite3

# Initialize FastMCP - it's a high-level wrapper that makes this easy
mcp = FastMCP("DatabaseExplorer")

DB_PATH = "my_app_data.db"

@mcp.tool()
def get_table_names() -> list[str]:
    """Retrieves all table names from the local SQLite database."""
    with sqlite3.connect(DB_PATH) as conn:
        cursor = conn.cursor()
        cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
        return [row[0] for row in cursor.fetchall()]

@mcp.tool()
def query_table(table_name: str, limit: int = 5) -> str:
    """Gets the first few rows of a specific table to understand its structure."""
    with sqlite3.connect(DB_PATH) as conn:
        cursor = conn.cursor()
        # Table names can't be passed as SQL parameters, so validate against the
        # actual schema before interpolating -- an LLM-supplied name is untrusted
        # input, and these tools run in your local environment.
        tables = {row[0] for row in cursor.execute(
            "SELECT name FROM sqlite_master WHERE type='table';")}
        if table_name not in tables:
            return f"Unknown table: {table_name}"
        cursor.execute(f"SELECT * FROM {table_name} LIMIT ?", (limit,))
        columns = [description[0] for description in cursor.description]
        rows = cursor.fetchall()
        return f"Columns: {columns}\nData: {rows}"

if __name__ == "__main__":
    mcp.run()

This looks like standard Python, right? The magic is in the @mcp.tool() decorator. When you connect this to an MCP-compatible client, the AI sees these functions as "capabilities." It knows when to call get_table_names because you gave it a docstring explaining what it does.
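To make that less magical, here is a rough sketch of what the decorator is doing conceptually: turning the function's signature and docstring into a JSON-Schema-style tool description that the client advertises to the model. The `describe_tool` helper below is my own illustration, not FastMCP's actual implementation (the real SDK builds its schemas with Pydantic and handles far more types):

```python
import inspect
import typing

def describe_tool(func) -> dict:
    """Approximate the tool description an MCP client receives.

    Illustration only -- FastMCP's real schema generation is more thorough.
    """
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    hints = typing.get_type_hints(func)
    properties, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": type_names.get(hints.get(name, str), "string")}
        # Parameters without defaults are required arguments.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        # The docstring becomes the description the model uses to decide
        # when (and whether) to call the tool.
        "description": inspect.getdoc(func),
        "inputSchema": {"type": "object", "properties": properties,
                        "required": required},
    }

def query_table(table_name: str, limit: int = 5) -> str:
    """Gets the first few rows of a specific table."""
    return ""

print(describe_tool(query_table))
```

The takeaway: your docstrings and type hints are no longer just documentation; they are the interface the model reasons about, so write them like API docs.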

Resources vs. Tools: The Nuance

MCP distinguishes between Resources and Tools.

- Resources are like read-only files or data snapshots (think of a log file or a documentation page). They are things the AI can "read."
- Tools are actions. They can change things, query things, or perform calculations.

If you wanted the AI to monitor a live log file, you’d expose it as a resource:

@mcp.resource("logs://app-logs")
def get_app_logs() -> str:
    with open("app.log", "r") as f:
        return f.read()

The AI can now "subscribe" to logs://app-logs. It's a much cleaner mental model than trying to jam everything into a massive prompt window.

Connecting the dots (The "Host" part)

The part that usually trips people up is how the AI actually *reaches* this code. If you’re using the Claude Desktop app, you add the server to your claude_desktop_config.json file:

{
  "mcpServers": {
    "my-db-tool": {
      "command": "python",
      "args": ["/path/to/your/server.py"]
    }
  }
}

Once you restart Claude, a little hammer icon appears. The model now has a direct line to your local SQLite DB. You can just ask, "Hey, what are the last five users who signed up?" and it will execute the Python code, get the result, and explain it to you.
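One practical tweak: rather than hardcoding `DB_PATH` in the server script, the same config block can pass it in via the `env` key, which Claude Desktop's MCP server config supports for setting environment variables on the spawned process. A sketch (the paths are placeholders):

```json
{
  "mcpServers": {
    "my-db-tool": {
      "command": "python",
      "args": ["/path/to/your/server.py"],
      "env": {
        "DB_PATH": "/path/to/my_app_data.db"
      }
    }
  }
}
```

On the Python side, `DB_PATH = os.environ.get("DB_PATH", "my_app_data.db")` keeps the old default while letting the config decide which database the server actually points at.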

The "Gotchas" and the Reality Check

It’s not all sunshine and automated workflows yet. There are a few things to keep in mind:

1. Security is your problem. If you give an MCP tool the ability to DROP TABLE, and the LLM has a bad day (or follows a prompt injection from a malicious email you asked it to summarize), your data is gone. Always use read-only database users for these tools.
2. Latency. Every tool call is a round-trip. If your tool is slow, the "chat" experience feels sluggish.
3. Local vs. Remote. MCP is fantastic for local dev tools, but exposing remote APIs via MCP requires thinking about auth tokens and how to securely pass them from the client to the server.
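That first gotcha is worth making concrete. With SQLite specifically, you can enforce read-only access at the connection level using a `file:` URI with `mode=ro`, so a destructive statement fails even if the model is tricked into issuing one. A minimal, self-contained sketch (`demo.db` is a throwaway file created just for the demonstration):

```python
import sqlite3

# Set up a throwaway database to demonstrate against.
with sqlite3.connect("demo.db") as conn:
    conn.execute("DROP TABLE IF EXISTS users")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'ada')")

# mode=ro makes SQLite reject all writes at the connection level, so even a
# prompt-injected DROP TABLE fails instead of silently succeeding.
ro = sqlite3.connect("file:demo.db?mode=ro", uri=True)
print(ro.execute("SELECT name FROM users").fetchone())  # reads still work

try:
    ro.execute("DROP TABLE users")
except sqlite3.OperationalError as exc:
    print(f"write blocked: {exc}")
```

For Postgres, the equivalent move is connecting your MCP tools with a role that only has `SELECT` granted; either way, the safety lives in the database layer, not in the prompt.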

Is this actually the "One Protocol to Rule Them All"?

We've seen "standards" come and go, but MCP feels different because it’s solving a pain point that every single AI developer is feeling right now: context fragmentation.

By decoupling the *source of the data* from the *AI model*, we’re moving toward a world where you don't have to rebuild your entire stack just because a better model came out. You keep your MCP servers, and you just point the new model at them.

If you’re tired of being a human middleware, it’s probably time to stop writing bespoke wrappers and start looking at how to turn your internal tools into MCP servers. It's a bit more work upfront, but your future, less-caffeinated self will thank you.