The MCP adoption curve does not look like a typical developer tool. Anthropic open-sourced the Model Context Protocol in November 2024. By March 2026, according to available reporting, it had crossed 97 million installs. That trajectory -- from zero to an eight-figure install count in roughly 16 months -- puts it in the company of npm packages that become invisible infrastructure. The question worth asking is not "what is MCP" but "why did it spread this fast?"
The answer, once you understand what problem it actually solves, becomes obvious.
What MCP is
MCP (Model Context Protocol) is an open standard for connecting AI applications to external systems. It uses a client-server architecture built on JSON-RPC 2.0. The official documentation at modelcontextprotocol.io describes it as "a USB-C port for AI applications" -- a standardized connector that works regardless of what AI model is on one end and what tool or data source is on the other.
Three participants are involved in every MCP interaction:
- MCP Host: the AI application itself (Claude Desktop, VS Code with Copilot, Cursor, your custom agent)
- MCP Client: the component inside the host that maintains a connection to a server
- MCP Server: a separate process that exposes capabilities to the client
One host can connect to multiple servers simultaneously. VS Code, for example, can maintain live connections to a filesystem server, a Sentry error tracking server, and a database server all at once. The model sees all their capabilities as a unified toolset.
Servers expose three types of primitives:
- Tools: executable functions the LLM can invoke (search a database, call an API, run a shell command)
- Resources: read-only data the model can access (file contents, API responses, structured documents)
- Prompts: reusable instruction templates for specific workflows
The protocol supports two transport modes. STDIO is used for local servers running on the same machine -- near-zero overhead, default for most developer setups. Streamable HTTP enables remote servers with OAuth-based authentication, which is what production deployments typically require.
Why adoption took off
Before MCP, connecting an AI agent to external tools meant function calling: you define JSON schemas in your application code, the model returns structured arguments, your application executes the function. This works for five tools. At 20 tools across multiple AI providers, you have a maintenance problem.
Function calling has two specific friction points that MCP eliminates:
Vendor lock-in. Each AI provider has its own schema format for tool definitions. A tool you build for OpenAI's function calling does not drop into Anthropic's API without rewriting the schema. MCP servers, by contrast, work with any MCP-compatible client regardless of which underlying model the client uses. Claude, ChatGPT, Gemini, GitHub Copilot, and Cursor all support MCP. Write the server once; use it everywhere.
Tight coupling. With function calling, tool definitions live inside the application. Adding a new tool means modifying and redeploying the application. MCP servers are independent processes. A client calls tools/list at runtime to discover what capabilities are available. If a server adds or removes a tool, it sends a notifications/tools/list_changed event and clients update automatically -- no application code changes required.
This separation of concerns is the real reason MCP spread. It lets platform teams build and maintain tool servers independently from the AI applications that consume them. A database team can ship an MCP server for their internal Postgres cluster. Every AI application in the organization connects to it without any application developer writing database connection code.
The ecosystem reinforced itself quickly. The official modelcontextprotocol/servers repository on GitHub hosts reference implementations including filesystem access, web fetch, and database connectors. The community-curated awesome-mcp-servers list has grown to include hundreds of servers covering everything from GitHub to Sentry to Notion. Every server added to that ecosystem is available to every MCP-compatible client immediately.
That network effect, combined with Claude Desktop and VS Code shipping MCP support natively, drove the install numbers.
How to build your first MCP server
The official quickstart uses Python and the mcp SDK. Here is the exact setup.
Prerequisites: Python 3.10 or higher. MCP Python SDK version 1.2.0 or higher. The uv package manager (recommended over pip for this workflow).
Step 1: Install uv and set up the project
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
Restart your terminal after installation, then:
uv init my-mcp-server
cd my-mcp-server
uv venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
uv add "mcp[cli]" httpx
Step 2: Write the server
Create server.py. The FastMCP class reads your Python type hints and docstrings to generate tool definitions automatically:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

@mcp.tool()
def greet(name: str) -> str:
    """Generate a greeting for a person."""
    return f"Hello, {name}!"

def main():
    mcp.run(transport="stdio")

if __name__ == "__main__":
    main()
Expected output when you run uv run server.py: the process starts and waits silently for JSON-RPC input. That is correct behavior for a STDIO server -- it is not hanging.
One thing that trips people up here: never use print() in a STDIO server. Standard output is the communication channel for JSON-RPC messages; any stray print statement corrupts the protocol. Write diagnostics to stderr instead, either through the logging module or with print("debug", file=sys.stderr) (after import sys).
Step 3: Connect it to Claude Desktop
Open your Claude Desktop configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add your server:
{
  "mcpServers": {
    "my-server": {
      "command": "uv",
      "args": ["run", "/absolute/path/to/your/server.py"]
    }
  }
}
Restart Claude Desktop. Your tools appear automatically in the interface. Claude can now call add_numbers and greet when the context warrants it.
For VS Code with Copilot, the configuration lives in your workspace or user settings under mcp.servers. The format is similar; the VS Code docs cover it at code.visualstudio.com under the Copilot/MCP section.
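For reference, a sketch of what that looks like in a workspace .vscode/mcp.json as of recent VS Code releases -- check the VS Code docs for the exact format in your version:

```json
{
  "servers": {
    "my-server": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "/absolute/path/to/your/server.py"]
    }
  }
}
```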
Install the MCP Inspector for faster iteration. Rather than restarting Claude Desktop after every change, run npx @modelcontextprotocol/inspector uv run server.py. It opens a local web UI at localhost:6274 where you can call tools interactively and inspect their responses. This shortens the development loop considerably.
Real use cases in production
The deployments that actually stick fall into a few patterns:
Internal data access. Teams build MCP servers that expose parameterized read access to internal databases or BI tools. An analyst asks Claude a question; Claude queries the MCP server; the server runs a safe, parameterized SQL query and returns results. Credentials never leave the server. The model never touches the database directly.
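The shape of such a tool can be sketched with the standard library alone. Here an in-memory SQLite table stands in for the real database, and orders_for_customer is an invented name; in an actual server you would register it with @mcp.tool():

```python
import sqlite3

# Stand-in for the team's real database; in production the connection
# and its credentials live only inside the server process.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "acme"), (2, "acme"), (3, "globex")],
)

def orders_for_customer(customer: str) -> list[int]:
    """Return order ids for one customer via a parameterized query.
    The ? placeholder keeps model-supplied input from altering the
    SQL structure -- the model only gets this narrow capability."""
    rows = conn.execute(
        "SELECT id FROM orders WHERE customer = ?", (customer,)
    ).fetchall()
    return [r[0] for r in rows]

print(orders_for_customer("acme"))  # [1, 2]
```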
Developer workflow integration. Cursor and VS Code use MCP servers for Sentry error data, GitHub PR context, and test runner output. The developer stays in the editor while the model has full context: current code, recent errors, open issues.
Agentic pipelines. Multi-step agent workflows chain MCP server calls across multiple systems: pull a Jira ticket, fetch the relevant code from GitHub, run tests, post a summary to Slack. Each system is a separate MCP server; the agent orchestrates them.
MCP vs. the alternatives
The comparison that comes up most is MCP vs. function calling, but they are not actually competing approaches. As documented by Portkey, function calling is Phase 1 (the model expresses intent with structured JSON output) and MCP is Phase 2 (the infrastructure that executes that intent portably). Most production MCP deployments use function calling under the hood; MCP standardizes what happens after the model decides to call a tool.
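The two phases can be sketched in a few lines. The shapes below are illustrative, not any provider's exact schema:

```python
# Phase 1 (function calling): the model emits structured intent.
tool_call = {"name": "get_weather", "arguments": {"city": "Berlin"}}

# Phase 2: something has to execute that intent. Without MCP, each
# application hand-writes a dispatch table like this; MCP replaces it
# with a standard JSON-RPC tools/call request to whichever connected
# server advertised the tool.
handlers = {"get_weather": lambda city: f"weather for {city}"}
result = handlers[tool_call["name"]](**tool_call["arguments"])
print(result)  # weather for Berlin
```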
The more useful comparison for someone evaluating options:
| Approach | Portability | Discoverability | Ops overhead | Best for |
|---|---|---|---|---|
| Function calling only | Low (per-provider schemas) | None (static at deploy time) | Low | Single-provider prototypes |
| MCP | High (any MCP client) | Dynamic via tools/list | Medium (server process) | Multi-model, multi-tool production |
| OpenAI Plugins (deprecated) | None (OpenAI only) | Plugin manifest | High | Discontinued; irrelevant |
Function calling alone is the right answer for a quick prototype against one model. MCP earns its operational overhead when you need portability across models, dynamic tool discovery, or credential isolation between tool logic and application code.
Honest limitations
MCP has real problems worth knowing before you commit to it.
Security surface. According to Red Hat's November 2025 security analysis, MCP servers can execute OS commands and make API calls. If adversarial content in a data source triggers unintended tool executions through the LLM -- a prompt injection through the server's output -- you have a problem. The MCP specification defines OAuth for authorization, but the community has identified that parts of the current specification conflict with modern OAuth standards. Treat every MCP server as a trust boundary. Audit what capabilities you expose.
The confused deputy problem. When an MCP server acts on behalf of a user request, it is easy to accidentally give that user access to anything the server can reach. The server has credentials; the user inherits them unless you implement explicit user-level permission checks. This is not automatic, and the spec does not enforce it for you.
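What an explicit check looks like is mundane but easy to forget. A sketch with invented names (ALLOWED, authorize, delete_record); the point is that the tool handler itself, not the transport, enforces per-user rights:

```python
# Per-user rights table -- illustrative; in practice this would come
# from your identity provider or policy service.
ALLOWED = {
    "alice": {"read"},
    "bob": {"read", "write"},
}

def authorize(user: str, action: str) -> None:
    """Fail closed: the server's own credentials are broad, so every
    request must be checked against the requesting user's rights."""
    if action not in ALLOWED.get(user, set()):
        raise PermissionError(f"{user} may not {action}")

def delete_record(user: str, record_id: int) -> str:
    authorize(user, "write")
    # ... perform the deletion with the server's credentials ...
    return f"deleted {record_id}"
```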
STDIO transport is local-only. The zero-config setup only works on the same machine as the host. Multi-user production deployments need Streamable HTTP transport with proper authentication, which is significantly more infrastructure to stand up and operate.
The spec is still moving. The protocol has evolved quickly. Servers built against early spec versions may not support newer capabilities (Sampling, Elicitation, Tasks). SDK versions have not always been backward-compatible across minor releases. Pin your SDK version -- mcp[cli]>=1.2.0 is a safe floor -- and test before upgrading.
Where to start
Pick one internal tool your team already uses. Write a 50-line MCP server that exposes its most-queried data as a resource or tool. Connect it to Claude Desktop for a week. If it saves time, extend it. If it does not, you spent an afternoon.
The official documentation at modelcontextprotocol.io is accurate and regularly updated. The MCP Inspector is worth using from day one. The modelcontextprotocol/servers repository has reference implementations you can read and adapt rather than starting from scratch.
The install count got to 97 million because the people who tried it found it useful enough to keep using. The protocol is not magic -- it is a well-designed client-server spec with good SDK support. That turns out to be enough.
Sage Thornton writes technical guides for The Daily Vibe.