
MCP Protocol: Connecting AI Agents to the Real World

The Model Context Protocol (MCP) gives AI agents structured access to tools and APIs. Here is what it is, how it works, and why it matters.

Gil Kal · March 19, 2026 · 8 min read

AI agents are only as useful as the tools they can access. A code review agent that cannot read your repository is useless. A project management agent that cannot create tasks in Jira is just a chatbot. The Model Context Protocol (MCP) solves this by providing a standardized way for agents to discover, authenticate with, and invoke tools — regardless of which LLM or framework powers the agent.

Before MCP, every tool integration was custom. If you wanted your agent to create a GitHub issue, you wrote a GitHub API wrapper. If you wanted it to search your knowledge base, you wrote another wrapper. If you wanted it to approve a task, yet another. Each integration had its own authentication, its own error handling, its own schema. MCP replaces all of that with a single protocol that every tool and every agent can speak.

What MCP Actually Is

MCP is a protocol built on JSON-RPC 2.0 that defines how AI agents interact with external tools and data sources. It has three core concepts: Tools (actions an agent can take), Resources (data an agent can read), and Prompts (templates for common workflows). An MCP server exposes these capabilities, and an MCP client — typically an AI agent or IDE — consumes them.

The elegance of MCP is in its simplicity. A tool is described by a name, a description that the LLM reads to decide when to use it, and a JSON schema defining its inputs and outputs. The LLM does not need custom integration code for each tool. It reads the tool descriptions, decides which tool to use based on the current task, constructs the correct parameters, and sends a standard JSON-RPC request. The MCP server executes the tool and returns a structured result.
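To make this concrete, here is a minimal sketch of what such a tool definition looks like, shaped like an entry in an MCP `tools/list` response. The `create_task` tool and its fields are illustrative, not from any real server:

```python
# A hypothetical MCP tool definition: a name, an LLM-facing description,
# and a JSON Schema describing the inputs. (The tool itself is illustrative.)
create_task_tool = {
    "name": "create_task",
    "description": "Create a new task in the project tracker. "
                   "Use when the user asks to file, log, or schedule work.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Short task summary"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

# The LLM reads description + inputSchema to decide when and how to call it.
print(create_task_tool["inputSchema"]["required"])  # ['title']
```

Everything the model needs to use the tool correctly — when to call it, what arguments it takes, which are required — lives in this one structure.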

MCP is to AI agents what HTTP was to web browsers — a standard protocol that turns a fragmented ecosystem into an interoperable one.

Why Not Just Use REST APIs?

REST APIs were designed for application-to-application communication. MCP is designed for AI-to-tool communication. The difference matters in several ways.

First, tool discovery. With REST, a developer reads API documentation and writes integration code. With MCP, the agent asks the server what tools are available and gets back structured descriptions it can reason about. Second, schema awareness. MCP input schemas tell the LLM exactly what parameters each tool expects, including types, required fields, and descriptions. The LLM uses this to construct correct calls without custom parsing logic. Third, unified authentication. Instead of managing separate API keys for every service, MCP supports OAuth 2.1 tokens that scope access across the entire tool catalog. With REST, each integration is custom. With MCP, tools are plug-and-play.
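The discovery step can be sketched as a single JSON-RPC round trip. The request shape follows the MCP `tools/list` method; the response body here is a made-up example of what a server might return:

```python
# Sketch of MCP tool discovery: the client sends a standard JSON-RPC
# "tools/list" request and gets back structured tool descriptions.
request = {"jsonrpc": "2.0", "method": "tools/list", "id": 1}

# An illustrative server response (tool names are hypothetical).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "create_task", "description": "Create a task in the tracker"},
            {"name": "list_tasks", "description": "List tasks for the caller"},
        ]
    },
}

# No per-tool integration code: the agent just reads the catalog.
names = [t["name"] for t in response["result"]["tools"]]
print(names)  # ['create_task', 'list_tasks']
```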

Anatomy of an MCP Tool

An MCP tool has a name, a description (which the LLM reads to decide when to use it), an input schema (what parameters it accepts), and an output format. When an agent decides to use a tool, it sends a JSON-RPC request with the tool name and parameters. The MCP server executes the tool and returns the result. The entire exchange is structured and logged.

Here is what a typical MCP tool call looks like. The agent wants to create a task, so it sends a request with the tool name and the required arguments:

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "create_task",
    "arguments": {
      "title": "Fix login page bug",
      "priority": "high",
      "assigned_agent": "backend-worker"
    }
  },
  "id": 1
}

The server responds with a structured result that the agent can parse and act on. If the call fails, the error response includes a code and message that helps the agent decide whether to retry, try a different approach, or escalate to a human. This structured error handling is a significant improvement over REST integrations where each API has its own error format.

Tool composition is another powerful pattern. An agent can chain multiple tool calls together — first searching for relevant documents, then creating a task based on the findings, then assigning the task to another agent. Each call goes through the same MCP server with the same authentication, the same logging, and the same policy enforcement. The agent orchestrates complex workflows by combining simple, well-defined tools.
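The chaining pattern above can be sketched with a single stubbed transport. Every tool name and result shape here is hypothetical; in a real agent, `call_tool` would send a `tools/call` request over MCP:

```python
# Sketch of tool composition: one call_tool() transport, three chained calls.
def call_tool(name: str, arguments: dict) -> dict:
    # Stand-in for sending {"method": "tools/call", ...} over MCP.
    fake_results = {
        "search_documents": {"doc_ids": ["doc-7"]},
        "create_task": {"task_id": "T-42"},
        "assign_task": {"assigned": True},
    }
    return fake_results[name]

docs = call_tool("search_documents", {"query": "login bug"})
task = call_tool("create_task", {"title": "Fix login bug",
                                 "context": docs["doc_ids"]})
done = call_tool("assign_task", {"task_id": task["task_id"],
                                 "agent": "backend-worker"})
print(done["assigned"])  # True
```

Because every call shares one transport, the same authentication, logging, and policy checks apply at every step of the chain.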

Tool Categories in Practice

A production MCP server can expose dozens of tools across multiple domains. For example, Dobby's MCP server provides 73+ tools organized into categories: task management (create, update, list, assign tasks), approvals (get pending, approve, reject, request changes), agent operations (status, metrics, health, scheduling), knowledge base (search documents, add memories, manage connectors), and gateway management (create keys, check usage, view costs). Each tool is documented with its schema, so any MCP-compatible client can use it immediately.

Beyond the core categories, MCP tools can integrate with external services. GitHub tools let agents create issues, review PRs, and check CI status. Exa tools provide web search capabilities. AgentMail tools handle email operations. The point is that MCP is not limited to a single domain — it scales to cover every tool an agent might need, all through the same protocol and the same authentication.

MCP in Your IDE

One of the most practical uses of MCP is connecting your IDE to your agent platform. Claude Desktop, Cursor, VS Code, and other editors support MCP natively. Once connected, you can create tasks, check agent status, approve actions, and query your knowledge base — all without leaving your editor. The MCP server handles authentication, so each user only sees data they are authorized to access.

Setting up MCP in your IDE typically takes under five minutes. You add a configuration block with the server URL and your gateway key, restart the editor, and the tools appear in your AI assistant's tool palette. From that point, you can ask your AI assistant to list pending approvals and it will call the MCP tool, or ask it to create a task and it will use the create_task tool with the right parameters. The developer experience is seamless because the protocol handles all the complexity.
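The configuration block is typically a short JSON fragment. The exact field names vary by editor, so treat this shape, the URL, and the key placeholder as illustrative:

```json
{
  "mcpServers": {
    "dobby": {
      "url": "https://mcp.example.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-gateway-key>"
      }
    }
  }
}
```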

The IDE integration changes how developers interact with their agent platform. Instead of switching to a web dashboard to check on agent status or approve a pending action, they stay in their editor. A developer reviewing code can ask the AI assistant to show the agent's execution history, check if there are pending approvals, or look up related knowledge base documents — all through MCP tools that return structured data directly into the conversation.

Security and Authentication

MCP supports OAuth 2.1 for authentication, which means agents authenticate with scoped tokens that limit what they can do. A code review agent might have read access to repositories but no ability to merge. A monitoring agent might be able to read metrics but not modify configurations. This scope-based access is enforced at the protocol level, not the application level — which means it works consistently across all tools.
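A scope check of this kind can be sketched in a few lines. The scope names and the tool-to-scope mapping below are illustrative, not defined by the MCP spec:

```python
# Sketch of scope-based authorization: each tool requires a scope, and the
# server checks the caller's token scopes before executing anything.
REQUIRED_SCOPE = {
    "list_tasks": "tasks:read",
    "create_task": "tasks:write",
    "merge_pr": "repo:write",
}

def authorize(tool: str, token_scopes: set) -> bool:
    return REQUIRED_SCOPE.get(tool) in token_scopes

reviewer_scopes = {"repo:read", "tasks:read"}    # a code review agent's token
print(authorize("list_tasks", reviewer_scopes))  # True
print(authorize("merge_pr", reviewer_scopes))    # False
```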

Scoped access is critical for multi-tenant environments. When multiple teams share an MCP server, each team's agents can only access their own tenant's data. A marketing team's agent cannot read engineering tasks, and an engineering agent cannot access marketing budgets. The MCP server enforces tenant isolation on every request, using the gateway key to identify the caller and filter results accordingly.

The Gateway MCP Endpoint

When MCP goes through an Agentic Gateway, you get additional benefits: every tool invocation is logged with the actor's identity, cost is tracked per call, and organizational policies are enforced. If an organization has a kill-switch active, MCP calls are blocked alongside LLM calls. The gateway provides a single endpoint that handles both LLM completions and MCP tool calls — one key, one URL, full observability.
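The gateway's role can be sketched as a thin wrapper around every tool call: record who called what and at what cost, and refuse everything while a kill-switch is active. All names here are illustrative; a real gateway enforces this server-side:

```python
# Sketch of gateway mediation: audit every call, block under kill-switch.
audit_log = []
kill_switch = {"active": False}

def gateway_call(actor: str, tool: str, cost_usd: float) -> str:
    if kill_switch["active"]:
        audit_log.append((actor, tool, "blocked"))
        return "blocked"
    audit_log.append((actor, tool, f"ok ${cost_usd:.4f}"))
    return "ok"

print(gateway_call("backend-worker", "create_task", 0.0004))  # ok
kill_switch["active"] = True
print(gateway_call("backend-worker", "create_task", 0.0004))  # blocked
```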

The gateway also enables rate limiting and anomaly detection on MCP calls. If an agent starts making an unusual number of tool calls — say, creating hundreds of tasks in a loop — the gateway can throttle or block the calls before they cause damage. This kind of behavioral monitoring is difficult to implement at the individual tool level but straightforward when all calls flow through a central gateway.
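A minimal version of that throttling is a sliding-window counter per agent. The window size and limit are illustrative knobs, not actual gateway configuration:

```python
import time
from collections import deque

# Sketch of gateway-side behavioral throttling: allow at most `limit`
# tool calls per agent within a sliding time window.
class CallWindow:
    def __init__(self, limit: int, window_s: float):
        self.limit, self.window_s = limit, window_s
        self.calls = deque()

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()          # drop calls outside the window
        if len(self.calls) >= self.limit:
            return False                  # throttle: too many recent calls
        self.calls.append(now)
        return True

w = CallWindow(limit=3, window_s=60.0)
print([w.allow(now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```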

For organizations running multiple tenants or teams, the gateway MCP endpoint provides data isolation out of the box. Each gateway key is scoped to a specific tenant, so tool calls automatically filter results to the caller's data. A marketing team's agent calling list_tasks only sees marketing tasks. An engineering agent sees engineering tasks. The isolation is enforced at the gateway level, so individual tools do not need to implement tenant filtering — they receive pre-filtered requests.

Getting Started with MCP

To connect your IDE to an MCP server, you need two things: a gateway key and the server URL. Add the configuration to your IDE settings, and the tools become available immediately. Start with read-only tools (list tasks, check status) to build familiarity, then move to write operations (create tasks, approve actions) once you are comfortable with the workflow. The protocol is designed to be incremental — you do not need to use all 73 tools on day one.

For teams building their own MCP tools, the protocol makes it straightforward. Define a tool name, write a description that helps the LLM understand when to use it, specify the input schema, and implement the handler. The MCP SDK handles the JSON-RPC transport, error formatting, and schema validation. Most teams can build and deploy a new MCP tool in under an hour.
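The server side of that workflow can be sketched without any SDK: a registry of handlers keyed by tool name, dispatched from an incoming `tools/call` request. A real server would use an MCP SDK for transport and validation; the handler and its names are illustrative:

```python
# Minimal sketch of a tools/call dispatcher: register handlers by name,
# look them up from the JSON-RPC request, and wrap the result.
HANDLERS = {}

def tool(name):
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@tool("create_task")
def create_task(title: str, priority: str = "medium") -> dict:
    return {"task_id": "T-1", "title": title, "priority": priority}

def handle(request: dict) -> dict:
    params = request["params"]
    result = HANDLERS[params["name"]](**params["arguments"])
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
               "params": {"name": "create_task",
                          "arguments": {"title": "Fix login page bug",
                                        "priority": "high"}}})
print(resp["result"]["priority"])  # high
```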

Ready to take control of your AI agents?

Start free with Dobby AI — connect, monitor, and govern agents from any framework.

Get Started Free