Should you build an MCP server for your software?

Matthieu Guigon
  • MCP
  • AI
  • SaaS
  • Architecture
  • API
  • Strategy

The Model Context Protocol (MCP) lets AI agents interact directly with your software through structured tools. For a SaaS or business application, building an MCP server is often more relevant — and cheaper — than developing an in-house AI feature. But you need to get it right: tool granularity, rate limiting, security and appropriate business scope.

In 2026, the question is no longer 'should we integrate AI into our product?' but how. And the default answer — plugging in the OpenAI or Anthropic API to add a chatbot to the interface — isn't always the right one. There's an alternative that few decision-makers know about yet: the Model Context Protocol (MCP). The idea is simple: instead of building an AI feature inside your product, you open your product to AI agents that already exist. Claude, GPT, Copilot, and all the agents that are coming can then interact directly with your software — read data, trigger actions, query your system. No in-house chatbot, no fine-tuning, no prompt engineering to maintain. After implementing MCP servers in production, here's why I believe it's an underestimated strategic lever — and what you need to know before getting started.

MCP: understanding the protocol in 2 minutes

The Model Context Protocol is an open standard created by Anthropic in late 2024. Its role: to define a universal interface between an AI agent (the 'client') and an external service (the 'server'). In practice, an MCP server exposes tools — actions that the agent can call — with a natural language description of what each tool does, what parameters it accepts, and what it returns. The agent reads these descriptions, understands what's available, and decides on its own which tool to call based on the user's request. This is the fundamental difference from a classic REST API. An API is a technical contract: endpoints, HTTP verbs, Swagger documentation. An MCP server is a semantic contract: the agent understands what your service does and intelligently decides how to use it. To draw a parallel: your REST API is your technical back-office. Your MCP server is the front door you open to your users' AI assistants. Both can coexist — and they often do. The MCP server calls your API internally, but exposes an interface designed for LLMs, not for developers.
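Concretely, a tool is just three things: a name, a natural-language description, and a JSON Schema for its parameters. Here is a sketch of the descriptor an MCP server might return from a `tools/list` call. The field names (`name`, `description`, `inputSchema`) follow the MCP specification; the `get_shipments` tool itself and its parameters are hypothetical, for illustration only:

```python
# Sketch of an MCP tool descriptor, as a server might return it from
# tools/list. The top-level field names follow the MCP specification;
# the tool itself (get_shipments) is a hypothetical example.
get_shipments_tool = {
    "name": "get_shipments",
    "description": (
        "List shipments, optionally filtered by status, carrier and date "
        "range. Returns at most `limit` results per page."
    ),
    # Plain JSON Schema: this is what the agent reads to decide
    # which parameters to pass, and how.
    "inputSchema": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                "enum": ["pending", "in_transit", "delivered", "delayed"],
            },
            "carrier": {"type": "string"},
            "from_date": {"type": "string", "format": "date"},
            "to_date": {"type": "string", "format": "date"},
            "limit": {"type": "integer", "default": 20, "maximum": 100},
        },
        "required": [],
    },
}
```

Notice that the description is written for the model, not for a developer: it states the intent of the tool in plain language, which is what lets the agent decide on its own when to call it.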

Why MCP over an in-house AI feature

Why choose MCP over building an AI feature in-house? Because most of the work disappears. An in-house chatbot means picking a model vendor, writing and maintaining prompts, building a conversational interface, and carrying the ongoing cost and complexity of a native LLM integration. An MCP server means none of that: the agents already exist (Claude, GPT, Copilot), and your job is limited to exposing well-designed tools, usually on top of the API you already have. You also avoid betting on a single model: any MCP-compatible agent, today or tomorrow, can use your software through the same server.

The granularity trap: a logistics case study

Building an MCP server is simple in theory. In practice, the number one challenge is tool granularity. I worked with a logistics management software vendor on this topic. The first version of their MCP server was intuitive: a single get_logistics_data tool that returned everything — shipments, warehouse capacity, carrier details, delivery performance, customer data. Result: unusable. When a user asked 'which shipments are delayed this week?', the agent called that tool, received tens of thousands of tokens of data, and the LLM's context window exploded. The response was slow, imprecise, and expensive in tokens. Version 2 changed everything. We split into specialized tools: get_shipments with filters (date, status, carrier), get_warehouse_status for capacity and stock levels, get_carrier_details for carrier information, get_delivery_stats for analytics. Each tool returns only what's relevant, with pagination and filters. The agent picks the right tool based on the question — and often only calls one. The golden rule: one tool = one user intent. If you need to explain 'this tool does X AND Y AND Z', it needs to be split up. Think about how a human uses your software: they don't go to the 'everything page' — they go to the shipments page, or the warehouse page, or the dashboard. Your tools should mirror that logic.
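The v2 tool handlers can be sketched as follows. This is a framework-agnostic illustration of the "one tool = one user intent" rule, with filtering and pagination built in; the data and field names are hypothetical, and a real handler would query your API or database instead of an in-memory list:

```python
# Hypothetical in-memory data standing in for a real shipments API.
SHIPMENTS = [
    {"id": "SHP-001", "status": "delayed", "carrier": "DHL", "eta": "2026-01-12"},
    {"id": "SHP-002", "status": "delivered", "carrier": "UPS", "eta": "2026-01-10"},
    {"id": "SHP-003", "status": "delayed", "carrier": "DHL", "eta": "2026-01-14"},
]

def get_shipments(status=None, carrier=None, limit=20, offset=0):
    """One tool, one intent: answer 'which shipments match these filters?'

    Returns only what is relevant, filtered and paginated, so the agent's
    context window receives a handful of rows instead of the whole dataset.
    """
    rows = [
        s for s in SHIPMENTS
        if (status is None or s["status"] == status)
        and (carrier is None or s["carrier"] == carrier)
    ]
    return {"total": len(rows), "shipments": rows[offset:offset + limit]}
```

Asking "which shipments are delayed?" now costs the agent one call and a few rows of output, instead of the tens of thousands of tokens the monolithic v1 tool returned.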

Security, rate limiting and business boundaries

An MCP server is a gateway into your system. And like any gateway, it needs to be properly locked down. First rule: authentication. Every MCP connection must be tied to an authenticated user with their existing permissions. If a user doesn't have access to financial data in your UI, they shouldn't have access via MCP either. No shared service accounts, no global tokens — the same rules as your API. Second rule: rate limiting. An AI agent can chain dozens of calls in seconds. Without limits, a user can — intentionally or not — overload your system. Implement per-user and per-tool quotas, and circuit breakers if an agent loops. Third rule: functional scope. Don't expose everything. Start with read-only actions — viewing shipments, checking warehouse capacity, reading delivery analytics. Write actions (rerouting a shipment, canceling a dispatch, reassigning a carrier) should be added incrementally, with confirmations and guardrails. An AI agent that cancels 500 shipments because the prompt was ambiguous is a real scenario. Finally, log everything. Every MCP call, every tool invoked, every response returned. It's your safety net for understanding what went wrong — and your dataset for improving tool quality.

MCP isn't for everyone — and that's fine

Should every piece of software have an MCP server? No. MCP is relevant when your users already interact with AI agents in their daily workflow — and that's increasingly the case in tech, logistics, marketing and finance. It's also relevant when your product has an existing API: the MCP server can build on top of it, which dramatically reduces development time. And it's particularly interesting when you're deciding between 'adding AI' and 'doing nothing': MCP is a strategic middle ground that shows you're in the AI race, without the costs and complexity of a native LLM integration. However, if your product handles highly sensitive data (healthcare, legal, defense), the question of access scope is critical and may make MCP unsuitable — at least initially. And if your users aren't in the AI agent ecosystem, the ROI will be low. My approach: start with a minimal read-only MCP server, with 3 to 5 tools covering the main use cases. Measure adoption. Iterate. Add write actions when trust is established. It's exactly the same philosophy as a public API — except your 'developers' are AI agents. If you're wondering whether your product should expose an MCP server — or if you want an outside perspective on the right architecture — let's talk. This is exactly the kind of engagement I handle from strategy to production.
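The "minimal read-only v1" approach can be made explicit in code: an allow-list of the tools you actually expose, and a dispatcher that refuses everything else, write actions included. The tool names mirror the logistics example above; the registry shape and the `dispatch` helper are hypothetical:

```python
# Hypothetical v1 tool registry: 3 read-only tools, nothing else.
# Write actions (cancel, reroute, reassign) are deliberately absent
# until adoption is measured and trust is established.
READ_ONLY_TOOLS = {
    "get_shipments": "List shipments with filters (date, status, carrier)",
    "get_warehouse_status": "Capacity and stock levels per warehouse",
    "get_delivery_stats": "Delivery performance analytics",
}

def dispatch(tool_name: str, handlers: dict):
    """Resolve a tool call, refusing anything outside the allow-list."""
    if tool_name not in READ_ONLY_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not exposed in v1")
    return handlers[tool_name]
```

The point of the allow-list is that it fails closed: a write tool someone wires into `handlers` later still cannot be reached until it is consciously promoted into the registry.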
