MCP: Model Context Protocol
MCP (Model Context Protocol) is the protocol that lets an AI agent, through a shared context, ask what its buddies can do and see whether any of them can help solve a request or task given in a prompt.
For example, a user asks an AI “What is 785378531 times 7777777 divided by 3.33?”
A) Without MCP
It is very likely that ChatGPT or another LLM-based AI model does not have the answer. And it doesn't, because it has never seen it and cannot infer it from everything it has read.
B) With MCP
With the Model Context Protocol (MCP), the AI agent could ask other models or agents if they have information or skills that can help solve the query.
For example, if among the MCP servers there is one that has the “calculator” tool, the flow would be like this:
- The AI agent receives the user’s question.
- The AI agent asks its buddies “What can you do?”
- One of the agents replies “I can do mathematical calculations” and another says “I can search for information on the internet,” etc.
- The AI agent decides that the best way to solve the question is to use the agent that can do mathematical calculations.
- The AI agent sends the question to the mathematical calculations agent.
- The mathematical calculations agent performs the operation and returns the result to the AI agent.
- The AI agent presents the result to the user.
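The steps above can be sketched in a few lines of Python. This is an illustrative toy, not a real MCP client: the registry, tool names, and routing decision are all invented for the example (a real agent would use an MCP SDK and let the LLM choose the tool from the advertised descriptions).

```python
# Toy sketch of the MCP-style flow: discover capabilities, pick a tool,
# dispatch the question, return the result. All names are made up.

def calculator_tool(question: str) -> float:
    """A 'buddy' that only knows how to do arithmetic."""
    # The operands would normally be parsed out of the question;
    # hard-coded here to keep the example short.
    return 785378531 * 7777777 / 3.33

SERVERS = {
    "calculator": {"description": "I can do mathematical calculations",
                   "call": calculator_tool},
    # "web_search": {"description": "I can search the internet", ...}, etc.
}

def answer(question: str) -> float:
    # 1. Ask the buddies what they can do (tool discovery).
    capabilities = {name: s["description"] for name, s in SERVERS.items()}
    # 2. Decide which tool fits the question. Hard-coded routing here;
    #    in practice the LLM makes this decision from the descriptions.
    chosen = "calculator"
    # 3. Send the question to that agent and hand its result back.
    return SERVERS[chosen]["call"](question)

result = answer("What is 785378531 times 7777777 divided by 3.33?")
```

The key idea is the same at any scale: the agent never does the math itself, it only decides who to ask.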
Why is it cool?
MCP is cool because it allows AI agents to do things, not just talk. A standalone ChatGPT cannot buy a plane ticket, make a bank transfer, or calculate a square root. But if it has access to other agents that can do those things, then it can solve more complex problems.
Some key advantages of MCP in my view:
- Specialization: Each agent can specialize in a specific task. It doesn't make sense for an LLM-based AI agent to identify car license plates in a parking lot. Better to use a system that already works well for that.
- Resource savings: A specialized model (a calculator) will be faster and more efficient than an LLM. In this example, you use the CPU and don't need a GPU for basic mathematical calculations.
- Cost savings: Forget about tokens and paying for a model you don't need.
- Less hallucination: A specialized model won't make up an answer. If you ask a math-specialized agent or the Windows calculator, it will give you a correct answer and won't invent anything.
Why isn’t it cool?
MCP isn’t cool because it’s a bit of a makeshift protocol for now.
Some problems are:
- Security: MCP is not a protocol designed for security. If an AI agent can ask other agents, what happens if a malicious agent sneaks into the system? Can it make other agents do things they shouldn’t?
- Auditing: There is no clear mechanism to audit the actions of the agents. If an agent does something wrong, how can we track which agent did it and why?
- Accounting: MCP has no built-in mechanism to account for the number of tokens used in each interaction. If an agent asks another agent a question, how do we know how many tokens were spent in total?
- Error debugging: An AI agent in a banking support system makes 30 calls to “tools” from 5 of its buddies and responds incorrectly. How do we know which agent did something wrong? How can we debug the error? Complicated.
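To make the auditing and accounting gaps concrete, here is the kind of thin wrapper a system ends up building itself today, since the protocol doesn't provide it. Everything here (the log format, the token field, the function names) is a hypothetical sketch, not part of MCP:

```python
# Hypothetical audit trail for tool calls: who called what, when, with
# what payload, at what token cost, and whether it succeeded.
import time

audit_log = []

def audited_call(agent: str, tool, tool_name: str, payload, tokens: int):
    """Run a tool call and record it for later debugging/accounting."""
    entry = {"ts": time.time(), "agent": agent, "tool": tool_name,
             "payload": payload, "tokens": tokens}
    try:
        entry["result"] = tool(payload)
        entry["ok"] = True
    except Exception as exc:
        entry["result"] = repr(exc)
        entry["ok"] = False
    audit_log.append(entry)
    return entry["result"]

# With a log like this, "which of the 30 calls went wrong?" and
# "how many tokens did this conversation burn?" become simple queries:
res = audited_call("support-bot", lambda p: p.upper(), "shout", "hello", tokens=12)
failed = [e for e in audit_log if not e["ok"]]
total_tokens = sum(e["tokens"] for e in audit_log)
```

Until something like this is standardized, every team reinvents it, and logs from different servers don't line up.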
The future
For now, we can use MCPO (we are already doing it) because:
- It allows you to expose any MCP tool as an HTTP server compatible with OpenAPI instantly.
- It is a very simple proxy that makes our MCP Servers accessible through standard OpenAPI. So your tools “just work” with LLM agents and applications that expect OpenAPI servers.
- Easy to secure and audit.
No custom protocol. No glue code. No complications.
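As a taste of how simple it is, putting MCPO in front of an existing MCP server is a one-liner (adapted from the mcpo README; the port, API key, and server command are placeholders you would replace with your own):

```shell
# Expose any MCP server as an OpenAPI-compatible HTTP endpoint.
# "your_mcp_server_command" is a placeholder for a real MCP server.
uvx mcpo --port 8000 --api-key "top-secret" -- your_mcp_server_command

# The server's tools then appear as ordinary REST endpoints, with
# interactive docs auto-generated at http://localhost:8000/docs
```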
We will do a specific post about MCPO soon.
Photo by Fernando Arcos

