You're thinking like a SW engineer. Instead, think like someone who just happens to know a bit of programming. MCP is much, much easier than tool calling, I think.
As an example, I wrote a function in Python that, given a query string, executes a command-line tool and returns the output. To make it an MCP server, I simply added type annotations to the function definition, wrote a nice docstring, and added a decorator.
That's it. And now it works with all providers and all tools that support MCP. No REST APIs, etc. needed. Not tied to a particular agentic tool.
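For the curious, here's roughly what that pattern looks like with the FastMCP helper from the official MCP Python SDK (the import path and decorator are from that SDK, but the server name and the command being wrapped are invented for illustration; the import is wrapped in a try/except so the sketch still runs as a plain Python function without the SDK installed):

```python
import subprocess
import sys

# Hedged sketch: FastMCP comes from the official MCP Python SDK
# ("mcp" on PyPI). If it isn't installed, fall back to a no-op
# decorator so the function below still works as plain Python.
try:
    from mcp.server.fastmcp import FastMCP
    server = FastMCP("query-runner")  # server name is made up
    tool = server.tool()
except ImportError:
    server = None
    tool = lambda f: f

@tool
def run_query(query: str) -> str:
    """Run a command-line tool on the query and return its output."""
    # The type annotations become the tool's input schema, and the
    # docstring becomes its description for the LLM.
    # We shell out to Python itself so the example is portable;
    # in practice this would be ripgrep, curl, whatever.
    result = subprocess.run(
        [sys.executable, "-c", f"print({query!r}.upper())"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# To expose it over MCP, one extra line in __main__:
# if __name__ == "__main__":
#     server.run()
```

The decorated function stays callable as ordinary Python, which makes it easy to test outside any agentic tool.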
Every time I've written a tool, I've ended up with "Wow, was it really that simple?"
As for security: If you write your own tool, the security implications are the same.
There is so much accidental complexity in software because people keep reinventing the wheel. I think it might be interesting to do some research for a book on this topic.
You just described how to write a tool the LLM can use. Not MCP!! MCP is basically a tool that runs in a server, so it can be written in any programming language. Which is also its problem: now each MCP tool requires its own server, with all the complications that come with it, including runtime overhead, security model fragmentation, incompatibility…
> You just described how to write a tool the LLM can use. Not MCP!! MCP is basically a tool that runs in a server so can be written in any programming language.
It's weird you're saying it's not MCP, when this is precisely what I've done to write several MCP servers.
You write a function, wrap it with a decorator, add one more line in __main__, and voila, it's an MCP server.
> now each MCP tool requires its own server with all the complications that come with it, including runtime overhead, security model fragmentation, incompatibility…
You can lump multiple tools into one server. Personally, I think it makes sense to organize them by functionality, though.
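A rough sketch of that organization, again using the FastMCP helper from the MCP Python SDK (the server and tool names here are invented; the import fallback keeps the sketch runnable as plain Python without the SDK installed):

```python
# Hedged sketch: several related tools grouped on one server instance,
# so one process serves them all. FastMCP is from the official MCP
# Python SDK; names are made up for illustration.
try:
    from mcp.server.fastmcp import FastMCP
    server = FastMCP("text-tools")
    tool = server.tool()
except ImportError:  # SDK absent: degrade to plain functions
    server = None
    tool = lambda f: f

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

@tool
def reverse(text: str) -> str:
    """Reverse a string."""
    return text[::-1]

if __name__ == "__main__" and server is not None:
    server.run()  # one process, one server, both tools
```

One server per functional area, rather than one per tool, avoids most of the per-server overhead being objected to above.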
> including runtime overhead, security model fragmentation, incompatibility…
What incompatibility?
Runtime overhead is minimal.
Security: as I said, if you write your own tools, you control it just as you would with old-style tool calling. Beyond that, yes, you're dependent on the wrapper library's vulnerabilities, as well as the MCP client's. Yes, we've introduced one new layer (the wrapper library), but seriously, that's like saying "Oh, you introduced Flask into our flow, that's a security concern!" Eventually, the libraries will be vetted and we'll know which are secure and which aren't.
You're just confused. You can write a tool, or, if your framework supports it, the tool can also be an MCP server. But the LLM cares only about tools. Try to learn the underlying mechanics and you will understand the difference.