Among people who do a lot of agent-assisted software development, there is some skepticism about whether MCP is useful:
A quick experiment makes this clear: try completing a GitHub task with the GitHub MCP, then repeat it with the
gh CLI tool. You’ll almost certainly find the latter uses context far more efficiently and gets you to the intended result more quickly.
This is a fair criticism; in some scenarios an agent does better if we let it go wild with bash (which effectively gives it the ability to write and run code) and a CLI tool. I also agree with Armin’s assertion that we need better ways to compose MCP tool results. But I still think MCP is useful as-is, and I’d like to sketch out why.
MCP is simple
To connect an AI agent to an MCP server, I don’t need to download anything; I just provide a URL. Authentication is handled as part of the connection (more on this later). Tools are annotated with hints indicating whether they are safe to run. My agent doesn’t need to be able to execute code, and it doesn’t even need a filesystem.
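As a rough illustration, here is a minimal sketch of that connection using the MCP Python SDK. The server URL is a placeholder and import paths may differ slightly between SDK versions; the point is that the client only needs a URL, and the tool listing comes back with advisory annotations (the spec defines hints such as readOnlyHint and destructiveHint) that a client can consult before running anything automatically.

```python
# A minimal sketch using the MCP Python SDK ("mcp" package); the server URL
# below is a placeholder, and import paths may vary slightly across versions.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://mcp.example.com/mcp"  # hypothetical remote MCP server


async def main() -> None:
    # Connect over streamable HTTP: no downloads, no filesystem, just a URL.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Tools arrive with annotations (e.g. read-only / destructive hints)
            # that a client can use when deciding what to run automatically.
            listing = await session.list_tools()
            for tool in listing.tools:
                print(tool.name, tool.annotations)


asyncio.run(main())
```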
It’s true that this agent might be less flexible or powerful than one with the ability to run arbitrary code. But that’s a tradeoff, and people are exploring ways to combine MCP with code execution - this is something to keep an eye on in 2026!
Programmers are weird
I’m a programmer who spends a lot of time with coding agents like Claude Code and Codex CLI. You could think of me as a power user driving an agent semi-interactively, and most people discussing MCP are in the same boat.
CLI tools are often a viable alternative to MCP for us, but a big part of that is that we can evaluate whether any given call to bash looks safe. That is not a skill most people have; we can’t expect the average user to audit arbitrary bash commands. But can’t we just sandbox their agents?
Toward Autonomous Agents
Let’s step away from the well-trodden path of Claude Code. Say you’re building an agent that operates autonomously based on untrusted data. To make this more concrete, let’s say it’s an incident investigator agent; when a monitor goes off, it tries to find the root cause using data from your favourite observability provider. How do you give that agent access to your observability data?
If your observability provider has a CLI available, the agent could use that. But using a CLI means:
- Your agent will need access to a filesystem (provisioned with a copy of the CLI)
- Your agent will need a sandbox to stop malicious code execution and resource exhaustion
- You’re opening yourself up to credential exfiltration attacks. The CLI needs credentials to talk to the observability provider; if the agent can execute arbitrary code, it can almost certainly read those credentials.
All of these problems go away if you connect your agent to an MCP server instead. MCP can get a production-ready agent off the ground almost immediately.
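To make that concrete, here is a sketch of what the investigator agent’s tool access could look like over MCP. The server URL, the search_logs tool, and its arguments are hypothetical stand-ins for whatever your observability provider actually exposes; the point is that the agent process never launches a shell and never touches a credentials file, it just speaks MCP over HTTPS.

```python
# A sketch of the investigator's tool access over MCP. The URL, the
# "search_logs" tool, and its arguments are hypothetical stand-ins for
# whatever your observability provider actually exposes.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

OBS_MCP_URL = "https://observability.example.com/mcp"  # hypothetical


async def investigate(alert_id: str) -> None:
    async with streamablehttp_client(OBS_MCP_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # In a real agent the LLM chooses the tool and arguments; one call
            # is hard-coded here to keep the sketch short. Note there is no
            # subprocess, no shell, and no credentials file for the agent to read.
            result = await session.call_tool(
                "search_logs",
                {"query": f"alert:{alert_id}", "window": "30m"},
            )
            for block in result.content:
                if getattr(block, "text", None):
                    print(block.text)


asyncio.run(investigate("alert-1234"))
```

The credentials live with the MCP connection itself (for example, a token negotiated when the session is established), not on a disk the agent can read.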
Putting it all together
MCP is a dead-simple way to give agents access to tools safely, and it works today. For some agents, that simplicity is extremely valuable; for others it is not. As you move away from expert oversight and toward fully automated agents, the case for MCP grows stronger.


