We’ve Been Here Before: APIs, MCPs, and the Next Integration Layer

MCPs have been all the rage lately. Some in tech are calling them the second coming of the internet, while others dismiss them as the McKinseyfication of APIs. My team and I have been thinking about where we land on this spectrum. Like most things in life, the reality is probably somewhere in the middle.
Anthropic’s post does a great job explaining what MCPs are, so I won’t go into the details. At a high level, they are an abstraction layer that allows LLMs to interact with external resources. The natural next question is: doesn’t that just make them APIs? That’s where things get interesting.
APIs were designed as standardized interfaces for software systems to interact with each other, enabling the API-first world we know today. This model worked because any system that wanted to talk to another could build its own connector against a published interface. Millions of APIs exist today just to make systems talk to each other.
However, this breaks down when the system trying to interact (LLMs, in this case) isn’t designed to access APIs. Think of it like interlocking tiles: you shape your tile to fit the one next to it. An entire integrations industry emerged just to build these connectors, and every time one tile changes, the other has to be adjusted. Now imagine an LLM trying to interact with an external system. It has to analyze the opposing tile, figure out the right shape for its own, and construct it on the fly, every single time. That’s tedious, and that’s why MCPs exist. If LLMs are going to access everything online, we need a standardized protocol for these "tiles" and an intermediary, an "interlocking agent", that pieces them together. That’s exactly what an MCP server is.
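To make that concrete, here’s a minimal sketch of an MCP server using the official Python SDK. The server name and the get_weather tool are hypothetical examples, but they show the key idea: anything registered this way advertises its name, description, and typed parameters in a standard shape, so an LLM client never has to construct a connector on the fly.

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The get_weather tool is a hypothetical example: any function registered
# like this is exposed to MCP-capable LLM clients with its name, docstring,
# and typed parameters, so the client doesn't build a bespoke connector.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_weather(city: str) -> str:
    """Return a short weather summary for a city."""
    # A real server would call an actual weather service here.
    return f"It is currently sunny in {city}."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio, the default transport
```

Any MCP-capable client (Claude Desktop, for instance) can discover and call this tool without a line of integration glue on its side.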
Now, some might argue that MCPs should just be APIs. And for now, I think APIs will work better for most real-world use cases. They offer reliability and deterministic behavior, something MCPs can’t guarantee. Not yet, at least. But the real question is about the future.
APIs have been around for decades, and yet it’s still a struggle to find APIs for a large section of the internet. That’s because APIs rely on human intent: they only get built when there’s enough demand. But what happens in a world where LLMs want to access as many systems as possible? Manually building APIs one by one has never been scalable. Like it or not, MCPs, or something very similar, are likely to become the abstraction layer that enables the future LLMs are pointing us toward.
That said, MCPs have the same core problem as AI agents. In theory, they sound great: who wouldn’t want an invisible agent in the cloud handling everything? But for non-trivial use cases, agents need to understand nuance, which is an inherently human skill. The teams that have made agents work have spent months painstakingly refining their behavior. The only way to make them viable today is to narrow their focus so much that there’s a finite amount of nuance to handle.
MCPs face the same challenge. The idea is solid, but how do we get MCPs that can process even slightly complex LLM requests, trigger the right set of actions, and retrieve the right information? My hunch is that we’ll first see targeted MCPs, ones built for a specific, narrow use case with a fixed set of LLM requests. The teams that iterate quickly will be the ones who make these work. And as they refine the system over time, MCPs will start to feel increasingly "magical," unlocking agentic workflows that, for now, only exist in impressive demos.
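One way to picture such a targeted MCP is to constrain the tool’s inputs so the model can only make requests the server knows how to handle. A hypothetical sketch, again using the Python SDK (generate_report and the report names are made up for illustration):

```python
# Sketch of a narrowly targeted MCP tool. The Literal type constrains the
# LLM to a fixed menu of requests, so there is a finite amount of nuance
# for the server to handle. generate_report and the report names are
# hypothetical examples, not any real product's API.
from typing import Literal

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reporting-demo")

@mcp.tool()
def generate_report(
    kind: Literal["daily_sales", "weekly_signups", "monthly_churn"],
) -> str:
    """Generate one of a fixed set of business reports."""
    # Each report type can be hand-tuned and tested in isolation;
    # there is no open-ended request for the server to interpret.
    return f"Report generated: {kind}"

if __name__ == "__main__":
    mcp.run()
```

A fixed request vocabulary like this is what lets a team iterate quickly: every possible LLM request maps to a code path someone has actually refined.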
This blog is written by Abhishek Saikia, Co-founder at KushoAI. We're building an AI agent that tests software and finds bugs for you.