Tools, Permissions, and MCP: How a Coding Agent Becomes Real
People still talk about coding agents as if the model is the whole story.
That is backwards.
A model becomes a coding agent only when it gains a controlled way to act on the world around it.
That means three things have to come together:
- a tool surface
- a permission system
- an integration layer for external capabilities
Claw Code is a good case study because all three show up clearly in the public parity repo.
Series Map
This article is part of Inside the AI Coding Agent Stack:
- What Claw Code Reveals About AI Coding Agent Architecture
- Why AI Coding Agents Use Rust and Python Together
- Tools, Permissions, and MCP: How a Coding Agent Becomes Real
- Hooks, Plugins, and Sessions in AI Coding Agents
- Clean-Room Rewrites and Parity Audits for AI Agent Teams
A Tool Is a Promise
In a coding agent, a tool is not just a function call.
It is a promise that the model can do something concrete and repeatable.
Examples include:
- reading files
- writing or editing files
- glob and grep search
- shell execution
- web fetch and web search
- sub-agent delegation
- notebook editing
- configuration inspection
Once a model has access to those capabilities, the product changes. The user is no longer asking only for ideas. The user is asking for outcomes.
That is why tool design matters.
The tool layer defines the agent's real operating surface.
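To make the "tool is a promise" idea concrete, here is a minimal sketch in Python. The `ToolSpec` type and the registry are hypothetical illustrations, not Claw Code's actual API: the point is that each tool declares what it does and whether it mutates state, so the agent's operating surface is explicit data rather than implicit behavior.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ToolSpec:
    """A tool is a promise: a named, repeatable capability with a declared effect."""
    name: str
    description: str
    mutates: bool                # does this tool change the world?
    handler: Callable[..., str]  # the concrete implementation

# A tiny registry standing in for the agent's real tool surface.
REGISTRY: dict[str, ToolSpec] = {}

def register(tool: ToolSpec) -> None:
    REGISTRY[tool.name] = tool

def invoke(name: str, **kwargs) -> str:
    tool = REGISTRY[name]        # unknown tools fail loudly, by design
    return tool.handler(**kwargs)

# Two example tools mirroring the list above.
register(ToolSpec("read_file", "Read a file from the workspace", mutates=False,
                  handler=lambda path: f"<contents of {path}>"))
register(ToolSpec("write_file", "Write a file in the workspace", mutates=True,
                  handler=lambda path, text: f"wrote {len(text)} bytes to {path}"))
```

Because the `mutates` flag lives on the declaration, a permission layer can reason about tools without inspecting their implementations.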
Why Coarse Tool Design Breaks Down
A common early-stage mistake is collapsing everything into one giant execution surface:
- one shell tool for everything
- one vague file tool
- one broad "integration" layer
That looks simple until users need control.
Then every problem arrives at once:
- permissions become unreadable
- audit trails get muddy
- prompts become noisy
- errors are harder to classify
- users lose confidence in what the agent will actually do
The better pattern is compositional:
- separate read from write
- separate workspace actions from network actions
- separate local tools from external tools
- separate built-in tools from extension-backed tools
That is the direction visible in Claw Code's runtime and tool organization, and it is the direction I expect more serious coding agents to follow.
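The compositional split above can be expressed as explicit capability axes. This is an illustrative sketch, with axis and tool names of my own invention rather than anything from the parity repo: each tool carries flags for the dimensions that matter to permissions and audit, and the risk buckets fall out mechanically instead of being hand-assigned.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    name: str
    writes: bool    # read vs write
    network: bool   # workspace action vs network action
    external: bool  # built-in vs extension/MCP-backed

TOOLS = [
    Capability("read_file",   writes=False, network=False, external=False),
    Capability("edit_file",   writes=True,  network=False, external=False),
    Capability("web_fetch",   writes=False, network=True,  external=False),
    Capability("jira_update", writes=True,  network=True,  external=True),  # hypothetical extension tool
]

def bucket(tool: Capability) -> str:
    """Derive a human-readable risk bucket from the axes."""
    if tool.external:
        return "external"
    if tool.network:
        return "network"
    return "workspace-write" if tool.writes else "workspace-read"
```

When a new tool arrives, it slots into the right bucket automatically, which keeps the permission story readable as the surface grows.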
Permissions Are Not an Afterthought
The second big lesson is that permission systems are part of the user experience.
That may sound obvious, but many AI products still act as if security is something you can bolt on later.
You cannot.
In a coding agent, permission design determines whether the user feels the system is:
- predictable
- inspectable
- reversible
- safe enough to trust
The public Claw Code materials make this concrete. The parity repo includes explicit permission modes such as read-only, workspace-write, and danger-full-access. That naming matters. It lets the user reason about the system quickly.
Good permission design does not eliminate power.
It makes power legible.
That is a very different goal.
What Good Permission Design Looks Like
The best permission systems for coding agents tend to share a few qualities:
1. Capability maps to permission level
Users should be able to tell, at a glance, why a tool sits in a certain risk bucket.
2. Defaults are understandable
A system that defaults to a surprising mode creates distrust before the first action even runs.
3. Escalation is visible
If a task needs more power than the current mode allows, the path to escalation should be explicit.
4. Auditability is preserved
Users need to know what was run, what was blocked, and what changed.
This is where coding agents become meaningfully different from generic chat tools. In chat, a bad answer is annoying. In an agent, a bad action can be expensive.
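The four qualities above can be sketched in a few lines. The mode names below are the ones the parity repo exposes (read-only, workspace-write, danger-full-access); the enforcement logic and action classes are my simplification, not the real policy engine.

```python
from enum import Enum

class Mode(Enum):
    READ_ONLY = "read-only"
    WORKSPACE_WRITE = "workspace-write"
    DANGER_FULL_ACCESS = "danger-full-access"

# Which action classes each mode allows (a sketch; the real policy is richer).
ALLOWED = {
    Mode.READ_ONLY:          {"read"},
    Mode.WORKSPACE_WRITE:    {"read", "write"},
    Mode.DANGER_FULL_ACCESS: {"read", "write", "exec", "network"},
}

AUDIT_LOG: list[tuple[str, str, str]] = []  # (action, target, verdict)

def check(mode: Mode, action: str, target: str) -> bool:
    """Gate an action, record the verdict, and make escalation explicit."""
    allowed = action in ALLOWED[mode]
    AUDIT_LOG.append((action, target, "allowed" if allowed else "blocked"))
    if not allowed:
        # Visible escalation path: tell the user exactly what to change.
        print(f"blocked: '{action}' on {target}; re-run in a mode that permits '{action}'")
    return allowed
```

Note how all four qualities show up: capability maps to mode, the defaults are legible from `ALLOWED`, escalation is spelled out in the block message, and every verdict lands in the audit log.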
MCP Changes the Expansion Story
Then there is MCP.
MCP matters because it gives coding agents a cleaner way to expand beyond their built-in tools.
Instead of hard-coding every external capability into the product, the agent can attach to MCP servers and gain access to:
- additional tools
- external resources
- remote services
- structured data sources
That turns the architecture from a closed toolbox into a capability bus.
If you want the protocol background, start with our MCP guide. The shorter version is this:
MCP lets agent builders widen the environment without reinventing the integration model every time.
That is strategically important because the long-term value of a coding agent is not just how it edits files locally. It is how well it connects local work to the rest of a team's systems.
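Abstractly, attaching an MCP server looks like merging an external tool list into the same registry the built-ins live in. The sketch below fakes the server side with a plain function and invented tool names; a real client would speak the protocol, but the architectural point, one bus with provenance on every capability, survives the simplification.

```python
# Built-in tools ship with the agent.
BUILTINS = {"read_file", "edit_file", "grep", "shell"}

def discover_mcp_tools(server_name: str) -> set[str]:
    """Stand-in for a real MCP tool-listing call; servers and tools here are invented."""
    fake_servers = {
        "issue-tracker": {"create_ticket", "update_ticket"},
        "docs": {"search_docs"},
    }
    return fake_servers.get(server_name, set())

def build_capability_bus(servers: list[str]) -> dict[str, str]:
    """One flat tool namespace, but every entry remembers where it came from,
    so trust boundaries stay visible."""
    bus = {name: "builtin" for name in BUILTINS}
    for server in servers:
        for tool in discover_mcp_tools(server):
            bus[tool] = f"mcp:{server}"  # provenance travels with the capability
    return bus
```

Keeping provenance on each entry is what lets the permission layer treat `builtin` and `mcp:*` capabilities differently without splitting the toolbox in two.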
Why These Three Layers Must Be Designed Together
This is the part people often miss.
Tools, permissions, and MCP are not three separate features.
They are one design problem.
If you add tools without clean permissions, the system feels reckless.
If you add permissions without a rich tool model, the system feels cramped.
If you add MCP without both, the system becomes a sprawling integration surface with unclear trust boundaries.
The right mental model looks more like this:
```text
Tool surface      -> defines what the agent can do
Permission policy -> defines when, and under what trust level, it may do it
MCP layer         -> defines how the capability surface can expand over time
```
That triad is what turns "LLM with function calling" into "usable coding agent."
What Builders Should Borrow
If I were designing a new coding agent today, I would borrow the following lessons from this pattern:
- Treat tools as product primitives, not hidden implementation details.
- Give permission modes human-readable names.
- Keep local and remote capability surfaces easy to distinguish.
- Use MCP to standardize extension, not to excuse architectural sloppiness.
- Make the trust boundary visible in the interface.
That last point is worth repeating.
Users trust systems they can inspect.
The most powerful agent in the world still loses if it feels opaque.
Why This Matters for the Market
This design lens also helps explain why AI coding tools are diverging.
Some products optimize for low-friction suggestion surfaces inside the editor.
Others are becoming full operating environments with sessions, tools, permissions, and extensibility.
Both categories can win.
But only the second category is really playing the "coding agent" game in the deeper sense.
That is one reason I think the market conversation is gradually moving away from pure benchmark talk. The question is shifting from "Which model is smartest?" to "Which environment lets intelligence act safely and usefully?"
That is a much harder problem.
And a much more defensible one.
Final Take
When people say a coding agent feels "real," what they usually mean is not that the prose got smarter.
They mean the system can:
- take action
- respect boundaries
- connect to the outside world
- stay understandable while doing all three
That is a tools problem.
A permissions problem.
And an MCP problem.
Claw Code is worth paying attention to because it makes that truth easy to see.
Explore the Full Series
For the full reading path, visit the AI Coding Agent Stack topic hub. It brings this series together with related coverage on MCP, developer tooling, and production-minded agent design.
Read Next
- Hooks, Plugins, and Sessions in AI Coding Agents
- What Claw Code Reveals About AI Coding Agent Architecture
- MCP Protocol Guide 2026