
Why AI Agents Need a Graph

AI agents can write code fast. But they can’t see where it fits. We’re building the graph that gives them spatial awareness.

AI coding agents are getting remarkably good at writing components. Give one a prompt and it’ll produce a pixel-perfect button, a form with validation, a modal with transitions. The code is clean. The output is fast.

But there’s a problem no one is talking about.

The spatial awareness gap

These agents have no idea where their work goes.

An AI can write a checkout button. But it doesn’t know that the button sits on a product detail screen, which is reached from search results or a notification deep link, and that clicking it should route to a three-step checkout flow — unless the user is unauthenticated, in which case it redirects to login first.

That’s not a coding problem. It’s a context problem. The agent can’t see the graph.
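What would that graph even look like? As a thought experiment, here is one minimal way to model it in TypeScript: screens as nodes, transitions as edges, and guards on edges for conditions like authentication. Everything below (the `Screen` and `Transition` shapes, the screen ids) is illustrative, not our actual schema.

```typescript
// Illustrative only: a hypothetical shape for an experience graph.
interface Screen {
  id: string;           // e.g. "ProductDetail"
  states?: string[];    // UI states this screen can be in
}

interface Transition {
  from: string;         // source screen id
  to: string;           // destination screen id
  trigger: string;      // what the user does to move
  guard?: string;       // condition checked before the transition
  redirectTo?: string;  // where a failed guard sends the user instead
}

// The context around the checkout button from the example above:
const screens: Screen[] = [
  { id: "SearchResults" },
  { id: "Notification" },
  { id: "ProductDetail", states: ["loading", "loaded", "error"] },
  { id: "Login" },
  { id: "Checkout" },
];

const transitions: Transition[] = [
  { from: "SearchResults", to: "ProductDetail", trigger: "tap result" },
  { from: "Notification", to: "ProductDetail", trigger: "open deep link" },
  {
    from: "ProductDetail",
    to: "Checkout",
    trigger: "tap checkout button",
    guard: "isAuthenticated",
    redirectTo: "Login",
  },
];
```

An agent that writes the button while holding this structure knows the deep-link entry point exists, and knows the unauthenticated redirect is already accounted for, before a human reviewer has to point it out.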

Today’s AI tools — Cursor, GitHub Copilot, Windsurf — operate on files. They read code, modify code, and generate code. But code is a flat list of parts. It doesn’t tell you how those parts connect into the experience a user actually navigates. It’s like giving someone a box of puzzle pieces without showing them the picture on the box.

What changes when agents can see the graph

Imagine a different world:

  • An AI agent receives a task: “Add a referral flow after checkout confirmation.”
  • Instead of guessing where Confirmation lives, it queries the Natural Graph: “What’s the path to the checkout confirmation screen?”
  • It gets back: App → Cart → Checkout → Confirmation
  • It sees what screens exist around that node, what states are possible, and what transitions already connect to it
  • It writes the feature in the right place, wired into the right flow, the first time
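To make the query step concrete: a path lookup like that is plain graph traversal. Below is a minimal sketch, assuming the graph is available as a simple adjacency list; `findPath` and the screen ids are illustrative, not a real Natural Graph API.

```typescript
// A hypothetical adjacency list for part of an app's experience graph.
const graph: Record<string, string[]> = {
  App: ["Cart", "SearchResults"],
  Cart: ["Checkout"],
  SearchResults: ["ProductDetail"],
  ProductDetail: ["Checkout"],
  Checkout: ["Confirmation"],
  Confirmation: [],
};

// Breadth-first search: returns the shortest screen-to-screen path, if any.
function findPath(from: string, to: string): string[] | null {
  const queue: string[][] = [[from]];
  const visited = new Set<string>([from]);
  while (queue.length > 0) {
    const path = queue.shift()!;
    const current = path[path.length - 1];
    if (current === to) return path;
    for (const next of graph[current] ?? []) {
      if (!visited.has(next)) {
        visited.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null; // the target screen is unreachable from `from`
}

console.log(findPath("App", "Confirmation")?.join(" → "));
// App → Cart → Checkout → Confirmation
```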

That’s not science fiction. That’s what happens when AI agents have spatial awareness of the product they’re modifying.

From code-generator to product-builder

The shift we’re betting on is subtle but profound:

| Code-Generator | Product-Builder |
| --- | --- |
| Writes components in isolation | Understands where components live in the experience |
| Needs detailed instructions for every edge case | Discovers edge cases by reading the graph |
| Produces code that compiles | Produces code that fits |
| Requires human review for context | Brings its own context to the review |

The difference isn’t intelligence — today’s models are plenty smart. The difference is input. When the only input is source code, the output is source code. When the input includes the experience graph, the output is a product decision.

Why MCP matters

Model Context Protocol (MCP) is the emerging standard for giving AI agents structured access to external tools and data. Cursor, Windsurf, and other agentic coding tools already support MCP servers. It’s how agents will move beyond “read the file, edit the file” into “understand the system, act on the system.”

A Natural Graph exposed as an MCP server becomes something any compatible agent can query:

  • “What screens can a logged-out user reach?”
  • “Show me every flow that touches the payment form.”
  • “What breaks if I remove the email verification step?”

These aren’t hypothetical questions. They’re the questions every senior engineer asks during code review. The difference is that today, answering them requires holding the entire product in your head. With a graph exposed via MCP, the answer is a lookup — for humans and AI agents alike.
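What does “exposed via MCP” look like in practice? Here is a minimal sketch using the official MCP TypeScript SDK, registering one graph query as a tool that any compatible agent can call. The server name, tool name, and the stubbed `findPath` are hypothetical, not our actual implementation.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stub standing in for a real graph query (see the BFS sketch earlier).
const findPath = (from: string, to: string): string[] | null =>
  from === "App" && to === "Confirmation"
    ? ["App", "Cart", "Checkout", "Confirmation"]
    : null;

const server = new McpServer({ name: "experience-graph", version: "0.1.0" });

// Expose one graph query as an MCP tool. A connected agent discovers this
// tool automatically and can call it instead of guessing at file structure.
server.tool(
  "get_path_to_screen",
  { screen: z.string().describe("Target screen id, e.g. 'Confirmation'") },
  async ({ screen }) => {
    const path = findPath("App", screen);
    return {
      content: [
        {
          type: "text",
          text: path ? path.join(" → ") : `No path from App to ${screen}`,
        },
      ],
    };
  }
);

// Agents like Cursor or Claude connect over stdio and list the tools.
await server.connect(new StdioServerTransport());
```

Once a server like that is registered, “what’s the path to the confirmation screen?” stops being a guess and becomes a tool call.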

And it’s not just for engineers. A PM can ask the same questions in ChatGPT or Claude to understand the current state of the product. A designer can validate whether the shipped experience matches their intent. A founder can explore the full scope of their product without reading a single line of code. MCP makes the graph accessible to every role, not just the people writing code.

What we’re building

We’re building the graph layer that makes this possible.

We won’t go deep on implementation — we’re in early access and moving fast. But the direction is clear: your application has a shape, and that shape should be queryable by both humans and machines. We’re building the context layer that extracts that shape, keeps it in sync, and exposes it via MCP so any agentic coding tool can use it.

When AI agents can see the graph, they stop being fast typists and start being collaborators who understand what they’re building.

The future isn’t AI writing code

It’s AI navigating a meaningful graph.

Code is the medium. The graph is the meaning. The agents that build the best products will be the ones that can see both.

We’re building that visibility at Natural Link.


Want to see how graph-aware AI development works? Request access — it’s a 20-minute conversation. Or read our first post on why we’re viewing software the wrong way.
