What Agents Cannot Know: The Structural Gap in LLM-Assisted Development

Agents can read files, run tools, and reason over context. But they can’t know, in a formal sense, the structure of the system they’re editing. They have no built-in notion of “every state has an exit” or “these two flags are mutually exclusive”; they infer structure from text and code patterns. So there’s a structural gap: the agent can implement a feature, but it can’t reliably verify that the result is consistent with the rest of the system. It doesn’t know what it doesn’t know.
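To make the gap concrete, here is a minimal sketch of what those two invariants look like once someone writes them down explicitly. Every name in it (TRANSITIONS, EXCLUSIVE_FLAGS, the states themselves) is hypothetical, invented purely for illustration:

```python
# Two structural invariants, made explicit as checks over a toy system.
# All names here are illustrative, not from any real codebase.

TRANSITIONS = {
    "idle":    ["running"],
    "running": ["done", "failed"],
    "done":    [],          # terminal state: allowed to have no exit
    "failed":  ["idle"],
}
TERMINAL = {"done"}

EXCLUSIVE_FLAGS = [("--dry-run", "--force")]

def states_without_exit(transitions, terminal):
    """Invariant 1: every non-terminal state must have an outgoing transition."""
    return [s for s, outs in transitions.items()
            if s not in terminal and not outs]

def conflicting_flags(flags, exclusive):
    """Invariant 2: return mutually exclusive flag pairs that are both set."""
    return [(a, b) for a, b in exclusive if a in flags and b in flags]

print(states_without_exit(TRANSITIONS, TERMINAL))            # []
print(conflicting_flags({"--dry-run", "--force"}, EXCLUSIVE_FLAGS))
```

An agent editing this system by pattern-matching has no guarantee it preserves either property; the checks do.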

That gap shows up when the agent adds a branch and misses the error path, adds a flag that conflicts with an existing one, or leaves a resource open on one path. The agent “thinks” it’s done because the code compiles and maybe one test passes; it doesn’t see the missing transition or the unreachable code. The agent cannot know the full set of structural truths about the codebase. It can only approximate them from what it read.
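Both failure modes above fall out of one cheap structural check: a breadth-first walk over an (invented, illustrative) transition table finds the state that was added but never given an exit, and the state nothing was wired into:

```python
from collections import deque

# Toy machine where an agent added "paused" but forgot the way back out,
# and never wired anything into "archived". Purely illustrative.
TRANSITIONS = {
    "idle":     ["running"],
    "running":  ["paused", "done"],
    "paused":   [],            # stuck: no exit, and not a terminal state
    "done":     [],            # terminal, so having no exit is fine
    "archived": ["idle"],      # unreachable from the start state
}

def reachable(transitions, start):
    """States reachable from `start` via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in transitions[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreachable = set(TRANSITIONS) - reachable(TRANSITIONS, "idle")
print(unreachable)  # {'archived'}
```

The agent that introduced “paused” saw only the diff it wrote; the walk sees the whole graph.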

What would close the gap? Something that does have a formal view: a spec, a state machine, or a checker that reasons over structure. The agent proposes a change; the checker says “this introduces a stuck state” or “this flag can conflict with X.” The agent (or the user) then fixes it. So the agent doesn’t have to “know” everything — it has to work with something that does. That’s the role of oracles, linters, and structural checks in an agentic workflow.
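The propose-then-check loop can be sketched in a few lines. Everything here is illustrative (the checker knows a single invariant, and apply_if_sound is an invented name); the point is the shape: proposals are accepted or rejected by something that does have the formal view:

```python
# Hedged sketch of an oracle gating an agent's edits. All names invented.

def check(transitions, terminal=frozenset({"done"})):
    """Return an error message if any non-terminal state has no exit."""
    stuck = [s for s, outs in transitions.items()
             if s not in terminal and not outs]
    return f"stuck state(s): {stuck}" if stuck else None

def apply_if_sound(transitions, edit):
    """`edit` maps state -> new outgoing list; reject unsound proposals."""
    proposed = {**transitions, **edit}
    error = check(proposed)
    return (transitions, error) if error else (proposed, None)

machine = {"idle": ["running"], "running": ["done"], "done": []}

# Proposal 1: add "paused" with no way out -> the checker vetoes it.
machine, err = apply_if_sound(
    machine, {"running": ["paused", "done"], "paused": []})
print(err)   # stuck state(s): ['paused']

# Proposal 2 (the fix): give "paused" an exit -> accepted.
machine, err = apply_if_sound(
    machine, {"running": ["paused", "done"], "paused": ["running"]})
print(err)   # None
```

The agent never needed to “know” the invariant; it only needed to route its edits through a component that enforces it.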

Until that’s standard, the human stays in the loop for anything structural. The agent can draft and even refactor, but the human (or an automated checker) verifies that the design is still coherent. The structural gap is the main reason we don’t fully trust agent output for critical systems.

Expect more integration of formal or structural tools with agents, so that “what agents cannot know” is supplied by another component that can.

nJoy 😉
