The Postmodern Compiler
Who Is Debugging Your Compiler Output?
In 2024, the answer was: a human, in a terminal, reading error messages.
In 2026, the answer is increasingly: an AI agent, in a context window, reading files.
These two audiences want fundamentally different things.
A human wants beautiful diagnostics. Colorized output. Squiggly underlines pointing at the exact character. “Did you mean…?” suggestions. Rust’s compiler is legendary for this — and it took hundreds of thousands of lines of code to build.
An AI wants data. A string that says auth.kz:65 positioned next to the code it’s looking at. No colors (it can’t see them). No squiggly lines (it can’t parse them from terminal output reliably). No “did you mean” (it can figure that out itself). Just: where did this generated code come from?
Koru is built AI-first. The compiler’s primary user is an AI agent. And that changes everything about how you design error reporting.
The Modernist Trap
When you build a compiler, there’s an almost irresistible urge to own the entire stack. Parse the source. Build the AST. Type-check everything. Emit errors with beautiful formatting. Generate code. Optimize it. Link it. Ship it.
This is the modernist compiler — self-contained, self-sufficient, a cathedral of engineering. GCC does this. Clang does this. Rust’s compiler does this with extraordinary polish.
It’s impressive. It’s also enormous. A significant portion of any mature compiler is the error reporting and diagnostic machinery — the infrastructure that maps from internal compiler state back to the original source code, formats it prettily, and presents it to a human.
Koru compiles to Zig. The Zig compiler already does type checking. Already catches shadowed variables. Already verifies correctness of the generated code. Already reports errors with line numbers.
The traditional instinct says: intercept those Zig errors, map them back to Koru source, present them in Koru terms with beautiful formatting.
But for whom? The AI agent reading the output doesn’t need the errors reformatted. It needs to know which Koru source line produced the generated Zig that Zig is complaining about. That’s a much simpler problem.
The Gap
There was a real problem, though. The Zig errors pointed at the generated code, not the Koru source. When Zig said “type mismatch at line 152”, you’d open the generated .zig file and see:
```zig
const result_5 = main_module.log_event.handler(.{ .msg = f.reason });
```

Line 152 of what? Which flow? Which branch? Which Koru file?
For an AI agent, this meant reverse-engineering the generated code — counting nested switch arms, matching handler names back to event declarations, figuring out which continuation produced result_5. That costs tokens, introduces errors, and burns time on something that should be instant.
The information existed. Every AST node in the Koru compiler carries a source location:
```zig
// FOUNDATIONAL: Every item knows where it came from
location: errors.SourceLocation = .{ .file = "generated", .line = 0, .column = 0 },
```

The parser fills in the real file, line, and column. The emitter threads the location through its context. But nobody was writing it to the output.
30 Lines
So we wrote it to the output. Three comment markers:
Flow markers — before each generated function:

```zig
// >>> FLOW: auth.kz:58 ~step1()
pub fn flow0() void {
```

Branch markers — before each switch arm:

```zig
// >>> BRANCH: auth.kz:60 | ok s1 |>
.ok => |s1| {
```

Proc markers — inside each handler:

```zig
pub fn handler(__koru_event_input: Input) Output {
    // >>> PROC: step [auth.kz:31]
```

Comments in the generated Zig. Zero runtime cost — the Zig compiler strips them. Zero complexity — no source map tables, no error interception, no diagnostic reformatting.
This coverage extends to subflows — code generated through comptime transforms like ~if, ~for, and ~capture. Every emission path carries markers, so source location isn’t lost when the AST passes through a transform layer.
The AI agent sees a Zig error at line 152. It scans two lines up and finds `// >>> BRANCH: auth.kz:65 | failed f |>`. It opens auth.kz, goes to line 65. Done. No reverse-engineering. No guessing.
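That upward scan is trivial to mechanize. Here is a minimal agent-side sketch in Python. It is not part of the Koru toolchain; `nearest_marker` and the sample `generated` lines are hypothetical, and it assumes only the marker format shown above:

```python
import re

# Any of the three marker kinds: FLOW, BRANCH, or PROC.
MARKER = re.compile(r"//\s*>>>\s*(FLOW|BRANCH|PROC):\s*(.*)")
# A Koru source location, e.g. "auth.kz:65" (also matches inside
# the bracketed form PROC markers use, e.g. "[auth.kz:31]").
LOC = re.compile(r"([\w./-]+\.kz):(\d+)")

def nearest_marker(zig_lines, error_line):
    """Scan upward from a 1-based Zig error line to the closest marker
    and return the Koru (file, line) it points at, or None."""
    for i in range(error_line - 1, -1, -1):
        m = MARKER.search(zig_lines[i])
        if m:
            loc = LOC.search(m.group(2))
            if loc:
                return loc.group(1), int(loc.group(2))
    return None

# Hypothetical slice of generated output, mirroring the examples above.
generated = [
    "// >>> FLOW: auth.kz:58 ~step1()",
    "pub fn flow0() void {",
    "// >>> BRANCH: auth.kz:65 | failed f |>",
    ".failed => |f| {",
    "const result_5 = main_module.log_event.handler(.{ .msg = f.reason });",
]
print(nearest_marker(generated, 5))  # -> ('auth.kz', 65)
```

A Zig error on the `result_5` line resolves to auth.kz:65 in one linear scan, with no knowledge of the surrounding nesting.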
What This Looks Like at Scale
Here’s a real 4-level-deep error handling flow:
```koru
~step1()
| ok s1 |> step2(value: s1.value)
| ok s2 |> step3(value: s2.value)
| ok s3 |> step4(value: s3.value)
| ok _ |> log(msg: "Success!")
| failed f |> log(msg: f.reason)
| failed f |> log(msg: f.reason)
| failed f |> log(msg: f.reason)
| failed f |> log(msg: f.reason)
```

The generated Zig is 60 lines of nested switches. Before the markers, an AI agent would need to parse the entire nesting structure, count result variables, and infer which branch it was looking at. After:
```zig
// >>> FLOW: input.kz:59 ~step1()
pub fn flow0() void {
    const result_0 = main_module.step1_event.handler(.{ });
    switch (result_0) {
        // >>> BRANCH: input.kz:60 | ok s1 |>
        .ok => |s1| {
            const result_1 = main_module.step2_event.handler(.{ .value = s1.value });
            switch (result_1) {
                // >>> BRANCH: input.kz:61 | ok s2 |>
                .ok => |s2| {
                    // ... every level marked ...
                },
                // >>> BRANCH: input.kz:69 | failed f |>
                .failed => |f| {
```

Every branch is self-documenting. The AI doesn’t need to understand the nesting — it just reads the nearest comment.
Design for Data, Not Display
This is the AI-first principle at work: design for data, not display.
A human-first compiler invests in:
- Colorized terminal output
- Squiggly underline positioning
- “Did you mean…?” suggestions
- Rich error explanations with code snippets
- LSP integration for IDE squigglies
An AI-first compiler invests in:
- Source locations as structured data, positioned where they’re needed
- Greppable markers with a consistent format (`// >>> TYPE: file:line`)
- The actual Koru syntax echoed in the comment (`| ok s1 |>`)
- Machine-readable breadcrumbs that survive in any context (file read, grep output, error log)
The `// >>> BRANCH: auth.kz:60 | ok s1 |>` marker works whether the AI reads the full file, greps for `>>>`, or encounters it in a Zig error log. It’s data that survives any extraction method.
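As a sketch of how little machinery that extraction needs, here is a hypothetical Python parser for FLOW- and BRANCH-style marker lines. The names `PATTERN` and `parse_marker` are illustrative, not part of Koru, and PROC markers (which use a bracketed location) would need a second pattern:

```python
import re

# "// >>> TYPE: file:line <echoed Koru syntax>" — the FLOW/BRANCH shape.
PATTERN = re.compile(r"//\s*>>>\s*(\w+):\s*([\w./-]+):(\d+)\s*(.*)")

def parse_marker(line):
    """Turn one marker line into structured data, or None if it isn't one."""
    m = PATTERN.search(line)
    if not m:
        return None
    kind, path, lineno, syntax = m.groups()
    return {"kind": kind, "file": path, "line": int(lineno), "syntax": syntax}

print(parse_marker("// >>> BRANCH: auth.kz:60 | ok s1 |>"))
# {'kind': 'BRANCH', 'file': 'auth.kz', 'line': 60, 'syntax': '| ok s1 |>'}
```

One regular expression recovers the marker kind, the Koru location, and the echoed syntax, whether the line came from a file read, a grep hit, or an error log.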
The Postmodern Compiler
This philosophy extends beyond error reporting:
- Type checking? Zig does it. Koru doesn’t reimplement shadowing detection or type inference. When Zig catches a type error, the source markers tell the AI where to look in the Koru source.
- Optimization? Zig and LLVM do it. Koru focuses on structural optimizations (dead-stripping, purity analysis) that only it can see.
- Build system? Zig has one. Koru generates `build.zig` files.
- Package management? Use npm. Or cargo. Or pip. Koru doesn’t have a package registry — it has `std.package:requires.*` events that declare what you need and let your ecosystem’s existing package manager fetch it. Why build a registry when every language already has one?
- Testing? Zig has `zig test`. Koru’s test framework compiles to Zig test blocks.
At every layer: does this already exist? If yes, use it. If there’s a gap, bridge it with the minimum viable intervention — data, not display.
We call this the postmodern compiler. Not because it’s ironic or self-referential, but because it refuses to rebuild what already exists. It composes. It reuses. It bridges gaps with data.
What We Didn’t Build
This is the most important part:
- No source map data structure. No JSON sidecar files. Comments in the output are the source map.
- No error interception layer. No code that parses Zig’s stderr and reformats it. The markers in the generated code are sufficient.
- No duplicate type checker. No reimplementation of Zig’s type system. Zig catches the error, the markers point back to the Koru source.
- No diagnostic rendering engine. No terminal formatting. The AI doesn’t need it. Humans reading the generated Zig get the markers too — they’re useful for everyone, just not pretty for anyone.
Each of these would have been hundreds or thousands of lines. Ongoing maintenance. New failure modes.
Instead: 30 lines. Three comment markers. Zero runtime cost. The AI agent that debugs Koru programs is dramatically more effective, and the compiler stays small.
The Uncomfortable Question
If your compiler’s primary debugger is an AI — and increasingly, it is — how much of your diagnostic infrastructure is serving a human audience that’s shrinking?
We’re not saying beautiful error messages are bad. Rust’s diagnostics are a genuine contribution to developer experience. But they’re a massive investment, and they serve a specific user: a human reading a terminal.
If you’re building a new language in 2026, your most frequent user might be an AI agent that wants file:line next to the generated code. Not a reformatted error message. Not a squiggly underline. Just data, in the right place.
30 lines.
Koru is an event-continuation language that compiles to Zig. Learn more at korulang.org or read about AI-first tooling.