The Router That Doesn't Exist


This is a complete HTTP server with dynamic routing:

~import "$orisha"
~import "$std/io"

~orisha:handler = orisha:router(req)
| [GET /] |> response { status: 200, body: "Hello World" }
| [GET /users/:id] p |> response { status: 200, body: p.id }
| [POST /users] _ |> response { status: 201, body: "Created" }
| [*] |> response { status: 404, body: "Not Found" }

~orisha:serve(port: 3000)
| shutdown s |> std.io:print.ln("{{s.reason}}")
| failed f |> std.io:print.ln("{{f.msg}}")

That’s it. kqueue event loop. 4 worker threads. TCP_NODELAY. Keep-alive. Pattern-matched routing with parameter extraction.

173,422 requests per second.

What You’re Looking At

Koru uses events and continuations instead of functions and return values. The ~orisha:handler line says: “When a request comes in, route it.” The branches below it are the routes.

| [GET /] |> response { status: 200, body: "Hello World" }

This reads: “If the request matches GET /, continue to response with status 200.”

| [GET /users/:id] p |> response { status: 200, body: p.id }

This reads: “If the request matches GET /users/:id, bind the match to p, and respond with the extracted id.”

The [*] is a catch-all. Standard stuff. The syntax is the interesting part — but the syntax isn’t the point.
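For contrast, here is roughly the work a conventional runtime router does for the same patterns: a small Go sketch (illustrative only, not Orisha code) that matches a method and path against a stored pattern and extracts `:params`.

```go
package main

import (
	"fmt"
	"strings"
)

// route is one pattern like "GET /users/:id", kept as data at runtime.
type route struct {
	method string
	parts  []string // path split on "/"; ":name" segments are parameters
}

func newRoute(method, pattern string) route {
	return route{method, strings.Split(strings.Trim(pattern, "/"), "/")}
}

// match checks a request against this pattern, binding :params on success.
func (r route) match(method, path string) (map[string]string, bool) {
	if method != r.method {
		return nil, false
	}
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) != len(r.parts) {
		return nil, false
	}
	params := map[string]string{}
	for i, p := range r.parts {
		if strings.HasPrefix(p, ":") {
			params[p[1:]] = parts[i] // bind the parameter, like p.id above
		} else if p != parts[i] {
			return nil, false
		}
	}
	return params, true
}

func main() {
	r := newRoute("GET", "/users/:id")
	if params, ok := r.match("GET", "/users/42"); ok {
		fmt.Println(params["id"]) // prints "42"
	}
}
```

This per-request pattern walk is exactly what Orisha eliminates, as the next section shows.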

The Point

orisha:router is a compile-time transform. It’s marked [comptime|transform] in the framework source, which means the Koru compiler runs it during compilation, not at runtime.

Here’s what actually happens when you compile:

  1. The compiler encounters orisha:router(req)
  2. It invokes the router transform, passing the full AST
  3. The transform walks your continuation branches, finds the [GET /] patterns
  4. It parses each pattern into method + path + parameters
  5. It generates inline Zig dispatch code
  6. It replaces itself with that code

Your pattern branches become a direct if-chain:

if (std.mem.eql(u8, req.method, "GET")) {
    if (std.mem.eql(u8, req.path, "/")) {
        // inline: response { status: 200, body: "Hello World" }
    }
    if (std.mem.startsWith(u8, req.path, "/users/")) {
        const _param = req.path[7..];
        const p = .{ .id = _param };
        // inline: response { status: 200, body: p.id }
    }
}
if (std.mem.eql(u8, req.method, "POST")) {
    if (std.mem.eql(u8, req.path, "/users")) {
        // inline: response { status: 201, body: "Created" }
    }
}
// fallback: response { status: 404, body: "Not Found" }

No route table. No trie. No hash map. No dynamic dispatch. Just comparisons.

By the time the binary exists, the router is gone. What remains is the minimum work required to match a string against known strings.

The Server

The serve loop is 100 lines of Zig that Koru generates from ~orisha:serve. It’s not hidden behind abstractions:

  • Main thread binds to the port, sets the socket non-blocking, spawns 4 workers
  • Each worker gets its own kqueue instance
  • Main thread accepts connections in a loop, round-robins them to workers
  • Workers read requests, parse the method and path with pointer arithmetic, call the handler (your router), write the response
  • After writing, the connection is re-registered with kqueue for keep-alive

const MAX_EVENTS = 64;
const MAX_CONNECTIONS = 1024;
const NUM_WORKERS = 4;

TCP_NODELAY disables Nagle’s algorithm. ONESHOT events prevent thundering herd. Keep-alive reuses connections. The handler is called directly — @This().handler_event.handler(.{ .req = &req }) — no vtable, no indirection.
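The round-robin hand-off from the main thread to the workers can be sketched in Go, with channels standing in for the per-worker kqueue queues (names and shapes are illustrative, not the generated code):

```go
package main

import "fmt"

const numWorkers = 4

// dispatcher round-robins accepted connections to worker queues,
// the way the main thread hands fds to per-worker kqueue instances.
type dispatcher struct {
	queues [numWorkers]chan int // int stands in for a connection fd
	next   int
}

func newDispatcher() *dispatcher {
	d := &dispatcher{}
	for i := range d.queues {
		d.queues[i] = make(chan int, 64)
	}
	return d
}

// assign picks the next worker in rotation and enqueues the connection,
// returning the chosen worker index.
func (d *dispatcher) assign(fd int) int {
	w := d.next
	d.queues[w] <- fd
	d.next = (d.next + 1) % numWorkers
	return w
}

func main() {
	d := newDispatcher()
	for fd := 100; fd < 106; fd++ {
		fmt.Println(d.assign(fd)) // prints 0 1 2 3 0 1, one per line
	}
}
```

Each worker then owns its connections outright: no shared accept queue, no lock contention, and ONESHOT re-arming keeps a single worker responsible for each fd.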

The Numbers

These are from February 8, 2026. MacBook (Apple Silicon). wrk -t2 -c50 -d10s. All servers implement the same dynamic endpoint: GET /api/users/:id returning {"id":"42","name":"User 42"}.

Server                               req/s
Go (net/http)                       93,563
Bun                                 91,872
actix-web (TechEmpower config)     132,650
.NET Kestrel (TechEmpower config)  139,653
Orisha                             150,303

actix-web was compiled with snmalloc, SIMD-json, and every TechEmpower optimization we could find. .NET had ClearProviders() and tuned Kestrel settings. These are not default configurations — they’re race-tuned.

Orisha is the 10-line program at the top of this post.

For static files with pre-gzip compression, the gap widens:

Server                                   req/s
nginx (gzip_static, all optimizations)  132,400
Orisha (compile-time embedded + gzip)   168,404

The previous post covers the static story in detail. The short version: when the entire HTTP response — headers, body, ETag — is a compile-time constant, there’s nothing left to do at runtime except write(fd, blob, len).

What the Router Actually Looks Like

Here’s the transform. It lives in lib/index.kz and it’s regular Koru code. The compiler runs it because of the [comptime|transform] annotation.

~[comptime|transform]pub event router {
    invocation: *const Invocation,
    item: *const Item,
    program: *const Program,
    allocator: std.mem.Allocator
}
| transformed { program: *const Program }

The proc receives the full program AST. It:

  1. Finds the flow containing the router invocation
  2. Iterates the continuations looking for branches that start with [
  3. Parses each branch: [GET /users/:id] becomes method="GET", path="/users/:id", params=["id"]
  4. Groups routes by method to minimize comparisons
  5. Generates the inline Zig dispatch code
  6. Replaces the flow in the AST with the generated code
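Step 3 can be sketched as a toy in Go (the real transform works on typed AST nodes, not strings; names here are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parsed holds what the transform extracts from one [METHOD /path] branch.
type parsed struct {
	Method, Path string
	Params       []string
}

// parsePattern turns "[GET /users/:id]" into method, path, and param names.
func parsePattern(pat string) parsed {
	inner := strings.Trim(pat, "[]")
	method, path, _ := strings.Cut(inner, " ")
	var params []string
	for _, seg := range strings.Split(path, "/") {
		if strings.HasPrefix(seg, ":") {
			params = append(params, seg[1:])
		}
	}
	return parsed{method, path, params}
}

func main() {
	p := parsePattern("[GET /users/:id]")
	fmt.Println(p.Method, p.Path, p.Params) // prints: GET /users/:id [id]
}
```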

The key line is at the end:

const maybe_new_program = ast_functional.replaceFlowRecursive(
    allocator, program, flow, new_item
) catch unreachable;

The router rewrites itself out of existence.

Why This Matters

Every web framework has a router. Express has a regex-based router. Django has URL patterns. Rails has a DSL. actix-web has proc macros. They all share a property: the router exists at runtime.

Some are fast. actix-web’s proc macros generate efficient code. Rust’s type system helps. But the route table is still a data structure that gets traversed on every request.

Orisha’s router is a compiler pass. It reads your source code, generates optimal dispatch, and disappears. The binary contains string comparisons. That’s it.

This isn’t possible in most languages because most compilers don’t let you run arbitrary code during compilation that can read the AST and rewrite it. Zig’s comptime is sandboxed — no I/O, no side effects. Rust’s proc macros operate on token streams, not semantic ASTs.

Koru’s [comptime|transform] procs receive the typed, resolved AST. They can walk it, analyze it, and produce new programs. The router is just one example of what this enables.

What It Doesn’t Do (Yet)

  • No middleware. The handler is a direct dispatch. Middleware as abstract events is on the roadmap.
  • No request body parsing. The body field exists but parsing is manual.
  • No response streaming. Responses are written in one shot.
  • No TLS. You’d put this behind a reverse proxy today.
  • Route lookup is a linear scan grouped by method. For 5 routes, this doesn’t matter. For 500, we’d want a compile-time perfect hash.

These are engineering problems, not architectural ones. The architecture — compile-time transforms rewriting pattern branches into dispatch code — scales to all of them.

The Full Loop

You write pattern branches. The compiler transforms them into dispatch code. The dispatch code runs inside a kqueue event loop with worker threads. The worker calls your handler directly — no framework in the way.

Source:    | [GET /users/:id] p |> response { ... }
Compile:   if (mem.startsWith(path, "/users/")) { ... }
Runtime:   kqueue -> read -> compare -> write -> re-register

The router that doesn’t exist serves 150,000 requests per second.


Orisha is the web framework for Koru. Source at github.com/korulang/orisha.