A 14KB Docker Image That Serves HTTP

· 7 min read

Here’s a Koru program:

~import "$orisha"
~import "$std/io"
~import "$std/build"
~import "$koru/docker"

~orisha:handler = response { status: 200, body: "Hello from Koru!", content_type: "text/plain" }

~orisha:serve(port: 3001)
| shutdown s |> std.io:println(text: s.reason)
| failed f |> std.io:println(text: f.msg)

~[build(linux)]std.build:config {
    "target": "x86_64-linux-musl"
}

~[build(linux)]koru.docker:image(tag: "koru-server:latest") {
    FROM scratch
    COPY a.out /server
    EXPOSE 3001
    ENTRYPOINT ["/server"]
}

Compile and deploy:

$ koruc simple_server.kz docker build
✓ Compiled to a.out
✓ built koru-server:latest
$ docker images koru-server
REPOSITORY     TAG       SIZE
koru-server    latest    14.8kB

That’s the entire image. There is no OS. There is no runtime. There is a binary and syscalls. And the Dockerfile is in the source file.

What’s Inside

This isn’t a toy. The 14KB binary contains:

  • 4 epoll worker threads — concurrent request handling
  • HTTP/1.1 keep-alive — persistent connections
  • TCP_NODELAY — no Nagle buffering
  • Non-blocking I/O — O_NONBLOCK on all sockets
  • Static musl linking — zero runtime dependencies

Run it:

$ docker run --rm -d --name koru-test -p 3001:3001 koru-server:latest

$ curl http://localhost:3001/
Hello from Koru!

$ docker logs koru-test
Orisha listening on http://0.0.0.0:3001
Running with 4 workers + epoll + TCP_NODELAY + keep-alive

It runs. In Docker. On Alpine. On scratch. Wherever you put it.

The Numbers

ELF section breakdown:

Section          Size        Contents
.text            11,029 B    Machine code
.rodata          2,608 B     String constants, HTTP headers
.data            72 B        Mutable globals
Total on disk    14,776 B

For comparison, here’s what other languages produce for a basic HTTP server:

Language                 Binary Size    Notes
Koru/Orisha              14.4 KB        epoll, 4 workers, keep-alive, static
C (musl, hand-rolled)    20-50 KB       Depends on HTTP parsing
Zig (std.http.Server)    100-400 KB     Pulls in std.http, std.fmt
Rust (hyper, musl)       1.5-2.4 MB     tokio + hyper
Go (net/http)            6-7 MB         Runtime + GC + net
Java (GraalVM native)    20-50 MB       JDK subset
Node.js (binary)         50-57 MB       V8 engine

How It Works

Four Koru features compose to make this possible.

1. Variants Select Platform Code

Orisha’s server has two implementations. On macOS, it uses kqueue. On Linux, it uses epoll. These are declared as variants — alternative proc bodies for the same event:

~proc serve {
    // kqueue worker pool (macOS/BSD)
}

~proc serve|epoll {
    // epoll worker pool (Linux)
}

The library also declares which variant to use when:

~[build(linux)]std.build:variants {
    "orisha:serve": "epoll"
}

When you compile with --build=linux, the variant system selects epoll. The kqueue code is never emitted. The compiler doesn’t know what “kqueue” or “epoll” are — it matches flags and picks the corresponding proc.

2. Build Config Sets the Target

The user’s program declares how to build for each platform:

~[build(linux)]std.build:config {
    "target": "x86_64-linux-musl"
}

build:config is a key-value store gated by the same ~[build(X)] flag system as variants. The compiler collects it and passes "-target" "x86_64-linux-musl" to Zig. It doesn’t interpret the value. It’s a convention, not a directive.

Variants and config use the same guard but serve different purposes:

  • build:variants: which code to compile (library concern)
  • build:config: how to compile it (deployment concern)

Both fire on --build=linux. Neither requires the compiler to know what “linux” means.

3. Docker Images Are Declared in Source

The Dockerfile lives in the .kz file, gated by the same ~[build(linux)] flag:

~[build(linux)]koru.docker:image(tag: "koru-server:latest") {
    FROM scratch
    COPY a.out /server
    EXPOSE 3001
    ENTRYPOINT ["/server"]
}

koru.docker is a library. It declares a [command] event that walks the AST at build time, extracts inline Dockerfiles, and shells out to docker build. The compiler doesn’t know what Docker is. The library doesn’t know what Orisha is. They compose through the AST.

koruc simple_server.kz docker build — one command compiles the server, cross-compiles to Linux, and builds the container image.

4. Dead Stripping Removes Everything Else

Koru’s dead strip pass walks the AST before emission. The Orisha library defines events for routing, request matching, listeners, acceptors — dozens of constructs. If your program doesn’t use them, they’re removed before code generation.

For simple_server.kz, the dead stripper removes 92 items. What survives: the handler, the serve event, and println for the shutdown message. Everything else vanishes.

Benchmarks

These numbers are through Docker Desktop’s network bridge on macOS — not bare metal. They represent a lower bound.

$ wrk -t4 -c50 -d10s http://localhost:3001/

Running 10s test @ http://localhost:3001/
  4 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.95ms    0.85ms  17.64ms   85.01%
    Req/Sec     6.28k     1.04k    8.44k    72.28%
  252355 requests in 10.10s, 25.27MB read
Requests/sec:  24975.20
Transfer/sec:      2.50MB

25,000 requests/sec. 1.9ms average latency. 250K requests, zero errors.

This is pre-optimization. There’s no io_uring. No SIMD header parsing. No connection pooling. No sendfile. The server reads into a 4KB stack buffer, parses the method and path with a linear scan, calls the handler, and writes the response. That’s it.

The point isn’t that 25K req/sec is fast. The point is that a 14KB binary handles a quarter million requests without dropping one.

The Source

29 lines of Koru. One command to compile, cross-compile, and build the Docker image.

Source:          29 lines (server + config + Dockerfile)
Binary:          14,776 bytes
Docker image:    14,776 bytes
Dependencies:    0
Commands:        1 (koruc simple_server.kz docker build)

What This Means

The previous post showed that a Koru program with events, branches, and I/O compiles to 4.8KB — matching hand-written Zig. That was proof that the abstraction has zero cost.

This post shows that the same property scales to real infrastructure. An HTTP server with multi-threaded I/O, platform-specific system calls, cross-compilation, and Docker deployment — and the binary is 14KB. The Docker image is the binary.

There’s no runtime to bundle. No interpreter to ship. No framework extracted from a larger system. The binary contains the server and nothing else. FROM scratch works because there’s nothing to add.

The variant system means this isn’t macOS-only or Linux-only. It’s whatever you target. --build=linux gives you epoll. --build=macos gives you kqueue. Tomorrow it could be --build=freebsd with kqueue, or --build=linux-io-uring with io_uring. The compiler doesn’t care. It matches flags and selects code.

The Docker image isn’t a separate artifact you maintain. It’s declared in the source alongside the server, the build config, and the handler. One file. One command. The server, the cross-compilation target, and the container image are all part of the same program.

14KB. That’s not a proof of concept. That’s a deployment artifact.