Variant Selection: Build-Time Polymorphism Without the Ceremony


The Problem

You have a compute event. Sometimes you want the naive implementation (correct, debuggable). Sometimes you want the fast one (optimized, harder to debug). Sometimes you want the GPU version.

Traditional solutions:

  • Runtime dispatch: Virtual functions, strategy pattern. Costs cycles.
  • Preprocessor macros: #ifdef RELEASE. Ugly, error-prone.
  • Build system gymnastics: Swap files based on config. Fragile.

Koru’s solution: proc variants.


Declaring Variants

An event can have multiple proc implementations, distinguished by a variant suffix:

~event compute { x: i32 }
| result { value: i32 }

~proc compute|naive {
    // Simple, debuggable
    var sum: i32 = 0;
    var i: i32 = 0;
    while (i < x) {
        sum += i;
        i += 1;
    }
    return .{ .result = .{ .value = sum } };
}

~proc compute|fast {
    // Optimized - Gauss's formula
    return .{ .result = .{ .value = @divFloor(x * (x - 1), 2) } };
}

~proc compute|gpu {
    // GPU kernel dispatch
    return .{ .result = .{ .value = gpu_sum(x) } };
}

Three implementations. Same event signature. Same branches. Different algorithms.
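The naive and fast variants should be interchangeable. A quick Python model (outside Koru, purely illustrative) checks that the loop and Gauss's closed form agree on the sum 0 + 1 + … + (x-1):

```python
def compute_naive(x: int) -> int:
    # Mirrors the naive Koru variant: accumulate 0..x-1 in a loop
    total = 0
    for i in range(x):
        total += i
    return total

def compute_fast(x: int) -> int:
    # Mirrors the fast variant: Gauss's closed form for 0..x-1
    return x * (x - 1) // 2

assert all(compute_naive(x) == compute_fast(x) for x in range(2000))
print(compute_fast(1000))  # 499500
```

Same contract, same answers, different cost: the loop is O(x), the formula is O(1).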


Explicit Selection

Call a specific variant with the |variant suffix:

// Always use the fast implementation
~compute|fast(x: 1000)
| result r |> display(r.value)

// Always use naive for debugging
~compute|naive(x: 1000)
| result r |> debug_inspect(r.value)

The variant is resolved at compile time. No runtime dispatch. The emitter generates a direct call to compute_event.handler__fast(...) or compute_event.handler__naive(...).


Build-Time Selection

The real power: select variants based on build configuration.

~import "$std/build"

// When compiled with --build=release, use fast variants
~[release]std.build:variants {
    "compute": "fast",
    "graphics:blur": "gpu"
}

// When compiled with --build=debug, use naive variants
~[debug]std.build:variants {
    "compute": "naive",
    "graphics:blur": "cpu"
}

Now your code doesn’t need to know which variant it’s using:

// This call uses whatever variant the build config selected
~compute(x: 1000)
| result r |> process(r.value)

Compile with --build=release and you get the fast path. Compile with --build=debug and you get the debuggable path. Zero code changes.


How It Works

1. InvocationMeta

The build:variants event receives metadata about its call site:

~[comptime]pub event variants {
    meta: InvocationMeta,  // Call site info
    source: Source,        // The variant mappings
    program: Program       // Full AST for validation
}

InvocationMeta provides:

  • meta.annotations - The flow annotations (["release"], ["debug"])
  • meta.path - Full event path ("std.build:variants")
  • meta.location - Source file and line

2. Annotation Matching

The proc checks if its annotation matches the --build flag:

~proc variants {
    var matched: bool = false;

    for (meta.annotations) |ann| {
        if (std.mem.eql(u8, ann, "release")) {
            if (Root.CompilerEnv.hasFlag("build=release")) {
                // This config applies!
                matched = true;
            }
        }
    }

    if (!matched) {
        return .{ .skipped = .{ .reason = "no matching flag" } };
    }

    // Parse and register variants...
}
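Stripped of the Zig plumbing, the matching logic reduces to a set check. A minimal Python model (the name `config_applies` is illustrative, not Koru's API):

```python
def config_applies(annotations, build_flags):
    # An annotation like "release" applies when the compiler was
    # invoked with the matching flag, e.g. --build=release.
    return any(f"build={ann}" in build_flags for ann in annotations)

print(config_applies(["release"], {"build=release"}))  # True
print(config_applies(["debug"], {"build=release"}))    # False
```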

3. AST Validation

Before registering a variant, we validate the event exists:

if (ast_functional.findEventByCanonicalName(program, key) == null) {
    return .{ .invalid_event = .{ .name = key } };
}

Typo in your variant config? Compile-time error. Not a runtime mystery.

4. Variant Registry

Valid mappings go into a core registry that the emitter reads:

// In emitter_helpers.zig
pub fn registerVariant(event_name: []const u8, variant_name: []const u8) bool;
pub fn getVariant(event_name: []const u8) ?[]const u8;

When emitting an invocation without an explicit variant, the emitter checks the registry:

if (effective_variant == null) {
    effective_variant = getVariant(canonical_name);
}
try writeHandlerName(emitter, allocator, effective_variant);
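The lookup order is: explicit call-site variant first, then the build-config registry, then the default handler. A hypothetical Python model of that precedence (function names are illustrative, not the actual emitter API):

```python
# Registered mappings from build:variants configs, keyed by event name
_registry = {}

def register_variant(event_name, variant_name):
    _registry[event_name] = variant_name
    return True

def get_variant(event_name):
    return _registry.get(event_name)

def effective_handler(event_name, explicit_variant=None):
    # Explicit |variant at the call site wins; otherwise fall back
    # to the registry; otherwise the unsuffixed default handler.
    variant = explicit_variant or get_variant(event_name)
    return f"handler__{variant}" if variant else "handler"

register_variant("compute", "fast")
print(effective_handler("compute"))           # handler__fast
print(effective_handler("compute", "naive"))  # handler__naive
print(effective_handler("graphics:blur"))     # handler
```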

The Mangling

Variants are mangled into handler names:

Declaration                     Handler Name
~proc compute                   handler
~proc compute|fast              handler__fast
~proc compute|naive             handler__naive
~proc compute|zig[optimized]    handler__zig_5b_optimized_5d_

Brackets and special characters get hex-encoded. The result is always a valid Zig identifier.
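The encoding implied by the table can be sketched in a few lines of Python. This is a model of the scheme, not Koru's actual implementation: alphanumerics and underscores pass through, and anything else becomes `_XX_` with its byte value in lowercase hex.

```python
def mangle_handler_name(variant):
    # No variant: the plain default handler
    if variant is None:
        return "handler"
    out = []
    for ch in variant:
        if ch.isalnum() or ch == "_":
            out.append(ch)
        else:
            # '[' -> _5b_, ']' -> _5d_, etc.
            out.append(f"_{ord(ch):02x}_")
    return "handler__" + "".join(out)

print(mangle_handler_name("fast"))            # handler__fast
print(mangle_handler_name("zig[optimized]"))  # handler__zig_5b_optimized_5d_
```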


Why Not Just Functions?

You might ask: why not just have compute_fast() and compute_naive() as separate events?

Because variants preserve the contract. All variants of compute:

  • Take the same input shape
  • Return the same output branches
  • Can be swapped without changing call sites

This is compile-time polymorphism. The interface is fixed; the implementation varies.


Real-World Example

A graphics pipeline with configurable quality:

~import "$std/build"
~import "$std/graphics"

// Quality presets
~[quality=high]std.build:variants {
    "graphics:blur": "gaussian_16tap",
    "graphics:shadows": "pcf_8x8",
    "graphics:ao": "hbao_plus"
}

~[quality=low]std.build:variants {
    "graphics:blur": "box_4tap",
    "graphics:shadows": "hard",
    "graphics:ao": "ssao_simple"
}

// The rendering code doesn't care about quality level
~render_frame(scene: scene)
| ready |>
    graphics:blur(input: color_buffer)
    | done buf |>
        graphics:shadows(scene: scene, buffer: buf)
        | done buf2 |> present(buf2)

Build with --build=quality=high for beautiful visuals. Build with --build=quality=low for potato mode. Same code.


It Composes

The variant system isn’t a special feature - it’s just comptime code using the same primitives as everything else. That means it composes with everything:

Property-based testing: Generate random inputs, run both variants, compare results:

~import "$std/testing"

~test("naive and fast produce same results")
| run |>
    testing:property(generator: random_i32)
    | sample x |>
        compute|naive(x: x)
        | result naive_r |>
            compute|fast(x: x)
            | result fast_r |>
                testing:assert_eq(naive_r.value, fast_r.value)

Taps: Profile specific variants without touching the call sites:

~tap(compute|fast -> *)
| result |> profiler:record(event: "compute_fast")

Custom selection logic: Don’t like annotation matching? Write your own:

~[comptime]pub event my_variants { source: Source, program: Program }
| configured {}

~proc my_variants {
    // Your logic here - check environment, read config files, whatever
    if (should_use_fast()) {
        emitter_helpers.registerVariant("compute", "fast");
    }
    return .{ .configured = .{} };
}

There’s no “variant system” to extend. It’s just comptime events, AST access, and a registry. The same tools that build ~if and ~tap build variant selection.

~[release]std.build:variants { "hot_path": "optimized" }
~[debug]std.build:variants { "hot_path": "instrumented" }

~hot_path(data: input)
| done |> continue_processing()

What looks like build configuration is just a comptime event. What looks like a function call is build-configured dispatch. All resolved at compile time. Zero overhead.

That’s Koru.