Instant Shell Commands: When Pragmatism Trumps Purity

· 7 min read

The Problem: Build Tasks Are Too Slow

You’re iterating on a project. You want to run tests:

koruc main.kz test

What happens in a “proper” programmable pipeline?

  1. Parse main.kz → AST
  2. Collect all event declarations
  3. Generate backend.zig (the compiler backend)
  4. Compile backend.zig (multiple seconds with Zig compiler)
  5. Run backend executable
  6. Execute test event handler
  7. Finally run your tests

For every single test run.

This is fine for production builds. For rapid iteration, it’s death by a thousand recompilations.

The Constraint: The Pipeline Is Sacred

Here’s what we built in Koru:

Everything flows through the backend compiler.

User code → Parse → AST → Backend Generation → Backend Compilation → Execution

This isn’t just an implementation detail. It’s the programmable pipeline philosophy:

  • Your event handlers run at compile-time
  • You can transform the AST
  • You can inject code
  • You can rewrite the compilation itself

All of this happens in backend.zig - a Zig program generated from your Koru code, then compiled.

This is beautiful. This is powerful. This is slow.

The Trade-off: What If We Broke The Rule?

We had shell commands declared in Koru code:

~std.build:command.sh(name: "test") {
  ./run_regression.sh
}

~std.build:command.sh(name: "build") {
  zig build
}

These look like Koru events. They’re declared using event syntax. They’re in the AST.

But they don’t need the backend compiler’s power. They’re just shell scripts.

What if we executed them in the frontend instead?

The Heresy: Frontend Execution

Here’s the heretical idea:

koruc main.kz test

Could become:

  1. Parse main.kz → AST
  2. Look for build:command.sh events with name: "test"
  3. Extract the shell script from the Source parameter
  4. Execute it immediately in the frontend
  5. Done

No backend generation. No Zig compilation. Instant execution.

But this breaks the pipeline! The command.sh events aren’t going through the backend. They’re special-cased in the frontend.

This violates the “programmable pipeline” principle.

The Justification: Documented Trade-offs

From the code comments in build.kz:

// ============================================================================
// BUILD COMMANDS - Shell Script Execution (Frontend Optimization)
// ============================================================================
// FRONTEND OPTIMIZATION: Unlike most compiler features, build:command.sh
// is processed in the frontend (koruc) rather than the backend compiler.
// This allows instant execution without backend compilation overhead.
//
// For commands that need Zig/Koru compilation, use build:command.proc or
// build:command.flow (backend passes, slower but more powerful).

And from test 640’s README:

## Design Pattern

Unlike most compiler features, build:command.sh is processed in the
**frontend** (koruc binary) rather than the backend compiler. This trade-off:

- ✅ Makes commands instant (no backend compilation)
- ✅ Enables fast development iteration
- ⚠️  Breaks the "programmable pipeline" pattern slightly
- 📝 Is clearly documented as a pragmatic optimization

For commands that need Zig/Koru compilation, use build:command.proc or
build:command.flow (backend passes).

The key: We document the heresy.

We’re not pretending this is philosophically pure. We’re explicitly calling out the trade-off.

The Implementation: 179 Lines of Pragmatism

Here’s how it works in main.zig:

// Check if argv[1] is a shell command name
if (argv.len > 1) {
    const potential_command_name = argv[1];

    // Parse the input file to get AST
    var parser = try Parser.init(allocator, source, input, &[_][]const u8{}, null);
    const parse_result = try parser.parse();

    // Walk AST looking for build:command.sh events
    for (parse_result.source_file.items) |item| {
        if (item == .flow) {
            const flow = item.flow;

            // Is this a build:command.sh invocation?
            if (isCommandShEvent(flow)) {
                const cmd_name = extractName(flow.invocation.args);

                if (std.mem.eql(u8, cmd_name, potential_command_name)) {
                    // Found it! Extract the shell script
                    const script = extractSource(flow.invocation.args);

                    // Execute immediately (NO BACKEND COMPILATION!)
                    try executeShellCommand(script, argv[2..]); // Pass remaining args
                    return;
                }
            }
        }
    }
}

// Not a command? Continue to normal compilation...

The heresy is 179 lines of code that happen BEFORE the backend pipeline.

What This Unlocks

1. Instant Test Iteration

$ time koruc main.kz test
./run_regression.sh
All tests passed!

real    0m0.04s  # 40 milliseconds

Compare to going through the backend:

$ time koruc main.kz --backend test  # Hypothetical
[compiling backend.zig...]
All tests passed!

real    0m3.21s  # 3+ seconds

80x faster iteration.

2. Standard Build Tasks Without the Tax

~std.build:command.sh(name: "format") {
  zig fmt src/
}

~std.build:command.sh(name: "lint") {
  eslint .
}

~std.build:command.sh(name: "deploy") {
  ./deploy.sh --prod
}

These are pure shell operations. They don’t need AST transformation or compile-time execution. Making them wait for backend compilation would be pure overhead.

3. Argument Forwarding

The shell script receives all remaining argv:

koruc main.kz test --verbose --filter=101

The test script gets: ["--verbose", "--filter=101"]

~std.build:command.sh(name: "test") {
  ./run_regression.sh "$@"  # $@ = all args
}

Natural shell semantics.

4. Escape Hatch for Backend Commands

Want the full power of the backend compiler? Use the backend event system:

// This runs in the backend (has full compile-time power)
~[comptime]pub event build:command.proc { name: []const u8, ast: ProgramAST }

~proc build:advanced_optimizer {
  // Can transform AST, call compiler passes, etc.
  // Pays the backend compilation cost
}

You choose the trade-off.

The Philosophy: Pragmatism Over Dogma

This feature teaches something important:

Perfect consistency is less valuable than practical utility.

We could have stuck to the pure pipeline:

  • ✅ Philosophically consistent
  • ✅ Everything flows through backend
  • ✅ No special cases
  • 3-second tax on every build task

Instead we chose pragmatism:

  • ⚠️ Special-cased in frontend
  • ⚠️ Breaks the pipeline purity
  • 40ms iteration time
  • Clearly documented trade-off

The key is not pretending it’s pure. The comments say:

“Unlike most compiler features, build:command.sh is processed in the frontend…”

“⚠️ Breaks the ‘programmable pipeline’ pattern slightly”

We document the heresy. That makes it honest.

The Pattern: Escape Hatches With Clear Labels

This is a pattern worth repeating:

  1. Build the pure system (programmable pipeline)
  2. Identify the pragmatic exception (instant shell commands)
  3. Make it explicit (clear naming: command.sh vs command.proc)
  4. Document the trade-off (in code comments AND user docs)
  5. Provide the pure alternative (command.proc for backend power)

Users can then choose their own trade-off:

  • Need instant execution? → command.sh (frontend)
  • Need compile-time power? → command.proc (backend)

The system serves both needs without pretending they’re the same.

The Test: Honest Documentation

From test 640’s post-validation script:

#!/bin/bash
# Test shell command execution

set -e

echo "=== Testing shell command 'hello' ==="
koruc input.kz hello

echo ""
echo "=== Testing shell command 'args' with arguments ==="
koruc input.kz args one two three

Expected output:

=== Testing shell command 'hello' ===
Hello from Koru!

=== Testing shell command 'args' with arguments ===
Args: one two three

The test validates the feature AND serves as living documentation.

The Lesson: Break Your Own Rules When You Document Why

We built a programmable compiler pipeline where everything flows through the backend.

Then we added a feature that bypasses it entirely.

This is not a failure of design. This is pragmatic engineering.

The rules are:

  1. Build the pure system first
  2. Identify legitimate exceptions
  3. Make them explicit in naming and documentation
  4. Provide both paths (fast/limited and slow/powerful)

When you follow these rules, “breaking the pipeline” becomes:

  • ✅ A documented optimization
  • ✅ A clear user choice
  • ✅ An honest trade-off

Not:

  • ❌ A dirty hack
  • ❌ A hidden special case
  • ❌ A betrayal of principles

The Punchline

We needed instant build tasks.

We could have forced everything through the backend compiler (pure but slow).

Instead we made shell commands a frontend optimization, clearly documented as an exception to the pipeline.

The result:

  • 40ms test iteration (80x faster)
  • Clear naming (command.sh vs command.proc)
  • Documented trade-offs (in code and docs)
  • User choice (speed vs power)

Pragmatism over dogma.

Clarity over consistency.

Honest trade-offs over hidden compromises.

And the iteration speed? Chef’s kiss.


Want to try Koru? Check out the language guide or read about self-documenting compiler flags. The compiler is open source, the tests are real, and the pragmatism is intentional.

This is an AI-first project. Every feature is designed through human-AI collaboration. If that excites you, join us.