Architecture

How Asupersync enforces cancel-correctness, structured concurrency, and deterministic testing.

Overview

Runtime Architecture

Asupersync's runtime is built on three pillars: Regions for structure, the Cancel Protocol for cleanup, and the Lab for testing.

[Diagram] Application layer: Region Tree (structured concurrency scopes). Cancel Protocol: 1. Request, 2. Drain, 3. Finalize. Scheduler: Cancel Lane (high priority), Timed Lane (medium), Ready Lane (low).
Core Model

Structured Concurrency

Every task lives inside a Region. Regions form a tree. When a parent Region closes, all children are cancelled — automatically.

regions.rs

```rust
// Regions scope task lifetimes
Region::open(cx, "server", async |cx| {
    // Child tasks are bound to this Region
    cx.spawn("worker-1", handle_requests());
    cx.spawn("worker-2", process_queue());

    // When this Region closes, workers are cancelled
    // via the cancel protocol (not silently dropped)
    cx.shutdown_signal().await;
}).await
```

Unlike Tokio's tokio::spawn, which creates globally scoped tasks, Asupersync tasks are always scoped to a Region. This eliminates orphaned futures: if you can't see the task, it can't outlive you.

Safety

Cancel Protocol

Three phases ensure resources are always cleaned up, even during cancellation.

1. Request

Cancel signal sent. Task sees CancelRequested on its Cx. Can begin graceful shutdown.

2. Drain

Task gets budgeted time to finish in-flight work. Flush buffers, close connections, release locks.

3. Finalize

Registered finalizers run in LIFO order. Like defer in Go, but integrated with the cancel protocol.
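The three phases can be sketched as a small state machine. Everything below (the CancelState enum, the Task struct, and the drain-budget arithmetic) is illustrative pseudocode-in-Rust, not Asupersync's actual API:

```rust
// Illustrative sketch of the three-phase cancel protocol.
// All names here are hypothetical.

#[derive(Debug, PartialEq, Clone, Copy)]
enum CancelState {
    Running,
    Requested,      // 1. Request: cancel signal observed on the Cx
    Draining(u32),  // 2. Drain: budgeted ticks to finish in-flight work
    Finalizing,     // 3. Finalize: run registered finalizers (LIFO)
    Completed,
}

struct Task {
    state: CancelState,
    finalizers: Vec<&'static str>, // registered in order; run in reverse
}

impl Task {
    fn request_cancel(&mut self, drain_budget: u32) {
        // Phase 1 (Request): the task sees CancelRequested at its next
        // checkpoint, then enters Phase 2 (Drain) with a bounded budget.
        self.state = CancelState::Draining(drain_budget);
    }

    fn tick(&mut self) {
        // Each scheduler tick burns one unit of drain budget.
        if let CancelState::Draining(fuel) = self.state {
            self.state = if fuel == 0 {
                CancelState::Finalizing
            } else {
                CancelState::Draining(fuel - 1)
            };
        }
    }

    fn finalize(&mut self) -> Vec<&'static str> {
        // LIFO: the last registered finalizer runs first, like `defer` in Go.
        let order: Vec<_> = self.finalizers.iter().rev().cloned().collect();
        self.state = CancelState::Completed;
        order
    }
}

fn main() {
    let mut t = Task {
        state: CancelState::Running,
        finalizers: vec!["close-socket", "flush-buffer", "release-lock"],
    };
    t.request_cancel(1);
    t.tick();
    t.tick(); // budget exhausted -> Finalizing
    assert_eq!(t.state, CancelState::Finalizing);
    assert_eq!(t.finalize(),
               vec!["release-lock", "flush-buffer", "close-socket"]);
}
```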

Security

Capability Tiers

Five hierarchical permission levels enforce the principle of least authority. Every task gets exactly the permissions it needs — nothing more.

  • FiberCap: Compute-only. No spawning, no I/O, no timers.

  • TaskCap: Can spawn child tasks within its Region.

  • IoCap: Network, filesystem, and timer access.

  • RemoteCap: Cross-machine communication and data transfer.

  • SupervisorCap: Can manage, restart, and cancel other tasks.

Each capability tier extends the one below it. A task with FiberCap can only compute — it cannot spawn, perform I/O, or set timers. IoCap adds network and filesystem access. At the top, SupervisorCap can manage and restart other tasks.

This hierarchy ensures that if a task is compromised or buggy, its blast radius is limited to its granted capabilities. A compute-only fiber cannot exfiltrate data over the network. An I/O task cannot restart other tasks.
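Because each tier strictly extends the one below it, an authority check reduces to an ordered comparison. The enum and permits function below are an illustrative sketch, not the library's real types:

```rust
// Hypothetical sketch of hierarchical capability tiers.
// Tier names follow the docs; the ordered-enum encoding is an assumption.

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Capability {
    FiberCap,      // compute-only
    TaskCap,       // + spawn within its Region
    IoCap,         // + network, filesystem, timers
    RemoteCap,     // + cross-machine communication
    SupervisorCap, // + manage/restart/cancel other tasks
}

/// Each tier extends the one below it, so "does this task hold
/// enough authority?" is a single comparison.
fn permits(held: Capability, required: Capability) -> bool {
    held >= required
}

fn main() {
    // A compute-only fiber cannot perform I/O...
    assert!(!permits(Capability::FiberCap, Capability::IoCap));
    // ...but a supervisor holds every lower tier's authority.
    assert!(permits(Capability::SupervisorCap, Capability::IoCap));
}
```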

Formal Foundations

Small-Step Semantics

35 transition rules define every possible async state change. Each rule is mechanized in Lean 4 for machine-checked correctness.

Think of small-step semantics like a recipe card for the runtime. Each transition rule says: “If the system is in state X, then one computation step produces state Y.” By chaining rules together, you can trace any execution from start to finish — and prove that no step can violate safety guarantees.

SPAWN
⟨spawn(e), σ⟩ → ⟨permit(id), σ[id ↦ Running(e)]⟩

Spawning a task produces a Permit and registers the new task as Running in the state.

CANCEL-PROPAGATE
⟨cancel(r), σ⟩ → ⟨(), σ[t.state ↦ CancelRequested | t ∈ r]⟩

Cancelling a Region marks every task in it as CancelRequested.

DRAIN-TICK
⟨tick, σ[t: Draining(fuel)]⟩ → ⟨tick, σ[t: Draining(fuel-1)]⟩

Each scheduler tick decrements Cancel Fuel, guaranteeing termination.

EFFECT-RESERVE
⟨reserve(eff), σ⟩ → ⟨handle(id), σ[id ↦ Reserved(eff)]⟩

Staging an effect creates a reversible reservation — not yet committed.

CHECKPOINT-MASKED
⟨checkpoint, σ[t: CancelRequested]⟩ → ⟨Cancelled, σ[t: Draining]⟩

If a cancel is pending at a checkpoint, the task transitions to Draining and returns Cancelled.

CLOSE-CANCEL-CHILDREN
⟨close(r), σ⟩ → ⟨(), σ[t.state ↦ CancelRequested | t ∈ children(r)]⟩

Closing a Region first cancels all children, cascading down the Region tree.

These 6 rules are a sample — the full system has 35 rules covering checkpoint, commit, rollback, supervisor restart, and more. See the Spec Explorer for the complete formal specification.
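Two of these rules, SPAWN and CANCEL-PROPAGATE, can be made executable as plain updates on a state map. The Sigma map and every name below are an illustrative model of the semantics, not runtime code:

```rust
// Executable model of SPAWN and CANCEL-PROPAGATE over the state σ.
// All names are illustrative.
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum TaskState {
    Running,
    CancelRequested,
}

// σ maps a task id to (its Region, its state).
type Sigma = HashMap<u64, (u64, TaskState)>;

// SPAWN: ⟨spawn(e), σ⟩ → ⟨permit(id), σ[id ↦ Running(e)]⟩
fn spawn(sigma: &mut Sigma, next_id: &mut u64, region: u64) -> u64 {
    let id = *next_id;
    *next_id += 1;
    sigma.insert(id, (region, TaskState::Running));
    id // the Permit
}

// CANCEL-PROPAGATE: every task t ∈ r becomes CancelRequested.
fn cancel(sigma: &mut Sigma, region: u64) {
    for (_, (r, state)) in sigma.iter_mut() {
        if *r == region {
            *state = TaskState::CancelRequested;
        }
    }
}

fn main() {
    let mut sigma = Sigma::new();
    let mut next_id = 0;
    let a = spawn(&mut sigma, &mut next_id, 1);
    let b = spawn(&mut sigma, &mut next_id, 2);
    cancel(&mut sigma, 1);
    // Only the tasks in the cancelled Region are marked.
    assert_eq!(sigma[&a].1, TaskState::CancelRequested);
    assert_eq!(sigma[&b].1, TaskState::Running);
}
```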

Testing Infrastructure

17 Test Oracles

The Lab runtime ships with 17 independent correctness monitors. Each oracle watches for a specific class of concurrency bug.

Each test oracle is an independent auditor. When you run a Lab test, all 17 oracles are active simultaneously — checking for resource leaks, protocol violations, budget overruns, and more. Combined with DPOR (think of it as a maze solver that only explores paths that lead to different outcomes), the Lab systematically finds bugs that random testing would miss.

  • TaskLeak: Detects tasks that escape their Region scope

  • ObligationLeak: Catches Permits/Leases dropped without consumption

  • CancelProtocol: Verifies 3-phase cancel contract compliance

  • BudgetOverrun: Flags tasks exceeding their drain budget

  • RegionNesting: Validates parent-child Region tree invariants

  • SchedulerFairness: Ensures the Cancel Lane gets proper priority

  • QuiescenceCheck: Confirms all tasks finished before Region close

  • FinalizerOrder: Verifies LIFO execution order of finalizers

  • CapabilityEscalation: Blocks unauthorized capability upgrades

  • SporkInvariant: Checks fork-spawned Region consistency

  • DeadlockFreedom: Detects cycles in the wait-graph

  • ProgressGuarantee: Validates martingale progress certificates

  • SeedDeterminism: Ensures identical seeds reproduce identical schedules

  • MemoryOrdering: Checks acquire/release memory consistency

  • FuelExhaustion: Prevents cancel propagation past fuel=0

  • SupervisorPolicy: Enforces restart limits and backoff policies

  • EffectAtomicity: Ensures two-phase effects commit atomically

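One way to picture an oracle is as an event-driven monitor. The Oracle trait, Event enum, and TaskLeak struct below are assumptions made for illustration, not the Lab's real interface:

```rust
// Hypothetical shape of a test oracle: an independent monitor that
// observes runtime events and reports violations.

#[derive(Debug)]
enum Event {
    TaskSpawned(u64),
    TaskCompleted(u64),
    RegionClosed,
}

trait Oracle {
    fn name(&self) -> &'static str;
    fn on_event(&mut self, event: &Event);
    fn verdict(&self) -> Result<(), String>;
}

/// TaskLeak-style oracle: every spawned task must complete
/// before its Region closes.
struct TaskLeak {
    live: std::collections::HashSet<u64>,
    leaked: Vec<u64>,
}

impl Oracle for TaskLeak {
    fn name(&self) -> &'static str { "TaskLeak" }

    fn on_event(&mut self, event: &Event) {
        match event {
            Event::TaskSpawned(id) => { self.live.insert(*id); }
            Event::TaskCompleted(id) => { self.live.remove(id); }
            // Anything still live at Region close has leaked.
            Event::RegionClosed => self.leaked.extend(self.live.iter()),
        }
    }

    fn verdict(&self) -> Result<(), String> {
        if self.leaked.is_empty() {
            Ok(())
        } else {
            Err(format!("leaked tasks: {:?}", self.leaked))
        }
    }
}

fn main() {
    let mut oracle = TaskLeak { live: Default::default(), leaked: vec![] };
    for e in [Event::TaskSpawned(1), Event::TaskSpawned(2),
              Event::TaskCompleted(1), Event::RegionClosed] {
        oracle.on_event(&e);
    }
    assert!(oracle.verdict().is_err()); // task 2 escaped its Region
}
```

In a Lab run, all 17 such monitors would observe the same event stream concurrently, each judging its own invariant.
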
Cancel Lifecycle

State Machine

Five states, four transitions, and Cancel Fuel that guarantees termination. Walk through the full cancellation lifecycle.

Cancel Fuel is the key invariant. It's a counter that strictly decreases with each step of cancel propagation — like a rocket with a finite fuel tank. The formal proof is a supermartingale: a mathematical sequence that, on average, can only go down. This guarantees that cancellation always terminates, even in adversarial workloads.

The five states — Running, CancelRequested, Cancelling (Drain), Finalizing, and Completed — form a strict total order. A task can only move forward through these states, never backward. Combined with the budget algebra that composes nested region budgets, this gives Asupersync its cancel-correctness guarantee.
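The termination argument can be sketched directly: a forward-only state order plus a strictly decreasing fuel counter bounds the total number of steps. All names below are illustrative, not the runtime's actual types:

```rust
// Sketch of the cancel lifecycle: five forward-only states and a
// strictly decreasing fuel counter that guarantees termination.

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Phase { Running, CancelRequested, Draining, Finalizing, Completed }

fn step(phase: Phase) -> Phase {
    match phase {
        Phase::Running => Phase::CancelRequested,
        Phase::CancelRequested => Phase::Draining,
        Phase::Draining => Phase::Finalizing,
        Phase::Finalizing => Phase::Completed,
        Phase::Completed => Phase::Completed,
    }
}

fn run_cancel(mut fuel: u32) -> (Phase, u32) {
    let mut phase = Phase::Running;
    let mut steps = 0;
    while phase != Phase::Completed && fuel > 0 {
        let next = step(phase);
        assert!(next >= phase); // forward-only: states form a total order
        phase = next;
        fuel -= 1; // strictly decreasing: the termination measure
        steps += 1;
    }
    (phase, steps)
}

fn main() {
    // With enough fuel, cancellation always reaches Completed
    // in a bounded number of transitions.
    let (phase, steps) = run_cancel(10);
    assert_eq!(phase, Phase::Completed);
    assert_eq!(steps, 4);
}
```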

Security

Macaroon Attenuation

Capabilities are bearer tokens with 8 caveat predicates. Anyone can add restrictions — nobody can remove them. Delegation is safe by construction.

Inspired by Google's Macaroons paper, Asupersync's capabilities use a decentralized attenuation model. When you delegate a capability to a child task, you can restrict it by adding caveats — but you can never widen an existing capability. This makes delegation inherently safe: you don't need a central authority to verify that a delegated token is valid.

The 8 caveat predicates cover time bounds (TimeBefore, TimeAfter), spatial scoping (RegionScope, TaskScope), usage limits (MaxUses, RateLimit), resource patterns (ResourceScope), and a Custom escape hatch for application-specific restrictions.
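A minimal sketch of attenuation-only delegation, assuming a caveat list checked conjunctively: the types below are hypothetical, though the caveat names follow the list above:

```rust
// Sketch of attenuation-only delegation: caveats can be added but
// never removed. The API shape is an assumption for illustration.

#[derive(Debug, Clone, PartialEq)]
enum Caveat {
    TimeBefore(u64),
    MaxUses(u32),
    RegionScope(&'static str),
}

#[derive(Debug, Clone)]
struct Capability {
    caveats: Vec<Caveat>,
}

impl Capability {
    /// Delegation only appends restrictions. There is deliberately
    /// no method that removes a caveat, so a child can never hold
    /// wider authority than its parent.
    fn attenuate(mut self, caveat: Caveat) -> Capability {
        self.caveats.push(caveat);
        self
    }

    /// A request is authorized only if every caveat holds
    /// (use counting is elided for brevity).
    fn authorizes(&self, now: u64, region: &str) -> bool {
        self.caveats.iter().all(|c| match c {
            Caveat::TimeBefore(t) => now < *t,
            Caveat::MaxUses(n) => *n > 0,
            Caveat::RegionScope(r) => *r == region,
        })
    }
}

fn main() {
    let root = Capability { caveats: vec![] };
    let child = root
        .attenuate(Caveat::TimeBefore(100))
        .attenuate(Caveat::RegionScope("server"));
    assert!(child.authorizes(50, "server"));
    assert!(!child.authorizes(150, "server")); // expired
    assert!(!child.authorizes(50, "other"));   // wrong Region
}
```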

Statistical Testing

E-Process Monitoring

Anytime-valid invariant monitoring via Ville's inequality. Peek at any time, reject the instant evidence is strong enough — no p-hacking penalty.

Traditional statistical tests require a fixed sample size: you commit to N observations upfront, then compute a p-value. Peeking at intermediate results invalidates the guarantee. E-processes solve this using a betting martingale: E_t = E_(t-1) × (1 + λ × (X_t - p₀)).

Via Ville's inequality, if the E-value ever exceeds 1/α, you know with mathematical certainty that the null hypothesis is false — regardless of when you checked. The Lab runtime monitors three critical invariants (task leak, obligation leak, quiescence) this way, with bet size λ=0.5 and null rate p₀=0.001.
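The update rule is easy to run numerically. The sketch below uses the stated λ = 0.5 and p₀ = 0.001 and checks when E_t first crosses 1/α for α = 0.05; the function itself is illustrative, not the Lab's implementation:

```rust
// Numeric sketch of the e-process update
//   E_t = E_{t-1} * (1 + λ * (X_t - p0)),
// using the stated λ = 0.5 and p0 = 0.001.

fn e_process(observations: &[f64], lambda: f64, p0: f64) -> Vec<f64> {
    let mut e = 1.0;
    observations
        .iter()
        .map(|&x| {
            e *= 1.0 + lambda * (x - p0);
            e // running e-value after each observation
        })
        .collect()
}

fn main() {
    let (lambda, p0) = (0.5, 0.001);
    let alpha = 0.05; // reject once E_t >= 1/α = 20

    // Each violation (X_t = 1) multiplies E by 1.4995; a clean
    // observation (X_t = 0) multiplies it by 0.9995.
    let trace = [1.0; 10];
    let es = e_process(&trace, lambda, p0);

    // Anytime-valid: we may stop the instant the threshold is crossed.
    let hit = es.iter().position(|&e| e >= 1.0 / alpha);
    assert_eq!(hit, Some(7)); // 1.4995^8 ≈ 25.6 > 20, 1.4995^7 ≈ 17 < 20
}
```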

Effect Safety

Saga Compensation

16 operation kinds, CALM-optimized coordination barriers, and automatic LIFO rollback. Distributed operations that unwind cleanly on failure.

Multi-step operations (create account → send email → charge card) are modeled as Sagas. Each step has a forward action and a compensating action. If any step fails or cancellation arrives, completed steps are unwound in LIFO order — the last completed step compensates first.

What makes this unique is CALM analysis. The Saga engine classifies each of its 16 operation kinds as monotone (order-independent, like Reserve and Send) or non-monotone (barrier-required, like Commit and Release). Consecutive monotone operations are batched without any coordination barriers — synchronization is inserted only where the math demands it.
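LIFO compensation can be sketched as a stack of compensating actions, popped in reverse on failure; the Saga type and step names below are invented for illustration:

```rust
// Sketch of LIFO saga compensation: each completed step pushes its
// compensating action, and failure unwinds the stack in reverse.
// All names are hypothetical.

struct Saga {
    compensations: Vec<&'static str>,
}

impl Saga {
    fn run_step(
        &mut self,
        action: &'static str,
        compensation: &'static str,
        fail: bool,
    ) -> Result<(), Vec<&'static str>> {
        if fail {
            // Unwind: the last completed step compensates first.
            let unwound: Vec<_> =
                self.compensations.drain(..).rev().collect();
            return Err(unwound);
        }
        let _ = action; // the forward action would execute here
        self.compensations.push(compensation);
        Ok(())
    }
}

fn main() {
    let mut saga = Saga { compensations: vec![] };
    saga.run_step("create-account", "delete-account", false).unwrap();
    saga.run_step("send-email", "send-retraction", false).unwrap();
    let err = saga
        .run_step("charge-card", "refund-card", true)
        .unwrap_err();
    // Compensations run in reverse completion order.
    assert_eq!(err, vec!["send-retraction", "delete-account"]);
}
```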

Development

Build Timeline

From formal proofs to published crate.

Phase 1

Python Foundation (1,700+ stars)

  • Built original MCP Agent Mail in Python with SQLite storage

  • Designed agent identity, messaging, and file reservation primitives

  • Achieved 1,700+ GitHub stars and broad community adoption

  • Identified scalability bottlenecks: Git lock contention, SQLite pool exhaustion

Runtime Log v0.2
Phase 2

Rust Ground-Up Rewrite

  • 12-crate modular workspace architecture with zero unsafe code

  • Built on Tokio for high-performance async concurrency

  • SQLite WAL mode with connection pooling and PRAGMA tuning

  • Git-backed archive with commit coalescing (9x reduction)

Phase 3

MCP Surface & Search V3

  • Implemented 34 MCP tools across 9 clusters via fastmcp_rust

  • Added 20+ MCP resources for fast agent-discoverable lookups

  • Built hybrid search with two-tier fusion and reranking on frankensearch

  • Cross-project coordination via product bus and contact handshakes

Phase 4

Operations Console & CLI

  • 15-screen TUI built on frankentui with 5 themes

  • Robot mode: 16 non-interactive subcommands for agent consumption

  • Web UI for human oversight with Overseer compose form

  • Pre-commit guard integration for reservation enforcement

Phase 5

Stress Testing & Production Hardening

  • 10-scenario stress gauntlet: 30-agent pipelines, pool exhaustion, thundering herd

  • Sustained ~49 RPS with 1,494 ops in 30s mixed workloads

  • Commit coalescer achieving 9.1x write reduction

  • Cross-platform installers for Linux, macOS, and Windows
