How Asupersync enforces cancel-correctness, structured concurrency, and deterministic testing.
Asupersync's runtime is built on three pillars: Regions for structure, the Cancel Protocol for cleanup, and the Lab for testing.
Every task lives inside a Region. Regions form a tree. When a parent Region closes, all children are cancelled — automatically.
// Regions scope task lifetimes
Region::open(cx, "server", |cx| {
    // Child tasks are bound to this Region
    cx.spawn("worker-1", handle_requests());
    cx.spawn("worker-2", process_queue());

    // When this Region closes, workers are cancelled
    // via the cancel protocol (not silently dropped)
    cx.shutdown_signal().await;
});

Unlike Tokio's tokio::spawn, which creates globally scoped tasks, Asupersync tasks are always scoped to a Region. This eliminates orphaned futures — if you can't see the task, it can't outlive you.
Three phases ensure resources are always cleaned up, even during cancellation.
Cancel signal sent. Task sees CancelRequested on its Cx. It can begin graceful shutdown.
Task gets budgeted time to finish in-flight work. Flush buffers, close connections, release locks.
Registered finalizers run in LIFO order. Like defer in Go, but integrated with the cancel protocol.
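The LIFO finalizer phase can be sketched with a small registry. This is an illustrative model, not Asupersync's actual Cx API: the `Finalizers` type and its methods are hypothetical names.

```rust
// Sketch of LIFO finalizer ordering, assuming a hypothetical `Finalizers`
// registry; the real Asupersync API may differ.
struct Finalizers {
    stack: Vec<Box<dyn FnOnce(&mut Vec<String>)>>,
}

impl Finalizers {
    fn new() -> Self {
        Self { stack: Vec::new() }
    }

    // Register a finalizer; later registrations run first.
    fn defer(&mut self, f: impl FnOnce(&mut Vec<String>) + 'static) {
        self.stack.push(Box::new(f));
    }

    // Run all finalizers in LIFO order (like Go's defer).
    fn run(mut self, log: &mut Vec<String>) {
        while let Some(f) = self.stack.pop() {
            f(log);
        }
    }
}

fn finalize_demo() -> Vec<String> {
    let mut log = Vec::new();
    let mut fin = Finalizers::new();
    fin.defer(|l| l.push("release lock".into()));
    fin.defer(|l| l.push("close connection".into()));
    fin.defer(|l| l.push("flush buffer".into()));
    fin.run(&mut log);
    log
}
```

Because the last-registered finalizer runs first, cleanup unwinds in the reverse of acquisition order, the same discipline destructors follow.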
Five hierarchical permission levels enforce the principle of least authority. Every task gets exactly the permissions it needs — nothing more.
Each capability tier extends the one below it. A task with FiberCap can only compute — it cannot spawn, perform I/O, or set timers. IoCap adds network and filesystem access. At the top, SupervisorCap can manage and restart other tasks.
This hierarchy ensures that if a task is compromised or buggy, its blast radius is limited to its granted capabilities. A compute-only fiber cannot exfiltrate data over the network. An I/O task cannot restart other tasks.
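One way to picture the tier hierarchy is an ordered enum where each level dominates the ones below it. The text names only FiberCap, IoCap, and SupervisorCap; the two middle tiers here are placeholder names, and the check is a sketch, not Asupersync's enforcement mechanism.

```rust
// Sketch of the five capability tiers as a strictly ordered enum.
// `Spawn` and `Timer` are placeholder names for the unnamed middle tiers.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Cap {
    Fiber,      // compute only
    Spawn,      // + spawn child tasks (placeholder name)
    Timer,      // + set timers (placeholder name)
    Io,         // + network and filesystem access
    Supervisor, // + manage and restart other tasks
}

// A task may perform an action only if its granted tier is at least
// the tier the action requires.
fn allowed(granted: Cap, required: Cap) -> bool {
    granted >= required
}
```

With this ordering, `allowed(Cap::Fiber, Cap::Io)` is false: a compute-only fiber simply has no path to the network.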
35 transition rules define every possible async state change. Each rule is mechanized in Lean 4 for machine-checked correctness.
Think of small-step semantics like a recipe card for the runtime. Each transition rule says: “If the system is in state X, then one computation step produces state Y.” By chaining rules together, you can trace any execution from start to finish — and prove that no step can violate safety guarantees.
⟨spawn(e), σ⟩ → ⟨permit(id), σ[id ↦ Running(e)]⟩
Spawning a task produces a Permit and registers the new task as Running in the state.

⟨cancel(r), σ⟩ → ⟨(), σ[t.state ↦ CancelRequested | t ∈ r]⟩
Cancelling a Region marks every task in it as CancelRequested.

⟨tick, σ[t: Draining(fuel)]⟩ → ⟨tick, σ[t: Draining(fuel−1)]⟩
Each scheduler tick decrements Cancel Fuel, guaranteeing termination.

⟨reserve(eff), σ⟩ → ⟨handle(id), σ[id ↦ Reserved(eff)]⟩
Staging an effect creates a reversible reservation — not yet committed.

⟨checkpoint, σ[t: CancelRequested]⟩ → ⟨Cancelled, σ[t: Draining]⟩
If a cancel is pending at a checkpoint, the task transitions to Draining and returns Cancelled.

⟨close(r), σ⟩ → ⟨(), σ[t.state ↦ CancelRequested | t ∈ children(r)]⟩
Closing a Region first cancels all children, cascading down the Region tree.
These 6 rules are a sample — the full system has 35 rules covering checkpoint, commit, rollback, supervisor restart, and more. See the Spec Explorer for the complete formal specification.
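The checkpoint rule, for example, is an ordinary state-machine step once the Greek is unpacked. The types below are illustrative stand-ins, not Asupersync's actual API:

```rust
// Sketch of the checkpoint transition rule:
// ⟨checkpoint, σ[t: CancelRequested]⟩ → ⟨Cancelled, σ[t: Draining]⟩
// Illustrative types only.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TaskState {
    Running,
    CancelRequested,
    Draining,
}

#[derive(PartialEq, Debug)]
enum CheckpointResult {
    Continue,
    Cancelled,
}

// A checkpoint observes a pending cancel and moves the task to Draining;
// with no cancel pending, it is a no-op.
fn checkpoint(state: &mut TaskState) -> CheckpointResult {
    match *state {
        TaskState::CancelRequested => {
            *state = TaskState::Draining;
            CheckpointResult::Cancelled
        }
        _ => CheckpointResult::Continue,
    }
}
```

Reading each rule as a function from (input, state) to (output, state) like this is exactly how the Lean 4 mechanization treats them.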
The Lab runtime ships with 17 independent correctness monitors. Each oracle watches for a specific class of concurrency bug.
Each test oracle is an independent auditor. When you run a Lab test, all 17 oracles are active simultaneously — checking for resource leaks, protocol violations, budget overruns, and more. Combined with DPOR (think of it as a maze solver that only explores paths that lead to different outcomes), the Lab systematically finds bugs that random testing would miss.
Five states, four transitions, and Cancel Fuel that guarantees termination. Walk through the full cancellation lifecycle.
Cancel Fuel is the key invariant. It's a counter that strictly decreases with each step of cancel propagation — like a rocket with a finite fuel tank. The formal proof is a supermartingale: a mathematical sequence that, on average, can only go down. This guarantees that cancellation always terminates, even in adversarial workloads.
The five states — Running, CancelRequested, Cancelling (Drain), Finalizing, and Completed — form a strict total order. A task can only move forward through these states, never backward. Combined with the budget algebra that composes nested region budgets, this gives Asupersync its cancel-correctness guarantee.
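The forward-only property falls out of treating the five states as a strict total order. A minimal sketch, with illustrative names rather than Asupersync's real types:

```rust
// The five lifecycle states in their total order; derived Ord follows
// declaration order, so later variants compare greater.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Lifecycle {
    Running,
    CancelRequested,
    Cancelling, // drain phase, consumes Cancel Fuel
    Finalizing,
    Completed,
}

// A transition is valid only if it moves strictly forward in the order;
// moving backward (or standing still) is rejected.
fn advance(current: Lifecycle, next: Lifecycle) -> Result<Lifecycle, &'static str> {
    if next > current {
        Ok(next)
    } else {
        Err("lifecycle transitions can never move backward")
    }
}
```

Because every accepted transition strictly increases the state, any execution visits at most five states, which is the skeleton of the termination argument.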
Capabilities are bearer tokens with 8 caveat predicates. Anyone can add restrictions — nobody can remove them. Delegation is safe by construction.
Inspired by Google's Macaroons paper, Asupersync's capabilities use a decentralized attenuation model. When you delegate a capability to a child task, you can restrict it by adding caveats — but you can never widen an existing capability. This makes delegation inherently safe: you don't need a central authority to verify that a delegated token is valid.
The 8 caveat predicates cover time bounds (TimeBefore, TimeAfter), spatial scoping (RegionScope, TaskScope), usage limits (MaxUses, RateLimit), resource patterns (ResourceScope), and a Custom escape hatch for application-specific restrictions.
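The add-only discipline can be made concrete in a few lines. The caveat names follow the list above, but the token type and checking logic are an illustrative sketch, not Asupersync's real capability machinery:

```rust
// Attenuation-only delegation: caveats can be added, never removed,
// so a delegated token is at most as powerful as its parent.
#[derive(Clone, PartialEq, Debug)]
enum Caveat {
    MaxUses(u32),
    RegionScope(String),
    // ... TimeBefore, TimeAfter, TaskScope, RateLimit, ResourceScope, Custom
}

#[derive(Clone, Debug)]
struct Capability {
    caveats: Vec<Caveat>,
}

impl Capability {
    // Delegation returns a NEW token with the extra caveat appended;
    // there is deliberately no method that drops a caveat.
    fn attenuate(&self, caveat: Caveat) -> Capability {
        let mut caveats = self.caveats.clone();
        caveats.push(caveat);
        Capability { caveats }
    }
}
```

Because `attenuate` only ever grows the caveat list, any chain of delegations is monotonically more restricted, which is what makes decentralized delegation safe without a central validator.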
Anytime-valid invariant monitoring via Ville's inequality. Peek at any time, reject the instant evidence is strong enough — no p-hacking penalty.
Traditional statistical tests require a fixed sample size: you commit to N observations upfront, then compute a p-value. Peeking at intermediate results invalidates the guarantee. E-processes solve this using a betting martingale: E_t = E_(t-1) × (1 + λ × (X_t - p₀)).
Via Ville's inequality, the probability that the E-value ever exceeds 1/α while the null hypothesis holds is at most α. Rejecting the moment it crosses that threshold therefore controls the false-alarm rate at level α, no matter when you check. The Lab runtime monitors three critical invariants (task leak, obligation leak, quiescence) this way, with bet size λ = 0.5 and null rate p₀ = 0.001.
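The betting update is easy to trace numerically. Using the stated λ = 0.5 and p₀ = 0.001, and assuming α = 0.05 for illustration (so the rejection threshold is 1/α = 20), this sketch counts how many consecutive violations (X_t = 1) are needed before the monitor fires:

```rust
// Worked example of the E-process update E_t = E_{t-1} * (1 + λ(X_t - p₀))
// on a worst-case stream where every observation is a violation (x_t = 1).
// λ and p₀ match the text; α = 0.05 is an assumed example value.
fn steps_to_reject(lambda: f64, p0: f64, alpha: f64) -> u32 {
    let threshold = 1.0 / alpha;
    let mut e = 1.0; // E-processes start at 1
    let mut steps = 0;
    while e < threshold {
        e *= 1.0 + lambda * (1.0 - p0); // each violation multiplies E by ~1.4995
        steps += 1;
    }
    steps
}
```

Each violation multiplies the E-value by 1 + 0.5 × 0.999 ≈ 1.4995, so eight consecutive violations push it past 20 (1.4995⁸ ≈ 25.6) and the monitor rejects; a true violation rate near p₀ keeps the E-value drifting downward instead.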
16 operation kinds, CALM-optimized coordination barriers, and automatic LIFO rollback. Distributed operations that unwind cleanly on failure.
Multi-step operations (create account → send email → charge card) are modeled as Sagas. Each step has a forward action and a compensating action. If any step fails or cancellation arrives, completed steps are unwound in LIFO order — the last completed step compensates first.
What makes this unique is CALM analysis. The Saga engine classifies each of its 16 operation kinds as monotone (order-independent, like Reserve and Send) or non-monotone (barrier-required, like Commit and Release). Consecutive monotone operations are batched without any coordination barriers — synchronization is inserted only where the math demands it.
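The LIFO rollback from the account example above can be sketched as a loop over steps with a compensation stack; the engine shown here is illustrative, not the Saga engine's real API:

```rust
// Sketch of saga rollback: each completed step pushes a compensating
// action; on failure, compensations run in LIFO order.
fn run_saga(fail_after: usize) -> Vec<String> {
    let steps = ["create account", "send email", "charge card"];
    let mut log = Vec::new();
    let mut compensations: Vec<String> = Vec::new();

    for (i, step) in steps.iter().enumerate() {
        if i == fail_after {
            // Failure (or cancellation): unwind completed steps in
            // reverse order — the last completed step compensates first.
            while let Some(c) = compensations.pop() {
                log.push(c);
            }
            return log;
        }
        log.push(format!("do: {step}"));
        compensations.push(format!("undo: {step}"));
    }
    log
}
```

If the third step fails, the trace reads "do: create account, do: send email, undo: send email, undo: create account": exactly the LIFO unwind the text describes, leaving no half-completed state behind.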
From formal proofs to published crate.
Python Foundation (1,700+ stars)
Built original MCP Agent Mail in Python with SQLite storage
Designed agent identity, messaging, and file reservation primitives
Achieved 1,700+ GitHub stars and broad community adoption
Identified scalability bottlenecks: Git lock contention, SQLite pool exhaustion
Rust Ground-Up Rewrite
12-crate modular workspace architecture with zero unsafe code
Built on Tokio for high-performance async concurrency
SQLite WAL mode with connection pooling and PRAGMA tuning
Git-backed archive with commit coalescing (9x reduction)
MCP Surface & Search V3
Implemented 34 MCP tools across 9 clusters via fastmcp_rust
Added 20+ MCP resources for fast agent-discoverable lookups
Built hybrid search with two-tier fusion and reranking on frankensearch
Cross-project coordination via product bus and contact handshakes
Operations Console & CLI
15-screen TUI built on frankentui with 5 themes
Robot mode: 16 non-interactive subcommands for agent consumption
Web UI for human oversight with Overseer compose form
Pre-commit guard integration for reservation enforcement
Stress Testing & Production Hardening
10-scenario stress gauntlet: 30-agent pipelines, pool exhaustion, thundering herd
Sustained ~49 RPS with 1,494 ops in 30s mixed workloads
Commit coalescer achieving 9.1x write reduction
Cross-platform installers for Linux, macOS, and Windows