Explore Asupersync's key concepts through interactive visualizations.
Click a region to cancel it and watch the cancel cascade propagate through its children.
The visualization displays an Asupersync region tree, the foundational data structure behind structured concurrency. Every task in the runtime lives inside a region, and regions nest to form a strict hierarchy. When a parent region closes or is cancelled, all of its children receive a cancel signal automatically.
Click any region node to trigger cancellation. The cancel protocol propagates downward through the tree: each child enters the drain phase, runs its finalizers, and then reports completion upward. This structured teardown eliminates orphaned tasks, a class of bugs that plagues runtimes like Tokio where tokio::spawn creates globally-scoped futures with no parent linkage.
The practical consequence is composability. Because region lifetimes are lexically scoped, you can reason about task ownership the same way you reason about variable ownership in Rust. If a region is not visible in your code, no task inside it can outlive your scope. See the architecture deep-dive for the formal model.
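The cascade described above can be sketched in a few lines. This is a minimal Python illustration; the `Region` class and its `cancel` method are invented for this sketch and are not Asupersync's actual API:

```python
# Illustrative sketch: a region tree where cancelling a node propagates
# the cancel signal depth-first to every descendant, and nowhere else.
class Region:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.cancelled = False

    def cancel(self, log):
        """Cancel this region, then cascade to all children."""
        self.cancelled = True
        log.append(self.name)
        for child in self.children:
            child.cancel(log)

# A small hierarchy: root -> {a -> {a1, a2}, b}
tree = Region("root", [Region("a", [Region("a1"), Region("a2")]), Region("b")])
log = []
tree.children[0].cancel(log)   # cancel region "a" only
# "a" and its subtree receive the signal; "root" and "b" are untouched
```

Note how the cancel never travels upward or sideways: siblings and ancestors of the cancelled region are unaffected, mirroring the lexical scoping described above.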
Step through the three-phase cancel protocol and compare it with Tokio's silent drop.
This visualization walks through Asupersync's three-phase cancel protocol, the mechanism that replaces Tokio's implicit Drop-based cancellation. In Tokio, dropping a JoinHandle silently abandons the future mid-execution with no cleanup guarantee. Asupersync's protocol ensures that every cancelled task follows a structured shutdown sequence: Request, Drain, Finalize.
During the Request phase, the runtime sets a flag on the task's Cx token. The task can observe this flag at any checkpoint() call and begin graceful shutdown. The Drain phase gives the task a bounded budget to flush buffers, close connections, and release locks. Finally, the Finalize phase runs registered finalizers in LIFO order, guaranteeing that resources acquired last are released first.
Step through each phase in the visualization to see the state transitions. Then compare the Asupersync side with Tokio's behavior: notice how Tokio's drop produces no observable cleanup steps, while Asupersync's protocol generates a deterministic, auditable shutdown trace.
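The three phases can be sketched as a small state machine. The state names and the LIFO finalizer order follow the text; everything else (the `Task` class, the trace format) is invented for illustration:

```python
# Hedged sketch of the Request -> Drain -> Finalize sequence.
class Task:
    def __init__(self, finalizers):
        self.state = "Running"
        self.cancel_requested = False
        self.finalizers = finalizers      # registered in acquisition order
        self.trace = []

    def checkpoint(self):
        # Request phase: the task observes the cancel flag at a checkpoint.
        if self.cancel_requested and self.state == "Running":
            self.state = "Draining"
            self.trace.append("Request")

    def drain(self, budget):
        # Drain phase: a bounded budget to flush, close, and release.
        self.trace.append(f"Drain(budget={budget})")
        self.state = "Finalizing"

    def finalize(self):
        # Finalize phase: run finalizers in LIFO order.
        for f in reversed(self.finalizers):
            self.trace.append(f"Finalize:{f}")
        self.state = "Done"

task = Task(finalizers=["close_file", "release_lock"])
task.cancel_requested = True
task.checkpoint()
task.drain(budget=100)
task.finalize()
```

The resulting `trace` is exactly the deterministic, auditable shutdown record the comparison highlights; a dropped Tokio future produces no analogous artifact.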
Reserve side-effects as reversible intentions; commit them permanently only once they are safe from cancellation.
Two-phase effects solve a fundamental problem in cancellable systems: what happens to side-effects that are partially committed when cancellation arrives? In a traditional runtime, a function might send an email, then get cancelled before recording that it did so, leaving the system in an inconsistent state.
Asupersync splits every side-effect into two steps. The Reserve step stages the effect as a reversible intention, producing a handle the task can hold. The Commit step, which only succeeds if the task is not in a cancelled state, makes the effect permanent. If cancellation arrives between Reserve and Commit, the runtime automatically rolls back all reserved-but-uncommitted effects.
The visualization shows effects moving through this lifecycle. Try triggering a cancel during the window between reservation and commit to see the rollback in action. This pattern integrates with the obligation system: each reserved effect produces a permit that must be either committed or explicitly dropped, making leaked effects a compile-time error.
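The reserve/commit/rollback lifecycle can be sketched as follows. The `EffectLedger` class and its method names are invented for this illustration; only the two-phase behavior itself comes from the text:

```python
# Sketch of two-phase effects with automatic rollback on cancellation.
class EffectLedger:
    def __init__(self):
        self.reserved, self.committed, self.rolled_back = [], [], []
        self.cancelled = False

    def reserve(self, effect):
        self.reserved.append(effect)          # stage a reversible intention
        return effect

    def commit(self, effect):
        if self.cancelled:
            raise RuntimeError("cannot commit in a cancelled state")
        self.reserved.remove(effect)
        self.committed.append(effect)         # now permanent

    def cancel(self):
        self.cancelled = True
        while self.reserved:                  # roll back uncommitted effects
            self.rolled_back.append(self.reserved.pop())

ledger = EffectLedger()
e1 = ledger.reserve("send_email")
ledger.commit(e1)                             # committed before the cancel: kept
e2 = ledger.reserve("charge_card")
ledger.cancel()                               # arrives between reserve and commit
```

After the cancel, `send_email` stays committed while `charge_card` is rolled back, never leaving the half-done state described in the first paragraph.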
Toggle capabilities on the Cx token and see how the runtime intercepts unauthorized operations.
Capability security is Asupersync's enforcement mechanism for the principle of least authority. Every task receives a Cx token at spawn time, and that token carries a specific capability tier that determines what operations the task is permitted to perform. A task with FiberCap can only compute. A task with IoCap can additionally access the network and filesystem.
Toggle individual capabilities in the visualization to see the runtime's response. When a task attempts an operation outside its tier, the runtime does not panic or silently fail. Instead, the capability gate intercepts the call and returns a typed error, allowing the task to handle the denial gracefully.
This design eliminates an entire category of security vulnerabilities. Third-party library code that you spawn with FiberCap cannot exfiltrate data over the network, because the I/O capability simply does not exist in its token. The restriction is structural, not policy-based, so it cannot be bypassed by malicious code at runtime.
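The gate's behavior can be sketched like this. The tier names `FiberCap` and `IoCap` come from the text; the `Cx` class, the operation names, and the error type are invented for the sketch:

```python
# Sketch: a Cx token carries a capability tier; operations outside the
# tier return a typed error rather than panicking or silently failing.
class CapabilityError(Exception):
    pass

class Cx:
    TIERS = {"FiberCap": {"compute"},
             "IoCap": {"compute", "net", "fs"}}

    def __init__(self, tier):
        self.caps = Cx.TIERS[tier]

    def perform(self, op):
        if op not in self.caps:
            return CapabilityError(f"{op} denied")   # typed error, handled gracefully
        return f"{op}: ok"

fiber = Cx("FiberCap")   # compute-only: no network, no filesystem
io = Cx("IoCap")         # compute plus network and filesystem
```

A `FiberCap` task asking for `net` gets a `CapabilityError` value back; the capability simply is not present in its token, so there is nothing to bypass.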
Capabilities behave like Macaroons. A child can add further restrictions, but can never strip the parent's constraints away.
This visualization demonstrates Asupersync's Macaroon-inspired capability delegation model. When a parent task delegates a capability to a child, it can attach additional restrictions (caveats) that narrow the child's permissions. The critical invariant: a child can never widen a capability it received. Delegation is a one-way ratchet toward less privilege.
The mechanism works without a central authority. Each caveat is cryptographically chained to the previous one, so any recipient can verify the full chain of restrictions locally. This means you can safely pass capabilities across trust boundaries, because the math guarantees that tampering with the caveat chain invalidates the entire token.
Try adding caveats in the visualization and observe how they stack. A parent that grants IoCap with a “read-only filesystem” caveat produces a child token that can never write to disk, regardless of what further delegations occur downstream. This composable restriction model is what makes Asupersync's capability security practical at scale.
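The one-way ratchet can be sketched with HMAC chaining, the same construction Macaroons use: each caveat re-keys the signature, so a holder can append caveats but can never remove one without invalidating the token. This is a toy sketch, not Asupersync's wire format:

```python
import hmac, hashlib

# Macaroon-style attenuation: sig' = HMAC(sig, caveat).
def mint(root_key, identifier):
    sig = hmac.new(root_key, identifier.encode(), hashlib.sha256).digest()
    return {"caveats": [], "sig": sig}

def add_caveat(token, caveat):
    sig = hmac.new(token["sig"], caveat.encode(), hashlib.sha256).digest()
    return {"caveats": token["caveats"] + [caveat], "sig": sig}

def verify(root_key, identifier, token):
    # Any verifier holding the root key can recompute the full chain locally.
    sig = hmac.new(root_key, identifier.encode(), hashlib.sha256).digest()
    for caveat in token["caveats"]:
        sig = hmac.new(sig, caveat.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(sig, token["sig"])

key = b"root-secret"
parent = mint(key, "IoCap")
child = add_caveat(parent, "fs = read-only")
# Tampering: dropping the caveat while keeping the child's signature fails,
# because the child's signature was derived *through* the caveat.
forged = {"caveats": [], "sig": child["sig"]}
```

Because the child's signature is derived through the caveat, there is no way to reconstruct the less-restricted parent signature from it; widening is cryptographically impossible.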
Cancel budgets compose downward across nested regions via a product semiring.
Budget algebra governs how cancellation time limits propagate through the region tree. Each region declares a budget: the maximum time (or ticks) its tasks are allowed to spend in the drain phase during cancellation. When regions nest, their budgets compose using a product semiring, which takes the minimum of parent and child budgets.
This algebraic composition ensures a global invariant: no nested region can drain longer than its parent permits. If a parent region has a 100ms drain budget and a child declares 500ms, the effective child budget is clamped to 100ms. The composition is associative and commutative, so the nesting depth does not affect the computation's correctness.
Adjust the budget values in the visualization to see how they propagate. Notice that the effective budget at any leaf node is always the minimum along its path from the root. This property is what makes cancel fuel a termination proof: the Lyapunov potential is bounded by the root budget, guaranteeing that the entire cancel cascade completes in finite time.
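The composition rule is small enough to state directly. In the min semiring sketched here, "product" is `min`, which gives exactly the clamping behavior described above (function names are illustrative):

```python
# Budget composition: the effective drain budget at any node is the
# minimum along the path from the root.
def compose(parent_budget, child_budget):
    return min(parent_budget, child_budget)   # child can narrow, never widen

def effective_budget(path):
    eff = path[0]
    for b in path[1:]:
        eff = compose(eff, b)
    return eff

# The example from the text: parent 100ms, child declares 500ms.
clamped = effective_budget([100, 500])   # clamped to the parent's 100ms
```

Since `min` is associative and commutative, nesting depth and declaration order cannot change the result, which is the invariant the paragraph above relies on.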
Watch the Fiedler value plummet as a cyclic wait-graph forms, triggering proactive intervention.
Spectral deadlock detection monitors the runtime's wait-graph in real time by computing the second-smallest eigenvalue of its Laplacian matrix, known as the Fiedler value. The Fiedler value is strictly positive exactly when the graph is connected. As a deadlock cycle forms, the Fiedler value drops toward zero, giving the runtime advance warning before tasks actually block.
The visualization builds a wait-graph incrementally. Watch the Fiedler value in the readout as edges are added. When it crosses the critical threshold, the runtime triggers intervention: it identifies the task in the cycle with the lowest priority and cancels it via the cancel protocol, breaking the deadlock without operator intervention.
Traditional deadlock detectors are reactive: they find cycles after tasks are already stuck. Spectral analysis is predictive. The Fiedler value functions as a continuous health metric, analogous to monitoring CPU temperature rather than waiting for a thermal shutdown. This approach catches near-deadlocks, situations where tasks are one edge away from a cycle, before they become production incidents.
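Computing the Fiedler value needs no heavy machinery for small graphs. The pure-Python sketch below builds the Laplacian and extracts the second-smallest eigenvalue by power iteration on a shifted matrix, projecting out the trivial all-ones eigenvector each step; a production runtime would presumably use an incremental sparse eigensolver instead:

```python
# Fiedler value (second-smallest Laplacian eigenvalue) of a small graph.
def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    return L

def fiedler_value(n, edges, iters=2000):
    L = laplacian(n, edges)
    c = 2.0 * max(L[i][i] for i in range(n)) + 1.0   # shift: c >= lambda_max
    v = [float(i + 1) for i in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]                    # project out ones-vector
        w = [c * v[i] - sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Lv = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Lv[i] for i in range(n))       # Rayleigh quotient = lambda_2

connected = fiedler_value(3, [(0, 1), (1, 2)])       # path graph: healthy
split = fiedler_value(4, [(0, 1), (2, 3)])           # two components: value 0
```

The disconnected graph's Fiedler value is zero, while the connected path's is comfortably positive, which is the gap the runtime's threshold monitoring exploits.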
Rateless erasure coding over lossy channels perfectly reconstructs data without retransmission requests.
Fountain codes (specifically RaptorQ) allow Asupersync to transfer data reliably across lossy channels without acknowledgments or retransmission requests. The sender encodes the original data into a potentially infinite stream of coded symbols. The receiver can reconstruct the original data from any sufficient subset of received symbols, regardless of which specific symbols were lost.
The visualization shows symbols flowing from sender to receiver with a configurable packet loss rate. Unlike TCP, which stalls on every dropped packet to request retransmission, RaptorQ continues transmitting new coded symbols. The receiver accumulates symbols until it has slightly more than the original data size (typically 0.2% overhead), then reconstructs the complete message in one decoding pass using inactivation decoding.
This property makes fountain codes ideal for cancel-heavy workloads where connections may be interrupted at any moment. If a cancel signal arrives mid-transfer, the receiver either has enough symbols to decode (and the data is fully recovered) or it does not (and the partial transfer is cleanly discarded). There is no half-received state to reconcile.
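The "any sufficient subset" property is easy to demonstrate with a toy LT-style code. This sketch is deliberately simplified (fixed XOR subsets, a peeling decoder) and is not RaptorQ, which uses a more sophisticated precode and inactivation decoding:

```python
# Toy fountain code: each coded symbol is the XOR of a subset of source
# blocks; the receiver decodes by peeling degree-1 symbols.
def encode(blocks, subsets):
    out = []
    for subset in subsets:
        acc = 0
        for i in subset:
            acc ^= blocks[i]
        out.append((set(subset), acc))
    return out

def decode(k, received):
    recovered = {}
    symbols = [(set(s), v) for s, v in received]
    progress = True
    while len(recovered) < k and progress:
        progress = False
        for subset, value in symbols:
            missing = subset - set(recovered)
            if len(missing) == 1:                  # degree-1 symbol: peel it
                (idx,) = missing
                for j in subset & set(recovered):
                    value ^= recovered[j]          # subtract known blocks
                recovered[idx] = value
                progress = True
    return [recovered[i] for i in range(k)]

blocks = [0x12, 0x34, 0x56, 0x78]
stream = encode(blocks, [(0,), (0, 1), (1, 2), (2, 3), (3,), (0, 3)])
received = stream[1:5]   # the first and last symbols were "lost" in transit
```

Even though two specific symbols were dropped, the surviving four suffice to recover every block; no retransmission request is ever issued.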
GenServers that mathematically cannot forget to reply to clients.
Spork is Asupersync's actor framework, inspired by Erlang/OTP but with a critical fix: the linear obligation system guarantees that every GenServer call receives a reply. In Erlang, a gen_server handler that never calls gen_server:reply/2 leaves the caller blocked until its call timeout expires (or indefinitely, with an infinite timeout). In Spork, the reply obligation is a permit that must be consumed before the handler returns, enforced at compile time.
The visualization shows message flow through a Spork GenServer. Each incoming call produces a reply permit, and the mailbox tracks which permits are outstanding. If a handler attempts to return without consuming the reply permit, the ObligationLeak oracle catches it during Lab testing.
Spork also integrates with the region tree. Each GenServer lives inside a region, and its supervisor can restart it using the cancel protocol rather than Erlang's abrupt kill-and-restart. This means in-flight messages are drained and finalizers run before the new instance starts, eliminating the lost-message window that plagues OTP restarts.
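The reply-obligation idea can be sketched with a runtime check. Python cannot enforce linearity at compile time the way the text describes, so this sketch is a dynamic approximation; the class and function names are invented:

```python
# Sketch: every call produces a reply permit; a handler that returns
# without consuming it trips the leak check.
class ObligationLeak(Exception):
    pass

class ReplyPermit:
    def __init__(self):
        self.consumed = False
        self.reply = None

    def send(self, value):
        self.consumed = True
        self.reply = value

def handle_call(handler, request):
    permit = ReplyPermit()
    handler(request, permit)
    if not permit.consumed:            # the oracle: no handler may return
        raise ObligationLeak("handler returned without replying")
    return permit.reply

def leaks(handler, request):
    try:
        handle_call(handler, request)
        return False
    except ObligationLeak:
        return True

ok = handle_call(lambda req, p: p.send(req + 1), 41)
```

A well-behaved handler replies and the caller gets its value; a forgetful one is caught at the boundary instead of leaving the caller hanging.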
See how the runtime pushes synchronization barriers out of the hot path for monotone operations.
The CALM theorem (Consistency As Logical Monotonicity) states that monotone operations, those whose outputs only grow as inputs grow, can be computed without coordination barriers. Asupersync's runtime performs CALM analysis on each operation to determine whether it requires synchronization. Operations classified as monotone (like appending to a log or adding to a set) execute without barriers, while non-monotone operations (like reading a counter's final value) insert barriers only where the math demands them.
The visualization shows a stream of operations being classified and scheduled. Consecutive monotone operations are batched and executed concurrently. When a non-monotone operation appears, the runtime inserts a coordination barrier, waits for all in-flight monotone operations to complete, then proceeds.
This optimization is not a heuristic. The CALM theorem provides a mathematical guarantee that removing these barriers does not affect correctness. The result is higher throughput without sacrificing consistency, because synchronization cost is paid only where it is provably necessary.
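The batching behavior can be sketched as a tiny classifier-driven planner. The operation names and the monotone set here are illustrative stand-ins, not Asupersync's real CALM analysis:

```python
# Sketch: batch consecutive monotone operations; insert a coordination
# barrier before each non-monotone one.
MONOTONE = {"append", "set_add"}

def schedule(ops):
    plan, batch = [], []
    for op in ops:
        if op in MONOTONE:
            batch.append(op)               # runs concurrently, no barrier
        else:
            if batch:
                plan.append(("batch", tuple(batch)))
                batch = []
            plan.append(("barrier", op))   # wait for in-flight monotone ops
    if batch:
        plan.append(("batch", tuple(batch)))
    return plan

plan = schedule(["append", "set_add", "read_count", "append"])
```

The two leading monotone writes run as one concurrent batch; only the non-monotone `read_count` pays for a barrier, so synchronization cost lands exactly where the theorem demands it.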
A mathematical energy function that strictly decreases over time, proving the system will reach quiescence.
A Lyapunov potential is a mathematical energy function assigned to the runtime's state. The key property: every scheduler step either decreases the potential or leaves it unchanged, every productive step strictly decreases it, and the potential is bounded below by zero. This monotone descent proves that the system must eventually reach quiescence, a state where no more work remains.
The visualization plots the potential value over time as the scheduler processes tasks. Each completed task, each drained region, and each consumed permit reduces the potential. The curve descends toward zero, and the formal proof guarantees it gets there in finitely many steps.
This is not just a theoretical curiosity. The Lyapunov potential serves as a progress metric in production. If the potential stops decreasing, something is wrong: a task may be livelock-spinning, a region may be failing to drain, or a permit may have leaked. The Lab runtime checks this invariant via a martingale progress certificate, catching liveness violations during testing rather than in production.
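As a toy model, take the potential to be the count of remaining tasks plus outstanding permits (a simplification of whatever the real potential measures). Every step retires one unit, so the history is strictly decreasing and bottoms out at zero:

```python
# Sketch: a strictly decreasing, non-negative potential reaching quiescence.
def run_to_quiescence(tasks, permits):
    potential = tasks + permits
    history = [potential]
    while potential > 0:
        if permits > 0:
            permits -= 1          # consume an outstanding permit
        else:
            tasks -= 1            # complete a task
        potential = tasks + permits
        history.append(potential)
    return history

history = run_to_quiescence(tasks=3, permits=2)
```

A plateau in this history is exactly the "potential stopped decreasing" alarm condition the paragraph above describes.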
A multi-armed bandit algorithm continuously tunes the scheduler's cancel-streak limits via real-time regret minimization.
The EXP3 scheduler treats scheduling policy as an online learning problem. Each scheduling decision (how many consecutive cancel tasks to process before switching to ready tasks, for example) is an “arm” in a multi-armed bandit. The EXP3 algorithm assigns weights to each arm based on observed rewards (throughput, latency, cancel responsiveness) and selects arms with probability proportional to their weights.
The visualization shows the arm weights updating in real time as the scheduler processes tasks. Arms that produce good outcomes gain weight; arms that produce poor outcomes lose weight. The regret bound guarantees that the scheduler's cumulative performance converges to within a logarithmic factor of the best fixed policy in hindsight, even if the workload changes adversarially.
Unlike static scheduling policies that must be tuned by operators, the EXP3 approach adapts automatically. A workload that suddenly becomes cancel-heavy will shift weight toward longer cancel streaks within milliseconds. The Hedge algorithm variant provides even tighter bounds when the number of arms is small, making this practical for the three-lane scheduler's finite set of configuration knobs.
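The core update rule is compact. This is the standard EXP3 algorithm in miniature; the two "arms" stand in for scheduler knobs such as short vs. long cancel streaks, and the reward function is synthetic:

```python
import math, random

# Minimal EXP3: exponential weights with importance-weighted reward estimates.
def exp3(reward_fn, n_arms, gamma, rounds, seed=0):
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    for _ in range(rounds):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        r, acc, arm = rng.random(), 0.0, n_arms - 1
        for i, p in enumerate(probs):          # sample an arm from probs
            acc += p
            if r < acc:
                arm = i
                break
        reward = reward_fn(arm)                # observed reward in [0, 1]
        estimate = reward / probs[arm]         # importance weighting: only the
        weights[arm] *= math.exp(gamma * estimate / n_arms)  # pulled arm updates
    total = sum(weights)
    return [(1 - gamma) * w / total + gamma / n_arms for w in weights]

# Arm 0 always pays off, arm 1 never does: mass shifts toward arm 0.
probs = exp3(lambda arm: 1.0 if arm == 0 else 0.0, n_arms=2, gamma=0.1, rounds=300)
```

The `gamma` floor keeps every arm explored with some minimum probability, which is what lets the scheduler notice when a previously poor policy becomes the right one.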
The Lab Runtime drops cancel bombs at every single await point to mathematically prove your application survives interruption.
Systematic cancellation testing replaces ad-hoc timeout tests with exhaustive verification. The Lab runtime instruments your code to intercept every checkpoint() call, then replays the test multiple times, injecting a cancellation at a different await point on each replay. If your code has N await points, the Lab runs N+1 executions: one clean run and one cancellation at each point.
The visualization shows these injection points lighting up as the Lab works through them. Each injected cancellation triggers the full cancel protocol, and the suite of test oracles (including the TaskLeak, ObligationLeak, and CancelProtocol oracles) verifies that the cancellation was handled correctly.
Combined with DPOR search pruning and seed-deterministic scheduling, this approach provides a mathematical guarantee: if no oracle fires across all injection points and all explored interleavings, your code is cancel-correct. This is not a confidence interval or a probabilistic claim. It is an exhaustive proof over the tested input space.
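The N+1-run scheme can be sketched directly. The harness below is a simplified stand-in for the Lab: it runs the workload once cleanly, then once per checkpoint with a cancellation injected there, recording a cleanup marker on each cancelled run (all names are invented for the sketch):

```python
# Sketch of systematic cancel injection at every checkpoint.
class Cancelled(Exception):
    pass

class Cx:
    def __init__(self, cancel_at):
        self.cancel_at = cancel_at   # which checkpoint fires, or None
        self.count = 0

    def checkpoint(self):
        self.count += 1
        if self.count == self.cancel_at:
            raise Cancelled()

def lab_run(workload, n_checkpoints):
    results = []
    for cancel_at in [None] + list(range(1, n_checkpoints + 1)):
        cx, log = Cx(cancel_at), []
        try:
            workload(cx, log)
        except Cancelled:
            log.append("finalized")  # stands in for the full cancel protocol
        results.append(log)
    return results

def workload(cx, log):
    log.append("open"); cx.checkpoint()
    log.append("work"); cx.checkpoint()
    log.append("close")

runs = lab_run(workload, n_checkpoints=2)
```

Two checkpoints yield exactly three executions: one clean run and one per injection point, each ending in the cleanup the oracles would then inspect.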
Anytime-valid statistical monitors that continuously evaluate runtime invariants and instantly halt tests upon violation.
E-process test oracles are statistical monitors that run continuously alongside your tests, checking runtime invariants without requiring a fixed sample size. Traditional hypothesis tests commit to N observations upfront; peeking at intermediate results invalidates the statistical guarantee. E-processes use a betting supermartingale that remains valid at every observation, allowing the oracle to halt the test the instant a violation is detected.
The visualization shows multiple oracles monitoring a running test. Each oracle tracks a specific invariant: the TaskLeak oracle watches for tasks escaping their region scope, the ObligationLeak oracle catches dropped permits, and the CancelProtocol oracle verifies that every cancellation follows the three-phase contract.
The anytime-valid property is critical for Lab testing, where the number of scheduling interleavings is not known in advance. The Lab explores interleavings until either an oracle fires or DPOR reports that all distinct interleavings have been covered. Because E-processes maintain their statistical validity at every stopping time, there is no risk of false positives from early termination.
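A minimal e-process looks like a betting wealth process. The sketch below tests a toy null hypothesis ("violations occur at rate at most one half"); under that null the wealth is a supermartingale, so halting the instant it crosses 1/alpha is valid at any stopping time. The parameterization is illustrative, not Asupersync's actual oracles:

```python
# Sketch of an anytime-valid e-process as a betting wealth process.
def e_process(observations, lam=0.5, alpha=0.05):
    wealth, history = 1.0, []
    for x in observations:                 # x = 1 means a violation observed
        wealth *= 1.0 + lam * (x - 0.5)    # fair-or-losing bet under the null
        history.append(wealth)
        if wealth >= 1.0 / alpha:
            return ("reject", history)     # halt instantly on strong evidence
    return ("continue", history)

verdict, _ = e_process([1] * 14)           # a streak of violations
calm, _ = e_process([0, 1] * 10)           # violations at the null rate
```

Unlike a fixed-N test, there is no penalty for watching the wealth after every observation; that is precisely what lets the Lab stop a run the moment an invariant breaks.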
Canonicalizing execution traces to prune DPOR search spaces without missing any bugs.
A Foata fingerprint is a canonical hash of a concurrent execution trace. Two traces that differ only in the ordering of independent (non-conflicting) operations produce the same fingerprint. This is because the fingerprint is computed over the trace's Mazurkiewicz trace equivalence class: the set of all orderings that differ only in how independent events are interleaved.
The visualization shows multiple execution traces being reduced to their Foata normal form. In this normal form, independent events are grouped into “layers” where all events within a layer can execute in any order. The fingerprint is then a hash of these layers, ignoring intra-layer ordering.
This technique is essential for DPOR efficiency. Without fingerprinting, the search algorithm would explore many traces that are equivalent (same conflicts, same outcomes, different ordering of independent steps). By comparing Foata fingerprints, the Lab runtime detects and skips equivalent traces, reducing the search space by orders of magnitude while guaranteeing that no distinct behavior is missed.
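The layering can be computed greedily: each event lands one layer after the deepest layer containing an event it conflicts with. This sketch represents events as (task, variable) pairs and treats two events as conflicting when they share a task or a variable; both choices are illustrative:

```python
# Sketch: Foata normal form as frozenset layers, hashed order-insensitively.
def foata_layers(trace, conflicts):
    layers = []
    for event in trace:
        placed = False
        for i in range(len(layers) - 1, -1, -1):   # deepest dependent layer
            if any(conflicts(event, e) for e in layers[i]):
                if i + 1 < len(layers):
                    layers[i + 1].append(event)
                else:
                    layers.append([event])
                placed = True
                break
        if not placed:                             # independent of everything
            if layers:
                layers[0].append(event)
            else:
                layers = [[event]]
    return [frozenset(layer) for layer in layers]

def fingerprint(trace, conflicts):
    return tuple(foata_layers(trace, conflicts))   # intra-layer order ignored

conflict = lambda a, b: a[0] == b[0] or a[1] == b[1]
indep_a = [("T1", "x"), ("T2", "y")]               # independent events,
indep_b = [("T2", "y"), ("T1", "x")]               # swapped order
dep_a = [("T1", "x"), ("T2", "x")]                 # conflicting events,
dep_b = [("T2", "x"), ("T1", "x")]                 # swapped order
```

Swapping two independent events leaves the fingerprint unchanged; swapping two conflicting events changes it, which is exactly the distinction DPOR needs.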
Drawing distribution-free statistical boundaries to flag anomalous tasks with guaranteed coverage.
Conformal calibration constructs prediction intervals around expected task behavior without assuming any particular probability distribution. Given a calibration set of observed task durations (or resource usage, or any scalar metric), conformal prediction outputs an interval such that a new observation falls inside the interval with a user-specified probability, say 95%. If a task's metric falls outside this interval, it is flagged as anomalous.
The visualization plots task metrics with the conformal prediction band overlaid. Points inside the band are normal; points outside are flagged. The width of the band adapts to the observed data, widening when variance is high and narrowing when behavior is consistent.
The distribution-free guarantee is what distinguishes conformal calibration from parametric approaches. You do not need to assume Gaussian, exponential, or any other distributional form. The coverage guarantee holds for any data distribution, making it robust against the heavy-tailed latency distributions common in async systems. The Lab runtime uses conformal calibration to set adaptive thresholds for test oracles, replacing hand-tuned timeout constants with statistically grounded bounds.
A strictly decreasing budget that mathematically guarantees a cancellation cascade will not spin infinitely.
Cancel fuel is a non-negative integer assigned to each cancel propagation event. Every step of the cancellation (entering a child region, notifying a task, running a finalizer) costs one unit of fuel. When fuel reaches zero, propagation stops. This simple mechanism provides a termination proof: the cancel cascade is guaranteed to complete in at most N steps, where N is the initial fuel value.
The visualization shows fuel depleting as cancellation propagates through nested regions. Each region consumes fuel proportional to the number of tasks it contains. The initial fuel value is derived from the budget algebra, which composes parent and child budgets via the product semiring.
The formal proof that cancel fuel guarantees termination is a supermartingale argument: the fuel value is a non-negative quantity that strictly decreases with each step. Because a non-negative integer sequence that strictly decreases must reach zero in finite steps, the cascade terminates. This proof is mechanically verified in Lean 4 as part of Asupersync's small-step semantics.
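The bounded-steps property is visible in a few lines. This sketch models each region as a task count and charges one unit of fuel per notification; the function shape is invented, but the invariant (at most `fuel` steps, ever) is the one the proof establishes:

```python
# Sketch: fuel-bounded cancel propagation across nested regions.
def cancel_cascade(regions, fuel):
    steps = 0
    for region_tasks in regions:            # each nested region in turn
        for _ in range(region_tasks):       # notifying each task costs 1 unit
            if fuel == 0:
                return steps, fuel          # budget exhausted: stop propagating
            fuel -= 1
            steps += 1
    return steps, fuel

steps, remaining = cancel_cascade(regions=[3, 2, 4], fuel=6)
```

With nine tasks but only six units of fuel, the cascade halts after exactly six steps; no input can make it spin past the initial fuel value.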
Resolving concurrent races mathematically. Simultaneous events are ordered deterministically, guaranteeing perfect replay.
Trace replay stability ensures that a recorded execution trace can be replayed identically, even when the original execution contained races. The key challenge is tie-breaking: when two events occur “simultaneously” (within the same scheduler tick), the replay engine must resolve them in the same order every time. Asupersync uses a deterministic tie-breaking rule based on task IDs and region depth, producing a total order over concurrent events.
The visualization shows concurrent events arriving at the same tick and being sorted into a deterministic replay order. Regardless of the order in which the operating system delivers these events, the replay engine produces the same sequence. This determinism is seed-dependent: the same seed always produces the same tie-breaking decisions.
Stable replay is the foundation of Asupersync's debugging workflow. When a Lab test discovers a bug under a specific seed, you can replay that exact execution as many times as needed to diagnose the root cause. Without stable tie-breaking, replays would diverge from the original execution at the first race, making the recorded trace useless for debugging.
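The tie-breaking rule amounts to sorting by a total-order key. The sketch below uses (tick, region depth, task id); the exact key Asupersync uses may differ, but the point is that the key is total, so delivery order cannot leak into the replay:

```python
# Sketch: deterministic replay ordering for concurrent events.
def replay_order(events):
    # event = (tick, region_depth, task_id, payload)
    return sorted(events, key=lambda e: (e[0], e[1], e[2]))

# The same three events, delivered by the OS in two different orders:
delivered_one_way = [(7, 2, 14, "a"), (7, 1, 3, "b"), (6, 5, 9, "c")]
delivered_another = [(6, 5, 9, "c"), (7, 2, 14, "a"), (7, 1, 3, "b")]
```

Both delivery orders collapse to the identical replay sequence, so a trace recorded under one delivery order replays faithfully under any other.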
Automatic LIFO compensation of forward actions when long-running workflows fail or are cancelled.
Sagas model multi-step distributed workflows where each step has a forward action and a compensating (undo) action. If any step fails or if cancellation arrives mid-workflow, the runtime automatically compensates all completed steps in reverse (LIFO) order. This ensures that the system returns to a consistent state without manual intervention.
The visualization shows a saga executing forward steps (create account, send email, charge card). Trigger a failure or cancellation at any point to watch the compensation cascade: the last completed step compensates first, then the second-to-last, and so on. Each compensation runs through the cancel protocol, so compensating actions themselves are subject to the same cleanup guarantees.
Asupersync's saga engine applies CALM analysis to optimize execution. Consecutive monotone forward steps (those that can be reordered without affecting the outcome) execute concurrently without coordination barriers. Non-monotone steps trigger a barrier. This classification-driven optimization means the saga engine extracts maximum concurrency while maintaining correctness, without requiring the developer to annotate parallelism manually.
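The LIFO unwind is a compensation stack. The step names below mirror the example in the text; the `run_saga` function itself is an illustrative sketch, not the saga engine's API:

```python
# Sketch: forward steps push undo actions; failure unwinds them in LIFO order.
def run_saga(steps, fail_at=None):
    log, compensations = [], []
    for name, undo in steps:
        if name == fail_at:
            for comp in reversed(compensations):   # last completed, first undone
                log.append(comp)
            return log, "compensated"
        log.append(name)
        compensations.append(undo)
    return log, "committed"

steps = [("create_account", "delete_account"),
         ("send_email", "send_retraction"),
         ("charge_card", "refund_card")]
log, status = run_saga(steps, fail_at="charge_card")
```

A failure at `charge_card` retracts the email first and deletes the account last, restoring consistency without any step-specific recovery code.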
Machine-checked execution rules in Lean 4 that mathematically prove the soundness of the runtime.
Small-step semantics define the runtime's behavior as a set of transition rules, each specifying how one computation step transforms the system state. The rule SPAWN, for example, says: given an expression spawn(e) and state σ, one step produces a permit and registers the new task as Running in σ. By chaining rules, you can trace any execution from start to finish.
The visualization lets you step through these transitions one at a time. Each step highlights the active transition rule and shows how the system state changes. Rules like CANCEL-PROPAGATE, DRAIN-TICK, and EFFECT-RESERVE cover the cancel protocol, fuel consumption, and two-phase effects respectively.
These rules are not informal pseudocode. They are mechanically verified in Lean 4, meaning a theorem prover has checked that the rules are consistent, that safety invariants hold across all possible rule applications, and that the system cannot reach an undefined state. This level of rigor is what enables Asupersync's guarantee that cancel-correctness is a provable property, not a testing aspiration.
See how cancel, timed, and ready lanes prioritize work in the three-lane scheduler.
The three-lane scheduler partitions all runnable tasks into three priority lanes. The Cancel lane has the highest priority and processes tasks that are in the drain phase of the cancel protocol. The Timed lane handles tasks with pending timer expirations. The Ready lane processes all other runnable tasks at the lowest priority.
The visualization shows tasks flowing through the three lanes. Notice that cancel-phase tasks always execute before ready tasks, ensuring that the runtime clears cancellation backlogs before taking on new work. This priority ordering is critical: without it, a burst of new task spawns could starve the cancel lane, causing drain budgets to expire and forcing hard termination of tasks that could have shut down gracefully.
The EXP3 scheduler sits on top of this three-lane structure, tuning parameters like the maximum cancel streak length (how many consecutive cancel-lane tasks to process before giving the ready lane a turn). This layered design separates the correctness concern (cancel lane always has priority) from the performance concern (how to balance throughput and cancel responsiveness).
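The lane-selection rule can be sketched as a priority pick with a streak cap. The `max_cancel_streak` knob here is the tunable the text describes; the function and lane queues are invented for illustration:

```python
from collections import deque

# Sketch: Cancel > Timed > Ready, with a cap on consecutive cancel picks.
def pick_next(cancel, timed, ready, cancel_streak, max_cancel_streak):
    if cancel and cancel_streak < max_cancel_streak:
        return "cancel", cancel.popleft()
    if timed:
        return "timed", timed.popleft()
    if ready:
        return "ready", ready.popleft()
    if cancel:                      # streak cap hit but nothing else runnable
        return "cancel", cancel.popleft()
    return None, None

cancel, timed, ready = deque(["c1", "c2", "c3"]), deque(["t1"]), deque(["r1"])
order, streak = [], 0
while cancel or timed or ready:
    lane, task = pick_next(cancel, timed, ready, streak, max_cancel_streak=2)
    streak = streak + 1 if lane == "cancel" else 0
    order.append(task)
```

Cancel-lane work runs first, but the streak cap of two forces a timed-lane turn before the third cancel task, which is the starvation guard the layered design separates out as a tunable.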
Follow a Permit through its lifecycle: Reserve → Hold → Consume.
The obligation system uses linear types to enforce resource lifecycle invariants at compile time. When a task acquires a resource (opens a file, takes a lock, reserves a side-effect), it receives a permit. That permit must be explicitly consumed (by closing the file, releasing the lock, or committing the effect) before the task can complete. Dropping a permit without consuming it is a compile-time error.
The visualization traces a permit through its three lifecycle stages. Reserve creates the permit. Hold represents the window during which the task is using the resource. Consume releases the permit, satisfying the linear obligation. If the task is cancelled during the Hold phase, the drain phase gives it time to consume outstanding permits.
Leases are a time-bounded variant: a lease-permit automatically expires after a deadline, releasing the resource without explicit consumption. The ObligationLeak oracle in the Lab runtime catches any permit that reaches task completion without being consumed, turning resource leaks from production incidents into test failures.
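The lifecycle, the lease variant, and the leak check can be sketched together. Python can only approximate at runtime what the text says is enforced at compile time, so the oracle here is a dynamic stand-in and all names are invented:

```python
# Sketch: Reserve -> Hold -> Consume, plus the time-bounded lease variant.
class Permit:
    def __init__(self, resource, lease_deadline=None):
        self.resource = resource
        self.state = "Hold"                  # Reserve produced this permit
        self.lease_deadline = lease_deadline

    def consume(self):
        self.state = "Consumed"              # obligation satisfied

    def tick(self, now):
        # A lease-permit expires on its own once the deadline passes.
        if self.lease_deadline is not None and now >= self.lease_deadline:
            self.state = "Expired"

def oracle_check(permits):
    # Stand-in for the ObligationLeak oracle: anything still Held leaked.
    return [p.resource for p in permits if p.state == "Hold"]

file_permit = Permit("file")
lock_lease = Permit("lock", lease_deadline=10)
file_permit.consume()
lock_lease.tick(now=12)                      # lease expired, resource released
leaks = oracle_check([file_permit, lock_lease])
```

Both permits leave the Hold state, one explicitly and one via lease expiry, so the oracle reports no leaks; a permit abandoned in Hold would be flagged as a test failure.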
Change the seed to see different task execution orders. Same code, different interleavings.
The Lab runtime is a deterministic, single-threaded scheduler that replaces the production runtime during testing. It accepts a seed value and uses it to control every scheduling decision: task selection order, timer resolution, and tie-breaking. The same seed always produces the same execution, making concurrency bugs reproducible.
The visualization shows a set of tasks executing under different seeds. Change the seed and watch the execution order change. Tasks that ran first under one seed may run last under another. Despite this reordering, the test oracles verify that all invariants hold across every interleaving.
The Lab is not a mock or a simulator. It runs your actual application code, including I/O (via capability-controlled stubs), against the real obligation system and cancel protocol. The only difference from production is that scheduling is deterministic rather than OS-driven. Combined with systematic DPOR exploration, the Lab can cover all distinct interleavings for small test scenarios, providing exhaustive verification rather than probabilistic confidence.
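Seed-determinism is simple to demonstrate. The sketch below is a toy scheduler, not the Lab itself: every scheduling choice flows through a seeded PRNG, so a seed fully determines the interleaving:

```python
import random

# Sketch: a seed fully determines the task execution order.
def lab_schedule(tasks, seed):
    rng = random.Random(seed)
    pending, order = list(tasks), []
    while pending:
        order.append(pending.pop(rng.randrange(len(pending))))
    return order

tasks = ["fetch", "decode", "write", "notify"]
run_a = lab_schedule(tasks, seed=42)
run_b = lab_schedule(tasks, seed=42)   # identical replay of the same run
```

Two runs under the same seed are byte-for-byte identical, so a failing seed discovered in CI can be replayed locally as many times as debugging requires.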
Toggle DPOR to see how Asupersync prunes redundant commutations to make testing race conditions practical.
DPOR (Dynamic Partial Order Reduction) is an algorithm that explores concurrent interleavings selectively, skipping orderings that are provably equivalent. Two interleavings are equivalent if they differ only in the ordering of independent (non-conflicting) operations. DPOR identifies conflict points at runtime, then backtracks only to explore orderings that produce genuinely different outcomes.
The visualization shows an interleaving search tree. Without DPOR, every possible ordering is explored, leading to factorial blowup. Toggle DPOR on to see branches pruned. The remaining branches are precisely those that contain at least one conflict-dependent reordering, meaning they could exhibit different behavior from previously explored paths.
Foata fingerprints further accelerate this pruning. When two unexplored branches produce the same Foata fingerprint, the Lab skips the duplicate. The combined effect of DPOR and fingerprint deduplication typically reduces the search space from factorial to polynomial in the number of tasks, making exhaustive interleaving testing practical for real-world async applications.
Side-by-side comparison of cancel behavior. Press 'Cancel Now' to see the difference.
This visualization provides a direct comparison between Tokio's cancellation model and Asupersync's three-phase cancel protocol. On the Tokio side, dropping a JoinHandle immediately destroys the future. Any in-flight work, open connections, and acquired locks are abandoned without cleanup. On the Asupersync side, cancellation initiates the structured Request → Drain → Finalize sequence.
Press the Cancel button and observe both sides simultaneously. Tokio's side completes instantly because the future is simply dropped. Asupersync's side takes slightly longer (bounded by the drain budget) but produces a clean shutdown: buffers are flushed, connections are closed, permits are consumed, and finalizers run in LIFO order.
The tradeoff is explicit. Tokio's approach is faster in the cancellation path, but it makes cancel-correctness the developer's problem. Asupersync's approach adds bounded latency to cancellation, but it makes cleanup automatic and verifiable. The Lab runtime can prove that your application handles cancellation correctly across all interleavings; in Tokio, you must rely on manual code review and hope that every .await site handles its drop correctly.