Background
This document provides an introduction to the theory behind Rumpsteak-Aura. It covers multiparty session types, choreographic programming, and algebraic effects. Understanding these concepts helps explain how the system achieves type-safe distributed programming.
Session Types
Session types encode communication protocols as types in the type system. They specify the sequence and structure of messages exchanged between processes. A session type acts as a contract that all communication must follow. The compiler verifies that implementations adhere to this contract.
Traditional type systems ensure data safety. They prevent type errors like passing a string where an integer is expected. Session types extend this to communication safety. They prevent protocol errors like sending when you should receive or terminating when more messages are expected.
A session type describes a communication channel from one endpoint’s perspective. The type !String.?Int.end means send a string, then receive an integer, then close the channel. The dual type ?String.!Int.end means receive a string, then send an integer, then close. These types are complementary. They ensure the endpoints coordinate correctly.
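The complementarity of dual types can be sketched at the type level in Rust. The following is a minimal illustration only; `Send`, `Recv`, `End`, and `Dual` are hypothetical stand-ins, not this crate's API:

```rust
use std::marker::PhantomData;

// Type-level protocol states: !T.S, ?T.S, and end.
struct Send<T, S>(PhantomData<(T, S)>);
struct Recv<T, S>(PhantomData<(T, S)>);
struct End;

// Duality swaps send and receive at every step of the protocol.
trait Dual { type D; }
impl Dual for End { type D = End; }
impl<T, S: Dual> Dual for Send<T, S> { type D = Recv<T, S::D>; }
impl<T, S: Dual> Dual for Recv<T, S> { type D = Send<T, S::D>; }

// !String.?Int.end from one endpoint...
type Client = Send<String, Recv<i64, End>>;
// ...and its dual, ?String.!Int.end, computed by the trait.
type Server = <Client as Dual>::D;

fn main() {
    // The computed dual names the same type as the hand-written one.
    assert_eq!(
        std::any::type_name::<Server>(),
        std::any::type_name::<Recv<String, Send<i64, End>>>(),
    );
}
```

Because `Dual` is checked by the compiler, mismatched endpoints fail to type-check rather than fail at runtime.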
Multiparty Session Types (MPST)
Multiparty session types extend binary session types to protocols with three or more participants. While session types handle point-to-point channels, MPST handles protocols where multiple parties interact. Each participant can communicate with multiple others. The type system ensures all interactions remain synchronized.
MPST introduces new challenges beyond binary sessions. Participants must agree on the overall protocol flow. Messages between two participants affect what other participants can do next. The type system must track these dependencies to prevent global deadlocks. Consider a three-party payment protocol where a customer requests a quote from a merchant, the merchant checks with a bank, and the bank approves or denies.
In MPST, each participant has a local view that includes all their partners. The customer’s type shows communication with both merchant and bank. The system ensures all three views are compatible. For a more in-depth discussion of MPST theory, see A Very Gentle Introduction to Multiparty Session Types. Advanced optimizations through asynchronous subtyping are covered in Precise Subtyping for Asynchronous Multiparty Sessions. Mechanised Subject Reduction for Multiparty Asynchronous Session Types includes a mechanized Coq proof that adds a well-typedness condition repairing the original MPST formulation by Honda et al. The type theory in this crate draws from this revised formulation.
Global Types and Projection
Global types describe protocols from a neutral, third-party perspective. They specify all interactions between all participants. Local types describe the protocol from one participant’s viewpoint. Projection transforms global types into local types for each participant.
G = Alice -> Bob: Int.
Bob -> Alice: {Sum: Int, Product: Int}
This global type projects to different local types. Alice gets Bob!Int.Bob&{Sum: Int, Product: Int}. Bob gets Alice?Int.Alice⊕{Sum: Int, Product: Int}. Each participant sees only their part of the protocol.
The projection algorithm ensures the local types are compatible. If projection succeeds, the protocol is guaranteed to be deadlock-free. All participants will complete the protocol without communication errors.
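As a rough illustration of the idea (not the crate's actual algorithm), projection of a choice-free global protocol keeps the interactions a role participates in and skips the rest. `Global` and `Local` here are simplified stand-ins:

```rust
#[derive(Debug, PartialEq)]
enum Global {
    Comm { from: &'static str, to: &'static str, msg: &'static str, cont: Box<Global> },
    End,
}

#[derive(Debug, PartialEq)]
enum Local {
    Send { to: &'static str, msg: &'static str, cont: Box<Local> },
    Recv { from: &'static str, msg: &'static str, cont: Box<Local> },
    End,
}

// Project a linear global protocol onto one role.
fn project(g: &Global, role: &str) -> Local {
    match g {
        Global::End => Local::End,
        Global::Comm { from, to, msg, cont } => {
            let k = Box::new(project(cont, role));
            if *from == role {
                Local::Send { to: *to, msg: *msg, cont: k }
            } else if *to == role {
                Local::Recv { from: *from, msg: *msg, cont: k }
            } else {
                *k // uninvolved roles do not see this interaction
            }
        }
    }
}

fn main() {
    // G = Alice -> Bob: Int. Bob -> Alice: Sum (branching elided for brevity).
    let g = Global::Comm {
        from: "Alice", to: "Bob", msg: "Int",
        cont: Box::new(Global::Comm {
            from: "Bob", to: "Alice", msg: "Sum",
            cont: Box::new(Global::End),
        }),
    };
    // Alice's local type: send Int to Bob, then receive Sum from Bob.
    assert_eq!(project(&g, "Alice"), Local::Send {
        to: "Bob", msg: "Int",
        cont: Box::new(Local::Recv {
            from: "Bob", msg: "Sum",
            cont: Box::new(Local::End),
        }),
    });
}
```

The real algorithm must additionally handle choices, recursion, and branch merging, which is where deadlock-freedom is established.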
Lean Signatures (Core Types)
Rumpsteak-Aura’s formal core is specified in Lean. The core session types are:
- the global protocol type (GlobalType),
- the recursive local type (LocalTypeR), and
- the coinductive local type used in bisimulation proofs (LocalTypeC).
Below are their exact Lean signatures, with namespaces preserved.
Global types and labels (lean/RumpsteakV2/Protocol/GlobalType.lean):
namespace RumpsteakV2.Protocol.GlobalType
inductive PayloadSort where
| unit : PayloadSort
| nat : PayloadSort
| bool : PayloadSort
| string : PayloadSort
| prod : PayloadSort → PayloadSort → PayloadSort
deriving Repr, DecidableEq, BEq, Inhabited
structure Label where
name : String
sort : PayloadSort := PayloadSort.unit
deriving Repr, DecidableEq, BEq, Inhabited
inductive GlobalType where
| end : GlobalType
| comm : String → String → List (Label × GlobalType) → GlobalType
| mu : String → GlobalType → GlobalType
| var : String → GlobalType
deriving Repr, Inhabited
end RumpsteakV2.Protocol.GlobalType
Inductive local types (lean/RumpsteakV2/Protocol/LocalTypeR.lean):
namespace RumpsteakV2.Protocol.LocalTypeR
open RumpsteakV2.Protocol.GlobalType
inductive LocalTypeR where
| end : LocalTypeR
| send : String → List (Label × LocalTypeR) → LocalTypeR
| recv : String → List (Label × LocalTypeR) → LocalTypeR
| mu : String → LocalTypeR → LocalTypeR
| var : String → LocalTypeR
deriving Repr, Inhabited
end RumpsteakV2.Protocol.LocalTypeR
Coinductive local types (M-types) (lean/RumpsteakV2/Coinductive/LocalTypeC.lean):
namespace RumpsteakV2.Coinductive
open RumpsteakV2.Protocol.GlobalType
inductive LocalTypeHead where
| end : LocalTypeHead
| var : String → LocalTypeHead
| mu : String → LocalTypeHead
| send : String → List Label → LocalTypeHead
| recv : String → List Label → LocalTypeHead
deriving Repr, DecidableEq
def LocalTypeChild : LocalTypeHead → Type
| .end => PEmpty
| .var _ => PEmpty
| .mu _ => Unit
| .send _ ls => Fin ls.length
| .recv _ ls => Fin ls.length
def LocalTypeF : PFunctor := ⟨LocalTypeHead, LocalTypeChild⟩
abbrev LocalTypeC := PFunctor.M LocalTypeF
end RumpsteakV2.Coinductive
Well-Formedness (Lean)
Well-formedness is explicit in the Lean development.
Global types are well-formed when all of the following hold:
def GlobalType.wellFormed (g : GlobalType) : Bool :=
g.allVarsBound && g.allCommsNonEmpty && g.noSelfComm && g.isProductive
Intuitively:
- allVarsBound: recursion variables are bound
- allCommsNonEmpty: communications have at least one branch
- noSelfComm: no self-send
- isProductive: recursion must pass through a communication
Local types are well-formed when they are closed and contractive:
structure LocalTypeR.WellFormed (t : LocalTypeR) : Prop where
closed : t.isClosed
contractive : t.isContractive = true
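To make "contractive" concrete, here is a toy check in Rust (simplified types, not the crate's implementation): a recursion variable is only allowed once at least one send or receive guards it.

```rust
enum L {
    End,
    Send(Box<L>), // partner and payload elided
    Recv(Box<L>),
    Mu(&'static str, Box<L>),
    Var(&'static str),
}

// A type is contractive when no recursion variable is reachable without
// passing through a communication: `mu X. X` is rejected, `mu X. !m.X` is fine.
fn contractive(t: &L, unguarded: &[&'static str]) -> bool {
    match t {
        L::End => true,
        L::Var(x) => !unguarded.contains(x),
        // A communication guards everything beneath it.
        L::Send(k) | L::Recv(k) => contractive(k, &[]),
        L::Mu(x, body) => {
            let mut u = unguarded.to_vec();
            u.push(*x);
            contractive(body, &u)
        }
    }
}

fn main() {
    let bad = L::Mu("X", Box::new(L::Var("X")));
    let good = L::Mu("X", Box::new(L::Send(Box::new(L::Var("X")))));
    assert!(!contractive(&bad, &[]));
    assert!(contractive(&good, &[]));
}
```

Contractiveness rules out recursion that unfolds forever without ever communicating, which is what makes unfolding well-behaved in the proofs.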
Coinductive local types are well-formed when they are closed and observable:
def ClosedC (t : LocalTypeC) : Prop :=
∀ v, ¬ UnfoldsToVarC t v
structure WellFormedC (t : LocalTypeC) : Prop where
closed : ClosedC t
observable : ObservableC t
Progress Conditions (Lean)
Progress is not derived from well-formedness alone. The V2 proofs separate structural well-formedness from a progress predicate that asserts a communication is reachable.
The progress predicate is:
inductive ReachesComm : GlobalType → Prop where
| comm (sender receiver : String) (branches : List (Label × GlobalType)) :
branches ≠ [] → ReachesComm (.comm sender receiver branches)
| mu (t : String) (body : GlobalType) :
ReachesComm (body.substitute t (.mu t body)) →
ReachesComm (.mu t body)
The associated progress theorem uses this predicate together with well-formedness and well-typedness:
def CanProgress (c : Configuration) : Prop :=
∃ c' act, ConfigStep c c' act
theorem deadlock_free (c : Configuration)
(htyped : WellTypedConfig c)
(hwf : c.globalType.wellFormed = true)
(hcomm : ReachesComm c.globalType) :
CanProgress c
In other words: well-formedness ensures validity of recursion and structure,
while ReachesComm ensures actual progress is possible. Both are required
to justify progress.
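As a toy illustration (not the Lean development), for productive recursion the predicate can be checked structurally, since a guarded mu body already contains any communication its unfolding reaches. The types and names below are illustrative:

```rust
enum G {
    End,
    Comm { branches: usize }, // branch count only; continuations elided
    Mu(Box<G>),
    Var,
}

// Structural analogue of ReachesComm for productive global types:
// a communication with at least one branch reaches a comm; a mu does
// whenever its body does; `end` and bare variables do not.
fn reaches_comm(g: &G) -> bool {
    match g {
        G::Comm { branches } => *branches > 0,
        G::Mu(body) => reaches_comm(body),
        G::End | G::Var => false,
    }
}

fn main() {
    // `end` is perfectly well-formed, yet no step is possible from it,
    // which is exactly why progress needs ReachesComm as a hypothesis.
    assert!(!reaches_comm(&G::End));
    assert!(reaches_comm(&G::Mu(Box::new(G::Comm { branches: 1 }))));
}
```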
Choreographic Programming
Choreographic programming takes the global types concept further. Instead of just types, you write actual program logic from a global perspective. The choreography describes computations and data flow between participants. Endpoint projection generates local implementations for each participant.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol Calculator =
roles Alice, Bob
Alice -> Bob : Value
Bob -> Alice : Result
"#);
}
This choreography specifies the communication protocol. The system projects it into separate programs for Alice and Bob. Alice’s program sends a value and receives a result. Bob’s program receives a value and sends back a result.
Choreographic programming eliminates the need to manually coordinate distributed implementations. The global specification ensures all participants agree on the protocol. Type checking at the choreographic level prevents distributed system errors.
A good place to start learning more about choreographic programming is Introduction to Choreographies. For the integration with multiparty session types, see Applied Choreographies.
Algebraic Effects
Algebraic effects separate what a program does from how it does it. The effect algebra defines a set of abstract operations that a program can perform. Handlers provide concrete implementations of these operations. This separation allows the same program to run with different implementations.
In Rumpsteak-Aura, communication operations are effects. Sending and receiving messages are abstract operations. Different handlers implement these operations differently. An in-memory handler passes messages through local channels. A network handler sends messages over TCP or WebSocket connections.
#![allow(unused)]
fn main() {
let program = Program::new()
.send(Role::Bob, Message::Hello)
.recv::<Message>(Role::Bob)
.end();
// Same program, different handlers
interpret(&mut in_memory_handler, &mut endpoint, program).await;
interpret(&mut network_handler, &mut endpoint, program).await;
}
The program specifies what messages to send and receive. The handler determines how this happens. This design enables testing with local handlers and deployment with network handlers. The protocol logic remains unchanged.
Integration in Rumpsteak-Aura
Rumpsteak-Aura combines these concepts into a practical system. Choreographies define distributed protocols globally. The type system ensures protocol safety through MPST. Effect handlers provide flexible execution strategies. The choreography macro parses protocol definitions, generates role and message types automatically, and creates session types for each participant through projection.
The effect system decouples protocol logic from transport mechanisms. Handlers interpret send and receive operations. Middleware can add logging, retry logic, or fault injection. The same choreography works across different deployment scenarios. Content addressing assigns cryptographic identities to all protocol artifacts, enabling memoization and structural sharing.
Topology configuration separates deployment concerns from protocol logic. Resource heaps provide explicit state management with nullifier-based consumption tracking. This integration provides both safety and flexibility through a type system that prevents protocol errors and an effect system that allows diverse implementations.
Further Reading
See Architecture for system design details. See Choreographic DSL for the choreography language. See Effect Handlers for the handler system.
See Content Addressing for cryptographic protocol identities. See Topology for deployment configuration. See Resource Heap for explicit state management.
Getting Started
Installation
Add Rumpsteak-Aura to your project dependencies.
[dependencies]
rumpsteak-aura = "*"
rumpsteak-aura-choreography = "*"
This adds the main facade crate and the choreographic programming layer.
Understanding the Crates
Rumpsteak-Aura is organized as a Cargo workspace with several crates. The crate structure mirrors the Lean formalization for verified correspondence. The rumpsteak-types crate contains core type definitions (GlobalType, LocalTypeR, Label, PayloadSort) that match Lean exactly. The rumpsteak-theory crate contains pure algorithms for projection, merge, subtyping, and well-formedness checks.
The rumpsteak-aura-choreography crate is the choreographic programming layer providing the DSL parser, effect handlers, and code generation. The rumpsteak-lean-bridge crate enables cross-validation with Lean through JSON import and export functions.
The rumpsteak-aura crate is the main facade that re-exports types from other crates with feature flags. Most users need both rumpsteak-aura and rumpsteak-aura-choreography for session types and the high-level DSL.
Feature Flags
The workspace provides granular feature flags to control dependencies and functionality.
Root Crate (rumpsteak-aura)
| Feature | Default | Description |
|---|---|---|
| serialize | no | Serialization support for session types |
| test-utils | no | Testing utilities (random generation) |
| wasm | no | WebAssembly support |
| theory | no | Session type algorithms via rumpsteak-theory |
| theory-async-subtyping | no | POPL 2021 asynchronous subtyping algorithm |
| theory-bounded | no | Bounded recursion strategies |
| full | no | Enable all optional features |
Theory Crate (rumpsteak-theory)
| Feature | Default | Description |
|---|---|---|
| projection | yes | Global to local type projection |
| duality | yes | Dual type computation |
| merge | yes | Local type merging |
| well-formedness | yes | Type validation |
| bounded | yes | Bounded recursion strategies |
| async-subtyping | yes | POPL 2021 asynchronous subtyping |
| sync-subtyping | yes | Synchronous subtyping |
| semantics | yes | Async step semantics from ECOOP 2025 |
| coherence | yes | Coherence predicates |
Choreography Crate (rumpsteak-aura-choreography)
| Feature | Default | Description |
|---|---|---|
| test-utils | no | Testing utilities (random, fault injection) |
| wasm | no | WebAssembly support |
Lean Bridge Crate (rumpsteak-lean-bridge)
| Feature | Default | Description |
|---|---|---|
| runner | yes | LeanRunner for invoking Lean binary |
| cli | no | Command-line interface binary |
| exporter | no | Choreography exporter binary |
| golden | no | Golden file management CLI |
Example: Minimal Dependencies
# Just the core runtime, no algorithms
rumpsteak-aura = { version = "*", default-features = false }
This keeps the dependency surface small while enabling the core runtime.
Example: Full Feature Set
# Everything enabled
rumpsteak-aura = { version = "*", features = ["full"] }
This enables all optional features for the facade crate.
For WASM support, enable the wasm feature on the choreography crate.
rumpsteak-aura-choreography = { version = "*", features = ["wasm"] }
This enables compilation to WebAssembly targets.
Creating a Choreography
This example shows a simple ping-pong protocol between two roles.
Define the choreography using the choreography! macro.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::choreography;
choreography!(r#"
protocol PingPong =
roles Alice, Bob
Alice -> Bob : Ping
Bob -> Alice : Pong
"#);
}
The macro automatically generates role types, message types, and session types. This is the recommended approach for most use cases. For advanced scenarios requiring runtime parsing, see Choreographic DSL.
Run the protocol using the effect system.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{InMemoryHandler, Program, interpret};
use serde::{Serialize, Deserialize};
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Role {
Alice,
Bob,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
enum Message {
Ping,
Pong,
}
let mut handler = InMemoryHandler::new(Role::Alice);
let program = Program::new()
.send(Role::Bob, Message::Ping)
.recv::<Message>(Role::Bob)
.end();
let mut endpoint = ();
let result = interpret(&mut handler, &mut endpoint, program).await?;
}
The InMemoryHandler provides local message passing for testing. See Using Rumpsteak Handlers for production handlers.
Core Concepts
Choreographies
A choreography specifies a distributed protocol from a global viewpoint. Projection transforms the global view into local behavior for each role.
Roles
Roles are participants in the protocol. Each role sends and receives messages according to their projected session type.
Messages
Messages are data exchanged between roles. They must implement Serialize and Deserialize from the serde library.
Effect Handlers
Handlers interpret choreographic effects into actual communication. Different handlers provide different transports.
The InMemoryHandler provides local testing. The RumpsteakHandler provides channel-based communication. WebSocket handlers provide network communication.
The RumpsteakHandler supports two patterns. You can register built-in SimpleChannel pairs.
#![allow(unused)]
fn main() {
let (client_ch, server_ch) = SimpleChannel::pair();
client_endpoint.register_channel(Role::Server, client_ch);
server_endpoint.register_channel(Role::Client, server_ch);
}
Both endpoints now communicate through the channel pair.
Alternatively, you can wrap your own sink and stream transports.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::effects::RumpsteakSession;
let ws_session = RumpsteakSession::from_sink_stream(websocket_writer, websocket_reader);
client_endpoint.register_session(Role::Server, ws_session);
}
Both options integrate with the same handler.
Projection
The system projects global choreographies into local session types. Each role gets a type-safe API for their part of the protocol. This ensures communication follows the choreography specification.
Architecture
Overview
Rumpsteak-Aura implements choreographic programming for Rust. The system compiles global protocol specifications into local session types for each participant.
The architecture has four main layers:
- DSL and parsing (choreographic syntax to AST)
- Projection (global protocol to local types)
- Code generation (local types to Rust code)
- Effect system (protocol execution with pluggable transports)
Component Diagram
graph TB
subgraph Input["Developer Input (Compile-Time)"]
DSL["Choreography DSL<br/>Global Protocol Specification"]
end
subgraph Layer1["Layer 1: Parsing & AST Construction"]
Parser["Parser<br/>(Pest Grammar)"]
AST["AST<br/>Choreography + Protocol Tree"]
end
subgraph Layer2["Layer 2: Projection"]
Proj["Projection Algorithm"]
LT["Local Types<br/>(Per Role)"]
end
subgraph Layer3["Layer 3: Code Generation"]
CodeGen["Code Generator"]
Session["Generated Session Types"]
Effects["Generated Effect Programs"]
end
subgraph Layer4["Layer 4: Runtime Execution"]
Handler["Effect Handler<br/>(InMemory / Rumpsteak)"]
Transport["Transport Layer<br/>(Channels / Network)"]
Exec["Running Protocol"]
end
DSL --> Parser
Parser --> AST
AST --> Proj
Proj --> LT
LT --> CodeGen
CodeGen --> Session
CodeGen --> Effects
Session --> Handler
Effects --> Handler
Handler --> Transport
Transport --> Exec
This diagram summarizes the compile-time flow from DSL input to runtime execution. It also highlights the boundary between compilation and effect-handler execution.
Core Components
AST Module
The AST module is located in rust/choreography/src/ast/. It represents choreographies as data structures.
The main type is Choreography.
#![allow(unused)]
fn main() {
pub struct Choreography {
pub name: Ident,
pub namespace: Option<String>,
pub roles: Vec<Role>,
pub protocol: Protocol,
pub attrs: HashMap<String, String>,
}
}
This struct holds the protocol name and optional namespace. It contains participating roles and the protocol tree. Metadata attributes are stored in the attrs field.
The Protocol enum defines all possible protocol actions.
#![allow(unused)]
fn main() {
pub enum Protocol {
Send {
from: Role,
to: Role,
message: MessageType,
continuation: Box<Protocol>,
annotations: HashMap<String, String>,
from_annotations: HashMap<String, String>,
to_annotations: HashMap<String, String>,
},
Broadcast {
from: Role,
to_all: Vec<Role>,
message: MessageType,
continuation: Box<Protocol>,
annotations: HashMap<String, String>,
from_annotations: HashMap<String, String>,
},
Choice {
role: Role,
branches: Vec<Branch>,
annotations: HashMap<String, String>,
},
Loop { condition: Option<Condition>, body: Box<Protocol> },
Parallel { protocols: Vec<Protocol> },
Rec { label: Ident, body: Box<Protocol> },
Var(Ident),
Extension {
extension: Box<dyn ProtocolExtension>,
continuation: Box<Protocol>,
annotations: HashMap<String, String>,
},
End,
}
}
Protocol is a recursive tree structure. It includes support for annotations at multiple levels. Broadcasts and recursive definitions are supported.
Parser Module
The parser module is located in rust/choreography/src/compiler/parser/. It converts DSL text into AST using the Pest parser generator.
The parser validates role declarations and builds the protocol tree from the input text. It runs a layout preprocessor before the grammar parse, which enables layout-sensitive syntax with explicit braces for empty blocks.
Two entry points are available.
#![allow(unused)]
fn main() {
pub fn parse_choreography_str(input: &str) -> Result<Choreography, ParseError>
pub fn parse_choreography_file(path: &Path) -> Result<Choreography, ParseError>
}
The parser performs syntactic validation and basic semantic checks.
Projection Module
The projection module is located in rust/choreography/src/compiler/projection.rs. Projection transforms a global protocol into a local view for each role.
The algorithm determines what each participant should do.
#![allow(unused)]
fn main() {
pub fn project(choreography: &Choreography, role: &Role) -> Result<LocalType, ProjectionError>
}
Projection handles merging parallel branches. It also detects conflicts between branches.
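The merge idea can be sketched as follows (toy types, not the crate's API): identical local types merge trivially, external choices from the same partner merge by label union, and anything else is reported as a conflict.

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq, Clone)]
enum Local {
    Recv { from: String, branches: BTreeMap<String, Local> },
    End,
}

// Merge two local views of a role that is not the decider of a choice.
fn merge(a: &Local, b: &Local) -> Option<Local> {
    if a == b {
        return Some(a.clone());
    }
    match (a, b) {
        // External choices from the same partner merge by branch union.
        (Local::Recv { from: f1, branches: b1 }, Local::Recv { from: f2, branches: b2 })
            if f1 == f2 =>
        {
            let mut merged = b1.clone();
            for (label, cont) in b2 {
                match merged.get(label).cloned() {
                    None => { merged.insert(label.clone(), cont.clone()); }
                    // A shared label must carry compatible continuations.
                    Some(existing) => { merged.insert(label.clone(), merge(&existing, cont)?); }
                }
            }
            Some(Local::Recv { from: f1.clone(), branches: merged })
        }
        _ => None, // conflict: branches demand incompatible behavior
    }
}

fn main() {
    let left = Local::Recv { from: "A".into(), branches: BTreeMap::from([("Ok".to_string(), Local::End)]) };
    let right = Local::Recv { from: "A".into(), branches: BTreeMap::from([("Err".to_string(), Local::End)]) };
    // Union of the Ok and Err branches.
    if let Some(Local::Recv { branches, .. }) = merge(&left, &right) {
        assert_eq!(branches.len(), 2);
    } else { panic!("expected a merged recv"); }
    // A recv cannot merge with End: conflict detected.
    assert_eq!(merge(&left, &Local::End), None);
}
```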
Code Generation Module
The codegen module is located in rust/choreography/src/compiler/codegen/. It converts local types into Rust session types and effect programs.
The generator creates compile-time type-safe protocol implementations.
#![allow(unused)]
fn main() {
pub fn generate_session_type(role: &Role, local_type: &LocalType, protocol_name: &str) -> TokenStream
pub fn generate_choreography_code(name: &str, roles: &[Role], local_types: &[(Role, LocalType)]) -> TokenStream
pub fn generate_choreography_code_with_dynamic_roles(choreography: &Choreography, local_types: &[(Role, LocalType)]) -> TokenStream
pub fn generate_choreography_code_with_namespacing(choreography: &Choreography, local_types: &[(Role, LocalType)]) -> TokenStream
pub fn generate_choreography_code_with_topology(choreography: &Choreography, local_types: &[(Role, LocalType)]) -> TokenStream
pub fn generate_dynamic_role_support(choreography: &Choreography) -> TokenStream
pub fn generate_role_implementations(roles: &[Role]) -> TokenStream
pub fn generate_topology_integration(choreography: &Choreography) -> TokenStream
}
The generator creates session types and role structs. It supports dynamic roles including parameterized roles and runtime management.
Effect System
The effect system is located in rust/choreography/src/effects/. It decouples protocol logic from transport.
Protocols are represented as effect programs. Handlers interpret these programs.
#![allow(unused)]
fn main() {
pub trait ChoreoHandler: Send {
type Role: RoleId;
type Endpoint: Endpoint;
async fn send<M: Serialize + Send + Sync>(
&mut self, ep: &mut Self::Endpoint, to: Self::Role, msg: &M
) -> ChoreoResult<()>;
async fn recv<M: DeserializeOwned + Send>(
&mut self, ep: &mut Self::Endpoint, from: Self::Role
) -> ChoreoResult<M>;
async fn choose(
&mut self, ep: &mut Self::Endpoint, to: Self::Role, label: <Self::Role as RoleId>::Label
) -> ChoreoResult<()>;
async fn offer(
&mut self, ep: &mut Self::Endpoint, from: Self::Role
) -> ChoreoResult<<Self::Role as RoleId>::Label>;
}
}
Handlers implement this trait to provide different execution strategies.
Data Flow
This section demonstrates the transformation of a choreography through each layer.
Input choreography:
#![allow(unused)]
fn main() {
Alice -> Bob: Request
Bob -> Alice: Response
}
The choreography specifies a request-response pattern.
After parsing, the AST contains a nested send structure.
#![allow(unused)]
fn main() {
Protocol::Send {
from: Alice, to: Bob, message: Request,
continuation: Protocol::Send {
from: Bob, to: Alice, message: Response,
continuation: Protocol::End
}
}
}
This represents the global protocol tree.
After projection for Alice, the local type shows send then receive.
#![allow(unused)]
fn main() {
LocalType::Send {
to: Bob, message: Request,
continuation: LocalType::Receive {
from: Bob, message: Response,
continuation: LocalType::End
}
}
}
Alice sends a request and waits for a response.
After projection for Bob, the local type shows receive then send.
#![allow(unused)]
fn main() {
LocalType::Receive {
from: Alice, message: Request,
continuation: LocalType::Send {
to: Alice, message: Response,
continuation: LocalType::End
}
}
}
Bob waits for a request and sends a response.
After code generation for Alice, a session type is created.
#![allow(unused)]
fn main() {
type Alice_Protocol = Send<Bob, Request, Receive<Bob, Response, End>>;
}
This session type enforces the protocol at compile time.
At runtime, effect programs execute using handlers.
#![allow(unused)]
fn main() {
Program::new()
.send(Bob, Request)
.recv::<Response>(Bob)
.end()
}
The handler interprets this program into actual communication.
Design Decisions
Why Choreographic Programming
Creating distributed programs typically requires writing separate implementations for each participant. This approach is error-prone and hard to verify.
Choreographies specify the global protocol once. Automatic projection generates local code for each role. This approach prevents protocol mismatches and simplifies reasoning about distributed systems.
Why Effect Handlers
Separating protocol logic from transport enables testing and composition. The same protocol can run with different handlers without code changes.
Effect handlers provide runtime flexibility. Test handlers use in-memory communication. Production handlers use network transports.
Why Session Types
Session types provide compile-time guarantees about protocol compliance. The Rust type system enforces that each role follows their protocol correctly.
Type checking prevents common distributed systems errors. Deadlocks and protocol violations are caught at compile time.
Platform Abstraction
The runtime module provides platform-specific async primitives. Native targets use tokio. WASM uses wasm-bindgen-futures.
This abstraction makes the core library portable. The same code runs on servers and in browsers.
Extension Points
Custom Handlers
Implement ChoreoHandler to add new transport mechanisms. See Effect Handlers for details.
Middleware
Wrap handlers with middleware for cross-cutting concerns. Logging, metrics, and retry logic can be added as middleware. Middleware composes naturally.
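A minimal sketch of the wrapping pattern, using a simplified synchronous stand-in for the handler trait (names here are illustrative, not the crate's API):

```rust
trait Handler {
    fn send(&mut self, to: &str, msg: &str);
}

// Base handler: delivers messages locally.
struct InMemory { delivered: Vec<String> }
impl Handler for InMemory {
    fn send(&mut self, to: &str, msg: &str) {
        self.delivered.push(format!("{msg} -> {to}"));
    }
}

// Logging middleware: records every operation, then delegates to the
// wrapped handler. Retry or metrics middleware would follow the same shape.
struct Logging<H> { inner: H, trace: Vec<String> }
impl<H: Handler> Handler for Logging<H> {
    fn send(&mut self, to: &str, msg: &str) {
        self.trace.push(format!("send {msg} to {to}"));
        self.inner.send(to, msg);
    }
}

fn main() {
    let mut h = Logging { inner: InMemory { delivered: vec![] }, trace: vec![] };
    h.send("Bob", "Ping");
    assert_eq!(h.trace, vec!["send Ping to Bob"]);
    assert_eq!(h.inner.delivered, vec!["Ping -> Bob"]);
}
```

Because middleware implements the same trait it wraps, layers stack: `Logging<Retry<InMemory>>` composes without touching protocol logic.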
Custom Projections
The projection algorithm can be extended for domain-specific optimizations. Override default projection rules by implementing custom projection functions.
Code Generation Backends
Add new code generation backends to target different session type libraries. The AST and LocalType representations are language-agnostic. Backends for other languages can be added.
Workspace Organization
Rumpsteak-Aura is organized as a Cargo workspace with multiple crates. All Rust code is located in the rust/ directory. The crate structure mirrors the Lean formalization.
rumpsteak-aura/
├── rust/ All Rust crates
│ ├── src/ Facade crate (rumpsteak-aura)
│ ├── types/ Core types (rumpsteak-types)
│ │ └── src/
│ │ ├── global.rs GlobalType (matches Lean)
│ │ ├── local.rs LocalTypeR (matches Lean)
│ │ ├── label.rs Label, PayloadSort
│ │ └── action.rs Action, LocalAction
│ ├── theory/ Session type algorithms (rumpsteak-theory)
│ │ └── src/
│ │ ├── projection.rs Global to local projection
│ │ ├── merge.rs Branch merging
│ │ ├── subtyping/ Sync and async subtyping
│ │ ├── duality.rs Dual type computation
│ │ └── bounded.rs Bounded recursion
│ ├── choreography/ DSL and effects (rumpsteak-aura-choreography)
│ │ └── src/
│ │ ├── ast/ Extended AST definitions
│ │ ├── compiler/ Parser, projection, codegen
│ │ ├── effects/ Effect handlers and middleware
│ │ └── runtime/ Platform abstraction
│ ├── lean-bridge/ Lean validation (rumpsteak-lean-bridge)
│ │ └── src/
│ │ ├── export.rs Rust to JSON export
│ │ ├── import.rs JSON to Rust import
│ │ └── runner.rs Lean binary invocation
│ ├── transport/ Transport abstractions (rumpsteak-transport)
│ └── macros/ Procedural macros (rumpsteak-aura-macros)
├── lean/ Lean 4 verification code
├── examples/ Example protocols
└── docs/ Documentation
This tree outlines the workspace layout and crate locations. It helps map each crate name to its directory.
Crate Responsibilities
The rumpsteak-types crate contains core type definitions (GlobalType, LocalTypeR, Label, PayloadSort) that match Lean exactly. The rumpsteak-theory crate contains pure algorithms for projection, merge, duality, subtyping, and well-formedness checks. This crate depends only on rumpsteak-types.
The rumpsteak-aura-choreography crate is the choreographic programming layer including the DSL parser, effect handlers, code generation, and runtime support. The rumpsteak-lean-bridge crate provides Lean integration through JSON export and import with a runner for invoking the verification binary.
The rumpsteak-aura crate is the main facade that re-exports types from other crates with feature flags. Most users import from this crate.
Choreographic DSL
Overview
The parser translates protocol specifications from a layout-sensitive, PureScript- and Elm-inspired DSL into the internal AST (Choreography + Protocol). The DSL is direct style: statements are newline-separated, and indentation defines blocks.
Empty blocks must use {}. The DSL does not use an explicit end keyword. A protocol ends when its block ends.
The parser module is located in rust/choreography/src/compiler/parser/. It provides an implementation of the choreography DSL parser using Pest plus a layout preprocessor.
DSL Syntax
#![allow(unused)]
fn main() {
choreography!(r#"
protocol PingPong =
roles Alice, Bob
Alice -> Bob : Ping
Bob -> Alice : Pong
"#);
}
This example shows role declarations and message passing with the macro. The macro expects a string literal that contains the DSL.
Namespaces
Namespaces are expressed via a module declaration (rather than attributes).
#![allow(unused)]
fn main() {
let protocol = r#"
module threshold_ceremony exposing (ThresholdProtocol)
protocol ThresholdProtocol =
roles Coordinator, Signers[*]
Coordinator -> Signers[*] : Request
"#;
}
Multiple modules can coexist in separate files. Inside the choreography! macro you typically omit the module header, but it is supported in string-based parsing.
Supported Constructs
1) Send Statement
#![allow(unused)]
fn main() {
Role1 -> Role2 : MessageName
Role1 -> Role2 : MessageWithPayload { data : String, count : Int }
}
The send statement transfers a message from one role to another.
2) Broadcast Statement
#![allow(unused)]
fn main() {
Leader ->* : Announcement
}
Broadcast sends a message to all other roles.
3) Choice Statement (explicit decider)
#![allow(unused)]
fn main() {
case choose Client of
Buy when (balance > price) ->
Client -> Server : Purchase
Cancel ->
Client -> Server : Cancel
}
The deciding role (Client) selects a branch. Guards are optional.
Alias syntax (sugar):
#![allow(unused)]
fn main() {
choice at Client
Buy ->
Client -> Server : Purchase
Cancel ->
Client -> Server : Cancel
}
This is equivalent to the case choose form. It is a lighter syntax for the same choice structure.
4) Loop Statement
#![allow(unused)]
fn main() {
loop decide by Client
Client -> Server : Request
Server -> Client : Response
}
This loop continues while the deciding role chooses to continue. The decide by form is desugared to a choice-plus-recursion pattern at parse time. The first message in the loop body serves as the “continue” signal: when the deciding role sends it, the loop continues. A synthetic Done message is added as the termination signal.
The example above desugars to:
#![allow(unused)]
fn main() {
rec RoleDecidesLoop
choice at Client
Request ->
Client -> Server : Request
Server -> Client : Response
continue RoleDecidesLoop
Done ->
Client -> Server : Done
}
Requirement: The first statement in a loop decide by Role body must be a send from
the deciding role. This ensures the decision to continue is fused with the first message,
saving a round trip compared to sending a separate “continue” signal.
This desugaring converts the RoleDecides loop into standard multiparty session type
constructs (choice + recursion), enabling formal verification in Lean and compatibility
with standard MPST projection algorithms.
#![allow(unused)]
fn main() {
loop repeat 5
A -> B : Ping
B -> A : Pong
}
This loop repeats a fixed number of times. The compiler records the iteration count in the AST.
#![allow(unused)]
fn main() {
loop while "has_more_data"
A -> B : Data
}
This loop uses a custom condition token. The condition is preserved for tooling and extension passes.
#![allow(unused)]
fn main() {
loop forever
A -> B : Tick
}
This loop has no exit condition. Use it for persistent background protocols.
5) Parallel Statement (branch adjacency)
Parallel composition is expressed by adjacent branch blocks at the same indentation level.
#![allow(unused)]
fn main() {
branch
A -> B : Msg1
B -> A : Ack
branch
C -> D : Msg2
}
A solitary branch block is a parse error: parallel composition requires at least two adjacent blocks. Use {} for an empty branch if needed.
6) Recursion and Calls
#![allow(unused)]
fn main() {
rec Loop
A -> B : Tick
continue Loop
}
This defines a recursive label Loop and uses continue Loop to jump back, modeling unbounded recursion.
#![allow(unused)]
fn main() {
call Handshake
}
This calls another protocol that is in scope. The call target must be defined in the same file or where block.
The continue keyword is for recursive back-references within a rec block. The call keyword is for invoking sub-protocols defined in where blocks.
7) Protocol Composition (where block)
Define and reuse local protocol fragments inside a where block.
#![allow(unused)]
fn main() {
protocol Main =
roles A, B, C
call Handshake
call DataTransfer
A -> C : Done
where
protocol Handshake =
roles A, B
A -> B : Hello
B -> A : Hi
protocol DataTransfer =
roles A, B
A -> B : Data
B -> A : Ack
}
Local protocols can call each other and can be used within choices, loops, and branches.
8) Message Types and Payloads
Message types support generic parameters. Payloads may be written with {} or ().
#![allow(unused)]
fn main() {
A -> B : Request<String>
B -> A : Response<i32>
A -> B : Data<String, i32, bool>
A -> B : Msg { data : String, count : Int }
B -> A : Result(i : Int, ok : Bool)
}
These examples show generic parameters and payload shapes. Both {} and () forms are accepted.
9) Dynamic Role Count Support
Dynamic role counts are supported via wildcard and symbolic parameters.
#![allow(unused)]
fn main() {
protocol ThresholdProtocol =
roles Coordinator, Signers[*]
Coordinator -> Signers[*] : Request
Signers[0..threshold] -> Coordinator : Response
}
This example uses a wildcard role family. It also uses a symbolic bound in the role index.
#![allow(unused)]
fn main() {
protocol ConsensusProtocol =
roles Leader, Followers[N]
Leader -> Followers[*] : Proposal
Followers[i] -> Leader : Vote
}
This example mixes a named count with index variables. It enables parameterized protocols.
#![allow(unused)]
fn main() {
protocol PartialBroadcast =
roles Broadcaster, Receivers[*]
Broadcaster -> Receivers[0..count] : Message
Receivers[0..threshold] -> Broadcaster : Ack
}
This example shows bounded ranges for role indices. It models partial broadcasts and threshold acknowledgments.
10) Timing Patterns
Timing patterns provide constructs for building time-aware protocols. All patterns desugar to standard MPST constructs (choice, recursion) and remain verifiable in Lean.
Timed Choice
A timed choice races an operation against a timeout deadline:
#![allow(unused)]
fn main() {
protocol TimedRequest =
roles Client, Server
Client -> Server : Request
timed_choice at Client(5s)
OnTime ->
Server -> Client : Response
TimedOut ->
Client -> Server : Cancel
}
This desugars to a standard Choice with a TimedChoice { duration } annotation. The timeout is enforced at runtime by the effect interpreter.
Durations support: ms (milliseconds), s (seconds), m (minutes), h (hours).
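The duration grammar above can be illustrated with a small standalone parser. This is a hypothetical helper, not part of the crate's API, and it assumes durations are a plain integer followed by one of the four suffixes:

```rust
use std::time::Duration;

// Hypothetical sketch of the duration grammar: integer + ms/s/m/h suffix.
fn parse_duration(s: &str) -> Option<Duration> {
    // Check "ms" before "m" so "500ms" is not read as 500 minutes.
    if let Some(n) = s.strip_suffix("ms") {
        return n.parse().ok().map(Duration::from_millis);
    }
    let (num, mult) = match *s.as_bytes().last()? {
        b's' => (&s[..s.len() - 1], 1u64),
        b'm' => (&s[..s.len() - 1], 60),
        b'h' => (&s[..s.len() - 1], 3600),
        _ => return None,
    };
    num.parse::<u64>().ok().map(|n| Duration::from_secs(n * mult))
}

fn main() {
    assert_eq!(parse_duration("5s"), Some(Duration::from_secs(5)));
    assert_eq!(parse_duration("500ms"), Some(Duration::from_millis(500)));
    assert_eq!(parse_duration("2m"), Some(Duration::from_secs(120)));
    assert!(parse_duration("5x").is_none());
}
```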
Heartbeat Pattern
The heartbeat pattern models connection liveness detection:
#![allow(unused)]
fn main() {
protocol Liveness =
roles Alice, Bob
heartbeat Alice -> Bob every 1s on_missing(3) {
Bob -> Alice : Disconnect
} body {
Alice -> Bob : Ping
Bob -> Alice : Pong
}
}
This desugars to a recursive choice:
#![allow(unused)]
fn main() {
rec HeartbeatLoop
Alice -> Bob : Heartbeat
choice at Bob
Alive ->
// body
Alice -> Bob : Ping
Bob -> Alice : Pong
continue HeartbeatLoop
Dead ->
// on_missing
Bob -> Alice : Disconnect
}
The on_missing(3) parameter indicates how many missed heartbeats before declaring the sender dead. The runtime uses this for timeout calculations.
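The timeout arithmetic implied by `every 1s on_missing(3)` can be sketched as follows. The exact runtime policy is an assumption here; the point is that the deadline is derived from the interval and the missed-heartbeat count:

```rust
use std::time::Duration;

// Assumed policy: the receiver waits `interval * on_missing` without a
// heartbeat before taking the Dead branch.
fn dead_after(interval: Duration, on_missing: u32) -> Duration {
    interval * on_missing
}

fn main() {
    // every 1s on_missing(3) → declared dead after 3 seconds of silence
    assert_eq!(dead_after(Duration::from_secs(1), 3), Duration::from_secs(3));
}
```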
Runtime Timeout Annotation
The @runtime_timeout annotation provides transport-level timeout hints:
#![allow(unused)]
fn main() {
protocol TimedOps =
roles Client, Server
@runtime_timeout(5s) Client -> Server : Request
Server -> Client : Response
}
Unlike timed_choice, this annotation does not change the session type. It is purely a hint to the transport layer and is not verified in Lean. Use it for operational timeouts when you don’t want the protocol to branch on timeout.
11) String-based Protocol Definition
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::compiler::parser::parse_choreography_str;
let protocol = r#"
protocol PingPong =
roles Alice, Bob
Alice -> Bob : Ping
Bob -> Alice : Pong
"#;
let choreography = parse_choreography_str(protocol)?;
}
This parses a string into a Choreography AST. It is the entry point for runtime parsing.
Namespaced protocols are expressed with a module header.
#![allow(unused)]
fn main() {
let protocol = r#"
module secure_messaging exposing (EncryptedProtocol)
protocol EncryptedProtocol =
roles Sender, Receiver
Sender -> Receiver : SecureMessage
"#;
let choreography = parse_choreography_str(protocol)?;
}
The parser builds the AST for projection, validation, and code generation.
Implementation Details
Parser Stack
- Layout preprocessor converts indentation into explicit block delimiters.
- Pest grammar parses the canonical brace-based syntax.
- Parser module constructs the AST and runs validation.
Parse Pipeline
- Preprocess layout (indentation -> {}/()).
- Parse with Pest grammar.
- Build statements and normalize branch adjacency to Parallel.
- Validate roles and resolve call references.
- Build the Choreography/Protocol AST.
API
Primary Functions
parse_choreography_str parses a DSL string into a Choreography AST.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::compiler::parser::parse_choreography_str;
let choreo = parse_choreography_str(r#"
protocol Example =
roles A, B
A -> B : Hello
"#)?;
}
This example parses a DSL string into a Choreography. It uses parse_choreography_str directly.
parse_choreography_file parses a DSL file from disk.
#![allow(unused)]
fn main() {
use std::path::Path;
use rumpsteak_aura_choreography::compiler::parser::parse_choreography_file;
let choreo = parse_choreography_file(Path::new("protocol.choreo"))?;
}
This example parses a file path into a Choreography. The file must contain a valid DSL definition.
parse_dsl is an alias for parse_choreography_str.
Error Handling
#![allow(unused)]
fn main() {
match parse_choreography_str(input) {
Ok(choreo) => { /* use choreography */ }
Err(ParseError::UndefinedRole(role)) => {
eprintln!("Role '{}' used but not declared", role);
}
Err(ParseError::DuplicateRole(role)) => {
eprintln!("Role '{}' declared multiple times", role);
}
Err(ParseError::Syntax { location, message }) => {
eprintln!("Syntax error at {}: {}", location, message);
}
Err(e) => {
eprintln!("Parse error: {}", e);
}
}
}
This pattern matches common parse errors. It formats diagnostics with the reported context.
Grammar Details
Tokens and Keywords
- Identifiers: [a-zA-Z][a-zA-Z0-9_]*
- Integers: [0-9]+
- Strings: "..." (used in loop while)
- Keywords: protocol, roles, case, choose, of, choice, at, loop, decide, by, repeat, while, forever, branch, rec, call, where, module, import, exposing
- Operators: ->, ->*, :, {}, (), ,, |
Comments
Single-line comments use --. Multi-line comments use {- ... -}.
-- This is a single-line comment
{- This is a
multi-line comment -}
Whitespace and Layout
Indentation defines blocks. Use {} to force an empty block or to opt out of layout. Parenthesized blocks must be non-empty.
Validation
The parser validates that roles are declared, declared roles are unique, and branch blocks appear in parallel adjacency. Additional semantic validation is performed by choreography.validate().
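The two role checks described above can be sketched with self-contained helpers (the real validation lives in the parser module; the function names here are illustrative):

```rust
use std::collections::HashSet;

// Every role used in a statement must appear in the roles declaration.
fn undeclared<'a>(declared: &[&str], used: &'a [&'a str]) -> Vec<&'a str> {
    used.iter().copied().filter(|r| !declared.contains(r)).collect()
}

// Declared role names must be unique; returns each repeated name.
fn duplicates<'a>(declared: &'a [&'a str]) -> Vec<&'a str> {
    let mut seen = HashSet::new();
    declared.iter().copied().filter(|r| !seen.insert(*r)).collect()
}

fn main() {
    let declared = ["Alice", "Bob"];
    assert_eq!(undeclared(&declared, &["Alice", "Charlie"]), vec!["Charlie"]);
    assert_eq!(duplicates(&["Alice", "Bob", "Alice"]), vec!["Alice"]);
}
```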
Error Messages
Example undefined role error:
Undefined role 'Charlie'
--> input:5:14
|
5 | Alice -> Charlie : Hello
^^^^^^^
This error indicates a role that was not declared in roles. The location points to the undefined identifier.
Example duplicate role error:
Duplicate role declaration 'Alice'
--> input:3:33
|
3 | roles Alice, Bob, Charlie, Alice
^^^^^
This error indicates that a role name appears more than once. The location points to the duplicate entry.
Examples
Simple Two-Party Protocol
#![allow(unused)]
fn main() {
protocol PingPong =
roles Alice, Bob
Alice -> Bob : Ping
Bob -> Alice : Pong
}
This example shows a simple two-role protocol. It uses a single send and reply.
Protocol with Choice
#![allow(unused)]
fn main() {
protocol Negotiation =
roles Buyer, Seller
Buyer -> Seller : Offer
case choose Seller of
Accept ->
Seller -> Buyer : Accept
Reject ->
Seller -> Buyer : Reject
}
This example shows an explicit choice decided by Seller. Each branch starts with a send from the deciding role.
Complex E-Commerce Protocol
#![allow(unused)]
fn main() {
protocol ECommerce =
roles Buyer, Seller, Shipper
Buyer -> Seller : Inquiry
Seller -> Buyer : Quote
case choose Buyer of
Order ->
Buyer -> Seller : Order
Seller -> Shipper : ShipRequest
Shipper -> Buyer : Tracking
loop decide by Buyer
Buyer -> Shipper : StatusCheck
Shipper -> Buyer : StatusUpdate
Shipper -> Buyer : Delivered
Buyer -> Seller : Confirmation
Cancel ->
Buyer -> Seller : Cancel
}
This example combines choice and looping. It models a longer interaction with a buyer-controlled loop.
Parallel Example
#![allow(unused)]
fn main() {
protocol ParallelDemo =
roles A, B, C, D
branch
A -> B : Msg1
branch
C -> D : Msg2
}
This example uses adjacent branch blocks. Each branch defines a parallel sub-protocol.
Integration
With Projection
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::compiler::{parser, projection};
let choreo = parser::parse_choreography_str(input)?;
for role in &choreo.roles {
let local_type = projection::project(&choreo, role)?;
println!("Local type for {}: {:?}", role.name, local_type);
}
}
This projects the global protocol to a local type for each role. The result can be used for type-driven code generation.
With Code Generation
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::compiler::{parser, projection, codegen};
let choreo = parser::parse_choreography_str(input)?;
let mut local_types = Vec::new();
for role in &choreo.roles {
let local_type = projection::project(&choreo, role)?;
local_types.push((role.clone(), local_type));
}
let code = codegen::generate_choreography_code(
&choreo.name.to_string(),
&choreo.roles,
&local_types,
);
}
This generates Rust code for the choreography. The generated code includes session types and role APIs.
Testing
Run parser tests with:
cargo test --package rumpsteak-aura-choreography parser
This runs the parser test suite for the choreography crate. It exercises grammar and layout handling.
Choreographic Projection Patterns
The projection algorithm transforms global choreographic protocols into local session types. Each participating role receives their own local session type. The algorithm supports patterns beyond basic send and receive operations.
Supported Patterns
1. Basic Send/Receive
The global protocol specifies a send operation.
#![allow(unused)]
fn main() {
Protocol::Send {
from: alice,
to: bob,
message: Data,
continuation: End,
}
}
This defines Alice sending to Bob.
Alice’s projection shows a send operation.
#![allow(unused)]
fn main() {
LocalType::Send {
to: bob,
message: Data,
continuation: End,
}
}
Alice sends Data to Bob.
Bob’s projection shows a receive operation.
#![allow(unused)]
fn main() {
LocalType::Receive {
from: alice,
message: Data,
continuation: End,
}
}
Bob receives Data from Alice.
Charlie’s projection shows no participation.
#![allow(unused)]
fn main() {
LocalType::End
}
Charlie is uninvolved in this exchange.
2. Communicated Choice
The global protocol specifies a choice.
#![allow(unused)]
fn main() {
Protocol::Choice {
role: alice,
branches: vec![
Branch {
label: "yes",
protocol: Send { from: alice, to: bob, ... },
},
Branch {
label: "no",
protocol: Send { from: alice, to: bob, ... },
},
],
}
}
Alice chooses between yes and no branches.
Alice’s projection shows selection.
#![allow(unused)]
fn main() {
LocalType::Select {
to: bob,
branches: vec![
("yes", ...),
("no", ...),
],
}
}
Alice selects one branch and communicates it to Bob.
Bob’s projection shows branching.
#![allow(unused)]
fn main() {
LocalType::Branch {
from: alice,
branches: vec![
("yes", ...),
("no", ...),
],
}
}
Bob waits for Alice’s choice.
3. Local Choice
Local choice supports branches that do not start with send operations. This allows for local decisions without communication.
The global protocol defines a local choice.
#![allow(unused)]
fn main() {
Protocol::Choice {
role: alice,
branches: vec![
Branch {
label: "option1",
protocol: End,
},
Branch {
label: "option2",
protocol: End,
},
],
}
}
No send operation appears in the branches.
Alice’s projection shows local choice.
#![allow(unused)]
fn main() {
LocalType::LocalChoice {
branches: vec![
("option1", End),
("option2", End),
],
}
}
Alice makes a local decision.
The key difference is between LocalChoice and Select. The Select variant indicates communicated choice where the selection is sent to another role. The LocalChoice variant indicates local decision with no communication.
4. Loop with Conditions
Loop conditions are preserved in the projected local types.
The global protocol includes a loop.
#![allow(unused)]
fn main() {
Protocol::Loop {
condition: Some(Condition::Count(5)),
body: Send { from: alice, to: bob, ... },
}
}
The loop executes 5 times.
Alice’s projection preserves the condition.
#![allow(unused)]
fn main() {
LocalType::Loop {
condition: Some(Condition::Count(5)),
body: Send { to: bob, ... },
}
}
The count condition is maintained.
Supported conditions include Condition::Count(n) for fixed iteration count. Custom boolean expressions use Condition::Custom(expr). Infinite loops use None and must be terminated externally.
Note: Condition::RoleDecides(role) loops are desugared at parse time to a choice+recursion pattern, so they don’t appear in the AST after parsing and are handled as standard Protocol::Rec with Protocol::Choice during projection.
5. Parallel Composition
Parallel merging includes conflict detection.
Compatible parallel composition has no conflicts.
#![allow(unused)]
fn main() {
Protocol::Parallel {
protocols: vec![
Send { from: alice, to: bob, ... },
Send { from: alice, to: charlie, ... },
],
}
}
Different recipients avoid conflicts.
Alice’s projection merges the operations.
#![allow(unused)]
fn main() {
LocalType::Send {
to: bob,
continuation: Send {
to: charlie,
continuation: End,
},
}
}
Operations are merged sequentially. Order is non-deterministic at runtime.
Conflicting parallel composition causes errors.
#![allow(unused)]
fn main() {
Protocol::Parallel {
protocols: vec![
Send { from: alice, to: bob, ... },
Send { from: alice, to: bob, ... },
],
}
}
Same recipient creates a conflict.
Alice’s projection fails.
#![allow(unused)]
fn main() {
Err(ProjectionError::InconsistentParallel)
}
Cannot send to same recipient in parallel.
Conflict detection rules prevent invalid projections:
- Multiple sends to the same role create a conflict.
- Multiple receives from the same role create a conflict.
- Multiple selects to the same role create a conflict.
- Multiple branches from the same role create a conflict.
- Operations on different roles are allowed.
Projection Rules Summary
Chooser’s View
When all branches start with send operations, the projection is Select. This indicates communicated choice. When branches do not start with send operations, the projection is LocalChoice. This indicates local decision.
Receiver’s View
When the role receives the choice, the projection is Branch. When the role is not involved, continuations are merged.
Merge Semantics
When a role is not involved in a choice, the projections of all branches must be merged. The merge algorithm follows precise semantics based on whether the role sends or receives.
Send merge requires identical label sets. A non-participant cannot choose which message to send based on a choice they did not observe. If a role sends different messages in different branches, projection fails.
Receive merge unions label sets. A non-participant can receive any message regardless of which branch was taken. Different incoming messages in different branches are combined into a single receive with multiple labels.
This distinction is critical for protocol safety. The Rust implementation matches the Lean formalization in ProjectionR.lean.
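The send/receive asymmetry can be sketched over plain label sets, ignoring continuations. This is a simplified illustration of the rules above, not the crate's merge function:

```rust
use std::collections::BTreeSet;

// Send merge: a non-participant cannot vary its outgoing labels based on a
// choice it did not observe, so the label sets must be identical.
fn merge_send(a: &BTreeSet<String>, b: &BTreeSet<String>) -> Option<BTreeSet<String>> {
    if a == b { Some(a.clone()) } else { None }
}

// Receive merge: a non-participant may receive any message regardless of
// the branch taken, so the label sets are unioned.
fn merge_recv(a: &BTreeSet<String>, b: &BTreeSet<String>) -> BTreeSet<String> {
    a.union(b).cloned().collect()
}

fn main() {
    let mk = |xs: &[&str]| xs.iter().map(|x| x.to_string()).collect::<BTreeSet<_>>();
    assert!(merge_send(&mk(&["ack"]), &mk(&["ack"])).is_some());
    assert!(merge_send(&mk(&["ack"]), &mk(&["nack"])).is_none()); // projection fails
    assert_eq!(merge_recv(&mk(&["yes"]), &mk(&["no"])), mk(&["no", "yes"]));
}
```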
Parallel Composition
When a role appears in zero branches, the projection is End. When a role appears in one branch, that projection is used. When a role appears in two or more branches, projections are merged if compatible. An error occurs if conflicts exist.
6. Dynamic Role Projection
Dynamic roles require special handling during projection. The system supports runtime-determined role counts and range-based selection.
The global protocol uses dynamic roles.
#![allow(unused)]
fn main() {
Protocol::Send {
from: coordinator,
to: workers, // workers.param = RoleParam::Dynamic
message: Task,
continuation: End,
}
}
Coordinator sends to all workers.
Coordinator’s projection shows broadcast to dynamic set.
#![allow(unused)]
fn main() {
LocalType::Send {
to: workers, // Dynamic role reference
message: Task,
continuation: End,
}
}
The coordinator sends to all worker instances at runtime.
Workers’ projection shows receive.
#![allow(unused)]
fn main() {
LocalType::Receive {
from: coordinator,
message: Task,
continuation: End,
}
}
Each worker instance receives the task.
Range-based selection targets subsets.
#![allow(unused)]
fn main() {
Protocol::Send {
from: workers, // With range [0..threshold]
to: coordinator,
message: Response,
continuation: End,
}
}
Only workers in range respond.
Projection for workers in range shows send.
#![allow(unused)]
fn main() {
LocalType::Send {
to: coordinator,
message: Response,
continuation: End,
}
}
Workers outside range have no operation.
#![allow(unused)]
fn main() {
LocalType::End
}
Role indexing uses symbolic parameters.
#![allow(unused)]
fn main() {
Protocol::Send {
from: workers, // With index [i]
to: database,
message: Query,
continuation: End,
}
}
Each worker instance acts independently.
Workers’ projection preserves index semantics.
#![allow(unused)]
fn main() {
LocalType::Send {
to: database,
message: Query,
continuation: End,
}
}
Each instance executes independently with its own index.
Dynamic role projection has these constraints:
- Wildcard broadcast Workers[*] requires all instances.
- Range selection Workers[0..n] requires subset determination at runtime.
- Index semantics Workers[i] preserve per-instance independence.
Validation ensures safe dynamic role usage, and code generation includes runtime checks.
Implementation Notes
LocalType Variants
The enhanced projection algorithm uses these LocalType variants.
#![allow(unused)]
fn main() {
pub enum LocalType {
Send { to, message, continuation },
Receive { from, message, continuation },
Select { to, branches },
Branch { from, branches },
LocalChoice { branches },
Loop { condition, body },
Rec { label, body },
Var(label),
End,
}
}
Each variant represents a different local type pattern.
Code Generation
The generate_type_expr function in codegen.rs handles all variants. This includes the new LocalChoice and Loop types. Code generation transforms local types into Rust session types.
Dynamic roles use specialized code generation via generate_choreography_code_with_dynamic_roles. This function includes runtime role binding. Validation occurs at choreography initialization. Generated code supports dynamic role counts.
Content Addressing
Content addressing assigns cryptographic identities to protocol artifacts. Each value receives a unique hash derived from its canonical serialization. This enables structural sharing, memoization, and verifiable protocol state.
Overview
The content addressing system provides three capabilities. It computes deterministic content IDs for all protocol types. It enables memoization of expensive operations like projection. It supports structural sharing for memory efficiency.
ContentId
The ContentId type wraps a cryptographic hash of a value’s canonical form.
#![allow(unused)]
fn main() {
use rumpsteak_types::{ContentId, Sha256Hasher, GlobalType, Label};
use rumpsteak_types::contentable::Contentable;
let g = GlobalType::comm("A", "B", vec![(Label::new("msg"), GlobalType::End)]);
let bytes = g.to_bytes()?;
let cid: ContentId<Sha256Hasher> = ContentId::from_bytes(&bytes);
}
The content ID is computed from canonical bytes of the value. Two structurally equivalent types produce the same content ID. JSON is the default encoding; DAG-CBOR is available as an optional feature.
Hasher Trait
The hash algorithm is configurable through the Hasher trait.
#![allow(unused)]
fn main() {
pub trait Hasher: Clone + Default + PartialEq + Send + Sync + 'static {
type Digest: AsRef<[u8]> + Clone + PartialEq + Eq + Hash + Send + Sync + 'static;
const HASH_SIZE: usize;
fn digest(data: &[u8]) -> Self::Digest;
fn algorithm_name() -> &'static str;
}
}
SHA-256 is the default implementation. Additional hashers can be implemented by users when a different tradeoff is required.
The Sha256Hasher provides cryptographic security and broad compatibility. Custom hashers can target performance or proof system constraints.
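To illustrate how a custom algorithm plugs in, here is the trait from above implemented by a toy FNV-1a hasher. FNV is for illustration only; it is not collision-resistant, so production code should stay with a cryptographic hash like the default SHA-256:

```rust
use std::hash::Hash;

// The Hasher trait as shown above.
pub trait Hasher: Clone + Default + PartialEq + Send + Sync + 'static {
    type Digest: AsRef<[u8]> + Clone + PartialEq + Eq + Hash + Send + Sync + 'static;
    const HASH_SIZE: usize;
    fn digest(data: &[u8]) -> Self::Digest;
    fn algorithm_name() -> &'static str;
}

// Toy 64-bit FNV-1a: fast and deterministic, but NOT cryptographically secure.
#[derive(Clone, Default, PartialEq)]
pub struct Fnv64;

impl Hasher for Fnv64 {
    type Digest = [u8; 8];
    const HASH_SIZE: usize = 8;
    fn digest(data: &[u8]) -> [u8; 8] {
        let mut h: u64 = 0xcbf29ce484222325;
        for &b in data {
            h ^= b as u64;
            h = h.wrapping_mul(0x100000001b3);
        }
        h.to_be_bytes()
    }
    fn algorithm_name() -> &'static str { "fnv1a-64" }
}

fn main() {
    // Determinism: same bytes produce the same digest.
    assert_eq!(Fnv64::digest(b"hello"), Fnv64::digest(b"hello"));
    assert_ne!(Fnv64::digest(b"hello"), Fnv64::digest(b"world"));
}
```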
Contentable Trait
Types that support content addressing implement the Contentable trait.
#![allow(unused)]
fn main() {
pub trait Contentable {
fn to_bytes(&self) -> Result<Vec<u8>, ContentableError>;
fn from_bytes(bytes: &[u8]) -> Result<Self, ContentableError>;
}
}
The to_bytes method produces a canonical byte representation. The from_bytes method reconstructs the value from bytes.
Implementations exist for GlobalType, LocalTypeR, Label, and PayloadSort. Custom types can implement the trait for integration.
Serialization Formats
Two serialization formats are available for content addressing.
JSON (Default)
JSON serialization is enabled by default. It produces human-readable output suitable for debugging and interoperability.
#![allow(unused)]
fn main() {
use rumpsteak_types::{GlobalType, Label};
use rumpsteak_types::contentable::Contentable;
let g = GlobalType::send("A", "B", Label::new("msg"), GlobalType::End);
// Serialize to JSON bytes
let json_bytes = g.to_bytes()?;
// Deserialize from JSON bytes
let recovered = GlobalType::from_bytes(&json_bytes)?;
// Compute content ID from JSON
let cid = g.content_id_sha256()?;
}
DAG-CBOR (Optional)
DAG-CBOR serialization is available with the dag-cbor feature flag. It produces compact binary output compatible with IPLD and IPFS.
Enable the feature in Cargo.toml:
[dependencies]
rumpsteak-types = { version = "0.7", features = ["dag-cbor"] }
DAG-CBOR methods become available on the Contentable trait:
#![allow(unused)]
fn main() {
use rumpsteak_types::{GlobalType, Label};
use rumpsteak_types::contentable::Contentable;
let g = GlobalType::send("A", "B", Label::new("msg"), GlobalType::End);
// Serialize to DAG-CBOR bytes
let cbor_bytes = g.to_cbor_bytes()?;
// Deserialize from DAG-CBOR bytes
let recovered = GlobalType::from_cbor_bytes(&cbor_bytes)?;
// Compute content ID from CBOR (different from JSON-based CID)
let cbor_cid = g.content_id_cbor_sha256()?;
}
DAG-CBOR provides several advantages over JSON. It produces smaller output, typically 30-50% more compact. It handles binary data natively without base64 encoding. It generates CIDs compatible with IPFS and other IPLD-based systems.
Note that JSON and DAG-CBOR produce different content IDs for the same value. Choose one format consistently within a system to ensure content ID compatibility.
Deterministic Serialization
Content addressing requires deterministic serialization. Two invariants ensure this property.
Branch ordering sorts labeled branches by label name before serialization. This ensures {a: T1, b: T2} and {b: T2, a: T1} produce the same content ID.
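The branch-ordering invariant can be sketched with an ad-hoc string serialization (the crate's real canonical form is JSON or DAG-CBOR; this only demonstrates why sorting makes source order irrelevant):

```rust
// Sort branches by label before serializing, so {a: T1, b: T2} and
// {b: T2, a: T1} produce identical canonical bytes, hence the same CID.
fn canonical(mut branches: Vec<(&str, &str)>) -> String {
    branches.sort_by_key(|(label, _)| *label);
    branches
        .iter()
        .map(|(l, t)| format!("{}:{}", l, t))
        .collect::<Vec<_>>()
        .join(",")
}

fn main() {
    let ab = canonical(vec![("a", "T1"), ("b", "T2")]);
    let ba = canonical(vec![("b", "T2"), ("a", "T1")]);
    assert_eq!(ab, ba); // same content, same canonical form
}
```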
De Bruijn indices replace named variables with numeric indices. This ensures alpha-equivalent types produce the same content ID.
Named:      μx. A → B : x      μy. A → B : y
De Bruijn:  μ. A → B : 0       μ. A → B : 0
The conversion to de Bruijn form happens during serialization. The runtime representation retains named variables for debugging.
De Bruijn Conversion
The de Bruijn representation is computed on demand for serialization.
#![allow(unused)]
fn main() {
impl GlobalType {
fn to_de_bruijn(&self) -> GlobalTypeDB {
self.to_de_bruijn_with_env(&[])
}
}
}
The environment tracks bound variables. Each Mu binding extends the environment. Variable references are converted to indices.
#![allow(unused)]
fn main() {
GlobalType::Var(name) => {
let index = env.iter().position(|&v| v == name).unwrap_or(0);
GlobalTypeDB::Var(index)
}
}
This transformation is internal to the serialization layer.
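A self-contained sketch of the conversion over a toy recursive type shows the alpha-equivalence payoff. The `Ty`/`TyDb` types here are illustrative stand-ins for the crate's `GlobalType`/`GlobalTypeDB`, and indexing from the innermost binder is an assumption of this sketch:

```rust
#[derive(Debug, PartialEq)]
enum Ty {
    Mu(String, Box<Ty>), // named binder
    Var(String),
    End,
}

#[derive(Debug, PartialEq)]
enum TyDb {
    Mu(Box<TyDb>), // binder carries no name
    Var(usize),
    End,
}

fn to_db(t: &Ty, env: &mut Vec<String>) -> TyDb {
    match t {
        Ty::Mu(name, body) => {
            env.push(name.clone());
            let b = to_db(body, env);
            env.pop();
            TyDb::Mu(Box::new(b))
        }
        // Index 0 refers to the innermost binder, so search from the end.
        Ty::Var(name) => TyDb::Var(env.iter().rev().position(|v| v == name).unwrap_or(0)),
        Ty::End => TyDb::End,
    }
}

fn main() {
    // μx. x and μy. y are alpha-equivalent and convert to the same form.
    let x = Ty::Mu("x".into(), Box::new(Ty::Var("x".into())));
    let y = Ty::Mu("y".into(), Box::new(Ty::Var("y".into())));
    assert_eq!(to_db(&x, &mut Vec::new()), to_db(&y, &mut Vec::new()));
}
```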
Memoization
Content IDs enable efficient memoization of expensive operations. A projection cache can use (ContentId, Role) pairs as keys. Repeated projections of the same global type return cached results.
#![allow(unused)]
fn main() {
use std::collections::HashMap;
use rumpsteak_types::ContentId;
use rumpsteak_theory::project;
let mut cache: HashMap<(ContentId<_>, String), LocalTypeR> = HashMap::new();
let cid = global.content_id_sha256()?;
let key = (cid, "Alice".to_string());
if let Some(cached) = cache.get(&key) {
// Use cached result
} else {
let local = project(&global, "Alice")?;
cache.insert(key, local.clone());
}
}
This pattern enables caching projection results by content ID.
Content Store
The ContentStore provides deduplication for protocol artifacts.
#![allow(unused)]
fn main() {
use rumpsteak_types::ContentStore;
use rumpsteak_types::{GlobalType, Label};
let mut store = ContentStore::new();
let global = GlobalType::comm("A", "B", vec![(Label::new("ping"), GlobalType::End)]);
store.insert(&global, global.clone())?;
let retrieved = store.get(&global)?;
}
Identical types are stored once regardless of how many times they are inserted. The store uses content IDs as keys for O(1) lookup.
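The deduplication behavior can be sketched with a toy store keyed by a 64-bit std hash standing in for a real ContentId (the actual ContentStore uses the configurable cryptographic hashers described above):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Toy content-addressed store: the key is derived from the value itself,
// so inserting the same value twice stores it once.
struct Store {
    map: HashMap<u64, String>,
}

impl Store {
    fn new() -> Self {
        Store { map: HashMap::new() }
    }

    fn key(value: &str) -> u64 {
        let mut h = DefaultHasher::new();
        value.hash(&mut h);
        h.finish()
    }

    fn insert(&mut self, value: &str) -> u64 {
        let k = Self::key(value);
        self.map.entry(k).or_insert_with(|| value.to_string());
        k
    }

    fn get(&self, key: u64) -> Option<&str> {
        self.map.get(&key).map(String::as_str)
    }
}

fn main() {
    let mut store = Store::new();
    let k1 = store.insert("A -> B : ping");
    let k2 = store.insert("A -> B : ping"); // duplicate, deduplicated
    assert_eq!(k1, k2);
    assert_eq!(store.map.len(), 1);
    assert_eq!(store.get(k1), Some("A -> B : ping"));
}
```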
Lean Correspondence
The Lean formalization defines equivalent types and proofs.
structure ContentId (H : Type) [Hasher H] where
hash : ByteArray
deriving DecidableEq, BEq, Hashable
class Contentable (α : Type) where
toCbor : α → ByteArray
fromCbor : ByteArray → Except String α
def contentId [Hasher H] [Contentable α] (x : α) : ContentId H :=
⟨Hasher.hash H (Contentable.toCbor x)⟩
The fromCbor ∘ toCbor = id property is proven for all contentable types. This ensures round-trip correctness.
The Rust implementation with the dag-cbor feature provides to_cbor_bytes/from_cbor_bytes methods that correspond to the Lean toCbor/fromCbor methods. This alignment enables verification of content-addressed artifacts across both implementations.
Verification Properties
Several properties are verified in Lean.
Content equivalence holds when two values with the same content ID are structurally equal. This assumes collision resistance of the hash function.
Projection determinism holds when the same global type and role always produce the same local type. Content IDs enable caching without correctness concerns.
Alpha equivalence holds when de Bruijn conversion produces identical results for alpha-equivalent types.
Usage Example
#![allow(unused)]
fn main() {
use rumpsteak_types::{contentable::Contentable, ContentStore, GlobalType, Label};
use rumpsteak_theory::project;
// Create a protocol
let ping_pong = GlobalType::comm(
"Alice", "Bob",
vec![
(Label::new("ping"), GlobalType::comm(
"Bob", "Alice",
vec![(Label::new("pong"), GlobalType::End)],
)),
],
);
// Compute content ID
let cid = ping_pong.content_id_sha256()?;
println!("Protocol CID: {:?}", cid);
// Store in content-addressed storage
let mut store = ContentStore::new();
store.insert(&ping_pong, ping_pong.clone())?;
// Project for each role
let alice_type = project(&ping_pong, "Alice")?;
let bob_type = project(&ping_pong, "Bob")?;
}
This example demonstrates content ID computation, storage, and projection.
Effect Handlers
Overview
The effect handler system decouples protocol logic from transport implementation. Handlers interpret choreographic effects into actual communication operations.
A protocol executes by calling handler methods for each operation. Different handlers provide different execution strategies. Protocol code remains unchanged.
ChoreoHandler Trait
All handlers implement this trait.
#![allow(unused)]
fn main() {
pub trait ChoreoHandler: Send {
type Role: RoleId;
type Endpoint: Endpoint;
async fn send<M: Serialize + Send + Sync>(
&mut self, ep: &mut Self::Endpoint, to: Self::Role, msg: &M
) -> ChoreoResult<()>;
async fn recv<M: DeserializeOwned + Send>(
&mut self, ep: &mut Self::Endpoint, from: Self::Role
) -> ChoreoResult<M>;
async fn choose(
&mut self, ep: &mut Self::Endpoint, who: Self::Role,
label: <Self::Role as RoleId>::Label
) -> ChoreoResult<()>;
async fn offer(
&mut self, ep: &mut Self::Endpoint, from: Self::Role
) -> ChoreoResult<<Self::Role as RoleId>::Label>;
async fn with_timeout<F, T>(
&mut self, ep: &mut Self::Endpoint, at: Self::Role, dur: Duration, body: F
) -> ChoreoResult<T>
where
F: Future<Output = ChoreoResult<T>> + Send;
}
}
The trait defines four core methods and a timeout hook.
The send method transmits a message to another role. The recv method waits for a message from another role. The choose method makes a branch selection. The offer method receives a branch selection.
The Endpoint associated type holds connection state. Different handlers use different endpoint types.
Send bounds and portability
The trait requires messages to be Send. The send method requires Serialize + Send + Sync. The recv method requires DeserializeOwned + Send. Handler futures require F: Future + Send in with_timeout.
This matches the requirements of target runtimes. Native targets use tokio. WASM targets use single-thread executors. The bounds keep middleware stacks interchangeable between single-threaded and multi-threaded deployments.
Code written for browsers compiles unchanged for native binaries. Work can move across threads transparently.
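The practical effect of the Send bounds can be shown with a small standalone example: a generic function that requires Send may hand work to another thread, while the same code still compiles and runs on a single-threaded target. The helper name is illustrative:

```rust
// Because T: Send, the value may cross a thread boundary; on a
// single-threaded executor the same bound is satisfied trivially.
fn process_on_thread<T: Send + 'static>(value: T, f: fn(T) -> String) -> String {
    std::thread::spawn(move || f(value)).join().unwrap()
}

fn main() {
    let out = process_on_thread(21, |n| (n * 2).to_string());
    assert_eq!(out, "42");
}
```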
Built-in Handlers
InMemoryHandler
The InMemoryHandler is located in rust/choreography/src/effects/handlers/in_memory.rs. It provides fast local message passing for testing. The implementation uses futures channels internally.
Basic usage creates a handler for a single role.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::InMemoryHandler;
let mut handler = InMemoryHandler::new(Role::Alice);
}
This creates an Alice handler.
For coordinated testing between roles, use shared channels.
#![allow(unused)]
fn main() {
let channels = Arc::new(Mutex::new(HashMap::new()));
let choice_channels = Arc::new(Mutex::new(HashMap::new()));
let alice = InMemoryHandler::with_channels(Role::Alice, channels.clone(), choice_channels.clone());
let bob = InMemoryHandler::with_channels(Role::Bob, channels.clone(), choice_channels.clone());
}
The shared channels enable communication between handlers in the same process.
RumpsteakHandler
The RumpsteakHandler is located in rust/choreography/src/effects/handlers/rumpsteak.rs. It provides production-ready session-typed channels. The implementation uses the core Rumpsteak library for type-safe communication.
This handler enforces session types at runtime. It provides strong guarantees about protocol compliance.
See Using Rumpsteak Handlers for complete documentation.
RecordingHandler
The RecordingHandler is located in rust/choreography/src/effects/handlers/recording.rs. It records all operations for verification and testing. The handler stores a log of send, recv, choose, and offer calls.
Basic usage creates a recording handler.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::RecordingHandler;
let mut handler = RecordingHandler::new(Role::Alice);
// ... execute protocol ...
let events = handler.events();
assert_eq!(events[0], RecordedEvent::Send { to: Role::Bob, ... });
}
The recorded events can be inspected in tests to verify protocol behavior.
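The recording mechanism can be sketched with a self-contained event log. `RoleId`, `RecordedEvent`, and `EventLog` below are simplified stand-ins for the crate's types, not its actual definitions.

```rust
// Sketch of the idea behind RecordingHandler: every operation is appended
// to a log that tests can inspect afterwards. These types are illustrative.
#[derive(Debug, Clone, PartialEq)]
enum RoleId { Alice, Bob }

#[derive(Debug, Clone, PartialEq)]
enum RecordedEvent {
    Send { to: RoleId, label: String },
    Recv { from: RoleId, label: String },
}

#[derive(Default)]
struct EventLog {
    events: Vec<RecordedEvent>,
}

impl EventLog {
    fn record(&mut self, event: RecordedEvent) {
        self.events.push(event);
    }
    fn events(&self) -> &[RecordedEvent] {
        &self.events
    }
}

fn main() {
    let mut log = EventLog::default();
    // A protocol run would record each operation as it executes.
    log.record(RecordedEvent::Send { to: RoleId::Bob, label: "Ping".into() });
    log.record(RecordedEvent::Recv { from: RoleId::Bob, label: "Pong".into() });
    assert_eq!(log.events().len(), 2);
    assert_eq!(
        log.events()[0],
        RecordedEvent::Send { to: RoleId::Bob, label: "Ping".into() }
    );
}
```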
NoOpHandler
The NoOpHandler is located in rust/choreography/src/effects/handler.rs. It implements all operations as no-ops. This is useful for testing protocol structure without actual communication.
#![allow(unused)]
fn main() {
let handler = NoOpHandler::<MyRole>::new();
}
All operations succeed immediately without side effects.
Middleware
Middleware wraps handlers to add cross-cutting functionality. Multiple middleware can compose around a single handler.
Trace
The Trace middleware is located in rust/choreography/src/effects/middleware/mod.rs. It logs all operations for debugging. The middleware outputs send, recv, choose, and offer calls with role and message details.
Usage example shows wrapping a handler.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::middleware::Trace;
let base_handler = InMemoryHandler::new(role);
let mut handler = Trace::new(base_handler, "Alice".to_string());
}
Each operation logs before delegating to the inner handler.
Metrics
The Metrics middleware is located in rust/choreography/src/effects/middleware/metrics.rs. It counts operations for monitoring. The middleware tracks send_count, recv_count, choose_count, and offer_count.
Usage example shows metrics collection.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::middleware::Metrics;
let base_handler = InMemoryHandler::new(role);
let mut handler = Metrics::new(base_handler);
// ... execute protocol ...
println!("Sends: {}", handler.send_count());
}
Metrics accumulate over the handler lifetime.
Retry
The Retry middleware is located in rust/choreography/src/effects/middleware/retry.rs. It retries failed operations with exponential backoff. Only send operations are retried since recv changes protocol state.
Usage example configures retry behavior.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::middleware::Retry;
use std::time::Duration;
let base_handler = InMemoryHandler::new(role);
let mut handler = Retry::new(base_handler, 3, Duration::from_millis(100));
}
The handler retries up to 3 times. Delays are 100ms, 200ms, 400ms using exponential backoff.
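The delay schedule can be sketched as a small calculation: the n-th retry waits base * 2^n. `backoff_delay` is an illustrative helper, not a crate API.

```rust
use std::time::Duration;

// Sketch of the exponential backoff schedule described above. This mirrors
// the documented 100ms, 200ms, 400ms sequence for a base delay of 100ms.
fn backoff_delay(base: Duration, attempt: u32) -> Duration {
    base * 2u32.pow(attempt)
}

fn main() {
    let base = Duration::from_millis(100);
    let delays: Vec<Duration> = (0..3).map(|n| backoff_delay(base, n)).collect();
    assert_eq!(delays, vec![
        Duration::from_millis(100),
        Duration::from_millis(200),
        Duration::from_millis(400),
    ]);
}
```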
FaultInjection
The FaultInjection middleware is located in rust/choreography/src/effects/middleware/fault_injection.rs. It requires the test-utils feature. The middleware injects random failures and delays for testing fault tolerance.
Usage example configures fault injection.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::middleware::FaultInjection;
use std::time::Duration;
let base_handler = InMemoryHandler::new(role);
let mut handler = FaultInjection::new(base_handler)
.with_failure_rate(0.1)
.with_delay_range(Duration::from_millis(10), Duration::from_millis(100));
}
Operations randomly fail 10% of the time. Delays range from 10ms to 100ms.
Composing Middleware
Middleware can stack in layers.
#![allow(unused)]
fn main() {
let handler = InMemoryHandler::new(role);
let handler = Retry::new(handler, 3, Duration::from_millis(100));
let handler = Trace::new(handler, "Alice".to_string());
let handler = Metrics::new(handler);
}
Operations flow through the stack. The order is Metrics to Trace to Retry to InMemory.
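The delegation order can be illustrated with a self-contained sketch. `Handler`, `Layer`, and `InMemory` here are stand-ins for the real middleware types: each layer does its own work, then calls the handler it wraps.

```rust
// Sketch of stacked middleware delegating inward. The outermost layer is
// constructed last, so it runs first; these names are illustrative.
trait Handler {
    fn send(&mut self, trace: &mut Vec<&'static str>);
}

struct InMemory;
impl Handler for InMemory {
    fn send(&mut self, trace: &mut Vec<&'static str>) {
        trace.push("in_memory"); // innermost handler does the actual send
    }
}

struct Layer<H: Handler> {
    name: &'static str,
    inner: H,
}
impl<H: Handler> Handler for Layer<H> {
    fn send(&mut self, trace: &mut Vec<&'static str>) {
        trace.push(self.name); // this layer's work (logging, counting, ...)
        self.inner.send(trace); // then delegate inward
    }
}

fn build_stack() -> impl Handler {
    Layer {
        name: "metrics",
        inner: Layer {
            name: "trace",
            inner: Layer { name: "retry", inner: InMemory },
        },
    }
}

fn main() {
    let mut stack = build_stack();
    let mut trace = Vec::new();
    stack.send(&mut trace);
    // Metrics to Trace to Retry to InMemory, as described above.
    assert_eq!(trace, vec!["metrics", "trace", "retry", "in_memory"]);
}
```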
Creating Custom Handlers
Implement ChoreoHandler for your transport.
#![allow(unused)]
fn main() {
pub struct MyHandler {
role: MyRole,
connections: HashMap<MyRole, Connection>,
}
#[async_trait]
impl ChoreoHandler for MyHandler {
type Role = MyRole;
type Endpoint = MyEndpoint;
async fn send<M: Serialize + Send + Sync>(
&mut self, ep: &mut Self::Endpoint, to: Self::Role, msg: &M
) -> Result<()> {
// get_mut returns Option; missing_connection is an illustrative helper
// that builds this handler's error for an unregistered peer
let conn = self.connections.get_mut(&to).ok_or_else(|| missing_connection(&to))?;
let bytes = bincode::serialize(msg)?;
conn.send(bytes).await?;
Ok(())
}
async fn recv<M: DeserializeOwned + Send>(
&mut self, ep: &mut Self::Endpoint, from: Self::Role
) -> Result<M> {
// As in send, convert the Option from get_mut into an error
let conn = self.connections.get_mut(&from).ok_or_else(|| missing_connection(&from))?;
let bytes = conn.recv().await?;
let msg = bincode::deserialize(&bytes)?;
Ok(msg)
}
// Implement choose and offer...
}
}
The handler manages connection state and serialization. The endpoint type holds per-role state if needed.
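As one illustration of what such a transport does at the byte level, here is a std-only sketch of length-prefixed framing. `frame` and `unframe` are hypothetical helpers, not crate APIs; a real handler would pair this with serde serialization.

```rust
// Sketch of byte-level framing a custom transport might use: each message
// is a u32 big-endian length prefix followed by the payload.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = (payload.len() as u32).to_be_bytes().to_vec();
    out.extend_from_slice(payload);
    out
}

// Returns (payload, remaining bytes), or None if the buffer is incomplete.
fn unframe(bytes: &[u8]) -> Option<(&[u8], &[u8])> {
    let len_bytes: [u8; 4] = bytes.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(len_bytes) as usize;
    let rest = bytes.get(4..)?;
    if rest.len() < len {
        return None;
    }
    Some((&rest[..len], &rest[len..]))
}

fn main() {
    let framed = frame(b"hello");
    let (payload, rest) = unframe(&framed).expect("complete frame");
    assert_eq!(payload, b"hello");
    assert!(rest.is_empty());
    // A partial buffer is detected rather than misread.
    assert!(unframe(&framed[..3]).is_none());
}
```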
Handler Selection Guide
Use InMemoryHandler for local testing and simple protocols.
Use RumpsteakHandler for production deployments with session type guarantees.
Use RecordingHandler for test verification and debugging.
Use NoOpHandler for protocol structure testing.
Use middleware to add logging, metrics, retries, or fault injection. Middleware works with any handler.
WASM Considerations
InMemoryHandler and RumpsteakHandler both work in WASM environments. They use futures channels for communication.
For WASM network communication, implement a custom handler. Use web-sys WebSocket or fetch APIs. See WASM Guide for details.
Role Family Resolution
For protocols with parameterized roles (wildcards and ranges), use ChoreographicAdapter for role family resolution.
ChoreographicAdapter Trait
The adapter trait provides methods for resolving role families at runtime.
#![allow(unused)]
fn main() {
pub trait ChoreographicAdapter: Sized {
type Role: RoleId;
type Error;
/// Resolve all instances of a parameterized role family.
fn resolve_family(&self, family: &str) -> Result<Vec<Self::Role>, Self::Error>;
/// Resolve a range of role instances [start, end).
fn resolve_range(&self, family: &str, start: u32, end: u32)
-> Result<Vec<Self::Role>, Self::Error>;
/// Get the number of instances in a role family.
fn family_size(&self, family: &str) -> Result<usize, Self::Error>;
/// Broadcast a message to all roles in the list.
async fn broadcast<M: Message>(&mut self, to: &[Self::Role], msg: M)
-> Result<(), Self::Error>;
/// Collect messages from all roles in the list.
async fn collect<M: Message>(&mut self, from: &[Self::Role])
-> Result<Vec<M>, Self::Error>;
}
}
TestAdapter for Role Families
The TestAdapter implements ChoreographicAdapter for testing protocols with role families.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::runtime::test_adapter::TestAdapter;
// Create adapter with configured role family
let witnesses: Vec<Role> = (0..5).map(Role::Witness).collect();
let adapter = TestAdapter::new(Role::Coordinator)
.with_family("Witness", witnesses);
// Resolve all witnesses
let all = adapter.resolve_family("Witness")?; // 5 witnesses
// Resolve subset for threshold operations
let threshold = adapter.resolve_range("Witness", 0, 3)?; // 3 witnesses
// Get family size
let size = adapter.family_size("Witness")?; // 5
}
Broadcast and Collect
For one-to-many and many-to-one communication patterns:
#![allow(unused)]
fn main() {
// Broadcast to all witnesses
let witnesses = adapter.resolve_family("Witness")?;
adapter.broadcast(&witnesses, SigningRequest { ... }).await?;
// Collect responses from threshold
let threshold = adapter.resolve_range("Witness", 0, 3)?;
let responses: Vec<PartialSignature> = adapter.collect(&threshold).await?;
}
Execution Hints
Annotations like @parallel and @min_responses(N) control how broadcast and collect operations execute. These are deployment hints, not protocol semantics. They affect code generation without changing the session type.
@parallel Coordinator -> Witness[*] : SignRequest
@min_responses(3) Witness[*] -> Coordinator : PartialSignature
The @parallel annotation causes generated code to use futures::future::join_all() for concurrent execution instead of sequential iteration.
The @min_responses(N) annotation generates threshold checking. The collect operation succeeds if at least N responses arrive. Fewer responses result in an InsufficientResponses error.
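The threshold check can be sketched as follows. `CollectError` and `check_threshold` are illustrative names, assuming only the documented behavior: a collect succeeds with at least N responses and errors otherwise.

```rust
// Sketch of the threshold logic @min_responses(N) generates.
#[derive(Debug, PartialEq)]
enum CollectError {
    InsufficientResponses { got: usize, needed: usize },
}

fn check_threshold<T>(responses: Vec<T>, min: usize) -> Result<Vec<T>, CollectError> {
    if responses.len() < min {
        Err(CollectError::InsufficientResponses {
            got: responses.len(),
            needed: min,
        })
    } else {
        Ok(responses)
    }
}

fn main() {
    // Three responses satisfy a threshold of three.
    assert!(check_threshold(vec![1, 2, 3], 3).is_ok());
    // Two responses do not.
    assert_eq!(
        check_threshold(vec![1, 2], 3),
        Err(CollectError::InsufficientResponses { got: 2, needed: 3 })
    );
}
```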
Execution hints are extracted from annotations and passed separately to code generation. This keeps the LocalType pure for Lean verification while enabling runtime optimizations.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::ast::{ExecutionHints, ChoreographyWithHints};
// Extract hints from a parsed choreography
let with_hints = ChoreographyWithHints::from_choreography(choreography);
// Hints are available for codegen
let hints = &with_hints.hints;
if hints.is_parallel(&path) {
// Generate parallel code
}
}
Default behavior without hints is sequential execution with all responses required.
Topology Validation
Role family constraints can be validated against topology configuration.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::topology::Topology;
let config = r#"
topology Prod for Protocol {
role_constraints {
Witness: min = 3, max = 10
}
}
"#;
let topology = Topology::parse(config)?.topology;
// Validate resolved family meets constraints
let count = adapter.family_size("Witness")?;
topology.validate_family("Witness", count)?;
}
Effect Interpretation
Handlers interpret effect programs.
#![allow(unused)]
fn main() {
let program = Program::new()
.send(Role::Bob, msg)
.recv::<Response>(Role::Bob)
.end();
let result = interpret(&mut handler, &mut endpoint, program).await?;
}
The interpret function walks the effect tree. It calls handler methods for each operation. The result contains received messages and execution status.
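A minimal, synchronous sketch of this walk follows. The real interpret function is async and handles choices and branching; `Effect` and the loop below are simplified stand-ins.

```rust
// Sketch of an effect interpreter: the program is a list of abstract
// operations, and the interpreter dispatches each one to handler logic.
#[derive(Debug, Clone, PartialEq)]
enum Effect {
    Send(&'static str),
    Recv(&'static str),
    End,
}

fn interpret(program: &[Effect], handled: &mut Vec<String>) {
    for effect in program {
        match effect {
            // In the real system these calls go to ChoreoHandler methods.
            Effect::Send(to) => handled.push(format!("send->{}", to)),
            Effect::Recv(from) => handled.push(format!("recv<-{}", from)),
            Effect::End => break,
        }
    }
}

fn main() {
    let program = [Effect::Send("Bob"), Effect::Recv("Bob"), Effect::End];
    let mut handled = Vec::new();
    interpret(&program, &mut handled);
    assert_eq!(handled, vec!["send->Bob", "recv<-Bob"]);
}
```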
Using Rumpsteak Handlers
Overview
The RumpsteakHandler provides a production-ready implementation of choreographic effects using session-typed channels. This guide covers everything you need to know to use it effectively in your applications.
Quick Start
Basic Two-Party Protocol
use serde::{Deserialize, Serialize};
use rumpsteak_aura_choreography::effects::{
ChoreoHandler,
handlers::rumpsteak::{RumpsteakHandler, RumpsteakEndpoint, SimpleChannel},
};
// Define roles
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
enum Role { Alice, Bob }
impl rumpsteak_aura::Role for Role {
type Message = MyMessage;
fn seal(&mut self) {}
fn is_sealed(&self) -> bool { false }
}
// Define messages
#[derive(Debug, Clone, Serialize, Deserialize)]
struct MyMessage {
content: String,
}
impl rumpsteak_aura::Message<Box<dyn std::any::Any + Send>> for MyMessage {
fn upcast(msg: Box<dyn std::any::Any + Send>) -> Self {
*msg.downcast::<MyMessage>().unwrap()
}
fn downcast(self) -> Result<Box<dyn std::any::Any + Send>, Self> {
Ok(Box::new(self))
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create endpoints
let mut alice_ep = RumpsteakEndpoint::new(Role::Alice);
let mut bob_ep = RumpsteakEndpoint::new(Role::Bob);
// Setup channels
let (alice_ch, bob_ch) = SimpleChannel::pair();
alice_ep.register_channel(Role::Bob, alice_ch);
bob_ep.register_channel(Role::Alice, bob_ch);
// Create handlers
let mut alice_handler = RumpsteakHandler::<Role, MyMessage>::new();
let mut bob_handler = RumpsteakHandler::<Role, MyMessage>::new();
// Send and receive
let msg = MyMessage { content: "Hello!".to_string() };
alice_handler.send(&mut alice_ep, Role::Bob, &msg).await?;
let received: MyMessage = bob_handler.recv(&mut bob_ep, Role::Alice).await?;
println!("Received: {}", received.content);
Ok(())
}
This example creates two endpoints and connects them with a SimpleChannel pair. It demonstrates direct send and recv calls through the handler.
Core Concepts
Roles
Roles represent participants in the choreography. They must implement:
- rumpsteak_aura::Role
- Clone, Copy, Debug, PartialEq, Eq, Hash
Messages
Messages are the data exchanged between roles. They must:
- Implement Serialize and Deserialize (via serde)
- Implement rumpsteak_aura::Message
- Be Send and Sync
Endpoints
RumpsteakEndpoint<R> manages the channels and session state for a role:
- One endpoint per role in the protocol
- Contains channels to all peers
- Tracks session metadata (operation counts, state descriptions)
Channels
SimpleChannel provides bidirectional async message passing:
- Created in pairs: SimpleChannel::pair()
- Uses mpsc unbounded channels internally
- Automatically serializes/deserializes messages
Handlers
RumpsteakHandler<R, M> interprets choreographic effects:
- Stateless (can be shared across operations)
- Implements the ChoreoHandler trait
- Provides send, recv, choose, offer operations
API Reference
RumpsteakEndpoint
#![allow(unused)]
fn main() {
impl<R: Role + Eq + Hash + Clone> RumpsteakEndpoint<R>
}
This shows the generic bounds required by the endpoint type.
Constructor
#![allow(unused)]
fn main() {
pub fn new(local_role: R) -> Self
}
Create a new endpoint for a role.
Channel Management
#![allow(unused)]
fn main() {
pub fn register_channel<T>(&mut self, peer: R, channel: T)
}
Register a channel with a peer role.
#![allow(unused)]
fn main() {
pub fn register_session(&mut self, peer: R, session: RumpsteakSession)
}
Register a dynamically dispatched session (for example one produced via
RumpsteakSession::from_simple_channel or
RumpsteakSession::from_sink_stream). Use this when you need additional
transport logic such as WebSockets, recording, or custom middleware stacks.
#![allow(unused)]
fn main() {
pub fn has_channel(&self, peer: &R) -> bool
}
Check if a channel exists for a peer.
#![allow(unused)]
fn main() {
pub fn close_channel(&mut self, peer: &R) -> bool
}
Close a specific channel.
#![allow(unused)]
fn main() {
pub fn close_all_channels(&mut self) -> usize
}
Close all channels and return count.
#![allow(unused)]
fn main() {
pub fn active_channel_count(&self) -> usize
}
Get number of active channels.
#![allow(unused)]
fn main() {
pub fn is_all_closed(&self) -> bool
}
Check if all channels are closed.
Metadata Access
#![allow(unused)]
fn main() {
pub fn get_metadata(&self, peer: &R) -> Option<&SessionMetadata>
}
Get session metadata for a peer.
#![allow(unused)]
fn main() {
pub fn all_metadata(&self) -> Vec<(R, &SessionMetadata)>
}
Get metadata for all sessions.
RumpsteakHandler
#![allow(unused)]
fn main() {
impl<R, M> RumpsteakHandler<R, M>
}
This shows the handler type parameters. The handler is generic over role and message types.
Constructor
#![allow(unused)]
fn main() {
pub fn new() -> Self
}
Create a new handler.
ChoreoHandler Implementation
#![allow(unused)]
fn main() {
async fn send<Msg>(&mut self, ep: &mut Endpoint, to: Role, msg: &Msg) -> Result<()>
where Msg: Serialize + Send + Sync
}
Send a message to a role.
#![allow(unused)]
fn main() {
async fn recv<Msg>(&mut self, ep: &mut Endpoint, from: Role) -> Result<Msg>
where Msg: DeserializeOwned + Send
}
Receive a message from a role.
#![allow(unused)]
fn main() {
async fn choose(&mut self, ep: &mut Endpoint, who: Role, label: Label) -> Result<()>
}
Make a choice (internal choice).
#![allow(unused)]
fn main() {
async fn offer(&mut self, ep: &mut Endpoint, from: Role) -> Result<Label>
}
Offer a choice (external choice).
#![allow(unused)]
fn main() {
async fn with_timeout<F, T>(&mut self, ep: &mut Endpoint, at: Role, dur: Duration, body: F) -> Result<T>
where F: Future<Output = Result<T>> + Send
}
Execute operation with timeout.
SimpleChannel
#![allow(unused)]
fn main() {
pub struct SimpleChannel
}
This type wraps a bidirectional byte channel. It is the default transport for the handler.
Constructor
#![allow(unused)]
fn main() {
pub fn pair() -> (Self, Self)
}
Create a connected pair of channels.
Operations
#![allow(unused)]
fn main() {
pub async fn send(&mut self, msg: Vec<u8>) -> Result<(), String>
}
Send raw bytes.
#![allow(unused)]
fn main() {
pub async fn recv(&mut self) -> Result<Vec<u8>, String>
}
Receive raw bytes.
RumpsteakSession Builders
#![allow(unused)]
fn main() {
RumpsteakSession::from_simple_channel(channel: SimpleChannel)
}
Wraps a legacy channel in the new dynamic session API.
#![allow(unused)]
fn main() {
RumpsteakSession::from_sink_stream(sender, receiver)
}
Accepts any async sink and stream pair carrying Vec<u8> payloads. It exposes the pair to the handler. Use this for custom transports, then call endpoint.register_session(peer, session).
SessionMetadata
#![allow(unused)]
fn main() {
pub struct SessionMetadata {
pub state_description: String,
pub is_complete: bool,
pub operation_count: usize,
}
}
This struct records session state for a peer. It is updated as operations run.
Tracks session progression:
- state_description: Human-readable current state
- is_complete: Whether the session has completed
- operation_count: Number of operations performed
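A sketch of how these fields might be updated as a session progresses. The struct mirrors the definition above; the `record_op` method is illustrative, not the crate's API.

```rust
// Sketch of metadata tracking: each operation bumps the count and updates
// the human-readable state description.
pub struct SessionMetadata {
    pub state_description: String,
    pub is_complete: bool,
    pub operation_count: usize,
}

impl SessionMetadata {
    // Illustrative helper: record one completed operation.
    fn record_op(&mut self, description: &str) {
        self.operation_count += 1;
        self.state_description = description.to_string();
    }
}

fn main() {
    let mut meta = SessionMetadata {
        state_description: "initial".to_string(),
        is_complete: false,
        operation_count: 0,
    };
    meta.record_op("sent Ping to Bob");
    meta.record_op("received Pong from Bob");
    meta.is_complete = true;
    assert_eq!(meta.operation_count, 2);
    assert_eq!(meta.state_description, "received Pong from Bob");
    assert!(meta.is_complete);
}
```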
Usage Patterns
Pattern 1: Request-Response
#![allow(unused)]
fn main() {
// Client side
let request = Request { query: "data".to_string() };
handler.send(&mut endpoint, Role::Server, &request).await?;
let response: Response = handler.recv(&mut endpoint, Role::Server).await?;
}
This pattern sends a request and waits for a response. It is the simplest round trip flow.
Pattern 2: Choice with Branches
#![allow(unused)]
fn main() {
// Sender
let decision = if condition {
Label("accept")
} else {
Label("reject")
};
handler.choose(&mut endpoint, Role::Other, decision).await?;
// Receiver
let choice = handler.offer(&mut endpoint, Role::Other).await?;
match choice.0 {
"accept" => {
// Handle accept branch
}
"reject" => {
// Handle reject branch
}
_ => unreachable!(),
}
}
This pattern uses choose and offer to coordinate a branch. The chosen label drives the receiver logic.
Pattern 3: Sequential Messages
#![allow(unused)]
fn main() {
for item in items {
handler.send(&mut endpoint, Role::Peer, &item).await?;
let ack: Ack = handler.recv(&mut endpoint, Role::Peer).await?;
}
}
This pattern sends a batch of items with acknowledgments. Each step waits for the peer response.
Pattern 4: Multi-Party Coordination
#![allow(unused)]
fn main() {
// Coordinator
let offer: Offer = handler.recv(&mut endpoint, Role::Buyer).await?;
handler.send(&mut endpoint, Role::Seller, &offer).await?;
let response: Response = handler.recv(&mut endpoint, Role::Seller).await?;
handler.send(&mut endpoint, Role::Buyer, &response).await?;
}
This pattern relays messages between two peers. It keeps the coordinator role in control of ordering.
Pattern 5: Timeout Protection
#![allow(unused)]
fn main() {
let result = handler.with_timeout(
&mut endpoint,
Role::Self_,
Duration::from_secs(5),
async {
handler.recv(&mut endpoint, Role::Peer).await
}
).await;
match result {
Ok(msg) => {
// Process message
}
Err(ChoreographyError::Timeout(_)) => {
// Handle timeout
}
Err(e) => {
// Handle other errors
}
}
}
This pattern wraps a receive in with_timeout. It distinguishes timeout errors from other failures.
Best Practices
1. Resource Management
Recommended approach:
#![allow(unused)]
fn main() {
// Close channels explicitly when done
endpoint.close_all_channels();
}
This closes channels explicitly when the protocol is complete.
Recommended alternative:
#![allow(unused)]
fn main() {
// Use Drop to ensure cleanup
{
let mut endpoint = RumpsteakEndpoint::new(role);
// ... use endpoint ...
} // Drop ensures cleanup
}
This relies on drop to clean up resources at scope end.
Avoid:
#![allow(unused)]
fn main() {
// Don't forget to clean up resources
let mut endpoint = RumpsteakEndpoint::new(role);
// ... use endpoint ...
// Forgot to close!
}
This leaves channels open after the protocol.
2. Error Handling
Recommended approach:
#![allow(unused)]
fn main() {
match handler.send(&mut ep, role, &msg).await {
Ok(()) => { /* success */ }
Err(ChoreographyError::Transport(e)) => {
// Handle transport error
tracing::error!("Send failed: {}", e);
}
Err(e) => {
// Handle other errors
}
}
}
This handles transport errors explicitly. It keeps other errors visible.
Avoid:
#![allow(unused)]
fn main() {
// Don't ignore errors
handler.send(&mut ep, role, &msg).await.unwrap();
}
This panics on failures and hides transport details.
3. Channel Setup
Recommended approach:
#![allow(unused)]
fn main() {
// Setup all channels before starting protocol
let (ch1, ch2) = SimpleChannel::pair();
alice_ep.register_channel(Role::Bob, ch1);
bob_ep.register_channel(Role::Alice, ch2);
// Then start protocol
protocol_run().await?;
}
This ensures channels exist before the first send.
Avoid:
#![allow(unused)]
fn main() {
// Don't register channels mid-protocol
handler.send(&mut ep, role, &msg).await?; // Might not have channel!
ep.register_channel(role, channel); // Too late!
}
This can cause send failures when a channel is missing.
4. Metadata Usage
Recommended approach:
#![allow(unused)]
fn main() {
// Use metadata for debugging and monitoring
if let Some(meta) = endpoint.get_metadata(&peer) {
tracing::info!(
peer = ?peer,
operations = meta.operation_count,
state = %meta.state_description,
"Session status"
);
}
}
This reports progress and state for each peer. It is useful for debugging.
5. Testing
Recommended approach:
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_protocol() {
// Setup test environment
let mut alice_ep = RumpsteakEndpoint::new(Role::Alice);
let mut bob_ep = RumpsteakEndpoint::new(Role::Bob);
let (alice_ch, bob_ch) = SimpleChannel::pair();
alice_ep.register_channel(Role::Bob, alice_ch);
bob_ep.register_channel(Role::Alice, bob_ch);
// Test protocol
let msg = TestMessage { data: vec![1, 2, 3] };
handler.send(&mut alice_ep, Role::Bob, &msg).await.unwrap();
let received: TestMessage = handler.recv(&mut bob_ep, Role::Alice).await.unwrap();
assert_eq!(received.data, vec![1, 2, 3]);
}
}
This sets up a local channel pair and exercises a full send and receive. It validates handler wiring in tests.
Topology
Topology separates protocol logic from deployment configuration. Choreographies remain location-free while topology definitions specify where roles execute. This enables the same protocol to run in different environments without modification.
Overview
The topology system provides three capabilities. It maps roles to network locations. It defines constraints on role placement. It selects transports based on location relationships.
Design Principle
Choreographies describe what happens between roles, not where they are.
Alice -> Bob: Ping
Bob -> Alice: Pong
This protocol works regardless of whether Alice and Bob are in the same process, on the same machine, or across continents. Location is a deployment concern, not a protocol concern.
The topology layer makes this separation explicit.
Choreography          GlobalType              Topology         Execution
─────────────────────────────────────────────────────────────────────────────────
Alice -> Bob: Ping    comm "Alice" "Bob"      Alice ↦ nodeA    send(nodeA, nodeB, Ping)
                        [("Ping", end)]       Bob ↦ nodeB
This diagram shows how a choreography and a topology combine at runtime. It highlights that location mapping happens after projection.
Location Types
Locations specify where a role executes.
#![allow(unused)]
fn main() {
pub enum Location {
Local, // In-process
Remote(TopologyEndpoint), // Network endpoint
Colocated(RoleName), // Same node as another role
}
}
The Local variant indicates in-process execution using memory channels. The Remote variant specifies a network endpoint. The Colocated variant references another role’s location.
Topology Structure
A topology maps roles to locations with optional constraints.
#![allow(unused)]
fn main() {
pub struct Topology {
mode: Option<TopologyMode>,
locations: BTreeMap<RoleName, Location>,
constraints: Vec<TopologyConstraint>,
}
}
The mode field stores an optional deployment mode. The locations field maps role names to their locations. The constraints field specifies placement requirements.
Topology Constraints
Constraints express requirements on role placement.
#![allow(unused)]
fn main() {
pub enum TopologyConstraint {
Colocated(RoleName, RoleName), // Must be same node
Separated(RoleName, RoleName), // Must be different nodes
Pinned(RoleName, Location), // Must be at specific location
Region(RoleName, Region), // Must be in specific region
}
}
Constraints are validated when binding a topology to a choreography.
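A sketch of what validating a Separated constraint against a location map could look like, with locations simplified to node-name strings. `check_separated` is an illustrative helper, not the crate's validation code.

```rust
use std::collections::BTreeMap;

// Sketch of checking a Separated(a, b) constraint: the two roles must
// resolve to different nodes, and both must be placed at all.
fn check_separated(locations: &BTreeMap<&str, &str>, a: &str, b: &str) -> bool {
    match (locations.get(a), locations.get(b)) {
        (Some(la), Some(lb)) => la != lb,
        _ => false, // an unplaced role also fails validation
    }
}

fn main() {
    let mut locations = BTreeMap::new();
    locations.insert("Coordinator", "node-a");
    locations.insert("ParticipantA", "node-b");
    locations.insert("ParticipantB", "node-a");
    // Coordinator and ParticipantA are on different nodes: satisfied.
    assert!(check_separated(&locations, "Coordinator", "ParticipantA"));
    // Coordinator and ParticipantB share node-a: violated.
    assert!(!check_separated(&locations, "Coordinator", "ParticipantB"));
}
```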
Role Family Constraints
For protocols with parameterized roles (wildcards and ranges), you can specify instance count constraints.
#![allow(unused)]
fn main() {
pub struct RoleFamilyConstraint {
min: u32, // Minimum instances required
max: Option<u32>, // Maximum instances allowed (None = unlimited)
}
}
These constraints validate that resolved role families have an acceptable number of instances.
DSL Syntax
Role constraints are specified in a role_constraints block.
topology ThresholdSig for ThresholdSignature {
  Coordinator: localhost:8000
  role_constraints {
    Witness: min = 3, max = 10
    Worker: min = 1
  }
  constraints {
    region: Coordinator -> us_east_1
  }
}
The min specifies the minimum number of instances required. The max specifies the maximum allowed. If max is omitted, there is no upper limit.
Rust API
Role constraints are available through the Topology struct.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::topology::{Topology, RoleFamilyConstraint};
// Access constraint for a family
let constraint = topology.get_family_constraint("Witness");
if let Some(c) = constraint {
println!("Witness: min={}, max={:?}", c.min, c.max);
}
// Validate a resolved count
let count = adapter.family_size("Witness")?;
topology.validate_family("Witness", count)?;
}
The validate_family method returns an error if the count is below minimum or above maximum.
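The check can be sketched as follows, assuming only the documented min/max semantics. The function body and error type here are illustrative, not the crate's implementation.

```rust
// Sketch of the min/max check that family validation performs.
#[derive(Debug, PartialEq)]
struct ConstraintViolation {
    family: String,
    count: usize,
}

fn validate_family(
    family: &str,
    count: usize,
    min: u32,
    max: Option<u32>, // None means no upper limit
) -> Result<(), ConstraintViolation> {
    let below = count < min as usize;
    let above = max.map_or(false, |m| count > m as usize);
    if below || above {
        Err(ConstraintViolation { family: family.to_string(), count })
    } else {
        Ok(())
    }
}

fn main() {
    // Witness: min = 3, max = 10
    assert!(validate_family("Witness", 5, 3, Some(10)).is_ok());
    assert!(validate_family("Witness", 2, 3, Some(10)).is_err());
    // Worker: min = 1, no maximum
    assert!(validate_family("Worker", 50, 1, None).is_ok());
}
```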
Integration with TestAdapter
Role constraints integrate with the runtime adapter for validation at startup.
#![allow(unused)]
fn main() {
let topology = Topology::parse(config)?.topology;
// Create adapter with role family
let witnesses: Vec<Role> = (0..5).map(Role::Witness).collect();
let adapter = TestAdapter::new(Role::Coordinator)
.with_family("Witness", witnesses);
// Validate before running protocol
let count = adapter.family_size("Witness")?;
topology.validate_family("Witness", count)?;
}
This ensures the configured role family meets the deployment requirements before the protocol starts.
DSL Syntax
Topologies are defined using a DSL extension.
topology TwoPhaseCommit_Dev for TwoPhaseCommit {
  Coordinator: localhost:9000
  ParticipantA: localhost:9001
  ParticipantB: localhost:9002
}
This topology maps three roles to local network endpoints.
Constraints are specified in a nested block.
topology TwoPhaseCommit_Prod for TwoPhaseCommit {
  Coordinator: coordinator.prod.internal:9000
  ParticipantA: participant-a.prod.internal:9000
  ParticipantB: participant-b.prod.internal:9000
  constraints {
    separated: Coordinator, ParticipantA
    separated: Coordinator, ParticipantB
    region: Coordinator -> us_east_1
    region: ParticipantA -> us_west_2
  }
}
The constraints block specifies separation constraints and regions. Use multiple separated lines when more than two roles must be distinct.
Built-in Modes
Topology modes provide common configurations.
topology MyProtocol_Test for MyProtocol {
mode: local
}
The local mode places all roles in the same process. This is the default for testing.
topology MyProtocol_K8s for MyProtocol {
mode: kubernetes(my_app)
}
The kubernetes mode discovers services using the Kubernetes API. The namespace is provided in the mode value.
topology MyProtocol_Consul for MyProtocol {
mode: consul(dc1)
}
The consul mode discovers services using the Consul API. The datacenter is provided in the mode value.
Available modes include local, per_role, kubernetes, and consul.
Rust API
Zero Configuration
Testing requires no explicit topology.
#![allow(unused)]
fn main() {
let handler = InMemoryHandler::new(Role::Alice);
}
This creates an in-memory handler with implicit local topology.
Minimal Configuration
Simple deployments specify peer addresses directly.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{RoleName, TopologyEndpoint};
let topology = Topology::builder()
.local_role(RoleName::from_static("Alice"))
.remote_role(
RoleName::from_static("Bob"),
TopologyEndpoint::new("localhost:8081").unwrap(),
)
.build();
let handler = PingPong::with_topology(topology, Role::Alice)?;
}
This builds a topology in code and binds it to a generated protocol handler.
Full Configuration
Production deployments use explicit topology objects.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{Region, RoleName, TopologyEndpoint};
let topology = Topology::builder()
.remote_role(
RoleName::from_static("Coordinator"),
TopologyEndpoint::new("coordinator.internal:9000").unwrap(),
)
.remote_role(
RoleName::from_static("ParticipantA"),
TopologyEndpoint::new("participant-a.internal:9000").unwrap(),
)
.remote_role(
RoleName::from_static("ParticipantB"),
TopologyEndpoint::new("participant-b.internal:9000").unwrap(),
)
.separated(
RoleName::from_static("Coordinator"),
RoleName::from_static("ParticipantA"),
)
.separated(
RoleName::from_static("Coordinator"),
RoleName::from_static("ParticipantB"),
)
.region(
RoleName::from_static("Coordinator"),
Region::new("us_east_1").unwrap(),
)
.build();
let handler = TwoPhaseCommit::with_topology(topology, Role::Coordinator)?;
}
This example configures explicit endpoints and constraints. It then creates a topology aware handler for a role.
Loading from Files
Topologies can be loaded from external files.
#![allow(unused)]
fn main() {
let parsed = Topology::load("deploy/prod.topology")?;
let handler = PingPong::with_topology(parsed.topology, Role::Alice)?;
}
This supports separation of code and configuration.
Topology Integration
Topology definitions are separate from the choreography DSL. Define topologies in standalone .topology files or strings and load them at runtime.
#![allow(unused)]
fn main() {
let parsed = Topology::load("deploy/dev.topology")?;
let handler = PingPong::with_topology(parsed.topology, Role::Alice)?;
}
This loads a topology file and binds it to a generated handler.
You can also parse a topology from a string.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::topology::parse_topology;
let parsed = parse_topology(r#"
topology Dev for PingPong {
Alice: localhost:8080
Bob: localhost:8081
}
"#)?;
let handler = PingPong::with_topology(parsed.topology, Role::Alice)?;
}
This parses the DSL into a ParsedTopology. The topology field contains the Topology value used at runtime.
Transport Selection
The topology determines which transport to use for each role pair.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{RoleName, TopologyError};
fn select_transport(
topo: &Topology,
from: &RoleName,
to: &RoleName,
) -> Result<Transport, TopologyError> {
let from_loc = topo.get_location(from)?;
let to_loc = topo.get_location(to)?;
Ok(match (from_loc, to_loc) {
(Location::Local, Location::Local) => InMemoryTransport::new(),
(_, Location::Remote(endpoint)) => TcpTransport::new(endpoint),
(_, Location::Colocated(peer)) => SharedMemoryTransport::new(peer),
})
}
}
The handler automatically routes messages through appropriate transports.
Validation
Topologies are validated against choreography roles.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::RoleName;
let choreo = parse_choreography_str(dsl)?;
let topo = parse_topology(topo_dsl)?;
let roles = [
RoleName::from_static("Alice"),
RoleName::from_static("Bob"),
];
let validation = topo.topology.validate(&roles);
if !validation.is_valid() {
return Err(format!("Topology validation failed: {:?}", validation).into());
}
}
Validation ensures all choreography roles appear in the topology. It also verifies constraints are satisfiable.
Lean Correspondence
The Lean formalization defines topology types and validation.
inductive Location where
| local
| remote (endpoint : String)
| colocated (peer : String)
structure Topology where
locations : RBMap String Location compare
constraints : List TopologyConstraint
def Topology.valid (topo : Topology) (g : GlobalType) : Bool :=
g.roles.all (fun r => topo.locations.contains r)
Projection correctness is proven independent of topology. The project function does not reference location information.
Default Behavior
The InMemoryHandler::new() API remains valid. Choreographies without explicit topologies use implicit local mode.
Usage Example
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{choreography, RoleName, Topology, TopologyEndpoint};
choreography!(r#"
protocol Auction =
roles Auctioneer, Bidder1, Bidder2
Auctioneer -> Bidder1 : Item
Auctioneer -> Bidder2 : Item
Bidder1 -> Auctioneer : Bid
Bidder2 -> Auctioneer : Bid
Auctioneer -> Bidder1 : Result
Auctioneer -> Bidder2 : Result
"#);
// Development topology
let dev_topo = Topology::builder()
.remote_role(
RoleName::from_static("Auctioneer"),
TopologyEndpoint::new("localhost:9000").unwrap(),
)
.remote_role(
RoleName::from_static("Bidder1"),
TopologyEndpoint::new("localhost:9001").unwrap(),
)
.remote_role(
RoleName::from_static("Bidder2"),
TopologyEndpoint::new("localhost:9002").unwrap(),
)
.build();
// Production topology
let prod_topo = Topology::builder()
.remote_role(
RoleName::from_static("Auctioneer"),
TopologyEndpoint::new("auction.prod:9000").unwrap(),
)
.remote_role(
RoleName::from_static("Bidder1"),
TopologyEndpoint::new("bidder1.prod:9000").unwrap(),
)
.remote_role(
RoleName::from_static("Bidder2"),
TopologyEndpoint::new("bidder2.prod:9000").unwrap(),
)
.separated(
RoleName::from_static("Auctioneer"),
RoleName::from_static("Bidder1"),
)
.separated(
RoleName::from_static("Auctioneer"),
RoleName::from_static("Bidder2"),
)
.build();
// Same protocol, different deployments
let dev_handler = Auction::with_topology(dev_topo, Role::Auctioneer)?;
let prod_handler = Auction::with_topology(prod_topo, Role::Auctioneer)?;
}
This example shows the same auction protocol deployed in development and production environments.
Resource Heap
The resource heap provides explicit state management for protocol execution. Resources are allocated with content-addressed identifiers and consumed with nullifier tracking. This enables replay protection, deterministic execution, and ZK compatibility.
Overview
The resource heap provides three capabilities. It tracks allocated resources with unique identifiers. It prevents double-consumption through nullifier sets. It supports Merkleization for verifiable state proofs.
Motivation
Channel consumption is implicit in traditional session type implementations. The resource heap makes consumption explicit for several benefits.
Replay protection prevents processing the same message twice. Byzantine fault detection proves protocol violations with heap state. ZK compatibility represents state as a Merkle tree. Deterministic execution ensures heap operations are pure functions.
Resource Model
Resources are protocol artifacts with unique identifiers.
#![allow(unused)]
fn main() {
pub struct ResourceId {
hash: [u8; 32],
counter: u64,
}
pub enum Resource {
Channel(ChannelState),
Message(Message),
Session { role: String, type_hash: u64 },
Value { tag: String, data: Vec<u8> },
}
}
The ResourceId stores a content hash and an allocation counter. The Resource enum represents different kinds of heap-allocated values.
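The interplay of content hash and counter can be sketched as follows. This simplification uses std's 64-bit `DefaultHasher` in place of the crate's 32-byte content hash, and the types here are illustrative stand-ins, not the real definitions.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified stand-in: the real ResourceId carries a 32-byte hash.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct ResourceId {
    hash: u64,
    counter: u64,
}

#[derive(Hash)]
enum Resource {
    Value { tag: String, data: Vec<u8> },
}

impl ResourceId {
    // Derive the ID from the resource content plus the heap's allocation counter.
    fn from_resource(resource: &Resource, counter: u64) -> Self {
        let mut h = DefaultHasher::new();
        resource.hash(&mut h);
        ResourceId { hash: h.finish(), counter }
    }
}

fn main() {
    let r = Resource::Value { tag: "ping".into(), data: vec![1, 2, 3] };
    let a = ResourceId::from_resource(&r, 0);
    let b = ResourceId::from_resource(&r, 1);
    assert_eq!(a.hash, b.hash); // same content, same hash
    assert_ne!(a, b);           // the counter keeps the IDs distinct
    println!("ok");
}
```

Allocating the same content twice therefore yields two distinct IDs, which is what allows identical messages to coexist on the heap.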
Heap Structure
The heap stores resources in a deterministic data structure.
#![allow(unused)]
fn main() {
pub struct Heap {
resources: BTreeMap<ResourceId, Resource>,
nullifiers: BTreeSet<ResourceId>,
counter: u64,
}
}
Resources are stored in a BTreeMap for deterministic iteration order. Nullifiers track consumed resources. The allocation counter ensures unique IDs for identical content.
Heap Operations
All heap operations are pure functions returning new heap values.
Allocation
#![allow(unused)]
fn main() {
impl Heap {
pub fn alloc(&self, resource: Resource) -> (ResourceId, Heap) {
let rid = ResourceId::from_resource(&resource, self.counter);
let mut new_heap = self.clone();
new_heap.resources.insert(rid.clone(), resource);
new_heap.counter += 1;
(rid, new_heap)
}
}
}
Allocation creates a new resource ID from the content hash and counter. The counter ensures unique IDs when allocating identical content multiple times.
Consumption
#![allow(unused)]
fn main() {
impl Heap {
pub fn consume(&self, rid: &ResourceId) -> Result<Heap, HeapError> {
if self.nullifiers.contains(rid) {
return Err(HeapError::AlreadyConsumed(rid.clone()));
}
if !self.resources.contains_key(rid) {
return Err(HeapError::NotFound(rid.clone()));
}
let mut new_heap = self.clone();
new_heap.nullifiers.insert(rid.clone());
Ok(new_heap)
}
}
}
Consumption adds the resource ID to the nullifier set. Double-consumption fails with an error. The resource remains in the heap for reference but is marked as consumed.
Reading
#![allow(unused)]
fn main() {
impl Heap {
pub fn read(&self, rid: &ResourceId) -> Result<&Resource, HeapError> {
if self.is_consumed(rid) {
return Err(HeapError::AlreadyConsumed(rid.clone()));
}
self.resources.get(rid).ok_or(HeapError::NotFound(rid.clone()))
}
pub fn is_consumed(&self, rid: &ResourceId) -> bool {
self.nullifiers.contains(rid)
}
}
}
Reading retrieves a resource by ID and fails if it was consumed. The is_consumed method checks the nullifier set.
Nullifier Tracking
Nullifiers prevent double-spending of resources.
#![allow(unused)]
fn main() {
let (msg_id, heap1) = heap.alloc(Resource::Message(msg));
let heap2 = heap1.consume(&msg_id)?;
let result = heap2.consume(&msg_id); // Error: AlreadyConsumed
}
Once a resource is consumed, subsequent consumption attempts fail. This provides replay protection at the heap level.
Merkleization
The heap state can be converted to a Merkle tree for verification.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{HeapCommitment, MerkleTree};
let tree = MerkleTree::from_heap(&heap);
let root = tree.root;
let proof = tree.prove(0);
let commitment = HeapCommitment::from_heap(&heap);
}
The Merkle root commits to the active resources in the heap. MerkleTree::prove generates inclusion proofs by leaf index. HeapCommitment::from_heap combines resource and nullifier roots with the allocation counter.
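The root-commitment idea can be sketched in a few lines. This toy version folds sorted leaf hashes pairwise with std's `DefaultHasher`; the crate's `MerkleTree` uses a cryptographic hash and also commits to nullifiers, so treat this only as an illustration of why the root changes when the active resource set changes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_pair(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

// Fold a level of leaf hashes pairwise until a single root remains.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    if level.is_empty() {
        return 0;
    }
    while level.len() > 1 {
        let next: Vec<u64> = level
            .chunks(2)
            .map(|c| if c.len() == 2 { hash_pair(c[0], c[1]) } else { c[0] })
            .collect();
        level = next;
    }
    level[0]
}

fn main() {
    let root1 = merkle_root(vec![11, 22, 33]);
    let root2 = merkle_root(vec![11, 22, 33, 44]); // one more active resource
    assert_eq!(root1, merkle_root(vec![11, 22, 33])); // deterministic
    assert_ne!(root1, root2); // the commitment tracks the heap contents
    println!("ok");
}
```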
Integration with Configuration
The heap replaces ad-hoc state tracking in protocol configurations.
#![allow(unused)]
fn main() {
pub struct Configuration {
processes: Vec<RoleProcess>,
heap: Heap,
}
}
Channels become heap-allocated resources. Messages are enqueued via allocation and dequeued via consumption. Session state is tracked as a resource.
Reduction Semantics
Protocol reduction rules use heap operations.
The Send rule allocates a message resource.
#![allow(unused)]
fn main() {
fn reduce_send(config: Configuration, from: Role, to: Role, msg: Message)
-> Configuration
{
let (msg_id, new_heap) = config.heap.alloc(Resource::Message(msg));
Configuration { heap: new_heap, ..config }
}
}
The Recv rule consumes a message resource.
#![allow(unused)]
fn main() {
fn reduce_recv(config: Configuration, rid: ResourceId)
-> Result<(Message, Configuration), ReductionError>
{
// Extract the message payload before consuming the resource.
let msg = match config.heap.read(&rid)? {
Resource::Message(m) => m.clone(),
// Error variant name illustrative: the resource must be a message here.
_ => return Err(ReductionError::UnexpectedResource),
};
let new_heap = config.heap.consume(&rid)?;
Ok((msg, Configuration { heap: new_heap, ..config }))
}
}
This makes message passing explicit and verifiable.
Determinism Requirements
All heap operations are deterministic.
Resource IDs derive from content hash plus allocation counter. The BTreeMap and BTreeSet provide deterministic iteration order. Serialization produces sorted (ResourceId, Resource) pairs.
No HashMap or HashSet appears in heap implementation. This ensures the same operations produce the same results across executions.
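The ordered-container guarantee is easy to demonstrate: a `BTreeMap` always iterates in key order regardless of insertion order, whereas `HashMap` iteration order can vary between executions.

```rust
use std::collections::BTreeMap;

fn main() {
    // Insert in arbitrary order; iteration is always sorted by key.
    let mut resources = BTreeMap::new();
    resources.insert(3u64, "session");
    resources.insert(1u64, "channel");
    resources.insert(2u64, "message");
    let keys: Vec<u64> = resources.keys().copied().collect();
    assert_eq!(keys, vec![1, 2, 3]);
    println!("ok");
}
```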
Lean Correspondence
The Lean formalization defines heap types and operations.
structure ResourceId where
id : ContentId
deriving DecidableEq, BEq, Hashable
inductive Resource where
| channel : ChannelState → Resource
| message : Message → Resource
| session : LocalTypeR → Resource
structure Heap where
resources : RBMap ResourceId Resource compare
nullifiers : RBSet ResourceId compare
def Heap.alloc (h : Heap) (r : Resource) : (ResourceId × Heap) := ...
def Heap.consume (h : Heap) (rid : ResourceId) : Except HeapError Heap := ...
def Heap.read (h : Heap) (rid : ResourceId) : Except HeapError Resource := ...
Verification theorems are proven for heap properties.
theorem alloc_consume_succeeds (h : Heap) (r : Resource) :
let (rid, h') := h.alloc r
(h'.consume rid).isOk = true
theorem consume_consume_fails (h : Heap) (rid : ResourceId) :
(h.consume rid).isOk → (h.consume rid >>= fun h' => h'.consume rid).isErr
These theorems ensure allocation followed by consumption succeeds, and double-consumption fails.
Error Types
#![allow(unused)]
fn main() {
pub enum HeapError {
NotFound(ResourceId),
AlreadyConsumed(ResourceId),
AlreadyExists(ResourceId),
TypeMismatch { expected: String, got: String },
Other(String),
}
}
Errors provide context for debugging heap operations.
Usage Example
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{Heap, HeapMessage, MerkleTree, Resource};
// Create empty heap
let heap = Heap::new();
// Allocate a message
let msg = HeapMessage::new("Alice", "Bob", "ping", vec![], 0);
let (msg_id, heap) = heap.alloc(Resource::Message(msg));
// Check resource exists
assert!(!heap.is_consumed(&msg_id));
// Consume the message
let heap = heap.consume(&msg_id)?;
assert!(heap.is_consumed(&msg_id));
// Double consumption fails
assert!(heap.consume(&msg_id).is_err());
// Compute Merkle root for verification
let tree = MerkleTree::from_heap(&heap);
let root = tree.root;
let proof = tree.prove(0);
}
This example demonstrates allocation, consumption, and Merkle proof generation. The proof uses a leaf index from the active resource list.
ZK Compatibility
The heap design supports zero-knowledge proof systems.
Merkle proofs enable proving resource existence without revealing the full heap. Nullifier proofs enable proving consumption without revealing which resource was consumed. The deterministic structure ensures proof verification is reproducible.
Custom hashers can be introduced for ZK-friendly hashing when generating circuit-compatible proofs.
DSL Extensions Part 1: Runtime Effect System
The extension system enables domain-specific choreographic effects. This is Part 1 of the DSL extension guide, covering the runtime effect system that handles execution-time extensions like logging, validation, and metrics.
Part 2 (DSL Extensions Part 2: Syntax Extensions) covers syntax-level extensions that add new grammar rules and custom protocol constructs to the choreographic DSL.
Overview
Extensions are typed algebraic effects. They participate in the full effect lifecycle. This includes construction, projection, interpretation, and composition.
Type-safe extension creation happens at construction time. Role-based filtering occurs during projection. Handler dispatch executes at interpretation time. Cross-crate reuse enables composition.
Core Concepts
ExtensionEffect Trait
All extensions implement the ExtensionEffect trait.
#![allow(unused)]
fn main() {
pub trait ExtensionEffect: Send + Sync + Debug {
fn type_id(&self) -> TypeId;
fn type_name(&self) -> &'static str;
fn participating_role_ids(&self) -> Vec<Box<dyn Any>>;
fn as_any(&self) -> &dyn Any;
fn as_any_mut(&mut self) -> &mut dyn Any;
fn clone_box(&self) -> Box<dyn ExtensionEffect>;
}
}
The trait provides TypeId-based discrimination for type-safe runtime dispatch. Type-erased role participation enables projection semantics while maintaining object safety. Type-safe downcasting uses the Any trait. Extension cloning supports effect algebra operations.
The trait uses Vec<Box<dyn Any>> for role information. This design balances object safety with third-party projection needs: generic methods would make the trait non-object-safe, yet extensions must be stored as Box<dyn ExtensionEffect> in the effect algebra, so the trait must remain object-safe.
Type safety persists despite erasure. Rust’s Any trait provides safe downcasting. Extensions box their specific role types, and the effect algebra downcasts them back to the choreography’s role type. Mismatched types fail the downcast and are skipped.
#![allow(unused)]
fn main() {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![Box::new(self.role)]
}
}
Runtime flexibility combines with compile-time safety. Extensions work with any role type. Role types check at definition and use sites. No runtime type confusion occurs.
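The downcast behavior described above can be seen with plain std `Any`: a matching type downcasts successfully, while a mismatched one returns `None` and is skipped. The `Role` enum here is a hypothetical stand-in for a choreography's role type.

```rust
use std::any::Any;

#[derive(Copy, Clone, Debug, PartialEq)]
enum Role { Alice }

fn main() {
    // A role boxed as `dyn Any`, as participating_role_ids() would return it.
    let erased: Box<dyn Any> = Box::new(Role::Alice);
    assert_eq!(erased.downcast_ref::<Role>(), Some(&Role::Alice));
    assert!(erased.downcast_ref::<String>().is_none()); // mismatch: skipped
    println!("ok");
}
```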
Extension Registry
The ExtensionRegistry<E> provides type-safe handler dispatch.
#![allow(unused)]
fn main() {
let mut registry = ExtensionRegistry::new();
registry.register::<MyExtension, _>(|endpoint, ext| {
Box::pin(async move {
let my_ext = ext.as_any()
.downcast_ref::<MyExtension>()
.ok_or(ExtensionError::TypeMismatch { ... })?;
endpoint.do_something(my_ext)?;
Ok(())
})
});
}
The registry fails fast on unknown extensions. Type-safe registration prevents errors. Registries merge for composition. Empty registries have zero overhead.
Effect Integration
Extensions integrate with the effect algebra.
#![allow(unused)]
fn main() {
let program = Program::new()
.ext(ValidateCapability {
capability: "send".into(),
role: Alice
})
.send(Bob, Message("hello"))
.ext(LogEvent {
event: "message_sent".into()
})
.end();
}
Extensions appear before and after communication operations. The effect algebra maintains execution order.
Projection Semantics
Extensions project based on participating_role_ids(). The method returns type-erased role values.
Global Extensions
An empty vector makes an extension appear in all projections.
#![allow(unused)]
fn main() {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![]
}
}
Logging events use global extensions. All roles record the events.
Role-Specific Extensions
A non-empty vector with boxed roles restricts projection.
#![allow(unused)]
fn main() {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![Box::new(self.role)]
}
}
Capability validation uses role-specific extensions. Only the specified role sees the extension.
Multi-Role Extensions
Multiple roles participate by boxing each role.
#![allow(unused)]
fn main() {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![
Box::new(Role::Alice),
Box::new(Role::Bob),
]
}
}
Coordination extensions target specific role subsets. Only Alice and Bob handle this extension.
Type Erasure and Downcasting
The effect algebra downcasts type-erased roles to concrete types.
#![allow(unused)]
fn main() {
for role_any in ext.participating_role_ids() {
if let Some(role) = role_any.downcast_ref::<R>() {
roles.insert(*role);
}
}
}
This maintains object safety. Third-party extensions specify participating roles for projection.
Implementing Projection
Extension developers control projection to different roles. Four design patterns cover common cases.
Pattern 1: Global Extensions
Global extensions are visible to all roles.
#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
pub struct AuditLog {
pub action: String,
pub timestamp: u64,
}
impl ExtensionEffect for AuditLog {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![]
}
// Other required ExtensionEffect methods elided for brevity.
}
}
An empty vector makes the extension global. Logging, metrics, and global invariants use this pattern.
Pattern 2: Single Role Extensions
Single role extensions apply to one specific role.
#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
pub struct ValidatePermission {
pub permission: String,
pub role: MyRole,
}
impl ExtensionEffect for ValidatePermission {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![Box::new(self.role)]
}
// Other required ExtensionEffect methods elided for brevity.
}
}
The extension stores the target role. Only that role participates. Validation, authorization, and local state updates use this pattern.
Pattern 3: Multi-Role Extensions
Multiple specific roles handle multi-role extensions.
#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
pub struct ConsensusRound {
pub round: u32,
pub participants: Vec<NodeId>,
}
impl ExtensionEffect for ConsensusRound {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
self.participants
.iter()
.map(|p| Box::new(*p) as Box<dyn Any>)
.collect()
}
// Other required ExtensionEffect methods elided for brevity.
}
}
Each participant gets boxed. Consensus protocols, quorum operations, and multi-party computation use this pattern.
Pattern 4: Conditional Projection
Participation can depend on extension state.
#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
pub struct OptionalNotification {
pub message: String,
pub notify_role: Option<Role>,
}
impl ExtensionEffect for OptionalNotification {
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
if let Some(role) = self.notify_role {
vec![Box::new(role)]
} else {
vec![]
}
}
// Other required ExtensionEffect methods elided for brevity.
}
}
The extension becomes global when no specific role is set. Conditional notifications, optional observers, and dynamic routing use this pattern.
Implementation Constraints
Four constraints apply when implementing participating_role_ids().
Boxed roles must match the choreography’s role type R: RoleId. Role types must be 'static for Any compatibility. Roles should be Copy or cheap to clone. The same extension value must always return the same roles.
Complete Example
This example shows proper projection implementation.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::effects::*;
use std::any::{Any, TypeId};
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub enum ServiceRole {
Gateway,
AuthService,
Database,
}
#[derive(Clone, Debug)]
pub struct RateLimitCheck {
pub service: ServiceRole,
pub limit: u32,
}
impl ExtensionEffect for RateLimitCheck {
fn type_id(&self) -> TypeId {
TypeId::of::<Self>()
}
fn type_name(&self) -> &'static str {
"RateLimitCheck"
}
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![Box::new(self.service)]
}
fn as_any(&self) -> &dyn Any {
self
}
fn as_any_mut(&mut self) -> &mut dyn Any {
self
}
fn clone_box(&self) -> Box<dyn ExtensionEffect> {
Box::new(self.clone())
}
}
}
The service being rate-limited performs the check. Other services do not see this extension in their projection.
Usage Example
The extension integrates into choreography programs.
#![allow(unused)]
fn main() {
fn api_request_protocol() -> Program<ServiceRole, Message> {
Program::new()
.ext(RateLimitCheck {
service: ServiceRole::Gateway,
limit: 1000,
})
.send(ServiceRole::AuthService, AuthRequest)
.ext(RateLimitCheck {
service: ServiceRole::AuthService,
limit: 100,
})
.send(ServiceRole::Database, TokenQuery)
.end()
}
}
Each service checks its own rate limit. The Gateway projection includes only the Gateway rate limit. The AuthService projection includes only the AuthService rate limit.
Projection Behavior
Extensions project to participating roles only. Consider a three-role choreography with Alice, Bob, and Charlie.
#![allow(unused)]
fn main() {
Program::new()
.ext(LogEvent { message: "start" })
.ext(ValidateCapability { role: Alice, cap: "send" })
.send(Bob, Message)
.ext(AuditLog { action: "sent" })
.end()
}
Alice’s projection includes all extensions. Bob’s and Charlie’s projections include LogEvent and AuditLog but not ValidateCapability.
The extension before send appears in Alice’s projection only. Global extensions appear in all projections.
Extension Ordering
Extensions maintain source order in projections. Consider this program.
#![allow(unused)]
fn main() {
Program::new()
.ext(Ext1)
.ext(Ext2)
.send(Alice, Msg)
.ext(Ext3)
.end()
}
Alice’s projection preserves the order. Ext1 executes first, then Ext2, then send, then Ext3. The handler processes extensions in definition order.
Handler Patterns
Three patterns integrate extensions with handlers. Each pattern serves different needs.
Pattern 1: Wrapper Handler
A wrapper handler adds extension support to existing handlers.
#![allow(unused)]
fn main() {
pub struct ExtensibleWrapper<H: ChoreoHandler> {
base: H,
registry: ExtensionRegistry<H::Endpoint>,
}
impl<H: ChoreoHandler> ExtensibleWrapper<H> {
pub fn new(base: H, registry: ExtensionRegistry<H::Endpoint>) -> Self {
Self { base, registry }
}
}
impl<H: ChoreoHandler> ExtensibleHandler for ExtensibleWrapper<H> {
type Endpoint = H::Endpoint;
fn extension_registry(&self) -> &ExtensionRegistry<Self::Endpoint> {
&self.registry
}
}
impl<H: ChoreoHandler> ChoreoHandler for ExtensibleWrapper<H> {
type Role = H::Role;
type Endpoint = H::Endpoint;
async fn send<M>(&mut self, ep: &mut Self::Endpoint, to: Self::Role, msg: &M) -> Result<()>
where M: Serialize + Send + Sync {
self.base.send(ep, to, msg).await
}
}
}
The wrapper delegates choreographic operations to the base handler. Extensions use the separate registry. This enables adding extensions to any existing handler.
Pattern 2: Registry Composition
Multiple registries merge for composition.
#![allow(unused)]
fn main() {
let mut registry = ExtensionRegistry::new();
registry.merge(auth_registry);
registry.merge(logging_registry);
registry.merge(metrics_registry);
}
Each domain provides its own registry. Merging combines all extension handlers. Duplicate registrations are detected at merge time.
Pattern 3: Domain-Specific Handler
Domain handlers embed extension support directly.
#![allow(unused)]
fn main() {
pub struct DomainHandler {
role: Role,
registry: ExtensionRegistry<()>,
capabilities: Vec<String>,
budget: Arc<Mutex<u64>>,
}
impl DomainHandler {
pub fn new(role: Role, caps: Vec<String>) -> Self {
let mut registry = ExtensionRegistry::new();
registry.register::<ValidateCapability, _>(move |_ep, ext| {
Box::pin(async move {
// Extension handling logic
Ok(())
})
});
Self { role, registry, capabilities: caps, budget: Arc::new(Mutex::new(1000)) }
}
}
}
The handler owns its extensions. Domain-specific state integrates with extension handling. This pattern suits applications with fixed extension sets.
Cross-Crate Composition
Extensions compose across crate boundaries. Crate A defines reusable primitives. Crate B uses those primitives.
Defining Primitives
A library crate exports extension types.
#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
pub struct RoundRobinMetadata {
pub participant_count: usize,
pub current_index: usize,
}
impl ExtensionEffect for RoundRobinMetadata {
// Implementation
}
pub fn round_robin<R: RoleId>(participants: Vec<R>) -> Program<R, ()> {
Program::new()
.ext(RoundRobinMetadata {
participant_count: participants.len(),
current_index: 0,
})
.end()
}
}
The crate provides both the extension type and helper functions. Users can combine these with their own extensions.
Using Primitives
Application crates import and use extensions.
#![allow(unused)]
fn main() {
use my_library::extensions::{RoundRobinMetadata, round_robin};
let program = round_robin(vec![Alice, Bob, Charlie])
.then(Program::new()
.ext(CustomExtension { /* app-specific */ })
.send(Alice, Message)
.end()
);
}
Extensions from different crates compose freely. The type system ensures safety.
Error Handling
Extension errors follow fail-fast semantics. Unknown extensions cause immediate failure. Type mismatches produce clear error messages.
Fail-Fast Semantics
Unregistered extensions terminate execution.
#![allow(unused)]
fn main() {
match interpret_extensible(&mut handler, &mut ep, program).await {
Err(ChoreographyError::Extension(ExtensionError::UnknownExtension { type_name })) => {
eprintln!("Extension {} not registered", type_name);
}
_ => {}
}
}
Registration happens before protocol execution. This catches configuration errors early.
Type Mismatch Detection
Downcasts validate extension types.
#![allow(unused)]
fn main() {
let validated = ext.as_any()
.downcast_ref::<ValidateCapability>()
.ok_or(ExtensionError::TypeMismatch {
expected: "ValidateCapability",
actual: ext.type_name(),
})?;
}
Type mismatches produce descriptive errors. The expected and actual type names appear in the error.
Error Propagation
Extension errors propagate through the interpreter.
#![allow(unused)]
fn main() {
pub enum ExtensionError {
UnknownExtension { type_name: &'static str },
TypeMismatch { expected: &'static str, actual: &'static str },
ExecutionFailed { type_name: &'static str, error: String },
}
}
Each error variant provides context. Handlers can distinguish between different failure modes.
Performance
Extension overhead varies by usage pattern. Design decisions affect runtime cost.
Zero Overhead Cases
Empty registries incur no cost. Programs without extensions run at baseline speed. The interpreter checks for extensions before dispatching.
Minimal Overhead Cases
Extension dispatch uses TypeId comparison. This operation is fast. Dynamic dispatch for handlers adds minimal cost. Most overhead comes from extension logic itself.
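The TypeId comparison used for dispatch is a plain equality check on an opaque identifier, which is why it is cheap:

```rust
use std::any::TypeId;

struct LogEvent;
struct ValidateCapability;

fn main() {
    // Dispatch keys are TypeIds; comparing them is a simple equality test.
    assert_eq!(TypeId::of::<LogEvent>(), TypeId::of::<LogEvent>());
    assert_ne!(TypeId::of::<LogEvent>(), TypeId::of::<ValidateCapability>());
    println!("ok");
}
```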
Optimization Tips
Register extensions once and reuse the registry. Avoid cloning extension data unnecessarily. Keep extension state small. Use Copy types for roles when possible.
Best Practices
Follow these practices when designing extensions and handlers.
Extension Design
Keep extensions focused on single concerns. Use global extensions sparingly. Prefer role-specific extensions for targeted behavior. Document which roles should handle your extension.
Handler Design
Register all required extensions upfront. Provide clear error messages for failures. Use wrapper handlers to add extensions incrementally. Test extension handlers independently.
Testing
Test extension projection behavior. Verify correct role participation. Check error handling for unregistered extensions. Test extension composition across crates.
Examples
Complete examples demonstrate extension usage. See the rust/choreography/examples/ directory. The extension_workflow.rs example shows composition patterns. The extension_capability.rs example demonstrates validation. The extension_logging.rs example shows global extensions.
Next Steps
This completes Part 1 of the DSL extension system. You now understand how to create runtime effects that execute during choreography interpretation.
Continue to Part 2, DSL Extensions Part 2: Syntax Extensions, to learn how to add custom syntax and grammar rules to the choreographic DSL.
Other related topics:
- Choreographic Projection Patterns for how extensions project to different roles
- Effect Handlers for core effect system concepts
- Extension Guide for building complete applications with extensions
Syntax Extensions in Rumpsteak-Aura
This guide covers the complete syntax extension system in rumpsteak-aura, which allows 3rd party projects to inherit ALL features automatically while adding their own custom extensions.
Overview
The rumpsteak-aura extension system provides a clean way for 3rd party projects to extend choreographic DSL syntax while automatically inheriting all core features.
Core capabilities include:
- full feature inheritance (choice constructs, loops, branch parallel, parameterized roles, protocol composition, error handling)
- automatic extension discovery and registration
- cached grammar composition with a 387x performance improvement
- priority-based conflict resolution between extensions
- a preprocessing approach for simplicity and maintainability
Architecture
The extension system uses a preprocessing approach that provides clean separation between extension and base parsing:
graph TD
A["**3rd Party Project**<br/>Inherits all features"]
B["**Extension Registry**<br/>Grammar + Parsers"]
C["**Grammar Composer**<br/>Dynamic composition"]
D["**Rumpsteak-Aura Parser**<br/>Standard grammar + Features"]
E["**Validated AST**"]
A --> B
B --> C
C --> D
D --> E
Key benefits include:
- clean separation between extension and base parsing
- cached transformations with minimal overhead (387x faster when cached)
- no complex runtime grammar composition
- compatibility with the current DSL without requiring legacy syntax
Complete Integration Example
The external-demo project demonstrates the proper way for 3rd party projects to integrate with rumpsteak-aura. It uses a two-crate architecture:
- external-demo: regular crate that re-exports ALL rumpsteak-aura functionality
- external-demo-macros: proc-macro crate providing the full-featured choreography! macro
Here’s the complete implementation:
1. Project Setup
Two-crate architecture for maximum compatibility:
external-demo/Cargo.toml (Regular crate):
[dependencies]
# Re-export all rumpsteak-aura functionality
rumpsteak-aura = { path = ".." }
rumpsteak-aura-choreography = { path = "../choreography" }
# Import custom proc macros
external-demo-macros = { path = "../external-demo-macros" }
This dependency set re-exports the core crates and the custom macro crate. It keeps the consumer crate in sync with the base APIs.
external-demo-macros/Cargo.toml (Proc-macro crate):
[lib]
proc-macro = true
[dependencies]
rumpsteak-aura-choreography = { path = "../choreography" }
proc-macro2 = "1.0"
quote = "1.0"
syn = { version = "2.0", features = ["full"] }
This declares a proc-macro crate that depends on the choreography crate. It also pulls in the standard macro tooling dependencies.
2. Re-export Pattern (external-demo)
#![allow(unused)]
fn main() {
// external-demo/src/lib.rs
// Re-export ALL rumpsteak-aura functionality so 3rd parties get everything
pub use rumpsteak_aura::*;
pub use rumpsteak_aura_choreography::*;
// Import our custom proc macros from the separate proc-macro crate
pub use external_demo_macros::*;
// Extension definitions for Aura
pub mod aura_extensions;
/// Full-featured choreography! macro with ALL rumpsteak-aura features
pub use external_demo_macros::choreography;
}
This re-exports the base APIs and the custom macro entry point. It ensures downstream crates see the same surface as rumpsteak-aura.
3. Extension Definition
#![allow(unused)]
fn main() {
// external-demo/src/aura_extensions.rs
use rumpsteak_aura_choreography::extensions::*;
#[derive(Debug)]
pub struct AuraGrammarExtension;
impl GrammarExtension for AuraGrammarExtension {
fn grammar_rules(&self) -> &'static str {
r#"
timeout_stmt = { "timeout" ~ integer ~ block }
"#
}
fn statement_rules(&self) -> Vec<&'static str> {
vec!["timeout_stmt"]
}
fn extension_id(&self) -> &'static str {
"aura_extensions"
}
fn priority(&self) -> u32 {
100
}
}
pub fn register_aura_extensions(registry: &mut ExtensionRegistry) {
let _ = registry.register_grammar(AuraGrammarExtension);
}
}
This defines a grammar extension and registers it in the extension registry. The rule name and priority control how it composes.
4. Proc-Macro Implementation (external-demo-macros)
#![allow(unused)]
fn main() {
// external-demo-macros/src/choreography.rs
use proc_macro2::TokenStream;
use rumpsteak_aura_choreography::{
ast::Choreography,
compiler::parser::parse_choreography_str,
};
/// Full-featured choreography macro that inherits ALL rumpsteak-aura features
pub fn choreography_impl(input: TokenStream) -> Result<TokenStream, syn::Error> {
let input_str = input.to_string();
let registry = ExtensionRegistry::new(); // Empty registry for stable generation
match parse_and_generate_with_extensions(&input_str, &registry) {
Ok(tokens) => Ok(tokens),
Err(err) => {
let error_msg = err.to_string();
Err(syn::Error::new(
proc_macro2::Span::call_site(),
format!("Choreography compilation error: {}", error_msg),
))
}
}
}
// lib.rs exports the proc macro
#[proc_macro]
pub fn choreography(input: TokenStream) -> TokenStream {
match choreography::choreography_impl(input.into()) {
Ok(output) => output.into(),
Err(err) => err.to_compile_error().into(),
}
}
}
This macro implementation delegates to the extension-aware parser and code generator. It forwards errors as compile errors.
5. Dynamic Role Generation
#![allow(unused)]
fn main() {
/// Generate code for an Aura choreography with integrated effect system
fn generate_aura_choreography_code(choreography: &Choreography) -> TokenStream {
// Extract namespace from the module header (if present)
let namespace = choreography.namespace.as_deref().unwrap_or("aura_choreography");
let choreo_name = syn::Ident::new(namespace, proc_macro2::Span::call_site());
// Extract roles from choreography to generate role enum dynamically
let role_variants: Vec<syn::Ident> = choreography
.roles
.iter()
.map(|role| syn::Ident::new(&role.name.to_string(), proc_macro2::Span::call_site()))
.collect();
// Generate the main choreography module with dynamic roles
quote! {
pub mod #choreo_name {
// Dynamically extracted roles from choreography
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum AuraRole {
#(#role_variants),*
}
impl std::fmt::Display for AuraRole {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
#(AuraRole::#role_variants => write!(f, stringify!(#role_variants))),*
}
}
}
// Aura effect types that work with any role configuration
#[derive(Clone, Debug)]
pub struct ValidateCapability {
pub capability: String,
pub role: AuraRole,
}
#[derive(Clone, Debug)]
pub struct ChargeFlowCost {
pub cost: u64,
pub role: AuraRole,
}
pub fn create_choreography() -> Program<AuraRole, String> {
Program::new()
}
}
}
}
}
This example shows dynamic role extraction and code generation scaffolding. It builds a role enum from the parsed choreography.
Grammar Composition System
The GrammarComposer provides high-performance grammar composition with caching:
Basic Usage
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::{GrammarComposer, GrammarExtension};
let mut composer = GrammarComposer::new();
composer.register_extension(MyExtension);
// First composition (computes and caches)
let grammar = composer.compose()?; // ~3.6ms
// Subsequent compositions (uses cache)
let grammar2 = composer.compose()?; // ~9.3μs (387x faster!)
}
This example composes a grammar and then hits the cache on the second call. It demonstrates the speedup from caching.
Performance Optimizations
The grammar composer includes several performance optimizations:
#![allow(unused)]
fn main() {
impl GrammarComposer {
/// Compose grammar with caching for optimal performance
pub fn compose(&mut self) -> Result<String, GrammarCompositionError> {
// Check cache first - provides 387x speedup
let current_hash = self.compute_extension_hash();
if let Some(ref cached) = self.cached_grammar {
if current_hash == self.extension_hash {
return Ok(cached.clone());
}
}
// Recompute with optimizations
let composed = self.compose_uncached()?;
// Cache the result
self.cached_grammar = Some(composed.clone());
self.extension_hash = current_hash;
Ok(composed)
}
}
}
This shows the cached path in the composer implementation. The cache avoids recomputing the composed grammar.
Performance benefits include 387x speedup for repeated compositions through caching, pre-allocated string buffers for memory optimization, hash-based cache invalidation, and optimized string operations with reduced allocations.
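The hash-based invalidation step can be illustrated in isolation. The sketch below is a std-only toy model, not the crate's GrammarComposer: it recomposes the grammar only when the hash of the registered extension set changes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy composer: recomputes only when the set of extension names changes.
struct Composer {
    extensions: Vec<String>,
    cached: Option<(u64, String)>, // (hash of extension set, composed grammar)
    compositions: usize,           // counts real (uncached) compositions
}

impl Composer {
    fn new() -> Self {
        Composer { extensions: Vec::new(), cached: None, compositions: 0 }
    }

    fn register(&mut self, name: &str) {
        self.extensions.push(name.to_string());
    }

    fn extension_hash(&self) -> u64 {
        let mut h = DefaultHasher::new();
        self.extensions.hash(&mut h);
        h.finish()
    }

    fn compose(&mut self) -> String {
        let hash = self.extension_hash();
        // Cache hit: extension set unchanged since the last composition.
        if let Some((cached_hash, ref grammar)) = self.cached {
            if cached_hash == hash {
                return grammar.clone();
            }
        }
        // Cache miss: recompose and remember the result.
        self.compositions += 1;
        let grammar = format!("base + [{}]", self.extensions.join(", "));
        self.cached = Some((hash, grammar.clone()));
        grammar
    }
}
```

Registering a new extension changes the hash, which invalidates the cache on the next call, mirroring how the real composer decides when to recompute.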
Extension Parser System
The ExtensionParser handles extension-aware parsing with optimizations:
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::*;
// Create extension parser
let mut parser = ExtensionParser::new();
parser.register_extension(MyGrammarExtension, MyStatementParser);
// Parse with extensions
let choreography = parser.parse_with_extensions(choreography_text)?;
}
This example constructs an extension-aware parser and parses a DSL string. It uses registered grammar and statement parsers.
Optimized Parsing
#![allow(unused)]
fn main() {
impl ExtensionParser {
/// Parse with extension support (optimized for performance)
pub fn parse_with_extensions(
&mut self,
input: &str,
) -> Result<Choreography, ExtensionParseError> {
// Reuse pre-allocated buffers to reduce allocations
self.parse_buffer.clear();
self.extension_cache.clear();
// Reserve capacity based on input size for efficient parsing
self.parse_buffer.reserve(input.len());
// Use standard parser (inherits ALL features automatically)
let mut choreography = parse_choreography_str(input)
.map_err(ExtensionParseError::StandardParseError)?;
// Post-process to handle extension statements
choreography.protocol =
self.process_extensions_optimized(choreography.protocol, input, &choreography.roles)?;
Ok(choreography)
}
}
}
This example shows a custom parse pass that augments the base parser. It runs extension logic after the core parse succeeds.
Feature Inheritance Demonstration
The system ensures 3rd party projects automatically inherit ALL rumpsteak-aura features:
Choice Constructs
#![allow(unused)]
fn main() {
choreography!(r#"
protocol Example =
roles Alice, Bob, Charlie
case choose Alice of
Path1 ->
Alice -> Bob : Request
Path2 ->
Alice -> Charlie : Alternative
"#);
}
This example shows choice syntax in an extended project. The macro receives the DSL as a string literal.
Parameterized Roles
#![allow(unused)]
fn main() {
choreography!(r#"
protocol Distributed =
roles Worker[N], Manager, Client[3]
Worker[*] -> Manager : Status
Manager -> Client[0] : Response
"#);
}
This example shows parameterized roles and indexed role references. It uses the same macro entry point.
Loop Constructs
#![allow(unused)]
fn main() {
choreography!(r#"
protocol Streaming =
roles Producer, Consumer
loop forever
Producer -> Consumer : Data
"#);
}
This example shows an infinite loop in a protocol. It uses the same syntax as the base DSL.
Protocol Composition
All advanced rumpsteak-aura features work automatically in 3rd party projects without any additional integration work.
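The DSL-level composition syntax is not reproduced here, but the same idea is visible at the effect-algebra level, where whole programs compose sequentially with then (the combinator used later in the reusable-primitives pattern). The sketch below is a std-only toy model of that shape, not the crate's real Program type:

```rust
// Toy model of sequential protocol composition (std only). The real
// `Program` type and its `then` combinator live in the effects module;
// this sketch only mirrors the shape of the API.
#[derive(Debug, Clone, PartialEq)]
struct Program {
    steps: Vec<String>,
}

impl Program {
    fn new() -> Self {
        Program { steps: Vec::new() }
    }

    fn send(mut self, to: &str, msg: &str) -> Self {
        self.steps.push(format!("{} <- {}", to, msg));
        self
    }

    fn then(mut self, next: Program) -> Self {
        // Sequential composition: run `self`, then `next`.
        self.steps.extend(next.steps);
        self
    }
}
```

Composing `handshake.then(transfer)` yields one program whose steps are the concatenation of both fragments, which is what lets sub-protocols be built and tested independently.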
Extension Discovery System
The discovery system automatically finds and registers extensions:
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::extensions::discovery::*;
// Automatic discovery
let discovery = ExtensionDiscovery::new();
let extensions = discovery.discover_extensions("./extensions")?;
// Metadata-driven registration
let metadata = ExtensionMetadata {
name: "timeout".to_string(),
version: semver::Version::parse("1.0.0")?,
description: "Timeout extension for choreographic protocols".to_string(),
dependencies: vec![], // Extension dependencies
};
let mut registry = ExtensionRegistry::new();
registry.register_with_metadata(TimeoutExtension, metadata)?;
}
This example discovers extensions and registers them with metadata. It shows both discovery and manual registration paths.
Best Practices for 3rd Party Integration
1. Use Standard Parser for Maximum Compatibility
#![allow(unused)]
fn main() {
// ✅ GOOD: Use standard parser to inherit ALL features
let choreography = parse_choreography_str(&input)?;
// ❌ BAD: Custom parsing loses feature inheritance
let choreography = my_custom_parser(&input)?;
}
This shows the recommended parsing path. Custom parsers bypass feature inheritance.
2. Extract Extension Data from AST
#![allow(unused)]
fn main() {
// ✅ GOOD: Extract extension data from the parsed AST
let nodes = extract_extension_nodes(&choreography.protocol, "timeout");
// ❌ BAD: Custom parsing of extension syntax
let extensions = parse_custom_extension_syntax(&input)?;
}
This example compares AST extraction with custom parsing. The AST route preserves extension semantics.
3. Generate Dynamic Roles
#![allow(unused)]
fn main() {
// ✅ GOOD: Extract roles dynamically from choreography
let role_variants: Vec<_> = choreography.roles.iter()
.map(|role| syn::Ident::new(&role.name.to_string(), span))
.collect();
// ❌ BAD: Hardcode role names
enum Role { Alice, Bob } // Won't work with different choreographies
}
This shows dynamic role generation from the AST. It avoids hardcoded role sets.
4. Proper Error Handling
#![allow(unused)]
fn main() {
// ✅ GOOD: Detailed error context
let choreography = match parse_choreography_str(&input) {
Ok(result) => result,
Err(e) => {
let error_msg = format!("Choreography parse error: {}", e);
return syn::Error::new(span, error_msg).to_compile_error().into();
}
};
}
This example returns detailed parse errors. It keeps compiler diagnostics actionable.
Testing Extension System
Comprehensive test coverage ensures extension system robustness:
Grammar Composition Tests
#![allow(unused)]
fn main() {
#[test]
fn test_grammar_composition_performance() {
let mut composer = GrammarComposer::new();
composer.register_extension(TestExtension);
// First composition
let start = std::time::Instant::now();
let result1 = composer.compose().unwrap();
let first_time = start.elapsed();
// Second composition (should use cache)
let start = std::time::Instant::now();
let result2 = composer.compose().unwrap();
let second_time = start.elapsed();
assert_eq!(result1, result2);
// Should be significantly faster due to caching
assert!(second_time < first_time / 10);
}
}
This test checks caching by comparing the first and second composition times. It asserts a speedup from the cached path.
Extension Parser Tests
#![allow(unused)]
fn main() {
#[test]
fn test_extension_parsing() {
let mut parser = ExtensionParser::new();
parser.register_extension(TestGrammarExtension, TestStatementParser);
let choreography = parser.parse_with_extensions(r#"
protocol TestProtocol =
roles Alice, Bob
Alice -> Bob : Message
"#).expect("Should parse with extensions");
assert_eq!(choreography.roles.len(), 2);
assert_eq!(choreography.roles[0].name, "Alice");
}
}
This test exercises the extension parser with a small protocol. It checks role extraction and parse success.
Feature Inheritance Tests
#![allow(unused)]
fn main() {
#[test]
fn test_feature_inheritance() {
// Test that 3rd party projects inherit parameterized roles
let choreography = parse_choreography_str(r#"
protocol Test =
roles Worker[N], Manager
Worker[*] -> Manager : Status
"#).expect("Should parse parameterized roles");
// Verify parameterized role parsing worked
assert!(choreography.roles.iter().any(|r| r.param.is_some()));
}
}
This test validates that parameterized roles survive extension parsing. It checks that the role list includes a parameter.
Migration Guide
From Custom DSL to Extension System
Before (custom DSL):
#![allow(unused)]
fn main() {
// Custom macro with limited features
my_choreography!(r#"
protocol MyProtocol =
A -> B : Message with_timeout 5000
"#);
}
This example shows a legacy custom macro with limited, hand-rolled syntax. Like the standard macro, it takes the DSL as a string literal.
After (extension system with full feature inheritance):
#![allow(unused)]
fn main() {
// Standard rumpsteak-aura with extensions
choreography!(r#"
protocol MyProtocol =
roles A, B
case choose A of
Fast ->
A -> B : QuickMessage
Slow ->
timeout 5000 { A -> B : SlowMessage }
"#);
}
This example uses the standard macro with an extension statement. It inherits all base DSL features.
Performance Considerations
The extension system is optimized for production use. Grammar composition takes approximately 3.6ms initially and 9.3μs when cached (a 387x speedup). Pre-allocated buffers reduce allocation pressure. Extension parsing adds minimal overhead over standard parsing.
For performance-critical applications:
- Reuse GrammarComposer instances to benefit from caching
- Use the ExtensionParserBuilder pattern for consistent parser configuration
- Monitor performance with built-in metrics
Troubleshooting
Common Issues
- Grammar conflicts: namespace your rule names and assign appropriate priorities.
- Feature loss: use parse_choreography_str so the full feature set is inherited.
- Slow composition: reuse a single composer instance so caching takes effect.
- Role mismatch: generate roles dynamically from the choreography AST instead of hardcoding them.
Debugging Tools
#![allow(unused)]
fn main() {
// Check composed grammar
let mut composer = GrammarComposer::new();
composer.register_extension(MyExtension);
let grammar = composer.compose()?;
println!("Composed grammar:\n{}", grammar);
// Extension statistics
let parser = ExtensionParser::new();
let stats = parser.extension_stats();
println!("Extensions registered: {}", stats.grammar_extensions);
}
This snippet inspects the composed grammar and extension stats. It helps diagnose registration issues.
Complete Example: external-demo
The external-demo project provides a complete working example of 3rd party integration using the two-crate pattern. It demonstrates full feature inheritance where all rumpsteak-aura features work automatically. The integration uses a simple preprocessing approach. Roles are extracted from the choreography at compile time. Extension data converts to effect system calls. The implementation leverages cached grammar composition for performance.
To see the complete implementation:
cd external-demo
cargo build --examples # Verify compilation
cargo run --example simple_ping_pong # Run example
cargo run --example threshold_ceremony # Advanced features
This demonstrates how 3rd party projects can integrate with rumpsteak-aura while inheriting ALL features automatically and adding their own domain-specific extensions.
Advanced Topics
Extension Dependencies
#![allow(unused)]
fn main() {
let metadata = ExtensionMetadata {
dependencies: vec![
ExtensionDependency {
name: "timeout".to_string(),
version_req: semver::VersionReq::parse("^1.0")?,
}
],
// ...
};
}
This example declares extension dependencies in metadata. The registry checks these at registration time.
Custom Validation
#![allow(unused)]
fn main() {
impl ProtocolExtension for MyProtocol {
fn validate(&self, roles: &[Role]) -> Result<(), ExtensionValidationError> {
// Custom validation logic
if self.requires_minimum_roles > roles.len() {
return Err(ExtensionValidationError::InvalidStructure {
reason: format!("Requires at least {} roles", self.requires_minimum_roles),
});
}
Ok(())
}
}
}
This validator rejects protocols that supply fewer roles than the extension requires. The extension system provides a complete foundation for building domain-specific choreographic languages while maintaining full compatibility with rumpsteak-aura’s features.
Composition Tutorial
This tutorial walks through integrating rumpsteak-aura into your project. It demonstrates composing domain-specific choreographies with extensions.
Prerequisites
You need Rust 1.75 or later. Basic understanding of choreographic programming is required. See Getting Started for fundamentals. Familiarity with effect handlers helps. See Using Rumpsteak Handlers for handler concepts.
Step 1: Add Dependencies
Add rumpsteak-aura to your Cargo.toml.
[dependencies]
rumpsteak-aura = "0.7"
rumpsteak-aura-choreography = "0.7"
tokio = { version = "1", features = ["full"] }
The core library provides session types. The choreography layer provides effects and projection. Macros enable the choreography DSL. Tokio provides the async runtime.
Step 2: Define Domain Types
Create domain-specific types for your choreography.
#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub enum Role {
Client,
Server,
Database,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct AuthRequest {
pub username: String,
pub password: String,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct AuthResponse {
pub token: String,
pub user_id: u64,
}
}
Roles represent participants. Messages are the data exchanged. Roles must implement Copy, Clone, Debug, PartialEq, Eq, and Hash. Messages must implement Serialize and Deserialize.
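The Eq and Hash bounds matter because roles are used as lookup keys, for example when an endpoint maps each peer role to a transport. A std-only sketch (the route_table function and its addresses are illustrative placeholders, not part of the crate):

```rust
use std::collections::HashMap;

// The same derives required of choreography roles.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
enum Role {
    Client,
    Server,
    Database,
}

// An endpoint-like table keyed by role: indexing a HashMap by Role
// is exactly what the Eq + Hash bounds make possible.
fn route_table() -> HashMap<Role, &'static str> {
    let mut routes = HashMap::new();
    routes.insert(Role::Client, "127.0.0.1:7001");
    routes.insert(Role::Server, "127.0.0.1:7002");
    routes.insert(Role::Database, "127.0.0.1:7003");
    routes
}
```

Copy and Clone let roles be passed by value through handler APIs, while Debug supports the logging shown in the handler examples below.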
Step 3: Create Domain Extensions
Define extensions for domain needs.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::effects::*;
use std::any::{Any, TypeId};
#[derive(Clone, Debug)]
pub struct ValidateCapability {
pub capability: String,
pub role: Role,
}
impl ExtensionEffect for ValidateCapability {
fn type_id(&self) -> TypeId {
TypeId::of::<Self>()
}
fn type_name(&self) -> &'static str {
"ValidateCapability"
}
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![Box::new(self.role)]
}
fn as_any(&self) -> &dyn Any {
self
}
fn as_any_mut(&mut self) -> &mut dyn Any {
self
}
fn clone_box(&self) -> Box<dyn ExtensionEffect> {
Box::new(self.clone())
}
}
}
The ValidateCapability extension checks permissions. Only the specified role participates. The participating_role_ids() method returns a single boxed role.
Define a logging extension for all roles.
#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
pub struct LogEvent {
pub message: String,
pub level: LogLevel,
}
#[derive(Clone, Debug)]
pub enum LogLevel {
Info,
Warn,
Error,
}
impl ExtensionEffect for LogEvent {
fn type_id(&self) -> TypeId {
TypeId::of::<Self>()
}
fn type_name(&self) -> &'static str {
"LogEvent"
}
fn participating_role_ids(&self) -> Vec<Box<dyn Any>> {
vec![]
}
fn as_any(&self) -> &dyn Any {
self
}
fn as_any_mut(&mut self) -> &mut dyn Any {
self
}
fn clone_box(&self) -> Box<dyn ExtensionEffect> {
Box::new(self.clone())
}
}
}
The LogEvent extension appears in all role projections. An empty vector makes it global.
Step 4: Create an Extensible Handler
Build a handler with extension support.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::effects::*;
use async_trait::async_trait;
pub struct DomainHandler {
role: Role,
registry: ExtensionRegistry<()>,
capabilities: Vec<String>,
}
impl DomainHandler {
pub fn new(role: Role, capabilities: Vec<String>) -> Self {
let mut registry = ExtensionRegistry::new();
let caps = capabilities.clone();
registry.register::<ValidateCapability, _>(move |_ep, ext| {
let caps = caps.clone();
Box::pin(async move {
let validate = ext.as_any()
.downcast_ref::<ValidateCapability>()
.ok_or(ExtensionError::TypeMismatch {
expected: "ValidateCapability",
actual: ext.type_name(),
})?;
if !caps.contains(&validate.capability) {
return Err(ExtensionError::ExecutionFailed {
type_name: "ValidateCapability",
error: format!("Missing capability: {}", validate.capability),
});
}
println!("Validated capability: {}", validate.capability);
Ok(())
})
});
registry.register::<LogEvent, _>(|_ep, ext| {
Box::pin(async move {
let log = ext.as_any()
.downcast_ref::<LogEvent>()
.ok_or(ExtensionError::TypeMismatch {
expected: "LogEvent",
actual: ext.type_name(),
})?;
println!("[{:?}] {}", log.level, log.message);
Ok(())
})
});
Self { role, registry, capabilities }
}
}
}
The handler stores capabilities. The registry maps extension types to handlers. Each registration provides an async handler function.
Implement the handler traits.
#![allow(unused)]
fn main() {
#[async_trait]
impl ExtensibleHandler for DomainHandler {
type Endpoint = ();
fn extension_registry(&self) -> &ExtensionRegistry<Self::Endpoint> {
&self.registry
}
}
#[async_trait]
impl ChoreoHandler for DomainHandler {
type Role = Role;
type Endpoint = ();
async fn send<M: serde::Serialize + Send + Sync>(
&mut self,
_ep: &mut Self::Endpoint,
to: Self::Role,
msg: &M,
) -> ChoreoResult<()> {
println!("{:?} -> {:?}: sending message", self.role, to);
Ok(())
}
async fn recv<M: serde::de::DeserializeOwned + Send>(
&mut self,
_ep: &mut Self::Endpoint,
from: Self::Role,
) -> ChoreoResult<M> {
Err(ChoreographyError::Transport("recv not implemented".into()))
}
async fn choose(
&mut self,
_ep: &mut Self::Endpoint,
_who: Self::Role,
label: <Self::Role as RoleId>::Label,
) -> ChoreoResult<()> {
println!("{:?}: choosing {:?}", self.role, label);
Ok(())
}
async fn offer(
&mut self,
_ep: &mut Self::Endpoint,
from: Self::Role,
) -> ChoreoResult<<Self::Role as RoleId>::Label> {
println!("{:?}: offering choice from {:?}", self.role, from);
unimplemented!("offer not implemented for example")
}
async fn with_timeout<F, T>(
&mut self,
_ep: &mut Self::Endpoint,
_at: Self::Role,
_dur: std::time::Duration,
body: F,
) -> ChoreoResult<T>
where
F: std::future::Future<Output = ChoreoResult<T>> + Send,
{
body.await
}
}
}
The handler implements both ExtensibleHandler and ChoreoHandler. Extension handling happens through the registry. Choreographic operations implement protocol communication.
Step 5: Build a Choreography
Create a choreography using the effect algebra.
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::effects::*;
pub fn auth_protocol() -> Program<Role, String> {
Program::new()
.ext(LogEvent {
message: "Starting authentication protocol".into(),
level: LogLevel::Info,
})
.ext(ValidateCapability {
capability: "authenticate".into(),
role: Role::Client,
})
.send(Role::Server, "auth_request".to_string())
.ext(LogEvent {
message: "Authentication request sent".into(),
level: LogLevel::Info,
})
.ext(ValidateCapability {
capability: "query_database".into(),
role: Role::Server,
})
.send(Role::Database, "user_query".to_string())
.send(Role::Server, "user_data".to_string())
.send(Role::Client, "auth_response".to_string())
.ext(LogEvent {
message: "Authentication complete".into(),
level: LogLevel::Info,
})
.end()
}
}
The program combines extensions and communication. Logging appears before and after operations. Capability validation precedes privileged actions. Messages flow between roles.
Step 6: Execute the Choreography
Combine the components and execute.
use rumpsteak_aura_choreography::effects::*;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut handler = DomainHandler::new(
Role::Client,
vec!["authenticate".into(), "query_database".into()],
);
let program = auth_protocol();
let mut endpoint = ();
let result = interpret_extensible(&mut handler, &mut endpoint, program).await?;
match result.final_state {
InterpreterState::Completed => {
println!("Protocol completed successfully");
}
InterpreterState::Failed(err) => {
println!("Protocol failed: {}", err);
}
InterpreterState::Timeout => {
println!("Protocol timed out");
}
}
Ok(())
}
The handler receives capabilities at construction. The program executes through the interpreter. The final state indicates success or failure.
Advanced Patterns
Three advanced patterns enable sophisticated choreographies.
Pattern 1: Reusable Primitives
Create composable protocol fragments.
#![allow(unused)]
fn main() {
pub fn with_retry<R: RoleId, M: Clone>(
program: Program<R, M>,
max_attempts: u32,
) -> Program<R, M> {
Program::new()
.ext(RetryMetadata { max_attempts })
.then(program)
}
pub fn with_timeout_ext<R: RoleId, M>(
program: Program<R, M>,
timeout_ms: u64,
) -> Program<R, M> {
Program::new()
.ext(TimeoutMetadata { timeout_ms })
.then(program)
}
}
Functions wrap protocols with cross-cutting concerns. Retry logic and timeouts apply to any program. Composition builds complex protocols from simple pieces.
Use the primitives in protocols.
#![allow(unused)]
fn main() {
let resilient_protocol = with_retry(
with_timeout_ext(auth_protocol(), 5000),
3
);
}
The protocol gains retry and timeout behavior. Primitives compose without modifying the base protocol.
Pattern 2: Handler Composition
Compose multiple handlers into one.
#![allow(unused)]
fn main() {
pub struct ComposedHandler<H1, H2> {
primary: H1,
secondary: H2,
registry: ExtensionRegistry<()>,
}
impl<H1, H2> ComposedHandler<H1, H2>
where
H1: ExtensibleHandler,
H2: ExtensibleHandler<Endpoint = H1::Endpoint>,
{
pub fn new(primary: H1, secondary: H2) -> Self {
let mut registry = ExtensionRegistry::new();
registry.merge(primary.extension_registry().clone());
registry.merge(secondary.extension_registry().clone());
Self { primary, secondary, registry }
}
}
}
The composed handler merges extension registries. Both handlers contribute extensions. This enables modular handler design.
Pattern 3: Cross-Crate Extensions
Share extensions across projects.
#![allow(unused)]
fn main() {
pub mod extensions {
pub use crate::ValidateCapability;
pub use crate::LogEvent;
pub use crate::RetryMetadata;
}
}
Library crates export extension modules. Consuming crates import and use them.
#![allow(unused)]
fn main() {
use my_library::extensions::*;
let program = Program::new()
.ext(ValidateCapability { /* ... */ })
.ext(LogEvent { /* ... */ });
}
Extensions compose across crate boundaries. The type system maintains safety.
Testing
Test choreographies and extensions independently.
#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_auth_protocol() {
let mut handler = DomainHandler::new(
Role::Client,
vec!["authenticate".into()],
);
let program = auth_protocol();
let mut endpoint = ();
let result = interpret_extensible(&mut handler, &mut endpoint, program)
.await
.unwrap();
assert_eq!(result.final_state, InterpreterState::Completed);
}
#[test]
fn test_extension_registration() {
let handler = DomainHandler::new(Role::Client, vec![]);
assert!(handler.registry.is_registered::<ValidateCapability>());
assert!(handler.registry.is_registered::<LogEvent>());
}
}
}
Integration tests verify protocol execution. Unit tests check extension registration. Both test types ensure correctness.
Next Steps
Learn extension architecture in DSL Extensions Part 1: Runtime Effect System. Explore advanced examples in rust/choreography/examples/. Read about testing in the examples directory. See production handlers in Using Rumpsteak Handlers.
WASM Guide
Overview
Rumpsteak’s choreographic programming system compiles to WebAssembly. The core library and effect handlers work in browser environments.
WASM support enables choreographic protocols in web applications, browser-based distributed systems, and serverless edge computing.
What Works in WASM
The following features compile and run in WASM:
Core session types and choreography system work fully. The InMemoryHandler provides local message passing for testing protocols. RumpsteakHandler now compiles for WASM and can be used with custom network transports (WebSocket, fetch API, etc.). All middleware (Trace, Metrics, Retry, FaultInjection) functions correctly. Effect system and interpreter execute normally. Timeouts use wasm-timer for cross-platform support.
What Does Not Work in WASM
Multi-threading is unavailable since WASM runs single-threaded.
Native file system access requires browser File APIs.
Building for WASM
Add the wasm feature to your dependencies:
[dependencies]
rumpsteak-aura-choreography = { version = "0.7", features = ["wasm"] }
wasm-bindgen = "0.2"
wasm-bindgen-futures = "0.4"
Build using wasm-pack:
wasm-pack build --target web
This generates a pkg directory with JavaScript bindings.
Example Protocol
The wasm-ping-pong example demonstrates a complete browser protocol:
#![allow(unused)]
fn main() {
use wasm_bindgen::prelude::*;
use rumpsteak_aura_choreography::{InMemoryHandler, Program, interpret};
#[wasm_bindgen]
pub async fn run_protocol(message: String) -> Result<String, JsValue> {
let mut handler = InMemoryHandler::new(Role::Alice);
let program = Program::new()
.send(Role::Bob, Message::Ping(message))
.recv::<Message>(Role::Bob)
.end();
let mut endpoint = ();
let result = interpret(&mut handler, &mut endpoint, program).await
.map_err(|e| JsValue::from_str(&format!("{:?}", e)))?;
match result.received_values.first() {
Some(Message::Pong(response)) => Ok(response.clone()),
_ => Err(JsValue::from_str("Expected Pong message")),
}
}
}
The wasm_bindgen attribute exposes the function to JavaScript.
Build and run the example:
cd examples/wasm-ping-pong
./build.sh
python3 -m http.server 8000
Open http://localhost:8000 in a browser to run the protocol.
Using RumpsteakHandler in WASM
RumpsteakHandler now compiles for WASM. To use it with real network transport:
#![allow(unused)]
fn main() {
use wasm_bindgen::prelude::*;
use rumpsteak_aura_choreography::{RumpsteakHandler, RumpsteakEndpoint, SimpleChannel};
#[wasm_bindgen]
pub async fn run_distributed_protocol() -> Result<(), JsValue> {
// Create endpoints
let mut alice_ep = RumpsteakEndpoint::new(Role::Alice);
let mut bob_ep = RumpsteakEndpoint::new(Role::Bob);
// Option 1: use SimpleChannel (works natively and in WASM)
let (alice_ch, bob_ch) = SimpleChannel::pair();
alice_ep.register_channel(Role::Bob, alice_ch);
bob_ep.register_channel(Role::Alice, bob_ch);
// Option 2: wrap browser transports directly
// let ws_session = RumpsteakSession::from_sink_stream(ws_writer, ws_reader);
// alice_ep.register_session(Role::Bob, ws_session);
// Create handler
let mut handler = RumpsteakHandler::new();
// Use with choreography operations
handler.send(&mut alice_ep, Role::Bob, &message).await?;
let response = handler.recv(&mut bob_ep, Role::Alice).await?;
Ok(())
}
}
SimpleChannel uses futures::channel::mpsc which is WASM-compatible. For distributed WASM applications, implement custom channels using browser APIs.
Async traits in WASM
The handler traits continue to use the async_trait macro rather than native async fn in traits. This keeps the traits object-safe (needed for middleware stacks such as Trace<Retry<H>>) and lets us share a single implementation strategy across native and WASM targets. The generated futures are still Send, so handlers that run under multithreaded executors behave the same way as handlers compiled for single-threaded WASM.
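The object-safety point becomes concrete in the desugared form: async_trait turns each async fn into a method returning a boxed future, so the trait has a concrete return type and dyn trait objects stay legal. The sketch below (std only, with a toy Handler trait and a minimal one-shot poll helper; none of these names are the crate's real API) mirrors that desugaring:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Roughly what `#[async_trait]` desugars an async method into: a method
// returning a boxed, type-erased future. The concrete return type keeps
// the trait object-safe, so `Box<dyn Handler>` works.
trait Handler {
    fn handle(&self, input: u32) -> Pin<Box<dyn Future<Output = u32> + Send + '_>>;
}

struct Doubler;

impl Handler for Doubler {
    fn handle(&self, input: u32) -> Pin<Box<dyn Future<Output = u32> + Send + '_>> {
        Box::pin(async move { input * 2 })
    }
}

// Object safety in action: the handler can live behind `dyn`, which is
// what middleware stacks like Trace<Retry<H>> rely on.
fn boxed_handler() -> Box<dyn Handler + Send> {
    Box::new(Doubler)
}

// Minimal no-op waker so a ready future can be polled without a runtime.
fn poll_once<F: Future + ?Sized>(fut: Pin<&mut F>) -> Poll<F::Output> {
    fn noop(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    fut.poll(&mut cx)
}
```

Because the future here is immediately ready, a single poll with a no-op waker completes it; real handlers are driven by the platform executor instead.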
Custom Network Transport
InMemoryHandler works for single-context protocols. Real distributed WASM applications need network transport.
Implement ChoreoHandler with WebSocket or fetch APIs:
#![allow(unused)]
fn main() {
use futures::channel::mpsc;
use wasm_bindgen::closure::Closure;
use wasm_bindgen::JsCast;
use web_sys::{MessageEvent, WebSocket};
pub struct WebSocketHandler {
role: Role,
socket: WebSocket,
incoming: mpsc::UnboundedReceiver<Vec<u8>>,
}
impl WebSocketHandler {
pub fn new(role: Role, url: &str) -> Result<Self, JsValue> {
let socket = WebSocket::new(url)?;
let (tx, rx) = mpsc::unbounded();
let onmessage = Closure::wrap(Box::new(move |e: MessageEvent| {
// Handle incoming messages
if let Ok(buffer) = e.data().dyn_into::<js_sys::ArrayBuffer>() {
let bytes = js_sys::Uint8Array::new(&buffer).to_vec();
tx.unbounded_send(bytes).ok();
}
}) as Box<dyn FnMut(MessageEvent)>);
socket.set_onmessage(Some(onmessage.as_ref().unchecked_ref()));
onmessage.forget();
Ok(Self { role, socket, incoming: rx })
}
}
}
With the Phase 3 handler you can skip the custom ChoreoHandler: wrap this WebSocketHandler with RumpsteakSession::from_sink_stream and register it on the endpoint instead.
For HTTP-based protocols, use the fetch API:
#![allow(unused)]
fn main() {
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::JsFuture;
use web_sys::window;
async fn send_http(to: Role, _msg: &Message) -> Result<(), JsValue> {
let window = window().ok_or_else(|| JsValue::from_str("no window"))?;
let _resp = JsFuture::from(
window.fetch_with_str(&format!("http://server/{:?}", to))
).await?;
Ok(())
}
}
The pattern is the same: implement ChoreoHandler using browser APIs for your transport.
Platform Differences
The runtime module provides platform-specific functions:
#![allow(unused)]
fn main() {
use rumpsteak_aura_choreography::runtime::{spawn, spawn_local};
spawn(async { /* task */ }); // Works on native and WASM
}
On native targets, spawn uses tokio. On WASM, it uses wasm-bindgen-futures.
Timeouts use conditional compilation:
#![allow(unused)]
fn main() {
// Native
tokio::time::timeout(duration, future).await
// WASM
use wasm_timer::Delay;
futures::select! {
result = future => result,
_ = Delay::new(duration) => Err(Timeout),
}
}
The library handles this automatically. Your code works on both platforms.
Testing in WASM
Use wasm-bindgen-test for browser tests:
#![allow(unused)]
fn main() {
use wasm_bindgen_test::*;
wasm_bindgen_test_configure!(run_in_browser);
#[wasm_bindgen_test]
async fn test_protocol() {
let mut handler = InMemoryHandler::new(Role::Alice);
let program = Program::new().send(Role::Bob, Message::Test).end();
let mut endpoint = ();
let result = interpret(&mut handler, &mut endpoint, program).await;
assert!(result.is_ok());
}
}
Run tests with wasm-pack:
wasm-pack test --headless --chrome
The tests execute in a headless browser.
Deployment
For production WASM deployments:
Build with release optimizations:
wasm-pack build --target web --release
The generated WASM is optimized for size and performance.
Serve the pkg directory with your web application. Import the JavaScript bindings:
<script type="module">
import init, { run_protocol } from './pkg/my_protocol.js';
async function main() {
await init();
const result = await run_protocol("data");
console.log(result);
}
main();
</script>
The init function loads the WASM module. Protocol functions then become available.
Browser Compatibility
WASM support works in modern browsers:
Chrome/Chromium 90+, Firefox 88+, Safari 14+, Edge 90+.
Older browsers may require polyfills or transpilation.
Performance
WASM execution is near-native speed for computation. Network operations have typical browser fetch/WebSocket latency.
InMemoryHandler has negligible overhead in WASM. Custom network handlers add latency from browser APIs.
The generated WASM binary size is approximately 200-300KB after optimization.
Limitations
WASM is single-threaded. Parallel protocol execution uses async concurrency, not OS threads.
Browser security restricts some operations. Cross-origin requests need CORS configuration. WebSocket connections require proper server setup.
File system access uses browser APIs which differ from native file operations.
Example Projects
The examples/wasm-ping-pong directory contains a complete working example.
See the build script for compilation details and the README for deployment instructions.
Lean Verification
Rumpsteak uses Lean 4 to formally verify the correctness of choreographic projection. This chapter describes what properties are proven, what remains axiomatized, and how verification integrates with the Rust implementation.
Verification Strategy
The project employs a defense-in-depth approach with three layers. Core session type theory is proven in Lean. Two independent projection implementations are compared for equivalence. Randomized protocols are validated against the Lean binary. This layered approach ensures that bugs in one layer are caught by another.
What is Formally Proven
The following properties have complete proofs in lean/Rumpsteak/Proofs/:
Merge Operator Properties
The merge operator combines local types for non-participants in choices. Proofs establish reflexivity (merge lt lt = lt), commutativity (merge a b = merge b a), and associativity (merge (merge a b) c = merge a (merge b c)). These properties ensure that merge order does not affect projection results.
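These algebraic laws can be written down as in the following sketch. The statement shapes are illustrative only: a total merge : LocalType → LocalType → LocalType is assumed here for presentation, and the theorem names need not match the crate's actual Lean source.

```lean
-- Sketch only: statement shapes for the merge laws described above.
-- Assumes `LocalType` and a total `merge` are in scope.
theorem merge_refl (lt : LocalType) : merge lt lt = lt := by
  sorry

theorem merge_comm (a b : LocalType) : merge a b = merge b a := by
  sorry

theorem merge_assoc (a b c : LocalType) :
    merge (merge a b) c = merge a (merge b c) := by
  sorry
```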
Subtyping and Ordering
The subtyping relation ensures projected programs conform to their specifications:
theorem subLabelsOf_refl (lt : LocalType) : subLabelsOf lt lt = true := by
unfold subLabelsOf; simp [List.all_eq_true, List.any_eq_true]
theorem isSubsequence_refl {α} [DecidableEq α] (xs : List α) :
isSubsequence xs xs = true := by induction xs <;> simp [isSubsequence, *]
theorem isSubtype_refl (lt : LocalType) : isSubtype lt lt = true := by
simp [isSubtype, isSubsequence_refl]
subLabelsOf_refl uses all_eq_true and any_eq_true to show that every projected label witnesses itself. isSubsequence_refl and isSubtype_refl prove that the order check is reflexive. Together these lemmas ensure that only genuine differences between the exporter output and the projection trigger failures.
Global Type Consumption
Safe traversal lemmas for global type structures ensure projection terminates and produces valid results.
What is Axiomatized
The following properties are stated as axioms but not yet proven:
| Property | Description | Impact |
|---|---|---|
| Subject Reduction | Type preservation during execution | Safety guarantee |
| Progress | Well-typed programs don’t deadlock | Liveness guarantee |
| Non-participant Correctness | Merging produces valid local types | Projection soundness |
| Merge Composition | Complex merge interactions | Edge case coverage |
These axioms represent the theoretical foundations that the implementation relies upon. They do not affect the correctness of tested protocol patterns, which are validated empirically against the Lean runner.
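As a sketch of what axiomatization looks like in practice, a safety property such as subject reduction has roughly the following shape. The predicate names WellTyped and Steps are hypothetical, not the crate's actual declarations:

```lean
-- Illustrative only: an axiomatized property is stated without proof.
-- `WellTyped` and `Steps` are hypothetical predicate names.
axiom subject_reduction :
  ∀ (g g' : GlobalType), WellTyped g → Steps g g' → WellTyped g'
```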
Core Lean Predicates
The runner enforces three predicates for each branch:
def subLabelsOf (lt sup : LocalType) : Bool :=
lt.actions.all (fun a => sup.actions.any (fun b => decide (b = a)))
def isSubsequence {α} [DecidableEq α] : List α → List α → Bool
| [], _ => true
| _, [] => false
| a :: as, b :: bs => if a = b then isSubsequence as bs else isSubsequence (a :: as) bs
def isSubtype (sub sup : LocalType) : Bool :=
sub.actions.length <= sup.actions.length ∧ isSubsequence sub.actions sup.actions
- subLabelsOf: Symmetric label matching. Every exported action must appear in the projection.
- isSubsequence: Asymmetric ordering. Exported actions must preserve projection order.
- isSubtype: Combines the length guard with ordering to reject reordered or extended traces.
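Assuming the definitions above are in scope, the ordering check can be exercised directly. For example, [1, 2] embeds into [1, 3, 2] in order, but the reverse does not hold:

```lean
-- Uses the `isSubsequence` definition shown above.
#eval isSubsequence [1, 2] [1, 3, 2]  -- true: order preserved, extra element skipped
#eval isSubsequence [1, 3, 2] [1, 2]  -- false: the longer list cannot embed
```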
Module Structure
The Lean codebase is organized into focused modules:
- Rumpsteak.Choreography: Decodes choreographies from JSON, validates roles and actions with hasUniqueRoles and hasValidActions
- Rumpsteak.Projection: Builds per-role LocalType traces from global types
- Rumpsteak.Program: Maps exported effects to LocalAction values as a homomorphism from JSON to the projection domain
- Rumpsteak.Subtyping: Defines order checks with DecidableEq
- Rumpsteak.Runner: Applies membership, order, and label invariants per branch
Features Without Lean Formalization
The DSL includes features that extend beyond the formally verified core:
| Feature | Description | Status |
|---|---|---|
| Dynamic Roles | Parameterized role arrays R[i] | Runtime checks only |
| Broadcasts | One-to-many messages | Desugars to nested sends |
| Local Choices | Non-communicating decisions | Type-checked |
| Parallel Composition | Concurrent branches | Structural checks |
| Protocol Extensions | Custom DSL syntax | User-defined |
These features are tested but not formally proven correct.
Recursion Model
The DSL uses explicit recursion with rec blocks and continue statements:
DSL syntax:
rec Loop {
A -> B: Msg
continue Loop
}
Theory (µ-binders):
µX. A → B: Msg. X
The continue keyword provides explicit back-references that align with the theory’s Var constructor. Cross-validation tests verify that recursive protocols project equivalently in both implementations.
Running Verification
Use the Nix shell to ensure Rust, Lean, and Lake versions match:
nix develop --command just rumpsteak-lean-check
This exports the sample choreography for Chef, SousChef, and Baker from lean/choreo/lean-sample.choreo, builds the Lean runner, and validates each role. Logs are written to lean/artifacts/runner-<role>.{log,json}.
Extended Scenario
nix develop --command just rumpsteak-lean-check-extended
Validates lean/choreo/lean-extended.choreo with two course cycles before dessert options.
Negative Testing
nix develop --command just rumpsteak-lean-check-failing
Exports lean/choreo/lean-failing.choreo with a corrupted label. The runner exits non-zero, confirming error detection works.
Full CI Sweep
nix develop --command just ci-dry-run
Runs Rust fmt, clippy, tests, and all Lean verification scenarios.
Sample Choreography
lean/choreo/lean-sample.choreo models a collaborative dinner with two meal branches (pasta, tacos) and three dessert options (cake, fruit, none). The runner checks that each branch for Chef, SousChef, and Baker is a subsequence of its projection and introduces no extra labels.
Editing Scenarios
Update inputs in lean/choreo/. Change --role in the Just recipes to validate other roles. Regenerate and re-run the verification commands to test new scenarios.
Known Limitations
- Axioms: Safety properties (subject reduction, progress) are assumed, not proven
- Recursion: DSL implicit loops cannot be cross-validated against the theory's explicit µ-binders
- Extensions: Custom DSL extensions bypass formal verification
- Payloads: Type annotations on messages are not semantically verified
- Coinductive round-trip: The EQ2C round-trip proof currently assumes ProductiveC on the RHS; regularity alone does not imply productivity
Future Work
- Prove axiomatized properties in Lean
- Unify recursion models between DSL and theory
- Add formal verification for broadcast desugaring
- Extend proofs to cover parallel composition
Lean-Rust Bridge
The rumpsteak-lean-bridge crate connects Rust implementations to Lean verification. It provides JSON serialization, test infrastructure, and the LeanRunner for invoking the Lean binary.
Two Projection Implementations
The project maintains two independent projection implementations.
| Implementation | Location | Purpose |
|---|---|---|
| Theory Projection | rust/theory/src/projection.rs | Minimal, matches Lean formalization |
| DSL Projection | rust/choreography/src/compiler/projection.rs | Extended features for code generation |
The Theory Projection implements core MPST projection with direct correspondence to Lean. The DSL Projection extends this for practical use. Cross-validation tests verify both produce equivalent results on the common subset of protocols.
JSON Format Specification
The JSON format mirrors Lean type definitions exactly. Each type uses a discriminated union with a kind field.
GlobalType
{ "kind": "end" }
This represents protocol termination.
{
"kind": "comm",
"sender": "Client",
"receiver": "Server",
"branches": [
{
"label": { "name": "request", "sort": "unit" },
"continuation": { "kind": "end" }
}
]
}
This represents communication with labeled branches.
{
"kind": "mu",
"var": "loop",
"body": { "kind": "var", "name": "loop" }
}
This represents a recursive type with explicit binding.
LocalTypeR
{
"kind": "send",
"partner": "Server",
"branches": [
{
"label": { "name": "request", "sort": "unit" },
"continuation": { "kind": "end" }
}
]
}
This represents internal choice where the sender selects a branch.
{
"kind": "recv",
"partner": "Client",
"branches": [
{
"label": { "name": "response", "sort": "string" },
"continuation": { "kind": "end" }
}
]
}
This represents external choice: the sender selects a branch and the receiver handles whichever label arrives.
PayloadSort
{ "sort": "unit" }
{ "sort": "nat" }
{ "sort": "bool" }
{ "sort": "string" }
{ "sort": { "prod": [{ "sort": "nat" }, { "sort": "bool" }] } }
These examples show the supported payload sorts including products.
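As a sketch of how this discriminated union can be mirrored in Rust, the following enum and hand-rolled serializer are illustrative only (the names and the std-only serialization are assumptions, not the bridge's actual types):

```rust
// Illustrative mirror of the PayloadSort JSON union (not the crate's real type).
#[derive(Debug, Clone, PartialEq)]
enum PayloadSort {
    Unit,
    Nat,
    Bool,
    Str,
    Prod(Box<PayloadSort>, Box<PayloadSort>),
}

// Hand-rolled, std-only serializer for the sketch.
fn to_json(s: &PayloadSort) -> String {
    match s {
        PayloadSort::Unit => r#"{"sort":"unit"}"#.to_string(),
        PayloadSort::Nat => r#"{"sort":"nat"}"#.to_string(),
        PayloadSort::Bool => r#"{"sort":"bool"}"#.to_string(),
        PayloadSort::Str => r#"{"sort":"string"}"#.to_string(),
        // Products nest recursively, matching the `prod` array above.
        PayloadSort::Prod(a, b) => {
            format!(r#"{{"sort":{{"prod":[{},{}]}}}}"#, to_json(a), to_json(b))
        }
    }
}
```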
Test Infrastructure
Cross-Validation Tests
Location: rust/lean-bridge/tests/projection_equivalence_tests.rs
These tests verify DSL and Theory projections produce equivalent results. Coverage includes simple sends, multi-message sequences, binary and multi-way choices, nested choices, three-party protocols, non-participant merging scenarios, and recursive protocols with explicit continue statements.
rec Loop {
A -> B: Ping
B -> A: Pong
continue Loop
}
The tests parse DSL choreographies, project using both implementations, and compare results structurally. Recursive protocols use explicit continue syntax that maps directly to the theory’s Var constructor.
Projection Property Tests
Location: rust/lean-bridge/tests/proptest_projection.rs
Randomized testing with deterministic seeds covers simple protocols, choice protocols with binary and multi-way branches, deep nesting at 4-6 levels, three-party non-participant merge scenarios, and continuation preservation regressions. Each generated protocol is validated against the Lean binary when available.
Async Subtyping Property Tests
Location: rust/lean-bridge/tests/proptest_async_subtyping.rs
These tests validate the SISO decomposition and asynchronous subtyping algorithms from rumpsteak-theory. The test suite uses fixed seeds for full reproducibility.
#![allow(unused)]
fn main() {
// Example: Input tree contravariance property
let tree = InputTree::Node {
partner: "Alice".to_string(),
label: Label::new("msg"),
children: vec![InputTree::Leaf],
};
assert!(tree.subtype(&InputTree::Leaf));
}
Input tree subtyping is contravariant: a process accepting more inputs is substitutable for one accepting fewer. Output tree subtyping is covariant: a process sending fewer outputs is substitutable for one sending more.
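A minimal model of this variance rule, with deliberately simplified types (the real InputTree lives in rumpsteak-theory and differs in detail):

```rust
// Simplified model of input-tree subtyping (illustrative, not the crate's type).
enum InputTree {
    Leaf,
    Node { label: String, children: Vec<InputTree> },
}

impl InputTree {
    // Contravariant reading: `self` is substitutable for `sup` when `self`
    // accepts at least every input `sup` expects.
    fn subtype(&self, sup: &InputTree) -> bool {
        match (self, sup) {
            // `sup` expects no further inputs, so anything is substitutable.
            (_, InputTree::Leaf) => true,
            // `sup` still expects an input a leaf cannot accept.
            (InputTree::Leaf, InputTree::Node { .. }) => false,
            (InputTree::Node { label: a, children: ca },
             InputTree::Node { label: b, children: cb }) => {
                a == b && cb.iter().all(|s| ca.iter().any(|c| c.subtype(s)))
            }
        }
    }
}
```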
The property tests cover input tree reflexivity and contravariance, output tree reflexivity and covariance, SISO decomposition for simple types and sequences, async_subtype and async_equivalent reflexivity and symmetry, orphan freedom checks, choice type decomposition, recursive type handling, and segment subtyping.
Lean Integration Tests
Location: rust/lean-bridge/tests/lean_integration_tests.rs
Direct validation against the Lean runner covers ping-pong protocols, three-party ring topologies, choice branching correctness, recursive protocols unrolled one iteration, error detection for invalid projections and wrong labels, and complex multi-branch protocols.
CI Enforcement
The GitHub Actions workflow ensures Lean verification runs on every push.
- Build Lean binary: cd lean && lake build
- Run Lean verification scripts: just rumpsteak-lean-check*
- Run Lean-validated Rust tests: cargo test -p rumpsteak-lean-bridge
The test test_lean_binary_available_for_ci explicitly requires the Lean binary and fails if missing. This makes Lean verification mandatory in CI rather than silently skipped.
Running Tests Locally
Build Lean Binary
# With Nix (recommended)
nix develop --command bash -c "cd lean && lake build"
# Without Nix (requires Lean 4 toolchain)
cd lean && lake build
These commands build the Lean binary required for cross-validation tests.
Run All Lean-Validated Tests
# Full test suite
cargo test -p rumpsteak-lean-bridge
# Cross-validation only
cargo test -p rumpsteak-lean-bridge --test projection_equivalence_tests
# Projection property tests only
cargo test -p rumpsteak-lean-bridge --test proptest_projection
# Async subtyping property tests only
cargo test -p rumpsteak-lean-bridge --test proptest_async_subtyping
# Integration tests only
cargo test -p rumpsteak-lean-bridge --test lean_integration_tests
These commands run specific test suites for focused debugging.
Rust API
Export and Import
#![allow(unused)]
fn main() {
use rumpsteak_lean_bridge::{global_to_json, local_to_json, json_to_global, json_to_local};
use rumpsteak_types::{GlobalType, LocalTypeR, Label};
let g = GlobalType::comm(
"Client",
"Server",
vec![(Label::new("hello"), GlobalType::End)],
);
let json = global_to_json(&g);
let parsed = json_to_global(&json)?;
}
This example shows the round-trip export and import of a global type through JSON.
LeanRunner
#![allow(unused)]
fn main() {
use rumpsteak_lean_bridge::LeanRunner;
// Check availability
if LeanRunner::is_available() {
let runner = LeanRunner::new()?;
let result = runner.validate(&choreography_json, &program_json)?;
assert!(result.success);
}
// For CI: require Lean (panics if missing)
LeanRunner::require_available();
}
The LeanRunner invokes the Lean binary for validation. Use is_available() to check for the binary before use. Use require_available() in CI environments where the binary must be present.
Validator
#![allow(unused)]
fn main() {
use rumpsteak_lean_bridge::Validator;
let validator = Validator::new();
let result = validator.validate_projection_with_lean(&choreo, &program)?;
assert!(result.is_valid());
}
The Validator provides a higher-level API for comparing Rust and Lean projections.
CLI Tool
cargo build -p rumpsteak-lean-bridge --features cli --release
This command builds the CLI with the cli feature enabled.
# Generate sample protocol JSON
lean-bridge sample --sample ping-pong --pretty
lean-bridge sample --sample calculator --pretty
# Validate JSON round-trip
lean-bridge validate --input protocol.json --type global
# Parse and display JSON
lean-bridge import --input protocol.json --type local
# Export sample to file
lean-bridge export --type global --output protocol.json --pretty
These commands demonstrate the CLI operations for generating, validating, and converting protocol JSON.
Merge Semantics
The Rust merge implementation matches the Lean formalization in ProjectionR.lean.
Send merge follows mergeSendSorted where branches must have identical label sets. A non-participant cannot choose based on an unseen choice. Receive merge follows mergeRecvSorted where branches union their label sets. A non-participant can handle any incoming message.
The rumpsteak-theory crate implements these in merge.rs. Tests validate correspondence between Rust and Lean results.
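The label-set discipline can be sketched over plain sets. This is the semantics as described above distilled into two functions, not the code in merge.rs:

```rust
use std::collections::BTreeSet;

// Sketch of the label-set rules described above (not the code in merge.rs).
// Send-merge: branches must carry identical label sets, else merging fails,
// because a non-participant cannot choose based on an unseen choice.
fn merge_send_labels(a: &BTreeSet<String>, b: &BTreeSet<String>) -> Option<BTreeSet<String>> {
    if a == b { Some(a.clone()) } else { None }
}

// Recv-merge: a non-participant can handle any incoming message,
// so the label sets union.
fn merge_recv_labels(a: &BTreeSet<String>, b: &BTreeSet<String>) -> BTreeSet<String> {
    a.union(b).cloned().collect()
}
```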
Error Handling
Import Errors
#![allow(unused)]
fn main() {
use rumpsteak_lean_bridge::ImportError;
match json_to_global(&json) {
Ok(g) => println!("Parsed: {:?}", g),
Err(ImportError::UnknownKind(k)) => eprintln!("Unknown kind: {}", k),
Err(ImportError::MissingField(f)) => eprintln!("Missing field: {}", f),
Err(ImportError::InvalidFormat(msg)) => eprintln!("Invalid: {}", msg),
}
}
Import errors distinguish between unknown type kinds, missing required fields, and format violations.
Validation Results
#![allow(unused)]
fn main() {
use rumpsteak_lean_bridge::ValidationResult;
match validator.compare_local(&rust, &lean) {
ValidationResult::Match => println!("Types match"),
ValidationResult::StructuralMismatch(diff) => eprintln!("Differs: {}", diff),
ValidationResult::LabelMismatch { expected, found } => {
eprintln!("Label: expected {}, found {}", expected, found);
}
}
}
Validation results indicate whether types match or describe the specific mismatch.
Extending the Bridge
Adding Type Kinds
#![allow(unused)]
fn main() {
fn global_to_json(g: &GlobalType) -> Value {
match g {
GlobalType::MyNewType { field } => json!({
"kind": "my_new_type",
"field": field,
}),
// ...
}
}
fn json_to_global(json: &Value) -> Result<GlobalType, ImportError> {
match json["kind"].as_str() {
Some("my_new_type") => {
let field = json["field"].as_str()
.ok_or(ImportError::MissingField("field"))?;
Ok(GlobalType::MyNewType { field: field.to_string() })
}
// ...
}
}
}
Update both export and import functions when adding new type kinds. Update Lean definitions to match.
Custom Validators
#![allow(unused)]
fn main() {
impl Validator {
pub fn validate_with_options(
&self,
rust: &LocalTypeR,
lean: &Value,
options: ValidationOptions,
) -> ValidationResult {
// Custom logic
}
}
}
Extend the Validator with custom comparison options as needed.
References
See the following resources for background on the theoretical foundations.
- Multiparty Session Types by Honda et al.
- Session Types and Choreographies survey
Examples
Example Index
Basic Protocols
The adder.rs example shows a simple addition service. It uses client and server roles.
The alternating_bit.rs example implements the alternating bit protocol. This provides reliable message delivery.
The client_server_log.rs example demonstrates client‑server interaction. It includes logging functionality.
The ring.rs example shows ring topology. Messages pass sequentially through the ring.
Advanced Protocols
The three_adder.rs example shows a three‑party protocol. It includes coordination logic.
The oauth.rs example implements OAuth authentication flow. It uses client, authorization server, and resource server roles.
The double_buffering.rs example shows producer‑consumer pattern. It uses double buffering for efficiency.
The elevator.rs example implements multi‑floor elevator control. The protocol coordinates elevator movements.
The fft.rs example shows distributed Fast Fourier Transform. Computation is distributed across roles.
Choice and Branching
The ring_choice.rs example shows ring topology with choice points. Roles make decisions at each node. The example demonstrates choice constructs and branching patterns.
WASM
The wasm-ping-pong example runs in browsers. It shows browser‑based ping‑pong protocol. See examples/wasm-ping-pong/README.md for details.
RumpsteakEndpoint supports two patterns. Use register_channel for SimpleChannel. Use register_session for custom transports. Call RumpsteakSession::from_sink_stream for WebSockets or other transports.
Common Patterns
Request‑Response
Client sends request to server. Server processes and sends response back.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol RequestResponse =
roles Client, Server
Client -> Server : Request
Server -> Client : Response
"#);
}
Use this pattern for synchronous operations. Client waits for server response.
Choice
One role decides between branches. Other roles react to the decision.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol ChoicePattern =
roles Client, Server
case choose Server of
Accept ->
Server -> Client : Confirmation
Reject ->
Server -> Client : Rejection
"#);
}
The deciding role chooses which branch to execute. Other participants react accordingly.
Sequential Messages
Send multiple messages in sequence. Each message may depend on previous responses.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol SequentialMessages =
roles Client, Server
Client -> Server : Message1
Server -> Client : Ack
Client -> Server : Message2
Server -> Client : Ack
"#);
}
This pattern provides acknowledgment after each step.
Multi‑Party Coordination
Three or more roles coordinate. Messages flow between different pairs.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol MultiPartyCoordination =
roles Buyer, Coordinator, Seller
Buyer -> Coordinator : Offer
Coordinator -> Seller : Offer
Seller -> Coordinator : Response
Coordinator -> Buyer : Response
"#);
}
The coordinator role mediates between other participants.
Loops
Repeat protocol steps. Loop condition determines when to stop.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol LoopPattern =
roles Client, Server
loop repeat 5
Client -> Server : Request
Server -> Client : Response
"#);
}
Use loops for batch processing or iterative protocols. This example repeats 5 times.
Parallel Composition
Execute independent protocol branches concurrently via adjacent branch blocks.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol ParallelPattern =
roles Coordinator, Worker1, Worker2
branch
Coordinator -> Worker1 : Task
branch
Coordinator -> Worker2 : Task
"#);
}
Branches must not conflict. Different recipients allow parallel execution.
Dynamic Role Binding
Bind role counts at runtime for threshold protocols.
#![allow(unused)]
fn main() {
choreography!(r#"
protocol Threshold =
roles Coordinator, Signers[*]
Coordinator -> Signers[*] : Request
Signers[0..threshold] -> Coordinator : Signature
"#);
}
The wildcard syntax Signers[*] defers count to runtime. Range syntax Signers[0..threshold] selects a subset. Generated code supports runtime validation and binding.
Testing Patterns
Unit Test with InMemoryHandler
Test protocol logic without network.
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_protocol() {
let mut handler = InMemoryHandler::new(Role::Alice);
let program = Program::new()
.send(Role::Bob, TestMessage)
.end();
let mut endpoint = ();
let result = interpret(&mut handler, &mut endpoint, program).await;
assert!(result.is_ok());
}
}
InMemoryHandler provides fast deterministic testing.
Integration Test with RumpsteakHandler
Test actual session‑typed communication.
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_session_types() {
let (alice_ch, bob_ch) = SimpleChannel::pair();
let mut alice_ep = RumpsteakEndpoint::new(Role::Alice);
alice_ep.register_channel(Role::Bob, alice_ch);
let mut bob_ep = RumpsteakEndpoint::new(Role::Bob);
bob_ep.register_channel(Role::Alice, bob_ch);
// Run protocol with both endpoints
}
}
This tests real message passing with session type checking.
Verification with RecordingHandler
Verify protocol execution sequence.
#![allow(unused)]
fn main() {
let mut handler = RecordingHandler::new(Role::Alice);
// ... execute protocol ...
let events = handler.events();
assert_eq!(events[0], RecordedEvent::Send { to: Role::Bob, ... });
assert_eq!(events[1], RecordedEvent::Recv { from: Role::Bob, ... });
}
RecordingHandler captures operation history for assertions.
Fault Injection Testing
Test error handling with FaultInjection middleware.
#![allow(unused)]
fn main() {
let base = InMemoryHandler::new(Role::Alice);
let mut handler = FaultInjection::new(base)
.with_failure_rate(0.1);
// Protocol should handle 10% random failures
}
Use this to verify retry logic and error recovery.
Running Examples
Navigate to the example and run with cargo.
cargo run --example adder
Some examples require specific setup. Check comments at the top of each file.
The wasm-ping-pong example has its own build script.
cd examples/wasm-ping-pong
./build.sh
See individual example files for detailed documentation.
API Reference
Core AST Types
Choreography
#![allow(unused)]
fn main() {
pub struct Choreography {
pub name: Ident,
pub namespace: Option<String>,
pub roles: Vec<Role>,
pub protocol: Protocol,
pub attrs: HashMap<String, String>,
}
}
Represents a complete choreographic protocol specification. Name identifies the protocol. Namespace provides optional module scoping. Roles list all participants. Protocol contains the interaction tree. Attrs hold annotations like optimize or verify.
Methods:
#![allow(unused)]
fn main() {
pub fn qualified_name(&self) -> String
pub fn validate(&self) -> Result<(), ValidationError>
pub fn get_attributes(&self) -> &HashMap<String, String>
pub fn get_attributes_mut(&mut self) -> &mut HashMap<String, String>
pub fn get_attribute(&self, key: &str) -> Option<&String>
pub fn set_attribute(&mut self, key: String, value: String)
pub fn remove_attribute(&mut self, key: &str) -> Option<String>
pub fn has_attribute(&self, key: &str) -> bool
pub fn get_attribute_as<T>(&self, key: &str) -> Option<T>
pub fn get_attribute_as_bool(&self, key: &str) -> Option<bool>
pub fn clear_attributes(&mut self)
pub fn attribute_count(&self) -> usize
pub fn attribute_keys(&self) -> Vec<&String>
pub fn validate_required_attributes(&self, required_keys: &[&str]) -> Result<(), Vec<String>>
pub fn find_nodes_with_annotation(&self, key: &str) -> Vec<&Protocol>
pub fn find_nodes_with_annotation_value(&self, key: &str, value: &str) -> Vec<&Protocol>
pub fn total_annotation_count(&self) -> usize
}
These methods manage choreography attributes and validation. They also provide annotation queries across the protocol tree.
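As a sketch of the behavior a typed getter like get_attribute_as implies (assumed semantics, written as a free function over the attrs map rather than the real method on Choreography):

```rust
use std::collections::HashMap;
use std::str::FromStr;

// Assumed behavior of a typed attribute getter: look up the raw string,
// then parse it into the requested type, returning None on either failure.
fn get_attribute_as<T: FromStr>(attrs: &HashMap<String, String>, key: &str) -> Option<T> {
    attrs.get(key).and_then(|v| v.parse::<T>().ok())
}
```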
Protocol
#![allow(unused)]
fn main() {
pub enum Protocol {
Send {
from: Role,
to: Role,
message: MessageType,
continuation: Box<Protocol>,
annotations: HashMap<String, String>,
from_annotations: HashMap<String, String>,
to_annotations: HashMap<String, String>,
},
Broadcast {
from: Role,
to_all: Vec<Role>,
message: MessageType,
continuation: Box<Protocol>,
annotations: HashMap<String, String>,
from_annotations: HashMap<String, String>,
},
Choice {
role: Role,
branches: Vec<Branch>,
annotations: HashMap<String, String>,
},
Loop {
condition: Option<Condition>,
body: Box<Protocol>,
},
Parallel {
protocols: Vec<Protocol>,
},
Rec {
label: Ident,
body: Box<Protocol>,
},
Var(Ident),
Extension {
extension: Box<dyn ProtocolExtension>,
continuation: Box<Protocol>,
annotations: HashMap<String, String>,
},
End,
}
}
Protocol represents the global choreography as a tree. Send describes point-to-point message transmission. Broadcast sends messages to multiple roles. Choice represents branching with labeled branches. Loop contains iteration with optional conditions. Parallel holds concurrent protocol execution. Rec defines recursion points. Var references recursive labels. Extension represents custom protocol statements. End terminates the protocol.
Methods:
#![allow(unused)]
fn main() {
pub fn mentions_role(&self, role: &Role) -> bool
pub fn validate(&self, roles: &[Role]) -> Result<(), ValidationError>
pub fn get_annotations(&self) -> &HashMap<String, String>
pub fn get_annotation(&self, key: &str) -> Option<&String>
pub fn get_annotations_mut(&mut self) -> Option<&mut HashMap<String, String>>
pub fn has_annotation(&self, key: &str) -> bool
pub fn set_annotation(&mut self, key: String, value: String) -> bool
pub fn get_annotation_as<T>(&self, key: &str) -> Option<T>
pub fn get_annotation_as_bool(&self, key: &str) -> Option<bool>
pub fn get_annotations_with_prefix(&self, prefix: &str) -> HashMap<String, String>
pub fn collect_nodes_with_annotation(&self, key: &str, nodes: &mut Vec<&Protocol>)
pub fn collect_nodes_with_annotation_value(
&self,
key: &str,
value: &str,
nodes: &mut Vec<&Protocol>,
)
pub fn deep_annotation_count(&self) -> usize
}
These methods manage statement annotations and query annotation usage. They are used by validation and tooling passes.
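A miniature version of a traversal like mentions_role, with the protocol tree simplified for illustration (the real enum carries messages, annotations, and more variants):

```rust
// Simplified protocol tree for illustration only.
enum Protocol {
    Send { from: String, to: String, continuation: Box<Protocol> },
    Choice { role: String, branches: Vec<Protocol> },
    End,
}

// A traversal like `mentions_role` walks every constructor recursively.
fn mentions_role(p: &Protocol, role: &str) -> bool {
    match p {
        Protocol::Send { from, to, continuation } => {
            from == role || to == role || mentions_role(continuation, role)
        }
        Protocol::Choice { role: r, branches } => {
            r == role || branches.iter().any(|b| mentions_role(b, role))
        }
        Protocol::End => false,
    }
}
```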
Branch
#![allow(unused)]
fn main() {
pub struct Branch {
pub label: Ident,
pub guard: Option<TokenStream>,
pub protocol: Protocol,
}
}
Branch in a choice construct. Label identifies the branch. Guard provides optional conditional expression. Protocol contains the branch continuation.
Condition
#![allow(unused)]
fn main() {
pub enum Condition {
RoleDecides(Role),
Count(usize),
Custom(TokenStream),
Fuel(usize),
YieldAfter(usize),
YieldWhen(String),
}
}
Loop condition specification.
RoleDecides means a role controls iteration. Note: loop decide by Role is desugared
to a choice+recursion pattern at parse time, so RoleDecides is rarely seen in the AST
after parsing. See the DSL documentation for details.
Count specifies fixed iterations.
Custom contains arbitrary Rust expressions.
Fuel limits the number of iterations.
YieldAfter yields control after a number of steps.
YieldWhen yields control on a matching label or condition.
LocalType
#![allow(unused)]
fn main() {
pub enum LocalType {
Send {
to: Role,
message: MessageType,
continuation: Box<LocalType>,
},
Receive {
from: Role,
message: MessageType,
continuation: Box<LocalType>,
},
Select {
to: Role,
branches: Vec<(Ident, LocalType)>,
},
Branch {
from: Role,
branches: Vec<(Ident, LocalType)>,
},
LocalChoice {
branches: Vec<(Ident, LocalType)>,
},
Loop {
condition: Option<Condition>,
body: Box<LocalType>,
},
Rec {
label: Ident,
body: Box<LocalType>,
},
Var(Ident),
Timeout {
duration: Duration,
body: Box<LocalType>,
},
End,
}
}
LocalType is the projected session type for a single role. Send and Receive represent communication actions. Select makes a choice and notifies others. Branch receives a choice from another role. LocalChoice is internal branching without communication. Loop, Rec, Var handle iteration and recursion. Timeout wraps a local type with a duration. End terminates the session.
Methods:
#![allow(unused)]
fn main() {
pub fn is_well_formed(&self) -> bool
}
This checks the recursive structure for well-formedness. It is used in validation and tests.
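One core well-formedness condition is that every Var refers to an enclosing Rec label. A simplified check over a cut-down local type (illustrative only, not the crate's is_well_formed):

```rust
// Cut-down local type for illustration.
enum LocalType {
    Send { continuation: Box<LocalType> },
    Rec { label: String, body: Box<LocalType> },
    Var(String),
    End,
}

// Well-formed here means: every Var is bound by an enclosing Rec.
fn closed(lt: &LocalType, bound: &mut Vec<String>) -> bool {
    match lt {
        LocalType::Send { continuation } => closed(continuation, bound),
        LocalType::Rec { label, body } => {
            bound.push(label.clone());
            let ok = closed(body, bound);
            bound.pop();
            ok
        }
        LocalType::Var(l) => bound.iter().any(|b| b == l),
        LocalType::End => true,
    }
}
```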
Role
#![allow(unused)]
fn main() {
pub struct Role { /* private fields */ }
}
Role identifies a protocol participant. Name is the role identifier. Param specifies role count (Static, Symbolic, or Runtime). Index specifies role instance (Concrete, Symbolic, Wildcard, or Range). Array_size stores a computed size for code generation.
Methods:
#![allow(unused)]
fn main() {
pub fn new(name: Ident) -> RoleValidationResult<Self>
pub fn with_param(name: Ident, param: RoleParam) -> RoleValidationResult<Self>
pub fn with_index(name: Ident, index: RoleIndex) -> RoleValidationResult<Self>
pub fn with_param_and_index(
name: Ident,
param: RoleParam,
index: RoleIndex,
) -> RoleValidationResult<Self>
pub fn parameterized(name: Ident, param: TokenStream) -> RoleValidationResult<Self>
pub fn array(name: Ident, size: usize) -> RoleValidationResult<Self>
pub fn name(&self) -> &Ident
pub fn param(&self) -> Option<&RoleParam>
pub fn index(&self) -> Option<&RoleIndex>
pub fn array_size(&self) -> Option<&TokenStream>
pub fn is_indexed(&self) -> bool
pub fn is_parameterized(&self) -> bool
pub fn is_array(&self) -> bool
pub fn is_dynamic(&self) -> bool
pub fn is_symbolic(&self) -> bool
pub fn is_wildcard(&self) -> bool
pub fn is_range(&self) -> bool
pub fn get_param(&self) -> Option<&RoleParam>
pub fn get_index(&self) -> Option<&RoleIndex>
pub fn matches_family(&self, family: &Role) -> bool
pub fn validate(&self) -> RoleValidationResult<()>
pub fn safe_static(name: Ident, count: u32) -> RoleValidationResult<Self>
pub fn safe_indexed(name: Ident, index: u32) -> RoleValidationResult<Self>
pub fn safe_range(name: Ident, start: u32, end: u32) -> RoleValidationResult<Self>
}
These methods construct and validate roles; constructors return RoleValidationResult so that bounds violations surface as errors. The accessor methods inspect role parameters and indices.
RoleParam
#![allow(unused)]
fn main() {
pub enum RoleParam {
Static(u32),
Symbolic(String),
Runtime,
}
}
Role parameter for dynamic role counts. Static specifies a compile-time count. Symbolic uses a named parameter. Runtime indicates count determined at runtime.
RoleIndex
#![allow(unused)]
fn main() {
pub enum RoleIndex {
Concrete(u32),
Symbolic(String),
Wildcard,
Range(RoleRange),
}
}
Role index for referencing specific instances. Concrete specifies a fixed index. Symbolic uses a named variable. Wildcard matches all instances. Range specifies a range of indices.
MessageType
#![allow(unused)]
fn main() {
pub struct MessageType {
pub name: Ident,
pub type_annotation: Option<TokenStream>,
pub payload: Option<TokenStream>,
}
}
MessageType describes a protocol message. Name is the message identifier. Type_annotation contains optional Rust type annotations. Payload specifies the message data type.
Methods:
#![allow(unused)]
fn main() {
pub fn to_ident(&self) -> Ident
}
This method converts the message type name to an identifier. It is used during code generation.
Parser API
parse_choreography_str
#![allow(unused)]
fn main() {
pub fn parse_choreography_str(input: &str) -> Result<Choreography, ParseError>
}
Parses a choreography from a string. Returns ParseError if syntax is invalid or roles are undefined.
parse_choreography_file
#![allow(unused)]
fn main() {
pub fn parse_choreography_file(path: &Path) -> Result<Choreography, ParseError>
}
Parses a choreography from a file. Reads the file content and delegates to parse_choreography_str.
parse_choreography
#![allow(unused)]
fn main() {
pub fn parse_choreography(input: TokenStream) -> Result<Choreography, ParseError>
}
Parses a choreography from a proc macro token stream. Used internally by the choreography macro.
ParseError
#![allow(unused)]
fn main() {
pub enum ParseError {
Pest(Box<pest::error::Error<Rule>>),
Layout { span: ErrorSpan, message: String },
Syntax { span: ErrorSpan, message: String },
UndefinedRole { role: String, span: ErrorSpan },
DuplicateRole { role: String, span: ErrorSpan },
EmptyChoreography,
InvalidMessage { message: String, span: ErrorSpan },
InvalidCondition { message: String, span: ErrorSpan },
UndefinedProtocol { protocol: String, span: ErrorSpan },
DuplicateProtocol { protocol: String, span: ErrorSpan },
GrammarComposition(GrammarCompositionError),
}
}
ParseError describes parsing failures. Each variant includes error context and location information. ErrorSpan provides formatted error messages with source snippets.
Projection API
project
#![allow(unused)]
fn main() {
pub fn project(choreography: &Choreography, role: &Role) -> Result<LocalType, ProjectionError>
}
Projects a global choreography to a local session type for one role. Returns ProjectionError if projection fails due to conflicts or invalid patterns.
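The core projection rule can be shown with a toy model: a communication from A to B projects to a send for A, a receive for B, and is skipped for uninvolved roles. This sketch uses simplified hypothetical types; the real project also merges choice branches and handles recursion.

```rust
// Minimal projection sketch over toy global/local types.
#[derive(Debug, Clone, PartialEq)]
enum Global {
    Comm { from: String, to: String, label: String, cont: Box<Global> },
    End,
}

#[derive(Debug, Clone, PartialEq)]
enum Local {
    Send { to: String, label: String, cont: Box<Local> },
    Recv { from: String, label: String, cont: Box<Local> },
    End,
}

fn project(g: &Global, role: &str) -> Local {
    match g {
        Global::End => Local::End,
        Global::Comm { from, to, label, cont } => {
            let k = Box::new(project(cont, role));
            if role == from {
                Local::Send { to: to.clone(), label: label.clone(), cont: k }
            } else if role == to {
                Local::Recv { from: from.clone(), label: label.clone(), cont: k }
            } else {
                *k // role not involved: skip this communication
            }
        }
    }
}

fn main() {
    let g = Global::Comm {
        from: "Client".into(), to: "Server".into(), label: "request".into(),
        cont: Box::new(Global::End),
    };
    assert!(matches!(project(&g, "Client"), Local::Send { .. }));
    assert!(matches!(project(&g, "Server"), Local::Recv { .. }));
    assert_eq!(project(&g, "Logger"), Local::End);
    println!("projection ok");
}
```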
ProjectionError
#![allow(unused)]
fn main() {
pub enum ProjectionError {
NonParticipantChoice,
UnsupportedParallel(String),
InconsistentParallel,
UnboundVariable(String),
DynamicRoleProjection { role: String },
UnboundSymbolic { param: String },
RangeProjection,
WildcardProjection,
}
}
ProjectionError indicates projection failures. NonParticipantChoice means a role is not involved in a choice. UnsupportedParallel indicates parallel composition is not supported for the role. InconsistentParallel means conflicting parallel branches. UnboundVariable indicates a recursive variable is not in scope. DynamicRoleProjection means runtime roles cannot be projected statically. UnboundSymbolic indicates a symbolic parameter is not bound. RangeProjection and WildcardProjection indicate unsupported index types.
Code Generation API
generate_session_type
#![allow(unused)]
fn main() {
pub fn generate_session_type(
choreography: &Choreography,
role: &Role,
) -> Result<TokenStream, ProjectionError>
}
Generates a Rumpsteak session type for a specific role. Projects the choreography to a local type and converts it to Rust type definitions.
generate_choreography_code
#![allow(unused)]
fn main() {
pub fn generate_choreography_code(choreography: &Choreography) -> TokenStream
}
Generates complete choreography code including roles, messages, and session types.
generate_effects_protocol
#![allow(unused)]
fn main() {
pub fn generate_effects_protocol(choreography: &Choreography) -> TokenStream
}
Generates effect-based protocol implementations. Creates effect programs that handlers can interpret at runtime.
generate_role_implementations
#![allow(unused)]
fn main() {
pub fn generate_role_implementations(choreography: &Choreography) -> TokenStream
}
Generates trait implementations for role types.
generate_helpers
#![allow(unused)]
fn main() {
pub fn generate_helpers(choreography: &Choreography) -> TokenStream
}
Generates helper functions for protocol execution.
Effect System API
Program
#![allow(unused)]
fn main() {
pub struct Program<R: RoleId, M> {
pub effects: Vec<Effect<R, M>>,
}
}
Program is a sequence of effects representing a protocol. The effects field lists the operations to execute.
Builder methods:
#![allow(unused)]
fn main() {
pub fn new() -> Self
pub fn send(self, to: R, msg: M) -> Self
pub fn recv<T: 'static>(self, from: R) -> Self
pub fn choose(self, at: R, label: Label) -> Self
pub fn offer(self, from: R) -> Self
pub fn with_timeout(self, at: R, dur: Duration, body: Program<R, M>) -> Self
pub fn parallel(self, programs: Vec<Program<R, M>>) -> Self
pub fn branch(self, choosing_role: R, branches: Vec<(Label, Program<R, M>)>) -> Self
pub fn loop_n(self, iterations: usize, body: Program<R, M>) -> Self
pub fn loop_inf(self, body: Program<R, M>) -> Self
pub fn ext<E: ExtensionEffect + 'static>(self, extension: E) -> Self
pub fn end(self) -> Self
pub fn then(self, other: Program<R, M>) -> Self
}
Analysis methods:
#![allow(unused)]
fn main() {
pub fn roles_involved(&self) -> HashSet<R>
pub fn send_count(&self) -> usize
pub fn recv_count(&self) -> usize
pub fn has_timeouts(&self) -> bool
pub fn has_parallel(&self) -> bool
pub fn validate(&self) -> Result<(), ProgramError>
pub fn is_empty(&self) -> bool
pub fn len(&self) -> usize
}
These methods inspect program structure and validate the sequence. They are used by analysis and debugging tools.
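A toy version of this builder-plus-analysis pattern is sketched below, assuming drastically simplified types (string roles, no payloads, no choice): each fluent call appends an effect, and the analysis methods fold over the effect list.

```rust
// Toy Program builder: fluent calls accumulate effects; analysis
// methods inspect the accumulated sequence.
use std::collections::HashSet;

#[derive(Debug)]
enum Effect {
    Send { to: &'static str },
    Recv { from: &'static str },
    End,
}

#[derive(Default)]
struct Program {
    effects: Vec<Effect>,
}

impl Program {
    fn new() -> Self { Self::default() }
    fn send(mut self, to: &'static str) -> Self { self.effects.push(Effect::Send { to }); self }
    fn recv(mut self, from: &'static str) -> Self { self.effects.push(Effect::Recv { from }); self }
    fn end(mut self) -> Self { self.effects.push(Effect::End); self }

    fn send_count(&self) -> usize {
        self.effects.iter().filter(|e| matches!(e, Effect::Send { .. })).count()
    }
    fn roles_involved(&self) -> HashSet<&'static str> {
        self.effects.iter().filter_map(|e| match e {
            Effect::Send { to } => Some(*to),
            Effect::Recv { from } => Some(*from),
            Effect::End => None,
        }).collect()
    }
}

fn main() {
    let p = Program::new().send("Server").recv("Server").end();
    assert_eq!(p.send_count(), 1);
    assert_eq!(p.roles_involved().len(), 1); // only "Server" appears
    println!("program analysis ok");
}
```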
Effect
#![allow(unused)]
fn main() {
pub enum Effect<R: RoleId, M> {
Send { to: R, msg: M },
Recv { from: R, msg_type: &'static str },
Choose { at: R, label: Label },
Offer { from: R },
Branch {
choosing_role: R,
branches: Vec<(Label, Program<R, M>)>,
},
Loop {
iterations: Option<usize>,
body: Box<Program<R, M>>,
},
Timeout {
at: R,
dur: Duration,
body: Box<Program<R, M>>,
},
Parallel {
programs: Vec<Program<R, M>>,
},
Extension(Box<dyn ExtensionEffect>),
End,
}
}
Effect represents a single choreographic operation. Send transmits a message to another role. Recv receives a message with type information. Choose makes an internal choice and broadcasts the label. Offer waits for an external choice from another role. Branch represents conditional execution based on a choice. Loop executes the body a fixed number of times or infinitely. Timeout wraps a sub-program with a time limit. Parallel executes multiple programs concurrently. Extension provides domain-specific effects. End terminates the program.
Methods:
#![allow(unused)]
fn main() {
pub fn ext<E: ExtensionEffect + 'static>(extension: E) -> Self
pub fn is_extension<E: ExtensionEffect + 'static>(&self) -> bool
pub fn as_extension<E: ExtensionEffect + 'static>(&self) -> Option<&E>
pub fn as_extension_mut<E: ExtensionEffect + 'static>(&mut self) -> Option<&mut E>
pub fn extension_type_id(&self) -> Option<TypeId>
}
These helpers inspect and extract extension payloads. They are used by interpreters and middleware.
interpret
#![allow(unused)]
fn main() {
pub async fn interpret<H, R, M>(
handler: &mut H,
endpoint: &mut H::Endpoint,
program: Program<R, M>,
) -> Result<InterpretResult<M>>
where
H: ChoreoHandler<Role = R> + Send,
R: RoleId,
M: ProgramMessage + Serialize + DeserializeOwned + 'static,
}
Interprets a program using a handler. Executes each effect by calling handler methods. Returns InterpretResult with received messages and execution status.
interpret_extensible
#![allow(unused)]
fn main() {
pub async fn interpret_extensible<H, R, M>(
handler: &mut H,
endpoint: &mut <H as ExtensibleHandler>::Endpoint,
program: Program<R, M>,
) -> Result<InterpretResult<M>>
where
H: ChoreoHandler<Role = R, Endpoint = <H as ExtensibleHandler>::Endpoint>
+ ExtensibleHandler
+ Send,
R: RoleId,
M: ProgramMessage + Serialize + DeserializeOwned + 'static,
}
Interprets a program using an extensible handler. Supports handlers that dispatch extension effects to registered handlers.
InterpretResult
#![allow(unused)]
fn main() {
pub struct InterpretResult<M> {
pub received_values: Vec<M>,
pub final_state: InterpreterState,
}
}
InterpretResult contains execution results. The received_values field holds messages from recv operations, and final_state indicates the execution outcome.
InterpreterState
#![allow(unused)]
fn main() {
pub enum InterpreterState {
Completed,
Timeout,
Failed(String),
}
}
State of the program interpreter. Completed means successful execution. Timeout indicates operation exceeded duration. Failed contains error description.
ChoreoHandler
#![allow(unused)]
fn main() {
#[async_trait]
pub trait ChoreoHandler: Send {
type Role: RoleId;
type Endpoint: Endpoint;
async fn send<M: Serialize + Send + Sync>(
&mut self,
ep: &mut Self::Endpoint,
to: Self::Role,
msg: &M,
) -> ChoreoResult<()>;
async fn recv<M: DeserializeOwned + Send>(
&mut self,
ep: &mut Self::Endpoint,
from: Self::Role,
) -> ChoreoResult<M>;
async fn choose(
&mut self,
ep: &mut Self::Endpoint,
who: Self::Role,
label: <Self::Role as RoleId>::Label,
) -> ChoreoResult<()>;
async fn offer(
&mut self,
ep: &mut Self::Endpoint,
from: Self::Role,
) -> ChoreoResult<<Self::Role as RoleId>::Label>;
async fn with_timeout<F, T>(
&mut self,
ep: &mut Self::Endpoint,
at: Self::Role,
dur: Duration,
body: F,
) -> ChoreoResult<T>
where
F: std::future::Future<Output = ChoreoResult<T>> + Send;
async fn broadcast<M: Serialize + Send + Sync>(
&mut self,
ep: &mut Self::Endpoint,
recipients: &[Self::Role],
msg: &M,
) -> ChoreoResult<()>;
}
}
The ChoreoHandler trait defines the handler interface. Implement this trait to create custom transport handlers. It uses async_trait for object safety.
ChoreographicAdapter
#![allow(unused)]
fn main() {
#[async_trait]
pub trait ChoreographicAdapter: Send {
type Error: std::error::Error + Send + Sync + From<ChoreographyError> + 'static;
type Role: RoleId;
async fn send<M: Message>(&mut self, to: Self::Role, msg: M) -> Result<(), Self::Error>;
async fn recv<M: Message>(&mut self, from: Self::Role) -> Result<M, Self::Error>;
async fn provide_message<M: Message>(&mut self, to: Self::Role) -> Result<M, Self::Error>;
async fn select_branch<L: LabelId>(&mut self, labels: &[L]) -> Result<L, Self::Error>;
async fn broadcast<M: Message + Clone>(
&mut self,
to: &[Self::Role],
msg: M,
) -> Result<(), Self::Error>;
async fn collect<M: Message>(&mut self, from: &[Self::Role]) -> Result<Vec<M>, Self::Error>;
async fn choose(
&mut self,
to: Self::Role,
label: <Self::Role as RoleId>::Label,
) -> Result<(), Self::Error>;
async fn offer(
&mut self,
from: Self::Role,
) -> Result<<Self::Role as RoleId>::Label, Self::Error>;
}
}
ChoreographicAdapter drives generated runners directly.
Implement provide_message and select_branch to supply runtime data and decisions.
Default implementations return errors to avoid placeholder behavior.
ExtensionEffect
#![allow(unused)]
fn main() {
pub trait ExtensionEffect: Send + Sync + std::fmt::Debug {
fn type_id(&self) -> TypeId;
fn type_name(&self) -> &'static str;
fn participating_role_ids(&self) -> Vec<Box<dyn Any>>;
fn as_any(&self) -> &dyn Any;
fn as_any_mut(&mut self) -> &mut dyn Any;
fn clone_box(&self) -> Box<dyn ExtensionEffect>;
}
}
ExtensionEffect enables domain-specific protocol extensions. The type_id method provides unique identification, type_name returns a human-readable name, and participating_role_ids specifies which roles are involved (an empty list means all roles). The as_any and as_any_mut methods enable downcasting, and clone_box clones the trait object.
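The downcasting pattern behind as_any and as_extension can be sketched with a stripped-down trait. The LogEffect type and the free as_extension function here are hypothetical illustrations, not part of the crate's API.

```rust
// Sketch of the Any-based downcasting pattern: a trait object exposes
// &dyn Any so interpreters can recover the concrete extension type.
use std::any::Any;

trait ExtensionEffect: std::fmt::Debug {
    fn type_name(&self) -> &'static str;
    fn as_any(&self) -> &dyn Any;
}

#[derive(Debug)]
struct LogEffect {
    message: String,
}

impl ExtensionEffect for LogEffect {
    fn type_name(&self) -> &'static str { "LogEffect" }
    fn as_any(&self) -> &dyn Any { self }
}

// Recover a concrete extension from the trait object, if the type matches.
fn as_extension<E: 'static>(ext: &dyn ExtensionEffect) -> Option<&E> {
    ext.as_any().downcast_ref::<E>()
}

fn main() {
    let ext: Box<dyn ExtensionEffect> = Box::new(LogEffect { message: "hello".into() });
    let log = as_extension::<LogEffect>(ext.as_ref()).expect("wrong type");
    assert_eq!(log.message, "hello");
    assert!(as_extension::<String>(ext.as_ref()).is_none()); // type mismatch
    println!("extension downcast ok");
}
```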
ChoreographyError
#![allow(unused)]
fn main() {
pub enum ChoreographyError {
Transport(String),
Serialization(String),
ChannelSendFailed { channel_type: &'static str, reason: String },
ChannelClosed { channel_type: &'static str, operation: &'static str },
NoPeerChannel { peer: String },
LabelSerializationFailed { operation: &'static str, reason: String },
MessageSerializationFailed {
operation: &'static str,
type_name: &'static str,
reason: String
},
Timeout(Duration),
ProtocolViolation(String),
UnknownRole(String),
ProtocolContext {
protocol: &'static str,
role: &'static str,
phase: &'static str,
inner: Box<ChoreographyError>
},
RoleContext { role: &'static str, index: Option<u32>, inner: Box<ChoreographyError> },
MessageContext {
operation: &'static str,
message_type: &'static str,
direction: &'static str,
other_role: &'static str,
inner: Box<ChoreographyError>
},
ChoiceError { role: &'static str, details: String },
WithContext { context: String, inner: Box<ChoreographyError> },
InvalidChoice { expected: Vec<String>, actual: String },
ExecutionError(String),
}
}
ChoreographyError describes execution failures. Transport covers network and channel errors. Serialization handles encoding and decoding issues. Timeout indicates an operation exceeded its duration. ProtocolViolation means a session type mismatch. UnknownRole indicates a referenced role was not found.
LabelId
#![allow(unused)]
fn main() {
pub trait LabelId: Copy + Eq + std::hash::Hash + Debug + Send + Sync + 'static {
fn as_str(&self) -> &'static str;
fn from_str(label: &str) -> Option<Self>;
}
}
LabelId identifies branches in choice protocols. Implementations map stable identifiers to protocol branch names.
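The shape of a typical LabelId implementation can be shown with a toy two-branch protocol. The trait is re-declared locally (without the full bounds) so the sketch is self-contained; PaymentLabel is a hypothetical example, not generated code.

```rust
// Illustrative LabelId implementation for a two-branch choice.
trait LabelId: Copy + Eq + std::fmt::Debug {
    fn as_str(&self) -> &'static str;
    fn from_str(label: &str) -> Option<Self> where Self: Sized;
}

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
enum PaymentLabel {
    Approve,
    Deny,
}

impl LabelId for PaymentLabel {
    fn as_str(&self) -> &'static str {
        match self {
            PaymentLabel::Approve => "approve",
            PaymentLabel::Deny => "deny",
        }
    }
    fn from_str(label: &str) -> Option<Self> {
        match label {
            "approve" => Some(PaymentLabel::Approve),
            "deny" => Some(PaymentLabel::Deny),
            _ => None, // unknown labels are rejected, not invented
        }
    }
}

fn main() {
    assert_eq!(PaymentLabel::from_str("approve"), Some(PaymentLabel::Approve));
    assert_eq!(PaymentLabel::Deny.as_str(), "deny");
    assert_eq!(PaymentLabel::from_str("retry"), None);
    println!("label round-trip ok");
}
```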
RoleId
#![allow(unused)]
fn main() {
pub trait RoleId: Copy + Eq + std::hash::Hash + Debug + Send + Sync + 'static {
type Label: LabelId;
fn role_name(&self) -> RoleName;
fn role_index(&self) -> Option<u32>;
}
}
RoleId is the trait for role identifiers. It is implemented automatically for types that meet the bounds.
Handler APIs
InMemoryHandler
#![allow(unused)]
fn main() {
pub struct InMemoryHandler<R: RoleId>
}
In-memory handler for testing, built on futures channels. Simulates message passing without network communication. WASM-compatible.
Constructor:
#![allow(unused)]
fn main() {
pub fn new(role: R) -> Self
pub fn with_channels(
role: R,
channels: Arc<Mutex<HashMap<(R, R), MessageChannelPair>>>,
choice_channels: Arc<Mutex<HashMap<(R, R), ChoiceChannelPair>>>,
) -> Self
}
The new constructor creates an isolated handler. The with_channels constructor shares channels between handlers for coordinated testing.
RumpsteakHandler
#![allow(unused)]
fn main() {
pub struct RumpsteakHandler<R: RoleId>
}
Handler for Rumpsteak session-typed channels. Provides unified abstraction over bidirectional channels and dynamic sessions.
Constructor:
#![allow(unused)]
fn main() {
pub fn new() -> Self
}
Requires RumpsteakEndpoint for connection management. See Using Rumpsteak Handlers for complete usage.
RumpsteakEndpoint
#![allow(unused)]
fn main() {
pub struct RumpsteakEndpoint<R: RoleId>
}
Endpoint managing per-peer channels and sessions. Tracks session metadata and connection state.
SimpleChannel
#![allow(unused)]
fn main() {
pub struct SimpleChannel
}
Simple bidirectional channel for backward compatibility.
Methods:
#![allow(unused)]
fn main() {
pub fn pair() -> (Self, Self)
pub async fn send(&mut self, msg: Vec<u8>) -> Result<(), String>
pub async fn recv(&mut self) -> Result<Vec<u8>, String>
}
These methods create a channel pair and move raw bytes. They are used by the Rumpsteak handler implementation.
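The pair-of-endpoints idea can be sketched synchronously with std::sync::mpsc: each endpoint holds a sender to its peer and its own receiver. This ByteChannel is a stand-in, not the async SimpleChannel itself.

```rust
// Synchronous stand-in for a bidirectional byte channel pair.
use std::sync::mpsc::{channel, Receiver, Sender};

struct ByteChannel {
    tx: Sender<Vec<u8>>,
    rx: Receiver<Vec<u8>>,
}

impl ByteChannel {
    // Cross-wire two mpsc channels so each endpoint sends to the other.
    fn pair() -> (Self, Self) {
        let (tx_a, rx_b) = channel();
        let (tx_b, rx_a) = channel();
        (ByteChannel { tx: tx_a, rx: rx_a }, ByteChannel { tx: tx_b, rx: rx_b })
    }
    fn send(&mut self, msg: Vec<u8>) -> Result<(), String> {
        self.tx.send(msg).map_err(|e| e.to_string())
    }
    fn recv(&mut self) -> Result<Vec<u8>, String> {
        self.rx.recv().map_err(|e| e.to_string())
    }
}

fn main() {
    let (mut a, mut b) = ByteChannel::pair();
    a.send(b"ping".to_vec()).unwrap();
    assert_eq!(b.recv().unwrap(), b"ping");
    println!("channel pair ok");
}
```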
RecordingHandler
#![allow(unused)]
fn main() {
pub struct RecordingHandler<R: RoleId>
}
Recording handler for testing that captures all effects. Does not produce actual values. Use for protocol structure verification.
Constructor:
#![allow(unused)]
fn main() {
pub fn new(role: R) -> Self
}
Methods:
#![allow(unused)]
fn main() {
pub fn events(&self) -> Vec<RecordedEvent<R>>
pub fn clear(&self)
}
The events method returns the list of recorded operations; clear discards them.
RecordedEvent
#![allow(unused)]
fn main() {
pub enum RecordedEvent<R: RoleId> {
Send { from: R, to: R, msg_type: String },
Recv { from: R, to: R, msg_type: String },
Choose { at: R, label: Label },
Offer { from: R, to: R },
}
}
Represents a recorded choreographic operation.
Extension System API
ExtensibleHandler
#![allow(unused)]
fn main() {
pub trait ExtensibleHandler {
type Endpoint: Endpoint;
fn extension_registry(&self) -> &ExtensionRegistry<Self::Endpoint>;
}
}
ExtensibleHandler enables extension effect dispatch. Handlers implementing this trait can process domain-specific extensions.
ExtensionRegistry
#![allow(unused)]
fn main() {
pub struct ExtensionRegistry<EP: Endpoint>
}
Registry mapping extension types to handlers.
Methods:
#![allow(unused)]
fn main() {
pub fn new() -> Self
pub fn register<E, F>(&mut self, handler: F)
where
E: ExtensionEffect + 'static,
F: Fn(&mut EP, &dyn ExtensionEffect) -> BoxFuture<'static, Result<()>>
+ Send
+ Sync
+ 'static,
pub fn is_registered<E: ExtensionEffect + 'static>(&self) -> bool
pub fn handle(
&self,
ep: &mut EP,
ext: &dyn ExtensionEffect,
) -> Option<BoxFuture<'static, Result<()>>>
}
These methods register and invoke extension handlers. They are used by extensible interpreters.
ExtensionError
#![allow(unused)]
fn main() {
pub enum ExtensionError {
UnknownExtension { type_name: String },
TypeMismatch { expected: &'static str, actual: &'static str },
ExecutionFailed(String),
}
}
Errors during extension processing.
Runtime API
spawn
#![allow(unused)]
fn main() {
pub fn spawn<F>(future: F)
where F: Future<Output = ()> + Send + 'static
}
Spawns a task on the platform runtime. Uses tokio on native targets. Uses wasm-bindgen-futures on WASM.
spawn_local
#![allow(unused)]
fn main() {
pub fn spawn_local<F>(future: F)
where F: Future<Output = ()> + 'static
}
Spawns a local task without Send bound. Useful for WASM where Send is not required.
Macro API
choreography!
#![allow(unused)]
fn main() {
choreography!(r#"
protocol ProtocolName =
roles Role1, Role2
Role1 -> Role2 : Message
"#);
}
Procedural macro for choreographies defined in the DSL. Parses the DSL and generates role types, message types, and session types. The macro requires a string literal input.
Example:
#![allow(unused)]
fn main() {
choreography!(r#"
protocol TwoPhaseCommit =
roles Coordinator, Participant
Coordinator -> Participant : Prepare
Participant -> Coordinator : Vote
"#);
}
This example defines a two-party commit protocol. The macro expands into generated role and session types.
Theory API
project
#![allow(unused)]
fn main() {
pub fn project(g: &GlobalType, role: &str) -> Result<LocalTypeR, ProjectionError>
}
Projects a global type to a local type for a given role. Uses merge internally to combine branches when the role is not involved in a choice.
merge
#![allow(unused)]
fn main() {
pub fn merge(t1: &LocalTypeR, t2: &LocalTypeR) -> Result<LocalTypeR, MergeError>
}
Merges two local types. Send merge requires identical label sets. Receive merge unions label sets. Returns an error if the types have incompatible structure.
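The asymmetry between the two merge directions can be demonstrated on bare label sets, abstracting away continuations. This is a simplified model of the rule, not the merge function itself.

```rust
// Merge semantics sketch over branch label sets.
use std::collections::BTreeSet;

type Labels = BTreeSet<String>;

fn labels(xs: &[&str]) -> Labels {
    xs.iter().map(|s| s.to_string()).collect()
}

// Send merge: both branches must offer exactly the same labels.
fn merge_send(l: &Labels, r: &Labels) -> Result<Labels, String> {
    if l == r { Ok(l.clone()) } else { Err("send label mismatch".into()) }
}

// Receive merge: the merged type accepts the union of the labels.
fn merge_recv(l: &Labels, r: &Labels) -> Labels {
    l.union(r).cloned().collect()
}

fn main() {
    let a = labels(&["ok", "err"]);
    let b = labels(&["ok"]);
    assert!(merge_send(&a, &b).is_err()); // unequal label sets: rejected
    assert!(merge_send(&a, &a).is_ok());
    assert_eq!(merge_recv(&a, &b).len(), 2); // union: {ok, err}
    println!("merge semantics ok");
}
```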
can_merge
#![allow(unused)]
fn main() {
pub fn can_merge(t1: &LocalTypeR, t2: &LocalTypeR) -> bool
}
Checks if two local types can be merged without error.
MergeError
#![allow(unused)]
fn main() {
pub enum MergeError {
EndMismatch(LocalTypeR),
PartnerMismatch { expected: String, found: String },
DirectionMismatch,
IncompatibleContinuations { label: String },
RecursiveVariableMismatch { expected: String, found: String },
VariableMismatch { expected: String, found: String },
IncompatibleTypes,
SendLabelMismatch { left: String, right: String },
SendBranchCountMismatch { left: usize, right: usize },
}
}
MergeError indicates why two local types cannot be merged. The SendLabelMismatch and SendBranchCountMismatch variants are specific to send merge which requires identical labels.
sync_subtype
#![allow(unused)]
fn main() {
pub fn sync_subtype(sub: &LocalTypeR, sup: &LocalTypeR) -> bool
}
Checks synchronous subtyping between two local types. Uses tree-based algorithm.
async_subtype
#![allow(unused)]
fn main() {
pub fn async_subtype(sub: &LocalTypeR, sup: &LocalTypeR) -> bool
}
Checks asynchronous subtyping between two local types. Uses SISO decomposition.
dual
#![allow(unused)]
fn main() {
pub fn dual(lt: &LocalTypeR) -> LocalTypeR
}
Computes the dual of a local type. Swaps send and receive operations.
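On a toy local type, duality is a straightforward recursive swap, and applying it twice returns the original type. The Local enum here is a simplified illustration of LocalTypeR.

```rust
// Dual computation sketch: swap send and receive recursively.
#[derive(Debug, Clone, PartialEq)]
enum Local {
    Send(String, Box<Local>),
    Recv(String, Box<Local>),
    End,
}

fn dual(lt: &Local) -> Local {
    match lt {
        Local::Send(p, k) => Local::Recv(p.clone(), Box::new(dual(k))),
        Local::Recv(p, k) => Local::Send(p.clone(), Box::new(dual(k))),
        Local::End => Local::End,
    }
}

fn main() {
    // !String.?Int.end, modeled as Send then Recv.
    let t = Local::Send("B".into(), Box::new(Local::Recv("B".into(), Box::new(Local::End))));
    let d = dual(&t);
    assert_eq!(
        d,
        Local::Recv("B".into(), Box::new(Local::Send("B".into(), Box::new(Local::End))))
    );
    assert_eq!(dual(&d), t); // duality is an involution
    println!("dual ok");
}
```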
Content Addressing API
ContentId
#![allow(unused)]
fn main() {
pub struct ContentId<H: Hasher = Sha256Hasher> {
hash: H::Digest,
}
}
Content identifier wrapping a cryptographic hash. Parameterized by hasher type with SHA-256 as default.
Methods:
#![allow(unused)]
fn main() {
pub fn from_bytes(data: &[u8]) -> Self
pub fn from_hash(hash: impl AsRef<[u8]>) -> Result<Self, ContentIdError>
pub fn as_bytes(&self) -> &[u8]
pub fn to_hex(&self) -> String
pub fn algorithm(&self) -> &'static str
}
These methods construct and inspect content identifiers. Use from_bytes when hashing raw canonical bytes.
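The construct-and-inspect flow can be sketched with std's DefaultHasher standing in for SHA-256 (it is not cryptographic and yields a 64-bit digest rather than 256 bits; the real ContentId defaults to Sha256Hasher).

```rust
// Content-identifier sketch: derive an ID from canonical bytes, render hex.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Debug, Clone, PartialEq, Eq)]
struct ContentId {
    hash: u64, // stand-in digest; the real type holds H::Digest
}

impl ContentId {
    fn from_bytes(data: &[u8]) -> Self {
        let mut h = DefaultHasher::new();
        data.hash(&mut h);
        ContentId { hash: h.finish() }
    }
    fn to_hex(&self) -> String {
        format!("{:016x}", self.hash)
    }
}

fn main() {
    let a = ContentId::from_bytes(b"!String.?Int.end");
    let b = ContentId::from_bytes(b"!String.?Int.end");
    assert_eq!(a, b); // equal canonical bytes give equal identities
    assert_eq!(a.to_hex().len(), 16);
    println!("content id ok");
}
```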
Hasher
#![allow(unused)]
fn main() {
pub trait Hasher: Clone + Default + PartialEq + Send + Sync + 'static {
type Digest: AsRef<[u8]> + Clone + PartialEq + Eq + Hash + Send + Sync + 'static;
const HASH_SIZE: usize;
fn digest(data: &[u8]) -> Self::Digest;
fn algorithm_name() -> &'static str;
}
}
Hasher trait for swappable hash algorithms. The default implementation is Sha256Hasher.
Contentable
#![allow(unused)]
fn main() {
pub trait Contentable {
fn to_bytes(&self) -> Result<Vec<u8>, ContentableError>;
fn from_bytes(bytes: &[u8]) -> Result<Self, ContentableError>;
}
}
Trait for types that support content addressing. Implementations exist for GlobalType, LocalTypeR, Label, and PayloadSort.
ContentStore
#![allow(unused)]
fn main() {
pub struct ContentStore<K: Contentable, V, H: Hasher = Sha256Hasher>
}
Content-addressed storage with deduplication.
Methods:
#![allow(unused)]
fn main() {
pub fn new() -> Self
pub fn with_capacity(capacity: usize) -> Self
pub fn get(&self, key: &K) -> Result<Option<&V>, ContentableError>
pub fn insert(&mut self, key: &K, value: V) -> Result<Option<V>, ContentableError>
pub fn get_or_insert_with<F>(&mut self, key: &K, f: F) -> Result<&V, ContentableError>
pub fn contains(&self, key: &K) -> Result<bool, ContentableError>
pub fn remove(&mut self, key: &K) -> Result<Option<V>, ContentableError>
pub fn clear(&mut self)
pub fn len(&self) -> usize
pub fn is_empty(&self) -> bool
pub fn metrics(&self) -> CacheMetrics
pub fn reset_metrics(&self)
}
These methods provide lookup, insertion, and cache metrics. Key-based operations return ContentableError when serialization fails. The store uses content IDs derived from Contentable keys.
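The deduplication and metrics behavior can be modeled with a small store keyed by a hash of the key's bytes. DefaultHasher again stands in for the real hasher, and the hit/miss counters mimic what metrics() reports.

```rust
// Dedup store sketch: equal key bytes map to one slot; gets track hits/misses.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

struct ContentStore<V> {
    slots: HashMap<u64, V>,
    hits: u64,
    misses: u64,
}

impl<V> ContentStore<V> {
    fn new() -> Self {
        ContentStore { slots: HashMap::new(), hits: 0, misses: 0 }
    }
    fn content_id(key: &[u8]) -> u64 {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        h.finish()
    }
    fn insert(&mut self, key: &[u8], value: V) -> Option<V> {
        self.slots.insert(Self::content_id(key), value)
    }
    fn get(&mut self, key: &[u8]) -> Option<&V> {
        let v = self.slots.get(&Self::content_id(key));
        if v.is_some() { self.hits += 1 } else { self.misses += 1 }
        v
    }
    fn len(&self) -> usize {
        self.slots.len()
    }
}

fn main() {
    let mut store = ContentStore::new();
    let _ = store.insert(b"projection:A", "local_a");
    let _ = store.insert(b"projection:A", "local_a_again"); // same content id
    assert_eq!(store.len(), 1); // deduplicated
    assert!(store.get(b"projection:A").is_some());
    assert!(store.get(b"projection:B").is_none());
    assert_eq!((store.hits, store.misses), (1, 1));
    println!("content store ok");
}
```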
Topology API
Location
#![allow(unused)]
fn main() {
pub enum Location {
Local,
Remote(TopologyEndpoint),
Colocated(RoleName),
}
}
Location specifies where a role executes. Local is in-process. Remote specifies a network endpoint. Colocated references another role.
TopologyConstraint
#![allow(unused)]
fn main() {
pub enum TopologyConstraint {
Colocated(RoleName, RoleName),
Separated(RoleName, RoleName),
Pinned(RoleName, Location),
Region(RoleName, Region),
}
}
Constraints on role placement. Colocated requires same node. Separated requires different nodes. Pinned fixes a location. Region specifies geographic constraints.
Topology
#![allow(unused)]
fn main() {
pub struct Topology {
mode: Option<TopologyMode>,
locations: BTreeMap<RoleName, Location>,
constraints: Vec<TopologyConstraint>,
}
}
Maps roles to locations with optional constraints.
Methods:
#![allow(unused)]
fn main() {
pub fn builder() -> TopologyBuilder
pub fn new() -> Topology
pub fn local_mode() -> Topology
pub fn with_role(self, role: RoleName, location: Location) -> Topology
pub fn with_constraint(self, constraint: TopologyConstraint) -> Topology
pub fn get_location(&self, role: &RoleName) -> Result<Location, TopologyError>
pub fn validate(&self, choreo_roles: &[RoleName]) -> TopologyValidation
pub fn load(path: impl AsRef<Path>) -> Result<ParsedTopology, TopologyLoadError>
pub fn parse(content: &str) -> Result<ParsedTopology, TopologyLoadError>
}
These methods create and validate topology data. get_location returns TopologyError when a role is not present. The load and parse helpers return ParsedTopology metadata.
TopologyBuilder
#![allow(unused)]
fn main() {
pub struct TopologyBuilder { ... }
}
Builder for constructing topologies.
Methods:
#![allow(unused)]
fn main() {
pub fn new() -> Self
pub fn mode(self, mode: TopologyMode) -> Self
pub fn local_role(self, role: RoleName) -> Self
pub fn remote_role(self, role: RoleName, endpoint: TopologyEndpoint) -> Self
pub fn colocated_role(self, role: RoleName, peer: RoleName) -> Self
pub fn role(self, role: RoleName, location: Location) -> Self
pub fn colocated(self, r1: RoleName, r2: RoleName) -> Self
pub fn separated(self, r1: RoleName, r2: RoleName) -> Self
pub fn pinned(self, role: RoleName, location: Location) -> Self
pub fn region(self, role: RoleName, region: Region) -> Self
pub fn build(self) -> Topology
}
These methods add locations and constraints to a topology builder. Call build to produce a Topology.
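The builder-then-validate flow looks roughly like the following sketch, with string role names, a two-variant Location, and a free validate helper standing in for the real Topology::validate.

```rust
// Toy topology builder: bind roles to locations, then check coverage.
use std::collections::BTreeMap;

#[derive(Debug, Clone, PartialEq)]
enum Location {
    Local,
    Remote(String),
}

#[derive(Default)]
struct TopologyBuilder {
    locations: BTreeMap<String, Location>,
}

impl TopologyBuilder {
    fn local_role(mut self, role: &str) -> Self {
        self.locations.insert(role.into(), Location::Local);
        self
    }
    fn remote_role(mut self, role: &str, endpoint: &str) -> Self {
        self.locations.insert(role.into(), Location::Remote(endpoint.into()));
        self
    }
    fn build(self) -> BTreeMap<String, Location> {
        self.locations
    }
}

// Report choreography roles that have no placement.
fn validate(topology: &BTreeMap<String, Location>, roles: &[&str]) -> Vec<String> {
    roles.iter().copied()
        .filter(|r| !topology.contains_key(*r))
        .map(String::from)
        .collect()
}

fn main() {
    let topo = TopologyBuilder::default()
        .local_role("Client")
        .remote_role("Server", "10.0.0.2:9000")
        .build();
    assert!(validate(&topo, &["Client", "Server"]).is_empty());
    assert_eq!(validate(&topo, &["Client", "Bank"]), vec!["Bank".to_string()]);
    println!("topology ok");
}
```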
TopologyHandler
#![allow(unused)]
fn main() {
pub struct TopologyHandler { ... }
}
Topology-aware handler for protocol execution.
Methods:
#![allow(unused)]
fn main() {
pub fn new(topology: Topology, role: RoleName) -> Self
pub fn from_parsed(parsed: ParsedTopology, role: RoleName) -> Self
pub async fn initialize(&self) -> TransportResult<()>
pub async fn send(&self, to_role: &RoleName, message: Vec<u8>) -> TransportResult<()>
pub async fn recv(&self, from_role: &RoleName) -> TransportResult<Vec<u8>>
pub fn get_location(&self, role: &RoleName) -> Result<Location, TopologyError>
pub fn is_connected(&self, role: &RoleName) -> Result<bool, TopologyError>
pub async fn close(&self) -> TransportResult<()>
}
These methods manage topology-aware transports. Call initialize before sending messages when eager setup is required.
TopologyHandlerBuilder
#![allow(unused)]
fn main() {
pub struct TopologyHandlerBuilder { ... }
}
Builder for TopologyHandler.
Methods:
#![allow(unused)]
fn main() {
pub fn new(topology: Topology) -> Self
pub fn with_role(self, role: RoleName) -> Self
pub fn build(self) -> Result<TopologyHandler, TopologyHandlerBuildError>
}
TopologyHandlerBuildError
#![allow(unused)]
fn main() {
pub enum TopologyHandlerBuildError {
MissingRole,
}
}
Returned when required builder configuration is absent.
TransportMessage
#![allow(unused)]
fn main() {
pub trait TransportMessage: Send + Sync + 'static {
fn to_bytes(&self) -> Vec<u8>;
fn from_bytes(bytes: &[u8]) -> Result<Self, TransportMessageError>
where
Self: Sized;
}
}
TransportMessageError
#![allow(unused)]
fn main() {
pub enum TransportMessageError {
Deserialization(String),
}
}
Resource Heap API
ResourceId
#![allow(unused)]
fn main() {
pub struct ResourceId { /* private fields */ }
}
Unique identifier for heap-allocated resources. Derived from content hash and allocation counter.
Methods:
#![allow(unused)]
fn main() {
pub fn new(hash: [u8; 32], counter: u64) -> Self
pub fn from_resource(resource: &Resource, counter: u64) -> Self
pub fn hash(&self) -> [u8; 32]
pub fn counter(&self) -> u64
pub fn to_short_hex(&self) -> String
}
Resource
#![allow(unused)]
fn main() {
pub enum Resource {
Channel(ChannelState),
Message(Message),
Session { role: String, type_hash: u64 },
Value { tag: String, data: Vec<u8> },
}
}
Resource kinds that can be allocated on the heap.
Heap
#![allow(unused)]
fn main() {
pub struct Heap {
resources: BTreeMap<ResourceId, Resource>,
nullifiers: BTreeSet<ResourceId>,
counter: u64,
}
}
Deterministic heap with nullifier-based consumption tracking.
Methods:
#![allow(unused)]
fn main() {
pub fn new() -> Self
pub fn size(&self) -> usize
pub fn nullified_count(&self) -> usize
pub fn contains(&self, rid: &ResourceId) -> bool
pub fn alloc(&self, resource: Resource) -> (ResourceId, Heap)
pub fn consume(&self, rid: &ResourceId) -> Result<Heap, HeapError>
pub fn read(&self, rid: &ResourceId) -> Result<&Resource, HeapError>
pub fn is_consumed(&self, rid: &ResourceId) -> bool
pub fn is_active(&self, rid: &ResourceId) -> bool
pub fn active_resources(&self) -> impl Iterator<Item = (&ResourceId, &Resource)>
pub fn consumed_ids(&self) -> impl Iterator<Item = &ResourceId>
pub fn alloc_session(&self, role: &str, type_hash: u64) -> (ResourceId, Heap)
}
These methods allocate and consume heap resources. The iterator helpers expose active and consumed IDs for commitment generation.
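The persistent, nullifier-based discipline can be sketched as follows: alloc and consume return new heaps rather than mutating, and consuming an already-nullified resource fails. IDs here are bare counters; the real ResourceId also folds in a content hash.

```rust
// Persistent heap sketch with nullifier-based consumption tracking.
use std::collections::{BTreeMap, BTreeSet};

#[derive(Clone, Default)]
struct Heap {
    resources: BTreeMap<u64, String>,
    nullifiers: BTreeSet<u64>,
    counter: u64,
}

impl Heap {
    fn alloc(&self, resource: &str) -> (u64, Heap) {
        let mut next = self.clone();
        let rid = next.counter;
        next.resources.insert(rid, resource.to_string());
        next.counter += 1;
        (rid, next)
    }
    fn consume(&self, rid: u64) -> Result<Heap, String> {
        if !self.resources.contains_key(&rid) {
            return Err("not found".into());
        }
        if self.nullifiers.contains(&rid) {
            return Err("already consumed".into());
        }
        let mut next = self.clone();
        next.nullifiers.insert(rid); // record consumption, keep the resource
        Ok(next)
    }
    fn is_active(&self, rid: u64) -> bool {
        self.resources.contains_key(&rid) && !self.nullifiers.contains(&rid)
    }
}

fn main() {
    let (rid, h1) = Heap::default().alloc("session:Client");
    assert!(h1.is_active(rid));
    let h2 = h1.consume(rid).unwrap();
    assert!(!h2.is_active(rid));
    assert!(h2.consume(rid).is_err()); // nullifier blocks double-consumption
    assert!(h1.is_active(rid)); // earlier heap snapshot is unchanged
    println!("heap ok");
}
```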
HeapError
#![allow(unused)]
fn main() {
pub enum HeapError {
NotFound(ResourceId),
AlreadyConsumed(ResourceId),
AlreadyExists(ResourceId),
TypeMismatch { expected: String, got: String },
Other(String),
}
}
Errors from heap operations. NotFound indicates missing resource. AlreadyConsumed indicates double-consumption attempt. TypeMismatch and Other capture additional failure cases.
Analysis API
analyze
#![allow(unused)]
fn main() {
pub fn analyze(choreography: &Choreography) -> AnalysisResult
}
Analyzes choreography for properties and potential issues. Returns warnings and communication graph.
AnalysisResult
#![allow(unused)]
fn main() {
pub struct AnalysisResult {
pub warnings: Vec<AnalysisWarning>,
pub communication_graph: CommunicationGraph,
pub participation: HashMap<Role, ParticipationInfo>,
}
}
Results of choreography analysis.
generate_dot_graph
#![allow(unused)]
fn main() {
pub fn generate_dot_graph(choreography: &Choreography) -> String
}
Generates GraphViz DOT representation of the choreography. Useful for visualization and documentation.
Crate Architecture
This document describes the crate architecture introduced in v0.7.0. The structure aligns with Lean verification and provides clear separation of concerns.
Crate Dependency Graph
graph TB
subgraph Foundation
types["rumpsteak-types<br/>Core type definitions"]
end
subgraph Theory
theory["rumpsteak-theory<br/>Algorithms: projection, merge, subtyping"]
end
subgraph Verification
lean["rumpsteak-lean-bridge<br/>Lean interop & validation"]
end
subgraph Application
choreo["rumpsteak-aura-choreography<br/>DSL, parsing & code generation"]
macros["rumpsteak-aura-macros<br/>Proc macros"]
end
subgraph Facade
main["rumpsteak-aura<br/>Main crate, re-exports"]
end
types --> theory
types --> lean
types --> choreo
theory --> choreo
choreo --> macros
theory --> main
choreo --> main
lean --> main
This diagram shows the dependency relationships between crates. Arrows indicate dependency direction. The rumpsteak-types crate serves as the foundation for all other crates.
Crate Descriptions
rumpsteak-types
This crate is located in rust/types/. It contains all core type definitions that match Lean exactly. It has no dependencies on other workspace crates.
The crate defines GlobalType for global protocol views. It defines LocalTypeR for local participant views. It also defines Label for message labels with payload sorts, PayloadSort for type classification, and Action for send and receive actions.
The crate also provides content addressing infrastructure. The ContentId type wraps a cryptographic hash. The Contentable trait defines canonical serialization. The Hasher trait abstracts hash algorithms.
Feature flags:
dag-cbor — Enables DAG-CBOR serialization for IPLD/IPFS compatibility. Adds to_cbor_bytes(), from_cbor_bytes(), and content_id_cbor_sha256() methods to Contentable types.
#![allow(unused)]
fn main() {
use rumpsteak_types::{GlobalType, LocalTypeR, Label, PayloadSort};
let g = GlobalType::comm(
"Client",
"Server",
vec![(Label::new("request"), GlobalType::End)],
);
let lt = LocalTypeR::send("Server", Label::new("request"), LocalTypeR::End);
}
The first expression creates a global type matching Lean’s GlobalType.comm "Client" "Server" [...] constructor. The second creates a local type matching Lean’s LocalTypeR.send "Server" [...] constructor.
rumpsteak-theory
This crate is located in rust/theory/. It implements pure algorithms for session type operations. The crate performs no IO or parsing.
The projection module handles GlobalType to LocalTypeR projection with merging. The merge module implements branch merging with distinct semantics for send and receive. Send merge requires identical label sets while receive merge unions labels. This matches Lean’s mergeSendSorted and mergeRecvSorted functions.
The subtyping/sync module provides synchronous subtyping. The subtyping/async module provides asynchronous subtyping via SISO decomposition. The well_formedness module contains validation predicates. The duality module computes dual types. The bounded module implements bounded recursion strategies.
The content_id module provides content addressing for all types. Projection results are memoized by content ID for performance. See Content Addressing for details.
#![allow(unused)]
fn main() {
    use rumpsteak_theory::{project, merge, sync_subtype, async_subtype};
    use rumpsteak_types::{GlobalType, Label};

    let global = GlobalType::comm("A", "B", vec![(Label::new("msg"), GlobalType::End)]);
    let local_a = project(&global, "A").expect("A is projectable");
    let local_b = project(&global, "B").expect("B is projectable");
    assert!(sync_subtype(&local_a, &local_a)); // subtyping is reflexive
}
The project function computes the local type for a given role. The sync_subtype function checks synchronous subtyping between local types.
rumpsteak-lean-bridge
This crate is located in rust/lean-bridge/. It provides bidirectional conversion between Rust types and Lean-compatible JSON. See Lean-Rust Bridge for detailed documentation.
The export module converts Rust types to JSON for Lean. The import module converts Lean JSON back to Rust types. The validate module provides cross-validation between Rust and Lean.
The lean-bridge CLI tool is available with the cli feature.
lean-bridge sample --sample ping-pong --pretty
lean-bridge validate --input protocol.json --type global
lean-bridge import --input protocol.json
These commands generate samples, validate round-trips, and import JSON respectively.
rumpsteak-aura-choreography
This crate is located in rust/choreography/. It provides DSL and parsing for choreographic programming. It depends on rumpsteak-types and rumpsteak-theory.
The ast/ directory contains extended AST types including Protocol, LocalType, and Role. The compiler/parser module handles DSL parsing. The compiler/projection module handles choreography to LocalType projection. The compiler/codegen module handles Rust code generation. The effects/ directory contains the effect system and handlers. The extensions/ directory contains the DSL extension system. The runtime/ directory contains platform abstraction.
The topology/ directory provides deployment configuration. See Topology for the separation between protocol logic and deployment. The heap/ directory provides explicit resource management. See Resource Heap for nullifier-based consumption tracking.
rumpsteak-aura
This crate is located in rust/src/. It is the main facade crate that re-exports from all other crates.
The crate supports several feature flags. The theory feature includes rumpsteak-theory algorithms. The full feature enables all optional features. See Getting Started for the complete feature flag reference.
#![allow(unused)]
fn main() {
    use rumpsteak_aura::prelude::*;
}
The prelude provides access to types, theory algorithms, and other commonly used items.
Data Flow
 DSL Text                Choreography AST                 GlobalType
"A -> B"  ---Parser--->       (DSL)       ---Lower---> (rumpsteak-types)
                                                             |
                                                             | project()
                                                             v
                                                        LocalTypeR
                                                     (rumpsteak-types)
                                                             |
                                     +-----------------------+-----------------------+
                                     |                                               |
                                     v                                               v
                              Session Types                                        JSON
                            & Effect Programs                                     (Lean)
                             (choreography)                                   (lean-bridge)
                                     |                                               |
                                     v                                               v
                                 Runtime                                           Lean
                                Execution                                       Validation
The flow begins with DSL text parsed into a choreography AST. The AST is lowered to GlobalType. Projection computes LocalTypeR for each role. From local types, code generation produces session types, effect programs, or JSON for Lean.
Lean Correspondence
The types crate mirrors Lean definitions exactly. The following table shows the correspondence.
| Lean Type | Rust Type | File |
|---|---|---|
| GlobalType.end | GlobalType::End | rust/types/src/global.rs |
| GlobalType.comm p q bs | GlobalType::Comm { sender, receiver, branches } | rust/types/src/global.rs |
| GlobalType.rec t G | GlobalType::Mu { var, body } | rust/types/src/global.rs |
| GlobalType.var t | GlobalType::Var(String) | rust/types/src/global.rs |
| LocalTypeR.end | LocalTypeR::End | rust/types/src/local.rs |
| LocalTypeR.send q bs | LocalTypeR::Send { partner, branches } | rust/types/src/local.rs |
| LocalTypeR.recv p bs | LocalTypeR::Recv { partner, branches } | rust/types/src/local.rs |
| LocalTypeR.rec t T | LocalTypeR::Mu { var, body } | rust/types/src/local.rs |
| LocalTypeR.var t | LocalTypeR::Var(String) | rust/types/src/local.rs |
| PayloadSort.unit | PayloadSort::Unit | rust/types/src/global.rs |
| Label | Label { name, sort } | rust/types/src/global.rs |
The Rust variant names follow the Lean constructor names, with one naming difference: Lean's rec constructor appears as the conventional Mu variant in Rust. Field names are consistent across both implementations.
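For orientation, the Lean side of the table looks roughly like the following inductive definitions. This is a sketch reconstructed from the table, not the verbatim Lean sources; the Label structure is abbreviated.

```lean
structure Label where
  name : String
  -- payload sort field omitted in this sketch

inductive GlobalType where
  | «end» : GlobalType
  | comm  : String → String → List (Label × GlobalType) → GlobalType
  | rec   : String → GlobalType → GlobalType
  | var   : String → GlobalType

inductive LocalTypeR where
  | «end» : LocalTypeR
  | send  : String → List (Label × LocalTypeR) → LocalTypeR
  | recv  : String → List (Label × LocalTypeR) → LocalTypeR
  | rec   : String → LocalTypeR → LocalTypeR
  | var   : String → LocalTypeR
```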
Extension Points
The architecture supports extension at several points.
Adding New Code Generation Targets
Implement generators in rust/choreography/src/compiler/codegen/ that work with the choreography AST or LocalType.
#![allow(unused)]
fn main() {
    pub fn generate_my_target(lt: &LocalType) -> MyOutput {
        // implementation
    }
}
The function receives a local type and produces output in the target format. Follow existing generators as examples.
Custom Projection Strategies
Extend projection in rust/theory/src/projection.rs.
#![allow(unused)]
fn main() {
    pub fn project_with_strategy(
        global: &GlobalType,
        role: &str,
        strategy: ProjectionStrategy,
    ) -> Result<LocalTypeR, ProjectionError> {
        // implementation
    }
}
The strategy parameter controls projection behavior. This enables alternative projection algorithms.
Lean Integration Extensions
Extend the lean-bridge for custom validation.
#![allow(unused)]
fn main() {
    impl Validator {
        pub fn my_custom_validation(
            &self,
            rust_type: &LocalTypeR,
            lean_result: &Value,
        ) -> ValidationResult {
            // implementation
        }
    }
}
Custom validation methods can implement domain-specific comparison rules. See Lean-Rust Bridge for more details.