Distributed Systems Contract

This contract specifies Aura's distributed systems model. It defines the safety, liveness, and consistency guarantees provided by the architecture. It also documents the synchrony assumptions, latency expectations, and adversarial capabilities the system tolerates. This contract complements Privacy and Information Flow Contract, which focuses on metadata and privacy budgets. Together these contracts define the full set of invariants protocol authors must respect.

Formal verification of these properties uses Quint model checking (verification/quint/) and Lean 4 theorem proofs (verification/lean/). See Verification Coverage Report for current status.

1. Scope

The contract applies to the following aspects of the system.

Effect handlers and protocols operate within the 8-layer architecture described in Aura System Architecture. Journals and reducers are covered by this contract. The journal specification appears in Authority and Identity and Journal. Aura Consensus is documented in Consensus.

Relational contexts and rendezvous flows fall under this contract. Relational contexts are specified in Relational Contexts. Transport semantics appear in Transport and Information Flow. Rendezvous flows are detailed in Rendezvous Architecture. Shared notation appears in Theoretical Model.

1.1 Terminology Alignment

This contract uses shared terminology from Theoretical Model.

  • Consensus role terms: witness for consensus attestation, signer for FROST share holders
  • Social-role terms: Member, Participant, Moderator
  • Access terms: AccessLevel (Full, Partial, Limited)
  • Topology terms: 1-hop and n-hop paths

1.2 Assumptions

  • Cryptographic primitives remain secure at configured parameters.
  • Partial synchrony eventually holds after GST, with bounded Δ_net.
  • Honest participants execute the guard chain in the required order.
  • Anti-entropy exchange eventually delivers missing facts to connected peers.

1.3 Non-goals

  • This contract does not provide global linearizability across all operations.
  • This contract does not guarantee progress during permanent partitions.
  • This contract does not guarantee metadata secrecy without privacy controls defined in Privacy and Information Flow Contract.

2. Safety Guarantees

2.1 Journal CRDT Properties

Facts merge via set union and never retract. Journals satisfy the semilattice laws:

  • Commutativity: merge(j1, j2) ≡ merge(j2, j1)
  • Associativity: merge(merge(j1, j2), j3) ≡ merge(j1, merge(j2, j3))
  • Idempotence: merge(j, j) ≡ j

Reduction is deterministic. Identical fact sets produce identical states. No two facts share the same nonce within a namespace (InvariantNonceUnique).

Verified by: Aura.Proofs.Journal, journal/core.qnt
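
The semilattice laws above reduce to set-union properties. A minimal sketch, assuming a fact can be modeled as a plain tuple keyed by (namespace, nonce) — the real fact structure is richer:

```rust
use std::collections::BTreeSet;

// Hypothetical fact shape: (namespace, nonce, payload hash).
// Real Aura facts carry more structure; this sketch only models merge.
type Fact = (u32, u64, u64);

// Grow-only merge: set union never retracts facts.
fn merge(a: &BTreeSet<Fact>, b: &BTreeSet<Fact>) -> BTreeSet<Fact> {
    a.union(b).cloned().collect()
}

fn main() {
    let j1: BTreeSet<Fact> = [(0, 1, 10), (0, 2, 20)].into_iter().collect();
    let j2: BTreeSet<Fact> = [(0, 2, 20), (0, 3, 30)].into_iter().collect();
    let j3: BTreeSet<Fact> = [(0, 4, 40)].into_iter().collect();

    assert_eq!(merge(&j1, &j2), merge(&j2, &j1)); // commutativity
    assert_eq!(merge(&merge(&j1, &j2), &j3), merge(&j1, &merge(&j2, &j3))); // associativity
    assert_eq!(merge(&j1, &j1), j1); // idempotence
}
```

Because merge is a pure set union, replicas can apply incoming fact sets in any order and any number of times and still converge.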

2.2 Charge-Before-Send

Every transport observable is preceded by CapGuard, FlowGuard, and JournalCoupler. See Runtime and Authorization. No packet is emitted without a successful charge. Guard evaluation is pure over a prepared snapshot and yields commands that an interpreter executes, so the chain never blocks on embedded I/O.

Flow budgets satisfy monotonicity: charging never increases available budget (monotonic_decrease). Charging the exact remaining amount results in zero budget (exact_charge).

Verified by: Aura.Proofs.FlowBudget, authorization.qnt, transport.qnt
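
A minimal sketch of the charge semantics, assuming a simplified budget of a fixed limit and a monotone spent counter (the real FlowGuard state carries more structure):

```rust
// Hypothetical flow budget: `limit` is fixed per epoch, `spent` only grows.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Budget { limit: u64, spent: u64 }

impl Budget {
    fn available(&self) -> u64 { self.limit - self.spent }

    // Charge fails closed: either the full cost is deducted or nothing is.
    fn charge(&mut self, cost: u64) -> Result<(), &'static str> {
        if cost > self.available() {
            return Err("InsufficientBudget");
        }
        self.spent += cost; // monotonic_decrease: available() never grows
        Ok(())
    }
}

fn main() {
    let mut b = Budget { limit: 100, spent: 0 };
    assert!(b.charge(40).is_ok());
    assert_eq!(b.available(), 60);
    assert!(b.charge(60).is_ok()); // exact_charge: remaining goes to zero
    assert_eq!(b.available(), 0);
    assert!(b.charge(1).is_err()); // no packet may be emitted without a charge
}
```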

2.3 Consensus Agreement

For any pair (cid, prestate_hash) there is at most one commit fact (InvariantUniqueCommitPerInstance). Fallback gossip plus FROST signatures prevent divergent commits. Byzantine witnesses cannot force multiple commits for the same instance.

Commits require threshold participation (InvariantCommitRequiresThreshold). Equivocating witnesses are excluded from threshold calculations (InvariantEquivocatorsExcluded).

Verified by: Aura.Proofs.Consensus.Agreement, consensus/core.qnt

2.3.1 Runtime Byzantine Admission

Before consensus-backed ceremonies execute, Aura validates runtime theorem-pack capability profiles and rejects missing requirements at admission time. This prevents silent downgrade of BFT assumptions.

Capability keys are mapped to threat-model assumptions as follows:

  • byzantine_envelope: Byzantine threshold/fault model assumptions (f < n/3), authenticated participants, evidence validity, conflict exclusion.
  • termination_bounded: bounded progress/termination assumptions for long-running protocol lanes.
  • mixed_determinism: declared native/wasm determinism envelope and conformance constraints.
  • reconfiguration: coherent delegation/link assumptions for live topology changes.

Successful admissions are recorded as ByzantineSafetyAttestation payloads attached to consensus outputs (CommitFact, DKG transcript commits). Admission mismatches fail closed and surface redacted capability references for debugging.

2.3.2 Quantitative Termination Bounds

Category C choreography executions are bounded by deterministic step budgets derived from Telltale weighted measures:

  • W = 2 * sum(depth(local_type)) + sum(buffer_sizes)
  • max_steps = ceil(k_sigma * W * budget_multiplier)

Aura enforces this budget per execution and fails with BoundExceeded when the bound is crossed. This removes wall-clock coupling from safety/liveness enforcement and keeps bound checks replay-deterministic across native and wasm conformance lanes.
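
The bound computation can be sketched as follows; `k_sigma` and `budget_multiplier` are deployment-chosen constants, and the per-role input shapes are assumptions:

```rust
// Sketch of the Telltale-style step budget from §2.3.2.
// `depths` holds depth(local_type) per role; `buffer_sizes` per channel.
fn max_steps(depths: &[u64], buffer_sizes: &[u64], k_sigma: f64, budget_multiplier: f64) -> u64 {
    // W = 2 * sum(depth(local_type)) + sum(buffer_sizes)
    let w = 2 * depths.iter().sum::<u64>() + buffer_sizes.iter().sum::<u64>();
    // max_steps = ceil(k_sigma * W * budget_multiplier)
    (k_sigma * w as f64 * budget_multiplier).ceil() as u64
}

fn main() {
    // Two roles with local-type depths 3 and 4, buffers of size 2 and 2:
    // W = 2*(3+4) + (2+2) = 18; with k_sigma = 1.5 and multiplier = 2.0,
    // max_steps = ceil(1.5 * 18 * 2.0) = 54.
    assert_eq!(max_steps(&[3, 4], &[2, 2], 1.5, 2.0), 54);
}
```

Because the budget depends only on declared protocol structure, not wall-clock time, the same execution replays to the same bound check on native and wasm lanes.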

2.3.3 Threshold Parameterization

Consensus parameterization uses n participants, threshold t, and Byzantine budget f. Threshold signatures require t distinct valid shares. Safety requires f < t. Byzantine quorum-style assumptions additionally require f < n/3. Profiles must declare which bound is active for a given ceremony.
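
An admission-time check for these bounds might look like the following sketch; the profile schema and names are illustrative assumptions, not the real API:

```rust
// Which bound the ceremony profile declares active (§2.3.3).
#[derive(Debug, PartialEq)]
enum Bound { FaultBelowThreshold, ByzantineQuorum }

fn admit(n: u32, t: u32, f: u32, bound: Bound) -> Result<(), &'static str> {
    if !(1 <= t && t <= n) {
        return Err("threshold out of range"); // 1 ≤ t ≤ n
    }
    if f >= t {
        return Err("safety requires f < t");
    }
    // Byzantine quorum-style profiles additionally require f < n/3.
    if bound == Bound::ByzantineQuorum && 3 * f >= n {
        return Err("byzantine_envelope requires f < n/3");
    }
    Ok(())
}

fn main() {
    assert!(admit(7, 5, 2, Bound::ByzantineQuorum).is_ok());  // 2 < 7/3
    assert!(admit(6, 4, 2, Bound::ByzantineQuorum).is_err()); // 2 >= 6/3
    assert!(admit(6, 4, 2, Bound::FaultBelowThreshold).is_ok());
}
```

Failing closed at admission, as in §2.3.1, means a misdeclared profile is rejected before any ceremony runs rather than silently weakening the fault model.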

2.3.4 Runtime Parity Guarantees and Limits

Aura uses telltale parity lanes to compare runtime artifacts under declared conformance envelopes. These checks provide implementation conformance signals. They do not replace domain theorem claims from Quint and Lean.

Guarantees:

  • parity lanes use canonical surfaces (observable, scheduler_step, effect)
  • mismatch reports include first mismatch location for deterministic triage
  • strict and envelope-bounded classifications are explicit in report artifacts

Limits:

  • parity checks validate observed runtime behavior, not full state-space reachability
  • parity checks depend on scenario and seed coverage
  • parity success does not imply new consensus or CRDT theorem coverage

2.4 Evidence CRDT

The evidence system tracks votes and equivocations as a grow-only CRDT:

  • Monotonicity: Votes and equivocator sets only grow under merge
  • Commit preservation: merge preserves existing commit facts
  • Semilattice laws: Evidence merge is commutative, associative, and idempotent

Verified by: Aura.Proofs.Consensus.Evidence, consensus/core.qnt

2.5 Equivocation Detection

The system detects witnesses who vote for conflicting results:

  • Soundness: Detection only reports actual equivocation (no false positives)
  • Completeness: All equivocations are detectable given sufficient evidence
  • Honest safety: Honest witnesses are never falsely accused

Types like HasEquivocated and HasEquivocatedInSet exclude conflicting shares from consensus. See Consensus.

Verified by: Aura.Proofs.Consensus.Equivocation, consensus/adversary.qnt

2.6 FROST Threshold Signatures

Threshold signatures satisfy binding and consistency properties:

  • Share binding: Shares are cryptographically bound to (consensus_id, result_id, prestate_hash)
  • Threshold requirement: Aggregation requires at least k shares from distinct signers
  • Session consistency: All shares in an aggregation have the same session
  • Determinism: Same shares always produce the same signature

Verified by: Aura.Proofs.Consensus.Frost, consensus/frost.qnt
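
The non-cryptographic preconditions (distinct signers, single session) can be sketched as follows; the `Share` fields are illustrative, and actual share verification and aggregation are cryptographic operations not modeled here:

```rust
use std::collections::BTreeSet;

// Illustrative share metadata; real FROST shares also carry the binding
// to (consensus_id, result_id, prestate_hash) and the share value itself.
#[derive(Clone)]
struct Share { signer: u32, session: u64 }

// Aggregation preconditions: at least k distinct signers, one session.
fn may_aggregate(shares: &[Share], k: usize) -> bool {
    let signers: BTreeSet<u32> = shares.iter().map(|s| s.signer).collect();
    let sessions: BTreeSet<u64> = shares.iter().map(|s| s.session).collect();
    signers.len() >= k && sessions.len() == 1
}

fn main() {
    let s = |signer, session| Share { signer, session };
    assert!(may_aggregate(&[s(1, 9), s(2, 9), s(3, 9)], 3));
    assert!(!may_aggregate(&[s(1, 9), s(1, 9), s(2, 9)], 3)); // duplicate signer
    assert!(!may_aggregate(&[s(1, 9), s(2, 9), s(3, 8)], 3)); // mixed sessions
}
```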

2.7 Context Isolation

Messages scoped to ContextId never leak into other contexts. Contexts may be explicitly bridged through typed protocols only. See Theoretical Model. Each authority maintains separate journals per context to enforce this isolation.

Verified by: transport.qnt (InvariantContextIsolation)

2.8 Transport Layer

Beyond context isolation, transport satisfies:

  • Flow budget non-negativity: Spent never exceeds limit (InvariantFlowBudgetNonNegative)
  • Sequence monotonicity: Message sequence numbers strictly increase (InvariantSequenceMonotonic)
  • Fact backing: Every sent message has a corresponding journal fact (InvariantSentMessagesHaveFacts)

Verified by: transport.qnt

2.9 Deterministic Reduction Order

Commitment tree operations resolve conflicts using the stable ordering described in Authority and Identity. This ordering is derived from the cryptographic identifiers and facts stored in the journal. Conflicts are always resolved in the same way across all replicas.
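
Because the order is derived from stable identifiers rather than arrival order, every replica sorts conflicting operations identically. A sketch, using lexicographic byte order over illustrative 4-byte identifiers:

```rust
// Sketch of §2.9: conflicts resolved by a stable order derived from
// cryptographic identifiers. Here the identifier is a plain byte array;
// the real ordering is over identifiers and facts stored in the journal.
fn resolve_order(mut ops: Vec<[u8; 4]>) -> Vec<[u8; 4]> {
    ops.sort(); // lexicographic byte order: identical on every replica
    ops
}

fn main() {
    let replica_a = vec![[3, 0, 0, 0], [1, 0, 0, 0], [2, 0, 0, 0]];
    let replica_b = vec![[2, 0, 0, 0], [3, 0, 0, 0], [1, 0, 0, 0]];
    // Different arrival orders, identical resolution.
    assert_eq!(resolve_order(replica_a), resolve_order(replica_b));
}
```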

2.10 Receipt Chain

Multi-hop forwarding requires signed receipts. Downstream peers reject messages lacking a chain rooted in their relational context. See Transport and Information Flow. This prevents unauthorized message propagation.

3. Protocol-Specific Guarantees

3.1 DKG and Resharing

Distributed key generation and resharing satisfy:

  • Threshold bounds: 1 ≤ t ≤ n where t is threshold and n is participant count
  • Phase consistency: Commitment counts match protocol phase
  • Share timing: Shares distributed only after commitment verification

Verified by: keys/dkg.qnt, keys/resharing.qnt

3.2 Invitation Flows

Invitation lifecycle satisfies authorization invariants:

  • Sender authority: Only sender can cancel an invitation
  • Receiver authority: Only receiver can accept or decline
  • Single resolution: No invitation resolved twice
  • Terminal immutability: Terminal status (accepted/declined/cancelled/expired) is permanent
  • Fact backing: Accepted invitations have corresponding journal facts
  • Ceremony gating: Ceremonies only initiated for accepted invitations

Verified by: invitation.qnt
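
These invariants describe a small state machine. A sketch, with actor authority reduced to an enum (names are illustrative):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Status { Pending, Accepted, Declined, Cancelled, Expired }

#[derive(Debug, Clone, Copy, PartialEq)]
enum Actor { Sender, Receiver, Clock }

// One transition step; terminal statuses reject all further transitions.
fn step(status: Status, actor: Actor, next: Status) -> Result<Status, &'static str> {
    if status != Status::Pending {
        return Err("terminal status is permanent"); // single resolution
    }
    match (actor, next) {
        (Actor::Sender, Status::Cancelled)          // sender authority
        | (Actor::Receiver, Status::Accepted)       // receiver authority
        | (Actor::Receiver, Status::Declined)
        | (Actor::Clock, Status::Expired) => Ok(next),
        _ => Err("actor lacks authority for this transition"),
    }
}

fn main() {
    assert_eq!(step(Status::Pending, Actor::Receiver, Status::Accepted), Ok(Status::Accepted));
    assert!(step(Status::Accepted, Actor::Sender, Status::Cancelled).is_err()); // terminal
    assert!(step(Status::Pending, Actor::Sender, Status::Accepted).is_err());   // wrong actor
}
```

Fact backing and ceremony gating are not modeled here; they tie accepted transitions to journal facts as described above.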

3.3 Epoch Validity

Epochs enforce temporal boundaries:

  • Receipt validity window: Receipts only valid within their epoch
  • Replay prevention: Old epoch receipts cannot be replayed in new epochs

Verified by: epochs.qnt

3.4 Cross-Protocol Safety

Concurrent protocol execution (e.g., Recovery∥Consensus) satisfies:

  • No deadlock: Interleaved execution always makes progress
  • Revocation enforcement: Revoked devices excluded from all protocols

Verified by: interaction.qnt

4. Liveness Guarantees

4.1 Fast-Path Consensus

Fast-path consensus completes in 2×Δ_net (two message delays) when all witnesses are online. Responses are gathered synchronously before committing.

4.2 Fallback Consensus

Fallback consensus eventually completes under partial synchrony with bounded message delays. The fallback timeout is T_fallback = 2×Δ_net in formal verification (implementations may use up to 3×Δ_net for margin). Gossip ensures progress if a majority of witnesses re-transmit proposals. See Consensus for timeout configuration.

Verified by: Aura.Proofs.Consensus.Liveness, consensus/liveness.qnt

4.3 Anti-Entropy

Journals converge under eventual delivery. Periodic syncs or reorder-resistant CRDT merges reconcile fact sets even after partitions. Vector clocks accurately track causal dependencies (InvariantVectorClockConsistent). Authorities exchange their complete fact journals with neighbors to ensure diverged state is healed.

Verified by: journal/anti_entropy.qnt

4.4 Rendezvous

Offer and answer envelopes flood gossip neighborhoods. Secure channels can be established as long as at least one bidirectional path remains between parties. See Rendezvous Architecture.

4.5 Flow Budgets

Flow-budget progress is conditional. If derived limit(ctx, peer) > 0 in a future epoch and epoch updates converge, FlowGuard eventually grants headroom. Budget exhaustion remains temporary only under these assumptions.

Liveness requires that each authority eventually receives messages from its immediate neighbors. This is the eventual delivery assumption. Liveness also requires that clocks do not drift unboundedly. This is necessary for epoch rotation and receipt expiry.

5. Time System

Aura uses a unified TimeStamp with domain-specific comparison:

  • Reflexivity: compare(policy, t, t) = eq
  • Transitivity: compare(policy, a, b) = lt ∧ compare(policy, b, c) = lt → compare(policy, a, c) = lt
  • Privacy: Physical time hidden when ignorePhysical = true

Time variants include PhysicalClock (wall time), LogicalClock (vector/Lamport), OrderClock (opaque ordering tokens), and Range (validity windows).
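
A sketch of policy-driven comparison covering two of these variants; the type and field names are assumptions:

```rust
use std::cmp::Ordering;

// Two of the document's variants; OrderClock and Range are omitted.
enum TimeStamp { Physical(u64), Logical(u64) }

struct Policy { ignore_physical: bool }

// Returns None when the policy hides the comparison or domains differ.
fn compare(policy: &Policy, a: &TimeStamp, b: &TimeStamp) -> Option<Ordering> {
    match (a, b) {
        (TimeStamp::Physical(x), TimeStamp::Physical(y)) => {
            if policy.ignore_physical { None } else { Some(x.cmp(y)) } // privacy
        }
        (TimeStamp::Logical(x), TimeStamp::Logical(y)) => Some(x.cmp(y)),
        _ => None, // cross-domain comparison is undefined in this sketch
    }
}

fn main() {
    let open = Policy { ignore_physical: false };
    let private = Policy { ignore_physical: true };
    // Reflexivity under an open policy:
    assert_eq!(compare(&open, &TimeStamp::Physical(1), &TimeStamp::Physical(1)), Some(Ordering::Equal));
    // Physical time hidden when ignore_physical is set:
    assert_eq!(compare(&private, &TimeStamp::Physical(1), &TimeStamp::Physical(2)), None);
    assert_eq!(compare(&open, &TimeStamp::Logical(1), &TimeStamp::Logical(2)), Some(Ordering::Less));
}
```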

Verified by: Aura.Proofs.TimeSystem

6. Synchrony and Timing Model

Aura assumes partial synchrony. There exists a bound Δ_net on message delay and processing time once the network stabilizes (after the Global Stabilization Time, GST). The bound may be unknown before stabilization occurs.

| Parameter | Value | Notes |
|---|---|---|
| T_fallback | 2×Δ_net (verification) / 2-3×Δ_net (production) | Fallback consensus timeout |
| Gossip interval | 250-500 ms | Periodic message exchange |
| GST model | Unknown until stabilization | Partial synchrony assumption |

Before stabilization, timeouts may be conservative. Handlers must tolerate jitter but assume eventual delivery of periodic messages.

Epoch rotation relies on loosely synchronized clocks. The journal serves as the source of truth. Authorities treat an epoch change as effective once they observe it in the journal. This design avoids hard synchronization requirements.

7. Latency Expectations

| Operation | Typical Bound (Δ_net assumptions) |
|---|---|
| Threshold tree update (fast path) | ≤ 2×Δ_net (two message delays) |
| Fallback consensus | ≤ T_fallback after GST |
| Rendezvous offer propagation | O(log N) gossip hops |
| FlowGuard charge | Single local transaction (< 10 ms) |
| Anti-entropy reconciliation | k × gossip period (k depends on fanout) |

8. Adversarial Model

8.1 Network Adversary

A network adversary controls a subset of transport links. It can delay or drop packets but cannot break cryptography. It cannot forge receipts without FlowGuard grants. Receipts are protected by signatures and epoch binding.

The network adversary may compromise up to f authorities per consensus instance within the active threshold profile. Safety requires the bounds declared in §2.3.3 (f < t, and f < n/3 when that profile is selected).

8.2 Byzantine Witness

A Byzantine witness may equivocate during consensus. The system detects equivocation via the evidence CRDT (see §2.5). Equivocators are excluded from threshold calculations.

A Byzantine witness cannot cause multiple commits. Threshold signature verification rejects tampered results; the t-of-n threshold requirement prevents this attack. An honest majority can always commit (InvariantHonestMajorityCanCommit).

Verified by: Aura.Proofs.Consensus.Adversary, consensus/adversary.qnt

8.3 Malicious Relay

A malicious relay may drop or delay envelopes. It cannot forge flow receipts because receipts require cryptographic signatures. It cannot read payloads because of context-specific encryption.

Downstream peers detect misbehavior via missing receipts or inconsistent budget charges. The transport layer detects relay failures automatically.

8.4 Device Compromise

A compromised device reveals its share and journal copy. It cannot reconstitute the account without meeting the branch policy. Recovery relies on relational contexts as described in Relational Contexts.

Device compromise is recoverable because the threshold prevents a single device from acting unilaterally. Guardians can revoke the compromised device and issue a new one. Compromised nonces are excluded from future consensus (InvariantCompromisedNoncesExcluded).

9. Consistency Model

Journals eventually converge after replicas exchange all facts. This is eventual consistency. Authorities that have seen the same fact set arrive at identical states.

Operations guarded by Aura Consensus achieve operation-scoped agreement. Once a commit fact is accepted, all honest replicas agree on that operation result. See Consensus. This does not define a single global linearizable log for all operations.

Causal delivery is not enforced at the transport layer. Choreographies enforce ordering via session types instead. See Multi-party Session Types and Choreography.

Each authority's view of its own journal is monotone. Once it observes a fact locally, it will never un-see it. This is monotonic read-after-write consistency.

10. Failure Handling

Timeouts trigger fallback consensus. See Consensus for T_fallback guidelines. Fallback consensus allows the system to make progress during temporary network instability.

Partition recovery relies on anti-entropy. Authorities merge fact sets when connectivity returns. The journal is the single source of truth for state.

Budget exhaustion causes local blocking. Protocols must implement backoff or wait for epoch rotation. Budgets are described in Privacy and Information Flow Contract.

Guard-chain failures return local errors. These errors include AuthorizationDenied, InsufficientBudget, and JournalCommitFailed. Protocol authors must handle these errors without leaking information. Proper error handling is critical for security.

10.1 Error-Channel Privacy Requirements

  • Runtime errors must use bounded enums and redacted payloads.
  • Error paths must not include plaintext message content, raw capability tokens, or cross-context identifiers.
  • Remote peers may observe protocol-level status outcomes only, not internal guard-stage diagnostics.
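
These requirements suggest error types along the following lines; the redaction scheme (an opaque reference id) is an assumption:

```rust
use std::fmt;

// Bounded enum mirroring the guard-chain errors named in §10.
// Payloads carry only an opaque reference for local correlation.
#[derive(Debug)]
enum GuardError {
    AuthorizationDenied { redacted_ref: u64 },
    InsufficientBudget { redacted_ref: u64 },
    JournalCommitFailed { redacted_ref: u64 },
}

// Remote peers see only a protocol-level status, never guard-stage detail.
impl fmt::Display for GuardError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "request rejected") // no plaintext, tokens, or identifiers
    }
}

fn main() {
    let e = GuardError::InsufficientBudget { redacted_ref: 0x5eed };
    // Local logs may use the debug form with the opaque reference...
    assert!(format!("{e:?}").contains("InsufficientBudget"));
    // ...but the wire-visible rendering is a bounded, uninformative status.
    assert_eq!(e.to_string(), "request rejected");
}
```

Separating the local (Debug) and remote (Display) renderings keeps guard-stage diagnostics available for operators without leaking them across contexts.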

11. Deployment Guidance

Configure witness sets using the parameter bounds declared in §2.3.3 for the active ceremony profile.

Tune gossip fanout and timeout parameters based on observed round-trip times and network topology. Conservative parameters ensure liveness under poor conditions.

Monitor receipt acceptance rates, consensus backlog, and budget utilization. See Distributed Maintenance Architecture for monitoring guidance. Early detection of synchrony violations prevents cascading failures.

12. Verification Coverage

This contract's guarantees are formally verified. Canonical invariant names are indexed in Project Structure, and canonical proof and coverage status are indexed in Verification Coverage Report.

| Layer | Tool | Location |
|---|---|---|
| Model checking | Quint + Apalache | verification/quint/ |
| Theorem proofs | Lean 4 | verification/lean/ |
| Differential testing | Rust + Lean oracle | crates/aura-testkit/ |
| ITF conformance | Quint traces | verification/quint/traces/ |

See Verification Coverage Report for metrics and Aura Formal Verification for the Quint-Lean correspondence map.

13. References

Aura System Architecture describes runtime layering.

Authorization describes guard chain ordering.

Theoretical Model covers the formal calculus and semilattice laws.

Authority and Identity documents reduction ordering.

Journal and Distributed Maintenance Architecture cover fact storage and convergence.

Relational Contexts documents cross-authority state.

Consensus describes fast path and fallback consensus.

Transport and Information Flow documents transport semantics.

Authorization covers CapGuard and FlowGuard sequencing.

Verification Coverage Report tracks formal verification status.