Introduction
Jacquard is a deterministic routing system for ad hoc shaped networks. It provides a stable routing abstraction and seven in-tree routing engines: pathway for explicit-path routing, field for corridor-envelope routing over a continuously updated field model, batman-bellman for Bellman-Ford-enhanced next-hop routing, batman-classic for spec-faithful BATMAN IV next-hop routing, babel for RFC 8966 distance-vector routing with bidirectional ETX and feasibility distances, olsrv2 for OLSRv2 link-state routing, and scatter for bounded deferred-delivery diffusion routing. It is designed so a host can add external routing engines through the same contract.
See Core Types for the model objects, pipeline, observation, and world-extension surfaces that carry the system. See Time Model for the deterministic time rules. See Routing Engines for the engine contract, host runtime-effect boundary, and links to the in-tree engine pages. See Router Control Plane for how a route moves from objective through materialization, maintenance, and teardown. See Crate Architecture for separation of concerns and implementation policies.
Scope
Jacquard owns the shared routing contract and seven in-tree routing engines. The router control plane, runtime adapters, and simulation harness are implemented crates. Protection-versus-connectivity policy may be supplied by a host, but Jacquard itself stays routing-engine-neutral at the contract layer.
The central split is between shared facts and local runtime state. Service descriptors, topology observations, admission checks, and route witnesses are explicit shared objects. Adaptive policy, selected routing actions, installed-route ownership, and engine-private runtime state stay local.
The routing model is shaped so admission, installation, maintenance, and replay remain explicit.
Problem
Jacquard is aimed at networks that are unstable, capacity-constrained, and potentially adversarial. Nodes may churn, links may degrade quickly, identities may be weak or partially authenticated, and local coordination may be necessary without any reliable global authority.
That creates two competing pressures. The system needs stronger coordination than naive flooding or purely local heuristics, but it also cannot afford to hard-code one routing doctrine such as GPS-based clique membership, singleton leaders, or full consensus on every routing transition.
It also needs to support more than one routing engine being present at once. A host such as Aura may want to run onion and pathway side by side, migrate traffic gradually from one to the other, or use one engine as a limited lower-layer carrier for another. Those are different cases and should not be collapsed into one mechanism.
System Shape
The top-level routing contract is routing-engine-neutral. A routing engine produces observational candidates, checks admission, admits a route, realizes it under router-provided canonical identity, publishes commitments, and handles engine-local maintenance. The control plane owns canonical route truth. The data plane forwards over already admitted truth.
When a routing engine needs local coordination, Jacquard allows it to expose a shared coordination result such as a committee selection. Jacquard does not require that every routing engine use committees, and it does not require that a committee have a distinguished leader. The shared layer standardizes the result shape, not the formation process. Formation may be engine-local, host-local, provisioned, or otherwise out of band.
Jacquard also allows a host-owned policy engine to compose routing engines through a neutral substrate contract. That means multiple routing engines may be used together, but the shared layer does not treat one canonical route as simultaneously owned by several unrelated engines. Composition happens through explicit carrier leases and layer parameters above the routing-engine boundary.
Design Commitments
Jacquard is fully deterministic. It uses typed time, typed ordering, explicit bounds, and explicit ownership objects instead of ambient runtime assumptions.
Observation scopes are kept explicit. Local node state, peer estimates, link estimates, and neighborhood aggregates are separate model surfaces. This split keeps routing logic honest about what is known unequivocally, what is inferred about a peer, and what is an aggregate local view.
Jacquard is intentionally not opinionated about engine-local scoring, committee formation policy, or trust heuristics. The shared layer commits to the result shapes, evidence classes, ownership rules, and canonical transition path. The routing-engine layer owns the scoring rules, diversity logic, and misbehavior handling that depend on its routing semantics.
Lifecycle and Integration
The system is committed to one explicit service lifecycle: observation → candidate → admission → router-owned canonical identity allocation → engine realization → materialized route → maintenance, replacement, or teardown. Major transitions stay typed and explicit. Data-plane health stays observational until the control plane publishes a canonical change.
The composition boundary is intentionally narrow. The shared layer exposes substrate requirements, substrate leases, and layer parameters, but does not let routing engines leak their internals into one another.
Jacquard is also meant to be the integration point where multiple teams can contribute device-specific expertise without forking the routing model. One team may contribute a BLE node extension, another a Wi-Fi link extension, and another a platform-specific transport or service extension. The cooperative effect comes from merging those self-describing observations into one shared world picture above the routing-engine boundary, then letting routing engines incorporate observed nodes and links when their own criteria are met.
Core Types
This page focuses on the core primitives that other routing objects build on. See Crate Architecture for the internal directory layout of core.
Identity, Observation, And Fact
NodeId identifies one running Jacquard client. ControllerId identifies the cryptographic actor that authenticates for that node. NodeBinding makes that relationship explicit instead of assuming one node identity is enough for every deployment.
Jacquard now uses an explicit epistemic ladder. Observation<T> is raw local input or a received report with provenance attached. Estimate<T> is a belief update derived from one or more observations. Fact<T> is stronger: it is the value the system is willing to treat as established routing truth. This split matters because a recent topology sighting, a scored route candidate, and a published route witness are different kinds of claim.
pub struct NodeBinding {
    pub node_id: NodeId,
    pub controller_id: ControllerId,
    pub binding_epoch: RouteEpoch,
    pub proof: NodeBindingProof,
}

pub struct Observation<T> {
    pub value: T,
    pub source_class: FactSourceClass,
    pub evidence_class: RoutingEvidenceClass,
    pub origin_authentication: OriginAuthenticationClass,
    pub observed_at_tick: Tick,
}

pub struct Estimate<T> {
    pub value: T,
    pub confidence_permille: RatioPermille,
    pub updated_at_tick: Tick,
}

pub struct Fact<T> {
    pub value: T,
    pub basis: FactBasis,
    pub established_at_tick: Tick,
}
This group of types shows two important boundaries. NodeBinding says who controls a node. Observation<T>, Estimate<T>, and Fact<T> say what kind of claim is being carried. Together they prevent the model from collapsing identity, evidence, inference, and publication into one opaque record.
FactSourceClass and OriginAuthenticationClass are intentionally separate. One says whether the fact is local or remote. The other says whether the origin is controlled, authenticated, or unauthenticated. That keeps provenance and authentication from collapsing into one mixed enum.
IdentityAssuranceClass is a second identity-facing qualifier. It says how strongly a node identity is grounded for routing-control decisions. That keeps “who claims to exist” separate from “how much committee or admission weight that identity should receive”.
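A hedged sketch of these three qualifier enums: the Local/Remote and Controlled/Authenticated/Unauthenticated variants follow the text above, while the IdentityAssuranceClass variants are illustrative guesses, not the jacquard-core definitions.

```rust
// Illustrative sketch only; real definitions live in jacquard-core.
#[derive(Debug, PartialEq)]
pub enum FactSourceClass {
    Local,  // produced by this node
    Remote, // carried in a received report
}

#[derive(Debug, PartialEq)]
pub enum OriginAuthenticationClass {
    Controlled,
    Authenticated,
    Unauthenticated,
}

// Variant names here are hypothetical: the text only says the type grades
// how strongly an identity is grounded for routing-control decisions.
#[derive(Debug, PartialEq)]
pub enum IdentityAssuranceClass {
    Provisioned,
    Attested,
    Claimed,
}

/// Provenance stays a separate question from authentication.
pub fn is_locally_observed(source: &FactSourceClass) -> bool {
    matches!(source, FactSourceClass::Local)
}
```

Keeping these as three separate enums, rather than one mixed enum, is the point of the boundary described above.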
Time And Qualifiers
Tick, DurationMs, OrderStamp, RouteEpoch, and ByteCount are the core scalar units. They keep local time, bounded duration, deterministic ordering, topology versioning, and byte quantities distinct at the type level. TimeWindow and TimeoutPolicy are the first compound objects built on those primitives. See Time Model for the full time-domain rules and the validated TimeWindow::new constructor.
Belief<T> and Limit<T> are the two main qualifier types. Belief<T> distinguishes Absent from Estimated(Estimate<T>), so the model can say both whether an estimate exists and how strong it is. Limit<T> says whether a budget is bounded or explicitly unlimited.
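A minimal sketch of the two qualifier types, reusing the Estimate<T> shape shown earlier on this page (RatioPermille and Tick are reproduced minimally here; the helper method is illustrative):

```rust
pub struct RatioPermille(pub u16);
pub struct Tick(pub u64);

pub struct Estimate<T> {
    pub value: T,
    pub confidence_permille: RatioPermille,
    pub updated_at_tick: Tick,
}

/// Absent vs. Estimated: both existence and strength of a belief stay explicit.
pub enum Belief<T> {
    Absent,
    Estimated(Estimate<T>),
}

/// Bounded vs. explicitly unlimited budget.
pub enum Limit<T> {
    Bounded(T),
    Unlimited,
}

impl<T> Belief<T> {
    /// Confidence is only meaningful when an estimate exists.
    pub fn confidence_permille(&self) -> Option<u16> {
        match self {
            Belief::Absent => None,
            Belief::Estimated(estimate) => Some(estimate.confidence_permille.0),
        }
    }
}
```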
World Schema
Configuration is the shared graph-shaped world object the router reasons about. It wires together Node, Link, and Environment. World extensions emit Observation<ObservedValue> items that contribute to that picture.
Engine-specific heuristics do not live here. Novelty scoring, bridge detection, reach estimation, feasibility distances, and similar derived signals stay behind the engine trait boundary as engine-owned estimate types. core carries the world facts those heuristics are computed from, not the heuristics themselves.
Route Lifecycle Objects
RouteHandle, RouteLease, RouteMaterializationInput, RouteInstallation, RouteMaterializationProof, and RouteCommitment are the main runtime coordination objects in core. The router allocates canonical route identity through RouteHandle, RouteLease, and RouteMaterializationInput. The engine returns RouteInstallation and RouteMaterializationProof to describe what it realized under that identity.
Live routes are split into router-owned PublishedRouteRecord and engine-mutable RouteRuntimeState, composed as MaterializedRoute. Canonical route state does not come directly from a transport callback or raw health observation. Activation enforces the structural invariants. The admission decision must be admissible, the realized protection must satisfy the objective protection floor, and lease validity must be checked explicitly before publication or maintenance continues.
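The ownership split can be sketched as a simple composition; the field contents below are placeholders, not the real jacquard-core fields:

```rust
// Placeholder fields only; the real record carries canonical route identity,
// lease, and commitment state.
pub struct PublishedRouteRecord {
    pub route_id: u64, // router-owned, immutable to engines
}

pub struct RouteRuntimeState {
    pub consecutive_health_failures: u32, // engine-mutable runtime state
}

pub struct MaterializedRoute {
    pub identity: PublishedRouteRecord,
    pub runtime: RouteRuntimeState,
}

impl MaterializedRoute {
    /// Maintenance receives only the mutable runtime portion; canonical
    /// identity stays router-owned.
    pub fn runtime_mut(&mut self) -> &mut RouteRuntimeState {
        &mut self.runtime
    }
}
```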
See Router Control Plane for the full lifecycle flow from objective through teardown.
Coordination And Layering
CommitteeSelection is the main shared coordination object. It carries a selected member set, role declarations, lease window, evidence basis, claim strength, and identity-assurance posture. core exposes only the coordination result shape. It does not define one universal committee-formation algorithm, require a leader, or encode engine-local scoring policy.
SubstrateRequirements, SubstrateCandidate, SubstrateLease, and LayerParameters are the shared layering objects. They exist so a host-level orchestrator can compose engines without teaching one engine about another’s internals. core exposes the carrier contract shape, not the host policy that decides when one engine should migrate to another.
DiscoveryScopeId is separate from the routing concept of a neighborhood. It is only a service-scope identifier used in ServiceScope::Discovery. It does not name a routing authority set or an engine-local topology object.
Pipeline And Observations
Jacquard keeps the shared routing pipeline explicit:
observation -> estimate -> fact -> candidate -> admission -> materialization -> publication
Only the first three stages live in the shared world model. Observation<T> carries raw local or remote input with provenance. Estimate<T> carries engine- or host-derived belief. Fact<T> carries the stronger claims the system is willing to treat as established routing truth. Candidate production, admission, materialization, and publication happen above this layer through the router and engine contracts.
This split matters because a recent link sighting, an engine-scored path preference, and a router-published route witness are not the same kind of statement. The type system keeps those boundaries visible.
World Extension Surface
World extensions contribute shared Node, Link, Environment, and related observation values without taking ownership of routing semantics.
- Extensions emit transport-neutral observations into the shared graph.
- Extensions do not publish canonical route truth.
- Engine-local heuristics such as novelty, relay value, bridge centrality, or next-hop scores stay private to the engine that derives them.
- Transport-specific authoring and handshake logic stays outside jacquard-core.
This is the boundary that lets concrete transports, host integrations, and profile crates describe the world honestly while keeping route selection and publication above the shared model.
Time Model
Jacquard uses a typed deterministic time model. It does not treat wall clock as distributed truth. The routing core works with local monotonic time, bounded durations, deterministic ordering tokens, and topology epochs.
Time Domains
Tick is local monotonic time. It is used for expiry, replay checks, scheduling, and publication timestamps. DurationMs is a bounded duration type for timeout and backoff policy. OrderStamp is a deterministic ordering token. RouteEpoch versions topology and reconfiguration state.
These domains are not interchangeable. Tick is not wall clock. OrderStamp is not an expiry. RouteEpoch is not elapsed time. Field names should carry their domain when needed, such as *_tick, *_ms, and *_epoch.
When validity depends on time, Jacquard passes Tick explicitly. A topology or service epoch may version shared state, but it must not be reinterpreted as elapsed time just to satisfy a validity check.
pub struct Tick(pub u64);
pub struct DurationMs(pub u32);
pub struct OrderStamp(pub u64);
pub struct RouteEpoch(pub u64);
Each type is a newtype over a fixed-width integer. They are distinct at the type level so the compiler rejects accidental mixing.
Local Choice
Clock time is a local choice in Jacquard. It is valid for local waiting, retry, retention, and expiry decisions. It is not proof that another node observed the same event or reached the same conclusion.
Remote observation of another device clock must stay above the routing core. If a host needs to exchange time-related state, it should pass that state explicitly as application data. The routing core may carry the data, but it must not treat a remote clock as native routing truth.
Runtime Boundary
Jacquard accesses time and deterministic ordering through abstract effects. TimeEffects provides Tick. OrderEffects provides OrderStamp. This keeps production, tests, and simulation on one semantic model even when their underlying runtimes differ.
TimeWindow and TimeoutPolicy are the main compound time objects in the model. TimeWindow is used for bounded validity. TimeoutPolicy is used for bounded retries and local waiting policy. Both stay in the deterministic time domain and avoid raw timestamp fields. TimeWindow is constructed through a validated constructor so invalid windows with end_tick <= start_tick are rejected at the type boundary instead of leaking into leases, service validity, or route admission.
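A minimal sketch of that validated constructor, under the assumption of a half-open [start_tick, end_tick) window; the containment rule and error type name are illustrative, only the end_tick <= start_tick rejection comes from the text:

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct Tick(pub u64);

#[derive(Debug)]
pub struct TimeWindow {
    start_tick: Tick,
    end_tick: Tick,
}

#[derive(Debug, PartialEq)]
pub struct InvalidTimeWindowError;

impl TimeWindow {
    /// Rejects windows with end_tick <= start_tick at the type boundary,
    /// so invalid windows cannot leak into leases or route admission.
    pub fn new(start_tick: Tick, end_tick: Tick) -> Result<Self, InvalidTimeWindowError> {
        if end_tick <= start_tick {
            return Err(InvalidTimeWindowError);
        }
        Ok(TimeWindow { start_tick, end_tick })
    }

    /// Half-open containment is an assumption of this sketch.
    pub fn contains(&self, now: Tick) -> bool {
        self.start_tick <= now && now < self.end_tick
    }
}
```

Because the fields are private, every TimeWindow in the system is valid by construction.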
pub struct TimeoutPolicy {
    pub attempt_count_max: u32,
    pub initial_backoff_ms: DurationMs,
    pub backoff_multiplier_permille: RatioPermille,
    pub backoff_ms_max: DurationMs,
    pub overall_timeout_ms: DurationMs,
}
TimeoutPolicy governs all bounded retry and backoff behavior. The multiplier uses RatioPermille rather than a floating-point scale factor.
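One backoff step under those types can be sketched with truncating integer permille arithmetic; the rounding and clamping details here are illustrative assumptions, not the documented policy:

```rust
pub struct DurationMs(pub u32);
pub struct RatioPermille(pub u16);

/// One bounded backoff step: scale the current backoff by a permille
/// multiplier (1500 = 1.5x), then clamp at the configured maximum.
/// Truncating division is an illustrative choice.
pub fn next_backoff_ms(
    current: DurationMs,
    multiplier: RatioPermille,
    max: DurationMs,
) -> DurationMs {
    let scaled = (current.0 as u64 * multiplier.0 as u64) / 1000;
    DurationMs(scaled.min(max.0 as u64) as u32)
}
```

The widened u64 intermediate keeps the multiplication from overflowing, and the result stays in the deterministic DurationMs domain.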
Routing Engines
This page describes the trait surface for adding a routing algorithm to Jacquard. It also captures the host capability boundary that engines consume and the in-tree engine shapes. See Pathway Routing for the explicit-path engine, Batman Routing for the batman-bellman and batman-classic next-hop engines, Field Routing for the corridor-envelope engine, Babel Routing for the RFC 8966 distance-vector engine, OLSRv2 Routing for the deterministic link-state engine, and Scatter Routing for the bounded deferred-delivery diffusion engine.
Routing Engine Contract
A routing engine is a routing algorithm that consumes the shared world picture and realizes routes under router-provided identity. Jacquard ships seven in-tree engines: pathway (explicit-path), field (corridor-envelope), batman-bellman (Bellman-Ford-enhanced next-hop), batman-classic (spec-faithful BATMAN IV next-hop), babel (RFC 8966 distance-vector), olsrv2 (OLSRv2 link-state), and scatter (bounded deferred-delivery diffusion). External engines such as onion routing plug into the same contract without depending on any in-tree engine’s internals.
pub trait RoutingEnginePlanner {
    #[must_use]
    fn engine_id(&self) -> RoutingEngineId;

    #[must_use]
    fn capabilities(&self) -> RoutingEngineCapabilities;

    #[must_use]
    fn candidate_routes(
        &self,
        objective: &RoutingObjective,
        profile: &SelectedRoutingParameters,
        topology: &Observation<Configuration>,
    ) -> Vec<RouteCandidate>;

    fn check_candidate(
        &self,
        objective: &RoutingObjective,
        profile: &SelectedRoutingParameters,
        candidate: &RouteCandidate,
        topology: &Observation<Configuration>,
    ) -> Result<RouteAdmissionCheck, RouteError>;

    fn admit_route(
        &self,
        objective: &RoutingObjective,
        profile: &SelectedRoutingParameters,
        candidate: RouteCandidate,
        topology: &Observation<Configuration>,
    ) -> Result<RouteAdmission, RouteError>;
}

pub trait RoutingEngine: RoutingEnginePlanner {
    fn materialize_route(
        &mut self,
        input: RouteMaterializationInput,
    ) -> Result<RouteInstallation, RouteError>;

    fn route_commitments(&self, route: &MaterializedRoute) -> Vec<RouteCommitment>;

    fn engine_tick(
        &mut self,
        tick: &RoutingTickContext,
    ) -> Result<RoutingTickOutcome, RouteError> {
        Ok(RoutingTickOutcome {
            topology_epoch: tick.topology.value.epoch,
            change: RoutingTickChange::NoChange,
            next_tick_hint: RoutingTickHint::HostDefault,
        })
    }

    fn maintain_route(
        &mut self,
        identity: &PublishedRouteRecord,
        runtime: &mut RouteRuntimeState,
        trigger: RouteMaintenanceTrigger,
    ) -> Result<RouteMaintenanceResult, RouteError>;

    fn teardown(&mut self, route_id: &RouteId);
}
RoutingEnginePlanner is pure. RoutingEngine is effectful. The split keeps candidate production deterministic and keeps runtime mutation inside explicit realization and maintenance methods. The router allocates canonical route identity first. The engine realizes the admitted route under that identity and returns RouteInstallation. The final MaterializedRoute is assembled above the engine boundary as router-owned identity plus engine-owned runtime state, and maintenance only receives the mutable runtime portion.
That activation step also enforces the shared control-plane invariants. The admission decision must still be admissible. The realized protection must satisfy the objective protection floor. Lease validity must be checked explicitly before maintenance or publication proceeds.
Engine Tick
engine_tick is the optional engine-wide bootstrap and convergence hook. The router or host owns cadence and passes a shared RoutingTickContext containing the authoritative merged topology observation for that step. The engine returns a small RoutingTickOutcome so the router can observe whether the tick changed engine-private state without standardizing engine internals. The hook itself does not publish canonical route truth directly.
RoutingTickOutcome.next_tick_hint is advisory scheduling pressure, not self-scheduling authority. Proactive engines such as Babel- or BATMAN-style implementations can report that more work is due soon, but the host/router still owns final cadence.
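As a sketch of that division of authority (RoutingTickHint::HostDefault appears in the shared trait default above; the DueWithin variant and the clamping rule are illustrative assumptions):

```rust
pub struct DurationMs(pub u32);

// HostDefault comes from the engine_tick default shown earlier; DueWithin is
// a hypothetical proactive-pressure variant used only for this sketch.
pub enum RoutingTickHint {
    HostDefault,
    DueWithin(DurationMs),
}

/// The host may honor the hint, but clamps it to a local floor; the cadence
/// decision stays host/router owned.
pub fn next_round_delay(
    hint: RoutingTickHint,
    floor: DurationMs,
    default: DurationMs,
) -> DurationMs {
    match hint {
        RoutingTickHint::HostDefault => default,
        RoutingTickHint::DueWithin(due) => DurationMs(due.0.max(floor.0)),
    }
}
```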
An engine may still use a richer internal runtime model behind that hook. First-party pathway, for example, now drives protocol-side ingress and bounded control-state refresh through a private choreography guest runtime while keeping the shared engine_tick signature unchanged.
That private choreography runtime does not replace the shared Jacquard effect traits. Generated Telltale effect interfaces remain engine-private implementation details, and the pathway interpreter adapts them onto the stable TimeEffects, OrderEffects, StorageEffects, RouteEventLogEffects, and TransportSenderEffects surfaces exposed by jacquard-traits. Host-owned TransportDriver implementations now stop at the router or bridge layer, which delivers explicit ingress before each synchronous router round.
First-party field follows the same ownership rule, but with a narrower proof boundary: the deterministic local observer-controller remains the semantic owner of corridor belief and posture choice, while any field-private choreography layer may provide only observational summary inputs. Canonical route publication remains router-owned.
Runtime Effect Boundary
The host capability surface stays narrow on purpose.
- TransportSenderEffects is the shared synchronous send capability engines use during a deterministic round.
- TransportDriver is the host-owned ingress and supervision surface.
- TimeEffects, OrderEffects, StorageEffects, and RouteEventLogEffects remain capability traits, not runtime-owner traits.
Engines do not own async streams, driver supervision loops, or Jacquard time assignment. Hosts and bridges own those responsibilities and inject observations before the next synchronous router round.
Contract Rules
Two implementation rules are worth keeping explicit. If a planning or admission judgment depends on observations, the current topology must be passed into that method directly rather than read from ambient engine state. And if an engine keeps planner caches, those caches are memoization only: cache hits and misses must not change the semantic result for the same topology.
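One way to keep a planner cache memoization-only is to key it by the topology input it summarizes. This sketch assumes an epoch-keyed cache with illustrative type shapes; the invariant is that a hit must return exactly what recomputation would produce:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct RouteEpoch(pub u64);

/// Memoization-only planner cache: hits and misses must not change the
/// semantic result for the same topology epoch.
pub struct PlannerCache<V: Clone> {
    entries: HashMap<RouteEpoch, V>,
}

impl<V: Clone> PlannerCache<V> {
    pub fn new() -> Self {
        PlannerCache { entries: HashMap::new() }
    }

    /// Returns the cached value for this epoch, computing it on first use.
    pub fn get_or_compute(&mut self, epoch: RouteEpoch, compute: impl FnOnce() -> V) -> V {
        self.entries.entry(epoch).or_insert_with(compute).clone()
    }
}
```

Because the key is the topology version itself, a stale entry can never be returned for a newer epoch, which is what keeps the cache semantically invisible.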
External routing engines should depend on jacquard-core and jacquard-traits. They should not depend on pathway internals, router internals, or simulator-private helpers. The stable shared contract includes RouteSummary, Estimate<RouteEstimate>, RouteAdmissionCheck, RouteWitness, RouteHandle, RouteLease, RouteMaterializationInput, RouteInstallation, RouteCommitment, RouteMaintenanceResult, CommitteeSelection, SubstrateRequirements, SubstrateLease, LayerParameters, Observation<T>, and Fact<T>. External engines must not assume pathway route shape, pathway topology structure, pathway-specific maintenance semantics, or any authority model outside those shared route objects.
Route Shape Visibility
Jacquard does not require every routing engine to expose a full hop-by-hop path.
- ExplicitPath - engine can expose an actual route path shape
- CorridorEnvelope - engine exposes a conservative end-to-end corridor envelope without claiming an explicit path
- NextHopOnly - engine only claims best-next-hop visibility toward the destination
- Opaque - engine does not expose useful route shape beyond viability
This matters for proactive engines. Pathway is ExplicitPath. Field is CorridorEnvelope. The batman engines (bellman and classic), babel, and olsrv2 are NextHopOnly. Scatter is Opaque: it can claim bounded deferred-delivery viability without claiming a stable next hop or explicit path shape.
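The four classes can be sketched as a plain enum; the variant names follow the text above, while the helper method is illustrative and not part of the shared contract:

```rust
#[derive(Debug, PartialEq)]
pub enum RouteShapeVisibility {
    ExplicitPath,     // pathway: full hop-by-hop path
    CorridorEnvelope, // field: conservative end-to-end envelope
    NextHopOnly,      // batman-bellman, batman-classic, babel, olsrv2
    Opaque,           // scatter: viability only
}

impl RouteShapeVisibility {
    /// Whether the engine claims any explicit forwarding structure
    /// (a full path or at least a concrete next hop).
    pub fn claims_forwarding_structure(&self) -> bool {
        matches!(
            self,
            RouteShapeVisibility::ExplicitPath | RouteShapeVisibility::NextHopOnly
        )
    }
}
```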
In-Tree Engines
See Pathway Routing, Batman Routing, Field Routing, Babel Routing, OLSRv2 Routing, and Scatter Routing for engine-specific models, capability assumptions, and maintenance behavior.
Policy And Coordination
Policy and coordination traits are separate from route realization. They cover host policy, optional local coordination results, and engine layering without direct engine-to-engine awareness.
pub trait PolicyEngine {
    #[must_use]
    fn compute_profile(
        &self,
        objective: &RoutingObjective,
        inputs: &RoutingPolicyInputs,
    ) -> SelectedRoutingParameters;
}

pub trait CommitteeSelector {
    type TopologyView;

    fn select_committee(
        &self,
        objective: &RoutingObjective,
        profile: &SelectedRoutingParameters,
        topology: &Observation<Self::TopologyView>,
    ) -> Result<Option<CommitteeSelection>, RouteError>;
}

pub trait CommitteeCoordinatedEngine {
    type Selector: CommitteeSelector;

    fn committee_selector(&self) -> Option<&Self::Selector>;
}

pub trait SubstratePlanner {
    #[must_use]
    fn candidate_substrates(
        &self,
        requirements: &SubstrateRequirements,
        topology: &Observation<Configuration>,
    ) -> Vec<SubstrateCandidate>;
}

pub trait SubstrateRuntime {
    fn acquire_substrate(
        &mut self,
        candidate: SubstrateCandidate,
    ) -> Result<SubstrateLease, RouteError>;

    fn release_substrate(&mut self, lease: &SubstrateLease) -> Result<(), RouteError>;

    fn observe_substrate_health(
        &self,
        lease: &SubstrateLease,
    ) -> Result<Observation<RouteHealth>, RouteError>;
}

pub trait LayeredRoutingEnginePlanner {
    #[must_use]
    fn candidate_routes_on_substrate(
        &self,
        objective: &RoutingObjective,
        profile: &SelectedRoutingParameters,
        substrate: &SubstrateLease,
        parameters: &LayerParameters,
    ) -> Vec<RouteCandidate>;

    fn admit_route_on_substrate(
        &self,
        objective: &RoutingObjective,
        profile: &SelectedRoutingParameters,
        substrate: &SubstrateLease,
        parameters: &LayerParameters,
        candidate: RouteCandidate,
    ) -> Result<RouteAdmission, RouteError>;
}

pub trait LayeredRoutingEngine: RoutingEngine + LayeredRoutingEnginePlanner {
    fn materialize_route_on_substrate(
        &mut self,
        input: RouteMaterializationInput,
        substrate: SubstrateLease,
        parameters: LayerParameters,
    ) -> Result<RouteInstallation, RouteError>;
}

pub trait LayeringPolicyEngine {
    fn activate_layered_route(
        &mut self,
        objective: RoutingObjective,
        outer_engine: RoutingEngineId,
        substrate_requirements: SubstrateRequirements,
        parameters: LayerParameters,
    ) -> Result<MaterializedRoute, RouteError>;
}
PolicyEngine, CommitteeSelector, CommitteeCoordinatedEngine, SubstratePlanner, and LayeredRoutingEnginePlanner are planning or read-only surfaces. SubstrateRuntime, LayeredRoutingEngine, and LayeringPolicyEngine are effectful. CommitteeSelector is optional. Jacquard standardizes the CommitteeSelection result shape, not one formation algorithm, and selectors may return None when no committee applies.
Selector implementations may be engine-local, host-local, provisioned, or otherwise out of band. The substrate and layering traits are still forward-looking contract surfaces for host-owned composition.
Router Control Plane
jacquard-router is a generic middleware layer that owns the canonical control plane above the routing-engine boundary. The router registers one or more routing engines, orchestrates cross-engine candidate selection, and publishes the selected engine result as canonical. Routing engines plan, admit, and maintain route-private runtime state without touching canonical route identity or publication. This includes proactive engines: the router does not own proactive routing tables, it only owns canonical publication over the evidence those engines return.
Ownership
The router owns canonical route-handle issuance, canonical lease publication, canonical commitment publication, explicit ingress queues, and router-owned round cadence. The router also dispatches maintenance triggers to engines.
Routing engines remain the owners of route-private runtime state and proof-bearing evidence. Profile implementations and test harnesses remain observational with respect to canonical route truth.
Cross-Engine Orchestration
The router coordinates multiple registered routing engines while keeping engine internals encapsulated. Engines are registered once and queried during activation and maintenance. Each engine returns candidates, evidence, and proofs through shared trait boundaries.
A policy engine computes the routing profile (protection class, connectivity posture, mode) from the current routing objective and local state. The router passes that profile to all registered engines. Engines return candidates ordered by cost and evidence. The router selects the best candidate, asks that engine to admit and materialize the route, and only then publishes canonical state.
The router remains oblivious to engine-specific scoring, topology models, or repair strategies. Engines remain oblivious to lease ownership, commitment publication, or multi-engine selection logic.
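A minimal sketch of that selection step, assuming a scalar candidate cost and illustrative type shapes (real candidates carry richer evidence than a single number):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct RoutingEngineId(pub u32);

#[derive(Clone, Debug)]
pub struct RouteCandidate {
    pub cost: u64,
}

/// Each registered engine contributes its own candidate list; the router
/// picks the globally cheapest candidate and remembers the owning engine,
/// which it then asks to admit and materialize.
pub fn select_best_candidate(
    per_engine: &[(RoutingEngineId, Vec<RouteCandidate>)],
) -> Option<(RoutingEngineId, RouteCandidate)> {
    per_engine
        .iter()
        .filter_map(|(engine, candidates)| {
            candidates
                .iter()
                .min_by_key(|candidate| candidate.cost)
                .map(|candidate| (*engine, candidate.clone()))
        })
        // min_by_key keeps the first of equal minima, so ties resolve by
        // registration order and rounds stay deterministic.
        .min_by_key(|(_, candidate)| candidate.cost)
}
```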
Activation Flow
The control-plane path is:
objective
-> policy profile
-> authoritative topology observation
-> explicit queued ingress
-> synchronous router round
-> cross-engine candidate ordering
-> selected-engine admission
-> router-owned handle + lease
-> engine materialization proof
-> canonical publication
-> router-published commitments
The engine does not mint the canonical handle, publish the canonical lease, or surface commitments as canonical truth. The router consumes RouteMaterializationProof, RouteWitness, RouteMaintenanceResult, and RouteSemanticHandoff to publish canonical state.
Route Lifecycle
The route lifecycle is owned by the control plane above the engine boundary.
- A host activates a RoutingObjective.
- The router computes policy and queries registered engines for candidates.
- The selected engine admits and materializes under router-owned identity.
- The router publishes canonical route state and commitments.
- Later rounds drive maintenance, replacement, handoff, expiry, or teardown.
Engines report proof-bearing maintenance outcomes such as continued health, repair, handoff, replacement pressure, or expiry. The router decides whether that engine result implies canonical mutation.
Tick and Maintenance
The router advances through synchronous rounds. Hosts feed topology, policy inputs, and transport observations into RoutingMiddleware, then call advance_round on the control plane. During that round the router drives RoutingTickContext into each registered engine and consumes RoutingTickOutcome. Engines may refresh private control state and summarize previously ingested observations. They may run engine-private choreographies. Engines do not publish canonical truth directly during engine_tick.
RoutingTickOutcome.next_tick_hint lets proactive engines report scheduling pressure without taking ownership of cadence. The router or host may honor that hint, clamp it, or ignore it, but the cadence decision remains router/host owned.
When maintenance returns a typed engine result, the router decides whether that implies canonical mutation. ReplacementRequired triggers router-owned reselection and replacement. HandedOff triggers router-owned lease transfer. LeaseExpired or Expired removes the canonical route.
Continued or repaired states update the router-published commitment view without changing canonical identity.
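The mapping above can be sketched as a router-side match; the signal and action enums here are illustrative shapes for this sketch, not the jacquard-core definitions:

```rust
// Variant names follow the maintenance outcomes described in the text.
#[derive(Debug, PartialEq)]
pub enum RouteMaintenanceSignal {
    Continued,
    Repaired,
    ReplacementRequired,
    HandedOff,
    LeaseExpired,
}

#[derive(Debug, PartialEq)]
pub enum CanonicalAction {
    UpdateCommitmentView, // canonical identity unchanged
    Reselect,             // router-owned reselection and replacement
    TransferLease,        // router-owned lease transfer
    RemoveRoute,          // canonical route removed
}

/// The engine reports evidence; the router alone decides whether that
/// evidence implies canonical mutation.
pub fn canonical_action(signal: &RouteMaintenanceSignal) -> CanonicalAction {
    match signal {
        RouteMaintenanceSignal::Continued | RouteMaintenanceSignal::Repaired => {
            CanonicalAction::UpdateCommitmentView
        }
        RouteMaintenanceSignal::ReplacementRequired => CanonicalAction::Reselect,
        RouteMaintenanceSignal::HandedOff => CanonicalAction::TransferLease,
        RouteMaintenanceSignal::LeaseExpired => CanonicalAction::RemoveRoute,
    }
}
```

An exhaustive match keeps the decision total: adding a new maintenance signal forces the router to say what canonical effect it has.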
RoutingControlPlane returns typed router outcomes instead of collapsing everything to Result<(), E>.
The router also owns the durable publication sequence for canonical state:
typed engine evidence
-> router checkpoint update
-> router-stamped route event
-> in-memory canonical publication
Pathway may still checkpoint route-private runtime payloads, but canonical route publication and canonical route-event emission happen in the router.
Configuration and State Updates
The router exposes RoutingMiddleware for hosts to update observable topology, policy inputs, and transport ingress without triggering route activation or maintenance. Hosts ingest topology observations when new world state arrives. Hosts ingest policy inputs when local conditions (capacity, churn, health) change. Hosts ingest transport observations explicitly instead of letting engines or routers poll transport adapters directly.
The router also exposes a recovery interface for checkpoint replay. Hosts call recover_checkpointed_routes after restart to restore the previous canonical route table and active materialized state.
Discovery Boundary
Shared discovery and coarse capability selection stay on ServiceDescriptor. Pathway nodes advertise route-capable surfaces through shared service descriptors. The router and test harness consume those shared descriptors. Jacquard does not introduce one universal handshake object for Discover, Activate, Repair, or Hold.
If a future engine needs stronger bilateral terms, add service-specific negotiation objects on that concrete path only.
Multi-Device Composition
A direct host/runtime composition harness exists outside the simulator. jacquard-mem-link-profile provides the shared in-memory carrier and effect adapters. jacquard-reference-client now shows the minimum host bridge wiring for a new device target: one bridge-owned transport driver, one or more queue-backed transport senders handed to engines, explicit ingress stamping, and explicit synchronous router rounds. The end-to-end multi-device test exercises reference-client, router, pathway, batman-bellman, batman-classic, babel, and mem-link-profile across multiple runtimes.
This harness proves crate-boundary composition. It does not replace the simulator. The simulator remains the scenario/replay layer above these shared boundaries.
Minimal Host Wiring
The reference examples for a new deployment target are the split reference-client end-to-end tests in crates/reference-client/tests/e2e_pathway_shared_network.rs, crates/reference-client/tests/e2e_batman_pathway_handoff.rs, crates/reference-client/tests/e2e_olsrv2_shared_network.rs, and crates/reference-client/tests/e2e_olsrv2_pathway_handoff.rs, backed by the shared scenarios in crates/testkit/src/reference_client_scenarios.rs.
- build a shared Observation&lt;Configuration&gt; with ordinary ServiceDescriptor values
- attach one bridge-owned TransportDriver per device runtime
- construct one or more engines per device over queue-backed TransportSenderEffects
- wrap those engines in one router that owns canonical publication
- bind one host bridge owner per runtime, ingest topology and transport updates explicitly there, and advance the router through synchronous bridge rounds instead of minting route truth directly
The minimum composition surface for a new device includes world input, bridge-owned transport registration, router activation, and data-plane forwarding over admitted routes.
Profile Implementations
jacquard-mem-node-profile, jacquard-mem-link-profile, and jacquard-reference-client are Jacquard’s in-tree profile and composition crates. The two mem-* crates model node and link inputs without importing routing logic. jacquard-adapter sits beside them as the reusable support crate for transport/profile implementers. The reference client composes those profile implementations with jacquard-router and the in-tree routing engines to exercise the full shared routing path in tests.
Ownership Boundary
Profile crates are Observed. They model capability advertisement, transport carriage, and link-level state. They do not plan routes, issue canonical handles, publish route truth, or interpret routing policy. Canonical route ownership remains on the router, and engine-private runtime state remains inside the routing engine. This keeps profile code reusable across routing engines and prevents observational fixtures from drifting into shadow control planes.
jacquard-core types flow through these crates unchanged. Node, NodeProfile, NodeState, Link, LinkEndpoint, LinkState, and ServiceDescriptor keep their shared-model shape end to end. The mem-* crates wrap builders around those shared objects instead of replacing or reshaping them, and the reference client hands the constructed world picture to the router as a plain Observation<Configuration>.
Crate Responsibilities
| Crate | Provides | Shared boundary it implements |
|---|---|---|
| jacquard-adapter | TransportIngressSender, TransportIngressReceiver, TransportIngressNotifier, TransportIngressDrain, PeerDirectory, PendingClaims, ClaimGuard | none — it provides transport-neutral adapter support primitives over jacquard-core vocabulary |
| jacquard-mem-node-profile | SimulatedNodeProfile, NodeStateSnapshot, SimulatedServiceDescriptor builders | none — it only emits jacquard-core model values |
| jacquard-mem-link-profile | SimulatedLinkProfile, SharedInMemoryNetwork, InMemoryTransport, InMemoryRetentionStore, InMemoryRuntimeEffects, transport-neutral reference defaults | TransportSenderEffects, TransportDriver, RetentionStore, TimeEffects, OrderEffects, StorageEffects, RouteEventLogEffects |
| jacquard-reference-client | topology::{node, link}, ClientBuilder, HostBridge, ReferenceRouter/ReferenceClient aliases | none — it is pure composition over the crates above |
The mem-* crates stay routing-engine-neutral and transport-neutral: they carry frames, emit observations, and build shared model values, but they do not mint route truth, interpret routing policy, or own BLE/IP-specific authoring helpers. jacquard-adapter likewise stays transport-neutral: it owns generic ownership scaffolding only, not endpoint constructors, protocol state, or driver traits. Reference-client fixtures are the single place where a service descriptor picks up engine-specific routing-engine tags (e.g. PATHWAY_ENGINE_ID, BATMAN_BELLMAN_ENGINE_ID, BABEL_ENGINE_ID), because that decision is composition, not profile. The reference-client bridge is also the only sanctioned place where transport ingress is drained and stamped before delivery to the router.
Composition
ClientBuilder is the wiring entry point. It attaches one bridge-owned InMemoryTransport driver to a SharedInMemoryNetwork, constructs queue-backed sender capabilities for each enabled engine, registers the engine set on a fresh MultiEngineRouter, and returns a ReferenceClient host bridge. The builder supports any combination of pathway, batman-bellman, batman-classic, babel, olsrv2, and field engines. Multiple clients built against the same network share one deterministic carrier while still advancing routing state through explicit bridge rounds.
```mermaid
graph LR
  NodeProfile[jacquard-mem-node-profile<br/>SimulatedNodeProfile<br/>NodeStateSnapshot<br/>SimulatedServiceDescriptor]
  LinkProfile[jacquard-mem-link-profile<br/>SimulatedLinkProfile<br/>InMemoryTransport<br/>InMemoryRetentionStore<br/>InMemoryRuntimeEffects]
  Network((SharedInMemoryNetwork))
  Ref[jacquard-reference-client<br/>fixtures + ClientBuilder + HostBridge]
  Router[MultiEngineRouter]
  Engines[PathwayEngine / BatmanBellmanEngine /<br/>BatmanClassicEngine / BabelEngine /<br/>OlsrV2Engine / FieldEngine]
  NodeProfile --> Ref
  LinkProfile --> Ref
  Network --> LinkProfile
  Ref --> Router
  Ref --> Engines
  Router -- registers --> Engines
```
The reference end-to-end examples are the split reference-client tests in crates/reference-client/tests/client_builder.rs, crates/reference-client/tests/e2e_pathway_shared_network.rs, crates/reference-client/tests/e2e_batman_pathway_handoff.rs, crates/reference-client/tests/e2e_olsrv2_shared_network.rs, and crates/reference-client/tests/e2e_olsrv2_pathway_handoff.rs, plus the shared scenarios in crates/testkit/src/reference_client_scenarios.rs. They show how to add a new client runtime to the same in-memory network without bypassing the bridge-owned ingress path or the router-owned canonical path.
Extension Guidance
Mirror the existing layering when adding a new device or transport profile. Build node-side world inputs as builders over the shared NodeProfile, NodeState, and ServiceDescriptor types. Build link-side and transport behavior as adapters that implement the shared effect boundaries listed above. Reuse jacquard-adapter only for generic mailbox, peer, or claim ownership support. Compose the new profile with the router and a routing engine through a host harness that looks like jacquard-reference-client. Do not introduce a parallel node schema, a pathway-specific transport trait, or transport-specific endpoint constructors inside the mem/reference profile crates.
Keep the ownership boundary strict. Profile crates stay Observed. Routers stay the canonical ActorOwned route publisher. Routing engines own only route-private runtime state and typed evidence. The Crate Architecture document has the full dependency graph and ownership rules these crates fit into.
Batman Routing
Two BATMAN routing engines are provided. Each implements the proactive originator-message model over the shared Jacquard world picture.
- jacquard-batman-bellman (engine ID jacquard.batmanb) is the Jacquard-enhanced engine. It replaces the spec’s distributed TQ propagation with a local Bellman-Ford computation over a gossip-merged topology graph. It enriches TQ with Jacquard link beliefs and includes a bootstrap shortcut for tick-1 route availability. This is the engine measured in the tuning corpus.
- jacquard-batman-classic (engine ID jacquard.batmanc) is a spec-faithful engine. It implements the BATMAN IV originator-message model without structural departures. TQ is carried in the OGM and updated by each re-broadcasting node. No candidate is emitted before receive-window data has accumulated.
Both engines declare RouteShapeVisibility::NextHopOnly and the same capability envelope. They are transport-neutral and operate alongside other engines on a shared multi-engine router. The router retains canonical route publication, handle issuance, and lease management. Batman owns proactive originator observations, neighbor ranking, and best-next-hop state within its own crate boundary.
Shared Inputs
Both engines consume Observation<Configuration> from the shared Jacquard world model. Destination eligibility is checked against ServiceDescriptor before either engine produces a candidate. A destination node must declare support for the engine’s specific ID in its shared service surface before the engine emits a RouteCandidate toward it. See Pathway Routing for the shared planning contract both engines implement.
Classic BATMAN (jacquard.batmanc)
OGM Structure
The classic engine’s originator advertisement carries only the fields required by the spec:
```rust
OriginatorAdvertisement {
    originator: NodeId,
    sequence: u64,
    tq: RatioPermille, // path quality from this node to originator; 1000 at source
    ttl: u8,           // hops remaining; decremented at each relay
}
```
No per-link state is included. Quality information travels as the tq scalar, which each re-broadcasting node updates before forwarding. Advertisements are framed with the eight-byte magic prefix JQBATMNC and bincode-serialized.
Flooding and TTL
flood_gossip runs each tick. It sends the local originator OGM (tq=1000, ttl=DEFAULT_OGM_HOP_LIMIT=50) to every direct neighbor. It also sends a re-broadcast copy of each learned OGM whose TTL has not reached zero.
Before forwarding a learned OGM, the engine computes its path quality to the originator and encodes it in the outgoing advertisement:
rebroadcast_tq = tq_product(link_state_tq_to_sender, received_tq)
rebroadcast_ttl = received_ttl - 1
OGMs with ttl=0 are discarded and not forwarded. This bounds propagation to at most DEFAULT_OGM_HOP_LIMIT relay hops from the originator. Stale OGMs cannot circulate without bound in large meshes.
TQ Propagation
TQ degrades multiplicatively as an OGM hops through the network. An originator X sends tq=1000. Each relay node B applies tq_product(link_state_tq_to_sender, received_tq) before re-broadcasting. When node A receives X’s OGM via relay B, it reads B’s reported path quality directly from the received TQ field:
received_tq_via_B = link_B_to_prev × ... × link_Y_to_X × 1000 / 1000^n
A stores received_ogm_info[X][B] with the received TQ and a hop count derived from DEFAULT_OGM_HOP_LIMIT - received_ttl + 1. This data drives A’s local routing decision for X without any local path computation.
This is the classic distributed implicit computation. Each node contributes its local link observation. The flood assembles an end-to-end quality estimate without any node building a full topology graph.
Receive Window and Quality Scoring
A separate receive window is maintained per (originator, via_neighbor) pair. It counts unique sequence numbers received within the staleness window. The window occupancy permille is computed as:
occupancy_permille = received_count / window_span × 1000
This receive quality is applied as a third factor in the local routing decision alongside local_link_tq_to_B and received_tq_from_B, combined via two nested tq_product calls. The receive quality is not encoded in the re-broadcast TQ. Downstream nodes see only the link-state-based path quality in re-broadcast advertisements.
Echo-Only Bidirectionality
A neighbor B is confirmed bidirectional only when a local OGM has been received back via B. bidirectional_neighbor_valid checks the bidirectional_receive_windows table and returns false if no echo has been seen. There is no topology fallback. A neighbor for which no echo has been received does not contribute routing observations regardless of what the shared world model reports about the reverse link.
No Bootstrap
If no receive-window data has accumulated for a (originator, via_neighbor) pair, observation_tq is zero and no routing observation is produced for that path. The engine produces no RouteCandidate on tick 1 for any multi-hop destination. Routes emerge as sequence windows fill. This matches the spec’s behavior.
Enhanced BATMAN (jacquard.batmanb)
OGM Structure
The enhanced engine’s originator advertisement carries full link state rather than a TQ scalar:
```rust
OriginatorAdvertisement {
    originator: NodeId,
    sequence: u64,
    links: Vec<AdvertisedLink>, // runtime_state, transport_kind, delivery_confidence
    // no tq field, no ttl field
}
```
This advertisement is sufficient to reconstruct a topology graph. It does not encode a pre-computed path quality. Advertisements are framed with magic JQBATMAN. The absence of TTL means OGMs are flooded verbatim to all neighbors every tick without hop-count bounds.
Gossip Merging and Bellman-Ford
merge_advertisements folds learned advertisements into a copy of the current topology observation. It inserts synthesized Link entries for gossip-discovered edges not already present in the direct view. This produces a merged topology that may include nodes and links beyond the local one-hop view.
refresh_private_state then runs Bellman-Ford on this merged topology to compute (path_tq, hop_count) from each direct neighbor to every reachable originator. When a receive window exists for the (originator, neighbor) pair, the local routing decision uses three factors:
steady_state_tq = tq_product(tq_product(local_link_tq, bellman_ford_path_tq), receive_quality)
When no receive window exists (bootstrap), the decision uses two factors:
bootstrap_tq = tq_product(local_link_tq, bellman_ford_path_tq)
This substitutes a deterministic local computation for the spec’s distributed OGM-propagated TQ. The computation is reproducible from the topology snapshot. The spec’s TQ reflects whatever the neighborhood has recently observed.
TQ Enrichment
derive_tq starts from the same ogm_equivalent_tq(LinkRuntimeState) baseline as the classic engine. When richer Jacquard link beliefs are present, it incorporates up to four additional terms in a running average.
| Enrichment | Normalization |
|---|---|
| delivery_confidence_permille | Direct permille value |
| symmetry_permille | Direct permille value |
| transfer_rate_bytes_per_sec | Normalized against 128 kbps saturation ceiling |
| stability_horizon_ms | Normalized against 4000 ms saturation ceiling |
The final TQ is the integer average over all contributing terms. With no beliefs present the result is identical to the classic engine’s baseline. This enrichment has no equivalent in the BATMAN protocol.
Topology Fallback for Bidirectionality
bidirectional_neighbor_valid first checks bidirectional_receive_windows as in the classic engine. If no echo window exists, it falls back to checking whether the shared topology contains a reverse link with usable state. This accelerates route availability on tick 1 before any echoes have been received. It admits routes the spec would withhold until echo confirmation.
Bootstrap Shortcut
In derive_originator_observations, if no receive-window data exists for a specific (originator, via_neighbor) pair, the engine uses the Bellman-Ford path TQ directly as the combined TQ: bootstrap_tq = tq_product(local_link_tq, path_tq). This is a two-factor formula. Once a receive window exists for that pair, the engine switches to the standard three-factor formula: tq = tq_product(tq_product(local_link_tq, path_tq), receive_quality).
The bootstrap check is per-originator-per-neighbor. Receiving an OGM for one originator does not disable bootstrap for other originators that have not yet accumulated window data. On tick 1, before any OGMs have been received, the engine produces routing candidates from topology-derived path quality for all reachable destinations. The spec produces no candidates until receive-window data has accumulated.
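The per-pair switch between the two formulas can be sketched as follows. This is an illustrative reduction, not the engine's actual API: the receive window is modeled as an `Option`, where `None` means no window data has accumulated for that (originator, via_neighbor) pair, and `tq_product` is restated locally so the sketch stands alone.

```rust
/// Local copy of the shared permille product (0-1000 scale).
fn tq_product(left: u32, right: u32) -> u32 {
    (left * right) / 1000
}

/// Sketch of the per-(originator, via_neighbor) bootstrap decision.
/// Names are illustrative; the engine derives these inputs from its
/// Bellman-Ford pass and its receive-window table.
fn combined_tq(local_link_tq: u32, path_tq: u32, receive_quality: Option<u32>) -> u32 {
    match receive_quality {
        // Bootstrap: no window exists for this pair, so the
        // topology-derived path TQ is used directly (two factors).
        None => tq_product(local_link_tq, path_tq),
        // Steady state: standard three-factor formula.
        Some(rq) => tq_product(tq_product(local_link_tq, path_tq), rq),
    }
}
```

With a 900-permille local link and a 900-permille Bellman-Ford path, bootstrap yields 810; once a window reporting 500-permille receive quality exists, the same pair drops to 405.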
Shared Mechanisms
The following mechanisms are identical in both engines.
OGM Receive Window
OgmReceiveWindow tracks received sequence numbers per (originator, via_neighbor) pair using a sliding window of size stale_after_ticks. Occupancy is computed as received_count / window_span × 1000. Sequences outside the staleness window are pruned. The window becomes empty once the last observed sequence ages out.
Sequence numbers are accepted strictly monotonically. Earlier or equal sequences from the same originator via the same neighbor are discarded.
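A minimal sketch of those two rules, assuming a flat `(sequence, tick)` store; the in-tree OgmReceiveWindow keeps per-pair state and richer bookkeeping, and these names are illustrative.

```rust
/// Simplified per-(originator, via_neighbor) receive window: strictly
/// monotonic sequence acceptance plus tick-based staleness pruning.
struct OgmReceiveWindow {
    window_span: u64,           // staleness window, in ticks
    last_sequence: Option<u64>, // monotonic acceptance gate
    entries: Vec<(u64, u64)>,   // (sequence, tick_received)
}

impl OgmReceiveWindow {
    fn new(window_span: u64) -> Self {
        Self { window_span, last_sequence: None, entries: Vec::new() }
    }

    /// Accept only sequences strictly newer than the last one seen;
    /// equal or older sequences are discarded.
    fn record(&mut self, sequence: u64, tick: u64) -> bool {
        if matches!(self.last_sequence, Some(last) if sequence <= last) {
            return false;
        }
        self.last_sequence = Some(sequence);
        self.entries.push((sequence, tick));
        true
    }

    /// occupancy_permille = received_count / window_span * 1000,
    /// counted over entries still inside the staleness window.
    fn occupancy_permille(&mut self, now: u64) -> u64 {
        let span = self.window_span;
        self.entries.retain(|&(_, tick)| now < tick + span);
        (self.entries.len() as u64 * 1000) / span
    }
}
```

Four unique sequences received within an 8-tick window give an occupancy of 500 permille; a replayed sequence number is rejected and does not inflate occupancy.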
TQ Product
Both engines use the same compound quality formula:
tq_product(left, right) = (left × right) / 1000
The result is on the same 0–1000 permille scale as the inputs. A path through two links each at 900 TQ yields 810. Multi-hop paths accumulate tq_product in sequence, producing monotonically decreasing quality with hop count. Links with a derived TQ below 700 are classified RouteDegradation::Degraded.
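As a sketch of the shared formula (assuming plain u32 permille values rather than the crates' RatioPermille type):

```rust
/// Compound permille quality. Inputs and result are on the 0-1000 scale.
fn tq_product(left: u32, right: u32) -> u32 {
    (left * right) / 1000
}

/// Two 900-permille links compose to 810; adding a third 900 link drops
/// the path to 729, so quality decreases monotonically with hop count.
fn three_hop_example() -> (u32, u32) {
    let two_hop = tq_product(900, 900);
    let three_hop = tq_product(two_hop, 900);
    (two_hop, three_hop)
}
```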
Decay Window
DecayWindow governs observation freshness and refresh cadence. The default marks observations stale after 8 ticks and triggers a refresh within 4 ticks. Both engines accept with_decay_window at construction for tuning.
Neighbor Ranking and BestNextHop
Candidates for each originator are ranked in this order:
1. receive_quality descending
2. tq descending
3. is_bidirectional descending
4. observed_at_tick descending
5. hop_count ascending
6. via_neighbor ascending (deterministic tie-break)
The top-ranked entry becomes BestNextHop. It carries the next-hop NodeId, TQ, receive quality, hop count, observation tick, topology epoch, transport kind, degradation status, bidirectionality flag, and a derived BackendRouteId.
Planning, Admission, and Lifecycle
Planning, admission, and route lifecycle use identical logic in both engines. The planner checks the destination’s ServiceDescriptor for the engine-specific ID before emitting any candidate.
candidate_routes emits at most one RouteCandidate per reachable destination. admit_route validates the candidate’s BackendRouteId against the current BestNextHop table entry. A stale or superseded reference is inadmissible. materialize_route records an active route and derives health from TQ: HealthScore = tq, PenaltyPoints = 1000 - tq.
maintain_route returns ReplacementRequired when the best next-hop has changed. It returns Failed(LostReachability) when the destination has no table entry or when is_bidirectional is false. Route replacement is the only reconfiguration path. Neither engine implements suffix repair or hold.
Capabilities
Both engines declare the same capability envelope:
| Capability | Value |
|---|---|
| max_protection | LinkProtected |
| max_connectivity | ConnectedOnly |
| repair_support | Unsupported |
| hold_support | Unsupported |
| decidable_admission | Supported |
| quantitative_bounds | ProductiveOnly |
| reconfiguration_support | ReplaceOnly |
| route_shape_visibility | NextHopOnly |
Spec Compliance
Faithful Mechanisms (Classic)
| Mechanism | Implementation |
|---|---|
| OGM sequence-number freshness gating | Strictly monotonic: OGMs with equal or older sequence are discarded |
| Receive-window occupancy as route quality | occupancy_permille = received / window_span |
| TQ product formula | (left × right) / 1000 |
| TQ propagated via OGM | tq field updated at each relay hop |
| TTL-bounded propagation | DEFAULT_OGM_HOP_LIMIT=50, decremented at each hop |
| Bidirectionality via echo | Echo window required. No topology fallback. |
| Proactive flood | OGMs sent to all direct neighbors each tick |
| Staleness window pruning | Sequences outside window are dropped |
| Next-hop-only route shape | RouteShapeVisibility::NextHopOnly |
| Single best next-hop per destination | Top-ranked entry only |
One minor deviation exists. The re-broadcast TQ uses ogm_equivalent_tq(LinkRuntimeState) as the local quality factor. The strict BATMAN IV spec uses receive-window occupancy as this factor. The local routing decision correctly applies receive-window quality as a third factor. The deviation therefore affects downstream quality estimates in OGMs rather than local route selection.
Enhanced Engine Departures
| Mechanism | Change | Implication |
|---|---|---|
| TQ computation | Local Bellman-Ford on merged topology. No TQ field in OGM. | Path quality is deterministic and reproducible from the topology snapshot rather than reflecting recent neighborhood observation. The computation is closer to OLSR-style local SPF than DV gossip. |
| Link state in OGM | Full AdvertisedLink state per neighbor. No TQ scalar or TTL. | This is equivalent to distributing a topology database via gossip. It enables local path computation with no BATMAN protocol equivalent. |
| TTL | Absent. OGMs propagate without hop-count bounds. | OGMs circulate for as long as advertisements remain within the staleness window. The spec’s propagation depth control is lost. |
| TQ enrichment | Delivery confidence, symmetry, transfer rate, stability averaged into TQ. | TQ reflects richer signal quality than packet counts alone. No BATMAN protocol equivalent. |
| Bidirectionality | Echo window with topology fallback. | Routes are available on tick 1. The engine admits paths the spec would withhold until echo confirmation. |
| Bootstrap | Per-originator-per-neighbor: path TQ used as combined TQ (two factors) when no window exists for that pair. | Routing candidates are produced on tick 1. Receiving OGMs for one destination does not disable bootstrap for others. The spec produces no candidates until receive-window data has accumulated. |
| Full topology reconstruction | merge_advertisements builds a complete adjacency graph. | The implementation behaves closer to a link-state protocol than a pure DV-gossip protocol. Path computation is centralized and explicit rather than implicit in the OGM flood. |
Classic as Babel Comparison
The classic engine is the correct baseline when comparing against Babel (RFC 8966). Babel was designed to address specific weaknesses of classic DV-gossip protocols. These weaknesses are present in the spec-faithful implementation.
Babel addresses three gaps relative to classic BATMAN:
- Asymmetric-link handling: classic batman’s bidirectionality gate excludes paths with poor incoming links entirely. Babel’s feasibility condition handles asymmetric metrics without excluding those paths.
- Loop freedom: classic batman relies on sequence-number freshness for loop prevention. Babel’s feasibility condition provides a provable loop-freedom guarantee during transient topology changes.
- Triggered updates: classic batman floods on a fixed tick schedule. Babel sends triggered updates immediately when a metric changes, reducing recovery latency.
Comparing classic batman against Babel measures what each mechanism contributes independently. Comparing the enhanced engine against Babel conflates the Bellman-Ford and topology-enrichment changes with the DV-gossip differences, making performance attribution unreliable.
Enhanced as OLSRv2 Comparison
The enhanced engine is the correct baseline when comparing against OLSRv2 (jacquard-olsrv2). Both use local shortest-path computation over a topology database distributed by gossip. The primary structural difference is TC messages with MPR election versus OGM flooding.
The enhanced engine also exhibits the partition-recovery weakness that OLSR directly addresses. The receive window used as receive_quality is the same window used to gate bidirectionality. Both indicators require the full window span to recover when a partition clears, which delays route restoration compared to OLSR’s explicit TC-flood response to topology changes.
That comparison therefore measures two distinct questions: the cost of MPR-suppressed flooding overhead versus the enhanced batman’s simpler model, and whether OLSR’s immediate topology-change response produces better recovery behavior under adverse conditions.
Babel Routing
jacquard-babel (engine ID jacquard.babel..) implements the Babel distance-vector routing protocol as described in RFC 8966. It uses bidirectional ETX link cost, additive path metric, and a feasibility distance table for loop-free route selection.
Protocol Overview
Babel is a distance-vector routing protocol designed for wireless mesh networks. Each node originates route updates advertising itself as a destination with metric 0. Relay nodes add their local link cost before re-advertising the best route. Downstream nodes select the path with the lowest total metric.
Three properties distinguish Babel from the batman engines in Jacquard. First, link cost uses bidirectional ETX rather than forward-only TQ. Second, path metric is additive rather than multiplicative. Third, route selection is gated by a feasibility condition that provides loop freedom during transient topology changes.
Shared Inputs
The Babel engine consumes Observation<Configuration> from the shared Jacquard world model. Destination eligibility is checked against ServiceDescriptor before the engine produces a candidate. A destination node must declare support for the engine’s specific ID in its shared service surface before the engine emits a RouteCandidate toward it. See Pathway Routing for the shared planning contract all engines implement.
Update Structure
The Babel update carries four fields:
```rust
BabelUpdate {
    destination: NodeId,
    router_id: NodeId,
    seqno: u16,
    metric: u16,
}
```
The originator sets metric=0 and assigns a monotonically increasing seqno. Each relay node adds the local link cost to the metric before re-advertising. The router_id identifies the originator of the route entry. Updates are framed with the eight-byte magic prefix JQBABEL. and bincode-serialized.
No TTL field is present. Propagation depth is controlled by the decay window: stale entries are pruned when observed_at_tick exceeds stale_after_ticks. No hop-count bound is needed because only the selected route per destination is re-advertised, and infeasible routes are rejected by the feasibility condition.
ETX Link Cost
Link cost uses the Expected Transmission Count formula:
cost = 256 * 1_000_000 / (fwd_delivery_permille * rev_delivery_permille)
This captures bidirectional link quality. A perfectly symmetric active link (1000 permille in both directions) yields cost 256. An asymmetric link where the forward direction is good (980 permille) but the reverse is poor (300 permille) yields cost 871. The formula penalizes asymmetric links more heavily than BATMAN’s forward-only TQ because poor reverse delivery means acknowledgments are lost, increasing true retransmission count.
If either direction is absent or faulted (delivery 0), cost equals BABEL_INFINITY (0xFFFF), making the route unusable. This replaces the echo-window bidirectionality gate used by batman-classic. No separate bidirectionality check is needed because asymmetry is encoded directly in the metric.
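A sketch of that cost function follows. Rounding to nearest is an assumption chosen to reproduce the worked 980/300 -> 871 example, and clamping very poor links to infinity is likewise assumed; the in-tree implementation may round or saturate differently.

```rust
const BABEL_INFINITY: u32 = 0xFFFF;

/// Bidirectional ETX link cost over permille delivery rates.
fn etx_cost(fwd_delivery_permille: u32, rev_delivery_permille: u32) -> u32 {
    // An absent or faulted direction (delivery 0) makes the link unusable.
    if fwd_delivery_permille == 0 || rev_delivery_permille == 0 {
        return BABEL_INFINITY;
    }
    let denom = fwd_delivery_permille as u64 * rev_delivery_permille as u64;
    // cost = 256 * 1_000_000 / (fwd * rev), rounded to nearest.
    let cost = (256u64 * 1_000_000 + denom / 2) / denom;
    cost.min(BABEL_INFINITY as u64) as u32
}
```

A symmetric 1000/1000 link costs 256, the asymmetric 980/300 link costs 871, and any faulted direction yields infinity, matching the prose above.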
Additive Metric
Path metric is the sum of link cost and the neighbor’s reported metric:
compound_metric = link_cost + neighbor_metric
If either input equals BABEL_INFINITY, the result is BABEL_INFINITY. Otherwise the sum saturates at BABEL_INFINITY - 1 (0xFFFE). The metric scale runs from 0 (perfect local route) to 0xFFFF (unreachable). Values at or above 0xFFFF are treated as unreachable.
This additive model differs from BATMAN’s multiplicative TQ product. A single bad hop in a multi-hop path raises the total metric by its full link cost. In BATMAN, the same bad hop would reduce the multiplicative product less dramatically relative to other hops. Babel therefore discriminates more strongly against paths with one weak link among otherwise strong links.
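The infinity-absorbing, saturating sum can be sketched directly from the rules above:

```rust
const BABEL_INFINITY: u16 = 0xFFFF;

/// Additive Babel path metric: infinity absorbs, and finite sums
/// saturate at BABEL_INFINITY - 1 (0xFFFE).
fn compound_metric(link_cost: u16, neighbor_metric: u16) -> u16 {
    if link_cost == BABEL_INFINITY || neighbor_metric == BABEL_INFINITY {
        return BABEL_INFINITY;
    }
    // Widen to u32 so the sum cannot wrap before saturation.
    let sum = link_cost as u32 + neighbor_metric as u32;
    sum.min(BABEL_INFINITY as u32 - 1) as u16
}
```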
Feasibility Distance Table
Every node maintains a per-destination feasibility distance FD[D] stored as a (seqno, metric) pair. A route entry for destination D is feasible if and only if:
seqno_is_newer(entry.seqno, FD[D].seqno)
OR (entry.seqno == FD[D].seqno AND entry.metric < FD[D].metric)
The seqno_is_newer function uses modular arithmetic over u16 as defined in RFC 8966 Section 3.5.1. A seqno is newer if the unsigned distance (candidate - reference) mod 2^16 falls in the range (0, 2^15).
When FD is absent for a destination (never selected, or all routes expired), any finite-metric route is feasible. The feasibility condition prevents routing loops during transient topology changes. It ensures that a node never selects a route whose metric has increased relative to its last feasibly selected route, unless a newer seqno proves that the originator has acknowledged the topology change.
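The comparison and the feasibility gate can be sketched as follows, with `(seqno, metric)` tuples standing in for the crate's actual route-entry and FD types:

```rust
/// RFC 8966 §3.5.1 comparison: candidate is newer iff the unsigned
/// distance (candidate - reference) mod 2^16 lies in (0, 2^15).
fn seqno_is_newer(candidate: u16, reference: u16) -> bool {
    let dist = candidate.wrapping_sub(reference);
    dist != 0 && dist < 0x8000
}

/// Feasibility sketch. `entry` and `fd` are (seqno, metric) pairs;
/// fd == None models a destination with no recorded feasibility
/// distance, where any finite-metric route is feasible.
fn is_feasible(entry: (u16, u16), fd: Option<(u16, u16)>) -> bool {
    match fd {
        None => entry.1 < 0xFFFF,
        Some((fd_seqno, fd_metric)) => {
            seqno_is_newer(entry.0, fd_seqno)
                || (entry.0 == fd_seqno && entry.1 < fd_metric)
        }
    }
}
```

Note that `wrapping_sub` handles seqno wrap correctly: seqno 0 is newer than seqno 65535, and the boundary distance 2^15 is excluded on both sides.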
Admission vs Selection
Updates are always admitted to the route table. The feasibility condition gates selection only. This matches RFC 8966: a node stores all received route entries and uses the FC only when choosing which route to select.
FD Update Rules
FD is updated to (seqno, metric) of the selected route only when the selection is feasible. Infeasible fallback selections leave FD unchanged. This preserves the loop-freedom guarantee: the FD ratchet never moves backward.
Infeasible Fallback
When no feasible route exists for a destination, the engine selects the best infeasible route to preserve connectivity. This selection does not update FD. The periodic seqno increment (every 16 ticks) propagates a fresh seqno from the originator. When that update arrives, it satisfies the feasibility condition (newer seqno) and allows FD to be updated, ending the fallback period. This replaces the explicit SEQREQ mechanism from RFC 8966 with a bounded periodic refresh.
FD Expiry
When all routes to a destination expire from the route table, FD for that destination is removed. The next route learned will be treated as if FD is absent (any finite metric is feasible).
Sequence Number Management
The originator seqno is incremented every SEQNO_REFRESH_INTERVAL_TICKS (16 ticks). This periodic increment serves as the mechanism for resolving infeasible-fallback states across the network. The seqno uses u16 with modular arithmetic and wraps at 2^16.
No explicit seqno request mechanism is implemented. In the full RFC 8966 protocol, a node can send a SEQREQ to the originator asking it to bump its seqno immediately. In the Jacquard tick model, the periodic increment bounds the infeasible-fallback window to at most 16 ticks without requiring asynchronous request handling.
Selected-Route Flooding
Each tick, the engine floods two types of updates to all direct neighbors. The first is the local node’s originated update with the current seqno and metric 0. The second is a re-advertisement of the best selected route per destination. Non-selected routes are not re-broadcast.
This differs from batman-classic, which re-broadcasts all received OGMs. Babel’s selected-route flooding reduces overhead and works in concert with the feasibility condition to provide loop freedom.
Decay Window
DecayWindow governs route entry freshness. The default marks entries stale after 8 ticks and expects the next refresh within 4 ticks. Both parameters are configurable via BabelEngine::with_decay_window. Stale entries are pruned during each refresh pass before route selection.
The decay window is identical in shape to the one used by the batman engines. All distance-vector engines accept with_decay_window at construction for tuning.
Quality Scoring
The engine converts Babel metric to a RatioPermille quality score using a linear mapping. Metric 0 maps to quality 1000 (perfect). Metric values at or above 1024 map to quality 0. Routes with metric at or above 512 are classified as degraded.
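A hedged sketch of that mapping: only the two endpoints are stated above (metric 0 -> quality 1000, metric >= 1024 -> quality 0), so straight-line interpolation between them is an assumption.

```rust
/// Linear metric -> quality mapping on the 0-1000 permille scale.
fn metric_to_quality_permille(metric: u16) -> u16 {
    if metric >= 1024 {
        return 0;
    }
    // Interpolate linearly between (0, 1000) and (1024, 0).
    ((1024 - metric as u32) * 1000 / 1024) as u16
}
```

Under this interpolation, the degraded threshold of metric 512 corresponds to quality 500.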
Planning, Admission, and Lifecycle
Planning, admission, and route lifecycle follow the shared contract used by all Jacquard engines. The planner checks the destination’s ServiceDescriptor for the Babel engine ID before emitting any candidate.
candidate_routes emits at most one RouteCandidate per reachable destination. admit_route validates the candidate’s BackendRouteId against the current best next-hop table. A stale or superseded reference is inadmissible. materialize_route records an active route and derives health from the quality score.
maintain_route returns ReplacementRequired when the best next-hop has changed. It returns Failed(LostReachability) when the destination has no table entry. Route replacement is the only reconfiguration path. The engine does not implement suffix repair or hold.
Capabilities
The Babel engine declares the same capability envelope as the batman engines:
| Capability | Value |
|---|---|
max_protection | LinkProtected |
max_connectivity | ConnectedOnly |
repair_support | Unsupported |
hold_support | Unsupported |
decidable_admission | Supported |
quantitative_bounds | ProductiveOnly |
reconfiguration_support | ReplaceOnly |
route_shape_visibility | NextHopOnly |
Comparison with Batman Engines
vs Batman-Classic
Batman-classic is the correct comparison baseline for Babel. Both are pure distance-vector gossip protocols without local topology reconstruction. Babel addresses three gaps relative to batman-classic:
- Asymmetric-link handling: batman-classic’s bidirectionality gate excludes poor-reverse paths entirely, while Babel’s ETX cost encodes asymmetric quality as a finite metric.
- Loop freedom: batman-classic relies on sequence-number freshness, while Babel’s feasibility condition provides a formal guarantee.
- Propagation: batman-classic re-broadcasts all received OGMs, while Babel forwards only the selected route.
vs Batman-Bellman
Batman-bellman replaces the spec’s distributed TQ propagation with a local Bellman-Ford computation over a gossip-merged topology graph. Comparing babel against batman-bellman conflates the Bellman-Ford and topology-enrichment changes with the DV-gossip differences, making performance attribution unreliable. For protocol-level comparison, use batman-classic.
OLSRv2 Routing
jacquard-olsrv2 (engine ID jacquard.olsrv2.) implements a deterministic OLSRv2-class proactive link-state engine. It preserves the core OLSRv2 shape: HELLO-driven symmetric-neighbor learning, deterministic MPR election, TC-style topology flooding, and shortest-path next-hop derivation over the learned topology database.
The crate is not a wire-compatible RFC 7181 daemon clone. It is a Jacquard engine that consumes Observation<Configuration>, advances only during router-owned synchronous rounds, and publishes only next-hop route candidates through the shared engine traits.
Engine Overview
The engine owns five pieces of runtime state:
- one-hop neighbor state learned from HELLO exchange
- two-hop reachability learned from symmetric neighbors
- local MPR and MPR-selector sets
- topology tuples learned from TC advertisements
- a derived shortest-path tree and best-next-hop table
The router and host own ingress draining, tick cadence, and time attachment. jacquard-olsrv2 consumes explicit ingress through the shared runtime hook and returns router-visible NextHopOnly candidates.
Jacquard-Specific Simplifications
Jacquard keeps the OLSRv2 surface deterministic and auditable:
- one deterministic decay window controls HELLO and TC freshness
- one deterministic MPR election policy is used for all nodes
- link cost is integer-only and derived from shared link observations
- all sets and maps use canonical ordering with no ambient randomness
- no async protocol loop, no host-driver ownership, and no external RFC interoperability layer
- route publication remains router-owned
The result is an OLSRv2-class baseline for comparative routing work rather than a feature-complete NHDP implementation.
HELLO Semantics
Each round, the engine may originate one HELLO message carrying the local originator ID, a monotonically increasing local HELLO sequence number, the current symmetric-neighbor set, and the current local MPR set.
Inbound HELLO processing updates the one-hop neighbor table and the derived two-hop reachability map. A link is treated as symmetric only when the inbound HELLO confirms the local node inside the neighbor’s symmetric-neighbor set. The shared topology observation constrains whether the underlying link is usable. HELLO state alone does not override a failed transport observation.
HELLO state expires when the engine-local hold window passes. Expiry uses Tick, not wall-clock time.
MPR Election
MPR election is deterministic and local. The input surface is the currently symmetric one-hop neighbors, two-hop neighbors reachable through those one-hop neighbors, and the integer link metric derived from the shared observation model.
The algorithm chooses a minimal covering relay set for the known two-hop neighbors. Ties break first on lower metric cost, then on canonical node order. The elected set is exported only as engine-local control state plus the local HELLO advertisement. It is not promoted into shared core vocabulary.
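A greedy covering election with those tie-breaks might look like the following sketch. The function name, the string NodeId, and the map layout are assumptions for illustration; only the selection policy (coverage, then lower metric, then canonical order) comes from the text above.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Deterministic greedy MPR election sketch. One-hop neighbors are keyed by a
// canonical NodeId and carry an integer link metric plus the two-hop
// neighbors they cover. Names are hypothetical, not the crate's API.
type NodeId = &'static str;

fn elect_mprs(one_hop: &BTreeMap<NodeId, (u32, BTreeSet<NodeId>)>) -> BTreeSet<NodeId> {
    let mut uncovered: BTreeSet<NodeId> =
        one_hop.values().flat_map(|(_, cov)| cov.iter().copied()).collect();
    let mut mprs = BTreeSet::new();
    while !uncovered.is_empty() {
        // Pick the neighbor covering the most uncovered two-hop nodes; ties
        // break on lower metric, then on canonical (BTreeMap) node order.
        let best = one_hop
            .iter()
            .filter(|(id, _)| !mprs.contains(*id))
            .map(|(id, (metric, cov))| (cov.intersection(&uncovered).count(), *metric, *id))
            .max_by(|a, b| a.0.cmp(&b.0).then(b.1.cmp(&a.1)).then(b.2.cmp(&a.2)));
        match best {
            Some((gain, _, id)) if gain > 0 => {
                mprs.insert(id);
                for n in one_hop[id].1.iter() {
                    uncovered.remove(n);
                }
            }
            // No candidate adds coverage: remaining two-hop nodes are
            // unreachable through the current symmetric one-hop set.
            _ => break,
        }
    }
    mprs
}

fn main() {
    let mut one_hop = BTreeMap::new();
    one_hop.insert("n1", (1, BTreeSet::from(["x", "y"])));
    one_hop.insert("n2", (2, BTreeSet::from(["y", "z"])));
    one_hop.insert("n3", (1, BTreeSet::from(["z"])));
    // n1 wins the first round on coverage; n3 then beats n2 on metric for z.
    assert_eq!(elect_mprs(&one_hop), BTreeSet::from(["n1", "n3"]));
}
```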
TC Flooding
TC advertisements carry the originator ID, a monotonically increasing local TC sequence number, and the advertised-neighbor set selected for flooding.
The engine originates a fresh TC when the advertised-neighbor surface changes or when the local topology state needs refresh. Inbound TC processing accepts only strictly fresher sequence numbers per originator, replaces older topology tuples for that originator, and expires stale tuples by the same tick-based hold window.
Forwarding is constrained by MPR-selector state. A node forwards only when the sender has selected it as an MPR and the TC sequence has not already been forwarded for that originator.
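The two-part forwarding gate can be sketched as a small stateful predicate. All names here are hypothetical, and seqno wraparound handling is omitted for brevity.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Illustrative MPR-selector forwarding gate: a TC is forwarded only when the
// sender has selected the local node as an MPR and that originator's sequence
// number has not already been forwarded.
struct TcForwarder {
    mpr_selectors: BTreeSet<&'static str>,  // neighbors that elected us as MPR
    forwarded: BTreeMap<&'static str, u16>, // originator -> highest seqno forwarded
}

impl TcForwarder {
    fn should_forward(&mut self, sender: &str, originator: &'static str, seqno: u16) -> bool {
        if !self.mpr_selectors.contains(sender) {
            return false; // sender never selected us as a relay
        }
        match self.forwarded.get(originator) {
            Some(&last) if seqno <= last => false, // already forwarded this seqno
            _ => {
                self.forwarded.insert(originator, seqno);
                true
            }
        }
    }
}

fn main() {
    let mut fwd = TcForwarder {
        mpr_selectors: BTreeSet::from(["n1"]),
        forwarded: BTreeMap::new(),
    };
    assert!(fwd.should_forward("n1", "origin-a", 5));  // first copy: forward
    assert!(!fwd.should_forward("n1", "origin-a", 5)); // duplicate: drop
    assert!(!fwd.should_forward("n2", "origin-a", 6)); // n2 did not select us
}
```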
Shortest-Path Computation
The topology database is a deterministic set of directed topology tuples. Shortest-path derivation runs over local symmetric edges, accepted TC tuples, and integer link cost derived from the shared link observation.
The shortest-path tree is recomputed when HELLO or TC ingestion changes the topology database. Best-next-hop derivation collapses the tree into one NodeId next hop per reachable destination. Only destinations that advertise support for jacquard.olsrv2. in the shared service surface are eligible for route candidates.
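The collapse from shortest-path tree to one next hop per destination can be sketched with a deterministic Dijkstra over integer costs. The graph encoding and names are assumptions for illustration; tie-breaking on canonical node order in the heap mirrors the engine's no-ambient-randomness rule.

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, BTreeMap};

// Deterministic Dijkstra sketch over directed edges with integer link costs,
// collapsing the shortest-path tree into one next hop per destination.
type NodeId = &'static str;

fn best_next_hops(
    local: NodeId,
    edges: &BTreeMap<NodeId, Vec<(NodeId, u32)>>, // node -> [(neighbor, cost)]
) -> BTreeMap<NodeId, NodeId> {
    let mut dist: BTreeMap<NodeId, u32> = BTreeMap::new();
    let mut next_hop: BTreeMap<NodeId, NodeId> = BTreeMap::new();
    let mut heap = BinaryHeap::new();
    dist.insert(local, 0);
    // Heap entries are (cost, node, first hop from local); ties resolve on
    // canonical node order, keeping the run deterministic.
    heap.push(Reverse((0u32, local, local)));
    while let Some(Reverse((cost, node, first_hop))) = heap.pop() {
        if cost > dist[node] {
            continue; // stale heap entry
        }
        for &(neighbor, edge_cost) in edges.get(node).into_iter().flatten() {
            let candidate = cost + edge_cost;
            // The first hop is the neighbor itself when relaxing from local.
            let hop = if node == local { neighbor } else { first_hop };
            if dist.get(neighbor).map_or(true, |&d| candidate < d) {
                dist.insert(neighbor, candidate);
                next_hop.insert(neighbor, hop);
                heap.push(Reverse((candidate, neighbor, hop)));
            }
        }
    }
    next_hop
}

fn main() {
    let mut edges = BTreeMap::new();
    edges.insert("s", vec![("a", 1), ("b", 4)]);
    edges.insert("a", vec![("b", 1), ("c", 5)]);
    edges.insert("b", vec![("c", 1)]);
    let hops = best_next_hops("s", &edges);
    assert_eq!(hops["c"], "a"); // s -> a -> b -> c (cost 3) beats s -> b -> c (5)
}
```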
Capability Envelope
The OLSRv2 engine declares the same conservative next-hop envelope used by the other proactive engines:
| Capability | Value |
|---|---|
max_protection | LinkProtected |
max_connectivity | ConnectedOnly |
repair_support | Unsupported |
hold_support | Unsupported |
decidable_admission | Supported |
quantitative_bounds | ProductiveOnly |
reconfiguration_support | ReplaceOnly |
route_shape_visibility | NextHopOnly |
The engine keeps a full topology graph privately but does not claim explicit-path visibility at the shared contract boundary.
Route Lifecycle And Maintenance
Planning and admission follow the standard Jacquard route lifecycle. candidate_routes emits next-hop candidates from the derived best-next-hop table. check_candidate validates the candidate against current engine-private topology state. admit_route binds the candidate to router-owned identity. materialize_route installs the active next-hop record.
Maintenance returns Continued while the selected next hop remains valid. It returns ReplacementRequired when the shortest-path table selects a new next hop. It returns Failed(LostReachability) when no route remains. There is no suffix repair or engine-owned hold mode. Route replacement is the only reconfiguration path.
Comparison Role
jacquard-olsrv2 is the in-tree full-topology proactive baseline. It answers a different question from the batman and Babel engines. batman-classic and babel are distance-vector gossip baselines. batman-bellman is a topology-enriched BATMAN variant. OLSRv2 is the proactive link-state baseline with explicit topology flooding.
OLSRv2 is the primary comparison point for measuring how much full topology knowledge buys over gossip-style next-hop routing.
Related Pages
Pathway Routing
jacquard-pathway is Jacquard’s first-party routing-engine implementation. It consumes the shared world model from jacquard-core and implements the stable routing boundaries from jacquard-traits. Pathway-only heuristics, runtime caches, and repair state remain inside the pathway crate. Proactive engines such as babel, batman-bellman, batman-classic, and olsrv2 are separate routing-engine crates that do not change pathway’s explicit-path semantics.
Shared Inputs
Pathway planning consumes the shared world picture from jacquard-core. The engine reads Observation<Configuration>, Node, Link, Environment, ServiceDescriptor, and NodeRelayBudget without wrapping or reshaping them.
The pathway engine treats ServiceDescriptor as the shared capability-advertisement plane. Route-capable pathway nodes expose the default Jacquard routing surface, which includes the Discover, Move, and Hold services along with relay headroom, hold capacity, link-quality observations, and coarse environment posture. Pathway does not add a second advertisement protocol or a pathway-global algorithm handshake on top of that surface.
The static PATHWAY_CAPABILITIES envelope is exercised by contract tests. The in-tree pathway crate proves its Repair, Hold, partition-tolerance, decidable-admission, and explicit-route-shape claims against live planner and runtime behavior.
Deterministic Topology Model
DeterministicPathwayTopologyModel is the pathway-owned read-only query surface. It queries shared Configuration objects and then derives pathway-private estimate types such as PathwayPeerEstimate and PathwayNeighborhoodEstimate. Those estimates stay encapsulated in jacquard-pathway so engine-specific scoring does not leak into the shared cross-engine schema.
Peer and neighborhood estimates expose optional score components, so unknown and zero remain distinct without turning those pathway-private components into shared observed facts. The components are clamped to the crate’s HealthScore range so composition stays bounded. Where service validity matters, the topology model receives observed_at_tick explicitly rather than reinterpreting RouteEpoch as time. Pathway uses these estimates directly in candidate ordering and committee selection, so swapping the topology model changes pathway-private route preference and coordination behavior without changing the shared world schema.
Planning and Admission
The pathway engine implements the shared RoutingEnginePlanner contract, which produces candidates in five deterministic steps:
- read the current topology snapshot
- freeze a PathwaySearchDomain over deterministic NodeId graph state for that snapshot
- resolve the routing objective into one v13 SearchQuery: SearchQuery::SingleGoal for one exact destination, or SearchQuery::CandidateSet for selector-style service and gateway objectives over a deterministic acceptable-destination set
- run Telltale’s canonical search machine once for that query under an explicit SearchExecutionPolicy plus a declared fairness bundle
- derive deterministic backend references, route ids, costs, and estimates from the selected-result witness and the final authoritative search state, then sort by path metric, pathway-private topology-model preference, and deterministic route key
This algorithm produces a stable candidate ordering across replays. The search metric is integer-only and combines hop count, delivery confidence, loss-derived congestion, symmetry, pathway-private peer and neighborhood estimates, protocol-repeat penalties, protocol-diversity bonuses, and a deferred-delivery bonus when the destination is honestly hold-capable. The shared RouteCost surface then reflects the chosen path’s hop count, confidence, symmetry, congestion, protocol diversity, and deferred-delivery hold reservation without exposing the pathway-private estimate internals that shaped the search.
Concretely, the direct per-link reliability inputs are: delivery_confidence_permille, symmetry_permille, and loss_permille. Pathway turns these into weighted edge penalties during path search. Higher delivery confidence and better symmetry reduce path cost; higher loss increases it. These signals are then combined with pathway-private peer and neighborhood bonuses and penalties rather than collapsed into one shared “reliability” field.
median_rtt_ms is part of the shared link observation surface, but pathway does not currently use it in path scoring.
Deferred-delivery classification is deliberately stricter than capability advertisement alone. A destination only qualifies for retention-biased routing when its Hold service advertisement is currently valid for pathway, the advertised capacity hint reports positive hold_capacity_bytes, and the node state separately reports positive hold_capacity_available_bytes. A stale advertisement, an empty capacity hint, or unknown live capacity is not enough.
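The three-condition gate above is strictly conjunctive, which a small predicate makes explicit. The struct shapes and function name are illustrative assumptions; only the three conditions come from the text.

```rust
// Sketch of the strict deferred-delivery gate: all three conditions must
// hold simultaneously. Field names are hypothetical.
struct HoldAdvertisement {
    valid_for_pathway: bool, // Hold service advertisement currently valid for pathway
    hold_capacity_bytes: u64, // advertised capacity hint
}

struct NodeState {
    hold_capacity_available_bytes: Option<u64>, // live capacity, None when unknown
}

fn qualifies_for_retention_bias(ad: Option<&HoldAdvertisement>, state: &NodeState) -> bool {
    let Some(ad) = ad else { return false }; // no Hold advertisement at all
    ad.valid_for_pathway                                   // advertisement valid
        && ad.hold_capacity_bytes > 0                      // non-empty capacity hint
        && state.hold_capacity_available_bytes.map_or(false, |b| b > 0) // known live capacity
}

fn main() {
    let ad = HoldAdvertisement { valid_for_pathway: true, hold_capacity_bytes: 4096 };
    let live = NodeState { hold_capacity_available_bytes: Some(1024) };
    let unknown = NodeState { hold_capacity_available_bytes: None };
    assert!(qualifies_for_retention_bias(Some(&ad), &live));
    assert!(!qualifies_for_retention_bias(Some(&ad), &unknown)); // unknown is not enough
    assert!(!qualifies_for_retention_bias(None, &live));
}
```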
Telltale Search Core
The generic search core lives in telltale-search, and Pathway supplies a domain adapter plus route-specific policy on top of it.
telltale-search owns:
- canonical batch extraction and search-machine semantics over weighted graph state
- generalized SearchQuery handling, selected-result semantics, and witness publication
- SearchExecutionPolicy / SearchRunConfig validation with explicit fairness assumptions
- replay artifacts, canonical observation reconstruction, and observation comparison
- epoch reconfiguration through EpochReconfigurationRequest with explicit reseeding policy
- theorem-backed exactness and fairness claims for the exposed runtime profiles
Pathway owns:
- topology interpretation and edge-cost policy
- heuristic policy (Zero or the current hop-lower-bound heuristic)
- objective-to-SearchQuery mapping for Node, Service, and Gateway destinations
- candidate-path derivation from the final search state, route class, connectivity posture, and route summary
- admission policy, witness generation, and committee handling
- opaque backend-token encoding and cache-miss re-derivation
This split is intentional. Pathway uses the generic search machine as a deterministic planning substrate, while the published route semantics remain Pathway-owned.
Inherited Search Features
The Pathway engine inherits several capabilities directly from the v13 Telltale search system:
- canonical batch extraction instead of planner-local frontier bookkeeping
- one objective-scoped SearchQuery execution rather than a planner-local loop that rebuilds selector semantics out of repeated single-goal runs
- selected-result and witness semantics exported directly by the search runtime
- explicit execution-policy control through SearchExecutionPolicy and SearchRunConfig
- replay artifacts that preserve epoch trace, batch schedule, fairness bundle, and final authoritative state
- explicit epoch reconfiguration with a real reseeding policy; Pathway currently uses PreserveOpenAndIncons
Pathway currently uses SearchQuery::SingleGoal for exact node destinations and SearchQuery::CandidateSet for service/gateway objectives that select among multiple acceptable destinations. For exact queries, the runtime can also emit the optional path-problem helper surfaces. Candidate-set queries stay on the generic selected-result surface and intentionally do not rely on a distinguished goal anchor.
Pathway currently exposes only exact run-to-completion profiles to the router. The supported public modes are canonical serial and threaded exact single-lane, both with batch_width = 1, SearchCachingProfile::EphemeralPerStep, and SearchEffortProfile::RunToCompletion. Budgeted or bounded execution contracts remain part of the generic Telltale runtime surface, but Pathway rejects them fail-closed for router-visible planning until it has a Pathway-owned policy for exposing them.
Proof and Assurance Surface
Pathway also inherits proof-oriented guarantees and trace surfaces from the Telltale runtime:
- fail-closed configuration validation before execution, including scheduler profile, batch width, executor compatibility, caching profile, effort profile, and fairness bundle
- explicit determinism and fairness claims tied to the selected scheduler profile rather than hidden host-runtime assumptions
- replay and observation-comparison surfaces that can reconstruct and compare the final observed selected result
- state and artifact traces that expose canonical batches, normalized commits, fairness certificates, epoch transitions, and final authoritative machine state
- theorem-backed exact observable equivalence between canonical serial and threaded exact single-lane execution for the current Pathway domain
These guarantees belong to the Telltale search substrate, not to Pathway’s route policy layer. Pathway relies on them to justify exactness, replayability, and debug visibility at the search boundary while still owning topology freezing, route-objective mapping, candidate derivation, and router publication semantics.
Pathway defaults to canonical serial search with batch_width = 1, epsilon = 1.0, SearchCachingProfile::EphemeralPerStep, SearchEffortProfile::RunToCompletion, and the minimum exact fairness bundle required by the generic runtime. ThreadedExactSingleLane is available as an explicit opt-in planner mode. Batched parallel, budgeted, and bounded profiles are not exposed because the weaker fairness or approximation story is not acceptable for default routing behavior.
Admission Contract
Admission and witness generation operate on shared result objects. The pathway engine returns RouteCandidate, RouteAdmissionCheck, RouteAdmission, and RouteWitness values. This keeps pathway interoperable with the common router and layering surfaces. The routing-invariants check in toolkit/checks/rust/routing_invariants.rs enforces the planning rules below.
- if a planning judgment depends on observations, the current topology must be passed explicitly to the planner method that makes that judgment
- BackendRouteRef stays opaque at the shared boundary, but in pathway it is a self-contained plan token rather than a cache key
- pathway may memoize derived candidates internally, but cache hits and misses must produce the same result for the same topology
- admitted routes carry that opaque backend ref forward so materialize_route can decode the selected pathway plan without searching planner cache state
- materialization still revalidates that decoded plan against the latest observed topology, the shared topology epoch, and the plan validity window before issuing a proof
Pathway route ids are path identities. The stable route id is derived from source, destination, route class, and concrete segment path. Epoch stays in the plan token and proof instead of becoming part of the stable route identity. Pathway-private plan tokens, route-identity bytes, ordering keys, and runtime checkpoints all use the same versioned canonical binary encoding policy so replay, hashing, and checkpoint recovery stay aligned.
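A path-identity route id of this shape can be sketched as a digest over exactly those four inputs, with the epoch deliberately excluded. DefaultHasher stands in for the crate's shared Hashing boundary and versioned canonical encoding; all names here are illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of path-identity route ids: the id covers (source, destination,
// route class, concrete segment path) and excludes the epoch, so the same
// path keeps the same identity across epochs.
fn route_id(source: &str, dest: &str, class: &str, path: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    1u8.hash(&mut h); // encoding-version tag, mimicking a versioned canonical encoding
    source.hash(&mut h);
    dest.hash(&mut h);
    class.hash(&mut h);
    path.hash(&mut h);
    h.finish()
}

fn main() {
    let a = route_id("s", "d", "unicast", &["s", "m", "d"]);
    let b = route_id("s", "d", "unicast", &["s", "m", "d"]);
    let c = route_id("s", "d", "unicast", &["s", "n", "d"]);
    assert_eq!(a, b); // same inputs, same stable id
    assert_ne!(a, c); // a different concrete path is a different route identity
}
```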
Engine Middleware
RoutingEngine::engine_tick is the engine-wide progress hook for pathway. The router or host supplies a shared RoutingTickContext, and pathway returns a RoutingTickOutcome that reports whether the tick changed pathway-private state. Inside jacquard-pathway, this hook is the engine-internal middleware loop.
topology observation
-> refresh pathway-private estimates
-> summarize transport ingress
-> update bounded control state
-> clear stale candidate cache
-> checkpoint current pathway runtime state
Each tick ingests the latest topology observation, refreshes the pathway-private estimate caches, summarizes the latest bounded transport observations, and folds that evidence into a bounded control state. That control state carries transport stability, repair pressure, and anti-entropy pressure with deterministic decay. Pathway uses it to tighten route health, escalate repair posture under sustained pressure, make AntiEntropyRequired consume real anti-entropy debt rather than acting as pure bookkeeping, and drive the cooperative route-export, neighbor-advertisement, and anti-entropy protocol exchanges described below. The hook then evicts stale candidate entries and writes the scoped topology-epoch checkpoint.
Discovery enters the pathway engine through the shared world picture: nodes, links, environment, and service advertisements are already merged into Observation<Configuration> before the engine plans. Pathway then derives its route-export, neighbor-advertisement, and anti-entropy choreography payloads from those shared observations plus active shared route objects rather than maintaining a second hidden advertisement schema.
Internal Choreography Surface
Pathway carries a private Telltale choreography layer inside jacquard-pathway. This does not change the shared Jacquard routing contract. Router-facing planning, admission, materialization, maintenance, and tick flow use the shared RoutingEngine trait plus the pathway-owned PathwayRoutingEngine extension seam.
The internal split is:
- planner-local deterministic Rust:
- topology interpretation
- candidate search and ranking
- committee scoring
- route-health derivation
- choreography-backed cooperative protocols:
- forwarding hop
- activation handshake
- bounded suffix repair
- semantic handoff
- hold / replay exchange
- route export exchange
- neighbor advertisement exchange
- anti-entropy exchange
Pathway protocols live inline in the pathway crate as tell! definitions. That keeps the generated protocol/session code adjacent to the Rust host logic that enters those protocols and avoids a second file-based choreography source of truth.
Pathway also keeps one pathway-owned choreography interpreter surface above the shared runtime traits. That interpreter maps protocol-local requests onto the existing Jacquard boundaries:
- TransportSenderEffects for endpoint-addressed payload sends
- RetentionStore for deferred-delivery payload storage
- RouteEventLogEffects for replay-visible route events
- router-owned checkpoint orchestration for persisted pathway-private state
Host-owned ingress draining stops outside pathway itself. The router or bridge drains TransportDriver, converts raw ingress into shared observations, and feeds those observations into pathway through explicit router ingestion before a synchronous round. Inside pathway, those observations enter a bounded pending-ingress queue. A round consumes that queue deterministically and records a host-facing pathway round-progress snapshot that reports whether the round advanced state, waited quietly, or dropped excess ingress fail-closed.
This is intentionally still pathway-private. The router should only observe shared route objects, shared tick context, shared round outcome, and shared checkpoint orchestration. It should not depend on pathway-private choreography payloads or generated effect interfaces.
The generated or protocol-local Telltale effect interfaces are not the shared Jacquard effect contract. They stay inside jacquard-pathway as implementation-facing protocol surfaces. Concrete host adapters still implement the shared traits from jacquard-traits, and the pathway choreography interpreter translates protocol-local requests onto those stable cross-engine traits instead of replacing them.
At runtime, pathway entry points cross one private guest-runtime layer before touching transport send capability, retention, or route-event logging directly. forward_payload, materialization-side activation, maintenance-side repair and handoff, retained-payload replay, round-side ingress recording, route export, neighbor advertisement, and anti-entropy exchange all enter that pathway-local choreography boundary first. The guest runtime resolves stable inline protocol metadata for the protocol being entered, fails closed if that metadata is unavailable, and then records small protocol checkpoints keyed by protocol kind plus route or tick session so recovery does not depend on hidden in-memory sequencing state. Telltale session futures remain confined to choreography modules; the engine/runtime layer itself stays synchronous and driver-free.
Runtime and Repair
Materialization stores a pathway-private active-route object under the router-owned canonical identity. That object contains the explicit PathwayPath, optional CommitteeSelection, and a deterministic ordering key plus four route-private substates:
- PathwayForwardingState for current owner, owner-relative next-hop cursor, in-flight frames, and last ack
- PathwayRepairState for bounded repair budget and last repair tick
- PathwayHandoffState for the last handoff receipt and handoff tick
- PathwayRouteAntiEntropyState for partition mode, retained objects, and last anti-entropy refresh
Canonical route identity, admission, and lease ownership remain outside this pathway-private runtime object.
Pathway decodes the admitted opaque backend ref during materialization instead of recovering route shape from planner cache state. Token decode alone is not enough. The runtime re-derives the candidate against the latest observed topology and fails closed if the plan epoch, handle epoch, witness epoch, latest topology epoch, or plan validity window do not still agree. Materialization itself fails closed until the engine has observed topology through engine_tick, so pathway does not synthesize a pre-observation route health or an empty-world fallback.
Route Health
Route health is derived from the active route’s remaining suffix rather than from engine-global topology presence. Pathway validates the current owner-relative suffix against the latest observed topology and folds first-hop transport observations into that route-local view when available. It publishes ReachabilityState::Unknown when it lacks route-local validation data rather than pretending the route is generically reachable or unreachable.
The runtime route-health calculation currently combines three signal groups:
| Signal group | Inputs |
|---|---|
| First-hop transport summary | remote-link stability score; remote-link congestion penalty |
| Remaining-suffix topology view | delivery confidence; symmetry; loss-derived congestion penalty |
| Pathway control state | transport stability score; anti-entropy pressure |
As in planning, median_rtt_ms is not currently part of the published route-health calculation.
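As a purely hypothetical sketch of folding those three signal groups into one bounded score: the weighting below is invented for illustration (jacquard-pathway's actual formula is not specified here), and only the bounded, integer-only shape and the signal names come from the table above.

```rust
// Hypothetical bounded route-health fold over the three signal groups.
// All inputs are permille (0..=1000); the weighting is illustrative only.
fn clamp_permille(v: i64) -> u32 {
    v.clamp(0, 1000) as u32
}

fn route_health(
    first_hop_stability: u32,   // first-hop transport summary
    first_hop_congestion: u32,  // first-hop congestion penalty
    suffix_confidence: u32,     // remaining-suffix topology view
    control_stability: u32,     // pathway control state
    anti_entropy_pressure: u32, // anti-entropy pressure penalty
) -> u32 {
    // Average the positive signals, subtract scaled penalties, stay bounded.
    let base = (first_hop_stability + suffix_confidence + control_stability) / 3;
    clamp_permille(base as i64 - (first_hop_congestion / 4) as i64 - (anti_entropy_pressure / 4) as i64)
}

fn main() {
    let healthy = route_health(900, 0, 950, 850, 0);
    let pressured = route_health(900, 800, 950, 850, 600);
    assert_eq!(healthy, 900);
    assert!(pressured < healthy); // sustained pressure tightens route health
    assert!(pressured <= 1000);
}
```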
Lifecycle and Maintenance
Lifecycle sequencing is explicit and fail-closed. Pathway validates first, builds the next active-route state off to the side, persists the checkpoint, records the route event, and only then publishes the in-memory runtime mutation. If checkpoint or route-event logging fails, the new state is not committed.
Protocol checkpoints follow the same fail-closed rule. Pathway writes or updates the protocol checkpoint through the choreography guest runtime before treating that step as complete, and rollback paths remove route-scoped protocol checkpoints when materialization or teardown does not commit. Those checkpoints carry protocol metadata derived from the live inline protocol modules themselves, including the protocol name, declared roles, and source-path identity, so replay and recovery stay aligned with the live generated protocol surface rather than with a handwritten local label only.
Maintenance is expressed through the shared RouteMaintenanceResult surface. Repair means a bounded local suffix-repair algorithm over the latest observed topology. LinkDegraded and EpochAdvanced attempt to recompute the remaining suffix from the current owner to the final destination, consume one repair step on success, and escalate to typed replacement when no bounded patch is available or the repair budget is exhausted.
CapacityExceeded returns ReplacementRequired without flipping partition mode, since it indicates replacement pressure rather than partition evidence. PartitionDetected enters partition mode and reports the current retained-object count through HoldFallback. PolicyShift performs handoff and AntiEntropyRequired flushes retained payloads to recover. Pathway exposes one current commitment per route, so repair, handoff, and deferred-delivery posture stay inside the route runtime state rather than becoming separate concurrent commitments.
Forwarding
A handoff advances the owner-relative cursor to the remaining suffix under the next owner. Forwarding then succeeds only for the current owner of that suffix.
Old owners fail closed with StaleOwner, exhausted owner-relative paths fail with Invalidated, and malformed admitted plan tokens fail the same way during materialization. Each case maps to a typed RouteMaintenanceResult value rather than a side-channel mutation.
Optional Committee Coordination
Pathway can attach a swappable CommitteeSelector, with DeterministicCommitteeSelector as the optional in-tree implementation. The selector returns Option<CommitteeSelection> rather than assuming a committee always exists. The in-tree selector reads the pathway neighborhood estimate for committee gating and the pathway peer estimate for ranking. Route ordering and local coordination stay on the same topology-model interpretation.
Committee eligibility is stricter than forwarding value alone. A member must be route-capable for pathway, must present a usable shared service surface, and may be disqualified by bounded behavior-history penalties before ranking happens. Selection is diversity-constrained: controller diversity is mandatory, and discovery-scope diversity is enforced when alternatives exist. If discovery-scope diversity would suppress the minimum viable committee, pathway falls back to controller-only diversity rather than silently disabling coordination.
None means no committee applies, and a selector error is not silently downgraded to None. Pathway surfaces a selector error as a typed inadmissible candidate using BackendUnavailable. The result is advisory coordination evidence only and does not replace canonical route admission, route witness, or route lease ownership.
Retention and Storage
The pathway engine uses the shared RetentionStore boundary for deferred-delivery payloads. While a route is in partition mode, forward_payload buffers payloads into the retention store instead of sending them immediately. Pathway then flushes those retained payloads on recovery or before handoff when a next hop becomes available. The typed partition fallback surface remains RouteMaintenanceOutcome::HoldFallback, which now carries the retained-object count visible on the route at the time the fallback was entered.
Retained payload identity flows through the shared Hashing boundary. Route and runtime checkpoints flow through the shared storage and route-event-log effects. Storage keys and runtime checkpoints are scoped by the local engine identity so multiple local pathway engines can share one backend without overwriting one another.
V1 pathway supports a scoped checkpoint round-trip for pathway-private active-route state and the latest topology epoch. That recovery surface is intentionally narrow: it restores the pathway-owned runtime object keyed by RouteId, while canonical route identity and lease ownership remain on the router side.
The choreography layer adds a second scoped recovery surface: protocol checkpoints are keyed by protocol kind plus route session or tick session and round-trip through the same storage boundary. Route recovery still uses the active-route checkpoint; protocol recovery uses the protocol checkpoint catalog. Neither requires ambient hidden state outside the engine-owned checkpoint store.
Swappable Trait Surface
The pathway engine exposes its narrow read-only pathway seams as two traits in jacquard-pathway: PathwayTopologyModel and PathwayRoutingEngine. Substituting either one replaces a pathway subcomponent without forking the engine, and the coupling is pathway-specific rather than leaking into jacquard-traits. RetentionStore remains a shared runtime boundary on the neutral effect surface. For host runtime effects beyond these seams, the engine uses the shared TimeEffects, OrderEffects, StorageEffects, RouteEventLogEffects, and Hashing surfaces from jacquard-traits.
Topology Model
```rust
pub trait PathwayTopologyModel {
    type PeerEstimate;
    type NeighborhoodEstimate;

    #[must_use]
    fn local_node(&self, local_node_id: &NodeId, configuration: &Configuration) -> Option<Node>;

    #[must_use]
    fn neighboring_nodes(
        &self,
        local_node_id: &NodeId,
        configuration: &Configuration,
    ) -> Vec<(NodeId, Node)>;

    #[must_use]
    fn reachable_endpoints(
        &self,
        local_node_id: &NodeId,
        configuration: &Configuration,
    ) -> Vec<LinkEndpoint>;

    #[must_use]
    fn adjacent_links(&self, local_node_id: &NodeId, configuration: &Configuration) -> Vec<Link>;

    #[must_use]
    fn peer_estimate(
        &self,
        local_node_id: &NodeId,
        peer_node_id: &NodeId,
        observed_at_tick: Tick,
        configuration: &Configuration,
    ) -> Option<Self::PeerEstimate>;

    #[must_use]
    fn neighborhood_estimate(
        &self,
        local_node_id: &NodeId,
        observed_at_tick: Tick,
        configuration: &Configuration,
    ) -> Option<Self::NeighborhoodEstimate>;
}
```
PathwayTopologyModel is read-only. The associated estimate types are the important boundary. If a pathway implementation wants novelty scores, reach estimates, bridge heuristics, or neighborhood flow signals, those stay pathway-owned behind PathwayTopologyModel. They are not promoted into jacquard-core as shared Node, Link, or Environment schema.
Engine Binding
```rust
pub trait PathwayRoutingEngine: RoutingEngine {
    type TopologyModel: PathwayTopologyModel;
    type Retention: RetentionStore;

    fn topology_model(&self) -> &Self::TopologyModel;
    fn retention_store(&self) -> &Self::Retention;
}
```
PathwayRoutingEngine binds one concrete topology model and one retention store to a pathway engine instance. It stays narrow on purpose: hosts can inspect the read-only pathway subcomponents without gaining a mutation hook into pathway-private runtime state. Transport send capability and transport ingress ownership are now split cleanly: pathway consumes the shared TransportSenderEffects capability, while the host/router owns ingress supervision and delivers explicit observations before each round.
Shared Retention Boundary
```rust
pub trait RetentionStore {
    fn retain_payload(
        &mut self,
        object_id: ContentId<Blake3Digest>,
        payload: Vec<u8>,
    ) -> Result<(), RetentionError>;

    fn take_retained_payload(
        &mut self,
        object_id: &ContentId<Blake3Digest>,
    ) -> Result<Option<Vec<u8>>, RetentionError>;

    fn contains_retained_payload(
        &self,
        object_id: &ContentId<Blake3Digest>,
    ) -> Result<bool, RetentionError>;
}
```
RetentionStore is the storage boundary for opaque deferred-delivery payloads during partitions. It stays intentionally narrow so platform-specific persistence can substitute without forcing the rest of the pathway engine to know about it, and it is not treated as a pathway-specific trait surface.
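To make the boundary concrete, here is a minimal in-memory substitution sketch. It restates the trait with simplified stand-in types (`ObjectId` replaces `ContentId<Blake3Digest>`, and the single `CapacityExceeded` error variant is hypothetical); only the three-method shape mirrors the actual contract.

```rust
use std::collections::HashMap;

// Stand-in for ContentId<Blake3Digest>: a fixed-size digest.
type ObjectId = [u8; 32];

#[derive(Debug, PartialEq)]
pub enum RetentionError {
    // Hypothetical error variant for illustration only.
    CapacityExceeded,
}

// Restated with the stand-in types above.
pub trait RetentionStore {
    fn retain_payload(&mut self, object_id: ObjectId, payload: Vec<u8>)
        -> Result<(), RetentionError>;
    fn take_retained_payload(&mut self, object_id: &ObjectId)
        -> Result<Option<Vec<u8>>, RetentionError>;
    fn contains_retained_payload(&self, object_id: &ObjectId)
        -> Result<bool, RetentionError>;
}

/// Bounded in-memory store: refuses new payloads past a fixed capacity.
pub struct InMemoryRetentionStore {
    capacity: usize,
    retained: HashMap<ObjectId, Vec<u8>>,
}

impl InMemoryRetentionStore {
    pub fn new(capacity: usize) -> Self {
        Self { capacity, retained: HashMap::new() }
    }
}

impl RetentionStore for InMemoryRetentionStore {
    fn retain_payload(&mut self, object_id: ObjectId, payload: Vec<u8>)
        -> Result<(), RetentionError>
    {
        if self.retained.len() >= self.capacity && !self.retained.contains_key(&object_id) {
            return Err(RetentionError::CapacityExceeded);
        }
        self.retained.insert(object_id, payload);
        Ok(())
    }

    fn take_retained_payload(&mut self, object_id: &ObjectId)
        -> Result<Option<Vec<u8>>, RetentionError>
    {
        // Take removes the payload, matching flush-on-recovery semantics.
        Ok(self.retained.remove(object_id))
    }

    fn contains_retained_payload(&self, object_id: &ObjectId)
        -> Result<bool, RetentionError>
    {
        Ok(self.retained.contains_key(object_id))
    }
}
```

A platform-specific backend (disk, flash, or a host database) would implement the same three methods without the rest of the pathway engine changing.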
Field Routing
jacquard-field is Jacquard’s corridor-envelope routing engine. It does not
claim a full explicit path. Instead it maintains a continuously updated local
field model, freezes that model into deterministic search snapshots, runs
Telltale search privately, and publishes only conservative corridor-envelope
claims through the shared routing contract.
Engine Shape
Field owns four private layers:
- observer state
- regime and posture control state
- a bounded private summary-exchange choreography runtime
- a Telltale-backed search substrate over frozen field snapshots
Those layers stay engine-private. The router still owns canonical route identity, publication, and cross-engine selection.
The current Rust implementation now makes the operational layer more explicit:
- crates/field/src/policy.rs centralizes calibrated regime, posture, continuity, promotion, and evidence thresholds as one deterministic FieldPolicy surface
- crates/field/src/operational.rs derives a reduced FieldOperationalView with support, retention, entropy, and freshness bands for decision code
- those operational surfaces remain runtime-private and do not become posterior truth or canonical route truth
Continuously Updated Field Model
Field updates one destination-local model from three evidence classes:
- direct topology observations
- forwarded protocol summaries from neighbors
- reverse delivery feedback
The runtime ingests forwarded summaries and feedback explicitly on the engine
surface through ingest_forward_summary, record_forward_summary, and
record_reverse_feedback, stores them as pending evidence, and feeds them into
refresh_destination_observers on the next tick. Observer refresh is
fail-closed and explicit: protocol evidence enters the observer path only
through those engine-owned evidence buffers.
That refresh updates:
- posterior belief
- progress belief
- corridor belief
- continuation frontier
The resulting frontier is the local admissible continuation surface that the planner and runtime consume.
Regime Detection
Field runs a local control-plane pass on each engine tick before planning. That pass compresses destination-local state plus topology observations into one bounded mean-field summary and one bounded price vector.
The regime detector scores five operating regimes:
- Sparse
- Congested
- RetentionFavorable
- Unstable
- Adversarial
Those scores are derived from the current combination of:
- congestion pressure
- relay pressure
- retention pressure
- churn pressure
- risk pressure
- mean-field alignment and field-strength signals
- control prices accumulated by the bounded PI loop
The active regime is not replaced immediately on every score change. Field uses residual accumulation, a change threshold, a hysteresis threshold, and a post-transition dwell window to prevent one-tick oscillation. A regime change happens only when a different regime stays strong enough for long enough to clear that bounded switching logic.
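The bounded switching logic can be sketched as follows. This is an illustrative stand-in, not the field engine's code: the names (`RegimeDetector`, `observe`) and the specific thresholds are assumptions; only the residual accumulation, hysteresis margin, and post-transition dwell window mirror the behavior described above.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Regime { Sparse, Congested, RetentionFavorable, Unstable, Adversarial }

/// Hypothetical sketch of damped regime switching: a challenger regime must
/// stay strong enough, for long enough, before it replaces the active regime.
pub struct RegimeDetector {
    active: Regime,
    /// Accumulated score surplus of the challenger over the active regime.
    residual: f64,
    /// Ticks remaining in the post-transition dwell window.
    dwell_remaining: u32,
    hysteresis_margin: f64,
    switch_threshold: f64,
    dwell_window: u32,
}

impl RegimeDetector {
    pub fn new(initial: Regime) -> Self {
        Self {
            active: initial,
            residual: 0.0,
            dwell_remaining: 0,
            // Illustrative constants, not calibrated FieldPolicy values.
            hysteresis_margin: 0.1,
            switch_threshold: 1.0,
            dwell_window: 3,
        }
    }

    /// One tick: feed the best challenger and its score margin over the
    /// active regime; returns the (possibly unchanged) active regime.
    pub fn observe(&mut self, challenger: Regime, margin: f64) -> Regime {
        if self.dwell_remaining > 0 {
            // Inside the dwell window no switch is considered at all.
            self.dwell_remaining -= 1;
            return self.active;
        }
        if challenger == self.active || margin <= self.hysteresis_margin {
            // Challenger not strong enough this tick: decay the residual.
            self.residual = (self.residual - 0.1).max(0.0);
            return self.active;
        }
        self.residual += margin - self.hysteresis_margin;
        if self.residual >= self.switch_threshold {
            self.active = challenger;
            self.residual = 0.0;
            self.dwell_remaining = self.dwell_window;
        }
        self.active
    }
}
```

The point of the sketch is the shape, not the constants: one tick of changed evidence cannot flip the regime, and a fresh transition is followed by a quiet period.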
Telltale Search
Field planning is search-backed.
For each routing objective, the planner:
- resolves the objective into a native Telltale SearchQuery
- freezes the current field model into one deterministic snapshot
- runs exact Telltale search over that snapshot
- derives one selected private continuation from the selected-result witness
- emits a shared RouteCandidate with CorridorEnvelope visibility
The public result shape stays corridor-only even when the private selected result witness is a concrete node path. That split is deliberate: search is an internal implementation substrate, not a new source of canonical route truth. Field may consider multiple admissible continuations internally, but that plurality stays private. One routing objective still yields one field-selected private result and one planner-visible corridor claim.
The query split is:
- exact node objectives resolve to SearchQuery::single_goal
- gateway and service objectives resolve to selected-result SearchQuery::try_candidate_set queries over frontier neighbors
- candidate-set queries are truncated by the field per-objective search budget before execution
The search record retained by the engine also captures snapshot transitions and explicit reseeding decisions, so evidence changes within one shared route epoch still show up as field-owned search reconfiguration rather than being silently treated as the same run.
The search/publication boundary is explicit:
- the selected private result stays inside the search record
- continuation choice is reduced to one selected runtime realization
- the published route summary remains one corridor-envelope claim
- backend token and active-route state keep the richer private realization detail needed for runtime maintenance and forwarding
Experimental Surface
Field now separates two different tuning surfaces:
- FieldSearchConfig remains the search-substrate surface:
  - scheduler profile
  - batch-width / effort invariants
  - heuristic mode
  - query budget
  - reseeding policy
- FieldPolicy is the operational surface:
  - regime detection and dwell
  - posture switching
  - continuity and bootstrap floors
  - promotion / hold / narrow / withdraw gates
  - evidence aging, carry-forward, publication, and replay thresholds
The intended maintained experiment knobs are profile-level and few:
- regime sensitivity
- posture conservatism
- continuity softness
- promotion strictness
- evidence freshness / corroboration weight
Those profile-level variables expand into the lower-level policy families internally. The point of the split is to keep the experiment surface legible without turning the runtime into an unbounded configuration matrix.
Execution Policy
Field keeps truth semantics and execution policy separate.
- destination eligibility and selected-result meaning do not change with local posture or regime
- local posture and regime may change only the search execution profile
Posture Control
Posture is the field engine’s local execution stance. It determines how the engine reacts to the currently detected regime when it ranks continuations, publishes corridor claims, and chooses a search execution profile.
Field chooses among four postures:
- Opportunistic
- Structured
- RetentionBiased
- RiskSuppressed
The posture controller scores all four against the current regime, mean-field state, and control prices, then selects the highest-scoring posture subject to its own hysteresis. The primary posture mapping is:
- sparse regime -> Opportunistic
- congested regime -> Structured
- retention-favorable regime -> RetentionBiased
- unstable or adversarial regime -> RiskSuppressed
As with regimes, posture changes are damped. Field keeps a posture switch threshold and a short dwell window after each transition. That prevents one tick of changed evidence from causing immediate flapping. When the regime is very strong, the controller can move more quickly back to that regime’s primary posture, but posture still remains an execution choice rather than a truth owner.
Field defaults to canonical serial exact search and may promote to threaded exact single-lane search on native targets when the engine enters a congested regime or a risk-suppressed posture. Query meaning, admissible destinations, and corridor-envelope publication stay unchanged.
Corridor Realization
The public field route is a corridor claim, not a single next-hop commitment.
Field therefore keeps two private runtime notions separate:
- one selected runtime realization inside the corridor
- one bounded continuation envelope of admissible neighbor realizations
That means runtime can change its concrete send target inside the installed corridor envelope without forcing immediate route replacement. Replacement is required only when the best available continuation leaves the installed continuation envelope, the corridor support is withdrawn, or policy state makes the installed route inadmissible.
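The shift-versus-replace rule can be reduced to a small decision. This sketch is illustrative (the `MaintenanceAction` vocabulary and `maintain` helper are assumptions, and the envelope is flattened to a neighbor list); only the rule itself comes from the source: stay or shift while the best continuation remains inside the installed envelope, and replace only when it leaves.

```rust
type NodeId = u32;

#[derive(Debug, PartialEq)]
pub enum MaintenanceAction {
    KeepRealization,
    ShiftWithinEnvelope(NodeId),
    RequestReplacement,
}

/// Decide whether the runtime can keep forwarding inside the installed
/// corridor or must ask the router for route replacement.
pub fn maintain(
    installed_envelope: &[NodeId],
    current_realization: NodeId,
    best_continuation: NodeId,
) -> MaintenanceAction {
    if best_continuation == current_realization {
        MaintenanceAction::KeepRealization
    } else if installed_envelope.contains(&best_continuation) {
        // Still inside the admitted envelope: reconfigure, do not replace.
        MaintenanceAction::ShiftWithinEnvelope(best_continuation)
    } else {
        // The best available continuation left the installed envelope.
        MaintenanceAction::RequestReplacement
    }
}
```

Corridor-support withdrawal and policy inadmissibility (the other replacement triggers named above) would sit alongside this check in a fuller maintenance pass.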
Route Lineage
The field route lineage is:
- local field evidence updates observer state and the continuation frontier
- field search selects one private result inside the frozen snapshot
- the selected private result yields one selected runtime realization
- planner publication emits one corridor-envelope candidate
- router admission/materialization turns that candidate into an installed route
- runtime forwarding and maintenance continue to operate inside the installed continuation envelope
That lineage is intentionally asymmetric:
- field owns the private evidence, search, and runtime-realization layers
- router owns candidate comparison, canonical publication, and installed-route truth
- field quality/comparison objects remain reference-only unless one theorem or router rule explicitly promotes them into router-owned truth
Runtime Surfaces
Field exposes bounded private diagnostics for inspection and replay-oriented tooling:
- the last planner search record, including query, effective search config, execution report, replay artifact, and snapshot reconfiguration data
- a versioned FieldReplaySnapshot surface that packages search, protocol, runtime, and commitment views without requiring access to hidden engine internals
- a reduced reduced_runtime_search_replay() extraction from FieldReplaySnapshot that exposes the proof-facing search/runtime bundle without re-reading private engine state
- a reduced reduced_protocol_replay() extraction from FieldReplaySnapshot that exposes the proof-facing protocol artifact and protocol-reconfiguration bundle without re-reading private engine state
- a versioned FieldExportedReplayBundle surface derived from the reduced replay helpers, with stable JSON packaging for debugging and regression fixtures
- a reduced FieldLeanReplayFixture derived from that exported replay bundle so proof-facing fixture vocabulary tracks Rust replay structure directly
- bounded protocol artifacts from the private choreography runtime
- bounded runtime round artifacts carrying blocked-receive state, host disposition, emitted-summary count, remaining step budget, execution-policy class, destination class, search-snapshot linkage metadata, bootstrap class, and one reduced observational route projection
- one route-commitment view per materialized route, with pending, lease-expiry, topology-supersession, evidence-withdrawal, and backend-unavailable outcomes
- route-scoped recovery state carrying checkpoint, continuation-shift, and bootstrap activation/upgrade/withdrawal counters
Private protocol flows such as summary dissemination, anti-entropy, retention replay, and explicit coordination remain bounded operational surfaces. They affect field semantics only when they yield engine-owned evidence that is later ingested through the forward-summary or reverse-feedback paths. Otherwise they remain observational runtime behavior rather than semantic route truth.
Route-scoped explicit-coordination sessions are also the current field reconfiguration surface. When a live route shifts its concrete realization inside the already-admitted continuation envelope, field reconfigures the route-scoped protocol session instead of forcing full route replacement. Owner-transfer, checkpoint/restore, and continuation-shift steps are retained as replay-visible protocol reconfiguration markers.
Field route publication also has an explicit bootstrap phase. A bootstrap route is a weaker corridor claim that is allowed when the evidence is coherent but not yet strong enough for steady admission. Promotion out of bootstrap is not just a second support threshold. The runtime evaluates five observable gates:
- support growth relative to the installed bootstrap corridor
- uncertainty reduction
- anti-entropy confirmation from recent coherent summary publication
- continuation coherence inside the installed corridor envelope
- freshness of the leading continuation
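The all-gates-must-pass character of promotion can be sketched directly. The `PromotionGates` struct and field names are illustrative stand-ins for the five observable gates listed above; the point is that promotion is a conjunction, not a single strengthened support threshold.

```rust
/// One boolean per observable promotion gate (names are illustrative).
#[derive(Clone, Copy)]
pub struct PromotionGates {
    pub support_growth: bool,
    pub uncertainty_reduced: bool,
    pub anti_entropy_confirmed: bool,
    pub continuation_coherent: bool,
    pub leading_continuation_fresh: bool,
}

impl PromotionGates {
    /// A bootstrap route is promoted to steady only when every gate passes.
    pub fn promotes(&self) -> bool {
        self.support_growth
            && self.uncertainty_reduced
            && self.anti_entropy_confirmed
            && self.continuation_coherent
            && self.leading_continuation_fresh
    }
}
```

In the engine, a failed check would also record which gate dominated the blocked promotion, matching the replay blocker vocabulary described below.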
Between Steady and Bootstrap, runtime now also keeps one explicit
degraded-steady continuity band. A degraded-steady route is still a conservative
steady corridor claim at the publication boundary, but the runtime has started
preserving narrowed corridor structure, asymmetric continuation shifts, and
anti-entropy carry-forward more aggressively because the corridor is no longer
comfortably steady.
Runtime and replay surfaces then distinguish five bootstrap transitions:
- activation
- hold
- narrowing when the corridor is still conservative but must contract before it can strengthen
- upgrade to steady state
- withdrawal when the corridor collapses
Replay also distinguishes continuity-band movement itself:
- entering degraded-steady before bootstrap collapse
- recovering from degraded-steady back to steady
- downgrading from degraded-steady into bootstrap when continuity can no longer be preserved
When promotion does not occur, replay also records the dominant blocker:
- weak support trend
- unresolved uncertainty
- missing anti-entropy confirmation
- broken continuation coherence
- stale leading evidence
Service destinations also use a bounded service-retention carry-forward path. When a coherent service corridor has just been published, the observer/runtime path can synthesize a short-lived forwarded-evidence reinforcement window so service fanout families do not lose continuity after one missing forwarded round. That carry-forward is bounded and replay-visible; it preserves coherent service summaries, but it does not invent a route when no corridor evidence remains.
The participant-set boundary is explicit:
- owner and generation movement are supported
- route-scoped checkpoint/restore is supported
- continuation-shift reconfiguration inside one admitted corridor is supported
- participant-set change is not supported
Those runtime round artifacts are intentionally observational. They expose only reduced route shape, reduced search linkage, and support hints. They do not expose the selected witness, the full continuation envelope, or hidden protocol session state. They do not promote the field runtime into a second canonical route owner.
The replay surfaces also carry an explicit surface-class split:
- search replay is observational
- protocol replay packaging is observational, while reduced_protocol_replay() is the maintained proof-facing protocol replay reduction
- runtime replay is reduced
- commitment replay is observational
- exported replay is reduced, versioned, and tooling-oriented rather than authoritative
Proof Boundary
The field proof stack is intentionally narrower than the richer Rust runtime, and that reduction is deliberate.
Lean covers:
- the reduced local observer-controller model
- the reduced private protocol boundary, including fixed-participant closure, fragment-trace alignment, receive-refinement witnesses, and explicit observational-only reconfiguration semantics
- the reduced field search boundary, including query-family mapping, snapshot identity, selected-result shape, execution-policy vocabulary, and reconfiguration metadata
- the reduced runtime and runtime-search adequacy boundary, including trace/evidence extraction, runtime-state refinement, runtime-artifact search linkage, search projection, reduced protocol replay projection, and reduced canonical-route refinement
- replay-derived fixture vocabulary mirrored in
verification/Field/Adequacy/ReplayFixtures.lean
Lean does not own router truth, private choreography internals, or full replay packaging semantics. Those richer Rust surfaces remain observational or out-of-scope unless an explicit reduction theorem promotes part of them.
The most important assurance is ownership discipline:
- the deterministic local controller owns field semantics
- private protocol exports are observational-only
- runtime artifact reduction is observational-only
- canonical route truth remains router-owned
Router-owned truth can still be richer than support-only ranking. The current verification tree also carries a stronger support-then-hop-then-stable router selector and the matching system-level selector lift. Field does not publish extra planner-visible candidates to satisfy that richer objective. It still publishes one corridor candidate per objective and leaves richer canonical choice to the router/system layer.
The current broader resilience story is likewise router/system-owned rather than field-private-search-owned. The maintained proof stack includes bounded dropout and bounded non-participation stability packs under the reduced reliable-immediate regime. Those results say how router-owned canonical support stabilizes once the selected winner survives the stated fault budget; they do not turn field-private replay or protocol reconfiguration into new owners of canonical route truth.
See:
- Routing Engines
- Crate Architecture
- verification/Field/Docs/Model.md
- verification/Field/Docs/Protocol.md
- verification/Field/Docs/Adequacy.md
Scatter Routing
jacquard-scatter is Jacquard’s bounded deferred-delivery diffusion engine.
It does not maintain a topology graph.
It does not publish a best next hop.
It does not compute an explicit end-to-end path or corridor envelope.
scatter publishes a narrow router claim.
The claim states that an objective is supportable somewhere in the current world model.
The claim is opaque, partition-tolerant, and hold-capable.
After materialization, the engine moves data through engine-private transport packets under the standard RoutingEngine and RouterManagedEngine boundary.
See Crate Architecture for the shared ownership and boundary rules that constrain this engine.
Core Model
The first in-tree scatter implementation keeps a small deterministic model.
The engine retains messages, summarizes peer observations, and tracks per-route progress.
It does not assume stable endpoint identity beyond the router objective vocabulary already present in Jacquard.
- payloads carry a stable local message id
- expiry is local and typed through created_tick plus bounded DurationMs
- replication is bounded by hard copy budgets
- forwarding is local and opportunistic
- handoff is preferential rather than ack-driven custody transfer
- published route shape visibility is Opaque
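The local, typed expiry rule can be sketched minimally. This is an illustrative stand-in (`Tick`, `DurationMs`, and `RetainedMessage` are simplified here, and the one-tick-per-millisecond mapping is an assumption): expiry is judged purely against the local clock, consistent with the engine's refusal to make remote-clock freshness claims.

```rust
#[derive(Clone, Copy)]
pub struct Tick(pub u64);

#[derive(Clone, Copy)]
pub struct DurationMs(pub u64);

pub struct RetainedMessage {
    pub created_tick: Tick,
    pub lifetime: DurationMs,
}

impl RetainedMessage {
    /// Expiry is purely local: created_tick plus a bounded lifetime,
    /// compared against the local tick (assumed here to advance in ms).
    pub fn is_expired(&self, now: Tick) -> bool {
        now.0.saturating_sub(self.created_tick.0) >= self.lifetime.0
    }
}
```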
Policy Surface
ScatterEngineConfig defines the deterministic policy surface for the engine.
It keeps the behavior-critical constants named and typed.
It avoids anonymous literals in the runtime.
- ScatterExpiryPolicy
- ScatterBudgetPolicy
- ScatterRegimeThresholds
- ScatterDecisionThresholds
- ScatterTransportPolicy
- ScatterOperationalBounds
These policies cover message lifetime, replication budgets, regime detection, carrier thresholds, contact feasibility, and bounded runtime work.
Route Lifecycle
Planner behavior is conservative.
candidate_routes emits at most one candidate for a supportable objective.
The router remains the owner of canonical route truth.
- the planner confirms that the destination or service objective is supportable in the current observation
- the router admits an opaque and partition-tolerant scatter claim
- the runtime materializes a route-local progress surface
- payloads are retained, carried, replicated, or handed off according to local regime and peer score
- maintenance can report hold-fallback viability even when no direct next hop exists
This split keeps route publication router-owned.
It lets scatter own its private deferred-delivery mechanics.
Transport Boundary
scatter follows the standard Jacquard ownership split.
The engine consumes explicit TransportObservation.
The engine sends only through TransportSenderEffects.
The engine does not own async transport streams or assign Tick.
Host bridges own ingress draining and time attachment. Transport choice stays a local contact-feasibility judgment. The first implementation keeps that judgment reduced and deterministic. It does not build separate routing models per transport.
Contrast With Other Engines
- batman-bellman, batman-classic, babel, and olsrv2 retain routing control state but do not buffer payloads for deferred delivery
- pathway supports deferred delivery through explicit path and retention boundaries plus full-route search
- field carries forward bounded routing and service evidence rather than general payload custody
- scatter is the in-tree opaque deferred-delivery baseline, so payload custody stays local, bounded, and diffusion-oriented
Current Non-Goals
The current engine does not attempt to provide a full DTN control plane. It keeps the surface intentionally narrow.
- topology reconstruction
- stable semantic identity routing beyond Jacquard objectives
- ack-driven authoritative custody transfer
- multipath planning
- distributed time agreement
- remote-clock freshness claims
Simulator
jacquard-simulator is the deterministic scenario harness for Jacquard. It reuses the same core ownership model as the host bridge. The four core types are JacquardScenario, ScriptedEnvironmentModel, JacquardSimulator, and JacquardReplayArtifact.
Hosts own transport drivers. The bridge stamps ingress with Jacquard Tick. The router advances through explicit synchronous rounds. Engines keep private runtime state below the shared routing boundary.
The simulator selects engines per host through EngineLane. Available lanes include single-engine variants (Pathway, BatmanBellman, BatmanClassic, Babel, OlsrV2, Scatter, Field) and mixed-engine variants (PathwayAndBatmanBellman, PathwayAndBabel, PathwayAndOlsrV2, PathwayAndField, BabelAndBatmanBellman, OlsrV2AndBatmanBellman, FieldAndBatmanBellman, AllEngines). All engines share one host bridge per node.
The simulator also owns the maintained tuning and diffusion harnesses. The tuning_matrix binary runs scenario sweeps, writes deterministic artifacts under artifacts/analysis/, and automatically generates the analysis report. The tuning methodology and current recommendations live in Routing Tuning.
Reused Surfaces
The simulator reuses existing Jacquard composition surfaces. It does not maintain a simulator-only stack.
jacquard-reference-client provides host bridge ownership and round advancement. jacquard-adapter provides queueing and adapter support primitives. jacquard-mem-link-profile provides in-memory transport composition. jacquard-mem-node-profile and reference-client::topology provide fixture topology authoring.
Environment Model
ScriptedEnvironmentModel schedules environment changes as EnvironmentHook values keyed to specific ticks. Applied hooks appear in each JacquardRoundArtifact for replay and inspection.
- ReplaceTopology swaps the full network configuration at a given tick.
- MediumDegradation adjusts delivery confidence and loss on a link between two nodes.
- AsymmetricDegradation adjusts forward and reverse confidence and loss independently on a directed link.
- Partition removes reachability between two nodes.
- CascadePartition removes multiple directed links simultaneously.
- MobilityRelink replaces one link with another to model node movement.
- IntrinsicLimit adjusts connection count or hold capacity constraints on a node.
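The tick-keyed scheduling model can be sketched as follows. This is a reduced illustrative stand-in, not the jacquard-simulator API: only two hypothetical hook variants are modeled, and node/link typing is flattened to integers. The real ScriptedEnvironmentModel additionally records applied hooks into each round artifact for replay.

```rust
use std::collections::BTreeMap;

type Tick = u64;

// Reduced stand-in for the EnvironmentHook vocabulary.
#[derive(Debug, Clone, PartialEq)]
enum EnvironmentHook {
    Partition { a: u32, b: u32 },
    MobilityRelink { removed: (u32, u32), added: (u32, u32) },
}

/// Deterministic tick-keyed schedule: BTreeMap keeps hooks ordered by tick.
#[derive(Default)]
struct ScriptedEnvironment {
    hooks: BTreeMap<Tick, Vec<EnvironmentHook>>,
}

impl ScriptedEnvironment {
    fn schedule(&mut self, tick: Tick, hook: EnvironmentHook) {
        self.hooks.entry(tick).or_default().push(hook);
    }

    /// Drain the hooks due at exactly this tick; the caller applies them to
    /// the world model before the round advances.
    fn take_due(&mut self, tick: Tick) -> Vec<EnvironmentHook> {
        self.hooks.remove(&tick).unwrap_or_default()
    }
}
```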
Replay Artifacts
JacquardSimulator::run_scenario() returns a JacquardReplayArtifact and a JacquardSimulationStats. The artifact captures the complete observable state of the run.
- environment traces and applied hooks per round
- ingress-batch boundaries and host-round outcomes
- RouteEvent and RouteEventStamped outputs
- DriverStatusEvent records for dropped ingress
- deterministic checkpoints with host snapshots
- failure summaries for diagnostic inspection
For the pathway lane, checkpoints carry InMemoryRuntimeEffects snapshots per host. These snapshots are needed to rebuild the bridge and recover checkpointed route state. Simulations can be resumed from the last checkpoint using JacquardSimulator::resume_replay(). Non-choreography engines do not expose Telltale-native internals to the simulation harness.
Starter Path
- Build a JacquardScenario and ScriptedEnvironmentModel with jacquard_simulator::presets.
- Pass them to JacquardSimulator::run_scenario().
- Inspect the returned JacquardReplayArtifact for round, event, and checkpoint data.
- For matrix sweeps, run cargo run --bin tuning_matrix -- local and review the generated report at artifacts/analysis/local/latest/router-tuning-report.pdf.
Routing Tuning
jacquard-simulator includes a maintained tuning harness for all seven in-tree
engines. The harness runs deterministic scenario matrices, sweeps maintained
public parameters, writes stable artifacts under artifacts/analysis/, and
generates CSV tables plus a PDF report with vector plots through the repo-local
Python, Polars, Altair, and ReportLab toolchain. It also includes a
dedicated head-to-head corpus that runs the same regimes under explicit
engine sets: batman-bellman, batman-classic, babel, olsrv2, scatter,
pathway, field, and pathway-batman-bellman.
The harness also emits a companion diffusion-oriented corpus in the same artifact directory. That second track models mobility-driven contacts, message persistence, bounded replication, resource cost, and observer leakage for partition-tolerant delivery scenarios.
Design Setting
The maintained corpus is designed for disrupted and mobility-driven mesh environments. In that setting, end-to-end paths are often absent. Connectivity appears through short contact windows, weak bridges, and repeated partial recovery rather than through one stable connected graph. Nodes are also resource-constrained, so routing quality depends on bounded state, bounded work, and disciplined use of transmissions and custody.
The route-visible matrix gives useful evidence for this setting because it stresses the conditions that determine whether a router-facing engine remains usable at all. The maintained families vary bridge pressure, asymmetry, loss, relink events, partitions, recovery, contention, and local node pressure. Those are the same forces that determine whether a proactive engine keeps a route, whether a search-driven engine finds one, and where each approach breaks down.
The diffusion track adds the second half of the picture. It models cases where movement is the transport mechanism and messages must persist across disconnection. Its mobility-driven contacts, bounded replication, energy and transmission accounting, storage utilization, and observer-leakage measures give insight into whether a deferred-delivery policy remains viable in the population-level setting described above, not only in easy connected regimes.
Commands
Run the smaller smoke sweep:
cargo run --bin tuning_matrix -- smoke
Run the full local sweep and generate the report:
cargo run --bin tuning_matrix -- local
Regenerate the report for an existing artifact directory:
nix develop --command python3 -m analysis.report artifacts/analysis/local/latest
The local report is written to
artifacts/analysis/{suite}/latest/router-tuning-report.pdf.
On main, GitHub Pages also publishes the latest CI-built routing report PDF
under the docs site root.
Matrix Structure
The maintained matrix varies replay-visible regime dimensions rather than only one aggregate stress score:
- topology and density: sparse line, medium ring, medium mesh, dense mesh, bridge cluster, high fanout
- delivery pressure: low, moderate, and high loss
- medium pressure: interference and contention
- directional mismatch: none, mild, moderate, and severe asymmetry
- topology movement: relink, partition, recovery, and cascade partition
- local node pressure: connection-count and hold-capacity limits
- workload class: connected-only, repairable-connected, service, and concurrent mixed workloads
The harness writes:
- runs.jsonl: one run-level summary per scenario seed and parameter setting
- the generated aggregate summary file: grouped means and maintained field metrics
- the generated breakdown summary file: first sustained breakdown boundary per config
- head_to_head_summary.csv: explicit engine-set comparisons over shared regimes
- diffusion_runs.jsonl: one run-level summary per diffusion scenario seed and policy setting
- the diffusion aggregate summary file: grouped means for delivery, coverage, transmissions, energy, boundedness, and leakage metrics
- the diffusion boundary summary file: per-policy viability, collapse, and overload boundaries across maintained diffusion families
- CSV tables for recommendations, transitions, boundaries, and profile variants
- vector plot assets plus a generated PDF report
Measured Outputs
The report scores configurations with route-visible metrics and also publishes transition and boundary tables:
- activation success
- route presence
- first materialization, first loss, and recovery timing
- route churn and engine handoffs
- stress boundary and first breakdown family
- Field-specific replay measures such as selected-result rounds, search reconfiguration rounds, protocol reconfiguration counts, continuation shifts, and checkpoint restore counts
The default recommendations are intended to be robust centers of acceptable behavior for this maintained corpus, not one-off winners from a single easy scenario.
The diffusion track adds a second set of metrics that are intentionally not route-centric:
- delivery probability
- delivery latency
- coverage
- total transmissions
- energy per delivered message
- storage utilization
- estimated reproduction number
- corridor persistence
- observer leakage
- boundedness state (collapse, viable, explosive)
Current Guidance
For the latest artifact set, run:
`cargo run --bin tuning_matrix -- local`
The report is generated automatically at `artifacts/analysis/local/latest/router-tuning-report.pdf`.
On main, the latest CI-built copy is also published with the docs site.
BATMAN Bellman
The BATMAN Bellman matrix is most informative in recoverable transition families. Route-presence plateaus alone are too flat, so the report also looks at stability accumulation, first-loss timing, and failure boundaries.
The responsive range clusters around the short-window settings.
batman-bellman-1-1 leads the balanced default ranking, with
batman-bellman-2-1 and batman-bellman-3-1 close behind. Asymmetric bridge
breakdown regimes remain hard failures across the tested window range, so the
recommendation should be read as guidance for recoverable pressure rather than
impossible bridges.
BATMAN Classic
BATMAN Classic converges more slowly than BATMAN Bellman due to its echo-only bidirectionality and lack of bootstrap shortcut. The tested decay window settings cluster tightly. The recommendation reflects the spec-faithful model’s need for larger windows to allow receive-window accumulation.
Babel
Babel separates most clearly in the asymmetry-cost-penalty family, where the bidirectional ETX formula produces measurably different route selection. The partition-feasibility-recovery family shows the FD table’s bounded infeasible-fallback window. Decay window settings do not yet separate sharply, suggesting the FD table and seqno refresh interval dominate convergence timing.
Pathway
The Pathway matrix shows a clear minimum-budget boundary:
- budget `1` remains the hard cliff in the maintained service-pressure families
- budgets at and above `2` form the viable floor
- `pathway-4-zero` and `pathway-4-hop-lower-bound` lead the balanced default ranking
The practical interpretation: 2 is the minimum viable budget floor, 3 to
4 is the sensible default range, and larger budgets need a regime-specific
justification.
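The practical interpretation above can be captured as a small classifier. This is a sketch of the report's reading only; the enum and function names are hypothetical, not Jacquard API:

```rust
/// Illustrative verdicts mirroring the Pathway budget guidance.
#[derive(Debug, PartialEq, Eq)]
enum BudgetVerdict {
    HardCliff,          // budget 1: fails maintained service-pressure families
    ViableFloor,        // budget 2: minimum acceptable
    DefaultRange,       // budgets 3-4: sensible defaults
    NeedsJustification, // larger budgets: regime-specific case required
}

fn classify_budget(budget: u32) -> BudgetVerdict {
    match budget {
        0 | 1 => BudgetVerdict::HardCliff,
        2 => BudgetVerdict::ViableFloor,
        3 | 4 => BudgetVerdict::DefaultRange,
        _ => BudgetVerdict::NeedsJustification,
    }
}
```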
Field
The simulator includes dedicated Field families, Field replay extraction, Field-specific CSV columns, and Field plot/report sections. The matrix observes:
- corridor route-support evolution in the replay surface
- degraded-steady continuity-band entry, recovery, and downgrade timing
- bootstrap activation, hold, narrowing, upgrade, and withdrawal behavior
- dominant promotion decisions and dominant promotion blockers
- service-retention carry-forward and asymmetric continuation-shift success
- search and protocol replay metadata
- continuation-shift and reconfiguration counters
- field-favorable comparison regimes
The Field sweep produces non-zero activation success and route presence at the router-visible route boundary. The tested Field settings cluster closely, so the recommendation should be read as a viable range rather than one sharply preferred point. The continuity profile table is the better place to choose between lower-churn and broader-reselection behavior.
Mixed Comparison
The comparison regimes are useful for regime suitability:
- low-loss connected-only cases favor the distance-vector stacks
- concurrent mixed workloads favor Pathway
- high-loss bridge and some field-favorable comparison cases remain hard failure regimes
The comparison section should be read as “which engine family fits this regime best” rather than “which engine is globally best”.
Head-To-Head Engine Sets
The report includes a direct engine-set comparison over the same regime families. This is separate from the mixed-engine comparison corpus:
- mixed-engine comparison asks which engine wins when several engines are available to the same router
- head-to-head comparison asks what happens when the host set is restricted to one explicit stack: `batman-bellman`, `batman-classic`, `babel`, `olsrv2`, `pathway`, `field`, or `pathway-batman-bellman`
Review Guidance
- prefer the PDF report and CSV tables over a single composite score
- use the transition table to distinguish robust settings from lucky averages
- use the boundary table to see where an engine stops being acceptable
- rerun the same matrix after meaningful engine, router, or simulator changes before updating defaults
Crate Architecture
This page describes the crate layout, the boundary rules, and the implementation policies that keep the workspace consistent.
Boundary Rule
core defines what exists. traits defines what components are allowed to do.
core owns shared identifiers, data types, constants, error types, and the full model pipeline from world objects through observations, engine-neutral estimates, policy, and action. Derives, trivial constructors, and simple validation are allowed. Cross-crate behavioral interfaces belong in traits.
traits owns the cross-crate behavioral interfaces, grouped below by purpose. The layering subset is forward-looking. The shared shape is part of the stable design, but in-tree coverage is still contract-oriented rather than a mature production layering stack.
Shared transport vocabulary follows the same rule. core keeps a small,
observed-world transport schema in TransportKind, EndpointLocator, and
LinkEndpoint because those types appear in shared Link,
ServiceDescriptor, and TransportObservation facts. Jacquard intentionally
does not force those types fully opaque today; EndpointLocator keeps only the
neutral locator families the shared model actually needs, while transport-
specific endpoint builders belong in transport-owned profile crates rather than
in core or the transport-neutral mem profile crates.
| Category | Traits |
|---|---|
| Routing contract | RoutingEnginePlanner, RoutingEngine, Router, RoutingControlPlane, RoutingDataPlane, PolicyEngine |
| Local coordination | CommitteeSelector, CommitteeCoordinatedEngine |
| Layering | SubstratePlanner, SubstrateRuntime, LayeredRoutingEnginePlanner, LayeredRoutingEngine, LayeringPolicyEngine |
| Runtime effects | TimeEffects, OrderEffects, StorageEffects, RouteEventLogEffects, TransportSenderEffects |
| Host-owned drivers | TransportDriver |
| Hashing and content | Hashing, ContentAddressable, TemplateAddressable |
| Simulator | RoutingScenario, RoutingEnvironmentModel, RoutingSimulator, RoutingReplayView |
Dependency Graph
The workspace today contains repo-local policy tooling in jacquard-toolkit-xtask plus the routing crates jacquard-core, jacquard-traits, jacquard-adapter, jacquard-macros, jacquard-pathway, jacquard-field, jacquard-batman-bellman, jacquard-batman-classic, jacquard-babel, jacquard-olsrv2, jacquard-scatter, jacquard-router, jacquard-mem-node-profile, jacquard-mem-link-profile, jacquard-reference-client, jacquard-testkit, and jacquard-simulator.
jacquard-core
↑ ↑
jacquard-traits jacquard-adapter
↑ ↑
jacquard-mem-node-profile
│
jacquard-mem-link-profile
│
jacquard-pathway ─────────┐
jacquard-field ─────────┤
jacquard-batman-bellman ──┤
jacquard-batman-classic ──┼──→ jacquard-router ←── jacquard-reference-client
jacquard-babel ───────────┤ │ ↑
jacquard-olsrv2 ──────────┤ │
jacquard-scatter ─────────┘ └──→ jacquard-simulator
jacquard-testkit provides shared test support (used by simulator and reference-client tests)
jacquard-reference-client composes mem-* + router + in-tree engines
jacquard-simulator reuses reference-client composition rather than a simulator-only stack
jacquard-toolkit-xtask
Every crate depends on jacquard-core. Every crate except jacquard-core depends on jacquard-traits only when it needs behavioral boundaries. jacquard-adapter depends only on jacquard-core plus proc-macro/serialization support because it owns reusable mailbox, ownership, endpoint-convenience, and host-side observational projector helpers, not runtime traits or router semantics. jacquard-router depends on registered engines only through shared traits, not through pathway or BATMAN internals. jacquard-mem-node-profile depends on jacquard-core and jacquard-adapter plus serialization support. jacquard-mem-link-profile depends on jacquard-core, jacquard-traits, and jacquard-adapter because it implements shared transport, retention, and effect traits while reusing the canonical raw-ingress mailbox. jacquard-core and jacquard-traits remain runtime-free.
Crate Layout
Inside core, files are grouped into three areas. base/ holds cross-cutting primitives: identity, time, qualifiers, constants, and errors. model/ holds the world-to-action pipeline: world objects, observations, estimation, policy, and action. routing/ holds route lifecycle and runtime coordination objects.
core defines result shapes, not policies. It exposes coordination objects like CommitteeSelection, layering objects like SubstrateLease, and route lifecycle objects like RouteHandle, but it does not encode engine-local scoring, committee algorithms, leader requirements, layering decisions, or a parallel authority system above those route objects. Authority flows through the route contracts themselves: admitted routes, witnesses, proofs, leases, and explicit lifecycle transitions.
Purity And Side Effects
Jacquard treats purity and side effects as part of the trait contract.
- Pure traits must be deterministic with respect to their inputs. They should not perform I/O, read ambient time, allocate order stamps, or mutate hidden state that changes outputs.
- Read-only traits may inspect owned state or snapshots, but they must not mutate canonical routing truth or perform runtime effects.
- Effectful traits may perform I/O or mutate owned runtime state, but only through an explicit boundary with a narrow purpose.
Signature design follows the same split. Use &self for pure and read-only methods. Use &mut self only when the method has explicit state mutation or side effects. Do not mix pure planning and effectful runtime mutation in one trait unless the split is impossible and documented.
That is why Jacquard separates RoutingEnginePlanner from RoutingEngine, SubstratePlanner from SubstrateRuntime, and LayeredRoutingEnginePlanner from LayeredRoutingEngine. Engine-specific read-only seams such as pathway topology access stay in the owning engine crate rather than leaking into jacquard-traits. The shared round lifecycle follows the same rule: router-owned cadence and explicit ingress live at the contract layer, while engine-specific control loops and control-state contents stay inside the owning engine crate.
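The planner/runtime split can be sketched with simplified stand-in traits. These are not the actual jacquard-traits definitions; the names and types here are illustrative only, but the receiver discipline is the one described above: `&self` for pure planning, `&mut self` only at the effectful boundary.

```rust
struct Observation { link_cost: u32 }
struct Plan { next_hop: u32 }

/// Pure: deterministic in its inputs, takes &self, no runtime effects.
trait Planner {
    fn plan(&self, obs: &Observation) -> Plan;
}

/// Effectful: owns runtime state, mutates it only through an explicit
/// &mut self boundary with a narrow purpose.
trait Runtime {
    fn install(&mut self, plan: Plan);
    fn installed(&self) -> Option<u32>;
}

struct CheapestHop;
impl Planner for CheapestHop {
    fn plan(&self, obs: &Observation) -> Plan {
        // A deterministic function of the observation only: no I/O,
        // no ambient time, no hidden state.
        Plan { next_hop: obs.link_cost % 7 }
    }
}

struct Table { current: Option<u32> }
impl Runtime for Table {
    fn install(&mut self, plan: Plan) { self.current = Some(plan.next_hop); }
    fn installed(&self) -> Option<u32> { self.current }
}
```

Keeping the two traits separate means a host can replay a planner against recorded observations without touching any runtime state.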
Enforcement
Trait purity and routing invariants are enforced by the lint suite. The stable-toolchain check lane is split between the external toolkit runner and Jacquard’s local toolkit/xtask, while nightly compiler-backed coverage lives in the external toolkit lint suite plus toolkit/lints/model_policy and toolkit/lints/routing_invariants. Public trait definitions in jacquard-traits also carry #[purity(...)] or #[effect_trait] annotations that the proc macros validate at compile time.
Runtime Boundary
The routing core does not call platform APIs directly. Hashing, storage, route-event logging, transport send capability, host-owned transport drivers, time, and ordering all cross explicit shared boundaries in traits. jacquard-adapter sits alongside that boundary, not inside it: reusable adapter-side ingress mailboxes, unresolved/resolved peer bookkeeping, claim guards, transport-neutral endpoint conveniences, and host-side topology projectors live there so core stays data-only and traits stays contract-only. The router consumes explicit ingress and advances through synchronous rounds rather than polling adapters ambiently. That is how native execution, tests, and simulation share one semantic model.
The effect traits are narrower than the higher-level component traits. They model runtime capabilities, not whole subsystems. RoutingEngine, Router, and RetentionStore are larger behavioral contracts and should not be forced through the effect layer.
First-party pathway keeps one additional internal layer above those shared effects: pathway-private choreography effect interfaces generated from Telltale protocols. Those generated interfaces are not promoted into jacquard-traits. Concrete host/runtime adapters implement the shared effect traits, and jacquard-pathway interprets its private choreography requests in terms of those stable shared boundaries.
Within jacquard-pathway itself, the async envelope is narrower still. Telltale session futures are driven to completion only inside choreography modules. The engine/runtime layer owns a bounded explicit ingress queue, consumes it during one synchronous round, and exposes a pathway round-progress snapshot for host-facing inspection. It does not own transport drivers, ambient async callbacks, or executor-shaped advancement.
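A bounded explicit ingress queue consumed in one synchronous round can be sketched as follows (an illustrative type, not the jacquard-pathway implementation):

```rust
use std::collections::VecDeque;

/// Sketch of a bounded ingress queue: enqueue rejects overflow rather
/// than growing unboundedly, and one round drains everything queued
/// so far as that round's explicit input.
struct Ingress<T> {
    queue: VecDeque<T>,
    capacity: usize,
}

impl<T> Ingress<T> {
    fn new(capacity: usize) -> Self {
        Ingress { queue: VecDeque::new(), capacity }
    }

    /// Returns false (drops the message) when the bound is reached.
    fn push(&mut self, msg: T) -> bool {
        if self.queue.len() < self.capacity {
            self.queue.push_back(msg);
            true
        } else {
            false
        }
    }

    /// Consume the queue as one synchronous round's input.
    fn drain_round(&mut self) -> Vec<T> {
        self.queue.drain(..).collect()
    }
}
```

The bound is what keeps round cost predictable: a round never processes more than `capacity` messages, and there is no ambient callback path around the queue.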
Invariants
- No crate may use floating-point types in routing logic, routing state, routing policy, or simulator verdicts.
- No crate may treat wall-clock time as distributed semantic truth.
- `Tick` is time and `RouteEpoch` is configuration versioning. Crates must not convert between them by rewrapping the inner integer.
- Canonical ordering must flow through shared ordering types. Crates must not invent crate-local tie-break schemes.
- Canonical hashing and content IDs must flow through the shared hash and content-addressing boundaries.
- Transport may observe links and carry bytes, but it must not invent route truth, publish canonical route health, or mutate materialized-route ownership.
- GPS, absolute location, clique grids, and singleton leaders are not shared routing truth. Spatial hints stay engine-private above the shared observation boundary.
- Multiple routing engines may coexist in one host runtime. Generic mixed-engine canonical route ownership is not a base-layer assumption.
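The `Tick`/`RouteEpoch` invariant amounts to two newtypes with no conversion path between them. A minimal sketch (simplified stand-ins, not the jacquard-core definitions):

```rust
/// Semantic time: advances as rounds pass.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Tick(u64);

/// Configuration versioning: bumps when a route's configuration changes.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct RouteEpoch(u64);

impl Tick {
    fn advance(self) -> Tick { Tick(self.0 + 1) }
}

impl RouteEpoch {
    fn bump(self) -> RouteEpoch { RouteEpoch(self.0 + 1) }
}

// Deliberately no From<Tick> for RouteEpoch (or vice versa):
// rewrapping the inner integer would violate the invariant above,
// and the type system now rejects any accidental mixing.
```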
Ownership
Each crate owns a narrow slice of runtime state.
| Crate | Owns |
|---|---|
| jacquard-core | Shared vocabulary. No live state. |
| jacquard-traits | Compile-time boundaries. No runtime state. |
| jacquard-adapter | Generic adapter-side ingress mailboxes, peer identity bookkeeping, claim ownership helpers, transport-neutral endpoint conveniences, and host-side observational read models. No route truth, no transport-specific protocol logic, no router actions, no time/order stamping. |
| jacquard-pathway | Pathway-private forwarding state, topology caches, repair state, retention state, engine-local committee scoring, and the private choreography guest runtime plus its protocol checkpoints. |
| jacquard-field | Field-private posterior state, mean-field compression, regime/posture control state, Telltale-backed frozen-snapshot search, bounded runtime-round diagnostics, continuation scoring, and any field-private choreography runtime used only for observational summary exchange. |
| jacquard-batman-bellman | BATMAN Bellman-private originator observations, gossip-merged topology, Bellman-Ford path computation, TQ enrichment, next-hop ranking tables, and active next-hop forwarding records. |
| jacquard-batman-classic | BATMAN Classic-private OGM-carried TQ state, receive windows, echo-based bidirectionality tables, learned advertisement state, next-hop ranking tables, and active next-hop forwarding records. |
| jacquard-babel | Babel-private route table, feasibility-distance state, additive-metric scoring, seqno management, and active next-hop forwarding records. |
| jacquard-olsrv2 | OLSRv2-private HELLO state, symmetric-neighbor and two-hop reachability tables, deterministic MPR state, TC topology tuples, shortest-path derivation, and active next-hop forwarding records. |
| jacquard-router | Canonical route identity, materialization inputs, leases, handle issuance, top-level route-health publication, and multi-engine orchestration state. |
| jacquard-mem-node-profile | In-memory node capability and node-state modeling only. No routing semantics. |
| jacquard-mem-link-profile | In-memory link capability, carrier, retention, and runtime-effect adapter state only. No canonical routing truth. |
| jacquard-reference-client | Narrow host-side bridge composition of profile implementations, bridge-owned drivers, router, and one or more in-tree engine instances for tests and examples. Observational with respect to canonical route truth, but owner of ingress queueing and round advancement in the reference harness. |
| jacquard-simulator | Replay artifacts, scenario traces, post-run analysis. No canonical route truth during a live run. |
A host-owned policy engine above the router may own cross-engine migration policy and substrate selection.
Extensibility
core::Configuration is the shared graph-shaped world object. Engine-specific structure such as topology exports, peer novelty, bridge estimates, planning caches, and forwarding tables belongs in the engine crate behind its trait boundary rather than in core.
The extension surface is split across Core Types, Routing Engines, and Pathway Routing.
For first-party pathway specifically, Telltale stays an internal implementation substrate. Shared crates remain runtime-free. The future router may drive pathway through shared planning, tick, maintenance, and checkpoint orchestration, but it must not depend on pathway-private choreography payloads, protocol session keys, or guest-runtime internals.
For first-party field, the proof and ownership boundary is even stricter: field-private choreography may supply only observational evidence into the deterministic local controller. It must not publish canonical route truth or leak field-private session semantics into shared router surfaces.