Threshold Dilithium: FROST-Inspired Post-Quantum Signatures
FROST (Flexible Round-Optimized Schnorr Threshold signatures) demonstrates how to build threshold signatures from Schnorr's linear structure. Dilithium, the lattice-based signature scheme standardized as ML-DSA, shares this linear structure. This post describes how to adapt the FROST approach to the Dilithium setting.
What FROST Is Really Doing
FROST splits a group secret key into shares using Shamir secret sharing. Each signer holds one share. During signing, each signer contributes a partial signature based on their share and some randomness. An aggregator combines these partials into a single signature indistinguishable from one produced by a single signer.
The key insight is that Schnorr signatures are linear in both the secret key and the nonce. The sum of partial secrets behaves like one secret. The sum of partial nonces behaves like one nonce. FROST exploits this linearity while adding careful bookkeeping to prevent nonce reuse and rogue-key attacks.
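To make the linearity concrete, here is a toy sketch over plain integers (not a real group, and not the FROST API): partial responses computed from shares sum to exactly the response a single signer would produce.

```lean
-- Toy model over ℤ: zᵢ = yᵢ + c·sᵢ, so Σzᵢ = (Σyᵢ) + c·(Σsᵢ).
def partialResp (c y s : Int) : Int := y + c * s

#eval
  let shares : List (Int × Int) := [(3, 7), (5, 2), (4, 11)]  -- (yᵢ, sᵢ) pairs
  let c : Int := 9
  let aggregated := (shares.map (fun p => partialResp c p.1 p.2)).foldl (· + ·) 0
  let y := (shares.map Prod.fst).foldl (· + ·) 0
  let s := (shares.map Prod.snd).foldl (· + ·) 0
  (aggregated, partialResp c y s)  -- both components are 192
```

The values here are arbitrary; the point is that the two components are always equal, which is the identity FROST exploits.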
Dilithium in Brief
Dilithium is a linear algebra construction over polynomial rings with carefully managed noise. Keys are vectors over a ring. Signing samples a short random vector y, computes a commitment via matrix multiplication, hashes the commitment with the message to produce a challenge, and combines y with the secret and challenge into a response.
The response must satisfy norm bounds. If the response is too large, the signer rejects and resamples. This rejection sampling prevents information leakage about the secret key. The structure resembles Schnorr lifted into a lattice setting with additional constraints on vector lengths.
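The reject-and-resample loop can be sketched as follows. This is a simplification with stand-in names (`sampleY`, `challengeOf`): real Dilithium works over polynomial vectors with an ℓ∞ bound, not scalar absolute values.

```lean
-- Simplified rejection loop: resample the ephemeral value until the
-- response z = y + c·s satisfies the norm bound.
partial def signLoop (s bound : Int) (sampleY : Nat → Int)
    (challengeOf : Int → Int) (attempt : Nat := 0) : Int × Nat :=
  let y := sampleY attempt
  let c := challengeOf y
  let z := y + c * s
  if z.natAbs < bound.natAbs then (z, attempt)
  else signLoop s bound sampleY challengeOf (attempt + 1)
```

The loop returns the accepted response together with the number of attempts; in real Dilithium the expected number of attempts is a small constant.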
Adapting FROST to Dilithium
The adaptation follows from asking which parts of the signing equation can be split across participants and recombined. The answer mirrors FROST.
We secret-share the key by splitting the secret key vector into additive shares. Each participant holds one share. The shares sum to the full secret key. We distribute randomness by having each signer sample their own piece of the ephemeral vector. The pieces sum to produce the global randomness used in a standard signature.
Reconstruction happens only at the signature level. No signer sees the full secret key or full randomness. The final signature is indistinguishable from one produced by a single signer holding the complete key.
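A minimal sketch of additive sharing over ℤ (illustrative only; `rand` stands in for CSPRNG output, and real shares live in the scheme's secret module):

```lean
-- Split a secret into additive shares: n−1 random values plus a
-- correction term, so the shares sum back to the secret.
def additiveShares (s : Int) (rand : List Int) : List Int :=
  rand ++ [s - rand.foldl (· + ·) 0]

#eval (additiveShares 42 [17, -5, 9]).foldl (· + ·) 0  -- 42
```

Any strict subset of the shares is uniformly random, which is why no signer learns the full key.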
Protocol Structure
The protocol runs in two rounds following the FROST pattern.
In round one, each signer commits to their randomness and broadcasts the commitment. After collecting all commitments, the group computes a shared challenge by hashing the message and aggregate commitment.
In round two, each signer uses their secret share and randomness share to compute a partial response. The aggregator collects partial responses and sums them into the final response. The resulting signature pairs this response with the aggregate commitment and passes standard Dilithium verification.
What Changes in the Lattice Setting
Several aspects require more care in Dilithium than in Schnorr.
Rejection sampling becomes distributed. In single-signer Dilithium, the signer rejects and restarts if the response norm is too large. In a threshold setting, rejection must happen locally without requiring distributed coordination. Each signer applies a tighter local bound so that any valid combination of partials satisfies the global bound.
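The tighter local bound can be sketched as integer division of the global bound by the maximum number of aggregated partials. The numbers below match the Dilithium2-style configuration shown in the Getting Started section; treat them as illustrative.

```lean
-- Each signer rejects above ⌊B_global / t⌋ so that any t accepted
-- partials sum to a response within the global bound.
def localBound (globalBound t : Nat) : Nat := globalBound / t

#eval localBound 130994 4  -- 32748
```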
The noise structure is higher-dimensional. Randomness is not a simple scalar but a vector in a polynomial ring. Sharing and combining this randomness while preserving security requires careful design. The commitment and response objects are larger than their Schnorr counterparts.
Communication costs increase. Partial signatures and commitments are vectors or polynomials rather than group elements. The protocol must ensure each participant can complete their work locally without excessive bandwidth.
Why This Works
The concept is straightforward. FROST works because Schnorr is linear in the secret and nonce. Dilithium signatures are also linear constructions over a module with managed noise. Thresholdizing Dilithium respects this linearity. The secret key equals the sum of shares. The signing randomness equals the sum of shares. The final response equals the sum of partial responses. Everything happens in the same algebraic space as the single-signer scheme.
The verifier sees an ordinary Dilithium signature. The distributed key and randomness are invisible. Verification uses the standard algorithm with no awareness of the threshold structure.
Practical Considerations
Threshold Dilithium signatures may not be practical for many use cases. Dilithium signatures are substantially larger than ECDSA or Schnorr signatures. A Dilithium2 signature is on the order of 2.5 KB compared to ~64 bytes for ECDSA. Threshold variants do not increase this size, but applications sensitive to signature size should evaluate whether the overhead is acceptable.
The scheme is therefore best suited for contexts where post-quantum security is required and signature size is not the primary constraint.
Ice Nine Protocol Specification Overview
Ice Nine is a threshold signature scheme in the lattice setting. It follows the Schnorr pattern adapted for post-quantum security. The protocol operates in two rounds. Parties hold additive shares of a master secret. They produce partial signatures that aggregate into a single valid signature.
The design supports both n-of-n and t-of-n threshold configurations. In the n-of-n case all parties must participate. In the t-of-n case any subset of t or more parties can sign. Lagrange interpolation coefficients adjust the partial shares in the threshold case.
Getting Started
This section provides a practical overview of the protocol flow from key generation through signing.
Protocol Flow
A typical deployment follows these steps.
- Configuration: Create a ThresholdConfig with party count, threshold, and security parameters.
- Key Generation: Run DKG to establish shared key material. Each party receives a secret share.
- Signing Sessions: For each message, run the two-round signing protocol to produce a threshold signature.
- Verification: Any party can verify signatures using the public key.
Configuration
Start by creating a threshold configuration.
-- Create configuration for 5 parties with Dilithium2 security
let cfg := ThresholdConfig.dilithium2 5
-- The default threshold is ⌊2n/3⌋ + 1 = 4 for n = 5 parties
-- cfg.threshold = 4
-- cfg.globalBound = 130994
-- cfg.localBound = 32748
See Threshold Configuration for detailed parameter guidance.
Key Generation
Run distributed key generation to establish shared keys.
-- Each party generates their DKG contribution
let (commitMsg, secretState) := dkgCommit S partyId randomness
-- Broadcast commit messages, then reveal
let revealMsg := dkgReveal S secretState
-- After receiving all reveals, compute key shares
let keyShare := dkgFinalize S partyId allCommits allReveals
The DKG protocol uses commit-reveal to prevent rogue-key attacks. Each party receives a KeyShare containing their secret share and the global public key. See Key Generation for DKG modes and VSS.
Signing
Signing proceeds in two rounds with session types enforcing correct usage.
-- Round 1: Commit to nonces
let nonce := FreshNonce.sample S -- Sample fresh nonce
let ready := initSession S keyShare message session nonce
let (committed, commitMsg) := commit S ready
-- Broadcast commitMsg
-- After receiving all commits, reveal
let (revealed, revealMsg) := reveal S committed challenge bindingFactor aggregateW
-- Broadcast revealMsg
-- Round 2: Produce signature share
let (signed, shareMsg) ← signWithLocalRejection S revealed cfg sampleNonce
-- Broadcast shareMsg
-- Aggregator combines shares
let signature := aggregateValidated S cfg protocolCfg challenge commits shares pkShares factors
The FreshNonce type ensures nonces are used exactly once. The signWithLocalRejection function handles norm bounds locally without distributed retries. See Two-Round Threshold Signing for the complete protocol.
Verification
Verify signatures against the public key.
let valid := verify S signature publicKey message
Verification checks the algebraic relation A(z) = w + c·pk and the norm bound. See Verification for details.
Error Handling
Protocol operations return structured errors for recovery.
match result with
| .ok signature => -- Success
| .error e =>
-- Check if error identifies a misbehaving party
match BlameableError.blamedParty e with
| some party => excludeParty party
| none => handleOperationalError e
See Extensions for error recovery patterns across protocol phases.
Extension Operations
After initial setup, use extension protocols for operational needs.
Share Refresh updates shares without changing the key. Run periodically for proactive security.
Share Repair recovers a lost share using contributions from other parties.
Rerandomization masks shares and nonces for unlinkability between sessions.
See Extensions for these protocols.
CRDT-Based State Management
The implementation uses a semilattice/CRDT architecture for protocol state. Each protocol phase (commit, reveal, shares, done) carries state that forms a join-semilattice. The join operation (⊔) merges states from different replicas or out-of-order message arrivals.
Protocol progress is modeled as transitions between phases: commit → reveal → shares → done. Each phase carries accumulated data. States within a phase merge via componentwise join. The type-indexed implementation (Protocol/State/PhaseIndexed.lean) makes invalid phase transitions compile-time errors.
Messages are stored in MsgMap structures keyed by sender ID. This makes conflicting messages from the same sender inexpressible in the type system. Each party can contribute at most one message per phase.
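A simplified sketch of the idea (the real structure lives in Core/CRDT.lean; names here are hypothetical):

```lean
-- Sender-keyed message map: join keeps at most one entry per sender,
-- so repeated merges are idempotent and no sender occupies two slots.
structure MsgMapToy (α : Type) where
  entries : List (Nat × α) := []   -- (senderId, message)

def MsgMapToy.join (a b : MsgMapToy α) : MsgMapToy α :=
  { entries := b.entries.foldl
      (fun acc e => if acc.any (fun e' => e'.1 == e.1) then acc else acc ++ [e])
      a.entries }
```

Because the real design rules out two different messages from the same sender, this join behaves commutatively on all reachable states.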
The coordination protocols (RefreshCoord, RepairCoord) cleanly separate concerns into three layers:
- CRDT layer (process*): Pure merge semantics that are idempotent, commutative, and always succeed. Use for replication and networking.
- Validation layer (detect*, validate*): Conflict detection without modifying state. Use for auditing and security.
- Combined (process*Validated): Merge and validation in one call, returning (newState, Option error).
Step functions that advance state are monotone with respect to the semilattice order. This ensures that merging divergent traces preserves safety properties. If s ⊑ s' then step(s) ⊑ step(s').
Protocol state combines with auxiliary CRDT data for refresh masks, repair bundles, and rerandomization masks. The product of semilattices is itself a semilattice.
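The closure property is easy to state in Lean. This sketch assumes a simplified shape for the project's Join typeclass:

```lean
-- The product of two join-semilattices is a join-semilattice:
-- join componentwise.
class JoinToy (α : Type) where
  join : α → α → α

instance [JoinToy α] [JoinToy β] : JoinToy (α × β) where
  join a b := (JoinToy.join a.1 b.1, JoinToy.join a.2 b.2)
```

Idempotence, commutativity, and associativity of the product join follow directly from the same properties of each component.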
Why This Works: Linearity as a Common Thread
The core insight of the Ice Nine protocol is that Dilithium, like Schnorr, is fundamentally linear. This linearity enables threshold signing and makes the protocol work.
In single-signer Dilithium, the signature response is computed as z = y + c·s, where y is a random ephemeral value, c is the challenge, and s is the secret key. This is a direct linear combination in both variables. A threshold variant distributes the secret as s = Σᵢ sᵢ (additive shares) and the randomness as y = Σᵢ yᵢ (signer contributions). Each signer produces zᵢ = yᵢ + c·sᵢ.
When these partials are summed, we get Σᵢ zᵢ = Σᵢ yᵢ + c·Σᵢ sᵢ = y + c·s. The aggregated response is identical to what a single signer would produce with the full secret and full randomness.
This is the core mechanism of FROST applied to the lattice setting. The threshold scheme is not a different scheme; it is the base scheme distributed and recombined through linearity. The final signature is indistinguishable from a single-signer signature under the group key.
Linearity also enables the norm bound guarantee. By the triangle inequality, if each partial satisfies ‖zᵢ‖ ≤ B_local, then the sum automatically satisfies ‖z‖ ≤ t·B_local, where t is the number of aggregated partials. By choosing B_local ≤ B_global / t, the global bound is guaranteed for any combination of valid partials. This is a mathematical certainty, not probabilistic.
The linearity property is what makes the entire protocol work: threshold signing that produces standard signatures, BFT-compatible design with per-signer validation, and deterministic norm bound guarantees via the triangle inequality.
Session Types
The implementation uses session types to enforce disciplined handling of secret material, making certain classes of errors impossible to express.
The signing protocol (Protocol/Sign/Session.lean) uses linear session types where each state transition consumes the previous state. Nonces are wrapped in FreshNonce structures that can only be consumed once. Nonce reuse is a compile-time error, not a runtime check.
Signature extraction requires a ThresholdCtx that pairs the active signer set with a proof that the set contains at least t signers. This prevents signatures from being produced without sufficient participation.
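The context can be sketched as a structure carrying the proof as a field (names hypothetical; the real definition lives in Sign/Threshold.lean):

```lean
-- Extraction cannot be called without evidence that the signer set
-- meets the threshold: the proof travels with the data.
structure ThresholdCtxToy (t : Nat) where
  signers : List Nat
  enough  : t ≤ signers.length
```

Constructing the context forces the caller to discharge `enough`, so insufficient participation is a type error rather than a runtime failure.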
Specification Components
Algebraic Setting. Secrets live in a module S over a scalar ring R. Public keys live in a module P. A linear map A : S → P connects them. Commitments bind public values. A hash function produces challenges.
Key Generation. Two modes exist. A trusted dealer can sample the master secret and distribute shares. Alternatively parties can run a distributed key generation protocol. Both modes produce additive shares sᵢ with s = Σᵢ sᵢ. The DKG protocol uses the commit-reveal pattern with CRDT-mergeable state. Following FROST, each party provides a proof of knowledge (PoK) during DKG to prevent rogue-key attacks. For malicious security, Verifiable Secret Sharing (VSS) allows parties to verify shares against polynomial commitments.
Signing. Signing proceeds in two rounds with phase-indexed state. In round one each party commits to a nonce. After all commits arrive parties reveal their nonces and compute a shared challenge. In round two each party produces a partial signature. An aggregator combines the partials into the final signature. State at each phase is mergeable.
Verification. A verifier checks the aggregated signature against the public key. The check uses the linear map and the hash function. The signature is valid if the recomputed challenge matches and the norm bound is satisfied.
Extensions. The protocol supports several extensions. Complaints identify misbehaving parties. Share refresh updates shares using zero-sum mask functions that merge via join; two refresh modes are available: a coordinator-based protocol (Protocol/Shares/RefreshCoord.lean) and a fully distributed DKG-based protocol (Protocol/Shares/RefreshDKG.lean) where parties contribute zero-polynomials without a trusted coordinator. Share repair combines helper deltas via append-based bundles; a coordination protocol (Protocol/Shares/RepairCoord.lean) handles Lagrange coefficient computation and contribution verification. Rerandomization applies zero-sum masks to shares and nonces with merge-preserving structure.
Security. The protocol assumes binding commitments and models the hash as a random oracle. Under these assumptions it achieves threshold unforgeability. Formal proofs reside in the Lean verification modules. Totality theorems ensure that validation functions always return either success or a structured error.
Module Organization
The implementation is organized into focused modules within subdirectories:
Protocol/Core/ - Foundation types and utilities:
- Core/Core.lean - Scheme record, key shares, DKG messages, linear security wrappers
- Core/CRDT.lean - CRDT primitives: Join typeclass, MsgMap sender-keyed collection
- Core/Security.lean - Security markers (Zeroizable, ConstantTimeEq), SecretBox, NonceBox
- Core/Patterns.lean - Reusable patterns: constraint aliases, commit-reveal framework, utilities
- Core/Error.lean - BlameableError typeclass, error utilities
- Core/Lagrange.lean - Unified Lagrange coefficient API
- Core/Serialize.lean - Serialization API for network transport
- Core/ThresholdConfig.lean - Threshold configuration and local rejection bounds
- Core/NormBounded.lean - Norm bound predicates and verification
- Core/Abort.lean - Session-level abort coordination for liveness failures
Protocol/Sign/ - Signing protocol:
- Sign/Types.lean - Session tracking, signing messages, error types
- Sign/Core.lean - Challenge computation, n-of-n aggregation
- Sign/Threshold.lean - Coefficient strategies, threshold context
- Sign/Session.lean - Session-typed API preventing nonce reuse
- Sign/ThresholdMerge.lean - Threshold-aware state merge operations
- Sign/LocalRejection.lean - Local rejection sampling for norm bounds
- Sign/ValidatedAggregation.lean - Validated signature aggregation
- Sign/NonceLifecycle.lean - Nonce lifecycle tracking and safety
Protocol/DKG/ - Distributed key generation:
- DKG/Core.lean - Basic DKG protocol
- DKG/Threshold.lean - Threshold DKG with exclusion
- DKG/Feldman.lean - Polynomial commitments, share verification
- DKG/VSSDKG.lean - VSS transcript, complaint mechanism
- DKG/Dealer.lean - Trusted dealer mode
Protocol/Shares/ - Share management:
- Shares/Refresh.lean - Share refresh with zero-sum masks
- Shares/RefreshCoord.lean - Coordinator-based refresh protocol
- Shares/RefreshDKG.lean - DKG-based distributed refresh (no coordinator)
- Shares/RefreshState.lean - Refresh state CRDT
- Shares/Repair.lean - Share repair protocol
- Shares/RepairCoord.lean - Repair coordination with commit-reveal
- Shares/Rerandomize.lean - Signature rerandomization
Protocol/State/ - Phase state and CRDT operations:
- State/Phase.lean - Phase state with MsgMap CRDT
- State/PhaseIndexed.lean - Type-indexed phase transitions
- State/PhaseHandlers.lean - Phase transition handlers
- State/PhaseSig.lean - Phase signatures
- State/PhaseMerge.lean - Composite state merging
Top-level modules:
- Crypto.lean - Cryptographic primitives (hash, commitment)
- Instances.lean - Concrete scheme instantiations (Int, ZMod)
- Norms.lean - Norm bounds for lattice parameters
- Samples.lean - Sample transcript generation for testing
Proofs/ - Formal verification (separate from protocol):
Proofs/Core/ - Foundation for proofs:
- Proofs/Core/Assumptions.lean - Cryptographic assumptions, axiom index, Dilithium parameters
- Proofs/Core/ListLemmas.lean - Reusable lemmas for list operations and sums
- Proofs/Core/HighBits.lean - HighBits specification for Dilithium error absorption
- Proofs/Core/NormProperties.lean - Norm bound properties and lemmas
- Proofs/Core/MsgMapLemmas.lean - MsgMap CRDT lemmas
Proofs/Probability/ - Probabilistic analysis:
- Proofs/Probability/Dist.lean - Distribution properties and analysis
- Proofs/Probability/Indistinguishability.lean - Computational indistinguishability
- Proofs/Probability/Rejection.lean - Rejection sampling analysis
- Proofs/Probability/Commitment.lean - Commitment scheme probability properties
Proofs/Correctness/ - Happy-path proofs:
- Proofs/Correctness/Correctness.lean - Verification theorems
- Proofs/Correctness/DKG.lean - DKG correctness proofs
- Proofs/Correctness/Sign.lean - Signing correctness proofs
- Proofs/Correctness/Lagrange.lean - Lagrange interpolation correctness
- Proofs/Correctness/ThresholdConfig.lean - Threshold configuration correctness
Proofs/Soundness/ - Security proofs:
- Proofs/Soundness/Soundness.lean - Special soundness, nonce reuse attack, SIS reduction
- Proofs/Soundness/VSS.lean - VSS security properties
- Proofs/Soundness/Robustness.lean - Protocol robustness properties
- Proofs/Soundness/LocalRejection.lean - Local rejection sampling soundness
Proofs/Extensions/ - Extension protocol proofs:
- Proofs/Extensions/Phase.lean - Phase handler monotonicity (CRDT safety)
- Proofs/Extensions/Coordination.lean - Refresh/repair coordination security
- Proofs/Extensions/RefreshCoord.lean - Refresh coordination proofs
- Proofs/Extensions/Repair.lean - Share repair correctness
- Proofs/Extensions/RefreshRepair.lean - Refresh invariants, rerandomization
Lattice Cryptography
The security of Ice Nine reduces to standard lattice hardness assumptions:
Short Integer Solution (SIS). Given a matrix A, finding a short nonzero x with Ax = 0 is hard. Signature unforgeability reduces to SIS: a forger that produces valid signatures can be used to solve SIS instances.
Module Learning With Errors (MLWE). Distinguishing (A, As + e) from (A, u) with u uniform, where s and e are small, is hard. Key secrecy reduces to MLWE: recovering the secret key from the public key requires solving MLWE.
Rejection Sampling. The signing protocol uses rejection sampling to ensure signatures are independent of the secret key. If the response norm exceeds the bound γ₁ − β, the party aborts and retries with a fresh nonce. Expected attempts: ~4 for Dilithium parameters.
Parameter Validation. The implementation includes lightweight parameter validation to catch obviously insecure configurations. The standard parameter sets follow NIST FIPS 204 (ML-DSA/Dilithium).
Linear Types for Security-Critical Values
The implementation uses linear-style type discipline to prevent misuse of single-use values. While Lean doesn't have true linear types, private constructors and module-scoped access functions make violations visible and intentional.
Linear wrappers:
| Type | Purpose | Consequence of Misuse |
|---|---|---|
| FreshNonce | Signing nonces | Nonce reuse → key recovery |
| LinearMask | Refresh masks | Double-apply → zero-sum violation |
| LinearDelta | Repair deltas | Double-use → corrupted share |
| LinearOpening | Commitment randomness | Reuse → equivocation possible |
Pattern: Each wrapper has:
- Private constructor (private mk) - prevents arbitrary creation
- Private value field - prevents direct access outside the module
- Public creation function - controlled entry point
- consume function - extracts the value and produces a consumption proof
structure LinearMask (S : Scheme) where
private mk ::
private partyId : S.PartyId
private maskValue : S.Secret
private publicImage : S.Public
def LinearMask.consume (m : LinearMask S) : S.Secret × MaskConsumed S
The consumption proof (MaskConsumed, DeltaConsumed, OpeningConsumed) can be required by downstream functions to ensure the value was properly consumed.
Nonce Safety
Critical property: Nonce reuse completely breaks Schnorr-style signatures. If the same nonce y is used with two different challenges c₁ ≠ c₂:
- z₁ = y + c₁·s and z₂ = y + c₂·s
- Then s = (z₁ − z₂)·(c₁ − c₂)⁻¹, recovering the secret key
The session-typed signing protocol makes nonce reuse a compile-time error. Each FreshNonce can only be consumed once, and the type system enforces that signing sessions progress linearly through states without backtracking.
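The recovery arithmetic can be checked with toy integer values (hypothetical numbers, chosen so the division is exact):

```lean
-- Two responses under the same nonce leak the secret.
#eval
  let s : Int := 11; let y : Int := 40   -- secret, reused nonce
  let c₁ : Int := 7; let c₂ : Int := 3   -- two distinct challenges
  let z₁ := y + c₁ * s
  let z₂ := y + c₂ * s
  (z₁ - z₂) / (c₁ - c₂)                  -- recovers s = 11
```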
Implementation Security
The implementation includes comprehensive security documentation in Protocol/Core/Security.lean:
Randomness Requirements: All secret values (shares, nonces, commitment openings) must be sampled from a CSPRNG. Nonce reuse is catastrophic. The session-typed API makes it a compile-time error.
Side-Channel Considerations: The specification flags timing-vulnerable functions (Lagrange computation, norm checks). Production implementations must use constant-time primitives. The ConstantTimeEq typeclass (in Security.lean) marks types requiring constant-time equality comparison.
Memory Zeroization: Sensitive values must be securely erased after use. Platform-specific APIs are documented for C, POSIX, Windows, and Rust. The Zeroizable typeclass (in Security.lean) marks types requiring secure erasure.
Secret Wrappers: The SecretBox and NonceBox types (in Security.lean) wrap sensitive values with private constructors, discouraging accidental duplication or exposure. Use SecretBox.wrap and NonceBox.fresh to create instances.
Docs Index
- Algebraic Setting: Algebraic primitives, module structure, lattice instantiations, and semilattice definitions
- Key Generation: Dealer and distributed key generation with CRDT state, VSS, party exclusion
- Signing Protocol: Two-round signing protocol with session types, rejection sampling
- Verification: Signature verification and threshold context
- Extensions: Complaints, refresh/repair coordination, rerandomization, type-indexed phases
- Protocol Integration: External context binding, evidence piggybacking, fast-path signing, self-validating shares
- Threshold Configuration: ThresholdConfig parameters and tuning
Algebraic Setting
The protocol is parameterized by an abstract scheme structure. This structure defines the algebraic objects and operations. Concrete instantiations choose specific rings and modules.
Scheme Record
The Scheme structure bundles the algebraic setting with cryptographic oracles. It keeps types abstract so proofs and implementations can swap in concrete rings and primitives.
structure Scheme where
PartyId : Type
Message : Type
Secret : Type
Public : Type
Challenge : Type
Scalar : Type
[scalarSemiring : Semiring Scalar]
[secretAdd : AddCommGroup Secret]
[publicAdd : AddCommGroup Public]
[secretModule : Module Scalar Secret]
[publicModule : Module Scalar Public]
[challengeSMulSecret : SMul Challenge Secret]
[challengeSMulPublic : SMul Challenge Public]
A : Secret →ₗ[Scalar] Public
Commitment : Type
Opening : Type
commit : Public → Opening → Commitment
commitBinding : ∀ {x₁ x₂ o₁ o₂}, commit x₁ o₁ = commit x₂ o₂ → x₁ = x₂
-- Domain-separated hash functions (FROST pattern)
hashCollisionResistant : Prop
hashToScalar : ByteArray → ByteArray → Scalar -- H(domain, data)
hashDkg : PartyId → Public → Public → Scalar -- H_dkg for PoK
hash : Message → Public → List PartyId → List Commitment → Public → Challenge
-- Optional identifier derivation
deriveIdentifier : ByteArray → Option PartyId := fun _ => none
normOK : Secret → Prop
[normOKDecidable : DecidablePred normOK]
Modules and Scalars
Let R denote a commutative semiring with unity. This serves as the scalar ring. Common choices include ℤ_q for a prime q.
The secret space S is an additive commutative group. It carries an R-module structure. Secrets are elements of S.
The public space P is also an additive commutative group with an R-module structure. Public keys are elements of P.
In lattice instantiations both spaces may be polynomial rings or vectors over ℤ_q.
The Linear Map A
The core algebraic structure is a linear map A : S → P. This map is R-linear. It satisfies
A(x + y) = A(x) + A(y)
and
A(r · x) = r · A(x)
for all secrets x, y and scalars r.
The map connects private shares to their public counterparts. If sᵢ is a secret share then A(sᵢ) is the corresponding public share.
Linearity ensures that aggregation in S corresponds to aggregation in P. The sum of public shares equals the image of the sum of secret shares. This property underlies the threshold aggregation.
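A toy check of this aggregation property, taking A to be multiplication by a fixed public scalar over ℤ (the simplest possible linear map, not the lattice instantiation):

```lean
-- A(x) = a·x: the sum of the public shares equals the public image
-- of the summed secret shares.
def Atoy (a x : Int) : Int := a * x

#eval
  let a : Int := 13
  let shares : List Int := [4, 9, -2]
  ((shares.map (Atoy a)).foldl (· + ·) 0,
   Atoy a (shares.foldl (· + ·) 0))  -- both components are 143
```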
Linearity in Lattice Signatures
Dilithium and other lattice-based signature schemes inherit a fundamental linearity property from their algebraic structure. This linearity is precisely what enables threshold signing.
Why Dilithium is Linear
In Dilithium signing, the response is computed as z = y + c·s, where y is random and s is the secret key. This sum respects linearity in both variables. Unlike schemes where the signature is output as a function of intermediate hash values, Dilithium's response is a direct linear combination.
The same linearity holds in threshold settings. If s = Σᵢ sᵢ (secret shares) and each signer produces zᵢ = yᵢ + c·sᵢ, then the sum Σᵢ zᵢ = y + c·s produces the same response as a single signer with the full secret.
This is the core insight of FROST applied to the lattice setting. The threshold scheme is not a different scheme; it is the base scheme distributed and recombined through linearity.
Norm Bounds via Triangle Inequality
In single-signer Dilithium, the response must satisfy a global norm bound B_global. In threshold signing, each signer produces a partial zᵢ, and these are summed to produce the global response z = Σᵢ zᵢ.
By the triangle inequality, ‖Σᵢ zᵢ‖ ≤ Σᵢ ‖zᵢ‖.
If we ensure that each partial satisfies a local bound ‖zᵢ‖ ≤ B_local, and we aggregate at most t partials, then the global response is guaranteed to satisfy ‖z‖ ≤ t·B_local.
By setting B_local = ⌊B_global / t⌋, we obtain a hard guarantee that any combination of t or fewer valid partials automatically satisfies the global Dilithium bound. This is a mathematical certainty, not a probabilistic property.
Each signer enforces the local bound via internal rejection sampling. The aggregator enforces it by checking each partial before inclusion. Once the aggregator has collected t valid partials (all satisfying ‖zᵢ‖ ≤ B_local), it is mathematically guaranteed that their sum satisfies the global bound. No further norm checking is needed; no distributed coordination or restart is required.
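The two-partial case of this argument can be checked formally. A sketch over ℤ, assuming Mathlib's abs_add, add_le_add, and two_mul lemmas are available:

```lean
-- If each partial is within the local bound B, their sum is within 2·B.
example (z₁ z₂ B : Int) (h₁ : |z₁| ≤ B) (h₂ : |z₂| ≤ B) :
    |z₁ + z₂| ≤ 2 * B := by
  calc |z₁ + z₂| ≤ |z₁| + |z₂| := abs_add z₁ z₂
    _ ≤ B + B := add_le_add h₁ h₂
    _ = 2 * B := (two_mul B).symm
```

The general t-partial statement follows by induction on the list of partials; the project's version lives in the norm-bound proof modules.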
Design Choice: No Explicit Error Term
In Dilithium, the public key includes an error term: t = A·s₁ + s₂, where s₂ is small. This makes t indistinguishable from random under MLWE. During verification, Dilithium uses a HighBits() function to absorb the error term s₂.
We deliberately keep the abstract map A without an explicit error term for these reasons:
- Abstraction boundary. The error term is a key generation detail. Once the public key t is computed, signing and verification only need the linear relationship A(z) = w + c·t.
- HighBits is an implementation detail. The truncation that absorbs s₂ can be encoded in the concrete instantiation: the hash function can hash HighBits(w) rather than the full w, and the normOK predicate can include the HighBits consistency check.
- Clean protocol logic. The signing and verification math stays clean: A(z) = w + c·pk. The error handling lives in the concrete instantiation, not the abstract protocol.
- Instantiation flexibility. Different lattice schemes handle errors differently (Dilithium vs Falcon vs others). The abstract Scheme lets each instantiation encode its approach appropriately.
For a Dilithium instantiation, the Scheme record would be configured as:
def dilithiumScheme (params : DilithiumParams) : Scheme :=
{ Secret := PolyVec params.l -- s₁ ∈ R_q^l
, Public := PolyVec params.k -- t = As₁ + s₂ (precomputed at keygen)
, A := dilithiumKeyMap -- maps response z to Az
, normOK := fun z =>
dilithiumNormCheck z params ∧
highBitsConsistent z params -- both quality checks
, hash := fun m pk ... w =>
hashHighBits (highBits w params.gamma2) m -- hash truncated w
, ... }
The error term is baked into the public key during key generation. The normOK predicate enforces both the coefficient bound and the HighBits consistency check. The hash function operates on truncated values as Dilithium requires.
HighBits Specification
The HighBits function is formally specified in Proofs/Core/HighBits.lean:
structure HighBitsSpec (P : Type*) [AddCommGroup P] where
highBits : P → P
gamma2 : Nat
isSmall : P → Prop
idempotent : ∀ x, highBits (highBits x) = highBits x
absorbs_small : ∀ (w e : P), isSmall e →
highBits (w + e) = highBits w ∨ highBits (w - e) = highBits w
The key property is absorption: small perturbations don't change HighBits. During verification:
- Signer computes: w = A(y)
- Verifier computes: w' = A(z) − c·t
Since z = y + c·s₁ and t = A(s₁) + s₂:
w' = A(y) + c·A(s₁) − c·(A(s₁) + s₂) = w − c·s₂
The absorption property ensures HighBits(w') = HighBits(w) when c·s₂ is small.
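A toy HighBits over ℤ illustrates absorption. This rounds down to a multiple of 2γ₂; real Dilithium decomposes centered residues mod q and uses hints at region boundaries, so this is illustration only.

```lean
-- Drop the low bits: x ↦ x − (x mod 2γ₂). Idempotent, and small
-- perturbations away from a region boundary are absorbed.
def highBitsToy (gamma2 x : Int) : Int := x - x % (2 * gamma2)

#eval
  let γ₂ : Int := 8
  (highBitsToy γ₂ 100, highBitsToy γ₂ 103)  -- shift of 3 absorbed: both 96
```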
Challenges
The challenge space C is a separate type. In typical instantiations challenges are scalars from R or a related ring. Challenges act on secrets and publics via scalar multiplication.
For a challenge c and secret x, the product c · x is a secret. For a challenge c and public p, the product c · p is a public. This action must be compatible with the module structure.
Commitments
A commitment scheme binds a public value to a commitment string. Let commit : P → O → Com denote the commitment function. Here O is the opening space and Com is the commitment space.
The binding property requires that
commit(x₁, o₁) = commit(x₂, o₂) → x₁ = x₂
for all publics x₁, x₂ and openings o₁, o₂.
This property ensures that a commitment uniquely determines the committed value. Adversaries cannot produce two different openings for the same commitment.
Hash Functions
Following FROST, we use domain-separated hash functions for different protocol operations. This prevents cross-protocol attacks where a hash collision in one context could be exploited in another.
Domain Separation
Each hash function uses a unique domain prefix:
| Domain | FROST Name | Purpose | Prefix |
|---|---|---|---|
| Binding factor | H1 | Compute binding factors | ice-nine-v1-rho |
| Challenge | H2 | Fiat-Shamir challenge | ice-nine-v1-chal |
| Nonce | H3 | Nonce derivation | ice-nine-v1-nonce |
| Message | H4 | Message pre-hashing | ice-nine-v1-msg |
| Commitment list | H5 | Commitment encoding | ice-nine-v1-com |
| DKG | HDKG | Proof of knowledge | ice-nine-v1-dkg |
| Identifier | HID | Party ID derivation | ice-nine-v1-id |
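Hypothetical convenience wrappers over the Scheme's hashToScalar field, fixing the domain prefixes from the table above (the real call sites may differ):

```lean
-- Fix the domain once so callers cannot mix contexts.
def Scheme.hashChal (S : Scheme) (data : ByteArray) : S.Scalar :=
  S.hashToScalar "ice-nine-v1-chal".toUTF8 data

def Scheme.hashRho (S : Scheme) (data : ByteArray) : S.Scalar :=
  S.hashToScalar "ice-nine-v1-rho".toUTF8 data
```

Because the prefix is baked into each wrapper, a collision found for one domain cannot be replayed in another.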
Challenge Hash (H2)
The primary hash function maps protocol inputs to challenges:
c = H2(m, pk, L, coms, w)
The inputs are the message m, the global public key pk, the participant set L, the list coms of commitments, and the aggregated nonce w.
DKG Hash (HDKG)
Used for the proof of knowledge in DKG: hashDkg maps a party ID, a public key, and a PoK commitment to a scalar challenge.
Identifier Derivation (HID)
The deriveIdentifier function maps arbitrary byte strings to party identifiers:
Uses domain prefix ice-nine-v1-id. Returns none if the derived value is zero (invalid identifier).
-- In Scheme record
deriveIdentifier : ByteArray → Option PartyId := fun _ => none
-- Convenience function
def Scheme.deriveId (S : Scheme) (data : ByteArray) : Option S.PartyId :=
S.deriveIdentifier data
This enables deterministic identifier derivation from human-readable strings or external identifiers.
Random Oracle Model
In security proofs the hash is modeled as a random oracle. This means it behaves as a uniformly random function. The random oracle model enables simulation-based security arguments.
Norm Predicate
A norm predicate gates acceptance. Signatures with large response vectors are rejected.
The predicate captures a bound on the norm of the response. For lattice signatures this prevents leakage through the response distribution. Rejection sampling or abort strategies ensure the bound holds.
In the abstract model the predicate is a parameter. Concrete instantiations define it based on the lattice parameters.
Dilithium-Style Norm Bounds
For Dilithium-style signatures, the norm check uses:
- ℓ∞ norm: ‖z‖∞ = max_i |z_i|
- Rejection bound: ‖z‖∞ < γ₁ - β, where β = τ·η
structure DilithiumParams where
n : Nat := 256 -- ring dimension
q : Nat := 8380417 -- modulus
tau : Nat -- challenge weight (number of ±1)
eta : Nat -- secret coefficient bound
gamma1 : Nat -- nonce coefficient range
gamma2 : Nat -- low-bits range
def dilithiumNormOK (p : DilithiumParams) (z : List Int) : Prop :=
vecInfNorm z < p.gamma1 - p.tau * p.eta
Standard parameter sets (defined and validated in Proofs/Core/Assumptions.lean):
| Level | τ | η | γ₁ | γ₂ | β | Security |
|---|---|---|---|---|---|---|
| Dilithium2 | 39 | 2 | 2¹⁷ | 95232 | 78 | 128-bit |
| Dilithium3 | 49 | 4 | 2¹⁹ | 261888 | 196 | 192-bit |
| Dilithium5 | 60 | 2 | 2¹⁹ | 261888 | 120 | 256-bit |
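The critical constraints γ₁ > β and γ₂ > β from the table can be spot-checked directly. This is an illustrative helper; the actual proofs live in Proofs/Core/Assumptions.lean:

```lean
-- β = τ·η must sit strictly below both γ₁ and γ₂.
def constraintsOK (tau eta gamma1 gamma2 : Nat) : Bool :=
  tau * eta < gamma1 && tau * eta < gamma2

#eval constraintsOK 39 2 (2 ^ 17) 95232   -- Dilithium2
#eval constraintsOK 49 4 (2 ^ 19) 261888  -- Dilithium3
#eval constraintsOK 60 2 (2 ^ 19) 261888  -- Dilithium5
```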
structure DilithiumSigningParams where
tau : Nat -- challenge weight
eta : Nat -- secret coefficient bound
gamma1 : Nat -- nonce coefficient range
gamma2 : Nat -- low-bits truncation range
def DilithiumSigningParams.beta (p : DilithiumSigningParams) : Nat :=
p.tau * p.eta
-- Critical security constraints (proven for each parameter set):
-- γ₁ > β ensures rejection sampling has room
-- γ₂ > β ensures HighBits absorbs error
theorem dilithiumL2_gamma1_ok : dilithiumL2.gamma1 > dilithiumL2.beta
theorem dilithiumL2_gamma2_ok : dilithiumL2.gamma2 > dilithiumL2.beta
Security Parameter Guidance
When selecting parameters for threshold deployment, consider how threshold configuration affects security margins.
Threshold and Local Bounds. The local rejection bound is B_local = B_global / T where T is the number of signers. More signers means each signer must produce smaller partials. This affects rejection rates but not core security assumptions.
Security Level Selection. Choose the security level based on your threat model, not performance concerns.
| Use Case | Recommended Level | Rationale |
|---|---|---|
| General purpose | Dilithium2 (128-bit) | Sufficient for most applications |
| High-value assets | Dilithium3 (192-bit) | Conservative security margin |
| Long-term secrets | Dilithium5 (256-bit) | Maximum security against future attacks |
The 128-bit level provides security comparable to AES-128. The 192-bit and 256-bit levels provide additional margins against potential improvements in lattice algorithms.
Threshold and Fault Tolerance. The default 2/3+1 threshold balances security and liveness.
| Threshold | Fault Tolerance | Trade-off |
|---|---|---|
| n-of-n | Zero faults tolerated | Maximum security, fragile |
| 2/3+1 | Up to 1/3 faulty | Good balance (BFT standard) |
| 1/2+1 | Up to 1/2 faulty | More fault tolerant, weaker security |
Lower thresholds improve liveness but reduce the cost of compromise. An adversary needs fewer corrupted parties to forge signatures.
Signer Count Limits. The local bound B_local decreases as T increases. Very large signer counts reduce B_local to the point where rejection rates become impractical.
| Signers (T) | B_local (Dilithium2) | Practical? |
|---|---|---|
| 3-10 | 13099-43664 | Yes |
| 20 | 6549 | Marginal |
| 50+ | <2620 | Not recommended |
For deployments with more than 20 signers, consider using Dilithium3 or Dilithium5 for larger global bounds.
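The B_local column above follows from dividing the global bound by the signer count. A sketch of that arithmetic, where `localBound` is a hypothetical helper:

```lean
-- B_local = (γ₁ - β) / T, here for Dilithium2 (γ₁ = 2¹⁷, β = 78).
def localBound (gamma1 beta T : Nat) : Nat :=
  (gamma1 - beta) / T

#eval localBound (2 ^ 17) 78 10  -- 13099
#eval localBound (2 ^ 17) 78 50  -- 2619
```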
Parameter Validation. Always validate parameter configurations using ThresholdConfig.validate. The validation checks:
- Threshold does not exceed party count
- maxSigners is at least threshold
- Local bound is positive after division
- Global bound is sufficient for the configuration
Configurations from untrusted sources should be validated before use even if they were created through ThresholdConfig.create.
Concrete Scheme Instantiations
The implementation provides a lattice-friendly scheme in Instances.lean:
Lattice Scheme
Integer vectors with hash-based commitments and Dilithium-style norm checking.
structure LatticeParams where
n : Nat := 256 -- dimension
q : Nat := 8380417 -- modulus
bound : Nat := 1 <<< 19 -- ℓ∞ bound (approx Dilithium γ1)
def latticeScheme (p : LatticeParams := {}) : Scheme :=
{ PartyId := Nat
, Message := ByteArray
, Secret := Fin p.n → Int
, Public := Fin p.n → Int
, Challenge := Int
, Scalar := Int
, A := LinearMap.id
, Commitment := LatticeCommitment (Fin p.n → Int) (Fin p.n → Int)
, Opening := Fin p.n → Int
, commit := latticeCommit -- hash-based: H(encode(w, nonce))
, normOK := fun v => intVecInfLeq p.bound v
, ... }
This models the SIS/LWE structure:
- Secret key: s ∈ ℤ^n (integer vector with bounded coefficients)
- Public key: pk = A(s) (identity map for simplicity)
- Norm bound: ‖z‖∞ ≤ bound for signature validity
- Commitments: Hash-based with axiomatized binding
The hash-based commitment uses hashBytes(encodePair(w, nonce)) where encodePair serializes the value and opening. Binding is axiomatized via HashBinding assumption.
Semilattice Structure
Protocol state at each phase forms a join-semilattice. A semilattice is an algebraic structure with a binary join operation that is commutative, associative, and idempotent.
The join induces a partial order: a ≤ b iff a ⊔ b = b.
Message Map Semilattice
Protocol messages are stored in MsgMap structures, which are hash maps keyed by sender ID. This design makes conflicting messages from the same sender unrepresentable at the type level.
structure MsgMap (S : Scheme) (M : Type*) [BEq S.PartyId] [Hashable S.PartyId] where
map : Std.HashMap S.PartyId M
instance : Join (MsgMap S M) := ⟨MsgMap.merge⟩
instance : LE (MsgMap S M) := ⟨fun a b => a.map.toList.all (fun (k, _) => b.map.contains k)⟩
The merge operation takes the union of keys, keeping the first value on conflict. This provides:
- Conflict-freedom by construction: At most one message per sender
- Commutativity: Merge order doesn't affect the set of senders
- Idempotence: Merging a map with itself is identity
- Strict mode: tryInsert returns conflict indicators for misbehavior detection
This provides the foundation for merging commit maps, reveal maps, and share maps while preventing Byzantine parties from injecting multiple conflicting messages.
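A minimal sketch of the merge operation, assuming Std.HashMap and the stated first-value-wins rule (this mirrors, but is not copied from, MsgMap.merge):

```lean
import Std.Data.HashMap

-- Union of keys; on conflict the value already present in `a` wins.
def mergeFirstWins [BEq K] [Hashable K]
    (a b : Std.HashMap K V) : Std.HashMap K V :=
  b.fold (fun acc k v => if acc.contains k then acc else acc.insert k v) a
```

Starting the fold from `a` and skipping keys that `a` already contains gives the first-value-wins semantics; folding in the other direction would silently overwrite.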
Product Semilattice
If α and β are semilattices, their product α × β is a semilattice under componentwise join.
instance prodJoin (α β) [Join α] [Join β] : Join (α × β) :=
⟨fun a b => (a.1 ⊔ b.1, a.2 ⊔ b.2)⟩
This allows combining protocol state with auxiliary data (refresh masks, repair bundles, rerandomization masks).
CRDT Semantics
Semilattice merge ensures conflict-free replicated data type (CRDT) semantics. Replicas can merge states in any order and converge to the same result. This is essential for handling:
- Out-of-order message delivery
- Network partitions and rejoins
- Concurrent protocol executions
Instance Constraint Patterns
The protocol modules use consistent instance requirements:
For HashMap-based structures (MsgMap, NonceRegistry):
- [BEq S.PartyId]: Boolean equality
- [Hashable S.PartyId]: hash function for HashMap
For decidable equality (if-then-else, match guards):
- [DecidableEq S.PartyId]: Prop-valued equality with a decision procedure
For field arithmetic (Lagrange coefficients):
- [Field S.Scalar]: division required for λ_i = Π x_j / (x_j - x_i)
- [DecidableEq S.Scalar]: for checking degeneracy
Lagrange Interpolation
The Protocol/Core/Lagrange.lean module provides a unified API for computing Lagrange interpolation coefficients used across threshold protocols.
Core Function
def coeffAtZero [Field F] [DecidableEq F]
(partyScalar : F)
(allScalars : List F)
: F :=
let others := allScalars.filter (· ≠ partyScalar)
others.foldl (fun acc xj => acc * (xj / (xj - partyScalar))) 1
This computes the Lagrange coefficient λ_i = Π_{x_j ≠ x_i} x_j / (x_j - x_i) for evaluating the interpolating polynomial at 0.
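A quick numeric check, with Float standing in for the Field S.Scalar (toy example only; `coeffAtZero'` is a hypothetical specialization):

```lean
-- coeffAtZero specialized to Float; shares at x = 1, 2, 3.
def coeffAtZero' (xi : Float) (xs : List Float) : Float :=
  (xs.filter (fun xj => xj != xi)).foldl
    (fun acc xj => acc * (xj / (xj - xi))) 1

-- λ₁ = (2/1)·(3/2) = 3, λ₂ = (1/(-1))·(3/1) = -3, λ₃ = (1/(-2))·(2/(-1)) = 1
#eval [1.0, 2.0, 3.0].map (fun xi => coeffAtZero' xi [1, 2, 3])
```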
Scheme-Aware API
def schemeCoeffAtZero (S : Scheme)
[Field S.Scalar] [DecidableEq S.Scalar] [DecidableEq S.PartyId]
(pidToScalar : S.PartyId → S.Scalar)
(allParties : List S.PartyId)
(party : S.PartyId)
: S.Scalar
def buildPartyCoeffs (S : Scheme)
[Field S.Scalar] [DecidableEq S.Scalar] [DecidableEq S.PartyId]
(pidToScalar : S.PartyId → S.Scalar)
(parties : List S.PartyId)
: List (PartyCoeff S)
These functions integrate with the Scheme type for protocol use, including batch computation and precomputed coefficient storage.
Secret Wrappers
Secrets and nonces are wrapped in opaque structures to discourage accidental duplication or exposure.
/-- Wrapper for secrets with private constructor.
Forces explicit acknowledgment when accessing secret material. -/
structure SecretBox (α : Type*) where
private mk ::
val : α
/-- Wrap a secret value. -/
def SecretBox.wrap (secret : α) : SecretBox α := ⟨secret⟩
/-- Wrapper for nonces to highlight their ephemeral nature.
Nonce reuse is catastrophic: treat these as single-use. -/
structure NonceBox (α : Type*) where
private mk ::
val : α
/-- Create from a freshly sampled nonce. -/
def NonceBox.fresh (nonce : α) : NonceBox α := ⟨nonce⟩
The private constructors prevent arbitrary creation of wrapped values. Use SecretBox.wrap and NonceBox.fresh to create instances. This lightweight discipline signals intent. Code that accesses the val field must explicitly acknowledge it is handling secret material.
Both types have Zeroizable and ConstantTimeEq marker instances to indicate security requirements for production implementations.
Key Share Integration:
structure KeyShare (S : Scheme) where
pid : S.PartyId -- party identifier
sk_i : SecretBox S.Secret -- wrapped secret share
pk_i : S.Public -- public share = A(sk_i)
pk : S.Public -- global public key
/-- Create KeyShare from unwrapped secret (use during DKG). -/
def KeyShare.create (S : Scheme) (pid : S.PartyId) (sk : S.Secret) (pk_i pk : S.Public)
: KeyShare S :=
{ pid := pid, sk_i := ⟨sk⟩, pk_i := pk_i, pk := pk }
/-- Access secret for cryptographic operations. -/
def KeyShare.secret (share : KeyShare S) : S.Secret :=
share.sk_i.val
All protocol code that needs the secret share calls share.secret rather than pattern-matching on the SecretBox. This creates a clear audit trail for secret access.
Single-Signer Schnorr Relation
The threshold protocol generalizes the single-signer Schnorr pattern. The single-signer version proceeds as follows.
Key generation. Sample a short secret s. Compute the public key pk = A(s).
Signing. Given message m:
- Sample a short ephemeral secret y.
- Compute the nonce W = A(y).
- Compute the challenge c = H(m, pk, W).
- Compute the response z = y + c·s.
- Output signature σ = (c, z).
Verification. Given signature (c, z) and message m:
- Compute W' = A(z) - c·pk.
- Check normOK(z).
- Compute c' = H(m, pk, W').
- Accept if c' = c.
The threshold protocol distributes s among parties. Each party i holds a share s_i with Σ_i s_i = s. The signing procedure produces partial responses z_i that aggregate to z = Σ_i z_i.
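The cancellation that makes verification work is one line of integer algebra. A toy check with A = id over ℤ, where t stands for the product c·s (the real A is a lattice map):

```lean
-- W' = A(z) - c·pk collapses back to the nonce:
-- (y + c·s) - c·s = y, written with t := c·s.
example (y t : Int) : (y + t) - t = y := by omega
```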
Key Generation
Key generation produces the master secret and distributes shares among parties. Two modes are supported. A trusted dealer can sample s and distribute shares directly. Alternatively parties can run a distributed key generation protocol without trusting any single entity.
Both modes produce additive shares. Each party i holds a secret share s_i. The shares sum to the master secret: Σ_i s_i = s. The corresponding public key is pk = A(s).
Key Share Structure
After key generation each party holds a KeyShare containing its local view.
structure KeyShare (S : Scheme) where
pid : S.PartyId -- party identifier
sk_i : SecretBox S.Secret -- secret share (wrapped for safety)
pk_i : S.Public -- public share = A(sk_i)
pk : S.Public -- global public key
The secret share is wrapped in a SecretBox with a private constructor. This prevents accidental duplication or exposure of the secret material. Access the secret via KeyShare.secret:
/-- Create KeyShare from unwrapped secret (use during DKG). -/
def KeyShare.create (S : Scheme) (pid : S.PartyId) (sk : S.Secret) (pk_i pk : S.Public)
: KeyShare S :=
{ pid := pid, sk_i := SecretBox.wrap sk, pk_i := pk_i, pk := pk }
/-- Get the unwrapped secret share for computation.
**Security**: Only use when the secret is needed for cryptographic operations. -/
def KeyShare.secret (share : KeyShare S) : S.Secret :=
share.sk_i.val
The key share is the persistent credential a signer carries into the signing protocol.
Trusted Dealer Mode
In trusted dealer mode a single entity generates and distributes all shares. This mode is simpler but requires trust in the dealer.
Dealer Output Structure
The dealer produces a DealerShare for each party and a DealerTranscript bundling the complete key material.
structure DealerShare (S : Scheme) :=
(pid : S.PartyId)
(sk_i : S.Secret)
(pk_i : S.Public)
(opening : S.Opening)
(commitPk : S.Commitment)
structure DealerTranscript (S : Scheme) :=
(shares : List (DealerShare S))
(pk : S.Public)
Dealer Procedure
The dealer proceeds as follows.
- Sample a short master secret s.
- For each party i sample a short share s_i.
- Adjust the final share so that Σ_i s_i = s.
- Compute each public share pk_i = A(s_i).
- Compute the global public key pk = A(s).
- Commit to each public share as com_i = commit(pk_i, o_i) for random openings o_i.
- Publish the global public key and all commitments.
- Send each party its secret share over a secure channel.
The pure implementation dealerKeygen takes pre-sampled secrets and openings to keep IO/randomness out of the core logic.
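The share-adjustment step can be sketched as follows (`adjustShares` is a hypothetical helper, not the dealerKeygen API):

```lean
-- Pad a list of sampled shares so they sum to the master secret s.
def adjustShares (s : Int) (sampled : List Int) : List Int :=
  sampled ++ [s - sampled.foldl (· + ·) 0]

#eval (adjustShares 42 [10, -3]).foldl (· + ·) 0  -- 42
```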
Dealer Correctness
The linearity of A ensures consistency. Since A(Σ_i s_i) = Σ_i A(s_i), the sum of public shares equals the global public key.
Parties can verify this relation after receiving their shares. Each party computes A(s_i) and checks that the published commitment com_i opens correctly.
Trust Assumption
The dealer learns the master secret s. This mode is appropriate when a trusted party exists or when the protocol operates in a setting where the dealer is later destroyed. For stronger security guarantees use distributed key generation.
Distributed Key Generation
Distributed key generation removes the need for a trusted dealer. Parties jointly generate shares such that no single party learns the master secret. The protocol runs in two rounds with CRDT-mergeable state at each phase.
Message Types
The DKG protocol uses commit and reveal messages. Following FROST, commit messages include a proof of knowledge to prevent rogue-key attacks.
/-- Proof of knowledge for DKG: proves knowledge of secret corresponding to public key. -/
structure ProofOfKnowledge (S : Scheme) where
commitment : S.Public -- R = A(k) for random nonce k
response : S.Secret -- μ = k + c·sk where c = H_dkg(pid, pk, R)
structure DkgCommitMsg (S : Scheme) where
sender : S.PartyId
commitPk : S.Commitment
pok : ProofOfKnowledge S -- proof of knowledge (prevents rogue-key attacks)
structure DkgRevealMsg (S : Scheme) where
sender : S.PartyId
pk_i : S.Public
opening : S.Opening
Proof of Knowledge
The proof of knowledge (PoK) prevents rogue-key attacks where an adversary chooses their public key as a function of honest parties' keys. Each party proves knowledge of the secret corresponding to their public share.
Generation (during Round 1):
- Sample a random nonce k
- Compute the commitment R = A(k)
- Compute the challenge c = H_dkg(pid, pk, R)
- Compute the response μ = k + c·sk
Verification (during aggregation): Check that A(μ) = R + c·pk.
This works because A(μ) = A(k + c·sk) = A(k) + c·A(sk) = R + c·pk.
def verifyProofOfKnowledge (S : Scheme) [DecidableEq S.Public]
(pid : S.PartyId) (pk : S.Public) (pok : ProofOfKnowledge S) : Bool :=
let c := S.hashDkg pid pk pok.commitment
S.A pok.response = pok.commitment + c • pk
Party State
Each party maintains local state during the protocol.
structure DkgLocalState (S : Scheme) :=
(pid : S.PartyId)
(sk_i : S.Secret)
(pk_i : S.Public)
(openPk : S.Opening)
The local state contains:
- A secret share s_i sampled locally.
- An opening value o_i for the commitment.
- The commitment com_i = commit(pk_i, o_i) where pk_i = A(s_i).
Round 1: Commit
In round one each party commits to its public share.
- Party i samples a short secret share s_i.
- Compute the public share pk_i = A(s_i).
- Sample a random opening o_i.
- Compute the commitment com_i = commit(pk_i, o_i).
- Broadcast the commitment com_i to all parties.
Parties wait until they receive commitments from all other participants. The commitment hides the public share until the reveal phase.
Round 2: Reveal
In round two each party reveals its public share and opening.
- Party i broadcasts the pair (pk_i, o_i).
- Each party verifies all received openings.
- For each received pair (pk_j, o_j), check that commit(pk_j, o_j) = com_j.
- If any verification fails, the party aborts or files a complaint.
The binding property of the commitment ensures that a dishonest party cannot change its public share after seeing others.
Aggregation
After successful verification all parties compute the global public key pk = Σ_i pk_i.
Each party stores the set of all public shares and the global key pk. Secret shares remain private. Only party i knows s_i.
Error Handling
Several error conditions may arise during DKG. The implementation provides structured error types.
inductive DkgError (PartyId : Type) where
| lengthMismatch : DkgError PartyId
| duplicatePids : DkgError PartyId
| commitMismatch : PartyId → DkgError PartyId
| invalidProofOfKnowledge : PartyId → DkgError PartyId
Missing commitment. If a party does not receive a commitment from party j within a timeout, it marks j as absent. The protocol can proceed with the remaining parties if enough are present. Detected via lengthMismatch.
Invalid opening. If commit(pk_j, o_j) ≠ com_j then party j is marked as malicious. Detected via commitMismatch.
Invalid proof of knowledge. If A(μ) ≠ R + c·pk for party j's PoK, this indicates either a malicious party attempting a rogue-key attack or corruption of the PoK data. Detected via invalidProofOfKnowledge.
Duplicate participants. If the same party identifier appears twice, the transcript is rejected via duplicatePids.
Inconsistent views. If parties disagree about which commitments were received they must run a consensus or abort. The protocol assumes a broadcast channel or equivalent.
Checked Aggregation
The dkgAggregateChecked function validates the transcript before computing the global public key. It accepts lists extracted from the CRDT state (via CommitState.commitList, etc.):
def dkgAggregateChecked
(S : Scheme) [DecidableEq S.PartyId]
(commits : List (DkgCommitMsg S))
(reveals : List (DkgRevealMsg S))
: Except (DkgError S.PartyId) S.Public
Note: The internal CRDT state uses MsgMap (hash maps keyed by sender) which guarantees at most one message per party. The aggregation functions receive lists extracted from these maps, so duplicates are already eliminated.
It returns either the public key or a structured error identifying the failure mode. The totality theorem ensures this function always returns a result:
lemma dkgAggregateChecked_total : (result).isOk ∨ (result).isError
Validity Predicate
A transcript is valid when commits and reveals align by party identifier and each opening matches its commitment.
def dkgValid
(S : Scheme)
(commits : List (DkgCommitMsg S))
(reveals : List (DkgRevealMsg S)) : Prop :=
List.Forall₂
(fun c r => c.sender = r.sender ∧ S.commit r.pk_i r.opening = c.commitPk)
commits reveals
If dkgAggregateChecked succeeds, then dkgValid holds.
Threshold Considerations
In a t-of-n threshold setting any t parties can sign. The key generation phase produces additive shares. Threshold functionality arises during signing through Lagrange interpolation.
Lagrange Coefficients
Let T be a signing subset with |T| = t. The Lagrange coefficient for party i ∈ T is
λ_i = Π_{j ∈ T, j ≠ i} x_j / (x_j - x_i)
These coefficients satisfy Σ_{i ∈ T} λ_i · f(x_i) = f(0) when the shares f(x_i) are derived from a degree t-1 polynomial f evaluated at distinct points x_i.
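For example, with f(X) = 5 + 2X and shares at x = 1, 2, 3, the coefficients are λ = (3, -3, 1), and the weighted shares recover f(0):

```lean
-- f(1) = 7, f(2) = 9, f(3) = 11; λ₁·7 + λ₂·9 + λ₃·11 = f(0) = 5.
#eval 3 * 7 + (-3) * 9 + 1 * 11  -- 5
```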
Additive vs Polynomial Shares
The base protocol uses additive shares where Σ_i s_i = s. For n-of-n signing this is sufficient. For t-of-n signing the shares must encode polynomial structure.
One approach is to run a verifiable secret sharing protocol during DKG. Each party shares its contribution as a polynomial. After aggregation each party holds a share of the sum of polynomials. The implementation provides Feldman VSS in Protocol/DKG/Feldman.lean and Protocol/DKG/VSSDKG.lean.
An alternative approach uses additive shares with Lagrange adjustment at signing time. The Lagrange coefficients are computed over the active signer set T.
Verifiable Secret Sharing (VSS)
Feldman VSS provides malicious security by allowing recipients to verify their shares against polynomial commitments. This detects cheating dealers before shares are used.
Polynomial Commitment
A dealer commits to the polynomial f(X) = a₀ + a₁·X + ... + a_{t-1}·X^{t-1} by publishing A(a_k) for each coefficient a_k.
structure PolyCommitment (S : Scheme) where
commitments : List S.Public -- [A(a₀), A(a₁), ..., A(a_{t-1})]
threshold : Nat
consistent : commitments.length = threshold
Share Verification
Party i verifies its share f(x_i) by checking:
A(f(x_i)) = Σ_k x_i^k • A(a_k)
This works because A is linear: A(Σ_k a_k·x_i^k) = Σ_k x_i^k • A(a_k).
def verifyShare (S : Scheme) [Module S.Scalar S.Public]
(comm : PolyCommitment S) (share : VSSShare S) : Prop :=
S.A share.value = expectedPublicValue S comm share.evalPoint
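With A = id over ℤ, the expected public value is just Horner evaluation of the committed coefficients. A toy stand-in for expectedPublicValue:

```lean
-- Evaluate committed coefficients [a₀, a₁, …] at x via Horner's rule.
def expectedAt (coeffs : List Int) (x : Int) : Int :=
  coeffs.foldr (fun a acc => a + x * acc) 0

-- For f(X) = 5 + 2X the dealer publishes [5, 2]; the party at x = 3
-- checks its share f(3) = 11 against this value.
#eval expectedAt [5, 2] 3  -- 11
```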
VSS Transcript
A complete VSS dealing is captured in a transcript structure with proof obligations:
def evalPointsDistinct {S : Scheme} [DecidableEq S.Scalar]
(shares : List (VSSShare S)) : Prop :=
(shares.map (·.evalPoint)).Nodup
structure VSSTranscript (S : Scheme) [DecidableEq S.Scalar] where
commitment : PolyCommitment S
shares : List (VSSShare S)
evalPointsNodup : evalPointsDistinct shares -- required for Lagrange interpolation
The evalPointsNodup proof ensures all shares have distinct evaluation points, which is essential for Lagrange interpolation to work correctly. The tryCreateVSSTranscript function provides runtime-checked construction:
def tryCreateVSSTranscript (S : Scheme) [DecidableEq S.Scalar]
(p : Polynomial S.Secret)
(parties : List (S.PartyId × S.Scalar))
: Option (VSSTranscript S) :=
if h : (parties.map Prod.snd).Nodup then
some (createVSSTranscript S p parties h)
else
none
Complaint Mechanism
If verification fails, a party files a complaint with evidence:
structure VSSComplaint (S : Scheme) where
complainant : S.PartyId
accused : S.PartyId
badShare : VSSShare S
commitment : PolyCommitment S
Complaints are publicly verifiable. Anyone can check that the share does not verify against the commitment.
VSS-DKG Integration
In VSS-DKG, each party acts as a dealer for their own contribution:
- Party i samples a polynomial f_i with f_i(0) = s_i
- Party i broadcasts a commitment to f_i
- Party i sends the share f_i(x_j) privately to each party j
- Each party verifies received shares against commitments
- Invalid shares generate complaints. Valid complaints identify faulty dealers.
- Aggregate: s_j = Σ_i f_i(x_j) and pk = Σ_i A(f_i(0))
def vssFinalize (S : Scheme)
(st : VSSLocalState S)
(allCommitments : List (VSSCommitMsg S))
: Option (KeyShare S)
Security Properties
Correctness. Honest shares verify: if the share equals f(x_i) and the commitment commits to f, verification succeeds.
Soundness. Invalid shares are detected. A complaint is valid iff the share fails verification.
Binding. The commitment determines the polynomial (up to the kernel of A). The dealer cannot equivocate.
Hiding. Fewer than t shares reveal nothing about the secret (information-theoretic).
Public Key Verification
Any external verifier can check the consistency of published data.
- Verify that each commitment com_i opens to (pk_i, o_i).
- Verify that pk = Σ_i pk_i.
These checks do not require knowledge of secret shares. They ensure that the published public key corresponds to the committed shares.
DKG with Complaints
The threshold DKG variant collects complaints when verification fails rather than immediately aborting.
inductive Complaint (PartyId : Type) where
| openingMismatch (accused : PartyId)
| missingReveal (accused : PartyId)
The dkgAggregateWithComplaints function returns either the public key or a list of complaints.
def dkgAggregateWithComplaints
(S : Scheme) [DecidableEq S.PartyId]
(commits : List (DkgCommitMsg S))
(reveals : List (DkgRevealMsg S))
: Except (List (Complaint S.PartyId)) S.Public
This enables exclusion of misbehaving parties while continuing with the honest subset.
Party Exclusion
After collecting complaints, the protocol computes which parties should be excluded.
def Complaint.accused : Complaint PartyId → PartyId
| .openingMismatch p => p
| .missingReveal p => p
def accusedParties (complaints : List (Complaint PartyId)) : List PartyId :=
(complaints.map Complaint.accused).dedup
def excludeFaulty (allParties : List PartyId) (complaints : List (Complaint PartyId)) : List PartyId :=
let faulty := accusedParties complaints
allParties.filter (fun p => p ∉ faulty)
Quorum Check
For threshold security, enough valid parties must remain:
inductive ExclusionResult (PartyId : Type)
| quorumMet (validParties : List PartyId) -- can continue
| quorumFailed (validCount : Nat) (needed : Nat) -- must abort
def hasThresholdQuorum (allParties complaints : List _) (threshold : Nat) : Bool :=
(excludeFaulty allParties complaints).length ≥ threshold
If the remaining valid parties number at least the threshold t, the protocol can continue. Otherwise it must abort.
Reconstruction from Subset
When some parties are excluded, the remaining valid parties can reconstruct the public key using Lagrange interpolation.
def dkgLagrangeCoeff [Field F] (pidToScalar : PartyId → F) (validPids : List PartyId) (i : PartyId) : F :=
let xi := pidToScalar i
let others := validPids.filter (· ≠ i)
(others.map (fun j => let xj := pidToScalar j; xj / (xj - xi))).prod
def reconstructPkFromSubset
(S : Scheme) [Field S.Scalar]
(pidToScalar : S.PartyId → S.Scalar)
(validReveals : List (DkgRevealMsg S)) : S.Public :=
let validPids := validReveals.map (·.sender)
let weightedPks := validReveals.map fun r =>
let λ_i := dkgLagrangeCoeff pidToScalar validPids r.sender
λ_i • r.pk_i
weightedPks.sum
Full DKG with Exclusion
The dkgWithExclusion function combines complaint handling with reconstruction:
def dkgWithExclusion
(S : Scheme) [Field S.Scalar]
(pidToScalar : S.PartyId → S.Scalar)
(allParties : List S.PartyId)
(commits : List (DkgCommitMsg S))
(reveals : List (DkgRevealMsg S))
(threshold : Nat)
: Except (List (Complaint S.PartyId) × Nat) (S.Public × List S.PartyId) :=
let complaints := dkgCheckComplaints S commits reveals
if complaints.isEmpty then
-- No complaints: simple aggregation
Except.ok ((reveals.map (·.pk_i)).sum, allParties)
else
-- Try reconstruction from valid subset
let validParties := excludeFaulty allParties complaints
if validParties.length ≥ threshold then
let validReveals := filterValidReveals reveals validParties
let pk := reconstructPkFromSubset S pidToScalar validReveals
Except.ok (pk, validParties)
else
Except.error (complaints, validParties.length)
Threshold Parameters
structure DKGParams where
n : Nat -- total parties
t : Nat -- threshold
threshold_valid : t ≤ n -- proof that t ≤ n
def DKGParams.maxFaulty (p : DKGParams) : Nat := p.n - p.t
def DKGParams.canContinue (p : DKGParams) (faultyCount : Nat) : Bool :=
faultyCount ≤ p.maxFaulty
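Assuming the definitions above, a 3-of-5 configuration tolerates two faulty parties:

```lean
-- maxFaulty = n - t = 2, so two complaints still allow continuation.
def p : DKGParams := { n := 5, t := 3, threshold_valid := by omega }

#eval p.maxFaulty      -- 2
#eval p.canContinue 2  -- true
#eval p.canContinue 3  -- false
```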
Finalizing Key Shares
After DKG completes successfully, each party converts its local state into a persistent key share using KeyShare.create:
def finalizeKeyShare
(S : Scheme)
(st : DkgLocalState S)
(pk : S.Public)
: KeyShare S :=
KeyShare.create S st.pid st.sk_i st.pk_i pk
The KeyShare.create function wraps the secret in a SecretBox, ensuring the protective wrapper is applied at the point of creation.
Security Properties
Secrecy. No coalition of fewer than t parties learns the master secret s.
Correctness. Honest execution produces shares that satisfy Σ_i s_i = s and pk = A(s).
Robustness. If all parties are honest the protocol terminates successfully. If a party misbehaves it is detected through commitment verification.
Totality. The checked aggregation functions always return either success or a structured error.
Two-Round Threshold Signing
The signing protocol produces a threshold signature from partial contributions. It runs in two rounds with phase-indexed CRDT state. The first round commits to ephemeral nonces. The second round produces partial signatures. An aggregator combines the partials into a final signature.
Module Structure
The signing implementation is split across focused modules for maintainability:
- Protocol/Sign/Types.lean: Core types: SessionTracker, NonceRegistry, message types (SignCommitMsg, SignRevealWMsg, SignShareMsg), error types (SignError), and output types (Signature, SignatureDone)
- Protocol/Sign/LocalRejection.lean: Local rejection sampling for norm bounds (eliminates global retry coordination)
- Protocol/Core/Abort.lean: Session-level abort coordination for liveness failures and security violations
- Protocol/Sign/Core.lean: Core functions: lagrangeCoeffAtZero, lagrangeCoeffs, computeChallenge, aggregateSignature, aggregateSignatureLagrange, ValidSignTranscript, validateSigning
- Protocol/Sign/Threshold.lean: Threshold support: CoeffStrategy, strategyOK, SignMode, ThresholdCtx, context constructors (mkAllCtx, mkThresholdCtx, mkThresholdCtxComputed), and context-based aggregation
- Protocol/Sign/Sign.lean: Re-exports from all submodules for backward compatibility
- Protocol/Sign/Session.lean: Session-typed API that makes nonce reuse a compile-time error
Phase-Indexed State
The signing protocol uses phase-indexed state that forms a join-semilattice at each phase.
inductive Phase
| commit
| reveal
| shares
| done
Each phase carries accumulated data that can be merged with the join operation (⊔).
Inputs
Each signing session requires the following inputs.
- Message. The message to be signed.
- Signer set. The active signing subset T.
- Secret shares. Each party i holds its secret share s_i.
- Public key. All parties know the global public key pk.
- Public shares. All parties know the public shares pk_i = A(s_i) from key generation.
Message Types
The protocol uses three message types across its phases.
structure SignCommitMsg (S : Scheme) where
sender : S.PartyId
session : Nat
commitW : S.Commitment -- hash commitment to hiding nonce
hidingVal : S.Public -- public hiding commitment W_hiding = A(hiding nonce)
bindingVal : S.Public -- public binding commitment W_binding = A(binding nonce)
context : ExternalContext := {} -- for external protocol binding
structure SignRevealWMsg (S : Scheme) where
sender : S.PartyId
session : Nat
opening : S.Opening -- opening for commitment verification
context : ExternalContext := {}
structure SignShareMsg (S : Scheme) where
sender : S.PartyId
session : Nat
z_i : S.Secret
context : ExternalContext := {} -- for external protocol binding
All message types include an optional context field for binding to external protocols. See Protocol Integration for details.
Dual Nonce Structure (FROST Pattern)
Following FROST, we use two nonces per signer instead of one:
- Hiding nonce: Protects against adaptive adversaries
- Binding nonce: Commits to the signing context (message + all commitments)
This provides stronger security against adaptive chosen-message attacks.
structure SigningNonces (S : Scheme) where
hidingVal : S.Secret -- protects signer from adaptive adversaries
bindingVal : S.Secret -- commits to signing context
structure SigningCommitments (S : Scheme) where
hidingVal : S.Public -- W_hiding = A(hiding nonce)
bindingVal : S.Public -- W_binding = A(binding nonce)
Domain-Separated Hash Functions (FROST H4, H5)
Following FROST, the signing protocol uses domain-separated hash functions:
- H4 (Message pre-hashing): hashMessage pre-hashes large messages before signing, using domain prefix ice-nine-v1-msg. This enables efficient signing of arbitrarily large messages.
- H5 (Commitment list hashing): hashCommitmentList produces canonical encodings of commitment lists, using domain prefix ice-nine-v1-com. This ensures all parties compute identical binding factors.
def hashMessage (S : Scheme) (msg : S.Message) : ByteArray :=
HashDomain.message ++ serializeMessage msg
def hashCommitmentList (S : Scheme) (commits : List (SignCommitMsg S)) : ByteArray :=
HashDomain.commitmentList ++ encodeCommitmentList S commits
Binding Factor Derivation
The binding factor binds each signer's nonce to the full signing context:
ρ_i = H_rho(i, hashMessage(m), hashCommitmentList(commits))
where H_rho uses domain prefix ice-nine-v1-rho.
The effective nonce is then:
W_i = W_hiding_i + ρ_i·W_binding_i
This prevents an adversary from adaptively choosing the message after seeing nonce commitments.
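The effective-nonce computation reduces to a weighted sum. The following toy sketch uses Int in place of ring elements; ToyCommitPair, effectiveNonce, and aggregateW are illustrative names, and a real binding factor ρ_i would come from the domain-separated hash rather than being passed in directly:

```lean
-- Toy sketch of the effective-nonce computation (Int stands in for ring elements).
structure ToyCommitPair where
  hidingVal  : Int  -- W_hiding_i
  bindingVal : Int  -- W_binding_i

-- W_i = W_hiding_i + ρ_i · W_binding_i
def effectiveNonce (c : ToyCommitPair) (rho : Int) : Int :=
  c.hidingVal + rho * c.bindingVal

-- w = Σ (W_hiding_i + ρ_i · W_binding_i)
def aggregateW (cs : List (ToyCommitPair × Int)) : Int :=
  cs.foldl (fun acc (c, rho) => acc + effectiveNonce c rho) 0

#eval aggregateW [(⟨2, 3⟩, 5), (⟨1, 4⟩, 7)]  -- (2 + 5·3) + (1 + 7·4) = 46
```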
Party State
Each party maintains local state during signing.
structure SignLocalState (S : Scheme) where
share : KeyShare S -- party's long-term credential from DKG
msg : S.Message -- message being signed
session : Nat -- unique session identifier
nonces : SigningNonces S -- dual ephemeral nonces (NEVER reuse!)
commits : SigningCommitments S -- public nonce commitments
openW : S.Opening -- commitment randomness for hiding commitment
The local state contains:
- The ephemeral nonces (hidingVal, bindingVal) sampled for this session.
- The public commitments W_hiding and W_binding.
- The commitment opening for the hiding commitment.
- The session identifier for binding.
Round 1: Nonce Commitment
In round one each party commits to an ephemeral nonce. This commitment prevents adaptive attacks where an adversary chooses its nonce after seeing others.
Procedure (session-typed API)
The implementation no longer exposes raw signRound1/2. Use the linear, session-typed API in Protocol/Sign/Session.lean:
open IceNine.Protocol.SignSession
-- caller supplies fresh randomness
let ready := initSession S keyShare msg session hidingNonce bindingNonce opening
match commit S ready with
| .error err => -- session reuse or commitment reuse detected locally
| .ok { committed, msg := commitMsg, tracker, nonceReg } =>
-- after collecting commits and computing challenge/binding factors:
let (revealed, revealMsg, tracker', nonceReg') :=
reveal S committed challenge bindingFactor aggregateW tracker nonceReg
match sign S revealed tracker' nonceReg' with
| .inl (signed, tracker'', nonceReg'') => finalize S signed
| .inr _ => -- retry with NEW FreshNonce via retryWithFreshNonce
Round‑1 message contents are identical: each SignCommitMsg carries the hash commitment plus both public nonce values.
Each party still samples fresh nonces (y_hiding, y_binding), computes W_hiding = A(y_hiding) and W_binding = A(y_binding), samples an opening, and broadcasts the hash commitment to W_hiding together with both public nonces. The implementation also:
- marks the session as used in a per-party SessionTracker
- records commitW in a per-party NonceRegistry and rejects if the same commitment was used in a different session for that party.
Commit Phase State
The commit phase accumulates commit messages in a semilattice structure.
structure CommitState (S : Scheme) :=
(commits : List (SignCommitMsg S))
instance (S : Scheme) : Join (CommitState S) :=
⟨fun a b => { commits := a.commits ⊔ b.commits }⟩
Waiting for Commits
Each party waits until it receives commitments from all other parties in the signing set. If a commitment does not arrive within a timeout the party may abort or exclude the missing party.
Nonce Reveal and Challenge Computation
After all commitments are received parties reveal their nonces and compute the shared challenge.
Reveal Phase
Each party broadcasts only the opening for its hiding commitment.
The public nonces (hidingVal, bindingVal) were already sent in Round 1 inside SignCommitMsg.
Trackers (SessionTracker, NonceRegistry) are threaded through. No additional updates occur in this phase.
Verification
Each party verifies all received openings. For each party j, check that S.commit(hidingVal_j, opening_j) = commitW_j.
If any opening is invalid the party aborts the session. It may also file a complaint identifying the misbehaving party.
Nonce Aggregation
After successful verification compute the aggregated nonce w = Σ_j (W_hiding_j + ρ_j·W_binding_j).
Challenge Derivation
Compute the challenge using the hash function: c = H(m, pk, Sset, commits, w).
The challenge depends on the message, public key, signer set, all commitments, and the aggregated nonce. Including the commitments in the hash input binds the challenge to the commit phase.
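The binding described above can be sketched with a toy hash. Here hashBytes and challenge are illustrative stand-ins (a simple polynomial hash over Nat encodings), not the scheme's domain-separated H functions:

```lean
-- Toy sketch of session binding: every input that must bind the challenge
-- goes into one hash input. hashBytes stands in for the real hash.
def hashBytes (input : List Nat) : Nat :=
  input.foldl (fun h x => (h * 31 + x) % 1000003) 7

def challenge (m pk w : Nat) (Sset commits : List Nat) : Nat :=
  hashBytes (m :: pk :: w :: (Sset ++ commits))

-- Changing any bound input (here the aggregated nonce w) changes the challenge:
#eval challenge 22 11 5 [1, 2] [90, 91] != challenge 22 11 6 [1, 2] [90, 91]  -- true
```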
Round 2: Partial Signatures
In round two each party produces its partial signature.
n-of-n Case
In the n-of-n setting all parties must participate. The session transition sign
in SignSession computes each partial signature as
z_i = y_i + c·s_i, where y_i = y_hiding_i + ρ_i·y_binding_i is the effective nonce.
This is the standard Schnorr response. The ephemeral secret y_i masks the product of challenge and secret share. The function returns none if the norm check fails.
t-of-n Case
In the t-of-n setting a subset of t parties signs. Each party must adjust its contribution using Lagrange coefficients.
structure LagrangeCoeff (S : Scheme) :=
(pid : S.PartyId)
(lambda : S.Scalar)
Let λ_i denote the Lagrange coefficient for party i over the signing set Sset. Party i computes
z_i = y_i + c·λ_i·s_i
The Lagrange coefficient ensures that the sum of adjusted shares equals the master secret: Σ_{i∈Sset} λ_i·s_i = s.
Lagrange Coefficient Computation
def lagrangeCoeffAtZero
(S : Scheme) [Field S.Scalar] [DecidableEq S.PartyId]
(pidToScalar : S.PartyId → S.Scalar)
(Sset : List S.PartyId) (i : S.PartyId) : S.Scalar
The Lagrange coefficient for party i over set Sset is
λ_i = Π_{j∈Sset, j≠i} x_j / (x_j − x_i)
where x_i is party i's evaluation point. This computation uses party identifiers as evaluation points. All arithmetic is in the scalar field S.Scalar. Returns 0 if duplicate identifiers would cause division by zero.
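A worked toy example makes the coefficients concrete. This self-contained sketch works over Z_p with p = 97 and signer set {1, 2, 3}; invMod and lagrangeAtZero are illustrative helpers, not the codebase's definitions:

```lean
-- Toy Lagrange coefficients at zero over Z_p, p = 97 (illustrative only).
def p : Nat := 97

def invMod (a : Nat) : Nat :=  -- brute-force inverse, fine for tiny p
  ((List.range p).find? (fun x => (a * x) % p == 1)).getD 0

def lagrangeAtZero (Sset : List Nat) (i : Nat) : Nat :=
  Sset.foldl (fun acc j =>
    if j == i then acc
    else acc * j % p * invMod ((j + p - i) % p) % p) 1

#eval [1, 2, 3].map (lagrangeAtZero [1, 2, 3])  -- [3, 94, 1]; 94 ≡ −3 (mod 97)

-- Reconstruction check: for f(x) = 42 + 5x + 11x², Σ λ_i·f(i) = f(0) = 42
def f (x : Nat) : Nat := (42 + 5 * x + 11 * x * x) % p
#eval [1, 2, 3].foldl (fun acc i => (acc + lagrangeAtZero [1, 2, 3] i * f i) % p) 0  -- 42
```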
Norm Check
Before broadcasting the partial signature each party may check the norm bound ‖z_i‖∞ ≤ B_local.
If the norm exceeds the bound the party aborts this session. It can retry with a fresh nonce. This check prevents leakage through the response distribution.
Broadcasting Partials
Each party broadcasts its partial signature z_i to all parties in the signing set.
Aggregation
An aggregator collects all partial signatures and produces the final signature.
Shares Phase State
The shares phase accumulates partial signatures along with threshold context. Messages are stored in MsgMap structures keyed by sender ID, making conflicting messages from the same sender un-expressable.
structure ShareState (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] :=
(commits : CommitMap S) -- at most one commit per party
(reveals : RevealMap S) -- at most one reveal per party
(shares : ShareMap S) -- at most one share per party
(active : Finset S.PartyId)
structure ShareWithCtx (S : Scheme) :=
(state : ShareState S)
(ctx : ThresholdCtx S)
The MsgMap structure provides conflict detection: tryAddShare returns a conflict indicator if a party attempts to submit multiple shares, enabling detection of Byzantine behavior.
The threshold context ensures aggregation only proceeds when sufficient parties contribute.
inductive SignMode | all | threshold
inductive CoeffStrategy (S : Scheme)
| ones -- n-of-n: z = Σ z_i
| lagrange (coeffs : List (LagrangeCoeff S)) -- t-of-n: z = Σ λ_i·z_i
structure ThresholdCtx (S : Scheme) :=
(active : Finset S.PartyId) -- participating signers
(t : Nat) -- threshold (t = |active| for n-of-n)
(mode : SignMode) -- n-of-n vs t-of-n
(strategy : CoeffStrategy S) -- aggregation method
(card_ge : t ≤ active.card) -- proof: have enough signers
(strategy_ok : strategyOK S active.toList strategy) -- coeffs align
The CoeffStrategy type selects the aggregation path:
- ones: simple sum z = Σ z_i (n-of-n, no field operations)
- lagrange coeffs: weighted sum z = Σ λ_i·z_i (t-of-n, requires field)
Context Constructors
Smart constructors build ThresholdCtx with proofs:
-- n-of-n: threshold = |active|, strategy = ones
def mkAllCtx (S : Scheme) [DecidableEq S.PartyId]
(active : Finset S.PartyId) : ThresholdCtx S
-- t-of-n with pre-supplied coefficients
def mkThresholdCtx
(S : Scheme) [DecidableEq S.PartyId]
(active : Finset S.PartyId)
(t : Nat)
(coeffs : List (LagrangeCoeff S))
(hcard : t ≤ active.card)
(halign : coeffs.map (·.pid) = active.toList)
(hlen : coeffs.length = active.toList.length)
: ThresholdCtx S
-- t-of-n with computed Lagrange coefficients (requires Field)
def mkThresholdCtxComputed
(S : Scheme) [Field S.Scalar] [DecidableEq S.PartyId]
(active : Finset S.PartyId)
(t : Nat)
(pidToScalar : S.PartyId → S.Scalar)
(hcard : t ≤ active.card) : ThresholdCtx S
Collecting Partials
The aggregator receives z_i from each party i in the signing set. If any partial is missing the aggregator may wait or abort.
Signature Structure
structure Signature (S : Scheme) :=
(z : S.Secret)
(c : S.Challenge)
(Sset : List S.PartyId)
(commits : List S.Commitment)
(context : ExternalContext := {}) -- merged from contributing shares
The signature's context is merged from all contributing shares, enabling external protocols to verify consensus binding.
Signature Computation
For n-of-n aggregation:
def aggregateSignature
(S : Scheme)
(c : S.Challenge)
(Sset : List S.PartyId)
(commits : List S.Commitment)
(shares : List (SignShareMsg S))
: Signature S
Compute the aggregated response z = Σ_i z_i.
For t-of-n aggregation with Lagrange weighting:
def aggregateSignatureLagrange
(S : Scheme)
(c : S.Challenge)
(Sset : List S.PartyId)
(commits : List S.Commitment)
(coeffs : List (LagrangeCoeff S))
(shares : List (SignShareMsg S))
: Signature S
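Both aggregation paths reduce to folds over the share list. A toy sketch, with Int standing in for the response vector type and aggregateOnes / aggregateLagrangeToy as illustrative names:

```lean
-- Toy sketch of the two aggregation paths.
-- n-of-n: plain sum of shares.
def aggregateOnes (zs : List Int) : Int :=
  zs.foldl (· + ·) 0

-- t-of-n: Lagrange-weighted sum of shares.
def aggregateLagrangeToy (lams zs : List Int) : Int :=
  (lams.zip zs).foldl (fun acc (lam, z) => acc + lam * z) 0

#eval aggregateOnes [3, 5, -2]                    -- 6
#eval aggregateLagrangeToy [3, -3, 1] [10, 4, 7]  -- 3·10 − 3·4 + 1·7 = 25
```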
Strategy-Based Aggregation
The aggregateWithStrategy function selects the aggregation path based on the coefficient strategy:
def aggregateWithStrategy
(S : Scheme) [DecidableEq S.PartyId]
(c : S.Challenge)
(active : List S.PartyId)
(commits : List S.Commitment)
(shares : List (SignShareMsg S))
(strategy : CoeffStrategy S)
: Option (Signature S)
This function validates that all shares come from the declared active set, then dispatches to aggregateSignature (for .ones) or aggregateSignatureLagrange (for .lagrange coeffs).
Context-Based Aggregation
For full validation using the threshold context:
def aggregateSignatureWithCtx
(S : Scheme) [DecidableEq S.PartyId]
(c : S.Challenge)
(ctx : ThresholdCtx S)
(commits : List S.Commitment)
(shares : List (SignShareMsg S))
: Option (Signature S)
This function checks sharesFromActive (membership validation) before delegating to aggregateWithStrategy using the context's strategy.
Final Signature
The signature consists of the following components.
The signature includes the aggregated response, the challenge, the signer set, and all nonce commitments.
Correctness
The aggregated response satisfies the verification equation. Consider the n-of-n case:
z = Σ_i z_i = Σ_i (y_i + c·s_i) = y + c·s
where y = Σ_i y_i and s = Σ_i s_i. Applying the linear map A gives
A(z) = A(y) + c·A(s) = w + c·pk
Rearranging yields
w = A(z) − c·pk
The verifier recomputes w from this equation and checks the challenge.
t-of-n Correctness
In the t-of-n case the Lagrange coefficients ensure Σ_{i∈Sset} λ_i·s_i = s.
The rest of the argument follows as above.
Session Binding
The challenge binds to the specific session through several mechanisms.
- The message is included in the hash input.
- The public key is included.
- The signer set is included.
- All nonce commitments are included.
- The aggregated nonce is included.
This binding prevents cross-session attacks where an adversary reuses partial signatures from different sessions.
Transcript Validation
The validateSigning function checks the entire signing transcript before aggregation.
Its current signature also requires precomputed binding factors (derived from the commit list):
def validateSigning
(S : Scheme) [DecidableEq S.PartyId]
(pk : S.Public)
(m : S.Message)
(Sset : List S.PartyId)
(commits : List (SignCommitMsg S))
(reveals : List (SignRevealWMsg S))
(bindingFactors : BindingFactors S)
(shares : List (SignShareMsg S))
: Except (SignError S.PartyId) (Signature S)
Callers must compute bindingFactors (e.g., via computeBindingFactors) and supply them. Otherwise validation will not type-check.
Error Types
inductive SignError (PartyId : Type) where
| lengthMismatch : SignError PartyId
| participantMismatch : PartyId → SignError PartyId
| duplicateParticipants : PartyId → SignError PartyId
| commitMismatch : PartyId → SignError PartyId
| sessionMismatch : Nat → Nat → SignError PartyId
Note: Legacy error variants (normCheckFailed, maxRetriesExceeded, sessionAborted) have been removed. With local rejection sampling, norm bounds are handled locally. Session-level aborts use AbortReason from Protocol/Core/Abort.lean.
Valid Transcript Predicate
structure ValidSignTranscript (S : Scheme)
(Sset : List S.PartyId)
(commits : List (SignCommitMsg S))
(reveals : List (SignRevealWMsg S))
(shares : List (SignShareMsg S)) : Prop where
len_comm_reveal : commits.length = reveals.length
len_reveal_share : reveals.length = shares.length
pids_nodup : (commits.map (·.sender)).Nodup
pids_eq : commits.map (·.sender) = Sset
commit_open_ok : List.Forall₂ (fun c r => c.sender = r.sender ∧ S.commit c.hidingVal r.opening = c.commitW) commits reveals
sessions_ok : let sess := (commits.head?.map (·.session)).getD 0; ∀ sh ∈ shares, sh.session = sess
Monotonic Step Functions
State transitions are monotone with respect to the semilattice order.
def stepCommit (S : Scheme) (msg : SignCommitMsg S) (st : CommitState S) : CommitState S :=
{ commits := st.commits ⊔ [msg] }
lemma stepCommit_monotone (S : Scheme) (msg : SignCommitMsg S) :
∀ a b, a ≤ b → stepCommit S msg a ≤ stepCommit S msg b
Similar monotonicity holds for stepReveal and stepShare.
Abort Conditions
The signing protocol may abort under several conditions.
Missing commitment. A party in the signing set does not broadcast its commitment.
Invalid opening. A received opening does not match its commitment.
Missing partial. A party does not broadcast its partial signature.
Norm violation. A partial signature fails the norm check.
Inconsistent challenge. Parties compute different challenges due to inconsistent views.
Session mismatch. Messages have inconsistent session identifiers.
When aborting parties should discard all session state. They should not reuse the ephemeral nonce in any future session.
Session Tracking and Nonce Safety
CRITICAL: Nonce reuse completely breaks Schnorr-style signatures.
If the same nonce y is used with two different challenges c ≠ c′:
z = y + c·s and z′ = y + c′·s
The secret key can be recovered:
s = (z − z′)·(c − c′)⁻¹
This attack is formalized in Proofs/Soundness/Soundness.lean as the nonce_reuse_key_recovery theorem. See Verification: Special Soundness for the formal proof.
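The recovery is easy to demonstrate concretely. This self-contained toy runs over Z_q with made-up parameters (q = 97, nothing like Dilithium's moduli); modInv is an illustrative brute-force helper:

```lean
-- Toy nonce-reuse key recovery over Z_q (illustrative parameters only).
def q : Nat := 97      -- small prime modulus
def s : Nat := 42      -- secret share
def y : Nat := 17      -- the REUSED nonce
def c₁ : Nat := 5
def c₂ : Nat := 9

def z₁ : Nat := (y + c₁ * s) % q   -- response in session 1
def z₂ : Nat := (y + c₂ * s) % q   -- response in session 2

def modInv (a : Nat) : Nat :=      -- brute-force inverse, fine for tiny q
  ((List.range q).find? (fun x => (a * x) % q == 1)).getD 0

-- Attacker computes s = (z₁ − z₂)·(c₁ − c₂)⁻¹ mod q
#eval ((z₁ + q - z₂) % q) * modInv ((c₁ + q - c₂) % q) % q  -- 42: full key recovery
```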
Session Tracker
Each party tracks used session IDs:
structure SessionTracker (S : Scheme) where
usedSessions : Finset Nat -- sessions that have been used
partyId : S.PartyId
def SessionTracker.isFresh (tracker : SessionTracker S) (session : Nat) : Bool :=
session ∉ tracker.usedSessions
def SessionTracker.markUsed (tracker : SessionTracker S) (session : Nat) : SessionTracker S :=
{ tracker with usedSessions := tracker.usedSessions.insert session }
Session Validation
inductive SessionCheckResult
| ok -- session is fresh
| alreadyUsed (session : Nat) -- DANGER: would reuse nonce
| invalidId
def checkSession (tracker : SessionTracker S) (session : Nat) : SessionCheckResult :=
if tracker.isFresh session then .ok
else .alreadyUsed session
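The intended flow is: mark a session used at commit time, then refuse any later attempt on the same id. A self-contained toy version (List in place of Finset; ToyTracker is an illustrative name, not the codebase's type):

```lean
-- Toy session tracker: a session id may be consumed at most once.
structure ToyTracker where
  usedSessions : List Nat := []

def ToyTracker.isFresh (t : ToyTracker) (s : Nat) : Bool :=
  !(t.usedSessions.contains s)

def ToyTracker.markUsed (t : ToyTracker) (s : Nat) : ToyTracker :=
  { t with usedSessions := s :: t.usedSessions }

def afterFirstSign : ToyTracker := ToyTracker.markUsed {} 7

#eval afterFirstSign.isFresh 7  -- false: signing session 7 again would reuse a nonce
#eval afterFirstSign.isFresh 8  -- true: a new session id is safe
```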
Nonce Registry
For network-wide detection, the NonceRegistry uses HashMap-based indices for O(1) lookup:
structure NonceRegistry (S : Scheme)
[BEq S.PartyId] [Hashable S.PartyId]
[BEq S.Commitment] [Hashable S.Commitment] where
bySession : Std.HashMap (S.PartyId × Nat) S.Commitment -- primary index
byCommitment : Std.HashMap (S.PartyId × S.Commitment) (List Nat) -- reverse index
def NonceRegistry.hasCommitment (reg : NonceRegistry S)
(pid : S.PartyId) (session : Nat) (commit : S.Commitment) : Bool :=
match reg.bySession.get? (pid, session) with
| some c => c == commit
| none => false
def NonceRegistry.detectReuse (reg : NonceRegistry S)
(pid : S.PartyId) (commit : S.Commitment) : Option (Nat × Nat) :=
match reg.byCommitment.get? (pid, commit) with
| some (s1 :: s2 :: _) => if s1 ≠ s2 then some (s1, s2) else none
| _ => none
The dual index structure provides:
- O(1) lookup by (partyId, session) for commitment retrieval
- O(1) reuse detection by (partyId, commitment) for security monitoring
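The reverse-index lookup is the security-critical part: the same commitment appearing under two different sessions for one party means a nonce was reused. A toy sketch with association pairs standing in for the HashMap (detectReuseToy is an illustrative name):

```lean
-- Toy reverse-index reuse detection: pairs (commitment, session) stand in
-- for the byCommitment HashMap.
def detectReuseToy (byCommit : List (Nat × Nat)) (commit : Nat) : Option (Nat × Nat) :=
  match (byCommit.filter (·.1 == commit)).map (·.2) with
  | s1 :: s2 :: _ => if s1 != s2 then some (s1, s2) else none
  | _ => none

-- Commitment 42 appears in sessions 1 and 3 for the same party: nonce reuse.
#eval detectReuseToy [(42, 1), (99, 2), (42, 3)] 42  -- some (1, 3)
```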
Local Rejection Sampling
In lattice signatures, the response must have bounded norm to prevent leakage. Ice Nine uses local rejection sampling where each signer independently ensures their partial signature satisfies norm bounds before broadcasting.
Key Invariant
If each of T signers produces z_i with ‖z_i‖∞ ≤ B_local, then the aggregate z = Σ z_i satisfies ‖z‖∞ ≤ T·B_local ≤ B_global.
This eliminates global rejection as a distributed control-flow path.
Local Sign Result
inductive LocalSignResult (S : Scheme)
| success (z_i : S.Secret) (hidingNonce bindingNonce : S.Secret) (attempts : Nat)
| failure (err : LocalRejectionError S.PartyId)
Rejection Loop
Each signer runs local rejection sampling independently:
def localRejectionLoop (S : Scheme) (cfg : ThresholdConfig)
(partyId : S.PartyId) (sk_i : S.Secret) (challenge : S.Challenge)
(bindingFactor : S.Scalar) (sampleNonce : IO (S.Secret × S.Secret))
: IO (LocalSignResult S)
The loop:
- Samples a fresh nonce pair (y_hiding, y_binding)
- Computes z_i = (y_hiding + ρ·y_binding) + c·sk_i
- Checks ‖z_i‖∞ ≤ B_local
- Returns success, or retries with new nonces (up to maxLocalAttempts)
Parallelization
Local rejection is trivially parallelizable:
- Across signers: Each signer's loop is independent
- Within signer: Batch multiple nonce candidates (localRejectionLoopParallel)
- Within batch: SIMD/vectorized norm checking
Reference: Lyubashevsky, "Fiat-Shamir with Aborts", ASIACRYPT 2009.
Session Abort Coordination
While local rejection handles norm bounds, session-level aborts are needed for:
- Liveness failures: Parties offline, session times out
- Security violations: Trust assumption violated (more than f faulty parties)
- Explicit cancellation: Group decides to abandon signing
Abort Reasons
inductive AbortReason (PartyId : Type*) where
-- Liveness failures (require f+1 votes)
| timeout (respondents : Nat) (required : Nat)
| insufficientParticipants (actual : Nat) (required : Nat)
| partyUnresponsive (parties : List PartyId)
-- Security violations (immediate abort, no voting)
| trustViolation (faultyCount : Nat) (maxTolerable : Nat)
| globalNormExceeded (aggregateNorm : Nat) (bound : Nat)
| tooManyComplaints (complaints : Nat) (maxTolerable : Nat)
-- Explicit request (requires f+1 votes)
| requestedBy (requester : PartyId)
Abort State (CRDT)
structure AbortState (S : Scheme) where
session : Nat
votes : Finset S.PartyId
reasons : List (AbortReason S.PartyId)
immediateTriggered : Bool := false
instance (S : Scheme) : Join (AbortState S) :=
⟨fun a b => {
session := a.session
votes := a.votes ∪ b.votes
reasons := (a.reasons ++ b.reasons).dedup
immediateTriggered := a.immediateTriggered || b.immediateTriggered
}⟩
Abort Threshold
Abort consensus uses ThresholdConfig.abortThreshold, which is f + 1 where f = maxFaulty (the maximum number of faulty parties). This ensures at least one honest party agreed to abort, preventing malicious minorities from forcing spurious aborts.
def ThresholdConfig.abortThreshold (cfg : ThresholdConfig) : Nat :=
cfg.maxFaulty + 1
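A quick numeric check of the threshold, using illustrative BFT parameters (n = 3f + 1 = 10 validators, f = 3):

```lean
-- With f = 3 Byzantine parties tolerated, abort requires f + 1 = 4 votes,
-- so the Byzantine parties alone can never force a spurious abort.
def maxFaulty : Nat := 3
def abortThreshold : Nat := maxFaulty + 1

#eval abortThreshold  -- 4
```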
def AbortState.shouldAbort (state : AbortState S) (cfg : ThresholdConfig) : Bool :=
state.hasImmediateReason || state.isConfirmedByVotes cfg
Security violations trigger immediate abort without voting.
Session-Typed Signing
The Protocol/Sign/Session.lean module provides a session-typed API that makes nonce reuse a compile-time error rather than a runtime check. Each signing session progresses through states that consume the previous state, ensuring nonces are used exactly once.
Session State Machine
ReadyToCommit ──► Committed ──► Revealed ──► SignedWithStats ──► Done
│ │
└──────► Aborted ◄──────┘
Each transition consumes its input state. There is no way to "go back" or reuse a consumed state. The ReadyToCommit state contains a FreshNonce that gets consumed during the commit transition. Sessions may transition to Aborted from Committed or Revealed when abort consensus is reached.
Linear Nonce (Dual Nonce Version)
Following FROST, we use two nonces per signer:
structure FreshNonce (S : Scheme) where
private mk ::
private hidingNonce : S.Secret -- secret hiding nonce
private bindingNonce : S.Secret -- secret binding nonce
private publicHiding : S.Public -- W_hiding = A(hiding)
private publicBinding : S.Public -- W_binding = A(binding)
private opening : S.Opening -- commitment opening
def FreshNonce.sample (S : Scheme) (h b : S.Secret) (opening : S.Opening) : FreshNonce S
The private constructor prevents arbitrary creation. The only way to create a FreshNonce is via sample with fresh randomness for both hiding and binding nonces.
Session States
Each state carries the data accumulated up to that point (dual nonce version):
structure ReadyToCommit (S : Scheme) where
keyShare : KeyShare S
nonce : FreshNonce S -- dual nonces, will be consumed on commit
message : S.Message
session : Nat
structure Committed (S : Scheme) where
keyShare : KeyShare S
private hidingNonce : S.Secret -- moved from FreshNonce
private bindingNonce : S.Secret -- moved from FreshNonce
publicHiding : S.Public
publicBinding : S.Public
opening : S.Opening
message : S.Message
session : Nat
commitment : S.Commitment
nonceConsumed : NonceConsumed S -- proof token
structure Revealed (S : Scheme) where
keyShare : KeyShare S
private hidingNonce : S.Secret
private bindingNonce : S.Secret
publicHiding : S.Public
publicBinding : S.Public
message : S.Message
session : Nat
challenge : S.Challenge
bindingFactor : S.Scalar -- ρ for this signer
aggregateNonce : S.Public -- w = Σ (W_hiding_i + ρ_i·W_binding_i)
structure SignedWithStats (S : Scheme) extends Signed S where
localAttempts : Nat -- number of local rejection attempts
hidingNonce : S.Secret -- the nonce used
bindingNonce : S.Secret
structure Done (S : Scheme) where
shareMsg : SignShareMsg S
structure Aborted (S : Scheme) where
session : Nat
reason : AbortReason S.PartyId
voteCount : Nat
Session Transitions
Each transition function consumes its input and produces the next state:
-- Consumes ReadyToCommit, produces Committed
-- The FreshNonce inside is consumed; cannot be accessed again
def commit (S : Scheme) (ready : ReadyToCommit S)
: Except (SignError S.PartyId) (CommitResult S)
-- Consumes Committed, produces Revealed
-- Receives challenge, binding factor, and aggregate nonce from coordinator
def reveal (S : Scheme) (committed : Committed S)
(challenge : S.Challenge) (bindingFactor : S.Scalar) (aggregateW : S.Public)
: Revealed S × SignRevealWMsg S × SessionTracker S × NonceRegistry S
-- Consumes Revealed, produces SignedWithStats using local rejection sampling
-- Never fails norm check because rejection happens internally
def signWithLocalRejection (S : Scheme) (revealed : Revealed S)
(cfg : ThresholdConfig) (sampleNonce : IO (S.Secret × S.Secret))
: IO (Except (LocalRejectionError S.PartyId) (SignedWithStats S))
-- Consumes Signed, produces Done
def finalize (S : Scheme) (signed : Signed S) : Done S
-- Abort transitions (consume current state, produce Aborted)
def abortFromCommitted (S : Scheme) (committed : Committed S)
(reason : AbortReason S.PartyId) (voteCount : Nat) : Aborted S
def abortFromRevealed (S : Scheme) (revealed : Revealed S)
(reason : AbortReason S.PartyId) (voteCount : Nat) : Aborted S
Local Rejection Sampling
Ice Nine implements restart-free threshold Dilithium signing with local rejection sampling. Each signer applies rejection sampling locally before sending their partial signature. The bounds are chosen so that any valid combination of partials automatically satisfies global bounds.
The key invariant is that if each of T signers produces z_i with ‖z_i‖∞ ≤ B_local, then the aggregate z = Σz_i satisfies ‖z‖∞ ≤ T · B_local ≤ B_global. This eliminates global rejection as a distributed control-flow path. Given sufficient honest parties at or above threshold, signing does not trigger a distributed restart due to rejection sampling.
With local rejection sampling, norm bounds are handled locally by each signer. Each signer runs rejection sampling independently until producing a valid partial. No global retry coordination is needed. The signWithLocalRejection function never returns a norm failure.
Design Rationale
Local rejection sampling is designed for BFT consensus integration. In a typical BFT setting with n = 3f+1 validators and threshold t = 2f+1, at most f validators can be Byzantine. The signing protocol must produce signatures despite malicious behavior, without allowing a small coalition to stall progress.
The naive approach to threshold Dilithium would check the global bound after aggregation. If the aggregate z exceeds the bound, all parties must restart with fresh randomness. This creates a liveness vulnerability: a single malicious party could repeatedly cause the ceremony to fail by sending out-of-bounds partials.
Local rejection eliminates this vulnerability through a mathematical guarantee. By the triangle inequality, ‖z‖∞ = ‖Σ z_i‖∞ ≤ Σ ‖z_i‖∞. If each partial satisfies ‖z_i‖∞ ≤ B_local and we aggregate at most T partials, then ‖z‖∞ ≤ T · B_local. Setting B_local = B_global / T guarantees that any combination of valid partials produces a valid aggregate. Global overflow becomes mathematically impossible, not just unlikely.
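The bound arithmetic can be checked directly. Bglobal below is a made-up illustrative number, not an ML-DSA parameter:

```lean
-- Illustrative bound check: with B_local = B_global / T (integer division),
-- any T partials within B_local aggregate within B_global.
def Bglobal : Nat := 523776
def T : Nat := 3
def Blocal : Nat := Bglobal / T   -- 174592

#eval T * Blocal ≤ Bglobal  -- true
```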
The aggregator enforces per-share bounds and discards invalid partials. With at least 2f+1 honest validators, there always exist t valid partials to aggregate. Malicious parties can only cause their own shares to be rejected. They cannot force a distributed restart.
Trade-offs
Local rejection changes the scheme parameters compared to single-signer Dilithium.
More local sampling. Tighter per-signer bounds (B_local < B_global) increase the expected number of rejection sampling iterations per signer. This is additional CPU cost but does not affect the distributed protocol. The extra work is invisible to other parties.
Narrower distribution. The effective distribution on z is more constrained than standard Dilithium. This means the scheme is "Dilithium-like with stricter bounds" rather than a bit-for-bit clone of ML-DSA. Security arguments must account for the modified bounds.
Exceptional global rejection. If global rejection ever occurs despite local bounds, it indicates either a negligible probability event or Byzantine behavior (shares that passed local validation but were crafted to overflow when combined). This is treated as a security anomaly, not normal control flow.
Local Rejection vs Global Abort
Session aborts are for global failures that cannot be resolved locally:
- Timeout waiting for other parties
- Security violations (trust assumption breached)
- Explicit cancellation by consensus
Type-Level Guarantees
The session type system provides these guarantees:
- Nonce uniqueness: Each FreshNonce can only be consumed once. The commit function consumes the ReadyToCommit state, and the nonce value moves into Committed. There's no path back.
- Linear flow: States can only progress forward. Each transition consumes its input. Sessions may also transition to Aborted.
- Local rejection: With signWithLocalRejection, norm bounds are satisfied during signing via local rejection sampling. No global retry loop is needed.
- Compile-time enforcement: These are not runtime checks. The type system prevents writing code that would reuse a nonce.
- Abort safety: Aborted sessions cannot continue. Once a session transitions to Aborted, any consumed nonces are invalidated.
Example: Nonce Reuse is Uncompilable
-- This CANNOT compile because `ready` is consumed by first `commit`
def badNonceReuse (S : Scheme) (ready : ReadyToCommit S) :=
let (committed1, _) := commit S ready -- consumes ready
let (committed2, _) := commit S ready -- ERROR: ready already consumed
(committed1, committed2)
Fast-Path Signing
For latency-sensitive applications, Ice Nine supports single-round signing when nonces are precomputed and pre-shared. This mode skips the commit-reveal round by preparing nonces during idle time.
Precomputed Nonces
structure PrecomputedNonce (S : Scheme) where
commitment : S.Commitment
nonce : FreshNonce S
publicHiding : S.Public
publicBinding : S.Public
generatedAt : Nat
maxAge : Nat := 3600 -- 1 hour default expiry
structure NoncePool (S : Scheme) where
available : List (PrecomputedNonce S)
consumedCount : Nat := 0
Fast Signing Function
def signFast (S : Scheme)
(keyShare : KeyShare S)
(precomputed : PrecomputedNonce S)
(challenge : S.Challenge)
(bindingFactor : S.Scalar)
(session : Nat)
(context : ExternalContext := {})
: Option (SignShareMsg S)
This function produces a signature share immediately using a precomputed nonce. It SHOULD return none if the norm check fails, requiring a fresh nonce. The current Lean code stubs out the norm check and always returns some. Production code must call S.normOK here and propagate failure.
Security Assumptions
Fast-path signing trusts that:
- Nonce commitments were honestly generated
- Nonces are not being reused (enforced by the FreshNonce type)
- Precomputed nonces have not expired
- The coordinator will aggregate honestly
Use standard two-round signing when these assumptions don't hold.
See Protocol Integration for detailed usage patterns with external consensus protocols.
Security Considerations
Nonce reuse. Using the same nonce in two sessions leaks the secret share s_i. Session types make nonce reuse a compile-time error.
Commitment binding. The commitment prevents a party from choosing its nonce adaptively. This protects against rogue key attacks.
Challenge entropy. The hash function must produce challenges with sufficient entropy. In the random oracle model this is guaranteed.
Norm bounds. The norm check prevents statistical leakage of the secret through the response distribution. Local rejection sampling ensures each partial signature is independent of the secret. No global retry coordination is needed.
Abort coordination. Session-level aborts (for liveness failures or security violations) use CRDT state with an f + 1 threshold, ensuring at least one honest party agreed to abort.
Totality. The validation function always returns either a valid signature or a structured error.
Nonce Lifecycle
The Protocol/Sign/NonceLifecycle.lean module provides unified effect abstractions for nonce management. It consolidates session tracking, nonce registry, and pool management into a single composable interface with explicit error handling.
Nonce Safety Invariants
The module enforces four safety invariants.
Session uniqueness ensures each session ID is used at most once per party. Reusing a session ID could lead to nonce reuse.
Commitment uniqueness ensures each nonce commitment appears in at most one session. If the same commitment appears twice, a nonce was reused.
Consumption tracking ensures nonces are consumed exactly once. The FreshNonce type enforces this at compile time. The runtime registry provides defense-in-depth.
Expiry enforcement ensures precomputed nonces expire and cannot be used past their maximum age. This prevents stale nonces from being used after key rotation.
NonceState Structure
The NonceState structure consolidates all nonce-related tracking.
structure NonceState (S : Scheme) where
partyId : S.PartyId
usedSessions : Finset Nat
commitBySession : Std.HashMap (S.PartyId × Nat) S.Commitment
sessionsByCommit : Std.HashMap (S.PartyId × S.Commitment) (List Nat)
poolAvailable : List (PrecomputedNonce S)
poolConsumed : Nat
The usedSessions field tracks all sessions where commit has occurred. The commitBySession map provides forward lookup from session to commitment. The sessionsByCommit map provides reverse lookup for reuse detection.
NonceError Type
Nonce operations return typed errors for precise error handling.
inductive NonceError (PartyId : Type*)
| sessionAlreadyUsed (session : Nat) (party : Option PartyId)
| commitmentReuse (session1 session2 : Nat) (party : PartyId)
| nonceExpired (generatedAt currentTime maxAge : Nat)
| poolEmpty
| poolExhausted (consumed : Nat)
| invalidSession (reason : String)
| nonceNotFound (session : Nat)
The commitmentReuse error is a security violation. The BlameableError instance identifies the party responsible. Other errors are operational without blame attribution.
Pure Operations
The NonceOp namespace provides pure operations on NonceState.
-- Check session freshness
def checkFresh (session : Nat) (st : NonceState S) : NonceResult S Unit
-- Mark session as used
def markUsed (session : Nat) (st : NonceState S) : NonceResult S Unit
-- Record a commitment
def recordCommitment (session : Nat) (commit : S.Commitment) (st : NonceState S)
: NonceResult S Unit
-- Check for reuse and throw if detected
def assertNoReuse (commit : S.Commitment) (st : NonceState S) : NonceResult S Unit
-- Full commit operation: check fresh, mark used, record, check reuse
def commitSession (session : Nat) (commit : S.Commitment) (st : NonceState S)
: NonceResult S Unit
The commitSession operation combines all checks into a single atomic operation. It ensures session freshness, marks the session as used, records the commitment, and checks for reuse.
Monadic Interface
The NonceM monad provides composition for chaining operations.
abbrev NonceM (S : Scheme) :=
StateT (NonceState S) (Except (NonceError S.PartyId))
-- Run and get final state
def NonceM.run (m : NonceM S α) (st : NonceState S)
: Except (NonceError S.PartyId) (α × NonceState S)
The monadic interface threads state automatically and short-circuits on errors.
Session Type Integration
The module bridges runtime tracking with compile-time session types.
-- Create ReadyToCommit using NonceState for tracking
def initSessionWithState (S : Scheme) (keyShare : KeyShare S) (message : S.Message)
(session : Nat) (nonce : FreshNonce S) (st : NonceState S) : ReadyToCommit S
-- Commit transition that updates NonceState
def commitWithState (S : Scheme) (ready : ReadyToCommit S) (st : NonceState S)
: Except (NonceError S.PartyId) (Committed S × SignCommitMsg S × NonceState S)
The commitWithState function consumes the FreshNonce inside ReadyToCommit. Session types prevent reuse at compile time. The NonceState provides runtime detection as defense-in-depth.
Validated Aggregation
The Protocol/Sign/ValidatedAggregation.lean module provides per-share validation during signature aggregation. This enables local rejection sampling by validating each partial signature before aggregation and collecting blame for invalid shares.
Validation Process
Validated aggregation performs three checks on each share.
Norm bound check verifies ‖z_i‖∞ ≤ B_local where B_local is the local rejection bound from ThresholdConfig. Shares exceeding the bound are rejected.
Algebraic check verifies A(z_i) = w_eff + c·pk_i where w_eff is the effective nonce commitment incorporating the binding factor. This detects malformed shares.
Commitment presence verifies the share has a corresponding commitment from round 1. Missing commitments indicate protocol violations.
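To make the first two checks concrete, here is a toy Rust sketch over 2-dimensional integer vectors with a fixed toy matrix A. All types and names here are illustrative assumptions for exposition, not the module's API:

```rust
// Toy model (illustrative only): shares are 2-vectors of i64 mod Q,
// and A is a fixed public 2x2 matrix standing in for the scheme's linear map.
const Q: i64 = 8380417; // familiar ML-DSA modulus, used here only as a constant

#[derive(Debug, PartialEq)]
enum ShareValidationError {
    NormExceeded { party: u32, actual: i64, bound: i64 },
    AlgebraicInvalid { party: u32 },
}

// Apply the toy matrix A = [[1, 2], [3, 4]]
fn apply_a(z: [i64; 2]) -> [i64; 2] {
    [(z[0] + 2 * z[1]).rem_euclid(Q), (3 * z[0] + 4 * z[1]).rem_euclid(Q)]
}

fn validate_share(
    party: u32,
    z: [i64; 2],
    w_eff: [i64; 2],
    c: i64,
    pk: [i64; 2],
    bound: i64,
) -> Result<(), ShareValidationError> {
    // 1. Norm bound check: ‖z‖∞ ≤ B_local
    let norm = z.iter().map(|x| x.abs()).max().unwrap();
    if norm > bound {
        return Err(ShareValidationError::NormExceeded { party, actual: norm, bound });
    }
    // 2. Algebraic check: A(z) = w_eff + c·pk
    let az = apply_a(z);
    for i in 0..2 {
        if az[i] != (w_eff[i] + c * pk[i]).rem_euclid(Q) {
            return Err(ShareValidationError::AlgebraicInvalid { party });
        }
    }
    Ok(())
}

fn main() {
    // Build a consistent share: z = y + c·sk, w_eff = A(y), pk_i = A(sk)
    let (sk, y, c) = ([2i64, 1], [5i64, 3], 7);
    let z = [y[0] + c * sk[0], y[1] + c * sk[1]];
    let (w_eff, pk) = (apply_a(y), apply_a(sk));
    assert!(validate_share(1, z, w_eff, c, pk, 100).is_ok());
    // An out-of-bound share is rejected with blame attached
    assert!(validate_share(2, [1000, 0], w_eff, c, pk, 100).is_err());
}
```

Because A is linear, a correctly formed share z = y + c·sk satisfies A(z) = A(y) + c·A(sk) = w_eff + c·pk_i, which is exactly what the algebraic check tests.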
ShareValidationError
Validation errors identify the responsible party.
inductive ShareValidationError (PartyId : Type*)
| normExceeded (party : PartyId) (actual : Nat) (bound : Nat)
| algebraicInvalid (party : PartyId) (reason : String)
| missingCommitment (party : PartyId)
| missingBindingFactor (party : PartyId)
All error variants implement BlameableError for blame attribution.
AggregationResult
The aggregation result contains the signature (if successful), blame information, and statistics.
structure AggregationResult (S : Scheme) where
signature : Option (Signature S)
includedParties : List S.PartyId
blame : BlameResult S.PartyId
success : Bool
totalReceived : Nat
validCount : Nat
The blame field contains collected errors according to ProtocolConfig. The includedParties list identifies which shares were aggregated.
aggregateValidated Function
The primary aggregation function validates shares and aggregates valid ones.
def aggregateValidated (S : Scheme) (cfg : ThresholdConfig)
(protocolCfg : ProtocolConfig) (challenge : S.Challenge)
(commitments : List (SignCommitMsg S)) (shares : List (SignShareMsg S))
(pkShares : S.PartyId → S.Public) (bindingFactors : BindingFactors S)
: AggregationResult S
The function validates all shares, partitions them into valid and invalid, collects blame according to protocolCfg, and aggregates the first threshold valid shares.
Guarantee
If at least threshold parties produce valid shares, the aggregate signature is guaranteed to satisfy the global norm bound. This follows from the local bound relationship: T · B_local ≤ B_global.
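The guarantee is a triangle-inequality argument; written out, with T included shares each passing the local check:

```latex
\Big\|\sum_{i \in S} z_i\Big\|_\infty \;\le\; \sum_{i \in S} \|z_i\|_\infty \;\le\; T \cdot B_{\mathrm{local}} \;\le\; B_{\mathrm{global}}
```

The first step is the triangle inequality for the infinity norm, the second uses ‖z_i‖∞ ≤ B_local for each included share, and the last is the configured bound relationship.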
Byzantine Robustness
The aggregator validates each partial independently and discards invalid ones. Validation checks three conditions.
Structural validity ensures the partial has correct dimensions and format. Norm bound checking verifies ‖z_i‖∞ ≤ B_local where B_local is the local rejection bound. Algebraic validity checks that A(z_i) = w_eff + c·pk_i where w_eff incorporates the binding factor.
Parties that send out-of-bound or algebraically invalid partials have their shares rejected. The aggregator does not attempt to include them in the signature. This allows misbehaving parties to be identified and excluded.
Given n = 3f+1 validators with at most f Byzantine, there always exist at least t = 2f+1 honest validators. These honest validators will produce valid partials. The aggregator collects valid partials until reaching the threshold, then produces a valid signature.
A malicious party can slow the signing ceremony by sending invalid partials, but cannot prevent a valid signature from being produced. The combination of per-share validation and honest majority ensures liveness despite Byzantine participants.
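The counting behind this claim is worth making explicit:

```latex
\text{honest parties} \;\ge\; n - f \;=\; (3f + 1) - f \;=\; 2f + 1 \;=\; t
```

For example, with f = 1 the configuration is n = 4, t = 3: even if one validator sends invalid partials or nothing at all, the three honest validators still reach the threshold.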
Verification
Verification checks that a signature is valid for a given message and public key. The procedure uses the linear map and the hash function. It does not require knowledge of any secret shares.
Inputs
The verifier receives the following inputs.
- Signature. The Signature structure containing:

structure Signature (S : Scheme) where
z : S.Secret
c : S.Challenge
Sset : List S.PartyId
commits : List S.Commitment
context : ExternalContext := {}

- Message. The message that was signed.
- Public key. The global public key pk.
Verification Procedure
The verify function checks signature validity.
def verify
(S : Scheme)
(pk : S.Public)
(m : S.Message)
(sig : Signature S)
: Prop :=
let w' : S.Public := S.A sig.z - (sig.c • pk)
S.normOK sig.z ∧
let c' := S.hash m pk sig.Sset sig.commits w'
c' = sig.c
The verifier executes the following steps.
Step 1: Nonce Recovery
Compute the recovered nonce from the signature components.

w' = A(z) - c·pk

This computation applies the linear map A to the response z and subtracts the challenge c times the public key pk.
Step 2: Norm Check
Verify that the response satisfies the norm bound ‖z‖∞ ≤ B.
If the response has excessive norm the signature is invalid. This check rejects signatures that could leak information through the response distribution.
Step 3: Challenge Recomputation
Compute the expected challenge using the hash function.

c' = hash(m, pk, Sset, commits, w')
The hash input matches the signing protocol. It includes the message, public key, signer set, all nonce commitments, and the recovered nonce.
Step 4: Challenge Comparison
Compare the recomputed challenge c' to the signature's challenge c.

If c' = c the signature is valid. Otherwise the signature is invalid.
Verification Correctness
We show that valid signatures pass verification. Consider a signature produced by honest parties.
The aggregated response is

z = Σ_{i∈S} z_i = Σ_{i∈S} (y_i + c·sk_i)

In the n-of-n case this simplifies to

z = y + c·sk

where y = Σ_{i∈S} y_i and sk = Σ_{i∈S} sk_i.

Applying A and using linearity gives

A(z) = A(y) + c·A(sk) = w + c·pk

Subtracting c·pk yields

w' = A(z) - c·pk = w

The recovered nonce equals the original aggregated nonce. Since the hash inputs are identical the recomputed challenge matches the original. The signature passes verification.
t-of-n Correctness
In the t-of-n case the Lagrange coefficients λ_i ensure

Σ_{i∈S} λ_i·sk_i = sk

The aggregated response is

z = Σ_{i∈S} (y_i + c·λ_i·sk_i) = y + c·Σ_{i∈S} λ_i·sk_i = y + c·sk

The rest of the argument follows identically.
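The Lagrange recombination can be sanity-checked numerically. The following toy sketch works over the small prime field Z_97 rather than the scheme's ring; function names are illustrative, not the implementation's API:

```rust
// Toy check over Z_97 (illustrative only): Lagrange coefficients at x = 0
// recombine Shamir shares of a degree-(t-1) polynomial into the secret.
const P: i64 = 97;

fn pow_mod(mut b: i64, mut e: i64) -> i64 {
    let mut r = 1;
    b = b.rem_euclid(P);
    while e > 0 {
        if e & 1 == 1 { r = r * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    r
}

// Modular inverse via Fermat's little theorem (P prime)
fn inv(x: i64) -> i64 { pow_mod(x.rem_euclid(P), P - 2) }

/// Lagrange coefficient λ_i = Π_{j≠i} x_j / (x_j - x_i), evaluated at 0
fn lagrange_at_zero(xs: &[i64], i: usize) -> i64 {
    let mut l = 1;
    for (j, &xj) in xs.iter().enumerate() {
        if j != i {
            l = l * xj % P * inv(xj - xs[i]) % P;
        }
    }
    l.rem_euclid(P)
}

/// Reconstruct f(0) from shares (x_i, f(x_i)): Σ λ_i · f(x_i)
fn reconstruct(xs: &[i64], shares: &[i64]) -> i64 {
    let mut s = 0;
    for i in 0..xs.len() {
        s = (s + lagrange_at_zero(xs, i) * shares[i]) % P;
    }
    s.rem_euclid(P)
}

fn main() {
    // f(x) = 42 + 7x + 3x², so the secret is f(0) = 42
    let f = |x: i64| (42 + 7 * x + 3 * x * x) % P;
    let xs = [1i64, 2, 3];
    let shares = [f(1), f(2), f(3)];
    assert_eq!(reconstruct(&xs, &shares), 42);
    // The coefficients themselves sum to 1, mirroring coeffs_sum_to_one
    let sum = (0..3).map(|i| lagrange_at_zero(&xs, i)).sum::<i64>() % P;
    assert_eq!(sum, 1);
    println!("reconstructed secret = 42");
}
```

The final assertion mirrors the coeffs_sum_to_one property referenced in the axiom index: the λ_i sum to 1, so shares of a shared constant recombine to that constant.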
Algebraic Identity
The core algebraic identity underlying verification is

A(z) = w + c·pk

This identity holds because of the following chain of equalities.

For each partial response

A(z_i) = A(y_i) + c·A(sk_i) = w_i + c·pk_i

Summing over the signer set

A(z) = Σ_{i∈S} A(z_i) = Σ_{i∈S} w_i + c·Σ_{i∈S} pk_i = w + c·Σ_{i∈S} pk_i

In the n-of-n case the public shares sum to the global key

Σ_{i∈S} pk_i = pk

(in the t-of-n case the Lagrange-weighted shares λ_i·pk_i sum to pk). Therefore

A(z) = w + c·pk

and the verification equation follows.
Commitment Verification
The nonce commitments serve two purposes during verification.
Session Binding
The commitments are included in the hash input. This binds the challenge to the specific signing session. An adversary cannot reuse partial signatures from different sessions because the commitments would differ.
Audit Trail
The commitments provide an audit trail. A verifier can check that the signing session followed the protocol. Without access to the openings the verifier cannot check the commitments directly. However the commitments still bind the signers to their original nonces.
Error Cases
Several conditions cause verification to fail.
Invalid response. The response does not satisfy ‖z‖∞ ≤ B.

Challenge mismatch. The recomputed challenge c' differs from the signature's challenge c.
Malformed signature. The signature does not contain all required components.
Batch Verification
When verifying multiple signatures with the same public key some computations can be shared.
Shared Public Key Computation
If multiple signatures use the same public key pk, the verifier can cache computations that depend only on pk rather than repeating them per signature.
Parallel Hash Evaluation
The hash computations for different signatures are independent. They can run in parallel.
Aggregated Equation Check
For a batch of signatures (z_j, c_j, w_j) with random weights α_j the verifier can check

Σ_j α_j·A(z_j) = Σ_j α_j·w_j + Σ_j α_j·c_j·pk_j

This aggregated check is faster than checking each equation separately. A failure requires checking signatures individually to identify the invalid ones.
Phase-Restricted Signature Extraction
Signatures can only be extracted from the done phase. Earlier phases cannot produce signatures by construction.
def extractSignature (S : Scheme) : State S .done → Signature S
| ⟨sig⟩ => sig
For threshold-aware extraction with context:
structure DoneWithCtx (S : Scheme) where
ctx : ThresholdCtx S
sig : Signature S
def extractSignatureWithCtx (S : Scheme) : DoneWithCtx S → Signature S
| ⟨_, sig⟩ => sig
This phase-based discipline ensures signatures are only produced after complete protocol execution.
Correctness Theorems
The Lean implementation provides formal correctness proofs in Proofs/Correctness/Correctness.lean.
Generic Happy-Path Correctness
theorem verify_happy_generic
(S : Scheme) [DecidableEq S.PartyId]
(pk : S.Public)
(m : S.Message)
(Sset : List S.PartyId)
(commits : List (SignCommitMsg S))
(reveals : List (SignRevealWMsg S))
(shares : List (SignShareMsg S))
(hvalid : ValidSignTranscript S Sset commits reveals shares) :
let w := reveals.foldl (fun acc r => acc + r.w_i) (0 : S.Public)
let c := S.hash m pk Sset (commits.map (·.commitW)) w
verify S pk m (aggregateSignature S c Sset (commits.map (·.commitW)) shares)
Lattice Scheme Correctness
theorem verify_happy_lattice
(pk : latticeScheme.Public)
(m : latticeScheme.Message)
(Sset : List latticeScheme.PartyId)
(commits : List (SignCommitMsg latticeScheme))
(reveals : List (SignRevealWMsg latticeScheme))
(shares : List (SignShareMsg latticeScheme))
(hvalid : ValidSignTranscript latticeScheme Sset commits reveals shares) :
let w := reveals.foldl (fun acc r => acc + r.w_i) (0 : latticeScheme.Public)
let c := latticeScheme.hash m pk Sset (commits.map (·.commitW)) w
verify latticeScheme pk m (aggregateSignature latticeScheme c Sset (commits.map (·.commitW)) shares)
Core Linearity Lemma
These lemmas are in Proofs/Core/ListLemmas.lean:
lemma sum_zipWith_add_smul
{R M : Type _} [Semiring R] [AddCommMonoid M] [Module R M]
(c : R) (ys sks : List M) (hlen : ys.length = sks.length) :
(List.zipWith (fun y s => y + c • s) ys sks).sum = ys.sum + c • sks.sum
Soundness
Verification is sound under the security assumptions. If the hash function behaves as a random oracle an adversary cannot produce a valid signature without knowing the secret shares.
The binding property of commitments prevents an adversary from adaptively choosing nonces. The norm check prevents signatures with excessive response norms.
Special Soundness
The core security property is special soundness: two valid signatures with the same nonce but different challenges reveal the secret key.
Given two valid signature equations with the same nonce w but different challenges:

A(z₁) = w + c₁·pk
A(z₂) = w + c₂·pk

We can derive:

A(z₁ - z₂) = (c₁ - c₂)·pk

If c₁ ≠ c₂ and we're in a field, this reveals sk such that z₁ - z₂ = (c₁ - c₂)·sk:

sk = (c₁ - c₂)⁻¹ · (z₁ - z₂)
This property explains:
- Why nonce reuse is catastrophic - the secret key can be algebraically recovered
- The basis for the Fiat-Shamir security proof - a simulator can rewind a forger to extract two transcripts
- The core of the SIS reduction - extracted differences are short vectors in ker(A)
theorem special_soundness_algebraic
{M : Type*} [AddCommGroup M]
{R : Type*} [Ring R] [Module R M]
(A : M →ₗ[R] M) (z₁ z₂ w pk : M) (c₁ c₂ : R)
(hv1 : A z₁ = w + c₁ • pk)
(hv2 : A z₂ = w + c₂ • pk) :
A (z₁ - z₂) = (c₁ - c₂) • pk
theorem nonce_reuse_key_recovery
{F : Type*} [Field F]
{M : Type*} [AddCommGroup M] [Module F M]
(z₁ z₂ sk : M) (c₁ c₂ : F)
(hne : c₁ ≠ c₂)
(heq : z₁ - z₂ = (c₁ - c₂) • sk) :
sk = (c₁ - c₂)⁻¹ • (z₁ - z₂)
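The extraction in nonce_reuse_key_recovery can be exercised numerically. This toy sketch uses scalars mod 97 with A as the identity map; it is illustrative only, not the lattice scheme:

```rust
// Toy demonstration over Z_97 with A = identity (illustrative only):
// two responses under the same nonce y but different challenges leak sk.
const P: u64 = 97;

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { r = r * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    r
}

// Modular inverse via Fermat's little theorem (P prime)
fn inv(x: u64) -> u64 { pow_mod(x, P - 2) }

/// sk = (c1 - c2)⁻¹ · (z1 - z2) mod P
fn recover_sk(z1: u64, z2: u64, c1: u64, c2: u64) -> u64 {
    (z1 + P - z2) % P * inv((c1 + P - c2) % P) % P
}

fn main() {
    let (sk, y) = (42, 17);     // secret key and the (reused) nonce
    let (c1, c2) = (5, 11);     // two distinct challenges
    let z1 = (y + c1 * sk) % P; // z1 = y + c1·sk
    let z2 = (y + c2 * sk) % P; // z2 = y + c2·sk, same nonce
    assert_eq!(recover_sk(z1, z2, c1, c2), sk);
    println!("recovered sk = {}", sk);
}
```

The nonce y cancels in z₁ - z₂, which is why reusing it across two challenges is catastrophic.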
Unforgeability Reduction to SIS
Signature unforgeability reduces to the Short Integer Solution problem via special soundness.
Reduction outline:
- Challenger generates SIS instance: matrix A, wants short z ≠ 0 with Az = 0
- Embed A as the public key: pk (viewing pk as derived from A)
- Simulate signing oracle using random oracle programming
- When the forger outputs (z₁, c₁), rewind to get a second signature (z₂, c₂) with the same w
- Use special soundness to extract the difference z₁ - z₂
- This difference is a short vector in ker(A) → SIS solution
Tightness: The reduction loses a factor of Q_H (the number of hash queries) from rewinding.
structure EUFCMAtoSIS (S : Scheme) (p : SISParams) where
forgerAdvantage : Real
signingQueries : Nat
hashQueries : Nat
sisAdvantage : Real
reduction_bound : sisAdvantage ≥ forgerAdvantage / hashQueries
Threshold Response Independence
For threshold signatures, response independence holds for the aggregated response. If each party's response z_i is independent of their share sk_i (by local rejection sampling), then the aggregate z is independent of the master secret sk.
Composition argument:
- Each z_i = y_i + c·sk_i where y_i is the party's nonce
- Local rejection sampling ensures z_i is statistically independent of sk_i
- By linearity: z = Σ_i z_i = y + c·sk
- Since each z_i reveals nothing about sk_i, and parties don't see each other's y_i, the aggregate reveals nothing about sk
This standard composition follows from Lyubashevsky's "Fiat-Shamir with Aborts" applied to each party individually.
Reference: Lyubashevsky, "Fiat-Shamir with Aborts", ASIACRYPT 2009.
Axiom Index
All axioms used throughout the codebase are documented in Proofs/Core/Assumptions.lean. They fall into three categories:
- Mathlib-equivalent: Properties provable with Mathlib but requiring significant integration work (e.g., coeffs_sum_to_one for Lagrange interpolation)
- Cryptographic assumptions: Properties requiring computational hardness assumptions (e.g., vss_hiding, SISHard, MLWEHard)
- Implementation details: Properties about data structures requiring extensive formalization (e.g., MsgMap.merge_idem)
Each axiom includes a justification and literature reference. See Proofs/Core/Assumptions.lean for the complete index.
Security Assumptions Structure
structure Assumptions (S : Scheme) where
hashRO : Prop -- hash modeled as QROM (Fiat-Shamir)
commitCR : Prop -- commitment collision resistant (implies binding)
hashBinding : HashBinding -- binding for hash-based commitment
commitHiding : HidingAssumption S -- commitment hiding (requires ROM; NOT formally proven)
normLeakageBound : Prop -- norm bounds prevent leakage
corruptionBound : Nat -- max corrupted parties < threshold
sisParams : Option SISParams -- SIS parameters for unforgeability
mlweParams : Option MLWEParams -- MLWE parameters for key secrecy
def thresholdUFcma (S : Scheme) (A : Assumptions S) : Prop :=
A.hashRO ∧ A.commitCR ∧ A.normLeakageBound
Collision Resistance and Hiding
Hash-based commitments derive their properties from the underlying hash function:
-- Collision resistance: finding x₁ ≠ x₂ with H(x₁) = H(x₂) is hard
structure CollisionResistant (H : α → β) : Prop where
no_collisions : True -- axiomatized
-- Hiding: commitments reveal nothing about the value (requires ROM)
structure HidingAssumption (S : Scheme) : Prop where
hiding : True -- axiomatized; NOT provable without probabilistic reasoning
Important: Hiding is NOT formally proven in Lean. It requires probabilistic reasoning that this development does not formalize. Protocols needing hiding must explicitly include this assumption.
Lattice Hardness Assumptions
Security reduces to standard lattice problems:
Short Integer Solution (SIS)
Given a uniformly random matrix A over Z_q, find a short nonzero z with Az = 0 (mod q).
structure SISParams where
n : Nat -- lattice dimension (rows of A)
m : Nat -- columns of A
q : Nat -- modulus
beta : Nat -- solution norm bound
m_large : m > n * Nat.log2 q := by decide -- security requirement
structure SISInstance (p : SISParams) where
A : Fin p.n → Fin p.m → ZMod p.q -- public matrix
structure SISSolution (p : SISParams) (inst : SISInstance p) where
z : Fin p.m → Int
z_short : ∀ i, Int.natAbs (z i) ≤ p.beta
z_nonzero : ∃ i, z i ≠ 0
Az_zero : ∀ i : Fin p.n,
(Finset.univ.sum fun j => inst.A i j * (z j : ZMod p.q)) = 0
def SISHard (p : SISParams) : Prop := True -- axiomatized
Reduction: A forger that produces valid signatures can be used to solve SIS.
Module Learning With Errors (MLWE)
Distinguish (A, b = As + e) from (A, u) with uniform u, where s and e are small.
structure MLWEParams where
n : Nat -- ring dimension (degree of R_q = Z_q[X]/(X^n + 1))
k : Nat -- module rank (number of ring elements in secret)
q : Nat -- modulus
eta : Nat -- error distribution parameter
structure MLWEInstance (p : MLWEParams) where
A : Fin p.k → Fin p.k → (Fin p.n → ZMod p.q) -- public matrix
b : Fin p.k → (Fin p.n → ZMod p.q) -- public vector b = As + e
structure MLWESecret (p : MLWEParams) where
s : Fin p.k → (Fin p.n → Int) -- secret vector
e : Fin p.k → (Fin p.n → Int) -- error vector
s_short : ∀ i j, Int.natAbs (s i j) ≤ p.eta
e_short : ∀ i j, Int.natAbs (e i j) ≤ p.eta
def MLWEHard (p : MLWEParams) : Prop := True -- axiomatized
Reduction: Recovering the secret key from the public key requires solving MLWE.
Full Lattice Assumptions
structure LatticeAssumptions (S : Scheme) extends Assumptions S where
sisInst : SISParams -- SIS instance derived from scheme
mlweInst : MLWEParams -- MLWE instance derived from scheme
sis_hard : SISHard sisInst -- SIS is hard for these parameters
mlwe_hard : MLWEHard mlweInst -- MLWE is hard for these parameters
def keySecrecy (S : Scheme) (A : LatticeAssumptions S) : Prop :=
A.mlwe_hard
def livenessOrAbort (S : Scheme) (A : Assumptions S) : Prop := True
Parameter Validation
Lightweight checks catch obviously insecure parameters. These are necessary conditions, not sufficient for security - real analysis requires the lattice estimator.
def minSecureDimension : Nat := 256
def SISParams.isSecure (p : SISParams) : Prop :=
p.n ≥ minSecureDimension ∧
p.m ≥ 2 * p.n ∧
p.beta < p.q / 2
def MLWEParams.isSecure (p : MLWEParams) : Prop :=
p.n ≥ minSecureDimension ∧
p.k ≥ 1 ∧
p.eta ≤ 4
def isPowerOf2 (n : Nat) : Bool :=
n > 0 && (n &&& (n - 1)) = 0
The implementation uses standard NIST FIPS 204 (ML-DSA/Dilithium) parameter sets which provide 128, 192, and 256 bits of security.
Batch Verification Implementation
This section provides implementation guidance for batch verification in Rust.
Mathematical Foundation
For signatures (z_j, c_j, w_j) with random weights α_j, batch verification checks:

Σ_j α_j·(A(z_j) - c_j·pk_j) = Σ_j α_j·w_j

This can be rewritten as a single multiscalar multiplication over the terms A(z_j), pk_j, and w_j:

Σ_j α_j·A(z_j) - Σ_j (α_j·c_j)·pk_j - Σ_j α_j·w_j = 0

When using the same public key pk for all signatures, the pk terms collapse into one:

Σ_j α_j·A(z_j) - (Σ_j α_j·c_j)·pk - Σ_j α_j·w_j = 0
Security Requirements
Random weights prevent an adversary from constructing signatures that cancel each other out. With 128-bit random weights, the probability of a forgery passing batch verification is negligible (≈ 2⁻¹²⁸).
- Weights MUST be cryptographically random (CSPRNG)
- Weights MUST be at least 128 bits for security
- Weights MUST be generated fresh for each batch verification
Rust Data Structures
/// Entry for batch verification
pub struct BatchEntry<S: Scheme> {
    pub message: S::Message,
    pub signature: Signature<S>,
    pub public_key: S::Public,
}

/// Batch verification context
pub struct BatchContext<S: Scheme> {
    entries: Vec<BatchEntry<S>>,
}

/// Batch verification result
pub enum BatchResult {
    /// All signatures verified
    AllValid,
    /// Batch failed - need individual verification
    BatchFailed,
    /// Individual failures identified
    Failures(Vec<usize>),
}
Batch Verification Algorithm
impl<S: Scheme> BatchContext<S> {
    pub fn verify(&self, rng: &mut impl CryptoRng) -> BatchResult {
        if self.entries.len() < 2 {
            // For single entry, verify individually
            return self.verify_individual();
        }

        // Generate random 128-bit weights
        let weights: Vec<S::Scalar> = self.entries
            .iter()
            .map(|_| S::Scalar::random_128bit(rng))
            .collect();

        // Compute LHS: Σ αⱼ·A(zⱼ) - Σ αⱼcⱼ·pkⱼ
        let mut lhs = S::Public::zero();
        for (entry, weight) in self.entries.iter().zip(&weights) {
            let az = S::A(&entry.signature.z);
            let c_pk = entry.signature.c * entry.public_key;
            lhs += weight * (az - c_pk);
        }

        // Compute RHS: Σ αⱼ·wⱼ
        let mut rhs = S::Public::zero();
        for (entry, weight) in self.entries.iter().zip(&weights) {
            let w = recover_nonce(&entry.signature, &entry.public_key);
            rhs += weight * w;
        }

        // Check equation
        if lhs == rhs {
            BatchResult::AllValid
        } else {
            // Fall back to individual verification
            self.identify_failures()
        }
    }

    fn identify_failures(&self) -> BatchResult {
        let failures: Vec<usize> = self.entries
            .iter()
            .enumerate()
            .filter(|(_, e)| !verify_single(e))
            .map(|(i, _)| i)
            .collect();

        if failures.is_empty() {
            // Rare case: batch failed but all individual pass
            // Could indicate implementation bug
            BatchResult::AllValid
        } else {
            BatchResult::Failures(failures)
        }
    }
}
Performance Optimizations
Single Multiscalar Multiplication
For best performance, use a single multiscalar multiplication (MSM) operation:
// Instead of computing each term separately:
// for each j: lhs += weight[j] * (A(z[j]) - c[j] * pk[j])

// Collect all scalars and points:
let scalars: Vec<Scalar> = /* ... */;
let points: Vec<Point> = /* ... */;

// Single MSM operation
let result = multiscalar_mul(&scalars, &points);
Libraries like curve25519-dalek or arkworks provide efficient MSM implementations.
Same Public Key Optimization
When all signatures use the same public key:
// Combine challenge terms
let combined_challenge: Scalar = weights
    .iter()
    .zip(&challenges)
    .map(|(w, c)| w * c)
    .sum();

// Single public key multiplication
let pk_term = combined_challenge * public_key;
Performance Expectations
| Batch Size | Individual (ms) | Batch (ms) | Speedup |
|---|---|---|---|
| 2 | 2.0 | 1.2 | 1.7x |
| 10 | 10.0 | 2.5 | 4.0x |
| 100 | 100.0 | 15.0 | 6.7x |
| 1000 | 1000.0 | 120.0 | 8.3x |
Estimates based on typical lattice signature verification. Actual performance depends on implementation.
Error Handling Strategies
When batch verification fails:
- Binary search (optional): Split batch in half, verify each half. Repeat to narrow down failures. Complexity: O(log n) batch verifications.
- Full fallback: Verify all signatures individually. Complexity: O(n) individual verifications.
Choose based on expected failure rate:
- Low failure rate: Binary search is faster
- High failure rate: Full fallback is simpler
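The binary-search strategy can be sketched generically, assuming a hypothetical batch predicate `batch_ok` that verifies a whole slice at once (a stand-in for the real batch-verification call):

```rust
/// Recursively narrow a failed batch down to the failing indices.
/// `batch_ok` must return true iff every entry in the slice verifies.
fn find_failures<T>(
    entries: &[T],
    offset: usize,
    batch_ok: &dyn Fn(&[T]) -> bool,
    out: &mut Vec<usize>,
) {
    if batch_ok(entries) {
        return; // whole sub-batch valid: prune this branch
    }
    if entries.len() == 1 {
        out.push(offset); // isolated a single invalid entry
        return;
    }
    let mid = entries.len() / 2;
    find_failures(&entries[..mid], offset, batch_ok, out);
    find_failures(&entries[mid..], offset + mid, batch_ok, out);
}

fn main() {
    // Stand-in "signatures": negative numbers play the role of invalid entries
    let entries = [1, -2, 3, -4, 5];
    let ok = |s: &[i32]| s.iter().all(|&x| x >= 0);
    let mut failures = Vec::new();
    find_failures(&entries, 0, &ok, &mut failures);
    assert_eq!(failures, vec![1, 3]);
    println!("invalid indices: {:?}", failures);
}
```

Valid sub-batches are pruned in one call each, so with few failures the number of batch verifications stays logarithmic in the batch size.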
Benchmarking
#[bench]
fn bench_batch_verification(b: &mut Bencher) {
    let entries = generate_valid_entries(100);
    let mut rng = OsRng;
    b.iter(|| {
        let ctx = BatchContext::new(entries.clone());
        ctx.verify(&mut rng)
    });
}
References
- Bellare, Garay, Rabin. "Fast Batch Verification for Modular Exponentiation and Digital Signatures" (EUROCRYPT 1998)
- FROST RFC Section 5.3: Batch Verification
- dalek-cryptography/bulletproofs: Efficient MSM implementations
Extensions
The core protocol supports several extensions. These extensions address operational requirements beyond basic signing. They enable share management and enhanced privacy. All extension state structures form semilattices that merge via join.
Error Handling
The Protocol/Core/Error.lean module provides unified error handling patterns across the protocol.
BlameableError Typeclass
Errors that can identify a misbehaving party implement the BlameableError typeclass:
class BlameableError (E : Type*) (PartyId : Type*) where
blamedParty : E → Option PartyId
instance : BlameableError (DkgError PartyId) PartyId where
blamedParty
| .lengthMismatch => none
| .duplicatePids => none
| .commitMismatch p => some p
instance : BlameableError (Complaint PartyId) PartyId where
blamedParty c := some c.accused
instance : BlameableError (VSS.VSSError PartyId) PartyId where
blamedParty
| .shareVerificationFailed accused _ => some accused
| .missingCommitment p => some p
| .missingShare sender _ => some sender
| .thresholdMismatch _ _ => none
| .duplicateDealer p => some p
Cheater Detection Modes
Following FROST, we provide configurable cheater detection:
inductive CheaterDetectionMode
| disabled -- No blame tracking (best performance)
| firstCheater -- Stop after finding one cheater (good balance)
| allCheaters -- Exhaustively identify all cheaters (complete but slower)
structure ProtocolConfig where
cheaterDetection : CheaterDetectionMode := .firstCheater
maxErrors : Nat := 0 -- 0 = unlimited
def ProtocolConfig.default : ProtocolConfig
def ProtocolConfig.highSecurity : ProtocolConfig -- allCheaters mode
def ProtocolConfig.performance : ProtocolConfig -- disabled mode
Error Aggregation
Utilities for collecting and analyzing errors across protocol execution:
/-- Count how many errors blame a specific party -/
def countBlame (errors : List E) (party : PartyId) : Nat
/-- Get all unique blamed parties from a list of errors -/
def allBlamedParties (errors : List E) : List PartyId
/-- Collect blame respecting detection mode -/
def collectBlame (config : ProtocolConfig) (errors : List E) : BlameResult PartyId
Error Categories
| Module | Type | Category | Description |
|---|---|---|---|
| DKGCore | DkgError | Fatal | Protocol abort required |
| DKGThreshold | Complaint | Recoverable | Party exclusion possible |
| DKGThreshold | ExclusionResult | Result | Quorum check outcome |
| VSS | VSSError | Fatal/Recoverable | Depends on complaint count |
| RefreshCoord | CommitValidationError | Validation | Commit validation outcome |
| RefreshCoord | RevealValidationError | Validation | Reveal validation outcome |
| RepairCoord | ContribCommitValidationError | Validation | Commit validation outcome |
| RepairCoord | ContribRevealValidationError | Validation | Reveal validation outcome |
Session Abort Coordination
The Protocol/Core/Abort.lean module provides abort coordination for session-level failures that cannot be handled locally. Unlike local rejection sampling which handles norm bounds within a single signer, aborts require distributed consensus to abandon a session.
When Aborts Are Needed
Session aborts handle failures that affect the entire signing session.
Liveness failures occur when parties are offline or unresponsive. These require f+1 votes to confirm the abort. Examples include session timeout with insufficient responses and specific parties detected as unresponsive.
Security violations trigger immediate abort without voting. These indicate the protocol cannot safely continue. Examples include trust violations where more than n-t parties are faulty, aggregate signatures exceeding bounds despite local rejection, and DKG/VSS with too many complaints.
Explicit cancellation occurs when a party requests abort. This also requires f+1 votes to confirm.
Abort Reasons
The AbortReason type categorizes why a session needs to abort.
inductive AbortReason (PartyId : Type*) where
-- Liveness failures (require f+1 votes)
| timeout (respondents : Nat) (required : Nat)
| insufficientParticipants (actual : Nat) (required : Nat)
| partyUnresponsive (parties : List PartyId)
-- Security violations (immediate abort)
| trustViolation (faultyCount : Nat) (maxTolerable : Nat)
| globalNormExceeded (aggregateNorm : Nat) (bound : Nat)
| tooManyComplaints (complaints : Nat) (maxTolerable : Nat)
-- Explicit request (requires f+1 votes)
| requestedBy (requester : PartyId)
The isImmediate method returns true for security violations that bypass voting. The isSecurityViolation method distinguishes security issues from liveness issues.
Abort State (CRDT)
Abort state forms a CRDT for distributed coordination.
structure AbortState (S : Scheme) where
session : Nat
votes : Finset S.PartyId
reasons : List (AbortReason S.PartyId)
immediateTriggered : Bool := false
The CRDT merge unions the vote sets, concatenates reasons with deduplication, and ORs the immediate flag. This allows replicas to merge abort proposals from different parties in any order.
Abort Threshold
The abort threshold is f+1 where f = n - t is the maximum number of faulty parties. This ensures at least one honest party agreed to abort.
def AbortState.shouldAbort (state : AbortState S) (cfg : ThresholdConfig) : Bool :=
state.hasImmediateReason || state.isConfirmedByVotes cfg
A session aborts when either an immediate reason was triggered or f+1 parties voted. The +1 prevents a coalition of f malicious parties from forcing spurious aborts.
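A worked instance of the threshold arithmetic:

```latex
f = n - t, \qquad \text{abort threshold} = f + 1
```

For n = 5 and t = 3: f = 2, so 3 votes are required. The 2 potentially faulty parties cannot reach it alone; any confirmed abort therefore includes at least one honest vote.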
Abort Messages
Parties propose aborts by broadcasting AbortMsg messages.
structure AbortMsg (S : Scheme) where
sender : S.PartyId
session : Nat
reason : AbortReason S.PartyId
evidence : List S.PartyId := []
Convenience constructors create messages for common scenarios. The AbortMsg.timeout constructor creates timeout messages. The AbortMsg.unresponsive constructor identifies specific unresponsive parties. The AbortMsg.trustViolation constructor reports security violations.
Processing Abort Messages
The AbortState.processMsg method updates state when receiving an abort message.
def AbortState.processMsg (state : AbortState S) (msg : AbortMsg S) : AbortState S :=
if msg.session = state.session then
Join.join state (AbortState.fromMsg msg)
else
state -- Ignore messages for different sessions
Messages for different sessions are ignored. Messages for the current session are merged into the CRDT state.
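The merge's order-independence can be illustrated with a stand-alone toy model of AbortState. This is a simplified sketch mirroring the Lean fields, not the implementation:

```rust
use std::collections::BTreeSet;

/// Simplified model of AbortState: votes merge by union, the immediate flag by OR.
#[derive(Clone, Debug, PartialEq)]
struct AbortState {
    session: u64,
    votes: BTreeSet<u32>,
    immediate: bool,
}

/// CRDT join: commutative, associative, and idempotent,
/// so replicas converge regardless of message delivery order.
fn join(a: &AbortState, b: &AbortState) -> AbortState {
    AbortState {
        session: a.session,
        votes: a.votes.union(&b.votes).cloned().collect(),
        immediate: a.immediate || b.immediate,
    }
}

fn main() {
    let a = AbortState { session: 7, votes: [1, 2].into(), immediate: false };
    let b = AbortState { session: 7, votes: [2, 3].into(), immediate: true };
    // Merging in either order yields the same state
    assert_eq!(join(&a, &b), join(&b, &a));
    // Re-delivering a message changes nothing (idempotence)
    assert_eq!(join(&join(&a, &b), &b), join(&a, &b));
    assert_eq!(join(&a, &b).votes.len(), 3);
}
```

These three properties (commutativity, associativity, idempotence) are exactly what lets replicas merge abort proposals from different parties in any order, as stated above.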
Relationship to Local Rejection
Local rejection and session aborts serve different purposes.
| Mechanism | Scope | Coordination | Example |
|---|---|---|---|
| Local rejection | Single party | None | Norm bound exceeded |
| Session abort | Session-wide | f+1 votes | Timeout, trust violation |
Local rejection handles per-party issues within a signing attempt. Each party independently retries until finding a valid response. Session aborts handle failures that require distributed coordination to resolve.
Security Markers
The Protocol/Core/Security.lean module provides typeclass markers for implementation security requirements. These markers document FFI implementation obligations.
Zeroizable Typeclass
Marks types containing secret material that must be securely erased from memory after use.
class Zeroizable (α : Type*) where
needsZeroization : Bool := true
instance : Zeroizable (SecretBox α)
instance : Zeroizable (LinearMask S)
instance : Zeroizable (LinearDelta S)
instance : Zeroizable (LinearOpening S)
Implementation guidance:
| Platform | API |
|---|---|
| Rust | zeroize::Zeroize trait, ZeroizeOnDrop |
| C | explicit_bzero(), SecureZeroMemory() |
| POSIX | explicit_bzero() |
| Windows | SecureZeroMemory() |
ConstantTimeEq Typeclass
Marks types requiring constant-time equality comparison to prevent timing side-channels.
class ConstantTimeEq (α : Type*) where
requiresConstantTime : Bool := true
instance : ConstantTimeEq (SecretBox α)
instance : ConstantTimeEq (LinearMask S)
instance : ConstantTimeEq (LinearDelta S)
Implementation guidance:
- Use subtle::ConstantTimeEq in Rust
- Use CRYPTO_memcmp() from OpenSSL
- Never use early-exit comparison loops
- Avoid branching on secret-dependent values
Linear Security Wrappers
The protocol uses linear-style type discipline to prevent misuse of single-use sensitive values. Private constructors and module-scoped access make violations visible.
LinearMask
Refresh masks that must be applied exactly once to preserve the zero-sum invariant.
structure LinearMask (S : Scheme) where
private mk ::
private partyId : S.PartyId
private maskValue : S.Secret
private publicImage : S.Public
def LinearMask.create (S : Scheme) (pid : S.PartyId) (mask : S.Secret) : LinearMask S
def LinearMask.consume (m : LinearMask S) : S.Secret × MaskConsumed S
Security: If a mask is applied twice or not at all, the sum of masks will not be zero and the global secret changes.
LinearDelta
Repair contributions that must be aggregated exactly once.
structure LinearDelta (S : Scheme) where
private mk ::
private senderId : S.PartyId
private targetId : S.PartyId
private deltaValue : S.Secret
def LinearDelta.create (S : Scheme) (sender target : S.PartyId) (delta : S.Secret) : LinearDelta S
def LinearDelta.consume (d : LinearDelta S) : S.Secret × DeltaConsumed S
Security: Using a delta twice adds its value twice to the reconstruction sum, corrupting the repaired share.
LinearOpening
Commitment randomness that should be used exactly once to prevent equivocation.
structure LinearOpening (S : Scheme) where
private mk ::
private openingValue : S.Opening
private used : Bool := false
def LinearOpening.fresh (S : Scheme) (opening : S.Opening) : LinearOpening S
def LinearOpening.consume (o : LinearOpening S) : S.Opening × OpeningConsumed S
Security: Reusing an opening with different values could allow opening the same commitment to different values.
Consumption Proofs
Each linear type produces a consumption proof when consumed:
structure MaskConsumed (S : Scheme) where
private mk ::
forParty : S.PartyId
structure DeltaConsumed (S : Scheme) where
private mk ::
fromHelper : S.PartyId
structure OpeningConsumed (S : Scheme) where
private mk ::
consumed : Unit := ()
Downstream functions can require these proofs to ensure proper consumption occurred.
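The Lean development enforces single use statically; a dynamic Python sketch conveys the same discipline at runtime (the class and exception names here are illustrative, not part of the library):

```python
class AlreadyConsumed(Exception):
    """Raised when a linear value is used more than once."""

class LinearMask:
    """One-shot wrapper: the mask value can be taken out exactly once.
    Mirrors LinearMask.consume returning the secret plus a consumption token."""
    def __init__(self, party_id: int, value: int):
        self._party_id = party_id
        self._value = value
        self._consumed = False

    def consume(self) -> tuple[int, int]:
        if self._consumed:
            raise AlreadyConsumed(f"mask for party {self._party_id} already used")
        self._consumed = True
        value, self._value = self._value, 0  # zeroize the stored copy after use
        return value, self._party_id         # value plus a "MaskConsumed" witness
```

A downstream function that requires the returned token as an argument cannot be called before the mask has been consumed, which is the dynamic analogue of requiring a `MaskConsumed` proof.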
Reusable Protocol Patterns
The Protocol/Core/Patterns.lean module provides reusable abstractions for common protocol patterns. Scheme-specific versions are re-exported from Protocol/Core/Core.lean.
Constraint Aliases
Standard constraint combinations are aliased for cleaner signatures. Generic versions in Patterns.lean work with any type, while Core.lean provides Scheme-specific aliases:
-- Generic (Patterns.lean):
abbrev HashConstraints (PartyId : Type*) := BEq PartyId × Hashable PartyId
abbrev StateConstraints (PartyId : Type*) := HashConstraints PartyId × DecidableEq PartyId
abbrev FieldConstraints (Scalar : Type*) := Field Scalar × DecidableEq Scalar
-- Scheme-specific (Core.lean):
abbrev PartyIdHash (S : Scheme) := BEq S.PartyId × Hashable S.PartyId
abbrev PartyIdState (S : Scheme) := PartyIdHash S × DecidableEq S.PartyId
abbrev ScalarField (S : Scheme) := Field S.Scalar × DecidableEq S.Scalar
Usage:
-- Before (verbose):
def foo (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] [DecidableEq S.PartyId] ...
-- After (concise):
def foo (S : Scheme) [PartyIdState S] ...
Generic Share Verification
The pattern A(secret) = expectedPublic is centralized:
def verifySecretPublic (S : Scheme) (secret : S.Secret) (expectedPublic : S.Public) : Prop :=
S.A secret = expectedPublic
def verifySecretPublicBool (S : Scheme) [DecidableEq S.Public]
(secret : S.Secret) (expectedPublic : S.Public) : Bool :=
S.A secret = expectedPublic
def verifyCommitmentOpening (S : Scheme) [DecidableEq S.Commitment]
(value : S.Public) (opening : S.Opening) (expectedCommit : S.Commitment) : Bool :=
S.commit value opening = expectedCommit
Used consistently across:
- VSS share verification
- Repair share reconstruction
- Refresh share updates
- DKG public key verification
Commit-Reveal Framework
Generic types for commit-reveal protocol phases (in Patterns.lean):
/-- Generic validation error -/
inductive CommitRevealError (PhaseType PartyId : Type*)
| wrongPhase (current : PhaseType)
| senderNotAuthorized (sender : PartyId)
| duplicateMessage (sender : PartyId)
| commitmentMismatch (sender : PartyId)
| missingCommit (sender : PartyId)
| verificationFailed (sender : PartyId) (reason : String)
/-- Typeclass for messages with sender -/
class HasSender (M : Type*) (PartyId : Type*) where
sender : M → PartyId
def getSender [HasSender M PartyId] (m : M) : PartyId := HasSender.sender m
Scheme-specific verification (in Core.lean):
/-- Typeclass for verifiable commit-reveal pairs -/
class CommitVerifiable (CommitMsg RevealMsg : Type*) (S : Scheme) where
verifyOpening : S → CommitMsg → RevealMsg → Bool
Utility Functions
These utilities are in Patterns.lean:
/-- Check party authorization -/
def isAuthorized (authorizedParties : List PartyId) (party : PartyId) : Bool
/-- Extract unique values by key (replaces map+dedup pattern) -/
def extractUnique (key : α → β) (items : List α) : List β
External Protocol Integration
The Protocol/Core/Core.lean and Protocol/Sign/Types.lean modules provide integration points for external consensus and coordination protocols.
External Context
All signing messages can carry optional context for binding to external protocol instances:
structure ExternalContext where
consensusId : Option ByteArray := none -- external session identifier
resultId : Option ByteArray := none -- what is being signed (e.g., H(op, prestate))
prestateHash : Option ByteArray := none -- state validation
evidenceDelta : Option ByteArray := none -- CRDT evidence for propagation
Evidence Carrier Typeclass
The EvidenceCarrier typeclass enables piggybacking CRDT evidence deltas on signing messages:
class EvidenceCarrier (M : Type*) where
attachEvidence : M → ByteArray → M
extractEvidence : M → Option ByteArray
hasEvidence : M → Bool := fun m => (extractEvidence m).isSome
Instances are provided for all signing message types (SignCommitMsg, SignRevealWMsg, SignShareMsg) and Signature.
Self-Validating Shares
For gossip-based fallback protocols, ValidatableShare bundles a share with enough context for independent validation:
structure ValidatableShare (S : Scheme) where
share : SignShareMsg S
nonceCommit : S.Commitment
publicHiding : S.Public
publicBinding : S.Public
pkShare : S.Public
Recipients can validate the share without full session state using ValidatableShare.isWellFormed.
Context-Aware Aggregation
When aggregating shares from multiple parties, context consistency can be validated:
def aggregateWithContextValidation (S : Scheme) (c : S.Challenge)
(Sset : List S.PartyId) (commits : List S.Commitment)
(shares : List (SignShareMsg S))
: Except String (Signature S)
This catches Byzantine behavior where a party signs different operations.
See Protocol Integration for detailed usage patterns.
Serialization
The Protocol/Core/Serialize.lean module provides type-safe serialization for network transport.
Serialization Headers
Messages include headers for version and scheme identification:
inductive SchemeId
| mlDsa44 | mlDsa65 | mlDsa87 | custom (id : ByteArray)
structure SerializationHeader where
version : UInt8 -- protocol version (currently 1)
schemeId : SchemeId -- parameter set identifier
class SerializableWithHeader (α : Type*) extends Serializable α where
header : SerializationHeader
toBytesWithHeader : α → ByteArray
fromBytesWithHeader : ByteArray → Option α
This enables:
- Version negotiation: Detect incompatible protocol versions
- Parameter validation: Verify scheme compatibility before deserializing
- Safe upgrades: Add new schemes without breaking existing deployments
Serializable Typeclass
class Serializable (α : Type*) where
toBytes : α → ByteArray
fromBytes : ByteArray → Option α
Implementations are provided for primitives (Nat, Int, ByteArray), compound types (Option, List), and all protocol messages.
Wire Format
The wire format uses little-endian encoding with explicit length prefixes:
| Type | Format |
|---|---|
| Nat | 8-byte little-endian |
| Int | 8-byte little-endian (two's complement) |
| List α | 4-byte length + concatenated elements |
| Option α | 1-byte tag (0=none, 1=some) + element if some |
Message Wrapping
Messages are wrapped with type tags for network transport:
inductive MessageTag
| dkgCommit | dkgReveal
| signCommit | signReveal | signShare
| signature
| abort
structure WrappedMessage where
tag : MessageTag
payload : ByteArray
def WrappedMessage.toBytes (msg : WrappedMessage) : ByteArray :=
-- tag (1 byte) + length (4 bytes) + payload
Security Note: Deserialization is a potential attack surface. All fromBytes implementations validate input lengths and return none on malformed input.
Complaints
The complaint mechanism identifies misbehaving parties. It activates when protocol invariants are violated.
Complaint Types
Two complaint variants cover the main failure modes.
inductive Complaint (PartyId : Type) where
| openingMismatch (accused : PartyId)
| missingReveal (accused : PartyId)
Complaint Triggers
A party files a complaint under the following conditions.
Invalid commitment opening. A party reveals a (value, opening) pair that does not match its commitment. Generates openingMismatch.
Missing message. Party fails to send a required message within the timeout. Generates missingReveal.
Inconsistent session. The participant set differs from the expected set.
Invalid partial signature. Party produces a partial signature that fails verification.
Aggregation with Complaints
DKG aggregation returns either the public key or a list of complaints identifying misbehaving parties.
def dkgAggregateWithComplaints
(S : Scheme) [DecidableEq S.PartyId]
(commits : List (DkgCommitMsg S))
(reveals : List (DkgRevealMsg S))
: Except (List (Complaint S.PartyId)) S.Public
Complaint Resolution
When a party files a complaint the protocol may proceed in different ways.
Abort. All parties discard the current session. They may retry with a new session excluding the accused party.
Exclude. The accused party is removed from the participant set. The remaining parties continue if the threshold is still met.
Penalize. In applications with economic stakes the accused party may lose a deposit or face other penalties.
Complaint Verification
A complaint is verifiable when it includes evidence. For a commitment opening failure the evidence is the commitment, the revealed pair, and a proof that they do not match. Other parties can independently verify the complaint.
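The commit/reveal complaint logic can be sketched in Python over a toy hash commitment (the `commit` construction and function names here are illustrative; the Lean development abstracts `S.commit`):

```python
import hashlib
from dataclasses import dataclass

def commit(value: bytes, opening: bytes) -> bytes:
    # Toy hash commitment: H(opening || value)
    return hashlib.sha256(opening + value).digest()

@dataclass
class Reveal:
    sender: int
    value: bytes
    opening: bytes

def check_reveals(commits: dict[int, bytes], reveals: dict[int, Reveal]):
    """Return (values, complaints). A bad opening yields openingMismatch;
    a committed party that never reveals yields missingReveal."""
    complaints, values = [], {}
    for pid, c in commits.items():
        r = reveals.get(pid)
        if r is None:
            complaints.append(("missingReveal", pid))
        elif commit(r.value, r.opening) != c:
            complaints.append(("openingMismatch", pid))
        else:
            values[pid] = r.value
    return values, complaints
```

Each complaint carries the accused party's identifier, and any third party holding the commitment and the revealed pair can recompute the check independently.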
Handling Byzantine Signers
In threshold signing, Byzantine signers may attempt to disrupt the protocol by sending invalid partial signatures. The protocol handles such behavior through per-share validation, allowing honest parties to detect and exclude misbehavior without halting progress.
Per-Share Validation
The aggregator treats each partial signature as an independent object. For each partial z_i from signer i, the aggregator performs three validation checks.
Structural validity checks that the partial is well-formed and consistent with commitments. The partial must have correct dimension, correct modulus, and match the commitment structure.
Norm bound checking verifies that the norm of z_i lies within the per-signer bound. This local bound is chosen so that any combination of valid partials produces a globally valid signature.
Algebraic validity verifies the partial against the signer's public share pk_i. This is analogous to FROST's per-share verification relation. The partial must satisfy a relation like A(z_i) = w_i + c · pk_i, where w_i is derived from the commitment.
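The three checks can be sketched over a toy scalar model, assuming A acts as multiplication by a public constant mod Dilithium's prime q = 8380417. The real checks operate on polynomial vectors with an infinity-norm bound; `valid_partial`, `apply_A`, and the `bound` parameter are illustrative names, not the library API:

```python
Q = 8380417   # Dilithium's modulus, used here as a toy prime field
A = 1753      # stand-in for the public matrix: A(s) = A * s mod Q

def apply_A(s: int) -> int:
    return (A * s) % Q

def valid_partial(z_i: int, w_i: int, pk_i: int, c: int, bound: int) -> bool:
    # 1. Structural: z_i is a reduced ring element.
    if not (0 <= z_i < Q):
        return False
    # 2. Norm bound on the centered representative (placeholder for the
    #    real infinity-norm check on a polynomial vector).
    centered = z_i if z_i <= Q // 2 else z_i - Q
    if abs(centered) > bound:
        return False
    # 3. Algebraic: A(z_i) = w_i + c * pk_i, by linearity of A.
    return apply_A(z_i) == (w_i + c * pk_i) % Q
```

An honest partial z_i = y_i + c·sk_i passes check 3 because A(y_i + c·sk_i) = w_i + c·pk_i; tampering with z_i breaks the relation.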
Invalid Share Handling
If any check fails, the aggregator discards that partial share. The aggregator does not attempt to include that share in the aggregate. Instead, it records evidence of misbehavior. This evidence can support later exclusion decisions or penalties in consensus protocols.
The aggregator then waits for valid shares from other signers, continuing until it has collected t valid partials (where t ≤ n − f under the BFT assumption).
Byzantine Resilience Guarantee
Because there are at least n − f honest validators, there exist at least t honest signers whose partials will be valid. The aggregator can always collect t valid partials and complete the signature despite up to f malicious validators.
This ensures that no single participant, nor any coalition of size at most f, can prevent the threshold signature from being formed. Malicious signers cannot force global rejection or restart. They can only delay the process by not participating or by sending invalid shares that are simply rejected.
Misbehavior Detection
Per-share validation enables attribution. Each invalid partial can be tied to a specific signer via its ID. This makes misbehavior identifiable and attributable even though the final signature itself remains a standard Dilithium signature under the group key.
The protocol transcript contains all per-signer checks, allowing external auditors to verify which parties sent invalid shares. This supports automated exclusion policies or manual review in consensus systems.
Share Refresh
Share refresh updates the shares without changing the public key. This procedure provides proactive security. Even if an adversary compromises old shares, the refreshed shares are independent.
Mask Function Structure
Masks are represented as functions from party identifier to secret.
structure MaskFn (S : Scheme) :=
(mask : S.PartyId → S.Secret)
instance (S : Scheme) : Join (MaskFn S) :=
⟨fun a b => { mask := fun pid => a.mask pid + b.mask pid }⟩
The join merges masks by pointwise addition.
Zero-Sum Mask Invariant
The zero-sum property is captured in a dependent structure.
structure ZeroSumMaskFn (S : Scheme) (active : Finset S.PartyId) :=
(fn : MaskFn S)
(sum_zero : (active.toList.map (fun pid => fn.mask pid)).sum = 0)
instance (S : Scheme) (active : Finset S.PartyId) : Join (ZeroSumMaskFn S active)
Merging zero-sum masks preserves zero-sum on the same active set.
Refresh Overview
Each party i holds a share sk_i. After refresh each party holds a new share sk_i' = sk_i + m_i. The masks sum to zero: Σ_i m_i = 0.
This ensures the master secret Σ_i sk_i is unchanged.
Refresh Function
Applies a mask to each share and recomputes the corresponding public share.
def refreshShares
(S : Scheme)
(m : MaskFn S)
(shares : List (KeyShare S)) : List (KeyShare S) :=
shares.map (fun ks =>
let sk' := ks.secret + m.mask ks.pid
let pk' := S.A sk'
KeyShare.create S ks.pid sk' pk' ks.pk)
Mask Generation
The masks must sum to zero. One approach is as follows.
- Each party i samples a random mask contribution r_i.
- Parties run a protocol to compute a common offset from the contributions.
- Each party sets m_i from r_i and the offset in a way that ensures Σ_i m_i = 0.
An alternative uses pairwise contributions. Party i sends r_{i→j} to party j and receives r_{j→i} from j. The net contribution is m_i = Σ_j (r_{j→i} − r_{i→j}). These contributions cancel pairwise, so Σ_i m_i = 0.
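The pairwise construction can be demonstrated directly: every r_{i→j} appears once with a plus sign (at the receiver) and once with a minus sign (at the sender), so the masks always sum to zero. A small sketch (function name illustrative):

```python
import random

def pairwise_masks(party_ids: list[int], rng: random.Random) -> dict[int, int]:
    # r[i][j] is the random value party i sends to party j.
    r = {i: {j: rng.randrange(1000) for j in party_ids if j != i}
         for i in party_ids}
    # Net mask for party i: everything it received minus everything it sent.
    return {i: sum(r[j][i] for j in party_ids if j != i)
              - sum(r[i][j] for j in party_ids if j != i)
            for i in party_ids}
```

The zero-sum property holds for any randomness, which is why no coordination beyond the pairwise exchange is needed.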
Public Share Update
After refresh each party computes its new public share pk_i' = A(sk_i').
The global public key remains unchanged because Σ_i pk_i' = A(Σ_i sk_i + Σ_i m_i) = A(Σ_i sk_i) = pk.
Refresh Protocol
A full refresh protocol runs as follows.
- Each party generates its mask contribution.
- Parties exchange commitments to their contributions.
- Parties reveal their contributions.
- Each party verifies all commitments.
- Each party computes its new share.
- Parties publish new commitments to their public shares.
- External verifiers can check the consistency of the new public shares.
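The end-to-end effect of the steps above can be checked in a toy additive model, assuming A is multiplication by a public constant mod a prime (illustrative names; the Lean `refreshShares` is the real implementation):

```python
Q = 2**31 - 1  # toy prime modulus
A = 48271      # stand-in for the public matrix

def apply_A(s: int) -> int:
    return (A * s) % Q

def refresh(shares: dict[int, int], masks: dict[int, int]) -> dict[int, int]:
    # sk_i' = sk_i + m_i, mirroring refreshShares
    return {pid: (sk + masks[pid]) % Q for pid, sk in shares.items()}
```

With zero-sum masks, both the master secret (the sum of shares) and the global public key A(master secret) are unchanged.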
Refresh Coordination Protocol
The Protocol/Shares/RefreshCoord.lean module provides a distributed protocol for generating zero-sum masks without a trusted coordinator.
Phases:
- Commit: Each party commits to their random mask
- Reveal: Parties reveal masks after all commits received
- Adjust: Coordinator computes adjustment to achieve zero-sum
- Apply: All parties apply masks to their shares
CRDT Design: Uses MsgMap for commits and reveals to ensure at most one message per party. This makes conflicting messages unrepresentable at the type level.
abbrev MaskCommitMap (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] :=
MsgMap S (MaskCommitMsg S)
abbrev MaskRevealMap (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] :=
MsgMap S (MaskRevealMsg S)
structure RefreshRoundState (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] where
phase : RefreshPhase
parties : List S.PartyId
roundNum : Nat
coordStrategy : CoordinatorStrategy S.PartyId
maskCommits : MaskCommitMap S -- at most one per party
maskReveals : MaskRevealMap S -- at most one per party
adjustment : Option (AdjustmentMsg S)
Commitment Design: The protocol commits to A(mask) (the public image) rather than the mask itself. This enables:
- Public verifiability without revealing the secret mask
- Consistency with DKG (which commits to public shares)
- Simple zero-sum verification via A(Σ m_i) = A(0) = 0
Architecture: Separated CRDT and Validation
The protocol separates concerns into two layers:
- CRDT Layer - Pure merge semantics for replication/networking
- Validation Layer - Conflict detection for auditing/security
-- CRDT merge: idempotent, always succeeds, silently ignores duplicates
def processCommit (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RefreshRoundState S) (msg : MaskCommitMsg S)
: RefreshRoundState S
def processReveal (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RefreshRoundState S) (msg : MaskRevealMsg S)
: RefreshRoundState S
-- Validation: detect conflicts without modifying state
def detectCommitConflict (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RefreshRoundState S) (msg : MaskCommitMsg S)
: Option (MaskCommitMsg S)
def validateCommit (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RefreshRoundState S) (msg : MaskCommitMsg S)
: Option (CommitValidationError S)
-- Combined: merge + validation in one call
def processCommitValidated (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RefreshRoundState S) (msg : MaskCommitMsg S)
: RefreshRoundState S × Option (CommitValidationError S)
Design Rationale:
- CRDT functions can be used directly for networking without error handling in merge path
- Validation is explicit and composable. Call it when you need it.
- Follows "make illegal states unrepresentable" at CRDT level, "detect suspicious patterns" separately
Coordinator Selection:
inductive CoordinatorStrategy (PartyId : Type*)
| fixed (pid : PartyId) -- always same coordinator
| roundRobin (round : Nat) -- rotate based on round number
| random (seed : Nat) -- pseudo-random selection
Zero-Sum Verification: The coordinator computes an adjustment ensuring Σ_i m_i = 0:
def computeAdjustment (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
[DecidableEq S.PartyId] [Inhabited S.PartyId]
(st : RefreshRoundState S) : Option S.Secret :=
if st.phase = .adjust then
let coord := st.coordinator
let otherMasks := st.maskReveals.toList.filter (fun r => r.sender ≠ coord)
let sumOthers := (otherMasks.map (·.mask)).sum
some (-sumOthers)
else none
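The adjustment logic mirrors `computeAdjustment`: the coordinator's mask is the negation of everyone else's sum, so the total is zero by construction. A Python sketch (names illustrative):

```python
def compute_adjustment(reveals: dict[int, int], coordinator: int, q: int) -> int:
    # Sum the revealed masks of every party except the coordinator,
    # then return the negation so the grand total is 0 mod q.
    others = sum(m for pid, m in reveals.items() if pid != coordinator)
    return (-others) % q
```

Because the coordinator's own contribution is fully determined by the others' reveals, it cannot bias the sum away from zero without being detected by the public A(Σ m_i) check.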
DKG-Based Refresh Protocol
The Protocol/Shares/RefreshDKG.lean module provides a fully distributed refresh protocol without a trusted coordinator. Each party contributes a zero-polynomial (polynomial with constant term 0), ensuring the master secret remains unchanged while all shares are updated.
Three-Round Protocol:
- Round 1 (Commit): Each party j samples a random polynomial f_j with f_j(0) = 0 and broadcasts a commitment to its polynomial coefficients.
- Round 2 (Share): Each party evaluates its polynomial at all other parties' identifiers and sends the shares f_j(i) privately.
- Round 3 (Verify & Apply): Each party i verifies received shares against commitments and computes its refresh delta as the sum of all received shares, δ_i = Σ_j f_j(i).
structure RefreshLocalState (S : Scheme) where
pid : S.PartyId
coefficients : List S.Secret -- [0, a₁, a₂, ..., a_{t-1}]
opening : S.Opening
structure RefreshDelta (S : Scheme) where
pid : S.PartyId
delta : S.Secret -- sum of all f_j(pid)
def refreshRound1 (S : Scheme) (pid : S.PartyId)
(randomCoeffs : List S.Secret) (opening : S.Opening)
: RefreshLocalState S × RefreshCommitMsg S
def refreshRound3 (S : Scheme)
(st : RefreshLocalState S)
(commits : List (RefreshCommitMsg S))
(shares : List (RefreshShareMsg S))
: Except (RefreshDKGError S.PartyId) (RefreshDelta S)
Security Properties:
- No trusted coordinator: Unlike RefreshCoord.lean, no single party learns all masks
- Verifiable: Polynomial commitments allow verification of received shares
- Zero-sum by construction: Since each f_j(0) = 0, the sum of deltas preserves the master secret
Verification:
def verifyZeroSum (S : Scheme) (deltas : List (RefreshDelta S)) : Bool :=
(deltas.map (·.delta)).sum = 0
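For polynomial shares, "zero-sum" manifests as interpolation at zero: the deltas δ_i = Σ_j f_j(i) are points on a polynomial whose value at 0 is Σ_j f_j(0) = 0, so adding them to Shamir shares leaves the secret fixed. A sketch over a small prime field (names illustrative):

```python
Q = 10007  # small prime field for illustration

def poly_eval(coeffs: list[int], x: int) -> int:
    # coeffs[k] is the coefficient of x^k
    return sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q

def lagrange_at_zero(points: dict[int, int]) -> int:
    # Interpolate {x: y} points and evaluate the polynomial at 0.
    total = 0
    for xi, yi in points.items():
        num, den = 1, 1
        for xj in points:
            if xj != xi:
                num = (num * (-xj)) % Q
                den = (den * (xi - xj)) % Q
        total = (total + yi * num * pow(den, Q - 2, Q)) % Q
    return total
```

Each party's zero-polynomial has constant term 0, so interpolating the deltas at 0 gives 0, and refreshed shares still reconstruct the original secret.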
Public Key Invariance Theorem
If the mask sums to zero over the participant list, the global public key is unchanged.
lemma refresh_pk_unchanged
(S : Scheme)
(m : MaskFn S)
(shares : List (KeyShare S))
(hsum : (shares.map (fun ks => m.mask ks.pid)).sum = 0) :
let refreshed := refreshShares S m shares
(refreshed.foldl (fun acc ks => acc + ks.pk_i) (0 : S.Public)) =
(shares.foldl (fun acc ks => acc + ks.pk_i) (0 : S.Public))
Refresh State (CRDT)
Refresh state forms a semilattice for conflict-free replication.
structure RefreshState (S : Scheme) :=
(mask : MaskFn S)
instance (S : Scheme) : Join (RefreshState S)
instance (S : Scheme) : SemilatticeSup (RefreshState S)
Security Properties
Forward secrecy. Old shares reveal nothing about new shares.
Backward secrecy. New shares reveal nothing about old shares.
Invariant preservation. The master secret and global public key are unchanged.
Share Repair
Share repair allows a party to recover a lost share. The recovering party receives help from other parties.
Repair Request Structure
Messages for initiating and responding to repair requests. Bundles merge via list append.
structure RepairRequest (S : Scheme) :=
(requester : S.PartyId)
(knownPk_i : S.Public) -- The public share is still known
structure RepairMsg (S : Scheme) where
sender : S.PartyId -- helper party
target : S.PartyId -- requester (for routing)
delta : S.Secret -- weighted share contribution
structure RepairBundle (S : Scheme) :=
(msgs : List (RepairMsg S))
instance (S : Scheme) : Join (RepairBundle S) := ⟨fun a b => ⟨a.msgs ++ b.msgs⟩⟩
Repair Session State
The repair session tracks protocol progress with CRDT merge semantics.
structure RepairSession (S : Scheme) :=
(request : RepairRequest S)
(helpers : Finset S.PartyId)
(received : RepairBundle S)
(threshold : Nat)
instance (S : Scheme) : Join (RepairSession S)
Repair Scenario
Party r has lost its share sk_r. It wants to recover the share without other parties learning it.
Helper Protocol
Each helper j sends a masked delta to r. The deltas are structured so that their sum equals sk_r.
Let δ_{j→r} denote the message from helper j to r. The messages satisfy Σ_j δ_{j→r} = sk_r.
Repair Function
Sums the deltas from all helper messages to reconstruct the lost share.
def repairShare
(S : Scheme)
(msgs : List (RepairMsg S))
: S.Secret :=
(msgs.map (·.delta)).sum
Correctness
The repaired share equals the original when deltas sum correctly.
lemma repair_correct
(S : Scheme)
(msgs : List (RepairMsg S))
(sk_i : S.Secret)
(h : msgs.foldl (fun acc m => acc + m.delta) (0 : S.Secret) = sk_i) :
repairShare S msgs = sk_i
Privacy (Zero-Sum Masks)
Additional zero-sum masks do not change the repaired value:
lemma repair_masked_zero
(S : Scheme)
(msgs : List (RepairMsg S))
(mask : List S.Secret)
(hmask : mask.sum = 0)
(hlen : mask.length = msgs.length) :
repairShare S (List.zipWith (fun m z => { m with delta := m.delta + z }) msgs mask)
= repairShare S msgs
Construction
One construction uses Lagrange interpolation. If the shares are evaluations of a degree t−1 polynomial at distinct points, then any t parties can reconstruct the share at any point.
The Lagrange coefficient for party i evaluating at 0 over set T is λ_i = Π_{m ∈ T, m ≠ i} m / (m − i).
These coefficients satisfy the partition of unity property: Σ_{i ∈ T} λ_i = 1.
The implementation verifies these properties against Mathlib's Lagrange module. These lemmas confirm equivalence with the standard formulation and the partition of unity property.
-- Our formula equals Mathlib's basis polynomial evaluated at 0
lemma lagrangeCoeffAtZero_eq_basis_eval :
lagrangeCoeffAtZero S pidToScalar Sset i = (Lagrange.basis s v i).eval 0
-- Lagrange coefficients sum to 1 (partition of unity)
lemma lagrangeCoeffs_sum_one (hs : s.Nonempty) :
(∑ i ∈ s, (Lagrange.basis s v i).eval 0) = 1
-- Weighted sum reconstructs polynomial at 0
lemma lagrange_interpolation_at_zero :
(∑ i ∈ s, (Lagrange.basis s v i).eval 0 * r i) = p.eval 0
For additive shares a simpler approach works. Each helper j sends a random share of its own share sk_j. The recovering party combines these to reconstruct its share through a pre-established linear relation.
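The coefficient formula and the three lemmas above can be checked numerically over a small prime field. This sketch mirrors the Lean `lagrangeCoefficient` fold (λ_i = Π m / (m − i)); names are illustrative:

```python
Q = 10007  # prime field for illustration

def lagrange_coeff_at_zero(i: int, party_set: list[int]) -> int:
    # Mirrors the Lean fold: product over m != i of m / (m - i), mod Q.
    acc = 1
    for m in party_set:
        if m != i:
            acc = acc * m % Q * pow((m - i) % Q, Q - 2, Q) % Q
    return acc

def poly_eval(coeffs: list[int], x: int) -> int:
    return sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q
```

Partition of unity (Σ λ_i = 1) follows from interpolating the constant polynomial 1; the weighted sum of shares reconstructs p(0), matching `lagrange_interpolation_at_zero`.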
Helper Contribution Function
Each helper scales its share by a Lagrange coefficient and sends the result.
def helperContribution
(S : Scheme)
(helper : KeyShare S)
(requester : S.PartyId)
(coefficient : S.Scalar)
: RepairMsg S :=
{ sender := helper.pid
, target := requester
, delta := coefficient • helper.secret }
Verification
Checks that the repaired secret maps to the expected public share.
def verifyRepairedShare
(S : Scheme)
(repairedSk : S.Secret)
(expectedPk : S.Public)
: Prop :=
S.A repairedSk = expectedPk
Threshold Completion
Checks whether enough helpers have responded and attempts to complete the repair.
def hasEnoughHelpers
(S : Scheme) [DecidableEq S.PartyId]
(session : RepairSession S)
: Bool :=
(helperPids S session.received.msgs).length ≥ session.threshold
def tryCompleteRepair
(S : Scheme) [DecidableEq S.PartyId]
(session : RepairSession S)
: Option S.Secret
Security
Privacy. No single helper learns sk_r. Zero-sum masking ensures that additional randomness does not affect the reconstructed value.
Correctness. The recovered share equals the original share. Verified via verifyRepairedShare.
Threshold. At least t helpers must participate. Enforced by hasEnoughHelpers.
Robustness. If threshold is met, tryCompleteRepair returns Some.
Repair Protocol Steps
- Party r broadcasts a RepairRequest with its known public share.
- Each helper computes its delta via helperContribution.
- Each helper sends the RepairMsg to r over a secure channel.
- Party r collects messages in a RepairSession (CRDT-mergeable).
- When hasEnoughHelpers returns true, party r calls tryCompleteRepair.
- Party r verifies via verifyRepairedShare.
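The flow above can be sketched end to end over a toy prime field, assuming Shamir-style shares. The helper names (`lagrange_at`, `repair`) are illustrative, not the Lean API; the test also checks the privacy lemma that zero-sum masks leave the result unchanged:

```python
Q = 10007  # small prime field for illustration

def poly_eval(coeffs: list[int], x: int) -> int:
    return sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q

def lagrange_at(j: int, x: int, helper_set: list[int]) -> int:
    # Coefficient for helper j when interpolating the polynomial at point x.
    acc = 1
    for m in helper_set:
        if m != j:
            acc = acc * ((x - m) % Q) % Q * pow((j - m) % Q, Q - 2, Q) % Q
    return acc

def repair(deltas: list[int]) -> int:
    # The requester just sums what it receives, mirroring repairShare.
    return sum(deltas) % Q
```

Each helper j sends λ_j · p(j); the sum interpolates p at the requester's point, and masking the deltas with values summing to zero does not change the result.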
Repair Coordination Protocol
The Protocol/Shares/RepairCoord.lean module provides a commit-reveal protocol for coordinating repair contributions.
Phases:
- Request: Requester broadcasts a repair request with its known public share pk_i
- Commit: Helpers commit to their Lagrange-weighted contributions
- Reveal: Helpers reveal contributions after all commits received
- Verify: Requester verifies the repaired share against pk_i
CRDT Design: Uses MsgMap for commits and reveals to ensure at most one message per helper. This makes conflicting messages unrepresentable at the type level.
abbrev ContribCommitMap (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] :=
MsgMap S (ContribCommitMsg S)
abbrev ContribRevealMap (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] :=
MsgMap S (ContribRevealMsg S)
inductive RepairPhase
| request | commit | reveal | verify | done
structure RepairCoordState (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] where
phase : RepairPhase
request : RepairRequest S
helpers : List S.PartyId
threshold : Nat
commits : ContribCommitMap S -- at most one per helper
reveals : ContribRevealMap S -- at most one per helper
repairedShare : Option S.Secret
Architecture: Separated CRDT and Validation
Like RefreshCoord, RepairCoord separates CRDT merge from validation:
-- CRDT merge: idempotent, always succeeds, silently ignores duplicates/non-helpers
def processContribCommit (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RepairCoordState S) (msg : ContribCommitMsg S)
: RepairCoordState S
def processContribReveal (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RepairCoordState S) (msg : ContribRevealMsg S)
: RepairCoordState S
-- Validation: detect conflicts without modifying state
def detectContribCommitConflict (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RepairCoordState S) (msg : ContribCommitMsg S)
: Option (ContribCommitMsg S)
def validateContribCommit (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RepairCoordState S) (msg : ContribCommitMsg S)
: Option (ContribCommitValidationError S) -- wrongPhase | notHelper | conflict
-- Combined: merge + validation in one call
def processContribCommitValidated (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RepairCoordState S) (msg : ContribCommitMsg S)
: RepairCoordState S × Option (ContribCommitValidationError S)
Usage Patterns:
- Networking: Use processContribCommit / processContribReveal directly for replication
- Auditing: Call detectContribCommitConflict to check for duplicate messages
- Strict mode: Use processContribCommitValidated for both merge and error reporting
Helper State: Each helper maintains local state for participation:
structure HelperLocalState (S : Scheme) where
helperId : S.PartyId
keyShare : KeyShare S
lagrangeCoeff : S.Scalar
contribution : S.Secret -- λ_j · sk_j
commitment : S.Commitment
opening : S.Opening
Lagrange Coefficient Computation:
Uses the unified Protocol/Core/Lagrange.lean module:
def lagrangeCoefficient [Field F] [DecidableEq F]
(partyScalar : F) (otherScalars : List F) : F :=
otherScalars.foldl (fun acc m =>
if m = partyScalar then acc
else acc * (m / (m - partyScalar))) 1
Transcript Creation:
The createTranscript function extracts ordered lists from MsgMap for verification:
def createTranscript (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
(st : RepairCoordState S) : Option (RepairTranscript S) :=
if st.phase = .done then
some { commits := st.commits.toList.map (·.2)
reveals := st.reveals.toList.map (·.2)
repairedShare := st.repairedShare.getD default }
else none
Verification: The repaired share is verified against the known public share:
def completeRepair (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId]
[DecidableEq S.Public]
(st : RepairCoordState S) : RepairCoordState S :=
if st.phase = .verify then
let repairedSk := aggregateContributions S st.reveals.toList
if S.A repairedSk = st.request.knownPk_i then
{ st with repairedShare := some repairedSk, phase := .done }
else st
else st
Rerandomization
Rerandomization adds privacy by masking shares and nonces. It prevents linking between signing sessions.
Rerandomization Masks Structure
Holds zero-sum masks for both shares and nonces over a specific signer set. Masks merge by pointwise addition.
/-- Raw mask functions without zero-sum proof. -/
structure RawRerandMasks (S : Scheme) :=
(shareMask : S.PartyId → S.Secret)
(nonceMask : S.PartyId → S.Secret)
instance (S : Scheme) : Join (RawRerandMasks S)
/-- Zero-sum masks for rerandomizing signatures over a specific signer set. -/
structure RerandMasks (S : Scheme) (Sset : List S.PartyId) :=
(masks : RawRerandMasks S)
(shareSumZero : (Sset.map masks.shareMask).sum = 0)
(nonceSumZero : (Sset.map masks.nonceMask).sum = 0)
instance (S : Scheme) (Sset : List S.PartyId) : Join (RerandMasks S Sset)
Motivation
Without rerandomization the same shares produce related signatures. An observer might link signatures to the same key or signer set. Rerandomization breaks this linkage.
State Rerandomization
Applies masks to both the nonce and share within a signing state.
/-- Apply raw rerandomization masks to signing local state. -/
def rerandStateRaw
(S : Scheme)
(masks : RawRerandMasks S)
(st : SignLocalState S) : SignLocalState S :=
let newSk := st.share.secret + masks.shareMask st.share.pid
let newPk := S.A newSk
{ st with
y_i := st.y_i + masks.nonceMask st.share.pid,
w_i := S.A (st.y_i + masks.nonceMask st.share.pid),
share := KeyShare.create S st.share.pid newSk newPk st.share.pk }
/-- Apply zero-sum rerandomization masks to signing local state. -/
def rerandState
(S : Scheme)
(Sset : List S.PartyId)
(masks : RerandMasks S Sset)
(st : SignLocalState S) : SignLocalState S :=
rerandStateRaw S masks.masks st
Share Rerandomization
Before signing, each party adds a mask μ_i to its share: sk_i' = sk_i + μ_i.
The masks must sum to zero, as in refresh. The signing protocol uses the masked shares. The verification equation remains valid because the master secret is unchanged.
Nonce Rerandomization
Similarly, parties can add masks ν_i to their nonces: y_i' = y_i + ν_i.
If Σ_i ν_i = 0 then the aggregated nonce is unchanged.
Signature Preservation Theorem
Zero-sum masks cancel out in the aggregated signature.
lemma rerand_preserves_sig
(S : Scheme)
(masks : RawRerandMasks S)
(Sset : List S.PartyId)
(c : S.Challenge)
(commits : List S.Commitment)
(shares : List (SignShareMsg S)) :
(shares.map (fun sh => masks.shareMask sh.sender)).sum = 0 →
aggregateSignature S c Sset commits
(shares.map (fun sh => { sh with z_i := sh.z_i + masks.shareMask sh.sender }))
= aggregateSignature S c Sset commits shares
Combined Rerandomization
A full rerandomization applies masks to both shares and nonces. The masks must satisfy the zero-sum property independently.
Unlinkability
Rerandomization provides unlinkability. An adversary who sees multiple signatures cannot determine if they were produced by the same set of parties. Each session appears independent.
Protocol Integration
Rerandomization integrates into the signing protocol as follows.
- Before round one parties agree on a session identifier.
- Each party derives its masks from the session identifier and randomness.
- Parties exchange commitments to their masked values.
- The signing protocol proceeds with masked shares and nonces.
- The final signature is identical to one produced without rerandomization.
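One simple way to produce masks with the zero-sum property is to let each party sample freely and have a final mask cancel the running sum. A standalone sketch over Int (the names total and zeroSumMasks are illustrative, not the library's API):

```lean
-- Sum a list of integer masks.
def total (xs : List Int) : Int := xs.foldl (· + ·) 0

-- Append a closing mask that negates the running sum, forcing the total to zero.
def zeroSumMasks (raw : List Int) : List Int :=
  raw ++ [-(total raw)]

#eval total (zeroSumMasks [3, -1, 4])  -- 0
```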
Party Addition and Removal
The protocol can add or remove parties from the signer set.
Party Removal
To remove a party, the remaining parties redistribute its share. This is equivalent to a repair followed by a refresh. The share is recovered and then split among the remaining parties.
Party Addition
To add a party, the existing parties run a refresh that creates a new share for it. The new share is generated such that the shares still sum to the master secret.
Threshold Adjustment
Changing the threshold requires restructuring the shares. Moving from t-of-n to t′-of-n requires a new secret-sharing polynomial of degree t′ − 1. This is more complex than simple refresh.
Key Rotation
Key rotation changes the master secret and public key. Unlike refresh, which preserves the key, rotation generates a new key.
Rotation Procedure
- Parties run a new DKG protocol.
- The new protocol produces fresh shares for a new secret s′.
- The new public key is A·s′, derived from the new secret.
- External systems update to use the new public key.
Migration
Applications must migrate from the old key to the new key. Signatures under the old key remain valid. New signatures use the new key.
Composite State (CRDT)
Protocol phase state combines with refresh, repair, and rerandomization state in a product semilattice. Merge is provided by the product semilattice instances.
abbrev CompositeState (S : Scheme) (p : Phase) :=
(State S p) × RefreshState S × RepairBundle S × RerandMasks S
Threshold-Aware Merge
For threshold protocols, the ShareWithCtx structure couples share state with threshold context. When two replicas merge, the active signer sets may differ. The merge must:
- Union the active sets: merged.active = a.active ∪ b.active
- Preserve threshold: merged.t = max(a.t, b.t)
- Prove validity: merged.t ≤ |merged.active|
- Recompute coefficients: Lagrange coefficients depend on the signer set
Merge Strategies
Three merge strategies are provided:
Conservative merge (ones): Falls back to n-of-n with simple sum. No field required but loses threshold efficiency.
def mergeShareWithCtxOnes (S : Scheme) [DecidableEq S.PartyId]
(a b : ShareWithCtx S) : ShareWithCtx S
Full merge with recompute: Recomputes Lagrange coefficients for merged set. Preserves t-of-n semantics but requires a Field instance.
def mergeShareWithCtx
(S : Scheme) [Field S.Scalar] [DecidableEq S.PartyId]
(pidToScalar : S.PartyId → S.Scalar)
(a b : ShareWithCtx S) : ShareWithCtx S
Auto merge: Selects strategy based on field availability. Pass some inferInstance for field-backed schemes or none for lattice-only builds.
def mergeShareWithCtxAuto
(S : Scheme) [DecidableEq S.PartyId]
(fieldInst : Option (Field S.Scalar))
(pidToScalar : S.PartyId → S.Scalar)
(a b : ShareWithCtx S) : ShareWithCtx S
Cardinality Proof Maintenance
The merge must prove that max(t_a, t_b) ≤ |a.active ∪ b.active|. This follows from:
- a.ctx.t ≤ |a.active| ≤ |a.active ∪ b.active| (subset monotonicity)
- b.ctx.t ≤ |b.active| ≤ |a.active ∪ b.active| (subset monotonicity)
- Therefore max(t_a, t_b) ≤ |merged.active|
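The arithmetic core of this argument is a one-liner for `omega` once the cardinalities are abstracted as naturals (a standalone sketch; in the real proof the union bounds come from subset monotonicity of Finset cardinality):

```lean
-- ca, cb, cu abstract |a.active|, |b.active|, |a.active ∪ b.active|.
example (ta tb ca cb cu : Nat)
    (ha : ta ≤ ca) (hb : tb ≤ cb) (hau : ca ≤ cu) (hbu : cb ≤ cu) :
    max ta tb ≤ cu := by
  omega
```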
Type-Indexed Phase Transitions
The Protocol/State/PhaseIndexed.lean module encodes protocol phases at the type level. Invalid phase transitions become compile-time errors rather than runtime failures.
Phase-Indexed State
State is indexed by the current phase, ensuring only valid operations are available:
inductive PhaseState (S : Scheme) : Phase → Type where
| commit : CommitPhaseData S → PhaseState S .commit
| reveal : RevealPhaseData S → PhaseState S .reveal
| shares : SharePhaseData S → PhaseState S .shares
| done : DonePhaseData S → PhaseState S .done
Each phase carries phase-specific data. The runtime phase state (Protocol/State/Phase.lean) uses MsgMap for conflict-free message storage, while the type-indexed version (Protocol/State/PhaseIndexed.lean) uses lists for simpler type signatures:
-- Type-indexed version (PhaseIndexed.lean) - simpler types
structure CommitPhaseData (S : Scheme) where
commits : List (DkgCommitMsg S)
expectedParties : Nat
structure RevealPhaseData (S : Scheme) where
commits : List (DkgCommitMsg S)
reveals : List (DkgRevealMsg S)
expectedParties : Nat
commitsComplete : commits.length = expectedParties
-- Runtime version (Phase.lean) - conflict-free by construction
structure CommitState (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] where
commits : CommitMap S -- MsgMap: at most one per sender
structure RevealState (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] where
commits : CommitMap S
reveals : RevealMap S -- MsgMap: at most one per sender
Type-Safe Transitions
Transitions are functions between specific phase types:
-- Can only be called when in commit phase
def transitionToReveal (S : Scheme)
(st : PhaseState S .commit)
(hready : match st with | .commit data => data.commits.length = data.expectedParties)
: PhaseState S .reveal
-- Can only be called when in reveal phase
def transitionToShares (S : Scheme)
(st : PhaseState S .reveal)
(activeSigners : Finset S.PartyId)
(message : S.Message)
(threshold : Nat)
(hready : match st with | .reveal data => data.reveals.length = data.expectedParties)
: PhaseState S .shares
Phase Monotonicity
Phase ordering is proven at the type level:
def Phase.toNat : Phase → Nat
| .commit => 0 | .reveal => 1 | .shares => 2 | .done => 3
theorem reveal_advances : Phase.commit < Phase.reveal
theorem shares_advances : Phase.reveal < Phase.shares
theorem done_advances : Phase.shares < Phase.done
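A standalone sketch of how these ordering proofs go through, assuming the order on Phase is defined by comparing toNat (the LT instance here is illustrative, not necessarily the module's definition):

```lean
inductive Phase | commit | reveal | shares | done
  deriving DecidableEq

def Phase.toNat : Phase → Nat
  | .commit => 0 | .reveal => 1 | .shares => 2 | .done => 3

-- Ordering by toNat makes the advancement theorems immediate.
instance : LT Phase := ⟨fun a b => a.toNat < b.toNat⟩

theorem reveal_advances : Phase.commit < Phase.reveal := by
  show Phase.commit.toNat < Phase.reveal.toNat
  decide
```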
Benefits
- Compile-time safety: Cannot call addReveal in commit phase
- Self-documenting: Type signatures show valid transitions
- Proof automation: Phase ordering proofs are trivial
- Extraction functions: Data access is phase-aware
def getCommits (S : Scheme) : {p : Phase} → PhaseState S p → List (DkgCommitMsg S)
| .commit, .commit data => data.commits
| .reveal, .reveal data => data.commits
| .shares, .shares data => data.commits
| .done, .done _ => []
Error Recovery Patterns
This section describes common error scenarios and recovery strategies across protocol phases.
DKG Failures
DKG failures occur when key generation cannot complete successfully.
Insufficient participants. If fewer than threshold parties complete the protocol, DKG cannot produce a valid key. Recovery requires restarting DKG with a different participant set or waiting for offline parties.
Too many complaints. If more than f parties receive valid complaints, the faulty party count exceeds tolerance. Recovery options include identifying and excluding faulty parties then restarting, or aborting if the faulty count exceeds trust assumptions.
Commitment mismatch. When a party's reveal does not match their commitment, they are excluded. If exclusion leaves fewer than threshold honest parties, DKG must restart.
-- Check if DKG can continue after exclusions
def canContinueDkg (totalParties exclusions threshold : Nat) : Bool :=
totalParties - exclusions >= threshold
-- Recovery action based on complaint count
def dkgRecoveryAction (complaints faultTolerance : Nat) : DkgRecoveryAction :=
if complaints = 0 then .continue
else if complaints <= faultTolerance then .excludeAndContinue
else .abort
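For concreteness, the continuation rule plays out as follows for a 7-party, threshold-5 setup (canContinueDkg is restated locally so the checks stand alone):

```lean
def canContinueDkg (totalParties exclusions threshold : Nat) : Bool :=
  totalParties - exclusions >= threshold

example : canContinueDkg 7 2 5 = true := rfl   -- 5 parties remain, exactly threshold
example : canContinueDkg 7 3 5 = false := rfl  -- 4 < 5: DKG must restart
```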
Signing Session Failures
Signing sessions can fail for liveness or security reasons.
Timeout waiting for commitments. Some parties did not send commitments within the deadline. If at least threshold parties committed, the session can proceed without the missing parties. Otherwise, abort and retry with explicit participant selection.
Timeout waiting for shares. Parties committed but did not send shares. Same recovery as commitment timeout. Parties who committed but did not reveal are identified for potential exclusion.
Norm bound exceeded in aggregation. With local rejection sampling, individual shares exceeding bounds are rejected rather than triggering session restart. If fewer than threshold valid shares remain after rejection, the session fails.
-- Determine next action based on valid share count
def signingRecoveryAction (validShares threshold : Nat) : SigningRecoveryAction :=
if validShares >= threshold then .aggregateValidShares
else .retryWithDifferentSigners
Refresh Protocol Failures
Refresh failures affect share updates without changing the underlying key.
Coordinator failure. If the coordinator becomes unresponsive, select a new coordinator using the rotation strategy and restart the refresh round. Partial state is discarded.
Zero-sum violation. If masks do not sum to zero, the coordinator computed an incorrect adjustment. This indicates coordinator malfunction. Restart with a different coordinator.
Insufficient mask commits. If parties do not commit masks, refresh cannot proceed. Wait for additional commits or abort if the deadline passes.
-- Coordinator selection on failure
def selectNewCoordinator (strategy : CoordinatorStrategy PartyId)
(failed : PartyId) (round : Nat) : PartyId :=
match strategy with
| .fixed pid => pid -- Single point of failure
| .roundRobin _ => computeNextCoordinator (round + 1)
| .random seed => computeRandomCoordinator (seed + 1)
Repair Protocol Failures
Repair failures occur when recovering a lost share.
Insufficient helpers. Fewer than threshold parties responded with contributions. Request assistance from additional parties or abort if insufficient parties are available.
Verification failure. The repaired share does not match the known public share. This indicates either corrupted contributions or an incorrect public share. Retry with a different helper set.
Commitment mismatch during reveal. A helper's contribution reveal does not match their commitment. Exclude that helper and request a replacement if available.
-- Check if repair can complete
def canCompleteRepair (receivedHelpers threshold : Nat) : Bool :=
receivedHelpers >= threshold
-- Retry strategy for repair
def repairRetryStrategy (threshold : Nat) (helpers : List PartyId) (failed : List PartyId)
(available : List PartyId) : Option (List PartyId) :=
let remaining := helpers.filter (fun h => h ∉ failed)
let needed := threshold - remaining.length
if needed <= available.length then
some (remaining ++ available.take needed)
else
none
Abort Coordination
When recovery is not possible, sessions must abort cleanly.
Liveness aborts require f+1 votes before taking effect. This prevents a minority from disrupting the protocol. Collect abort proposals and check against the abort threshold.
Security aborts take effect immediately without voting. When a trust violation or other security issue is detected, any party can trigger immediate abort.
Post-abort state should be cleaned up. Consumed nonces are invalidated. Session state is marked as aborted. New sessions require fresh nonces.
-- Determine abort handling
def handleAbort (reason : AbortReason PartyId) (votes : Nat)
(cfg : ThresholdConfig) : AbortHandling :=
if reason.isImmediate then .abortNow
else if votes >= cfg.abortThreshold then .abortNow
else .collectMoreVotes
-- Post-abort cleanup
def cleanupAbortedSession (session : Nat) (st : NonceState S) : NonceState S :=
-- Mark session as used to prevent nonce reuse
{ st with usedSessions := Insert.insert session st.usedSessions }
General Recovery Principles
Follow these principles when implementing error recovery.
Fail fast for security violations. Security issues like trust violations or nonce reuse should trigger immediate abort without waiting for consensus.
Retry with exclusion for misbehaving parties. When specific parties cause failures, exclude them from retry attempts rather than retrying with the same set.
Preserve nonce safety across retries. Never reuse a nonce even if the session failed. Generate fresh nonces for each signing attempt.
Log blame for auditing. Use BlameableError and collectBlame to track which parties caused failures. This information supports manual review and automated exclusion policies.
Prefer local recovery over distributed coordination. Local rejection sampling handles norm bounds locally. Only escalate to session abort for failures that require distributed consensus.
Protocol Integration
Ice Nine provides integration points for external consensus and coordination protocols. This document describes how to integrate Ice Nine's threshold signing with protocols like Aura.
External Context
External protocols can attach context to signing messages using the ExternalContext structure. This enables binding signatures to consensus instances without coupling Ice Nine to specific protocols.
structure ExternalContext where
consensusId : Option ByteArray := none -- external session identifier
resultId : Option ByteArray := none -- what is being signed (e.g., H(op, prestate))
prestateHash : Option ByteArray := none -- state validation
evidenceDelta : Option ByteArray := none -- CRDT evidence for propagation
Usage Pattern
- Attach context when creating messages:
let ctx : ExternalContext := {
consensusId := some myConsensusId
resultId := some (hashOf operation prestate)
prestateHash := some prestateHash }
let share : SignShareMsg S :=
{ sender := pid, session := sess, z_i := z, context := ctx }
- Validate context consistency during aggregation:
match aggregateWithContextValidation S c signers commits shares with
| .ok signature => -- all shares reference same consensus instance
| .error msg => -- inconsistent context detected
- Access context from signatures:
let sig := aggregateSignature S c signers commits shares
let consensusId := sig.context.consensusId
Evidence Carrier
The EvidenceCarrier typeclass enables piggybacking CRDT evidence deltas on signing messages. This is useful for protocols like Aura that propagate evidence alongside consensus messages.
class EvidenceCarrier (M : Type*) where
attachEvidence : M → ByteArray → M
extractEvidence : M → Option ByteArray
hasEvidence : M → Bool
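As an illustration, a hypothetical message type might implement the class like this (DemoMsg is invented for the sketch; the real instances are the signing message types):

```lean
structure DemoMsg where
  payload : ByteArray
  evidence : Option ByteArray := none

instance : EvidenceCarrier DemoMsg where
  attachEvidence m e := { m with evidence := some e }
  extractEvidence m := m.evidence
  hasEvidence m := m.evidence.isSome
```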
Instances
All signing message types implement EvidenceCarrier:
- SignCommitMsg
- SignRevealWMsg
- SignShareMsg
- Signature
- ValidatableShare
Usage
-- Attach evidence before sending
let msgWithEvid := EvidenceCarrier.attachEvidence msg myEvidenceDelta
-- Extract evidence on receive
match EvidenceCarrier.extractEvidence msg with
| some delta => mergeEvidence delta
| none => ()
Fast Path Signing
For low-latency consensus, Ice Nine supports precomputed nonces that enable single-round signing.
Precomputed Nonces
Parties can pre-generate nonces during idle time:
-- Generate during idle time
let precomputed := PrecomputedNonce.create S hiding binding opening currentTime
-- Pre-share the commitment
let preshared := PreSharedCommitment.fromPrecomputed S partyId precomputed
broadcast preshared
-- When signing request arrives, produce share immediately
match signFast S keyShare precomputed challenge bindingFactor session context with
| some share => send share
| none => -- norm check failed, need fresh nonce
Nonce Pool Management
For concurrent signing sessions, use NoncePool:
-- Initialize pool
let pool := NoncePool.empty S
-- Add precomputed nonces during idle
let pool := pool.add precomputed1
let pool := pool.add precomputed2
-- Take nonce for signing (automatically prunes expired)
match pool.take currentTime with
| some (nonce, pool') => use nonce, update pool to pool'
| none => generate fresh nonce
Security Assumptions
Fast path signing trusts that:
- Nonce commitments were honestly generated
- Nonces are not being reused
- Precomputed nonces have not expired
- Coordinator will aggregate honestly
Use standard two-round signing when these assumptions don't hold.
Self-Validating Shares
For fallback gossip protocols, ValidatableShare bundles a share with enough context for independent validation:
structure ValidatableShare (S : Scheme) where
share : SignShareMsg S
nonceCommit : S.Commitment
publicHiding : S.Public
publicBinding : S.Public
pkShare : S.Public
Creating ValidatableShares
let vs := ValidatableShare.create S shareMsg commitMsg partyPublicKey
Use Case: Aura Fallback
In Aura's fallback protocol, witnesses gossip shares to achieve threshold:
-- Create self-validating share for gossip
let validatable := ValidatableShare.create S myShare myCommit myPkShare
-- Recipients can validate without full session state
if validatable.isWellFormed then
addToProposals validatable
Aura Integration Example
Here's how Ice Nine maps to Aura's fast path:
Setup Phase
- DKG: Use Ice Nine's DKG to establish shared threshold key material
- Pre-share commitments: Witnesses periodically broadcast PreSharedCommitment values
Fast Path
- Execute: Initiator sends Execute(cid, Op, prestate_hash, evidΔ)
- Witness Share Production:
-- Compute result ID
let rid := hash operation prestate
-- Create context
let ctx : ExternalContext := {
consensusId := some cid
resultId := some rid
prestateHash := some prestateHash
evidenceDelta := some evidDelta }
-- Produce share using precomputed nonce
let share := signFast S keyShare precomputed challenge bindingFactor session ctx
- Aggregation:
-- Validate all shares reference same consensus instance
match aggregateWithContextValidation S challenge signers commits shares with
| .ok signature =>
-- Extract merged context for CommitFact
let thresholdSig := signature
let attesters := signature.Sset
commitFact cid rid thresholdSig attesters
| .error msg =>
-- Inconsistent shares, enter fallback
startFallback msg
Fallback
Use ValidatableShare for gossip:
-- Wrap shares for independent validation
let proposal := ValidatableShare.create S share commit pkShare
-- Gossip to peers
for peer in randomSubset witnesses k do
send (AggregateShare cid proposals evidDelta) peer
-- On receive, validate and accumulate
for vs in received do
if vs.isWellFormed && !hasEquivocated vs.sender then
addProposal vs
if proposalCount rid >= threshold then
produceThresholdSignature rid proposals
Context Validation
When aggregating shares from multiple parties, validate context consistency:
def validateShareContexts (shares : List (SignShareMsg S)) : ContextValidationResult
-- Returns:
-- - .ok : all contexts consistent
-- - .inconsistentConsensusId i j : shares i,j have different cid
-- - .inconsistentResultId i j : shares i,j have different rid
-- - .inconsistentPrestateHash i j : shares i,j have different prestate
This catches Byzantine behavior where a party signs different operations.
Best Practices
- Always set consensusId: Bind shares to specific consensus instances
- Include resultId: Ensures all parties sign the same computed result
- Use precomputed nonces for latency: But regenerate periodically (default: 1 hour)
- Validate context on aggregation: Detect Byzantine behavior early
- Use ValidatableShare for gossip: Enables independent validation in fallback
- Propagate evidence: Use EvidenceCarrier for CRDT convergence
Threshold Configuration
This document describes the ThresholdConfig structure and its parameters. For an explanation of how local rejection sampling works, see Local Rejection Sampling in the signing protocol documentation.
ThresholdConfig Structure
The ThresholdConfig structure bundles all parameters for threshold signing with local rejection.
structure ThresholdConfig where
totalParties : Nat -- n: total parties in the scheme
threshold : Nat -- t: minimum parties for reconstruction
maxSigners : Nat -- T: maximum partials aggregated (usually = t)
globalBound : Nat -- B_global: Dilithium bound (γ₁ - β)
localBound : Nat -- B_local: per-signer bound (B_global / T)
maxLocalAttempts : Nat -- maximum rejection attempts before failure
The localBound is derived automatically as globalBound / maxSigners. This ensures that when T signers each produce a partial with norm at most B_local, the aggregate stays within B_global.
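Why dividing by the signer count suffices can be checked in miniature for three signers; `omega` handles natural-number division by a literal (a standalone sketch, not the library's proof):

```lean
-- If each of three partials is bounded by B / 3, their sum is bounded by B.
example (B z₁ z₂ z₃ : Nat)
    (h₁ : z₁ ≤ B / 3) (h₂ : z₂ ≤ B / 3) (h₃ : z₃ ≤ B / 3) :
    z₁ + z₂ + z₃ ≤ B := by
  omega
```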
Creating Configurations
Use ThresholdConfig.create for custom configurations or the preset constructors for standard security levels.
-- Custom configuration for 5 parties with default threshold
let cfg := ThresholdConfig.create 5 130994
-- Custom threshold (3-of-5 instead of default 4-of-5)
let cfg := ThresholdConfig.create 5 130994 (t := 3) (maxSigners := 3)
-- Standard Dilithium configurations
let cfg2 := ThresholdConfig.dilithium2 5 -- 128-bit security
let cfg3 := ThresholdConfig.dilithium3 5 -- 192-bit security
let cfg5 := ThresholdConfig.dilithium5 5 -- 256-bit security
The create constructor fills validity proofs automatically using omega. If proof obligations fail, the configuration parameters are inconsistent.
Default Threshold
The default threshold uses the 2/3+1 rule for Byzantine fault tolerance.
| Parties (n) | Threshold (t) | Max Faulty (f) |
|---|---|---|
| 3 | 3 | 0 |
| 4 | 3 | 1 |
| 5 | 4 | 1 |
| 6 | 5 | 1 |
| 7 | 5 | 2 |
| 10 | 7 | 3 |
The formula is t = 2n/3 + 1 (integer division). This ensures honest majority for BFT protocols.
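The rule can be checked directly against the table (defaultThreshold is restated locally for the check; `/` is integer division):

```lean
def defaultThreshold (n : Nat) : Nat := 2 * n / 3 + 1

example : defaultThreshold 3 = 3 := rfl
example : defaultThreshold 7 = 5 := rfl
example : defaultThreshold 10 = 7 := rfl
```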
Byzantine Fault Tolerance Context
The default threshold implements the standard BFT assumption. With n = 3f + 1 validators, where at most f are Byzantine, a threshold of t = 2f + 1 ensures an honest quorum.
The relationship works as follows. If there are n = 3f + 1 total validators and at most f are faulty, then at least 2f + 1 are honest. Any threshold t ≥ f + 1 guarantees that any t-sized subset must contain at least one honest validator. Conversely, any f-sized coalition of malicious validators controls at most f validators, so they cannot prevent an honest quorum of 2f + 1 from forming.
This provides the guarantee needed for threshold signing: no coalition of size f can prevent signature production. With local rejection sampling, malicious signers cannot force global rejection. An aggregator can always collect valid partials from honest signers. The signing protocol never stalls due to rejection; it either succeeds or fails only through the usual BFT modes (too many offline/malicious, faulty leader).
In practice, deployments often use small values of n:
- 3-of-3 (n=3, f=0): All parties must participate, zero fault tolerance
- 3-of-4 (n=4, f=1): Tolerates one faulty party
- 4-of-7 (n=7, f=2): Tolerates two faulty parties
For most consensus systems, the BFT assumption (at least 2f + 1 honest out of 3f + 1) is compatible with the consensus protocol's own fault model. The threshold signing subprotocol inherits the same assumptions, making it compatible with the consensus layer.
Dilithium Security Levels
Ice Nine supports the three NIST-standardized Dilithium parameter sets.
| Level | γ₁ | β | B_global (γ₁ - β) | Security |
|---|---|---|---|---|
| Dilithium2 | 2¹⁷ (131072) | 78 | 130994 | 128-bit |
| Dilithium3 | 2¹⁹ (524288) | 196 | 524092 | 192-bit |
| Dilithium5 | 2¹⁹ (524288) | 120 | 524168 | 256-bit |
The global bound is γ₁ - β where γ₁ is the nonce coefficient range and β = τ · η (challenge weight times secret coefficient bound).
Local Bound Calculation
The local bound is computed as B_local = B_global / T where T is maxSigners. This creates headroom for aggregation.
| Signers (T) | B_local (Dilithium2) | B_local (Dilithium3) |
|---|---|---|
| 3 | 43664 | 174697 |
| 4 | 32748 | 131023 |
| 5 | 26198 | 104818 |
| 7 | 18713 | 74870 |
| 10 | 13099 | 52409 |
Smaller local bounds mean tighter rejection sampling. More signers require each signer to produce smaller partials.
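These table entries are reproducible with integer division (a standalone check; localBound is restated here, not the library definition):

```lean
def localBound (globalBound maxSigners : Nat) : Nat :=
  globalBound / maxSigners

example : localBound 130994 4 = 32748 := rfl   -- Dilithium2, T = 4
example : localBound 524092 5 = 104818 := rfl  -- Dilithium3, T = 5
```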
Rejection Rate Estimation
The expectedLocalAttempts accessor estimates how many sampling attempts each signer needs on average.
def ThresholdConfig.expectedLocalAttempts (cfg : ThresholdConfig) : Nat :=
if cfg.globalBound > 2 * cfg.localBound then
cfg.globalBound / (cfg.globalBound - 2 * cfg.localBound)
else
cfg.maxLocalAttempts
The formula approximates 1 / (1 - 2·B_local/B_global). Tighter local bounds increase rejection rates.
| Signers (T) | Ratio (B_local/B_global) | Expected Attempts |
|---|---|---|
| 3 | 0.33 | ~3 |
| 4 | 0.25 | ~2 |
| 5 | 0.20 | ~1.7 |
| 7 | 0.14 | ~1.4 |
| 10 | 0.10 | ~1.25 |
For typical configurations, signers need 1-4 attempts on average. The maxLocalAttempts parameter (default 256) provides a safety bound for pathological cases.
Parameter Tuning Guidelines
When tuning parameters, consider these tradeoffs.
Threshold vs Rejection Rate
Lower thresholds mean fewer signers aggregate their partials. Fewer signers means larger local bounds and lower rejection rates. However, lower thresholds reduce fault tolerance.
Security Level vs Performance
Higher security levels (Dilithium3, Dilithium5) have larger global bounds. Larger bounds mean more headroom for local rejection, so rejection rates decrease. The tradeoff is larger signatures and keys.
maxLocalAttempts Setting
The default of 256 attempts is conservative. In practice, signers rarely need more than 10 attempts. Lower values fail faster on misconfigured systems. Higher values handle edge cases but delay failure detection.
For latency-sensitive applications, consider setting maxLocalAttempts to 32 or 64. For maximum reliability, keep the default.
Configuration Validation
The validate method checks configuration consistency.
def ThresholdConfig.validate (cfg : ThresholdConfig) : ConfigValidation
-- Returns .ok for valid configurations, or a specific error:
-- .noParties, .thresholdZero, .thresholdExceedsParties,
-- .maxSignersBelowThreshold, .globalBoundZero, .localBoundZero,
-- .localBoundTooLarge
Always validate configurations from untrusted sources before use. The structure proofs guarantee consistency for configurations created through create, but deserialized configurations should be validated.
Abort Threshold
The abort threshold determines how many parties must agree to abort a session.
def ThresholdConfig.abortThreshold (cfg : ThresholdConfig) : Nat :=
cfg.maxFaulty + 1
This equals f + 1 where f = n - t is the maximum faulty parties. The +1 ensures at least one honest party agreed to abort. This prevents a coalition of f malicious parties from forcing spurious aborts.
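Worked instances of the rule (abortThreshold is restated over n and t for the sketch):

```lean
def abortThreshold (n t : Nat) : Nat := (n - t) + 1

example : abortThreshold 4 3 = 2 := rfl   -- f = 1: two votes, so at least one is honest
example : abortThreshold 10 7 = 4 := rfl  -- f = 3: four votes needed
```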
Threshold and Security Margins
Increasing the number of signers T reduces each signer's local bound proportionally. This affects security margins in two ways.
First, smaller local bounds increase the precision required for norm checking. Production implementations must use exact arithmetic or verified bounds.
Second, the aggregate bound T · B_local approaches B_global as T increases. At the limit, there is no margin for rounding errors. Keep T well below the theoretical maximum.
For most deployments, use the default threshold (2/3+1) with the matching maxSigners value. This provides good fault tolerance with comfortable rejection rates.
Related Work and References
Ice Nine adapts the FROST threshold signature pattern to the lattice setting. The protocol introduces local rejection sampling to eliminate distributed coordination for norm bounds. This section places Ice Nine in context with related work and identifies its novel contributions.
Relationship to FROST
Ice Nine follows the FROST protocol structure closely. FROST is a two-round Schnorr threshold signature scheme that uses dual nonces and binding factors to prevent adaptive attacks. Ice Nine preserves these mechanisms while adapting them to Dilithium's algebraic structure.
The direct adaptations include the commit-reveal round structure, dual nonces for hiding and binding, per-signer binding factors, domain-separated hash functions, and proof of knowledge during DKG. The core insight transfers directly because Dilithium's response equation z = y + c·s has the same linear form as Schnorr.
Novel Contributions
Ice Nine introduces several innovations beyond the FROST-to-lattice adaptation.
Local rejection sampling with triangle inequality guarantee is the primary innovation. Prior threshold Dilithium work faces exponential retry complexity as the number of participants increases. Ice Nine solves this by having each signer enforce a local bound. By setting the local bound to the global bound divided by the signer count, the triangle inequality guarantees the aggregate satisfies the global bound. No distributed retries are needed.
Ice Nine has a BFT-friendly design. The protocol validates each partial signature independently during aggregation. Invalid shares are discarded without affecting valid ones. With honest majority, sufficient valid partials always exist. Malicious parties cannot force distributed restarts.
CRDT-based protocol state enables conflict-free merging of divergent execution traces. Protocol state forms join-semilattices. This handles out-of-order messages and network partitions without additional coordination.
Compile-time nonce safety uses session types to make nonce reuse a compile-time error. Private constructors and consumption proofs enforce linear usage at the type level.
Formal verification in Lean 4 provides machine-checked proofs of correctness and security properties. Proofs include aggregation correctness, security reductions to SIS and MLWE, and CRDT safety via monotonicity of state transitions.
Comparison with Related Work
| Scheme | Parties | Rejection Handling | BFT Design |
|---|---|---|---|
| DiLizium | 2 | Global retry | No |
| TOPCOAT | 2 | Cross-instance | No |
| PQShield compact | T ≤ 8 | Parallel instances | No |
| Ice Nine | Arbitrary t-of-n | Local bounds | Yes |
DiLizium and TOPCOAT are limited to two parties. The PQShield compact scheme supports small thresholds but runs T parallel Dilithium executions. Ice Nine supports arbitrary threshold configurations with local rejection sampling.
Note: this comparison is based on a cursory literature search; relevant work may be missing from this review.
References
FROST
- Komlo, Goldberg. FROST: Flexible Round-Optimized Schnorr Threshold Signatures. SAC 2020. ePrint 2020/852
Threshold Dilithium
- Vakarjuk et al. DiLizium: A Two-Party Lattice-Based Signature Scheme. Entropy 2021. MDPI
- Snetkov, Vakarjuk, Laud. TOPCOAT: Towards Practical Two-Party CRYSTALS-Dilithium. 2024. Springer
- Niot, del Pino. Finally! A Compact Lattice-Based Threshold Signature. PKC 2025. PQShield
Lattice Cryptography
- Lyubashevsky. Fiat-Shamir with Aborts. ASIACRYPT 2009.
- Ducas et al. CRYSTALS-Dilithium. NIST PQC Round 3.
- NIST FIPS 204. ML-DSA (Dilithium) Standard. 2024. NIST
Byzantine Fault Tolerance
- Cachin et al. Byzantine Fault-Tolerant Aggregate Signatures. ASIACCS 2024. ACM