Two-Round Threshold Signing
The signing protocol produces a threshold signature from partial contributions. It runs in two rounds with phase-indexed CRDT state. The first round commits to ephemeral nonces. The second round produces partial signatures. An aggregator combines the partials into a final signature.
Module Structure
The signing implementation is split across focused modules for maintainability:
- Protocol/Sign/Types.lean - Core types: SessionTracker, NonceRegistry, message types (SignCommitMsg, SignRevealWMsg, SignShareMsg), error types (SignError), and output types (Signature, SignatureDone)
- Protocol/Sign/LocalRejection.lean - Local rejection sampling for norm bounds (eliminates global retry coordination)
- Protocol/Core/Abort.lean - Session-level abort coordination for liveness failures and security violations
- Protocol/Sign/Core.lean - Core functions: lagrangeCoeffAtZero, lagrangeCoeffs, computeChallenge, aggregateSignature, aggregateSignatureLagrange, ValidSignTranscript, validateSigning
- Protocol/Sign/Threshold.lean - Threshold support: CoeffStrategy, strategyOK, SignMode, ThresholdCtx, context constructors (mkAllCtx, mkThresholdCtx, mkThresholdCtxComputed), and context-based aggregation
- Protocol/Sign/Sign.lean - Re-exports from all submodules for backward compatibility
- Protocol/Sign/Session.lean - Session-typed API that makes nonce reuse a compile-time error
Phase-Indexed State
The signing protocol uses phase-indexed state that forms a join-semilattice at each phase.
inductive Phase
| commit
| reveal
| shares
| done
Each phase carries accumulated data that can be merged with the join operation (⊔).
Inputs
Each signing session requires the following inputs.
- Message. The message m to be signed.
- Signer set. The active signing subset S.
- Secret shares. Each party i holds its secret share s_i.
- Public key. All parties know the global public key pk.
- Public shares. All parties know the public shares A(s_i) from key generation.
Message Types
The protocol uses three message types across its phases.
structure SignCommitMsg (S : Scheme) where
sender : S.PartyId
session : Nat
commitW : S.Commitment -- hash commitment to hiding nonce
hidingVal : S.Public -- public hiding commitment W_hiding = A(hiding nonce)
bindingVal : S.Public -- public binding commitment W_binding = A(binding nonce)
context : ExternalContext := {} -- for external protocol binding
structure SignRevealWMsg (S : Scheme) where
sender : S.PartyId
session : Nat
opening : S.Opening -- opening for commitment verification
context : ExternalContext := {}
structure SignShareMsg (S : Scheme) where
sender : S.PartyId
session : Nat
z_i : S.Secret
context : ExternalContext := {} -- for external protocol binding
All message types include an optional context field for binding to external protocols. See Protocol Integration for details.
Dual Nonce Structure (FROST Pattern)
Following FROST, we use two nonces per signer instead of one:
- Hiding nonce: Protects against adaptive adversaries
- Binding nonce: Commits to the signing context (message + all commitments)
This provides stronger security against adaptive chosen-message attacks.
structure SigningNonces (S : Scheme) where
hidingVal : S.Secret -- protects signer from adaptive adversaries
bindingVal : S.Secret -- commits to signing context
structure SigningCommitments (S : Scheme) where
hidingVal : S.Public -- W_hiding = A(hiding nonce)
bindingVal : S.Public -- W_binding = A(binding nonce)
Domain-Separated Hash Functions (FROST H4, H5)
Following FROST, the signing protocol uses domain-separated hash functions:
- H4 (message pre-hashing): hashMessage pre-hashes large messages before signing, using domain prefix ice-nine-v1-msg. This enables efficient signing of arbitrarily large messages.
- H5 (commitment list hashing): hashCommitmentList produces canonical encodings of commitment lists, using domain prefix ice-nine-v1-com. This ensures all parties compute identical binding factors.
def hashMessage (S : Scheme) (msg : S.Message) : ByteArray :=
HashDomain.message ++ serializeMessage msg
def hashCommitmentList (S : Scheme) (commits : List (SignCommitMsg S)) : ByteArray :=
HashDomain.commitmentList ++ encodeCommitmentList S commits
Binding Factor Derivation
The binding factor ρ_i binds each signer's nonce to the full signing context:

ρ_i = H_ρ(i, hashMessage(msg), hashCommitmentList(commits))

where H_ρ uses domain prefix ice-nine-v1-rho.

The effective nonce is then:

W_i = W_hiding_i + ρ_i·W_binding_i

This prevents an adversary from adaptively choosing the message after seeing nonce commitments.
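The effective-nonce construction relies only on the linearity of the public map A. The toy check below (illustrative, not project code) uses a fixed 2×2 integer matrix as A and assumed toy values for the nonces and binding factor:

```lean
-- Because A is linear, the public effective nonce W_hiding + ρ·W_binding
-- equals A applied to the secret effective nonce r_hiding + ρ·r_binding.
def A (v : Int × Int) : Int × Int := (2*v.1 + v.2, 3*v.1 + 5*v.2)

def rh : Int × Int := (1, 2)   -- secret hiding nonce (toy value)
def rb : Int × Int := (4, 1)   -- secret binding nonce (toy value)
def ρ : Int := 3               -- binding factor (toy value)

#eval A (rh.1 + ρ*rb.1, rh.2 + ρ*rb.2)
        == ((A rh).1 + ρ*(A rb).1, (A rh).2 + ρ*(A rb).2)  -- true
```

This is why each signer can send only its two public nonces in round one: anyone can later combine them into the effective public nonce once ρ_i is known.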
Party State
Each party maintains local state during signing.
structure SignLocalState (S : Scheme) where
share : KeyShare S -- party's long-term credential from DKG
msg : S.Message -- message being signed
session : Nat -- unique session identifier
nonces : SigningNonces S -- dual ephemeral nonces (NEVER reuse!)
commits : SigningCommitments S -- public nonce commitments
openW : S.Opening -- commitment randomness for hiding commitment
The local state contains:
- The ephemeral nonces (r_hiding, r_binding) sampled for this session.
- The public commitments W_hiding = A(r_hiding) and W_binding = A(r_binding).
- The commitment opening o_i.
- The session identifier for binding.
Round 1: Nonce Commitment
In round one each party commits to an ephemeral nonce. This commitment prevents adaptive attacks where an adversary chooses its nonce after seeing others.
Procedure (session-typed API)
The implementation no longer exposes raw signRound1/2. Use the linear, session-typed API in Protocol/Sign/Session.lean:
open IceNine.Protocol.SignSession
-- caller supplies fresh randomness
let ready := initSession S keyShare msg session hidingNonce bindingNonce opening
match commit S ready with
| .error err => -- session reuse or commitment reuse detected locally
| .ok { committed, msg := commitMsg, tracker, nonceReg } =>
-- after collecting commits and computing challenge/binding factors:
let (revealed, revealMsg, tracker', nonceReg') :=
reveal S committed challenge bindingFactor aggregateW tracker nonceReg
match sign S revealed tracker' nonceReg' with
| .inl (signed, tracker'', nonceReg'') => finalize S signed
| .inr _ => -- retry with NEW FreshNonce via retryWithFreshNonce
Round‑1 message contents are identical: each SignCommitMsg carries the hash commitment plus both public nonce values.
Each party still samples fresh nonces (r_hiding, r_binding), computes W_hiding = A(r_hiding) and W_binding = A(r_binding), samples an opening o_i, and broadcasts the commitment to W_hiding together with both public nonces. The implementation also:
- marks the session as used in a per-party SessionTracker
- records commitW in a per-party NonceRegistry and rejects if the same commitment was used in a different session for that party.
Commit Phase State
The commit phase accumulates commit messages in a semilattice structure.
structure CommitState (S : Scheme) :=
(commits : List (SignCommitMsg S))
instance (S : Scheme) : Join (CommitState S) :=
⟨fun a b => { commits := a.commits ⊔ b.commits }⟩
Waiting for Commits
Each party waits until it receives commitments from all other parties in S. If a commitment does not arrive within a timeout the party may abort or exclude the missing party.
Nonce Reveal and Challenge Computation
After all commitments are received parties reveal their nonces and compute the shared challenge.
Reveal Phase
Each party broadcasts only the opening for its hiding commitment.
The public nonces (hidingVal, bindingVal) were already sent in Round 1 inside SignCommitMsg.
Trackers (SessionTracker, NonceRegistry) are threaded through. No additional updates occur in this phase.
Verification
Each party verifies all received openings. For each party j it checks that

S.commit hidingVal_j opening_j = commitW_j

where commitW_j is the hash commitment from j's round-one message.
If any opening is invalid the party aborts the session. It may also file a complaint identifying the misbehaving party.
Nonce Aggregation
After successful verification compute the aggregated nonce:

w = Σ_{i ∈ S} (W_hiding_i + ρ_i·W_binding_i)
Challenge Derivation
Compute the challenge using the hash function:

c = H(pk, m, S, commits, w)
The challenge depends on the message, public key, signer set, all commitments, and the aggregated nonce. Including the commitments in the hash input binds the challenge to the commit phase.
Round 2: Partial Signatures
In round two each party produces its partial signature.
n-of-n Case
In the n-of-n setting all parties must participate. The session transition sign in SignSession computes each partial signature as

z_i = (r_hiding_i + ρ_i·r_binding_i) + c·s_i

This is the standard Schnorr response: the effective ephemeral secret masks the product of the challenge and the secret share. The function returns none if the norm check fails.
t-of-n Case
In the t-of-n setting a subset S of parties signs. Each party must adjust its contribution using Lagrange coefficients.
structure LagrangeCoeff (S : Scheme) :=
(pid : S.PartyId)
(lambda : S.Scalar)
Let λ_i denote the Lagrange coefficient for party i over the signing set S. Party i computes

z_i = (r_hiding_i + ρ_i·r_binding_i) + c·λ_i·s_i

The Lagrange coefficients ensure that the sum of adjusted shares equals the master secret.
Lagrange Coefficient Computation
def lagrangeCoeffAtZero
(S : Scheme) [Field S.Scalar] [DecidableEq S.PartyId]
(pidToScalar : S.PartyId → S.Scalar)
(Sset : List S.PartyId) (i : S.PartyId) : S.Scalar
The Lagrange coefficient at zero for party i over set S is

λ_i = ∏_{j ∈ S, j ≠ i} x_j / (x_j − x_i)

where x_i = pidToScalar i. This computation uses party identifiers as evaluation points. All arithmetic is in the scalar field S.Scalar. The function returns 0 if duplicate identifiers would cause division by zero.
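A small sketch of the same computation (illustrative, not the project's lagrangeCoeffAtZero; assumes Mathlib's ZMod for a toy prime field) with evaluation points 1, 2, 3 as party identifiers:

```lean
import Mathlib.Data.ZMod.Basic

-- Lagrange coefficients at zero over the toy field ZMod 7:
-- λ_i = ∏_{j ≠ i} x_j / (x_j − x_i)
def lag (pts : List (ZMod 7)) (x : ZMod 7) : ZMod 7 :=
  pts.foldl (fun acc y => if y = x then acc else acc * (y / (y - x))) 1

#eval (lag [1, 2, 3] 1, lag [1, 2, 3] 2, lag [1, 2, 3] 3)  -- (3, 4, 1)
```

The three coefficients sum to 1 (mod 7), as they must: interpolating the constant polynomial f ≡ 1 at zero gives Σ λ_i = 1.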
Norm Check
Before broadcasting its partial signature each party checks the norm bound

‖z_i‖∞ ≤ B_local

If the norm exceeds the bound the party discards this candidate and retries with a fresh nonce. This check prevents leakage through the response distribution.
Broadcasting Partials
Each party broadcasts its partial signature to all parties in S.
Aggregation
An aggregator collects all partial signatures and produces the final signature.
Shares Phase State
The shares phase accumulates partial signatures along with threshold context. Messages are stored in MsgMap structures keyed by sender ID, making conflicting messages from the same sender unrepresentable.
structure ShareState (S : Scheme) [BEq S.PartyId] [Hashable S.PartyId] :=
(commits : CommitMap S) -- at most one commit per party
(reveals : RevealMap S) -- at most one reveal per party
(shares : ShareMap S) -- at most one share per party
(active : Finset S.PartyId)
structure ShareWithCtx (S : Scheme) :=
(state : ShareState S)
(ctx : ThresholdCtx S)
The MsgMap structure provides conflict detection: tryAddShare returns a conflict indicator if a party attempts to submit multiple shares, enabling detection of Byzantine behavior.
The threshold context ensures aggregation only proceeds when sufficient parties contribute.
inductive SignMode | all | threshold
inductive CoeffStrategy (S : Scheme)
| ones -- n-of-n: z = Σ z_i
| lagrange (coeffs : List (LagrangeCoeff S)) -- t-of-n: z = Σ λ_i·z_i
structure ThresholdCtx (S : Scheme) :=
(active : Finset S.PartyId) -- participating signers
(t : Nat) -- threshold (t = |active| for n-of-n)
(mode : SignMode) -- n-of-n vs t-of-n
(strategy : CoeffStrategy S) -- aggregation method
(card_ge : t ≤ active.card) -- proof: have enough signers
(strategy_ok : strategyOK S active.toList strategy) -- coeffs align
The CoeffStrategy type selects the aggregation path:
- ones: simple sum z = Σ z_i (n-of-n, no field operations)
- lagrange coeffs: weighted sum z = Σ λ_i·z_i (t-of-n, requires a field)
Context Constructors
Smart constructors build ThresholdCtx with proofs:
-- n-of-n: threshold = |active|, strategy = ones
def mkAllCtx (S : Scheme) [DecidableEq S.PartyId]
(active : Finset S.PartyId) : ThresholdCtx S
-- t-of-n with pre-supplied coefficients
def mkThresholdCtx
(S : Scheme) [DecidableEq S.PartyId]
(active : Finset S.PartyId)
(t : Nat)
(coeffs : List (LagrangeCoeff S))
(hcard : t ≤ active.card)
(halign : coeffs.map (·.pid) = active.toList)
(hlen : coeffs.length = active.toList.length)
: ThresholdCtx S
-- t-of-n with computed Lagrange coefficients (requires Field)
def mkThresholdCtxComputed
(S : Scheme) [Field S.Scalar] [DecidableEq S.PartyId]
(active : Finset S.PartyId)
(t : Nat)
(pidToScalar : S.PartyId → S.Scalar)
(hcard : t ≤ active.card) : ThresholdCtx S
Collecting Partials
The aggregator receives z_i from each i ∈ S. If any partial is missing the aggregator may wait or abort.
Signature Structure
structure Signature (S : Scheme) :=
(z : S.Secret)
(c : S.Challenge)
(Sset : List S.PartyId)
(commits : List S.Commitment)
(context : ExternalContext := {}) -- merged from contributing shares
The signature's context is merged from all contributing shares, enabling external protocols to verify consensus binding.
Signature Computation
For n-of-n aggregation:
def aggregateSignature
(S : Scheme)
(c : S.Challenge)
(Sset : List S.PartyId)
(commits : List S.Commitment)
(shares : List (SignShareMsg S))
: Signature S
Compute the aggregated response:

z = Σ_{i ∈ S} z_i
For t-of-n aggregation with Lagrange weighting:
def aggregateSignatureLagrange
(S : Scheme)
(c : S.Challenge)
(Sset : List S.PartyId)
(commits : List S.Commitment)
(coeffs : List (LagrangeCoeff S))
(shares : List (SignShareMsg S))
: Signature S
Strategy-Based Aggregation
The aggregateWithStrategy function selects the aggregation path based on the coefficient strategy:
def aggregateWithStrategy
(S : Scheme) [DecidableEq S.PartyId]
(c : S.Challenge)
(active : List S.PartyId)
(commits : List S.Commitment)
(shares : List (SignShareMsg S))
(strategy : CoeffStrategy S)
: Option (Signature S)
This function validates that all shares come from the declared active set, then dispatches to aggregateSignature (for .ones) or aggregateSignatureLagrange (for .lagrange coeffs).
Context-Based Aggregation
For full validation using the threshold context:
def aggregateSignatureWithCtx
(S : Scheme) [DecidableEq S.PartyId]
(c : S.Challenge)
(ctx : ThresholdCtx S)
(commits : List S.Commitment)
(shares : List (SignShareMsg S))
: Option (Signature S)
This function checks sharesFromActive (membership validation) before delegating to aggregateWithStrategy using the context's strategy.
Final Signature
The signature consists of the following components:

σ = (z, c, S, commits)

The signature includes the aggregated response, the challenge, the signer set, and all nonce commitments.
Correctness
The aggregated response satisfies the verification equation. Consider the n-of-n case. Writing r_i = r_hiding_i + ρ_i·r_binding_i for the effective nonce,

z = Σ_i z_i = Σ_i (r_i + c·s_i) = r + c·s

where r = Σ_i r_i and s = Σ_i s_i. Applying the linear map A gives

A(z) = A(r) + c·A(s) = w + c·pk

Rearranging yields

w = A(z) − c·pk

The verifier recomputes w from this equation and checks the challenge.
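The rearrangement can be checked numerically. The sketch below (toy values and a toy linear map, not project code) confirms that A(z) − c·pk recovers w = A(r):

```lean
-- Verification-equation check with a fixed 2×2 integer matrix as A.
def A (v : Int × Int) : Int × Int := (2*v.1 + v.2, 3*v.1 + 5*v.2)

def s  : Int × Int := (1, 1)       -- aggregate secret (toy value)
def r  : Int × Int := (2, 0)       -- aggregate effective nonce (toy value)
def c  : Int := 3                  -- challenge (toy value)
def pk : Int × Int := A s
def w  : Int × Int := A r
def z  : Int × Int := (r.1 + c*s.1, r.2 + c*s.2)  -- z = r + c·s

#eval ((A z).1 - c*pk.1, (A z).2 - c*pk.2) == w   -- true
```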
t-of-n Correctness
In the t-of-n case the Lagrange coefficients ensure

Σ_{i ∈ S} λ_i·s_i = s

so the weighted partials again sum to r + c·s. The rest of the argument follows as above.
Session Binding
The challenge binds to the specific session through several mechanisms.
- The message is included in the hash input.
- The public key is included.
- The signer set is included.
- All nonce commitments are included.
- The aggregated nonce is included.
This binding prevents cross-session attacks where an adversary reuses partial signatures from different sessions.
Transcript Validation
The validateSigning function checks the entire signing transcript before aggregation.
Its current signature also requires precomputed binding factors (derived from the commit list):
def validateSigning
(S : Scheme) [DecidableEq S.PartyId]
(pk : S.Public)
(m : S.Message)
(Sset : List S.PartyId)
(commits : List (SignCommitMsg S))
(reveals : List (SignRevealWMsg S))
(bindingFactors : BindingFactors S)
(shares : List (SignShareMsg S))
: Except (SignError S.PartyId) (Signature S)
Callers must compute bindingFactors (e.g., via computeBindingFactors) and supply them. Otherwise validation will not type-check.
Error Types
inductive SignError (PartyId : Type) where
| lengthMismatch : SignError PartyId
| participantMismatch : PartyId → SignError PartyId
| duplicateParticipants : PartyId → SignError PartyId
| commitMismatch : PartyId → SignError PartyId
| sessionMismatch : Nat → Nat → SignError PartyId
Note: Legacy error variants (normCheckFailed, maxRetriesExceeded, sessionAborted) have been removed. With local rejection sampling, norm bounds are handled locally. Session-level aborts use AbortReason from Protocol/Core/Abort.lean.
Valid Transcript Predicate
structure ValidSignTranscript (S : Scheme)
(Sset : List S.PartyId)
(commits : List (SignCommitMsg S))
(reveals : List (SignRevealWMsg S))
(shares : List (SignShareMsg S)) : Prop where
len_comm_reveal : commits.length = reveals.length
len_reveal_share : reveals.length = shares.length
pids_nodup : (commits.map (·.sender)).Nodup
pids_eq : commits.map (·.sender) = Sset
commit_open_ok : List.Forall₂ (fun c r => c.sender = r.sender ∧ S.commit c.hidingVal r.opening = c.commitW) commits reveals
sessions_ok : let sess := (commits.head?.map (·.session)).getD 0; ∀ sh ∈ shares, sh.session = sess
Monotonic Step Functions
State transitions are monotone with respect to the semilattice order.
def stepCommit (S : Scheme) (msg : SignCommitMsg S) (st : CommitState S) : CommitState S :=
{ commits := st.commits ⊔ [msg] }
lemma stepCommit_monotone (S : Scheme) (msg : SignCommitMsg S) :
∀ a b, a ≤ b → stepCommit S msg a ≤ stepCommit S msg b
Similar monotonicity holds for stepReveal and stepShare.
Abort Conditions
The signing protocol may abort under several conditions.
Missing commitment. A party in does not broadcast its commitment.
Invalid opening. A received opening does not match its commitment.
Missing partial. A party does not broadcast its partial signature.
Norm violation. A partial signature fails the norm check.
Inconsistent challenge. Parties compute different challenges due to inconsistent views.
Session mismatch. Messages have inconsistent session identifiers.
When aborting parties should discard all session state. They should not reuse the ephemeral nonce in any future session.
Session Tracking and Nonce Safety
CRITICAL: Nonce reuse completely breaks Schnorr-style signatures.
If the same nonce r is used with two different challenges c ≠ c′:

z = r + c·s and z′ = r + c′·s

The secret key can be recovered:

s = (z − z′)/(c − c′)
This attack is formalized in Proofs/Soundness/Soundness.lean as the nonce_reuse_key_recovery theorem. See Verification: Special Soundness for the formal proof.
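The recovery is a one-line computation. The toy demonstration below (assumed values, with Mathlib's ZMod 101 standing in for the scalar field) shows anyone observing both responses can solve for the secret:

```lean
import Mathlib.Data.ZMod.Basic

def s  : ZMod 101 := 42    -- secret share
def r  : ZMod 101 := 17    -- reused nonce (the bug)
def c₁ : ZMod 101 := 5     -- challenge in session 1
def c₂ : ZMod 101 := 9     -- challenge in session 2
def z₁ : ZMod 101 := r + c₁ * s
def z₂ : ZMod 101 := r + c₂ * s

-- (z₁ − z₂) = (c₁ − c₂)·s, so division recovers the secret.
#eval (z₁ - z₂) / (c₁ - c₂)  -- 42
```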
Session Tracker
Each party tracks used session IDs:
structure SessionTracker (S : Scheme) where
usedSessions : Finset Nat -- sessions that have been used
partyId : S.PartyId
def SessionTracker.isFresh (tracker : SessionTracker S) (session : Nat) : Bool :=
session ∉ tracker.usedSessions
def SessionTracker.markUsed (tracker : SessionTracker S) (session : Nat) : SessionTracker S :=
{ tracker with usedSessions := tracker.usedSessions.insert session }
Session Validation
inductive SessionCheckResult
| ok -- session is fresh
| alreadyUsed (session : Nat) -- DANGER: would reuse nonce
| invalidId
def checkSession (tracker : SessionTracker S) (session : Nat) : SessionCheckResult :=
if tracker.isFresh session then .ok
else .alreadyUsed session
Nonce Registry
For network-wide detection, the NonceRegistry uses HashMap-based indices for O(1) lookup:
structure NonceRegistry (S : Scheme)
[BEq S.PartyId] [Hashable S.PartyId]
[BEq S.Commitment] [Hashable S.Commitment] where
bySession : Std.HashMap (S.PartyId × Nat) S.Commitment -- primary index
byCommitment : Std.HashMap (S.PartyId × S.Commitment) (List Nat) -- reverse index
def NonceRegistry.hasCommitment (reg : NonceRegistry S)
(pid : S.PartyId) (session : Nat) (commit : S.Commitment) : Bool :=
match reg.bySession.get? (pid, session) with
| some c => c == commit
| none => false
def NonceRegistry.detectReuse (reg : NonceRegistry S)
(pid : S.PartyId) (commit : S.Commitment) : Option (Nat × Nat) :=
match reg.byCommitment.get? (pid, commit) with
| some (s1 :: s2 :: _) => if s1 ≠ s2 then some (s1, s2) else none
| _ => none
The dual index structure provides:
- O(1) lookup by (partyId, session) for commitment retrieval
- O(1) reuse detection by (partyId, commitment) for security monitoring
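A minimal sketch of the reverse index (party ids and commitments modeled as Nat; the function names record and detectReuse here are illustrative, not the project's NonceRegistry API):

```lean
import Std.Data.HashMap
open Std

-- Record each (party, commitment) sighting under its session id.
def record (m : HashMap (Nat × Nat) (List Nat))
    (pid commit session : Nat) : HashMap (Nat × Nat) (List Nat) :=
  m.insert (pid, commit) (session :: m.getD (pid, commit) [])

-- Flag a commitment seen in two different sessions for the same party.
def detectReuse (m : HashMap (Nat × Nat) (List Nat))
    (pid commit : Nat) : Option (Nat × Nat) :=
  match m.getD (pid, commit) [] with
  | s1 :: s2 :: _ => if s1 ≠ s2 then some (s1, s2) else none
  | _ => none

#eval detectReuse (record (record {} 7 1234 1) 7 1234 2) 7 1234  -- some (2, 1)
```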
Local Rejection Sampling
In lattice signatures, the response must have bounded norm to prevent leakage. Ice Nine uses local rejection sampling where each signer independently ensures their partial signature satisfies norm bounds before broadcasting.
Key Invariant
If each of T signers produces z_i with ‖z_i‖∞ ≤ B_local, then the aggregate z = Σ z_i satisfies ‖z‖∞ ≤ T·B_local ≤ B_global.
This eliminates global rejection as a distributed control-flow path.
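The invariant can be spot-checked on toy data. The sketch below (illustrative Int vectors and an assumed bound, not project code) verifies that componentwise sums of partials within B_local stay within T·B_local:

```lean
-- ∞-norm of an integer vector.
def infNorm (v : List Int) : Nat := (v.map Int.natAbs).foldl max 0

-- Componentwise vector addition.
def addVec (a b : List Int) : List Int := List.zipWith (· + ·) a b

def partials : List (List Int) := [[3, -2, 1], [-1, 4, 0], [2, 2, -3]]
def Blocal : Nat := 4

#eval partials.all (fun z => decide (infNorm z ≤ Blocal))  -- true
#eval decide (infNorm (partials.foldl addVec [0, 0, 0])
        ≤ partials.length * Blocal)                        -- true
```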
Local Sign Result
inductive LocalSignResult (S : Scheme)
| success (z_i : S.Secret) (hidingNonce bindingNonce : S.Secret) (attempts : Nat)
| failure (err : LocalRejectionError S.PartyId)
Rejection Loop
Each signer runs local rejection sampling independently:
def localRejectionLoop (S : Scheme) (cfg : ThresholdConfig)
(partyId : S.PartyId) (sk_i : S.Secret) (challenge : S.Challenge)
(bindingFactor : S.Scalar) (sampleNonce : IO (S.Secret × S.Secret))
: IO (LocalSignResult S)
The loop:
- Samples a fresh nonce pair (r_hiding, r_binding)
- Computes z_i = (r_hiding + ρ·r_binding) + c·s_i
- Checks whether ‖z_i‖∞ ≤ B_local
- Returns success or retries with new nonces (up to maxLocalAttempts)
Parallelization
Local rejection is trivially parallelizable:
- Across signers: each signer's loop is independent
- Within a signer: batch multiple nonce candidates (localRejectionLoopParallel)
- Within a batch: SIMD/vectorized norm checking
Reference: Lyubashevsky, "Fiat-Shamir with Aborts", ASIACRYPT 2009.
Session Abort Coordination
While local rejection handles norm bounds, session-level aborts are needed for:
- Liveness failures: Parties offline, session times out
- Security violations: Trust assumption violated (more than f faulty parties)
- Explicit cancellation: Group decides to abandon signing
Abort Reasons
inductive AbortReason (PartyId : Type*) where
-- Liveness failures (require f+1 votes)
| timeout (respondents : Nat) (required : Nat)
| insufficientParticipants (actual : Nat) (required : Nat)
| partyUnresponsive (parties : List PartyId)
-- Security violations (immediate abort, no voting)
| trustViolation (faultyCount : Nat) (maxTolerable : Nat)
| globalNormExceeded (aggregateNorm : Nat) (bound : Nat)
| tooManyComplaints (complaints : Nat) (maxTolerable : Nat)
-- Explicit request (requires f+1 votes)
| requestedBy (requester : PartyId)
Abort State (CRDT)
structure AbortState (S : Scheme) where
session : Nat
votes : Finset S.PartyId
reasons : List (AbortReason S.PartyId)
immediateTriggered : Bool := false
instance (S : Scheme) : Join (AbortState S) :=
⟨fun a b => {
session := a.session
votes := a.votes ∪ b.votes
reasons := (a.reasons ++ b.reasons).dedup
immediateTriggered := a.immediateTriggered || b.immediateTriggered
}⟩
Abort Threshold
Abort consensus uses ThresholdConfig.abortThreshold, which is f + 1 where f = maxFaulty (the maximum number of faulty parties). This ensures at least one honest party agreed to abort, preventing malicious minorities from forcing spurious aborts.
def ThresholdConfig.abortThreshold (cfg : ThresholdConfig) : Nat :=
cfg.maxFaulty + 1
def AbortState.shouldAbort (state : AbortState S) (cfg : ThresholdConfig) : Bool :=
state.hasImmediateReason || state.isConfirmedByVotes cfg
Security violations trigger immediate abort without voting.
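In a typical BFT sizing (assumed here, with n = 3f + 1 validators; the helper names are illustrative, not project code), the threshold works out as:

```lean
-- With n = 3f + 1 validators, at most f can be faulty, and any
-- f + 1 abort votes must include at least one honest party.
def maxFaulty (n : Nat) : Nat := (n - 1) / 3
def abortThreshold (n : Nat) : Nat := maxFaulty n + 1

#eval (maxFaulty 10, abortThreshold 10)  -- (3, 4)
```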
Session-Typed Signing
The Protocol/Sign/Session.lean module provides a session-typed API that makes nonce reuse a compile-time error rather than a runtime check. Each signing session progresses through states that consume the previous state, ensuring nonces are used exactly once.
Session State Machine
ReadyToCommit ──► Committed ──► Revealed ──► SignedWithStats ──► Done
│ │
└──────► Aborted ◄──────┘
Each transition consumes its input state. There is no way to "go back" or reuse a consumed state. The ReadyToCommit state contains a FreshNonce that gets consumed during the commit transition. Sessions may transition to Aborted from Committed or Revealed when abort consensus is reached.
Linear Nonce (Dual Nonce Version)
Following FROST, we use two nonces per signer:
structure FreshNonce (S : Scheme) where
private mk ::
private hidingNonce : S.Secret -- secret hiding nonce
private bindingNonce : S.Secret -- secret binding nonce
private publicHiding : S.Public -- W_hiding = A(hiding)
private publicBinding : S.Public -- W_binding = A(binding)
private opening : S.Opening -- commitment opening
def FreshNonce.sample (S : Scheme) (h b : S.Secret) (opening : S.Opening) : FreshNonce S
The private constructor prevents arbitrary creation. The only way to create a FreshNonce is via sample with fresh randomness for both hiding and binding nonces.
Session States
Each state carries the data accumulated up to that point (dual nonce version):
structure ReadyToCommit (S : Scheme) where
keyShare : KeyShare S
nonce : FreshNonce S -- dual nonces, will be consumed on commit
message : S.Message
session : Nat
structure Committed (S : Scheme) where
keyShare : KeyShare S
private hidingNonce : S.Secret -- moved from FreshNonce
private bindingNonce : S.Secret -- moved from FreshNonce
publicHiding : S.Public
publicBinding : S.Public
opening : S.Opening
message : S.Message
session : Nat
commitment : S.Commitment
nonceConsumed : NonceConsumed S -- proof token
structure Revealed (S : Scheme) where
keyShare : KeyShare S
private hidingNonce : S.Secret
private bindingNonce : S.Secret
publicHiding : S.Public
publicBinding : S.Public
message : S.Message
session : Nat
challenge : S.Challenge
bindingFactor : S.Scalar -- ρ for this signer
aggregateNonce : S.Public -- w = Σ (W_hiding_i + ρ_i·W_binding_i)
structure SignedWithStats (S : Scheme) extends Signed S where
localAttempts : Nat -- number of local rejection attempts
hidingNonce : S.Secret -- the nonce used
bindingNonce : S.Secret
structure Done (S : Scheme) where
shareMsg : SignShareMsg S
structure Aborted (S : Scheme) where
session : Nat
reason : AbortReason S.PartyId
voteCount : Nat
Session Transitions
Each transition function consumes its input and produces the next state:
-- Consumes ReadyToCommit, produces Committed
-- The FreshNonce inside is consumed; cannot be accessed again
def commit (S : Scheme) (ready : ReadyToCommit S)
: Except (SignError S.PartyId) (CommitResult S)
-- Consumes Committed, produces Revealed
-- Receives challenge, binding factor, and aggregate nonce from coordinator
def reveal (S : Scheme) (committed : Committed S)
(challenge : S.Challenge) (bindingFactor : S.Scalar) (aggregateW : S.Public)
: Revealed S × SignRevealWMsg S × SessionTracker S × NonceRegistry S
-- Consumes Revealed, produces SignedWithStats using local rejection sampling
-- Never fails norm check because rejection happens internally
def signWithLocalRejection (S : Scheme) (revealed : Revealed S)
(cfg : ThresholdConfig) (sampleNonce : IO (S.Secret × S.Secret))
: IO (Except (LocalRejectionError S.PartyId) (SignedWithStats S))
-- Consumes Signed, produces Done
def finalize (S : Scheme) (signed : Signed S) : Done S
-- Abort transitions (consume current state, produce Aborted)
def abortFromCommitted (S : Scheme) (committed : Committed S)
(reason : AbortReason S.PartyId) (voteCount : Nat) : Aborted S
def abortFromRevealed (S : Scheme) (revealed : Revealed S)
(reason : AbortReason S.PartyId) (voteCount : Nat) : Aborted S
Local Rejection Sampling
Ice Nine implements restart-free threshold Dilithium signing with local rejection sampling. Each signer applies rejection sampling locally before sending their partial signature. The bounds are chosen so that any valid combination of partials automatically satisfies global bounds.
The key invariant is that if each of T signers produces z_i with ‖z_i‖∞ ≤ B_local, then the aggregate z = Σz_i satisfies ‖z‖∞ ≤ T · B_local ≤ B_global. This eliminates global rejection as a distributed control-flow path. Given sufficient honest parties at or above threshold, signing does not trigger a distributed restart due to rejection sampling.
Each signer runs rejection sampling independently until it produces a valid partial, so norm bounds are enforced locally, no global retry coordination is needed, and signWithLocalRejection never returns a norm failure.
Design Rationale
Local rejection sampling is designed for BFT consensus integration. In a typical BFT setting with n = 3f+1 validators and threshold t = 2f+1, at most f validators can be Byzantine. The signing protocol must produce signatures despite malicious behavior, without allowing a small coalition to stall progress.
The naive approach to threshold Dilithium would check the global bound after aggregation. If the aggregate z exceeds the bound, all parties must restart with fresh randomness. This creates a liveness vulnerability: a single malicious party could repeatedly cause the ceremony to fail by sending out-of-bounds partials.
Local rejection eliminates this vulnerability through a mathematical guarantee. By the triangle inequality:

‖Σ_i z_i‖∞ ≤ Σ_i ‖z_i‖∞

If each partial satisfies ‖z_i‖∞ ≤ B_local and we aggregate at most T partials, then ‖z‖∞ ≤ T · B_local. Setting B_local = B_global / T guarantees that any combination of valid partials produces a valid aggregate. Global overflow becomes mathematically impossible, not just unlikely.
The aggregator enforces per-share bounds and discards invalid partials. With at least 2f+1 honest validators, there always exist t valid partials to aggregate. Malicious parties can only cause their own shares to be rejected. They cannot force a distributed restart.
Trade-offs
Local rejection changes the scheme parameters compared to single-signer Dilithium.
More local sampling. Tighter per-signer bounds (B_local < B_global) increase the expected number of rejection sampling iterations per signer. This is additional CPU cost but does not affect the distributed protocol. The extra work is invisible to other parties.
Narrower distribution. The effective distribution on z is more constrained than standard Dilithium. This means the scheme is "Dilithium-like with stricter bounds" rather than a bit-for-bit clone of ML-DSA. Security arguments must account for the modified bounds.
Exceptional global rejection. If global rejection ever occurs despite local bounds, it indicates either a negligible probability event or Byzantine behavior (shares that passed local validation but were crafted to overflow when combined). This is treated as a security anomaly, not normal control flow.
Local Rejection vs Global Abort
Session aborts are for global failures that cannot be resolved locally:
- Timeout waiting for other parties
- Security violations (trust assumption breached)
- Explicit cancellation by consensus
Type-Level Guarantees
The session type system provides these guarantees:
- Nonce uniqueness: Each FreshNonce can only be consumed once. The commit function consumes the ReadyToCommit state, and the nonce value moves into Committed. There's no path back.
- Linear flow: States can only progress forward. Each transition consumes its input. Sessions may also transition to Aborted.
- Local rejection: With signWithLocalRejection, norm bounds are satisfied during signing via local rejection sampling. No global retry loop is needed.
- Compile-time enforcement: These are not runtime checks. The type system prevents writing code that would reuse a nonce.
- Abort safety: Aborted sessions cannot continue. Once a session transitions to Aborted, any consumed nonces are invalidated.
Example: Nonce Reuse is Uncompilable
-- This CANNOT compile because `ready` is consumed by first `commit`
def badNonceReuse (S : Scheme) (ready : ReadyToCommit S) :=
let (committed1, _) := commit S ready -- consumes ready
let (committed2, _) := commit S ready -- ERROR: ready already consumed
(committed1, committed2)
Fast-Path Signing
For latency-sensitive applications, Ice Nine supports single-round signing when nonces are precomputed and pre-shared. This mode skips the commit-reveal round by preparing nonces during idle time.
Precomputed Nonces
structure PrecomputedNonce (S : Scheme) where
commitment : S.Commitment
nonce : FreshNonce S
publicHiding : S.Public
publicBinding : S.Public
generatedAt : Nat
maxAge : Nat := 3600 -- 1 hour default expiry
structure NoncePool (S : Scheme) where
available : List (PrecomputedNonce S)
consumedCount : Nat := 0
Fast Signing Function
def signFast (S : Scheme)
(keyShare : KeyShare S)
(precomputed : PrecomputedNonce S)
(challenge : S.Challenge)
(bindingFactor : S.Scalar)
(session : Nat)
(context : ExternalContext := {})
: Option (SignShareMsg S)
This function produces a signature share immediately using a precomputed nonce. It SHOULD return none if the norm check fails, requiring a fresh nonce. The current Lean code stubs out the norm check and always returns some. Production code must call S.normOK here and propagate failure.
Security Assumptions
Fast-path signing trusts that:
- Nonce commitments were honestly generated
- Nonces are not being reused (enforced by the `FreshNonce` type)
- Precomputed nonces have not expired
- The coordinator will aggregate honestly

Use standard two-round signing when these assumptions don't hold.
See Protocol Integration for detailed usage patterns with external consensus protocols.
Security Considerations
Nonce reuse. Using the same nonce in two sessions leaks the secret share. Session types make nonce reuse a compile-time error.
Commitment binding. The commitment prevents a party from choosing its nonce adaptively. This protects against rogue key attacks.
Challenge entropy. The hash function must produce challenges with sufficient entropy. In the random oracle model this is guaranteed.
Norm bounds. The norm check prevents statistical leakage of the secret through the response distribution. Local rejection sampling ensures each partial signature is independent of the secret. No global retry coordination is needed.
Abort coordination. Session-level aborts (for liveness failures or security violations) use CRDT state with an agreement threshold chosen so that at least one honest party must have agreed to abort.
Totality. The validation function always returns either a valid signature or a structured error.
Nonce Lifecycle
The Protocol/Sign/NonceLifecycle.lean module provides unified effect abstractions for nonce management. It consolidates session tracking, nonce registry, and pool management into a single composable interface with explicit error handling.
Nonce Safety Invariants
The module enforces four safety invariants.
Session uniqueness ensures each session ID is used at most once per party. Reusing a session ID could lead to nonce reuse.
Commitment uniqueness ensures each nonce commitment appears in at most one session. If the same commitment appears twice, a nonce was reused.
Consumption tracking ensures nonces are consumed exactly once. The FreshNonce type enforces this at compile time. The runtime registry provides defense-in-depth.
Expiry enforcement ensures precomputed nonces expire and cannot be used past their maximum age. This prevents stale nonces from being used after key rotation.
NonceState Structure
The NonceState structure consolidates all nonce-related tracking.
structure NonceState (S : Scheme) where
partyId : S.PartyId
usedSessions : Finset Nat
commitBySession : Std.HashMap (S.PartyId × Nat) S.Commitment
sessionsByCommit : Std.HashMap (S.PartyId × S.Commitment) (List Nat)
poolAvailable : List (PrecomputedNonce S)
poolConsumed : Nat
The usedSessions field tracks all sessions where commit has occurred. The commitBySession map provides forward lookup from session to commitment. The sessionsByCommit map provides reverse lookup for reuse detection.
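A fresh per-party state plausibly starts with all trackers empty. The sketch below assumes an `empty` constructor name and the `BEq`/`Hashable` instances that `Std.HashMap` keys require; neither is stated in the source.

```lean
-- Hypothetical initializer: a party begins with no used sessions,
-- empty lookup maps, and an empty precomputed-nonce pool.
def NonceState.empty (S : Scheme)
    [BEq S.PartyId] [Hashable S.PartyId]
    [BEq S.Commitment] [Hashable S.Commitment]
    (partyId : S.PartyId) : NonceState S :=
  { partyId := partyId
    usedSessions := ∅
    commitBySession := Std.HashMap.empty
    sessionsByCommit := Std.HashMap.empty
    poolAvailable := []
    poolConsumed := 0 }
```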
NonceError Type
Nonce operations return typed errors for precise error handling.
inductive NonceError (PartyId : Type*)
| sessionAlreadyUsed (session : Nat) (party : Option PartyId)
| commitmentReuse (session1 session2 : Nat) (party : PartyId)
| nonceExpired (generatedAt currentTime maxAge : Nat)
| poolEmpty
| poolExhausted (consumed : Nat)
| invalidSession (reason : String)
| nonceNotFound (session : Nat)
The commitmentReuse error is a security violation. The BlameableError instance identifies the party responsible. Other errors are operational without blame attribution.
Pure Operations
The NonceOp namespace provides pure operations on NonceState.
-- Check session freshness
def checkFresh (session : Nat) (st : NonceState S) : NonceResult S Unit
-- Mark session as used
def markUsed (session : Nat) (st : NonceState S) : NonceResult S Unit
-- Record a commitment
def recordCommitment (session : Nat) (commit : S.Commitment) (st : NonceState S)
: NonceResult S Unit
-- Check for reuse and throw if detected
def assertNoReuse (commit : S.Commitment) (st : NonceState S) : NonceResult S Unit
-- Full commit operation: check fresh, mark used, record, check reuse
def commitSession (session : Nat) (commit : S.Commitment) (st : NonceState S)
: NonceResult S Unit
The commitSession operation combines all checks into a single atomic operation. It ensures session freshness, marks the session as used, records the commitment, and checks for reuse.
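Assuming `NonceResult S α` unfolds to `Except (NonceError S.PartyId) (α × NonceState S)` (an assumption, consistent with the monad definition below), the composition plausibly threads the updated state through each step:

```lean
-- Sketch: sequence the four pure operations, re-binding the updated
-- state after each step and short-circuiting on the first error.
def commitSessionSketch (session : Nat) (c : S.Commitment)
    (st : NonceState S) : NonceResult S Unit := do
  let (_, st) ← NonceOp.checkFresh session st         -- session unused?
  let (_, st) ← NonceOp.markUsed session st           -- record session
  let (_, st) ← NonceOp.recordCommitment session c st -- store lookups
  NonceOp.assertNoReuse c st                          -- commitment fresh?
```

The ordering matters: the session is marked used before the reuse check, so even a failed commit permanently burns the session ID.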
Monadic Interface
The NonceM monad provides composition for chaining operations.
abbrev NonceM (S : Scheme) :=
StateT (NonceState S) (Except (NonceError S.PartyId))
-- Run and get final state
def NonceM.run (m : NonceM S α) (st : NonceState S)
: Except (NonceError S.PartyId) (α × NonceState S)
The monadic interface threads state automatically and short-circuits on errors.
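Because `StateT σ (Except ε) α` is definitionally `σ → Except ε (α × σ)`, a pure operation lifts into `NonceM` directly when `NonceResult S α` is the matching `Except` shape. Both the `liftOp` helper and that unfolding are assumptions for illustration.

```lean
variable {S : Scheme} {α : Type}

-- Assumed bridge from pure operations into the monad.
def liftOp (op : NonceState S → NonceResult S α) : NonceM S α :=
  fun st => op st

-- Usage sketch: committing session 1 twice would short-circuit the
-- second call with `sessionAlreadyUsed`; distinct sessions succeed.
def commitTwoSessions (c₁ c₂ : S.Commitment) : NonceM S Unit := do
  liftOp (NonceOp.commitSession 1 c₁)
  liftOp (NonceOp.commitSession 2 c₂)
```

Running `(commitTwoSessions c₁ c₂).run st₀` returns either the first `NonceError` or `((), finalState)` with both sessions recorded.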
Session Type Integration
The module bridges runtime tracking with compile-time session types.
-- Create ReadyToCommit using NonceState for tracking
def initSessionWithState (S : Scheme) (keyShare : KeyShare S) (message : S.Message)
(session : Nat) (nonce : FreshNonce S) (st : NonceState S) : ReadyToCommit S
-- Commit transition that updates NonceState
def commitWithState (S : Scheme) (ready : ReadyToCommit S) (st : NonceState S)
: Except (NonceError S.PartyId) (Committed S × SignCommitMsg S × NonceState S)
The commitWithState function consumes the FreshNonce inside ReadyToCommit. Session types prevent reuse at compile time. The NonceState provides runtime detection as defense-in-depth.
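A caller combining both layers might look like the following sketch, built only from the two documented signatures:

```lean
-- Sketch: create the session-typed state, then commit through the
-- runtime tracker. The linear `ready` value is consumed here; a second
-- use of it after this call would fail to compile.
def commitOnce (S : Scheme) (keyShare : KeyShare S) (msg : S.Message)
    (session : Nat) (nonce : FreshNonce S) (st : NonceState S)
    : Except (NonceError S.PartyId)
        (Committed S × SignCommitMsg S × NonceState S) :=
  let ready := initSessionWithState S keyShare msg session nonce st
  commitWithState S ready st
```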
Validated Aggregation
The Protocol/Sign/ValidatedAggregation.lean module provides per-share validation during signature aggregation. This enables local rejection sampling by validating each partial signature before aggregation and collecting blame for invalid shares.
Validation Process
Validated aggregation performs three checks on each share.
Norm bound check verifies ‖z_i‖∞ ≤ B_local where B_local is the local rejection bound from ThresholdConfig. Shares exceeding the bound are rejected.
Algebraic check verifies A(z_i) = w_eff + c·pk_i where w_eff is the effective nonce commitment incorporating the binding factor. This detects malformed shares.
Commitment presence verifies the share has a corresponding commitment from round 1. Missing commitments indicate protocol violations.
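The three checks compose into a per-share validator along these lines. This is a sketch: `S.normInf`, `S.applyA`, `S.addPub`, `S.scalePub`, `cfg.localBound`, and the `SignShareMsg` field names are assumed, as is decidable equality on `S.Public`.

```lean
-- Illustrative validator: run the three documented checks in order,
-- throwing the first failure as a blameable error.
def validateShare (S : Scheme) (cfg : ThresholdConfig)
    (share : SignShareMsg S) (challenge : S.Challenge)
    (pk : S.Public) (wEff : Option S.Public)
    : Except (ShareValidationError S.PartyId) Unit := do
  -- 1. Commitment presence: a round-1 commitment must exist.
  let some w := wEff
    | throw (.missingCommitment share.party)
  -- 2. Norm bound: ‖z_i‖∞ ≤ B_local.
  if S.normInf share.z > cfg.localBound then
    throw (.normExceeded share.party (S.normInf share.z) cfg.localBound)
  -- 3. Algebraic check: A(z_i) = w_eff + c·pk_i.
  if S.applyA share.z ≠ S.addPub w (S.scalePub challenge pk) then
    throw (.algebraicInvalid share.party "A(z) ≠ w_eff + c·pk")
  pure ()
```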
ShareValidationError
Validation errors identify the responsible party.
inductive ShareValidationError (PartyId : Type*)
| normExceeded (party : PartyId) (actual : Nat) (bound : Nat)
| algebraicInvalid (party : PartyId) (reason : String)
| missingCommitment (party : PartyId)
| missingBindingFactor (party : PartyId)
All error variants implement BlameableError for blame attribution.
AggregationResult
The aggregation result contains the signature (if successful), blame information, and statistics.
structure AggregationResult (S : Scheme) where
signature : Option (Signature S)
includedParties : List S.PartyId
blame : BlameResult S.PartyId
success : Bool
totalReceived : Nat
validCount : Nat
The blame field contains collected errors according to ProtocolConfig. The includedParties list identifies which shares were aggregated.
aggregateValidated Function
The primary aggregation function validates shares and aggregates valid ones.
def aggregateValidated (S : Scheme) (cfg : ThresholdConfig)
(protocolCfg : ProtocolConfig) (challenge : S.Challenge)
(commitments : List (SignCommitMsg S)) (shares : List (SignShareMsg S))
(pkShares : S.PartyId → S.Public) (bindingFactors : BindingFactors S)
: AggregationResult S
The function validates all shares, partitions them into valid and invalid, collects blame according to protocolCfg, and aggregates the first threshold valid shares.
Guarantee
If at least threshold parties produce valid shares, the aggregate signature is guaranteed to satisfy the global norm bound. This follows from the local bound relationship: T · B_local ≤ B_global.
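The argument is the triangle inequality plus transitivity: each of the T aggregated responses has infinity norm at most B_local, so the aggregate's norm is at most T · B_local, and the configuration requires T · B_local ≤ B_global. The final arithmetic step, stated over natural-number norms:

```lean
-- Once the triangle inequality gives ‖Σ zᵢ‖∞ ≤ T * Blocal, the
-- configured inequality closes the global bound by transitivity.
example (T Blocal Bglobal sumNorm : Nat)
    (htri : sumNorm ≤ T * Blocal)
    (hcfg : T * Blocal ≤ Bglobal) :
    sumNorm ≤ Bglobal :=
  Nat.le_trans htri hcfg
```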
Byzantine Robustness
The aggregator validates each partial independently and discards invalid ones. Validation checks three conditions.
Structural validity ensures the partial has correct dimensions and format. Norm bound checking verifies ‖z_i‖∞ ≤ B_local where B_local is the local rejection bound. Algebraic validity checks that A(z_i) = w_eff + c·pk_i where w_eff incorporates the binding factor.
Parties that send out-of-bound or algebraically invalid partials have their shares rejected. The aggregator does not attempt to include them in the signature. This allows misbehaving parties to be identified and excluded.
Given n = 3f+1 validators with at most f Byzantine, there always exist at least t = 2f+1 honest validators. These honest validators will produce valid partials. The aggregator collects valid partials until reaching the threshold, then produces a valid signature.
A malicious party can slow the signing ceremony by sending invalid partials, but cannot prevent a valid signature from being produced. The combination of per-share validation and honest majority ensures liveness despite Byzantine participants.
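The counting argument behind this liveness claim is simple arithmetic: with n = 3f+1 parties and at most f Byzantine, at least n − f = 2f+1 are honest, which is exactly the threshold t. Checked mechanically:

```lean
-- Honest parties: n - f = (3f + 1) - f = 2f + 1 ≥ t.
example (f : Nat) : (3 * f + 1) - f ≥ 2 * f + 1 := by omega
```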