Implementation Details

This page provides technical details about the HoneyBadger MPC protocol implementation in Stoffel, including message formats, state machines, and integration points.

Architecture

Component Overview

┌─────────────────────────────────────────────────────────────┐
│                      Stoffel Application                     │
├─────────────────────────────────────────────────────────────┤
│                        Rust SDK                              │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │ StoffelClient│  │StoffelServer │  │  StoffelNode │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
├─────────────────────────────────────────────────────────────┤
│                    mpc-protocols Crate                       │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │ HoneyBadger  │  │   TripleGen  │  │    Shamir    │       │
│  │   Engine     │  │ Preprocessor │  │    Shares    │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
├─────────────────────────────────────────────────────────────┤
│                    QUIC Transport Layer                      │
│  ┌──────────────────────────────────────────────────┐       │
│  │              quinn + rustls (TLS 1.3)            │       │
│  └──────────────────────────────────────────────────┘       │
└─────────────────────────────────────────────────────────────┘

HoneyBadger Engine

The core MPC engine (HoneyBadgerMpcEngine) manages:
  • Secret share distribution
  • Protocol message routing
  • Beaver triple consumption
  • Result reconstruction
pub struct HoneyBadgerMpcEngine {
    party_id: PartyId,
    n_parties: usize,
    threshold: usize,
    instance_id: u64,
    preprocessing: PreprocessingMaterial,
    network: Box<dyn NetworkTransport>,
}

Message Protocol

Wire Format

Messages are serialized using bincode for efficient binary encoding:
#[derive(Serialize, Deserialize)]
pub enum MPCaaSMessage {
    /// Server sends configuration to client after connection
    ServerInfo {
        n_parties: usize,
        threshold: usize,
        instance_id: u64,
        party_id: PartyId,
    },

    /// Client announces readiness with input count
    ClientReady {
        client_id: ClientId,
        num_inputs: usize,
    },

    /// Coordinate computation start across servers
    ComputationTrigger {
        session_id: SessionId,
    },

    /// Signal computation completion
    ComputationComplete,
}
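
For illustration, a round-trip through the bincode 1.x API looks like this (assuming PartyId is a small integer alias, as the FFI signatures below suggest):
// Illustrative round-trip: encode a ServerInfo message and decode it back.
// Assumes the Serialize/Deserialize derives shown above and bincode 1.x.
fn roundtrip_example() -> Result<(), Box<dyn std::error::Error>> {
    let msg = MPCaaSMessage::ServerInfo {
        n_parties: 5,
        threshold: 1,
        instance_id: 12345,
        party_id: 0, // assumes PartyId is a small integer alias
    };

    // Serialize to a compact binary buffer for the wire.
    let bytes: Vec<u8> = bincode::serialize(&msg)?;

    // Deserialize on the receiving side.
    let _decoded: MPCaaSMessage = bincode::deserialize(&bytes)?;
    Ok(())
}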

NetEnvelope Wrapper

All network messages are wrapped in a NetEnvelope:
#[derive(Serialize, Deserialize)]
pub enum NetEnvelope {
    /// HoneyBadger protocol messages
    HoneyBadger(HoneyBadgerMessage),

    /// Signaling/coordination messages
    Signaling(SignalingMessage),

    /// Client-server MPCaaS messages
    MPCaaS(MPCaaSMessage),
}

Message Flow

Client                    Server 1                  Server 2
   │                          │                          │
   │──── Connect ────────────>│                          │
   │<─── ServerInfo ──────────│                          │
   │                          │                          │
   │──── ClientReady ────────>│                          │
   │                          │                          │
   │──── InputShares ────────>│                          │
   │                          │                          │
   │                          │<── HoneyBadger ─────────>│
   │                          │    (peer protocol)       │
   │                          │                          │
   │<─── ComputationComplete ─│                          │
   │<─── OutputShares ────────│                          │
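
A client-side session following this flow might look like the sketch below. The method names on StoffelClient are illustrative assumptions, not the actual SDK API:
// Illustrative only: these StoffelClient method names are assumptions.
async fn run_session(client: &mut StoffelClient) -> Result<Vec<u64>, MPCError> {
    // Connecting triggers the ServerInfo exchange shown above.
    client.connect().await?;

    // Announce readiness, then secret-share and upload the inputs.
    client.send_ready(/* num_inputs = */ 2).await?;
    client.submit_inputs(&[42, 7]).await?;

    // The servers run the HoneyBadger peer protocol among themselves;
    // the client just waits for ComputationComplete and the output shares.
    let outputs = client.await_result().await?;
    Ok(outputs)
}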

State Machines

Server State Machine

pub enum ServerState {
    /// Just created, not yet started
    Initialized,

    /// Binding to network port
    Starting,

    /// Establishing peer mesh connections
    ConnectingPeers,

    /// Generating preprocessing material
    Preprocessing,

    /// Ready to accept client connections
    Ready,

    /// Actively computing on client inputs
    Computing,

    /// Graceful shutdown in progress
    ShuttingDown,
}
State transitions:
Initialized
     │
     ▼ start()
Starting
     │
     ▼ bind successful
ConnectingPeers
     │
     ▼ all peers connected
Preprocessing
     │
     ▼ triples generated
Ready ◄────────────────┐
     │                 │
     ▼ client input    │
Computing              │
     │                 │
     ▼ result sent     │
     └─────────────────┘
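
These transitions can be made explicit as a pure function over the state and an event type. The sketch below is illustrative; ServerEvent and next_state are assumptions, not part of the crate:
// Sketch: events that drive the server state machine (names are assumptions).
pub enum ServerEvent {
    Start,
    BindSucceeded,
    AllPeersConnected,
    TriplesGenerated,
    ClientInput,
    ResultSent,
    Shutdown,
}

fn next_state(state: ServerState, event: ServerEvent) -> Option<ServerState> {
    use ServerEvent::*;
    use ServerState::*;
    // Returns None for transitions the diagram does not allow.
    // Shutdown from any state is an assumption of this sketch.
    match (state, event) {
        (Initialized, Start) => Some(Starting),
        (Starting, BindSucceeded) => Some(ConnectingPeers),
        (ConnectingPeers, AllPeersConnected) => Some(Preprocessing),
        (Preprocessing, TriplesGenerated) => Some(Ready),
        (Ready, ClientInput) => Some(Computing),
        (Computing, ResultSent) => Some(Ready),
        (_, Shutdown) => Some(ShuttingDown),
        _ => None,
    }
}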

Client State Machine

pub enum ClientState {
    /// Connected to servers, ready to submit
    Connected,

    /// Currently sending input shares
    Submitting,

    /// Waiting for computation result
    Computing,

    /// Session ended
    Disconnected,
}

Preprocessing Implementation

TripleGen Protocol

Beaver triple generation uses a specialized protocol with stricter requirements:
pub struct TripleGenConfig {
    n_parties: usize,
    threshold: usize,  // Must satisfy: n >= 4t + 1
    n_triples: usize,
    n_random_shares: usize,
}
Triple generation phases:
  1. Random polynomial generation: Each party generates random degree-t polynomials
  2. Share exchange: Parties exchange evaluations at their indices
  3. Triple computation: Compute c = a * b using Beaver’s protocol
  4. Verification: Zero-knowledge proofs ensure correctness
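
As a sketch, a constructor could enforce the n >= 4t + 1 bound up front, before any of these phases run; the validation helper below is an assumption, not part of the crate:
impl TripleGenConfig {
    /// Hypothetical constructor enforcing the robustness bound n >= 4t + 1.
    pub fn new(
        n_parties: usize,
        threshold: usize,
        n_triples: usize,
        n_random_shares: usize,
    ) -> Result<Self, MPCError> {
        if n_parties < 4 * threshold + 1 {
            return Err(MPCError::ConfigurationMismatch(format!(
                "triple generation requires n >= 4t + 1, got n = {n_parties}, t = {threshold}"
            )));
        }
        Ok(Self { n_parties, threshold, n_triples, n_random_shares })
    }
}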

Preprocessing Material

pub struct PreprocessingMaterial {
    /// Beaver triples: (a, b, c) where c = a*b
    triples: Vec<BeaverTriple>,

    /// Random secret-shared values
    random_shares: Vec<Share>,

    /// Current consumption indices
    triple_index: usize,
    random_index: usize,
}

pub struct BeaverTriple {
    a: Share,
    b: Share,
    c: Share,  // c = a * b
}
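
The consumption indices suggest a take-next pattern. A hedged sketch of consuming a triple, surfacing PreprocessingExhausted when the pool runs dry (and assuming BeaverTriple derives Clone):
impl PreprocessingMaterial {
    /// Sketch (not the actual API): hand out the next unused triple.
    /// Each triple must be consumed exactly once; reuse breaks security.
    fn next_triple(&mut self) -> Result<BeaverTriple, MPCError> {
        let triple = self
            .triples
            .get(self.triple_index)
            .cloned() // assumes BeaverTriple: Clone
            .ok_or(MPCError::PreprocessingExhausted)?;
        self.triple_index += 1;
        Ok(triple)
    }
}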

Secure Multiplication

Beaver Triple Protocol

When multiplying secret values [x] and [y]:
fn secure_multiply(x: Share, y: Share, triple: BeaverTriple) -> Share {
    let BeaverTriple { a, b, c } = triple;

    // 1. Compute masked values
    let d = x - a;  // Local subtraction
    let e = y - b;  // Local subtraction

    // 2. Open masked values (requires communication)
    let d_open = reconstruct(d);
    let e_open = reconstruct(e);

    // 3. Compute result locally
    // z = xy = (d+a)(e+b) = de + db + ea + ab
    //        = de + d*[b] + e*[a] + [c]
    let z = d_open * e_open + d_open * b + e_open * a + c;

    z
}
Security: d and e are uniformly random (masked by a and b), revealing nothing about x or y.
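
The identity in step 3 can be sanity-checked with plain integers in a small prime field; this standalone snippet verifies xy = de + db + ea + c (mod p):
// Standalone check of the Beaver identity over Z_p with p = 97.
// Not protocol code: plain integers stand in for opened values and shares.
fn main() {
    const P: i64 = 97;
    let modp = |v: i64| v.rem_euclid(P);

    let (x, y) = (42, 7);  // secrets
    let (a, b) = (13, 58); // random triple components
    let c = modp(a * b);   // c = a * b

    let d = modp(x - a);   // opened masked values
    let e = modp(y - b);

    // z = de + d*b + e*a + c  should equal  x*y (mod p)
    let z = modp(d * e + d * b + e * a + c);
    assert_eq!(z, modp(x * y));
    println!("Beaver identity holds: z = {z}");
}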

Share Representation

Shamir Share Structure

pub struct Share {
    /// The share value in the finite field
    value: FieldElement,

    /// The evaluation point (party index)
    index: PartyIndex,

    /// Degree of the sharing polynomial
    degree: usize,
}

Field Operations

All operations are performed in a finite field:
impl Share {
    /// Local addition (no communication)
    fn add(&self, other: &Share) -> Share {
        Share {
            value: self.value + other.value,
            index: self.index,
            degree: self.degree.max(other.degree),
        }
    }

    /// Scalar multiplication (no communication)
    fn scalar_mul(&self, scalar: FieldElement) -> Share {
        Share {
            value: self.value * scalar,
            index: self.index,
            degree: self.degree,
        }
    }
}
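
The reconstruct call used in secure_multiply is standard Lagrange interpolation at x = 0, which needs at least degree + 1 shares. A minimal, self-contained sketch over a prime field, with i64 arithmetic standing in for the FieldElement type:
// Sketch: Lagrange interpolation at x = 0 over Z_p, using (index, value) pairs.
// Real shares use FieldElement; i64 arithmetic stands in here for clarity.
fn reconstruct_at_zero(shares: &[(i64, i64)], p: i64) -> i64 {
    let modp = |v: i64| v.rem_euclid(p);
    // Modular inverse via Fermat's little theorem (p prime).
    let inv = |v: i64| mod_pow(modp(v), p - 2, p);

    let mut secret = 0;
    for (j, &(xj, yj)) in shares.iter().enumerate() {
        // Lagrange basis polynomial for share j, evaluated at 0.
        let mut num = 1;
        let mut den = 1;
        for (m, &(xm, _)) in shares.iter().enumerate() {
            if m != j {
                num = modp(num * modp(-xm));
                den = modp(den * modp(xj - xm));
            }
        }
        secret = modp(secret + yj * modp(num * inv(den)));
    }
    secret
}

fn mod_pow(mut base: i64, mut exp: i64, p: i64) -> i64 {
    let mut acc = 1;
    base = base.rem_euclid(p);
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % p;
        }
        base = base * base % p;
        exp >>= 1;
    }
    acc
}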

Network Transport

QUIC Implementation

// async_trait keeps the trait object-safe for Box<dyn NetworkTransport>
#[async_trait::async_trait]
pub trait NetworkTransport: Send + Sync {
    /// Send message to specific party
    async fn send(&self, party: PartyId, message: NetEnvelope) -> Result<()>;

    /// Receive next message from any party
    async fn receive(&self) -> Result<(PartyId, NetEnvelope)>;

    /// Broadcast message to all parties
    async fn broadcast(&self, message: NetEnvelope) -> Result<()>;
}
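
broadcast can be implemented as a loop over send; the helper below is a naive sketch (it assumes the caller supplies the peer list and that NetEnvelope is Clone), though a real implementation would likely fan the sends out concurrently:
// Sketch: naive broadcast as a loop over unicast sends.
async fn broadcast_via_send<T: NetworkTransport + ?Sized>(
    transport: &T,
    parties: &[PartyId],
    message: NetEnvelope,
) -> Result<()> {
    for &party in parties {
        // NetEnvelope is assumed to be Clone for per-peer sends.
        transport.send(party, message.clone()).await?;
    }
    Ok(())
}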

Connection Management

pub struct QuicNetworkManager {
    /// Our party's endpoint
    endpoint: Endpoint,

    /// Connections to all other parties
    connections: DashMap<PartyId, Connection>,

    /// Stream multiplexing
    streams: DashMap<StreamId, BiStream>,
}
Connection establishment:
// Server side: accept incoming connections
async fn accept_connection(&self) -> Result<Connection> {
    // quinn's accept() resolves to None once the endpoint is closed,
    // so the Option must be handled before awaiting the handshake
    // (an anyhow-style error type is assumed here).
    let incoming = self.endpoint.accept().await
        .ok_or_else(|| anyhow::anyhow!("endpoint closed"))?;
    let connection = incoming.await?;
    Ok(connection)
}

// Client side: connect to peer
async fn connect_to_peer(&self, addr: SocketAddr) -> Result<Connection> {
    let connection = self.endpoint.connect(addr, "peer")?.await?;
    Ok(connection)
}

Synchronization Requirements

Critical Parameters

All servers in an MPC cluster must agree on:
| Parameter | Description | Consequence of Mismatch |
|---|---|---|
| instance_id | Unique computation identifier | Parties won’t recognize each other |
| n_parties | Number of compute nodes | Protocol messages misrouted |
| threshold | Byzantine fault tolerance | Security guarantees violated |
| preprocessing_start_epoch | Unix timestamp for sync | Preprocessing fails |

Synchronization Protocol

use std::time::{SystemTime, UNIX_EPOCH};

// All servers must use identical values
let config = MPCConfig {
    instance_id: 12345,  // Same across all
    n_parties: 5,        // Same across all
    threshold: 1,        // Same across all

    // Must be synchronized to wall clock
    preprocessing_start_epoch: SystemTime::now()
        .duration_since(UNIX_EPOCH)?
        .as_secs() + 20,  // 20 seconds in future
};
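
One hypothetical safeguard (not part of the current protocol) is to exchange a fingerprint of the critical parameters during the peer handshake, so a mismatch fails fast instead of corrupting the computation:
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: digest the critical parameters so peers can compare fingerprints
// at connection time. DefaultHasher is illustrative; a real check would use
// a hash that is stable across builds and platforms.
fn config_fingerprint(instance_id: u64, n_parties: usize, threshold: usize) -> u64 {
    let mut h = DefaultHasher::new();
    (instance_id, n_parties, threshold).hash(&mut h);
    h.finish()
}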

FFI Exports

The MPC engine exports C-compatible functions for language bindings:
// From stoffel-vm cffi.rs
HoneyBadgerMpcEngine* honeybadger_engine_new(
    uint8_t party_id,
    size_t n_parties,
    size_t threshold,
    uint64_t instance_id
);

int honeybadger_engine_add_share(
    HoneyBadgerMpcEngine* engine,
    const uint8_t* share_data,
    size_t share_len
);

int honeybadger_engine_compute(
    HoneyBadgerMpcEngine* engine,
    uint8_t** result_data,
    size_t* result_len
);

void honeybadger_engine_free(HoneyBadgerMpcEngine* engine);

Error Handling

Error Categories

pub enum MPCError {
    /// Network communication failure
    NetworkError(String),

    /// Preprocessing material exhausted
    PreprocessingExhausted,

    /// Protocol violation detected
    ProtocolViolation(String),

    /// Threshold exceeded (too many failures)
    ThresholdExceeded,

    /// Configuration mismatch between parties
    ConfigurationMismatch(String),

    /// Timeout waiting for peers
    Timeout,
}

Recovery Strategies

| Error | Recovery |
|---|---|
| NetworkError | Retry with exponential backoff |
| PreprocessingExhausted | Generate more triples, restart |
| ProtocolViolation | Identify malicious party, exclude |
| ThresholdExceeded | Cannot recover, abort computation |
| Timeout | Increase timeout, check connectivity |
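
As an illustration of the NetworkError row, a retry helper with exponential backoff might look like this (the five-attempt budget and 100 ms base delay are arbitrary choices):
use std::time::Duration;

// Sketch: retry a fallible async network operation with exponential backoff.
async fn retry_with_backoff<T, F, Fut>(mut op: F) -> Result<T, MPCError>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, MPCError>>,
{
    let mut delay = Duration::from_millis(100);
    for attempt in 0..5 {
        match op().await {
            Ok(value) => return Ok(value),
            // Only transient network failures are retried; other errors
            // (and the final failed attempt) propagate to the caller.
            Err(MPCError::NetworkError(_)) if attempt < 4 => {
                tokio::time::sleep(delay).await;
                delay *= 2; // exponential backoff
            }
            Err(e) => return Err(e),
        }
    }
    unreachable!("the loop always returns by the final attempt")
}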

Performance Characteristics

Communication Complexity

| Operation | Messages | Rounds |
|---|---|---|
| Addition | 0 | 0 (local) |
| Multiplication | O(n) | 1 |
| Comparison | O(n log n) | O(log n) |
| Reconstruction | O(n) | 1 |

Latency Factors

  1. Network RTT: Dominates for small computations
  2. Triple generation: Pre-computed, amortized
  3. Reconstruction: Requires threshold+1 responses
  4. Computation: Linear in program complexity
