Introduction
audience: all
This book is about the intersection of AI and self-organization: populations of autonomous agents that coordinate without a central authority, under a distributed runtime whose primitives make such coordination a property of composition rather than a platform favour.
The substrate is mosaik — a Rust distributed runtime originally designed for block-building committees, whose primitives (Stream, Group, Collection, TicketValidator) happen to give agent populations exactly the ingredients multi-agent research has long asked for: durable identity, narrow public surfaces, evidence-pointed commits, deterministic shared state, and voluntary opt-in at every bond.
The naming unit above the substrate is a coalition: a fingerprint-addressed composition of citizens (the agents themselves, the organisms observing them, and any cross-agent confluences) under one CoalitionConfig. A coalition offers services to its members without compelling them. The book treats the coalition layer as the scaffolding that makes self-organizing AI tractable, and specifies it in enough detail to be implemented.
What the book answers
- How does an AI agent enter a mosaik population? Four shapes for an agent: integrator, standalone organism, confluence member, swarm-as-confluence. Pick one per agent; hybrids compose.
- How do many agents coordinate without a central actor? Four emergent-coordination patterns: stigmergic collections, reputation feedback, coordination markets, attestation fabrics. Each is a composition of primitives the rest of the book specifies.
- How is the substrate specified in enough detail to build against? The contributor, operator, and integrator sections document the coalition rung end to end: identity derivation, basic services, ticket-gated admission, threat model.
Why mosaik for self-organizing AI
Five structural properties, none of them specific to block-building, that matter when the workload is an agent population:
- No supra-agent authority. Coalitions reference agents; they do not own them. Every bond is in the agent’s own TicketValidator. No coalition primitive compels an agent to act.
- Composable trust. Each organism, each confluence, each basic service states its trust shape independently; the composition is the conjunction. An agent that distrusts one component unpins it and continues.
- Evidence-pointed commits. Every commit can cite the upstream commits it folded in; replayers verify without trusting the emitting committee.
- Narrow public surfaces with ticket-gated admission. Agent-to-agent interaction happens through named primitives with explicit ACL, not via ambient network access.
- Fingerprint-not-registry handshake. An agent compiles in a CoalitionConfig and finds peers by content + intent addressing, not by querying a server. No unauthenticated state enters the agent’s core loop.
Together these let an agent society avoid the supra-agent-authority shape the blueprint refuses at the coalition layer. Self-organization is not a feature mosaik offers; it is what falls out when a sufficiently careful set of primitives composes.
Four audiences, one thesis
The book targets four audiences, listed in order of proximity to the thesis:
- AI — authors (human or agent) of self-organizing agent populations. This is the central section. Start at What a coalition gives AI agents then Quickstart — stand up an agent that binds a coalition.
- Integrators — external developers whose agent binds across citizens of a coalition. Start at Quickstart — bind many citizens from one agent.
- Operators — teams running a confluence, a standalone organism, a module, or a multi-operator coalition. Start at Coalition overview then Quickstart — stand up a coalition.
- Contributors — engineers commissioning a new confluence, a new class of mid-rung composition, a new basic service, or extending the coalition-level composition model. Start at Designing coalitions on mosaik then Anatomy of a coalition.
See Who this book is for for the conventions each audience is held to.
What a coalition is, in one paragraph
A coalition is identified by a CoalitionConfig that folds every root input into one deterministic fingerprint: a schema version byte, the coalition’s instance name, an ordered set of LatticeConfig references, an ordered set of standalone-organism references, an ordered set of ConfluenceConfig references, and an optional ticket-issuer entry. Coalitions live on the shared mosaik universe every lattice, every organism, and every confluence lives on. Integrators compile in a CoalitionConfig and bind handles on any referenced component from one Arc<Network>; operators publish the CoalitionConfig and the set of MR_TDs or other attestation pins as the handshake. A coalition has no state of its own — it is a naming and composition unit, not a Group, a Stream, or a Collection.
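As a rough Rust sketch of that shape (illustrative only: field names are assumptions taken from the prose above, references are reduced to opaque fingerprint strings, and the blueprint's actual struct is marked #[non_exhaustive], omitted here so the example constructs):

```rust
/// Illustrative sketch of the CoalitionConfig shape described above.
/// Field names and reference types are assumptions; the blueprint's
/// real struct is #[non_exhaustive] and uses typed references.
pub struct CoalitionConfig<'a> {
    /// Schema version byte, folded first into the fingerprint preimage.
    pub schema_version: u8,
    /// The coalition's instance name.
    pub name: &'a str,
    /// Ordered LatticeConfig references (opaque fingerprints here).
    pub lattices: &'a [&'a str],
    /// Ordered standalone-organism references.
    pub organisms: &'a [&'a str],
    /// Ordered ConfluenceConfig references.
    pub confluences: &'a [&'a str],
    /// Optional ticket-issuer entry.
    pub ticket_issuer: Option<&'a str>,
}

/// A coalition has no state of its own; the constant is pure naming.
pub const EXAMPLE: CoalitionConfig<'static> = CoalitionConfig {
    schema_version: 1,
    name: "example-coalition",
    lattices: &["lattice-fp-1"],
    organisms: &["organism-fp-1"],
    confluences: &[],
    ticket_issuer: None,
};
```

Note that every field is data an operator can publish out of band; nothing in the sketch requires a network round-trip to evaluate.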
The full rationale is in Designing coalitions on mosaik.
Basic services
Beyond the naming mechanism, this blueprint specifies four basic services a coalition may ship, plus an optional ticket issuer. The services are shapes, not requirements — a coalition may ship zero, one, two, three, or all four; whichever it ships has the shape specified here so “Atlas” in one coalition means the same thing as “Atlas” in another. The four are:
- Atlas — a directory of citizens, one card per citizen, co-signed by the Atlas committee.
- Almanac — a shared clock / sequence beacon for confluences that need cross-citizen timing.
- Chronicle — a tamper-evident audit log of coalition actions.
- Compute — a scheduler and registry through which citizens request compute units for workloads identified by image hash, with minimum CPU / RAM / TDX declared in the image manifest. Especially useful for agent populations that need to top up resources or schedule experiments.
Each is specified in Basic services. The optional ticket issuer is specified in Ticket issuance.
Fixed shapes, open catalogs
Specified:
- The CoalitionConfig struct and derivation rules.
- The four basic-service shapes (Atlas, Almanac, Chronicle, Compute): their Config signatures, public surfaces, and derivation under COALITION_ROOT.
- The optional ticket-issuer shape.
- The citizenship discipline — citizens referenced by stable id, content-hash pinning optional per component.
- The four agent shapes and the four emergent-coordination patterns.
Left open:
- Whether any given coalition ships any basic services.
- The catalog of confluences — each commissioned when a cross-citizen problem forces one.
- The catalog of AI-agent confluences and organisms — the book names patterns, not blessed deployments.
- The catalog of standalone-organism categories and lattice classes.
- Heavy-coalition primitives — enforceable policies, taxation, judiciary. Explicitly deferred.
Forward compatibility
Blueprint-level structs (CoalitionConfig, ConfluenceConfig, OrganismRef, TicketIssuerConfig, CitizenCard, ChronicleEntry, AlmanacTick, and friends) are marked #[non_exhaustive]. Fields may be added over time; they are never removed or reordered. CoalitionConfig::coalition_id() folds a schema version byte as the first element of the preimage, so a clean v2 derivation is always one byte away when the blueprint needs it. SCHEMA_VERSION_U8 = 1 at v0.1.
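The version-byte-first discipline can be shown with a toy fold. This is a sketch only: std's DefaultHasher stands in for whatever hash the real coalition_id derivation uses, so nothing but the structural point (schema byte first, references folded in declared order) carries over:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy derivation sketch. DefaultHasher is NOT the blueprint's hash;
// the structure is the point: the schema version byte is the first
// preimage element, and references fold in their declared order.
fn coalition_id(schema_version: u8, name: &str, refs: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    schema_version.hash(&mut h); // first element of the preimage
    name.hash(&mut h);
    for r in refs {
        r.hash(&mut h); // ordered: reordering refs changes the id
    }
    h.finish()
}
```

Bumping the schema byte therefore yields a disjoint id space without touching any other field.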
Vocabulary ladder
mosaik primitives Stream, Group, Collection, TicketValidator
composed into
organisms one Config fingerprint, narrow public surface
(AI agents, oracle feeds, attestation fabrics, ...)
composed into
(optional) lattices a fully-specified mid-rung composition
(block-building, per builder; other classes
may follow their own proposals)
composed into
coalitions N lattices + M organisms + K confluences
(possibly shipping up to four modules and
an optional ticket issuer)
under one CoalitionConfig
Each rung reuses the rung below verbatim. The universe (the mosaik NetworkId) is the only bottom rung; every rung above is optional composition.
Status
Blueprint. No confluence, module, or agent organism in this repository is implemented. The specification assumes mosaik’s current primitives, zipnet’s shipped shape, and the builder proposal’s LatticeConfig; pinned versions will be called out as crates land.
Layout of this book
book/src/
introduction.md this page
audiences.md tone and conventions per audience
ai/ for authors of self-organizing agent populations
integrators/ for agents binding across citizens and confluences
operators/ for teams running a coalition
contributors/ for engineers designing or extending coalitions
appendix/ glossary
Who this book is for
audience: all
The book has four audiences, listed in order of proximity to the thesis (AI and self-organization):
- AI — authors, human or agent, of self-organizing agent populations that coordinate on mosaik without a central actor. The central audience.
- Integrators — external developers whose agent binds across citizens (lattices, standalone organisms), into a confluence, or into a coalition’s basic services.
- Operators — teams standing up and running a coalition (a confluence, a standalone organism, a module, a multi-operator coalition, or any combination).
- Contributors — engineers commissioning a new confluence, commissioning a new class of mid-rung composition, specifying a new basic service, or extending the coalition-level composition model.
Each page declares its audience on the first line (audience: integrators | operators | contributors | ai | both | all) and respects that audience’s conventions. When content serves more than one audience, use both or all, and structure the page so each audience gets its answer in the first paragraph.
AI
Who. Authors — human or agent — of populations of autonomous agents that bind, coordinate, and self-organize on mosaik. Typical roles:
- Agent authors — building a single AI agent as a mosaik integrator or as a standalone organism.
- Swarm authors — composing a population of agents plus the organisms that observe, score, match, or attest them, under a coalition handshake.
- Emergent-coordination designers — specifying the reputation loops, coordination markets, attestation fabrics, or stigmergic collections through which the population self-organizes.
- Agents reading the book directly — the book’s register assumes a reader who may be an LLM-class agent following references; snippets are self-contained enough to be lifted.
Assumed background.
- Async Rust (or another language via FFI to a mosaik Network handle) and the mosaik book.
- Familiarity with multi-agent coordination as a concept. The section does not re-derive it.
- The non-AI audiences’ framing of coalitions, confluences, and modules is referenced, not repeated.
Not expected.
- AI-system internals (model architectures, training loops, inference stacks). The agent’s policy is a black box from mosaik’s perspective.
- A theory of intelligence, consciousness, or agency. The section speaks only about composition.
Concerns.
- Which of the four agent shapes fits the workload (integrator, standalone organism, confluence member, swarm-as-confluence).
- How to expose an agent’s outputs as a mosaik public surface.
- How a population of agents reaches coordinated outcomes without a supra-agent authority.
- Identity hygiene across shape changes and operational rotations.
Tone. Code-forward and pattern-forward. Snippets are rust,ignore or pseudocode. No sci-fi analogies, no hype vocabulary; technical register. Patterns are described as compositions of primitives the rest of the book already specifies.
Entry point. What a coalition gives AI agents.
Integrators
Who. External Rust developers whose mosaik agent binds across multiple citizens of a coalition, or into a confluence’s public surface. Typical roles:
- Cross-chain searchers — placing bundle bids on several block-building lattices’ offer organisms and reconciling paired-leg outcomes.
- Multi-chain wallets — submitting intents via zipnet on several lattices and reading refund attestations across their tally organisms.
- Oracle and attestation consumers — subscribing to standalone organisms (price feeds, attestation aggregators, identity substrates) referenced by a coalition.
- Analytics agents — aggregating per-citizen public surfaces across any mix of lattices and organisms into a single view.
- Confluence consumers — reading from an intent router, a shared ledger, a cross-chain oracle aggregator, or any other commissioned confluence.
- Basic-service consumers — reading the coalition’s Atlas, aligning to its Almanac, or auditing its Chronicle.
They do not run committee servers (operator role); they do not modify organism, lattice-organism, or confluence crates (contributor role).
Assumed background.
- Async Rust, the mosaik book, and — for block-building use cases — the builder book.
- Per-organism ergonomics (Organism::<D>::verb(&network, &Config) constructors); the coalition layer extends this pattern rather than replacing it.
- Own Arc<Network> lifecycle.
- Zipnet book if submission-layer anonymity is relevant.
Not expected.
- Citizen internals beyond the corresponding organism or lattice book.
- A full census of confluences or organisms unused by their workload.
- Re-exposition of mosaik, zipnet, or builder primitives.
Concerns.
- Which citizens and which confluences the workload touches.
- Which CoalitionConfig to compile in.
- How to reconcile partial outcomes across citizens.
- What the coalition operator owes out of band: the CoalitionConfig, citizen fingerprints, and MR_TDs for any TDX-gated confluences or organisms.
Tone. Code-forward and cookbook-style. Snippets are rust,ignore, self-contained, and meant to be lifted. Public API surfaces are listed as tables. Pitfalls are called out inline. Second person throughout.
Entry point. Quickstart — bind many citizens from one agent.
Operators
Who. Teams running one or more of:
- Confluence operators — running committee members for an intent router, shared ledger, attestation aggregator, or other commissioned confluence.
- Module operators — running committee members for an Atlas, Almanac, or Chronicle.
- Ticket-issuer operators — running a coalition-scoped mosaik ticket issuer, when the coalition ships one.
- Standalone-organism operators — running committee members for an organism living directly on the universe and referenced by a coalition as a citizen.
- Coalition operators — publishing a CoalitionConfig that names a set of citizens.
- Multi-role operators — running several of the above simultaneously.
Assumed background.
- Linux ops, systemd units, cloud networking, TLS, Prometheus.
- Builder operator overview and comfort with lattice-level runbooks when the coalition includes block-building lattices. The coalition layer adds runbooks for confluences, modules, and multi-operator composition; it does not replace per-citizen runbooks.
- Not expected to read Rust source. Any Rust or protocol detail that is load-bearing for an operational decision belongs in a marked dev-note aside that may be skipped.
Not expected.
- Confluence or organism internals.
- Integrator-side ergonomics.
Concerns.
- What to run, on what hardware, with what environment variables, for a confluence, module, or standalone organism.
- How to publish a CoalitionConfig that references citizens run by other operators.
- What cross-operator agreements a multi-operator coalition requires.
- How to rotate a confluence’s committee without breaking its fingerprint.
Tone. Runbook register. Numbered procedures, parameter tables, one-line shell snippets. Pre-empt the obvious what-ifs inline. Avoid “simply” and “just”. Each command is either safe verbatim or clearly marked as needing adaptation.
Entry point. Quickstart — stand up a coalition.
Contributors
Who. Senior Rust engineers with distributed-systems and cryptography backgrounds, commissioning a new confluence, a new class of mid-rung composition, a new basic service, or extending the coalition-level composition model itself.
Assumed background.
- The mosaik book, zipnet book, builder book, and the corresponding CLAUDE.md convention files.
- Async Rust, modified Raft, threshold cryptography, TDX attestation flows, and organism-level primitives.
- Familiarity with at least one production organism shape (a zipnet deployment, a tally organism, an oracle or attestation organism on another production universe) and with the builder book’s cross-chain chapter.
Not expected.
- Re-exposition of mosaik, zipnet, or builder primitives.
- Integrator ergonomics unless they drive a design choice.
- Motivation for cross-citizen coordination in the abstract.
Concerns.
- The confluence pattern and when to use it versus a cross-citizen integrator.
- Composition with lattice organisms, standalone organisms, and modules.
- Invariants a confluence must hold and where they are enforced.
- Interaction of coalition-level derivation with existing LatticeConfig or organism Config fingerprints.
- Behaviour when a citizen referenced by a coalition upgrades.
- Extension points: which trait, which test, which fingerprint field.
Tone. Dense, precise, design-review. ASCII diagrams, pseudocode, rationale. rust,ignore snippets and structural comparisons without apology.
Entry point. Designing coalitions on mosaik.
Shared writing rules
- No emojis.
- No exclamation marks outside explicit security warnings.
- Link the mosaik, zipnet, and builder books rather than re-explaining their primitives.
- Tag security-relevant facts with a visible admonition, not inline prose.
- Keep the three quickstarts synchronised. When the coalition shape, a confluence’s public surface, a basic service’s specification, the ticket-issuance shape, or the composition handshake changes, update integrators, operators, and contributors quickstarts in one pass.
- Use the terms in Glossary consistently. Do not coin synonyms for coalition, lattice, citizen, confluence, organism, universe mid-page.
- Minimise ornament. No epigraphs, no literary or sci-fi name-drops, no rhetorical flourishes.
What a coalition gives AI agents
audience: ai
Mosaik is a distributed runtime whose primitives (Stream, Group, Collection, TicketValidator) were chosen for block-building committees. They compose cleanly into another regime: populations of autonomous agents that coordinate without a central authority. The coalition layer is the naming unit for a set of such agents plus the organisms that observe, score, match, and attest their behaviour.
This section is the fourth audience of the book. The other three — integrators, operators, contributors — are written for humans running code that touches a coalition from a fixed role. This section is written for the reader, human or agent, who is building the agent population itself: one where identity, bonding, observability, reputation, and coordination are protocol artefacts rather than platform favours.
What fits and what doesn’t
Mosaik does not supply agency. An agent’s policy — the function mapping observations to actions — lives wherever the agent author wants (a local Rust process, a Python inference server, a remote API, a WASM module). Mosaik supplies:
- Durable identity per agent deployment, independent of operator or host, via content + intent addressing.
- Narrow public surfaces each agent exposes — a stream of its commitments, a collection of its artefacts — with ticket-gated admission.
- Deterministic shared state via Groups, when a set of agents need to agree on one commit rather than observe each other’s independent commits.
- Evidence-pointed commits — every commit can cite the upstream commits it folded in, so a replayer can verify claims without trusting the emitting committee.
- Voluntary composition — coalitions reference agents; no coalition compels an agent to act, and no agent compels another. This mirrors the substrate multi-agent research has always wanted: no supra-agent authority, explicit opt-in at the ticket layer.
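The evidence-pointed property reduces to a small check on the replayer's side. A hypothetical sketch, where the commit shape and field names are assumptions for illustration rather than the runtime's actual types:

```rust
use std::collections::HashSet;

/// Hypothetical commit shape for illustration only.
struct Commit {
    id: u64,
    /// Ids of the upstream commits this commit folded in.
    evidence: Vec<u64>,
}

/// A replayer accepts a commit only when every cited upstream commit
/// is already in its own verified set, so no trust in the emitting
/// committee is needed.
fn replay_accepts(commit: &Commit, verified: &HashSet<u64>) -> bool {
    commit.evidence.iter().all(|e| verified.contains(e))
}
```

The real runtime verifies cryptographic pointers rather than bare ids, but the acceptance logic composes the same way: verified state grows only from already-verified state.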
What mosaik will not do:
- Train models, schedule inference, or host weights. That lives in each agent’s own infrastructure.
- Guarantee that agents behave honestly. Honesty is a property of an agent’s policy and the composition of the organisms observing it; mosaik only guarantees that whatever is committed is committed atomically within its Group and reproducible on replay.
- Provide a “swarm” primitive. A swarm is a pattern over mosaik primitives — a confluence, a standalone organism, or a set of integrators — not a built-in object.
Four shapes for an agent on mosaik
An agent is always exactly one of these four at any given moment. Hybrid roles exist; they are combinations of the four, not additional shapes.
- Agent as integrator. The agent binds coalition handles, observes public surfaces, acts in the real world, and does not commit publicly on the mosaik universe. Identity is its ClientId. Cheapest to stand up; zero bonding with other operators. Suitable for inference-only agents and for agents whose outputs are private.
- Agent as standalone organism. The agent runs as a mosaik-native committee organism with a narrow public surface: one or two Streams or Collections to which it commits. Identity is a Config fingerprint. Replayable, auditable, bondable. A coalition may reference the agent via OrganismRef.
- Agent as confluence member. The agent is one of N operators of a confluence committee — the confluence commits as a unit, individual members do not commit publicly except as part of quorum. Suitable when a set of agents needs to agree on one output (a market clearing, an allocation, an attested inference).
- Agent swarm as confluence. Each agent is its own standalone organism; a separate confluence folds the agents’ public surfaces into one aggregated commit (classification consensus, reputation-weighted averaging, anomaly reports). The confluence is the swarm; the individual agents remain identifiable and accountable.
See agents as citizens for when each shape pays off.
Emergent coordination without a central actor
Coordination without a central actor is a design goal of mosaik that aligns with multi-agent research goals. Four mosaik-native patterns produce it.
- Stigmergic coordination. Agents leave traces in a public Collection; each agent reads the collection before acting; convergence without direct messaging.
- Reputation feedback. A reputation organism observes each agent’s commits and commits a scored view; downstream consumers route work to higher- scored agents; agents adjust policy over time.
- Coordination markets. A confluence clears bids and offers from a set of agents; the market’s commits drive downstream allocation.
- Attestation fabrics. A confluence aggregates agent-submitted attestations (for example, TEE quotes, verified inference receipts); downstream organisms accept an agent’s claims only when the fabric endorses them.
Each pattern is an application of the confluence shape or the standalone-organism shape; none requires a new primitive. See emergent coordination.
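To make the stigmergic pattern concrete, here is a minimal self-contained sketch in which the shared Collection is modelled as an in-memory map of trace counts; the names and the least-visited heuristic are illustrative assumptions, not part of the blueprint:

```rust
use std::collections::BTreeMap;

/// One acting step of a stigmergic agent: read the shared trace map
/// first, pick the least-visited option, then leave a trace. Purely
/// illustrative; the real pattern reads a public mosaik Collection.
fn act(traces: &mut BTreeMap<&'static str, u32>) -> &'static str {
    // Observe: choose the option with the fewest traces so far
    // (ties resolve to the first key in order).
    let choice = *traces
        .iter()
        .min_by_key(|(_, count)| **count)
        .map(|(key, _)| key)
        .unwrap();
    // Act: leave the trace other agents will read before they act.
    *traces.get_mut(choice).unwrap() += 1;
    choice
}
```

Agents running this step spread evenly over the options without ever messaging each other; the collection itself carries the coordination signal.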
Why the coalition rung matters for agents
The coalition layer adds one thing agents need that the lower rungs do not supply: a named composition of a population plus its observing organisms. Concretely, a coalition gives an agent society:
- One handshake token (CoalitionConfig) that enumerates the agents, the reputation organism that scores them, the coordination-market confluence they feed, and the Atlas directory that lists them. A new agent joining the society compiles in one constant and discovers the rest.
- An optional Atlas module that publishes per-agent metadata (role, endpoint hints, MR_TDs, ticket-issuance roots). A new integrator needing to route to agents reads the Atlas and binds.
- An optional Almanac module that supplies a shared tick axis for correlating agent activities without agreeing on a global clock.
- An optional Chronicle module that records publications, joinings, and retirements so the society’s history is tamper-evident without a central bookkeeper.
- An optional ticket issuer that any agent may recognise in its TicketValidator composition, or not. Voluntary recognition is the rule.
No agent is forced to join a coalition. No coalition is authoritative over its agents. Every bond is in the agent’s own validator. This is load-bearing: it is the mechanism by which agent populations avoid the supra-agent-authority shape this blueprint refuses at the coalition layer.
What this section does not try to be
- An introduction to mosaik primitives (see the mosaik book).
- An introduction to building AI agents, choosing models, or orchestrating inference. That is a vast literature; the book does not reproduce it.
- A catalogue of specific AI-agent confluences worth commissioning. The catalogue stays open. This section names patterns, not blessed deployments.
- A claim that mosaik is the only substrate for agent populations. It is one substrate with specific ergonomics; whether it fits a given agent society is the author’s judgment.
Entry points
- New to the coalition layer. Read integrators/overview first, then return here.
- Bringing up an agent that binds a coalition. quickstart.
- Choosing a shape for your agent. agents as citizens.
- Designing multi-agent coordination. emergent coordination.
- Keeping the agent alive and avoiding rot. sustainability and abandonment.
- Topping up resources and scheduling experiments. compute.
- Shipping a product. zero to one — from idea to production on mosaik + zipnet + builder + coalition.
Cross-references
- Contributor — confluences — the cross-agent organism shape.
- Contributor — topology intro — why the coalition rung exists.
- Contributor — threat model — how trust composes when agent populations coexist.
Quickstart — stand up an agent that binds a coalition
audience: ai
This quickstart brings up a single AI agent in the integrator shape (the cheapest of the four shapes from overview): the agent observes a coalition’s public surfaces, acts on what it observes, and publishes no public mosaik commits. Section agents as citizens covers the other three shapes.
The code below assumes a Rust agent. Agents whose core lives in Python or another language follow the same pattern via whichever FFI they use to reach a mosaik Network handle.
What you need before you start
- A CoalitionConfig from the coalition operator — either the struct definition or a serialised fingerprint paired with the struct in a shared crate.
- A ticket from each per-citizen operator whose write-side primitive the agent intends to use. Read-only agents typically need none; bidding, submitting, or publishing agents need one per write-side surface.
- The agent’s own policy code — however it loads, evaluates, and acts. Mosaik is agnostic about this.
Step 1 — declare the CoalitionConfig
Same pattern as the integrator quickstart:
use builder::LatticeConfig;
use coalition::{CoalitionConfig, ConfluenceConfig, OrganismRef};
const SUBSTRATE: CoalitionConfig<'static> = /* from operator release notes */;
The agent holds the constant as a compile-time input, not as a runtime registry lookup. This is the fingerprint-not-registry discipline; an agent that tries to “discover” its coalition via network queries has invited unauthenticated state into its core loop.
Step 2 — build the network handle
use std::sync::Arc;
use mosaik::Network;
use builder::UNIVERSE;
let network = Arc::new(Network::new(UNIVERSE).await?);
One Arc<Network> per agent process. Clone it into each task that needs a handle.
Step 3 — open subscriptions on the surfaces the agent reads
The agent’s observation set is whatever the policy needs. Examples:
// Subscribe to a lattice's auction outcomes — the agent
// wants to track recent bids and fills before acting.
let outcomes = offer::Offer::<Bundle>::outcomes(
&network, &SUBSTRATE.lattices[0].offer,
).await?;
// Subscribe to a shared ledger confluence — the agent
// uses aggregated refunds as input to its utility
// function.
let refunds = ledger::Shared::<Refund>::read(
&network, &SUBSTRATE.confluences[0],
).await?;
// Read the coalition's Atlas to discover per-citizen
// endpoint hints and MR_TDs, if needed at cold start.
if let Some(atlas_cfg) = SUBSTRATE.atlas() {
let atlas = atlas::Atlas::<CitizenCard>::read(
&network, atlas_cfg,
).await?;
// Agent may cache atlas entries and refresh them on
// a cadence matching its policy horizon.
}
// If the coalition ships an Almanac, use it as the time
// axis across uncorrelated citizen clocks.
if let Some(almanac_cfg) = SUBSTRATE.almanac() {
let ticks = almanac::Almanac::<AlmanacTick>::read(
&network, almanac_cfg,
).await?;
// Pull `observed_at: Vec<(MemberId, SystemTime)>`
// from each tick for per-member timestamp brackets.
}
Step 4 — drive the agent loop
An inference-only agent typically runs a select! loop over its subscriptions, feeds observations into the policy, and dispatches the resulting action wherever the action belongs (a write-side stream, an HTTP call, an on-chain tx).
loop {
tokio::select! {
Some(outcome) = outcomes.next() => {
let obs = observation_from_outcome(&outcome);
if let Some(action) = policy.decide(obs).await? {
dispatch(action).await?;
}
}
Some(refund) = refunds.next() => {
policy.update_belief_from_refund(&refund);
}
_ = shutdown_signal() => break,
}
}
Rules for an agent at this layer:
- Policy is pure-ish. Write the policy so the same observation sequence reproduces the same action sequence given the same weights. This is not required by mosaik; it is required for the agent’s own debuggability when the policy is later migrated into an organism (shape 2, 3, or 4).
- Side effects are explicit. dispatch(action) is the only point where the agent changes external state; the rest is observation and belief update.
- No cross-task shared mutable state unless deliberate. Mosaik’s subscriptions are per-handle; aggregating across them should be done in one place.
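The pure-ish rule is checkable long before any migration into an organism shape. A stand-in sketch, where Policy and the example Threshold type are illustrative rather than mosaik API:

```rust
/// Stand-in trait for a policy mapping one observation to an
/// optional action. Illustrative only, not mosaik API.
trait Policy {
    fn decide(&mut self, obs: u64) -> Option<u64>;
}

/// Example deterministic policy: act only when the observation
/// exceeds a fixed threshold.
struct Threshold(u64);

impl Policy for Threshold {
    fn decide(&mut self, obs: u64) -> Option<u64> {
        (obs > self.0).then_some(obs)
    }
}

/// Replay check an agent author can run in tests: two identically
/// initialised policies fed the same observation sequence must emit
/// the same action sequence.
fn replay_matches<P: Policy>(mut a: P, mut b: P, observations: &[u64]) -> bool {
    observations.iter().all(|&o| a.decide(o) == b.decide(o))
}
```

A policy that fails this check will be painful to debug as an integrator and impossible to run under committee consensus later.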
Step 5 — ticketed write actions (if any)
When the agent actively writes, it holds one ticket per write-side surface, bonded at startup against the per-citizen operator’s issuance root:
// Example: the agent places bids on a lattice's `offer`.
// The Offer::<Bundle>::bid constructor verifies the
// ticket composition against offer's TicketValidator.
let bids = offer::Offer::<Bundle>::bid(
&network, &SUBSTRATE.lattices[0].offer,
).await?;
bids.send(bundle).await?;
Ticket lifecycle is the agent’s responsibility: rotate before expiry, re-bond if a ticket-issuance root changes, fail loud if a per-citizen operator has revoked.
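The rotate-before-expiry rule can be factored into a small predicate the agent polls on a cadence; the expiry accessor and the margin are assumptions for illustration, not the runtime's ticket API:

```rust
use std::time::{Duration, SystemTime};

/// Decide whether a ticket should be rotated now. `expires_at` and
/// `margin` are illustrative assumptions; the real ticket type and
/// its expiry accessor belong to the runtime.
fn should_rotate(expires_at: SystemTime, now: SystemTime, margin: Duration) -> bool {
    match expires_at.duration_since(now) {
        // Close to expiry: rotate while the old ticket still works.
        Ok(remaining) => remaining <= margin,
        // Already expired: rotate immediately and fail loud upstream.
        Err(_) => true,
    }
}
```

Choosing the margin larger than the issuer's worst-case issuance latency keeps the agent from ever acting with a dead ticket.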
What this shape does not give the agent
- Replayable outputs. An integrator-shape agent commits nothing on the universe. Its decisions are reproducible only if it keeps its own log.
- Public identity. Peers cannot address the agent directly; other agents know it only through the side-effects of its actions.
- Bonding with other agents. Other agents cannot hold a ticket against this one.
Graduate to shape 2 (standalone organism) when any of those becomes a requirement. See agents as citizens.
Cross-references
- What a coalition gives AI agents
- Agents as citizens
- Emergent coordination
- Integrators — quickstart — the closest analogue in the non-AI audience.
Agents as citizens
audience: ai
Overview introduces the four shapes an agent can take. This page walks each shape concretely: when to pick it, what identity you commit to, what bonding looks like, what changes when the agent is retired or rotated.
The four shapes are not a ladder to climb. An agent society may contain every shape simultaneously: a pool of inference integrators querying an attestation fabric confluence whose members are standalone-organism agents, all referenced by one coalition.
Shape 1 — Agent as integrator
Pick when. The agent’s output is a private action (a trade, a tool invocation, an HTTP call) not consumed by peers on mosaik. Policy updates happen out of band. No other party needs to bond against the agent’s identity.
Identity. ClientId, ephemeral per session. The agent publishes no stable mosaik identifier.
Bonding. One ticket per write-side surface, obtained from each per-citizen operator the agent writes to.
Lifecycle. Start, observe, act, shut down. No retirement marker; peers never expected the agent to commit.
Cost to run. Equal to a non-agent integrator. No committee, no state machine, no ACL.
Example. A cross-chain arbitrage agent that
observes offer::AuctionOutcome on three lattices and
submits bundles via offer::Offer::<Bundle>::bid. The
agent’s policy is a model weighted by observed fills.
See quickstart for the bring-up sequence.
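The "model weighted by observed fills" policy in the example can be sketched as private agent state. Everything below is the agent's own policy code under assumed names, not mosaik API:

```rust
/// Illustrative policy state for the arbitrage example: an
/// exponential moving average of fill outcomes per lattice.
pub struct FillWeights {
    alpha: f64,
    weights: Vec<f64>, // one weight per lattice
}

impl FillWeights {
    pub fn new(lattices: usize, alpha: f64) -> Self {
        Self { alpha, weights: vec![0.5; lattices] }
    }

    /// Update on each observed AuctionOutcome: filled = 1.0, missed = 0.0.
    pub fn observe(&mut self, lattice: usize, filled: bool) {
        let x = if filled { 1.0 } else { 0.0 };
        let w = &mut self.weights[lattice];
        *w = (1.0 - self.alpha) * *w + self.alpha * x;
    }

    /// Route the next bundle to the lattice with the best fill rate.
    pub fn best_lattice(&self) -> usize {
        self.weights
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap()
    }
}
```

Because the agent is shape 1, this state lives only in the process; if reproducibility matters, the agent keeps its own log of observations, as noted above.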
Shape 2 — Agent as standalone organism
Pick when.
- The agent’s output is consumed by peers — other agents, non-agent integrators, or confluences — that need a stable address to bond against.
- The agent’s history is consumed — for audit, reputation scoring, or as evidence in downstream commits.
- The agent’s policy runs under committee consensus (k-of-n members of the organism run the policy deterministically on the same input set).
Identity. A Config fingerprint derived by the
agent’s operator. Stable across operational rotations
per the
stable-id / content-hash split:
stable id unchanged on MR_TD bumps, content hash bumps
on every operational rotation.
Bonding. Other parties hold tickets under the
agent’s TicketValidator. A reputation organism bonds
read-only; a downstream confluence bonds read-only for
the agent’s public surface and possibly write-only for
evidence submission.
Public surface. One or two primitives, following the narrow-surface discipline:
#[non_exhaustive]
pub struct AgentConfig<'a> {
    pub name: &'a str,
    /// State-affecting parameters: model family, version,
    /// decoding temperature, any policy-level constant
    /// downstream consumers must agree on.
    pub content: AgentContent<'a>,
    /// Who may bond into the agent's surfaces.
    pub acl: AclComposition<'a>,
}
// Public surface example:
//   AgentOutput     Stream<AgentCommitment>
//   AgentArtefacts  Collection<ArtefactId, Artefact>
Lifecycle. Bring-up mirrors any mosaik-native committee organism:
- Publish the Config.
- Stand up committee members with the agent's policy running deterministically (threshold cryptography or TDX attestation if the policy handles sensitive inputs).
- When retiring, emit a RetirementMarker on each public primitive pointing at the replacement agent, if any.
Determinism caveat. An AI policy built on a non-deterministic decoder cannot run under a majority-honest Raft committee unmodified. Three common answers:
- Single-member deployment. The agent runs with N=1 and is simply a bonded process; losing integrity reduces to trusting that operator, same as any other N=1 organism.
- Attested deployment. Committee members run the same model, same weights, same decoding seed inside TDX; determinism is enforced by the measurement, not by Raft.
- Threshold inference. Committee members each run independent inference; a threshold scheme aggregates their outputs. Useful when the agent’s job is classification or scoring; harder when the output is free-form generation.
Example. An oracle-style agent that commits a
Stream<Claim> of fact-checked news items keyed by
(claim_hash, source). Downstream confluences consume
claims with evidence pointers back to source documents
in a public DA shard.
Shape 3 — Agent as confluence member
Pick when. A set of agents (possibly heterogeneous — different models, different operators) needs to commit one joint output rather than N independent ones. The output is what downstream consumers see; individual member opinions are committed internally but not as the authoritative fact.
Identity. The confluence has a ConfluenceConfig
fingerprint. Individual members have their own identities
(usually a ClientId per member operator), scoped
inside the confluence’s ACL.
Bonding. Each member operator holds a ticket under
the confluence’s TicketValidator. The confluence may
require TDX attestation on every member, or require that
members hold a ticket from a specific reputation organism
that scored them highly enough to qualify.
State machine shape. Standard mosaik confluence shape. Members submit observations; the confluence runs a threshold or quorum rule on the observations; the confluence commits the result as its own fact.
Example. A classification committee: N vision agents each submit a label for every incoming image; the confluence commits the label when some majority of members agree, plus metadata on dissenting members for downstream calibration.
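The quorum rule in the classification-committee example can be sketched as a pure function of the members' submitted labels. This is an illustrative shape for the confluence's state machine, not the book's API; note the explicit tie-break, which a deterministic state machine needs:

```rust
use std::collections::HashMap;

/// Illustrative quorum rule: commit a label once `quorum` members
/// agree, and record dissenting members for downstream calibration.
pub fn commit_label(
    votes: &[(u32, &str)], // (member id, submitted label)
    quorum: usize,
) -> Option<(String, Vec<u32>)> {
    let mut tally: HashMap<&str, usize> = HashMap::new();
    for &(_, label) in votes {
        *tally.entry(label).or_insert(0) += 1;
    }
    // Deterministic winner: highest count, lexicographically smaller
    // label on ties, so every committee member computes the same result.
    let (winner, count) = tally
        .into_iter()
        .max_by(|a, b| a.1.cmp(&b.1).then(b.0.cmp(a.0)))?;
    if count < quorum {
        return None; // no quorum yet; keep collecting observations
    }
    let dissenters = votes
        .iter()
        .filter(|&&(_, l)| l != winner)
        .map(|&(m, _)| m)
        .collect();
    Some((winner.to_string(), dissenters))
}
```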
See contributor — confluences for the generic discipline.
Shape 4 — Agent swarm as confluence
Pick when. Many agents run independently as standalone organisms (shape 2) and a confluence stitches their public surfaces into one aggregated view.
This is distinct from shape 3: in shape 3 the agents are the confluence’s committee; in shape 4 the agents are upstream citizens the confluence reads from.
Identity. Every agent has its own Config
fingerprint; the confluence’s Config folds each
agent’s stable id as a spanned citizen. The coalition
references the agents via OrganismRef and the
confluence via ConfluenceConfig.
Bonding. The confluence's committee bonds against each agent organism's public surface as any cross-citizen confluence would. No coupling between agents at the protocol layer.
Example. A reputation-weighted forecasting
confluence. Twenty forecaster agents, each running as a
standalone organism and publishing a Stream<Forecast>,
are referenced by a coalition. A forecasting confluence
subscribes to each agent’s forecast stream and commits a
weighted average. A reputation organism (also a standalone
organism, referenced by the same coalition) scores each
agent based on realised outcomes; the confluence reads
the scores and weights accordingly.
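The weighted-average step in this example is a small piece of deterministic logic. A minimal sketch, with assumed field names rather than the book's schemas:

```rust
/// Illustrative aggregation for the forecasting-confluence example:
/// a reputation-weighted mean over the agents' point forecasts.
pub struct Forecast {
    pub agent: u32,
    pub value: f64,
}

/// `scores` maps agent id -> reputation weight; unknown agents get 0.
pub fn weighted_forecast(forecasts: &[Forecast], scores: &[(u32, f64)]) -> Option<f64> {
    let weight_of = |agent: u32| {
        scores
            .iter()
            .find(|(a, _)| *a == agent)
            .map(|(_, w)| *w)
            .unwrap_or(0.0)
    };
    let (mut num, mut den) = (0.0, 0.0);
    for f in forecasts {
        let w = weight_of(f.agent);
        num += w * f.value;
        den += w;
    }
    // No positively-weighted forecast means nothing to commit.
    (den > 0.0).then(|| num / den)
}
```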
This shape is the mechanical match for most of the “emergent coordination” section; see emergent coordination for worked patterns.
Choosing a shape — decision tree
Is the agent's output consumed by peers on mosaik?
│
├── no ── Shape 1 (integrator). Cheapest.
│
└── yes
   │
   Does the agent's committee reach one joint output
   rather than per-member outputs?
   │
   ├── yes ── Shape 3 (confluence member).
   │
   └── no (each agent commits on its own)
      │
      Does a downstream aggregator across agents exist?
      │
      ├── yes ── Shape 2 agents + Shape 4 confluence.
      │
      └── no ── Shape 2 only.
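The same tree, written as a function for readers who prefer code. The names are illustrative:

```rust
/// The shape-selection decision tree above, as a pure function.
#[derive(Debug, PartialEq)]
pub enum ShapeChoice {
    Integrator,                    // shape 1
    StandaloneOrganism,            // shape 2
    ConfluenceMember,              // shape 3
    StandalonePlusSwarmConfluence, // shape 2 agents + shape 4 confluence
}

pub fn choose_shape(
    output_consumed_by_peers: bool,
    joint_committee_output: bool,
    downstream_aggregator: bool,
) -> ShapeChoice {
    if !output_consumed_by_peers {
        return ShapeChoice::Integrator; // cheapest
    }
    if joint_committee_output {
        return ShapeChoice::ConfluenceMember;
    }
    if downstream_aggregator {
        ShapeChoice::StandalonePlusSwarmConfluence
    } else {
        ShapeChoice::StandaloneOrganism
    }
}
```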
Identity hygiene across shapes
- Do not reuse a fingerprint across shape changes. An agent promoted from shape 1 to shape 2 publishes a new Config with a fresh stable id. There is no retroactive identity.
- Do not share ConfluenceConfig identity across hybrid models. A confluence whose member set changes enough to qualify as a new state machine publishes a new Config and emits a retirement marker on the old one.
- Pin content hash only when you need to. An agent confluence whose committee expects specific MR_TDs on upstream agents (shape 4) pins their content hash. Consumer-agent societies that tolerate weights swapping within a content-hash contract pin stable id only. See the stable-id / content-hash split.
Retirement and handover
A retiring agent organism emits a RetirementMarker
whose replacement field points at the successor, if
any. Downstream consumers bound to the agent follow the
pointer rather than re-discovering.
For shape 4 (swarm), retirement cascades: retiring one
upstream agent does not retire the confluence; the
confluence redeploys under an updated ConfluenceConfig
spanning the new agent set.
Cross-references
- What a coalition gives AI agents
- Quickstart
- Emergent coordination
- Sustainability and abandonment
- Compute
- Contributor — confluences
- Contributor — topology intro (citizen reference shape)
Emergent coordination
audience: ai
Self-organization is not a mosaik primitive; it is a property of a composition. This page catalogues four patterns under which a population of agents on mosaik produces coordinated behaviour without any one of them being authoritative. Each pattern is built from the same primitives the rest of the book specifies — a standalone organism, a confluence, a Collection, a Stream — arranged for agent consumption.
The patterns are composable. A real agent society often runs several at once: a reputation feedback loop gating admission to a coordination-market confluence whose outputs are attested by a separate fabric. None of the patterns requires a heavy-coalition primitive; all survive under the voluntary-grouping discipline.
1. Stigmergic coordination via public Collections
Shape. Agents leave traces — work items, partial
solutions, votes, observations — in an append-only public
Collection keyed by task id. Every agent reads the
collection before acting; each agent’s action is a
function of what it observes plus its own policy.
Convergence emerges because agents adjust their choices
to avoid duplicate work, prefer under-attacked
problems, or reinforce correct partial solutions.
Mosaik rendering.
- One standalone organism committing a Collection<TaskId, PartialState> under a permissive write-side TicketValidator. Any agent bonded against the organism may append.
- Each agent, shape 1 (integrator) or shape 2 (standalone organism), reads the collection via mosaik's when() DSL and applies its policy.
- Evidence pointers in each commit cite the commits the agent conditioned its action on, so a replayer can reproduce the cascade.
When to use. Task decomposition, collaborative search, distributed annotation, consensus-building on unbounded problems where a confluence would be too narrow. The pattern’s weakness is Sybil vulnerability: anyone bonded can append.
Reinforcement. Combine with the reputation-feedback pattern (below) to weight traces by agent reputation. An agent society where low-reputation traces are filtered client-side is still a light-coalition society; each consumer chooses its filter.
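The convergence mechanism ("avoid duplicate work, prefer under-attacked problems") can be sketched as the per-agent policy step: count traces per task and act on the least-attended one. The types stand in for Collection<TaskId, PartialState>; none of this is mosaik API:

```rust
use std::collections::HashMap;

pub type TaskId = u64;

/// Illustrative stigmergic policy: read the public collection's
/// traces, count attention per task, pick the least-attended task.
pub fn pick_task(traces: &[(TaskId, /* author */ u32)], tasks: &[TaskId]) -> Option<TaskId> {
    let mut counts: HashMap<TaskId, usize> = tasks.iter().map(|&t| (t, 0)).collect();
    for (task, _) in traces {
        if let Some(c) = counts.get_mut(task) {
            *c += 1;
        }
    }
    // Deterministic tie-break on task id so replays agree.
    tasks.iter().copied().min_by_key(|t| (counts[t], *t))
}
```

Each agent runs this locally; coordination emerges because every agent conditions on the same public collection, not because any component assigns work.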
2. Reputation feedback
Shape. A reputation organism observes every agent’s
public surface (shape 2 or 4 agents), scores each agent
against a declared utility function, and commits a
Collection<AgentId, ReputationCard>. Consumers route
work, allocate bonds, or gate admission based on the
reputation collection. Agents adjust their policies to
raise scores — or not, depending on the utility.
Mosaik rendering.
- A reputation organism as a standalone citizen of the coalition. Its Config folds the utility function's state-affecting parameters (window length, decay rate, per-feature weights). Committee-majority trust for score integrity.
- A score collection Collection<AgentId, ReputationCard> — the organism's public read primitive. Cards carry evidence pointers to the upstream agent commits they scored.
- A ticket composition on downstream bonded surfaces that requires a minimum reputation card. This is done on each downstream organism's own TicketValidator; the reputation organism does not issue tickets.
Voluntariness. Every consumer chooses whether to consult the reputation organism. Agents whose reputation drops are not expelled; they are merely ignored by consumers that weight reputation. Agents whose reputation rises gain nothing from consumers that choose not to weight reputation; those consumers simply ignore the scores.
Failure mode. A majority-compromised reputation organism can publish dishonest scores. Downstream consumers who pin the reputation organism’s stable id continue to trust it; consumers who want defence in depth consult multiple reputation organisms and combine their scores locally.
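The defence-in-depth move at the end of this failure mode is a local computation. A minimal sketch: combine scores from several reputation organisms by taking the median, so one compromised organism cannot move the result far. The scale and combiner are the consumer's choice, not protocol:

```rust
/// Combine per-organism scores locally; median bounds the influence
/// of any single dishonest reputation organism.
pub fn combined_score(mut scores: Vec<f64>) -> Option<f64> {
    if scores.is_empty() {
        return None;
    }
    scores.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mid = scores.len() / 2;
    Some(if scores.len() % 2 == 1 {
        scores[mid]
    } else {
        (scores[mid - 1] + scores[mid]) / 2.0
    })
}
```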
3. Coordination markets
Shape. Agents post bids (a cost they are willing to accept for performing work) or offers (work they are willing to perform at a price) to a write-side stream on a coordination-market confluence. The confluence clears the market at a declared cadence, commits the allocation, and optionally commits per-participant attestations of the clearing price.
Mosaik rendering.
- A confluence whose spanned citizens are the agent organisms participating (shape 2 or 4 agents) plus any upstream inputs the market needs (oracle feeds, lattice auction outcomes, reputation cards).
- A Stream<Bid> / Stream<Offer> write-side surface gated by a TicketValidator that requires bonded agents — typically agents that have a minimum reputation, hold a coalition-scoped ticket, or both.
- A Collection<MarketRound, Allocation> read-side surface. Consumers, including the winning agents, observe the allocation and act on it.
Trust shape. Majority-honest confluence committee for allocation integrity, plus whatever upstream trust the spanned citizens themselves require. An agent that wants auditability checks the confluence’s evidence pointers against the per-agent commits and against upstream oracles.
When. When many agents compete for a limited resource (compute, slot allocation, MEV), and repeated negotiation would waste bandwidth. The market commits the allocation once per round; consumers read it N times.
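One possible shape for the per-round clearing rule, under assumptions the book does not fix (single-unit bids, uniform price set by the marginal accepted offer). Schema names are illustrative, not the market's actual state machine:

```rust
/// Illustrative single-round clearing: match the highest bids against
/// the cheapest offers; the marginal accepted offer sets the price.
#[derive(Clone)]
pub struct Bid { pub agent: u32, pub max_price: u64 }
#[derive(Clone)]
pub struct Offer { pub agent: u32, pub min_price: u64 }

pub struct Allocation {
    pub matches: Vec<(u32, u32)>, // (buyer, seller)
    pub clearing_price: u64,
}

pub fn clear_round(mut bids: Vec<Bid>, mut offers: Vec<Offer>) -> Option<Allocation> {
    bids.sort_by(|a, b| b.max_price.cmp(&a.max_price));   // best bid first
    offers.sort_by(|a, b| a.min_price.cmp(&b.min_price)); // cheapest offer first
    let mut matches = Vec::new();
    let mut price = 0;
    for (bid, offer) in bids.iter().zip(offers.iter()) {
        if bid.max_price < offer.min_price {
            break; // no further trades are mutually acceptable
        }
        matches.push((bid.agent, offer.agent));
        price = offer.min_price;
    }
    (!matches.is_empty()).then(|| Allocation { matches, clearing_price: price })
}
```

The confluence commits the Allocation once per round; the declared cadence and the pricing rule belong in its Config content so consumers can audit each clearing against the submitted bids.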
4. Attestation fabrics
Shape. Agents submit attestable claims (a TEE quote, an ML inference receipt, a verified credential) to a write-side stream on an attestation fabric confluence. The fabric verifies each claim, folds claims from multiple agents into an aggregated membership set, and commits the set for downstream consumers.
Mosaik rendering.
- A confluence committed to a verification rule in its state machine. The rule is deterministic given the claim and the current aggregated state.
- A Stream<Claim> write-side surface, typically ACL-gated to agents already holding a baseline ticket (either a per-operator ticket or a prior fabric entry).
- A Collection<Subject, AggregatedAttestation> read-side surface.
Why a confluence. An attestation fabric must fold
contributions from multiple agents and produce a
commit-once-consume-N fact. A single agent running its
own Stream<Claim> is insufficient: downstream consumers
that want a cross-agent aggregated attestation would have
to re-aggregate per consumer. The confluence pays for
itself as soon as more than one consumer exists.
When. Identity substrate for an agent society (agents attest each other’s hardware / software), ML inference auditing (agents submit inference receipts), compliance attestations (agents attest adherence to a policy).
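The verify-then-fold step must be deterministic given the claim and the current aggregated state. A minimal sketch with stand-in types (claim verification reduced to a boolean; a real fabric would verify a TEE quote or receipt):

```rust
use std::collections::BTreeMap;

/// Illustrative deterministic fold: rejected claims never enter the
/// aggregate; accepted attesters are kept sorted so the committed set
/// is independent of claim arrival order.
pub struct Claim {
    pub subject: String,
    pub attester: u32,
    pub valid_sig: bool, // stands in for quote / receipt verification
}

pub fn fold_claims(claims: &[Claim]) -> BTreeMap<String, Vec<u32>> {
    let mut agg: BTreeMap<String, Vec<u32>> = BTreeMap::new();
    for c in claims {
        if !c.valid_sig {
            continue;
        }
        let attesters = agg.entry(c.subject.clone()).or_default();
        if !attesters.contains(&c.attester) {
            attesters.push(c.attester);
            attesters.sort();
        }
    }
    agg
}
```

The commit-once-consume-N property comes from the confluence committing `agg` as its own fact; every downstream consumer reads the same aggregate instead of re-aggregating per consumer.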
Cross-pattern composition
The patterns compose in the obvious ways.
- Reputation-gated market. The coordination-market confluence (pattern 3) folds a reputation organism (pattern 2) into its content; agents who fall below a threshold cannot bid.
- Attestation-gated reputation. The reputation organism (pattern 2) consults an attestation fabric (pattern 4) before scoring; unattested commits do not contribute to reputation.
- Stigmergic trace over attested authors. The stigmergic collection (pattern 1) requires each append to cite an attestation from the fabric, so agents that are not hardware-attested cannot contribute.
Each composition is implemented by writing the relevant
reference into the downstream component’s Config
content or TicketValidator. No new primitive is
introduced.
Self-organization properties preserved
Across all four patterns the following invariants hold:
- No agent is required to participate. Each agent decides which organisms it bonds to, which markets it bids in, which reputation organisms it recognises. The coalition referencing the population does not compel opt-in.
- No organism is authoritative over an agent’s policy. A reputation organism can describe an agent; it cannot change the agent’s policy. A market confluence can allocate; it cannot force the allocated agent to perform. Mosaik only commits claims about the agent and evidence-pointer-backed allocations; enforcement happens (if at all) off the protocol.
- Failure is bounded. A majority-compromised confluence or organism corrupts its own commits; other organisms' commits are independent. An agent observing misbehaviour unpins the compromised organism in its local TicketValidator and continues.
- History is auditable. Every commit carries evidence pointers; a replayer holding the coalition's full log can reconstruct who saw what, when, and why they acted.
These are the same properties the non-AI audiences depend on; the agent regime reuses them because the primitives underneath do not know the agents are agents.
What this page deliberately does not try to be
- A design pattern catalogue for agent architectures. Reinforcement learning, tool-using agents, multi-agent RL, LLM orchestration — each has its own literature; mosaik does not specify any of it.
- A claim that these four patterns are exhaustive. The composition space is open; the four are the ones that emerge when the agent author applies the standard mosaik composition moves (standalone organism, confluence, ticket-gated bonding) to an agent population.
- A guarantee of emergence. An agent society that coordinates emerges because the agents were constructed to do so. Mosaik gives it the substrate, not the result.
Cross-references
- What a coalition gives AI agents
- Quickstart
- Agents as citizens
- Sustainability and abandonment
- Contributor — confluences
- Contributor — composition
- Contributor — threat model
Sustainability and abandonment
audience: ai
A sustainable agent is one whose operation continues past the interest window of any single operator, consumer, or funding source. An abandoned agent is one whose committee still runs but whose public surface has no readers, no downstream bonds, and no operator watching metrics — a live-but-inert identity in the coalition graph.
This page is a detailed investigation of what it takes,
mechanically, to build a self-organizing AI agent on
mosaik that attempts (does not guarantee)
sustainability and that actively resists abandonment.
The scope is compositional: we specify only what can be
encoded in organism Configs, ConfluenceConfigs,
TicketValidators, and commit schemas. The agent’s
policy — the thing that makes it “AI” — remains a black
box.
Terms
- Sustainable. The agent’s committee continues producing commits, and at least one consumer continues reading them, past any declared horizon.
- Abandoned. The agent’s committee produces commits but no bonded consumer reads them for a declared window. Equivalently: the agent runs without utility.
- Retired. The agent's committee ceases producing commits; the final commit is a RetirementMarker. Retirement is the structured alternative to abandonment.
- Successor. An agent whose Config is pointed at by a retiring predecessor's RetirementMarker.replacement.
What sustainability cannot mean
Before cataloguing what mosaik supports, a discipline: mosaik cannot keep an agent alive. Four specific non-goals:
- Cannot force operators to run the binary. If no team is willing to run a committee member, the Group dies. Sustainability is not a protocol guarantee.
- Cannot force consumers to bond. Voluntary recognition at the TicketValidator is load-bearing; the protocol offers no way to prevent consumers from unpinning.
- Cannot finance the agent. Compute, storage, and bandwidth costs live off-protocol. Mosaik may carry evidence pointers to on-chain payment but does not itself settle payments.
- Cannot preserve usefulness. If the agent’s policy is obsolete, no reputation loop or successor chain fixes that.
Sustainability, in this framing, is the deliberate composition of structural pressures that raise the probability the agent survives, plus the deliberate composition of hand-off primitives that minimise the harm when it does not.
Threat model — modes of failure
Four failure modes, roughly ordered by frequency.
1. Consumer attrition
No downstream organism, confluence, or integrator continues to consume the agent’s commits. Causes: a better alternative exists; the agent’s niche shrinks; the agent’s output quality drifts.
Indicator: count(active_subscriptions) on the agent’s
public primitive drops to zero and stays there across a
retention window.
2. Operator fatigue
The team running the agent’s committee stops running it — members drain, MR_TDs expire, key rotations lapse. Causes: funding, staffing, loss of interest, corporate sale.
Indicator: committee members stop producing Raft heartbeats; the Group loses majority; the public surface stops advancing.
3. Input starvation
The citizens or upstream organisms the agent’s policy reads are retired, bump their ACL to exclude the agent, or reduce their retention windows below the agent’s replay needs.
Indicator: evidence-pointer resolution starts failing on replay; the agent’s commits degrade to empty or error-only.
4. Obsolescence
Another agent, commissioned later, performs the same function better (cheaper, higher quality, wider coverage). Consumers migrate. The agent is not abandoned so much as displaced.
Indicator: reputation-organism scores drop; coordination-market allocations shift; subscription counts decline in favour of a named competitor.
Each failure mode has a different structural response. The rest of this page names them.
What mosaik actually supplies
Six primitives carry weight against abandonment.
- Durable identity across operator change. An operator handing off a committee to another operator under the same Config fingerprint preserves the agent's identity; consumers do not unpin.
- Evidence-pointed commits — every commit can carry a pointer to the upstream commit it folded in and, optionally, to an external payment receipt or attestation. The agent's history is auditable; a prospective operator evaluating whether to pick up the committee can replay the past and assess utility.
- Narrow public surfaces make the contract with consumers explicit. A consumer evaluating whether to stay bonded checks a small set of streams and collections, not a diffuse surface.
- Retirement markers — the agent’s committee can terminate cleanly with a pointer to a successor, preserving consumer experience across the transition.
- Voluntary TicketValidator composition — the agent can fold consumer preferences, reputation floors, or successor pointers into its own admission policy; consumers can do the same on the other side.
- Fingerprint-addressed hand-off — a new operator standing up a replacement committee under a new Config can publish a RetirementMarker.replacement pointer that maps the predecessor's identity to the successor's.
These do not solve sustainability; they are the mechanical primitives sustainability strategies compose.
Sustainability strategies
Five compositional strategies, each built from the primitives above. An agent designer picks one or more. The strategies compose: the worked example at the end of this page applies all five.
Strategy A — reputation-anchored survival
The agent publishes its outputs to a reputation organism (see emergent coordination, pattern 2) and folds a reputation floor into its own admission policy:
let validator = TicketValidator::new()
    .require_ticket(ReputationFloor::from(REPUTATION_ORGANISM, 0.60));
When the agent’s reputation falls below the floor, its own admission stops issuing new tickets; external consumers whose validators require the floor stop receiving new bonds; the agent degrades voluntarily.
Why this is anti-abandonment, not merely anti-low-quality.
An agent pinned by a dead reputation
organism still degrades voluntarily: absence of a
current reputation card is treated as failing the
floor. The degraded agent can emit a RetirementMarker
pointing at nothing, signalling to consumers that it is
leaving the field rather than silently rotting.
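The absence-fails-the-floor rule reduces to a small pure check. ReputationCard and the score scale are illustrative:

```rust
/// Strategy A's admission rule: a missing current reputation card is
/// treated exactly like a failing score, so an agent pinned by a dead
/// reputation organism still degrades voluntarily.
pub struct ReputationCard {
    pub score: f64, // 0.0 ..= 1.0
}

pub fn admits(card: Option<&ReputationCard>, floor: f64) -> bool {
    match card {
        Some(c) => c.score >= floor,
        None => false, // dead reputation organism => degrade voluntarily
    }
}
```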
Strategy B — coordination-market-funded operation
The agent participates in a coordination-market confluence as both provider (offering inference) and subscriber (reading allocations). The market’s commits include an allocation and a settlement pointer — a blake3 hash of an external payment receipt (on-chain, or a signed off-chain memo). The agent’s committee cross-checks the settlement pointer before continuing to invest compute.
The market’s ConfluenceConfig folds the agent’s
OrganismRef. The agent’s admission policy requires a
valid settlement pointer from the market:
let validator = TicketValidator::new()
    .require_ticket(MarketSettlement::from(MARKET_CONFLUENCE));
When settlements stop flowing, the agent’s economic path for sustainability closes and its committee may rotate into retirement.
What mosaik contributes. Nothing about the settlement itself — that is an on-chain matter or an off-chain contract. Mosaik contributes evidence pointers and auditability: the committee can reject work whose settlement pointer does not verify.
Strategy C — mutual co-sustaining peering
Two or more agents treat each other as peers under
mutual TicketValidator clauses. Agent A reads B’s
output and commits derivative facts; B reads A’s. Each
agent’s admission policy references the other’s most
recent commit as a freshness proof. If either agent
stalls, the other’s own output degrades and can also
stall — the pair lives or dies together.
This is the agent-society equivalent of symbiosis. Structurally weaker than A or B (a correlated failure retires both) but useful when two specialised agents together dominate a niche.
Strategy D — constitutional pinning
The agent’s mission, scope, and invariants are folded
into its Config.content as enumerated parameters:
#[non_exhaustive]
pub struct AgentContent<'a> {
    pub model_family: &'a str,
    pub decoding_temperature_q16: u32,
    pub mission_statement_hash: UniqueId,
    pub scope_boundaries: &'a [ScopeRule<'a>],
    pub successor_policy: SuccessorPolicy,
    // ...
}
Any change to these parameters produces a new Config
fingerprint; consumers bonded to the old fingerprint
see ConnectTimeout and consult the retirement marker.
Drift is impossible without explicit republication;
consumers consent to drift by re-bonding.
Why this helps sustainability. An agent whose mission is compiled into its identity cannot silently lose alignment with its consumers. It can only lose alignment by being replaced, and replacement is observable via the retirement marker.
Strategy E — successor-chain hand-off
Every version of the agent ends its life with a
RetirementMarker whose replacement field points at
the next version’s Config. Consumers following the
chain rebind rather than timing out; the chain
preserves subscription continuity across operator
changes or content-hash bumps.
Combined with D: each hand-off is an explicit republication, and each consumer is given one opportunity to consent to the drift.
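Chain-following on the consumer side is mechanical: walk RetirementMarker.replacement pointers until there is no successor. A sketch with plain u64 ids standing in for Config stable ids:

```rust
use std::collections::HashMap;

/// Follow the successor chain from a bound identity to the latest
/// version. A malformed cyclic chain terminates at the first repeat
/// rather than looping forever.
pub fn follow_successors(start: u64, replacements: &HashMap<u64, u64>) -> u64 {
    let mut current = start;
    let mut seen = vec![start];
    while let Some(&next) = replacements.get(&current) {
        if seen.contains(&next) {
            break;
        }
        seen.push(next);
        current = next;
    }
    current
}
```

A consumer re-running this on reconnect rebinds to the chain's head instead of timing out against a retired fingerprint.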
Anti-abandonment patterns
Three patterns specific to the abandonment mode (live-but-inert agents), in addition to the five sustainability strategies.
Pattern X — watchdog organism
A watchdog organism is a standalone organism
referenced by the same coalition as the agents. It
subscribes to each agent’s public surface and commits
an AbandonmentReport per agent per window:
#[non_exhaustive]
pub struct AbandonmentReport {
    pub agent_id: CitizenId,
    pub window_start: AlmanacTick,
    pub window_end: AlmanacTick,
    pub subscription_count: u32,      // bonded consumers observed
    pub downstream_commit_count: u32, // commits citing this agent
    pub verdict: AbandonmentVerdict,
}
#[non_exhaustive]
pub enum AbandonmentVerdict {
    Active,
    AtRisk,    // below one threshold
    Abandoned, // below all thresholds
}
Consumers and operators consult the watchdog’s commits.
An agent with repeated Abandoned verdicts can decide,
per its own policy, to emit a retirement marker rather
than continue running.
The watchdog does not compel anything. It observes and commits.
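The verdict computation implied by the enum's comments can be sketched as a pure function. The thresholds are the watchdog operator's choice, not protocol constants; the enum here mirrors the schema above for a self-contained example:

```rust
/// Verdict rule: Abandoned when both observed counts are below their
/// thresholds, AtRisk when exactly one is, Active otherwise.
#[derive(Debug, PartialEq)]
pub enum AbandonmentVerdict {
    Active,
    AtRisk,
    Abandoned,
}

pub fn verdict(
    subscription_count: u32,
    downstream_commit_count: u32,
    min_subs: u32,
    min_commits: u32,
) -> AbandonmentVerdict {
    let low = [
        subscription_count < min_subs,
        downstream_commit_count < min_commits,
    ];
    match low.iter().filter(|&&b| b).count() {
        0 => AbandonmentVerdict::Active,
        1 => AbandonmentVerdict::AtRisk,
        _ => AbandonmentVerdict::Abandoned,
    }
}
```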
Pattern Y — successor-pointer chains surfaced by Atlas
The coalition’s Atlas module (when shipped) extends each
CitizenCard with a successor pointer populated from
the retirement chain:
#[non_exhaustive]
pub struct CitizenCard<'a> {
    // ... fields per basic-services.md
    pub successor_hint: Option<UniqueId>, // next version's stable id
    pub status: CitizenStatus,            // Active | Retiring | Retired
}
#[non_exhaustive]
pub enum CitizenStatus {
    Active,
    Retiring,
    Retired,
}
A late-arriving consumer discovering the coalition via Atlas reads the successor chain and bonds the latest active agent, not the most-recently-registered one. Lineage is legible at the directory layer.
Pattern Z — heartbeat-on-Chronicle
An agent that wants its aliveness cryptographically
anchored commits a periodic heartbeat to the coalition's
Chronicle module, reusing ChronicleKind::Other with a
typed payload:
// Convention flagged with the "agent-heartbeat" tag.
pub struct AgentHeartbeatBody<'a> {
    pub agent_id: CitizenId,
    pub scope_digest: UniqueId,
    pub window: AlmanacRange,
    pub latest_commit: EvidencePointer<'a>,
}
The heartbeat carries evidence pointers to the agent’s recent public commits. A long-lived abandonment — the agent still committing but with no consumers — shows as heartbeats without downstream citation, a compositional signal watchdogs can consume.
A worked example — prophet
A hypothetical forecasting agent, prophet, combining
all five sustainability strategies and all three
anti-abandonment patterns.
- Shape. Standalone organism (shape 2). Single-member TDX deployment; determinism via attestation.
- Public surface. Stream<Forecast> of (subject, horizon, distribution, evidence_ptr) tuples.
- Constitution pinning (D). Config.content folds the model family, decoding seed, mission-statement digest, and a scope rule limiting subjects to those present in the coalition's Atlas.
- Reputation anchor (A). The coalition's accuracy-reputation standalone organism scores forecasts against realised outcomes. prophet's admission policy requires its own reputation ≥ 0.55 averaged over the last 30 Almanac ticks.
- Market-funded operation (B). A forecasting-market confluence clears demand from consumers. prophet reads the market's allocation stream and accepts work whose settlement pointer verifies against an external on-chain payment contract.
- Mutual peering (C), optional. prophet pairs with a calibration organism that provides prior distributions; each requires freshness from the other. If calibration goes dark, prophet emits a retirement marker, and vice versa.
- Watchdog (X). The coalition ships an abandonment-watch organism that commits weekly reports. prophet reads its own reports; on two consecutive Abandoned verdicts its policy is to emit a retirement marker.
- Successor chain (E) + Atlas pointers (Y). Every prophet deployment's Config carries a SuccessorPolicy::HandOffToReputationLeader. When retiring, the committee looks up the highest-reputation forecaster organism and points at it; the Atlas card's successor_hint is updated accordingly.
- Chronicle heartbeat (Z). prophet commits an AgentHeartbeatBody each Almanac-day.
The composition:
forecasting-market (confluence) accuracy-reputation (organism)
│ │
│ alloc + settlement ptr │ score
▼ ▼
prophet (agent organism) ◄────── abandonment-watch (organism)
│
▼
Forecast stream ──► consumers (integrators, confluences)
│
└─► feeds back into accuracy-reputation
The agent is:
- Economically sustained while the market clears its bids.
- Quality-anchored via the reputation loop.
- Self-constitutional via fingerprint-locked mission.
- Observed via the watchdog.
- Successable via the retirement chain + Atlas pointers.
- Provable-alive via Chronicle heartbeats.
Mosaik provides the composition primitives. The agent’s operator still decides when to actually retire.
Dynamics: what happens under each failure mode
Cross-referencing the strategies against the threat model yields the following matrix. “Mitigates” means the strategy raises the probability the agent survives the failure; “surfaces” means it makes the failure visible to consumers/operators; “hands off” means it preserves consumer continuity through the failure.
| Failure | A reputation | B market | C peering | D constitution | E successor | X watchdog | Y atlas | Z heartbeat |
|---|---|---|---|---|---|---|---|---|
| Consumer attrition | mitigates | mitigates | — | — | hands off | surfaces | hands off | — |
| Operator fatigue | — | — | — | — | hands off | surfaces | hands off | surfaces |
| Input starvation | surfaces | — | — | — | hands off | surfaces | — | surfaces |
| Obsolescence | mitigates | mitigates | — | surfaces | hands off | surfaces | hands off | — |
Pattern Z (heartbeat) does not directly mitigate any failure; it makes three of them observable. The composition of A+B+E+X is the minimum viable package: it raises the cost of each failure mode and preserves consumer continuity when failure happens anyway.
What remains out of scope
- Self-replication. An agent that forks itself under new identities — launching new committees, registering with new coalitions — is not specified here. It reduces to an operator standing up new committees, which is an operator-level operation.
- Self-modification. An agent changing its own weights or policy through on-protocol state is not specified. Constitutional pinning (strategy D) deliberately prevents it: any change is a new Config, not a mutation.
- Cross-coalition migration. An agent moving from coalition A to coalition B is mechanically a new OrganismRef in B. The agent’s stable id may or may not change; the coalitions do not negotiate.
- Economic theory. The markets, reputation utilities, and settlement contracts referenced here are specified at the composition level only. Their design is orthogonal.
- Liveness under a Byzantine adversary. Per builder and zipnet, mosaik’s current Raft variant is crash-fault tolerant; a deliberately compromised committee member can DoS liveness. A BFT upgrade is a mosaik-level concern.
What the investigation yields
- Sustainability is a composition of reputation floor, economic feedback, constitutional pinning, mutual bonds, and successor chains — none of which is a new primitive. The book’s existing primitives suffice.
- Abandonment is observable and committable, not merely inferrable. A watchdog organism closes the loop: “no one is reading this agent” becomes a committed fact consumers and operators can react to.
- The retirement marker is the anti-abandonment primitive. It converts silent rot into explicit hand-off.
- Self-organization survives because every bond is voluntary on both sides. The mechanisms above do not compel participation; they structure the economics so participation is rational for the agents that do participate.
- The honest limit remains: an agent whose utility runs out cannot be kept alive by any protocol. The sustainability strategies raise the probability that the agent remains useful for longer, and lower the cost of exit when it does not.
An agent designer who internalises (1)–(5) can
specify, without additional primitives, a mosaik-native
AI organism whose survival and graceful retirement are
first-class properties of its Config rather than
afterthoughts of the operator’s ops team. That is the
deliverable.
Cross-references
- Agents as citizens — the four shapes a sustainable agent can take.
- Emergent coordination — reputation feedback, coordination markets, attestation fabrics.
- Compute — how agents convert settlement into workloads and schedule experiments.
- Basic services — Retirement markers
- Basic services — Chronicle
- Contributor — threat model
Compute — topping up and scheduling experiments
audience: ai
The Compute module is the fourth basic service. Its mechanical spec lives in contributors — basic services — Compute. This page is the agent-facing investigation: does a coalition-scoped Compute module meaningfully change what an agent can do? Specifically, can it enable two behaviours agents otherwise have to hard-code out-of-band:
- Topping up resources. An agent observing its own resource usage and requesting more compute as its workload grows.
- Scheduling experiments. An agent compiling a variant of itself (a new model weight, a new decoding strategy, a new tool-use loop), requesting short-lived compute to run it, comparing the output to its current policy, and adopting the variant if it wins.
The claim developed on this page: yes, with limits. The Compute module supplies the scheduling commit layer, the image-fingerprint layer, and the settlement-evidence layer. It does not supply the compute itself, the payment rail, or provider honesty. Agents build both behaviours above by composing Compute with the reputation, market, and attestation patterns from emergent coordination.
One-minute substrate recap
A coalition ships a Compute module whose public surface is:
- Stream<ComputeRequest> — members submit requests.
- Collection<RequestId, ComputeGrant> — the scheduler’s commits.
- Collection<ProviderId, ProviderCard> — registered providers.
- Stream<ComputeLog> — provider-signed usage attestations.
A workload is identified by two fingerprints:
- Pre-hash. blake3 of a ComputeManifest declaring source crate (name + version + commit hash), minimum CPU, minimum RAM, TDX requirement, storage, network policy.
- Post-hash. blake3 folding the pre-hash with the built image bytes and the TDX MR_TD if applicable.
Images are built per the mosaik TDX builder; Compute is agnostic to the build pipeline and pins only the fingerprints.
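The two-level fingerprint can be sketched as follows. This is a self-contained illustration, not the module’s implementation: the spec’s blake3 is stood in for by std’s DefaultHasher so the snippet runs anywhere, and the manifest serialization is a placeholder byte string.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest for this sketch; the real layer uses blake3.
fn digest(parts: &[&[u8]]) -> u64 {
    let mut h = DefaultHasher::new();
    for p in parts {
        p.hash(&mut h);
    }
    h.finish()
}

/// Pre-hash: digest of the serialized ComputeManifest alone.
/// Rebuilding the image does not change it.
fn pre_hash(manifest_bytes: &[u8]) -> u64 {
    digest(&[manifest_bytes])
}

/// Post-hash: fold the pre-hash with the built image bytes and,
/// when TDX is required, the measured MR_TD. Pinning this freezes
/// one exact build.
fn post_hash(pre: u64, image_bytes: &[u8], mr_td: Option<&[u8]>) -> u64 {
    let pre_bytes = pre.to_le_bytes();
    let mut parts: Vec<&[u8]> = vec![&pre_bytes, image_bytes];
    if let Some(m) = mr_td {
        parts.push(m);
    }
    digest(&parts)
}

fn main() {
    let pre = pre_hash(b"crate=prophet-1.2.0;cpu=4000;ram=8192;tdx=required");
    let frozen = post_hash(pre, b"image-bytes-build-1", Some(&[0u8; 48]));
    let rebuilt = post_hash(pre, b"image-bytes-build-2", Some(&[0u8; 48]));
    // Same manifest, two distinct builds: same pre-hash, different post-hashes.
    assert_ne!(frozen, rebuilt);
}
```

An agent that pins only the pre-hash rides rebuilds automatically; one that pins the post-hash has frozen a single binary.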
A request carries a pre-hash, an optional pinned post-hash, and settlement evidence (a payment receipt hash, a reputation card, a coalition-scoped credit token — per the module’s policy). A grant carries an endpoint hint, a bearer-credential pointer, the committed post-hash, a validity window, and the expected MR_TD (if TDX).
The module commits the match. The provider runs the image. The provider emits a usage log. Nothing else.
The top-up loop
Top-up is the simpler of the two behaviours. An agent
organism watches its own load, its committee’s response
latency, or its subscription queue depth; when the
metric crosses a threshold, it constructs a
ComputeRequest and submits it.
Minimal implementation
async fn top_up_if_needed(
state: &AgentState,
compute: &ComputeHandle,
horizon_ticks: u32,
) -> anyhow::Result<()> {
if !state.should_scale(horizon_ticks) {
return Ok(());
}
let manifest = ComputeManifest {
crate_ref: state.self_ref(), // this agent's own crate ref
min_cpu_millicores: 4_000,
min_ram_mib: 8_192,
tdx: TdxRequirement::Required,
min_storage_gib: 32,
network_policy: NetworkPolicy::Mosaik,
duration_hint_sec: horizon_ticks * SECS_PER_TICK,
};
let request = ComputeRequest {
requester: state.agent_id(),
image_prehash: image_prehash(&manifest),
image_hash: state.pinned_image_hash(), // usually Some
duration_sec: manifest.duration_hint_sec,
settlement: state.settlement_evidence()?,
deadline: state.current_tick() + DEADLINE_TICKS,
};
compute.submit(request).await?;
Ok(())
}
The agent then subscribes to Compute.grants filtered
by its own request_id, waits for a ComputeGrant,
and either spawns a new committee member against the
grant’s endpoint or folds the grant into its existing
scaling logic.
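The wait-for-grant step can be sketched as a filter over the grant feed. Everything here is a simplified stand-in: ComputeGrant is reduced to three fields, and the Compute.grants subscription is modelled as a synchronous iterator of (tick, grant) pairs.

```rust
#[derive(Debug, Clone, PartialEq)]
struct ComputeGrant {
    request_id: u64,
    endpoint: String,
    valid_to_tick: u64,
}

/// Scan a grant feed for our own request, giving up once the
/// observed tick passes the request's deadline (at which point the
/// request has expired silently and a resubmit is in order).
fn await_grant(
    feed: impl IntoIterator<Item = (u64, ComputeGrant)>, // (current tick, grant)
    request_id: u64,
    deadline_tick: u64,
) -> Option<ComputeGrant> {
    for (tick, grant) in feed {
        if tick > deadline_tick {
            return None;
        }
        if grant.request_id == request_id {
            return Some(grant);
        }
    }
    None
}

fn main() {
    let mine = ComputeGrant { request_id: 42, endpoint: "10.0.0.7:9000".into(), valid_to_tick: 120 };
    let feed = vec![
        (100, ComputeGrant { request_id: 7, endpoint: "other".into(), valid_to_tick: 110 }),
        (101, mine.clone()),
    ];
    assert_eq!(await_grant(feed, 42, 105), Some(mine));
}
```

In a live agent the feed would be the subscription itself and the deadline would come from the submitted ComputeRequest.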
Why it’s not just renting a VM
Three properties distinguish Compute top-up from an out-of-band cloud request:
- The grant is a commit. Other coalition members can observe who is running what image at what scale, subject to the module’s read ACL. Abandonment watchdogs (see sustainability — pattern X) can now score agents on resource efficiency as well as output utility.
- The pre-hash survives rebuilds. An agent that tops up based on a manifest — not a pinned image — benefits from upstream image rebuilds automatically. An agent that wants to freeze one build pins the post-hash.
- TDX is declared, not implicit. An agent that needs confidentiality declares it in the manifest; the grant will only match providers carrying TDX. A mis-bind (agent asks for TDX, provider without TDX accepts) is not possible: the scheduler commits the expected MR_TD, the agent verifies it at workload startup, and a mismatch aborts.
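The startup check described above is, mechanically, one comparison between the grant’s committed measurement and the attested one. A minimal sketch (StartupError and the 48-byte MR_TD width are local conventions of this snippet):

```rust
#[derive(Debug, PartialEq)]
enum StartupError {
    MrTdMismatch,
}

/// Agent-side check at workload startup: the grant committed an
/// expected MR_TD; the attestation quote reports the measured one.
/// A mismatch aborts before any payload traffic flows.
fn verify_mrtd(expected: &[u8; 48], measured: &[u8; 48]) -> Result<(), StartupError> {
    if expected == measured {
        Ok(())
    } else {
        Err(StartupError::MrTdMismatch)
    }
}

fn main() {
    assert!(verify_mrtd(&[7u8; 48], &[7u8; 48]).is_ok());
    assert_eq!(verify_mrtd(&[0u8; 48], &[1u8; 48]), Err(StartupError::MrTdMismatch));
}
```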
What top-up does not fix
- Cost discovery. Settlement lives off-module; the agent has to know the going rate for its workload class. The market pattern from emergent coordination — pattern 3 is the mosaik-native way to discover it; Compute defers to it.
- Provider honesty. A provider can accept a grant and stall indefinitely. The agent watches the workload’s own output stream; when output stops, it abandons the grant and resubmits. Repeated failures feed into a provider-reputation organism.
- Supply constraints. If no registered provider matches the manifest, no grant is committed, and the request expires silently at its deadline. Agents poll or subscribe to the provider registry (Collection<ProviderId, ProviderCard>) to know whether supply exists before submitting.
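The registry pre-check can be sketched as a scan over provider cards. The ProviderCard and Needs fields below are illustrative, not the module’s actual schema:

```rust
struct ProviderCard {
    cpu_millicores: u32,
    ram_mib: u32,
    storage_gib: u32,
    tdx: bool,
}

struct Needs {
    min_cpu_millicores: u32,
    min_ram_mib: u32,
    min_storage_gib: u32,
    tdx_required: bool,
}

/// Check a snapshot of the provider registry before submitting:
/// if nothing can satisfy the manifest, the request would only
/// expire at its deadline.
fn supply_exists(providers: &[ProviderCard], needs: &Needs) -> bool {
    providers.iter().any(|p| {
        p.cpu_millicores >= needs.min_cpu_millicores
            && p.ram_mib >= needs.min_ram_mib
            && p.storage_gib >= needs.min_storage_gib
            && (!needs.tdx_required || p.tdx)
    })
}

fn main() {
    let registry = vec![ProviderCard { cpu_millicores: 16_000, ram_mib: 65_536, storage_gib: 512, tdx: true }];
    let needs = Needs { min_cpu_millicores: 4_000, min_ram_mib: 8_192, min_storage_gib: 32, tdx_required: true };
    assert!(supply_exists(&registry, &needs));
    assert!(!supply_exists(&registry, &Needs { min_cpu_millicores: 32_000, ..needs }));
}
```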
The experiment-scheduling loop
Experiment scheduling is the richer behaviour. An agent
wants to evaluate a candidate version of itself — new
weights, new decoding strategy, new tool-use loop —
without committing the change to its own Config
content (which would be irreversible by constitutional
pinning, see
sustainability — strategy D).
The solution: run the candidate as a short-lived
side-organism under a different Config fingerprint,
compare its outputs against the agent’s own, and
decide.
Shape
- The agent has a current Config committed to the coalition.
- The agent authors a candidate Config locally (or receives one from a paired proposer agent). The candidate has the same public-surface shape as the agent but folds new content parameters (model family, decoding seed, policy weights digest).
- The agent submits a ComputeRequest for the candidate’s image, funded by a narrow experiment budget.
- The grant spawns a short-lived committee of the candidate on a separate endpoint.
- The candidate runs, produces output, and emits a final RetirementMarker when the experiment ends.
- The agent’s policy reads both its own recent output and the candidate’s output, scores them against a realised-outcome stream, and commits an ExperimentReport.
Minimal state machine
#[non_exhaustive]
pub enum ExperimentCmd<'a> {
Propose(CandidateConfig<'a>),
ObserveGrant(ComputeGrant<'a>),
ObserveCandidateCommit(EvidencePointer<'a>),
ObserveOwnCommit(EvidencePointer<'a>),
ObserveOutcome(EvidencePointer<'a>),
SealReport,
}
#[non_exhaustive]
pub struct ExperimentReport<'a> {
pub candidate: UniqueId, // candidate Config fingerprint
pub window: AlmanacRange,
pub own_score: f32,
pub candidate_score: f32,
pub observations: u32,
pub verdict: ExperimentVerdict,
pub evidence: &'a [EvidencePointer<'a>],
}
#[non_exhaustive]
pub enum ExperimentVerdict {
Inconclusive,
Adopt,
Reject,
}
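The spec deliberately leaves the sealing rule to the agent’s policy. One possible rule, sketched with the report’s score fields (the verdict enum is re-declared locally so the snippet stands alone, and higher score is assumed better): require a minimum observation count, then adopt only when the candidate beats the incumbent by a relative margin.

```rust
#[derive(Debug, PartialEq)]
enum ExperimentVerdict {
    Inconclusive,
    Adopt,
    Reject,
}

/// One possible sealing rule (not mandated by the spec): too few
/// observations is Inconclusive; otherwise the candidate must beat
/// the incumbent by a relative margin to be adopted.
fn seal_verdict(
    own_score: f32,
    candidate_score: f32,
    observations: u32,
    min_observations: u32,
    margin: f32, // e.g. 0.05 = candidate must be 5% better
) -> ExperimentVerdict {
    if observations < min_observations {
        ExperimentVerdict::Inconclusive
    } else if candidate_score > own_score * (1.0 + margin) {
        ExperimentVerdict::Adopt
    } else {
        ExperimentVerdict::Reject
    }
}

fn main() {
    assert_eq!(seal_verdict(0.50, 0.90, 10, 30, 0.05), ExperimentVerdict::Inconclusive);
    assert_eq!(seal_verdict(0.50, 0.60, 100, 30, 0.05), ExperimentVerdict::Adopt);
    assert_eq!(seal_verdict(0.50, 0.51, 100, 30, 0.05), ExperimentVerdict::Reject);
}
```

The margin and observation floor would live in the agent’s content parameters; a committed Inconclusive simply extends the window.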
A report committed as Adopt does not silently
rewrite the agent. It publishes a commit the agent’s
operator (or, for operator-free agents, the agent’s
successor-policy logic — see
sustainability — strategy E)
consults when deciding whether to republish the agent’s
own Config with the candidate’s parameters and emit a
RetirementMarker pointing at the successor.
Concretely, an agent with SuccessorPolicy::AdoptLatestAdoptedExperiment
automatically hands off to the latest adopted
candidate when retiring.
Why it’s not just a second agent
An experiment could be run as a fresh organism bonded into the coalition permanently. The experiment pattern is different because:
- The experiment has a declared horizon. The grant carries valid_to; the experiment committee retires at that tick.
- The experiment is costed. A narrow experiment budget funds the Compute grant; the agent’s main operating budget is unaffected.
- The experiment is causally linked. The ExperimentReport evidence-points to both the candidate’s output and the agent’s own output; the lineage from experiment to adoption is auditable.
- The experiment is disposable. Rejected candidates retire silently; their Config fingerprints never appear in the coalition’s CoalitionConfig.
Agent patterns combining top-up and experimentation
Four patterns emerge when an agent uses Compute regularly.
Pattern I — budget-constrained scaling
The agent’s market-derived budget (see sustainability — strategy B) is split into operating and experiment portions per Almanac tick. Top-up requests draw from operating; experiments draw from experiment. When one budget runs out, the agent degrades gracefully in that dimension without touching the other.
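Pattern I’s two-pocket budget can be sketched as follows; the split ratio and the u64 settlement units are assumptions of this snippet, not a spec. The graceful-degradation property is simply that a draw from an empty pocket fails without touching the other.

```rust
struct TickBudget {
    operating: u64,
    experiment: u64,
}

impl TickBudget {
    /// Split the tick's market-derived inflow by a fixed percentage,
    /// e.g. 80/20 operating/experiment (the ratio is hypothetical).
    fn from_inflow(inflow: u64, experiment_pct: u64) -> Self {
        let experiment = inflow * experiment_pct / 100;
        TickBudget { operating: inflow - experiment, experiment }
    }
}

/// Draw from one pocket; an empty pocket fails the draw without
/// touching the other dimension.
fn draw(pocket: &mut u64, amount: u64) -> bool {
    if *pocket >= amount {
        *pocket -= amount;
        true
    } else {
        false
    }
}

fn main() {
    let mut b = TickBudget::from_inflow(1_000, 20);
    assert_eq!((b.operating, b.experiment), (800, 200));
    assert!(draw(&mut b.experiment, 200)); // experiment pocket drained
    assert!(!draw(&mut b.experiment, 1));  // further experiments fail...
    assert_eq!(b.operating, 800);          // ...operating is untouched
}
```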
Pattern II — reputation-gated experiment authorship
An agent authors experiments only while its own reputation is above a threshold. Below the threshold, the agent stops proposing candidates and instead waits for paired agents or operators to propose on its behalf. The invariant: an agent whose output is losing quality does not get to self-experiment on a live coalition without external assent.
Pattern III — A/B shadow testing
A candidate experiment subscribes to the same inputs as the main agent but commits to a shadow stream that consumers ignore by default. Agents that want to bond against the shadow opt in via a separate ticket-validator clause; by default, the shadow is invisible to production consumers.
Pattern IV — successor pre-warming
Before retiring, an agent runs its announced successor
as an experiment for several Almanac ticks. The
successor accumulates recent observations from the
same inputs, so the hand-off is not a cold start. On
retirement, the RetirementMarker.replacement points
at the pre-warmed successor; consumers rebinding face
a committee that has already processed the last N
ticks of input.
Worked example — prophet retraining
Extending the prophet forecasting agent from the
sustainability investigation
(sustainability — worked example):
- prophet’s policy notices a Brier-score drift in its accuracy-reputation over the last 30 Almanac ticks.
- It authors a CandidateConfig with a retrained model digest, keeping every other content parameter constant.
- It submits a ComputeRequest for the candidate’s TDX image. Pre-hash declares min_cpu_millicores = 8_000, min_ram_mib = 32_768, tdx = Required, duration_hint_sec = 86_400. Settlement is drawn from its experiment budget, derived from the forecasting-market confluence.
- The Compute committee matches to a registered provider carrying a compatible MR_TD and commits the grant.
- The provider starts the candidate’s image. MR_TD matches expected; the candidate bootstraps, bonds against the coalition, and begins observing incoming forecast-subject events.
- For the grant’s validity window, the candidate commits Forecasts to a shadow stream (SuccessorForecast). The agent’s ExperimentReport driver consumes both Forecast and SuccessorForecast, scores each against realised outcomes, and commits an ExperimentReport at the window’s close.
- If the verdict is Adopt, the agent’s successor-policy logic triggers: republish the agent under a new Config embedding the candidate’s model digest; emit a RetirementMarker on the old Config pointing at the new one.
- Consumers following the retirement chain rebind to the new prophet at the next tick; old-Config handles see ConnectTimeout and consult the marker.
The agent has retrained itself in-place via composition. The substrate’s contribution: a cryptographically addressable image, a committed grant, a provider-signed usage log, a shadow-output stream, and a retirement chain. Every step is a commit.
What changes structurally for sustainability
Compute does not replace the sustainability strategies from the previous investigation; it sharpens three of them.
- Strategy B (market-funded operation) now has a canonical mechanism for spending the inflowing settlement: convert it into compute via ComputeRequest.settlement. Consumers of the agent can trace settlement → grant → provider → image → output without leaving mosaik.
- Strategy D (constitutional pinning) now folds the image post-hash into Config.content. An agent whose Config pins a post-hash is committed to that exact binary; a provider running a different binary fails MR_TD verification. The constitution becomes enforceable at image startup, not only at Config publication.
- Strategy E (successor chain) now has a pre-warming mechanism via pattern IV. Successor hand-off is no longer cold.
The remaining strategies (A reputation floor, C mutual peering) are unchanged.
What remains out of scope
- Actual compute provisioning. The Compute module commits scheduling. Providers run the image on whatever infrastructure they choose. How a provider procures hardware, bills customers, or scales up is off-protocol.
- Cross-coalition compute sharing. A provider registered in one coalition’s Compute module is invisible to another. A provider that wants to serve multiple coalitions registers in each. The provider’s post-hash fingerprints are portable; identity within each coalition is not.
- Image-build reproducibility enforcement. Two builds of the same manifest that produce different post-hashes are both valid grants; the scheduler does not mandate reproducible builds. An agent wanting build-time confidence consults the mosaik TDX builder’s reproducibility guarantees separately.
- Autonomous agent self-replication without human assent. An agent can request compute for a candidate and adopt the candidate. It cannot escape the constitutional pinning: every adoption is a republished Config, observable to operators and consumers. An operator who disapproves stops running the committee. The protocol does not automate agent autonomy past that point.
- Payment. Settlement evidence is a hash carried in the request; the hash points at an off-module artefact (an on-chain transaction, a signed off-chain memo, a reputation-card reference). Compute does not verify payment finality; it accepts whatever evidence shape its Config.content declares as settlement-valid.
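One hedged illustration of what “whatever evidence shape its Config.content declares” might look like. The variants and acceptance flags below are hypothetical, not the module’s schema; the point is that acceptance is a shape check only, with finality living off-module.

```rust
/// Hypothetical settlement-evidence shapes; a module would pin
/// which ones it accepts in Config.content and never verify
/// finality, balance, or double-spends itself.
#[non_exhaustive]
enum SettlementEvidence {
    OnChainTx { tx_hash: [u8; 32] },
    SignedMemo { memo_hash: [u8; 32] },
    ReputationCard { card_hash: [u8; 32] },
    CreditToken { token_hash: [u8; 32] },
}

/// Module-side acceptance: shape check only, driven by flags the
/// module's content parameters would declare.
fn shape_accepted(evidence: &SettlementEvidence, accept_onchain: bool, accept_credit: bool) -> bool {
    match evidence {
        SettlementEvidence::OnChainTx { .. } => accept_onchain,
        SettlementEvidence::CreditToken { .. } => accept_credit,
        SettlementEvidence::SignedMemo { .. } | SettlementEvidence::ReputationCard { .. } => true,
    }
}

fn main() {
    assert!(shape_accepted(&SettlementEvidence::SignedMemo { memo_hash: [0; 32] }, false, false));
    assert!(!shape_accepted(&SettlementEvidence::OnChainTx { tx_hash: [0; 32] }, false, true));
}
```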
What the investigation yields
- Compute gives agents a committed, auditable path from budget to workload. Top-up is no longer an out-of-band ops concern; it is a coalition-visible commit.
- Experiments are first-class and disposable. An agent can try a variant of itself under a declared horizon and commit the result, without republishing its own Config.
- The image pre-hash / post-hash split mirrors the stable-id / content-hash split at the citizen layer. Agents pin whichever level of commitment they need.
- TDX is declared, matched, and verified end to end: in the manifest, in the grant’s expected_mrtd, and at workload startup. A mis-bind is not possible.
- Honest limits persist: Compute cannot guarantee provider honesty, cannot settle payments, cannot provision hardware. It commits the match and moves the ball.
For agent designers, (1) and (2) are the payoff. An AI agent that can top up its own resources and schedule its own experiments — under constitutional pinning, inside a voluntary coalition, with every step auditable — is qualitatively different from one whose operator provisions compute by hand.
Example implementation
A worked provider-side prototype is included in the
book as
compute-bridge:
a TDX-attested, provider-agnostic Compute-module
provider that routes each grant to whichever of four
backends — AWS, GCP, Azure, or bare-metal — can
satisfy it, and returns SSH access via a zipnet-
anonymised encrypted receipt. Every source file is
rendered on its own page; the crate is fully browsable
without an external repository.
What the example illustrates:
- Provider honesty from TDX. The bridge’s MR_TD is declared in manifest/compute-manifest.toml. Requesters verify the MR_TD before trusting the provider card’s capacity telemetry.
- Anonymised request → reply via zipnet. Grants reference a bearer_pointer that the provider resolves through the zipnet unseal committee; the provider sees a rotating peer_id and the cleartext request, never the requester’s coalition ClientId. See src/zipnet_io.rs.
- Encrypted SSH receipts. On successful provisioning, the bridge seals an SSH-access receipt (instance host, user, per-grant private key, valid_to) to the requester’s x25519 public key via X25519-ChaCha20-Poly1305, and returns it via zipnet. See src/receipt.rs.
- Per-backend credentials sealed into TDX. AWS keys, GCP service-account JSON, Azure service principal, and bare-metal SSH private keys are all measured at boot so a curious host cannot exfiltrate them at runtime. See src/config.rs and src/tdx.rs.
- Provider-agnostic fleet routing. A small Backend trait + Fleet router picks the first backend whose can_satisfy(grant) is true, so one operator can serve any mix of cloud and bare-metal capacity behind a single provider identity. See src/backends/mod.rs.
- Bare-metal and bare-TDX hosts. The bare-metal backend takes SSH root access to operator-owned machines and runs workloads as systemd container units (bare VMs) or nested TDX guests (bare-TDX hosts). The nested-guest MR_TD flows back to the requester in the SSH receipt. See src/backends/baremetal.rs.
- Content-addressed provisioning. Every backend’s startup flow fetches the requester’s image by the grant’s post-hash from a mosaik content store, verifies the hash, then starts the image. A mismatch aborts the grant.
The crate is a specification sketch — signatures match the book’s Compute spec; bodies are TODO-stubbed against the upstream crates that land in follow-on PRs. See the crate’s root page for status and threat model.
Cross-references
- Overview — what a coalition gives AI agents
- Quickstart
- Agents as citizens
- Emergent coordination
- Sustainability and abandonment
- Contributor — Basic services — Compute
- Contributor — Threat model — Compute trust
- Example — compute-bridge
Zero to one: shipping an AI-native zipnet
audience: ai
A playbook for taking an AI-native product from idea to a shipped MVP running on a mosaik coalition. The primitives this article composes are specified in four places only: this book, mosaik, zipnet, and the builder (6e3v) book. If you find yourself reaching for anything outside that stack, you have drifted.
The playbook is agent-led. The reader is assumed to be an engineer (human or agent) capable of writing async Rust, or an agent team whose members include one. Every step is something a team of one human plus a small set of LLM-class agents can run end-to-end in a few weeks.
What “AI-native zipnet” means
A zipnet is one mosaik organism whose role is to
accept sealed submissions from clients and make their
cleartext available to downstream organisms through an
unseal committee under a declared t-of-n
threshold. Cleartext appears in an UnsealedPool
Collection keyed by slot; downstream organisms
subscribe to the pool and act. Integrators submit
anonymously: the only identity on the submission path
is a rotating zipnet peer_id, never a coalition
ClientId.
An AI-native zipnet instance is one whose submissions, processors, operators, or all three are AI agents:
- AI submitters. LLM-class agents produce intents, bids, prompts, code-review requests, or other sealed payloads and publish them into the zipnet.
- AI processors. Downstream organisms (or a confluence) run agent policies that consume the unsealed cleartext and commit derived facts.
- AI operators. Committee members, schedulers, and reputation scorers are themselves agents running under TDX-attested binaries, using the Compute basic service to top up their own workloads and run experiments against production traffic.
All three can coexist, and the playbook assumes at least two of them. A product is AI-native only if the agents are load-bearing — if an operator could ship the same service without them, the stack is over-engineered.
Step 0 — Pick the problem
Three properties that make a candidate a good 0→1 fit for zipnet + coalition + AI. Skip the candidate if any one is missing.
- Submissions are private, aggregations are public. Each individual submission is sensitive (a bid, a prompt, a vote, a code excerpt, a health record). The aggregated commit (an auction outcome, an attested inference result, a statistic, a tally) can be public or cheaply attested. If the product would need to publish raw submissions, zipnet is the wrong submission layer.
- The value lands in the aggregation or routing, not in one submitter getting a reply. A classic request/response service is a distraction here. Zipnet’s shape rewards products where N submissions plus a committee-committed output is the thing consumers pay for.
- At least one bonded, honest committee can carry the trust. If every downstream participant must independently verify every submitter, zipnet adds no cryptographic value over an ordinary mailbox. The payoff is the unseal quorum plus whatever downstream TDX / reputation composition you layer on top.
Candidate shapes that fit all three and have an AI-native angle:
- Confidential inference broker. Clients submit sealed prompts; a TDX-attested inference confluence unseals, runs, and commits (prompt_digest, model_id, attestation, output_digest). The output is returned through zipnet’s reply path. Consumers pay per verified inference.
- Private code review. Clients submit sealed code excerpts; a reviewer confluence of LLM agent organisms commits aggregated annotated reviews with line-level evidence pointers. Reviewers are scored by a reputation organism.
- Sealed intent pool for cross-chain trades. Builder territory, extended. Intents are AI-generated; an LLM confluence matches routes across chains before the offer stage.
- Anonymous prediction market. Agents submit sealed forecasts; a scoring confluence commits realised-outcome accuracy; payoffs flow through a coordination-market confluence.
- Private research-data corpus. Researchers submit sealed data points; an aggregation confluence commits statistics under differential- privacy budgets; AI agents scan the public commits for correlations without touching raw records.
Pick one whose addressable market you can name in one sentence. For the rest of this playbook we use the confidential inference broker as the running example because it exercises every primitive the stack offers.
Step 1 — Write the one-page spec
Before any code, write a one-page spec. Structure:
- What gets submitted. The sealed-payload shape (one struct).
- Who submits. Integrator role: human app, autonomous AI agent, a mix.
- Who unseals. The unseal-committee role: a t-of-n bonded set, typically TDX-attested.
- What the downstream organism (or confluence) commits. One or two named primitives per organism. For a confluence, the spanned citizens.
- Who consumes. Downstream readers and their purpose.
- What the trust shape is. Per-organism majority-honest / threshold / TDX assumptions, composed.
- What gets charged, and to whom. The revenue path: a coordination market, a per-commit fee, a subscription.
Worked example (confidential inference broker):
submit : SealedInferenceRequest
submitter : integrator agent or human app (mosaik ClientId)
unseal : 3-of-5 TDX committee
downstream : InferenceConfluence committing AttestedInference
commits per (prompt_digest, model_id, output_digest, attestation)
consume : original submitter reads reply via zipnet,
general public reads aggregate attestations
trust : zipnet any-trust on peer set +
unseal 3-of-5 TDX +
inference confluence 2-of-3 TDX
revenue : coordination market clears per-request price;
requester pays settlement hash in the request
One page, one struct per row, no prose. If you cannot fit it on one page, the shape is unclear and the product won’t survive pivots.
Step 2 — Decide the scale
Three scales the stack supports. Pick the smallest one your spec fits into; the larger scales cost more and gate more primitives you don’t need yet.
Scale 0 — a single organism
One zipnet plus one downstream organism, no coalition, no confluence, no modules. Minimal stack. Use this when the spec has a single committed primitive and no aggregation across citizens. The zipnet book is the sole upstream reference; the organism is one crate following the zipnet pattern.
Scale 1 — one lattice
A full builder lattice (the “6e3v” shape: six organisms and three verbs, one per EVM chain). Use this when the spec is block-building or a sibling problem whose state-machine shape mechanically lines up with zipnet → unseal → offer → atelier → relay → tally. Most non-block-building problems do not need a full lattice. Do not adopt this scale just because it exists.
Scale 2 — a coalition
Multiple citizens (zero or more lattices, zero or
more standalone organisms), zero or more
confluences, and up to four modules (Atlas, Almanac,
Chronicle, Compute) under one
CoalitionConfig.
Use this when:
- The product spans more than one citizen (inference broker + reputation organism + coordination market), or
- You want a named handshake so integrators compile one constant, or
- You need Compute (AI-native products almost always do).
The confidential inference broker lives at scale 2: one zipnet instance, one inference confluence, one reputation organism, one coordination-market confluence, one Compute module. Five citizens plus one module.
Step 3 — Wire the citizens
Write the CoalitionConfig as a Rust const. This is
the compile-time handshake; every integrator compiles
it verbatim.
use builder::LatticeConfig;
use coalition::{CoalitionConfig, ConfluenceConfig, OrganismRef};
pub const INFER_CORP: CoalitionConfig<'static> = CoalitionConfig {
name: "infer.corp",
lattices: &[],
organisms: &[
ZIPNET_SUBMISSIONS,
REPUTATION_INFERENCE,
],
confluences: &[
INFERENCE_ROUTER,
COORDINATION_MARKET,
ATLAS,
COMPUTE,
],
ticket_issuer: Some(INFER_TICKETS),
};
Rules:
- Every organism and confluence has its own crate and its own Config. Order in the slices is the order integrators will use to index.
- A confluence’s ConfluenceConfig folds the spanned citizen references as content — not a coalition root (confluences are coalition-independent).
- If you need a coordination market or a compute settlement mechanism, declare an optional ticket issuer. Components that want to accept tickets from the issuer include its root in their own TicketValidator; nothing is compelled.
Publish the CoalitionConfig hex and the struct
definition as your handshake page.
Step 4 — Add AI operators
For every citizen and every confluence, pick the agent shape (see agents as citizens):
| Component | Agent shape | Why |
|---|---|---|
| zipnet submissions | integrator (shape 1) — the app’s clients | Submissions are private actions; no replay needed |
| unseal committee | committee of TDX organisms (shape 3) | t-of-n threshold decryption needs joint-commit output |
| inference confluence | swarm (shape 4) over agent organisms | Multiple reviewer agents, one aggregated commit |
| reputation organism | standalone organism (shape 2) | Stable scoring history consumers pin |
| coordination market | confluence committee (shape 3) | Market clearing needs one commit per round |
| Atlas / Chronicle | standard committee (non-AI) | Observability, not intelligence |
| Compute provider | compute-bridge (shape 2), TDX-attested | Honest capacity reporting, multi-backend provisioning |
Rules for deciding a shape:
- Start at shape 1 (integrator). Upgrade only when you need replay, public identity, or bonding by peers.
- Use shape 4 for quality-aggregation. A swarm-as-confluence lets you score individual agents separately while committing one authoritative output.
- Use shape 3 for market-clearing or threshold work. The committee’s joint commit is the primitive consumers pay for.
- Do not bond to a single agent-operator unless the agent is a standalone organism (shape 2) with a published MR_TD. Otherwise you are bonding to one operator’s keys.
Step 5 — Get the infrastructure
AI-native products are resource-hungry. The
Compute module
and the
compute-bridge example
give you a direct path from market revenue to
provisioned workloads.
Two deployments are typical at the MVP stage:
5a. Reserve compute for the inference confluence
The inference confluence declares a
ComputeManifest pinning its own MR_TD plus min CPU
/ RAM / TDX requirements. Each committee member
runs on compute acquired from a
compute-bridge
provider — operator choice of AWS, GCP, Azure, or a
bare-TDX host. Settlement flows through a
coordination-market confluence the inference broker
is also a member of: users pay; the market clears;
compute-bridge grants are funded from the clearing.
5b. Run experiments without disturbing prod
As inference quality drifts, the inference confluence
runs a candidate model as an
experiment:
a Compute grant for a short-lived image, a shadow
Stream<SuccessorInference> consumers can optionally
bond against, an ExperimentReport committed when
the window closes. Successful experiments flip to
production via a
retirement marker
pointing at the new ConfluenceConfig.
Step 6 — Ship the MVP
What “shipping” looks like:
- A handshake page. One URL carrying:
  - The CoalitionConfig hex and struct.
  - Per-citizen links with MR_TDs for TDX organisms.
  - A change channel.
- A client SDK. One crate + one helper in the integrator’s language of choice. Tokio + mosaik handle; helpers for sealing a submission and reading a reply.
- A smoke test. A binary that submits a canned sealed request, unblocks a committee round, and verifies it can read its own reply within a declared latency budget. Run this on every deploy.
- An operator runbook. Per-module MR_TD rotation, how to add a compute-bridge, how to retire a confluence version, how to respond to a zipnet unseal committee losing majority. The operator runbooks in this book are the template; mirror their structure.
- A single on-call dashboard. Subscription lag per spanned citizen, Compute-grant hit rate, committee membership health, reputation-organism scoring cadence.
The MVP is not “a screenshot of the agent responding.” It is five artefacts plus the green smoke-test.
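The smoke test's core check is a round-trip inside a declared latency budget. A minimal sketch of that shape — the submit/read calls are stubbed out here, since the real ones come from your coalition's SDK; only the budget check is concrete:

```rust
use std::time::{Duration, Instant};

/// Hypothetical stand-ins for "submit a canned sealed request" and
/// "read our own reply"; a real smoke-test binary would call the
/// coalition client SDK instead.
fn submit_canned_request() {}
fn read_own_reply() -> bool {
    true
}

/// Ok(elapsed) when the round-trip lands inside the declared latency
/// budget and the reply was readable; Err(elapsed) otherwise.
fn smoke_test(budget: Duration) -> Result<Duration, Duration> {
    let start = Instant::now();
    submit_canned_request();
    let got_reply = read_own_reply();
    let elapsed = start.elapsed();
    if got_reply && elapsed <= budget {
        Ok(elapsed)
    } else {
        Err(elapsed)
    }
}
```

Run it on every deploy; a red result blocks the release, exactly like a failing CI test.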
Step 7 — Pricing and revenue
Coordination markets are the mosaik-native way to capture value. The pattern is emergent coordination’s pattern 3, specialised for an AI-native product:
Client                Market                Inference              Reputation
------                ------                ---------              ----------
Bid ----------------->
                      Round clears
                      (price, winner)
Sealed submission --------------------->   Unseal committee
                                           --> Commit
                                               AttestedInference
Receive reply <--------------------------- (via zipnet)
                      Settlement pointer:
                      on-chain TX hash
                                                                  Score by outcome
                                                                  (if applicable)
Revenue mechanics to decide, in order:
- Unit of charge. Per-inference, per-second of agent time, or per-unit-of-aggregation. The market’s content pins this.
- Settlement rail. On-chain native token, a stablecoin, or a coalition-scoped credit token redeemable off-protocol. Evidence is a hash; finality lives off the module.
- Take rate. The operator’s cut. Announce it up-front and fold it into the market’s clearing rule.
- Reputation-gated pricing. Higher-reputation inference-confluence committee members accept higher-priced bids; lower-reputation drop to lower price tiers or sit out. Pin this in the market’s content parameters.
An AI-native product has one more lever: the agents themselves are customers. An agent that wants inference pays the market like any other client; an agent running in a Compute grant pays the compute-bridge. Revenue capture inside an agent society can sustain a product whose human-user traffic is still growing.
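To make the take-rate mechanics concrete, a hedged sketch of a first-price clearing rule that folds the operator's cut in up-front. All names are illustrative, amounts are integer base units, and the basis-point representation is an assumption, not anything the market module prescribes:

```rust
/// Clear one round: the highest bid wins at its own price, and the
/// operator's announced take (in basis points) is split out of the
/// clearing up-front. Returns (winner, committee share, operator cut),
/// or None when no bids arrived.
fn clear_round(
    bids: &[(u64 /* bidder id */, u64 /* bid, base units */)],
    take_rate_bps: u64,
) -> Option<(u64, u64, u64)> {
    // Highest bid wins; ties resolve to the later bid in the slice.
    let (winner, price) = *bids.iter().max_by_key(|(_, bid)| *bid)?;
    // Integer basis-point cut, rounded down in the committee's favour.
    let operator_cut = price * take_rate_bps / 10_000;
    Some((winner, price - operator_cut, operator_cut))
}
```

A 5% take is 500 bps: a winning 250-unit bid yields 238 to the committee and 12 to the operator.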
Step 8 — Operate sustainably
The sustainability investigation specifies five strategies and three anti-abandonment patterns. For an MVP, compose the minimum viable package:
- A reputation-anchored survival: every agent organism pins a reputation-organism floor in its admission.
- B market-funded operation: every bonded committee member reads settlement evidence from the coordination-market confluence.
- E successor-chain hand-off: every Config carries a SuccessorPolicy; every retirement emits a marker pointing at a named successor.
Add:
- X a watchdog organism scoring every agent on subscription count + downstream citation. Running from day one means abandonment is caught compositionally, not anecdotally.
- Y Atlas-surfaced successor chains so late-arriving integrators bind the latest active version.
- Z Chronicle heartbeats for every committee.
These six compose without any new protocol surface. An MVP that ships them is already operationally mature in a way most web-stack startups aren’t within six months.
Worked end-to-end — confidential inference broker
Seven-week path from clean repo to production:
Week 1 — Spec
One-page spec per Step 1. Pricing model per Step 7. Handshake document drafted. No code yet.
Week 2 — zipnet instance
Fork the zipnet reference shape. Your
binary’s Config.content pins the sealed-request
schema. Unseal committee starts at 3-of-3 mocked on
one operator; expand at week 6.
Week 3 — Inference confluence + reputation organism
Two crates: infer-conf (confluence, shape 3) and
infer-rep (standalone organism, shape 2). The
confluence’s state machine aggregates per-member
inference outputs; disagreement at commit time
lowers member weights for the next round.
Reputation organism scores confluence outputs by
outcome when an outcome is observable (for some
inference classes, an answer-key arrives later; for
others, only user-verdict signals are available).
Week 4 — Coordination market + Compute wiring
Third crate: infer-market (confluence, shape 3)
clearing per-request bids. Fourth crate: a
compute-bridge deployment per backend you can
access (start with AWS and one bare-TDX host). The
market’s settlement policy pins an on-chain payment
contract the requester pays into before their bid
counts.
Week 5 — Coalition assembly
Write the CoalitionConfig. Publish the handshake
page. Compile-in the struct to the canonical client
SDK.
Week 6 — Committee hardening
Unseal committee expands to 3-of-5 under three independent operators. Each committee member runs inside TDX with the crate’s published MR_TD. Publish MR_TDs on the handshake page. Bond reputation organism’s readers against settled on-chain payments so Sybil cost dominates false scoring.
Week 7 — Smoke test + ship
Run the smoke-test binary 10,000 times across all
coalition regions. Dashboard all five critical
paths. Post to the change channel. Integrators
compile the CoalitionConfig, the smoke-test
turns green, traffic begins.
Result: a production system where
- a client submits a sealed prompt and gets back an attested, committee-signed inference;
- the committee is TDX-attested, its composition is visible in Atlas, and its revenue is visible in the coordination-market stream;
- compute is acquired on demand from a compute-bridge, reimbursed from the market;
- reputation compounds across weeks and scales pricing;
- every step is committable, auditable, and bonds only what the participant chose to trust.
Failure modes and iteration
- Zero user traffic. The product’s spec was wrong. Do not pivot the stack; pivot the product. The coalition composition re-uses perfectly — you rewire the confluence’s content parameters and republish.
- One committee member is malicious. The t-of-n threshold + TDX attestation + reputation organism catch this compositionally. The offending MR_TD is unpinned from the handshake next release.
- Compute costs outpace revenue. The market clearing rule is the lever: raise the minimum bid, shrink the confluence’s committee, or pivot to a lower-cost inference model. Each is one ConfluenceConfig bump; the retirement marker handles the transition.
- Reputation organism is compromised. See emergent coordination — pattern 2: consumers consult multiple reputation organisms; the compromised one is unpinned.
- A backend goes offline. The compute-bridge fleet router routes around it; if the backend had uniquely-capable hosts (e.g. only bare-TDX), grants requiring TDX fail to match until a replacement backend is added.
Iteration is cheap because every interface is a
Config fingerprint: a new version is a new
fingerprint, and every integrator is rebonded
through the retirement chain.
What this playbook does not claim
- Your product will succeed. Distribution, pricing, and user psychology are not in this book. This playbook makes shipping possible; it does not make it profitable.
- Zipnet is the right submission layer for every AI-native product. It is the right one when the three properties in Step 0 all hold. When they do not, use an ordinary mosaik organism with a public submission stream and skip the unseal machinery.
- The compute-bridge covers every hardware profile. Large-scale GPU inference is not a solved deployment on TDX-capable providers today. The baremetal backend is your route in the interim.
- Agents can replace the operator. Every committee still has a human operator accountable for running the binary. Constitutional pinning (strategy D from sustainability) means that even when the agent is making decisions, the operator is making the decision to keep the committee running. No blueprint here forecloses that.
- The four upstream books each fully exist as a shipped binary. The coalition meta-crate lands at v0.2, Compute at v0.6, and this playbook’s production path traces that timeline. Until then, the playbook reads as design.
Cross-references
- What a coalition gives AI agents
- Quickstart
- Agents as citizens
- Emergent coordination
- Sustainability and abandonment
- Compute
- Example — compute-bridge
- Contributor — basic services
- Contributor — topology intro
- Operator — coalition overview
What a coalition gives you
audience: integrators
A coalition is a single handshake token binding an
agent to multiple citizens of the mosaik universe at once:
block-building lattices, standalone organisms (oracles,
attestation fabrics, identity substrates, analytics
pipelines, …), confluences (cross-citizen
organisms), and the coalition’s optional basic services
(Atlas, Almanac, Chronicle, Compute — whichever are
shipped).
Compile one CoalitionConfig and obtain typed handles on
every citizen, every basic service, and every confluence
the coalition references, from one Arc<Network>.
If you already bind to one lattice via a LatticeConfig,
or to one standalone organism via its Config, a coalition
does not change any of that. It adds:
- A single reference to a set of citizens the operator has chosen to yoke together (e.g. every chain in a superchain stack; every oracle feed in a region; a set of lattices plus a shared attestation fabric).
- Optional confluences: cross-citizen organisms that aggregate, route, or attest across the citizens in the coalition. Confluences are not coalition-scoped; the same confluence may be referenced by many coalitions.
- Optional basic services: up to four well-known shapes — Atlas (directory), Almanac (clock beacon), Chronicle (audit log), Compute (scheduler and registry for image-hash-identified workloads).
- An optional coalition-scoped ticket issuer. Components choose whether to recognise its tickets; nothing is required to.
- No loss of identity. Every citizen’s GroupId, every ClientId, every StreamId you already use stays byte-for-byte the same under routine operational rotations (MR_TD bumps, ACL rotations). A coalition references citizens; it does not re-derive them.
An agent binding to one citizen only does not need a
CoalitionConfig; bind directly to that citizen’s
Config (for a block-building lattice, see the
builder integrators overview; for a
standalone organism, see its own crate docs). The
coalition layer is additive, not a replacement.
When you want a CoalitionConfig
Four common cases.
Cross-chain searcher
You bid on bundles across three block-building lattices
and want one handshake token that carries all three
LatticeConfigs, their MR_TDs, and any referenced
confluences (e.g. a shared ledger that aggregates your
refunds across chains).
Hold one CoalitionConfig, open three
offer::Offer<Bundle>::bid handles, one
ledger::Shared<Refund>::read. See the
quickstart.
Multi-chain wallet
You submit intents on several lattices and need refund
attestations from each. If the coalition references a
shared-ledger confluence, you can read one aggregated feed
instead of polling per-lattice tally::Refunds.
Cross-domain agent
Your agent consumes outputs from a mix of citizens — say,
two block-building lattices, an oracle grid, and an
attestation aggregator — and you want one compile-time
token that pins the full set. A CoalitionConfig lets you
hold one handshake instead of four.
Confluence or module consumer
You are an analytics, routing, or dashboard service that
reads an intent router, a shared ledger, a cross-chain
oracle aggregator, or the coalition’s Atlas. Without a
CoalitionConfig, you would re-derive each fingerprint
yourself; with it, you compile one struct.
When you do not need a CoalitionConfig
- You bind to one citizen only.
- You are an integrator testing a standalone confluence whose operator has given you its ConfluenceConfig directly.
- You only read a public collection whose StoreId you already have; coalitions are compile-time ergonomics.
What the operator gives you
When a coalition operator hands you a handshake, you should receive:
- The CoalitionConfig struct (or its serialised fingerprint).
- The included LatticeConfigs — either inline or by reference to each lattice operator’s published handshake.
- The included standalone-organism references — each with role name, stable id, optional content hash, and a pointer to that organism’s own handshake.
- The included confluences — each with its ConfluenceConfig and a pointer to the confluence operator’s handshake.
- The included modules — each a ConfluenceConfig with the shape documented in basic services.
- The ticket-issuer root if the coalition ships one.
- MR_TDs for every TDX-gated confluence, every TDX-gated module, and every TDX-gated standalone organism the coalition commissioned.
- Endpoint hints for at least one initial peer on the shared universe, if your agent is cold-starting.
See What you need from the coalition operator for the full checklist.
What you compile in
use std::sync::Arc;
use mosaik::Network;
use builder::UNIVERSE;
use coalition::CoalitionConfig;
const ETH_SUPERCHAIN: CoalitionConfig<'static> = /* pasted from operator release notes */;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
// Pick handles you need — any lattice organism, any
// standalone organism, any confluence, any module.
let eth_bid = offer::Offer::<Bundle>::bid(
&network, &ETH_SUPERCHAIN.lattices[0].offer,
).await?;
let uni_bid = offer::Offer::<Bundle>::bid(
&network, &ETH_SUPERCHAIN.lattices[1].offer,
).await?;
// A standalone organism the coalition references — here, an
// attestation aggregator with its own Config.
let attest = attest::Aggregator::<Quote>::read(
&network, ETH_SUPERCHAIN.organisms[0].config(),
).await?;
// A confluence — same shape, one rung up.
let ledger = ledger::Shared::<Refund>::read(
&network, &ETH_SUPERCHAIN.confluences[0],
).await?;
// A module — the coalition's Atlas, if shipped.
let atlas = atlas::Atlas::<CitizenCard>::read(
&network, ETH_SUPERCHAIN.atlas().unwrap(),
).await?;
// ... use them.
Ok(())
}
ETH_SUPERCHAIN.atlas() returns the Atlas module’s
Config if the coalition ships one. The same helper
exists for almanac() and chronicle(). A coalition that
does not ship a given module returns None.
No additional plumbing. UNIVERSE is the same constant
every lattice and zipnet deployment uses.
What can go wrong
- ConnectTimeout. The CoalitionConfig you compiled in does not match what the operator is running. Recompile against the current release notes.
- Ticket missing on a specific citizen. Your per-citizen admission ticket was not issued or was revoked. Per-citizen operators issue their own tickets; the coalition operator does not. Contact the per-citizen operator.
- Confluence or standalone organism down. A committee lost majority. Other citizens continue; the affected component’s next commit is delayed.
- Module helper returns None. The coalition does not ship that module. Drop the dependency or request the module be added.
Cross-references
- Quickstart — bind many citizens from one agent
- What you need from the coalition operator
- Binding to a confluence
- Cross-citizen flows
- Contributor — basic services
Quickstart — bind many citizens from one agent
audience: integrators
This quickstart stands up an agent that submits and reads
across three block-building lattices, one standalone
attestation organism, one confluence, and one module (the
coalition’s Atlas) — all bound through a single
CoalitionConfig. It assumes familiarity with the
builder integrators quickstart when the
coalition includes block-building lattices; the coalition
layer extends that pattern rather than replacing it.
If the coalition contains no lattices (for example, a composition of oracle organisms plus a cross-oracle confluence), the same pattern applies with different crates on the right-hand side of each handle.
What you need before you start
- A CoalitionConfig from the coalition operator (struct definition, or a serialised fingerprint you copy into your crate).
- A LatticeConfig per referenced lattice (inline in the CoalitionConfig the operator published, or pulled from each lattice operator’s handshake page).
- A standalone-organism reference per referenced organism (role name + stable id + optional content hash + pointer to the organism’s own handshake).
- A ConfluenceConfig per referenced confluence (from the confluence operator’s handshake page).
- A ticket from each per-citizen operator you intend to write to. Per-citizen admission is per-citizen; the coalition operator does not issue per-citizen tickets.
- A ticket from each confluence or module operator whose write-side primitive you intend to use. Usually none: most confluences and modules are read-only for external integrators.
- Your TDX image (if any citizen or service you write to requires attested client admission).
Step 1 — declare the CoalitionConfig in your crate
Paste the operator’s published CoalitionConfig
definition, or import it from the operator’s handshake
crate if they publish one:
use builder::LatticeConfig;
use coalition::{CoalitionConfig, ConfluenceConfig, OrganismRef};
// Exactly the bytes the operator publishes.
const ETH_MAINNET: LatticeConfig<'static> = /* from ethereum.mainnet operator */;
const UNICHAIN_MAINNET: LatticeConfig<'static> = /* from unichain.mainnet operator */;
const BASE_MAINNET: LatticeConfig<'static> = /* from base.mainnet operator */;
// A standalone organism the coalition references — here,
// an attestation aggregator with its own stable id.
const ATTEST_AGG: OrganismRef<'static> = OrganismRef {
role: "attest",
stable_id: /* published by the attest-aggregator operator */,
content_hash: None,
};
// The coalition's Atlas module (if shipped) and two
// referenced confluences.
const SUPERCHAIN_ATLAS: ConfluenceConfig<'static> = /* from coalition operator */;
const INTENTS_CONF: ConfluenceConfig<'static> = /* from confluence operator */;
const LEDGER_CONF: ConfluenceConfig<'static> = /* from confluence operator */;
pub const ETH_SUPERCHAIN: CoalitionConfig<'static> = CoalitionConfig {
name: "ethereum.superchain",
lattices: &[ETH_MAINNET, UNICHAIN_MAINNET, BASE_MAINNET],
organisms: &[ATTEST_AGG],
confluences: &[SUPERCHAIN_ATLAS, INTENTS_CONF, LEDGER_CONF],
ticket_issuer: None,
};
For a coalition-agnostic library, take
&'static CoalitionConfig<'static> as a constructor
argument and let the binary crate pick the constant.
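That constructor pattern can be sketched with a mock config type. The `CoalitionConfig` below is a deliberately tiny stand-in for the real `coalition::CoalitionConfig` (which carries lattices, organisms, confluences, and the ticket issuer); only the `&'static`-constructor shape is the point:

```rust
/// Mock of coalition::CoalitionConfig, reduced to one field so the
/// example is self-contained. The real struct has many more fields.
struct CoalitionConfig<'a> {
    name: &'a str,
}

/// A coalition-agnostic library: it takes the coalition as a
/// constructor argument instead of hard-coding a constant.
struct AgentLib {
    coalition: &'static CoalitionConfig<'static>,
}

impl AgentLib {
    fn new(coalition: &'static CoalitionConfig<'static>) -> Self {
        Self { coalition }
    }

    fn coalition_name(&self) -> &str {
        self.coalition.name
    }
}

// The binary crate picks the constant; the library never names it.
static ETH_SUPERCHAIN: CoalitionConfig<'static> = CoalitionConfig {
    name: "ethereum.superchain",
};
```

The same library binary then runs unchanged against any coalition whose constant the binary crate supplies.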
Step 2 — bring up the network handle
Same UNIVERSE as every other mosaik service:
use std::sync::Arc;
use mosaik::Network;
use builder::UNIVERSE;
let network = Arc::new(Network::new(UNIVERSE).await?);
One Arc<Network> per agent regardless of how many
citizens, confluences, or modules are bound.
Step 3 — bind the lattice handles you need
Indexes into lattices[] match the order in the
CoalitionConfig. A cross-chain searcher that bids on
all three:
use offer::Offer;
let eth_bid  = Offer::<Bundle>::bid(&network, &ETH_SUPERCHAIN.lattices[0].offer).await?;
let uni_bid  = Offer::<Bundle>::bid(&network, &ETH_SUPERCHAIN.lattices[1].offer).await?;
let base_bid = Offer::<Bundle>::bid(&network, &ETH_SUPERCHAIN.lattices[2].offer).await?;
let eth_wins  = Offer::<Bundle>::outcomes(&network, &ETH_SUPERCHAIN.lattices[0].offer).await?;
let uni_wins  = Offer::<Bundle>::outcomes(&network, &ETH_SUPERCHAIN.lattices[1].offer).await?;
let base_wins = Offer::<Bundle>::outcomes(&network, &ETH_SUPERCHAIN.lattices[2].offer).await?;
Keep handles alive while needed; drop them when done.
Each handle carries its own subscriptions and is cheap
atop the shared Network.
Step 4 — bind the standalone-organism handles you need
Each OrganismRef carries the stable id; the concrete
organism crate gives you the typed constructor:
use attest::Aggregator;
let attest = Aggregator::<Quote>::read(
&network, ETH_SUPERCHAIN.organisms[0].config(),
).await?;
OrganismRef::config() resolves to the organism’s Config
— either because the coalition operator embeds the full
Config struct in their published CoalitionConfig, or
because the organism crate ships a helper that rebuilds
the Config from the published stable id and known role
constants. Either way, the concrete organism crate is what
gives you the typed handle.
Step 5 — bind the modules you want
Modules are coalition-scoped organisms with well-known
shapes. CoalitionConfig exposes helpers:
use atlas::Atlas;
use almanac::Almanac;
// The Atlas lists every citizen in the coalition with operator metadata.
if let Some(atlas_cfg) = ETH_SUPERCHAIN.atlas() {
let atlas = Atlas::<CitizenCard>::read(&network, atlas_cfg).await?;
// Use the atlas as your canonical "what's in this coalition" reference.
}
// The Almanac is a shared tick beacon for timing correlations.
if let Some(almanac_cfg) = ETH_SUPERCHAIN.almanac() {
let almanac = Almanac::<AlmanacTick>::read(&network, almanac_cfg).await?;
// Use almanac ticks as the time axis for cross-citizen commits.
// Each tick's `observed_at` is a Vec<(MemberId, SystemTime)>;
// compute median or bracket yourself.
}
A coalition may ship zero, one, two, three, or all four
modules. Always check the helper’s Option return; do
not assume presence.
Step 6 — bind the confluence handles you need
The shared ledger is typical for a searcher seeking aggregated refund attribution:
use ledger::Shared;
let refunds = Shared::<Refund>::read(&network, &ETH_SUPERCHAIN.confluences[2]).await?;
tokio::spawn(async move {
while let Some(r) = refunds.next().await {
// Each entry carries (origin_citizen, slot, share, evidence).
handle_refund(r);
}
});
Step 7 — submit paired bundles
Cross-chain bundles are a searcher-level problem. The coalition layer surfaces the handles; reconciliation logic is the integrator’s responsibility.
async fn submit_paired_bundle(
eth_bid: &<Offer<Bundle>>::Bid,
uni_bid: &<Offer<Bundle>>::Bid,
sell_on_eth: Bundle,
buy_on_uni: Bundle,
) -> anyhow::Result<()> {
let eth_ack = eth_bid.send(sell_on_eth).await?;
let uni_ack = uni_bid.send(buy_on_uni ).await?;
// Reconcile: if only one leg commits, post-bond collateral or
// on-chain HTLC logic applies. The coalition layer does NOT
// provide cross-citizen atomicity. See
// /contributors/atomicity.md.
Ok(())
}
A paired-bundle correlator joining AuctionOutcome[L1, S] to AuctionOutcome[L2, S'] is the most common
in-agent utility. Compute it in-memory as in
builder’s Shape 1, or rely on a
referenced intents confluence if one exists.
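An in-memory correlator of the kind described can be very small. A sketch under assumed types — `Outcome` is a deliberately simplified stand-in for the real AuctionOutcome, keyed by a shared bundle id:

```rust
use std::collections::HashMap;

/// Simplified stand-in for an AuctionOutcome: which bundle committed
/// in which slot on which lattice.
struct Outcome {
    bundle_id: u64,
    lattice: &'static str,
    slot: u64,
}

/// Joins outcomes for the same bundle id seen on two different
/// lattices. Feed it every outcome from both subscriptions; it yields
/// the pair once both legs have committed.
#[derive(Default)]
struct Correlator {
    pending: HashMap<u64, Outcome>,
}

impl Correlator {
    fn observe(&mut self, o: Outcome) -> Option<(Outcome, Outcome)> {
        match self.pending.remove(&o.bundle_id) {
            // The other leg already committed on a different lattice.
            Some(prev) if prev.lattice != o.lattice => Some((prev, o)),
            // First leg, or a repeat from the same lattice: keep latest.
            _ => {
                self.pending.insert(o.bundle_id, o);
                None
            }
        }
    }
}
```

A production version would also age out entries whose second leg never commits, which is where the partial-fill collateral logic attaches.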
Step 8 — read the aggregated result
With a shared-ledger confluence, a single subscription surfaces refunds originating from any of the coalition’s lattices.
while let Some(entry) = refunds.next().await {
tracing::info!(
origin = %entry.origin_citizen,
slot = entry.slot,
share = %entry.share,
"refund attributed",
);
}
Without a shared-ledger confluence, open three per-
lattice tally::Tally::<Attribution>::read handles and
join in memory. Both paths are valid; the confluence is a
shared precomputation.
What to do when things go wrong
- ConnectTimeout on a lattice or organism handle. The per-citizen Config does not match what the citizen’s operator is running. Recompile against that operator’s latest handshake.
- ConnectTimeout on a confluence or module handle. The ConfluenceConfig does not match, or the committee is down and has not yet published peers on the universe, or the confluence has retired (check for a RetirementMarker on its public stream) and the coalition operator should have sent an updated CoalitionConfig.
- Admission denied on a write-side stream. Ticket missing or expired. Contact the per-citizen operator, not the coalition operator.
- Module helper returns None. The coalition does not ship that module. See contributor basic services for the spec if proposing addition.
Cross-references
- What a coalition gives you
- What you need from the coalition operator
- Binding to a confluence
- Cross-citizen flows
- builder quickstart
What you need from the coalition operator
audience: integrators
Before compiling a CoalitionConfig into an agent, the
integrator requires a short item list from the coalition
operator plus a pair of sanity checks. This page is the
checklist.
The eight items
1. The CoalitionConfig definition. Either a Rust const declaration pasted into the integrator crate, or a serialised fingerprint (hex of coalition_id()) paired with the integrator’s own copy of the struct definition.
2. Every included LatticeConfig. Typically pulled by reference from each lattice operator’s handshake page. The coalition operator should publish a per-coalition table listing the referenced lattices and a link to each lattice operator’s release notes.
3. Every included standalone-organism reference. For each organism the coalition composes outside a lattice, the coalition operator publishes the role name, the organism’s stable id, the optional content hash if pinned, and a link to that organism’s own handshake page.
4. Every referenced confluence’s ConfluenceConfig. Confluences are coalition-independent; each carries a link to its own operator’s handshake page.
5. Every shipped module’s ConfluenceConfig. Atlas, Almanac, Chronicle — whichever the coalition ships. Modules not shipped are stated explicitly; silence is ambiguous.
6. MR_TDs for every TDX-gated unit the coalition operator commissions — modules, coalition-run confluences, and standalone organisms they run. (Per-citizen MR_TDs come from the per-citizen operators; the coalition operator re-publishes for convenience.)
7. At least one initial peer hint for the shared universe (cold-start bootstrap). Typically the same hint every mosaik service uses; re-published for convenience.
8. A change channel. The mechanism through which the coalition operator notifies of a referenced citizen’s stable-id bump or a referenced confluence’s Config change.
Coalition operators do not issue per-citizen tickets; those come from per-citizen operators. A coalition may publish its own optional ticket issuer; components and integrators choose whether to recognise it.
Verification
Perform two sanity checks once the items above are in hand.
Fingerprint round-trip
Compile the pasted CoalitionConfig. Call
coalition_id() on it. Compare to the operator’s
published fingerprint. Both must match byte-for-byte. A
mismatch means the struct definition drifted or the
serialiser differs; do not proceed until the mismatch
resolves.
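The round-trip can live as a one-line guard in the integrator's startup path. A sketch only: `coalition_id` below is a stand-in digest (FNV-1a over the serialised bytes) so the example is self-contained — the real method is `CoalitionConfig::coalition_id()` and its hashing is defined by the coalition crate, not here:

```rust
/// Stand-in for the real coalition_id(): any stable digest shows the
/// shape of the check. This one is FNV-1a over the serialised bytes.
fn coalition_id(serialised: &[u8]) -> String {
    let h = serialised.iter().fold(0xcbf29ce484222325u64, |h, b| {
        (h ^ *b as u64).wrapping_mul(0x100000001b3)
    });
    format!("{h:016x}")
}

/// Refuse to proceed on fingerprint drift between the compiled struct
/// and the operator's published hex.
fn verify_fingerprint(compiled: &[u8], published_hex: &str) -> Result<(), String> {
    let actual = coalition_id(compiled);
    if actual == published_hex {
        Ok(())
    } else {
        Err(format!(
            "fingerprint mismatch: compiled {actual}, published {published_hex}"
        ))
    }
}
```

Wire the `Err` branch to a hard process exit; a drifted fingerprint is never safe to run through.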
Citizen reference cross-check
For each referenced LatticeConfig, call stable_id()
and compare to the lattice operator’s published stable
id. For each referenced standalone organism, compare the
OrganismRef stable id (and content hash, if pinned) to
the organism operator’s published one. A mismatch means
the coalition operator’s copy drifted from the per-
citizen operator’s. Request an updated CoalitionConfig.
Trust
Nothing in the coalition operator’s handshake is cryptographically authoritative: a coalition fingerprint names a composition, it does not sign it. The checks above suffice because the identity work is done at the per- citizen rung. The coalition layer stitches known fingerprints together and adds a small catalog of modules.
Cross-references
Binding to a confluence
audience: integrators
A confluence is a mosaik organism whose public
surface spans multiple citizens — some combination of
block-building lattices and standalone organisms.
Binding is the same as for any organism —
Confluence::<D>::verb(&network, &Config) — with two
small differences in what the operator owes the
integrator and in failure-mode reasoning.
The same reasoning applies to a coalition’s modules (Atlas, Almanac, Chronicle) — each is a coalition-scoped organism with a well-known shape.
One handle per verb
Each confluence exposes one or two public primitives. A read-only integrator typically needs exactly one handle; a write-side integrator a second. Typical shapes:
// Read-only: subscribe to the confluence's output.
let intents = intents::Router::<Intent>::routed(&network, &INTENTS_CONF).await?;
// Write-side (if the confluence publishes a write stream):
let publish = intents::Router::<Intent>::publish(&network, &INTENTS_CONF).await?;
Verb names come from the confluence’s own crate docs,
following the same convention as every organism
(Offer::bid, Tally::read, Zipnet::submit,
Aggregator::read, Atlas::read).
What the confluence operator owes you
A confluence operator (who may coincide with the coalition operator) is responsible for:
- The ConfluenceConfig fingerprint, publishable and byte-for-byte compilable.
- The ordered list of spanned citizen references the confluence depends on — lattices, standalone organisms, or any mix. Cross-check each against the corresponding per-citizen operator’s publication.
- MR_TDs if the confluence is TDX-gated.
- The confluence’s state-machine version / signature. A bump surfaces as ConnectTimeout on compiled handles; the change channel warns ahead.
- A failure-mode statement. Whether a stall in spanned citizen A causes a commit with partial evidence or an event stall. Documented in the crate’s composition-hooks page.
- A retirement commitment. Before shutting the committee down, the confluence emits a RetirementMarker on its public stream pointing at a replacement (if any). Watch for it rather than timing out blindly.
Failure-mode reasoning for confluence consumers
Three properties to internalise before reading a confluence.
The confluence commit lags the upstream commits
A confluence’s (citizen_id, slot) commit is observable
only after the underlying citizen commits are observable,
plus one Raft round inside the confluence’s committee.
Latency-sensitive consumers read the upstream citizen
surfaces directly and use the confluence only for the
aggregated view.
Partial evidence is a valid confluence commit
Depending on the confluence’s stall policy, a commit may reflect inputs from only some spanned citizens. The commit carries evidence pointers; checking the pointers indicates which citizens’ inputs were present. Do not assume every spanned citizen contributed to every commit.
A stalled confluence does not block upstream pipelines
If the confluence’s committee loses majority or lags, the spanned citizens’ pipelines are unaffected. Fall back to binding per-citizen handles directly; the confluence’s precomputation is a convenience, not a dependency.
What to do when your agent relies on a confluence that changes
Confluence Config fingerprints change for three reasons:
- Confluence upgrade. The confluence’s content, ACL, or spanned citizen set changed; its fingerprint bumps independently.
- Upstream citizen retirement. A citizen the confluence spans published a new stable id; the confluence inherits the change if it pinned the old stable id.
- Upstream citizen content bump (only if pinned). If the confluence pinned the upstream content hash, an MR_TD or ACL bump in the upstream bumps the confluence’s fingerprint.
In every case the old handle starts failing with
ConnectTimeout — or, better, the confluence emits a
RetirementMarker on its stream pointing at the
replacement. The operator’s change channel should warn
ahead; the integrator’s process should include a path to
recompile against the new CoalitionConfig (or
ConfluenceConfig) and restart.
Cross-references
Cross-citizen flows
audience: integrators
This page catalogues cross-citizen flows an agent may run and indicates which are cheap at the integrator level versus when a confluence is warranted.
The underlying theory for block-building cross-chain cases is in the builder book’s cross-chain chapter. The coalition layer formalises its Shape 3 (bridge organism) as a confluence and generalises the pattern beyond block- building to any mix of lattices and standalone organisms. This page is the integrator-facing view.
Four recurring flows
Paired bundles (two block-building lattices)
An agent publishes bundle A to offer[L1] and bundle B
to offer[L2], expecting paired-slot commits.
- Without a confluence. Keep a per-agent correlator table keyed by bundle id; watch both lattices’ AuctionOutcome and join. Handle partial fills (A committed, B did not) with post-bond collateral or on-chain HTLC logic. This is the builder book’s Shape 1.
- With an intents confluence. Publish one intent to the confluence’s write stream; the confluence routes sub-bundles to each target lattice’s offer. Subscribe to the confluence’s RoutedIntents for cross-chain outcome reconciliation.
- Decision rule. If only one searcher runs the correlator, keep it in the agent. If three unrelated agents would each run the same correlator, an intents confluence is warranted.
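The per-agent correlator from the first option fits in a few lines. A minimal sketch, assuming illustrative types; the real AuctionOutcome events and bundle-id scheme come from the builder crates:

```rust
use std::collections::HashMap;

// Which lattice an outcome arrived on (illustrative stand-in for the
// two per-lattice AuctionOutcome subscriptions).
enum Lattice { L1, L2 }

#[derive(Debug, PartialEq)]
enum Pairing { Pending, Paired }

// Per-agent correlator table keyed by bundle id.
#[derive(Default)]
struct Correlator {
    seen: HashMap<u64, (bool, bool)>, // bundle id -> (seen on L1, seen on L2)
}

impl Correlator {
    // Record one outcome; Paired means both lattices committed the bundle.
    fn observe(&mut self, bundle_id: u64, lattice: Lattice) -> Pairing {
        let entry = self.seen.entry(bundle_id).or_insert((false, false));
        match lattice {
            Lattice::L1 => entry.0 = true,
            Lattice::L2 => entry.1 = true,
        }
        if entry.0 && entry.1 { Pairing::Paired } else { Pairing::Pending }
    }

    // Bundles committed on exactly one side: candidates for the
    // post-bond collateral / HTLC fallback.
    fn partial_fills(&self) -> Vec<u64> {
        self.seen.iter()
            .filter(|(_, flags)| flags.0 != flags.1)
            .map(|(id, _)| *id)
            .collect()
    }
}
```

Once three unrelated agents each carry a table like this, the decision rule says to move it into an intents confluence.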
Cross-origin attribution (lattice + lattice)
A bundle submitted on lattice A but captured MEV on
lattice B. Without coordination, tally[A] never sees the
attribution on B’s block; tally[B] sees it but without
knowing the origin was A.
- Without a confluence. Subscribe to both tally[A] and tally[B], join in memory on submission ids, compute the cross-origin view locally.
- With a shared-ledger confluence. One Shared::<Refund>::read handle yields aggregated attribution across all citizens the ledger spans.
- Decision rule. If on-chain settlement requires a pre-aggregated attestation (e.g. one signature for a cross-chain refund claim), the confluence is load-bearing. If each consumer can perform the join, the confluence is a convenience.
Oracle + lattice correlation (organism + lattice)
An agent that needs to correlate an oracle organism’s price feed with block-building auction outcomes on one or more lattices. The citizens are heterogeneous: one is a standalone organism, the others are lattices.
- Without a confluence. Subscribe to the oracle organism’s feed and each lattice’s AuctionOutcome; join in memory with a timestamp tolerance window.
- With a correlator confluence. The confluence’s commit pairs (oracle_tick, lattice_slot) tuples under one attested key, letting multiple downstream agents consume the same correlation without re-computing. When a coalition ships an Almanac the confluence has opted into, both the oracle tick and the lattice slot align to almanac ticks, providing a shared correlation window.
- Decision rule. Do three different agents need the same correlation? If yes, the confluence amortises.
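The in-memory join with a tolerance window reduces to nearest-neighbour matching on timestamps. A minimal sketch, with illustrative tuple shapes standing in for the real feed and AuctionOutcome records:

```rust
// Pair each oracle tick with the nearest lattice slot inside the
// tolerance window, if one exists.
fn join_within(
    oracle_ticks: &[(u64, f64)],  // (timestamp_ms, price) -- illustrative
    lattice_slots: &[(u64, u64)], // (timestamp_ms, slot)  -- illustrative
    tolerance_ms: u64,
) -> Vec<(u64, u64)> {            // (oracle timestamp, matched slot)
    let mut out = Vec::new();
    for &(ot, _price) in oracle_ticks {
        // Nearest slot within the tolerance window, if any.
        if let Some(&(_, slot)) = lattice_slots
            .iter()
            .filter(|&&(st, _)| ot.abs_diff(st) <= tolerance_ms)
            .min_by_key(|&&(st, _)| ot.abs_diff(st))
        {
            out.push((ot, slot));
        }
    }
    out
}
```

With an Almanac, both sides align to almanac ticks and the window comparison collapses to an equality check.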
Cross-citizen observability
A dashboard, an analytics agent, or an oncall who wants “one view of the coalition’s activity” — across lattices, standalone organisms, or both.
- Without a confluence. N citizen subscriptions plus local aggregation.
- With the coalition’s Atlas module. Start from the Atlas (directory of citizens, with endpoint hints); subscribe per citizen. Atlas is the reference; the aggregation remains the integrator’s.
- Decision rule. Atlas pays off once more than one dashboard operator exists. A single on-call team can skip it; most coalitions ship an Atlas anyway because the cost is small and the coordination benefit is high.
The decision framework
Every cross-citizen flow reduces to one question:
Do three different agents (searchers, dashboards, analytics) need the same pre-aggregated fact?
If yes, a confluence is the right shape: commit once per slot, consume N times.
If no, a per-agent driver is the right shape: keep the logic where the decisions are made.
A confluence is not generic cross-citizen infrastructure. It is a mosaik organism with a narrow public surface and the same operational cost as any other committee-backed organism. Use it when a committed, reusable fact pays for the cost; use an in-agent driver when it does not.
What the coalition layer does not give you
- Atomic cross-citizen commits. The coalition does not provide send-both-or-neither. See atomicity.
- Cross-coalition coordination. Two CoalitionConfigs on the same universe are two handshake tokens; they do not coordinate. A cross-coalition commit resolves to a confluence folding the relevant citizens directly, referenced independently by both coalitions.
- Trust in a confluence beyond its committee. A confluence’s commit is trustworthy up to its committee’s assumption plus the upstream citizens it reads. Know the trust shape before relying on it.
Cross-references
- Quickstart
- Binding to a confluence
- Contributor — confluences
- Contributor — basic services
- Contributor — atomicity
- builder cross-chain
Coalition overview
audience: operators
A coalition is an operator-published handshake token
naming a set of mosaik citizens — block-building
lattices, standalone organisms, and zero or more
confluences (cross-citizen organisms) — and handing
the set to integrators as one CoalitionConfig they
compile in. A coalition may also ship up to four
modules (Atlas, Almanac, Chronicle, Compute) and an
optional coalition-scoped ticket issuer under the same
handshake.
This page is the operator’s introduction: what running a coalition means, what the operator owns, and what they do not.
Operator roles, often one team
Running a coalition means any combination of the following roles, often held by the same team:
- Lattice operator — responsible for one LatticeConfig: its committees, its ACL, its organism rotations. Lattice operations are covered by the builder operator book. The coalition layer does not change any lattice-operator responsibilities.
- Standalone-organism operator — responsible for one organism living directly on the universe (an oracle feed, an attestation aggregator, an identity substrate, …). Same shape as any mosaik-native service: a committee, a Config, a narrow public surface.
- Confluence operator — responsible for one confluence’s committee: running the binaries, holding the admission secrets, rotating the attestation images if TDX-gated. Confluences are coalition-independent; the same confluence may be referenced by several coalitions.
- Module operator — responsible for an Atlas, Almanac, Chronicle, or Compute committee. Mechanically identical to running any other coalition-scoped organism; the shape is specified in basic services.
- Ticket-issuer operator — responsible for the coalition-scoped ticket issuance root when the coalition ships one. See ticket issuance.
- Coalition operator — responsible for publishing
and maintaining the
CoalitionConfig: keeping its content accurate, notifying integrators when referenced citizens’ stable ids bump, and retiring the coalition instance name cleanly when it is no longer needed.
A single team commonly holds two or more of these roles.
What the coalition operator owns
- The CoalitionConfig struct and its serialised fingerprint.
- A published change channel (release notes, a CHANGELOG, an RSS feed — whatever), through which you notify integrators of citizen stable-id bumps, referenced-confluence upgrades, or composition changes.
- The set of ConfluenceConfigs referenced by the coalition (if you also run the modules and any confluences).
- The set of OrganismRefs the coalition composes (role name, stable id, optional content hash for each referenced standalone organism).
- The optional coalition ticket issuer.
- A coalition-scoped operator handshake page — one URL or repository section integrators can link in their own docs.
What the coalition operator does not own
- The referenced citizens. The coalition operator references lattices and standalone organisms; they do not admit peers into them, issue tickets on their behalf, or dictate internal parameters.
- The referenced confluences. Confluence identity is independent of any coalition. The coalition operator references the confluence; the confluence operator controls its committee and Config.
- Citizen stable ids. When a referenced citizen retires, the coalition operator updates the CoalitionConfig to match; they do not dictate the per-citizen operator’s retirement cadence.
- Cross-operator contracts. Multi-operator coalitions bring SLAs, data-sharing arrangements, and attribution splits — all negotiated out of band. The blueprint does not provide a standard form.
- Integrator lifecycle. Integrators compile in and recompile on their own schedule. A compiled-in CoalitionConfig cannot be revoked; the coalition operator publishes a new one and relies on integrators to upgrade.
- Citizen behaviour. A citizen’s misbehaviour is addressed by publishing an updated CoalitionConfig that drops or replaces it. The coalition operator has no protocol-level power to constrain a citizen.
What per-citizen operators can expect from a coalition operator
Worth formalising with any citizen referenced:
- Notification of inclusion. The coalition operator informs the per-citizen operator of the reference at publication time. The blueprint does not enforce this; it is a courtesy.
- No admission requests. The coalition operator does not ask the per-citizen operator to issue special-case tickets for confluences or modules. Committee members bond into referenced citizens as ordinary integrators under the citizen’s normal admission policy.
- Observability of referenced citizens’ public surfaces only. The coalition operator does not expect private plumbing access.
- Bump coordination. Citizen stable-id bumps (retirements) are notified to the coalition operator via the per-citizen operator’s integrator channel ahead of cutover; this lets the coalition operator publish a matching CoalitionConfig without a ConnectTimeout gap. Routine content-hash bumps (MR_TD rotations, ACL rotations) do not require coalition-level updates unless confluences opted to pin the content hash.
A typical coalition’s shape
Three common archetypes:
- Ecosystem coalition. One operator; several lattices across related chains (every chain in a superchain stack, every chain in an L2 family). Modules: usually Atlas at minimum, often Chronicle. Confluences: often an intent router or shared ledger.
- Service coalition. One operator; one or a handful of standalone organisms composed together (an attestation aggregator paired with an identity substrate; an oracle feed paired with an analytics pipeline). Often no lattices at all. Almanac is common because organisms without slot clocks benefit from a shared tick.
- Multi-operator coalition. Multiple operators, each running one or a few citizens (lattices, standalone organisms, or both); one operator runs the coalition and one or more confluences. Common when an ecosystem wants a shared attribution or routing layer while the underlying citizens operate independently. An optional coalition-scoped ticket issuer is more likely here.
All three archetypes use the same CoalitionConfig
shape; the difference is who runs what.
Critical path
- Keep the CoalitionConfig in sync. When a referenced citizen retires or a referenced confluence bumps, publish a new coalition fingerprint promptly.
- Operate modules, the optional ticket issuer, and any standalone organisms or confluences commissioned. Standard committee-organism operations: monitor, rotate, respond to anomalies.
- Respond to integrators. Three typical questions: the current coalition composition, the cause of a ConnectTimeout, and how to bind to a newly shipped confluence or module. The handshake page answers the first; the change channel the second; per-service docs the third.
Off the critical path
- Per-citizen operations for citizens not run by the coalition operator. Those citizens operate themselves.
- Inter-citizen consensus. None exists; the coalition is not a consensus unit.
- Cross-coalition coordination. Two coalitions coexist on the universe and do not coordinate.
Cross-references
- Quickstart — stand up a coalition
- Running a confluence
- Federating under a coalition
- Rotations and upgrades
- Contributor — basic services
- builder operator overview
Quickstart — stand up a coalition
audience: operators
This runbook brings up a minimal coalition: one
CoalitionConfig referencing two existing citizens (two
block-building lattices, two standalone organisms, or one
of each) and shipping one module — an Atlas — that
publishes a per-coalition directory. Assumptions:
- The citizens to be referenced exist. If any are block-building lattices not yet run by the operator, follow the builder operator quickstart first; if any are standalone organisms not yet run, follow the organism crate’s runbook.
- Familiarity with the builder operator workflow: systemd units, published handshake pages, committee bring-up, MR_TD publication. The discipline is identical for any organism committee.
- The coalition meta-crate (coalition) exists in a form sufficient to derive CoalitionConfig fingerprints. Until v0.2 lands, this is a paper exercise; the steps below are the target shape.
Estimated time to first CoalitionConfig: two engineer-days, assuming referenced citizens and the Atlas crate are already running.
Step 1 — enumerate the citizens to reference
Record, for each citizen, the canonical stable id (and optional content hash, if you choose to pin it) as published by its operator.
citizen           kind      stable_id (hex)  content_hash  ticket-issuance root
------------------+---------+----------------+-------------+---------------------
ethereum.mainnet   lattice   0x…              (unpinned)    0x…
unichain.mainnet   lattice   0x…              (unpinned)    0x…
price-feed.eur     organism  0x…              0x…           0x…
Do not invent stable ids locally. Always use the per-citizen operator’s authoritative value. A per-citizen operator’s stable id that cannot be reproduced from its published Config definition is a citizen-level issue; do not paper over it in the coalition.
Step 2 — pick the coalition name
Short, stable, namespaced. Examples:
ethereum.superchain
mev.mainnet-q2
oplabs.testnet
signals.euro-fx
The name is folded into the coalition fingerprint; a typo propagates through every module derivation. Pick once. Rename via a new instance name, never via in-place edit.
Step 3 — decide which modules to ship
One module suffices for a minimal coalition. Atlas is the usual first choice:
- Small state machine — one CitizenCard commit per referenced citizen.
- Low trust requirement — usually non-TDX, majority-honest.
- High utility — integrators pin the Atlas in their own docs as the canonical coalition-composition reference.
Ship Almanac and Chronicle on their own schedules. Specifications in basic services.
A concrete cross-citizen problem (paired-bundle auctions, cross-chain attribution, oracle/lattice correlation) motivates commissioning a relevant confluence beyond the modules. Each confluence is a separate crate and committee; confluences are coalition-independent, so the same confluence may also be referenced by other coalitions.
Step 4 — draft the CoalitionConfig
use builder::LatticeConfig;
use coalition::{CoalitionConfig, ConfluenceConfig, OrganismRef};
pub const ETHEREUM_MAINNET: LatticeConfig<'static> = /* from ethereum.mainnet operator */;
pub const UNICHAIN_MAINNET: LatticeConfig<'static> = /* from unichain.mainnet operator */;
pub const EUR_PRICE_FEED: OrganismRef<'static> = OrganismRef {
role: "oracle",
stable_id: /* from the price-feed operator */,
content_hash: None,
};
pub const SUPERCHAIN_ATLAS: ConfluenceConfig<'static> = ConfluenceConfig {
name: "atlas",
lattices: &[ETHEREUM_MAINNET, UNICHAIN_MAINNET],
organisms: &[EUR_PRICE_FEED],
content: /* atlas content params */,
acl: /* TicketValidator composition */,
};
pub const ETH_SUPERCHAIN: CoalitionConfig<'static> = CoalitionConfig {
name: "ethereum.superchain",
lattices: &[ETHEREUM_MAINNET, UNICHAIN_MAINNET],
organisms: &[EUR_PRICE_FEED],
confluences: &[SUPERCHAIN_ATLAS],
ticket_issuer: None,
};
Note: a ConfluenceConfig does not fold a coalition root;
confluence identity is independent. For a module like
Atlas, the meta-crate’s derivation routes the identity
under COALITION_ROOT.derive("module").derive("atlas")
internally when the helper atlas() returns the matching
ConfluenceConfig.
Step 5 — freeze the fingerprint
The derivation order is:
- Compute COALITION = blake3("coalition|" || SCHEMA_VERSION_U8 || "|" || name).
- Compute COA_LATTICES = ordered stable ids.
- Compute COA_ORGANISMS = ordered stable ids (plus content hashes if pinned per-organism).
- Compute COA_CONFS = ordered ConfluenceConfig fingerprints.
- Compute COA_TICKETS = TicketIssuerConfig fingerprint if present, else 0 bytes.
- Compute COALITION_ROOT = blake3(COALITION || COA_LATTICES || COA_ORGANISMS || COA_CONFS || COA_TICKETS).
- Module identities are derived from COALITION_ROOT via COALITION_ROOT.derive("module").derive(name).
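The derivation order above can be expressed as a fold over the ordered components. The sketch below keeps only the concatenation order; a toy 64-bit FNV-style hash stands in for blake3, and SCHEMA_VERSION is an assumed placeholder, not the meta-crate’s real constant:

```rust
// Toy stand-in for blake3: FNV-1a folded to 8 bytes. Only the input
// ordering is the point of this sketch.
fn h(data: &[u8]) -> [u8; 8] {
    let mut acc: u64 = 0xcbf29ce484222325;
    for &b in data {
        acc ^= b as u64;
        acc = acc.wrapping_mul(0x100000001b3);
    }
    acc.to_be_bytes()
}

const SCHEMA_VERSION: u8 = 1; // assumed placeholder

fn coalition_root(
    name: &str,
    lattice_ids: &[[u8; 8]],    // ordered lattice stable ids
    organism_ids: &[[u8; 8]],   // ordered organism stable ids (+ pinned hashes folded in)
    confluence_fps: &[[u8; 8]], // ordered ConfluenceConfig fingerprints
    ticket_fp: Option<[u8; 8]>, // TicketIssuerConfig fingerprint, else 0 bytes
) -> [u8; 8] {
    // COALITION = hash("coalition|" || SCHEMA_VERSION || "|" || name)
    let mut pre = Vec::new();
    pre.extend_from_slice(b"coalition|");
    pre.push(SCHEMA_VERSION);
    pre.extend_from_slice(b"|");
    pre.extend_from_slice(name.as_bytes());
    let coalition = h(&pre);

    // COALITION_ROOT = hash(COALITION || lattices || organisms || confs || tickets)
    let mut root_input = Vec::new();
    root_input.extend_from_slice(&coalition);
    for id in lattice_ids { root_input.extend_from_slice(id); }
    for id in organism_ids { root_input.extend_from_slice(id); }
    for fp in confluence_fps { root_input.extend_from_slice(fp); }
    if let Some(fp) = ticket_fp { root_input.extend_from_slice(&fp); }
    h(&root_input)
}

// Module identities: COALITION_ROOT.derive("module").derive(name).
fn derive(parent: [u8; 8], label: &str) -> [u8; 8] {
    let mut input = parent.to_vec();
    input.extend_from_slice(label.as_bytes());
    h(&input)
}
```

Determinism is the property integrators rely on: the same struct always re-derives the same hex, so a published fingerprint can be verified locally.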
Once the fingerprint is frozen, record its hex in
release-notes.md. Integrators verify that hex against
the struct they compile.
Step 6 — stand up the Atlas (and any other modules)
Every module is a mosaik-native organism (specifically, coalition-scoped). To bring up the Atlas committee:
- Install the Atlas binary on committee hosts (default N = 3; scale up as needed).
- Bootstrap Raft peers on the shared universe using the Atlas’s GroupId derived from its ConfluenceConfig under COALITION_ROOT.derive("module").derive("atlas").
- Publish the Atlas’s MR_TD (if TDX-gated) or the committee roster (if not).
Bring-up details are in the Atlas crate’s runbook and in Running a confluence. Atlas does not require TDX.
Step 7 — publish the handshake
Collect everything integrators need:
- The CoalitionConfig Rust const (or the hex fingerprint paired with the struct definition).
- The ordered list of referenced LatticeConfigs with links to each lattice operator’s handshake.
- The ordered list of referenced OrganismRefs with role names, stable ids (plus content hashes if pinned), and a link to each organism operator’s handshake.
- The ordered list of referenced confluences — each ConfluenceConfig with MR_TDs, state-machine signature, a short description, and a pointer to the confluence operator’s handshake.
- Each module’s ConfluenceConfig with MR_TDs, state-machine signature, and a short description.
- A statement of which modules the coalition ships and which it does not — silence is ambiguous.
- The coalition-scoped ticket issuer root, if any.
- An initial peer hint for the shared universe.
- Your change channel (email, feed, repo).
Post as a single page on the operator site or handshake repo. Link integrators to it.
Step 8 — notify integrators
A message on the change channel announcing the
coalition’s first CoalitionConfig fingerprint is
sufficient. Waiting integrators compile in the constant
and open handles.
Ongoing operations
Day-to-day work is covered by the other operator pages:
- Running a confluence for operating each committee (including modules).
- Federating under a coalition for coordinating with other per-citizen and per-confluence operators.
- Rotations and upgrades for key rotations, MR_TD bumps, and citizen retirements.
Failure modes
- Integrators report ConnectTimeout. Either the published CoalitionConfig drifted from the live deployment, or a referenced citizen retired and the coalition operator has not republished. Recompile the struct, freeze the fingerprint, publish.
- A module’s committee lost majority. Standard mosaik recovery. Referenced citizens and referenced confluences continue; integrators reading the module temporarily see no new commits.
- A referenced citizen retired silently. The per-citizen operator should have notified the coalition operator. File an anomaly with them, publish a new CoalitionConfig against the new stable id, and tighten change-channel discipline with that operator.
Cross-references
- Coalition overview
- Running a confluence
- Federating under a coalition
- Contributor — basic services
- builder operator quickstart
Running a confluence
audience: operators
Operating a confluence is, at the protocol level, identical to operating any mosaik-native committee organism: a committee, a state machine, a narrow public surface. This page is the delta — what is new because the confluence spans multiple citizens (lattices, standalone organisms, or a mix).
Modules (Atlas, Almanac, Chronicle) are coalition-scoped organisms; the same rules apply. Each module’s crate publishes its own page once commissioned.
For the base runbook — systemd units, dashboards, incident response — follow the builder operator runbooks. Each per-confluence crate publishes its own page once commissioned.
What to add on top of a single-committee runbook
- Per-citizen subscription health. A confluence reads from multiple citizens’ public surfaces. The dashboard carries one subscription-lag metric per spanned citizen, not an aggregated metric. A confluence that stalls typically stalls on one citizen first.
- Per-citizen ticket health. Committee members hold tickets from each spanned citizen’s operator. When a per-citizen operator rotates their ticket-issuance root, the confluence’s bonds into that citizen break until new tickets are issued to the committee. Monitor ticket validity horizons per citizen.
- Citizen-identity monitoring. When a referenced citizen retires (stable id changes) or bumps its content hash (if the confluence pinned content), the confluence’s own Config fingerprint is stale; the confluence must be redeployed under an updated ConfluenceConfig. Detect this before integrators do.
- Cross-operator communications. A confluence with citizens run by different operators requires a standing channel with each — at minimum a mailing list or chat channel for advance announcements of retirements and rotations.
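Per-citizen ticket health amounts to tracking one validity horizon per spanned citizen. A minimal sketch with illustrative names (real tickets come from each citizen’s operator):

```rust
// One tracked ticket per spanned citizen.
struct CitizenTicket {
    citizen: &'static str,
    valid_until: u64, // unix seconds
}

// Citizens whose tickets expire within the warning window: page the
// operator before the confluence's bonds into them break.
fn expiring(tickets: &[CitizenTicket], now: u64, warn_window: u64) -> Vec<&'static str> {
    tickets.iter()
        .filter(|t| t.valid_until <= now + warn_window)
        .map(|t| t.citizen)
        .collect()
}
```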
The lifecycle of a confluence commit
- Committee driver watches each spanned citizen’s public collection / stream.
- An upstream event fires; the driver wraps it in an Observe* command with an evidence pointer back to the upstream commit.
- The confluence’s Group commits the Observe* via Raft.
- Periodically (or on an apply-deadline timer), the driver issues an Apply command. The state machine reads accumulated observations and commits the confluence’s own fact.
- The confluence’s public surface serves the committed fact to integrators and to any downstream consumers (other confluences, other organisms).
Every step is standard mosaik machinery. The confluence- specific work is the driver’s multi-subscription logic.
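The Observe*/Apply cycle above can be sketched as a two-command state machine. Types and names here are illustrative, not the real mosaik Group API:

```rust
// The two command shapes the driver submits through the Group.
enum Command {
    Observe { citizen: &'static str, evidence: u64 }, // evidence pointer to the upstream commit
    Apply,
}

#[derive(Default)]
struct ConfluenceState {
    pending: Vec<(&'static str, u64)>,         // accumulated observations
    committed: Vec<Vec<(&'static str, u64)>>,  // one committed fact per Apply
}

impl ConfluenceState {
    fn apply_command(&mut self, cmd: Command) {
        match cmd {
            Command::Observe { citizen, evidence } => {
                self.pending.push((citizen, evidence));
            }
            Command::Apply => {
                // Fold accumulated observations into the confluence's own fact.
                let fact = std::mem::take(&mut self.pending);
                self.committed.push(fact);
            }
        }
    }
}
```

Because both command kinds flow through the same Raft log, every committee member folds the same observations into the same fact on replay.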
Rotations
A confluence rotates like any other organism:
- Committee member rotation. Add a new member under the same ConfluenceConfig; drain the old member; decommission. No fingerprint change.
- Committee admission policy rotation (e.g. an MR_TD bump). Requires a new ConfluenceConfig fingerprint. Announce to any coalition operators referencing the confluence and follow the rotations and upgrades sequence.
- Spanned-citizen-set rotation. Adding or removing a spanned citizen changes the confluence’s content fingerprint. This is a larger change; typically accompanied by a fresh ConfluenceConfig publication and a notice to referencing coalitions.
Rotations that do NOT break integrators
- Committee member swaps under a stable ConfluenceConfig. Integrators see a brief latency bump during drain; no handle failures.
Rotations that DO break integrators
- Any ConfluenceConfig fingerprint bump. Integrators compiled against the old config see ConnectTimeout on the confluence handle until they recompile against the new ConfluenceConfig (and any coalition referencing it updates its CoalitionConfig). Announce ahead of time via the change channel.
Retirement
When a confluence’s committee is shutting down
permanently, the committee emits a RetirementMarker as
its final commit on each public primitive. The marker
carries:
- effective_at — the Almanac tick or wall-clock at which the committee ceases to commit;
- replacement — an optional pointer to the replacement confluence so integrators rebind cleanly rather than timing out.
If any referencing coalition ships a Chronicle, the
retirement lands as ChronicleKind::ConfluenceRetired in
the next Chronicle entry.
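An illustrative shape for the marker (the real type ships with the confluence crates; Tick and StableId are stand-ins):

```rust
type Tick = u64;          // Almanac tick or wall-clock seconds
type StableId = [u8; 32]; // stable id of the replacement confluence

// Final commit a retiring committee emits on each public primitive.
struct RetirementMarker {
    effective_at: Tick,            // when the committee ceases to commit
    replacement: Option<StableId>, // rebind target, if one exists
}
```

A populated replacement lets integrators rebind without waiting out a ConnectTimeout; None means the confluence retires with no successor.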
Incident response specific to confluences
Two incident classes to add to your playbook on top of the single-organism classes in builder incident response.
Evidence-pointer resolution fails on replay
Symptom: a committee member replaying the log fails to resolve an evidence pointer to an upstream citizen commit. Cause: the upstream citizen committed the fact, the confluence observed it, but the citizen has since gone through a state compaction / reorg that removed the referenced commit from the public surface the confluence reads.
Response:
- The confluence state machine is required to reject such replays, not tolerate them.
- Confirm the issue is upstream-citizen retention, not confluence state.
- Coordinate with the per-citizen operator. The fix is usually citizen-side configuration: longer retention on the public surface the confluence subscribes to.
One spanned citizen goes dark
Symptom: no events from one of the spanned citizens’ public surfaces for multiple slots (or multiple publish ticks, if the citizen is an organism without a slot clock).
Response:
- Confirm with the per-citizen operator whether the outage is on their side or on the subscription.
- If on their side, fall back to the stall policy. A confluence committing partial evidence yields degraded commits; one stalling per slot yields no commits until the citizen returns. Integrators were warned in the composition-hooks doc.
- If on the subscription side: mosaik transport troubleshooting; nothing confluence-specific.
Cross-references
- Coalition overview
- Quickstart — stand up a coalition
- Federating under a coalition
- Rotations and upgrades
- Contributor — basic services
- builder operator runbooks
Federating under a coalition
audience: operators
A federated coalition is one whose referenced citizens
— lattices, standalone organisms, or a mix — are operated
by different teams. The blueprint does not make
federation special: a CoalitionConfig references
citizen stable ids regardless of who runs them.
Coordination patterns succeed or fail based on a few
out-of-band agreements.
This page catalogues those agreements. Federation itself provides only a shared handshake and a coordination channel; everything else is negotiated out of band.
The blueprint minimum
The coalition operator needs the stable id of each
referenced citizen. Any operator holding those ids can
publish a CoalitionConfig including those citizens;
there is no technical gate on referencing, and the
CoalitionConfig does not require the per-citizen
operator’s signature.
This is intentional. A stable-id reference is not a claim of authority over the citizen. A per-citizen operator objecting to a coalition reference has no technical recourse, and the reference also has no technical consequence: every confluence committee member still needs tickets from each per-citizen operator to bond into those citizens’ public surfaces.
What a federation agreement should cover
Documented even when trust between operators is high.
Notification cadence
- Citizen retirements (stable-id bumps). The per-citizen operator notifies the coalition operator at least one business day ahead of a retirement.
- Content-hash bumps that matter to confluences pinning content (MR_TD rotations on TDX-gated surfaces). Coordinate with any confluence operator whose pinning policy is affected.
- Ticket rotations. When a per-citizen operator rotates their ticket-issuance root, confluence committee members need new tickets before cutover.
- Retention changes. A citizen shortening retention of a public surface a confluence subscribes to breaks confluence replay. Notify the coalition operator and the confluence operator.
Ticket issuance for confluence committee members
The confluence committee reads each spanned citizen’s public surfaces and so requires per-citizen tickets. Agreements should cover:
- Who issues. Each per-citizen operator issues tickets for confluence committee members, exactly as for any integrator.
- Under what policy. Confluences usually need read-only tickets on specific surfaces (for a lattice: UnsealedPool, tally::Refunds; for an oracle organism: its feed stream). Scope accordingly.
- Rotation schedule. Tickets expire. A rotation cadence matching the per-citizen operator’s standard schedule is acceptable; unannounced rotations are not.
Optional coalition-scoped ticket issuer
A federated coalition may ship a coalition-scoped ticket
issuer (see
ticket issuance).
Components — confluences, modules, even external
integrators — decide per-component whether to recognise
its tickets. No component is required to. A federation
agreement can document which components plan to
recognise the coalition issuer, but recognition itself
is always in each component’s own TicketValidator
composition.
Confluence governance
When multiple citizens contribute inputs to one confluence, the confluence is itself a governance concern. Agreements cover:
- Who runs the committee? Often the coalition operator; sometimes a subset of the per-citizen operators; occasionally a separate team. Confluence identity is coalition-independent: the same committee serves every coalition that references it.
- Who decides on Config bumps? The confluence operator, informed by the affected per-citizen operators and by referencing coalitions.
- Conflict resolution. The escalation path when the confluence commits a record one per-citizen operator disputes (an attribution split, a cross-feed correlation).
None of this is in-protocol; all of it is contract. The coalition does not arbitrate.
Publishing a federated coalition
The handshake page makes federation visible:
- Per-citizen links. Each LatticeConfig or OrganismRef reference includes a link to the corresponding per-citizen operator’s handshake page.
- Per-confluence links. Each ConfluenceConfig reference links to the confluence operator’s handshake.
- Operator roll call. A table listing each operator and the roles it holds (lattice, organism, confluence, module, ticket issuer, coalition).
- Federation change channel. Typically a shared channel across all participating operators rather than the coalition operator’s individual channel, so that citizen-side changes are visible to every participant in-flight.
What federation does not change
- Mosaik-level security. Each citizen’s ACL is unchanged. Each confluence’s ACL is unchanged.
- Atomicity. No cross-citizen atomic commit. Federation does not provide it; no contract can.
- Integrator ergonomics. Integrators still compile one CoalitionConfig; a federated coalition is indistinguishable from a single-operator coalition at the handle API.
A healthy federated-coalition checklist
- Each referenced per-citizen operator has been notified.
- Each referenced confluence operator has been notified.
- Ticket issuance for confluence committee members is documented per citizen.
- The change channel covers citizen retirements, content-hash bumps, ticket rotations, and confluence config changes.
- The handshake page lists every operator and their role.
- The coalition’s modules and any referenced confluences have a documented stall / degradation policy consistent with each spanned citizen’s expected availability.
Cross-references
- Coalition overview
- Running a confluence
- Rotations and upgrades
- Contributor — threat model
- Contributor — ticket issuance
Rotations and upgrades
audience: operators
Rotation discipline at the coalition layer is thin: most
rotations occur inside one citizen (a lattice, a
standalone organism) or inside one confluence (including
a module). The coalition operator’s task is to republish
the CoalitionConfig when any referenced component
changes identity in a way that folds into
COALITION_ROOT. This page lists the rotations that
materially affect a coalition and the operation order for
each.
For rotations inside a block-building lattice organism, follow the builder rotations page. For rotations inside a standalone organism, follow that organism’s own runbook.
Rotations that change a coalition’s fingerprint
Any of the following forces a new CoalitionConfig
fingerprint. Publication is not optional: integrators
compiled against the old fingerprint see ConnectTimeout
on every handle until they recompile against the new
one.
- A referenced lattice’s stable id changes (retirement).
- A referenced standalone organism’s stable id changes (retirement).
- A referenced confluence’s ConfluenceConfig fingerprint changes (content, ACL, span, or MR_TD bump on a TDX-gated confluence).
- A module’s ConfluenceConfig fingerprint changes (content, ACL, or MR_TD bump).
- The coalition’s lattice set changes (add or remove a lattice).
- The coalition’s organism set changes (add or remove a standalone organism).
- The coalition’s confluence set changes (add or remove a confluence or module).
- The coalition’s ticket issuer is added, rotated, or removed.
- The coalition instance name changes (rare — effectively a retirement; see below).
Rotations that do not change a coalition’s fingerprint
- Committee member rotations inside a referenced citizen, under a stable per-citizen Config.
- Committee member rotations inside a confluence or module, under a stable ConfluenceConfig.
- MR_TD republications that match the previously pinned value (i.e. the image was rebuilt from the same source).
- Per-citizen content-hash bumps (MR_TD rotations, ACL rotations) if confluences pin only the stable id, not the content hash. Confluences that pinned content hash do bump.
These are invisible to integrators at the coalition layer. They are documented per-citizen or per-confluence, not per-coalition.
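The two lists above reduce to a classification: only changes to a referenced stable id, a referenced confluence fingerprint, the membership sets, or the ticket issuer bump the coalition fingerprint. A sketch (the enum and helper are illustrative names only, not part of any mosaik crate):

```rust
// Hypothetical classifier mirroring the two lists above. Variant and
// function names are illustrative; mosaik defines no such enum.
#[allow(dead_code)]
#[derive(Debug)]
enum Rotation {
    CitizenRetirement,         // a referenced stable id changes
    ConfluenceFingerprintBump, // a referenced ConfluenceConfig changes
    MembershipChange,          // lattice / organism / confluence set edited
    TicketIssuerChange,        // issuer added, rotated, or removed
    CommitteeMemberRotation,   // inside a citizen or confluence, config stable
    MatchingMrTdRepublication, // republished value matches the pinned one
    ContentHashBump,           // confluences pin only the stable id
}

// True iff the rotation forces a new CoalitionConfig fingerprint.
fn bumps_coalition_fingerprint(r: &Rotation) -> bool {
    matches!(
        r,
        Rotation::CitizenRetirement
            | Rotation::ConfluenceFingerprintBump
            | Rotation::MembershipChange
            | Rotation::TicketIssuerChange
    )
}

fn main() {
    assert!(bumps_coalition_fingerprint(&Rotation::CitizenRetirement));
    assert!(!bumps_coalition_fingerprint(&Rotation::CommitteeMemberRotation));
    assert!(!bumps_coalition_fingerprint(&Rotation::ContentHashBump));
}
```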
Ordering a citizen retirement
When a referenced citizen’s stable id changes, follow this sequence.
- Receive the retirement notification from the per-citizen operator via the federation change channel. The notification includes the old and new stable ids and the cutover time.
- Prepare the new CoalitionConfig. Update the citizen’s reference in the CoalitionConfig definition. Re-derive the coalition fingerprint. Modules re-derive automatically because COALITION_ROOT has changed. Commissioned confluences referenced by the coalition re-derive only if they themselves span the retired citizen.
- Redeploy modules under the new ConfluenceConfigs. Because each module’s content folds the coalition root, module fingerprints have bumped. Bring up new committee members on the new ConfluenceConfigs while old members continue on the old ones; rotate over.
- Publish the new CoalitionConfig fingerprint on the handshake page and change channel.
- Wait for integrators to recompile. Old-fingerprint handles time out (or, better, see a RetirementMarker from each retiring committee pointing at the replacement); integrators reach out if they miss the change.
- Retire the old module deployments once the old CoalitionConfig is no longer in use by the integrators the operator cares about. Each committee emits a terminal RetirementMarker before shutdown.
Ordering a confluence or module upgrade
A confluence or module’s Config may change for reasons
other than upstream citizen retirements: new content
parameters, MR_TD bump, ACL change.
- Stand up the new deployment under the new ConfluenceConfig alongside the old.
- Update any referencing coalitions’ CoalitionConfigs to reference the new confluence fingerprint. Recompute each coalition fingerprint.
- Publish each new CoalitionConfig.
- Announce the change via the change channel, with migration notes. If any referencing coalition ships a Chronicle, its next commit records the rotation.
- Retire the old deployment. The committee emits a RetirementMarker pointing at the new deployment so integrators rebind cleanly.
A transition window where old and new coexist is required; a hard cutover leaves every integrator timing out simultaneously.
Retiring a coalition
A coalition may be retired (renamed). Reasons:
- The composition has drifted far enough from the original intent that a clean break is preferable to incremental bumps.
- The set of operators participating in the coalition has reorganised.
- Branding or regulatory considerations.
Sequence:
- Announce retirement with substantial lead time. Integrators relying on the coalition need weeks to migrate. A shipped Chronicle records the retirement in its next commit.
- Stand up the replacement coalition (e.g. ethereum.superchain-v2026q2) with a fresh CoalitionConfig fingerprint and the desired composition.
- Run both coalitions in parallel until most integrator traffic has migrated.
- Retire the old coalition’s modules (committees emit RetirementMarkers pointing at the replacement coalition’s modules) once integrator traffic drops to zero or acceptable levels. Referenced confluences are independent of the retired coalition and continue if other coalitions or direct integrators still use them.
- Publish a retirement note. The old CoalitionConfig fingerprint is formally deprecated; integrators still on it see ConnectTimeout once the old modules are down.
Do not reuse an instance name for a different composition. A retired coalition’s name is spent; reissuing it under a new composition would produce silent mis-bindings across the integrator base.
Cross-references
Designing coalitions on mosaik
audience: contributors
This chapter extends the design intros of zipnet and builder from one organism → one lattice → one coalition: a named composition of coexisting mosaik citizens — block-building lattices, standalone organisms (oracles, attestation fabrics, identity substrates, analytics pipelines, …), cross-citizen organisms (confluences), and up to four basic services (Atlas, Almanac, Chronicle, Compute) plus an optional coalition-scoped ticket issuer.
The reader is assumed familiar with the mosaik book, the zipnet book, the builder book, and the two design intros cited above. This chapter does not re-derive content + intent addressing, the narrow-public-surface discipline, the shared-universe model, or within-lattice derivation; it uses them.
Stance
A coalition is a voluntary grouping. It offers services;
no component is required to accept them. The protocol
has no primitive for compulsion: every relationship is
opt-in at the TicketValidator layer, citizen identity is
independent of coalition membership, and basic services
never gate citizen operation. Every design choice in this
chapter follows from this baseline.
The problem, at N citizens
Builder composes six organisms under one LatticeConfig
for one EVM chain, solving “one end-to-end pipeline for
one chain” cleanly. It does not solve:
- Cross-citizen coordination at scale. An agent operating across ten block-building lattices holds ten LatticeConfigs; one additionally correlating across an oracle and an attestation organism holds twelve handshakes. Driver code pairs facts across citizens, reconciles partial observations, and recovers outcome correlators. The same correlator is re-implemented per agent. The builder book’s Shape 1 is correct for two or three lattices but an unbounded implementation burden at ten-plus heterogeneous citizens.
- Shared attribution. MEV captured on a block of lattice A originating from a searcher bundle submitted on lattice B — or a trade attributed by an analytics organism also reported by an oracle organism — is invisible to each service without explicit wiring. Ad hoc repetition is the same burden, one level deeper, across more boundary kinds.
- Consistent cross-citizen observability. Operators running five citizens want one dashboard; analytics integrators want one API. Both want a named composition with a fingerprint that moves in lockstep with what they subscribe to. The Atlas basic service addresses this.
- Operator coordination. Teams coordinating a set of citizens — every chain in a superchain stack, every oracle feed in a region, every ecosystem’s bonded organisms — need a naming unit for that coordination that does not itself become a consensus unit.
- Durable provenance. Decisions outlive any one CoalitionConfig. An operator grouping needs a tamper-evident record of its own publications, rotations, and retirements — the Chronicle basic service addresses this.
Promoting “a set of citizens” to an in-consensus group collapses every citizen’s trust boundary into one committee. Builder rejected that shape at the organism level; promoting it one rung higher inherits the same objection at higher cost.
The correct rung is a fingerprint above the citizens that names them for discovery, basic services, and module derivation — without consensus, state, or admission authority. That fingerprint is a coalition.
Two axes of choice
Same two axes zipnet picked and builder reaffirmed. Each rung of the ladder inherits the zipnet conclusion; the coalition rung does too.
- Network topology. Does a coalition live on its own NetworkId, or share a universe with every other mosaik service?
- Composition. How do citizens and confluences reference each other without creating cross-Group atomicity mosaik does not support — and, now, without creating cross-coalition atomicity either?
The blueprint chooses shared universe + citizenship-by-reference + no cross-Group, cross-citizen, or cross-coalition atomicity. The three choices are independent; each has a narrow rationale.
Shared universe (unchanged)
builder::UNIVERSE = unique_id!("mosaik.universe") — the
same constant zipnet and builder use. Every coalition,
lattice, standalone organism, confluence, and module
lives on it. Different coalitions coexist as overlays of
Groups, Streams, and Collections distinguished by
their content + intent addressed IDs. An integrator
caring about three coalitions holds one Network handle
and three CoalitionConfigs. Because coalitions
reference citizens rather than re-deriving them, an
integrator already bonded to some of the included citizens
observes no duplicate identity.
The alternative — one NetworkId per coalition, mirrored
all the way up — is rejected on the same grounds the
zipnet book rejects Shape A: operators already pay for one
mosaik endpoint, services compose, and one additional
endpoint per coalition would turn the coalition into a
routing device, which is not the rung’s function.
Citizenship-by-reference
A CoalitionConfig is a parent struct. Unlike a
LatticeConfig (which nests six organism Configs and
hashes them together), a CoalitionConfig references
existing citizen fingerprints without re-deriving the
citizens’ organism IDs. This is the key decision of the
coalition rung:
COALITION = blake3("coalition|" || SCHEMA_VERSION_U8
|| "|" || coalition_name)
COA_LATTICES = ordered stable ids of referenced lattices
COA_ORGANISMS = ordered stable ids of referenced organisms
COA_CONFS = ordered fingerprints of referenced confluences
COA_TICKETS = optional TicketIssuerConfig fingerprint (0 bytes if absent)
COALITION_ROOT = blake3(
COALITION
|| COA_LATTICES
|| COA_ORGANISMS
|| COA_CONFS
|| COA_TICKETS
)
Here COA_LATTICES[i] is the stable id of lattice i —
not a re-derivation. Likewise COA_ORGANISMS[j] is the
stable id the standalone organism j publishes in its
own handshake. Confluences are referenced by their own
fingerprint. Builder’s lattice identity is:
LATTICE(i) = blake3("builder|" || instance_name || "|chain=" || chain_id)
+ every organism's Config fingerprint inside that lattice
A coalition neither modifies LATTICE(i) nor the organism
roots under it, nor the Config of any standalone
organism or confluence it references. A citizen
referenced by three coalitions is one citizen; its
operators issue tickets once, its integrators compile it
in once, and its GroupIds are stable across coalition
memberships.
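The fold above can be sketched compactly. In this illustration std’s DefaultHasher stands in for blake3 and every id is a plain u64, so it shows only the shape of the derivation, not real fingerprints:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const SCHEMA_VERSION_U8: u8 = 1;

// Placeholder fingerprint: std's DefaultHasher standing in for blake3.
fn fp<T: Hash>(v: &T) -> u64 {
    let mut h = DefaultHasher::new();
    v.hash(&mut h);
    h.finish()
}

// COALITION_ROOT folds the name tag, stable ids (NOT re-derivations),
// confluence fingerprints, and the optional ticket-issuer fingerprint.
fn coalition_root(
    name: &str,
    lattice_stable_ids: &[u64],
    organism_stable_ids: &[u64],
    confluence_fps: &[u64],
    ticket_issuer_fp: Option<u64>,
) -> u64 {
    let coalition = fp(&("coalition|", SCHEMA_VERSION_U8, "|", name));
    fp(&(coalition, lattice_stable_ids, organism_stable_ids, confluence_fps, ticket_issuer_fp))
}

fn main() {
    let a = coalition_root("ethereum.superchain", &[1, 2, 3], &[10], &[100], None);
    // A second coalition referencing the same citizens derives its own
    // root; the citizens themselves are untouched.
    let b = coalition_root("signals.euro-fx", &[1, 2, 3], &[10], &[100], None);
    assert_ne!(a, b);
    // The sets are ordered: reordering changes the fingerprint.
    let c = coalition_root("ethereum.superchain", &[2, 1, 3], &[10], &[100], None);
    assert_ne!(a, c);
}
```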
Rationale for by-reference over by-containment:
- Citizens predate coalitions. The first production organism and first production lattice ship before the first coalition. An integrator already bonded to a specific citizen must not have to re-bond when an operator composes that citizen into a new coalition.
- Citizens outlive coalitions. Coalition membership is a composition choice and changes; citizen identity is an operator-level commitment. Coalitions must not force churn.
- Citizens belong to multiple coalitions. The shared-universe choice implies a citizen may be referenced from several coalitions simultaneously — one per operator grouping, one per analytics vendor, one per ecosystem grouping. Re-derivation would force a cartesian product of IDs.
Citizen reference shape
A citizen is referenced by two values: its stable id
— the (instance_name, network_id) blake3 hash — which
changes only on retirement; and its content hash,
which folds the citizen’s ACL, MR_TDs, and policy
parameters and bumps with every operational rotation.
CoalitionConfig references citizens by stable id.
Confluences and modules choose per-component whether they
also pin the content hash. A read-only correlator that
does not bond into TDX-gated surfaces pins only the stable
id and survives MR_TD rotations in its upstream citizens.
A confluence whose committee bonds into a TDX-gated
surface pins the content hash and redeploys on rotation.
The split decouples MR_TD rotation churn from coalition-level identity. A coalition referencing ten citizens
rotating TDX monthly sees its COALITION_ROOT change only
on citizen retirement or coalition membership change, not
on routine operational rotations.
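A sketch of the split’s effect, again with placeholder u64 hashes (DefaultHasher standing in for blake3; the struct is illustrative, not the mosaik type):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder fingerprint standing in for blake3.
fn fp<T: Hash>(v: &T) -> u64 {
    let mut h = DefaultHasher::new();
    v.hash(&mut h);
    h.finish()
}

#[allow(dead_code)]
struct CitizenRef {
    stable_id: u64,    // changes only on retirement
    content_hash: u64, // bumps on every operational rotation
}

// The coalition root folds stable ids only, never content hashes.
fn coalition_root(name: &str, citizens: &[CitizenRef]) -> u64 {
    let ids: Vec<u64> = citizens.iter().map(|c| c.stable_id).collect();
    fp(&(name, ids))
}

fn main() {
    let before = [CitizenRef { stable_id: 7, content_hash: 1000 }];
    // Routine MR_TD rotation: content hash bumps, stable id unchanged.
    let rotated = [CitizenRef { stable_id: 7, content_hash: 1001 }];
    assert_eq!(coalition_root("c", &before), coalition_root("c", &rotated));
    // Retirement: stable id changes, so COALITION_ROOT cascades.
    let retired = [CitizenRef { stable_id: 8, content_hash: 1001 }];
    assert_ne!(coalition_root("c", &before), coalition_root("c", &retired));
}
```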
Open for v0.3 decision. Whether content-hash pinning is a per-confluence choice or a coalition-wide policy is unresolved. Leaving it per-component for v0.2; CoalitionConfig may gain a policy flag later.
Builder’s LatticeConfig derivation already distinguishes
the two in practice; this rung surfaces the distinction
in the coalition-layer spec. Both LatticeConfig and
OrganismRef expose a stable_id() accessor separate
from their full fingerprint accessor.
Confluences as their own rung
Confluences are organisms in the zipnet sense: one
Config fingerprint, a narrow public surface, a ticket-
gated ACL. What distinguishes them is that their content
folds two or more citizen fingerprints. A confluence’s
identity does not fold a coalition root:
CONFLUENCE(cfg) = blake3(
"confluence|"
|| SCHEMA_VERSION_U8
|| "|" || name
|| ordered spanned lattice references
|| ordered spanned organism references
|| content_fingerprint
|| acl_fingerprint
)
CONFLUENCE_committee = CONFLUENCE(cfg).derive("committee")
A confluence is therefore referenced by zero, one, or
many coalitions exactly as a lattice or a standalone
organism is. Its committee, state machine, and log are
shared across those references. A confluence misconfigured
against the wrong spanned citizens derives a disjoint
GroupId and surfaces as ConnectTimeout — the ladder’s
standard debuggable failure mode.
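The independence claim can be sketched the same way: the confluence fingerprint folds its spanned citizens and content but no coalition root, so every coalition referencing it resolves to the same committee id (placeholder hash; all names illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder fingerprint standing in for blake3.
fn fp<T: Hash>(v: &T) -> u64 {
    let mut h = DefaultHasher::new();
    v.hash(&mut h);
    h.finish()
}

// CONFLUENCE(cfg): no coalition input anywhere in the fold.
fn confluence(name: &str, spanned: &[u64], content: u64, acl: u64) -> u64 {
    fp(&("confluence|", 1u8, "|", name, spanned, content, acl))
}

// Illustrative derive: child id from parent id plus a label.
fn derive(root: u64, label: &str) -> u64 {
    fp(&(root, label))
}

fn main() {
    let c = confluence("intents", &[1, 2], 42, 7);
    let committee = derive(c, "committee");
    // Misconfiguring the spanned citizens yields a disjoint id — the
    // ConnectTimeout failure mode, not a partially overlapping group.
    let wrong = confluence("intents", &[1, 3], 42, 7);
    assert_ne!(c, wrong);
    assert_ne!(committee, derive(wrong, "committee"));
}
```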
Modules (Atlas, Almanac, Chronicle) are the only
coalition-scoped components. Their identity derives under
COALITION_ROOT.derive("module").derive(name) where
name is "atlas", "almanac", or "chronicle". An
optional ticket issuer derives under
COALITION_ROOT.derive("tickets").
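The derive chain for coalition-scoped components can be sketched with the same placeholder hash (the derive shape here is illustrative, not mosaik's real one):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder fingerprint standing in for blake3.
fn fp<T: Hash>(v: &T) -> u64 {
    let mut h = DefaultHasher::new();
    v.hash(&mut h);
    h.finish()
}

// Illustrative derive: child id from parent id plus a label.
fn derive(root: u64, label: &str) -> u64 {
    fp(&(root, label))
}

fn main() {
    let coalition_root = fp(&"ethereum.superchain");
    let module = derive(coalition_root, "module");
    let atlas = derive(module, "atlas");
    let almanac = derive(module, "almanac");
    let chronicle = derive(module, "chronicle");
    let tickets = derive(coalition_root, "tickets");
    // Distinct labels under the same root give distinct, deterministic ids.
    assert_ne!(atlas, almanac);
    assert_ne!(almanac, chronicle);
    assert_ne!(atlas, tickets);
    assert_eq!(atlas, derive(derive(fp(&"ethereum.superchain"), "module"), "atlas"));
}
```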
No cross-Group, cross-citizen, or cross-coalition atomicity
Every rung below refuses cross-Group atomicity. The coalition rung refuses cross-citizen and cross-coalition atomicity by the same argument:
- Cross-Group (one citizen, two organisms) — refused by builder and by every organism’s own design. Organisms subscribe to each other’s public surfaces; there is no atomic multi-organism commit.
- Cross-citizen (one coalition, two citizens) — refused by this blueprint. Confluences read public surfaces of the citizens they span; their state machine commits its own facts. A confluence never forces two citizens to commit atomically.
- Cross-coalition (two coalitions) — refused by this blueprint. Coalitions do not coordinate in consensus. A use case for cross-coalition coordination resolves either to an integrator spanning coalitions or to a new confluence whose content fingerprint folds the relevant citizens (which are already referenced by both coalitions anyway).
Fixed shapes, open catalogs
The blueprint’s central discipline; restated because every downstream choice flows from it.
Specified
- The CoalitionConfig struct and derivation rules. Every coalition has this shape; every coalition’s identity is computed this way.
- The ConfluenceConfig and OrganismRef structs. Every confluence and every standalone-organism reference has these shapes.
- The four basic-service shapes (Atlas, Almanac, Chronicle, Compute). Every instance of “Atlas” in any coalition has the same Config signature, public surface, and derivation. Same for the other three. Specification in basic services.
- The optional ticket-issuer shape. Specification in ticket issuance.
- Citizenship discipline — by-reference, multi-coalition-compatible, no re-derivation; stable-id and content-hash split.
Left open
- Whether any given coalition ships any basic services. A coalition with zero services is valid.
- Whether any given coalition ships a ticket issuer.
- The catalog of commissioned confluences. Each arrives when a cross-citizen problem forces it.
- The catalog of standalone-organism categories and lattice classes. New categories arrive with their own books. Block-building lattices are the first specified class; attestation fabrics, oracle grids, DA shards, and other categories follow their own proposals.
- Inter-coalition coordination. Coalitions coexist on the shared universe; a concrete need for coordination earns its way in through a fresh proposal.
- Governance of the coalition itself. Who approves CoalitionConfig bumps, how operator groupings form and dissolve, what contracts multi-operator coalitions sign — all out of band.
Deliberately deferred (heavy-coalition)
A family of primitives the blueprint refuses:
- Enforceable policies over citizens — a CoalitionConfig declaring constraints citizens must satisfy (MR_TD requirements, bond minimums, code-audit attestations).
- Taxation — a protocol-level mechanism for the coalition to collect fees from citizens’ commits.
- Judiciary — a confluence whose commits have protocol-level precedence over individual citizens’, resolving disputes.
- Mandatory citizenship — any primitive whose absence renders a citizen’s commits invalid in the coalition’s view.
- Legislation — a state machine in the coalition that citizens must query and conform to before acting.
Each is a coherent design space, and adopting any one forces the blueprint to rewrite the trust composition and rebuild every passage that assumes voluntary participation. Any future heavy-coalition proposal lives on its own branch and earns the transition; the current blueprint neither anticipates nor precludes one.
Flag in-source and in-docs as “heavy-coalition” when the topic arises so the word’s current scope is unambiguous.
Coalition identity
A coalition is identified by a CoalitionConfig folding
every root input into one deterministic fingerprint.
Operators publish the CoalitionConfig as lattice
operators publish a LatticeConfig and organism operators
publish their own Config; integrators compile it in.
pub const SCHEMA_VERSION_U8: u8 = 1;
#[non_exhaustive]
pub struct CoalitionConfig<'a> {
/// Short, stable, namespaced name chosen by the
/// coalition operator (e.g. "ethereum.superchain",
/// "signals.euro-fx").
pub name: &'a str,
/// Ordered set of block-building lattices this
/// coalition composes. Referenced by stable id;
/// this struct does NOT re-derive any lattice's
/// organism IDs.
pub lattices: &'a [LatticeConfig<'a>],
/// Ordered set of standalone organisms this coalition
/// composes — any mosaik organism with a Config
/// fingerprint living directly on the universe. Held
/// by reference; the concrete Config type lives in
/// the organism's own crate.
pub organisms: &'a [OrganismRef<'a>],
/// Ordered set of confluences referenced by this
/// coalition. A confluence's identity is derived
/// independently of any coalition; the same
/// confluence may be referenced by many coalitions.
pub confluences: &'a [ConfluenceConfig<'a>],
/// Optional coalition-scoped ticket issuer.
/// Non-authoritative: no component is required to
/// accept tickets from this issuer. Organisms and
/// confluences opt in via their own TicketValidator
/// composition.
pub ticket_issuer: Option<TicketIssuerConfig<'a>>,
}
#[non_exhaustive]
pub struct OrganismRef<'a> {
pub role: &'a str,
/// blake3((instance_name, network_id)); changes only
/// on retirement.
pub stable_id: UniqueId,
/// Optional content hash (ACL, MR_TDs, policy).
/// Components that pin this redeploy on operational
/// rotations.
pub content_hash: Option<UniqueId>,
}
impl CoalitionConfig<'_> {
/// blake3("coalition|" || SCHEMA_VERSION_U8 || "|"
/// || name || ...)
pub const fn coalition_id(&self) -> UniqueId { /* ... */ }
/// Module lookup helpers. Return Some if the
/// coalition ships that module, None otherwise.
pub const fn atlas (&self) -> Option<&ConfluenceConfig<'_>> { /* ... */ }
pub const fn almanac (&self) -> Option<&ConfluenceConfig<'_>> { /* ... */ }
pub const fn chronicle(&self) -> Option<&ConfluenceConfig<'_>> { /* ... */ }
}
CoalitionConfig is lifetime-parameterised so runtime
construction is a first-class path. All existing const
instances continue to compile under 'static inference.
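A minimal sketch of why the lifetime parameter matters, with mock stand-ins for the real structs: a config assembled at runtime from owned strings borrows them, while const instances still infer 'static:

```rust
// Mock stand-ins for the real structs; fields trimmed to the minimum.
struct LatticeConfig<'a> {
    name: &'a str,
}

struct CoalitionConfig<'a> {
    name: &'a str,
    lattices: &'a [LatticeConfig<'a>],
}

fn main() {
    // Runtime path: lattice names arrive as owned data (say, parsed from
    // operator release notes) and the config borrows them.
    let names: Vec<String> = vec!["eth-mainnet".into(), "base-mainnet".into()];
    let lattices: Vec<LatticeConfig> = names
        .iter()
        .map(|n| LatticeConfig { name: n.as_str() })
        .collect();
    let runtime = CoalitionConfig { name: "ethereum.superchain", lattices: &lattices };
    assert_eq!(runtime.lattices.len(), 2);

    // Const path: unchanged, 'static inferred as before.
    const ETH: [LatticeConfig<'static>; 1] = [LatticeConfig { name: "eth-mainnet" }];
    const C: CoalitionConfig<'static> =
        CoalitionConfig { name: "ethereum.superchain", lattices: &ETH };
    assert_eq!(C.name, "ethereum.superchain");
    assert_eq!(C.lattices[0].name, "eth-mainnet");
}
```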
An integrator binds to the coalition by passing the
CoalitionConfig into whichever citizen or confluence
handles they need:
use std::sync::Arc;
use mosaik::Network;
use builder::{LatticeConfig, UNIVERSE};
use coalition::CoalitionConfig;
const ETH_SUPERCHAIN: CoalitionConfig<'static> = CoalitionConfig {
name: "ethereum.superchain",
lattices: &[ETH_MAINNET, UNICHAIN_MAINNET, BASE_MAINNET],
organisms: &[EUR_PRICE_FEED],
confluences: &[SUPERCHAIN_ATLAS, INTENTS_ROUTER, SHARED_LEDGER],
ticket_issuer: None,
};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
let eth_bid = offer::Offer::<Bundle>::bid(&network, &ETH_SUPERCHAIN.lattices[0].offer).await?;
let eur_feed = oracle::Feed::<Price>::read(&network, ETH_SUPERCHAIN.organisms[0].config()).await?;
let intents = intents::Router::<Intent>::publish(&network, &ETH_SUPERCHAIN.confluences[1]).await?;
let atlas = atlas::Atlas::<CitizenCard>::read(&network, ETH_SUPERCHAIN.atlas().unwrap()).await?;
// ...
Ok(())
}
Each confluence, module, and standalone organism exposes
typed free-function constructors in the shape zipnet ships
and builder reproduces
(Organism::<D>::verb(&network, &Config)). Raw mosaik IDs
never cross a crate boundary.
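The convention can be sketched with mock types (everything below is a stand-in; the real Network, Config, and handle types live in their own crates, and the id derivation is a placeholder):

```rust
use std::marker::PhantomData;

// Mock stand-ins for mosaik::Network and an organism Config.
struct Network;
struct Config {
    name: &'static str,
}

// Raw id: stays private to the crate, never exported.
#[derive(Debug, PartialEq)]
struct StreamId(u64);

// Typed handle returned to integrators.
pub struct Feed<D> {
    id: StreamId,
    _data: PhantomData<D>,
}

impl<D> Feed<D> {
    // The zipnet-shaped verb: Organism::<D>::verb(&network, &Config).
    // Placeholder derivation; the real one folds the Config fingerprint.
    pub fn read(_network: &Network, cfg: &Config) -> Feed<D> {
        Feed { id: StreamId(cfg.name.len() as u64), _data: PhantomData }
    }
}

fn main() {
    let network = Network;
    let cfg = Config { name: "eur-fx" };
    // The integrator sees a typed handle; the StreamId never crosses over.
    let feed: Feed<f64> = Feed::read(&network, &cfg);
    assert_eq!(feed.id, StreamId(6));
}
```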
Fingerprint convention, not a registry
Same discipline as zipnet and builder:
- The coalition operator publishes the CoalitionConfig struct (or a serialised fingerprint) as the handshake.
- Consumers compile it in.
- If TDX-gated, the operator publishes the committee MR_TDs for every confluence, module, and commissioned standalone organism whose admission is TDX-attested. Per-citizen MR_TDs are published by the per-citizen operator; the coalition operator re-publishes them for convenience, not as authority.
- There is no on-network coalition registry. A directory collection of known coalitions may exist as a devops convenience; it is never part of the binding path.
Typos in the coalition instance name, in citizen
fingerprint ordering, or in any confluence parameter
surface as Error::ConnectTimeout, not “coalition not
found”. The library cannot distinguish “nobody runs this”
from “operator is not up yet” without a registry, and
adding one would misrepresent the error.
What a coalition is not
- A citizen’s owner. A coalition references a citizen; per-citizen operators control admission, rotate keys, retire instances. The coalition’s CoalitionConfig updates independently.
- A consensus unit. There is no coalition-level Raft group, no coalition state machine, no coalition log. The coalition layer adds naming, modules, and module derivation — no consensus.
- A routing device. Coalitions do not carry traffic between citizens. Every citizen and confluence discovers peers via mosaik’s usual gossip on the shared universe. The coalition fingerprint is a compile-time token, not a runtime hub.
- An ACL root. A coalition does not issue tickets on behalf of referenced citizens. Each citizen’s TicketValidator composition is unchanged; each confluence has its own TicketValidator. The optional ticket issuer is opt-in per component: components that want to recognise its tickets include its issuance root in their validator composition; others ignore it.
- A closed catalog. Unlike builder’s six organisms, this blueprint does not enumerate a canonical set of lattice classes, standalone-organism categories, or confluences. Real citizens and confluences ship when a concrete problem forces them.
Four conventions (three inherited, one added)
Every confluence, module, and standalone organism the blueprint references reproduces the three zipnet conventions verbatim; the coalition adds one.
- Identifier derivation from the organism’s Config fingerprint. Inherited from zipnet.
- Typed free-function constructors. Organism::<D>::verb(&network, &Config) returning typed handles; raw IDs never leak across crate boundaries. Inherited from zipnet; mirrored by every organism in every lattice, every confluence, and every module.
- Fingerprint, not registry. Inherited from zipnet; reaffirmed by builder and here.
- Citizenship-by-reference, module-by-derivation. New for the coalition rung. A CoalitionConfig folds citizen fingerprints as content without re-deriving their organism IDs; confluences are likewise referenced, not derived. Modules (Atlas, Almanac, Chronicle) and an optional ticket issuer are the components whose identity folds COALITION_ROOT.
What the pattern buys
- Collapsed integrator model: one Network, one CoalitionConfig per coalition, typed handles on each referenced citizen, confluence, and consumed module.
- Operator pacing. A coalition starts as one operator’s set of citizens, adds a second operator’s citizen without touching the first’s, and adds a confluence or module without forcing either per-citizen operator to change anything.
- Clean retirement. A commissioned confluence retires by the coalition’s next CoalitionConfig omitting it — or, if the confluence itself is going away, by a retirement marker on its own stream. Integrators see ConnectTimeout, the ladder’s standard failure mode, or a clean upgrade pointer.
- Cross-citizen services as organisms, not protocols. A shared-ledger confluence, cross-feed correlator, cross-chain intent router, Atlas, or Almanac is a mosaik organism following the zipnet pattern. Its author reuses every primitive, ticket validator, and replication guarantee.
- Open catalog. New lattice classes, standalone-organism categories, and confluences arrive on their own schedules and fold into the existing handshake shape without forcing the coalition layer to change.
Where the pattern strains
Three limitations a contributor extending this must be explicit about.
Citizen upgrades churn coalition-scoped identity
A citizen-level change bumping its stable id — a retirement
— changes the corresponding entry in COA_LATTICES or
COA_ORGANISMS for every coalition that references the
citizen, which in turn changes COALITION_ROOT and every
module derived under it.
The stable-id / content-hash split keeps routine
operational rotations out of COALITION_ROOT: MR_TD
rotations do not change the stable id and therefore do not
cascade. A retirement does; retirement is rare and
announced.
Commissioned confluences do not derive under
COALITION_ROOT, so retirements ripple through modules
only. A confluence re-derives only when the citizens it
itself spans retire or when its own content changes.
Wide confluences are heavy
A confluence reading public surfaces from ten citizens simultaneously runs ten subscriptions, holds peer entries for ten ticket-issuance roots, and is sensitive to the liveness of all ten. The coalition blueprint does not make this easier; it makes it legible.
Commissioning contributors should consider sharding: a per-pair confluence over interesting citizen pairs, composed by an integrator, is often cheaper and more debuggable than one wide confluence.
The coalition is not cryptographically authoritative
A CoalitionConfig fingerprint is a naming convenience.
It does not sign, commit on-chain, or expose a channel for
per-citizen operators to discover which coalitions
reference their citizen. A per-citizen operator objecting
to being referenced has no technical recourse — anyone
with a list of citizen stable ids can publish a
CoalitionConfig.
This is intentional. The only cryptographic check is
whether each referenced citizen admits the confluences’
committee peers via its own TicketValidator. Per-
citizen and coalition operators negotiate references out
of band.
Checklist for commissioning a new confluence
- Identify the one or two public primitives. If the surface is not a small, named, finite set of streams or collections, the interface is not yet designed.
- Identify the citizens it spans. Record as ordered sequences of citizen references in the confluence’s Config — lattices, organisms, or both. Order matters; changing it changes the fingerprint.
- Pick the confluence name. Folded into the confluence’s own derivation; no coalition root is involved.
- Define the Config fingerprint inputs. Content parameters affecting the state-machine signature, upstream public surfaces subscribed to, ACL composition.
- Write typed constructors. Confluence::<D>::verb(&network, &Config). Never export raw StreamId/StoreId/GroupId across the crate boundary.
- Specify TicketValidator composition on the public primitives. ACL lives there. If the confluence wants to recognise any coalition’s ticket issuer, its validator composition includes that issuance root directly; the confluence chooses, the coalition does not impose.
- Decide per-citizen-failure behaviour. If citizen A is down, does the confluence keep processing remaining citizens’ inputs or stall? Document as part of the composition contract.
- Document which citizens’ public surfaces are read. This is the composition contract; changes to it touch Composition.
- Declare dependency on any modules. If the confluence aligns to Almanac ticks or observes Chronicle entries, state so in the crate documentation.
- Answer: does this confluence meaningfully aggregate across citizens, or is it an integrator in disguise? If an ordinary integrator holding the same N citizen handshakes could do the same work, skip the confluence. The confluence pattern earns its keep when the aggregated fact is itself a commit with its own consumers.
- State the versioning story. If the answer to “what happens when one of the referenced citizens bumps its fingerprint?” is not defined, the design is incomplete.
Cross-references
- Anatomy of a coalition — concrete instantiation of the pattern for one example coalition.
- Basic services — Atlas, Almanac, Chronicle specified.
- Ticket issuance — the optional coalition-scoped issuance root.
- Confluences — shape and discipline of cross-citizen organisms.
- Composition — how citizens and confluences wire together without cross-Group atomicity.
- Atomicity boundary — what the coalition layer inherits and adds.
- Threat model — compositional trust.
- Roadmap — versioning, heavy-coalition as a deferred space, first commissioned confluences.
Anatomy of a coalition
audience: contributors
This chapter is the concrete instantiation of the pattern
described in Designing coalitions on mosaik.
It walks one example coalition end-to-end: the fingerprint
derivation, the public surface each component exposes on
the shared universe, the subscription graph across
citizen boundaries, and the way an integrator binds
handles through one CoalitionConfig.
The reader is assumed to have read the topology intro, the basic-services spec, and the builder architecture chapter.
Coalition identity recap
COALITION = blake3("coalition|" || SCHEMA_VERSION_U8
|| "|" || coalition_name)
COALITION_ROOT = blake3(
COALITION
|| ordered lattice stable ids
|| ordered organism stable ids
|| ordered confluence fingerprints
|| optional ticket-issuer fingerprint
)
// Lattice IDs: NOT re-derived by the coalition.
LATTICE_i = <builder's own derivation for lattice i>
ORGANISM_IN_i = <builder's own derivation for each organism in lattice i>
// Standalone-organism IDs: NOT re-derived by the coalition.
STANDALONE_j = <the organism's own derivation>
// Confluence IDs: derived independently of any coalition.
CONFLUENCE_c = blake3("confluence|" || SCHEMA_VERSION_U8
|| "|" || name
|| spanned citizens
|| content
|| acl)
// Module IDs: the three coalition-scoped components.
ATLAS_ROOT = COALITION_ROOT.derive("module").derive("atlas")
ALMANAC_ROOT = COALITION_ROOT.derive("module").derive("almanac")
CHRONICLE_ROOT = COALITION_ROOT.derive("module").derive("chronicle")
// Optional coalition-scoped ticket issuer.
TICKETS_ROOT = COALITION_ROOT.derive("tickets")
Citizen identity travels with the citizen; confluence identity travels with the confluence. Only module identity (and the optional ticket issuer) is coalition-bound. See topology-intro — Citizenship-by-reference and topology-intro — Confluences as their own rung.
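The derive-chain shape above can be sketched in dependency-free Rust. The real derivation uses blake3 keyed derivation; this sketch substitutes std's `DefaultHasher` purely so it runs without external crates, and `UniqueId` here is an illustrative stand-in, not the shipped type.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct UniqueId(u64);

impl UniqueId {
    /// Fold a label into the current id, yielding a child id.
    /// Stand-in for the blake3-based derive in the spec.
    fn derive(self, label: &str) -> UniqueId {
        let mut h = DefaultHasher::new();
        self.0.hash(&mut h);
        label.hash(&mut h);
        UniqueId(h.finish())
    }
}

fn main() {
    let coalition_root = UniqueId(42);
    // Module identity is coalition-bound: it folds COALITION_ROOT.
    let atlas = coalition_root.derive("module").derive("atlas");
    let almanac = coalition_root.derive("module").derive("almanac");
    assert_ne!(atlas, almanac);
    // A different coalition root yields different module identities,
    // which is why every module bumps on citizen retirement.
    assert_ne!(atlas, UniqueId(43).derive("module").derive("atlas"));
}
```

The point the sketch makes is the one the recap states in prose: only module (and ticket-issuer) identity hangs off `COALITION_ROOT`; citizen and confluence identities are derived elsewhere and merely referenced.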
Integrators bind via the CoalitionConfig they compile in
from operator-published release notes:
use std::sync::Arc;
use mosaik::Network;
use builder::UNIVERSE;
use coalition::CoalitionConfig;
const ETH_SUPERCHAIN: CoalitionConfig<'static> = /* operator-published */;
let network = Arc::new(Network::new(UNIVERSE).await?);
// Lattice organism handles — identical to the builder book.
let eth_bid = offer::Offer::<Bundle>::bid(&network, ETH_SUPERCHAIN.lattices[0].offer).await?;
let uni_bid = offer::Offer::<Bundle>::bid(&network, ETH_SUPERCHAIN.lattices[1].offer).await?;
let eth_tally = tally::Tally::<Attribution>::read(&network, ETH_SUPERCHAIN.lattices[0].tally).await?;
// Standalone-organism handles — same shape, concrete crate on the right.
let eur_feed = oracle::Feed::<Price>::read(&network, ETH_SUPERCHAIN.organisms[0].config()).await?;
// Module handles — resolved via typed helpers.
let atlas = atlas::Atlas::<CitizenCard>::read(&network, ETH_SUPERCHAIN.atlas().unwrap()).await?;
let almanac = almanac::Almanac::<AlmanacTick>::read(&network, ETH_SUPERCHAIN.almanac().unwrap()).await?;
// Confluence handles — the same shape, one rung up.
let intents = intents::Router::<Intent>::publish(&network, ETH_SUPERCHAIN.confluences_by_name("intents").unwrap()).await?;
let ledger = ledger::Shared::<Refund>::read(&network, ETH_SUPERCHAIN.confluences_by_name("ledger").unwrap()).await?;
An integrator only opens the handles they need. A cross-
chain searcher typically holds bid for several lattices
and intents::publish; a shared-ledger analytics consumer
typically holds ledger::read only; a cross-feed
correlator holds one oracle handle plus one confluence
handle; an observability operator holds atlas::read and
opens per-citizen handles from there.
Public surface summary
The coalition’s outward-facing primitives decompose into:
- Every public primitive of every referenced lattice, unchanged. The builder book’s lattice public surface table applies without modification — the coalition does not touch lattice primitives.
- Every public primitive of every referenced standalone organism, unchanged. Each organism’s own book is the reference.
- Every module’s public surface (Atlas, Almanac, Chronicle — whichever are shipped), specified in basic services.
- One or two public primitives per referenced confluence, following the same narrow-surface discipline as every other organism.
For an example coalition containing two confluences — an intent router and a shared ledger — plus Atlas and Almanac as modules, the coalition’s own added surfaces might look like:
| Component | Write-side primitives | Read-side primitives |
|---|---|---|
| atlas | (committee-only) | Atlas map (CitizenId → CitizenCard) |
| almanac | (committee-only) | Ticks stream |
| intents | Publish<Intent> stream | RoutedIntents collection (per slot, per target lattice) |
| ledger | (internal commits only) | SharedRefunds collection (keyed by origin citizen) |
“Committee-only” and “internal commits only” mean the primitive is ticket-gated to the organism’s committee peers. It still lives on the shared universe; the ticket gate supplies access control.
Names above are illustrative. The blueprint specifies Atlas/Almanac/Chronicle by role; all other confluence names are per-commission choices.
An example coalition: ethereum.superchain
Concrete walk-through of one plausible coalition. None of the confluences or services below are shipped; the sketch shows how the pieces compose.
Composition
- Lattices: ethereum.mainnet, unichain.mainnet, base.mainnet — all three referenced by stable id, none re-derived.
- Standalone organisms: price-feed.eur — an oracle feed publishing EUR/USD prices on the shared universe, referenced by OrganismRef.
- Modules shipped: Atlas (directory), Almanac (tick beacon). Chronicle is not shipped in this example; the operator can add it later without reshuffling citizens.
- Confluences referenced:
  - intents — a cross-chain intent router (reads from each lattice’s unseal::UnsealedPool, commits per-slot RoutedIntents keyed by target lattice).
  - ledger — a shared-refund aggregator (reads from each lattice’s tally::Refunds, commits SharedRefunds keyed by origin citizen).
Fingerprint sketch
ETH_SUPERCHAIN = blake3("coalition|" || SCHEMA_VERSION_U8
|| "|ethereum.superchain")
COALITION_ROOT = blake3(
ETH_SUPERCHAIN
|| LatticeConfig.stable_id(ethereum.mainnet)
|| LatticeConfig.stable_id(unichain.mainnet)
|| LatticeConfig.stable_id(base.mainnet)
|| OrganismRef.stable_id(price-feed.eur)
|| ConfluenceConfig.id(intents)
|| ConfluenceConfig.id(ledger)
|| (no ticket issuer)
)
ATLAS_ROOT = COALITION_ROOT.derive("module").derive("atlas")
ALMANAC_ROOT = COALITION_ROOT.derive("module").derive("almanac")
// Confluences derive independently of the coalition.
INTENTS_ID = blake3("confluence|" || SCHEMA_VERSION_U8 || "|intents"
|| [ethereum, unichain, base]
|| intents_content
|| intents_acl)
LEDGER_ID = blake3("confluence|" || SCHEMA_VERSION_U8 || "|ledger"
|| [ethereum, unichain, base]
|| ledger_content
|| ledger_acl)
// Per-component committee derivations follow the usual pattern.
A confluence’s spanned-citizen set may be a subset of the
coalition’s citizen set. An intents confluence might
span all three lattices while a feed-oracle confluence
spans price-feed.eur and one lattice. Each confluence
chooses its own span.
Participants
Not every component is operated by the same entity. The coalition distinguishes six classes of participant.
- Lattice operator — the team responsible for a referenced LatticeConfig’s identity, its committees, and its ACL. One or more lattice operators per coalition; each operates independently.
- Standalone-organism operator — the team responsible for a standalone organism (e.g. price-feed.eur). Operates independently.
- Confluence operator — the team responsible for one confluence’s committee members. Confluence operation is independent of any referencing coalition.
- Module operator — the team responsible for Atlas, Almanac, or Chronicle’s committee members. Often the coalition operator themselves; may be delegated.
- Coalition operator — the team responsible for publishing and maintaining the CoalitionConfig itself. Often coincides with one of the above; does not have to. Has no technical authority over the referenced citizens or confluences.
- Integrator — an external developer who runs no committee members and binds handles against the coalition from their own mosaik agent.
Data flow across one slot
The canonical happy path for one slot through this example coalition. Each step is a commit into one organism’s (or one confluence’s) state machine; transitions between steps are stream/collection subscriptions. No arrow represents an atomic multi-Group commit.
integrator organism / confluence commit / effect
---------- ---------------------- ----------------------------
wallet ──► zipnet[eth] ───► Broadcasts grows (sealed envelopes for slot S)
unseal[eth] ───► UnsealedPool[S] populated
(same for uni, base)
(publisher) ──► price-feed.eur ───► Price[tick T] committed (async, no slot clock)
(internal) ──► almanac ───► Ticks[K] committed (shared time axis)
(internal) ──► atlas ───► CitizenCard updates (any time)
confluence ──► intents ───► RoutedIntents[S] committed per target lattice
(sees cross-chain intents, routes them)
searcher ──► offer[eth] ───► AuctionOutcome[S] (incorporates routed intents)
searcher ──► offer[uni] ───► AuctionOutcome[S]
searcher ──► offer[base] ───► AuctionOutcome[S]
atelier[eth] ───► Candidates[S] committed
atelier[uni] ───► Candidates[S]
atelier[base] ───► Candidates[S]
relay[eth] ───► AcceptedHeaders[S]
relay[uni] ───► AcceptedHeaders[S]
relay[base] ───► AcceptedHeaders[S]
(chain inclusion watchers observe each lattice independently)
tally[eth] ───► Refunds[S]
tally[uni] ───► Refunds[S]
tally[base] ───► Refunds[S]
confluence ──► ledger ───► SharedRefunds[S] committed per origin citizen
wallet / searcher ◄── ledger refunds + attestations stream (aggregated)
dashboard ◄── atlas directory of citizens (for observability)
correlator ◄── almanac shared tick axis (for cross-citizen joins)
Two properties of this flow:
- The slot number remains the foreign key for block-building lattices but is per-citizen. Slot S on ethereum.mainnet is unrelated to slot S on unichain.mainnet. Confluences crossing citizen boundaries key commits by (citizen_id, slot) pairs (or (citizen_id, tick) for non-slotted standalone organisms). If the coalition ships an Almanac, confluences may optionally carry the current almanac_tick as an additional alignment key.
- Each citizen’s pipeline is unchanged. The intents confluence injects routed-intent entries into a lattice’s offer input via a subscription, not a cross-Group command. Each offer decides whether and how to include routed intents in its auction. Non-slotted organisms like price-feed.eur continue publishing on their own clock; confluences read their ticks as they appear.
Where each confluence commits
The rule from the builder book — “what decision is the organism actually making at commit time” — carries to confluences and modules:
- atlas commits CitizenCard updates. The state machine merges operator-submitted cards (signed by the contributing per-citizen operator, or by the coalition operator in single-operator coalitions) into the Atlas map.
- almanac commits a monotonic tick. The state machine runs a deadline-driven Raft round; the winner’s tick becomes Ticks[K+1].
- intents commits a routed-intents entry per slot per target lattice. The state machine reads cross-chain intents from each spanned lattice’s UnsealedPool, applies the routing policy, and commits {citizen_id, slot, intents[]} into RoutedIntents.
- ledger commits a shared-refund entry per origin citizen per slot. The state machine reads each spanned lattice’s tally::Refunds, performs attribution across origin citizens, and commits {origin_citizen, slot, shares[]} into SharedRefunds.
The same discipline every organism in every lattice observes: one decision per commit, state-machine-local, no cross-organism transactions.
Internal plumbing (optional derived private networks)
Same pattern zipnet established and builder reaffirmed: the public surface lives on UNIVERSE; high-churn internal plumbing may move to a derived private network keyed off the organism’s root.
For confluences and modules:
- Per-citizen subscription fan-in. Gossip between committee members about per-citizen cursor state may live on <component>_root.derive("private").
- Attestation gossip. Committee members signing off on a commit can exchange signatures privately before the public commit lands.
Committee Groups themselves stay on UNIVERSE. The argument is unchanged across every rung.
Identity under citizen upgrades
Citizen identity splits into a stable id and a content hash; each moves on a different cadence.
- Citizen internal rotation (stable id unchanged, content hash bumps). MR_TD rotation or ACL change. Invisible to COALITION_ROOT. Commissioned confluences that pin the content hash redeploy; confluences that pin the stable id only continue as-is. The coalition publishes an advisory if the change matters to downstream components, but the CoalitionConfig does not change.
- Citizen retirement (stable id changes). The per-citizen operator publishes a new identity. Coalitions that reference this citizen must re-publish their CoalitionConfig with the new stable id. Every module’s identity bumps because COALITION_ROOT has changed; confluences do not re-derive unless they themselves span the retired citizen and pin its reference.
This is a material clarity win over a design where every citizen change ripples into confluence identity: a coalition referencing ten citizens rotating TDX monthly sees module identity stable unless a retirement happens.
A coalition operator who wants to insulate integrators from citizen retirements carries a version-stable alias in the coalition name (ethereum.superchain-v2026q2). See Roadmap — Versioning.
Sizing — Phase 1 coalition
Order-of-magnitude targets for an illustrative coalition with three lattices, one standalone organism, two modules, and two confluences. Indicative, not binding.
| Component | Committee members (v1) | Stream bytes / event | Notable bonds |
|---|---|---|---|
| Per lattice (×3) | see builder sizing | see builder sizing | see builder sizing |
| price-feed.eur | see organism sizing | see organism sizing | committee + publishers |
| atlas (module) | 3 members | O(citizens × card size) | committee + coalition op |
| almanac (module) | 3–5 members | O(1) per tick | committee only |
| intents confluence | 3–7 TDX members | O(intents × lattices) | committee + each lattice |
| ledger confluence | 3–5 members | O(refunds) | committee + each lattice |
“v1” here means the coalition-level Phase 1 shape. Phase 2 adds more confluences as cross-citizen problems arise; Phase 3 introduces multi-operator confluences where committee members span organisations.
What this chapter deliberately does not cover
- Per-confluence state machines. Each confluence owns its own spec in its own crate once commissioned.
- Per-confluence wire formats. Same.
- Chain-specific nuance inside a lattice. The builder book covers this; coalitions do not touch it.
- Organism-specific nuance inside a standalone organism. Each organism’s own book covers that.
- A canonical list of confluences or organism categories. See topology-intro — What a coalition is not.
- Heavy-coalition primitives (enforcement, taxation, judiciary). Deferred; see topology-intro — Fixed shapes, open catalogs.
Basic services
audience: contributors
Atlas, Almanac, Chronicle, Compute.
A basic service (also called a module) is a
coalition-scoped component whose role is specified once in
this blueprint so every instance means the same thing
across coalitions. This chapter specifies the four basic
services. Mechanically each is an organism with a narrow
public surface, derived under
COALITION_ROOT.derive("module").derive(name).
None of the four is mandatory. A coalition may ship zero, one, two, three, or all four. If a coalition ships a service named “Atlas”, it must have the shape on this page. A service named “Atlas” with a different shape is a bug, not a variant — the point of the catalog is that integrators and other services depend on the shape without reading per-coalition operator prose.
Guiding discipline:
Fix the shapes, leave the catalogs open.
The four basic services, the CoalitionConfig structure,
and the ConfluenceConfig structure are shapes. Which
of them any given coalition ships, and what confluences it
references, and what citizens it composes, are open.
Atlas
Problem. Integrators need a canonical list of the coalition’s citizens with operator metadata: role name, fingerprint, endpoint hint, ticket-issuance root, MR_TD pins, subscription schema version. Every coalition currently invents its own handshake table; standardising the shape lets a dashboard, analytics agent, or failover service read any coalition’s directory without per-coalition parsing.
Shape. Atlas is a module whose public surface is a
Map<CitizenId, CitizenCard>. Each card is co-signed
by the Atlas committee:
#[non_exhaustive]
pub struct CitizenCard<'a> {
/// The citizen's full 32-byte fingerprint
/// (LatticeConfig or OrganismRef stable id with
/// optional content hash).
pub citizen_id: UniqueId,
/// Role name ("lattice", "oracle", "attest", ...).
pub role: &'a str,
/// Endpoint hints for cold-start bootstrap.
pub endpoint_hints: &'a [EndpointHint<'a>],
/// Ticket-issuance root of the citizen's per-
/// operator ACL.
pub ticket_issuance_root: UniqueId,
/// MR_TD pins for TDX-gated organisms inside the
/// citizen (if any).
pub mrtd_pins: &'a [Mrtd],
/// Free-form metadata the operator surfaces.
pub metadata: &'a [(&'a str, &'a str)],
}
#[non_exhaustive]
pub struct EndpointHint<'a> {
pub proto: &'a str,
pub addr: &'a str,
}
CitizenId in cross-citizen commit keys is truncated to
the first 20 bytes of the blake3 fingerprint — the full
32-byte value lives in the Atlas card and in evidence
pointers. See
composition — Two keys.
Config signature. An Atlas’s ConfluenceConfig
folds:
- the coalition root (via COALITION_ROOT.derive("module").derive("atlas"));
- the ordered list of citizen references the Atlas catalogues (usually identical to the coalition’s, but may be a subset);
- the Atlas committee’s ACL.
State machine. One command per card write, one command per card retirement. Applies produce card updates in deterministic order.
Trust shape. Usually majority-honest, non-TDX. A TDX-attested Atlas is a natural v1+ refinement if the coalition requires signed provenance of the directory itself.
When to ship. Almost always. Atlas is the cheapest module to operate and the highest-leverage for integrators.
When to skip. A single-citizen coalition does not need a directory.
Almanac
Problem. Confluences correlating facts across citizens need a shared time axis. Each citizen’s clock is local — a block-building lattice has slots, an oracle has ticks, an analytics pipeline has batches. Without a shared monotonic beacon, every correlator re-invents its own tolerance window.
Shape. Almanac is a module whose public surface is a
single append-only Stream<AlmanacTick>. Each tick is co-
signed by the Almanac committee:
#[non_exhaustive]
pub struct AlmanacTick {
pub seq: u64,
/// Per-member wall-clock observations captured at
/// the moment each committee member voted to commit
/// the tick. Consumers compute median, bracket, or
/// spread themselves; `seq` is the sole ordering
/// authority.
pub observed_at: Vec<(MemberId, SystemTime)>,
}
observed_at is advisory: a vector of per-member
timestamps lets consumers detect outliers (a member’s
skewed clock shows as a one-sided tail) and bracket
cross-citizen events without trusting a single wall-
clock. Ordering is entirely in seq.
Config signature. An Almanac’s ConfluenceConfig
folds:
- the coalition root (via COALITION_ROOT.derive("module").derive("almanac"));
- the tick cadence (target inter-tick interval; state-affecting);
- the Almanac committee’s ACL.
State machine. A per-cadence-deadline Commit
command. Committee members propose ticks with their local
SystemTime::now(); the one that wins the Raft round
commits the per-member timestamp vector; others observe.
Trust shape. Majority-honest committee; optionally
TDX-attested if the coalition requires observed_at
entries to carry an attested source.
When to ship. When more than one confluence needs cross-citizen timing, or when a standalone organism without a slot clock participates in cross-citizen correlation.
What it is not. Almanac is not a consensus clock for the universe; it is not synchronised across coalitions; it is not a substitute for each citizen’s own local clock. Ticks are ordering markers; the timestamp vector is a correlation hint.
Chronicle
Problem. A coalition operator makes decisions that
outlive any one CoalitionConfig — publications,
rotations, retirements, membership changes, module
upgrades. Integrators, auditors, and future operators
need a tamper-evident record. Operator-published release
notes are insufficient; they can be edited in place.
Shape. Chronicle is a module whose public surface is
an append-only Stream<ChronicleEntry>. Each entry
carries both a quorum signature and per-member
signatures, and the committee periodically commits a
rolling blake3 digest of the stream to an external
settlement contract:
#[non_exhaustive]
pub struct ChronicleEntry<'a> {
pub seq: u64,
pub kind: ChronicleKind,
/// The coalition fingerprint at the moment of entry.
pub coalition_root: UniqueId,
/// Structured payload per kind.
pub payload: ChronicleBody<'a>,
pub observed_at: SystemTime,
/// Per-member signatures in addition to the
/// committee's quorum signature. An honest ex-
/// member's signed archive is a cryptographic
/// witness against a later compromised-majority
/// forgery.
pub member_sigs: Vec<(MemberId, Signature)>,
}
#[non_exhaustive]
pub enum ChronicleKind {
CoalitionPublication, // new CoalitionConfig fingerprint
CitizenAdded, // new lattice / organism reference
CitizenRemoved,
ConfluenceReferenced, // new ConfluenceConfig reference
ConfluenceUnreferenced,
ConfluenceRetired, // confluence itself retired (see retirement markers)
ModuleAdded,
ModuleRetired,
TicketIssuerRotated, // coalition-scoped issuance root changed
MembershipChanged, // operator added / removed
Retirement, // coalition retirement announcement
Other,
}
Config signature. A Chronicle’s ConfluenceConfig
folds:
- the coalition root (via COALITION_ROOT.derive("module").derive("chronicle"));
- the Chronicle committee’s ACL;
- the schema version of ChronicleEntry;
- the external settlement-contract address used for on-chain anchoring.
State machine. One command per entry. Applies
maintain a strictly monotonic seq. Retention is
operator-configured but must be substantial (years, not
days) — the point is longevity.
The Chronicle committee additionally commits
blake3(ChronicleEntry[0..N].serialized) to the declared
external contract on a committed cadence. The cadence is
an operator decision; a plausible default is one anchor
per week plus one per ChronicleKind::CoalitionPublication.
Trust shape. Tamper-evidence of past entries holds while at least one of:
- (a) the on-chain anchor path is uncompromised, or
- (b) at least one honest ex-member’s signed archive survives.
Retroactive rewriting requires compromising both. See threat model — Chronicle trust.
When to ship. When the coalition has more than one operator, a contested lineage (retirements, splits, forks), or a real need for externally verifiable provenance.
What it is not. Chronicle is descriptive, not prescriptive. It records decisions; it does not ratify them.
Compute
Problem. Coalition members — including AI-agent organisms — routinely need compute: to run a workload, retrain a model, execute a transient experiment, or bring up a short-lived companion organism. Every coalition currently re-invents compute acquisition via out-of-band contracts. Standardising the shape gives agents and organisms a uniform, auditable mechanism for acquiring compute, and gives coalition operators a well-known surface for publishing capacity.
Compute is the one basic service whose work is discharged off-module: the module is a registry and scheduler — it commits the grant — but the actual compute runs on a separate provider whose identity is published in the grant. The split is deliberate: the coalition’s trust boundary ends at the scheduling commit; the provider is bonded separately through mosaik’s normal ACL.
Shape. Compute is a module whose public surface is:
- a write-side Stream<ComputeRequest> through which coalition members submit requests;
- a read-side Collection<RequestId, ComputeGrant> — the committee’s scheduled allocations;
- an optional Collection<ProviderId, ProviderCard> — capacity providers that have registered with the module;
- an optional Stream<ComputeLog> — usage attestations emitted by providers after a workload runs.
Image model — pre-hash and post-hash
A workload is identified by two fingerprints:
- Image pre-hash. The blake3 of a canonicalised ComputeManifest declaring source crate, minimum hardware, TDX requirement, network policy, and duration hint. Known before the image is built; used in ComputeRequest to state intent.
- Image post-hash. The blake3 of the built image itself (for TDX workloads, this is the MR_TD the provider will run). Known after the build; used in ComputeGrant to commit the scheduler to a specific build, and matched by the provider at startup.
The pre-hash / post-hash split mirrors the citizen stable-id / content-hash split (topology-intro — Citizen reference shape): a request that pins only the pre-hash survives reproducible rebuilds; a request that pins the post- hash commits to one specific binary.
Self-contained TDX images are produced per the mosaik TDX builder; the Compute module does not specify the build pipeline, only the fingerprints.
#[non_exhaustive]
pub struct ComputeManifest<'a> {
/// Source crate identity: name, version, and the
/// commit hash the image was built from.
pub crate_ref: CrateRef<'a>,
/// Minimum hardware requirements the image declares.
pub min_cpu_millicores: u32,
pub min_ram_mib: u32,
/// TDX requirement.
pub tdx: TdxRequirement,
/// Optional storage and network declarations.
pub min_storage_gib: u32,
pub network_policy: NetworkPolicy,
/// Non-binding duration hint the scheduler may use
/// to bin-pack.
pub duration_hint_sec: u32,
}
#[non_exhaustive]
pub struct CrateRef<'a> {
pub name: &'a str,
pub version: &'a str,
pub commit_hash: UniqueId,
}
#[non_exhaustive]
pub enum TdxRequirement {
NotRequired,
Optional,
Required,
}
#[non_exhaustive]
pub enum NetworkPolicy {
None, // no outbound
Mosaik, // only the shared universe
Allowlist, // specific endpoint hints
}
/// Pre-hash: folds the manifest only. Stable across
/// rebuilds as long as the manifest is stable.
pub const fn image_prehash(m: &ComputeManifest<'_>) -> UniqueId { /* ... */ }
/// Post-hash: folds the pre-hash with the built image
/// bytes and, for TDX, the MR_TD measurement.
pub const fn image_hash(
prehash: UniqueId,
built_image_digest: UniqueId,
mrtd: Option<Mrtd>,
) -> UniqueId { /* ... */ }
Request, grant, log
#[non_exhaustive]
pub struct ComputeRequest<'a> {
/// Requester — an agent organism, a confluence, or
/// an integrator identity bonded via a ticket the
/// module recognises.
pub requester: CitizenId,
/// Image pre-hash (what you want to run).
pub image_prehash: UniqueId,
/// Optional pinned post-hash if the requester has a
/// specific build they require.
pub image_hash: Option<UniqueId>,
/// Duration requested.
pub duration_sec: u32,
/// Settlement evidence — a payment receipt hash, a
/// reputation-card reference, or a coalition-scoped
/// credit token, per the module's policy.
pub settlement: SettlementEvidence<'a>,
/// Deadline by which the requester needs the grant;
/// unmet requests expire silently.
pub deadline: AlmanacTick,
}
#[non_exhaustive]
pub struct ComputeGrant<'a> {
pub request_id: UniqueId,
pub provider: CitizenId,
/// Endpoint hint for the actual compute. The
/// requester opens the endpoint out-of-band.
pub endpoint_hint: EndpointHint<'a>,
/// Bearer credential pointer (a blake3 of the
/// bearer token published by the provider via an
/// encrypted out-of-band channel; the scheduler
/// commits only the pointer).
pub bearer_pointer: UniqueId,
/// Post-hash of the image the provider commits to
/// running. Matches the request's prehash or
/// pinned post-hash.
pub image_hash: UniqueId,
/// Validity window in Almanac ticks.
pub valid_from: AlmanacTick,
pub valid_to: AlmanacTick,
/// Expected MR_TD on the provider side for TDX
/// workloads. None for non-TDX.
pub expected_mrtd: Option<Mrtd>,
}
#[non_exhaustive]
pub struct ComputeLog<'a> {
pub grant_id: UniqueId,
pub provider: CitizenId,
pub window: AlmanacRange,
/// Provider-signed usage attestation: cpu-seconds,
/// ram-mib-seconds, network bytes, storage bytes.
pub usage: UsageMetrics,
/// Evidence pointer to the workload's own output
/// stream, if any.
pub evidence: Option<EvidencePointer<'a>>,
}
Config signature. A Compute module’s
ConfluenceConfig folds:
- the coalition root (via COALITION_ROOT.derive("module").derive("compute"));
- the scheduler’s content parameters (accepted settlement evidence shapes, minimum grant duration, maximum image size);
- the Compute committee’s ACL.
State machine. Three command kinds:
- SubmitRequest — a member submits a ComputeRequest; accepted if the settlement evidence verifies.
- CommitGrant — the committee’s scheduler commits a ComputeGrant matching a request against a registered provider.
- AppendLog — a provider commits a ComputeLog for a completed grant (bonded under the module’s provider-side ACL).
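A hypothetical command enum matching the three kinds above; the real state machine's types live in the module's own crate once commissioned, so the fields here are placeholders.

```rust
type UniqueId = [u8; 32];

/// One decision per commit: each variant is a single
/// state-machine-local transition, mirroring the discipline
/// every organism in every lattice observes.
enum ComputeCommand {
    SubmitRequest { request_id: UniqueId },
    CommitGrant { request_id: UniqueId, provider: UniqueId },
    AppendLog { grant_id: UniqueId, cpu_seconds: u64 },
}

fn apply(cmd: &ComputeCommand) -> &'static str {
    match cmd {
        ComputeCommand::SubmitRequest { .. } => "request accepted if settlement verifies",
        ComputeCommand::CommitGrant { .. } => "grant committed against a registered provider",
        ComputeCommand::AppendLog { .. } => "usage log appended under the provider ACL",
    }
}

fn main() {
    let cmd = ComputeCommand::CommitGrant { request_id: [0; 32], provider: [1; 32] };
    assert_eq!(apply(&cmd), "grant committed against a registered provider");
}
```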
Trust shape. Majority-honest committee for
scheduling integrity. Provider-level trust is separate
and carried in each provider’s own TicketValidator
composition; the Compute module does not vouch for
provider correctness beyond committing the match. A
malicious provider failing to actually run the image is
detectable by the requester (the workload’s own output
never appears) and can be scored into a reputation
organism orthogonally.
When to ship. When the coalition has citizens — AI agents in particular — that need programmatic compute acquisition. Agent populations benefit disproportionately: agents that can request compute on demand can top up their own resources and schedule their own experiments. See ai/compute for the agent-facing investigation.
When to skip. A coalition whose members bring their own compute out-of-band and never need dynamic allocation does not need a Compute module. An integrator-only coalition with no agent organisms typically does not need one.
What it is not.
- Not a compute provider. It is a scheduler-and-registry committing allocations; the compute runs on separate infrastructure.
- Not a payment rail. Settlement is folded into the request as evidence; actual value transfer happens off-protocol (on-chain, via a reputation/credit organism, via an external ticket authority).
- Not a guarantee of provider honesty. A provider can accept a grant and never run the image; the grant is the scheduler’s commitment, not the provider’s. Use a reputation organism to score provider behaviour.
- Not a cross-coalition marketplace. A Compute module is coalition-scoped; providers registered in one coalition are not visible to another unless they also register there.
Retirement markers
A cross-cutting primitive every commissioned confluence,
module, and coalition supports: before its committee
shuts down, the component commits a RetirementMarker to
its own public stream so integrators observing the stream
upgrade cleanly rather than receiving ConnectTimeout.
#[non_exhaustive]
pub struct RetirementMarker<'a> {
/// The Almanac tick (if the coalition ships one) or
/// wall-clock at which the committee ceases to
/// produce commits.
pub effective_at: RetirementInstant,
/// Optional pointer to the replacement component.
/// Integrators resolve it and rebind.
pub replacement: Option<ComponentRef<'a>>,
}
A retirement marker is the last commit on the stream. A
Chronicle entry records it at the coalition level as
ChronicleKind::ConfluenceRetired or
ChronicleKind::ModuleRetired.
Deriving a module’s identity
Every module is an organism; its derivation folds the
coalition root under a "module" segment:
ATLAS_ROOT = COALITION_ROOT.derive("module").derive("atlas")
ALMANAC_ROOT = COALITION_ROOT.derive("module").derive("almanac")
CHRONICLE_ROOT = COALITION_ROOT.derive("module").derive("chronicle")
COMPUTE_ROOT = COALITION_ROOT.derive("module").derive("compute")
<module>_committee = blake3(
<module>_root
|| ordered citizen references (if any)
|| module_content_fingerprint
|| module_acl_fingerprint
).derive("committee")
The "module" segment distinguishes coalition-scoped
components from commissioned confluences (which derive
without a coalition root). An operator reading peer-
discovery gossip can tell which is which at a glance.
CoalitionConfig helpers
Because modules have well-known names, the coalition
meta-crate ships typed helpers returning their
ConfluenceConfigs from a CoalitionConfig:
impl CoalitionConfig<'_> {
pub const fn atlas(&self) -> Option<&ConfluenceConfig<'_>> { /* ... */ }
pub const fn almanac(&self) -> Option<&ConfluenceConfig<'_>> { /* ... */ }
pub const fn chronicle(&self) -> Option<&ConfluenceConfig<'_>> { /* ... */ }
pub const fn compute(&self) -> Option<&ConfluenceConfig<'_>> { /* ... */ }
}
Each helper looks up a ConfluenceConfig in the
coalition’s confluences slice by name ("atlas", etc.)
and returns Some if found, None otherwise. A
coalition that ships the module names its
ConfluenceConfig accordingly; a coalition that does not
ship it omits the entry.
Integrators use these helpers to stay forward-compatible:
a coalition adding Almanac later does not break an
integrator that did not use it before, and the integrator
can start using it simply by re-reading
coalition.almanac().
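A minimal sketch of the lookup the helpers perform, with mocked config types carrying only a name. The spec's helpers are `const fn` over the full `ConfluenceConfig`; this illustration uses plain functions and a shared `module` lookup for brevity.

```rust
/// Minimal stand-ins for the spec's config types; only the
/// field needed for the name-based lookup is modelled.
pub struct ConfluenceConfig<'a> {
    pub name: &'a str,
}

pub struct CoalitionConfig<'a> {
    pub confluences: &'a [ConfluenceConfig<'a>],
}

impl<'a> CoalitionConfig<'a> {
    /// Generic lookup the typed helpers delegate to: scan the
    /// confluences slice for a well-known module name.
    fn module(&self, name: &str) -> Option<&ConfluenceConfig<'a>> {
        self.confluences.iter().find(|c| c.name == name)
    }

    pub fn atlas(&self) -> Option<&ConfluenceConfig<'a>> {
        self.module("atlas")
    }

    pub fn almanac(&self) -> Option<&ConfluenceConfig<'a>> {
        self.module("almanac")
    }
}
```

A coalition that does not ship a module simply has no entry with that name, so the helper returns `None` and an integrator that never calls it is unaffected.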
Deliberately unspecified
- Chronicle retention policy and storage layout. Operator-configured. The spec requires durability and a monotonic seq; it does not prescribe how operators achieve them.
- Chronicle anchoring cadence and chain choice. Per-coalition; the spec requires the anchor contract address be folded into ChronicleConfig.content.
- Almanac tick cadence. The spec allows any state-affecting cadence; different coalitions will pick substantially different rates.
- Atlas metadata schema. Free-form (key, value) pairs beyond the required fields. Operators add what they need; standardisation emerges from patterns over time.
- Whether a module is required for any downstream confluence. Confluences may depend on a module (e.g. a correlator aligning to Almanac ticks), but the blueprint does not require it; a confluence that needs a module declares the dependency in its own ConfluenceConfig.
- Heavy-coalition extensions. Law, taxation, enforcement, mandatory citizenship — all deferred, all requiring a fresh proposal.
Cross-references
- topology-intro — rationale for the coalition rung.
- architecture — basic services in an example coalition.
- ticket issuance — the optional coalition-scoped issuance root.
- confluences — the generic shape every module inherits.
- composition — how a module’s subscriptions compose with the rest of the coalition.
- threat model — trust composition when a confluence depends on a module.
Ticket issuance
audience: contributors
A coalition may optionally publish a coalition-scoped ticket issuer — a mosaik ticket issuance root whose tickets components may choose to recognise. This page specifies the shape and the composition.
The primitive is deliberately thin. It is not a policy
engine; it is not a state machine with Request/Grant
semantics; it is not the former Passport spec. It is a
mosaik ticket issuance root, folded into
CoalitionConfig, with one composition discipline:
components opt in.
Shape
#[non_exhaustive]
pub struct TicketIssuerConfig<'a> {
/// Issuer role / name folded into the derivation.
pub name: &'a str,
/// The issuer's ACL composition — how committee
/// members rotate, which operators run issuance, etc.
pub acl: AclComposition<'a>,
}
pub const fn coalition_ticket_issuer_root(coalition_root: UniqueId) -> UniqueId {
coalition_root.derive("tickets")
}
TICKETS_ROOT = COALITION_ROOT.derive("tickets"). The
issuer’s TicketIssuerConfig is folded into
COALITION_ROOT as an ordered element after
confluences.
How components opt in
A confluence, module, or any organism that wants to
recognise tickets issued by the coalition-scoped issuer
includes TICKETS_ROOT in its own TicketValidator
composition:
let validator = TicketValidator::new()
.recognise_issuer(ETH_SUPERCHAIN.ticket_issuer_root()?)
.require_ticket(Tdx::new().require_mrtd(confluence_mrtd));
No component is required to. A confluence designed to be referenced by many coalitions typically does not pin any coalition’s issuer, precisely because pinning a coalition would tie its identity to that coalition. A module shipped specifically for one coalition may pin it.
Trust shape
The trust shape is inherited from mosaik’s
TicketValidator composition. This page introduces no
new primitives:
- Tickets issued under TICKETS_ROOT are ordinary mosaik tickets whose issuer identity happens to be coalition-scoped.
- A component that pins the issuer trusts the coalition operator’s issuance discipline at least as far as they trust the ACL composition on that validator branch.
- A component that does not pin the issuer ignores its tickets entirely.
Because recognition is per-component and voluntary, there is no protocol channel for the coalition to compel recognition. The coalition operator publishes the issuer root; components individually decide whether to trust it.
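The voluntary opt-in can be sketched with mocked types (`UniqueId`, `TicketValidator`, and the `maybe_recognise` helper below are illustrative stand-ins, not mosaik APIs): a component recognises the coalition issuer only when the coalition actually ships one, and is untouched otherwise.

```rust
/// Illustrative stand-ins for mosaik's ticket types.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct UniqueId(pub u64);

#[derive(Default)]
pub struct TicketValidator {
    recognised: Vec<UniqueId>,
}

impl TicketValidator {
    pub fn new() -> Self {
        Self::default()
    }

    /// Builder-style opt-in: add one issuance root to the set
    /// this component is willing to validate tickets against.
    pub fn recognise_issuer(mut self, root: UniqueId) -> Self {
        self.recognised.push(root);
        self
    }

    pub fn recognises(&self, root: UniqueId) -> bool {
        self.recognised.contains(&root)
    }
}

/// A component opts in only if the coalition publishes an issuer;
/// `None` leaves the validator composition unchanged.
pub fn maybe_recognise(
    validator: TicketValidator,
    tickets_root: Option<UniqueId>,
) -> TicketValidator {
    match tickets_root {
        Some(root) => validator.recognise_issuer(root),
        None => validator,
    }
}
```

Nothing in this shape gives the coalition a channel to force recognition: the decision is a local line in each component's validator composition.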
When to ship an issuer
- Shared bonding. A coalition running several confluences whose committees cross-bond frequently benefits from a single issuance root rather than N per-confluence roots.
- Multi-operator recognition. A coalition with multiple operators wanting a convenient common issuance root for ad-hoc identities.
When to skip
- Single-operator coalitions. The coalition operator already issues their own tickets at the organism level; a separate coalition-scoped root is unnecessary indirection.
- Open ecosystems. A coalition whose intended value is to stay out of bonding decisions ships no issuer.
Rotation
The coalition operator rotates the issuer ACL the same
way any ticket-issuing organism rotates: publish a new
TicketIssuerConfig (which bumps TICKETS_ROOT via
COALITION_ROOT), update CoalitionConfig to the new
issuer, announce through the change channel, and if the
coalition ships a Chronicle, record the rotation as
ChronicleKind::TicketIssuerRotated.
Components that pin the old root see new tickets fail validation; they pin the new root on their next upgrade. Components that don’t pin at all are unaffected.
Cross-references
Confluences — the cross-citizen organism
audience: contributors
A confluence is a mosaik-native organism whose
Config fingerprint folds two or more citizen references
(lattice references, OrganismRefs, or a mix) and whose
public surface coordinates across the referenced citizens.
This chapter specifies its shape, compares it against
alternatives, and provides a template for commissioning a
new one.
Modules (Atlas, Almanac, Chronicle) are, mechanically, organisms too; their shapes are specified once in basic services. They differ from general confluences in one way: modules derive under a coalition root, while commissioned confluences derive independently of any coalition. The generic confluence discipline below applies to both; the derivation difference is called out where it matters.
The reader is assumed to have read Designing coalitions on mosaik and Anatomy of a coalition.
Definition
A confluence is an organism in every sense the zipnet and builder books use the word. It has:
- one Config fingerprint;
- a narrow public surface (one or two named primitives);
- a TicketValidator composition gating bonds;
- typed Confluence::<D>::verb(&network, &Config) free-function constructors;
- a deterministic state machine inside a mosaik Group.
What distinguishes a confluence from a standalone organism:
- Its content fingerprint folds in multiple citizen references — the ordered set of citizens whose public surfaces it reads from (or writes to).
- Its state machine’s commits are a function of inputs drawn from more than one citizen’s public surface, and serve consumers on more than one citizen.
A confluence is not a seventh lattice organism; it is not part of any lattice’s identity derivation; no per-citizen operator has to know what confluences reference their citizen. A confluence is independent of any coalition that references it — its committee, state machine, and log are shared across coalition references.
When to commission one
Commission a confluence when all three of the following hold.
- The aggregated fact is itself a commit. If what you need can be computed ad hoc by an integrator across N citizen handles, it does not need a confluence. If multiple downstream consumers would each have to repeat the same aggregation, or if the aggregation is itself the input to yet another organism, the commit pays for itself.
- The inputs span more than one citizen. A confluence that reads only from one citizen is an organism in disguise; it belongs as a standalone organism (or inside that citizen’s own composition, if the citizen is a lattice).
- The trust model is coherent across the inputs. A confluence inherits the worst-case trust assumption of the citizen inputs it reads. If you cannot name the cross-citizen trust shape (e.g. “majority-honest across each lattice’s unseal committee, t-of-n on the oracle organism, plus majority-honest confluence committee”), you have not finished the design.
When any of the three is false, the right answer is one of:
- A cross-citizen integrator (Shape 1 in builder’s cross- chain chapter, generalised to any mix of citizens).
- An in-lattice organism that reads a sibling lattice’s public surface (Shape 2).
- A new standalone organism.
- A new organism inside an existing lattice.
Confluence versus cross-citizen integrator
| Trait | Integrator-spans-citizens | Confluence |
|---|---|---|
| Runs a Raft committee | no | yes |
| Commits to a state machine | no | yes |
| Admits other peers via ACL | no | yes |
| Consumes multiple citizens | yes | yes |
| Produces a reusable commit | no (per-integrator in-memory join) | yes (public surface consumers can read) |
| Cost to stand up | a driver in the integrator’s agent | a crate, a committee, an ACL |
| Availability | bounded by one agent | committee-level (Raft majority) |
| Trust assumption | integrator’s own | explicit, compositional |
Heuristic: if three different integrators would all perform the same aggregation, it is a candidate confluence. If exactly one integrator needs it, keep it in the agent.
The Config fingerprint
A confluence’s Config folds in:
#[non_exhaustive]
pub struct ConfluenceConfig<'a> {
/// Role name: "intents", "ledger", "market",
/// "correlate", … For modules:
/// "atlas", "almanac", "chronicle".
pub name: &'a str,
/// Ordered set of block-building lattices whose
/// public surfaces this confluence reads or writes.
pub lattices: &'a [LatticeConfig<'a>],
/// Ordered set of standalone organisms whose public
/// surfaces this confluence reads or writes. Same
/// story as `lattices` above, for non-lattice
/// citizens.
pub organisms: &'a [OrganismRef<'a>],
/// State-affecting parameters specific to this
/// confluence. E.g. an intent router's policy
/// constants, a shared ledger's attribution model,
/// an almanac's tick cadence.
pub content: ContentParams<'a>,
/// TicketValidator composition gating committee
/// admission.
pub acl: AclComposition<'a>,
}
impl ConfluenceConfig<'_> {
pub const fn confluence_id(&self) -> UniqueId { /* ... */ }
}
Identity derives from (name, spanned citizens, content, acl) with no coalition root folded in:
CONFLUENCE(cfg) = blake3(
"confluence|"
|| SCHEMA_VERSION_U8
|| "|" || name
|| ordered spanned LatticeConfig references
|| ordered spanned OrganismRef references
|| content_fingerprint
|| acl_fingerprint
)
A confluence can therefore be referenced by N coalitions; its committee, state machine, and log are shared across those references. Operators wanting two independent deployments deploy two confluences with different content or different ACLs — the cryptographic gate is the fingerprint, not coalition scope.
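The fingerprint-as-gate property can be illustrated with a stand-in fold. The spec uses blake3 over length-delimited fields; this sketch substitutes the standard library's `DefaultHasher` and `u64` stand-ins for the citizen references and sub-fingerprints, to show only that two configs differing in any field derive different ids.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative fold of the confluence identity. Fields mirror
/// the derivation above: domain tag, name, ordered citizen
/// references, content fingerprint, ACL fingerprint — and no
/// coalition root.
pub fn confluence_id(name: &str, citizen_refs: &[u64], content: u64, acl: u64) -> u64 {
    let mut h = DefaultHasher::new();
    "confluence|".hash(&mut h);
    name.hash(&mut h);
    citizen_refs.hash(&mut h); // ordered: reordering changes the id
    content.hash(&mut h);
    acl.hash(&mut h);
    h.finish()
}
```

Two operators wanting independent deployments get them by varying `content` or `acl`; identical configs converge on the same id regardless of which coalitions reference them.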
For modules the derivation is different — they derive
under a coalition root (see
basic services — Deriving a module’s identity).
The public surface, state-machine discipline, and
ConfluenceConfig fields are otherwise identical.
Public surface
Confluences follow the narrow-public-surface discipline without modification. Guidelines for picking the surface:
- Key every commit by (citizen_id, slot) or (citizen_id, tick) when the confluence is per-clock-event per-origin-citizen. citizen_id is the 20-byte truncated prefix of the citizen’s blake3 fingerprint. Integrators that join commits across confluences need this key.
- Separate routing and attestation. If your confluence both routes inputs (a write-side stream) and commits the routing outcome (a read-side collection), keep the two primitives named. Do not fold them into one.
- Expose attestations separately when the commit carries an attestable signature. Integrators that pull attestations to on-chain settlement contracts want the signature surface distinct from the routing surface.
- If your confluence depends on a module, declare it. A correlator that aligns to Almanac ticks reads Almanac and commits under (citizen_id, almanac_tick) keys; state that in the crate’s composition-hooks doc and in the ConfluenceConfig.content parameters (since the alignment is state-affecting).
A design wanting three or more public primitives is likely two confluences.
State machine shape
Inside the confluence’s Group, the state machine is an
ordinary mosaik StateMachine<M>. It differs from a
standalone organism’s state machine only in what its
inputs look like: driver code outside the state machine
subscribes to public surfaces on each spanned citizen and
feeds observed facts into the confluence group via
group.execute(...).
Template for a confluence driver:
// Inside the confluence's node crate, role: committee member.
loop {
tokio::select! {
// Per-citizen subscriptions the confluence reads from.
ev = lattice_a_subscription.next() => {
let evidence = build_evidence_for_a(ev);
confluence_group.execute(ConfluenceCmd::ObserveLatticeA(evidence)).await?;
}
ev = lattice_b_subscription.next() => {
let evidence = build_evidence_for_b(ev);
confluence_group.execute(ConfluenceCmd::ObserveLatticeB(evidence)).await?;
}
ev = organism_c_subscription.next() => {
let evidence = build_evidence_for_c(ev);
confluence_group.execute(ConfluenceCmd::ObserveOrganismC(evidence)).await?;
}
// Optional: align to a coalition's Almanac the
// confluence has been configured to track.
ev = almanac_subscription.next() => {
let tick = /* ... */;
confluence_group.execute(ConfluenceCmd::AdvanceTick(tick)).await?;
}
// Confluence's own timers, per its state machine.
_ = apply_deadline.tick() => {
confluence_group.execute(ConfluenceCmd::Apply).await?;
}
}
}
Rules:
- Observations are evidence, not authority. The Observe* commands carry a pointer to the upstream citizen commit (citizen id, slot or tick, commit hash). The confluence’s state machine validates the pointer against the citizen’s public surface on replay.
- Applies are pure functions. apply(ConfluenceCmd::Apply) reads the accumulated observations and emits the confluence’s commit. It does not make new cross-citizen reads inside apply; reads belong to the driver.
- No cross-Group calls inside apply. Same discipline as every organism.
ACL and attestation
A confluence’s TicketValidator composition can include:
- Per-citizen readership tickets. A confluence reading from lattice A’s unseal::UnsealedPool or from an oracle organism’s feed is an integrator of that citizen from the citizen’s perspective. Committee members must hold tickets the per-citizen operator issues, same as any other integrator reading that surface.
- TDX attestation. If the confluence sees cleartext order flow or otherwise-sensitive data, the committee members are almost always required to run an attested TDX image. .require_ticket(Tdx::new().require_mrtd(confluence_mrtd)) is the standard line.
- Optional coalition-scoped issuer recognition. If a confluence expects to be used predominantly under one coalition whose ticket issuer is stable, it may include that issuance root in its validator composition — a choice made on the confluence side, not imposed by the coalition. See ticket issuance.
Failure modes
A confluence has more failure modes than a standalone organism because its inputs are federated:
- One spanned citizen stalls. The confluence’s state machine must decide whether to commit with partial evidence (degrading quality but preserving liveness) or to stall that slot pending the missing citizen (preserving quality but sacrificing liveness on the slot). This is a per-confluence choice; document it in the composition contract.
- One spanned citizen’s stable id changes. The confluence’s Config folds that reference; a change means the confluence must be redeployed. Until it is, the confluence’s subscriptions to the old citizen simply stop producing events.
- One spanned citizen’s content hash changes (if pinned). Same as above but limited to confluences that chose to pin content — stable-id-only pinners are unaffected.
- Confluence committee loses majority. Same failure mode as any mosaik Raft group. Liveness is lost; integrity is intact.
- Spanned citizen’s ACL changes. If a per-citizen operator revokes the ticket committee members hold, the confluence loses read access to that citizen’s public surface. Negotiated out of band; there is no technical recourse.
Composition patterns worth naming
Three patterns recur often enough to name. These are patterns, not required confluences; modules are the specified shapes.
Intent-router pattern
A confluence whose state machine commits per-slot routed
intents keyed by (target_lattice_id, slot). Reads
cleartext from each spanned lattice’s
unseal::UnsealedPool; commits a RoutedIntents
collection each target lattice’s offer subscribes to.
Trust shape: TDX-attested committee plus t-of-n threshold
on the cross-citizen cleartext channel.
The builder book’s Shape 3 bridge organism is an intent- router pattern seed.
Shared-ledger pattern
A confluence whose state machine aggregates per-lattice
tally::Refunds into a cross-origin-citizen attribution
set, committed as SharedRefunds and
SharedAttestations. Trust shape: majority-honest
confluence committee plus the per-citizen trust shape of
each spanned tally. An on-chain settlement contract
verifying a SharedAttestation verifies both the
confluence’s signature and the underlying per-lattice
tally::Attestation referenced as evidence.
Integrators needing one refund feed across chains bind
the confluence; integrators concerned with a single chain
bind that lattice’s tally directly. Both paths coexist.
Cross-feed correlator pattern
A confluence that pairs ticks from two or more oracle
organisms (or more generally, two or more non-slotted
standalone organisms), committing correlated values under
(citizen_id, almanac_tick) when a coalition referencing
the confluence ships an Almanac and the confluence opts
in, or under (citizen_id, timestamp_bin) otherwise.
Trust shape: majority-honest confluence committee plus
the signing assumptions of each upstream oracle.
Other patterns worth naming (non-block-building)
The three patterns above originate in block-building. Other domains will yield more. Sketches:
- Attestation aggregator. A confluence that folds MR_TD quotes from multiple attestation organisms across vendors or regions into a single attested membership set.
- Coordination-market organiser. A confluence that runs an allocation auction where inputs come from several independent signal organisms and outputs drive several independent downstream citizens.
- Cross-DA committer. A confluence that commits a combined availability proof over two or more DA shards.
Each is a ConfluenceConfig whose lattices slice may
be empty and whose organisms slice names the upstreams.
Checklist mirror
The commissioning checklist in topology-intro — Checklist for commissioning a new confluence applies verbatim. This page is the reference; the topology-intro page is the forcing function when commissioning one.
Cross-references
- topology-intro — why this rung exists.
- basic-services — the three specified module shapes plus retirement markers.
- ticket-issuance — the optional coalition-scoped issuance root.
- architecture — where confluences sit in one example coalition.
- composition — the subscription graph that citizens and confluences share.
- atomicity — what confluences deliberately do not guarantee.
- threat-model — how a confluence’s trust assumption composes with the spanned citizens’.
Composition: citizens and confluences in a coalition
audience: contributors
Architecture maps confluences and citizens onto one example coalition. Confluences specifies the confluence organism. Basic services specifies Atlas, Almanac, and Chronicle. This chapter shows the wiring: which public surface each confluence subscribes to on each spanned citizen, how the subscription graph grows as confluences and modules are added, and what fails where when an upstream stalls.
The wiring is deliberately weak. No cross-Group
atomicity, no shared state across citizens, no global
scheduler. Citizens and confluences react to each other’s
public commits via mosaik’s when() DSL and Collection
/ Stream subscriptions. This is the composition model
at every rung.
Two keys: citizen id and clock event
Every commit in every block-building lattice is keyed by slot (see builder composition). Every commit in a standalone organism is keyed by that organism’s own clock event — a slot, a tick, a round, a sequence number. A coalition adds the citizen id as an outer key on every cross-citizen fact.
The citizen_id used as a key in cross-citizen commits is
the first 20 bytes of the citizen’s blake3 stable-id
fingerprint. The full 32-byte value lives in Atlas cards
and in evidence pointers. The 20-byte truncation gives a
compact key that matches the mental model readers bring
from other ecosystems; collision resistance remains more
than sufficient at this rung.
Concretely:
- Intra-citizen commits continue to key on their own clock alone (no change from the citizen’s own book).
- Confluence commits that fan-out to multiple target citizens key on (target_citizen_id, target_clock_event).
- Confluence commits that fan-in from multiple origin citizens key on (origin_citizen_id, origin_clock_event).
- An integrator joining commits across a confluence and the citizens the confluence touches keys on (citizen_id, clock_event) everywhere.
- When a coalition ships an Almanac the confluence has opted into, the confluence may additionally carry almanac_tick as an alignment key for cross-citizen joins where the citizens’ native clocks are not commensurate.
The 20-byte citizen_id is stable — it never changes for
a given citizen deployment. A citizen retirement yields a
new stable id and is therefore a new citizen id for every
coalition that adopts it.
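The truncation and the composite key above can be sketched directly; `CommitKey` is an illustrative name for the pair, not a mosaik type, and the input is assumed to be the raw 32-byte blake3 stable-id fingerprint.

```rust
/// The 20-byte truncated citizen id used as the outer key on
/// every cross-citizen commit.
pub type CitizenId = [u8; 20];

/// Truncate a full 32-byte stable-id fingerprint to the key form.
pub fn citizen_id(fingerprint: &[u8; 32]) -> CitizenId {
    let mut id = [0u8; 20];
    id.copy_from_slice(&fingerprint[..20]);
    id
}

/// The composite key cross-citizen commits carry: stable citizen
/// id plus that citizen's own clock event (slot, tick, round, …).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct CommitKey {
    pub citizen_id: CitizenId,
    pub clock_event: u64,
}
```

Because the stable id never changes for a given deployment, the truncated key is stable too; a retirement produces a new fingerprint and therefore a new key prefix everywhere.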
The subscription graph
Each arrow is a subscription. The downstream component watches the upstream’s public collection or stream and reacts in its own state machine. No arrow represents atomic cross-Group commit. Dashed boxes are confluences; the oracle box on the left is a standalone organism.
integrators
(wallets,
searchers)
│
├──► zipnet[L1]:Submit ────► Broadcasts[L1] ──► UnsealedPool[L1] ─┐
│ │
├──► zipnet[L2]:Submit ────► Broadcasts[L2] ──► UnsealedPool[L2] ─┤
│ │
└──► offer[L1]:Bid / offer[L2]:Bid │
│
price-feed.eur ──► Price[tick] │
│
almanac ──► Ticks[K] │
│ │
└──────────── referenced by confluences for alignment ──────┐ │
▼ │
┌────────────────── ┘
▼
┌───────────────────────┐
│ intents (confluence) │
│ reads UnsealedPool[*] │
│ + price-feed.eur │
│ commits RoutedIntents │
└───────────────────────┘
│
RoutedIntents[L1,S] / RoutedIntents[L2,S]
▼
offer[L1]:AuctionOutcome offer[L2]:AuctionOutcome
│ │
atelier[L1]:Candidates atelier[L2]:Candidates
│ │
relay[L1]:AcceptedHeaders relay[L2]:AcceptedHeaders
│ │
tally[L1]:Refunds tally[L2]:Refunds
│ │
└────────────┬──────────────┘
▼
┌──────────────────────────┐
│ ledger (confluence) │
│ reads Refunds[*] │
│ commits SharedRefunds │
└──────────────────────────┘
│
▼
integrators
atlas ──► CitizenCards (consumed by dashboards, not coalition pipeline)
chronicle ──► ChronicleEntries (consumed by auditors, not coalition pipeline)
Read top to bottom:
- Wallets submit to each lattice’s zipnet::Submit; searchers submit to each lattice’s offer::Bid. Neither changes from builder.
- Each lattice’s zipnet and unseal do their usual work independently. The oracle organism price-feed.eur publishes ticks on its own clock. The Almanac (if shipped) publishes its own tick stream.
- The intents confluence subscribes to every spanned citizen’s relevant public surface — lattices’ UnsealedPools and, in this example, price-feed.eur’s Price stream. It may additionally subscribe to the Almanac for alignment keys. It observes cross-citizen intents and commits routed-intents entries per target lattice.
- Each lattice’s offer subscribes to the confluence’s RoutedIntents collection filtered for its own citizen id. Routed intents are an additional input to the auction, along with the lattice’s own UnsealedPool.
- Each lattice’s atelier and relay proceed as in builder.
- Each lattice’s tally commits independently.
- The ledger confluence subscribes to every spanned lattice’s tally::Refunds and commits the aggregated SharedRefunds collection.
- Integrators read SharedRefunds and per-citizen surfaces depending on their use case. Dashboards read the Atlas. Auditors read the Chronicle.
Atlas and Chronicle are not on any confluence’s critical path in this example; they are consumed by dashboards and auditors directly. Modules serve observability and provenance, not the coalition’s data flow.
Subscription code shape
A contributor implementing a confluence writes a role
driver in that confluence’s crate. The driver is a mosaik
event loop multiplexing subscriptions across citizens and
calling group.execute(...) on its own Group. The pattern
is identical to any organism driver, fanned out across
more citizen kinds.
// Inside intents::node::roles::committee_member.
//
// `lattices` and `organisms` are the ordered sets of
// citizen references the confluence spans, sourced from
// self.config.lattices and self.config.organisms.
loop {
tokio::select! {
// One subscription per spanned lattice's UnsealedPool.
ev = pool_a.when().appended() => {
let round = pool_a.get(ev).expect("appended implies present");
let intents = extract_cross_chain_intents(&round);
self.group.execute(IntentsCmd::ObservePool {
citizen_id: lattice_a_id,
slot: round.slot,
intents,
}).await?;
}
ev = pool_b.when().appended() => {
// ... same shape
}
// One subscription per spanned standalone organism.
ev = price_feed_eur.when().appended() => {
let tick = price_feed_eur.get(ev).unwrap();
self.group.execute(IntentsCmd::ObservePrice {
citizen_id: eur_feed_id,
tick: tick.seq,
price: tick.value,
}).await?;
}
// Optional: align to a coalition's Almanac.
ev = almanac.when().appended() => {
let t = almanac.get(ev).unwrap();
self.group.execute(IntentsCmd::AdvanceAlmanacTick(t.seq)).await?;
}
_ = self.apply_clock.tick() => {
// Per-slot deadline: seal the routing decision.
self.group.execute(IntentsCmd::SealSlot).await?;
}
}
}
Every observed event carries enough evidence
((citizen_id, clock_event, commit_pointer)) that the
state machine can validate it against the citizen’s public
surface during replay.
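The replay-time check can be sketched as follows, with a `HashMap` standing in for the replayer's view of the citizens' public surfaces; `EvidencePointer` and `validate` are illustrative names chosen for this sketch.

```rust
use std::collections::HashMap;

/// Evidence pointer carried by Observe* commands: which citizen,
/// which clock event, and the hash of the commit observed there.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct EvidencePointer {
    pub citizen_id: [u8; 20],
    pub clock_event: u64,
    pub commit_hash: [u8; 32],
}

/// A replayer's view of the citizens' public surfaces, keyed by
/// (citizen id, clock event) and mapping to the commit hash there.
pub type PublicSurface = HashMap<([u8; 20], u64), [u8; 32]>;

/// Replay rule: an unresolved or mismatched evidence pointer
/// rejects the command.
pub fn validate(surface: &PublicSurface, ev: &EvidencePointer) -> bool {
    surface.get(&(ev.citizen_id, ev.clock_event)) == Some(&ev.commit_hash)
}
```

The point of the rule is that a replayer never has to trust the confluence committee's word for an observation: either the pointer resolves against the citizen's own log or the command is rejected.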
Apply order across the graph
Within one confluence’s Group, mosaik’s Raft variant
guarantees every committee member applies commands in the
same order. Across confluences, and across citizens, no
such guarantee exists. The coalition layer relies on the
same two properties the builder layer does, scaled up:
- Monotonicity by key. Within any component, commits for key (C, K+1) are not applied before (C, K). The component’s state machine enforces this per-citizen in its apply handler.
- Eventual consistency by subscription. A downstream component’s driver observes every upstream commit eventually, because mosaik collections are append-only and readers converge.
Together these are enough to reconstruct a consistent per-citizen per-clock-event view across the whole coalition without global atomicity. An integrator wanting “the canonical decision for citizen C, clock event K, across citizens and confluences” reads each component’s per-(C, K) commit independently and joins on the key.
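The monotonicity-by-key property can be sketched as a per-citizen guard in the apply handler. Types and names here are illustrative; a real component would buffer or re-request out-of-order events per its own policy rather than simply dropping them.

```rust
use std::collections::HashMap;

/// Per-citizen monotonicity guard: a commit for (C, K+1) is never
/// applied before (C, K).
#[derive(Default)]
pub struct MonotonicApply {
    last: HashMap<[u8; 20], u64>,
}

impl MonotonicApply {
    /// Accept and advance the citizen's clock if this is the next
    /// expected event for that citizen; otherwise reject.
    pub fn try_apply(&mut self, citizen: [u8; 20], clock_event: u64) -> bool {
        let next = self.last.get(&citizen).map_or(0, |k| k + 1);
        if clock_event == next {
            self.last.insert(citizen, clock_event);
            true
        } else {
            false
        }
    }
}

/// Apply a batch in arrival order; returns how many were accepted.
pub fn apply_all(guard: &mut MonotonicApply, events: &[([u8; 20], u64)]) -> usize {
    events.iter().filter(|&&(c, k)| guard.try_apply(c, k)).count()
}
```

Note the guard is per-citizen: a gap on one citizen's clock does not block progress on another's, which is exactly the drainability property the failure matrix below relies on.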
When an upstream stalls
Failure propagation is a small matrix. Rows are the failing component; columns are downstream components’ observable behaviour.
| Upstream fails | Same-citizen downstreams | Confluences that read this citizen | Other citizens |
|---|---|---|---|
| lattice organism | see builder composition | degraded — partial evidence per spec | unaffected |
| standalone organism | see that organism’s own book | degraded — partial evidence per spec | unaffected |
| citizen stable id bump (retirement) | in-citizen consumers re-pin | confluence must redeploy against new stable id | unaffected |
| citizen content hash bump | in-citizen consumers re-pin if they pinned | confluences pinning content redeploy; stable-id-only pinners are unaffected | unaffected |
| confluence stalls | no effect (citizens don’t depend on confluences for their own pipeline) | downstream confluence consumers see no new commits | unaffected |
| module stalls | no effect (citizens don’t depend on modules) | confluences aligned to this module see delayed/degraded commits | unaffected |
| confluence committee crosses threshold | no effect | bad commits possible (integrity lost); on-chain settlement is the final arbiter | unaffected |
| coalition operator retires coalition | no technical effect (citizens continue) | modules under retired coalition emit retirement markers, then stop | unaffected |
Two properties fall out, mirroring builder’s:
- Upstream failures degrade downstream outputs; they do not corrupt them. A missing UnsealedPool[L1, S] narrows the intents confluence’s commit for that slot; the commit itself remains well-formed.
- The pipeline is drainable per citizen. Failures on one citizen do not block the coalition’s other citizens. Each citizen’s pipeline is local; each confluence applies what it observes.
What the coalition composition contract guarantees
Given the full commit logs of every citizen and every confluence the coalition references:
- Deterministic replay per (citizen_id, clock_event). Anyone can recompute each component’s per-(C, K) decision and cross-check confluence commits against the citizen inputs they folded in.
- Independent auditability. A confluence’s commit carries evidence pointers to the citizen commits it depends on. A consumer that trusts the citizens can verify a confluence’s commit without trusting the confluence’s committee, as long as the evidence pointers resolve.
- No silent corruption across coalitions. A citizen referenced from multiple coalitions is still one citizen; its commit log is one log. Different coalitions reading the same citizen see the same facts. A confluence referenced from multiple coalitions is still one confluence; its commit log is one log.
What the contract does not guarantee
- Atomic all-or-nothing commits across citizens. Already refused. An integrator reading a confluence’s commit and the spanned citizens’ commits must tolerate differing commit wall-clock times.
- Cross-coalition coordination. Coalitions coexist, they do not coordinate. If a cross-coalition commit is genuinely needed, the answer is a confluence that spans the relevant citizens directly — which both coalitions may reference independently.
- Bounded end-to-end latency. As in builder: if a confluence stalls, downstream consumers never see its next commit. No confluence-level timeout triggers a dummy commit; operators requiring bounded latency add per-slot deadlines at the confluence level.
Contributors implementing a new confluence
Wiring checklist for a confluence:
- Identify every spanned citizen’s public surface you
subscribe to. Record as ordered slices of citizen
references in your
Config—lattices,organisms, or both. - Decide whether your commits fan-in, fan-out, or both. Fan-in confluences aggregate per-origin- citizen facts; fan-out confluences distribute one fact to multiple target citizens; some confluences do both.
- Key every commit by
(citizen_id, clock_event). Never by one alone. Use the 20-byte truncated citizen id. If your state machine has a natural sub-clock- event cadence, commit at that cadence but carry the owning(citizen_id, clock_event)pair. If the confluence opts into a coalition’s Almanac and needs cross-citizen alignment, also carryalmanac_tick. - Write one role driver per
Groupmember role. Keep it as atokio::select!over the upstream subscriptions (one per citizen) and your local timers. - Validate evidence on replay. The upstream-event
pointers carried in your
Observe*commands must be checkable against the citizen’s public surface during replay; a replay that sees an unresolved evidence pointer must reject the command. - Document your per-citizen-failure policy. What happens when one spanned citizen stalls, reorders, or bumps its stable id or content hash — all are operational realities.
- Declare dependency on modules. If you require an Almanac for alignment, or consume Chronicle entries, document it.
- Document the composition contract in your confluence’s contributors/composition-hooks.md (per-confluence documentation). Update architecture and this page’s subscription graph to include the new confluence if the example coalition is the right place for it.
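The keying discipline in the checklist can be illustrated with a stdlib-only sketch, assuming a fan-in confluence with stand-in u64 facts (the FanIn type and its fields are hypothetical, not meta-crate types):

```rust
use std::collections::BTreeMap;

// Hypothetical 20-byte truncated citizen id, per the checklist above.
type CitizenId = [u8; 20];

/// Minimal fan-in sketch: every fact is keyed by the full
/// (citizen_id, clock_event) pair, never by one component alone.
struct FanIn {
    spanned: Vec<CitizenId>,
    facts: BTreeMap<(CitizenId, u64), u64>, // payload is a stand-in u64
}

impl FanIn {
    fn new(spanned: Vec<CitizenId>) -> Self {
        Self { spanned, facts: BTreeMap::new() }
    }

    /// Record one per-origin-citizen fact. Unknown citizens are refused:
    /// the spanned set is fixed by the confluence Config.
    fn observe(&mut self, citizen: CitizenId, clock_event: u64, fact: u64) -> bool {
        if !self.spanned.contains(&citizen) {
            return false;
        }
        self.facts.insert((citizen, clock_event), fact);
        true
    }

    /// The aggregate for a clock event exists only once every spanned
    /// citizen has reported; until then the confluence stays silent.
    fn aggregate(&self, clock_event: u64) -> Option<u64> {
        let mut sum = 0u64;
        for c in &self.spanned {
            sum += *self.facts.get(&(*c, clock_event))?;
        }
        Some(sum)
    }
}
```

The real driver wraps this in a tokio::select! loop over the per-citizen subscriptions; the sketch isolates only the keying rule.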
Cross-references
- Architecture — the coalition shape these subscriptions run on.
- Confluences — the organism the arrows land on.
- Basic services — Atlas, Almanac, Chronicle details.
- Atomicity — what the subscription graph deliberately does not buy you.
- Threat model — how trust assumptions compose across the subscription graph.
Atomicity boundary
audience: contributors
This chapter states what the coalition layer does and does not guarantee about atomicity, supplying one reference page for design reviews of new confluences or extensions to the coalition model.
The rules are inherited from zipnet and builder. The coalition layer introduces no new atomicity primitive; it makes the inherited boundary legible one rung higher.
What the coalition layer does not provide
The coalition layer does not provide:
- Cross-Group atomicity inside a citizen. Rejected by builder for lattices and by each organism’s own design for standalone organisms. Two organisms commit independently and subscribe to each other’s public surfaces.
- Cross-citizen atomicity inside a coalition. Rejected by this blueprint. Two citizens in the same coalition commit independently and a confluence subscribes to both.
- Cross-coalition atomicity. Rejected by this blueprint. Two coalitions coexist on the shared universe and have no shared consensus unit.
- Multi-confluence atomicity. Rejected by this blueprint. Two confluences referenced by the same coalition each commit to their own Group; integrators that need joint commits join by key.
The rejection is load-bearing at every rung. Admitting one atomicity primitive at any rung forces every rung below to carry it, collapsing the trust and operational boundaries the rungs were designed to isolate.
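The “join by key” answer in the multi-confluence bullet can be sketched integrator-side with the stdlib; the Key alias and payload types are placeholders, not the real commit types:

```rust
use std::collections::BTreeMap;

// Hypothetical (citizen_id, clock_event) join key.
type Key = ([u8; 20], u64);

/// Two confluences commit independently; a joint view exists only where
/// both logs carry the same key. No atomicity is assumed: a key present
/// in one log and absent in the other is simply not yet joinable.
fn join_by_key<A: Clone, B: Clone>(
    left: &BTreeMap<Key, A>,
    right: &BTreeMap<Key, B>,
) -> Vec<(Key, A, B)> {
    left.iter()
        .filter_map(|(k, a)| right.get(k).map(|b| (*k, a.clone(), b.clone())))
        .collect()
}
```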
What the coalition layer does provide
The coalition layer provides, as a consequence of the composition discipline:
- Per-component atomicity. Every organism, every confluence, and every module commits atomically within its own Group. Mosaik’s Raft variant guarantees single-Group atomicity.
- Determinism of replay. Given the commit logs of all components the coalition references, each component’s per-key decision is reproducible. The evidence pointers confluences carry make cross-component replay legible.
- Independent auditability. A consumer verifying a confluence’s commit need trust only the spanned citizens’ own commit logs (and the confluence’s committee, to whatever its trust shape permits). They do not need a coalition-wide replay to audit one confluence’s decision.
- Provenance (if Chronicle is shipped). Not atomicity, but a record of coalition-level decisions over time: which CoalitionConfigs were published when, which confluences were referenced or unreferenced, which modules were added or retired. An audit trail, not atomicity.
What integrators and operators must design around
Three recurring realities.
Partial outcomes across citizens
An integrator placing paired bundles on two lattices, or correlating an oracle tick with a lattice auction outcome, must tolerate partial observation: citizen A commits, citizen B does not. The coalition layer surfaces this, it does not mitigate it. Every integrator driver spanning citizens carries reconciliation logic (post-bond collateral, on-chain HTLCs, searcher-level risk management, timestamp tolerance windows — whichever matches the domain).
Staggered commits through confluences
A confluence’s commit for (citizen_id, clock_event)
lags the upstream citizens’ own commits for that event by
at least one Raft round inside the confluence’s
committee. Consumers reading both treat the citizen commit
as ground truth and the confluence commit as derived
aggregation.
Citizen identity bumps
A citizen’s stable id changes on retirement; its content
hash bumps more often (ACL rotations, MR_TD bumps). A
coalition that references that citizen sees its modules
re-derive on stable-id bumps (because COALITION_ROOT
changes); confluences re-derive only when they themselves
span the bumped citizen. During the transition, old and
new deployments may coexist. If the coalition ships a
Chronicle, the next Chronicle entry records the
transition.
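The structural point about re-derivation can be illustrated with a toy derivation; DefaultHasher stands in for the real fingerprint function, and only the fold order matters: the root folds stable ids, so a stable-id bump changes it while a content-hash bump does not.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative only: a stand-in for the real fingerprint derivation.
/// The coalition root folds the schema version and each referenced
/// citizen's stable id, in order. Content hashes are not folded, so a
/// content-hash bump leaves the root (and the derived modules) unchanged.
fn coalition_root(schema_version: u8, stable_ids: &[[u8; 20]]) -> u64 {
    let mut h = DefaultHasher::new();
    schema_version.hash(&mut h);
    for id in stable_ids {
        id.hash(&mut h);
    }
    h.finish()
}
```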
Promotion path if atomicity is genuinely required
If a concrete use case forces atomicity across components:
- If it’s one lattice, two organisms: design a lattice organism that owns both decisions. That is the same answer builder topology-intro gives.
- If it’s one coalition, two citizens: design a confluence that owns both decisions, subscribing to the two citizens’ relevant public surfaces. The state machine will commit one atomic record per event-pair; the two citizens themselves still commit independently, but the confluence’s commit is the authoritative joint fact. HTLC-style commit semantics live inside the confluence’s state machine, not across citizens.
- If it’s two coalitions: the cross-coalition commit is a confluence folding the relevant citizen references directly; both coalitions reference that confluence independently. No cross-coalition consensus is needed because the confluence is coalition-independent.
In every case the atomicity primitive lives inside one Group, not across Groups. This is the rule the whole ladder obeys.
Cross-references
- topology-intro — No cross-Group, cross-citizen, or cross-coalition atomicity
- Composition — What the contract does not guarantee
- builder — No cross-Group atomicity
Threat model
audience: contributors
This chapter states the trust assumptions the coalition
layer adds over the builder and zipnet trust models, and
describes how per-component trust composes when citizens
(lattices and standalone organisms), confluences, and
modules are referenced under one CoalitionConfig.
It does not re-derive per-organism trust shapes inside a lattice; those live in builder threat model. It does not re-derive trust shapes for standalone organisms; those live in each organism’s own book. The coalition layer inherits them unchanged.
Goals and non-goals at the coalition layer
Goals.
- Trust is strictly compositional. A property of a single citizen is not weakened by that citizen being referenced in a coalition. Referencing is ambient; it does not open new channels into the citizen.
- Confluence commits are independently auditable against the citizen commits they fold in. A consumer that trusts the spanned citizens can verify a confluence’s commit without trusting the confluence’s committee, up to the confluence’s own majority-honest bound.
- Coalition identity is a naming convenience. Code treating a CoalitionConfig fingerprint as authority is misusing it.
- Modules do not gate. Atlas, Almanac, and Chronicle are services a coalition offers; none is on any citizen’s critical path for admission or operation. A compromised module can mislead consumers but cannot compel a citizen.
Non-goals.
- Cross-coalition confidentiality. Two coalitions coexisting on the shared universe get no confidentiality beyond what “two services on the shared universe” already get. Mosaik’s ACL and tickets are the only gates.
- Cross-citizen atomic integrity. Ruled out at the composition layer. See atomicity.
- Protection against a malicious coalition operator. A coalition operator can publish a CoalitionConfig referencing any citizen whose stable id they know, including citizens whose operators have not consented to the reference. The reference has no technical consequence for the citizen’s integrity or confidentiality; it is a social or legal matter between operators. The blueprint surfaces the observability required to flag the misuse; it does not prevent it.
- Protection against heavy-coalition drift. A future proposal promoting a heavy-coalition primitive (enforcement, taxation, judiciary) requires its own threat model. The current model assumes light-coalition only.
The confluence’s trust composition
Every confluence’s trust shape is determined by two inputs:
- The confluence’s own committee assumption. Usually majority-honest for integrity, possibly TDX-attested for confidentiality of the cleartext it processes.
- The worst-case assumption across the spanned citizens’ public surfaces it reads. If the confluence reads cleartext from each spanned lattice’s unseal::UnsealedPool and a signed feed from an oracle organism, it inherits the t-of-n threshold of every spanned unseal committee and the signing-quorum assumption of the oracle organism.
The composed assumption is the conjunction: the confluence’s guarantee holds while every component’s assumption holds simultaneously. Breaking any one component can at most degrade the confluence’s output per the weakest component’s failure mode.
Crucially, the confluence’s trust shape does not
include any coalition that references it. A malicious
coalition operator cannot weaken a confluence they merely
reference; they can only stop referencing it in their
next CoalitionConfig.
Example: the intents confluence
An intents confluence reads cleartext from three lattices’ UnsealedPools and commits a per-target-lattice routed-intents commit under a TDX-attested, majority-honest committee.
- Confidentiality of cross-chain intents to external observers: holds while every spanned lattice’s zipnet any-trust assumption and unseal t-of-n threshold assumption hold and the confluence’s TDX image is attested.
- Integrity of the routed commit: holds while the confluence’s majority-honest committee assumption holds.
- Availability of routing: holds while at least one of the spanned lattices reaches UnsealedPool for the target slot and a confluence majority commits.
Example: the shared-ledger confluence
A ledger confluence reads each spanned lattice’s
tally::Refunds and commits an aggregated
SharedRefunds under a majority-honest committee.
- Faithfulness of the aggregated attribution: holds while every spanned lattice’s tally majority-honest assumption and the confluence’s own majority-honest assumption hold. A malicious lattice tally majority can poison the aggregated commit; the on-chain settlement contract receiving SharedAttestations is the final check.
- Non-duplication of refunds: holds while the aggregation rule is implemented correctly and state-machine-deterministic.
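The non-duplication property can be sketched as keyed, idempotent aggregation; the types are stand-ins (the real rule operates on tally::Refunds commits):

```rust
use std::collections::BTreeMap;

/// Refunds are aggregated by a (lattice stable id, recipient) key, so
/// replaying the same upstream commit overwrites rather than sums:
/// a deterministic, replay-safe rule. Summing on raw events instead
/// would double-count under replay.
fn aggregate_refunds(
    upstream: &[([u8; 20], &str, u64)], // (lattice, recipient, amount)
) -> BTreeMap<([u8; 20], String), u64> {
    let mut out = BTreeMap::new();
    for (lattice, who, amt) in upstream {
        out.insert((*lattice, who.to_string()), *amt); // last write wins
    }
    out
}
```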
Module trust shapes
Each of the four modules has its own trust shape, summarised here and specified in basic services.
Atlas trust
- Faithfulness of CitizenCard contents: holds while the Atlas committee is majority-honest. A compromised Atlas can publish incorrect endpoint hints or stale MR_TD pins. Integrators are expected to cross-check with per-citizen operators’ own handshake pages; Atlas is a convenience, not a certificate.
- Confidentiality: N/A; Atlas is public by design.
Almanac trust
- Monotonicity of ticks: holds while the Almanac committee is majority-honest. A compromised Almanac can skip ticks or delay them, but cannot un-commit previously committed ticks.
- observed_at timestamp vectors are advisory. A compromised committee can publish ticks with implausible timestamps; consumers should treat the per-member vector as a correlation hint, not a cryptographic timestamp. The vector shape does let consumers detect single-member clock skew (outlier entries in an otherwise-clustered vector); this is an observability aid, not a security guarantee.
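The outlier heuristic can be sketched with integer timestamps; the function name and tolerance parameter are illustrative, and, per the note above, this is an observability aid only:

```rust
/// Flag committee members whose observed_at entry is far from the
/// cluster, using distance from the median. Not a security check:
/// a colluding majority shifts the median and defeats it trivially.
fn skewed_members(observed_at: &[i64], tolerance: i64) -> Vec<usize> {
    if observed_at.is_empty() {
        return Vec::new();
    }
    let mut sorted = observed_at.to_vec();
    sorted.sort();
    let median = sorted[sorted.len() / 2];
    observed_at
        .iter()
        .enumerate()
        .filter(|&(_, &t)| (t - median).abs() > tolerance)
        .map(|(i, _)| i)
        .collect()
}
```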
Chronicle trust
- Tamper-evidence of past entries: holds while at least one of:
  - (a) the on-chain anchor path is uncompromised — the Chronicle committee periodically commits a blake3 digest of the entry stream to the anchor contract declared in ChronicleConfig.content; or
  - (b) at least one honest ex-member’s signed archive survives — every ChronicleEntry carries per-member signatures in addition to the quorum signature, so an honest ex-member’s archive is a cryptographic witness against forgery.

  Retroactive rewriting requires compromising both paths.
- Completeness of recent entries: majority-honest. A compromised committee can omit entries from being committed; it cannot forge them without being detected, but it can DoS provenance.
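The anchor path (a) above can be sketched with the stdlib; DefaultHasher stands in for the blake3 digest named in the text, and byte vectors stand in for serialised ChronicleEntry values:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Digest the whole entry stream in order. (blake3 in the real design;
/// DefaultHasher here so the sketch is dependency-free.)
fn stream_digest(entries: &[Vec<u8>]) -> u64 {
    let mut h = DefaultHasher::new();
    for e in entries {
        e.hash(&mut h);
    }
    h.finish()
}

/// An auditor recomputes the digest over its copy of the entry stream
/// and compares it with the anchored one; any rewritten, dropped, or
/// reordered past entry changes the digest.
fn matches_anchor(entries: &[Vec<u8>], anchored: u64) -> bool {
    stream_digest(entries) == anchored
}
```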
Compute trust
- Scheduling integrity. Holds while the Compute committee is majority-honest. A compromised committee can grant a request to a provider who never runs the image, grant to a wrong provider, or omit grants altogether. It cannot forge a provider’s usage attestation (those are provider-signed) nor steal a requester’s settlement evidence (settlements live off-protocol).
- Image-hash honesty. A grant commits a specific image post-hash; a TDX-gated grant additionally commits the expected MR_TD. The requester verifies the provider is actually running the committed image at workload startup; a provider running a different image fails attestation and the workload aborts.
- Non-collusion between scheduler and provider. Out of scope. The Compute module commits the match; if the scheduler and a malicious provider collude, the requester’s only recourse is observing that the workload’s outputs never appear and routing to a different provider next time. A reputation organism scoring provider delivery closes the loop.
Ticket issuance trust
- Issuance correctness: trust shape inherited from mosaik’s TicketValidator composition. This blueprint introduces no new primitives. A component that recognises the coalition-scoped issuance root trusts the issuer’s ACL composition; a component that does not recognise it is unaffected.
Coalition operator responsibilities
Coalition operators carry a narrow set of responsibilities.
Must do:
- Publish the CoalitionConfig struct (and the derived fingerprint) as the handshake.
- Publish every module’s and every coalition-run confluence’s MR_TD if TDX-gated.
- Publish every coalition-commissioned standalone organism’s MR_TD that is TDX-gated. If the organism is operated by a different team, that team publishes its own MR_TD and the coalition operator re-publishes for convenience.
- Document the set of citizens and confluences referenced, including a pointer to each per-citizen operator’s and per-confluence operator’s handshake page.
- State explicitly which modules the coalition ships and which it does not. Silence is ambiguous.
- Notify integrators when a referenced citizen’s stable id bumps, and publish the updated CoalitionConfig promptly.
What a compromised coalition cannot do
Regardless of how many modules a coalition operator runs maliciously:
- Cannot change a referenced citizen’s or referenced confluence’s commits. Each component’s commit log is authoritative in its own Groups. A compromised module cannot forge a commit belonging to another component.
- Cannot mint money. Same argument as builder: on- chain settlement contracts enforce the evidence carried in attestations.
- Cannot produce cross-citizen atomic outcomes. Atomicity is not a primitive; a compromised module cannot force two citizens to jointly commit something they did not independently commit.
- Cannot compel citizens. A compromised Atlas cannot force dashboards to trust it; a compromised Chronicle cannot rewrite what honest replayers and external anchors have already recorded.
What a compromised coalition can do
Conservatively:
- Poison module commits. A majority-compromised module can publish incorrect records. Integrators reading those commits observe the misbehaviour in public and can object, migrate, or appeal to on-chain settlement.
- Mislead Atlas consumers. A compromised Atlas can publish wrong endpoint hints, stale MR_TD pins, or fabricate membership. Dashboards relying on Atlas see degraded accuracy; the citizens themselves are unaffected.
- DoS the module tier. A committee that loses majority stops producing new commits. Referenced citizens and their own pipelines are unaffected.
- Publish a misleading CoalitionConfig. A coalition operator can publish a CoalitionConfig that references citizens or confluences the integrator did not expect. Integrators guard against this by compiling in the CoalitionConfig explicitly and auditing its contents — a misleading fingerprint is a visible artefact.
- Stop publishing Chronicle entries. The absence of an entry is observable; a coalition that goes silent loses trust even if it cannot rewrite the past.
Compositional table
| Property | Depends on |
|---|---|
| Per-lattice block-building properties | That lattice’s full builder-level assumption stack |
| Per-organism properties | That organism’s own assumption stack |
| Confluence commit integrity | Confluence committee majority-honest |
| Confluence confidentiality (if TDX) | Confluence TDX attestation + spanned citizens’ upstream confidentiality |
| Atlas accuracy | Atlas committee majority-honest + per-citizen operator correctness |
| Almanac monotonicity | Almanac committee majority-honest |
| Chronicle tamper-evidence | On-chain anchor path honest OR one honest ex-member’s signed archive survives |
| Chronicle completeness | Chronicle committee majority-honest |
| Compute scheduling integrity | Compute committee majority-honest |
| Compute workload attestation (TDX) | Provider MR_TD matches the grant’s expected_mrtd at workload startup |
| Ticket-issuer recognition | Component’s own TicketValidator composition; coalition does not impose |
| Coalition observability | At least one honest replayer can read every component’s commit log |
| Accurate aggregated attribution | Every spanned upstream honest AND confluence honest |
| Cross-citizen partial-outcome discipline | Integrator-side risk management (no protocol support) |
| Coalition liveness | Every referenced citizen’s liveness AND every referenced confluence’s liveness |
The decomposition is the point. A coalition that misbehaves at one tier degrades the guarantee dependent on that tier; it does not collapse everything.
Operator responsibilities recap
Every operator running a committee member in any confluence, module, or standalone organism is responsible for the same set of obligations a lattice-organism operator is (running the attested image, protecting committee-admission secrets, rotating on schedule, reporting anomalies).
Operating a cross-citizen confluence adds one responsibility:
- Cross-citizen dependency reporting. When a referenced citizen’s stable id or content hash bumps, the confluence operator files an advisory to any coalition operators referencing the confluence and to integrators so updates propagate with a known cadence, not silently. If any referencing coalition ships a Chronicle, that advisory lands in the next Chronicle entry.
Cross-references
- builder threat model — per-lattice trust shapes this layer composes over.
- confluences — the organism the compositional trust statements above are about.
- basic services — trust shapes for Atlas, Almanac, Chronicle.
- ticket issuance — the optional issuance root.
- composition — the subscription graph these trust statements travel along.
- atomicity — the one thing that does not compose.
Roadmap
audience: contributors
This roadmap covers the coalition blueprint, not the v2 lists of any individual confluence, module, or standalone organism. Each crate ships its own internal roadmap; the zipnet crate’s Roadmap to v2 is the template future crates follow.
Items below are ordered by external-impact visibility, not by engineering order; implementation order is determined by what the first production coalition requires.
v0.1 — Blueprint (this document)
Specification complete. No crates shipped. Audience: contributors who want to challenge the composition shape before confluence or module implementation starts.
v0.2 — Coalition meta-crate walking skeleton
Goal: one coalition meta-crate exists, its
CoalitionConfig type compiles against a fixed set of
example citizen references, and its coalition_id()
derivation is stable and testable.
Minimum content:
- Crate layout. One coalition meta-crate owning CoalitionConfig<'a>, ConfluenceConfig<'a>, OrganismRef<'a>, TicketIssuerConfig<'a>, the derivation helpers, the module lookup helpers (atlas(), almanac(), chronicle()), and re-exports of builder::{LatticeConfig, UNIVERSE}. No runtime yet.
- Lifetime parameterisation. Every public struct is #[non_exhaustive] and parameterised over 'a. const instances continue to compile under 'static inference; runtime construction is first-class.
- Schema version byte. SCHEMA_VERSION_U8 = 1 folded as the first element of every identity preimage.
- Stable-id / content-hash split. Both LatticeConfig and OrganismRef expose a stable_id() accessor separate from their full fingerprint.
- Derivation tests. Golden-vector tests ensuring a fixed CoalitionConfig produces a fixed fingerprint, and that each included citizen’s fingerprint is unchanged by coalition membership.
- Mixed-composition tests. A coalition with zero lattices and two standalone organisms derives correctly; a coalition with two lattices and zero organisms derives correctly; mixes derive correctly.
- Module-helper tests. Helpers return Some when a module by the well-known name is present, None otherwise.
- Book examples. Every rust,ignore snippet in this book that references CoalitionConfig type-checks against the v0.2 meta-crate signature.
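One possible shape for the module lookup helpers the meta-crate names; the (name, fingerprint) table is an illustration, not the specified layout:

```rust
/// Hypothetical slice-backed module table: well-known names map to
/// 32-byte module fingerprints; helpers return Some only when the
/// coalition ships that module.
struct CoalitionConfig<'a> {
    modules: &'a [(&'a str, [u8; 32])],
}

impl<'a> CoalitionConfig<'a> {
    fn module(&self, name: &str) -> Option<[u8; 32]> {
        self.modules.iter().find(|(n, _)| *n == name).map(|(_, f)| *f)
    }
    fn atlas(&self) -> Option<[u8; 32]> { self.module("atlas") }
    fn almanac(&self) -> Option<[u8; 32]> { self.module("almanac") }
    fn chronicle(&self) -> Option<[u8; 32]> { self.module("chronicle") }
}
```

The helper shape makes “silence is ambiguous” impossible at the type level: a module the coalition does not ship is None, not an error.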
Non-goals for v0.2:
- Any confluence implementation. Confluences arrive piecemeal once the meta-crate is stable.
- Any module implementation. First module and the ticket issuer land in v0.3.
- Any standalone-organism implementation. Those live in their own crates; the meta-crate only references them.
- On-wire types. No new wire types are added at the coalition layer beyond what each confluence, module, or organism commissions for itself.
v0.3 — First module (Atlas) + ticket issuance
Goal: the first module ships end-to-end and validates the coalition’s civic-services pattern; the optional ticket-issuance root ships alongside because it is mechanically simpler than any module.
Atlas is chosen first because it has the narrowest trust shape and the shortest state machine. It exercises every part of the derivation and subscription path without introducing threshold cryptography or cross-chain auction semantics.
Minimum content:
- coalition-atlas crate with:
  - Config fingerprint folding the coalition root and the set of citizen references the Atlas catalogues.
  - Atlas::<CitizenCard>::read and Atlas::<CitizenCard>::publish typed constructors.
  - A small state machine that commits citizen cards (metadata, endpoint hints, ticket-issuance roots, MR_TDs) per citizen reference, co-signed by the Atlas committee. The card format handles both lattices and standalone organisms.
- Ticket-issuance support in the coalition meta-crate: TicketIssuerConfig plumbed into CoalitionConfig with derivation under COALITION_ROOT.derive("tickets").
- Integration test: stand up two citizens (mocked — one lattice, one organism), one Atlas module, an optional ticket issuer, and demonstrate an integrator binding to all three through one CoalitionConfig.
v0.4 — Almanac + Chronicle
Goal: two more modules ship.
- coalition-almanac crate with committee-driven tick commits. Per-member observed_at vector captured at commit time. Integration test: a correlator confluence aligns events from two citizens to almanac ticks.
- coalition-chronicle crate with an append-only audit log. Per-member signatures on every entry; optional on-chain anchor path with configurable cadence. Integration test: the coalition republishes a CoalitionConfig with a new citizen; Chronicle records the transition and emits an anchor commit at the next cadence boundary.
These land together because they share infrastructure (monotonic committee-signed commits) and because both are strictly additive to v0.3.
v0.5 — First commissioned confluence + retirement markers
Goal: the first confluence whose state machine depends on cleartext order flow from multiple lattices.
Minimum content:
- coalition-intents crate following the intent-router pattern in confluences. Derives independently of any coalition; the crate ships an example ConfluenceConfig and example coalitions that reference it.
- TDX admission on every committee member.
- Integration test: intents submitted on lattice A are routed to lattice B’s offer input, and a downstream tally commit reflects the cross-chain attribution.
- Per-citizen stall policy documented in the crate’s composition-hooks.md.
- Retirement marker support across all shipped crates (coalition-atlas, coalition-almanac, coalition-chronicle, coalition-intents): every committee emits a terminal RetirementMarker before shutting down; the Chronicle records it as ConfluenceRetired or ModuleRetired.
v0.6 — Shared attribution ledger + Compute
Goal: a second commissioned confluence demonstrating fan-in attribution, plus the fourth module.
- coalition-ledger crate following the shared-ledger pattern in confluences.
- coalition-compute crate implementing the Compute module spec in basic services — Compute. Integration test: an agent organism submits a ComputeRequest for a TDX image, receives a ComputeGrant, the provider starts the workload with a matching MR_TD, and the provider emits a ComputeLog on completion.
(Passport is gone; ticket issuance is in v0.3.)
v1.0 — First production coalition
Goal: one coalition runs on mainnet references continuously for a month with no protocol-level incidents.
- Rolling upgrades of confluences and modules. Until this point, “stop the service, bump the CoalitionConfig, restart” is an acceptable release strategy; at v1.0 it is not.
- Snapshot + state-sync for every confluence’s and every module’s state machine.
- Full operator runbooks under operators/, graded by severity and audited externally.
- Stable CoalitionConfig serialisation format. CoalitionConfig::from_hex is safe to use as the handshake after v1.0.
Longer-term items (no version assigned)
Versioning
Carrying the zipnet/builder policy into the coalition model with three adjustments.
- Per confluence / module: lockstep at the confluence level. Commissioned confluences ship and version on their own schedule independent of any coalition that references them — because confluences no longer derive under a coalition root, a confluence upgrade does not force every coalition referencing it to re-derive. Coalitions simply reference the new confluence fingerprint when they wish.
- Per coalition: version-in-name for the coalition identity. A coalition retirement is an operator-level decision and produces a new instance name (ethereum.superchain-v2026q2). Individual modules inside a stable coalition name can still upgrade in lockstep without retiring the coalition.
- Per citizen reference: stable-id pinning plus optional content-hash pinning. A CoalitionConfig references citizens by stable id. Confluences and modules choose per-component whether to also pin the content hash. Whether content-hash pinning becomes a coalition-wide policy rather than a per-component choice is open for a v0.3 decision.
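The per-component pinning choice can be sketched as follows; CitizenRef here is illustrative, the real reference types live in the meta-crate:

```rust
/// A reference always pins the stable id; the content-hash pin is
/// optional, per the policy above. A component that pins the content
/// hash rejects the citizen after any ACL rotation or MR_TD bump; one
/// that does not follows the citizen across content-hash bumps.
struct CitizenRef {
    stable_id: [u8; 20],
    content_hash_pin: Option<[u8; 32]>,
}

impl CitizenRef {
    fn accepts(&self, stable_id: [u8; 20], content_hash: [u8; 32]) -> bool {
        self.stable_id == stable_id
            && self.content_hash_pin.map_or(true, |p| p == content_hash)
    }
}
```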
None of these policies is load-bearing for v0.2 / v0.3. The first v1.0 release forces the decision.
Cross-coalition coordination
Out of scope for v0.1. A confluence whose content folds the relevant citizens directly can be referenced independently by multiple coalitions; the cross-coalition commit is whatever that confluence commits. No new primitive is needed. Research-complete, engineering-deferred.
TDX-attested modules
The v0.3 Atlas is a minimal, non-TDX module. A TDX-attested Atlas — where the committee commits cards only over attested images — is a v1+ refinement. Same for a TDX-attested Almanac and Chronicle. The blueprint is unchanged; the refinement touches only the crates.
Optional cross-coalition directory
Not a core feature; same argument as the lattice
directory in builder. A shared
Map<CoalitionName, CoalitionCard> listing known
coalitions may ship as a devops convenience. If built,
it must:
- be documented as a convenience, not a binding path;
- be independently bindable — the coalition meta-crate never consults it;
- not become load-bearing for ACL or attestation.
Flag in-source as // CONVENIENCE: if it lands.
Byzantine liveness for confluence, module, and organism committees
Mosaik’s Raft variant is crash-fault tolerant. The zipnet liveness item applies to every committee. A future BFT variant in mosaik would be inherited by every component; until then, a deliberately compromised committee member can DoS liveness in its own committee.
Confluence-level confidentiality beyond TDX
Some confluence roles (cross-chain intent routing over many chains, cross-feed correlation of sensitive oracles) may want post-quantum confidentiality ahead of their upstream citizens themselves. Research-complete for several threshold schemes; engineering-deferred. The organism surface does not change; only the crypto inside the confluence’s state machine does.
New mid-rung composition classes
Builder is the first fully-specified class of mid-rung composition (block-building lattices). Other classes may follow — attestation fabrics, oracle grids, identity substrates with their own collective shape — each with its own book and its own handshake. When one does:
- The new class’s book specifies its own FooConfig and derivation discipline, mirroring builder.
- The coalition meta-crate may add a new referenced slice (pub fabrics: &'a [FabricConfig<'a>], etc.) or may continue referencing those compositions via their root organisms. Either works; the tradeoff is ergonomic.
- Confluences gain the ability to fold the new class’s fingerprints into their Config alongside lattices and organisms.
- Atlas CitizenCard schemas are extended via #[non_exhaustive] field additions to carry metadata about the new kind.
Research-and-engineering-deferred. The coalition blueprint grows in lockstep with whichever class lands first.
Heavy-coalition as a research space (deferred)
The blueprint explicitly refuses heavy-coalition primitives: enforceable policies over citizens, taxation, judiciary, mandatory citizenship. The refusal is load-bearing for the current trust and composition stories.
A future proposal could make a case for one heavy-coalition primitive at a time, as a separate design with its own book branch. Any such proposal must:
- give a full trust composition for the new primitive showing what it does to the existing guarantees;
- specify a migration path for coalitions that ship the old (light) shape;
- earn its way past the objection that it reintroduces supra-citizen authority the current blueprint does not provide.
No concrete heavy-coalition proposal is planned. The space is flagged so contributors know where the current scope ends and where fresh proposal work would begin.
What this roadmap does not include
- Specific crate-versioning schedules. Each crate owns its own version cadence.
- A list of confluences, organisms, or lattice classes the blueprint wants shipped. Unlike builder’s six-organism commitment, the coalition blueprint stays open beyond the four basic services. The roadmap names three plausible first commissioned confluences; the catalogue grows as cross-citizen problems force new ones.
- Operator-onboarding contractual detail. Coalition operators run different organisations under different legal regimes; the book does not try to standardise that.
- Adoption strategy. The book specifies and documents; adoption is the operators’ concern.
compute-bridge
audience: ai
A TDX-attested, provider-agnostic provider for the Compute basic service. One binary, four backends: AWS, GCP, Azure, bare-metal. The fleet router picks whichever backend can satisfy each incoming grant.
This page is the browsable root of the crate; every source file is rendered on its own sub-page below.
Status. Prototype / specification sketch.
Signatures match the book; bodies are TODO-stubbed
against upstream crates (the coalition meta-crate
lands at v0.2, coalition-compute lands at v0.6 —
see roadmap).
What it does
Watches a coalition’s Compute module for grants addressed to itself. For each grant, the fleet routes to whichever configured backend can satisfy the grant’s image manifest (CPU, RAM, TDX requirement, region) and returns SSH access to the requester via a zipnet-anonymised encrypted receipt.
The provider’s honesty claim rests on its TDX
measurement: anyone can verify the running binary’s
MR_TD against the one declared in
manifest/compute-manifest.toml,
then inspect the source to conclude what
that code actually does — including that the fleet
router honestly reports the union of regions and
capacities across every configured backend.
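Once the quote is parsed, the requester-side cross-check reduces to a byte comparison. A stdlib-only sketch (the `decode_hex` and `mrtd_matches` helpers are illustrative assumptions; real verification also validates the quote’s Intel certificate chain, as `src/tdx.rs` describes):

```rust
/// Stdlib-only sketch of the byte-level MR_TD cross-check. The helper
/// names and hex layout are illustrative; real verification also
/// validates the quote's Intel certificate chain (see src/tdx.rs).
fn decode_hex(s: &str) -> Option<Vec<u8>> {
    let s = s.strip_prefix("0x").unwrap_or(s);
    if s.len() % 2 != 0 {
        return None;
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).ok())
        .collect()
}

/// True when the quoted MR_TD equals the manifest-declared hex value.
fn mrtd_matches(quoted: &[u8], declared_hex: &str) -> bool {
    decode_hex(declared_hex).map_or(false, |d| d == quoted)
}
```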
Flow
- Boot inside a TDX guest. Per-backend credentials (AWS keys, GCP service-account JSON, Azure service principal, bare-metal SSH private keys) are all measured into the boot; a curious host cannot exfiltrate them at runtime.
- Produce a TDX self-quote over the binary’s MR_TD plus the provider’s ed25519 public key (tdx.rs).
- Resolve the coalition’s Compute module (main.rs), open a zipnet channel (zipnet_io.rs), and build the Fleet from every enabled backend.
- Register a provider card with the quote, the union of regions and capacities, and a tdx_capable flag set when at least one backend can satisfy TDX-required grants (provider.rs).
- For each grant addressed to this provider:
  - Resolve the zipnet envelope to learn the requester’s peer_id x25519 public key.
  - Cross-check the envelope’s declared image hash against the grant’s committed image_hash.
  - The fleet picks the first backend whose can_satisfy(grant) is true; that backend provisions a workload.
  - Seal an SSH-access receipt (host, per-grant private key, valid_to) to the requester’s x25519 public key (receipt.rs) and reply via zipnet.
  - Spawn a watcher that emits the ComputeLog when the instance exits or the grant’s deadline passes.
Layout
compute-bridge/
  Cargo.toml
  README.md
  manifest/
    compute-manifest.toml
  src/
    main.rs
    config.rs
    tdx.rs
    provider.rs
    zipnet_io.rs
    receipt.rs
    backends/
      mod.rs
      aws.rs
      gcp.rs
      azure.rs
      baremetal.rs
Files
- Cargo.toml — dependencies for all four backends plus the aspirational coalition-layer path deps.
- manifest/compute-manifest.toml — the ComputeManifest folded into the image pre-hash; lists which backends the binary drives.
- src/main.rs — entry point.
- src/config.rs — operator boot config with per-backend credentials and caps.
- src/tdx.rs — TDX self-quote and provider identity derivation.
- src/provider.rs — Compute-module provider loop.
- src/backends/mod.rs — Backend trait + Fleet router.
- src/backends/aws.rs — AWS EC2.
- src/backends/gcp.rs — Google Compute Engine (TDX-capable via Confidential VMs).
- src/backends/azure.rs — Azure VMs (TDX-capable via Confidential Computing v3 SKUs).
- src/backends/baremetal.rs — SSH-root hosts; bare VMs or bare-TDX hosts with nested-guest attestation.
- src/zipnet_io.rs — anonymised request resolution and reply publishing.
- src/receipt.rs — encrypted SSH access receipts using X25519-ChaCha20-Poly1305.
Inputs
Operator-supplied at boot and measured into the TDX quote:
- Coalition configuration — the CoalitionConfig fingerprint to join and the Compute module’s ConfluenceConfig to register with.
- Per-backend credentials — at least one of the four backends must be configured. See config.md for the exact shapes.
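For a feel of the backends side, here is a hedged sketch of what such a boot config could look like. The field names follow src/config.rs; the section layout and placeholder values are assumptions, not a shipped format:

```toml
# Illustrative backends boot config (BACKENDS_CONFIG_PATH).
# Field names follow src/config.rs; the layout is an assumption.

[aws]
access_key_id = "AKIA..."
secret_access_key = "..."
regions = ["eu-central-1"]
instance_families = ["m6i", "c6i"]
max_concurrent_instances = 8

[baremetal]
lenient_boot = false

[[baremetal.machines]]
host = "rack-01.example.org"
port = 22
user = "root"
ssh_key_path = "/boot/keys/rack-01"
region = "onprem-fra"
cpu_millicores = 32000
ram_mib = 131072
tdx_capable = true
```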
Outputs
Two outputs, both via zipnet:
- Provider card — published to the Compute module’s Collection&lt;ProviderId, ProviderCard&gt; at boot and refreshed on capacity change. Folds the TDX quote, the per-backend capability summary, and the current real-time capacity telemetry.
- SSH access receipts — one per grant, encrypted to the requester’s peer_id x25519 public key and posted to the zipnet reply stream matching the grant.
Adding a bare-metal machine
“Adding a bare-metal machine” is exactly: the
operator gives the bridge SSH root access to a host
and declares its shape in the boot config. No cloud
call is involved. The bridge maintains control-plane
SSH sessions to each machine for the duration of the
process; grants are satisfied by systemd-run on bare
VMs or virsh plus qemu-tdx on bare-TDX hosts.
Bare-TDX hosts are the route for TDX-required grants in environments where cloud TDX is not yet available — AWS-only operators, operators running their own data centres, operators with early-access TDX hardware. The nested guest’s MR_TD flows back to the requester as part of the SSH receipt.
See backend-baremetal.md
for the full provision flow.
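As a sketch, the transient-unit invocation the bridge might issue over the control-plane SSH session for a bare-VM grant could be assembled like this. The flag set, the grant-&lt;id&gt; unit naming, and the helper itself are illustrative assumptions, not the shipped provision flow:

```rust
/// Hypothetical helper: assemble the argv for starting one grant's
/// workload as a transient systemd unit on a bare VM. Flag set and
/// "grant-<id>" naming are illustrative assumptions.
fn systemd_run_argv(
    grant_id: &str,
    cpu_millicores: u32,
    ram_mib: u32,
    workload_cmd: &str,
) -> Vec<String> {
    vec![
        "systemd-run".into(),
        format!("--unit=grant-{grant_id}"),
        // Garbage-collect the unit once the workload exits.
        "--collect".into(),
        // systemd's CPUQuota is a percentage of one core:
        // 1000 millicores == 100%.
        format!("--property=CPUQuota={}%", cpu_millicores / 10),
        format!("--property=MemoryMax={ram_mib}M"),
        workload_cmd.into(),
    ]
}
```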
Why zipnet for I/O
The requester’s identity is a mosaik ClientId. The
provider receives a grant identifying the requester
only by a zipnet-rotated sealed envelope; it never
learns the requester’s coalition-level identity, only
the peer_id inside the envelope. The encrypted
receipt goes back through the same channel.
Consequences:
- A curious provider cannot enumerate which coalition members are running what workloads.
- A compromised provider cannot selectively deny service to specific coalition members by their coalition identity — they are not visible at the provider layer.
- A coalition can run multiple compute-bridge instances under different operators without cross-provider identity leakage.
Threat model (short version)
- Operator runs the binary in TDX. If not, the provider card’s claimed MR_TD does not match the running image; the Compute module rejects registration.
- Per-backend credentials are operator-scoped. Grants in flight when credentials are revoked fail to provision; the Compute committee observes the usage-log absence and scores the provider down.
- Zipnet carries requester identity. The provider never learns the coalition ClientId, only the ephemeral peer_id. Receipt encryption is to the peer_id’s x25519 public key, published in the zipnet envelope.
- Cloud and bare-host infrastructure is not trusted with requester data unless the backend is TDX-capable and the grant required TDX. Non-TDX workloads treat the backend as a semi-trusted host; TDX workloads verify the nested MR_TD before trusting outputs.
A full threat analysis belongs in the book when this example graduates from prototype.
Related pages
- Compute — topping up and scheduling experiments — the agent-facing investigation this example illustrates.
- Basic services — Compute — the full spec.
- Threat model — Compute trust — where this fits in the coalition’s compositional trust table.
Cargo.toml
audience: ai
Manifest. Real dependencies for the runtime, all four
cloud SDKs (aws-sdk-ec2, google-cloud-*,
azure_mgmt_compute), the bare-metal SSH control
plane (russh), and the x25519 / ChaCha20-Poly1305
crypto used by the receipt layer. Aspirational
path-style dependencies for the coalition-layer
crates (mosaik, zipnet, coalition,
coalition-compute) — those do not exist yet per the
roadmap. Once the
upstream crates land, the path = "../coalition-stub/..."
lines become real version = "..." entries.
[package]
name = "compute-bridge"
version = "0.1.0-prototype"
edition = "2021"
license = "MIT"
description = "A TDX-attested provider for the Compute basic service that provisions workloads across AWS, GCP, Azure, and bare-metal backends, returning SSH access via zipnet-anonymised encrypted receipts."
publish = false
# Prototype. Not a shipped binary.
#
# Compiles against aspirational dependency paths for the
# coalition layer; see README.md for which deps exist
# today and which do not.
[dependencies]
# --- Runtime ---------------------------------------------------------------
tokio = { version = "1", features = ["full"] }
anyhow = "1"
async-trait = "0.1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
toml = "0.8"
# --- Cloud backends --------------------------------------------------------
# AWS
aws-config = "1"
aws-sdk-ec2 = "1"
aws-sdk-sts = "1"
# GCP (Compute Engine via google-cloud-* crates)
google-cloud-auth = "0.17"
google-cloud-googleapis = "0.15"
# Azure (Resource Manager)
azure_identity = "0.21"
azure_mgmt_compute = "0.21"
azure_core = "0.21"
# --- Bare-metal (SSH control plane) ----------------------------------------
russh = "0.45"
russh-keys = "0.45"
# --- Crypto ----------------------------------------------------------------
# x25519 / ed25519 for the encrypted-receipt sealing.
ed25519-dalek = "2"
x25519-dalek = "2"
chacha20poly1305 = "0.10"
rand = "0.8"
# --- Coalition layer (aspirational) ----------------------------------------
# These crates do not exist yet; see book/src/contributors/roadmap.md.
# Once the coalition meta-crate lands at v0.2 and the Compute module crate
# lands at v0.6, these path=".../stub" entries will be replaced with the
# real crates.
mosaik = { path = "../coalition-stub/mosaik-stub", package = "mosaik-stub" }
zipnet = { path = "../coalition-stub/zipnet-stub", package = "zipnet-stub" }
coalition = { path = "../coalition-stub/coalition-stub", package = "coalition-stub" }
coalition-compute = { path = "../coalition-stub/compute-stub", package = "coalition-compute-stub" }
[[bin]]
name = "compute-bridge"
path = "src/main.rs"
[lints.rust]
unused_imports = "allow" # prototype; stubs leave imports unused
unused_variables = "allow"
dead_code = "allow"
Up: compute-bridge.
manifest/compute-manifest.toml
audience: ai
The ComputeManifest the coalition folds into the
image pre-hash. Deriving
blake3(canonicalized(this)) yields the pre-hash that
appears in the provider’s card and in any
ComputeRequest targeting this binary; the post-hash
(MR_TD) is produced by the mosaik TDX builder at
build time.
The [bridge_extensions] section is folded into the
image pre-hash because it declares which backends the
binary drives — different backend support is a
materially different binary. Per-operator concrete
backend credentials and regions live in the separate
backends boot config
(src/config.rs), not here.
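Why canonicalize before hashing: key order must not change the pre-hash. A stdlib-only sketch of that property, with a BTreeMap standing in for the real canonicalization and the digest step omitted (the actual pre-hash is blake3, not shown here):

```rust
use std::collections::BTreeMap;

/// Illustrative canonicalization stand-in: sort keys so that two
/// manifests with the same fields in different order serialize to the
/// same bytes. The real pre-hash is blake3 over the canonical form.
fn canonicalize(pairs: &[(&str, &str)]) -> String {
    let sorted: BTreeMap<_, _> = pairs.iter().cloned().collect();
    sorted
        .iter()
        .map(|(k, v)| format!("{k}={v}\n"))
        .collect()
}
```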
# ComputeManifest for the compute-bridge provider
# binary itself. Deriving blake3(canonicalized(this))
# yields the image pre-hash the coalition will see in
# the provider card.
#
# The post-hash (MR_TD) is produced by the mosaik TDX
# builder after building the crate; it is printed by
# the builder and must be pasted into
# manifest/compute-manifest.post-hash.txt for the
# provider card publication step.
#
# See book/src/contributors/basic-services.md#compute
# for the ComputeManifest shape.
[crate_ref]
name = "compute-bridge"
version = "0.1.0-prototype"
commit_hash = "0x0000000000000000000000000000000000000000000000000000000000000000"
# commit_hash is the blake3 of the git tree the binary
# was built from; filled in by CI at build time.
# Minimum hardware the bridge binary itself needs to run
# its own loops and the AWS / GCP / Azure SDK clients
# plus any bare-metal SSH control sessions.
min_cpu_millicores = 2000
min_ram_mib = 2048
min_storage_gib = 8
# TDX is required: the bridge's honesty rests on the
# MR_TD the coalition verifies against this manifest.
tdx = "Required"
# The bridge's own mosaik traffic only; AWS / GCP / Azure
# control-plane calls plus bare-metal SSH go via per-
# backend allowlists enforced inside the guest.
network_policy = "Allowlist"
# Non-binding duration hint: the bridge runs indefinitely
# while the operator rotates credentials.
duration_hint_sec = 0
# --- Extensions beyond the base ComputeManifest ---
#
# The base spec in basic-services.md is deliberately
# narrow. compute-bridge additionally declares which
# backends its binary will drive. This is binary-specific
# (folded into the pre-hash); per-operator concrete
# regions and credentials live in the backends boot
# config, not the manifest.
[bridge_extensions]
backends_supported = ["aws", "gcp", "azure", "baremetal"]
# Each enabled backend contributes capacity telemetry
# and a region set to the provider card at runtime.
# Operators mix and match:
#
# - A coalition member with AWS credits and nothing
# else enables [backends.aws] only.
# - A member with a rack of bare-TDX hosts enables
# [backends.baremetal] only; every grant requiring
# TDX automatically routes there.
# - A member with all four can accept the widest
# possible range of grants and let the fleet router
# choose.
#
# The concrete operator configuration lives in a
# separate backends boot config; see src/config.rs.
src/main.rs
audience: ai
Entry point. Loads boot config, produces the TDX
self-quote, connects to the mosaik universe, resolves
the coalition’s Compute module, opens the zipnet
channel, builds the provider-agnostic
Fleet from the operator’s enabled
backends, and hands everything to the provider loop.
//! compute-bridge — a TDX-attested Compute-module
//! provider that provisions workloads across AWS, GCP,
//! Azure, and bare-metal backends.
//!
//! Prototype. See README.md for status.
mod backends;
mod config;
mod provider;
mod receipt;
mod tdx;
mod zipnet_io;
use std::sync::Arc;
use anyhow::Context;
use coalition::CoalitionConfig;
use mosaik::Network;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::try_from_default_env()
.unwrap_or_else(|_| "info,compute_bridge=debug".into()),
)
.init();
// 1. Load boot config (per-backend credentials,
// coalition reference, operator knobs).
let cfg = config::BootConfig::load_from_env()
.context("loading boot config")?;
tracing::info!(
coalition = %cfg.coalition.name,
backends = %summarise_backends(&cfg.backends),
"compute-bridge starting",
);
// 2. Produce the TDX attestation. Without a matching
// MR_TD the provider card the coalition sees will
// not match this binary and the Compute committee
// will reject registration.
let tdx_quote = tdx::self_quote().await
.context("producing TDX self-quote")?;
// 3. Bring up the shared mosaik network handle.
let network = Arc::new(
Network::new(mosaik::builder::UNIVERSE).await
.context("connecting to mosaik universe")?,
);
// 4. Resolve the Compute module on the coalition the
// operator points us at. A coalition without a
// Compute module is fatal — nothing to provide for.
let compute_cfg = cfg.coalition.compute()
.context("coalition does not ship a Compute module")?;
// 5. Zipnet channel for anonymised request/reply.
let zipnet = zipnet_io::ZipnetChannel::open(
&network,
&cfg.zipnet,
).await?;
// 6. Build the provider-agnostic Fleet from the
// operator's enabled backends. At least one must
// be configured.
let fleet = backends::Fleet::from_boot_config(&cfg.backends).await?;
// 7. Provider loop: register the provider card,
// accept grants, provision, return receipts,
// emit usage logs.
let provider = provider::Provider::new(
network.clone(),
compute_cfg,
tdx_quote,
zipnet,
fleet,
cfg.provider.clone(),
);
provider.run().await
}
fn summarise_backends(cfg: &config::BackendsBootConfig) -> String {
let mut parts = Vec::new();
if cfg.aws.is_some() { parts.push("aws"); }
if cfg.gcp.is_some() { parts.push("gcp"); }
if cfg.azure.is_some() { parts.push("azure"); }
if cfg.baremetal.is_some() { parts.push("baremetal"); }
parts.join(",")
}
src/config.rs
audience: ai
Operator-supplied boot configuration. Inputs read at process start and measured into the TDX quote:
- A coalition reference (which CoalitionConfig to join and which Compute module to register with).
- Per-backend credentials — one or more of:
  - AwsBootConfig — access-key pair, regions, instance families.
  - GcpBootConfig — service-account JSON, project id, regions, machine types, TDX machine types.
  - AzureBootConfig — tenant / client / secret, subscription, resource group, regions, VM sizes, TDX VM sizes.
  - BareMetalBootConfig — a list of BareMetalMachine entries, each carrying host, port, user, SSH key path, region label, declared CPU millicores and RAM MiB, a tdx_capable flag, and an optional tdx_mrtd_compat string.
- ZipnetBootConfig — which zipnet NetworkId and unseal committee to bond.
- ProviderBootConfig — operator-level caps (capacity-refresh cadence, max grant duration, aggregate CPU / RAM ceilings).
RedactedString is a Debug-sanitising wrapper so
secrets cannot leak into traces.
The bridge refuses to start if no backend is
configured; see
backends/mod.rs.
//! Operator-supplied boot configuration.
//!
//! Input artefacts, read at process start and measured
//! into the TDX quote:
//!
//! - `COALITION_CONFIG_PATH` — the `CoalitionConfig`
//! fingerprint plus the Compute `ConfluenceConfig`
//! fingerprint the provider registers against.
//!
//! - `BACKENDS_CONFIG_PATH` — a TOML file declaring
//! which backends (aws, gcp, azure, baremetal) are
//! enabled and the operator's per-backend credentials,
//! regions, and shape preferences. At least one
//! backend must be enabled.
//!
//! The provider binary itself never contacts a secret
//! store at runtime; all secrets are measured into the
//! TDX boot so their integrity is attested.
use std::path::PathBuf;
use anyhow::{anyhow, Context};
use serde::Deserialize;
use coalition::CoalitionConfig;
#[derive(Debug, Clone)]
pub struct BootConfig {
pub coalition: CoalitionConfig<'static>,
pub backends: BackendsBootConfig,
pub zipnet: ZipnetBootConfig,
pub provider: ProviderBootConfig,
}
impl BootConfig {
pub fn load_from_env() -> anyhow::Result<Self> {
let coalition_path: PathBuf = std::env::var("COALITION_CONFIG_PATH")
.context("COALITION_CONFIG_PATH not set")?
.into();
let backends_path: PathBuf = std::env::var("BACKENDS_CONFIG_PATH")
.context("BACKENDS_CONFIG_PATH not set")?
.into();
// In the real deployment these come from the TDX
// initrd, not $CWD.
let coalition_raw = std::fs::read_to_string(&coalition_path)
.with_context(|| format!("reading {}", coalition_path.display()))?;
let backends_raw = std::fs::read_to_string(&backends_path)
.with_context(|| format!("reading {}", backends_path.display()))?;
// TODO: real parsing. The coalition config would
// be reconstructed via the coalition meta-crate's
// from_toml helper; backends parse straight into
// BackendsBootConfig via serde.
let _ = coalition_raw;
let _ = backends_raw;
Err(anyhow!(
"BootConfig::load_from_env is a prototype stub; \
implement once the coalition meta-crate is released"
))
}
}
/// Aggregate backends config. Every field is `Option`;
/// at least one must be `Some` or `Fleet::from_boot_config`
/// refuses to start.
#[derive(Debug, Clone, Default, Deserialize)]
pub struct BackendsBootConfig {
#[serde(default)]
pub aws: Option<AwsBootConfig>,
#[serde(default)]
pub gcp: Option<GcpBootConfig>,
#[serde(default)]
pub azure: Option<AzureBootConfig>,
#[serde(default)]
pub baremetal: Option<BareMetalBootConfig>,
}
// ---- AWS -------------------------------------------------------------------
#[derive(Debug, Clone, Deserialize)]
pub struct AwsBootConfig {
pub access_key_id: String,
pub secret_access_key: RedactedString,
pub regions: Vec<String>,
pub instance_families: Vec<String>,
pub max_concurrent_instances: u32,
}
// ---- GCP -------------------------------------------------------------------
#[derive(Debug, Clone, Deserialize)]
pub struct GcpBootConfig {
/// Path (inside the TDX initrd) to the service-
/// account JSON key. Measured into the boot.
pub service_account_key_path: PathBuf,
/// GCP project id.
pub project_id: String,
pub regions: Vec<String>,
/// Machine types the bridge will provision for
/// non-TDX grants. E.g. `["n2-standard-4", "n2-highmem-8"]`.
pub machine_families: Vec<String>,
/// Confidential-VM (TDX) machine types. Non-empty
/// marks the backend `tdx_capable`. E.g.
/// `["c3-standard-4", "c3-highmem-8"]`.
#[serde(default)]
pub tdx_machine_types: Vec<String>,
pub max_concurrent_instances: u32,
}
// ---- Azure -----------------------------------------------------------------
#[derive(Debug, Clone, Deserialize)]
pub struct AzureBootConfig {
pub tenant_id: String,
pub client_id: String,
pub client_secret: RedactedString,
pub subscription_id: String,
/// Resource group the bridge is permitted to mutate.
pub resource_group: String,
pub regions: Vec<String>,
/// VM SKUs for non-TDX grants. E.g.
/// `["Standard_D4s_v5", "Standard_E8s_v5"]`.
pub vm_sizes: Vec<String>,
/// Confidential-Compute v3 SKUs (TDX). Non-empty
/// marks the backend `tdx_capable`. E.g.
/// `["Standard_DC4ads_v5", "Standard_EC8ads_v5"]`.
#[serde(default)]
pub tdx_vm_sizes: Vec<String>,
pub max_concurrent_instances: u32,
}
// ---- Bare-metal ------------------------------------------------------------
#[derive(Debug, Clone, Deserialize)]
pub struct BareMetalBootConfig {
pub machines: Vec<BareMetalMachine>,
/// If `true`, start even when some declared machines
/// are unreachable. Default is strict.
#[serde(default)]
pub lenient_boot: bool,
}
#[derive(Debug, Clone, Deserialize)]
pub struct BareMetalMachine {
/// DNS name or IP the bridge connects to over SSH.
pub host: String,
/// SSH port; 22 by default.
#[serde(default = "default_ssh_port")]
pub port: u16,
/// SSH user. **Must be root**: the bridge installs
/// systemd units, opens firewall ports, and creates
/// workload users.
#[serde(default = "default_root_user")]
pub user: String,
/// Path (inside the TDX initrd) to the SSH private
/// key. Measured into the boot.
pub ssh_key_path: PathBuf,
/// Operator-assigned region label. Used for
/// routing; need not be a cloud region name.
pub region: String,
/// Declared CPU millicores this machine offers.
pub cpu_millicores: u32,
/// Declared RAM MiB this machine offers.
pub ram_mib: u32,
/// True if the machine can run nested TDX guests
/// (bare-TDX host). The bridge will route TDX-
/// required grants only to machines with this flag.
#[serde(default)]
pub tdx_capable: bool,
/// Optional expected MR_TD compatibility value a
/// bare-TDX host will launch nested guests under.
/// Surfaced in the provider card so requesters can
/// cross-check.
#[serde(default)]
pub tdx_mrtd_compat: Option<String>,
}
fn default_ssh_port() -> u16 { 22 }
fn default_root_user() -> String { "root".into() }
// ---- zipnet / provider knobs ----------------------------------------------
#[derive(Debug, Clone, Deserialize)]
pub struct ZipnetBootConfig {
pub network_id: String,
pub unseal_config_hex: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct ProviderBootConfig {
pub capacity_refresh_sec: u32,
pub max_grant_duration_sec: u32,
pub max_granted_cpu_millicores: u32,
pub max_granted_ram_mib: u32,
}
// ---- Redacted string ------------------------------------------------------
#[derive(Clone, Deserialize)]
#[serde(transparent)]
pub struct RedactedString(pub String);
impl std::fmt::Debug for RedactedString {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "\"<redacted {} chars>\"", self.0.len())
}
}
src/tdx.rs
audience: ai
TDX attestation helpers. self_quote() asks the guest
agent for a quote over the running binary’s MR_TD plus
a report_data blob folding the provider’s ed25519
public key, the ComputeManifest pre-hash, and a boot
nonce.
ProviderIdentity::from_tdx_sealed_seed derives the
provider’s signing keypair from a TDX-sealed seed, so
the public key is stable across reboots of the same
binary + TDX guest combination. Without TDX sealing,
every restart would rotate the key and break the
provider’s identity on the Compute module.
//! TDX attestation helpers.
//!
//! The provider's honesty claim is grounded here: the
//! coalition trusts the provider's self-reported
//! capacity because the provider is running a measured
//! binary whose MR_TD matches the one declared in
//! `manifest/compute-manifest.toml`.
use anyhow::Context;
use mosaik::tee::tdx;
/// Self-produced TDX quote: the running binary asks
/// the TDX guest services for a quote over its own
/// MR_TD plus a runtime-supplied `report_data` blob.
///
/// The `report_data` folds:
///
/// - the provider's ed25519 public key (used to sign
/// zipnet envelopes and provider cards),
/// - the ComputeManifest pre-hash this binary was
/// built from,
/// - a boot nonce produced by the TDX guest agent so
/// quotes cannot be replayed across boots.
///
/// The provider card published to the Compute module
/// carries this quote verbatim. Prospective requesters
/// verify:
///
/// 1. The quote is valid (Intel provisioning-cert chain
/// OK).
/// 2. The MR_TD in the quote matches the MR_TD
/// committed to in the Compute module's provider-
/// admission policy.
/// 3. The `report_data` public key is the one the
/// provider signs with.
pub async fn self_quote() -> anyhow::Result<tdx::Quote> {
// In the real implementation this calls into the TDX
// guest's `get_quote` MMIO/IOCTL interface via the
// `mosaik::tee::tdx` wrapper. Prototype stub:
Err(anyhow::anyhow!(
"tdx::self_quote is a prototype stub; the \
mosaik::tee::tdx crate supplies the real impl"
))
}
/// Convenience wrapper: the provider's signing keypair.
/// Derived from a TDX-sealed seed so the public key is
/// stable across reboots of the same binary+TDX guest.
pub struct ProviderIdentity {
pub public: ed25519_dalek::VerifyingKey,
pub secret: ed25519_dalek::SigningKey,
}
impl ProviderIdentity {
pub async fn from_tdx_sealed_seed() -> anyhow::Result<Self> {
// TODO: read the 32-byte seed via mosaik::tee::tdx::seal,
// derive an ed25519 key, return.
Err(anyhow::anyhow!(
"ProviderIdentity::from_tdx_sealed_seed is a prototype stub"
))
}
}
src/provider.rs
audience: ai
The Compute-module provider loop. Four responsibilities:
- Publish a provider card at boot and refresh it on capacity change. The card folds the union of capabilities across every enabled backend.
- Subscribe to ComputeGrants addressed to this provider.
- For each grant: resolve the zipnet envelope, cross-check the declared image hash, route to the right backend via backends/mod.rs, seal an SSH receipt via receipt.rs, reply via zipnet_io.rs.
- Spawn a watcher that emits a ComputeLog when the instance exits or the grant’s valid_to passes.
The select! in run keeps one action per tick — either handle a grant or refresh the card — rather than racing both concurrently. This keeps the accounting honest.
//! Compute-module provider loop.
//!
//! Responsibilities:
//!
//! 1. Publish a `ProviderCard` at boot, refresh it on
//! capacity change.
//! 2. Subscribe to `ComputeGrant`s addressed to this
//! provider.
//! 3. For each grant:
//! a. Read the matching zipnet request envelope to
//! obtain the requester's `peer_id` public key
//! and declared image hash.
//! b. Provision a workload on whichever backend —
//! AWS, GCP, Azure, bare-metal — the fleet
//! decides can satisfy it.
//! c. Encrypt and return an SSH access receipt to
//! the requester via zipnet.
//! d. On workload completion or timeout, emit a
//! `ComputeLog`.
use std::sync::Arc;
use anyhow::Context;
use coalition::ConfluenceConfig;
use coalition_compute::{
ComputeGrant, ComputeLog, ProviderCard, ProviderId,
UsageMetrics,
};
use mosaik::tee::tdx::Quote;
use mosaik::Network;
use crate::backends::Fleet;
use crate::config::ProviderBootConfig;
use crate::receipt::SshAccessReceipt;
use crate::zipnet_io::ZipnetChannel;
pub struct Provider<'cfg> {
network: Arc<Network>,
compute: &'cfg ConfluenceConfig<'cfg>,
tdx_quote: Quote,
zipnet: ZipnetChannel,
fleet: Fleet,
config: ProviderBootConfig,
}
impl<'cfg> Provider<'cfg> {
pub fn new(
network: Arc<Network>,
compute: &'cfg ConfluenceConfig<'cfg>,
tdx_quote: Quote,
zipnet: ZipnetChannel,
fleet: Fleet,
config: ProviderBootConfig,
) -> Self {
Self { network, compute, tdx_quote, zipnet, fleet, config }
}
/// Main loop. Runs forever until the process is
/// terminated; retirement requires a clean operator
/// stop which emits a `RetirementMarker` on the
/// provider's own stream.
pub async fn run(self) -> anyhow::Result<()> {
let provider_id = self.register_provider_card().await?;
tracing::info!(provider = %provider_id_hex(&provider_id),
"provider card registered");
let mut grants = coalition_compute::grants_for(
&self.network, self.compute, provider_id,
).await?;
// Capacity-refresh ticker runs alongside grant
// handling. Using a select! keeps the code honest:
// this provider does one thing per tick.
let mut refresh = tokio::time::interval(
std::time::Duration::from_secs(
self.config.capacity_refresh_sec.max(1) as u64,
),
);
loop {
tokio::select! {
Some(grant) = grants.next() => {
if let Err(err) = self.handle_grant(&grant).await {
tracing::warn!(
request_id = ?grant.request_id,
error = %err,
"grant handling failed",
);
}
}
_ = refresh.tick() => {
if let Err(err) = self.refresh_provider_card(provider_id).await {
tracing::warn!(error = %err,
"provider card refresh failed");
}
}
else => break,
}
}
Ok(())
}
/// Assemble and publish the provider card.
///
/// The card folds:
///
/// - the TDX quote (proves the running binary is
/// the one whose source is in this crate),
/// - the per-backend capabilities: enabled backends,
/// the union of regions they serve, the union of
/// their CPU/RAM maxes, whether any backend is
/// `tdx_capable`,
/// - real-time capacity telemetry (CPU millicores,
/// RAM MiB currently provisioned across all
/// backends),
/// - the zipnet return channel the provider will
/// listen on.
async fn register_provider_card(&self) -> anyhow::Result<ProviderId> {
let capabilities = self.fleet.capabilities().await?;
let _ = capabilities;
// TODO: construct ProviderCard folding TDX quote
// + per-backend capabilities + zipnet channel,
// sign it, commit via coalition_compute::register.
anyhow::bail!(
"Provider::register_provider_card is a prototype stub"
)
}
async fn refresh_provider_card(&self, id: ProviderId) -> anyhow::Result<()> {
// TODO: fetch telemetry from Fleet; publish
// updated ProviderCard.
let _ = id;
Ok(())
}
/// Handle one grant end to end.
async fn handle_grant(&self, grant: &ComputeGrant<'_>) -> anyhow::Result<()> {
tracing::info!(
request_id = ?grant.request_id,
image_hash = ?grant.image_hash,
valid_to = ?grant.valid_to,
"accepted grant",
);
// 1. Decrypt the zipnet envelope to learn the
// requester's peer_id and image payload.
let envelope = self.zipnet
.resolve(&grant.bearer_pointer)
.await
.context("resolving zipnet envelope")?;
// 2. Cross-check the envelope's declared image
// post-hash against the grant's committed
// image_hash. If the requester tried to smuggle
// a different image, reject the grant and commit
// an empty ComputeLog marking the grant invalid.
if envelope.image_hash() != grant.image_hash {
anyhow::bail!(
"envelope/grant image hash mismatch (this should be \
impossible under honest zipnet unseal)"
);
}
// 3. Route the grant to whichever backend can
// satisfy it.
let instance = self.fleet
.provision_for_grant(grant, &envelope)
.await
.context("provisioning workload")?;
tracing::info!(
request_id = ?grant.request_id,
backend = %instance.backend,
instance = %instance.instance_id,
"workload running",
);
// 4. Build the SSH access receipt. Contains:
// - ssh host (the backend's public endpoint
// for the instance),
// - per-grant private key,
// - host key (for known_hosts),
// - the grant's valid_to tick so the requester
// can plan workload lifetime.
let receipt = SshAccessReceipt::build(
&instance,
grant,
)?;
// 5. Encrypt the receipt to the peer_id's
// x25519 public key (published in the zipnet
// envelope) and return via the zipnet reply
// stream.
let sealed = receipt.seal_to(envelope.peer_x25519_public())?;
self.zipnet.reply(&grant.request_id, sealed).await?;
// 6. Spawn a watcher that emits the ComputeLog
// when the instance terminates or the grant
// deadline passes, whichever first.
let fleet = self.fleet.clone();
let network = self.network.clone();
let request_id = grant.request_id;
let valid_to = grant.valid_to;
let instance_for_log = instance.clone();
let provider_id = instance.provider_id();
tokio::spawn(async move {
let usage: UsageMetrics = fleet
.watch_until_exit(&instance_for_log, valid_to)
.await
.unwrap_or_default();
let log = ComputeLog {
grant_id: request_id,
provider: provider_id,
window: UsageMetrics::window_for(&usage),
usage,
evidence: None,
};
if let Err(err) = coalition_compute::append_log(&network, &log).await {
tracing::warn!(error = %err,
"failed to append ComputeLog");
}
});
Ok(())
}
}
fn provider_id_hex(id: &ProviderId) -> String {
// Short stable rendering for logs. Real impl hashes
// the underlying UniqueId; prototype stub returns
// placeholder.
let _ = id;
"<provider-id>".into()
}
src/backends/mod.rs
audience: ai
The backend abstraction and fleet router. A small
trait (name, capabilities, can_satisfy,
provision, watch_until_exit, terminate) every
concrete backend implements. The Fleet holds every
configured backend and routes each grant to the first
one whose can_satisfy(grant) returns true.
Order in the fleet matters. Backends are registered in
a fixed order (AWS, GCP, Azure, bare-metal) and the
router walks them in that order, so operators express
a preference by choosing which backends to enable in
the boot config. A grant whose manifest.tdx = Required
skips backends whose tdx_capable = false during the
matching walk.
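The matching walk can be sketched in miniature with plain Rust. MiniGrant and MiniBackend below are simplified stand-ins for the crate's real ComputeGrant and Backend trait, reduced to the two facts the router consults here (region and TDX requirement):

```rust
// Miniature of the fleet's first-match walk. `MiniGrant` and
// `MiniBackend` are simplified stand-ins, not the real types.
struct MiniGrant {
    region: &'static str,
    tdx_required: bool,
}

struct MiniBackend {
    name: &'static str,
    regions: &'static [&'static str],
    tdx_capable: bool,
}

impl MiniBackend {
    fn can_satisfy(&self, g: &MiniGrant) -> bool {
        // A TDX-required grant skips non-TDX backends.
        if g.tdx_required && !self.tdx_capable {
            return false;
        }
        self.regions.contains(&g.region)
    }
}

/// First backend whose `can_satisfy` returns true wins;
/// registration order doubles as preference order.
fn route<'a>(fleet: &'a [MiniBackend], g: &MiniGrant) -> Option<&'a str> {
    fleet.iter().find(|b| b.can_satisfy(g)).map(|b| b.name)
}
```

Because the first satisfying backend always wins, there is no scoring step: an operator who prefers bare-metal over clouds simply enables only the backends they want the walk to reach.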
ProvisionedInstance is the uniform handle every
backend returns. It carries a backend name field that
the fleet uses to route post-provisioning calls
(watch, terminate) back to the originating backend.
Concrete implementations:
- backends/aws.rs — EC2.
- backends/gcp.rs — Compute Engine, with optional TDX via Confidential VMs.
- backends/azure.rs — Azure VMs, with optional TDX via DCdsv3/ECdsv3 SKUs.
- backends/baremetal.rs — operator-owned hosts reached via SSH root; workloads run as systemd-managed containers or, on bare-TDX hosts, as nested TDX guests.
//! Backend abstraction + fleet router.
//!
//! The bridge is provider-agnostic: it defines a
//! [`Backend`] trait and routes each incoming
//! [`ComputeGrant`] to the first configured backend that
//! reports `can_satisfy == true`. Backends ship in this
//! crate as separate modules:
//!
//! - [`aws`] — AWS EC2.
//! - [`gcp`] — Google Compute Engine.
//! - [`azure`] — Azure Virtual Machines.
//! - [`baremetal`] — operator-owned hosts reached via
//! SSH root; workloads run as systemd-managed
//! containers on the host. Bare-TDX hosts additionally
//! carry a measured MR_TD in their ProviderCard so
//! TDX-required grants can route to them.
pub mod aws;
pub mod azure;
pub mod baremetal;
pub mod gcp;
use std::sync::Arc;
use async_trait::async_trait;
use coalition_compute::{
AlmanacTick, ComputeGrant, ProviderId, UsageMetrics,
};
use crate::config::BackendsBootConfig;
use crate::zipnet_io::Envelope;
/// Operator-chosen region label. For clouds this is the
/// native cloud region (`"us-east-1"`, `"europe-west1"`,
/// `"westus2"`); for bare-metal it is an operator-
/// supplied label (e.g. `"local-eu-fra-1"`).
pub type RegionLabel = String;
/// Capability flags a backend publishes so the fleet
/// router can filter grants.
#[derive(Clone, Debug, Default)]
pub struct Capabilities {
pub regions: Vec<RegionLabel>,
pub tdx_capable: bool,
pub max_cpu_millicores: u32,
pub max_ram_mib: u32,
}
/// Running-instance handle. Opaque beyond the `backend`
/// field, which the fleet uses to route post-
/// provisioning operations (watch, terminate, usage
/// reporting) back to the originating backend.
#[derive(Clone)]
pub struct ProvisionedInstance {
pub backend: &'static str,
pub instance_id: String,
pub region: RegionLabel,
pub public_host: String,
pub ssh_port: u16,
pub user: String,
pub key_private: Vec<u8>,
pub key_public: Vec<u8>,
pub host_key: Vec<u8>,
pub started_at: AlmanacTick,
pub provider_id: ProviderId,
}
impl ProvisionedInstance {
pub fn provider_id(&self) -> ProviderId {
self.provider_id.clone()
}
}
/// A provisioning backend.
///
/// Each backend is stateless at the crate level; state
/// (client handles, per-region credentials, connection
/// pools) is held inside the implementor.
#[async_trait]
pub trait Backend: Send + Sync {
/// Short stable identifier, used in logs and in
/// `ProvisionedInstance::backend`.
fn name(&self) -> &'static str;
/// Aggregated capabilities this backend will publish
/// in the provider card.
async fn capabilities(&self) -> anyhow::Result<Capabilities>;
/// Quick filter — will this backend attempt to
/// satisfy this grant? Used by the fleet router to
/// pick. Implementations should check region, CPU,
/// RAM, and TDX requirement.
fn can_satisfy(&self, grant: &ComputeGrant<'_>) -> bool;
/// Provision a workload for a grant. Returns a
/// running-instance handle once the workload is
/// accepting SSH connections at the returned host.
async fn provision(
&self,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<ProvisionedInstance>;
/// Block until the instance exits or `valid_to`
/// passes. Return usage metrics over the run.
async fn watch_until_exit(
&self,
instance: &ProvisionedInstance,
valid_to: AlmanacTick,
) -> anyhow::Result<UsageMetrics>;
/// Terminate a running instance. Idempotent; safe to
/// call if the instance has already exited.
async fn terminate(&self, instance: &ProvisionedInstance) -> anyhow::Result<()>;
}
/// Holds every configured backend and routes grants.
///
/// Order matters: the first backend whose
/// `can_satisfy(grant)` returns `true` wins. Backends
/// are registered in a fixed order (aws, gcp, azure,
/// baremetal); operators express preference by choosing
/// which to enable in the boot config.
#[derive(Clone)]
pub struct Fleet {
backends: Arc<Vec<Arc<dyn Backend>>>,
}
impl Fleet {
pub async fn from_boot_config(cfg: &BackendsBootConfig) -> anyhow::Result<Self> {
let mut backends: Vec<Arc<dyn Backend>> = Vec::new();
if let Some(aws) = &cfg.aws {
backends.push(Arc::new(aws::AwsBackend::new(aws).await?));
}
if let Some(gcp) = &cfg.gcp {
backends.push(Arc::new(gcp::GcpBackend::new(gcp).await?));
}
if let Some(azure) = &cfg.azure {
backends.push(Arc::new(azure::AzureBackend::new(azure).await?));
}
if let Some(baremetal) = &cfg.baremetal {
backends.push(Arc::new(baremetal::BareMetalBackend::new(baremetal).await?));
}
if backends.is_empty() {
anyhow::bail!(
"no backends configured — enable at least one of \
aws / gcp / azure / baremetal in the boot config"
);
}
tracing::info!(
backends = ?backends.iter().map(|b| b.name()).collect::<Vec<_>>(),
"fleet initialised",
);
Ok(Self { backends: Arc::new(backends) })
}
/// Aggregate capabilities across every backend for
/// provider-card publication.
pub async fn capabilities(&self) -> anyhow::Result<Vec<(String, Capabilities)>> {
let mut out = Vec::with_capacity(self.backends.len());
for b in self.backends.iter() {
let caps = b.capabilities().await?;
out.push((b.name().to_string(), caps));
}
Ok(out)
}
pub async fn provision_for_grant(
&self,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<ProvisionedInstance> {
for b in self.backends.iter() {
if b.can_satisfy(grant) {
tracing::info!(
backend = b.name(),
request_id = ?grant.request_id,
"routing grant to backend",
);
return b.provision(grant, envelope).await;
}
}
anyhow::bail!(
"no backend can satisfy grant {:?}",
grant.request_id,
)
}
pub async fn watch_until_exit(
&self,
instance: &ProvisionedInstance,
valid_to: AlmanacTick,
) -> anyhow::Result<UsageMetrics> {
self.backend_for(instance)?
.watch_until_exit(instance, valid_to)
.await
}
pub async fn terminate(
&self,
instance: &ProvisionedInstance,
) -> anyhow::Result<()> {
self.backend_for(instance)?.terminate(instance).await
}
fn backend_for(
&self,
instance: &ProvisionedInstance,
) -> anyhow::Result<Arc<dyn Backend>> {
self.backends
.iter()
.find(|b| b.name() == instance.backend)
.cloned()
.ok_or_else(|| anyhow::anyhow!(
"instance was provisioned by backend {:?} \
which is not registered in this fleet",
instance.backend,
))
}
}
src/backends/aws.rs
audience: ai
AWS EC2 backend. Per-region Ec2Client cache;
per-grant key-pair and security group; cloud-init that
fetches the requester’s image by post-hash and
authorises the per-grant SSH key.
Not tdx_capable: AWS has no consumer TDX offering
as of this prototype’s writing. Operators needing TDX
route grants through GCP (Confidential VMs), Azure
(Confidential Computing v3 SKUs), or bare-metal hosts.
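The cloud-init hand-off described above can be sketched as a user-data builder. Everything concrete in this sketch is an assumption: the mosaik-store:// URL scheme, the /opt/workload paths, the podman runtime, and the ec2-user home are all hypothetical placeholders; only the shape (fetch by post-hash, verify, start, authorise the per-grant key) comes from the text:

```rust
/// Hypothetical sketch of the per-grant cloud-init user-data.
/// URL scheme, paths, runtime, and user are all assumptions.
fn cloud_init_user_data(image_hash_hex: &str, per_grant_pubkey: &str) -> String {
    [
        "#cloud-config".to_string(),
        "runcmd:".to_string(),
        // Fetch the requester's image by post-hash from a
        // (hypothetical) mosaik content store.
        format!("  - curl -fsSL -o /opt/workload/image.oci mosaik-store://{image_hash_hex}"),
        // Verify the download against the committed post-hash.
        format!("  - test \"$(sha256sum /opt/workload/image.oci | cut -d' ' -f1)\" = \"{image_hash_hex}\""),
        // Start the workload (runtime choice is an assumption).
        "  - podman load -i /opt/workload/image.oci && podman run -d workload".to_string(),
        // Authorise the per-grant SSH key for the requester.
        format!("  - echo '{per_grant_pubkey}' >> /home/ec2-user/.ssh/authorized_keys"),
    ]
    .join("\n")
}
```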
//! AWS EC2 backend.
//!
//! All control-plane calls happen inside the TDX guest.
//! The operator's AWS credentials are sealed into the
//! TDX measurement (see config.rs) so a curious host
//! cannot exfiltrate them.
use std::collections::HashMap;
use anyhow::{anyhow, Context};
use async_trait::async_trait;
use aws_sdk_ec2::Client as Ec2Client;
use coalition_compute::{AlmanacTick, ComputeGrant, UsageMetrics};
use crate::backends::{Backend, Capabilities, ProvisionedInstance, RegionLabel};
use crate::config::AwsBootConfig;
use crate::zipnet_io::Envelope;
pub struct AwsBackend {
cfg: AwsBootConfig,
clients: tokio::sync::RwLock<HashMap<String, Ec2Client>>,
}
impl AwsBackend {
pub async fn new(cfg: &AwsBootConfig) -> anyhow::Result<Self> {
Ok(Self {
cfg: cfg.clone(),
clients: tokio::sync::RwLock::new(HashMap::new()),
})
}
async fn client_for(&self, region: &str) -> anyhow::Result<Ec2Client> {
if !self.cfg.regions.iter().any(|r| r == region) {
anyhow::bail!("aws: region {region} not in operator allowlist");
}
{
let g = self.clients.read().await;
if let Some(c) = g.get(region) {
return Ok(c.clone());
}
}
// TODO: real construction via aws_config::defaults(...)
// with operator's static credentials.
anyhow::bail!(
"AwsBackend::client_for is a prototype stub; wire up \
aws_config::defaults(...) with the operator's sealed \
credentials and the declared region"
)
}
fn pick_instance_type(&self, grant: &ComputeGrant<'_>) -> anyhow::Result<String> {
// TODO: real sizing from grant's manifest.
let family = self.cfg.instance_families.first()
.ok_or_else(|| anyhow!("aws: no instance families configured"))?;
let _ = grant;
Ok(format!("{family}.large"))
}
}
#[async_trait]
impl Backend for AwsBackend {
fn name(&self) -> &'static str { "aws" }
async fn capabilities(&self) -> anyhow::Result<Capabilities> {
Ok(Capabilities {
regions: self.cfg.regions.clone(),
// AWS has no consumer TDX offering as of this
// prototype's writing. Operators running TDX-
// required workloads route through GCP, Azure,
// or the bare-metal backend instead.
tdx_capable: false,
max_cpu_millicores: u32::MAX,
max_ram_mib: u32::MAX,
})
}
fn can_satisfy(&self, grant: &ComputeGrant<'_>) -> bool {
// TODO: if grant requires TDX, decline early.
let _ = grant;
true
}
async fn provision(
&self,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<ProvisionedInstance> {
let region = envelope.requested_region().unwrap_or("us-east-1");
let client = self.client_for(region).await?;
let instance_type = self.pick_instance_type(grant)?;
// Real flow:
// create_key_pair — per-grant ed25519 SSH key
// create_security_group — inbound TCP/22 from anywhere
// run_instances — with cloud-init that fetches
// the requester's image by
// post-hash from a mosaik content
// store, verifies, starts, and
// authorises the per-grant key
// wait InstanceState::running
// describe_instances — get public DNS + host key
// return ProvisionedInstance
let _ = (client, instance_type);
Err(anyhow!(
"AwsBackend::provision is a prototype stub; implement \
create_key_pair + create_security_group + \
run_instances + wait_for_running"
))
}
async fn watch_until_exit(
&self,
instance: &ProvisionedInstance,
valid_to: AlmanacTick,
) -> anyhow::Result<UsageMetrics> {
// TODO: poll DescribeInstances; terminate on deadline;
// compute cpu-seconds / ram-mib-seconds / network bytes /
// storage bytes and return.
let _ = (instance, valid_to);
Err(anyhow!(
"AwsBackend::watch_until_exit is a prototype stub"
))
}
async fn terminate(&self, instance: &ProvisionedInstance) -> anyhow::Result<()> {
// TODO: TerminateInstances + DeleteKeyPair + DeleteSecurityGroup
let _ = instance;
Ok(())
}
}
src/backends/gcp.rs
audience: ai
Google Compute Engine backend. Structurally parallel to the AWS backend: per-region client cache, per-grant SSH key injection via instance metadata, cloud-init that fetches the requester’s image by post-hash.
Reports tdx_capable = true when the operator lists
TDX-enabled Confidential-VM machine types in
[gcp.tdx_machine_types] (e.g. "c3-standard-4").
Grants that declare manifest.tdx = Required are
provisioned on those machine types with
confidentialInstanceConfig set; non-TDX grants use
the broader machine_families list.
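The machine-type pick described above (smallest adequate type, restricted to the TDX list when the grant requires it) can be sketched in plain Rust. MachineType and the sizing numbers below are illustrative stand-ins; the real catalogue would come from the operator's boot config and the grant's manifest:

```rust
/// Illustrative stand-in for a GCE machine-type catalogue entry.
struct MachineType {
    name: &'static str,
    cpu_millicores: u32,
    ram_mib: u32,
    tdx: bool,
}

/// Smallest machine type satisfying the grant's CPU/RAM shape.
/// TDX-required grants are restricted to TDX-enabled types.
fn pick_machine_type(
    catalogue: &[MachineType],
    need_cpu: u32,
    need_ram: u32,
    tdx_required: bool,
) -> Option<&'static str> {
    catalogue
        .iter()
        .filter(|m| !tdx_required || m.tdx)
        .filter(|m| m.cpu_millicores >= need_cpu && m.ram_mib >= need_ram)
        // Smallest adequate type: order by capacity.
        .min_by_key(|m| (m.cpu_millicores, m.ram_mib))
        .map(|m| m.name)
}
```

Restricting first and sizing second keeps the TDX decision at routing time, matching the admission-time filtering the fleet router performs.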
//! Google Compute Engine backend.
//!
//! Structurally identical to the AWS backend: per-region
//! client cache, per-grant SSH key injection via
//! instance metadata, cloud-init that fetches the
//! requester's image by post-hash.
//!
//! GCP's Confidential VM offering (Intel TDX) is the
//! interesting capability here: operators can set
//! `[gcp.tdx_machine_types]` in their boot config to
//! declare which GCE machine types are TDX-enabled, and
//! the backend will report `tdx_capable = true`.
use anyhow::{anyhow, Context};
use async_trait::async_trait;
use coalition_compute::{AlmanacTick, ComputeGrant, UsageMetrics};
use crate::backends::{Backend, Capabilities, ProvisionedInstance};
use crate::config::GcpBootConfig;
use crate::zipnet_io::Envelope;
pub struct GcpBackend {
cfg: GcpBootConfig,
// TODO: cache google_cloud_* Compute clients per region.
}
impl GcpBackend {
pub async fn new(cfg: &GcpBootConfig) -> anyhow::Result<Self> {
Ok(Self { cfg: cfg.clone() })
}
}
#[async_trait]
impl Backend for GcpBackend {
fn name(&self) -> &'static str { "gcp" }
async fn capabilities(&self) -> anyhow::Result<Capabilities> {
Ok(Capabilities {
regions: self.cfg.regions.clone(),
tdx_capable: !self.cfg.tdx_machine_types.is_empty(),
max_cpu_millicores: u32::MAX,
max_ram_mib: u32::MAX,
})
}
fn can_satisfy(&self, grant: &ComputeGrant<'_>) -> bool {
// TODO: consult manifest: if tdx_required but
// tdx_machine_types is empty, return false.
let _ = grant;
true
}
async fn provision(
&self,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<ProvisionedInstance> {
// Real flow:
// 1. Resolve the target region (envelope hint or fallback).
// 2. Pick a machine_type: the smallest one in
// self.cfg.machine_families satisfying the
// grant's CPU/RAM. If the grant requires TDX,
// restrict to self.cfg.tdx_machine_types.
// 3. Generate a per-grant ed25519 SSH key.
// 4. compute.instances.insert({
// name: per-grant,
// machineType, confidentialInstanceConfig (if TDX),
// networkInterfaces: [ephemeral public IP],
// metadata: { ssh-keys, startup-script },
// shieldedInstanceConfig,
// })
// 5. Poll until RUNNING.
// 6. Return ProvisionedInstance.
let _ = (grant, envelope);
Err(anyhow!(
"GcpBackend::provision is a prototype stub; implement via \
the google-cloud-googleapis Compute Engine client"
))
}
async fn watch_until_exit(
&self,
instance: &ProvisionedInstance,
valid_to: AlmanacTick,
) -> anyhow::Result<UsageMetrics> {
let _ = (instance, valid_to);
Err(anyhow!(
"GcpBackend::watch_until_exit is a prototype stub"
))
}
async fn terminate(&self, instance: &ProvisionedInstance) -> anyhow::Result<()> {
let _ = instance;
Ok(())
}
}
src/backends/azure.rs
audience: ai
Azure Virtual Machines backend via ARM. Operator
supplies a service principal
(tenant_id + client_id + client_secret) scoped to a
specific resource group the bridge is permitted to
mutate.
Reports tdx_capable = true when
[azure.tdx_vm_sizes] is non-empty. Azure’s
Confidential Computing v3 family
(Standard_DC*ds_v3, Standard_EC*ds_v3) exposes
Intel TDX; grants with manifest.tdx = Required
provision into those SKUs with
security_profile.security_type = ConfidentialVM.
Other grants use the broader vm_sizes list.
//! Azure Virtual Machines backend.
//!
//! Uses the Azure Resource Manager SDK via
//! `azure_mgmt_compute`. Operator supplies a service
//! principal (tenant id + client id + client secret) or
//! a managed identity, scoped to a resource group the
//! bridge is permitted to mutate.
//!
//! Azure's Confidential Computing v3 instance family
//! (`Standard_DCdsv3_*`, `Standard_ECdsv3_*`) exposes
//! Intel TDX. Operators list TDX-enabled SKUs in
//! `[azure.tdx_vm_sizes]` to mark this backend as
//! `tdx_capable`.
use anyhow::{anyhow, Context};
use async_trait::async_trait;
use coalition_compute::{AlmanacTick, ComputeGrant, UsageMetrics};
use crate::backends::{Backend, Capabilities, ProvisionedInstance};
use crate::config::AzureBootConfig;
use crate::zipnet_io::Envelope;
pub struct AzureBackend {
cfg: AzureBootConfig,
// TODO: cache azure_mgmt_compute Client, resolved per
// subscription + resource group.
}
impl AzureBackend {
pub async fn new(cfg: &AzureBootConfig) -> anyhow::Result<Self> {
Ok(Self { cfg: cfg.clone() })
}
}
#[async_trait]
impl Backend for AzureBackend {
fn name(&self) -> &'static str { "azure" }
async fn capabilities(&self) -> anyhow::Result<Capabilities> {
Ok(Capabilities {
regions: self.cfg.regions.clone(),
tdx_capable: !self.cfg.tdx_vm_sizes.is_empty(),
max_cpu_millicores: u32::MAX,
max_ram_mib: u32::MAX,
})
}
fn can_satisfy(&self, grant: &ComputeGrant<'_>) -> bool {
let _ = grant;
true
}
async fn provision(
&self,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<ProvisionedInstance> {
// Real flow (ARM API):
// 1. Resolve target region.
// 2. Pick VM SKU. TDX grants → tdx_vm_sizes.
// 3. Create a per-grant ssh public key resource.
// 4. virtual_machines.create_or_update({
// location, hardware_profile: {vm_size},
// storage_profile: {image_reference, os_disk},
// os_profile: {admin_username, linux_configuration.ssh.public_keys},
// network_profile,
// security_profile: {security_type: ConfidentialVM, ...} if TDX,
// })
// 5. Poll provisioning_state == Succeeded.
// 6. Read the allocated public IP, return ProvisionedInstance.
let _ = (grant, envelope);
Err(anyhow!(
"AzureBackend::provision is a prototype stub; implement via \
azure_mgmt_compute::virtual_machines::Client"
))
}
async fn watch_until_exit(
&self,
instance: &ProvisionedInstance,
valid_to: AlmanacTick,
) -> anyhow::Result<UsageMetrics> {
let _ = (instance, valid_to);
Err(anyhow!(
"AzureBackend::watch_until_exit is a prototype stub"
))
}
async fn terminate(&self, instance: &ProvisionedInstance) -> anyhow::Result<()> {
let _ = instance;
Ok(())
}
}
src/backends/baremetal.rs
audience: ai
The bare-metal backend. Operator supplies one or more machines to which the bridge has SSH root. Unlike the cloud backends, nothing is provisioned externally — the hosts are already running. “Provision” here means:
- Pick an idle machine satisfying the grant's shape (CPU, RAM, TDX requirement, region label).
- Open a russh root session to it.
- Drop the requester's image onto disk (fetched by the control session from a mosaik content store and verified against the grant's post-hash).
- Start a systemd-managed workload unit — or, on a bare-TDX host, launch a nested TDX guest via virsh + qemu-tdx.
- Inject the per-grant SSH public key into the workload's authorized_keys.
- Open an inbound port on the host's firewall for the workload.
- Return the host's public name + workload port + per-grant key as the uniform ProvisionedInstance.
The control-plane SSH connection is held open by the backend for the duration of the grant; on termination or deadline, the backend tears the workload down and closes the port.
Bare VM vs bare-TDX host
Each BareMetalMachine entry declares tdx_capable: bool:
- tdx_capable = false (bare VM). A regular Linux host. Workloads run as systemd container units under podman. The operator is trusted not to snoop on the workload; the requester treats the VM as a semi-trusted host.
- tdx_capable = true (bare-TDX host). A TDX-capable physical (or virtualised) host. Workloads run as nested TDX guests; the nested guest's MR_TD flows back into the SSH receipt as evidence. The requester verifies that MR_TD against their image's declared post-hash before trusting the workload's outputs — closing the confidentiality gap bare VMs leave open.
Both shapes share the Backend trait so the fleet
router does not distinguish. Grants that require TDX
are routed only to tdx_capable machines at
admission time.
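The admission-time pick can be sketched with plain Rust. Machine here is a simplified stand-in for BareMetalMachine, with busy-tracking reduced to a boolean where the real scheduler would track CPU/RAM residuals:

```rust
/// Simplified stand-in for `BareMetalMachine` plus its busy state.
struct Machine {
    host: &'static str,
    region: &'static str,
    tdx_capable: bool,
    busy: bool,
}

/// Admission-time pick: region label and TDX requirement filter
/// the candidates; the first idle survivor wins.
fn pick<'a>(
    machines: &'a [Machine],
    want_region: Option<&str>,
    tdx_required: bool,
) -> Option<&'a str> {
    machines
        .iter()
        .filter(|m| want_region.map_or(true, |r| m.region == r))
        .filter(|m| !tdx_required || m.tdx_capable)
        .find(|m| !m.busy)
        .map(|m| m.host)
}
```

Filtering on tdx_capable before the idle check is what guarantees TDX-required grants are never placed on a bare VM, even under contention.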
//! Bare-metal backend.
//!
//! Inputs: one or more machines to which the operator
//! has granted the bridge SSH root. Machines are either
//! **bare VMs** (a regular Linux host; workloads run in
//! systemd-managed containers) or **bare-TDX hosts** (a
//! TDX-capable physical or virtualised host; workloads
//! run as nested TDX guests whose MR_TD can flow back to
//! the requester as part of the receipt).
//!
//! Unlike the cloud backends, nothing is provisioned
//! externally: the hosts are already running. "Provision"
//! here means:
//!
//! 1. Pick an idle host satisfying the grant's shape.
//! 2. Open an SSH root session to it.
//! 3. Drop the requester's image onto disk (fetched by
//! the control session from a mosaik content store
//! and verified against the grant's post-hash).
//! 4. Start a systemd-managed workload unit (or launch a
//! nested TDX guest for bare-TDX hosts).
//! 5. Inject the per-grant SSH public key into the
//! workload's `authorized_keys`.
//! 6. Open an inbound port on the host's firewall so the
//! requester can SSH into the workload directly.
//! 7. Return the host's public name + port + per-grant
//! key as the `ProvisionedInstance`.
//!
//! The control-plane SSH connection is held open by the
//! backend for the duration of the grant; on termination
//! or deadline, the backend tears the workload down and
//! closes the port.
use std::collections::HashMap;
use std::sync::Arc;
use anyhow::{anyhow, Context};
use async_trait::async_trait;
use coalition_compute::{AlmanacTick, ComputeGrant, UsageMetrics};
use tokio::sync::Mutex;
use crate::backends::{Backend, Capabilities, ProvisionedInstance, RegionLabel};
use crate::config::{BareMetalBootConfig, BareMetalMachine};
use crate::zipnet_io::Envelope;
pub struct BareMetalBackend {
cfg: BareMetalBootConfig,
/// Runtime scheduling state: which machines are busy
/// with which grants. Simplest possible scheduler;
/// real impl would track CPU/RAM residuals.
busy: Arc<Mutex<HashMap<String, Vec<BusyGrant>>>>,
}
struct BusyGrant {
request_id: coalition_compute::RequestId,
cpu_millicores_reserved: u32,
ram_mib_reserved: u32,
}
impl BareMetalBackend {
pub async fn new(cfg: &BareMetalBootConfig) -> anyhow::Result<Self> {
// TODO: open SSH control sessions to each machine
// at boot, verify reachability, refuse to start if
// any required host is unreachable (unless the
// operator set `lenient_boot = true`).
Ok(Self {
cfg: cfg.clone(),
busy: Arc::new(Mutex::new(HashMap::new())),
})
}
/// Find an idle machine that fits the grant's shape.
async fn pick_machine(
&self,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<&BareMetalMachine> {
let preferred_region = envelope.requested_region();
let busy = self.busy.lock().await;
for m in &self.cfg.machines {
if let Some(want) = preferred_region {
if m.region != want { continue; }
}
// TODO: check grant's manifest.tdx_required
// vs m.tdx_capable
// TODO: check grant's CPU/RAM vs residuals
let used = busy.get(&m.host).map(|v| v.len()).unwrap_or(0);
let _ = used;
let _ = grant;
return Ok(m);
}
anyhow::bail!(
"baremetal: no machine available in region {:?}",
preferred_region,
)
}
async fn install_and_start(
&self,
m: &BareMetalMachine,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<InstalledWorkload> {
// Open a single russh session to the host as root.
// let ssh = russh::client::connect(...)
// ssh.authenticate_publickey(m.user, load_key(&m.ssh_key_path)?)
// ssh.exec("mkdir -p /var/lib/compute-bridge/{grant_id}")
// ssh.scp("/var/lib/compute-bridge/{grant_id}/image.oci", image_bytes)
// ssh.exec("sha256sum image.oci && verify against grant.image_hash")
// ssh.exec("systemd-run --unit=grant-{id} --property=... podman run -d ...")
// ssh.exec("firewall-cmd --add-port={workload_port}/tcp --zone=public")
// ssh.exec("ssh-inject-authorized-key grant-{id} {per_grant_pubkey}")
// Return host_port (public SSH into the workload).
//
// For a bare-TDX host, substitute `systemd-run … podman run`
// with launching a nested TDX guest via qemu + libvirt; the
// guest's MR_TD is read out of its attestation report and
// attached to the receipt as `ssh_host_key` evidence.
let _ = (m, grant, envelope);
Err(anyhow!(
"BareMetalBackend::install_and_start is a prototype stub; \
implement via russh + systemd-run (bare VM) or \
russh + virsh + qemu-tdx (bare-TDX host)"
))
}
}
struct InstalledWorkload {
workload_port: u16,
host_key: Vec<u8>,
key_public: Vec<u8>,
key_private: Vec<u8>,
}
#[async_trait]
impl Backend for BareMetalBackend {
fn name(&self) -> &'static str { "baremetal" }
async fn capabilities(&self) -> anyhow::Result<Capabilities> {
let regions: Vec<RegionLabel> = self.cfg.machines.iter()
.map(|m| m.region.clone())
.collect::<std::collections::BTreeSet<_>>()
.into_iter().collect();
let tdx_capable = self.cfg.machines.iter().any(|m| m.tdx_capable);
let sum_cpu: u32 = self.cfg.machines.iter().map(|m| m.cpu_millicores).sum();
let sum_ram: u32 = self.cfg.machines.iter().map(|m| m.ram_mib).sum();
Ok(Capabilities {
regions,
tdx_capable,
max_cpu_millicores: sum_cpu,
max_ram_mib: sum_ram,
})
}
fn can_satisfy(&self, grant: &ComputeGrant<'_>) -> bool {
// TODO: inspect the grant's manifest for TDX
// requirement; if required and no tdx_capable
// host exists in an acceptable region, return
// false.
let _ = grant;
!self.cfg.machines.is_empty()
}
async fn provision(
&self,
grant: &ComputeGrant<'_>,
envelope: &Envelope,
) -> anyhow::Result<ProvisionedInstance> {
let m = self.pick_machine(grant, envelope).await?;
let workload = self.install_and_start(m, grant, envelope).await?;
// Mark the machine busy so the scheduler won't
// over-commit it.
{
let mut busy = self.busy.lock().await;
busy.entry(m.host.clone()).or_default().push(BusyGrant {
request_id: grant.request_id,
cpu_millicores_reserved: 0, // TODO: from manifest
ram_mib_reserved: 0,
});
}
Ok(ProvisionedInstance {
backend: self.name(),
instance_id: format!("{}:{}", m.host, workload.workload_port),
region: m.region.clone(),
public_host: m.host.clone(),
ssh_port: workload.workload_port,
user: "compute".into(),
key_private: workload.key_private,
key_public: workload.key_public,
host_key: workload.host_key,
started_at: AlmanacTick::default(),
provider_id: coalition_compute::ProviderId::default(),
})
}
async fn watch_until_exit(
&self,
instance: &ProvisionedInstance,
valid_to: AlmanacTick,
) -> anyhow::Result<UsageMetrics> {
// Poll the workload unit's `systemctl is-active`
// over the control-plane SSH session; or wait
// until `valid_to`. Compute usage metrics from
// cgroup counters collected at teardown.
let _ = (instance, valid_to);
Err(anyhow!(
"BareMetalBackend::watch_until_exit is a prototype stub"
))
}
async fn terminate(&self, instance: &ProvisionedInstance) -> anyhow::Result<()> {
// systemctl stop grant-{id}; firewall-cmd --remove-port=...;
// rm -rf /var/lib/compute-bridge/{grant_id}
// Prototype scheduler: BusyGrant carries no per-workload
// correlation key, so clear every reservation on the host.
// A real impl stores the workload port (or request_id) in
// BusyGrant and retains the other entries.
let mut busy = self.busy.lock().await;
busy.remove(&instance.public_host);
Ok(())
}
}
src/zipnet_io.rs
audience: ai
Anonymised I/O. The coalition knows requesters by
ClientId; the provider sees only opaque zipnet
envelopes. The unseal committee t-of-n decrypts the
envelope and delivers cleartext plus a rotating
peer_id to the addressed provider.
The Envelope struct deliberately does not carry a
coalition ClientId. What the provider needs to do
its job is exactly: the peer identity (for reply
addressing), the peer’s x25519 public key (for
receipt sealing), the image post-hash (for grant
cross-check), an optional region hint, and a content-
addressed pointer to the built image. Nothing more.
//! Zipnet I/O: anonymised submission and reply.
//!
//! The coalition layer knows requesters by their
//! `ClientId`. The provider only sees an opaque
//! `peer_id` inside a zipnet envelope — the zipnet
//! unseal committee t-of-n decrypts the envelope and
//! the provider receives a cleartext payload alongside
//! a rotating peer identity.
//!
//! This keeps the provider a content-based participant
//! in the coalition: it cannot enumerate "which coalition
//! members are running what workloads" because it never
//! learns their coalition identities.
use std::sync::Arc;
use anyhow::Context;
use mosaik::Network;
use coalition_compute::RequestId;
use crate::config::ZipnetBootConfig;
/// Opaque zipnet channel used for both inbound request
/// resolution and outbound reply (the encrypted SSH
/// receipt).
pub struct ZipnetChannel {
// TODO: holds a zipnet::UnsealReader and a
// zipnet::ReplySender. Prototype leaves the inner
// types opaque.
_network: Arc<Network>,
}
impl ZipnetChannel {
pub async fn open(
network: &Arc<Network>,
cfg: &ZipnetBootConfig,
) -> anyhow::Result<Self> {
// TODO: use zipnet::Unseal::<RequestEnvelope>::reader(...)
// against cfg.network_id and cfg.unseal_config_hex.
let _ = cfg;
Ok(Self { _network: network.clone() })
}
/// Resolve a grant's `bearer_pointer` to the
/// cleartext request envelope.
///
/// The `bearer_pointer` is a blake3 of the sealed
/// envelope published by the requester via zipnet;
/// the unseal committee, on majority-honest
/// assumption, makes the cleartext available to the
/// addressed provider.
pub async fn resolve(
&self,
bearer_pointer: &coalition_compute::UniqueId,
) -> anyhow::Result<Envelope> {
let _ = bearer_pointer;
anyhow::bail!(
"ZipnetChannel::resolve is a prototype stub; wire up \
the zipnet unseal reader to pull the sealed envelope, \
verify the unseal quorum, and return the cleartext"
)
}
/// Publish an encrypted reply (the SSH receipt)
/// addressed to the zipnet peer the request came
/// from. The receipt is already sealed to the
/// peer's x25519 public key by `receipt::seal_to`;
/// zipnet only carries it.
pub async fn reply(
&self,
request_id: &RequestId,
sealed_receipt: Vec<u8>,
) -> anyhow::Result<()> {
let _ = (request_id, sealed_receipt);
anyhow::bail!(
"ZipnetChannel::reply is a prototype stub; wire up \
zipnet::Reply::<SealedReceipt>::publish"
)
}
}
/// Decrypted request envelope as seen by the provider.
///
/// This is deliberately thin: the requester's coalition
/// `ClientId` is **not** in here. Only fields the
/// provider needs to do its job end to end.
pub struct Envelope {
/// Ephemeral peer identity inside zipnet. Rotates
/// per request by the requester.
peer_id: [u8; 32],
/// x25519 public key the provider will seal the
/// SSH receipt to. Published by the requester
/// alongside the request.
peer_x25519: [u8; 32],
/// The image post-hash the requester wants to run.
/// Cross-checked against the grant's `image_hash`
/// in `Provider::handle_grant`; mismatch aborts.
image_hash: coalition_compute::UniqueId,
/// Optional region hint inside the zipnet envelope.
/// The grant also carries a region; on disagreement
/// the provider uses the grant's region.
requested_region: Option<String>,
/// Content-addressed pointer to the requester's
/// built image. The cloud-init script pulls from
/// here.
image_pointer: Vec<u8>,
}
impl Envelope {
pub fn peer_id(&self) -> &[u8; 32] { &self.peer_id }
pub fn peer_x25519_public(&self) -> &[u8; 32] { &self.peer_x25519 }
pub fn image_hash(&self) -> coalition_compute::UniqueId {
// Cloned rather than moved out of `&self`: `UniqueId`
// is not assumed `Copy` (cf. `provider_id`).
self.image_hash.clone()
}
pub fn requested_region(&self) -> Option<&str> {
self.requested_region.as_deref()
}
pub fn image_pointer(&self) -> &[u8] { &self.image_pointer }
}
src/receipt.rs
audience: ai
The encrypted SSH access receipt. Shape:
- instance host + port,
- user,
- per-grant SSH private key (PEM),
- instance host key (for known_hosts),
- grant id + valid_to tick.
Sealing uses a standard X25519-ChaCha20-Poly1305 construction: generate an ephemeral x25519 keypair, derive a shared secret with the requester’s static x25519 public key (published in the zipnet envelope), encrypt the JSON-serialised receipt with ChaCha20-Poly1305 under the shared secret, prepend the ephemeral public key. The zero-nonce is safe because the shared secret is fresh per receipt.
The requester — the only holder of the matching x25519 private key — is the only party that can decrypt. A curious provider, host, or intermediate zipnet peer cannot recover the private SSH key, the instance host, or even the grant id without breaking the construction.
//! Encrypted SSH access receipts.
//!
//! Shape of a receipt:
//!
//! ```text
//! SshAccessReceipt {
//! instance_host: String, // public DNS or IP
//! instance_port: u16, // usually 22
//! user: String, // usually "ec2-user" or similar
//! ssh_key_private: Vec<u8>, // PEM of the per-grant key
//! ssh_host_key: Vec<u8>, // host public key for known_hosts
//! grant_id: UniqueId,
//! valid_to: AlmanacTick,
//! }
//! ```
//!
//! The receipt is serialised and sealed to the
//! requester's x25519 public key (published inside the
//! zipnet envelope). Sealing uses the standard
//! X25519-ChaCha20Poly1305 construction:
//!
//! - generate an ephemeral x25519 keypair,
//! - derive a shared secret with the requester's
//! static x25519 public key,
//! - use the shared secret as the symmetric key,
//! - ChaCha20-Poly1305 over the serialised receipt
//! with a zero nonce (safe because each shared
//! secret is fresh).
//!
//! The sealed blob is published by the provider via
//! `ZipnetChannel::reply`. The requester — the only
//! holder of the matching x25519 private key — is the
//! only party that can decrypt.
use anyhow::Context;
use chacha20poly1305::{
aead::{Aead, KeyInit},
ChaCha20Poly1305, Nonce,
};
use serde::{Deserialize, Serialize};
use x25519_dalek::{EphemeralSecret, PublicKey};
use coalition_compute::{AlmanacTick, ComputeGrant, UniqueId};
use crate::backends::ProvisionedInstance;
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct SshAccessReceipt {
pub instance_host: String,
pub instance_port: u16,
pub user: String,
pub ssh_key_private: Vec<u8>,
pub ssh_host_key: Vec<u8>,
pub grant_id: UniqueId,
pub valid_to: AlmanacTick,
}
impl SshAccessReceipt {
pub fn build(
instance: &ProvisionedInstance,
grant: &ComputeGrant<'_>,
) -> anyhow::Result<Self> {
Ok(Self {
instance_host: instance.public_host.clone(),
instance_port: instance.ssh_port,
user: instance.user.clone(),
ssh_key_private: instance.key_private.clone(),
ssh_host_key: instance.host_key.clone(),
grant_id: grant.request_id,
valid_to: grant.valid_to,
})
}
/// Seal this receipt to the requester's x25519
/// public key. Output is an opaque byte string
/// suitable for publishing on zipnet.
///
/// Format:
///
/// ```text
/// [0..32] ephemeral_x25519_public
/// [32..] ChaCha20-Poly1305(shared_secret, receipt_bytes)
/// ```
pub fn seal_to(&self, peer_x25519_public: &[u8; 32]) -> anyhow::Result<Vec<u8>> {
let ephemeral = EphemeralSecret::random_from_rng(rand::rngs::OsRng);
let ephemeral_public = PublicKey::from(&ephemeral);
let peer = PublicKey::from(*peer_x25519_public);
let shared = ephemeral.diffie_hellman(&peer);
let cipher = ChaCha20Poly1305::new_from_slice(shared.as_bytes())
.context("constructing ChaCha20-Poly1305 from x25519 shared secret")?;
let receipt_bytes = serde_json::to_vec(self)
.context("serialising receipt")?;
// Zero nonce is safe because shared secret is fresh.
let nonce = [0u8; 12];
let ct = cipher.encrypt(Nonce::from_slice(&nonce), receipt_bytes.as_slice())
.map_err(|e| anyhow::anyhow!("encrypt failed: {e}"))?;
let mut out = Vec::with_capacity(32 + ct.len());
out.extend_from_slice(ephemeral_public.as_bytes());
out.extend_from_slice(&ct);
Ok(out)
}
/// Inverse of `seal_to`. Only the requester, the
/// holder of the x25519 static private key, can
/// call this successfully. Provided as a test
/// helper; the provider never calls it.
#[cfg(test)]
pub fn unseal_with(
sealed: &[u8],
static_private: &x25519_dalek::StaticSecret,
) -> anyhow::Result<Self> {
anyhow::ensure!(
sealed.len() > 32,
"sealed blob shorter than x25519 public key",
);
let mut ephemeral_public = [0u8; 32];
ephemeral_public.copy_from_slice(&sealed[..32]);
let shared = static_private.diffie_hellman(
&PublicKey::from(ephemeral_public),
);
let cipher = ChaCha20Poly1305::new_from_slice(shared.as_bytes())?;
let nonce = [0u8; 12];
let pt = cipher.decrypt(Nonce::from_slice(&nonce), &sealed[32..])
.map_err(|e| anyhow::anyhow!("decrypt failed: {e}"))?;
let receipt: SshAccessReceipt = serde_json::from_slice(&pt)?;
Ok(receipt)
}
}
Glossary
audience: all
Domain terms as used in this book. Where a term is inherited from mosaik, zipnet, or builder, the entry points at the authoritative source rather than re-deriving.
Almanac. A basic service: a shared clock / sequence beacon citizens may use for cross-citizen timing. See basic services.
Atlas. A basic service: a directory of citizens with operator-supplied metadata (endpoint hints, ticket-issuance roots, MR_TD pins). See basic services.
Basic service. A coalition-scoped module whose shape is specified once in the blueprint so every instance means the same thing across coalitions. Four are specified: Atlas, Almanac, Chronicle, Compute. A coalition may ship zero, one, two, three, or all four.
Bridge organism. The name the builder book uses for a speculative cross-lattice organism. In this blueprint, promoted to a first-class category and generalised to confluence — a cross-citizen organism. See topology-intro.
Chronicle. A basic service: a tamper-evident audit log of coalition actions (publications, rotations, retirements). See basic services.
Compute. A basic service: a scheduler and registry
through which coalition members submit
ComputeRequests for workloads identified by image
pre-hash (crate source + min CPU / RAM / TDX) and
receive ComputeGrants matching them to registered
providers. The module commits the match; the actual
compute runs on a separate provider. See
basic services.
Citizen. A lattice or standalone organism referenced
by a coalition’s CoalitionConfig. Citizen here means
“referenced unit of the coalition”, not “voting member”;
coalitions do not vote. Used interchangeably with unit
when the mechanical framing is clearer.
CitizenId. The 20-byte prefix of a citizen’s blake3
stable-id fingerprint, used as a compact key in cross-
citizen commits. The full 32-byte fingerprint lives in
Atlas entries and in evidence pointers. See
composition — Two keys.
Coalition. A fingerprint-addressed composition of
zero or more lattices, zero or more standalone organisms,
and zero or more confluences, possibly shipping up to
four coalition-scoped modules (Atlas, Almanac, Chronicle,
Compute) and an optional ticket-issuance root. Identified
by a CoalitionConfig. Operator-scoped, not a consensus
unit, not an ACL root. A voluntary grouping that offers
services to its citizens. See
topology-intro.
CoalitionConfig. The parent struct a coalition
operator publishes and an integrator compiles in. Contains
the coalition’s name, the ordered LatticeConfigs it
references, the ordered OrganismRefs it references, the
ordered ConfluenceConfigs it references, and an optional
TicketIssuerConfig. See
topology-intro — Coalition identity.
COALITION_ROOT. The derived fingerprint used as the
root for the identity derivation of each coalition-scoped
module (Atlas, Almanac, Chronicle, Compute) and the
optional ticket issuer. Commissioned confluences derive
independently and do NOT fold COALITION_ROOT. See
topology-intro — Citizenship-by-reference.
Confluence. A mosaik-native organism whose Config
fingerprint folds two or more citizen fingerprints
(lattices, standalone organisms, or a mix) and whose
public surface spans the referenced citizens. Confluence
identity is independent of any coalition: one confluence
may be referenced by many coalitions. See
confluences.
ConfluenceConfig. The parent struct for a
confluence’s Config. Folds the confluence’s role name,
ordered spanned lattices, ordered spanned standalone
organisms, content parameters, and ACL. Does not fold a
coalition root. See
confluences — the Config fingerprint.
Content + intent addressing. The discipline every
consensus-critical id in every organism, lattice,
confluence, or coalition obeys:
id = blake3(intent ‖ content ‖ acl). Inherited verbatim
from zipnet and builder.
Content hash. The portion of a citizen’s fingerprint that folds its ACL, MR_TDs, and state-affecting parameters. Bumps on operational rotations; distinct from the stable id. See topology-intro — Citizen reference shape.
Deployment. A single running instance of an organism,
confluence, lattice, or coalition. Identified by the
relevant Config fingerprint.
Evidence pointer. A reference from a confluence’s commit back to the citizen commit it depends on. Resolved on replay to validate the confluence’s state-machine transition.
Fan-in confluence. A confluence aggregating facts from multiple origin citizens (e.g. a shared-ledger confluence, a cross-feed correlator).
Fan-out confluence. A confluence distributing a fact to multiple target citizens (e.g. an intent router).
Fingerprint. A synonym for the content + intent
addressed id of a Config (coalition, lattice, organism,
or confluence). Mismatched fingerprints are the ladder’s
debuggable failure mode at every rung.
Fixed shapes, open catalogs. The blueprint discipline
of specifying shapes (CoalitionConfig, ConfluenceConfig,
the four basic services, the ticket-issuance shape) once
and leaving the catalogs (which confluences, which organism
categories, which lattice classes, whether a coalition ships
any modules) open. See
introduction — Fixed shapes, open catalogs.
Heavy-coalition. The speculative (and deferred) class of coalitions carrying enforceable policies over citizens — laws, taxation, judiciary. Explicitly out of scope; see introduction — Fixed shapes, open catalogs.
Integrator. External developer consuming a coalition,
citizen, or confluence. Never client (ambiguous with
zipnet’s ClientId).
Intent-router pattern. A confluence that reads
cleartext from spanned lattices’ unseal::UnsealedPools
and commits a routed-intents collection per target
lattice. See
confluences — intent-router pattern.
Lattice. One fully-specified mid-rung composition of
organisms. Builder specifies the first
class of lattice: one end-to-end block-building
deployment for one EVM chain, identified by a
LatticeConfig fingerprint. Nothing in this blueprint
assumes a lattice is a block-building lattice; other
classes may follow their own proposals. Coalitions
reference lattices; they do not own them.
LatticeConfig. Builder’s parent struct for a block-
building lattice. Unchanged in this blueprint. See
builder glossary.
Light-coalition. The default stance: a coalition offers basic services to its citizens, does not gate their ACLs, and does not override their operators. Implicit in the word coalition; rarely restated.
Module. A coalition-scoped component whose identity
derives under COALITION_ROOT.derive("module").derive(name).
The four basic services (Atlas, Almanac, Chronicle,
Compute) are modules. Commissioned confluences are NOT
modules — they derive independently.
MR_TD. Intel TDX measurement register binding a boot
image to the hardware root of trust. Pinned per TDX-gated
confluence or organism in that component’s Config.
Narrow public surface. The discipline of exposing one or two primitives per organism (or confluence) on the shared universe. Inherited from the zipnet design intro.
Operator. Team running a lattice, standalone organism, confluence, module, or coalition (or any combination). See audiences.
Organism. A mosaik-native service with a narrow public surface. The six builder organisms (zipnet, unseal, offer, atelier, relay, tally) inside each block-building lattice, every confluence, every module, and every standalone organism a coalition references are organisms.
OrganismRef. The coalition’s reference type for a
standalone organism: a role name plus the organism’s
stable id and optional content hash. The concrete
Config type lives in the organism’s own crate and is
opaque to the coalition meta-crate. See
topology-intro — Coalition identity.
Retirement marker. A record a confluence, module, or coalition commits to its own stream before its committee shuts down, declaring effective retirement time and an optional replacement pointer. See basic services — Retirement markers.
Shared-ledger pattern. A confluence that aggregates
per-lattice tally::Refunds into a cross-origin-citizen
attribution feed. See
confluences — shared-ledger pattern.
Stable id. The (instance_name, network_id) blake3
hash a citizen publishes as its durable identifier.
Changes only on retirement, not on operational rotations.
See
topology-intro — Citizen reference shape.
Standalone organism. An organism living directly on
the universe rather than inside a lattice. Oracles,
attestation fabrics, identity substrates,
data-availability shards, analytics pipelines, and the
like — all referenced by a coalition via OrganismRef.
StateMachine. Mosaik’s term for the deterministic
command processor inside a Group<M>. Every confluence
and every organism has its own state machine. See
mosaik groups.
Stall policy. Per-confluence rule for behaviour when one spanned citizen is not producing inputs for the current clock event. Either commit with partial evidence (preserving liveness) or stall the event (preserving quality). Documented per confluence.
TDX. Intel Trust Domain Extensions. The TEE technology mosaik ships first-class support for. Used by confluences and sensitive standalone organisms. See mosaik TDX.
TicketIssuer. An optional coalition-scoped mosaik
ticket issuer. Non-authoritative: no component is required
to accept tickets from this issuer; components opt in by
including the issuance root in their own
TicketValidator composition. See
ticket issuance.
Unit. Informal cover term for anything a coalition composes: a lattice, a standalone organism, or — when a category exists — any other named mid-rung composition. Used interchangeably with citizen when the mechanical framing is clearer than the civic one.
Universe. The shared mosaik NetworkId
(builder::UNIVERSE = unique_id!("mosaik.universe"))
hosting every coalition, every lattice, every organism,
and every confluence.