Composition: citizens and confluences in a coalition
audience: contributors
Architecture maps confluences and citizens onto one example coalition. Confluences specifies the confluence organism. Basic services specifies Atlas, Almanac, and Chronicle. This chapter shows the wiring: which public surface each confluence subscribes to on each spanned citizen, how the subscription graph grows as confluences and modules are added, and what fails where when an upstream stalls.
The wiring is deliberately weak. No cross-Group atomicity, no shared state across citizens, no global scheduler. Citizens and confluences react to each other’s public commits via mosaik’s when() DSL and Collection / Stream subscriptions. This is the composition model at every rung.
Two keys: citizen id and clock event
Every commit in every block-building lattice is keyed by slot (see builder composition). Every commit in a standalone organism is keyed by that organism’s own clock event — a slot, a tick, a round, a sequence number. A coalition adds the citizen id as an outer key on every cross-citizen fact.
The citizen_id used as a key in cross-citizen commits is the first 20 bytes of the citizen’s blake3 stable-id fingerprint. The full 32-byte value lives in Atlas cards and in evidence pointers. The 20-byte truncation gives a compact key that matches the mental model readers bring from other ecosystems; collision resistance remains more than sufficient at this rung.
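As a sketch of the truncation, assuming a plain byte-array representation; the type aliases and function name here are illustrative, not the crate’s real API:

```rust
/// Hypothetical sketch of deriving the 20-byte citizen_id from a 32-byte
/// blake3 stable-id fingerprint by truncation. Names are illustrative.
pub type Fingerprint = [u8; 32];
pub type CitizenId = [u8; 20];

/// The compact cross-citizen key is the first 20 bytes of the fingerprint.
pub fn citizen_id(fp: &Fingerprint) -> CitizenId {
    let mut id = [0u8; 20];
    id.copy_from_slice(&fp[..20]);
    id
}

fn main() {
    // A made-up fingerprint; in practice the full value lives in Atlas cards.
    let fp: Fingerprint = [0xAB; 32];
    let id = citizen_id(&fp);
    assert_eq!(&id[..], &fp[..20]);
}
```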
Concretely:
- Intra-citizen commits continue to key on their own clock alone (no change from the citizen’s own book).
- Confluence commits that fan out to multiple target citizens key on (target_citizen_id, target_clock_event).
- Confluence commits that fan in from multiple origin citizens key on (origin_citizen_id, origin_clock_event).
- An integrator joining commits across a confluence and the citizens the confluence touches keys on (citizen_id, clock_event) everywhere.
- When a coalition ships an Almanac the confluence has opted into, the confluence may additionally carry almanac_tick as an alignment key for cross-citizen joins where the citizens’ native clocks are not commensurate.
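The keying rules above collapse to one shared key shape. A sketch, with names that are assumptions for illustration rather than the confluence crate’s real definitions:

```rust
/// Illustrative key types for cross-citizen commits; these names are
/// assumptions, not the confluence crate's real definitions.
pub type CitizenId = [u8; 20];
/// A citizen-local clock event: slot, tick, round, or sequence number.
pub type ClockEvent = u64;

/// Fan-out commits key on (target_citizen_id, target_clock_event) and
/// fan-in commits on (origin_citizen_id, origin_clock_event); both
/// collapse to the same (citizen_id, clock_event) shape.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct CommitKey {
    pub citizen_id: CitizenId,
    pub clock_event: ClockEvent,
}

/// Optional alignment key for confluences that opt into an Almanac.
#[allow(dead_code)]
pub struct AlignedKey {
    pub key: CommitKey,
    pub almanac_tick: Option<u64>,
}

fn main() {
    let a = CommitKey { citizen_id: [1; 20], clock_event: 7 };
    let b = CommitKey { citizen_id: [1; 20], clock_event: 8 };
    // Derived Ord is lexicographic: same citizen, later clock event sorts later.
    assert!(a < b);
}
```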
The 20-byte citizen_id is stable — it never changes for a given citizen deployment. A citizen retirement yields a new stable id and is therefore a new citizen id for every coalition that adopts it.
The subscription graph
Each arrow is a subscription. The downstream component watches the upstream’s public collection or stream and reacts in its own state machine. No arrow represents an atomic cross-Group commit. Dashed boxes are confluences; the oracle box on the left is a standalone organism.
integrators
(wallets,
searchers)
│
├──► zipnet[L1]:Submit ────► Broadcasts[L1] ──► UnsealedPool[L1] ─┐
│ │
├──► zipnet[L2]:Submit ────► Broadcasts[L2] ──► UnsealedPool[L2] ─┤
│ │
└──► offer[L1]:Bid / offer[L2]:Bid │
│
price-feed.eur ──► Price[tick] │
│
almanac ──► Ticks[K] │
│ │
└──────────── referenced by confluences for alignment ──────┐ │
                                                            ▼ ▼
┌───────────────────────┐
│ intents (confluence) │
│ reads UnsealedPool[*] │
│ + price-feed.eur │
│ commits RoutedIntents │
└───────────────────────┘
│
RoutedIntents[L1,S] / RoutedIntents[L2,S]
▼
offer[L1]:AuctionOutcome offer[L2]:AuctionOutcome
│ │
atelier[L1]:Candidates atelier[L2]:Candidates
│ │
relay[L1]:AcceptedHeaders relay[L2]:AcceptedHeaders
│ │
tally[L1]:Refunds tally[L2]:Refunds
│ │
└────────────┬──────────────┘
▼
┌──────────────────────────┐
│ ledger (confluence) │
│ reads Refunds[*] │
│ commits SharedRefunds │
└──────────────────────────┘
│
▼
integrators
atlas ──► CitizenCards (consumed by dashboards, not coalition pipeline)
chronicle ──► ChronicleEntries (consumed by auditors, not coalition pipeline)
Read top to bottom:
- Wallets submit to each lattice’s zipnet::Submit; searchers submit to each lattice’s offer::Bid. Neither changes from builder.
- Each lattice’s zipnet and unseal do their usual work independently. The oracle organism price-feed.eur publishes ticks on its own clock. The Almanac (if shipped) publishes its own tick stream.
- The intents confluence subscribes to every spanned citizen’s relevant public surface — the lattices’ UnsealedPools and, in this example, price-feed.eur’s Price stream. It may additionally subscribe to the Almanac for alignment keys. It observes cross-citizen intents and commits routed-intents entries per target lattice.
- Each lattice’s offer subscribes to the confluence’s RoutedIntents collection filtered for its own citizen id. Routed intents are an additional input to the auction, along with the lattice’s own UnsealedPool.
- Each lattice’s atelier and relay proceed as in builder.
- Each lattice’s tally commits independently.
- The ledger confluence subscribes to every spanned lattice’s tally::Refunds and commits the aggregated SharedRefunds collection.
- Integrators read SharedRefunds and per-citizen surfaces depending on their use case. Dashboards read the Atlas. Auditors read the Chronicle.
Atlas and Chronicle are not on any confluence’s critical path in this example; they are consumed by dashboards and auditors directly. Modules serve observability and provenance, not the coalition’s data flow.
Subscription code shape
A contributor implementing a confluence writes a role
driver in that confluence’s crate. The driver is a mosaik
event loop multiplexing subscriptions across citizens and
calling group.execute(...) on its own Group. The pattern
is identical to any organism driver, fanned out across
more citizen kinds.
// Inside intents::node::roles::committee_member.
//
// `lattices` and `organisms` are the ordered sets of
// citizen references the confluence spans, sourced from
// self.config.lattices and self.config.organisms.
loop {
    tokio::select! {
        // One subscription per spanned lattice's UnsealedPool.
        ev = pool_a.when().appended() => {
            let round = pool_a.get(ev).expect("appended implies present");
            let intents = extract_cross_chain_intents(&round);
            self.group.execute(IntentsCmd::ObservePool {
                citizen_id: lattice_a_id,
                slot: round.slot,
                intents,
            }).await?;
        }
        ev = pool_b.when().appended() => {
            // ... same shape
        }
        // One subscription per spanned standalone organism.
        ev = price_feed_eur.when().appended() => {
            let tick = price_feed_eur.get(ev).unwrap();
            self.group.execute(IntentsCmd::ObservePrice {
                citizen_id: eur_feed_id,
                tick: tick.seq,
                price: tick.value,
            }).await?;
        }
        // Optional: align to a coalition's Almanac.
        ev = almanac.when().appended() => {
            let t = almanac.get(ev).unwrap();
            self.group.execute(IntentsCmd::AdvanceAlmanacTick(t.seq)).await?;
        }
        _ = self.apply_clock.tick() => {
            // Per-slot deadline: seal the routing decision.
            self.group.execute(IntentsCmd::SealSlot).await?;
        }
    }
}
Every observed event carries enough evidence (a (citizen_id, clock_event, commit_pointer) triple) that the state machine can validate it against the citizen’s public surface during replay.
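A minimal sketch of that replay-time check, assuming the citizen’s public surface can be viewed as a map from (citizen_id, clock_event) to a commit pointer; all names here are illustrative:

```rust
use std::collections::HashMap;

// Illustrative aliases; the real types live in the confluence's crate.
type CitizenId = [u8; 20];
type ClockEvent = u64;
type CommitPointer = [u8; 32];

/// The evidence triple carried by every observed event.
pub struct Evidence {
    pub citizen_id: CitizenId,
    pub clock_event: ClockEvent,
    pub commit_pointer: CommitPointer,
}

/// A replayer rejects any command whose evidence pointer does not
/// resolve against the citizen's public surface.
pub fn evidence_resolves(
    surface: &HashMap<(CitizenId, ClockEvent), CommitPointer>,
    ev: &Evidence,
) -> bool {
    surface
        .get(&(ev.citizen_id, ev.clock_event))
        .map_or(false, |ptr| *ptr == ev.commit_pointer)
}

fn main() {
    let mut surface = HashMap::new();
    surface.insert(([1u8; 20], 42u64), [9u8; 32]);
    let good = Evidence { citizen_id: [1; 20], clock_event: 42, commit_pointer: [9; 32] };
    let missing = Evidence { citizen_id: [1; 20], clock_event: 43, commit_pointer: [9; 32] };
    assert!(evidence_resolves(&surface, &good));
    // An unresolved pointer means the replay must reject the command.
    assert!(!evidence_resolves(&surface, &missing));
}
```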
Apply order across the graph
Within one confluence’s Group, mosaik’s Raft variant guarantees every committee member applies commands in the same order. Across confluences, and across citizens, no such guarantee exists. The coalition layer relies on the same two properties the builder layer does, scaled up:
- Monotonicity by key. Within any component, commits for key (C, K+1) are not applied before (C, K). The component’s state machine enforces this per-citizen in its apply handler.
- Eventual consistency by subscription. A downstream component’s driver observes every upstream commit eventually, because mosaik collections are append-only and readers converge.
Together these are enough to reconstruct a consistent per-citizen per-clock-event view across the whole coalition without global atomicity. An integrator wanting “the canonical decision for citizen C, clock event K, across citizens and confluences” reads each component’s per-(C, K) commit independently and joins on the key.
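The per-citizen monotonicity rule can be sketched as a small gate in an apply handler; the u64 clock, the zero-based first event, and the names are simplifying assumptions:

```rust
use std::collections::HashMap;

// Illustrative alias; the real type lives in the component's crate.
type CitizenId = [u8; 20];

/// Tracks the last applied clock event per citizen, so a commit for
/// (C, K+1) is held back until (C, K) has been applied.
#[derive(Default)]
pub struct ApplyGate {
    last_applied: HashMap<CitizenId, u64>,
}

impl ApplyGate {
    /// Returns true (and advances) only if `clock_event` is the next
    /// expected event for this citizen; assumes the first event is 0.
    pub fn try_apply(&mut self, citizen: CitizenId, clock_event: u64) -> bool {
        let expected = self.last_applied.get(&citizen).map_or(0, |k| k + 1);
        if clock_event == expected {
            self.last_applied.insert(citizen, clock_event);
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut gate = ApplyGate::default();
    let c: CitizenId = [7; 20];
    assert!(gate.try_apply(c, 0));
    assert!(!gate.try_apply(c, 2)); // (C, 2) before (C, 1): held back
    assert!(gate.try_apply(c, 1));
    assert!(gate.try_apply(c, 2)); // now in order
}
```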
When an upstream stalls
Failure propagation is a small matrix. Rows are the failing component; columns are downstream components’ observable behaviour.
| Upstream fails | Same-citizen downstreams | Confluences that read this citizen | Other citizens |
|---|---|---|---|
| lattice organism | see builder composition | degraded — partial evidence per spec | unaffected |
| standalone organism | see that organism’s own book | degraded — partial evidence per spec | unaffected |
| citizen stable id bump (retirement) | in-citizen consumers re-pin | confluence must redeploy against new stable id | unaffected |
| citizen content hash bump | in-citizen consumers re-pin if they pinned | confluences pinning content redeploy; stable-id-only pinners are unaffected | unaffected |
| confluence stalls | no effect (citizens don’t depend on confluences for their own pipeline) | downstream confluence consumers see no new commits | unaffected |
| module stalls | no effect (citizens don’t depend on modules) | confluences aligned to this module see delayed/degraded commits | unaffected |
| confluence committee crosses threshold | no effect | bad commits possible (integrity lost); on-chain settlement is the final arbiter | unaffected |
| coalition operator retires coalition | no technical effect (citizens continue) | modules under retired coalition emit retirement markers, then stop | unaffected |
Two properties fall out, mirroring builder’s:
- Upstream failures degrade downstream outputs; they do not corrupt them. A missing UnsealedPool[L1, S] narrows the intents confluence’s commit for that slot; the commit itself remains well-formed.
- The pipeline is drainable per citizen. Failures on one citizen do not block the coalition’s other citizens. Each citizen’s pipeline is local; each confluence applies what it observes.
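A sketch of the first property: sealing a slot folds in only the pools actually observed, so a missing pool narrows the commit without malforming it. The names and the String stand-ins are illustrative:

```rust
// Illustrative alias; the real type lives in the confluence's crate.
type CitizenId = [u8; 20];

/// A routed-intents commit for one slot, narrowed to the citizens
/// whose UnsealedPool was actually observed.
pub struct RoutedIntentsCommit {
    pub slot: u64,
    /// The evidence the commit carries covers exactly this set.
    pub folded_citizens: Vec<CitizenId>,
    pub intents: Vec<String>, // stand-in for real routed intents
}

/// Seal on the per-slot deadline with whatever arrived in time.
pub fn seal_slot(slot: u64, observed: &[(CitizenId, Vec<String>)]) -> RoutedIntentsCommit {
    RoutedIntentsCommit {
        slot,
        folded_citizens: observed.iter().map(|(c, _)| *c).collect(),
        intents: observed.iter().flat_map(|(_, i)| i.clone()).collect(),
    }
}

fn main() {
    // L2's pool stalled this slot: the commit narrows to L1's input only,
    // but it is still a well-formed commit for the slot.
    let commit = seal_slot(9, &[([1u8; 20], vec!["swap".to_string()])]);
    assert_eq!(commit.slot, 9);
    assert_eq!(commit.folded_citizens.len(), 1);
    assert_eq!(commit.intents, vec!["swap".to_string()]);
}
```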
What the coalition composition contract guarantees
Given the full commit logs of every citizen and every confluence the coalition references:
- Deterministic replay per (citizen_id, clock_event). Anyone can recompute each component’s per-(C, K) decision and cross-check confluence commits against the citizen inputs they folded in.
- Independent auditability. A confluence’s commit carries evidence pointers to the citizen commits it depends on. A consumer that trusts the citizens can verify a confluence’s commit without trusting the confluence’s committee, as long as the evidence pointers resolve.
- No silent corruption across coalitions. A citizen referenced from multiple coalitions is still one citizen; its commit log is one log. Different coalitions reading the same citizen see the same facts. A confluence referenced from multiple coalitions is still one confluence; its commit log is one log.
What the contract does not guarantee
- Atomic all-or-nothing commits across citizens. Refused by design (see Atomicity). An integrator reading a confluence’s commit and the spanned citizens’ commits must tolerate differing commit wall-clock times.
- Cross-coalition coordination. Coalitions coexist, they do not coordinate. If a cross-coalition commit is genuinely needed, the answer is a confluence that spans the relevant citizens directly — which both coalitions may reference independently.
- Bounded end-to-end latency. As in builder: if a confluence stalls, downstream consumers never see its next commit. No confluence-level timeout triggers a dummy commit; operators requiring bounded latency add per-slot deadlines at the confluence level.
Contributors implementing a new confluence
Wiring checklist for a confluence:
- Identify every spanned citizen’s public surface you subscribe to. Record as ordered slices of citizen references in your Config — lattices, organisms, or both.
- Decide whether your commits fan in, fan out, or both. Fan-in confluences aggregate per-origin-citizen facts; fan-out confluences distribute one fact to multiple target citizens; some confluences do both.
- Key every commit by (citizen_id, clock_event). Never by one alone. Use the 20-byte truncated citizen id. If your state machine has a natural sub-clock-event cadence, commit at that cadence but carry the owning (citizen_id, clock_event) pair. If the confluence opts into a coalition’s Almanac and needs cross-citizen alignment, also carry almanac_tick.
- Write one role driver per Group member role. Keep it as a tokio::select! over the upstream subscriptions (one per citizen) and your local timers.
- Validate evidence on replay. The upstream-event pointers carried in your Observe* commands must be checkable against the citizen’s public surface during replay; a replay that sees an unresolved evidence pointer must reject the command.
- Document your per-citizen failure policy. What happens when one spanned citizen stalls, reorders, or bumps its stable id or content hash — all are operational realities.
- Declare dependencies on modules. If you require an Almanac for alignment, or consume Chronicle entries, document it.
- Document the composition contract in your confluence’s contributors/composition-hooks.md (per-confluence documentation). Update architecture and this page’s subscription graph to include the new confluence if the example coalition is the right place for it.
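One possible shape for the Config described in the first checklist item, carrying ordered slices of citizen references; the field and type names are assumptions, not the actual crate API:

```rust
/// Hypothetical citizen reference as it might appear in a confluence
/// Config; field names are illustrative.
#[derive(Clone, Debug)]
pub struct CitizenRef {
    /// 20-byte truncated stable id used as the cross-citizen key.
    pub citizen_id: [u8; 20],
    /// Optional content-hash pin; stable-id-only pinners leave this None.
    pub content_hash: Option<[u8; 32]>,
}

/// Hypothetical confluence Config: ordered slices of spanned citizens,
/// plus any declared module dependency.
#[derive(Clone, Debug, Default)]
pub struct Config {
    /// Ordered: the role driver creates one subscription per entry.
    pub lattices: Vec<CitizenRef>,
    pub organisms: Vec<CitizenRef>,
    /// Declared module dependency (e.g. an Almanac for alignment).
    pub almanac: Option<CitizenRef>,
}

fn main() {
    // Two spanned lattices and one standalone oracle organism, as in
    // this chapter's example coalition.
    let cfg = Config {
        lattices: vec![
            CitizenRef { citizen_id: [1; 20], content_hash: None },
            CitizenRef { citizen_id: [2; 20], content_hash: None },
        ],
        organisms: vec![CitizenRef { citizen_id: [3; 20], content_hash: None }],
        almanac: None,
    };
    assert_eq!(cfg.lattices.len() + cfg.organisms.len(), 3);
}
```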
Cross-references
- Architecture — the coalition shape these subscriptions run on.
- Confluences — the organism the arrows land on.
- Basic services — Atlas, Almanac, Chronicle details.
- Atomicity — what the subscription graph deliberately does not buy you.
- Threat model — how trust assumptions compose across the subscription graph.