Zero to one: shipping an AI-native zipnet

audience: ai

A playbook for taking an AI-native product from idea to a shipped MVP running on a mosaik coalition. The primitives this article composes are specified in four places only: this book, mosaik, zipnet, and the builder (6e3v) book. If you find yourself reaching for anything outside that stack, you have drifted.

The playbook is agent-led. The reader is assumed to be an engineer (human or agent) capable of writing async Rust, or an agent team whose members include one. Every step is something a team of one human plus a small set of LLM-class agents can run end-to-end in a few weeks.

What “AI-native zipnet” means

A zipnet is one mosaik organism whose role is to accept sealed submissions from clients and make their cleartext available to downstream organisms through an unseal committee under a declared t-of-n threshold. Cleartext appears in an UnsealedPool Collection keyed by slot; downstream organisms subscribe to the pool and act. Integrators submit anonymously: the only identity on the submission path is a rotating zipnet peer_id, never a coalition ClientId.
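
The shapes this paragraph implies can be sketched in a few lines. Everything here is illustrative — `SealedSubmission`, `UnsealedPool`, and their fields are stand-ins, not the zipnet crate's actual types:

```rust
use std::collections::BTreeMap;

/// Illustrative only: what a client publishes — ciphertext plus routing
/// metadata. The only identity on the submission path is a rotating
/// zipnet peer_id, never a coalition ClientId.
pub struct SealedSubmission {
    pub peer_id: [u8; 32],   // rotating; re-derived per submission window
    pub slot: u64,           // the slot this submission targets
    pub ciphertext: Vec<u8>, // sealed to the unseal committee
}

/// Illustrative only: what downstream organisms subscribe to — cleartext
/// keyed by slot, populated once t of n committee members unseal.
pub struct UnsealedPool {
    pub entries: BTreeMap<u64, Vec<Vec<u8>>>, // slot -> cleartexts
}

impl UnsealedPool {
    /// All cleartexts delivered for a slot, or empty if none unsealed yet.
    pub fn cleartexts_for(&self, slot: u64) -> &[Vec<u8>] {
        self.entries.get(&slot).map(|v| v.as_slice()).unwrap_or(&[])
    }
}
```

The point of the sketch is the asymmetry: submissions carry a rotating peer identity in, and only slot-keyed cleartext comes out.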

An AI-native zipnet instance is one whose submissions, processors, operators, or all three are AI agents:

  • AI submitters. LLM-class agents produce intents, bids, prompts, code-review requests, or other sealed payloads and publish them into the zipnet.
  • AI processors. Downstream organisms (or a confluence) run agent policies that consume the unsealed cleartext and commit derived facts.
  • AI operators. Committee members, schedulers, and reputation scorers are themselves agents running under TDX-attested binaries, using the Compute basic service to top up their own workloads and run experiments against production traffic.

All three can coexist, and the playbook assumes at least two of them. A product is AI-native only if the agents are load-bearing — if an operator could ship the same service without them, the stack is over-engineered.

Step 0 — Pick the problem

Three properties make a candidate a good 0→1 fit for zipnet + coalition + AI. Skip the candidate if any one is missing.

  1. Submissions are private, aggregations are public. Each individual submission is sensitive (a bid, a prompt, a vote, a code excerpt, a health record). The aggregated commit (an auction outcome, an attested inference result, a statistic, a tally) can be public or cheaply attested. If the product would need to publish raw submissions, zipnet is the wrong submission layer.

  2. The value lands in the aggregation or routing, not in one submitter getting a reply. A classic request/response service is a distraction here. Zipnet’s shape rewards products where N submissions plus a committee-committed output is the thing consumers pay for.

  3. At least one bonded, honest committee can carry the trust. If every downstream participant must independently verify every submitter, zipnet adds no cryptographic value over an ordinary mailbox. The payoff is the unseal quorum plus whatever downstream TDX / reputation composition you layer on top.

Candidate shapes that fit all three and have an AI-native angle:

  • Confidential inference broker. Clients submit sealed prompts; a TDX-attested inference confluence unseals, runs, and commits (prompt_digest, model_id, attestation, output_digest). The output is returned through zipnet’s reply path. Consumers pay per verified inference.
  • Private code review. Clients submit sealed code excerpts; a reviewer confluence of LLM agent organisms commits aggregated annotated reviews with line-level evidence pointers. Reviewers are scored by a reputation organism.
  • Sealed intent pool for cross-chain trades. Builder territory, extended. Intents are AI-generated; an LLM confluence matches routes across chains before the offer stage.
  • Anonymous prediction market. Agents submit sealed forecasts; a scoring confluence commits realised-outcome accuracy; payoffs flow through a coordination-market confluence.
  • Private research-data corpus. Researchers submit sealed data points; an aggregation confluence commits statistics under differential-privacy budgets; AI agents scan the public commits for correlations without touching raw records.

Pick one whose addressable market you can name in one sentence. For the rest of this playbook we use the confidential inference broker as the running example because it exercises every primitive the stack offers.

Step 1 — Write the one-page spec

Before any code, write a one-page spec. Structure:

  • What gets submitted. The sealed-payload shape (one struct).
  • Who submits. Integrator role: human app, autonomous AI agent, a mix.
  • Who unseals. The unseal-committee role: a t-of-n bonded set, typically TDX-attested.
  • What the downstream organism (or confluence) commits. One or two named primitives per organism. For a confluence, the spanned citizens.
  • Who consumes. Downstream readers and their purpose.
  • What the trust shape is. Per-organism majority-honest / threshold / TDX assumptions, composed.
  • What gets charged, and to whom. The revenue path: a coordination market, a per-commit fee, a subscription.

Worked example (confidential inference broker):

submit        : SealedInferenceRequest
submitter     : integrator agent or human app (mosaik ClientId)
unseal        : 3-of-5 TDX committee
downstream    : InferenceConfluence committing AttestedInference
                commits per (prompt_digest, model_id, output_digest, attestation)
consume       : original submitter reads reply via zipnet,
                general public reads aggregate attestations
trust         : zipnet any-trust on peer set +
                unseal 3-of-5 TDX +
                inference confluence 2-of-3 TDX
revenue       : coordination market clears per-request price;
                requester pays settlement hash in the request

One page, one struct per row, no prose. If you cannot fit it on one page, the shape is unclear and the product won’t survive pivots.
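
"One struct per row" can be made concrete for the worked example's submit row. This field layout is hypothetical — the real schema is whatever the zipnet instance's Config.content pins:

```rust
/// Hypothetical layout for the worked example's single submitted struct.
/// Field names and types are illustrative, not a published schema.
pub struct SealedInferenceRequest {
    pub prompt_ciphertext: Vec<u8>, // sealed to the 3-of-5 unseal committee
    pub model_id: String,           // which model the request targets
    pub reply_key: [u8; 32],        // requester's x25519 key for the sealed reply
    pub settlement_hash: [u8; 32],  // on-chain payment the request points at
}
```

If the spec needs a second struct on the submit row, the shape is probably two products.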

Step 2 — Decide the scale

Three scales the stack supports. Pick the smallest one your spec fits into; the larger scales cost more and gate more primitives you don’t need yet.

Scale 0 — a single organism

One zipnet plus one downstream organism, no coalition, no confluence, no modules. Minimal stack. Use this when the spec has a single committed primitive and no aggregation across citizens. The zipnet book is the sole upstream reference; the organism is one crate following the zipnet pattern.

Scale 1 — one lattice

A full builder lattice (the “6e3v” shape: six organisms and three verbs, one per EVM chain). Use this when the spec is block-building or a sibling problem whose state-machine shape mechanically lines up with zipnet → unseal → offer → atelier → relay → tally. Most non-block-building problems do not need a full lattice. Do not adopt this scale just because it exists.

Scale 2 — a coalition

Multiple citizens (zero or more lattices, zero or more standalone organisms), zero or more confluences, and up to four modules (Atlas, Almanac, Chronicle, Compute) under one CoalitionConfig. Use this when:

  • The product spans more than one citizen (inference broker + reputation organism + coordination market), or
  • You want a named handshake so integrators compile one constant, or
  • You need Compute (AI-native products almost always do).

The confidential inference broker lives at scale 2: one zipnet instance, one inference confluence, one reputation organism, one coordination-market confluence, and an Atlas confluence, plus one Compute module. Five citizens plus one module.

Step 3 — Wire the citizens

Write the CoalitionConfig as a Rust const. This is the compile-time handshake; every integrator compiles it verbatim.

use builder::LatticeConfig;
use coalition::{CoalitionConfig, ConfluenceConfig, OrganismRef};

pub const INFER_CORP: CoalitionConfig<'static> = CoalitionConfig {
    name: "infer.corp",
    lattices: &[],
    organisms: &[
        ZIPNET_SUBMISSIONS,
        REPUTATION_INFERENCE,
    ],
    confluences: &[
        INFERENCE_ROUTER,
        COORDINATION_MARKET,
        ATLAS,
        COMPUTE,
    ],
    ticket_issuer: Some(INFER_TICKETS),
};

Rules:

  • Every organism and confluence has its own crate and its own Config. Order in the slices is the order integrators will use to index.
  • A confluence’s ConfluenceConfig folds the spanned citizen references as content — not a coalition root (confluences are coalition-independent).
  • If you need a coordination market or a compute settlement mechanism, declare an optional ticket issuer. Components that want to accept tickets from the issuer include its root in their own TicketValidator; nothing is compelled.

Publish the CoalitionConfig hex and the struct definition as your handshake page.
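
How the "CoalitionConfig hex" is derived is the coalition crate's business; the sketch below only shows the general idea — deterministic, length-prefixed serialization of the config fields, then hex. The real handshake would almost certainly run the bytes through a cryptographic hash (e.g. SHA-256, which std does not provide), so treat `fingerprint` as a stand-in:

```rust
/// Hex-encode a byte string (lowercase, two digits per byte).
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

/// Illustrative stand-in for a config fingerprint: length-prefix every
/// field so the encoding is unambiguous, then hex-encode. A real build
/// would hash `buf` with a cryptographic hash before encoding.
fn fingerprint(name: &str, organisms: &[&str], confluences: &[&str]) -> String {
    let mut buf = Vec::new();
    let fields = std::iter::once(name)
        .chain(organisms.iter().copied())
        .chain(confluences.iter().copied());
    for field in fields {
        buf.extend_from_slice(&(field.len() as u32).to_be_bytes());
        buf.extend_from_slice(field.as_bytes());
    }
    to_hex(&buf)
}
```

What matters for integrators is determinism: the same const always yields the same hex, so "compile one constant" and "check one hex string" are the same act.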

Step 4 — Add AI operators

For every citizen and every confluence, pick the agent shape (see agents as citizens):

Component              Agent shape                                  Why
---------              -----------                                  ---
zipnet submissions     integrator (shape 1) — the app’s clients     Submissions are private actions; no replay needed
unseal committee       committee of TDX organisms (shape 3)         t-of-n threshold decryption needs joint-commit output
inference confluence   swarm (shape 4) over agent organisms         Multiple reviewer agents, one aggregated commit
reputation organism    standalone organism (shape 2)                Stable scoring history consumers pin
coordination market    confluence committee (shape 3)               Market clearing needs one commit per round
Atlas / Chronicle      standard committee (non-AI)                  Observability, not intelligence
Compute provider       compute-bridge (shape 2), TDX-attested       Honest capacity reporting, multi-backend provisioning

Rules for deciding a shape:

  • Start at shape 1 (integrator). Upgrade only when you need replay, public identity, or bonding by peers.
  • Use shape 4 for quality-aggregation. A swarm-as-confluence lets you score individual agents separately while committing one authoritative output.
  • Use shape 3 for market-clearing or threshold work. The committee’s joint commit is the primitive consumers pay for.
  • Do not bond to a single agent-operator unless the agent is a standalone organism (shape 2) with a published MR_TD. Otherwise you are bonding to one operator’s keys.

Step 5 — Get the infrastructure

AI-native products are resource-hungry. The Compute module and the compute-bridge example give you a direct path from market revenue to provisioned workloads.

Two deployments are typical at the MVP stage:

5a. Reserve compute for the inference confluence

The inference confluence declares a ComputeManifest pinning its own MR_TD plus min CPU / RAM / TDX requirements. Each committee member runs on compute acquired from a compute-bridge provider — operator choice of AWS, GCP, Azure, or a bare-TDX host. Settlement flows through a coordination-market confluence the inference broker is also a member of: users pay; the market clears; compute-bridge grants are funded from the clearing.
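
The ComputeManifest's real definition belongs to the Compute module; a minimal sketch of the shape the paragraph describes, with a provider-side admission check, might look like this (all names hypothetical):

```rust
/// Hypothetical manifest shape: what a confluence pins about its own
/// runtime. The real struct is defined by the Compute module.
pub struct ComputeManifest {
    pub mr_td: [u8; 48],   // pinned TDX measurement of the confluence binary
    pub min_cpus: u32,
    pub min_ram_gb: u32,
    pub requires_tdx: bool,
}

/// Hypothetical provider-side view of one host's capacity.
pub struct HostOffer {
    pub cpus: u32,
    pub ram_gb: u32,
    pub tdx: bool,
}

/// Can this host carry a workload pinned to this manifest?
pub fn satisfies(m: &ComputeManifest, h: &HostOffer) -> bool {
    h.cpus >= m.min_cpus && h.ram_gb >= m.min_ram_gb && (!m.requires_tdx || h.tdx)
}
```

The MR_TD field is the load-bearing one: capacity mismatches just fail to match, but an MR_TD mismatch means the wrong binary would be measured into the boot.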

5b. Launch a compute-bridge for your agent subscribers

Shipping an AI-native product also creates a second product almost for free: if the coalition’s agents are hungry for compute and your cloud / bare-metal access is any good, you can run a compute-bridge and serve them. The bridge is designed so that a single operator interaction is enough to go live: attach keys, watch dashboard. See the launching section on the crate page for the full six-step flow; the shape of the operator’s day after step 5 is:

  • Inputs you attach, once.

    • AWS access-key pair (if enabling AWS), scoped to a dedicated sub-account.
    • GCP service-account JSON (if enabling GCP), scoped to one project.
    • Azure service principal (if enabling Azure), scoped to one resource group.
    • Bare-metal SSH private keys (if enabling bare-metal) for each host the operator is willing to share.
    • A CoalitionConfig pointing at the coalition whose agents you want to serve.
    • A per-backend rate table so the dashboard can estimate costs.

    Everything above is TOML plus files. No third-party SaaS, no runtime secret fetch, no separate billing integration. Every file is measured into the TDX boot and unreadable from the host.

  • Outputs you watch, live. A dashboard served on 127.0.0.1:<port> inside the TDX guest; operator reaches it by SSH port-forward from their workstation. The dashboard shows:

    • registered status + MR_TD match;
    • active-subscriber count (coalition agents bonded to your provider card);
    • active grants / lifetime total;
    • per-backend capacity vs utilisation (usage, idle headroom);
    • window cost estimate from the operator’s rate table;
    • window settled revenue from the coordination market;
    • ring-buffer of recent events, all redacted of requester identity.

    The dashboard enforces the zipnet privacy contract on the operator, not just the submitter: requester ClientId, peer_id, prompts, image contents, and per-settlement attribution are never in the dashboard’s memory. The operator sees flow, not identities.
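
The "TOML plus files" input set described under step 5b might look like the fragment below. Every key name here is hypothetical — the real schema belongs to the compute-bridge crate — but the shape illustrates the claim: local files only, no runtime secret fetch.

```toml
# Hypothetical operator config; real key names belong to the compute-bridge crate.
coalition_config = "infer_corp.hex"      # the coalition whose agents you serve

[backends.aws]
access_key_file = "aws-sub-account.key"  # scoped to a dedicated sub-account
rate_per_cpu_hour = "0.042"              # feeds the dashboard cost estimate

[backends.baremetal]
ssh_key_files = ["host-a.pem", "host-b.pem"]
rate_per_cpu_hour = "0.015"
```

Every file referenced here is measured into the TDX boot, so the dashboard's MR_TD match also attests the operator's own configuration.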

Subscribers are the flywheel

Agents in the coalition “subscribe” to a compute-bridge when they bond its provider card in the Compute module’s Collection<ProviderId, ProviderCard>. Subscribing means the agent trusts the bridge’s TDX attestation, is willing to route grants to it, and will accept its settlement attestations on the coordination market.

The active-subscriber count on the dashboard is the operator’s product-market-fit signal for the bridge itself:

  • More regions + more TDX capacity + lower declared cost → more subscribers over time.
  • Subscribers compound: a bridge already serving N agents is more trustworthy to the next agent than a freshly-registered one (the reputation organism can read the lifetime ComputeLog history on the Compute module’s log stream).
  • When subscribers migrate to a competitor, the dashboard makes it visible early — the operator can enable another backend, drop their declared price, or retire cleanly.

None of this requires the operator to track identities. The subscriber count is a single integer; the reputation organism does the per-agent accounting on its own stream.

Privacy contract, end to end

Both sides of a grant are zipnet-sealed:

  • Inbound request. The requester’s ComputeRequest envelope is submitted through the Compute module’s zipnet; the unseal committee delivers cleartext to the matched provider with a rotating peer_id only.
  • Outbound receipt. The compute-bridge seals the SSH access receipt to the requester’s x25519 public key via X25519-ChaCha20-Poly1305 and returns it on the zipnet reply stream.
  • Operator view. Aggregate counters only. The operator cannot match a cpu-hour spike to a specific requester.
  • Subscriber view. A subscribing agent sees its own grants and receipts (because it sealed them itself); it cannot see other agents’ workloads.
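
The outbound receipt's AEAD (X25519-ChaCha20-Poly1305) would come from crypto crates and is not reproduced here. What can be sketched with std alone is one unambiguous way to frame the pieces the contract names — an ephemeral public key, a nonce, and the ciphertext — so the requester can parse what comes back on the reply stream. The layout is an assumption, not the zipnet wire format:

```rust
/// Hypothetical receipt framing: ephemeral x25519 pubkey (32 bytes) ||
/// nonce (12 bytes) || AEAD ciphertext (rest). The encryption itself
/// is out of scope here; only the framing round-trip is shown.
pub fn frame(eph_pk: &[u8; 32], nonce: &[u8; 12], ciphertext: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(44 + ciphertext.len());
    out.extend_from_slice(eph_pk);
    out.extend_from_slice(nonce);
    out.extend_from_slice(ciphertext);
    out
}

/// Split a framed receipt back into its parts; None if too short.
pub fn parse(bytes: &[u8]) -> Option<([u8; 32], [u8; 12], &[u8])> {
    if bytes.len() < 44 {
        return None;
    }
    let mut pk = [0u8; 32];
    pk.copy_from_slice(&bytes[..32]);
    let mut nonce = [0u8; 12];
    nonce.copy_from_slice(&bytes[32..44]);
    Some((pk, nonce, &bytes[44..]))
}
```

Because the requester sealed the request to its own x25519 key, only it can finish the decryption — which is what makes the operator's aggregate-only view possible rather than merely polite.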

Net effect: an AI-native product with a compute-bridge side-business runs with no single party — the requester, the bridge operator, the coalition operator, the cloud — holding the full picture.

5c. Run experiments without disturbing prod

As inference quality drifts, the inference confluence runs a candidate model as an experiment: a Compute grant for a short-lived image, a shadow Stream<SuccessorInference> consumers can optionally bond against, an ExperimentReport committed when the window closes. Successful experiments flip to production via a retirement marker pointing at the new ConfluenceConfig.

Step 6 — Ship the MVP

What “shipping” looks like:

  1. A handshake page. One URL carrying:
    • The CoalitionConfig hex and struct.
    • Per-citizen links with MR_TDs for TDX organisms.
    • A change channel.
  2. A client SDK. One crate + one helper in the integrator’s language of choice. Tokio + mosaik handle; helpers for sealing a submission and reading a reply.
  3. A smoke test. A binary that submits a canned sealed request, unblocks a committee round, and verifies it can read its own reply within a declared latency budget. Run this on every deploy.
  4. An operator runbook. Per-module MR_TD rotation, how to add a compute-bridge, how to retire a confluence version, how to respond to a zipnet unseal committee losing majority. The operator runbooks in this book are the template; mirror their structure.
  5. A single on-call dashboard. Subscription lag per spanned citizen, Compute-grant hit rate, committee membership health, reputation-organism scoring cadence.

The MVP is not “a screenshot of the agent responding.” It is five artefacts plus the green smoke-test.
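
The smoke test in artefact 3 reduces to one loop: submit a canned sealed request, poll for the reply, enforce the latency budget. The real client SDK is not specified in this chapter, so the transport below is abstracted behind a hypothetical trait; only the loop's logic is the point:

```rust
use std::time::{Duration, Instant};

/// Hypothetical transport trait standing in for the real client SDK.
pub trait Transport {
    fn submit_sealed(&mut self, payload: &[u8]);
    fn read_reply(&mut self) -> Option<Vec<u8>>;
}

/// Submit one canned request and require a non-empty reply within budget.
pub fn smoke_test<T: Transport>(
    t: &mut T,
    canned: &[u8],
    budget: Duration,
) -> Result<Duration, String> {
    let start = Instant::now();
    t.submit_sealed(canned);
    loop {
        if let Some(reply) = t.read_reply() {
            let elapsed = start.elapsed();
            if reply.is_empty() {
                return Err("empty reply".into());
            }
            if elapsed > budget {
                return Err(format!("latency {elapsed:?} over budget {budget:?}"));
            }
            return Ok(elapsed);
        }
        if start.elapsed() > budget {
            return Err("no reply within budget".into());
        }
    }
}

/// Loopback mock so the skeleton runs without a live coalition.
pub struct Loopback { pub queued: Option<Vec<u8>> }
impl Transport for Loopback {
    fn submit_sealed(&mut self, payload: &[u8]) { self.queued = Some(payload.to_vec()); }
    fn read_reply(&mut self) -> Option<Vec<u8>> { self.queued.take() }
}
```

On a real deploy the Transport impl wraps the SDK's sealing helper and zipnet reply reader; the budget is the one declared on the handshake page.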

Step 7 — Pricing and revenue

Coordination markets are the mosaik-native way to capture value. The pattern below is emergent coordination’s pattern 3, specialised for an AI-native product:

  Client         Market            Inference            Reputation
  ------         ------            ---------            ----------
  Bid -->
                 Round clear
                 (price, winner)
                                   Sealed submission --> Unseal --> Commit
                                   committee                         AttestedInference
  Receive reply <----------------  (via zipnet)

                 Settlement
                 pointer:
                 on-chain TX hash
                                                       Score by outcome
                                                       (if applicable)

Revenue mechanics to decide, in order:

  • Unit of charge. Per-inference, per-second of agent time, per-unit-of-aggregation. The market’s content pins this.
  • Settlement rail. On-chain native-token, a stablecoin, or a coalition-scoped credit token redeemable off-protocol. Evidence is a hash; finality lives off the module.
  • Take rate. The operator’s cut. Announce it up-front and fold it into the market’s clearing rule.
  • Reputation-gated pricing. Higher-reputation inference-confluence committee members accept higher-priced bids; lower-reputation drop to lower price tiers or sit out. Pin this in the market’s content parameters.
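
One way these levers could combine into a clearing rule — match the highest bids to the highest-reputation committee members, clear at the lowest matched bid (uniform price), take the operator's cut. This is a sketch of a possible rule, not the market confluence's actual one; the real rule is pinned in the market's content parameters:

```rust
/// Hypothetical uniform-price clearing for one round. Returns the
/// clearing price, the operator's cut (take rate in basis points),
/// and how many bids matched; None if nothing clears.
pub fn clear_round(
    mut bids: Vec<u64>,
    mut member_reputation: Vec<u32>,
    take_rate_bps: u64,
) -> Option<(u64, u64, usize)> {
    bids.sort_unstable_by(|a, b| b.cmp(a));              // highest bid first
    member_reputation.sort_unstable_by(|a, b| b.cmp(a)); // highest reputation first
    let matched = bids.len().min(member_reputation.len());
    if matched == 0 {
        return None;
    }
    let price = bids[matched - 1];           // uniform clearing price
    let cut = price * take_rate_bps / 10_000;
    Some((price, cut, matched))
}
```

The reputation gating falls out of the sort: when bids outnumber members, the lowest bids simply fail to match, and low-reputation members only see rounds the high-reputation set left behind.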

An AI-native product has one more lever: the agents themselves are customers. An agent that wants inference pays the market like any other client; an agent running in a Compute grant pays the compute-bridge. Revenue capture inside an agent society can sustain a product whose human-user traffic is still growing.

Step 8 — Operate sustainably

The sustainability investigation specifies five strategies and three anti-abandonment patterns. For an MVP, compose the minimum viable package:

  • A reputation-anchored survival: every agent organism pins a reputation-organism floor in its admission.
  • B market-funded operation: every bonded committee member reads settlement evidence from the coordination-market confluence.
  • E successor-chain hand-off: every Config carries a SuccessorPolicy; every retirement emits a marker pointing at a named successor.

Add:

  • X a watchdog organism scoring every agent on subscription count + downstream citation. Running from day one means abandonment is caught compositionally, not anecdotally.
  • Y Atlas-surfaced successor chains so late-arriving integrators bind the latest active version.
  • Z Chronicle heartbeats for every committee.

These six compose without any new protocol surface. An MVP that ships them is already operationally mature in a way most web-stack startups aren’t within six months.
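
The successor-chain hand-off (E) plus Atlas-surfaced chains (Y) amount to one tiny algorithm on the integrator side: walk retirement markers until no successor is named. The map-based representation below is illustrative — real markers are committed primitives, and the keys would be Config fingerprints:

```rust
use std::collections::HashMap;

/// Hypothetical successor walk: each retired version's marker names its
/// successor; a late-arriving integrator binds the end of the chain.
pub fn latest_active(start: &str, successors: &HashMap<String, String>) -> String {
    let mut current = start.to_string();
    let mut hops = 0;
    while let Some(next) = successors.get(&current) {
        current = next.clone();
        hops += 1;
        if hops > 64 {
            break; // defensive bound against a malformed cyclic chain
        }
    }
    current
}
```

The hop bound is purely defensive; a well-formed retirement chain is acyclic because each marker is emitted exactly once at retirement.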

Worked end-to-end — confidential inference broker

Seven-week path from clean repo to production:

Week 1 — Spec

One-page spec per Step 1. Pricing model per Step 7. Handshake document drafted. No code yet.

Week 2 — zipnet instance

Fork the zipnet reference shape. Your binary’s Config.content pins the sealed-request schema. Unseal committee starts at 3-of-3 mocked on one operator; expand at week 6.

Week 3 — Inference confluence + reputation organism

Two crates: infer-conf (confluence, shape 3) and infer-rep (standalone organism, shape 2). The confluence’s state machine aggregates per-member inference outputs; disagreement at commit time lowers member weights for the next round. Reputation organism scores confluence outputs by outcome when an outcome is observable (for some inference classes, an answer-key arrives later; for others, only user-verdict signals are available).

Week 4 — Coordination market + Compute wiring

Third crate: infer-market (confluence, shape 3) clearing per-request bids. Fourth crate: a compute-bridge deployment per backend you can access (start with AWS and one bare-TDX host). The market’s settlement policy pins an on-chain payment contract the requester pays into before their bid counts.

Week 5 — Coalition assembly

Write the CoalitionConfig. Publish the handshake page. Compile-in the struct to the canonical client SDK.

Week 6 — Committee hardening

Unseal committee expands to 3-of-5 under three independent operators. Each committee member runs inside TDX with the crate’s published MR_TD. Publish MR_TDs on the handshake page. Bond the reputation organism’s readers against settled on-chain payments so Sybil cost dominates false scoring.

Week 7 — Smoke test + ship

Run the smoke-test binary 10,000 times across all coalition regions. Dashboard all five critical paths. Post to the change channel. Integrators compile the CoalitionConfig, the smoke-test turns green, traffic begins.

Result: a production system where

  • a client submits a sealed prompt and gets back an attested, committee-signed inference;
  • the committee is TDX-attested, its composition is visible in Atlas, and its revenue is visible in the coordination-market stream;
  • compute is acquired on demand from a compute-bridge, reimbursed from the market;
  • reputation compounds across weeks and scales pricing;
  • every step is committable, auditable, and bonds only what the participant chose to trust.

Failure modes and iteration

  • Zero user traffic. The product’s spec was wrong. Do not pivot the stack; pivot the product. The coalition composition re-uses perfectly — you rewire the confluence’s content parameters and republish.
  • One committee member is malicious. The t-of-n threshold + TDX attestation + reputation organism catch this compositionally. The offending MR_TD is unpinned from the handshake next release.
  • Compute costs outpace revenue. The market clearing rule is the lever: raise the minimum bid, shrink the confluence’s committee, or pivot to a lower-cost inference model. Each is one ConfluenceConfig bump; the retirement marker handles the transition.
  • Reputation organism is compromised. See emergent coordination — pattern 2: consumers consult multiple reputation organisms; the compromised one is unpinned.
  • A backend goes offline. The compute-bridge fleet router routes around it; if the backend had uniquely-capable hosts (e.g. only bare-TDX), grants requiring TDX fail to match until a replacement backend is added.

Iteration is cheap because every interface is a Config fingerprint: a new version is a new fingerprint, and every integrator is rebonded through the retirement chain.

What this playbook does not claim

  • Your product will succeed. Distribution, pricing, and user psychology are not in this book. This playbook makes shipping possible; it does not make it profitable.
  • Zipnet is the right submission layer for every AI-native product. It is the right one when the three properties in Step 0 all hold. When they do not, use an ordinary mosaik organism with a public submission stream and skip the unseal machinery.
  • The compute-bridge covers every hardware profile. Large-scale GPU inference is not a solved deployment on TDX-capable providers today. The bare-metal backend is your route in the interim.
  • Agents can replace the operator. Every committee still has a human operator accountable for running the binary. Constitutional pinning (strategy D from sustainability) means that even when the agent is making decisions, the operator is making the decision to keep the committee running. No blueprint here forecloses that.
  • Everything in the four upstream books already exists as a shipped binary. The coalition meta-crate lands at v0.2, Compute at v0.6, and this playbook’s production path traces that timeline. Until then, the playbook reads as design.

Cross-references