---
tags: cyber, research, article, core
crystal-type: article
crystal-domain: cyber
date: 2026-03-25
---
# polynomial proof system

a proof system where the polynomial is the universal primitive. commit to data, prove computation, identify content, sample availability, fold composition — one operation on one object. no hash trees. no Merkle paths. no separate identity scheme. the polynomial IS the proof, the state, the identity, and the erasure code.

## definition

a polynomial proof system is a transparent argument of knowledge where:

1. **the witness is a multilinear polynomial.** the execution trace, the state, the content — all are multilinear polynomials over the Boolean hypercube $\{0,1\}^k$
2. **the commitment is a linear-code encoding.** no hash tree. the prover encodes the polynomial via an expander graph. binding from one hash call. opening from recursive tensor decomposition
3. **the constraint check is a sumcheck.** the verifier reduces an exponential sum to one evaluation. the prover's table halves each round. total prover work: O(N)
4. **composition is folding.** multiple proof instances fold into one accumulator with ~30 field operations. verification happens once at the end
5. **identity is the commitment.** the PCS commitment to content IS the content's identity (CID). accessing content IS opening the commitment. proving IS committing. one primitive

properties: transparent (no setup), post-quantum (code-based), linear-time prover, logarithmic proof size, Merkle-free, algebraically composable.

## the five operations

one PCS. five uses.

**commit.** encode the polynomial via expander graph. O(N) field operations. one hash call for binding. 32 bytes.

```
C = Brakedown.commit(f)
  = hemera(expander_encode(eval_table(f)))
  cost: O(d × N) field multiplications, d ≈ 6-10
```

**open.** prove the polynomial evaluates to v at point r. recursive: commit the √N opening vector, recurse. log log N levels. O(log N + λ) proof size.

```
proof = Brakedown.open(f, r)
  recursive: commit opening vector → open that → ... → send O(λ) directly
  proof size: ~1.3 KiB at N = 2²⁰
```

**verify.** check the opening proof. O(λ log log N) field operations. ~5 μs. no hashing.

```
accept = Brakedown.verify(C, r, v, proof)
  cost: ~660 field operations at N = 2²⁰
```

**fold.** combine two proof instances into one accumulator. ~30 field operations. the accumulator stores the combined claim. verification deferred to one final check.

```
acc' = HyperNova.fold(acc, instance)
  cost: ~30 field multiplications + 1 hash
  size: ~200 bytes (constant)
```

**identify.** the commitment IS the content identity. wrapped with domain separation for collision resistance across contexts.

```
CID = hemera(C ‖ domain_tag)
  32 bytes. universal. supports O(1) opening at any position.
  the same CID that identifies the content also proves any claim about it.
```

## the architecture

```
f: {0,1}^k → F_p                   the data (any data: trace, state, content)
        ↓ commit
C = Brakedown.commit(f)            32 bytes (the identity AND the commitment)
        ↓ constrain
SuperSpartan.check(C, CCS)         sumcheck reduces to one evaluation
        ↓ open
Brakedown.open(f, r) → proof       ~1.3 KiB (recursive, Merkle-free)
        ↓ fold
HyperNova.fold(acc, proof)         ~30 field ops (defer verification)
        ↓ decide
SuperSpartan.decide(acc) → final   one check, ~8K constraints, ~5 μs
```

## why multilinear

univariate polynomials (degree N, one variable) require FFT for evaluation: O(N log N). the prover cannot be linear.

multilinear polynomials (degree 1 per variable, k variables where $N = 2^k$) are evaluated by the sumcheck protocol. each round fixes one variable and halves the domain:

$$2^k + 2^{k-1} + 2^{k-2} + \ldots + 1 = 2^{k+1} - 1 = O(N)$$

the prover IS linear. no FFT. no NTT. a shrinking table is the only data structure.

multilinear polynomials over $\{0,1\}^k$ are isomorphic to binary trees with $2^k$ leaves. a tree IS a polynomial. axis (tree navigation) IS polynomial evaluation. cons (tree construction) IS variable prepend. the data structure and the proof structure are the same mathematical object.
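the halving loop above can be sketched directly. a minimal illustration (not the production prover), using Goldilocks as the field:

```python
# evaluate a multilinear polynomial at r in F_p^k by folding its
# 2^k-entry table on {0,1}^k, one variable per round. each round
# halves the table, so total work is 2^k + 2^(k-1) + ... + 1 = O(N).

P = 2**64 - 2**32 + 1  # Goldilocks

def ml_eval(table, r):
    """table: 2^k evaluations of f on the Boolean hypercube,
    most-significant variable first; r: point in F_p^k."""
    t = [x % P for x in table]
    for ri in r:
        half = len(t) // 2
        # fix the top variable to ri:
        # t'[j] = (1 - ri)·t[j] + ri·t[j + half]
        t = [(t[j] + ri * (t[j + half] - t[j])) % P for j in range(half)]
    return t[0]

# on Boolean points this recovers table entries; outside {0,1}^3 it
# evaluates the unique multilinear extension
table = [3, 1, 4, 1, 5, 9, 2, 6]        # k = 3, N = 8
assert ml_eval(table, [1, 0, 1]) == 9   # index 0b101 = 5
assert ml_eval(table, [2, 0, 0]) == 7   # (1 - 2)·3 + 2·5 = 7
```

the shrinking list `t` is the only data structure, matching the "no FFT, no NTT" claim.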

## why linear codes

Merkle-based schemes (FRI) first encode via FFT (O(N log N) field operations), then hash the codeword into a tree (O(N) hash calls). opening requires O(log N) authentication-path hashes per query. 77% of a FRI-based proof is Merkle paths.

expander-graph linear codes commit by sparse matrix-vector multiplication: O(N) field operations. opening by recursive tensor decomposition: O(log N + λ) field elements. zero hash calls in the proof. the bottleneck shifts from hashing to field arithmetic.

| | Merkle (FRI/WHIR) | linear code (Brakedown) |
|---|---|---|
| commit | O(N log N) field ops + O(N) hash | O(N) field ops |
| open | O(log² N) hash | O(log N + λ) field ops |
| proof content | hash paths (77%) | field elements (100%) |
| verify | hash + field | field only |
| prover bottleneck | hash function | field arithmetic |
| hardware | hash accelerator | multiply-accumulate |

the proof contains zero hashes. verification is pure field arithmetic. on a Goldilocks field processor: multiply-accumulate at clock speed. no hash pipeline stall.
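to make the commit-side cost concrete, here is a toy sparse encoding. the matrix is a fixed random sparse matrix, NOT a real expander code (expander constructions need specific degree and expansion parameters); the point is only that encoding costs d·N multiply-accumulates and is linear, which the recursive opening relies on:

```python
import random

P = 2**64 - 2**32 + 1  # Goldilocks

def random_sparse_matrix(n, rate=2, d=8, seed=0):
    """rate·n rows, each a d-sparse random linear combination.
    a stand-in for the expander-code generator matrix."""
    rng = random.Random(seed)
    return [[(c, rng.randrange(1, P)) for c in rng.sample(range(n), min(d, n))]
            for _ in range(rate * n)]

def encode(rows, msg):
    # d·N multiply-accumulates total: linear in N for fixed row weight d
    return [sum(w * msg[c] for c, w in row) % P for row in rows]
```

linearity means encoding a sum equals summing encodings, the homomorphic property that lets openings decompose over tensor factors.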

## why folding

recursive proof composition (proof-of-proof) requires verifying a proof INSIDE a circuit: ~50K-200K constraints per level. N composition steps = N × verifier_cost.

folding replaces verification with accumulation. two CCS instances combine into one with ~30 field operations. the combined instance is satisfiable iff both originals are. verification happens ONCE at the end.

```
recursive verify: N levels × ~8K constraints = ~8K·N total
folding: N folds × ~30 field ops + 1 decider (~8K) = ~30N + 8K total

at N = 1000: recursive = ~8M constraints. folding = ~38K. 210× cheaper.
at N = 10⁶: recursive = ~8B constraints. folding = ~30M. 267× cheaper.
```

folding enables proof-carrying computation: each VM step folds one trace row into the accumulator during execution. the proof is ready when the program finishes. zero additional proving latency.
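the cost comparison can be checked directly; a sketch using the document's own constants (~8K-constraint verifier/decider, ~30 field ops per fold):

```python
VERIFIER_CONSTRAINTS = 8_000   # in-circuit verifier, per recursion level
FOLD_OPS             = 30      # field ops per fold
DECIDER              = 8_000   # one final decider check

def recursive_cost(n):
    # n levels, each paying a full in-circuit verification
    return n * VERIFIER_CONSTRAINTS

def folding_cost(n):
    # n cheap folds, verification deferred to one decider
    return n * FOLD_OPS + DECIDER

assert recursive_cost(1000) == 8_000_000    # ~8M constraints
assert folding_cost(1000) == 38_000         # ~38K, ~210× cheaper
assert folding_cost(10**6) == 30_008_000    # ~30M vs ~8B recursive
```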
| 126 | + |
| 127 | +## the polynomial IS the identity |
| 128 | + |
| 129 | +in hash-based systems, content identity (hash) and proof commitment (PCS) are separate primitives. a file's CID is SHA-256 of its bytes. a proof's commitment is FRI over its trace. two different operations. two different security analyses. |
| 130 | + |
| 131 | +in a polynomial proof system, they merge: |
| 132 | + |
| 133 | +``` |
| 134 | +content → multilinear polynomial → Brakedown.commit → 32 bytes |
| 135 | +
|
| 136 | +this 32 bytes IS: |
| 137 | + the content identity (CID) |
| 138 | + the proof commitment (for any claim about the content) |
| 139 | + the DAS commitment (for availability sampling) |
| 140 | + the state binding (for authenticated queries) |
| 141 | +``` |
| 142 | + |
| 143 | +accessing byte range [a,b] of a particle = opening the particle's polynomial at positions [a,b]. the proof is ~75 bytes per position. no download of the full content. no separate verification step. |
| 144 | + |
| 145 | +this unification means: every content-addressed object in the system — every particle, every formula, every signal — is simultaneously identifiable, provable, and sampleable through one operation. |
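a minimal sketch of the identify step, with `hashlib.sha256` standing in for hemera (only the 32-byte output size is relied on here; the real construction uses hemera over a Brakedown commitment):

```python
import hashlib

def cid(commitment: bytes, domain_tag: bytes) -> bytes:
    """CID = H(C ‖ domain_tag): the same commitment gets distinct
    identities in distinct contexts, preventing cross-context collisions."""
    assert len(commitment) == 32
    return hashlib.sha256(commitment + domain_tag).digest()

C = bytes(32)  # placeholder for a Brakedown commitment
assert cid(C, b"particle") != cid(C, b"formula")  # domain separation
assert len(cid(C, b"particle")) == 32             # still a 32-byte identity
```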
| 146 | + |
| 147 | +## DAS is native |
| 148 | + |
| 149 | +a multilinear polynomial over $\{0,1\}^k$ evaluates naturally on the larger domain $\mathbb{F}_p^k$. the evaluations beyond the Boolean hypercube ARE redundant information — the polynomial is determined by its $2^k$ values on $\{0,1\}^k$. |
| 150 | + |
| 151 | +reshape as $\sqrt{N} \times \sqrt{N}$ bivariate polynomial. the extension to $2\sqrt{N} \times 2\sqrt{N}$ is standard 2D Reed-Solomon. any $\sqrt{N} \times \sqrt{N}$ submatrix reconstructs the original. |
| 152 | + |
| 153 | +DAS sampling = PCS opening at random positions. each sample: ~75 bytes. 20 samples for 99.9999% confidence: ~1.5 KiB. no separate erasure coding pipeline. no separate commitment scheme. the content polynomial IS the erasure code. |
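the sampling arithmetic, under the assumption (implicit above) that blocking reconstruction requires withholding at least half the extended symbols, so each uniform sample succeeds with probability at most 1/2 when data is unavailable:

```python
def das_confidence(samples: int) -> float:
    # failure probability halves per sample: confidence = 1 - 2^-samples
    return 1.0 - 0.5 ** samples

def das_cost_bytes(samples: int, per_sample: int = 75) -> int:
    # ~75 bytes per PCS opening, per the figures above
    return samples * per_sample

assert das_confidence(20) > 0.999999   # six nines at 20 samples
assert das_cost_bytes(20) == 1500      # ~1.5 KiB total
```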
| 154 | + |
| 155 | +## the numbers |
| 156 | + |
| 157 | +for N = 2²⁰ (typical execution trace or large particle): |
| 158 | + |
| 159 | +``` |
| 160 | +commit: O(N) field ops, ~40 ms single core |
| 161 | +proof size: ~1.3 KiB (PCS) + ~0.5 KiB (sumcheck) + ~0.3 KiB (eval) = ~2 KiB |
| 162 | +verify: ~660 field ops, ~5 μs |
| 163 | +fold: ~30 field ops, ~0.2 μs |
| 164 | +prover memory: O(√N) via tensor compression |
| 165 | +decider: ~8K constraints, ~5 μs |
| 166 | +``` |
| 167 | + |
| 168 | +composition at scale: |
| 169 | + |
| 170 | +``` |
| 171 | +1000-transaction block: 1000 folds + 1 decider = 30K field ops + 8K constraints |
| 172 | +1000-block epoch: 1000 folds + 1 decider = 30K + 8K |
| 173 | +1M-block chain history: 1 accumulator = ~200 bytes, verify ~5 μs |
| 174 | +``` |
| 175 | + |
| 176 | +## what this makes possible |
| 177 | + |
| 178 | +**self-proving computation.** every VM step carries its proof. no separate proving phase. no prover infrastructure. the computation IS the proof. |
| 179 | + |
| 180 | +**O(1) content access.** any byte range of any particle verified by one PCS opening. no download of full content. a phone verifies a 1 GB model's layer 47 weights with a 75-byte proof. |
| 181 | + |
| 182 | +**240-byte chain checkpoint.** the universal accumulator (BBG_root + folding accumulator + height) proves ALL history. join the network: download 240 bytes, verify in 5 μs. full confidence from genesis. |
| 183 | + |
| 184 | +**native data availability.** no separate erasure coding. the polynomial IS the code. DAS = PCS opening at random positions. 20 samples, ~1.5 KiB, 99.9999% confidence. |
| 185 | + |
| 186 | +**programmable authenticated state.** deploy new tables by writing a nox program. standard operations (INSERT, UPDATE, TRANSFER) get automatic CCS jet optimization: 3-5 constraints per operation. no protocol upgrade. |
| 187 | + |
| 188 | +**provable consensus.** the tri-kernel computation (1.42B constraints) fits at 33% of polynomial proof capacity. validators prove they computed π* correctly. consensus = computation + proof. |
| 189 | + |
| 190 | +## the complete stack |
| 191 | + |
| 192 | +``` |
| 193 | +one field: Goldilocks (p = 2⁶⁴ - 2³² + 1) |
| 194 | +one hash: hemera (~3 calls per execution, trust anchor) |
| 195 | +one PCS: recursive Brakedown (everything: proof, state, identity, DAS) |
| 196 | +one VM: nox (16 patterns, polynomial nouns) |
| 197 | +one state: BBG_poly(10 dims) + A(x) + N(x), all PCS-committed |
| 198 | +one sync: structural sync (CRDT + PCS + DAS native) |
| 199 | +one identity: hemera(PCS.commit(content) ‖ tag) — 32 bytes |
| 200 | +``` |
| 201 | + |
| 202 | +seven components. every pair shares at least one primitive. the system is algebraically closed: proofs about proofs, commitments to commitments, identities of identities — all reduce to polynomial evaluation over one field. |
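the field choice can be sanity-checked. the point of p = 2⁶⁴ - 2³² + 1 is the identity 2⁶⁴ ≡ 2³² - 1 (mod p): a wide product reduces with shifts and adds, which is what keeps verification as pure multiply-accumulate with no hash pipeline:

```python
P = 2**64 - 2**32 + 1  # Goldilocks

# the reduction identity behind cheap 64-bit modular multiplication
assert 2**64 % P == 2**32 - 1

# Fermat checks consistent with p being prime
for a in (2, 3, 7):
    assert pow(a, P - 1, P) == 1
```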

see [[nox]] for the VM, [[hemera]] for the hash, [[zheng]] for the proof system, [[BBG]] for polynomial state, [[structural-sync]] for sync, [[recursive brakedown]] for the PCS, [[polynomial nouns]] for the data model