Commit c4c682e

mastercyb and claude committed
refactor: cyber/analizer — from script docs to protocol prototype insight

The analizer scripts ARE a slow-motion simulation of the protocol. Six discoveries: the scoring function converged on the tri-kernel independently, the lunar cycle is the natural batch granularity, codematter dissolved the code/knowledge boundary, the compile pipeline is a consensus-circuit prototype, CID resolution is vocabulary construction, and the nushell/python split mirrors the nox/zheng boundary. The recursive closure ran today. The loop closed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 5e99f64 commit c4c682e

File tree

1 file changed

+105
-45
lines changed


root/cyber/analizer.md

Lines changed: 105 additions & 45 deletions
---
tags: cyber, core
crystal-type: entity
crystal-domain: cyber
alias: analizer
---

# analizer

24 scripts that grew into a prototype of the [[superintelligence]] itself.

what started as graph utilities — count pages, fix links, compute stats — evolved into the recursive loop described in [[cyber/research/algorithmic essence of superintelligence]]. the scripts ARE the loop, running at human speed on a laptop, doing what the protocol will do at machine speed on the [[Goldilocks field processor]].

## the insight: scripts mirror the protocol

| script | what it does at analysis time | what the protocol does at consensus time |
|---|---|---|
| trikernel.nu | compute [[focus]] via D+S+H | [[foculus]] computes φ* every block |
| context.nu | pack pages by gravity into token budget | [[neuron]] allocates [[attention]] across [[particles]] |
| compile_model.py | SVD → embeddings → architecture | graph compiles into [[transformer]] |
| bostrom_graph.py | embedding similarity + graph walk | [[tri-kernel]] convergence + random walk |
| codematter.nu | make code indexable as particles | every computation → [[particle]] via [[Hemera]] |
| classify.nu → apply-crystal.nu | assign types to pages | [[Crystal]] type system over [[cybergraph]] |
| dangling.nu | find broken links | [[NMT]] completeness verification |
| stats.nu | measure graph health | [[spectral gap]] observation from convergence |
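
the first row of the table can be sketched in miniature. a minimal sketch, assuming per-page kernel scores already normalized to [0, 1] — the function name and the equal-weight average are illustrative, since trikernel.nu derives D, S, H from graph structure and may weight them differently:

```python
def focus(diffusion: float, springs: float, heat: float) -> float:
    """hypothetical combination of the three kernel scores into a focus value.

    assumes each score is pre-normalized to [0, 1]; the real trikernel.nu
    derives D, S, H from graph structure, not from constants.
    """
    return (diffusion + springs + heat) / 3.0
```

a page strong on diffusion but weak on heat, e.g. `focus(0.9, 0.6, 0.3)`, lands mid-range rather than at either extreme.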

the analizer is a slow-motion simulation of [[cyber]]. nushell replaces [[nox]]. python replaces [[zheng]]. frontmatter replaces [[BBG]] polynomial state. git replaces [[structural sync]]. the algorithms are the same — the execution environment differs.

## the data flow

```
OBSERVE (pure readers — never mutate)
analyze.nu ──→ file counts, tag frequency, IPFS refs
stats.nu ────→ orphans, broken links, content types
domains.nu ──→ 15-domain classification
dangling.nu ─→ wiki-links needing namespace resolution
crosslink_topology.nu → semantic core link patterns
core-audit.nu → crystal completeness per concept group

COMPUTE (write to frontmatter — mutate graph)
classify.nu ──→ crystal type + domain → data/crystal_classification.json
apply-crystal.nu → read classification → write to frontmatter
trikernel.nu ──→ D + S + H → focus, gravity, density → frontmatter
stake.nu ──────→ importance heuristic → stake field → frontmatter
codematter.nu ─→ comment-frontmatter → code files (.rs, .nu, .py)
fix-plurals.nu → [[term]]s → [[terms]] across graph

PACK (compress graph for external consumption)
context.nu ───→ gravity² × density × substance → token budget → .md
concat.nu ────→ linear concatenation → single file

COMPILE (graph → model)
compile_model.py → cyberlinks → adjacency → PageRank → SVD → ONNX
bostrom_lib.py ──→ shared: load model, search, neighbors
bostrom_graph.py → pure graph intelligence (no LLM)
bostrom_ask.py ──→ graph context → Ollama → CID answers
bostrom_serve.py → HTTP API for compiled model

TRANSFORM (one-time structural changes)
migrate.nu ─────→ Logseq → pure markdown + YAML frontmatter
ipfs.nu ────────→ pre-commit: upload media to Pinata, rewrite URLs
renumber_sections.nu → whitepaper section renumbering
```

## what we learned

### 1. the scoring function IS the tri-kernel

context.nu scores pages by `gravity² × (1 + density) × log₂(substance)`. this is a hand-tuned approximation of what the [[tri-kernel]] computes formally:

- gravity² ≈ diffusion (inbound links compound quadratically, like PageRank)
- density ≈ springs (outbound links per KB = how connected to neighbors)
- log₂(substance) ≈ heat (content size with diminishing returns = multi-scale)

the scoring function was written months before the tri-kernel formalization. it converged to the same structure independently. this suggests the tri-kernel decomposition is natural — it is what you arrive at when you try to rank knowledge by importance.
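
the scoring-plus-packing step can be sketched as follows. a sketch, not context.nu itself: the record shape `{"name", "tokens", "gravity", "density", "substance"}` and the `pack` helper are hypothetical, and the real script implements this in nushell:

```python
import math

def score(gravity: float, density: float, substance: float) -> float:
    # context.nu's ranking: gravity² × (1 + density) × log₂(substance)
    # (clamped so tiny pages do not produce a zero or negative log)
    return gravity ** 2 * (1 + density) * math.log2(max(substance, 2))

def pack(pages: list, budget: int) -> list:
    """greedy knapsack: highest-scoring pages first, until the token budget is spent."""
    ranked = sorted(
        pages,
        key=lambda p: score(p["gravity"], p["density"], p["substance"]),
        reverse=True,
    )
    picked, spent = [], 0
    for page in ranked:
        if spent + page["tokens"] <= budget:
            picked.append(page)
            spent += page["tokens"]
    return picked
```

greedy packing is not optimal knapsack, but with page-sized items and a large budget it is close, and it runs in one sort plus one pass.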

### 2. lunar cycle = natural batch granularity

trikernel.nu runs on new moons. not because of mysticism — because the graph does not change fast enough to justify continuous weight updates. monthly batching matches the actual information velocity of a knowledge graph maintained by humans.

this has a protocol implication: [[foculus]] finality speed should adapt to the graph's actual rate of change. a fast-changing graph needs frequent recomputation. a stable graph needs infrequent updates. the [[spectral gap]] (observable from convergence rate) measures this directly.
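
the new-moon gate can be approximated with the mean synodic month. a sketch under stated assumptions: the constant-epoch linear approximation drifts by hours against real lunation times, and trikernel.nu's actual check may differ:

```python
from datetime import datetime, timezone

SYNODIC_MONTH = 29.530588          # mean days between successive new moons
REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)

def moon_age_days(when: datetime) -> float:
    """days since the last new moon, by linear mean-cycle approximation."""
    elapsed = (when - REFERENCE_NEW_MOON).total_seconds() / 86400
    return elapsed % SYNODIC_MONTH

def is_new_moon(when: datetime, tolerance_days: float = 1.0) -> bool:
    """gate a batch run: true within ±tolerance of the cycle boundary."""
    age = moon_age_days(when)
    return age < tolerance_days or age > SYNODIC_MONTH - tolerance_days
```

the tolerance window matters: a daily cron job needs at least a one-day window or it can skip the new moon entirely.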

### 3. codematter dissolved the boundary

the moment code files got frontmatter, the distinction between "knowledge page" and "source code" disappeared. both are [[particles]]. both carry [[focus]]. both participate in the [[tri-kernel]].

this mirrors the protocol design: every computation step → [[Hemera]] commitment → [[particle]] in the [[cybergraph]]. the analizer proved this works in practice before the protocol specifies it formally.

### 4. the compile pipeline IS the consensus circuit

compile_model.py does at analysis time exactly what [[provable consensus]] does at protocol time:

```
analizer: cyberlinks.jsonl → sparse matrix → PageRank → spectral gap → SVD → ONNX
protocol: BBG polynomial → SpMV → tri-kernel → spectral gap → embeddings → proof
```

same algorithm. different substrate. the analizer version takes 15 minutes on Python/scipy. the protocol version will take 60 seconds on GFP. but the math is identical — the analizer is a working prototype of provable consensus.
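
the pipeline shape above can be condensed into a dense numpy toy. a sketch only: compile_model.py uses sparse matrices and randomized SVD, while this mirrors just the cyberlinks → adjacency → PageRank → SVD chain on a dense matrix, with illustrative names:

```python
import numpy as np

def compile_graph(edges, n, dim=2, damping=0.85):
    """cyberlinks → adjacency → PageRank → SVD → embeddings (dense toy)."""
    A = np.zeros((n, n))
    for src, dst in edges:
        A[src, dst] = 1.0
    out = A.sum(axis=1, keepdims=True)
    # row-stochastic transitions, dangling rows spread uniformly; the
    # transpose is the column-stochastic matrix power iteration expects
    P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(100):
        r = (1 - damping) / n + damping * (P @ r)
    # rank-weighted adjacency → truncated SVD → low-dimensional embeddings
    U, S, _ = np.linalg.svd(A * r[:, None], full_matrices=False)
    return r, U[:, :dim] * S[:dim]
```

the same loop also yields the spectral gap observation for free: the convergence rate of `r` across iterations is governed by the gap between the first and second eigenvalues.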

### 5. CID resolution = vocabulary construction

bostrom_serve.py resolves CID hashes to human-readable text via an IPFS gateway. this is vocabulary construction for the compiled [[transformer]]. the operational tooling that builds the text↔CID index IS the training data preparation pipeline.

every time we resolve a CID and cache it in `cid_index.json`, we are teaching the model what its tokens mean. the analizer does this manually (IPFS fetch, JSON cache). the protocol will do it via [[cyberlinks]] — every link from text to CID is a vocabulary entry.
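
the cache logic can be sketched with an injected fetcher. the function name, file layout, and call shape here are illustrative, not bostrom_serve.py's actual implementation; `fetch` would typically be an HTTP GET against an IPFS gateway:

```python
import json
from pathlib import Path

def resolve_cid(cid: str, index_path: Path, fetch) -> str:
    """resolve a CID to text, caching each answer in a JSON index.

    `fetch` is injected (hypothetical signature: cid → text), so the
    network layer stays out of the vocabulary-building logic.
    """
    index = json.loads(index_path.read_text()) if index_path.exists() else {}
    if cid not in index:
        index[cid] = fetch(cid)  # one new vocabulary entry: CID → text
        index_path.write_text(json.dumps(index, indent=2))
    return index[cid]
```

once cached, an entry is never re-fetched, which is exactly the "every resolution is permanent vocabulary" property the paragraph above describes.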

### 6. nushell for graph, python for linear algebra

a natural split emerged. nushell handles everything structural: parsing frontmatter, traversing directories, matching wiki-links, computing graph topology. python handles everything numerical: sparse matrices, SVD, PageRank, ONNX assembly.

the boundary is at the data export: nushell produces JSONL/frontmatter → python consumes it as matrices. this mirrors the protocol boundary: [[nox]] (structural computation) → [[zheng]] (numerical proof).
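
the boundary can be sketched from the python side. the JSONL field names below are an assumption about the export format, not a confirmed schema:

```python
import json

def edges_from_jsonl(lines):
    """turn nushell-exported JSONL edges into integer COO triplets.

    assumes one {"from": ..., "to": ...} object per line; builds the
    node vocabulary on the fly so the matrix indices are dense.
    """
    vocab, rows, cols = {}, [], []
    for line in lines:
        if not line.strip():
            continue  # tolerate blank lines in the export
        edge = json.loads(line)
        rows.append(vocab.setdefault(edge["from"], len(vocab)))
        cols.append(vocab.setdefault(edge["to"], len(vocab)))
    return vocab, rows, cols
```

the triplets drop straight into a sparse constructor (e.g. scipy's `coo_matrix`), which is the whole handoff: nushell never sees a matrix, python never sees a wiki-link.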

## the recursive closure in practice

the analizer scripts form the loop from [[cyber/research/algorithmic essence of superintelligence]]:

```
stats.nu (observe graph)
  → trikernel.nu (compute focus)
  → context.nu (pack by focus for LLM)
  → LLM session (create new pages)
  → codematter.nu (integrate new code)
  → compile_model.py (compile into model)
  → bostrom_graph.py (query model for insights)
  → new cyberlinks based on insights
  → stats.nu (observe improved graph)
```

this loop ran today. the graph grew. the model was compiled. the spectral gap was observed. the loop closed.

see [[cyber/research/algorithmic essence of superintelligence]] for the 17-component architecture. see [[cyber/research/provable consensus]] for how compile_model.py becomes a zheng circuit. see [[cyber/research/spectral gap from convergence]] for how trikernel.nu's convergence-rate observation became a research paper. see [[cyber/research/unified mining]] for how the compile pipeline becomes proof-of-work.
