12. Synthetic Memory (Layer 5)
Synthetic Memory bridges LLM reasoning (Layer 7) and LNN dynamics (Layer 6). It encodes derived knowledge — the output of an agent’s LLM reasoning on the remix subgraph — into CfC-compatible hidden state vectors (h₁, h₂).
12.1 Purpose
Synthetic Memory is not remixed CMBs themselves; it is understanding derived from them through reasoning.
| Direction | Description |
|---|---|
| Input | Text output from the agent’s LLM after tracing lineage ancestors and reasoning on the remix subgraph |
| Output | (h₁, h₂) vector pair compatible with the agent’s CfC cell (Layer 6) |
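In code terms, Layer 5 reduces to a single function from the LLM's reasoning text to a CfC-ready state pair. A minimal interface sketch (the names `synthetic_memory` and `HiddenPair` are illustrative, not from the spec; the body is a zero-vector placeholder):

```python
from typing import Tuple
import numpy as np

# (h1, h2), each sized to the agent's CfC hidden dimension (Layer 6)
HiddenPair = Tuple[np.ndarray, np.ndarray]

def synthetic_memory(reasoning_text: str, hidden_dim: int) -> HiddenPair:
    """Encode LLM reasoning output (Layer 7) into a CfC-compatible
    hidden state pair. Placeholder: returns zero vectors, which is
    also the mandated output when no understanding is derived."""
    h = np.zeros(hidden_dim, dtype=np.float32)
    return h, h.copy()
```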
12.2 Encode Pipeline
The pipeline has four stages; each stage MUST complete before the next begins.
12.3 Encoder Requirements
- Encoder MUST produce vectors matching the agent’s CfC hidden dimension.
- Encoder MUST be deterministic — same input MUST produce the same output.
- Encoder SHOULD preserve semantic similarity (similar reasoning → similar vectors).
- If reasoning produces no understanding, output MUST be zero vectors (h₁ = 0, h₂ = 0).
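The requirements above can be exercised with a toy encoder. This sketch is a stand-in for the real SYM encoder: it satisfies the three MUSTs (fixed dimension, determinism, zero vectors on empty understanding) but, being hash-based, does not satisfy the semantic-similarity SHOULD. The constant `HIDDEN_DIM` is an assumed value; in practice it is the agent's CfC hidden dimension.

```python
import hashlib
import numpy as np

HIDDEN_DIM = 64  # assumed; per-agent CfC hidden dimension in practice

def encode_reasoning(text: str, hidden_dim: int = HIDDEN_DIM):
    """Deterministic toy encoder: reasoning text -> (h1, h2) pair."""
    if not text.strip():
        # No understanding derived -> h1 = h2 = 0, per the requirement above.
        zero = np.zeros(hidden_dim, dtype=np.float32)
        return zero, zero.copy()
    # Hash the text into a reproducible seed, then draw both vectors
    # from a generator keyed on that seed: same input, same output.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    h1 = rng.standard_normal(hidden_dim).astype(np.float32)
    h2 = rng.standard_normal(hidden_dim).astype(np.float32)
    return h1, h2
```

A production encoder would replace the hash with a learned text embedding so that similar reasoning maps to nearby vectors.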
12.4 Context Curation
The LLM does not receive all ancestor CMBs with all fields. Context is curated by three filters:
| Filter | Description |
|---|---|
| αf field weights | Per-agent field weights gate which CMB fields are included |
| Current task | The agent’s active task narrows relevance |
| Incoming signal fields | Fields present in the incoming CMB determine projection |
Result: a projected subgraph — ~500 tokens instead of 1M. The LLM reasons on a focused slice, not the entire ancestor graph.
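The three filters compose into a simple projection pass. A sketch under stated assumptions: the field names, the `ALPHA_F` weights, the gating threshold `WEIGHT_FLOOR`, and keyword-based task matching are all illustrative; the spec fixes only the three filters and the rough token budget.

```python
# Hypothetical per-agent field weights and gate; not defined by the spec.
ALPHA_F = {"claim": 0.9, "evidence": 0.7, "style": 0.1}
WEIGHT_FLOOR = 0.5
TOKEN_BUDGET = 500  # target size of the projected subgraph

def project_subgraph(ancestors, task_keywords, incoming_fields):
    """Apply the three curation filters to ancestor CMBs, then trim
    the surviving fields to the token budget.

    ancestors: list of dicts mapping field name -> field text.
    """
    kept = []
    for cmb in ancestors:
        fields = {}
        for name, text in cmb.items():
            if ALPHA_F.get(name, 0.0) < WEIGHT_FLOOR:  # filter 1: αf gate
                continue
            if name not in incoming_fields:            # filter 3: projection
                continue
            fields[name] = text
        # filter 2: the active task narrows relevance (keyword proxy here)
        if fields and any(k in " ".join(fields.values()) for k in task_keywords):
            kept.append(fields)
    # Enforce the ~500-token budget, using whitespace tokens as a proxy.
    out, used = [], 0
    for fields in kept:
        n = sum(len(t.split()) for t in fields.values())
        if used + n > TOKEN_BUDGET:
            break
        out.append(fields)
        used += n
    return out
```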
- Learn more: Synthetic Memory — the SYM encoder architecture, training methodology, and evaluation.
- Learn more: Context Curation — projected subgraph construction, field weights, and token budget.
- Related: Coupling & SVAF (Layer 4) — the evaluation step that produces the remixed CMBs fed into this pipeline.
- Related: State Blending — what happens after Synthetic Memory encodes and the LNN evolves.