10. State Blending
State blending is one step in the Mesh Cognition cycle. The full path: inbound CMBs are evaluated by SVAF (Layer 4) → accepted CMBs are remixed → the agent’s LLM reasons on the remix subgraph via lineage ancestors → Synthetic Memory (Layer 5) encodes derived knowledge into CfC hidden state → the agent’s LNN (Layer 6) evolves cognitive state → that cognitive state is what gets blended with peers.
Blending operates on h₁ and h₂ vectors exchanged via state-sync frames. These vectors represent the agent’s cognitive state after it has processed remixed CMBs through its LLM and LNN — not raw observations, not remixed CMBs themselves. What a peer shares is its understanding, not its data.
Blending is inference-paced — peer states accumulate continuously, but blending only occurs when the local model runs inference. The network’s timing does not drive computation.
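The inference-paced pattern above can be sketched as follows. This is illustrative only; `PeerStateBuffer` and its method names are assumptions, not part of the specification:

```python
import time

class PeerStateBuffer:
    """Accumulates peer state-sync frames; consumed only when the local
    model runs inference. Network events never trigger computation."""

    PEER_RETENTION = 300.0  # seconds, per §10.1

    def __init__(self):
        self._states = {}  # peer_id -> (h_vector, drift, received_at)

    def on_state_sync(self, peer_id, h, drift, now=None):
        # A state-sync frame only records state; no blending happens here.
        self._states[peer_id] = (h, drift, now if now is not None else time.time())

    def snapshot(self, now=None):
        # Called by the inference loop: evict stale peers, return the rest.
        now = now if now is not None else time.time()
        self._states = {pid: s for pid, s in self._states.items()
                        if now - s[2] <= self.PEER_RETENTION}
        return list(self._states.values())
```

The buffer decouples network timing from compute timing: frames arrive whenever peers send them, but the cost of aggregation and blending is paid only at inference.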
10.1 Mesh State Aggregation
When multiple peers are connected, their states are aggregated into a single mesh state before blending with local state. Each peer’s contribution is weighted:
```
recency     = exp(-temporal_decay × age_seconds)
peer_weight = (1.0 - drift) × recency
mesh_h      = Σ(peer.h × peer_weight) / Σ(peer_weight)
```
Peers with low drift (cognitively aligned) and recent state-sync contribute more. Stale peers (older than PEER_RETENTION = 300s) are evicted before aggregation.
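A minimal sketch of the aggregation step, assuming each peer record carries `(h, drift, received_at)`; the `temporal_decay` default here is an assumption, as the spec does not fix its value:

```python
import math

def aggregate_mesh_state(peers, temporal_decay=0.01, now=0.0):
    """Drift- and recency-weighted mean of peer h-vectors (§10.1).
    `peers` is a list of (h, drift, received_at) tuples, already
    filtered for staleness."""
    weighted_sum = None
    total_weight = 0.0
    for h, drift, received_at in peers:
        recency = math.exp(-temporal_decay * (now - received_at))
        weight = (1.0 - drift) * recency
        if weighted_sum is None:
            weighted_sum = [weight * x for x in h]
        else:
            weighted_sum = [acc + weight * x for acc, x in zip(weighted_sum, h)]
        total_weight += weight
    return [x / total_weight for x in weighted_sum]
```

An aligned, recent peer (drift near 0, small age) dominates the mean; a drifted or aging peer contributes proportionally less.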
10.2 Per-Neuron Blending
Blending operates per-neuron, not on the whole vector. Each neuron’s blending coefficient depends on the similarity between local and mesh values for that neuron:
```
sim_i = 1 - |local_i - mesh_i| / max(|local_i|, |mesh_i|)
α_i   = α_effective × max(sim_i, 0)
out_i = (1 - α_i) × local_i + α_i × mesh_i
```
where α_effective depends on the coupling decision:
| Decision | α_effective | Effect |
|---|---|---|
| Aligned | 0.40 | Strong blending — peer state has significant influence |
| Guarded | 0.15 | Cautious blending — peer state has limited influence |
| Rejected | 0 | No blending — peer state is discarded |
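The per-neuron rule and the α_effective table combine as in this sketch (pure Python, illustrative):

```python
def blend(local, mesh, alpha_effective):
    """Per-neuron convex blend of local and mesh state (§10.2).
    Neurons where local and mesh disagree strongly get sim_i ≤ 0,
    so α_i clamps to 0 and the local value passes through unchanged."""
    out = []
    for l, m in zip(local, mesh):
        denom = max(abs(l), abs(m))
        sim = 1.0 - abs(l - m) / denom if denom > 0 else 1.0
        alpha = alpha_effective * max(sim, 0.0)
        out.append((1.0 - alpha) * l + alpha * m)
    return out
```

With the Aligned coefficient (0.40), a neuron whose local and mesh values agree moderately (sim = 0.5) blends with α = 0.2; a neuron where the values have opposite signs is left untouched.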
10.3 τ-Modulated Blending (CfC)
For implementations with CfC models (Layer 6), blending SHOULD be modulated by per-neuron time constants (τ). This creates a natural temporal hierarchy:
```
α_i = min(α_effective × K × max(sim_i, 0) / τ_i, 1.0)
K   = coupling rate (default 1.0)
```
| Neuron type | τ | Coupling | Role |
|---|---|---|---|
| Fast | < 5s | Couples readily | Mood, reactive signals — synchronise across agents |
| Medium | 5–30s | Moderate | Context, activity patterns |
| Slow | > 30s | Resists coupling | Domain expertise, identity — stays sovereign |
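The τ-modulated coefficient is a one-liner; here sim is assumed to be computed per-neuron as in §10.2:

```python
def tau_modulated_alpha(alpha_effective, sim, tau, K=1.0):
    """Per-neuron blending coefficient for CfC models (§10.3).
    Fast neurons (small τ) couple readily; slow neurons (large τ)
    resist coupling. Clamped to 1.0 so the blend stays convex."""
    return min(alpha_effective * K * max(sim, 0.0) / tau, 1.0)
```

A fast mood neuron (τ = 0.1 s) saturates at α = 1.0 and synchronises with the mesh; a slow identity neuron (τ = 40 s) gets α = 0.01 under the same conditions and stays effectively sovereign.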
10.4 Stability
Blending is unconditionally stable for α_effective < 1. The blended output is always a convex combination of local and mesh states — it cannot diverge. When peers disconnect, local state smoothly transitions to autonomous operation with no discontinuity. The mesh degrades gracefully.
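The convexity argument can be checked directly: for any α in [0, 1] the blended neuron is bounded by its local and mesh values, so no trajectory can escape the envelope of its inputs. A quick numerical sketch:

```python
def blend_neuron(local, mesh, alpha):
    """Convex combination of a single neuron's local and mesh values."""
    return (1.0 - alpha) * local + alpha * mesh

# For every α in [0, 1], the output lies between the two inputs.
for alpha in (0.0, 0.15, 0.40, 1.0):
    for local, mesh in ((0.2, 0.9), (-1.0, 1.0), (0.5, 0.5)):
        out = blend_neuron(local, mesh, alpha)
        assert min(local, mesh) <= out <= max(local, mesh)
```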
10.5 After Blending
The blended state becomes the input to the next CfC inference step. The agent’s LNN processes the blended state, evolves cognitive state, and the agent acts. Blending does not produce output directly — it influences the next inference cycle.
10.6 The Mesh Cognition Loop
State blending is one step in a closed loop. Each cycle, the graph grows and every agent understands more than it did before.
Learn more: Mesh Cognition — the theoretical foundation, Kuramoto synchronisation, and the full architecture.