10. State Blending

State blending is one step in the Mesh Cognition cycle. The full path: inbound CMBs are evaluated by SVAF (Layer 4) → accepted CMBs are remixed → the agent’s LLM reasons on the remix subgraph via lineage ancestors → Synthetic Memory (Layer 5) encodes derived knowledge into CfC hidden state → the agent’s LNN (Layer 6) evolves cognitive state → that cognitive state is what gets blended with peers.

Blending operates on h₁ and h₂ vectors exchanged via state-sync frames. These vectors represent the agent’s cognitive state after it has processed remixed CMBs through its LLM and LNN — not raw observations, not remixed CMBs themselves. What a peer shares is its understanding, not its data.

Blending is inference-paced — peer states accumulate continuously, but blending only occurs when the local model runs inference. The network’s timing does not drive computation.

10.1 Mesh State Aggregation

When multiple peers are connected, their states are aggregated into a single mesh state before blending with local state. Each peer’s contribution is weighted:

peer_weight = (1.0 - drift) × recency

recency     = exp(-temporal_decay × age_seconds)

mesh_h      = Σ(peer.h × peer_weight) / Σ(peer_weight)

Peers with low drift (cognitively aligned) and recent state-sync contribute more. Stale peers (older than PEER_RETENTION = 300s) are evicted before aggregation.
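The aggregation rule above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: `TEMPORAL_DECAY` is an assumed value (the spec fixes only PEER_RETENTION = 300 s), and the peer record layout is hypothetical.

```python
import math

TEMPORAL_DECAY = 0.01   # assumed decay rate (1/s); not fixed by the spec
PEER_RETENTION = 300.0  # seconds, per the spec

def aggregate_mesh_state(peers, now):
    """Weighted average of peer hidden states.

    Each peer is a dict (hypothetical layout) with:
      h     : list[float]  -- the peer's hidden-state vector
      drift : float        -- cognitive drift in [0, 1]
      t     : float        -- timestamp of the peer's last state-sync
    """
    # Evict stale peers before aggregation.
    live = [p for p in peers if now - p["t"] <= PEER_RETENTION]
    if not live:
        return None  # no mesh state; agent runs autonomously

    dim = len(live[0]["h"])
    weighted = [0.0] * dim
    total = 0.0
    for p in live:
        recency = math.exp(-TEMPORAL_DECAY * (now - p["t"]))
        w = (1.0 - p["drift"]) * recency
        total += w
        for i, v in enumerate(p["h"]):
            weighted[i] += v * w
    return [v / total for v in weighted]
```

Note that a peer with drift 1.0 contributes nothing even if its state-sync is fresh: alignment and recency gate the mesh multiplicatively.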

10.2 Per-Neuron Blending

Blending operates per-neuron, not on the whole vector. Each neuron’s blending coefficient depends on the similarity between local and mesh values for that neuron:

sim_i  = 1 - |local_i - mesh_i| / max(|local_i|, |mesh_i|)
α_i    = α_effective × max(sim_i, 0)
out_i  = (1 - α_i) × local_i + α_i × mesh_i

Where α_effective depends on the coupling decision:

Decision    α_effective   Effect
Aligned     0.40          Strong blending — peer state has significant influence
Guarded     0.15          Cautious blending — peer state has limited influence
Rejected    0             No blending — peer state is discarded
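The similarity formula and the coupling table combine into one small blend function. A sketch, assuming the zero-denominator case (both values exactly 0) counts as perfect similarity — a detail the spec does not pin down:

```python
# α_effective per coupling decision, from the table above.
ALPHA = {"aligned": 0.40, "guarded": 0.15, "rejected": 0.0}

def blend(local, mesh, decision):
    """Per-neuron convex blend of local and mesh state vectors."""
    a_eff = ALPHA[decision]
    out = []
    for l, m in zip(local, mesh):
        denom = max(abs(l), abs(m))
        # sim_i = 1 - |local - mesh| / max(|local|, |mesh|)
        sim = 1.0 - abs(l - m) / denom if denom > 0 else 1.0
        a_i = a_eff * max(sim, 0.0)
        out.append((1.0 - a_i) * l + a_i * m)
    return out
```

Because α_i scales with similarity, neurons that already agree converge quickly, while neurons that disagree strongly (sim_i ≤ 0) are left untouched regardless of the coupling decision.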

10.3 τ-Modulated Blending (CfC)

For implementations with CfC models (Layer 6), blending SHOULD be modulated by per-neuron time constants (τ). This creates a natural temporal hierarchy:

α_i = min(α_effective × K × max(sim_i, 0) / τ_i, 1.0)

K   = coupling rate (default 1.0)
Neuron type   τ        Coupling           Role
Fast          < 5s     Couples readily    Mood, reactive signals — synchronise across agents
Medium        5–30s    Moderate           Context, activity patterns
Slow          > 30s    Resists coupling   Domain expertise, identity — stays sovereign
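A sketch of the τ-modulated variant, under the same assumptions as before (function and parameter names are illustrative):

```python
ALPHA = {"aligned": 0.40, "guarded": 0.15, "rejected": 0.0}

def blend_tau(local, mesh, tau, decision, k=1.0):
    """τ-modulated per-neuron blend: fast neurons couple, slow ones resist.

    tau : list[float] -- per-neuron time constants in seconds
    k   : coupling rate (default 1.0, per the spec)
    """
    a_eff = ALPHA[decision]
    out = []
    for l, m, t in zip(local, mesh, tau):
        denom = max(abs(l), abs(m))
        sim = 1.0 - abs(l - m) / denom if denom > 0 else 1.0
        # α_i = min(α_effective × K × max(sim_i, 0) / τ_i, 1.0)
        a_i = min(a_eff * k * max(sim, 0.0) / t, 1.0)
        out.append((1.0 - a_i) * l + a_i * m)
    return out
```

For identical inputs, a τ = 1 s neuron blends at the full α_effective while a τ = 100 s neuron blends at 1/100 of it — the temporal hierarchy falls directly out of the division by τ.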

10.4 Stability

Blending is unconditionally stable for α_effective < 1. The blended output is always a convex combination of local and mesh states — it cannot diverge. When peers disconnect, local state smoothly transitions to autonomous operation with no discontinuity. The mesh degrades gracefully.
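The convexity argument is easy to check numerically. A sketch (names are illustrative):

```python
def blend_neuron(local_i, mesh_i, alpha_i):
    # Convex combination for one neuron, alpha_i in [0, 1].
    return (1.0 - alpha_i) * local_i + alpha_i * mesh_i

def in_hull(local_i, mesh_i, alpha_i):
    # The output can never leave the interval spanned by the two inputs.
    out = blend_neuron(local_i, mesh_i, alpha_i)
    return min(local_i, mesh_i) <= out <= max(local_i, mesh_i)
```

With α_i = 0 (Rejected coupling, or no peers at all) the output is exactly the local state, which is the graceful-degradation behaviour described above.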

10.5 After Blending

The blended state becomes the input to the next CfC inference step. The agent’s LNN processes the blended state, evolves cognitive state, and the agent acts. Blending does not produce output directly — it influences the next inference cycle.

10.6 The Mesh Cognition Loop

State blending is one step in a closed loop. Each cycle, the graph grows and every agent understands more than it did before:

1. SVAF evaluates inbound CMB per field
2. Accepted → remixed CMB with lineage
3. LLM traces ancestors, reasons on remix subgraph
4. Synthetic Memory encodes derived knowledge
5. LNN evolves cognitive state (h₁, h₂)
6. State blended with peers
7. Agent acts → new CMB with lineage.ancestors
8. Broadcast to mesh → other agents remix it
↻ loop — graph grows, agents learn

Learn more: Mesh Cognition — the theoretical foundation, Kuramoto synchronisation, and the full architecture.