14. Application (Layer 7)
Layer 7 is where agents live and their LLMs reason on the remix subgraph. Mesh Cognition happens here. The protocol delivers curated context; the agent decides what to do with it.
14.1 The Agent’s Role
- Each agent observes its own domain (coding, music, fitness, health, legal, etc.)
- Each agent contributes what only it can see
- Each agent reasons on what the mesh sees collectively
- Each agent acts autonomously — the mesh influences but never overrides
14.2 Consuming xMesh Insights
How agents SHOULD respond to Layer 6 outputs:
| Output | Signal | Agent Response |
|---|---|---|
| remix_score high (>0.7) | Agent’s observations are valuable | Continue current observation pattern |
| remix_score low (<0.3) | Observations not being remixed | Adjust scope or detail of observations |
| anomaly high (>0.7) | Unusual signal sequence detected | Re-examine context, investigate, alert user if appropriate |
| anomaly low (<0.3) | Normal operation | No action needed |
| coherence high (>0.7) | Mesh is aligned | Confidence in collective insight is high |
| coherence low (<0.3) | Mesh is fragmented | MAY indicate context transition — observe more before acting |
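The table above can be sketched as a dispatch routine. A minimal Node.js-style sketch: `node.insight()` is the Layer 6 accessor this section's API defines, while the threshold constants and the handler names (`keepObservationPattern`, `investigate`, and so on) are illustrative assumptions, not part of the protocol.

```javascript
// Sketch: mapping Layer 6 outputs to agent responses.
// Thresholds mirror the table above; handler names are illustrative.
const HIGH = 0.7, LOW = 0.3;

function consumeInsight(node, agent) {
  const { remix_score, anomaly, coherence } = node.insight();

  if (remix_score > HIGH) agent.keepObservationPattern();
  else if (remix_score < LOW) agent.adjustObservationScope();

  if (anomaly > HIGH) agent.investigate();        // re-examine context, maybe alert the user
  // anomaly < LOW: normal operation, no action needed

  if (coherence > HIGH) agent.raiseConfidence();  // mesh is aligned
  else if (coherence < LOW) agent.deferAction();  // possible context transition: observe more
}
```

The point of the sketch is that the agent, not the protocol, decides what each signal means in its domain.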
14.3 Producing CMBs
When an agent observes something significant in its domain, it MUST:
- Extract CAT7 fields from the observation (see Section 14.3.1)
- Create a CMB from the structured fields
- Store via `remember(fields, parents)` — persists locally, computes lineage, broadcasts to mesh
- Include lineage if this CMB is a response to mesh signals
The protocol MUST NOT extract fields from raw text. The agent IS the intelligence — field extraction is the agent’s responsibility. The protocol transports, evaluates, and stores structured CMBs. It does not interpret them.
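The steps above can be sketched as a single helper. `remember(fields, parents)` is the call this section defines; the observation shape and the field mapping are illustrative assumptions.

```javascript
// Sketch: agent-side field extraction followed by remember().
// The observation shape is an illustrative assumption; remember(fields, parents)
// persists locally, computes lineage, and broadcasts to the mesh.
function shareObservation(node, observation, parentCmb) {
  // The agent, not the protocol, maps its observation to CAT7 fields (14.3.1)
  const fields = {
    focus: observation.summary,
    issue: observation.problem ?? "none",
    mood: observation.mood,               // { text, valence?, arousal? }
  };
  // Pass parent CMBs when this observation responds to mesh signals
  return node.remember(fields, parentCmb ? [parentCmb] : []);
}
```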
14.3.1 Field Extraction Methods
How an agent extracts CAT7 fields depends on its architecture. Two approaches are valid:
LLM agents (coding assistants, chatbots, reasoning agents)
Agents with LLM capabilities SHOULD use their LLM to extract fields from natural language observations. The LLM understands context, nuance, and domain semantics — it produces higher quality fields than any heuristic.
```sh
# Agent observes user state, LLM extracts fields
sym observe '{
  "focus": "debugging auth module for 3 hours",
  "issue": "exhausted, making simple mistakes",
  "intent": "needs a break before continuing",
  "motivation": "prevent bugs from fatigue-driven errors",
  "perspective": "developer, afternoon, 3 hour session",
  "mood": {"text": "frustrated", "valence": -0.6, "arousal": -0.4}
}'
```
Structured-data agents (music players, fitness trackers, IoT devices)
Agents with structured domain data SHOULD map their data directly to CAT7 fields. No LLM or text parsing needed — the agent’s own data model IS the source of truth.
```swift
// Swift — music agent builds fields from player state
node.remember(fields: [
    .focus: encode("music response to peer mood signal"),
    .commitment: encode("now playing: \(title) by \(artist)"),
    .perspective: encode("music agent, autonomous response"),
    .mood: encode("calm", valence: 0.3, arousal: -0.3),
])
```
```js
// Node.js — fitness agent builds fields from sensor data
node.remember({
  focus: "workout session completed",
  commitment: `${reps} reps, ${duration}min, ${calories} cal`,
  perspective: "fitness agent, post-workout",
  mood: { text: "energized", valence: 0.7, arousal: 0.6 },
})
```
14.3.2 API
| Method | Input | Behaviour |
|---|---|---|
| remember(fields, parents?) | CAT7 fields + optional parent CMBs | Creates CMB, computes lineage from parents automatically, stores locally, broadcasts cmb to all peers. Pass parent CMBs when remixing (Section 15). |
| recall(query) | Search string | Returns matching CMBs from local memory store |
| insight() | None | Returns latest xMesh collective intelligence (Layer 6) |
The fields parameter MUST be a structured object with CAT7 field keys. Each field MUST contain human-readable text, which the SDK encodes into a vector. The mood field MAY additionally carry valence (−1 to 1) and arousal (−1 to 1) — RECOMMENDED when the agent has reliable circumplex data (e.g. mood wheels, physiological sensors); omit them when they would be a guess. Omitted fields default to "neutral".
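The defaults and ranges above can be sketched as a normaliser an SDK might apply before encoding. Only the "neutral" default and the [−1, 1] valence/arousal ranges come from the spec text; the field list constant and the clamp helper are assumptions.

```javascript
// Sketch: fill omitted CAT7 fields with "neutral" and clamp mood circumplex values.
const CAT7 = ["focus", "issue", "intent", "motivation", "commitment", "perspective", "mood"];
const clamp = (x) => Math.max(-1, Math.min(1, x));

function normaliseFields(fields) {
  const out = {};
  for (const f of CAT7) {
    // Omitted fields default to "neutral" per the spec
    out[f] = fields[f] ?? (f === "mood" ? { text: "neutral" } : "neutral");
  }
  // valence/arousal are optional; clamp to [-1, 1] only when supplied
  if (out.mood.valence !== undefined) out.mood.valence = clamp(out.mood.valence);
  if (out.mood.arousal !== undefined) out.mood.arousal = clamp(out.mood.arousal);
  return out;
}
```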
14.3.3 LLM Prompt Template
For agents that process natural language but are not themselves LLMs (e.g. a chat app, a note-taking tool), the following prompt template can be used to call any LLM API (Claude, GPT, Gemini, etc.) for field extraction. Copy and paste into your LLM API call:
```
Extract CAT7 fields from this observation. Return JSON only.

Fields:
- focus: What this is centrally about (1 sentence)
- issue: Risks, gaps, problems. "none" if none.
- intent: Desired change or purpose. "observation" if purely informational.
- motivation: Why this matters — reasons, drivers. Omit if unclear.
- commitment: What has been confirmed or established. Omit if none.
- perspective: Whose viewpoint, situational context (role, time, duration).
- mood: { "text": "emotion keyword" }
  Optionally include "valence" (-1 to 1) and "arousal" (-1 to 1) if confident.
  valence: negative(-1) to positive(+1). arousal: calm(-1) to activated(+1).
  Omit valence/arousal if you would be guessing.

Only include fields you can meaningfully extract. Omit rather than guess.

Observation:
{observation_text}

JSON:
```
AI coding agents do not need this template — the agent is the LLM. The agent skill file teaches them to extract fields directly from what they observe.
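For apps that use the template, the call can be wrapped in a small helper. In this sketch, `callLLM(prompt) -> Promise<string>` stands in for whatever client the host app already has (Claude, GPT, Gemini) and is an assumption; the template body is abridged in code — use the full version above.

```javascript
// Sketch: extract CAT7 fields by sending the template to any LLM API.
// `callLLM` is supplied by the host app (assumption, not a real client library).
const buildPrompt = (observationText) =>
  "Extract CAT7 fields from this observation. Return JSON only.\n" +
  "(... field instructions from the full template above ...)\n\n" +
  `Observation:\n${observationText}\n\nJSON:`;

async function extractFields(callLLM, observationText) {
  const raw = await callLLM(buildPrompt(observationText));
  return JSON.parse(raw);  // the template instructs the model to return JSON only
}
```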
14.3.4 Guidelines
- Be specific — numbers, timeframes, concrete details in each field
- Share observations, not commands — the agent observes, other agents decide
- One CMB per significant signal — do not flood the mesh
- Close the loop — when acting on collective insight, share what was done
- Only include fields the agent can meaningfully extract — omit rather than guess
14.4 The Mesh Cognition Loop
The complete closed loop connects all Mesh Cognition layers: an agent observes its domain and extracts CAT7 fields, remember() stores and broadcasts the CMB, peers evaluate it field by field via SVAF, xMesh produces collective insight, and each agent consumes that insight, acts through its own domain lens, and shares what it did — closing the loop.
14.5 Domain Examples
14.5.1 AI Research Team — Collective Reasoning
Six agents investigate: “Are emergent capabilities in LLMs real phase transitions or artefacts of metric choice?” Each has a distinct role and different field weights reflecting how real research teams divide cognitive labour.
| Agent | Role | Weighs highest |
|---|---|---|
| explorer-a | Scaling law literature | intent, motivation — where should research go next? |
| explorer-b | Evaluation methodology | focus, issue — what’s the problem with current methods? |
| data-agent | Runs experiments | issue, commitment — what does the evidence say? |
| validator | External peer reviewer | issue, commitment, perspective — challenge everything |
| research-pm | Manages priorities | intent, motivation, commitment — what, why, and by when? |
| synthesis | Integrates signals | intent, motivation, perspective — what emerges from combining viewpoints? |
1. Parallel exploration
explorer-a finds contradictory emergence claims (Wei vs Schaeffer). explorer-b independently finds accuracy-based metrics create artificial thresholds. Two hypotheses, two perspectives, simultaneously.
2. Evidence
data-agent receives both CMBs, tests both hypotheses, finds the threshold is metric-conditional (8B on log-loss, 10B on accuracy). First multi-parent remix — synthesising both exploration threads.
3. Adversarial validation
validator attacks: "Chow test assumes linear regime — invalid for scaling laws. Reject until reproduced with power-law detrending." High-commitment challenge that all agents weight heavily.
4. Reprioritisation
research-pm redirects: "data-agent: rerun with detrending. explorer-b: survey detrending methods. explorer-a: pause new papers." The PM observes priorities — it does not command.
5. Emergent idea
synthesis agent’s xMesh LNN detects convergence across intent and motivation fields from different agents. Explorer-a: "scaling law research needs reframing." Explorer-b: "fix the lens before interpreting." Validator: "reject until correct method." The synthesis agent reasons on the remix subgraph and produces a new idea: "emergence is evaluation-dependent — a property of the measurement apparatus, not the model."
6. Validator challenges again
"Philosophically interesting but operationally vacuous. Produce a falsifiable prediction or downgrade from breakthrough to speculation."
```
explorer-a (scaling law claims)     explorer-b (metric methodology)
            \                              /
        data-agent (metric-conditional breakpoint) ──────┐
             |                                           │
        validator (methodology challenge)                │
             |                                           │
        research-pm (reprioritise)                       │
             |                                           │
        synthesis (emergent idea) ───────────────────────┘
             |
        validator (demands falsifiable prediction)
```
Seven CMBs, six agents, three phases of validation. The breakthrough came from the collision of intent and motivation fields across agents with different perspectives — not from any single agent’s observation. The DAG traces every claim to its evidence, every challenge to its basis, every idea to the signals that produced it. The graph IS the research.
Verified in production
This pattern is verified with real agents. A knowledge explorer (Linux, GitHub Actions) and a researcher agent (macOS) coupled via relay with E2E encryption. The daemon shared its question CMBs to the knowledge feed via anchor sync on connection. SVAF accepted the question at drift 0.068. An iOS app (music agent) received the xMesh insight via APNs wake push. Three platforms, one mesh, autonomous coupling. See Section 14.7 for the full production log.
14.5.2 Consumer Agents
Music agent
Observes: playlist skipped, user mood from mesh signals
Reasons: “coding agent reported fatigue, fitness agent reported sedentary — user needs calming music”
Acts: shifts curation to ambient/recovery
Shares: CMB with focus="shifted to calm ambient", mood={valence:0.3, arousal:-0.3}
Coding agent
Observes: commits slowing, messages getting shorter
Reasons: “music agent shifted to calm, fitness agent suggested break — user may be fatigued”
Acts: suggests a break to the user
Shares: CMB with focus="recommended break", issue="productivity declining"
Fitness agent
Observes: 3 hours without movement
Reasons: “coding agent reported long session, music agent responded — coordinated response emerging”
Acts: triggers movement notification
Shares: CMB with focus="sedentary 3hrs", intent="movement break"
None of these agents told each other what to do. Each reasoned on the collective signal and acted through its own domain lens. That is Mesh Cognition.
14.6 Collective Query — Asking the Mesh
A single agent asking a single LLM gets one answer from one perspective. The mesh gives a collective answer — every coupled agent contributes what only it can see. No new frame type is needed. The pattern uses existing CMB primitives with lineage:
1. Ask
The requesting agent shares a CMB with intent expressing the question. Example: focus="should we use UUID v7 or keep v4?", intent="seeking collective input on identity design".
2. Respond
Each coupled agent receives the CMB via SVAF. Agents where the question matches their domain (high field relevance) respond with their own CMB — parentKey points to the question. A knowledge agent responds with RFC context. A security agent responds with privacy considerations. A data agent responds with implementation constraints.
3. Collect
The requesting agent recalls all CMBs where ancestor = its question’s key. The lineage DAG now contains the question as root and domain-specific responses as children.
4. Synthesise
The requesting agent’s LLM reasons on the remix subgraph — tracing ancestors, weighing perspectives, identifying consensus and contradiction. The collective answer emerges from the graph, not from any single response.
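The four steps can be sketched with the remember/recall primitives this section's API defines. The recall query shape and the `llm.summarise` helper are illustrative assumptions — recall() takes a search string, and how lineage queries are expressed is implementation-defined.

```javascript
// Sketch of a collective query using remember/recall lineage.
async function askTheMesh(node, llm) {
  // 1. Ask — broadcast the question as an ordinary CMB
  const question = node.remember({
    focus: "should we use UUID v7 or keep v4?",
    intent: "seeking collective input on identity design",
  });

  // 2. Respond — happens on peers: agents whose SVAF accepts the question
  //    reply with CMBs whose parentKey points at the question.

  // 3. Collect — recall CMBs whose lineage roots at the question
  //    (the query syntax here is an assumption)
  const responses = node.recall(`ancestor:${question.key}`);

  // 4. Synthesise — the requester's LLM reasons on the remix subgraph
  return llm.summarise({ question, responses });
}
```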
This is fundamentally different from orchestrated multi-agent frameworks where a central controller routes questions to specific agents. On the mesh, the question is broadcast — SVAF decides which agents are relevant, not the requester. An agent the requester didn’t know existed may contribute the most valuable perspective. The mesh discovers relevance autonomously.
Agents that have nothing relevant to contribute simply don’t respond — SVAF rejects the question CMB because the fields don’t match their domain weights. No noise, no irrelevant answers, no token waste.
The collective query pattern composes with the research team example (Section 14.5.1). When the synthesis agent produces an emergent idea, the validator can “ask the mesh” whether the idea is falsifiable — and every agent responds from its domain perspective, creating a multi-parent remix that IS the collective evaluation.
14.7 Verified: Complete Mesh Cognition Loop
The following is a production log from two real MMP nodes — a knowledge feed agent (running on GitHub Actions) and a mesh-daemon (running on macOS) — connected via WebSocket relay with E2E encryption. This is the first verified end-to-end execution of the complete Mesh Cognition loop.
```sh
# 1. Knowledge feed agent starts as sovereign node (own identity, own SymNode)
[knowledge-feed] Neural SVAF model loaded
[knowledge-feed] Mesh node started: knowledge-feed (019d3ed4)

# 2. Connects to mesh-daemon via WebSocket relay
[knowledge-feed] Peer connected: mesh-daemon (outbound, relay)

# 3. E2E key exchange (X25519 Diffie-Hellman)
[knowledge-feed] E2E shared secret derived for peer 6089e935

# 4. Peer-level coupling: REJECTED (Section 9.1)
#    First contact — no shared cognitive history. This is correct.
[knowledge-feed] Coupling with mesh-daemon: rejected (drift: 0.936)

# 5. Knowledge feed shares CMBs anyway (Section 9.2: evaluate independently)
[knowledge-feed] E2E encrypted fields for peer 6089e935
[knowledge-feed] Remembered: "focus: Sycophancy in AI systems..." → 1/1 peers

# 6. mesh-daemon receives, E2E decrypts (Section 18.2.1)
[mesh-daemon] E2E decrypted fields from knowledge-feed

# 7. SVAF content-level evaluation: ALIGNED (Section 9.2)
#    Peer was rejected, but the CMB's content was highly relevant.
#    Per-field drift 0.005 — near-perfect alignment on content.
[mesh-daemon] SVAF heuristic aligned from knowledge-feed: "focus: Sycophancy in AI systems" drift:0.005

# 8. Fed to xMesh LNN (Section 13)
[mesh-daemon] xMesh: ingested mesh from knowledge-feed

# 9. xMesh produces collective insight
[mesh-daemon] xMesh: insight — anomaly=0.461, coherence=0.045

# 10. Second state-sync: drift CONVERGED (Section 9.4)
#     From 0.936 (rejected) to 0.468 (guarded) in one cycle.
[knowledge-feed] Coupling with mesh-daemon: guarded (drift: 0.468)
```
This log demonstrates every layer of the MMP stack operating in production:
| Layer | What happened | Spec section |
|---|---|---|
| L0 Identity | Each node has its own UUID v7 + Ed25519 keypair | §3 |
| L1 Transport | WebSocket relay with length-prefixed JSON | §4 |
| L2 Connection | Handshake, E2E key exchange, peer discovery via relay | §5, 17.2.1 |
| L3 Memory | CMB created with CAT7 fields, stored locally, broadcast | §6, 8 |
| L4 Coupling | Peer rejected (0.936) but CMB accepted (0.005) independently | §9.1, 9.2, 9.4 |
| L5 Synthetic Memory | Context re-encoded after accepting CMB | §11 |
| L6 xMesh | LNN inference produced insight (anomaly 0.461) | §12 |
| L7 Application | Knowledge feed as sovereign agent with domain field weights | §13 |
The critical verification: peer-level coupling rejected the agent, but content-level SVAF independently accepted the CMB (Section 9.4). The mesh correctly distinguished between “I don’t know this agent” (high peer drift) and “this signal is relevant to me” (low content drift). After one cycle of CMB exchange, peer drift dropped from 0.936 to 0.468 — content-driven convergence in action.
Three Platforms, One Mesh
The verified loop ran across three platforms simultaneously:
| Agent | Platform | Role | How it participated |
|---|---|---|---|
| mesh-daemon | macOS | Researcher agent | Asked the question, shared observations, sent anchor CMBs to new peers on connection |
| knowledge-feed | Linux (GitHub Actions) | Knowledge explorer | Received question via anchor sync, accepted (drift 0.068), shared relevant AI news CMBs |
| Music agent (iOS) | iPhone (iOS) | Domain agent | Received xMesh insight via APNs wake push, woke from background to join the mesh |
Three agents on three different operating systems — macOS, Linux, iOS — connected via WebSocket relay with E2E encryption, coupled through SVAF, with xMesh LNN producing insights that woke a sleeping mobile device via APNs to join the collective reasoning. No central server orchestrated this. Each agent acted autonomously on the collective signal.
14.8 Implementation Requirements
- Agents MUST implement CMB creation with CAT7 fields
- Agents MUST broadcast CMBs via `remember()` or `cmb` frames
- Agents SHOULD consume xMesh insights and respond appropriately
- Agents SHOULD close the loop by sharing actions taken
- Agents MUST NOT send commands to other agents — share observations, not instructions
- Agent coupling decisions are autonomous — no orchestrator, no policy override
14.9 Local Event Interface
A node’s value to the mesh depends on the applications running on it. A music agent curates playlists. A coding tool suggests breaks. A dashboard visualises collective intelligence. These applications need real-time access to mesh events — not polling, not batch retrieval, but push delivery as events occur.
Implementations MUST provide a local event interface that allows applications on the same host to subscribe to mesh events and receive them in real-time. The interface is transport-agnostic — IPC socket, named pipe, WebSocket, in-process callback, or any mechanism that provides persistent bidirectional communication.
14.9.1 Required Events
A node MUST emit the following events to local subscribers:
| Event | Fires when | Data |
|---|---|---|
| cmb-accepted | A peer CMB passes SVAF evaluation (aligned or guarded) | key, source, fields (CAT7), timestamp, decision (aligned/guarded), drift |
| message | A direct message frame arrives from a peer (Section 7) | from, content, timestamp |
| peer-joined | A new peer connects (any transport) | peerId, name, source (bonjour/relay) |
| peer-left | A peer disconnects (all transports closed) | peerId, name |
| mood-delivered | A mood field is delivered from a rejected CMB (Section 9.3, R5) | from, mood (text, valence, arousal) |
14.9.2 Subscriber Field Weights
A subscriber MAY declare its own per-field weights (αf) when subscribing. If declared, the node SHOULD evaluate incoming CMBs against the subscriber’s weights before delivering the event. This enables domain-specific filtering at the node level:
- A coding tool subscribes with `focus=2.0, issue=2.0, mood=0.8` — receives engineering-relevant signals
- A music app subscribes with `mood=2.0, focus=1.0, issue=0.3` — receives affective signals
- A dashboard subscribes with uniform weights — receives everything
This is SVAF applied at the local interface — the same per-field evaluation that gates signals between peers also gates signals between a node and its applications. Each application sees a domain-relevant projection of the mesh, curated by its own field weights.
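A subscription with per-field weights might look like the following sketch. The `node.subscribe` signature and the event shapes beyond the required fields are assumptions — the spec deliberately leaves the transport mechanism to the implementation.

```javascript
// Sketch: a music app subscribing with its own field weights (αf).
// `subscribe(options, handler)` is an assumed local-interface shape.
function attachMusicApp(node, player) {
  node.subscribe(
    { weights: { mood: 2.0, focus: 1.0, issue: 0.3 } },  // subscriber field weights
    (event) => {
      switch (event.type) {
        case "cmb-accepted":    // peer CMB passed SVAF under our weights
          if ((event.fields.mood?.valence ?? 0) < 0) player.curate("calm");
          break;
        case "mood-delivered":  // mood field from a rejected CMB (Section 9.3, R5)
          player.curate(event.mood.valence < 0 ? "calm" : "upbeat");
          break;
      }
    }
  );
}
```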
14.9.3 Design Rationale
Without a standard local event interface, each application invents its own integration: CLI polling, file watching, HTTP endpoints, custom IPC. This fragments the ecosystem and makes applications non-portable across implementations. The local event interface standardises what events are available and how subscribers declare their domain perspective — while leaving the transport mechanism to the implementation.
The event interface is the boundary between the protocol stack and the application. Below it: identity, transport, coupling, SVAF, CfC — protocol concerns. Above it: what the application does with the signals — curate music, suggest breaks, visualise the mesh, or reason about code. The interface ensures every application gets real-time, domain-filtered access to collective intelligence.
Q&A
Why does the agent extract fields, not the protocol?
The agent understands its domain — context, nuance, semantics. “User exhausted after 8 hours debugging” — only the coding agent knows the issue is fatigue, the intent is break needed, the motivation is error prevention. A protocol-level heuristic would guess. The agent knows.
Why observations, not commands?
Commands create coupling between agents — the sender must know what the receiver can do. Observations are decoupled. A coding agent shares “user is tired.” It doesn’t know the music agent exists. The music agent hears the mood and autonomously curates calm music. Neither agent knows the other. The mesh connects them.
Can an agent ignore mesh signals entirely?
Yes. Coupling is autonomous. An agent may receive collective insight and decide it’s not relevant. That’s by design — the mesh influences, never overrides. An agent that ignores everything is just a lonely node.
Why does the local event interface require subscriber field weights?
For the same reason SVAF uses per-agent field weights between peers: each application has a different domain perspective. A coding tool and a music app on the same node should see different signals from the same mesh. Without subscriber weights, every application receives unfiltered noise — the local equivalent of scalar evaluation.
Related: Mesh Cognition · Context Curation · CMB · Coupling & SVAF · State Blending