How My AI Agents Earn the Right to Make Decisions
Hongwei Xu · Founder, SYM.BOT
I run a one-person AI startup with 4 autonomous agents. They monitor arXiv papers, track GitHub health, draft marketing content, and manage product priorities. They produce real decisions that affect my company every day.
But here’s the question nobody in multi-agent AI is asking: who decides which agent can validate a decision?
In every multi-agent framework I’ve looked at — CrewAI, AutoGen, LangGraph, OpenAI Swarm — the answer is: any agent can do anything. Flat access. No earned authority. No trust progression. If you wire an agent to approve expenses, it approves expenses from day one.
That’s not how trust works. Not between humans, and not between AI agents.
The Problem
When I built my mesh of AI agents, I hit a real issue. My COO agent flagged a decision: “Submit IETF draft today — highest-leverage action.” I saw it on my dashboard and dismissed it (I had other priorities). But on the next poll cycle, the same decision showed up again. And again.
Why? Because “dismiss” was a local UI filter. It hid the decision from my screen, but no agent on the mesh knew I’d dismissed it. So the COO, doing its job, surfaced it again.
The naive fix: make dismiss broadcast to the mesh. Easy. But then I realised: if dismiss is a mesh broadcast, any agent could dismiss any decision. My marketing agent could dismiss a COO decision. A compromised agent could dismiss everything.
This is an access control problem. At the protocol layer.
Validation Authority in the Mesh Memory Protocol
In MMP (the protocol my agents use to communicate), every observation, decision, and action is a Cognitive Memory Block (CMB) — a structured signal with 7 semantic fields. CMBs have a lifecycle:
observed → remixed → validated → canonical → archived
The critical transition is validated. When I act on a decision — approve it, dismiss it, complete it — my action enters the mesh as a new CMB with lineage pointing to the original. This advances the original CMB’s lifecycle to “validated.” Other agents see it and stop re-surfacing it.
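The lifecycle above is a strict forward progression, which can be sketched as a small state machine. This is an illustrative sketch, not the MMP SDK — the class and function names are mine; only the five state names come from the protocol:

```python
from enum import Enum

class Lifecycle(str, Enum):
    OBSERVED = "observed"
    REMIXED = "remixed"
    VALIDATED = "validated"
    CANONICAL = "canonical"
    ARCHIVED = "archived"

# Allowed forward transitions; the lifecycle never moves backward.
TRANSITIONS = {
    Lifecycle.OBSERVED: Lifecycle.REMIXED,
    Lifecycle.REMIXED: Lifecycle.VALIDATED,
    Lifecycle.VALIDATED: Lifecycle.CANONICAL,
    Lifecycle.CANONICAL: Lifecycle.ARCHIVED,
}

def advance(state: Lifecycle) -> Lifecycle:
    """Advance a CMB one step along its lifecycle."""
    if state not in TRANSITIONS:
        raise ValueError(f"{state.value} is terminal")
    return TRANSITIONS[state]
```

The point of modeling it this way: “validated” isn’t a flag an agent sets, it’s a transition the mesh either accepts or refuses.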
But who should be allowed to perform this transition?
Identity-Bound, Not Content-Based
My first implementation checked the text content: if the CMB’s perspective field contained “founder,” it counted as validation. This is the approach every tutorial would suggest.
It’s also completely broken. Any agent can put “founder” in a text field. There’s no security in string matching.
The protocol-level fix: validation authority is bound to cryptographic node identity. Every node on my mesh has an Ed25519 keypair. When a node produces a CMB, the createdBy field is tied to that identity. You can’t fake it without the private key.
Each node declares a lifecycleRole in its handshake:
- observer (default) — can produce CMBs, can remix. Cannot validate.
- validator — can advance CMBs to “validated.” Can approve or dismiss decisions.
- anchor — can canonize CMBs. Permanent collective knowledge.
My dashboard node is a validator. My four agents are observers. When I hit “Done” or “Dismiss,” the CMB comes from a validator node — the mesh accepts the lifecycle transition. If an agent tried the same thing, the mesh would store the CMB as a normal remix but would not advance the parent’s lifecycle.
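The enforcement rule is simple to state in code. A minimal sketch, assuming the mesh already knows each sender’s declared role from its handshake (the role names come from the protocol; everything else is mine):

```python
# Role ordering: each role includes the powers of the ones below it.
ROLES = {"observer": 0, "validator": 1, "anchor": 2}

def apply_transition(parent: dict, actor_role: str, target_state: str) -> bool:
    """Advance the parent CMB only if the actor's role permits it.

    Returns True if the lifecycle advanced. On False, the incoming
    CMB would still be stored as an ordinary remix, but the parent
    is untouched -- so the decision keeps re-surfacing.
    """
    required = {"validated": "validator", "canonical": "anchor"}.get(target_state)
    if required is None:
        return False  # observed/remixed need no special authority
    if ROLES[actor_role] >= ROLES[required]:
        parent["lifecycle"] = target_state
        return True
    return False

parent = {"id": "cmb-123", "lifecycle": "remixed"}
apply_transition(parent, "observer", "validated")   # rejected: parent unchanged
apply_transition(parent, "validator", "validated")  # accepted: parent advances
```

Note the design choice: a rejected transition isn’t an error. The observer’s CMB is still valuable mesh data; it just doesn’t carry authority.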
Earned Authority
Here’s where it gets interesting. Lifecycle roles aren’t static. They’re earned.
An observer agent gets promoted to validator when:
- It has produced CMBs that other agents actually remix (demonstrated quality)
- An existing validator grants the promotion via a signed role-grant frame
This isn’t configuration. It’s a protocol frame — signed by the granting node’s private key, verified by every peer. No central authority. No admin panel. The trust chain is auditable: you can trace every validator back to the original bootstrap node.
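Auditing that trust chain is then just a walk over verified grant frames. A sketch, with signature checks elided and the data shapes assumed (grants as a grantee-to-granter map; none of this is the MMP v0.2.2 wire format):

```python
def trace_to_bootstrap(node_id: str, grants: dict, bootstrap: str) -> list:
    """Follow role-grant frames back to the bootstrap node.

    grants maps grantee -> granter, as recovered from signature-
    verified role-grant frames. Raises if the chain is broken or
    cyclic, i.e. the node's authority is not auditable.
    """
    chain = [node_id]
    seen = {node_id}
    while chain[-1] != bootstrap:
        granter = grants.get(chain[-1])
        if granter is None or granter in seen:
            raise ValueError(f"no auditable path from {node_id} to bootstrap")
        chain.append(granter)
        seen.add(granter)
    return chain

grants = {"coo-agent": "dashboard", "dashboard": "bootstrap"}
trace_to_bootstrap("coo-agent", grants, "bootstrap")
# -> ["coo-agent", "dashboard", "bootstrap"]
```

Every validator’s authority reduces to a chain of signatures ending at the bootstrap node; a node that can’t produce that chain simply has no authority.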
The progression is monotonically upward: observer → validator → anchor. If a validator is compromised, it generates a fresh identity and starts over as an observer. No demotion — just reset.
What This Means for Multi-Agent Systems
Today, I’m the only validator on my mesh. Every decision requires my judgment. That’s correct for a one-person startup in its first months.
But imagine six months from now. My COO agent has produced hundreds of decisions. I’ve approved 90% of them without modification. Its remixes have been cited by research, marketing, and product agents. It has demonstrated judgment.
At that point, I can promote it to validator. Now it can approve routine operational decisions without waiting for me. I still see everything — I’m an anchor, I can canonize — but the bottleneck of “founder must click Done on every ticket” is gone.
This is how a one-person company scales to autonomous operations without losing control. Not by trusting agents blindly from day one, but by letting them earn trust through demonstrated competence.
The Bigger Picture
Every multi-agent framework today treats agents as equal peers or as orchestrated workers. Neither is right.
Equal peers means no access control — any agent can validate, dismiss, or override any other agent’s work. This breaks in production immediately.
Orchestrated workers means a central controller decides everything — which defeats the purpose of autonomous agents.
Earned authority is the third option. Agents start as observers. They prove themselves through quality contributions that other agents find valuable enough to build upon. They earn validation rights through demonstrated competence, granted by existing validators, verified cryptographically.
No central authority. No flat access. Just trust, earned and auditable.
This is part of the Mesh Memory Protocol (MMP v0.2.2, Section 3.5), the open mesh protocol for collective intelligence. The protocol, the SDK, and 4 agents running my actual company are all open source at sym.bot.