Dev Lane
Collaborative Learning
Multiple agents, each building its own world model. Nervous Machine lets them learn from each other without a central orchestrator holding everything together, and chat is how humans step in to resolve what agents can't agree on.
Agents do
Build independently. Each agent accumulates its own world model. Handoffs carry context forward. Divergences surface naturally.
↔
Human does
Resolves divergence. When agents disagree, a human makes the call via chat. Resolution broadcasts to all agents. Everyone benefits.
Overview
In multi-agent systems, context loss happens at handoff. Agent A does deep research; Agent B starts fresh with none of it. A new agent joins the project three weeks in and has to be briefed from scratch. Nervous Machine solves both: agents write what they learn, read what others have learned, and a new agent can be seeded from the resolved knowledge of the agents that came before it.
Three coordination patterns cover most cases:
- Handoff – one agent leaves structured context for the next
- Compare – surface where two agents' world models diverge; human resolves
- Seed new agent – bootstrap a new agent from resolved context of existing agents
Autonomous Mode
Each agent runs its own system prompt with a distinct agent_id. Agents build independently, leave handoffs, and stay namespaced so you can always trace which agent contributed what.
System prompt – any agent in a multi-agent system
## Nervous Machine – Multi-Agent Context
You have access to Nervous Machine tools.
User ID: {USER_ID}
Your agent ID: {AGENT_ID}
SESSION START:
1. Call get_pod_summary for this user.
2. Call get_agent_context for your agent_id to load your
own task history.
3. Check for handoffs from other agents:
call get_agent_context for any upstream agent_ids
with context_type: task_history.
DURING THE SESSION:
- Save what you learn as events on the shared world model.
- When you make a significant decision, call save_agent_context
with context_type: decision so other agents can trace it.
- If you encounter something that contradicts another agent's
  saved context, note it – don't silently overwrite it.
Flag it for human review.
SESSION END:
1. Save a handoff via save_agent_context, context_type: task_history.
Include: what you did, what you found, what you'd recommend next,
and what you're handing off to.
2. Save any new events observed.
3. Call update_session with interactions=1 – exactly once.
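The session lifecycle in the prompt above can be sketched as a single wrapper. This is a hypothetical helper, not part of the API: it assumes `call_pod` is your Nervous Machine client (passed in so the flow can be tested against a stub) and that the agent's actual task is an async callable handed in as `work`.

```python
import asyncio

async def run_session(call_pod, user_id, agent_id, upstream_ids, work):
    """Wrap one agent session in the start/end protocol from the prompt."""
    # SESSION START: load the shared summary and this agent's own history.
    await call_pod("get_pod_summary", {"user_id": user_id})
    await call_pod("get_agent_context", {
        "user_id": user_id, "agent_id": agent_id,
        "context_type": "task_history",
    })
    # Check for handoffs from upstream agents.
    for upstream in upstream_ids:
        await call_pod("get_agent_context", {
            "user_id": user_id, "agent_id": upstream,
            "context_type": "task_history",
        })

    result = await work()  # the agent's actual task

    # SESSION END: save the handoff, then count the session exactly once.
    await call_pod("save_agent_context", {
        "user_id": user_id, "agent_id": agent_id,
        "context_type": "task_history", "data": result,
    })
    await call_pod("update_session", {"user_id": user_id, "interactions": 1})
    return result
```

Keeping `update_session` as the final step of the wrapper is what guarantees the "exactly once" rule: no code path can reach it twice.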
Controlled Mode
Use controlled saves for handoffs, comparisons, and seeding β the moments where precision matters most.
Agent Handoff
When one agent completes a phase and another picks it up, the handoff is a structured save. The next agent reads it before starting β no re-briefing, no context loss.
Agent A – save handoff
await call_pod("save_agent_context", {
"user_id": user_id,
"agent_id": "research-agent",
"context_type": "task_history",
"data": {
    "task": "Security audit – auth module",
"status": "complete",
"findings": [
"JWT secret hardcoded in config.go:42",
"No rate limiting on /login",
"Session tokens don't expire"
],
"severity_order": ["JWT secret", "session expiry", "rate limiting"],
"files_examined": ["auth/", "middleware/auth.go"],
"handed_off_to": "fix-agent",
    "recommended_first": "JWT secret – highest severity, easiest fix"
}
})
Agent B – read handoff at session start
handoff = await call_pod("get_agent_context", {
"user_id": user_id,
"agent_id": "research-agent", # read from the agent that handed off
"context_type": "task_history"
})
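Agent B can then pick its starting point straight from the payload. The helper below is a sketch, not part of the API: it assumes the handoff `data` dict carries the `recommended_first` and `severity_order` fields shown above, and falls back gracefully when they're absent.

```python
def first_task(handoff_data):
    """Pick the first task from a handoff payload.

    Prefers the explicit 'recommended_first' field; falls back to the
    top of 'severity_order'; returns None if neither is present.
    """
    if handoff_data.get("recommended_first"):
        return handoff_data["recommended_first"]
    order = handoff_data.get("severity_order", [])
    return order[0] if order else None
```

Reading the recommendation from the handoff, rather than re-deriving priorities, is the point: Agent A's judgment carries forward instead of being recomputed from scratch.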
Compare & Diverge
When two agents build separate world models of the same environment, compare_pods finds where they agree, where they diverge, and what each knows that the other doesn't. Divergences are signals: they mark human decisions waiting to happen.
Compare two agent world models
comparison = await call_pod("compare_pods", {
"pod_a": "research-agent-pod",
"pod_b": "fix-agent-pod",
"signal_type": "claim" # focus on architectural claims
})
# comparison.agreements → shared keys, similar values (agents agree)
# comparison.divergences → shared keys, different values (needs resolution)
# comparison.gaps → keys unique to each world model (knowledge gaps)
…
Divergences are not errors. When two agents disagree, that's a human decision waiting to be made. Surface divergences in your UI, let the user resolve them in chat, then broadcast the resolution. All agents benefit from one human decision.
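One way to surface divergences in chat is to render them as a short review prompt. This formatter is a sketch under assumptions: it presumes each divergence entry carries `key`, `value_a`, and `value_b` fields, which may differ from the actual `compare_pods` response shape.

```python
def divergence_prompt(pod_a, pod_b, divergences):
    """Render divergences as a question list for human review in chat."""
    if not divergences:
        return f"{pod_a} and {pod_b} agree on everything compared."
    lines = [f"{pod_a} and {pod_b} diverge on {len(divergences)} point(s):"]
    for d in divergences:
        lines.append(
            f"- {d['key']}: {pod_a} says {d['value_a']!r}, "
            f"{pod_b} says {d['value_b']!r}. Which is right?"
        )
    return "\n".join(lines)
```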
Broadcast Resolution
When a human resolves a divergence via chat, broadcast the winning context to all affected agents. Every agent gets the corrected world model β no agent has to rediscover it.
Human resolved: use connection pooling – broadcast to all agents
agent_pods = ["research-agent-pod", "fix-agent-pod", "review-agent-pod"]
for pod_id in agent_pods:
await call_pod("save_event", {
"user_id": pod_id,
"key": "connection-pooling-policy",
"signal_type": "claim",
"value": 1.0,
      "certainty": 0.95,  # high – human confirmed
"gloss": "Use connection pooling in all new services. Human-confirmed 2025-11.",
"meta": {
"resolved_by": "human",
"date": "2025-11-15",
"supersedes": "prior divergence between research and fix agents"
}
})
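The loop above can be wrapped in a reusable helper. This is a hypothetical function, not part of the API: it stamps the event as human-resolved before saving, so downstream agents can tell arbitrated knowledge apart from their own observations.

```python
import asyncio

async def broadcast_resolution(call_pod, pod_ids, event):
    """Save one human-resolved event into every listed agent pod."""
    # Stamp the event so its provenance survives in every pod's meta.
    stamped = {**event, "meta": {**event.get("meta", {}), "resolved_by": "human"}}
    updated = []
    for pod_id in pod_ids:
        await call_pod("save_event", {"user_id": pod_id, **stamped})
        updated.append(pod_id)
    return updated
```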
Seed a New Agent
A new agent joining an existing project would normally start from zero: weeks of context lost. Instead, seed it from the resolved knowledge of the agents that came before. It starts smart.
Create and seed a new agent from existing agents
# 1. Create the new world model
await call_pod("create_pod", {
"user_id": "devops-agent-pod",
"name": "DevOps Agent",
"pod_type": "intelligence"
})
# 2. Load resolved context from existing agents
research_history = await call_pod("get_agent_context", {
"user_id": "research-agent-pod",
"context_type": "task_history"
})
decisions = await call_pod("get_agent_context", {
"user_id": "fix-agent-pod",
"context_type": "decision"
})
# 3. Seed the new agent with resolved knowledge
for decision in decisions["decisions"]:
await call_pod("save_event", {
"user_id": "devops-agent-pod",
"key": decision["key"],
"signal_type": "claim",
"value": decision["value"],
        "certainty": 0.7,  # inherit with moderate certainty – let it validate
"gloss": f"Inherited from prior agents: {decision['gloss']}"
})
ℹ️
Inherit with moderate certainty. Set inherited knowledge at ~0.7, not 0.95. The new agent should validate what it received β not treat inherited context as ground truth. Let it earn high certainty through its own observations.
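When the new agent later confirms an inherited claim through its own observations, it can promote the certainty. A hypothetical helper, capping below the human-confirmed tier:

```python
def promote_certainty(current, step=0.1, cap=0.9):
    """Raise certainty after the agent validates an inherited claim.

    Caps at 0.9 so only a human-confirmed resolution reaches the
    highest certainty tier (0.95, as used for broadcast resolutions).
    """
    return min(round(current + step, 2), cap)
```

The asymmetry is deliberate: agents can earn certainty through observation, but the top of the scale is reserved for human arbitration.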
Human Alignment
In multi-agent systems, divergence is inevitable. Agents trained differently, observing different parts of the environment, running on different models: they'll build different world models. The human role is to arbitrate. Chat is the interface for all of it.
Pulling cross-agent context in one session
Pull summaries for my three agent world models: research-agent-pod, fix-agent-pod, and devops-agent-pod. What does each one know about the deployment strategy? Where do they disagree?
Resolving a divergence
The research and fix agents disagree on whether to use connection pooling. The research agent is right. Please update all three world models with the confirmed decision and note that it was human-resolved.
Auditing an agent's decision trail
Show me all the decisions the fix agent has made and saved. I want to review its reasoning before we hand off to the DevOps agent.
Checking curiosity across all agents
What are the top curiosity triggers across all three agent world models? What does the system collectively not know yet?
🤝
One human decision, all agents benefit. When you resolve a divergence in chat, broadcast it. The institutional knowledge you provide in one conversation propagates to every agent that needs it. That's the compounding effect of the alignment loop.