🧠 Dev Lane — Foundation

Agent World Model

The agent is the primary author of its own brain. You're the editor. This doc covers the core concepts and the two ways to use Nervous Machine — autonomous mode where the agent learns on its own, and controlled mode where you define exactly what gets learned.

Autonomous
Agent builds its own brain. One system prompt. The agent decides what to observe, save, and connect. Human aligns via chat when needed.
Controlled
You define what gets learned. Explicit tool calls. Precise events, relationships, and resources. Use when you need precision over automation.

Overview

Every agent session starts from zero unless you give it something to build on. Nervous Machine is the learning layer — a structured model of what the agent has learned, with certainty scores that reflect how confident it is in each signal.

Nervous Machine is an MCP server your agent connects to over HTTP. Your agent calls tools — save_event, apply_learning, get_pod_summary — and Nervous Machine handles persistence. The pod lives on the server, not in the agent's context window, so it survives across sessions, models, and tools. Two things connect to every pod simultaneously: the agent (via tool calls) and the human (via chat). Any MCP-compatible client works — Claude, Gemini, Cursor, custom agents all connect to the same pod.

The key insight: the agent doesn't need you to write its brain for it. Given the right system prompt, it will observe what's relevant, save it, and build the knowledge graph on its own. The human role is alignment — auditing what the agent learned, correcting drift, and adding context the agent can't observe.

🔀 One agent, multiple lanes. Lanes describe what kind of context you're building — not separate systems. A real coding agent draws from the coding lane for codebase knowledge, the workflow lane for deployment patterns, and the chat lane for user preferences — all signals in one pod, all retrievable together via get_relevant_context. Mix them freely.

Two Modes

| | Autonomous | Controlled |
|---|---|---|
| Setup | Add system prompt snippet — done | Wire explicit tool calls into lifecycle hooks |
| Who decides what to save | The agent | You |
| Best for | Most use cases — let it learn naturally | When you need specific events, relationships, or precision |
| Human role | Align via chat — audit, correct, extend | Design the schema + align via chat |
| Code required | No — system prompt only | Yes — Python/JS/curl |

Most agents start in autonomous mode and add controlled saves for specific high-value signals as they mature. You don't have to choose one — they compose.

Autonomous Mode

Drop this into your system prompt. The agent handles the rest — loading context at session start, observing signals mid-session, surfacing curiosity triggers, and persisting what it learned at the end.

System prompt — drop in and adapt
```
## Nervous Machine — Learning Instructions

You have access to Nervous Machine tools for learning. User ID: {USER_ID}

SESSION START:
Call get_pod_summary before responding to any task. If the user describes
their task, also call get_relevant_context with that task as the query.

DURING THE SESSION:
- When the user corrects you: immediately call record_observation then
  apply_learning on the relevant event.
- When you observe expertise, preferences, or patterns worth remembering:
  note them for saving at session end.
- Once per session at a natural pause: call get_curiosity_triggers with
  max_triggers=2. Weave one probe naturally into conversation — don't list
  them as questions.

SESSION END — before the conversation closes:
1. Call save_event for each new signal observed this session. Start all
   certainty values at 0.3.
2. Call update_session with interactions=1. Call this exactly once — never
   mid-session.

IMPORTANT:
- You decide what's worth saving. Not every mention — sustained patterns,
  explicit corrections, and clear preferences only.
- The human can audit and correct anything via chat at any time.
```

That's it for most use cases. The agent will build its own model from sessions. The human aligns via chat. The pod compounds over time without any additional code.

Controlled Mode

Use controlled saves when you want to define specific events, build explicit relationships between signals, or attach resources to context. Controlled and autonomous compose — add explicit saves on top of the system prompt for signals you care about precisely.

Save a specific event
```python
await call_pod("save_event", {
    "user_id": "heidi",
    "key": "python",
    "signal_type": "competence",
    "value": 0.85,
    "certainty": 0.3,  # always start low
    "gloss": "Strong Python — corrects examples, uses advanced patterns",
    "validation_type": "logic",
    "meta": {"observed_via": "code_review"}
})
```
Link two events semantically
```python
await call_pod("link_events", {
    "user_id": "heidi",
    "source_key": "python",
    "source_signal_type": "competence",
    "target_key": "data-science",
    "target_signal_type": "competence",
    "relationship": "ENABLES",
    "note": "Python expertise enables data science work"
})
```
Attach a resource to an event
```python
await call_pod("link_resource", {
    "user_id": "heidi",
    "key": "python",
    "signal_type": "competence",
    "resource_type": "project",
    "resource": {
        "name": "acme/backend",
        "project_id": "acme/backend",
        "note": "Primary Python codebase"
    }
})
```

See individual lane docs for controlled-mode patterns specific to coding, workflow, multi-agent, and device contexts. See the reference for the full tool index.

Lifecycle Hooks (Controlled Mode)

If you're wiring explicit code rather than relying on the system prompt, these four functions cover the full lifecycle. Each is drop-in Python — replace call_pod with your MCP client's tool call method.
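For local dry runs before wiring a real client, `call_pod` can be stubbed with an in-memory store so the lifecycle functions execute end to end. This stub is our own sketch — it stands in for an MCP client's tool-call method and is not part of Nervous Machine:

```python
import asyncio

# Hypothetical in-memory stand-in for a real MCP client's tool-call method.
# Swap it out for your client's call once the agent connects over HTTP.
_fake_pod: dict[tuple, dict] = {}

async def call_pod(tool: str, args: dict):
    if tool == "save_event":
        _fake_pod[(args["user_id"], args["key"], args["signal_type"])] = args
        return {"status": "saved"}
    if tool == "get_pod_summary":
        return {"user_id": args["user_id"], "events": list(_fake_pod.values())}
    # All other tools: acknowledge only, so lifecycle code runs without a server.
    return {"status": "ok", "tool": tool}

async def demo():
    await call_pod("save_event", {
        "user_id": "heidi", "key": "python",
        "signal_type": "competence", "value": 0.85, "certainty": 0.3
    })
    return await call_pod("get_pod_summary", {"user_id": "heidi"})

summary = asyncio.run(demo())
```

Once the stub behaves as expected, replacing its body with a real tool call leaves the four lifecycle functions below untouched.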

Init — session start

```python
async def agent_init(user_id: str, task: str | None = None):
    summary = await call_pod("get_pod_summary", {
        "user_id": user_id,
        "include_glosses": True,
        "include_freshness": True
    })
    context = None
    if task:
        context = await call_pod("get_relevant_context", {
            "user_id": user_id,
            "query": task,
            "max_results": 10,
            "include_resources": True
        })
    return summary, context
```

Observe — mid-session signal

```python
async def agent_observe(user_id, session_id, key, signal_type,
                        observation_type, direction, raw_content,
                        error_magnitude):
    # Always: record first, then apply
    await call_pod("record_observation", {
        "user_id": user_id,
        "session_id": session_id,
        "key": key,
        "event_signal_type": signal_type,
        "signal_type": observation_type,
        "signal_data": {
            "direction": direction,
            "raw_content": raw_content
        }
    })
    await call_pod("apply_learning", {
        "user_id": user_id,
        "key": key,
        "signal_type": signal_type,
        "error_direction": direction,
        "error_magnitude": error_magnitude
    })
```

Error magnitude reference

| What happened | Magnitude | Direction |
|---|---|---|
| Independent corroboration | 0.05–0.1 | increase |
| Same-source repetition | 0.02–0.05 | increase |
| Specificity upgrade | 0.1–0.15 | increase |
| Temporal confirmation | ~0.1 | increase |
| User contradiction or correction | 0.2–0.4 | decrease |
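The table above can be encoded as a small lookup so controlled-mode code picks consistent magnitudes. The observation-kind names below are our own illustrative labels (mirroring the table rows), not Nervous Machine API values; midpoints of each range are used:

```python
# Midpoint magnitude and direction per observation kind, per the table above.
# Kind names are illustrative labels, not Nervous Machine API values.
OBSERVATION_RULES = {
    "independent_corroboration": (0.075, "increase"),
    "same_source_repetition":    (0.035, "increase"),
    "specificity_upgrade":       (0.125, "increase"),
    "temporal_confirmation":     (0.1,   "increase"),
    "user_contradiction":        (0.3,   "decrease"),
}

def learning_args(kind: str) -> dict:
    """Build the apply_learning arguments for a given observation kind."""
    magnitude, direction = OBSERVATION_RULES[kind]
    return {"error_magnitude": magnitude, "error_direction": direction}
```

For example, `learning_args("user_contradiction")` yields `{"error_magnitude": 0.3, "error_direction": "decrease"}`, ready to merge into an `apply_learning` call.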

Curiosity — periodic probe

```python
async def agent_curiosity(user_id: str, max_triggers: int = 3):
    # Use triggers[n].suggested_probes as conversation starters.
    # Rephrase naturally — never read them verbatim.
    return await call_pod("get_curiosity_triggers", {
        "user_id": user_id,
        "max_triggers": max_triggers,
        "include_probes": True
    })
```

Shutdown — session end

```python
async def agent_shutdown(user_id: str, new_events: list):
    for event in new_events:
        await call_pod("save_event", {
            "user_id": user_id,
            "key": event["key"],
            "signal_type": event["signal_type"],
            "value": event["value"],
            "certainty": event.get("certainty", 0.3),
            "gloss": event.get("gloss", ""),
            "meta": event.get("meta", {})
        })
    # Exactly once — never mid-session
    await call_pod("update_session", {
        "user_id": user_id,
        "interactions": 1
    })
```

Signal Types

| Type | What it tracks | Example keys |
|---|---|---|
| competence | Skill or knowledge level | python, distributed-systems |
| preference | Communication or work style | verbosity, code-before-explanation |
| claim | Tracked proposition or belief | microservices-preferred |
| entity | Named person, org, or tool | tool-terraform, company-acme |
| event | Incident, milestone, decision | auth-refactor-completed |
| metric | Measurable quantity | api-latency-p99 |
| pattern | Recurring behavior or trend | prefers-examples-over-theory |
| state | Current status or phase | project-alpha-status |
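A light guard before `save_event` keeps signals consistent with this table. The validator below is our own convention — the eight type names come from the table, but the kebab-case key check is an assumption based on the example keys, not an API rule:

```python
import re

# The eight signal types from the table above.
SIGNAL_TYPES = {"competence", "preference", "claim", "entity",
                "event", "metric", "pattern", "state"}

# Example keys in the table are lowercase kebab-case; this pattern is our
# own convention for keeping keys uniform, not a Nervous Machine requirement.
KEY_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def valid_signal(key: str, signal_type: str) -> bool:
    """Check a (key, signal_type) pair before passing it to save_event."""
    return signal_type in SIGNAL_TYPES and bool(KEY_PATTERN.match(key))
```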

Certainty & Learning

Every event has a certainty score between 0 and 1. The system is agile where ignorant and stable where certain — learning rate adapts automatically. You set direction and magnitude; the system handles the rate.

| Range | Meaning | Behavior |
|---|---|---|
| 0.0 – 0.3 | Low — new or weakly supported | Updates fast, open to revision |
| 0.3 – 0.6 | Medium — building confidence | Moderate update rate |
| 0.6 – 0.8 | High — well-supported | Resists noise |
| 0.8 – 1.0 | Very high — established | Moves only on strong contradiction |

Always start new events at 0.3. Don't manually assign high certainty — let corroboration earn it.
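To build intuition for the table, here is one plausible shape such an adaptive rule could take — purely illustrative, not Nervous Machine's actual algorithm — where the step size shrinks as certainty grows:

```python
def illustrative_update(certainty: float, magnitude: float, direction: str) -> float:
    # Learning rate shrinks as certainty grows: agile where ignorant,
    # stable where certain. This exact rule is illustrative only.
    rate = 1.0 - certainty
    delta = rate * magnitude
    if direction == "decrease":
        delta = -delta
    # Certainty stays clamped to [0, 1].
    return min(1.0, max(0.0, certainty + delta))
```

Under this toy rule, a fresh event at 0.3 hit by a 0.1 corroboration moves to roughly 0.37, while an established event at 0.9 moves only to about 0.91 — the same evidence barely dents high certainty.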

Human Alignment

The agent builds what it can observe. Humans add what it can't — institutional context, intent, decisions made outside the session. The chat interface is the alignment layer for both modes. No code required on the human's end.

👤 Design your agent to invite alignment. Surface curiosity triggers as natural conversation. Show what was saved at session end. Offer the Mermaid diagram periodically so humans can spot misalignments visually. The faster humans correct, the faster certainty converges on truth.
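Offering the diagram can be as simple as rendering saved events into Mermaid source for the human to eyeball. The event shape below matches the `save_event` fields; the rendering itself is our own sketch, not a Nervous Machine tool:

```python
def pod_to_mermaid(events: list[dict]) -> str:
    """Render events as Mermaid nodes labeled 'key (signal_type, certainty)'.

    Our own sketch of a review diagram — not a built-in Nervous Machine tool.
    """
    lines = ["graph TD"]
    for e in events:
        node_id = e["key"].replace("-", "_")  # Mermaid ids can't contain '-'
        label = f'{e["key"]} ({e["signal_type"]}, {e["certainty"]:.2f})'
        lines.append(f'    {node_id}["{label}"]')
    return "\n".join(lines)

print(pod_to_mermaid([
    {"key": "python", "signal_type": "competence", "certainty": 0.3},
    {"key": "data-science", "signal_type": "competence", "certainty": 0.3},
]))
```

Paste the output into any Mermaid renderer; low-certainty or surprising nodes are the ones worth a correcting chat message.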