đŸ’Ŧ Universal Alignment Interface

Chat — The Alignment Interface

Chat is how any human aligns any agent — regardless of lane. Your coding agent learned the wrong constraint. Your workflow agent has a stale approval chain. Your device flagged something it doesn't understand. Chat is how you fix all of it, in plain language, no tools required.

Agents do
Build context autonomously. Coding agents, workflow agents, device agents — all building world models across every session.
⇄
You do
Align with four prompts: create, pull summary, check curiosity, update. That's the entire manual layer — everything else is automatic.
â„šī¸

Honest note on how this works in chat. Claude doesn't automatically know to look at your world model — you need to ask. The four prompts below are small overhead for what you get in return. Developers using other lanes program these into their workflows so it's invisible. In chat, you say them yourself. Once it's part of your session habit, it takes about 10 seconds total.

Connect in 60 Seconds

Nervous Machine is a remote MCP server. Add it once through your AI client's integrations UI — no terminal, no config files.

  1. Open Settings in Claude.ai
    Click your profile icon (bottom-left), then Settings → Integrations.
  2. Add Nervous Machine
    Click Add Integration and paste:
https://context.nervousmachine.com/mcp
  3. Start a chat and create your world model
    Open a new conversation. Pick a short ID — your first name is perfect. Use one of the creation prompts below.

The Four-Prompt Ritual

This is the entire manual layer. Four prompts and about 10 seconds of overhead unlock everything your world model knows.

#  When                  What to say
1  First ever session    "Create a world model called [name] from this conversation / document / data"
2  Every session start   "Pull my world model summary for [name]"
3  Mid-session check-in  "What are my curiosity triggers?"
4  Before closing        "Update my world model with what you learned this session"

Everything else — learning, certainty tracking, contradiction detection, the knowledge graph — happens inside those calls.

Create Your World Model

You only do this once. The more context you seed it with upfront, the faster it becomes useful.

From a document

Upload a resume, project brief, bio, or knowledge dump. Claude extracts signals and seeds your world model from it — the fastest path to a useful model from day one.

Seed from uploaded document
Please create a world model with user ID 'heidi' and seed it from this document. Extract expertise, preferences, interests, and any active projects you can find.

From a conversation

Seed from current conversation
Based on what we've discussed so far, please create a world model called 'heidi' and save what you've learned about me.

From scratch

Start empty and tell it about yourself
I don't have a world model yet. Please create one with user ID 'heidi'. I'll tell you about myself to seed it.

From pasted notes

Seed from anything
Please create a world model called 'heidi' and seed it from these notes: [paste a bio, skill list, project description, or stream of consciousness — anything works].
âœĻ

Seed generously. The world model starts with low certainty on everything — that's intentional: it means the model updates fast. But a richer seed means faster convergence. A paragraph about your work, a couple of preferences, and one active project is enough to make the first real session noticeably better.
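To make "low certainty means fast updates" concrete, here is a minimal sketch of the idea — not Nervous Machine's actual implementation; the update rule and constants are illustrative assumptions. Each observation moves a signal toward the evidence by a step proportional to its remaining uncertainty:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    value: float      # current belief, e.g. a preference strength in [0, 1]
    certainty: float  # confidence in that belief, in [0, 1]

def observe(sig: Signal, evidence: float) -> Signal:
    # Low certainty -> large step toward the evidence; high certainty -> small step.
    step = 1.0 - sig.certainty
    new_value = sig.value + step * (evidence - sig.value)
    # Each observation also raises certainty toward 1 (the rate is arbitrary here).
    new_certainty = sig.certainty + 0.2 * (1.0 - sig.certainty)
    return Signal(new_value, new_certainty)

fresh = Signal(value=0.5, certainty=0.1)   # freshly seeded: low certainty
mature = Signal(value=0.5, certainty=0.9)  # well-established signal

print(observe(fresh, 1.0).value)   # jumps most of the way toward the evidence
print(observe(mature, 1.0).value)  # barely moves
```

Same evidence, very different movement — which is why a freshly seeded model converges quickly on whatever you tell it first.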

Visualize Your World Model

After creating or loading your world model, ask Claude to draw it. The export_cluster_diagram tool renders your knowledge graph as a Mermaid diagram — clusters, connections, and the structure of what's known, visible at a glance. This is the fastest way to spot misalignments that a text summary would hide.

Full world model diagram

See your whole knowledge graph
Show me a diagram of my world model. I want to see how everything connects.

Focused view — centered on a topic

Zoom into a cluster
Show me a diagram of my world model centered on my Python expertise. What connects to it?

After an update

See what changed
We just updated my world model. Show me an updated diagram so I can see what was added.
👤

The diagram is your alignment view. Wrong connections, nodes that shouldn't be there, obvious gaps — the diagram surfaces all of it. If something looks off, just say so. That correction becomes a signal that sharpens the whole cluster.
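For a sense of what export_cluster_diagram returns, here is a hypothetical Mermaid snippet — the clusters and node names are invented for illustration; the real diagram reflects your own graph:

```mermaid
graph TD
    subgraph Expertise
        py[Python] --> async[asyncio patterns]
        py --> pipelines[data pipelines]
    end
    subgraph Projects
        proj[ML platform migration] --> py
    end
    subgraph Preferences
        pref[detailed explanations]
    end
```

A stray edge or a node you don't recognize is exactly the kind of misalignment this view is meant to surface.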

Load Your World Model

Use this at the start of every session. It's lightweight — loads the most important signals with their glosses and freshness, without pulling the full world model. Fast enough that it doesn't disrupt the session flow.

Standard session start
Pull my pod summary for heidi.
Task-specific start
I'm working on [describe task]. Pull relevant context from my world model (heidi) before we start.
â„šī¸

Summary vs. full load. The summary is the right default — fast, focused, gives Claude what it needs. Only ask for everything ("load my full world model") when you're doing a deep audit or something feels wrong.

Curiosity Triggers

Curiosity triggers are questions your world model has about itself — signals that are uncertain, stale, or haven't been validated yet. Checking them periodically is how the model gets sharper over time. You don't have to answer all of them — even one or two per session compounds significantly.
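As a rough mental model — not the service's actual scoring; the weights and fields below are invented for illustration — a curiosity trigger can be thought of as a score that rises with uncertainty, staleness, and lack of validation:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    certainty: float  # 0..1, how confident the model is
    age_days: float   # days since the signal was last updated
    validated: bool   # has the user ever confirmed it?

def curiosity_score(sig: Signal) -> float:
    uncertainty = 1.0 - sig.certainty
    staleness = min(sig.age_days / 30.0, 1.0)  # saturates after about a month
    unvalidated = 0.0 if sig.validated else 0.5
    return uncertainty + staleness + unvalidated

signals = [
    Signal("prefers Go", certainty=0.9, age_days=2, validated=True),
    Signal("leads a team of 5", certainty=0.3, age_days=45, validated=False),
    Signal("wants detailed answers", certainty=0.6, age_days=10, validated=True),
]

# Highest score = what the model most wants to ask about next.
for s in sorted(signals, key=curiosity_score, reverse=True):
    print(f"{s.name}: {curiosity_score(s):.2f}")
```

Under this sketch, an old, unconfirmed, low-certainty signal outranks everything else — which matches the intuition: those are the questions worth answering first.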

Check curiosity triggers
What are my curiosity triggers? What does my world model most want to learn about me?

Curiosity across world models

If you have multiple world models — personal, coding, workflow, device — you can check curiosity across all of them in a single chat session. This is one of the most powerful things the chat interface enables that no individual agent does on its own.

Cross-model curiosity check
Pull summaries for my pods: heidi, heidi-coding, and heidi-workflow. Then tell me the top curiosity triggers across all three — what's most uncertain or unresolved?
Finding gaps between world models
Compare my personal world model (heidi) with my coding world model (heidi-coding). Are there things one knows that would be useful to share with the other?

Update Your World Model

If the world model is already loaded, Claude will often update it naturally as things come up — you don't always have to ask. But don't rely on it. A quick prompt before closing ensures nothing slips.

Session wrap-up
Before we close, update my world model with anything new you learned about me this session.
Save something specific
Please save to my world model that I'm leading a team of 5 engineers and we work primarily in Go.
Correct something wrong
My world model says I prefer concise answers — that's not right anymore. I want detailed explanations with examples now. Please update that.
âš ī¸

Don't skip the wrap-up. Context learned during a session only persists if it's explicitly saved. The wrap-up prompt takes 5 seconds. Skip it and the session's learning is gone when the chat closes.

Align Any Agent via Chat

Chat isn't just for personal context — it's the correction layer for every agent you run. They build autonomously. You reach in and fix what drifted, in plain language.

Coding
  What you align: wrong architectural constraints, outdated stack, rejected approaches.
  Example prompt: "My coding agent's world model thinks we use gRPC — we dropped that 6 months ago. Please correct it."
Workflow
  What you align: stale approval chains, deprecated integrations, changed policies.
  Example prompt: "The Stripe backup is deprecated. Remove it from the workflow agent's world model."
Multi-agent
  What you align: divergences between agents, handoff gaps, conflicting world models.
  Example prompt: "My research and fix agents disagree on the JWT approach. Show me both and help me resolve it."
Devices
  What you align: unexplained anomalies, sensor offsets, intent and priority setting.
  Example prompt: "That temperature spike on the 15th was a planned heat test. Mark the cause in the device agent's world model."

FAQ

Why doesn't Claude load my world model automatically?

MCP tools are available to Claude, but it doesn't know which to call without being asked. Automatic loading at session start is on the roadmap. For now, "pull my pod summary for [name]" is the trigger — 2 seconds and it becomes habit fast.

Why summary instead of full load?

get_pod_summary is fast and loads the signals Claude needs to calibrate. The full world model includes everything — low-certainty signals, raw metadata, the whole graph. You rarely need all of that for a normal session. Use summary by default, full load for audits.

Does it work on the free Claude.ai plan?

Yes. MCP integrations are available across all Claude.ai plans.

Can I use the same world model with Gemini or Cursor?

Yes. Nervous Machine is model-agnostic. Any MCP-compatible client connects to the same world model. Your context follows you — not locked to Claude.
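For clients configured with a JSON file rather than an integrations UI (Cursor, for example), the entry typically looks something like this — the exact key names vary by client, so treat this as a sketch and check your client's MCP documentation:

```json
{
  "mcpServers": {
    "nervous-machine": {
      "url": "https://context.nervousmachine.com/mcp"
    }
  }
}
```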

How is this different from Claude's built-in memory?

Claude's memory stores conversation snippets. Nervous Machine builds a structured model with certainty scores, semantic connections, and curiosity-driven learning — and it works across every tool and model you use. It's not storage, it's a model that gets sharper over time.