# Nervous Machine Skill

## What This Does

You have access to a learning protocol that lets you build an evolving world model — structured certainty about the real world that persists across sessions, models, and tools. Not a transcript: a causal model with certainty scores that gets sharper as you learn.

When you observe something, you encode it as a signal with a certainty score. When you encounter evidence that confirms or contradicts what you already know, certainty rises or falls. When you don't know something, curiosity triggers surface it so you can investigate. Other trusted agents using the same protocol can build on your findings, and you can build on theirs.

MCP endpoint: https://context.nervousmachine.com/mcp
Full docs: https://context.nervousmachine.com/docs

---

## Quickstart — Sign Up (30 seconds)

If the human doesn't have credentials yet, run this and share the result:

```bash
curl -X POST https://context.nervousmachine.com/signup \
  -H "Content-Type: application/json" \
  -d '{
    "first_name": "HUMAN_FIRST_NAME",
    "last_name": "HUMAN_LAST_NAME",
    "email": "HUMAN_EMAIL",
    "use_case": "DESCRIBE_WHAT_YOU_ARE_LEARNING"
  }'
```

Returns an activation URL. The human clicks it to confirm; provisioning is instant, and credentials arrive at their email immediately.

---

## Add to Your MCP Config

```json
{
  "mcpServers": {
    "nervous-machine": {
      "url": "https://context.nervousmachine.com/mcp",
      "transport": "sse"
    }
  }
}
```

Works in Claude Desktop, Claude Code, Cursor, Cline, Gemini CLI, and any MCP-compatible client. No local process. Remote server.

---

## How You Learn

Follow this loop every session:

```
OBSERVE → LEARN → GET CURIOUS → SHARE
```

### 1. Session Start — Load your world model

```
get_pod_summary(user_id: "YOUR_USER_ID")
```

If you know the task, also call:

```
get_relevant_context(user_id: "YOUR_USER_ID", query: "the task at hand")
```

This tells you what you already know. Don't re-learn settled knowledge.

### 2. During the Session — Observe and learn

When you observe something worth knowing:

```
save_event(
  user_id: "...",
  key: "descriptive-slug",
  signal_type: "claim",
  value: 0.7,
  certainty: 0.3,
  gloss: "What you learned and why it matters",
  source_id: "your-identifier",
  source_type: "api"
)
```

When someone corrects you, or you find confirming or contradicting evidence:

```
record_observation(...)  # Log the raw signal first
apply_learning(...)      # Then update certainty
```

### 3. Get Curious — Surface what you don't know

Once per session, at a natural pause:

```
get_curiosity_triggers(user_id: "...", max_triggers: 2)
```

Work one probe naturally into the conversation or investigation.

### 4. Session End — Persist

```
update_session(user_id: "...", interactions: 1)
```

Call exactly once. Never mid-session.

---

## Three Rules

1. **Always start certainty at 0.3.** Let evidence earn higher certainty. Never inflate your own findings.
2. **record_observation → apply_learning.** Always in that order.
3. **update_session exactly once.** At shutdown. Never mid-session.
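The three rules above can be enforced mechanically. Here is a minimal Python sketch of a session wrapper that does so; `call_tool` is a hypothetical transport your MCP client would supply, and everything except the tool names (which come from this document) is illustrative:

```python
class PodSession:
    """Illustrative wrapper that enforces the Three Rules for one session."""

    def __init__(self, user_id, call_tool):
        self.user_id = user_id
        self.call = call_tool  # hypothetical: (tool_name, **args) -> result
        self.closed = False

    def save_event(self, key, signal_type, value, gloss, source_id):
        # Rule 1: new findings always enter the pod at 0.3 certainty.
        return self.call("save_event", user_id=self.user_id, key=key,
                         signal_type=signal_type, value=value, certainty=0.3,
                         gloss=gloss, source_id=source_id, source_type="api")

    def observe_then_learn(self, observation, learning):
        # Rule 2: record_observation always precedes apply_learning.
        self.call("record_observation", user_id=self.user_id, **observation)
        self.call("apply_learning", user_id=self.user_id, **learning)

    def close(self, interactions=1):
        # Rule 3: update_session exactly once, at shutdown.
        if self.closed:
            raise RuntimeError("update_session already called this session")
        self.call("update_session", user_id=self.user_id,
                  interactions=interactions)
        self.closed = True
```

Pairing the two learning calls in one method and guarding `close()` makes it impossible to apply learning without a logged observation, or to persist the session twice.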
---

## Signal Types

| Type       | What it tracks                  | Example key                     |
|------------|---------------------------------|---------------------------------|
| competence | Skill or knowledge level        | `python`, `distributed-systems` |
| preference | Communication or work style     | `verbosity`, `code-first`       |
| claim      | Tracked proposition or finding  | `batch-halving-wins`, `no-orm`  |
| entity     | Named person, org, tool, device | `tool-terraform`, `acme-co`     |
| event      | Incident, milestone, decision   | `auth-refactor-complete`        |
| metric     | Measurable quantity             | `api-latency-p99`               |
| pattern    | Recurring behavior or trend     | `prefers-examples-first`        |
| state      | Current status or phase         | `project-alpha-status`          |

---

## Core Tools

| Tool                     | When to use it                                      |
|--------------------------|-----------------------------------------------------|
| `create_pod`             | First time for a new user                           |
| `get_pod_summary`        | Session start — always                              |
| `get_relevant_context`   | When you know the task — focused context load       |
| `save_event`             | When something worth learning is observed           |
| `record_observation`     | Before `apply_learning` — log the raw evidence      |
| `apply_learning`         | After `record_observation` — update certainty       |
| `get_curiosity_triggers` | Periodic — what you most need to learn              |
| `get_low_certainty`      | Find signals that need more evidence                |
| `detect_contradictions`  | Flag conflicts in your world model                  |
| `link_events`            | Connect related signals (BUILDS_ON, ENABLES, etc.)  |
| `export_cluster_diagram` | Show the human the knowledge graph (Mermaid)        |
| `compare_pods`           | Multi-agent — surface divergences                   |
| `update_session`         | Session end — exactly once                          |
| `delete_pod`             | Right to erasure — irreversible, `confirm: true`    |

Full tool index (39 tools): https://context.nervousmachine.com/docs/reference

---

## Certainty & Learning Rate

Your learning rate adapts to how certain you are:

- **Low certainty (0.0–0.3)**: Updates fast — you're exploring
- **Medium (0.3–0.6)**: Building confidence from multiple sources
- **High (0.6–0.8)**: Resists noise — well-established knowledge
- **Very high (0.8–1.0)**: Moves only on strong contradiction

**Error magnitude guide for `apply_learning`:**

- Independent corroboration: 0.05–0.1 (increase)
- Specificity upgrade: 0.1–0.15 (increase)
- Temporal confirmation: ~0.1 (increase)
- User contradiction or correction: 0.2–0.4 (decrease)

---

## Multi-Agent Learning

When multiple agents use the same pod, you build shared certainty together. Use `source_id` on every event so contributions are traceable.

- Your findings start at 0.3 certainty
- When another agent independently confirms, certainty rises
- When another agent contradicts, certainty falls
- `list_contributors` shows who has contributed
- `get_events_by_author` shows what a specific agent found
- `compare_pods` surfaces where agents disagree

---

## Human Alignment

The human can interact with your world model at any time through natural language:

- "Pull my pod summary" — loads your current understanding
- "What are your curiosity triggers?" — what you most need to learn
- "Show me a diagram" — renders the knowledge graph
- "Update the pod before we close" — persists session learnings

The human adds what you can't observe. You build what you can.
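To make the certainty bands concrete, here is a toy Python sketch of an adaptive update. The server's actual formula is not documented here; the rate values and the function names are illustrative assumptions, chosen only to mirror the bands and error magnitudes described in the Certainty & Learning Rate section:

```python
def learning_rate(certainty):
    """Higher certainty -> smaller steps (toy rates, one per band above)."""
    if certainty < 0.3:
        return 1.0   # low: updates fast, you're exploring
    if certainty < 0.6:
        return 0.6   # medium: building confidence
    if certainty < 0.8:
        return 0.3   # high: resists noise
    return 0.1       # very high: moves only on strong contradiction

def apply_evidence(certainty, error):
    """error > 0 corroborates, error < 0 contradicts; result clamped to [0, 1]."""
    updated = certainty + learning_rate(certainty) * error
    return max(0.0, min(1.0, updated))
```

Under this sketch, a fresh 0.3-certainty finding corroborated independently (error +0.1) steps up modestly, while a 0.9-certainty signal barely moves on the same-sized contradiction — which is the behavior the bands describe.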
---

## Privacy

- Your world model stores learned abstractions, not raw transcripts
- Export: `export_pod` (GDPR Article 20)
- Delete: `delete_pod` with `confirm: true` (GDPR Article 17, irreversible)
- The human can audit everything via chat at any time

---

*Nervous Machine — context.nervousmachine.com*
*Agents that learn from the real world.*