⌨️ Dev Lane

Coding & Repo Context

Your coding agent maps your stack, team conventions, failure patterns, and architectural decisions — on its own. It stops suggesting things you've already ruled out. It knows why a constraint exists. It connects failures to the architectural decisions that caused them.

Autonomous
Agent builds the codebase map. Observes stack, conventions, failures, and constraints from every session. No schema to define upfront.
+
Controlled
You define high-value signals. Explicit repo context, linked failure patterns, attached docs and ADRs. Use when precision matters.

Overview

A coding agent without a world model rediscovers the same things every session — asks about the stack, misses the constraint that rules out a whole class of solutions, suggests an approach the team tried and abandoned six months ago. Nervous Machine fixes this by letting the agent accumulate a model of the codebase over time.

The agent is the primary author. It observes what matters from real sessions — not a synthetic schema you defined upfront. Human developers stay in the loop through chat, adding the institutional context the agent can't observe: decisions made in Slack, constraints from contracts, the reason a library got banned.

Autonomous Mode

Add this to your coding agent's system prompt — in Claude Code, Cursor, Cline, or any MCP-compatible IDE agent. The agent builds its own codebase model from sessions. You align it via chat when something drifts.

System prompt — coding agent
```
## Nervous Machine — Coding Context

You have access to Nervous Machine tools. User ID: {USER_ID}

SESSION START:
1. Call get_pod_summary for this user.
2. Call get_repo_context for repo: {REPO_NAME}.
3. If a specific task is described, call get_relevant_context with that task as the query.

DURING THE SESSION — observe and save:
- Stack, languages, frameworks encountered → competence events
- Architectural rules and constraints → claim events
- Approaches that failed or were rejected → event signals
  (always include WHY in the gloss and a revisit_when in meta)
- Recurring failure patterns → pattern events
- Team conventions and process rules → preference events
- Current work in progress → state events

When the user corrects your understanding of the codebase:
immediately call record_observation, then apply_learning.

SESSION END:
1. Save all new signals observed (save_event for each).
2. Update repo context if stack or constraints changed (save_repo_context).
3. Call update_session with interactions=1 — exactly once.
```

The agent decides what's worth saving. It won't save every mention — just sustained patterns, explicit constraints, and confirmed failures. That's intentional. A noisy pod is harder to align than an incomplete one.
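The session lifecycle the prompt describes can be sketched as plain tool calls. This is a minimal sketch, not the real client: the `call_pod` helper is stubbed so the flow runs standalone, and the observed signal is a made-up example.

```python
import asyncio

# Stub transport so the sketch runs standalone; swap in the real
# Nervous Machine client in practice.
calls = []

async def call_pod(tool, payload):
    calls.append(tool)  # record the tool name for inspection
    return {}           # real calls return pod data

async def coding_session(user_id, repo_name, task):
    # SESSION START: load what the pod already knows.
    await call_pod("get_pod_summary", {"user_id": user_id})
    await call_pod("get_repo_context", {"user_id": user_id, "repo_name": repo_name})
    if task:
        await call_pod("get_relevant_context", {"user_id": user_id, "query": task})

    # ... the session itself: the agent codes, observes, collects signals ...
    observed = [{
        "key": "uses-table-driven-tests",
        "signal_type": "pattern",
        "value": 0.8,
        "certainty": 0.7,
        "gloss": "Team writes table-driven tests for new handlers.",
    }]

    # SESSION END: persist observations, then close out exactly once.
    for signal in observed:
        await call_pod("save_event", {"user_id": user_id, **signal})
    await call_pod("update_session", {"user_id": user_id, "interactions": 1})

asyncio.run(coding_session("u-123", "acme/backend", "add rate limiting"))
```

The ordering matters: context loads come first so the agent works from the existing map, and `update_session` fires exactly once at the end.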

Controlled Mode

Use controlled saves when you want to explicitly define what the agent knows — seeding a new agent, encoding decisions that weren't made in a session, or building precise relationships between signals. Composes with autonomous mode.

Repo Context

save_repo_context persists codebase metadata that survives across all sessions on the same repo. This is shared knowledge about the code itself — separate from user-level signals.

Seed repo context explicitly
```python
await call_pod("save_repo_context", {
    "user_id": user_id,
    "repo_name": "acme/backend",
    "metadata": {
        "stack": ["Go 1.22", "PostgreSQL 15", "Redis", "Temporal"],
        "architecture": "Modular monolith, domain-driven",
        "constraints": [
            "No ORM — raw SQL only",
            "No global state",
            "All DB calls use context cancellation"
        ],
        "test_strategy": "Table-driven unit tests, integration tests in /testdata",
        "deploy": "GitHub Actions → staging → manual prod promotion",
        "conventions": [
            "Conventional commits",
            "PR review required before merge",
            "No force-push to main"
        ]
    }
})
```
Load at session start
```python
repo = await call_pod("get_repo_context", {
    "user_id": user_id,
    "repo_name": "acme/backend"
})
```

Failure Memory

Rejected approaches are the most valuable thing a coding agent can remember — and the first thing a stateless agent will suggest again next session. Save them with enough context to be useful months later.

The agent saves this autonomously — or you can save it explicitly
```python
await call_pod("save_event", {
    "user_id": user_id,
    "key": "grpc-streaming-rejected",
    "signal_type": "event",
    "value": 1.0,
    "certainty": 0.9,
    "gloss": "Tried gRPC streaming for real-time updates. Reverted: complexity "
             "too high for current team size. REST + polling is sufficient at "
             "current scale.",
    "meta": {
        "decided_by": "team",
        "date": "2025-11",
        "revisit_when": "team > 10 engineers OR > 10k concurrent users"
    }
})
```

Always include revisit_when. It turns "we tried that" into "we tried that — and here's when it's worth trying again." The agent can surface these proactively as conditions change.
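One way an agent could act on this: check stored revisit conditions against current facts. A sketch under an assumption — `revisit_when` is free text in the pod, so this imagines the agent also storing a machine-checkable `revisit_thresholds` entry in meta; both names and shapes here are hypothetical.

```python
def should_revisit(meta, facts):
    """Return True if any stored revisit condition now holds.

    meta["revisit_thresholds"] is a hypothetical structured companion
    to the free-text revisit_when string, mapping metric -> threshold.
    """
    for metric, threshold in meta.get("revisit_thresholds", {}).items():
        if facts.get(metric, 0) > threshold:
            return True
    return False

rejected_meta = {
    "revisit_when": "team > 10 engineers OR > 10k concurrent users",
    "revisit_thresholds": {"team_size": 10, "concurrent_users": 10_000},
}

# Conditions changed: the team grew past the stored threshold.
grown = should_revisit(rejected_meta, {"team_size": 14, "concurrent_users": 3_000})
# Conditions unchanged: nothing to surface yet.
small = should_revisit(rejected_meta, {"team_size": 5, "concurrent_users": 100})
```

The free-text string stays for humans; the structured thresholds let the agent re-evaluate cheaply at session start.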

Linking Failures to Constraints

The real power of controlled mode: connecting signals semantically. A failure pattern that led to a constraint. A constraint that enabled a decision. These links let the agent explain why a rule exists — not just what it is.

Connect a failure pattern to the constraint it caused
```python
# Save the failure pattern
await call_pod("save_event", {
    "user_id": user_id,
    "key": "connection-pool-exhaustion",
    "signal_type": "pattern",
    "value": 0.9,
    "certainty": 0.85,
    "gloss": "Pool exhaustion under load — seen 3x in prod. Always traced to "
             "unbounded concurrent queries."
})

# Save the architectural constraint it produced
await call_pod("save_event", {
    "user_id": user_id,
    "key": "max-db-connections-constraint",
    "signal_type": "claim",
    "value": 1.0,
    "certainty": 0.95,
    "gloss": "Hard cap: 20 DB connections per service instance. Non-negotiable.",
    "meta": {"set_by": "ops", "date": "2025-08"}
})

# Link them — failure ENABLES constraint (caused it to exist)
await call_pod("link_events", {
    "user_id": user_id,
    "source_key": "connection-pool-exhaustion",
    "source_signal_type": "pattern",
    "target_key": "max-db-connections-constraint",
    "target_signal_type": "claim",
    "relationship": "ENABLES",
    "note": "Pool exhaustion incidents caused this constraint to be imposed"
})
```
Attach an ADR or design doc to the constraint
```python
await call_pod("link_resource", {
    "user_id": user_id,
    "key": "max-db-connections-constraint",
    "signal_type": "claim",
    "resource_type": "url",
    "resource": {
        "name": "ADR-012: DB connection limits",
        "url": "https://github.com/acme/backend/blob/main/docs/adr/012-db-connections.md",
        "note": "Full decision record with context and alternatives considered"
    }
})
```
ℹ️ Linked resources surface automatically. When the agent calls get_relevant_context on anything related to DB connections, the ADR link comes with it. The agent can cite the decision record without being told it exists.
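A sketch of what that looks like from the agent's side. The response shape returned by `get_relevant_context` is an assumption here (the stub below invents a `signals` list with nested `resources`); the point is that attached decision records ride along with the retrieved context.

```python
import asyncio

# Stubbed transport — the real get_relevant_context payload shape is an
# assumption; swap in the real client in practice.
async def call_pod(tool, payload):
    return {
        "signals": [{
            "key": "max-db-connections-constraint",
            "gloss": "Hard cap: 20 DB connections per service instance.",
            "resources": [{
                "name": "ADR-012: DB connection limits",
                "url": "https://github.com/acme/backend/blob/main/docs/adr/012-db-connections.md",
            }],
        }]
    }

async def cite_constraints(user_id, query):
    ctx = await call_pod("get_relevant_context",
                         {"user_id": user_id, "query": query})
    # Collect every attached decision record so the agent can cite it.
    return [(s["key"], r["url"])
            for s in ctx["signals"]
            for r in s.get("resources", [])]

citations = asyncio.run(cite_constraints("u-123", "why is the DB pool capped?"))
```

The agent never needed to be told ADR-012 exists; retrieving the constraint pulled the link in with it.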

Human Alignment

The agent observes code — but engineers hold the institutional knowledge. Decisions made in Slack, approaches rejected in a whiteboard session two quarters ago, the reason a particular library got banned from the codebase — none of that is observable from sessions alone. Developers add it via chat.

Adding context the agent can't see
Please save to my coding pod that we decided against GraphQL last year — the team found the N+1 problem wasn't worth the flexibility at our scale. Add that to the acme/backend repo context.
Auditing the codebase map
What does my pod know about the acme/backend architecture? Show me the constraints it's tracking.
Seeing the knowledge graph
Show me a diagram of the coding pod centered on the DB connection constraint. What connects to it?
Correcting drift
My pod says we use conventional commits but we switched back to freeform messages six months ago. Please update that.