43 MCP tools · Rust · $0.00/query

Your AI agent has amnesia.
m1nd remembers.

Every tool finds what exists. m1nd finds what's missing.

Every session, your agent re-reads your entire codebase — burning $0.05–$0.50 per search cycle in LLM tokens, forgetting everything by next session. m1nd replaces that loop with a persistent code graph that learns from usage, detects structural holes no grep ever could, and answers in 31ms at $0.00. Local Rust binary. No API keys. No meter ticking.

9,767 nodes · 26,557 edges · 335 files
28 languages. Zero configuration.
Regex extractors, tree-sitter parsers, and generic fallback — all built in.
Python
Rust
TypeScript
JavaScript
Go
Java
C
C++
C#
Ruby
PHP
Swift
Kotlin
Scala
Bash
Lua
R
HTML
CSS
JSON
Elixir
Dart
Zig
Haskell
OCaml
TOML
YAML
SQL
+ generic regex fallback for any language not listed
Works with every MCP client
Standard MCP protocol. One binary, any client.
Claude Code
Cursor
Windsurf
GitHub Copilot
Zed
Cline
Roo Code
Continue
OpenCode
Amazon Q
Built with Rust
MIT License
Zero LLM tokens
Local only — your code never leaves

AI agents are powerful reasoners.
Terrible navigators.

Every time an agent needs context, it fires LLM calls to search, read, and guess. On a 10,000-file codebase, that's hundreds of dollars a week — and it still misses what isn't there.

"What does this change affect?"
Blast radius is invisible without structural analysis.
m1nd.impact
"What am I missing?"
Structural holes are undetectable by keyword search.
m1nd.missing
"What else will change?"
Co-change patterns require historical context.
m1nd.predict
"How are these connected?"
Dependency chains span files, modules, and abstraction layers.
m1nd.why
Approach · What it does · Why it fails
Full-text search · Matches tokens · Finds what you said, not what you meant
RAG · Embeds chunks, retrieves top-K · Each retrieval is amnesiac; no relationships
Static analysis · AST, call graphs · Frozen snapshot; can't answer "what if?"; can't learn
Knowledge graphs · Triple stores · Manual curation; only returns what was explicitly encoded

Six capabilities no other tool has.

Not incremental improvements. Structural differences.

The graph learns
Hebbian Plasticity
When results are correct, paths strengthen. When wrong, they weaken. Every query makes the next one more accurate. The graph adapts to your codebase, not the other way around.
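The mechanism, in miniature (an illustrative Python sketch, not m1nd's Rust internals; the update rule, default weight, and learning rates are assumptions):

```python
def hebbian_update(weights, path, feedback, ltp_rate=0.12, ltd_rate=0.08):
    """Strengthen edges along a correct path (LTP), weaken along a wrong
    one (LTD). weights maps (src, dst) edges to values in [0, 1]."""
    delta = ltp_rate if feedback == "correct" else -ltd_rate
    for src, dst in zip(path, path[1:]):
        w = weights.get((src, dst), 0.5)
        # clamp so repeated feedback saturates instead of diverging
        weights[(src, dst)] = min(1.0, max(0.0, w + delta))
    return weights

edges = {("graph.rs", "activation.rs"): 0.50, ("activation.rs", "xlr.rs"): 0.50}
hebbian_update(edges, ["graph.rs", "activation.rs", "xlr.rs"], "correct")
# both traversed edges are now stronger (0.50 -> 0.62)
```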
The graph cancels noise
XLR Differential Processing
Borrowed from audio engineering. Two signals travel the same path -- the difference reveals the truth. False-positive edges are cancelled before they pollute results.
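The principle can be sketched like this (illustrative Python; the threshold and the way the baseline signal is produced are assumptions, not m1nd's implementation):

```python
def xlr_differential(signal, reference, threshold=0.1):
    """Keep only activation that survives subtracting a noise baseline.

    signal:    node -> activation from the real query
    reference: node -> activation from a baseline run (e.g. a generic
               query sent down the same paths)
    Hub nodes that light up for *any* query appear in both signals and
    cancel out, like common-mode noise on a balanced XLR cable.
    """
    out = {}
    for node, a in signal.items():
        diff = a - reference.get(node, 0.0)
        if diff > threshold:
            out[node] = diff
    return out

signal = {"fn::propagate": 0.94, "mod::logger": 0.40, "mod::util": 0.35}
baseline = {"mod::logger": 0.38, "mod::util": 0.33}  # fire for any query
result = xlr_differential(signal, baseline)  # only fn::propagate survives
```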
The graph remembers
Trail System
Save, resume, and merge investigation state across sessions. An agent can pick up exactly where another left off. No re-discovery. No lost context.
The graph tests claims
Hypothesis Engine
25,015 paths explored in 58ms. Bayesian confidence scoring. The graph doesn't just retrieve -- it evaluates whether a structural claim is plausible.
The graph simulates alternatives
Counterfactual Engine
4,189 affected nodes analyzed in 3ms. Remove a module virtually, see what breaks, measure cascade depth -- before touching a single line of code.
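The core of removal simulation is a reachability diff, sketched here in Python (illustrative only; m1nd's cascade-depth and resilience metrics are beyond this toy):

```python
from collections import deque

def counterfactual_remove(adj, roots, removed):
    """Simulate removal: which nodes become unreachable without `removed`?

    adj: node -> list of dependency targets; roots: entry points.
    """
    def reachable(skip):
        seen, queue = set(), deque(roots)
        while queue:
            n = queue.popleft()
            if n in seen or n == skip:
                continue
            seen.add(n)
            queue.extend(adj.get(n, []))
        return seen

    # orphaned = reachable now, but not reachable once `removed` is gone
    return reachable(None) - reachable(removed) - {removed}

adj = {
    "main.rs": ["graph.rs", "cli.rs"],
    "graph.rs": ["activation.rs"],
    "activation.rs": ["xlr.rs"],
    "cli.rs": [],
}
# activation.rs and xlr.rs are only reachable through graph.rs
orphans = counterfactual_remove(adj, ["main.rs"], "graph.rs")
```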
The graph lets you navigate
Stateful exploration with history, branching, and undo. Enter a perspective, follow routes, fork your investigation, compare paths. 12 tools. Like a surgeon inside your codebase.

The core loop

m1nd doesn't search your data -- it activates it. Query a concept, and the graph lights up.


01
Ingest
Build property graph from source data. Code, JSON, any domain.
02
Activate
Spreading activation across 4 dimensions with XLR noise cancellation.
03
Learn
Hebbian plasticity: correct results strengthen connections, wrong results weaken them.
04
Persist
Graph + plasticity state saved to disk. Next session starts where this one left off.

Four activation dimensions

Structural
Graph topology: edges, PageRank, community structure. How things are wired.
Semantic
Label similarity: char n-grams, co-occurrence (PPMI), synonym expansion.
Temporal
Time dynamics: recency decay, change velocity, co-change history.
Causal
Dependency flow: directed causation along import, call, and contain edges.
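One way the four per-dimension scores could be blended into a single ranking (illustrative sketch; the blend weights are assumptions, not m1nd's tuning):

```python
def combined_score(structural, semantic, temporal, causal,
                   weights=(0.35, 0.30, 0.15, 0.20)):
    """Blend four per-dimension activations into one rank score."""
    return sum(w * d for w, d in
               zip(weights, (structural, semantic, temporal, causal)))

def rank(candidates):
    """candidates: node -> (structural, semantic, temporal, causal)."""
    return sorted(candidates, key=lambda n: -combined_score(*candidates[n]))

scores = {
    "fn::propagate_wavefront": (0.97, 0.80, 0.40, 0.85),
    "fn::compute_pagerank":    (0.78, 0.30, 0.20, 0.40),
}
order = rank(scores)  # propagate_wavefront outranks compute_pagerank
```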

Every search tool was built for humans.
m1nd was built for agents.

2026 is the year of AI slop — agents brute-forcing their way through codebases with grep loops, burning tokens like kindling. grep, ripgrep, tree-sitter: brilliant tools. For humans who read terminals. But an AI agent doesn't want 200 lines of output to interpret linearly. It wants a graph with weights, dimensions, and a direct answer: "what matters and what's missing" — ready to decide, not to parse.

The end of context window theater. No more feeding search results to an LLM so it can search again.

The slop cycle
1. Agent calls grep → 200 lines of noise
2. Feeds entire output to LLM → burns tokens parsing text
3. LLM decides to grep again → repeat 3–5 times
4. Finally acts on incomplete picture
$0.30–$0.50 burned per search. 10 seconds gone. Structural blind spots remain.
Precision surgery
1. Agent calls m1nd.activate
2. Gets ranked subgraph with confidence scores
3. Sees structural holes no text search could find
4. Acts immediately with full picture
1 call. 31ms. $0.00. Zero tokens. Complete structural understanding.

m1nd doesn't output text for an LLM to re-interpret. It outputs structured decisions — weighted nodes, confidence scores, dimensional rankings, structural holes. The format an agent actually needs to act, not more slop to chew on.

Why $0.00 is real — not marketing

When an AI agent searches your code with an LLM, it sends your code to a cloud API, pays per token (input + output), waits for the response, and often repeats 3–5 times. Every cycle costs $0.05–$0.50.

m1nd uses zero LLM calls. Your codebase lives as a weighted graph in local RAM. Queries are pure math — spreading activation, graph traversal, linear algebra — executed by a Rust binary on your machine. No API call. No tokens. No data leaves your computer. That's why it costs $0.00 and runs in 31ms.
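Spreading activation itself is plain graph math, which is why no API call is needed. A minimal sketch (illustrative Python; decay, threshold, and hop limit are assumed parameters):

```python
def spread(adj, seeds, decay=0.6, threshold=0.05, max_hops=3):
    """Energy flows outward from seed nodes along weighted edges,
    attenuating by `decay` per hop; sub-threshold activation is dropped.

    adj:   node -> list of (neighbor, edge_weight)
    seeds: node -> initial activation
    """
    act = dict(seeds)
    frontier = dict(seeds)
    for _ in range(max_hops):
        nxt = {}
        for node, energy in frontier.items():
            for nb, w in adj.get(node, []):
                gain = energy * w * decay
                if gain >= threshold:
                    nxt[nb] = max(nxt.get(nb, 0.0), gain)
        for nb, gain in nxt.items():
            act[nb] = max(act.get(nb, 0.0), gain)
        frontier = nxt
    # highest-activation nodes first
    return dict(sorted(act.items(), key=lambda kv: -kv[1]))

adj = {"activation.rs": [("graph.rs", 0.9), ("xlr.rs", 0.8)],
       "graph.rs": [("builder.rs", 0.7)]}
result = spread(adj, {"activation.rs": 1.0})
```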

LLM grep cycle
Your code → Cloud API → tokens → $$
Like asking Google for directions every time you need to cross the street
m1nd query
Graph in RAM → math → answer → $0
Like looking at a map that's already on your desk

Navigate code like a surgeon.

grep is stateless. Every search starts from zero. m1nd Perspectives maintain a living navigation session — with history, branches, and suggestions.

🔍
start
Enter a perspective. A navigable surface forms around your query.
🧭
routes
Browse ranked routes — weighted paths through the graph.
🎯
follow
Move focus to a target. New routes synthesize automatically.
🔀
branch
Fork your exploration. Like git branches, but for investigation.
💡
suggest
m1nd recommends your next move based on navigation history.
⚖️
compare
Diff two perspectives. Shared nodes, unique paths, dimension deltas.

12 tools. Stateful exploration with memory, branching, and undo.
No other code navigation tool on the market does this.

perspective session
$ m1nd.perspective_start --query "payment flow"
Perspective p-7a3f opened · 6 routes synthesized

$ m1nd.perspective_follow --route "checkout → stripe"
Focus moved to stripe/client.rs · 4 new routes from this node

$ m1nd.perspective_branch --name "refund-path"
Branch created · forked from stripe/client.rs

$ m1nd.perspective_suggest
refund/handler.rs (affinity: 0.91, structural + causal)
  "Based on 3 hops: this is the most likely next dependency"

$ m1nd.perspective_compare --a "main" --b "refund-path"
Shared: 12 nodes · Unique to refund-path: 5 nodes · Divergence: 0.34
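The state the session above carries around can be sketched as a tiny data structure (illustrative Python, not m1nd's server; real perspectives also synthesize routes from the graph):

```python
class Perspective:
    """Stateful navigation in miniature: focus, history, branching."""

    def __init__(self, query, focus=None, history=None):
        self.query = query
        self.focus = focus
        self.history = list(history or [])

    def follow(self, node):
        # moving forward records where we came from, enabling undo
        if self.focus is not None:
            self.history.append(self.focus)
        self.focus = node

    def back(self):
        if self.history:
            self.focus = self.history.pop()
        return self.focus

    def branch(self):
        # fork: the copy shares the past but diverges from here on
        return Perspective(self.query, self.focus, self.history)

p = Perspective("payment flow")
p.follow("checkout.rs")
p.follow("stripe/client.rs")
fork = p.branch()
fork.follow("refund/handler.rs")   # explore the refund path on the fork
p.back()                           # main line rewinds to checkout.rs
```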

Everything an agent needs.

Callable by any MCP client via JSON-RPC stdio. No SDK required.

Foundation (13)
m1nd.ingest
Load data into the graph. Code extractor or JSON descriptor.
m1nd.activate
Spreading activation query -- "what's related to X?"
m1nd.impact
Blast radius -- "what does changing X affect?"
m1nd.why
Path explanation -- "how are A and B connected?"
m1nd.learn
Hebbian feedback -- "this result was correct / wrong / partial."
m1nd.drift
Weight drift analysis -- "what changed since last session?"
m1nd.health
Diagnostics -- node/edge counts, sessions, persistence status.
m1nd.seek
Targeted node lookup -- find specific entities by ID or pattern.
m1nd.scan
Broad graph scan -- enumerate nodes by type, label, or neighborhood.
m1nd.timeline
Temporal sequence -- how nodes changed over time.
m1nd.diverge
Branch analysis -- where do two paths diverge?
m1nd.warmup
Context priming -- "prepare for task X."
m1nd.federate
Multi-repo federation -- stitch graphs across repositories.
Perspective Navigation (12)
perspective.start
Open a new perspective -- a scoped, filterable view into the graph.
perspective.routes
Find all paths between two nodes within the perspective.
perspective.follow
Navigate forward -- follow an edge from the current position.
perspective.back
Navigate backward -- retrace to a previous position.
perspective.peek
Preview neighbors without moving -- look before you leap.
perspective.inspect
Full detail on the current node -- edges, weights, metadata.
perspective.suggest
AI-guided navigation -- "where should I go next?"
perspective.affinity
Score how related two nodes are from this vantage point.
perspective.branch
Fork the current perspective into parallel explorations.
perspective.compare
Diff two perspectives -- what does each see that the other doesn't?
perspective.list
List all active perspectives and their positions.
perspective.close
Close a perspective and release its resources.
Lock System (5)
lock.create
Snapshot the graph state -- create a baseline for comparison.
lock.watch
Monitor changes against the lock -- real-time drift detection.
lock.diff
Diff current state against lock -- what changed?
lock.rebase
Update the lock to current state -- accept all changes.
lock.release
Release the lock and free the snapshot memory.
Superpowers (13)
m1nd.hypothesize
Test a structural claim -- Bayesian confidence over 25K+ paths.
m1nd.counterfactual
Removal simulation -- "what breaks if we remove X?"
m1nd.missing
Structural hole detection -- "what's missing from this picture?"
m1nd.resonate
Harmonic analysis -- standing waves, resonant frequencies.
m1nd.fingerprint
Equivalence detection -- "are these two things duplicates?"
m1nd.trace
Full activation trace -- every hop, every weight, every decision.
m1nd.validate_plan
Validate an implementation plan against the graph structure.
m1nd.predict
Co-change prediction -- "what else will need to change?"
m1nd.differential
XLR differential -- separate signal from noise between two queries.
trail.save
Save current investigation state -- all activations, perspectives, context.
trail.resume
Resume a saved trail -- pick up exactly where you left off.
trail.merge
Merge two trails -- combine parallel investigations.
trail.list
List all saved trails with metadata and timestamps.

Real numbers from a real codebase.

Measured on a 335-file Rust + Python + TypeScript project. No cherry-picking.

Full ingest
910ms
335 files → 9,767 nodes → 26,557 edges
Spreading activation
31-77ms
4-dimensional wavefront, XLR noise gate
Blast radius (impact)
5-52ms
Full cascade analysis with hop depth
Counterfactual simulation
3ms
4,189 affected nodes evaluated
Lock diff
0.08µs
Snapshot comparison, near-instant
Federation (2 repos)
1.3s
18,203 cross-repo edges stitched

m1nd vs. the alternatives.

Different category, not just a better version.

The token tax is real

Every time an AI agent greps your codebase and feeds results to an LLM, you pay. Cursor users have reported $22K/month in overages. Teams running Copilot + frontier models on large repos see $200–$500/month in invisible search costs alone.

30 searches/day × $0.30/search = $9/day = $270/month
That's $3,240/year on search alone. Per developer.
m1nd
$0
/query. /day. /month. /forever.
Local Rust binary. No API keys. No cloud.
No data leaves your machine. Ever.
Feature · m1nd · Sourcegraph · Cursor · Copilot · Greptile
Code graph · Full property graph · Symbol index · None · None · AST index
Learns from use · Hebbian plasticity · No · No · No · No
Persists investigations · Trail system · No · No · No · No
Tests hypotheses · Bayesian engine · No · No · No · No
Simulates removal · Counterfactual · No · No · No · No
Multi-agent locks · Lock system (works) · No · Attempted, failed · No · No
Multi-repo · Federation · Yes · No · No · Per-repo
Search latency · 31ms (local) · ~200ms (cloud) · 320ms+ (cloud) · 500–800ms · Cloud-dependent
Agent interface · 43 MCP tools · API · Built-in only · Built-in only · API
Monthly cost · $0 (forever) · $59/user/mo · $20+/mo (overages to $22K) · $19+/mo · $30/dev/mo
Capabilities (of 16) · 16/16 · 1/16 · 0/16 · 0/16 · 1/16

3–4 years ahead. Measured.

We benchmarked m1nd against every tool in the category across 16 capabilities. m1nd scores 16/16. The best competitor scores 3. Cursor and Copilot score zero.

Capabilities covered (out of 16)
m1nd · 16/16 (100%)
CodeGraphCtx · 3/16
Joern · 2/16
CodeQL · 2/16
ast-grep · 2/16
Letta Code · 2/16
Sourcegraph · 1/16
Augment · 1/16
Greptile · 1/16
Cursor · 0/16
Copilot · 0/16
Six things no one else can claim
01 — ZERO COMPETITORS
Hebbian plasticity on code graphs
The graph rewires itself based on which query paths lead to correct results. No other tool does this. No published paper describes it for code.
02 — 10-30x FASTER
$0.00/query at 31ms
Local Rust binary. No LLM tokens burned per query. Every LLM-dependent tool pays $0.10-$0.50 per search and waits 300ms-2s for results.
03 — ZERO PRIOR ART
Finds what's MISSING in code
Structural hole detection from Burt's sociology theory, applied to code graphs. Finds gaps in your architecture that no search can surface.
04 — LARGEST SURFACE
43 MCP tools = complete cognitive cycle
Ingest, activate, hypothesize, simulate, learn, navigate, federate. The largest competitor offers ~15 tools. Most offer 3-5.
05 — UNPUBLISHED TECHNIQUE
XLR noise cancellation
Borrowed from audio engineering: Cross-Layer Resonance filtering suppresses false-positive edges during spreading activation. No prior art exists for code graphs. Zero competitors, zero papers.
06 — OTHERS TRIED AND FAILED
Multi-agent graph locking
Cursor attempted multi-agent file locking and abandoned it. m1nd ships 5 lock tools that work: snapshot, watch, diff, rebase, release. Multiple agents, one consistent graph.
3–4 years ahead
Built on 6 disciplines no single competitor covers: cognitive science, sociology, neuroscience, signal processing, physics, and distributed systems.
Read the full competitive analysis →

A full session, start to finish.

m1nd ingesting and analyzing a production codebase. Every number is real.

m1nd production session
1. Ingest > m1nd.ingest path=./
Scanned 335 source files across 3 languages
9,767 nodes, 26,557 edges built in 910ms
Co-change history: 214 commit groups analyzed

2. Activate > m1nd.activate query="spreading activation"
Seeds: activation.rs, graph.rs, xlr.rs
Wavefront: 3 → 18 → 47 nodes
0.94 fn::propagate_wavefront (structural 0.97)
0.91 fn::score_candidates (semantic 0.93)
0.87 fn::xlr_gate (causal 0.89)
0.72 fn::compute_pagerank (structural 0.78)
XLR cancelled 12 noise edges · 31ms

3. Impact > m1nd.impact targets=["graph.rs"]
Blast radius: 23 nodes across 4 hops
Direct: activation.rs, xlr.rs, temporal.rs, builder.rs
Indirect: server.rs, tools.rs, session.rs
High-impact: graph.rs is a keystone (PageRank top 3%)

4. Hypothesize > m1nd.hypothesize "graph.rs depends on temporal.rs"
Explored 25,015 paths in 58ms
Confidence: 0.87 (strong structural + temporal evidence)
Supporting: 14 direct paths, 3 co-change clusters
Weakening: 2 paths through deprecated module

5. Counterfactual > m1nd.counterfactual remove=["graph.rs"]
CASCADE: 4,189 affected nodes in 3ms
ORPHANED: 8 functions become unreachable
RESILIENT: temporal.rs, domain.rs survive via alternate paths
Simulation complete: graph.rs is load-bearing

6. Learn > m1nd.learn feedback=correct nodes=[activation.rs, xlr.rs]
LTP applied: 14 edges strengthened (avg +0.12)
Plasticity state persisted · 2,847 modified weights total

Up and running in 60 seconds.

1

Build

cargo build --release
2

Run

m1nd starts as a JSON-RPC stdio server. MCP-compatible out of the box.

./target/release/m1nd-mcp
3

Ingest & Query

Send MCP messages over stdin. State persists automatically.

// Ingest a codebase
{"method":"tools/call","params":{"name":"m1nd.ingest",
  "arguments":{"path":"/your/project","agent_id":"my-agent"}}}

// Query
{"method":"tools/call","params":{"name":"m1nd.activate",
  "arguments":{"query":"authentication","agent_id":"my-agent"}}}

// Learn from results
{"method":"tools/call","params":{"name":"m1nd.learn",
  "arguments":{"feedback":"correct","node_ids":["file::src/auth.rs"]}}}
4

Any domain

Not just code. Ingest any knowledge graph from JSON.

// JSON descriptor -- works for any domain
{"nodes": [
  {"id":"concept::activation", "label":"Spreading Activation", "type":"Concept"},
  {"id":"concept::plasticity", "label":"Hebbian Plasticity", "type":"Process"}
],
"edges": [
  {"source":"concept::activation", "target":"concept::plasticity",
   "relation":"enables", "weight":0.8}
]}

What m1nd is not.

No tool is everything. Here is where m1nd has edges.

Not a code editor
m1nd navigates and analyzes. It does not write, refactor, or apply patches. Pair it with an agent that does.
Not a search engine
It finds structural relationships, not string matches. For literal text search, use grep. For structural understanding, use m1nd.
Graph lives in memory
The full graph is kept in RAM for speed. Large monorepos (100K+ files) will need significant memory. Persistence is to disk on shutdown.
Single-machine today
Federation stitches graphs from multiple repos, but the server itself runs on one machine. Distributed mode is not yet implemented.
~15,500
Lines of Rust
159
Tests
43
MCP Tools
6+1
Languages
~8 MB
Binary (ARM64)

Stop paying for amnesia.
Give your agents a m1nd.

Open source. Local. Zero cost. One binary. 43 tools.
Your codebase never leaves your machine.

Get m1nd on GitHub Sponsor this project

MIT Licensed · Rust · ~8 MB binary · Works with any MCP client