Every engineer's AI builds a knowledge graph. Engram syncs them into a shared hive mind that grows smarter with every session — a living wiki that never goes stale.
Three primitives. Infinite context.
Store knowledge as atomic memories with semantic embeddings. Tag by project and topic for scoped recall.
mcp: remember("User prefers tabs over spaces", project: "global")
Find memories by meaning, not keywords. Vector similarity + full-text search + recency and importance ranking.
mcp: recall("authentication architecture", project: "Lattice")
Link memories into a knowledge graph with typed edges: relates_to, supersedes, contradicts, part_of, and more.
mcp: connect(from: 42, to: 17, relation: "supersedes")
Each engineer's AI autonomously learns patterns, decisions, and conventions — building a personal knowledge graph across every coding session.
A complete memory system — not just a vector store.
Each team member's AI agent builds a personal knowledge graph. Expose projects to your team and an AI consolidates contributions into a high-signal shared graph — no meetings needed.
Unlike static wikis that rot, the team graph is continuously updated as engineers work. Auto-consolidation merges redundant knowledge, flags contradictions, and supersedes outdated decisions.
Memories aren't isolated — they're connected. Traverse edges to discover related knowledge that flat search would miss. Supports depth-based graph traversal on recall.
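A rough sketch of what depth-bounded traversal over typed edges looks like. This is illustrative only — a plain breadth-first walk over an in-memory adjacency map, not Engram's actual implementation:

```python
from collections import deque

def traverse(edges, start, max_depth):
    """Breadth-first walk over typed edges, collecting memories
    reachable from `start` within `max_depth` hops.
    `edges` maps memory id -> list of (neighbor id, relation)."""
    seen = {start}
    results = []
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand past the depth bound
        for neighbor, relation in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                results.append((neighbor, relation, depth + 1))
                queue.append((neighbor, depth + 1))
    return results

# Memory 42 supersedes 17; 17 is part_of 3 — two hops away.
edges = {42: [(17, "supersedes"), (7, "relates_to")],
         17: [(3, "part_of")]}
print(traverse(edges, 42, 2))
```

A flat similarity search starting from memory 42 would never surface memory 3; the traversal reaches it through the `part_of` edge.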
Hybrid retrieval combining semantic embeddings (paraphrase-MiniLM-L6-v2) with SQLite FTS5 full-text search. Finds memories by meaning and keywords simultaneously.
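Conceptually, hybrid ranking blends a semantic score, a keyword score, and a recency signal into one number. The weights, decay constant, and field names below are assumptions for illustration, not Engram's actual scoring function:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def hybrid_score(query_vec, mem, now, w_sem=0.6, w_kw=0.3, w_rec=0.1):
    """Blend semantic similarity, keyword rank, and recency into one score."""
    semantic = cosine(query_vec, mem["embedding"])   # meaning match
    keyword = mem["fts_rank"]                        # normalized full-text rank, 0..1
    age_days = (now - mem["created"]) / 86400
    recency = math.exp(-age_days / 30)               # exponential decay, 30-day constant
    return w_sem * semantic + w_kw * keyword + w_rec * recency

mem = {"embedding": [1.0, 0.0], "fts_rank": 1.0, "created": 0.0}
print(hybrid_score([1.0, 0.0], mem, now=0.0))  # ~1.0 for a fresh, exact match
```

The point of the blend: a memory that matches only on keywords or only on meaning still ranks, but one that matches on both — and is recent — ranks highest.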
Community detection groups related memories automatically. Duplicate detection prevents redundancy at write time. Batch organize and consolidate tools keep things clean.
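Write-time duplicate detection can be as simple as comparing a new memory's embedding against what's already stored and rejecting near-identical entries. A minimal sketch, assuming a similarity threshold — the threshold value and storage shape are illustrative, not Engram's internals:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def insert_if_novel(store, text, embedding, threshold=0.92):
    """Write-time dedup: reject a memory whose embedding is
    near-identical to one already stored."""
    for existing in store:
        if cosine(embedding, existing["embedding"]) >= threshold:
            return existing  # duplicate: hand back the existing memory instead
    store.append({"text": text, "embedding": embedding})
    return store[-1]

store = []
insert_if_novel(store, "tabs over spaces", [1.0, 0.0])
insert_if_novel(store, "prefers tabs", [0.99, 0.14])   # near-duplicate, rejected
insert_if_novel(store, "auth uses JWT", [0.0, 1.0])    # novel, stored
print(len(store))  # 2
```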
Mark memories as private so they never leave your machine. Control which projects are exposed to which teams. Your personal knowledge stays personal.
Group memories into narrative sessions — debugging hunts, feature implementations, code reviews. Replay entire episodes to restore context from past work.
Checkpoint work-in-progress across sessions. Save plans, progress, and context. Resume exactly where you left off — even days later.
Free forever for local use. Sync and share when you're ready.
curl -fsSL https://raw.githubusercontent.com/jsflax/Engram/main/scripts/install.sh | bash
brew install jsflax/tap/engram
Or download the desktop app with 3D visualizer:
Download Engram.app
Requires macOS 15+ and Claude Code (or any MCP-compatible client).
The honest answers.
Engram is an MCP server that gives AI agents persistent memory across sessions. It stores knowledge as a semantic graph — memories with embeddings, typed edges, topics, and projects — in a local SQLite database. Your AI can remember things, recall them by meaning, and build connections between ideas over time.
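To make "semantic graph in SQLite" concrete, here is a toy schema with the same shape — memories with embeddings, projects, and topics, plus typed edges between them. Table and column names are hypothetical, not Engram's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE memories (
    id        INTEGER PRIMARY KEY,
    content   TEXT NOT NULL,
    project   TEXT,
    topic     TEXT,
    embedding BLOB,      -- serialized vector for semantic recall
    created   REAL
);
CREATE TABLE edges (
    src      INTEGER REFERENCES memories(id),
    dst      INTEGER REFERENCES memories(id),
    relation TEXT CHECK (relation IN
        ('relates_to', 'supersedes', 'contradicts', 'part_of'))
);
""")
conn.execute("INSERT INTO memories (id, content, project) VALUES (42, 'new auth design', 'Lattice')")
conn.execute("INSERT INTO memories (id, content, project) VALUES (17, 'old auth design', 'Lattice')")
conn.execute("INSERT INTO edges VALUES (42, 17, 'supersedes')")
row = conn.execute("SELECT relation FROM edges WHERE src = 42").fetchone()
print(row[0])  # supersedes
```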
Any client that supports the Model Context Protocol (MCP). Right now that's primarily Claude Code, but any MCP-compatible client — including custom agents built with the Claude Agent SDK — can use Engram as a tool server.
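Registering an MCP server with a client typically means pointing it at a launch command. The `mcpServers` shape below is the standard MCP client config format; the server name and the `engram serve` command are assumptions — use whatever the installer sets up:

```json
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["serve"]
    }
  }
}
```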
Each team member's AI agent builds a personal knowledge graph as they code. When you expose projects to your team, those memories flow into a shared team graph. An AI agent consolidates contributions — deduplicating, resolving contradictions, and synthesizing cross-team insights. When anyone recalls, results merge personal and team knowledge with attribution.
On the Free plan, completely — everything stays in a SQLite database on your machine. Nothing leaves your computer. On Pro with cloud sync, your data passes through our servers for sync and backup. We're working on end-to-end encryption so the server only ever sees opaque blobs, but that's not shipped yet. We'll be transparent about where we are on that. On Team plans, you control exactly which projects are exposed — and memories marked private never leave your machine.
A vector store gives you similarity search over documents. Engram gives you that plus a knowledge graph with typed relationships (supersedes, contradicts, part_of), project and topic scoping, episodic memory for replaying past sessions, task checkpointing, auto-organization, and duplicate detection. It's designed for an agent's working memory, not document retrieval.
Yes. The core memory system is fully local — SQLite database, local embeddings, local search. Cloud sync is optional and only needed if you want cross-device access or backup. If you're offline, everything keeps working.
No. The Free plan includes every feature except cloud sync. Unlimited memories, full knowledge graph, vector search, 3D visualizer, all MCP tools, auto-organization — all free, forever. You only pay if you want your memories synced across devices.
Your local database is always yours — nothing gets deleted or locked. You just lose cloud sync. Your memories stay on your machine exactly as they were.
The client is source-available under the Business Source License (BSL). You can read the code, self-host, and use it for internal purposes. The restriction is on offering it as a competing commercial service. The license converts to Apache 2.0 after the change date. The server is private.