Build an AI memory system with hybrid search and LLM-powered Q&A.
Warning: create() will overwrite an existing file without asking and erase all of its data.
Function           | Purpose                    | If File Exists   | Parameter Order
create(path, kind) | Create a new .mv2 file     | DELETES all data | path first, then kind
use(kind, path)    | Open an existing .mv2 file | Preserves data   | kind first, then path
Always check whether the file exists before choosing between them:
const mem = existsSync(path) ? await use('basic', path) : await create(path, 'basic');
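
The same check as a self-contained snippet, using the SDK imports shown under Common Patterns below (the knowledge.mv2 path simply mirrors the CLI example):

import { existsSync } from 'fs';
import { create, use } from '@memvid/sdk';

const path = 'knowledge.mv2';
const mem = existsSync(path)
  ? await use('basic', path)      // open and preserve existing data
  : await create(path, 'basic');  // create fresh; would erase an existing file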

Install

npm install -g memvid-cli
Works on macOS, Linux, and Windows. Requires Node.js 14+.

Create & Ingest

# Create a new memory
memvid create knowledge.mv2

# Add documents
echo "Alice works at Anthropic as a Senior Engineer in San Francisco." | \
  memvid put knowledge.mv2 --title "Team Info"

echo "Bob joined OpenAI last month as a Research Scientist." | \
  memvid put knowledge.mv2 --title "New Hires"

echo "Project Alpha has a budget of $500k and is led by Alice." | \
  memvid put knowledge.mv2 --title "Projects"

# Search works immediately (BM25 lexical search)
memvid find knowledge.mv2 --query "who works at AI companies"
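
If you are driving memvid from the SDK instead of the CLI, the equivalent ingest-and-search flow is sketched below. The method names and document shape come from the Common Patterns section further down; the shape of the value returned by find is not documented here, so the result is only logged.

import { existsSync } from 'fs';
import { create, use } from '@memvid/sdk';

const path = 'knowledge.mv2';
const mem = existsSync(path) ? await use('basic', path) : await create(path, 'basic');

// The same three documents as the CLI example above
await mem.putMany([
  { title: 'Team Info', label: 'kb', text: 'Alice works at Anthropic as a Senior Engineer in San Francisco.' },
  { title: 'New Hires', label: 'kb', text: 'Bob joined OpenAI last month as a Research Scientist.' },
  { title: 'Projects', label: 'kb', text: 'Project Alpha has a budget of $500k and is led by Alice.' }
]);

// Lexical (BM25) search works with no embedder configured
const hits = await mem.find('who works at AI companies');
console.log(hits);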

Ask Questions

# Ask with LLM synthesis (requires OPENAI_API_KEY)
export OPENAI_API_KEY=sk-...
memvid ask knowledge.mv2 --question "What is Alice's role?" --use-model openai
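
memvid ask handles retrieval and synthesis for you. If you prefer to drive the same pattern yourself from the SDK, one sketch is to retrieve with find and pass the hits to an LLM. This is not a built-in memvid API, and the field names on each hit are assumptions.

import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const question = "What is Alice's role?";

// `mem` is the memory opened in the earlier SDK sketch
const hits = await mem.find(question);
// Assumption: each hit carries the matched text; adjust to the actual result shape
const context = hits.map((h: any) => h.text ?? JSON.stringify(h)).join('\n');

const completion = await openai.chat.completions.create({
  model: 'gpt-4o-mini', // any chat model works here
  messages: [
    { role: 'system', content: 'Answer using only the provided context.' },
    { role: 'user', content: `Context:\n${context}\n\nQuestion: ${question}` }
  ]
});
console.log(completion.choices[0].message.content);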

Extract Facts

# Extract structured facts
memvid enrich knowledge.mv2 --engine rules

# Query entity state (O(1) lookup)
memvid state knowledge.mv2 "Alice"
Output:
Entity: Alice
  employer: Anthropic
  role: Senior Engineer
  location: San Francisco

What You Built

In 5 minutes, you created a complete AI memory system with:
Feature           | Description
Hybrid Search     | Combines lexical (BM25) and semantic (vector) search
LLM Q&A           | Natural language questions with sourced answers
Entity Extraction | Structured facts with O(1) lookups
Single File       | Everything stored in one portable .mv2 file

Common Patterns

Using External Embeddings

import { create, use, OpenAIEmbeddings } from '@memvid/sdk';
import { existsSync } from 'fs';

const embedder = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'text-embedding-3-small'
});

const path = 'project.mv2';
const mem = existsSync(path)
  ? await use('basic', path)
  : await create(path, 'basic');

// `docs` is an array of { title, label, text } objects (see Batch Ingestion below)
await mem.putMany(docs, { embedder });
const results = await mem.find('query', { embedder });
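
Note that find still works without an embedder, falling back to the lexical BM25 index alone (that is what the CLI quickstart above relies on); in this example the embedder is passed at both ingest and query time so the vector side of hybrid search has embeddings to match against.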

Batch Ingestion

const docs = [
  { title: 'Doc 1', label: 'kb', text: 'Content 1' },
  { title: 'Doc 2', label: 'kb', text: 'Content 2' },
  { title: 'Doc 3', label: 'kb', text: 'Content 3' }
];

await mem.putMany(docs);
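
For larger ingests, the same putMany call accepts a programmatically built docs array. A minimal sketch, assuming a hypothetical notes/ directory of plain-text files:

import { readdirSync, readFileSync } from 'fs';
import { join, basename, extname } from 'path';

// Hypothetical layout: one .txt file per document
const dir = 'notes';
const docs = readdirSync(dir)
  .filter((file) => extname(file) === '.txt')
  .map((file) => ({
    title: basename(file, '.txt'),
    label: 'kb',
    text: readFileSync(join(dir, file), 'utf8')
  }));

await mem.putMany(docs);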

PDF Ingestion with Tables

memvid put project.mv2 --input report.pdf --title "Q4 Report" --tables
memvid tables list project.mv2
memvid tables export project.mv2 --table-id tbl_001 -o table.csv