Build web apps, agents, and AI applications with the Memvid Node.js SDK. Native bindings deliver blazing-fast performance with a TypeScript-first API.

Installation

npm install @memvid/sdk
# or
pnpm add @memvid/sdk
# or
yarn add @memvid/sdk
Requirements: Node.js 18+, macOS/Linux/Windows. Native bindings included - no extra dependencies needed.
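If you want to fail fast on an older runtime, a quick startup guard works (a sketch; this check is not part of the SDK):

```typescript
// Hypothetical startup guard (not part of the SDK): fail fast on Node.js older than 18.
const major = Number(process.versions.node.split('.')[0]);
if (major < 18) {
  throw new Error(`@memvid/sdk requires Node.js 18+, found ${process.versions.node}`);
}
```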

Quick Start

import { create, use } from '@memvid/sdk';
import { existsSync } from 'fs';

const path = 'knowledge.mv2';

// IMPORTANT: Use create() for NEW files, use() for EXISTING files
const mem = existsSync(path)
  ? await use('basic', path)    // Open existing file
  : await create(path, 'basic'); // Create new file

// Add documents (no embeddings needed!)
await mem.put({
  title: 'Meeting Notes',
  label: 'notes',
  text: 'Alice mentioned she works at Anthropic...'
});

// Search works immediately
const results = await mem.find('who works at AI companies?', { k: 5, mode: 'lex' });
console.log(results.hits);

// Ask questions
const answer = await mem.ask('What does Alice do?', { k: 5, mode: 'lex' });
console.log(answer.text);

// Seal when done (commits changes)
await mem.seal();
create() vs use() - Don’t mix them up!
| Function | Purpose | If file exists |
| --- | --- | --- |
| create(path, kind) | Create new .mv2 file | Overwrites existing data! |
| use(kind, path) | Open existing .mv2 file | Reads existing data |
Common mistake: Using create() to reopen a file will erase all your data. Always use use() for existing files.
No embeddings required! Memvid’s BM25 lexical search works out of the box. Add embeddings later if you need semantic search.

API Reference

| Category | Methods | Description |
| --- | --- | --- |
| File Operations | create, open, close, use | Create, open, and close memory files |
| Data Ingestion | put, putMany | Add documents, with or without embeddings |
| Search | find, ask, vecSearch, timeline | Query your memory |
| Corrections | correct, correctMany | Store ground truth with retrieval boost |
| Memory Cards | memories, state, enrich, addMemoryCards | Structured fact extraction |
| Tables | putPdfTables, listTables, getTable | PDF table extraction |
| Sessions | sessionStart, sessionEnd, sessionReplay | Time-travel debugging |
| Tickets | syncTickets, currentTicket, getCapacity | Capacity management |
| Security | lock, unlock, lockWho, lockNudge | Encryption and access control |
| Utilities | verify, doctor, maskPii | Maintenance and utilities |

Core Functions

File Operations

import { create, open, verifyMemvid, doctorMemvid, info } from '@memvid/sdk';

// Create new memory file
const mem = await create('project.mv2');

// Open existing memory
const existing = await open('project.mv2');

// With options
const configured = await create('project.mv2', 'basic', {
  enableLex: true,    // Enable lexical index
  enableVec: true,    // Enable vector index
  memoryId: 'mem_abc' // Bind to dashboard
});

// Verify file integrity
await verifyMemvid('project.mv2', { deep: true });

// Repair and optimize
await doctorMemvid('project.mv2', {
  rebuildTimeIndex: true,
  rebuildVecIndex: true,
  vacuum: true
});

// Get SDK info
const sdkInfo = info();

Framework Adapters

Choose an adapter for your framework:
import { use } from '@memvid/sdk';

// Available adapters
const mem = await use('basic', 'file.mv2');
const langchain = await use('langchain', 'file.mv2');
const llamaindex = await use('llamaindex', 'file.mv2');
const vercelai = await use('vercel-ai', 'file.mv2');
const openai = await use('openai', 'file.mv2');
const crewai = await use('crewai', 'file.mv2');
const autogen = await use('autogen', 'file.mv2');
const haystack = await use('haystack', 'file.mv2');
const langgraph = await use('langgraph', 'file.mv2');
const semantickernel = await use('semantic-kernel', 'file.mv2');
const googleadk = await use('google-adk', 'file.mv2');

Data Ingestion

put() - Add Single Document

await mem.put({
  // Required
  title: 'Document Title',

  // Content (one of these)
  text: 'Document content...',
  file: '/path/to/document.pdf',

  // Optional
  uri: 'mv2://docs/intro',
  tags: ['api', 'v2'],
  labels: ['public', 'reviewed'],
  kind: 'markdown',
  track: 'documentation',
  metadata: { author: 'Alice', version: '2.0' },

  // Embeddings
  enableEmbedding: true,
  embeddingModel: 'bge-small',  // or 'openai', 'nomic', etc.
  vectorCompression: true,       // 16x compression with PQ

  // Behavior
  autoTag: true,                 // Auto-generate tags
  extractDates: true             // Extract date mentions
});

putMany() - Batch Ingestion

const docs = [
  { title: 'Doc 1', text: 'First document content' },
  { title: 'Doc 2', text: 'Second document content' },
  { title: 'Doc 3', text: 'Third document content' }
];

const frameIds = await mem.putMany(docs, {
  enableEmbedding: true,
  compressionLevel: 3,
  embedder: openaiEmbeddings  // Custom embedder
});

Search & Retrieval

// Simple search
const quick = await mem.find('budget projections');

// With options
const results = await mem.find('financial outlook', {
  mode: 'auto',           // 'lex', 'sem', 'auto', 'clip'
  k: 10,                  // Number of results
  snippetChars: 480,      // Snippet length
  scope: 'track:meetings', // Scope filter

  // Adaptive retrieval
  adaptive: true,
  minRelevancy: 0.5,
  maxK: 100,
  adaptiveStrategy: 'combined',  // 'relative', 'absolute', 'cliff', 'elbow'

  // Time-travel
  asOfFrame: 100,
  asOfTs: 1704067200,

  // Custom embeddings
  embedder: customEmbedder,
  queryEmbeddingModel: 'openai'
});

console.log(results.hits);

ask() - LLM Q&A

const answer = await mem.ask('What was decided about the budget?', {
  k: 8,
  mode: 'auto',

  // LLM settings
  model: 'gpt-4o-mini',
  modelApiKey: process.env.OPENAI_API_KEY,
  llmContextChars: 120000,

  // Privacy
  maskPii: true,

  // Time filters
  since: 1704067200,
  until: 1706745600,

  // Options
  contextOnly: false,   // Set true to skip synthesis
  returnSources: true,  // Include source documents

  // Adaptive retrieval
  adaptive: true,
  minRelevancy: 0.5
});

console.log(answer.text);
console.log(answer.sources);

vecSearch() - Vector Search

const results = await mem.vecSearch('query', queryEmbedding, {
  k: 10,
  adaptive: true,
  minRelevancy: 0.7
});

Grounding & Hallucination Detection

The ask() response includes a grounding object that measures how well the answer is supported by context:
const answer = await mem.ask('What is the API endpoint?', {
  model: 'gpt-4o-mini',
  modelApiKey: process.env.OPENAI_API_KEY
});

// Check grounding quality
console.log(answer.grounding);
// {
//   score: 0.85,
//   label: 'HIGH',           // 'LOW', 'MEDIUM', or 'HIGH'
//   sentence_count: 3,
//   grounded_sentences: 3,
//   has_warning: false,
//   warning_reason: undefined
// }

// Check if follow-up is needed
if (answer.follow_up?.needed) {
  console.log('Low confidence:', answer.follow_up.reason);
  console.log('Try these instead:', answer.follow_up.suggestions);
}
Grounding Fields:
| Field | Type | Description |
| --- | --- | --- |
| score | number | Grounding score from 0.0 to 1.0 |
| label | string | Quality label: LOW, MEDIUM, or HIGH |
| sentence_count | number | Sentences in the answer |
| grounded_sentences | number | Sentences supported by context |
| has_warning | boolean | True if answer may be hallucinated |
| warning_reason | string? | Explanation if warning is present |
Follow-up Fields:
| Field | Type | Description |
| --- | --- | --- |
| needed | boolean | True if answer confidence is low |
| reason | string | Why confidence is low |
| hint | string | Helpful hint for the user |
| available_topics | string[] | Topics in this memory |
| suggestions | string[] | Suggested follow-up questions |
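One way to act on these fields is a small gate before surfacing an answer to users (a sketch; shouldShowAnswer is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical gate (not part of the SDK): only surface answers that are at least
// MEDIUM grounded and carry no hallucination warning.
type GroundingLabel = 'LOW' | 'MEDIUM' | 'HIGH';

function shouldShowAnswer(label: GroundingLabel, hasWarning: boolean): boolean {
  return label !== 'LOW' && !hasWarning;
}

shouldShowAnswer('HIGH', false); // → true
shouldShowAnswer('LOW', false);  // → false
```

When the gate rejects an answer, the follow_up.suggestions array gives you ready-made alternatives to offer instead.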

correct() - Ground Truth Corrections

Store authoritative corrections that take priority in future retrievals:
// Store a correction
const correctionId = await mem.correct('Ben Koenig reported to Chloe Nguyen before 2025');

// With options
const frameId = await mem.correct('The API rate limit is 1000 req/min', {
  source: 'Engineering Team - Jan 2025',
  topics: ['API', 'rate limiting'],
  boost: 2.5  // Higher retrieval priority (default: 2.0)
});

// Batch corrections
const frameIds = await mem.correctMany([
  { statement: 'OAuth tokens expire after 24 hours', topics: ['auth', 'OAuth'] },
  { statement: 'Production DB is db.prod.example.com', source: 'Ops Team' }
]);

// Verify correction is retrievable
const results = await mem.find('Ben Koenig reported to');
console.log(results.hits[0].snippet);  // Should show the correction
Use correct() to fix hallucinations or add verified facts. Corrections receive boosted retrieval scores and are labeled [Correction] in results.

Memory Cards (Entity Extraction)

Automatic Enrichment

// Extract facts using rules engine (fast, offline)
const result = await mem.enrich('rules');

// View extracted cards
const { cards, count } = await mem.memories();

// Filter by entity
const aliceCards = await mem.memories('Alice');

// Get entity state (O(1) lookup)
const alice = await mem.state('Alice');
console.log(alice.slots);
// { employer: 'Anthropic', role: 'Engineer', location: 'SF' }

// Get stats
const stats = await mem.memoriesStats();
console.log(stats.entityCount, stats.cardCount);

// List all entities
const entities = await mem.memoryEntities();

Manual Memory Cards

// Add SPO triplets directly
const result = await mem.addMemoryCards([
  { entity: 'Alice', slot: 'employer', value: 'Anthropic' },
  { entity: 'Alice', slot: 'role', value: 'Senior Engineer' },
  { entity: 'Bob', slot: 'team', value: 'Infrastructure' }
]);

console.log(result.added, result.ids);

Export Facts

// Export to JSON
const json = await mem.exportFacts('json');

// Export to CSV
const csv = await mem.exportFacts('csv', 'Alice');

// Export to N-Triples (RDF)
const ntriples = await mem.exportFacts('ntriples');

Table Extraction

// Extract tables from PDF
const result = await mem.putPdfTables('financial-report.pdf', true);
console.log(`Extracted ${result.tables_count} tables`);

// List all tables
const tables = await mem.listTables();
for (const table of tables) {
  console.log(table.table_id, table.n_rows, table.n_cols);
}

// Get table data
const data = await mem.getTable('tbl_001', 'dict');
const csv = await mem.getTable('tbl_001', 'csv');

Time-Travel & Sessions

Timeline Queries

const timeline = await mem.timeline({
  limit: 50,
  since: 1704067200,
  until: 1706745600,
  reverse: true,
  asOfFrame: 100
});

Session Recording

// Start recording
const sessionId = await mem.sessionStart('qa-test');

// Perform operations
await mem.find('test query');
await mem.ask('What happened?');

// Add checkpoint
await mem.sessionCheckpoint();

// End session
const summary = await mem.sessionEnd();

// List sessions
const sessions = await mem.sessionList();

// Replay session with different params
const replay = await mem.sessionReplay(sessionId, 10, true);
console.log(replay.match_rate);

// Delete session
await mem.sessionDelete(sessionId);

Encryption & Security

import { lock, unlock, lockWho, lockNudge } from '@memvid/sdk';

// Encrypt to .mv2e capsule
const encryptedPath = await lock('project.mv2', {
  password: 'secret',
  force: true
});

// Decrypt back to .mv2
const decryptedPath = await unlock('project.mv2e', {
  password: 'secret'
});

// Check who has the lock
const lockInfo = await lockWho('project.mv2');

// Nudge stale lock
const released = await lockNudge('project.mv2');

Tickets & Capacity

// Get current capacity
const capacity = await mem.getCapacity();

// Get current ticket info
const ticket = await mem.currentTicket();

// Sync tickets from dashboard
const result = await mem.syncTickets('mem_abc123', apiKey);

// Apply ticket manually
await mem.applyTicket(ticketString);

// Get memory binding
const binding = await mem.getMemoryBinding();

// Unbind from dashboard
await mem.unbindMemory();

Embedding Providers

External Providers

import {
  OpenAIEmbeddings,
  GeminiEmbeddings,
  MistralEmbeddings,
  CohereEmbeddings,
  VoyageEmbeddings,
  NvidiaEmbeddings,
  getEmbedder
} from '@memvid/sdk';

// OpenAI
const openai = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'text-embedding-3-small'  // or 'text-embedding-3-large'
});

// Gemini
const gemini = new GeminiEmbeddings({
  apiKey: process.env.GEMINI_API_KEY,
  model: 'text-embedding-004'
});

// Mistral
const mistral = new MistralEmbeddings({
  apiKey: process.env.MISTRAL_API_KEY
});

// Cohere
const cohere = new CohereEmbeddings({
  apiKey: process.env.COHERE_API_KEY,
  model: 'embed-english-v3.0'
});

// Voyage
const voyage = new VoyageEmbeddings({
  apiKey: process.env.VOYAGE_API_KEY,
  model: 'voyage-3'
});

// NVIDIA
const nvidia = new NvidiaEmbeddings({
  apiKey: process.env.NVIDIA_API_KEY
});

// Factory function
const embedder = getEmbedder('openai', { apiKey: '...' });

// Use with putMany
await mem.putMany(docs, { embedder: openai });

// Use with find
await mem.find('query', { embedder: gemini });

Local Embeddings (No API Required)

import { LOCAL_EMBEDDING_MODELS } from '@memvid/sdk';

await mem.put({
  text: 'content',
  enableEmbedding: true,
  embeddingModel: LOCAL_EMBEDDING_MODELS.BGE_SMALL  // 384d, fast
});

// Available local models
LOCAL_EMBEDDING_MODELS.BGE_SMALL   // 384d - fastest
LOCAL_EMBEDDING_MODELS.BGE_BASE    // 768d - balanced
LOCAL_EMBEDDING_MODELS.NOMIC       // 768d - general purpose
LOCAL_EMBEDDING_MODELS.GTE_LARGE   // 1024d - highest quality

Error Handling

import {
  MemvidError,
  CapacityExceededError,    // MV001
  TicketInvalidError,       // MV002
  TicketReplayError,        // MV003
  LexIndexDisabledError,    // MV004
  TimeIndexMissingError,    // MV005
  VerifyFailedError,        // MV006
  LockedError,              // MV007
  ApiKeyRequiredError,      // MV008
  MemoryAlreadyBoundError,  // MV009
  FrameNotFoundError,       // MV010
  VecIndexDisabledError,    // MV011
  CorruptFileError,         // MV012
  VecDimensionMismatchError // MV014
} from '@memvid/sdk';

try {
  await mem.put({ title: 'Large file', file: 'huge.bin' });
} catch (err) {
  if (err instanceof CapacityExceededError) {
    console.log('Storage capacity exceeded (MV001)');
  } else if (err instanceof LockedError) {
    console.log('File locked by another process (MV007)');
  } else if (err instanceof VecIndexDisabledError) {
    console.log('Enable vector index first (MV011)');
  }
}

Asset Extraction

// Get frame content
const content = await mem.view(frameId);
const contentByUri = await mem.viewByUri('mv2://docs/intro');

// Extract binary assets (PDF, images)
const asset = await mem.extractAsset(frameId);
console.log(asset.mimeType, asset.filename, asset.data);

// Get frame metadata
const info = await mem.getFrameInfo(frameId);
console.log(info.uri, info.title, info.timestamp);

Environment Variables

| Variable | Description |
| --- | --- |
| MEMVID_API_KEY | Dashboard API key for sync |
| OPENAI_API_KEY | OpenAI embeddings and LLM |
| GEMINI_API_KEY | Gemini embeddings |
| MISTRAL_API_KEY | Mistral embeddings |
| COHERE_API_KEY | Cohere embeddings |
| VOYAGE_API_KEY | Voyage embeddings |
| NVIDIA_API_KEY | NVIDIA embeddings |
| ANTHROPIC_API_KEY | Claude for entity extraction |
| MEMVID_MODELS_DIR | Model cache directory |
| MEMVID_OFFLINE | Use cached models only |
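A small startup check over this table catches missing keys before the first API call fails (a sketch; missingEnvVars is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper (not part of the SDK): list required variables that are unset or empty.
function missingEnvVars(
  required: string[],
  env: Record<string, string | undefined> = process.env
): string[] {
  return required.filter((name) => !env[name]);
}

const missing = missingEnvVars(['OPENAI_API_KEY', 'MEMVID_API_KEY']);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(', ')}`);
}
```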

Deploying to Vercel

The Memvid Node.js SDK uses native bindings (N-API) for optimal performance. When deploying to Vercel’s serverless environment, you need to configure Next.js to bundle the native binary correctly.

next.config.ts Configuration

Add outputFileTracingIncludes to ensure the native .node files are bundled with your serverless functions:
// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  experimental: {
    outputFileTracingIncludes: {
      '/api/*': [
        './node_modules/@memvid/sdk-linux-x64-gnu/**/*',
        './node_modules/@memvid/sdk/**/*',
      ],
    },
  },
};

export default nextConfig;

Explicit Platform Package (Optional)

For more reliable deployments, explicitly add the Linux platform package to your dependencies:
{
  "dependencies": {
    "@memvid/sdk": "^2.0.146",
    "@memvid/sdk-linux-x64-gnu": "^2.0.146"
  }
}
Vercel’s serverless runtime uses Amazon Linux 2 (x64). The SDK automatically selects the correct platform binary, but explicit inclusion ensures bundling works correctly.

Serverless /tmp Storage

Vercel’s serverless functions have ephemeral /tmp storage that doesn’t persist between invocations. For production apps:
  1. Use cloud storage (S3, R2, etc.) to persist .mv2 files
  2. Download on-demand when the function cold starts
  3. Pass files as buffers between API routes instead of file paths
// Example: Download from S3 if not in /tmp
import { existsSync } from 'fs';
import { writeFile } from 'fs/promises';

const localPath = `/tmp/${memoryId}.mv2`;

if (!existsSync(localPath)) {
  const buffer = await downloadFromS3(userId, memoryId);
  await writeFile(localPath, buffer);
}

const mem = await open(localPath);
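The reverse direction matters too: after seal() commits changes, copy the file in /tmp back to durable storage before the function returns, or the writes are lost on the next cold start. A sketch, where tmpPathFor and the upload callback are hypothetical helpers (e.g. a thin wrapper over an S3 PutObject call), not part of the SDK:

```typescript
import { readFile } from 'fs/promises';

// Hypothetical helpers (not part of the SDK): mirror a sealed .mv2 file back to storage.
const tmpPathFor = (memoryId: string) => `/tmp/${memoryId}.mv2`;

async function persistMemory(
  memoryId: string,
  upload: (key: string, data: Buffer) => Promise<void>
): Promise<void> {
  const bytes = await readFile(tmpPathFor(memoryId));
  await upload(`memories/${memoryId}.mv2`, bytes);
}

// await mem.seal();
// await persistMemory(memoryId, uploadToS3);
```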

Troubleshooting

| Error | Solution |
| --- | --- |
| Native binary not found for platform: linux-x64 | Add outputFileTracingIncludes config |
| GLIBC_2.35 not found | Ensure you’re using SDK v2.0.146+ (built for glibc 2.26) |
| ENOENT: no such file or directory | Files in /tmp don’t persist; use cloud storage |

TypeScript Types

import type {
  PutInput,
  PutManyInput,
  FindInput,
  AskInput,
  MemoryCard,
  MemoryCardInput,
  EntityState,
  FrameInfo,
  TableInfo,
  SessionSummary,
  MemvidErrorCode
} from '@memvid/sdk';

Next Steps