Memvid provides native adapters for the most popular AI frameworks. Each adapter exposes tools, retrievers, and utilities formatted specifically for that framework.
One Memory, Any Framework. The same .mv2 file works with all adapters. Switch frameworks without re-indexing your data.
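
For example, a single file can be opened through different adapters side by side. The snippet below is a minimal sketch; the 'llamaindex' adapter name is assumed here, following the naming pattern of the adapters shown later on this page.

from memvid_sdk import use

# One .mv2 file, two adapters: no re-indexing when you change frameworks.
# NOTE: 'llamaindex' is an assumed adapter name, mirroring 'langchain' below.
lc_mem = use('langchain', 'knowledge.mv2')    # LangChain-formatted tools
li_mem = use('llamaindex', 'knowledge.mv2')   # LlamaIndex-formatted tools

# The core API is the same regardless of the adapter.
print(lc_mem.find("hybrid search", k=3))
print(li_mem.ask("What is the main concept?"))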

Quick Comparison

Framework        | Language         | Best For
LangChain        | Python, Node.js  | Agents, chains, RAG
LlamaIndex       | Python, Node.js  | RAG pipelines, indexing
Vercel AI        | Node.js          | Next.js, streaming
OpenAI           | Python, Node.js  | Direct API, function calling
Google ADK       | Python, Node.js  | Gemini, ADK agents
AutoGen          | Python, Node.js  | Multi-agent systems
CrewAI           | Python, Node.js  | Agent crews
Semantic Kernel  | Python, Node.js  | Enterprise AI, Azure
Haystack         | Python           | Search pipelines

Choosing the Right Adapter

Recommended: LangChain or LlamaIndex. Use the tools to search your knowledge base and build RAG pipelines.
# LangChain with tools
from memvid_sdk import use
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

mem = use('langchain', 'knowledge.mv2')
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), mem.tools)
result = agent.invoke({"messages": [{"role": "user", "content": "What is..."}]})

# Or use find() + ask() directly
results = mem.find("search query", k=5)
answer = mem.ask("What is the main concept?")
Recommended: LangChain, AutoGen, or CrewAI. These frameworks excel at agent orchestration with tool use.
# LangChain agent
from memvid_sdk import use
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

mem = use('langchain', 'knowledge.mv2')
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), mem.tools)

# AutoGen assistant
from autogen import AssistantAgent

mem = use('autogen', 'knowledge.mv2')
assistant = AssistantAgent("helper", llm_config={"tools": mem.tools})
Recommended: Vercel AI SDK. Native streaming support and seamless integration with Next.js.
import { use } from '@memvid/sdk';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const mem = await use('vercel-ai', 'knowledge.mv2');

// Next.js route handler: stream a model response with Memvid tools attached
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: openai('gpt-4o'),
    tools: mem.tools,
    messages,
  });
  return result.toDataStreamResponse();
}
Recommended: OpenAI or Google ADK. Use the native function calling APIs without framework overhead.
# OpenAI
from memvid_sdk import use
from openai import OpenAI

client = OpenAI()
mem = use('openai', 'knowledge.mv2')
response = client.chat.completions.create(
    model="gpt-4o",
    tools=mem.tools,
    messages=[...]
)

# Google Gemini
from google import genai
from google.genai import types

client = genai.Client()
mem = use('google-adk', 'knowledge.mv2')
chat = client.chats.create(
    model="gemini-2.0-flash",
    config=types.GenerateContentConfig(tools=[mem.tools])
)
Recommended: Semantic Kernel. Microsoft’s SDK with enterprise features and Azure integration.
from memvid_sdk import use

mem = use('semantic-kernel', 'knowledge.mv2')
kernel.add_plugin(mem.tools, "memvid")  # kernel is your existing semantic_kernel Kernel instance

Universal Features

All adapters provide these core capabilities:

Tools / Functions

Every adapter exposes three primary tools:
Tool         | Description
memvid_put   | Store documents in memory
memvid_find  | Search for relevant documents
memvid_ask   | Ask questions with RAG synthesis
# Access tools (framework-specific format)
tools = mem.tools
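
To see exactly what an adapter produced, you can inspect the returned tools. The loop below is a small sketch that assumes the LangChain adapter, where each entry is a standard LangChain tool object carrying name and description attributes.

from memvid_sdk import use

mem = use('langchain', 'knowledge.mv2')

# Print the tool names and descriptions exposed by the adapter
# (expected: memvid_put, memvid_find, memvid_ask).
for tool in mem.tools:
    print(tool.name, tool.description)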

Direct API Access

You can always bypass the adapter and use the core API directly:
# These work with any adapter
results = mem.find('search query', k=10)
answer = mem.ask('What is machine learning?')
timeline = mem.timeline(limit=50)
stats = mem.stats()

Installation

Each adapter requires its framework to be installed:
# Core SDK
npm install @memvid/sdk

# Framework dependencies (install what you need)
npm install @langchain/core @langchain/openai   # LangChain
npm install llamaindex                          # LlamaIndex
npm install ai @ai-sdk/openai                   # Vercel AI
npm install openai                              # OpenAI
npm install @google/generative-ai               # Google ADK

Adapter Architecture

Each adapter (illustrated by the sketch after this list):
  1. Wraps the core Memvid API
  2. Formats tools/retrievers for the specific framework
  3. Handles framework-specific types and conventions
  4. Provides seamless integration without lock-in
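
To make the pattern concrete, here is a purely illustrative sketch, not the actual adapter source: the LangChainAdapter class and its core argument are hypothetical, and it shows how a LangChain-flavored adapter can wrap the core API and re-expose it as framework-native tools.

from langchain_core.tools import StructuredTool

class LangChainAdapter:
    """Illustrative wrapper: reuse the core API, re-expose it as LangChain tools."""

    def __init__(self, core):
        self.core = core  # the underlying Memvid memory object

    @property
    def tools(self):
        def memvid_find(query: str, k: int = 5) -> list:
            """Search for relevant documents."""
            return self.core.find(query, k=k)

        def memvid_ask(question: str) -> str:
            """Ask questions with RAG synthesis."""
            return self.core.ask(question)

        # Framework-specific formatting happens here; other adapters target
        # their own tool or plugin interfaces instead.
        return [
            StructuredTool.from_function(memvid_find),
            StructuredTool.from_function(memvid_ask),
        ]

    # Direct API access is plain delegation, which is why there is no lock-in.
    def find(self, query: str, k: int = 10):
        return self.core.find(query, k=k)

The real adapters also cover memvid_put, retrievers, and framework-specific type conventions (points 2 and 3 above), but the delegation structure is the same.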

Performance Comparison

All adapters have similar performance since they use the same core engine:
Operation  | Latency    | Notes
find()     | < 5 ms     | Hybrid search (lexical + vector)
ask()      | 20-200 ms  | Depends on LLM response time
put()      | < 40 ms    | With embedding generation
File open  | < 10 ms    | Cold start
The framework overhead is minimal (< 1ms per operation).
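
To check these figures against your own corpus and hardware, a quick timing probe is enough. This is a sketch, assuming the Python SDK and the find() call shown earlier; results vary, so treat the table above as indicative.

import time
from memvid_sdk import use

mem = use('langchain', 'knowledge.mv2')

# Time a single hybrid search; repeat and average for steadier numbers.
start = time.perf_counter()
mem.find('vector search', k=10)
print(f"find() took {(time.perf_counter() - start) * 1000:.2f} ms")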

Framework Guides