Memvid provides native adapters for the most popular AI frameworks. Each adapter exposes tools, retrievers, and utilities formatted specifically for that framework.
One Memory, Any Framework. The same .mv2 file works with all adapters. Switch frameworks without re-indexing your data.
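For example, the same file can be opened through two different adapters via the use() entry point shown throughout this page; a minimal sketch:

from memvid_sdk import use

# One .mv2 file, two frameworks, no re-indexing in between
mem_langchain = use('langchain', 'knowledge.mv2')
mem_llamaindex = use('llamaindex', 'knowledge.mv2')

# Both handles read the same underlying index
print(mem_langchain.stats())
print(mem_llamaindex.stats())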
Quick Comparison
| Framework | Language | Best For |
| --- | --- | --- |
| LangChain | Python, Node.js | Agents, chains, RAG |
| LlamaIndex | Python, Node.js | RAG pipelines, indexing |
| Vercel AI | Node.js | Next.js, streaming |
| OpenAI | Python, Node.js | Direct API, function calling |
| Google ADK | Python, Node.js | Gemini, ADK agents |
| AutoGen | Python, Node.js | Multi-agent systems |
| CrewAI | Python, Node.js | Agent crews |
| Semantic Kernel | Python, Node.js | Enterprise AI, Azure |
| Haystack | Python | Search pipelines |
Choosing the Right Adapter
I'm building a RAG application
Recommended: LangChain or LlamaIndex. Use the tools to search your knowledge base and build RAG pipelines.

# LangChain with tools
from memvid_sdk import use
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

mem = use('langchain', 'knowledge.mv2')
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), mem.tools)
result = agent.invoke({"messages": [{"role": "user", "content": "What is..."}]})

# Or use find() + ask() directly
results = mem.find("search query", k=5)
answer = mem.ask("What is the main concept?")
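LlamaIndex works just as well on the retrieval side. A minimal sketch, assuming the adapter exposes a LlamaIndex-compatible retriever as mem.retriever (the exact attribute name may differ; see the LlamaIndex guide):

from memvid_sdk import use
from llama_index.core.query_engine import RetrieverQueryEngine

mem = use('llamaindex', 'knowledge.mv2')

# mem.retriever is an assumption here: a retriever the adapter
# formats to return LlamaIndex-compatible nodes
query_engine = RetrieverQueryEngine.from_args(mem.retriever)
response = query_engine.query("What is the main concept?")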
I'm building an agent

Recommended: LangChain, AutoGen, or CrewAI. These frameworks excel at agent orchestration with tool use.

# LangChain Agent
from langgraph.prebuilt import create_react_agent

mem = use('langchain', 'knowledge.mv2')
agent = create_react_agent(model, mem.tools)

# AutoGen
from autogen import AssistantAgent

mem = use('autogen', 'knowledge.mv2')
assistant = AssistantAgent("helper", llm_config={"tools": mem.tools})
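CrewAI follows the same pattern; a minimal sketch, assuming mem.tools arrives pre-formatted as CrewAI tool objects:

from crewai import Agent

mem = use('crewai', 'knowledge.mv2')
researcher = Agent(
    role="Researcher",
    goal="Answer questions from the shared memory",
    backstory="Looks things up in the team knowledge base.",
    tools=mem.tools,  # assumes the adapter emits CrewAI-compatible tools
)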
I'm using Next.js / Vercel
Recommended: Vercel AI SDK. Native streaming support and seamless integration with Next.js.

import { use } from '@memvid/sdk';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const mem = await use('vercel-ai', 'knowledge.mv2');

export async function POST(req: Request) {
  const result = await streamText({
    model: openai('gpt-4o'),
    tools: mem.tools,
    messages: await req.json(),
  });
  return result.toDataStreamResponse();
}
I want direct OpenAI/Gemini function calling
Recommended: OpenAI or Google ADK. Use the native function calling APIs without framework overhead.

# OpenAI
mem = use('openai', 'knowledge.mv2')
response = client.chat.completions.create(
    model="gpt-4o",
    tools=mem.tools,
    messages=[...]
)

# Google Gemini
mem = use('google-adk', 'knowledge.mv2')
chat = client.chats.create(
    model="gemini-2.0-flash",
    config=types.GenerateContentConfig(tools=[mem.tools])
)
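With direct function calling, executing the returned tool calls is on you. A minimal sketch for the OpenAI path, assuming the argument keys ("query", "k", "question") match the schemas of the memvid_* tools listed under Universal Features:

import json

message = response.choices[0].message
for call in message.tool_calls or []:
    args = json.loads(call.function.arguments)
    if call.function.name == "memvid_find":
        # argument names are assumptions; check the tool schema
        result = mem.find(args["query"], k=args.get("k", 5))
    elif call.function.name == "memvid_ask":
        result = mem.ask(args["question"])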
I'm building enterprise AI with Azure
Recommended: Semantic Kernel. Microsoft's SDK with enterprise features and Azure integration.

mem = use('semantic-kernel', 'knowledge.mv2')
kernel.add_plugin(mem.tools, "memvid")
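Once registered, the plugin's functions can be invoked like any other kernel function. A minimal sketch, assuming semantic-kernel 1.x and that the function names match the memvid_* tools listed under Universal Features:

from semantic_kernel.functions import KernelArguments

async def search(kernel):
    # "query" as the argument name is an assumption; check the tool schema
    return await kernel.invoke(
        plugin_name="memvid",
        function_name="memvid_find",
        arguments=KernelArguments(query="search query"),
    )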
Universal Features
All adapters provide the same core capabilities. In particular, every adapter exposes three primary tools:
| Tool | Description |
| --- | --- |
| memvid_put | Store documents in memory |
| memvid_find | Search for relevant documents |
| memvid_ask | Ask questions with RAG synthesis |
# Access tools (framework-specific format)
tools = mem.tools
Direct API Access
You can always bypass the adapter and use the core API directly:
# These work with any adapter
results = mem.find('search query', k=10)
answer = mem.ask('What is machine learning?')
timeline = mem.timeline(limit=50)
stats = mem.stats()
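The memvid_put tool has a direct counterpart as well. A minimal sketch; the metadata parameter here is an assumption, so check the API reference for the exact signature:

# Store a document (signature assumed; see the API reference)
doc_id = mem.put("Machine learning is a subset of AI.", metadata={"source": "notes"})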
Installation
Each adapter requires its framework to be installed:
# Core SDK
npm install @memvid/sdk
# Framework dependencies (install what you need)
npm install @langchain/core @langchain/openai # LangChain
npm install llamaindex # LlamaIndex
npm install ai @ai-sdk/openai # Vercel AI
npm install openai # OpenAI
npm install @google/generative-ai # Google ADK
# Core SDK
pip install memvid-sdk
# Framework dependencies (install what you need)
pip install "memvid-sdk[langchain]" # LangChain
pip install "memvid-sdk[llamaindex]" # LlamaIndex
pip install "memvid-sdk[openai]" # OpenAI
pip install google-genai # Google ADK
pip install "memvid-sdk[autogen]" # AutoGen
pip install "memvid-sdk[crewai]" # CrewAI
pip install "memvid-sdk[semantic-kernel]" # Semantic Kernel
pip install "memvid-sdk[haystack]" # Haystack
# Or install all integrations:
pip install "memvid-sdk[full]"
Adapter Architecture
Each adapter:

- Wraps the core Memvid API
- Formats tools/retrievers for the specific framework
- Handles framework-specific types and conventions
- Provides seamless integration without lock-in (see the sketch below)
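A rough illustration of that layering; the class and names here are invented for clarity, not the actual adapter source:

# Hypothetical adapter shape, for illustration only
from langchain_core.tools import tool

class IllustrativeLangChainAdapter:
    def __init__(self, core):
        self.core = core  # the shared Memvid engine behind every adapter

    @property
    def tools(self):
        core = self.core

        @tool
        def memvid_find(query: str, k: int = 5) -> list:
            """Search for relevant documents."""
            return core.find(query, k=k)

        return [memvid_find]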
All adapters have similar performance since they use the same core engine:
| Operation | Latency | Notes |
| --- | --- | --- |
| find() | < 5ms | Hybrid search (lex + vec) |
| ask() | 20-200ms | Depends on LLM response time |
| put() | < 40ms | With embedding generation |
| File open | < 10ms | Cold start |
The framework overhead is minimal (< 1ms per operation).
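To check these numbers against your own data, a quick timing sketch using the direct API:

import time

start = time.perf_counter()
mem.find('search query', k=10)
print(f"find() took {(time.perf_counter() - start) * 1000:.2f} ms")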