- SDK adapters for framework-native tools and retrievers
- REST API integrations for platforms that work best with HTTP
One Memory, Many Surfaces. Use SDK adapters with .mv2 files, or use cloud memories through https://api.memvid.com for API-first platforms.
Quick Comparison
| Integration | Type | Language | Best For |
|---|---|---|---|
| LangChain | SDK adapter | Python, Node.js | Agents, chains, RAG |
| LlamaIndex | SDK adapter | Python, Node.js | RAG pipelines, indexing |
| Vercel AI | SDK adapter | Node.js | Next.js, streaming |
| OpenAI | SDK adapter | Python, Node.js | Direct API, function calling |
| Google ADK | SDK adapter | Python, Node.js | Gemini, ADK agents |
| AutoGen | SDK adapter | Python, Node.js | Multi-agent systems |
| CrewAI | SDK adapter | Python, Node.js | Agent crews |
| Semantic Kernel | SDK adapter | Python, Node.js | Enterprise AI, Azure |
| Haystack | SDK adapter | Python | Search pipelines |
| API Integration Patterns | REST API | Any HTTP runtime | Shared production patterns |
| n8n | REST API | No-code / low-code | Workflow automation |
| Replit | REST API | Node.js, Python | Cloud prototyping and apps |
| Lovable | REST API | TypeScript | Productized AI app builders |
| v0 | REST API | Next.js | Generated app frontends |
Choosing the Right Integration
I'm building a RAG application
Recommended: LangChain or LlamaIndex. Use the tools to search your knowledge base and build RAG pipelines.
I'm building an AI agent
Recommended: AutoGen, CrewAI, or LangChain for multi-agent systems and agent crews.
I'm using Next.js / Vercel
Recommended: Vercel AI for Next.js apps and streaming responses.
I want direct OpenAI/Gemini function calling
Recommended: the OpenAI or Google ADK adapters.
I'm building enterprise AI with Azure
Recommended: Semantic Kernel.
I'm automating workflows with no-code tools
Recommended: n8n via the REST API.
Universal Features
All adapters provide these core capabilities.
Tools / Functions
Every adapter exposes three primary tools:
| Tool | Description |
|---|---|
| memvid_put | Store documents in memory |
| memvid_find | Search for relevant documents |
| memvid_ask | Ask questions with RAG synthesis |
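Each adapter formats these three tools for its own framework. As a hedged illustration, here is how they might look as OpenAI-style function-calling schemas; the parameter shapes below are assumptions for the sketch, not the adapters' actual output:

```python
# Sketch: the three Memvid tools declared as OpenAI-style function
# schemas. The adapters generate definitions like these for you;
# the parameter fields here are illustrative assumptions.
MEMVID_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "memvid_put",
            "description": "Store documents in memory",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string", "description": "Document text to store"}},
                "required": ["text"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "memvid_find",
            "description": "Search for relevant documents",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"},
                    "k": {"type": "integer", "description": "Number of results"},
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "memvid_ask",
            "description": "Ask questions with RAG synthesis",
            "parameters": {
                "type": "object",
                "properties": {"question": {"type": "string", "description": "Question to answer"}},
                "required": ["question"],
            },
        },
    },
]

print([t["function"]["name"] for t in MEMVID_TOOLS])
# → ['memvid_put', 'memvid_find', 'memvid_ask']
```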
Direct API Access
You can always bypass the adapter and use the core API directly.
Installation
Each adapter requires its framework to be installed:
- Node.js
- Python
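Bypassing the adapters and calling the cloud API at https://api.memvid.com directly, as described above, might look like the following sketch. The endpoint path (`/v1/find`) and payload fields are assumptions for illustration; consult the API reference for the real routes. The function builds the request without sending it, so you can inspect it:

```python
import json
import urllib.request

API_BASE = "https://api.memvid.com"  # from these docs; paths below are assumptions


def build_find_request(api_key: str, query: str, k: int = 5) -> urllib.request.Request:
    """Build (but do not send) a request for a hypothetical /v1/find endpoint."""
    body = json.dumps({"query": query, "k": k}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/v1/find",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_find_request("mv_example_key", "refund policy")
print(req.full_url)   # https://api.memvid.com/v1/find
print(req.get_method())  # POST
```

To actually send it, pass `req` to `urllib.request.urlopen` (or use any HTTP client); the same pattern is what the REST integrations (n8n, Replit, Lovable, v0) rely on.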
Adapter Architecture
Each adapter:
- Wraps the core Memvid API
- Formats tools/retrievers for the specific framework
- Handles framework-specific types and conventions
- Provides seamless integration without lock-in
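The wrap-and-reformat pattern above can be sketched in a few lines. `CoreMemory` and the tool-descriptor shape are stand-ins invented for this example; each real adapter emits whatever type its framework expects (a LangChain tool, a Semantic Kernel plugin, and so on):

```python
# Sketch of the adapter pattern: wrap a core client and re-expose one
# of its operations in a framework's tool format. CoreMemory is a
# stand-in for the real core Memvid API, not the actual class.
class CoreMemory:
    """Minimal stand-in for the core API's put/find operations."""

    def __init__(self):
        self._docs = []

    def put(self, text: str) -> None:
        self._docs.append(text)

    def find(self, query: str) -> list[str]:
        # Toy substring match; the real engine does hybrid search.
        return [d for d in self._docs if query.lower() in d.lower()]


def as_framework_tool(memory: CoreMemory) -> dict:
    """Format the core find() as a generic framework tool descriptor."""
    return {
        "name": "memvid_find",
        "description": "Search for relevant documents",
        "run": memory.find,
    }


mem = CoreMemory()
mem.put("Memvid stores memories in portable files.")
tool = as_framework_tool(mem)
print(tool["run"]("portable"))
# → ['Memvid stores memories in portable files.']
```

Because the framework-specific layer is this thin, switching frameworks means swapping the formatting function, not the memory — which is what "without lock-in" means in practice.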
Performance Comparison
All adapters have similar performance since they use the same core engine:
| Operation | Latency | Notes |
|---|---|---|
| find() | < 5ms | Hybrid search (lexical + vector) |
| ask() | 20-200ms | Depends on LLM response time |
| put() | < 40ms | With embedding generation |
| File Open | < 10ms | Cold start |
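Numbers like these are easy to reproduce for your own setup with a small timing harness. The `op` callable below is a placeholder; swap in a real adapter call (for example, a bound `find` method) to measure it:

```python
import time


def measure(op, runs: int = 100) -> float:
    """Return the median latency of op() in milliseconds over `runs` calls."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()  # placeholder: substitute your actual memvid_find / memvid_put call
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[len(samples) // 2]


# Timing a trivial stand-in operation; real calls go here instead.
median_ms = measure(lambda: sum(range(1000)))
print(f"median latency: {median_ms:.3f} ms")
```

Using the median rather than the mean keeps one slow cold-start call from skewing the result.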
Examples Gallery
- RAG Chatbot: Build a chatbot with persistent memory using LangChain
- Document Q&A: Create a document Q&A system with LlamaIndex
- Knowledge Base: Build a searchable company knowledge base
- Research Assistant: Create an AI research assistant for papers
Framework Guides
- LangChain: Agents, chains, retrievers
- LlamaIndex: RAG pipelines, query engines
- Vercel AI: Next.js, streaming
- OpenAI: Function calling
- Google ADK: Gemini, ADK agents
- AutoGen: Multi-agent systems
- CrewAI: Agent crews
- Semantic Kernel: Enterprise AI
- Haystack: Search pipelines