General Questions
What is Memvid?
Memvid is a portable AI memory system that packages your data, embeddings, and search indices into a single .mv2 file. It's designed for building RAG applications, AI agents, and knowledge bases without the complexity of traditional vector databases.
Is Memvid open source?
Yes, the core library (memvid-core) is open source. The Python SDK and Node.js SDK are available as packages with comprehensive documentation.
What makes Memvid different from other vector databases?
Memvid's key differentiator is single-file portability. Unlike traditional vector databases that require servers and complex configuration, a .mv2 file contains everything in one portable file: your data, embeddings, indices, and metadata.
What platforms does Memvid support?
Memvid supports:
- macOS (Intel and Apple Silicon)
- Linux (x86_64 and ARM64)
- Windows (x86_64)
File Format
Can I rely on a single .mv2 file in production?
Yes. Memvid is designed for production use. The .mv2 file is completely self-contained with no sidecar files, no external dependencies, and no hidden state. Copying the file transfers the entire memory, including the write-ahead log and all indices.
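Because the file is fully self-contained, backup and transfer reduce to an ordinary file copy. A minimal sketch (the file and directory names are placeholders):

```shell
# A .mv2 file carries its data, indices, WAL, and metadata inside itself,
# so copying the file copies the entire memory -- no sidecar files to sync.
cp knowledge.mv2 backup/knowledge.mv2
```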
How large can a .mv2 file be?
File size depends on your capacity tier:
| Tier | Capacity | WAL Size |
|---|---|---|
| Free | 1 GB | 4 MB |
| Developer | 25 GB | 16 MB |
| Enterprise | Unlimited | 64 MB |
Can multiple processes access the same file?
Yes, with some rules:
- Multiple readers: Allowed simultaneously
- Single writer: Only one writer at a time
- Read-only mode: Use `read_only=True` for concurrent read access
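The many-readers/one-writer rule is the same pattern as POSIX advisory locking, where readers take a shared lock and the writer an exclusive one. The sketch below illustrates that pattern with Python's `fcntl`; it is an analogy only, not the memvid SDK itself:

```python
# Analogy for memvid's concurrency rule: shared locks (readers) may be
# held by many handles at once; an exclusive lock (writer) excludes all.
import fcntl

open("demo.mv2", "w").close()  # placeholder file for the demonstration

with open("demo.mv2", "r+") as writer:
    fcntl.flock(writer, fcntl.LOCK_EX)  # exclusive "writer" lock
    fcntl.flock(writer, fcntl.LOCK_UN)  # release before readers attach

with open("demo.mv2") as r1, open("demo.mv2") as r2:
    fcntl.flock(r1, fcntl.LOCK_SH)      # shared "reader" locks:
    fcntl.flock(r2, fcntl.LOCK_SH)      # both succeed concurrently
    print("two shared locks held concurrently")
```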
Performance
How fast is Memvid?
Memvid is built in Rust for maximum performance:

| Operation | Performance |
|---|---|
| Search (1K docs) | < 1ms |
| Search (100K docs) | < 10ms |
| Single doc ingestion | 1-10 docs/sec |
| Batch ingestion (`put_many`) | 500-1000 docs/sec |
| WAL append | < 0.1ms |
What search modes are available?
- Lexical (`lex`): BM25 keyword search for exact matches
- Semantic (`sem`): Vector search for conceptual similarity
- Hybrid (`auto`): Combines both for best results (recommended)
How do I optimize search performance?
- Build indices: Ensure lexical and vector indices are enabled
- Use batch ingestion: Use `put_many()` for 100-200x faster ingestion
- Enable parallel segments: Use `--parallel-segments` for large datasets
- Choose the right mode: Use `lex` for keywords, `sem` for concepts, `auto` for general queries
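The batch-ingestion tip can be sketched as follows. The chunking helper is plain Python; the `mem.put_many(chunk)` call is shown as a comment because only the `put_many` name appears in this FAQ, and its exact signature may differ:

```python
# Batching documents before ingestion amortizes per-call overhead,
# which this FAQ quotes as 100-200x faster than one-by-one puts.

def batches(items, size=500):
    """Yield fixed-size chunks of `items` (last chunk may be smaller)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

docs = [{"id": n, "text": f"doc {n}"} for n in range(1200)]

for chunk in batches(docs, size=500):
    # Hypothetical SDK call -- consult the SDK docs for the real signature:
    # mem.put_many(chunk)
    pass

print([len(c) for c in batches(docs, size=500)])  # -> [500, 500, 200]
```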
SDKs and Integration
Which programming languages are supported?
- Python: `pip install memvid-sdk`
- Node.js: `npm install @memvid/sdk`
- Rust: Use the `memvid-core` crate directly
- CLI: `cargo install memvid-cli`
Can I use Memvid with LangChain?
Yes! Both the Python and Node.js SDKs provide LangChain adapters.

What AI frameworks are supported?
Python SDK:
- LangChain
- LlamaIndex
- CrewAI
- AutoGen
- Haystack
Node.js SDK:
- Vercel AI SDK
- OpenAI Functions
- LangChain.js
- Semantic Kernel
Capacity and Storage
What happens when I exceed capacity?
You'll receive a `CapacityExceeded` error (MV001). Solutions:
- Delete unused frames: `memvid delete knowledge.mv2 --frame-id <id>`
- Vacuum to reclaim space: `memvid doctor knowledge.mv2 --vacuum`
- Create a larger memory file with a higher tier
How do I check my storage usage?
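Run the CLI's `stats` command (the same command this FAQ recommends for verifying indices); the exact output fields may vary by version:

```shell
# Inspect index and storage information for a memory file.
memvid stats knowledge.mv2
```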
Can I reduce storage size?
Yes. Vector compression reduces the space used by embeddings, and `memvid doctor knowledge.mv2 --vacuum` reclaims space left by deleted frames.

Troubleshooting
Why is my file locked?
Another process is using the file. Check for:
- Other terminals running `memvid` commands
- Running applications with open handles
- Stale processes (use `lsof your-file.mv2` to find them)

Run `memvid who your-file.mv2` to see who holds the lock.
Why are my searches returning no results?
- Check indices: Run `memvid stats your-file.mv2` to verify indices exist
- Try different modes: Use `--mode lex` for keywords or `--mode sem` for concepts
- Rebuild indices: Run `memvid doctor your-file.mv2 --rebuild-lex-index`
How do I recover from corruption?
Use the `doctor` command.
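A sketch of a `doctor` invocation, using only flags that appear elsewhere in this FAQ; the exact repair options in your version may differ:

```shell
# Inspect the file's health; add --vacuum to reclaim space, or
# --rebuild-lex-index to rebuild a damaged lexical index.
memvid doctor your-file.mv2
```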
Why is ingestion slow?
Use batch ingestion (`put_many()`) instead of single-document calls; as the performance table above shows, it is 100-200x faster.

Getting Help
Where can I report bugs?
Report issues on GitHub: github.com/memvid/memvid/issues

Is there a community?
Yes! Join us on:
- Discord: discord.gg/ttSjNttQ
- Twitter: @memvid
- GitHub Discussions: github.com/memvid/memvid/discussions