Integrate Memvid with LangChain to build powerful RAG pipelines. The langchain adapter provides native LangChain tools for seamless integration with agents.

Installation

npm install @memvid/sdk @langchain/core @langchain/openai @langchain/langgraph zod

Quick Start

import { use } from '@memvid/sdk';

// Open with LangChain adapter
const mem = await use('langchain', 'knowledge.mv2');

// Access LangChain tools (compatible with createReactAgent)
const tools = mem.tools;  // Array of tool() objects
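
Each entry in mem.tools is a regular LangChain tool object, so you can inspect what the adapter exposes before wiring it into an agent. A minimal sketch:

// List the name and description of each Memvid tool
for (const t of tools) {
  console.log(`${t.name}: ${t.description}`);
}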

Using Tools with Agents

import { use } from '@memvid/sdk';
import { ChatOpenAI } from '@langchain/openai';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { HumanMessage } from '@langchain/core/messages';

// Get Memvid tools
const mem = await use('langchain', 'knowledge.mv2');
const tools = mem.tools;

// Create agent with LangGraph
const llm = new ChatOpenAI({ model: 'gpt-4o' });
const agent = createReactAgent({ llm, tools });

// Run
const inputs = { messages: [new HumanMessage('Search for authentication info')] };
const stream = await agent.stream(inputs, { streamMode: 'values' });

for await (const { messages } of stream) {
  const lastMsg = messages[messages.length - 1];
  if (lastMsg.content) {
    console.log(lastMsg.content);
  }
}

Available Tools

The LangChain adapter provides three tools:
Tool          Description
memvid_put    Store documents in memory with title, label, and text
memvid_find   Search for relevant documents by query
memvid_ask    Ask questions with RAG-style answer synthesis
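
Because each entry is a standard LangChain tool, you can also invoke one directly, outside of an agent loop. The sketch below assumes the input field names (title, label, text for memvid_put; query for memvid_find) based on the descriptions above; check each tool's schema for the exact input shape.

import { use } from '@memvid/sdk';

const mem = await use('langchain', 'knowledge.mv2');

// Look up tools by name (names as listed in the table above)
const putTool = mem.tools.find((t) => t.name === 'memvid_put');
const findTool = mem.tools.find((t) => t.name === 'memvid_find');

// Store a document, then search for it.
// Field names are assumptions inferred from the tool descriptions.
await putTool?.invoke({
  title: 'Auth guide',
  label: 'docs',
  text: 'Use short-lived tokens and rotate refresh tokens.',
});

const hits = await findTool?.invoke({ query: 'authentication' });
console.log(hits);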

Using as a Retriever (Python)

from memvid_sdk import use
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA

# Initialize with langchain adapter
mem = use('langchain', 'knowledge.mv2', read_only=True)

# Get the retriever
retriever = mem.as_retriever(k=5)

# Create QA chain
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=retriever
)

result = qa_chain.run("What is the main concept?")
print(result)

Conversational RAG (Python)

from memvid_sdk import use
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Initialize
mem = use('langchain', 'knowledge.mv2', read_only=True)
retriever = mem.as_retriever(k=5)

# Create conversational chain
llm = ChatOpenAI(model="gpt-4o")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory
)

# Chat
response = chain.invoke({"question": "What are the key features?"})
print(response["answer"])

# Follow up
response = chain.invoke({"question": "Tell me more about that"})
print(response["answer"])

Custom Search Options

from memvid_sdk import use

mem = use('langchain', 'knowledge.mv2')

# Search with specific mode
results = mem.find('authentication', mode='lex', k=10)  # Lexical only
results = mem.find('user login flow', mode='sem', k=10)  # Semantic only
results = mem.find('auth best practices', mode='auto', k=10)  # Hybrid

# With scope filtering
results = mem.find('API', scope='mv2://docs/', k=5)

Best Practices

  1. Use read-only mode for retrieval-only applications
  2. Set appropriate k values based on your context window
  3. Use hybrid mode for best recall
  4. Close the memory when done

from memvid_sdk import use

mem = use('langchain', 'knowledge.mv2', read_only=True)
try:
    # Do work
    results = mem.find('query', k=10)
finally:
    mem.seal()

Next Steps