In-Memory Vector Store
Overview
The in-memory vector store is Rig’s default vector store implementation, included in rig-core. It provides a lightweight, RAM-based solution for vector similarity search, ideal for development, testing, and small-scale applications.
Key Features
- Zero external dependencies
- Automatic or custom document ID generation
- Multiple embedding support per document
- Cosine similarity search
- Configurable index strategies (brute-force or LSH)
- Flexible document schema support
- Automatic `Tool` implementation for agent integration
Implementation Details
Core Components
- Store Structure:

The InMemoryVectorStore uses a simple but effective data structure:

```rust
pub struct InMemoryVectorStore<D: Serialize> {
    embeddings: HashMap<String, (D, OneOrMany<Embedding>)>,
}
```

Key components:
- Key: String identifier for each document
- Value: Tuple containing:
  - `D`: The serializable document
  - `OneOrMany<Embedding>`: Either a single embedding or multiple embeddings
The store supports multiple embeddings per document through the OneOrMany enum:

```rust
pub enum OneOrMany<T> {
    One(T),
    Many(Vec<T>),
}
```

When searching, the store:
- Computes cosine similarity between the query and all document embeddings
- For documents with multiple embeddings, uses the best-matching embedding
- Uses a BinaryHeap to efficiently maintain the top-N results
- Returns results sorted by similarity score
Memory layout example:

```text
{
    "doc1" => (
        Document { title: "Example 1", ... },
        One(Embedding { vec: [0.1, 0.2, ...] })
    ),
    "doc2" => (
        Document { title: "Example 2", ... },
        Many([
            Embedding { vec: [0.3, 0.4, ...] },
            Embedding { vec: [0.5, 0.6, ...] }
        ])
    )
}
```

- Vector Search Implementation:
  - Uses a binary heap for efficient top-N retrieval
  - Maintains scores using ordered floating-point comparisons
  - Supports multiple embeddings per document with best-match selection
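The search procedure described above can be distilled into a small self-contained sketch. This is not Rig's actual implementation (that uses `OrderedFloat` and a `RankingItem` struct); it is a plain-Rust illustration of the same idea: score each document by its best-matching embedding, and keep the top N with a size-bounded min-heap.

```rust
use std::cmp::{Ordering, Reverse};
use std::collections::BinaryHeap;

fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

// Wrapper so f64 scores get a total order usable inside a BinaryHeap.
#[derive(PartialEq)]
struct Scored(f64, usize); // (similarity, document index)
impl Eq for Scored {}
impl PartialOrd for Scored {
    fn partial_cmp(&self, o: &Self) -> Option<Ordering> { Some(self.cmp(o)) }
}
impl Ord for Scored {
    fn cmp(&self, o: &Self) -> Ordering {
        self.0.total_cmp(&o.0).then_with(|| self.1.cmp(&o.1))
    }
}

/// Top-N search: each document may carry several embeddings; the best one counts.
fn top_n(query: &[f64], docs: &[Vec<Vec<f64>>], n: usize) -> Vec<(f64, usize)> {
    // Min-heap of size n: Reverse makes the *lowest* score pop first.
    let mut heap: BinaryHeap<Reverse<Scored>> = BinaryHeap::new();
    for (i, embeddings) in docs.iter().enumerate() {
        let best = embeddings
            .iter()
            .map(|e| cosine_similarity(query, e))
            .fold(f64::NEG_INFINITY, f64::max);
        heap.push(Reverse(Scored(best, i)));
        if heap.len() > n {
            heap.pop(); // evict the current worst candidate
        }
    }
    let mut out: Vec<(f64, usize)> = heap
        .into_iter()
        .map(|Reverse(Scored(s, i))| (s, i))
        .collect();
    out.sort_by(|a, b| b.0.total_cmp(&a.0)); // highest similarity first
    out
}

fn main() {
    let docs = vec![
        vec![vec![1.0, 0.0]],                     // doc 0: one embedding
        vec![vec![0.0, 1.0], vec![0.7, 0.7]],     // doc 1: best of two is used
        vec![vec![-1.0, 0.0]],                    // doc 2: points away from query
    ];
    // doc 0 scores 1.0; doc 1's best embedding scores ~0.707; doc 2 is excluded.
    println!("{:?}", top_n(&[1.0, 0.0], &docs, 2));
}
```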
Core Traits
As of v0.31.0, the vector store system is built around these traits:
- VectorStoreIndex: The primary trait for querying a vector store by similarity. Types implementing this trait automatically implement the Tool trait, meaning any vector store index can be used as an agent tool.
- InsertDocuments: Trait for inserting documents and their embeddings into a vector store (replaces the old VectorStore trait).
- VectorStoreIndexDyn: Type-erased version for dynamic dispatch scenarios.
```rust
/// Trait for querying a vector store by similarity.
pub trait VectorStoreIndex: Send + Sync {
    async fn top_n(
        &self,
        request: VectorSearchRequest,
    ) -> Result<Vec<(f64, String, serde_json::Value)>, VectorStoreError>;

    async fn top_n_ids(
        &self,
        request: VectorSearchRequest,
    ) -> Result<Vec<(f64, String)>, VectorStoreError>;
}

/// Trait for inserting documents and embeddings into a vector store.
pub trait InsertDocuments: Send + Sync {
    async fn add_documents(
        &mut self,
        documents: Vec<(String, OneOrMany<Embedding>)>,
    ) -> Result<(), VectorStoreError>;
}
```

Index Strategies
v0.31.0 introduces configurable index strategies via the IndexStrategy enum:
- Brute-force (default): Linear scan with cosine similarity. Best for small datasets.
- LSH (Locality-Sensitive Hashing): Available via the rig::vector_store::lsh module for approximate nearest-neighbor search on larger datasets.
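To build intuition for what the LSH strategy buys you, here is an illustrative random-hyperplane LSH sketch in plain Rust. It is not the rig::vector_store::lsh API; it only shows the core trick: vectors whose dot-product signs agree against a set of hyperplanes hash into the same bucket, so a query only scans one bucket instead of the whole store.

```rust
use std::collections::HashMap;

/// Signature hash: one bit per hyperplane, set when the vector lies on its
/// positive side. Nearby vectors tend to produce identical signatures.
fn hash_vector(v: &[f64], hyperplanes: &[Vec<f64>]) -> u64 {
    hyperplanes.iter().enumerate().fold(0u64, |bits, (i, h)| {
        let dot: f64 = v.iter().zip(h).map(|(a, b)| a * b).sum();
        if dot >= 0.0 { bits | (1 << i) } else { bits }
    })
}

fn main() {
    // Two fixed hyperplanes in 2-D; a real index samples them randomly
    // and uses many more bits.
    let hyperplanes = vec![vec![1.0, 0.0], vec![0.0, 1.0]];

    let mut buckets: HashMap<u64, Vec<&str>> = HashMap::new();
    for (id, v) in [
        ("a", vec![0.9, 0.1]),
        ("b", vec![0.8, 0.3]),
        ("c", vec![-0.5, 0.9]),
    ] {
        buckets.entry(hash_vector(&v, &hyperplanes)).or_default().push(id);
    }

    // "a" and "b" share a bucket; "c" hashes elsewhere, so a query near
    // [1.0, 0.2] never has to compare against it.
    println!("{:?}", buckets.get(&hash_vector(&[1.0, 0.2], &hyperplanes)));
    // prints Some(["a", "b"])
}
```

The trade-off is approximation: a true neighbor that lands in a different bucket is missed, which is why LSH is reserved for datasets where a brute-force linear scan becomes too slow.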
Vector Store as a Tool
Types implementing VectorStoreIndex automatically implement the Tool trait. This means you can add a vector store index directly as a tool to an agent:
```rust
let agent = openai.agent("gpt-4o")
    .preamble("You can search a knowledge base.")
    .tool(vector_store_index)
    .build();
```

The tool will accept a search query and return matching documents as VectorStoreOutput.
Document Management
Three ways to add documents:
- Auto-generated IDs:

```rust
let store = InMemoryVectorStore::from_documents(vec![
    (doc1, embedding1),
    (doc2, embedding2)
]);
```

- Custom IDs:

```rust
let store = InMemoryVectorStore::from_documents_with_ids(vec![
    ("custom_id_1", doc1, embedding1),
    ("custom_id_2", doc2, embedding2)
]);
```

- Function-generated IDs:

```rust
let store = InMemoryVectorStore::from_documents_with_id_f(
    documents,
    |doc| format!("doc_{}", doc.title)
);
```

Querying with VectorSearchRequest
As of v0.31.0, vector store queries are built using VectorSearchRequest, which supports filtering:
```rust
use rig::vector_store::{VectorStoreIndex, VectorSearchRequest};
use rig::vector_store::request::Filter;

// Simple query
let results = index.top_n(
    VectorSearchRequest::builder("search query", 5).build()
).await?;

// Query with a score threshold
let results = index.top_n(
    VectorSearchRequest::builder("search query", 5)
        .threshold(0.7)
        .build()
).await?;

// Query with filters (for backends that support filtering)
let results = index.top_n(
    VectorSearchRequest::builder("search query", 5)
        .filter(Filter::eq("category", "science"))
        .build()
).await?;
```

The Filter enum provides a backend-agnostic way to express filter conditions:
```rust
pub enum Filter {
    Eq(String, serde_json::Value),
    Ne(String, serde_json::Value),
    Gt(String, serde_json::Value),
    Lt(String, serde_json::Value),
    And(Vec<Filter>),
    Or(Vec<Filter>),
    // ... other variants
}
```

Backends can implement the SearchFilter trait to translate these canonical filters into their native query language.
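To make the translation idea concrete, here is an illustrative sketch of what such a mapping looks like. The local enum below is only a stand-in mirroring the shape of rig's Filter (with string values for brevity); the to_sql function plays the role a SearchFilter implementation would for a SQL-backed store.

```rust
// Stand-in for rig's canonical filter type, simplified to string values.
#[derive(Clone)]
enum Filter {
    Eq(String, String),
    Gt(String, String),
    And(Vec<Filter>),
    Or(Vec<Filter>),
}

/// Recursively render a canonical filter as a SQL-flavored WHERE clause.
/// (A real backend would bind parameters instead of interpolating values.)
fn to_sql(f: &Filter) -> String {
    match f {
        Filter::Eq(k, v) => format!("{k} = '{v}'"),
        Filter::Gt(k, v) => format!("{k} > {v}"),
        Filter::And(fs) => format!(
            "({})",
            fs.iter().map(to_sql).collect::<Vec<_>>().join(" AND ")
        ),
        Filter::Or(fs) => format!(
            "({})",
            fs.iter().map(to_sql).collect::<Vec<_>>().join(" OR ")
        ),
    }
}

fn main() {
    let f = Filter::And(vec![
        Filter::Eq("category".into(), "science".into()),
        Filter::Or(vec![
            Filter::Gt("year".into(), "2020".into()),
            Filter::Eq("featured".into(), "true".into()),
        ]),
    ]);
    println!("{}", to_sql(&f));
    // → (category = 'science' AND (year > 2020 OR featured = 'true'))
}
```

The same canonical tree could just as well be rendered into a MongoDB query document or a Qdrant filter payload, which is exactly why the enum is kept backend-agnostic.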
Special Considerations
1. Memory Usage
- All embeddings and documents are stored in RAM
- Memory usage scales linearly with document count and embedding dimensions
- Consider available memory when storing large datasets
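A quick back-of-envelope calculation helps here. Assuming 8 bytes per vector component (rig's Embedding stores f64 values); document payloads and HashMap overhead come on top:

```rust
/// Bytes consumed by the raw embedding vectors alone.
fn embedding_bytes(docs: u64, dims: u64) -> u64 {
    docs * dims * 8 // 8 bytes per f64 component
}

fn main() {
    // 100k documents with 1536-dimensional embeddings
    // (e.g. text-embedding-ada-002).
    let bytes = embedding_bytes(100_000, 1536);
    println!("{:.2} GiB", bytes as f64 / (1024.0 * 1024.0 * 1024.0));
    // → 1.14 GiB
}
```

So even before counting the documents themselves, a six-figure corpus of high-dimensional embeddings claims a gigabyte-plus of RAM, which is the point at which the persistent backends listed later become attractive.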
2. Performance Characteristics
- Fast lookups using HashMap for document retrieval
- Efficient top-N selection using BinaryHeap
- O(n) complexity for vector similarity search (brute-force strategy)
- Best for small to medium-sized datasets
- Consider LSH indexing for larger datasets
3. Document Storage
- Documents must be serializable
- Supports multiple embeddings per document
- Automatic pruning of large arrays (>400 elements)
Usage Example
```rust
use rig::providers::openai;
use rig::embeddings::EmbeddingsBuilder;
use rig::vector_store::in_memory_store::InMemoryVectorStore;
use rig::vector_store::{VectorStoreIndex, VectorSearchRequest, InsertDocuments};

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    let openai = openai::Client::from_env();
    let model = openai.embedding_model(openai::TEXT_EMBEDDING_ADA_002);

    // Initialize store
    let mut store = InMemoryVectorStore::default();

    // Create embeddings
    let embeddings = EmbeddingsBuilder::new(model.clone())
        .simple_document("doc1", "First document content")
        .simple_document("doc2", "Second document content")
        .build()
        .await?;

    // Add documents to store (uses InsertDocuments trait)
    store.add_documents(embeddings).await?;

    // Create vector store index
    let index = store.index(model);

    // Search similar documents using VectorSearchRequest
    let results = index.top_n(
        VectorSearchRequest::builder("search query", 5).build()
    ).await?;

    Ok(())
}
```

Implementation Specifics
Vector Search Algorithm
The core search implementation:
```rust
/// Implement vector search on [InMemoryVectorStore].
fn vector_search(&self, prompt_embedding: &Embedding, n: usize) -> EmbeddingRanking<D> {
    // Sort documents by best embedding distance
    let mut docs = BinaryHeap::new();

    for (id, (doc, embeddings)) in self.embeddings.iter() {
        // Get the best context for the document given the prompt
        if let Some((distance, embed_doc)) = embeddings
            .iter()
            .map(|embedding| {
                (
                    OrderedFloat(embedding.cosine_similarity(prompt_embedding, false)),
                    &embedding.document,
                )
            })
            .max_by(|a, b| a.0.cmp(&b.0))
        {
            docs.push(Reverse(RankingItem(distance, id, doc, embed_doc)));
        };
    }

    // Return top-n results
    // ...
}
```

Error Handling
Vector store operations can produce errors via the VectorStoreError enum:
- EmbeddingError: Issues with embedding generation
- JsonError: Document serialization/deserialization errors
- DatastoreError: General storage operation errors
- MissingIdError: When a requested document ID doesn't exist
- FilterError: When constructing or converting filter expressions
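A typical handling pattern is to match on the variant and recover where possible. The enum below is a local stand-in that only mirrors two of the variants listed above (the real VectorStoreError lives in rig::vector_store); the shape of the match is what matters:

```rust
// Stand-in error type mirroring two VectorStoreError variants for illustration.
#[derive(Debug)]
enum VectorStoreError {
    MissingIdError(String),
    DatastoreError(String),
}

// Hypothetical lookup that fails for unknown IDs.
fn lookup(id: &str) -> Result<String, VectorStoreError> {
    if id == "doc1" {
        Ok("First document content".into())
    } else {
        Err(VectorStoreError::MissingIdError(id.into()))
    }
}

fn main() {
    match lookup("doc2") {
        Ok(doc) => println!("found: {doc}"),
        // A missing ID is often recoverable (e.g. fall back to a fresh search).
        Err(VectorStoreError::MissingIdError(id)) => println!("no document with id {id}"),
        // Datastore failures usually warrant propagation or a retry.
        Err(e) => println!("store error: {e:?}"),
    }
}
```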
Best Practices
- Memory Management:
  - Monitor memory usage with large datasets
  - Consider chunking large document additions
  - Use cloud-based vector stores for production deployments
- Document Structure:
  - Keep documents serializable
  - Avoid extremely large arrays
  - Consider using custom ID generation for meaningful identifiers
- Performance Optimization:
  - Pre-allocate store capacity when possible
  - Batch document additions
  - Use appropriate embedding dimensions
  - Consider LSH indexing for datasets exceeding a few thousand documents
Limitations
- Scalability:
  - Limited by available RAM
  - No persistence between program runs
  - Single-machine only
- Features:
  - Filtering support is basic compared to dedicated vector databases
  - No automatic persistence
- Production Use:
  - Best suited for development/testing
  - Consider cloud-based alternatives for production
  - No built-in backup/recovery mechanisms
For production deployments, consider using one of Rig’s other vector store integrations (MongoDB, LanceDB, Neo4j, Qdrant, SQLite, SurrealDB, Milvus, ScyllaDB, or AWS S3Vectors) which offer persistence and better scalability.
Thread Safety
The InMemoryVectorStore is thread-safe for concurrent reads but requires exclusive access for writes. The store implements Clone for creating independent instances and Send + Sync for safe concurrent access across thread boundaries.
For concurrent write access, consider wrapping the store in a synchronization primitive like Arc<RwLock<InMemoryVectorStore>>.
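A minimal sketch of that pattern, using a plain HashMap as a stand-in for the store (the locking discipline is identical):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // A HashMap stands in for InMemoryVectorStore; swap in the real type as-is.
    let store = Arc::new(RwLock::new(HashMap::<String, Vec<f64>>::new()));

    // Writer: takes the exclusive lock for the duration of the insert.
    let writer = {
        let store = Arc::clone(&store);
        thread::spawn(move || {
            store.write().unwrap().insert("doc1".into(), vec![0.1, 0.2]);
        })
    };
    writer.join().unwrap();

    // Readers: any number may hold the shared lock concurrently.
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let store = Arc::clone(&store);
            thread::spawn(move || store.read().unwrap().len())
        })
        .collect();
    for r in readers {
        assert_eq!(r.join().unwrap(), 1);
    }
    println!("ok");
}
```

Keep write-lock scopes short: a long-held write guard blocks every concurrent search, so batch insertions and drop the guard before doing unrelated work.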
Comparison with Other Vector Stores
| Feature | In-Memory | MongoDB | Qdrant | LanceDB | SQLite |
|---|---|---|---|---|---|
| Persistence | No | Yes | Yes | Yes | Yes |
| Horizontal Scaling | No | Yes | Yes | No | No |
| Setup Complexity | Low | Medium | Medium | Low | Low |
| Memory Usage | High | Low | Medium | Low | Low |
| Query Speed | Fast | Medium | Fast | Fast | Medium |
| Filtering | Basic | Rich | Rich | Rich | SQL |