Get Started

  cargo add rig-core

Rig is a Rust library for building portable, modular, and lightweight Fullstack AI Agents. You can find API documentation on docs.rs.

Why Rust?

Rust 🦀 has many advantages over Python and JS/TS, the languages behind AI frameworks like LangChain, LlamaIndex, and ai16z.

To visualise Rust’s edge over other languages, you can check out this tool.

  • Lightweight: Rust runs orders of magnitude faster than Python, which makes running & deploying swarms of agents a breeze.
  • Safety: Rust’s type system and ownership model help when working with unexpected LLM outputs, coercing types and handling errors.
  • Portability: Rust code can be compiled to WebAssembly, allowing it to run in web browsers (even local LLM models!).

High-level features

  • Full support for LLM completion and embedding workflows (see the embeddings sketch after this list)
  • Simple but powerful common abstractions over LLM providers (e.g. OpenAI, Cohere) and vector stores (e.g. MongoDB, in-memory)
  • Integrate LLMs in your app with minimal boilerplate
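
For the embedding workflow mentioned above, here is a minimal sketch. It assumes the EmbeddingsBuilder API from rig’s published examples; method names such as simple_document have changed between rig versions, so treat this as illustrative rather than definitive:

  use rig::{embeddings::EmbeddingsBuilder, providers::openai};

  #[tokio::main]
  async fn main() {
      // Requires the `OPENAI_API_KEY` environment variable to be set.
      let openai_client = openai::Client::from_env();
      let model = openai_client.embedding_model(openai::TEXT_EMBEDDING_ADA_002);

      // Embed two small documents in a single batched request.
      let embeddings = EmbeddingsBuilder::new(model)
          .simple_document("doc0", "Rig is a Rust library for building AI agents.")
          .simple_document("doc1", "Rust code can compile to WebAssembly.")
          .build()
          .await
          .expect("Failed to build embeddings");

      println!("Embedded {} documents", embeddings.len());
  }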

Integrations

Model Providers

Rig natively supports the following completion and embedding model provider integrations:


  • OpenAI (ChatGPT)
  • Anthropic (Claude)
  • Cohere
  • Google Gemini
  • xAI
  • Perplexity

You can also implement your own model provider integration by defining types that implement the CompletionModel and EmbeddingModel traits.
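
Because every provider client exposes the same agent-builder abstraction, switching providers is mostly a one-line change. Below is a minimal sketch using the Cohere provider; it assumes cohere::Client mirrors the OpenAI client shown in the example further down, and the model name command-r is illustrative:

  use rig::{completion::Prompt, providers::cohere};

  #[tokio::main]
  async fn main() {
      // Requires the `COHERE_API_KEY` environment variable to be set.
      let cohere_client = cohere::Client::from_env();

      // Model name is illustrative; check Cohere's docs for current models.
      let agent = cohere_client.agent("command-r").build();

      let response = agent
          .prompt("Who are you?")
          .await
          .expect("Failed to prompt the model");

      println!("Cohere: {response}");
  }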

Vector Stores

Rig currently supports the following vector store integrations via companion crates:

  • rig-mongodb: Vector store implementation for MongoDB
  • rig-lancedb: Vector store implementation for LanceDB
  • rig-neo4j: Vector store implementation for Neo4j
  • rig-qdrant: Vector store implementation for Qdrant

You can also implement your own vector store integration by defining types that implement the VectorStoreIndex trait.
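
Once documents are embedded, a store can be indexed and queried through the VectorStoreIndex trait. Here is a minimal sketch using the built-in in-memory store; the from_documents, index, and top_n names follow rig’s published examples and may differ between versions:

  use rig::{
      embeddings::EmbeddingsBuilder,
      providers::openai,
      vector_store::{in_memory_store::InMemoryVectorStore, VectorStoreIndex},
  };

  #[tokio::main]
  async fn main() {
      // Requires the `OPENAI_API_KEY` environment variable to be set.
      let openai_client = openai::Client::from_env();
      let model = openai_client.embedding_model(openai::TEXT_EMBEDDING_ADA_002);

      // Embed a couple of documents (see the embeddings sketch above).
      let embeddings = EmbeddingsBuilder::new(model.clone())
          .simple_document("doc0", "Rig supports MongoDB, LanceDB, Neo4j, and Qdrant.")
          .simple_document("doc1", "Rust code can compile to WebAssembly.")
          .build()
          .await
          .expect("Failed to build embeddings");

      // Index the embedded documents and run a similarity search.
      let index = InMemoryVectorStore::from_documents(embeddings).index(model);
      let results = index
          .top_n::<String>("Which vector stores does Rig support?", 1)
          .await
          .expect("Failed to query index");

      println!("{results:?}");
  }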

Simple example:

  use rig::{completion::Prompt, providers::openai};
 
  #[tokio::main]
  async fn main() {
      // Create OpenAI client and agent.
      // This requires the `OPENAI_API_KEY` environment variable to be set.
      let openai_client = openai::Client::from_env();
 
      let gpt4 = openai_client.agent("gpt-4").build();
 
      // Prompt the model and print its response
      let response = gpt4
          .prompt("Who are you?")
          .await
          .expect("Failed to prompt GPT-4");
 
      println!("GPT-4: {response}");
  }

Note: using #[tokio::main] requires tokio’s macros and rt-multi-thread features (or just full to enable all features):

  cargo add tokio --features macros,rt-multi-thread
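
For reference, the resulting dependency section of Cargo.toml looks roughly like this (version numbers are illustrative; use the latest published releases):

  [dependencies]
  rig-core = "0.5"
  tokio = { version = "1", features = ["macros", "rt-multi-thread"] }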