OpenAI Integration
The OpenAI provider in Rig offers integration with OpenAI’s API services, supporting both completion and embedding models. It provides a client-based architecture for interacting with OpenAI’s models.
Key Features
- Full support for GPT-3.5 and GPT-4 model families
- Text embedding models (Ada, text-embedding-3)
- Automatic token usage tracking
- Tool/function calling support
- Custom API endpoint configuration
Basic Usage
use rig::providers::openai;
// Create a client from the OPENAI_API_KEY environment variable
let client = openai::Client::from_env();
// Or construct it explicitly with an API key
let client = openai::Client::new("your-api-key");
// Create a completion model
let gpt4 = client.completion_model(openai::GPT_4);
// Create an embedding model
let embedder = client.embedding_model(openai::TEXT_EMBEDDING_3_LARGE);
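The client can also target any OpenAI-compatible endpoint, such as a proxy or gateway. A minimal sketch, assuming the from_url constructor shown under Implementation Details below (the URL is a placeholder, not a real service):

use rig::providers::openai;

// Point the client at an OpenAI-compatible endpoint instead of the default
// https://api.openai.com base URL.
let client = openai::Client::from_url("your-api-key", "https://llm-gateway.example.com");
let gpt4 = client.completion_model(openai::GPT_4);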
Available Models
Completion Models
- GPT_4 / GPT_4O: GPT-4 base and optimized versions
- GPT_35_TURBO: GPT-3.5 Turbo and its variants
- GPT_35_TURBO_INSTRUCT: Instruction-tuned GPT-3.5
Embedding Models
- TEXT_EMBEDDING_3_LARGE: 3072 dimensions
- TEXT_EMBEDDING_3_SMALL: 1536 dimensions
- TEXT_EMBEDDING_ADA_002: 1536 dimensions (legacy)
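As a rough sketch of batching documents through one of these models — the EmbeddingsBuilder method names here are assumptions based on the import in the provider source and may differ between Rig versions:

use rig::embeddings::EmbeddingsBuilder;
use rig::providers::openai;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = openai::Client::from_env();
    let model = client.embedding_model(openai::TEXT_EMBEDDING_3_SMALL);

    // Embed a batch of documents in a single API call; each result pairs a
    // document with its 1536-dimensional vector.
    let embeddings = EmbeddingsBuilder::new(model)
        .document("Rig is a Rust library for building LLM applications.")?
        .document("It exposes multiple model providers behind one API.")?
        .build()
        .await?;

    println!("embedded {} documents", embeddings.len());
    Ok(())
}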
Special Considerations
- Tool Calling: OpenAI models support function calling through a specialized JSON format. The provider automatically handles conversion between Rig’s tool definitions and OpenAI’s expected format (see the sketch after this list).
- Response Processing: The provider implements special handling for:
  - Tool/function call responses
  - System messages
  - Token usage tracking
- Error Handling: OpenAI-specific errors are automatically converted to Rig’s error types for consistent error handling across providers.
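As a sketch of that tool conversion, here is how a provider-agnostic Rig tool definition gets wrapped into OpenAI's function-calling envelope via the From impl shown under Implementation Details; the field names on completion::ToolDefinition are assumed from its use there:

use rig::completion;
use rig::providers::openai;
use serde_json::json;

// A provider-agnostic tool definition: a name, a description, and a JSON
// Schema describing the parameters.
let tool = completion::ToolDefinition {
    name: "get_weather".to_string(),
    description: "Get the current weather for a city".to_string(),
    parameters: json!({
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
    }),
};

// The provider wraps it in OpenAI's expected shape:
// {"type": "function", "function": { ...the definition... }}
let openai_tool: openai::ToolDefinition = tool.into();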
Implementation Details
The core OpenAI provider implementation lives in the rig::providers::openai module:
//! OpenAI API client and Rig integration
//!
//! # Example
//! ```
//! use rig::providers::openai;
//!
//! let client = openai::Client::new("YOUR_API_KEY");
//!
//! let gpt4o = client.completion_model(openai::GPT_4O);
//! ```
use crate::{
    agent::AgentBuilder,
    completion::{self, CompletionError, CompletionRequest},
    embeddings::{self, EmbeddingError, EmbeddingsBuilder},
    extractor::ExtractorBuilder,
    json_utils, Embed,
};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::json;
// ================================================================
// Main OpenAI Client
// ================================================================
const OPENAI_API_BASE_URL: &str = "https://api.openai.com";
#[derive(Clone)]
pub struct Client {
    base_url: String,
    http_client: reqwest::Client,
}
impl Client {
    /// Create a new OpenAI client with the given API key.
    pub fn new(api_key: &str) -> Self {
        Self::from_url(api_key, OPENAI_API_BASE_URL)
    }

    /// Create a new OpenAI client with the given API key and base API URL.
    pub fn from_url(api_key: &str, base_url: &str) -> Self {
        Self {
            base_url: base_url.to_string(),
            // Attach the bearer token as a default header on every request.
            http_client: reqwest::Client::builder()
                .default_headers({
                    let mut headers = reqwest::header::HeaderMap::new();
                    headers.insert(
                        "Authorization",
                        format!("Bearer {}", api_key)
                            .parse()
                            .expect("Bearer token should parse"),
                    );
                    headers
                })
                .build()
                .expect("OpenAI reqwest client should build"),
        }
    }
}
Tool calling and response handling are modeled with the following types:
#[derive(Debug, Deserialize)]
pub struct Choice {
    pub index: usize,
    pub message: Message,
    pub logprobs: Option<serde_json::Value>,
    pub finish_reason: String,
}

#[derive(Debug, Deserialize)]
pub struct Message {
    pub role: String,
    pub content: Option<String>,
    pub tool_calls: Option<Vec<ToolCall>>,
}

#[derive(Debug, Deserialize)]
pub struct ToolCall {
    pub id: String,
    pub r#type: String,
    pub function: Function,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct ToolDefinition {
    pub r#type: String,
    pub function: completion::ToolDefinition,
}
impl From<completion::ToolDefinition> for ToolDefinition {
    fn from(tool: completion::ToolDefinition) -> Self {
        Self {
            r#type: "function".into(),
            function: tool,
        }
    }
}
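Putting the pieces together, a minimal end-to-end prompt through an agent — this mirrors the pattern in Rig's own examples; the preamble text is illustrative:

use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    let client = openai::Client::from_env();

    // The agent() convenience method builds on the AgentBuilder imported above.
    let agent = client
        .agent(openai::GPT_4O)
        .preamble("You are a concise assistant.")
        .build();

    let response = agent
        .prompt("Who are you?")
        .await
        .expect("completion request failed");

    println!("{response}");
}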
For detailed API reference and additional features, see the OpenAI API documentation and Rig’s API documentation.