openai_dart 1.4.0
Dart client for the OpenAI API. Provides type-safe access to the latest OpenAI GPT and image models, Responses API, Whisper, Embeddings, and more with streaming support.
OpenAI Dart Client #
Unofficial Dart client for the OpenAI API to build with GPT-5.2, Responses API, GPT Image 1.5, Sora, Whisper, Embeddings, and more.
Features #
Responses API #
- Response creation (responses.create)
- Streaming support (responses.createStream) with SSE
- Built-in tools (web search, file search, code interpreter, computer use)
- Multi-turn conversations with previousResponseId
- Structured output and JSON schema
- Background mode and reasoning
Chat Completions #
- Chat completion creation (chat.completions.create)
- Streaming support (chat.completions.createStream) with SSE
- Multi-turn conversations
- System messages and developer messages
- JSON mode and structured output
Tool Use #
- Custom function/tool calling
- Tool choice modes (auto, none, required, function)
- Parallel tool calls
- Tool call streaming with delta accumulation
Vision #
- Image analysis with vision-capable models
- Base64 images (PNG, JPEG, GIF, WebP)
- URL images
- Multiple images in a single request
Audio #
- Text-to-speech (audio.speech.create)
- Speech-to-text (audio.transcriptions.create)
- Audio translation (audio.translations.create)
- Multiple voices and formats
Images (GPT Image) #
- Image generation (images.generate) with GPT Image 1.5
- Image editing with masks (images.edit)
- Image variations (images.createVariation)
- Multiple sizes and formats
Embeddings #
- Embedding creation (embeddings.create)
- Batch embeddings
- Dimension control (for text-embedding-3 models)
Files & Uploads #
- File upload for fine-tuning and assistants
- Large file uploads with multipart support
- File listing and retrieval
Batches #
- Batch request creation
- Batch status monitoring
- Batch result retrieval
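A batch workflow can be sketched as follows. Note that `BatchCreateRequest` and its field names are not shown elsewhere in this README, so they are assumptions modeled on the other request objects in this package; the JSONL-file/endpoint/completion-window shape mirrors the underlying OpenAI Batch API.

```dart
import 'package:openai_dart/openai_dart.dart';

void main() async {
  final client = OpenAIClient.fromEnvironment();

  // NOTE: the request type and field names below are assumptions,
  // modeled on the other resources in this package.
  final batch = await client.batches.create(
    BatchCreateRequest(
      inputFileId: 'file-xxx', // a previously uploaded JSONL file
      endpoint: '/v1/chat/completions',
      completionWindow: '24h',
    ),
  );
  print('Batch ${batch.id} status: ${batch.status}');

  // Poll the batch until it completes, then fetch the output file.
  final updated = await client.batches.retrieve(batch.id);
  print('Status: ${updated.status}');

  client.close();
}
```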
Fine-tuning #
- Fine-tuning job creation
- Job monitoring and cancellation
- Checkpoint management
Moderations #
- Content moderation
- Text and image moderation
- Category-specific scores
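A minimal moderation sketch, assuming `ModerationRequest`/`ModerationInput` types analogous to the other resources in this client (these names are not shown elsewhere in this README, so treat them as assumptions):

```dart
import 'package:openai_dart/openai_dart.dart';

void main() async {
  final client = OpenAIClient.fromEnvironment();

  // NOTE: the request/response type names below are assumptions,
  // modeled on the other request objects in this package.
  final result = await client.moderations.create(
    ModerationRequest(
      model: 'omni-moderation-latest',
      input: ModerationInput.text('Some user-generated text to check'),
    ),
  );

  // Each result carries an overall flag plus category-specific scores.
  print('Flagged: ${result.results.first.flagged}');

  client.close();
}
```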
Realtime API #
- WebSocket-based real-time conversations
- WebRTC support via HTTP SDP signaling (realtimeSessions.calls)
- Call management (create, accept, reject, refer, hangup)
- Audio input/output streaming
- Server and client events
Evals API #
- Evaluation creation and management
- Multiple grader types (string check, text similarity, label model, score model, python)
- Run management with data sources (JSONL, completions, responses)
- Output item analysis with pass/fail results
- Status polling helpers
Videos API (Sora) #
- Video generation (videos.create)
- Video status polling (videos.retrieve)
- Content download (video, thumbnail, spritesheet)
- Video remix (videos.remix)
Conversations API #
- Server-side conversation state management
- Long-term storage (no 30-day TTL)
- Integration with Responses API
- Items management (add, list, retrieve, delete)
Containers API #
- Isolated execution environments
- Container file management
- File upload and content retrieval
ChatKit API (Beta) #
- Chat UI toolkit powered by workflows
- Session management with client secrets
- Thread and item management
Assistants API (Deprecated) #
Deprecated: Use the Responses API instead. Import from package:openai_dart/openai_dart_assistants.dart.
- Assistant creation and management
- Thread management
- Messages and runs
- Streaming run events
- Tool integration (code interpreter, file search)
Vector Stores (Deprecated) #
Deprecated: Part of the Assistants API. Import from package:openai_dart/openai_dart_assistants.dart.
- Vector store management
- File batch processing
- File search integration
Why choose this client? #
- Type-safe with sealed classes
- Minimal dependencies (http, logging, meta, web_socket only)
- Works on all compilation targets (native, web, WASM)
- Interceptor-driven architecture
- Comprehensive error handling
- Automatic retry with exponential backoff
- SSE streaming support
- Resource-based API design matching official SDKs
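The error-handling and retry behavior above can be exercised along these lines. The `ApiKeyProvider` and `RetryPolicy` types appear in the Configuration section below; the `OpenAIException` base type is an illustrative assumption — see error_handling_example.dart in the repository for the actual exception hierarchy.

```dart
import 'package:openai_dart/openai_dart.dart';

void main() async {
  // Retries with exponential backoff are configured on the client.
  final client = OpenAIClient(
    config: OpenAIConfig(
      authProvider: ApiKeyProvider('YOUR_API_KEY'),
      retryPolicy: RetryPolicy(maxRetries: 3),
    ),
  );

  try {
    final response = await client.responses.create(
      CreateResponseRequest(
        model: 'gpt-5.2',
        input: ResponseInput.text('Hello!'),
      ),
    );
    print(response.outputText);
  } on OpenAIException catch (e) {
    // NOTE: `OpenAIException` is an assumed base exception type;
    // consult error_handling_example.dart for the real hierarchy.
    print('API error: $e');
  } finally {
    client.close();
  }
}
```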
Quickstart #
import 'package:openai_dart/openai_dart.dart';

void main() async {
  final client = OpenAIClient.withApiKey('YOUR_API_KEY');

  final response = await client.responses.create(
    CreateResponseRequest(
      model: 'gpt-5.2',
      input: ResponseInput.text('What is the capital of France?'),
    ),
  );

  print(response.outputText); // Paris is the capital of France.
  client.close();
}
Installation #
dependencies:
  openai_dart: ^1.4.0
Platform Support #
| Platform | Status |
|---|---|
| Dart VM | ✅ Full support |
| Flutter (iOS/Android) | ✅ Full support |
| Flutter Web | ✅ Full support |
| WASM | ✅ Full support |
Import Structure #
The package provides multiple entry points for different APIs:
Main Entry Point (Recommended) #
import 'package:openai_dart/openai_dart.dart';
Includes: Chat Completions, Responses API, Embeddings, Images, Videos, Audio, Files, Batches, Fine-tuning, Moderations, Evals, Conversations, Containers, ChatKit.
Assistants API (Deprecated) #
import 'package:openai_dart/openai_dart_assistants.dart' as assistants;
Includes: Assistants, Threads, Messages, Runs, Vector Stores.
Note: The Assistants API is being deprecated by OpenAI. Use the Responses API instead.
Realtime API #
import 'package:openai_dart/openai_dart_realtime.dart' as realtime;
Includes: Real-time conversations via WebSocket (audio streaming) and WebRTC (HTTP-based SDP signaling).
Handling Name Conflicts #
When using multiple entry points, use import prefixes to avoid naming conflicts:
import 'package:openai_dart/openai_dart.dart';
import 'package:openai_dart/openai_dart_assistants.dart' as assistants;
import 'package:openai_dart/openai_dart_realtime.dart' as realtime;
// Responses API types (modern, recommended)
final tool = CodeInterpreterTool();
// Assistants API types (deprecated)
final assistantTool = assistants.CodeInterpreterTool();
// Realtime API types
final rtEvent = realtime.ResponseCreatedEvent(...);
Configuration #
import 'package:openai_dart/openai_dart.dart';

final client = OpenAIClient(
  config: OpenAIConfig(
    authProvider: ApiKeyProvider('YOUR_API_KEY'),
    baseUrl: 'https://api.openai.com/v1', // Default
    timeout: Duration(minutes: 10),
    retryPolicy: RetryPolicy(maxRetries: 3),
    organization: 'org-xxx', // Optional
    project: 'proj-xxx', // Optional
  ),
);
From environment variables:
final client = OpenAIClient.fromEnvironment();
// Reads OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_ORG_ID, OPENAI_PROJECT_ID
With API key directly:
final client = OpenAIClient.withApiKey('sk-...');
Custom base URL (for proxies or Azure):
final client = OpenAIClient(
  config: OpenAIConfig(
    baseUrl: 'https://my-resource.openai.azure.com/openai/deployments/my-deployment',
    authProvider: AzureApiKeyProvider('YOUR_AZURE_KEY'),
  ),
);
Usage #
Responses API #
import 'package:openai_dart/openai_dart.dart';

final client = OpenAIClient.fromEnvironment();

final response = await client.responses.create(
  CreateResponseRequest(
    model: 'gpt-5.2',
    input: ResponseInput.text('What is the capital of France?'),
  ),
);

print('Response: ${response.outputText}');
print('Usage: ${response.usage}');

client.close();
Basic Chat Completion #
import 'package:openai_dart/openai_dart.dart';

final client = OpenAIClient.fromEnvironment();

final response = await client.chat.completions.create(
  ChatCompletionCreateRequest(
    model: 'gpt-5.2',
    messages: [
      ChatMessage.system('You are a helpful assistant.'),
      ChatMessage.user('What is the capital of France?'),
    ],
    maxTokens: 100,
  ),
);

print('Response: ${response.text}');
print('Finish reason: ${response.choices.first.finishReason}');
print('Usage: ${response.usage?.promptTokens} in, ${response.usage?.completionTokens} out');

client.close();
Streaming #
final stream = client.chat.completions.createStream(
  ChatCompletionCreateRequest(
    model: 'gpt-5.2',
    messages: [ChatMessage.user('Tell me a story')],
  ),
);

await for (final event in stream) {
  stdout.write(event.textDelta ?? '');
}
Tool Calling #
final response = await client.chat.completions.create(
  ChatCompletionCreateRequest(
    model: 'gpt-5.2',
    messages: [
      ChatMessage.user("What's the weather in Tokyo?"),
    ],
    tools: [
      Tool.function(
        name: 'get_weather',
        description: 'Get the current weather for a location',
        parameters: {
          'type': 'object',
          'properties': {
            'location': {'type': 'string', 'description': 'City name'},
          },
          'required': ['location'],
        },
      ),
    ],
  ),
);

if (response.hasToolCalls) {
  for (final toolCall in response.allToolCalls) {
    print('Function: ${toolCall.function.name}');
    print('Arguments: ${toolCall.function.arguments}');
  }
}
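To complete the round trip, the tool result is typically appended to the conversation and the model is called again. The `ChatMessage.assistant` and `ChatMessage.tool` constructors below are assumptions extrapolated from the `ChatMessage.system`/`ChatMessage.user` helpers shown in this README.

```dart
// Continuing from the example above: run the function locally,
// then send its result back so the model can produce a final answer.
final toolCall = response.allToolCalls.first;
final weatherJson = '{"location": "Tokyo", "temperature_c": 18}'; // your own lookup

final followUp = await client.chat.completions.create(
  ChatCompletionCreateRequest(
    model: 'gpt-5.2',
    messages: [
      ChatMessage.user("What's the weather in Tokyo?"),
      // NOTE: these two constructors are assumed, by analogy with the
      // ChatMessage.system / ChatMessage.user helpers shown above.
      ChatMessage.assistant(toolCalls: response.allToolCalls),
      ChatMessage.tool(toolCallId: toolCall.id, content: weatherJson),
    ],
  ),
);

print(followUp.text);
```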
Vision #
final response = await client.chat.completions.create(
  ChatCompletionCreateRequest(
    model: 'gpt-5.2',
    messages: [
      ChatMessage.user([
        ContentPart.text('What is in this image?'),
        ContentPart.imageUrl('https://example.com/image.jpg'),
      ]),
    ],
  ),
);

print(response.text);
Embeddings #
final response = await client.embeddings.create(
  EmbeddingRequest(
    model: 'text-embedding-3-small',
    input: EmbeddingInput.text('Hello, world!'),
    dimensions: 256, // Optional: reduce dimensions
  ),
);

final vector = response.firstEmbedding;
print('Embedding dimensions: ${vector.length}');
Image Generation #
final response = await client.images.generate(
  ImageGenerationRequest(
    model: 'gpt-image-1.5',
    prompt: 'A white cat wearing a top hat',
    size: ImageSize.size1024x1024,
    quality: ImageQuality.hd,
  ),
);

print('Image URL: ${response.data.first.url}');
Audio #
Text-to-Speech Example
final audioBytes = await client.audio.speech.create(
  SpeechRequest(
    model: 'tts-1',
    input: 'Hello! How are you today?',
    voice: SpeechVoice.nova,
  ),
);

File('output.mp3').writeAsBytesSync(audioBytes);
Speech-to-Text Example
final response = await client.audio.transcriptions.create(
  TranscriptionRequest(
    file: File('audio.mp3').readAsBytesSync(),
    filename: 'audio.mp3',
    model: 'whisper-1',
  ),
);

print('Transcription: ${response.text}');
Realtime API #
WebSocket Connection Example
import 'package:openai_dart/openai_dart.dart';
import 'package:openai_dart/openai_dart_realtime.dart' as realtime;

final client = OpenAIClient.fromEnvironment();

// Connect to a realtime session via WebSocket
final session = await client.realtime.connect(
  model: 'gpt-realtime-1.5',
  config: const realtime.SessionUpdateConfig(
    voice: realtime.RealtimeVoice.alloy,
    instructions: 'You are a helpful assistant.',
  ),
);

// Send a request and process events until the response is complete
session.createResponse();
await for (final event in session.events) {
  switch (event) {
    case realtime.SessionCreatedEvent(:final session):
      print('Session created: ${session.id}');
    case realtime.ResponseTextDeltaEvent(:final delta):
      stdout.write(delta);
    case realtime.ResponseDoneEvent():
      await session.close();
    case realtime.ErrorEvent(:final error):
      print('Error: ${error.message}');
      await session.close();
    default:
      break;
  }
}

client.close();
WebRTC Call Example
Note: For WebRTC peer connections in Flutter, use the flutter_webrtc package.
import 'package:flutter_webrtc/flutter_webrtc.dart';
import 'package:openai_dart/openai_dart.dart';
import 'package:openai_dart/openai_dart_realtime.dart' as realtime;

final client = OpenAIClient.fromEnvironment();

// 1. Create a peer connection and generate an SDP offer
final pc = await createPeerConnection({'iceServers': []});
final offer = await pc.createOffer();
await pc.setLocalDescription(offer);

// 2. Send the SDP offer to OpenAI and get the SDP answer
final sdpAnswer = await client.realtimeSessions.calls.create(
  realtime.RealtimeCallCreateRequest(
    sdp: offer.sdp!,
    session: const realtime.RealtimeSessionCreateRequest(
      model: 'gpt-realtime-1.5',
      voice: realtime.RealtimeVoice.alloy,
    ),
  ),
);

// 3. Set the SDP answer to complete the WebRTC handshake
await pc.setRemoteDescription(RTCSessionDescription(sdpAnswer, 'answer'));

// Call management operations (callId is obtained from your SIP/telephony layer)
const callId = 'call_xxx';
await client.realtimeSessions.calls.accept(callId);
await client.realtimeSessions.calls.hangup(callId);
await client.realtimeSessions.calls.refer(
  callId,
  realtime.RealtimeCallReferRequest(targetUri: 'tel:+14155550123'),
);
await client.realtimeSessions.calls.reject(
  callId,
  request: realtime.RealtimeCallRejectRequest(statusCode: 486),
);

client.close();
Extension Methods #
The package provides convenient extension methods for common operations:
Stream Extensions #
// Collect all text from a streaming response
final text = await stream.collectText();

// Iterate only text deltas
await for (final delta in stream.textDeltas()) {
  stdout.write(delta);
}

// Accumulate streaming chunks into a complete response
await for (final accumulated in stream.accumulate()) {
  print('Content so far: ${accumulated.content}');
}

// Or use ChatStreamAccumulator directly for full control
final accumulator = ChatStreamAccumulator();
await for (final event in stream) {
  accumulator.add(event);
}

// Build a ChatCompletion from the accumulated stream data
final completion = accumulator.toChatCompletion();
print(completion.text);
Message List Extensions #
// Build message lists fluently
final messages = <ChatMessage>[]
    .withSystemMessage('You are helpful')
    .withUserMessage('Hello!');
Examples #
See the example/ directory for complete examples:
| Example | Description |
|---|---|
| chat_example.dart | Basic chat completions with multi-turn conversations |
| streaming_example.dart | Streaming responses with text deltas |
| tool_calling_example.dart | Function calling with tool definitions |
| vision_example.dart | Image analysis with vision models |
| responses_example.dart | Responses API with built-in tools |
| embeddings_example.dart | Text embeddings with dimension control |
| images_example.dart | GPT Image generation |
| videos_example.dart | Sora video generation |
| audio_example.dart | Text-to-speech and transcription |
| files_example.dart | File upload and management |
| conversations_example.dart | Conversations API for state management |
| containers_example.dart | Containers for isolated execution |
| chatkit_example.dart | ChatKit sessions and threads |
| assistants_example.dart | Assistants API (deprecated) |
| evals_example.dart | Model evaluation and testing |
| error_handling_example.dart | Exception handling patterns |
| models_example.dart | Model listing and retrieval |
| batches_example.dart | Batch processing for async jobs |
| moderation_example.dart | Content moderation |
| web_search_example.dart | Web search with Responses API |
| realtime_example.dart | Realtime API (WebSocket and WebRTC) |
| fine_tuning_example.dart | Fine-tuning job management |
API Coverage #
| API | Status |
|---|---|
| Chat Completions | ✅ Full |
| Responses API | ✅ Full |
| Embeddings | ✅ Full |
| Images | ✅ Full |
| Videos (Sora) | ✅ Full |
| Audio (Speech, Transcription, Translation) | ✅ Full |
| Files | ✅ Full |
| Uploads | ✅ Full |
| Batches | ✅ Full |
| Models | ✅ Full |
| Moderations | ✅ Full |
| Fine-tuning | ✅ Full |
| Evals | ✅ Full |
| Conversations | ✅ Full |
| Containers | ✅ Full |
| ChatKit (Beta) | ✅ Full |
| Realtime | ✅ Full (separate import) |
| Assistants (Deprecated) | ✅ Full (separate import) |
| Threads (Deprecated) | ✅ Full (separate import) |
| Messages (Deprecated) | ✅ Full (separate import) |
| Runs (Deprecated) | ✅ Full (separate import) |
| Vector Stores (Deprecated) | ✅ Full (separate import) |
| Completions (Legacy) | ✅ Full |
Sponsor #
If these packages are useful to you or your company, please sponsor the project. Development and maintenance are provided to the community for free, but integration tests against real APIs and the tooling required to build and verify releases still have real costs. Your support, at any level, helps keep these packages maintained and free for the Dart & Flutter community.
License #
MIT License - see LICENSE for details.