openai_dart 1.1.0
Dart client for the OpenAI API. Provides type-safe access to GPT, DALL-E, Whisper, Embeddings, Assistants, and more with streaming support.
1.1.0 #
Added baseUrl and defaultHeaders parameters to withApiKey constructors, aligned Responses API models with the latest OpenAI spec, fixed null index handling in ToolCallDelta.fromJson, and improved hashCode for list fields.
- FEAT: Add baseUrl and defaultHeaders to withApiKey constructors (#57). (f0dd0caa)
- FIX: Align Responses API models with current OpenAI spec (#59). (a55a67b7)
- FIX: Handle null index in ToolCallDelta.fromJson (#64). (9b3df8a4)
- FIX: Use Object.hashAll() for list fields in hashCode (#65). (4b19abd9)
- REFACTOR: Unify equality_helpers.dart across packages (#67). (ec2897f8)
1.0.1 #
1.0.0 #
Note: This release has breaking changes.
TL;DR: Complete reimplementation with a new architecture, minimal dependencies, resource-based API, and improved developer experience. Hand-crafted models (no code generation), interceptor-driven architecture, comprehensive error handling, full OpenAI API coverage, and alignment with the latest OpenAI OpenAPI (2026-02-19).
What's new #
- Resource-based API organization:
  - client.chat.completions — Chat completion creation, streaming
  - client.responses — Responses API (recommended unified API)
  - client.conversations — Conversation management
  - client.embeddings — Text embeddings
  - client.audio.speech / audio.transcriptions / audio.translations — Audio APIs
  - client.images — Image generation, editing, variations
  - client.files / client.uploads — File and large upload management
  - client.batches — Batch processing
  - client.models — Model listing and retrieval
  - client.moderations — Content moderation
  - client.fineTuning.jobs — Fine-tuning job management
  - client.beta.assistants / beta.threads / beta.vectorStores — Assistants API (Beta)
  - client.videos — Sora video generation
  - client.containers — Code execution containers
  - client.chatkit — ChatKit sessions and threads (Beta)
  - client.evals — Model evaluation
  - client.realtime — WebSocket-based Realtime API
  - client.completions — Legacy text completions
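The resource layout above can be sketched as follows. This is an illustrative example based only on names listed in this changelog; the exact request parameter names are assumptions:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  // Quick setup with an API key (constructor named in this changelog).
  final client = OpenAIClient.withApiKey('YOUR_API_KEY');

  // Each API lives under a strongly-typed resource on the client.
  final completion = await client.chat.completions.create(
    request: ChatCompletionCreateRequest(
      model: 'gpt-4o', // plain model string, per the v1.0.0 rename notes
      messages: [ChatMessage.user('Say hello in one word.')],
    ),
  );
  print(completion.text); // .text response helper from this release

  client.close(); // endSession() was renamed to close() in v1.0.0
}
```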
- Architecture:
- Interceptor chain (Auth → Logging → Error → Transport with Retry wrapper).
- Authentication: API key, organization+key, or Azure via the AuthProvider interface (ApiKeyProvider, OrganizationApiKeyProvider, AzureApiKeyProvider).
- Retry with exponential backoff + jitter (only for idempotent methods on 429, 5xx, timeouts).
- Abortable requests via the abortTrigger parameter.
- SSE streaming parser for real-time responses.
- WebSocket support for Realtime API.
- Central OpenAIConfig (timeouts, retry policy, log level, baseUrl, auth).
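A sketch of abortable requests with the typed exceptions from this release. The type of abortTrigger and its exact placement in the call are assumptions; the class and parameter names come from this changelog:

```dart
import 'dart:async';

import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient.withApiKey('YOUR_API_KEY');

  // Completing the trigger cancels the in-flight request
  // (assumed Future-based trigger; check the package docs).
  final abort = Completer<void>();
  Timer(const Duration(seconds: 5), abort.complete);

  try {
    await client.chat.completions.create(
      request: ChatCompletionCreateRequest(
        model: 'gpt-4o',
        messages: [ChatMessage.user('Write a long story.')],
      ),
      abortTrigger: abort.future,
    );
  } on AbortedException {
    print('Request aborted after 5 seconds.');
  } finally {
    client.close();
  }
}
```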
- Hand-crafted models:
- No code generation dependencies (no freezed, json_serializable).
- Minimal runtime dependencies (http, logging, meta, web_socket only).
- Immutable models with copyWith using the sentinel pattern.
- Full type safety with a sealed exception hierarchy.
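The sentinel-based copyWith mentioned above is a standard Dart technique for distinguishing "argument not passed" from "explicitly set to null". A minimal standalone illustration (not the package's actual code):

```dart
// Sentinel object that marks "argument not provided".
const _unset = Object();

class Message {
  final String role;
  final String? name; // nullable field we may want to clear

  const Message({required this.role, this.name});

  // Unlike the naive `name ?? this.name`, this copyWith can
  // distinguish copyWith(name: null) from copyWith().
  Message copyWith({String? role, Object? name = _unset}) => Message(
        role: role ?? this.role,
        name: identical(name, _unset) ? this.name : name as String?,
      );
}

void main() {
  final m = Message(role: 'user', name: 'alice');
  print(m.copyWith(name: null).name); // null: explicitly cleared
  print(m.copyWith(role: 'system').name); // alice: untouched
}
```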
- Improved DX:
- Simplified message creation (e.g., ChatMessage.user(), ChatMessage.system()).
- Explicit streaming methods (createStream() vs create()).
- Response helpers (.text, .hasToolCalls, .allToolCalls).
- ChatStreamAccumulator and extension methods (collectText(), textDeltas(), accumulate()).
- Rich logging with field redaction for sensitive data.
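A streaming sketch using the helpers named above. The textDeltas() extension and createStream() come from this changelog; the request parameter shape is an assumption:

```dart
import 'dart:io';

import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient.withApiKey('YOUR_API_KEY');

  // createStream() is the explicit streaming counterpart of create().
  final stream = client.chat.completions.createStream(
    request: ChatCompletionCreateRequest(
      model: 'gpt-4o',
      messages: [ChatMessage.user('Count to three.')],
    ),
  );

  // textDeltas() narrows the event stream to plain text chunks.
  await for (final delta in stream.textDeltas()) {
    stdout.write(delta);
  }
  client.close();
}
```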
- Full API coverage:
- Chat completions with tool calling, vision, structured outputs, audio, and predicted outputs.
- Responses API with built-in tool output types (web search, file search, code interpreter, image generation, MCP).
- Videos API (Sora) for video generation and remixing.
- Conversations API for multi-turn conversation management.
- Containers API for isolated code execution environments.
- ChatKit API for session and thread management (Beta).
- Evals API with multiple grader types and data source configurations.
- Realtime API for WebSocket-based audio conversations.
- Full Assistants, Threads, Messages, Runs, and Vector Stores API (Beta, separate import).
Breaking Changes #
- Resource-based API: Methods reorganized under strongly-typed resources:
  - client.createChatCompletion() → client.chat.completions.create()
  - client.createChatCompletionStream() → client.chat.completions.createStream()
  - client.createEmbedding() → client.embeddings.create()
  - client.createImage() → client.images.generate()
  - client.createSpeech() → client.audio.speech.create()
  - client.createTranscription() → client.audio.transcriptions.create()
  - client.createFineTuningJob() → client.fineTuning.jobs.create()
  - client.uploadFile() → client.files.upload()
  - client.createBatch() → client.batches.create()
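A before/after migration sketch for the most common call, assuming the request parameter keeps its 0.x keyword style:

```dart
import 'package:openai_dart/openai_dart.dart';

// v0.x (removed):
//   client.createChatCompletion(request: CreateChatCompletionRequest(...))
// v1.0.0: the same call moves under the chat.completions resource.
Future<String?> askOnce(OpenAIClient client) async {
  final res = await client.chat.completions.create(
    request: ChatCompletionCreateRequest(
      model: 'gpt-4o',
      messages: [ChatMessage.user('Hello!')],
    ),
  );
  // The .text helper replaces choices.first.message.content.
  return res.text;
}
```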
- Model class renames:
  - CreateChatCompletionRequest → ChatCompletionCreateRequest
  - ChatCompletionMessage.user(content: ChatCompletionUserMessageContent.string('...')) → ChatMessage.user('...')
  - ChatCompletionMessage.system(content: '...') → ChatMessage.system('...')
  - ChatCompletionTool(type: ..., function: FunctionObject(...)) → Tool.function(...)
  - ChatCompletionModel.modelId('gpt-4o') → 'gpt-4o' (plain string)
  - EmbeddingInput.string('...') → EmbeddingInput.text('...')
  - CreateImageRequest → ImageGenerationRequest
  - ImageSize.v1024x1024 → ImageSize.size1024x1024
- Import structure: Assistants and Realtime APIs moved to separate entry points:
  - import 'package:openai_dart/openai_dart_assistants.dart' for Assistants, Threads, Messages, Runs, Vector Stores
  - import 'package:openai_dart/openai_dart_realtime.dart' for the Realtime API
- Configuration: New OpenAIConfig with the AuthProvider pattern:
  - OpenAIClient(apiKey: 'KEY') → OpenAIClient(config: OpenAIConfig(authProvider: ApiKeyProvider('KEY')))
  - Or use OpenAIClient.fromEnvironment() to read OPENAI_API_KEY.
  - Or use OpenAIClient.withApiKey('KEY') for quick setup.
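The three setup styles above, side by side. Constructor names come from this changelog; this sketch only constructs and closes the clients:

```dart
import 'package:openai_dart/openai_dart.dart';

void main() {
  // 1. Full config with an explicit AuthProvider.
  final a = OpenAIClient(
    config: OpenAIConfig(authProvider: ApiKeyProvider('KEY')),
  );

  // 2. Read OPENAI_API_KEY from the environment.
  final b = OpenAIClient.fromEnvironment();

  // 3. Shorthand for the common single-key case.
  final c = OpenAIClient.withApiKey('KEY');

  for (final client in [a, b, c]) {
    client.close();
  }
}
```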
- Exceptions: Replaced OpenAIClientException with a typed hierarchy: ApiException, AuthenticationException, RateLimitException, NotFoundException, RequestTimeoutException, AbortedException, ConnectionException, ParseException, StreamException.
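With a sealed hierarchy, error handling becomes exhaustive on-clauses instead of string matching. A sketch using only exception types named in this changelog (request parameter shape assumed):

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient.withApiKey('YOUR_API_KEY');
  try {
    await client.chat.completions.create(
      request: ChatCompletionCreateRequest(
        model: 'gpt-4o',
        messages: [ChatMessage.user('Hi')],
      ),
    );
  } on RateLimitException {
    // 429s are also retried automatically for idempotent methods.
    print('Rate limited; back off before retrying.');
  } on AuthenticationException {
    print('Check the API key.');
  } on ApiException catch (e) {
    print('API error: $e');
  } finally {
    client.close();
  }
}
```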
- Streaming: Use convenience getters and extension methods:
  - event.choices.first.delta.content → event.textDelta
  - .map() callbacks → Dart 3 switch expressions or is-type checks.
- Nullable fields: Model.created, Model.ownedBy, and ChatCompletion.created are now nullable for OpenAI-compatible provider support.
- Session cleanup: endSession() → close().
- Dependencies: Removed freezed, json_serializable; now minimal (http, logging, meta, web_socket).
See MIGRATION.md for step-by-step examples and mapping tables.
Commits #
- BREAKING FEAT: Complete v1.0.0 reimplementation (#24). (ed68e31b)
- BREAKING FEAT: Add type-safe ResponseInput and convenience factories (#29). (015307ea)
- BREAKING FIX: Make created and ownedBy nullable for provider compatibility (#30). (5c56f005)
- FEAT: Add Skills API, response compaction, JSON image editing, and batch endpoints (#34). (98128ade)
- FIX: Pre-release documentation and code fixes (#41). (5616f8f3)
- REFACTOR: Align client package architecture across SDK packages (#37). (cf741ee1)
- REFACTOR: Align API surface across all SDK packages (#36). (ed969cc7)
- DOCS: Refactors repository URLs to new location. (76835268)
0.6.1 #
- FEAT: Add image streaming and new GPT image models (#827). (1218d8c3)
- FEAT: Add ImageGenStreamEvent schema for streaming (#834). (eb640052)
- FEAT: Add ImageGenUsage schema for image generation (#833). (aecf79a9)
- FEAT: Add metadata fields to ImagesResponse (#831). (bd94b4c6)
- FEAT: Add prompt_tokens_details to CompletionUsage (#830). (ede649d1)
- FEAT: Add fine-tuning method parameter and schemas (#828). (99d77425)
- FEAT: Add Batch model and usage fields (#826). (b2933f50)
- FEAT: Add OpenRouter-specific sampling parameters (#825). (3dd9075c)
- FIX: Remove default value from image stream parameter (#829). (d94c7063)
- FIX: Fix OpenRouter reasoning type enum parsing (#810) (#824). (44ab2841)
0.6.0 #
Note: This release has breaking changes.
- FIX: Correct text content serialization in CreateMessageRequest (#805). (e4569c96)
- FIX: Handle optional space after colon in SSE parser (#779). (9defa827)
- FEAT: Add OpenRouter provider routing support (#794). (6d306bc1)
- FEAT: Add OpenAI-compatible vendor reasoning content support (#793). (e0712c38)
- FEAT: Upgrade to http v1.5.0 (#785). (f7c87790)
- BREAKING BUILD: Require Dart >=3.8.0 (#792). (b887f5c6)
0.5.4+1 #
0.5.4 #
0.5.3 #
0.5.2 #
0.5.1 #
0.5.0 #
- BREAKING FEAT: Align OpenAI API changes (#706). (b8b04ca6)
- FEAT: Add support for web search, gpt-image-1 and list chat completions (#716). (269dea03)
- FEAT: Update OpenAI model catalog (#714). (68df4558)
- FEAT: Change the default value of 'reasoning_effort' from medium to null (#713). (f224572e)
- FEAT: Update dependencies (requires Dart 3.6.0) (#709). (9e3467f7)
- REFACTOR: Remove fetch_client dependency in favor of http v1.3.0 (#659). (0e0a685c)
- REFACTOR: Fix linter issues (#708). (652e7c64)
- DOCS: Fix TruncationObject docs typo. (ee5ed4fd)
- DOCS: Document Azure Assistants API base url (#626). (c3459eea)
0.4.5 #
- FEAT: Support Predicted Outputs (#613). (315fe0fd)
- FEAT: Support streaming audio responses in chat completions (#615). (6da756a8)
- FEAT: Add gpt-4o-2024-11-20 to model catalog (#614). (bf333081)
- FIX: Default store field to null to support Azure and Groq APIs (#608). (21332960)
- FIX: Make first_id and last_id nullable in list endpoints (#607). (7cfc4ddf)
- DOCS: Update OpenAI endpoints descriptions (#612). (10c66888)
- REFACTOR: Add new lint rules and fix issues (#621). (60b10e00)
- REFACTOR: Upgrade api clients generator version (#610). (0c8750e8)
0.4.3 #
- FEAT: Add support for audio in chat completions (#577). (0fb058cd)
- FEAT: Add support for storing outputs for model distillation and metadata (#578). (c9b8bdf4)
- FEAT: Support multi-modal moderations (#576). (45b9f423)
- FIX: submitThreadToolOutputsToRunStream not returning any events (#574). (00803ac7)
- DOCS: Add xAI to list of OpenAI-compatible APIs (#582). (017cb74f)
- DOCS: Fix assistants API outdated documentation (#579). (624c4128)
0.4.2+1 #
- DOCS: Add note about the new openai_realtime_dart client. (44672f0a)
0.4.2 #
0.4.1 #
0.4.0 #
- FEAT: Add support for disabling parallel tool calls (#492). (a91e0719)
- FEAT: Add GPT-4o-mini to model catalog (#497). (faa23aee)
- FEAT: Support chunking strategy in file_search tool (#496). (cfa974a9)
- FEAT: Add support for overrides in the file search tool (#491). (89605638)
- FEAT: Allow to customize OpenAI-Beta header (#502). (5fed8dbb)
- FEAT: Add support for service tier (#494). (0838e4b9)
0.3.3 #
0.3.2+1 #
0.3.2 #
0.3.1 #
0.3.0 #
Note: This release has breaking changes.
If you are using the Assistants API v1, please refer to the OpenAI docs to see how to migrate to v2.
0.2.1 #
- FEAT: Support for Batch API (#383). (6b89f4a2)
- FEAT: Streaming support for Assistant API (#379). (6ef68196)
- FEAT: Option to specify tool choice in Assistant API (#382). (97d7977a)
- FEAT: JSON mode in Assistant API (#381). (a864dae3)
- FEAT: Max tokens and truncation strategy in Assistant API (#380). (7153167b)
- FEAT: Updated models catalog with GPT-4 Turbo with Vision (#378). (88537540)
- FEAT: Weights & Biases integration for fine-tuning and seed options (#377). (a5fff1bf)
- FEAT: Support for checkpoints in fine-tuning jobs (#376). (69f8e2f9)
0.2.0 #
0.1.7 #
0.1.6 #
0.1.5 #
0.1.4 #
0.1.0+1 #
0.1.0 #
Note: This release has breaking changes. Migration guides: new factories and multi-modal
0.0.2+2 #
0.0.2 #
- FEAT: Support new models API functionality (#203). (33ebe746)
- FEAT: Support new images API functionality (#202). (fcf21daf)
- FEAT: Support new fine-tuning API functionality (#201). (f5f44ad8)
- FEAT: Support new embeddings API functionality (#200). (9b43d85b)
- FEAT: Support new completion API functionality (#199). (f12f6f57)
- FEAT: Support new chat completion API functionality (#198). (01820d69)
- FIX: Handle nullable function call fields when streaming (#191). (8f23cf16)
0.0.1 #
0.0.1-dev.1 #
- Bootstrap project