v1TextToSpeechVoiceIdStreamWithTimestampsPost method
- required String? voiceId,
- bool? enableLogging,
- int? optimizeStreamingLatency,
- V1TextToSpeechVoiceIdStreamWithTimestampsPostOutputFormat? outputFormat,
- String? xiApiKey,
- required BodyTextToSpeechStreamingWithTimestampsV1TextToSpeechVoiceIdStreamWithTimestampsPost? body,
Text To Speech Streaming With Timestamps

- voice_id: Voice ID to be used. You can call https://api.elevenlabs.io/v1/voices to list all available voices.
- enable_logging: When enable_logging is set to false, zero retention mode is used for the request. History features, including request stitching, are then unavailable for this request. Zero retention mode may only be used by enterprise customers.
- optimize_streaming_latency: Turns on latency optimizations at some cost to quality. The best possible final latency varies by model. Possible values:
  - 0: default mode (no latency optimizations)
  - 1: normal latency optimizations (about 50% of the possible latency improvement of option 3)
  - 2: strong latency optimizations (about 75% of the possible latency improvement of option 3)
  - 3: max latency optimizations
  - 4: max latency optimizations, but with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates)
  Defaults to None.
- output_format: Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05 kHz sample rate at 32 kbps is represented as mp3_22050_32. MP3 at a 192 kbps bitrate requires a Creator tier subscription or above; PCM with a 44.1 kHz sample rate requires a Pro tier subscription or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.
- xi-api-key: Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key under the 'Profile' tab on the website.
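For orientation, a minimal usage sketch follows. The service name ElevenLabsApi.create(), the enum member mp3_22050_32, and the body field text are assumptions about the generated client, not confirmed by this page:

Future<void> main() async {
  // Assumes the generated Chopper service is named ElevenLabsApi (hypothetical).
  final api = ElevenLabsApi.create();

  final response = await api.v1TextToSpeechVoiceIdStreamWithTimestampsPost(
    voiceId: 'VOICE_ID', // pick one from https://api.elevenlabs.io/v1/voices
    xiApiKey: 'YOUR_XI_API_KEY',
    optimizeStreamingLatency: 0, // default mode, no latency optimizations
    // Hypothetical enum member following the codec_sample_rate_bitrate scheme.
    outputFormat: enums
        .V1TextToSpeechVoiceIdStreamWithTimestampsPostOutputFormat.mp3_22050_32,
    body:
        BodyTextToSpeechStreamingWithTimestampsV1TextToSpeechVoiceIdStreamWithTimestampsPost(
      text: 'Hello world', // hypothetical field name
    ),
  );

  if (response.isSuccessful) {
    // The body carries an audio chunk plus character-level timestamps.
    final StreamingAudioChunkWithTimestampsResponseModel? chunk = response.body;
    print(chunk);
  }
}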
Implementation
Future<chopper.Response<StreamingAudioChunkWithTimestampsResponseModel>>
    v1TextToSpeechVoiceIdStreamWithTimestampsPost({
  required String? voiceId,
  bool? enableLogging,
  int? optimizeStreamingLatency,
  enums.V1TextToSpeechVoiceIdStreamWithTimestampsPostOutputFormat?
      outputFormat,
  String? xiApiKey,
  required BodyTextToSpeechStreamingWithTimestampsV1TextToSpeechVoiceIdStreamWithTimestampsPost?
      body,
}) {
  // Register the response model's fromJson factory so the shared JSON
  // converter can deserialize the response body into this type.
  generatedMapping.putIfAbsent(
    StreamingAudioChunkWithTimestampsResponseModel,
    () => StreamingAudioChunkWithTimestampsResponseModel.fromJsonFactory,
  );

  // Delegate to the private generated Chopper method; the output-format
  // enum and the API key are passed down as plain strings.
  return _v1TextToSpeechVoiceIdStreamWithTimestampsPost(
    voiceId: voiceId,
    enableLogging: enableLogging,
    optimizeStreamingLatency: optimizeStreamingLatency,
    outputFormat: outputFormat?.value?.toString(),
    xiApiKey: xiApiKey?.toString(),
    body: body,
  );
}
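This wrapper pattern appears to come from swagger_dart_code_generator output: putIfAbsent registers the response model's fromJsonFactory under its Type in the shared generatedMapping so the Chopper JSON converter can look up the right deserializer at runtime, and the method then delegates to the private generated counterpart, which declares the output format and API key as plain strings (hence the value/toString conversions above).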