class OpenAI::TranscriptionRequest
OpenAI::TranscriptionRequest < Reference < Object
Overview
TranscriptionRequest represents the request structure for the audio transcription API.
Included Modules
- JSON::Serializable
Extended Modules
- JSON::Schema
Defined in:
openai/api/audio.cr
Constructors
- .new(pull : JSON::PullParser)
- .new(file : File | Path | String, model : String = "whisper-1", prompt : Nil | String = nil, response_format : OpenAI::TranscriptionRespFormat = TranscriptionRespFormat::JSON, temperature : Float64 = 0.0, language : Nil | String = nil)
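As a rough usage sketch, a request for a local audio file can be built with the keyword constructor above (the require path and file name here are assumptions, not part of this documentation):

require "openai"  # assumed entry point for this shard

# Build a transcription request for a local audio file (path is hypothetical).
request = OpenAI::TranscriptionRequest.new(
  file: "recordings/meeting.mp3",
  model: "whisper-1",
  language: "en",
  response_format: OpenAI::TranscriptionRespFormat::JSON,
  temperature: 0.0
)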
Instance Method Summary
- #build_metada(builder : HTTP::FormData::Builder)
- #file : File | Path | String
- #file=(file : File | Path | String)
- #language : String | Nil
  The language of the input audio.
- #language=(language : String | Nil)
  The language of the input audio.
- #model : String
  ID of the model to use.
- #model=(model : String)
  ID of the model to use.
- #prompt : String | Nil
  An optional text to guide the model's style or continue a previous audio segment.
- #prompt=(prompt : String | Nil)
  An optional text to guide the model's style or continue a previous audio segment.
- #response_format : TranscriptionRespFormat
  The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
- #response_format=(response_format : TranscriptionRespFormat)
  The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
- #temperature : Float64
  The sampling temperature, between 0 and 1.
- #temperature=(temperature : Float64)
  The sampling temperature, between 0 and 1.
Constructor Detail
.new(pull : JSON::PullParser)
.new(file : File | Path | String, model : String = "whisper-1", prompt : Nil | String = nil, response_format : OpenAI::TranscriptionRespFormat = TranscriptionRespFormat::JSON, temperature : Float64 = 0.0, language : Nil | String = nil)
Instance Method Detail
#language : String | Nil
#language=(language : String | Nil)
The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.

#prompt : String | Nil
#prompt=(prompt : String | Nil)
An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

#response_format : TranscriptionRespFormat
#response_format=(response_format : TranscriptionRespFormat)
The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

#temperature : Float64
#temperature=(temperature : Float64)
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
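Because #build_metada accepts an HTTP::FormData::Builder, the request can be written into a multipart body with Crystal's standard library. A minimal sketch, assuming the request object from the constructor example above and assuming that build_metada emits the request's fields into the builder:

require "http/formdata"

io = IO::Memory.new
boundary = HTTP::FormData.generate_boundary

HTTP::FormData.build(io, boundary) do |builder|
  # Assumed behavior: build_metada writes the request's fields
  # (model, prompt, language, response_format, temperature, and the file)
  # as parts of the multipart form.
  request.build_metada(builder)
end

# io now holds the multipart body; send it with the header
# Content-Type: multipart/form-data; boundary=<boundary>.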