class OpenAI::ChatCompletionRequest

Defined in:

openai/api/chat.cr

Constructor Detail

def self.new(model : String, messages : Array(OpenAI::ChatMessage), max_tokens : Int32 | Nil = nil, temperature : Float64 = 1.0, top_p : Float64 = 1.0, stream : Bool = false, stop : Array(String) | String | Nil = nil, presence_penalty : Float64 = 0.0, frequency_penalty : Float64 = 0.0, logit_bias : Nil | Hash(String, Float64) = nil, user : Nil | String = nil, functions : Nil | Array(OpenAI::ChatFunction) = nil, function_call : JSON::Any | String | Nil = nil, tools : Nil | Array(OpenAI::ChatTool) = nil, tool_choice : JSON::Any | String | Nil = nil) #

[View source]
def self.new(pull : JSON::PullParser) #

[View source]
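
A minimal construction sketch. Only model and messages are required; everything else in the signature above has a default. The ChatMessage constructor used here (role and content named arguments) is an assumption, as its signature is not documented on this page:

  require "openai" # require path assumed from the "Defined in" location above

  # Build the conversation; role/content named arguments are assumed.
  messages = [
    OpenAI::ChatMessage.new(role: "user", content: "Hello!"),
  ]

  # Only model and messages are required.
  request = OpenAI::ChatCompletionRequest.new(
    model: "gpt-4",
    messages: messages,
    max_tokens: 256,
    temperature: 0.2
  )

Since the class also exposes a JSON::PullParser constructor, it is presumably JSON::Serializable, so request.to_json should produce the request payload.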

Instance Method Detail

def frequency_penalty : Float64 #

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.


[View source]
def frequency_penalty=(frequency_penalty : Float64) #

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.


[View source]
def function_call : String | JSON::Any | Nil #

[View source]
def function_call=(function_call : String | JSON::Any | Nil) #

[View source]
def functions : Array(ChatFunction) | Nil #

[View source]
def functions=(functions : Array(ChatFunction) | Nil) #

[View source]
def logit_bias : Hash(String, Float64) | Nil #

Modify the likelihood of specified tokens appearing in the completion. You can use OpenAI's tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs.


[View source]
def logit_bias=(logit_bias : Hash(String, Float64) | Nil) #

Modify the likelihood of specified tokens appearing in the completion. You can use OpenAI's tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs.


[View source]
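
A quick sketch, reusing the request from the constructor example. Keys are token IDs rendered as strings; per the OpenAI API, bias values range from -100 to 100, and the token ID below is an arbitrary placeholder:

  # -100 effectively bans the token; 100 effectively forces it.
  request.logit_bias = {"1234" => -100.0}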
def max_tokens : Int32 | Nil #

The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.


[View source]
def max_tokens=(max_tokens : Int32 | Nil) #

The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.


[View source]
def messages : Array(ChatMessage) #

A list of messages comprising the conversation so far.


[View source]
def messages=(messages : Array(ChatMessage)) #

A list of messages comprising the conversation so far.


[View source]
def model : String #

The ID of the model to use.


[View source]
def model=(model : String) #

The ID of the model to use.


[View source]
def num_completions : Int32 #

[View source]
def num_completions=(num_completions : Int32) #

[View source]
def presence_penalty : Float64 #

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.


[View source]
def presence_penalty=(presence_penalty : Float64) #

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.


[View source]
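
A sketch combining both penalties on the request from the constructor example: frequency_penalty discourages verbatim repetition, while presence_penalty nudges the model toward new topics:

  # Modest positive values; both accept -2.0 to 2.0.
  request.frequency_penalty = 0.5
  request.presence_penalty = 0.6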
def response_format : ResponseFormat | Nil #

An object specifying the format that the model must output. Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.


[View source]
def response_format=(response_format : ResponseFormat | Nil) #

An object specifying the format that the model must output. Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.


[View source]
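
A sketch of enabling JSON mode on the request from the constructor example. The ResponseFormat constructor shown here is an assumption, since it is not documented on this page:

  # Enable JSON mode (constructor shape assumed).
  request.response_format = OpenAI::ResponseFormat.new(type: "json_object")

  # Per the warning above, also instruct the model to emit JSON:
  request.messages << OpenAI::ChatMessage.new(
    role: "system",
    content: "Respond with a JSON object."
  )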
def seed : Int32 | Nil #

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.


[View source]
def seed=(seed : Int32 | Nil) #

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.


[View source]
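
For example, on the request from the constructor example:

  # Best-effort reproducibility (Beta): reuse the same seed and
  # parameters, and compare system_fingerprint across responses.
  request.seed = 42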
def stop : String | Array(String) | Nil #

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.


[View source]
def stop=(stop : String | Array(String) | Nil) #

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.


[View source]
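
For example:

  # Generation halts at whichever sequence appears first; the matched
  # sequence is not included in the returned text.
  request.stop = ["\n\n", "END"]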
def stream : Bool #

Whether to stream back partial progress. If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.


[View source]
def stream=(stream : Bool) #

Whether to stream back partial progress. If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.


[View source]
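
For example:

  # Ask for data-only server-sent events; partial message deltas
  # arrive as they are generated, ending with a data: [DONE] event.
  request.stream = true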
def temperature : Float64 #

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.


[View source]
def temperature=(temperature : Float64) #

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.


[View source]
def tool_choice : String | JSON::Any | Nil #

Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present.


[View source]
def tool_choice=(tool_choice : String | JSON::Any | Nil) #

Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present.


[View source]
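
Because the object form is carried as a JSON::Any, here is a sketch of all three options (my_function is the placeholder name from the description above):

  require "json"

  request.tool_choice = "none" # never call a tool
  request.tool_choice = "auto" # let the model decide
  # Force one specific function by passing the object form:
  request.tool_choice = JSON.parse(
    %({"type": "function", "function": {"name": "my_function"}})
  )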
def tools : Array(ChatTool) | Nil #

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.


[View source]
def tools=(tools : Array(ChatTool) | Nil) #

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.


[View source]
def top_p : Float64 #

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Alter this or temperature but not both.


[View source]
def top_p=(top_p : Float64) #

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Alter this or temperature but not both.


[View source]
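
A sketch of that guidance, adjusting one sampling control and leaving the other at its default:

  request.temperature = 0.2 # more focused and deterministic
  # ...or, alternatively:
  # request.top_p = 0.1     # only the top 10% probability mass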
def user : String | Nil #

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.


[View source]
def user=(user : String | Nil) #

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.


[View source]