class OpenRouter::CompletionRequest

OpenRouter::CompletionRequest < Reference < Object

Defined in:
openrouter/types/completion_request.cr

Constructors
- .new(messages : Array(Message), model : String | Nil = nil, tools : Array(Tool) = [] of Tool)
- .new(prompt : String, model : String | Nil = nil, tools : Array(Tool) = [] of Tool)
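A minimal construction sketch using the two overloads above. The Message initializer shown (role plus content) is an assumption for illustration; see the Message reference page for its actual signature.

require "openrouter"

# Prompt form: a single string prompt.
request = OpenRouter::CompletionRequest.new(
  "What is the capital of France?",
  model: "openai/gpt-4o"
)

# Messages form. The Message initializer used here is assumed;
# consult the Message reference page for the real signature.
messages = [OpenRouter::Message.new(:user, "Hello!")]
chat = OpenRouter::CompletionRequest.new(messages, model: "openai/gpt-4o")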
Instance Method Summary
- #add_tool(tool : Tool)
- #frequency_penalty : Float32 | Nil
  Range: (-2, 2)
- #frequency_penalty=(frequency_penalty : Float32 | Nil)
  Range: (-2, 2)
- #logit_bias_key : Float32 | Nil
- #logit_bias_key=(logit_bias_key : Float32 | Nil)
- #logit_bias_value : Float32 | Nil
- #logit_bias_value=(logit_bias_value : Float32 | Nil)
- #max_tokens : Int32 | Nil
  See LLM Parameters (openrouter.ai/docs/parameters)
- #max_tokens=(max_tokens : Int32 | Nil)
  See LLM Parameters (openrouter.ai/docs/parameters)
- #messages : Array(Message) | Nil
- #messages=(messages : Array(Message) | Nil)
- #min_p : Float32 | Nil
  Range: [0, 1]
- #min_p=(min_p : Float32 | Nil)
  Range: [0, 1]
- #model : String | Nil
- #model=(model : String | Nil)
- #models : Array(String) | Nil
  For models and route, see the "Model Routing" section at openrouter.ai/docs/model-routing.
- #models=(models : Array(String) | Nil)
  For models and route, see the "Model Routing" section at openrouter.ai/docs/model-routing.
- #presence_penalty : Float32 | Nil
  Range: (-2, 2)
- #presence_penalty=(presence_penalty : Float32 | Nil)
  Range: (-2, 2)
- #prompt : String | Nil
- #prompt=(prompt : String | Nil)
- #provider : String | Nil
  See the "Provider Routing" section: openrouter.ai/docs/provider-routing
- #provider=(provider : String | Nil)
  See the "Provider Routing" section: openrouter.ai/docs/provider-routing
- #repetition_penalty : Float32 | Nil
  Range: (0, 2]
- #repetition_penalty=(repetition_penalty : Float32 | Nil)
  Range: (0, 2]
- #route : String | Nil
- #route=(route : String | Nil)
- #seed : Int32 | Nil
- #seed=(seed : Int32 | Nil)
- #stop : String | Array(String) | Nil
  The stop tokens.
- #stop=(stop : String | Array(String) | Nil)
  The stop tokens.
- #stream : Bool
  Whether to stream the response.
- #stream=(stream : Bool)
  Whether to stream the response.
- #temperature : Float32 | Nil
- #temperature=(temperature : Float32 | Nil)
- #to_json(io : IO)
  (See the usage sketch after this summary.)
- #to_json(json : JSON::Builder)
- #tools : Array(Tool)
  Tool calling. Will be passed down as-is for providers implementing OpenAI's interface.
- #tools=(tools : Array(Tool))
  Tool calling. Will be passed down as-is for providers implementing OpenAI's interface.
- #top_a : Float32 | Nil
  Range: [0, 1]
- #top_a=(top_a : Float32 | Nil)
  Range: [0, 1]
- #top_k : Float32 | Nil
  Range: (1, Infinity). Not available for OpenAI models.
- #top_k=(top_k : Float32 | Nil)
  Range: (1, Infinity). Not available for OpenAI models.
- #top_logprobs : Int32 | Nil
- #top_logprobs=(top_logprobs : Int32 | Nil)
- #top_p : Float32 | Nil
  Range: (0, 1)
- #top_p=(top_p : Float32 | Nil)
  Range: (0, 1)
- #transforms : Array(String) | Nil
  See the "Prompt Transforms" section at openrouter.ai/docs/transforms.
- #transforms=(transforms : Array(String) | Nil)
  See the "Prompt Transforms" section at openrouter.ai/docs/transforms.
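As referenced above, a sketch of configuring sampling parameters and serializing with the #to_json(io : IO) overload. The parameter values and model name are illustrative only.

require "openrouter"

request = OpenRouter::CompletionRequest.new(
  "Summarize this paragraph.",
  model: "openai/gpt-4o"
)

# All sampling knobs are nilable; Float32 values take the _f32 suffix.
request.temperature = 0.7_f32
request.top_p = 0.9_f32      # Range: (0, 1)
request.max_tokens = 256     # see openrouter.ai/docs/parameters
request.stop = ["\n\n"]      # String | Array(String) | Nil
request.stream = false

# Serialize through the IO overload listed in the summary.
payload = String.build { |io| request.to_json(io) }
puts payload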
Constructor Detail

def self.new(messages : Array(Message), model : String | Nil = nil, tools : Array(Tool) = [] of Tool)
def self.new(prompt : String, model : String | Nil = nil, tools : Array(Tool) = [] of Tool)

Instance Method Detail

def models : Array(String) | Nil
def models=(models : Array(String) | Nil)
For models and route, see the "Model Routing" section at openrouter.ai/docs/model-routing.

def provider : String | Nil
def provider=(provider : String | Nil)
See the "Provider Routing" section: openrouter.ai/docs/provider-routing

def tools : Array(Tool)
def tools=(tools : Array(Tool))
Tool calling. Will be passed down as-is for providers implementing OpenAI's interface. For providers with custom interfaces, we transform and map the properties. Otherwise, we transform the tools into a YAML template. The model responds with an assistant message. See models supporting tool calling: openrouter.ai/models?supported_parameters=tools

def transforms : Array(String) | Nil
def transforms=(transforms : Array(String) | Nil)
See the "Prompt Transforms" section at openrouter.ai/docs/transforms.
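To make the tool-calling flow above concrete, a hedged sketch using #add_tool. The OpenRouter::Tool initializer arguments below are assumptions for illustration; its real signature lives on the Tool reference page.

require "openrouter"

request = OpenRouter::CompletionRequest.new(
  "What's the weather in Paris?",
  model: "openai/gpt-4o"
)

# Hypothetical Tool construction; the actual Tool initializer is
# documented on its own reference page, not here.
weather_tool = OpenRouter::Tool.new(
  name: "get_weather",
  description: "Return the current weather for a given city"
)
request.add_tool(weather_tool)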