module LLM::Adapter
Overview
A normalized adapter interface for LLM clients.
Implementations should return a String response (JSON text after any provider-specific cleanup).
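A minimal sketch of an implementing type, assuming `Messages` is an enumerable of objects exposing `#content` (the concrete shape of `Messages` is defined elsewhere in the library); `StubAdapter` and its behavior are illustrative only, not part of the library:

```crystal
# Illustrative only: StubAdapter is not part of the library.
class StubAdapter
  include LLM::Adapter

  # Return a JSON String, as the interface expects after any
  # provider-specific cleanup.
  def request(prompt : String, format : String = "json") : String
    %({"echo": #{prompt.to_json}})
  end

  # Assumes each message exposes #content; the real Messages type may differ.
  def request_messages(messages : Messages, format : String = "json") : String
    request(messages.map(&.content).join("\n"), format)
  end
end
```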
Defined in:
llm/adapter.cr

Instance Method Summary
- #request(prompt : String, format : String = "json") : String
  Send a single prompt and get a response as a String.
- #request_messages(messages : Messages, format : String = "json") : String
  Send chat-style messages (system/user) and get a response as a String.
- #request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
  Context-aware request.
- #supports_context? : Bool
  Whether this adapter supports server-side KV context reuse across calls.
Instance Method Detail
def request(prompt : String, format : String = "json") : String

Send a single prompt and get a response as a String.
def request_messages(messages : Messages, format : String = "json") : String

Send chat-style messages (system/user) and get a response as a String.
def request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String

Context-aware request. Adapters that support provider-side context can reuse it via a cache_key. The default implementation falls back to #request_messages without context reuse.
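A context-capable adapter might override this default roughly as follows. This is a sketch: `@contexts`, the handle type, and `send_with_context` are hypothetical placeholders, and the adapter's required `#request` / `#request_messages` implementations are omitted for brevity.

```crystal
# Hypothetical sketch; not part of the library.
class ContextAdapter
  include LLM::Adapter

  # cache_key => provider-side context handle (shape is provider-specific)
  @contexts = {} of String => String

  def supports_context? : Bool
    true
  end

  def request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
    if cache_key && (ctx = @contexts[cache_key]?)
      # Reuse the stored context instead of resending `system`.
      send_with_context(ctx, user, format) # hypothetical provider call
    else
      # No cached context yet: use the interface's default fallback
      # (request_messages without reuse); code that captures and stores a
      # new provider context handle would go here.
      super
    end
  end
end
```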
def supports_context? : Bool
Whether this adapter supports server-side KV context reuse across calls.
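On the caller side, this flag can steer how a conversation loop treats context. A sketch, assuming only the interface above; the helper name, prompts, and cache key are illustrative:

```crystal
# Resend less when the adapter can reuse server-side context.
def ask(adapter : LLM::Adapter, system : String, user : String) : String
  if adapter.supports_context?
    # Pass a stable key so the adapter can reuse its cached context.
    adapter.request_with_context(system, user, cache_key: "session-1")
  else
    # No reuse available: the default path resends everything each call.
    adapter.request_with_context(system, user)
  end
end
```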