class LLM::OllamaAdapter

Superclass hierarchy:
- LLM::OllamaAdapter
- Reference
- Object
Overview
Adapter for Ollama (LLM::Ollama) with optional context reuse.
Included Modules
- LLM::Adapter
Defined in:
llm/adapter.cr

Constructors
Instance Method Summary
- #client : LLM::Ollama
- #request(prompt : String, format : String = "json") : String
  Send a single prompt and get a response as a String.
- #request_messages(messages : Messages, format : String = "json") : String
  Send chat-style messages (system/user) and get a response as a String.
- #request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
  Context-aware request.
- #supports_context? : Bool
  Whether this adapter supports server-side KV context reuse across calls.
Instance methods inherited from module LLM::Adapter:
- request(prompt : String, format : String = "json") : String
- request_messages(messages : Messages, format : String = "json") : String
- request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
- supports_context? : Bool
Constructor Detail
Instance Method Detail
def request(prompt : String, format : String = "json") : String

Description copied from module LLM::Adapter
Send a single prompt and get a response as a String.
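As a minimal sketch of a single-prompt call. The constructor's parameters are not documented on this page, so the zero-argument `LLM::OllamaAdapter.new` below is an assumption made purely for illustration:

```crystal
# Assumption: the constructor signature is not shown on this page;
# a zero-argument `new` is used only to obtain an instance.
adapter = LLM::OllamaAdapter.new

# Send one prompt; `format` defaults to "json".
response = adapter.request("List three primary colors as a JSON array")
puts response
```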
def request_messages(messages : Messages, format : String = "json") : String

Description copied from module LLM::Adapter
Send chat-style messages (system/user) and get a response as a String.
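The `Messages` type is not defined on this page; the sketch below assumes it is a sequence of role/content pairs, matching the system/user roles mentioned in the description, and again assumes a zero-argument constructor:

```crystal
# Assumptions: zero-argument construction, and Messages modeled as an
# array of role/content named tuples; the real definitions live
# elsewhere in the library.
adapter = LLM::OllamaAdapter.new

messages = [
  {role: "system", content: "You are a terse assistant."},
  {role: "user", content: "Summarize this adapter in one sentence."},
]
response = adapter.request_messages(messages)
```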
def request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
Description copied from module LLM::Adapter
Context-aware request. Adapters that support provider-side context can reuse it using a cache_key. Default implementation falls back to request_messages without context reuse.
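A hedged sketch of context reuse (constructor arguments assumed): passing the same `cache_key` across calls is what lets a context-capable adapter reuse provider-side state, while adapters without support silently fall back to `request_messages`:

```crystal
# Assumption: zero-argument construction is illustrative only.
adapter = LLM::OllamaAdapter.new
system = "You are a JSON-only classifier."

# Reusing one cache_key signals that provider-side context
# (e.g. a server-side KV cache) may be shared between requests.
first  = adapter.request_with_context(system, "Classify: apple", cache_key: "session-1")
second = adapter.request_with_context(system, "Classify: pear", cache_key: "session-1")
```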
def supports_context? : Bool
Description copied from module LLM::Adapter
Whether this adapter supports server-side KV context reuse across calls.
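Callers can branch on this flag to decide between the context-aware and plain request paths; a sketch, with instance construction again assumed:

```crystal
adapter = LLM::OllamaAdapter.new # constructor arguments assumed

user_prompt = "Extract the dates from this text."
result = if adapter.supports_context?
           # Reuse provider-side context keyed by a stable identifier.
           adapter.request_with_context(nil, user_prompt, cache_key: "doc-42")
         else
           adapter.request(user_prompt)
         end
```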