class LLM::OllamaAdapter

LLM::OllamaAdapter < Reference < Object
Overview
Adapter for Ollama (LLM::Ollama) with optional context reuse.
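A minimal usage sketch. The constructor arguments below (for both LLM::Ollama and LLM::OllamaAdapter) are assumptions, as neither signature appears on this page; only #request itself is documented here.

# Assumed setup: constructor arguments are hypothetical and may differ
# in the actual source.
client  = LLM::Ollama.new("http://localhost:11434", "llama3")
adapter = LLM::OllamaAdapter.new(client)

# Send a single prompt; `format` defaults to "json".
response = adapter.request("Return {\"ok\": true} as JSON.")
puts response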
Included Modules
- LLM::Adapter
Defined in:
llm/adapter.cr

Constructors
Instance Method Summary
- #client : LLM::Ollama
- #request(prompt : String, format : String = "json") : String
  Send a single prompt and get a response as a String.
- #request_messages(messages : Messages, format : String = "json") : String
  For Ollama, messages are flattened to "system\n\nuser" when context reuse isn't explicitly used.
- #request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
  Pass-through to the underlying context-aware API for maximum efficiency.
- #supports_context? : Bool
  Whether this adapter supports server-side KV context reuse across calls.
Instance methods inherited from module LLM::Adapter
- request(prompt : String, format : String = "json") : String
- request_messages(messages : Messages, format : String = "json") : String
- request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
- supports_context? : Bool
Constructor Detail
Instance Method Detail
def client : LLM::Ollama

def request(prompt : String, format : String = "json") : String
Description copied from module LLM::Adapter
Send a single prompt and get a response as a String.
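A one-line usage sketch (adapter construction as in the Overview example above):

answer = adapter.request("List three Crystal keywords.")  # format defaults to "json"
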
def request_messages(messages : Messages, format : String = "json") : String
For Ollama, messages are flattened to "system\n\nuser" when context reuse isn't explicitly used.
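A sketch of the flattening described above, assuming Messages collects role/content pairs (its API is not shown on this page and is hypothetical here):

# Hypothetical Messages usage; only the flattening rule above is documented.
messages = Messages.new
messages.add(:system, "You are a terse assistant.")
messages.add(:user, "Ping?")

# Without explicit context reuse this is sent as one prompt:
#   "You are a terse assistant.\n\nPing?"
reply = adapter.request_messages(messages)
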
def request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
Pass-through to the underlying context-aware API for maximum efficiency.
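A sketch of reusing one cache_key so repeated calls with the same system prompt can reuse the server-side context:

# The same cache_key across calls lets Ollama keep the evaluated
# system prompt's KV context warm instead of re-evaluating it each time.
first  = adapter.request_with_context("You grade essays.", "Essay one ...", "json", "grader")
second = adapter.request_with_context("You grade essays.", "Essay two ...", "json", "grader")
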
def supports_context? : Bool
Description copied from module LLM::Adapter
Whether this adapter supports server-side KV context reuse across calls.
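A guard pattern for adapter-agnostic callers (sketch only; system_prompt and user_prompt are placeholder variables):

if adapter.supports_context?
  adapter.request_with_context(system_prompt, user_prompt, "json", "my-task")
else
  # Mirror the "system\n\nuser" flattening the adapter performs for
  # plain message requests.
  adapter.request("#{system_prompt}\n\n#{user_prompt}")
end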