class LLM::OllamaAdapter

Overview

Adapter for Ollama (LLM::Ollama) with optional context reuse.

Included Modules

LLM::Adapter
Defined in:

llm/adapter.cr

Constructors

Instance Method Summary

#client : LLM::Ollama
#request(prompt : String, format : String = "json") : String
#request_messages(messages : Messages, format : String = "json") : String
#request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
#supports_context? : Bool
Instance methods inherited from module LLM::Adapter

request(prompt : String, format : String = "json") : String
request_messages(messages : Messages, format : String = "json") : String
request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String
supports_context? : Bool

Constructor Detail

def self.new(client : LLM::Ollama) #


Instance Method Detail

def client : LLM::Ollama #

Returns the underlying LLM::Ollama client used by this adapter.
def request(prompt : String, format : String = "json") : String #
Description copied from module LLM::Adapter

Send a single prompt and get a response as a String.
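A minimal usage sketch, assuming a reachable Ollama server; the `model` argument to `LLM::Ollama.new` is an assumption and may differ from the real constructor:

```crystal
require "llm"

# Hypothetical constructor arguments; check LLM::Ollama for the real ones.
client  = LLM::Ollama.new(model: "llama3")
adapter = LLM::OllamaAdapter.new(client)

# One prompt in, one String out; format defaults to "json".
answer = adapter.request(%(Respond with {"ok": true}))
puts answer
```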


def request_messages(messages : Messages, format : String = "json") : String #

For Ollama, messages are flattened to "system\n\nuser" when context reuse isn't explicitly used.
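A sketch of the flattening behavior; the `Messages` construction below is an assumption about its API, not the documented interface:

```crystal
# Constructor arguments and the Messages builder API are assumed here.
adapter = LLM::OllamaAdapter.new(LLM::Ollama.new(model: "llama3"))

messages = LLM::Messages.new
messages.system "You are a terse assistant."
messages.user "List three Crystal keywords."

# Without explicit context reuse, Ollama receives the single prompt:
#   "You are a terse assistant.\n\nList three Crystal keywords."
reply = adapter.request_messages(messages, format: "json")
```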


def request_with_context(system : String | Nil, user : String, format : String = "json", cache_key : String | Nil = nil) : String #

Passes the call straight through to the underlying context-aware API so server-side context can be reused across calls.
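A sketch of context reuse across calls, guarded by `supports_context?`; the `cache_key` value and the constructor arguments are illustrative assumptions:

```crystal
# Hypothetical constructor arguments.
adapter = LLM::OllamaAdapter.new(LLM::Ollama.new(model: "llama3"))

if adapter.supports_context?
  # Reusing the same cache_key lets the server keep its KV context
  # between calls instead of re-processing the shared system prompt.
  first  = adapter.request_with_context("You are a JSON API.", "First question", cache_key: "chat-1")
  second = adapter.request_with_context("You are a JSON API.", "A follow-up", cache_key: "chat-1")
end
```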


def supports_context? : Bool #
Description copied from module LLM::Adapter

Whether this adapter supports server-side KV context reuse across calls.

