class Llamero::BasePrompt

Overview

Represents the collection of individual messages that compose an entire "prompt" series for interacting with an LLM.

# Example of a simple use case
prompt = Llamero::BasePrompt.new(system_prompt: "You are a helpful assistant that can answer questions.")
prompt.add_message("user", "What is the capital of the moon?")
response = model.chat(prompt, grammar_class: MyExpectedStructuredResponse.from_json)
# Example of a more complex use case

class MyBasePrompt < Llamero::BasePrompt
  def initialize
    super(system_prompt: "You are a helpful assistant that can answer questions.")
  end
end

prompt = MyBasePrompt.new
prompt.add_message("user", "What is the capital of the moon?")

response = model.chat(prompt, grammar_class: MyExpectedStructuredResponse.from_json)

It is recommended to set the system prompt via #system_prompt when initializing your prompt, rather than adding a system prompt message to the prompt chain. This allows you to re-use the same prompt chain with different system prompts without re-initializing it, which is especially useful in a Mixture of Experts (MoE) workflow.
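For example, one prompt chain can be replayed against several expert personas by swapping the system prompt between calls. This is a sketch only: the persona strings are made up, and `model` / `MyExpectedStructuredResponse` stand in for your own model and grammar class, as in the examples above.

```crystal
prompt = Llamero::BasePrompt.new
prompt.add_message("user", "Summarize the trade-offs of this design.")

# Re-use the same chain under different "experts" (hypothetical personas):
["You are a security reviewer.", "You are a performance engineer."].each do |persona|
  prompt.system_prompt = persona
  response = model.chat(prompt, grammar_class: MyExpectedStructuredResponse.from_json)
end
```

Because the system prompt lives outside the prompt chain, no messages need to be rebuilt between iterations.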

Defined in:

prompts/base_prompt.cr

Constructors

Instance Method Summary

Constructor Detail

def self.new(system_prompt : String = "", messages : Array(PromptMessage) = [] of PromptMessage) #

Initialize your prompt chain with a system prompt, or an array of existing PromptMessage objects


[View source]
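Both initialization styles look roughly like this. Note that the `PromptMessage` constructor signature is not documented on this page, so the `role:` / `content:` named arguments below are an assumption:

```crystal
# Initialize with just a system prompt:
prompt = Llamero::BasePrompt.new(system_prompt: "You are a concise assistant.")

# Or seed the chain with existing messages
# (PromptMessage's constructor arguments are assumed here):
messages = [
  Llamero::PromptMessage.new(role: "user", content: "Hello!"),
] of Llamero::PromptMessage

prompt = Llamero::BasePrompt.new(
  system_prompt: "You are a concise assistant.",
  messages: messages
)
```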

Instance Method Detail

def add_message(role : String, content : String) #

[View source]
def composed_prompt_chain_for_instruction_models : String #

The composed prompt chain in a format that can be used by LLMs, specifically chat-based or instruction models


[View source]
def composed_prompt_chain_for_instruction_models=(composed_prompt_chain_for_instruction_models : String) #

The composed prompt chain in a format that can be used by LLMs, specifically chat-based or instruction models


[View source]
def prompt_chain : Array(PromptMessage) #

The collection of messages that make up this prompt, in order. Does not include the system prompt


[View source]
def prompt_chain=(prompt_chain : Array(PromptMessage)) #

The collection of messages that make up this prompt, in order. Does not include the system prompt


[View source]
def system_prompt : String #

The system prompt that belongs to this collection of messages


[View source]
def system_prompt=(system_prompt : String) #

The system prompt that belongs to this collection of messages


[View source]
def to_llm_instruction_prompt_structure(system_prompt_opening_tag : String, system_prompt_closing_tag : String, user_prompt_opening_tag : String, user_prompt_closing_tag : String, unique_ending_token : String) #

Creates the prompt chain for the specific model parameters that are passed in.

This is intentionally decoupled from the model itself: the same prompt can be used across multiple models, adjusting the wrappers as necessary.


[View source]
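For instance, to target a Llama-2-style instruction format, the same prompt can be wrapped with that model family's tags. This is a sketch: the tag strings below are illustrative of the Llama-2 chat template and should be swapped for whatever your target model expects.

```crystal
prompt = Llamero::BasePrompt.new(system_prompt: "You are a helpful assistant.")
prompt.add_message("user", "What is the capital of the moon?")

# Llama-2-style wrappers; replace these strings for another model's template.
instruction_text = prompt.to_llm_instruction_prompt_structure(
  system_prompt_opening_tag: "<<SYS>>",
  system_prompt_closing_tag: "<</SYS>>",
  user_prompt_opening_tag: "[INST]",
  user_prompt_closing_tag: "[/INST]",
  unique_ending_token: "</s>"
)
```

Keeping the tags as arguments means switching models only changes this call site, not the prompt chain itself.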