class Llamero::BaseEmbeddingModel
- Llamero::BaseEmbeddingModel
- Reference
- Object
Overview
The primary base class for all embedding models.
Defined in:
embeddings/base_embedding_model.cr
Constant Summary
-
Log = ::Log.for("embeddings")
Creates a logger specifically for the embeddings class
Constructors
-
.new(model_name : String, grammar_root_path : Path | Nil = nil, lora_root_path : Path | Nil = nil, model_root_path : Path | Nil = nil, enable_logging : Bool = false)
Override any of the default values that are set in the child class
Instance Method Summary
-
#create_embedding_with(string_to_create_embedding_with : String, timeout : Time::Span = Time::Span.new(minutes: 2), max_retries : Int32 = 5) : Array(Float64)
This is the primary method for creating embeddings.
-
#create_embeddings_with(array_of_strings_to_create_embeddings_with : Array(String), timeout : Time::Span = Time::Span.new(minutes: 2), max_retries : Int32 = 5) : Array(Array(Float64))
Creates embeddings for an array of strings, returning one Array(Float64) embedding per input string.
-
#embeddings_created : Array(Array(Float64))
The embeddings that are created by the embedding model
-
#embeddings_created=(embeddings_created : Array(Array(Float64)))
The embeddings that are created by the embedding model
-
#enable_logging : Bool
Whether to enable logging for the model.
-
#enable_logging=(enable_logging : Bool)
Whether to enable logging for the model.
-
#logging_output_from_embedding_model : IO
The logging output from running the embedding model.
-
#logging_output_from_embedding_model=(logging_output_from_embedding_model : IO)
The logging output from running the embedding model.
-
#lora_root_path : Path
The directory where any lora filters will be located.
-
#lora_root_path=(lora_root_path : Path)
The directory where any lora filters will be located.
-
#model_name : String
This should be the full filename of the model, including the .gguf file extension.
-
#model_name=(model_name : String)
This should be the full filename of the model, including the .gguf file extension.
-
#model_root_path : Path
The directory where the model files will be located.
-
#model_root_path=(model_root_path : Path)
The directory where the model files will be located.
Constructor Detail
.new(model_name : String, grammar_root_path : Path | Nil = nil, lora_root_path : Path | Nil = nil, model_root_path : Path | Nil = nil, enable_logging : Bool = false)
Override any of the default values that are set in the child class.
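A minimal construction sketch. The filename matches the example given for #model_name below; the directory is a placeholder standing in for the default models path, and enable_logging is optional:
embedding_model = Llamero::BaseEmbeddingModel.new(
  model_name: "meta-llama-3-8b-instruct-Q6_K.gguf",
  model_root_path: Path["/Users/your_username/models"], # placeholder directory
  enable_logging: true
)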
Instance Method Detail
#create_embedding_with(string_to_create_embedding_with : String, timeout : Time::Span = Time::Span.new(minutes: 2), max_retries : Int32 = 5) : Array(Float64)
This is the primary method for creating embeddings. By default it returns an Array(Float64).
new_embedding = create_embedding_with("Hello, world!")
new_embedding.class # => Array(Float64)
Default Timeout: 2 minutes
Default Max Retries: 5
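A sketch of overriding those defaults with the documented timeout and max_retries parameters, reusing the embedding_model instance from the constructor example above; the 30-second timeout is purely illustrative:
embedding = embedding_model.create_embedding_with(
  "Hello, world!",
  timeout: Time::Span.new(seconds: 30), # shorter than the 2-minute default
  max_retries: 3
)
embedding.class # => Array(Float64)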
#create_embeddings_with(array_of_strings_to_create_embeddings_with : Array(String), timeout : Time::Span = Time::Span.new(minutes: 2), max_retries : Int32 = 5) : Array(Array(Float64))
Creates embeddings for an array of strings, returning one Array(Float64) embedding per input string.
Default Timeout: 2 minutes
Default Max Retries: 5
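A batched-call sketch, again assuming the embedding_model instance from the constructor example above:
documents = [
  "first document to embed",
  "second document to embed"
]
embeddings = embedding_model.create_embeddings_with(documents)
embeddings.size        # => 2
embeddings.first.class # => Array(Float64)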
#embeddings_created : Array(Array(Float64))
The embeddings that are created by the embedding model.
#embeddings_created=(embeddings_created : Array(Array(Float64)))
The embeddings that are created by the embedding model.
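The description above implies the model keeps the vectors it produces in this accessor. Assuming that behavior (it is not stated explicitly here), a read-back might look like:
embedding_model.create_embeddings_with(["first document", "second document"])
embedding_model.embeddings_created.size # => 2, if results are stored in this accessor after the run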
#enable_logging : Bool
Whether to enable logging for the model. This is useful for debugging and understanding the model's behavior.
Default: false
#enable_logging=(enable_logging : Bool)
Whether to enable logging for the model. This is useful for debugging and understanding the model's behavior.
Default: false
#logging_output_from_embedding_model : IO
The logging output from running the embedding model. This is not the same output as the Llamero code; it comes from the embedding binary itself.
#logging_output_from_embedding_model=(logging_output_from_embedding_model : IO)
The logging output from running the embedding model. This is not the same output as the Llamero code; it comes from the embedding binary itself.
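Assuming the IO is an in-memory buffer such as IO::Memory (an assumption; the concrete type is not documented here), the binary's output could be inspected after a run:
embedding_model.enable_logging = true
embedding_model.create_embedding_with("Hello, world!")
puts embedding_model.logging_output_from_embedding_model.to_s # buffer contents, if this is an IO::Memory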
#lora_root_path : Path
The directory where any lora filters will be located. This is optional, but if you want to use lora filters, you will need to specify it. Lora filters are specific to the model they were fine-tuned from.
Currently unimplemented.
Default: /Users/#{`whoami`.strip}/loras
#lora_root_path=(lora_root_path : Path)
The directory where any lora filters will be located. This is optional, but if you want to use lora filters, you will need to specify it. Lora filters are specific to the model they were fine-tuned from.
Currently unimplemented.
Default: /Users/#{`whoami`.strip}/loras
#model_name : String
This should be the full filename of the model, including the .gguf file extension.
Example: meta-llama-3-8b-instruct-Q6_K.gguf
#model_name=(model_name : String)
This should be the full filename of the model, including the .gguf file extension.
Example: meta-llama-3-8b-instruct-Q6_K.gguf
#model_root_path : Path
The directory where the model files will be located. This is required.
Default: /Users/#{`whoami`.strip}/models
#model_root_path=(model_root_path : Path)
The directory where the model files will be located. This is required.
Default: /Users/#{`whoami`.strip}/models
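Re-pointing the model at a different location via the documented setters; the directory below is a placeholder:
embedding_model.model_root_path = Path["/opt/llm/models"] # placeholder directory
embedding_model.model_name = "meta-llama-3-8b-instruct-Q6_K.gguf"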