class TensorflowLite::Image::ObjectDetection
TensorflowLite::Image::ObjectDetection < Reference < Object
Included Modules:
TensorflowLite::Image::Common
Defined in:
tflite_image/object_detection.cr

Instance Method Summary
- #adjust(detections : Array(Output), target_width : Int32, target_height : Int32, offset_left : Int32, offset_top : Int32) : Array(Output)
  adjust the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio)
- #adjust(detections : Array(Output), image : Canvas, offset_left : Int32, offset_top : Int32) : Array(Output)
  adjust the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio)
- #markup(image : Canvas, detections : Array(Output), minimum_score : Float32 = 0.3_f32, font : PCFParser::Font | Nil = nil) : Canvas
  add the detection details to an image
- #process(image : Canvas) : Tuple(Canvas, Array(Output))
  attempts to classify the objects in the image; assumes the image has already been prepared
Instance methods inherited from module TensorflowLite::Image::Common:
- client : Client
- detection_adjustments(image : Canvas, scale_mode : Scale = @scaling_mode)
- detection_adjustments(image_width : Int32, image_height : Int32, scale_mode : Scale = @scaling_mode)
- input_format : Format
- labels : Array(String)
- resolution : Tuple(Int32, Int32)
- run(canvas : Canvas, scale_mode : Scale = @scaling_mode, resize_method : StumpyResize::InterpolationMethod = :bilinear)
- scaling_mode : Scale
- scaling_mode=(scaling_mode : Scale)
Constructor methods inherited from module TensorflowLite::Image::Common:
- new(client : Client, labels : Array(String) | Nil = nil, scaling_mode : Scale = DEFAULT_SCALE_MODE, input_format : Format | Nil = nil, output_format : Format | Nil = nil)
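A minimal construction sketch. The TensorflowLite::Client initializer shown here is an assumption; check the tensorflow_lite shard for the exact signature.

```crystal
require "tflite_image"

# assumption: the tensorflow_lite Client wraps the interpreter and is built
# from a model file path; the exact initializer may differ in your version
client = TensorflowLite::Client.new("./models/ssd_mobilenet_v2.tflite")

# labels are optional; the inherited DEFAULT_SCALE_MODE applies unless scaling_mode is passed
detector = TensorflowLite::Image::ObjectDetection.new(client, labels: ["person", "bicycle", "car"])
```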
Instance Method Detail
#adjust(detections : Array(Output), target_width : Int32, target_height : Int32, offset_left : Int32, offset_top : Int32) : Array(Output)

Adjusts the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio).
You can run detection_adjustments just once and then apply the result to each invocation's detections using this function.
#adjust(detections : Array(Output), image : Canvas, offset_left : Int32, offset_top : Int32) : Array(Output)

Adjusts the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio).
You can run detection_adjustments just once and then apply the result to each invocation's detections using this function.
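A sketch of that reuse with a hypothetical 1920x1080 source. The tuple shape returned by detection_adjustments is an assumption here, and detections would come from an earlier #process call.

```crystal
# compute the scaling offsets for the source geometry once
# (assumed return shape: {target_width, target_height, offset_left, offset_top})
target_width, target_height, offset_left, offset_top = detector.detection_adjustments(1920, 1080)

# then map each invocation's detections back onto the original 1920x1080 frame
adjusted = detector.adjust(detections, target_width, target_height, offset_left, offset_top)
```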
#markup(image : Canvas, detections : Array(Output), minimum_score : Float32 = 0.3_f32, font : PCFParser::Font | Nil = nil) : Canvas

Adds the detection details to an image.
If marking up the original image, you'll need to take into account how it was scaled (see #adjust and detection_adjustments).
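For example, using the canvas and detections returned by #process (shown below); StumpyPNG.write is assumed here for saving, and 0.3 matches the default threshold.

```crystal
# only draw detections the model scored at 30% confidence or higher
annotated = detector.markup(canvas, detections, minimum_score: 0.3_f32)
StumpyPNG.write(annotated, "./annotated.png")
```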
#process(image : Canvas) : Tuple(Canvas, Array(Output))

Attempts to classify the objects in the image; assumes the image has already been prepared (i.e. already scaled to the model's input resolution).
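A hedged end-to-end sketch using the detector constructed above. StumpyPNG.read is assumed for loading the canvas, and the file is expected to already match the model's input size; otherwise use the inherited run helper, which scales the canvas for you.

```crystal
require "stumpy_png"

# assumption: the source file already matches the model input size (see #resolution)
image = StumpyPNG.read("./photo_300x300.png")

canvas, detections = detector.process(image)
detections.each { |detection| puts detection.inspect }
```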