module TensorflowLite::Image
Defined in:
tflite_image.cr
tflite_image/detection/classification.cr
tflite_image/image_offset_calculations.cr
Constant Summary
- DEFAULT_SCALE_MODE = Scale::Fit
Class Method Summary
- .adjust(detections : Array, target_width : Int32, target_height : Int32, offset_left : Int32, offset_top : Int32)
  adjust the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio)
- .adjust(detections : Array, image : Canvas | FFmpeg::Frame, offset_left : Int32, offset_top : Int32)
  adjust the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio)
- .calculate_boxing_offset(original_width : Int32, original_height : Int32, target_width : Int32, target_height : Int32) : Tuple(Int32, Int32)
  how much do we need to adjust detections if we scaled to fit
- .calculate_cropping_offset(original_width : Int32, original_height : Int32, target_width : Int32, target_height : Int32) : Tuple(Int32, Int32)
  how much do we need to adjust detections if we scaled to cover
- .markup(image : Canvas, detections : Array, minimum_score : Float32 = 0.3_f32, font : PCFParser::Font | Nil = nil, color = StumpyPNG::RGBA::BLACK) : Canvas
  add the detection details to an image (a usage sketch appears at the end of this page)
Class Method Detail
.adjust(detections : Array, target_width : Int32, target_height : Int32, offset_left : Int32, offset_top : Int32)
adjust the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio); you can run detection_adjustments just once and then apply them to the detections of each invocation using this function
.adjust(detections : Array, image : Canvas | FFmpeg::Frame, offset_left : Int32, offset_top : Int32)
adjust the detections so they can be applied directly to the source image (or a scaled version in the same aspect ratio); you can run detection_adjustments just once and then apply them to the detections of each invocation using this function
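As a minimal sketch of that flow, assuming `detections` is the array returned by one of this shard's detection models and the frame was scaled with the default Scale::Fit mode before inference. The dimensions are illustrative, and which values map to the "original" and "target" parameters should be confirmed against your pipeline:

require "tflite_image"

# illustrative sizes: a 1920x1080 source frame fed to a 640x640 model input
offset_left, offset_top = TensorflowLite::Image.calculate_boxing_offset(1920, 1080, 640, 640)

# map the detections back onto the full-size source frame
adjusted = TensorflowLite::Image.adjust(detections, 1920, 1080, offset_left, offset_top)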
.calculate_boxing_offset(original_width : Int32, original_height : Int32, target_width : Int32, target_height : Int32) : Tuple(Int32, Int32)
how much do we need to adjust detections if we scaled to fit
.calculate_cropping_offset(original_width : Int32, original_height : Int32, target_width : Int32, target_height : Int32) : Tuple(Int32, Int32)
how much do we need to adjust detections if we scaled to cover
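To contrast the two offset calculations, a small sketch with made-up dimensions; the exact axis and sign conventions of the returned tuple are not specified on this page and should be checked against the implementation:

# scaling a 1280x720 frame into a 640x640 model input

# offsets to apply when the frame was scaled to fit (letterboxed)
fit_left, fit_top = TensorflowLite::Image.calculate_boxing_offset(1280, 720, 640, 640)

# offsets to apply when the frame was scaled to cover (cropped)
cover_left, cover_top = TensorflowLite::Image.calculate_cropping_offset(1280, 720, 640, 640)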
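Finally, a minimal sketch of .markup, assuming `detections` have already been adjusted to the canvas dimensions; the file paths and score threshold are illustrative:

require "stumpy_png"
require "tflite_image"

canvas = StumpyPNG.read("./frame.png")

# draw the detection details onto the canvas, skipping low-confidence results
annotated = TensorflowLite::Image.markup(canvas, detections, minimum_score: 0.4_f32)

StumpyPNG.write(annotated, "./frame_annotated.png")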