
Oriented Object Detection

This example demonstrates a YoloV11-OBB oriented object detection model running on a Hailo-8, Hailo-8L, or Hailo-10H device. It receives a HEF and images/video/camera as input, and returns the image/video with annotations of detected objects and their oriented bounding boxes. Oriented object detection extends traditional bounding box detection by adding a rotation angle, making it ideal for:

  • Aerial/satellite imagery
  • Document analysis
  • Rotated text detection
  • Any scenario where objects may appear at arbitrary angles
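To sketch the idea: an oriented box adds an angle to the usual (cx, cy, w, h) representation, and its four corners can be recovered with a rotation matrix. The helper below is purely illustrative and not part of the application code:

```python
import numpy as np

def obb_corners(cx, cy, w, h, angle_rad):
    """Return the 4 corners of an oriented box given center, size, and angle."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    # Half-extents of the unrotated box; corner order: TL, TR, BR, BL
    dx, dy = w / 2.0, h / 2.0
    offsets = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
    # Rotate each corner offset, then translate to the box center
    return offsets @ rot.T + np.array([cx, cy])

# With angle = 0 this reduces to an ordinary axis-aligned box
print(obb_corners(10, 10, 4, 2, 0.0))
```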

output example

Requirements

  • hailo_platform:
    • 4.23.0 (for Hailo-8 devices)
    • 5.3.0 (for Hailo-10H devices)
  • opencv-python<=4.10.0.84
  • numpy<2.0
  • scipy
  • lap
  • cython_bbox
  • python-dotenv
  • PyYAML

Supported Models

This example currently supports only the YoloV11-OBB model.

Linux Installation

Run this app in one of two ways:

  1. Standalone installation in a clean virtual environment (no TAPPAS required) — see Option 1
  2. From an installed hailo-apps repository — see Option 2

Option 1: Standalone Installation

To avoid compatibility issues, it's recommended to use a clean virtual environment.

  1. Install PCIe driver and PyHailoRT

    • Download and install the PCIe driver and PyHailoRT from the Hailo website
    • To install the PyHailoRT whl:
    pip install hailort-X.X.X-cpXX-cpXX-linux_x86_64.whl
  2. Clone the repository:

    git clone https://github.com/hailo-ai/hailo-apps.git
    cd hailo-apps/python/standalone_apps/oriented_object_detection
  3. Install dependencies:

    pip install -r requirements.txt

Option 2: Inside an Installed hailo-apps Repository

If you installed the full repository:

git clone https://github.com/hailo-ai/hailo-apps.git
cd hailo-apps
sudo ./install.sh
source setup_env.sh

The app is then ready to use:

cd hailo-apps/python/standalone_apps/oriented_object_detection

Windows Installation

To avoid compatibility issues, it's recommended to use a clean virtual environment.

  1. Install HailoRT (MSI) + PyHailoRT

    1. Download and install the HailoRT Windows MSI from the Hailo website.

    2. During the installation, make sure PyHailoRT is selected (in the MSI “Custom Setup” tree).

    3. After installation, the PyHailoRT wheel is located under: C:\Program Files\HailoRT\python

    4. Create and activate a virtual environment:

    python -m venv wind_venv
    .\wind_venv\Scripts\Activate.ps1
    5. Install the PyHailoRT wheel from the MSI installation folder:
    pip install "C:\Program Files\HailoRT\python\hailort-*.whl"
  2. Clone the repository:

    git clone https://github.com/hailo-ai/hailo-apps.git
    cd hailo-apps\hailo_apps\python\standalone_apps\oriented_object_detection
  3. Install dependencies:

    pip install -r requirements.txt
    

Run

After completing either installation option, run from the application folder:

python oriented_object_detection.py -n <model_path> -i <input_image_path> -l <label_file_path> -b <batch_size>

The output results will be saved under a folder named output, or in the directory specified by --output-dir.

Arguments

  • --hef-path, -n:
    • A model name (e.g., yolo11s_obb) → the script will automatically download and resolve the correct HEF for your device.
    • A file path to a local HEF → the script will use the specified network directly.
  • -i, --input:
    • An input source such as an image (bus.jpg), a video (video.mp4), a directory of images, or usb to auto-select the first available USB camera.
      • On Linux, you can also use /dev/videoX (e.g., /dev/video0) to select a specific camera.
      • On Windows, you can also use a camera index (0, 1, 2, ...) to select a specific camera.
      • On Raspberry Pi, you can also use rpi to enable the Raspberry Pi camera.
    • A predefined input name from resources_config.yaml (e.g., bus, street).
      • If you choose a predefined name, the input will be automatically downloaded if it doesn't already exist.
      • Use --list-inputs to display all available predefined inputs.
  • -b, --batch-size: [optional] Number of images in one batch. Defaults to 1.
  • -l, --labels: [optional] Path to a text file containing class labels. If not provided, default COCO labels are used.
  • -s, --save-output: [optional] Save the output of the inference from a stream.
  • -o, --output-dir: [optional] Directory where output images/videos will be saved.
  • -cr, --camera-resolution: [optional][Camera only] Input resolution: sd (640x480), hd (1280x720), or fhd (1920x1080).
  • -or, --output-resolution: [optional] Set the output size using sd|hd|fhd, or pass a custom width/height (e.g., --output-resolution 1920 1080).
  • --show-fps: [optional] Display FPS performance metrics for video/camera input.
  • --no-display: [optional] Run without opening a display window. Useful for headless or performance testing.
  • --video-unpaced: [optional] Process video input as fast as possible without respecting the original video FPS (no pacing).
  • -t, --time-to-run: [optional] Maximum runtime in seconds. Stops the application after the specified duration.
  • -f, --frame-rate: [optional][Camera only] Override the camera input framerate.
  • --list-models: [optional] Print all supported models for this application (from resources_config.yaml) and exit.
  • --list-inputs: [optional] Print the available predefined input resources (images/videos) defined in resources_config.yaml for this application, then exit.

For more information:

./oriented_object_detection.py -h

Example

Inference on a USB camera stream

./oriented_object_detection.py -n ./yolo11s_obb.hef -i usb

Inference with tracking on a USB camera stream

./oriented_object_detection.py -n ./yolo11s_obb.hef -i usb

Inference with tracking and motion trail visualization

DRAW_TRAIL=1 ./oriented_object_detection.py -n ./yolo11s_obb.hef -i usb

Inference on a USB camera stream with a custom frame rate

./oriented_object_detection.py -n ./yolo11s_obb.hef -i usb -f 30

Inference on a video

./oriented_object_detection.py -n ./yolo11s_obb.hef -i full_mov_slow.mp4

Inference on an image

./oriented_object_detection.py -n ./yolo11s_obb.hef -i bus.jpg

Inference on a folder of images

./oriented_object_detection.py -n ./yolo11s_obb.hef -i input_folder

Visualization Configuration

The application supports flexible configuration of how detection results are visualized. These settings can be modified in the configuration file to adjust the appearance of detection outputs.

Example Configuration:

{
  "visualization_params": {
    "score_th": 0.35,
    "max_boxes_to_draw": 500
  },
  "oriented_postprocess": {
    "obb_model_input_map": {
      "yolo11s_obb_640x640_simp/conv53": "/model.23/cv2.0/cv2.0.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv54": "/model.23/cv4.0/cv4.0.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv57": "/model.23/cv3.0/cv3.0.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv67": "/model.23/cv2.1/cv2.1.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv68": "/model.23/cv4.1/cv4.1.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv71": "/model.23/cv3.1/cv3.1.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv85": "/model.23/cv2.2/cv2.2.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv86": "/model.23/cv4.2/cv4.2.2/Conv_output_0", 
      "yolo11s_obb_640x640_simp/conv89": "/model.23/cv3.2/cv3.2.2/Conv_output_0" 
    },
    "img_size": 640,
    "cls_num": 15,
    "scores_th": 0.375,
    "nms_iou_th": 0.25
  }
}
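As an illustration of how obb_model_input_map is consumed (a minimal sketch, not the application's actual loading code), each HEF output layer name keys the original ONNX node it corresponds to:

```python
import json

# Illustrative fragment of the configuration shown above
cfg = json.loads("""
{
  "oriented_postprocess": {
    "obb_model_input_map": {
      "yolo11s_obb_640x640_simp/conv53": "/model.23/cv2.0/cv2.0.2/Conv_output_0"
    },
    "img_size": 640,
    "cls_num": 15
  }
}
""")

# Resolve a HEF output layer name to its original ONNX node name
input_map = cfg["oriented_postprocess"]["obb_model_input_map"]
onnx_node = input_map["yolo11s_obb_640x640_simp/conv53"]
print(onnx_node)  # the matching ONNX Conv output node
```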

Parameter Descriptions:

Visualization Parameters:

  • score_th: Minimum confidence score required to display a detected object.
  • max_boxes_to_draw: Maximum number of detected objects to display per frame.

Oriented Postprocess Parameters:

  • obb_model_input_map: Mapping from the HEF output layer names to the original ONNX node names, used to identify the class/box/angle heads.
  • img_size: Input image size (width and height) expected by the model.
  • cls_num: Number of object classes the model can detect.
  • scores_th: Confidence threshold for filtering detections during post-processing.
  • nms_iou_th: Intersection over Union (IoU) threshold for Non-Maximum Suppression to eliminate duplicate detections.
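To show how the thresholds interact (a minimal sketch under assumed array shapes, not the application's actual postprocess), detections can be filtered by scores_th and then capped at max_boxes_to_draw for display:

```python
import numpy as np

def filter_detections(boxes, scores, score_th=0.375, max_boxes=500):
    """Keep detections above the confidence threshold, highest score first,
    capped at max_boxes. `boxes` is (N, 5): cx, cy, w, h, angle."""
    keep = scores >= score_th
    boxes, scores = boxes[keep], scores[keep]
    # Sort surviving detections by descending score and cap the count
    order = np.argsort(scores)[::-1][:max_boxes]
    return boxes[order], scores[order]

boxes = np.zeros((4, 5))
scores = np.array([0.9, 0.2, 0.5, 0.4])
kept_boxes, kept_scores = filter_detections(boxes, scores, score_th=0.375, max_boxes=2)
print(kept_scores)  # [0.9 0.5]
```

In the real pipeline, rotated NMS with nms_iou_th would run between the threshold step and the display cap.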

Additional Notes

  • The example was tested only with HailoRT v4.23.0 (Hailo-8) and v5.3.0 (Hailo-10H)
  • The example expects a HEF whose output layer names match those in the configuration JSON.
  • Images are only supported in the following formats: .jpg, .jpeg, .png, or .bmp
  • The number of input images should be divisible by the batch size
  • For any issues, open a post on the Hailo Community
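The image-format and batch-divisibility constraints can be checked up front. The helper below is hypothetical and not part of the application:

```python
from pathlib import Path

# Formats listed in the notes above
SUPPORTED = {".jpg", ".jpeg", ".png", ".bmp"}

def collect_images(folder, batch_size):
    """Gather supported images and verify the count divides evenly into batches."""
    images = sorted(p for p in Path(folder).iterdir()
                    if p.suffix.lower() in SUPPORTED)
    if len(images) % batch_size != 0:
        raise ValueError(
            f"{len(images)} images is not divisible by batch size {batch_size}")
    return images
```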

Disclaimer

This code example is provided by Hailo solely on an “AS IS” basis and “with all faults”. No responsibility or liability is accepted or shall be imposed upon Hailo regarding the accuracy, merchantability, completeness or suitability of the code example. Hailo shall not have any liability or responsibility for errors or omissions in, or any business decisions made by you in reliance on this code example or any part of it. If an error occurs when running this example, please open a ticket in the "Issues" tab.

This example was tested on specific versions, and we can only guarantee the expected results using the exact versions mentioned above in the exact environment. The example might work with other versions, environments, or HEF files, but there is no guarantee that it will.