
Dynamic batch sizes are not supported in tiny-yolov3 model #678

@rohitdavas

Description


Bug Report

Which model does this pertain to?

Tiny-yolov3 folder

Describe the bug

The model does not support batch sizes greater than 1; inference fails with the ONNX Runtime error below.

Error:

----------- testing start ----------
input data name : input_1, shape = (2, 3, 416, 416), type = <class 'numpy.ndarray'>
input data name : image_shape, shape = (2, 2), type = <class 'numpy.ndarray'>
2025-03-19 23:19:18.958568678 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running Squeeze node. Name:'TFNodes/yolo_evaluation_layer_1/Squeeze' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/squeeze.h:52 static onnxruntime::TensorShapeVector onnxruntime::SqueezeBase::ComputeOutputShape(const onnxruntime::TensorShape&, const onnxruntime::TensorShapeVector&) input_shape[i] == 1 was false. Dimension of input 0 must be 1 instead of 2. shape={2,2}
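
The Squeeze failure suggests the exported graph hard-codes a batch dimension of 1. A quick way to check (a sketch only; the file name "tiny-yolov3-11.onnx" is an assumption, substitute the actual model file from the Tiny-yolov3 folder) is to print the declared input shapes with the onnx package:

    import onnx

    # Load the exported model and print each graph input's declared dims.
    # A fixed dim value of 1 on the leading axis would explain the Squeeze error.
    model = onnx.load("tiny-yolov3-11.onnx")
    for inp in model.graph.input:
        dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
        print(inp.name, dims)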

Reproduction instructions

Run model inference with any batch size greater than 1 (see the code snippet below).

System Information

OS Platform and Distribution (e.g. Linux Ubuntu 16.04):
ONNX version (e.g. 1.6):
Backend/Runtime version (e.g. ONNX Runtime 1.1, PyTorch 1.2):

Provide a code snippet to reproduce your errors.

import numpy as np

    def test_call(self):
        for batch_size in [1, 2]:
            image = np.random.rand(batch_size, 3, 416, 416).astype(np.float32)
            image_shape = np.array([[416, 416]], dtype=np.float32)  # shape (1, 2)
            if batch_size != 1:
                # repeat the image shape once per image in the batch -> (batch_size, 2)
                image_shape = np.vstack([image_shape for _ in range(batch_size)])

            inputs = {
                self.input_names[0]: image,
                self.input_names[1]: image_shape,
            }
            print("----------- testing start ----------")
            for key, value in inputs.items():
                print(f"input data name : {key}, shape = {value.shape}, type = {type(value)}")

            # any forward wrapper that takes the inputs and returns the output dict
            output_dict = self.forward(inputs)

            for key, value in output_dict.items():
                print(f"output data name : {key}, shape = {value.shape}, type = {type(value)}")
...
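
For completeness, here is a self-contained version of the same reproduction that calls ONNX Runtime directly instead of the forward wrapper above. The model file name ("tiny-yolov3-11.onnx") is an assumption, and the input names ("input_1", "image_shape") are taken from the log above; adjust both to the actual model.

    import numpy as np
    import onnxruntime as ort

    # Assumed model file; substitute the file from the Tiny-yolov3 folder.
    sess = ort.InferenceSession("tiny-yolov3-11.onnx")

    batch_size = 2
    image = np.random.rand(batch_size, 3, 416, 416).astype(np.float32)
    image_shape = np.tile(np.array([[416, 416]], dtype=np.float32), (batch_size, 1))

    # Works for batch_size = 1; raises the Squeeze error above for batch_size > 1.
    outputs = sess.run(None, {"input_1": image, "image_shape": image_shape})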

Notes

  1. Runs correctly with batch size 1 (a per-image workaround based on this is sketched below).
  2. Fails with any batch size > 1.
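
Until the model supports dynamic batch sizes, a possible workaround (a sketch only, assuming the same session and inputs as in the ONNX Runtime snippet above) is to split the batch and run one image at a time:

    # Run the model per image and collect the outputs; each slice keeps
    # the leading batch dimension of 1 that the exported graph expects.
    per_image_outputs = []
    for i in range(batch_size):
        single = {
            "input_1": image[i : i + 1],            # shape (1, 3, 416, 416)
            "image_shape": image_shape[i : i + 1],  # shape (1, 2)
        }
        per_image_outputs.append(sess.run(None, single))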
