Upsample issue encountered when relay.frontend.from_onnx(...)

Hi,

When converting my ONNX model to Relay IR with relay.frontend.from_onnx() in TVM, something goes wrong with its Upsample op.

The error log is as follows:
    Traceback (most recent call last):
      File "onnx_to_tvm.py", line 47, in <module>
        mod, params = relay.frontend.from_onnx(model, shape_dict, freeze_params=True)
      File "/code/tvm/python/tvm/relay/frontend/onnx.py", line 4044, in from_onnx
        mod, params = g.from_onnx(graph, opset)
      File "/code/tvm/python/tvm/relay/frontend/onnx.py", line 3817, in from_onnx
        op = self._convert_operator(op_name, inputs, attr, opset)
      File "/code/tvm/python/tvm/relay/frontend/onnx.py", line 3946, in _convert_operator
        sym = convert_map[op_name](inputs, attrs, self._params)
      File "/code/tvm/python/tvm/relay/frontend/onnx.py", line 1342, in _impl_v9
        out = _op.nn.upsampling(
      File "/code/tvm/python/tvm/relay/op/nn/nn.py", line 1351, in upsampling
        return _make.upsampling(data, scale_h, scale_w, layout, method, align_corners)
      File "/code/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 223, in __call__
        values, tcodes, num_args = _make_tvm_args(args, temp_args)
      File "/code/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 188, in _make_tvm_args
        raise TypeError("Don't know how to handle type %s" % type(arg))
    TypeError: Don't know how to handle type <class 'numpy.ndarray'>

The TVM version on our server is 0.8.0 and the ONNX version is 1.10.2. I also tried to install onnx==0.4.0, but that failed; it seems our Python version (3.8.x) is too new for it.
Is there any other way to solve this issue? Thanks.
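
For reference, the relevant part of the conversion script looks roughly like this (the input name and shape below are placeholders, not the real ones from my model):

    import onnx
    from tvm import relay

    model = onnx.load("model.onnx")
    # placeholder input name/shape; the real values come from the model's graph inputs
    shape_dict = {"input": (1, 3, 416, 416)}
    mod, params = relay.frontend.from_onnx(model, shape_dict, freeze_params=True)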

Hey itshan, can you share the model you used? This should be a simple error to solve.

Hi Andrew,

Thanks for your reply.

I found that the scale values in my ONNX model look like [[1], [1], [2], [2]], but in class Upsample(OnnxOpConverter) in xxx/frontend/onnx.py the scale values are expected to be [1, 1, 2, 2].

I can change a bit of code in Upsample(xxx) to fix it, but I'd like to hear your comments on this change, and whether there is a better solution. Thanks.
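
To make the idea concrete, the change I have in mind in the converter is roughly the following (a simplified sketch, not the exact TVM source; `scales` stands for the NumPy array extracted from the model):

    import numpy as np

    # In my model the scales arrive with shape (4, 1), i.e. [[1], [1], [2], [2]],
    # so scales[-2] / scales[-1] are still ndarrays and nn.upsampling rejects them.
    scales = np.asarray(scales).ravel()  # -> array([1., 1., 2., 2.])
    scale_h = float(scales[-2])          # plain Python floats pass through the FFI
    scale_w = float(scales[-1])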

Hey Itshan, sorry for taking a while to get back to you. Yeah, go ahead and create a PR and I can review it. You should probably just flatten the scale vector, since your scale value does not seem to fit the ONNX spec (which expects a rank-1 tensor). How did you convert the model?

Interestingly enough, according to https://github.com/onnx/onnx/blob/master/docs/Operators.md#Upsample, Upsample is supposed to be deprecated anyway.

Thanks, Andrew. Yes, flattening can also fix it.

But the Upsample op is commonly used in deep learning models, such as YOLO. Haven't you seen similar issues reported before?

Hmm, I’ve tuned a bunch of yolo models from onnx before and haven’t had this issue. There also isn’t an open github issue on it.

I would just open a PR that flattens the tensor beforehand and add a note explaining the situation; I don't think [[1], [1], [2], [2]] for scales fits the ONNX spec.
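
Alternatively, if you want a workaround without patching TVM, you could flatten the scales initializer in the ONNX model itself before conversion. A rough sketch, assuming the scales are stored as a graph initializer (names here are illustrative):

    import onnx
    from onnx import numpy_helper

    model = onnx.load("model.onnx")
    # collect the names of the tensors feeding the 'scales' input of Upsample nodes
    scale_names = {
        node.input[1]
        for node in model.graph.node
        if node.op_type == "Upsample" and len(node.input) > 1
    }
    for init in model.graph.initializer:
        arr = numpy_helper.to_array(init)
        if init.name in scale_names and arr.ndim > 1:
            # reshape e.g. (4, 1) -> (4,) so it matches the rank-1 tensor the spec expects
            init.CopyFrom(numpy_helper.from_array(arr.ravel(), name=init.name))
    onnx.save(model, "model_fixed.onnx")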

Some more information that might be helpful:

Actually, my YOLO model is trained with Darknet. I then convert it to Caffe first and then to ONNX. Maybe the upsample scales end up as [[1], [1], [2], [2]] during the Caffe-to-ONNX conversion.