Relay.build_module.create_executor returns an array full of NaNs

Hello everyone. I'm new to TVM and I'm trying to compile an ONNX model by replicating the code from the tutorial. I'm running it on Colab:

import numpy as np
import tvm
from tvm import relay

# onnx_model and the input tensor x are loaded earlier, as in the tutorial
target = "llvm"

input_name = "1"
shape_dict = {input_name: x.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=1):
    executor = relay.build_module.create_executor(
        "graph", mod, tvm.cpu(0), target
    ).evaluate()
This prints the following output and warnings (the first line is the tail of the model download):

...100%, 0.02 MB, 819 KB/s, 0 seconds passed
/usr/local/lib/python3.7/dist-packages/tvm/relay/frontend/onnx.py:1869: UserWarning: Mismatched attribute type in ' : kernel_shape'

==> Context: Bad node spec for node. Name:  OpType: Conv
  warnings.warn(str(e))
Cannot find config for target=llvm, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 32, 32, 32), 'float32'), ('TENSOR', (9, 32, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=llvm, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 32, 32), 'float32'), ('TENSOR', (32, 64, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=llvm, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 32, 32), 'float32'), ('TENSOR', (64, 64, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=llvm, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 1, 32, 32), 'float32'), ('TENSOR', (64, 1, 5, 5), 'float32'), (1, 1), (2, 2, 2, 2), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
I then run the compiled module on the input:

dtype = "float32"
tvm_output = np.array(executor(tvm.nd.array(x.astype(dtype))))
tvm_output

The tvm_output is a matrix of NaNs:

array(<tvm.nd.NDArray shape=(1, 1, 672, 672), cpu(0)>
    array([[[[nan, nan, nan, ..., nan, nan, nan],
             [nan, nan, nan, ..., nan, nan, nan],
             [nan, nan, nan, ..., nan, nan, nan],
             ...,
             [nan, nan, nan, ..., nan, nan, nan],
             [nan, nan, nan, ..., nan, nan, nan],
             [nan, nan, nan, ..., nan, nan, nan]]]], dtype=float32),
          dtype=object)
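To narrow down whether the NaNs come from the converted model or from my input handling, I could sanity-check the same input with onnxruntime, roughly like this (untested sketch; "model.onnx" is a placeholder for the downloaded model file, and the input name "1" is the one used in shape_dict above):

import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")            # placeholder path to the same ONNX file
ort_output = sess.run(None, {"1": x.astype("float32")})[0]
print(ort_output.shape, np.isnan(ort_output).any())  # does the reference runtime also produce NaNs?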

The shape of tvm_output also comes back as an empty tuple:

tvm_output.shape

()
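A side note on that empty shape: judging from the dtype=object in the printout above, np.array() is only wrapping the returned tvm.nd.NDArray in a zero-dimensional object array rather than copying its contents, which is why tvm_output.shape is (). Reading the NDArray directly gives the real shape (untested sketch; asnumpy() is the conversion method I see on tvm.nd.NDArray, newer releases also offer numpy()):

raw = executor(tvm.nd.array(x.astype(dtype)))   # tvm.nd.NDArray, shape (1, 1, 672, 672)
tvm_output = raw.asnumpy()                      # plain numpy array instead of a 0-d object wrapper
print(tvm_output.shape)

The values are still all NaN either way, so this only explains the shape part, not the NaNs themselves.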

Thank you in advance for sharing your knowledge.