They are not equal. In another two-dimensional input case (input shape (1, 32)), the stride values are (32, 1) and (1, 1), respectively. That’s why we got the bug shown above.
When I changed the input shape to (2, 32), then they were both (32, 1).
I downgraded PyTorch to version 1.12.0, and the problem went away.
I’m having the same problem. I want to pass PT tensors to TVM efficiently, but strides are somehow corrupted if I use tvm.runtime.ndarray.from_dlpack(to_dlpack(tensor)).
The error I get from TVM:
File "/Users/masa/projects/dev/tvm/src/runtime/library_module.cc", line 87
TVMError: Assert fail: T.int64(1) == arg_p_inp_0_strides[1] and T.int64(77) == arg_p_inp_0_strides[0], arg.p_inp_0.strides: expected to be compact array
Maybe this is a PT problem. Here is a weird demonstration:
# from torch.utils.dlpack import to_dlpack, from_dlpack
In [52]: a = torch.randint(0, 100, (1, 77), dtype=torch.int32)
In [53]: b = from_dlpack(to_dlpack(a))
In [54]: a.stride()
Out[54]: (77, 1)
In [55]: b.stride()
Out[55]: (1, 1)
The weird thing is that I’ve used exactly this approach, tvm.runtime.ndarray.from_dlpack(to_dlpack(tensor)), to convert PT tensors to TVM before, and that script still works today.
We can enhance the check on the TVM side to still prove the tensor is contiguous: for each dimension, either the stride matches the product of the inner shape extents, or the dimension size == 1 (in which case the stride does not matter).
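A minimal sketch of such a relaxed check (the function name and structure are illustrative, not TVM's actual implementation): a stride is irrelevant for any dimension of size 1, so we only verify strides for dimensions with size greater than 1.

```python
def is_compact(shape, strides):
    """Return True if (shape, strides) describes a compact row-major
    array, ignoring stride values on size-1 dimensions."""
    expected = 1
    # Walk dimensions from innermost to outermost, tracking the stride
    # a compact layout would require at each level.
    for size, stride in zip(reversed(shape), reversed(strides)):
        if size != 1 and stride != expected:
            return False
        expected *= size
    return True

# The PyTorch example above: shape (1, 77) with strides (1, 1).
# A strict check rejects it (it expects (77, 1)), but the relaxed one
# accepts it because the leading size-1 dim's stride is irrelevant.
print(is_compact((1, 77), (1, 1)))   # True under the relaxed rule
print(is_compact((2, 32), (32, 1)))  # True
print(is_compact((2, 32), (1, 1)))   # False: genuinely non-compact
```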
Alternatively, we can normalize the strides in the TVM runtime's FromDLPack path, whichever is more convenient.
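A sketch of that normalization (again illustrative, not TVM's actual FromDLPack code): once the tensor is known to be contiguous, recompute compact row-major strides directly from the shape and ignore the strides the producer reported.

```python
def normalize_strides(shape):
    """Compute compact row-major strides for a given shape.
    Only valid when the underlying buffer is actually contiguous."""
    strides = [1] * len(shape)
    # Each stride is the product of all inner dimension extents.
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return tuple(strides)

# Replaces PyTorch's (1, 1) report for shape (1, 77) with the
# compact strides TVM's assert expects.
print(normalize_strides((1, 77)))  # (77, 1)
print(normalize_strides((2, 32)))  # (32, 1)
```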