TVM conv2d accuracy

Hello, everyone!
I am trying to implement a Caffe frontend in TVM. While verifying the correctness of the conversion tool, I found that the output of the convolution layer was inconsistent with the expected output:
np.testing.assert_allclose(caffe_out, tvm_out, rtol=1e-5, atol=1e-5)
I checked the parameters passed to TVM against the original Caffe model and found that they are exactly the same. So I began to suspect that TVM loses precision in its convolution implementation (op.nn.conv2d). I am sorry that I can't provide the original code, but you can reproduce this problem with the TF frontend.
Some error messages are as follows:
AssertionError:
Not equal to tolerance rtol=1e-05, atol=1e-05

(mismatch 0.27777777777777146%)
 x: array([-228.74648 ,  -13.361052, -203.51198 , ...,  -47.17854 ,
        132.35803 ,   -7.620415], dtype=float32)
 y: array([-228.74648 ,  -13.361056, -203.51195 , ...,  -47.17854 ,
        132.35803 ,   -7.620427], dtype=float32)

help, please!
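For reference, the relative error between the elements actually printed in the error message can be checked directly. This is a small NumPy sketch using only the six values shown above; the elements hidden behind the `...` are not included and may differ more:

```python
import numpy as np

# The six elements printed in the assertion error (x = Caffe, y = TVM).
x = np.array([-228.74648, -13.361052, -203.51198,
              -47.17854, 132.35803, -7.620415], dtype=np.float32)
y = np.array([-228.74648, -13.361056, -203.51195,
              -47.17854, 132.35803, -7.620427], dtype=np.float32)

# Element-wise relative error between the two backends' outputs.
rel = np.abs(x - y) / np.abs(y)
print(rel.max())  # on the order of 1e-6 for these elements
```

For these particular elements the disagreement is only in the last couple of significant float32 digits, which is consistent with an accumulation-order effect rather than a conversion bug.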

1e-5 is quite a tight tolerance for this sort of comparison. I haven't looked closely at precision in Caffe or TVM specifically, but I wouldn't be surprised if neither gives answers to that precision, in which case the comparison is not meaningful (I don't have the full picture, so you'd have to look into this yourself). I also wouldn't be surprised if an innocent change in summation order swung the result by that much. Does the test pass with 1e-2 or 1e-3? How tight a tolerance do you actually need?
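To illustrate the summation-order point, here is a small standalone NumPy sketch (not TVM or Caffe code): accumulating the same float32 products in two different orders already yields results that disagree in the low-order digits.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(100_000).astype(np.float32)
b = rng.standard_normal(100_000).astype(np.float32)
products = (a * b).astype(np.float32)

# Same numbers, two accumulation orders: NumPy's pairwise summation
# versus a strict left-to-right float32 loop.
pairwise = np.sum(products, dtype=np.float32)
sequential = np.float32(0.0)
for p in products:
    sequential = np.float32(sequential + p)

# Both are valid float32 answers for the same dot product, yet they
# typically disagree beyond a 1e-5 relative tolerance.
print(pairwise, sequential)
```

A convolution output element is exactly such a dot product, so two correct implementations that reduce in different orders can legitimately differ at this scale.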

At present, I suspect that the accuracy loss is caused by Caffe and TVM performing the multiply-accumulate operations of the convolution op in different orders.