About an output tensor bit width that differs from the input tensors

```python
A = tvm.te.placeholder((M, K), name='A', dtype='int8')
B = tvm.te.placeholder((N, K), name='B', dtype='int8')
k = tvm.te.reduce_axis((0, K), name='k')

C = tvm.te.compute((M, N), lambda i, j: tvm.te.sum(A[i][k] * B[j][k], axis=k), name='C')
```

For an expression like the one above, tensor C's dtype is 'int8' as well.

But in certain implementations we need tensor C's dtype to be 'int32'. Can we change tensor C's bit width in the current TVM TE language?

I don't want the solution of creating a separate tensor D that widens tensor C's values to 'int32', since that may give a different result: the reduction would already have accumulated (and possibly overflowed) at 8 bits before the cast.
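The concern about differing results is real. A small NumPy illustration (with hypothetical values, not from the original post) of why widening *after* an int8 reduction is not equivalent to widening *before* it:

```python
import numpy as np

# Casting to int32 only after an int8 reduction cannot recover
# precision lost to wrap-around during the reduction itself.
a = np.full(4, 100, dtype=np.int8)
b = np.full(4, 2, dtype=np.int8)

# Multiply and accumulate entirely in int8, then widen:
# 100 * 2 = 200 wraps to -56 in int8, and the sum wraps again.
narrow_sum = np.sum(a * b, dtype=np.int8).astype(np.int32)

# Widen the operands first, then multiply and accumulate in int32.
wide_sum = np.sum(a.astype(np.int32) * b.astype(np.int32))

print(narrow_sum, wide_sum)  # the two values differ
```

Here `wide_sum` is the mathematically correct 800, while `narrow_sum` is a wrapped int8 value, which is exactly why a post-hoc widening tensor D is not acceptable.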

It seems TVM doesn't support this directly on the reduction operation itself. For the other operations involved, astype can solve your issue.

Thanks, the astype method does work in my case. Thanks again for your suggestion!