A = tvm.te.placeholder((M, K), name='A', dtype='int8')
B = tvm.te.placeholder((N, K), name='B', dtype='int8')
k = tvm.te.reduce_axis((0, K), name='k')
C = tvm.te.compute((M, N), lambda i, j: tvm.te.sum(A[i, k] * B[j, k], axis=k), name='C')
For an expression like the above, tensor C's dtype is 'int8' as well.
But in certain implementations we need tensor C's dtype to be 'int32'. Can we change tensor C's bit width in the current TVM TE language?
I don't want the workaround of creating a tensor D that widens C's values to 'int32' after the fact: since the reduction would still accumulate in 'int8', it can overflow before the cast, which may produce a different result.