Hand-crafted assembly code usually does not have fixed tensor dimensions.
For example, we have matrix-multiply assembly code that only requires the X and Y dimensions to meet certain constraints, such as X % 4 == 0 and Y % 4 == 0. But with TVM I only know the fixed-dimension approach; in other words, we have to specify the X dimension as 128 and Y as 64, and if we want TVM-generated assembly code for a different shape, we have to write the Python code again and run AutoTVM again.
If there is a method I currently don't know about that can generate code whose tensor dimensions are not fixed, only constrained in some way, please share it; that would be great.
Hi @gfvvz ,
Please have a read of a previous answer I gave here:
The answer to your question is then yes: TVM has the capability to support symbolic tensor shapes that are resolved dynamically at run time. However, be mindful that not knowing the shapes of the tensors at compile time might hinder some optimizations, so the code might not be “optimal”.
Thanks for your suggestion, but I want to generate assembly code directly and then call it from C code. With the original hand-written version, I can call the assembly code through a function call like void assembly(int *C, int *A, int *B), and the results are correct.
I tried to use te.var() for the tensor shapes, like this:
M_var = tvm.te.var(name='M_var')
N_var = tvm.te.var(name='N_var')
K_var = tvm.te.var(name='K_var')
# Construct the TVM computation.
A = tvm.te.placeholder((M_var,K_var), name='A', dtype=tvm_dtype)
B = tvm.te.placeholder((N_var,K_var), name='B', dtype=tvm_dtype)
k = tvm.te.reduce_axis((0,K_var), name='k')
C = tvm.te.compute((N_var, M_var),
                   lambda i, j: tvm.te.sum(A[j, k].astype(tvm_output_dtype) * B[i, k].astype(tvm_output_dtype), axis=k),
                   name='C')
s = tvm.te.create_schedule(C.op)
func = tvm.build(s, [A, B, C],
                 target=target,
                 target_host=target_host,
                 name='matmul')
func.imported_modules[0].save('tvm_matmul.s', 's')
This generates the assembly code after the build step, but I am not sure how to call such assembly code from my C code: how do I pass the M, N, K values?
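Not an authoritative recipe, but here is a minimal sketch of how the shape values travel in a packed call: M, N and K are not passed as separate scalar arguments; TVM binds M_var, N_var and K_var at run time from the shape arrays of the DLTensor arguments. The sketch assumes the .s file is assembled and linked into your program, that the exported matmul symbol follows the TVMBackendPackedCFunc convention of TVM 0.7+ (check tvm/runtime/c_backend_api.h for your exact version), and float32 inputs and outputs for brevity; call_matmul and init_dltensor are hypothetical helper names.

#include <stdint.h>
#include <stddef.h>
#include <tvm/runtime/c_runtime_api.h>   /* TVMValue, DLTensor, type codes */

/* Packed-call signature used by TVM >= 0.7 generated code (TVMBackendPackedCFunc);
   older releases omit the last three parameters -- verify against your version. */
extern int matmul(TVMValue* args, int* type_codes, int num_args,
                  TVMValue* out_ret_value, int* out_ret_tcode, void* resource_handle);

/* Hypothetical helper: fill a CPU DLTensor header around an existing buffer.
   Older TVM/dlpack versions name the 'device' field 'ctx' (of type DLContext). */
static void init_dltensor(DLTensor* t, void* data, int64_t* shape, int ndim) {
  t->data = data;
  t->device = (DLDevice){kDLCPU, 0};
  t->ndim = ndim;
  t->dtype = (DLDataType){kDLFloat, 32, 1};  /* must match the dtype used at build time */
  t->shape = shape;
  t->strides = NULL;                         /* NULL means compact row-major layout */
  t->byte_offset = 0;
}

/* Hypothetical wrapper: M, N, K are passed only through the shape arrays;
   TVM binds M_var, N_var, K_var from them when the function is called. */
int call_matmul(float* a, float* b, float* c, int64_t M, int64_t N, int64_t K) {
  int64_t a_shape[2] = {M, K};   /* A was declared as (M_var, K_var) */
  int64_t b_shape[2] = {N, K};   /* B was declared as (N_var, K_var) */
  int64_t c_shape[2] = {N, M};   /* C was declared as (N_var, M_var) */

  DLTensor A, B, C;
  init_dltensor(&A, a, a_shape, 2);
  init_dltensor(&B, b, b_shape, 2);
  init_dltensor(&C, c, c_shape, 2);

  /* Same argument order as tvm.build(s, [A, B, C], ...). */
  TVMValue args[3];
  int type_codes[3] = {kTVMDLTensorHandle, kTVMDLTensorHandle, kTVMDLTensorHandle};
  args[0].v_handle = &A;
  args[1].v_handle = &B;
  args[2].v_handle = &C;

  TVMValue ret_val;
  int ret_tcode;
  return matmul(args, type_codes, 3, &ret_val, &ret_tcode, NULL);  /* 0 means success */
}

Depending on the target and schedule, the generated assembly may also reference a few TVM runtime symbols (for example TVMAPISetLastError, or the parallel-launch helpers if the schedule is parallelized), so you may need to link a small part of the TVM runtime or provide those symbols yourself.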