[Extern op] tvm.tir.call_packed

Hi,

Can we specify different data types for the arguments when calling the extern operator tvm.tir.call_packed, such as an int8 input and an int32 output?

For example, in this tutorial, can we specify ins as int8 and outs as int32? I see only one dtype can be specified, or am I missing something.

B = te.extern(
    A.shape,
    [A],
    lambda ins, outs: tvm.tir.call_packed("tvm.contrib.my_tvm_addone", ins[0], outs[0]),
    name="C",
)

Thank you,

The tensor data type is wrapped in the tensor itself, so tvm.contrib.my_tvm_addone should be able to take in A as an int8 tensor.
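A minimal numpy-only sketch of the same idea (this is not the TVM API; the function and array names here are made up for illustration): because each buffer carries its own dtype, a single callback can accept an int8 input and write into an int32 output without any separate dtype argument.

```python
import numpy as np

# Illustrative numpy analogue (not TVM): each argument carries its own
# dtype, so one callback can take an int8 input and an int32 output.
def my_addone(inp, out):
    # The dtypes are read from the buffers themselves.
    out[:] = inp.astype(out.dtype) + 1

a = np.arange(4, dtype="int8")    # int8 input
b = np.empty(4, dtype="int32")    # int32 output
my_addone(a, b)
# b holds [1, 2, 3, 4] as int32 even though a is int8
```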


Hi @tqchen

Thank you for your answer; maybe I need a bit more elaboration.

I am trying to use the extern op to make a PYNQ call to an FPGA.

For example, I prepare my input tensors like this.

    dtype = "int8"
    out_dtype = "int32"
    A = te.placeholder((batch, in_dim), name="A", dtype=dtype)
    B = te.placeholder((out_dim, in_dim), name="B", dtype=dtype)
    C = te.placeholder((out_dim,), name="C", dtype=out_dtype)

The code at the end of this post is my extern matmul code for initial testing with numpy, and I am not sure how to specify the input dtype.

With the code below, I get this error:

  Check failed: ret == 0 (-1 vs. 0) : Assert fail: (((tir.tvm_struct_get(arg2, 0, 5) == (uint8)0) && (tir.tvm_struct_get(arg2, 0, 6) == (uint8)32)) && (tir.tvm_struct_get(arg2, 0, 7) == (uint16)1)), arg2.dtype is expected to be int32

@tvm.register_func("tvm.contrib.my_matmul")
def my_matmul(a, b, c):
    m, k = get_const_tuple(a.shape)
    n, _ = get_const_tuple(b.shape)
    cm, cn = get_const_tuple(c.shape)

    tvm.nd.array(np.matmul(a.asnumpy(), b.asnumpy().T)).copyto(c)

@autotvm.register_topi_compute("dense_nopack.my_extern")
def dense_nopack(cfg, data, weight, bias=None, out_dtype=None):
    """Compute dense without packing"""
    if out_dtype is None:
        out_dtype = data.dtype
    M, K = get_const_tuple(data.shape)
    N, _ = get_const_tuple(weight.shape)

    CC = te.extern(
        (M, N),
        [data, weight],
        lambda ins, outs: tvm.tir.call_packed("tvm.contrib.my_matmul", ins[0], ins[1], outs[0]),
        dtype="int32",
        name="matmul_extern",
    )

    if bias is not None:
        C = te.compute((M, N), lambda i, j: CC[i, j] + bias[j].astype(out_dtype))
        return C

    return CC

Thank you.

Insop

As the error message indicates, you might want to check your arg2's dtype. In this case arg2 is C, which is expected to be int32, so make sure you create an int32 array when passing it to the function.
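A numpy-only sketch of the fix (the shapes here are hypothetical, chosen for illustration): np.matmul of two int8 arrays stays int8, so the result must be cast or accumulated into int32 before it is copied into the int32 output tensor, otherwise the dtype assert on arg2 fires.

```python
import numpy as np

# Hypothetical shapes for illustration: batch=2, in_dim=4, out_dim=3.
a = np.random.randint(-8, 8, size=(2, 4)).astype("int8")
b = np.random.randint(-8, 8, size=(3, 4)).astype("int8")

# np.matmul(a, b.T) would yield an int8 array, tripping the int32 assert
# on arg2; cast the operands so the product is computed and stored as int32.
c = np.matmul(a.astype("int32"), b.astype("int32").T)
assert c.dtype == np.int32 and c.shape == (2, 3)
```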

Thanks a lot, I figured it out. The assert was coming from the numpy test code.