Question about load module from library

Description

I want to tune an operator, export it to a library (.so), load it back from the library, and finally benchmark it. My code is as follows.

I defined the op here:

import tvm
from tvm import te, auto_scheduler


@auto_scheduler.register_workload
def element_add(m, n, dtype):
    A = te.placeholder((m, n), name="A", dtype=dtype)
    B = te.placeholder((m, n), name="B", dtype=dtype)

    C = te.compute(
        (m, n),
        lambda i, j: A[i, j] + B[i, j],
        name="C",
        attrs={"layout_free_placeholders": [B]},  
    )

    return [A, B, C]

Then I tuned the kernel:

# ...
task = auto_scheduler.SearchTask(
    func=element_add, args=(m, n, dtype), target=target)
task.tune(tune_option)
sch, args = task.apply_best(log_file)
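
For reference, here is roughly how target, log_file, lib_file and tune_option are set up in the omitted part (a minimal sketch; the concrete values are only for illustration):

import tvm
from tvm import auto_scheduler

target = tvm.target.Target("llvm")
m, n, dtype = 1024, 1024, "float32"
log_file = "element_add.json"
lib_file = "element_add.so"

tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=64,  # small trial budget, just for illustration
    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    verbose=2,
)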

Then I built and exported it, following the return-type description in the documentation:

m_l = tvm.lower(sch, args, name="add")
m = tvm.build(m_l, target=target)
m.export_library(lib_file)

Then I loaded it again and ran the benchmark:

import numpy as np
from tvm.contrib import graph_executor

dev = tvm.device(str(target), 0)
lib: tvm.runtime.Module = tvm.runtime.load_module(lib_file)
module = graph_executor.GraphModule(lib[f"add"](dev))
a_tvm = tvm.nd.array(np.random.uniform(size=(m, n)).astype(dtype))
b_tvm = tvm.nd.array(np.random.uniform(size=(m, n)).astype(dtype))
c_tvm = tvm.nd.array(np.zeros((m, n), dtype=dtype))
module.set_input("A", a_tvm)
module.set_input("B", b_tvm)
module.set_input("C", c_tvm)
timer = module.benchmark(dev, number=20)

But I got a TVMError:

Traceback (most recent call last):
  File "op_tuning.py", line 45, in element2helper
    module = graph_executor.GraphModule(lib[f"add"](dev))
  File "/home/xxx/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  4: TVMFuncCall
        at /home/xxx/tvm/src/runtime/c_runtime_api.cc:475
  3: tvm::runtime::PackedFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
        at /home/xxx/tvm/include/tvm/runtime/packed_func.h:1151
  2: std::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
        at /usr/include/c++/7/bits/std_function.h:706
  1: _M_invoke
        at /usr/include/c++/7/bits/std_function.h:316
  0: operator()
        at /home/xxx/tvm/src/runtime/library_module.cc:80
  File "/home/xxx/tvm/src/runtime/library_module.cc", line 80
TVMError:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------

  Check failed: ret == 0 (-1 vs. 0) : Assert fail: (num_args == 3), add: num_args should be 3

I’m sure the args passed to tvm.lower(sch, args, name="add") contain 3 arguments, so what is the cause of this error?

Try the approach in Working with Operators Using Tensor Expression — tvm 0.9.dev182+ge718f5a8a documentation instead of the graph module API.

The assertion fails because lib["add"](dev) invokes the compiled kernel "add" with a single argument (the device) instead of the 3 tensors it expects; GraphModule is meant to wrap the graph executor factory produced by relay.build, not a plain operator library built with tvm.build. tvm.runtime.load_module returns a module whose functions you can call directly, so in your case you can run the computation as simply as:

lib: tvm.runtime.Module = tvm.runtime.load_module(lib_file)
lib(a_tvm, b_tvm, c_tvm)
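
And if you still want timing numbers, you can benchmark the loaded kernel with time_evaluator, roughly like this (a sketch reusing the target, lib_file, m, n, dtype variables from your script; "add" is the name you passed to tvm.lower):

import numpy as np
import tvm

# Benchmark the exported kernel directly via time_evaluator, no GraphModule needed.
dev = tvm.device(str(target), 0)
lib = tvm.runtime.load_module(lib_file)

a_tvm = tvm.nd.array(np.random.uniform(size=(m, n)).astype(dtype), device=dev)
b_tvm = tvm.nd.array(np.random.uniform(size=(m, n)).astype(dtype), device=dev)
c_tvm = tvm.nd.array(np.zeros((m, n), dtype=dtype), device=dev)

lib["add"](a_tvm, b_tvm, c_tvm)                        # one correctness run
evaluator = lib.time_evaluator("add", dev, number=20)  # averages 20 runs
print("mean time: %.6f s" % evaluator(a_tvm, b_tvm, c_tvm).mean)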

By the way @merrymercy, in which cases should we use GraphModule? In many examples, relay.build + GraphModule is a common combination, e.g. Auto-scheduling a Neural Network for x86 CPU — tvm 0.9.dev182+ge718f5a8a documentation.

If I have a relay built function, how can I directly call it without using GraphModule?

Thanks for your kind reply :+1:

Got it :smiley: Thanks, it’s really helpful.

I don’t think we can call it without using GraphModule, because you need a graph runtime to run a relay program.
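
For reference, the usual relay.build + GraphModule combination looks roughly like this (a minimal runnable sketch with a toy Relay function; the shapes and names are illustrative, not from this thread):

import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Build a tiny Relay function and run it through the graph executor.
x = relay.var("x", shape=(1, 16), dtype="float32")
y = relay.var("y", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, y], relay.add(x, y)))

target = tvm.target.Target("llvm")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)

dev = tvm.device(str(target), 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("x", tvm.nd.array(np.ones((1, 16), dtype="float32")))
module.set_input("y", tvm.nd.array(np.ones((1, 16), dtype="float32")))
module.run()
print(module.get_output(0))
print(module.benchmark(dev, number=20))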

Got it. Thanks for the answer.