Hi @tqchen @srkreddy1238 @masahi @junrushao, in C++ I am deploying a TVM-compiled model statically for CUDA and OpenCL. When I run inference with the CUDA model I get the error below:
module.cc:92: Check failed: f != nullptr Cannot find function fuse_transpose_kernel0 in the imported modules or global registry
When I run inference with the OpenCL model I get the error below:
module.cc:92: Check failed: f != nullptr Cannot find function fuse_transpose_90_kernel0 in the imported modules or global registry
When I searched for fuse_transpose_90_kernel0 and fuse_transpose_kernel0 I did not find any similar function in the JSON file.
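As far as I understand, those kernel names refer to functions generated in the compiled device module rather than to nodes in the graph JSON, so they would not normally show up there. A rough way to check what was actually generated is to inspect the imported modules of the compiled library on the Python side; this is only a sketch, assuming lib is the library returned by the nnvm.compiler.build call shown further below:

# sketch only: `lib` is assumed to be the library returned by
# nnvm.compiler.build (see the build line further down in this thread)
print(lib.type_key)                   # host module, typically "llvm"
for dev_mod in lib.imported_modules:  # device modules, e.g. "cuda" or "opencl"
    print(dev_mod.type_key)
    print(dev_mod.get_source())       # generated kernel source; names such as
                                      # fuse_transpose_kernel0 should appear here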
Could you provide a minimal reproducible example so we could look into it?
FR_TVM_Deploy::FR_TVM_Deploy()
{
// TVM system-lib module that contains the compiled functions
tvm::runtime::Module mod_syslib = (*tvm::runtime::Registry::Get("module._GetSystemLib"))();
//load graph
std::string json_data(&graph_json[0], &graph_json[0] + graph_json_len);
int device_type = kDLOpenCL;
int device_id = 0;
// get global function module for graph runtime
tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);
this->handle = new tvm::runtime::Module(mod);
// parameters need to be of type TVMByteArray to indicate binary data
TVMByteArray params;
params.data = reinterpret_cast<const char *>(&param_params[0]);
params.size = param_params_len;
mod.GetFunction("load_params")(params);
}
// in the inference path, `mod` is a pointer to the module stored in the constructor (e.g. retrieved from this->handle)
tvm::runtime::PackedFunc set_input = mod->GetFunction("set_input");
set_input("input", input);
// get the run function from the module and invoke it
tvm::runtime::PackedFunc run = mod->GetFunction("run");
run();
When I run the model, it fails with the error above.
Below is the Python call used to build the model:
graph, lib, params = nnvm.compiler.build(sys, target='cuda --system-lib', shape=shape_dict, dtype=dtype_dict, params=params, target_host='llvm --system-lib')
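For reference, a fuller build-and-export flow might look roughly like the sketch below. This is an assumption about how the artifacts could be produced, not the exact script used here; sys, shape_dict, dtype_dict and params are the names from the line above, and printing lib.imported_modules is simply a way to see whether the CUDA/OpenCL kernel module is present next to the host code:

import nnvm.compiler

# sketch only: `sys`, `shape_dict`, `dtype_dict` and `params` are assumed to be
# the nnvm symbol, shape/dtype dictionaries and weights from the post above
graph, lib, params = nnvm.compiler.build(
    sys,
    target="cuda --system-lib",          # or "opencl --system-lib"
    shape=shape_dict,
    dtype=dtype_dict,
    params=params,
    target_host="llvm --system-lib")

# host code that gets linked statically into the binary
lib.save("model.o")

# the CUDA/OpenCL kernels live in imported modules, not in the host module;
# if they are missing from the final binary, GetFunction fails with
# "Cannot find function ... in the imported modules or global registry"
print([m.type_key for m in lib.imported_modules])

# graph definition and weights consumed by the C++ code above
with open("model.json", "w") as f:
    f.write(graph.json())
with open("model.params", "wb") as f:
    f.write(nnvm.compiler.save_param_dict(params))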
This looks the same as:
How to make module.so static link glibc?
Please find my response there.