Cannot run cpp_deploy with the SSD model from the official documentation

I want to deploy an object detection model in C++ with a .so file exported from TVM.

First I tried to import an ONNX model, assuming TVM supports it well, but soon found it did not work because TVM Relay does not yet support some ONNX operators.

So I used the SSD model in MXNet format from the official doc url:, assuming: if TVM can import a DL model (from any format), then it should be able to export a .so (target="llvm --system-lib").

The .so file was exported (along with the .params and .json files), and then I wrote C++ code to use it, but it failed with a C++ exception when the code tried to get a PackedFunc from the model:

```
build % lldb ./TVMCppDeployTest
...
Process 48178 stopped
* thread #1, queue = '', stop reason = signal SIGABRT
    frame #0: 0x00007fff6c5e133a libsystem_kernel.dylib`__pthread_kill + 10
libsystem_kernel.dylib`__pthread_kill:
->  0x7fff6c5e133a <+10>: jae    0x7fff6c5e1344            ; <+20>
    0x7fff6c5e133c <+12>: movq   %rax, %rdi
    0x7fff6c5e133f <+15>: jmp    0x7fff6c5db629            ; cerror_nocancel
    0x7fff6c5e1344 <+20>: retq
(lldb) bt
* thread #1, queue = '', stop reason = signal SIGABRT
  * frame #0: 0x00007fff6c5e133a libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007fff6c69de60 libsystem_pthread.dylib`pthread_kill + 430
    frame #2: 0x00007fff6c568808 libsystem_c.dylib`abort + 120
    frame #3: 0x00007fff697c7458 libc++abi.dylib`abort_message + 231
    frame #4: 0x00007fff697b88a7 libc++abi.dylib`demangling_terminate_handler() + 238
    frame #5: 0x00007fff6b2f35b1 libobjc.A.dylib`_objc_terminate() + 104
    frame #6: 0x00007fff697c6887 libc++abi.dylib`std::__terminate(void (*)()) + 8
    frame #7: 0x00007fff697c91a2 libc++abi.dylib`__cxxabiv1::failed_throw(__cxxabiv1::__cxa_exception*) + 27
    frame #8: 0x00007fff697c9169 libc++abi.dylib`__cxa_throw + 113
    frame #9: 0x00000001000117bc TVMCppDeployTest`std::__1::__throw_bad_function_call() at functional:1431:5
    frame #10: 0x000000010001171c TVMCppDeployTest`std::__1::__function::__value_func<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator(this=0x00007ffeefbff5f0, __args=0x00007ffeefb3f070, __args=0x00007ffeefb3f050)(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) const at functional:1872:13
    frame #11: 0x00000001000115a1 TVMCppDeployTest`std::__1::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator(this=0x00007ffeefbff5f0, __arg=TVMArgs @ 0x00007ffeefb3f070, __arg=0x00007ffeefb3f448)(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const at functional:2548:12
    frame #12: 0x0000000100006118 TVMCppDeployTest`tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator(this=0x00007ffeefbff5f0, args=0x00007ffeefb3f508)<DLContext&>(DLContext&) const at packed_func.h:1192:3
    frame #13: 0x000000010000542a TVMCppDeployTest`main at tvm_cpp_deploy_test.cpp:40:33
    frame #14: 0x00007fff6c499cc9 libdyld.dylib`start + 1
    frame #15: 0x00007fff6c499cc9 libdyld.dylib`start + 1
```

C++ model deploy test file:

```cpp
#include <cassert>
#include <cstring>
#include <fstream>
#include <iostream>
#include <dlpack/dlpack.h>
#include <opencv4/opencv2/opencv.hpp>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/packed_func.h>

void Mat_to_CHW(float *data, cv::Mat &frame)
{
    assert(data && !frame.empty());
    unsigned int volChl = 256 * 256;

    for (int c = 0; c < 3; ++c)
        for (unsigned j = 0; j < volChl; ++j)
            data[c * volChl + j] = static_cast<float>(frame.data[j * 3 + c]) / 255.0f;
}

#define MODEL_NAME "ssd_512_resnet50_v1_voc"

int main() {
std::cout<<"init DLContext..."<<std::endl;
DLContext ctx{kDLCPU, 0};

std::cout<<"load mod_factory..."<<std::endl;
tvm::runtime::Module mod_factory =
    tvm::runtime::Module::LoadFromFile("../" MODEL_NAME ".so");
std::cout<<"load mod_factory OK."<<std::endl;

std::ifstream json_in("../" MODEL_NAME ".json", std::ios::in);
std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());

// parameters in binary
std::ifstream params_in("../" MODEL_NAME ".params", std::ios::binary);
std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());

TVMByteArray params_arr;
params_arr.data = params_data.c_str();
params_arr.size = params_data.length();

// create the graph runtime module
tvm::runtime::Module gmod = mod_factory.GetFunction("default")(ctx);
std::cout<<"get module obj OK."<<std::endl;
tvm::runtime::PackedFunc set_input = gmod.GetFunction("set_input"); //<--- Log indicates exception thrown here;
std::cout<<"get inner function set_input OK."<<std::endl;
tvm::runtime::PackedFunc load_params = gmod.GetFunction("load_params");
std::cout<<"get inner function load_params OK."<<std::endl;
tvm::runtime::PackedFunc run = gmod.GetFunction("run");
std::cout<<"get inner function run OK."<<std::endl;
tvm::runtime::PackedFunc get_output = gmod.GetFunction("get_output");
std::cout<<"get inner function get_output OK."<<std::endl;

tvm::runtime::NDArray x = tvm::runtime::NDArray::Empty({1, 3, 256, 256}, DLDataType{kDLFloat, 32, 1}, ctx);

cv::Mat image, frame, input;
image = cv::imread("./cat.png");
std::cout<<"call cv::cvtColor"<<std::endl;
cv::cvtColor(image, frame, cv::COLOR_BGR2RGB);
std::cout<<"call cv::resize"<<std::endl;
cv::resize(frame, input,  cv::Size(256,256));
float data[256 * 256 * 3];

std::cout<<"call Mat_to_CHW"<<std::endl;
Mat_to_CHW(data, input);

std::cout<<"call memcpy"<<std::endl;
memcpy(x->data, &data, 3 * 256 * 256 * sizeof(float));

std::cout<<"call set_input"<<std::endl;
set_input("data", x);

std::cout<<"call load_params"<<std::endl;
load_params(params_arr);

std::cout<<"before run()"<<std::endl;
run();
std::cout<<"after run()"<<std::endl;

/*
get_output(0, y);
auto result = static_cast<float*>(y->data);
for (int i = 0; i < 3; i++)
    std::cout<<result[i]<<std::endl;
*/

}  // end main
```

Hi, I have found where the problem is: the new TVM (with Relay) only needs to export a single .so file; it is about 148 MB in the official documentation's SSD model case.

I was previously puzzled by the code in , which exports two versions of the file, .dll & .o, and I could not figure out the difference.

In fact, the key is "export one .so when using the new Relay".

Export one .so:

```python
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target, params=params)
    lib.export_library("./{}.so".format(model_name))  # i only added this line & it's exported ok.
return lib
```

& when loading this .so in C++ code, since the .json & .params are embedded in the .so file, there is no longer any need to get the "load_params" PackedFunc.

Nice work.

Have you tried to deploy an object detection model from TensorFlow, such as one from the Object Detection API with TensorFlow 1?

Hi, I didn't try importing models from TensorFlow v1; my current focus is not on model deployment but on pre-work for the next RPC wrapper around TVM's API.

I prefer ONNX over TensorFlow v1 (& Google does not seem to be involved in the TVM project; they have their TF Serving...), but TVM's Relay currently doesn't support some operators...