How to build TVM as a static library and run inference using C++?

There are two parts to deployment:

  • TVM runtime

The solution above covers how to build a static TVM runtime and link it against the final executable. Instead of compiling the runtime's individual files and then archiving the objects, I suggested including all of the runtime's .cc files in a single wrapper file and archiving that (see the first sketch after this list).

  • Compiled module (graph, params, model). Here the model is lib.so (the output of the Python build), which is only the compiled module, not the runtime. Driving it from C++ is sketched second below.
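
For the first part, here is a minimal sketch of such a wrapper file, modeled on TVM's `apps/howto_deploy/tvm_runtime_pack.cc`. The exact list of .cc files varies between TVM versions, and the relative include paths assume the file sits two directories below the TVM source root — adjust both for your checkout.

```cpp
// tvm_runtime_pack.cc -- amalgamate the TVM runtime into one translation unit.
// Build commands are an assumption; adjust paths for your TVM checkout:
//   g++ -std=c++17 -O2 -c tvm_runtime_pack.cc -o tvm_runtime_pack.o \
//       -I${TVM_HOME}/include \
//       -I${TVM_HOME}/3rdparty/dmlc-core/include \
//       -I${TVM_HOME}/3rdparty/dlpack/include
//   ar rcs libtvm_runtime.a tvm_runtime_pack.o

// Core runtime pieces (the file list depends on the TVM version).
#include "../../src/runtime/c_runtime_api.cc"
#include "../../src/runtime/cpu_device_api.cc"
#include "../../src/runtime/file_utils.cc"
#include "../../src/runtime/library_module.cc"
#include "../../src/runtime/module.cc"
#include "../../src/runtime/ndarray.cc"
#include "../../src/runtime/object.cc"
#include "../../src/runtime/registry.cc"
#include "../../src/runtime/thread_pool.cc"
#include "../../src/runtime/threading_backend.cc"
#include "../../src/runtime/workspace_pool.cc"

// Needed to load the compiled lib.so at runtime.
#include "../../src/runtime/dso_library.cc"

// Graph executor, needed to drive the (graph, params, lib.so) module.
#include "../../src/runtime/graph_executor/graph_executor.cc"
```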
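For the second part, here is a sketch of running inference against that compiled module from C++ through the graph executor, in the style of TVM's `apps/howto_deploy/cpp_deploy.cc`. The file names (`lib.so`, `deploy_graph.json`, `deploy_param.params`), the input name `data`, and the input shape are assumptions from a typical Python export; match them to your model. The registry key is `tvm.graph_executor.create` in recent TVM versions (`tvm.graph_runtime.create` in older ones).

```cpp
// cpp_deploy.cc -- run inference against the Python-built module.
// Link against libtvm_runtime.a from the previous step, e.g.:
//   g++ -std=c++17 cpp_deploy.cc libtvm_runtime.a -ldl -pthread -o cpp_deploy
#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/registry.h>

#include <fstream>
#include <string>

int main() {
  // lib.so is the compiled module produced by the Python build; the graph
  // JSON and params file names below are assumptions from a typical export.
  tvm::runtime::Module mod_lib =
      tvm::runtime::Module::LoadFromFile("lib.so");

  std::ifstream json_in("deploy_graph.json");
  std::string graph_json((std::istreambuf_iterator<char>(json_in)),
                         std::istreambuf_iterator<char>());

  std::ifstream params_in("deploy_param.params", std::ios::binary);
  std::string params_data((std::istreambuf_iterator<char>(params_in)),
                          std::istreambuf_iterator<char>());
  TVMByteArray params_arr{params_data.data(), params_data.size()};

  // Create a graph executor on CPU ("tvm.graph_runtime.create" on older TVM).
  int device_type = kDLCPU, device_id = 0;
  const tvm::runtime::PackedFunc* create =
      tvm::runtime::Registry::Get("tvm.graph_executor.create");
  tvm::runtime::Module gmod =
      (*create)(graph_json, mod_lib, device_type, device_id);
  gmod.GetFunction("load_params")(params_arr);

  // Input name and shape are assumptions; fill the buffer with real data.
  DLDevice dev{kDLCPU, 0};
  tvm::runtime::NDArray input = tvm::runtime::NDArray::Empty(
      {1, 3, 224, 224}, DLDataType{kDLFloat, 32, 1}, dev);
  gmod.GetFunction("set_input")("data", input);
  gmod.GetFunction("run")();
  tvm::runtime::NDArray output = gmod.GetFunction("get_output")(0);
  return 0;
}
```

Note that only the static runtime archive is linked at build time; lib.so is loaded dynamically at run time, which is why `-ldl` is needed on the link line.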