How to deploy a TVM model on Windows [Inference]

I want to deploy TVM inference on Windows. To this end, I did the following:

1. Clone the TVM source code from GitHub

2. Download the LLVM source code and build it with Visual Studio

3. Build TVM with LLVM support: in config.cmake, change set(USE_LLVM OFF) to set(USE_LLVM ON) (or point it at your llvm-config)

4. From the TVM root path, install the Python packages:

cd python; python setup.py install --user; cd ..
cd topi/python; python setup.py install --user; cd ../..
cd nnvm/python; python setup.py install --user; cd ../..
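To confirm the installation worked, a minimal sanity check (run it from outside the source tree so Python picks up the installed packages rather than the local sources):

import tvm
import nnvm
import topi
# prints the installed TVM version if all three packages import cleanly
print(tvm.__version__)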


After finishing the above work, we can find the built tvm.dll (and tvm_runtime.dll) in the Release folder of the build tree.
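A quick way to confirm the DLL actually loads (the build\Release path is an assumption; adjust it to your own build tree):

import ctypes
# hypothetical path to the freshly built library; adjust as needed
runtime = ctypes.CDLL(r"build\Release\tvm.dll")
print(runtime)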


Generate the TVM model

import numpy as np
import nnvm.compiler
import nnvm.testing
import tvm
from tvm.contrib import graph_runtime
import mxnet as mx
from mxnet import ndarray as nd

prefix,epoch = "model",0
sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)
image_size = (112, 112)
opt_level = 3

shape_dict = {'data': (1, 3, *image_size)}
target = tvm.target.create("llvm -mcpu=haswell")
# "target" means your target platform you want to compile.

#target = tvm.target.create("llvm -mcpu=broadwell")
nnvm_sym, nnvm_params = nnvm.frontend.from_mxnet(sym, arg_params, aux_params)
with nnvm.compiler.build_config(opt_level=opt_level):
    graph, lib, params = nnvm.compiler.build(nnvm_sym, target, shape_dict, params=nnvm_params)
lib.export_library("./deploy_lib.dll")
print('lib exported successfully')
with open("./deploy_graph.json", "w") as fo:
    fo.write(graph.json())
with open("./deploy_param.params", "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))

Then we can find the TVM model files (deploy_lib.dll, deploy_graph.json, deploy_param.params) in the current folder.

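To run inference with the exported files, a minimal loading sketch using the same-era graph_runtime API (the random input is just a placeholder for a real preprocessed 112x112 image):

import numpy as np
import tvm
from tvm.contrib import graph_runtime

# load the three artifacts produced by the export step above
loaded_lib = tvm.module.load("./deploy_lib.dll")
loaded_graph = open("./deploy_graph.json").read()
loaded_params = open("./deploy_param.params", "rb").read()

# build the runtime module on CPU and feed it the saved parameters
ctx = tvm.cpu(0)
module = graph_runtime.create(loaded_graph, loaded_lib, ctx)
module.load_params(loaded_params)

# placeholder input matching shape_dict = {'data': (1, 3, 112, 112)}
data = np.random.uniform(size=(1, 3, 112, 112)).astype("float32")
module.set_input("data", tvm.nd.array(data))
module.run()
out = module.get_output(0).asnumpy()
print(out.shape)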


If you hit a vcvars64.bat error during export_library, it usually means the MSVC environment is not set up; running the script from the VS x64 Native Tools Command Prompt typically resolves it.

But when I use the model, an error occurred.

The demo code is TVM_mobilefacenet.

Build the demo: cmake-gui configure -> generate -> open in Visual Studio -> build

@tqchen


Yes, I am also struggling to deploy the network under the Windows VS C++ environment.

Hi, did you complete the task of deploying a model on Windows using TVM?