Relay.build error

Hi all, I am a beginner with TVM. I am trying to compile the insightface project (MXNet) with TVM. My version is 0.7 dev, and I get the error message below. Please help me — how can I solve it? If there is any information I should supply, please tell me. Thanks in advance.

[18:27:31] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.5.1. Attempting to upgrade...
[18:27:31] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
Cannot find config for target=opencl -keys=intel_graphics,opencl,gpu -device=intel_graphics -max_num_threads=256 -model=unknown -thread_warp_size=16, workload=('dense_small_batch.cuda', ('TENSOR', (1, 25088), 'float32'), ('TENSOR', (512, 25088), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
Traceback (most recent call last):
  File "tvm_transform_1.py", line 26, in <module>
    graph, lib, params = relay.build(relay_func, target, params=relay_params)
  File "/home/dave/tvm/python/tvm/relay/build_module.py", line 259, in build
    graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
  File "/home/dave/tvm/python/tvm/relay/build_module.py", line 126, in build
    self._build(mod, target, target_host)
  File "/home/dave/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/dave/tvm/build/libtvm.so(non-virtual thunk to tvm::tir::StmtExprMutator::VisitExpr(tvm::PrimExpr const&)+0x7c) [0x7fa83d9f488c]
  [bt] (7) /home/dave/tvm/build/libtvm.so(tvm::tir::ExprFunctor<tvm::PrimExpr (tvm::PrimExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::tir::ExprFunctor<tvm::PrimExpr (tvm::PrimExpr const&)>*)#10}::_FUN(tvm::runtime::ObjectRef const&, tvm::tir::ExprFunctor<tvm::PrimExpr (tvm::PrimExpr const&)>*)+0x27) [0x7fa83d9df177]
  [bt] (6) /home/dave/tvm/build/libtvm.so(tvm::tir::ExprMutator::VisitExpr_(tvm::tir::MulNode const*)+0x2e) [0x7fa83dd684be]
  [bt] (5) /home/dave/tvm/build/libtvm.so(non-virtual thunk to tvm::tir::StmtExprMutator::VisitExpr(tvm::PrimExpr const&)+0x7c) [0x7fa83d9f488c]
  [bt] (4) /home/dave/tvm/build/libtvm.so(tvm::tir::ExprFunctor<tvm::PrimExpr (tvm::PrimExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::tir::ExprFunctor<tvm::PrimExpr (tvm::PrimExpr const&)>*)#7}::_FUN(tvm::runtime::ObjectRef const&, tvm::tir::ExprFunctor<tvm::PrimExpr (tvm::PrimExpr const&)>*)+0x27) [0x7fa83d9df087]
  [bt] (3) /home/dave/tvm/build/libtvm.so(tvm::tir::IntrinInjecter::VisitExpr_(tvm::tir::CallNode const*)+0x19b) [0x7fa83de1ccbb]
  [bt] (2) /home/dave/tvm/build/libtvm.so(tvm::tir::IntrinInjecter::ApplyPattern(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::PrimExpr const&)+0x160) [0x7fa83de1c100]
  [bt] (1) /home/dave/tvm/build/libtvm.so(+0xb5a344) [0x7fa83e061344]
  [bt] (0) /home/dave/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x82) [0x7fa83d9c5322]
  File "/home/dave/tvm/src/target/source/intrin_rule_opencl.cc", line 80
TVMError: Check failed: analyzer.CanProve(call->args[3] == call->args[4]): Intel warp shuffle dose not support width != warp_size

NNVM has been deprecated for a while IIRC

Thanks for the reply, but my code does not use NNVM.

import tvm
from tvm import relay  # needed below for relay.frontend / relay.build
from tvm.contrib import graph_runtime
import mxnet as mx

## load mxnet model
prefix, epoch = "model/models", 0
# prefix = '/your/mxnet/checkpoint/prefix'
# epoch = 0
mx_sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)
# print(mx_sym, arg_params, aux_params)
image_size = (112, 112)
opt_level = 3
target = tvm.target.intel_graphics()

# import model into tvm from mxnet
shape_dict = {'data': (1, 3, *image_size)}
# load model
relay_func, relay_params = relay.frontend.from_mxnet(mx_sym, shape_dict,
                                                     arg_params=arg_params, aux_params=aux_params)
# print(relay_func, relay_params)

with relay.build_config(opt_level=opt_level):
    graph, lib, params = relay.build(relay_func, target, params=relay_params)

lib.export_library("./deploy_lib.so")
print('lib exported successfully')
with open("./deploy_graph.json", "w") as fo:
    fo.write(graph)  # graph returned by relay.build is already a JSON string
with open("./deploy_param.params", "wb") as fo:
    fo.write(relay.save_param_dict(params))
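For completeness (`graph_runtime` is imported above but never used): here is a hedged sketch of how the three exported artifacts could be loaded back and run. The file paths and the input name `'data'` come from the script above; everything else (a working TVM install with OpenCL, the `tvm.cl(0)` context) is an assumption on my side, not something from the original post.

```python
import numpy as np

def load_and_run(lib_path="./deploy_lib.so",
                 graph_path="./deploy_graph.json",
                 params_path="./deploy_param.params"):
    # tvm is imported lazily so this sketch can be read/loaded without it installed
    import tvm
    from tvm.contrib import graph_runtime

    ctx = tvm.cl(0)  # OpenCL context, matching the intel_graphics target above
    lib = tvm.runtime.load_module(lib_path)
    with open(graph_path) as f:
        graph = f.read()
    with open(params_path, "rb") as f:
        params_bytes = f.read()

    module = graph_runtime.create(graph, lib, ctx)
    module.load_params(params_bytes)

    # dummy input with the shape declared in shape_dict: (1, 3, 112, 112)
    data = np.random.uniform(size=(1, 3, 112, 112)).astype("float32")
    module.set_input("data", data)
    module.run()
    return module.get_output(0).asnumpy()
```

This mirrors the standard TVM 0.7-era deploy flow; once the build error itself is resolved, calling `load_and_run()` should return the model's first output as a NumPy array.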

Seems like something is calling code which is in nnvm?

Yes, it is from

mx.model.load_checkpoint()

But I don’t know why this API uses NNVM, since the API is part of the MXNet framework itself. Does that mean I cannot use Relay with MXNet models?