Cannot find function tvm.contrib.cblas.matmul in the imported modules or global registry

I tried to compile a resnet18_v2 model (originally from the GluonCV model zoo) with cblas (MKL), but got the following error when running it:

$ tvmc compile --target "llvm -libs=cblas" --output models/tvm/resnet18_v2_llvm.tar models/onnx/resnet18_v2.onnx
$ tvmc run --inputs /tmp/input.npz --output /tmp/output.npz models/tvm/resnet18_v2_llvm.tar
Traceback (most recent call last):                                                                   
  File "/home/lhy/.local/bin/tvmc", line 33, in <module>                                                                                                                                                   
    sys.exit(load_entry_point('tvm==0.8.dev625+g9e74f90c9', 'console_scripts', 'tvmc')())
  File "/home/lhy/Documents/Lib/tvm/python/tvm/driver/tvmc/main.py", line 94, in main
    sys.exit(_main(sys.argv[1:]))                                                                    
  File "/home/lhy/Documents/Lib/tvm/python/tvm/driver/tvmc/main.py", line 87, in _main
    return args.func(args)                   
  File "/home/lhy/Documents/Lib/tvm/python/tvm/driver/tvmc/runner.py", line 119, in drive_run
    profile=args.profile,                                                                            
  File "/home/lhy/Documents/Lib/tvm/python/tvm/driver/tvmc/runner.py", line 396, in run_module
    prof_result = timer()                                                                            
  File "/home/lhy/Documents/Lib/tvm/python/tvm/runtime/module.py", line 226, in evaluator
    blob = feval(*args)                                                                                                                                                                                    
  File "/home/lhy/Documents/Lib/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__                                                                                                         
    raise get_last_ffi_error()                                                                       
tvm._ffi.base.TVMError: Traceback (most recent call last):           
  [bt] (8) /home/lhy/Documents/Lib/tvm/build/libtvm.so(+0x166d0c7) [0x7f4b21d770c7]
  [bt] (7) /home/lhy/Documents/Lib/tvm/build/libtvm.so(tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x365) [0x7f4b21d7d9c5]
  [bt] (6) /home/lhy/Documents/Lib/tvm/build/libtvm.so(tvm::runtime::LocalSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)+0x67) [0x7f4b21d73d97]
  [bt] (5) /home/lhy/Documents/Lib/tvm/build/libtvm.so(+0x166b2fb) [0x7f4b21d752fb]
  [bt] (4) /home/lhy/Documents/Lib/tvm/build/libtvm.so(+0x166ae3c) [0x7f4b21d74e3c]                                                                                                                        
  [bt] (3) /home/lhy/Documents/Lib/tvm/build/libtvm.so(tvm::runtime::GraphRuntime::Run()+0x37) [0x7f4b21d873a7]
...
  File "/home/lhy/Documents/Lib/tvm/src/runtime/module.cc", line 115
  File "/home/lhy/Documents/Lib/tvm/src/runtime/library_module.cc", line 78
TVMError: 
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------

  Check failed: ret == 0 (-1 vs. 0) : TVMError: 
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
  Check failed: f != nullptr == false: Cannot find function tvm.contrib.cblas.matmul in the imported modules or global registry
terminate called after throwing an instance of 'dmlc::Error'
  what():  [10:15:11] /home/lhy/Documents/Lib/tvm/src/runtime/workspace_pool.cc:118: 
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------

  Check failed: allocated_.size() == 1 (2 vs. 1) : 
Stack trace:
  [bt] (0) /home/lhy/Documents/Lib/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f4b21012260]
  [bt] (1) /home/lhy/Documents/Lib/tvm/build/libtvm.so(tvm::runtime::WorkspacePool::~WorkspacePool()+0x199) [0x7f4b21d5ef49]
  [bt] (2) /lib/x86_64-linux-gnu/libc.so.6(__call_tls_dtors+0x3f) [0x7f4b86b1243f]
  [bt] (3) /lib/x86_64-linux-gnu/libc.so.6(+0x49b8d) [0x7f4b86b11b8d]
  [bt] (4) /lib/x86_64-linux-gnu/libc.so.6(on_exit+0) [0x7f4b86b11be0]
  [bt] (5) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfa) [0x7f4b86aef0ba]
  [bt] (6) /opt/miniconda3/bin/python(+0x1dc3c0) [0x55b6580393c0]
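
In case it helps with reproducing outside tvmc, the same compile through the Python API looks roughly like this (a sketch; relay.frontend.from_onnx can usually pick the input shape up from the ONNX graph, otherwise a shape dict has to be passed):

import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("models/onnx/resnet18_v2.onnx")
mod, params = relay.frontend.from_onnx(onnx_model)

# Same target string as the tvmc call above; -libs=cblas makes Relay offload
# dense to the tvm.contrib.cblas.* extern kernels, which are resolved at run time.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm -libs=cblas", params=params)

lib.export_library("models/tvm/resnet18_v2_llvm.tar")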

I compiled TVM from source with MKL support by setting USE_MKL=/path/to/mkl, USE_MKLDNN=ON, and USE_OPENMP=intel in config.cmake.
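
The relevant lines in my config.cmake look roughly like this (path shortened):

# config.cmake excerpt
set(USE_MKL /path/to/mkl)    # install prefix of the oneAPI MKL
set(USE_MKLDNN ON)
set(USE_OPENMP intel)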

BTW, if I change the target from llvm -libs=cblas to plain llvm, everything works fine.

Any help is appreciated.

Did you build TVM with cblas support?

Yes, I edited config.cmake and set USE_MKL=/path/to/mkl, USE_MKLDNN=ON, and USE_OPENMP=intel. Did I miss something when installing TVM?

Did you enable USE_BLAS? Also, you can check the cmake logs to see whether TVM is configured as you expected (built with cblas, for example).

I have tried two different settings for USE_BLAS. One is set(USE_BLAS mkl), which gives me a warning:

CMake Deprecation Warning at cmake/modules/contrib/BLAS.cmake:35 (message):                          
  USE_BLAS=mkl is deprecated.  Use USE_MKL=ON instead.

I also tried deleting this line, which leaves it at the default value none. This time no warning appeared during cmake .., but the test result stays the same.

The log during cmake .. shows:

CMake Deprecation Warning at cmake/modules/contrib/BLAS.cmake:35 (message):                          
  USE_BLAS=mkl is deprecated.  Use USE_MKL=ON instead.                                               
Call Stack (most recent call first):                                                                 
  CMakeLists.txt:339 (include)                                                                                                      
-- Use MKL library /opt/intel/oneapi/mkl/latest/lib/intel64/libmkl_rt.so                             
-- Use MKLDNN library /opt/intel/oneapi/dnnl/2021.2.0/cpu_gomp/lib/libdnnl.so

I guess that confirms MKL has been enabled successfully?

BTW, I found that the compiled TVM model can be loaded and run successfully by changing --target "llvm -libs=cblas" to --target "llvm -libs=mkl". Is this by design or a bug? That said, it doesn't give me any significant performance increase compared with --target llvm.
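
For reference, the compile command that runs successfully is:

$ tvmc compile --target "llvm -libs=mkl" --output models/tvm/resnet18_v2_llvm.tar models/onnx/resnet18_v2.onnx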

Ok, yeah, you need to specify mkl in the target instead of just cblas: since the build was configured with USE_MKL rather than a generic BLAS via USE_BLAS, I believe only the tvm.contrib.mkl.* kernels get registered, so -libs=cblas looks up tvm.contrib.cblas.matmul, which is exactly the function the runtime reports as missing. And for running ResNet on CPU, I assume cblas/mkl won't help a lot, because ResNet only has one dense layer, which doesn't dominate the runtime.
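
If you want to double-check, you can query the global registry to see which contrib matmul kernels your libtvm.so actually registered, e.g. (a quick sketch; the mkl function name is taken from TVM's MKL contrib module):

import tvm

# get_global_func returns a PackedFunc if the kernel was compiled into
# libtvm.so, or None when allow_missing=True and the name is not registered.
for name in ("tvm.contrib.cblas.matmul", "tvm.contrib.mkl.matmul"):
    func = tvm.get_global_func(name, allow_missing=True)
    print(name, "->", "registered" if func is not None else "missing")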

Do you mean MKL cannot be used for the conv layers here? If so, I'm wondering why MKL helps MXNet/PyTorch/TensorFlow quite a lot for most CNN models?