Cannot tune via AutoTVM on Jetson Nano (Check failed: bf != nullptr == false: target.build.cuda is not enabled) when built with CUDA

I’m following this TVM example to tune a model on a 2 GB Jetson Nano with AutoTVM.
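For reference, the relevant parts of my script follow the tutorial defaults (a sketch, not verbatim; the log file name, tracker host/port, and device key below are placeholders for my actual setup):

import tvm
from tvm import autotvm

# Compile for the CUDA GPU on the Nano.
target = tvm.target.cuda()

tuning_option = {
    "log_filename": "jetbot.log",
    "tuner": "xgb",
    "n_trial": 2000,
    "early_stopping": 600,
    "measure_option": autotvm.measure_option(
        builder=autotvm.LocalBuilder(timeout=10),
        # Measurements run on the Nano through the RPC tracker.
        runner=autotvm.RPCRunner(
            "nano", "0.0.0.0", 9190,
            number=20, repeat=3, timeout=4, min_repeat_ms=150,
        ),
    ),
}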

However, I’m getting warnings like this, which as I understand it mean no tuned schedule was found in the log for a given workload, so a default one is used:

Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (256, 128, 1, 1), 'float32'), (2, 2), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

Once “tuning” has finished, the compile step fails with this error:

A fallback configuration is used, which may bring great performance regression.
Traceback (most recent call last):
  File "jetbot.py", line 278, in <module>
    tune_and_evaluate(tuning_option)
  File "jetbot.py", line 257, in tune_and_evaluate
    lib = relay.build_module.build(mod, target=target, params=params)
  File "/home/devtop/tvm/python/tvm/relay/build_module.py", line 283, in build
    graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
  File "/home/devtop/tvm/python/tvm/relay/build_module.py", line 132, in build
    self._build(mod, target, target_host)
  File "/home/devtop/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (6) /home/devtop/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7f2a21a11643]
  [bt] (5) /home/devtop/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x141) [0x7f2a21832ec1]
  [bt] (4) /home/devtop/tvm/build/libtvm.so(tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&)+0x1ef7) [0x7f2a21832057]
  [bt] (3) /home/devtop/tvm/build/libtvm.so(tvm::build(tvm::runtime::Map<tvm::runtime::String, tvm::IRModule, void, void> const&, tvm::Target const&)+0x70e) [0x7f2a20d85d4e]
  [bt] (2) /home/devtop/tvm/build/libtvm.so(tvm::build(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target const&)+0x43c) [0x7f2a20d84d4c]
  [bt] (1) /home/devtop/tvm/build/libtvm.so(tvm::codegen::Build(tvm::IRModule, tvm::Target)+0x792) [0x7f2a212df042]
  [bt] (0) /home/devtop/tvm/build/libtvm.so(+0xe24858) [0x7f2a212de858]
  File "/home/devtop/tvm/src/target/codegen.cc", line 58
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
  Check failed: bf != nullptr == false: target.build.cuda is not enabled

For context, I built TVM with CUDA enabled. To confirm, this is my output on the Jetson Nano:

>>> import tvm
>>> tvm.gpu().exist
True
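As far as I understand, tvm.gpu().exist only shows that the runtime can see the CUDA device. A stricter check for the compiler side would be something like this (a sketch, assuming a TVM build from around this version; both expressions should evaluate to True on a fully CUDA-enabled build):

>>> import tvm
>>> tvm.runtime.enabled("cuda")  # was the runtime compiled with CUDA support?
>>> # The failing check looks up this packed function; None means the CUDA
>>> # code generator was not compiled into libtvm.so.
>>> tvm.get_global_func("target.build.cuda", allow_missing=True) is not None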

dev@device:~/tvm/build$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_21:14:42_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89

Any ideas on what might be the issue?


I ran into the same problem and solved it by installing the CUDA-enabled TVM package from https://tlcpack.ai/.
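For anyone who finds this later: the exact wheel name depends on your CUDA version, so check https://tlcpack.ai/ for the current instructions. For CUDA 10.2 it was something along these lines (the package name here is my assumption, not verified):

pip3 install tlcpack-cu102 -f https://tlcpack.ai/wheels

Alternatively, a source build works as long as config.cmake in the build directory has set(USE_CUDA ON) before running cmake and make; the missing target.build.cuda function is exactly what that flag compiles in.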