How to disable autoTVM

Hi, I upgraded the TVM source code to the latest version. When I run "relay_quick_start.py", the following errors are printed:

//-------------------------------------------------------------------------------------------

download failed due to URLError(ConnectionRefusedError(111, 'Connection refused'),), retrying, 2 attempts left

download failed due to URLError(ConnectionRefusedError(111, 'Connection refused'),), retrying, 1 attempt left

WARNING:root:Failed to download tophub package for cuda: <urlopen error [Errno 111] Connection refused>

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (64, 3, 7, 7), 'float32'), (2, 2), (3, 3, 3, 3), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (64, 64, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (64, 64, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (128, 64, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (128, 128, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (128, 64, 1, 1), 'float32'), (2, 2), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (256, 128, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 256, 14, 14), 'float32'), ('TENSOR', (256, 256, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (256, 128, 1, 1), 'float32'), (2, 2), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 256, 14, 14), 'float32'), ('TENSOR', (512, 256, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 512, 7, 7), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 256, 14, 14), 'float32'), ('TENSOR', (512, 256, 1, 1), 'float32'), (2, 2), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.

Traceback (most recent call last):

File "/media/cvg/DATA/tvm/tutorials/relay_quick_start.py", line 100, in <module>
    graph, lib, params = relay.build(mod, target, params=params)

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/build_module.py", line 251, in build
    graph_json, mod, params = bld_mod.build(mod, target, target_host, params)

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/build_module.py", line 120, in build
    self._build(mod, target, target_host)

File "tvm/_ffi/_cython/./packed_func.pxi", line 321, in core.PackedFuncBase.__call__

File "tvm/_ffi/_cython/./packed_func.pxi", line 256, in core.FuncCall

File "tvm/_ffi/_cython/./packed_func.pxi", line 245, in core.FuncCall3

File "tvm/_ffi/_cython/./base.pxi", line 160, in core.CALL

tvm._ffi.base.TVMError: Traceback (most recent call last):

[bt] (8) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)+0xa9) [0x7f8df37db339]

[bt] (7) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x82) [0x7f8df37db102]

[bt] (6) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>)+0x27) [0x7f8df37ce0b7]

[bt] (5) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x14f) [0x7f8df37d373f]

[bt] (4) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)+0xa9) [0x7f8df37db339]

[bt] (3) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x82) [0x7f8df37db102]

[bt] (2) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>)+0x27) [0x7f8df37ce0b7]

[bt] (1) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x694) [0x7f8df37d3c84]

[bt] (0) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x160928b) [0x7f8df396228b]

File "tvm/_ffi/_cython/./packed_func.pxi", line 55, in core.tvm_callback

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/backend/compile_engine.py", line 263, in lower_call
    op, call.attrs, inputs, ret_type, target)

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/backend/compile_engine.py", line 182, in select_implementation
    all_impls = get_valid_implementations(op, attrs, inputs, out_type, target)

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/backend/compile_engine.py", line 123, in get_valid_implementations
    strategy = fstrategy(attrs, inputs, out_type, target)

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/target/generic_func.py", line 45, in __call__
    return _ffi_api.GenericFuncCallFunc(self, *args)

File "tvm/_ffi/_cython/./packed_func.pxi", line 321, in core.PackedFuncBase.__call__

File "tvm/_ffi/_cython/./packed_func.pxi", line 266, in core.FuncCall

File "tvm/_ffi/_cython/./base.pxi", line 160, in core.CALL

[bt] (3) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x61) [0x7f8df3965ba1]

[bt] (2) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x102b047) [0x7f8df3384047]

[bt] (1) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::GenericFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1b8) [0x7f8df3383d98]

[bt] (0) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x160928b) [0x7f8df396228b]

File "tvm/_ffi/_cython/./packed_func.pxi", line 55, in core.tvm_callback

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/op/strategy/cuda.py", line 462, in dense_strategy_cuda
    if nvcc.have_tensorcore(tvm.gpu(0).compute_version):

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/_ffi/runtime_ctypes.py", line 233, in compute_version
    self.device_type, self.device_id, 4)

File "/home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/_ffi/runtime_ctypes.py", line 195, in _GetDeviceAttr
    device_type, device_id, attr_id)

File "tvm/_ffi/_cython/./packed_func.pxi", line 321, in core.PackedFuncBase.__call__

File "tvm/_ffi/_cython/./packed_func.pxi", line 256, in core.FuncCall

File "tvm/_ffi/_cython/./packed_func.pxi", line 245, in core.FuncCall3

File "tvm/_ffi/_cython/./base.pxi", line 160, in core.CALL

[bt] (4) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x61) [0x7f8df3965ba1]

[bt] (3) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x160af4d) [0x7f8df3963f4d]

[bt] (2) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::runtime::DeviceAPIManager::GetAPI(int, bool)+0x15c) [0x7f8df39680dc]

[bt] (1) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::runtime::DeviceAPIManager::GetAPI(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool)+0x2e6) [0x7f8df3967e16]

[bt] (0) /home/cvg/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x16093c2) [0x7f8df39623c2]

File "/media/cvg/DATA/tvm/src/runtime/c_runtime_api.cc", line 131

TVMError: Check failed: allow_missing: Device API gpu is not enabled.

//---------------------------------------------------------------------------------------------

I found that running relay.build automatically calls autoTVM. If I want to disable autoTVM, what should I do?
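
From reading build_module.py, it looks like relay.build only reaches out to TopHub when the active autotvm dispatch context is the default fallback one. A minimal check of that (my own snippet, based on the 0.7-dev API, not part of the tutorial):

    from tvm import autotvm

    # With no tuning context entered, the active dispatch context is the root
    # FallbackContext, which is exactly the case in which relay.build tries to
    # download the pre-tuned TopHub logs.
    print(type(autotvm.DispatchContext.current))
    print(isinstance(autotvm.DispatchContext.current, autotvm.FallbackContext))  # expected: True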

I had to disable the autotvm download for another reason, but I couldn't find a proper way to do it. Instead, I modified the tophub.py file (under tvm/python/tvm/autotvm) to skip the download.

I replaced the beginning of check_backend with:

    def check_backend(tophub_location, backend):
        backend = _alias(backend)
        assert backend in PACKAGE_VERSION, 'Cannot find backend "%s" in TopHub' % backend
        version = PACKAGE_VERSION[backend]
        package_name = "%s_%s.log" % (backend, version)
        # use the package only if it is already cached locally
        if os.path.isfile(os.path.join(AUTOTVM_TOPHUB_ROOT_PATH, package_name)):
            return True
        # instead of downloading, just warn and report the package as missing
        logging.warning("Not downloading tophub package for %s", backend)
        return False

In load_reference_log, I changed `tophub_location = _get_tophub_location()` on line 220 to `tophub_location = AUTOTVM_TOPHUB_NONE_LOC`.
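
That is, the patched line looks like this (surrounding code elided; the exact line number may differ in your checkout):

    # inside load_reference_log, around line 220 of tophub.py:
    # tophub_location = _get_tophub_location()    # original
    tophub_location = AUTOTVM_TOPHUB_NONE_LOC     # modified: never query the remote TopHub location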

I forgot to mention that you cannot completely disable autotvm, since it is part of schedule selection. However, you can force it to use fallback configurations, which work in almost all situations, by not giving autotvm access to any tuning logs.
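
If you would rather not patch tophub.py, the following sketch seems to achieve the same effect from user code. It relies on my understanding of the 0.7-dev API (relay.build only fetches TopHub when the current dispatch context is the default FallbackContext, and apply_history_best tolerates an empty record set), so treat it as an untested suggestion rather than an official switch:

    from tvm import relay, autotvm
    from tvm.relay import testing

    # Same ResNet-18 workload as relay_quick_start.py
    mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
    target = "llvm"  # "cuda" behaves the same way, provided your build has the GPU runtime enabled

    # Entering an (empty) ApplyHistoryBest context means the active dispatch
    # context is no longer the default FallbackContext, so relay.build skips
    # the TopHub download. Every untuned workload then falls back to the
    # fallback configuration, with the usual "Cannot find config" warnings.
    with autotvm.apply_history_best(None):
        graph, lib, params = relay.build(mod, target, params=params)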