I get this error when I run the first tutorial.

It's this tutorial: https://tvm.apache.org/docs/tutorials/get_started/relay_quick_start.html#sphx-glr-tutorials-get-started-relay-quick-start-py

When I run the compilation step, I get these errors:

WARNING:root:Failed to download tophub package for cuda: <urlopen error [Errno 111] Connection refused>
download failed due to URLError(ConnectionRefusedError(111, 'Connection refused')), retrying, 2 attempts left
download failed due to URLError(ConnectionRefusedError(111, 'Connection refused')), retrying, 1 attempt left
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (64, 3, 7, 7), 'float32'), (2, 2), (3, 3, 3, 3), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (64, 64, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (64, 64, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (128, 64, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (128, 128, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (128, 64, 1, 1), 'float32'), (2, 2), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (256, 128, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 256, 14, 14), 'float32'), ('TENSOR', (256, 256, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (256, 128, 1, 1), 'float32'), (2, 2), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 256, 14, 14), 'float32'), ('TENSOR', (512, 256, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 512, 7, 7), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 256, 14, 14), 'float32'), ('TENSOR', (512, 256, 1, 1), 'float32'), (2, 2), (0, 0, 0, 0), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
download failed due to URLError(ConnectionRefusedError(111, 'Connection refused')), retrying, 2 attempts left
download failed due to URLError(ConnectionRefusedError(111, 'Connection refused')), retrying, 1 attempt left

TVMError                                  Traceback (most recent call last)
in ()
      2 target = tvm.target.cuda()
      3 with tvm.transform.PassContext(opt_level=opt_level):
----> 4     graph, lib, params = relay.build(mod, target, params=params)

~/tvm/python/tvm/relay/build_module.py in build(mod, target, target_host, params, mod_name)
    253     with tophub_context:
    254         bld_mod = BuildModule()
--> 255         graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
    256     mod = _graph_runtime_factory.GraphRuntimeFactoryModule(graph_json, mod, mod_name, params)
    257     return mod

~/tvm/python/tvm/relay/build_module.py in build(self, mod, target, target_host, params)
    119         self._set_params(params)
    120         # Build the IR module
--> 121         self._build(mod, target, target_host)
    122         # Get artifacts
    123         graph_json = self.get_json()

~/tvm/python/tvm/_ffi/_ctypes/packed_func.py in __call__(self, *args)
    223                 self.handle, values, tcodes, ctypes.c_int(num_args),
    224                 ctypes.byref(ret_val), ctypes.byref(ret_tcode)) != 0:
--> 225             raise get_last_ffi_error()
    226         _ = temp_args
    227         _ = args

TVMError: Traceback (most recent call last):
  [bt] (8) /home/ljs/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::GraphAddCallNode(tvm::relay::CallNode const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xf0) [0x7fbc5c431a40]
  [bt] (7) /home/ljs/tvm/build/libtvm.so(tvm::relay::backend::MemoizedExprTranslator<std::vector<tvm::relay::backend::GraphNodeRef, std::allocator<tvm::relay::backend::GraphNodeRef> > >::VisitExpr(tvm::RelayExpr const&)+0x193) [0x7fbc5c4392d3]
  [bt] (6) /home/ljs/tvm/build/libtvm.so(tvm::relay::ExprFunctor<std::vector<tvm::relay::backend::GraphNodeRef, std::allocator<tvm::relay::backend::GraphNodeRef> > (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<std::vector<tvm::relay::backend::GraphNodeRef, std::allocator<tvm::relay::backend::GraphNodeRef> > (tvm::RelayExpr const&)>*)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<std::vector<tvm::relay::backend::GraphNodeRef, std::allocator<tvm::relay::backend::GraphNodeRef> > (tvm::RelayExpr const&)>*)+0x27) [0x7fbc5c4260f7]
  [bt] (5) /home/ljs/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::CallNode const*)+0xe80) [0x7fbc5c4362a0]
  [bt] (4) /home/ljs/tvm/build/libtvm.so(+0x15b3455) [0x7fbc5c40a455]
  [bt] (3) /home/ljs/tvm/build/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&)+0x8e6) [0x7fbc5c415946]
  [bt] (2) /home/ljs/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function const&)+0xa44) [0x7fbc5c412754]
  [bt] (1) /home/ljs/tvm/build/libtvm.so(tvm::relay::OpImplementation::Schedule(tvm::Attrs const&, tvm::runtime::Array<tvm::te::Tensor, void> const&, tvm::Target const&)+0xb1) [0x7fbc5c4cdbe1]
  [bt] (0) /home/ljs/tvm/build/libtvm.so(+0x1744a3b) [0x7fbc5c59ba3b]
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/home/ljs/anaconda3/lib/python3.7/http/client.py", line 1229, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/ljs/anaconda3/lib/python3.7/http/client.py", line 1275, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/ljs/anaconda3/lib/python3.7/http/client.py", line 1224, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/ljs/anaconda3/lib/python3.7/http/client.py", line 1016, in _send_output
    self.send(msg)
  File "/home/ljs/anaconda3/lib/python3.7/http/client.py", line 956, in send
    self.connect()
  File "/home/ljs/anaconda3/lib/python3.7/http/client.py", line 1384, in connect
    super().connect()
  File "/home/ljs/anaconda3/lib/python3.7/http/client.py", line 928, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/home/ljs/anaconda3/lib/python3.7/socket.py", line 727, in create_connection
    raise err
  File "/home/ljs/anaconda3/lib/python3.7/socket.py", line 716, in create_connection
    sock.connect(sa)
  File "/home/ljs/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 78, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/ljs/tvm/python/tvm/relay/op/strategy/generic.py", line 33, in wrapper
    return topi_schedule(outs)
  File "/home/ljs/tvm/python/tvm/autotvm/task/topi_integration.py", line 223, in wrapper
    return topi_schedule(cfg, outs, *args, **kwargs)
  File "/home/ljs/tvm/python/tvm/topi/cuda/conv2d.py", line 47, in schedule_conv2d_nchw
    traverse_inline(s, outs[0].op, _callback)
  File "/home/ljs/tvm/python/tvm/topi/util.py", line 64, in traverse_inline
    _traverse(final_op)
  File "/home/ljs/tvm/python/tvm/topi/util.py", line 61, in _traverse
    _traverse(tensor.op)
  File "/home/ljs/tvm/python/tvm/topi/util.py", line 61, in _traverse
    _traverse(tensor.op)
  File "/home/ljs/tvm/python/tvm/topi/util.py", line 61, in _traverse
    _traverse(tensor.op)
  File "/home/ljs/tvm/python/tvm/topi/util.py", line 62, in _traverse
    callback(op)
  File "/home/ljs/tvm/python/tvm/topi/cuda/conv2d.py", line 45, in _callback
    schedule_direct_cuda(cfg, s, op.output(0))
  File "/home/ljs/tvm/python/tvm/topi/cuda/conv2d_direct.py", line 47, in schedule_direct_cuda
    target.kind.name, target.model, 'conv2d_nchw.cuda')
  File "/home/ljs/tvm/python/tvm/autotvm/tophub.py", line 222, in load_reference_log
    download_package(tophub_location, package_name)
  File "/home/ljs/tvm/python/tvm/autotvm/tophub.py", line 187, in download_package
    download(download_url, os.path.join(rootpath, package_name), True, verbose=0)
  File "/home/ljs/tvm/python/tvm/contrib/download.py", line 111, in download
    raise err
  File "/home/ljs/tvm/python/tvm/contrib/download.py", line 97, in download
    urllib2.urlretrieve(url, tempfile, reporthook=_download_progress)
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 1360, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/home/ljs/anaconda3/lib/python3.7/urllib/request.py", line 1319, in do_open
    raise URLError(err)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

urllib.error.URLError: <urlopen error [Errno 111] Connection refused>

Is it a network issue? Could you check your network connection?


Thank you for replying. It was a network issue, and I fixed it by downloading tophub from https://github.com/uwsampl/tophub


I ran into this problem too and fixed it by putting the 'tophub' files into '~/.tvm/tophub'.

Thanks for the tips.

However, I ran into another warning and don't know whether it's important:

Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.

Hi! I just ran into this error too. Where exactly should I put the 'tophub' folder?

I had the same error and fixed it by cloning https://github.com/uwsampl/tophub into '~/.tvm', replacing the original 'tophub' folder directly.
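In case it helps, here is a minimal Python sketch of that fix. It assumes you have already cloned https://github.com/uwsampl/tophub on a machine with network access; the source path and the assumption that the packages are '.log' files are mine, so adjust them to match what you actually see in the clone.

```python
# Sketch only: copy locally cloned TopHub packages into the directory
# TVM searches by default (~/.tvm/tophub). "./tophub" is a placeholder
# for wherever the log files sit inside your clone of uwsampl/tophub.
import os
import shutil

src = "./tophub"                            # local clone (adjust path)
dst = os.path.expanduser("~/.tvm/tophub")   # default TopHub cache directory

os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    if name.endswith(".log"):               # TopHub packages are .log files
        shutil.copy(os.path.join(src, name), os.path.join(dst, name))
        print("copied", name, "->", dst)
```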

Hello, have you solved this problem?

Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.

This is not an error as such, it's only a warning. You will have to autotune your model according to your configuration and requirements. Without autotuning, a default config is used, which may not give you the best performance.
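For reference, a minimal sketch of the usual AutoTVM flow (following the pattern of the TVM tuning tutorials, not a script from this thread): extract the tunable tasks, tune each one, then rebuild with the resulting log applied. `mod`, `params`, and `target` are assumed to come from the quick-start script, and the trial count here is deliberately tiny.

```python
# Sketch only: assumes `mod`, `params`, `target` from the quick-start tutorial.
# n_trial is kept tiny for illustration; real tuning needs far more trials.
import tvm
from tvm import autotvm, relay

# Extract the tunable tasks (conv2d, dense, ...) from the Relay module.
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(timeout=10),
    runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=100),
)

for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=min(32, len(task.config_space)),
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("tuning.log")],
    )

# Rebuild with the tuned configs applied; the "Cannot find config" warning
# disappears for every workload that now has an entry in the log.
# (On recent TVM, relay.build returns a single factory module; older
# tutorials unpack a 3-tuple instead.)
with autotvm.apply_history_best("tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target, params=params)
```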

Hello! I tuned a CNN model trained with TensorFlow, and when I tested its performance, I found that the overall TVM inference time is far higher than TensorFlow's!

Extract tasks…
Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (2500, 512), 'float32'), ('TENSOR', (6600, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.

How can I fix this missing-configuration case?

Hi, putting 'tophub' into '~/.tvm/tophub' works fine in my case. Did you fix it in the end?

No…

But I found this code in 'python/tvm/relay/backend/compile_engine.py', and I guess it means that such a shape has not been tuned before, or was not saved as a 'workload'. So it won't cause any accuracy error; the op just runs a bit slower than a tuned one. :slight_smile:

To confirm that, you could tune that shape or run the tutorial again; the warning should then be gone.
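If you want to check whether a particular shape already has a tuned entry, one way is to list the workloads recorded in a tuning log. This is only a sketch: "tuning.log" is a placeholder, and a TopHub file under ~/.tvm/tophub (whose exact name depends on your TVM version) can be inspected the same way.

```python
# Sketch: print the distinct workloads recorded in a tuning log, so you can
# see whether e.g. the (2500, 512) x (6600, 512) dense shape is covered.
from tvm.autotvm.record import load_from_file

seen = set()
for inp, _res in load_from_file("tuning.log"):   # placeholder file name
    key = (inp.task.name, inp.task.args)
    if key not in seen:
        seen.add(key)
        print(key)
```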

I ran into the same problem too. Thanks for your tips. :smiley: