Error when running relay_quick_start.py with an Nvidia GPU

I hit an error when I ran relay_quick_start.py with an Nvidia GPU. It is shown below (no traceback information). It could be an AutoTVM or backend issue. I have followed two topics in the forum: 1) "I have this error when I run the first tutorials"; 2) "How to get rid of -target is deprecated?". But I am still blocked, so I would appreciate any suggestions. Thanks a lot.

Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
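For context, the workload in the message looks like the final 512-to-1000 dense (classifier) layer of the ResNet-18 model that the tutorial builds. A minimal sketch of what relay_quick_start.py does for the CUDA target is below (exact API names may differ slightly between TVM versions, e.g. older releases use relay.build_config instead of PassContext):

```python
import tvm
from tvm import relay
from tvm.relay import testing

# ResNet-18 workload, as in relay_quick_start.py; its final 1000-class
# classifier is the (1, 512) x (1000, 512) dense op named in the warning
mod, params = testing.resnet.get_workload(
    num_layers=18, batch_size=1, image_shape=(3, 224, 224)
)

target = tvm.target.cuda()
with tvm.transform.PassContext(opt_level=3):
    # During this build AutoTVM looks up a schedule for each op; the
    # warning above is printed when no pre-tuned CUDA config is found
    lib = relay.build(mod, target, params=params)
```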

This should be a WARNING, not an ERROR. From my understanding, by default AutoTVM looks up whether this op has already been tuned.

Sorry for the vague description. This warning confuses me because I can't tell whether the GPU config finally takes effect or not. How can I identify the issue?

I think this warning is just saying there is no tuned config for the "dense_small_batch.cuda" operator, so TVM chooses a default schedule for it, and in the end it still works. Just my guess, I hope I'm not misleading you. :slight_smile:
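If you want to see which config actually gets applied, one common trick from the AutoTVM tutorials is to raise the autotvm logger's verbosity before building; a small sketch using standard Python logging:

```python
import logging
import sys

# Make AutoTVM print its schedule/config decisions (including fallbacks)
logging.getLogger("autotvm").setLevel(logging.DEBUG)
logging.getLogger("autotvm").addHandler(logging.StreamHandler(sys.stdout))
```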

Thank you for the reply! :coffee: So, how do I get a correct config? I tried to download 'tophub' and put the folder at 'tvm/python/tvm/tophub'. It didn't work, and the warning seems to have nothing to do with tophub.
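As far as I know, relay.build loads the pre-tuned tophub packages from ~/.tvm/tophub (downloaded automatically on first use), not from tvm/python/tvm/tophub, and applies them roughly as in the sketch below (assuming mod, params, and target from the quick-start sketch above; details vary between TVM versions):

```python
import tvm
from tvm import relay, autotvm

# relay.build internally wraps compilation in a tophub context for the
# target, which applies any pre-tuned configs found under ~/.tvm/tophub;
# the warning means this particular dense workload has no entry there
with autotvm.tophub.context(target):
    lib = relay.build(mod, target, params=params)
```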

I don't think this op has been tuned in tophub. To eliminate this WARNING, you'd better tune the op yourself; otherwise, well, just take it. :sweat_smile:
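If you do want the warning gone, a minimal tuning sketch based on the standard AutoTVM tutorials looks roughly like this (assuming mod, params, and target from the quick-start sketch above; the log file name and trial count are just illustrative):

```python
import tvm
from tvm import relay, autotvm
from tvm.autotvm.tuner import XGBTuner

log_file = "resnet18_cuda.log"  # example name, pick your own

# Extract the tunable tasks (conv2d, dense, ...) from the Relay module
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params=params
)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(timeout=10),
    runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=100),
)

for task in tasks:
    tuner = XGBTuner(task)
    tuner.tune(
        n_trial=min(200, len(task.config_space)),
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file(log_file)],
    )

# Rebuild with the tuned configs applied; the fallback warning for the
# tuned ops should no longer appear
with autotvm.apply_history_best(log_file):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target, params=params)
```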

OK, I’ll take it :joy: