The online demo uses a pretrained torch model. I modified the target and ran the demo. The demo seems to download a torch model, convert it to ONNX, and then to a TVM module using the specified target. Hence I expect that the model should already be tuned to run on the TX2. Or am I wrong somewhere?
Apart from this, for target = ‘cuda’, there seems to be only one warning of this kind:
WARNING:autotvm:Cannot find config for target=cuda, workload=(‘dense’, (1, 1280, ‘float32’), (27, 1280, ‘float32’), 0, ‘float32’). A fallback configuration is used, which may bring great performance regression.
This is the code that converts the torch model to TVM using Relay: