I am trying to run an online demo of the gesture recognition model on the NVIDIA Jetson TX2 and the model uses TVM to autotune for the underlying ARM CPU.
I get the following warning multiple times:
…
WARNING:autotvm:Cannot find config for target=llvm -target=aarch64-linux-gnu, workload=('conv2d', (1, 3, 224, 224, 'float32'), (32, 3, 3, 3, 'float32'), (2, 2), (1, 1), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
…
I also noticed that performance is degraded, as the warning suggests.
Some info about my setup:
The model uses MobileNetV2 as its backbone, and I am running it on the CPU only.
I tried modifying the -target argument after finding the target triple by running "gcc -v" on the device.
Also, I do not use RPC from a host; I am running the demo directly on the device.
The online demo uses a pretrained torch model. I modified the target and ran the demo. It seems to download the torch model, convert it to ONNX, and then to a TVM module using the specified target. Hence I expect the model should already be tuned to run on the TX2. Or am I wrong somewhere?
Apart from this, for target = 'cuda' there seems to be only one warning of this kind:
WARNING:autotvm:Cannot find config for target=cuda, workload=('dense', (1, 1280, 'float32'), (27, 1280, 'float32'), 0, 'float32'). A fallback configuration is used, which may bring great performance regression.
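As an aside, the untuned workloads can be collected from output like the above with a small helper (a hypothetical script I wrote for illustration, not part of the demo):

```python
import re

def untuned_workloads(log_text):
    """Extract (target, workload) pairs from autotvm fallback warnings."""
    pattern = re.compile(
        r"Cannot find config for target=(?P<target>.+?), "
        r"workload=(?P<workload>\(.*?\))\. A fallback"
    )
    return [(m.group("target"), m.group("workload"))
            for m in pattern.finditer(log_text)]

# Example: feed it one of the warning lines from the demo's output
log = ("WARNING:autotvm:Cannot find config for target=cuda, "
       "workload=('dense', (1, 1280, 'float32'), (27, 1280, 'float32'), "
       "0, 'float32'). A fallback configuration is used, which may bring "
       "great performance regression.")
print(untuned_workloads(log))
```

This makes it easy to see exactly which workloads would need tuning entries.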
This is the code that converts the torch model to a TVM module using Relay:
So it's not doing auto-tuning. If you didn't tune the model yourself, TVM tries to use the pre-tuned logs it ships with, but if a workload in your model doesn't appear in those logs, you will see the WARNING.