Autotvm compilation Error with Adreno 630 (Galaxy S9)

I’m trying to autotune on an Adreno 630 (Galaxy S9). Although there was a similar question (Autotune error with Qualcomm Adreno), my error is different and the solution given there doesn’t work. The following is one debug message out of the multiple trials.

I’ve followed Deploy the Pretrained Model on Android — tvm 0.8.dev0 documentation and Auto-tuning a Convolutional Network for Mobile GPU — tvm 0.8.dev0 documentation. So far, I’ve successfully run a TVM-built model and an autotuned TVM-built model on the Galaxy S9 CPU, and a TVM-built model on the Galaxy S9 Adreno GPU.

My target and host settings are:

target = tvm.target.Target("opencl")
target_host = "llvm -mtriple=arm64-linux-android"

Thanks!

Hi @thkim, can you try with the “opencl --device=mali” target and see if you are able to autotune? The issue might be related to using the cuda winograd schedule.

Thank you for your reply @csullivan. Unfortunately, I was not able to autotune with “opencl --device=mali” either. I had left out the -device option because I’m using an Adreno GPU, not a Mali one. For reference, I’ve tried all of the following target settings:

target = "opencl"
target = "opencl -device=mali"
target = "opencl --device=mali"
target = tvm.target.Target("opencl")
target = tvm.target.Target("opencl -device=mali")
target = tvm.target.Target("opencl --device=mali")

Hi @thkim, I’ve used

target = "opencl --device=mali"
target_host = "llvm -mtriple=arm64-linux-android"

successfully for autotuning on an Adreno 650. What errors do you see when you use this configuration?
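For context, the rest of a measurement setup with those targets would look roughly like the Mobile GPU tuning tutorial. This is a configuration sketch, not a full script: the “android” device key and the tracker host/port are placeholders from one possible setup, and the exact API may differ between TVM versions.

```python
from tvm import autotvm

target = "opencl --device=mali"
target_host = "llvm -mtriple=arm64-linux-android"

# Build host-side code with the Android NDK ("ndk" build_func) and run
# the measurements on the phone through the RPC tracker.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(build_func="ndk", timeout=10),
    runner=autotvm.RPCRunner(
        "android",          # device key registered with the RPC tracker
        host="127.0.0.1",   # tracker address (placeholder)
        port=9190,          # tracker port (placeholder)
        number=10,
        timeout=5,
    ),
)
```

The phone must be connected to the tracker under the same device key before tuning starts, otherwise every trial times out.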

Related: We will soon upstream support for an Adreno-specific target. Please check out [RFC] Texture memory support if you are interested.

Hi @csullivan, I’ve used the same target and target_host. Every trial reports 0.00/0.00 GFLOPS.
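0.00 GFLOPS on every trial usually means every measurement failed, and the failure reason for each trial is recorded in the tuning log. Here is a small stdlib-only sketch for counting error codes in an autotvm JSON log; the code-to-name mapping is copied from what I believe tvm.autotvm.measure.MeasureErrorNo defines (worth double-checking against your TVM version), and the sample record is made up.

```python
import json
from collections import Counter

# Map autotvm's MeasureErrorNo codes to names (believed to match
# tvm.autotvm.measure.MeasureErrorNo; listed inline so this script
# runs without TVM installed).
ERROR_NAMES = {
    0: "NO_ERROR", 1: "INSTANTIATION_ERROR", 2: "COMPILE_HOST",
    3: "COMPILE_DEVICE", 4: "RUNTIME_DEVICE", 5: "WRONG_ANSWER",
    6: "BUILD_TIMEOUT", 7: "RUN_TIMEOUT",
}

def summarize(log_lines):
    """Count trials per error code in an autotvm JSON log."""
    counts = Counter()
    for line in log_lines:
        record = json.loads(line)
        # result = [costs, error_no, all_cost, timestamp]
        error_no = record["result"][1]
        counts[ERROR_NAMES.get(error_no, str(error_no))] += 1
    return dict(counts)

# A made-up record in the autotvm log layout; error_no 2 means host
# compilation failed, which would explain 0.00 GFLOPS on every trial.
sample = json.dumps({
    "input": ["opencl --device=mali", "conv2d_nchw.mali", [], {}],
    "config": {"index": 0, "entity": []},
    "result": [[1e9], 2, 0.05, 1620000000.0],
    "version": 0.2,
})
print(summarize([sample]))  # → {'COMPILE_HOST': 1}
```

If most trials report a COMPILE_HOST or COMPILE_DEVICE code, the problem is in compilation rather than on the device.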

I’ve been running the tuning from my desktop, which doesn’t have a GPU. I suspect that causes problems in the autotuning process. Is that correct? Currently, I’m trying to use my laptop, which has a GeForce RTX 2060, but there is an issue with using the GPU driver inside the TVM Docker container. If you’ve set up an environment that can use the GPU driver in the TVM Docker, could you give me some tips?

Thank you so much for your reply!

You don’t need to reply to my additional questions. I’ve set up the environment to use the GPU driver in the TVM Docker container. However, I’m still facing the same issue.

Hi @csullivan, I’m back with more details. From the log file, the compilation error messages look as follows:

It seems that the Android toolchain file has problems. I followed this link (tvm/apps/android_rpc at main · apache/tvm · GitHub) to generate a standalone toolchain, but it targets the CPU, not the GPU. Judging from the error messages, I think the problem comes from the standalone toolchain file. Which standalone toolchain did you use?
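One thing worth noting: for an OpenCL target, the standalone toolchain only ever compiles the host (CPU) side of the module; the OpenCL kernels are compiled on the phone by the Adreno driver at runtime, so a CPU-targeting toolchain is expected. What matters is that the cross-compiler TVM picks up (via the TVM_NDK_CC environment variable read by tvm.contrib.ndk, which backs the “ndk” build_func) is an arm64 Android clang. A small sanity-check sketch, where the NDK path and the heuristic are illustrative, not authoritative:

```python
import os

def looks_like_arm64_ndk_clang(cc_path):
    """Heuristic: the host-side cross-compiler should be an aarch64
    Android clang from the NDK, not a plain x86 host compiler."""
    return "clang" in cc_path and "aarch64" in cc_path

# Illustrative path; adjust the NDK location and API level to your install.
cc = ("/opt/android-ndk/toolchains/llvm/prebuilt/"
      "linux-x86_64/bin/aarch64-linux-android28-clang++")

if looks_like_arm64_ndk_clang(cc):
    # tvm.contrib.ndk.create_shared reads this environment variable
    # when building the host-side shared library during tuning.
    os.environ["TVM_NDK_CC"] = cc

print(looks_like_arm64_ndk_clang(cc))  # → True
```

If TVM_NDK_CC points at a host compiler (e.g. /usr/bin/gcc) instead, the host-side build will fail for every trial.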

Thanks!

Taeho