Build Apache TVM runtime for Adreno GPU on Android ARM64v8

I have two host platforms: a Linux amd64 Docker container and a macOS M1 Mac.

My target platform is an Android device with a Qualcomm SoC (Adreno GPU and Hexagon DSP). I would like to work with just the Adreno GPU for now.

How do I build TVM for this use case?

I am confused about how to even use TVM. I see Python code in the tutorials, but how do I get to that point? I'm obviously not going to run Python on the Android device. How do I build TVM so that the runtime works on Android while I do model optimization on the host?
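For anyone else hitting the same confusion, the picture I have pieced together so far is: all of the Python (model import, optimization, codegen) runs on the host, and the device only needs the C++ TVM runtime plus the compiled artifacts. Here is a minimal sketch of the host-side flow, assuming TVM was built on the host with LLVM and the Android NDK clang is exported as TVM_NDK_CC; the op, names, and target triple are just illustrative:

```python
# Minimal sketch of the host-side flow. Assumptions: TVM is built on the host
# with LLVM enabled, and TVM_NDK_CC points at the NDK clang
# (e.g. aarch64-linux-android28-clang++) so export_library can cross-link.
import tvm
from tvm import te
from tvm.contrib import ndk

# A trivial element-wise op defined in TE (stand-in for a real operator).
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

# Bind the loop to GPU block/thread axes so codegen emits an OpenCL kernel.
s = te.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

# Device code goes to OpenCL; the host wrapper is cross-compiled for Android.
target = tvm.target.Target("opencl", host="llvm -mtriple=aarch64-linux-android")
mod = tvm.build(s, [A, B], target=target, name="times_two")

# Produce an .so that the Android build of the TVM runtime can load.
mod.export_library("times_two.so", fcompile=ndk.create_shared)
```

Only libtvm_runtime.so (cross-built for Android as in the deploy docs) and the exported times_two.so need to go to the phone; Python never runs there.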

My goal is to add OpenCL kernels as new operators that I can use in TVM.
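To make that concrete, here is the kind of thing I mean (a sketch of my own, not taken from the docs): any compute I define in TE and bind to GPU threads gets lowered by TVM to an OpenCL kernel, which I can inspect on the host:

```python
# Sketch: dump the OpenCL source TVM generates for a custom compute.
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
# A made-up fused element-wise op; any te.compute works the same way.
C = te.compute((n,), lambda i: A[i] * B[i] + A[i], name="C")

s = te.create_schedule(C.op)
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))

mod = tvm.build(s, [A, B, C], target="opencl", name="my_fused_op")
# The imported device module holds the generated OpenCL kernel source.
print(mod.imported_modules[0].get_source())
```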

Any help is greatly appreciated. I am in a serious time crunch.

Oh I see, that's just the Python frontend; it's really just another way of using the same compiler that tvmc drives.
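In case it helps someone, the tvmc flow can also be driven from Python. A sketch, where the model file, target string, and package path are just examples (the target string follows the Adreno deploy guide; producing an Android-loadable .so additionally needs tvmc's cross-compiler option pointed at the NDK clang):

```python
# Sketch of driving the compilation with the tvmc Python API.
# Assumption: "model.onnx" exists; names and paths are illustrative.
from tvm.driver import tvmc

model = tvmc.load("model.onnx")
package = tvmc.compile(
    model,
    target="opencl -device=adreno, llvm -mtriple=aarch64-linux-android",
    package_path="model-android.tar",
)
```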

I am using the Python frontend's TOPI operators now, which is just what I wanted.
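Roughly what I am doing, as an illustrative sketch (schedule helper names vary a bit between TVM versions): TOPI supplies the compute definitions, and a generic GPU injective schedule binds them to threads so they lower to an OpenCL kernel:

```python
# Illustrative sketch of TOPI computes lowered to an OpenCL kernel.
import tvm
from tvm import te, topi

A = te.placeholder((1024,), name="A", dtype="float32")
B = te.placeholder((1024,), name="B", dtype="float32")
C = topi.add(A, B)        # TOPI supplies the compute definition
D = topi.multiply(C, A)   # TOPI tensors compose like ordinary TE tensors

# The generic GPU injective schedule binds everything to block/thread axes
# and inlines the intermediate; it is shared by CUDA-style and OpenCL targets.
with tvm.target.Target("opencl"):
    s = topi.cuda.schedule_injective(D)

mod = tvm.build(s, [A, B, D], target="opencl", name="topi_fused")
# Export with ndk.create_shared exactly as in the first sketch to deploy it.
```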

I just have a problem with the Adreno OpenCL ML extension backend. I built the host compiler/runtime and the Android runtime exactly as described here: https://tvm.apache.org/docs/how_to/deploy/adreno.html
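For reference, this is roughly how I run the exported module on the phone over RPC, following the cross-compilation/RPC tutorials (the tracker address, RPC key, and file name are mine and purely illustrative):

```python
# Roughly how I run the exported module on the phone over an RPC tracker.
# Assumptions: tracker at 127.0.0.1:9190, device registered with key
# "android", and "times_two.so" is the NDK-built module from the sketch above.
import numpy as np
import tvm
from tvm import rpc

tracker = rpc.connect_tracker("127.0.0.1", 9190)
remote = tracker.request("android", session_timeout=600)

remote.upload("times_two.so")
lib = remote.load_module("times_two.so")
func = lib["times_two"]

dev = remote.cl(0)  # the Adreno GPU shows up as the remote OpenCL device
a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), dev)
b = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
func(a, b)
np.testing.assert_allclose(b.numpy(), a.numpy() * 2.0, rtol=1e-5)
```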

Here’s my issue: