https://docs.tvm.ai/deploy/android.html
This doc shows NNVM compilation of a model for an Android target. But now I use Relay and get `sym`, `params`, and `intrp`, without a graph. Can I save `intrp` or the others to files and load them on Android? How?
Can you try setting `target_host` to `llvm -target=arm64-linux-android`? I believe this may not have been required in NNVM, but it may be required for Relay.
Also, what is the precise CPU target (phone/board model) you are trying to deploy to?
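For the original question about saving things to files: with Relay you don't save the interpreter; instead, `relay.build` returns a graph, a compiled lib, and params, and each of those can be written out and loaded on the phone. A minimal sketch, assuming a TVM ~0.5-era API, that `mod`/`params` came from a frontend importer, and that `net.json`/`net.so`/`net.params` are just example filenames:

```python
import tvm
from tvm import relay
from tvm.contrib import ndk  # NDK cross-compilation helper

# Android-capable target, as discussed in this thread.
target = tvm.target.arm_cpu(model='pixel2')

# `mod` and `params` are assumed to come from a frontend importer
# (e.g. relay.frontend.from_mxnet / from_onnx).
graph, lib, params = relay.build(mod, target=target, params=params)

# Cross-compile the shared library with the Android NDK toolchain
# (this expects TVM_NDK_CC to point at the NDK clang).
lib.export_library("net.so", ndk.create_shared)

# Save the graph JSON and the serialized parameters.
with open("net.json", "w") as f:
    f.write(graph)
with open("net.params", "wb") as f:
    f.write(relay.save_param_dict(params))
```

The three files can then be shipped to the device and loaded by the graph runtime there.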
Thank you for your demo code. I finally succeeded in building the lib following your code. I found the key is `target = tvm.target.arm_cpu(model='pixel2')`, regardless of whether `target_host` is `'llvm -target=arm64-linux-android'` or `None`.
I knew about `arm_cpu`; as the doc says, "This function will also download pre-tuned op parameters when there is none."
What is the concrete effect of `opts = ["-device=arm_cpu"] + ["-model=snapdragon835", "-target=arm64-linux-android -mattr=+neon"]` for 'pixel2'?
And why does `target = 'x86_64-linux-gnu'` fail to build?
Can I use the 'pixel2' model on all Android phones? And what about other phones whose names are not included in the dict of the `arm_cpu` function?
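The effect of those opts can be seen by paraphrasing how `tvm.target.arm_cpu` composes a target string in target.py. This is a plain-Python sketch, not TVM's actual code; only the 'pixel2' entry is grounded in this thread, and the fallback branch mirrors the `-model=<name>` default that unlisted phones get:

```python
# Per-model option table, as quoted above for 'pixel2'.
trans_table = {
    "pixel2": ["-model=snapdragon835", "-target=arm64-linux-android -mattr=+neon"],
}

def arm_cpu_target(model="unknown"):
    # Unlisted phones fall back to just "-model=<name>": you then have to
    # supply the -target triple (and -mattr) yourself.
    pre_defined_opt = trans_table.get(model, ["-model=%s" % model])
    opts = ["-device=arm_cpu"] + pre_defined_opt
    return "llvm " + " ".join(opts)

print(arm_cpu_target("pixel2"))
# -> llvm -device=arm_cpu -model=snapdragon835 -target=arm64-linux-android -mattr=+neon
```

So `model='pixel2'` mainly saves you typing the `-target`/`-mattr` flags and names the CPU so pre-tuned schedules can be looked up; it is not magic that works on every phone. `'x86_64-linux-gnu'` fails because the phone's CPU is AArch64, not x86_64, so the cross-compiled code cannot run there.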
You can look at the file target.py: the target for Android is just `-target=arm64-linux-android -mattr=+neon`. When you run the model on a PC, you can set `target = "llvm"`. When using RPC to connect to an Android phone, you can set `target = 'llvm -target=arm64-linux-android -mattr=+neon'`. If you want to deploy on Android, you must follow the tutorial and use RPC to connect to your phone; then you can successfully export the lib, json, and params.
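The RPC flow mentioned above looks roughly like this (a sketch only; the tracker host, port, and the `"android"` device key are placeholders you must match to your own tracker and phone setup, and `lib` is assumed to come from `relay.build` as earlier in the thread):

```python
from tvm import rpc
from tvm.contrib import ndk

# Cross-compile the lib produced by relay.build for the phone.
lib.export_library("net.so", ndk.create_shared)

# Connect to the phone through the RPC tracker it registered with
# (the TVM Android RPC app registers the device under a key).
tracker = rpc.connect_tracker("0.0.0.0", 9190)  # placeholder host/port
remote = tracker.request("android")             # placeholder device key

# Push the compiled lib to the phone and load it there for testing.
remote.upload("net.so")
rlib = remote.load_module("net.so")
```

Once this round trip works, the same exported files can be bundled into an APK and loaded with the TVM Java runtime, without RPC.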