How to save relay model for deploying to android?

https://docs.tvm.ai/deploy/android.html
This doc shows NNVM compilation of a model for the Android target. But now I use Relay and get sym, params, and intrp, without a graph. Can I save 'intrp' or the other objects to files and load them on Android? How?

target = 'llvm'
input_name = '0'
shape_dict = {input_name: x.shape}
sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with relay.build_config(opt_level=3):
    intrp = relay.build_module.create_executor('graph', sym, tvm.cpu(0), target)

Hi @houzi09.
How about using relay.build?

I hope this tutorial helps you.

Deploy the Pretrained Model on Android — tvm 0.6.dev documentation
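
For reference, a minimal sketch of that approach (assuming the sym and params from the ONNX import in your question; file names are just examples). relay.build returns a graph JSON string, a compiled library module, and the parameters, and each can be saved to a file that the Android graph runtime can load later.

import tvm
from tvm import relay
from tvm.contrib import ndk

# sym, params come from relay.frontend.from_onnx as in the question
target = 'llvm -target=arm64-linux-android'
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(sym, target=target, params=params)

# save the three artifacts for the Android app to load
lib.export_library('deploy_lib.so', ndk.create_shared)  # needs TVM_NDK_CC set
with open('deploy_graph.json', 'w') as f:
    f.write(graph)
with open('deploy_param.params', 'wb') as f:
    f.write(relay.save_param_dict(params))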

Thank you. I will try it now!

arch = "arm64"
target = "llvm -target=%s-linux-android" % arch
target_host = None

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(func, target=target, target_host=target_host, params=params)

temp = util.tempdir()
path_so = temp.relpath('deploy_lib.so')
lib.export_library(path_so, ndk.create_shared)

When I export the lib, I get the error below:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 temp = util.tempdir()
      2 path_so = temp.relpath('deploy_lib.so')
----> 3 lib.export_library(path_so, ndk.create_shared)

/share_sdb/software/tvm/python/tvm/module.py in export_library(self, file_name, fcompile, **kwargs)
    143         kwargs.update({'options': ["-I" + path for path in find_include_path()]})
--> 144     fcompile(file_name, files, **kwargs)
    145
    146 def time_evaluator(self, func_name, ctx, number=10, repeat=1, min_repeat_ms=0):

/share_sdb/software/tvm/python/tvm/contrib/ndk.py in create_shared(output, objects, options)
     63     msg = "Compilation error:\n"
     64     msg += py_str(out)
---> 65     raise RuntimeError(msg)
     66
     67 # assign output format

RuntimeError: Compilation error:
/share_sdb/software/android/android-toolchain-arm64/bin/…/lib/gcc/aarch64-linux-android/4.9.x/…/…/…/…/aarch64-linux-android/bin/ld: /tmp/tmpq3aamcjh/lib.o: Relocations in generic ELF (EM: 62)
/share_sdb/software/android/android-toolchain-arm64/bin/…/lib/gcc/aarch64-linux-android/4.9.x/…/…/…/…/aarch64-linux-android/bin/ld: /tmp/tmpq3aamcjh/lib.o: Relocations in generic ELF (EM: 62)
/tmp/tmpq3aamcjh/lib.o: error adding symbols: File in wrong format
clang80++: error: linker command failed with exit code 1 (use -v to see invocation)

I have generated a standalone toolchain and set the environment variable:
os.environ['TVM_NDK_CC'] = '/XXX/android-toolchain-arm64/bin/aarch64-linux-android-g++'

The error is reproduced in my environment.

I built this image today and reproduced the error when I ran this script.

The revision of TVM is:

dmlc/tvm at 4ab97dfae18bdeef2f5a60ab5d368442f1f731b9

I do not know the cause of the error yet.

At least in revision 1f908a955d63a79bd72ce5980051f51e47f5657a the script works well.

The difference between the two revisions is here:

Comparing 1f908a955d63a79bd72ce5980051f51e47f5657a…4ab97dfae18bdeef2f5a60ab5d368442f1f731b9 · dmlc/tvm


I generated a *.so with your version of TVM.

Did you want to deploy the model on Android in a native way?

I am a newbie. I just followed the deploy_to_android doc, and now I am stuck on the error from saving the model lib .so.

Can you try setting target_host to llvm -target=arm64-linux-android? I believe this may not have been required in nnvm, but it may be required for relay.
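
A minimal sketch of that suggestion, reusing the build call from earlier in this thread (func and params are the ones from the ONNX import; only target_host changes):

target = 'llvm -target=arm64-linux-android'
target_host = 'llvm -target=arm64-linux-android'

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(func, target=target,
                                     target_host=target_host, params=params)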

Also what is the precise CPU target (phone/board model) you are trying to deploy to?

I have a short script that I use to quickly try out MxNet (not ONNX) models on phones, but the general flow should be very similar. You can use it as a reference here: https://gist.github.com/eqy/b71d04b73842ce214819ad4c4930e7a1


Thank you for your demo code. I finally succeeded in building the lib by following your code. I found the key is
target = tvm.target.arm_cpu(model='pixel2')
no matter whether
target_host = 'llvm -target=arm64-linux-android' or None
(the full build that worked for me is sketched below).
I know about arm_cpu; as the doc says, 'This function will also download pre-tuned op parameters when there is none.'
What is the concrete effect of
opts = ["-device=arm_cpu"] + ["-model=snapdragon835", "-target=arm64-linux-android -mattr=+neon"]
for 'pixel2'?
Why did
target = 'x86_64-linux-gnu'
fail to build?
Can I use the 'pixel2' model on all Android phones? And what about other phones whose names are not included in the dict of the arm_cpu function?
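
For reference, a sketch of the build that worked for me, plus a quick way to inspect what arm_cpu(model='pixel2') expands to (the printed string is approximate):

import tvm
from tvm import relay
from tvm.contrib import util, ndk

# inspect the expanded target; roughly:
# llvm -device=arm_cpu -model=snapdragon835 -target=arm64-linux-android -mattr=+neon
target = tvm.target.arm_cpu(model='pixel2')
print(target)

target_host = 'llvm -target=arm64-linux-android'  # None also worked for me

# func and params come from the ONNX import earlier in the thread
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(func, target=target,
                                     target_host=target_host, params=params)

temp = util.tempdir()
lib.export_library(temp.relpath('deploy_lib.so'), ndk.create_shared)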

Yes. Can you show me the building code?

You can look at the file target.py: the Android target is just "-target=arm64-linux-android -mattr=+neon". When you run the model on a PC, you can set target = "llvm"; when using RPC to connect to an Android phone, you can set target = 'llvm -target=arm64-linux-android -mattr=+neon'. If you want to deploy on Android, you must follow the tutorial and use RPC to connect to your phone; then you can successfully export the lib, json, and params.
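
A rough sketch of that RPC flow (the tracker address and device key below are placeholders for your own setup; graph, lib, and params come from relay.build as above):

from tvm import rpc
from tvm.contrib import util, ndk, graph_runtime

# export the lib built for the phone, then push it over RPC
temp = util.tempdir()
path_so = temp.relpath('deploy_lib.so')
lib.export_library(path_so, ndk.create_shared)

tracker = rpc.connect_tracker('0.0.0.0', 9190)   # placeholder tracker host/port
remote = tracker.request('android', priority=1)  # placeholder device key
remote.upload(path_so)
rlib = remote.load_module('deploy_lib.so')

# run on the phone through the graph runtime
ctx = remote.cpu(0)
module = graph_runtime.create(graph, rlib, ctx)
module.set_input(**params)
module.run()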