TVM cross compilation for QNX OS

What are the steps to cross-compile a model for QNX OS, and which compilers or GNU toolchains need to be installed on the host machine? Also, what should the CMake instructions be for building the TVM runtime for QNX? The hardware I am using is an ARM CPU with NEON, so what should the target be for cross-compilation?

Hi @Akshay02, can you let me know if you're building the TVM runtime with or without OpenCL support?

Hi @VarunGupta, I am building the TVM runtime without OpenCL support.

Hi @Akshay02, can you let me know which architecture you're cross-compiling the TVM runtime for?

Hello. How are you getting on with the steps to cross-compile a model for QNX OS?

For cross-compiling models for the CPU on aarch64 QNX, I set the target to "llvm -mtriple=aarch64-qnx" and build the Relay module. Then I generate the model .so using the QNX cross-compiler aarch64-unknown-nto-qnx7.1.0-g++.

Below is a snippet of the code I use to cross-compile models:

import tvm
from tvm import relay
from tvm.contrib import cc

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(
        net, target=tvm.target.Target(target, host=target_host), params=params
    )

# Link the generated objects into model.so with the QNX cross-compiler.
lib.export_library(
    tmp.relpath(filename),
    cc.cross_compiler("aarch64-unknown-nto-qnx7.1.0-g++"),
)
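
Once the cross-built TVM runtime is available on the QNX target, the exported library can be loaded and run there directly. A minimal sketch, assuming the model was built for the graph executor; the path "model.so" and the input name/shape/dtype are placeholders for your model:

import numpy as np
import tvm
from tvm.contrib import graph_executor

# On the QNX device, load the library produced by export_library() on the host.
lib = tvm.runtime.load_module("model.so")
dev = tvm.cpu(0)

# "default" is the factory function relay.build() embeds for the graph executor.
module = graph_executor.GraphModule(lib["default"](dev))

# Example input; replace the name, shape, and dtype with your model's real input.
module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()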

Thanks for your answer, but I got an error:

aarch64-unknown-nto-qnx7.1.0-ld: /tmp/tmp8dhaqdj9/lib0.o: relocations in generic ELF (EM: 164)  
/aarch64-unknown-nto-qnx7.1.0-ld: /tmp/tmp8dhaqdj9/lib0.o: error adding symbols: file in wrong format

My code is:

import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(2, 2), dtype="float32")
y = relay.var("y", shape=(2, 2), dtype="float32")
params = {"y": np.ones((2, 2), dtype="float32")}
relay_mod = tvm.IRModule.from_expr(relay.Function([x, y], x + y))

# get_hexagon_target() is the helper from TVM's Hexagon test infrastructure.
target = get_hexagon_target("v73")

with tvm.transform.PassContext(opt_level=3):
    hexagon_lowered = tvm.relay.build(
        relay_mod,
        target,
        params=params,
    )

hexagon_lowered.export_library(result_so, cc="path/aarch64-unknown-nto-qnx7.1.0-g++")

Can you help me check where the problem is?

2. Another question: I compile the model with the AOT executor and I can run it on my Hexagon device, but the performance is very poor. Is this normal? I see that other people also have doubts about the performance. My model is mobilenetv2-7, and running it on the Hexagon device took 14 s. I use the launcher_android example (apps/hexagon_launcher).

I don't have much knowledge regarding point 2, but regarding point 1: are you building the .so for Hexagon (a 32-bit QDSP6 ELF)? I think you need to add the target_host parameter.
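
A rough sketch of what I mean, reusing relay_mod and params from your snippet above. The Hexagon version and the aarch64 host triple here are only placeholders, not a confirmed fix; substitute whatever matches the CPU side of your device:

import tvm
from tvm import relay

# Device code is compiled for Hexagon, host-side glue for the aarch64 CPU.
# "v73" and the Android triple are example values; adjust to your setup.
hexagon_target = tvm.target.hexagon("v73")
target = tvm.target.Target(hexagon_target, host="llvm -mtriple=aarch64-linux-android")

with tvm.transform.PassContext(opt_level=3):
    lowered = relay.build(relay_mod, target, params=params)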
