[SOLVED][PYNQ][VTA] Run model locally on PYNQ-Z1 instead of via RPC

First off, the question: Is it possible to run a model compiled for VTA on the PYNQ board locally, instead of using RPC?

So if I create the graph runtime on my host machine with:

# Some setup code
from tvm import rpc
from tvm.contrib import graph_runtime

remote = rpc.connect(ip, port)
lib = remote.load_module("lib.tar")
ctx = remote.ext_dev(0)
m = graph_runtime.create(graph, lib, ctx)
# load params, input and execute model...

everything works fine and the model is executed as expected.

I then tried to write a Python script that I can execute directly on the PYNQ board, instead of going through RPC:

# Some setup code
import tvm
from tvm.contrib import graph_runtime

lib = tvm.module.load("./lib.tar")
ctx = tvm.ext_dev(0)
m = graph_runtime.create(graph, lib, ctx)
# load params, input and execute model...

On calling graph_runtime.create(...) I get the error

TVMError: Check failed: allow_missing: Device API ext_dev is not enabled.

I compiled the TVM runtime and VTA on the PYNQ as described in this tutorial: https://docs.tvm.ai/vta/install.html#pynq-side-rpc-server-build-deployment

Do I have to enable some additional features, so that I can use tvm.ext_dev(0) on the PYNQ?

Hey! Since you are working on a PYNQ board, can you help me set things up? I have an ImportError problem.

I’m very new to tvm, and as a first step I followed the VTA installation guide linked below to set up my Xilinx PYNQ-Z1 board with tvm and run an example:
VTA Installation Guide

As per this guide I cloned tvm to the SD card of my Xilinx PYNQ board and followed each and every step precisely. Finally I started the RPC server with sudo ./apps/vta_rpc/start_rpc_server.sh, left the terminal where the RPC server is running aside, and opened another terminal. There I logged in to the board with “ssh xilinx@” and followed the section in the guide named “Testing your PYNQ based hardware setup”.
At the end, when I ran python vta/tests/python/pynq/test_program_rpc.py, I ended up with the ImportError: No module named tvm. Can anyone please help me with this problem?

First off, the test_program_rpc.py script has to be run on your host machine, not on the PYNQ board.

Citing the documentation:

# On the Host-side
python <tvm root>/vta/tests/python/pynq/test_program_rpc.py

The error No module named tvm is probably caused by a wrong PYTHONPATH. Make sure that the PYTHONPATH on your PYNQ board contains /home/xilinx/tvm/python. You can check this with echo $PYTHONPATH. If it doesn’t contain the above-mentioned path, put export PYTHONPATH=$HOME/tvm/python:$HOME/tvm/vta/python:$PYTHONPATH in ~/.bashrc and source it.
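If you'd rather check from Python, here is a quick sanity check you can run on the board (it assumes the default clone location ~/tvm from the guide; adjust the paths if you cloned tvm somewhere else):

```python
import os
import sys

# Directories the VTA guide expects on the Python path (assumed clone
# location ~/tvm; adjust if you put tvm somewhere else).
expected = [
    os.path.expanduser("~/tvm/python"),
    os.path.expanduser("~/tvm/vta/python"),
]

missing = [p for p in expected if p not in sys.path]
if missing:
    print("Missing from sys.path:", missing)
else:
    print("tvm paths are present")
```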

Besides libtvm_runtime.so, you will also need to load libvta.so, as is done in the find_vta function: https://github.com/dmlc/tvm/blob/master/vta/python/vta/exec/rpc_server.py#L45

then you should be good to go


Thank you! It works now

Thanks, adding

import ctypes

dll_path = "/home/xilinx/tvm/build/libvta.so"
ctypes.CDLL(dll_path, ctypes.RTLD_GLOBAL)

got me one step further. Now I’m getting another error on calling

m.run(**inputs)  # m is the graph_runtime
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /home/xilinx/tvm/build/libtvm_runtime.so(TVMFuncCall+0x37) [0xab551e38]
  [bt] (2) /home/xilinx/tvm/build/libtvm_runtime.so(tvm::runtime::GraphRuntime::Run()+0x1d) [0xab59e1ae]
  [bt] (1) /home/xilinx/tvm/build/libtvm_runtime.so(+0x7373a) [0xab59f73a]
  [bt] (0) /home/xilinx/tvm/build/libtvm_runtime.so(+0x3bc00) [0xab567c00]
  File "/home/xilinx/tvm/src/runtime/module_util.cc", line 73
TVMError: Check failed: ret == 0 (-1 vs. 0) : Assert fail: (dev_type == 12), device_type need to be 12

Any ideas?

Nevermind, I forgot to program the FPGA first. This works now, thanks!
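For anyone else hitting the dev_type error: the thread doesn’t show how the FPGA was programmed, but one possible way to do it locally is via the pynq package that ships with the PYNQ image (the function name and the .bit path below are placeholders of my own, not from the thread):

```python
import importlib.util

# Hypothetical helper: programs the FPGA fabric locally, assuming the
# pynq package from the PYNQ image. The .bit path is a placeholder;
# point it at your actual VTA bitstream.
def program_fpga(bit_path="/home/xilinx/vta.bit"):
    if importlib.util.find_spec("pynq") is None:
        print("pynq not available; run this on the board itself")
        return False
    from pynq import Bitstream
    # Download the bitstream before creating the ext_dev context.
    Bitstream(bit_path).download()
    return True
```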

@tqchen Can you mark this post as [SOLVED]?

You should be able to edit the topic by yourself as the original author :slight_smile: glad that it worked out

Somehow I can edit only some of my posts and not the title :thinking:

@flip1995 I’m glad the issue was resolved! Would you mind adding a tutorial to the website for people who might want to run the FPGA examples without going through RPC? I’d be happy to guide you through the process of extending our tutorials.

Yeah, I’ll do that. But I’m currently swamped with my master thesis, so it will take me some time to get to this. Should we create a github issue and assign me, so I don’t forget about it?

Will do, thanks - what’s your github username?


Same as here @flip1995

Any link to the said tutorial?

Sadly not. I never got around to writing the tutorial. I haven’t used TVM with an FPGA card in 1 ½ years and no longer have access to one, so I’m not sure whether this still works, and I don’t feel qualified to write the tutorial anymore.

I recommend that you follow the VTA tutorials that deploy the model over RPC, then write a Python script similar to what you would use on a raspi/jetson/… and add the two lines to load the shared library.
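Putting the pieces from this thread together, such an on-board script could be sketched like this. The paths and the graph_runtime API follow the snippets above (from the pre-0.6 TVM era); the helper names are my own, and this is untested on current TVM:

```python
import ctypes
import os

def load_vta_runtime(dll_path="/home/xilinx/tvm/build/libvta.so"):
    # RTLD_GLOBAL makes the VTA symbols visible to libtvm_runtime.so,
    # which registers the ext_dev device API. The path is the default
    # on-board build location (an assumption; adjust to your build).
    if not os.path.exists(dll_path):
        return False
    ctypes.CDLL(dll_path, ctypes.RTLD_GLOBAL)
    return True

def run_locally(graph, params_bytes, inputs):
    # Sketch only: needs tvm built on the board and a programmed FPGA.
    import tvm
    from tvm.contrib import graph_runtime
    lib = tvm.module.load("./lib.tar")
    ctx = tvm.ext_dev(0)
    m = graph_runtime.create(graph, lib, ctx)
    m.load_params(params_bytes)
    m.run(**inputs)
    return m
```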