Does TVM support deploying Relay model to Xilinx FPGA backend?

Hi. I’m trying to deploy a PyTorch model to a Xilinx FPGA board to run some inference tasks. Does TVM support deploying a Relay model to that backend? I’m aware of a note on the HLS backend example page saying that TVM does not currently support deploying end-to-end neural networks.

Hi @futureCodeKing ,

It really depends on what you call a “backend”. The FPGA itself is only the hardware, and there are a couple of possibilities using TVM.

  1. Use VTA: this accelerator can be implemented on FPGAs of the Zynq family, and tasks can be offloaded from the CPU to the accelerator using the standard TVM runtime.
  2. Use Gemmini: this accelerator can also be implemented on an FPGA, and TVM can generate C code to offload tasks to it using microTVM.
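As a rough sketch of option 1, compiling a Relay model for VTA looks something like the following. This is a hedged sketch, not a complete recipe: it assumes a TVM build with VTA enabled, and `vta.get_env()` / `env.target` are names from the VTA tutorials; the full flow also involves quantization and graph packing, which are omitted here. The import is guarded so the snippet degrades gracefully when VTA is not installed.

```python
# Hedged sketch: offloading a Relay model to VTA via the standard TVM runtime.
# Assumes TVM was built with VTA support; guarded so it runs either way.
try:
    import tvm
    import vta
    from tvm import relay

    vta_available = True
    env = vta.get_env()  # reads the VTA hardware config (e.g. a Zynq-class target)
    # mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
    # with tvm.transform.PassContext(opt_level=3):
    #     lib = relay.build(mod, target=env.target, params=params)
except ImportError:
    vta_available = False

print("VTA available:", vta_available)
```

The resulting `lib` would then be exported and loaded on the board with the usual TVM graph-executor workflow.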

Generating HLS code is, as far as I understand it, not supported.

I know of a Vitis AI backend, but I have never used it myself. You can find the documentation here.

You can try deploying the Relay model on the Vitis AI backend, but currently only one DPU subgraph can run on the Xilinx DPU through PyXIR. Although the latest Vitis AI release already supports multiple DPU subgraphs, PyXIR has not yet been updated to match. Does anyone have ideas on how PyXIR could support multiple subgraphs?
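For reference, the Vitis AI flow described above is roughly the following sketch. It assumes TVM was built with Vitis AI support and the `pyxir` package is installed; `partition_for_vitis_ai` comes from TVM's contrib API, while the DPU identifier and the commented-out build options are illustrative placeholders, not verified values. The import is guarded so the snippet runs even without those packages.

```python
# Hedged sketch: partitioning a Relay model for the Xilinx DPU through PyXIR.
# Assumes a TVM build with Vitis AI enabled; guarded so it runs either way.
try:
    import pyxir  # must be imported before the TVM Vitis AI contrib module
    import tvm
    from tvm import relay
    from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai

    vitis_ai_available = True
    # mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
    # mod = partition_for_vitis_ai(mod, params, dpu="DPUCZDX8G-zcu104")
    # with tvm.transform.PassContext(opt_level=3):
    #     lib = relay.build(mod, target="llvm", params=params)
except ImportError:
    vitis_ai_available = False

print("Vitis AI contrib available:", vitis_ai_available)
```

The partitioning pass is what decides which operators land in a DPU subgraph, which is where the single-subgraph limitation mentioned above shows up.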