Hi,
I want to ask whether there is any plan, or whether somebody is already working, to support an arbitrary-precision integer type (ap_int) in TVM?
I know it's not a general-purpose optimization, but it can be very useful for certain kinds of accelerators such as FPGAs, or even when designing a new ASIC. The TVM stack seems to accept such data types already, but currently they are promoted to the next higher-precision type.
I'm working on accelerating DNNs on FPGAs, which requires ap_int, and I want to use TVM as the compiler stack. If there is demand, I'd like to work on adding this support to the TVM mainline, since GPUs also seem to support precisions lower than 8 bits, so I expect it would bring some benefits there as well.
We already have experience doing this for Halide internally, so please let me know what the community thinks about it.
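To illustrate what I mean, here is a minimal sketch, assuming a TVM build where the te schedule API is available; the exact promotion behavior may differ per target and codegen, so treat this only as an example of the behavior I'm describing:

```python
import tvm
from tvm import te

n = te.var("n")
# The dtype string parser accepts arbitrary widths such as "int4"...
A = te.placeholder((n,), dtype="int4", name="A")
B = te.compute((n,), lambda i: A[i] + tvm.tir.const(1, "int4"), name="B")

s = te.create_schedule(B.op)
# ...but inspecting the lowered TIR (and the code generated for most
# backends) shows the narrow values being widened to a native width.
print(tvm.lower(s, [A, B], simple_mode=True))
```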
Hi, there is a TVM-based project called HeteroCL, whose main target right now is FPGA. You can check the webpage here. Currently, HeteroCL supports both arbitrary-precision integers and fixed-point numbers. You can also check the tutorials on the website.
HeteroCL can currently target both Xilinx and Intel devices by generating HLS code. Thus, in order to run your applications on FPGAs, you'll need the HLS tools from either Xilinx or Intel. The most stable back end right now is Xilinx HLS C. We are also working on generating OpenCL code for both Xilinx and Intel devices. Hopefully, we can release an update before the end of the year.
We are also working closely with TVM to connect its front-end ML stack with the existing HeteroCL framework. We have been building such a flow using Relay, and ideally we will also release an update before the end of the year. Finally, you can check the HeteroCL publication for more details; there you can also find my contact information.
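For a flavor of the type system, here is a short sketch based on the HeteroCL tutorials (assuming the heterocl package is installed; the names below follow its documented API, but check the tutorials for the exact usage on your version):

```python
import heterocl as hcl

# Arbitrary-precision integer and fixed-point types, e.g. a 6-bit unsigned
# integer input and a 12-bit fixed-point result with 6 fractional bits.
hcl.init(hcl.UInt(6))
A = hcl.placeholder((10,), "A")

def kernel(A):
    return hcl.compute(A.shape, lambda x: A[x] + 1, "B", dtype=hcl.Fixed(12, 6))

s = hcl.create_schedule([A], kernel)
# Generate Vivado HLS C code for Xilinx devices (the most stable back end
# mentioned above); other back ends such as OpenCL are still in progress.
code = hcl.build(s, target="vhls")
print(code)
```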
@seanlatias
I had looked at HeteroCL before and talked with someone from the project at the last TVM conference, but I now realize I didn't look deeply into its arbitrary-precision type system.
Thank you, I’ll check it out first and track their future work.
@seanlatias I've been looking at the HeteroCL project and tried some of the tutorials. I think it's a great project.
While looking at HeteroCL, I felt it could be one of the TVM backends instead of a separate project (i.e., merged into the TVM repository). What do you think about this? Otherwise, how are you planning to connect the existing ML stack to it, as you mentioned in your last reply?