Hi, I’m interested in running Faster RCNN and Mask RCNN models with TVM.
Thanks to @vinx13, we now have ROIPooling, ROIAlign, Proposal, and box-related ops. With @Laurawly’s PR we will have argsort and AdaptiveAvgPooling. It seems we have all the pieces needed to run Faster RCNN and Mask RCNN models from GluonCV. The only missing piece I could find is the Relay frontend for the gather_nd op, which is easy to add.
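For context, here is a rough sketch of the GluonCV path I have in mind once those pieces land; the model name, input resolution, and target below are placeholders for illustration, not a tested recipe:

```python
# Sketch: import a GluonCV Faster RCNN model into Relay and build it.
# Model name, input shape, and target are illustrative assumptions.
import tvm
from tvm import relay
from gluoncv import model_zoo

model = model_zoo.get_model("faster_rcnn_resnet50_v1b_coco", pretrained=True)
shape_dict = {"data": (1, 3, 600, 800)}  # example input resolution

# Convert the Gluon HybridBlock into a Relay module plus parameters.
mod, params = relay.frontend.from_mxnet(model, shape_dict)

# Build for CPU; the exact return values vary a bit across TVM versions.
lib = relay.build(mod, target="llvm", params=params)
```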
We have all the ops supported. GluonCV recently switched to deconvolution in its RCNN models, which causes performance issues since we don’t have a well-optimized deconv schedule in TVM.
Hi,
I’m replying to this thread as I think the issues are closely related. I’m interested in running maskrcnn_benchmark in TVM (specifically e2e_mask_rcnn_X-152-32x8d-FPN-IN5k_1.44x_caffe2). I’ve tried pytorch_tvm (using torch.jit.trace()), and also converting to ONNX first (then using relay.frontend.from_onnx()), without success (missing operators in the latter case). I’ve also tried converting from PyTorch to MXNet, but no luck either. Is there any plan to support the missing operators via ONNX or pytorch_tvm? (BTW, my target would be CPU.) Thanks in advance.
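In case it helps, the ONNX path I tried looks roughly like the sketch below; I’m using a torchvision Mask R-CNN here as a stand-in for the maskrcnn_benchmark model, and the input shape, opset version, and file name are just placeholders. The import still stops at the unsupported operators:

```python
# Sketch of the PyTorch -> ONNX -> Relay path; torchvision model, input
# shape, opset, and file name are assumptions for illustration only.
import onnx
import torch
import torchvision
from tvm import relay

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

# PyTorch -> ONNX. Detection models take a list of 3D image tensors.
dummy = [torch.rand(3, 800, 800)]
torch.onnx.export(model, dummy, "mask_rcnn.onnx",
                  opset_version=11, input_names=["input"])

# ONNX -> Relay, then build for a CPU target. This is where unsupported
# operators currently surface as frontend errors.
onnx_model = onnx.load("mask_rcnn.onnx")
mod, params = relay.frontend.from_onnx(onnx_model,
                                       shape={"input": (3, 800, 800)})
lib = relay.build(mod, target="llvm", params=params)
```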
At least for TensorFlow there is ongoing work to support NonMaxSuppression, which is a key operator for these kinds of models. A PR from @yongwww is expected soon.
We are working on enabling Mask RCNN, Fast RCNN, Faster RCNN, SSD, etc. support in TVM. Hopefully all of these models will be supported before the end of this year. Contributions are welcome!
Thanks for your reply! I’m glad to know we will have a working implementation in the coming months. Regarding contributing, I understand there is already a group working on this specific issue; could you please give some direction on how to contact them? Thanks again for your support.