Supporting CumSum from ONNX - Use te Scan op or develop from scratch?

Hi, I’ve just come across a model that requires support for the ONNX CumSum op https://github.com/onnx/onnx/blob/master/docs/Operators.md#CumSum. The model is the DETR object detection model https://github.com/facebookresearch/detr. Since this model doesn’t need ad hoc object detection ops that are painful to support, I think it is a great fit for TVM. Our ONNX (and also PyTorch) frontend only needs to implement the CumSum op.
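For reference, CumSum is just a running sum along a chosen axis, with optional `exclusive` and `reverse` attributes in ONNX. A quick NumPy sketch of the semantics (not TVM code):

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

# Running sum along axis 1: [[1, 3, 6], [4, 9, 15]]
print(np.cumsum(x, axis=1))

# "Exclusive" mode (an ONNX CumSum attribute): each output element excludes
# its own input, i.e. [[0, 1, 3], [0, 4, 9]]
print(np.cumsum(x, axis=1) - x)
```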

Since TVM has support for the scan operation https://tvm.apache.org/docs/tutorials/language/scan.html#sphx-glr-tutorials-language-scan-py, I’m wondering whether it is a good idea to implement the Relay cumsum op on top of te scan, or to implement a new TOPI operator from scratch. I also want to use the scan primitive from Thrust to support fast cumsum on CUDA.
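For the te scan route, the cumsum would look roughly like the example in the scan tutorial above (cumulative sum over axis 0); this is just that pattern restated, not a worked-out Relay/TOPI integration:

```python
import tvm
from tvm import te

# Cumulative sum along axis 0 expressed with te.scan, following the tutorial.
m = te.var("m")
n = te.var("n")
X = te.placeholder((m, n), name="X")
s_state = te.placeholder((m, n))
# First scan step: copy the first row of X.
s_init = te.compute((1, n), lambda _, i: X[0, i])
# Recurrence: state[t] = state[t - 1] + X[t]
s_update = te.compute((m, n), lambda t, i: s_state[t - 1, i] + X[t, i])
s_scan = tvm.te.scan(s_init, s_update, s_state, inputs=[X])

s = te.create_schedule(s_scan.op)
print(tvm.lower(s, [X, s_scan], simple_mode=True))
```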

@tqchen @kevinthesun @Laurawly @jwfromm

Hi masahi, can you run the DETR model (ONNX) from Relay => TIR => CUDA/LLVM successfully now? I ran into a problem where the model has dynamic shapes during inference.

Yes, we now support DETR via the PyTorch frontend. I remember there was an issue when using the ONNX frontend, but I don’t know if that has been fixed.

For PyTorch, you can use the script in [Bug] Error in constant folding when model has dropout operator · Issue #7530 · apache/tvm · GitHub. There are no dynamic shapes if you import via the PyTorch frontend.
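The general flow is to trace the model and feed it to `relay.frontend.from_pytorch`. A minimal sketch of that flow is below; the `TraceWrapper` class, input name, and 800x800 input size are my own choices here, since DETR's forward returns a dict and tracing needs tensor outputs. For the exact working script, use the one in the issue linked above.

```python
import torch
import tvm
from tvm import relay


class TraceWrapper(torch.nn.Module):
    """Unpack DETR's dict output into a tuple so torch.jit.trace works."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return out["pred_logits"], out["pred_boxes"]


model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
wrapped = TraceWrapper(model.eval())

# Fixed input shape, so no dynamic shapes show up on the Relay side.
inp = torch.rand(1, 3, 800, 800)
with torch.no_grad():
    traced = torch.jit.trace(wrapped, inp)

shape_list = [("input", (1, 3, 800, 800))]
mod, params = relay.frontend.from_pytorch(traced, shape_list)
```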