Transform custom CUDA op in PyTorch model to Relay IR

Hi everyone, I have an op implemented in CUDA and embedded into a PyTorch model, like this:

import my_cuda_op  # I have compiled this CUDA op as a PyTorch extension

# in the PyTorch model's forward function:
def forward(self, x):
    # some code implemented with ordinary PyTorch ops
    layer1 = some_pytorch_implement(x)
    ...
    layer_n = my_cuda_op.my_func(x)  # here is the CUDA op
    ...
    return output


What I want to do is translate this PyTorch model, including the CUDA op, into Relay IR, i.e. PyTorch → Relay IR → …… Is there an example? Any suggestion is appreciated.

The quick way is to add your own custom op (e.g. aten::my_cuda_op) to pytorch.py and map it to a Relay op you implement (yes, you need to create a Relay op so there is something to map to). The created Relay op's TOPI compute can then call your external CUDA implementation.
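Roughly, the frontend side could look like the sketch below (untested; it assumes your CUDA op is registered as a TorchScript custom op so it appears in the traced graph under a hypothetical name like my_ops::my_func, that my_relay_op is the Python wrapper of the Relay op you registered, and that MyModel and the input shape are placeholders):

import torch
import tvm
from tvm import relay

import my_cuda_op  # the compiled CUDA extension from the original post

# Trace the model so the frontend receives a TorchScript graph.
model = MyModel().eval()                      # hypothetical model class
example_input = torch.randn(1, 3, 224, 224)   # hypothetical input shape
scripted = torch.jit.trace(model, example_input)

# Converter for the custom node: it receives the already-converted Relay
# inputs and must return a Relay expression, here a call to the Relay op
# you registered for the CUDA kernel.
def convert_my_func(inputs, input_types):
    return my_relay_op(inputs[0])

mod, params = relay.frontend.from_pytorch(
    scripted,
    [("input0", tuple(example_input.shape))],
    custom_convert_map={"my_ops::my_func": convert_my_func},
)

The key piece is custom_convert_map: the PyTorch frontend consults it when it meets a graph node whose kind it does not already know how to convert.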

Thank you FrozenGene,

I’m still a novice in TVM, so please excuse me if my upcoming questions sound naive.

Do you happen to have any concrete examples or tutorials? Specifically, I’m interested in the step of invoking external CUDA implementations from Relay ops.

I came across this link: "Adding an Operator to Relay" on the Apache TVM Chinese-language site (向 Relay 中添加算子 | Apache TVM 中文站).

However, it doesn't seem to cover the CUDA-implementation part.

External Tensor Functions — tvm 0.14.dev0 documentation may be a good reference.
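The core pattern in that tutorial is te.extern wrapping a packed-function call; a minimal sketch of the same pattern for this case could look like the following ("tvm.contrib.my_cuda_func" is a made-up name: you would register a PackedFunc under it yourself, e.g. with TVM_REGISTER_GLOBAL in the C++ file that launches your CUDA kernel, or with @tvm.register_func in Python):

import tvm
from tvm import te

# Hypothetical 1-D workload, just to show the shape of the pattern.
n = 1024
A = te.placeholder((n,), name="A")

# te.extern treats the computation as a black box: at runtime it calls the
# PackedFunc registered under "tvm.contrib.my_cuda_func" with the input and
# output DLTensors; that function is where the CUDA kernel gets launched.
B = te.extern(
    (n,),
    [A],
    lambda ins, outs: tvm.tir.call_packed("tvm.contrib.my_cuda_func", ins[0], outs[0]),
    name="my_cuda_extern",
)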

Thank you so much.

I will experiment following this document and, if it works, share the results as a reply in this thread.

I sincerely appreciate your assistance.