How to make a PyTorch custom layer appear in Relay IR

I have a custom PyTorch layer, for example this L2Norm layer:

import torch
import torch.nn as nn
import torch.nn.init as init

class L2Norm(nn.Module):
    def __init__(self, n_channels, scale):
        super(L2Norm, self).__init__()
        self.n_channels = n_channels
        self.gamma = scale or None
        self.eps = 1e-10
        self.weight = nn.Parameter(torch.Tensor(self.n_channels))
        self.reset_parameters()

    def reset_parameters(self):
        init.constant_(self.weight, self.gamma)

    def forward(self, x):
        # L2-normalize along the channel dimension, then rescale per channel
        norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + self.eps
        x = torch.div(x, norm)
        out = self.weight.unsqueeze(0).unsqueeze(2).unsqueeze(3).expand_as(x) * x
        return out

I convert the model to Relay IR (from_pytorch needs a traced TorchScript module and the input shapes):

scripted = torch.jit.trace(model, dummy_input).eval()  # dummy_input: a placeholder example tensor
mod, params = relay.frontend.from_pytorch(scripted, [("input0", dummy_input.shape)])
print(mod)

The result:

......
%36 = power(%35, meta[relay.Constant][0]);
%37 = sum(%36, axis=[1], keepdims=True);
%38 = sqrt(%37);
%39 = add(%38, 1e-10f);
%40 = divide(%35, %39);
%41 = broadcast_to_like(%2, %40);
%42 = multiply(%41, %40);

But what I want is something like this:

......
%36 = L2Norm(%35,%2)

That is, the entire computation represented as one whole layer.

The reason is that my device supports L2Norm natively, but does not support the separate operations.

Do you have any suggestions for me? Thank you very much.

@jwfromm @masahi I have the same problem. I am implementing an operator, but TVM doesn’t treat it as a single operator; instead it splits it into multiple sub-operators corresponding to the ones defined in the class.

Hi @fush, did you find a solution to your problem?

We don’t know about your operator, so we cannot have such an op in the IR. You can use the MergeComposite pass to reconstruct the decomposed ops into a single, “composite” op.
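
For example, here is a rough sketch of a pattern for the decomposed L2Norm sequence printed above, registered with MergeComposite. The composite name "my_device.l2norm" is just a placeholder; the pattern follows the op sequence shown in the IR dump:

from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

def l2norm_pattern():
    # Mirrors the decomposed sequence above:
    # power -> sum -> sqrt -> add(eps) -> divide -> broadcast_to_like -> multiply
    x = wildcard()
    weight = wildcard()
    p = is_op("power")(x, wildcard())
    s = is_op("sum")(p)
    norm = is_op("add")(is_op("sqrt")(s), wildcard())
    div = is_op("divide")(x, norm)
    bcast = is_op("broadcast_to_like")(weight, div)
    return is_op("multiply")(bcast, div)

pattern_table = [("my_device.l2norm", l2norm_pattern())]
mod = relay.transform.MergeComposite(pattern_table)(mod)
print(mod)

After this pass, the matched subgraph becomes a Relay function annotated with Composite="my_device.l2norm", which a BYOC/custom codegen backend can then lower as a single L2Norm call on your device.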

Thanks @masahi. However, we need this at the level of generating the Relay IR, e.g. in from_pytorch: how do we control the granularity of a custom operator like the one @fush implemented, so that it is imported as a single op, just like built-in layers such as conv and relu?

@masahi @jwfromm may I get some help on this, please?

Here is a demo for intercepting a custom layer with Relax.

I know that with Relax, since it supports the torch.fx frontend, you could map a call_module node to your own modules; “5. Integration with Machine Learning Frameworks — Machine Learning Compilation 0.0.1 documentation” is an example. And on the PyTorch side, you could possibly override the call_module method of fx._symbolic_trace.py:Tracer to intercept your custom module.
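
For the PyTorch side, here is a minimal sketch, assuming the L2Norm module from the first post. Instead of overriding call_module, it overrides the related is_leaf_module hook, which is the simplest way to keep the module as a single call_module node in the fx graph that an fx-based importer could then map to a custom op:

import torch.fx as fx

class KeepL2NormTracer(fx.Tracer):
    # Treat L2Norm as a leaf so fx emits one call_module node
    # instead of tracing through pow/sum/sqrt/div/mul.
    def is_leaf_module(self, m, module_qualified_name):
        if isinstance(m, L2Norm):
            return True
        return super().is_leaf_module(m, module_qualified_name)

graph = KeepL2NormTracer().trace(model)   # `model` contains an L2Norm submodule
gm = fx.GraphModule(model, graph)
print(gm.graph)  # the L2Norm layer shows up as a single call_module node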