I have a custom PyTorch layer, for example this L2Norm layer:
import torch
import torch.nn as nn
import torch.nn.init as init

class L2Norm(nn.Module):
    def __init__(self, n_channels, scale):
        super(L2Norm, self).__init__()
        self.n_channels = n_channels
        self.gamma = scale or None
        self.eps = 1e-10
        self.weight = nn.Parameter(torch.Tensor(self.n_channels))
        self.reset_parameters()

    def reset_parameters(self):
        init.constant_(self.weight, self.gamma)

    def forward(self, x):
        # L2-normalize along the channel dimension, then apply a learned per-channel scale
        norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + self.eps
        x = torch.div(x, norm)
        out = self.weight.unsqueeze(0).unsqueeze(2).unsqueeze(3).expand_as(x) * x
        return out
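For context, this is roughly how I build and trace the model before conversion (the channel count, scale and input shape here are just example values, not my real network):

model = L2Norm(n_channels=512, scale=20).eval()
input_data = torch.randn(1, 512, 38, 38)    # dummy NCHW input, shape is only an example
out = model(input_data)                     # sanity-check the forward pass
scripted_model = torch.jit.trace(model, input_data).eval()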
Then I convert the model to Relay IR:
from tvm import relay
mod, params = relay.frontend.from_pytorch(scripted_model, [("input", input_data.shape)])
print(mod)
This gives the following result (excerpt):
......
%36 = power(%35, meta[relay.Constant][0]);
%37 = sum(%36, axis=[1], keepdims=True);
%38 = sqrt(%37);
%39 = add(%38, 1e-10f);
%40 = divide(%35, %39);
%41 = broadcast_to_like(%2, %40);
%42 = multiply(%41, %40);
But what I want is something like this:
......
%36 = L2Norm(%35,%2)
That is, all of the computation represented as one whole layer.
The reason is that my device supports L2Norm as a single operation, but it does not support these separate operations.
Do you have any suggestions for me? Thank you very much.
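One direction I have been considering is relay.transform.MergeComposite with the dataflow pattern language, to group the ops above into a single composite function that a codegen could offload as one L2Norm. A rough sketch of what I mean (the composite name "my_device.l2norm" is made up, and I have not verified this pattern against my real model):

from tvm import relay
from tvm.relay.dataflow_pattern import is_op, is_constant, wildcard

def l2norm_pattern():
    # Match the op chain that from_pytorch produced above:
    # power -> sum -> sqrt -> add(eps) -> divide, then broadcast_to_like + multiply for the scale
    x = wildcard()
    w = wildcard()
    p = is_op("power")(x, is_constant())
    s = is_op("sum")(p)
    sq = is_op("sqrt")(s)
    denom = is_op("add")(sq, is_constant())
    div = is_op("divide")(x, denom)
    scaled = is_op("broadcast_to_like")(w, div)
    return is_op("multiply")(scaled, div)

pattern_table = [("my_device.l2norm", l2norm_pattern())]
mod = relay.transform.MergeComposite(pattern_table)(mod)
print(mod)

Would this be the right way to end up with a single L2Norm-like call in the IR, or is there a better approach?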