Errors when obtaining gradients for `nn.batch_norm`

Hi there, I am trying to get the gradients of some popular models, but it seems that TVM does not currently register a gradient for the `nn.batch_norm` operator. Is there a way to register gradients for unsupported ops?

import torch
from torch import nn
from tvm import relay

model = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
    nn.BatchNorm2d(3),
)


# Grab the TorchScripted model via tracing
input_shape = [1, 3, 32, 32]
input_data = torch.randn(input_shape)
scripted_model = torch.jit.trace(model, input_data).eval()
input_name = "input0"
shape_list = [(input_name, input_data.shape)]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
mod = relay.transform.InferType()(mod)
bwd_mod = relay.transform.gradient(mod['main'], mode="first_order")

# >> output when run
"""
the operator nn.batch_norm does not have a registered gradient.
      1 mod = relay.transform.InferType()(mod)
----> 2 bwd_mod = relay.transform.gradient(mod['main'], mode="first_order")
      3 bwd_mod

~/Workspace/tvm/python/tvm/relay/transform/transform.py in gradient(expr, mod, mode)
"""


You can add it here: https://github.com/apache/tvm/blob/main/python/tvm/relay/op/_tensor_grad.py
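Whatever you register there has to compute the actual batch-norm backward pass. As a reference for what those expressions need to produce, here is a pure-NumPy sketch of the training-mode formulas (batch statistics over axis 0, biased variance); the function name, shapes, and `eps` default are my own assumptions, and the real Relay gradient would express the same math with Relay ops:

```python
import numpy as np

def batch_norm_backward(dy, x, gamma, eps=1e-5):
    """Training-mode batch-norm backward over the batch axis (axis 0).

    x, dy: (N, C) arrays; gamma: (C,). Returns (dx, dgamma, dbeta).
    Illustrative reference only -- not TVM code.
    """
    n = x.shape[0]
    mu = x.mean(axis=0)
    var = x.var(axis=0)          # biased variance, as in the forward pass
    inv_std = 1.0 / np.sqrt(var + eps)
    xhat = (x - mu) * inv_std    # normalized input from the forward pass

    dbeta = dy.sum(axis=0)
    dgamma = (dy * xhat).sum(axis=0)
    # Compact form of dL/dx that folds in the mean/variance terms.
    dx = (gamma * inv_std / n) * (n * dy - dbeta - xhat * dgamma)
    return dx, dgamma, dbeta
```

A finite-difference check against the forward pass `y = gamma * (x - mu) / sqrt(var + eps) + beta` is an easy way to validate any gradient you end up registering.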


Thanks for the information! This is exactly what I need.