[NNVM] Fusing convolution with Exponential Linear Unit

Hi, let’s say I have a Convolution op followed by a Batch Norm and an Exponential Linear Unit (ELU), something like below. The elu definition is taken from the MXNet frontend.

import nnvm.symbol as sym

def elu(data):
    # ELU expressed with ops available in NNVM:
    # elu(x) = x for x > 0, slope * (exp(x) - 1) for x <= 0
    slope = 0.5
    return -slope * sym.relu(1 - sym.exp(data)) + sym.relu(data)

def get_sym(layout, kernel_layout, channels):
    data = sym.Variable(name="data")
    data = sym.conv2d(data=data, kernel_size=(3,3), channels=channels, padding=(1, 1),
                      layout=layout, kernel_layout=kernel_layout, use_bias=True)
    data = sym.batch_norm(data)
    data = elu(data)
    return data

In a picture, it looks like this:

Currently, NNVM cannot fuse the operations above into a single fused op. Instead, it creates two fused ops.
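
For reference, here is a minimal sketch of how the result can be inspected, assuming the standard nnvm.compiler flow (the input shape and target below are made up for illustration):

import nnvm
import nnvm.compiler

net = get_sym(layout="NCHW", kernel_layout="OIHW", channels=16)
shape = {"data": (1, 3, 32, 32)}  # hypothetical input shape

with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(net, target="llvm", shape=shape)

# Dump the compiled graph; the grouping of ops into fused kernels
# (two of them for this network) is visible here.
print(graph.ir())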

It looks like NNVM won’t apply operator fusion when the output of one op is fed to more than one op (here, the batch_norm output feeds both sym.exp(data) and sym.relu(data) inside elu). I understand that this is correct behavior in general. But I think this particular example can still be fused into a single op, since both branches are purely elementwise and reconverge at the final add.

Can we detect patterns like the one above (or similar ones) and apply operator fusion? If so, is it a good idea to support such pattern detection in the NNVM compiler?

It seems possible to make such an enhancement by detecting the common elementwise pattern that can be fused into the final op.
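
To make the idea concrete, here is a rough standalone sketch of that kind of check (not NNVM internals; the toy graph structure and op names below are made up to mirror the network above): a fan-out is fusable when every branch consists only of elementwise ops and all branches reconverge at a single node.

# Standalone sketch, not NNVM code: detect a fan-out of elementwise ops
# that reconverges at a single node, i.e. a fusable "diamond" like the ELU above.
ELEMWISE = {"exp", "relu", "__rsub_scalar__", "__mul_scalar__", "elemwise_add"}

# Toy graph: node name -> (op, input node names), mirroring conv2d -> batch_norm -> elu.
graph = {
    "conv":  ("conv2d", ["data"]),
    "bn":    ("batch_norm", ["conv"]),
    "exp":   ("exp", ["bn"]),
    "sub":   ("__rsub_scalar__", ["exp"]),        # 1 - exp(x)
    "relu1": ("relu", ["sub"]),
    "mul":   ("__mul_scalar__", ["relu1"]),       # -slope * relu(1 - exp(x))
    "relu2": ("relu", ["bn"]),
    "add":   ("elemwise_add", ["mul", "relu2"]),  # elu output
}

def consumers(g, name):
    return [n for n, (_, inputs) in g.items() if name in inputs]

def fuses_into_one(g, name):
    # True if everything downstream of `name` is elementwise and all paths
    # meet again at exactly one node, so the region can become one fused op.
    sinks = set()
    stack = consumers(g, name)
    while stack:
        node = stack.pop()
        op, _ = g[node]
        if op not in ELEMWISE:
            return False
        nexts = consumers(g, node)
        if not nexts:
            sinks.add(node)
        stack.extend(nexts)
    return len(sinks) == 1

print(fuses_into_one(graph, "bn"))  # True: the ELU branches reconverge at "add"

In the real compiler such a check would run over the NNVM graph IR inside the fusion pass rather than over a toy dict, but the structure of the test would be the same.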