[compile] Does TVM support DivToMul without explicit invocation?

Based on "Division to Multiplication" (除法转乘法 — TVM 开发指南, a TVM development guide), I wrote a test showing that TVM can convert division to multiplication with relay.transform.DivToMul. But in a more common case, a model may contain many div operators, so it is not convenient to transform them one by one by adding relay.transform.DivToMul calls in the source. My question: does TVM support DivToMul without explicit invocation, automatically transforming all div operators into mul for better performance?

  • test:
import tvm
from tvm import relay
import numpy as np

# Check that divide-by-constant is rewritten to multiply-by-reciprocal
# for each dtype, with a dtype-appropriate tolerance.
for dtype, rtol in [("float16", 1e-3), ("float32", 1e-7), ("float64", 1e-12)]:
    x = relay.var("x", relay.TensorType((), dtype))
    y = relay.Constant(tvm.nd.array(np.array([1.5]).astype(dtype)))
    z = x / y
    mod = tvm.IRModule.from_expr(z)
    transformed = relay.transform.DivToMul()(mod)
    assert transformed["main"].body.op.name == "multiply"
    np.testing.assert_allclose(transformed["main"].body.args[1].data.numpy()[0], 1 / 1.5, rtol=rtol)
  • debug:
(Pdb) n
> divtomul.py(11)<module>()
-> assert transformed["main"].body.op.name == "multiply"
(Pdb) p transformed
def @main(%x: float16 /* ty=float16 */) -> Tensor[(1), float16] {
  multiply(%x, meta[relay.Constant][0] /* ty=Tensor[(1), float16] */) /* ty=Tensor[(1), float16] */
}