Expr cannot be divided by tir.Var in Relax?

Hi,

Here is a PyTorch code snippet that divides a tensor by one of its shape values.

import torch
import torch.nn as nn

class IFBlock(nn.Module):
    def __init__(self):
        super(IFBlock, self).__init__()

    def forward(self, x):
        # Divide the tensor by the size of its third dimension.
        return x / x.shape[2]

When playing with Relax with a dynamic input shape, the ONNX frontend reports the error below:

  File "/data/aigc/workset/tvm_upstream/python/tvm/relax/frontend/onnx/onnx_frontend.py", line 2397, in from_onnx
    return g.from_onnx(graph, opset)
  File "/data/aigc/workset/tvm_upstream/python/tvm/relax/frontend/onnx/onnx_frontend.py", line 2040, in from_onnx
    self._construct_nodes(graph)
  File "/data/aigc/workset/tvm_upstream/python/tvm/relax/frontend/onnx/onnx_frontend.py", line 2203, in _construct_nodes
    op = self._convert_operator(op_name, inputs, attr, self.opset)
  File "/data/aigc/workset/tvm_upstream/python/tvm/relax/frontend/onnx/onnx_frontend.py", line 2301, in _convert_operator
    sym = op_function(self.bb, inputs, attrs, [self._nodes, self._params])
  File "/data/aigc/workset/tvm_upstream/python/tvm/relax/frontend/onnx/onnx_frontend.py", line 244, in _impl_v14
    else inputs[0].data.numpy()
  File "/data/aigc/workset/tvm_upstream/python/tvm/runtime/object.py", line 75, in __getattr__
    raise AttributeError(f"{type(self)} has no attribute {name}") from None
AttributeError: <class 'tvm.relax.expr.Var'> has no attribute data

So I wonder: could this kind of operation be supported by Relax? That is, how could we implement the function mentioned above in pure Relax script?

Thx~

I checked the Relax-related code and found there is a "VarBinding"; I am not sure whether it could help solve this problem.

I mean, could we transform the shape value described in the graph into a relax.expr.Var, and bind this var to the actual shape value expressed by a tir.Var?

This is indeed a restriction. At the moment, Relax operators only operate on Relax inputs.

We can, however, create a TensorIR function that performs such a division and consumes x.shape. Relax is also starting to gain PrimValue support (which wraps the symbolic shape), and in theory we could support an operator that performs such a division.
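For illustration, such a TensorIR function might look roughly like the sketch below (assuming recent TVMScript syntax; divide_by_dim2 and the symbolic names n, h, w are illustrative). From Relax, it would be called through relax.call_tir.

import tvm
from tvm.script import tir as T

# Sketch only: divide each element of a (1, n, h, w) tensor by the
# symbolic extent h of dimension 2.
@T.prim_func
def divide_by_dim2(x: T.handle, y: T.handle):
    n = T.int64()
    h = T.int64()
    w = T.int64()
    X = T.match_buffer(x, (T.int64(1), n, h, w), "float32")
    Y = T.match_buffer(y, (T.int64(1), n, h, w), "float32")
    for i, j, k, l in T.grid(T.int64(1), n, h, w):
        with T.block("divide"):
            vi, vj, vk, vl = T.axis.remap("SSSS", [i, j, k, l])
            # h is a scalar PrimExpr here, so it can be cast and used directly.
            Y[vi, vj, vk, vl] = X[vi, vj, vk, vl] / T.Cast("float32", h)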


@tqchen , thanks for the reply~ For the TensorIR function, do you mean code like below? In the function, the shape value is taken as a PrimValue to perform the division.

# Imports added for a self-contained repro; the IRBuilder module layout is assumed.
import tvm
from tvm import relax
from tvm.script.ir_builder import IRBuilder
from tvm.script.ir_builder import relax as relax_builder

R = relax_builder  # R.emit, R.arg, etc. come from tvm.script.ir_builder.relax

def f(dtype, d0):
    with IRBuilder() as builder:
        with relax_builder.function():
            R.func_name("main")
            x1 = R.arg("d0", R.Tensor(d0, dtype))
            # Wrap the symbolic dim 2 as a PrimValue and try to divide by it.
            x2 = relax.PrimValue(d0[2])

            with R.dataflow() as frame:
                output = R.emit(x1 / x2)  # this line triggers the error below
                R.output(output)

            R.func_ret_value(frame.output_vars[0])

    func = builder.get()
    return tvm.IRModule({"main": func})

A = tvm.tir.Var("a", "int64")
B = tvm.tir.Var("b", "int64")
mod = f("float32", (1, 64, A, B))
mod = relax.transform.LegalizeOps()(mod)

But I still get an error:

  File "/data/aigc/workset/tvm_upstream/test/test_func.py", line 38, in <module>
    mod = f("float32", (1, 64, A, B))
  File "/data/aigc/workset/tvm_upstream/test/test_func.py", line 28, in f
    output = R.emit(x1/x2)
  File "/data/aigc/workset/tvm_upstream/python/tvm/script/ir_builder/relax/ir.py", line 420, in emit
    return _ffi_api.Emit(value, annotate_struct_info)  # type: ignore[attr-defined] # pylint: disable=no-member
  File "/data/aigc/workset/tvm_upstream/python/tvm/_ffi/_ctypes/packed_func.py", line 239, in __call__
    raise_last_ffi_error()
  File "/data/aigc/workset/tvm_upstream/python/tvm/_ffi/base.py", line 476, in raise_last_ffi_error
    raise py_err

...

  7: tvm::relax::Normalizer::VisitExpr_(tvm::relax::CallNode const*)
  6: tvm::relax::Normalizer::InferStructInfo(tvm::relax::Call const&)
  5: _ZN3tvm7runtime13PackedFuncObj
  4: tvm::runtime::TypedPackedFunc<tvm::relax::StructInfo (tvm::relax::Call const&, tvm::relax::BlockBuilder const&)>::AssignTypedLambda<tvm::relax::StructInfo (*)(tvm::relax::Call const&, tvm::relax::BlockBuilder const&)>(tvm::relax::StructInfo (*)(tvm::relax::Call const&, tvm::relax::BlockBuilder const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const
  3: _ZN3tvm5relax29InferStructInfoBroadcastArithERKNS0_4CallERKNS0_12BlockBu
  2: tvm::relax::StructInfo tvm::relax::InferStructInfoBroadcast<tvm::runtime::DataType (*)(tvm::relax::Call const&, tvm::relax::BlockBuilder const&, tvm::relax::TensorStructInfo const&, tvm::relax::TensorStructInfo const&)>(tvm::relax::Call const&, tvm::relax::BlockBuilder const&, tvm::runtime::DataType (*)(tvm::relax::Call const&, tvm::relax::BlockBuilder const&, tvm::relax::TensorStructInfo const&, tvm::relax::TensorStructInfo const&))
  1: tvm::relax::GetInputTensorStructInfo(tvm::relax::Call const&, tvm::relax::BlockBuilder const&)
  0: _ZN3tvm5relax16BlockBuilderImpl11ReportFatalERKNS_10Diagnos
  File "/data/aigc/workset/tvm_upstream/src/relax/ir/block_builder.cc", line 138
TVMError: Op(relax.divide) requires the input x2 to be Tensor. However, the given one has a relax.PrimStructInfo

Have you solved the problem yet?

Using the topi divide function seems to work.

Good! Could you share your updated code? Thanks a lot.
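For reference, the topi-based approach might look roughly like the sketch below (a sketch only, not the poster's actual code; it assumes relax.BlockBuilder.emit_te can pass the symbolic dim h to topi.divide as a scalar divisor, and build_divide_by_dim2 is an illustrative name):

import tvm
from tvm import relax, tir, topi

def build_divide_by_dim2(dtype="float32"):
    bb = relax.BlockBuilder()
    n = tir.Var("n", "int64")
    h = tir.Var("h", "int64")
    w = tir.Var("w", "int64")
    x = relax.Var("x", relax.TensorStructInfo((1, n, h, w), dtype))
    with bb.function("main", [x]):
        with bb.dataflow():
            # emit_te lowers topi.divide into a TensorIR function;
            # the symbolic dim h is cast and passed as a scalar divisor.
            lv = bb.emit_te(topi.divide, x, tir.Cast(dtype, h))
            gv = bb.emit_output(lv)
        bb.emit_func_output(gv)
    return bb.get()

mod = build_divide_by_dim2()
mod.show()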