Is there a demo to run onnx model with multiple input nodes?

Is there a demo that runs an ONNX model with multiple input nodes? The example on the TVM official website only shows the single-input-node situation.

Can anyone help me?

I’m not sure about the executor class.

However, this method might meet your requirement too. (I have never run it, but my guess is that it should work.)

import tvm
from tvm import relay
from tvm.contrib import graph_executor

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
func = mod["main"]

target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(func, target, params=params)

dev = tvm.cpu(0)
m = graph_executor.GraphModule(lib["default"](dev))

# set inputs (you can set as many inputs as you want,
# but you need to know each input node's name)
m.set_input("data", tvm.nd.array(x.astype(dtype)))
m.set_input("data2", tvm.nd.array(x2.astype(dtype)))  # second input node; use your model's actual input name

# execute
m.run()

# get outputs
tvm_output = m.get_output(0)

Thank you very much!

And I wonder how to do it with this API:

with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)
dtype = "float32"
tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params).numpy()

For the intrp.evaluate() function, every argument except **params is treated as an input. So you can do it just by passing the extra tensor as an additional argument:

with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)
dtype = "float32"
tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), tvm.nd.array(x2.astype(dtype)), **params).numpy()

Can this usage have an out-of-order problem?

I mean, could an input tensor be passed to the wrong input node?

My understanding is that the input values are set in order.

Here is the source code of the _graph_wrapper() function, which assigns the input values to the input tensors:

tvm/build_module.py at e1b3ff4ae3b1ea22691e8d3cc9c001459a7aa080 · apache/tvm

def _graph_wrapper(*args, **kwargs):
    args = self._convert_args(self.mod["main"], args, kwargs)
    # Create map of inputs.
    for i, arg in enumerate(args):
        gmodule.set_input(i, arg)
    # Run the module, and fetch the output.
    gmodule.run()
    flattened = []
    for i in range(gmodule.get_num_outputs()):
        flattened.append(gmodule.get_output(i).copyto(_nd.cpu(0)))
    unflattened = _unflatten(iter(flattened), ret_type)
    return unflattened
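
To illustrate the in-order mapping, here is a toy sketch (plain Python with a made-up FakeGraphModule stub, not the real TVM class) that mimics the loop in _graph_wrapper above: the i-th positional argument always lands on input index i, in the order you pass the arguments.

```python
# Toy stand-in for graph_executor.GraphModule -- NOT the real TVM class.
# It only records which value lands on which input index.
class FakeGraphModule:
    def __init__(self):
        self.inputs = {}

    def set_input(self, index, value):
        self.inputs[index] = value

def graph_wrapper(gmodule, *args):
    # Same in-order assignment as _graph_wrapper above:
    # the i-th positional argument becomes input i.
    for i, arg in enumerate(args):
        gmodule.set_input(i, arg)
    return gmodule.inputs

gm = FakeGraphModule()
print(graph_wrapper(gm, "tensor_for_input_0", "tensor_for_input_1"))
# -> {0: 'tensor_for_input_0', 1: 'tensor_for_input_1'}
```

So as long as you pass the tensors in the same order as the parameters of mod["main"], each one reaches the intended input node.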