Do we need to explicitly set the model weights again before running a GraphModule?

Hi, I am using the debug executor and the docs are really confusing.

In the debugger doc, it appears that we need to call m.set_input(**params) before actually running the module. However, it does not say where these params come from, and I can't find any way to retrieve them after loading a runtime module from an exported library.

In the graph_executor doc, on the other hand, the sample code skips m.set_input(**params) entirely.

So what does **params stand for, and do we really need to explicitly feed the module with it before running?

BTW, I tried both adding and removing this set_input step, and it doesn't seem to affect the final result.

Here are the docs I am referring to:

https://tvm.apache.org/docs/reference/api/python/graph_executor.html

https://tvm.apache.org/docs/arch/debugger.html

@lhutton1 @comaniac @AndrewZhaoLuo @masahi

Normally we get params by loading the model from some other DNN framework. For example, in the PyTorch tutorial, we get them from

mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

We then use mod and params to compile the model into a library.
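For reference, a rough sketch of that flow (the file name "model.so", the input name "input0", and the plain llvm target are just placeholders, not taken from your setup):

import tvm
from tvm import relay
from tvm.contrib import graph_executor

# mod and params come from the frontend import above
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    # passing params here bakes the weights into the compiled artifact
    lib = relay.build(mod, target=target, params=params)

lib.export_library("model.so")

# later: load the library and run it without touching the weights again
dev = tvm.cpu(0)
loaded = tvm.runtime.load_module("model.so")
m = graph_executor.GraphModule(loaded["default"](dev))
m.set_input("input0", tvm.nd.array(input_data))  # data input only, no weights
m.run()
out = m.get_output(0).numpy()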

Normally I would think that the original params would be included in the library during the relay.build phase, and if you are getting the correct result, then that is probably the case. However, if you wanted to use other params, then this step would be handy.
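For example, if you built without passing params= to relay.build (so the weights are still free inputs of the compiled graph), or if you want to try swapping in a different set of weights, the call looks roughly like this (a sketch; other_params is a hypothetical dict mapping parameter names to arrays, and loaded, dev, input_data are as in the snippet above):

m = graph_executor.GraphModule(loaded["default"](dev))
# supply (or override) the weights; the names must match the graph's parameter names
m.set_input(**{k: tvm.nd.array(v) for k, v in other_params.items()})
m.set_input("input0", tvm.nd.array(input_data))
m.run()

One more detail that may explain your observation: if I read the runtime code correctly, the keyword-argument form of set_input silently skips any name that is not an input of the compiled graph, so calling m.set_input(**params) on a library that already has the weights bound at build time is effectively a no-op.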

For the debugger, it could be that originally you needed to pass the params again, but not anymore. The community is still refining the debugger system, so there is not one canonical approach just yet.
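In case it helps, here is roughly how the debugger doc's snippet maps onto the factory module that relay.build returns today; the graph and params it expects can be pulled off that factory (a sketch, assuming a reasonably recent TVM, with an arbitrary dump_root path; dev and input_data as in the snippets above):

from tvm.contrib.debugger import debug_executor

# lib is the factory module returned by relay.build(mod, target=target, params=params)
m = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev, dump_root="/tmp/tvmdbg")

# this is the **params the debugger doc refers to: as far as I can tell, when the
# executor is created from the bare graph json and lib like this, the weights are
# not loaded automatically, so you do need to set them here
m.set_input(**lib.get_params())

m.set_input("input0", tvm.nd.array(input_data))
m.run()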


Hi Wheest, thanks for the clarification! Yeah, I do observe that normally, when directly loading a runtime library, we don't have to set the params for it. My guess is that this API is kept so that all params (including the weights, etc.) can be overwritten for some specific use case.