If I run the same code without module.set_input(**params), it works without any crash, but the results are all zero because the weight parameter buffers are zero. I also noticed the following comment in graph_runtime.py: "upload big arrays first to avoid memory issue in rpc mode". Is there a known memory issue when copying params in RPC mode?
I am stuck here and cannot proceed further. Could you please help with this?
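The graph_runtime.py comment quoted above suggests uploading the biggest arrays first. As a hedged sketch of that idea only (plain Python, not the real TVM API; the names `params` and `ordered_params` are illustrative), the ordering step just sorts the parameter dict by byte size, descending, before the upload loop:

```python
# Illustrative sketch of "upload big arrays first"; not TVM code.
# `params` stands in for a dict of name -> weight buffer (here: bytes).

def ordered_params(params):
    """Return (name, value) pairs sorted by size, largest first, so the
    biggest transfers happen before the RPC session has accumulated
    other state."""
    return sorted(params.items(), key=lambda kv: len(kv[1]), reverse=True)

params = {
    "fc_bias": b"\x00" * 16,
    "conv_weight": b"\x00" * 1024,
    "bn_gamma": b"\x00" * 64,
}

upload_order = [name for name, _ in ordered_params(params)]
print(upload_order)  # ['conv_weight', 'bn_gamma', 'fc_bias']
```

In the real flow, each `(name, value)` pair would then be sent over the RPC session in that order, e.g. via set_input.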
I’m also experiencing a similar issue, but with the BYOC flow instead. Unlike your example, my params are bound to the graph as constants, which my 3rd-party codegen library serializes as part of the module. I get memory corruption when I try to create a graph runtime with my compiled module on the remote.
I did a bit of digging into this, and the issue is that the RPC server uses a ring buffer to read from / write to the remote device. If the data being written is larger than the capacity of the buffer, it overwrites previous data, causing memory corruption.
I wonder if the capacity of this buffer can be increased manually?
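The failure mode described above can be demonstrated with a toy fixed-capacity ring buffer. This is purely illustrative (it is not TVM's actual ring buffer implementation, and the class name is made up); it just shows how a wrapping write clobbers unread data:

```python
class ToyRingBuffer:
    """A minimal fixed-capacity ring buffer. Unlike a safe implementation,
    write() neither blocks nor grows when full, so writing more bytes
    than the capacity silently overwrites data that was never read."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0  # next write position

    def write(self, data):
        for b in data:
            self.buf[self.head % self.capacity] = b
            self.head += 1

buf = ToyRingBuffer(capacity=4)
buf.write(b"ABCD")  # fills the buffer exactly
buf.write(b"EF")    # wraps around: clobbers 'A' and 'B' before any read
print(bytes(buf.buf))  # b'EFCD' -- the unread 'AB' is gone
```

A safe implementation would either grow the buffer or refuse/block the write when the unread region would be overwritten, which is why increasing the capacity (or chunking large transfers) avoids the corruption.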
Thanks, @lhutton1. I also quickly tried with the latest master code, where I see that a couple of updates went into the RPC module, but I am still seeing the same issue.
Thanks, @FrozenGene. Could you kindly point me to the code snippet in the TVM stack where I need to make your suggested modification?
I am having this issue too. I found that a very small model works, but I hit this error when the model size is increased. I agree with Kalyan: commit 9a8ed5b seems to work for me. Thanks!
May I ask whether TVM 12.0 still encounters this error? How should I fix it?
TVMError: Socket SockChannel::Recv Error: Connection reset by peer. I am running on Android. With outputs = [module.get_output(i).asnumpy() for i in range(module.get_num_outputs())], only get_output(0) succeeds without error; get_output(1) raises the error.