Running MXNet & Darknet models on VTA : VTAMemAlloc error

Hi,

I am trying to run ResNet, AlexNet, SqueezeNet, and VGG-16 on VTA using MXNet & Darknet. My code combines two of the VTA tutorials. After quantizing & tuning the models, it classifies the ‘cat’ image just as the tutorials do.

Deploy Pretrained Vision Model from MxNet on VTA

Auto-tuning a convolutional network on VTA

Importing from the MXNet Gluon model zoo, resnet18_v1 (the model used in the tutorial) and resnet18_v2 work well. But with alexnet, squeezenet1.0, and vgg16, the same error arises; it also appears when I import alexnet from Darknet. Tuning, compiling with relay.build(), and uploading with remote.upload() all work fine, but when I create the graph_runtime, this error message appears on the VTA side:

```
INFO:RPCServer:load_module /tmp/tmpt4gzi0gq/graphlib.o
python3: /home/xilinx/tvm/3rdparty/vta-hw/src/pynq/pynq_driver.cc:29: void* VTAMemAlloc(size_t, int): Assertion `size <= VTA_MAX_XFER' failed.
```

This is the error displayed on my terminal:

```
Upload…
Traceback (most recent call last):
  File "/usr/lib/python3.7/pdb.py", line 1699, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.7/pdb.py", line 1568, in _runscript
    self.run(statement)
  File "/usr/lib/python3.7/bdb.py", line 578, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/home/tux/tvm/vta/tutorials/autotvm/cat_tiger_sugar.py", line 33, in <module>
    """
  File "/home/tux/tvm/vta/tutorials/autotvm/cat_tiger_sugar.py", line 506, in tune_and_evaluate
    m = graph_runtime.create(graph, lib, ctx)
  File "/home/tux/tvm/python/tvm/contrib/graph_runtime.py", line 60, in create
    return GraphModule(fcreate(graph_json_str, libmod, device_type_id))
  File "/home/tux/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 234, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (5) /home/tux/tvm/build/libtvm.so(TVMFuncCall+0x5f) [0x7f9f0bb909bf]
  [bt] (4) /home/tux/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue), tvm::runtime::RPCModuleNode::WrapRemoteFunc(void*)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x33) [0x7f9f0bbf6ed3]
  [bt] (3) /home/tux/tvm/build/libtvm.so(tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x3ea) [0x7f9f0bbf69ba]
  [bt] (2) /home/tux/tvm/build/libtvm.so(tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)+0x53) [0x7f9f0bbeadd3]
  [bt] (1) /home/tux/tvm/build/libtvm.so(tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)+0x6bd) [0x7f9f0bbe07ed]
  [bt] (0) /home/tux/tvm/build/libtvm.so(+0x2c5a4d8) [0x7f9f0bbdd4d8]
  File "/home/tux/tvm/src/runtime/rpc/rpc_endpoint.cc", line 799
TVMError: Check failed: code == RPCCode::kReturn: code=1
```

It seems that while the graph_runtime is being created, some buffer is too big to fit in VTA memory, but I have no idea what is causing this problem.
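For what it's worth, a rough back-of-the-envelope check hints at one suspect. The failing assertion compares a single allocation against VTA_MAX_XFER, which in the default vta-hw Pynq configuration is, as far as I can tell, 1 << 22 bytes (4 MiB) — please verify against your own hw_spec.h. The sketch below (plain Python, no TVM; layer shapes hard-coded from the standard architectures, not read from the models) shows that the first fully-connected layer of alexnet or vgg16 alone is far larger than 4 MiB, while resnet18's final dense layer is not — which would match which networks fail:

```python
# Rough check of single-buffer sizes against VTA's transfer limit.
# ASSUMPTION: VTA_MAX_XFER == 1 << 22 bytes (4 MiB), the default Pynq
# configuration in vta-hw -- check hw_spec.h on your board.
VTA_MAX_XFER = 1 << 22

def weight_bytes(shape, dtype_bytes=1):
    """Size of one weight tensor after int8 quantization (1 byte/element)."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

# First fully-connected layer of each network: (output_units, input_units).
# These are the standard shapes for these architectures.
fc_layers = {
    "resnet18 final dense": (1000, 512),
    "alexnet fc1":          (4096, 9216),
    "vgg16 fc1":            (4096, 25088),
}

for name, shape in fc_layers.items():
    size = weight_bytes(shape)
    verdict = "fits" if size <= VTA_MAX_XFER else "exceeds VTA_MAX_XFER"
    print(f"{name}: {size / 2**20:.1f} MiB -> {verdict}")
```

If this reasoning is right, the huge dense-layer weights (which sit outside the packed conv2d region but still get allocated on the remote context when the graph runtime is created) would be what trips VTAMemAlloc.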

The only thing I've changed from the tutorial code is adding the predefined optional parameters to graph_pack(). graph_pack() takes parameters start_pack & stop_pack, which indicate the range of nodes tiled from NCHW to NCHW1n16c and offloaded to VTA. I've added start_name_idx & stop_name_idx, which indicate the positions of the start_pack & stop_pack operators, so that graph_pack() traverses exactly the range needed. The following are the settings I've used for graph_pack():

```
network = "vgg16"
start_pack = "nn.conv2d"
stop_pack = "nn.max_pool2d"
start_name_idx = 0
stop_name_idx = 111

network = "squeezenet1.0"
start_pack = "nn.conv2d"
stop_pack = "nn.avg_pool2d"
start_name_idx = 0
stop_name_idx = 255

network = "alexnet"
start_pack = "nn.conv2d"
stop_pack = "nn.max_pool2d"
start_name_idx = 0
stop_name_idx = 42
```
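To make the role of start_name_idx & stop_name_idx concrete, here is a small standalone sketch (plain Python, no TVM; the operator sequence is made up for illustration) of how one might locate those indices: scan the operator names in traversal order and record where the chosen start_pack / stop_pack operators sit. Note that graph_pack itself counts nodes during its Relay traversal, so this toy list only approximates the real numbering:

```python
def find_pack_range(op_names, start_pack, stop_pack):
    """Return (start_name_idx, stop_name_idx): the first occurrence of
    start_pack and the last occurrence of stop_pack in traversal order."""
    start_idx = op_names.index(start_pack)                           # first match
    stop_idx = len(op_names) - 1 - op_names[::-1].index(stop_pack)   # last match
    return start_idx, stop_idx

# Toy operator sequence, for illustration only.
ops = ["nn.conv2d", "nn.relu", "nn.max_pool2d",
       "nn.conv2d", "nn.relu", "nn.max_pool2d", "nn.dense"]

print(find_pack_range(ops, "nn.conv2d", "nn.max_pool2d"))  # -> (0, 5)
```

With a toy graph like this, packing everything from the first conv2d through the last max_pool2d keeps the trailing dense layer outside the VTA-offloaded region, which is the intent of my settings above.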

This is the code I am using. Uncommenting a different graph_pack setting changes the network being run. For example, to try alexnet, uncomment lines 115 to 119 (the alexnet setting) and comment out lines 125 to 129 (the resnet18_v2 setting).

I am a newbie to this field, so any questions or advice will be appreciated. Thank you.