Hi everyone!
Since I didn't find any information about VTA-packed graphs (produced with vta.top.graph_pack) being supported by auto_scheduler, I decided to try it out. Here's what I did:
```python
# Starting from a PyTorch model converted to TorchScript.
# `scripted_model` and `first_dataset_img` come from my own setup (not shown).
import tvm
import vta
from tvm import relay, auto_scheduler
from vta.top import graph_pack

env = vta.get_env()

input_name = 'input0'
img_shape = tuple(first_dataset_img.size())
shape_list = [(input_name, img_shape)]
relay_graph, params = relay.frontend.from_pytorch(scripted_model, shape_list)

# Quantize the model so it can be packed for VTA.
with tvm.transform.PassContext(opt_level=3):
    with relay.quantize.qconfig():
        relay_graph = relay.quantize.quantize(relay_graph, params)

# Pack the graph for VTA's blocked tensor layout.
start_layer = 'nn.conv2d'
stop_layer = 'nn.adaptive_avg_pool2d'
start_layer_idx = 0
stop_layer_idx = 595
assert env.BLOCK_IN == env.BLOCK_OUT
relay_graph = graph_pack(
    relay_graph["main"],
    env.BATCH,
    env.BLOCK_OUT,
    env.WGT_WIDTH,
    start_name=start_layer,
    stop_name=stop_layer,
    start_name_idx=start_layer_idx,
    stop_name_idx=stop_layer_idx,
)
```
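For reference, printing the packed function shows that the conv2d calls now carry VTA's blocked layout; with env.BATCH = 1 and env.BLOCK_IN = 16 the data layout becomes NCHW1n16c, which matches the layout reported in the error below:

```python
# Optional sanity check: the packed conv2d calls should now use a blocked
# data layout such as "NCHW1n16c" (BATCH = 1, BLOCK_IN = 16).
print(relay_graph)
```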
Then, to extract tasks using auto_scheduler, I tried this:

```python
tasks, task_weights = auto_scheduler.extract_tasks(
    relay_graph, params, 'llvm -mtriple=aarch64-linux-gnu'
)
```
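For context, the follow-up I had in mind was just the standard task listing from the auto_scheduler tutorials, nothing VTA-specific:

```python
# Standard auto_scheduler follow-up (never reached because of the error below):
# print each extracted task and its compute DAG.
for idx, task in enumerate(tasks):
    print("Task %d, workload key: %s" % (idx, task.workload_key))
    print(task.compute_dag)
```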
Instead, I ended up with the following error:
File ".../tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
rv = local_pyfunc(*pyargs)
File ".../tvm/python/tvm/relay/op/strategy/x86.py", line 186, in conv2d_strategy_cpu
elif is_depthwise_conv2d(data.shape, layout, kernel.shape, kernel_layout, groups):
File ".../tvm/python/tvm/relay/op/strategy/generic.py", line 93, in is_depthwise_conv2d
ic = get_conv2d_in_channels(data_shape, data_layout)
File ".../tvm/python/tvm/relay/op/strategy/generic.py", line 75, in get_conv2d_in_channels
raise ValueError("Unknown conv2d data layout {}".format(data_layout))
ValueError: Unknown conv2d data layout NCHW1n16c
I wonder what the cause of this error could be:
- The NCHW layout not being well supported by auto-scheduling?
- VTA-packed convolution layers not being well supported by auto-scheduling?
- Maybe a simple conversion to NHWC using a ConvertLayout pass could help (see the sketch after this list)…
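To make the last point concrete, here is a minimal sketch of what I had in mind, assuming ConvertLayout would be applied to the quantized module before graph_pack; I'm not at all sure this even makes sense for a graph that is later packed for VTA:

```python
# Hedged sketch: ask Relay to convert conv2d to NHWC before packing /
# task extraction. Whether this interacts sensibly with graph_pack is
# exactly what I am unsure about.
desired_layouts = {'nn.conv2d': ['NHWC', 'default']}
seq = tvm.transform.Sequential(
    [
        relay.transform.RemoveUnusedFunctions(),
        relay.transform.ConvertLayout(desired_layouts),
    ]
)
with tvm.transform.PassContext(opt_level=3):
    relay_graph = seq(relay_graph)  # at this point relay_graph is still an IRModule
```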
Can anyone help? Thanks in advance!
P.S.: I am running all of this on an Intel x86 CPU, with TVM at commit 5f828a6cc526c00cb5d5e8658e611d2a93ecf22f.