Got exception in tune_relay_arm.py

I am trying to autotune an MXNet network by following the sample code in tune_relay_arm.py. The sample works fine, but after I switched to my own net, tuning succeeded while compilation failed. Debugging into the code, I found that exceptions already occur during task creation, in the following code.

if try_winograd:
    print('got total tasks %i' % (len(tasks)))
    for i in range(len(tasks)):
        try:  # try winograd template
            tsk = autotvm.task.create(tasks[i].name, tasks[i].args,
                                      tasks[i].target, tasks[i].target_host,
                                      'winograd')
            input_channel = tsk.workload[1][1]
            if input_channel >= 64:
                tasks[i] = tsk
        except Exception:
            print('--------------------got exception No.%i' % (i))
            print(tasks[i])

--------------------got exception No.0
Task(func_name=topi_nn_conv2d, args=(('TENSOR', (1, 1024, 13, 13), 'float32'), ('TENSOR', (75, 1024, 1, 1), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'float32'), kwargs={}, workload=('conv2d', (1, 1024, 13, 13, 'float32'), (75, 1024, 1, 1, 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'float32'))
--------------------got exception No.2
Task(func_name=topi_nn_conv2d, args=(('TENSOR', (1, 1024, 13, 13), 'float32'), ('TENSOR', (512, 1024, 1, 1), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'float32'), kwargs={}, workload=('conv2d', (1, 1024, 13, 13, 'float32'), (512, 1024, 1, 1, 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'float32'))

So task No. 1 is created successfully, while Nos. 0 and 2 fail. I then traced the exception to topi.nn.conv2d in topi_integration.py:

@register("topi_nn_conv2d")
def _topi_nn_conv2d(*args, **kwargs):
    assert not kwargs, "Do not support kwargs in template function call"
    args = deserialize_args(args)
    A, W = args[:2]
    layout = args[-2]
    assert layout == 'NCHW', "only support NCHW currently"
    C = topi.nn.conv2d(*args, **kwargs)
    s = topi.generic.schedule_conv2d_nchw([C])
    return s, [A, W, C]

The line

    C = topi.nn.conv2d(*args, **kwargs)

raises an exception without printing any further information. I checked the args and they look fine too.
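To see what actually fails inside the try block, one option is to print the full traceback instead of silently passing. Below is a minimal, self-contained sketch of the pattern; `create_task` is a hypothetical stand-in for autotvm.task.create, since the real call needs a TVM build:

```python
import traceback

def create_task(i):
    # hypothetical stand-in for autotvm.task.create with the 'winograd'
    # template; as in the log above, some tasks are rejected with an exception
    if i != 1:
        raise AssertionError("winograd template rejected this workload")
    return "task-%d" % i

for i in range(3):
    try:
        tsk = create_task(i)
        print("got normal No.%i" % i)
    except Exception:
        print("got exception No.%i" % i)
        traceback.print_exc()  # show the real cause instead of discarding it
```

With the traceback printed, the failing assertion inside the template shows up directly instead of a bare "got exception" line.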

What should I do to move forward? Thank you.

Can you post the complete error message?

Here is the code for task creation:

if try_winograd:
    print('got total tasks %i' % (len(tasks)))
    for i in range(len(tasks)):
        try:  # try winograd template
            tsk = autotvm.task.create(tasks[i].name, tasks[i].args,
                                      tasks[i].target, tasks[i].target_host,
                                      'winograd')
            input_channel = tsk.workload[1][1]
            if input_channel >= 64:
                tasks[i] = tsk
            print('--------------------got normal No.%i' % (i))
            print(tasks[i])
        except Exception:
            print('#####################got exception No.%i' % (i))
            print(tasks[i])

# if we want to use spatial pack for depthwise convolution
print('-----------------topi_nn_depthwise_conv2d_nchw---------------')
if try_spatial_pack_depthwise:
    tuner = 'xgb_knob'
    for i in range(len(tasks)):
        if tasks[i].name == 'topi_nn_depthwise_conv2d_nchw':
            try:
                tsk = autotvm.task.create(tasks[i].name, tasks[i].args,
                                          tasks[i].target, tasks[i].target_host,
                                          'contrib_spatial_pack')
                tasks[i] = tsk
                # print('--------------------got depthwise normal No.%i' % (i))
                # print(tasks[i])
            except Exception:
                print('--------------------got depthwise exception No.%i' % (i))
                print(tasks[i])

The output info is: --------------------got normal No.0 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 128, 128), ‘float32’), (‘TENSOR’, (6, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 128, 128, ‘float32’), (6, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.1 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 56, 128, 128), ‘float32’), (‘TENSOR’, (32, 56, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 56, 128, 128, ‘float32’), (32, 56, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.2 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 192, 64, 64), ‘float32’), (‘TENSOR’, (32, 192, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 192, 64, 64, ‘float32’), (32, 192, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.3 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 144, 64, 64), ‘float32’), (‘TENSOR’, (32, 144, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 144, 64, 64, ‘float32’), (32, 144, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.4 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 144, 128, 128), ‘float32’), (‘TENSOR’, (144, 1, 3, 3), ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 144, 128, 128, ‘float32’), (144, 1, 3, 3, ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’)) #####################got exception No.5 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 24, 128, 128), ‘float32’), (‘TENSOR’, (144, 24, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 24, 128, 128, ‘float32’), (144, 24, 1, 1, ‘float32’), (1, 1), (0, 
0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.6 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 144, 128, 128), ‘float32’), (‘TENSOR’, (24, 144, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 144, 128, 128, ‘float32’), (24, 144, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.7 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 96, 128, 128), ‘float32’), (‘TENSOR’, (24, 96, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 96, 128, 128, ‘float32’), (24, 96, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.8 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 96, 256, 256), ‘float32’), (‘TENSOR’, (96, 1, 3, 3), ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 96, 256, 256, ‘float32’), (96, 1, 3, 3, ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’)) #####################got exception No.9 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 16, 256, 256), ‘float32’), (‘TENSOR’, (96, 16, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 16, 256, 256, ‘float32’), (96, 16, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.10 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 256, 256), ‘float32’), (‘TENSOR’, (16, 32, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 256, 256, ‘float32’), (16, 32, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.11 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 32, 256, 256), ‘float32’), (‘TENSOR’, (32, 1, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 32, 256, 256, 
‘float32’), (32, 1, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’)) #####################got exception No.12 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 256, 256), ‘float32’), (‘TENSOR’, (32, 32, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 256, 256, ‘float32’), (32, 32, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.13 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 3, 512, 512), ‘float32’), (‘TENSOR’, (32, 3, 3, 3), ‘float32’), (2, 2), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 3, 512, 512, ‘float32’), (32, 3, 3, 3, ‘float32’), (2, 2), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.14 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 144, 128, 128), ‘float32’), (‘TENSOR’, (144, 1, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 144, 128, 128, ‘float32’), (144, 1, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’)) #####################got exception No.15 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 192, 64, 64), ‘float32’), (‘TENSOR’, (192, 1, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 192, 64, 64, ‘float32’), (192, 1, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’)) #####################got exception No.16 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 64, 64), ‘float32’), (‘TENSOR’, (192, 32, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 64, 64, ‘float32’), (192, 32, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.17 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 24, 128, 128), ‘float32’), (‘TENSOR’, (24, 24, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, 
workload=(‘conv2d’, (1, 24, 128, 128, ‘float32’), (24, 24, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.18 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 64, 64), ‘float32’), (‘TENSOR’, (6, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 64, 64, ‘float32’), (6, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.19 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 64, 64, 64), ‘float32’), (‘TENSOR’, (32, 64, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 64, 64, 64, ‘float32’), (32, 64, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.20 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 576, 32, 32), ‘float32’), (‘TENSOR’, (96, 576, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 576, 32, 32, ‘float32’), (96, 576, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.21 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 384, 32, 32), ‘float32’), (‘TENSOR’, (96, 384, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 384, 32, 32, ‘float32’), (96, 384, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.22 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 384, 32, 32), ‘float32’), (‘TENSOR’, (384, 1, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 384, 32, 32, ‘float32’), (384, 1, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’)) #####################got exception No.23 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 64, 32, 32), ‘float32’), (‘TENSOR’, (384, 64, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), 
kwargs={}, workload=(‘conv2d’, (1, 64, 32, 32, ‘float32’), (384, 64, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.24 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 384, 32, 32), ‘float32’), (‘TENSOR’, (64, 384, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 384, 32, 32, ‘float32’), (64, 384, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.25 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 192, 32, 32), ‘float32’), (‘TENSOR’, (64, 192, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 192, 32, 32, ‘float32’), (64, 192, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.26 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 192, 64, 64), ‘float32’), (‘TENSOR’, (192, 1, 3, 3), ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 192, 64, 64, ‘float32’), (192, 1, 3, 3, ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’)) #####################got exception No.27 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 576, 32, 32), ‘float32’), (‘TENSOR’, (576, 1, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 576, 32, 32, ‘float32’), (576, 1, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’)) #####################got exception No.28 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 96, 32, 32), ‘float32’), (‘TENSOR’, (576, 96, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 96, 32, 32, ‘float32’), (576, 96, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.29 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 64, 64), ‘float32’), (‘TENSOR’, (32, 32, 1, 1), ‘float32’), (1, 1), (0, 
0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 64, 64, ‘float32’), (32, 32, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.30 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 32, 32), ‘float32’), (‘TENSOR’, (6, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 32, 32, ‘float32’), (6, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.31 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 128, 32, 32), ‘float32’), (‘TENSOR’, (32, 128, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 128, 32, 32, ‘float32’), (32, 128, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.32 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 960, 16, 16), ‘float32’), (‘TENSOR’, (160, 960, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 960, 16, 16, ‘float32’), (160, 960, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.33 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 576, 16, 16), ‘float32’), (‘TENSOR’, (160, 576, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 576, 16, 16, ‘float32’), (160, 576, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.34 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 576, 32, 32), ‘float32’), (‘TENSOR’, (576, 1, 3, 3), ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 576, 32, 32, ‘float32’), (576, 1, 3, 3, ‘float32’), (2, 2), (1, 1), (1, 1), ‘float32’)) #####################got exception No.35 Task(func_name=topi_nn_depthwise_conv2d_nchw, args=((‘TENSOR’, (1, 960, 16, 16), ‘float32’), (‘TENSOR’, (960, 1, 3, 3), 
‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’), kwargs={}, workload=(‘depthwise_conv2d_nchw’, (1, 960, 16, 16, ‘float32’), (960, 1, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘float32’)) #####################got exception No.36 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 160, 16, 16), ‘float32’), (‘TENSOR’, (960, 160, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 160, 16, 16, ‘float32’), (960, 160, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.37 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 96, 32, 32), ‘float32’), (‘TENSOR’, (96, 96, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 96, 32, 32, ‘float32’), (96, 96, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.38 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 16, 16), ‘float32’), (‘TENSOR’, (6, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 16, 16, ‘float32’), (6, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.39 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 192, 16, 16), ‘float32’), (‘TENSOR’, (32, 192, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 192, 16, 16, ‘float32’), (32, 192, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.40 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 128, 16, 16), ‘float32’), (‘TENSOR’, (256, 128, 3, 3), ‘float32’), (2, 2), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 128, 16, 16, ‘float32’), (256, 128, 3, 3, ‘float32’), (2, 2), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.41 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 160, 16, 16), ‘float32’), (‘TENSOR’, (128, 160, 1, 1), 
‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 160, 16, 16, ‘float32’), (128, 160, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.42 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 160, 16, 16), ‘float32’), (‘TENSOR’, (160, 160, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 160, 16, 16, ‘float32’), (160, 160, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.43 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 8, 8), ‘float32’), (‘TENSOR’, (6, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 8, 8, ‘float32’), (6, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.44 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 288, 8, 8), ‘float32’), (‘TENSOR’, (32, 288, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 288, 8, 8, ‘float32’), (32, 288, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.45 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 128, 8, 8), ‘float32’), (‘TENSOR’, (256, 128, 3, 3), ‘float32’), (2, 2), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 128, 8, 8, ‘float32’), (256, 128, 3, 3, ‘float32’), (2, 2), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.46 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 256, 8, 8), ‘float32’), (‘TENSOR’, (128, 256, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 256, 8, 8, ‘float32’), (128, 256, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.47 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 256, 8, 8), ‘float32’), (‘TENSOR’, (256, 256, 1, 1), ‘float32’), 
(1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 256, 8, 8, ‘float32’), (256, 256, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.48 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 4, 4), ‘float32’), (‘TENSOR’, (6, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 4, 4, ‘float32’), (6, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.49 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 256, 4, 4), ‘float32’), (‘TENSOR’, (32, 256, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 256, 4, 4, ‘float32’), (32, 256, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) #####################got exception No.50 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 256, 4, 4), ‘float32’), (‘TENSOR’, (256, 256, 1, 1), ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 256, 4, 4, ‘float32’), (256, 256, 1, 1, ‘float32’), (1, 1), (0, 0), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.51 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 128, 128), ‘float32’), (‘TENSOR’, (24, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 128, 128, ‘float32’), (24, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.52 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 64, 64), ‘float32’), (‘TENSOR’, (24, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 64, 64, ‘float32’), (24, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.53 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 32, 32), ‘float32’), (‘TENSOR’, (24, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, 
‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 32, 32, ‘float32’), (24, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.54 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 16, 16), ‘float32’), (‘TENSOR’, (24, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 16, 16, ‘float32’), (24, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.55 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 8, 8), ‘float32’), (‘TENSOR’, (24, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 8, 8, ‘float32’), (24, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.56 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 4, 4), ‘float32’), (‘TENSOR’, (24, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 4, 4, ‘float32’), (24, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.57 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 128, 128), ‘float32’), (‘TENSOR’, (12, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 128, 128, ‘float32’), (12, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.58 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 64, 64), ‘float32’), (‘TENSOR’, (12, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 64, 64, ‘float32’), (12, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.59 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 32, 32), ‘float32’), (‘TENSOR’, (12, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, 
workload=(‘conv2d’, (1, 32, 32, 32, ‘float32’), (12, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.60 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 16, 16), ‘float32’), (‘TENSOR’, (12, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 16, 16, ‘float32’), (12, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.61 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 8, 8), ‘float32’), (‘TENSOR’, (12, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 8, 8, ‘float32’), (12, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’)) --------------------got normal No.62 Task(func_name=topi_nn_conv2d, args=((‘TENSOR’, (1, 32, 4, 4), ‘float32’), (‘TENSOR’, (12, 32, 3, 3), ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’), kwargs={}, workload=(‘conv2d’, (1, 32, 4, 4, ‘float32’), (12, 32, 3, 3, ‘float32’), (1, 1), (1, 1), (1, 1), ‘NCHW’, ‘float32’))

After tuning, the compilation error is:

WARNING:autotvm:Cannot find config for target=llvm -device=arm_cpu -target=aarch64-linux-gnu, workload=(‘conv2d_transpose_nchw’, (1, 32, 64, 64, ‘float32’), (32, 32, 4, 4, ‘float32’), (2, 2), (1, 1), ‘float32’). A fallback configuration is used, which may bring great performance regression. Traceback (most recent call last): File “/home/deep/workssd/arm/tvm_app/tune_relay_arm_cpu_rk3399.py”, line 436, in tune_and_evaluate(tuning_option) File “/home/deep/workssd/arm/tvm_app/tune_relay_arm_cpu_rk3399.py”, line 400, in tune_and_evaluate net, target=target, params=params) File “/home/deep/workssd/arm/tvm/python/tvm/relay/build_module.py”, line 196, in build params) File “/home/deep/workssd/arm/tvm/python/tvm/relay/build_module.py”, line 107, in build self._build(func, target, target_host) File "/home/deep/workssd/arm/tvm/python/tvm/ffi/ctypes/function.py", line 209, in call raise get_last_ffi_error() tvm.ffi.base.TVMError: Traceback (most recent call last): [bt] (8) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::CallNode const*)+0xb28) [0x7f4ec1395818] [bt] (7) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::Expr const&)+0x566) [0x7f4ec138ee46] [bt] (6) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::CallNode const*)+0xb28) [0x7f4ec1395818] [bt] (5) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::Expr const&)+0x566) [0x7f4ec138ee46] [bt] (4) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::CallNode const*)+0x6b5) [0x7f4ec13953a5] [bt] (3) /home/deep/workssd/arm/tvm/build/libtvm.so(+0x76eb4c) [0x7f4ec136ab4c] [bt] (2) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&)+0x469) [0x7f4ec1372949] [bt] (1) 
/home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function const&)+0xd77) [0x7f4ec1371de7] [bt] (0) /home/deep/workssd/arm/tvm/build/libtvm.so(+0xc2cebb) [0x7f4ec1828ebb] File “/home/deep/workssd/arm/tvm/python/tvm/_ffi/_ctypes/function.py”, line 71, in cfun rv = local_pyfunc(*pyargs) File “/home/deep/workssd/arm/tvm/python/tvm/relay/op/nn/_nn.py”, line 159, in schedule_conv2d assert op is not None TVMError: AssertionError Error in sys.excepthook: Traceback (most recent call last): File “/usr/lib/python3/dist-packages/apport_python_hook.py”, line 63, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File “/usr/lib/python3/dist-packages/apport/init.py”, line 5, in from apport.report import Report File “/usr/lib/python3/dist-packages/apport/report.py”, line 30, in import apport.fileutils File “/usr/lib/python3/dist-packages/apport/fileutils.py”, line 23, in from apport.packaging_impl import impl as packaging File “/usr/lib/python3/dist-packages/apport/packaging_impl.py”, line 23, in import apt File “/usr/lib/python3/dist-packages/apt/init.py”, line 23, in import apt_pkg ModuleNotFoundError: No module named ‘apt_pkg’


topi.nn.conv2d raises the exception at line 644 in arm_cpu/conv2d.py:

    assert KH == 3 and KW == 3 and HPAD == 1 and WPAD == 1 and HSTR == 1 and WSTR == 1

This assertion means only 3x3 kernels with stride 1 and padding 1 are accepted by the winograd template, so the exception is expected behavior?
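That matches the Task dumps above: every failing workload has a 1x1 kernel, a stride of 2, or is depthwise, while every "got normal" one is a 3x3/stride-1/pad-1 conv2d. As a sanity check, the same constraint can be applied to the printed workload tuples. `winograd_applicable` and the two sample workloads below are illustrative helpers written here, not TVM APIs:

```python
def winograd_applicable(workload):
    # workload layout as printed by the Task dumps above:
    # (op, input, kernel, strides, padding, dilation, layout, dtype)
    kernel, strides, padding = workload[2], workload[3], workload[4]
    kh, kw = kernel[2], kernel[3]
    return (kh, kw) == (3, 3) and tuple(strides) == (1, 1) and tuple(padding) == (1, 1)

# task No.0 from the first post: 1x1 kernel -> rejected by the assertion
no0 = ('conv2d', (1, 1024, 13, 13, 'float32'), (75, 1024, 1, 1, 'float32'),
       (1, 1), (0, 0), (1, 1), 'NCHW', 'float32')
# a "got normal" task: 3x3 kernel, stride 1, pad 1 -> accepted
no1 = ('conv2d', (1, 32, 128, 128, 'float32'), (6, 32, 3, 3, 'float32'),
       (1, 1), (1, 1), (1, 1), 'NCHW', 'float32')

print(winograd_applicable(no0), winograd_applicable(no1))  # prints: False True
```

Filtering tasks with a check like this before calling autotvm.task.create would avoid relying on exceptions for control flow.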

So the question is: why does compilation fail?

@kevinthesun

It looks like this error happens during compilation. Can you try compiling with autotvm.FallbackContext()?

Thank you. I used the following code to compile:

# compile kernels with history best records
# with autotvm.apply_history_best(log_file):
with autotvm.FallbackContext():
    print("Compile...")
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build_module.build(
            net, target=target, params=params)

The error is now:

    Traceback (most recent call last):
      File "/home/deep/workssd/arm/tvm_app/deploy_mb3_yolov3_on_rk3399.py", line 231, in <module>
        graph, lib, params = relay.build(func, target, params=params, target_host=target_host)
      File "/home/deep/workssd/arm/tvm/python/tvm/relay/build_module.py", line 196, in build
        params)
      File "/home/deep/workssd/arm/tvm/python/tvm/relay/build_module.py", line 107, in build
        self._build(func, target, target_host)
      File "/home/deep/workssd/arm/tvm/python/tvm/_ffi/_ctypes/function.py", line 209, in __call__
        raise get_last_ffi_error()
    ValueError: Traceback (most recent call last):
      [bt] (8) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::ForwardRewriter::VisitExpr(tvm::relay::CallNode const*)+0x55a) [0x7f35db5901ca]
      [bt] (7) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::relay::Expr const&)+0x9e) [0x7f35db3c227e]
      [bt] (6) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>::VisitExpr(tvm::relay::Expr const&)+0xca) [0x7f35db38134a]
      [bt] (5) /home/deep/workssd/arm/tvm/build/libtvm.so(std::_Function_handler<tvm::relay::Expr (tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>*), tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>::InitVTable()::{lambda(tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>*)#6}>::_M_invoke(std::_Any_data const&, tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>*&&)+0x34) [0x7f35db37c804]
      [bt] (4) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::ForwardRewriter::VisitExpr(tvm::relay::CallNode const*)+0x6dc) [0x7f35db59034c]
      [bt] (3) /home/deep/workssd/arm/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void tvm::runtime::TypedPackedFunc<tvm::relay::Expr (tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&)>::AssignTypedLambda<tvm::relay::Expr (*)(tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&)>(tvm::relay::Expr (*)(tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xad) [0x7f35db557bad]
      [bt] (2) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::alter_op_layout::AlterOpLayoutRewrite(tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&)+0x13cd) [0x7f35db55350d]
      [bt] (1) /home/deep/workssd/arm/tvm/build/libtvm.so(tvm::relay::alter_op_layout::CallAlter(tvm::relay::Call const&, std::vector<tvm::relay::Expr, std::allocator<tvm::relay::Expr> > const&)+0x6ad) [0x7f35db550e2d]
      [bt] (0) /home/deep/workssd/arm/tvm/build/libtvm.so(+0xc2cebb) [0x7f35db7dfebb]
      File "/home/deep/workssd/arm/tvm/python/tvm/_ffi/_ctypes/function.py", line 71, in cfun
        rv = local_pyfunc(*pyargs)
      File "/home/deep/workssd/arm/tvm/python/tvm/relay/op/nn/_nn.py", line 178, in alter_op_layout_conv2d
        return topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, op)
      File "</usr/local/lib/python3.6/dist-packages/decorator.py:decorator-gen-21>", line 2, in conv2d_alter_layout
      File "/home/deep/workssd/arm/tvm/python/tvm/target.py", line 372, in dispatch_func
        return dispatch_dict[k](*args, **kwargs)
      File "/home/deep/workssd/arm/tvm/topi/python/topi/arm_cpu/conv2d.py", line 709, in _alter_conv2d_layout_arm
        new_attrs["channels"] = inputs[1].checked_type.shape[attrs['kernel_layout'].index('O')]
      File "/home/deep/workssd/arm/tvm/python/tvm/relay/expr.py", line 47, in checked_type
        raise ValueError("The type checker has not populated"
    ValueError: The type checker has not populated the checked_type for this node

(The "ModuleNotFoundError: No module named 'apt_pkg'" noise that followed in sys.excepthook comes from Ubuntu's apport crash hook and is unrelated to TVM.)


The error is raised inside this function in topi/arm_cpu/conv2d.py:

    @conv2d_alter_layout.register(["arm_cpu"])
    def _alter_conv2d_layout_arm(attrs, inputs, tinfos, F):

        copy_inputs = [s for s in inputs]
        new_attrs = {k: attrs[k] for k in attrs.keys()}

        if F.__name__ == 'tvm.relay.op':
            # Derive channels for frontends (e.g. ONNX) that miss the "channels" field.
            print(inputs[1])
            new_attrs["channels"] = inputs[1].checked_type.shape[attrs['kernel_layout'].index('O')]

Here inputs[1].checked_type is None, although it was populated for the previous calls.
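For context, relay/expr.py raises precisely because the internal type field was never filled in by type inference. A minimal simulation of that property (ExprSketch is a made-up stand-in; only the error message is taken from the traceback above):

```python
class ExprSketch:
    """Mimics the checked_type property from relay/expr.py: the field is
    only populated after type inference has run over the whole module."""

    def __init__(self):
        self._checked_type_ = None  # not yet populated by the type checker

    @property
    def checked_type(self):
        if self._checked_type_ is None:
            # Same failure mode as in the traceback above.
            raise ValueError(
                "The type checker has not populated the checked_type for this node")
        return self._checked_type_


expr = ExprSketch()
try:
    _ = expr.checked_type        # fails: type inference has not run
    error_msg = None
except ValueError as err:
    error_msg = str(err)

# Once the field is filled in, access succeeds.
expr._checked_type_ = ("Tensor", (40, 72, 1, 1), "float32")
populated = expr.checked_type
```

This mirrors why the pass works for most conv2d nodes but crashes on the one whose type was never inferred.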

The last output of print(inputs[1]) is:

    v0.0.1
    free_var %data: Tensor[(1, 3, 416, 416), float32]
    %0 = nn.conv2d(%data, meta[relay.Constant][1] /* ty=Tensor[(16, 3, 3, 3), float32] */, strides=[2, 2], padding=[1, 1], channels=16, kernel_size=[3, 3])
    %1 = add(%0, meta[relay.Constant][2] /* ty=Tensor[(16, 1, 1), float32] */)
    %2 = add(%1, 3f /* ty=float32 */)
    %3 = maximum(%2, 0f /* ty=float32 */)
    %4 = minimum(%3, 6f /* ty=float32 */)
    %5 = multiply(%1, %4)
    %6 = divide(%5, 6f /* ty=float32 */)
    %7 = nn.conv2d(%6, meta[relay.Constant][3] /* ty=Tensor[(16, 16, 1, 1), float32] */, channels=16, kernel_size=[1, 1])
    %8 = add(%7, meta[relay.Constant][4] /* ty=Tensor[(16, 1, 1), float32] */)
    %9 = nn.relu(%8)
    %10 = nn.conv2d(%9, meta[relay.Constant][5] /* ty=Tensor[(16, 1, 3, 3), float32] */, padding=[1, 1], groups=16, channels=16, kernel_size=[3, 3])
    %11 = add(%10, meta[relay.Constant][6] /* ty=Tensor[(16, 1, 1), float32] */)
    %12 = nn.relu(%11)
    %13 = nn.conv2d(%12, meta[relay.Constant][7] /* ty=Tensor[(16, 16, 1, 1), float32] */, channels=16, kernel_size=[1, 1])
    %14 = add(%13, meta[relay.Constant][8] /* ty=Tensor[(16, 1, 1), float32] */)
    %15 = add(%14, %6)
    %16 = nn.conv2d(%15, meta[relay.Constant][9] /* ty=Tensor[(64, 16, 1, 1), float32] */, channels=64, kernel_size=[1, 1])
    %17 = add(%16, meta[relay.Constant][10] /* ty=Tensor[(64, 1, 1), float32] */)
    %18 = nn.relu(%17)
    %19 = nn.conv2d(%18, meta[relay.Constant][11] /* ty=Tensor[(64, 1, 3, 3), float32] */, strides=[2, 2], padding=[1, 1], groups=64, channels=64, kernel_size=[3, 3])
    %20 = add(%19, meta[relay.Constant][12] /* ty=Tensor[(64, 1, 1), float32] */)
    %21 = nn.relu(%20)
    %22 = nn.conv2d(%21, meta[relay.Constant][13] /* ty=Tensor[(24, 64, 1, 1), float32] */, channels=24, kernel_size=[1, 1])
    %23 = add(%22, meta[relay.Constant][14] /* ty=Tensor[(24, 1, 1), float32] */)
    %24 = nn.conv2d(%23, meta[relay.Constant][15] /* ty=Tensor[(72, 24, 1, 1), float32] */, channels=72, kernel_size=[1, 1])
    %25 = add(%24, meta[relay.Constant][16] /* ty=Tensor[(72, 1, 1), float32] */)
    %26 = nn.relu(%25)
    %27 = nn.conv2d(%26, meta[relay.Constant][17] /* ty=Tensor[(72, 1, 3, 3), float32] */, padding=[1, 1], groups=72, channels=72, kernel_size=[3, 3])
    %28 = add(%27, meta[relay.Constant][18] /* ty=Tensor[(72, 1, 1), float32] */)
    %29 = nn.relu(%28)
    %30 = nn.conv2d(%29, meta[relay.Constant][19] /* ty=Tensor[(24, 72, 1, 1), float32] */, channels=24, kernel_size=[1, 1])
    %31 = add(%30, meta[relay.Constant][20] /* ty=Tensor[(24, 1, 1), float32] */)
    %32 = add(%31, %23)
    %33 = nn.conv2d(%32, meta[relay.Constant][21] /* ty=Tensor[(72, 24, 1, 1), float32] */, channels=72, kernel_size=[1, 1])
    %34 = add(%33, meta[relay.Constant][22] /* ty=Tensor[(72, 1, 1), float32] */)
    %35 = nn.relu(%34)
    %36 = nn.conv2d(%35, meta[relay.Constant][23] /* ty=Tensor[(72, 1, 5, 5), float32] */, strides=[2, 2], padding=[2, 2], groups=72, channels=72, kernel_size=[5, 5])
    %37 = add(%36, meta[relay.Constant][24] /* ty=Tensor[(72, 1, 1), float32] */)
    %38 = nn.relu(%37)
    %39 = contrib.adaptive_avg_pool2d(%38, output_size=[1, 1])
    %40 = nn.conv2d(%39, meta[relay.Constant][25] /* ty=Tensor[(18, 72, 1, 1), float32] */, channels=18, kernel_size=[1, 1])
    %41 = expand_dims(meta[relay.Constant][26] /* ty=Tensor[(18,), float32] */, axis=1, num_newaxis=2) /* ty=Tensor[(18, 1, 1), float32] */
    %42 = add(%40, %41)
    %43 = nn.relu(%42)
    %44 = nn.conv2d(%43, meta[relay.Constant][27] /* ty=Tensor[(72, 18, 1, 1), float32] */, channels=72, kernel_size=[1, 1])
    %45 = expand_dims(meta[relay.Constant][28] /* ty=Tensor[(72,), float32] */, axis=1, num_newaxis=2) /* ty=Tensor[(72, 1, 1), float32] */
    %46 = add(%44, %45)
    %47 = add(%46, 3f /* ty=float32 */)
    %48 = maximum(%47, 0f /* ty=float32 */)
    %49 = minimum(%48, 6f /* ty=float32 */)
    %50 = divide(%49, 6f /* ty=float32 */)
    %51 = squeeze(%50, axis=[0, 2, 3])
    %52 = expand_dims(%51, axis=1, num_newaxis=2)
    multiply(meta[relay.Constant][0] /* ty=Tensor[(40, 72, 1, 1), float32] */, %52)
    // meta data omitted. you can use show_meta_data=True to include meta data

I have commented out the offending lines:

    if F.__name__ == 'tvm.relay.op':
        # Derive channels for frontends (e.g. ONNX) that miss the "channels" field.
        print(inputs[1])
        # print(inputs[1].checked_type.shape[attrs['kernel_layout']])
        # if inputs[1].checked_type is None:
        #     print('get checked_type none')
        # else:
        #     new_attrs["channels"] = inputs[1].checked_type.shape[attrs['kernel_layout'].index('O')]

With autotvm.FallbackContext() the compilation now succeeds.

I have tried to make the compilation pass. There are two bugs in the original compilation code:

1. arm_cpu/conv2d.py

    if F.__name__ == 'tvm.relay.op':
        # Derive channels for frontends (e.g. ONNX) that miss the "channels" field.
        if "channels" not in new_attrs.keys():
            new_attrs["channels"] = inputs[1].checked_type.shape[attrs['kernel_layout'].index('O')]

inputs[1].checked_type is sometimes None, so the derivation must be skipped when the frontend has already set "channels".
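The guard above can be reduced to a plain-Python sketch (a hypothetical helper; the dict stands in for new_attrs, and a None shape models an unpopulated checked_type):

```python
def maybe_set_channels(new_attrs, kernel_shape, kernel_layout):
    """Derive the 'channels' attribute only when it is safe to do so.

    new_attrs: dict of conv2d attributes (stand-in for TVM's new_attrs).
    kernel_shape: weight tensor shape tuple, or None to model a node whose
                  checked_type was never populated.
    kernel_layout: layout string such as 'OIHW'; the position of 'O' marks
                   the output-channel dimension.
    """
    if "channels" in new_attrs:
        return new_attrs  # the frontend already supplied it; nothing to do
    if kernel_shape is None:
        return new_attrs  # type info missing: skip instead of crashing
    new_attrs["channels"] = kernel_shape[kernel_layout.index('O')]
    return new_attrs
```

With an OIHW kernel of shape (40, 72, 1, 1) this yields channels=40, while a missing shape is skipped silently rather than raising the checked_type error.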

2. tvm/relay/op/nn/_nn.py

    def find_conv2d_op(op):
        """Find the op with conv2d in its tag by traversing."""
        # if 'conv2d' in op.tag:
        if 'conv' in op.tag:
            return op
        for tensor in op.input_tensors:
            op_ = find_conv2d_op(tensor.op)
            if op_ is not None:
                return op_
        return None

Some ops carry a tag like depthwise_conv, which contains "conv" but not "conv2d", so the original lookup misses them. The problem is caused by assigning the wrong op tag during the build process.
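The mismatch is easy to reproduce with plain substring matching (the tag strings below are illustrative examples, not copied from TOPI):

```python
# Tags that different compute declarations might attach to their ops;
# only the matching rule matters here, the exact strings are made up.
tags = ["conv2d_nchw", "depthwise_conv_nchw", "injective"]

# Matching on 'conv2d' misses the depthwise tag entirely.
found_conv2d = [t for t in tags if "conv2d" in t]

# Matching on the looser 'conv' substring catches both conv variants.
found_conv = [t for t in tags if "conv" in t]
```

This is why relaxing the check from 'conv2d' to 'conv' makes find_conv2d_op locate the depthwise op, at the cost of matching any tag that happens to contain "conv".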

Looking forward to seeing this fixed.

@kevinthesun

Have you tried changing the op tag in https://github.com/dmlc/tvm/blob/master/topi/python/topi/arm_cpu/depthwise_conv2d.py#L311 and https://github.com/dmlc/tvm/blob/master/topi/python/topi/arm_cpu/depthwise_conv2d.py#L137?

I haven't tried it, because I have only just started to learn TVM, but I think that would fix bug 2 as well.
By the way, there are a lot of stride (1, 1) convolutions in my network; why are they not being tuned?

Has this problem been resolved? I hit the same problem with tune_relay_cuda.py.