Official tutorial (tune_relay_x86.py) fails with a different tuner

Hi, I’m trying to run an experiment by switching tuners, and I’m hitting an issue that I cannot understand.

First off, the random tuner works perfectly. However, when I switch to other tuners such as xgb or ga, the tuning process still finishes successfully, but compilation then fails with the error messages shown after the sketch below.
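For reference, my change is roughly the following (a hypothetical sketch, not my exact script: the log-file name, trial cap, and runner settings are placeholders, and only the tuner name and trial count differ from the tutorial):

```python
from tvm import autotvm

# Sketch of my edit to the tutorial's tuning_option.
# Placeholder values; only "tuner" (and the trial cap below) differ from the tutorial.
log_file = "resnet-50.log"

tuning_option = {
    "log_filename": log_file,
    "tuner": "xgb",  # was "random" in the tutorial; "ga" etc. fail the same way for me
    "early_stopping": None,
    "measure_option": autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=1000),
    ),
}

# Inside tune_kernels I also cap the number of trials per task, e.g.:
# n_trial = min(len(task.config_space), 2000)
```

Here is the error I get during compilation: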

Traceback (most recent call last):
  File "tune_relay_x86.py", line 217, in <module>
    tune_and_evaluate(tuning_option)
  File "tune_relay_x86.py", line 198, in tune_and_evaluate
    mod, target=target, params=params)
  File "/home/sunggg/Projects/tvm/python/tvm/relay/build_module.py", line 249, in build
    graph_json, mod, params = bld_mod.build(func, target, target_host, params)
  File "/home/sunggg/Projects/tvm/python/tvm/relay/build_module.py", line 119, in build
    self._build(func, target, target_host)
  File "/home/sunggg/Projects/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr(tvm::RelayExpr const&)+0x99) [0x7fede3528949]
  [bt] (7) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExp
  [bt] (6) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::InitVTable()::{lambda(t::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>*)+0x27) [0x7fede3518b17]
  [bt] (5) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x15a) [0x7fede351db0a]
  [bt] (4) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr(tvm::RelayExpr const&)+0x99) [0x7fede3528949]
  [bt] (3) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExp
  [bt] (2) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::InitVTable()::{lambda(t::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>*)+0x27) [0x7fede3518b17]
  [bt] (1) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x689) [0x7fede351e039]
  [bt] (0) /home/sunggg/Projects/tvm/build/libtvm.so(+0x13b725b) [0x7fede367225b]
  File "/home/sunggg/Projects/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 78, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/sunggg/Projects/tvm/python/tvm/relay/backend/compile_engine.py", line 250, in lower_call
    op, call.attrs, inputs, ret_type, target)
  File "/home/sunggg/Projects/tvm/python/tvm/relay/backend/compile_engine.py", line 198, in select_implementation
    outs = impl.compute(attrs, inputs, out_type)
  File "/home/sunggg/Projects/tvm/python/tvm/relay/op/op.py", line 168, in compute
    return _OpImplementationCompute(self, attrs, inputs, out_type)
  File "/home/sunggg/Projects/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
    raise get_last_ffi_error()
  [bt] (3) /home/sunggg/Projects/tvm/build/libtvm.so(TVMFuncCall+0x65) [0x7fede3676dc5]
  [bt] (2) /home/sunggg/Projects/tvm/build/libtvm.so(+0x12fc103) [0x7fede35b7103]
  [bt] (1) /home/sunggg/Projects/tvm/build/libtvm.so(tvm::relay::OpImplementation::Compute(tvm::Attrs const&, tvm::Array<tvm::te::Tensor, void> const&, tvm::Typ
  [bt] (0) /home/sunggg/Projects/tvm/build/libtvm.so(+0x13b725b) [0x7fede367225b]
  File "/home/sunggg/Projects/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 78, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/sunggg/Projects/tvm/python/tvm/relay/op/strategy/generic.py", line 452, in _compute_dense
    return [topi_compute(inputs[0], inputs[1], None, out_dtype)]
  File "/home/sunggg/Projects/tvm/python/tvm/autotvm/task/topi_integration.py", line 154, in wrapper
    cfg = DispatchContext.current.query(tgt, workload)
  File "/home/sunggg/Projects/tvm/python/tvm/autotvm/task/dispatcher.py", line 72, in query
    ret = self._query_inside(target, workload)
  File "/home/sunggg/Projects/tvm/python/tvm/autotvm/task/dispatcher.py", line 414, in _query_inside
    assert wkl == workload
TVMError: AssertionError

All I changed from the tutorial are the number of trials and the tuning algorithm, as sketched above. Is this an installation issue? Thank you in advance!

Could you delete the old tuning logs, then re-run the script and see if that resolves the issue?
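If you want to do that from the script itself, a minimal sketch (assuming the log-file name used in your tuning_option; adjust the path to whatever you actually use) would be:

```python
import os

# Use whatever your tuning_option["log_filename"] points at.
log_file = "resnet-50.log"

# Remove tuning records left over from the earlier run; stale entries mixed with
# the new ones can trip the `assert wkl == workload` check during compilation.
if os.path.exists(log_file):
    os.remove(log_file)
```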

Thanks for the swift reply.

Yes, that was the issue. I think the error message is quite misleading given that the cause was so trivial. Anyway, I appreciate your help!