Invalid comm_reducer?

Hi,

I am trying to implement a custom reduce operation via tvm.te.comm_reducer. I got the example at https://tvm.apache.org/docs/how_to/work_with_schedules/tuple_inputs.html#reduction-with-tuple-inputs working and am now adapting it to my needs.

I tried making both input buffers 1D and doing the reduction on two values that are computed from the input buffers:

import tvm
from tvm import te, auto_scheduler

@auto_scheduler.register_workload
def func(M, N):
    val_m = te.placeholder((M,), name='val_m', dtype='int32')
    val_n = te.placeholder((N,), name='val_n', dtype='int32')

    def fcombine(x, y):
        # keep the (t0, t1) pair whose t1 component is larger
        t0 = tvm.tir.Select(x[1] >= y[1], x[0], y[0])
        t1 = tvm.tir.Select(x[1] >= y[1], x[1], y[1])
        return t0, t1

    def fidentity(t0, t1):
        # identity element: (-1, smallest int32)
        return tvm.tir.const(-1, t0), tvm.te.min_value(t1)

    reduce = te.comm_reducer(fcombine, fidentity, name='reduce')

    n = te.reduce_axis((0, N), name='n')

    def fcompute(m):
        t0 = val_m[m] * val_n[n]
        t1 = val_m[m] + val_n[n]
        return reduce((t0, t1), axis=n)

    T0, T1 = te.compute((M,), fcompute, name='comm_reducer_example')
    return [val_m, val_n, T0, T1]

This fails with the following error:

Traceback (most recent call last):
  File "tune.py", line 100, in <module>
    task.tune(tune_option)
  File ".../tvm-v0.8.0/python/tvm/auto_scheduler/search_task.py", line 498, in tune
    _ffi_api.AutoSchedule(search_policy, tuning_options)
  File ".../tvm-v0.8.0/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  15: TVMFuncCall
  14: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::runtime::Array<tvm::runtime::ObjectRef, void> (tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)>::AssignTypedLambda<tvm::auto_scheduler::__mk_TVM3::{lambda(tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)#1}>(tvm::auto_scheduler::__mk_TVM3::{lambda(tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  13: tvm::auto_scheduler::AutoSchedule(tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)
  12: tvm::auto_scheduler::SketchPolicyNode::Search(int, int, int, tvm::auto_scheduler::ProgramMeasurer)
  11: tvm::auto_scheduler::SketchPolicyNode::SearchOneRound(int, tvm::runtime::Array<tvm::auto_scheduler::State, void>*)
  10: tvm::auto_scheduler::SketchPolicyNode::GenerateSketches()
  9: tvm::auto_scheduler::RuleAddCacheWrite::Apply(tvm::auto_scheduler::SketchPolicyNode const&, tvm::auto_scheduler::State const&, int) const
  8: tvm::auto_scheduler::State::cache_write(int, tvm::runtime::String const&, tvm::auto_scheduler::ComputeDAG const&)
  7: tvm::auto_scheduler::CacheWriteStepNode::ApplyToState(tvm::auto_scheduler::State*, tvm::auto_scheduler::ComputeDAG const&) const
  6: tvm::auto_scheduler::ComputeDAG::ReplayAndGetDAG(tvm::runtime::Array<tvm::auto_scheduler::Step, void> const&) const
  5: tvm::auto_scheduler::ComputeDAG::ApplySteps(tvm::runtime::Array<tvm::auto_scheduler::Step, void> const&, tvm::runtime::Array<tvm::te::Stage, void>*, tvm::runtime::Map<tvm::te::Stage, tvm::runtime::Array<tvm::tir::IterVar, void>, tvm::runtime::ObjectHash, tvm::runtime::ObjectEqual>*, tvm::auto_scheduler::LayoutRewriteOption) const
  4: tvm::auto_scheduler::StepApplyToSchedule(tvm::auto_scheduler::Step const&, tvm::runtime::Array<tvm::te::Stage, void>*, tvm::runtime::Map<tvm::te::Stage, tvm::runtime::Array<tvm::tir::IterVar, void>, tvm::runtime::ObjectHash, tvm::runtime::ObjectEqual>*, tvm::te::Schedule*, tvm::runtime::Array<tvm::auto_scheduler::Step, void> const&)
  3: tvm::auto_scheduler::CacheWriteStepNode::ApplyToSchedule(tvm::runtime::Array<tvm::te::Stage, void>*, tvm::runtime::Map<tvm::te::Stage, tvm::runtime::Array<tvm::tir::IterVar, void>, tvm::runtime::ObjectHash, tvm::runtime::ObjectEqual>*, tvm::te::Schedule*) const
  2: tvm::te::Schedule::cache_write(tvm::runtime::Array<tvm::te::Tensor, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
  1: tvm::te::CacheWriteWithReLayout(tvm::te::Schedule, tvm::runtime::Array<tvm::te::Tensor, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
  0: _ZN3tvm7runtime6deta
  File ".../tvm-v0.8.0/src/te/schedule/schedule_dataflow_rewrite.cc", line 314
TVMError:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (ReduceEqual(reduce_body, first_reduce)) is false:

Can you please help me understand what this error means? When I replace return reduce((t0, t1), axis=n) with return reduce((0, 0), axis=n) in fcompute, it works, but that is not what I'd like to express with TVM.
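For reference, here is a pure-Python sketch (no TVM; the helper names are mine) of what I believe the reduction above is meant to compute for a fixed m: fold the (t0, t1) pairs over the reduce axis with fcombine, keeping the pair whose t1 is largest, i.e. an argmax-style reduction over t1:

```python
from functools import reduce as fold

def fcombine(x, y):
    # mirrors tvm.tir.Select(x[1] >= y[1], x[0], y[0]) / Select(..., x[1], y[1])
    return x if x[1] >= y[1] else y

def fidentity():
    # mirrors (tvm.tir.const(-1, 'int32'), tvm.te.min_value('int32'))
    return (-1, -(2**31))

def compute_row(vm, val_n):
    # pairs produced by fcompute for a fixed m, over the reduce axis n
    pairs = [(vm * vn, vm + vn) for vn in val_n]
    return fold(fcombine, pairs, fidentity())

# example: val_m[m] = 3, val_n = [5, -2, 7]
# pairs = [(15, 8), (-6, 1), (21, 10)] -> pair with maximal t1 is (21, 10)
print(compute_row(3, [5, -2, 7]))  # -> (21, 10)
```

If this matches your intent, the reducer definition itself follows the same shape as the argmax example in the tuple-inputs tutorial; the failure seems to occur only once the auto-scheduler applies cache_write to the multi-output reduce stage.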

Many thanks in advance!


I ran into the same error. How did you solve it?

Unfortunately, I was not able to solve it.