Check failed: (!checked_type.defined()) is false: Expected Schedule, but got tir.Schedule

Hi, I am trying to add an operator to Relay as described in the tutorial. I want to write the schedule using TIR, which seems to be causing an issue. Here is the registration of the op:

@tvm.ir.register_op_attr("contrib.learning.gemm", "FTVMStrategy")
def gemm_strategy_learning(attrs, inputs, out_type, target):
    """Strategy implementation for the dense layers.

    Args:
        attrs (tvm.runtime.object.Object): attributes for the strategy
        inputs (tvm.ir.container.Array): inputs
        out_type (tvm.ir.tensor_type.TensorType): output type
        target (tvm.target.target.Target): target for the strategy

    Returns:
        OpStrategy: strategy implementation
    """
    if len(inputs) == 3:
        strategy = OpStrategy()
        strategy.add_implementation(
            wrap_gemm_topi_compute(gemm_cisc_tir),
            _strategy.wrap_topi_schedule(schedule_gemm_cisc_tir),
            name="contrib.learning.gemm",
        )
        return strategy
    return None

And here are the compute and schedule definitions:

def gemm_cisc_tir(
	data: tvm.te.tensor.Tensor,
	weight: tvm.te.tensor.Tensor,
	bias: tvm.te.tensor.Tensor,
) -> tvm.te.tensor.Tensor:
	"""Computation definition for my custom GEMM

	Args:
		data (tvm.te.tensor.Tensor): Input feature map
		weight (tvm.te.tensor.Tensor): Layer weights
		bias (tvm.te.tensor.Tensor): Layer biases

	Returns:
		tvm.te.tensor.Tensor: dense operator result
	"""

	# Derive shapes
	ishape = topi.utils.get_const_tuple(data.shape)
	wshape = topi.utils.get_const_tuple(weight.shape)
	oshape = (data.shape[0], weight.shape[1])

	rk = te.reduce_axis((0, wshape[0]), name="rk")

	res = te.compute(
		oshape,
		lambda x_o, y_o: te.sum(
			data[x_o, rk] * weight[rk, y_o] + bias[y_o],
			axis=[rk],
		),
		name="res",
		tag="dense",
	)

	return res



def schedule_gemm_cisc_tir(
	outs: tvm.ir.container.Array
) -> tvm.tir.Schedule:
	"""Schedule definition for my custom GEMM

	Args:
		outs (tvm.ir.container.Array): Output tensors

	Returns:
		tvm.tir.Schedule: transformed schedule
	"""
	outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
	output = outs[0]

	res_stage = output.op.output(0)
	data, weight, bias = res_stage.op.input_tensors

	func = te.create_prim_func([data, weight, bias, output])
	sch = tvm.tir.Schedule(func)

	return sch

The actual scheduling is omitted for brevity. My goal is to lower from Relay into C code; the pattern matching on Relay is already implemented. What is the correct way to go forward here? Simply returning a tir.Schedule does not seem to be it.

The op strategy schedule expects the legacy te.Schedule, as opposed to the tir.Schedule. To create a te schedule, you can use te.create_schedule.
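Roughly, assuming the naming from your snippets, the schedule function would look like this (untested sketch):

def schedule_gemm_cisc_tir(outs):
    """Return a legacy te.Schedule, which is what the op strategy expects."""
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
    # te.create_schedule takes the output ops and returns a te.schedule.Schedule
    return te.create_schedule([x.op for x in outs])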

Ah I see, thanks! What would I use if I want to go forward with the tir.Schedule approach?

I don’t think the tir.Schedule approach is well integrated into the Relay op strategy workflow.

But we can get around it by returning an empty te schedule (basically ignoring it) and doing all the work on the compute side: use te.create_prim_func to generate a TIR PrimFunc from a te.compute, create a tir.Schedule for that PrimFunc, and apply your scheduling primitives to it. Finally, create a new compute by calling te.extern_primfunc, which can then be returned by your original strategy function for compute. Basically you’ll be directly returning your scheduled PrimFunc as the new compute and not doing anything in your schedule strategy.

Your compute strategy function (gemm_cisc_tir) should look approximately like this:

compute = te.compute(...)
func = te.create_prim_func(...)
sch = tir.Schedule(func)
# ... apply your tir schedule primitives here ...
# inputs are te.placeholders; sch.mod["main"] is the PrimFunc inside the scheduled mod
new_compute = te.extern_primfunc([inputs], sch.mod["main"])
return new_compute
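Concretely, combining this with the GEMM compute from earlier in the thread, the whole thing could look roughly as follows (untested sketch; the schedule primitives are placeholders you would fill in):

def gemm_cisc_tir(data, weight, bias):
    oshape = (data.shape[0], weight.shape[1])
    wshape = topi.utils.get_const_tuple(weight.shape)
    rk = te.reduce_axis((0, wshape[0]), name="rk")
    res = te.compute(
        oshape,
        lambda x_o, y_o: te.sum(data[x_o, rk] * weight[rk, y_o] + bias[y_o], axis=[rk]),
        name="res",
        tag="dense",
    )
    # Lower the te compute to a TIR PrimFunc and schedule it directly
    func = te.create_prim_func([data, weight, bias, res])
    sch = tvm.tir.Schedule(func)
    # ... apply tir schedule primitives on sch here ...
    # Wrap the scheduled PrimFunc back into a te compute for the op strategy
    return te.extern_primfunc([data, weight, bias], sch.mod["main"])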

Thanks! I tried to use this but now I’m running into another error:

TVMError: Do not have a default for tir.BlockRealize

Do you have an idea what is going on here? I am currently just passing back the original compute without any actual scheduling on my side, to see if it works.

If there is no way around this, I would also be happy with an alternative way to generate code besides the op strategy. I just found this approach in the VTA example and started from it. I want to eventually move to meta_schedule, so I’ll probably have to make some changes down the line in any case.

I’m not sure where the error is coming from. I’ve used both the create_prim_func and extern_primfunc calls, but never as part of the op strategy, so I’ll have to try it out to see what could be going wrong. Logically there should be no issues.

Thank you so much! Here is also the full error log:

tvm._ffi.base.TVMError: Traceback (most recent call last):
  16: TVMFuncCall
  15: tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
  14: tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, tvm::runtime::String const&)
  13: tvm::TIRToRuntime(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target const&)
  12: tvm::codegen::Build(tvm::IRModule, tvm::Target)
  11: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::IRModule, tvm::Target)>::AssignTypedLambda<tvm::runtime::Module (*)(tvm::IRModule, tvm::Target)>(tvm::runtime::Module (*)(tvm::IRModule, tvm::Target), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  10: tvm::codegen::BuildCHost(tvm::IRModule, tvm::Target)
  9: tvm::codegen::CodeGenCHost::AddFunction(tvm::tir::PrimFunc const&, bool)
  8: tvm::codegen::CodeGenC::AddFunction(tvm::tir::PrimFunc const&)
  7: tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  6: tvm::codegen::CodeGenC::VisitStmt_(tvm::tir::LetStmtNode const*)
  5: tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  4: tvm::codegen::CodeGenC::VisitStmt_(tvm::tir::LetStmtNode const*)
  3: tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  2: tvm::codegen::CodeGenC::VisitStmt_(tvm::tir::AttrStmtNode const*)
  1: tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  0: tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmtDefault_(tvm::runtime::Object const*)
  File "/home/user/git/tvm/include/tvm/tir/stmt_functor.h", line 104
TVMError: Do not have a default for tir.BlockRealize

I’ve registered both functions (the compute description and the schedule) with register_topi_compute and register_topi_schedule, respectively.
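The registration follows the usual TOPI pattern, roughly like this (the task name is just what I picked; note that these decorators pass a cfg as the first argument to the decorated functions):

from tvm import autotvm

@autotvm.register_topi_compute("gemm_cisc_tir.learning")
def gemm_cisc_tir(cfg, data, weight, bias):
    ...

@autotvm.register_topi_schedule("gemm_cisc_tir.learning")
def schedule_gemm_cisc_tir(cfg, outs):
    ...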
And here is the call to extern_primfunc:

new_compute = te.extern_primfunc([data, weight, bias], sch.mod["main"], dtype="int8", name="tir")

Specifying the dtype was necessary, as it otherwise complained about not being able to infer the type automatically.