Hi all,
If I want to use BYOC for a new operator, should I skip Steps 4-5 in this tutorial: Adding an Operator to Relay — tvm 0.9.dev182+ge718f5a8a documentation?
Thanks!
Yes, if you don't intend to compile your model with native TVM, you only need the Relay op, so you can skip those steps.
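For reference, steps 1-3 of the tutorial give you everything BYOC needs. A minimal sketch of the C++ op registration (the op name nn.func and the FunRel type relation are placeholders matching names that appear later in this thread):

// Register the Relay op only; no FTVMCompute/FTVMStrategy is attached,
// since the BYOC backend will supply the implementation.
RELAY_REGISTER_OP("nn.func")
    .describe("Custom operator to be offloaded via BYOC.")
    .set_num_inputs(3)
    .add_argument("x1", "Tensor", "The first input tensor.")
    .add_argument("x2", "Tensor", "The second input tensor.")
    .add_argument("x3", "Tensor", "The third input tensor.")
    .add_type_rel("Fun", FunRel);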
I just encountered a problem:

Check failed: (!actual_type.defined()) is false: Expected type PrimExpr but got relay.Var
even though my new operator is defined with
Expr MakeFun(Expr X1, Expr X2, Expr X3, DataType out_dtype)
My type relation function is written as

bool FunRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
            const TypeReporter& reporter) {
  // types holds [X1, X2, X3, output]; make the output type match X1
  CHECK_EQ(types.size(), 4);
  reporter->Assign(types[3], types[0]);
  return true;
}
I just want the output type to match the type of the first input (X1).
This problem is solved: it was caused by a mismatch between the signature registered with set_body_typed(MakeFun) and the actual MakeFun.
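In case others hit this, the FFI registration must match the C++ signature of MakeFun exactly. A minimal sketch (the global name "relay.op._make.func" is a placeholder):

// If the registered signature drifts from what the Python frontend
// passes, the FFI coerces arguments into the wrong slots, e.g. a
// relay.Var lands where a PrimExpr/DataType was expected.
TVM_REGISTER_GLOBAL("relay.op._make.func").set_body_typed(MakeFun);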
Now I encounter another problem:
nn.func doesn't have an FTVMStrategy registered. You can register one in python with `tvm.relay.op.register_strategy`.
Should I register a dummy strategy for this operator even though I use BYOC?
Hi, how should I set the add_implementation function at the Relay frontend when I decide to use BYOC, since I will not use the operator from TOPI? For reference, existing operators register strategies like this:
@override_native_generic_func("cumprod_strategy")
def cumprod_strategy(attrs, inputs, out_type, target):
    """cumprod generic strategy"""
    strategy = _op.OpStrategy()
    strategy.add_implementation(
        wrap_compute_scanop(topi.cumprod),
        wrap_topi_schedule(topi.generic.schedule_extern),
        name="cumprod.generic",
    )
    return strategy

@cumsum_strategy.register(["cuda", "gpu"])
def cumsum_strategy_cuda(attrs, inputs, out_type, target):
    """cumsum cuda strategy"""
    strategy = _op.OpStrategy()
    strategy.add_implementation(
        wrap_compute_scanop(topi.cuda.cumsum),
        wrap_topi_schedule(topi.cuda.schedule_scan),
        name="cumsum.cuda",
    )
    return strategy
You shouldn’t need to deal with strategy stuff for operators offloaded to BYOC.
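With BYOC the op is instead marked as supported by your external compiler, so Relay never looks up a strategy for it. A minimal sketch, assuming your codegen is registered under the name "mycodegen" (the name and the predicate function are placeholders):

import tvm.ir

# Annotate nn.func as supported by the external "mycodegen" backend;
# partitioning will offload every call this predicate accepts.
@tvm.ir.register_op_attr("nn.func", "target.mycodegen")
def _func_supported(expr):
    return True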
When I remove those strategy statements for my customized operator from generic.py and cuda.py, I encounter a problem like

AttributeError: module 'tvm.relay.op.strategy' has no attribute 'myop_strategy'

Is there any workaround to avoid such a check?
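One possible workaround, if the op can still reach the native compilation path, is to register a do-nothing generic strategy so the check passes. A sketch, assuming the op is named nn.func; _dummy_compute and func_strategy are hypothetical names:

from tvm import topi
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import wrap_topi_schedule
from tvm.target import override_native_generic_func

def _dummy_compute(attrs, inputs, out_type):
    # Placeholder compute; never used when the op is fully offloaded.
    return [topi.identity(inputs[0])]

@override_native_generic_func("func_strategy")
def func_strategy(attrs, inputs, out_type, target):
    strategy = _op.OpStrategy()
    strategy.add_implementation(
        _dummy_compute,
        wrap_topi_schedule(topi.generic.schedule_injective),
        name="func.generic",
    )
    return strategy

_op.register_strategy("nn.func", func_strategy)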
How do I specify my customized codegen when building the model? For example, my customized codegen will generate the CUDA kernel code and the host code; should I still keep target='cuda'?
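For context, in the standard BYOC flow the graph is partitioned first, and target= only applies to whatever is not offloaded. A sketch, assuming the external compiler is registered as "mycodegen":

import tvm
from tvm import relay

# mod: a tvm.IRModule produced by a frontend importer.
# Regions supported by "mycodegen" become external functions that the
# custom codegen handles at build time.
mod = relay.transform.AnnotateTarget("mycodegen")(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)

# target applies only to the remaining (non-offloaded) operators.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda")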