In addition to registering the compute and schedule with the Relay op strategy, you also need to register them as an AutoTVM task so that they can be extracted via extract_from_program and tuned. Specifically, you need to add the following decorators to your compute and schedule functions. Here we use conv2d_nchw.cuda as an example:
@autotvm.register_topi_compute("conv2d_nchw.cuda")
def conv2d_nchw(cfg, data, kernel, strides, padding, dilation, out_dtype="float32"):
    # Compute function.

@autotvm.register_topi_schedule("conv2d_nchw.cuda")
def schedule_conv2d_nchw(cfg, outs):
    # Schedule function.
In this example, we registered an AutoTVM task named conv2d_nchw.cuda. Since the corresponding op strategy is registered at https://github.com/apache/incubator-tvm/blob/main/python/tvm/relay/op/strategy/cuda.py#L128, this task will be picked up by extract_from_program.