Autotvm.template decorator not recognized by autotvm.task.extract_from_program

Hi,

I’m trying to build a kernel- and graph-tunable template for a custom convolution implementation in Relay. To that end I used the @autotvm.template(...) decorator on a function that returns a Relay operation. When I try to extract tunable tasks with autotvm.task.extract_from_program(...), it finds 0 tasks, which makes me think the two are not meant to work together.
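
For reference, the only documented use of the decorator I could find is on standalone TE templates that return a schedule plus its tensors, roughly like this minimal sketch (task name and shapes are made up), rather than on a function returning a Relay op:

from tvm import autotvm, te

@autotvm.template("example/vecadd")  # made-up task name
def vecadd(N):
    A = te.placeholder((N,), name="A")
    B = te.placeholder((N,), name="B")
    C = te.compute((N,), lambda i: A[i] + B[i], name="C")
    s = te.create_schedule(C.op)
    # define a tuning knob so the task has a non-trivial search space
    cfg = autotvm.get_config()
    cfg.define_split("tile_i", s[C].op.axis[0], num_outputs=2)
    cfg["tile_i"].apply(s, C, s[C].op.axis[0])
    return s, [A, B, C]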

Can someone with more experience comment on this? If this isn’t the way to do what I’m after, what would be the best way to go about it?

I’d love to use a TE-based approach, but I could not find a way to do graph tuning on top of it.
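
To be concrete, the graph tuning I have in mind is the DPTuner/PBQPTuner flow from the x86 tutorial, which as far as I can tell only operates on a Relay graph plus kernel tuning records (names like net and the log file names below are mine):

from tvm import relay
from tvm.autotvm.graph_tuner import DPTuner

# graph-level layout tuning over a Relay graph; this is the step I
# don't see how to drive from a standalone TE template
target_ops = [relay.op.get("nn.conv2d")]
executor = DPTuner(net, {"data": (1, 3, 224, 224)}, "kernel_tuning.log",
                   target_ops, target="llvm")
executor.benchmark_layout_transform(min_exec_num=2000)
executor.run()
executor.write_opt_sch2record_file("graph_opt.log")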

Thanks! Adrian

The customized template is not recognized by extract_from_program because extract_from_program leverages the Relay op strategy to determine the tuning tasks. As a result, you have to either register your schedule with an op strategy (see https://tvm.apache.org/docs/dev/relay_op_strategy.html for details) or build the model with your schedule function manually.
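
To illustrate, task extraction is driven entirely by what the op strategies register. A minimal sketch (assuming mod and params come from a Relay frontend):

from tvm import autotvm

# Tasks are collected while tracing the compilation of mod; only
# implementations that op strategy registers as AutoTVM tasks are
# recorded, so a standalone @autotvm.template function is never visited.
tasks = autotvm.task.extract_from_program(
    mod["main"], target="llvm", params=params)
print("found %d tasks" % len(tasks))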

Thanks for the reply! I tried the op strategy route, but could still not convince autotvm to pick up on the task.

I did the following:

  • registered a new Relay op according to the description here
  • created compute and schedule implementations and registered them as a strategy:
import tvm
from tvm import te
from tvm.relay import op
from tvm.target import override_native_generic_func

def schedule_myop(attrs, outs, target):
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
    if target.target_name not in ("llvm", "c"):
        raise RuntimeError("schedule not registered for '%s'" % target)
    # default schedule: no transformations applied
    s = te.create_schedule([x.op for x in outs])
    return s

def compute_myop(attrs, inputs, out_dtype):
    # mock compute implementation
    const = tvm.tir.const(10, dtype='float32')
    data = inputs[0]
    dummy_comp = te.compute(
            data.shape,
            lambda n, c, y, x: data[n, c, y, x] + const,
            name='dummy_sparse_static_conv2d'
            )
    return [dummy_comp]

@override_native_generic_func("myop_strategy")
def myop_strategy(attrs, inputs, out_type, target):
    strategy = op.OpStrategy()
    strategy.add_implementation(
            compute_myop,
            schedule_myop,
            name="myop_strategy"
            )
    return strategy

op.op.register_strategy('myop', myop_strategy)
op.register_pattern('myop', op.OpPattern.OPAQUE)
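
In case it matters, here is the kind of AutoTVM registration I suspect might be missing from the snippet above, modeled on how the built-in conv2d schedules seem to be wired up (the task name "myop.generic" and the tiling knob are made up; I haven't verified this is the right pattern):

import tvm
from tvm import autotvm, te
from tvm.relay.op.strategy.generic import wrap_topi_schedule

# registering through autotvm attaches a task name that
# extract_from_program can discover while tracing the build
@autotvm.register_topi_compute("myop.generic")
def myop_topi_compute(cfg, data):
    const = tvm.tir.const(10, dtype='float32')
    return te.compute(
            data.shape,
            lambda n, c, y, x: data[n, c, y, x] + const,
            name='dummy_sparse_static_conv2d')

@autotvm.register_topi_schedule("myop.generic")
def myop_topi_schedule(cfg, outs):
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
    s = te.create_schedule([x.op for x in outs])
    # the task needs at least one knob, otherwise there is nothing to tune
    n, c, y, x = s[outs[0]].op.axis
    cfg.define_split("tile_c", c, num_outputs=2)
    cfg["tile_c"].apply(s, outs[0], c)
    return s

def wrap_myop_compute(topi_compute):
    def _compute(attrs, inputs, out_type):
        return [topi_compute(inputs[0])]
    return _compute

# and in myop_strategy:
#     strategy.add_implementation(
#             wrap_myop_compute(myop_topi_compute),
#             wrap_topi_schedule(myop_topi_schedule),
#             name="myop.generic")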

When I run the op in a simple computation, extract_from_program still returns 0 tasks. Am I missing something? Note that this code lives outside the main TVM package; I’m not sure whether that has any bearing. I don’t think the problem is in the test scenario (code not shared here), because when I replace myop with the Relay conv2d operator it finds 1 tuning task, as expected.

On a separate note, this is the only place where I have implementations for compute and schedule, so I’m not sure why @override_native_generic_func("myop_strategy") is needed, but it does not seem to work without it.
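
For what it’s worth, my current guess is that register_strategy expects a GenericFunc, and override_native_generic_func is what wraps the Python function into one so that targets can override it later; something like this hypothetical per-target override:

# hypothetical CUDA override of the generic strategy above, reusing the
# compute_myop/schedule_myop definitions from the earlier snippet
@myop_strategy.register("cuda")
def myop_strategy_cuda(attrs, inputs, out_type, target):
    strategy = op.OpStrategy()
    strategy.add_implementation(compute_myop, schedule_myop,
                                name="myop_strategy.cuda")
    return strategy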

Hi @comaniac! I have a related question. I implemented a compute/schedule function decorated with @autotvm.template and tuned it together with the other tasks extracted from my model by extract_from_program. The resulting log file contains tuning statistics for the schedules found through the target strategy as well as for my custom schedule. I’m trying to apply the statistics to my model with the following code:

with autotvm.apply_history_best(log_file):
    print("Compile...")
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(tvm_mod, target=target, params=params)

But the statistics for my custom schedule were not applied. Is there any way to build the TVM module so that the custom schedule is applied to some layers while the schedules extracted by extract_from_program are applied to the others?