Thanks for the reply! I tried the op strategy route, but I still could not convince autotvm to pick up the task.
I did the following:
- registered a new relay op according to the description here
- created compute and schedule implementations and registered them as a strategy:
```python
import tvm
from tvm import te
from tvm.relay import op


def schedule_myop(attrs, outs, target):
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
    if target.target_name not in ("llvm", "c"):
        raise RuntimeError("schedule not registered for '%s'" % target)
    s = te.create_schedule([x.op for x in outs])
    return s
```
```python
def compute_myop(attrs, inputs, out_dtype):
    # mock compute implementation: add a constant to the input
    const = tvm.tir.const(10, dtype="float32")
    data = inputs[0]
    dummy_comp = te.compute(
        data.shape,
        lambda n, c, y, x: data[n, c, y, x] + const,
    )
    return [dummy_comp]
```
```python
def myop_strategy(attrs, inputs, out_type, target):
    strategy = op.OpStrategy()
    strategy.add_implementation(compute_myop, schedule_myop, name="myop.generic")
    return strategy
```
When I run the op in a simple computation, `extract_from_program` still returns 0 tasks.
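For context on what I understand is happening: task extraction only yields tasks for ops whose compute/schedule are registered as tunable AutoTVM templates, so a plain `te.create_schedule` with no tunable knobs contributes nothing. Here is a conceptual sketch of that behavior in plain Python; this is a mock, not the real TVM API, and the names `TASK_TEMPLATES`, `register_template`, and `extract_from_program` here are illustrative stand-ins:

```python
# Conceptual mock (not the real TVM API): extraction walks the ops in a
# program and emits a tuning task only when an op matches a template in
# the registry; an unregistered op silently produces 0 tasks.
TASK_TEMPLATES = {}  # mock registry: op name -> template function


def register_template(name):
    def wrap(fn):
        TASK_TEMPLATES[name] = fn
        return fn
    return wrap


def extract_from_program(ops_in_program):
    # Only ops that hit a registered template become tuning tasks.
    return [name for name in ops_in_program if name in TASK_TEMPLATES]


@register_template("conv2d")  # conv2d has a template; myop does not
def conv2d_template():
    pass


print(extract_from_program(["myop"]))            # no template -> no tasks
print(extract_from_program(["conv2d", "myop"]))  # only conv2d yields a task
```

This would match the symptom below: `conv2d` produces one task while `myop` produces none.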
Am I missing something? Note that this code lives outside the main TVM package; I'm not sure whether that has any bearing. I don't think the problem is in the test scenario (code not shared here), because when I replace `myop` with the relay `conv2d` operator, it finds 1 tuning task as expected.
On a separate note, this is the only place where I have compute and schedule implementations, so I'm not sure why `@override_native_generic_func("myop_strategy")` is needed, but it does not seem to work without it.
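My current understanding of why the decorator matters, sketched as a plain-Python mock (this is not TVM's actual implementation; the `GenericFunc` class and decorator body below are illustrative assumptions): the decorator turns the function into a single dispatchable entry point that relay can call and that per-target strategies can later be attached to, so without it there is no dispatcher to register against.

```python
# Conceptual mock of a generic function with per-target overrides.
class GenericFunc:
    def __init__(self, fdefault):
        self.fdefault = fdefault  # fallback ("native generic") strategy
        self.overrides = {}       # target name -> specialized strategy

    def register(self, target, fn):
        self.overrides[target] = fn

    def __call__(self, target, *args):
        # Dispatch on the target, falling back to the generic impl.
        return self.overrides.get(target, self.fdefault)(*args)


def override_native_generic_func(name):
    # Mock of the decorator: wrap the function as a GenericFunc so
    # strategies for "llvm", "cuda", etc. can be registered on it.
    def wrap(fdefault):
        return GenericFunc(fdefault)
    return wrap


@override_native_generic_func("myop_strategy")
def myop_strategy(attrs):
    return "generic strategy"


print(myop_strategy("llvm", None))  # no override -> generic impl
myop_strategy.register("cuda", lambda attrs: "cuda strategy")
print(myop_strategy("cuda", None))  # dispatches to the cuda override
```

Under that reading, the decorated generic function is what the strategy registration mechanism expects to receive, which would explain why it fails without the decorator even when only one implementation exists.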