Thanks for the reply! I tried the op strategy route, but still could not get autotvm to pick up the task.
I did the following:
- registered a new Relay op according to the description here
- created a compute and a schedule implementation and registered them as a strategy:
```python
import tvm
from tvm import te
from tvm.relay import op
from tvm.target import override_native_generic_func


def schedule_myop(attrs, outs, target):
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
    if target.target_name not in ("llvm", "c"):
        raise RuntimeError("schedule not registered for '%s'" % target)
    # plain default schedule, no tunable knobs
    s = te.create_schedule([x.op for x in outs])
    return s


def compute_myop(attrs, inputs, out_dtype):
    # mock compute implementation: add a constant to the input
    const = tvm.tir.const(10, dtype="float32")
    data = inputs[0]
    dummy_comp = te.compute(
        data.shape,
        lambda n, c, y, x: data[n, c, y, x] + const,
        name="dummy_sparse_static_conv2d",
    )
    return [dummy_comp]


@override_native_generic_func("myop_strategy")
def myop_strategy(attrs, inputs, out_type, target):
    strategy = op.OpStrategy()
    strategy.add_implementation(
        compute_myop,
        schedule_myop,
        name="myop_strategy",
    )
    return strategy


op.op.register_strategy("myop", myop_strategy)
op.register_pattern("myop", op.OpPattern.OPAQUE)
```
When I run the op in a simple computation, `extract_from_program` still returns 0 tasks.

Am I missing something? Note that this code lives outside the main TVM package; I'm not sure whether that has any bearing. I don't think the problem is in the test scenario (code not shared here), because when I replace `myop` with the Relay `conv2d` operator, extraction finds 1 tuning task, as expected.
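For context on the behavior I'm seeing, my understanding (hedged, and possibly where I'm going wrong) is that autotvm only extracts tasks for ops whose compute/schedule are registered as tunable templates (e.g. via `autotvm.register_topi_compute`/`register_topi_schedule`), not for any strategy implementation. A plain `te.create_schedule` with no knobs would then yield zero tasks. This toy registry is a conceptual sketch of that assumption, not TVM's actual code; all names here (`TASK_TEMPLATES`, `register_template`, `extract_tasks`) are made up:

```python
# Conceptual sketch (NOT TVM internals): extraction only collects ops whose
# implementation was registered as a tunable template.

TASK_TEMPLATES = {}  # hypothetical registry of tunable templates


def register_template(name):
    """Decorator standing in for autotvm's template registration."""
    def wrap(func):
        TASK_TEMPLATES[name] = func
        return func
    return wrap


def extract_tasks(ops_used):
    """Return only the used ops that have a registered tunable template."""
    return [name for name in ops_used if name in TASK_TEMPLATES]


@register_template("conv2d")
def conv2d_template():
    pass  # conv2d has a template, so it becomes a tuning task


# "myop" has a strategy but no tunable template registered,
# so extraction finds one task (conv2d) and skips myop.
print(extract_tasks(["conv2d", "myop"]))
```

If that mental model is right, it would explain why swapping in `conv2d` finds 1 task while `myop` finds 0.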
On a separate note, this is the only place where I have compute and schedule implementations, so I'm not sure why `@override_native_generic_func("myop_strategy")` is needed, but registration does not seem to work without it.
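My guess (again hedged, not based on TVM source) is that the op's strategy attribute must be a generic-function object that can be dispatched per target, so a bare Python function is not accepted; the decorator wraps it into that dispatchable form. This pure-Python sketch illustrates the pattern I have in mind; the `GenericFunc` class and `override_native_generic_func` here are simplified stand-ins, not TVM's real implementation:

```python
# Simplified sketch of generic-function dispatch (hypothetical, not TVM code).

class GenericFunc:
    """A callable that holds a default implementation and can be
    specialized per target; the registry expects this type, not a
    plain function."""

    def __init__(self, name):
        self.name = name
        self.fdefault = None

    def set_default(self, func):
        self.fdefault = func
        return self

    def __call__(self, *args):
        return self.fdefault(*args)


def override_native_generic_func(name):
    """Wrap a plain function into a GenericFunc-like dispatcher."""
    def wrap(func):
        return GenericFunc(name).set_default(func)
    return wrap


@override_native_generic_func("myop_strategy")
def myop_strategy(x):
    return x * 2


# The decorated name is now a GenericFunc instance, not a function,
# which is (I assume) what the strategy registration requires.
print(type(myop_strategy).__name__)
print(myop_strategy(3))
```

If that's roughly what the decorator does, it would explain why registration fails when it is omitted.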