In the tune_nnvm_x86.py (and tune_relay_x86.py) examples I noticed there is no explicitly written schedule creation like the one in the getting-started example: s = tvm.create_schedule(C.op)
As I understood it so far, a schedule always has to be present, but in this case I can't find it anywhere. What's more, when I comment out with autotvm.apply_history_best(log_file): it still passes. The only difference is lower performance, which is expected.
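For reference, the part I commented out looks roughly like this (a sketch based on the tutorial, variable names approximate; mod, params, target and log_file are set up earlier in the script):

```python
from tvm import autotvm, relay

# Sketch of the build step in tune_relay_x86.py (approximate).
# With the context manager, relay.build picks the best configurations found
# during tuning from log_file; without it, compilation still succeeds but
# falls back to default configurations, hence the lower performance.
with autotvm.apply_history_best(log_file):
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build_module.build(
            mod, target=target, params=params)
```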
The schedules (they are actually schedule templates) are in TOPI (topi/python/topi), with different templates defining different search spaces for different back-ends. With AutoTVM we have moved from handwritten schedules to defining a search space of possible schedules that is explored automatically during tuning.
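To make it concrete, here is a minimal template in the style of the AutoTVM tutorials (a simplified sketch, not the actual topi x86 code): instead of fixing tile sizes, the template declares them as tunable knobs that the tuner searches over.

```python
import tvm
from tvm import autotvm

@autotvm.template
def matmul_template(N, L, M, dtype):
    A = tvm.placeholder((N, L), name="A", dtype=dtype)
    B = tvm.placeholder((L, M), name="B", dtype=dtype)
    k = tvm.reduce_axis((0, L), name="k")
    C = tvm.compute((N, M),
                    lambda i, j: tvm.sum(A[i, k] * B[k, j], axis=k),
                    name="C")
    s = tvm.create_schedule(C.op)

    # Declare the search space: tile sizes are knobs, not constants.
    cfg = autotvm.get_config()
    y, x = s[C].op.axis
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)

    # Apply the concrete values chosen for this particular trial.
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, yi, xi)

    return s, [A, B, C]
```

During tuning, AutoTVM measures many points of this space and writes the best one to the log file; apply_history_best then replays it at build time.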
Thank you! I've just checked this directory and I think I now have an idea of what's going on. Maybe this is just a rephrasing of what you wrote, but please tell me if my understanding is correct.
The schedules (schedule templates) in topi/python/topi/ describe schedules for the different operators. Different operators and data types are covered by each back-end, e.g. x86 and cuda.
What happens with operations that are not covered for a given back-end? Does it fall back to some generic solution? Because from what I gathered, some schedule always has to be present.
AutoTVM explores this space automatically during tuning, so there is no problem with uncovered operations here. As I understood from elsewhere, schedules are created and tuned for each unique operator (by unique I mean not only the operation, but also its data type, etc.).
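For example, printing the tasks that the tutorial extracts before tuning shows one task per unique workload (operator plus its shapes and data type). A rough sketch, assuming mod, params and target from the tutorial; the exact extraction call differs between the NNVM and Relay tutorials and across TVM versions:

```python
from tvm import autotvm, relay

# Rough sketch based on tune_relay_x86.py; the NNVM tutorial uses
# autotvm.task.extract_from_graph instead, and the form of the ops
# argument varies across TVM versions.
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params=params,
    ops=(relay.op.get("nn.conv2d"),))

# One task per unique workload: operator + input shapes + data type.
for task in tasks:
    print(task.name, task.args)
```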