AutoTVM: how is the search space generated?

After looking at the auto-tuning tutorial for AutoTVM, there seem to be two methods to create a search space: one where the user gives an array of possible values to search, and another where TVM generates the space itself. In the latter case, how does TVM model the search space equation, or is it just a brute-force listing of all possible values? Also, what is the difference between auto-scheduling and AutoTVM? Aren't both trying to search for the best parameters in the search space?

Now, after the search space is defined, what parameters are used to decide which config is the best? Is it just execution time?

  • The search space is defined in the schedule template; for example, a single `cfg.define_split(...)` call defines a search parameter whose candidates are the factors of the length of `f`.
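To make the "brute-force listing" question concrete, here is a small pure-Python illustration (not TVM's actual source) of how a split knob can enumerate its candidates. There is no analytical model of the space: each knob's options are an explicit list, and a split knob simply lists every factorization of the axis length.

```python
# Illustrative sketch, not TVM code: enumerate all candidates of a "split"
# knob by brute force. For an axis of length n split into num_outputs parts,
# every ordered factorization of n is a candidate.

def split_candidates(n, num_outputs):
    """Return all ways to write n as a product of num_outputs factors."""
    if num_outputs == 1:
        return [(n,)]
    out = []
    for f in range(1, n + 1):
        if n % f == 0:  # f must divide n to be a valid factor
            for rest in split_candidates(n // f, num_outputs - 1):
                out.append((f,) + rest)
    return out

# e.g. an axis of length 8 split into 2 parts
print(split_candidates(8, 2))  # [(1, 8), (2, 4), (4, 2), (8, 1)]
```

The full config space is then the cross product of every knob's candidate list, which is why template design strongly affects tuning time.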

  • While AutoTVM needs schedule templates defined in TOPI, auto-scheduler generates schedules from scratch. As a result, the auto-scheduler generated schedules are more flexible and expected to achieve even better performance.

  • The execution time is the main metric to judge the schedule quality.
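Reduced to its essence, the selection criterion can be sketched as below (a toy illustration, not TVM source; real AutoTVM averages repeated runs and also reports GFLOPS, but GFLOPS is just fixed work divided by time, so ranking by time is equivalent):

```python
# Minimal sketch of "best config = lowest measured execution time".

def pick_best(configs, build_and_run):
    """Benchmark every config; return the fastest one and its time."""
    best_cfg, best_time = None, float("inf")
    for cfg in configs:
        t = build_and_run(cfg)  # measured seconds for this config
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

# toy stand-in for compiling and benchmarking a schedule
costs = {"cfg_a": 0.030, "cfg_b": 0.012, "cfg_c": 0.025}
best, t = pick_best(costs, lambda c: costs[c])
print(best, t)  # cfg_b 0.012
```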

Hey @comaniac, thanks. Are hardware parameters (number of threads, cache size, etc.) taken into consideration when auto-scheduling is used? Also, how does auto-scheduling generate schedules from scratch? (While executing the matrix multiplication example, it prints a bunch of programs that it has generated as candidates; how is this done?) Does it look at every loop and try to figure out the best transformation based on execution time / GFLOPS?

Also, the docs mention that an XGBoost model is used to pick the next config from the config space while tuning. Does this model only take the config space as an input, with its loss based on execution time?
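A conceptual sketch of that model-guided loop (heavily simplified; in real AutoTVM, configs are encoded as feature vectors and an XGBoost model is trained on pairs of features and measured execution times, so its training signal is indeed execution time, while the config space only supplies the candidates to score — the nearest-neighbor "model" below is a made-up stand-in):

```python
# Toy model-guided tuning loop, not TVM source.

def tune(space, measure, rounds=3, batch=2):
    history = {}  # config -> measured execution time

    def predicted_cost(cfg):
        # toy surrogate standing in for the XGBoost cost model:
        # predict the time of the nearest already-measured config
        if not history:
            return 0.0
        nearest = min(history, key=lambda c: abs(c - cfg))
        return history[nearest]

    for _ in range(rounds):
        # rank untried configs by predicted cost, then actually measure the
        # most promising batch; the measurements become new training data
        untried = sorted((c for c in space if c not in history),
                         key=predicted_cost)
        for cfg in untried[:batch]:
            history[cfg] = measure(cfg)
    return min(history, key=history.get)

# toy "hardware" where a tile size of 16 is fastest
best = tune([8, 12, 16, 20, 24], lambda t: abs(t - 16) * 0.001 + 0.01)
print(best)  # 16
```

The key point is the alternation: the model is cheap to query, real measurements are expensive, and each round of measurements retrains the model.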


Hey @comaniac, thanks for the reference; it was very helpful in understanding what the auto-scheduler does. I had a few doubts:

  1. Is this auto-scheduler already implemented in TVM, or is it still ongoing? While going through some of the files in the auto-scheduler folder (https://github.com/apache/incubator-tvm/blob/master/python/tvm/auto_scheduler/measure.py), we found that many of the functions are not implemented.

  2. Regarding the `HardwareParams` class implemented in python/tvm/auto_scheduler/auto_schedule.py: are these parameters used while generating the schedule space (by introducing some sort of constraint on the space), or in the cost model?

  1. All functions are implemented already; many of them are implemented in C++. Is this what you’re looking for?
  2. Most of them are used in the cost model at this moment.

Thanks a lot, @comaniac. Where could one get started when trying to add their own rules to generate sketches for new hardware?

As mentioned in the paper (Section 4.1):

On the other hand, the derivation-based sketch generation in Ansor is flexible enough to generate the required structures for emerging algorithms and hardware, as we allow users to register new derivation rules and integrate them seamlessly with existing rules.

Since the upstreaming of the auto-scheduler is still in progress (~90%), we do not have a clean interface for users to add new sketch generation rules yet. The easiest way for now is referring to or modifying an existing rule.
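To illustrate what registering a new derivation rule buys you, here is a toy, pure-Python rendering of derivation-based sketch generation (illustrative only, not Ansor's actual C++ implementation; the state and rule encodings here are made up):

```python
# Toy derivation: a state is (next stage to visit, transforms applied so far);
# each rule inspects the current stage and may rewrite it, possibly producing
# several successor states. Fully derived states are the sketches.

def generate_sketches(num_stages, rules):
    """Enumerate sketches by deriving states stage by stage."""
    sketches, queue = [], [(0, ())]
    while queue:
        stage, transforms = queue.pop()
        if stage == num_stages:  # all stages visited: a finished sketch
            sketches.append(transforms)
            continue
        for rule in rules:
            if rule["condition"](stage):
                for t in rule["apply"](stage):
                    queue.append((stage + 1, transforms + (t,)))
    return sketches

# two illustrative rules: a default "leave the stage as is" rule, plus a
# "user-registered" tiling rule that only fires on stage 1
rules = [
    {"condition": lambda s: True,
     "apply": lambda s: [f"skip(stage{s})"]},
    {"condition": lambda s: s == 1,
     "apply": lambda s: [f"multi_level_tile(stage{s})"]},
]
sketches = generate_sketches(2, rules)
```

A hardware-specific rule composes with the existing ones simply by being added to the rule list, which mirrors the seamless integration the paper describes.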

The custom sketch rule support is ready in our develop branch, while the final user interface for it has not been decided yet.

If this is important for you, we can consider upstreaming an experimental version of the custom sketch rules.

cc @comaniac @merrymercy


Hey @jcf94 and @comaniac, thanks for the response. I was just trying to learn the compiler flow of TVM and how, if needed, new hardware-specific rules could be added. There is no immediate need as of now, but thanks nevertheless.