Auto tuning comparison

Hi, I have tried to compare different tuning parameters in TVM auto-tuning, and the results are shown in the graph below. My understanding of auto-tuning was that when we increase the configuration space the tuning may consume less time, but my results do not show that. I am not able to understand the behavior of auto-tuning here: in my case the inference time also increases with the configuration space. Please correct me if my understanding is wrong.

[Auto tuning graph]

Also, are there any documents to understand more about auto-tuning?

I don’t quite understand what you meant by “increase the configuration space”. Did you mean increasing n_trial to explore more configs in the space?

Yes, that is correct; by configuration space I mean n_trial.

If so, the results look weird. At the very least, all 4 approaches should achieve similar performance once they have fully explored the configuration space (100%). You may want to check your evaluation settings, especially for the inference time evaluation; it can be misleading due to cold start, caching, or other reasons.
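One way to reduce that noise is to measure with TVM's time evaluator, which runs the model many times and reports statistics instead of a single cold run. This is only a minimal sketch: it assumes `lib` is the library returned by `relay.build(...)` for your model and that the model has an input named "data" of shape (1, 3, 224, 224); adjust those to your setup.

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor

# Assumed handle: `lib` is the output of relay.build(...) for the tuned model.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))

# time_evaluator runs the model `number` times per measurement and repeats the
# measurement `repeat` times, which averages out cold-start and caching effects.
ftimer = module.module.time_evaluator("run", dev, number=10, repeat=3)
prof_res = np.array(ftimer().results) * 1000  # results are in seconds -> ms
print("Mean inference time: %.2f ms (std %.2f ms)" % (np.mean(prof_res), np.std(prof_res)))
```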

OK, sure, I will check that. Also, are there any documents about the different tuners and the algorithms behind them?

I don’t think so, but the tuner names are pretty straightforward (see the sketch after this list for how each maps to a tuner class in autotvm):

Random: just random.

Grid search: searches the configuration space in sequential order.

GA: Genetic algorithm.

xgb-rank: an XGBoost cost model combined with simulated annealing search. This one is a bit more complicated, so you may refer to its publication here.
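As a rough illustration, here is how these tuners can be instantiated in an autotvm tuning loop. This is a sketch under assumptions: `tasks` is a list of tuning tasks extracted from your model, and the tuner options shown (e.g. `pop_size`, `loss_type="rank"`) are the usual defaults from the TVM tutorials, not anything specific to your experiment.

```python
from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner

def tune_tasks(tasks, tuner_name="xgb-rank", n_trial=800, log_file="tuning.log"):
    for task in tasks:
        if tuner_name == "xgb-rank":
            tuner = XGBTuner(task, loss_type="rank")
        elif tuner_name == "ga":
            tuner = GATuner(task, pop_size=100)
        elif tuner_name == "random":
            tuner = RandomTuner(task)
        elif tuner_name == "gridsearch":
            tuner = GridSearchTuner(task)
        else:
            raise ValueError("Unknown tuner: " + tuner_name)

        # n_trial is capped by the size of the task's config space, so "100%"
        # means measuring min(n_trial, len(task.config_space)) candidates.
        tsk_trial = min(n_trial, len(task.config_space))
        tuner.tune(
            n_trial=tsk_trial,
            measure_option=autotvm.measure_option(
                builder=autotvm.LocalBuilder(),
                runner=autotvm.LocalRunner(number=10, repeat=1, timeout=10),
            ),
            callbacks=[
                autotvm.callback.progress_bar(tsk_trial),
                autotvm.callback.log_to_file(log_file),
            ],
        )
```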


Does this mean, for example, that if I apply 30% of n_trial, it is something like unrolling the convolution operation by a factor of 30%, or not?

Not at all. It just means you have explored 30% of the entire tuning space.
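You can inspect that space directly to see what it contains. The sketch below assumes `mod`, `params`, and `target` come from your own model and build configuration; the knob names in the comment are just typical examples of what gets printed.

```python
from tvm import autotvm

# Hypothetical setup: `mod`, `params`, and `target` come from your own model.
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

# Each task's config space enumerates complete schedule candidates (tile sizes,
# unroll choices, etc.). n_trial only decides how many of these candidates the
# tuner actually measures; it does not set any individual knob to 30%.
for task in tasks:
    print(task.name, "space size:", len(task.config_space))
    print(task.config_space)  # lists knobs such as tile_f, tile_x, unroll_kw, ...
```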

Thanks for your answers, they cleared my doubts.

Could you try 500 or 800 n_trial? I think the result will be different.

I think I have already done around 800 n_trial, because if you look at the graph, 100% of the configuration space corresponds to around 800 n_trial.
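A quick sanity check for that mapping, assuming the `tasks` list from the earlier snippet: compare 800 trials against each task's actual space size to see what fraction it really covers.

```python
# Hypothetical check: how much of each task's space do 800 trials cover?
n_trial = 800
for task in tasks:
    space_size = len(task.config_space)
    covered = min(n_trial, space_size)
    print("%s: %d/%d configs (%.0f%%)" % (task.name, covered, space_size,
                                          100.0 * covered / space_size))
```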