Thank you all for your replies.
It seems the auto-scheduler already supports tuning a whole model from dense to sparse. Initially I was considering tuning a single operator; I'd like to put that aside for now and come back later with more background. Agreed that we should have a better, unified mechanism for sparse inputs.
I think the real problem is that autotvm does not officially support layout transformation, so we all do it implicitly, and then the correctness check fails. Following R1 might suggest a larger-scale design change.
On the other hand, previously `ref_input` could be enabled without setting the `check_correctness` option, and it just worked implicitly (the attribute was always submitted to the executor). IMO output checking is not a requirement for this piece of code; it can be safely decoupled, and easily accomplished with external developer effort.