[AutoTVM GPU] training data did not have the following fields

Training failed with the error: training data did not have the following fields: f1121, f1125, f1170, f1127, f1158, f1142, f1141, f1132, f1153, f1147, f1173, f1123, f1160, f1137, f1152, f1150, f1157, f1145, f1161, f1164, f1140, f1172, f1134

Is n_trials too large? I set it to 2000. @merrymercy

n_trials should not be the problem here. We have used > 10,000 trials before without an issue.

I found that the problem only occurs when we train with transfer_learning.

Are you using your modified tuner?
The code in my PR should not have this problem.

Yes. But it is very strange that everything is OK when transfer_learning is not used.

Because there are some “if” statements in the schedule, the output of feature extraction will be different for different branches.

When loading history data, we should set the context GLOBAL_SCOPE.in_tuning = True. (This is a patch I sent several days ago: https://github.com/dmlc/tvm/pull/1615)
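The mechanism described above can be sketched without TVM. This is a hypothetical, simplified illustration (the function and field names `extract_features`, `f1`..`f3` are invented, not TVM's): an “if” in the schedule makes feature extraction emit different field sets depending on the tuning context, so history records logged in one context lack fields the cost model expects in the other.

```python
def extract_features(config, in_tuning):
    """Toy stand-in for schedule feature extraction.

    An "if" branch in the schedule is only taken outside of tuning,
    so the extracted feature set depends on the context flag.
    """
    feats = {"f1": config * 2, "f2": config + 1}
    if not in_tuning:
        feats["f3"] = config % 7  # branch-only field
    return feats

# History records were logged during tuning (in_tuning=True)...
history = [extract_features(c, in_tuning=True) for c in range(4)]

# ...but transfer learning reloads them without setting the flag
# (in_tuning=False), so the model expects a field the history lacks.
expected = set(extract_features(0, in_tuning=False))
missing = sorted(expected - set(history[0]))
print(missing)  # the "training data did not have" fields
```

Setting GLOBAL_SCOPE.in_tuning = True before loading history (as the linked PR does) keeps both extractions on the same branch, so the field sets match.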

Thanks. I haven’t updated to this code; I think that is the reason. However, I have already started training and I don’t want to restart it, because it takes a lot of time. If I don’t use transfer_learning, will it affect the performance of my result?

No, it won’t affect the performance.

On a related note, could you please point me to the code for the local and global cost models (Eq. 4 in the paper)?

Please open a separate thread for new questions.