The auto-scheduler fails to extract tasks with a heterogeneous target, as reported in https://discuss.tvm.apache.org/t/can-autoscheduler-support-tuning-for-multiple-targets/1048. After some investigation, I found that we need to manually call `target = relay.build_module.build_target_by_device_type_map(target)` first to transform the target into a dict keyed by device type. However, even with that change, I still get an error in the TE compiler.
The error is:

```
TVMError: No target is provided for device llvm
```

where the input target map for `UpdateMainWorkspaceSize` is:

```
1: llvm -keys=cpu -link-params=0
2: cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32
```

I have no idea why the TE compiler is looking for `llvm` under device type 0 instead of 1.
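For context, here is a standalone sketch (no TVM required, and only an approximation of what TVM's `relay.build_module.build_target_by_device_type_map` does) of how a heterogeneous target list is normalized into a dict keyed by device type, and how a lookup like the one in `UpdateMainWorkspaceSize` fails when it queries the wrong key. The helper names below are hypothetical mocks for illustration:

```python
# Standalone mock (no TVM needed) of normalizing a heterogeneous target
# into a dict keyed by device type. Device-type codes follow DLPack
# convention: 1 = CPU (llvm), 2 = CUDA.

DEVICE_TYPE = {"llvm": 1, "cuda": 2}

def build_target_by_device_type_map(targets):
    """Turn a list of target strings into {device_type: target_str}."""
    dev_map = {}
    for t in targets:
        kind = t.split()[0]  # e.g. "llvm" from "llvm -keys=cpu ..."
        dev_map[DEVICE_TYPE[kind]] = t
    return dev_map

def lookup_target(dev_map, device_type):
    """Mimic the TE compiler's lookup: fail if the key is absent."""
    if device_type not in dev_map:
        raise KeyError(f"No target is provided for device type {device_type}")
    return dev_map[device_type]

targets = [
    "llvm -keys=cpu -link-params=0",
    "cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32",
]
dev_map = build_target_by_device_type_map(targets)
print(lookup_target(dev_map, 1))  # the llvm target lives under key 1
# lookup_target(dev_map, 0) would raise KeyError, which is analogous to
# the error above: the lookup queries device type 0 instead of 1.
```

So the llvm target is reachable under key 1; the error suggests the TE compiler is querying the map with a device type of 0.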
The weirdest part is that this error won't happen if we build the Relay module directly.
To reproduce, use this script (target_debug.py · GitHub) with my local branch: GitHub - comaniac/tvm at test_target