Hi, I have recently been using TVM and tried AutoTVM and the auto-scheduler to generate tuned models for GPU. I can easily analyze the TensorIR and its CUDA code for each task as below:
import tvm
task = tvm.auto_scheduler.SearchTask(func, args, target)
task.tune(tune_option)
sch, args = task.apply_best(log_file)
# print the IRModule
print(tvm.lower(sch, args))
# print the generated device source code
func = tvm.build(sch, args, target=str(target))
print(func.imported_modules[0].get_source())
However, I cannot find the right way to print out the source code of the whole tuned model rather than per task. It seems the module held by graph_executor does not support get_source(). Here is the brief snippet that failed:
# Tuning finished by auto-scheduler.
# save_best_per_task analyzes log_file and saves the best result per
# workload key into best_config_per_task.
best_config_per_task = save_best_per_task(log_file)
with auto_scheduler.ApplyHistoryBest(best_config_per_task):
    with tvm.transform.PassContext(opt_level=3,
                                   config={"relay.backend.use_auto_scheduler": True}):
        lib = relay.build(relay_model, target=target, params=params)
device = tvm.device(str(target), 0)
module = graph_executor.GraphModule(lib["default"](device))
module.set_input('input_ids', random_input)
print(module.module.get_source())  # expected C++-style source code --> fails
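In case it clarifies what I am after, this is the kind of helper I was hoping to write. I am assuming here that the factory module returned by relay.build exposes the compiled runtime library via get_lib(), and that the device code (e.g. CUDA) sits on its imported modules as in the per-task case; I am not sure that is the intended API, which is partly what I am asking.

```python
def dump_sources(lib):
    """Return (host_source, device_sources) for a built factory module `lib`.

    Assumes `lib` is the object returned by relay.build and that
    `lib.get_lib()` yields the host runtime module, whose imported
    modules carry the device (e.g. CUDA) source.
    """
    rt_mod = lib.get_lib()            # host-side runtime module (assumption)
    host_src = rt_mod.get_source()    # e.g. LLVM IR for the host target
    dev_srcs = [m.get_source() for m in rt_mod.imported_modules]  # e.g. CUDA C
    return host_src, dev_srcs
```

I would then expect `host, devs = dump_sources(lib)` to give the whole-model sources, but I have not confirmed this works.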
Is there another way to print out the source code that the graph executor actually runs? Thanks.