TVM with TensorRT is very slow

The call `builder_ = nvinfer1::createInferBuilder(*logger)` is very time-consuming, and it is made more than once. Is it invoked once for every TensorRT subgraph? (This function is an NVIDIA TensorRT API, not part of TVM.)
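To see how many TensorRT subgraphs the partitioner actually produced, I count the partitioned functions with a small helper (my own sketch; it assumes each Relay function tagged with `Compiler="tensorrt"` gets its own engine, and therefore its own builder, at runtime):

```python
def count_tensorrt_subgraphs(mod):
    # count Relay functions the partitioner handed off to TensorRT;
    # assumption: one engine (and one createInferBuilder call) per such function
    n = 0
    for gv in mod.get_global_vars():
        func = mod[gv]
        if func.attrs and "Compiler" in func.attrs.keys() and str(func.attrs["Compiler"]) == "tensorrt":
            n += 1
    return n

# call this after partition_for_tensorrt (see the code below)
# print("tensorrt subgraphs:", count_tensorrt_subgraphs(mod))
```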

Here is my code:

```python
import logging

import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor
from tvm.relay.op.contrib import tensorrt

logging.basicConfig(level=logging.DEBUG)

model_path = "models/yolov5s.v5.onnx"
onnx_model = onnx.load(model_path)

BATCH_SIZE = 1
input_shape = (BATCH_SIZE, 3, 640, 640)
input_name = "images"
dtype = "float16"

shape_dict = {input_name: input_shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict, dtype=dtype)
mod = relay.transform.InferType()(mod)

# partition TensorRT-compatible subgraphs out of the Relay module
mod = tensorrt.partition_for_tensorrt(mod)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda", params=params)

dev = tvm.cuda(0)
module_exec = graph_executor.GraphModule(lib["default"](dev))

x_data = np.random.uniform(-1, 1, input_shape).astype(dtype)
module_exec.set_input(input_name, x_data)
print(module_exec.benchmark(dev, number=1, repeat=1))
```
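To take the one-time engine build out of the measurement, I also tried a warm-up run plus engine caching (sketch below; `./trt_cache` is just a placeholder path, and whether `TVM_TENSORRT_CACHE_DIR` is honored depends on how TVM was built):

```python
import os

# assumption: my TVM build supports engine caching via this env var, so built
# TensorRT engines are serialized to disk instead of rebuilt on every start
os.environ["TVM_TENSORRT_CACHE_DIR"] = "./trt_cache"  # placeholder path

# warm-up: the first inference triggers the per-subgraph engine build
# (this is where createInferBuilder gets called), so run once before timing
module_exec.run()
print(module_exec.benchmark(dev, number=1, repeat=10))
```

Even with the warm-up, the initial engine build is still slow, which is why I am asking whether one builder per subgraph is the expected behavior.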