tvm._ffi.base.TVMError: TVMError: Driver error:


I am trying to use the Relay TensorRT integration to accelerate TensorFlow inference, following two tutorials: the Relay TensorRT integration tutorial and the compile-a-TensorFlow-model tutorial. When I run the program, the following error occurs:

Traceback (most recent call last):
  File "", line 220, in <module>
    test(arg.checkpoint_dir, arg.style_name, arg.test_dir, arg.if_adjust_brightness)
  File "", line 127, in test
  File "/home/lulin/work/tvm/python/tvm/contrib/", line 206, in run
  File "/home/lulin/work/tvm/python/tvm/_ffi/_ctypes/", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: TVMError: Driver error:

The relevant part of the program is as follows:

    with tvm.transform.PassContext(opt_level=3, config={'relay.ext.tensorrt.options': config}):
        lib = relay.build(mod, target=target, params=params)

    dtype = "float32"
    ctx = tvm.gpu(0)
    loaded_lib = tvm.runtime.load_module('')
    gen_module = tvm.contrib.graph_runtime.GraphModule(loaded_lib['default'](ctx))

    # gen_module.set_input("generator_input", tvm.nd.array(x.astype(dtype)))
    tvm_output = gen_module.get_output(0, tvm.nd.empty(x.shape, "float32"))

My environment:

  1. Ubuntu 16.04
  2. Python 3.7.10
  3. TensorFlow-gpu 1.15
  4. TensorRT-
  5. CUDA 11.0 with cudnn 8.1.0

I built TVM with TensorRT by following the official documentation: modifying the config.cmake file and then building.
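Concretely, the flags the TensorRT-integration docs have you set in config.cmake are roughly the following (the TensorRT install path here is an assumed example, not my actual path):

```cmake
# Enable the TensorRT codegen (offloads supported Relay subgraphs to TensorRT).
set(USE_TENSORRT_CODEGEN ON)
# Enable the TensorRT runtime; point it at the TensorRT install directory
# (example path -- replace with your actual location).
set(USE_TENSORRT_RUNTIME /usr/local/TensorRT)
```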


I also added the TensorRT path to my ~/.bashrc.
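The ~/.bashrc entries look like this (assuming TensorRT is unpacked at /usr/local/TensorRT; the path is an example, adjust it for your install):

```shell
# Make the TensorRT shared libraries visible to the dynamic loader.
export TENSORRT_HOME=/usr/local/TensorRT
export LD_LIBRARY_PATH="$TENSORRT_HOME/lib:$LD_LIBRARY_PATH"
```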

Could this error be caused by a TensorRT version issue? Does anyone have any ideas about this error?

Many thanks in advance for your reply!