Hello,
I am trying to use the Relay TensorRT integration to accelerate TensorFlow inference, following the two tutorials "Relay TensorRT Integration" and "Compile TensorFlow Models". When I run the program, the following error occurs:
```
Traceback (most recent call last):
  File "test_tvm.py", line 220, in <module>
    test(arg.checkpoint_dir, arg.style_name, arg.test_dir, arg.if_adjust_brightness)
  File "test_tvm.py", line 127, in test
    gen_module.run(data=tvm.nd.array(x.astype(dtype)))
  File "/home/lulin/work/tvm/python/tvm/contrib/graph_runtime.py", line 206, in run
    self._run()
  File "/home/lulin/work/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: TVMError: Driver error:
```
The code that produces the error is as follows:
```python
with tvm.transform.PassContext(opt_level=3, config={'relay.ext.tensorrt.options': config}):
    lib = relay.build(mod, target=target, params=params)
lib.export_library('compiled.so')

dtype = "float32"
ctx = tvm.gpu(0)
loaded_lib = tvm.runtime.load_module('compiled.so')
gen_module = tvm.contrib.graph_runtime.GraphModule(loaded_lib['default'](ctx))
# gen_module.set_input("generator_input", tvm.nd.array(x.astype(dtype)))
gen_module.run(data=tvm.nd.array(x.astype(dtype)))
tvm_output = gen_module.get_output(0, tvm.nd.empty(x.shape, "float32"))
```
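For reference, this is how I understand the full compilation flow from the Relay TensorRT integration tutorial: the Relay module has to be partitioned for TensorRT before `relay.build`, and that step is what produces the `config` dict passed to the `PassContext`. This is only a sketch (it needs a GPU plus a TensorRT-enabled TVM build to actually run); `mod` and `params` are assumed to come from the TensorFlow frontend as in my script above.

```python
# Sketch of the tutorial flow (assumes a TensorRT-enabled TVM build and
# that `mod`/`params` were produced by relay.frontend.from_tensorflow).
import tvm
from tvm import relay
from tvm.relay.op.contrib.tensorrt import partition_for_tensorrt

# Annotate and partition the graph for TensorRT; this returns the rewritten
# module together with the config dict used below.
mod, config = partition_for_tensorrt(mod, params)

with tvm.transform.PassContext(opt_level=3,
                               config={"relay.ext.tensorrt.options": config}):
    lib = relay.build(mod, target="cuda", params=params)
lib.export_library("compiled.so")
```

If `partition_for_tensorrt` is skipped, nothing is offloaded to TensorRT and the graph is compiled as plain CUDA, so confirming this step ran may help narrow the problem down.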
My environment:
- Ubuntu 16.04
- Python 3.7.10
- TensorFlow-gpu 1.15
- TensorRT-7.2.3.4
- CUDA 11.0 with cuDNN 8.1.0
I built TVM with TensorRT support by following the official documentation: modify the config.cmake file as follows, then build.
```cmake
set(USE_TENSORRT_CODEGEN ON)
set(USE_TENSORRT_RUNTIME /home/XXX/TensorRT-7.2.3.4)
```
I also added the TensorRT library path to my `~/.bashrc`.
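To double-check that the entry from `~/.bashrc` is actually visible to the process running the model, the loader path can be inspected directly. A minimal check, assuming the same install location as in config.cmake:

```shell
# Verify that the TensorRT library directory is on the loader path of the
# current shell (path assumed to match the one used in config.cmake).
export LD_LIBRARY_PATH=/home/XXX/TensorRT-7.2.3.4/lib:$LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep TensorRT
```

Once `compiled.so` exists, `ldd compiled.so | grep nvinfer` also shows whether `libnvinfer` resolves at load time.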
Could this error be caused by a TensorRT version issue? Does anyone have ideas about what is going wrong?

Many thanks for any replies!