nvidia-smi reports that my current environment (an NVIDIA T4 on AWS) has driver version 450.80.02 installed:
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
nvcc reports runtime (toolkit) version 10.0.130:
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
The toolkit-to-driver compatibility table in NVIDIA's docs indicates that runtime version 10.0.130 requires driver version >= 410.48. Since 450.80.02 >= 410.48, the system should be in order.
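As an additional sanity check on that pairing, one can ask the driver and runtime libraries directly which versions they report, rather than relying on nvidia-smi and nvcc output alone. The following is a minimal sketch using ctypes; the sonames libcuda.so.1 and libcudart.so.10.0 are assumptions about this particular machine and may need adjusting:
import ctypes

# Driver API: the highest CUDA version the installed driver supports.
libcuda = ctypes.CDLL("libcuda.so.1")
driver_ver = ctypes.c_int(0)
libcuda.cuDriverGetVersion(ctypes.byref(driver_ver))

# Runtime API: the version of the libcudart that actually gets loaded.
# "libcudart.so.10.0" is an assumed soname for a CUDA 10.0 toolkit install.
libcudart = ctypes.CDLL("libcudart.so.10.0")
runtime_ver = ctypes.c_int(0)
libcudart.cudaRuntimeGetVersion(ctypes.byref(runtime_ver))

# Both calls encode versions as major * 1000 + minor * 10,
# e.g. 11000 for CUDA 11.0 and 10000 for CUDA 10.0.
print("driver supports up to CUDA", driver_ver.value)
print("runtime (libcudart) reports", runtime_ver.value)
If the first number is at least as large as the second, a "driver version is insufficient for runtime version" complaint should not apply, at least not for that particular libcudart.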
After compiling TVM from source with CUDA support enabled, I attempted to run the following debug code:
import tvm
print(tvm.gpu(0).exist)            # does TVM see the GPU at all?
print(tvm.gpu(0).compute_version)  # compute capability of device 0
This prints False and raises:
Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading == false: CUDA: CUDA driver version is insufficient for CUDA runtime version
I am unable to follow the reasoning behind this error message (specifically the claim that the CUDA driver version is insufficient for the CUDA runtime version). Given the versions above, that claim does not seem correct to me. Am I perhaps missing something?