Warning: Unable to detect CUDA version, default to "-arch=sm_20" instead

I get this warning when compiling for the CUDA target on a CPU-only host instance, while there is no warning when I compile on a GPU host instance. Does this message matter?

/tvm/src/target/target_kind.cc:163: Warning: Unable to detect CUDA version, default to "-arch=sm_20" instead

The target I’m using:

target = "cuda -arch=sm_72"
target_host = "llvm -mtriple=x86_64-linux-gnu"
target = tvm.target.Target(target=target, host=target_host)
des_exec = relay.vm.compile(mod, target=target, params=params)
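For context, the warning comes from TVM's target preprocessing, which tries to detect the locally installed CUDA toolkit in order to pick a default `-arch`. On a CPU-only host there is no toolkit to query, so it falls back to `sm_20`. A rough, hypothetical sketch of that kind of detection (the function name and fallback logic here are illustrative, not TVM's actual code):

```python
import shutil
import subprocess

def detect_cuda_arch(default="sm_20"):
    """Roughly mimic how a compiler stack might pick a default CUDA arch.

    If nvcc is not found on PATH -- the usual situation on a CPU-only
    host -- fall back to a conservative default, which is what produces
    the 'default to "-arch=sm_20"' warning in TVM.
    """
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return default  # no local CUDA toolkit: fall back to the default arch
    out = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    # A real implementation would parse the toolkit version from out.stdout
    # and map it to an architecture; here we only signal detection succeeded.
    return "detected"
```

Since the target string above already pins `-arch=sm_72` explicitly, the fallback value should not matter for the generated code; the warning is just the detection step reporting that it could not find a toolkit.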

I think it’s a bug that does not actually affect compilation. Can you please post your script for reproducing the issue and debugging?

@zxybazh sure, thanks for the reply.

import torch
import torchvision
import tvm
from tvm import relay

input_shape = [1, 3, 224, 224]
input_data = torch.randn(input_shape)

model_name = "resnet152"
model = getattr(torchvision.models, model_name)(pretrained=True)
model = model.eval()
scripted_model = torch.jit.trace(model, input_data).eval()

mod, params = relay.frontend.from_pytorch(scripted_model, [("data", input_shape)])
target = "cuda -arch=sm_72"
target_host = "llvm -mtriple=x86_64-linux-gnu"
target = tvm.target.Target(target=target, host=target_host)
des_exec = relay.vm.compile(mod, target=target, params=params)

Hi, I’m also having issues compiling with sm_70 (NVIDIA V100). It gives me the same output as @jsheng-jian. Was there ever a solution to this? Thanks.

Hello, I’m also trying to use the CUDA target (I tried passing it to the Target class and also the target.cuda() method), but I’m facing the same problem. Is there any workaround?

Best, Uslu

Hi, I’ve actually resolved this problem on my end. What worked for me was installing TVM from source; after that I don’t even have to specify the arch. Their installation guide is very helpful and easy to follow: Install from Source — tvm 0.11.dev0 documentation.
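For anyone following that route, the build roughly looks like the sketch below. This is only an outline under the assumption of a standard Linux setup with CUDA and LLVM installed; follow the official install guide for the authoritative steps, and adjust paths and options for your system.

```shell
# Sketch of a from-source TVM build with CUDA enabled (illustrative only).
git clone --recursive https://github.com/apache/tvm tvm
cd tvm && mkdir build && cp cmake/config.cmake build/
# Enable the CUDA and LLVM backends in the build config.
echo 'set(USE_CUDA ON)' >> build/config.cmake
echo 'set(USE_LLVM ON)' >> build/config.cmake
cd build && cmake .. && make -j"$(nproc)"
# Make the Python package importable.
export TVM_HOME=$PWD/..
export PYTHONPATH=$TVM_HOME/python:$PYTHONPATH
```

With `USE_CUDA ON`, the build links against the local CUDA toolkit, so the version detection that triggers the warning has something to find.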

I can also confirm that TVM runs on the GPU hardware when profiling, so it seems to work!
