[compile on gpu] tvm._ffi.base.TVMError: Traceback

I built TVM from source on my GPU machine, after fixing some build issues (see https://discuss.tvm.apache.org/t/build-how-to-choose-the-llvm-version-used-to-build-tvm/18313).

Then I tried to compile a demo case that uses the Ansor auto-scheduler (it seems I can't upload the test case here?).

The test can be obtained from "Optimizing Operators with Auto-scheduling" on the Apache TVM Chinese site, or from [AutoScheduling] How to choose the best schedule for AutoScheduling ? · Issue #17692 · apache/tvm · GitHub.

I get an error when I run python matmul_ansor.py:

(tvm0.18_py310_zyd_Dietcode) root@j00595921debug2-cc95c9977-q752v:/home/zhongyunde/test/ansor# python matmul_ansor.py

...

Traceback (most recent call last):
  File "/home/zhongyunde/test/ansor/matmul_ansor.py", line 67, in <module>
    func(a_tvm, b_tvm, c_tvm, out_tvm)
  File "/home/zhongyunde/tvm_codes/apache-tvm-src-v0.18.0/python/tvm/runtime/module.py", line 201, in __call__
    return self.entry_func(*args)
  File "/home/zhongyunde/tvm_codes/apache-tvm-src-v0.18.0/python/tvm/_ffi/_ctypes/packed_func.py", line 245, in __call__
    raise_last_ffi_error()
  File "/home/zhongyunde/tvm_codes/apache-tvm-src-v0.18.0/python/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /home/zhongyunde/tvm_codes/apache-tvm-src-v0.18.0/build/libtvm.so(TVMFuncCall+0x59) [0x7f877cf757d9]
  [bt] (2) /home/zhongyunde/tvm_codes/apache-tvm-src-v0.18.0/build/libtvm.so(+0x2ea3f25) [0x7f877cfc2f25]
  [bt] (1) /home/zhongyunde/tvm_codes/apache-tvm-src-v0.18.0/build/libtvm.so(+0xc18105) [0x7f877ad37105]
  [bt] (0) /home/zhongyunde/tvm_codes/apache-tvm-src-v0.18.0/build/libtvm.so(tvm::runtime::Backtrace[abi:cxx11]()+0x2c) [0x7f877cfc54ec]
TVMError: Assert fail: T.tvm_struct_get(A, 0, 10, "int32") == 2, Argument default_function.A.device_type has an unsatisfied constraint: 2 == T.tvm_struct_get(A, 0, 10, "int32")
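For context on what this assert means: the numbers are DLPack device-type codes, and the compiled module checks that each argument lives on the device it was compiled for. Code 2 is kDLCUDA, so the function expects GPU-resident arrays but was handed arrays on another device (the CPU is code 1). A small sketch to decode the message (the two codes below are the standard DLPack `DLDeviceType` values; the helper function is mine, not part of TVM):

```python
# Decode the "2" in the assert: TVM argument checks use DLPack device-type
# codes. 2 (kDLCUDA) means the compiled function expects its arguments on
# the GPU; arrays created with tvm.cpu() carry code 1 (kDLCPU) instead.
DLPACK_DEVICE_TYPES = {
    1: "kDLCPU",
    2: "kDLCUDA",
}

def explain_device_constraint(expected: int, actual: int) -> str:
    """Hypothetical helper: turn the two codes into a readable message."""
    exp = DLPACK_DEVICE_TYPES.get(expected, f"unknown({expected})")
    act = DLPACK_DEVICE_TYPES.get(actual, f"unknown({actual})")
    return f"module compiled for {exp}, but the argument was allocated on {act}"

print(explain_device_constraint(2, 1))
```

So the mismatch is between the compilation target and the device the NDArrays were allocated on, which matches the fix further down.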

I tried compiling with a fresh TVM 0.18; it still has the same issue.

  • The following check (suggested by DeepSeek) prints "No GPUs found. Using CPU.", even though nvidia-smi reports CUDA Version: 12.2.
import tensorflow as tf

# List available GPUs
gpus = tf.config.list_physical_devices('GPU')
print("GPUs:", gpus)

# Use only the first GPU if available
if gpus:
    try:
        tf.config.set_visible_devices(gpus[0], 'GPU')
        # Optional: Limit memory growth
        tf.config.experimental.set_memory_growth(gpus[0], True)
    except RuntimeError as e:
        print(e)
else:
    print("No GPUs found. Using CPU.")
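Note that the TensorFlow check above only tells you whether TensorFlow's own CUDA build can see the GPU, which is independent of TVM. A more direct check (a sketch; it returns None when TVM is not importable in the current environment) is to ask TVM itself:

```python
# Ask TVM directly whether the CUDA device is visible, instead of going
# through TensorFlow. tvm.cuda(0) returns a Device handle; its .exist
# property reports whether the device is actually present.
def tvm_cuda_available():
    try:
        import tvm
    except ImportError:
        return None  # TVM not installed in this environment
    return bool(tvm.cuda(0).exist)

print("TVM CUDA device present:", tvm_cuda_available())
```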

Update: I needed to adjust the following two places together to make the demo case from apache/tvm issue #17692 run on the GPU.

Now it tests OK with TVM v0.18 (export PYTHONPATH=/home/w00469877/tvm_codes/apache-tvm-src-v0.18.0/python):

target = tvm.target.Target("llvm") → target = tvm.target.Target("cuda")

dev = tvm.cpu() → dev = tvm.cuda()
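In other words, the compilation target and the device passed to tvm.nd.array must be changed as a pair. A minimal sketch of that pairing (the USE_GPU flag and helper are mine; the variable names in the comments follow the issue's matmul_ansor.py):

```python
# The target string given to tvm.target.Target and the device used to
# allocate the input NDArrays must agree, otherwise the compiled module's
# device_type assert fires at call time.
USE_GPU = True

def pick_target_and_device(use_gpu: bool):
    """Return (target string, device kind) that must be used together."""
    if use_gpu:
        return "cuda", "cuda"   # tvm.target.Target("cuda"), tvm.cuda()
    return "llvm", "cpu"        # tvm.target.Target("llvm"), tvm.cpu()

target_str, dev_kind = pick_target_and_device(USE_GPU)
# In the actual script this becomes:
#   target = tvm.target.Target(target_str)
#   dev = tvm.cuda() if dev_kind == "cuda" else tvm.cpu()
#   a_tvm = tvm.nd.array(a_np, device=dev)  # every argument on the same dev
```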