USE_LIBTORCH build fails for 0.9


I tried to enable USE_LIBTORCH in 0.9 but got the error message below. Which version of PyTorch is verified to work, and are there any special steps needed to enable it?

In file included from /data/qtensor/tvm/src/runtime/contrib/libtorch/
/usr/local/include/ATen/DLConvertor.h:17:11: error: ‘DLContext’ does not name a type; did you mean ‘Context’?
   17 | TORCH_API DLContext getDLContext(const Tensor& tensor, const int64_t& device_id);
      |           ^~~~~~~~~
      |           Context
In file included from /data/qtensor/tvm/include/tvm/tir/usmp/utils.h:30,
                 from /data/qtensor/tvm/src/relay/backend/contrib/libtorch/../../utils.h:37,
                 from /data/qtensor/tvm/src/relay/backend/contrib/libtorch/
/data/qtensor/tvm/include/tvm/runtime/device_api.h: In function ‘const char* tvm::runtime::DeviceName(int)’:
/data/qtensor/tvm/include/tvm/runtime/device_api.h:247:10: error: ‘kDLCUDA’ was not declared in this scope
  247 |     case kDLCUDA:
      |          ^~~~~~~
/data/qtensor/tvm/include/tvm/runtime/device_api.h:249:10: error: ‘kDLCUDAHost’ was not declared in this scope
  249 |     case kDLCUDAHost:

@t-vi any idea? It seems to me that PyTorch's bundled dlpack.h and TVM's 3rdparty dlpack.h both use the same include guard, DLPACK_DLPACK_H_, so whichever header is included first shadows the other. How can this conflict be resolved?

In my experience, compiling with LibTorch is a bit tricky:

  1. We need to install PyTorch (not only LibTorch) built with the cxx11 ABI. However, the PyTorch wheels from the official site do not use the cxx11 ABI.
  2. Build TVM against a LibTorch with cxx11 ABI support. We can either download it separately or directly use the LibTorch embedded in the PyTorch installation folder.

I'm not sure why the PyTorch version matters; this is just my observation.