Errors When Using d2l-tvm on Colab to Learn TVM

Dear TVM Development Team,

I hope this email finds you well. I am currently trying to learn TVM using the d2l-tvm resource. However, since I don’t have a GPU on my local PC, I am attempting to work in a Google Colab environment. I’ve encountered several issues while setting this up, and I would greatly appreciate your guidance on how to resolve them or proceed with learning TVM effectively.

First Issue: Installation of TVM and TOPI Wheels

In the d2l-tvm guide, there’s a command to install specific wheels:

!pip install https://tvm-repo.s3-us-west-2.amazonaws.com/tvm-0.7.dev1-cp37-cp37m-linux_x86_64.whl https://tvm-repo.s3-us-west-2.amazonaws.com/topi-0.7.dev1-py3-none-any.whl

However, this fails due to a Python version mismatch. The wheels are built for Python 3.7, while Colab uses a newer version, making the installation impossible.
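
For reference, a quick check along these lines in a Colab cell should make the mismatch visible (a minimal sketch; the exact Python version depends on the current Colab image):

import sys

# The wheel's "cp37-cp37m" tag must match the running interpreter exactly;
# recent Colab runtimes ship a newer Python, so pip refuses the wheel.
print(sys.version)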

Second Issue: Using !pip install apache-tvm

As an alternative, I tried installing TVM with !pip install apache-tvm. Unfortunately, this version does not support CUDA, and I also ran into version conflicts with numpy and ml_dtypes. Additionally, when I tried importing d2ltvm, I encountered the following errors:

Apr 8, 2025, 3:19:27 PM WARNING free(): invalid size
Apr 8, 2025, 3:19:28 PM INFO KernelRestarter: restarting kernel (1/5), keep random ports
Apr 8, 2025, 3:19:28 PM WARNING WARNING:root:kernel 5736a882-6d24-4e47-b660-815f4cf33aff restarted
Apr 8, 2025, 3:19:29 PM WARNING 0.00s - Debugger warning: It seems that frozen modules are being used, which may
Apr 8, 2025, 3:19:29 PM WARNING 0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
Apr 8, 2025, 3:19:29 PM WARNING 0.00s - to python to disable frozen modules.
Apr 8, 2025, 3:19:29 PM WARNING 0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.

These errors seem to indicate compatibility issues, possibly tied to the Python version or conflicting modules.
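
If it helps with diagnosis, a check along these lines should show the installed versions and whether the wheel was built with CUDA (a sketch; I am assuming tvm.support.libinfo() exposes the build flags and that tvm.cuda(0).exist reflects device availability):

import importlib.metadata
import numpy
import tvm
import tvm.support

print("TVM:", tvm.__version__)
print("numpy:", numpy.__version__)
print("ml_dtypes:", importlib.metadata.version("ml_dtypes"))
# Assumption: USE_CUDA reads "OFF" for the PyPI apache-tvm wheel,
# and the CUDA device check fails without a CUDA-enabled build.
print("USE_CUDA:", tvm.support.libinfo().get("USE_CUDA"))
print("CUDA device reachable:", tvm.cuda(0).exist)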

Third Issue: Building TVM from Source

Following the official TVM installation guide, I attempted to build TVM from source with CUDA enabled. However, after completing the build, I received an error stating that the relay module does not exist. I’m unsure if TVM has shifted from using relay to relax, and if so, whether this impacts my ability to use d2l-tvm materials, which might rely on relay.
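
To make the question concrete, a check like the following should show which high-level IRs a given installation actually ships (a minimal sketch):

import importlib.util

# relay is the older graph-level IR that d2l-tvm targets; relax is its successor.
for name in ("tvm.relay", "tvm.relax"):
    print(name, "available:", importlib.util.find_spec(name) is not None)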

Overall Concern

My main goal is to learn the basic concepts of TVM using d2l-tvm in a Colab environment. However, due to these issues, I’m wondering if d2l-tvm is still compatible with the current version of TVM, especially in Colab. If it’s no longer feasible to use d2l-tvm in this setup, could you recommend an alternative approach to learning TVM? For example, would following the official TVM documentation be a better way to get started?

I’d be very grateful for any advice or solutions you can provide to help me overcome these challenges. Thank you for your time and support!

Best regards,

Likely this is because d2l-tvm is too stale. Check out the official tutorials, which should be up to date.

Thank you for the reply. Does TVM now use Relax instead of Relay? Also, would you recommend learning through the official tutorials?

Yes, please go through the official tutorials; right now we use Relax.

Thank you so much for continuously taking the time to respond and support users like me; it's truly appreciated. As you suggested, I'm trying to follow the official end-to-end model optimization tutorial using Relax, but I've run into a few issues that I would appreciate your help with.

1. MLC Nightly Build (mlc-ai-nightly-cu123)

I installed the nightly version using:

pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly-cu123

This installed TVM version 0.20.dev0. While running the tutorial, particularly the part:

from torch.export import export
from tvm.relax.frontend.torch import from_exported_program
exported_program = export(torch_model, example_args)
mod = from_exported_program(exported_program, keep_params_as_input=True)

I encountered the following error:

AssertionError: Unsupported function type relu_.default

I manually checked the exported_program_translator.py file in the official TVM repository and confirmed that relu_.default does appear to be supported in the current codebase.


However, the error suggests that this version of TVM still fails to recognize the operation. Is it possible that the nightly wheel is outdated or does not include this latest support?
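
To rule out a mismatch between the GitHub main branch and the wheel that pip actually installed, a check along these lines should tell whether the installed copy of the translator mentions relu_ at all (a sketch; I am assuming the installed module path mirrors the file I looked at in the repository):

import pathlib
import tvm.relax.frontend.torch.exported_program_translator as translator

# If the installed file never mentions relu_, the nightly wheel simply
# predates the commit that added the mapping.
print(translator.__file__)
print("mentions relu_:", "relu_" in pathlib.Path(translator.__file__).read_text())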

2. Official Pre-release via PyPI (pip install apache-tvm --pre)

To try an alternative, I installed TVM using the official pre-release:

pip install apache-tvm --pre

This gave me version 0.14.dev273. However, in this version, I encountered a different error:

AttributeError: module 'tvm.runtime' has no attribute 'Executable'

This occurred when attempting to import relax or instantiate a Relax VirtualMachine. It seems this version may not yet support the relax module or may have incomplete support for the current tutorial APIs.
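
For what it's worth, probing that wheel along these lines should confirm whether the Relax-era pieces are present at all (a sketch):

import importlib.util
import tvm
import tvm.runtime

# If either check comes back negative, the 0.14.dev wheel predates the
# runtime API that the current Relax tutorial relies on.
print("TVM:", tvm.__version__)
print("tvm.relax importable:", importlib.util.find_spec("tvm.relax") is not None)
print("tvm.runtime.Executable present:", hasattr(tvm.runtime, "Executable"))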


3. Source Build Attempt

I also attempted to build TVM from source using the method provided in the official documentation:

!git clone --recursive https://github.com/apache/tvm tvm
%cd tvm
!mkdir -p build && cp cmake/config.cmake build/
!echo 'set(CMAKE_BUILD_TYPE RelWithDebInfo)' >> build/config.cmake
!echo 'set(USE_LLVM ON)' >> build/config.cmake
!echo 'set(USE_CUDA ON)' >> build/config.cmake  # use the Colab GPU
%cd build
!cmake ..
!cmake --build . --parallel $(nproc)

However, this also resulted in a 0.20.dev build that seems to lack support for relu_.default, so the same issue persists.
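
For completeness, a post-build sanity check along these lines should confirm which build Python is actually picking up (a sketch; /content/tvm is just an assumption about where the clone lives in a Colab session):

import sys
sys.path.insert(0, "/content/tvm/python")  # point Python at the source build

import tvm
import tvm.support

print("TVM:", tvm.__version__)                             # expect a 0.20.dev version string
print("USE_CUDA:", tvm.support.libinfo().get("USE_CUDA"))  # "ON" if the flag took effect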


4. Related GitHub Issue: [Relax][PyTorch] Improve ExportedProgram frontend by supporting ...

Since it looks like support for relu_.default was only added recently, in the April 8th commit, I was wondering whether a manual source build is currently the only way to try the tutorial with Relax.


I would like to ask whether there is a prebuilt wheel that includes full Relax support and works reliably on Google Colab or a CPU-only machine.

If it's currently difficult to run the tutorial in such environments, I'd greatly appreciate any suggestions on alternative ways to start learning about compiler optimization for heterogeneous computing.

Additionally, if there are any learning roadmaps or recommended resources for getting started with Relax systematically, especially for users who can't compile from source, I would be truly grateful if you could point me in the right direction.

Thank you again for your time and your contributions to the community. I really appreciate any guidance you can provide.

Warm regards,