Build TVM for WASM

Hi everyone! I have C++ code in which I need to run inference on a model I trained, and I would be happy to use TVM for that.

So first, I load my PyTorch model and create a TorchScript trace:

    import torch
    from tvm.relay.frontend import from_pytorch

    # Trace the PyTorch model into TorchScript
    traced_script_module = torch.jit.trace(model, (image, lms_X))

    # (input name, shape) pairs for the Relay importer
    shape_list = []
    shape_list.append(('input0', (1, 5, 64, 64)))
    shape_list.append(('input1', (1, 18)))

    mod, params = from_pytorch(traced_script_module, shape_list)

So no issue there, I prepare mod and params without any errors. Then I try to do this:

    target = "wasm"
    target_host = ""
    with tvm.transform.PassContext(opt_level=3):
        graph, lib, params = relay.build_module.build(mod, target=target, params=params)

And that's where I get an error that target "wasm" is undefined. So my question is: how do I build TVM for wasm correctly? How do I set up the target properly?

My OS is Windows 10, and my TVM build is without LLVM, because apparently it's not trivial at all to build TVM with LLVM on Windows: LLVM does not ship LLVMConfig.cmake there for some reason. For the time being I just want to make it work even without LLVM; LLVM I can leave for later. Or can I?

Thank you in advance for your response and thank you for such a beautiful library!

Since you mentioned wasm, I assume you want to deploy for the web. Check out https://github.com/mlc-ai/web-stable-diffusion, which contains a complete pipeline.

We do need to rely on LLVM, unfortunately. You can, however, try the Windows pre-built package that comes with LLVM support, or build with the conda environment that comes with LLVM.

Thank you very much for your response!

I finally figured out which LLVM to build, and I managed to create a .dll file from my model successfully. On a side note: I would advise adding a tutorial on how to build LLVM for Windows, because the pre-built binaries will not work there. Actually, I can volunteer and create this tutorial for you, if you don't mind.

As for that URL: I went through it, and it looks like an out-of-the-box solution for creating a full WASM library from the model, but this is not what I need. I have a big C++ library that is compiled to WASM, and inside this library I need to run inference on my model, and I want to use TVM for that. What is unclear to me is how to load a TVM model when I am in a WASM environment.

Can you provide some other examples of successful use of TVM in a WASM environment?

The conda environment https://github.com/apache/tvm/blob/main/conda/build-environment.yaml is likely the best way to get things set up and running on Windows; we use it to build the Windows packages.

For building things for wasm, you likely need to build the libtvm runtime into a .bc file, compile the original model into a .bc file as well, and link them together with the other .bc files in your library. Then you can call into TVM's C++ runtime. The webSD project can still serve as a good reference, since the way it builds the libtvm runtime and the compiled model is the same, except that you need to link in extra things (from your project).
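
For concreteness, a minimal sketch of what the model side of that could look like in Python. This is an assumption-laden sketch: it presumes an LLVM-enabled TVM, reuses `mod`/`params` from the from_pytorch step above, and relies on the LLVM backend writing bitcode when `save` is given a `.bc` suffix; verify against your TVM version:

    import tvm
    from tvm import relay

    # Build the Relay module against a wasm triple (a concrete triple is
    # worked out later in this thread) instead of the invalid "wasm" target.
    target = "llvm -mtriple=wasm32-unknown-emscripten"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

    # Save the generated code as LLVM bitcode; this is the model-side .bc
    # that gets linked with the runtime's .bc files (llvm-link / emcc).
    lib.get_lib().save("model.bc")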

That's what concerns me the most. For example:

    tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile(lib_path);

I have this line in my code, and lib_path points to the compiled model library. In a web environment there is no such thing as a local path like we have on desktop; it is either a URL or a byte stream. Unfortunately, there is no load function that could load the library from a byte stream. So my question is: is LoadFromFile capable of working with URLs as well as with local paths?

Thank you very much for your help!

Indeed, we cannot rely on LoadFromFile here.

Instead, we can get a system lib, which is what is typically used in the wasm runtime where dynamic loading is not possible; see https://github.com/apache/tvm/blob/main/apps/howto_deploy/cpp_deploy.cc#L82 as an example.

This is also how we build models for web stable diffusion; you can follow the web stable diffusion example on how to build something with a system lib.
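
As a sketch of what the build side can look like in Python (assuming the `Runtime` config from `tvm.relay.backend` and the `emcc` helper from `tvm.contrib`, which is what `web/tests/python/prepare_test_libs.py` uses; details may differ across versions):

    import tvm
    from tvm import relay
    from tvm.relay.backend import Runtime
    from tvm.contrib import emcc

    # "system-lib" makes the generated code register itself with the TVM
    # runtime at load time, so the C++ side can retrieve it via
    # runtime.SystemLib instead of LoadFromFile.
    runtime = Runtime("cpp", {"system-lib": True})
    target = "llvm -mtriple=wasm32-unknown-emscripten"

    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, runtime=runtime, params=params)

    # create_tvmjs_wasm invokes emcc and links the generated objects
    # against the .bc files built under web/ (the TVM web runtime).
    lib.export_library("model.wasm", fcompile=emcc.create_tvmjs_wasm)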

First of all, thank you for your recommendations; you are a great help!

I have compiled tvm_runtime.bc in the web folder (https://github.com/apache/tvm/tree/main/web).

But when I try to run https://github.com/apache/tvm/blob/main/web/tests/python/prepare_test_libs.py, I get an error: Target llvm -mtriple=wasm32-unknown-unknown-wasm is not enabled.

I understand it might be because the libtvm and libtvm_runtime I compiled previously are not WASM-compatible. I was under the impression that through tvm/web I would be able to compile a libtvm that knows this target; apparently I was mistaken. So my question is: how do I actually compile/install TVM in a way that lets me produce a WASM library from my model?

I am continuing to look into Web Stable Diffusion, but for the time being I cannot find information on how to actually make this wasm target available.

Also, I have tried to compile the TVM libraries with this command:

    cmake .. -DUSE_LLVM=ON -DLLVM_DIR=/path/to/emsdk/upstream/bin -DCMAKE_C_COMPILER=/path/to/emsdk/upstream/bin/clang -DCMAKE_CXX_COMPILER=/path/to/emsdk/upstream/bin/clang++

Compilation is successful, but the issue with the target is still there.

You can simply type make under the web folder; that will give you a .bc file in web/build which you can link against.

Although the error likely means you are not building TVM with an LLVM that comes with wasm support.

Yes, I got that: I have the tvm_runtime.bc file that I will "merge" with my model library file, but I need to compile my model with the wasm configuration. I have noticed that I was building TVM not on the 'unity' branch, my mistake. Can that be the issue? Thanks!

The unity branch likely has the most up-to-date WebGPU support, as that is what webSD is using.

I have installed TVM with LLVM using the conda environment, went to the 'web' folder and ran the 'make' command successfully, then went to the 'python' folder and successfully installed the TVM Python package.

After that I tried to run prepare_test_libs.py and I got this error:

    RuntimeError: Compilation error:
    wasm-ld: warning: Linking two modules of different data layouts: '/tmp/tmpbbb6nyj3/lib0.bc' is 'e-m:e-p:32:32-p10:8:8-p20:8:8-i64:64-n32:64-S128-ni:1:10:20' whereas 'ld-temp.o' is 'e-m:e-p:32:32-i64:64-n32:64-S128'
    wasm-ld: warning: Linking two modules of different target triples: '/tmp/tmpbbb6nyj3/lib0.bc' is 'wasm32-unknown-unknown-wasm' whereas 'ld-temp.o' is 'wasm32-unknown-emscripten'

The text of the error is self-explanatory, so I changed the target in prepare_test_libs.py, in the prepare_tir_lib function, from

    target = "llvm -mtriple=wasm32-unknown-unknown-wasm"

to

    target = "llvm -mtriple=wasm32-unknown-emscripten"

But that resulted in a different issue:

    RuntimeError: Compilation error:
    wasm-ld: warning: Linking two modules of different data layouts: '/tmp/tmp9xkhf6sz/lib0.bc' is 'e-m:e-p:32:32-p10:8:8-p20:8:8-i64:64-f128:64-n32:64-S128-ni:1:10:20' whereas 'ld-temp.o' is 'e-m:e-p:32:32-i64:64-n32:64-S128'
    wasm-ld: /b/s/w/ir/cache/builder/emscripten-releases/llvm-project/llvm/lib/Bitcode/Reader/MetadataLoader.cpp:366: (anonymous namespace)::(anonymous namespace)::PlaceholderQueue::~PlaceholderQueue(): Assertion `empty() && "PlaceholderQueue hasn't been flushed before being destroyed"' failed.

Could you please guide me on how to solve this issue? Thank you very much!

My Emscripten version is 2.0.15, and I use the LLVM provided in the conda environment, which is version 14.0.6. Maybe these versions are not compatible?

I think I have figured out the issue: emsdk 2.0.15 ships Clang 13.0, and I believe the Clang version reflects the LLVM version in this case. I updated emsdk to 2.0.30; it was an educated guess, just to find an Emscripten that has Clang 14.0.
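
For anyone hitting this later, a quick sanity check of the two LLVM versions might look like this (a sketch; it assumes tvm.target.codegen.llvm_version_major, which exists in recent TVM versions, and an emcc on PATH):

    import subprocess
    from tvm.target import codegen

    # Major LLVM version that this TVM build links against.
    print("TVM LLVM major version:", codegen.llvm_version_major())

    # Clang/LLVM version shipped by the active emsdk toolchain.
    out = subprocess.run(["emcc", "--version"], capture_output=True, text=True)
    print(out.stdout.splitlines()[0])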

After that, python prepare_test_libs.py worked. I think it would be great to reflect that in the docs; I have looked through them multiple times and didn't find any advice to check that the emsdk LLVM and the TVM LLVM versions are compatible. If I missed it, sorry; if not, please add it :slight_smile:

Also, one interesting thing: both targets work:

    target = "llvm -mtriple=wasm32-unknown-unknown-wasm"

or

    target = "llvm -mtriple=wasm32-unknown-emscripten"

Could you tell me what the difference between these two is?

Although I ran prepare_test_libs.py successfully, I still have a question. It produces a .wasm file, but I need a static library that is WASM-compatible. For example, when I compile my code using Emscripten, it produces a .wasm and a .a file, and this .a file I can later use in CMakeLists.txt, use its internals in C++ code, and build another WASM package that I can use in the web browser.

So my question is: how do I generate a WASM library for the TVM runtime, not a .wasm file? Do I just need to change the file extension from '.wasm' to '.a'? Sorry if my questions are stupid, I am new to WASM and TVM. Thank you for your response!

You can try mod.export_library("data.tar"), which should give you a tarball that contains the necessary .o files. You still need to link the libtvm runtime.
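
For example (a small sketch, where mod stands for the compiled module returned by relay.build or tvm.build):

    import tarfile

    # Export the compiled module as a tarball of object files.
    mod.export_library("data.tar")

    # Peek at the objects that must later be linked with libtvm_runtime.
    with tarfile.open("data.tar") as tar:
        print(tar.getnames())  # e.g. ['lib0.o', 'devc.o']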

I was under the impression that adding a runtime to the build call like so:

    from tvm.relay.backend import Runtime

    runtime = Runtime("cpp", {"system-lib": True})
    target = "llvm -mtriple=wasm32-unknown-unknown-wasm"
    fadd = tvm.build(s, [A, B], target, runtime=runtime, name="add_one")

would link the runtime and let me use the produced library as a "system-lib". Please let me know if I am mistaken.

system-lib only asks the generated code to register itself with the TVM runtime; you still need to link libtvm_runtime.bc to use the library.

Hello @tqchen! What I can't seem to find is an example of how to build my model into a .bc file. I have managed to build a .wasm library from my model and even a .a static library (thanks to this example: https://github.com/kazum/tvm-wasm). I also didn't find an example of how to build the library into a .bc file in the Web Stable Diffusion repo.

Could you please advise how to do that?

So I understand that I need: library.bc (still need to build it) and wasm_runtime.bc (already done). Then I link those two using llvm-link, compile the result into a .a static library, link it into my project, and then I can initialize the model in C++ code like so:

    // Fetch the system library that the compiled model code registered at load time.
    tvm::runtime::Module mod_syslib = (*tvm::runtime::Registry::Get("runtime.SystemLib"))();

Is my understanding correct? Thank you!