Native WebAssembly Support

Hello,

Wanted to know if there has been any success in stripping the current web runtime down to the bare minimum needed to execute TVM-compiled modules on the CPU.

So far I’ve tried stripping RPC and OpenGL support out of web/web_runtime.cc, though the compiled web runtime still has traces of GLFW and other potentially unneeded libraries.

Apart from the Makefile in the main directory, is there anywhere else in the TVM stack that might link in GLFW, or anything else beyond the core C runtime and CPU runtime, that I could strip away?

The main goal is to compile modules down to a barebones WASM module with the minimum set of imports necessary, for as much cross-compatibility as possible.

Current list of required imports: https://pastebin.com/RkpZnBva

Modified web_runtime.cc:

#include <sys/stat.h>

#include "../src/runtime/c_runtime_api.cc"
#include "../src/runtime/cpu_device_api.cc"
#include "../src/runtime/workspace_pool.cc"
#include "../src/runtime/module_util.cc"
#include "../src/runtime/system_lib_module.cc"
#include "../src/runtime/module.cc"
#include "../src/runtime/registry.cc"
#include "../src/runtime/dso_module.cc"
#include "../src/runtime/graph/graph_runtime.cc"

// dummy parallel runtime
int TVMBackendParallelLaunch(
    FTVMParallelLambda flambda,
    void* cdata,
    int num_task) {
  TVMAPISetLastError("Parallel is not supported in Web runtime");
  return -1;
}

int TVMBackendParallelBarrier(int task_id, TVMParallelGroupEnv* penv) {
  return 0;
}

Modified Makefile:

EMCC_FLAGS= -std=c++11 -DDMLC_LOG_STACK_TRACE=0\
	-Os -s RESERVED_FUNCTION_POINTERS=2 -s MAIN_MODULE=1 -s NO_EXIT_RUNTIME=1\
	-s TOTAL_MEMORY=1073741824\
	-s EXTRA_EXPORTED_RUNTIME_METHODS="['cwrap','getValue','setValue','addFunction']"\
	$(INCLUDE_FLAGS)

web: build/libtvm_web_runtime.js build/libtvm_web_runtime.bc

build/libtvm_web_runtime.bc: web/web_runtime.cc
	@mkdir -p build/web
	@mkdir -p $(@D)
	emcc $(EMCC_FLAGS) -MM -MT build/libtvm_web_runtime.bc $< >build/web/web_runtime.d
	emcc $(EMCC_FLAGS) -o $@ web/web_runtime.cc

build/libtvm_web_runtime.js: build/libtvm_web_runtime.bc
	@mkdir -p $(@D)
	emcc $(EMCC_FLAGS) -o $@ build/libtvm_web_runtime.bc

I remember @nhynes mentioned he has some experience with minimal wasm support.
I am surprised that GLFW is still there when you removed the OpenGL runtime. Perhaps you should look into the compiler flags, and try some minimal code to see what introduces the problem.

Right, GLFW is still included in the TVM web runtime, whereas for a test module GLFW does not get included at all with the compiler flags I mentioned before.

There also appear to be a number of extra libraries related to threading, time, etc. included in the TVM web runtime when compiled with Emscripten (I don’t know whether they’re all needed).

The test module I’m using is basically test_add_one in tests/web/prepare_test_libs.py.

import os

import tvm
from tvm.contrib import emscripten

def prepare_test_libs(base_path):
    target = "llvm -target=asmjs-unknown-emscripten -system-lib"
    if not tvm.module.enabled(target):
        raise RuntimeError("Target %s is not enabled" % target)
    n = tvm.var("n")
    A = tvm.placeholder((n,), name='A')
    B = tvm.compute(A.shape, lambda *i: A(*i) + 1.0, name='B')
    s = tvm.create_schedule(B.op)
    fadd1 = tvm.build(s, [A, B], target, name="add_one")
    obj_path = os.path.join(base_path, "test_add_one.bc")
    fadd1.save(obj_path)
    emscripten.create_js(os.path.join(base_path, "test_module.js"), obj_path)

Check the implementation of create_js and the compiler flags in there.

What’s the use case? If you want minimal wasm bytecode, Rust has good wasm support. I’ve been using TVM --> wasm with a minimal Rust runtime. You could probably compile everything to wasm and then use Cranelift (formerly Cretonne) to compile to machine code.

The use case is that I’m currently working on a WebAssembly execution environment in Go with a team, and we’re interested in being able to run TVM modules on it.

I checked out your tvm-rust repository and it definitely looks awesome. How did you structure modules to build into WASM with the runtime?

I modified test_tvm_basic a bit:

extern crate ndarray;
#[macro_use]
extern crate tvm;

use ndarray::Array;
use tvm::{
  ffi::runtime::DLTensor,
  runtime::{Module, SystemLibModule},
};

#[no_mangle]
pub extern "C" fn app_main() -> f32 {
  let syslib = SystemLibModule::default();
  let add = syslib
      .get_function("default_function")
      .expect("main function not found");
  let mut a = Array::from_vec(vec![1f32, 2., 3., 4.]);
  let mut b = Array::from_vec(vec![1f32, 0., 1., 0.]);
  let mut c = Array::from_vec(vec![0f32; 4]);
  let e = Array::from_vec(vec![2f32, 2., 4., 4.]);
  let mut a_dl: DLTensor = (&mut a).into();
  let mut b_dl: DLTensor = (&mut b).into();
  let mut c_dl: DLTensor = (&mut c).into();
  call_packed!(add, &mut a_dl, &mut b_dl, &mut c_dl);

  return c[0];
}

fn main() {
  let _test = app_main();
}

… which compiles and runs just fine, except when built on the wasm32-unknown-unknown target.

Testing it both on the execution environment I’m working on and on v8’s WASM support, it panics:

RuntimeError: unreachable
    at __rust_start_panic (wasm-function[161]:1)
    at rust_panic.llvm.11899242900311734611 (wasm-function[157]:30)
    at std::panicking::rust_panic_with_hook::h46bcfc603a27ca0c (wasm-function[152]:444)
    at std::panicking::continue_panic_fmt::h0f7198ce87360bd8 (wasm-function[151]:122)
    at rust_begin_unwind (wasm-function[150]:3)
    at core::panicking::panic_fmt::h64d514aa7d957863 (wasm-function[247]:70)
    at core::option::expect_failed::h2b7190432d25c369 (wasm-function[258]:111)
    at app_main (wasm-function[12]:863)
    at run (/home/kenta/go/src/github.com/perlin-network/life/exec.js:8:36)
    at <anonymous>
--- Begin stack trace ---
<7> [161] __rust_start_panic
<6> [157] rust_panic.llvm.11899242900311734611
<5> [152] std::panicking::rust_panic_with_hook::h46bcfc603a27ca0c
<4> [151] std::panicking::continue_panic_fmt::h0f7198ce87360bd8
<3> [150] rust_begin_unwind
<2> [247] core::panicking::panic_fmt::h64d514aa7d957863
<1> [258] core::option::expect_failed::h2b7190432d25c369
<0> [12] app_main
--- End stack trace ---
panic: wasm: unreachable executed

goroutine 1 [running]:
main.main()
        /home/kenta/go/src/github.com/perlin-network/life/main.go:187 +0x614
exit status 2

Hmm, it’s possible that the issue is with the v8 runtime. The Rust runtime was meant to be used with a very basic wasm interpreter (with no JS bindings). One tip (which should help even if you don’t use tvm-rs) is to compile rustc with the wasm-syscall feature, which will let you get actual panic messages instead of unreachable.
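A lighter-weight option for surfacing panic messages (a sketch only, not part of tvm-rs: it assumes your embedder exposes a logging import, and host_log is a hypothetical name the host would have to provide) is to install a panic hook that forwards the message to the host before the unreachable trap:

use std::panic;

// `host_log` is a hypothetical host-provided import; the embedder must
// register it at instantiation time and read `len` bytes from linear
// memory starting at `ptr`.
extern "C" {
    fn host_log(ptr: *const u8, len: usize);
}

pub fn install_panic_hook() {
    panic::set_hook(Box::new(|info| {
        // Format the panic message and hand it to the host before the
        // runtime aborts with `unreachable`.
        let msg = info.to_string();
        unsafe { host_log(msg.as_ptr(), msg.len()) };
    }));
}

Calling install_panic_hook() at the start of app_main would then report the message from any expect/unwrap failure instead of just trapping.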

The Go wasm interpreter we’re working on is very basic and barebones, and it works with other Rust programs compiled for the wasm32-unknown-unknown target.

I’ll try the wasm-syscall feature out, though is there anything else in tvm-rust specifically that you configure when compiling to wasm?

Currently I’m just building it normally via cargo build --target wasm32-unknown-unknown --release.

I tried to debug a little as to why the panics occur when compiling the test_nnvm example down to WASM in tvm-rust, and got the log message: Missing function fuse_dense.

The panic occurs when attempting to call GraphExecutor::new.

// `syslib` is the SystemLibModule constructed as in the earlier example.
let params_bytes = include_bytes!(concat!(env!("OUT_DIR"), "/graph.params")).to_vec();

let params = tvm::runtime::load_param_dict(&params_bytes)
    .unwrap()
    .into_iter()
    .map(|(k, v)| (k, v.to_owned()))
    .collect::<HashMap<String, Tensor<'static>>>();

let graph = Graph::try_from(
    std::str::from_utf8(include_bytes!(concat!(env!("OUT_DIR"), "/graph.json"))).unwrap(),
).unwrap();

let attempt = GraphExecutor::new(graph, &syslib);

It seems that TVMBackendRegisterSystemLibSymbol was not called when running the module compiled down to WASM (so the function fuse_dense was never loaded).

Have you run into anything similar before?

yeah you’ll have to call that manually :slight_smile: wasm doesn’t have an .init section
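
For reference, a minimal sketch of the manual call, assuming the TVM-generated system-lib object exposes its registration entry point as __tvm_module_startup (the symbol name is an assumption; check the generated object with nm or wasm-objdump before relying on it):

// Manual system-lib initialization for wasm32-unknown-unknown.
// `__tvm_module_startup` is assumed to be the TVM-generated constructor that
// calls TVMBackendRegisterSystemLibSymbol for each compiled function. On
// native targets it runs from .init_array; wasm has no such section, so it
// has to be invoked explicitly before SystemLibModule / GraphExecutor.
extern "C" {
    fn __tvm_module_startup();
}

pub fn init_syslib() {
    unsafe { __tvm_module_startup() };
}

Calling init_syslib() at the top of app_main, before SystemLibModule::default() or GraphExecutor::new, should make fuse_dense and the other fused functions visible to get_function.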