Is AutoTVM supported for web or WebAssembly?

TVM supports the RPC feature, but I cannot find an RPCTracker for the web. Any suggestions?

We don't need the RPC tracker for the web, but we do support a proxy in Python that talks to the web via WebSocket and forwards the connections to other places, as described in the web folder.

I’ve already built the WebSocket demo and run our model with the WebAssembly backend in the browser, but the kernels are not tuned for this machine. I’ve read some tutorials about Android, CUDA, etc., and found that the RPC tracker is useful.

Is there another way, without the RPC tracker, to do tuning in the browser?

If we could run the model in the browser, or in any wasm environment, over an RPC connection, we would be able to measure the cost. Do we have a convenient way to do these tasks?
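For the measurement part, the usual pattern in TVM is to call a module's time_evaluator over the RPC session. A minimal sketch, not a verified setup: the helper name is mine, and it assumes an already-established RPC session `remote` whose system library contains the function:

```python
def measure_remote_cost(remote, func_name, args, number=10):
    """Time one function inside an RPC session (returns mean seconds).

    `remote` is an established tvm.rpc session, `func_name` names a
    function in the remote system library, and `args` are arrays already
    placed on the remote device.
    """
    fmod = remote.system_lib()  # module already loaded on the remote side
    feval = fmod.time_evaluator(func_name, remote.cpu(), number=number)
    return feval(*args).mean
```

This is the same mechanism the AutoTVM runners use internally to time candidate kernels.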

@tqchen, any suggestions on the above issue?


We can use the RPC proxy with the --example-rpc flag: tvm/web at main · apache/tvm · GitHub

This starts an RPC server via WebSocket that connects to the proxy. The proxy can then serve as a server (which can connect back to another tracker if needed).

You can try to use the tracker process here: tvm/ at main · apache/tvm · GitHub

Thanks @tqchen. After the steps below, I found a new problem: I don’t know how to send the wasm_binary via the RPC tracker. I passed session_constructor_args=["rpc.WasmSession", wasm_binary] in rpc.connect.

  1. Launch the RPC tracker:

python -m tvm.exec.rpc_tracker --host= --port=9192

  2. Launch the RPC proxy and set the tracker address:

python -m tvm.exec.rpc_proxy --example-rpc=1 --tracker=

  3. Build a new RPC server in proxy mode:

python -m tvm.exec.rpc_server --tracker= --key=wasm --isproxy=1 --host= --port=9090

  4. Query the tracker, which shows server:proxy[wasm]:

python -m tvm.exec.query_rpc_tracker --host= --port=9192

  5. Build the connection:

tracker = rpc.connect_tracker("", 9192)

I’ve solved the problem.

python -m tvm.exec.rpc_tracker --host= --port=9192

python -m tvm.exec.rpc_proxy --example-rpc=1 --tracker=

python -m tvm.exec.rpc_server --tracker= --key=wasm-proxy --isproxy=1 --host=localhost --port=9090


Awesome, glad it works. I don’t think we have previous experience tuning on the web; it would be great if you could share your findings as well.

Actually, I haven’t completed tuning on the web yet. There is still work left to make it function.

I think tuning needs two parts:

  1. Build a network connection between the web and the local machine, so that I can upload and transfer the params or model and then run the model on the web. This already works.
  2. Build the new model or module with LocalBuilder.
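Wiring the two parts together might look roughly like this. This is a sketch, not a verified setup: the tracker key "wasm" follows the earlier steps, the RPCRunner parameters are illustrative, and emcc here is tvm.contrib.emcc from a web-enabled TVM build:

```python
def fcompile(output_path, objects, **kwargs):
    """Build one tuning candidate into a wasm artifact."""
    from tvm.contrib import emcc  # needs TVM built with web support
    emcc.create_tvmjs_wasm(output_path, objects)

# AutoTVM reads this attribute to pick the output file extension.
fcompile.output_format = "wasm"

def make_measure_option(tracker_host, tracker_port, key="wasm"):
    """Builder compiles locally to wasm; runner measures via the tracker."""
    from tvm import autotvm
    return autotvm.measure_option(
        builder=autotvm.LocalBuilder(n_parallel=1, build_func=fcompile),
        runner=autotvm.RPCRunner(key, host=tracker_host, port=tracker_port),
    )
```

The resulting measure_option would then be passed to a tuner's tune() call in the usual AutoTVM flow.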

After tuning, I found the library cannot run the model on the web. The error message is “no such function in module: fused_layout_transform_42”. It looks like something is incorrect in the process of building the library.

Here is my code. First, I use a custom build_func to build the wasm library:

builder = autotvm.LocalBuilder(
    n_parallel=1,
    # build_func="wasm"
    build_func=fcompile,
)

def fcompile(*args):
    emcc.create_tvmjs_wasm(args[0], args[1])

fcompile.output_format = "wasm"

During tuning I found that the library is smaller than the one built this way:

with tvm.transform.PassContext(opt_level=3):
    graph, lib, params =, target, params=params)

lib.export_library(wasm_path, emcc.create_tvmjs_wasm)

Any suggestions about that?