End-to-end RPC benchmarking fails

When I try to measure the end-to-end running time on an Android device using:

from tvm.contrib import graph_executor

# lib, device, and tvm_dummy are the loaded module, the remote device handle,
# and a dict of dummy inputs defined earlier in my script.
module = graph_executor.GraphModule(lib["default"](device))
module.set_input(**tvm_dummy)
res = module.benchmark(device, end_to_end=True)
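
For comparison, benchmarking with end_to_end=False works for me. A minimal sketch of the equivalent on-device-only timing via time_evaluator (assuming the same module and device as above; the number/repeat values are arbitrary):

# Times only the graph executor's "run" function on the device, without the
# host-side data transfer that end_to_end=True adds.
ftimer = module.module.time_evaluator("run", device, number=10, repeat=3)
prof = ftimer()
print("mean on-device time: %.2f ms" % (prof.mean * 1000))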

I get the following error:

Traceback (most recent call last):
  File "/Users/nkaminsky/code/TVM/tvm_benchmarking.py", line 224, in <module>
    _main()
  File "/Users/nkaminsky/code/TVM/tvm_benchmarking.py", line 199, in _main
    runtime_args["test_settings"],
  File "/Users/nkaminsky/code/TVM/tvm_benchmarking.py", line 121, in _measure_inference_time
    res = module.benchmark(device, end_to_end=True)
  File "/Users/nkaminsky/code/my-tvm-new/python/tvm/contrib/graph_executor.py", line 404, in benchmark
    )(device.device_type, device.device_id, *args)
  File "/Users/nkaminsky/code/my-tvm-new/python/tvm/runtime/module.py", line 292, in evaluator
    blob = feval(*args)
  File "/Users/nkaminsky/code/my-tvm-new/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm.error.RPCError: Traceback (most recent call last):
  [bt] (8) 9   libtvm.dylib                        0x000000011725e126 tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 1158
  [bt] (7) 8   libtvm.dylib                        0x0000000117256c2c tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::__1::function<void (tvm::runtime::TVMArgs)> const&) + 124
  [bt] (6) 7   libtvm.dylib                        0x000000011724ecad tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::__1::function<void (tvm::runtime::TVMArgs)>) + 333
  [bt] (5) 6   libtvm.dylib                        0x000000011724d4ce tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::__1::function<void (tvm::runtime::TVMArgs)>) + 622
  [bt] (4) 5   libtvm.dylib                        0x000000011724d76e tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::__1::function<void (tvm::runtime::TVMArgs)>) + 494
  [bt] (3) 4   libtvm.dylib                        0x0000000117251879 tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::__1::function<void (tvm::runtime::TVMArgs)>) + 393
  [bt] (2) 3   libtvm.dylib                        0x0000000117253815 tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::__1::function<void (tvm::runtime::TVMArgs)>) + 213
  [bt] (1) 2   libtvm.dylib                        0x0000000115e19639 tvm::runtime::detail::LogFatal::Entry::Finalize() + 89
  [bt] (0) 1   libtvm.dylib                        0x00000001171ed2c8 tvm::runtime::Backtrace() + 24
  File "/Users/nkaminsky/code/my-tvm-new/src/runtime/rpc/rpc_endpoint.cc", line 376
RPCError: Error caught from RPC call:
[18:51:23] /Users/nkaminsky/code/my-tvm-new/apps/android_rpc/app/src/main/jni/../../../../../../include/../src/runtime/c_runtime_api.cc:131: 
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (allow_missing) is false: Device API rpc is not enabled.

This happens only when end_to_end is set to True. Can somebody help me, please?

Can you post your config.cmake? It seems likely that you have USE_RPC turned off. If it is off and you set it to ON, does the error go away?
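
If it helps, here is a quick, hedged sanity check from Python (my assumption is that the failing check is a lookup of the "device_api.rpc" registration):

import tvm

# Which cmake options this libtvm was built with, e.g. USE_RPC.
print(tvm.support.libinfo().get("USE_RPC"))
# Whether the RPC Device API is registered in this runtime (None if missing).
print(tvm.get_global_func("device_api.rpc", allow_missing=True))

Note that this only inspects the host-side libtvm; the runtime packaged inside the Android RPC app is what ultimately needs the RPC pieces compiled in.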

This is the config.cmake file I used; USE_RPC was set to ON when the repository was built. I manage to run and record the results when end_to_end is set to False.

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

#--------------------------------------------------------------------
#  Template custom cmake configuration for compiling
#
#  This file is used to override the build options in build.
#  If you want to change the configuration, please use the following
#  steps. Assume you are on the root directory. First copy this
#  file so that any local changes will be ignored by git
#
#  $ mkdir build
#  $ cp cmake/config.cmake build
#
#  Next modify the according entries, and then compile by
#
#  $ cd build
#  $ cmake ..
#
#  Then build in parallel with 8 threads
#
#  $ make -j8
#--------------------------------------------------------------------

#---------------------------------------------
# Backend runtimes.
#---------------------------------------------

# Whether enable CUDA during compile,
#
# Possible values:
# - ON: enable CUDA with cmake's auto search
# - OFF: disable CUDA
# - /path/to/cuda: use specific path to cuda toolkit
set(USE_CUDA OFF)

# Whether enable ROCM runtime
#
# Possible values:
# - ON: enable ROCM with cmake's auto search
# - OFF: disable ROCM
# - /path/to/rocm: use specific path to rocm
set(USE_ROCM OFF)

# Whether enable SDAccel runtime
set(USE_SDACCEL OFF)

# Whether enable Intel FPGA SDK for OpenCL (AOCL) runtime
set(USE_AOCL OFF)

# Whether enable OpenCL runtime
#
# Possible values:
# - ON: enable OpenCL with cmake's auto search
# - OFF: disable OpenCL
# - /path/to/opencl-sdk: use specific path to opencl-sdk
set(USE_OPENCL ON)

# Whether enable Metal runtime
set(USE_METAL OFF)

# Whether enable Vulkan runtime
#
# Possible values:
# - ON: enable Vulkan with cmake's auto search
# - OFF: disable vulkan
# - /path/to/vulkan-sdk: use specific path to vulkan-sdk
set(USE_VULKAN ON)

# Whether enable OpenGL runtime
set(USE_OPENGL OFF)

# Whether enable MicroTVM runtime
set(USE_MICRO OFF)

# Whether enable RPC runtime
set(USE_RPC ON)

# Whether to build the C++ RPC server binary
set(USE_CPP_RPC ON)

# Whether to build the iOS RPC server application
set(USE_IOS_RPC OFF)

# Whether embed stackvm into the runtime
set(USE_STACKVM_RUNTIME OFF)

# Whether enable tiny embedded graph executor.
set(USE_GRAPH_EXECUTOR ON)

# Whether enable tiny graph executor with CUDA Graph
set(USE_GRAPH_EXECUTOR_CUDA_GRAPH OFF)

# Whether enable pipeline executor.
set(USE_PIPELINE_EXECUTOR OFF)

# Whether to enable the profiler for the graph executor and vm
set(USE_PROFILER ON)

# Whether enable microTVM standalone runtime
set(USE_MICRO_STANDALONE_RUNTIME OFF)

# Whether build with LLVM support
# Requires LLVM version >= 4.0
#
# Possible values:
# - ON: enable llvm with cmake's find search
# - OFF: disable llvm, note this will disable CPU codegen
#        which is needed for most cases
# - /path/to/llvm-config: enable specific LLVM when multiple llvm-dev is available.
set(USE_LLVM ON)

#---------------------------------------------
# Contrib libraries
#---------------------------------------------
# Whether to build with BYODT software emulated posit custom datatype
#
# Possible values:
# - ON: enable BYODT posit, requires setting UNIVERSAL_PATH
# - OFF: disable BYODT posit
#
# set(UNIVERSAL_PATH /path/to/stillwater-universal) for ON
set(USE_BYODT_POSIT OFF)

# Whether use BLAS, choices: openblas, atlas, apple
set(USE_BLAS none)

# Whether to use MKL
# Possible values:
# - ON: Enable MKL
# - /path/to/mkl: mkl root path
# - OFF: Disable MKL
# set(USE_MKL /opt/intel/mkl) for UNIX
# set(USE_MKL ../IntelSWTools/compilers_and_libraries_2018/windows/mkl) for WIN32
# set(USE_MKL <path to venv or site-packages directory>) if using `pip install mkl`
set(USE_MKL OFF)

# Whether use MKLDNN library, choices: ON, OFF, path to mkldnn library
set(USE_MKLDNN OFF)

# Whether use OpenMP thread pool, choices: gnu, intel
# Note: "gnu" uses gomp library, "intel" uses iomp5 library
set(USE_OPENMP none)

# Whether use contrib.random in runtime
set(USE_RANDOM ON)

# Whether use NNPack
set(USE_NNPACK OFF)

# Possible values:
# - ON: enable tflite with cmake's find search
# - OFF: disable tflite
# - /path/to/libtensorflow-lite.a: use specific path to tensorflow lite library
set(USE_TFLITE OFF)

# /path/to/tensorflow: tensorflow root path when use tflite library
set(USE_TENSORFLOW_PATH none)

# Required for full builds with TFLite. Not needed for runtime with TFLite.
# /path/to/flatbuffers: flatbuffers root path when using tflite library
set(USE_FLATBUFFERS_PATH none)

# Possible values:
# - OFF: disable tflite support for edgetpu
# - /path/to/edgetpu: use specific path to edgetpu library
set(USE_EDGETPU OFF)

# Possible values:
# - ON: enable cuDNN with cmake's auto search in CUDA directory
# - OFF: disable cuDNN
# - /path/to/cudnn: use specific path to cuDNN path
set(USE_CUDNN OFF)

# Whether use cuBLAS
set(USE_CUBLAS OFF)

# Whether use MIOpen
set(USE_MIOPEN OFF)

# Whether use MPS
set(USE_MPS OFF)

# Whether use rocBlas
set(USE_ROCBLAS OFF)

# Whether use contrib sort
set(USE_SORT ON)

# Whether use MKL-DNN (DNNL) codegen
set(USE_DNNL_CODEGEN OFF)

# Whether to use Arm Compute Library (ACL) codegen
# We provide 2 separate flags since we cannot build the ACL runtime on x86.
# This is useful for cases where you want to cross-compile a relay graph
# on x86 then run on AArch.
#
# An example of how to use this can be found here: docs/deploy/arm_compute_lib.rst.
#
# USE_ARM_COMPUTE_LIB - Support for compiling a relay graph offloading supported
#                       operators to Arm Compute Library. OFF/ON
# USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR - Run Arm Compute Library annotated functions via the ACL
#                                     runtime. OFF/ON/"path/to/ACL"
set(USE_ARM_COMPUTE_LIB OFF)
set(USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR OFF)

# Whether to build with Arm Ethos-N support
# Possible values:
# - OFF: disable Arm Ethos-N support
# - path/to/arm-ethos-N-stack: use a specific version of the
#   Ethos-N driver stack
set(USE_ETHOSN OFF)
# If USE_ETHOSN is enabled, use ETHOSN_HW (ON) if Ethos-N hardware is available on this machine
# otherwise use ETHOSN_HW (OFF) to use the software test infrastructure
set(USE_ETHOSN_HW OFF)

# Whether to build with TensorRT codegen or runtime
# Examples are available here: docs/deploy/tensorrt.rst.
#
# USE_TENSORRT_CODEGEN - Support for compiling a relay graph where supported operators are
#                        offloaded to TensorRT. OFF/ON
# USE_TENSORRT_RUNTIME - Support for running TensorRT compiled modules, requires presence of
#                        TensorRT library. OFF/ON/"path/to/TensorRT"
set(USE_TENSORRT_CODEGEN OFF)
set(USE_TENSORRT_RUNTIME OFF)

# Whether use VITIS-AI codegen
set(USE_VITIS_AI OFF)

# Build Verilator codegen and runtime
set(USE_VERILATOR OFF)

# Build ANTLR parser for Relay text format
# Possible values:
# - ON: enable ANTLR by searching default locations (cmake find_program for antlr4 and /usr/local for jar)
# - OFF: disable ANTLR
# - /path/to/antlr-*-complete.jar: path to specific ANTLR jar file
set(USE_ANTLR OFF)

# Whether use Relay debug mode
set(USE_RELAY_DEBUG OFF)

# Whether to build fast VTA simulator driver
set(USE_VTA_FSIM OFF)

# Whether to build cycle-accurate VTA simulator driver
set(USE_VTA_TSIM OFF)

# Whether to build VTA FPGA driver (device side only)
set(USE_VTA_FPGA OFF)

# Whether use Thrust
set(USE_THRUST OFF)

# Whether to build the TensorFlow TVMDSOOp module
set(USE_TF_TVMDSOOP OFF)

# Whether to use STL's std::unordered_map or TVM's POD compatible Map
set(USE_FALLBACK_STL_MAP OFF)

# Whether to use hexagon device
set(USE_HEXAGON_DEVICE OFF)
set(USE_HEXAGON_SDK /path/to/sdk)

# Hexagon architecture to target when compiling TVM itself (not the target for
# compiling _by_ TVM). This applies to components like the TVM runtime, but is
# also used to select correct include/library paths from the Hexagon SDK when
# building offloading runtime for Android.
# Valid values are v60, v62, v65, v66, v68.
set(USE_HEXAGON_ARCH "v66")

# Whether to use ONNX codegen
set(USE_TARGET_ONNX OFF)

# Whether enable BNNS runtime
set(USE_BNNS OFF)

# Whether to use libbacktrace
# Libbacktrace provides line and column information on stack traces from errors.
# It is only supported on linux and macOS.
# Possible values:
# - AUTO: auto set according to system information and feasibility
# - ON: enable libbacktrace
# - OFF: disable libbacktrace
set(USE_LIBBACKTRACE AUTO)

# Whether to build static libtvm_runtime.a, the default is to build the dynamic
# version: libtvm_runtime.so.
#
# The static runtime library needs to be linked into executables with the linker
# option --whole-archive (or its equivalent). The reason is that the TVM registry
# mechanism relies on global constructors being executed at program startup.
# Global constructors alone are not sufficient for the linker to consider a
# library member to be used, and some of such library members (object files) may
# not be included in the final executable. This would make the corresponding
# runtime functions to be unavailable to the program.
set(BUILD_STATIC_RUNTIME OFF)


# Caches the build so that building is faster when switching between branches.
# If you switch branches, build and then encounter a linking error, you may
# need to regenerate the build tree through "make .." (the cache will
# still provide significant speedups).
# Possible values:
# - AUTO: search for path to ccache, disable if not found.
# - ON: enable ccache by searching for the path to ccache, report an error if not found
# - OFF: disable ccache
# - /path/to/ccache: use specific path to ccache
set(USE_CCACHE AUTO)

# Whether to enable PAPI support in profiling. PAPI provides access to hardware
# counters while profiling.
# Possible values:
# - ON: enable PAPI support. Will search PKG_CONFIG_PATH for a papi.pc
# - OFF: disable PAPI support.
# - /path/to/folder/containing/: Path to folder containing papi.pc.
set(USE_PAPI OFF)

Can you try this branch I have: https://github.com/tkonolige/incubator-tvm/tree/fix_e2e_rpc_device
I think it might fix your problem. If it does not, could you provide a script to reproduce the error?
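
If you do try it, it is also worth double-checking which checkout your script actually imports. A small hedged snippet (GIT_COMMIT_HASH is my guess at the libinfo key name):

import tvm

# Path of the imported tvm package and the commit the library was built from.
print(tvm.__file__)
print(tvm.support.libinfo().get("GIT_COMMIT_HASH"))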

Thanks for the help! I'm still getting the same error with the following script:

import torch
import torchvision
import tvm
from tvm import relay, rpc
from tvm.contrib import utils, ndk, graph_executor
import os

model_name = "mobilenet_v2"
model = getattr(torchvision.models, model_name)(pretrained=True)
model = model.eval()
input_shape = [1, 3, 224, 224]
input_data = torch.randn(input_shape)
scripted_model = torch.jit.trace(model, input_data).eval()
mod, params = relay.frontend.from_pytorch(scripted_model, [('input', input_shape)])
target = tvm.target.Target('llvm', 'llvm -mtriple=arm64-linux-android')
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
tracker_host = "0.0.0.0"
tracker_port = 9000
key = 'android'
os.environ["TVM_NDK_CC"] = 'path/to/android-toolchain-arm64/bin/aarch64-linux-android-g++'
# os.environ["TVM_NDK_CC"] = "/android-toolchain-arm64/bin/aarch64-linux-android-g++"
tracker = rpc.connect_tracker(tracker_host, tracker_port)
remote = tracker.request(key, priority=0, session_timeout=0)
device = remote.cpu(0)
tmp = utils.tempdir()
lib_fname = tmp.relpath(f"net.so")
fcompile = ndk.create_shared
lib.export_library(lib_fname, fcompile)
remote.upload(lib_fname)
exported_lib = remote.load_module(f"net.so")
module = graph_executor.GraphModule(exported_lib["default"](device))
print(module.benchmark(device, end_to_end=False))
print(module.benchmark(device, end_to_end=True))

I get the following output:

One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
Execution time summary:
    mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
    18.6121      18.4753      19.2032      18.4476       0.2959   
               
Traceback (most recent call last):
  File "/Users/nkaminsky/Library/Application Support/JetBrains/PyCharmCE2021.1/scratches/scratch.py", line 34, in <module>
    print(module.benchmark(device, end_to_end=True))
  File "/Users/nkaminsky/.local/lib/python3.7/site-packages/tvm-0.8.dev1779+g80de1239e-py3.7-macosx-10.9-x86_64.egg/tvm/contrib/graph_executor.py", line 404, in benchmark
    )(device.device_type, device.device_id, *args)
  File "/Users/nkaminsky/.local/lib/python3.7/site-packages/tvm-0.8.dev1779+g80de1239e-py3.7-macosx-10.9-x86_64.egg/tvm/runtime/module.py", line 292, in evaluator
    blob = feval(*args)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm.error.RPCError: Traceback (most recent call last):
  [bt] (8) 9   libtvm.dylib                        0x000000013e622126 tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 1158
  [bt] (7) 8   libtvm.dylib                        0x000000013e61ac2c tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::__1::function<void (tvm::runtime::TVMArgs)> const&) + 124
  [bt] (6) 7   libtvm.dylib                        0x000000013e612cad tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::__1::function<void (tvm::runtime::TVMArgs)>) + 333
  [bt] (5) 6   libtvm.dylib                        0x000000013e6114ce tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::__1::function<void (tvm::runtime::TVMArgs)>) + 622
  [bt] (4) 5   libtvm.dylib                        0x000000013e61176e tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::__1::function<void (tvm::runtime::TVMArgs)>) + 494
  [bt] (3) 4   libtvm.dylib                        0x000000013e615879 tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::__1::function<void (tvm::runtime::TVMArgs)>) + 393
  [bt] (2) 3   libtvm.dylib                        0x000000013e617815 tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::__1::function<void (tvm::runtime::TVMArgs)>) + 213
  [bt] (1) 2   libtvm.dylib                        0x000000013d1dd639 tvm::runtime::detail::LogFatal::Entry::Finalize() + 89
  [bt] (0) 1   libtvm.dylib                        0x000000013e5b12c8 tvm::runtime::Backtrace() + 24
  File "/Users/nkaminsky/code/my-tvm-new/src/runtime/rpc/rpc_endpoint.cc", line 376
RPCError: Error caught from RPC call:
[11:25:30] /Users/nkaminsky/code/my-tvm-new/apps/android_rpc/app/src/main/jni/../../../../../../include/../src/runtime/c_runtime_api.cc:131: 
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (allow_missing) is false: Device API rpc is not enabled.


Process finished with exit code 1

I tried to run it on a Samsung S21 5G; the Python script was run on a Mac.
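
For what it is worth, I can also time things manually from the host as a fallback. A rough sketch, assuming the module, device and input_shape from the script above; every iteration pays a separate RPC round trip, so this only gives an upper bound on the true end-to-end time:

import time
import numpy as np

# Manual timing of set_input + run + get_output, including RPC overhead.
dummy = np.random.randn(*input_shape).astype("float32")
timings = []
for _ in range(10):
    start = time.perf_counter()
    module.set_input("input", dummy)
    module.run()
    out = module.get_output(0).asnumpy()  # copy the result back to the host
    timings.append(time.perf_counter() - start)
print("manual end-to-end mean: %.2f ms" % (1000 * sum(timings) / len(timings)))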

Can you try running with the branch I provided above and see if that fixes it?

I tried to run the script with the branch above, but I get the same error message.

Unfortunately, I do not have an Android device to debug with. If you can provide me with a backtrace of the error on the device, I may be able to debug more, but otherwise I’m not sure I can help further.

The backtrace I get on the Mac when running the script is:

Traceback (most recent call last):
  File "/Users/nkaminsky/Library/Application Support/JetBrains/PyCharmCE2021.1/scratches/scratch.py", line 34, in <module>
    print(module.benchmark(device, end_to_end=True))
  File "/Users/nkaminsky/.local/lib/python3.7/site-packages/tvm-0.8.dev1779+g80de1239e-py3.7-macosx-10.9-x86_64.egg/tvm/contrib/graph_executor.py", line 404, in benchmark
    )(device.device_type, device.device_id, *args)
  File "/Users/nkaminsky/.local/lib/python3.7/site-packages/tvm-0.8.dev1779+g80de1239e-py3.7-macosx-10.9-x86_64.egg/tvm/runtime/module.py", line 292, in evaluator
    blob = feval(*args)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm.error.RPCError: Traceback (most recent call last):
  [bt] (8) 9   libtvm.dylib                        0x000000014d612126 tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 1158
  [bt] (7) 8   libtvm.dylib                        0x000000014d60ac2c tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::__1::function<void (tvm::runtime::TVMArgs)> const&) + 124
  [bt] (6) 7   libtvm.dylib                        0x000000014d602cad tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::__1::function<void (tvm::runtime::TVMArgs)>) + 333
  [bt] (5) 6   libtvm.dylib                        0x000000014d6014ce tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::__1::function<void (tvm::runtime::TVMArgs)>) + 622
  [bt] (4) 5   libtvm.dylib                        0x000000014d60176e tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::__1::function<void (tvm::runtime::TVMArgs)>) + 494
  [bt] (3) 4   libtvm.dylib                        0x000000014d605879 tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::__1::function<void (tvm::runtime::TVMArgs)>) + 393
  [bt] (2) 3   libtvm.dylib                        0x000000014d607815 tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::__1::function<void (tvm::runtime::TVMArgs)>) + 213
  [bt] (1) 2   libtvm.dylib                        0x000000014c1cd639 tvm::runtime::detail::LogFatal::Entry::Finalize() + 89
  [bt] (0) 1   libtvm.dylib                        0x000000014d5a12c8 tvm::runtime::Backtrace() + 24
  File "/Users/nkaminsky/code/my-tvm-new/src/runtime/rpc/rpc_endpoint.cc", line 376
RPCError: Error caught from RPC call:
[12:29:22] /Users/nkaminsky/code/my-tvm-new/apps/android_rpc/app/src/main/jni/../../../../../../include/../src/runtime/c_runtime_api.cc:131: 
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (allow_missing) is false: Device API rpc is not enabled.

The messages I get in the log of the Samsung S21 are:

10-01 12:29:18.001 12980 24788 W System.err: matchKey:android:0.9133355248641466
10-01 12:29:18.002 12980 24788 W System.err: key: client:android:0.9133355248641466
10-01 12:29:18.002 12980 24788 W System.err: alloted timeout: 300
10-01 12:29:18.003 12980 24788 W System.err: Connection from /192.168.2.100:49618
10-01 12:29:18.004 12980 24789 W System.err: waiting for timeout: 300000
10-01 12:29:18.006 12980 24788 W System.err: starting server loop...
10-01 12:29:19.315   889   889 I SurfaceFlinger: SFWD update time=1007783023355
10-01 12:29:21.639 25488 25534 I PlayCommon: [2602] ammr.k(22): Preparing logs for uploading
10-01 12:29:21.647 25488 25534 I PlayCommon: [2602] ammr.k(133): Connecting to server for timestamp: https://play.googleapis.com/play/log/timestamp
10-01 12:29:21.648 25488 25534 I System.out: (HTTPLog)-Static: isSBSettingEnabled false
10-01 12:29:21.648 25488 25534 I System.out: (HTTPLog)-Static: isSBSettingEnabled false
10-01 12:29:21.651   769   858 E Netd    : getNetworkForDns: getNetId from enterpriseCtrl is netid 0
10-01 12:29:21.684  1311  1881 D NetdEventListenerService: DNS Requested by : 601, 1110235
10-01 12:29:21.687   769   873 E Netd    : getNetworkForDns: getNetId from enterpriseCtrl is netid 0
10-01 12:29:21.923  1311  1881 D EnterpriseUtils: getCallingOrCurrentUserId(): move: cxtInfo.mContainerId = 11
10-01 12:29:21.923  1311  1881 D EnterpriseUtils: getCallingOrCurrentUserId(): move: cxtInfo.mContainerId = 11
10-01 12:29:21.930  1311  1881 I ClientCertificateManager Service: ClientCertificateManager.isPremiumContainer() : false for user : 11
10-01 12:29:21.986 12980 24788 W System.err: Load module from /data/user/0/org.apache.tvm.tvmrpc/cache/tvm4j_rpc_8451472791973249568/net.so
10-01 12:29:21.987   748   748 E audit   : type=1400 audit(1633080561.980:421): avc:  granted  { execute } for  pid=12980 comm="Thread-4" path="/data/user/0/org.apache.tvm.tvmrpc/cache/tvm4j_rpc_8451472791973249568/net.so" dev="dm-10" ino=41856 scontext=u:r:untrusted_app_27:s0:c512,c768 tcontext=u:object_r:app_data_file:s0:c512,c768 tclass=file SEPF_SM-G991B_11_0007 audit_filtered
10-01 12:29:22.180 12980 24788 D TVM_RUNTIME: /Users/nkaminsky/code/my-tvm-new/apps/android_rpc/app/src/main/jni/../../../../../../include/../src/runtime/c_runtime_api.cc:131:
10-01 12:29:22.180 12980 24788 D TVM_RUNTIME: ---------------------------------------------------------------
10-01 12:29:22.180 12980 24788 D TVM_RUNTIME: An error occurred during the execution of TVM.
10-01 12:29:22.180 12980 24788 D TVM_RUNTIME: For more information, please see: https://tvm.apache.org/docs/errors.html
10-01 12:29:22.180 12980 24788 D TVM_RUNTIME: ---------------------------------------------------------------
10-01 12:29:22.180 12980 24788 D TVM_RUNTIME:   Check failed: (allow_missing) is false: Device API rpc is not enabled.
10-01 12:29:22.455 12980 24788 W System.err: done server loop...
10-01 12:29:22.458 12980 24788 W System.err: Finish serving /192.168.2.100:49618
10-01 12:29:22.462 12980 24789 W System.err: watchdog woken up, ok...
10-01 12:29:22.463 12980 24788 W System.err: using port: 5001
10-01 12:29:22.465   769   873 E Netd    : getNetworkForDns: getNetId from enterpriseCtrl is netid 0
10-01 12:29:22.538 12980 24788 W System.err: registered with tracker...
10-01 12:29:22.538 12980 24788 W System.err: waiting for requests...

Any idea what might be the problem?

Unfortunately, that information does not help. What could help is a backtrace of the error on the device, i.e. the call stack on the Android device at the point where "Check failed: (allow_missing) is false: Device API rpc is not enabled" is raised. I’m not sure of the best way to get this. Maybe you can set a breakpoint on /Users/nkaminsky/code/my-tvm-new/apps/android_rpc/app/src/main/jni/../../../../../../include/../src/runtime/c_runtime_api.cc:131 on the Android device.

I ran the script with the main branch by mistake… With your branch the issue is fixed! Thanks a lot for the help and patience.