Why does "c --unpacked-api" no longer work?

For me, target="c --unpacked-api=1" in tvm.build is also not working on the latest main commit. It does work when I use relay.build, though. Is this the expected behavior?

Hi, we are currently working on a prototype for the TIR export functionality. After pulling the latest changes from main, we are no longer able to create C code with the AOT executor. We tried many different options, and all of them fail.

This works:

    target = "c"
    executor = "graph"
    executor_options = {}

    with tvm.transform.PassContext(
        opt_level=opt_level,
        config={"tir.disable_vectorize": True},
    ):
        module = relay.build(
            loaded_model.mod,
            executor=tvm.relay.backend.Executor(name=executor, options=executor_options),
            target=tvm.target.Target(target, host=target),
            params=loaded_model.params,
        )
        source = module.lib.get_source()

This

    target = "c"
    executor = "aot"
    executor_options = {}

and this

    target = "c"
    executor = "aot"
    executor_options = {"unpacked-api": 0}

and this

    target = "c"
    executor = "aot"
    executor_options = {"unpacked-api": 0, "interface-api": "packed"}

all fail with

    File "$HOME/MyGithub/tvm/src/runtime/module.cc", line 104
    TVMError: Module[metadata_module] does not support GetSource

This

    target = "c"
    executor = "aot"
    executor_options = {"unpacked-api": 1}

and this

    target = "c"
    executor = "aot"
    executor_options = {"unpacked-api": 1, "interface-api": "packed"}

fail with

    File "$HOME/MyGithub/tvm/src/relay/backend/aot_executor_codegen.cc", line 921
    TVMError:
    An error occurred during the execution of TVM. For more information, please see: Handle TVM Errors — tvm 0.9.dev0 documentation
    Check failed: (static_cast<bool>(use_unpacked_api_) == false) is false: Need unpacked-api == false (got: 1) and interface-api == "packed" (got: packed) when targeting c++ runtime

This

    target = "c --unpacked-api"
    executor = "aot"
    executor_options = {}

fails with

    File "/home/bbo9fe/MyGithub/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 260, in __init_handle_by_constructor__
        raise get_last_ffi_error()
    tvm._ffi.base.TVMError: TVMError: Executor "graph": Attribute "unpacked-api" not in "[link-params]".

(The error message is more detailed than on main, but that is the only difference in our code.)

The reason for this is that in python/tvm/relay/build_module.py, in the function _reconstruct_from_deprecated_options(deprecated_params_target), the executor is replaced by a graph executor if deprecated_executor_args contains any elements. Is this behavior intended? In our opinion it is hard for users to understand what happens here.
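To make the surprising part concrete, here is a toy Python model of the behavior described above. This is hypothetical illustration code, not the actual build_module.py source; the function name `reconstruct_executor` is made up.

```python
# Toy model of the deprecated-option reconstruction described above;
# names and structure are illustrative, NOT the real TVM code.
def reconstruct_executor(requested_executor, deprecated_executor_args):
    """If any deprecated executor flags (e.g. parsed from a target string
    like "c --unpacked-api") are present, the requested executor is
    silently replaced by the "graph" executor."""
    if deprecated_executor_args:
        return "graph", dict(deprecated_executor_args)
    return requested_executor, {}

# An explicitly requested "aot" executor is overridden:
name, opts = reconstruct_executor("aot", {"unpacked-api": 1})
print(name)  # graph
```

This is why the error message above complains about Executor "graph" even though "aot" was requested.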

This

    target = "llvm"
    executor = "aot"
    executor_options = {"unpacked-api": 0, "interface-api": "packed"}

fails with

    4: tvm::codegen::CodeGenLLVM::VisitStmt_(tvm::tir::SeqStmtNode const*)
    3: tvm::tir::ExprFunctor<llvm::Value* (tvm::PrimExpr const&)>::VisitExpr(tvm::PrimExpr const&)
    2: tvm::codegen::CodeGenLLVM::VisitExpr_(tvm::tir::CallNode const*)
    1: tvm::codegen::CodeGenCPU::CreateIntrinsic(tvm::tir::CallNode const*)
    0: tvm::codegen::CodeGenLLVM::CreateIntrinsic(tvm::tir::CallNode const*)
    File "$HOME/MyGithub/tvm/src/target/llvm/codegen_llvm.cc", line 1091
    TVMError: unknown intrinsic Op(tir.tvm_call_cpacked_lowered)

This error message is hard to understand:

    Check failed: (static_cast<bool>(use_unpacked_api_) == false) is false: Need unpacked-api == false (got: 1) and interface-api == "packed" (got: packed) when targeting c++ runtime

Why is the C++ runtime referenced in aot_executor_codegen.cc?

The current behavior is really hard to understand. We would like to work on this issue but do not have a deep enough technical understanding. So any input is highly welcome.


Hi @SebastianBoblestETAS, can you try building with runtime = "c"? This should mirror the previous AOT configuration. PR 10283 added support for the C++ runtime, so your previous config may have actually been under-specified (but TVM was probably not properly complaining before). You should be able to use the previous options with runtime = "c".

Hi @areusch, thanks for helping me here. I can actually run everything now with

    runtime = tvm.relay.backend.Runtime("crt")
    executor = tvm.relay.backend.Executor(name="aot", options={"unpacked-api": 1})

however, source = module.lib.get_source() now only gives me the main function:

    #include "tvm/runtime/c_runtime_api.h"
    #ifdef __cplusplus
    extern "C" {
    #endif
    TVM_DLL int32_t tvmgen_default___tvm_main__(void* conv1d_input, void* output0);
    int32_t tvmgen_default_run(void* args, void* type_code, int num_args, void* out_value, void* out_type_code, void* resource_handle) {
      return tvmgen_default___tvm_main__(((DLTensor*)(((TVMValue*)args)[0].v_handle))[0].data, ((DLTensor*)(((TVMValue*)args)[1].v_handle))[0].data);
    }
    #ifdef __cplusplus
    }
    #endif
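As an aside, the wrapper above only unpacks the packed-API arguments and forwards the raw data pointers to the main function. Here is a toy Python model of that unpacking; `DLTensor` and `TVMValue` below are simplified stand-ins, not the real TVM runtime structs, and the doubling kernel is invented for illustration.

```python
# Simplified stand-ins for the TVM runtime structs; NOT the real API.
class DLTensor:
    def __init__(self, data):
        self.data = data  # the raw data "pointer"

class TVMValue:
    def __init__(self, v_handle):
        self.v_handle = v_handle  # handle to a DLTensor

def tvmgen_default___tvm_main__(conv1d_input, output0):
    # Stand-in kernel: double the single input value.
    output0[0] = conv1d_input[0] * 2
    return 0

def tvmgen_default_run(args):
    # Mirrors ((DLTensor*)(((TVMValue*)args)[i].v_handle))[0].data in the C code:
    # each packed argument is a handle to a DLTensor, whose .data is passed on.
    return tvmgen_default___tvm_main__(args[0].v_handle.data, args[1].v_handle.data)

inp, out = [21], [0]
tvmgen_default_run([TVMValue(DLTensor(inp)), TVMValue(DLTensor(out))])
print(out[0])  # 42
```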

The code for the kernels is no longer there, and I do not know how to find it. Maybe a little context on what we want to do is helpful: we are currently working on a prototype of the TIR exporter we recently discussed and have added a new function get_json() to Module.

The goal that we have is to be able to extract the full TIR representation of the host module and all device modules regardless of the specific executor, target or runtime.

We are currently writing a JsonScriptPrinter class similar to the TVMScriptPrinter class. We currently use it in driver_api.cc in the build function like this:

    const auto* bf = runtime::Registry::Get("script.AsJSONScript");
    std::string jsonTir = (*bf)(mhost_all, "T", false);
    mhost->SetJson(jsonTir);

Currently we think this is a good approach to provide an API for this, but it can be discussed in an RFC once we have a working version. For the moment it would be sufficient to again get access to the kernel code for executor = Executor("aot"). For executor = Executor("graph") everything works fine. @UlrikHjort @Khoi @MJKlaiber


We discussed this a bit at the TVM Community Meeting this morning:

  • Sebastian raised three concerns:

    1. Can we add a clean set of unit tests to document how tvm.relay.build is called and catch API changes?
    2. Error messages are not detailed enough. The error message Sebastian received didn’t make sense unless he already knew in advance that the default executor was graph.
    3. Deprecated parameters are not handled in an ideal way. It’s hard to know what the set of options are.
  • We discussed that this breakage was likely due to relay.build allowing invalid build options in the past. PR 10283 tightened the set of allowed options.

  • Gave some context on PR 9352 which originally introduced these runtime and executor parameters. This change is a half-step to the desired end state discussed in [pre-RFC] Compilation Configuration Representation. That RFC needs to be resolved before we can proceed here. We want to resolve this before releasing 0.9.

  • In the meantime, it seems like we should fix up _reconstruct_from_deprecated_options to provide better error messages, since a few different folks have run into problems here.

  • @gromero raised that tvmgen_default_run_model was renamed to tvmgen_default___main__, and this was an externally-visible change. This shouldn’t have been an API change, but @gromero was working around a limitation in AOT where -link-params=0 was not supported. @gromero will reproduce this and raise a GH issue to capture this.


@SebastianBoblestETAS glad you were able to get this working again.

I think we have changed the module hierarchy with that PR. You can explore the Module hierarchy like so:

def print_mod_tree(m, indent=0):
    print(f"{' ' * indent} - {m!r}")
    for i in m.imported_modules:
        print_mod_tree(i, indent + 2)

You might need to pick through imported_modules to find the kernels you’re looking for.
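For illustration, print_mod_tree works on anything exposing imported_modules. Here is a self-contained toy where FakeModule is a made-up stand-in for runtime.Module (the real objects come from relay.build), with a plausible, assumed hierarchy:

```python
# FakeModule is a toy stand-in for tvm.runtime.Module, just enough to walk a tree.
class FakeModule:
    def __init__(self, type_key, imported_modules=()):
        self.type_key = type_key
        self.imported_modules = list(imported_modules)

    def __repr__(self):
        return f"Module({self.type_key})"

def print_mod_tree(m, indent=0):
    print(f"{' ' * indent} - {m!r}")
    for i in m.imported_modules:
        print_mod_tree(i, indent + 2)

# Assumed example hierarchy: a metadata module importing the kernel modules.
tree = FakeModule("metadata_module", [FakeModule("c"), FakeModule("llvm")])
print_mod_tree(tree)
```

With a real module, one would walk imported_modules the same way and try get_source() on the leaf modules to find the kernel code.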

I think it would be great to discuss this use case with folks working more closely on TIR. But, in order to avoid hijacking this thread away from the tvm.relay.build API incompatibilities, could you start another thread and perhaps outline your use case there? This could be a “pre-RFC” (but a very simple one…no need to follow the RFC template, just give a brief summary of your motivations, goals, and approach). I can cc some relevant folks there.