Help! Tons of errors when using AOT, I don't know what to do

I’d like to try AOT, but I have encountered a series of errors. :sob: My TVM version:

commit 75cf964b0b2d4f737b5cb25131a6c146b5edf22d (HEAD -> main, origin/main, origin/HEAD)
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Oct 18 17:31:33 2021 -0400

    Test run triage (#9308)

My LLVM version is 11.0.0

Here is my code:

#test_aot.py
import tvm
from tvm import relay
from tensorflow import keras
from tvm.relay.backend.executor_factory import AOTExecutorFactoryModule

path = path_to_keras_mod  # placeholder; layout and shape_dict are defined elsewhere
target = tvm.target.Target("llvm -mcpu=skylake --executor=aot")
with tvm.transform.PassContext(opt_level=3):
    mod = keras.models.load_model(path, compile=True)
    mod, params = relay.frontend.from_keras(mod, layout=layout, shape=shape_dict)
    factory: AOTExecutorFactoryModule = relay.build(mod, target=target, target_host="llvm", params=params)
path_lib = "./mnist_llvm_x86_aot.so"
factory.export_library(path_lib)

Errors:

2021-10-21 14:16:58.339944: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "test_aot.py", line 42, in <module>
    test_mnist()
  File "test_aot.py", line 21, in test_mnist
    target = tvm.target.Target("llvm -mcpu=skylake --executor=aot")
  File "/Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/python/tvm/target/target.py", line 112, in __init__
    self.__init_handle_by_constructor__(_ffi_api.Target, target)
  File "/Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/python/tvm/_ffi/_ctypes/object.py", line 136, in __init_handle_by_constructor__
    handle = __init_by_constructor__(fconstructor, args)
  File "/Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 260, in __init_handle_by_constructor__
    raise get_last_ffi_error()
ValueError: Traceback (most recent call last):
  5: TVMFuncCall
        at /Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/src/runtime/c_runtime_api.cc:474
  4: tvm::runtime::PackedFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
        at /Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/include/tvm/runtime/packed_func.h:1151
  3: std::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
        at /usr/include/c++/7/bits/std_function.h:706
  2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void (*)(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
        at /usr/include/c++/7/bits/std_function.h:316
  1: tvm::TargetInternal::ConstructorDispatcher(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
        at /Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/src/target/target.cc:574
  0: tvm::Target::Target(tvm::runtime::String const&)
        at /Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/src/target/target.cc:472
  File "/Tony/workspace/Deeplearning_Framework/TVM/tvm_latest/tvm/src/target/target.cc", line 472
ValueError: Error when parsing target["executor"]: Cannot recognize 'executor'. Candidates are: unpacked-api, runtime, interface-api, keys, link-params, device, mcpu, host, mattr, mtriple, tag, model, system-lib, libs, mfloat-abi, from_device, mabi. Target creation from string failed: llvm -mcpu=skylake --executor=aot
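A quick probe confirms that executor is simply not among the attributes registered for the llvm target kind at this commit (a sketch; Target.list_kinds and the ValueError shown in the traceback are the only APIs it relies on):

import tvm

# List every registered target kind ("llvm", "c", "cuda", ...).
print(tvm.target.Target.list_kinds())

# Target parsing fails fast, so probing a flag is cheap.
try:
    tvm.target.Target("llvm -mcpu=skylake --executor=aot")
except ValueError as err:
    print("llvm rejects --executor:", err)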

So I force-set executor = "aot" inside the relay.build function, like this:

#relay.build
    target, target_host = Target.check_and_update_host_consist(
        target, target_host, target_is_dict_key=False
    )

    # Retrieve the executor from the target
    # executor = get_executor_from_target(target, target_host)
    executor = "aot"  # here: forced to "aot"
    # If current dispatch context is fallback context (the default root context),
    # then load pre-tuned parameters from TopHub
    if isinstance(autotvm.DispatchContext.current, autotvm.FallbackContext):
        tophub_context = autotvm.tophub.context(list(target.values()))
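As an aside, before patching build internals like this, a less invasive probe might work; this is an assumption on my part that the executor flag is registered on the c target kind at this commit, which the reply below also hints at:

import tvm

# The "c" target kind seems to accept the executor flag that "llvm" rejects.
tgt = tvm.target.Target("c -executor=aot")
print(tgt.attrs.get("executor"))  # expected: "aot"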

My new code:

#test_aot.py
with tvm.transform.PassContext(opt_level=3):
    mod = keras.models.load_model(path, compile=True)
    mod, params = relay.frontend.from_keras(mod, layout=layout, shape=shape_dict)
    factory: AOTExecutorFactoryModule = relay.build(mod, target="llvm", target_host="llvm", params=params)
path_lib = "./mnist_llvm_x86_aot.so"
factory.export_library(path_lib)

Then I got Segmentation fault (core dumped). I tried to find the root cause. It seems to come from tvm_latest/tvm/src/target/llvm, in the function CreateIntrinsic, and the std::cout I added prints unknown intrinsic Op(tir.lookup_param):

#else
    std::vector<unsigned> indices;
#endif
    for (int i = 0; i < num_elems; ++i) {
      indices.push_back(i);
    }
    return builder_->CreateShuffleVector(v0, v1, indices);
  } else if (op->op.same_as(builtin::atomic_add())) {
    // TODO(masahi): Support atomic for CPU backend
    LOG(FATAL) << "CPU backend does not support atomic add yet.";
    return nullptr;
  } else {
    std::cout << "unknown intrinsic " << op->op << std::endl;  // I added this
    LOG(FATAL) << "unknown intrinsic " << op->op;
    return nullptr;
  }
}
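In case it helps anyone chasing similar crashes: you can walk the lowered TIR and list every op that gets called, to spot builtins like tir.lookup_param before they reach codegen. A rough sketch, assuming prim_func is a lowered tir.PrimFunc:

from tvm import tir

def list_called_ops(prim_func):
    """Collect the names of all ops called inside a lowered PrimFunc."""
    called = set()

    def visit(node):
        if isinstance(node, tir.Call):
            called.add(str(node.op))

    tir.stmt_functor.post_order_visit(prim_func.body, visit)
    return called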

Hey @tony222, thanks for trying this out! At the moment we only support AOT using the C runtime (e.g. with microTVM). We’re working on bringing support to the C++ runtime so that you can use it with export_library soon. Stay tuned!
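For anyone who wants to try the supported path in the meantime, here is a minimal sketch of the C-runtime AOT flow. The exact flag spellings and the export helper are my assumptions for this commit, so double-check them against your checkout; mod and params are the ones produced by from_keras above:

import tvm
from tvm import relay
from tvm.micro import export_model_library_format

# AOT via the C runtime: target kind "c", not "llvm".
target = tvm.target.Target("c -executor=aot -runtime=c -interface-api=c")
with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
    factory = relay.build(mod, target=target, params=params)

# The result goes out as a Model Library Format archive for microTVM,
# not as a shared object via export_library.
export_model_library_format(factory, "./mnist_aot.tar")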


Thank you, sir! We are all looking forward to seeing this land. Good luck and best wishes. :two_hearts: Oh, by the way, I haven’t seen any call such as TVMBackendParallelLaunch in the AOT-generated C runtime code. Is a thread pool available, or will it be?

We haven’t yet tried parallel launching in AOT, but in general we expect to tackle heterogeneous execution in the C runtime through the [pre-RFC] C Device API (that RFC in particular doesn’t explicitly address it, but that’s the approach we’re taking).

Thank you very much! This looks interesting, and I’ve got something new to learn. :love_letter: