Relax: auto-tuning support for OpenCL

Hi, I am using this branch of TVM:

I want to auto-tune my TIR module using meta_schedule.tune_tir, but I got the error below:

Traceback (most recent call last):
  File "/mnt/e/ubuntu_code/relax/apps/relax_examples/my_test.py", line 154, in <module>
    database = ms.tune_tir(
  File "/mnt/e/ubuntu_code/relax/python/tvm/meta_schedule/tir_integration.py", line 134, in tune_tir
    TuneContext(
  File "/mnt/e/ubuntu_code/relax/python/tvm/meta_schedule/tune_context.py", line 149, in __init__
    _ffi_api.TuneContextInitialize(self)  # type: ignore # pylint: disable=no-member
  File "/mnt/e/ubuntu_code/relax/python/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  5: TVMFuncCall
  4: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void (tvm::meta_schedule::TuneContext)>::AssignTypedLambda<tvm::runtime::Registry::set_body_method<tvm::meta_schedule::TuneContext, tvm::meta_schedule::TuneContextNode, void, , void>(void (tvm::meta_schedule::TuneContextNode::*)())::{lambda(tvm::meta_schedule::TuneContext)#1}>(tvm::runtime::Registry::set_body_method<tvm::meta_schedule::TuneContext, tvm::meta_schedule::TuneContextNode, void, , void>(void (tvm::meta_schedule::TuneContextNode::*)())::{lambda(tvm::meta_schedule::TuneContext)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  3: tvm::meta_schedule::TuneContextNode::Initialize()
  2: tvm::meta_schedule::PostOrderApplyNode::InitializeWithTuneContext(tvm::meta_schedule::TuneContext const&)
  1: tvm::meta_schedule::SpaceGeneratorNode::InitializeWithTuneContext(tvm::meta_schedule::TuneContext const&)
  0: tvm::meta_schedule::GetRuleKindFromTarget(tvm::Target const&)
  File "/mnt/e/ubuntu_code/relax/src/meta_schedule/space_generator/space_generator.cc", line 82
TVMError: Unsupported target: opencl -keys=opencl,gpu -max_num_threads=256 -max_shared_memory_per_block=16384 -max_threads_per_block=256 -texture_spatial_limit=16384 -thread_warp_size=1

I checked https://github.com/mlc-ai/relax/blob/mlc/src/meta_schedule/space_generator/space_generator.cc, and it seems Relax auto-tuning does not support OpenCL.

Does anyone know how to tune for OpenCL with the relax branch? Thanks.

Can you try adding "opencl" to this line?
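For context, the function that fails (GetRuleKindFromTarget in src/meta_schedule/space_generator/space_generator.cc) dispatches schedule rules by target kind. A rough sketch of the kind of change being suggested here — the exact code in the branch may differ in names and structure, so treat this as illustrative only:

```
// Hypothetical sketch: let the "opencl" target kind reuse the GPU
// schedule-rule set that the "cuda" kind already selects, instead of
// falling through to the "Unsupported target" fatal error.
if (target->kind->name == "cuda" || target->kind->name == "opencl") {
  return "cuda";  // reuse CUDA/GPU schedule rules for OpenCL
}
LOG(FATAL) << "Unsupported target: " << target->str();
```

Whether the CUDA rule set produces valid schedules for a given OpenCL device still depends on the device's thread and shared-memory limits.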

Thanks for your reply. I modified the code as you suggested. I want to tune for OpenCL on my Android phone via RPC; my code is below:

rpc_host = "127.0.0.1"
rpc_port = 9190
rpc_key = "android"

my_rpc_config = ms.runner.RPCConfig(
    tracker_host=rpc_host,
    tracker_port=rpc_port,
    tracker_key=rpc_key,
    session_timeout_sec=180,
)
my_workers = my_rpc_config.count_num_servers(allow_missing=False)

def get_runner():
    runner_config = {
        "evaluator_config": ms.runner.EvaluatorConfig(
            number=3,
            repeat=1,
            min_repeat_ms=100,
            enable_cpu_cache_flush=False,
        ),
        "alloc_repeat": 5,
    }
    runner = ms.runner.RPCRunner(
        rpc_config=my_rpc_config, max_workers=my_workers, **runner_config
    )

    return runner

database = ms.tune_tir(
    mod=MyModule,
    target="opencl",
    max_trials_global=1280,
    num_trials_per_iter=64,
    work_dir="./tune_tmp",
    runner=get_runner(),
)
print(type(database))
sch = ms.tir_integration.compile_tir(database, MyModule, "opencl")

but I got another error, as below:

2023-06-06 08:32:58 [INFO] [task_scheduler.cc:121] [Task #0: main] Trial #1: Error in running:
RPCRunner: An exception occurred
Traceback (most recent call last):
  File "/mnt/e/ubuntu_code/relax/python/tvm/exec/popen_worker.py", line 87, in main
    result = fn(*args, **kwargs)
  File "/mnt/e/ubuntu_code/relax/python/tvm/meta_schedule/runner/rpc_runner.py", line 392, in _worker_func
    rt_mod: Module = f_upload_module(session, local_path, remote_path)
  File "/mnt/e/ubuntu_code/relax/python/tvm/meta_schedule/runner/rpc_runner.py", line 451, in default_upload_module
    rt_mod: Module = session.load_module(remote_path)
  File "/mnt/e/ubuntu_code/relax/python/tvm/rpc/client.py", line 161, in load_module
    return _ffi_api.LoadRemoteModule(self._sess, path)
  File "/mnt/e/ubuntu_code/relax/python/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
    raise get_last_ffi_error()
tvm.error.RPCError: Traceback (most recent call last):
  10: TVMFuncCall
  9: _ZN3tvm7runtime13PackedFun
  8: tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::runtime::Module, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)>::AssignTypedLambda<tvm::runtime::__mk_TVM4::{lambda(tvm::runtime::Module, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)#1}>(tvm::runtime::__mk_TVM4::{lambda(tvm::runtime::Module, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const, tvm::runtime::TVMRetValue) const
  7: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
  6: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
  5: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
  4: tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::function<void (tvm::runtime::TVMArgs)>)
  3: tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::function<void (tvm::runtime::TVMArgs)>)
  2: tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::function<void (tvm::runtime::TVMArgs)>)
  1: tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::function<void (tvm::runtime::TVMArgs)>)
  0: _ZN3tvm7runtime6detail8LogFatalD2Ev.
  File "/mnt/e/ubuntu_code/relax/src/runtime/rpc/rpc_endpoint.cc", line 376
RPCError: Error caught from RPC call:
[16:32:46] /mnt/e/ubuntu_code/relax/apps/cpp_rpc/rpc_env.cc:247: sh: g++: inaccessible or not found

Do you have any idea about this error? Thanks.

Try CUDA first, since it is the mainstream target. Relax is still under development, so it will surely support CUDA first.

This is mainly for mobile use: Android phones only have OpenCL, not CUDA.

Hello, have the Android tuning issues been resolved? I am running into the same problem.

@twzhyyxwhez31057, @huanyingjun I suppose it is necessary to specify an export function for exporting the model to the Android device. This issue should be resolved in this PR. You can see in the test how to configure your builder object for Meta-Schedule.
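The "g++: inaccessible or not found" error suggests the default build/export path tries to compile the module with the host g++, which is not available for the Android target. A minimal sketch of wiring a custom export function into the builder, using tvm.contrib.ndk.create_shared to cross-compile with the Android NDK instead (the helper name ndk_export is hypothetical, and this assumes the TVM_NDK_CC environment variable points at your NDK compiler):

```python
import os
import tempfile

from tvm import meta_schedule as ms
from tvm.contrib import ndk  # requires TVM_NDK_CC to point at the NDK compiler


def ndk_export(mod):
    """Hypothetical export helper: package the built module as a .so
    with the Android NDK toolchain instead of the host g++."""
    path = os.path.join(tempfile.mkdtemp(), "lib.so")
    mod.export_library(path, fcompile=ndk.create_shared)
    return path


# Pass the export function to the builder so that artifacts shipped
# over RPC are cross-compiled for the Android device.
builder = ms.builder.LocalBuilder(f_export=ndk_export)

# Then hand the builder to the tuning entry point alongside the runner:
# database = ms.tune_tir(..., builder=builder, runner=get_runner())
```

Check the test referenced in the PR for the exact builder configuration used there; the sketch above only shows the general shape.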
