[AutoTVM] Optimizing Ansor-Generated Tasks with AutoTVM

Hi everyone,

I’ve been working on a project that optimizes tasks generated by Ansor using AutoTVM; you can find it on GitHub:

In a nutshell, I explore kernel configurations using Ansor. Once it gives me a good configuration, I use AutoTVM (via DropletSearch) to exploit it further. That worked for simple models. For more complex models, however, I had to use a ‘hack’ to bridge the gap between Ansor and AutoTVM, because the two differ in how they extract tasks from a model.

This ‘hack’ ultimately generates a task and a configuration space. However, when I try to run AutoTVM’s tuning method, I hit an error, and I’m wondering if any of you have encountered a similar issue. I am copying the error below. If you want to see the hack, check here: https://github.com/lac-dcc/bennu/blob/main/src/optimize_layer.py#L13

Has anyone else faced this problem or found a workaround for it? I’d greatly appreciate your insights and experiences on this matter.
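For anyone triaging this: the TypeError at the bottom of the trace looks like the generic “object holds a native handle” pickling failure. `Task.__getstate__` tries to cloudpickle `self.func`, and if the hack leaves a raw `PackedFunc` there, its `chandle` (a C pointer) cannot be serialized when the task is sent to a builder worker. Here is a stdlib-only stand-in of that failure mode (names like `FakePackedFunc` are made up for illustration, not TVM code):

```python
import pickle
import threading

class FakePackedFunc:
    """Illustrative stand-in for tvm.runtime.PackedFunc: it owns a native
    handle. A threading.Lock plays the role of the unpicklable C pointer."""
    def __init__(self):
        self.chandle = threading.Lock()

class FakeTask:
    """Illustrative stand-in for autotvm's Task, whose __getstate__ also
    serializes its `func` attribute (task.py line 189 in the trace)."""
    def __init__(self, func):
        self.func = func

    def __getstate__(self):
        return {"func": pickle.dumps(self.func)}

task = FakeTask(FakePackedFunc())
error = None
try:
    # Roughly what popen_pool.send() does before dispatching to a worker.
    pickle.dumps(task)
except TypeError as exc:
    error = exc
print(type(error).__name__, error)
```

So the question may really be how the hacked task ends up with a live `PackedFunc` in `task.func` rather than something picklable, such as a registered template name.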

Traceback (most recent call last):
  File "benchmarks/resnet18.py", line 131, in <module>
    build_template(logfile, index, target, trials)
  File "benchmarks/resnet18.py", line 89, in build_template
    execute_one_layer(c, cfg_ansor, target, trials)
  File "/home/canesche/git/bennu/src/optimize_layer.py", line 49, in execute_one_layer
  File "/home/canesche/git/tvm/python/tvm/autotvm/tuner/tuner.py", line 135, in tune
    results = measure_batch(inputs)
  File "/home/canesche/git/tvm/python/tvm/autotvm/measure/measure.py", line 290, in measure_batch
    build_results = builder.build(measure_inputs)
  File "/home/canesche/git/tvm/python/tvm/autotvm/measure/measure_methods.py", line 142, in build
    res = future.result()
  File "/home/canesche/miniconda3/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/home/canesche/miniconda3/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/home/canesche/miniconda3/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/canesche/git/tvm/python/tvm/contrib/popen_pool.py", line 432, in <lambda>
    worker = lambda *args: self._worker_run(*args)
  File "/home/canesche/git/tvm/python/tvm/contrib/popen_pool.py", line 400, in _worker_run
    proc.send(fn, args, kwargs, self._timeout)
  File "/home/canesche/git/tvm/python/tvm/contrib/popen_pool.py", line 254, in send
    data = cloudpickle.dumps((fn, args, kwargs, timeout), protocol=pickle.HIGHEST_PROTOCOL)
  File "/home/canesche/.local/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps
  File "/home/canesche/.local/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 632, in dump
    return Pickler.dump(self, obj)
  File "/home/canesche/git/tvm/python/tvm/autotvm/task/task.py", line 189, in __getstate__
    "func": cloudpickle.dumps(self.func),
  File "/home/canesche/.local/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps
  File "/home/canesche/.local/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 632, in dump
    return Pickler.dump(self, obj)
  File "stringsource", line 2, in tvm._ffi._cy3.core.PackedFuncBase.__reduce_cython__
TypeError: self.chandle cannot be converted to a Python object for pickling
Exception ignored in: <function Tracker.__del__ at 0x7fa352abc9d0>
Traceback (most recent call last):
  File "/home/canesche/git/tvm/python/tvm/rpc/tracker.py", line 495, in __del__
  File "/home/canesche/git/tvm/python/tvm/rpc/tracker.py", line 490, in terminate
  File "/home/canesche/git/tvm/python/tvm/contrib/popen_pool.py", line 145, in kill
  File "/home/canesche/git/tvm/python/tvm/contrib/popen_pool.py", line 43, in kill_child_processes
ImportError: sys.meta_path is None, Python is likely shutting down
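(The trailing `Tracker.__del__` ImportError is most likely just interpreter-shutdown noise triggered by the first failure.) If it helps, the usual way around this class of error is to never send the live function handle to the worker process at all: serialize a registry key, and resolve it back to the function on the other side. AutoTVM’s own templates follow this pattern, registering the function under a string name via `@autotvm.template`. A stdlib sketch of the idea, with made-up names (`REGISTRY`, `conv2d_template` are illustrative, not TVM APIs):

```python
import pickle

# Per-process registry; in a real setup it is repopulated in each worker
# when the defining module is imported there.
REGISTRY = {}

def register(name):
    """Register a function under a picklable string name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

class Task:
    """Carries only a name across the process boundary, never a handle."""
    def __init__(self, func_name):
        self.func_name = func_name          # a plain string pickles fine

    def func(self, *args):
        # Resolved lazily, in whichever process the task lands in.
        return REGISTRY[self.func_name](*args)

@register("conv2d_template")
def conv2d_template(x):
    return x * 2

task = Task("conv2d_template")
clone = pickle.loads(pickle.dumps(task))    # crosses the "process boundary"
print(clone.func(21))                       # prints 42
```

Under this assumption, the fix would be to make the hack hand AutoTVM a task whose `func` is a registered, name-addressable template rather than the `PackedFunc` produced during extraction.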