Does the graph tuner work on OpenCL?

I tried the AutoTVM ResNet-18 tutorial with the target set like this: `target = tvm.target.Target(target="opencl", host="llvm")`.
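
For context, here is roughly how my script sets up the target and tuning options (adapted from the tune_relay_x86 tutorial; the file names and measure settings below are just my own choices, not from the tutorial):

```python
import tvm
from tvm import autotvm

# Compile kernels for OpenCL devices; LLVM generates the host-side code.
target = tvm.target.Target(target="opencl", host="llvm")

# Tuning options in the tutorial's layout; file names are placeholders.
log_file = "resnet-18-opencl.log"
graph_opt_sch_file = "resnet-18-opencl_graph_opt.log"
tuning_option = {
    "log_filename": log_file,
    "tuner": "xgb",
    "early_stopping": None,
    "measure_option": autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=1000),
    ),
}
```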

The tuning tasks run successfully; however, `tune_graph` fails:

> Traceback (most recent call last):
>   File "./tune_relay_local_opencl.py", line 245, in <module>
>     tune_and_evaluate(tuning_option)
>   File "./tune_relay_local_opencl.py", line 214, in tune_and_evaluate
>     tune_graph(mod["main"], data_shape, log_file, graph_opt_sch_file)
>   File "./tune_relay_local_opencl.py", line 193, in tune_graph
>     executor = Tuner(graph, {input_name: dshape}, records, target_op, target)
>   File "/Users/banma-1396/proj/tvm/tvm/python/tvm/autotvm/graph_tuner/dynamic_programming_tuner.py", line 44, in __init__
>     super(DPTuner, self).__init__(*args, **kwargs)
>   File "/Users/banma-1396/proj/tvm/tvm/python/tvm/autotvm/graph_tuner/base_graph_tuner.py", line 202, in __init__
>     self._fetch_cfg()
>   File "/Users/banma-1396/proj/tvm/tvm/python/tvm/autotvm/graph_tuner/base_graph_tuner.py", line 286, in _fetch_cfg
>     for record in cfg_dict[workload]:
> KeyError: ('conv2d_NCHWc.x86', ('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (64, 3, 7, 7), 'float32'), (2, 2), (3, 3, 3, 3), (1, 1), 'NCHW', 'NCHW', 'float32')
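
For reference, my `tune_graph` is essentially the one from the tune_relay_x86 tutorial (`input_name` and `target` are defined in the enclosing script):

```python
from tvm import relay
from tvm.autotvm.graph_tuner import DPTuner, PBQPTuner

def tune_graph(graph, dshape, records, opt_sch_file, use_DP=True):
    # Graph-level tuning over the conv2d ops in the network.
    target_op = [relay.op.get("nn.conv2d")]
    Tuner = DPTuner if use_DP else PBQPTuner
    executor = Tuner(graph, {input_name: dshape}, records, target_op, target)
    executor.benchmark_layout_transform(min_exec_num=2000)
    executor.run()
    executor.write_opt_sch2record_file(opt_sch_file)
```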

After adding more debug prints, I found that the op name "conv2d_NCHWc.x86" generated from `tune_graph` is not right. From the AutoTVM task log, the recorded schedule templates seem to be something like `conv2d_nchw.cuda`.
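
This is the kind of check I used, in case it helps reproduce (a minimal sketch; `log_file` is the task-tuning log from above):

```python
from tvm import autotvm

# Print the schedule template name recorded for each entry in the tuning log;
# in my log these come out as names like conv2d_nchw.cuda, not conv2d_NCHWc.x86.
for inp, _ in autotvm.record.load_from_file(log_file):
    print(inp.task.name)
```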

With the `llvm` target, the same Python script works fine. Does the graph tuner work on OpenCL?