TVM CUDA grid_sample op with FP16

My model contains only a grid_sample operator, and I want TVM to run it on CUDA with FP16 precision. After mod = partition_for_tensorrt(mod, params, target=trt_target), this warning appears:

Warning: Op "image.grid_sample" not registered FTVMMixedPrecisionConversionType appears 1 times in graph.

and then this error occurs:

Check failed: ret == 0 (-1 vs. 0) : Assert fail: (((tir.tvm_struct_get(arg.compute, 0, 5) == (uint8)2) && (tir.tvm_struct_get(arg.compute, 0, 6) == (uint8)32)) && (tir.tvm_struct_get(arg.compute, 0, 7) == (uint16)1)), arg.compute.dtype is expected to be float32

If I read the assert correctly, it checks the DLTensor dtype fields (type code 2 = float, 32 bits, 1 lane), i.e. the compute argument is still expected to be float32. So TVM doesn't support generating FP16 CUDA grid_sample code, right?
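From the warning, I guess image.grid_sample simply has no FTVMMixedPrecisionConversionType registered, so ToMixedPrecision leaves it in FP32 while the rest of the graph is cast to FP16. One workaround I'm considering is registering a conversion rule myself before calling ToMixedPrecision. This is an untested sketch, assuming register_mixed_precision_conversion from tvm.relay.op is the right hook:

from tvm.relay.op import register_mixed_precision_conversion
from tvm.relay.transform.mixed_precision import MIXED_PRECISION_FOLLOW

# Untested sketch: let image.grid_sample "follow" the surrounding graph,
# i.e. run in FP16 when its inputs are FP16, accumulating and outputting
# in the mixed-precision dtype.
@register_mixed_precision_conversion("image.grid_sample")
def _grid_sample_mixed_precision(call_node, mixed_precision_type):
    return [MIXED_PRECISION_FOLLOW, mixed_precision_type, mixed_precision_type]

Even with this registration, I'm not sure the CUDA schedule for grid_sample actually supports float16, which is really my question.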

My code looks like this:

import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

model_path = "/data2/qingqing/tvm/.vscode/ws/onnx_models/grid_sample.onnx"
onnx_model = onnx.load(model_path)
input_shape = (48, 32, 8, 22)
input_shape_1 = (48, 29838, 8, 2)

input_name = "value_l_"
input_name_1 = "sampling_grid_l_"

shape_dict = {"value_l_": (48, 32, 8, 22), "sampling_grid_l_": (48, 29838, 8, 2)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)  # freeze_params=True

mod = relay.transform.DynamicToStatic()(mod)

mod = relay.transform.ToMixedPrecision("float16")(mod)
from tvm.relay.op.contrib.tensorrt import partition_for_tensorrt
data_type = "float16"
use_fp16 = data_type == "float16"
trt_target = tvm.target.Target(f"tensorrt -use_fp16={use_fp16} -use_implicit_batch=False")
mod = relay.transform.InferType()(mod)
mod = partition_for_tensorrt(mod, params, target=trt_target)
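# Sketch: check where image.grid_sample landed after partitioning. Offloaded
# subgraphs carry a "Compiler" attribute; grid_sample is not in the TensorRT
# op list as far as I can tell, so I expect it to stay on the CUDA side.
for gv in mod.get_global_vars():
    func = mod[gv]
    if func.attrs and "Compiler" in func.attrs:
        print(gv.name_hint, "->", func.attrs["Compiler"])
    else:
        print(gv.name_hint, "-> tvm (cuda)")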
target = "cuda"

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=[target, trt_target], params=params)

dev = tvm.cuda(0)
gen_module = graph_executor.GraphModule(lib["default"](dev))
gen_module.run()
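For reference, this is roughly how I feed the inputs before run() (a sketch with random test data; input names and shapes are from shape_dict above, and the dtype is float32 going by the assert message):

import numpy as np

value = np.random.randn(48, 32, 8, 22).astype("float32")
# grid_sample sampling coordinates are normalized to [-1, 1]
grid = np.random.uniform(-1.0, 1.0, (48, 29838, 8, 2)).astype("float32")
gen_module.set_input("value_l_", value)
gen_module.set_input("sampling_grid_l_", grid)
gen_module.run()
out = gen_module.get_output(0).numpy()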