Batch_norm doesn't have a schedule for CUDA?

Hi guys, I ran into a problem:

I have a BN layer in my model, and I used relay.transform.FuseOps to optimize my Relay module, but got the error below:

batch_norm is not optimized for this platform.
……
raise RuntimeError(f"schedule not registered for '{target}'")
RuntimeError: schedule not registered for 'cuda -keys=cuda,gpu -arch=sm_80 -max_num_threads=1024 -thread_warp_size=32'

Does this mean I can’t use relay.transform.FuseOps on a model that has batch_norm on GPU?

(If I don’t use relay.transform.FuseOps, the error doesn’t happen.)

I sincerely ask for advice; I’d appreciate any suggestions.

Sorry, I got it now. The batch_norm has to be removed from the module first, i.e. decomposed into simpler ops (add, sqrt, multiply, and so on). After that processing, batch_norm is no longer in the module, and the “bn doesn’t have a CUDA schedule” problem goes away.