Hi all, I've run into a problem:
My model contains a BN (batch_norm) layer, and I applied `relay.transform.FuseOps`
to optimize my Relay module, but I got the error below:
batch_norm is not optimized for this platform.
……
raise RuntimeError(f"schedule not registered for '{target}'")
RuntimeError: schedule not registered for 'cuda -keys=cuda,gpu -arch=sm_80 -max_num_threads=1024 -thread_warp_size=32'
Does this mean I can't use `relay.transform.FuseOps`
on GPU for a model that has batch_norm?
(If I don't apply `relay.transform.FuseOps`,
the error doesn't happen.)
I'd sincerely appreciate any advice or suggestions.