[PyTorch] register_forward_hook support

I would like to compile a PyTorch model that has forward hooks registered. Does TVM support something like this? My goal is to use the hooks to return to the CPU side periodically.
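
For reference, this is roughly what I mean by hooks (the model and the hook body below are just placeholders):

```python
import torch
import torch.nn as nn

def back_to_cpu(module, inputs, output):
    # A forward hook runs as plain Python on the host after the
    # module's forward pass, so it acts as a periodic CPU-side exit.
    pass  # e.g. check a flag, log, or yield control here

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
for layer in model:
    layer.register_forward_hook(back_to_cpu)

out = model(torch.randn(2, 16))
```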

I’m totally new to TVM. Any suggestions would be valuable! Thanks!

No, it is not supported. You have to trace your model via torch.jit.trace(...) first.
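
A minimal sketch of the usual flow, assuming a torchvision model and the standard Relay PyTorch frontend (the input name and shape are just examples):

```python
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor

model = torchvision.models.resnet18().eval()
inp = torch.randn(1, 3, 224, 224)

# Tracing records only tensor ops; forward hooks are plain Python
# callbacks, so their effects do not survive into the traced graph.
scripted = torch.jit.trace(model, inp)

mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3, 224, 224))])
lib = relay.build(mod, target="llvm", params=params)

dev = tvm.cpu(0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("input0", tvm.nd.array(inp.numpy()))
rt.run()
out = rt.get_output(0)
```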

Thanks for the answer! Is there any way to add hooks within TVM’s runtime?

Probably not, but what exactly do you want to do with the hooks? The ways TVM and PyTorch work are very different. If adding hooks involves Python, it’s not going to work.

We want to implement GPU preemption during inference execution. The way we do it in PyTorch is to use hooks as exit points. I wonder if we could do something like calling “cudaDeviceSynchronize” between CUDA kernels within TVM-compiled models?
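
For context, this is roughly what our exit points look like on the PyTorch side (the preemption check itself is just a placeholder):

```python
import torch
import torch.nn as nn

def exit_point(module, inputs, output):
    if torch.cuda.is_available():
        # torch.cuda.synchronize() blocks until all queued kernels on
        # the device have finished (cudaDeviceSynchronize under the
        # hood), giving a safe point to decide whether to yield the GPU.
        torch.cuda.synchronize()
    # ... check a preemption flag here and hand off if it is set ...

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3)).eval()
if torch.cuda.is_available():
    model = model.cuda()
for layer in model:
    layer.register_forward_hook(exit_point)
```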

If that’s all you need, it is probably not difficult to support. Actually, I might have other use cases for a runtime hook mechanism. In quantization, I often want to look at the values of intermediate tensors, either to calculate quantization parameters or to figure out which layers are causing the most accuracy loss. If I could pass a user-defined Python function to the runtime and have it called after every layer, that would be very useful. Do you know if PyTorch’s register_forward_hook would enable something like that?
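
In PyTorch terms, something like the sketch below is what I have in mind, assuming register_forward_hook works the way I think it does (the module filter and the min/max statistics are just examples):

```python
import torch
import torch.nn as nn

records = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Record per-layer statistics of the intermediate tensor; for
        # calibration this could be min/max, histograms, etc.
        records[name] = (output.min().item(), output.max().item())
    return hook

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
for name, module in model.named_modules():
    if isinstance(module, (nn.Linear, nn.ReLU)):
        module.register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(8, 16))

print(records)  # per-layer (min, max) pairs
```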