Does TVM support multi-GPU inference?

Currently, TVM's inference on static models runs on a single card. Does TVM support multi-card distributed inference, or is this feature planned for the future?

Please see the related thread here.

Okay, thank you. I will keep following this research and development progress. @Hzfengsy