Currently, the path to supporting PyTorch in TVM isn't ideal. A previous issue concluded that PyTorch/TVM was a suitable solution for the time being, but it doesn't follow TVM's other frontend implementations, which provide a Python function that takes a model and returns the corresponding Relay module and converted parameters. Instead, one must build that project, which vendors its own TVM repo (currently Facebook's fork) as a sub-directory built by the project's build script, and then either call to_relay (to grab the Relay module) or call an enable function in the Python script where the model is defined (which uses TVM to optimize and update the PyTorch graph). The other option, probably used more often, is the PyTorch -> ONNX -> Relay path.
I propose we add a simple frontend to enable PyTorch for TVM in line with the other frameworks. I have implemented a simplified version that accepts traced models and has all image classification models in TorchVision working except shufflenet (i.e., resnet, squeezenet, vgg, mobilenet, densenet, inception, alexnet, googlenet, and mnasnet); the PR is in Amazon's fork. The implementation is quite similar to the other frontends and lets us easily map new operators for more coverage down the line. It supports PyTorch 1.3 and below, and also seemed to work with the 1.4 nightly as of a week ago (I haven't tested more recently).
Later on, we can also add support for control flow (through scripting), which would widen model coverage. One issue I can see is keeping the parser up to date with PyTorch, but I think all of the frontends share that problem. Let me know your thoughts and whether there's interest in bringing this upstream. I opened the PR in Amazon's fork since we will probably want it there regardless.
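For reference, the trace/script distinction that the control-flow point rests on can be shown with a toy function (illustrative only, not part of the frontend): tracing records the single execution path taken for the example input, while scripting compiles the Python source and preserves data-dependent branches.

```python
import torch

def f(x):
    # Data-dependent branch: torch.jit.trace would bake in whichever
    # path the example input takes, but torch.jit.script keeps the `if`.
    if x.sum() > 0:
        return x * 2
    return -x

scripted = torch.jit.script(f)

# Both branches survive scripting:
assert (scripted(torch.ones(2)) == 2 * torch.ones(2)).all()
assert (scripted(-torch.ones(2)) == torch.ones(2)).all()
```

Supporting scripted models would mean mapping TorchScript's control-flow constructs (prim::If, prim::Loop) onto Relay's, which is a natural follow-up once the traced-model frontend is in.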