Import scripted (instead of traced) PyTorch model

I want to import a scripted (instead of a traced) PyTorch model into TVM. But why? I am working with a model that cannot be traced at all, because the input data is dynamic. Only scripting works.

According to the documentation (tvm.relay.frontend — tvm 0.8.dev0 documentation), scripting is not supported: "Note: We currently only support traces (ie: torch.jit.trace(model, input))".

The difference between scripting and tracing for a very basic neural net can be seen below:
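(A minimal sketch; the `SimpleNet` below is just an illustrative stand-in for a basic net, not my actual model.)

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = SimpleNet().eval()
dummy_input = torch.randn(1, 3, 32, 32)

# Tracing records the ops executed for this particular example input
traced_model = torch.jit.trace(model, dummy_input)

# Scripting compiles the Python source itself, keeping control flow intact
scripted_model = torch.jit.script(model)

print(traced_model.graph)
print(scripted_model.graph)
```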

Using the traced version with TVM works fine. When I try to use the scripted version, the following error occurs:

NotImplementedError: The following operators are not implemented: ['aten::feature_dropout_', 'aten::__is__', 'aten::format', 'aten::conv2d', 'prim::unchecked_cast', 'aten::warn', 'aten::dim', 'aten::__isnot__']
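For reference, I import both versions the same way (the input name and shape below are placeholders for my actual setup):

```python
import tvm
from tvm import relay

# Map graph input names to fixed shapes; "input0" and the shape are placeholders
shape_list = [("input0", (1, 3, 32, 32))]

# Works with the traced model ...
mod, params = relay.frontend.from_pytorch(traced_model, shape_list)

# ... but raises the NotImplementedError above with the scripted one
# mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
```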

It could be that some operators are simply not implemented in TVM, but given how much work has already gone into this (see e.g. apache/tvm#5133), that seems unlikely. Does anyone know if there is a proposed solution for this?

In principle, if all you need is a dynamic shape, you could use tracing together with a symbolic input shape. But for now our PyTorch frontend doesn't support dynamic input shapes. See How to setting model compiled from pytorch with mutable input size
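For concreteness, a symbolic input shape would look roughly like this (a sketch only; as said, the frontend does not accept it at this point):

```python
from tvm import relay

# A symbolic (unknown-at-compile-time) dimension via relay.Any();
# this is what a symbolic input shape would look like, but the
# PyTorch frontend currently rejects dynamic input shapes
shape_list = [("input0", (relay.Any(), 3, 224, 224))]
```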

The problem with scripting is that we end up with a lot of garbage like you saw, because all Python constructs appear as ops: things like aten::warn, aten::format, prim::RaiseException, etc.
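As a rough illustration (a made-up module; the exact op list depends on the PyTorch version), even a simple Python-level check survives scripting as graph ops:

```python
import torch
import torch.nn as nn

class Checked(nn.Module):
    def forward(self, x):
        # A plain Python if/raise is kept by torch.jit.script and shows up
        # in the graph as ops like aten::dim, prim::If, prim::RaiseException
        if x.dim() != 4:
            raise ValueError("expected a 4D tensor")
        return torch.relu(x)

scripted = torch.jit.script(Checked())
print(scripted.graph)
```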

Probably the best approach is this WIP PR: WIP/RFC: initial stab at TorchScript fallback by t-vi · Pull Request #7401 · apache/tvm · GitHub