Hi @areusch, sorry I’m a bit late, but I was reading through the whole thread and, since we are talking about Python dependencies, I got curious about what we envisage as the experience for an end user installing a TVM package generated using the proposed mechanisms.
At the moment, if we generate a TVM package (purely based on the apache/tvm repo) and install it in a fresh virtualenv, we don’t really get something we can use to import a model, as none of the frontend dependencies will be in place.
It is also not very usual for users to be expected to consume “requirements.txt”-style files directly. What usually exists is a clean pip install <something> that installs all required dependencies.
Q1: Is it correct to assume the proposal is that the user will be expected to pip install tvm[importer-tensorflow] (I’m using “tvm” as the package name here, just as a placeholder), which will install TVM with all required dependencies for the TensorFlow frontend, but if the same user then needs to use ONNX or PyTorch, it will be broken until the user manually installs those dependencies?
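To make Q1 concrete, here is a minimal sketch of how such per-frontend optional dependencies could be declared with setuptools “extras”. The extra names and version pins below are placeholders of my own, not what the proposal actually specifies:

```python
# Hypothetical extras_require mapping for a setup.py / setup.cfg.
# Extra names and version bounds are illustrative placeholders only.
extras = {
    "importer-tensorflow": ["tensorflow>=2.1"],
    "importer-onnx": ["onnx>=1.6"],
    "importer-pytorch": ["torch>=1.4"],
}

# Passing this as setup(..., extras_require=extras) would allow:
#   pip install tvm[importer-tensorflow]
# but dependencies for extras the user did not request (e.g. ONNX)
# would still be absent from the environment.
```

With this shape, each frontend’s dependencies are opt-in, which is exactly why switching frontends later would require another explicit install.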
Q2: I know a lot of thinking and effort was put into designing the current mechanism, but I’m curious why we don’t just declare all the dependent frameworks and versions and let the package install everything, rather than maintaining an elaborate mechanism of dependency processing and generated files? It seems to me that would make the experience much more aligned with the rest of the ecosystem.