As I’m working on this now finally, we discussed the implementation so far in the TVM Community Meeting this morning. Here are notes:
- @kparzysz: Should we consider conda for this? It locks down not only Python dependencies but also C++ dependencies.
- @tqchen originally preferred to have `requirements.txt` so users weren't forced to use conda. This is the reason why `gen_requirements.py` creates `requirements.txt`.
- @mousius mentioned that he's had teams move off conda because it's too heavy-handed and just wraps the Python dependency problem without adding much value
- @hogepodge notes that conda dependency solvers can take hours to update, which has contributed to an overall negative experience with conda for him
- @areusch mentioned that adding another packaging system in the loop could make it hard for dependencies to stay current
- Ioannis says that adding conda would add yet another dependency to TVM, and that poetry is built on pip.
- @kparzysz asks: if we're going to consider relaxing the `ci-constraints.txt` pins in the released package, shouldn't we consider testing against those relaxed constraints? won't there be a combinatorial explosion there?
- @areusch notes: we could consider using tox to test different combinations, but another question here is: we should probably unit-test the apache-tvm wheel before we release it, but right now we run the tests in-tree, so we might not have any expectation they'd pass against the wheel. The Python ecosystem does have an expectation that dependency versions float, so we should be judicious about what we place in `install_requires`, since that could constrain downstream users.
- @driazati notes: we aren't necessarily going to be able to test all combinations, but we can look at reported bugs and constrain as necessary. One concern about departing from the normal way of specifying dependencies in setup.py is that if someone does want to fix a weird versioning problem by constraining it in the release, they'll have to learn our bespoke process, and that's extra work on top of this proposal.
- @driazati notes: we could also produce purpose-built CI images, e.g. one for GPU unit tests, one for GPU integration tests, etc. This would help shrink them from the current ~20GB size, which is huge by Docker image standards.
- @areusch notes: it would be good to see how many images we might want, how much disk savings there would be, and whether we could build e.g. the integration-test image on top of the unit-test one.
- @kparzysz: re: the incompatibilities between tensorflow and paddlepaddle, could we avoid installing the two at once to avoid over-burdening users?
- @areusch: we do declare extras right now, but you can't install two incompatible extras at once. This does mean that if we got to a scenario where we couldn't work around the conflict with PEP 508 environment markers, we'd need to move to an architecture where e.g. an import is done by invoking a subprocess, so the incompatible frontends could live in separate virtualenvs.
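The tox idea mentioned above could look roughly like the following. This is only a sketch: the environment names, dependency pins, and paths are hypothetical, and testing the built wheel (rather than the in-tree sources) is the point being illustrated.

```ini
# tox.ini sketch: run unit tests against the built apache-tvm wheel
# under a few dependency combinations. Names and pins are hypothetical.
[tox]
envlist = py38-np{121,123}, py310-np123

[testenv]
# don't install the in-tree package; test the wheel from dist/ instead
skip_install = true
deps =
    pytest
    np121: numpy==1.21.*
    np123: numpy==1.23.*
commands =
    pip install --no-index --find-links dist apache-tvm
    pytest tests/python
```

Each factor combination (`py38-np121`, `py38-np123`, …) gets its own virtualenv, which keeps the combinations isolated but makes the combinatorial-explosion concern above concrete: every added factor multiplies the number of environments to run.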
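For reference, a minimal sketch of what the extras discussed above might look like. Everything here is hypothetical: the package names, version pins, and the particular environment marker are illustrative, not TVM's actual packaging metadata.

```python
# Sketch of declaring mutually-incompatible frontend dependencies as
# setuptools extras. All names and pins are illustrative, NOT TVM's
# actual metadata.

install_requires = [
    # keep hard pins out of install_requires so downstream users
    # aren't over-constrained
    "numpy",
]

extras_require = {
    # installed via e.g.: pip install apache-tvm[tensorflow]
    "tensorflow": ["tensorflow>=2.6,<3"],
    # a PEP 508 environment marker limits where a dependency applies
    # (here: a hypothetical Python-version restriction)
    "paddle": ["paddlepaddle>=2.3; python_version < '3.11'"],
}

# In setup.py these dicts would be passed to setuptools.setup(), e.g.:
# setup(name="apache-tvm", install_requires=install_requires,
#       extras_require=extras_require, ...)
```

Note that pip won't stop a user from requesting two extras whose requirements conflict; resolution simply fails or yields a broken environment, which is why the subprocess-per-virtualenv idea comes up once environment markers can't express the constraint.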