[RFC] Consolidating TVM Python Dependencies

See also: strawman consolidated dependency list

  • NOTE: the strawman was made a few weeks ago and may be out of date. It will be manually rebuilt before merging.

Author’s note: dependency management tools, like text editors, are often the subject of holy wars. This RFC seeks only to improve our dependency management in TVM—we can consider using any dependency management tool that fits our requirements. For the purposes of maintaining a constructive conversation here, let’s focus debates between dependency management tools around their impacts on the TVM project as a whole, not any one developer’s individual workflow.

Background

TVM has historically attempted to avoid over-specifying Python package requirements in order to be as lightweight and flexible as its applications allow. However, this practice has led to a scattering of Python dependencies around the codebase, such that it’s now quite difficult for the average TVM developer to create a virtualenv for local development that comes close to matching the one used in the CI regression.

This RFC proposes that we move all Python package requirements into a single location, then source or otherwise generate files from it where needed. Further, it proposes that we check in the output of pip freeze from each CI container, so that it’s easy to look up the actual package versions TVM tests against.

Challenges

C0: Requirement Groups

TVM’s Python code is organized into parts:

  • A core portion, required to use TVM at all
  • A set of Relay importers, which depend on a variety of third-party Python packages necessary to parse formats foreign to TVM
  • A set of optional components, such as microTVM, which may depend on third-party libraries specific to one use case of TVM

TVM’s Python package requirements depend on which parts of TVM you want to use. Further, TVM developers have an additional set of requirements (pylint, pytest, etc) that should not be included as dependencies of any built TVM package.

C1: Varying Constraints on Dependency Versions

Python dependencies can also be separately sorted into three categories, which bear no relation to the groups from C0:

  • Loose dependencies — where any non-ancient version will likely do
  • Range dependencies — where some version preference (e.g. in major/minor version) exists, but generally any package in that range will do. Sometimes users may want to purposefully install a package outside the range, e.g. to break one feature of TVM but enable another for a particular model. Dependencies that follow semantic versioning are likely candidates for this group.
  • Exact dependencies — where using any other version than that used in CI is unworkable

A survey of TVM’s present dependencies is included in the strawman pyproject.toml from the PoC PR.

C2: Python Package Requirements

The de facto standard Python package manager, pip, resolves and installs package dependencies by default (i.e. unless --no-deps is given). However, pip install produces a set of installed packages that is neither deterministic (even given a frozen index) nor consistent. Specifically, if a user executes pip install a b, they may see a different set of installed packages than if they execute pip install b a.

Tools such as pipenv and poetry have been written to work around this.

C3: Updating CI Containers

Generally speaking, TVM’s policy is to avoid restricting dependency versions when possible. This allows the TVM CI to remain up-to-date with respect to its dependencies as the CI is updated. However, users of TVM would ideally like to install TVM alongside the same set of Python dependencies used in the CI regression—this gives a predictable user experience.

Currently, Python packages are installed into each container using a series of pip install commands, and CI container updates are made individually (i.e. ci-cpu is updated independently of ci-gpu). This means it’s entirely possible, and indeed expected, that different containers test TVM against different, unpredictable versions of its dependencies.

Topics for this RFC

In general, our Python dependencies are scattered around the codebase and I’d like to argue that any solution going forward should at least centralize these. The topics I’d like to debate with this RFC are:

T0. In what format should we store dependencies?

T1. Which dependencies should be listed in setup.py?

T2. What changes, if any, do we need to make to the CI process in order to produce a tested list of Python dependencies?

T3. What pathway should we provide to the user to install the dependencies they need to use TVM?

T4. What pathway should we provide to the developer to install dependencies for developing TVM?

Approaches

A0. De-centralized dependency tracking (current approach)

Currently, dependencies are tracked in a decentralized fashion:

  • setup.py reports the bare minimum dependencies required to use TVM. Some extras are provided, but there is no coordination between the versions specified in setup.py and the versions used in CI test containers (i.e. pip install -e python is not executed in the CI test container).
  • CI containers are built with Python package version restrictions specified in the install script. Where versions are not restricted, no checking is performed to ensure that package versions are compatible (i.e. only pip install is used, not pipenv, poetry, or another tool that checks the integrity of the version graph).
  • To run developer tools such as pylint, the suggested approach is to run the ci-lint container locally with docker.
  • To build docs, developers install dependencies specified in docs/README.md. Developers can ensure they have the correct dependencies by comparing against pip freeze from ci-gpu (NOTE, however, that ci-gpu requires a GPU instance to run locally).

There are some benefits to this approach:

  1. It is simple to perform each step of the process separately
  2. Tests for the most part run against recent Python deps as containers are updated regularly, and non-pinned Python packages update automatically with each container rebuild.
  3. Since containers run slightly different versions of Python packages, some diversity is present in the set of Python packages TVM is tested against.

However, there are drawbacks:

  1. It’s very difficult for a developer to tell exactly which versions of dependencies TVM is tested against, short of pulling a multi-gigabyte docker image and running pip freeze.
  2. When building documentation, it’s actually impossible to deduce this unless the developer happens to have a machine that can run nvidia-docker. The ci-gpu container, which is used to build docs, can’t be started without binding to a GPU.
  3. Although there is diversity in the dependencies tested, we have no control over this and limited visibility into it.
  4. End users installing TVM (i.e. from pip install tlcpack or from pip install -e ./python) can’t expect it to depend on the specific tested versions of Python packages. While loose dependency pinning is standard practice in the Python community, having the ability to pin to a known-good configuration can be helpful. Further, there isn’t even a “simple command” a user could run—they need to download ci-cpu and ci-gpu and cherry-pick package versions from pip freeze.
  5. There is a tool to run containers for local developer use, but it doesn’t work well with git-subtree and requires developers to look up the relevant container versions in Jenkinsfile. It’s unwieldy. When using git-subtree, the only way to run the linter locally is to check out your development branch in the original git repo.

A1. Centralized Management with a set of requirements.txt

Create a set of requirements.txt files that contain the reference versions of TVM packages. More than 1 requirements.txt file is necessary because the set of dependencies needed to use TVM varies with your use case, and we wish to maintain flexibility. For instance, to use the pytorch importer, torch is needed; but we don’t wish to require users to install that for basic TVM usage or when using TVM purely for running inference. Therefore, a new file requirements-torch.txt would be generated for this case, and would correspond to a torch extras_require entry in setup.py.

Additionally, a requirements-dev.txt would be created to capture developer requirements such as pylint and the docs-building requirements.

When building CI containers, care needs to be taken to install Python packages only from the requirements.txt files in the repo. Either all Python packages need to be installed in one pip install command, or a shell script helper should be written to verify that the packages requested are present in requirements.txt. When installation is finished, docker/build.sh should run pip freeze and write the output to docker/python-versions/v0.62-ci-cpu-constraints.txt.

Finally, setup.py must read the requirements.txt files and fill install_requires and extras_require from those files.
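
To make the intent concrete, here is a minimal sketch (not part of the proposal; the helper name and file layout are assumptions) of how setup.py could source install_requires and extras_require from such files:

```python
# Hypothetical sketch: populate install_requires/extras_require from the
# requirements files described above. The file layout is an assumption.
import os

def read_requirements(path):
    """Return the non-empty, non-comment lines of a requirements file."""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.strip().startswith("#")]

install_requires = read_requirements("requirements.txt")

extras_require = {}
for fname in os.listdir("."):
    if fname.startswith("requirements-") and fname.endswith(".txt"):
        extra = fname[len("requirements-"):-len(".txt")]
        if extra != "dev":  # developer-only deps stay out of the built package
            extras_require[extra] = read_requirements(fname)

# setup(..., install_requires=install_requires, extras_require=extras_require)
```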

Pros:

  • It is easy to determine the set of TVM dependencies and the actual package versions used in test.
  • requirements.txt is a universally-consumable format so no additional tooling is imposed on TVM developers or users.
  • setup.py will agree with requirements.txt as pip wheels are built.

Cons:

  • CI containers may continue to diverge from one another in terms of dependency management.
  • The set of installed Python packages could still differ depending on the order of pip install -r requirements.txt and e.g. pip install -r requirements-torch.txt. Developers may not remember the order in which these commands were invoked or the full history of their local virtualenv, so bug reports could arise from dependency problems that are hard to document and reproduce.
  • The set of installed Python packages could still not be consistent—a Python package mentioned later in a requirements.txt may install a dependency incompatible with a previously-mentioned Python package.
  • Developer usage is still somewhat tricky — multiple pip install -r commands are needed
  • When syncing, developers need to remember to rebuild their virtualenv

A2. Consistent centralized dependencies with a tool such as poetry or pipenv

This approach is similar to A1, but instead of creating a set of requirements.txt files, a more advanced dependency management tool such as poetry or pipenv is used. These tools tend to favor a centralized file—for instance, poetry stores dependencies in pyproject.toml at the root of the TVM repo. The set of requirements.txt files could still be auto-generated for developers who prefer that approach, and a unit test could verify they are in sync with the authoritative pyproject.toml.
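
As an illustration of the sync check mentioned above, a pytest-style test could look roughly like the following sketch (paths, the use of tomllib, and poetry’s [tool.poetry.dependencies] layout are assumptions here, not a settled design):

```python
# Rough sketch of a consistency test between pyproject.toml and an
# auto-generated requirements.txt. Paths and section names are assumptions.
import re
import tomllib  # Python 3.11+; older interpreters could use the "toml" package

def test_requirements_in_sync():
    with open("pyproject.toml", "rb") as f:
        poetry_deps = tomllib.load(f)["tool"]["poetry"]["dependencies"]
    with open("requirements.txt") as f:
        req_names = {re.split(r"[<>=!~\[ ]", line.strip(), maxsplit=1)[0].lower()
                     for line in f if line.strip() and not line.startswith("#")}
    wanted = {name.lower() for name in poetry_deps if name.lower() != "python"}
    missing = wanted - req_names
    assert not missing, f"requirements.txt is missing: {sorted(missing)}"
```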

Pros:

  • It is easy to determine the set of TVM dependencies and the actual package versions used in test.
  • Local developer virtualenv management is automated
  • The set of installed packages is always consistent
  • Version specification is a little bit better in poetry with operators dedicated to semantic versioning (i.e. ^0.4 means anything >=0.4.0 and <0.5)
  • setup.py will agree with pyproject.toml as pip wheels are built.

Cons:

  • Additional tooling is needed for the optimal developer experience
  • Holy wars abound with respect to developer tooling, though this could be mitigated by tools such as dephell.
  • setup.py would need to parse pyproject.toml and so could be more complex. It would also need to map semantic versions to pip-compatible versions (translating ^0.4 to the pip constraints >=0.4, <0.5 is straightforward, but setup.py may wish to loosen the version constraints on some dependencies; a sketch of the translation follows this list).
  • Nothing in this approach fixes the problem of building CI containers with different dependency sets; however, it does provide a way forward here (see Consistent CI containers below).
  • The container dependency snapshot needs to be manually checked-in under docker/python-versions
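
As referenced in the cons above, the semantic-version translation could be as small as the following rough sketch (the helper name and exact rounding behavior are my own assumptions, not part of the proposal):

```python
# Hypothetical helper for a pyproject.toml-driven setup.py: translate a
# poetry-style caret constraint into a pip/setuptools version range.
def caret_to_pip(constraint: str) -> str:
    parts = [int(p) for p in constraint.lstrip("^").split(".")]
    # The upper bound bumps the leftmost non-zero component (semver caret rule);
    # an all-zero prefix like ^0.0 bumps the last listed component instead.
    upper = list(parts)
    for i, p in enumerate(parts):
        if p != 0 or i == len(parts) - 1:
            upper = parts[:i] + [p + 1]
            break
    lower = ".".join(str(p) for p in parts)
    return ">=" + lower + ",<" + ".".join(str(p) for p in upper)

assert caret_to_pip("^0.4") == ">=0.4,<0.5"
assert caret_to_pip("^1.2.3") == ">=1.2.3,<2"
```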

Consistent CI Containers

The topic of ensuring CI containers run on the same Python package versions is for another RFC. However, approach A2 enables a fairly straightforward, if somewhat more involved, flow which I’ll sketch here:

  1. When it is time to update the CI containers due to a change in a Python package version, a script launches the base container (i.e. ubuntu:18.04), installs poetry, and runs poetry lock. The output is written to docker/python-versions/poetry.lock-v1.01.
  2. When a new container is built, the corresponding poetry.lock-v1.01 file is copied from docker/python-versions to the root of the repository. All pip install commands are replaced with poetry install, and no further change is needed because the poetry.lock file specifies the exact version to install plus any dependencies needed (and their exact versions).
  3. When one CI container is updated, all of them are updated, and all containers with the same version number share the same poetry.lock file. This is why I’ve bumped the container major version to 1 in this example.
  4. The new set of containers is tested against a PR that submits the new poetry.lock file and bumps the global container version number in Jenkinsfile. Additionally, a docker/python-versions/poetry.lock-latest file could be included to view diffs against the previous lock-file in code review.

A more thorough testing flow should be specified at the time this is baked into an RFC. An additional challenge that would need to be addressed in such an RFC is support for executing Python code on non-Linux OSes (currently the CI does not do this, but we should not add any impediments to doing so).

Discussion

Here are some points for discussion. There are probably things I haven’t considered with this RFC; let’s discuss those as well.

C0. Which approach should we take to this problem?

C1. Do you care if TVM adds an extra tool such as poetry as the preferred way to manage your local development virtualenv? Do you suggest a different tool (please do so based on the merits of such a tool, not simply that it’s the one you use)?

C2. How important is it to standardize the CI container build process? Should we further consider a standardized CI container build pipeline?

C3. Is loose dependency specification in the setup.py the right thing to aim for? At what level should we specify dependency versions there?

I am a fan of approach A2. It seems like the python community is moving towards using poetry, and the poetry format is a lot nicer than requirements.txt for specifying dependencies. If we autogenerate requirements.txt, then everyone can use their preferred development tools.

Can poetry build the C++ part of TVM? If so, that might be a good way to make it easier for new people to start hacking on TVM.

Thanks for the proposal. Some general thoughts that I think we could likely agree on: e.g. having a centralized location for the list of dependencies (pyproject.toml or requirements.txt). It would be useful to think about the different use case scenarios and how users use the package.

As of now we can see a few different use scenarios:

  • S0: normal usage case, mixing tvm with other packages. Normally in these cases loose dependencies are preferred, as we do not want to dictate which version of numpy users install, and tvm is supposed to work well with most versions of them. See PyTorch’s requirements as an example: https://github.com/pytorch/pytorch/blob/master/requirements.txt
  • S1: single application, where tvm is the sole application running, and there might be a desire to precisely pin the version range that is well tested; poetry seems to be a useful tool for this type of development flow.
  • S2: Development consistency: in CI building, we usually hope to pin the exact versions of dependencies. Notably, we do not have to do so for all of them, but the useful ones include pylint and the frontend importer versions.

The first primary goal for us is to serve the case of S0. In this case, there are two major ways people install packages:

  • pip: manual installation or venv flow.
  • conda: very popular among ml scientists.

The first thing for us to consider is having a natural first-class flow for conda and pip, which means a requirements.txt and a conda recipe that encode most of the requirements loosely. Notably, because poetry naturally favors range dependencies like ^1.2 (for S1) while S0 calls for looser constraints, we might want to consider having a first-class requirements.txt in the first place.

In the meantime, it would be useful to think about whether or not we want to strictly adopt a lock file to pin down the CI dependencies. My personal take is that it is too complicated. While a lock file gives exact reproducibility (within the Python land), it is not necessarily the platform of choice (on Windows, conda is usually easier), nor does it pin down all other dependencies (e.g. LLVM versions).

For most of the development we do not have to be that precise, and usually a centralized ci_constraint.txt file that specifies the exact versions of the CI packages via == constraints might be sufficient for most of the cases. We also do not want to pin down the versions that we think should be loose (e.g. in the case of numpy), so developers who are not developing for a sole-app purpose can still make use of the constraint.

In summary, having a centralized location to track dependencies is a good thing (A1 or A2). We want to provide such information to developers who need it. A1 might be the first step that most dev environments (conda, pip) can agree on and use, and it is also the current status quo (e.g. pytorch). It would be useful to discuss whether we need the extra complexity of a lock file for CI; my personal take is that it might not be necessary and we can get away with a constraint.txt with all the == dependencies for our frontends (tensorflow, pytorch) and linters.

@tqchen thanks for your reply. I agree we should not overly constrain dependencies. My thinking is that, purely considering the non-dev dependencies (i.e. those we might surface to pip/conda), there are two types of dependencies:

  • direct dependencies
  • indirect dependencies (i.e. dependencies of dependencies)

My opinion is that direct dependencies should go in the TVM analogue of the pytorch requirements.txt and should also be included in setup.py install_requires. Indirect dependencies shouldn’t be given to pip/conda at all. I do sort of think that specifying at least the major version (e.g. tensorflow ^2) for each of these is not unreasonable.

Given that it’s always possible with pip to install a later version of tensorflow with a subsequent pip install command, I’m even leaning towards specifying the minor version there too. I’d love to hear opinions from others; my thought is that pip install tvm, run all on its own in a fresh virtualenv, should produce a working installation, and if you need to install a different version of tensorflow because you’re doing something technically unsupported, pip allows you to do that.

Also, regardless of whether you are a TVM developer or user, it should be possible to install exactly those versions the CI used, for stable local use. This should probably not be the default flow, so they shouldn’t be placed in setup.py’s install_requires. But users should be able to download this ci_constraint.txt file, and it should be obvious which one to download if you want to e.g. pip install tvm==0.8.0.

In the particular case of tensorflow/pytorch, they should be part of extras_require, and it indeed makes sense to specify a range for some of these dependencies.

Great to see this tricky topic is being tackled!

As the text above states, “the de facto standard Python package manager is pip”. Despite its limitations, at this current point in time that fact is unavoidable. Users expect “pip install foo” to reliably install a working instance of foo. They also expect to be able to co-install foo with other packages, which makes packages with overly restrictive version constraints unpopular; version constraints should be sufficient to deliver a viable tool install, but no more.

It seems reasonable that, as a project, we might if necessary impose the use of a specific package manager on developers, but attempting to impose a “not yet mainstream” package manager on users will limit the reach of the project.

As a slight aside on “package managers”: we’ve used pipenv in a number of internal projects. Despite the fact that many of us really like the UI and concept of pipenv, we’ve recently made the decision to back out of using it due to the widely discussed problems it has with the time taken to resolve dependencies.

@mjs thanks for your reply! I agree we should avoid overly restrictive version constraints in our released packages. My thoughts are that specifying a range for semver packages, and especially for complex dependencies with significant API exposure to tvm (i.e. most frontend packages: tensorflow, torch, etc.), would be a good idea in the spirit of producing a working install. I’m happy to back off of that approach given evidence to the contrary.

I’ve had a similar experience with pipenv to yours in the past. I’ve liked poetry better so far.

My thoughts are that auto-generating e.g. requirements.txt, requirements-frontend-torch.txt, etc. from a pyproject.toml or equivalent config would alleviate any problems with developers disliking a not-yet-standard tool such as poetry.

I think it might be helpful to talk about some of the expected end results: e.g. what the requirements.txt should look like, how we can advertise them, and how we use them in the CI. Whether we auto-generate one from another, or perform consistency checking, is more of a mechanical matter.

Because pip is still the status quo and should be our first-class citizen, here is a strawman:

  • B0: Follow common practices (like pytorch) to specify loose dependencies
  • B1: When talking about installation and the dev env, list pip, conda, and poetry without encouraging one over another; we might want to put pip in the first place.

From B0, we might want the following files:

  • requirements.txt: common deps
    • Most should have no constraint at all
    • We can have a minimum version constraint if necessary (e.g. xgboost>=1.2.0).
  • requirements-extra.txt: extra requirements (we can have multiple of these; to simplify, we could have just one)
    • Same as above, we can have a minimum version constraint. We should discuss whether or not a maximum version constraint should be placed here. Per the Python ecosystem, the better practice for now seems to be to only add a maximum version constraint when there is a regression, and to strive to remove it as soon as the regression is fixed.

Additionally, we can have the following constraint file

  • ci-constraint.txt: pins the exact versions that are needed in the CI, including dependencies of dependencies if a regression occurs
    • We should only pin things that need to be pinned, as per the current docker installation.

We will use the txt files in docker to generate the images, to simplify the flow and make sure pip is the first-class citizen. While it is true that a lock file might be more precise, we might still prefer the simplified flow per the above discussion (a lock does not cover everything, and an == constraint might be sufficient for most cases).

Hi. The semver proposal makes sense to me. There is arguably a case that maximum major version constraints should be provided, since the semver definition of a major version change specifically implies a breaking change of some form.

W.r.t. the discussion around requirements.txt, common convention and best practice encourage the following:

  • install_requires: provides abstract, minimal requirements for a specific package
  • requirements.txt: provides exhaustive pinned versions for a complete environment

There is a general overview here: https://packaging.python.org/discussions/install-requires-vs-requirements/

The end user experience is defined by how we manage install_requires, rather than what we provide in a requirements.txt. While a requirements.txt is a viable way to share fixed environments within the TVM project’s own developer community, we should focus on the install_requires mechanism for tvm end users, because that is the one they will use by default.

@tqchen @mjs thanks for your replies

There is also a new pip dependency resolver that is about to become standard. This should alleviate a number of the reproducibility problems, but I’m not sure if there is a replacement flow for generating e.g. a CI lockfile. Assuming everyone upgrades to pip 20.3, the main concern we need to address in the long run is how we manage the list of dependencies in the repo. In practice, I don’t think this upgrade will be standard for Linux users until the major distributions pick it up.

Regarding install_requires vs requirements.txt: this makes sense to me. I think that means we could use the following rules to decide when to promote a version-constrained requirement from requirements.txt to install_requires (NOTE: all packages listed in requirements.txt should be included in install_requires, just possibly without a version):

  1. Semver dependencies should be placed into install_requires as foo >= X.Y, foo < (X+1). We should only specify semver dependencies to the minor version level (and even then, only if absolutely necessary).
  2. “At least” dependencies (of the form foo >= 2.1) should be placed into install_requires. We should scrutinize these to see if they are really semver; we should also consider additionally placing a foo < (X+1) constraint. It’s possible that “at least” dependencies may not explicitly state they follow semver rules, so it wouldn’t be obvious that rule #1 applies, but nevertheless, given a clear version pattern, restricting to below the next major version may be prudent.
  3. Precise version pins in requirements.txt should be placed into install_requires, but we should essentially never do this except in extreme cases.
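
As an illustration only (the helper name, the "follows semver" flag, and the use of the packaging library are my own assumptions, not part of the proposal), rules 1-3 could be mechanized roughly like this:

```python
# Hypothetical sketch of applying the promotion rules above to a single
# requirements.txt entry. Whether a package "follows semver" would have to be
# curated by hand; that flag is an assumption here.
from packaging.requirements import Requirement

def to_install_requires(line: str, follows_semver: bool) -> str:
    req = Requirement(line)
    specs = list(req.specifier)
    # Rule 1/2: for a semver package with a single ">=" lower bound, cap it
    # below the next major version.
    if follows_semver and len(specs) == 1 and specs[0].operator == ">=":
        major = int(specs[0].version.split(".")[0])
        return f"{req.name}>={specs[0].version},<{major + 1}"
    # Rule 3 (and unconstrained entries): pass through unchanged.
    return str(req)

print(to_install_requires("xgboost>=1.2.0", follows_semver=True))   # xgboost>=1.2.0,<2
print(to_install_requires("numpy", follows_semver=False))           # numpy
```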

Now, with respect to the file layout:

  • I like @tqchen’s proposal for requirements.txt and requirements-extra.txt. These can live in the python/ subdirectory of the tvm repo. Potentially, setup.py could just apply some set of rules (i.e. the ones above) to generate install_requires from these files without being overly specific.
  • For the CI: it’s not clear to me that we need to pin beyond what’s done in requirements.txt. A constraints file makes sense if we do need that. The main case I can think of is when a package introduces an accidentally-breaking revision. If we are cutting a release and the CI needs constraints above requirements.txt, perhaps we should consider promoting those constraints to install_requires. Finally, we do need a way for developers to get a list of which versions ended up being used in the CI (because right now, if you don’t have a GPU, you can’t produce this list). We don’t need to discuss that here, though.

Finally, I’d like to think through some common dependency management cases:

C1. Someone adds a new core dependency

  1. Edit requirements.txt and insert the new dependency in alphabetical order.
  2. Ensure no other requirements-extra.txt specifies this dependency.
  3. Run a tool to validate the requirements.txt (setup.py?).
  4. Update the CI containers and submit a Jenkinsfile change.
  5. Submit the requirements.txt PR along with the new Python code that uses the new dependencies.

C2. Someone adds a new extras dependency

  • Same as core, but swap requirements.txt and requirements-extra.txt.
  • The test path isn’t as clear in the CI, but this is a separate problem.

C3. A pinned or semver package’s version needs to be updated.

  1. Edit requirements.txt to update the version pin.
  2. Test locally.
  3. Rebuild the CI containers with the new version.
  4. Test with the new CI containers (how: TBD).
  5. Update the CI containers and submit a Jenkinsfile change.
  6. Submit the requirements.txt PR along with the new Python code that uses the new dependencies.

I think these all make sense to me, though we should consider how to improve the “update the CI containers” step in a separate RFC; specifically:

  • ensuring all containers use the same version dependencies
  • documenting the actual versions used

Thanks @areusch. I still think having a manually specified ci_constraint.txt is easier than having a resolver pin the versions. This is the way we pin dependencies right now; developers have clear expectations about what will happen (regardless of the dep resolver’s behavior), and we do not need to involve a lock file that might complicate the docker build.

@tqchen I’m more concerned that there are at least 112 dependencies to specify:

$ docker/bash.sh -i tlcpack/ci-gpu:v0.70 pip freeze | wc -l
118

probably more when you include the lint deps.

The lock file will contain all the dependencies that are resolved, e.g. numpy and dependencies of the dependencies, which results in the 118 deps you mention.

If we are going to specify the ci_constraint.txt, I would imagine we only specify the necessary ones, e.g. tensorflow==2.3.1 keras==2.4.3, without specifying the numpy version. While this is indeed less precise than the lock file, so far the pinning approach seems to work well for our case, and the ci_constraint.txt will provide more useful information to the users who need it.

I think we need to export the output of pip freeze from the CI containers to help those who are trying to exactly reproduce the CI. I agree with you that we should try to constrain as few packages as possible, so ci_constraint.txt could be small. But for the purposes of exactly reproducing the CI, we need to provide an exact snapshot of the venv. A challenging part of doing this is that, unless every constraint is pre-computed by e.g. running pip install with all possible packages, ci-lint and ci-gpu install different subsets of packages with an ordering that is not easy to determine (you have to read 10 different shell scripts and determine which packages were affected in each), and thus may choose different indirect dependencies. This means it may not actually be possible to export such a pip freeze from the CI, because some indirect dependency may be at version A for lint and version B for gpu.

The main question is whether or not exact pinpointing is critical to reproduce the CI. Our previous experience seems to suggest that pinning the few constraints is sufficient; if not, we should add things to the constraint.txt.

Notably, there are also additional environments that are not covered by the pip lock, e.g. LLVM and CUDA dependencies. When we get to the level of exact reproduction, perhaps the easiest way is still to fall back to using the exact docker binary tag itself.

It’s not really a question that has a general answer. Sometimes pinpointing doesn’t matter and other times it does. As an extreme case, pylint output cannot be reproduced without precise pinpointing of both the Python packages and the interpreter itself. That’s why it can be hard to lint locally now (docker/lint.sh doesn’t work with git-subtree). Most times, the interpreter version doesn’t matter so much.

We should also consider the case of using TVM in conjunction with a project repo (i.e. not just using TVM in its source tree). In this case, suppose additional Python packages need to be installed, some of which are large. You may prefer not to pay the cost of starting with ci-gpu, itself a 17.4GB image. Instead, you may want to pick and choose the dependencies, and point pip at a constraints file that lists the exact versions of packages that were installed.

Anyway, I think this discussion is more about the CI flow, which should go in another RFC. I think we both agree that a ci_constraint.txt is a reasonable way to add constraints used to build CI docker images above those specified in requirements.txt.

I agree, at a high level having a constraint.txt would be sufficient in most cases. In the case of pylint, while it is true there might be some Python-version-dependent errors, in most cases they are not inconsistent with each other (with the correct pylint version), and the error messages produced by pylint are still helpful for doing a quick lint fix.

Thanks for all the discussion! Here are the next steps I can see:

  1. Create a python/requirements.txt for the core direct dependencies, specifying version constraints as appropriate for use in the CI. Also create e.g. python/requirements-tflite.txt files (one per extra feature of TVM) in the same spirit.
  2. Modify python/setup.py to use those requirements.txt files to derive install_requires.
  3. Create python/requirements-dev.txt, which will not be read by setup.py but which should be used to install dev dependencies (i.e. lint, pytest, sphinx).
  4. Modify the CI docker build scripts as follows:
    1. Update to pip 20.3 to use the new dependency resolver.
    2. Delete all pip install foo commands and replace them with a single pip install command (see the next bullet point).
    3. Create python/ci-constraints.txt, which captures additional, more restrictive constraints for use in the CI that we would not ordinarily place in the requirements files.
    4. Modify the CI install scripts in docker/install/*.sh, deleting all pip install commands and replacing them with pip install -e ./python -c python/ci-constraints-concatenated.txt (and potentially ./python[tflite] when extras are needed). ci-constraints-concatenated.txt is synthetically built by a shell script function concatenating these files (we may need to de-dupe python packages, so this may become a python script; a sketch of such a script follows this list):
      • python/requirements.txt
      • python/requirements-$extra.txt
      • python/ci-constraints.txt
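
A possible shape for the concatenation helper mentioned in step 4.4 (purely a sketch; the merge policy and file names are assumptions, and a real version would probably want stricter conflict checking):

```python
# Hypothetical merge script: concatenate several requirements/constraints
# files into ci-constraints-concatenated.txt, de-duping by package name and
# letting later files (e.g. ci-constraints.txt) override earlier ones.
import re
import sys

def merge_requirements(input_paths, output_path):
    merged = {}
    for path in input_paths:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                name = re.split(r"[<>=!~\[ ]", line, maxsplit=1)[0].lower()
                merged[name] = line  # last writer wins
    with open(output_path, "w") as f:
        f.write("\n".join(merged.values()) + "\n")

if __name__ == "__main__":
    # e.g.: merge_requirements.py requirements.txt requirements-tflite.txt \
    #         ci-constraints.txt ci-constraints-concatenated.txt
    merge_requirements(sys.argv[1:-1], sys.argv[-1])
```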

In a follow-on RFC, we’ll consider the challenge of ensuring that this constraints file is uniform across all CI containers (i.e. also the question of whether this is necessary). We’ll also consider challenges in keeping the requirements up-to-date, which may or may not be easier with a tool such as poetry. I think that, given the new pip dependency resolver, motivation for the use of poetry or some additional tool should come after considering that.

Looks good to me. Given the collection of requirements, does it make sense to create a requirements folder?