[RFC] tlcpack: Third-Party Binary Packages

So far we have only released source code in past TVM releases, and we will continue to do so.

As we continue to develop TVM, we also see demand for convenient binary packages, such as wheels or Docker images. One important factor for such binary packages is their potential linkage against third-party SDKs; a typical example is CUDA for NVIDIA GPUs. Because some of the resulting binaries are subject to additional terms (e.g. the CUDA EULA), they may not be compatible with the Apache license.

Notably, it is acceptable for users to use the binary package, as they already accepted the CUDA EULA when they installed the CUDA dependencies. This is one benefit of producing a source release: we can make sure that the source release is 100% ALv2 compatible from the licensing point of view.

The current ASF policy disallows binaries that are not Apache-compatible from using the project name. It is important for us to comply with this policy and protect the Apache brand. As a result, we will not use tvm to name the binary artifact.

The ASF does allow third-party releases to be created under a different name, e.g. “Foo, powered by Apache TVM”.

To better help the community while complying with the ASF policy, we (as a group of individual volunteers) decided to create tlcpack – a tensor learning compiler binary package powered by Apache TVM. tlcpack does not contain any additional source code add-ons; it is only a collection of binary builds produced from the TVM source by turning on different build configurations. Notably, the only difference is in the package naming. The idea is that users can do:

pip install tlcpack -f https://tlcpack.ai/wheels.html
python
>>> import tvm  # tvm will be available

There are 4 versions of tlcpack wheels: tlcpack (CPU only), tlcpack-cu100 (CUDA 10.0), tlcpack-cu101 (CUDA 10.1), and tlcpack-cu102 (CUDA 10.2). The supported Python versions are 3.6, 3.7, and 3.8. Currently, only Linux is supported; wheels for macOS and Windows will be released in the future. To help developers, we plan to update the wheels every month to keep up with the latest developments in TVM. We also plan to produce binary releases corresponding to the official source releases, starting from v0.7.
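
For example, installing the CUDA 10.2 variant follows the same pattern as above (a sketch; it assumes a matching CUDA 10.2 toolkit is already installed on the machine):

pip install tlcpack-cu102 -f https://tlcpack.ai/wheels.html
python -c "import tvm; print(tvm.__version__)"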

We also provide Docker images as a convenience for community developers who want to use them. Notably, the volunteers release tlcpack without wearing Apache hats. We include clear disclaimers that these binary releases are not official Apache releases. The name tlcpack was picked so that it is clearly distinguished from the official Apache source release.

Wearing ASF hats, we will continue to work together with the community to produce high-quality source releases that comply with the Apache release policy.

co-author @tqchen


Finally! Thank you for the hard work!!

Thanks!

The pip package will be of great help if one just wants to run some simple tests. The developer Docker images are also helpful; I have had trouble building TVM on an old CentOS system that only has gcc 4.8 support.

BTW, I’m curious about the decision to release the CUDA 10.0 wheel alongside 10.1 and 10.2 at the same time.

Thanks @haichen and @tqchen! This is really cool, and new users will certainly benefit from that.

I’m curious to understand why we are (I assume) self-hosting, rather than using PyPI. Is this due to ASF licensing rules as well?

Also, are the scripts and parameters you’re using to generate these packages being pushed somewhere, so that we can replicate them in test pipelines?

Not using PyPI was due to the PyPI file size limit (the CUDA binaries are quite big). We can move to PyPI once the file size limit request is approved. Scripts are available at https://github.com/tlc-pack/tlcpack


Understood. Thanks @tqchen!

The wheels are built on a newer version of CentOS. The CPU wheel is manylinux2010 compatible, and the CUDA wheels are manylinux2014 compatible. Releasing wheels for different CUDA versions is meant to accommodate different development and deployment environments.
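
For reference, the platform tag of a downloaded wheel can be double-checked with auditwheel (a sketch; the filename is one of the cu102 wheels mentioned later in this thread):

pip install auditwheel
auditwheel show tlcpack_cu102-0.7.dev1-cp36-cp36m-manylinux2014_x86_64.whl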


BTW, @haichen, shall we also release a runtime-only version of tlcpack? It would be helpful if people want to embed TVM-compiled operators into other frameworks directly from Python.

Thanks for your work.

I tested the wheels; however, I got a CUDA error with tlcpack_cu102-0.7.dev1-cp36-cp36m-manylinux2014_x86_64.whl:

 tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(TVMFuncCall+0x48) [0x7fe7e6f0dc48]
  [bt] (2) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::detail::PackFuncVoidAddr_<4, tvm::runtime::CUDAWrappedFunc>(tvm::runtime::CUDAWrappedFunc, std::vector<tvm::runtime::detail::ArgConvertCode, std::allocator<tvm::runtime::detail::ArgConvertCode> > const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x9e) [0x7fe7e6fae74e]
  [bt] (1) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(tvm::runtime::CUDAWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void**) const+0x755) [0x7fe7e6fae645]
  [bt] (0) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(+0x15f4505) [0x7fe7e6faa505]
  File "/workspace/tvm/src/runtime/cuda/cuda_module.cc", line 105
  File "/workspace/tvm/src/runtime/library_module.cc", line 78
CUDAError: Check failed: ret == 0 (-1 vs. 0) : cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_NOT_INITIALIZED

I tested this with the image: docker pull nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04

while if I build TVM from source, it works without any issue…

Does anyone know what could be the cause of this?

Update: I got the same error using the Dockerfile.package-cu102 provided.

The error can be reproduced using https://tvm.apache.org/docs/tutorials/get_started/relay_quick_start.html#sphx-glr-tutorials-get-started-relay-quick-start-py
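
For anyone trying to reproduce this, here is a condensed sketch of that tutorial, assuming the TVM 0.7-era Python API; the failing cuModuleLoadData call is triggered on the first run:

import numpy as np
import tvm
from tvm import relay
from tvm.relay import testing
from tvm.contrib import graph_runtime

# Build a small ResNet workload, as in the Relay quick-start tutorial
mod, params = testing.resnet.get_workload(
    num_layers=18, batch_size=1, image_shape=(3, 224, 224))
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda", params=params)

# Running on the GPU loads the compiled CUDA module;
# this is where CUDA_ERROR_NOT_INITIALIZED surfaced above
ctx = tvm.gpu(0)
module = graph_runtime.GraphModule(lib["default"](ctx))
module.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.run()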

Thanks for this initiative; it is a commendable effort towards reducing the burden of using the Apache TVM project.

Could you link to the Apache policy here, so other folks can read it and see what other guidelines need to be investigated? I couldn’t find it easily. It might also be worthwhile for the project to document these aspects as items to consider.

Would this also allow others to add their own binary packages? I can envisage linking against the Ethos-N compiler/driver library to give users easy access to the work done in Arm.

Regards, Ramana

Because the package is third-party (non-ASF), the main policy that needs to be referred to is the trademark policy, in particular if the product would like to be referred to as “Foo, powered by Apache TVM.” See http://www.apache.org/foundation/marks/faq/#poweredby

We can also move follow-up conversations specific to tlcpack to https://github.com/tlc-pack/tlcpack/issues

Hi @tqchen, is that discussion still ongoing? Is there any help needed to be able to host tlcpack on PyPI?

This could be the case. However, we figured out a mechanism that auto-updates the GitHub binary tag, and the matrix on https://tlcpack.ai/ so far gives quite clear instructions, so I think we are probably fine.

I’m working on getting Linux nightly builds for pip. To do this, I needed to upgrade to manylinux2014, as it is no longer possible to easily build LLVM with manylinux2010. Additionally, the nightly will only work for builds from main rather than the latest stable release. I think this is enough motivation to start working towards the 0.8 release, as many new features have landed.

The next steps are to write a workflow to build the Docker image and publish the resulting package.


@hogepodge thanks for doing the work on enabling nightly packages.

I have a question about the names of the packages: why do we need different names, i.e. tlcpack and tlcpack-nightly being published as separate packages?

cc @mjs @manupa-arm @tqchen


The main reason is that they correspond to different git tags: the nightly always points to the latest commit, while the non-nightly points to a fixed tag (it will likely switch to only stable versions after v0.8).

Hi @tqchen,

I’m not sure whether that requires different namespaces for the packages.

Why can’t we use something like the following:

  • Released versions, e.g. tlcpack-0.8, tlcpack-0.8.1, tlcpack-0.9

  • Pre-release versions instead of tlcpack_nightly, e.g. tlcpack-0.10.devXXX

The main thing is to make sure the user knows exactly what to expect. The versioning always follows the GitHub version now. For example, when you do:

pip install tlcpack-nightly -f https://tlcpack.ai/wheels

you know that it is going to give you the latest nightly developer build. Mixing the namespaces would require more specialized commands during installation. It also helps users who do not want a nightly build: they can choose the package without the nightly suffix and, as a result, never see the nightly builds.
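
For context, here is roughly what the two flows look like (the dev version scheme is illustrative). By default, pip skips PEP 440 pre-releases such as 0.10.devXXX, so a single shared namespace would force nightly users to pass an extra flag:

# single namespace: dev builds would need the --pre flag
pip install --pre tlcpack -f https://tlcpack.ai/wheels
# separate packages: the default command stays simple
pip install tlcpack-nightly -f https://tlcpack.ai/wheels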

The separation also makes management easier: since nightly and stable are managed separately, we have scripts to clean up stale nightly builds to save overall space.

@manupa-arm I think the pip PEP has some more on this: PEP 440 -- Version Identification and Dependency Specification (https://www.python.org/dev/peps/pep-0440/)

In particular, the note at the bottom means we should probably keep only a small number of nightlies in the release package. It might be possible to keep more on our tlcpack pip index, though.

Hi all, I wanted to touch base on this packaging/versioning topic.

I think it would be beneficial for our users to get the packages directly from PyPI rather than hosting them somewhere else, so that people can declare tlcpack as a dependency in their workflows.

Additionally, I think it would be great if we could keep a history of nightly packages so that users’ environments are reproducible. At the moment, if somebody notices an issue with a package, it is very hard to go and rebuild that package so that the issue can be reproduced.

With regard to the PyPI repository, I noticed we had an interaction with PyPA in the past: https://github.com/pypa/pypi-support/issues/594.

As we have ongoing discussions about increasing the frequency of releases (see Release Planning - Reviewing our Tracking Issues), I think it would be great, for the benefit of our users, to improve the tlcpack tooling so that we can streamline publishing on PyPI.

So, I think there are a few actions here:

  1. Re-engage with PyPA in order to get adequate quota to host the packages
  2. Create the necessary community-owned credentials so that we can publish packages on PyPI
  3. Improve the tooling on tlcpack so that it is able to publish packages on PyPI (see the sketch after this list). I can start this one, but will probably need help from others to fix issues and maintain it. I’ll raise the appropriate tickets on https://github.com/tlc-pack/tlcpack
  4. Amend the TVM release process so that we guarantee tlcpack gets some attention as well
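
As a starting point for item 3, here is a minimal sketch of the upload step using twine (the token variable and wheel path are illustrative; the wheels themselves would come from the existing tlcpack build scripts):

# build the wheels with the existing tlcpack scripts, then upload via a PyPI API token
python -m pip install twine
python -m twine upload --username __token__ --password "$PYPI_API_TOKEN" dist/tlcpack-*.whl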

cc @tqchen @haichen @areusch @Mousius @ramana-arm for comments.
