[RFC] tlcpack: Thirdparty Binary Packages

So far we have only released source code in past TVM releases, and we will continue to do so.

As we continue to develop TVM, we also see demand for convenient binary packages, such as wheels or docker images. One important factor in such binary packages is the potential link with third-party SDKs; a typical example is CUDA for Nvidia GPUs. Because some resulting binaries are subject to additional terms (e.g. the CUDA EULA), the resulting binaries may not be Apache-license compatible.

Notably, it is still acceptable for users to use such a binary package, as they already accepted the CUDA EULA when they installed the CUDA dependencies. This is one benefit of producing a source release: we can make sure that the source release is 100% ALv2-compatible from the licensing point of view.

The current ASF policy disallows non-Apache-compatible binaries from using the project name. It is important for us to comply and protect the Apache brand. As a result, we will not use the name tvm for the binary artifact.

ASF does allow third-party releases to be created under a different name, e.g. "Foo, powered by Apache TVM."

To better help the community while complying with the ASF policy, we (as a group of individual volunteers) decided to create tlcpack – a tensor learning compiler binary package powered by Apache TVM. tlcpack does not contain any additional source code add-ons; it is only a collection of binary builds built from the tvm source by turning on different build configurations. Notably, the only difference is in the package naming. The idea is that users can do:

pip install tlcpack -f https://tlcpack.ai/wheels.html
>>> import tvm  # tvm will be available.

There are 4 versions of tlcpack wheels: tlcpack (CPU only), tlcpack-cu100 (CUDA 10.0), tlcpack-cu101 (CUDA 10.1), and tlcpack-cu102 (CUDA 10.2). The supported Python versions are 3.6, 3.7, and 3.8. Currently, only the Linux platform is supported; wheels for macOS and Windows will be released in the future. To help developers, we plan to update the wheels every month to keep up with the latest developments in TVM. We also plan to produce binary releases corresponding to the official source releases starting from v0.7.
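The variant-naming convention above can be sketched as a small helper. This is purely illustrative (there is no such function in tlcpack; in practice the user simply picks the right package name for pip install):

```python
def tlcpack_package(cuda_version=None):
    """Return the tlcpack pip package name for a given CUDA version.

    Hypothetical helper for illustration only; it just encodes the
    naming scheme tlcpack / tlcpack-cuNNN described in the RFC.
    """
    supported = {
        "10.0": "tlcpack-cu100",
        "10.1": "tlcpack-cu101",
        "10.2": "tlcpack-cu102",
    }
    if cuda_version is None:
        return "tlcpack"  # CPU-only wheel
    if cuda_version not in supported:
        raise ValueError(f"no tlcpack wheel for CUDA {cuda_version}")
    return supported[cuda_version]


print(tlcpack_package())        # tlcpack
print(tlcpack_package("10.2"))  # tlcpack-cu102
```

The chosen package name is then passed to `pip install <name> -f https://tlcpack.ai/wheels.html` as shown above.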

We also provide docker images as a convenience for community developers who want to use them. Notably, the volunteers are releasing tlcpack without wearing Apache hats. We have clear disclaimers that these binary releases are not official Apache releases. The name tlcpack was picked so that it is clearly distinguished from the official Apache source release.

Wearing ASF hats, we will continue to work together with the community to produce high-quality source releases that comply with the Apache release policy.

co-author @tqchen


Finally! Thank you for the hard work!!


The pip package will be of great help if one just wants to run some simple tests. The developer docker images are also helpful; I have had trouble building TVM on an old CentOS system that only has gcc 4.8 support.

BTW, I’m curious about the decision to release cu100 together with cu101 and cu102 at the same time.

Thanks @haichen and @tqchen! This is really cool, and new users will certainly benefit from that.

I’m curious to understand why we are (I assume) self-hosting rather than using PyPI. Is this due to ASF licensing rules as well?

Also, are the scripts and parameters you’re using to generate these packages being pushed somewhere, so that we can replicate that in test pipelines?

Not being on PyPI was due to the PyPI file size limit (the CUDA binaries are quite big). We can move to PyPI once the file size limit request is approved. Scripts are available at https://github.com/tlc-pack/tlcpack


Understood. Thanks @tqchen!

The wheels are built on a newer version of CentOS. The pip wheel for CPU is manylinux2010-compatible, and the wheels for CUDA are manylinux2014-compatible. Releasing wheels for different CUDA versions is to accommodate different development and deployment environments.
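For readers unfamiliar with how this compatibility information travels with the wheel: the Python tag, ABI tag, and platform tag (e.g. manylinux2014) are all encoded in the wheel filename itself, per the wheel spec (PEP 427). A minimal parsing sketch, using a tlcpack wheel name as an example:

```python
def parse_wheel_filename(name):
    """Split a wheel filename into its PEP 427 components.

    Illustrative sketch only; it handles the common 5-part form
    distribution-version-python-abi-platform.whl and ignores the
    optional build tag.
    """
    stem = name[: -len(".whl")]
    parts = stem.split("-")
    # The last three components are always python/abi/platform tags.
    python_tag, abi_tag, platform_tag = parts[-3:]
    return {
        "distribution": parts[0],
        "version": parts[1],
        "python": python_tag,
        "abi": abi_tag,
        "platform": platform_tag,
    }


info = parse_wheel_filename(
    "tlcpack_cu102-0.7.dev1-cp36-cp36m-manylinux2014_x86_64.whl"
)
print(info["python"], info["platform"])  # cp36 manylinux2014_x86_64
```

pip uses these tags to decide whether a wheel is installable on the current interpreter and platform, which is why the manylinux level matters.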


BTW, @haichen shall we also release a runtime-only version of tlcpack? It would be helpful if people want to embed TVM-compiled operators in other frameworks purely in Python.

Thanks for your work.

I tested the wheels; however, I got a CUDA error with tlcpack_cu102-0.7.dev1-cp36-cp36m-manylinux2014_x86_64.whl:

 tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(TVMFuncCall+0x48) [0x7fe7e6f0dc48]
  [bt] (2) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::detail::PackFuncVoidAddr_<4, tvm::runtime::CUDAWrappedFunc>(tvm::runtime::CUDAWrappedFunc, std::vector<tvm::runtime::detail::ArgConvertCode, std::allocator<tvm::runtime::detail::ArgConvertCode> > const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x9e) [0x7fe7e6fae74e]
  [bt] (1) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(tvm::runtime::CUDAWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void**) const+0x755) [0x7fe7e6fae645]
  [bt] (0) /usr/local/lib/python3.6/dist-packages/tvm/libtvm.so(+0x15f4505) [0x7fe7e6faa505]
  File "/workspace/tvm/src/runtime/cuda/cuda_module.cc", line 105
  File "/workspace/tvm/src/runtime/library_module.cc", line 78
CUDAError: Check failed: ret == 0 (-1 vs. 0) : cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_NOT_INITIALIZED

I tested this with the image: docker pull nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04

Meanwhile, if I build tvm from source, it works without any issue…

Anyone know what could be the cause of this?

Update: I got the same error using the Dockerfile.package-cu102 provided.

The error can be reproduced using https://tvm.apache.org/docs/tutorials/get_started/relay_quick_start.html#sphx-glr-tutorials-get-started-relay-quick-start-py
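One way to narrow down a CUDA_ERROR_NOT_INITIALIZED like the one above is to check whether the CUDA driver API can be initialized at all inside the container, independently of TVM. A rough diagnostic sketch (assumes the Nvidia driver's libcuda.so.1; returns None when the driver library is not present, as on a machine without GPU access):

```python
import ctypes


def cuda_driver_init_status():
    """Call cuInit() directly via the CUDA driver library.

    Diagnostic sketch: if cuInit() fails here, the error from TVM is
    likely a container/driver setup issue (e.g. the container was not
    started with GPU access) rather than a problem with the wheel.
    Returns the CUresult code (0 == CUDA_SUCCESS), or None if
    libcuda.so.1 cannot be loaded at all.
    """
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return None  # driver library not found on this machine
    return libcuda.cuInit(ctypes.c_uint(0))


print("cuInit status:", cuda_driver_init_status())
```

A nonzero status (or None) from this check would point at the environment rather than tlcpack itself.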

Thanks for this initiative; it is commendable towards reducing the burden of using the Apache TVM project.

Could you link to the Apache policy here for other folks to read and see what other guidelines need to be investigated? I couldn’t find it easily. It might also be worthwhile for the project to document these aspects as items to consider.

Would this be something that would also allow others to add their own binary packages? I can envisage a linkage against the Ethos-N compiler / driver library to provide users with easy access to the work in Arm.

regards Ramana

Because the package is third-party (non-ASF), the main policy that needs to be referred to is the trademark policy, if the product would like to be referred to as “Foo, Powered by Apache TVM.” See http://www.apache.org/foundation/marks/faq/#poweredby

We can also move follow-up conversations specific to tlcpack to https://github.com/tlc-pack/tlcpack/issues

Hi @tqchen, is that discussion still ongoing? Is there any help needed, to be able to host tlcpack on PyPI?

That could still be the case. However, we figured out a mechanism that auto-updates the binaries to a GitHub tag, and the matrix on https://tlcpack.ai/ so far gives quite clear instructions, so I think we are probably fine.

I’m working on getting Linux Nightly built for pip. To do this, I needed to upgrade to manylinux2014. It wasn’t possible to easily build LLVM with manylinux2010 any longer. Additionally, the nightly will only work for builds from main rather than the latest stable release. I think this is enough motivation to start working towards the 0.8 release, as there are many new features that have landed.

The next steps are to write a workflow to build the Docker image and publish the resulting package.