Publish TVM4J artifact

It would be great if there were a single multi-platform JAR for TVM4J, including Mac OS X, Windows and *nix x64 builds, available on a publicly accessible package repository to avoid manual compilation.

The Z3-TurnKey project is a pretty good example of how this works. If you inspect the JAR, it contains prebuilt native binaries for OS X, Linux, and Windows, and it dispatches to the correct native library at runtime.
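
For illustration, here is a minimal Java sketch of that pattern (not Z3-TurnKey's actual code): the JAR ships per-platform binaries under a resource path, and a loader inspects `os.name`/`os.arch`, extracts the matching binary to a temporary file, and loads it. The `/native/<os>-<arch>/` resource layout and the `tvm4j` library name are assumptions made for the example.

```java
// Minimal sketch of runtime dispatch for a native library bundled inside a JAR.
// Resource layout and library name are assumptions, not any project's real API.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class NativeLoader {
    public static void loadBundledLibrary() throws Exception {
        String os = System.getProperty("os.name").toLowerCase();
        String arch = System.getProperty("os.arch").toLowerCase();

        // Map the JVM's platform properties onto the folder names used inside the JAR.
        String folder;
        String fileName;
        if (os.contains("win")) {
            folder = "windows-" + arch;
            fileName = "tvm4j.dll";
        } else if (os.contains("mac")) {
            folder = "macosx-" + arch;
            fileName = "libtvm4j.dylib";
        } else {
            folder = "linux-" + arch;
            fileName = "libtvm4j.so";
        }

        // Extract the bundled binary to a temporary file and load it.
        String resource = "/native/" + folder + "/" + fileName;
        try (InputStream in = NativeLoader.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new UnsatisfiedLinkError("No bundled native library for " + folder);
            }
            Path tmp = Files.createTempFile("tvm4j", fileName);
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString());
        }
    }
}
```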

Due to ASF policy, we only produce source releases officially.

Also note that the CUDA-related binaries require an EULA from NVIDIA, which is not strictly Apache compliant. This is fine for our users, but it creates some barriers to releasing the binary as an ASF entity (officially).

We certainly want the package to be available and convenient to use for everyone. Common practice in the ASF allows community members to publish binary releases themselves, as individuals. We would start some conversations around the 0.7 release timeframe.

Thanks for your reply @tqchen.

Due to ASF policy, we only produce source releases officially.

Which policy are you referring to? I am not aware of any policy that forbids the release of compiled artifacts, as long as they are accompanied by appropriate sources and signed by a committer.

Also note that the CUDA-related binaries require an EULA from NVIDIA, which is not strictly Apache compliant. This is fine for our users, but it creates some barriers to releasing the binary as an ASF entity (officially).

I was under the impression that CUDA was just one of the targets which TVM supports. What about OpenCL or any of the CPU architectures?

We would start some conversations around the 0.7 release timeframe.

Sounds good! I will try to check back when TVM4J is more stable.

I’ve started to deploy artifacts that include TVM4J for Linux, Mac, and Windows, with and without CUDA, from here:

Please let me know if you notice anything missing! Thanks

I’ve started to deploy artifacts that include TVM4J for Linux, Mac, and Windows, with and without CUDA…

Documentation looks great, thanks for the update, Sam!

We would start some conversations around the 0.7 release timeframe.

Now that TVM has released 0.7, I have decided to revisit this issue. With regard to licensing, it seems that both JavaCPP and DJL publish MXNet binaries, and MXNet is also Apache licensed. Conveniently, DJL also has a native version that automatically checks the platform and downloads the correct binaries for the target platform at runtime, so there must be a way to work out the legal issues if Amazon is comfortable hosting it. What is the TVM maintainers' stance on TVM/MXNet artifacts hosted by third parties? And who is the best party to contact regarding these questions: OctoML/ASF/TVM/MXNet, Konduit/JavaCPP, Amazon/DJL, NVIDIA/CUDA, or some other party? Thanks!
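
To make the "download the correct binaries at runtime" idea concrete, here is a hypothetical Java sketch of that approach. The download URL, cache layout, and library names are illustrative assumptions, not DJL's or TVM4J's actual API: detect the host platform, download that platform's binary once into a local cache, and load it from there.

```java
// Hypothetical sketch of a "native auto" loader: fetch only the host platform's
// binaries on first use. URL and cache layout are placeholders, not a real API.
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public final class AutoNativeLoader {
    public static void ensureNativeLibrary(String version) throws Exception {
        String osName = System.getProperty("os.name").toLowerCase();
        String os = osName.contains("win") ? "windows"
                  : osName.contains("mac") ? "macosx"
                  : "linux";
        String arch = System.getProperty("os.arch").toLowerCase();
        String libName = os.equals("windows") ? "tvm4j.dll"
                       : os.equals("macosx") ? "libtvm4j.dylib"
                       : "libtvm4j.so";

        // Cache the binary under the user's home directory so it is only downloaded once.
        Path cacheDir = Paths.get(System.getProperty("user.home"), ".tvm4j", version, os + "-" + arch);
        Path libPath = cacheDir.resolve(libName);
        if (!Files.exists(libPath)) {
            Files.createDirectories(cacheDir);
            // Placeholder URL; a real publisher would host per-platform binaries somewhere public.
            URL url = new URL("https://example.org/tvm4j/" + version + "/" + os + "-" + arch + "/" + libName);
            try (InputStream in = url.openStream()) {
                Files.copy(in, libPath, StandardCopyOption.REPLACE_EXISTING);
            }
        }
        System.load(libPath.toAbsolutePath().toString());
    }
}
```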

The current approach we take is to release the binaries under a different name, especially when the library bundles things like cuDNN, e.g. https://tlcpack.ai/. This is mainly a branding issue rather than a legal one.


BTW, Gradle JavaCPP does the same kind of thing as DJL to download binaries only for the host platform: