Can we use TVM as a JIT compiler?

Hi all,

I am interested in a possible C++ integration of TVM as a JIT compiler. Can we write a compute/schedule from C++? Something along the lines of:

int main()
{
    int m = 1024;  // vector length

    // A is an input placeholder; B computes A[i] + 1
    auto A = tvm::te::placeholder({m}, tvm::DataType::Float(32), "A");
    auto B = tvm::te::compute({m}, [&](tvm::tir::Var i) { return A(i) + 1; }, "B");

    // Create a default schedule and vectorize the only axis of B
    auto s = tvm::te::create_schedule({B->op});
    auto axis = B->op.as<tvm::te::ComputeOpNode>()->axis;
    s[B].vectorize(axis[0]);

    // Lower to TIR and JIT-compile into a runtime module
    // (exact lower/build/Target signatures vary across TVM versions)
    auto func = tvm::lower(s, {A, B}, "add_one", {});
    auto module = tvm::build(func, tvm::Target("llvm"), tvm::Target("llvm"));
    // Set inputs, outputs
    // Run the compiled function
    // Get outputs
}
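The “// Set inputs, outputs … // Get outputs” part would, I guess, expand to something like this (the NDArray/PackedFunc calls below are my best reading of the runtime API; "add_one" is just the name passed to tvm::lower above):

    // Look up the compiled function by the name given to tvm::lower
    tvm::runtime::PackedFunc f = module.GetFunction("add_one");

    // Allocate float32 input/output buffers of length m on the CPU
    auto a = tvm::runtime::NDArray::Empty({m}, {kDLFloat, 32, 1}, {kDLCPU, 0});
    auto b = tvm::runtime::NDArray::Empty({m}, {kDLFloat, 32, 1}, {kDLCPU, 0});

    // ... fill a with input data ...
    f(a, b);  // run the JIT-compiled kernel
    // ... read the results back from b ...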

I “think” it should be possible, but I did not find much documentation or many tutorials about it. Also, what about tuning? Can I write my schedule in C++, expose it in Python, and let the tuner tune it?
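For the exposing-to-Python part, I imagine the C++ side could register its builder as a global packed function, along these lines (the name "my.build_add_one" and the helper BuildAddOne are placeholders I made up):

#include <tvm/runtime/registry.h>

// Register a C++-side builder so that Python (e.g. a tuning script) can call
// it by name. BuildAddOne would create the schedule and build the module as in
// the snippet above, possibly taking the schedule knobs to tune as arguments.
TVM_REGISTER_GLOBAL("my.build_add_one")
    .set_body_typed([](int m) -> tvm::runtime::Module {
      return BuildAddOne(m);  // hypothetical helper wrapping the code above
    });

On the Python side it should then be reachable with tvm.get_global_func("my.build_add_one"), if I understand the registry correctly.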

Thanks,

Giuseppe

Definitely yes! There is no reason not to use TVM as a JIT compiler if we want to build a deep learning framework.


Hi @junrushao,

Thanks for your reply! Can you point me to some examples?

I am looking to integrate at the operator level (and not at the Relay level). Is the snippet I wrote similar to a real implementation?

Thanks again

Hi @junrushao,

I was able to get something working in the end. It is actually very cool 🙂

My next question is: since I am only using the tensor expression language (no graph runtime, no Relay, no auto-tuner, etc.), can we produce a libtvm.so that only includes TE/TIR?

The problem is that libtvm.so is quite big (> 80 MB), and I was wondering how much space we could save once we remove the things we don't use (when we JIT at the tensor expression level).

Of course I can always hack the CMakeLists.txt, but I was wondering if there was a canonical way of doing it.

Thanks,

Giuseppe

I think we can somehow trim TVM to remove the graph runtime, VM, etc. However, if you want to use TVM as a JIT compiler, you have to ship an LLVM build, which is really large (> 200 MB IIRC).

Hi @junrushao,

It might be that we already have the library available on our system. Could you tell me the exact name of the library?

Thanks,

Giuseppe

You mean the name of the LLVM shared library? It is something like libLLVM.so.

Yes, I don’t know why it was not showing up when I was inspecting libtvm.so through ldd.

Thanks for the help @junrushao!

Sure! BTW, if you set USE_LLVM to ON, it should show up by default when LLVM is dynamically linked. A possible reason it doesn't show up is that LLVM may be statically linked.


Now I am interested in a possible C++ integration of TVM as a JIT compiler too. So how should the code above be modified in the end? Thank you very much!