I’m confused about the meaning of ‘target’ in TVM. I thought a target would be something like the host machine, an iPhone, or a Raspberry Pi, so why is it ‘llvm’? That’s what I wondered while going through the tutorials.
Also, where is LLVM used in TVM?
We install gcc before building TVM, so the components that act as the compiler or runtime are built with gcc, aren’t they?
I don’t fully understand LLVM’s features, but I’m thinking of using microTVM for microcontroller applications, so let me ask this too. If LLVM is a kind of compiler (sorry if my understanding is wrong), can users specify their own compilers to run microTVM?
For example, STMicro has their own compiler for their Cortex microcontrollers, so what users would hope for is to build the TVM runtime with that compiler.
Please give me some information on what LLVM is used for in TVM. Is it for compiling the runtime, or for some other purpose?
One more question.
Can we even use microTVM with vendors other than STMicro right now? Say, NXP, TI, Renesas, etc.
Let me summarize my questions.
In TVM, what does ‘target’ mean, and why is it ‘llvm’?
As for microTVM, can users build the microTVM runtime using their own compiler?
When it comes to microcontrollers, can we use microTVM with microcontrollers from other vendors such as NXP, TI, or Renesas? We have to give a specific string before starting the build, so I’m wondering how we can use other microcontrollers.
@sho hi there, sorry for the confusion. Hopefully I can clear some of this up.
You’re absolutely right that there is some ambiguity we need to address here. In general, “target” in TVM is intended to describe the deployment environment. In practice, depending on the context, “target” takes on slightly more nuanced meanings:
When used in tvm.relay.build, it does describe the deployment environment, but through the lens of “which codegen should we use to build for this environment?” As a result, it’s currently possible to describe the same environment with several different target strings.
When used with AutoTVM, it describes the configuration of the codegen that was used to produce the timed operator implementation. Most of the time this matches the deployment environment, but with microTVM in particular we are doing some cleanup around e.g. executor and runtime, which really have no bearing on AutoTVM. You can see some of that discussion here.
You’re right that LLVM is a suite of compiler tools, and Clang is the name of the C/C++ compiler built on top of it. When you specify an llvm target to TVM, it means to translate the model implementation directly into LLVM’s Intermediate Representation for a particular target, bypassing C. This allows TVM to emit highly optimized code for many CPU platforms. You could also specify the c target (another way to describe the same deployment environment, as discussed before), and TVM will instead generate C code meant to be consumed by a downstream compiler.
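To make that concrete, here’s a minimal sketch (the one-operator model is something I made up purely for illustration):

```python
import tvm
from tvm import relay

# A tiny stand-in model: a single ReLU op, purely for illustration.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# target="llvm": TVM goes straight to LLVM IR and emits machine code.
lib_llvm = relay.build(mod, target="llvm")

# target="c": TVM emits C source for a downstream compiler instead.
lib_c = relay.build(mod, target="c")
```

Both strings describe the same host deployment environment; they just select different codegen paths.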
As you noted, some vendors have released specialized compilers. If you want to use those to compile everything, including the emitted model code, you do need to specify the c target to TVM. However, it’s important to note that there are quite a few pieces involved in deploying microTVM:
1. The compiled model.
2. The runtime components.
3. Your program.
The latter two pieces are most often compiled with whatever compiler is typical for your platform. However, piece #1 can often be built directly by TVM using the llvm target, because the compiled model code mainly depends on the CPU architecture in use. Since many boards use popular architectures that LLVM can target, such as ARM or RISC-V, you can often get away with specifying llvm here.
However, for microTVM I suggest sticking with the c target for now if you:
a) need better visibility into what’s running on the CPU, e.g. when using a debugger, or
b) want to use AOT, due to a limitation of our codegen right now (we have yet to test the new embedded C interface with the llvm target).
At the least, it’s the best starting point for now. As the project matures, a more general rule will start to apply: you will often see the best performance with the llvm target.
It is used to translate an abstract representation of the model implementation (i.e. TVM’s TIR language) into executable machine code.
Answered above, I hope. Please let me know if you have more questions.
Yes, you can build the runtime with your own compiler, as discussed above.
You absolutely can. I’d suggest starting with boards supported by the Zephyr RTOS or Arduino to make your life easier. While ST is working on a port specific to their AI deployment tool, you can also target their microcontrollers using the standard microTVM flow (indeed, we often test on STM32 Nucleo boards).
I believe you can discover them with tvmc --help. They are also listed in target_kind.cc.
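If you’re in Python, I believe this small sketch also works in recent versions:

```python
import tvm

# List every registered target kind ("llvm", "c", "cuda", ...),
# i.e. the same set of kinds defined in target_kind.cc.
print(tvm.target.Target.list_kinds())
```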
Ah, I see. Yeah, that is confusing: there are two runtimes, the C++ runtime and the C runtime. The C runtime is what you use with microTVM, while the C++ runtime has to be built in order to build the TVM compiler. So you shouldn’t think of this Getting Started step as compiling item #2 (the runtime components) for the purpose of running on your microcontroller.
Sorry for my late reply. I needed some time to organize my thoughts.
Could you correct me if I’m wrong?
As you probably know, I was confused about LLVM and how it is used.
So LLVM is used to build the TVM compiler. LLVM also comes into play when you feed your model into the TVM compiler and get TVM’s IR: LLVM compiles TVM’s IR and produces executables.
(I’m still not sure where gcc is used when building TVM from source, though…)
You don’t always need to use LLVM. In that case, you specify ‘c’ as the target and get C code instead. You can take this C code, the runtime, and your other programs, build them all with your own compiler, and run inference on MCUs.
Sorry for bothering you while you’re busy preparing for TVMCon. Day 1 was wonderful; I’m looking forward to Days 2 and 3 as well.
@sho no problem, apologies if this is a bit confusing.
There are actually two roles LLVM could play in TVM:
1. To build TVM from source.
2. As a Target backend (e.g. relay.build(..., target="llvm")). In this case TVM links against LLVM (as a static library) at compile time, so TVM contains LLVM’s code-generation pieces and can itself emit machine code to implement a model, just as LLVM does. (A quick way to check this in your build is sketched below.)
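As an aside, a hedged sketch for checking whether your build supports role #2 (the exact keys libinfo() reports can vary by TVM version):

```python
import tvm

# libinfo() exposes the CMake flags TVM was built with; USE_LLVM tells you
# whether this build was linked against LLVM (role #2 above).
print(tvm.support.libinfo().get("USE_LLVM"))
```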
This is correct. TVM’s compilation flow is:
1. Import models into Relay.
2. “Lower” Relay into TIR.
3. “Codegen” TIR to match the target (this is where Target("llvm") vs. Target("c") mostly matters).
So the final codegen step is where TIR is translated into some lower-level representation. That could be LLVM’s Intermediate Representation (which we can generate when TVM is linked against LLVM, as in role #2 above), C source code, CUDA code, or other types.
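Here’s a small sketch of that last step using a trivial vector-add (illustrative only), where the same TIR goes through two different codegens:

```python
import tvm
from tvm import te

# A trivial compute definition: C = A + B.
n = 128
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")
s = te.create_schedule(C.op)

# Same TIR, two codegen backends:
mod_llvm = tvm.build(s, [A, B, C], target="llvm")  # machine code via LLVM IR
mod_c = tvm.build(s, [A, B, C], target="c")        # C source code
print(mod_c.get_source())  # inspect the emitted C
```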
The tar file is located at model_library_format_path. The script chooses a tempfile for this, but you could choose any path you want (just replace the lines leading up to the export_model_library_format call).
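For example, a hedged sketch of exporting to a path you pick yourself (the tiny model and the "./model.tar" path are just placeholders):

```python
import tvm
from tvm import micro, relay
from tvm.relay.backend import Runtime

# A tiny stand-in model so the snippet is self-contained.
x = relay.var("x", shape=(1, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
lib = relay.build(mod, target="c", runtime=Runtime("crt"))

# Write the Model Library Format archive wherever you like.
micro.export_model_library_format(lib, "./model.tar")
```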
The docs for each might help explain a bit more. If any parts are confusing, it would be great to hear about them (feel free to file an issue or submit a PR):
It means you get C code (not objects or executables compiled by LLVM) anyway, right? Could you tell me why this list even exists? As far as I can see, TVM and microTVM require us to specify ‘target’, but they don’t seem to need to know what the target is for code generation, even when they run AutoTVM.
To run some of the tutorials, since they use Zephyr, we need to specify ‘target’ and ‘board’. But this is to let Zephyr know which board to flash the executables to; on the TVM side, the ‘target’ information doesn’t seem to be used…
Yes. However, the main reason we suggest the C route right now is some cleanup we need to do around AOT code generation. It should be possible to use LLVM to generate code that works with microTVM via the GraphExecutor. I don’t really recommend it as a starting point yet, because it’s hard to debug and because we need to fix some minor problems with the way AOT works with the llvm backend. But in general, LLVM is a better route than C if you’re not as concerned with debugging.
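For what it’s worth, in recent TVM versions the executor and runtime have become explicit build arguments (part of the cleanup mentioned earlier); a hedged sketch of the AOT-plus-C-runtime combination discussed here:

```python
import tvm
from tvm import relay
from tvm.relay.backend import Executor, Runtime

# A tiny stand-in model, just to make the snippet self-contained.
x = relay.var("x", shape=(1, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# AOT executor with the C runtime: the microTVM combination discussed above.
lib = relay.build(
    mod,
    target="c",
    executor=Executor("aot"),
    runtime=Runtime("crt"),
)
```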
You’re right that microTVM generally doesn’t need to know the specific SoC right now; what mainly matters is that it knows the architecture, so it can pick the right intrinsics. However, a couple of things:
1. It’s often daunting to configure all of this when you’re new to microTVM, so having a way to just specify the SoC is convenient.
2. You could imagine a future where we may be able to do some optimizations based on the SoC (e.g. given a priori knowledge of the memory architecture, or by consulting AutoTVM logs using this info).
So we are keeping this around; additionally, some vendors have expressed interest in using this information. (A sketch of the SoC-string convenience is below.)
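A hedged sketch of that convenience helper (“stm32f746xx” is one of the model strings used in the microTVM tutorials; the supported list lives in python/tvm/target/target.py and varies by version):

```python
import tvm

# Expands an SoC model string into a full target string
# (architecture keys, -mcpu flags, etc.).
target = tvm.target.target.micro("stm32f746xx")
print(target)  # e.g. a "c" target carrying -mcpu=cortex-m7 and related options
```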
Yes, getting C source code is much easier for beginners like me, especially when I need to debug the program.
Just one thing: I was worried that if I had only one option, using LLVM, it would be difficult to deploy the TVM artifact to whatever platform we want. There might be some minor architectures (say, for some niche microcontrollers) that LLVM doesn’t support, so we would have to develop an LLVM backend ourselves to emit executables for them.
I’m strongly hoping for case #2.
TVM is developing very fast, so I’ll try to keep up with its functionality!
Yep, we intend to keep the c backend fully supported alongside the LLVM backend for this reason. We export in Model Library Format to make it easy to consume the TVM output as a human (or, equivalently, from automation such as a Project API server).