[BYOC] JSON codegen for BYOC

Hi,

I am a beginner with TVM and am trying the BYOC example tutorial for the JSON codegen and runtime. I have enabled the DNNL codegen in config.cmake and am able to generate the JSON output corresponding to the module using the following code.

from tvm import relay

# annotate the DNNL-supported ops, merge adjacent regions, and split them
# out into separate functions for the external codegen
mod = relay.transform.AnnotateTarget(["dnnl"])(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)
graph_json, lib, params = relay.build(mod, target='llvm')
print('Graph json:', graph_json)

I have the following questions:

  1. Why is the target “llvm” and not “DNNL”, which was registered in codegen.cc?
  2. For a custom code generator, would the target still be “llvm”?
  3. Is it possible to write a code generator without a runtime?
  4. If I want to compile the generated JSON along with the parameters into an executable outside the TVM environment for a custom accelerator, what is the role of the lib generated by relay.build()?

Regards, Debjyoti

  1. The target here is the TVM target, i.e., the target that handles the ops that are not offloaded to your codegen (see the first sketch after this list).

  2. No. See the above response.

  3. If you meant that you only want to write a codegen but don’t want to write a runtime, then your codegen has to generate C/C++ code, which is compiled along with the TVM host code. In that case the built-in TVM C source runtime module is used and you don’t need to worry about it at all. If your codegen generates another form such as JSON, then you have to provide a runtime; otherwise TVM has no way to know how to execute the code/JSON you generated (see the second sketch after this list).

    On the other hand, if you meant to generate an executable module without the TVM runtime, that is part of AOT (ahead-of-time) compilation, which is not fully supported yet.

  4. The generated lib includes the host module and the execution modules for the parts that cannot be offloaded to your accelerator. The host module invokes your module at runtime when it executes a graph node that is annotated for your accelerator; this is the point of integration between the TVM runtime and your external accelerator (see the third sketch below).
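
For 1. and 2., a quick way to see that split is to inspect the partitioned module before calling relay.build: the offloaded regions become separate Relay functions that carry a "Compiler" attribute, while everything else is what the "llvm" target actually compiles. A small sketch, reusing the mod from your snippet:

for gvar in mod.get_global_vars():
    func = mod[gvar]
    if func.attrs and "Compiler" in func.attrs.keys():
        print(gvar.name_hint, "-> handled by the", func.attrs["Compiler"], "codegen")
    else:
        print(gvar.name_hint, "-> compiled for the llvm target")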
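
For 3., relay.build looks the codegen up as a packed function named "relay.ext.<compiler>", and that function must return a runtime module TVM knows how to execute: the built-in C source module if you emit C/C++, or your own runtime module if you emit JSON or some other format. A quick sanity check (sketch) that the DNNL codegen is registered:

import tvm
ext_codegen = tvm.get_global_func("relay.ext.dnnl", allow_missing=True)
print("relay.ext.dnnl registered:", ext_codegen is not None)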
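
For 4., here is a minimal sketch of how the pieces fit together at runtime, assuming the classic three-value relay.build output from your snippet (newer TVM versions return a single factory module and use tvm.contrib.graph_executor instead):

import tvm
from tvm.contrib import graph_runtime

rt = graph_runtime.create(graph_json, lib, tvm.cpu())  # lib = host module + unoffloaded ops
rt.set_input(**params)  # bind the constant parameters
rt.run()  # nodes annotated for dnnl are dispatched to the external runtime module
out = rt.get_output(0)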


Thanks for the prompt response.

Indeed, I was looking for AOT compilation and not JIT.

I understand that the parts of the graph supported by a “target” are annotated, the compiler regions are then merged, and the code is partitioned between the target and TVM.

For the parts annotated for the “target”, is it possible to write custom, target-dependent passes before relay.build is called? I am trying to achieve the following:

  1. For the compiler regions annotated for the target, decompose or merge them into custom nodes (which are specific to the target)
  2. Run optimization passes on these nodes (targeted towards different goals, say low energy or high throughput) depending on the target constraints
  3. Invoke the codegen for this part. Ideally, all the operators would be offloaded to the accelerator, so I do not plan to use the TVM runtime.

Would you recommend doing this with Relay transform passes, or writing custom passes outside TVM on the generated JSON structure? It would be convenient to use the Relay pass infrastructure.
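
Concretely, something like the following is what I have in mind. This is a rough, untested sketch that assumes a TVM version with the dataflow-pattern based MergeComposite; "myaccel" and the composite/pass names are placeholders for my target, not existing TVM symbols:

import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

# 1. fuse supported operator groups into target-specific composite nodes
def conv2d_relu_pattern():
    conv = is_op("nn.conv2d")(wildcard(), wildcard())
    return is_op("nn.relu")(conv)

pattern_table = [("myaccel.conv2d_relu", conv2d_relu_pattern())]

# 2. a module pass that only rewrites the functions partitioned out for the
#    accelerator (as far as I understand, Relay function passes skip functions
#    that carry a "Compiler" attribute, so a module pass seems safer here)
@tvm.transform.module_pass(opt_level=0)
class MyAccelOptimizer:
    def transform_module(self, mod, ctx):
        for gvar in mod.get_global_vars():
            func = mod[gvar]
            if func.attrs and "Compiler" in func.attrs.keys() \
                    and func.attrs["Compiler"] == "myaccel":
                # target-specific rewrites (low energy / high throughput)
                # would go here, e.g. via a relay.ExprMutator over func.body,
                # written back with mod[gvar] = new_func
                pass
        return mod

mod = relay.transform.MergeComposite(pattern_table)(mod)
mod = relay.transform.AnnotateTarget(["myaccel"])(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)
mod = MyAccelOptimizer()(mod)
# 3. relay.build would then hand the "myaccel" functions to the external codegen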
