Capture tensor-level IR and schedule from Relay

We are starting to experiment with tensor expressions, aimed at creating custom-generated code for an accelerator. I can see what’s happening at the tensor expression level, but am trying to see the same thing when coming from Relay.

I have a toy example with element-wise matrix add:

import tvm
from tvm import te

A = te.placeholder(shape=(160, 640), name='A')
B = te.placeholder(shape=(160, 640), name='B')
C = te.compute(A.shape, lambda i, j: A[i, j] + B[i, j], name='C')
s = my_custom_schedule(C)  # our custom scheduling function

I can then do all of the following:

from tvm.contrib import tedd

# see the schedule!
tedd.viz_schedule_tree(s, output_dot_string=True)
# create an IR module and look at it!
irmod = tvm.lower(s, [A, B, C], name="fadd")
# compile it to C code!
cmod = tvm.build(irmod, [A, B, C], target="c", name="fadd")
cmod.save("./fadd.c", "c")

So far all good.

But what I really want is to do all this from Relay, using the strategy mechanism to plug in my custom schedule. I’ve managed to plug in the custom schedule, create a Relay IRModule, compile it to a runtime module, and see the schedule reflected in the generated C code.

@schedule_injective.register(["mydevice"])
def my_custom_schedule(T):
    ...

x = relay.var('x', shape=(160, 640))
y = relay.var('y', shape=(160, 640))
z = relay.add(x, y)
func: relay.function.Function = relay.Function([x, y], z)
mod: tvm.ir.module.IRModule = tvm.IRModule.from_expr(func)
_, lib, _ = relay.build(mod, target="c --device=mydevice")

But in the relay.build flow I can’t figure out how to intercept and examine the schedule and the TIR-level module the way I can in the TE flow above, so that we can see what’s going on.

I’ve looked at Relay’s build_module, and there does not seem to be an API to get just the intermediate TIR-level representation, including the schedule; the output is a runtime module.

I’ve seen what seem to be related posts on this question, as well as some recent posts on the AOT compiler and on revising the compilation flow, but it’s still not clear whether there is a way to do this, or whether I’m thinking about it the wrong way.

Thanks for any help.


Relay compiles the entire model and executes part of it in the Executor as well as in the kernels, so there isn’t a single C file to inspect; the execution is spread across a few components. Is your goal just to read the code? You can add a tracing function to the PassContext as described at the bottom of this doc: How to Use TVM Pass Infra — tvm 0.8.dev0 documentation

Thanks. I still don’t see how to use the pass infrastructure to see the Tensor IR from Relay. I want to start from Relay as above, and see this:

primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
  attr = {"global_symbol": "fadd", "tir.noalias": True}
  buffers = {C: Buffer(C_2: Pointer(float32), float32, [160, 640], []),
             A: Buffer(A_2: Pointer(float32), float32, [160, 640], []),
             B: Buffer(B_2: Pointer(float32), float32, [160, 640], [])}
  buffer_map = {A_1: A, B_1: B, C_1: C} {
  for (i.outer: int32, 0, 10) {
    for (j.outer: int32, 0, 20) {
      for (i.inner: int32, 0, 16) {
        for (j.inner: int32, 0, 32) {
          C_2[((((i.outer*10240) + (i.inner*640)) + (j.outer*32)) + j.inner)] = ((float32*)A_2[((((i.outer*10240) + (i.inner*640)) + (j.outer*32)) + j.inner)] + (float32*)B_2[((((i.outer*10240) + (i.inner*640)) + (j.outer*32)) + j.inner)])
        }
      }
    }
  }
}

Hello, I have the same question. Is there a solution for this now?