[RELAX][TE] te.create_prim_func fails on te.scan: “Unsupported Operation: te.ScanOp. Only te.placeholder and te.compute are allowed for now.”

Summary

When I try to build a minimal TE function that uses te.scan, te.create_prim_func fails with:

TypeError: Unsupported Operation: te.ScanOp. Only te.placeholder and te.compute are allowed for now.

I’m wondering whether there is a plan to support te.ScanOp in te.create_prim_func (and in the new TIR pipeline) and, if so, what the timeline is. My use case is implementing an ONNX GRU/LSTM-like converter in tvm/relax/frontend/onnx, similar to the historical TOPI LSTM (which uses te.scan).


Minimal Reproducible Example (TE + te.scan)

import tvm
from tvm import te, tir

def stub_with_scan(T=4, B=2, H=3):
    T = tvm.tir.IntImm("int32", T)
    B = tvm.tir.IntImm("int32", B)
    H = tvm.tir.IntImm("int32", H)

    scan_len = tvm.tir.IntImm("int32", int(T) + 1)

    # state placeholders
    h_state = te.placeholder((scan_len, B, H), name="h_state")
    c_state = te.placeholder((scan_len, B, H), name="c_state")

    # init (runtime inputs)
    h_init = te.placeholder((1, B, H), name="h_init")
    c_init = te.placeholder((1, B, H), name="c_init")

    def step(prev, init, name):
        return te.compute(
            (scan_len, B, H),
            lambda t, b, j: tir.if_then_else(
                t == 0, init[0, b, j], prev[t - 1, b, j]
            ),
            name=name,
        )

    next_h = step(h_state, h_init, "next_h")
    next_c = step(c_state, c_init, "next_c")

    scan_h, scan_c = te.scan(
        init=[h_init, c_init],
        update=[next_h, next_c],
        state_placeholder=[h_state, c_state],
        name="stub_scan",
        inputs=[],
    )

    # drop t=0 to expose (T, B, H)
    hidden = te.compute((T, B, H), lambda t, b, j: scan_h[t + 1, b, j], name="hidden")
    cell   = te.compute((T, B, H), lambda t, b, j: scan_c[t + 1, b, j], name="cell")
    return (hidden, cell), (h_init, c_init)

(hidden, cell), inputs = stub_with_scan()
# This line fails:
prim = te.create_prim_func([*inputs, hidden, cell])

Observed error

Exception has occurred: TypeError
Unsupported Operation: te.ScanOp. Only te.placeholder and te.compute are allowed for now.
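For context, this is a plain-Python sketch (hypothetical helper, not TVM API) of the semantics the te.scan above is meant to express: the first row of the state comes from the init tensor, and each later row is produced from earlier rows.

```python
def scan_reference(init, update_fn, scan_len):
    """state[0] = init; state[t] = update_fn(state, t) for t >= 1.

    Mirrors te.scan with a single init row: later rows may only read
    earlier rows, which is what makes the time axis inherently serial.
    """
    state = [None] * scan_len
    state[0] = init
    for t in range(1, scan_len):
        state[t] = update_fn(state, t)
    return state

# The repro's update rule: row t simply copies row t - 1 (the t == 0
# branch is covered by init), so every row equals the initial state.
h = scan_reference(init=7, update_fn=lambda s, t: s[t - 1], scan_len=5)
# h == [7, 7, 7, 7, 7]
```

The per-element (b, j) dimensions are elided to a single scalar here; the shape handling in the TE repro is orthogonal to the recurrence structure.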

Motivation / Background

  • I’m trying to implement GRUOnnxOpConverter under tvm/relax/frontend/onnx, following the style of the historical TOPI LSTM, which used te.scan for time-step recurrence.

What I tried

  1. TE without te.scan: works (builds and runs), which suggests the failure is specific to te.ScanOp.
  2. Porting to TIR directly: possible, but I’d like to keep TE parity for GRU/LSTM-like ops (mirroring the older TOPI design) and avoid prematurely committing to TIR for this converter.
  3. Relax loop constructs: I’m working in relax/frontend/onnx, and the TE path would be very convenient for expressing RNN-like recurrences if te.scan were supported.

Questions

  1. Is te.ScanOp planned to be supported by te.create_prim_func and the new TIR scheduling pipeline?
  2. If yes, is there a rough timeline or a tracking issue/RFC I can follow?
  3. If not, what is the recommended path to implement ONNX GRU/LSTM-style recurrences today?
      • Should we implement the loop directly in TIR (e.g., a for/block with state carried across time)?
      • Or is there a recommended Relax-level pattern for dynamic time loops that replaces the older TE scan approach?
  4. For feature parity with legacy TOPI LSTM (which used te.scan), what is the official guidance in the current architecture?
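Regarding the first bullet under question 3, the structure a hand-written TIR PrimFunc would need is an outer serial loop over time with a state buffer read at t - 1 and written at t. A plain-Python sketch of that shape (not TVM code; the simple tanh cell here is a hypothetical stand-in for a full GRU/LSTM cell):

```python
import math

def rnn_reference(xs, h0, w=0.5, r=0.5, b=0.0):
    """h[t] = tanh(w * x[t] + r * h[t-1] + b); returns all hidden states.

    The time loop is serial by construction: each step reads the state
    produced by the previous step, exactly the dependence a TIR for/block
    over the time axis would carry.
    """
    h = h0
    out = []
    for x in xs:  # serial time axis; spatial (B, H) axes stay parallel
        h = math.tanh(w * x + r * h + b)
        out.append(h)
    return out

states = rnn_reference([1.0, 0.0, -1.0], h0=0.0)
# len(states) == 3, one hidden state per time step
```

In a real lowering, only the batch and hidden dimensions inside each step are parallelizable; the outer loop over t must remain sequential, which is the core property te.scan encoded.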

Why this matters

  • The ONNX RNN family (LSTM/GRU/RNN) is common; keeping a clean, maintainable way to express their time-step recurrences in TVM helps both maintainers and users.
  • Many existing examples and mental models for sequence ops were based on TE scan. If scan is no longer a supported path, clear migration guidance (TE → TIR/Relax) and a roadmap would be very helpful.

Unfortunately, it seems lowering support for the ScanOp node has been lost; I think it would be great to add support for it in the create_prim_func routine.

I think so too. I would even like to change the source code and implement it myself, but I don’t have the ability to do that yet.