While investigating whether TVM could serve as a high-level behavioral simulator for a new AI accelerator architecture, the
tvm.relay.backend.interpreter.Interpreter class caught my eye.
Its optimize() function appears to be the first (only?) step in lowering a generic computation graph toward a particular compute architecture.
I think I know how to generate its
mod argument from a TensorFlow design (via the Relay TensorFlow frontend, if I'm reading the source correctly).
But how do I create its ctx and target arguments?
I’m assuming that both of those should be customized to reflect the nature of my new architecture; is that correct?
Is there a tutorial for this available somewhere?
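For reference, here is roughly the flow I'm imagining. This is just a sketch, not working code: the from_tensorflow() call and the Interpreter constructor arguments are my reading of the source, and the ctx/target values are exactly the part I'm unsure about.

```python
def build_interpreter_sketch(graph_def, shape_dict, target="llvm"):
    """Sketch (untested) of constructing a Relay Interpreter from a
    TensorFlow GraphDef. The ctx and target placeholders below are the
    part of the question: what should they be for a custom accelerator?
    """
    # Imports kept inside the function so this file loads even without TVM.
    import tvm
    from tvm import relay
    from tvm.relay.backend.interpreter import Interpreter

    # Import the TensorFlow GraphDef into a Relay module (my assumption
    # about the right frontend entry point).
    mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)

    # Placeholders -- presumably these should describe the new
    # architecture rather than a stock CPU?
    ctx = tvm.cpu(0)
    return Interpreter(mod, ctx, target)
```

If the intended path is actually relay.create_executor("debug", ...) rather than instantiating Interpreter directly, a pointer to that would also answer my question.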