Graph_plan_memory doesn't support nested tuples?

Hi, the model I’m working on has the following output:

  ...
  %1562 = (%1550, %1551, %1552, %1553, %1554, %1555, %1556, %1557, %1558, %1559, %1560, %1561);
  (%1549, %1562)
}

i.e., the output is a tuple where the second element is another tuple with 12 elements.

relay.build(...) errors on this model with the following message:

  [bt] (0) /mnt/2e797a66-fd2b-44fc-a3ba-24d7d65f2780/projects/dev/tvm/build/libtvm.so(+0x104245b) [0x7f30ec71a45b]
  File "/mnt/2e797a66-fd2b-44fc-a3ba-24d7d65f2780/projects/dev/tvm/src/relay/backend/graph_plan_memory.cc", line 86
  TVMError: Check failed: tok.size() == 1U (12 vs. 1) :

The error happens when the memory planner visits a TupleNode:
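
Roughly, the failing check sits in a visitor like this (a paraphrased sketch of graph_plan_memory.cc from memory, not a verbatim copy; member names may differ slightly):

  void VisitExpr_(const TupleNode* op) final {
    std::vector<StorageToken*> fields;
    for (Expr field : op->fields) {
      auto tok = GetToken(field);
      // Assumes each tuple field maps to exactly one storage token.
      CHECK_EQ(tok.size(), 1U);
      fields.push_back(tok[0]);
    }
    token_map_[op] = fields;
  }

With the model above, the inner 12-element tuple yields 12 tokens for a single field, hence the "12 vs. 1" in the check.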

So it seems to me the memory planner is complaining about a tuple of tuples? @tqchen do you have an idea what’s going on?

UPDATE: VM compilation doesn’t have this problem.

We (@manupa-arm) ran into this in the graph partitioner. I think in the end we were forced to introduce logic to flatten such tuples, so if a more fundamental solution can be found, it would simplify our logic.

Yes, we will need to update the code if we want to support nested tuples. Perhaps we can also pass the tokens around in nested tuples and unpack them.
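
One direction might be something like this in the TupleNode visitor (an untested sketch, not a real patch; downstream consumers such as the TupleGetItem visitor and the graph codegen would need matching changes):

  // Sketch: splice all tokens of a field into the parent tuple's token
  // list instead of requiring exactly one token per field, so nested
  // tuples are effectively flattened during planning.
  void VisitExpr_(const TupleNode* op) final {
    std::vector<StorageToken*> fields;
    for (Expr field : op->fields) {
      auto tokens = GetToken(field);
      fields.insert(fields.end(), tokens.begin(), tokens.end());
    }
    token_map_[op] = fields;
  }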

@masahi there is code for doing this mapping inside of the VM. If you message me on Slack, we can probably figure out how to update the code; it might require a bit of debugging.

Ok, thanks! I found the code Jared was probably referring to (transform/memory_plan.py and transform/memory_alloc.py; not sure why they are written in Python). I’m going to learn about memory planning and see what I can do.

There is a C++ helper called Linearize or FlattenTuple (I can look it up later).

The helpers are here:

Thanks, I’ll take a look.

@masahi FromTupleType is the one you probably want: it takes a Type representing the layout of an expr and returns a sequence of expressions that correspond to the linearized view of the tuple, i.e. it will handle projecting nested tuples out.
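
Roughly, usage looks like this (a sketch from memory; I believe the helpers are declared in src/relay/transforms/pass_utils.h, and the FlattenAndRepack wrapper below is hypothetical, just for illustration):

  // Assumed declarations (pass_utils.h):
  //   std::vector<Expr> FromTupleType(const Type& type, const Expr& expr);
  //   Expr ToTupleType(const Type& t, const std::vector<Expr>& exprs);
  Expr FlattenAndRepack(const Expr& expr) {
    // Layout of the (possibly nested) tuple, e.g. (Tensor, (Tensor x 12)).
    Type ty = expr->checked_type();
    // One entry per leaf tensor; nested tuples are projected out for you.
    std::vector<Expr> flat = FromTupleType(ty, expr);
    // ... work over the flat view (e.g. one storage token per entry) ...
    // Rebuild an expression with the original tuple nesting.
    return ToTupleType(ty, flat);
  }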