[Discussion/Alignment] Memory Planning

There are quite a few mentions of new global memory planning, but with this I would like to start a more comprehensive discussion and would be interested in your current plans. I’m working on model transformations that aim to reduce memory usage on memory constrained devices. For this work to show full potential, I would need to implement a few changes in the memory planning of TVM. So this is also meant to align our goals.

Current status as far as I understand:

  • GraphPlanMemory: Walks the final Relay IR dataflow and allocates new buffers whenever required. Every buffer gets its own distinct storage, and buffers can only be placed at the beginning of a storage.
  • Intra-TIR function memory allocation: Temporary buffers for operations.

Proposed goals:

  • Unify these two methods of allocation into a global memory plan. The first important step towards that seems to be approached through this: [RFC][uTVM] Query intermediary workspace requirement. I do not have anything to add in this regard.
  • Improve method of planning. In my opinion, there are two good approaches:
    • Use a heuristic like in TFLM to find an often optimal solution with low computational complexity
    • Find the optimum with a mixed integer programming optimizer. This is pretty simple and optimal, but of course has higher computational complexity. It could look something like this:
      • (1) sets the objective to minimize the largest ending offset
      • (2) sets up constraints for ending and starting offsets
      • (3) ensures positive offsets
      • (4) sets up constraints against overlaps for conflicting buffers. Conflicting buffers would simply be buffers with overlapping lifetimes

Both approaches require input in the form of a list of buffers with the attributes “size” and “lifetime”. Or, more generally than “lifetime”, just a “conflict” relation between buffers.
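Spelled out, steps (1)–(4) correspond to the following MIP (notation mine: buffer $i$ has size $s_i$ and decision-variable offset $o_i$; $C$ is the set of conflicting pairs; $B$ is a sufficiently large constant, e.g. the sum of all sizes):

$$
\begin{aligned}
\min\ & M \\
\text{s.t.}\ & o_i + s_i \le M && \forall i \\
& o_i \ge 0 && \forall i \\
& o_i + s_i \le o_j + B\,z_{ij} && \forall (i,j) \in C \\
& o_j + s_j \le o_i + B\,(1 - z_{ij}) && \forall (i,j) \in C,\ z_{ij} \in \{0,1\}
\end{aligned}
$$

The binary $z_{ij}$ selects which of the two conflicting buffers is placed below the other, so conflicting buffers can never overlap in address space.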

My current plan to approach this is to change GraphPlanMemory into a function that analyzes the given Relay IR, but instead of returning the storage_ids, it would just return the buffers with sizes and lifetimes. Then the memory planning can work as an independent unit which transforms the given buffers (which can also include the intra-TIR function ones) to a static memory plan that would just consist of one “storage” for all storages that can be combined. Additional conflict constraints would come from the buffers being on different devices.
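As a concrete sketch of that independent planning unit (all names hypothetical, not existing TVM APIs): a TFLM-style greedy pass that takes buffers annotated with size and lifetime and returns offsets into a single combined storage.

```python
from typing import List, Tuple

# One buffer as (size_bytes, first_use, last_use); lifetimes are inclusive.
Buffer = Tuple[int, int, int]

def plan_greedy(buffers: List[Buffer], align: int = 4) -> Tuple[List[int], int]:
    """Place buffers largest-first at the lowest aligned offset that does
    not overlap any already-placed buffer with a conflicting lifetime.
    Returns (offsets, total_storage_size)."""
    def aligned(x: int) -> int:
        return (x + align - 1) // align * align

    order = sorted(range(len(buffers)), key=lambda i: -buffers[i][0])
    offsets: List[int] = [None] * len(buffers)  # type: ignore[list-item]
    for i in order:
        size, first, last = buffers[i]
        # Address intervals occupied by placed buffers whose lifetimes conflict.
        busy = sorted((offsets[j], offsets[j] + buffers[j][0])
                      for j in order
                      if offsets[j] is not None
                      and not (buffers[j][2] < first or last < buffers[j][1]))
        # First-fit gap search over the sorted busy intervals.
        off = 0
        for lo, hi in busy:
            if off + size <= lo:
                break
            off = max(off, aligned(hi))
        offsets[i] = off
    total = max((o + b[0] for o, b in zip(offsets, buffers)), default=0)
    return offsets, total
```

For example, three buffers of sizes 100/50/50 where only neighbors in time conflict let the third buffer reuse the first buffer's region, so the plan needs 150 bytes rather than 200.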

Open questions:

  • Was there a reason why the buffers are only placed at the start of storages?
  • Maybe related to above: How to handle exotic alignment requirements?
  • How to align this with non-static memory plans?

Linking GitHub - pulp-platform/dory: A tool to deploy Deep Neural Networks on PULP-based SoC's as a possible way of memory planning on microcontrollers by constraint programming.

hi @r.stahl @jossevandelm,

Thanks for your post! I think this is a very interesting topic and a natural next step for microTVM. It would be great to have some detailed discussion around the next steps, as I think there are quite a few interested parties and timelines may vary. I’d prefer if we could move in a direction that could accommodate as many as possible.

I think your status summary matches my understanding.

I’d like to approach the next steps of this in a manner similar to how you’ve split the problem:

  1. Add the ability to do global memory planning
  2. Add (some) interesting memory planner implementations

In particular, I’m not completely convinced that it’ll be possible to build a single global memory planner that handles all scenarios well. And, as @jossevandelm alluded to, it also seems possible to me that we may want to perform a graph-level search over possible memory configurations. I’d like to adopt a solution that allows the user to choose the memory planner implementation if they want; I think this also dovetails well with the goal of enabling different parties to do memory planner work in parallel.

To that end, I think the first step here is to post and implement an RFC describing a global memory planner interface. As part of this, we should describe the way that TVM will model memory. This hasn’t been settled yet, but I believe general sentiment has been gravitating towards defining a set of named memory pools (e.g. given by (name : str, size_bytes : int)). A strawman interface would probably look like:

PlanMemory(device_memory_pools, prim_func) -> [(tensor_id, pool_name, pool_offset)]

Where tensor_id is a unique integer identifying the TIR buffer and prim_func is the function generated by the AOT PR. It would be nice if it were possible to separate the concept of a lifetime from TIR itself, but it might be tricky to do this if we introduce control flow in that top-level TIR function (which I think is a very likely scenario). Another thing to consider is that making prim_func a part of this interface implies we generate the top-level TIR function and do analysis on this for both Graph and AOT codegen. I think that seems like a natural evolution (to join the two codegen paths), but it should be discussed.
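To make the strawman a bit more concrete, here is a hedged Python sketch of such an interface (all names are hypothetical; `prim_func` stands in for the top-level TIR function and is not actually inspected here — a real planner would derive tensor sizes and lifetimes from it):

```python
from typing import Any, Dict, List, Tuple

# Pools as (name -> size_bytes), matching the (name, size_bytes) model above.
MemoryPools = Dict[str, int]
# One placement per tensor: (tensor_id, pool_name, pool_offset).
Placement = Tuple[int, str, int]

def plan_memory(device_memory_pools: MemoryPools,
                tensor_sizes: Dict[int, int],
                prim_func: Any = None) -> List[Placement]:
    """Trivial reference planner: pack tensors end-to-end into the first
    pool with enough remaining capacity. Ignores lifetimes, so there is
    no reuse; real planners would analyze prim_func instead of taking
    sizes directly."""
    placements: List[Placement] = []
    used = {name: 0 for name in device_memory_pools}
    for tensor_id, size in sorted(tensor_sizes.items()):
        for pool, cap in device_memory_pools.items():
            if used[pool] + size <= cap:
                placements.append((tensor_id, pool, used[pool]))
                used[pool] += size
                break
        else:
            raise MemoryError(f"tensor {tensor_id} ({size} B) fits in no pool")
    return placements
```

The interface boundary is the interesting part: planner implementations are interchangeable as long as they consume the pool model and produce `(tensor_id, pool_name, pool_offset)` triples.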

Other considerations here are things like dynamic shapes and whether people may implement graph-level optimizations which may affect memory planning. I think if you want to write up a PoC or proposal here, feel free (particularly curious about the question of describing lifetime in a way that’s disconnected from TIR). I think also some of the ARM folks may be interested in participating here or may have their own proposal to make.

Answers to your questions:

  • Was there a reason why the buffers are only placed at the start of storages?

I don’t know, but I suspect it was mainly done since storages are per-buffer.

  • Maybe related to above: How to handle exotic alignment requirements?

What do you mean by “exotic?” Currently my thinking is to support word alignment. For memory which needs to be placed in a specific region, my thinking is that the named pools are user-facing and the names should mean something to the user. It should be possible to use the memory map produced by Model Library Format to define those memory pools in the C project on terms acceptable to the project’s linker script.

  • How to align this with non-static memory plans?

This is a big question I’m not sure about. I imagine something like this:

  1. split the graph into static and dynamic tensors
  2. dynamic tensors have size ranges; initially the max is expressed as sizeof(X) * n, n an expr in terms of input shapes
  3. the upper bounds of such size ranges become bounded by solving constraints once memory planning is complete
  4. offsets are computed at runtime using the expr defined above
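A minimal illustration of steps 2–4, with plain ints standing in for what would be TIR exprs (everything here is hypothetical):

```python
from typing import Tuple

def plan_dynamic(elem_size: int, n_upper: int, static_end: int) -> Tuple[int, int]:
    """Steps 2-3: reserve an upper-bounded region for a dynamic tensor
    sized sizeof(X) * n, with n <= n_upper from the input-shape
    constraints. Returns (base_offset, reserved_bytes); the region is
    placed after the statically planned tensors."""
    reserved = elem_size * n_upper
    return static_end, reserved

def runtime_extent(elem_size: int, n: int) -> int:
    """Step 4: the actual size expression, evaluated at runtime once
    the input shapes (and hence n) are known."""
    return elem_size * n
```

Any runtime `n` within its bound then stays inside the reserved region, so the static plan remains valid.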

cc’ing a few others who may have input here.

@junrushao1994 @tqchen @jroesch @manupa-arm @matt-arm @ramana-arm @stoa @aca88

Andrew

Hi @areusch,

thank you for your detailed feedback!

I agree with you that it makes sense to have an interface for different possible memory planners. Your proposed interface would already fit all my requirements.

Memory Pools: These seem nice for handling different memory properties with their constraints (like GPU memory), but I’m not convinced that they are the best tool for the property of size. The optimization process would become pretty hands-on, because if my application size changes, I may have to adjust the “remaining” memory pool for the ML model and every time make sure that memory is not wasted or that performance does not suffer too much from overly constrained memory. What we are used to from C compilers is a prioritization of performance or memory (-O3/-Os). In the non-prioritized category the compiler still tries to do the best possible job while avoiding unreasonable trade-offs. In the micro world I guess a higher amount of control over this trade-off could be desired, but not really for all TVM users, right?

Regarding lifetime and TIR: it was not entirely clear to me at which point the dataflow representation becomes the schedule on which we can start analyzing lifetimes. I saw that at the end of Optimize, the AutoScheduler is invoked, so I assume this is the point where the schedule is regarded as final. Is this accurate? It is a bit unclear to me because the representation there is still only dataflow.

Alignment: What I meant is that there could be alignment requirements that need more than word alignment, like for GPUs or certain SIMD operations. But this could be a property of the “pool” you mentioned and then be accounted for by the planners when placing buffers at offsets.

hi @r.stahl,

Great! I’ve added a few follow-up thoughts below. I am happy to put together an RFC around a memory planning interface, but it might be next week before it’s posted. Some follow-up replies:

The optimization process would become pretty hands-on, because if my application size changes, I may have to adjust the “remaining” memory pool for the ML model and everytime make sure that memory is not wasted or performance suffers too much from too constrained memory.

For microTVM, this is where I think project generators could help. My thought is that we would place the memory planner output in metadata in Model Library Format, and then downstream Project API implementations could consume that and declare global buffers sized to match the application demands.

On non-microTVM (e.g. traditional OS), I think we could still dynamically allocate memory pools using e.g. malloc, but this would effectively handle all of the system memory allocation with one malloc call per memory pool. GPU-specific memory pools can be allocated by the underlying GPU driver (however, some more work needs to be done to link the memory pool with the TVMDevice that provides it).

I saw that at the end of Optimize, the AutoScheduler is invoked, so I assume this is the point where the schedule is regarded as final. Is this accurate?

Right now, we don’t finalize the schedule until we’re already in GraphExecutorCodegen, so I think this may be the source of the confusion. I would like to move to a world where GraphExecutorCodegen consumes the top-level AOT TIR (or some similar thing).

Then later on, after PR 7518 is merged, we would like to be able to define compiler passes over the entire scheduled program. This would allow for full dataflow analysis, which until then will need to be confined to the single top-level TIR function generated by AOT.