Why is block dependency info attached to schedules?

Hi all,

I’ve recently run into a couple of optimization opportunities in TIR, and I realized that some of them might be useful as TIR transformation passes.

I also realized that these transforms are easier to implement and reason about at the level of BlockNodes, thanks to the clear dependence analysis available there. However, while trying to implement them, I found that all the dependence info is based on SRef trees, which are not available without a ScheduleState/BlockScope. As a result, it cannot be used directly when implementing a TIR transform pass (since we don’t create a schedule when applying transformation passes).
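To make the issue concrete, here is roughly what I have to do today to get at the dependence info (a minimal sketch, assuming the current Python bindings of Schedule/ScheduleState/BlockScope; the two-block workload below is just for illustration):

```python
import tvm
from tvm.script import tir as T


@tvm.script.ir_module
class Module:
    @T.prim_func
    def main(A: T.Buffer((128,), "float32"), C: T.Buffer((128,), "float32")):
        B = T.alloc_buffer((128,), "float32")
        for i in range(128):
            with T.block("producer"):
                vi = T.axis.spatial(128, i)
                B[vi] = A[vi] * 2.0
        for i in range(128):
            with T.block("consumer"):
                vi = T.axis.spatial(128, i)
                C[vi] = B[vi] + 1.0


# Today, the only way to reach the dependence info is through a
# ScheduleState (here owned by a Schedule), even if we never intend
# to run any schedule primitives.
sch = tvm.tir.Schedule(Module)
root_sref = sch.get_sref(sch.get_block("root"))
scope = sch.state.get_block_scope(root_sref)  # BlockScope holds the deps

producer_sref = sch.get_sref(sch.get_block("producer"))
for dep in scope.get_deps_by_src(producer_sref):
    # Expect a RAW dependency from "producer" to "consumer"
    print(dep.src, dep.dst, dep.kind)
```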

So I wanted to ask a high-level question: what is the reason behind having the dependence info attached to Schedules, and would it make sense to extract it into a separate class (perhaps as an analysis pass) that can be used in other passes?

P.S. I could split my optimization into an analysis pass and a schedule primitive that acts on it, but I think it might be better to implement it as a transform pass, since it doesn’t really need any input from the user.

Thanks in advance,
Anirudh

Hi @sanirudh, thanks for your excellent question.

You are right that the current block dependencies are embedded in the ScheduleState. Here are the reasons:

  1. The block dependencies must be kept up to date after transformations (primitives and passes). The current approach ensures we update the information after each schedule primitive (in the Replace API). Moreover, we update the info in place rather than re-analyzing from scratch, to reduce compilation cost.
  2. At the moment, we only use the dependency info in the Schedule.

I appreciate your use case of applying it in TIR transformation passes. Refactoring it into a separate class would be good. The class would need two major APIs (see the sketch after the list):

  1. create: build the dependence graph from a TIR module (used at the beginning of schedules and passes)
  2. update: update the dependences in place instead of creating a new graph (used in the primitives)
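
To make the proposal concrete, here is a rough Python-level sketch of what such a class could look like. The name BlockDependenceInfo and the method signatures below are purely illustrative, not existing TVM APIs:

```python
# Illustrative interface only -- the names below are hypothetical,
# not existing TVM APIs.
from tvm.ir import IRModule
from tvm.tir import Block
from tvm.tir.schedule import BlockScope, StmtSRef


class BlockDependenceInfo:
    """Standalone container for block-level dependences, usable both by
    Schedule (which keeps it updated incrementally) and by TIR passes
    (which only need to create and query it)."""

    def __init__(self, mod: IRModule):
        # (1) create: analyze the module once, building the SRef tree and
        # the per-scope dependence graphs (what ScheduleState does today).
        self._mod = mod
        # ... analysis elided ...

    def get_sref(self, block: Block) -> StmtSRef:
        """Look up the SRef corresponding to a block of the module."""
        raise NotImplementedError

    def get_block_scope(self, scope_root: StmtSRef) -> BlockScope:
        """Return the dependence edges of the scope rooted at a block."""
        raise NotImplementedError

    def replace(self, src_sref: StmtSRef, tgt_block: Block) -> None:
        """(2) update: patch the affected scopes after a replacement
        instead of re-running the whole analysis (used by primitives)."""
        raise NotImplementedError
```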

Please let me know if it makes sense to you.

Hi @Hzfengsy, thanks for your detailed answer. I understand the reasoning behind the existing workflow, and I also see the need for both create and update functions if we refactor the dependences into a separate class.

I’ll go through the code and try out some refactors. If it looks reasonable, I’ll raise a PR for more comments. Thanks again for the info.


Hi @Hzfengsy, I’ve gone through the code and now have a decent understanding of the dependence structure and how it is used in ScheduleState, BlockInfo and BlockScope. I’ve thought about different refactors, and the one that makes the most sense to me is to reuse the BlockScope-based dependences but extract them out of schedules, so that a separate analysis pass can also compute them.

My only open question is that the stage_pipeline member of BlockScopeNode might need to be moved out of BlockScope and into BlockInfo itself, so that BlockScope is exclusively about dependences.
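
For what it’s worth, here is how I picture the split, expressed as illustrative Python dataclasses (the real nodes are C++ classes under tir/schedule/, and I’m only listing the fields relevant to this discussion):

```python
# Illustrative layout only -- "Sketch" classes stand in for the real
# BlockScopeNode / BlockInfo; only the fields under discussion are shown.
from dataclasses import dataclass, field
from typing import Dict, List

from tvm.tir.schedule import Dependency, StmtSRef


@dataclass
class BlockScopeSketch:
    """After the refactor: purely about dependences, so a standalone
    analysis pass can produce it without a ScheduleState."""
    src2deps: Dict[StmtSRef, List[Dependency]] = field(default_factory=dict)
    dst2deps: Dict[StmtSRef, List[Dependency]] = field(default_factory=dict)


@dataclass
class BlockInfoSketch:
    """Schedule-specific per-block properties stay here, now including
    stage_pipeline moved over from BlockScopeNode."""
    scope: BlockScopeSketch
    affine_binding: bool = False
    region_cover: bool = False
    stage_pipeline: bool = False  # moved from BlockScopeNode
```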

Just wanted to get your thoughts on this idea. I think the code change itself should not be hard, but wanted to make sure that the design makes sense. Thanks again for your inputs.

The refactor looks good to me. Thanks for your great work! cc @spectrometerHBH @junrushao


Thanks a lot! I’ll raise PRs with those changes.