Weight-dependent optimization passes?

Hi, in my experiments compiling PyTorch models to shared libraries with TVM, I noticed that if I modify the model's weight values (e.g. zeroing out the parameters of a layer), the resulting library has different fusion results. I found this by comparing the graph-executor JSONs of the compiled libraries, where the `tvmgen_default_fused_*` function names indicate that different operators were fused. However, when I inspect the Relay IR of the two modules, it is identical. So my question is: are there optimization passes (or similar mechanisms) whose behavior depends on the concrete weight (parameter) values? If so, which ones? Thanks!
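For reference, here is a minimal sketch of how I diffed the fused-operator names between two builds. It only assumes the graph-executor JSON layout (`"nodes"` entries with an `attrs.func_name` field); the two inline JSON snippets below are toy stand-ins for what `lib.get_graph_json()` returns, not real compiler output:

```python
import json

def fused_op_names(graph_json: str) -> set:
    """Collect fused-operator function names from a graph-executor JSON string."""
    graph = json.loads(graph_json)
    names = set()
    for node in graph.get("nodes", []):
        func_name = node.get("attrs", {}).get("func_name", "")
        if func_name.startswith("tvmgen_default_fused_"):
            names.add(func_name)
    return names

# Toy stand-ins for lib.get_graph_json() from two builds of the same model,
# one with original weights and one with a layer zeroed out.
json_a = json.dumps({"nodes": [
    {"op": "tvm_op", "attrs": {"func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu"}},
]})
json_b = json.dumps({"nodes": [
    {"op": "tvm_op", "attrs": {"func_name": "tvmgen_default_fused_nn_conv2d_add"}},
    {"op": "tvm_op", "attrs": {"func_name": "tvmgen_default_fused_nn_relu"}},
]})

only_a = fused_op_names(json_a) - fused_op_names(json_b)
only_b = fused_op_names(json_b) - fused_op_names(json_a)
print("only in build A:", sorted(only_a))
print("only in build B:", sorted(only_b))
```

In this toy diff, build A fused conv2d + add + relu into one kernel while build B split relu into its own kernel, which is the kind of discrepancy I am seeing.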