How does the PlanDevices pass work?

I’m trying to trace the process of compiling an ONNX model, using resnet18-v1-7.onnx as the example.
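Roughly, I load the model and print the Relay IR like this (a minimal sketch; the exact arguments may differ, and the input name "data" matches the IR below):

import onnx
import tvm
from tvm import relay

# Load the ONNX model and convert it into a Relay module.
onnx_model = onnx.load("resnet18-v1-7.onnx")
shape_dict = {"data": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Print the type-annotated Relay IR of the main function.
mod = relay.transform.InferType()(mod)
print(mod["main"])

This is the printed IR: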

def @main(%data: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */) → Tensor[(1, 1000), float32] {

%0 = nn.conv2d(%data, meta[relay.Constant][0] /* ty=Tensor[(64, 3, 7, 7), float32] */, strides=[2, 2], padding=[3, 3, 3, 3], channels=64, kernel_size=[7, 7]) /* ty=Tensor[(1, 64, 112, 112), float32] */;

%1 = nn.batch_norm(%0, meta[relay.Constant][1] /* ty=Tensor[(64), float32] */, meta[relay.Constant][2] /* ty=Tensor[(64), float32] */, meta[relay.Constant][3] /* ty=Tensor[(64), float32] */, meta[relay.Constant][4] /* ty=Tensor[(64), float32] */) /* ty=(Tensor[(1, 64, 112, 112), float32], Tensor[(64), float32], Tensor[(64), float32]) */;

%2 = %1.0 /* ty=Tensor[(1, 64, 112, 112), float32] */;

%3 = nn.relu(%2) /* ty=Tensor[(1, 64, 112, 112), float32] */;

%4 = nn.max_pool2d(%3, pool_size=[3, 3], strides=[2, 2], padding=[1, 1, 1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%5 = nn.conv2d(%4, meta[relay.Constant][5] /* ty=Tensor[(64, 64, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%6 = nn.batch_norm(%5, meta[relay.Constant][6] /* ty=Tensor[(64), float32] */, meta[relay.Constant][7] /* ty=Tensor[(64), float32] */, meta[relay.Constant][8] /* ty=Tensor[(64), float32] */, meta[relay.Constant][9] /* ty=Tensor[(64), float32] */) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;

%7 = %6.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;

%8 = nn.relu(%7) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%9 = nn.conv2d(%8, meta[relay.Constant][10] /* ty=Tensor[(64, 64, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%10 = nn.batch_norm(%9, meta[relay.Constant][11] /* ty=Tensor[(64), float32] */, meta[relay.Constant][12] /* ty=Tensor[(64), float32] */, meta[relay.Constant][13] /* ty=Tensor[(64), float32] */, meta[relay.Constant][14] /* ty=Tensor[(64), float32] */) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;

%11 = %10.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;

%12 = add(%4, %11) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%13 = nn.relu(%12) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%14 = nn.conv2d(%13, meta[relay.Constant][15] /* ty=Tensor[(64, 64, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%15 = nn.batch_norm(%14, meta[relay.Constant][16] /* ty=Tensor[(64), float32] */, meta[relay.Constant][17] /* ty=Tensor[(64), float32] */, meta[relay.Constant][18] /* ty=Tensor[(64), float32] */, meta[relay.Constant][19] /* ty=Tensor[(64), float32] */) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;

%16 = %15.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;

%17 = nn.relu(%16) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%18 = nn.conv2d(%17, meta[relay.Constant][20] /* ty=Tensor[(64, 64, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%19 = nn.batch_norm(%18, meta[relay.Constant][21] /* ty=Tensor[(64), float32] */, meta[relay.Constant][22] /* ty=Tensor[(64), float32] */, meta[relay.Constant][23] /* ty=Tensor[(64), float32] */, meta[relay.Constant][24] /* ty=Tensor[(64), float32] */) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;

%20 = %19.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;

%21 = add(%13, %20) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%22 = nn.relu(%21) /* ty=Tensor[(1, 64, 56, 56), float32] */;

%23 = nn.conv2d(%22, meta[relay.Constant][25] /* ty=Tensor[(128, 64, 1, 1), float32] */, strides=[2, 2], padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%24 = nn.batch_norm(%23, meta[relay.Constant][26] /* ty=Tensor[(128), float32] */, meta[relay.Constant][27] /* ty=Tensor[(128), float32] */, meta[relay.Constant][28] /* ty=Tensor[(128), float32] */, meta[relay.Constant][29] /* ty=Tensor[(128), float32] */) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;

%25 = nn.conv2d(%22, meta[relay.Constant][30] /* ty=Tensor[(128, 64, 3, 3), float32] */, strides=[2, 2], padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%26 = nn.batch_norm(%25, meta[relay.Constant][31] /* ty=Tensor[(128), float32] */, meta[relay.Constant][32] /* ty=Tensor[(128), float32] */, meta[relay.Constant][33] /* ty=Tensor[(128), float32] */, meta[relay.Constant][34] /* ty=Tensor[(128), float32] */) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;

%27 = %26.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;

%28 = nn.relu(%27) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%29 = nn.conv2d(%28, meta[relay.Constant][35] /* ty=Tensor[(128, 128, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%30 = nn.batch_norm(%29, meta[relay.Constant][36] /* ty=Tensor[(128), float32] */, meta[relay.Constant][37] /* ty=Tensor[(128), float32] */, meta[relay.Constant][38] /* ty=Tensor[(128), float32] */, meta[relay.Constant][39] /* ty=Tensor[(128), float32] */) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;

%31 = %24.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;

%32 = %30.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;

%33 = add(%31, %32) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%34 = nn.relu(%33) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%35 = nn.conv2d(%34, meta[relay.Constant][40] /* ty=Tensor[(128, 128, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%36 = nn.batch_norm(%35, meta[relay.Constant][41] /* ty=Tensor[(128), float32] */, meta[relay.Constant][42] /* ty=Tensor[(128), float32] */, meta[relay.Constant][43] /* ty=Tensor[(128), float32] */, meta[relay.Constant][44] /* ty=Tensor[(128), float32] */) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;

%37 = %36.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;

%38 = nn.relu(%37) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%39 = nn.conv2d(%38, meta[relay.Constant][45] /* ty=Tensor[(128, 128, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%40 = nn.batch_norm(%39, meta[relay.Constant][46] /* ty=Tensor[(128), float32] */, meta[relay.Constant][47] /* ty=Tensor[(128), float32] */, meta[relay.Constant][48] /* ty=Tensor[(128), float32] */, meta[relay.Constant][49] /* ty=Tensor[(128), float32] */) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;

%41 = %40.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;

%42 = add(%34, %41) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%43 = nn.relu(%42) /* ty=Tensor[(1, 128, 28, 28), float32] */;

%44 = nn.conv2d(%43, meta[relay.Constant][50] /* ty=Tensor[(256, 128, 1, 1), float32] */, strides=[2, 2], padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%45 = nn.batch_norm(%44, meta[relay.Constant][51] /* ty=Tensor[(256), float32] */, meta[relay.Constant][52] /* ty=Tensor[(256), float32] */, meta[relay.Constant][53] /* ty=Tensor[(256), float32] */, meta[relay.Constant][54] /* ty=Tensor[(256), float32] */) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;

%46 = nn.conv2d(%43, meta[relay.Constant][55] /* ty=Tensor[(256, 128, 3, 3), float32] */, strides=[2, 2], padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%47 = nn.batch_norm(%46, meta[relay.Constant][56] /* ty=Tensor[(256), float32] */, meta[relay.Constant][57] /* ty=Tensor[(256), float32] */, meta[relay.Constant][58] /* ty=Tensor[(256), float32] */, meta[relay.Constant][59] /* ty=Tensor[(256), float32] */) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;

%48 = %47.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;

%49 = nn.relu(%48) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%50 = nn.conv2d(%49, meta[relay.Constant][60] /* ty=Tensor[(256, 256, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%51 = nn.batch_norm(%50, meta[relay.Constant][61] /* ty=Tensor[(256), float32] */, meta[relay.Constant][62] /* ty=Tensor[(256), float32] */, meta[relay.Constant][63] /* ty=Tensor[(256), float32] */, meta[relay.Constant][64] /* ty=Tensor[(256), float32] */) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;

%52 = %45.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;

%53 = %51.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;

%54 = add(%52, %53) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%55 = nn.relu(%54) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%56 = nn.conv2d(%55, meta[relay.Constant][65] /* ty=Tensor[(256, 256, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%57 = nn.batch_norm(%56, meta[relay.Constant][66] /* ty=Tensor[(256), float32] */, meta[relay.Constant][67] /* ty=Tensor[(256), float32] */, meta[relay.Constant][68] /* ty=Tensor[(256), float32] */, meta[relay.Constant][69] /* ty=Tensor[(256), float32] */) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;

%58 = %57.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;

%59 = nn.relu(%58) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%60 = nn.conv2d(%59, meta[relay.Constant][70] /* ty=Tensor[(256, 256, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%61 = nn.batch_norm(%60, meta[relay.Constant][71] /* ty=Tensor[(256), float32] */, meta[relay.Constant][72] /* ty=Tensor[(256), float32] */, meta[relay.Constant][73] /* ty=Tensor[(256), float32] */, meta[relay.Constant][74] /* ty=Tensor[(256), float32] */) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;

%62 = %61.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;

%63 = add(%55, %62) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%64 = nn.relu(%63) /* ty=Tensor[(1, 256, 14, 14), float32] */;

%65 = nn.conv2d(%64, meta[relay.Constant][75] /* ty=Tensor[(512, 256, 1, 1), float32] */, strides=[2, 2], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%66 = nn.batch_norm(%65, meta[relay.Constant][76] /* ty=Tensor[(512), float32] */, meta[relay.Constant][77] /* ty=Tensor[(512), float32] */, meta[relay.Constant][78] /* ty=Tensor[(512), float32] */, meta[relay.Constant][79] /* ty=Tensor[(512), float32] */) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;

%67 = nn.conv2d(%64, meta[relay.Constant][80] /* ty=Tensor[(512, 256, 3, 3), float32] */, strides=[2, 2], padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%68 = nn.batch_norm(%67, meta[relay.Constant][81] /* ty=Tensor[(512), float32] */, meta[relay.Constant][82] /* ty=Tensor[(512), float32] */, meta[relay.Constant][83] /* ty=Tensor[(512), float32] */, meta[relay.Constant][84] /* ty=Tensor[(512), float32] */) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;

%69 = %68.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;

%70 = nn.relu(%69) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%71 = nn.conv2d(%70, meta[relay.Constant][85] /* ty=Tensor[(512, 512, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%72 = nn.batch_norm(%71, meta[relay.Constant][86] /* ty=Tensor[(512), float32] */, meta[relay.Constant][87] /* ty=Tensor[(512), float32] */, meta[relay.Constant][88] /* ty=Tensor[(512), float32] */, meta[relay.Constant][89] /* ty=Tensor[(512), float32] */) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;

%73 = %66.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;

%74 = %72.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;

%75 = add(%73, %74) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%76 = nn.relu(%75) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%77 = nn.conv2d(%76, meta[relay.Constant][90] /* ty=Tensor[(512, 512, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%78 = nn.batch_norm(%77, meta[relay.Constant][91] /* ty=Tensor[(512), float32] */, meta[relay.Constant][92] /* ty=Tensor[(512), float32] */, meta[relay.Constant][93] /* ty=Tensor[(512), float32] */, meta[relay.Constant][94] /* ty=Tensor[(512), float32] */) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;

%79 = %78.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;

%80 = nn.relu(%79) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%81 = nn.conv2d(%80, meta[relay.Constant][95] /* ty=Tensor[(512, 512, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%82 = nn.batch_norm(%81, meta[relay.Constant][96] /* ty=Tensor[(512), float32] */, meta[relay.Constant][97] /* ty=Tensor[(512), float32] */, meta[relay.Constant][98] /* ty=Tensor[(512), float32] */, meta[relay.Constant][99] /* ty=Tensor[(512), float32] */) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;

%83 = %82.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;

%84 = add(%76, %83) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%85 = nn.relu(%84) /* ty=Tensor[(1, 512, 7, 7), float32] */;

%86 = nn.global_avg_pool2d(%85) /* ty=Tensor[(1, 512, 1, 1), float32] */;

%87 = nn.batch_flatten(%86) /* ty=Tensor[(1, 512), float32] */;

%88 = nn.dense(%87, meta[relay.Constant][100] /* ty=Tensor[(1000, 512), float32] */, units=1000) /* ty=Tensor[(1, 1000), float32] */;

add(%88, meta[relay.Constant][101] /* ty=Tensor[(1000), float32] */) /* ty=Tensor[(1, 1000), float32] */

}

I found that the PlanDevices pass processed some strange ops that do not appear in the IR above, for example:

/root/tvm/src/relay/transforms/device_planner.cc:569: Build: FoldConstant: FoldConstantExpr: ConstEvaluate: PlanDevicesCore: DeviceAnalyzer: initial function domain:

fn():?53345968?

and function body domain:

?53345968?

for function:

fn (executor=meta[Executor][0], runtime=meta[Runtime][0]) → Tensor[(64), float32] {

add(meta[relay.Constant][0] /* ty=Tensor[(64), float32] */, 1e-05f /* ty=float32 */) /* ty=Tensor[(64), float32] */

} /* ty=fn () → Tensor[(64), float32] */

and:

/root/tvm/src/relay/transforms/device_planner.cc:569: Build: FoldConstant: FoldConstantExpr: ConstEvaluate: PlanDevicesCore: DeviceAnalyzer: initial function domain:

fn():?48159776?

and function body domain:

?48159776?

for function:

fn (executor=meta[Executor][0], runtime=meta[Runtime][0]) → Tensor[(64), float32] {

sqrt(meta[relay.Constant][0] /* ty=Tensor[(64), float32] */) /* ty=Tensor[(64), float32] */

} /* ty=fn () → Tensor[(64), float32] */

This confuses me a lot. Where do these functions (add, sqrt) come from? Thanks.

Hi @ironboot, one of the default passes in TVM is relay.transform.SimplifyInference. This pass expands batch_norm into its constituent operators, which include add and sqrt. That pass is almost certainly being applied before the PlanDevices pass, which is why PlanDevices is printing information about those ops. If you’d like to see how the module is simplified, you could try applying SimplifyInference on its own, with something like print(relay.transform.SimplifyInference()(module)).
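To make the connection concrete: at inference time batch_norm is rewritten to roughly (data - moving_mean) / sqrt(moving_var + epsilon) * gamma + beta, so the add(..., 1e-05f) and sqrt(...) closures in your log are the epsilon-add and the square root from that expansion, which FoldConstant then evaluates at compile time (hence the "FoldConstant: ... ConstEvaluate: PlanDevicesCore" prefix). A minimal sketch of inspecting the simplified module, assuming mod is the Relay module you printed above:

import tvm
from tvm import relay

# SimplifyInference needs type information, so run InferType first, then
# expand batch_norm (and similar inference-only ops) into simpler operators.
simplified = relay.transform.SimplifyInference()(relay.transform.InferType()(mod))
print(simplified["main"])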

Hi @jwfromm, thanks for the explanation. I found that the GetPassPrefix() function, which I had overlooked earlier, registers a lot of passes, including the one you mentioned.
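For anyone else tracing this, here is a sketch of running a couple of those prefix passes by hand to see the intermediate IR (the full list in GetPassPrefix() is considerably longer; mod is assumed to be the module printed above):

import tvm
from tvm import relay

# A small subset of the passes the build pipeline applies.
seq = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),  # expands batch_norm into add/sqrt/...
        relay.transform.FoldConstant(),       # evaluates the resulting constant subexpressions
    ]
)
with tvm.transform.PassContext(opt_level=3):
    print(seq(mod)["main"])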