Here is my CPU information:
Which target string should I use?
llvm -mcpu=???
If you have LLVM installed, you can run llc --version | grep "Host CPU".
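Wrapping that query in a small helper makes it easy to build the target string programmatically. A minimal sketch, assuming only that llc is on PATH and prints a "Host CPU:" line (detect_target is a hypothetical helper, not a TVM API):

```python
import shutil
import subprocess

def detect_target(default_cpu="generic"):
    """Build an 'llvm -mcpu=...' target string from llc's reported host CPU."""
    cpu = default_cpu
    if shutil.which("llc"):
        out = subprocess.run(
            ["llc", "--version"], capture_output=True, text=True
        ).stdout
        for line in out.splitlines():
            if "Host CPU:" in line:
                cpu = line.split("Host CPU:")[1].strip()
                break
    return f"llvm -mcpu={cpu}"
```

On a first-generation Zen machine this would yield "llvm -mcpu=znver1"; if llc is not installed it falls back to the default.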
Thank you for your suggestion! When I run llc --version | grep "Host CPU", I get "Host CPU: znver1". But when I run the tune_relay_x86.py tutorial in Spyder, the following problems are reported:
For x86 target, NCHW layout is recommended for conv2d.
Cannot find tuned schedules for target=llvm -keys=cpu -link-params=0 -mcpu=znver1, workload_key=["537c8642716948c33a6eaaabc86b159d"]. A fallback TOPI schedule is used, which may bring great performance regression or even compilation failure. Compute DAG info:
placeholder = PLACEHOLDER [1, 14, 14, 1024]
PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
placeholder = PLACEHOLDER [1, 1, 1024, 2048]
Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
I’d appreciate your help on how to avoid this error so the tuning can finish.
Hi @zihediaoyuwang, I think the problem is that you are running an "NHWC" network, while the x86 target is optimized for "NCHW". To remove this error, after you load the network:
mod, params, data_shape, out_shape = get_network(model_name, batch_size)
Add the following code (with the imports it needs):

from tvm import relay, transform

desired_layout = "NCHW"
# Assume for the time being that graphs only have
# conv2d as heavily layout-sensitive operators.
desired_layouts = {
    "nn.conv2d": [desired_layout, "default"],
    "qnn.conv2d": [desired_layout, "default"],
}
# Convert the layout of the graph where possible.
seq = transform.Sequential(
    [
        relay.transform.RemoveUnusedFunctions(),
        relay.transform.ConvertLayout(desired_layouts),
    ]
)
mod = seq(mod)
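The per-op mapping is the key part: each layout-sensitive op gets its desired data layout plus "default" for the kernel layout, so Relay picks a matching kernel layout automatically. A simplified, hypothetical sketch of how such a dict can be built from a single layout string (not the actual tvmc code):

```python
def build_desired_layouts(desired_layout):
    """Map one desired data layout (e.g. "NCHW") onto the layout-sensitive
    conv2d ops; "default" lets Relay infer the matching kernel layout."""
    if desired_layout is None:
        return None
    return {
        "nn.conv2d": [desired_layout, "default"],
        "qnn.conv2d": [desired_layout, "default"],
    }
```

The resulting dict is exactly what relay.transform.ConvertLayout expects as its argument.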
If you want to load your own TFLite network, you may want to use tvmc (see this tutorial) with the option --desired-layout NCHW. Actually, the snippet above is copied from python/tvm/driver/tvmc/common.py.
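For reference, a tvmc invocation using that option might look like the following (the model path and -mcpu value are placeholders; check tvmc compile --help for the exact flags in your version):

```shell
tvmc compile \
    --target "llvm -mcpu=znver1" \
    --desired-layout NCHW \
    --output module.tar \
    model.tflite
```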
Hope this helps,