Mkldnn verbose doesn't work

Hi.

I have a question about USE_MKLDNN.

My build options : LLVM ON, BLAS none, USE_MKL /opt/intel/mkl, USE_MKLDNN ON

Even though I set MKLDNN_VERBOSE=1, no output about MKLDNN is printed during TVM relay build or module run…

TVM uses MKL (MKLDNN) for the dense layer, so why does this happen? Is TVM not using MKLDNN properly?
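For reference, this is roughly how I set the variable (a minimal sketch; I'm assuming that setting it from Python before importing tvm is equivalent to exporting it in the shell):

```python
import os

# Set the flag before tvm / the MKLDNN-backed libraries are loaded, so the
# library reads it at initialization. (MKLDNN_VERBOSE is the variable name
# used by older MKL-DNN releases; newer oneDNN builds read DNNL_VERBOSE.)
os.environ["MKLDNN_VERBOSE"] = "1"

import tvm  # imported after the environment variable is set
```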

Did you add -libs=mkl,mkldnn to your target?

Yes. target = “llvm -mcpu=cascadelake -libs=mkldnn”
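For context, this is a minimal sketch of how I pass that target to relay.build (mod and params are assumed to come from a Relay frontend importer, which I've omitted):

```python
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# mod, params are assumed to come from a Relay frontend importer,
# e.g. relay.frontend.from_pytorch or from_onnx (omitted here).
target = "llvm -mcpu=cascadelake -libs=mkldnn"

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
```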

In this case MKL_VERBOSE=1 also works. It seems to me that MKL and MKLDNN are not completely separate but have some overlapping parts.

By the way, I still cannot understand why MKLDNN_VERBOSE=1 doesn't work.

At relay build time, I saw warnings like: Cannot find config … workload(“dense_mkldnn.x86”)…

I’m not sure why MKLDNN_VERBOSE=1 doesn’t work. The warning shown during compilation is fine; it just means that AutoTVM doesn’t find a log record corresponding to “dense_mkldnn.x86”.

Oh…

If I do not use AutoTVM to tune my graph, is MKLDNN not applied?

What I know about AutoTVM is that it tunes my graph's operators, e.g. the 'for' loops, using TVM schedule primitives… so are MKLDNN and the -libs options used like TVM schedule primitives?

Sorry about the confusion. You can still use MKLDNN without AutoTVM tuning. The warning only indicates that AutoTVM cannot find the profiling results for the MKLDNN kernel.
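To illustrate (a rough sketch, where "tune.log" is just a hypothetical tuning-log file name and mod/target/params are whatever you already pass to relay.build): if you did have tuning results you would apply them around the build, and without them the same build simply falls back to default schedules, which is what the warning refers to.

```python
import tvm
from tvm import autotvm, relay

# With tuning results: wrap the build so AutoTVM picks up the tuned configs.
with autotvm.apply_history_best("tune.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

# Without a log, the same relay.build call still works; AutoTVM just falls
# back to its default schedules and prints the "Cannot find config" warning.
```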

In PyTorch, the graph is converted to an MKLDNN graph (using to_mkldnn), and then the graph is compiled.
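Roughly like this, for example (a sketch; resnet50 is just a placeholder model):

```python
import torch
import torchvision
from torch.utils import mkldnn as mkldnn_utils

model = torchvision.models.resnet50().eval()
x = torch.randn(1, 3, 224, 224)

# Convert the model's weights and the input to the MKLDNN (oneDNN) layout,
# then run the converted graph.
mkldnn_model = mkldnn_utils.to_mkldnn(model)
with torch.no_grad():
    out = mkldnn_model(x.to_mkldnn())
    out = out.to_dense()  # convert the MKLDNN output back to a regular tensor
```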

In TVM, is the same process as in PyTorch included in relay.build? That is, normal graph → MKLDNN graph.

If that's right, where can I find the graph (dense layer) with MKLDNN applied?

I already know that TVM only applies MKLDNN to the dense layer.
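Is inspecting the built graph like this the right way to see it? (A rough sketch; I'm not sure lib.get_graph_json() is the intended way to check which kernels were picked, and mod/params are assumed to come from a frontend importer.)

```python
import json
import tvm
from tvm import relay

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm -mcpu=cascadelake -libs=mkldnn", params=params)

# The graph JSON lists the fused/lowered node names; I looked here for
# anything mentioning dense / mkldnn.
graph = json.loads(lib.get_graph_json())
for node in graph["nodes"]:
    print(node["name"])
```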

The reason for all these questions is that using mkldnn did not show any improvement in performance.

Is it correct that there is no performance improvement?

<<<resnet50, batch_size = 1>>>
target = llvm -mcpu=cascadelake => 13 ms
target = llvm -mcpu=cascadelake -libs=mkldnn => 13 ms
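(In case it matters, a typical way such numbers are measured is TVM's time_evaluator; here is a rough sketch, where lib is the module built with the corresponding target and "data" is just a placeholder input name.)

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor

dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.randn(1, 3, 224, 224).astype("float32"))

# Run the whole graph several times and report the mean wall-clock time.
timer = module.module.time_evaluator("run", dev, number=10, repeat=3)
print("mean inference time: %.2f ms" % (np.mean(timer().results) * 1000))
```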

When you use MKLDNN as a third-party library, TVM only supports the dense op with MKLDNN. What you want is BYOC (Bring Your Own Codegen) with MKLDNN. You can refer to this blog post:

https://tvm.apache.org/2020/07/15/how-to-bring-your-own-codegen-to-tvm#bring-dnnl-to-tvm-c-source-codegen
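Roughly, the BYOC flow from that post looks like the sketch below (pass names follow the upstream DNNL integration; it assumes your TVM build has the DNNL codegen enabled and that mod/params come from a frontend importer):

```python
import tvm
from tvm import relay

# Annotate the ops supported by the "dnnl" codegen, merge the annotated
# regions, and split them out into external functions handled by DNNL.
mod = relay.transform.AnnotateTarget("dnnl")(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm -mcpu=cascadelake", params=params)
```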

Thanks a lot!

Have a nice day

What target string should I use to make a TVM-compiled model use the MKL library?

Searching shows the following possible options:

llvm -mcpu=skylake-avx512 -libs=dnnl
llvm -mcpu=skylake-avx512 -libs=mkldnn
llvm -mcpu=skylake-avx512 -libs=mkl

But which option(s) are actually correct?