TFLite: an internal invariant was violated while typechecking your program

I tried to compile a TFLite model with the latest TVM, as described in this tutorial:
https://docs.tvm.ai/tutorials/frontend/from_tflite.html
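For reference, my compile.py follows the tutorial closely. A rough sketch of it is below; the model path and the "input" tensor name are the tutorial's defaults, so treat them as placeholders rather than exact values from my script:

import tflite.Model
from tvm import relay

# Load the extracted TFLite flatbuffer (path is a placeholder for illustration).
model_path = "/home/ubuntu/.tvm_test_data/tf/official/mobilenet_v1_1.0_224.tflite"
tflite_model_buf = open(model_path, "rb").read()
tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0)
print(tflite_model)

# MobileNet V1 expects a single NHWC float32 input named "input".
shape_dict = {"input": (1, 224, 224, 3)}
dtype_dict = {"input": "float32"}
func, params = relay.frontend.from_tflite(tflite_model,
                                          shape_dict=shape_dict,
                                          dtype_dict=dtype_dict)

# The failure happens in relay.build (see traceback below).
target = "llvm"
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(func, target, params=params)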
I got the following error:

$:~/workplace/compile-tflite$ ./compile.py 
File /home/ubuntu/.tvm_test_data/tf/official/mobilenet_v1_1.0_224.tgz exists, skip.
<tflite.Model.Model object at 0x7fcb2ef50d68>
Traceback (most recent call last):
  File "./compile.py", line 50, in <module>
    graph, lib, params = relay.build(func, target, params=params)
  File "/usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/build_module.py", line 356, in build
    params)
  File "/usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/build_module.py", line 183, in build
    self._build(func, target, target_host)
  File "tvm/_ffi/_cython/./function.pxi", line 310, in tvm._ffi._cy3.core.FunctionBase.__call__
  File "tvm/_ffi/_cython/./function.pxi", line 245, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./function.pxi", line 234, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 170, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x4a2e26) [0x7fcb115c1e26]
  [bt] (7) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x75a47d) [0x7fcb1187947d]
  [bt] (6) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Expr const&, tvm::relay::Module const&)+0x3cb) [0x7fcb118786ab]
  [bt] (5) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x56221a) [0x7fcb1168121a]
  [bt] (4) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x5617c0) [0x7fcb116807c0]
  [bt] (3) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x387) [0x7fcb11878d27]
  [bt] (2) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x7590a6) [0x7fcb118780a6]
  [bt] (1) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x536b78) [0x7fcb11655b78]
  [bt] (0) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x14d3a3) [0x7fcb1126c3a3]
  [bt] (8) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Expr const&, tvm::relay::Module const&)+0x3cb) [0x7fcb118786ab]
  [bt] (7) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x56221a) [0x7fcb1168121a]
  [bt] (6) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x5617c0) [0x7fcb116807c0]
  [bt] (5) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x387) [0x7fcb11878d27]
  [bt] (4) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x75908a) [0x7fcb1187808a]
  [bt] (3) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x76f614) [0x7fcb1188e614]
  [bt] (2) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x5a2f16) [0x7fcb116c1f16]
  [bt] (1) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x61cf98) [0x7fcb1173bf98]
  [bt] (0) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x14d3a3) [0x7fcb1126c3a3]
  [bt] (8) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x5617c0) [0x7fcb116807c0]
  [bt] (7) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x387) [0x7fcb11878d27]
  [bt] (6) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x75908a) [0x7fcb1187808a]
  [bt] (5) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x76f614) [0x7fcb1188e614]
  [bt] (4) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x5a2f16) [0x7fcb116c1f16]
  [bt] (3) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x5be2e2) [0x7fcb116dd2e2]
  [bt] (2) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x774581) [0x7fcb11893581]
  [bt] (1) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x77442a) [0x7fcb1189342a]
  [bt] (0) /usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x14d3a3) [0x7fcb1126c3a3]
  File "/home/ubuntu/tvm/src/relay/ir/error.cc", line 132
TVMError: 
Error(s) have occurred. We have annotated the program with them:

In `main`: 
v0.0.1
fn (%input: Tensor[(1, 224, 224, 3), float32]) {
  %0 = nn.pad(%input, pad_width=[[0, 0], [0, 0], [0, 1], [0, 1]])
  %1 = nn.conv2d(%0, meta[relay.Constant][0], strides=[2, 2], channels=32, kernel_size=[3, 3])an internal invariant was violated while typechecking your program [14:46:24] /home/ubuntu/tvm/src/relay/pass/type_solver.cc:119: Check failed: resolved.defined(): Unable to unify parent types: TensorType([32, 224, 3, 3], float32) and TensorType([32, 3, 3, 3], float32)
; 
  %2 = nn.bias_add(%1, meta[relay.Constant][1])
  %3 = clip(%2, a_min=0, a_max=6)
  %4 = nn.pad(%3, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %5 = nn.conv2d(%4, meta[relay.Constant][2], groups=32, channels=32, kernel_size=[3, 3])
  %6 = nn.bias_add(%5, meta[relay.Constant][3])
  %7 = clip(%6, a_min=0, a_max=6)
  %8 = nn.pad(%7, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %9 = nn.conv2d(%8, meta[relay.Constant][4], channels=64, kernel_size=[1, 1])
  %10 = nn.bias_add(%9, meta[relay.Constant][5])
  %11 = clip(%10, a_min=0, a_max=6)
  %12 = nn.pad(%11, pad_width=[[0, 0], [0, 0], [0, 1], [0, 1]])
  %13 = nn.conv2d(%12, meta[relay.Constant][6], strides=[2, 2], groups=64, channels=64, kernel_size=[3, 3])
  %14 = nn.bias_add(%13, meta[relay.Constant][7])
  %15 = clip(%14, a_min=0, a_max=6)
  %16 = nn.pad(%15, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %17 = nn.conv2d(%16, meta[relay.Constant][8], channels=128, kernel_size=[1, 1])
  %18 = nn.bias_add(%17, meta[relay.Constant][9])
  %19 = clip(%18, a_min=0, a_max=6)
  %20 = nn.pad(%19, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %21 = nn.conv2d(%20, meta[relay.Constant][10], groups=128, channels=128, kernel_size=[3, 3])
  %22 = nn.bias_add(%21, meta[relay.Constant][11])
  %23 = clip(%22, a_min=0, a_max=6)
  %24 = nn.pad(%23, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %25 = nn.conv2d(%24, meta[relay.Constant][12], channels=128, kernel_size=[1, 1])
  %26 = nn.bias_add(%25, meta[relay.Constant][13])
  %27 = clip(%26, a_min=0, a_max=6)
  %28 = nn.pad(%27, pad_width=[[0, 0], [0, 0], [0, 1], [0, 1]])
  %29 = nn.conv2d(%28, meta[relay.Constant][14], strides=[2, 2], groups=128, channels=128, kernel_size=[3, 3])
  %30 = nn.bias_add(%29, meta[relay.Constant][15])
  %31 = clip(%30, a_min=0, a_max=6)
  %32 = nn.pad(%31, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %33 = nn.conv2d(%32, meta[relay.Constant][16], channels=256, kernel_size=[1, 1])
  %34 = nn.bias_add(%33, meta[relay.Constant][17])
  %35 = clip(%34, a_min=0, a_max=6)
  %36 = nn.pad(%35, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %37 = nn.conv2d(%36, meta[relay.Constant][18], groups=256, channels=256, kernel_size=[3, 3])
  %38 = nn.bias_add(%37, meta[relay.Constant][19])
  %39 = clip(%38, a_min=0, a_max=6)
  %40 = nn.pad(%39, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %41 = nn.conv2d(%40, meta[relay.Constant][20], channels=256, kernel_size=[1, 1])
  %42 = nn.bias_add(%41, meta[relay.Constant][21])
  %43 = clip(%42, a_min=0, a_max=6)
  %44 = nn.pad(%43, pad_width=[[0, 0], [0, 0], [0, 1], [0, 1]])
  %45 = nn.conv2d(%44, meta[relay.Constant][22], strides=[2, 2], groups=256, channels=256, kernel_size=[3, 3])
  %46 = nn.bias_add(%45, meta[relay.Constant][23])
  %47 = clip(%46, a_min=0, a_max=6)
  %48 = nn.pad(%47, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %49 = nn.conv2d(%48, meta[relay.Constant][24], channels=512, kernel_size=[1, 1])
  %50 = nn.bias_add(%49, meta[relay.Constant][25])
  %51 = clip(%50, a_min=0, a_max=6)
  %52 = nn.pad(%51, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %53 = nn.conv2d(%52, meta[relay.Constant][26], groups=512, channels=512, kernel_size=[3, 3])
  %54 = nn.bias_add(%53, meta[relay.Constant][27])
  %55 = clip(%54, a_min=0, a_max=6)
  %56 = nn.pad(%55, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %57 = nn.conv2d(%56, meta[relay.Constant][28], channels=512, kernel_size=[1, 1])
  %58 = nn.bias_add(%57, meta[relay.Constant][29])
  %59 = clip(%58, a_min=0, a_max=6)
  %60 = nn.pad(%59, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %61 = nn.conv2d(%60, meta[relay.Constant][30], groups=512, channels=512, kernel_size=[3, 3])
  %62 = nn.bias_add(%61, meta[relay.Constant][31])
  %63 = clip(%62, a_min=0, a_max=6)
  %64 = nn.pad(%63, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %65 = nn.conv2d(%64, meta[relay.Constant][32], channels=512, kernel_size=[1, 1])
  %66 = nn.bias_add(%65, meta[relay.Constant][33])
  %67 = clip(%66, a_min=0, a_max=6)
  %68 = nn.pad(%67, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %69 = nn.conv2d(%68, meta[relay.Constant][34], groups=512, channels=512, kernel_size=[3, 3])
  %70 = nn.bias_add(%69, meta[relay.Constant][35])
  %71 = clip(%70, a_min=0, a_max=6)
  %72 = nn.pad(%71, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %73 = nn.conv2d(%72, meta[relay.Constant][36], channels=512, kernel_size=[1, 1])
  %74 = nn.bias_add(%73, meta[relay.Constant][37])
  %75 = clip(%74, a_min=0, a_max=6)
  %76 = nn.pad(%75, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %77 = nn.conv2d(%76, meta[relay.Constant][38], groups=512, channels=512, kernel_size=[3, 3])
  %78 = nn.bias_add(%77, meta[relay.Constant][39])
  %79 = clip(%78, a_min=0, a_max=6)
  %80 = nn.pad(%79, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %81 = nn.conv2d(%80, meta[relay.Constant][40], channels=512, kernel_size=[1, 1])
  %82 = nn.bias_add(%81, meta[relay.Constant][41])
  %83 = clip(%82, a_min=0, a_max=6)
  %84 = nn.pad(%83, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %85 = nn.conv2d(%84, meta[relay.Constant][42], groups=512, channels=512, kernel_size=[3, 3])
  %86 = nn.bias_add(%85, meta[relay.Constant][43])
  %87 = clip(%86, a_min=0, a_max=6)
  %88 = nn.pad(%87, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %89 = nn.conv2d(%88, meta[relay.Constant][44], channels=512, kernel_size=[1, 1])
  %90 = nn.bias_add(%89, meta[relay.Constant][45])
  %91 = clip(%90, a_min=0, a_max=6)
  %92 = nn.pad(%91, pad_width=[[0, 0], [0, 0], [0, 1], [0, 1]])
  %93 = nn.conv2d(%92, meta[relay.Constant][46], strides=[2, 2], groups=512, channels=512, kernel_size=[3, 3])
  %94 = nn.bias_add(%93, meta[relay.Constant][47])
  %95 = clip(%94, a_min=0, a_max=6)
  %96 = nn.pad(%95, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %97 = nn.conv2d(%96, meta[relay.Constant][48], channels=1024, kernel_size=[1, 1])
  %98 = nn.bias_add(%97, meta[relay.Constant][49])
  %99 = clip(%98, a_min=0, a_max=6)
  %100 = nn.pad(%99, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]])
  %101 = nn.conv2d(%100, meta[relay.Constant][50], groups=1024, channels=1024, kernel_size=[3, 3])
  %102 = nn.bias_add(%101, meta[relay.Constant][51])
  %103 = clip(%102, a_min=0, a_max=6)
  %104 = nn.pad(%103, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %105 = nn.conv2d(%104, meta[relay.Constant][52], channels=1024, kernel_size=[1, 1])
  %106 = nn.bias_add(%105, meta[relay.Constant][53])
  %107 = clip(%106, a_min=0, a_max=6)
  %108 = nn.avg_pool2d(%107, pool_size=[7, 7], strides=[2, 2])an internal invariant was violated while typechecking your program [14:46:24] /home/ubuntu/tvm/src/relay/op/nn/pooling.cc:73: Check failed: data != nullptr: 
; 
  %109 = nn.pad(%108, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]])
  %110 = nn.conv2d(%109, meta[relay.Constant][54], channels=1001, kernel_size=[1, 1])
  %111 = nn.bias_add(%110, meta[relay.Constant][55])
  %112 = transpose(%111, axes=[0, 2, 3, 1])
  %113 = reshape(%112, newshape=[1, 1001])
  nn.softmax(%113, axis=1)
}
// meta data omitted. you can use show_meta_data=True to include meta data

@FrozenGene Can you try the latest TVM to compile the mobilenet_v1_1.0_224 TFLite model?

It seems like an NHWC layout problem. The conv2d calls in the dump above should carry an argument data_layout="NHWC"; without it, Relay assumes NCHW and the weight shapes fail to unify.
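For illustration, here is a minimal sketch of how an NHWC conv2d is expressed in Relay. The weight shape and the HWIO kernel layout are my assumptions for the example, not necessarily what the TFLite frontend emits:

from tvm import relay

# NHWC input: the layout must be stated explicitly, otherwise conv2d
# interprets the 224 spatial dimension as the channel axis.
data = relay.var("input", shape=(1, 224, 224, 3), dtype="float32")
weight = relay.var("weight", shape=(3, 3, 3, 32), dtype="float32")  # HWIO (assumed)
out = relay.nn.conv2d(data, weight,
                      strides=(2, 2),
                      channels=32,
                      kernel_size=(3, 3),
                      data_layout="NHWC",
                      kernel_layout="HWIO")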

Found the PR that breaks the compilation: https://github.com/dmlc/tvm/pull/3141
@FrozenGene

OK, the problem was that my TVM build was two weeks old. The compilation works after updating to the latest TVM.