Running TF Lite-compiled model gives Error: the input tensor 'data' is not in the graph

I am going step by step through the TVM User Tutorial on my Ubuntu Linux VM to get started with TVM, but I want to see whether TVM will work with our pre-trained TensorFlow Lite model rather than ONNX (we are not interested in ONNX at this point).

I used the following command to compile our mobilenet_v1 image classification TF Lite model: tvmc compile --target "llvm" --output my_tflite_model.tar --model-format tflite my_tflite_model.tflite

This command seems to have worked, except that it gave a bunch of "NHWC layout is not optimized for x86 with autotvm" messages and it did not add the "-net" suffix to the output .tar filename (should it?).

Then I tried to run the compiled model with this command: tvmc run --inputs my_image.npz --output predictions_my_tflite_model.npz my_tflite_model.tar

It gives me this error: Error: the input tensor 'data' is not in the graph. Expected inputs: 'dict_keys(['input'])'

'input' is the name of the input tensor in our TF Lite model, but apparently TVM expects the input tensor to be named 'data'. Is that what's going on? Can this error be fixed?

Your prompt response would be greatly appreciated.

@akotlarsky I tried to reproduce this error at HEAD (099ebaa7d5d1bd4862df36230f464cc3c92aa630), but I see something different:

$ docker/bash.sh ci_cpu python3 -m tvm.driver.tvmc run --inputs img/n02099429_1.npz --output predictions.npz mobilenet.tar
REPO_DIR: /home/areusch/ws/tvm2
DOCKER CONTAINER NAME: tlcpack/ci-cpu:v0.80

Running 'python3 -m tvm.driver.tvmc run --inputs img/foo.npz --output predictions.npz mobilenet.tar' inside tlcpack/ci-cpu:v0.80...
mesg: ttyname failed: Inappropriate ioctl for device
Adding group `areusch' (GID 1000) ...
Done.
2022-01-21 16:24:45.389782: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH:
2022-01-21 16:24:45.389803: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Error: the input tensor 'x' is not in the graph. Expected inputs: '<generator object Map.__iter__ at 0x7fef81394678>'

(I'm invoking tvmc in a slightly different way, but this shouldn't affect the behavior.)

It does look like your input tensor was renamed to 'data' for some reason. Here are a couple of suggestions:

  1. Try running from HEAD (or the same revision I'm using, if possible).
  2. You could also pass -f mlf to tvmc compile, then uncompress that tar and look at the Relay source for your model in src/relay.txt. That would be a great way to tell whether the problem is in the Relay tflite importer or somewhere else in the compilation pipeline (the input name in relay.txt should give a clue).
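For step 2, here's a minimal sketch of unpacking the archive and checking the Relay input name. The archive built here is a tiny stand-in for a real Model Library Format tar (file names and the Relay snippet are illustrative, not real tvmc output):

```python
import io
import tarfile

# Stand-in for the archive that `tvmc compile -f mlf ...` would produce;
# in a real MLF archive, src/relay.txt records the model's input
# variable names (the Relay text below is illustrative).
relay_src = b"def @main(%input: Tensor[(1, 224, 224, 3), float32]) { ... }"
with tarfile.open("my_tflite_model_mlf.tar", "w") as archive:
    info = tarfile.TarInfo("src/relay.txt")
    info.size = len(relay_src)
    archive.addfile(info, io.BytesIO(relay_src))

# Unpack the archive and check which input name the importer produced.
with tarfile.open("my_tflite_model_mlf.tar") as archive:
    archive.extractall("mlf_out")
with open("mlf_out/src/relay.txt") as f:
    relay_text = f.read()
print("%input" in relay_text)  # True if the importer kept the TF Lite name
```

If relay.txt shows %data instead of %input, the rename happened in the importer; if it shows %input, the problem is further along.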

Your assessment of the error seems correct, though. Let me know if this helps.
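One more thing worth checking as a workaround: as far as I know, tvmc run feeds each array in the .npz file to the model input with the matching key name, so if my_image.npz was saved under the key 'data' (as the tutorial's imagenet_cat.npz is), re-saving it under 'input' might get you past the error. A sketch, with illustrative file names and shape:

```python
import numpy as np

# Stand-in for my_image.npz: np.savez stores each array under its
# keyword name, and tvmc run matches those keys against the model's
# input tensor names. The (1, 224, 224, 3) shape is illustrative.
np.savez("my_image.npz", data=np.zeros((1, 224, 224, 3), dtype="float32"))

# See which key(s) the file actually contains.
arrays = np.load("my_image.npz")
print(list(arrays.keys()))  # ['data']

# Re-save the same array under the name the TF Lite graph expects.
np.savez("my_image_input.npz", input=arrays["data"])
print(list(np.load("my_image_input.npz").keys()))  # ['input']
```

Then point tvmc run at the re-saved file with --inputs my_image_input.npz.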