Java API - unexpected results

Hello,

I have a model for object segmentation. I have successfully compiled the module so I can run it on an Android device, but the results of on-device inference are not correct. I have followed the android_deploy app demo from the repository. My network's input is named input_tensor with shape [1, 288, 512, 3], and its output is named “Sigmoid” with shape [1, 288, 512, 1]. My compile command looks like:

TVM_LIBRARY_PATH=<tvm>/build-sys-llvm12 python3 \
    -m tvm.driver.tvmc compile \
    --target "llvm -device=arm_cpu -mtriple=armv7a-linux-androideabi -mattr=+neon" \
    --cross-compiler <NDK>/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi28-clang++ \
    --cross-compiler-options="--target=armv7a-none-linux-androideabi28 -mfpu=neon -static-libstdc++ -lm --sysroot=<standalone-toolchain>/sysroot" \
    --input-shapes "input_tensor:[1,288,512,3]" \
    --desired-layout NHWC \
    <model_filepath>.pb

I feed the network float32 values in the range -1.0 to 1.0, in RGB format, which is how the network was trained. I expect float32 output values from 0.0 to 1.0, but I am getting 1.0 for every pixel of the whole image. I don't know how to debug this. I have tested different NDArray types. When I call “get_num_inputs” I get a strange number of inputs. Is it OK that the output is referenced by index rather than by name?
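
For reference, this is roughly how I drive the graph runtime from Java, following the android_deploy demo pattern. Here libPath, graphJson, paramsBytes and rgbFloats stand in for my own loading code, and on newer TVM versions the create function may be registered as “tvm.graph_executor.create” instead:

import org.apache.tvm.Function;
import org.apache.tvm.Module;
import org.apache.tvm.NDArray;
import org.apache.tvm.TVMContext;
import org.apache.tvm.TVMType;

// Load the compiled library and create the graph runtime on the CPU.
Module modelLib = Module.load(libPath);                    // .so produced by tvmc
TVMContext ctx = TVMContext.cpu();
Module graphRuntime = Function.getFunction("tvm.graph_runtime.create")
        .pushArg(graphJson)                                // graph JSON string
        .pushArg(modelLib)
        .pushArg(ctx.deviceType)
        .pushArg(ctx.deviceId)
        .invoke()
        .asModule();
graphRuntime.getFunction("load_params").pushArg(paramsBytes).invoke();  // params blob

// Set the input by name, run, and read output 0 by index.
NDArray input = NDArray.empty(new long[]{1, 288, 512, 3}, new TVMType("float32"));
input.copyFrom(rgbFloats);                                 // float[] in [-1.0, 1.0], NHWC, RGB order
graphRuntime.getFunction("set_input").pushArg("input_tensor").pushArg(input).invoke();
graphRuntime.getFunction("run").invoke();

NDArray output = NDArray.empty(new long[]{1, 288, 512, 1}, new TVMType("float32"));
graphRuntime.getFunction("get_output").pushArg(0).pushArg(output).invoke();
float[] mask = output.asFloatArray();                      // expected values in [0.0, 1.0]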

It looks like the problem was caused by an unsupported resize operation.