Reading input/output tensor shapes and other metadata from an .so

Hi All,

I’ve followed the deployment tutorials as well as the sample code provided at apps/howto_deploy successfully, and everything works well. I develop and build on Linux/x86 and deploy natively to aarch64/Android as a C++ module compiled to a .so, together with the necessary libtvm_runtime.so.

However, the example provided in apps/howto_deploy/cpp_deploy.cc does not show how to read any metadata about the neural network I deployed. I want to check input and output shapes and data types, count the number of ops, tensors, etc. This is required by the larger software context with which I want to integrate TVM.

I’ve also saved the execution graph as a JSON file, but couldn’t find a hard spec for this JSON.

Can you suggest some way to do this?

Thanks

It seems there is a get_input_info function in the runtime API. Does it help?

Yes, that’s precisely what I need!

Thank you.

The code needed to access the actual shape information from the PackedFunc’s return value is somewhat awkward. Here’s what I did:

    PackedFunc get_input_info = gmod.GetFunction("get_input_info");
    ObjectRef input_info_retValue = get_input_info();
    Map<String, ObjectRef> input_info = Downcast<Map<String, ObjectRef>>(input_info_retValue);
    Map<String, ObjectRef> shapeInfoMap = Downcast<Map<String, ObjectRef>>(input_info["shape"]);

    for (auto map_node : shapeInfoMap) {
        std::cout << "Node key " << map_node.first << "\n";
        ShapeTuple tup = Downcast<ShapeTuple>(map_node.second);
        for (size_t j = 0; j < tup.size(); ++j)
            std::cout << "j: " << j << "\t" << tup[j] << "\n";
    }

So I’m calling Downcast three times just to get to the concrete ShapeTuple. Is there anything better?

I think we can at least remove the first Downcast. Does this work?

    Map<String, ObjectRef> input_info = get_input_info();

I’d also try replacing Map<String, ObjectRef> with Map<String, ShapeTuple>.

Yes, it’s much clearer now. For reference, here’s the code after implementing your suggestions and testing it:

    PackedFunc get_input_info = gmod.GetFunction("get_input_info");
    Map<String, ObjectRef> input_info = get_input_info();
    Map<String, ShapeTuple> shapeInfoMap = Downcast<Map<String, ShapeTuple>>(input_info["shape"]);

    for (auto map_node : shapeInfoMap) {
        std::cout << "Node key " << map_node.first << "\n";
        ShapeTuple tup = map_node.second;
        for (size_t j = 0; j < tup.size(); ++j)
            std::cout << "j: " << j << "\t" << tup[j] << "\n";
    }
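
The same return value also carries data-type information. Here’s a sketch of reading it, under the assumption that get_input_info (as the graph executor implements it) populates a "dtype" entry mapping each input name to a type string such as "float32"; it reuses the input_info map from the snippet above:

    // Assumption: input_info also contains a "dtype" map alongside "shape".
    Map<String, String> dtypeInfoMap = Downcast<Map<String, String>>(input_info["dtype"]);

    for (auto map_node : dtypeInfoMap)
        std::cout << "Input " << map_node.first << " has dtype " << map_node.second << "\n";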

Thank you for your help.

Hi yakovdan, when I use this code, I get this error:

    Program received signal SIGSEGV, Segmentation fault.
    0x0000560bfbcf1696 in tvm::runtime::MapNode::iterator::operator-> (this=0x7ffc8b595fa0)
        at /workspace/guohua.zhu/gerrit/tvm/include/tvm/runtime/container/map.h:1152
    1152      TVM_DISPATCH_MAP_CONST(self, p, { return p->DeRefItr(index); });

Have you solved this error?