Python and C++ inference are different?

Now that `tvm.graph_runtime.create` is deprecated in favor of `tvm.graph_executor.create`, is there still a way to create the graph executor from the three compiled artifacts (i.e. the .so, .json, and .params files)?

@W1k1, I have the same code logic as in your reply above, but I changed this segment:

    int dtype_code = kDLFloat;
    int dtype_bits = 32;
    int dtype_lanes = 1;
    int device_type = kDLCPU;
    int device_id = 0;

    // ...
    // I read in all the json and params the same way. It's omitted here.
    // ...
    // ...

    // The registered global function was renamed from
    // "tvm.graph_runtime.create" to "tvm.graph_executor.create".
    const tvm::runtime::PackedFunc* tvm_graph_executor_create =
        tvm::runtime::Registry::Get("tvm.graph_executor.create");
    ICHECK(tvm_graph_executor_create != nullptr)
        << "tvm.graph_executor.create is not registered";
    tvm::runtime::Module gmod = (*tvm_graph_executor_create)(
        json_data, mod_factory, device_type, device_id);
    tvm::runtime::PackedFunc set_input = gmod.GetFunction("set_input");
    tvm::runtime::PackedFunc get_output = gmod.GetFunction("get_output");
    tvm::runtime::PackedFunc run = gmod.GetFunction("run");

Changing from `graph_runtime` to `graph_executor` was a simple rename in Python, but that does not seem to be the case on the C++ side…
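For what it's worth, newer TVM releases also support a factory-based path on the C++ side that avoids handling the JSON and params files separately: if the model was built with Relay and exported via `export_library`, the loaded `.so` exposes a `"default"` function that constructs the executor directly. A minimal sketch along the lines of TVM's `apps/howto_deploy/cpp_deploy.cc` (the library path `model.so`, the input name `"data"`, and the input shape here are placeholders/assumptions, not from the thread above):

```cpp
#include <tvm/runtime/module.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/runtime/packed_func.h>

int main() {
  // Run on CPU; use another device type (e.g. kDLCUDA) for other backends.
  DLDevice dev{kDLCPU, 0};

  // "model.so" stands in for a library exported with
  // relay.build(...) followed by lib.export_library("model.so");
  // the graph JSON and params are embedded in the .so.
  tvm::runtime::Module mod_factory =
      tvm::runtime::Module::LoadFromFile("model.so");

  // The factory's "default" entry builds a graph executor on `dev`.
  tvm::runtime::Module gmod = mod_factory.GetFunction("default")(dev);

  tvm::runtime::PackedFunc set_input = gmod.GetFunction("set_input");
  tvm::runtime::PackedFunc run = gmod.GetFunction("run");
  tvm::runtime::PackedFunc get_output = gmod.GetFunction("get_output");

  // "data" and the shape are assumed; match your model's actual input.
  tvm::runtime::NDArray x = tvm::runtime::NDArray::Empty(
      {1, 3, 224, 224}, DLDataType{kDLFloat, 32, 1}, dev);
  set_input("data", x);
  run();
  tvm::runtime::NDArray y = get_output(0);
  (void)y;
  return 0;
}
```

This mirrors the Python side's `graph_executor.GraphModule(lib["default"](dev))`, so the rename ends up being mostly transparent if you adopt the factory interface on both sides.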