How do I import a natively trained TensorFlow 2 model into TVM Relay?

I’ve looked at the from_tensorflow.py tutorial; it imports the TensorFlow model as shown below:

######################################################################
# Import model
# ------------
# Creates tensorflow graph definition from protobuf file.

import tensorflow as tf

# Alias the TF1 compatibility module so the snippet runs on TF2.
try:
    tf_compat_v1 = tf.compat.v1
except ImportError:
    tf_compat_v1 = tf
import tvm.relay.testing.tf as tf_testing

with tf_compat_v1.gfile.GFile(model_path, "rb") as f:
    graph_def = tf_compat_v1.GraphDef()
    graph_def.ParseFromString(f.read())
    graph = tf.import_graph_def(graph_def, name="")
    # Call the utility to import the graph definition into the default graph.
    graph_def = tf_testing.ProcessGraphDefParam(graph_def)
    # Add shapes to the graph.
    with tf_compat_v1.Session() as sess:
        graph_def = tf_testing.AddShapesToGraphDef(sess, "softmax")

According to the TensorFlow documentation, this code is for porting TensorFlow 1.x trained models so they are 2.x compatible. I already have a natively trained 2.x model; how can I import it into TVM?

TensorFlow 2.x support in TVM is still in progress.

Thanks for the quick reply.

I’ve found a workaround: first convert the model to ONNX using the tf2onnx package, then feed the ONNX model to from_onnx(). It works for now.

Still looking forward to native TF2 import support, though!


TF2 is supported now.

Can we get an update on how this should be done? There is an example in the docs that reads a model from a file, but I would like to know how to do it with an in-memory model object, like:

model = tf.keras.Sequential(layers)
# import model into tvm