My team also wants to compile TensorFlow models in the near future in order to deploy them on server-class CPUs. I found this link that might be helpful: https://www.tensorflow.org/extend/tool_developers/
However, it seems the author hasn't provided a pull request or any further information.
Thanks for the information. I noticed that page, but it only gives an overview of the TF model format; it doesn't cover more detailed information such as ops, the way ONNX / NNVM do.
We are working on TVM for now. You may get started on TF support first; people on our team can join you soon.
Great! Glad to hear this. I will start implementing the TF frontend next week.
What is the status? Have you implemented the TF frontend yet? If so, could you share it? Thanks.
Is it possible to import a TF model into MXNet and then convert it to NNVM? Are there any problems with this kind of conversion? Thanks.
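What I have in mind for the NNVM side of that path is roughly the sketch below; it assumes the model is already available as an MXNet checkpoint (the 'resnet-50' prefix and epoch are placeholders):

import mxnet as mx
import nnvm

# Load a pretrained MXNet checkpoint (prefix and epoch are placeholders).
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)

# Convert the MXNet symbol + params into an NNVM symbol + params.
nnvm_sym, nnvm_params = nnvm.frontend.from_mxnet(sym, arg_params, aux_params)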
https://github.com/dmlc/tvm/pull/1188
PR under review for the TensorFlow frontend.
Works well for InceptionV1 & V3.
MobileNet also works for me with a few changes that are yet to be PR'd.
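For anyone who wants to try the PR, usage looks roughly like the sketch below; the frozen-graph path, input tensor name and shape are placeholders that depend on your model, and the API may still change while the PR is under review:

import tensorflow as tf
import nnvm
import nnvm.compiler

# Load a frozen TensorFlow GraphDef (path is a placeholder).
with tf.gfile.FastGFile('frozen_inception_v3.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Convert the GraphDef into an NNVM symbol + params.
sym, params = nnvm.frontend.from_tensorflow(graph_def)

# Compile for CPU; input tensor name and shape depend on the model.
graph, lib, params = nnvm.compiler.build(
    sym, target='llvm', shape={'input': (1, 299, 299, 3)}, params=params)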
Hi,
I found your discussion of the performance of resnet50 on TensorFlow and TVM at https://github.com/dmlc/nnvm/issues/440.
Could you share how you converted the TensorFlow resnet50 model to NNVM? I tried but failed (see my topic "Anyone successful converting tensorflow resnet50 model to nnvm?"). If you wouldn't mind sharing your advice, I would appreciate it very much.
Thanks!
We have a fork of TVM in which we have done a lot of work on the CoreML frontend. We convert TF resnet50 to CoreML, then convert CoreML to NNVM.
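The two-step conversion looks roughly like the sketch below. Note we use our fork with extra CoreML ops, so the stock tf-coreml / NNVM CoreML frontend may not cover every layer, and the file names, tensor names and shapes are only illustrative:

import tfcoreml
import coremltools
import nnvm

# Step 1: frozen TF graph -> CoreML model (tensor names are illustrative).
tfcoreml.convert(
    tf_model_path='frozen_resnet50.pb',
    mlmodel_path='resnet50.mlmodel',
    output_feature_names=['softmax:0'],
    input_name_shape_dict={'input:0': [1, 224, 224, 3]})

# Step 2: CoreML model -> NNVM symbol + params.
mlmodel = coremltools.models.MLModel('resnet50.mlmodel')
sym, params = nnvm.frontend.from_coreml(mlmodel)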
Did you encounter any problems converting the resnet50 TF model to CoreML? I got the error message in https://github.com/tf-coreml/tf-coreml/issues/210, but I saw you reported it on a different model. Any advice on how to solve the resnet50 conversion issue? Thanks a lot.
I didn't hit this error when converting resnet50. You could try the approach I mentioned in that issue.
I followed your advice for converting TF resnet50 to CoreML, and hit the problems above.
My resnet50 model is the pretrained ResNet-50 v1 or ResNet-50 v2 model from https://github.com/tensorflow/models/tree/master/official/resnet. The model is in saved_model format. I froze the model with
python freeze_graph.py --input_saved_model_dir=saved_model_dir --output_graph=frozen_model.pb --output_node_names=ArgMax --clear_devices
Freezing was successful. But both models raise
ValueError: Length of the 'dim' parameter must be equal to 4
when converted to coreml.
Should I use a different TF resnet50 model? Could you share which TF resnet50 model you use and how you freeze it, if this is public information?
Thank you very much.
Just the official resnet50 model provided by TensorFlow, nothing special. https://github.com/tensorflow/models/tree/master/research/slim
Did you use v1 or v2 on that page? How did you freeze it?
I downloaded v1, and used
python freeze_graph.py --input_graph=resnet_v1_50_inf_graph.pb --input_checkpoint=resnet_v1_50.ckpt --input_binary=true --output_graph=frozen_resnet_v1_50_slim.pb --output_node_names=resnet_v1_50/predictions/Reshape_1
to freeze it. But I got an error:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1759, in restore
err, "a mismatch between the current graph and the graph")
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [1,1,2048,1001] rhs shape= [1,1,2048,1000]
I use the resnet v2 provided there. As for your error, I remember that export_inference_graph.py has a parameter that controls this. You could investigate it.
You may try this initial version of changes, with which I could compile Resnet_v2 via the TensorFlow frontend.
I am planning to PR it soon.
Yes, setting the --labels_offset=1 flag when exporting the inference graph solves this problem. Thanks.
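For reference, the slim flow with that flag would look something like the commands below (using resnet_v2_50 as an example; the output node name is by analogy with the v1 graph, so double-check it against your exported graph):

python export_inference_graph.py --alsologtostderr --model_name=resnet_v2_50 --labels_offset=1 --output_file=resnet_v2_50_inf_graph.pb
python freeze_graph.py --input_graph=resnet_v2_50_inf_graph.pb --input_checkpoint=resnet_v2_50.ckpt --input_binary=true --output_graph=frozen_resnet_v2_50.pb --output_node_names=resnet_v2_50/predictions/Reshape_1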
Thanks for the quick commit!
When I tried the TF slim models of resnet 50 v1 and v2 (https://github.com/tensorflow/models/tree/master/research/slim), I got NotImplementedError: Please freeze the graph with add_shapes=True. I use the freeze script from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py, and it does not have an add_shapes option. Is there another freeze_graph I should use?
Sorry for bothering you so much
freeze_graph.py --input_saved_model_dir=20180601_resnet_v2_imagenet_savedmodel/1527888387/ --output_graph=frozen_model-v2-fp16.pb --output_node_names=ArgMax --clear_devices
I use this command to freeze the model.
Ref: the helper function shown below can be used to add shapes.
graph_def = nnvm.testing.tf.AddShapesToGraphDef('softmax')
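If you already have a frozen .pb rather than a graph built in the current session, the same effect can be obtained by re-importing the GraphDef and re-serializing it with add_shapes=True; a rough sketch with placeholder file names (if the input placeholder has an unknown shape, some shapes may remain undefined):

import tensorflow as tf

# Load the frozen graph (path is a placeholder).
with tf.gfile.FastGFile('frozen_model-v2-fp16.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Re-import, then export again with _output_shapes attached to each node.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    graph_def_with_shapes = graph.as_graph_def(add_shapes=True)

with tf.gfile.GFile('frozen_model-v2-fp16-shapes.pb', 'wb') as f:
    f.write(graph_def_with_shapes.SerializeToString())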