Support for TensorFlow ops in newer TF versions

Hi

Standard TensorFlow models exported with TF 1.12 and later contain newer ops that have not yet been implemented in TVM.

  1. VariableV2: I guess TF has deprecated Variable and moved to VariableV2. I am not 100% sure though. Here is the link
  2. TruncatedNormal
  3. Assign

I haven't encountered any other unsupported ops. What is the way forward?

Traceback (most recent call last):
  File "tvm_basic_infra_testing.py", line 106, in <module>
    net, params = tvm.relay.frontend.from_tensorflow(graph_def, shape={'input':input_shape}, layout=layout)
  File "/tvm/python/tvm/relay/frontend/tensorflow.py", line 2182, in from_tensorflow
    sym, params = g.from_tensorflow(graph, layout, shape, outputs)
  File "/tvm/python/tvm/relay/frontend/tensorflow.py", line 1700, in from_tensorflow
    "The following operators are not implemented: {}".format(missing_operators))
NotImplementedError: The following operators are not implemented: {'TruncatedNormal', 'Assign', 'VariableV2'}

Thanks
Kshitij Srivastava

@SrivastavaKshitij thanks for reporting. The ops deprecated/introduced by TF 1.12 (or maybe 2.0) should be addressed soon.

I have raised below issue to track it.


Some new ops like PlaceholderWithDefault should be supported as well.

@yongwww Add it to the issue on GitHub. We will continue to add more if any come up and take this to completion.

@yongwww or @SrivastavaKshitij, would one of you like to take this up?

I am in the process of exploring TVM for Tensorflow and Pytorch backend. I will not be able to commit to this at the moment.

When I try a facenet model, the following operators are not implemented: 'RandomUniform', 'QueueDequeueUpToV2', 'FIFOQueueV2'

@srkreddy1238 I can take this up for the ops ('TruncatedNormal', 'Assign', 'VariableV2', 'PlaceholderWithDefault'), but I don't have a timeline at this point since I am working on something else.

@JDanielWu
For FIFOQueueV2, you can just map it to a dummy implementation; we have encountered it before.

For RandomUniform, we can add support (contributions welcome), but it looks like it is not related to inference. For both RandomUniform and QueueDequeueUpToV2, I am not sure whether they can be removed during freezing or mapped to a dummy implementation.

I have the same issue. I tried TensorFlow versions 1.11.0 and 1.9.0 and I'm still facing the same problem. Can you please tell me up to which version of TensorFlow TVM support is available?

@kingman21 Which operators are you referring to?

The TensorFlow version is not a major factor here (at least below TF 2.0).
TensorFlow has a huge set of operators used across training and inference, and TVM implements a subset of these based on need. TVM now covers most of the operators needed for the official vision slim models released by TensorFlow.

We are considering ('TruncatedNormal', 'Assign', 'VariableV2', 'PlaceholderWithDefault') immediately, as these are modified/upgraded versions of existing operators.

You are welcome to report the unsupported operators and contribute as well to improve tensorflow frontend in TVM.

#3184 provides limited support for PlaceholderWithDefault by treating it as Placeholder.
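That idea can be illustrated on a toy node representation (plain dicts here, not TVM's real data structures): rewrite the op name and discard the default-value input.

```python
# Hedged sketch: lower PlaceholderWithDefault to a plain Placeholder by
# dropping its default-value input. Node dicts are illustrative only.
def lower_placeholder_with_default(node):
    if node['op'] == 'PlaceholderWithDefault':
        return {'op': 'Placeholder', 'name': node['name'], 'input': []}
    return node

node = {'op': 'PlaceholderWithDefault', 'name': 'x', 'input': ['x/default']}
lowered = lower_placeholder_with_default(node)
```

With this treatment the importer requires the user to always feed the input, since the baked-in default is discarded.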

The tf.graph_util.convert_variables_to_constants graph transform can be used before import to take care of Variable(V2) and Assign.
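A minimal sketch of that workflow, assuming a TF 1.x-style graph (accessed via tf.compat.v1 on newer installs); the tiny matmul graph and node names are just for illustration:

```python
# Hedged sketch: freeze Variable(V2)/Assign into Const nodes before import.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=[1, 4], name='input')
    w = tf.Variable(tf.ones([4, 4]), name='w')
    y = tf.matmul(x, w, name='output')
    sess.run(tf.global_variables_initializer())
    # Replaces variable reads with Const and prunes Assign nodes.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['output'])

ops = {n.op for n in frozen.node}  # no VariableV2/Assign should remain
```

The frozen graph_def can then be handed to relay.frontend.from_tensorflow as usual.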

I came across two more ops that are not supported right now:

  1. StopGradient
  2. SquaredDifference

These ops are generated by tf.nn.moments, which is used when describing a Batch Normalization layer.
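For context, tf.nn.moments computes mean and variance; the variance step is what emits SquaredDifference (with StopGradient applied to the mean, if I recall the TF implementation correctly). A numpy sketch of the same math, where squared_difference(a, b) is just (a - b)**2, i.e. a subtract followed by a multiply, which is one way a frontend could decompose it:

```python
import numpy as np

def squared_difference(a, b):
    # SquaredDifference(a, b) == (a - b) ** 2
    return (a - b) ** 2

def moments(x, axis):
    # Same math as tf.nn.moments: per-axis mean and (biased) variance.
    mean = x.mean(axis=axis, keepdims=True)
    variance = squared_difference(x, mean).mean(axis=axis, keepdims=True)
    return mean, variance

x = np.array([[1.0, 2.0, 3.0, 4.0]])
mean, var = moments(x, axis=1)  # mean 2.5, variance 1.25
```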

When I try an NLP model, the following operators are not implemented: 'RandomUniform'. I'm trying to define the function myself, but the return value is wrong. Can you take a look at it for me? The return value cannot be an np.array; it must be a node?

def _random_uniform():
    def _impl(inputs, attr, params):
        shape = _get_list_param(params, inputs[0])
        seed = attr['seed']
        seed2 = attr['seed2']
        dtype = attr['dtype'].name

        if seed != 0:
            np.random.seed(seed)
        elif seed2 != 0:
            np.random.seed(seed2)
        return np.random.random(size=shape).astype(dtype)
    return _impl

I tried this approach and was able to run the code, but it gives rise to a different error. My model has an output shape of (1,4), but after importing random_uniform the model output changes to a shape of (1,15.512). I am not sure where it is changing the graph output.

There is something wrong with my code. For example, when 'RandomUniform' is in a loop, the shape keeps changing. If your shape for 'RandomUniform' doesn't change, you can print the shape or inputs[0] in _random_uniform.

If you discover anything, timely feedback is welcome.

I tried to debug a little more and found the following:

  • When the TVM function gets the random_uniform variable in the graph, it does not process the next elements in the graph, so it ends up with a different output shape.

  • Also, I have checked the params return value, which contains about 70% of the graph.

I didn't quite catch your meaning. We only need to make sure that the input and output of the random_uniform op are correct. If the random_uniform op is OK, we can then check why the model output changed.

@heliqi I think the code for random_uniform is working fine, because it gives me the correct output shape. What type of model did you try this on? I have been trying to import an LSTM through the TensorFlow frontend, which is how I encountered random_uniform.

My model is seq2seq. I also find the code for random_uniform works fine; the error is caused by the loop control flow.

error info: shape mismatch

......
%1817 = add(%35, %1816);
%1818 = reshape(%1817, newshape=[-1, 28500])
%1819 = topk(%1818, k=3, dtype='int32')
%1820 = %1819.1
%1821 = cast(%1820, dtype='float64')
%1822 = cast(9500, dtype='float64')
%1823 = divide(%1821, %1822)
%1824 = cast(%1823, dtype='int32')
%1825 = add(%33, %1824)
%1826 = reshape(%1825, newshape=[-1])
%1827 = take(%30, %1826, axis=0) in particular dimension 2 conflicts (int64)3 does not match 2; unable to unify: `Tensor[(15, 8, 3, 64), float32]` and `Tensor[(15, 8, 2, 64), float32]`; ;
%1818 = take(%loop_var172, %loop_var50, axis=0)
........

The %1827 node is 'while/GatherV2' and the next node is NextIteration, so I suspect there's something wrong with the shape calculation in the loop.