I have a tf.keras model that uses tf.keras.layers.Conv1D (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D). However, when I try to convert the model to TVM, I get the stack trace below. Why isn't this supported in TVM? It seems to be a fairly common layer for processing 1D time-series data. Is there any plan to add this op to TVM?
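For reference, a minimal model along these lines is enough to trigger it (the layer sizes here are illustrative, not my actual model):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same",
                           activation="relu", input_shape=(104, 64)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(10),
])
shape_dict = {model.input_names[0]: (1, 104, 64)}  # input name -> shape for TVM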
>>> mod, params = relay.frontend.from_keras(model, shape_dict)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1161, in from_keras
keras_op_to_relay(inexpr, keras_layer, keras_layer.name + ":" + str(node_idx), etab)
File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1036, in keras_op_to_relay
"Operator {} is not supported for frontend Keras.".format(op_name)
tvm.error.OpNotImplemented: Operator Conv1D is not supported for frontend Keras.
JoeyChou is right, we support Conv1D, but it looks like no one has added support to the Keras frontend. Keras is less popular than, say, ONNX, PyTorch, TF, and MXNet among TVM users, so someone probably just needs to add a few lines of code to support it.
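For anyone who wants to pick it up, the change would look roughly like the sketch below, registered in python/tvm/relay/frontend/keras.py. This is a hypothetical outline, not an actual patch: the frontend keeps data channels-first (NCW here), relay.nn.conv1d expects OIW kernels, and the real helper names may differ.

from tvm import relay

def _convert_conv1d(inexpr, keras_layer, etab):
    # etab kept to match the frontend's converter signature (unused here).
    # Keras stores Conv1D kernels as (width, in_channels, out_channels);
    # transpose to Relay's default OIW kernel layout.
    kernel = keras_layer.get_weights()[0].transpose(2, 1, 0)
    if keras_layer.padding == "valid":
        padding = (0, 0)
    else:  # "same": pad so the output width matches the input width
        k_eff = (keras_layer.kernel_size[0] - 1) * keras_layer.dilation_rate[0] + 1
        padding = ((k_eff - 1) // 2, k_eff // 2)
    return relay.nn.conv1d(
        inexpr,
        relay.const(kernel),
        strides=keras_layer.strides,
        dilation=keras_layer.dilation_rate,
        channels=keras_layer.filters,
        kernel_size=keras_layer.kernel_size,
        padding=padding,
    )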
Also, if I try to convert the same model using onnx, I get a similar error:
>>> mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tblstri/tvm/python/tvm/relay/frontend/onnx.py", line 2748, in from_onnx
mod, params = g.from_onnx(graph, opset, freeze_params)
File "/home/tblstri/tvm/python/tvm/relay/frontend/onnx.py", line 2529, in from_onnx
raise tvm.error.OpNotImplemented(msg)
tvm.error.OpNotImplemented: The following operators are not supported for frontend ONNX: Size
Any chance the model you're testing is public? I'd love to reproduce this and figure out what's going on; any time we segfault it's definitely a bug in the backend, though it might also be a bug in the importer.
Thanks. I added a nullptr check to the offending function so it no longer segfaults, but it looks like we're getting a dynamically ranked tensor in your program, which TVM doesn't support. I'm trying to figure out whether that's real or an artifact of the import.
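If you want to check on your side, something like this (plain onnx API; "model.onnx" is a placeholder path) prints the rank/shape the exported graph declares for each tensor:

import onnx

m = onnx.shape_inference.infer_shapes(onnx.load("model.onnx"))
for vi in list(m.graph.input) + list(m.graph.value_info):
    tt = vi.type.tensor_type
    if not tt.HasField("shape"):
        print(vi.name, "-> unknown rank")  # this is what trips TVM
    else:
        print(vi.name, [d.dim_param or d.dim_value for d in tt.shape.dim])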
The original model input has a shape of (None, 104, 64), i.e. (batch, time, channels). Would the unknown batch dim create a dynamically ranked tensor? Otherwise, it might have something to do with the padding I used, which is TF's 'same' padding. Or maybe it's related to the TF SeparableConv1D layer, which is a convenience layer for a 1D depthwise convolution followed by a pointwise convolution.
What appears to be happening is that the keras2onnx converter inserts a bunch of constants into the graph for calculating reshapes (to hit ONNX's dynamic API), but then, instead of leaving them as constants, it treats them as parameters. The TVM ONNX importer then sees a dynamic rank, even though the values needed to make it static are available. I need to spend some time rethinking how the ONNX importer handles this; it's a very interesting integration test.
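One thing that might be worth trying while that gets sorted out: from_onnx takes a freeze_params flag (visible in the traceback above), which folds parameter inputs into constants at import time and can make shapes static again. No guarantee it covers this case:

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict, freeze_params=True)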
In the meantime, I implemented Keras Conv1D; would you mind giving that a try?
>>> mod, params = relay.frontend.from_keras(model, shape_dict)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1242, in from_keras
keras_op_to_relay(inexpr, keras_layer, keras_layer.name + ":" + str(node_idx), etab)
File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1112, in keras_op_to_relay
"Operator {} is not supported for frontend Keras.".format(op_name)
tvm.error.OpNotImplemented: Operator SeparableConv1D is not supported for frontend Keras.
And if I convert all the SeparableConv1D layers to a depthwise Conv2D followed by a pointwise Conv2D (which may be less efficient than a native Conv1D), I get past the from_keras step, but running inference with the resulting TVM module fails with yet another error; the conversion and the failure are shown below.
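The rewrite of each SeparableConv1D looks roughly like this (kernel and filter counts are illustrative; copying the trained weights into the two 2D layers is omitted):

import tensorflow as tf

# Lift the (B, T, C) series to (B, 1, T, C) so (1, k) kernels slide along time.
inp = tf.keras.layers.Input(shape=(1, 104, 64))
x = tf.keras.layers.DepthwiseConv2D(kernel_size=(1, 3), padding="same")(inp)  # depthwise
x = tf.keras.layers.Conv2D(filters=32, kernel_size=(1, 1))(x)                 # pointwise
model = tf.keras.Model(inp, x)

Running the converted model through TVM: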
>>> import numpy as np
>>> dtype = "float32"  # assuming float32 throughout
>>> target = "llvm"    # assuming a plain CPU target
>>> shape_dict = {'features_1:0': (None, 1, 104, 64)}
>>> mod, params = relay.frontend.from_keras(model, shape_dict)
>>> feats = np.random.rand(1, 1, 104, 64)
>>> tvm_input = tvm.nd.array(feats.astype(dtype))
>>> with tvm.transform.PassContext(opt_level=1):
...     intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)
...
>>> tvm_output = intrp.evaluate()(tvm_input, **params).asnumpy()
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
(the same two-line error repeats for several more expressions)
...
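One guess, not verified against this model: shape_dict leaves the batch dimension as None, so the imported module contains dynamic dimensions, and under-constrained shapes are exactly what the type inferencer complains about. Pinning the batch to a concrete size is a quick way to test that:

>>> shape_dict = {'features_1:0': (1, 1, 104, 64)}  # concrete batch instead of None
>>> mod, params = relay.frontend.from_keras(model, shape_dict)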