TVM support for Conv1D operator

I have a tf.keras model that uses tf.keras.layers.Conv1D (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D). However, when I try to import the model into TVM, I get the stack trace below. Why isn't this supported in TVM? It seems to be a fairly common layer for processing 1D time series data. Is there any plan to add this op to TVM?
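For context, a minimal stand-in for the kind of model in question (layer sizes are illustrative, not the actual network):

import tensorflow as tf

# Small Conv1D network over (time, channels) features, just to show the layer in use.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(128, kernel_size=11, padding="same",
                           activation="relu", input_shape=(104, 64)),  # (T, C) input
    tf.keras.layers.Conv1D(128, kernel_size=13, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])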

>>> mod, params = relay.frontend.from_keras(model, shape_dict)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1161, in from_keras
    keras_op_to_relay(inexpr, keras_layer, keras_layer.name + ":" + str(node_idx), etab)
  File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1036, in keras_op_to_relay
    "Operator {} is not supported for frontend Keras.".format(op_name)
tvm.error.OpNotImplemented: Operator Conv1D is not supported for frontend Keras.

Maybe it is just not included in the Keras frontend. There is a conv1d op in TVM Relay: https://tvm.apache.org/docs/api/python/relay/nn.html#tvm.relay.nn.conv1d
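For what it's worth, relay.nn.conv1d works fine when built by hand; a minimal sketch (shapes are made up):

import tvm
from tvm import relay

data = relay.var("data", shape=(1, 64, 104), dtype="float32")       # NCW layout
weight = relay.var("weight", shape=(128, 64, 11), dtype="float32")  # OIW layout
out = relay.nn.conv1d(data, weight, strides=1, padding=5,
                      channels=128, kernel_size=11)

mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
print(relay.transform.InferType()(mod))  # type checks: output is (1, 128, 104)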

JoeyChou is right, we support Conv1D, but it looks like no one has added it to the Keras frontend. Keras is less popular than, say, ONNX, PyTorch, TF, and MXNet among TVM users, so someone probably just needs to add a few lines of code to add support.

Thanks for the replies. In TF2, tf.keras is the primary TF API for training models, so it may be worth supporting in TVM.

Also, if I try to convert the same model using onnx, I get a similar error:

>>> mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tblstri/tvm/python/tvm/relay/frontend/onnx.py", line 2748, in from_onnx
    mod, params = g.from_onnx(graph, opset, freeze_params)
  File "/home/tblstri/tvm/python/tvm/relay/frontend/onnx.py", line 2529, in from_onnx
    raise tvm.error.OpNotImplemented(msg)
tvm.error.OpNotImplemented: The following operators are not supported for frontend ONNX: Size
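For reference, the ONNX file came from the same tf.keras model via keras2onnx, roughly along these lines (exact converter options may have differed):

import onnx
import keras2onnx

# Convert the tf.keras model and save it to the file loaded below.
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, "model.onnx")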

This PR adds the Size op to the ONNX frontend. Give your model a shot with that change.

Thanks @jwfromm! I've applied the changes from your PR, but now I'm getting a segmentation fault.

import onnx
import numpy as np
import tvm
from tvm import te
import tvm.relay as relay

target = "llvm"
input_name = "features:0"                # name of the model's input tensor
shape_dict = {input_name: [1, 104, 64]}  # batch 1, 104 time steps, 64 features
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

Output:

Segmentation fault
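(Side note, not part of the original script: enabling the standard-library fault handler before the conversion at least prints the Python traceback when the interpreter receives the fatal signal. This reuses onnx_model and shape_dict from the script above.)

import faulthandler
faulthandler.enable()  # dump the Python stack on SIGSEGV and similar fatal signals

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)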

This is probably something I'm doing wrong. It seems to happen within onnx.py while calling

op = self._convert_operator(op_name, inputs, attr, opset)

(https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/onnx.py#L2556) with:

-> op = self._convert_operator(op_name, inputs, attr, opset)
(Pdb) op_name
'Conv'
(Pdb) inputs
<tvm.relay.frontend.onnx.onnx_input object at 0x7f6ee8d15860>
(Pdb) attr
{'dilations': (1, 1), 'strides': (1, 1), 'kernel_shape': (1, 26), 'group': 128, 'tvm_custom': {'name': 'StatefulPartitionedCall/matchboxnet/separable_conv1d_13/separable_conv2d/depthwise', 'num_outputs': 1}}
(Pdb) opset
11

And if I dig even deeper, it seems to throw the segfault on input_shape = infer_shape(data): https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/onnx.py#L401. Could this possibly be due to the padding used in my original model?
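For completeness, the pdb session above can be reproduced along these lines (the exact breakpoint line depends on the TVM checkout):

import pdb
import onnx
import tvm.relay as relay

onnx_model = onnx.load("model.onnx")
shape_dict = {"features:0": [1, 104, 64]}

# Start the import under the debugger; then set a breakpoint on the
# _convert_operator call inside python/tvm/relay/frontend/onnx.py and continue.
pdb.run("relay.frontend.from_onnx(onnx_model, shape_dict)")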

Any chance the model you're testing is public? I'd love to reproduce this and figure out what's going on. Any time we segfault it's definitely a bug in the backend, though it might also be a bug in the importer.

Here’s the model with random weights: https://drive.google.com/file/d/1Mc0_3tIPMVbrc2IusvN1QPO4QA2XXMVs/view?usp=sharing

Thanks. I added a nullptr check to the offending function so it no longer segfaults, but it looks like we're getting a dynamically ranked tensor in your program, which TVM doesn't support. I'm trying to figure out whether that's real or an artifact of the import.

The original model input has a shape of (None, 104, 64) (B, T, C). Would the unknown batch dim create a dynamically ranked tensor? Otherwise, it might have something to do with the padding I used, which is TF's 'same' padding. Or maybe it is related to the TF SeparableConv1D layer, which is a convenience layer for a 1D depthwise convolution followed by a pointwise convolution.
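One quick check along those lines (plain ONNX protobuf fields, nothing TVM-specific) is to inspect the exported graph's input dims and see which ones are symbolic:

import onnx

onnx_model = onnx.load("model.onnx")
for inp in onnx_model.graph.input:
    # symbolic dims show up as dim_param (e.g. 'N'), fixed ones as dim_value ints
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)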

What appears to be happening is that the keras2onnx converter inserts a bunch of constants into the graph for calculating reshapes (to use ONNX's dynamic shape API), but then, instead of leaving them as constants, it treats them as parameters. The TVM ONNX importer then sees that as a dynamic rank, even though the values needed to make it static are available. I need to spend some time rethinking how the ONNX importer handles this; it's a very interesting integration test :slight_smile:
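One thing that might be worth trying in the meantime (no promise it helps with this particular model): from_onnx has a freeze_params option that folds parameters back into constants at import time, which can make those reshape inputs visible as static values. Reusing onnx_model and shape_dict from the script above:

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict, freeze_params=True)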

In the meantime, I implemented Conv1D in the Keras frontend; would you care to give it a try?

Thanks! Getting closer, but now it’s complaining about the tf.keras.layers.SeparableConv1D:

>>> mod, params = relay.frontend.from_keras(model, shape_dict)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1242, in from_keras
    keras_op_to_relay(inexpr, keras_layer, keras_layer.name + ":" + str(node_idx), etab)
  File "/home/tblstri/tvm/python/tvm/relay/frontend/keras.py", line 1112, in keras_op_to_relay
    "Operator {} is not supported for frontend Keras.".format(op_name)
tvm.error.OpNotImplemented: Operator SeparableConv1D is not supported for frontend Keras.

If I instead convert all the SeparableConv1D layers into a depthwise Conv2D followed by a pointwise Conv2D (which may be less efficient than a native Conv1D), I get past the from_keras step.
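A rough sketch of that rewrite, with illustrative layer arguments rather than the ones from the actual model:

import tensorflow as tf

def separable_conv1d_as_conv2d(x, filters, kernel_size):
    # x has shape (batch, 1, time, channels): depthwise conv over time, then 1x1 pointwise
    x = tf.keras.layers.DepthwiseConv2D(kernel_size=(1, kernel_size), padding="same")(x)
    x = tf.keras.layers.Conv2D(filters, kernel_size=(1, 1))(x)
    return x

inputs = tf.keras.Input(shape=(1, 104, 64))
outputs = separable_conv1d_as_conv2d(inputs, filters=128, kernel_size=13)
model = tf.keras.Model(inputs, outputs)

But when I then try to run inference with the converted TVM model, I get another error: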

>>> shape_dict = {'features_1:0': (None, 1, 104, 64)}
>>> mod, params = relay.frontend.from_keras(model, shape_dict)
>>> feats = np.random.rand(1, 1, 104, 64)
>>> tvm_input = tvm.nd.array(feats.astype(dtype))
>>> with tvm.transform.PassContext(opt_level=1):
...   intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)
... 
>>> tvm_output = intrp.evaluate()(tvm_input, **params).asnumpy()
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
... (the same two lines repeat several more times)
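For anyone hitting the same wall, one speculative thing to try (it is only a guess that the None batch dimension is what leaves type inference under-constrained): pin the batch size to a concrete value and define the dtype explicitly. This reuses the rewritten Keras model and the llvm target from above, and otherwise mirrors the failing session:

import numpy as np
import tvm
import tvm.relay as relay

dtype = "float32"
target = "llvm"
shape_dict = {"features_1:0": (1, 1, 104, 64)}  # concrete batch of 1 instead of None
mod, params = relay.frontend.from_keras(model, shape_dict)

feats = np.random.rand(1, 1, 104, 64).astype(dtype)
tvm_input = tvm.nd.array(feats)

with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)

tvm_output = intrp.evaluate()(tvm_input, **params).asnumpy()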

Could you please share how you solved this error?