Import of a transformer-based model quantized with PyTorch FX fails in TVM

I am quantizing a transformer-based network using PyTorch's FX graph mode quantization, quantizing the entire network including the softmax and layer norm layers. After exporting the model with torch.jit.trace, I try to import it into TVM, which fails with an error saying the following operators are not implemented: ['aten::masked_scatter_', 'quantized::layer_norm', 'quantized::conv1d', 'quantized::softmax', 'quantized::matmul']. What should I do to import the model correctly?
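For reference, here is a minimal sketch of the pipeline I am describing. The model class `MyTransformer`, the input shape, and the input name are placeholders for my actual setup:

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

import tvm
from tvm import relay

# Placeholder model and input; my real network is a transformer.
model = MyTransformer().eval()
example_input = torch.randn(1, 128, 768)

# FX graph mode quantization over the whole network,
# including softmax and layer norm.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, example_inputs=(example_input,))
# ... calibration data is run through `prepared` here ...
quantized = convert_fx(prepared)

# Export via torch.jit.trace, then import into TVM.
traced = torch.jit.trace(quantized, example_input).eval()

# This call raises NotImplementedError for:
# ['aten::masked_scatter_', 'quantized::layer_norm', 'quantized::conv1d',
#  'quantized::softmax', 'quantized::matmul']
mod, params = relay.frontend.from_pytorch(traced, [("input", (1, 128, 768))])
```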

Versions: TVM v0.10, PyTorch 1.13