Adding CoreML codegen with the BYOC feature enables us to offload subgraphs to Apple's Neural Engine on iOS devices. There are several approaches to building a CoreML model in TVM.
A0: Build with coremltools
I think this is the most intuitive way to construct CoreML models. coremltools provides good, well-documented APIs. My concern is that we would need to implement the codegen in Python to use coremltools, and I wonder if there are any limitations to implementing external codegen in Python.
A1: Generate protobuf from Core ML specification
The Core ML model format is protobuf and its specification is available here, so we can create Core ML models with the generated .pb.h and .pb.cc files. TFLite also takes this approach to implement its Core ML delegate. This would let us implement the CoreML codegen flexibly, but it looks like a harder path than A0.
A2: Convert ONNX models
We can use onnx-coreml to convert the ONNX models generated by this feature. We wouldn't need to maintain code to build CoreML models, but I'm not sure what the implementation would look like. The flexibility of the output CoreML model might also be limited by the intermediate ONNX model.
Any comments are welcome.