MicroTVM missing common math ops

(Note: this is a repost; my original version of this thread appears to have been deleted, perhaps along with the recent spam on the forums.)

I was working with MicroTVM and hit a codegen error when exporting a model that uses tanh:

    File "/tvm/src/target/source/codegen_c.cc", line 766
    TVMError: Unresolved call Op(tir.tanh)

I tested this on v0.15 (a340dbed), as well as the current main HEAD (c2c579bb).

The error is raised from `void CodeGenC::VisitExpr_(const CallNode* op, std::ostream& os)`, in `src/target/source/codegen_c.cc#L578`.

This is a pretty low-level expression (though not as primitive as, e.g., a DivNode or the other nodes in expr.h), so I would argue it's important that it be supported.

However, since it's not a primitive, I'm unsure how I would add the appropriate `VisitExpr_` method to generate the code, and I'm surprised that the codegen doesn't already create the relevant nodes the expression requires. My current understanding, and a possible workaround, is sketched below.
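If I'm reading the lowering flow correctly (an assumption on my part), math ops like `tir.tanh` are not meant to be handled by dedicated `VisitExpr_` cases at all: the `tir.LowerIntrin` pass is supposed to rewrite them into plain extern calls first, driven by per-target `FLowerIntrinsic` attributes on each op, and the C backend simply seems to be missing those registrations. As a user-side experiment, something like the following sketch should be able to paper over the gap for tanh (assuming `tvm.target.intrin.register_intrin_lowering` behaves as I expect; the `level` argument and the exact import path are from my reading of the source, so treat them as assumptions):

    import tvm
    from tvm.target.intrin import register_intrin_lowering


    def lower_tanh_to_libm(op):
        # Rewrite tir.tanh into a plain extern call that CodeGenC can print
        # as an ordinary C function call.
        assert isinstance(op, tvm.tir.Call)
        if op.dtype == "float32":
            return tvm.tir.call_pure_extern("float32", "tanhf", op.args[0])
        if op.dtype == "float64":
            return tvm.tir.call_pure_extern("float64", "tanh", op.args[0])
        return op  # leave other dtypes unchanged


    # "c" is the target kind used by the MicroTVM C backend; level=99 is
    # intended to take precedence over any existing rule.
    register_intrin_lowering("tir.tanh", target="c", f=lower_tanh_to_libm, level=99)

With such a rule in place, `CodeGenC` would only ever see a `call_pure_extern` to `tanhf`, which it already prints as an ordinary C call. The generated code would then need to link against libm (or provide `tanhf`), which is a real consideration for bare-metal targets.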

I’m happy to take the implementation on, but I would need to understand the following:

  1. Why is this failing, and how could I get a functioning implementation? I see that some operations are given explicit LLVM implementations (e.g., sinh); would I need something similar for C? Are there any examples I could look at?
  2. Where would be a good place to add tests for op coverage in the MicroTVM C backend? tests/python seems right; we have python/topi/test_topi_math.py there. I also notice that the C backend is not enabled as a default test target. (A sketch of what such a test might look like follows this list.)
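For question 2, a coverage test might look something like this sketch (a hypothetical test modeled on the existing topi math tests; `test_tanh_c_backend` is a name I made up):

    import tvm
    from tvm import te, topi


    def test_tanh_c_backend():
        # Build a one-op schedule for the C target and check that codegen
        # succeeds and emits a tanh call.
        A = te.placeholder((16,), dtype="float32", name="A")
        B = topi.tanh(A)
        s = te.create_schedule(B.op)
        mod = tvm.build(s, [A, B], target="c")
        assert "tanh" in mod.get_source()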

Looking more into this problem, I found it also affects sin, sigmoid, exp, and various other expressions, presumably because they all go through the same intrinsic-lowering path.

It can be reproduced with something like the following:

    import torch
    import torch.nn as nn

    import tvm
    from tvm import relay
    from tvm.relay.backend import Executor, Runtime


    class TanhNetImpl(nn.Module):
        def __init__(self):
            super().__init__()
            self.tanh = nn.Tanh()  # Using the Tanh layer

        def forward(self, x):
            return self.tanh(x)


    model = TanhNetImpl()
    input_shape = (1, 3, 32, 32)
    input_name = "data"
    input_dtype = "float32"
    input_data = torch.rand(input_shape)

    scripted_model = torch.jit.trace(model, input_data).eval()
    y = scripted_model(input_data)

    relay_mod, params = relay.frontend.from_pytorch(
        scripted_model, [(input_name, input_shape)]
    )

    # Use the C runtime (crt)
    RUNTIME = Runtime("crt")

    # We define the target by passing the board name to `tvm.target.target.micro`.
    # If your board is not among the supported models, you can define the target manually, e.g.:
    # TARGET = tvm.target.Target("c -keys=arm_cpu,cpu -mcpu=cortex-m4")
    TARGET = tvm.target.Target("c -keys=mips_cpu,cpu")

    # Use the AOT executor rather than the graph or vm executor, with the
    # unpacked API and the C interface.
    EXECUTOR = Executor(
        "aot",
        {
            "unpacked-api": True,
            "interface-api": "c",
            "workspace-byte-alignment": 8,
            "link-params": False,
        },
    )

    # Set the compilation configuration and compile the model for the target:
    config = {
        "tir.disable_vectorize": True,
        "tir.usmp.algorithm": "hill_climb",
    }

    print("Building TVM model")
    with tvm.transform.PassContext(opt_level=3, config=config):
        lowered = relay.build(
            relay_mod, target=TARGET, params=params, runtime=RUNTIME, executor=EXECUTOR
        )
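
Finally, I'd expect the same error to reproduce with a hand-written Relay function, which would rule out the PyTorch frontend as the culprit (I've mainly verified the PyTorch path above, so treat this as an expectation rather than a confirmed result; it reuses the TARGET/RUNTIME/EXECUTOR/config definitions from the script above):

    # Build a single-op Relay module that just applies tanh.
    x = relay.var("x", shape=(1, 3, 32, 32), dtype="float32")
    tanh_mod = tvm.IRModule.from_expr(relay.Function([x], relay.tanh(x)))

    with tvm.transform.PassContext(opt_level=3, config=config):
        relay.build(tanh_mod, target=TARGET, runtime=RUNTIME, executor=EXECUTOR)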