Proper way to extend device type/versions? Different codegen but same runtime

In short, I have two versions of the same device. They use the same runtime implementation but v1 and v2 must have different TOPI and CodeGen.

So far I’ve been unable to extend TVM to uniquely target my_hardware -device=v1 and my_hardware -device=v2. The TOPI stage looks only for operators registered with the string my_hardware in the keys_array.

What is the proper way to achieve this?

My solution uses the “-model=” flag. See below.

(1.) In testing script, set up target as follows

# Heterogeneous execution for 'v1' or 'v2' of the hardware — pick one,
# otherwise the second assignment silently overwrites the first
target = {"mytarget": "mytarget -model=v1", "cpu": "llvm"}  # v1
# target = {"mytarget": "mytarget -model=v2", "cpu": "llvm"}  # v2
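To keep the model selection in one place, the dict construction can be wrapped in a small helper. `make_target` is a hypothetical name for this sketch, not part of TVM:

```python
def make_target(model):
    """Build the heterogeneous target dict for a given hardware model (sketch)."""
    if model not in ("v1", "v2"):
        raise ValueError("unknown 'mytarget' model: %s" % model)
    return {"mytarget": "mytarget -model=%s" % model, "cpu": "llvm"}
```

The test script then picks the model once, e.g. `target = make_target("v2")`.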

(2.) Modify python/tvm/target/

def mytarget(model='unknown', options=None):
    """Returns a 'mytarget' target.

    Parameters
    ----------
    model : str
        Version of 'mytarget' in ["v1", "v2"]
    options : str or list of str
        Additional options
    """
    opts = ['-model=%s' % model]
    opts = _merge_opts(opts, options)
    return _api_internal._TargetCreate("mytarget", *opts)
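For reference, the `-model=` value is recovered later by scanning the target's option strings. A minimal standalone sketch of that parsing (the function name is illustrative, not TVM API):

```python
def parse_model(options):
    """Extract the value of a '-model=' option from a target option list.

    Falls back to 'unknown' when the flag is absent (sketch).
    """
    for opt in options:
        if opt.startswith("-model="):
            return opt.split("=", 1)[1]
    return "unknown"
```
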

(3.) Add switch on model type in TOPI layer

@autotvm.register_topi_schedule(generic.schedule_conv2d_nhwc, "mytarget", "direct")
def schedule_conv2d_nhwc(cfg, outs):
    # Read the '-model=' option from the target currently in scope
    model = tvm.target.current_target().model
    if model == "v1":
        return _schedule_conv2d_nhwc_v1(cfg, outs)
    if model == "v2":
        return _schedule_conv2d_nhwc_v2(cfg, outs)
    raise RuntimeError("Invalid 'mytarget' model type for nn.conv2d: %s" % model)
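If more hardware versions are added later, the if-chain can be replaced with a dispatch table. A hedged sketch with stub schedule functions standing in for the real implementations:

```python
def _schedule_conv2d_nhwc_v1(cfg, outs):
    return ("v1", outs)  # stub standing in for the real v1 schedule

def _schedule_conv2d_nhwc_v2(cfg, outs):
    return ("v2", outs)  # stub standing in for the real v2 schedule

# Map each hardware model to its schedule implementation.
_CONV2D_NHWC_SCHEDULES = {
    "v1": _schedule_conv2d_nhwc_v1,
    "v2": _schedule_conv2d_nhwc_v2,
}

def dispatch_conv2d_nhwc(model, cfg, outs):
    """Look up the schedule for a model; fail loudly on unknown models."""
    try:
        return _CONV2D_NHWC_SCHEDULES[model](cfg, outs)
    except KeyError:
        raise RuntimeError("Invalid 'mytarget' model for nn.conv2d: %s" % model)
```
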

(4.) Modify tvm::codegen::Build(...) to also switch on the model type. This is the only step where I felt I wasn’t doing things the ‘TVM’ way.
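One way to avoid editing `tvm::codegen::Build` itself is to keep a single build entry point for the target and branch on the model inside it. The sketch below is plain Python with hypothetical `codegen_v1`/`codegen_v2` stubs; in real TVM the equivalent dispatch would live in the codegen function registered for `mytarget`:

```python
def codegen_v1(funcs):
    return "module-v1"  # stub for the v1 code generator

def codegen_v2(funcs):
    return "module-v2"  # stub for the v2 code generator

def build_mytarget(funcs, target_str):
    """Single build entry point that branches on '-model=' (illustrative)."""
    model = "unknown"
    for tok in target_str.split():
        if tok.startswith("-model="):
            model = tok.split("=", 1)[1]
    if model == "v1":
        return codegen_v1(funcs)
    if model == "v2":
        return codegen_v2(funcs)
    raise RuntimeError("Unknown 'mytarget' model: %s" % model)
```
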

(5.) Now separate CodeGens and TOPI schedules are called for each version, while both share the same runtime implementation.

Steps 1 and 2 are already supported by some TVM targets (e.g., CUDA). You just need to do the same thing in your device codegen.

Step 4 is the most important one. I feel like you could dispatch to the proper implementation inside the codegen itself, so that you don’t have to change Build?

This was my initial thought. I tried to check the current target in the CodeGen stage so I could switch there the way I do in the TOPI layer, but I was getting a nullptr for the current target.
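For context on why the current target can come back null: TVM tracks the “current” target with a scoped context stack, so any code that runs outside the `with target:` scope sees nothing. A minimal toy illustration of that scoping behavior (not TVM code):

```python
import threading

class TargetScope:
    """Toy scoped 'current target', mimicking TVM's target context stack."""
    _tls = threading.local()

    def __init__(self, name):
        self.name = name
        self._prev = None

    def __enter__(self):
        self._prev = getattr(TargetScope._tls, "current", None)
        TargetScope._tls.current = self
        return self

    def __exit__(self, *exc):
        TargetScope._tls.current = self._prev

    @staticmethod
    def current():
        # Returns None outside any 'with TargetScope(...)' block — the
        # analogue of the nullptr seen in the codegen stage.
        return getattr(TargetScope._tls, "current", None)
```

Inside `with TargetScope("mytarget -model=v1"):` the lookup succeeds; once codegen runs outside that scope, `current()` is `None`, which matches the nullptr behavior described above.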