Proper way to extend device type/versions? Different codegen but same runtime

My solution uses the "-model=" flag on the target string; the steps are below.

(1.) In the testing script, set up the target as follows:

# Heterogeneous execution for 'v1' or 'v2' of the hardware (pick one)
target = {"mytarget": "mytarget -model=v1", "cpu": "llvm"}
target = {"mytarget": "mytarget -model=v2", "cpu": "llvm"}
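For illustration, the "-model=" flag is just a token inside the target string. A minimal, TVM-free sketch of how such a flag can be recovered from the string (TVM's own target parser does the equivalent internally; `parse_model` here is a hypothetical helper, not a TVM API):

```python
def parse_model(target_str):
    """Extract the value of the -model= flag from a target string.

    Hypothetical helper for illustration only; TVM's target parser
    performs the equivalent when constructing the Target object.
    """
    for tok in target_str.split():
        if tok.startswith("-model="):
            return tok.split("=", 1)[1]
    return "unknown"  # mirrors the default used in step (2.)
```

For example, `parse_model("mytarget -model=v2")` returns `"v2"`.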

(2.) Modify python/tvm/target/target.py to add a target helper:

def mytarget(model='unknown', options=None):
    """Return a 'mytarget' target.

    Parameters
    ----------
    model : str
        Version of 'mytarget'; one of ["v1", "v2"].
    options : str or list of str
        Additional options.
    """
    opts = ['-model=%s' % model]
    opts = _merge_opts(opts, options)
    return _api_internal._TargetCreate("mytarget", *opts)

(3.) Add a switch on the model type in the TOPI layer:

@autotvm.register_topi_schedule(generic.schedule_conv2d_nhwc, "mytarget", "direct")
def schedule_conv2d_nhwc(cfg, outs):
    model = tvm.target.current_target().model.lower()

    if model == "v1":
        return _schedule_conv2d_nhwc_v1(cfg, outs)
    if model == "v2":
        return _schedule_conv2d_nhwc_v2(cfg, outs)

    raise ValueError("Invalid 'mytarget' model %r for nn.conv2d" % model)
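If more models accumulate, the if-chain can be replaced by a dispatch table. A TVM-free sketch of the same pattern (the schedule functions are placeholders standing in for the real per-model schedules):

```python
# Placeholder schedule functions; the real ones would build TVM schedules.
def _schedule_conv2d_nhwc_v1(cfg, outs):
    return ("v1", outs)

def _schedule_conv2d_nhwc_v2(cfg, outs):
    return ("v2", outs)

# Registry mapping model string -> schedule function.
_CONV2D_NHWC_SCHEDULES = {
    "v1": _schedule_conv2d_nhwc_v1,
    "v2": _schedule_conv2d_nhwc_v2,
}

def schedule_conv2d_nhwc(cfg, outs, model):
    """Dispatch on the model string instead of an explicit if-chain."""
    try:
        return _CONV2D_NHWC_SCHEDULES[model.lower()](cfg, outs)
    except KeyError:
        raise ValueError("Invalid 'mytarget' model %r for nn.conv2d" % model)
```

Adding a "v3" later is then a one-line registry entry rather than another branch.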

(4.) Modify tvm::codegen::Build(...) to also switch on the model type. This is the only step where I felt I wasn't doing things the 'TVM' way.

(5.) Now separate CodeGens and TOPI schedules are selected per model, and both versions share the same runtime implementation.