Does Relay VM support data-dependent dynamic shape?

As far as I know, there are two types of dynamic shape.

  1. The final output shape can be inferred once the input shape is supplied. For example, with ResNet, once the input shape is fixed, the output shape is fixed.

  2. The final output shape depends on the input tensor values. For example, I want to take all elements that are less than 0.5 and use them in some further calculation. Does the Relay VM support this kind of dynamic shape? (See the NumPy sketch below.)
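To make the distinction concrete, here is a minimal NumPy illustration (not TVM code; the arrays are hypothetical): the first result's shape follows from the input shape alone, while the second depends on the input values.

import numpy as np

x = np.random.rand(10, 1).astype('float32')

# Type 1: the output shape is a pure function of the input shape.
y1 = x * 2.0     # always (10, 1)

# Type 2: the output shape depends on the input values.
y2 = x[x < 0.5]  # shape (k,), where k is the number of elements below 0.5

print(y1.shape, y2.shape)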

I wrote an example of the second type, and the Relay VM fails on it. I wonder to what extent dynamic shape is supported in the Relay VM.

TVMError: Check failed: fshape_func.count(op) > 0 (0 vs. 0) : Internal error, cannot find ShapeFunc for gather_nd

import numpy as np
import tvm
from tvm import relay
from tvm.runtime.vm import VirtualMachine
import tensorflow as tf  # requires TF 1.x (tf.placeholder / tf.Session)

data = np.random.rand(10, 1).astype('float32')

def create_tf_graph_def():
    graph = tf.Graph()
    with graph.as_default():
        # y gathers the elements of a that are less than 0.5, so its
        # shape depends on the values in a, not just on a's shape.
        a = tf.placeholder(tf.float32, shape=(None, 1))
        b = tf.less(a, 0.5)
        x = tf.where(b, name='x')
        y = tf.gather_nd(a, x, name='y')
        with tf.Session() as sess:
            # Sanity check: TensorFlow itself runs the graph fine.
            result = sess.run(y, feed_dict={a: data})
    return graph.as_graph_def()

def test_basic():
    graph_def = create_tf_graph_def()
    output_names = ['y']
    mod, params = relay.frontend.from_tensorflow(graph_def, outputs=output_names)
    target = 'llvm'
    exe = relay.vm.compile(mod, target, params=params)
    vm = VirtualMachine(exe, tvm.cpu())
    res = vm.invoke("main", data)
    print(res)

if __name__ == "__main__":
    test_basic()

Hi @sleepwalker2017

What version of TensorFlow are you using? I couldn't reproduce this with TF 2.1; I got AttributeError: module 'tensorflow' has no attribute 'placeholder'.

In general, though, I would say support for data-dependent dynamic shapes is a work in progress: it's supported for some operations, but I'm sure we're missing others. Could you post the error you're seeing?

Thanks!

The Relay VM supports data-dependent dynamic-shape kernels (where, nms in TF). If you can post the error message, that would be very helpful.
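For example, a standalone argwhere program (roughly what a single-argument tf.where maps to in the frontend) compiles and runs on the VM. A minimal sketch along the lines of the repro above:

import numpy as np
import tvm
from tvm import relay
from tvm.runtime.vm import VirtualMachine

# A Relay program whose output shape depends on the data: argwhere
# returns the indices of the true elements of the condition.
x = relay.var("x", shape=(10, 1), dtype="float32")
cond = relay.less(x, relay.const(0.5, "float32"))
mod = tvm.IRModule.from_expr(relay.Function([x], relay.argwhere(cond)))

exe = relay.vm.compile(mod, target="llvm")
vm = VirtualMachine(exe, tvm.cpu())
data = np.random.rand(10, 1).astype("float32")
res = vm.invoke("main", data)
print(res.shape)  # the first dimension varies with the data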

Hello, I found that it can't find the shape_func for the gather_nd op. I'm trying to add a shape_func for it.
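For anyone following along: shape functions are hybrid-script functions registered per op in python/tvm/relay/op/_transform.py. Below is a rough, untested sketch of what one for gather_nd might look like, assuming TVM's convention that indices have shape (M, i_1, ..., i_k) and that the index depth M is available as a compile-time constant (here it is hardcoded; obtaining it statically is exactly the tricky part, since the output rank depends on it):

from tvm.runtime import convert
from tvm.te.hybrid import script
from tvm.relay.op import op as _reg

@script
def _gather_nd_shape(data_shape, indices_shape, mdim):
    # data_shape and indices_shape are 1-D tensors holding the input shapes.
    # The output shape is indices_shape[1:] followed by data_shape[mdim:].
    ndim = data_shape.shape[0]
    kdim = indices_shape.shape[0] - 1
    out = output_tensor((kdim + ndim - mdim,), "int64")
    for i in range(1, kdim + 1):
        out[i - 1] = indices_shape[i]
    for i in range(mdim, ndim):
        out[kdim + i - mdim] = data_shape[i]
    return out

def gather_nd_shape_func(attrs, inputs, _):
    # Hypothetical: M = 2 matches the (2, N) indices in the repro above.
    return [_gather_nd_shape(inputs[0], inputs[1], convert(2))]

# False: the output shape depends only on the input shapes, not their values.
_reg.register_shape_func("gather_nd", False, gather_nd_shape_func)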

I don't know the Relay VM strategy in detail; my understanding is this: when compiling dynamic ops, the compiler adds shape_func and memory-allocation instructions to the bytecode, and at runtime the VM runs the shape function and allocates the output tensor just in time. Is that right?

Yes, that's the basic flow. Shape function coverage is definitely still a work in progress; we haven't found all of the ops that are missing it. If you could add it, that would be awesome!
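If it helps to see this flow, you can dump the compiled bytecode: for a dynamic program, the listing contains the inserted shape-function calls and the explicit allocation instructions. For example, with the exe from the argwhere sketch above:

print(exe.bytecode)  # instruction listing, including allocations and shape-func invokes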

OK, I'm learning the shape-func strategy, and I have some other questions:

1. I noticed that you're working on Relay VM GPU support; when will this feature be available?

2. Can the Relay VM support dynamic shapes on CUDA? If the generated kernel code depends on the input shape, will the Relay VM handle this situation correctly? It seems there is still a lot of work to do.

It requires heterogeneous execution support: the shape functions and dynamic memory allocation need to happen on the CPU, and then the kernel can run on the GPU. @zhiics has been working on that, but I'm not sure how close we are.