How to use foldr to concatenate all tensors in a list

I guess it is doable, but there are only very limited examples of how to use prelude.foldr().

A tutorial or introduction on this would be much appreciated. Thanks a lot.

def test_myfoldr():
    # (setup omitted in this post: cons, nil, and foldr come from the Relay
    #  Prelude, and intrp is an interpreter created for the module)
    input_tensor = relay.var("input", shape=[1, 2], dtype='float32')

    mylist = cons(input_tensor, cons(input_tensor, nil()))

    x = relay.var("x", shape=[1, 2], dtype='float32')
    y = relay.var("y", shape=[1, 2], dtype='float32')

    concat = relay.Function([x, y], relay.op.concatenate([x, y], axis=0))

    func = foldr(concat, mylist, nil())

    print(func)

    res = intrp.evaluate(func, binds={input_tensor: relay.Constant(tvm.ndarray.empty(shape=(1, 2)))})

This is my test code; it produces an error:

v0.0.1
E           %13 = fn () {
E             %11 = fn () {
E               %10 = {
E                 let %input: Tensor[(1, 2), float32] = meta[relay.Constant][0] // 
E                 %2 = fn (%x: Tensor[(1, 2), float32], %y: Tensor[(1, 2), float32]) {
E                   %0 = (%x, %y)
E                   %1 = concatenate(%0) // 
E                   %1
E                 }
E                 %3 = nil
E                 %4 = %3() // 
E                 %5 = cons
E                 %6 = %5(%input, %4) // 
E                 %7 = %5(%input, %6) // Error unifying `IncompleteTypeNode(0, 0x2376da0)` and `TypeCallNode(GlobalTypeVarNode(list, 5), [IncompleteTypeNode(0, 0x1d277b0)])`: [23:56:49] /LocalRun/xiaoquan.li/tvm/src/relay/pass/type_solver.cc:100: Check failed: resolved.defined(): Unable to unify parent types: TypeCallNode(GlobalTypeVarNode(list, 5), [IncompleteTypeNode(0, 0x1d277b0)]) and TensorType([1, 2], float32)
E           ; 
E                 %8 = %3() // 
E                 %9 = @foldr(%2, %7, %8) // 
E                 %9
E               }
E               %10
E             }
E             %12 = %11() // 
E             %12
E           }
E           %13

Since ADTs are relatively new to Relay, we’re still working on the documentation, error messages, and printing support for them. But don’t worry, things should get better before the 0.6 release!

The key part of the error message is

Unable to unify parent types: TypeCallNode(GlobalTypeVarNode(list, 5), [IncompleteTypeNode(0, 0x1d277b0)]) and TensorType([1, 2], float32).

This means one place expected or provided a list while another place expected or provided a tensor. In particular, your concat expects two tensors and produces a new tensor, but you pass nil() to foldr as the initial value, which is a list. Since tensors and lists are incompatible, this results in a type error.
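As a plain-Python analogy (a sketch of foldr's semantics, not Relay code), modeling a tensor as a list of rows shows why the initial value must have the same type as the operator's operands:

```python
def foldr(f, init, xs):
    """Right fold: foldr(f, z, [a, b]) == f(a, f(b, z))."""
    acc = init
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

def concat(x, y):
    # Stands in for concatenate(axis=0) on "tensors" modeled as lists of rows.
    return x + y

t = [[1.0, 2.0]]  # a (1, 2) "tensor"

# An initial value of the operand type works:
assert foldr(concat, [], [t, t]) == [[1.0, 2.0], [1.0, 2.0]]

# An initial value of a different type (the analogue of passing nil())
# fails, just as Relay fails to unify a list type with a tensor type:
try:
    foldr(concat, None, [t, t])
except TypeError:
    print("type mismatch, as in the Relay error")
```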

The simplest fix would be to replace nil() with some sort of empty tensor that has the property that relay.op.concatenate([x, empty], axis=0) = x, but an empty tensor like this isn’t expressible in Relay.
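For comparison, NumPy (assumed available here) can express exactly the empty tensor that Relay cannot, and the identity holds there:

```python
import numpy as np

x = np.ones((1, 2), dtype="float32")
empty = np.zeros((0, 2), dtype="float32")  # zero rows, two columns

# concatenate([x, empty], axis=0) == x holds in NumPy, but a tensor
# with a zero-sized dimension like (0, 2) is what Relay cannot express:
assert (np.concatenate([x, empty], axis=0) == x).all()
assert np.concatenate([x, empty], axis=0).shape == (1, 2)
```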

Instead you should use a function related to foldr called foldr1 or reduceR. This function is similar to foldr except it doesn’t have a starting value and only works on nonempty lists. Unfortunately, this function isn’t in Prelude yet. If you’re interested, you can try to reproduce the definition here in Relay and submit a PR.

Thanks for your detailed explanation. With an extra tensor, I can get foldr() to work.

But then I realized I may have to convert the list to a tuple before using relay.op.concatenate or relay.op.stack, because I am trying to implement tf.TensorArray.stack(), which stacks more than two tensors together. Once two tensors are concatenated or stacked, the remaining tensors can no longer be stacked with the result because of a shape mismatch.
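The shape mismatch can be reproduced in NumPy (used here as an illustration): stack increases the rank, so folding it pairwise over a list breaks on the third tensor:

```python
import numpy as np

x = np.ones((2,), dtype="float32")

xy = np.stack([x, x])  # shape (2, 2): the rank grew by one
assert xy.shape == (2, 2)

# Folding stack pairwise breaks on the next tensor,
# since shapes (2, 2) and (2,) no longer match:
try:
    np.stack([xy, x])
except ValueError:
    print("shape mismatch")

# stack needs all tensors at once, i.e. a tuple of fields in Relay:
assert np.stack([x, x, x]).shape == (3, 2)
```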

I got an error from this code, but there is no concrete message like the previous one. I guess it is because relay.Tuple() can’t automatically get fields from its parameters when executed.

Is it possible to convert a list to a tuple? Or is there any other method to stack all tensors in a list?

def test_myfoldr_3():
    arr = [[1, 2], [3, 4]]

    tensor_const = relay.Constant(tvm.nd.array(arr))

    mylist = cons(tensor_const, nil())

    x = relay.var("x")
    y = relay.var("y")

    concat = relay.Function([x, y], relay.Tuple([x, y]))

    a = relay.Var("a", p.l(relay.TensorType((2, 2), dtype="int64")))

    func = relay.Function([a], foldr(concat, relay.Tuple([]), a))

    print(func)

    res = intrp.evaluate(func(mylist))

    print(res)

I am trying to implement this, but pattern matching on a value is needed.
(I am new to Haskell; I guess foldr1 f [x] = x means pattern matching against a value, though I am not sure that is the correct term.)

So far I have only found a way to pattern match on data constructors, so as an experiment I added a special data constructor to emulate [x] from the Haskell definition and make it work. But I don’t think a special data constructor is an ideal approach, because it would need to be handled in all list-related pattern matching. Am I missing something? Thanks

foldr1           :: (a -> a -> a) -> [a] -> a
foldr1 f [x]     =  x
foldr1 f (x:xs)  =  f x (foldr1 f xs)
foldr1 _ []      =  error "Prelude.foldr1: empty list"

[x] is syntactic sugar for (x:[]), which is the pattern for a list with a single element. The equivalent Relay pattern is PatternConstructor(self.cons, [PatternVar(Var("x")), PatternConstructor(self.nil)]).
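For reference, the same three clauses can be written in plain Python (a sketch of the semantics only, not Relay code):

```python
def foldr1(f, xs):
    # foldr1 f [x]    = x
    if len(xs) == 1:
        return xs[0]
    # foldr1 f (x:xs) = f x (foldr1 f xs)
    if xs:
        return f(xs[0], foldr1(f, xs[1:]))
    # foldr1 _ []     = error "Prelude.foldr1: empty list"
    raise ValueError("foldr1: empty list")

# Concatenating three "tensors" (lists of rows) along axis 0:
assert foldr1(lambda a, b: a + b,
              [[[1, 2]], [[3, 4]], [[5, 6]]]) == [[1, 2], [3, 4], [5, 6]]
```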

I would still encourage you to write foldr1 and submit a PR, because it’s a good way to understand how high-level control flow and pattern matching works in Relay; however, I recently realized that Relay will not yet be able to support the original function you wanted to write. The problem is that one of the dimensions of the tensor cannot be inferred statically by the default type inference pass (although it could be inferred by a custom pass) and so must be given dimension Any, which we don’t support just yet. Alternatively, it would be nice to create a new operator that takes a list of tensors as input and uses a Relation to specify the correct output shape, but operators don’t currently support ADTs as input.

To summarize, I encourage you to submit a PR with foldr1, but writing the equivalent of tf.TensorArray.stack() is not yet possible.

The ‘list update’ function needs a nat as its index, but it seems we can’t convert a tensor to a nat.

In a while loop, the condition usually contains a tensor.

How can we update a list using the runtime value of a tensor as the index?