torch has the operator copy_, which is similar to Identity, but the destination tensor may have extra dimensions in front. In that case the source data is duplicated along each extra prefix dimension (the src tensor must be broadcastable with the dest tensor).
For example:
a = torch.tensor([0, 1, 2, 6]) # src shape (4,)
b = torch.zeros([3,2,4]) # dest shape (3,2,4)
b.copy_(a)
tensor([[[0., 1., 2., 6.],
         [0., 1., 2., 6.]],

        [[0., 1., 2., 6.],
         [0., 1., 2., 6.]],

        [[0., 1., 2., 6.],
         [0., 1., 2., 6.]]])
What operator(s) in Relay can perform a similar transformation?
Numpy version of the transformation:
a = np.array([0,1,2,6])
# Construct the copy of A with shape (3,2,4)
b = np.stack([a,] * 2) # add new front dimension with size 2
b = np.stack([b,] * 3) # add new front dimension with size 3
b
array([[[0, 1, 2, 6],
        [0, 1, 2, 6]],

       [[0, 1, 2, 6],
        [0, 1, 2, 6]],

       [[0, 1, 2, 6],
        [0, 1, 2, 6]]])
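In NumPy the same result can also be obtained in a single call with np.broadcast_to, which replicates the source along the new leading dimensions without the explicit stack loop (note that broadcast_to returns a read-only view, so .copy() is needed if a writable array is wanted):

```python
import numpy as np

a = np.array([0, 1, 2, 6])  # src shape (4,)
# broadcast_to replicates a along the new leading dims (3, 2);
# .copy() materializes the read-only broadcast view
b = np.broadcast_to(a, (3, 2, 4)).copy()
print(b.shape)  # (3, 2, 4)
```

Every (i, j) slice of b is equal to a, matching the stacked result above.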
Relay version:
import tvm
from tvm import relay
import numpy as np
ctx = tvm.cpu(0)
target = 'llvm'
in_shape = (4,)
in_dtype = 'int32'
out_shape = (3,2,4)
out_dtype = 'float32'
a = relay.var("a", shape=in_shape, dtype=in_dtype)
b = relay.cast(a, dtype=out_dtype)
# take output shape prefix and reverse it
front_shape_len = len(out_shape) - len(in_shape)
for dim_sz in reversed(out_shape[:front_shape_len]):
b = relay.stack([b,] * dim_sz, axis=0)
func = relay.Function([a], b)
intrp = relay.create_executor("graph", ctx=ctx, target=target)
# pass the input as an int32 ndarray so it matches the dtype of var "a"
in_data = np.array([0, 1, 2, 6], dtype=in_dtype)
intrp.evaluate(func)(in_data)
<tvm.nd.NDArray shape=(3, 2, 4), cpu(0)>
array([[[0., 1., 2., 6.],
        [0., 1., 2., 6.]],

       [[0., 1., 2., 6.],
        [0., 1., 2., 6.]],

       [[0., 1., 2., 6.],
        [0., 1., 2., 6.]]], dtype=float32)