[Relay] What does relay.take() mean? Is it a copy or a reference?

The docs define it as taking elements from an array. I want to know how to assign a value to a specified element of the origin array after calling relay.take(). If it returns a copy of the origin array, how do I assign values back to the origin array? If it returns a reference, how do I do so? Sorry, I don’t know if I’m describing it clearly.

IIRC it’s basically the same thing as “Gather” in ONNX

To assign values you should use scatter or scatter_nd in relay.
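If I understand relay.scatter correctly, with axis=0 it writes updates[i][j] into out[indices[i][j]][j]. Here is a plain NumPy sketch of that rule (not TVM code — the shapes and values are just made up for illustration):

```python
import numpy as np

# Sketch of scatter semantics along axis 0:
#   out[indices[i][j]][j] = updates[i][j]
data = np.zeros((3, 3), dtype="int32")
indices = np.array([[0, 2, 1]])                    # shape (1, 3)
updates = np.array([[10, 20, 30]], dtype="int32")  # shape (1, 3)

out = data.copy()
for i in range(indices.shape[0]):
    for j in range(indices.shape[1]):
        out[indices[i][j]][j] = updates[i][j]

print(out)
# [[10  0  0]
#  [ 0  0 30]
#  [ 0 20  0]]
```

Note that scatter produces a new tensor rather than mutating data in place, which is why it is the idiomatic way to "assign" in a functional IR like Relay.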


Thanks for your reply! But I just want to change a few elements of a 2D array, and scatter() seems difficult to use for that. I found that one_hot() seemed to work, but is there an easier way?

If you tell me exactly what you are trying to do I can probably help. Do you have an example?

Let’s say that arr is assigned:

arr = relay.zeros((m, n), dtype="int32") 

then I want to do, as in Python:

arr[i][j] = 1 
arr[i][j] += 1

How do I do these in relay? I know index_put_() in torch can do so.
By the way, how do I initialize an array with custom values? Thanks a lot!

For arr[i][j] = 1, just do relay.ones instead of relay.zeros.

For arr[i][j] += 1 use scatter_add.

Scatter_add:

  output[indices[i][j]][j] += updates[i][j] if axis = 0,
  output[i][indices[i][j]] += updates[i][j] if axis = 1,

If scatter_add is insufficient, then you will need to use scatter_nd with ‘add’ mode.
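The axis = 0 rule above can be mimicked in plain NumPy with np.add.at (a sketch of the semantics, not the TVM API; the shapes here are invented for the example):

```python
import numpy as np

# Emulate scatter_add with axis = 0:
#   output[indices[i][j]][j] += updates[i][j]
arr = np.zeros((2, 3), dtype="int32")
indices = np.array([[1, 0, 1]])           # one row of target row-indices
updates = np.ones((1, 3), dtype="int32")  # the "+= 1" values

out = arr.copy()
# np.add.at accumulates at repeated indices, matching scatter_add
np.add.at(out, (indices, np.arange(3)), updates)

print(out)
# [[0 1 0]
#  [1 0 1]]
```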

scatter_nd is pretty complicated, but after working through a few examples you should understand what is going on. It is a pretty standard operator in deep learning. Here is an example: https://github.com/onnx/onnx/blob/master/docs/Operators.md#scatternd

Hi @AndrewZhaoLuo, I also have a problem with relay.take.

I have to do both argmax and max on the same tensor, as follows:

tensor1 = relay.op.argmax(prob, axis=2, keepdims=True)
tensor2 = relay.op.max(prob, axis=2, keepdims=True)

And they take too much time, since the prob tensor is very large.

Can I use relay.take() and tensor1 to get tensor2 from prob? Or can I get both tensor1 and tensor2 from a single function?

argmax and then using take sounds like the best solution here, yes.

You could probably change argmax to give you both the indices and the results, though that is more complicated.

Hi @AndrewZhaoLuo,

I tried using take, but it doesn’t work.

I think what I need here is take_along_axis, right?

https://numpy.org/doc/stable/reference/generated/numpy.take_along_axis.html

I think numpy.take is what you want.

Example:

data = [10, 11, 12, 13, 14]
indices = [0, -1, 2, 3]

np.take(data, indices) → [10, 14, 12, 13]

Hi @AndrewZhaoLuo, if the data is multi-dimensional, say a 3D tensor, and I want to get the max value along the last dimension, np.take cannot get the right value from the argmax result, and the same goes for relay.take in TVM:

import numpy as np

tensor = np.arange(2 * 3 * 4).reshape(2, 3, 4)
print("tensor = ", tensor)

idx = np.argmax(tensor, axis=-1, keepdims=True)
print("idx = ", idx)

max_val_from_take = np.take(tensor, idx, axis=-1)
print("max_val_from_take = ", max_val_from_take)

max_val_from_take_along_axis = np.take_along_axis(tensor, idx, axis=-1)
print("max_val_from_take_along_axis = ", max_val_from_take_along_axis)

Ah yes, you are correct: take copies along the other axes instead of indexing element-wise. I don’t think we have an equivalent of take_along_axis in TVM yet, at least not directly.
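One possible workaround is to flatten the tensor and compute flat offsets from the argmax indices, so that plain take suffices. Here is a NumPy sketch; the same index arithmetic should translate to relay.reshape plus relay.take, though I haven’t verified the TVM side:

```python
import numpy as np

tensor = np.arange(2 * 3 * 4).reshape(2, 3, 4)
idx = np.argmax(tensor, axis=-1)  # shape (2, 3), no keepdims

# Element (i, j, idx[i, j]) lives at flat offset (i * 3 + j) * 4 + idx[i, j],
# so per-row argmax indices become flat indices that plain take understands.
flat_idx = np.arange(2 * 3) * 4 + idx.ravel()
max_vals = np.take(tensor, flat_idx).reshape(2, 3)

assert (max_vals == tensor.max(axis=-1)).all()
print(max_vals)
# [[ 3  7 11]
#  [15 19 23]]
```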