How should you index DLTensors in C++ for the multi-dimensional case?

Hi everyone,

In the C++ deployment example (https://docs.tvm.ai/deploy/cpp_deploy.html), a 1-dimensional DLTensor is used as follows:

  DLTensor* x;
  DLTensor* y;
  int ndim = 1;
  int dtype_code = kDLFloat;
  int dtype_bits = 32;
  int dtype_lanes = 1;
  int device_type = kDLCPU;
  int device_id = 0;
  int64_t shape[1] = {10};
  TVMArrayAlloc(shape, ndim, dtype_code, dtype_bits, dtype_lanes,
                device_type, device_id, &x);
  TVMArrayAlloc(shape, ndim, dtype_code, dtype_bits, dtype_lanes,
                device_type, device_id, &y);
  for (int i = 0; i < shape[0]; ++i) {
    static_cast<float*>(x->data)[i] = i;
  }

If you have a tensor of shape int64_t shape[4] = {1, 3, 227, 227};, how would you populate it? Maybe something like:

static_cast<float*>(x->data)[i,j,m,n] = var[i,j,m,n];

Or would it still have a single dimension, but of size 1x3x227x227? In that case, you would still do:

static_cast<float*>(x->data)[i] = i;

If that is the case, do you know whether the layout is row-major or column-major?

I would really appreciate any help you can provide on this issue.

Yes, 1D access with row-major ordering should be fine. Since DLTensor is meant to be a low-level abstraction for N-dimensional tensors, I don't think it provides multi-dimensional access.
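
For example, here is a minimal sketch of row-major flat indexing for the 1x3x227x227 case from your question. It assumes x was allocated with ndim = 4 and the shape below (analogous to the 1-D TVMArrayAlloc call in the example you quoted); the fill values are just placeholders:

  // Assumes: TVMArrayAlloc(shape, 4, dtype_code, dtype_bits, dtype_lanes,
  //                        device_type, device_id, &x);
  int64_t shape[4] = {1, 3, 227, 227};
  float* data = static_cast<float*>(x->data);
  for (int64_t i = 0; i < shape[0]; ++i) {
    for (int64_t j = 0; j < shape[1]; ++j) {
      for (int64_t m = 0; m < shape[2]; ++m) {
        for (int64_t n = 0; n < shape[3]; ++n) {
          // Row-major (C-order): the last dimension varies fastest.
          int64_t idx = ((i * shape[1] + j) * shape[2] + m) * shape[3] + n;
          data[idx] = static_cast<float>(idx);  // replace with your actual values
        }
      }
    }
  }

The flat index collapses the four loop variables into one offset into the contiguous buffer, which is exactly how the 1-D access in the docs example walks the data.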


Thanks a lot for your reply @masahi. Gonna give it a try :slight_smile: