tvm.nd.empty is not really empty; also, computation results are sometimes totally wrong

Hi development team:

I have run into two problems recently. The first one is about tvm.nd.empty; the second one is about Ansor's computation results. My PyTorch version is 2.0.1, and my TVM version is 0.11.1 built for GPU with CUDA 11.7.

  1. Simply creating an "empty" NDArray and printing it shows that the result is sometimes not actually empty (i.e., not all zeros):
    tvm_placeholder = tvm.nd.empty((out_dim, 1), "float32", device=target_tvm_device)
    print("Empty: ", torch.utils.dlpack.from_dlpack(tvm_placeholder.to_dlpack()))
    
    In practice, the output is usually not empty, for example:
    tensor([[64.0090],
        [64.0110],
        [64.0003],
        ...,
        [64.0213],
        [64.0040],
        [64.0148]], device='cuda:0')
    
  2. I wrote a function that is then scheduled with auto_scheduler (Ansor). I run the generated kernel for 100 trials with random inputs to check its correctness (roughly as in the sketch after this list). However, sometimes all 100 trials are incorrect, while at other times all of them are correct. Does anyone have insight into this problem, or advice on how to debug it if the bug is mine? Is it related to the first problem, i.e., could TVM fail to produce a truly empty placeholder in its internal computation, which is invisible to us users?
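
For reference, this is roughly what my check looks like (a sketch only; the built kernel func, the NumPy reference ref_impl, and the shapes are placeholders for my actual workload):

    import numpy as np
    import tvm

    dev = tvm.cuda(0)

    def check_once(func, ref_impl, in_shape, out_shape, rtol=1e-4, atol=1e-4):
        # Random input for one trial.
        a_np = np.random.uniform(-1.0, 1.0, size=in_shape).astype("float32")
        a_tvm = tvm.nd.array(a_np, device=dev)
        # Output buffer: uninitialized memory; the kernel is expected to
        # overwrite every element.
        out_tvm = tvm.nd.empty(out_shape, "float32", device=dev)
        func(a_tvm, out_tvm)
        np.testing.assert_allclose(out_tvm.numpy(), ref_impl(a_np), rtol=rtol, atol=atol)

    # for _ in range(100):
    #     check_once(my_built_func, my_numpy_ref, (1024, 1024), (1024, 1))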

Thank you for any suggestions or comments.

empty means the memory is allocated without initialization, so any value can appear inside.
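
If you need a deterministic starting value, initialize the buffer explicitly instead of relying on tvm.nd.empty. A minimal sketch (the shape 1024 here just stands in for your out_dim):

    import numpy as np
    import tvm

    dev = tvm.cuda(0)

    # tvm.nd.empty only allocates device memory; it does not write anything
    # into it, so you see whatever bytes happen to be there (like np.empty).
    uninit = tvm.nd.empty((1024, 1), "float32", device=dev)

    # To get a buffer that is actually zero, initialize it explicitly:
    zeros = tvm.nd.array(np.zeros((1024, 1), dtype="float32"), device=dev)

    # or overwrite an existing buffer in place:
    uninit.copyfrom(np.zeros((1024, 1), dtype="float32"))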