[uTVM][BYOC] Input/output tensors as part of the workspace

Currently we have to manually allocate the input/output tensors that the model uses. This means they are not part of the workspace, and their memory is not reused for intermediate tensors during inference. The upside is that they are “thread-safe” and can be modified during inference, but this comes at the cost of higher memory usage. That becomes a problem when the model runs on a memory-constrained device with large inputs.

Is there a way to have TVM allocate those tensors as part of its workspace, and return handles that we can use to write inputs and read outputs between inferences?