Does Tensor Expression guarantee that whatever schedules applied (if they can be applied) will not change the output of the algorithm - i.e. the generated code is always correct?
More specifically, given a bug-free algorithm and a set of schedule primitives, there could be three kinds of outcomes:

1) The schedules have no conflict, and correct code is generated.
2) The schedules have some conflict, and errors are reported at code-gen time.
3) The schedules have some conflict, but no errors are reported at code-gen time, and incorrect code is generated.
Is 3) possible in Tensor Expression (sans implementation bugs)?
It is possible. I have experienced it, but I cannot find an example at the moment. Maybe vectorizing a reduced axis? I'm not sure whether TVM will report an error in that case.
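To make the failure mode concrete, here is a plain-Python analogy (not actual TVM code, since the exact behavior of vectorizing a reduce axis depends on the TVM version): a "vectorized" reduction must combine its per-lane partial sums at the end, and a code generator that drops that combine step produces wrong output with no error, which is exactly outcome 3).

```python
def reduce_scalar(xs):
    """Reference: sequential sum (the bug-free algorithm)."""
    acc = 0
    for x in xs:
        acc += x
    return acc

def reduce_vectorized(xs, lanes=4):
    """Correct 'schedule': per-lane partial sums, then a final combine."""
    partial = [0] * lanes
    for i, x in enumerate(xs):
        partial[i % lanes] += x
    return sum(partial)          # the cross-lane combine

def reduce_vectorized_buggy(xs, lanes=4):
    """Incorrect codegen: the cross-lane combine is dropped, so only
    lane 0's partial sum is returned. No error is raised anywhere."""
    partial = [0] * lanes
    for i, x in enumerate(xs):
        partial[i % lanes] += x
    return partial[0]            # silently wrong

data = list(range(16))
print(reduce_scalar(data))            # 120
print(reduce_vectorized(data))        # 120
print(reduce_vectorized_buggy(data))  # 24 -- wrong, but no error reported
```

The point of the analogy: both "schedules" type-check and run, so the conflict is only visible by comparing numerical results against the unscheduled reference, which is why outcome 3) is hard to rule out without a compile-time legality check on the reduce axis.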
Thanks. It seems that allowing 3) (i.e., silently generating incorrect code) would make TE a bit tricky to use.
If it is just due to a lack of implementation right now, we can improve it over time. Are you aware of any case that is impossible to check at compile time?
This is likely to be very tedious, and the only real suggestion I can give is to try and modify some of the more intricate schedule examples (e.g., the conv2d CUDA tutorial) to see what errors naturally happen. It may be difficult to separate some false-positives from true errors, and likewise some errors may not be deterministic (e.g., violating shared memory semantics) due to run time variability.