Failure occurs when using relay.floor_mod with a divisor of type uint64

import tvm
from tvm import relay

var_0 = relay.var("var_0", dtype = "int64", shape = (6, 4, 3)) # shape=(6, 4, 3)
var_1 = relay.var("var_1", dtype = "int64", shape = (1, 1, 4, 1)) # shape=(1, 1, 4, 1)
var_2 = relay.divide(var_0, var_1) # shape=(1, 6, 4, 3)
var_3 = relay.var("var_3", dtype = "float64", shape = (10, 1, 2)) # shape=(10, 1, 2)
var_4 = relay.var("var_4", dtype = "float64", shape = (1, 1, 10, 4, 1)) # shape=(1, 1, 10, 4, 1)
var_5 = relay.mod(var_3, var_4) # shape=(1, 1, 10, 4, 2)
const_6 = relay.const([[136, 518, 527, 973, 609, 16, 368, 433, 226],
                       [959, 998, 932, 580, 856, 939, 840, 348, 113],
                       [668, 439, 603, 592, 299, 932, 479, 360, 28],
                       [333, 95, 936, 221, 230, 453, 99, 554, 413],
                       [466, 921, 845, 692, 232, 842, 623, 811, 49],
                       [914, 2, 397, 26, 21, 187, 980, 964, 837]], dtype = "uint64") # shape=(6, 9)
var_7 = relay.var("var_7", dtype = "uint64", shape = (1, 1, 9)) # shape=(1, 1, 9)
var_8 = relay.floor_mod(const_6, var_7) # shape=(1, 6, 9)
var_9 = relay.var("var_9", dtype = "int64", shape = (3, 6, 3)) # shape=(3, 6, 3)
var_10 = relay.var("var_10", dtype = "int64", shape = (1, 1, 6, 3)) # shape=(1, 1, 6, 3)
var_11 = relay.mod(var_9, var_10) # shape=(1, 3, 6, 3)
var_12 = relay.divide(const_6, var_7) # shape=(1, 6, 9)
var_13 = relay.var("var_13", dtype = "int64", shape = (1, 1, 1)) # shape=(1, 1, 1)
var_14 = relay.divide(var_2, var_13) # shape=(1, 6, 4, 3)
var_15 = relay.floor_mod(var_1, var_0) # shape=(1, 6, 4, 3)
var_16 = relay.var("var_16", dtype = "uint64", shape = (1, 9)) # shape=(1, 9)
var_17 = relay.floor_mod(var_12, var_16) # shape=(1, 6, 9)
tuple = relay.Tuple([var_17,var_15,var_11,var_8,var_14,var_5,])
F = relay.Function([var_16,var_7,var_10,var_1,var_0,var_4,var_3,var_13,var_9,], tuple)
mod = tvm.IRModule()
mod['main'] = F
mod = relay.transform.InferType()(mod)
print(mod.astext(show_meta_data=False))
graph, lib, params = relay.build(mod, target="cuda")

This code snippet crashes unexpectedly on the statement graph, lib, params = relay.build(mod, target="cuda"). The error message is Check failed: y != 0 (0 vs. 0), thrown by the InfAwareDiv function in src/arith/const_int_bound.cc.

By the way, this crash is unrelated to CUDA; it still occurs after changing the target to llvm.

It seems TVM checks the minimum and maximum values of the divisor's inferred bound and aborts if either of them equals 0. I cannot figure out the logic behind this. Could anyone explain it? Many thanks!
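For what it's worth, here is my rough mental model of what is going on, as a minimal Python sketch. The class and function names (ConstIntBound, inf_aware_div, bound_of_floor_div) are illustrative only, not TVM's actual API; the point is that a free uint64 variable like var_7 gets the interval bound [0, UINT64_MAX], so when the analyzer divides by the interval's endpoints, one of them is 0 and the check fires:

```python
# Hypothetical, simplified model of the const-int-bound divisor check.
# Names here are illustrative, not TVM's real internals.

UINT64_MAX = 2**64 - 1

class ConstIntBound:
    """Interval [min_value, max_value] that an expression can take."""
    def __init__(self, min_value, max_value):
        self.min_value = min_value
        self.max_value = max_value

def inf_aware_div(x, y):
    # Mirrors the CHECK in src/arith/const_int_bound.cc: dividing a
    # bound endpoint by zero is rejected outright.
    assert y != 0, "Check failed: y != 0 (0 vs. 0)"
    return x // y

def bound_of_floor_div(lhs, rhs):
    # To bound lhs / rhs, divide the endpoints of lhs by the endpoints
    # of rhs; if rhs's interval contains 0, an endpoint may be 0.
    candidates = [inf_aware_div(a, b)
                  for a in (lhs.min_value, lhs.max_value)
                  for b in (rhs.min_value, rhs.max_value)]
    return ConstIntBound(min(candidates), max(candidates))

# A free unsigned variable has no known lower bound other than 0:
lhs = ConstIntBound(0, 973)          # e.g. the uint64 constant tensor
rhs = ConstIntBound(0, UINT64_MAX)   # e.g. var_7: its min endpoint is 0

try:
    bound_of_floor_div(lhs, rhs)
except AssertionError as e:
    print(e)  # reproduces "Check failed: y != 0 (0 vs. 0)"
```

So under this (assumed) model the crash is not about the runtime value of the divisor at all, only about its statically inferred bound, which explains why it happens at build time for any target.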

I suspect this might be related to the problem fixed in https://github.com/apache/tvm/pull/10098.

I’m taking a look.

Yeah, this was just a problem with the const int bound analyzer. Fix here: https://github.com/apache/tvm/pull/10102

Thanks for your kind reply.