Non single or double precision floating point in Mod

Bug Description

When the following script was compiled with the "llvm" target, it ran well. However, when I changed the target to "cuda", it crashed with: Check failed: (false) is false: Non single or double precision floating point in Mod, expected 32 or 64 bits but got 16 bits.

Crash Traceback:

Traceback (most recent call last):
  File "a.py", line 18, in <module>
    graph, lib, params = relay.build(mod, target='cuda')  # crash
  File "/workplace/software/tvm/tvm-new/python/tvm/relay/build_module.py", line 449, in build
    graph_json, runtime_mod, params = bld_mod.build(
  File "/workplace/software/tvm/tvm-new/python/tvm/relay/build_module.py", line 189, in build
    self._build(mod, target, target_host, executor, runtime, mod_name)
  File "/workplace/software/tvm/tvm-new/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  20: TVMFuncCall
  19: _ZNSt17_Function_handlerIFvN3tvm7runtime7TVMArgsEPNS1_11
  18: tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
  17: tvm::relay::backend::RelayBuildModule::Build(tvm::IRModule, tvm::runtime::Map<tvm::Integer, tvm::Target, void, void> const&, tvm::Target const&, tvm::relay::Executor const&, tvm::relay::Runtime const&, tvm::runtime::String)
  16: tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, tvm::runtime::String const&)
  15: tvm::build(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target const&)
  14: tvm::codegen::Build(tvm::IRModule, tvm::Target)
  13: tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::IRModule, tvm::Target)>::AssignTypedLambda<tvm::runtime::Module (*)(tvm::IRModule, tvm::Target)>(tvm::runtime::Module (*)(tvm::IRModule, tvm::Target), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const
  12: tvm::codegen::BuildCUDA(tvm::IRModule, tvm::Target)
  11: tvm::codegen::CodeGenC::AddFunction(tvm::tir::PrimFunc const&)
  10: tvm::NodeFunctor<void (tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>*)>::operator()(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>*) const
  9: tvm::codegen::CodeGenCUDA::VisitStmt_(tvm::tir::AttrStmtNode const*)
  8: tvm::codegen::CodeGenC::VisitStmt_(tvm::tir::AttrStmtNode const*)
  7: tvm::NodeFunctor<void (tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>*)>::operator()(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>*) const
  6: tvm::codegen::CodeGenCUDA::VisitStmt_(tvm::tir::AttrStmtNode const*)
  5: tvm::codegen::CodeGenC::VisitStmt_(tvm::tir::AttrStmtNode const*)
  4: tvm::NodeFunctor<void (tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>*)>::operator()(tvm::runtime::ObjectRef const&, tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>*) const
  3: tvm::codegen::CodeGenC::VisitStmt_(tvm::tir::StoreNode const*)
  2: tvm::codegen::CodeGenC::PrintExpr[abi:cxx11](tvm::PrimExpr const&)
  1: tvm::NodeFunctor<void (tvm::runtime::ObjectRef const&, tvm::tir::ExprFunctor<void (tvm::PrimExpr const&, std::ostream&)>*, std::ostream&)>::operator()(tvm::runtime::ObjectRef const&, tvm::tir::ExprFunctor<void (tvm::PrimExpr const&, std::ostream&)>*, std::ostream&) const
  0: tvm::codegen::CodeGenC::VisitExpr_(tvm::tir::ModNode const*, std::ostream&)
  File "/workplace/software/tvm/tvm-new/src/target/source/codegen_c.cc", line 548
TVMError: 
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (false) is false: Non single or double precision floating point in Mod, expected 32 or 64 bits but got 16 bits.

Reproducible Script

import tvm
from tvm import relay

mod = tvm.IRModule()
var_22 = relay.var("var_22", dtype="bool", shape=())
var_23 = relay.var("var_23", dtype="bool", shape=())
bop_25 = relay.mod(var_23.astype('float16'), var_22.astype('float16'))  # float16 mod, shape=()
output = relay.Tuple([bop_25])
F = relay.Function([var_22, var_23], output)
mod['main'] = F
mod = relay.transform.InferType()(mod)
print('==========irmod built by Relay==========')
print(mod.astext(show_meta_data=False))
print('===================================')
graph, lib, params = relay.build(mod, target='llvm')  # run well
graph, lib, params = relay.build(mod, target='cuda')  # crash
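For what it's worth, a possible workaround (an untested sketch, not an upstream fix) is to keep the mod out of float16 at the Relay level, e.g. relay.cast(relay.mod(relay.cast(a, 'float32'), relay.cast(b, 'float32')), 'float16'), since the CUDA backend only knows how to print fmodf/fmod for 32/64-bit floats. Numerically, "widen to float32, take fmod, round back to float16" looks like the following in plain stdlib Python (mod_f16_via_f32 and the helpers are names I made up for illustration):

```python
import math
import struct

def to_f16(x):
    # Round a Python float to the nearest IEEE binary16 value
    # (struct format 'e' packs/unpacks half precision).
    return struct.unpack('e', struct.pack('e', x))[0]

def to_f32(x):
    # Round a Python float to the nearest IEEE binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

def mod_f16_via_f32(a, b):
    # Widen both float16 operands to float32, take fmod there
    # (the 32-bit path the CUDA codegen can emit as fmodf),
    # then round the result back down to float16 precision.
    return to_f16(math.fmod(to_f32(a), to_f32(b)))

# For the bool inputs in the repro the operands are exactly 0.0 or 1.0,
# so the widen/narrow round trip is lossless.
print(mod_f16_via_f32(1.0, 1.0))  # 0.0
```

This only sidesteps the crash for this script; the underlying issue is that CodeGenC::VisitExpr_ for ModNode has no 16-bit floating-point path.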