TVM build error

import tvm

n = 1024
A = tvm.te.placeholder((n,), name="A")
B = tvm.te.placeholder((n,), name="B")
C = tvm.te.compute(A.shape, lambda i: A[i] + B[i], name="C")
s = tvm.te.create_schedule(C.op)
target = "llvm"
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, tvm.te.thread_axis("blockIdx.x"))
s[C].bind(tx, tvm.te.thread_axis("threadIdx.x"))
fadd = tvm.build(s, [A, B, C], target)

The error log is listed below:
Traceback (most recent call last):
  File "test_tvm.py", line 13, in <module>
    fadd = tvm.build(s, [A, B, C], target)
  File "/home/tvm/tvm/python/tvm/driver/build_module.py", line 281, in build
    rt_mod_host = _driver_ffi.tir_to_runtime(annotated_mods, target_host)
  File "/home/tvm/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  12: TVMFuncCall
  11: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)>::AssignTypedLambda<tvm::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)#6}>(tvm::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)#6}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, tvm::runtime::TVMRetValue)
  10: tvm::TIRToRuntime(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target const&)
  9: tvm::codegen::Build(tvm::IRModule, tvm::Target)
  8: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::IRModule, tvm::Target)>::AssignTypedLambda<tvm::codegen::{lambda(tvm::IRModule, tvm::Target)#6}>(tvm::codegen::{lambda(tvm::IRModule, tvm::Target)#6}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, tvm::runtime::TVMRetValue)
  7: tvm::codegen::LLVMModuleNode::Init(tvm::IRModule const&, tvm::Target const&)
  6:
  5: tvm::codegen::CodeGenCPU::AddFunction(tvm::tir::PrimFunc const&)
  4: tvm::codegen::CodeGenLLVM::AddFunctionInternal(tvm::tir::PrimFunc const&, bool)
  3: tvm::tir::StmtFunctor<void (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  2: tvm::codegen::CodeGenCPU::VisitStmt_(tvm::tir::AttrStmtNode const*)
  1: tvm::codegen::CodeGenLLVM::VisitStmt_(tvm::tir::AttrStmtNode const*)
  0: tvm::codegen::CodeGenLLVM::GetThreadIndex(tvm::tir::IterVar const&)
  File "/home/tvm/tvm/src/target/llvm/codegen_llvm.cc", line 301
TVMError: not implemented

The "llvm" target runs on the CPU and does not implement GPU thread bindings like "blockIdx.x". You can use a GPU target such as "cuda" or "opencl" instead.

I changed "llvm" to "opencl" and "cuda", but the error still exists. Then I commented out the three lines above the "build" line (the split and the two bind calls), and the error vanished.

I tried this code on my platform, and it runs well.

import tvm

n = 1024
A = tvm.te.placeholder((n,), name="A")
B = tvm.te.placeholder((n,), name="B")
C = tvm.te.compute(A.shape, lambda i: A[i] + B[i], name="C")
s = tvm.te.create_schedule(C.op)
target = "cuda"
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, tvm.te.thread_axis("blockIdx.x"))
s[C].bind(tx, tvm.te.thread_axis("threadIdx.x"))
fadd = tvm.build(s, [A, B, C], target)

Thanks, I will check further.