ROCm compile of ONNX model failed

python3 -m tvm.driver.tvmc compile --target "rocm" --output rocm.tar resnet50-v2-7.onnx

'gfx21687' is not a recognized processor for this target (ignoring processor)
'gfx21687' is not a recognized processor for this target (ignoring processor)
python3: /data/jenkins_workspace/workspace/llvm_project_release/llvm/lib/IR/Globals.cpp:118: void llvm::GlobalObject::setAlignment(llvm::MaybeAlign): Assertion `(!Align || *Align <= MaximumAlignment) && "Alignment is greater than MaximumAlignment!"' failed.
Aborted (core dumped)
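For what it's worth, the repeated 'gfx21687' warning suggests LLVM was handed a garbage processor name, so the alignment assertion may just be a downstream symptom. Below is a minimal, unverified sketch of the same compile through the tvmc Python API with an explicit architecture; gfx908 (the LLVM name for an MI100) is only an assumption and should be replaced with the gfx identifier that rocminfo reports for your card.

```python
# Sketch, not a verified fix: pass an explicit -mcpu so LLVM gets a real
# GPU architecture instead of a bogus one like 'gfx21687'.
from tvm.driver import tvmc

model = tvmc.load("resnet50-v2-7.onnx")  # same model as the CLI invocation

# "gfx908" (MI100) is an assumption; use the arch reported by `rocminfo`.
package = tvmc.compile(
    model,
    target="rocm -mcpu=gfx908",
    package_path="rocm.tar",
)
```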

I hit exactly the same problem, except that I do not get the "not a recognized processor" error.

ROCm is quite picky when it comes to supported GPUs.

Are you sure your GPU is supported?

https://docs.amd.com/en-US/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html

For RDNA1 and RDNA2 GPUs it is sometimes possible to override as suggested here: https://github.com/RadeonOpenCompute/ROCm/issues/1180#issuecomment-1243104624

Although I don't have a gfx1030 card, my Navi 10 card can run on ROCm 5.2.3 with HSA_OVERRIDE_GFX_VERSION=10.3.0 to pretend to be a gfx1030 card, which verifies that rocm-libs and tensorflow-rocm support gfx1030 properly.
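In case it helps, here is a minimal sketch of that override done from Python rather than the shell. The assumption is that the variable is set before any ROCm-backed library initializes, since the HSA runtime reads it once at load time.

```python
import os

# Pretend to be a gfx1030 card; this must happen before the ROCm
# runtime loads, i.e. before importing any ROCm-backed library.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

import tensorflow as tf  # tensorflow-rocm, imported only after the override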

Hi, this error happens when the target is set as target = tvm.target.rocm(options='-mcpu=gfx906'). After I also set the model argument, target = tvm.target.rocm(model='gfx906', options='-mcpu=gfx906'), that error does not happen, but another error occurs:

0: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) [clone .cold]
  File "/home/xuyangyang/tvm2/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/xuyangyang/tvm2/python/tvm/relay/op/strategy/generic.py", line 56, in wrapper
    return topi_schedule(outs)
  File "/home/xuyangyang/tvm2/python/tvm/autotvm/task/topi_integration.py", line 242, in wrapper
    return topi_schedule(cfg, outs, *args, **kwargs)
  File "/home/xuyangyang/tvm2/python/tvm/topi/cuda/conv2d.py", line 46, in schedule_conv2d_nchw
    traverse_inline(s, outs[0].op, _callback)
  File "/home/xuyangyang/tvm2/python/tvm/topi/utils.py", line 81, in traverse_inline
    _traverse(final_op)
  File "/home/xuyangyang/tvm2/python/tvm/topi/utils.py", line 79, in _traverse
    callback(op)
  File "/home/xuyangyang/tvm2/python/tvm/topi/cuda/conv2d.py", line 44, in _callback
    schedule_direct_cuda(cfg, s, op.output(0))
  File "/home/xuyangyang/tvm2/python/tvm/topi/cuda/conv2d_direct.py", line 50, in schedule_direct_cuda
    cfg.fallback_with_reference_log(ref_log)
  File "/home/xuyangyang/tvm2/python/tvm/autotvm/task/space.py", line 1413, in fallback_with_reference_log
    factors = get_factors(int(np.prod(inp.config[knob_name].size)))
  File "/home/xuyangyang/tvm2/python/tvm/autotvm/task/space.py", line 170, in get_factors
    ([i, n // i] for i in range(1, int(math.sqrt(n)) + 1, step) if n % i == 0),
ValueError: math domain error
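For context, here is a minimal sketch of the target setup described above; mod and params are hypothetical placeholders for an already-imported Relay model, not code from this thread.

```python
import tvm
from tvm import relay

# Setting options='-mcpu=gfx906' alone triggered the LLVM alignment assertion;
# adding model='gfx906' got past it but led to the traceback above.
target = tvm.target.rocm(model="gfx906", options="-mcpu=gfx906")

# `mod` and `params` stand in for a Relay module loaded from ONNX elsewhere.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```

As for the final ValueError: math.sqrt(n) in get_factors only raises "math domain error" for negative n, so the fallback reference log for this target apparently produced a non-positive knob size.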

So you’re using a Vega 20 GPU? Just for reference.

It might make sense to report a bug along with information on how to reproduce it (e.g. a link to the model file and the TVM version). I'm sorry, I cannot help further here.

Hi, I am using an AMD MI100 GPU and the latest TVM version.
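If it matters for reproducing: the MI100 is a CDNA card whose LLVM architecture is gfx908, while gfx906 is Vega 20 (MI50/MI60), so a sketch that matches the target to this card would be:

```python
import tvm

# Assumption: MI100 = gfx908; the gfx906 target used above belongs to Vega 20.
target = tvm.target.rocm(model="gfx908", options="-mcpu=gfx908")
```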