Hexagon v68 support

I am currently running a TVM model on a Snapdragon 888 with the ci_hexagon docker image (tvmcihexagon/ci-hexagon-base:v0.01_SDK4.2.0.2).

When I build the model with tvm.target.hexagon('v66'), it works well.

However, when I build the model with tvm.target.hexagon('v68'), it raises the following error:

llvm: Unknown command line argument '-force-hvx-float'
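
For reference, my build roughly looks like this (the tiny network and parameter values are just placeholders for my real model):

import numpy as np
import tvm
from tvm import relay

# Tiny placeholder network; my real model is larger.
x = relay.var("x", shape=(1, 64), dtype="float32")
w = relay.var("w", shape=(64, 64), dtype="float32")
mod = tvm.IRModule.from_expr(relay.nn.dense(x, w))
params = {"w": np.zeros((64, 64), dtype="float32")}

target = tvm.target.hexagon("v68")  # works with "v66", fails with "v68"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=tvm.target.Target(target, host=target), params=params)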

When I comment out the following code in python/tvm/target/target.py:

# To enable auto-vectorization for the v68 target, add the below llvm-option by default
if arch_version == 68:
    if not llvm_options:
        llvm_options = ""
    llvm_options += " -force-hvx-float"

it no longer raises an error, but it shows lots of warnings like:

'+hvx-qfloat' is not a recognized feature for this target (ignoring feature)
'+hvx-ieee-fp' is not a recognized feature for this target (ignoring feature)

Is there a problem with the docker image (should I use a newer version?), or is it some other issue?

Thanks in advance,

Jaeyoon

We recommend that you use a newer LLVM. We keep improving the Hexagon backend in LLVM, and older versions of LLVM may not support all features. The warnings “… is not a recognized feature …” can be ignored. They indicate that the LLVM you are using does not generate vectorized floating-point code, but vectorization should still be supported for integer types. If your code has floating-point operations, they will execute as scalar code.

You can download a newer LLVM from https://releases.llvm.org.
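
As a quick sanity check, you can print the LLVM version your TVM build is linked against (assuming your TVM is recent enough to expose tvm.support.libinfo):

import tvm

# libinfo() reports the compile-time configuration of this TVM build;
# exact key names can vary across TVM versions.
print(tvm.support.libinfo().get("LLVM_VERSION", "not built with LLVM"))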

Thanks a lot for your answer!

This deviates a bit from the original question, but since you mentioned floating-point operations: is it preferable to use fixed point instead of floating point then?

I saw that HVX does not support native floating point, but it does support fixed point.
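
What I have in mind is something like TVM's built-in post-training quantization via relay.quantize (mod and params as in my sketch above; the global_scale value is only illustrative):

from tvm import relay

# Convert a float32 Relay module to fixed-point (integer) compute.
with relay.quantize.qconfig(global_scale=8.0):
    qmod = relay.quantize.quantize(mod, params=params)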

In addition, the latest Hexagon processors have a Tensor accelerator alongside the scalar unit and HVX. Is there anything I should do to use the Tensor accelerator for suitable operations within TVM (such as setting a build flag or something else)?

Thanks,
Jaeyoon