macOS 13 with Intel GPU errors

Hi all,

I am trying tvmlang on macOS 13.3, on a 13-inch MacBook Pro with just a mere Intel 550 GPU. I am getting various errors for various tutorial models. I have Miniconda with Python 2.7.14.

I compiled TVM with support for Metal, OpenCL, and OpenGL (I manually compiled glfw3 and modified config.mk to add the following flags):

```
# the additional link flags you want to add
ADD_LDFLAGS = -lglfw3 -framework Cocoa -framework OpenGL -framework IOKit -framework CoreVideo
```
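For completeness, the backend switches I enabled in config.mk alongside the link flags looked roughly like this (a sketch from the config.mk layout of that TVM era; the flag names may differ in your checkout, so check your own copy):

```
# config.mk (excerpt) - enable the GPU runtimes exercised below
USE_OPENCL = 1
USE_METAL = 1
USE_OPENGL = 1

# extra link flags for the OpenGL runtime (glfw3 built manually)
ADD_LDFLAGS = -lglfw3 -framework Cocoa -framework OpenGL -framework IOKit -framework CoreVideo
```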

No matter what sample I try to run from tvm/tutorials/nnvm, I get errors:

`python from_onnx.py`, modified with `tvm.opencl(0)` instead of `tvm.gpu(0)`, target `'opencl'`:

```
tvm._ffi.base.TVMError: [21:08:20] src/runtime/module_util.cc:52: Check failed: ret == 0 (-1 vs. 0) [21:08:20] src/runtime/opencl/opencl_module.cc:223: Check failed: e == CL_SUCCESS OpenCL Error, code=-54: CL_INVALID_WORK_GROUP_SIZE
```
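The only change I made to each tutorial was the device/target pair. A minimal sketch of that selection (I believe the tutorials ship with `target = 'cuda'` and `ctx = tvm.gpu(0)`; the import is guarded here so the snippet runs even on a machine without TVM installed):

```python
# Device/target selection sketch for the nnvm tutorials on an Intel-only Mac.
# Each backend pairs a target string with the matching TVM device context.
try:
    import tvm
    pairs = {
        "opencl": tvm.opencl(0),  # replaces tvm.gpu(0), target 'cuda'
        "metal": tvm.metal(0),
        "opengl": tvm.opengl(0),
    }
except ImportError:
    pairs = None  # TVM not installed; nothing to select

print(pairs)
```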

`python from_onnx.py`, modified with `tvm.metal(0)` instead of `tvm.gpu(0)`, target `'metal'`:

```
tvm._ffi.base.TVMError: [21:09:21] src/runtime/module_util.cc:52: Check failed: ret == 0 (-1 vs. 0) [21:09:21] src/runtime/metal/metal_module.mm:129: Check failed: state != nil cannot get state: for function fuse_conv2d_broadcast_add_relu_2__kernel2 Compiler encountered an internal error
```

`python from_onnx.py`, modified with `tvm.opengl(0)` instead of `tvm.gpu(0)`, target `'opengl'`:

```
TVMError: [21:09:52] src/runtime/opengl/opengl_device_api.cc:288: ERROR: 0:1: '' : version '300' is not supported
ERROR: 0:1: '' : syntax error: #version
ERROR: 0:2: '' : #version required and missing.
```

```glsl
#version 300 es
in vec2 point;  // input to vertex shader
void main() {
    gl_Position = vec4(point, 0.0, 1.0);
}
```

`python from_mxnet_to_webgl.py`, with `run_deploy_rpc = True` and `run_deploy_web = True`:

```
TVMError: [21:00:19] src/codegen/llvm/llvm_common.cc:141: Check failed: allow_null No available targets are compatible with this triple. target_triple=asmjs-unknown-emscripten
```

I would love to play with TVM, but it seems like it's just not willing to work for me :slight_smile:

Alexandru

In case anybody else sees this: it was a conflict with an older version of TVM (0.2) that I had somehow installed in one of my Python installations.

```
pip list
pip uninstall tvm
```
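A quick way to spot this kind of shadowing is to ask Python which file it would actually import for a package; with a stale TVM 0.2 on the path, the resolved file points into site-packages instead of your source build. A small stdlib-only sketch (using `json` as a stand-in module, since `tvm` may not be installed where you run this):

```python
import importlib.util

def locate(module_name):
    """Return the file Python would load for module_name, or None if absent."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec is not None else None

# locate("tvm") would reveal which installation wins on your machine.
print(locate("json"))          # stdlib stand-in: path ending in json/__init__.py
print(locate("no_such_pkg"))   # None
```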

Now everything works great.

A

I’ve done some specific optimizations for Intel GPUs but haven’t pushed the code upstream yet. If you like, please feel free to give it a try: https://github.com/Laurawly/tvm-1