NVIDIA Xavier GPU inference segmentation fault

I’m trying to run inference on the GPU of the NVIDIA Xavier. I’m on the v0.7 tag and have compiled with set(USE_LLVM ON) and set(USE_CUDA ON). I can access the GPU via TensorFlow and TensorRT.

When running a simple inference script, I am able to use the CPU of the Xavier; however, running the same code with CUDA fails with a segmentation fault.

Watching $ tegrastats (the only way to see Xavier GPU utilisation as far as I know), I can see that the GPU is being used, so I’m trying to figure out where the problem is coming from.

You can see my self-contained script here.
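
For reference, a minimal script of the same shape looks roughly like this (using the stock relay.testing ResNet-18 as a placeholder model rather than my actual network, and the v0.7 graph_runtime API):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.relay import testing
from tvm.contrib import graph_runtime

# Placeholder model: a stock ResNet-18 workload stands in for my network.
batch_size = 1
data_shape = (batch_size, 3, 224, 224)
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=batch_size)

# Target/context as discussed below.
target = tvm.target.cuda(model='tx2')
ctx = tvm.gpu(0)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Create the graph runtime module on the GPU and run a single inference.
module = graph_runtime.GraphModule(lib["default"](ctx))
data = np.random.uniform(size=data_shape).astype("float32")
module.set_input("data", data)
module.run()  # with CUDA, this is where things fall over
out = module.get_output(0).asnumpy()
print(out.shape)
```

The same pattern runs cleanly when I swap the target for "llvm" and the context for tvm.cpu(0).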

My target is tvm.target.cuda(model='tx2'), though I have also tried model='xavier' and no model argument at all, and my context is gpu(0).
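
Spelled out, these are the combinations I’ve tried (all of them behave the same way for me):

```python
ctx = tvm.gpu(0)                           # context in every case

target = tvm.target.cuda(model='tx2')      # current attempt
# target = tvm.target.cuda(model='xavier')
# target = tvm.target.cuda()               # no model argument at all
```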

Any troubleshooting tips, or folk who’ve encountered similar issues? The Xavier is perhaps a slightly different NVIDIA target than others, so that could be a source of issues. It works on my desktop GPU nae bother.