Run an operator on Ethos-N

I can run a max_pool2d operator on the CPU. However, when I dispatch the max_pool2d operator to run on Ethos-N, I cannot get the correct result. The code is as follows:

import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_runtime
from tvm.relay.op.contrib import get_pattern_table

in_dtype = 'uint8'

def max_pool():
    data = relay.var("data", shape=(1, 6, 6, 1), dtype=in_dtype)
    op = relay.nn.max_pool2d(data, pool_size=(2, 2), strides=(2, 2), layout='NHWC')
    return relay.Function([data], op)

target = "c"
target_host = "c"
ctx = tvm.cpu(0)  
expr = max_pool()
mod = tvm.IRModule.from_expr(expr)
  
#Dispatch operator to Ethos-N / To run on CPU just comment it
pattern = get_pattern_table("ethos-n")    
mod = relay.transform.InferType()(mod)
mod = relay.transform.MergeComposite(pattern)(mod)
mod = relay.transform.AnnotateTarget("ethos-n")(mod)
mod = relay.transform.InferType()(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.InferType()(mod)
mod = relay.transform.PartitionGraph()(mod)

input = np.ones((1, 6, 6, 1), dtype=in_dtype)
input[0][0][0][0] = 3
input[0][5][0][0] = 4
input[0][0][5][0] = 5
input[0][5][5][0] = 6

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target=target, target_host=target_host)

lib.export_library('./lib.so')
lib = tvm.runtime.load_module('./lib.so')
module = graph_runtime.create(graph, lib, ctx)
module.set_input('data', tvm.nd.array(input))
module.run()
tvm_output = module.get_output(0).asnumpy()
print('Result:\n', tvm_output)

Ethos-N Result:

[[[[255] [255] [255]]
  [[255] [255] [255]]
  [[255] [255] [  0]]]]

CPU Result:

[[[[3] [1] [5]]
  [[1] [1] [1]]
  [[4] [1] [6]]]]


I have also transferred the lib.so, graph, and input to my Ethos-N hardware to run, but the result is still incorrect. How should I dispatch a max_pool2d operator to run on Ethos-N? Can anyone provide an example?

Hi @guanjen375,

Thanks for reporting this. I took a quick look at the code you provided but didn't notice anything immediately wrong. I'll try to take a more detailed look next week. In the meantime, it would be great if you could provide the version of TVM (commit hash) and the Driver Stack you're using, to help with debugging.

My TVM commit: fe948da88e3e05129971024fde72ed52568a4747

The Driver Stack was installed using the script /tvm/docker/install/ubuntu_install_ethosn_driver_stack.sh.

My config.cmake is as follows:

set(USE_ETHOSN /opt/arm/ethosn-driver/)

set(USE_ETHOSN_HW OFF)

I have transferred the lib.so, graph, and input to my Ethos-N device, but I got an incorrect result there as well.

Thanks @guanjen375, I'll look into this today.

I just tried this script out locally and was able to reproduce the expected output (the CPU result). This suggests to me that something is wrong with your TVM runtime setup.

I noticed you mentioned set(USE_ETHOSN_HW OFF), which is okay for the device you compile on, but this should be set to ON on the device where you would like to run your model, i.e. the device that has the NPU onboard. I suspect this is the reason you're seeing random values instead.
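For reference, a minimal sketch of the config.cmake entries you would expect in the TVM build on the NPU board, assuming the driver stack is installed at the same path as on your host:

set(USE_ETHOSN /opt/arm/ethosn-driver/)   # path to the Ethos-N driver stack install
set(USE_ETHOSN_HW ON)                     # enable execution on the Ethos-N hardware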

I have tried transferring the lib.so, graph, and input to my Ethos-N hardware and running there, but the result is incorrect. Is there any way to compile on x86 and run on the NPU board, or should I install TVM on my NPU board with set(USE_ETHOSN_HW ON)?

Yes, I would suggest installing TVM on the NPU board with set(USE_ETHOSN_HW ON).
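For completeness, here is a rough sketch of that split workflow. It assumes you keep compiling on the x86 host (where USE_ETHOSN_HW can stay OFF) and run on the NPU board with a TVM build that has USE_ETHOSN_HW ON. The file name graph.json and the aarch64-linux-gnu-g++ cross compiler are my assumptions, so substitute whatever matches your setup.

On the x86 host, after relay.build:

from tvm.contrib import cc

# Cross-compile the generated code for the board and save the graph JSON,
# so both artefacts can be copied over together with the input data.
lib.export_library('./lib.so', cc.cross_compiler('aarch64-linux-gnu-g++'))
with open('./graph.json', 'w') as f:
    f.write(graph)

On the NPU board (TVM built with USE_ETHOSN_HW ON):

import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Load the artefacts copied from the host; the partitioned Ethos-N
# subgraph is executed on the NPU through the driver stack.
lib = tvm.runtime.load_module('./lib.so')
with open('./graph.json') as f:
    graph = f.read()

module = graph_runtime.create(graph, lib, tvm.cpu(0))

input = np.ones((1, 6, 6, 1), dtype='uint8')
input[0][0][0][0] = 3
input[0][5][0][0] = 4
input[0][0][5][0] = 5
input[0][5][5][0] = 6

module.set_input('data', tvm.nd.array(input))
module.run()
print('Result:\n', module.get_output(0).asnumpy())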