I can run the max_pool2d operator on the CPU. However, when I dispatch max_pool2d to Ethos-N, I do not get the correct result. The code is as follows:
```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_runtime
from tvm.relay.op.contrib import get_pattern_table

in_dtype = 'uint8'

def max_pool():
    data = relay.var("data", shape=(1, 6, 6, 1), dtype=in_dtype)
    op = relay.nn.max_pool2d(data, pool_size=(2, 2), strides=(2, 2), layout='NHWC')
    return relay.Function([data], op)

target = "c"
target_host = "c"
ctx = tvm.cpu(0)

expr = max_pool()
mod = tvm.IRModule.from_expr(expr)

# Dispatch the operator to Ethos-N; to run on the CPU, just comment this block out.
pattern = get_pattern_table("ethos-n")
mod = relay.transform.InferType()(mod)
mod = relay.transform.MergeComposite(pattern)(mod)
mod = relay.transform.AnnotateTarget("ethos-n")(mod)
mod = relay.transform.InferType()(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.InferType()(mod)
mod = relay.transform.PartitionGraph()(mod)

input_data = np.ones((1, 6, 6, 1), dtype=in_dtype)
input_data[0][0][0][0] = 3
input_data[0][5][0][0] = 4
input_data[0][0][5][0] = 5
input_data[0][5][5][0] = 6

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target=target, target_host=target_host)

lib.export_library('./lib.so')
lib = tvm.runtime.load_module('./lib.so')
module = graph_runtime.create(graph, lib, ctx)
module.set_input('data', tvm.nd.array(input_data))
module.run()
tvm_output = module.get_output(0).asnumpy()
print('Result:\n', tvm_output)
```
Ethos-N result:

```
[[[[255] [255] [255]]
  [[255] [255] [255]]
  [[255] [255] [  0]]]]
```

CPU result:

```
[[[[3] [1] [5]]
  [[1] [1] [1]]
  [[4] [1] [6]]]]
```
I have also copied lib.so, the graph JSON, and the input to my Ethos-N hardware and run it there, but the result is still incorrect. How should I dispatch a max_pool2d operator so that it runs on Ethos-N? Can anyone provide an example?
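For reference, the CPU result can be checked against a plain-NumPy sketch of 2x2/stride-2 NHWC max pooling with no padding (the helper name `max_pool2d_nhwc` is my own, written to match the Relay call above):

```python
import numpy as np

def max_pool2d_nhwc(x, pool=2, stride=2):
    # Naive NHWC max pooling, no padding.
    n, h, w, c = x.shape
    oh = (h - pool) // stride + 1
    ow = (w - pool) // stride + 1
    out = np.empty((n, oh, ow, c), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            window = x[:, i * stride:i * stride + pool,
                          j * stride:j * stride + pool, :]
            out[:, i, j, :] = window.max(axis=(1, 2))
    return out

# Same input as in the question.
x = np.ones((1, 6, 6, 1), dtype='uint8')
x[0, 0, 0, 0] = 3
x[0, 5, 0, 0] = 4
x[0, 0, 5, 0] = 5
x[0, 5, 5, 0] = 6

print(max_pool2d_nhwc(x)[0, :, :, 0])
# [[3 1 5]
#  [1 1 1]
#  [4 1 6]]
```

This matches the CPU output above, so the CPU path looks correct and the all-255 Ethos-N output appears to be garbage (to me it looks like an uninitialized or saturated uint8 buffer, but that is only a guess).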