When I run the following code:
```python
import numpy as np
import onnx
import tvm
from tvm import relay

model_path = 'mlpsimple.onnx'
onnx_model = onnx.load(model_path)
target = 'llvm'
input_name = 'input'
test_input = np.random.rand(1, 1)
shape_dict = {input_name: test_input.shape}
sym, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict, dtype='float32')
print('sym', sym)
print('params', params)
with relay.build_config(opt_level=1):
    intrp = relay.build_module.create_executor('graph', sym, tvm.cpu(0), target)

# Execute on TVM
dtype = 'float32'
tvm_output = intrp.evaluate(sym)(tvm.nd.array(test_input.astype(dtype)), **params).asnumpy()
```
I encounter the following error:
```
sym v0.0.1
%6 = fn (%input: Tensor[(1, 1), float32], %v1: Tensor[(1, 1), float32], %v2: Tensor[(1,), float32]) {
  %0 = nn.batch_flatten(%input)
  %1 = multiply(1f, %0)
  %2 = nn.dense(%1, %v1, units=1)
  %3 = multiply(1f, %v2)
  %4 = nn.bias_add(%2, %3)
  %5 = nn.relu(%4)
  %5
}
%6
params {'1': <tvm.NDArray shape=(1, 1), cpu(0)>
array([[-0.46104646]], dtype=float32), '2': <tvm.NDArray shape=(1,), cpu(0)>
array([-0.3470515], dtype=float32)}
-> tvm_output = intrp.evaluate(sym)(tvm.nd.array(test_input.astype(dtype)), **params).asnumpy()
*** stack smashing detected ***: python terminated
core dump
```
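For reference, one detail I am relying on in the snippet above is the dtype conversion: `np.random.rand` always returns `float64`, while the Relay function declares its input as `float32`, so the `astype(dtype)` cast before `tvm.nd.array` is required. A minimal NumPy-only sketch of that conversion (no TVM needed, just to show what the compiled graph receives):

```python
import numpy as np

# np.random.rand returns float64 by default, but the Relay graph above
# declares its input tensor as float32, so an explicit cast is needed
# before wrapping the array in tvm.nd.array.
test_input = np.random.rand(1, 1)
assert test_input.dtype == np.float64  # NumPy's default float dtype

converted = test_input.astype('float32')
assert converted.dtype == np.float32
print(converted.dtype)  # float32
```

So the array handed to the executor should already match the declared `float32` input, which makes the stack smashing during `intrp.evaluate(sym)(...)` all the more puzzling.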