Simple Keras Network not working correctly

Hi, I wanted to test some stuff with the graph runtime and ran into the problem that the graph runtime's results are completely different from the real Keras results.

The network itself is very simple and contains only a single Conv2D layer:

Model: "test_case"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 5, 5, 1)]         0
_________________________________________________________________
conv1 (Conv2D)               (None, 5, 5, 1)           9
=================================================================
Total params: 9
Trainable params: 9
Non-trainable params: 0

The weights are fixed to [[0, 0, 0], [0, 0, 1], [0, 0, 0]].
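Given that kernel, the expected output is easy to check by hand. Assuming the layer uses padding='same' (which matches the (None, 5, 5, 1) output shape) and Keras-style cross-correlation, the single 1 at row 1, column 2 simply shifts the input one column to the left, so an all-ones input should produce ones everywhere except the last column. A quick NumPy sanity check:

```python
import numpy as np

# Kernel with a single nonzero tap at row 1, col 2.
kernel = np.array([[0, 0, 0],
                   [0, 0, 1],
                   [0, 0, 0]], dtype=np.float32)

inp = np.ones((5, 5), dtype=np.float32)
padded = np.pad(inp, 1)  # zero padding, as with padding='same'

# Naive cross-correlation (what Keras Conv2D computes).
out = np.zeros_like(inp)
for i in range(5):
    for j in range(5):
        out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)

print(out)  # ones in columns 0..3, zeros in the last column
```

Any result that deviates from this pattern points at the compilation path rather than the model itself.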

import numpy as np
import keras
import tvm
from tvm import relay
from tvm.contrib import graph_runtime

batch_size = 1
input_name = "input_1"

inp_layer = keras.layers.Input((5, 5, 1))
data_shape = (batch_size,) + (5, 5, 1)
model = keras.models.load_model(model_path)

shape_dict = {input_name: data_shape}
mod, params = relay.frontend.from_keras(model, shape_dict, layout="NHWC")

with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build(mod, 'llvm')

module = graph_runtime.create(graph, lib, tvm.cpu())
inp = np.ones((1, 5, 5, 1))
module.set_input(input_name, inp)
module.run()
outp = module.get_output(0)
outp = outp.asnumpy()

The TVM results vary widely and are never equal to the Keras output. What did I miss here? (Small test samples, defined directly in Relay, work fine.)

I found my problem: I did not pass the params returned by the Keras frontend to relay.build.
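For anyone hitting the same issue, a sketch of the fix (using the same older-TVM API with graph_runtime as above; model_path and the earlier variables are assumed to be set up as in the original snippet):

```python
# Pass the converted Keras weights to relay.build so the compiled
# module has real values for conv1's kernel instead of leaving the
# parameter buffers unbound.
with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build(mod, target="llvm", params=params)

module = graph_runtime.create(graph, lib, tvm.cpu())
module.set_input(input_name, inp)
module.set_input(**params)  # bind the weights explicitly as well
module.run()
outp = module.get_output(0).asnumpy()
```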

Interesting that it still builds and runs.

Do you know what value it assumes for the parameters?

I am not sure; the output was very close to zero (on the order of 1e-34) but not exactly zero. Maybe just random values?
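That would be consistent with uninitialized memory rather than deliberately randomized values: as far as I know, when a parameter is never bound, the graph runtime's tensor storage is allocated without being filled, so the "weights" are just whatever bytes happen to be there, which often looks like tiny denormal-like numbers. NumPy's np.empty behaves analogously:

```python
import numpy as np

# np.empty allocates storage without initializing it, similar to an
# unbound parameter tensor: the contents are arbitrary leftover bytes,
# not zeros and not drawn from any distribution.
buf = np.empty((3, 3), dtype=np.float32)
print(buf)  # arbitrary values; frequently 0.0 or tiny garbage, never guaranteed
```

So the e-34 values are not something TVM "assumes", they are just whatever the allocator handed back.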