Is there a way to compile LLVM IR and parameters into a target library?

Hi, I’ve obtained the LLVM IR of a model with the code below:

import os
import tvm
from tvm import relay

# Build the Relay module, then dump the LLVM IR of the compiled operators.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target, params=params)

ir = lib.get_lib().get_source("ll")

with open(os.path.join(model_path, "model.ll"), "w") as f:
    print(ir, file=f)
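Since the weights are not part of the .ll, I also save the graph JSON and the params next to it so they can be re-attached later (the file names here are just my own choice):

# The graph structure and the weights live outside the LLVM IR.
with open(os.path.join(model_path, "graph.json"), "w") as f:
    f.write(lib.get_graph_json())

with open(os.path.join(model_path, "params.bin"), "wb") as f:
    f.write(tvm.runtime.save_param_dict(lib.get_params()))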

I’ve then used Polly to optimize the corresponding LLVM IR (or I would appreciate it if somebody could tell me whether Polly is beneficial for inference at all).

Now I want to compile the optimized LLVM IR (the .ll file) back into a library (a .tar or .so file). AFAIK, the .ll file only contains the operator code, not the weights, which means I need to re-link the weights with the .ll file. I wonder whether this is possible, or whether it becomes impossible after running Polly.
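What I have in mind is roughly the sketch below (unverified; the clang invocation and the file names are assumptions on my part): compile model.ll into a shared object, load it back with tvm.runtime.load_module, and recreate the graph executor from the graph JSON and params saved above.

import subprocess
import tvm
from tvm.contrib import graph_executor

# Compile the (Polly-)optimized LLVM IR into a shared library.
subprocess.run(
    ["clang", "-O3", "-fPIC", "-shared", "model.ll", "-o", "model.so"],
    check=True,
)

# Load the shared object back as a TVM runtime module.
loaded_lib = tvm.runtime.load_module("model.so")

# Re-attach the graph structure and the weights saved earlier.
with open("graph.json") as f:
    graph_json = f.read()
with open("params.bin", "rb") as f:
    params = tvm.runtime.load_param_dict(f.read())

module = graph_executor.create(graph_json, loaded_lib, tvm.cpu(0))
module.set_input(**params)
# From here on, set_input / run as usual.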

Thanks in advance for your help.

Same question. I understand that it requires the LLVM IR, the graph JSON, the params, and the target device information (in my case just the CPU) to fully build an executable module, which can be done like this:

import numpy as np
import tvm
from tvm.contrib import graph_executor

dev = tvm.cpu(0)

# Pull the pieces out of the factory module returned by relay.build.
lib = mod.get_lib()
graph_json = mod.get_graph_json()
params = mod.get_params()

# Create a graph executor
module = graph_executor.create(graph_json, lib, dev)

# Set the input data for the model
input_data = np.random.uniform(-1, 1, size=input_shape).astype("float32")
module.set_input(input_name, input_data)

# Set params
module.set_input(**params)

# Run the model
module.run()

But I have no idea how to use or convert those materials manually into some kind of executable module (another LLVM IR with a main function, so that CompilerGym could execute it?). Is there any way to easily compile those materials? I am still figuring it out.
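One route I can think of (a sketch, not verified against CompilerGym) is to export the factory module and reload it in a separate script; note this gives a loadable .so rather than a standalone LLVM IR with a main function. Assuming mod is the object returned by relay.build and input_name / input_shape are as above:

import numpy as np
import tvm
from tvm.contrib import graph_executor

# Export code + graph JSON + params as one deployable shared library.
mod.export_library("deploy.so")

# In a separate script/process: reload and run without Relay.
dev = tvm.cpu(0)
loaded = tvm.runtime.load_module("deploy.so")
module = graph_executor.GraphModule(loaded["default"](dev))

input_data = np.random.uniform(-1, 1, size=input_shape).astype("float32")
module.set_input(input_name, input_data)
module.run()
out = module.get_output(0).numpy()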