Hi, I am new to TVM and was hoping to get a better understanding of its workflow. I followed this tutorial (Getting Starting using TVMC Python: a high-level API for TVM — tvm 0.11.dev0 documentation), which shows how to use the tvmc Python interface, so I tried to load a simple model, compile it, and run it. My TVM code looks like this:
from tvm.driver import tvmc
model = tvmc.load('simple_model.onnx') #Step 1: Load
package = tvmc.compile(model, target="cuda") #Step 2: Compile
result = tvmc.run(package, device="cuda") #Step 3: Run
print(result.get_output('output_0'))
And my simple model looks like this (in PyTorch):
import torch
import torch.nn as nn
import torch.onnx as torch_onnx

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = nn.Linear(in_features=4, out_features=4, device='cuda')

    def forward(self, inputs):
        x = self.linear(inputs)
        return x

# Use this as an input to trace and serialize the model
model_onnx_path = "simple_model.onnx"
model = Model()
model.train(False)  # put the model in eval mode before export
dummy_input = torch.randn(1, 4, device='cuda')
print(model(dummy_input))
output = torch_onnx.export(model,
                           dummy_input,
                           model_onnx_path,
                           verbose=True)
However, I noticed that every time I run my TVM compile-and-run code above, it prints a different output. Any reason for that? Sorry if this is a very basic question.
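In case it matters, I did not pass any inputs to tvmc.run, so I am wondering whether I need to pin the input myself. Below is a minimal sketch of what I think that would look like, assuming tvmc.run accepts an inputs dict of numpy arrays keyed by the graph's input names; "input_name" is a placeholder, since I did not set input_names during export and would need to read the real name from the ONNX graph:

import numpy as np
from tvm.driver import tvmc

# Fixed input so the run should be reproducible.
# "input_name" is a placeholder for the actual input name of the exported ONNX graph.
data = np.random.randn(1, 4).astype("float32")

model = tvmc.load('simple_model.onnx')
package = tvmc.compile(model, target="cuda")
result = tvmc.run(package, device="cuda", inputs={"input_name": data})
print(result.get_output('output_0'))

Is that the right way to get a reproducible output, or is something else going on?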