Support for pre-quantized model int8/uint8 conversion

Hi @anijain2305, thanks for the reply. I should've made myself clearer. What I meant was: if the model (weights and biases) was quantized to uint8, does TVM have a way to convert the uint8 weights and biases to int8?
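For context, here is a minimal sketch (in NumPy, outside TVM) of the arithmetic I have in mind. Since a quantized value `q` with zero point `zp` represents `real = scale * (q - zp)`, subtracting 128 from both the uint8 values and the zero point should map the data into int8 range without changing the represented real values. The function name and setup here are just for illustration, not an actual TVM API:

```python
import numpy as np

def uint8_to_int8(q_u8: np.ndarray, zero_point_u8: int):
    """Shift uint8 quantized values and their zero point into int8 range.

    Widen to int16 first so the subtraction cannot wrap around,
    then narrow back down to int8.
    """
    q_i8 = (q_u8.astype(np.int16) - 128).astype(np.int8)
    zero_point_i8 = zero_point_u8 - 128
    return q_i8, zero_point_i8

# Toy example: three uint8 values with zero point 128 and scale 0.1.
scale = 0.1
q_u8 = np.array([0, 128, 255], dtype=np.uint8)
zp_u8 = 128

q_i8, zp_i8 = uint8_to_int8(q_u8, zp_u8)

# Dequantized real values are identical before and after the shift.
real_u8 = scale * (q_u8.astype(np.float32) - zp_u8)
real_i8 = scale * (q_i8.astype(np.float32) - zp_i8)
assert np.allclose(real_u8, real_i8)
```

So the question is really whether TVM can apply this kind of zero-point shift to a pre-quantized model automatically, or whether it has to be done manually before import.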

I will certainly try what you suggested, thank you.