A bug about QNN and FQ2I

When I run FQ2I, some data-consistency checks fail. The network has the following structure:

`quantize(int8) + dequantize + add + mul + quantize(int8)`

After running FQ2I, the structure is converted to: `quantize + qnn.add(int8) + qnn.mul(int8) + quantize`

The output scale of qnn.add is fetched from the last quantize node, which I think is wrong. The correct scale should come from the input scale of qnn.add, and the output type of qnn.add should be int32. If the output type of qnn.add is int8, the data range is insufficient.
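To illustrate the range concern, here is a small NumPy sketch (not TVM code; the values are made up) showing that the sum of two int8 tensors can fall outside the int8 range [-128, 127], while an int32 accumulator preserves it:

```python
import numpy as np

a = np.array([100, -120], dtype=np.int8)
b = np.array([100, -50], dtype=np.int8)

# int32 accumulation preserves the true sums.
acc32 = a.astype(np.int32) + b.astype(np.int32)
print(acc32)  # [ 200 -170] -> both outside the int8 range [-128, 127]

# Naive int8 accumulation wraps around modulo 256 and silently corrupts data.
acc8 = a + b  # dtype stays int8; NumPy wraps on overflow
print(acc8)   # [-56  86]
```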

Hmm, I believe the conversion is fine; qnn.add should internally do the addition in higher precision.
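A minimal sketch of that idea (not the actual TVM implementation; scales are made up and zero points are assumed to be 0): accumulate in int32, then requantize to the output scale taken from the final quantize node. This matches the float reference, so using that scale is not by itself a problem.

```python
import numpy as np

s_in = 0.05   # shared input scale (assumed)
s_out = 0.1   # output scale taken from the final quantize node (assumed)

x = np.array([100, 120, -100], dtype=np.int8)
y = np.array([100, 50, -60], dtype=np.int8)

# Accumulate in int32 so the intermediate sum cannot overflow int8,
# then requantize the result down to the int8 output scale.
acc = x.astype(np.int32) + y.astype(np.int32)
q_out = np.clip(np.round(acc * (s_in / s_out)), -128, 127).astype(np.int8)

# Float reference: dequantize, add, quantize with the output scale.
ref = np.clip(np.round((x * s_in + y * s_in) / s_out), -128, 127).astype(np.int8)
print(q_out.tolist())  # [100, 85, -80]
assert (q_out == ref).all()
```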

If you share a toy model, I can probably take a closer look.