Hi~
I’m new to TVM, and I ran into a problem while trying to implement FGSM in TVM using the Relay IR of a LeNet-5 ONNX model.
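For context, the FGSM step itself is just a one-line perturbation of the input along the sign of the loss gradient. A minimal NumPy sketch, independent of TVM (the function name and toy values are mine, for illustration only):

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: nudge the input by eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    return x + eps * np.sign(grad)

# toy input and a toy gradient of the loss w.r.t. that input
x = np.array([0.5, -0.2, 0.1])
grad = np.array([0.3, -0.7, 0.0])
x_adv = fgsm(x, grad, eps=0.1)
print(x_adv)  # -> [ 0.6 -0.3  0.1]
```

The TVM part only matters for producing `grad`, which is where the crash below happens.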
The program runs successfully, and the adversarial examples attack the IRModule successfully (the Relay IR is saved as before_fastmath.txt).
Then I used relay.transform.FastMath()(irmod) to transform the Relay IR for better performance (the optimized Relay IR is saved as after_fastmath.txt). However, the program crashed on the optimized Relay IR and threw the exception: Check failed: (!MissingGrad(e)) is false: input has operators with missing gradients.
I compared the two IR files and found that the only difference is the softmax layer: the original Relay IR uses %20 = nn.softmax(%19), while the optimized one uses %20 = nn.fast_softmax(%19).
I’m not sure whether this is caused by my misuse of the API or by an incomplete implementation of nn.fast_softmax. (Could it be that nn.fast_softmax does not support gradient calculation, i.e. has no registered gradient?)
The reproduction script can be found here.
Thanks in advance