The `topi.testing.conv2d_nhwc_python` function uses SciPy's `convolve2d` to perform the convolution after computing the padding. Below is the relevant piece of code from the function.
```python
at = a_np.transpose((0, 3, 1, 2))
wt = w_np.transpose((3, 2, 0, 1))
bt = np.zeros((batch, out_channel, out_height, out_width))
# computation
for n in range(batch):
    for f in range(out_channel):
        for c in range(in_channel):
            if pad_h > 0 or pad_w > 0:
                apad = np.zeros((in_height + pad_h, in_width + pad_w))
                apad[pad_top:pad_top + in_height, pad_left:pad_left + in_width] = at[n, c]
            else:
                apad = at[n, c]
            out = scipy.signal.convolve2d(
                apad, np.rot90(np.rot90(wt[f, c])), mode='valid')
            bt[n, f] += out[::stride_h, ::stride_w]
return bt.transpose((0, 2, 3, 1))
```
In the above code, when the condition `if pad_h > 0 or pad_w > 0` is true, `apad` is initialized with `np.zeros` and then filled with the appropriate values from `at`. In this case, since `np.zeros` always creates a `float64` array by default, the type of `apad` becomes `float64` irrespective of whether the input type of `at` was `float32` or `float16`.
In the other case, when both `pad_h` and `pad_w` are zero, `apad` has the same type as `at`.
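This dtype behavior can be reproduced in isolation. Below is a minimal sketch (the shapes and variable names are made up for illustration, not taken from the TVM source) showing that the padded path silently promotes to `float64` while the unpadded path preserves the input dtype, and how passing `dtype=at.dtype` would avoid the promotion:

```python
import numpy as np

# A float32 input channel, standing in for at[n, c]
at = np.ones((3, 3), dtype=np.float32)

# Padded path: np.zeros is called without a dtype, so it defaults to float64
apad = np.zeros((5, 5))
apad[1:4, 1:4] = at      # assignment casts at's values into apad; dtype stays float64
print(apad.dtype)        # float64

# Unpadded path: apad is simply bound to at, so the dtype is preserved
apad_nopad = at
print(apad_nopad.dtype)  # float32

# A possible fix (an assumption on my part, not from the TVM source):
apad_fixed = np.zeros((5, 5), dtype=at.dtype)
print(apad_fixed.dtype)  # float32
```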
Similarly, the output `bt` is also always `float64`, irrespective of the input types.
I wanted to ask whether this is a bug in the code or a conscious decision, given that the convolution is still correct but the output may end up with a different precision than the inputs.
Thanks