Code question about maxpooling

As I read here, these lines of code implement max pooling:

```python
out_height = util.simplify((height - kernel_height + pad_top + pad_down) // stride_height + 1)
out_width = util.simplify((width - kernel_width + pad_left + pad_right) // stride_width + 1)
dheight = tvm.reduce_axis((0, kernel_height))
dwidth = tvm.reduce_axis((0, kernel_width))
if pool_type == 'max':
    temp = pad(data, pad_before, pad_after, name="pad_temp",
               pad_value=tvm.min_value(data.dtype))
    return tvm.compute((batch, channel, out_height, out_width),
                       lambda n, c, h, w:
                       tvm.max(temp[n, c, h*stride_height+dheight, w*stride_width+dwidth],
                               axis=[dheight, dwidth]),
                       tag="pool_max")
```

In the `tvm.max` call, why is the index `h * stride_height + dheight` rather than `h * kernel_height + dheight`?

In my understanding, `h` belongs to `[0, out_height)`, so `h * kernel_height` would be the starting height of the window over which max pooling is computed. The same goes for `w * stride_width` …
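To make the question concrete, here is a small NumPy sketch of the indexing I am asking about (my own illustration, not TVM code). It computes each output window starting at `h * stride`, which is what the TVM compute does; replacing `stride` with `kernel` in the window start would only coincide when `stride == kernel`:

```python
import numpy as np

def maxpool2d(data, kernel, stride):
    """Naive 2-D max pooling over a single channel.

    Window (h, w) covers rows h*stride .. h*stride+kernel-1 and
    cols w*stride .. w*stride+kernel-1 -- i.e. the window start is
    h*stride, mirroring temp[..., h*stride_height + dheight, ...].
    """
    H, W = data.shape
    out_h = (H - kernel) // stride + 1  # same formula as out_height above (no padding)
    out_w = (W - kernel) // stride + 1
    out = np.empty((out_h, out_w), dtype=data.dtype)
    for h in range(out_h):
        for w in range(out_w):
            # Using h*kernel here instead of h*stride would be wrong
            # whenever stride != kernel (overlapping or strided windows).
            out[h, w] = data[h*stride:h*stride+kernel,
                             w*stride:w*stride+kernel].max()
    return out

data = np.arange(16).reshape(4, 4)
print(maxpool2d(data, kernel=2, stride=2))  # non-overlapping 2x2 windows
print(maxpool2d(data, kernel=2, stride=1))  # overlapping windows, 3x3 output
```

With `stride=1, kernel=2` the windows overlap and there are 3×3 of them, which `h * kernel` indexing could not produce.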

Thanks for reviewing and answering.