Pooling and Convolution with "SAME" mode

Hello Community,

How do I achieve “SAME” mode in the Pooling operator?
MXNet supports only “VALID” and “FULL”.

I am implementing an MXNet backend for Keras. Keras supports “VALID” and “SAME” modes.
The workaround I am thinking of:

  • Calculating the padding needed to get the same output shape after pooling. (How?)
  • Calling sym.pad() followed by sym.Pooling()

Any suggestions would be greatly appreciated.

Also, if we choose the above-mentioned padding route, will it have any impact on the pooling operation? How do I calculate the padding size to achieve pooling with SAME mode?

Thanks,
Sandeep

For the Convolution operator, I achieved “SAME” mode with this code block - https://github.com/deep-learning-tools/keras/blob/keras2_mxnet_backend/keras/backend/mxnet_backend.py#L3876-L3891
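For reference, the rough idea is a minimal sketch along these lines (the helper name conv2d_same and the assumption that the spatial input shape is known are mine, and this is not the exact code at the link): compute the total padding that gives ceil(input/stride) outputs, and because MXNet's Convolution pads symmetrically, apply any asymmetric part explicitly with mx.sym.pad first.

import mxnet as mx

def conv2d_same(data, num_filter, kernel, stride=(1, 1), dilation=(1, 1), in_shape=(13, 13)):
    # total padding per spatial dim so that out = ceil(in / stride), as in TF/Keras 'SAME'
    pads = []
    for i in range(2):
        eff_k = dilation[i] * (kernel[i] - 1) + 1                    # dilated kernel extent
        out = (in_shape[i] + stride[i] - 1) // stride[i]             # ceil division
        total = max((out - 1) * stride[i] + eff_k - in_shape[i], 0)
        pads.append((total // 2, total - total // 2))                # (before, after); extra goes after
    (pt, pb), (pl, pr) = pads
    # apply possibly asymmetric padding explicitly, then convolve with pad=(0, 0)
    data = mx.sym.pad(data, mode='constant', constant_value=0,
                      pad_width=(0, 0, 0, 0, pt, pb, pl, pr))
    return mx.sym.Convolution(data, num_filter=num_filter, kernel=kernel,
                              stride=stride, dilate=dilation, pad=(0, 0))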

Hi,

I achieve “SAME” padding (for odd kernels) with the following, assuming you know your kernel size and dilation rate, for stride = 1:


from mxnet import gluon

# Ensures padding = 'SAME' for an ODD kernel size with stride = 1
# (integer division keeps the padding an int)
p0 = dilation_rate[0] * (kernel_size[0] - 1) // 2
p1 = dilation_rate[1] * (kernel_size[1] - 1) // 2
p = (p0, p1)

nfilters = 32  # some number

gluon.nn.Conv2D(channels=nfilters, kernel_size=kernel_size,
                padding=p, dilation=dilation_rate)

You can find the arithmetic that is followed for the output shape here:

out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1

By demanding out_height == height and assuming stride = 1, you can solve the equation for padding (given kernel size and dilation). The floor function perhaps complicates things, but for odd kernels it works for me. Additional information on convolution arithmetic can be found here, but note that MXNet has (perhaps?) slightly different definitions.
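Spelling the algebra out for the height dimension: with stride[0] = 1 the argument of floor is an integer, so the floor drops out and

height = height + 2*padding[0] - dilation[0]*(kernel_size[0]-1) - 1 + 1
=>  2*padding[0] = dilation[0]*(kernel_size[0]-1)
=>  padding[0] = dilation[0]*(kernel_size[0]-1)/2

which is an integer exactly when kernel_size[0] is odd; the same holds for the width dimension.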

Hope this helps.


The padding requirements between pooling and convolution are basically the same, in the sense that they both run a kernel over the data+padding. What makes this a bit challenging with MXNet is that the padding is applied symmetrically, which means that getting the same length on output as input is not possible when a kernel dimension is even. The steps to take to get ‘SAME’ padding in 2-D pooling:

  1. set pad to (kernel[0]//2, kernel[1]//2)
  2. if any kernel dimension is even, slice off the first row/column of the output (this will replicate the implementation in TF)

Note that because of the slice operation, there is an extra memory copy associated with even-dimensioned kernels.

# Example code:
# Assuming a symmetric kernel of (k x k) and data shape of (batch, c, h, w)
pool = mx.sym.Pooling(data, kernel=(k, k), pad=(k // 2, k // 2))
if k % 2 == 0:
    pool = pool[:, :, 1:, 1:]
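As a quick shape check, here is the same recipe translated to NDArray with made-up sizes (an illustrative run, stride 1, default max pooling):

import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 13, 13))
k = 2  # even kernel, so the slice is needed
y = mx.nd.Pooling(x, kernel=(k, k), stride=(1, 1), pad=(k // 2, k // 2), pool_type='max')
if k % 2 == 0:
    y = y[:, :, 1:, 1:]
print(y.shape)  # (1, 3, 13, 13) - same spatial size as the input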

Thank you @feevos … This was very helpful.


Thank you @safrooze - Your solution worked for me.

Hi,
Can I get a specific application of @safrooze's answer when building a net? For example:

net = nn.HybridSequential()
net.add(
    # conv_bn_block_0
    nn.Conv2D(channels=16, kernel_size=3, strides=1, padding=1),
    nn.BatchNorm(in_channels=16),
    nn.LeakyReLU(0.1),

    # max_pool_1
    nn.MaxPool2D(2, 2),

    # conv_bn_block_2
    nn.Conv2D(channels=32, kernel_size=3, strides=1, padding=1),
    nn.BatchNorm(in_channels=32),
    nn.LeakyReLU(0.1),

    # max_pool_3
    nn.MaxPool2D(2, 2),

    # conv_bn_block_4
    nn.Conv2D(channels=64, kernel_size=3, strides=1, padding=1),
    nn.BatchNorm(in_channels=64),
    nn.LeakyReLU(0.1),

    # max_pool_5
    nn.MaxPool2D(2, 2),

    # conv_bn_block_6
    nn.Conv2D(channels=128, kernel_size=3, strides=1, padding=1),
    nn.BatchNorm(in_channels=128),
    nn.LeakyReLU(0.1),

    # max_pool_7
    nn.MaxPool2D(2, 2),

    # conv_bn_block_8
    nn.Conv2D(channels=256, kernel_size=3, strides=1, padding=1),
    nn.BatchNorm(in_channels=256),
    nn.LeakyReLU(0.1),

    # max_pool_9
    nn.MaxPool2D(2, 2),

    # conv_bn_block_10
    nn.Conv2D(channels=512, kernel_size=3, strides=1, padding=1),
    nn.BatchNorm(in_channels=512),
    nn.LeakyReLU(0.1),

    # max_pool_11
    nn.MaxPool2D(2, 1, padding=1)  # at this step, we want the output to keep size (1, 512, 13, 13); input size = (1, 512, 13, 13)
)
net.initialize(init.Xavier(), ctx=ctx)

With an input size of (1, 3, 416, 416), after passing through block 10 the output shape is (1, 512, 13, 13).
How can I get the same size after passing through block 11?
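(Plugging the numbers into the output-shape formula from earlier in the thread for max_pool_11, i.e. pool 2, stride 1, padding 1 on a 13 x 13 input: out = floor((13 + 2*1 - 2)/1) + 1 = 14, so the pool by itself gives (1, 512, 14, 14) rather than 13 x 13.)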


Hi, I was able to achieve SAME mode; however, I could not use this solution with net.hybridize().

class Net(nn.HybridBlock):
    def __init__(self, num_classes=80, input_dim=416):
        super(Net, self).__init__()

        # ConvBNBlock: user-defined Conv2D + BatchNorm + LeakyReLU block
        self.conv_bn_block_0 = ConvBNBlock(16, 3, 1, 1)
        self.max_pool_1 = nn.MaxPool2D(2, 2)
        self.conv_bn_block_2 = ConvBNBlock(32, 3, 1, 1)
        self.max_pool_3 = nn.MaxPool2D(2, 2)
        self.conv_bn_block_4 = ConvBNBlock(64, 3, 1, 1)
        self.max_pool_5 = nn.MaxPool2D(2, 2)
        self.conv_bn_block_6 = ConvBNBlock(128, 3, 1, 1)
        self.max_pool_7 = nn.MaxPool2D(2, 2)
        self.conv_bn_block_8 = ConvBNBlock(256, 3, 1, 1)
        self.max_pool_9 = nn.MaxPool2D(2, 2)
        self.conv_bn_block_10 = ConvBNBlock(512, 3, 1, 1)

        # Want to SAME-pad here; the pool kernel is 2 (even), so the slicing
        # trick from above is needed after this pool
        self.max_pool_11 = nn.MaxPool2D(2, 1, padding=1)  # ADD PADDING
        self.conv_bn_block_12 = ConvBNBlock(1024, 3, 1, 0)  # CHECKED

    def hybrid_forward(self, F, x, *args, **kwargs):
        x = self.conv_bn_block_0(x)
        x = self.max_pool_1(x)
        x = self.conv_bn_block_2(x)
        x = self.max_pool_3(x)
        x = self.conv_bn_block_4(x)
        x = self.max_pool_5(x)
        x = self.conv_bn_block_6(x)
        x = self.max_pool_7(x)
        conv_8 = self.conv_bn_block_8(x)
        x = self.max_pool_9(conv_8)
        x = self.conv_bn_block_10(x)

        x = self.max_pool_11(x)
        x = x[:, :, 1:, 1:]  # this NDArray-style slice is what breaks hybridize()

        x = self.conv_bn_block_12(x)
        # ... rest of the network ...
        return x

Maybe you can try something from here (symbol indexing):

and replace mx.sym. with F.
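For example, a minimal sketch along those lines (the block name SameMaxPool2D and the shapes are mine, assuming pool size 2, stride 1, padding 1): F.slice_axis works on both NDArray and Symbol, unlike x[:, :, 1:, 1:], so the block survives hybridize().

from mxnet import nd
from mxnet.gluon import nn

class SameMaxPool2D(nn.HybridBlock):
    # 2x2 max pooling with stride 1 that keeps the spatial size ('SAME')
    def __init__(self, **kwargs):
        super(SameMaxPool2D, self).__init__(**kwargs)
        self.pool = nn.MaxPool2D(pool_size=2, strides=1, padding=1)

    def hybrid_forward(self, F, x):
        x = self.pool(x)
        # drop the first row and column, hybridize-friendly
        x = F.slice_axis(x, axis=2, begin=1, end=None)
        return F.slice_axis(x, axis=3, begin=1, end=None)

net = SameMaxPool2D()
net.initialize()
net.hybridize()
print(net(nd.random.uniform(shape=(1, 512, 13, 13))).shape)  # (1, 512, 13, 13)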

Thank you. I’m working on it; plenty to try!