Replication Padding Layer in Gluon

Hi everyone,

I would like to use a ReplicationPad2d layer in Gluon (like the one available in PyTorch). I’ve found a ReflectionPad2D layer in Gluon, but it’s not what I’m looking for.

So I implemented my own version as a HybridBlock, but when I call .hybridize() on the layer, I get the following error:

AttributeError: 'Symbol' object has no attribute 'shape'

How can I solve my issue? The associated code is pasted below:

from itertools import repeat

import mxnet as mx
from mxnet import gluon
from mxnet.base import numeric_types

class ReplicationPad2D(gluon.HybridBlock):
    """
    Pads the input tensor using replication of the input boundary.
    This is a copy of torch's nn.ReplicationPad2d layer, but for Gluon.
    """

    def __init__(self, padding=0, **kwargs):
        """
        This is the class' constructor.

        Parameters
        ----------
        padding: int or tuple (default = 0)
                Padding size.

        Returns
        -------
        None
        """

        super(ReplicationPad2D, self).__init__(**kwargs)

        if isinstance(padding, numeric_types):
            padding = tuple(repeat(padding, 4))
            
        self._padding = padding

    def hybrid_forward(self, F, x):
        """
        This method defines the forward pass of the ReplicationPad2D block.

        Parameters
        ----------
        F: function space
                function space that depends on the type of x:
                 - If x's type is NDArray, then F will be mxnet.nd.
                 - If x's type is Symbol, then F will be mxnet.sym.

        x: NDArray or Symbol
                Input of the block.
                                
        Returns
        -------
        padded_x: NDArray
                Padded tensor.
        """

        assert len(self._padding) % 2 == 0, "Padding length must be divisible by 2"
        assert len(self._padding) // 2 <= len(x.shape), "Padding length too large"

        if len(x.shape) == 3:
            assert len(self._padding) == 2, "3D tensors expect 2 values for padding"
            #ret = torch._C._nn.replication_pad1d(input, pad)
            raise NotImplementedError("replication_pad1d not implemented yet!")
        elif len(x.shape) == 4:
            assert len(self._padding) == 4, "4D tensors expect 4 values for padding"
            left_padding = self._padding[0]
            right_padding = self._padding[1]
            top_padding = self._padding[2]
            bottom_padding = self._padding[3]
            
            # Create new array with correct size (taking padding into account)
            padded_x = mx.nd.zeros(
                (x.shape[0],
                 x.shape[1],
                 x.shape[2] + top_padding + bottom_padding,
                 x.shape[3] + left_padding + right_padding),
                ctx=x.context)

            # Copy original array inside it
            padded_x[:, :, top_padding:(x.shape[2] + top_padding), left_padding:(x.shape[3] + left_padding)] = x

            # Fill the borders by replicating the outermost rows/columns
            if left_padding > 0:
                padded_x[:, :, top_padding:(x.shape[2] + top_padding), 0:left_padding] = x[:, :, :, 0:1].tile(reps=(1, 1, 1, left_padding))
            if right_padding > 0:
                padded_x[:, :, top_padding:(x.shape[2] + top_padding), (left_padding + x.shape[3]):] = x[:, :, :, -1:].tile(reps=(1, 1, 1, right_padding))
            if top_padding > 0:
                # Replicate the first row of the padded array so the corners get filled too
                padded_x[:, :, 0:top_padding, :] = padded_x[:, :, top_padding:(top_padding + 1), :].tile(reps=(1, 1, top_padding, 1))
            if bottom_padding > 0:
                padded_x[:, :, (top_padding + x.shape[2]):, :] = padded_x[:, :, (top_padding + x.shape[2] - 1):(top_padding + x.shape[2]), :].tile(reps=(1, 1, bottom_padding, 1))

        elif len(x.shape) == 5:
            assert len(self._padding) == 6, "5D tensors expect 6 values for padding"
            #ret = torch._C._nn.replication_pad3d(input, pad)
            raise NotImplementedError("replication_pad3d not implemented yet!")

        return padded_x
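
For reference, here is a minimal snippet that reproduces the problem (the input shape is arbitrary):

pad = ReplicationPad2D(padding=1)
x = mx.nd.arange(16).reshape((1, 1, 4, 4))

print(pad(x).shape)  # (1, 1, 6, 6) -- imperative mode works fine

pad.hybridize()
pad(x)  # AttributeError: 'Symbol' object has no attribute 'shape'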

Hey @Phenicorn,
I also struggled to create a padding layer for Gluon.

You might have more luck creating a custom operator for this. Operators can be written in Python or, alternatively, in CUDA C code.
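
In Python, a sketch might look like the following (untested; the operator name replication_pad2d, the class names, and the gradient-folding backward are my own choices). The nice property of a custom operator is that forward and backward always receive concrete NDArrays, so shape information stays available even when the surrounding network is hybridized:

import ast

import mxnet as mx

class ReplicationPad2DOp(mx.operator.CustomOp):

    def __init__(self, padding):
        super(ReplicationPad2DOp, self).__init__()
        self._padding = padding  # (left, right, top, bottom)

    def forward(self, is_train, req, in_data, out_data, aux):
        l, r, t, b = self._padding
        x = in_data[0]
        y = x
        # Replicate the outermost columns, then the outermost rows
        # (doing rows afterwards also fills the corners).
        if l > 0:
            y = mx.nd.concat(x[:, :, :, 0:1].tile(reps=(1, 1, 1, l)), y, dim=3)
        if r > 0:
            y = mx.nd.concat(y, x[:, :, :, -1:].tile(reps=(1, 1, 1, r)), dim=3)
        if t > 0:
            y = mx.nd.concat(y[:, :, 0:1, :].tile(reps=(1, 1, t, 1)), y, dim=2)
        if b > 0:
            y = mx.nd.concat(y, y[:, :, -1:, :].tile(reps=(1, 1, b, 1)), dim=2)
        self.assign(out_data[0], req[0], y)

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        l, r, t, b = self._padding
        _, _, h, w = in_data[0].shape  # concrete shapes are available here
        g = out_grad[0]
        # Adjoint of replication: fold each border cell's gradient back onto
        # the edge cell it was copied from (columns first, then rows).
        gx = g[:, :, :, l:l + w].copy()
        if l > 0:
            gx[:, :, :, 0] += g[:, :, :, :l].sum(axis=3)
        if r > 0:
            gx[:, :, :, w - 1] += g[:, :, :, l + w:].sum(axis=3)
        gy = gx[:, :, t:t + h, :].copy()
        if t > 0:
            gy[:, :, 0, :] += gx[:, :, :t, :].sum(axis=2)
        if b > 0:
            gy[:, :, h - 1, :] += gx[:, :, t + h:, :].sum(axis=2)
        self.assign(in_grad[0], req[0], gy)

@mx.operator.register("replication_pad2d")
class ReplicationPad2DProp(mx.operator.CustomOpProp):

    def __init__(self, padding='(0, 0, 0, 0)'):
        # Custom-op parameters always arrive as strings.
        super(ReplicationPad2DProp, self).__init__(need_top_grad=True)
        self._padding = ast.literal_eval(padding)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        n, c, h, w = in_shape[0]
        l, r, t, b = self._padding
        return [in_shape[0]], [(n, c, h + t + b, w + l + r)], []

    def create_operator(self, ctx, shapes, dtypes):
        return ReplicationPad2DOp(self._padding)

You would then call it as mx.nd.Custom(x, padding='(1, 1, 2, 2)', op_type='replication_pad2d') (or F.Custom inside a HybridBlock). Keep in mind that custom operators execute Python code, so they can become a performance bottleneck.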

Greetings,
~QueensGambit

This error happens because when you hybridize your model, you no longer have access to shape information. The exception you receive makes sense: a Symbol doesn’t have a shape attribute, and in hybridized mode every NDArray is replaced with a Symbol.

You have 2 options:

  1. Rewrite your code to make it hybridizable, i.e. don’t use x.shape anywhere. I don’t see a simple way to do that with your slicing-based code unless you specify the size of the input tensor up front, but see the first sketch after this list for one possibility.

  2. Don’t hybridize this particular layer. The simplest way is to inherit from Block instead of HybridBlock and implement forward instead of hybrid_forward (second sketch below). Your code will run slower, but if you are just experimenting, the speed drop may not be a problem for you.
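
For option 1, the built-in pad operator is worth a look: it is the same operator Gluon’s ReflectionPad2D wraps, and it has an 'edge' mode that replicates the border. Since it never touches x.shape, it hybridizes. A sketch, assuming 'edge' mode covers your use case (note that the operator only accepts 4D/5D input, and the batch/channel axes must stay unpadded):

import mxnet as mx
from mxnet import gluon

class ReplicationPad2D(gluon.HybridBlock):
    """Replication padding for NCHW input, built on the `pad` operator."""

    def __init__(self, padding=0, **kwargs):
        super(ReplicationPad2D, self).__init__(**kwargs)
        if isinstance(padding, int):
            # pad_width holds one (before, after) pair per axis, in NCHW order
            padding = (0, 0, 0, 0, padding, padding, padding, padding)
        self._padding = padding

    def hybrid_forward(self, F, x):
        return F.pad(x, mode='edge', pad_width=self._padding)

pad = ReplicationPad2D(1)
pad.hybridize()  # no shape access anywhere, so this works
print(pad(mx.nd.arange(4).reshape((1, 1, 2, 2))))

For option 2, the change to your class is minimal: inherit from Block, rename hybrid_forward to forward, and drop the F argument. Inside forward, x is always a concrete NDArray, so all your x.shape logic keeps working:

class ReplicationPad2D(gluon.Block):

    def __init__(self, padding=0, **kwargs):
        super(ReplicationPad2D, self).__init__(**kwargs)
        if isinstance(padding, numeric_types):
            padding = tuple(repeat(padding, 4))
        self._padding = padding

    def forward(self, x):
        # Paste the body of your hybrid_forward here, replacing F with mx.nd;
        # x.shape is safe to use because x is always an NDArray.
        ...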