Hi.

I am trying to use an encoder-decoder style model. In the encoder stage, to shrink the image (suppose by a factor of 2, for simplicity) at every Conv2D,

I set:

Conv2D(kernel_size=4,stride=2,padding=1)

Conv2D(kernel_size=4,stride=2,padding=1)

…

In the decoder stage, to recover the original size, I set:

Conv2DTranspose(kernel_size=4,stride=2,padding=1)

Conv2DTranspose(kernel_size=4,stride=2,padding=1)

…

That works well when images have a size of (64, 64), (128, 128), (256, 256)… (powers of 2),

but I get an error with other sizes. How can I tune the parameters to handle them?
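For reference, here is a minimal sketch of the setup described above (assuming PyTorch-style layers with a single channel; the exact channel counts don't matter for the size problem):

```python
import torch
import torch.nn as nn

# Three halving conv layers, then three doubling transposed conv layers
enc = nn.Sequential(*[nn.Conv2d(1, 1, kernel_size=4, stride=2, padding=1)
                      for _ in range(3)])
dec = nn.Sequential(*[nn.ConvTranspose2d(1, 1, kernel_size=4, stride=2, padding=1)
                      for _ in range(3)])

x = torch.randn(1, 1, 64, 64)    # power of 2: round-trips cleanly
print(dec(enc(x)).shape)         # torch.Size([1, 1, 64, 64])

x = torch.randn(1, 1, 100, 100)  # not divisible by 2**3: sizes drift
print(dec(enc(x)).shape)         # torch.Size([1, 1, 96, 96]) != input
```

The 100-pixel input shrinks to 50 → 25 → 12 (the odd size 25 gets floored), so the decoder's doubling gives 12 → 24 → 48 → 96, and the size mismatch then raises an error in the reconstruction loss or skip connections.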

Thank you for your time and consideration!!

Hi,

There is an exact formula for the output dimensions of a convolutional layer given the input dimensions, filter size, stride, and padding. You can plug your input dimensions and expected output dimensions into this formula to get values for filter size, stride, and padding that produce the shrinkage factor you want.

```
W_2 = floor((W_1 − F + 2P) / S) + 1
H_2 = floor((H_1 − F + 2P) / S) + 1
```

where `W_1, H_1` and `W_2, H_2` are the dimensions before and after the convolution respectively, `F` is your filter size, `P` is padding, and `S` is stride. You can use the same values for the transposed convolution in the decoder stage. If you have square images, i.e. `W == H`, then both equations are the same and either suffices.
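As a concrete check, here is a small sketch (assuming PyTorch; the arithmetic is framework-independent) that evaluates both formulas and verifies them against actual layers. For non-power-of-2 sizes, the `output_padding` argument of the transposed convolution can compensate for the size lost to flooring on odd inputs:

```python
import torch
import torch.nn as nn

def conv_out(w, f=4, s=2, p=1):
    # W_2 = floor((W_1 - F + 2P) / S) + 1
    return (w - f + 2 * p) // s + 1

def deconv_out(w, f=4, s=2, p=1, op=0):
    # Transposed convolution output size, with optional output_padding
    return (w - 1) * s - 2 * p + f + op

for w in (64, 100, 101):
    w2 = conv_out(w)
    # Choose output_padding so the decoder exactly recovers the input size
    op = w - deconv_out(w2)
    enc = nn.Conv2d(1, 1, kernel_size=4, stride=2, padding=1)
    dec = nn.ConvTranspose2d(1, 1, kernel_size=4, stride=2, padding=1,
                             output_padding=op)
    y = dec(enc(torch.randn(1, 1, w, w)))
    assert y.shape[-1] == w  # round-trip recovers the original size
```

For a stack of several layers you would compute the required `output_padding` per layer the same way, since each odd intermediate size loses one pixel to the floor.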

See the following links for a more detailed discussion of how that works.
