How does the size of training images affect Image Classification results

How does the size of training images affect Image Classification results in MXNet?
I received fairly good results when classifying an object into “Good” and “Faulty” parts with a training image size of 240x240px. But when I created a model with higher-resolution images (480x480) to identify minor defects that were not visible in the 240x240 images, the classification results were poorer than before.
What could be the reason behind this?

Thanks in advance.

Hi @abalki - usually higher resolution will yield better results. However, you might need to adapt your network. For example, if you use the same convolutional neural network and double the size of your input, the spatial size of your last feature maps will also be twice as big. That means each unit in the final layer sees a smaller fraction of the image, so your network won’t be able to learn as rich a hierarchical feature representation as with the smaller input. One way to avoid that is to make your network deeper, or to use dilation in your convolutions.
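
To illustrate, here is a minimal Gluon sketch (not your actual network; the layer widths are placeholders) showing how the last feature map doubles when you go from 240x240 to 480x480, and how an extra downsampling stage or dilated convolutions can compensate:

```python
import mxnet as mx
from mxnet.gluon import nn

def make_backbone(extra_stage=False, dilation=1):
    """Small conv stack standing in for a typical classifier backbone."""
    net = nn.HybridSequential()
    for channels in (32, 64, 128):
        # padding=dilation keeps the spatial size unchanged for a 3x3 kernel
        net.add(nn.Conv2D(channels, kernel_size=3, padding=dilation,
                          dilation=dilation),
                nn.Activation('relu'),
                nn.MaxPool2D(pool_size=2, strides=2))
    if extra_stage:
        # one more downsampling stage to compensate for the larger input
        net.add(nn.Conv2D(256, kernel_size=3, padding=1),
                nn.Activation('relu'),
                nn.MaxPool2D(pool_size=2, strides=2))
    net.initialize()
    return net

for size in (240, 480):
    x = mx.nd.random.uniform(shape=(1, 3, size, size))
    print(size, make_backbone()(x).shape)
# 240 -> (1, 128, 30, 30); 480 -> (1, 128, 60, 60): the last feature map doubles

x = mx.nd.random.uniform(shape=(1, 3, 480, 480))
# extra stage brings the 480 input back down to a 30x30 feature map
print(480, make_backbone(extra_stage=True)(x).shape)
# dilation=2 keeps the 60x60 output but enlarges each unit's receptive field
print(480, make_backbone(dilation=2)(x).shape)
```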

Also, you might need to tweak your hyper-parameters, such as the learning rate and the optimizer, in order to achieve the same or higher performance.
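
For example, re-tuning the optimizer in Gluon for the 480x480 model could look like the sketch below; the network, optimizer choice, and values are placeholders to illustrate, not a recommendation:

```python
import mxnet as mx
from mxnet import gluon

# Stand-in two-class classifier ("Good" vs "Faulty")
net = gluon.model_zoo.vision.resnet18_v2(classes=2)
net.initialize(mx.init.Xavier())

# Try a lower learning rate and/or a different optimizer for the larger input
trainer = gluon.Trainer(net.collect_params(), 'adam',
                        {'learning_rate': 1e-4, 'wd': 1e-5})
```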