Running inference with varying input size

Hi all, I’m new to MXNet and running into some issues.

I’m surprised to find no answers to this in the existing questions. It seems like a fairly common problem, and it’s easy to solve in TensorFlow.

Put simply, I’m loading a network and then trying to run inference with varying batch sizes, heights, and widths of the input image.

import mxnet as mx

# Load the pretrained symbol and parameters from checkpoint files
sym, arg, aux = mx.model.load_checkpoint("simple_pose_resnet50_v1d", 0)
mod = mx.mod.Module(symbol=sym, label_names=None)
# Bind with -1 for the dimensions I want to leave undetermined
mod.bind(for_training=False, data_shapes=[('data', (-1, 3, -1, -1))])
mod.set_params(arg, aux)

from collections import namedtuple

Batch = namedtuple('Batch', ['data'])
# pose_input: a preprocessed NCHW image NDArray (prepared elsewhere)
mod.forward(Batch([pose_input]))
predicted_heatmap = mod.get_outputs()

I’ve used -1 here, the common TensorFlow notation for a dimension of unknown size that is determined at runtime. Is it possible to do this in MXNet? Please help.

You have to define a custom transform on your dataset that handles the differently shaped inputs. The simplest solution would be padding. You can find more details in this thread: Can data loader work with different input shape
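For illustration, a padding transform along those lines might look like the sketch below. This is plain NumPy for clarity; `pad_to_shape`, the constant pad value, and the 256×192 target size are my own placeholders, not from the thread. A Gluon dataset transform would wrap the same logic.

```python
import numpy as np

def pad_to_shape(img, target_h, target_w, pad_value=0.0):
    """Pad a CHW image with a constant value up to (target_h, target_w)."""
    c, h, w = img.shape
    assert h <= target_h and w <= target_w, "target must be >= image size"
    out = np.full((c, target_h, target_w), pad_value, dtype=img.dtype)
    out[:, :h, :w] = img  # place the original image in the top-left corner
    return out

# Two differently sized images become batchable once padded to a common shape
a = np.random.uniform(-1, 1, (3, 200, 150)).astype('float32')
b = np.random.uniform(-1, 1, (3, 256, 192)).astype('float32')
batch = np.stack([pad_to_shape(x, 256, 192) for x in (a, b)])
print(batch.shape)  # (2, 3, 256, 192)
```

The padded batch then has a single fixed `data` shape, so it can be fed to a Module bound the usual way.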

How about using ‘gluon.nn.SymbolBlock.imports’?

This might be helpful:
https://beta.mxnet.io/_modules/mxnet/gluon/block.html#SymbolBlock.imports

from mxnet import gluon

# Build a Gluon network directly from the exported symbol and params files
net = gluon.nn.SymbolBlock.imports("simple_pose_resnet50_v1d.json", ['data'],
                                   "simple_pose_resnet50_v1d-0000.params")
output = net(...)

@NRauschmayr I have considered this option. But in my case I am using a pretrained network whose input values range from -1 to 1, and simply zero-padding would add information that might affect accuracy.

I have decided to use @dai-ichiro’s answer instead, i.e. using the Gluon API. Thanks!