Exporting an MXNet model trained in Python for inference in C++


For performance reasons, I usually train my models in Python and run inference in C++.
I have a problem exporting my MXNet model for use in C++.

With or without a batch size, my call to MXPredCreate seems to be OK.

But when I try to run inference, it crashes at MXPredSetInput.

Does anyone have an idea? My SSD model works in both C++ and Python, but other models crash.

Thanks in advance.


What’s the crash output?

From my observations (take it with a grain of salt), the input tensor needs to be flattened when calling MXPredSetInput; maybe that is what is going wrong for you.

However, I experience a crash when the input batch size is inferred. In my case I have a PyTorch model exported as ONNX, converted to MXNet, and served in production via the C API from Go. When the batch size is specified at training time, for example an input shape of (batch_size, dim), everything works. When the shape is (dim,) and the batch dimension has to be inferred, a crash triggers in production.

Here is the relevant code:

(python export)

import mxnet
from mxnet.contrib import onnx as onnx_mxnet

# import_model returns the symbol plus separate arg/aux parameter dicts
sym, arg_params, aux_params = onnx_mxnet.import_model(onnx_path)
model = mxnet.mod.Module(symbol=sym, data_names=['input_0'],
                         context=mxnet.cpu(), label_names=None)
model.set_params(arg_params=arg_params, aux_params=aux_params,
                 allow_missing=True, allow_extra=True)
model.symbol.save("{}/{}".format(export_folder, symbol_file))
model.save_params("{}/{}".format(export_folder, params_file))

(go inference)

var handle C.PredictorHandle
success, err := C.MXPredCreate((*C.char)(unsafe.Pointer(&symbol[0])),

All params look correct (symbol, shapeIdx, shapeData, keys, etc.), yet the error thrown is “no such file or directory”, which is quite puzzling: the symbol and params files are definitely present and correct.

Any idea from someone battling the same issue?