Running a forward pass with MXNet takes much more time than with OpenCV
Hi, I’m trying to run my own pre-trained model CPU-only on a CentOS 8 virtual machine (the model was trained with a GPU on Ubuntu 18.04).
Our whole project needs to be written in C++, so my previous solution was to convert the (symbols, params) files to ONNX and load the model with OpenCV in our code.
We decided to migrate our code from the OpenCV base to an MXNet C++ build.
Here comes the problem.
Running a forward pass and getting the prediction output with the MXNet C++ API is much slower than with OpenCV.
We are using exactly the same network; the only difference is the format (symbols & params vs. ONNX).
- built MXNet from source to use the C++ bindings
- also tested the pip-installed version, MXNet 1.6.0
MXNet version: mxnet-1.8.0
CPU: Intel i5 9500F
RAM: 32 GiB
With OpenCV + ONNX: under 1 second
With MXNet + symbols, params: 8 seconds