Inference performance of arcface (deepinsight/insightface) depending on MXNet version used for training


I’m not sure this is the right place for this topic, but if you have any experience or suggestions to share, I’d really appreciate it.

I’m using the insightface (a.k.a. arcface) MXNet implementation.

My issue is that the inference time of the trained model differs substantially (by about 25x) depending on the MXNet version of the training environment, and the slower model achieves better accuracy.

At first, I trained the model with MXNet 1.3.1, following the standard training procedure. The trained model’s “Accuracy-Highest” is 0.96117, but inference takes about 5 seconds per image (the inference environment is MXNet 1.4.1 on CPU).

After that, I trained the model with MXNet 1.4.1. Inference with the 1.4.1 model takes around 0.2 seconds per image, the same as the pre-trained model, but its “Accuracy-Highest” is 0.95133.

I’m wondering what makes the inference times differ, and how I can speed up the slower model while keeping its accuracy.
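When comparing per-image latency across MXNet versions, it helps to measure consistently: discard warm-up runs (the first forward pass pays for lazy initialization and kernel selection) and report a robust statistic over several runs. Below is a minimal, hypothetical timing harness in plain Python; `infer` is assumed to be any zero-argument callable that runs one forward pass, e.g. a closure around an MXNet `Module.forward` call followed by `.asnumpy()` on the output to force synchronization, since MXNet executes asynchronously.

```python
import time
import statistics

def per_image_latency(infer, n_warmup=3, n_runs=10):
    """Measure per-call latency of a single-image inference callable.

    `infer` is a zero-argument callable running one forward pass.
    Warm-up calls are discarded so one-time initialization costs
    don't skew the result; the median over `n_runs` timed calls
    is returned (in seconds), which is less noise-sensitive than
    the mean.
    """
    for _ in range(n_warmup):
        infer()
    timings = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        timings.append(time.perf_counter() - t0)
    return statistics.median(timings)
```

With an MXNet module this might be used as `per_image_latency(lambda: (mod.forward(batch, is_train=False), mod.get_outputs()[0].asnumpy()))` (names here are illustrative); the `.asnumpy()` call blocks until the computation actually finishes, so the timings reflect real work rather than just enqueueing.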


I’m getting about 5 s on MXNet 1.6.0.
Very curious whether you found out the cause of the dramatic speed-up in 1.4.1.