How to make full use of the CPU to speed up training with Gluon?

It seems that it only used half of the CPU cores.

You can set num_workers to a larger value; for example, the training script exposes it as a command-line argument:

parser.add_argument('--num-workers', '-j', dest='num_workers', type=int,
                    default=4, help='Number of data workers; you can use a larger '
                                    'number to accelerate data loading if your CPU and GPUs are powerful.')
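For context, the flag ends up as the num_workers argument of the Gluon DataLoader. Here is a minimal sketch of that usage; the MNIST dataset and batch size of 64 are just placeholders, not values from the script above:

import mxnet as mx
from mxnet.gluon.data import DataLoader
from mxnet.gluon.data.vision import MNIST, transforms

# Any Gluon Dataset works the same way; MNIST is used here only as a stand-in.
dataset = MNIST(train=True).transform_first(transforms.ToTensor())

# num_workers > 0 spawns that many worker processes, so batches are
# prepared in parallel with the forward/backward pass on the GPU.
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

for data, label in loader:
    pass  # training step would go here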

Just to clarify, setting this parameter increases the number of OS processes that do the data loading. In deep learning it is common for data loading to be what actually slows everything down.
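A common starting point (my own rule of thumb, not from the script above) is to derive the worker count from the machine's CPU count rather than hard-coding it, then tune from there:

import multiprocessing

# Use the CPU count as a starting point for the number of data-loading
# worker processes; tune it down if the workers starve the training process.
num_workers = multiprocessing.cpu_count()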

@janelu9, take a look at this article if you want to get the most out of your CPU: Accelerating Deep Learning on CPU with Intel MKL-DNN | by Apache MXNet | Medium. It all starts with installing the MKL version of MXNet.
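As a rough sketch of that setup: install the MKL-DNN build of MXNet (e.g. the mxnet-mkl pip package) and set the OpenMP threading variables before MXNet is imported. The specific values below are illustrative, not tuned recommendations:

import os

# Set threading/affinity hints before importing mxnet, since they are read
# when the MKL-DNN backend initializes. Requires the MKL build of MXNet,
# e.g. `pip install mxnet-mkl` (exact package name depends on your version).
os.environ["OMP_NUM_THREADS"] = "8"                          # number of physical cores to use
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin OpenMP threads to cores

import mxnet as mx
print(mx.nd.ones((2, 2)) * 2)  # quick sanity check that MXNet loads and computes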

There was a similar question about this some time ago: Multi CPU cores usage - #2 by ThomasDelteil