How to make full use of the CPU to speed up training with Gluon?


#1

It seems that training only uses about half of the CPUs.


#2

You can set num_workers to a larger value, for example via a command-line flag like this:

parser.add_argument('--num-workers', '-j', dest='num_workers', type=int,
                    default=4, help='Number of data workers; you can use a larger '
                                    'number to accelerate data loading if your CPU and GPUs are powerful.')
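For context, here is a minimal sketch (not from the original post) of how the parsed num_workers value is typically passed to a Gluon DataLoader; the FashionMNIST dataset and the batch size of 128 are placeholders chosen for illustration:

import argparse
from mxnet import gluon
from mxnet.gluon.data.vision import datasets, transforms

parser = argparse.ArgumentParser()
parser.add_argument('--num-workers', '-j', dest='num_workers', type=int,
                    default=4, help='Number of data workers.')
args = parser.parse_args()

# Each worker is a separate OS process that prepares batches in parallel.
transform = transforms.Compose([transforms.ToTensor()])
train_data = gluon.data.DataLoader(
    datasets.FashionMNIST(train=True).transform_first(transform),
    batch_size=128,
    shuffle=True,
    num_workers=args.num_workers)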


#3

Just to clarify: setting this parameter increases the number of OS processes that perform data loading. In deep learning it is common for the data-loading step, rather than the computation itself, to be what slows everything down.

@janelu9, take a look at this article if you want to get the most out of your CPU: https://medium.com/apache-mxnet/accelerating-deep-learning-on-cpu-with-intel-mkl-dnn-a9b294fb0b9 It all starts with installing the MKL-DNN build of MXNet.
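As a rough illustration (my own sketch, not taken from the article): once the MKL build is installed, CPU utilization is usually tuned through OpenMP environment variables. The thread count and affinity values below are assumptions you would adjust for your own machine.

import os

# Set OpenMP variables before importing mxnet so the runtime picks them up.
os.environ['OMP_NUM_THREADS'] = '8'  # example: one thread per physical core
os.environ['KMP_AFFINITY'] = 'granularity=fine,compact,1,0'  # pin threads to cores

import mxnet as mx

# Simple CPU-bound matrix multiply to check that all configured cores are used.
a = mx.nd.ones((2048, 2048))
b = mx.nd.ones((2048, 2048))
print(mx.nd.dot(a, b).sum().asscalar())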

There was a similar question about this a while ago: Multi CPU cores usage