Surprisingly low training performance on Volta V100

I am getting surprisingly low performance when running the examples/image-classification/train_imagenet.py example on my Tesla V100 GPU: roughly 540 img/sec at a batch size of 64 on synthetic data, whether training AlexNet or ResNet-50. I was expecting AlexNet throughput in particular to be much higher than this.
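
For reference, the invocations look roughly like this (reconstructed from memory, so the exact flags may be slightly off):

python train_imagenet.py --benchmark 1 --gpus 0 --batch-size 64 --network alexnet
python train_imagenet.py --benchmark 1 --gpus 0 --batch-size 64 --network resnet --num-layers 50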

This is with the mxnet-cu91 Python package. Any guidance would be appreciated.

I have filed a GitHub issue here as well if you need more details:

Can you provide some more information on:

  • MXNet version
  • nvidia-smi output
  • top/htop output
  • Machine configuration other than V100

I am using the MXNet Python package (mxnet-cu91), installed with pip.
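
For completeness, the details below were collected with MXNet's diagnose script; if I remember the issue-template invocation correctly, it is:

curl --retry 10 -s https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py | python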

Here is the system configuration:

----------Python Info----------
Version      : 3.5.2
Compiler     : GCC 5.4.0 20160609
Build        : ('default', 'Nov 23 2017 16:37:01')
Arch         : ('64bit', 'ELF')
------------Pip Info-----------
Version      : 10.0.1
Directory    : /usr/local/lib/python3.5/dist-packages/pip
----------MXNet Info-----------
Version      : 1.3.0
Directory    : /usr/local/lib/python3.5/dist-packages/mxnet
Commit Hash   : b434b8ec18f774c99b0830bd3ca66859212b4911
----------System Info----------
Platform     : Linux-4.13.0-45-generic-x86_64-with-Ubuntu-16.04-xenial
system       : Linux
node         : css-host-8
release      : 4.13.0-45-generic
version      : #50~16.04.1-Ubuntu SMP Wed May 30 11:18:27 UTC 2018
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                40
On-line CPU(s) list:   0-39
Thread(s) per core:    2
Core(s) per socket:    10
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
Stepping:              1
CPU MHz:               1200.189
CPU max MHz:           3400.0000
CPU min MHz:           1200.0000
BogoMIPS:              4799.72
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0-9,20-29
NUMA node1 CPU(s):     10-19,30-39
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti retpoline intel_ppin intel_pt spec_ctrl tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts

Do you want the nvidia-smi/top output while the script is running?

Is your synthetic data saved in the RecordIO (.rec) file format?

It is the synthetic data that is generated when the --benchmark 1 option is passed to the train_imagenet.py script.
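
Roughly speaking, the benchmark feeds the same random batch every iteration, so disk I/O and image decoding drop out of the measurement. A minimal sketch of the idea (the shapes here are the standard ImageNet ones; the actual script wraps this in a synthetic data iterator):

import mxnet as mx
import numpy as np

# One random batch, generated once on the GPU and re-fed every
# iteration, so only the forward/backward compute is being timed.
batch_size, num_classes = 64, 1000
data = mx.nd.random.uniform(shape=(batch_size, 3, 224, 224), ctx=mx.gpu(0))
label = mx.nd.array(np.random.randint(0, num_classes, size=(batch_size,)), ctx=mx.gpu(0))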

The fastest demonstrated speed for MXNet on a V100 is 1075 images/second, achieved by NVIDIA after optimizations such as layout="NHWC" in the Convolution operator and operator fusion via NNVM. These are described in this blog post.

Some of these optimizations require changing the network (e.g. the layout), and some require changing the MXNet code itself (e.g. the NNVM fusion); I believe neither has made its way into the main source tree yet.
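
For illustration, the layout change amounts to declaring the network's tensors as NHWC, along these lines (a sketch only; the layout argument does exist on the Convolution operator, but the fast NHWC path requires cuDNN and works best with float16):

import mxnet as mx

# NHWC means inputs are (batch, height, width, channel) rather than
# the default NCHW (batch, channel, height, width).
data = mx.sym.Variable('data')
conv = mx.sym.Convolution(data=data, num_filter=64, kernel=(7, 7),
                          stride=(2, 2), pad=(3, 3), layout='NHWC')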

Also, I think you're benchmarking with resnet, whereas NVIDIA's benchmark uses resnet-v1. With the command below, I get about 680 images per second on a p3.2xlarge instance using mxnet-cu90 version 1.3.0b20180618 (the latest beta):

python train_imagenet.py --benchmark 1 --gpus 0 --batch-size 64 --network resnet-v1 --num-layers 50 --dtype float16
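
Note that --dtype float16 matters a lot here: on the V100 it lets cuDNN use the Tensor Cores, so a float32 run of the same command will be noticeably slower. I believe the example's plain resnet symbol is the v2 (pre-activation) variant, which is part of why its numbers don't match NVIDIA's resnet-v1 figure.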