Memory increases when using Adam and RMSProp

Recently I implemented an hourglass-like model for pose estimation. During training, memory keeps increasing. I used memory_profiler to diagnose the data loader and the module separately: the data loader alone does not increase memory usage. Profiling the training process shows that memory grows when I use the Adam or RMSProp optimizer; with SGD it does not grow.
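A rough sketch of the data-loader-only check (placeholder NDArrayIter and shapes rather than the real pose-estimation pipeline; process RSS read via psutil):

```python
# Iterate the data loader alone, with no forward/backward pass, and watch RSS.
# The iterator and shapes here are placeholders, not the real pipeline.
import os
import psutil
import mxnet as mx

proc = psutil.Process(os.getpid())

loader = mx.io.NDArrayIter(data=mx.nd.random.uniform(shape=(1024, 3, 64, 64)),
                           label=mx.nd.random.uniform(shape=(1024, 32)),
                           batch_size=32)

for epoch in range(5):
    loader.reset()
    for batch in loader:
        pass  # consume batches only
    print('epoch %d, RSS %.1f MB' % (epoch, proc.memory_info().rss / 1e6))
```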

I also tested Adam on other tasks, where memory does not increase. I also added output.wait_to_read() to force synchronization, and memory still increases with Adam.
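A simplified sketch of the training-side check (Gluon API with a placeholder Dense layer instead of the hourglass network; only the optimizer string changes between runs):

```python
# Run the same loop with 'adam' vs 'sgd' and watch RSS after wait_to_read().
# Placeholder model, loss, and shapes; not the actual pose-estimation code.
import os
import psutil
import mxnet as mx
from mxnet import autograd, gluon

proc = psutil.Process(os.getpid())

net = gluon.nn.Dense(16, in_units=64)
net.initialize(ctx=mx.cpu())
loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'adam',
                        {'learning_rate': 1e-3})  # swap 'adam' -> 'sgd' to compare

data = mx.nd.random.uniform(shape=(32, 64))
label = mx.nd.random.uniform(shape=(32, 16))

for i in range(500):
    with autograd.record():
        out = net(data)
        loss = loss_fn(out, label)
    loss.backward()
    trainer.step(32)
    out.wait_to_read()  # synchronize before reading memory
    if i % 50 == 0:
        print('iter %d, RSS %.1f MB' % (i, proc.memory_info().rss / 1e6))
```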

The host information:
----------Python Info----------
('Version :', '2.7.12')
('Compiler :', 'GCC 5.4.0 20160609')
('Build :', ('default', 'Nov 20 2017 18:23:56'))
('Arch :', ('64bit', 'ELF'))
------------Pip Info-----------
('Version :', '8.1.1')
('Directory :', '/usr/lib/python2.7/dist-packages/pip')
----------MXNet Info-----------
('Version :', '1.0.0')
('Directory :', '/home/hrli/incubator-mxnet/python/mxnet')
Hashtag not found. Not installed from pre-built package.
----------System Info----------
('Platform :', 'Linux-4.13.0-38-generic-x86_64-with-Ubuntu-16.04-xenial')
('system :', 'Linux')
('node :', 'VILab-620-Server')
('release :', '4.13.0-38-generic')
('version :', '#43~16.04.1-Ubuntu SMP Wed Mar 14 17:48:43 UTC 2018')
----------Hardware Info----------
('machine :', 'x86_64')
('processor :', 'x86_64')
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel® Xeon® CPU E5-2620 v4 @ 2.10GHz
Stepping: 1
CPU MHz: 1300.873
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4197.79
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti retpoline intel_ppin intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0861 sec, LOAD: 1.6811 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0030 sec, LOAD: 2.9404 sec.

Hi @li-haoran, Would you be able to share a reproducible example of this? Do you get the same effect when using SGD with momentum too? With these optimizers you keep a record of previous values but I wouldn’t expect this to increase memory utilization.
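For reference, a minimal sketch of swapping between these optimizers with a Gluon Trainer (hypothetical helper; adapt if you are on the Module API). Plain SGD keeps no extra state, SGD with momentum keeps one state array per weight, RMSProp keeps one (two if centered), and Adam keeps two:

```python
# Build a Trainer for one of the optimizers under comparison.
from mxnet import gluon

def make_trainer(net, which):
    configs = {
        'sgd':          ('sgd',     {'learning_rate': 0.01}),
        'sgd_momentum': ('sgd',     {'learning_rate': 0.01, 'momentum': 0.9}),
        'rmsprop':      ('rmsprop', {'learning_rate': 0.001}),
        'adam':         ('adam',    {'learning_rate': 0.001}),
    }
    name, params = configs[which]
    return gluon.Trainer(net.collect_params(), name, params)
```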

When I updated to the latest version, everything is OK.
But I still don't know which bug caused this memory increase. The MXNet profiler only provides CPU and GPU usage, so I used line_profiler, which suggests the update step is where memory grows; beyond that I don't know how to debug further. SGD with momentum is fine in that version.
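For completeness, a sketch of a line-by-line memory check around the update step using memory_profiler's @profile (illustrative function, not my actual training code):

```python
# Decorate the training step; running the script under memory_profiler
# prints per-line memory, which points at the update call.
from memory_profiler import profile
from mxnet import autograd

@profile
def train_step(net, trainer, loss_fn, data, label, batch_size):
    with autograd.record():
        out = net(data)
        loss = loss_fn(out, label)
    loss.backward()
    trainer.step(batch_size)  # parameter update suspected of growing memory
    out.wait_to_read()
    return loss

# Run with:  python -m memory_profiler train.py
```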

Thanks for your concern.