Gluon: Per-layer learning rate for fine tuning a pretrained network

Hi!
I want to fine-tune a pretrained model, for example ResNet, where I want to limit the first two layers' learning rate to lr/100, the middle layers' learning rate to lr/10, and the fully connected layers' learning rate to lr.
I've only found an implementation using the native MXNet module API: https://github.com/apache/incubator-mxnet/issues/2242.
How can I implement this using Gluon and gluon.Trainer?

You can iterate through the parameters and adjust the learning-rate multiplier (lr_mult) of each parameter separately; the Trainer multiplies its base learning rate by each parameter's lr_mult:

for key, value in model.collect_params().items():
    print(key, value.lr_mult)  # lr_mult defaults to 1.0
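
For a concrete example, here is a minimal sketch of setting lr_mult by parameter name before constructing the Trainer. The name substrings ('conv0', 'stage1', etc.) are assumptions based on how the Gluon model zoo typically names ResNet parameters, so print your own parameter names first and adjust the matching accordingly:

import mxnet as mx
from mxnet import gluon
from mxnet.gluon.model_zoo import vision

model = vision.resnet18_v1(pretrained=True)

for name, param in model.collect_params().items():
    # Hypothetical grouping: verify these substrings against your model's names.
    if 'conv0' in name or 'stage1' in name:        # earliest layers
        param.lr_mult = 0.01                       # effective lr = lr / 100
    elif 'stage2' in name or 'stage3' in name:     # middle layers
        param.lr_mult = 0.1                        # effective lr = lr / 10
    else:                                          # fully connected / output layers
        param.lr_mult = 1.0                        # full lr

# The Trainer's base learning rate is scaled by each parameter's lr_mult.
trainer = gluon.Trainer(model.collect_params(), 'sgd',
                        {'learning_rate': 0.01})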