What is the recommended operation to protect some weights from being changed by the trainer in MXNet?


#1

As far as I know, if I want to protect some weights in TensorFlow, I should prevent them from being passed to the optimizer. So I tried to do the same in MXNet with the following code.

import mxnet as mx

all_params = net.collect_params()

# Pop parameters off the front of the (private) OrderedDict until the
# first remaining key no longer belongs to the frozen ResNet part.
while True:
    first_key = next(iter(all_params._params))
    if 'resnet' not in first_key:
        break
    all_params._params.popitem(last=False)

trainer = mx.gluon.Trainer(all_params, 'sgd')

The variable all_params._params is an OrderedDict, which I take to mean that the order of its entries matters and should not be changed. As shown above, I can therefore only remove parameters from the beginning of the network, which is very inconvenient. Moreover, _params starts with an underscore, which signals that it is a private attribute that ordinary users are not supposed to touch.
I do not get any errors, but I suspect this is not the recommended way to do it.
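
For reference, here is a minimal sketch of what I would prefer to write instead of mutating the private dictionary, assuming that Trainer also accepts a plain list of Parameter objects and that 'resnet' marks the weights I want to freeze:

import mxnet as mx

# Hypothetical alternative: filter the parameters by name and hand only
# the surviving ones to the Trainer, leaving the ResNet weights untouched.
trainable = [p for name, p in net.collect_params().items()
             if 'resnet' not in name]
trainer = mx.gluon.Trainer(trainable, 'sgd')

Is this, or something like it, the intended way to do it?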


#2
net.collect_params('.*pattern_a|.*pattern_b')

You can use a regex to pick which parameters should be updated, and pass only those to the Trainer.
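
For example, a minimal sketch along these lines (the regex patterns and net are placeholders): collect only the matching parameters and build the Trainer from that subset, so every other weight is simply never seen by the optimizer.

import mxnet as mx

# Only parameters whose names match the regex are collected; everything
# else is excluded from the Trainer and therefore never updated.
trainable_params = net.collect_params('.*pattern_a|.*pattern_b')
trainer = mx.gluon.Trainer(trainable_params, 'sgd')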


#3

Another answer suggests a further strategy; I link it here for reference: https://stackoverflow.com/questions/51727604/what-is-the-recommended-operation-to-protect-some-weights-from-being-changed-by/51732793#51732793