I went over the documentation and sample code, but couldn't figure out how to freeze or fine-tune certain layers with the C++ API. (I saw the examples for Python, but couldn't work out how the same thing can be achieved in C++.)
Say that on the Python side I download a pre-trained model from the Gluon model zoo and cut off the FC layer. Then, on the C++ side, depending on my application, after loading the pretrained model I would like either to freeze the whole base model and train only the new layers I add, or to set a different learning rate for the base model and fine-tune it.
How can this be achieved?
Apologies if this is already documented and I simply missed it.