My problem is that I need to get both the features (the output of one of the internal layers) and the final output of a given pre-trained model during prediction, for further calculations.
Since I see no way to get both results from one forward() run, I'd like to:
- Load pre-trained model (mx.model.load_checkpoint)
- Add one more layer to the given symbol that takes the two layers of interest as input and outputs them as-is
The questions are:
- Which layer type should I take? My guess is that mx.sym.cast with the same dtype is suitable
- How to pass two inputs?
Here is my experiment, which doesn't work:
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)
all_layers = sym.get_internals()
flatten0 = sym.get_children().get_children()
sym = mx.sym.cast([flatten0, sym], dtype=float)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))],
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True)
It fails with:

AssertionError: Argument data must be Symbol instances, but got [, ]
Also, maybe you can propose a better approach to this problem. I saw tutorials that propose building the module twice: once for the full symbol and once for a symbol reduced to the embeddings layer. But passing forward through the same layers with the same input and parameters twice seems suboptimal to me.