MXNet Forum

How can I add layer combining output of two internal layers


#1

Hi all,

My initial problem: in a single prediction from a given pre-trained model, I need to get both the features (the output of one of the internal layers) and the final output, for further calculations.

Since it seems there is no way to get both results from a single forward() run, I’d like to:

  • Load the pre-trained model (mx.model.load_checkpoint)
  • Add one more layer to the given symbol that takes the two layers of interest as input and outputs them as-is

The questions are:

  • Which layer type should I use? My guess is that mx.sym.cast with the same dtype is suitable
  • How to pass two inputs?

Here is my experiment, which doesn’t work:

  sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)
  all_layers = sym.get_internals()
  flatten0 = sym.get_children()[0].get_children()[0]
  sym = mx.sym.cast([flatten0, sym], dtype=float)
  mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
  mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))], label_shapes=mod._label_shapes)
  mod.set_params(arg_params, aux_params, allow_missing=True)

AssertionError: Argument data must be Symbol instances, but got [, ]

Maybe you can also propose a better approach for this problem. I’ve seen tutorials that propose building the module twice: once for the full symbol and once for the symbol truncated at the embeddings layer. But having to do a forward pass through the same layers, with the same input and parameters, twice does not seem optimal to me.


#2

Have you tried using sym.Group()? It allows you to get multiple symbol outputs in one forward() call.


#3

Hi @safrooze. Thank you for the response.

Not yet. Could you point me to an example of its usage?


#4

#5

Great! It works.

Meanwhile, I solved the problem with my initial approach (just replacing cast() with concat()), but your approach is much cleaner.

Thank you