Access the activation of a custom layer during forward pass


#1

I have a network that I defined as:

def get_cam_model(symbol, arg_params, num_outputs, layer_name='relu1'):
    outLabl = mx.sym.Variable('softmax_label')
    all_layers = symbol.get_internals()
    net = all_layers[layer_name + '_output']
    net = mx.symbol.Custom(net, name='gap2d', op_type='global_avg_pool2d')
    net = mx.symbol.FullyConnected(data=net, num_hidden=num_outputs, name='fc1')
    net = mx.symbol.LinearRegressionOutput(data=net, name='linreg1', label=outLabl)
    new_args = {k: arg_params[k] for k in arg_params if 'fc1' not in k}
    return (net, new_args)

Which I trained with:
def fit(symbol, arg_params, aux_params, train, val, batch_size, num_gpus):
    devs = [mx.gpu(i) for i in range(num_gpus)]
    mod = mx.mod.Module(symbol=symbol, context=devs)
    callback = mx.callback.Speedometer(batch_size, 100)
    mod.fit(train,
            val,
            num_epoch=NUM_EPOCHS,
            arg_params=arg_params,
            aux_params=aux_params,
            allow_missing=True,
            batch_end_callback=callback,
            kvstore='device',
            optimizer='adam',
            optimizer_params={'learning_rate': LEARNING_RATE,
                              'beta1': 0.9,
                              'beta2': 0.999,
                              'epsilon': 1e-08,
                              'lazy_update': True},
            initializer=mx.init.Xavier(rnd_type='gaussian', factor_type='in', magnitude=2),
            eval_metric='rmse')
    metric = mx.metric.Accuracy()
    return mod, mod.score(val, metric)

How can I access the activation of the gap2d layer of the network? I spent hours and hours searching through the online documentation, but could not find anything on it.

Best,

Xin


#2

You can use Monitor to do this. Please look at this example: https://github.com/apache/incubator-mxnet/blob/master/example/python-howto/monitor_weights.py

Alternatively, you can create an intermediate custom operator that logs the inputs and outputs of a layer. Take a look at this implementation: https://github.com/apache/incubator-mxnet/issues/4805#issuecomment-275311851


#3

Thanks for the tips!

But my goal is not to monitor the weights during training.

I want to access the activation of a layer (not the weights) after a forward pass through the network. Neither of your two approaches will work.


#4

You mean you want to access the output of this layer after a forward pass?

If so, you can save a reference to this layer separately, and just use it when you want to run a forward pass:

all_layers = symbol.get_internals()
net = all_layers[layer_name + '_output']
net2 = mx.symbol.Custom(net, name='gap2d', op_type='global_avg_pool2d')
net = mx.symbol.FullyConnected(data=net2, num_hidden=num_outputs, name='fc1')
net = mx.symbol.LinearRegressionOutput(data=net, name='linreg1', label=outLabl)
new_args = {k: arg_params[k] for k in arg_params if 'fc1' not in k}

Then, to get the activation, you do a forward pass on net2 to get the output of this layer, and on the whole net to get the output of the whole network, if needed. Since both symbols share the same inputs, you can also group them with mx.sym.Group so a single forward pass returns both outputs.