How does EvalMetric update params?


#1

If we have several loss functions in a network, like the end-to-end version of Faster R-CNN below:

```python
rpn_eval_metric = metric.RPNAccMetric()
rpn_cls_metric = metric.RPNLogLossMetric()
rpn_bbox_metric = metric.RPNL1LossMetric()
eval_metric = metric.RCNNAccMetric()
cls_metric = metric.RCNNLogLossMetric()
bbox_metric = metric.RCNNL1LossMetric()
eval_metrics = mx.metric.CompositeEvalMetric()
for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric,
                     eval_metric, cls_metric, bbox_metric]:
    eval_metrics.add(child_metric)
```

How does EvalMetric update the params? Does each loss function update all params in order after the forward pass?
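For context, here is a minimal pure-Python sketch of how a composite metric like `CompositeEvalMetric` dispatches updates to its children. The class and metric names here are illustrative, not MXNet's actual implementation, but the dispatch pattern is the same: every `update` call is forwarded to each child, and no network parameters are ever touched.

```python
class AccMetric:
    """Toy accuracy metric: fraction of predictions equal to labels."""
    def __init__(self):
        self.name = "acc"
        self.num_correct = 0
        self.num_total = 0

    def update(self, labels, preds):
        for y, p in zip(labels, preds):
            self.num_correct += int(y == p)
            self.num_total += 1

    def get(self):
        return self.name, self.num_correct / max(self.num_total, 1)


class CompositeMetric:
    """Holds child metrics and forwards every update to each of them."""
    def __init__(self):
        self.metrics = []

    def add(self, child):
        self.metrics.append(child)

    def update(self, labels, preds):
        # Each child sees the same (labels, preds) pair; metrics only
        # accumulate statistics -- parameters are never modified here.
        for m in self.metrics:
            m.update(labels, preds)

    def get(self):
        return [m.get() for m in self.metrics]


composite = CompositeMetric()
composite.add(AccMetric())
composite.update(labels=[1, 0, 1], preds=[1, 1, 1])
print(composite.get())  # one (name, value) pair per child metric
```

In the real Faster R-CNN code, each custom child metric simply indexes into the network outputs it cares about (RPN vs. RCNN heads) inside its own `update`.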


#2

I’m not sure what code you’re looking at, but typically metrics are not used for training the network; rather, they evaluate how well the network has trained. These metrics are not training losses, and they do not affect gradient calculation or parameter updates.
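To make that distinction concrete, here is a small sketch (with made-up variable names, not from the Faster R-CNN code) of a training step: the loss gradient drives the parameter update, while the metric only accumulates numbers for display and never touches the parameters.

```python
# One parameter, plain SGD, squared-error loss.
params = {"w": 1.0}
lr = 0.1

def loss_and_grad(x, y):
    pred = params["w"] * x
    loss = (pred - y) ** 2          # squared error
    grad = 2 * (pred - y) * x       # d(loss)/dw
    return loss, grad

class RunningLoss:
    """Evaluation metric: averages the loss for reporting only."""
    def __init__(self):
        self.total, self.count = 0.0, 0
    def update(self, loss):
        self.total += loss
        self.count += 1
    def get(self):
        return self.total / max(self.count, 1)

metric = RunningLoss()
for x, y in [(1.0, 2.0), (2.0, 4.0)]:
    loss, grad = loss_and_grad(x, y)
    params["w"] -= lr * grad        # parameter update uses the loss gradient
    metric.update(loss)             # metric only monitors; params unchanged
print(params["w"], metric.get())
```

Deleting the `metric.update(loss)` line would leave `params["w"]` identical after training, which is exactly the point: evaluation metrics are observers, not optimizers.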


#3

He is referring to this line of code

It occurs in the function train_net

@zhanlong.hao To answer your question
This line

If I am understanding it correctly, it indicates an evaluation metric used while fitting the model, i.e. training it (mostly implying it involves some sort of hyper-parameter tuning of the model).

Correct me if I am wrong. Thanks


#4

You are correct @ChaiBapchya. This is the list of evaluation metrics, which are used only for monitoring training progress; they are not used during training optimization.