Inference mode activation

Hello

I have read several threads and it is still confusing to me how to specify that an MXNet model is being used in inference mode.

Let's take this example into consideration (https://gluon.mxnet.io/chapter04_convolutional-neural-networks/cnn-batch-norm-gluon.html):

epochs = 1
smoothing_constant = .01

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx)
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])

        ##########################
        #  Keep a moving average of the losses
        ##########################
        curr_loss = nd.mean(loss).asscalar()
        moving_loss = (curr_loss if ((i == 0) and (e == 0))
                       else (1 - smoothing_constant) * moving_loss + (smoothing_constant) * curr_loss)

    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)

My doubt is: as we can see, before calling the evaluate_accuracy function on test_data, no configuration is done to specify that the model should predict in inference mode. For this specific example it is not a problem, but for models that use BatchNorm and Dropout it will certainly be a problem.

So my question is: how do I specify in gluon/mxnet that I want to run my model in inference mode?

Best regards

Hi, by default (i.e. unless specified otherwise) the model always runs in inference (predict) mode. To be explicit (this takes care of Dropout/BatchNorm etc.):

with autograd.predict_mode():
    pred = net(input)  # explicitly in inference mode

is equivalent to pred = net(input) (i.e. without the explicit instruction to autograd). For training mode, you need to use:

with autograd.record():
    pred = net(input) # now in training mode
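
For a concrete illustration, here is a minimal sketch (a toy net with a single nn.Dropout layer, not taken from the tutorial) showing that a plain call and autograd.predict_mode() both give the deterministic inference behaviour, while autograd.train_mode() activates dropout:

from mxnet import nd, autograd
from mxnet.gluon import nn

# Toy network: a single Dropout layer makes the mode visible.
net = nn.Sequential()
net.add(nn.Dropout(0.5))
net.initialize()

x = nd.ones((1, 4))

# Default call: inference mode, dropout is a no-op.
print(net(x))                 # [[1. 1. 1. 1.]]

# Explicit inference mode: same result.
with autograd.predict_mode():
    print(net(x))             # [[1. 1. 1. 1.]]

# Training mode: dropout randomly zeroes and rescales activations.
with autograd.train_mode():
    print(net(x))             # e.g. [[2. 0. 2. 2.]]

autograd.record() implies training mode by default, which is why the training loop above does not need any extra flag around evaluate_accuracy.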

Oh, I see. Thank you so much!
