Correct use of nn.BatchNorm() for inference

I’d like to use nn.BatchNorm() in front of an auto-encoder model to standardize the input. So far, standardization is done by an external sklearn StandardScaler, and I’d like to avoid having to persist two artifacts (scaler + autoencoder). Is there anything specific I need to do for gluon.nn.BatchNorm() to be used correctly at inference time?
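
For reference, here is a minimal sketch of the setup I have in mind (layer sizes, the `train_loader`, `X_new`, and the file name are placeholders, not my actual code). My understanding is that with `scale=False` and `center=False`, gamma/beta stay fixed at 1/0, so the leading BatchNorm effectively applies `(x - running_mean) / sqrt(running_var + eps)`, like a StandardScaler:

```python
import mxnet as mx
from mxnet import autograd, gluon
from mxnet.gluon import nn

# Leading BatchNorm standardizes the raw input. With scale=False and
# center=False, gamma/beta are frozen, so at inference the layer computes
# (x - running_mean) / sqrt(running_var + eps), like a StandardScaler.
net = nn.HybridSequential()
net.add(
    nn.BatchNorm(scale=False, center=False),  # input standardization
    nn.Dense(32, activation='relu'),          # encoder (sizes are placeholders)
    nn.Dense(8, activation='relu'),           # bottleneck
    nn.Dense(32, activation='relu'),          # decoder
    nn.Dense(64),                             # reconstruction; input dim assumed 64
)
net.initialize()

loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 1e-3})

for X in train_loader:  # train_loader assumed to yield (batch, 64) NDArrays
    with autograd.record():
        loss = loss_fn(net(X), X)  # reconstruct the input
    loss.backward()
    trainer.step(X.shape[0])

# The running mean/var are stored alongside gamma/beta, so a single
# parameter file should be the only artifact to persist.
net.save_parameters('autoencoder.params')

# At inference, outside autograd.record(), BatchNorm should use its
# stored running statistics rather than batch statistics.
reconstruction = net(X_new)  # X_new assumed to be an NDArray batch
```

One thing I’m aware of: the running statistics are an exponential moving average over training batches (controlled by `momentum`, default 0.9), so they only approximate the full-dataset mean/variance that a StandardScaler would compute exactly. Is that the only caveat, or is there something else to configure?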