Gluon NLP Batchify

Hello,

I am running a fine-tuning experiment with BERTClassifier on an imbalanced sentiment analysis dataset (Positive = 15% of the data). However, with a batch size of 8, the batchify step ends up producing batches in which most labels are 0 (Negative), so the model learns to classify everything as Negative. Is there a way to reflect the label distribution in the batches that are created, so that each batch contains at least one Positive record?
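
Something along these lines is what I'm imagining: a custom weighted sampler passed to the DataLoader, so minority-class examples are drawn more often. This is just a rough sketch of the idea, not my actual pipeline; the `WeightedSampler` class and the toy labels are hypothetical, and I realize it only makes a Positive per batch very likely rather than guaranteed.

```python
import numpy as np
import mxnet as mx
from mxnet import gluon

class WeightedSampler(gluon.data.Sampler):
    """Draws indices with probability inversely proportional to class frequency."""
    def __init__(self, labels):
        labels = np.asarray(labels)
        class_freq = np.bincount(labels) / len(labels)
        # Rarer classes (Positive here) get proportionally higher weight.
        weights = 1.0 / class_freq[labels]
        self._probs = weights / weights.sum()
        self._num = len(labels)

    def __iter__(self):
        # Sample with replacement so minority examples recur across batches.
        return iter(np.random.choice(self._num, size=self._num,
                                     replace=True, p=self._probs))

    def __len__(self):
        return self._num

# Toy usage: pass the sampler to the DataLoader instead of shuffle=True.
labels = [0] * 85 + [1] * 15  # stand-in for my 15% Positive split
dataset = gluon.data.ArrayDataset(mx.nd.arange(100), mx.nd.array(labels))
loader = gluon.data.DataLoader(dataset, batch_size=8,
                               sampler=WeightedSampler(labels))
```

Is something like this the recommended approach with GluonNLP's batchify utilities, or is there a built-in way to do stratified/balanced batching?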