For Hw8.2, I'm assuming we are asked to compute the perplexity of each specific prediction generated by the trained models, but when I compare my result to the perplexity reported by train_and_predict_rnn_gluon, there's a discrepancy. Looking at the source code of train_and_predict_rnn_gluon, it seems to exponentiate the average training loss to approximate the perplexity of the whole model. So is it safe to ignore this discrepancy between perplexity(string) and perplexity(model)? Thanks!!
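To make the question concrete, here is a minimal sketch of the two quantities I mean. `perplexity_from_losses` is a hypothetical helper (not from the d2l source), and the loss values are toy numbers: the "model" perplexity averages per-token cross-entropy over the whole corpus before exponentiating, while the "string" perplexity uses only the tokens of one prediction, so the two numbers generally differ.

```python
import math

def perplexity_from_losses(losses):
    """Perplexity as exp of the mean per-token cross-entropy loss.

    Hypothetical helper illustrating (to my understanding) what
    train_and_predict_rnn_gluon reports: exp of the training loss
    averaged over all tokens.
    """
    return math.exp(sum(losses) / len(losses))

# Toy per-token cross-entropy values (assumed, for illustration only).
corpus_losses = [2.3, 1.9, 2.1, 2.5]  # all tokens seen during training
string_losses = [1.8, 2.0]            # tokens of one predicted string

# perplexity(model): corpus-wide average, what the training loop prints.
print(perplexity_from_losses(corpus_losses))
# perplexity(string): restricted to one prediction, hence the mismatch.
print(perplexity_from_losses(string_losses))
```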