The standard BucketingModule of the MXNet symbolic interface for RNNs/LSTMs supports a different unrolled architecture per batch while sharing parameters across batches. It achieves this by reusing the same internal memory buffers among all executors.
If we want to implement a similar bucketing-iterator-based seq2seq model with MXNet Gluon, how can we get a different architecture per batch while still sharing parameters across batches?
Suppose I initialize a gluon.rnn.LSTMCell in my encoder class (extending Block/HybridBlock), and in the forward method of that Block I call LSTMCell.unroll(length_for_specific_batch) for every batch. Will the parameters be shared across batches, as they are for the symbolic graph with the BucketingModule?
Very crude pseudo-code:
Initialize an LSTMCell with num_hidden and the other initializers in __init__; in forward, call unroll with the batch-specific length.
Update 1: I tried this, and it does not seem to work. The LSTM cell infers its shape from the first batch and keeps using that. Any suggestions on how to implement a different architecture per batch along with parameter sharing?
Update 2: I was making a mistake in the dimensionality declaration. Here is a working example showing that the RNN can handle batches of different lengths, and as far as I can tell the parameters are shared. It would still be good if someone could verify that my understanding is correct.