Constant Affine Transform with Symbol and Module

Hi,
I need to scale images within a convolutional net (with pooling) to a constant size. BilinearSampler and GridGenerator look perfect for the task, but I can't figure out how to bind the known affine matrix to the symbol so that I can train with Module. I have the feeling this is really easy, but I can't find the solution online. Any help or insight you can provide would be appreciated.
Thanks,
Jon

Simplified code:

import mxnet as mx
import numpy as np
context = mx.cpu()

Bsz = 1
nChan = 4
kernel = (3,3)
pad = (0,0)
stride = (2,2)
Isz = 256

# Identity affine transform, one (2,3) matrix per batch element, flattened to (Bsz, 6)
affine_mtx = mx.nd.array([[[1,0,0],[0,1,0]]]*Bsz)
affine_mtx = mx.nd.reshape(affine_mtx, (Bsz, 6))

data = mx.sym.var('data')
affine_matrix = mx.sym.var('affine_matrix')
# Sampling grid built from the (batch, 6) affine input
grid = mx.sym.GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(Isz, Isz), name='grid_1')

net = mx.sym.Convolution(data=data, num_filter=nChan, kernel=kernel, no_bias=False, stride=stride, pad=pad, name='conv2_1')
net = mx.sym.Activation(data=net, act_type='relu', name='relu2_1')
net = mx.sym.Convolution(data=net, num_filter=2, kernel=kernel, no_bias=False, stride=stride, pad=pad, name='conv2_2')
net = mx.sym.Activation(data=net, act_type='relu', name='relu2_2')
# Resample the conv output onto the fixed-size grid
net = mx.sym.BilinearSampler(net, grid, name='bilin')

Y = mx.sym.Variable('lin_reg_label')
lro = mx.sym.MAERegressionOutput(data=net, label=Y, name='lro')

# First attempt: only 'data' declared as an input, hoping to bake the affine matrix in at bind time
mod1 = mx.mod.Module(symbol=lro, context=context, data_names=['data'], label_names=['lin_reg_label'])
mod1.bind(data_shapes=[('data', (Bsz, 4, Isz, Isz))], label_shapes=[('lin_reg_label', (Bsz, 2, Isz, Isz))])

# Second attempt: 'affine_matrix' declared as a second data input
mod2 = mx.mod.Module(symbol=lro, context=context, data_names=['data', 'affine_matrix'], label_names=['lin_reg_label'])
mod2.bind(data_shapes=[('data', (Bsz, 4, Isz, Isz)), ('affine_matrix', affine_mtx.shape)], label_shapes=[('lin_reg_label', (Bsz, 2, Isz, Isz))])
mod2.init_params(initializer=mx.init.Xavier(magnitude=2.24))
mod2.init_optimizer(kvstore='device', optimizer='Nadam', optimizer_params={'learning_rate': 0.0001, 'beta1': 0.9, 'beta2': 0.999, 'epsilon': 1e-08, 'schedule_decay': 0.004})

train_data = mx.nd.ones((1, 4, Isz, Isz), context, dtype=np.float32)
train_label = mx.nd.ones((1, 2, Isz, Isz), context, dtype=np.float32)
train_iter = mx.io.NDArrayIter(train_data, train_label, Bsz, shuffle=True, label_name='lin_reg_label', last_batch_handle='discard')
for i, Batch in enumerate(train_iter):
    mod2.forward(Batch)  # fails here: the iterator supplies no 'affine_matrix' input

I'm not happy with it, but I figured out a workaround (a consolidated sketch follows the steps):

  1. Include one affine matrix per training sample in the training iterator:
    affine_mtx = mx.nd.array([[[[1,0,0],[0,1,0]]]]*train_data.shape[0])
    train_iter = mx.io.NDArrayIter({'data':train_data, 'affine_matrix':affine_mtx}, train_label, batch_size, label_name='lin_reg_label')

  2. Reshape the batch of affine matrices inside the net:
    affine_matrix = mx.sym.reshape(affine_matrix, shape=(batch_size, 6))

  3. The affine matrices also have to be included with the data when calling mod.predict().
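
Putting the three steps together, here is a minimal end-to-end sketch of the workaround, assuming the same network and shapes as the simplified code above. The backward/update calls and the pred_iter/out names are illustrative additions for completeness, not part of the original attempt:

import mxnet as mx
import numpy as np

context = mx.cpu()
Bsz = 1
nChan = 4
kernel, pad, stride = (3,3), (0,0), (2,2)
Isz = 256

# Step 2: reshape the per-sample affine input to (batch, 6) inside the net
data = mx.sym.var('data')
affine_matrix = mx.sym.var('affine_matrix')
affine_matrix = mx.sym.reshape(affine_matrix, shape=(Bsz, 6))
grid = mx.sym.GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(Isz, Isz), name='grid_1')

net = mx.sym.Convolution(data=data, num_filter=nChan, kernel=kernel, stride=stride, pad=pad, name='conv2_1')
net = mx.sym.Activation(data=net, act_type='relu', name='relu2_1')
net = mx.sym.Convolution(data=net, num_filter=2, kernel=kernel, stride=stride, pad=pad, name='conv2_2')
net = mx.sym.Activation(data=net, act_type='relu', name='relu2_2')
net = mx.sym.BilinearSampler(net, grid, name='bilin')
lro = mx.sym.MAERegressionOutput(data=net, label=mx.sym.var('lin_reg_label'), name='lro')

# Step 1: one identity affine matrix per training sample, fed as a second data input
train_data = mx.nd.ones((1, 4, Isz, Isz), context, dtype=np.float32)
train_label = mx.nd.ones((1, 2, Isz, Isz), context, dtype=np.float32)
affine_mtx = mx.nd.array([[[[1,0,0],[0,1,0]]]] * train_data.shape[0])
train_iter = mx.io.NDArrayIter({'data': train_data, 'affine_matrix': affine_mtx}, train_label, Bsz, label_name='lin_reg_label')

mod = mx.mod.Module(symbol=lro, context=context, data_names=['data', 'affine_matrix'], label_names=['lin_reg_label'])
mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
mod.init_params(initializer=mx.init.Xavier(magnitude=2.24))
mod.init_optimizer(optimizer='Nadam', optimizer_params={'learning_rate': 0.0001, 'beta1': 0.9, 'beta2': 0.999, 'epsilon': 1e-08, 'schedule_decay': 0.004})

for batch in train_iter:
    mod.forward(batch, is_train=True)  # batch now carries 'affine_matrix' alongside 'data'
    mod.backward()
    mod.update()

# Step 3: the affine matrices also ride along at prediction time
pred_iter = mx.io.NDArrayIter({'data': train_data, 'affine_matrix': affine_mtx}, batch_size=Bsz)
out = mod.predict(pred_iter)

The transform stays constant, but the price of this approach is carrying a block of identical matrices through every iterator, both for training and prediction.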