D2l book and GluonCV environment compatibility

Hi there, I'm curious about the compatibility between the environments needed for GluonCV and the d2l book.

Until some time ago, the standard GluonCV installation (pip install --upgrade mxnet gluoncv) pulled in an older version of mxnet (1.5.1.post0), which wasn't compatible with most of the tutorials in the d2l book.
For one thing, in most of the GluonCV tutorials you still have to convert np arrays to the legacy nd arrays imported from mxnet, something along the lines of:

from mxnet import ndarray as nd
frame = nd.array(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).astype('uint8')

while the d2l book generally uses:

from mxnet import np

However, the GluonCV installation (pip install --upgrade mxnet gluoncv) now installs mxnet 1.6.0, the same version required by most of the d2l book, so I thought that at this point I could run the code I was writing for the book's tutorials in the GluonCV environment.

I tried that and it mostly works, but in one case I’m getting an error.
This is a minimal example of what I’m doing:

import random

import pandas as pd
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import loss as gloss
from mxnet.gluon import nn

npx.set_np()   # activate the numpy-compatible interface, as in the d2l book

nr_epochs = 3
batch_size = 10

def main():

    data = pd.read_csv("dataset.txt", header=0, index_col=None, sep=r"\s+")
    m = data.shape[0]                       # number of samples
    n = data.shape[1] - 1                   # number of features
    X = np.array(data.iloc[:, 0:n].values)  # array with x values
    Y = np.array(data.iloc[:, -1].values)   # array with y values

    loss = gloss.L2Loss()
    model = nn.Sequential()
    model.add(nn.Dense(1))
    model.initialize(init.Normal(sigma=0.01))
    trainer = gluon.Trainer(model.collect_params(), 'sgd', {'learning_rate': 0.03})

    for epoch in range(nr_epochs):
        for X_batch, Y_batch in ExtractBatches(X, Y, batch_size):
            with autograd.record():
                l = loss(model(X_batch), Y_batch)

def ExtractBatches(X, Y, batch_size):
    # yield minibatches in shuffled order
    indices = list(range(len(X)))
    random.shuffle(indices)

    for i in range(0, len(X), batch_size):
        batch_indices = np.array(indices[i : min(i + batch_size, len(X))])
        yield X[batch_indices], Y[batch_indices]

if __name__ == "__main__":
    main()

At the last line in main(), I get the following error:

mxnet.base.MXNetError: [11:59:27] src/operator/numpy/linalg/./../../tensor/../elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)): Incompatible attr in node  at 1-th input: expected float64, got float32

If I go back to the d2l environment (installed following the instructions in the book), everything runs fine.
Why am I getting this error? It seems to be related to the dtype of the input, but the input is an mx.np array, which should now be supported in the GluonCV environment as well, if I understand correctly.
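For context, my current guess (just a guess) is that pandas reads the columns as float64, while Dense initializes its weights as float32 by default, which would match the "expected float64, got float32" message. A plain pandas sketch of the dtypes involved (the inline dataset is hypothetical, just mimicking my whitespace-separated dataset.txt):

```python
import io

import pandas as pd

# hypothetical two-feature dataset in the same format as my dataset.txt
csv = io.StringIO("x1 x2 y\n1.0 2.0 3.0\n4.0 5.0 6.0\n")
data = pd.read_csv(csv, header=0, index_col=None, sep=r"\s+")

print(data.values.dtype)                    # float64: pandas default for floats
print(data.values.astype('float32').dtype)  # float32: Gluon's default weight dtype
```

So maybe casting X and Y down to float32 would silence the error, but that wouldn't explain why the same code runs unchanged in the d2l environment.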

Also, will the GluonCV tutorials be updated to use the new mx.np arrays in the future?