Pooling Check Failed: VGG19 from model zoo

#1

I’m attempting to transition to the Gluon API after training a few models with the Module API, but I’ve run into an issue I can’t figure out.

Here’s the full stack trace:

---------------------------------------------------------------------------
MXNetError                                Traceback (most recent call last)
<ipython-input-57-6c3d69d94382> in <module>
     17 
     18         with mx.autograd.record():
---> 19             output = vgg19(data)  # Forward pass
     20             loss = loss_fn(output, label)  # Get loss
     21 

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/block.py in __call__(self, *args)
    538             hook(self, args)
    539 
--> 540         out = self.forward(*args)
    541 
    542         for hook in self._forward_hooks.values():

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/block.py in forward(self, x, *args)
    915                     params = {i: j.data(ctx) for i, j in self._reg_params.items()}
    916 
--> 917                 return self.hybrid_forward(ndarray, x, *args, **params)
    918 
    919         assert isinstance(x, Symbol), \

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/model_zoo/vision/vgg.py in hybrid_forward(self, F, x)
     82 
     83     def hybrid_forward(self, F, x):
---> 84         x = self.features(x)
     85         x = self.output(x)
     86         return x

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/block.py in __call__(self, *args)
    538             hook(self, args)
    539 
--> 540         out = self.forward(*args)
    541 
    542         for hook in self._forward_hooks.values():

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/block.py in forward(self, x, *args)
    915                     params = {i: j.data(ctx) for i, j in self._reg_params.items()}
    916 
--> 917                 return self.hybrid_forward(ndarray, x, *args, **params)
    918 
    919         assert isinstance(x, Symbol), \

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/nn/basic_layers.py in hybrid_forward(self, F, x)
    115     def hybrid_forward(self, F, x):
    116         for block in self._children.values():
--> 117             x = block(x)
    118         return x
    119 

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/block.py in __call__(self, *args)
    538             hook(self, args)
    539 
--> 540         out = self.forward(*args)
    541 
    542         for hook in self._forward_hooks.values():

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/block.py in forward(self, x, *args)
    915                     params = {i: j.data(ctx) for i, j in self._reg_params.items()}
    916 
--> 917                 return self.hybrid_forward(ndarray, x, *args, **params)
    918 
    919         assert isinstance(x, Symbol), \

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/gluon/nn/conv_layers.py in hybrid_forward(self, F, x)
    693 
    694     def hybrid_forward(self, F, x):
--> 695         return F.Pooling(x, name='fwd', **self._kwargs)
    696 
    697     def __repr__(self):

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/ndarray/register.py in Pooling(data, kernel, pool_type, global_pool, cudnn_off, pooling_convention, stride, pad, p_value, count_include_pad, out, name, **kwargs)

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py in _imperative_invoke(handle, ndargs, keys, vals, out)
     90         c_str_array(keys),
     91         c_str_array([str(s) for s in vals]),
---> 92         ctypes.byref(out_stypes)))
     93 
     94     if original_output is not None:

~/venv/rana_dl/lib/python3.6/site-packages/mxnet/base.py in check_call(ret)
    250     """
    251     if ret != 0:
--> 252         raise MXNetError(py_str(_LIB.MXGetLastError()))
    253 
    254 

MXNetError: [11:30:50] src/operator/nn/pooling.cc:159: Check failed: param.kernel[1] <= dshape[3] + 2 * param.pad[1] kernel size (2) exceeds input (1 padded to 1)

Stack trace returned 8 entries:
[bt] (0) 0   libmxnet.so                         0x000000011b3ed390 std::__1::__tree<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*>, std::__1::__map_value_compare<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::allocator<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*> > >::destroy(std::__1::__tree_node<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*>, void*>*) + 2736
[bt] (1) 1   libmxnet.so                         0x000000011b3ed13f std::__1::__tree<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*>, std::__1::__map_value_compare<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::allocator<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*> > >::destroy(std::__1::__tree_node<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, mxnet::NDArrayFunctionReg*>, void*>*) + 2143
[bt] (2) 2   libmxnet.so                         0x000000011b74ba83 mxnet::op::FullyConnectedComputeExCPU(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::__1::vector<mxnet::NDArray, std::__1::allocator<mxnet::NDArray> > const&, std::__1::vector<mxnet::OpReqType, std::__1::allocator<mxnet::OpReqType> > const&, std::__1::vector<mxnet::NDArray, std::__1::allocator<mxnet::NDArray> > const&) + 282083
[bt] (3) 3   libmxnet.so                         0x000000011ca86a39 mxnet::imperative::SetShapeType(mxnet::Context const&, nnvm::NodeAttrs const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, mxnet::DispatchMode*) + 1577
[bt] (4) 4   libmxnet.so                         0x000000011ca85396 mxnet::Imperative::Invoke(mxnet::Context const&, nnvm::NodeAttrs const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> > const&) + 742
[bt] (5) 5   libmxnet.so                         0x000000011c9d0fae SetNDInputsOutputs(nnvm::Op const*, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> >*, std::__1::vector<mxnet::NDArray*, std::__1::allocator<mxnet::NDArray*> >*, int, void* const*, int*, int, int, void***) + 1774
[bt] (6) 6   libmxnet.so                         0x000000011c9d1cd0 MXImperativeInvokeEx + 176
[bt] (7) 7   _ctypes.cpython-36m-darwin.so       0x000000010fa131a7 ffi_call_unix64 + 79

I’ve been following along with this guide: Converting Module API code to the Gluon API

I’m trying to use VGG19 with batch normalization from the model zoo, training it on a dataset read from a record file:

import mxnet as mx
from mxnet import gluon
from mxnet.gluon.data import DataLoader
from mxnet.gluon.data.vision import ImageRecordDataset

dataset = ImageRecordDataset("/Users/u6000791/datasets/rec/train.rec")
dataloader = DataLoader(dataset, batch_size=bat_size, shuffle=False, 
                        num_workers=0)
vgg19 = mx.gluon.model_zoo.vision.vgg19_bn(classes=2)

ctx = mx.cpu()
vgg19.initialize(mx.initializer.MSRAPrelu(), ctx=ctx)
trainer = gluon.Trainer(vgg19.collect_params(), 'sgd', {'learning_rate': 1e-3})

# Specify our metric of choice
metric = mx.metric.Accuracy()

# Define our loss function
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

for epoch in range(100):
    for data, label in dataloader:  # start of mini-batch
        data = data.as_in_context(ctx)
        # Convert datatype from uint8 to float32 for consistency with layer definition
        data = data.astype("float32")  
        label = label.as_in_context(ctx)
        
        with mx.autograd.record():
            output = vgg19(data)  # Forward pass
            loss = loss_fn(output, label)  # Get loss
            
        loss.backward()  # Compute gradients
        trainer.step(data.shape[0])  # Update weights with SGD
        metric.update(label, output)  # Update the metrics # end of mini-batch
        
    name, acc = metric.get()
    print("Training metrics at epoch {}: {}={}".format(epoch, name, acc))
    metric.reset()  # End of epoch

Am I doing something wrong here, or does this mean that the model definition from the model zoo is incompatible with my dataset?

#2

I found my mistake.

I just needed to transform my dataset prior to feeding it to the model:

from mxnet.gluon.data.vision import ImageFolderDataset

dataset = ImageFolderDataset("/Users/u6000791/Deep_Learning/Frames")

transformer = gluon.data.vision.transforms.ToTensor()

dataset = dataset.transform_first(transformer)

#3

Glad you found the fix @auslaner! It’s important to pass the data to this network in NCHW format (batch size, channels, height, width). Without ToTensor you had NHWC, and because C was very small (probably 3), you quickly reached a pooling layer whose 2x2 kernel was larger than the spatial dimensions of the feature map, hence the error.
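To make that arithmetic concrete, here’s a plain-Python sketch (assuming VGG’s five 2x2, stride-2 max-pool stages and MXNet’s default "valid" pooling convention): with NHWC input, the channel count (3) sits where MXNet expects the width, so it collapses to 1 after a single pool and the next pool’s 2-wide kernel no longer fits:

```python
# Output size of one pooling stage under the "valid" convention
def pooled(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

# Correct NCHW layout: width starts at 224 and survives all five pools
w = 224
for _ in range(5):
    w = pooled(w)
print(w)  # 7

# NHWC fed as NCHW: the channel count (3) lands in the width slot
c_as_w = pooled(3)  # first pool: 3 -> 1
print(c_as_w)  # 1
# The next pool would apply a kernel of 2 to an input of 1, giving exactly
# "kernel size (2) exceeds input (1 padded to 1)"
```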