I am running into a problem when training SSD on my own Pascal VOC-format dataset.

You can check where this error is raised and try commenting out that line.

It looks like there is a problem with how you packed your data into RecordIO format, or something else related to how you store your data.
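
As a quick sanity check, you could try reading a few records back out of the packed file and looking at the labels and image shapes. This is only a minimal sketch: the file names `train.idx` / `train.rec` and the assumption that each record holds an encoded image plus a label header (as produced by an im2rec-style packing tool) are placeholders for your actual setup.

```python
import mxnet as mx

# Minimal sketch, assuming your packed files are train.idx / train.rec
# and each record stores an encoded image with its label in the header.
record = mx.recordio.MXIndexedRecordIO('train.idx', 'train.rec', 'r')

for i in range(3):                      # inspect the first few records
    item = record.read_idx(i)
    header, img_bytes = mx.recordio.unpack(item)
    img = mx.image.imdecode(img_bytes)  # decode the raw JPEG/PNG bytes
    print('label:', header.label)       # should match your annotation layout
    print('image shape:', img.shape)

record.close()
```

If the labels or image shapes printed here do not look like what you expect, the problem is in the packing step rather than in the DataLoader.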

The code that throws the exception is here: https://github.com/apache/incubator-mxnet/blob/v1.3.x/python/mxnet/gluon/data/dataloader.py#L162. You are hitting the max_depth limit of 1000.

I am not entirely sure why you hit this limit, but my wild guess is that it is related to how you store your data.
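
To illustrate the kind of failure I mean (this is a rough sketch, not the library's actual code): a batchify-style function that recurses into nested lists will exhaust Python's default recursion limit of 1000 if a sample's label is nested roughly that deep, which is why an oddly packed label structure could surface as this error.

```python
import numpy as np

# Rough illustration only, not Gluon's implementation: recurse into nested
# list/tuple structures and stack the leaves, one recursion level per level
# of nesting in the sample.
def batchify(samples):
    if isinstance(samples[0], (list, tuple)):
        return [batchify(list(group)) for group in zip(*samples)]
    return np.stack(samples)

# A pathologically nested label reproduces the recursion-depth failure.
label = 0
for _ in range(1100):
    label = [label]                     # 1100 levels of nesting
samples = [(np.zeros((3, 4, 4)), label)]

try:
    batchify(samples)
except RecursionError as e:
    print('hit recursion limit:', e)
```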

Try removing the num_workers parameter from your DataLoader. That forces it to load data in a single process, which is slower, but at least you will see whether it works at all (i.e., whether the problem is in multiprocessing or not).
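
For example (a minimal sketch: `train_dataset` and the batch size are placeholders for whatever you already pass to your DataLoader; the only change is dropping num_workers):

```python
from mxnet import gluon

# With num_workers left at its default of 0, data is loaded in the main
# process and no worker processes (multiprocessing) are involved.
loader = gluon.data.DataLoader(
    train_dataset,
    batch_size=32,
    shuffle=True,
    # num_workers=4,  # removed for this test
)

for data, label in loader:
    break  # if one batch loads cleanly, the error is multiprocessing-related
```

If this single-process loader works but the multi-worker one fails, that points at the multiprocessing path; if it still fails, the problem is in the dataset or the packed records themselves.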