As far as I understand, image data augmentation techniques (e.g. random cropping, mirroring, shearing, etc.) are commonly used in DL to artificially increase the training set size.
This concept is pretty clear to me.
Now, putting that in practice, say I have a 100-image dataset. If I apply mirroring to every one of them, I would expect the dataset to double in size, to 200 in total.
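For reference, this is the kind of offline augmentation I had in mind, sketched with NumPy (the random 100-image array is just a stand-in for my actual data):

import numpy as np

# Hypothetical 100-image dataset as an (N, H, W, C) uint8 array.
images = np.random.randint(0, 256, size=(100, 75, 75, 3), dtype=np.uint8)

# Mirror every image horizontally and append the flipped copies,
# doubling the dataset offline from 100 to 200 images.
mirrored = images[:, :, ::-1, :]              # flip along the width axis
augmented = np.concatenate([images, mirrored], axis=0)

print(augmented.shape[0])                     # -> 200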
This is apparently not happening when I use MXNet's ImageRecordIter: iterating through the training set in this way
train_iter = mx.io.ImageRecordIter(path_imgrec='./train.rec',
                                   data_shape=(3, 75, 75),
                                   shuffle=True,
                                   batch_size=1)
or in this way
train_iter = mx.io.ImageRecordIter(path_imgrec='./train.rec',
                                   data_shape=(3, 75, 75),
                                   rand_crop=True,
                                   shuffle=True,
                                   batch_size=1,
                                   max_random_scale=1.5,
                                   min_random_scale=0.75,
                                   rand_mirror=True)
returns exactly the same number of images.
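This is how I count the images each iterator yields (a quick sanity-check sketch; count_images is just a helper name I made up, and train_iter is one of the iterators defined above):

# Count images yielded in one full pass; with batch_size=1,
# each batch holds exactly one image.
def count_images(it):
    it.reset()
    n = 0
    for batch in it:
        n += batch.data[0].shape[0]
    return n

print(count_images(train_iter))   # same total with or without augmentation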
My confusion clearly stems from not understanding what happens under the hood when a batch is drawn and augmentation is applied to it.
Can someone help me with this one, please?