Training mxnet-rcnn without using JPEG images

Hi,
I have a dataset in Pascal VOC format, but I do not want to save the images as JPEG files; I want to keep them as numpy arrays.
What changes do I need to make so that the network trains on numpy arrays directly?
I know the network ultimately consumes numpy arrays anyway, but I do not want JPEG files to be used anywhere in the process before training… When I run the training script, the JPEG files in the Pascal directory structure are somehow used to generate the data for the network. Instead of reading JPEG files to generate the data, I want something that takes the numpy arrays directly…

How can I do that? Any help is really appreciated.

Can you share the training script that you are using?

Typically you would need to modify the dataset class used to load the data into your network so that it uses your numpy arrays rather than reading the images from disk.
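
For example, here is a minimal sketch of that change, assuming your loader currently reads each record with cv2.imread and that you have saved a matching .npy file next to each image (the function name and the .npy convention are illustrative, not part of the example code):

```python
import os
import numpy as np

def load_array(image_path):
    """Drop-in stand-in for a cv2.imread(image_path) call in the
    loader (e.g. the image loading code of the rcnn example).

    Assumes each JPEGImages/xxx.jpg entry has a matching xxx.npy
    file saved beforehand with np.save(); nothing is decoded from
    JPEG here.
    """
    npy_path = os.path.splitext(image_path)[0] + '.npy'
    im = np.load(npy_path)
    assert im.ndim == 3 and im.shape[2] == 3, 'expected an HxWx3 array'
    return im
```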

I am using the above code (the mxnet-rcnn example)…
I just changed my class names and the backbone network; everything else is the same.

You can try saving the images with raw pixel or float values using OpenCV's cv2.imencode. You can also customize mxnet's im2rec.py tool to save them as .rec files. For reading the images in an image iterator, use mx.img.imdecode() to get an mx.ndarray directly, or cv2.imdecode().
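
As a minimal sketch of the in-memory round trip (the array below is a random stand-in for one of your images; PNG is chosen here only because it is lossless):

```python
import cv2
import numpy as np
import mxnet as mx

# Stand-in HxWx3 uint8 image.
im = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

# Encode to an in-memory byte buffer; PNG is lossless, JPEG would be
# smaller but lossy.
ok, buf = cv2.imencode('.png', im)
assert ok

# Decode straight into an mx.nd.NDArray, no file on disk involved.
# Note: mxnet decodes to RGB by default, while OpenCV keeps BGR.
nd = mx.img.imdecode(buf.tobytes())
print(nd.shape)  # (480, 640, 3)

# Or decode back with OpenCV (BGR, so it matches the original exactly).
im2 = cv2.imdecode(buf, cv2.IMREAD_COLOR)
assert (im == im2).all()
```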

Note: saving images in raw format consumes a lot of memory, since an uncompressed RGB image takes width × height × 3 bytes. For example, a JPEG of 50KB might expand to around 5MB in raw format. Before scaling up to many images, please check the memory usage with a few images first.

But what about the bounding box coordinates and labels? They should stay stored as usual, in the Pascal format (see the sketch below)…

structure:
Annotations
ImageSets
JPEGImages: is this the only folder that needs to change, or does the entire format have to change?
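
For context, here is a minimal sketch of how I read the boxes and labels from a standard Annotations/*.xml file (assuming the usual <object>/<bndbox> entries; the helper name is just illustrative), which is the part I would like to keep unchanged:

```python
import xml.etree.ElementTree as ET

def parse_voc_xml(xml_path):
    """Read one Pascal VOC annotation file and return its boxes and
    class names; how the images themselves are stored does not matter here."""
    tree = ET.parse(xml_path)
    boxes, labels = [], []
    for obj in tree.findall('object'):
        labels.append(obj.find('name').text)
        bb = obj.find('bndbox')
        boxes.append([int(float(bb.find(tag).text))
                      for tag in ('xmin', 'ymin', 'xmax', 'ymax')])
    return boxes, labels
```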