I am designing a recommender system that will train on user-to-item implicit interaction data. The data is so large that it will not fit in memory. The label is binary and the initial features are categorical and continuous; in the future, however, the network should also ingest images, text, sequential data, etc.
It is critical that I can train the model very quickly, which may necessitate training on a GPU cluster, although initially I expect to get away with a large multi-GPU instance.
I’m looking for guidance/links to examples on:
- where to store my data
- what format to store it in
- how to best feed my network
My research suggests RecordIO is the best-practice storage format. This thread agrees; however, I’ve seen other threads mention using CSV iterators or numpy memory maps. Furthermore, every use case I’ve seen involves images only.
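For context on the numpy memory-map option I mean: a minimal sketch (file path, array shape, and batch size are all hypothetical) of memory-mapping a large on-disk feature array and yielding mini-batches without loading the whole file into RAM:

```python
import numpy as np

def memmap_batches(path, n_rows, n_cols, batch_size=1024, dtype=np.float32):
    """Yield mini-batches from a raw binary array on disk via np.memmap.

    Only the pages touched by each slice are paged in, so the full
    array never needs to fit in memory.
    """
    data = np.memmap(path, dtype=dtype, mode="r", shape=(n_rows, n_cols))
    for start in range(0, n_rows, batch_size):
        # Copy the slice into a regular in-memory array for the training step.
        yield np.asarray(data[start:start + batch_size])
```

This is simple, but (as I understand it) it only works for fixed-width numeric features, whereas RecordIO can pack variable-length records such as images or text.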