My code uses the MXNet symbolic API, and I am also using another library that runs its calculations on the GPU. After the inference step in MXNet, I need to free GPU memory so the other library can operate. How can I release the GPU memory taken up by the MXNet symbolic model?
MXNet allocates a memory pool from which memory is re-used. If not enough memory is available in the pool, MXNet requests more memory from CUDA. You can control how much GPU memory is kept out of this pool via the environment variable `MXNET_GPU_MEM_POOL_RESERVE`.
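A minimal sketch of setting that variable, assuming `MXNET_GPU_MEM_POOL_RESERVE` (the percentage of GPU memory MXNet leaves out of its pool) applies to your MXNet build; the value `20` is just an illustration:

```python
import os

# Reserve 20% of GPU memory for other libraries.
# This must be set before MXNet makes its first GPU allocation,
# so set it before importing mxnet (or in the shell environment).
os.environ["MXNET_GPU_MEM_POOL_RESERVE"] = "20"

# import mxnet as mx  # import only after the variable is set
```

Setting it in the shell (`export MXNET_GPU_MEM_POOL_RESERVE=20`) before launching the process works the same way and avoids import-order issues.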
I built a network and it consumes almost all of my GPU memory. After inference, I want to delete this network and free the memory for another network, but no matter how I try to delete it, the memory is not released until the program exits.
So my question is: how can I reuse the memory in the pool for another network?
Have you tried `ctx.empty_cache()`? https://mxnet.apache.org/api/python/docs/api/mxnet/context/index.html#mxnet.context.Context.empty_cache
I know about this API. It seems to be available only from version 1.5.1, but my OS is Win10, for which pip currently has no 1.5.1 build. I'll try this API on Ubuntu and check whether it works.