I’m running into a memory leak when running inference with an MXNet model (i.e. converting an image buffer to a tensor and running one forward pass through the model).
A minimal reproducible example is below:
```python
import mxnet
from gluoncv import model_zoo
from gluoncv.data.transforms.presets import ssd

model = model_zoo.get_model('ssd_512_resnet50_v1_coco')
model.initialize()

for _ in range(100000):
    # note: an example imgbuf string is too long to post
    # see gist or use requests etc to obtain
    imgbuf = 
    ndarray = mxnet.image.imdecode(imgbuf, to_rgb=1)
    tensor, orig = ssd.transform_test(ndarray, 512)
    labels, confidences, bboxs = model.forward(tensor)
```
The result is a linear increase in RSS memory, from roughly 700 MB up to 10 GB+.
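For reference, those numbers come from watching the process's resident set size while the loop runs; a minimal sketch of that kind of instrumentation (using psutil, illustrative rather than the exact logging from my run) is:

```python
import os
import psutil

process = psutil.Process(os.getpid())

for i in range(100000):
    # same loop body as the repro above
    ndarray = mxnet.image.imdecode(imgbuf, to_rgb=1)
    tensor, orig = ssd.transform_test(ndarray, 512)
    labels, confidences, bboxs = model.forward(tensor)
    if i % 1000 == 0:
        # resident set size in MB; this is the number that grows linearly
        print(i, process.memory_info().rss / 1024 ** 2)
```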
Libraries used: gluoncv==0.3.0, mxnet-mkl==1.3.1
The problem persists with other pretrained models and with a custom model I am trying to use, and inspecting the garbage collector shows no increase in the number of tracked Python objects.
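Concretely, the gc check I mean is along these lines (a sketch; the counts do not grow across thousands of iterations):

```python
import gc

gc.collect()
before = len(gc.get_objects())

# ... run a few thousand iterations of the inference loop above ...

gc.collect()
after = len(gc.get_objects())
# The number of GC-tracked Python objects stays flat, which suggests
# the growth is in native (C++) allocations rather than Python objects.
print(before, after)
```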
This gist has the full code snippet including an example imgbuf.
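One thing I have not ruled out is MXNet's asynchronous execution: operations are queued and executed lazily, so a loop that never blocks could in principle accumulate pending work. A variant of the loop body with an explicit synchronization point would look like this (mxnet.nd.waitall() is the standard blocking call; whether it changes the behavior here is untested on my side):

```python
for _ in range(100000):
    ndarray = mxnet.image.imdecode(imgbuf, to_rgb=1)
    tensor, orig = ssd.transform_test(ndarray, 512)
    labels, confidences, bboxs = model.forward(tensor)
    # Block until the async engine has finished everything queued so far,
    # so pending operations cannot pile up across iterations.
    mxnet.nd.waitall()
```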