MXNet Prediction Using GPU



I have been working on a tool that reads a directory of images, runs them through a network that has already been trained, and outputs a score for each image. Is there a way to utilize the multiple GPUs I have access to rather than running on CPU only? I attempted this when loading the model, as shown in the code below, but when I run it I get the error message that follows.


model = mx.mod.Module(symbol=sym, context=mx.gpu(), label_names=('softmax_label',))

terminate called after throwing an instance of 'dmlc::Error'
what(): [13:42:48] /home/travis/build/dmlc/mxnet-distro/mxnet-build/mshadow/mshadow/./tensor_gpu-inl.h:35: Check failed: e == cudaSuccess CUDA: initialization error


Hi @bezoldj1,

You might not have the CUDA version of MXNet installed, which is required for GPU usage. Check out the install instructions here and select the GPU option. You need to install CUDA beforehand, and cuDNN is recommended too. Alternatively, you can use an AWS EC2 DLAMI, where all of this is set up for you.
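As a quick sanity check, you can test whether your installed MXNet build can actually talk to a GPU by allocating a small NDArray on a GPU context; this is a minimal diagnostic sketch, assuming MXNet is importable (the CPU-only `mxnet` package will raise an MXNetError here):

```python
import mxnet as mx
from mxnet.base import MXNetError

try:
    # Allocating directly on gpu(0) forces CUDA initialization;
    # it fails immediately on a CPU-only build or a broken CUDA setup.
    a = mx.nd.ones((2, 2), ctx=mx.gpu(0))
    a.wait_to_read()
    print("GPU build OK:", a.context)
except MXNetError as e:
    print("No working GPU support:", e)
```

If this prints the error, reinstalling the matching `mxnet-cuXX` package (and checking `nvidia-smi`) is the place to start.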

Once you have CUDA/cuDNN setup, you can install the correct version of MXNet with:

pip install mxnet-cu90 (cu90 if you installed CUDA 9.0, cu91 if you installed CUDA 9.1, etc.)

And from there you just import mxnet as usual.
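Once the CUDA build is working, you can also spread prediction over several GPUs by passing a list of contexts to the Module; each batch is then split across the devices. A minimal sketch, assuming a saved checkpoint with the hypothetical prefix 'my-model' and a 224x224 RGB input shape (adjust both to your network):

```python
import mxnet as mx

# Load a trained checkpoint (hypothetical prefix/epoch -- use your own files)
sym, arg_params, aux_params = mx.model.load_checkpoint('my-model', 0)

# A list of contexts distributes each batch across the GPUs
mod = mx.mod.Module(symbol=sym,
                    context=[mx.gpu(0), mx.gpu(1)],
                    label_names=None)  # no labels needed for inference

# Assumed input name/shape; batch size 32 is split evenly over the devices
mod.bind(for_training=False,
         data_shapes=[('data', (32, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

# Feed your image directory through a DataIter, e.g. an NDArrayIter or
# ImageRecordIter, then:
# scores = mod.predict(data_iter)
```

Note the batch size should be divisible by the number of contexts so the split is even.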