I have been working on a tool that reads a directory of images, runs them through a network that has already been trained, and outputs the score for each image. Is there a way to utilize the multiple GPUs I have access to, rather than running on CPU only? I attempted this by setting the context when loading the model, as shown in the code below. When run, however, I get the error message also shown below.
model = mx.mod.Module(symbol=sym, context=mx.gpu(), label_names=('softmax_label',))
terminate called after throwing an instance of 'dmlc::Error'
what(): [13:42:48] /home/travis/build/dmlc/mxnet-distro/mxnet-build/mshadow/mshadow/./tensor_gpu-inl.h:35: Check failed: e == cudaSuccess CUDA: initialization error