I am trying to pass the output of a PyCUDA operation to the input of an MXNet computational graph.
I am able to achieve this via a NumPy conversion with the following code:
```python
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import mxnet as mx

batch_shape = (1, 1, 10, 10)
h_input = np.zeros(shape=batch_shape, dtype=np.float32)
# init output with ones to see if contents really changed
h_output = np.ones(shape=batch_shape, dtype=np.float32)

d_input = cuda.mem_alloc(h_input.nbytes)
stream = cuda.Stream()
cuda.memcpy_htod_async(d_input, h_input, stream)
# here some actions with d_input may be performed, e.g. kernel calls,
# but for the sake of simplicity we'll just transfer it back to host
cuda.memcpy_dtoh_async(h_output, d_input, stream)
stream.synchronize()

mx_input = mx.nd.array(h_output, ctx=mx.gpu(0))
print('output after pycuda calls: ', h_output)
print('mx_input: ', mx_input)
```
However, I would like to avoid the overhead of the device-to-host and host-to-device memory copies.
I couldn't find a way to construct an
mxnet.ndarray.NDArray directly from a device pointer.
The closest thing I was able to find is construction of an NDArray from a DLPack capsule.
But it is not clear how to work with a DLPack object from Python.
Is there a way to achieve
NDArray <-> PyCUDA interoperability without copying memory via the host? Should this be a feature request?