Create mxnet.ndarray.NDArray from pycuda.driver.DeviceAllocation

I am trying to pass the output of some pycuda operation to the input of an MXNet computational graph.
I am able to achieve this via numpy conversion with the following code:

    import pycuda.driver as cuda
    import pycuda.autoinit
    import numpy as np
    import mxnet as mx
    
    batch_shape = (1, 1, 10, 10)
    h_input = np.zeros(shape=batch_shape, dtype=np.float32)
    # init output with ones to see if contents really changed
    h_output = np.ones(shape=batch_shape, dtype=np.float32)
    d_input = cuda.mem_alloc(h_input.nbytes)
    stream = cuda.Stream()
    cuda.memcpy_htod_async(d_input, h_input, stream)

    # here some actions with d_input may be performed, e.g. kernel calls
    # but for the sake of simplicity we'll just transfer it back to host
    cuda.memcpy_dtoh_async(h_output, d_input, stream)
    stream.synchronize()
    mx_input = mx.nd.array(h_output, ctx=mx.gpu(0))

    print('output after pycuda calls: ', h_output)
    print('mx_input: ', mx_input)

However, I would like to avoid the overhead of the device-to-host and host-to-device memory copies.

I couldn't find a way to construct an mxnet.ndarray.NDArray directly from device memory (d_input).
The closest thing I was able to find is construction of an NDArray from DLPack,
but it is not clear how to work with a DLPack object from Python.

Is there a way to achieve NDArray <-> pycuda interoperability without copying memory through the host? Should I file a feature request?

Unfortunately, it is not possible at the moment.

Out of curiosity, has anything changed regarding this?

I am also interested in passing data that is already on the GPU into a DL network, with the output remaining on the GPU so it can be passed to the next element in a pipeline. If possible, I'd like to do this without performing any CPU/GPU copies. Is this possible at the moment? Would it make any difference if C++ were used instead of Python? In the end it's a bit tricky, as it would require a conversion from a CUDA data type to an MXNet tensor.

April 11, 2020: is there anything new regarding this question?