NDArray conversion from other array types

I created a 2D nd.array from a list of plain Python lists:

from mxnet import nd

x = [0, 1, 2]
y = [3, 4, 5]
z = nd.array([x, y])

The above code works fine. Replacing x and y with numpy arrays works just as well:

import numpy as np

x = np.array([0, 1, 2])
y = np.array([3, 4, 5])
z = nd.array([x, y])

Finally, I tried replacing x and y with nd.arrays:

x = nd.array([0, 1, 2])
y = nd.array([3, 4, 5])
z = nd.array([x, y])

This fails and gives the following errors:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mxnet/ndarray.py", line 1291, in array
    source_array = np.array(source_array, dtype=dtype)
ValueError: setting an array element with a sequence.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mxnet/ndarray.py", line 1293, in array
    raise TypeError('source_array must be array like object')
TypeError: source_array must be array like object

Judging from the traceback, nd.array() simply calls np.array() on its input, and np.array() apparently cannot handle a list of NDArray elements, so nd.array itself is not treated as an "array like" object here. This is not very intuitive. Would it be possible to have nd.array recognized as an array-like object?
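
Converting the NDArrays back to numpy first does appear to work (just a sketch of the detour, using NDArray.asnumpy(), which copies an NDArray into a numpy array):

z = nd.array([x.asnumpy(), y.asnumpy()])  # a list of numpy arrays is accepted, as shown above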

Also, could anyone explain whether there is any overhead in converting either a plain Python list or a numpy array to an nd.array? If there is, is it significant enough for developers to worry about, or is it negligible? Thanks!

nd.array doesn't seem to recognize NDArray elements inside the list. The way we got around this was by doing:

nd.stack(*[nd.arange(10), nd.arange(10, 20)])

@dmadeka This works for now; hopefully, the more intuitive version above will be supported in the future. Thank you very much!
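
For reference, applying the suggested workaround to the original example (with x and y being the nd.arrays defined earlier) gives the expected result, since nd.stack joins its inputs along a new axis 0:

z = nd.stack(x, y)  # shape (2, 3), same values as nd.array([[0, 1, 2], [3, 4, 5]])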

I ran a small benchmark comparing the performance of the two approaches to creating the 2D nd.array. First, I created lists of numpy arrays and wrapped each list in an nd.array:

import time
import random
import numpy as np
from mxnet import nd

random.seed(1)

for k in range(50, 251, 50):
    t0 = t1 = 0

    for _ in range(1000):
        st = time.time()
        # build a list of k numpy arrays, each holding 500 random floats
        l = [np.array([random.random() for _ in range(500)]) for _ in range(k)]
        mt = time.time()
        # wrap the whole list in a single (k, 500) nd.array
        n = nd.array(l)
        et = time.time()
        t0 += mt - st  # time spent building the list of numpy arrays
        t1 += et - mt  # time spent on the nd.array conversion

    print('%3d: %9.6f + %9.6f = %9.6f' % (k, t0, t1, t0+t1))

The following shows the output of the above code (k: list creation time + nd.array conversion time = total, in seconds over the 1000 repetitions):

 50:  4.042995 +  0.132606 =  4.175601
100:  8.098909 +  0.178784 =  8.277693
150: 12.673537 +  0.267941 = 12.941478
200: 16.861520 +  0.358759 = 17.220279
250: 21.823990 +  0.406390 = 22.230380

Then, I created lists of nd.arrays and stacked them into a single nd.array:

for k in range(50, 251, 50):
    t0 = t1 = 0

    for _ in range(1000):
        st = time.time()
        # build a list of k nd.arrays, each holding 500 random floats
        l = [nd.array([random.random() for _ in range(500)]) for _ in range(k)]
        mt = time.time()
        # stack the k nd.arrays into a single (k, 500) nd.array
        n = nd.stack(*l)
        et = time.time()
        t0 += mt - st  # time spent building the list of nd.arrays
        t1 += et - mt  # time spent on the nd.stack call

    print('%3d: %9.6f + %9.6f = %9.6f' % (k, t0, t1, t0+t1))

Here is the output from the second approach:

 50:  7.567485 +  0.089953 =  7.657437
100: 15.304291 +  0.138241 = 15.442532
150: 24.316803 +  0.208390 = 24.525193
200: 31.482239 +  0.229603 = 31.711842
250: 38.947322 +  0.266001 = 39.213324

From this rudimentary benchmark, it seems more efficient to create numpy arrays and wrap them in an nd.array than to create nd.arrays and stack them. Would this conclusion hold in general? Thanks!
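
For what it's worth, in both runs most of the reported time went into building the per-row data in the inner list comprehension (the first column), not into the final nd.array / nd.stack call (the second column). A rough way to isolate just the conversion cost would be to pre-generate the data once and time only the wrapping/stacking step; below is a sketch along those lines (nd.waitall() is used so MXNet's asynchronous engine finishes before the clock stops):

import time
import numpy as np
from mxnet import nd

# Pre-generate the data once so only the conversion/stacking step is timed.
data_np = [np.random.rand(500) for _ in range(250)]
data_nd = [nd.array(a) for a in data_np]

st = time.time()
for _ in range(1000):
    z = nd.array(data_np)   # wrap a list of 250 numpy arrays into one (250, 500) nd.array
nd.waitall()                # block until all pending asynchronous work is done
print('nd.array over a numpy list:    %.6f s' % (time.time() - st))

st = time.time()
for _ in range(1000):
    z = nd.stack(*data_nd)  # stack a list of 250 NDArrays into one (250, 500) nd.array
nd.waitall()
print('nd.stack over an NDArray list: %.6f s' % (time.time() - st))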