It’s known that ndarrays do not scale to very large sizes; for example, the following commands raise an error:
>>> import mxnet as mx
>>> import numpy as np
>>> a = mx.nd.ones((1000, 5065309))
>>> b = a.asnumpy()
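For reference, this first shape is already well past the int32 limit, which can be checked directly:

>>> 1000 * 5065309                 # total number of elements
5065309000
>>> 1000 * 5065309 > np.iinfo(np.int32).max   # more than 2x the int32 max
True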
However, it seems that when the total size of the ndarray is only slightly larger than the maximum value of an int32, no error is raised, but later operations silently give incorrect results:
>>> 214748364 * 11 > np.iinfo('int32').max
True
>>> a = mx.nd.ones((214748364, 11))
>>> b = a.asnumpy()
>>> b.sum()
0.0
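One plausible explanation (an assumption; the exact spot in the MXNet code is not traced here) is that the element count is held in a signed 32-bit integer somewhere and wraps around, leaving a nonsensical negative count for downstream code. The wraparound itself can be sketched with NumPy's C-style casting:

>>> import numpy as np
>>> 214748364 * 11                 # true element count
2362232004
>>> np.array([214748364 * 11], dtype=np.int64).astype(np.int32)
array([-1932735292], dtype=int32)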
The expected result is the total number of elements in the array, and this works fine when the size stays just under the int32 limit:
>>> 214748364 * 10 > np.iinfo('int32').max
False
>>> a = mx.nd.ones((214748364, 10))
>>> b = a.asnumpy()
>>> b.sum()
2147483600.0
>>> 214748364 * 10
2147483640
In the second case, the answer is approximately correct (slightly off due to floating-point arithmetic), but in the first case something seems to be completely wrong.
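The small discrepancy in the second case is expected: near 2^31 the gap between adjacent float32 values is 128-256, so a float32 sum cannot represent the exact count 2147483640:

>>> import numpy as np
>>> float(np.float32(2147483640))  # nearest representable float32 is 2^31
2147483648.0
>>> float(np.spacing(np.float32(2147483640)))  # gap to the next float32 value
256.0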