ReLU Clips NaNs to Zero


#1

I didn’t see a bug-report category, so I picked performance. While tracking down a bug, I found that the relu in mxnet.ndarray clips NaNs to zero. This seems like unwanted behavior: relu should propagate NaNs so that bugs are not “covered up” for downstream operations. Minimal code to reproduce is below.

mx.__version__ ‘1.2.0’
np.__version__ ‘1.14.3’

import numpy as np
from mxnet import nd

nd.relu(np.NaN*nd.ones(1))
--> [0.] <NDArray 1 @cpu(0)>
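The expected (propagating) behavior can be illustrated with plain NumPy; this is a sketch of the distinction, not MXNet’s actual implementation. `np.maximum` follows IEEE-style NaN propagation, while a comparison-based clamp silently maps NaN to zero because `NaN > 0` evaluates to False:

```python
import numpy as np

x = np.array([np.nan, -1.0, 2.0])

# np.maximum propagates NaNs: max(NaN, 0) stays NaN
propagating = np.maximum(x, 0.0)      # [nan, 0., 2.]

# A comparison-based clamp "covers up" the NaN,
# since (NaN > 0) is False, so the else-branch (0) is taken
clipping = np.where(x > 0.0, x, 0.0)  # [0., 0., 2.]
```

An implementation of relu built on the second pattern would show exactly the clipping reported above.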


#2

Thanks for reporting that. We prefer bugs to be reported in the issues section of the GitHub repo: https://github.com/apache/incubator-mxnet/issues
That way, if it is confirmed to be a bug, we can link the PR to the issue. Would you mind creating an issue there?
Thanks


#3

Sure, no problem. I reported it here: https://github.com/apache/incubator-mxnet/issues/11115.