What is not implemented for Blocks and Hybrid Blocks?


#1

It seems that random sampling is not yet implemented for Gluon?

[23:48:39] /Users/travis/build/dmlc/mxnet-distro/mxnet-build/dmlc-core/include/dmlc/logging.h:308: [23:48:39] src/c_api/c_api_ndarray.cc:180: Check failed: infershape[op](attrs, &in_shapes, &out_shapes)

Stack trace returned 8 entries:
[bt] (0) 0   libmxnet.so                         0x00000001049a81f8 _ZN4dmlc15LogMessageFatalD2Ev + 40
[bt] (1) 1   libmxnet.so                         0x00000001049a5db9 _ZN4dmlc15LogMessageFatalD1Ev + 9
[bt] (2) 2   libmxnet.so                         0x000000010576d4d7 _Z12SetShapeTypePKN4nnvm2OpERKNS_9NodeAttrsERKN5mxnet7ContextERKNSt3__16vectorINS6_7NDArrayENSA_9allocatorISC_EEEEPSF_Pi + 6391
[bt] (3) 3   libmxnet.so                         0x0000000105773da9 _Z20ImperativeInvokeImplRKN5mxnet7ContextEON4nnvm9NodeAttrsEPNSt3__16vectorINS_7NDArrayENS6_9allocatorIS8_EEEESC_PNS7_IbNS9_IbEEEESF_ + 537
[bt] (4) 4   libmxnet.so                         0x000000010577713e MXInvokeCachedOp + 1534
[bt] (5) 5   libmxnet.so                         0x0000000105777dec MXInvokeCachedOpEx + 28
[bt] (6) 6   _ctypes.cpython-36m-darwin.so       0x00000001045a92c7 ffi_call_unix64 + 79
[bt] (7) 7   ???                                 0x00007fff5d9e4430 0x0 + 140734764041264

Segmentation fault: 11

I was using a SELU activation function when it crashed. Is there a list somewhere of what remains to be implemented? Here is the forward pass:

    def hybrid_forward(self, F, x):
        x = self.scale * F.LeakyReLU(self.fc1(x), act_type='elu', slope=self.alpha)
        noise = F.random_uniform(shape=())
        x = a * (self.alphaprime + 0.5 * (1.0 + F.sign(noise - self.dropout)) * (x - self.alphaprime)) + b
        x = self.scale * F.LeakyReLU(self.fc2(x), act_type='elu', slope=self.alpha)
        noise = F.random_uniform(shape=())
        x = a * (self.alphaprime + 0.5 * (1.0 + F.sign(noise - self.dropout)) * (x - self.alphaprime)) + b
        x = self.output(x)
        return x
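For reference, the sign-based expression in the snippet is one way to write the alpha-dropout step from the SELU paper: `0.5 * (1 + sign(noise - dropout))` evaluates to 1 when the uniform draw exceeds the drop rate and to 0 otherwise, so each unit is either kept or replaced with `alphaprime` before the affine correction `a * (.) + b`. A minimal NumPy sketch (the names `a`, `b`, `alphaprime`, `dropout` mirror the snippet above; the values are purely illustrative):

```python
import numpy as np

def alpha_dropout(x, noise, dropout, alphaprime, a, b):
    # keep_mask is 1.0 where noise > dropout, else 0.0 --
    # the same test written as 0.5 * (1 + sign(noise - dropout))
    keep_mask = 0.5 * (1.0 + np.sign(noise - dropout))
    # kept units pass through; dropped units are set to alphaprime,
    # then the affine correction a * (.) + b rescales the result
    return a * (alphaprime + keep_mask * (x - alphaprime)) + b

x = np.array([1.0, -2.0, 3.0])
noise = np.array([0.9, 0.1, 0.9])  # per-unit uniform draws
out = alpha_dropout(x, noise, dropout=0.5, alphaprime=-1.75, a=1.0, b=0.0)
# only the middle unit has noise < dropout, so it becomes alphaprime
```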

#2

Okay, having figured half of this out, I'll leave my findings here.

The issue is that shape=() is used by nnvm as a marker for an unknown shape. This means the call works fine for NDArray if you explicitly pass the shape:

if F.__name__ == 'mxnet.ndarray':
    # imperative mode: the input's shape is known at run time
    noise = F.random_uniform(shape=(x.shape[1],))
else:
    # symbolic mode: try to recover the shape via shape inference
    noise = F.random_uniform(shape=(x.infer_shape()[1][0][0],))

However, neither F.random_uniform(shape=()) nor the above works for symbol. Does anyone know how I can get around this when hybridizing?


#3

With the latest MXNet you can use F.random.uniform instead; it doesn't have this issue.