MXNet Get Batch Size during inference

Is there any way to get the batch size of a symbol during inference? Usually, shape inference returns a tuple, but the first dimension (the batch dimension) seems to be unidentified (0).

You can use infer_shape; check the documentation.

It returns 0 for unknown dimensions, which ruins the ability to use it.
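For reference, here's a minimal sketch of the behavior being described, assuming a simple FullyConnected symbol (in MXNet shapes, a 0 marks an unknown dimension, and partial shape inference propagates it):

import mxnet as mx

data = mx.sym.var('data')
fc = mx.sym.FullyConnected(data, num_hidden=10)

# Batch dimension passed as 0 (unknown); partial inference keeps it as 0
arg_shapes, out_shapes, _ = fc.infer_shape_partial(data=(0, 784))
print(out_shapes)  # [(0, 10)] -- the batch dimension stays unknown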

Please explain what you are trying to achieve, with an example.

You know the batch size when data is passed to the network, so you can use data_iter.provide_data for this.

>>> import mxnet as mx
>>> nd_iter = mx.io.NDArrayIter(data={'data': mx.nd.ones((100, 10))},
...                             label={'softmax_label': mx.nd.ones((100,))},
...                             batch_size=25)
>>> print(nd_iter.provide_data)
[DataDesc[data,(25, 10L),,NCHW]]

That's kind of what I don't want to do; I'd like to know it from the network.

Dhruv

Networks are generally not batch-size dependent, so the same network can take inputs of any batch size. For example, you might train on batches of size 128 but test on data with a batch size of 100 or anything else.
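To illustrate (a small sketch with assumed input shapes), the same symbol can be bound with two different batch sizes:

import mxnet as mx

net = mx.sym.FullyConnected(mx.sym.var('data'), num_hidden=10)

# The same symbol bound twice, with different batch sizes
exe_train = net.simple_bind(ctx=mx.cpu(), data=(128, 784))
exe_test = net.simple_bind(ctx=mx.cpu(), data=(100, 784))
print(exe_train.outputs[0].shape)  # (128, 10)
print(exe_test.outputs[0].shape)   # (100, 10)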

But in MXNet, if you are using a symbol and binding it with .simple_bind, which takes the shapes of all placeholders, then in that scenario your network (symbol) is batch-size dependent, and infer_shape (given the same input shape) will return what you want.
Sample code below:

import mxnet as mx

batch_size = 128
data = mx.sym.var('data')
predictions = mx.sym.FullyConnected(data, num_hidden=10)

# Binding fixes the input shape, so this executor is batch-size dependent
executor = predictions.simple_bind(ctx=mx.cpu(), data=(batch_size, 784))

# Pass the known input shape; infer_shape fills in all the rest
arg_shapes, out_shapes, aux_shapes = predictions.infer_shape(data=(batch_size, 784))

The last line returns the inferred shapes, including the batch dimension you want.
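Alternatively (assuming the executor from the snippet above), you can read the output shape directly from the bound executor without calling infer_shape at all:

# The bound executor's output arrays already carry the batch size
print(executor.outputs[0].shape)  # (128, 10)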

I recommend doing what @thomelane is suggesting.