How to create symbol parameters?

In MXNet's symbolic API, the graph treats every argument as a symbol. This is unlike TensorFlow, where variables and input placeholders are declared differently, so when you eval or run the graph you only have to feed values for the input placeholders. In MXNet you have to pass the weights and biases as well as the inputs in order to eval, for example:

import mxnet as mx
import mxnet.symbol as sym

inputs = sym.Variable('inputs')
w = sym.Variable('w')
predictions = sym.dot(inputs, w)
# every free variable needs an NDArray binding, weights included
predictions.eval(ctx=mx.cpu(), inputs=mx.nd.ones((2, 3)), w=mx.nd.ones((3, 4)))

Of course that's not the case when you use a builtin layer like sym.FullyConnected, but I want to do it from scratch.
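For reference, here's what I mean by builtin layers handling this for you: FullyConnected creates its own weight and bias variables behind the scenes (the name fc here is just for illustration):

import mxnet as mx

data = mx.sym.Variable('data')
fc = mx.sym.FullyConnected(data, num_hidden=4, name='fc')
print(fc.list_arguments())  # ['data', 'fc_weight', 'fc_bias']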

Hi @mouryarishik,

You’re working with operators that don’t have associated parameters, but you can create custom operators with parameters. I definitely recommend that you take a look at Gluon, because there’s an mxnet.gluon.Parameter class that will give you this separation.

Check out this tutorial for how to create a custom block which can encapsulate your operation containing parameters. You’ll want to create a placeholder for the parameter inside __init__, and then you’ll have access to it as an argument in the hybrid_forward call. See the following snippet from the tutorial.

import mxnet as mx
from mxnet import gluon

class NormalizationHybridLayer(gluon.HybridBlock):
    def __init__(self, hidden_units, scales):
        super(NormalizationHybridLayer, self).__init__()

        with self.name_scope():
            # shape=(hidden_units, 0) leaves the input dimension unknown;
            # it is inferred on the first forward pass (deferred init)
            self.weights = self.params.get('weights',
                                           shape=(hidden_units, 0),
                                           allow_deferred_init=True)

            self.scales = self.params.get('scales',
                                          shape=scales.shape,
                                          init=mx.init.Constant(scales.asnumpy().tolist()),  # convert to a regular list to make this object serializable
                                          differentiable=False)

    def hybrid_forward(self, F, x, weights, scales):
        # registered parameters are passed into hybrid_forward automatically
        normalized_data = F.broadcast_div(F.broadcast_sub(x, F.min(x)),
                                          F.broadcast_sub(F.max(x), F.min(x)))
        weighted_data = F.FullyConnected(normalized_data, weights,
                                         num_hidden=self.weights.shape[0],
                                         no_bias=True)
        scaled_data = F.broadcast_mul(scales, weighted_data)
        return scaled_data
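As a rough usage sketch (the shapes here are just illustrative assumptions), you'd instantiate the block, initialize it, and let deferred init infer the weight shape on the first call:

from mxnet import nd

layer = NormalizationHybridLayer(hidden_units=5, scales=nd.array([2.0]))
layer.initialize(mx.init.Xavier())
x = nd.random.uniform(shape=(4, 3))
out = layer(x)    # first call infers the deferred weight shape (5, 3)
print(out.shape)  # (4, 5)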

Thanks, that’d be very helpful. But can’t I do roughly the same thing using just pure symbols?
By the way, thanks for the help.

Yes, you can, using a Module.

You might have seen examples of Module where only a symbol is given, e.g. from the docs:

import mxnet as mx

data = mx.sym.Variable('data')
fc1  = mx.sym.FullyConnected(data, name='fc1', num_hidden=128)
act1 = mx.sym.Activation(fc1, name='relu1', act_type='relu')
fc2  = mx.sym.FullyConnected(act1, name='fc2', num_hidden=10)
out  = mx.sym.SoftmaxOutput(fc2, name='softmax')
mod = mx.mod.Module(out)

It’s not easy to see how parameters are handled from this example, but there are a few defaults being used here that help explain what’s going on.

mod = mx.mod.Module(out,
                    data_names=('data',),
                    label_names=('softmax_label',),
                    state_names=None)

A parameter is assumed to be any input that is not data (specified in data_names), not a label (specified in label_names), and not a state (specified in state_names). So all the FullyConnected weights in this example are classed as parameters because they are not called data or softmax_label.
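A quick way to see which inputs get treated as parameters is to list the symbol's arguments; with the example above I'd expect:

print(out.list_arguments())
# ['data', 'fc1_weight', 'fc1_bias', 'fc2_weight', 'fc2_bias', 'softmax_label']
# with the defaults above, everything except 'data' and 'softmax_label'
# becomes a parameter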

And back to your example: you could set data_names=('inputs',) and label_names=None, which will leave w as a parameter. I hope that helps.
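Here's a minimal sketch of that, with shapes that are just assumptions for illustration (giving w an explicit shape lets bind complete shape inference):

import mxnet as mx

inputs = mx.sym.Variable('inputs')
w = mx.sym.Variable('w', shape=(3, 4))  # fixed shape so inference can complete
predictions = mx.sym.dot(inputs, w)

mod = mx.mod.Module(predictions,
                    data_names=('inputs',),
                    label_names=None)
mod.bind(data_shapes=[('inputs', (2, 3))], for_training=False)
mod.init_params(initializer=mx.init.Uniform(0.1))
print(mod.get_params()[0].keys())  # dict_keys(['w']), so w is a parameter

mod.forward(mx.io.DataBatch(data=[mx.nd.ones((2, 3))]))
print(mod.get_outputs()[0].shape)  # (2, 4)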

That’s what I wanted to do, thanks.
Xoxo