So for your example:

```
import mxnet as mx

class MyNetwork(mx.gluon.nn.HybridBlock):
    def __init__(self, **kwargs):
        super(MyNetwork, self).__init__(**kwargs)
        with self.name_scope():
            self.first_net = mx.gluon.nn.HybridSequential()
            self.first_net.add(mx.gluon.nn.Dense(units=3))
            self.second_net = mx.gluon.nn.HybridSequential()
            self.second_net.add(mx.gluon.nn.Dense(units=2))
            self.third_net = mx.gluon.nn.HybridSequential()
            self.third_net.add(mx.gluon.nn.Dense(units=1))

    def hybrid_forward(self, F, x1, x2):
        first = self.first_net(x1)
        second = self.second_net(x2)
        concat = F.concat(first, second, dim=1)
        third = self.third_net(concat)
        return third

net = MyNetwork()
net.initialize()
trainer = mx.gluon.Trainer(net.collect_params(), optimizer='sgd',
                           optimizer_params={'learning_rate': 1})
x1 = mx.nd.random.uniform(shape=(10, 4))
x2 = mx.nd.random.uniform(shape=(10, 4))
with mx.autograd.record():
    output = net(x1, x2)
net.second_net[0].weight.data()
```

```
[[-0.04054644 -0.0198587 -0.05195032 0.03509606]
[-0.02584003 0.01509629 -0.01908049 -0.02449339]]
<NDArray 2x4 @cpu(0)>
```

These are the weights straight from initialisation.

## Update: default

```
output.backward()
trainer.step(batch_size=x1.shape[0])
net.second_net[0].weight.data()
```

```
[[-0.04904126 -0.0277727 -0.06480464 0.02541063]
[-0.05673689 -0.01368807 -0.06583347 -0.0597207 ]]
<NDArray 2x4 @cpu(0)>
```

Weights after single update step: **change from initialisation**.
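For reference, `trainer.step(batch_size)` here applies plain SGD: each weight moves by `lr * grad / batch_size`, where `grad` is the gradient summed over the batch. A pure-Python sketch of that rule (the gradient value below is hypothetical, chosen so the arithmetic reproduces the first printed weight; it is not taken from the actual run):

```python
lr = 1.0          # learning_rate passed to the Trainer above
batch_size = 10   # x1.shape[0]

def sgd_step(weight, grad, lr, batch_size):
    # trainer.step(batch_size) rescales the summed gradient by 1/batch_size
    # before the plain SGD update
    return weight - lr * grad / batch_size

w = -0.04054644   # first weight from the initialisation printout
g = 0.0849482     # hypothetical summed gradient over the batch
print(sgd_step(w, g, lr, batch_size))  # -0.04904126, the updated value
```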

## Update: after freezing

```
for param in net.second_net.collect_params().values():
    param.grad_req = 'null'
```

```
with mx.autograd.record():
    output = net(x1, x2)
output.backward()
trainer.step(batch_size=x1.shape[0])
net.second_net[0].weight.data()
```

```
[[-0.04904126 -0.0277727 -0.06480464 0.02541063]
[-0.05673689 -0.01368807 -0.06583347 -0.0597207 ]]
<NDArray 2x4 @cpu(0)>
```

Weights after another update step: **same as before**.
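Rather than comparing the printouts by eye, the check can be made explicit. A minimal sketch using the values copied from the two printouts above (in a real script you would snapshot `weight.data().asnumpy().copy()` before the step and compare after):

```python
# Weight matrix printed after the first (unfrozen) update step
before = [[-0.04904126, -0.0277727, -0.06480464, 0.02541063],
          [-0.05673689, -0.01368807, -0.06583347, -0.0597207]]

# Weight matrix printed after the step taken with grad_req = 'null'
after = [[-0.04904126, -0.0277727, -0.06480464, 0.02541063],
         [-0.05673689, -0.01368807, -0.06583347, -0.0597207]]

# The frozen parameters were not touched by trainer.step
assert before == after
print("second_net weights unchanged after freezing")
```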