I am using our internally implemented parameter server. Can anyone give me an example of how to do distributed training with the Gluon API? Specifically:
- how to split and load data across different machines;
- how to compute gradients on the worker machines;
- how to gather gradients from the worker machines and update the parameters on the master.
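To make the question concrete, here is a minimal, framework-free sketch of the flow I have in mind. The function names, the equal-size sharding, and the gradient-averaging update are my own assumptions, not Gluon or parameter-server API calls; I want to know how to map these three steps onto Gluon with our server.

```python
# Hypothetical sketch of the three steps: shard data, compute
# per-worker gradients, gather and update on the master.
# None of these functions are real Gluon/parameter-server APIs.

def split_data(samples, num_workers):
    """Shard the dataset so each worker loads a distinct slice."""
    return [samples[i::num_workers] for i in range(num_workers)]

def worker_gradient(shard, w):
    """Worker side: gradient of MSE loss for the model y = w * x
    on this worker's shard: dL/dw = mean(2 * x * (w * x - y))."""
    g = 0.0
    for x, y in shard:
        g += 2.0 * x * (w * x - y)
    return g / len(shard)

def master_update(w, grads, lr=0.05):
    """Master side: average the gathered gradients, apply SGD."""
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Toy data generated from y = 3 * x.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]]
shards = split_data(data, num_workers=4)

w = 0.0
for step in range(200):
    grads = [worker_gradient(s, w) for s in shards]  # simulated workers
    w = master_update(w, grads)                      # simulated master

print(round(w, 3))  # converges toward the true weight 3.0
```

In a real setup the workers would of course run as separate processes, with the parameter server transporting the gradients and the updated weights; this sketch only shows the data flow I am trying to reproduce.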