I'm in Beijing, China. This thread has more background on my question: https://discuss.gluon.ai/t/topic/7499
I'm implementing the paper with MXNet in Python, but without Gluon. Yes, no Gluon; I just use the Module API.
I use the 'Group' function, and in the backward part of my custom operator (feature_loss) I add the extra gradient into the network. It runs, but the results don't match the MatConvNet reference...
I'd like to know whether there is another way to add an extra gradient term during back-propagation.
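To make the question concrete, here is a minimal NumPy sketch of the "extra gradient" idea I have in mind. The function name, the lambda weight, and the L2-style extra term are all hypothetical placeholders, not the paper's actual formula; in MXNet's Module API this computation would live inside a custom operator's `backward()`, where `in_grad` is assigned `out_grad` plus the extra term.

```python
import numpy as np

def feature_loss_backward(out_grad, feature, lam=0.1):
    """Sketch of a custom op's backward pass that injects an extra gradient.

    out_grad : gradient flowing in from the layers above
    feature  : the input feature map saved from the forward pass
    lam      : hypothetical weight on the extra term

    The extra term here (lam * feature, an L2-style pull toward zero) is
    only a placeholder; substitute the paper's own gradient expression.
    """
    extra_grad = lam * feature
    # The gradient handed back to the network is the sum of both terms.
    return out_grad + extra_grad

# Tiny worked example.
feature = np.array([1.0, -2.0, 3.0])
out_grad = np.array([0.5, 0.5, 0.5])
total = feature_loss_backward(out_grad, feature)
print(total)  # -> [0.6 0.3 0.8]
```

If the extra term can be written as the gradient of some auxiliary loss, an alternative worth checking is expressing it as a second loss symbol and grouping it with the main loss, so the framework sums the gradients automatically instead of the backward pass doing it by hand.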
Thanks for your reply. You clearly know MXNet much better than I do; I only just left school and have been learning deep-learning tools for a month.
Let's keep in touch. Best wishes!