How to train specific layers with different learning rates using Gluon?

Hi @JWarlock,

Just spotted this question (which you’ve probably solved already!), but since it’s related to another question I just asked here, I can give you a simple example covering both scenarios: freezing a layer entirely, and training a layer with a reduced learning rate.

import mxnet as mx

net = mx.gluon.nn.HybridSequential()
net.add(mx.gluon.nn.Conv2D(channels=3, kernel_size=3))
net.add(mx.gluon.nn.Conv2D(channels=4, kernel_size=3))
net.add(mx.gluon.nn.Dense(units=5))

# 'freeze' the 1st Conv2D layer: no gradients are computed or applied
for param in net[0].collect_params().values():
    param.grad_req = 'null'

# train the 2nd Conv2D layer at half the Trainer's base learning rate
for param in net[1].collect_params().values():
    param.lr_mult = 0.5