MXNetError: Cannot find argument 'lazy_update'

New to MXNet here.
I am working through https://beta.mxnet.io/guide/crash-course/4-train.html.
While running the complete training loop, I hit the error below. It sounds like the trainer params need an extra argument, but it isn't clear how to pass it, or else the crash-course page needs to be updated.
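
For context, the loop is essentially the one from that page. Here is a minimal standalone sketch; the Dense layer and the single random batch are placeholders standing in for the network and data loader defined earlier in the crash course, just so the snippet runs on its own:

from mxnet import autograd, gluon, nd
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
batch_size = 256
# one fake batch standing in for the real data loader
train_data = [(nd.random.uniform(shape=(batch_size, 784)),
               nd.zeros((batch_size,)))]

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

train_loss = 0.0
for data, label in train_data:
    with autograd.record():
        output = net(data)
        loss = softmax_cross_entropy(output, label)
    loss.backward()
    # update parameters -- this is the call that raises the error
    trainer.step(batch_size)
    # calculate training metrics
    train_loss += loss.mean().asscalar()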

Stack Dump alert:

MXNetError: Cannot find argument 'lazy_update', Possible Arguments:

MXNetError Traceback (most recent call last)
in <module>()
9 loss.backward()
10 # update parameters
--> 11 trainer.step(batch_size)
12 # calculate training metrics
13 train_loss += loss.mean().asscalar()

~/software/anaconda3/lib/python3.6/site-packages/mxnet/gluon/trainer.py in step(self, batch_size, ignore_stale_grad)
198 for upd, arr, grad in zip(self._updaters, param.list_data(), param.list_grad()):
199 if not ignore_stale_grad or arr._fresh_grad:
--> 200 upd(i, grad, arr)
201 arr._fresh_grad = False
202

~/software/anaconda3/lib/python3.6/site-packages/mxnet/optimizer/optimizer.py in __call__(self, index, grad, weight)
1528 self.sync_state_context(self.states[index], weight.context)
1529 self.states_synced[index] = True
--> 1530 self.optimizer.update_multi_precision(index, weight, grad, self.states[index])
1531
1532 def sync_state_context(self, state, context):

~/software/anaconda3/lib/python3.6/site-packages/mxnet/optimizer/optimizer.py in update_multi_precision(self, index, weight, grad, state)
553 use_multi_precision = self.multi_precision and weight.dtype == numpy.float16
554 self._update_impl(index, weight, grad, state,
--> 555 multi_precision=use_multi_precision)
556
557 @register

~/software/anaconda3/lib/python3.6/site-packages/mxnet/optimizer/optimizer.py in _update_impl(self, index, weight, grad, state, multi_precision)
538 else:
539 sgd_update(weight, grad, out=weight, lazy_update=self.lazy_update,
--> 540 lr=lr, wd=wd, **kwargs)
541 else:
542 if state[0] is not None:

~/software/anaconda3/lib/python3.6/site-packages/mxnet/ndarray/register.py in sgd_update(weight, grad, lr, wd, rescale_grad, clip_gradient, out, name, **kwargs)

~/software/anaconda3/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py in _imperative_invoke(handle, ndargs, keys, vals, out)
90 c_str_array(keys),
91 c_str_array([str(s) for s in vals]),
--> 92 ctypes.byref(out_stypes)))
93
94 if original_output is not None:

~/software/anaconda3/lib/python3.6/site-packages/mxnet/base.py in check_call(ret)
147 """
148 if ret != 0:
--> 149 raise MXNetError(py_str(_LIB.MXGetLastError()))
150
151

MXNetError: Cannot find argument 'lazy_update', Possible Arguments:

lr : float, required
Learning rate
wd : float, optional, default=0
Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad : float, optional, default=1
Rescale gradient to grad = rescale_grad*grad.
clip_gradient : float, optional, default=-1
Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
, in operator sgd_update(name="", rescale_grad="0.00390625", wd="0.0", lr="0.1", lazy_update="True")

If you want to change an argument of the optimizer, you need to add it to the args dictionary when instantiating the Trainer. It would look something like this:

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1, 'lazy_update': False})
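
Alternatively (a sketch; net is the network from your question), you can construct the optimizer object yourself and pass it to the Trainer, since in current versions lazy_update is a constructor argument of the SGD optimizer:

import mxnet as mx
from mxnet import gluon

# lazy_update is accepted by the SGD optimizer's constructor (MXNet 1.4.x)
optimizer = mx.optimizer.SGD(learning_rate=0.1, lazy_update=False)
trainer = gluon.Trainer(net.collect_params(), optimizer)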

If you don't actually want to change that argument and you get this error just running the example as-is, please update your MXNet version - you may have an old version installed. The current version is 1.4.x.
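
To check which version is installed, you can do the following (and then upgrade, for example with pip install --upgrade mxnet):

import mxnet
print(mxnet.__version__)  # e.g. '1.2.1' on an older install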

Thanks.
I was running MXNet version 1.2.1.
After upgrading to 1.4.1, the issue went away.