Stopping criterion

How do I change the stopping criterion from a fixed number of epochs to a loss value (or some other tolerance measure) in the following call?

mlp_model.fit(train_iter,  # train data
              optimizer='sgd',  # use SGD to train
              optimizer_params={'learning_rate': 0.1},  # use fixed learning rate
              eval_metric='acc',  # report accuracy during training
              batch_end_callback=mx.callback.Speedometer(batch_size, 50),  # output progress every 50 data batches
              num_epoch=1000)  # train for at most 1000 dataset passes

I don't know of a better way to do this than writing a custom training loop. I do agree that what you are asking for would be nice to have.
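The stopping logic itself is simple once you own the loop. Below is a framework-agnostic Python sketch of loss-tolerance stopping; the `train_one_epoch` callable is a stand-in for a real MXNet epoch (forward/backward/update over `train_iter`), and the names `tol` and `max_epochs` are my own, not MXNet API:

```python
def train_with_loss_tolerance(train_one_epoch, tol=1e-4, max_epochs=1000):
    """Run epochs until the improvement in loss falls below `tol`."""
    prev_loss = float('inf')
    for epoch in range(max_epochs):
        loss = train_one_epoch()           # one full pass over the data
        if abs(prev_loss - loss) < tol:    # converged: loss barely changed
            return epoch + 1, loss
        prev_loss = loss
    return max_epochs, prev_loss           # hit the epoch cap instead

# Example with a stub "trainer" whose loss decays geometrically.
losses = iter(1.0 * 0.5 ** k for k in range(100))
epochs_run, final_loss = train_with_loss_tolerance(lambda: next(losses), tol=1e-3)
# Stops after 11 epochs, well before the max_epochs cap of 1000.
```

In a real Module-API setup, `train_one_epoch` would reset `train_iter`, iterate its batches through the model, and return the epoch's mean training (or validation) loss.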

Are you using gluon? There are tutorials showing training loops that you could customize. For example: http://gluon.mxnet.io/chapter04_convolutional-neural-networks/cnn-scratch.html#The-training-loop

No, I am not using gluon.