Possible to Use MXNet gluon.Trainer without a Neural Network?

Hi,

Can you please post the actual objective function you are trying to minimize, along with an example of a datum? Your objective function must be expressible through linear algebra using the building blocks of mxnet/gluon.

In principle, what you need to do is wrap your objective function (and its parameters that you are trying to optimize) inside a custom Block/HybridBlock, and then use that network inside the trainer. See this for understanding HybridBlock; personally, I find this example very intuitive: the output of the network is used directly as the loss function, with no additional data. If you don't need NN layers, only free variables, you will need to declare their shapes inside the HybridBlock with self.params.get( ... ); see this example on how to define custom variables inside a HybridBlock. A minimal sketch putting these pieces together follows.
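For concreteness, here is a minimal sketch against the MXNet 1.x Gluon API. It minimizes a hypothetical least-squares objective (the class name, shapes, and toy data are all illustrative, not from your problem): the free variable `w` is declared with `self.params.get`, and the block's output is the objective value itself, so no separate loss layer is needed.

```python
import mxnet as mx
from mxnet import autograd, gluon

class LeastSquares(gluon.HybridBlock):
    """Holds the free variable w and outputs the objective value itself."""
    def __init__(self, num_features, **kwargs):
        super(LeastSquares, self).__init__(**kwargs)
        with self.name_scope():
            # Custom variable declared with an explicit shape via self.params.get
            self.w = self.params.get('w', shape=(num_features, 1))

    def hybrid_forward(self, F, x, y, w):
        # The block's output *is* the objective: mean squared residual
        residual = F.dot(x, w) - y
        return F.mean(residual * residual)

# Hypothetical toy data: y = X w_true + noise
X = mx.nd.random.normal(shape=(100, 3))
w_true = mx.nd.array([[1.0], [-2.0], [0.5]])
y = mx.nd.dot(X, w_true) + 0.01 * mx.nd.random.normal(shape=(100, 1))

net = LeastSquares(num_features=3)
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

for epoch in range(200):
    with autograd.record():
        loss = net(X, y)   # forward pass returns the objective directly
    loss.backward()
    trainer.step(1)        # "batch size" 1: the objective is already averaged

print(net.w.data())        # should approach w_true
```

Note that gluon.Trainer only needs the ParameterDict returned by collect_params(), so this works for any Block, whether or not it contains conventional layers.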

In gluon, all loss functions are derived from gluon.loss.Loss (which in turn derives from HybridBlock), so a custom loss/objective function can be viewed as a (perhaps trivial) neural network.
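As a quick illustration of that point (the class name here is hypothetical), a custom objective can be written as a gluon.loss.Loss subclass exactly like any other HybridBlock:

```python
from mxnet import gluon

class MyObjective(gluon.loss.Loss):
    """A hypothetical squared-error objective written as a Gluon loss."""
    def __init__(self, weight=1.0, batch_axis=0, **kwargs):
        super(MyObjective, self).__init__(weight, batch_axis, **kwargs)

    def hybrid_forward(self, F, pred, label):
        loss = F.square(pred - label)
        # Average over every axis except the batch axis, as Gluon losses do
        return F.mean(loss, axis=self._batch_axis, exclude=True)

print(issubclass(gluon.loss.Loss, gluon.HybridBlock))  # True
```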

Hope this helps.