I have some self-trained computer vision models, trained with Gluon and exported to symbol/params files. I’m looking for a way to run these models on the smallest possible edge device, so that I don’t have to send the video feed to a server and run inference remotely.
I would like to do inference on an ESP32 with a camera using my own MXNet models. Does anyone have experience with this? I found that TensorFlow is supported on the ESP32 to some extent (via TensorFlow Lite for Microcontrollers), but I could not find anything similar for MXNet or ONNX.