Running custom object detection model on Android

Hi, this is a follow-up to a previous question of mine: I need to run a custom object detection model on Android.

I have already fine-tuned the model (ssd_512_mobilenet1.0_custom) on a custom dataset, and running inference with it (loading the .params file produced during training) works perfectly on my computer. Now I need to export it to Android.

I referred to this answer to figure out the procedure; it suggests three options:

  1. You can use ONNX to convert models to other runtimes, for example […] NNAPI for Android
  2. You can use TVM
  3. You can use SageMaker Neo + DLR runtime […]

Regarding the first option, I converted my model to ONNX (thanks again to @waytrue17 for the help).
However, to use it with NNAPI, the model must then be converted to daq. The repository provides a precompiled AppImage of onnx2daq for the conversion, but the script returns an error. I checked the issues section, where they report that “It actually fails for all onnx object detection models”.

I then gave DLR a try, since it is suggested as the easiest route.
As I understand it, to use my custom model with DLR I first need to compile it with TVM (which also covers the second option from the linked post). The repo provides a Docker image with conversion scripts for different frameworks.
I modified the `compile_gluoncv.py` script, which now reads:

```python
#!/usr/bin/env python3

from tvm import relay
import mxnet as mx
from mxnet.gluon.model_zoo.vision import get_model
from tvm_compiler_utils import tvm_compile

shape_dict = {'data': (1, 3, 300, 300)}
dtype = 'float32'
ctx = [mx.cpu(0)]

classes_custom = ["CML_mug"]
block = get_model('ssd_512_mobilenet1.0_custom', classes=classes_custom, pretrained_base=False, ctx=ctx)
block.load_parameters("ep_035.params", ctx=ctx)  ### this is the file produced by training on the custom dataset

for arch in ["arm64-v8a", "armeabi-v7a", "x86_64", "x86"]:
    sym, params = relay.frontend.from_mxnet(block, shape=shape_dict, dtype=dtype)
    func = sym["main"]
    func = relay.Function(func.params, relay.nn.softmax(func.body), None, func.type_params, func.attrs)
    tvm_compile(func, params, arch, dlr_model_name)
```

However, when I run the script, it fails with the following error:

```
ValueError: Model ssd_512_mobilenet1.0_custom is not supported. Available options are
	alexnet
	densenet121
	densenet161
	densenet169
	densenet201
	inceptionv3
	mobilenet0.25
	mobilenet0.5
	mobilenet0.75
	mobilenet1.0
	mobilenetv2_0.25
	mobilenetv2_0.5
	mobilenetv2_0.75
	mobilenetv2_1.0
	resnet101_v1
	resnet101_v2
	resnet152_v1
	resnet152_v2
	resnet18_v1
	resnet18_v2
	resnet34_v1
	resnet34_v2
	resnet50_v1
	resnet50_v2
	squeezenet1.0
	squeezenet1.1
	vgg11
	vgg11_bn
	vgg13
	vgg13_bn
	vgg16
	vgg16_bn
	vgg19
	vgg19_bn
```

Am I doing something wrong? Is this thing even possible?
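One thing I noticed while re-reading the script: the “available options” in the error are exactly the image-classification networks provided by `mxnet.gluon.model_zoo.vision`, which is where the script imports `get_model` from. A quick sanity check (the list below is copied from the error output):

```python
# "Available options" from the error: these are the classification nets in
# mxnet.gluon.model_zoo.vision -- there is no detection model among them.
available = [
    "alexnet",
    "densenet121", "densenet161", "densenet169", "densenet201",
    "inceptionv3",
    "mobilenet0.25", "mobilenet0.5", "mobilenet0.75", "mobilenet1.0",
    "mobilenetv2_0.25", "mobilenetv2_0.5", "mobilenetv2_0.75", "mobilenetv2_1.0",
    "resnet101_v1", "resnet101_v2", "resnet152_v1", "resnet152_v2",
    "resnet18_v1", "resnet18_v2", "resnet34_v1", "resnet34_v2",
    "resnet50_v1", "resnet50_v2",
    "squeezenet1.0", "squeezenet1.1",
    "vgg11", "vgg11_bn", "vgg13", "vgg13_bn",
    "vgg16", "vgg16_bn", "vgg19", "vgg19_bn",
]
assert "ssd_512_mobilenet1.0_custom" not in available
assert not any(name.startswith("ssd") for name in available)
```

So my guess is that the script should import `get_model` from `gluoncv.model_zoo` instead, since GluonCV's own zoo is what registers the SSD detectors (and accepts the `classes=` argument used above); I have not verified whether the rest of the TVM compilation then succeeds.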

As a side note, after this I will also need to deploy on Android a pose estimation model (simple_pose_resnet18_v1b) and an activity recognition model (i3d_nl10_resnet101_v1_kinetics400).
