My inquiry is exactly what you describe … That is, I would like to have a service file per model.
I have two different types of model:
- Classification
- Detection
I would like to use a different service file for each. So I do …
mxnet-model-export --model-name model1 --model-path <DIR-MODEL1> --service-file-path <DIR-MODEL1>/<Model1>.py
mxnet-model-export --model-name model2 --model-path <DIR-MODEL2> --service-file-path <DIR-MODEL2>/<Model2>.py
This generates the two model archives, each bundling its own model service file. In each service file's __init__ method, I included a simple print("Model1") or print("Model2") statement.
Now, I start the server:
mxnet-model-server --models model1=<MODEL1>.model model2=<MODEL2>.model --host <MyHost>
The models get registered as Flask endpoints. Looking through the startup messages, I see that only the __init__ of the first registered model's service is executed (only "Model1" is printed), i.e. that same __init__ method is run for both Model 1 and Model 2.
Later, when I call the endpoints via curl, I observe the following:
CURL MODEL 1 - OK
CURL MODEL 2 - Exception with ...
File "/usr/local/lib/python2.7/dist-packages/mms/serving_frontend.py", line 468, in predict_callback
response = modelservice.inference(input_data)
File "/usr/local/lib/python2.7/dist-packages/mms/model_service/model_service.py", line 105, in inference
data = self._postprocess(data)
File "/home/local/.../<MODEL1>.py", line 31, in _postprocess
That is, the service file for MODEL1 is hit.
My suspicion seems to be confirmed by this statement:
Note that if you supply a custom service for pre or post-processing, both models will use that same pipeline. There is currently no support for using different pipelines per-model.
Consequently, my question is whether and how I can deal with this scenario.
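One workaround I am considering, sketched below, is to merge both pipelines into a single service file and branch on the model name at inference time. The base class, constructor signature, and hook names here are assumptions modeled on the MMS custom-service examples (in practice the class would subclass MXNetBaseService, and the model name would come from whatever the server passes in), so this is only an illustration of the dispatch idea, not a drop-in implementation.

```python
# Sketch: one combined service file whose post-processing routes by
# model name. Hypothetical structure -- adapt the base class and
# constructor to your installed mms version.

class CombinedService(object):  # in practice: MXNetBaseService
    def __init__(self, model_name):
        # Assumed: the service instance knows which model it was
        # registered for (e.g. "model1" or "model2").
        self.model_name = model_name

    def _postprocess(self, data):
        # Route to the per-model pipeline.
        if self.model_name == "model1":
            return self._postprocess_classification(data)
        return self._postprocess_detection(data)

    def _postprocess_classification(self, data):
        # Classification-specific post-processing (placeholder).
        return {"type": "classification", "output": data}

    def _postprocess_detection(self, data):
        # Detection-specific post-processing (placeholder).
        return {"type": "detection", "output": data}
```

With this approach both --service-file-path arguments would point at the same file, sidestepping the one-pipeline limitation at the cost of a less modular service file.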