Inference in Scala is very slow after upgrading to MXNet v1.0

We have been running inference using the Module API in Scala on MXNet v0.8. Recently we upgraded to v1.0 and the application has slowed down terribly (6 hrs -> 15 hrs). Is any modification expected on the client side when using the inference APIs?

I believe it’s due to the removal of explicit dispose() calls in favor of GC-based cleanup, introduced in: Fixing thread safety issues in Scala library
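To illustrate the distinction: deferring native-memory release to the JVM garbage collector can badly stall a tight inference loop, whereas calling dispose() frees the native buffer deterministically. Below is a minimal, self-contained Scala sketch of that manual-disposal (loan) pattern; the names NativeBuffer and withDisposal are illustrative stand-ins, not MXNet APIs.

```scala
// Stand-in for an object backed by native memory, like an NDArray handle.
final class NativeBuffer(val id: Int) {
  private var disposed = false
  def isDisposed: Boolean = disposed
  // Frees the (simulated) native memory immediately instead of waiting
  // for the GC to reclaim the wrapper and run cleanup indirectly.
  def dispose(): Unit = disposed = true
}

object DisposalSketch {
  // Loan pattern: run `body`, then dispose the resource even on failure,
  // so native memory pressure never builds up across loop iterations.
  def withDisposal[A](buf: NativeBuffer)(body: NativeBuffer => A): A =
    try body(buf) finally buf.dispose()

  def main(args: Array[String]): Unit = {
    val buf = new NativeBuffer(1)
    val result = withDisposal(buf) { b => b.id * 2 }
    println(s"result=$result disposed=${buf.isDisposed}")
  }
}
```

In the real Scala package the equivalent step would be calling dispose() on intermediate NDArrays inside the inference loop rather than letting the GC reclaim them.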

I’m working on a “real” fix.