Object detection, set mx.ctx to GPU, but still using all CPU cores

Hi, I’m running an object detection routine on a server, and I set the script to run on the GPU using:

ctx = mx.gpu(0)

I’m loading the model and the data on the GPU:

net = gcv.model_zoo.get_model('ssd_512_resnet50_v1_custom', classes=classes, pretrained_base=False, ctx=ctx)

net.load_parameters(params_file, ctx=ctx)
frame = frame.as_in_context(ctx)

and I’m running:

class_ids, scores, boxes = net(frame)  # renamed from `classes` to avoid shadowing the class list passed to get_model

In nvidia-smi I can see that the selected GPU is at about 20% utilization, which seems reasonable. However, the object detection routine is still using 750% CPU (i.e. all 7 cores of the server).
Is this normal behaviour, or did I set something up wrong?
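For reference, one workaround I'm considering is capping the OpenMP thread pool before MXNet is imported. This is a minimal sketch, assuming MXNet's CPU-side operators respect the standard OMP_NUM_THREADS environment variable (it has to be set before the first `import mxnet` in the process):

```python
import os

# Cap the number of OpenMP threads available to CPU-side operators.
# Assumption: this line runs before `import mxnet` anywhere in the process,
# since the thread pool is sized when the library is loaded.
os.environ['OMP_NUM_THREADS'] = '1'

# import mxnet as mx  # must come only after the variable is set

print(os.environ['OMP_NUM_THREADS'])
```

Would that be the right way to limit CPU usage here, or is the high CPU load expected even with everything on the GPU?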