My code is quite simple: load a model, then run prediction on an image.
img_inputND = mx.nd.array(mxnet_img, ctx=mx.cpu(0))
nd_["Input"] = img_inputND
e_ = sym_.bind(mx.cpu(0), nd_)
out_ = e_.forward()
This works fine on CPU.
Attempt 1: input array on gpu(0), bind on cpu(0).
img_inputND = mx.nd.array(mxnet_img, ctx=mx.gpu(0))
nd_["Input"] = img_inputND
e_ = sym_.bind(mx.cpu(0), nd_)
out_ = e_.forward()
mxnet.base.MXNetError: [21:13:59] src/executor/…/common/exec_utils.h:516: Check failed: x == default_ctx Input array is in gpu(0) while binding with ctx=cpu(0). All arguments must be in global context (cpu(0)) unless group2ctx is specified for cross-device graph.
Attempt 2: input array on cpu(0), bind on gpu(0).
img_inputND = mx.nd.array(mxnet_img, ctx=mx.cpu(0))
nd_["Input"] = img_inputND
e_ = sym_.bind(mx.gpu(0), nd_)
out_ = e_.forward()
mxnet.base.MXNetError: [21:14:33] src/executor/…/common/exec_utils.h:516: Check failed: x == default_ctx Input array is in cpu(0) while binding with ctx=gpu(0). All arguments must be in global context (gpu(0)) unless group2ctx is specified for cross-device graph.
Attempt 3: here I changed ctx to gpu(0) in both places, but the error message still says one array is in the cpu context.
img_inputND = mx.nd.array(mxnet_img, ctx=mx.gpu(0))
nd_["Input"] = img_inputND
e_ = sym_.bind(mx.gpu(0), nd_)
out_ = e_.forward()
mxnet.base.MXNetError: [21:15:08] src/executor/…/common/exec_utils.h:516: Check failed: x == default_ctx Input array is in cpu(0) while binding with ctx=gpu(0). All arguments must be in global context (gpu(0)) unless group2ctx is specified for cross-device graph.
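My guess at what is happening (an assumption, since the full contents of nd_ aren't shown): the check at exec_utils.h:516 requires *every* array in the dict passed to bind() to be on the bind context, not only "Input". If nd_ also holds the model parameters loaded on CPU, moving only the input to gpu(0) would still trip the check. Below is a pure-Python sketch of that invariant using mock classes, not MXNet itself; the name conv0_weight is a hypothetical parameter name for illustration.

```python
# Pure-Python mock (NOT MXNet) of the invariant enforced at
# src/executor/.../common/exec_utils.h:516: every array handed to
# bind() must live on the bind (default) context.

class FakeNDArray:
    """Stand-in for mx.nd.NDArray: only tracks a device-context string."""
    def __init__(self, ctx):
        self.ctx = ctx

    def as_in_context(self, ctx):
        # Returns a copy on the target device (mirrors NDArray.as_in_context).
        return FakeNDArray(ctx)

def check_bind(default_ctx, args):
    """Mimics the failing check: raise if any arg is off the bind context."""
    for name, arr in args.items():
        if arr.ctx != default_ctx:
            raise ValueError(
                f"{name} is in {arr.ctx} while binding with ctx={default_ctx}")

# If nd_ also contains parameters loaded on CPU, moving only "Input"
# to gpu(0) still fails the check:
nd_ = {"Input": FakeNDArray("gpu(0)"),
       "conv0_weight": FakeNDArray("cpu(0)")}  # hypothetical param name
try:
    check_bind("gpu(0)", nd_)
except ValueError as e:
    print(e)

# Fix pattern: move every array in the dict to the bind context first.
nd_gpu = {k: v.as_in_context("gpu(0)") for k, v in nd_.items()}
check_bind("gpu(0)", nd_gpu)  # no exception now
```

If this is the cause, the MXNet-side fix would be the same dict comprehension with the real `NDArray.as_in_context(mx.gpu(0))` before calling `sym_.bind(mx.gpu(0), nd_)`.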
The nvidia-smi output looks fine. I'm on Ubuntu 16.04 with CUDA 9, and my MXNet version is 1.3.1 (GPU build).
mx.test_utils.list_gpus() returns [0]
The GPU itself is fine: if I load the model a different way (load_checkpoint, or through Gluon), prediction on the GPU works.