Not implemented for use with GPUs

Thought I’d give Gluon a whirl: downloaded it, started up Jupyter, and partway through the tutorial I get this:

MXNetError: [10:37:33] src/imperative/ Operator _ones is not implemented for GPU.

Came from executing this:

z = nd.ones(shape=(3, 3), ctx=mx.gpu(0))

I just re-installed MXNet, with the same results. It seems odd to me that a tool written expressly for use with GPUs is not implemented for GPU usage.

Any suggestions??


Did you install the GPU version of MXNet? ‘pip install mxnet’ will give you the CPU version. To get the GPU version, use ‘pip install mxnet-cu90 --pre’.
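One way to check which build actually ended up installed is to query pip’s package metadata from Python (a minimal sketch, assuming Python 3.8+ for importlib.metadata; the package names follow the usual MXNet wheel naming):

```python
# Sketch: list which MXNet pip packages are present. The CPU build
# ("mxnet") and the GPU builds ("mxnet-cu90", "mxnet-cu100", ...) are
# separate packages, so a leftover CPU install can shadow a GPU one.
from importlib import metadata

def installed_mxnet_flavors(candidates=("mxnet", "mxnet-cu90", "mxnet-cu100")):
    """Return {package: version} for whichever MXNet builds pip knows about."""
    found = {}
    for pkg in candidates:
        try:
            found[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            pass  # this flavor is not installed
    return found

print(installed_mxnet_flavors())
```

If both ‘mxnet’ and ‘mxnet-cu90’ show up, uninstall the CPU build first (‘pip uninstall mxnet’) before reinstalling the GPU one, so the import resolves to the GPU build.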

Both times. That was my first thought as well. I thought “I must’ve installed the wrong version”, so I reinstalled. Same results, though.

Which version of MXNet are you using? Can you copy-paste the output of ‘pip show mxnet-cu90’?

The following sequence works for me on Google Colab (Jupyter notebook):

!nvcc --version
# Run on a non-GPU instance first.


nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130

Then (on a GPU accelerated instance):

!pip install mxnet-cu100
# Must install the mxnet version matching the CUDA version above.
import mxnet as mx
# Test that the GPU works.
a = mx.nd.ones((2, 3), mx.gpu())
b = a * 2 + 1
b  # displaying b as the last expression produces the output below

[[3. 3. 3.]
 [3. 3. 3.]]
<NDArray 2x3 @gpu(0)>
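The version-matching rule in the comment above (mxnet-cu100 for CUDA 10.0, mxnet-cu90 for CUDA 9.0) is just the CUDA release with the dot dropped. A minimal sketch of that mapping, assuming the wheel-naming convention holds for your CUDA version:

```python
# Sketch of the MXNet wheel-naming convention: the GPU wheel suffix is
# the CUDA release reported by `nvcc --version` with the dot removed.
def mxnet_wheel_for_cuda(cuda_release=None):
    if cuda_release is None:
        return "mxnet"  # no CUDA toolkit: fall back to the CPU build
    return "mxnet-cu" + cuda_release.replace(".", "")

print(mxnet_wheel_for_cuda("10.0"))  # -> mxnet-cu100
print(mxnet_wheel_for_cuda("9.0"))   # -> mxnet-cu90
print(mxnet_wheel_for_cuda())        # -> mxnet
```

Installing a wheel built for a different CUDA release than the one on the machine is another common way to end up with operators that fail at runtime.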