Homework 1.3: MXNet GPU

I can’t install MXNet for GPU. I have SSHed into an AWS GPU instance and tried installing both CUDA 10 and CUDA 9, but when I run with mxnet-cu90 it does not work. Does anyone know how to install the GPU version of MXNet?


I’m having the same issue. I think it is highly unreasonable for the instructors to expect us to complete this homework with the confusing and vague installation guidelines we’ve been given and the inadequate instruction we’ve had on these topics :confused:

For AWS instances pre-installed with CUDA 10.x, you need to install mxnet-cu100 instead of mxnet-cu90.
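In other words, the pip package has to match the CUDA toolkit on the instance. A minimal sketch of how to pick the right wheel (assumes `nvcc` is on the PATH; if it isn’t, this falls back to the CPU build):

```shell
# Detect the installed CUDA toolkit version and pick the matching MXNet wheel.
CUDA_VERSION=$(nvcc --version 2>/dev/null | grep -o 'release [0-9.]*' | grep -o '[0-9.]*')
case "$CUDA_VERSION" in
  10.0*) PKG=mxnet-cu100 ;;   # CUDA 10.0 -> mxnet-cu100
  9.2*)  PKG=mxnet-cu92  ;;   # CUDA 9.2  -> mxnet-cu92
  9.0*)  PKG=mxnet-cu90  ;;   # CUDA 9.0  -> mxnet-cu90
  *)     PKG=mxnet       ;;   # no (or unrecognized) CUDA toolkit: CPU build
esac
echo "pip install $PKG"       # run this command (inside your conda env)
```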

Alternatively, you can follow the instructions at

You can use a clean AWS instance, install CUDA 9.0 yourself, and then use mxnet-cu90.

I installed mxnet-cu100 and updated environment.yml to use pip: mxnet-cu100.
However, I get the following error when I try to import mxnet.

```
>>> import mxnet
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ubuntu/miniconda3/envs/gluon/lib/python3.6/site-packages/mxnet/__init__.py", line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/home/ubuntu/miniconda3/envs/gluon/lib/python3.6/site-packages/mxnet/context.py", line 24, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File "/home/ubuntu/miniconda3/envs/gluon/lib/python3.6/site-packages/mxnet/base.py", line 213, in <module>
    _LIB = _load_lib()
  File "/home/ubuntu/miniconda3/envs/gluon/lib/python3.6/site-packages/mxnet/base.py", line 204, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/home/ubuntu/miniconda3/envs/gluon/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcudart.so.10.0: cannot open shared object file: No such file or directory
```
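That OSError means the dynamic loader can’t find the CUDA 10.0 runtime library, even though the mxnet-cu100 wheel expects it. A sketch of the usual fix, assuming the toolkit is installed under the standard path /usr/local/cuda-10.0 (add these lines to ~/.bashrc to make them persistent):

```shell
# Tell the dynamic loader (and the shell) where the CUDA 10.0 toolkit lives.
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export PATH=/usr/local/cuda-10.0/bin:$PATH
echo "$LD_LIBRARY_PATH"   # should now start with /usr/local/cuda-10.0/lib64
```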

Can you just follow the instructions at:

and use mxnet-cu90 after installing CUDA 9.0?
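Whichever CUDA version you pick, it’s worth checking that the runtime library the wheel expects is actually visible to the loader before importing mxnet (mxnet-cu90 needs libcudart.so.9.0, mxnet-cu100 needs libcudart.so.10.0). A quick sanity check:

```shell
# List the libcudart versions the dynamic loader can currently see.
ldconfig -p | grep libcudart || echo "no libcudart on the loader path"
```

If your version isn’t listed, fix LD_LIBRARY_PATH (or run `sudo ldconfig` after installing the toolkit) before reinstalling MXNet.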

Similar question here. After we complete all the installations, should we upload homework1.ipynb to the GPU instance to get the output for !nvidia-smi? Is there a way to keep the file local but still get the output from !nvidia-smi?

We have so far provided a number of ways for getting this done:

  1. On your own GPU enabled machine
  2. Using Colab (it’s literally two lines of code that you need to add to the notebook)
  3. Using the DL AMI as per the slides (they’re tested for CUDA 9.2).

Beyond that, Aston is putting up detailed instructions for CUDA 10.0, which is still quite new and thus not supported by default everywhere. Apologies if you’re having trouble. We will post an update shortly.

Please have a look at the detailed walkthrough (tested as of this afternoon):

You need to take care of the /usr/local/cuda link to point to /usr/local/cuda-10.0. Otherwise MXNet won’t know where to find the right version of CUDA.
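A sketch of how that symlink fix usually looks (assumes the toolkit is installed under /usr/local/cuda-10.0; CUDA_ROOT is /usr/local on a standard install and is only a variable here so the commands are easy to adapt):

```shell
# Repoint the default CUDA symlink at the 10.0 toolkit so MXNet finds it.
CUDA_ROOT=${CUDA_ROOT:-/usr/local}
ln -sfn "$CUDA_ROOT/cuda-10.0" "$CUDA_ROOT/cuda"   # -n: replace an existing link
readlink "$CUDA_ROOT/cuda"                         # should print .../cuda-10.0
```

On the AWS instance you will likely need `sudo` in front of the `ln` command.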
