Model parallelism and the Gluon API framework

Here: https://gluon.mxnet.io/chapter07_distributed-learning/multiple-gpus-gluon.html#Initialize-on-multiple-devices it says: "Gluon supports initialization of network parameters over multiple devices. We accomplish this by passing in an array of device contexts, instead of the single contexts we've used in earlier notebooks."

This sounds like model parallelism, if I am not wrong.

Could someone please clarify?

I guess you are looking for this: Training with Multiple GPUs Using Model Parallelism.
It is the official tutorial from MXNet.

Yes, I have been through that as well,
but I was confused by the language of the description in the aforementioned link.

Is that model parallelism or data parallelism?
Because it talks about sharing parameters across devices.

Model parallelism assumes that your whole model doesn't fit into one particular device, and what actually takes up the memory are the parameters of the layers. So talking about model parallelism usually means storing the parameters of different layers on different devices.
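
What the passage you quoted describes is different: passing a list of contexts to initialize() puts a full replica of the parameters on every device, which is the data-parallel setup that tutorial builds up to. A minimal sketch of that pattern, assuming two GPUs (the network and shapes are made up for illustration):

```python
import mxnet as mx
from mxnet import gluon, nd

ctx_list = [mx.gpu(0), mx.gpu(1)]  # assumed devices

net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(128, activation='relu'))
    net.add(gluon.nn.Dense(10))

# One call puts a full copy of every parameter on each context.
net.collect_params().initialize(mx.init.Xavier(), ctx=ctx_list)

# Each device then works on its own slice of the batch (data parallelism).
data = nd.random.uniform(shape=(8, 64))
shards = gluon.utils.split_and_load(data, ctx_list)
outputs = [net(shard) for shard in shards]
```

So the parameters are replicated across devices, not partitioned; it is the data that gets split.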

The link that Coffeered provided actually shows how to do that using the Symbol API. If you want to do it in Gluon, you can use https://stackoverflow.com/questions/47029809/simple-example-of-mxnet-model-parallelism as an example. It is not an official example, but it works.
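
Roughly, the Gluon version boils down to something like this (a sketch in the spirit of that Stack Overflow answer, not the official example; the two-GPU layout and layer sizes are made up):

```python
import mxnet as mx
from mxnet import gluon, nd

ctx0, ctx1 = mx.gpu(0), mx.gpu(1)  # assumed devices

# Model parallelism: each part of the model lives on a different device.
part1 = gluon.nn.Dense(256, activation='relu')
part2 = gluon.nn.Dense(10)
part1.initialize(ctx=ctx0)   # parameters of the first layer on GPU 0
part2.initialize(ctx=ctx1)   # parameters of the second layer on GPU 1

x = nd.random.uniform(shape=(8, 64), ctx=ctx0)
h = part1(x)                 # computed on GPU 0
h = h.as_in_context(ctx1)    # copy activations across devices
out = part2(h)               # computed on GPU 1
```

The cross-device copy is a differentiable operation, so autograd can backpropagate through it during training.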
