Resolving Check failed: axis < ndim && axis >= -ndim axis 1 exceeds the input dimension of 1

My code is below:

train_iter =, label = train_y, batch_size = 128, shuffle = True)

inputs = sym.Variable('data')
targets = sym.Variable('softmax_label')

w1 = sym.Variable('w1', shape = (784, 256), init = mx.init.Xavier())
b1 = sym.Variable('b1', shape = (256,), init = mx.init.Zero())

w2 = sym.Variable('w2', shape = (256, 128), init = mx.init.Xavier())
b2 = sym.Variable('b2', shape = (128,), init = mx.init.Zero())

w3 = sym.Variable('w3', shape = (128, 10), init = mx.init.Xavier())
b3 = sym.Variable('b3', shape = (10,), init = mx.init.Zero())

layer1 = sym.relu(sym.broadcast_add(, w1), b1))
layer2 = sym.relu(sym.broadcast_add(, w2), b2))
predictions = sym.softmax(sym.broadcast_add(, w3), b3))
cost = sym.MakeLoss(sym.sum(-targets * sym.log(predictions)))

model = mx.mod.Module(cost), optimizer = 'adam', optimizer_params = {'learning_rate': 0.1}, num_epoch = 10)

When I run it, it throws this error:
Check failed: axis < ndim && axis >= -ndim axis 1 exceeds the input dimension of 1

The shapes of train_x and train_y (which are used to create the iterator) are (60000, 784) and (60000, 10) respectively.
Please help; I have been trying different methods but none of them worked.

Complete Error is below

MXNetError Traceback (most recent call last)
2 optimizer='adam',
3 optimizer_params={'learning_rate':0.1},
----> 4 num_epoch=10)

c:\users\rishik\appdata\local\programs\python\python36\lib\site-packages\mxnet\module\ in fit(self, train_data, eval_data, eval_metric, epoch_end_callback, batch_end_callback, kvstore, optimizer, optimizer_params, eval_end_callback, eval_batch_end_callback, initializer, arg_params, aux_params, allow_missing, force_rebind, force_init, begin_epoch, num_epoch, validation_metric, monitor, sparse_row_id_fn)
531 pre_sliced=True)
532 else:
--> 533 self.update_metric(eval_metric, data_batch.label)
535 try:

c:\users\rishik\appdata\local\programs\python\python36\lib\site-packages\mxnet\module\ in update_metric(self, eval_metric, labels, pre_sliced)
771 Whether the labels are already sliced per device (default: False).
772 """
--> 773 self._exec_group.update_metric(eval_metric, labels, pre_sliced)
775 def _sync_params_from_devices(self):

c:\users\rishik\appdata\local\programs\python\python36\lib\site-packages\mxnet\module\ in update_metric(self, eval_metric, labels, pre_sliced)
637 labels_ = OrderedDict(zip(self.label_names, labels_slice))
638 preds = OrderedDict(zip(self.output_names, texec.outputs))
--> 639 eval_metric.update_dict(labels_, preds)
641 def _bind_ith_exec(self, i, data_shapes, label_shapes, shared_group):

c:\users\rishik\appdata\local\programs\python\python36\lib\site-packages\mxnet\ in update_dict(self, label, pred)
130 label = list(label.values())
--> 132 self.update(label, pred)
134 def update(self, labels, preds):

c:\users\rishik\appdata\local\programs\python\python36\lib\site-packages\mxnet\ in update(self, labels, preds)
416 if pred_label.shape != label.shape:
417 pred_label = ndarray.argmax(pred_label, axis=self.axis)
--> 418 pred_label = pred_label.asnumpy().astype('int32')
419 label = label.asnumpy().astype('int32')
420 # flatten before checking shapes to avoid shape miss match

c:\users\rishik\appdata\local\programs\python\python36\lib\site-packages\mxnet\ndarray\ in asnumpy(self)
1970 self.handle,
1971 data.ctypes.data_as(ctypes.c_void_p),
-> 1972 ctypes.c_size_t(data.size)))
1973 return data

c:\users\rishik\appdata\local\programs\python\python36\lib\site-packages\mxnet\ in check_call(ret)
249 """
250 if ret != 0:
--> 251 raise MXNetError(py_str(_LIB.MXGetLastError()))

MXNetError: [11:59:39] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\nn…/tensor/broadcast_reduce_op.h:151: Check failed: axis < ndim && axis >= -ndim axis 1 exceeds the input dimension of 1

You’ve got an issue with your input to MakeLoss: it doesn’t have enough axes. After passing through the sum you have a single value. I spotted a few other issues in your code too, so the following code should step through everything.

Any particular reason why you’re using Module API here? You’ll find debugging a million times easier with Gluon API! :slight_smile:

import mxnet as mx

train_x = mx.nd.random.uniform(shape=(60000, 784))
train_y = mx.nd.random.randint(shape=(60000,), low=0, high=10)
train_y = mx.nd.one_hot(train_y, depth=10)
train_iter =, label = train_y, batch_size = 128, shuffle = True)

inputs = mx.sym.Variable('data')
label = mx.sym.Variable('softmax_label')

w1 = mx.sym.Variable('w1', shape = (784, 10), init = mx.init.Xavier())
b1 = mx.sym.Variable('b1', shape = (10, ), init = mx.init.Zero())
layer1 = mx.sym.relu(mx.sym.broadcast_add(, w1), b1))

sm = mx.sym.softmax(layer1)
ce = -(label * sm.log() + (1 - label) * (1 - sm).log())  # negate: MakeLoss minimizes its input
loss = mx.sym.MakeLoss(ce)

mod = mx.mod.Module(symbol=loss, data_names=['data'], label_names=['softmax_label'])

Thanks a lot. I’ve solved my issue: the problem is that you have to pass a loss value for each example in the batch when declaring MakeLoss, rather than a single summed scalar. That was the main problem.

Glad you sorted it :slight_smile: So why Module API?

Actually I was learning how to use the Module API, so it seemed nice to implement something in it from scratch and test it out. That’s why.
I also prefer the Module API a bit over Gluon when debugging or prototyping isn’t the goal, because I find it more practical for saving and restoring my model. That’s just personal preference, though.

Okay, thanks. I was just interested to hear your point of view, because in my opinion Gluon is much easier to work with and debug. I’m guessing a Gluon fit function would be useful?

Yeah, that’s a nice feature request. I think there is a good intuitive reason why we don’t need a .fit function in Gluon: Gluon is supposed to be more flexible and open for prototyping, and when you write your own training loop using net.forward() and backward() you can do a lot in each iteration. That’s not the case with a .fit method. But I agree that having .fit as an option would be great.