Check failed: axis < ndim && axis >= -ndim axis 1 exceeds the input dimension of 1

When I use `mx.sym.sum()` as the input of `mx.sym.MakeLoss()`, it throws this error.

I don't know what causes it. Do you know?

Do you have sample code that reproduces the problem?

```python
data = mx.symbol.Variable('data')
label = mx.symbol.Variable('softmax_label')

conv1 = mx.symbol.Convolution(data, kernel=(5, 5), stride=(1, 1), pad=(2, 2), num_filter=32)
fc1 = mx.symbol.Activation(conv1, act_type='relu')
pool1 = mx.symbol.Pooling(fc1, kernel=(2, 2), pool_type='max', stride=(2, 2))

conv2 = mx.symbol.Convolution(pool1, kernel=(5, 5), stride=(1, 1), pad=(2, 2), num_filter=64)
fc2 = mx.symbol.Activation(conv2, act_type='relu')
pool2 = mx.symbol.Pooling(fc2, kernel=(2, 2), pool_type='max', stride=(2, 2))

fc3 = mx.sym.Reshape(pool2, shape=(-1, 7764))
fc4 = mx.symbol.FullyConnected(fc3, num_hidden=1024)
fc5 = mx.symbol.Activation(fc4, act_type='relu')
drop1 = mx.symbol.Dropout(fc5, p=0.5)
fc6 = mx.symbol.FullyConnected(drop1, num_hidden=10)
fc7 = mx.symbol.softmax(fc6)

out = mx.symbol.MakeLoss(-mx.symbol.sum(label * mx.symbol.log(fc7)))
```

I use the MNIST dataset with data shape (-1, 1, 28, 28) and label shape (-1, 10).

After I add `MakeLoss`, running the model crashes Python and kills the kernel.

It behaves the same on both Ubuntu and Windows, and the version is MXNet 1.1.0.

Thanks for your reply.

`MakeLoss` expects a vector as input (one loss value for each example in the batch), so you should sum along axis 1. Please pass `axis=1` as a parameter to the `sum` operator.
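The shape difference matters here: summing over all axes collapses the whole batch to a scalar, while summing only along axis 1 keeps one loss per example. A minimal NumPy sketch (using plain arrays in place of MXNet symbols) to illustrate:

```python
import numpy as np

# Per-example cross-entropy for a batch of 4 examples and 10 classes.
probs = np.full((4, 10), 0.1)       # uniform "softmax output", shape (4, 10)
labels = np.eye(10)[[3, 1, 4, 1]]   # one-hot labels, shape (4, 10)

# Summing over ALL axes collapses the batch into a single scalar:
scalar_loss = -np.sum(labels * np.log(probs))            # shape ()

# Summing only along axis 1 keeps one loss value per example,
# which is the vector shape that MakeLoss expects:
per_example = -np.sum(labels * np.log(probs), axis=1)    # shape (4,)

print(scalar_loss.shape, per_example.shape)
```

With uniform probabilities every per-example loss is `-log(0.1)`; the point is that only the `axis=1` version preserves the batch dimension.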

I'm sorry to say that after doing so, it behaves the same as with no `axis=1`. I just want something like `mx.sym.softmax_cross_entropy()`, but when I use it, it says there are not enough parameters to call it. I saw this reported as a bug on GitHub and it was marked fixed, but I still can't use it in the latest version.

Just remove `mx.symbol.sum` from `out`, then it will work.

Change

```python
out = mx.symbol.MakeLoss(-mx.symbol.sum(label * mx.symbol.log(fc7)))
```

to

```python
out = mx.symbol.MakeLoss(-label * mx.symbol.log(fc7))
```

Actually, the thing is that `MakeLoss` takes a loss for each of the examples in our training data rather than a sum over the whole batch. It is a bit silly that MXNet doesn't let us do otherwise. One thing you can do to calculate the training loss later is:

```python
loss = mx.symbol.MakeLoss(-label * mx.symbol.log(fc7))
```

and

```python
cost = mx.symbol.mean(loss)
```

then minimize `loss` using the module's `forward` and `backward` methods and evaluate `cost` separately.
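The split between the per-example `loss` (what gets minimized) and the scalar `cost` (what gets monitored) can be sketched in plain NumPy, with randomly generated logits and labels standing in for the network's output:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 10))                 # batch of 8, 10 classes
labels = np.eye(10)[rng.integers(0, 10, size=8)]  # one-hot labels

# Numerically stable softmax: subtract the per-row max before exponentiating.
shifted = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

# loss: one value per training example (the shape MakeLoss is given above)
loss = -np.sum(labels * np.log(probs), axis=1)    # shape (8,)

# cost: a single scalar, useful for monitoring training progress
cost = loss.mean()

print(loss.shape, float(cost))
```

The optimizer only ever sees the per-example vector; the mean is computed on the side purely for reporting.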

Or, if you are using the module's `fit` method, you can do something like this:

```python
model.fit(train_iter,
          eval_data=eval_iter,
          optimizer='adam',
          optimizer_params={'learning_rate': 0.001},
          eval_metric='mse',  # <== this line shows you the overall loss (cost) on the training data
          num_epoch=10)
```

Oh, so I guess you fixed your issue, @mouryarishik, for resolving "Check failed: axis < ndim && axis >= -ndim axis 1 exceeds the input dimension of 1"?

Yeah, I find it a bit silly that MXNet does not allow us to do so.

Error in operator concat12: [11:05:22] src/operator/nn/mkldnn/../../tensor/broadcast_reduce_op.h:172: Check failed: axis < ndim && axis >= -ndim: axis 3 exceeds the input dimension of 3.

What I did was just expand the dimension of the input; when I do not expand the dimension, it works. Does anyone have a similar issue where we cannot use a bigger axis than the input allows in MXNet? I think it is silly…

Hi @Erik

Each operator accepts different types of input: some accept only 2-dimensional inputs, some only 3-dimensional ones, some 2-, 3-, 4-, or 5-dimensional inputs, and some n-dimensional inputs with n > 1. Each operator describes its expected input size in its documentation. Can you share your code?
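The general pattern of "axis k exceeds the input dimension of k" and the expand-the-dimension fix can be shown with NumPy, where `np.expand_dims` plays the same role as MXNet's `expand_dims` operator (a sketch of the concept, not of the concat12 graph above):

```python
import numpy as np

x = np.ones((2, 3, 4))  # a 3-D input: valid axes are 0, 1, 2

# Reducing along axis 3 is invalid for a 3-D array -- the NumPy
# analogue of "axis 3 exceeds the input dimension of 3":
try:
    np.sum(x, axis=3)
except IndexError as e:  # numpy's AxisError subclasses IndexError
    print("reduce failed:", e)

# Adding a trailing axis first makes the array 4-D, so axis 3
# becomes a legal axis to operate on:
x4 = np.expand_dims(x, axis=3)   # shape (2, 3, 4, 1)
print(np.sum(x4, axis=3).shape)  # (2, 3, 4)
```

So when an operator demands a higher-rank input than you have, inserting a size-1 axis first is usually the intended path rather than a workaround.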